find extreme values of $\frac{2x}{x^2+4}$ I am doing my homework like a good little boy. I know that when I want to find the extreme values of a function I have to put the derivative equal to zero so I can find the x values. I've done it before with easier functions where I only have to use the power rule. But with the function $\dfrac{2x}{x^2+4}$ I think I have to use the quotient rule: $f'(x)= \dfrac {2x^2+4-(2x)^2} {(x^2+4)^2} \implies f'(x)= \dfrac{-2x^2+4}{(x^2+4)^2}$ Is this correct? If it is, I think the following step would be $\dfrac{-2x^2+4}{x^4+8x^2+16}=0$ I don't know how I can find the solutions to this... Any tips?
Here is an algebraic way without using calculus: $$\text{Let }y=\frac{2x}{x^2+4}\iff x^2y-2x+4y=0$$ which is a quadratic equation in $x$. As $x$ is real, the discriminant of the above equation must be $\ge0$, i.e., $(-2)^2\ge 4\cdot y\cdot 4y\iff y^2\le \frac14$. We know that, for $a\ge0$, $x^2\le a^2\iff -a\le x\le a$. Alternatively, let $x=2\tan\theta$, which is legal as $x,\tan\theta$ can assume any real value: $$\implies\frac{2x}{x^2+4}=\frac{2\cdot 2\tan\theta }{(2\tan\theta)^2+4}=\frac{4\tan\theta }{4(1+\tan^2\theta)}=\frac{\sin2\theta}2$$ Now, we know the range of $\sin2\theta$ for real $\theta$.
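Both routes give $y^2\le\frac14$, i.e. $-\frac12\le y\le\frac12$, with the extremes $\pm\frac12$ attained at $x=\pm2$. A quick numerical sanity check, as a minimal Python sketch:

```python
import numpy as np

x = np.linspace(-100, 100, 2_000_001)
y = 2 * x / (x**2 + 4)
print(y.max(), y.min())  # ~0.5 and ~-0.5, attained near x = 2 and x = -2
```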
{ "language": "en", "url": "https://math.stackexchange.com/questions/472169", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
math fallacy problem: $-1= (-1)^3 = (-1)^{6/2} = \sqrt{(-1)^6}= 1$? I know there is something wrong with this but I don't know where. It's some kind of a math fallacy and it is driving me crazy. Here it is: $$-1= (-1)^3 = (-1)^{6/2} = \sqrt{(-1)^6}= 1?$$
The problem here is that the square root function, $\sqrt{\,\cdot\,}=(\cdot)^\frac{1}{2}$, is not a single-valued function. As PVAL says, it is a two-valued function, meaning you have to consistently choose which square root you're talking about. That's why you often will have problems with chains of equalities like the one above.
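To see concretely where the chain of equalities breaks, here is a minimal Python sketch using the principal (real, nonnegative) square root:

```python
print((-1)**3)           # -1
print(((-1)**6)**(1/2))  # 1.0
# The step (-1)^(6/2) = sqrt((-1)^6) silently switches branches: the identity
# a^(b/c) = (a^b)^(1/c) is only safe for a >= 0 with the principal root.
```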
{ "language": "en", "url": "https://math.stackexchange.com/questions/472227", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "95", "answer_count": 15, "answer_id": 0 }
Separating 18 people into 5 teams A teacher wants to divide her class of 18 students into 5 teams to work on projects, with two teams of 3 students each and three teams of 4 students each. a) In how many ways can she do this, if the teams are not numbered? b) What is the probability that two of the students, Mia and Max, will be on the same team? [This is not a homework problem.]
The answer to the first question is just a multinomial coefficient divided by $2! \cdot 3!$ to make up for the fact that the teams are not numbered and we have therefore counted each partition into teams multiple times: $${\binom{18}{3, 3 ,4,4,4}}/(2!\cdot 3!) = \frac{18!}{(3!)^2 \cdot (4!)^3 \cdot (2! \cdot 3!)}=1{,}072{,}071{,}000$$ The second question: the probability that Max is on a 4-person team is $12/18$, and the probability that he is on a 3-person team is $6/18$. So the chance that Mia is on the same team is $12/18 \cdot 3/17 + 6/18 \cdot 2/17 = 8/51$
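Both numbers are easy to confirm with exact integer arithmetic; a minimal Python sketch:

```python
from math import factorial
from fractions import Fraction

ways = factorial(18) // (factorial(3)**2 * factorial(4)**3 * factorial(2) * factorial(3))
print(ways)  # 1072071000

p = Fraction(12, 18) * Fraction(3, 17) + Fraction(6, 18) * Fraction(2, 17)
print(p)     # 8/51
```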
{ "language": "en", "url": "https://math.stackexchange.com/questions/472286", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 1 }
Interpreting a singular value in a specific problem In a similar spirit to this post, I pose the following: Contextual Problem A PhD student in Applied Mathematics is defending his dissertation and needs to make a 10-liter keg consisting of vodka and beer to placate his thesis committee. Suppose that all committee members, being stubborn people, refuse to sign his dissertation paperwork until the next day. Since all committee members will be driving home immediately after his defense, he wants to make sure that they all drive home safely. To do so, he must ensure that his mixture doesn't contain too much alcohol in it! Therefore, his goal is to make a 10 liter mixture of vodka and beer such that the total alcohol content of the mixture is only $12$ percent. Suppose that beer has $8\%$ alcohol while vodka has $40\%$. If $x$ is the volume of beer and $y$ is the volume of vodka needed, then clearly the system of equations is \begin{equation} x+y=10 \\ 0.08 x +0.4 y = 0.12\times 10 \end{equation} My Question The singular value decomposition of the corresponding matrix \begin{equation} A=\left[ \begin{array}{cc} 1 & 1\\ 0.08 & 0.4 \end{array} \right] \end{equation} is $$A=U\Sigma V^T$$ with \begin{equation} U=\left[ \begin{array}{cc} -0.9711 & -0.2388\\ -0.2388 & 0.9711 \end{array} \right] \end{equation} \begin{equation} \Sigma=\left[ \begin{array}{cc} 1.4554 & 0\\ 0 & 0.2199 \end{array} \right] \end{equation} \begin{equation} V=\left[ \begin{array}{cc} -0.6804 &-0.7329\\ -0.7329 & 0.6804 \end{array} \right] \end{equation} How do I interpret the physical meaning of the singular values and the columns of the two unitary matrices in the context of this particular problem? That is, what insight do these quantities give me about the nature of the problem itself or perturbations thereof?
I think it is easiest to interpret this when we don't think of the vector spaces in question as $\mathbb{R}^2$, but rather, think of $A$ as a linear map from the vector space $V = (\text{beer}, \text{vodka})$ to the vector space $W = (\text{volume}, \text{alcohol content})$. This map takes the drinks you have, and spits out their corresponding total liquid volume and alcohol content when mixed together. Now, consider the unit circle in $V$; this is the set of all alcohol drink pairs whose "radius" (which is different from the total liquid volume) is equal to $1$. Now consider the set of all points in $W$ corresponding to the (total liquid volume, alcohol content) pairs: it is an ellipse. [Figure omitted: an ellipse illustrating this image set; its labels and angle were only a visualization aid.] Then the first left singular vector $u_1$ corresponds to the direction in the $(\text{volume}, \text{alcohol content})$ space that is most sensitive to some change in the $(\text{beer}, \text{vodka})$ space; the amount that it changes by (as a ratio of magnitudes) is given by the singular value $\sigma_1$, and the direction in the $(\text{beer}, \text{vodka})$ space corresponding to this change is the right singular vector $v_1$. So if you want to use as little beer and vodka as possible (measured in the Euclidean norm rather than the "volume" norm) to effect the greatest change in the corresponding volume and alcohol content of the mixed result, you should add/remove an amount of beer+vodka from your mixture in relative proportions corresponding to the right singular vector $v_1$; in our case, this means adding/removing beer/vodka in a ratio of $0.6804:0.7329$ (the signs don't matter since you can always flip them). To interpret the second left singular vector $u_2$, we can refer to the diagram again: it corresponds to the next largest change possible in the $(\text{volume}, \text{alcohol content})$ space that is orthogonal to $u_1$. This change has sensitivity given by $\sigma_2$ and is caused by adding/removing beer/vodka in relative proportions given by the second right singular vector $v_2$.
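For completeness, a short numpy sketch reproducing the decomposition quoted in the question (the SVD is unique only up to simultaneous sign flips of singular-vector pairs, so signs may differ from the post):

```python
import numpy as np

A = np.array([[1.0, 1.0],
              [0.08, 0.4]])
U, s, Vt = np.linalg.svd(A)
print(s)      # [1.4554..., 0.2199...]
print(U)      # columns u1, u2
print(Vt.T)   # columns v1, v2
```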
{ "language": "en", "url": "https://math.stackexchange.com/questions/472365", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 1, "answer_id": 0 }
Subgraphs of Complete graphs I have been studying a little graph theory on my own and a simple google search has not helped, so I am deciding to turn to math stack exchange. My question is: Given a complete graph $K_{n}$ where $n\ge 2$, how many non-isomorphic subgraphs occupy it? My initial thought was that the formula was going to involve powers of 2, since at first glance it seems as if we are counting subsets. Hence the power set formula: given a set $A$ with $\alpha$ elements there exist $2^\alpha$ subsets of $A$. So I took a look at a couple of cases. For $K_{2}$, there are 2 non-isomorphic subgraphs: an edge connecting the two vertices and a set of two vertices not connected by an edge. So I thought to myself: "Ok, the formula is $2^{n-1}$," since I was able to deduce on my own earlier that a complete graph on $n$ vertices $K_{n}$ has $\frac{n\cdot(n-1)}{2}$ edges, or the summation over the other $n-1$ vertices. So I thought, maybe for $K_{n}$ we are counting in a similar fashion since our lower bound is at $n\ge2$. For $K_{3}$ I found that there were 4 non-isomorphic subgraphs. $2^{3-1} = 4$. So far so good. Next $K_{4}$: if my hypothesis was correct I would have 8 non-isomorphic subgraphs. Turns out I was only able to find 12. Looking back, we might see that the number of non-isomorphic subgraphs might be more related to the number of edges rather than the number of vertices of the graph. However for $K_{4}$ with 6 edges this would lead to $2^{6-1}=32$ subgraphs, which is an overcount. Can someone give me a lax proof of what the actual formula is? For example, the pattern that I noticed with the number of edges on a complete graph can be described as follows: Given a complete graph $K_{n}$ with vertices $\{X_{1},X_{2}, X_{3},\ldots,X_{n}\}$ we may arrange the vertices as a locus around an imaginary center point and form $n$ edges to form a regular $n$-gon. We then begin to connect the rest of the vertices in a circular manner. Since $X_{i}$ has already formed two edges, there are $n-3$ edges going from $X_{i}$. Next, adjacent to $X_{i}$ the vertex $X_{i+1}$ has not been affected by the new edges formed in the last step, so there are another $n-3$ edges made. The next vertex $X_{i+2}$, however, has had an edge drawn from $X_{i}$ two steps ago, so there are $n-4$ new edges formed. This process continues until we get to $X_{i+(n-2)}$. Simply put, the steps to make this complete graph leave us with the following sum of edges: $n + (n-3) + (n-3) + \ldots + 1$ = $n + (n-3) + \frac{(n-2)\cdot(n-3)}{2}$ = $n + \frac{2(n-3)+(n-2)\cdot(n-3) }{2}$ = $n + \frac{(n)\cdot(n-3)}{2}$ = $\frac{2n+(n)\cdot(n-3)}{2}$ = $\frac{n(n-1)}{2}$. Therefore a complete graph $K_{n}$ on $n$ vertices has $\frac{n(n-1)}{2}$ edges. An explanation of that sort would be lovely. Thanks!
The number of subgraphs (including the isomorphic subgraphs and the disconnected subgraphs) of a complete graph (with $n\ge3$) is $$ \sum_{k=1}^n {n \choose k} ( 2^{k \choose 2} ) $$ I found it in Grimaldi, R. P. (2003) Discrete and Combinatorial Mathematics (5th ed.), Pearson, Chap. 11, solutions book. I'm trying to prove it. Cheers,
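A minimal Python sketch evaluating this formula for small $n$ (it counts labeled subgraphs on nonempty vertex subsets, including isomorphic and disconnected ones, as stated):

```python
from math import comb

def count_subgraphs(n):
    # choose k of the n vertices, then any subset of the C(k,2) possible edges
    return sum(comb(n, k) * 2**comb(k, 2) for k in range(1, n + 1))

for n in range(2, 7):
    print(n, count_subgraphs(n))
```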
{ "language": "en", "url": "https://math.stackexchange.com/questions/472450", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 2 }
Why is the supremum of the empty set $-\infty$ and the infimum $\infty$? I read in a paper on set theory that the supremum and the infimum of the empty set are defined as $\sup(\{\})=-\infty$ and $\inf(\{\})=\infty$. But intuitively I can't figure out why that is the case. Is there a reason why the supremum and infimum of the empty set are defined this way?
In addition to the reasons mentioned above about it making things easier, it also makes sense intuitively if you think about how each is defined. The supremum is the lowest upper bound on a set, so, since any real number is an upper bound on the empty set, no real number can be the lowest such bound (if $x$ is that bound, then $x - 1$ is a smaller upper bound). Thus, it is defined to be $-\infty$, less than any real number; similarly for the infimum.
{ "language": "en", "url": "https://math.stackexchange.com/questions/472532", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "14", "answer_count": 6, "answer_id": 1 }
Prove that: $ \cot7\frac12 ^\circ = \sqrt2 + \sqrt3 + \sqrt4 + \sqrt6$ How to prove the following trigonometric identity? $$ \cot7\frac12 ^\circ = \sqrt2 + \sqrt3 + \sqrt4 + \sqrt6$$ Using half angle formulas, I am getting a number for $\cot7\frac12 ^\circ $, but I don't know how to show it to equal the number $\sqrt2 + \sqrt3 + \sqrt4 + \sqrt6$. I would however like to learn the technique of dealing with surds such as these, especially in trignometric problems as I have a lot of similar problems and I don't have a clue as to how to deal with those. Hints please! EDIT: What I have done using half angles is this: (and please note, for convenience, I am dropping the degree symbols. The angles here are in degrees however). I know that $$ \cos 15 = \dfrac{\sqrt3+1}{2\sqrt2}$$ So, $$\sin7.5 = \sqrt{\dfrac{1-\cos 15} {2}}$$ $$\cos7.5 = \sqrt{\dfrac{1+\cos 15} {2}} $$ $$\implies \cot 7.5 = \sqrt{\dfrac{2\sqrt2 + \sqrt3 + 1} {2\sqrt2 - \sqrt3 - 1}} $$
$$\text{As } \cot x =\frac{\cos x}{\sin x}$$ $$ =\frac{2\cos^2x}{2\sin x\cos x}\ (\text{multiplying the numerator and the denominator by }2\cos x)$$ $$=\frac{1+\cos2x}{\sin2x}\ (\text{using }\sin2A=2\sin A\cos A,\ \cos2A=2\cos^2A-1),$$ $$ \cot7\frac12 ^\circ =\frac{1+\cos15^\circ}{\sin15^\circ}$$ $\cos15^\circ=\cos(45-30)^\circ=\cos45^\circ\cos30^\circ+\sin45^\circ\sin30^\circ=\frac{\sqrt3+1}{2\sqrt2}$ $\sin15^\circ=\sin(45-30)^\circ=\sin45^\circ\cos30^\circ-\cos45^\circ\sin30^\circ=\frac{\sqrt3-1}{2\sqrt2}$ Method $1:$ $$\frac{1+\cos15^\circ}{\sin15^\circ}=\csc15^\circ+\cot15^\circ$$ $$\cot15^\circ=\frac{\cos15^\circ}{\sin15^\circ}=\frac{\sqrt3+1}{\sqrt3-1}=\frac{(\sqrt3+1)^2}{(\sqrt3-1)(\sqrt3+1)}=2+\sqrt3$$ $$\csc15^\circ=\frac{2\sqrt2}{\sqrt3-1}=\frac{2\sqrt2(\sqrt3+1)}{(\sqrt3-1)(\sqrt3+1)}=\sqrt2(\sqrt3+1)=\sqrt6+\sqrt2$$ so $\cot7\frac12^\circ=(2+\sqrt3)+(\sqrt6+\sqrt2)=\sqrt2+\sqrt3+\sqrt4+\sqrt6$. Method $2:$ $$\implies \frac{1+\cos15^\circ}{\sin15^\circ}=\frac{1+\frac{\sqrt3+1}{2\sqrt2}}{\frac{\sqrt3-1}{2\sqrt2}}=\frac{2\sqrt2+\sqrt3+1}{\sqrt3-1}=\frac{(2\sqrt2+\sqrt3+1)(\sqrt3+1)}{(\sqrt3-1)(\sqrt3+1)}\ (\text{rationalizing the denominator})$$ $$=\frac{2\sqrt6+4+2\sqrt3+2\sqrt2}2=\sqrt6+2+\sqrt3+\sqrt2=\sqrt2+\sqrt3+\sqrt4+\sqrt6$$
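A quick numerical confirmation in Python:

```python
import math

lhs = 1 / math.tan(math.radians(7.5))
rhs = math.sqrt(2) + math.sqrt(3) + math.sqrt(4) + math.sqrt(6)
print(lhs, rhs)  # both ~7.5957541127
```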
{ "language": "en", "url": "https://math.stackexchange.com/questions/472594", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "23", "answer_count": 8, "answer_id": 5 }
In how many ways can $r$ elements be chosen from $n$ elements with repetition, such that every element is chosen at least once? Thank you for your comments -- hopefully a clarifying example -- In choosing all combinations of 4 elements from (A,B,C), (e.g., AAAA,BBBB,ACAB,BBBC, etc.) how many of these combinations include A, B and C? Using a computer, I came up with 36 for this example. There are 81 combinations, and I found only 45 combinations that include 2 or less of the three elements. That is, where element order is considered and repetition allowed. I can't seem to get my head around it. What I came up with was n choose r - (n choose n-1) * (n-1 choose r) but the latter overlap and I was not sure how to calculate the overlaps. Can anyone please help? Thanks in advance.
The answer in general is $n!\,S(r,n)$, where $S(r,n)$ is the Stirling number of the second kind; this counts the words of length $r$ that use all $n$ elements, i.e. the surjections from the $r$ positions onto the $n$ elements. For the example above this gives $3!\,S(4,3)=6\cdot6=36$.
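For the example in the question ($n=3$ symbols, words of length $r=4$), a brute-force count agrees with $n!\,S(r,n)$ computed via inclusion-exclusion; a minimal Python sketch:

```python
from itertools import product
from math import comb

n, r = 3, 4
brute = sum(1 for w in product(range(n), repeat=r) if len(set(w)) == n)
formula = sum((-1)**j * comb(n, j) * (n - j)**r for j in range(n + 1))  # = n! * S(r, n)
print(brute, formula)  # 36 36
```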
{ "language": "en", "url": "https://math.stackexchange.com/questions/472666", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 5, "answer_id": 1 }
How to decompose permutations? In algebra, we have seen theorems such as Every permutation is the product of disjoint cycles of length $\geq 2$. I don't really know how to apply this, so I looked at its proof hoping it would be helpful. Proof: Decompose $\{1, \dots, n\}$ disjointly in orbits of $\langle \sigma \rangle$. This didn't help me at all. I don't really know what is meant by this. Can anyone explain to me the proof and also how to actually do it? For example, how can I find: $$\begin{pmatrix} 1 \end{pmatrix} \begin{pmatrix} 2 & 4 & 3 \end{pmatrix} = \begin{pmatrix} 1 & 2 \end{pmatrix} \begin{pmatrix} 3 & 4 \end{pmatrix} \begin{pmatrix} 1 & 2 & 3 \end{pmatrix}$$ Thanks in advance for any help.
The key to decomposing cycles is to trace the "orbit" of each element under the permutation. So, for example, let's decompose $$ \sigma= \begin{pmatrix} 1 & 2 \end{pmatrix} \begin{pmatrix} 3 & 4 \end{pmatrix} \begin{pmatrix} 1 & 2 & 3 \end{pmatrix} $$ We begin by finding $\sigma(1)$. Applying the permutations from right to left, we find $1\to2$ under the right-most cycle, $2$ in turn stays the same under the middle cycle, and $2\to1$ under the leftmost cycle. So, $\sigma(1)=1$. Since we ended where we began, our first cycle is $\pmatrix{1}$. We move on to the next element. Now, find $\sigma(2)$. We find $2\to3$, $3\to4$, $4\to4$. Thus, $\sigma(2)=4$. Now, find $\sigma(4)$. We find $4\to4\to3\to3$. Thus, $\sigma(4)=3$. Now, find $\sigma(3)$. We find $3\to1\to1\to2$. Thus, $\sigma(3)=2$. Since we ended where we began, our second cycle is $\pmatrix{2&4&3}$. Since there are no more elements to permute, we are done. Thus, we find $\sigma = \pmatrix{1}\pmatrix{2&4&3}$.
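The same tracing procedure is easy to mechanize. Here is a minimal Python sketch (cycles as tuples, applied right to left, as in the answer):

```python
def apply_cycles(x, cycles):
    # apply a product of cycles to x, rightmost cycle first
    for cyc in reversed(cycles):
        if x in cyc:
            x = cyc[(cyc.index(x) + 1) % len(cyc)]
    return x

def cycle_decomposition(cycles, domain):
    seen, result = set(), []
    for start in domain:
        if start in seen:
            continue
        orbit, x = [], start
        while x not in seen:
            seen.add(x)
            orbit.append(x)
            x = apply_cycles(x, cycles)
        result.append(tuple(orbit))
    return result

sigma = [(1, 2), (3, 4), (1, 2, 3)]
print(cycle_decomposition(sigma, [1, 2, 3, 4]))  # [(1,), (2, 4, 3)]
```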
{ "language": "en", "url": "https://math.stackexchange.com/questions/472733", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Definition of limit and axiom of choice In the definition of limit of a function ($\epsilon-\delta$ definition) we say certain statements such as for every $\epsilon>0$ there exist $\delta>0$ .... Now my question is, is a choice function required to ensure that for every $\epsilon$ there exist a $\delta>0$? Moreover, how we can we test whether a mathematical statement depends on axiom of choice or not? I really don't know whether this question has any meaning or not, so please help me.
The question doesn't make much sense, and it seems that you are misunderstanding the definition. We define continuity by $\varepsilon$-$\delta$ at a point $x$ in the following way: if you want to show that a function is continuous at $x$, it suffices to show that every $\varepsilon$ has such a $\delta$. We are not required to assign in a single stroke all the $\varepsilon$'s with fitting $\delta$'s. We just have to show that given $\varepsilon>0$ there is some $\delta$ which satisfies the statement. On the other hand, the axiom of choice is required in order to show that continuity by $\varepsilon$-$\delta$ at a point is equivalent to continuity using sequences (i.e. $f(x)=\lim_{n\to\infty}f(x_n)$ for every sequence $x_n\to x$). The axiom of countable choice suffices here. For more details on that last part, see the entire discussion on: Continuity and the Axiom of Choice Finally, to prove that the axiom of choice is required to prove $\varphi$ we can either show that $\varphi$ implies something we already know to require the axiom of choice (or a fragment thereof), or that we can construct a model of $\sf ZF$ in which $\varphi$ fails (of course it makes sense only if $\sf ZFC$ proves $\varphi$ in the first place). For more on that, you can read: * *How do we know we need the axiom of choice for some theorem? *We cannot write this function
{ "language": "en", "url": "https://math.stackexchange.com/questions/472806", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 0 }
How can I calculate the points of two lines that connect two circles? Let's say I have two circles of equal or differing radii, a variable distance apart. I want to calculate the end points of two lines, that will "connect" the circles. And no matter how the circles may be oriented, they should still "connect" in the same way. How can I calculate the end points of these two lines?
Either way, if you know the circles, find the gradient of each circle (meaning $y'$) and equate them; that is how you get the line. This line will be tangent, since it does not cut the circle. Note: to join the endpoints of diameters, the circles must be of equal radii; in any case, the method above will get you the result.
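For a concrete computational route (a standard construction, not necessarily the method sketched above): an external tangent line has a common unit normal $n$ satisfying $n\cdot(c_2-c_1)=r_1-r_2$, which yields the tangent points directly. A hedged Python sketch:

```python
import math

def external_tangent_points(c1, r1, c2, r2):
    """Tangent points of the two external tangent lines to two circles.

    Assumes neither circle is inside the other: |r1 - r2| < dist(c1, c2).
    """
    dx, dy = c2[0] - c1[0], c2[1] - c1[1]
    d = math.hypot(dx, dy)
    theta = math.atan2(dy, dx)        # direction of the center line
    beta = math.acos((r1 - r2) / d)   # angle from center line to the common normal
    lines = []
    for sign in (+1, -1):
        a = theta + sign * beta
        nx, ny = math.cos(a), math.sin(a)         # common unit normal
        p1 = (c1[0] + r1 * nx, c1[1] + r1 * ny)   # tangent point on circle 1
        p2 = (c2[0] + r2 * nx, c2[1] + r2 * ny)   # tangent point on circle 2
        lines.append((p1, p2))
    return lines

print(external_tangent_points((0, 0), 1.0, (5, 0), 2.0))
```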
{ "language": "en", "url": "https://math.stackexchange.com/questions/472858", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Understanding concatenating the empty set to any set. I know that concatenating the empty set to any set yields the empty set. So, $A \circ \varnothing = \varnothing$. Here $A$ is a set of strings and the concatenation ($\circ$) of two sets of strings, $X$ and $Y$ is the set consisting of all strings of the form $xy$ where $x\in X$ and $y \in Y$. (You may want to take a look at page 65, Example 1.53 of Introduction to the Theory of Computation by Michael Sipser). However, I get somewhat puzzled when I try to intuitively understand this. A wrong line of thinking will make one to ask, "If we concatenate $A$ with $\varnothing$, should not it still be $A$?" Well, one way force myself to understand the correct answer, may be, to say that, since I am concatenating with an empty set, actually I will not be able to carry out the concatenation. The concatenation will not exist at all. I am asking for help from experienced users to provide hints and real life examples which will help one to modify the thinking process and help one better to really understand the correct answer. I am putting more stress on real life examples. I need to understand this. I am not happy simply memorizing the correct answer.
The wrong line of thinking is almost correct, in the following way. Let $\epsilon$ be the empty string. For any string $a$, we have $a\epsilon=a$. Let $\{\epsilon\}$ be the set containing exactly one element, namely the empty string. Then for any set of strings $A$, we have $A\circ\{\epsilon\}=A$. The real issue, then, is confusing $\{\epsilon\}$ with $\varnothing$. The former is a set containing one string; the latter is a set containing zero strings. (One potential trap is that some formalisms might identify $\epsilon=\varnothing$. Then we have to distinguish between $\{\varnothing\}$ and $\varnothing$. This is theoretically straightforward, but it means that you have to be careful with your notation.)
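The distinction is easy to see concretely; a minimal Python sketch modeling languages as sets of strings:

```python
def concat(X, Y):
    return {x + y for x in X for y in Y}

A = {"a", "ab"}
print(concat(A, set()))  # set(): there is no pair (x, y) to form at all
print(concat(A, {""}))   # {'a', 'ab'}: the set containing only the empty
                         # string is the identity element for concatenation
```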
{ "language": "en", "url": "https://math.stackexchange.com/questions/472952", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "18", "answer_count": 4, "answer_id": 0 }
A set of basic abstract algebra exercises I wanted to review some basic abstract algebra. Here's a few problems for which I am seeking solution verification. Thank you very much in advance! $\textbf{Problem:}$ Let $H$ be a subgroup of $G$, and let $X$ denote the set of all the left cosets of $H$ in $G$. For each element $a \in G$, define $\rho_{a}: X \rightarrow X$ as follows: $$\rho_{a} (xH) = (ax) H.$$ * *Prove that $\rho_{a}$ is a permutation of $X$ for each $a \in G$. *Prove that $h: G \rightarrow S_{X}$ defined by $h(a)=\rho_{a}$ is a homomorphism. *Prove that the set $\{a \in H : xax^{-1} \in H \> \forall x \in G\}$ is the kernel of $h$. $\textbf{Solution:}$ * *Choose any $a \in G$. We first show that $\rho_{a}$ is injective. So, assume $\rho_{a}(xH) = \rho_{a}(x'H).$ Hence, $(ax)H = (ax')H$; we need to show $ xH = x'H$. Let $g \in xH$. Then, $g = xh_0$ and $ag = (ax)h_0 = (ax')h_1$ by our assumption. Multiplying $ag = (ax')h_1$ on the left by $a^{-1}$ gives us that $g= x'h_1$. Thus, $g \in x'H$. A similar argument gives us the reverse inclusion. To prove the surjectivity of $\rho_{a}$, let $xH \in X$. Since $a^{-1}x \in G$, we have $\rho_{a} (a^{-1}x H) = (aa^{-1}x)H = xH$. Indeed, $\rho_{a}$ is surjective. *First, we show $\rho_{ab} = \rho_{a} \circ \rho_{b}$. Let $xH$ be an arbitrary element belonging to $X$. Observe that $$\rho_{a} \circ \rho_{b} (xH) = \rho_{a}((bx)H) = (abx)H = \rho_{ab}(xH).$$ Thus, $$h(ab)=\rho_{ab}=\rho_{a} \circ \rho_{b} = h(a)h(b),$$ and we conclude that $h$ is a homomorphism. *Let $K$ denote the kernel of $h$. We show $\{a \in H : xax^{-1} \in H \> \forall x \in G\} = K$. To start, let $k \in K$. Then, $h(k)=\rho_{k}=\rho_{e}$, where $e$ is the identity element of $G$. Since $\rho_{k}=\rho_{e}$, for each $xH \in X$ we have $(kx)H=xH.$ Hence, $kxh_0 = xh_1$ for some $h_0,h_1 \in H$ and $x^{-1}kx=h_{1}h^{-1}_{0}.$ This implies $x^{-1}kx \in H$. For clarity, put $x_0 = x^{-1}$. So, $x^{-1}kx = x_{0}kx^{-1}_0 \in H$. Indeed, $k \in\{a \in H : xax^{-1} \in H \> \forall x \in G\}$. To prove the reverse inclusion, this time let $k \in \{a \in H : xax^{-1} \in H \> \forall x \in G\}.$ Then, we must show $(kx)H=xH$. Let $g\in (kx)H$. Suppose $g = kxh_0$ for $h_0 \in H$. Multiplying on the right by $x^{-1}$, we obtain $x^{-1}g = x^{-1}kxh_0 = h_1$ for some $h_1 \in H$. Multiplying on the right by $x$, we indeed get $g=xh_1 \in xH$. For the reverse inclusion, we let $g \in xH$ so that $g=xh_0$ for some $h_0$. Then, $$kg=kxh_0$$ $$g^{-1}kg=g^{-1}kxh_{0}$$ $$ h_{1}=g^{-1}kxh_0$$ $$g=kxh_0h^{-1}_{1}.$$ The last line gives us that $g \in (kx)H$ as desired. $\blacksquare$
It looks good in its current state, but some of the arguments could be simplified. Do recall that $$xH = yH \iff x^{-1} y \in H$$ So for the proof of the first one, we have \begin{align} \rho_\alpha (xH) = \rho_{\alpha}(x'H) &\iff (\alpha x) H = (\alpha x') H \\ &\iff (\alpha x)^{-1} (\alpha x') \in H \\ &\iff x^{-1} \alpha^{-1} \alpha x' \in H \\ &\iff x^{-1} x' \in H \\ &\iff xH = x'H \end{align} Proceeding similarly, the proof of 3 can be shortened.
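All three parts can also be checked computationally on a small example. A minimal Python sketch with $G=S_3$ and $H$ generated by a transposition (elements as tuples $g$ with $g[i]$ the image of $i$):

```python
from itertools import permutations

def mul(g, h):                        # composition: (g*h)(i) = g(h(i))
    return tuple(g[h[i]] for i in range(3))

G = list(permutations(range(3)))
e = (0, 1, 2)
H = [e, (1, 0, 2)]                    # subgroup generated by the transposition (0 1)

def coset(g):                         # the left coset gH as a frozenset
    return frozenset(mul(g, h) for h in H)

X = {coset(g) for g in G}             # the three left cosets

def rho(a):                           # rho_a(xH) = (ax)H, as a dict on X
    return {c: frozenset(mul(a, x) for x in c) for c in X}

assert all(set(rho(a).values()) == X for a in G)            # 1: permutation of X
assert all(rho(mul(a, b))[c] == rho(a)[rho(b)[c]]
           for a in G for b in G for c in X)                # 2: homomorphism
print([g for g in G if all(rho(g)[c] == c for c in X)])     # 3: the kernel
```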
{ "language": "en", "url": "https://math.stackexchange.com/questions/473124", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 1, "answer_id": 0 }
Why doesn't this set have a supremum in a non-complete field? Why doesn't the set $\{x:x^2<5\}$ have a supremum in $\mathbb{Q}$? I know that the rational numbers aren't a complete field, but I'm still not understanding how a set can have upper bounds, but no least upper bound in a field. In $\mathbb{Z}$ for example, $\{x:x^2<5\}=\{-2,-1,0,1,2\}$. It has the set of upper bounds: $[2,\infty)\cap\mathbb{Z}$. So why isn't the least upper bound $2$?
Suppose $q$ is a rational number which is an upper bound for your set. Since $\sqrt{5}$ is irrational, it must be the case that $q>\sqrt{5}$. But then there exists a rational number $q'$ such that $$\sqrt{5} < q' < q$$ so $q$ couldn't have been a least upper bound. Added: if you want to work entirely inside $\mathbb{Q}$ then you can get away without referring to $\sqrt{5}$ at all. It's a fact that if $q > x$ for all $x \in \mathbb{Q}$ with $x^2 < 5$, then there exists $q' < q$ with the same property. As for $\mathbb{Z}$, the set $\{ x \in \mathbb{Z} : x^2 < 5 \}$ is equal to $$\{ -2, -1, 0, 1, 2 \}$$ It's not equal to $\{0, 1, 4 \}$, which is the set of integer squares less than $5$, whereas our set is the set of integers whose squares are less than $5$. (Think for a while about why these aren't the same thing.) And indeed, this set does have a supremum in $\mathbb{Z}$, namely $2$, its largest element. Added again: I'd like to stress that your question really asks about two sets, not just one. The notation $\{ x : x^2 < 5 \}$ is not very clear if it's not specified what values $x$ is meant to range over. The two sets are, respectively: * *$\{ x \in \mathbb{Q} : x^2 < 5 \}$, the set of rationals whose squares are less than $5$ *$\{ x \in \mathbb{Z} : x^2 < 5 \}$, the set of integers whose squares are less than $5$ The first of these sets is infinite, and the second is finite!
{ "language": "en", "url": "https://math.stackexchange.com/questions/473211", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Entire Function Problem in Complex Analysis I am currently working on some review problems in complex analysis and came upon the following conundrum of a problem. "If $f(z)$ is an entire function, and satisfies $|f(z^2)|\le|f(z)|^2$, prove that f(z) is a polynomial." My intuition tells me to show that f(z) has a pole at infinity by showing that infinity is not an essential or removable singularity. However, I am getting stuck after this. Thanks for the help,
Let $$ M=\sup_{|z|=2}|f(z)| $$ Then, with the condition given, it can be proven inductively that $$ \sup_{|z|=2^{2^n}}|f(z)|\le M^{2^n} $$ which implies (applying the maximum modulus principle between successive circles) $$ |f(z)|\le|z|^{2\log_2(M)} $$ for $|z|\ge2$. We can use Cauchy's integral formula to give $$ f^{(n)}(z)=\frac{n!}{2\pi i}\int_{\gamma_R}\frac{f(w)\,\mathrm{d}w}{(w-z)^{n+1}} $$ where $\gamma_R$ is the circle of radius $R$. If we choose $n\gt2\log_2(M)$, then letting $R\to\infty$, $$ \begin{align} |f^{(n)}(z)| &\le\frac{n!}{2\pi}\int_{\gamma_R}\frac{|w|^{2\log_2(M)}\,|\mathrm{d}w|}{|w-z|^{n+1}}\\ &\sim n!\,R^{2\log_2(M)-n}\\[12pt] &\to0 \end{align} $$ Since the $n^\text{th}$ derivative is identically $0$, $f$ must be a polynomial.
{ "language": "en", "url": "https://math.stackexchange.com/questions/473275", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Convergence implies lim sup = lim inf Could someone please explain to me how the following can be proven? I get the intuition but don't know how to write it rigorously. Thank you.
Both $\limsup$ and $\liminf$ are limit points of a sequence; the largest and smallest limit points of the sequence, respectively, just as $\lim$ is a limit point. Assuming your space is Hausdorff, or something else that guarantees that the limit is unique, a convergent sequence has only one limit point, so we must have $\liminf=\limsup=\lim$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/473329", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
General formula needed for this product rule expression (differential operator) Let $D_i^t$, $D_i^0$ for $i=1,\dots,n$ be differential operators. (For example $D_1^t = D_x^t$, $D_2^t = D_y^t,\dots$, where $x$, $y$ are the coordinates). Suppose I am given the identity $${D}_a^t (F_t u) = \sum_{j=1}^n F_t({D}_j^0 u){D}_a^t\varphi_j$$ where $\varphi_j$ are smooth functions and $F_t$ is some nice map. So $$ D^t_bD^t_a(F_t u) = \sum_j D^t_b\left(F_t({D}_j^0 u)\right){D}_a^t\varphi_j+\sum_j F_t({D}_j^0 u)D^t_b{D}_a^t\varphi_j $$ and because $${D}_b^t (F_t (D_j^0u)) = \sum_{k=1}^n F_t({D}_k^0 D_j^0u){D}_b^t\varphi_k,$$ we have $$D^t_bD^t_a(F_t u) = \sum_{j,k=1}^n F_t({D}_k^0 D_j^0u){D}_b^t\varphi_k\,{D}_a^t\varphi_j+\sum_j F_t({D}_j^0 u)D^t_b{D}_a^t\varphi_j .$$ My question is how do I generalise this and obtain a rule for $$D^t_{\alpha} (F_t u)$$ where $\alpha$ is a multiindex of order $n$ (or order $m$)? My intention is to put the derivatives on $u$ and put the $F_t$ outside, like I demonstrated above. Can anyone help me with getting the formula for this? It's really tedious to write out multiple derivatives so it's hard to tell for me.
This is probably better served as a comment, but I can't add one because of lack of reputation. If I understand correctly you are probably looking for the multivariate version of Faà di Bruno's formula, beware that the Wikipedia entry uses slightly different notation.
{ "language": "en", "url": "https://math.stackexchange.com/questions/473481", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 1, "answer_id": 0 }
What is the "reverse" of the cartesian product? Suppose $A = \{a_1,a_2 \}$ and $B = \{b_1,b_2 \}$. Then $A \times B = \{(a_1,b_1), (a_1,b_2), (a_2,b_1), (a_2,b_2) \}$. What is the "reverse" of this operation? In particular, what would $A \div B$ be? The motivation for this question is from relational algebra. Consider the following two tables: $$\text{Table A}: \{(s_1,p_1), (s_2,p_1), (s_1,p_2), (s_3,p_1), (s_5,p_3) \}$$ $$\text{Table B}: \{p_1,p_2\}$$ Then $$A \div B = \{s_1 \}$$ In other words, we look at the x-coordinate which has both $p_1$ and $p_2$ as y-coordinates.
It looks like you want to define it as follows: Given sets $X,Y,$ $A\subseteq X\times Y,$ and $B\subseteq Y$ we define $$A\div B:=\bigl\{x\in X\mid\{x\}\times B\subseteq A\bigr\}.$$ As far as I'm aware, there is no standard name for this. More generally, if you wanted it to work for arbitrary sets (not just subsets of Cartesian products), you could instead proceed as follows: The domain of a set $A$, denoted $\text{dom}(A),$ is the set of all $x$ such that $\langle x,y\rangle\in A$ for some $y$. Equivalently, $\text{dom}(A)$ is the domain of the largest binary relation contained in $A$, so while the domain of a non-empty set can be empty, the domain of a non-empty binary relation cannot. Given sets $A,B,$ we then define $$A\div B:=\bigl\{x\in\text{dom}(A)\mid\{x\}\times B\subseteq A\bigr\}.$$
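In the relational-algebra setting this is exactly relational division, which takes only a few lines; a minimal Python sketch of the example from the question:

```python
def divide(A, B):
    """All x such that (x, b) is in A for every b in B."""
    xs = {x for (x, _) in A}
    return {x for x in xs if all((x, b) in A for b in B)}

A = {("s1", "p1"), ("s2", "p1"), ("s1", "p2"), ("s3", "p1"), ("s5", "p3")}
B = {"p1", "p2"}
print(divide(A, B))  # {'s1'}
```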
{ "language": "en", "url": "https://math.stackexchange.com/questions/473563", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 0 }
Looking for a source: Fourier inversion of $f \in L^1$ Is there a book where I can find a thorough proof of the following assertion? Let $f \in L^1(\mathbb{R}^d)$ be continuous at zero and $\hat{f}\ge0$. Then $\hat{f} \in L^1(\mathbb{R}^d)$ and $$f(t) = \int_{\mathbb{R}^d} \hat{f}(\xi)e^{2\pi i\, t\xi} \,d\xi$$ almost everywhere. I'm looking for the context in which this Lemma is stated, more than the actual proof. I've finally found the source: E. M. Stein, G. Weiss, Introduction to Fourier Analysis on Euclidean Spaces, Princeton University Press, 1971. $\S$1. Corollary 1.26 (p.15)
For the proof, please see here: Proof of Fourier Inverse formula for $L^1$ case. Additionally, a German reference that pays more attention to the context than to the proof is here: http://www.math.ethz.ch/education/bachelor/seminars/hs2007/harm-analysis/FT2.pdf
{ "language": "en", "url": "https://math.stackexchange.com/questions/473657", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Prove that $f(x)=\int_{0}^x \cos^4 t\, dt\implies f(x+\pi)=f(x)+f(\pi)$ How to prove that if $f(x)=\int_{0}^x \cos^4 t\, dt$ then $f(x+\pi)$ equals $f(x)+f(\pi)$? I thought to first integrate $\cos^4 t$, but this seems to be a problem about definite integrals.
$$f(x+\pi)-f(\pi)=\int_0^{x+\pi}\cos^4t dt-\int_0^{\pi}\cos^4t dt=\int_{\pi}^{x+ \pi}\cos^4t dt$$ Putting $u=t-\pi, dt=du$ and $\cos t=\cos(u+\pi)=-\cos u$ $$\int_{\pi}^{x+\pi}\cos^4t dt=\int_0^x(-\cos u)^4du=f(x)$$
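A numerical spot check of the identity (a minimal Python sketch; it uses scipy only for the quadrature):

```python
import math
from scipy.integrate import quad

f = lambda x: quad(lambda t: math.cos(t)**4, 0, x)[0]
for x in (0.3, 1.0, 2.5):
    print(f(x + math.pi), f(x) + f(math.pi))  # each pair agrees
```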
{ "language": "en", "url": "https://math.stackexchange.com/questions/473739", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 0 }
I want to show that ker f is a normal subgroup of some group $X$ Suppose I have two groups, call them $X$ and $Y$, and I let $f : X \longrightarrow Y$ be a group homomorphism. I want to prove that ker $f$ is a normal subgroup of $X$. Here is my attempt. Let me know how my proof looks and if I am missing details: We want to show that $aca^{-1} \in$ Ker $f$, $\forall a \in X$ and $\forall c \in$ Ker $f$. $f(aca^{-1}) = f(a)f(c)f(a^{-1})$ $= f(a)f(c)f(a)^{-1}$ $= f(a)If(a)^{-1}$ because $c \in$ Ker $f$ $= f(a)f(a)^{-1}$ $= I$ Thus, ker $f$ is a normal subgroup of $X$.
Yes, it looks entirely fine. You might want to explicitly state what $I$ means, but the logic is sound.
{ "language": "en", "url": "https://math.stackexchange.com/questions/473927", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Prove the Following Property of an Ultrafilter In her text Introduction to Modern Set Theory, Judith Roitman defined a filter of a set $X$ as a family $F$ of subsets of $X$ so that: (a) If $A \in F$ and $X \supseteq B \supseteq A$ then $B \in F$. (b) If $A_1, \dots, A_n$ are elements of $F$, so is $A_1 \cap \dots \cap A_n$. Then she proceeded to define an ultrafilter as such: "If $F$ is proper and, for all $A \subseteq X$, either $A \in F$ or $X-A \in F$, we say that $F$ is an ultrafilter." Now, suppose that $F$ is an ultrafilter on a set $X$. Prove that if $X = S_1 \cup \dots \cup S_n$, then some $S_i \in F$. She wrote, "If not, then, since no $S_i \in F$, $F$ is proper, and each $X-S_i \in F$. So $\bigcap_{i \le n}(X-S_i) \in F$. But $\bigcap_{i \le n}(X-S_i) = \emptyset \notin F$." What I did not understand was that if she already defined an ultrafilter as proper, why did she have to say "since no $S_i \in F$, $F$ is proper ..."? My thinking was that if $X = S_1 \cup \dots \cup S_n$ is not an element of $F$, then by the fact that $F$ is an ultrafilter, $X^c = \bigcap_{i \le n}(X-S_i) \in F$, but $X^c = \emptyset \notin F$, creating a contradiction. Did I misunderstand something?
If I were writing such text, I would have pointed it out in order to remind the reader of this. Especially when in just one sentence we derived a contradiction from it.
{ "language": "en", "url": "https://math.stackexchange.com/questions/474015", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
How to find the integral by changing the coordinates? Let $R$ be the region in the first quadrant where $$3 \geq y-x \geq 0$$ $$5 \geq xy \geq2$$ Compute $$\int_R (x^2-y^2)\,dx\,dy.$$ I tried to use $ u= y-x, v= xy$ as my change of coordinates, but then I don't know how to solve it. Can someone help me?
For the Jacobian, use this fact that: $$\frac{\partial(x,y)}{\partial(u,v)}=\frac{1}{\frac{\partial(u,v)}{\partial(x,y)}}$$ provided $\frac{\partial(x,y)}{\partial(u,v)}\neq 0$.
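Carrying this out for $u=y-x$, $v=xy$ with sympy (a sketch; the key simplification $(x^2-y^2)/(x+y)=x-y=-u$ makes the transformed integral elementary):

```python
import sympy as sp

x, y, u, v = sp.symbols('x y u v', positive=True)
J = sp.Matrix([[sp.diff(y - x, x), sp.diff(y - x, y)],
               [sp.diff(x*y, x),   sp.diff(x*y, y)]]).det()
print(J)  # -x - y, so |d(x,y)/d(u,v)| = 1/(x + y)

# (x^2 - y^2) / (x + y) = x - y = -u, so the integral becomes that of -u:
print(sp.simplify((x**2 - y**2) / (x + y)))    # x - y
print(sp.integrate(-u, (v, 2, 5), (u, 0, 3)))  # -27/2
```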
{ "language": "en", "url": "https://math.stackexchange.com/questions/474189", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
how to find out any digit of any irrational number? We know that an irrational number does not have eventually periodic digits, the way a rational number does. For a rational number, this periodicity means that we can find out which digit exists in any position. But what about irrational numbers? For example: how to find out which digit exists in the fortieth position of $\sqrt{2}$, which equals $1.414213\ldots$? Is it possible to solve such kind of problem for any irrational number?
You can use continued fraction approximations to find rational numbers arbitrarily close to any irrational number. For $\sqrt 2$ this is equivalent to the chain of approximations $\frac 11, \frac 32, \frac 75, \frac {17}{12} \dots$ where the fraction $\cfrac {a_{n+1}}{b_{n+1}}=\cfrac {a_n+2b_n}{a_n+b_n}.$ The accuracy of the estimate at the $n^{th}$ fraction is approximately $\left|\cfrac 1{b_n b_{n-1}} \right|$, so you go far enough to get the accuracy you need to identify the decimal digit you want from the rational approximation.
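A minimal Python sketch of this recurrence, extracting the first $40$ decimals of $\sqrt 2$ with exact integer arithmetic (after $80$ steps the error is far below $10^{-45}$):

```python
a, b = 1, 1                    # convergents a/b of sqrt(2)
for _ in range(80):
    a, b = a + 2*b, a + b
digits = str(a * 10**45 // b)  # floor(a/b * 10^45) as an integer
print("1." + digits[1:41])     # 1.4142135623730950488016887242096980785696
```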
{ "language": "en", "url": "https://math.stackexchange.com/questions/474260", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 1 }
Commuting an improper $\int$ (at both of its ends) and $\lim$ I am working on the following problem: Let $f, g$ be continuous nonnegative functions defined and improperly integrable on $(0, \infty)$. Furthermore, assume they satisfy $$ \lim_{x\rightarrow 0}f(x) = 0 \wedge \lim_{x\rightarrow\infty}xg(x)=0. $$ Then prove that $$ \lim_{n\rightarrow\infty}n\int_0^\infty f(x)g(nx)\,dx = 0. $$ I tried to swap $\lim$ and $\int$. I know that if the improper integral and its integrand converge uniformly, they commute. But I am at a loss as to how to prove this condition. I would be grateful if you could help me in this regard.
Here is a way to use a direct bound for looking at the integral over $[1,\infty)$: $\int_1^\infty f(x) ng(nx) dx = \int_n^\infty f(u/n) g(u) du = \int_n^\infty \frac{f(u/n)}{u} (u g(u)) du \leq \frac{1}{n} \int_n^\infty f(u/n) (ug(u)) du$ The inequality follows from the fact that in the integrand $u > n \geq 1$. If you get here, then by assumption, for sufficiently large $n$, $ug(u)$ is small for $u > n$. See if you can finish from this angle. Okay, there is a delicate procedure I have in mind which I will outline here (let $\epsilon > 0$ be given): * *Use continuity of $f$ near $0$ to find a $\delta$ such that $|f(x)| < \epsilon$ for $|x| < \delta$. *Separate the integral to $\int_0^\delta$ and $\int_\delta^\infty$. *Bound $\int_0^\delta f(x) ng(nx) dx < \epsilon \int_0^\delta ng(nx) dx = \epsilon \int_0^{n\delta} g(u) du < \epsilon |g|_1$ *For $\int_\delta^\infty$, use the procedure above to get a bound of the form $|f|_1 \epsilon$. (Now that $\delta$ is fixed, if $n$ is large enough then $u g(u) < \epsilon$ for $u > \delta n$.) So the full bound is $\epsilon(|f|_1 + |g|_1)$, so now you can adjust a few parameters above to get it back to $\epsilon$ if you wish, though it suffices that you can bound by a quantity that is arbitrarily small, and hence the limit is $0$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/474334", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Field structure on $\mathbb{R}^2$ I have the following question: Is there a simple way to see that if we put a multiplication $*$ on $\mathbb{R}^2$ (considered as a vector space over $\mathbb{R}$) such that with usual addition and this multiplication $\mathbb{R}^2$ becomes a field, then there exists a nonzero $(x,y)$ such that $(x,y)*(x,y)=-1$? Remark: * *What I mean by "a simple way to see" is that I really don't want to refer to Frobenius's Theorem on real finite dimensional division algebras. *I haven't said this in the problem but I'm also assuming that with this multiplication $\mathbb{R}^2$ becomes an algebra meaning $x*(\alpha y)=\alpha(x*y).$
Sorry if I make it too elementary: If $1\in\mathbb R^2$ denotes the $1$ of your field, and if $x\in\mathbb R^2$ is not a real multiple of it: $1,x,x^2$ are linearly dependent (over $\mathbb R$), i.e. $ax^2+bx+c=0$ for some $a,b,c$, and $a\neq 0$ (as $x$ is not a multiple of $1$), so we can suppose it is $1$. If we complete the square, we get $(x+p)^2+q=0$ for some $p,q\in \mathbb R$. Now $q$ must be positive; otherwise $(x+p+\sqrt{-q})(x+p-\sqrt{-q})=0$, so you don't have a field (we found divisors of $0$). So finally $(x+p)/\sqrt{q}$ is the element you want.
{ "language": "en", "url": "https://math.stackexchange.com/questions/474495", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 0 }
Limit point intuition Quoting Rudin, "A point $p$ is a limit point of the set $E$ if every neighborhood of $p$ contains a point $q\not=p : q \in E$." This would imply that the points in an open ball would all be limit points, since for any $p$ in $E$ there are $q$ such that $d(p,q) < r$ for all $q \in E$. So E is also a neighborhood of the open ball. Is my intuition correct? What can be improved about this statement?
You can think of the set of limit points $L(S)$ of a set $S$ as all points which are "close to" $S$. In the example of an open ball in $\mathbb{R}^n$, the limit points are all points of the open ball, plus all points lying on the boundary, since every punctured neighborhood of such points will intersect the set. Note, however, that if $S$ is some set and $L(S)$ is the set of limit points, then it is not always true that $S \subseteq L(S)$. For example, in $\mathbb{R}$ under the ordinary topology, the set of integers has no limit points. (An element of a set which is not a limit point of the set is called an isolated point, which provides a good intuitive way of thinking about such points.)
{ "language": "en", "url": "https://math.stackexchange.com/questions/474571", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Radius and amplitude of kernel for Simplex noise I'm wondering if formulas exist for the radius and amplitude of the hypersphere kernel used in Simplex noise, generalized to an arbitrary number of dimensions. Ideally I'd like an answer with two equations in terms of n (number of dimensions) that give me r (radius) and a (amplitude), as well as an explanation of what makes these formulas significant. Here is a link to a document descrbing the Simplex noise algorithm. It mentions that the radius and amplitude need to be tuned, but it doesn't specify what values to use, like they're just fudge factors. http://www.csee.umbc.edu/~olano/s2002c36/ch02.pdf
The formula for the radius $r$ is simple. $$r^2 = \frac {1} {2}$$ This holds for all values of $N$. Let me explain. The simplex noise kernel summation radius $r$ should be the height of the N-simplex. If the kernel summation radius is larger than this, the kernel contribution will extend outside of the simplex. This will cause visual discontinuities because contributions are only added to the containing simplex, and not the surrounding simplices. The height of the N-simplex as a function of N is as follows. $$ r = h = s\sqrt{\frac {N+1} {2N}} $$ $s$ is the length of an edge, or the distance from one vertex to another. To find this value, unskew $[1, 1 \cdots 1]$ and get the distance to $[0, 0 \cdots 0]$. Note that $unskew([0, 0 \cdots 0])$ is $[0, 0 \cdots 0]$. Using the origin like this simplifies the math. $$ s = \sqrt{\frac {N} {N+1}} = \sqrt {N \left({1 + N \frac {\frac {1} {\sqrt {N+1}} - 1} {N}}\right) ^2} = \sqrt {unskew([1, 1 \cdots 1]) \cdot unskew([1, 1 \cdots 1])} $$ We can now use this this to calculate $r$ as a function of $N$. $$ r = h = \sqrt{\frac {1} {2}} = \sqrt{\frac {N} {N+1}} \sqrt{\frac {N+1} {2N}} $$ $N$ and $N+1$ divide out and the useful term, $r^2$, always works out to $\frac {1} {2}$. The contribution of each vertex is given by the following formula. $$max(r^2-d^2, 0)^a \cdot (\vec{d} \cdot \vec{g})$$ $d^2$ is the squared distance from the vertex to the input point. $\vec{d}$ is the displacement vector and $\vec{g}$ is the N-dimensional gradient. My understanding is that the amplitude is $a$ in the above formala. It can be whatever you want it to be. It is 4 in Ken Perlin's reference implementation. Different values give different visual noise. Think of it as desired smoothness. Also note that you may want a normalization factor to clamp output to a range of -1 to +1. Perlin's reference implementation uses 8. Gustavson uses different factors for different values of $N$. Sharpe claims the following formula can be used to calculate the normalization factor $n$ as a function of $N$. $$n = \frac {1} {\sqrt{\frac {N} {N+1}} \left( r^2 – \frac {N} {4 (N+1)} \right) ^ a} = \frac {1} {2{\frac s 2} \left( r^2 – \left( {\frac s 2} \right) ^2 \right) ^ a}$$ I am not convinced $2{\frac s 2}$ actually equates to $2(\vec{d} \cdot \vec{g})$. Sharpe is probably right that the minimum and maximum values occur on edge midpoints. I have not been able to independently verify this. If the ideally contributing gradient $\vec{g}$ is $[1, 1 \cdots 1]$ and an edge midpoint $\vec{d}$ is $unskew([\frac 1 2,\frac 1 2 \cdots \frac 1 2])$, then $2(\vec{d} \cdot \vec{g})$ works out to the following. $$ s = {\frac {N} {\sqrt {N+1}}} = 2N \left({\frac 1 2 + \frac 1 2 N \frac {\frac {1} {\sqrt {N+1}} - 1} {N}}\right) = 2 (unskew([\frac 1 2,\frac 1 2 \cdots \frac 1 2]) \cdot [1, 1 \cdots 1]) $$ Assuming Sharpe is right about edge midpoints, the scaling factor $n$ as a function of $N$ is as follows. $$n = \frac {1} {{\frac {N} {\sqrt {N+1}}} \left( r^2 – \frac {N} {4 (N+1)} \right) ^ a} = \frac {1} {2(\vec{d} \cdot \vec{g}) \left( r^2 – \left( {\frac s 2} \right) ^2 \right) ^ a}$$ Note that this assumes the ideally contributing gradient $\vec{g}$ is $[1, 1 \cdots 1]$ and not something like $[0, 1 \cdots 1]$. A different ideal gradient changes the dot product of $(\vec{d} \cdot \vec{g})$. References I can't link to: * *Wikipedia - Simplex Noise *Wikipedia - Simplex *Simplex noise demystified, Stefan Gustavson
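A quick Python check of the radius claim, combining the displayed formulas for the edge length $s$ and the height $h$:

```python
import math

for N in range(1, 9):
    s = math.sqrt(N / (N + 1))            # edge length of the unskewed simplex
    h = s * math.sqrt((N + 1) / (2 * N))  # simplex height = kernel radius r
    print(N, h * h)                       # r^2 = 0.5 for every N
```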
{ "language": "en", "url": "https://math.stackexchange.com/questions/474638", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
What are better words to use in an article than "obvious"? I've heard often than it is ill-form to use the word "obvious" in a research paper. I was hoping to gather a list of less offensive words that mean generally the same thing. For example, one that I can think of is the word "direct". So instead of saying "...obviously follows from lemma 2.3..." you'd say "...this proof directly follows from lemma 2.3...".
evidently, visibly, naturally, undeniably.. Words like trivially and obviously sound disrespectful; it is as if the author is mocking the reader. Also, they sound 'empty', and many authors use these words to make up for the incompleteness in their work. Mathematics is about deduction, not intuition. So any word that does not ask the reader to rely on intuition can be thought of as a good word. :)
{ "language": "en", "url": "https://math.stackexchange.com/questions/474698", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "19", "answer_count": 12, "answer_id": 11 }
How to find asymptotes of implicit function? How to find the asymptotes of the implicit function $$8x^3+y^3-6xy-3=0?$$
I have seen you are interested in doing problems with Maple, so the following code may help you do this mechanically: [> f:=8*x^3+y^3-6*x*y-3: t := solve(f = 0, y): m := floor(limit(t[1]/x, x = -infinity)); $$\color{blue}{m=-2}$$ [> h:=floor(limit(t[1]-m*x, x = -infinity)); $$\color{blue}{h=-1}$$ So the oblique asymptote is $y=-2x-1$.
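The same computation can be done without Maple. Here is an equivalent sympy sketch: substitute a candidate asymptote $y=mx+c$ and kill the two leading coefficients in $x$:

```python
import sympy as sp

x, m, c = sp.symbols('x m c')
f = 8*x**3 + (m*x + c)**3 - 6*x*(m*x + c) - 3
coeffs = sp.Poly(sp.expand(f), x).all_coeffs()            # highest degree first
m0 = [r for r in sp.solve(coeffs[0], m) if r.is_real][0]  # 8 + m^3 = 0 -> m = -2
c0 = sp.solve(coeffs[1].subs(m, m0), c)[0]                # x^2 term -> c = -1
print(m0, c0)  # -2 -1, i.e. the asymptote y = -2x - 1
```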
{ "language": "en", "url": "https://math.stackexchange.com/questions/474860", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 3, "answer_id": 2 }
Diagonalizable matrix with only one eigenvalue I have a question from a test I solved (without that question.. =) "If a matrix $A$ s.t $A$ is in $M(\mathbb{C})$ has only $1$ eigenvalue then $A$ is a diagonalizable matrix" That is a false assumption, since an $n\times n$ square matrix needs to have at least $n$ different eigenvalues (to make eigenvectors from); but doesn't the identity matrix have only $1$ eigenvalue?...
You need your $n\times n$ matrix to have $n$ linearly independent eigenvectors. And the identity matrix is already in diagonal form.
{ "language": "en", "url": "https://math.stackexchange.com/questions/474939", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 5, "answer_id": 2 }
Leibniz notation for high-order derivatives What is the reason for the positioning of the superscript $n$ in an $n$-order derivative $\frac{d^ny}{dx^n}$? Is it just a convention or does it have some mathematical meaning?
Several people have already posted answers saying it's $\left(\dfrac{d}{dx}\right)^n y$, so instead of saying more about that I will mention another aspect. Say $y$ is in meters and $x$ is in seconds; then in what units is $\dfrac{dy}{dx}$ measured? The unit is $\text{meter}/\text{second}$. The infinitely small quantities $dy$ and $dx$ are respectively in meters and seconds, and you're dividing one by the other. So in what units is $\dfrac{d^n y}{dx^n}$ measured? The thing on the bottom is in $\text{second}^n$ (seconds to the $n$th power); the thing on top is still in meters, not meters to the $n$th power. The "$d$" is in effect unitless, or dimensionless if you like that word. I don't think it's mere chance that has resulted in long-run survival of a notation that is "dimensionally correct". But somehow it seems unfashionable to talk about this.
{ "language": "en", "url": "https://math.stackexchange.com/questions/475016", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 4, "answer_id": 3 }
Issues in calculating the gradient I am trying to calculate the gradient of a certain expression. I am not sure if it's possible. I have the following $f(\alpha_1,\alpha_2,\Lambda) = \log(|2Q_1+2Q_2 +2Q_3|)$ $Q_1$ is a diagonal matrix with the diagonal terms equal to $\alpha_1$ $Q_2$ is a diagonal matrix with the diagonal terms equal to $\alpha_2$ $Q_3$ is a matrix with a bunch of parameters. Now how can I take the gradient of the function $f$ wrt $\alpha_1$, $\alpha_2$ and the params lets say $\Lambda_{ii}$. Is it possible to have something
Well, if you are interested in finding the gradient with respect to the parameters ($\alpha_1, \alpha_2, \Lambda_{ii}$) separately (without concatenating them into one vector and taking the derivative with respect to a vector), you can use some matrix calculus identities (but first you can pull the factor of $2$ out of the determinant inside the logarithm): $$\frac{\partial \ln|\mathbf{U}|}{\partial x} ={\rm tr}\left(\mathbf{U}^{-1}\frac{\partial \mathbf{U}}{\partial x}\right)$$ Here $x$ is $\alpha_1, \alpha_2$ or $\Lambda_{ii}$ and matrix $U$ is $\mathbf{Q_1+Q_2+Q_3}$. So $$ \begin{eqnarray} \frac{\partial f(\alpha_1,\alpha_2,\Lambda)}{\partial \alpha_1}={\rm tr}\left((\mathbf{Q_1+Q_2+Q_3})^{-1}\frac{\partial \mathbf{\mathbf{Q_1}}}{\partial \alpha_1}\right)\\ \frac{\partial f(\alpha_1,\alpha_2,\Lambda)}{\partial \alpha_2}={\rm tr}\left((\mathbf{Q_1+Q_2+Q_3})^{-1}\frac{\partial \mathbf{\mathbf{Q_2}}}{\partial \alpha_2}\right) \\ \frac{\partial f(\alpha_1,\alpha_2,\Lambda)}{\partial \Lambda_{ii}}={\rm tr}\left((\mathbf{Q_1+Q_2+Q_3})^{-1}\frac{\partial \mathbf{\mathbf{Q_3}}}{\partial \Lambda_{ii}}\right) \end{eqnarray} $$
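A numerical sanity check of the first identity (a numpy sketch with an arbitrary well-conditioned $\mathbf{Q_3}$; here $\partial\mathbf{Q_1}/\partial\alpha_1=I$):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
B = rng.standard_normal((n, n))
Q3 = B @ B.T + n * np.eye(n)            # a fixed positive-definite "parameter" matrix

def f(a1, a2):
    U = a1 * np.eye(n) + a2 * np.eye(n) + Q3
    return np.log(np.linalg.det(2 * U))  # log|2 Q1 + 2 Q2 + 2 Q3|

a1, a2, eps = 1.3, 0.7, 1e-6
U = a1 * np.eye(n) + a2 * np.eye(n) + Q3
trace_formula = np.trace(np.linalg.inv(U))  # tr(U^{-1} dQ1/da1) with dQ1/da1 = I
finite_diff = (f(a1 + eps, a2) - f(a1 - eps, a2)) / (2 * eps)
print(trace_formula, finite_diff)           # agree to ~1e-9
```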
{ "language": "en", "url": "https://math.stackexchange.com/questions/475143", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
$R/I$ when $R$ is the ring of real continuous functions If $R$ is the ring of all real continuous functions on $[0,1]$, I am trying to find $R/I$ where $$I=\{f\in{R}|f(.5)=0\}$$ Showing $I$ is an ideal is not a problem since we're defining addition and multiplication as $$(f+g)(x)=f(x)+g(x).$$ $$(fg)(x)=f(x)g(x)$$but I'm trying to identify $R/I$. I am new to ring theory and i'm having trouble with the math. What do elements of $R/I$ look like? I know that Ideals are analogues of normal subgroups in group theory and that normal subgroups are defined as $$N=\{a\in{G}|aH=Ha\}$$ where G is a group and H is a subgroup. This makes sense to me with Ideals but I can't think of this in terms of continuous functions.
Hint: What is the kernel of the ring homomorphism $R\to \mathbb R$, $f\mapsto f(0.5)$?
{ "language": "en", "url": "https://math.stackexchange.com/questions/475241", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Please Explain $\lg(T(N)) = 3 \lg N + \lg a$ is equivalent to $ T(N) = aN^3$ I'm reading Algorithms by Kevin Wayne and Robert Sedgewick. They state that: $\lg(T(N)) = 3 \lg N + \lg a $ (where $a$ is constant) is equivalent to $T(N) = aN^3$ I know that $\lg$ means a base $10$ logarithm and that $\lg(T(N))$ means the index of the power to which $10$ must be raised to produce $T(N)$ but I'd like some help understanding how to get from the first equation to the second.
Simply raise $10$ to the power of both sides of the equation: $\large{10^{\log {T(N)}}=10^{3\log N +\log a}=10^{3\log N}\cdot10^{\log a}=(10^{\log N})^3\cdot10^{\log a}}$ Since by definition $\log b = c \iff 10^c=b$, it follows that $10^{\log b}=b$, and thus $T(N)=N^3 \cdot a$. (The same argument works with logarithms to any fixed base; in Sedgewick and Wayne, $\lg$ actually denotes the base-$2$ logarithm, but the identity is base-independent.)
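A two-line numerical check in Python (the constant $a=7$ is an arbitrary illustration):

```python
import math

a, N = 7.0, 1000.0
print(math.log10(a * N**3), 3 * math.log10(N) + math.log10(a))  # identical
```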
{ "language": "en", "url": "https://math.stackexchange.com/questions/475281", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Rational function with absolute value $1$ on unit circle What is the general form of a rational function which has absolute value $1$ on the circle $|z|=1$? In particular, how are the zeros and poles related to each other? So, write $R(z)=\dfrac{P(z)}{Q(z)}$, where $P,Q$ are polynomials in $z$. The condition specifies that $|R(z)|=1$ for all $z$ such that $|z|=1$. In other words, $|P(z)|=|Q(z)|$ for all $z$ such that $|z|=1$. What can we say about $P$ and $Q$?
To answer your question about why $M$ is constant: it's simply because $M$ is a quotient of two polynomials. If the quotient is $1$ on the unit circle, it means these two polynomials are equal at all the points of the circle, hence at infinitely many points. This implies that these two polynomials are the same. So $M$ is identically $1$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/475344", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 5, "answer_id": 2 }
Generating function for $\sum_{k\geq 1} H^{(k)}_n x^ k $ Is there a generating function for $$\tag{1}\sum_{k\geq 1} H^{(k)}_n x^ k $$ I know that $$\tag{2}\sum_{n\geq 1} H^{(k)}_n x^n= \frac{\operatorname{Li}_k(x)}{1-x} $$ But notice in (1) the fixed $n$.
Let $\psi(x)=\frac{\Gamma'}{\Gamma}(x)=\frac{d}{dx}\log\Gamma(x)$ be the digamma function. For $N$ a positive integer, we have $$ \psi(x+N)-\psi(x)=\sum_{j=0}^{N-1}\frac{1}{x+j} $$ (this follows from $x\Gamma(x)=\Gamma(x+1)$ and induction). Now \begin{eqnarray*} \sum_{k\geq 1}H_n^{(k)}x^k&=&\sum_{k\geq 1}\sum_{j=1}^n \frac{1}{j^k}x^k\\ &=&\sum_{j=1}^n \sum_{k\geq 1} \left(\frac{x}{j}\right)^k\\ &=&\sum_{j=1}^n \frac{x}{j-x}\\ &=&x\sum_{j=1}^n \frac{1}{j-x}\\ &=&x\sum_{j=0}^{n-1}\frac{1}{-x+1+j}\\ &=&x(\psi(-x+1+n)-\psi(-x+1)) \end{eqnarray*}
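A numerical verification of the final identity with mpmath (a sketch; the series is truncated, which is harmless for $|x|<1$):

```python
import mpmath as mp

n, x = 5, mp.mpf('0.3')
lhs = sum(sum(mp.mpf(1) / j**k for j in range(1, n + 1)) * x**k
          for k in range(1, 80))
rhs = x * (mp.digamma(-x + 1 + n) - mp.digamma(-x + 1))
print(lhs, rhs)  # agree to working precision
```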
{ "language": "en", "url": "https://math.stackexchange.com/questions/475491", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Schreier generators I am having some trouble understanding the proof of the following theorem. Can somebody provide a simple proof? Given $G=\langle A \rangle$ and $H \leq G$, and $R$ is a set of coset representatives for $H$ in $G$. Let $B=\{r_1ar^{-1}_2 \mid r_1,r_2 \in R, a \in A\}\cap H.$ Then $B$ generates $H$.
Your assertion is slightly incorrect. It should read as follows: Given $G=\langle A \rangle$ and $H\le G$, let $R$ be a set of representatives of the right cosets of $H$ in $G$, and let $B = \{r_1ar^{-1}_2 \mid r_1 \in R,\ a \in A\}$, where $r_2$ is the representative of the coset $Hr_1a$. Then $B$ generates $H$. You can find a proof in: M.I. Kargapolov, Ju.I. Merzljakov, Fundamentals of the Theory of Groups, Springer, Graduate Texts in Mathematics (62), Theorem 14.3.1.
{ "language": "en", "url": "https://math.stackexchange.com/questions/475590", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Prove that in a parabola the tangent at one end of a focal chord is parallel to the normal at the other end. Prove that in a parabola the tangent at one end of a focal chord is parallel to the normal at the other end. Now, I know how to prove this algebraically, and that's very easy, but I am not getting any visual picture of the above situation. It'd be great if someone could give a proof without (or with minimal) words for this one - these proofs are exciting! EDIT: A quick-n-dirty working of what I call an algebraic proof: WLOG, let the equation of the parabola be $y^2 = 4ax$. The coordinates of points on the focal chord and also on the parabola are: $ P(at^2,2at) $ and $Q(am^2, 2am)$. For these points to lie on a focal chord, $ tm = -1 $. The tangent at $P$ is given as $ y(2at) = 2a(x+at^2) $ so the slope is $1/t$. Similarly the slope of tangent at $Q$ is $1/m$. So the slope of the normal at $Q$ is $-m = 1/t = $ slope of tangent at $P$. Now, I know I am using some 'shortcuts' here, but then this was just a fast-paced look at what I did. The point is, I know how to prove the statement using the regular 'equations approach'. I want to know if there are any visual proofs.
Here's a geometric proof, based on the fact that a line (thought of as a light ray) going through the focus of a parabola reflects to a line parallel to the axis of the parabola. This is sometimes called the reflective property of the parabola. Call the focus $F$, and have the parabola arranged with its axis the $y$ axis. Pick the chord $BA$ through $F$, so that $A$ lies to the right of $F$ and $B$ to the left. We need to name some reference points: Pick a point $A_L$ to the left of $A$ on the tangent line $T_A$, and another point $A_R$ to its right. Similarly pick the points $B_L,B_R$ to the left and right of $B$ on the tangent line $T_B$. Also pick a point $A'$ above $A$ on the vertical through $A$ and another point $A''$ below $A$ on that vertical; similarly pick points $B'$ and $B''$ above and below $B$ on the vertical through $B$. Now the reflective property of the parabola means in this notation that the angles $A_LAF$ and $A'AA_R$ are equal. Call that common angle $\alpha$, and note that by the vertical angle theorem (opposite angles of intersecting lines are equal) we also have $\alpha$ equal to the angle $A_LAA''.$ Similarly we have the three equal angles, call each $\beta$, namely angles $B_RBF$ and $B_LBB'$ from the reflective property and the further equal angle in this triple $B_RBB''$ again from the vertical angle theorem. Now because the segment $BA$ may be extended to a line transverse to the two parallel verticals through $A$ and $B$, we have that angle $B''BB_R$ is equal to angle $BAA'$. A diagram shows that the first of these is $2\beta$, while the second is $\pi-2\alpha$. This brings us almost to the end of the argument, since equating them gives $2\beta=\pi-2\alpha$, that is, $\alpha+\beta=\pi/2.$ So if we let the two tangent lines $T_A,\ T_B$ meet at the point $P$, we see (again referring to a sketch) that triangle $BPA$ is a right triangle with its right angle at $P$. But this means the two tangent lines $T_A,\ T_B$ are perpendicular, so we may conclude finally that the normal line $N_B$ through $B$ is parallel to the tangent line $T_A$ at $A$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/475666", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 2 }
Evaluating Laplace Transform I have a Laplace transform function of the following form and I'm trying to evaluate it. From my research I think I need to take the Inverse Laplace Transform and then integrate, but I'm having trouble working that out, or figuring out if that's even what I have to do. $$\frac{s^2}{s^2+10s+25} $$ and $$\frac{s}{s+5}$$ Any pointers on where to go from here are much appreciated. Also, this is not homework, merely my own side project that I'm attempting to work on, but this has got me stymied.
The definition of a Laplace transform leads to the following expression for the inverse Laplace transform of a function $F(s)$: $$f(t) = \frac{1}{i 2 \pi} \int_{c-i \infty}^{c+i \infty} ds \, F(s) \, e^{s t}$$ where $c$ is a real number larger than the real parts of all poles of $F$ in the complex $s$ plane. That is, the line $\Re{s}=c$ is to the right of all poles of $F$. When $F$ consists of poles only in the complex plane, the above integral may be shown to be equal to the sum of the residues of $F(s) e^{s t}$ evaluated at the poles of $F(s)$. For $F(s)=s/(s+5)$, there is a single, simple pole at $s=-5$. The ILT is then $$f(t)=\lim_{s\to-5} (s+5) F(s) e^{s t} = -5 e^{-5 t}$$ For the case $F(s) = s^2/(s+5)^2$, there is a double pole at $s=-5$, so that the residue calculation involves a derivative: $$f(t) = \lim_{s\to-5} \frac{d}{ds} (s+5)^2 F(s) e^{s t} = (25 t-10) e^{-5 t} $$
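If one wants to double-check the residue computations symbolically, here is a small sketch using sympy's `residue`; this verifies only the algebra of the residues, not the contour argument itself.

```python
import sympy as sp

s, t = sp.symbols('s t', positive=True)

# simple pole of s/(s+5) at s = -5
print(sp.residue(s/(s + 5) * sp.exp(s*t), s, -5))
# -> -5*exp(-5*t)

# double pole of s^2/(s+5)^2 at s = -5
print(sp.simplify(sp.residue(s**2/(s + 5)**2 * sp.exp(s*t), s, -5)))
# -> (25*t - 10)*exp(-5*t)
```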
{ "language": "en", "url": "https://math.stackexchange.com/questions/475729", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Can $P \implies Q$ be represented by $P \vee \lnot Q $? Source: p 46, How to Prove It by Daniel Velleman Though the author writes $Q$ (the original apodosis) as 'You'll fail the course', for brevity I shorten $Q$ to 'You fail'. Let $P$ be the statement “You will neglect your homework” and $Q$ be “You fail.” Then “You won’t neglect your homework, or you fail.” $ \quad = \color{green}{\quad \lnot P \vee Q}$. But what message is the teacher trying to convey with this statement? Clearly the intended message is “If you neglect your homework, then you fail,” or in other words $P \rightarrow Q.$ Thus, in this example, the statements $\lnot P \vee Q$ and $ P \rightarrow Q $ seem to mean the same thing. Why cannot the bolded be symbolised as $\color{darkred}{P \vee \lnot Q}$ = "You neglect your homework, or you fail not." ? I am trying to intuit Material Implication: intuitively, how does $P \Longrightarrow Q \equiv \color{green}{\lnot P \vee Q}$ ?
This follows simply from the Law of Excluded Middle: $P \lor \neg P$ for all propositions $P$. Let's assume $P \rightarrow Q$ and deduce $Q \lor \neg P$. By the Law of Excluded Middle, we have either $P$ or we have $\neg P$. We do a case analysis over which one is true: If we got a $P$, by our assumption, we can deduce $Q$. If we got a $\neg P$, well... we stick with $\neg P$. Having covered both cases, we know one or the other is true, so we have $Q \lor \neg P$. Let's go the other way now, to show the statements are logically equivalent. We'll assume $Q \lor \neg P$ and deduce $P \rightarrow Q$. We assume we have a $P$, and our goal is to produce a $Q$, and this will prove $P \rightarrow Q$. Do case analysis on our assumption $Q \lor \neg P$. If we have $Q$, we've have fulfilled our obligation. If we have a $\neg P$, on the other hand, we notice that we also have a $P$ just hanging around in our current list of assumptions. This gives us a contradiction, and by Principle of Explosion, we may conclude $Q$. This again fulfills our obligation.
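Since there are only four truth assignments, one can also just check the equivalence by brute force; the short script below (an illustrative sketch) confirms that $P\to Q$, encoded as $\lnot P\vee Q$, differs from the proposed $P\vee\lnot Q$ on two of the four rows.

```python
from itertools import product

for P, Q in product([False, True], repeat=2):
    implication = (not P) or Q    # the standard encoding of P -> Q
    proposed    = P or (not Q)    # the formula from the question
    print(f"P={P!s:5} Q={Q!s:5}  not P or Q: {implication!s:5}  P or not Q: {proposed}")
```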
{ "language": "en", "url": "https://math.stackexchange.com/questions/475776", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 6, "answer_id": 4 }
Is the number 8 special in turning a sphere inside out? So after watching the famous video on youtube How to turn a sphere inside out I noticed that the sphere is deformed into 8 bulges in the process. Is there something special about the number 8 here? Could this be done with any number of bulges, including 2? Video: How to Turn a Sphere Inside Out
No, 8 isn't special beyond it being the choice they made for that specific video. The software the group wrote to make that video allowed you to choose that parameter arbitrarily. I bet if you spent some time digging you could find that software somewhere on the internet, and create your own eversion videos with a different choice of the number of corrugations. You can find a (modified) version of the source code here: http://profs.etsmtl.ca/mmcguffin/eversion/ as well as commentary from Silvio Levy on the choice of number of strips.
{ "language": "en", "url": "https://math.stackexchange.com/questions/475837", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10", "answer_count": 1, "answer_id": 0 }
Convergence of $\{nz^n\}_1^{\infty}.$ Discuss completely the convergence and uniform convergence of the sequence $\{nz^n\}_1^{\infty}.$ If $|z|\geq 1$, then $|nz^n|=n|z|^n\geq n$ diverges, so the sequence $nz^n$ also diverges. If $|z|<1$, it should converge to $0$. So for any $\varepsilon$, we must find $N$ such that $|nz^n|=n|z|^n<\varepsilon$, or in other words $|z|^n<\dfrac{\varepsilon}{n}$ for all $n\geq N$. It should be true since the left hand side converges rapidly to $0$, but how to prove it rigorously? Then finally, the sequence doesn't converge uniformly in the open disk $|z|<1$, because if it did, for any $\varepsilon$ we must have $N$ such that $|z|^n<\dfrac{\varepsilon}{n}$ for all $|z|<1$ and all $n\geq N$. But we can choose $|z|$ large enough (close enough to $1$) to break this inequality. So my question is: how to prove that for any $a\in(-1,1)$ and any $\epsilon>0$, there exists $N$ such that $a^n<\dfrac{\varepsilon}{n}$ for all $n\geq N$.
Hint: From real analysis/calculus you may recall the result $$ \lim_{n\to\infty}\frac{n}{a^n}=0, $$ whenever the constant $a>1$. The catch-phrase "an exponential function grows faster than a power function" is sometimes associated with this result. Fix a constant $a>1$ and consider the numbers $z$ such that $|z|<1/a$. Remark: You should also prove that the convergence of your sequence is uniform in a closed disk $|z|\le r$, where $r$ is a constant from the interval ... Reminder: Assume $a>1$, so $a=1+b$ with $b>0$. Then from the binomial theorem you get that $$ a^n=(1+b)^n=\sum_{k=0}^n{n\choose k}b^k>{n\choose 2}b^2. $$ Thus $$ 0<\frac{n}{a^n}<\frac{n}{b^2 {n\choose 2}}=\frac{2}{b^2(n-1)}\to0,\ \text{as } n\to\infty. $$
{ "language": "en", "url": "https://math.stackexchange.com/questions/475907", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 2, "answer_id": 0 }
At what angle do these curves cut one another? I'm working on an exercise that asks this: At what angle do the curves $$y = 3.5x^2 + 2$$ and $$y = x^2 - 5x + 9.5$$ cut one another? I have set these equations equal to one another to find two values for x. Namely, $x = 1$ and $x = -3$ as intersections. How should I proceed? Most everyone here is always extremely helpful so I can't thank you all enough in advance for any assistance.
You know the curves cut themselves at $x=1$ and $x=-3$. Let's consider a general case you might find helpful. Consider two functions $f,g$ that intersect at a point $x=\xi$. Consider now the tangent line of $f$ at $x=\xi$. What angles does it make with the $x$ axis? It shouldn't be new news that $\tan\theta=f'(\xi)$. Similarily, the tangent line of $g$ at $x=\xi$ makes an angle $\eta$ with $\tan\eta =g'(\xi)$. So at what angle do these lines cross? Would you be convinced it must be $\rho=\theta-\eta$? You can make a drawing, and of course choose $\theta$ to be the largest angle. But we know that $$\tan(\theta-\eta)=\frac{\tan\theta-\tan\eta}{1+\tan\theta\tan\eta}$$ That is, $$\tan(\rho)=\frac{f'(\xi)-g'(\xi)}{1+f'(\xi)g'(\xi)}$$ You should be careful about the angles you're dealing with so as not to get any "off-set" results!
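Applying this at the two intersection points found in the question ($f(x)=3.5x^2+2$, $g(x)=x^2-5x+9.5$, so $f'(x)=7x$ and $g'(x)=2x-5$), a short numeric sketch gives the two crossing angles:

```python
import math

fprime = lambda x: 7*x        # derivative of 3.5x^2 + 2
gprime = lambda x: 2*x - 5    # derivative of x^2 - 5x + 9.5

for xi in (1, -3):
    m1, m2 = fprime(xi), gprime(xi)
    rho = math.atan(abs((m1 - m2) / (1 + m1*m2)))
    print(f"x = {xi:2d}: crossing angle of about {math.degrees(rho):.2f} degrees")
# x =  1: about 26.57 degrees
# x = -3: about 2.47 degrees
```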
{ "language": "en", "url": "https://math.stackexchange.com/questions/475951", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 4, "answer_id": 3 }
Taking the limit of $n(e^{-1/n}-1)$ as $n$ approaches infinity The form is infinity times zero and that is indeterminate which means I need to use L'Hospital's rule, but I have tried to do that but every time I would find another indeterminate form. How can I use sneaky algebra or sneaky replacements to find the answer?
L'Hospital's Rule is not my favourite approach, since a computation replaces insight about the behaviour of the function. But one cannot deny its usefulness. We do two L'Hospital's Rule calculations. Calculation 1: We want to find $$\lim_{n\to\infty}\frac{e^{-1/n}-1}{1/n}.\tag{1}$$ Let $x=\frac{1}{n}$. As $n\to\infty$, $1/n\to 0^+$. So if $$\lim_{x\to 0^+} \frac{e^{-x}-1}{x}\tag{2}$$ exists, then our limit does, and is the same. Now use L'Hospital's Rule, taking the derivative of the top and bottom of (2), Since $$\lim_{x\to 0^+} \frac{-e^{-x}}{1}=-1,$$ our limit exists and is $-1$. Calculation 2: This time, we deliberately do things in a suboptimal way, by finding $$\lim_{y\to\infty}\frac{e^{-1/y}-1}{1/y}.$$ Take the derivative of top and bottom. So we want to find $$\lim_{y\to\infty}\frac{(-1/y^2)(-e^{-1/y})}{(-1/y^2)}.$$ The above simplifies to $$\lim_{y\to\infty}(-e^{-1/y}),$$ which is $-1$.
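A quick numerical check (no L'Hospital's Rule needed) makes the value of the limit plausible:

```python
import math

for n in (10, 100, 1000, 10000):
    print(n, n * (math.exp(-1/n) - 1))
# the values tend to -1 as n grows
```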
{ "language": "en", "url": "https://math.stackexchange.com/questions/476025", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 6, "answer_id": 4 }
Possible Research Topics for High School I am a high school student with some experience with Math Olympiads, and I will be taking a Scientific Research class next year. I would like to ask for interesting mathematics topics that I could consider - I have tried going online for possible research topics but I couldn't determine which were at my level. I would also like to ask for resources that I could use for carrying out my research. I do not have access to programs such as PRIMES as I live outside the US. Thank you.
I'm a fan of mathematical games for projects like these. They are fun, don't (necessarily) require advanced math -- I've taught basic theory to groups of 10 year olds -- and there are lots of open problems. Check out Winning Ways, by Berlekamp, Conway, and Guy, and M. Albert, R. J. Nowakowski, D. Wolfe, Lessons in Play. Here's a good website with a link to a list of open problems: http://www.mscs.dal.ca/~rjn/Site/Game_Theory.html For motivation check out an award winning work by a (then) high school senior. http://www.emis.ams.org/journals/INTEGERS/papers/dg3/dg3.Abstract.html
{ "language": "en", "url": "https://math.stackexchange.com/questions/476154", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 0 }
Prove $X^2+Y^2-1$ is irreducible using geometrical tools. I'm trying to understand what is meant in this paragraph of "Conics and Cubics: A Concrete Introduction to Algebraic Curves" (by Robert Bix): He wants to prove that the polynomial $X^2+Y^2-1$ is irreducible using geometrical tools. I have the following doubts: * *What does he mean by "Since every line has at least three points on it"? *Why does the fact that $X^2+Y^2-1=F(X,Y)\cdot G(X,Y)$, where $F, G$ are linear, imply that the circle $X^2+Y^2-1=0$ is made up of two lines? I would be really grateful if anyone could help me.
* *A line in $k^2$ has the form $aX+bY+c=0$ where $a$ and $b$ cannot both be zero. We may assume without loss of generality that $a\ne 0$, so that we have $X=-a^{-1}bY-a^{-1}c$. It follows that for any $\alpha\in k$, the point $$(-a^{-1}b\alpha-a^{-1}c,\alpha)$$ lies on the line. It follows that the number of points on any line is in bijective correspondence with the field $k$. So in order for a line to have at least three points, we should assume that $k$ is not the field of $2$ elements, which is not a very strict assumption. *If $X^2+Y^2-1=F(X,Y)\cdot G(X,Y)=0$, then notice by the zero product property that any point on the circle must satisfy at least one of the linear polynomials $F(X,Y)$ and $G(X,Y)$. It is also true that any point satisfying the polynomials $F$ and $G$ will lie on the circle. It follows that the circle must be the union of two lines (a contradiction).
{ "language": "en", "url": "https://math.stackexchange.com/questions/476218", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "13", "answer_count": 3, "answer_id": 0 }
If $a>0$, $b>0$ and $n\in \mathbb{N}$, show that $a<b$ if and only if $a^n<b^n$. If $a>0$, $b>0$ and $n\in \mathbb{N}$, show that $a<b$ if and only if $a^n<b^n$. Hint: Use mathematical induction. Having trouble with the proof that if $a<b$ then $a^n<b^n$. So far I have: Assume $a<b$; then $a^k<b^k$ for $k=1$. Assume $\exists m \in \mathbb{N}$ such that $a^m<b^m$. Then let $k=m+1$, so the goal is $a^{m+1}<b^{m+1}$, i.e. $a*a^m<b*b^m$. I'm not positive I can make the second assumption, and if I can, I don't know how to prove the last statement, which would allow the extension to all natural numbers.
Hint: Do it in two steps, $a^{m+1}=a^m a\lt a^m b\lt b^mb=b^{m+1}$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/476287", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
How to prove this geometry result: $\Delta PCA \sim\Delta PBD$. Let circles $O_{1}$ and $O_{2}$ have radii $r_{1}, r_{2}$ respectively, and let them intersect at $A$ and $B$. From a point $P$, draw the tangent line to $O_{1}$ touching it at $C$ and the tangent line to $O_{2}$ touching it at $D$, such that $\dfrac{PC}{PD}=\dfrac{r_{1}}{r_{2}}$. Show that $\Delta PCA \sim\Delta PBD$. A student asked me this problem, and I found the same problem from the 2012 China Girls Math Olympiad, but I can't prove it, and I guess it has some nice solutions. Thank you everyone. See the same problem here: http://www.artofproblemsolving.com/Forum/viewtopic.php?p=2769659&sid=f359cf6f214ae30fda2709d047be0587#p2769659
Here's a partial solution. I'll revise notation a little bit and use coordinates to help set the stage. My circles have centers $H(-h,0)$ and $K(k,0)$ and respective radii $r$ and $s$. (Without loss of generality, we assume $r > s$; the case $r=s$ is left to the reader.) The circles meet at points $A(0,a)$ and $B(0,-a)$. The origin is $O$. Drawing right triangles $\triangle HOA$ and $\triangle KOA$ (with hypotenuses of length $r$ and $s$), we can define angles $\theta$ and $\phi$ (at $H$ and $K$) such that $$r \cos\theta = h \qquad r\sin\theta = a = s\sin\phi \qquad s \cos\phi = k$$ First we observe that, because $|PC|^2 + r^2 = |PH|^2$ and $|PD|^2 + s^2 = |PK|^2$, the proportionality condition on $|PC|$ and $|PD|$ implies an identical condition on $|PH|$ and $|PK|$. $$\frac{|PC|}{|PD|} = \frac{r}{s} \qquad \implies \qquad \frac{|PH|}{|PK|} = \frac{r}{s} \qquad (1)$$ Consequently, $P$ lies on the circle defined by $$\frac{( x + h )^2 + y^2}{r^2} = \frac{(x-k)^2 + y^2}{s^2}$$ Let $Q(q,0)$ be the circle's center, and $p$ its radius; we can determine that $$q = \frac{h s^2 + k r^2}{r^2-s^2} = \frac{r s \; \left( r \cos\phi + s \cos\theta\right)}{r^2 - s^2} \qquad p = \frac{r s\;\left(h+k\right)}{r^2-s^2} = \frac{rs\;\left( r \cos\theta + s \cos\phi\right)}{r^2 - s^2}$$ In right triangle $\triangle QOA$, we define angle $\psi$ at $Q$ such that $$p \cos\psi = q \qquad p \sin\psi = a$$ Fact. $\psi = \phi - \theta$. Proof. Show that $\cos\psi = \cos\left(\phi-\theta\right)$, expressing the trig quantities in terms of $h$, $k$, $r$, $s$, $a$, and invoking the relation $r^2 - h^2 = a^2 = s^2 - k^2$. Now, since $\theta$, $\phi$, $\psi$ measure half of a central angle subtending chord $AB$ in respective circles $\bigcirc H$, $\bigcirc K$, $\bigcirc Q$, they also measure any inscribed angle in those circles subtending (and on the appropriate side of) the same chord. In particular, $$\angle AEB = \theta \qquad \angle AFB = \phi \qquad \angle APB = \psi$$ where $E$ and $F$ are the "other" points where $\overleftrightarrow{PA}$ and $\overleftrightarrow{PB}$ meet $\bigcirc H$ and $\bigcirc K$. Note that $\angle AFB = \phi$ is an exterior angle of $\triangle APF$, which has remote interior angle $\psi = \phi-\theta$; consequently, $\angle PAF = \theta$. This implies that $AF$ and $EB$ are parallel, so that $\triangle APF \sim \triangle EPB$. Thus, for instance, $$\frac{|PA|}{|PF|}=\frac{|PE|}{|PB|} \qquad (2)$$ By the Power of a Point "secant-tangent" theorem applied to $P$ relative to $\bigcirc H$ and $\bigcirc K$, we have $$|PA||PE| = |PC|^2 \qquad |PF||PB| = |PD|^2 \qquad (3)$$ Using $(3)$ to eliminate $|PE|$ and $|PF|$ from $(2)$ gives $$\frac{|PA|}{|PD|^2/|PB|} = \frac{|PC|^2/|PA|}{|PB|}$$ whence $$\frac{|PA|}{|PD|} = \frac{|PC|}{|PB|}$$ which completes two-thirds of the desired similarity proof. The final third requires demonstrating either that $|AC|/|BD|$ is equal to the above, or that $\angle APC \cong \angle BPD$. Arduous coordinate calculations seem to bear this out, but the process isn't at all pretty, so I'll leave things here (for now).
{ "language": "en", "url": "https://math.stackexchange.com/questions/476363", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
A variant of the Schwartz–Zippel lemma Let $f \in \mathbb{F}[x_1,\ldots,x_n]$ be a nonzero polynomial. Let $d_1$ be the maximum exponent of $x_1$ in $f$ and let $f_1$ be the coefficient of $x_1^{d_1}$ in $f.$ Let $d_2$ be the maximal exponent of $x_2$ in $f_1$ and so on for $d_3,\ldots,d_n.$ I would like to show that if $S_1, \ldots, S_n \subseteq \mathbb{F}$ are arbitrary subsets and $r_i \in S_i$ are chosen uniformly at random then $$ Pr[f(r_1,\ldots,r_n) = 0] \leq \frac{d_1}{|S_1|} + \cdots + \frac{d_n}{|S_n|}.$$ The claim is obviously true for $n = 1$ but for multivariate polynomials I don't see how to prove the claim since I don't see how to handle the fact that we disregard from $f$ all the terms containing $x_1^{k}$ for $k < d_1.$
This "variant of the Schwartz-Zippel Lemma" is in fact Lemma 1 from Jack Schwartz's original paper. Note that although it looks nice to phrase the statement and/or the proof in probabilistic language, it is certainly not necessary to do so: Schwartz phrases it as a pure counting argument, which is (even) shorter than Jernej's (nice) solution above. I mention this because the first time I saw this kind of proof of the Schwartz-Zippel Lemma, I think I took the probabilistic stuff a bit too seriously: it really is a way of phrasing the argument rather than a proof technique. I hope the textbook gives some indication of the provenance of this "exercise"!
{ "language": "en", "url": "https://math.stackexchange.com/questions/476431", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Partial fractions for inverse laplace transform I have the following function for which I need to find the inverse laplace transform: $$\frac1{s(s^2+1)^2}$$ Am I correct in saying the partial fraction is: $$\frac1{s(s^2+1)^2}=\frac{A}{s}+\frac{Bs+C}{s^2+1}+\frac{Ds+E}{(s^2+1)^2}$$
Another way to evaluate the ILT when there are just poles (i.e., no branch points like roots and logarithms) is to apply the residue theorem. In this case, the ILT is simply the sum of the residues at the poles of the LT. That is, $$\begin{align}f(t) &= \operatorname*{Res}_{s=0} \frac{e^{s t}}{s(1+s^2)^2}+ \operatorname*{Res}_{s=i} \frac{e^{s t}}{s(1+s^2)^2}+\operatorname*{Res}_{s=-i} \frac{e^{s t}}{s(1+s^2)^2}\\ &= \frac{e^{(0) t}}{(0^2+1)^2}+\left[\frac{d}{ds}\frac{e^{s t}}{s(s+i)^2}\right]_{s=i} +\left[\frac{d}{ds}\frac{e^{s t}}{s(s-i)^2}\right]_{s=-i}\\ &= 1+\left[ \frac{e^{s t} ((s+i) s t-3 s-i)}{s^2 (s+i)^3}\right]_{s=i}+\left [\frac{e^{s t} ((s-i) s t-3 s+i)}{s^2 (s-i)^3} \right ]_{s=-i}\\ &=1+\left[-\frac{1}{8} i e^{i t} (-2 t-4 i)\right]+\left[\frac{1}{8} i e^{-i t} (-2 t+4 i)\right]\\&=1-\cos{t}-\frac12 t \sin{t}\end{align}$$
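For what it's worth, both routes can be cross-checked with sympy; `apart` produces the partial-fraction decomposition asked about in the question, and the direct inverse transform reproduces the result above (sympy's output may carry `Heaviside(t)` factors, which are harmless for $t>0$).

```python
import sympy as sp

s, t = sp.symbols('s t', positive=True)
F = 1 / (s * (s**2 + 1)**2)

print(sp.apart(F, s))
# -> 1/s - s/(s**2 + 1) - s/(s**2 + 1)**2

print(sp.inverse_laplace_transform(F, s, t))
# -> 1 - cos(t) - t*sin(t)/2   (up to Heaviside factors)
```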
{ "language": "en", "url": "https://math.stackexchange.com/questions/476505", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Does a logarithmic branch point imply logarithmic behavior? The complex logarithm $L(z)$ is given by $$L(z)=\ln(r)+i\theta$$ where $z=re^{i\theta}$ and $\ln(x)$ is the real natural logarithm. It is well known that $L(z)$ then sends each $z$ to infinitely many values, any two of which differ by an integer multiple of $2\pi i$. If we imagine going along the unit circle in the $z$-plane starting from $1$ and working anti-clockwise, the image of it under $L(z)$ will be a segment of the vertical line $i\theta$ with $\theta$ a real number. However we clearly see how $1$ has mapped to $2$ different values, and how we could continue on mapping it to infinitely many values by winding around $0$ indefinitely. We then call $0$ a logarithmic branch point of the mapping. My question is, do there exist mappings that have logarithmic branch points but themselves have nothing to do with logarithms, i.e. the logarithm does not appear explicitly somewhere in its definition, or it does not behave like a logarithm? Does a logarithmic branch point indicate logarithmic behavior?
I think I may have an answer; $f(z)=z^p$ with $p$ an irrational number. If $p$ is rational, then the order of the branch point $f(z)$ at $z=0$ is just the denominator in lowest terms. So if we take the limit as $p$ goes to an irrational number, we should get an infinite order branch point.
{ "language": "en", "url": "https://math.stackexchange.com/questions/476568", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 1, "answer_id": 0 }
How many times does the parabola $y=x^2$ intersect the origin? The answer to this question is 1, right? But I'm about to study algebraic curves from this book and I was surprised by this theorem: if I'm right, an immediate corollary of it is that $y=x^2$ intersects the origin two times. Then he continues by giving an example: I don't understand this; can anyone help me? Thanks a lot
You have to be a bit careful there. If we want to find the intersection of $y=x^2$ and the line $y=0$ within this framework, we would (or, at least, could) say $$ y = p(x) = x^2\\ g(x,y) = y-x^2 $$ The theorem then tells us that as long as $\mathbf{y-p(x)}$ is not a factor of $\mathbf{g(x,y)}$, there are two intersections at the origin. Since $g(x,y)=y-p(x)$ (or, if you put things together a bit differently, $-(y-p(x))$) the theorem doesn't apply.
{ "language": "en", "url": "https://math.stackexchange.com/questions/476641", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 2 }
Simple Algorithm Running Time Analysis A sorting algorithm takes $1$ second to sort $1,000$ items on your local machine. How long will it take to sort $10,000$ items * *if you believe that the algorithm takes time proportional to $n^2$, and *if you believe that the algorithm takes time roughly proportional to $n\log n$? If the algorithm takes time proportional to $n^2$, then $1,000^2=1,000,000$, and $10,000^2=100,000,000$. Dividing the latter by the former yields $100$. Therefore, the sorting algorithm would take $1$ minute and $40$ seconds to sort $10,000$ items. If the algorithm takes time proportional to $n\log n$, then $1,000\log 1,000=3,000$, and $10,000\log 10,000=40,000$. Dividing the latter by the former yields $13\frac13$. Therefore, the sorting algorithm would take $13.\bar3$ seconds to sort $10,000$ items. Does my thought process seem correct?
Yes, it seems correct. -Servaes
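To make the check concrete, here is the same scaling computation as a short sketch (the base of the logarithm cancels in the ratio, so any base gives the same factor):

```python
import math

t0, n0, n1 = 1.0, 1_000, 10_000
print("n^2 model:     ", t0 * (n1/n0)**2, "seconds")                              # 100.0
print("n log n model: ", t0 * (n1*math.log(n1)) / (n0*math.log(n0)), "seconds")   # ~13.33
```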
{ "language": "en", "url": "https://math.stackexchange.com/questions/476701", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 1, "answer_id": 0 }
When does L'Hopital's rule pick up asymptotics? I'm taking a graduate economics course this semester. One of the homework questions asks: Let $$u(c,\theta) = \frac{c^{1-\theta}}{1-\theta}.$$ Show that $\lim_{\theta\to 1} u(c) = \ln(c)$. Hint: Use L'Hopital's rule. Strictly speaking, one can't use L'Hopital's rule; at $\theta=1$, $u(c,\theta)$ is not an indeterminate form. However, if one naively uses it anyway, $$\lim_{\theta\to 1} \frac{c^{1-\theta}}{1-\theta} = \lim_{\theta\to 1}\frac{-\ln(c) c^{1-\theta}}{-1} = \ln(c).$$ More formally, using a change of variable $\vartheta = 1-\theta$ and expanding in a power series, \begin{align*} u(c,\theta) &= \frac{1}{\vartheta} \bigg( 1 + (\vartheta\ln(c)) + \frac{1}{2!}(\vartheta\ln(c))^2 +\frac{1}{3!}(\vartheta\ln(c))^3 + \cdots \bigg)\\ &= \frac{1}{\vartheta} + \ln(c) + \frac{1}{2!}\vartheta\ln(c)^2 + \frac{1}{3!}\vartheta^2\ln(c)^3 + \cdots \end{align*} has constant first order term $\ln(c)$. Is it a coincidence that L'Hopital's rule is picking up this asymptotic term? More generally, when does a naive application of L'Hopital's rule pick up the asymptotic behavior of a function near a singularity?
One can imagine a situation in which L'Hospital's Rule does not apply, but gives the right answer. This is not one of them. The limit is not $\ln c$. A glance at the expression shows that the limit from the left is "$\infty$" and the limit from the right is "$-\infty$." Remark: Suppose that for some constants $a,b,c,d$ $$\lim_{x\to a} \frac{f(x)-b}{g(x)-c}=d$$ and that limit can be calculated by L'Hospital's Rule. Then the Rule, wrongly applied, will report that $\lim_{x\to a}\frac{f(x)}{g(x)}=d$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/476760", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Proving the real numbers are complete In Rudin's book, the following proof is published: Let $A$ be the set of all positive rationals $p : p^2 < 2$. Let $B$ be the set of all positive rationals $p : p^2 > 2$. $A$ contains no largest number and $B$ contains no smallest. Let q be a rational. More explicitly, $$\forall p \in A, \exists q \in A : p < q$$ and $$\forall p \in B, \exists q \in B : q < p$$ This needs proof. Now he introduces a couple of equations, and it is not clear where they are derived from. They are as follows: To do this, we associate with each rational $p > 0$ the number $$q = p - {\frac{p^2-2}{p+2}} = {\frac{2p+2}{p+2}}$$ then $$q^2 - 2 = {\frac {2(p^2-2)}{(p+2)^2}}$$ He then goes on to prove how $p \in A \rightarrow q \in A$; similarly for the set $B$. My question is: How did he arrive at those equations - particularly, the first one (since the second is clear)?
If $p^2<2$, let's add $2p$ to both sides of the inequality: then $p^2+2p<2+2p$, so $p(p+2)<2(p+1)$, and thus $p<\frac{2(p+1)}{p+2}$. So let's set $q=\frac{2(p+1)}{p+2}$; then we have: if $p^2<2$ then $p<q$. What is left to show is that $q^2<2$, and that would show that $0<p<q$ and $q \in A$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/476812", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 6, "answer_id": 5 }
The intuition behind Trig substitutions in calculus I'm going through the MIT open calculus course, and in one of the lectures (19-28min marks) the professor uses the trig substitution $x = \tan \theta$ to find the integral of $\frac{dx}{x^2 \sqrt{1+x^2}}$. His answer: $-\csc(\arctan x) + c$, which he shows is equivalent to $-\frac{\sqrt{1+x^2}}{x} + c$ by drawing a right triangle on the blackboard. I get the math behind each step of it, but I can't wrap my head around why that equivalence works. We just used an arbitrary $x = \tan \theta$ substitution, where $\theta$ moves differently than x does, and the expression $-\frac{\sqrt{1+x^2}}{x} + c$ by itself doesn't know anything about trigonometry. But I type both into Excel for a bunch of different x values, and obviously they are equivalent. I guess I'm not really sure what my question is here, but I could just use some perspective. It just seems like substituting ANY function in for x then integrating it shouldn't work, especially when crossing into polar coordinates.
Your method is based on the identity $$ 1+\tan ^{2}\theta =\sec ^{2}\theta . $$ But there is another standard method to integrate by substitution an irrational function of the type $f(R(x),\sqrt{a^2+x^{2}})$, where $R(x)$ is a rational function of $x$. This alternative method, which is based on the identity $$1+\sinh ^{2}t=\cosh ^{2}t,$$ uses the hyperbolic substitution $$ x=\sinh t\Rightarrow dx=\cosh t\,dt,\qquad \text {or}\qquad x=a\sinh t,\quad \text {for a given } a.$$ As a matter of fact \begin{eqnarray*} \int \frac{dx}{x^{2}\sqrt{1+x^{2}}} &=&\int \frac{\cosh t}{\sinh ^{2}t\cosh t }\,dt,\qquad x=\sinh t \\ &=&\int \frac{dt}{\sinh ^{2}t}=-\frac{\cosh t}{\sinh t}+C \\ &=&-\frac{\sqrt{1+x^{2}}}{x}+C. \end{eqnarray*} The key issue is that both the trigonometric and the hyperbolic substitutions lead to simpler integrals, because the irrational integrand becomes a rational function of trigonometric or hyperbolic functions.
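As a quick independent check of the antiderivative (a sketch using sympy), differentiating the claimed result recovers the integrand:

```python
import sympy as sp

x = sp.symbols('x', positive=True)
candidate = -sp.sqrt(1 + x**2) / x
integrand = 1 / (x**2 * sp.sqrt(1 + x**2))

print(sp.simplify(sp.diff(candidate, x) - integrand))   # 0
```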
{ "language": "en", "url": "https://math.stackexchange.com/questions/476890", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 4, "answer_id": 3 }
Complex power of a complex number Can someone explain to me, step by step, how to calculate all of the infinitely many values of, say, $(1+i)^{3+4i}$? I know how to calculate the principal value, but not how to get all the other values... and I'm not sure how to insert the portion that gives me the other infinitely many values.
Let's suppose you've already defined $\log r$ for real $r > 0$, say, using Taylor series. Then given $z, \alpha \in \mathbb{C}$, you can define $$z^{\alpha} = \exp(\alpha \log z)$$ where $$\exp(w) = \displaystyle \sum_{j=0}^{\infty} \dfrac{w^j}{j!} \qquad \text{and} \qquad \log(w) = \log |w| + i \arg(w)$$ This is not well-defined - it relies on a choice of argument, which is well-defined only up to adding multiples of $2\pi$. It's these multiples of $2\pi$ which give you new values of $z^{\alpha}$. Explicitly, if $w$ is one value of $z^{\alpha}$, then so is $$w \cdot e^{2n \pi \alpha i}$$ for any $n \in \mathbb{Z}$. Fun facts ensue: * *if $\alpha$ is an integer then $z^{\alpha}$ is well-defined *if $\alpha$ is rational then $z^{\alpha}$ has finitely many values *if $\alpha$ is pure imaginary and $|z|=1$ then $z^{\alpha}$ is real (but not well-defined)
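For concreteness, here is a small sketch computing the principal value of $(1+i)^{3+4i}$ together with two neighbouring branches, by shifting the argument by $2\pi n$:

```python
import cmath

z, a = 1 + 1j, 3 + 4j
r, theta = abs(z), cmath.phase(z)

for n in (-1, 0, 1):                          # n = 0 gives the principal value
    log_z = cmath.log(r) + 1j*(theta + 2*cmath.pi*n)
    print(n, cmath.exp(a * log_z))
```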
{ "language": "en", "url": "https://math.stackexchange.com/questions/476968", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "14", "answer_count": 5, "answer_id": 1 }
A question regarding a step in power method justification (Writing a vector in terms of the eigenvectors of a matrix) Let $A$ be a $t \times t$ matrix. Can we present any $t \times 1$ vector as a linear combination of eigenvectors of $A$? I think this should not be the case unless $A$ happens to have $t$ linearly independent eigenvectors. (right?) But it seems to be used in the proof of the power method below. Edit: This is how the argument works: Suppose $x_0$ is an initial vector. Write it in terms of eigenvectors of $A$, i.e. $x_0 = \sum_i c_i v_i$ where $v_i$ is the $i^{th}$ eigenvector and $\lambda_i$ the corresponding eigenvalue. Let $x(n) = A^n x_0$. We want to find $\lim_{n \to \infty} x(n).$ Based on the first assumption we have $x(n) = A^n \sum_i c_iv_i= \sum_i c_i \lambda_i ^n v_i= \lambda_1^n \sum_i c_i(\lambda_i/\lambda_1)^n v_i$ where $(\lambda_1, v_1)$ is the leading eigenpair. Since $|\lambda_i/\lambda_1| <1$ for all $i\neq1$, all the other terms in the sum decay exponentially as $n$ becomes large and hence in the limit we get $x(n) \approx c_1 \lambda_1^n v_1$. In other words, $x= \lim_{n \to \infty} x(n)$ is simply proportional to the leading eigenvector of the matrix. Equivalently, $Ax = \lambda_1 x$.
As @oldrinb already said, eigenvectors corresponding to distinct eigenvalues are always linearly independent. Next, a matrix has a basis of eigenvectors if and only if it's diagonalisable, which is not always the case; consider $$\begin{pmatrix}1&1\\0&1\end{pmatrix}.$$ This matrix has only one eigenvector (up to a constant factor) $\begin{pmatrix}1\\0\end{pmatrix}$. Another issue arises when you want to have diagonalisability over $\Bbb R$, not $\Bbb C$. For example, the matrix $$\begin{pmatrix}0&1\\-1&0\end{pmatrix}$$ is diagonalisable over $\Bbb C$ and not diagonalisable over $\Bbb R$. So finally, any vector is a linear combination of eigenvectors $\iff$ there is a basis of eigenvectors $\iff$ the matrix is diagonalisable. Your illustration with the power method is not really relevant. It makes many assumptions on the matrix (in most cases we ask it to have real eigenvalues and the eigenvalue of the biggest absolute value to be of multiplicity one).
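To connect this back to the power method: when the matrix does have a basis of eigenvectors and a dominant eigenvalue of multiplicity one, the iteration converges as described. A minimal sketch (the matrix is an arbitrary example with eigenvalues $5$ and $2$):

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])                # eigenvalues 5 and 2
x = np.array([1.0, 0.0])                  # generic starting vector

for _ in range(50):
    x = A @ x
    x /= np.linalg.norm(x)                # renormalise to avoid overflow

print("Rayleigh quotient:", x @ A @ x)    # close to 5, the leading eigenvalue
print("exact eigenvalues:", np.linalg.eigvals(A))
```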
{ "language": "en", "url": "https://math.stackexchange.com/questions/477037", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Asymptotic correlation between sample mean and sample median Suppose $X_1,X_2,\cdots$ are i.i.d. $N(\mu,1)$. Show that the asymptotic correlation between sample mean and sample median (after suitably centering and renormalization) is $\sqrt{\frac{2}{\pi}}$.
Obtain this paper, written by T.S. Ferguson, a professor at UCLA (his page is here). It derives the joint asymptotic distribution for the sample mean and sample median. To be specific, let $\hat X_n$ be the sample mean and $\mu$ the population mean, $Y_n$ be the sample median and $\mathbb v$ the population median. Let $f()$ be the probability density of the random variables involved ($X$) Let $\sigma^2$ be the variance. Then Ferguson proves that $$\sqrt n\Big [\left (\begin{matrix} \hat X_n \\ Y_n \end{matrix}\right) - \left (\begin{matrix} \mu \\ \mathbb v \end{matrix}\right)\Big ] \rightarrow_{\mathbf L}\; N\Big [\left (\begin{matrix} 0 \\ 0 \end{matrix}\right) , \Sigma \Big]$$ $$ \Sigma = \left (\begin{matrix} \sigma^2 & E\left(|X-\mathbb v|\right)\left[2f(\mathbb v)\right]^{-1} \\ E\left(|X-\mathbb v|\right)\left[2f(\mathbb v)\right]^{-1} & \left[2f(\mathbb v)\right]^{-2} \end{matrix}\right)$$ Then the asymptotic correlation of this centered and normalized quantity is (abusing notation as usual) $$\rho_{A}(\hat X_n,\, Y_n) = \frac {\text {Cov} (\hat X_n,\, Y_n)}{\sqrt {\text{Var}(\hat X_n)\text{Var}(Y_n)}} = \frac {E\left(|X-\mathbb v|\right)\left[2f(\mathbb v)\right]^{-1}}{\sigma\left[2f(\mathbb v)\right]^{-1}} = \frac {E\left(|X-\mathbb v|\right)}{\sigma}$$ In your case, $\sigma = 1$ so we end up with $$\rho_{A}(\hat X_n,\, Y_n) = E\left(|X-\mathbb v|\right)$$ In your case, the population follows the normal with unitary variance, so the random variable $Z= X-\mathbb v$ is $N(0,1)$. Then its absolute value follows the (standard) half normal distribution, whose expected value is $$ E(|Z|) =\sigma\sqrt {\frac{2}{\pi}} = \sqrt {\frac{2}{\pi}}$$ since here $\sigma =1$. So $$\rho_{A}(\hat X_n,\, Y_n) = \sqrt {\frac{2}{\pi}}$$ Added note: It can be seen that the result does not depend on $\sigma =1$ since $\sigma$ cancels out from nominator and denominator.
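A Monte Carlo sketch corroborates the value (the sample sizes below are arbitrary illustrative choices; $n$ is taken odd so the sample median is a single order statistic):

```python
import numpy as np

rng = np.random.default_rng(0)
n, reps = 1001, 20000
samples = rng.standard_normal((reps, n))
means   = samples.mean(axis=1)
medians = np.median(samples, axis=1)

print(np.corrcoef(means, medians)[0, 1])    # approximately 0.798
print(np.sqrt(2/np.pi))                     # 0.7978...
```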
{ "language": "en", "url": "https://math.stackexchange.com/questions/477115", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 1, "answer_id": 0 }
Equations for double etale covers of the hyperelliptic curve $y^2 = x^5+1$ Let $X$ be the (smooth projective model) of the hyperelliptic curve $y^2=x^5+1$ over $\mathbf C$. Can we "easily" write down equations for all double unramified covers of $X$? Topologically, these covers correspond to (normal) subgroups of index two in the fundamental group of $X$. So there's only a finite number of them. The function field of $X$ is $K=\mathbf C(x)[y]/(y^2-x^5-1)$. A double etale cover of $X$ corresponds to (a certain) quadratic extension $L/K$. I guess every quadratic extension of $K$ is given by taking the root of some element $D$ in $K$, or by adjoining $(1+\sqrt{D})/2$ to $K$ for some $D$. I didn't get much further than this though.
Let me elaborate a bit my comment. This part works for any smooth projective curve $X$ in characteristic $\ne 2$. A double cover $Y\to X$ is given, as you said, by a quadratic extension $L$ of $K={\mathbf C}(X)$. It is an elementary result that such an extension is always given by adjoining a square root $z$ of some $f\in K$: $L=K[z]$ with $z^2=f$. Now the question is when is this cover étale ? Claim: The cover $Y \to X$ given by $z^2=f$ is étale if and only if the divisor $\mathrm{div}_X(f)$ of $f$ is of the form $$\mathrm{div}_X(f)=2D$$ for some divisor $D$ on $X$. It is a little too long to check all details here, but we can see that if $f$ has a zero of odd order at some point $x_0\in X$, then locally at $x_0$, we have $z^2=t^du$ where $t$ is a parameter at $x_0$, $u$ is a unit of $O_{X,x_0}$ and $d=2m+1$ is an odd positive integer. Therefore $t=(z/t^m )^2u^{-1}$ vanishes at ordre $>1$ at any point of $Y$ lying over $x_0$, so $Y\to X$ is ramified above $x_0$. Admitting the above claim, our task now is to find rational functions $f$ on $X$ such that $\mathrm{div}(f)$ has only zeros and poles of even orders (and of course, for $L$ to be really quadratic over $K$, $f$ must not be a square in $K$). Equivalently, we are looking for divisors $D$ on $X$ such that $2D\sim 0$ and $D\not\sim 0$. A such $D$ has degree $0$ and its class in the Jacobian of $X$ has order exactly $2$. Note that if $D'\sim D$, then the étale cover associated to $D'$ is the same then the one associated to $D$ because $\mathrm{div}(f')=2D'$ implies that $f=h^2f'$ for some $h\in K$ (the ground field needs to be algebraically closed for this), so $K[\sqrt{f}]=K[\sqrt{f'}]$. This says that $X$ has exactly $2^{2g}-1$ double étale covers, where $g$ is the genus of $X$. If you are patient enough to arrive to this point, we can start to find the desired $D$ on a hyperelliptic curve $X$ defined by $y^2=P(x)$, with $P(x)$ of odd degree $2g+1$ ($g$ is the genus of $X$). It is known that the differences of two Weierstrass points are $2$-torsion in the Jacobian. More concretely, let $a_1, \dots, a_{2g+1}$ be the zeros of $P(X)$. Then $$f_i:=(x-a_1)/(x-a_i), \quad 2\le i\le 2g+1$$ has divisor $2(w_1-w_i)$ where $w_i$ is the Weierstrass point $x=a_i, y=0$ in $X$. If I remember correctly, the $w_1-w_i$ form a basis of the $2$-torsion points over $\mathbb Z/2\mathbb Z$. This means finally that the étale double covers of $X$ are given by $$z^2=\prod_{2\le i\le 2g+1} f_i^{\epsilon_i}, $$ for some $(\epsilon_2, \dots, \epsilon_{2g+1})\in (\mathbb Z/2\mathbb Z)^{2g}\setminus \{ 0\}$. Now you know what to do with your specific curve.
{ "language": "en", "url": "https://math.stackexchange.com/questions/477176", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10", "answer_count": 2, "answer_id": 0 }
Divisibility of sequence Let the sequence $x_n$ be defined by $x_1=1,\,x_{n+1}=x_n+x_{[(n+1)/2]},$ where $[x]$ is the integer part of a real number $x$. This is A033485. How to prove or disprove that $4$ is not a divisor of any of its terms? The problem belongs to math folklore. As far as I know, it is due to M. Kontsevich.
For $n\in\Bbb Z^+$ let $u_n=x_n\bmod 4$, and let $\oplus$ denote addition modulo $4$. We have the recurrences $$\left\{\begin{align*} u_{2n}&=u_{2n-1}\oplus u_n\\ u_{2n+1}&=u_{2n}\oplus u_n\;, \end{align*}\right.$$ and the first few values are $u_1=1,u_2=2,u_3=3$, and $u_4=1$. The desired result is an immediate corollary of the following proposition. Proposition. For all $n\in\Bbb Z^+$, $u_{4n}=u_n$, $u_{4n+1},u_{4n+3}\in\{1,3\}$, and $u_{4n+2}=2$. The proof is by induction on $n$. For $n>1$ we have $$\begin{align*} u_{4n}&=u_{4n-1}\oplus\color{blue}{u_{2n}}\\ &=\color{blue}{u_{4n-1}}\oplus u_{2n-1}\oplus u_n\\ &=\color{blue}{u_{4n-2}}\oplus 2u_{2n-1}\oplus u_n\\ &=\color{blue}{u_{4n-3}}\oplus 3u_{2n-1}\oplus u_n\\ &=\color{blue}{u_{4n-4}}\oplus u_{2n-2}\oplus 3u_{2n-1}\oplus u_n\\ &=\color{blue}{u_{n-1}\oplus u_{2n-2}}\oplus 3u_{2n-1}\oplus u_n\\ &=\color{blue}{4u_{2n-1}}\oplus u_n\\ &=u_n\;, \end{align*}$$ where on each line I've highlighted in blue the term(s) to be manipulated to get the next line. Then we have $$\begin{align*} &u_{4n+1}=u_{4n}\oplus u_{2n}=u_n\oplus u_{2n}=u_{2n+1}\in\{1,3\}\;,\\ &u_{4n+2}=u_{4n+1}\oplus u_{2n+1}=u_{2n+1}\oplus u_{2n+1}=2\;,\text{ and}\\ &u_{4n+3}=u_{4n+2}\oplus u_{2n+1}=2\oplus u_{2n+1}\in\{1,3\}\;, \end{align*}$$ and the induction goes through.
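The proposition is easy to corroborate numerically by computing the sequence modulo $4$ directly (a quick sketch):

```python
# u[n] = x_n mod 4, using x_{n+1} = x_n + x_{(n+1)//2}
u = [0, 1]                                   # 1-indexed; u[1] = 1
for n in range(1, 100000):
    u.append((u[n] + u[(n + 1)//2]) % 4)

print(all(v != 0 for v in u[1:]))            # True: no term divisible by 4
print(u[1:17])                               # 1, 2, 3, 1, 3, 2, 1, ...
```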
{ "language": "en", "url": "https://math.stackexchange.com/questions/477257", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Expected overlap Suppose I have an interval of length $x$ and I want to drop $n$ sticks of unit length onto it (where $\sqrt x<n<x$). What is the expected overlap between sticks? ($x$ can be assumed to be large enough that edge effects are negligible.) I assume this is a standard problem and has a name but I don't know it. It's related to the birthday problem, a discrete version of this problem, and also to Rényi's parking problem which disallows overlap rather than measuring it. I suppose there are at least two ways to measure overlap: the total length of the interval covered by more than one stick, or the same weighted by the number of sticks, less 1, at that point. The second is slightly more natural in my application, but I'd be happy with either. Edit: I found Comparing Continuous and Discrete Birthday Coincidences: “Same-Day” versus “Within 24 Hours” (2010) which discusses finding the probability of any overlap in my model, rather than the expected length of the overlap.
Let $t \in [0,x]$. The probability that a given stick hits $t$ (assuming left end-point of stick chosen uniformly in $[0,x-1]$) is $p(t) = \begin{cases} \frac{t}{x-1} & t < 1 \\ \frac{1}{x-1} & 1 < t < x-1 \\ \frac{x-t}{x-1} & x-1 < t < x\end{cases}$. The probability that $\geq 2$ sticks hit $t$ is (via the complement) $1-(1-p(t))^n- np(t)(1-p(t))^{n-1} $. If $H(t,\omega)$ is the indicator for the event that $\geq 2$ sticks hit $t$, then the total length of such points is $\int_{t=0}^x H(t,\omega) dt$. Take the expectation, and we are left with $\int_0^x \left(1-(1-p(t))^n - np(t)(1-p(t))^{n-1}\right) dt$. Feed that into some symbolic integrator? For the weighted version, the same idea yields $\int_0^x \sum_{k=2}^n (k-1) \binom{n}{k} p(t)^k (1-p(t))^{n-k} dt$. That's all I feel like doing for the moment. Interesting form... A potential simplification is if you consider a cyclic interval.
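Here is a rough Monte Carlo sketch checking the unweighted formula against simulation, for illustrative values of $x$ and $n$ (left endpoints uniform on $[0,x-1]$, as above):

```python
import numpy as np

rng = np.random.default_rng(0)
x_len, n, reps, m = 20.0, 8, 2000, 4000
dt = x_len / m
t = (np.arange(m) + 0.5) * dt                 # midpoint grid on [0, x_len]

# integral formula for the expected doubly-covered length
p = np.where(t < 1, t, np.where(t > x_len - 1, x_len - t, 1.0)) / (x_len - 1)
formula = np.sum(1 - (1 - p)**n - n*p*(1 - p)**(n - 1)) * dt

# simulation: drop n unit sticks, measure the length covered >= 2 times
total = 0.0
for _ in range(reps):
    left = rng.uniform(0.0, x_len - 1.0, size=n)
    covered = ((t >= left[:, None]) & (t <= left[:, None] + 1.0)).sum(axis=0)
    total += np.sum(covered >= 2) * dt

print(formula, total / reps)   # the two estimates agree closely
```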
{ "language": "en", "url": "https://math.stackexchange.com/questions/477320", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 1, "answer_id": 0 }
I need to calculate $x^{50}$ $x=\begin{pmatrix}1&0&0\\1&0&1\\0&1&0\end{pmatrix}$, I need to calculate $x^{50}$ Could anyone tell me how to proceed? Thank you.
The Jordan Decomposition yields $$ \left[ \begin{array}{r} 1 & 0 & 0 \\ 1 & 0 & 1 \\ 0 & 1 & 0 \end{array} \right] = \left[ \begin{array}{r} 0 & 0 & 2 \\ -1 & 1 & 1 \\ 1 & 1 & 0 \end{array} \right] \left[ \begin{array}{r} -1 & 0 & 0 \\ 0 & 1 & 1 \\ 0 & 0 & 1 \end{array} \right] \left[ \begin{array}{r} 0 & 0 & 2 \\ -1 & 1 & 1 \\ 1 & 1 & 0 \end{array} \right]^{-1} $$ Block matrices are easier to raise to a power: $$ \begin{align} \left[ \begin{array}{r} 1 & 0 & 0 \\ 1 & 0 & 1 \\ 0 & 1 & 0 \end{array} \right]^{50} &= \left[ \begin{array}{r} 0 & 0 & 2 \\ -1 & 1 & 1 \\ 1 & 1 & 0 \end{array} \right] \left[ \begin{array}{r} -1 & 0 & 0 \\ 0 & 1 & 1 \\ 0 & 0 & 1 \end{array} \right]^{50} \left[ \begin{array}{r} 0 & 0 & 2 \\ -1 & 1 & 1 \\ 1 & 1 & 0 \end{array} \right]^{-1}\\[6pt] &= \left[ \begin{array}{r} 0 & 0 & 2 \\ -1 & 1 & 1 \\ 1 & 1 & 0 \end{array} \right] \left[ \begin{array}{r} 1 & 0 & 0 \\ 0 & 1 & 50 \\ 0 & 0 & 1 \end{array} \right] \left[ \begin{array}{r} 0 & 0 & 2 \\ -1 & 1 & 1 \\ 1 & 1 & 0 \end{array} \right]^{-1}\\[6pt] &= \left[ \begin{array}{r} 1 & 0 & 0 \\ 25 & 1 & 0 \\ 25 & 0 & 1 \end{array} \right] \end{align} $$
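The closed form is easy to confirm with a direct matrix power (the entries stay small, so integer arithmetic is exact here):

```python
import numpy as np

X = np.array([[1, 0, 0],
              [1, 0, 1],
              [0, 1, 0]])

print(np.linalg.matrix_power(X, 50))
# [[ 1  0  0]
#  [25  1  0]
#  [25  0  1]]
```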
{ "language": "en", "url": "https://math.stackexchange.com/questions/477382", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "17", "answer_count": 8, "answer_id": 5 }
Proof of properties of injective and surjective functions. I'd like to see if these proofs are correct/have them critiqued. Let $g: A \to B$ and $f: B \to C$ be functions. Then: (a) If $g$ and $f$ are one-to-one, then $f \circ g$ is one-to-one. (b) If $g$ and $f$ are onto, then $f \circ g$ is onto. (c) If $f \circ g$ is one-to-one, then $f$ is one-to-one? (d) If $f \circ g$ is one-to-one, then $g$ is one-to-one? (e) If $f \circ g$ is onto, then $f$ is onto? (f) If $f \circ g$ is onto, then $g$ is onto? (a) Let $a,b \in A$. If $(f \circ g)(a)=(f \circ g)(b)$, then $f(g(a))=f(g(b))$. Since $f$ is one-to-one, we know that $g(a)=g(b)$. And, since $g$ is one-to-one it must be that $a=b$. Hence $f \circ g$ is one-to-one. (b) Since $f$ is surjective we know that for all $c \in C$ there is a $b\in B$ such that $f(b)=c$. Since $g$ is surjective, there is an $a \in A$ such that $g(a)=b$. Hence, for all $c \in C$ there is an $a \in A$ such that $(f \circ g)(a)=f(g(a))=f(b)=c$. (c) This is false. Let $A=\{1\}$, $B=\{1,2\}$, and $C=\{1\}$. Define $g(1)=1$ and $f(1)=f(2)=1$. Then $f \circ g$ is injective, but $f$ is not injective. (d) This is true. Let $a,b \in A$ where $a \neq b$. Since $f\circ g$ is one-to-one, $(f\circ g)(a)\neq(f\circ g)(b)$, that is, $f(g(a))\neq f(g(b))$. Hence $g(a) \neq g(b)$. (e) Since $f \circ g$ is surjective, then for every $ c \in C$ there is an $a \in A$ such that $f(g(a))=c$. Let $b = g(a)$. Then $b \in B$ and $f(b)=f(g(a))=c$. Thus $f(b)=c$. Hence $f$ is surjective. (f) Using the same set up as part (c), we have that $g$ is not surjective since there is nothing in $A$ such that $g$ will go to 1.
Nicely done! In part (f), I think you mean that $g$ is not surjective since there is nothing in $A$ that $g$ will take to $2,$ but the idea is spot on. A minor critique for your proof of (b): I would instead suggest that you take an arbitrary $c\in C$, use surjectivity of $f$ to conclude that there is some $b\in B$ such that $f(b)=c$, then use surjectivity of $g$ to conclude that there is some $a\in A$ such that $g(a)=b,$ whence $(f\circ g)(a)=c$ as you showed. Since $c$ was an arbitrary element of $C,$ then for all $c\in C$ there exists $a\in A$ such that $(f\circ g)(a)=c$. This is basically the same as the approach you took, but the connection is a bit clearer and more justified, to my mind. You mentioned that you weren't sure about part (e). You can clean it up a bit by again taking an arbitrary $c\in C,$ concluding from surjectivity of $f\circ g$ that there is some $a\in A$ such that $(f\circ g)(a)=c,$ and noting that $g(a)\in B$ and $f\bigl(g(a)\bigr)=c.$ Since $c$ was an arbitrary element of $C,$ then for all $c\in C$ there exists $b\in B$ such that $f(b)=c$. As an alternate approach, you could let $h$ be the restriction of $f$ to the range of $g$--that is, $h:g(A)\to C$ is defined by $h(b)=f(b)$ for all $b\in g(A)$. Note/prove that $h$ is surjective if and only if $f\circ g$ is. Since $h$ is a restriction of $f$ and is surjective, then $f$ is surjective. I think your approach is better, personally, but you mentioned you weren't sure about it.
{ "language": "en", "url": "https://math.stackexchange.com/questions/477453", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "24", "answer_count": 2, "answer_id": 0 }
Region in complex plane with $|1-z|\leq M(1-|z|)$ Let $M>0$. Describe the region in the complex plane such that $|1-z|\leq M(1-|z|)$. To start, I take $M=1$. The inequality becomes $|1-z|\leq 1-|z|$. But by triangle inequality, we have $|1-z|+|z|\geq |(1-z)+z| = 1$. We must have equality, and it holds when $z\in [0,1]$. For arbitrary $M$, the inequality becomes $|1-z|+M|z|\leq M$. I don't really know what to do with this, except that any $|z|>1$ is clearly ruled out because then $M|z|>M$.
Your region consists of all $z \in \mathbb C$, such that the ratio $$\frac{|1-z|}{1-|z|} $$ is bounded (by $M$). As you mentioned only points within the unit disk are admissible. More precisely, the region is a subset of the unit disk, which is contained within a circular wedge of angle $\alpha=\alpha(M)$ (the Stolz angle). The higher $M$ gets, the wider $\alpha(M)$ is (as you mentioned $\alpha(0)=0)$. I recommend this wolfram demonstration, for more insights.
{ "language": "en", "url": "https://math.stackexchange.com/questions/477539", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 1, "answer_id": 0 }
Convergence of $\sum_{n=0}^{\infty}\frac{z^n}{1+z^{2n}}$ For what complex values of $z$ is $$\sum_{n=0}^{\infty}\dfrac{z^n}{1+z^{2n}}$$ convergent? I would like to write the sum as a power series, because with a power series we can determine the radius of convergence. But in this case it seems untidy. We have $\dfrac{1}{1+z^{2n}}=1-z^{2n}+z^{4n}-\cdots$, so that the original sum is $$\sum_{n=0}^{\infty}\sum_{k=0}^{\infty}(-1)^kz^{n+2nk}$$ and I don't think that's very useful...
Hint: When $|z|<1$ we have $$\left|\frac{z^n}{1+z^{2n}}\right| \le \frac{|z|^n}{1-|z|^{2}},$$ when $|z|>1$ we have $$\left|\frac{z^n}{1+z^{2n}}\right| \sim {|z|^{-n}}$$ and when $|z|=1$ we have $$\left|\frac{z^n}{1+z^{2n}}\right| \not\to 0.$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/477610", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 4, "answer_id": 1 }
Interesting question in analysis I am trying to prove this: Consider $\Omega \subset \mathbb{R}^n$ ($n \geq 2$) a bounded and open set and $u$ a smooth function defined on $\overline{\Omega}$. Suppose that $u(y) = 0$ for $y \in \partial \Omega$ and suppose that there exists an $\alpha >0$ such that $|\nabla u (x)| = \sqrt{\displaystyle\sum_{i=1}^{n} \left(\frac{\partial u}{ \partial x_i }(x)\right)^2}\geq \alpha >0$ for all $x \in \Omega$; then $$ |u(x)| \geq \alpha |x-y|$$ for all $x \in \Omega$ and for all $y \in \partial \Omega$. Drawing a picture, it is easy to see the claim, and I am trying to prove it, but without success. My professor said that this is true. Can someone give me a hint?
I think there is something wrong here. If $u$ is zero on $\partial \Omega$, then $u$ attains a maximum or a minimum in $\Omega$. So there is a point $P$ in $\Omega$ such that $\nabla u(P)=0$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/477749", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 2 }
General question about 'vieta jumping' Suppose I want to prove that a variable posesses a certain property (e.g. is a square). For example if I wanted to prove that $x$ in $\frac{x^2+y^2+1}{xy} = k$ has the property of being a square (It is obviously false, but suppose it is true ). Is it then necessary to fix $x$ and look at the pairs $(y,k)$ (which satisfy the equation), or look at the pairs $(x,y)$ and fix k? The latter is the only version I have seen so far. ( source to the vieta jumping method : www.yimin-ge.com/doc/VietaJumping.pdf )
In this case the problem is very simple: you just need to use Vieta jumping and find the smallest solution. In other problems that use Vieta jumping you normally fix one variable and then find the possible $k$. Here you have an example: http://www.artofproblemsolving.com/community/c6h339649
{ "language": "en", "url": "https://math.stackexchange.com/questions/477830", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
$h(x)={f(x)\over x}$ is decreasing or increasing or both over $[0,\infty)$ $f$ is a real valued function on $[0,\infty)$ such that $f''(x)>0$ for all $x$ and $f(0)=0$. Is $h(x)={f(x)\over x}$ then decreasing or increasing or both over $[0,\infty)$? We have $h'(x)={xf'(x)-f(x)\over x^2}$. What can I conclude from here?
Since $f''(x) > 0$ on $[0,\infty)$, $f'$ is strictly increasing (note that $f'(x) = f'(0) + \int_0^x f''(t)\,dt > f'(0)$ for $x>0$; we cannot conclude that $f'>0$, but we will not need that). Geometrically, $f$ is concave upwards with $f(0)=0$, so the chords from the origin get steeper as we move right, which suggests intuitively that $h(x)=f(x)/x$ should be increasing. We can be more precise with the mean value theorem. Since $f(0)=0$, $$\frac{f(x_1)}{x_1} = \frac{f(x_1)-f(0)}{x_1-0} = f'(c) \quad\text{for some } c\in(0,x_1).$$ If $x_2 > x_1$, the mean value theorem on $[x_1,x_2]$ gives $f(x_2)-f(x_1) = f'(d)(x_2-x_1)$ for some $d \in (x_1,x_2)$. Since $d > c$ and $f'$ is strictly increasing, $f'(d) > f'(c) = f(x_1)/x_1$, so $$\frac{f(x_2)}{x_2} = \frac{f(x_1) + f'(d)(x_2-x_1)}{x_2} > \frac{\frac{f(x_1)}{x_1}\,x_1 + \frac{f(x_1)}{x_1}(x_2-x_1)}{x_2} = \frac{f(x_1)}{x_1}.$$ Thus $h(x_1) < h(x_2)$, so $h$ is strictly increasing. A note about $h'(x)$. We are taught in elementary calculus that if a derivative is positive on an interval then the function is monotonic there. But as you see from this example (and there are worse ones -- much worse), it may not be easy to prove directly that $h'(x) > 0$. However, sometimes you can think yourself through this kind of problem with more elementary methods.
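Alternatively, one can finish the $h'(x)$ computation from the question directly. Let $g(x)=xf'(x)-f(x)$, the numerator of $h'$. Then $g(0)=-f(0)=0$ and $$g'(x)=f'(x)+xf''(x)-f'(x)=xf''(x)>0 \quad (x>0),$$ so $g(x)>0$ for $x>0$, hence $h'(x)=g(x)/x^2>0$ and $h$ is increasing on $(0,\infty)$.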
{ "language": "en", "url": "https://math.stackexchange.com/questions/477889", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Arithmetical Functions Sum, $\sum\limits_{d|n}\sigma(d)\phi(\frac{n}{d})$ and $\sum\limits_{d|n}\tau(d)\phi(\frac{n}{d})$ $$\sum_{d|n}\sigma(d)\phi\left(\frac{n}{d}\right)=n\tau(n) ,\\ \sum_{d|n}\tau(d)\phi\left(\frac{n}{d}\right)=\sigma(n)$$ Problem 7.4.15 of Burton's Elementary Number Theory asks to prove the above equalities. In this book, Dirichlet multiplication and the Riemann zeta function are not introduced before this problem. In addition, I know that Pillai was the first to prove these equalities, but I couldn't find Pillai's paper on the web. Can you please refer me to a link to Pillai's paper, or give me a proof without the use of Dirichlet multiplication or the Riemann zeta function?
Prove that both sides are multiplicative functions and that they coincide when $n$ is a prime power.
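As a sanity check of the first identity at the two smallest prime powers: for $n=p$, $$\sum_{d\mid p}\sigma(d)\phi\left(\frac pd\right)=\sigma(1)\phi(p)+\sigma(p)\phi(1)=(p-1)+(p+1)=2p=p\,\tau(p),$$ and for $n=p^2$, $$\sigma(1)\phi(p^2)+\sigma(p)\phi(p)+\sigma(p^2)\phi(1)=(p^2-p)+(p^2-1)+(p^2+p+1)=3p^2=p^2\,\tau(p^2).$$ The general prime-power case $n=p^a$ is a similar direct computation, and the second identity is checked analogously.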
{ "language": "en", "url": "https://math.stackexchange.com/questions/477961", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 3, "answer_id": 2 }
Fast Matlab Code for hypergeometric function $_2F_1$ I am looking for a good numerical algorithm to evaluate the hypergeometric function $_2F_1$ in Matlab (hypergeom in Matlab is very slow). I looked across the Internet, but did not find anything useful. Does anybody here have an idea where to find a fast algorithm to compute this function?
There is Matlab source code for J. Pearson's master's thesis "Computation of Hypergeometric Functions". The thesis is available as http://people.maths.ox.ac.uk/porterm/research/pearson_final.pdf and the Matlab code URL is http://people.maths.ox.ac.uk/porterm/research/hypergeometricpackage.zip (I cannot judge the code because I am working with Pascal/Delphi).
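If you just need a quick reference value to test any implementation against, the defining Gauss series is easy to code. The Python sketch below is a minimal illustration (not Pearson's algorithm) and is only reliable well inside the unit disk $|z|<1$; robust libraries such as scipy.special.hyp2f1 apply transformation formulas to handle arguments near and beyond $|z|=1$.

```python
def hyp2f1(a, b, c, z, tol=1e-15, max_terms=100000):
    """Truncated Gauss series: sum of (a)_n (b)_n / ((c)_n n!) * z^n.
    Minimal sketch; only reliable for |z| well below 1."""
    term, total = 1.0, 1.0
    for n in range(max_terms):
        # ratio of consecutive terms of the hypergeometric series
        term *= (a + n) * (b + n) / ((c + n) * (n + 1)) * z
        total += term
        if abs(term) < tol * abs(total):
            return total
    raise RuntimeError("series converged too slowly; is |z| too close to 1?")
```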
{ "language": "en", "url": "https://math.stackexchange.com/questions/478052", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 2, "answer_id": 0 }
Wilson's theorem intuition Wilson's Theorem: $p$ is prime $\iff$ $(p-1)!\equiv -1\mod p$ I can use Wilson's theorem in questions, and I can follow the proof whereby factors of $(p-1)!$ are paired up with their (mod $p$) inverses, but I am struggling to gain further insight into the meaning of the theorem. To try and find a more tangible expression, I tried rewriting the theorem as: * *$p$ prime $\iff (p-1)!+1\equiv 0\mod p$ *$p$ prime $\iff (p-1)!\equiv p-1\mod p$ *$p$ prime $\iff p|(p-1)!+1 \implies (p-1)!+1=kp,k\in\mathbb{Z}$ *$p$ prime $\iff (p-1)!=kp-1$ But I found none of these particularly instructive. We have $(p-1)!$, containing no factors of $p$, which somehow evaluates to one less than a multiple of $p$ if and only if $p$ is prime. What can we say about $(m-1)!$ if $m$ is composite? I guess what I'm asking for is some 'informal' or more conceptually based justification of this theorem.
First let's understand why for prime $p$, $(p-1)!\equiv -1\mod p$. Understanding why that never happens for composite numbers actually ends up being a bit messy (although fundamentally not very complicated), so we'll cover that later. The important intuition here is that the multiplicative group of $\mathbb Z/p\mathbb Z$ is cyclic. If you don't know what that means, it means that modulo a prime $p$, we can always find some number $r$ such that the sequence $1, r, r^2, ..., r^{p-2}$ runs through all of the non-zero values modulo $p$, and $r^{p-1}=1$. What this means is that multiplication modulo $p$ works just like addition modulo $p-1$, since $r^ar^b=r^{a+b}$, and as we have $r^{p-1}=1$, we have the familiar "wrap around" effect of modular addition: $r^{p-1+k}=r^k$. In other words, doing multiplication modulo $p$ on some numbers is just doing addition modulo $p-1$ on the exponents of those numbers. This means that any time we have a problem involving multiplying modulo a prime $p$, we can basically pretend it's a problem about addition modulo $p-1$. In this translation scheme, $-1$ becomes $\frac{p-1}2$, since $r^{\frac{p-1}2}\equiv-1\pmod p$. So the question becomes: when we're doing addition modulo an even number $n$, why do we always have $1+2+3+\cdots+(n-1)\equiv\frac n2\pmod n$? We can use the same famous trick Gauss used to calculate $1+2+3+\cdots+100$. Pair up $1$ with $n-1$, $2$ with $n-2$, and so on. All of these couples become $0$ modulo $n$, and only $\frac n 2$ remains. This explains why the phenomenon occurs for prime numbers. To see why it never occurs for composite numbers we can reason one step at a time: * First of all, the main reason why is that for most composite numbers $n$, there are two distinct numbers $a,b<n$ with $n=ab$, and those two numbers will show up in the factorial $(n-1)!$, guaranteeing that this factorial will be a multiple of $n$. * But this logic doesn't work if $a=b$, and sometimes that's our only option, namely in the case of a number $n$ which is the square of a prime, $p^2$. The theorem continues to work in this case, and an example makes it clear why: $1\cdot2\cdot3\cdot4\cdot5\cdot6\cdot7\cdot8$ is a multiple of $9=3^2$ since $3$ and $6$ give us "two threes" in $8!$, giving us our $3^2$ and making $8!$ a multiple of $9$. In general, as long as $p>2$, $p$ and $2p$ will both appear in $(p^2-1)!$ and therefore that factorial will be a multiple of $p^2$. * And when $p=2$, well, basically, we get lucky: it so happens that $(2^2-1)!$ is not $-1$ modulo $2^2$.
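A three-line experiment makes both behaviours vivid; for composite $m>4$ the pairing argument above in fact gives $(m-1)!\equiv 0 \pmod m$:

```python
from math import factorial

for m in range(2, 16):
    # m prime -> m - 1 (i.e. -1 mod m); m = 4 -> 2; other composites -> 0
    print(m, factorial(m - 1) % m)
```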
{ "language": "en", "url": "https://math.stackexchange.com/questions/478130", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 3, "answer_id": 2 }
Boolean Algebra-Simplification Assistance Needed I have to show that (!(P.Q) + R)(!Q + P.!R) => !Q by simplifying it using De Morgan's Laws. Here is what I did but I'm not sure it's right. (!(P.Q) + R)(!Q + P.!R) => !Q (!P + !Q + R)(!Q + P.!R) !(P.Q) + !P.P.!R + !Q + !Q.P.!R + R.!Q + R.P.!R !(P.Q) + 0 + !Q + !Q.P.!Q + R.!Q + 0 !(P.Q) + !Q + !Q(1 + P.!R + R) !(P.Q) + !Q + !Q !Q + !(P.Q) !Q(1 + !P) !Q Hope that's clear enough.
Indeed, your work is correct. Let's shorten things up, though, using the Distributive Law (D.L.) twice immediately following the application of DeMorgan's: $\begin{align}(\overline{P\cdot Q} + R)\cdot (\overline Q + P\cdot \overline R) &= (\overline P + \overline Q + R)\cdot(\overline Q + P\cdot \overline R) \tag{DeMorgan's} \\ \\ & = \overline Q + (\overline P + R)(P\cdot \overline R) \tag{D.L.} \\ \\ & = \overline Q + (\color{blue}{\overline P\cdot P}\cdot \overline R) + (\color{red}{R}\cdot P\cdot \color{red}{ \overline R}) \tag{D.L.}\\ \\ & = \overline Q + \color{blue}{ 0} + \color{red}{\bf 0} \tag{$A \cdot \overline A = 0$}\\ \\ &= \overline Q\end{align}$
{ "language": "en", "url": "https://math.stackexchange.com/questions/478232", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
If $(ab)^3=a^3 b^3$, prove that the group $G$ is abelian. If in a group $G$, $(ab)^3=a^3 b^3$ for all $a,b\in G$, and $3$ does not divide $o(G)$, prove that $G$ is abelian. I interpreted the fact that $3$ does not divide $o(G)$ as saying $(ab)^3\neq e$, where $e$ is the identity of the group. As for proving $ab=ba$, I did not get anywhere useful. I got the relation $(ba)^2=a^2 b^2$, but could not proceed beyond that. I got a lot of other relations too which I could not exploit, like $a^2 b^3=b^3 a^2$. A helpful hint instead of a solution would be great! Thanks in advance!
This is an attempt at an elementary and linear exposition of this proof strategy, since the original seemed to cause some confusion from a comment - and it uses the language of homomorphisms, which I have avoided. First you need to establish that every element of the group is a cube. Since the group has order not divisible by 3, we know that if $x^3=e$ then $x=e$ (using $e$ for the identity). Now suppose that $a^3=b^3$ - then we have $e=(a^3)(b^{-1})^3=(ab^{-1})^3$ so that $ab^{-1}=e$ whence $a=b$. This means that no two cubes of different elements are equal. So if we cube all $n$ elements in the group we get $n$ different results. Hence every element of the group must be a cube. Now consider $(aba^{-1})^3$ in two ways. Writing it out in full and cancelling $aa^{-1}=e$ we get $ab^3a^{-1}$. Using the special relation we have for the group we get $(ab)^3(a^{-1})^3=a^3b^3(a^{-1})^3$. Setting these equal and cancelling: $b^3=a^2b^3(a^{-1})^2$ or $b^3a^2=a^2b^3$. Since $a$ and $b$ were completely arbitrary, every square commutes with every cube. But every element of the group is a cube, so every square commutes with everything. Now we can write $ababab=(ab)^3=a^3b^3$ and when we cancel and use the fact that squares commute we find that $baba=a^2b^2=b^2a^2$ whence $ab=ba$. Note that the fact that the group had order not divisible by $3$ was only used to prove that there were no elements of order $3$. The question might arise about infinite groups which have no elements of order $3$ and obey the relation given in the question. The proof here does not go through, because a counting argument was used to show that every element in the group is a cube. This counting argument cannot be transposed to the infinite case. We can still prove that all the cubes of different elements are different and that every square commutes with every cube, but that is no longer enough to conclude the argument.
{ "language": "en", "url": "https://math.stackexchange.com/questions/478328", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 3, "answer_id": 2 }
$f(x,y)=\frac{x^3}{x^2+y^2}$ is not differentiable at $(0,0)$ Define $f(x,y)=\frac{x^3}{x^2+y^2}$ if $(x,y)\neq(0,0)$ and $f(x,y)=0$ for $(x,y)=(0,0)$. Show that it is not differentiable at $(0,0)$. I figured out that both $f_x$ and $f_y$ exist and are discontinuous at $(0,0)$, but I can't say anything about the differentiability of $f(x,y)$ at $(0,0)$; also, $f(x,y)$ looks like a continuous function!
Were your function differentiable, it would be the case that if $f'(0;v)$ is the directional derivative at $0$ with direction $v$ $$f'(0;v+w)=f'(0;v)+f'(0;w)$$ Now let $v=(v_1,v_2)$ such that $v\neq 0$. Then $$f'(0;v)=\lim\limits_{t\to 0}\frac{f(tv)}{t}=\lim_{t\to 0}\frac{t^3v_1^3}{t^3\lVert v\rVert ^2}=\frac{v_1^3}{\lVert v\rVert^2}$$ This is evidently not linear, hence your function cannot be differentiable at the origin.
{ "language": "en", "url": "https://math.stackexchange.com/questions/478368", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Finding minima and maxima of $\frac{e^{1/({1-x^2})}}{1+x^2}$ Find minima and maxima of $\frac{e^{1/({1-x^2})}}{1+x^2}$. I have: \begin{align} f'(x)=\frac{ 2x\, e^{1/(1-x^2)}\left(\frac{1+x^2}{(1-x^2)^2}-1\right)}{(1+x^2)^2}. \end{align} I have $x=0$ and $x=+\sqrt{3},x=-\sqrt{3}$ for solutions of $f'(x)=0$, but I can't find $f''(x)$, so I need help if someone can simplify this?
We calculate the derivative, because that is the first thing you did. It is fairly complicated. The denominator is $(1+x^2)^2$. The numerator, after some simplification, turns out to be $$\frac{2x^3(3-x^2)}{(1-x^2)^2}e^{1/(1-x^2)}.$$ This is indeed $0$ at $x=0$ and $x=\pm\sqrt{3}$. Looking at the function: Note the symmetry of $y=f(x)$ about the $y$-axis. Then take note of the singularities at $x=\pm 1$. We look only at $x=1$, since symmetry takes care of the other. If $x$ is a little under $1$, then $1-x^2$ is small positive, and therefore $e^{1/(1-x^2)}$ is huge positive. On the other side of $1$ but near $1$, we are looking at $e$ to a huge negative power. The result is nearly $0$. Division by $(1-x^2)^2$ still leaves us nearly at $0$. Note also that for $x$ big, our function is very close to $0$. These observations tell us everything: we will have a local minimum at $x=0$. The value there is $e$. We will have local maxima at $x=\pm \sqrt{3}$. Amusingly, the values there are less than $e$. Note that the local minimum is not a global minimum: for $x$ close to $1$ but to the right of $1$, or for $x$ close to $-1$ but to the left of $-1$, the function is close to $0$. The function is also positive and close to $0$ when $x$ is large positive or negative. So there is no global minimum. Using the derivative: If you wish, we can obtain this information from the derivative. The denominator of the derivative is safely positive, as is $e^{1/(1-x^2)}$. So we need only look at the $2x^3(3-x^2)$ part. Look first near $0$. In the interval $(-1,0)$ the derivative is negative, and in the interval $(0,1)$ it is positive. So the function decreased, then increased, and reached a local min at $x=0$. Now look near $x=\sqrt{3}$. In the interval $(1,\sqrt{3})$ the derivative is positive, and in $(\sqrt{3},\infty)$ it is negative, so we reached a local max at $x=\sqrt{3}$. The story for $-\sqrt{3}$ is the same. Summary: We have a local min at $x=0$, and local maxima at $x=\pm\sqrt{3}$. There is no global min, and there is no global max. Remark: We did not compute the second derivative. For one thing, we are not that heroic, or masochistic. Students tend to overuse the second derivative test in one-variable calculus. The full story is already contained in the first derivative.
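For reference, the stationary values are $f(0)=e\approx 2.718$ and $f(\pm\sqrt3)=e^{1/(1-3)}/(1+3)=\frac{e^{-1/2}}{4}\approx 0.152$, which makes the picture above concrete: the local maxima sit far below the local minimum.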
{ "language": "en", "url": "https://math.stackexchange.com/questions/478467", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Two questions regarding $\mathrm {Li}$ from "Edwards" I would appreciate help understanding a relation in Edwards's "Riemann's Zeta Function." On page 30 he has: $$\int_{C^{+}} \frac{t^{\beta - 1}}{\log t}dt = \int_{0}^{x^{\beta}}\frac{du}{\log u}= \mathrm {Li} (x^{\beta}) - i\pi$$ He states for $\beta$ positive and real, change variables $u = t^{\beta}$, which implies $\log t = \log u/\beta$ and $dt/t = du/u \beta$. Here $C^{+}$ is a path which is a line segment from $0$ to $1 - \epsilon$, passes over the singularity at $u = 1$ in a semi-circle in the upper half-plane, and continues in a line segment from $1 + \epsilon$ to $x$. I would appreciate help with two aspects: -- Since most of the discussions of the logarithmic integral I have seen take the integral from $2$ rather than from $0$, how do you treat what looks like a $- \mathrm {Li}(0)$ term? -- How do you actually get the $- i \pi$ term? I would guess it's from integrating around the half-circle above $u = 1$ in a clockwise direction, but I have tried parametrization with $u = r e^{i \theta}$ without success. Maybe this is something I should know from complex analysis. Thanks very much.
Here is a try for part two, the $-i\pi$ term: change variables from $u$ to $e^{t}$. Then $\log u = t$ and $du = e^{t}dt$, so the integrand becomes $e^{t}/t$, which has residue $\left(e^{t}\right)_{t=0}=1$ at $t=0$. The half circle, now about and above $t = 0$ and traversed in the clockwise direction, therefore contributes $-i\pi \cdot 1 = -i\pi$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/478534", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 2 }
How many 3-digit positive integers are odd and do not contain digit 5? It is a GRE question, and it has been answered here, but I still want to ask it again, just to understand why I am wrong. The correct answer is 288. My idea is: first I get the total number of 3-digit integers that do not contain 5, then divide it by 2. Because it is a 3-digit integer, the hundreds digit cannot be zero. So I have $(8\cdot 9\cdot 9)/2 = 324$. Why is this idea not correct?
(Hundreds) (Tens) (Units), Units could be $(1, 3, 7, 9) \rightarrow 4$ numbers, Tens could be $(0, 1, 2, 3, 4, 6, 7, 8, 9)\rightarrow 9$ numbers, Hundreds could be $(1, 2, 3, 4, 6, 7, 8, 9) \rightarrow 8$ numbers, (Hundreds) (Tens) (Units) $\rightarrow (8) (9) (4) = 288$
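The flaw in the divide-by-two idea is that among digit strings avoiding $5$, the units digit has $9$ possibilities, of which only $4$ are odd, so odd and even are not split evenly. A brute-force check in Python confirms the count:

```python
count = sum(1 for n in range(100, 1000)
            if n % 2 == 1 and '5' not in str(n))
print(count)  # 288
```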
{ "language": "en", "url": "https://math.stackexchange.com/questions/478619", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 5, "answer_id": 4 }
Find the shortest distance between the point $(8,3,2)$ and the line through the points $(1,2,1)$ and $(0,4,0)$ "Find the shortest distance between the point $(8,3,2)$ and the line through the points $(1,2,1)$ and $(0,4,0)$" $$P = (1,2,1), \quad Q = (0,4,0), \quad A = (8,3,2)$$ $\vec{OP}$ = vector to $P$, $$\vec{PQ} = (0,4,0) - (1,2,1) = (-1,2,-1).$$ I found that the equation of the line $L$ that passes through $(1,2,1)$ and $(0,4,0)$ is: $$L = \vec{OP} + \vec{PQ} \, t;$$ $$L = (1,2,1) + (-1,2,-1) \, t .$$ However, after this I'm not sure how to proceed. I can find $\vec{PA}$ and then draw a line from $A$ to the line $L$... advice?
Any point on the line looks like $\vec{\rm r}\left(\lambda\right) \equiv \vec{P} + \lambda\vec{n}$ where $\lambda \in {\mathbb R}$ and $\vec{n}$ is the unit vector along the line, $\vec{n} \equiv \left(\vec{Q} - \vec{P}\right)/\left\vert\vec{Q} - \vec{P}\right\vert$ (normalizing is what makes the algebra below work, since then $\vec n\cdot\vec n = 1$). The distance between the point $\vec{A}$ and the point $\vec{\rm r}\left(\lambda\right)$ is given by ${\rm d}\left(\lambda\right) = \left\vert\vec{\rm r}\left(\lambda\right) - \vec{A}\right\vert = \left\vert\lambda\,\vec{n} + \vec{P} - \vec{A}\right\vert$. So, you have to minimize \begin{align} {\rm d}^{2}\left(\lambda\right) =& \lambda^{2} + 2\vec{n}\cdot\left(\vec{P} - \vec{A}\right)\,\lambda + \left(\vec{P} - \vec{A}\right)^{2} = \left\lbrack\lambda + \vec{n}\cdot\left(\vec{P} - \vec{A}\right)\right\rbrack^{2} + \left(\vec{P} - \vec{A}\right)^{2} - \left\lbrack\vec{n}\cdot\left(\vec{P} - \vec{A}\right)\right\rbrack^{2} \end{align} Then the shortest distance is given by $${\large\sqrt{% \left(\vec{P} - \vec{A}\right)^{2} - \left\lbrack\vec{n}\cdot\left(\vec{P} - \vec{A}\right)\right\rbrack^{2}}} $$
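With the given numbers this is quick to evaluate: $\vec P-\vec A=(-7,-1,-1)$ and $\vec Q-\vec P=(-1,2,-1)$, so $\vec n=(-1,2,-1)/\sqrt6$, $\left(\vec P-\vec A\right)^2=51$ and $\vec n\cdot\left(\vec P-\vec A\right)=\frac{7-2+1}{\sqrt6}=\sqrt6$, giving a shortest distance of $\sqrt{51-6}=\sqrt{45}=3\sqrt5$.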
{ "language": "en", "url": "https://math.stackexchange.com/questions/478765", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 4, "answer_id": 2 }
Example of an integrable function with an entire extension and whose derivative only vanishes at infinity I am looking for a function $f : \mathbb{C} \to \mathbb{C}$ with the following properties: * *$f$ is entire. *$\int_{-\infty}^\infty |f(t)| \ dt < \infty$ i.e. the restriction of $f$ to the real line is in $L^1(\mathbb{R})$. *$\lim_{|t| \to \infty} f'(t) = 0$ i.e. the restriction of $f'$ to the real line is in $C_0(\mathbb{R})$. *$\int_{-\infty}^\infty |f'(t)| \ dt = \infty$ i.e. the restriction of $f'$ the real line is not in $L^1(\mathbb{R})$. So far I've at least had some success coming up with an example of an entire function $g$ whose restriction to the real line is $C_0(\mathbb{R})$, but not $L^1(\mathbb{R})$. For instance, $g(z) = \frac{e^{-z^2} - 1}{z}$ seems to do the trick. I'm not sure if there's an antiderivative $f$ of this $g$ with the desired property though.
Let $$ f(z)=\int_0^z\frac{\sin(w^2)}{w}\,dw-\frac{\pi}{4}. $$ $f$ is an even entire function. As the real variable $t \to \infty$, $f(t)$ behaves like $O(t^{-2})$, so, in particular, $\lim_{t\to\pm\infty}f(t)=0$ and $\int_{-\infty}^\infty|f(t)|\,dt<\infty$. On the other hand $$ f'(z)=\frac{\sin(z^2)}{z} $$ and $$ \int_{-\infty}^\infty|f'(t)|\,dt=2\int_0^\infty\frac{|\sin(t^2)|}{t}\,dt=\int_0^\infty\frac{|\sin(u)|}{u}\,du=\infty. $$
{ "language": "en", "url": "https://math.stackexchange.com/questions/478816", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Generator for $[G,G]$ given that $G = \left\langle S'\right\rangle$. If $S'$ is a generating set for $G$, let $S=S' \cup \{s^{-1} \,| \, s\in S'\}$, then is the set $[S,S] = \{[s,z] \,|\, s,z \in S\}$ a generating set for the commutator subgroup $[G,G]$? I want to believe that this is true and "almost" have a proof, whose weak point might be the thing that can be used to generate a counterexample. Clearly, if the following were true, then the answer to the question at the beginning would be positive. Claim $[s_1 \dots s_n, z_1 \dots z_m] \in \left<[S,S] \right>$, for $s_1, \dots, s_n, z_1, \dots, z_m \in S$. First, show that $[s_1 \dots s_n, z] \in \left<[S,S]\right>$, where $s_1, \dots, s_n, z \in S$. To do this, use induction on $n$. The case for $n=1$ is trivial. For $n >1$, \begin{align*} [s_1 \dots s_n, z] =& (s_1\dots s_n)^{-1} z^{-1} s_1 \dots s_n z \\ =& s_n^{-1} (s_1 \dots s_{n-1})^{-1} z^{-1} s_1 \dots s_n z \\ =& s_n^{-1} (s_1 \dots s_{n-1})^{-1} z^{-1} s_1 \dots s_{n-1} z s_n [s_n, z] \\ =& s_n^{-1} [s_1 \dots s_{n-1}, z] s_n [s_n,z] \\ =& [s_1 \dots s_{n-1}, z]^{s_n} [s_n,z] \end{align*} By induction hypothesis, $[s_1 \dots s_{n-1}, z], [s_n,z] \in \left<[S,S]\right>$. Not sure how to complete this here, so let's just move on for now and pretend that $[s_1 \dots s_{n-1}, z]^{s_n} \in \left<[S,S]\right>$ by some miracle. Next, we show $[z, s_1 \dots s_n] \in \left<[S,S]\right>$ where $z = z_1 \dots z_m$, and $z_1, \dots, z_m, s_1, \dots, s_n \in S$. We proceed by induction on $n$. The case $n=1$ is done by the above. Note for $n>1$, everything follows the same format as above except we have the order reversed. Thus we are done.
This won't be true in general, although one has to argue a bit more carefully than the usual counting. Suppose $G$ is generated by $S'=\{a,b\}$, so that $S=\{a^{\pm1},b^{\pm1}\}$. Then $[S,S]$ is a finite set (at most sixteen commutators), and hence $\langle[S,S]\rangle$ is a finitely generated subgroup. Now take $G$ to be the free group on $\{a,b\}$. Its commutator subgroup $[G,G]$ is a nontrivial normal subgroup of infinite index, and such a subgroup of a free group is free of infinite rank, hence not finitely generated; so it cannot equal $\langle[S,S]\rangle$. (Note that the quicker argument, that $\langle[S,S]\rangle$ is cyclic while, say, $[S_n,S_n]=A_n$ is not cyclic for $n>3$, only works if one forgets the inverses in $S$: with $S$ inverse-closed, $[S,S]$ contains more than one pair of mutually inverse commutators.)
{ "language": "en", "url": "https://math.stackexchange.com/questions/478887", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 5, "answer_id": 1 }
Show that an element has order $2$ in $S_n$ if and only if its cycle decomposition is a product of commuting $2$-cycles. In the Dummit-Foote text the algorithm for cycle decomposition is given (as an image in the original post). Based on that algorithm, the following exercise is asked: Show that an element has order $2$ in $S_n$ if and only if its cycle decomposition is a product of commuting $2$-cycles. Even though the problem is not hard to interpret, I still want to leave my solution here for verification, since solutions of problems on the symmetric group often contain fewer symbols and more words. Please have a look at the logic I used and comment on its validity. My attempt: Lemma: The cycle decomposition decomposes a permutation into disjoint cycles: for otherwise, let the cycles $C_1,C_2$ have some element in common, $x$ say. Suppose that in the course of the decomposition $C_1$ (whose first element is assumed to be $a$) is constructed earlier than $C_2$ (whose first element is assumed to be $b$). Then $x=\sigma^i(a)=\sigma^j(b)\implies\sigma^{i-j}(a)=b,$ a contradiction. Proof of the Exercise: Let $\sigma\in S_n$ be such that $|\sigma|=2.$ Since $\sigma\ne1,$ the cycle decomposition of $\sigma$ must contain a cycle of length greater than $1$. If possible, let the decomposition of $\sigma$ contain a cycle of length $\ge3,$ say $(a_1~a_2~a_3~...).$ Then $\sigma^2(a_1)=a_3$ (by the construction of the algorithm) $\ne a_1,$ a contradiction to $\sigma^2=1.$ Thus the cycle decomposition is a product of $2$-cycles, and since they are disjoint (Ref: Lemma) they must be commuting. Conversely, let $\sigma$ be such that its cycle decomposition is a product of commuting $2$-cycles. Since those cycles arise from a cycle decomposition they must be mutually disjoint. Clearly $|\sigma|\ge2,$ for otherwise $\sigma$ would not contain any $2$-cycle. Choose $x\in\{1,2,...,n\}.$ * If $x$ doesn't appear in any $2$-cycle, $\sigma^2(x)=x.$ * If $x$ appears in some $2$-cycle, $\sigma^2(x)=x$ (by the construction of the algorithm). Thus $\sigma^2=1\implies|\sigma|=2.$ Am I correct?
So all you need now is to show that any cycle can be written as a product of transpositions...: $$(i_1\,i_2\,\ldots\,i_n)=(i_2\,i_3)(i_3\,i_4)\cdot\ldots\cdot(i_{n-1}\,i_n)(i_n\,i_1)$$ Observe that there are $\,n-1\,$ transpositions in the RHS above...
{ "language": "en", "url": "https://math.stackexchange.com/questions/478978", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Finding the local extrema of this trigonometric, multivariate function QUESTION Find all extrema and their places for $$ f(x,y) = \sin x + \cos y + \cos (x-y)$$ for $ 0 \le x \le \frac{\pi}{2}$ and $ 0 \le y \le \frac{\pi}{2}$ ATTEMPT I go ahead and find the first order partial derivatives: $$f_x = \cos x - \sin (x-y) $$ $$f_y = - \sin y + \sin (x-y) $$ Equating them to zero to find the critical points, I get the following system of equations (now this is where things get tricky for me; not completely sure if I'm making the right conclusions). For $f_y = 0$: $$ - \sin y = - \sin (x-y) ...(1) $$ $$\Rightarrow y = x -y \Rightarrow 2y = x ...(2)$$ Then for $f_x = 0$: $$ \cos x = \sin (x-y)...(3)$$ then from (2): $$\sin (x-y) = \sin y = \sin \frac{x}{2}...(4) $$ (2) and (3) give: $$ \cos x = \sin \frac{x}{2}...(5)$$ Now having looked at the sin and cos graphs, I found that the two only intersect at $$ \sin \frac{\pi}{4} = \cos \frac{\pi}{4}$$ in the interval given. I guess I don't know how to move forward from here... Do I equate the variables to $\frac{\pi}{4}$ and feel things out from there? Because when I do that I come to some weird-looking three-way-equality equations that don't seem right, e.g. $$ x = \frac{x}{2} = \frac{\pi}{4}$$ I understand how to find local extrema etc., but I think it's the sin/cos thing that's messing with me. Basic trigonometry...
You have found ($x = \pi/4$) values of $x$ where $\sin(x) = \cos(x)$. You need to find values of $x$ such that $\cos(x) = \sin(x/2)$ instead (i.e. the graphs of $\cos(x)$ and $\sin(x/2)$ intersect: http://www.wolframalpha.com/input/?i=plot+cos%28x%29%2C+sin%28x%2F2%29 ). For instance, if $x = -\pi$, then $\sin(-\pi/2) = -1 = \cos(-\pi)$. There are other solutions as well. Of course, you want the answer between $0$ and $\pi/2$, but I'll leave it to you to find it.
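Carrying this to the end: on $[0,\pi/2]$ the equation $\cos x=\sin(x/2)$ can be solved exactly. Writing $\cos x=1-2\sin^2(x/2)$ and setting $s=\sin(x/2)$ gives $2s^2+s-1=0$, whose roots are $s=\tfrac12$ and $s=-1$; only $s=\tfrac12$ is admissible here, so $x/2=\pi/6$, i.e. $x=\pi/3$. Combined with $x=2y$ from equation (2) in the question, the interior critical point is $(x,y)=(\pi/3,\pi/6)$.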
{ "language": "en", "url": "https://math.stackexchange.com/questions/479164", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
De Moivre's formula I'm starting to study complex numbers, and obviously we've worked with De Moivre's formula. I was curious about its origin and looked for the original paper; I found it in the Philosophicis Transactionibus Num. 309, "De sectione Anguli", but only in Latin, so some words are difficult to understand. However, even in the mathematical part I don't see where the formula is.
DeMoivre only suggested the formula in $1722$. It was Euler who proved it in $1749$. Later, analogues of DeMoivre's formula were found not only for complex numbers, but also for quaternions.
{ "language": "en", "url": "https://math.stackexchange.com/questions/479251", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 1 }
Proving the following conditions are equivalent I have to prove four conditions are equivalent. I'm guessing I should proceed with (a) implies (b), (b) implies (c), (c) implies (d), and (d) implies (a)? I have gotten (a) implies (b), (b) implies (a), and (b) implies (d). I'm struggling with (b) implies (c) and (c) implies (d) and (d) implies (a) though. Here is the problem and my work so far: Let $ f: G → H $ be a homomorphism of groups, and let a and b be elements of G. Let K be the kernel of $f$. Prove that the following conditions are equivalent: (a) $f(a)=f(b)$ (b) $a^{-1}b$ is in $K$ (c) $b$ is in the coset $aK$ (d) The cosets $bK$ and $aK$ are equal I have so far: Suppose that $f(a)=f(b)$. Then $f(a^{-1}b) = f(a^{-1})f(b) = f(a)^{-1}f(b)=1.$ Therefore, $a^{-1}b$ is in the kernel K. If $a^{-1}b$ is in $K$, then $1 = f(a^{-1}b)=f(a)^{-1}f(b)$, so $f(a)=f(b)$. This shows that the first two conditions are equivalent. I have a lengthy argument that I won't put here for how (b) implies (d). Now how do I show that (b) implies (c), (c) implies (d), and (d) implies (a)? Thanks.
$\bbox[5px,border:2px solid #4B0082]{(b)\implies (c)}$ $$\begin{align} a^{-1}b\in K&\implies (\exists k\in K)(a^{-1}b=k)\\ &\implies (\exists k\in K)(b=ak)\\ &\implies b\in aK \end{align}$$ $\bbox[5px,border:2px solid #4B0082]{(c)\implies (d)}$ Suppose $b\in aK$. There exists $k\in K$ such that $b=ak$. * *$\bbox[5px,border:2px solid #7FFF00]{bK\subseteq aK}$ Let $x\in bK$. There exists $\overline k\in K$ such that $x=b\overline k$. It follows that $x=b\overline k=a\underbrace{k\overline k}_{\large \in K}$ and therefore $x\in aK$. *$\bbox[5px,border:2px solid #7FFF00]{aK\subseteq bK}$ Let $x\in aK$. There exists $\overset{\sim}k\in K$ such that $x=a\overset{\sim}k$. Therefore $x=a\overset\sim k=b\,\underbrace{k^{-1}\overset{\sim} k}_{\large \in K}\in bK$. It follows that $aK=bK$ as wanted. $\bbox[5px,border:2px solid #4B0082]{(d)\implies (a)}$ $$\begin{align} aK=bK&\implies (\forall k\in K)(\exists k'\in K)(ak=bk')\\ &\implies (\forall k\in K)(\exists k'\in K)(a=bk'k^{-1})\\ &\implies (\forall k\in K)(\exists k'\in K)(f(a)=f(bk'k^{-1}))\\ &\implies (\forall k\in K)(\exists k'\in K)\left(f(a)=f(b)f(\underbrace{k'k^{-1}}_{\large \in \ker (f)})\right)\\ &\implies (\forall k\in K)(\exists k'\in K)(f(a)=f(b))\\ &\implies f(a)=f(b)\end{align}$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/479342", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 1, "answer_id": 0 }
A way to teach Archimedean property A student asked me how to understand the Archimedean property. I tried to re-read with him what he has already done in class (well, actually copied from the blackboard in class). However, I think I'm not helping much; my suspicion is that maybe I'm being too formal. How can I approach this differently? He just started college, and I think he's not very familiar with mathematical language. I believe that this is the first "formal" proof they have encountered in class; the rest has been very empirical and easy-going.
One way I have seen the Archimedean Property posed, which makes it relatively simple to understand, is as these two equivalent properties: (1) For any positive number $c$, there is a natural number $n$ such that $n >c$. (2) For any positive number $\epsilon$, there is a natural number $n$ such that $\frac{1}{n} < \epsilon$. Perhaps you could phrase your explanation to him in terms of using the natural numbers to control how large/small the elements of the real numbers can get? Maybe an example of what would happen if the property did not hold would be best, since it demonstrates the property's usefulness?
{ "language": "en", "url": "https://math.stackexchange.com/questions/479401", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Function and Maclaurin series The function $f(x)=\frac{x^2+3e^x}{e^{2x}}$ needs to be developed into a Maclaurin series. I can't find any rule to sum all the fractions I've got... so any suggestion that helps? Thanks
Hint: Do you know the expansion for $e^{-2x}$ and $e^{-x}$? Can you multiply a power series by $x^2$ and by $3$? If so, you have the tools. Just give the series for $x^2e^{-2x}+3e^{-x}$
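Carrying the hint one step further: $$x^2e^{-2x}+3e^{-x}=\sum_{n=0}^{\infty}\frac{(-2)^n}{n!}x^{n+2}+3\sum_{n=0}^{\infty}\frac{(-1)^n}{n!}x^n=3-3x+\frac52x^2-\frac52x^3+\cdots,$$ the coefficient of $x^n$ for $n\ge2$ being $\frac{(-2)^{n-2}}{(n-2)!}+\frac{3(-1)^n}{n!}$.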
{ "language": "en", "url": "https://math.stackexchange.com/questions/479522", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
is $\{(x,y) : x,y \in \mathbb{Z} \}$ a closed set? I claim yes, and to show this, it will suffice to show that $\mathbb{R}^2 \setminus \mathbb{Z}^2$ is open. So that for every $x \in \mathbb{R}^2 \setminus \mathbb{Z}^2$, we must find a neighborhood $N$ of $x$ such that $N \cap \mathbb{Z}^2 = \varnothing$. Let $r = \min(\|x\|, 1 - \|x\|)$ in $N = D^2(x,r)$. Suppose there exists $z \in N \cap \mathbb{Z}^2$. So $$ \|z\| \leq \|z-x\| + \|x\| < \min(\|x\|, 1 - \|x\|) + \|x\| \leq 1 - \|x\| + \|x\| = 1 $$ So, we have a contradiction, and therefore $N \cap \mathbb{Z}^2$ must be empty as desired. Is this correct? any feedback? thanks.
$\mathbb{R}^2 \setminus \mathbb{Z}^2$ is a union of translated strips that look like $(0, 1) \times \mathbb R$ and $\mathbb R \times (0, 1)$, each of which is open.
{ "language": "en", "url": "https://math.stackexchange.com/questions/479597", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 6, "answer_id": 1 }
How does Cesaro summability imply the partial sums converge to the same sum? I can't reconcile this fact I used to know. Suppose you have a sequence of nonnegative terms $a_k$. Let $s_n=\sum_{k=1}^n a_k$, and suppose $$ \lim_{n\to\infty}\frac{s_1+\cdots+s_n}{n}=L. $$ Then $\sum_{k=1}^\infty a_k$ also exists and equals $L$. I could recover that $\sum_{k=1}^\infty a_k$ exists. If not, it diverges to $\infty$. Suppose $M>0$ is given. There exists $N$ such that $\sum_{k=1}^N a_k>M$. If $n>N$, then $$ \begin{align*} \frac{s_1+\cdots+s_n}{n} &=\frac{s_1+\cdots+s_N}{n}+\frac{s_{N+1}+\cdots+s_n}{n}\\ &\geq\frac{n-N}{n}M. \end{align*} $$ Taking $n\to\infty$ shows that $L\geq M$ for all positive $M$, which is clearly not true. But I can't for the life of me remember why $\sum_{k=1}^\infty a_k=L$ and can't find it online. Can someone clear this up for me? Thanks.
Note that $a_k = (-1)^k$ shows the statement fails once the nonnegativity assumption is dropped: there the partial sums oscillate between $-1$ and $0$ and never converge, even though their Cesàro means converge (to $-\tfrac12$). With $a_k \ge 0$, however, the claim is true, and you have already done the hard part: the partial sums $s_n$ are nondecreasing, and your argument rules out $s_n \to \infty$, so $s_n$ converges to some finite $S$. Since ordinary convergence $s_n \to S$ implies that the Cesàro means converge to $S$ as well, we get $S = L$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/479654", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 1 }
what kind of singularity $e^{\sin z}$ has at $z=\infty$ Could anyone tell me what kind of singularity $e^{\sin z}$ has at $z=\infty$? It is enough to investigate $e^{\sin{1\over z}}$ at $z=0$. But as $z\to 0$ the limit is $\infty$ along some paths and $0$ along others; so is it an essential singularity?
If it's an isolated singularity, and not removable, and not a pole, there's only one possibility left...
{ "language": "en", "url": "https://math.stackexchange.com/questions/479725", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Estimating the upper bound of prime count in the given range I need to estimate the count of primes in the range $[n..m)$, where $n < m$, $n \in N$ and $m \in N$, and this estimate must always exceed the actual count of primes in the given range (i.e. be an upper bound). The simple but ineffective solution would be just $m - n$, though it is fine when $m$ is close to $n$. An accurate estimate would look like $[Li(m) - Li(n)]$ according to the Riemann formula for $\pi(x)$, but I believe I can't use it for an upper bound. So the question is: how can I estimate this upper bound effectively enough (i.e. by using relatively simple calculations)? If giving such an approximation is hard for $[n..m)$, maybe some simple method exists for an upper bound for the range $(1..m)$?
One way to do it would be the following. The exact number of primes below a given number $x$ is given by the prime counting function, denoted as $\pi(x)$. For example, $\pi(10) = 4$ and $\pi(20) = 8$. However, to find $\pi(x)$ exactly it is required to count the primes explicitly, which is cumbersome. Wikipedia informs me of the following inequality (the lower bound valid for $x \ge 17$, the upper for $x > 1$): $$\frac{x}{\ln x} < \pi(x) < 1.25506\frac{x}{\ln x}$$ Therefore the number of primes between $m$ and $n$ with $m > n$ satisfies, heuristically, $$\pi(m) - \pi(n) < 1.25506\left(\frac{m}{\ln m} - \frac{n}{\ln n}\right)$$ which is a much easier computation than, say, $\operatorname{Li}(x)$. This is not a rigorous argument, but numerical evidence shows that the bound is fairly accurate. Caveat: Be warned that this bound can fail if $m$ and $n$ are too close. You might want to experiment with different values of $m$ and $n$ and perhaps adjust the bound in one way or another if, say, $m - n$ is smaller than a threshold value. EDIT: An alternative upper bound would be $$1.25506\frac{m}{\ln m} - \frac{n}{\ln n}, \quad n > 17$$ which is safer to use (it combines the upper bound at $m$ with the lower bound at $n$), but is much more inaccurate.
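A quick experiment shows how the rigorous bound from the EDIT behaves in practice (using sympy's exact prime-counting function only as a reference for the check):

```python
from math import log
from sympy import primepi  # exact pi(x), used only to verify the bound

def upper_bound(n, m):
    # pi(m) - pi(n) < 1.25506*m/ln(m) - n/ln(n), valid for 17 <= n < m
    return 1.25506 * m / log(m) - n / log(n)

for n, m in [(20, 100), (100, 10_000), (1_000, 1_000_000)]:
    print(n, m, int(primepi(m) - primepi(n)), round(upper_bound(n, m), 1))
```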
{ "language": "en", "url": "https://math.stackexchange.com/questions/479798", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Show (via differentiation) $1-2+3-4+\cdots+(-1)^{n-1}n$ is $-\frac{n}{2}$ for $n$ even, $\frac{(n+1)}{2}$ for $n$ odd. i) By considering $(1+x+x^2+\cdots+x^n)(1-x)$ show that, if $x\neq 1$, $$1+x+x^2+\cdots+x^n=\frac{(1-x^{n+1})}{1-x}$$ ii) By differentiating both sides and setting $x=-1$ show that $$1-2+3-4+\cdots+(-1)^{n-1}n$$ takes the value $-\frac{n}{2}$ if $n$ is even and the value $\frac{(n+1)}{2}$ if $n$ is odd. For part i) I just simplified the LHS, divided by $(1-x)$ and got the desired result. For the next part I found the derivative of both sides, and set $x=-1$, giving me: $$1-2+3-4+\cdots+(-1)^{n-1}n = \frac{2\left(-(n+1)(-1)^n\right)-\left(1-(-1)^{n+1}\right)(-1)}{4} = \frac{-2(n+1)(-1)^n+1+(-1)^{n+2}}{4}$$ However I'm not understanding the part about $n$ being even and odd. If $n$ is even, does this mean that $n = 2n$ and if it is odd, $n = 2n+1/2n-1$? What would be the next step? Thanks
If $n$ is even, $n = 2k$ for some integer $k$. Then $(-1)^n = (-1)^{2k} = ((-1)^2)^k = 1^k = 1$ and $(-1)^{n + 2} = (-1)^n\times(-1)^2 = 1\times 1 = 1$. Therefore, we have \begin{align*} \frac{-2(n+1)(-1)^n + 1 +(-1)^{n+2}}{4} &= \frac{-2(n+1)\times 1 + 1 + 1}{4}\\ &= \frac{-2(n+1) + 2}{4}\\ &= \frac{-2n -2 + 2}{4}\\ &= \frac{-2n}{4}\\ &= -\frac{n}{2}. \end{align*} Can you follow the steps to do the odd case?
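For completeness, the odd case runs the same way: if $n = 2k+1$ then $(-1)^n = (-1)^{n+2} = -1$, so $$\frac{-2(n+1)(-1)^n + 1 + (-1)^{n+2}}{4} = \frac{2(n+1) + 1 - 1}{4} = \frac{n+1}{2}.$$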
{ "language": "en", "url": "https://math.stackexchange.com/questions/479878", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 1, "answer_id": 0 }
In what manner are functions sets? From Introduction to Topology, Bert Mendelson, ed. 3, page 15: A function may be viewed as a special case of what is called a relation. Yet, a relation is a set ("A relation $R$ on a set $E$ is a subset of $E\times E$."), while a function is a correspondence or rule. Is then a function also a set?
Let's begin at the other end. A function can be regarded as a rule for assigning a unique value $f(x)$ to each $x$. Let's construct a set $F$ of all the ordered pairs $(x,f(x))$. If $f:X\to Y$ we need every $x\in X$ to have a value $f(x)$. So for each $x$ there is an ordered pair $(x,y)$ in the set for some $y\in Y$. We also need $f(x)$ to be uniquely defined by $x$ so that whenever the set contains $(x,y)$ and $(x,z)$ we have $y=z$. In this way there is a unique $f(x)$ for each $x$ as we require. The ordered pairs can be taken as elements of $X \times Y$ so that $F\subset X \times Y$ If we have a set of ordered pairs with the required properties, we can work backwards and see that this gives us back our original idea of a function.
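In symbols: a function $f:X\to Y$ "is" a set $F\subseteq X\times Y$ such that $$(\forall x\in X)(\exists y\in Y)\,\bigl((x,y)\in F\bigr)\quad\text{and}\quad\bigl((x,y)\in F\wedge(x,z)\in F\bigr)\implies y=z,$$ and $f(x)$ is then just notation for the unique $y$ with $(x,y)\in F$.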
{ "language": "en", "url": "https://math.stackexchange.com/questions/479936", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 4, "answer_id": 1 }
finding the limit $\lim\limits_{x \to \infty }(\frac{1}{e}(1+\frac{1}{x})^x)^x$ Can someone show me how to calculate the limit: $$\lim_{x \to \infty }\left(\frac{1}{e}\left(1+\frac{1}{x}\right)^x\right)^x $$ I tried to use Taylor series but failed. Thanks
Take the logarithm: $$\begin{align} \log \left(\frac{1}{e}\left(1+\frac{1}{x}\right)^x\right)^x &= x\left(\log \left(1+\frac1x\right)^x - 1\right)\\ &= x\left(x\log\left(1+\frac1x\right)-1\right)\\ &= x\left(-\frac{1}{2x} + O\left(\frac{1}{x^2}\right)\right) \longrightarrow -\frac12, \end{align}$$ so the original limit is $e^{-1/2}$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/480003", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 2, "answer_id": 0 }
Integral in spherical coordinates, $\Omega$ is the unit sphere, of $\iiint_\Omega 1/(2+z)^2dx\ dy\ dz$ $$\iiint_\Omega \frac{1}{(2+z)^2}dx\ dy\ dz$$ There is a VERY similar question How to integrate $\iiint\limits_\Omega \frac{1}{(1+z)^2} \, dx \, dy \, dz$ here But this is different. I like my spherical coordinates to have the angle in the x/z plane taken from "3 o'clock" as normal, rather than from 12. So anyway, I got this: $$\iiint_\Omega \frac{1}{(2+p\sin(\theta))^2}p^2 \cos(\theta)dp\ d\theta\ d\psi$$ Over $$\Omega = \lbrace(p,\theta,\psi)|\theta\in\left[-\frac{\pi}{2},\frac{\pi}{2}\right],\psi\in\left[0,2\pi\right],p\in[0,1]\rbrace$$ I'm not sure what to do: I want to do a substitution, but I've been explicitly told to use spherical coordinates, and it'd be a multivariate substitution (involving $p$ and $\psi$). Thanks. Addendum: I'm hoping you guys will make me feel silly and it'll be obvious, but I've been looking at it for a while; I can't see partial fractions helping, I can't get the $p$ out of the denominator, I'm stumped, sadly. Thoughts: It actually seems quite easy in Cartesian. Also, why can't I substitute? I'm going to have a go, treating it first as an integral involving $2+\text{something}$, then deal with the $\sin$.
$$I=\int^1_0\int^\frac{\pi}{2}_{-\frac{\pi}{2}}\frac{\cos(\theta)p^2}{(2+p\sin(\theta))^2}\int^{2\pi}_0d\psi\ d\theta\ dp$$ $$=2\pi\int^1_0p\int^\frac{\pi}{2}_{-\frac{\pi}{2}}\frac{p\cos(\theta)}{(2+p\sin(\theta))^2}d\theta\ dp$$ Let $u=2+p\sin(\theta)$ then $\frac{du}{d\theta}=p\cos(\theta)$ Thus: $$du=p\cos(\theta)d\theta$$ $$\theta=\frac{\pi}{2}\rightarrow u=2+p$$ $$\theta=-\frac{\pi}{2}\rightarrow u=2-p$$ So: $$I=2\pi\int^1_0p\int^{2+p}_{2-p}\frac{1}{u^2}du\ dp$$ $$=2\pi\int^1_0p\left[\frac{1}{u}\right]^{2-p}_{2+p}dp$$ $$=2\pi\int^1_0p\left(\frac{1}{2-p}-\frac{1}{2+p}\right)dp$$ Integrate this by parts (differentiate p, integrate the brackets) Tidy up to get: $$I=4\pi[\ln(3)-1]$$
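As a numerical sanity check of the closed form, one can feed the same spherical-coordinate integrand to scipy (the $\psi$ integral contributes the factor $2\pi$):

```python
import numpy as np
from scipy.integrate import dblquad

# integrand in spherical coordinates, psi already integrated out
f = lambda theta, p: p**2 * np.cos(theta) / (2 + p * np.sin(theta))**2
val, _ = dblquad(f, 0, 1, lambda p: -np.pi/2, lambda p: np.pi/2)

print(2 * np.pi * val)               # numerical value of the integral
print(4 * np.pi * (np.log(3) - 1))   # closed form; both approximately 1.239
```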
{ "language": "en", "url": "https://math.stackexchange.com/questions/480129", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Gradient of $X^TAY$ with respect to $A$ How can I find the gradient of the following function with respect to $A$? $$ F(A) = X^T \cdot A \cdot Y $$ where $X$ and $Y$ are $m\times 1$ vectors and $A$ is an $m\times m$ matrix
Writing $F(A + \Delta A) = X^T A Y + X^T \Delta A Y$, we see that $F(A + \Delta A) - F(A) = X^T \Delta A Y$; the error terms vanish exactly as they would for a plain old vanilla-flavored scalar function $f(x) = ax$: $f( x + \Delta x) - f(x) = a(x + \Delta x) - ax = a \Delta x$: thus the derivative is a constant linear map; we have $DF(\Delta A) = X^T (\Delta A) Y$. The derivative of any linear map is a constant and equal to the map itself. For more info., see this Wikipedia page: http://en.m.wikipedia.org/wiki/Matrix_calculus. Hope this helps! Cheers!
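In components, $F=\sum_{i,j}x_iA_{ij}y_j$, so $\partial F/\partial A_{ij}=x_iy_j$; collected as a matrix (in the common layout convention) the gradient is $XY^T$. A minimal finite-difference check in Python:

```python
import numpy as np

rng = np.random.default_rng(0)
m = 4
X, Y = rng.normal(size=(m, 1)), rng.normal(size=(m, 1))
A = rng.normal(size=(m, m))

grad = X @ Y.T                          # claimed gradient of F(A) = X^T A Y
F = lambda A: (X.T @ A @ Y).item()

E = np.zeros((m, m)); E[1, 2] = 1e-6    # perturb a single entry of A
print((F(A + E) - F(A)) / 1e-6)         # numerical partial derivative
print(grad[1, 2])                       # matches x_1 * y_2
```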
{ "language": "en", "url": "https://math.stackexchange.com/questions/480207", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Any finite set is a null-set How can we prove that a finite set is a null-set? Maybe it would be easier to prove that the outer measure of a finite set is $0$? Any ideas on how to tackle this problem? Thanks.
You can just as easily prove that a countably infinite set $\{a_1,a_2,a_3,\dots\}$ is null by putting an interval of width $\frac{\epsilon}{2^n}$ about $a_n$. The above paragraph assumed we were working in the reals, but a similar idea works for $\mathbb{R}^n$.
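For the finite case asked about, the same idea needs only finitely many intervals: given $\{a_1,\dots,a_N\}$ and $\epsilon>0$, the intervals $\left(a_k-\frac{\epsilon}{2N},\,a_k+\frac{\epsilon}{2N}\right)$ cover the set with total length $\epsilon$. Since $\epsilon$ was arbitrary, the outer measure is $0$.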
{ "language": "en", "url": "https://math.stackexchange.com/questions/480256", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 0 }
Predicate Calculus statement I've been asked to write down a statement using predicate calculus and it is confusing me a great deal. I've got statement A "no dog can fly" and B "There is a dog which can fly" D = set of all dogs , F = set of all creatures that can fly P(x) is the proposition that "creature x can fly" Q(x) is the proposition that "creature x is a dog" How do I write statements A and B using predicate calculus in terms of P(x)? I wrote for A: ∀x(P(x)→¬Q(x) and B: ∃x(P(x)→Q(x) but this doesn't seem right to me at all. Anyone got a suggestion?
The statement A is OK, apart from a missing parenthesis. Statement B should be something like $\exists x(Q(x)\land P(x))$. Your version of B would be true if there were, for example, no flying creatures. There are always many equivalent ways of stating things. Closer in tone to the English statement of A is $\forall x(Q(x)\longrightarrow \lnot P(x))$. Or maybe $\lnot\exists x(Q(x)\land P(x))$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/480332", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Two functions agreeing except on set of measure zero Let $f,g:S\rightarrow\mathbb{R}$; assume $f$ and $g$ are integrable over $S$. Show that if $f$ and $g$ agree except on a set of measure zero, then $\int_Sf=\int_Sg$. Since $f$ and $g$ are integrable over $S$, we have $f-g$ also integrable over $S$. So $(f-g)(x)=0$ except on a set of measure zero. If $(f-g)$ were bounded, we could choose a partition so that the volume covering the points $x$ such that $(f-g)(x)\neq 0$ is less than any $\epsilon$, which would imply that $\int_S(f-g)=0$, and so $\int_S f=\int_S g$. But here we don't have boundedness. How can we proceed from here?
If you want to go with all the details starting from the definition, do it like this: $$ \left| \int_S (f - g) \, d\mu \right| \le \int_S |f-g| d\mu = \sup \left\{ \left. \int_S \varphi \, d\mu \, \right| \, 0 \le \varphi \le |f-g|, \varphi \in \mathcal L(S)\right\} $$ where I wrote $\mathcal L(S)$ for the set of all linear combinations of characteristic functions (a characteristic function is a function which is $1$ on some measurable set and $0$ elsewhere). Since for these functions, the integral is defined as $$ \int_S \left( \sum_{i=1}^n a_i \mathbb 1_{A_i} \right) d\mu = \sum_{i=1}^n a_i \mu(A_i), $$ the condition $0 \le \varphi \le |f-g|$ (taking the $A_i$ pairwise disjoint and the $a_i$ positive, as we may) ensures that $a_i \neq 0 \implies \mu(A_i) = 0$, hence $\int_S \varphi d\mu = 0$ for every $\varphi$ with $0 \le \varphi \le |f-g|$. Taking the supremum over a bunch of zeros is zero. Hope that helps,
{ "language": "en", "url": "https://math.stackexchange.com/questions/480403", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 3, "answer_id": 0 }