How to prove an Inequality I'm a beginner with proofs and I got the following exercise: Prove the inequality $$(a + b)\Bigl(\frac{1}{a} + \frac{4}{b}\Bigr) \ge 9$$ when $a > 0$ and $b > 0$. Determine when the equality occurs. I'm lost; could you give me a tip on where to start, or maybe point me to a good resource for beginners in proofs? Thanks in advance.
We can play with the inequality: let's suppose it was true for now, and perform a series of reversible steps and see what the inequality would imply. So we begin by multiplying everything out, which gives: $1 + 4 + \dfrac{b}{a} + \dfrac{4a}{b} \geq 9 \Leftrightarrow \dfrac{b}{a} + \dfrac{4a}{b} \geq 4$. (1) Now, when working with inequalities there are some common results that you use; one of these is the AM-GM inequality which states: $x + y \geq 2\sqrt{xy}$, where $x$ and $y$ are nonnegative. (Square both sides and collect the terms to one side to see why this is true). Letting $x = b/a$ and $y = 4a/b$ yields that $\dfrac{b}{a} + \dfrac{4a}{b} \geq 2\sqrt{\dfrac{b}{a}\cdot \dfrac{4a}{b}} = 2\sqrt{4} = 4$, which is what we wanted to show in line (1). Since we can reverse all of our steps (we expanded and subtracted 5; to reverse this add 5 and then factor), we have proven the original inequality. (Note: a formal solution would start with this last line and work "upwards.")
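For readers who want to see this concretely, here is a quick numerical sanity check of the inequality and of the equality case $b=2a$ (a minimal Python sketch; the sampling range and count are arbitrary choices, not part of the proof):

```python
import random

# Spot-check (a + b)(1/a + 4/b) >= 9 for random positive a, b.
for _ in range(10_000):
    a = random.uniform(0.01, 100.0)
    b = random.uniform(0.01, 100.0)
    assert (a + b) * (1 / a + 4 / b) >= 9 - 1e-9

# Equality case: b/a = 4a/b forces b = 2a, and then
# (a + 2a)(1/a + 2/a) = 3a * (3/a) = 9 exactly.
a = 1.7
print((a + 2 * a) * (1 / a + 4 / (2 * a)))  # 9.0 (up to rounding)
```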
{ "language": "en", "url": "https://math.stackexchange.com/questions/207521", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10", "answer_count": 8, "answer_id": 3 }
counting the number of loop runs I have the following loop structure:
for $i_1=1$ to $m$
  for $i_2=i_1$ to $m$
    $\vdots$
      for $i_n=i_{n-1}$ to $m$
Of course, all indices $i_k$ are integers, and $m$ and $n$ are also positive integers. How can I count how many times the inner loop will run?
This is a problem in which it really helps to look at some small examples or to start with the simplest version and work up (or both!). The simplest version is $n=0$, which isn’t very interesting. The next simplest is $n=1$, which is also pretty trivial. The first interesting case is $n=2$, but it turns out to be helpful to have looked at some variants of the $n=1$ case first. Suppose that there’s one instruction $I$ inside the nest of loops. If there are no loops, $I$ is executed once. If there’s one loop running from $i$ to $m$, $I$ is executed $m-i+1$ times. In particular, if there’s one loop running from $1$ to $m$, it’s executed $m$ times. Now suppose that there are two loops. The inner loop runs from $i_1$ to $m$ for each value of $i_1$ from $1$ to $m$. Thus, $I$ is executed $m-i_1+1$ times for each value of $i_1$ from $1$ to $m$, for a total of $$\sum_{i_1=1}^m(m-i_1+1)=\sum_{k=1}^mk=\binom{m+1}2\;.$$ How many times is $I$ executed if the outer loop runs from $j$ to $m$ instead of from $1$ to $m$? That would be $$\sum_{i_1=j}^m(m-i_1+1)=\sum_{k=1}^{m-j+1}k=\binom{m-j+2}2\;.$$ Now suppose that there are three loops. The second loop runs from $i_1$ to $m$ for each value of $i_1$ from $1$ to $m$, so $I$ is executed a grand total of $$\sum_{i_1=1}^m\binom{m-i_1+2}2\tag{1}$$ times. You’ve probably seen an identity involving binomial coefficients that will let you simplify $(1)$ to a single binomial coefficient. You may now already be able to generalize to get the result for arbitrary $n$; if not, try repeating the argument to derive the result for $n=4$. Once you have the result, you’ll need to prove it by induction.
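The closed form these computations lead to is $\binom{m+n-1}{n}$, the number of nondecreasing index tuples. A brute-force cross-check (a Python sketch; the test ranges are arbitrary):

```python
from itertools import combinations_with_replacement
from math import comb

# The inner body runs once per nondecreasing tuple 1 <= i_1 <= ... <= i_n <= m.
def brute(m, n):
    return sum(1 for _ in combinations_with_replacement(range(1, m + 1), n))

for m in range(1, 7):
    for n in range(5):
        assert brute(m, n) == comb(m + n - 1, n)
print("counts agree with C(m+n-1, n)")
```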
{ "language": "en", "url": "https://math.stackexchange.com/questions/207590", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
Prove $ \int_{cX} \frac{dt}{t} = \int_{X} \frac{dt}{t}$ for every Lebesgue measurable set $X$ Let $c>0$. Let $X \subseteq (0,\infty)$ be a Lebesgue measurable set. Define $$ cX := \{ cx \mid x \in X \}. $$ Then $$ \int_{cX} \frac{dt}{t} = \int_{X} \frac{dt}{t}$$ Now I can prove this for $X$ an interval and, thus, any set generated by set operations on intervals. It is simply by using the Fundamental Theorem of Calculus and natural log $\ln$. But I'm not sure how to approach for general Lebesgue measurable set.
Suppose $m$ and $n$ are non-negative measures and $c$ is a positive number and $n=m/c$. Can you show that $$ \int_A f\,dm = \int_A (c f)\,dn\text{ ?} $$ If you can, let $A=cX$, $m=$ Lebesgue measure, $f(t)=1/t$. Then find a one-to-one correspondence between $A=cX$ and $X$ such that the value of $1/t$ for $t\in X$ is the same as the value of $cf(t)$ for $t\in A$, and think about that.
{ "language": "en", "url": "https://math.stackexchange.com/questions/207734", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Size of factors in number ring Let $R$ be the ring ${\mathbb Z}[\sqrt{2}]$. For $z\in R$, $z=x+y\sqrt{2}\in R$ with $x$ and $y$ in $\mathbb Z$, put $\|z\|={\sf max}(|x|,|y|)$ and $$D_z=\left\{ (a,b) \in R^2 : ab=z,\ a \text{ and } b \text{ are not units in } R\right\}$$ and $$ \rho(z)= \begin{cases} 0,& \text{ if} \ D_z=\emptyset; \\ {\sf min}\left\{\|a\| : (a,b) \in D_z\right\}, & \text{ if} \ D_z\neq\emptyset. \end{cases} $$ Finally, set $\mu(m)={\sf max}\{\rho(z) : \|z\| \leq m\}$ for an integer $m$. Is anything at all known about the growth and the asymptotics of $\mu$? For example, are there any polynomial or exponential, upper or lower bounds? Update 09/10/2012 : It seems that there is a linear bound, namely $\rho(z) \leq \|z\|$. To show this, it would suffice to show that when ${\sf gcd}(x,y)=1$ and $p$ is a prime divisor of $x^2-2y^2$, then we have a representation $p=a^2-2b^2$ with ${\sf max}(|a|,|b|) \leq {\sf max}(|x|,|y|)$. The existence of $(a,b)$ (without the last inequality) is well-known, but the bound on $|a|$ and $|b|$ does not seem easy to establish.
If $x$ is even in $z=x+y\sqrt 2$ then $z$ is a multiple of $\sqrt 2$, hence $\rho(z)\le 1$. On the other hand, if $|N(z)|=|x^2-2y^2|$ is the square of a prime $p$ (and $x+y\sqrt 2$ is not a prime in $R$), then we must have $N(a+b\sqrt2)=a^2-2b^2=\pm p$, which implies $a^2\ge p$ or $b^2\ge \frac p2$, hence $\|a+b\sqrt 2\|\ge \sqrt{\frac p2}$ and finally $\rho(z)\ge \sqrt{\frac p2}$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/207795", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Expected value of applying the sigmoid function to a normal distribution Short version: I would like to calculate the expected value if you apply the sigmoid function $\frac{1}{1+e^{-x}}$ to a normal distribution with expected value $\mu$ and standard deviation $\sigma$. If I'm correct this corresponds to the following integral: $$\int_{-\infty}^\infty \frac{1}{1+e^{-x}} \frac{1}{\sigma\sqrt{2\pi}}\ e^{ -\frac{(x-\mu)^2}{2\sigma^2} } dx$$ However, I can't solve this integral. I've tried manually, with Maple and with Wolfram|Alpha, but didn't get anywhere. Some background info (why I want to do this): Sigmoid functions are used in artificial neural networks as an activation function, mapping a value of $(-\infty,\infty)$ to $(0,1)$. Often this value is used directly in further calculations but sometimes (e.g. in RBMs) it's first stochastically rounded to a 0 or a 1, with the probability of a 1 being that value. The stochasticity helps the learning, but is sometimes not desired when you finally use the network. Just using the normal non-stochastic methods on a network that you trained stochastically doesn't work though. It changes the expected result, because (in short): $$\operatorname{E}[S(X)] \neq S(\operatorname{E}[X])$$ for most X. However, if you approximate X as a normal distribution and could somehow calculate this expected value, you could eliminate most of the bias. That's what I'm trying to do.
Apart from the Maclaurin approximation, the usual way to compute that integral in Statistics is to approximate the sigmoid with a probit function. More specifically $\mathrm{sigm}(a) \approx \Phi(\lambda a)$ with $\lambda^2=\pi/8$. Then the result would be: $$\int \mathrm{sigm}(x) \, N(x \mid \mu,\sigma^2) \, dx \approx \int \Phi(\lambda x) \, N(x \mid \mu,\sigma^2) \, dx = \Phi\left(\frac{\mu}{\sqrt{\lambda^{-2} + \sigma^2}}\right).$$
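A quick numerical comparison of the probit approximation against direct quadrature (a Python sketch assuming SciPy is available; the test parameters are arbitrary):

```python
import numpy as np
from scipy.integrate import quad
from scipy.stats import norm

lam2 = np.pi / 8  # lambda^2 = pi/8

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

for mu, sigma in [(0.0, 1.0), (1.5, 0.5), (-2.0, 3.0)]:
    exact, _ = quad(lambda x: sigmoid(x) * norm.pdf(x, mu, sigma), -np.inf, np.inf)
    approx = norm.cdf(mu / np.sqrt(1.0 / lam2 + sigma ** 2))
    print(f"mu={mu:5.1f} sigma={sigma:3.1f}  quad={exact:.4f}  probit={approx:.4f}")
```

The two columns typically agree to two or three decimal places, which is the accuracy one expects from matching $\mathrm{sigm}(a)\approx\Phi(\lambda a)$ at the origin.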
{ "language": "en", "url": "https://math.stackexchange.com/questions/207861", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "23", "answer_count": 3, "answer_id": 1 }
Eigenvalues of a rectangular matrix I've read that the singular values of a matrix satisfy $$\sigma_k=\sqrt{\lambda_{k}},$$ where the $\lambda_k$ are eigenvalues, but I'm assuming this only applies to square matrices. How could I determine the eigenvalues of a non-square matrix? Pardon my ignorance.
Eigenvalues aren't defined for rectangular matrices, but the singular values are closely related: the singular values of a rectangular matrix $M$ are the square roots of the eigenvalues of $M'M$ (equivalently, the nonzero ones of $MM'$), and the right and left singular vectors of $M$ are the corresponding eigenvectors of $M'M$ and $MM'$.
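A small check of this relationship (a NumPy sketch; the matrix size and seed are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.standard_normal((5, 3))

s = np.linalg.svd(M, compute_uv=False)    # singular values, descending
lam = np.linalg.eigvalsh(M.T @ M)[::-1]   # eigenvalues of M'M, descending
print(np.allclose(s, np.sqrt(lam)))       # True
```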
{ "language": "en", "url": "https://math.stackexchange.com/questions/207991", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 2, "answer_id": 0 }
Modular Arithmetic order of operations In an assignment, I am given $E_K(M) = M + K \pmod {26}$. This formula is to be applied twice in a formal proof, so that we have $E_a(E_b(M)) =\ ...$. What I'm wondering is; is the original given formula equal to $(M + K)\pmod{26}$, or $M + (K \mod{26})$? This will obviously make a big difference down the line. I do suspect the first ($(M + K)\pmod{26}$), however I want to be certain before I move forward in my proof. NB: I did not tag this as homework as this is not actually part of the problem, rather just a clarification.
I suspect that what is meant is $(M+K)\bmod 26$, where $\bmod$ is the operator, especially if this is in a cryptographic context. More careful writers reserve the parenthesized notation $\pmod{26}$ for the relation of congruence modulo $26$, using it only in connection with $\equiv$, as in $27\equiv 53\pmod{26}$. Thus, if $M=K=27$, the intended result is probably $54\bmod 26=2$, not $27+(27\bmod 26)=28$.
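A two-line check of the two parses, using the same numbers as above (Python, where % is the mod operator):

```python
M = K = 27
print((M + K) % 26)   # 2  -- the (M + K) mod 26 reading
print(M + (K % 26))   # 28 -- the M + (K mod 26) reading
```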
{ "language": "en", "url": "https://math.stackexchange.com/questions/208028", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 2 }
What's the probability of losing a coin-tossing gamble with a wealthy man? Imagine you have $k$ dollars in your pocket and you are gambling with a wealthy man (with infinitely much money). The rule is to repeatedly toss a coin: you win $\$1$ if it shows heads, otherwise you lose $\$1$. Now, what's the probability that you eventually go broke?
The probability is $1$. This is the gambler’s ruin problem, and see also one-dimensional random walks.
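To see the limit concretely: against an opponent whose wealth caps the total stake at $N$, the classical gambler's-ruin formula for a fair coin gives ruin probability $1-k/N$, which tends to $1$ as $N\to\infty$. A Monte Carlo sketch (Python; the trial count is an arbitrary choice):

```python
import random

def ruin_prob(k, N, trials=5_000):
    """Estimate P(hit 0 before N) for a fair +/-1 walk started at k."""
    ruined = 0
    for _ in range(trials):
        x = k
        while 0 < x < N:
            x += random.choice((-1, 1))
        ruined += (x == 0)
    return ruined / trials

k = 5
for N in (10, 50, 200):
    print(N, ruin_prob(k, N), 1 - k / N)  # estimate vs. exact 1 - k/N
```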
{ "language": "en", "url": "https://math.stackexchange.com/questions/208095", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Integral of sinc function multiplied by Gaussian I am wondering whether the following integral $$\int_{-\infty}^{\infty} \frac{\exp( - a x^2 ) \sin( bx )}{x} \,\mathrm{d}x$$ exists in closed form. I would like to use it for numerical calculation and find an efficient way to evaluate it. If analytical form does not exist, I really appreciate any alternative means for evaluating the integral. One method would be numerical quadrature including Gaussian quadrature, but it may be inefficient when the parameters $a$ and $b$ are very different in scale. EDIT : In view of this discussion, we have decided to add OP's self-answer to the end of the question, for it does not qualify as an answer yet contains vital details. The copying is unabridged. Thanks very much for your comments, and the following result was obtained including the case for $x_0 \ne 0$: $$ \int_{-\infty}^{\infty} dx \exp[-a(x-x_0)^2] \frac{ \sin(bx) }{ x } = \pi \exp(-a x_0^2) \mathrm{Re}\left(\mathrm{erf}\left[\frac{b+2iax_0}{2\sqrt{a}}\right] - \mathrm{erf}\left[\frac{2iax_0}{2\sqrt{a}}\right]\right) $$ where $a\gt0, b, x_0$ are assumed to be all real. (note: coefficients etc may be still wrong...) This integral appears in a type of electronic structure calculation based on a grid representation (sinc-function basis). I believe the above result should be definitely useful. Thanks much!! --jaian
We assume $a,b>0$. Then $$\begin{eqnarray*} \int_{-\infty}^\infty dx\, e^{-a x^2}\frac{\sin b x}{x} &=& \int_0^b d\beta \, \int_{-\infty}^\infty dx\, e^{-a x^2} \cos \beta x \\ &=& \int_0^b d\beta \, \mathrm{Re} \int_{-\infty}^\infty dx\, e^{-a x^2+i \beta x} \\ &=& \int_0^b d\beta \, \mathrm{Re}\, \sqrt{\frac{\pi}{a}} e^{-{\beta}^2/(4a)} \\ &=& \pi \, \mathrm{erf}\left(\frac{b}{2\sqrt{a}}\right). \end{eqnarray*}$$ This approach can be generalized to $x_0\ne0$.
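A numerical confirmation of the closed form (a Python sketch assuming SciPy; the parameter choices are arbitrary):

```python
import numpy as np
from math import erf, pi, sqrt
from scipy.integrate import quad

def integrand(x, a, b):
    if x == 0.0:
        return b  # sin(bx)/x -> b as x -> 0
    return np.exp(-a * x * x) * np.sin(b * x) / x

for a, b in [(1.0, 1.0), (0.5, 3.0), (4.0, 0.2)]:
    num, _ = quad(integrand, -np.inf, np.inf, args=(a, b))
    closed = pi * erf(b / (2 * sqrt(a)))
    print(f"a={a} b={b}  quad={num:.6f}  pi*erf={closed:.6f}")
```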
{ "language": "en", "url": "https://math.stackexchange.com/questions/208250", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 3, "answer_id": 2 }
Intersection and Span Assume $S_{1}$ and $S_{2}$ are subsets of a vector space $V$. It has already been proved that $\operatorname{span}(S_1 \cap S_2) \subseteq \operatorname{span}(S_{1}) \cap \operatorname{span}(S_{2})$. There seem to be many cases where $\operatorname{span}(S_1 \cap S_2) = \operatorname{span}(S_{1}) \cap \operatorname{span}(S_{2})$, but not many where $\operatorname{span}(S_1 \cap S_2) \not= \operatorname{span}(S_{1}) \cap \operatorname{span}(S_{2})$. Please help me find an example. Thanks.
HINT: Let $S_1=\{v\}$, where $v$ is a non-zero vector, and let $S_2=\{2v\}$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/208311", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 0 }
Combination with repetitions. The formula for computing a $k$-combination with repetitions from $n$ elements is: $$\binom{n + k - 1}{k} = \binom{n + k - 1}{n - 1}$$ I would like it if someone could give me a simple, basic proof that a beginner can understand easily.
To make things clearer, suppose there are $n$ different kinds of drinks available in a restaurant, and there are $m$ people, including yourself, at your birthday party. Each one is to be served exactly one drink, as he or she orders, with no restriction: two or more people can ask for different drinks or for the same drink, as each wishes. Obviously, at most $m-1$ people can ask for a brand of drink that has already been requested by other people, so there are at most $m-1$ repeated copies among the chosen brands. Suppose $i$ of the people order $i$ different brands out of the available $n$; these brands can be chosen in $n\choose i$ ways, and $i$ can vary from $1$ to $m$. Furthermore, the remaining $m-i$ people must order the same brand(s) as already chosen by those $i$ people, which can be done in $m-1\choose m-i$ ways. Thus, for each $i$ the drinks that can be served give ${n\choose i}{m-1\choose m-i}$ options, and the total number of options is the summation of ${n\choose i}{m-1\choose m-i}$ over $i$. By Vandermonde's identity, this is equal to $n+m-1\choose m$. Therefore, the total number of options is ${n+m-1\choose m}= \frac{(n+m-1)!}{m!(n-1)!}$. So, if $n=5$ and $m=3$, then the total number of options is $\frac{(5+3-1)!}{3!(5-1)!} = \frac{7!}{3!4!} = 35$.
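A brute-force confirmation of the count (a Python sketch; it literally enumerates the multisets of drinks):

```python
from itertools import combinations_with_replacement
from math import comb, factorial

n, m = 5, 3  # 5 brands, 3 people
brute = sum(1 for _ in combinations_with_replacement(range(n), m))
formula = factorial(n + m - 1) // (factorial(m) * factorial(n - 1))
print(brute, formula, comb(n + m - 1, m))  # 35 35 35
```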
{ "language": "en", "url": "https://math.stackexchange.com/questions/208377", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "37", "answer_count": 9, "answer_id": 8 }
Can vectors be inverted? I wish to enquire if it is possible to solve the equation below for $c$. $$B^{-1}(x-\mu) = xc $$ Here obviously $B$ is an invertible matrix and both $c$ and $\mu$ are column vectors. Would the solution be $$x^{-1}B^{-1}(x-\mu) = c\,?$$ Is it possible to invert vectors? How about if it were the other way: $$B^{-1}(x-\mu) = cx $$ Is there any other way to do this? Thanks in advance.
Vectors, in general, can't be inverted under matrix multiplication, since only square matrices can have inverses. However, in the situation you've described, it's possible to compute $c$ anyway, assuming the equation is satisfied for some $c$. If we multiply both sides by $x^T$, the result is $x^T B^{-1} (x-\mu) = x^T x c = |x|^2 c$. Assuming $x$ is not the zero vector (in which case $xc=0$ for every $c$, so the equation holds either for all $c$ or for none), we just get $c= \frac{1}{|x|^2} x^T B^{-1} (x-\mu)$. I must caution that the expression above for $c$ is defined even when there is no solution of the original equation, which will be the case almost all of the time for randomly generated vectors and matrices. Hence, if you are going to use it, you should check that it works by plugging what you get for $c$ back into the original expression. Also, the choice of $\frac{x^T}{|x|^2}$ is not unique; any row vector $v$ such that $vx=1$ will work equally well in the above expression.
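A concrete run of this recipe (a NumPy sketch; the matrix, vector, and scalar are arbitrary, and $\mu$ is constructed so that a solution exists):

```python
import numpy as np

rng = np.random.default_rng(1)
B = rng.standard_normal((4, 4)) + 4 * np.eye(4)  # comfortably invertible
x = rng.standard_normal(4)
c_true = 0.37
mu = x - B @ (c_true * x)   # forces B^{-1}(x - mu) = c_true * x to hold

c = x @ np.linalg.solve(B, x - mu) / (x @ x)     # c = x^T B^{-1}(x - mu) / |x|^2
print(c)                                         # 0.37 (up to rounding)
# As cautioned above: verify by substituting c back into the equation.
print(np.allclose(np.linalg.solve(B, x - mu), c * x))  # True
```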
{ "language": "en", "url": "https://math.stackexchange.com/questions/208447", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "22", "answer_count": 5, "answer_id": 1 }
Extrapolating to derive an $O(h^3)$ formula The Forward difference formula can be expressed as: $$f'(x_0)={1 \over h}[f(x_0+h)-f(x_0)]-{h\over 2}f''(x_0)-{h^2 \over 6}f'''(x_0)+O(h^3)$$ Use extrapolation to derive an $O(h^3)$ formula for $f'(x_0)$. I am unsure how to begin, but from what I have seen in the textbook, extrapolating is replacing the $h$ with $2h$ for example, multiplying the equation by some value, then subtracting it from its original equation. I am not exactly sure what the process is, but this is the answer: $$f'(x_0)={1 \over 12h}[f(x_0+4h)-12f(x_0+2h)+32f(x_0+h)-21f(x_0)]$$ Maybe this will be helpful. Here is the example in the textbook. You gotta go to the previous page, which is page 190. Start reading from "which implies that". It's the third line from the top of the page.
Given: $$f'(x_0)={1 \over h}[f(x_0+h)-f(x_0)]-{h\over 2}f''(x_0)-{h^2 \over 6}f'''(x_0) + O(h^3) \tag{1}$$ Replace $h$ with $2h$ and simplify: $$f'(x_0)={1 \over {2h}}[f(x_0+2h)-f(x_0)]-hf''(x_0)-{2\over 3}h^2f'''(x_0) + O(h^3) \tag{2}$$ Subtract $\frac{1}{2} (2)$ from $(1)$: $$f'(x_0) - \frac{1}{2}f'(x_0)={1 \over h}[f(x_0+h)-f(x_0)] - {1 \over {4h}}[f(x_0+2h)-f(x_0)] + \frac{1}{6}h^2f'''(x_0) + O(h^3) \tag{3}$$ Multiply $(3)$ by $2$ and rewrite: $$f'(x_0) = \frac{-f(x_0+2h) + 4f(x_0 + h) - 3f(x_0)}{2h} + \frac{1}{3}h^2 f'''(x_0) + O(h^3) \tag{4}$$ You now have $O(h^2)$. Replace $h$ with $2h$ in $(4)$, multiply and subtract. You now have $O(h^3)$.
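A convergence check of the final $O(h^3)$ formula from the question (a Python sketch; halving $h$ should divide the error by roughly $8$):

```python
from math import sin, cos

f, fp, x0 = sin, cos, 1.0

def approx(h):
    return (f(x0 + 4*h) - 12*f(x0 + 2*h) + 32*f(x0 + h) - 21*f(x0)) / (12*h)

prev = None
for h in (0.1, 0.05, 0.025, 0.0125):
    err = abs(approx(h) - fp(x0))
    line = f"h={h:<7} error={err:.3e}"
    if prev is not None:
        line += f"  ratio={prev / err:.2f}"  # ~8 confirms O(h^3)
    print(line)
    prev = err
```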
{ "language": "en", "url": "https://math.stackexchange.com/questions/208505", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Distribution function given expectation and maximal variance I'm working on the following problem I got in a homework assignment, but I'm stuck. It just asks to find the distribution function of a random variable $X$ on a discrete probability space that takes values in $[A,B]$ and for which $Var(X) = \left(\frac{B-A}{2}\right)^{2}.$ I got that this equality gives the expected values $E(X) = \frac{A+B}{2}$ and $E(X^{2}) = \frac{A^{2}+B^{2}}{2}$, but I can't see why this gives a unique distribution (as the statement of the problem suggests). I also found the distribution function $p(x) = 0$ for $x \in (A,B)$ and $p(A)=\frac{1}{2}$, $p(B)=\frac{1}{2}$ that works, for example, but I don't see how this is the only one. Can anyone shed some light please? Thanks a lot!
Let $m=\frac12(A+B)$ and $h=\frac12(B-A)$. The OP indicates in a comment how to prove that any random variable $X$ with values in $[A,B]$ and such that $\mathrm{Var}(X)=h^2$ is such that $\mathbb E(X)=m$ and $\mathbb E((X-m)^2)=h^2$. Starting from this point, note that $|X(\omega)-m|\leqslant h$ for every $\omega$ since $A\leqslant X(\omega)\leqslant B$, hence $(X-m)^2\leqslant h^2$ everywhere. Together with the equality $\mathbb E((X-m)^2)=h^2$, this proves that $(X-m)^2=h^2$ almost surely, that is, $X\in\{A,B\}$ almost surely. Now, use once again the fact that $\mathbb E(X)=m$ to deduce that $\mathbb P(X=A)=\mathbb P(X=B)=\frac12$. Edit: Let us recall why $Y\geqslant0$ almost everywhere and $\mathbb E(Y)=0$ imply that $Y=0$ almost everywhere. Fix $\varepsilon\gt0$, then $Y\geqslant0$ almost everywhere hence $Y\geqslant\varepsilon\mathbf 1_{Y\geqslant\varepsilon}$ almost everywhere. This implies that $0=\mathbb E(Y)\geqslant\varepsilon\,\mathbb P(Y\geqslant\varepsilon)$, that is, $\mathbb P(Y\geqslant\varepsilon)=0$. Now, $[Y\ne0]=[Y\gt0]$ is the countable union over every positive integer $n$ of the events $[Y\geqslant1/n]$ hence $\mathbb P(Y\ne0)=0$. QED.
{ "language": "en", "url": "https://math.stackexchange.com/questions/208565", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
What is a hexagon? Having a slight parenting anxiety attack and I hate teaching my son something incorrect. Wiktionary tells me that a hexagon is a polygon with $6$ sides and $6$ angles. Why the $6$-angle requirement? This has me confused. Would the shape below also be considered a hexagon?
Yes, it is considered a hexagon. There is a difference between an irregular hexagon and a regular hexagon. A regular hexagon has sides that are segments of straight lines, all equal in length, and its interior angles are all equal to 120 degrees. An irregular hexagon has sides that may be of different lengths, and it follows that the interior angles are not all equal. Some interior angles may be greater than 180 degrees, but the sum of all interior angles is still 720 degrees. Hope this gives you an idea about it.
{ "language": "en", "url": "https://math.stackexchange.com/questions/208603", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "40", "answer_count": 11, "answer_id": 3 }
What is the limit of $\left(2\sqrt{n}\left(\sqrt{n+1}-\sqrt{n}\right)\right)^n$ as $n \to \infty$? I'd would like to know how to get the answer of the following problem: $$\lim_{n \to \infty} \left(2\sqrt{n}\left(\sqrt{n+1}-\sqrt{n}\right)\right)^n$$ I know that the answer is $\frac{1}{e^{1/4}}$, but I can't figure out how to get there. This is a homework for my analysis class, but I can't solve it with any of the tricks we learned there. This is what I got after a few steps, however it feels like this is a dead end: $$\lim_{n \to \infty} \left(2\sqrt{n}\times \frac{\left(\sqrt{n+1}-\sqrt{n}\right)\times \left(\sqrt{n+1}+\sqrt{n}\right)}{\left(\sqrt{n+1}+\sqrt{n}\right)}\right)^n=$$ $$\lim_{n \to \infty} \left(\frac{2\sqrt{n}}{\sqrt{n+1}+\sqrt{n}}\right)^n$$ Thanks for your help in advance.
Expanding the Taylor series of $\sqrt{1+x}$ near $x=0$ gives $ \sqrt{1+x} = 1 + \frac{x}{2} - \frac{x^2}{8} + \mathcal{O}(x^3).$ Since $\sqrt{n+1}-\sqrt{n}=\sqrt{n}\left(\sqrt{1+\frac1n}-1\right)$, we have $$ \left( 2\sqrt{n} ( \sqrt{n+1} - \sqrt{n} )\right)^n =2^n n^{n} \left(\sqrt{1+\frac{1}{n}}-1 \right)^n$$ $$=2^n n^{n} \left( \frac{1}{2n} - \frac{1}{8n^2}+ \mathcal{O}(1/n^3)\right)^n = \left(1 - \frac{1}{4n} + \mathcal{O}(n^{-2})\right)^n\to e^{-1/4}.$$
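A numerical check, using the rationalized form $\frac{2\sqrt n}{\sqrt{n+1}+\sqrt n}$ from the question to avoid catastrophic cancellation (Python; the values of $n$ are arbitrary):

```python
from math import sqrt, exp

for n in (10**2, 10**4, 10**6):
    base = 2 * sqrt(n) / (sqrt(n + 1) + sqrt(n))
    print(n, base ** n)
print(exp(-0.25))  # 0.7788..., the claimed limit
```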
{ "language": "en", "url": "https://math.stackexchange.com/questions/208715", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 1 }
Can we prove that a statement cannot be proved? Is it possible to prove that you can't prove some theorem? For example: Prove that you can't prove the Riemann hypothesis. I have a feeling it's not possible but I want to know for sure. Thanks in advance
As far as I know, the continuum hypothesis has been proved "independent" of the ZFC axioms, so you can assume it to be either true or false. This is very different from statements that are true but undecidable (such as the one used by Gödel to prove his incompleteness theorem). Gödel's statement is true even though it is not derivable from the axioms described in the theorem (the Peano axioms); rather, it is proved by a different argument (the theorem itself).
{ "language": "en", "url": "https://math.stackexchange.com/questions/208761", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10", "answer_count": 7, "answer_id": 5 }
Asymptotic growth of $\sum_{i=1}^n \frac{1}{i^\alpha}$? Let $0 < \alpha < 1$. Can somebody please explain why $$\sum_{i=1}^n \frac{1}{i^\alpha} \sim n^{1-\alpha}$$ holds?
We have for $x\in [k,k+1)$ that $$(k+1)^{-\alpha}\leq x^{-\alpha}\leq k^{-\alpha},$$ and integrating this over $[k,k+1]$ we get $$(k+1)^{-\alpha}\leq \frac 1{1-\alpha}\left((k+1)^{1-\alpha}-k^{1-\alpha}\right)\leq k^{-\alpha}.$$ Summing over $k$ from $1$ to $n$ and using $(n+1)^{1-\alpha}-1\sim n^{1-\alpha}$, we get that the sum is asymptotically equivalent to $\frac{n^{1-\alpha}}{1-\alpha}$.
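A quick numerical look at the asymptotics (Python; $\alpha$ and the values of $n$ are arbitrary choices):

```python
from math import fsum

a = 0.5
for n in (10**3, 10**4, 10**5, 10**6):
    s = fsum(1 / i ** a for i in range(1, n + 1))
    print(n, s / (n ** (1 - a) / (1 - a)))  # ratio tends to 1
```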
{ "language": "en", "url": "https://math.stackexchange.com/questions/208827", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 0 }
How do I find the variance of a linear combination of terms in a time series? I'm working through an econometrics textbook and came upon this supposedly simple problem early on. Suppose you win $\$1$ if a fair coin shows heads and lose $\$1$ if it shows tails. Denote the outcome on toss $t$ by $ε_t$. Assume you want to calculate your average winnings on the last four tosses. For each coin toss $t$, your average payoff on the last four tosses is $w_t= 0.25ε_t + 0.25ε_{t-1} + 0.25ε_{t-2} + 0.25ε_{t-3}$. Find $\operatorname{Var}(w_t)$, and find $\operatorname{Var}(w_t)$ conditional on $ε_{t-3} = ε_{t-2} = 1$. For the first part of the question (finding the variance without any conditions on $ε$), I know that: $$\operatorname{Var}(w_t) = \operatorname{Var}(0.25ε_t + 0.25ε_{t-1} + 0.25ε_{t-2} + 0.25ε_{t-3}) = \operatorname{Var}(0.25ε_t) + \operatorname{Var}(0.25ε_{t-1}) + \operatorname{Var}(0.25ε_{t-2}) + \operatorname{Var}(0.25ε_{t-3})$$ $$= 0.0625 \operatorname{Var}(ε_t) + 0.0625 \operatorname{Var}(ε_{t-1}) + 0.0625 \operatorname{Var}(ε_{t-2}) + 0.0625 \operatorname{Var}(ε_{t-3})$$ This is the point in the problem that confuses me. I know the variance for the entire time series is 1, because each possible result of the coin flip (-1 or 1) is $\$1$ away from the series' expected value of 0. Does this mean that $\operatorname{Var}(ε_t) = 1$, $∀ t$ as well? Based on that logic, I could expect $\operatorname{Var}(w_t) = 0.0625 (1+1+1+1) = 0.25$, but this textbook does not provide any solutions, so I have no way of checking my logic. I'm confident that once I figure out this part of the question, I can handle the part that assumes $ε_{t-3} = ε_{t-2} = 1$, but I added that part to my initial question in case there are other nuances I'm missing, in which case I can use said nuances to form a new question.
The variance of $w_t$ is $0.25$, for the reasons you explained. But conditioning on $ε_{t−3}=ε_{t−2}=1$ changes the problem, since now $w_t=0.25ε_t+0.25ε_{t−1}+{}$ a constant; hence the conditional variance is only $2\times 0.0625=0.125$.
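A Monte Carlo confirmation of both numbers (a Python sketch; the sample size is an arbitrary choice):

```python
import random
from statistics import variance

toss = lambda: random.choice((-1, 1))
N = 100_000

w_all = [0.25 * (toss() + toss() + toss() + toss()) for _ in range(N)]
w_cond = [0.25 * (toss() + toss() + 1 + 1) for _ in range(N)]  # two tosses fixed at +1
print(variance(w_all), variance(w_cond))  # ~0.25 and ~0.125
```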
{ "language": "en", "url": "https://math.stackexchange.com/questions/208891", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Is $\{a^m!a^n : n > 0, m > 0, n > m\}$ a context free language? I'm trying to construct a context-free grammar for the language $L = \{a^m!a^n : n > 0, m > 0, n > m\}$, but I'm not sure how to ensure that the right trail of $a$s is longer than the left one. Is there a way I could include the left side in the right side using context-free grammar?
We could look at the language as strings of the form $a^n!a^ka^n$ with $n,k\ge 1$. Guided by this insight we could use the CFG with productions $S\rightarrow aSa\ |\ a!Ta,\quad T\rightarrow aT\ |\ a$. We're generating strings of the form $a^n\dots a^n$ and then finishing off by placing $!a^k$ in the middle. This idiom of "constructing the outside part and then finishing with the interior" is fairly common in CFG construction.
{ "language": "en", "url": "https://math.stackexchange.com/questions/208960", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
For every real number $a$ there exists a sequence $r_n$ of rational numbers such that $r_n$ approaches $a$. How to prove that for every real number $a$ there exists a sequence $r_n$ of rational numbers such that $r_n \rightarrow a$.
By the Riemann series theorem, for every $x \in \mathbb{R}$ there is a rearrangement $\sigma:\mathbb{N} \to \mathbb{N}$ of the alternating harmonic series such that $\sum_{n=1}^\infty \frac{(-1)^{\sigma(n)+1}}{\sigma(n)} = x$. Note that the partial sums $b_n := \sum_{k=1}^n \frac{(-1)^{\sigma(k)+1}}{\sigma(k)}$ form a rational sequence converging to $x$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/209001", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12", "answer_count": 5, "answer_id": 4 }
Positive Definite Matrix Determinant Prove that a positive definite matrix has positive determinant and positive trace. For the determinant to be positive, the matrix must be regular and have pivots that are all positive, which is the definition. It's obvious that the determinant must be positive, since that is what positive definite means, so how can I prove that?
We will prove it by considering a determinant function built from the matrix and using continuity, so only consider this solution if you are comfortable with continuity arguments; otherwise skip it. Let $A$ be positive definite and define $$f(x)= \det\bigl(xA + (1-x)I\bigr) \quad\text{for } 0\le x\le 1, \tag{1}$$ where $I$ is the identity matrix. For each $x\in[0,1]$ the matrix $xA+(1-x)I$ is positive definite (it is a convex combination of the positive definite matrices $A$ and $I$), hence invertible, so $f(x)$ is never $0$. Moreover, $f$ is a polynomial in $x$ and therefore continuous on $[0,1]$, with $f(0)=\det I=1>0$. A continuous function that never vanishes on an interval cannot change sign there (by the intermediate value theorem), so $f(x)>0$ on all of $[0,1]$; in particular $f(1)=\det A>0$. Hence we have proved $|A| > 0$.
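A numerical illustration of the argument (a NumPy sketch; the matrix is a randomly generated positive definite example):

```python
import numpy as np

rng = np.random.default_rng(3)
G = rng.standard_normal((5, 5))
A = G @ G.T + 5 * np.eye(5)  # positive definite by construction

# f(t) = det(tA + (1-t)I) stays positive on [0, 1], and f(1) = det(A).
ts = np.linspace(0.0, 1.0, 101)
vals = [np.linalg.det(t * A + (1 - t) * np.eye(5)) for t in ts]
print(min(vals) > 0, np.linalg.det(A) > 0)  # True True
```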
{ "language": "en", "url": "https://math.stackexchange.com/questions/209082", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 4, "answer_id": 2 }
Does a power series vanishing on the circle of convergence imply that the power series equals zero? Let $f(z)=\sum_{n=0}^{\infty} a_n z^n$ be a power series, $a_n, z\in \mathbb{C}$. Suppose the radius of convergence of $f$ is $1$, and $f$ is convergent at every point of the unit circle. Question: If $f(z)=0$ for every $|z|=1$, then can we draw the conclusion that $a_n=0$ for every nonnegative integer $n$? I think the answer is yes, but I failed to prove it. My approach concerns the function $F_\lambda(z):=f(\lambda z)$ for $0\leq\lambda\leq 1$, $|z|=1$. Abel's theorem shows that $F_\lambda$ converges to $F_1$ pointwise as $\lambda\rightarrow 1$ on the unit circle. If I had the property that $f$ is bounded in the unit disk, then I could apply Lebesgue's dominated convergence theorem to prove $a_0=0$, and by induction I could prove $a_n=0$ for all $n$. However, I cannot prove $f(z)$ is bounded in the unit disk. Any answers or comments are welcome. I'll really appreciate your help.
It seems to me that this is a particular case of an old Theorem from Cantor (1870), called Cantor's uniqueness theorem. The theorem says that if, for every real $x$, $$\lim_{N \rightarrow \infty} \sum_{n=-N}^N c_n e^{inx}=0,$$ then all the complex numbers $c_n$'s are zero. You can google "Uniqueness of Representation by Trigonometric Series" for more information. See e.g. this document for a proof and some history of the result.
{ "language": "en", "url": "https://math.stackexchange.com/questions/209171", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "15", "answer_count": 1, "answer_id": 0 }
Modular arithmetic congruence class simple proof I have the following question but I'm unsure of how it can be approached by a method of proof. I'm new to modular arithmetic and any information on how to solve this would be great for me. (b) Let $t,s\in\{0,1,2,3,4,5\}$. In $\mathbb Z_{25}$, prove that $[t]\,[s]\neq[24]$.
First note that $24\equiv -1\pmod{25}$ and hence we are trying to show that $ts\not\equiv-1\pmod{25}$. Suppose for contradiction that $ts\equiv -1\pmod{25}$, then multiplying through by $-1$ we get $-ts\equiv 1\pmod{25}$ so $t$ or $s$ is invertible, say $t$ with inverse $-s$. Therefore $\gcd(t,25) = 1$ (why?), and hence $t = 1,2,3,4$. There is a unique value of $s$ corresponding to each of these (why?), each of which should give you a contradiction.
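A one-line brute-force confirmation (Python):

```python
# No t, s in {0,...,5} satisfies t*s == 24 (mod 25).
print(any(t * s % 25 == 24 for t in range(6) for s in range(6)))  # False
```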
{ "language": "en", "url": "https://math.stackexchange.com/questions/209274", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Lebesgue measurable sets under a differentiable bijection Let $U,V \subseteq \mathbb{R}^{n}$ be open and suppose $A\subseteq U$ is (Lebesgue) measurable. Suppose $\sigma \in C^{1} (U,V)$ is a bijective differentiable function. Then does it follow that $\sigma(A)$ is (Lebesgue) measurable? I've tried to work on it, but I am still stuck and cannot progress at all. I've tried to use the continuity of $\sigma$, but as $A$ is not an open set, I couldn't really use it. Should I use the definition of Lebesgue measurable set? But I think it will be more complex.
The answer is yes. Let me call your differentiable bijection $f$... Hint: Every Lebesgue measurable set is the union of an $F_{\sigma}$ and a set of measure zero. Now, use the fact that the image by $f$ of any $F_{\sigma}$ is Lebesgue measurable (why?) and that $f$ maps sets of measure zero to sets of measure zero... EDIT To show that $f$ maps sets of measure zero to sets of measure zero, note that $f$ is locally Lipschitz, and you can proceed as in this question
{ "language": "en", "url": "https://math.stackexchange.com/questions/209345", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Identity proof $(x^{n}-y^{n})/(x-y) = \sum_{k=1}^{n} x^{n-k}y^{k-1}$ In a proof from a textbook they use the following identity (without proof): $(x^{n}-y^{n})/(x-y) = \sum_{k=1}^{n} x^{n-k}y^{k-1}$ Is there an easy way to prove the above? I suppose maybe an induction proof will be appropriate, but I would really like to find a more intuitive proof.
I see no one likes induction. For $n=0$, $$ \frac{x^0-y^0}{x-y}=\sum_{1 \le i \le 0}x^{0-i}y^{i-1}=0. $$ Assume for $n=j$ that the identity is true. Then, for $n=j+1$, $$ \begin{align} \sum_{1 \le i \le j+1}x^{j+1-i}y^{i-1}&=\left(\sum_{1 \le i \le j}x^{j+1-i}y^{i-1}\right)+y^j\\ &=\left(x\sum_{1 \le i \le j}x^{j-i}y^{i-1} \right)+y^{j}\\ &=x\frac{x^{j}-y^{j}}{x-y}+y^{j}\\ &=\frac{x(x^j-y^j)+y^j(x-y)}{x-y}\\ &=\frac{x^{j+1}-xy^j+y^jx-y^{j+1}}{x-y}\\ &=\frac{x^{j+1}-y^{j+1}}{x-y}. \end{align} $$ Hence, it is as we sought.
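The identity can also be spot-checked exactly on integers (a Python sketch; the ranges are arbitrary):

```python
from random import randint

# (x - y) * sum_{k=1}^{n} x^{n-k} y^{k-1} == x^n - y^n, checked exactly.
for _ in range(1000):
    x, y, n = randint(-9, 9), randint(-9, 9), randint(0, 8)
    s = sum(x ** (n - k) * y ** (k - 1) for k in range(1, n + 1))
    assert (x - y) * s == x ** n - y ** n
print("identity verified on random integers")
```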
{ "language": "en", "url": "https://math.stackexchange.com/questions/209467", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 6, "answer_id": 4 }
Prove that $n!e-2< \sum_{k=1}^{n}(^{n}\textrm{P}_{k}) \leq n!e-1$ Prove that $$n!e-2 < \sum_{k=1}^{n}(^{n}\textrm{P}_{k}) \leq n!e-1$$ where $^{n}\textrm{P}_k = n(n-1)\cdots(n-k+1)$ is the number of permutations of $k$ distinct objects from $n$ distinct objects and $e$ is the exponential constant (Euler's number).
Denote the expression in question by $S$ and note that $$S=\sum_{k=1}^{n}\frac{n!}{(n-k)!}=n!\sum_{k=0}^{n-1}\frac{1}{k!}.$$ Now $$ \sum_{k=0}^{n-1}\frac{1}{k!}=e-\sum_{k=n}^{\infty}\frac{1}{k!}=e-\frac{1}{n!}\bigg(1+\frac{1}{n+1}+\frac{1}{(n+1)(n+2)} + \cdots \bigg)\\ \geq e-\frac{1}{n!} \bigg(1+\frac{1}{n+1} + \frac{1}{(n+1)^2} +\cdots \bigg)\\ =e-\frac{1}{n!} \sum_{k=0}^{\infty} \bigg(\frac{1}{n+1} \bigg)^k=e-\frac{1}{n!}\bigg(1+\frac{1}{n}\bigg) $$ Multiplying this by $n!$, the lower bound comes out easily: $$ S\geq n!e-1-\frac{1}{n} > n!e-2. $$ For the upper bound, keep only the first term of the tail: $\sum_{k=n}^{\infty}\frac{1}{k!}\geq\frac{1}{n!}$, hence $S\leq n!e-1$.
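A numerical check of both bounds (Python; the range of $n$ is limited only to keep floating point harmless):

```python
from math import e, factorial, perm

for n in range(1, 13):
    S = sum(perm(n, k) for k in range(1, n + 1))
    lo, hi = factorial(n) * e - 2, factorial(n) * e - 1
    assert lo < S <= hi, (n, lo, S, hi)
print("n!e - 2 < S <= n!e - 1 holds for n = 1..12")
```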
{ "language": "en", "url": "https://math.stackexchange.com/questions/209619", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
Almost Sure Convergence Using Borel-Cantelli I am working on the following problem: Let $(f_n)$ be a sequence of measurable real-valued functions on $\mathbb{R}$. Prove that there exist constants $c_n > 0$ such that the series $\sum c_n f_n$ converges for almost every $x$ in $\mathbb{R}$. (Hint: Use the Borel-Cantelli Lemma.) I am thinking of using an approach similar to the one in Proposition 2 here http://www.austinmohr.com/Work_files/prob2/hw1.pdf , where we pick $c_n$ such that $\mu(\{x: c_n f_n(x) > 1\}) < 1/2^n$. But I don't understand if and why there exists such a $c_n$; e.g., consider an unbounded function. Hence, I think a different approach is needed. Thank you.
Assume that $f_n$ is Lebesgue almost everywhere finite, for every $n$ (otherwise the result fails). For every $n$, the interval $[-n,n]$ has finite Lebesgue measure hence there exists $c_n\gt0$ such that the Lebesgue measure of the Borel set $$ A_n=\{x\in[-n,n]\,\mid\,c_n\cdot|f_n(x)|\gt1/n^2\} $$ is at most $1/n^2$. Then, Borel-Cantelli lemma shows that $\limsup A_n$ has Lebesgue measure zero, hence, for Lebesgue almost every $x$ in $\mathbb R$, $x$ is not in $A_n$ for every $n$ large enough. Since $|x|\leqslant n$ for every $n$ large enough, this means that $c_n\cdot|f_n(x)|\leqslant1/n^2$ for every $n$ large enough. In particular, the series $\sum\limits_nc_nf_n(x)$ converges (absolutely). QED. Edit: Assume that there exists $k$ such that $f_k$ is not Lebesgue almost everywhere finite. This means that $A=\{x\in\mathbb R\,\mid\,|f_k(x)|=+\infty\}$ has positive Lebesgue measure. Now, for every positive valued sequence $(c_n)$, the series $\sum\limits_nc_nf_n$ diverges on $A$. Hence there exists no positive valued sequence $(c_n)$ such that the series $\sum\limits_nc_nf_n$ converges Lebesgue almost everywhere.
{ "language": "en", "url": "https://math.stackexchange.com/questions/209668", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Which function grows faster: $(n!)!$ or $((n-1)!)!(n-1)!^{n!}$? Of course, I can use Stirling's approximation, but to me it is quite interesting that, if we define $k = (n-1)!$, then the left function will be $(nk)!$, and the right one will be $k! k^{n!}$. I don't think that this is a coincidence. It seems that there should be a smarter solution for this, other than Stirling's approximation.
For $(nk)!$ the factors are $1,2,3,\dots, k$, then $k+1, \dots, 2k$, $2k+1, \dots$, all the way up to $nk$. For $k!\,k^{n!}$ the factors are $1,2,3,\dots, k$, followed by $n!=nk$ copies of the constant $k$. Cancelling the common $k!$, the comparison is between $(k+1)(k+2)\cdots(nk)$, whose factors all exceed $k$ and keep growing, and the constant product $k^{nk}$; taking logarithms (or applying Stirling to both sides) shows the growing product eventually dominates, so $(n!)!$ grows faster.
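Comparing logarithms numerically supports this (a Python sketch using the log-gamma function, since the raw numbers overflow ordinary floats):

```python
from math import lgamma, log, factorial

def logfact(m):
    return lgamma(m + 1)  # log(m!)

for n in range(3, 9):
    k = factorial(n - 1)
    left = logfact(factorial(n))                 # log((n!)!)
    right = logfact(k) + factorial(n) * log(k)   # log(k! * k^{n!})
    print(n, left - right)  # positive and growing: (n!)! wins
```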
{ "language": "en", "url": "https://math.stackexchange.com/questions/209856", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 2, "answer_id": 0 }
Homomorphism between $A_5$ and $A_6$ The problem is to find an injective homomorphism between the alternating groups $A_5$ and $A_6$ such that the image of the homomorphism contains only elements that leave no element of $\{1,2,3,4,5,6\}$ fixed. I.e., the image must be a subset of $A_6$ that consists of permutations with no fixed points. The hint given is that $A_5$ is isomorphic to the rotational symmetry group of the dodecahedron. I figured out that the permutations in $A_6$ that leave no point fixed are of the following forms:
* Double 3-cycle, e.g. (123)(456), in total 40 of them
* Transposition + 4-cycle, e.g. (12)(3456), in total 90 of them
Considering that $A_5$ has 60 elements and $A_6$ has 360 elements, how should I proceed in finding a homomorphism whose image is a subset of the 130 elements described above?
What do you think of this homomorphism: $f:A_5\to A_6$, $f(x)=(123)(456)\,x\,(654)(321)$. This is a homomorphism because $$f(xy)=(123)(456)xy(654)(321)=(123)(456)x(654)(321)(123)(456)y(654)(321)=f(x)f(y),$$ because $(321)(123)=e=(654)(456)$. And it is injective, because if $f(x)=f(y)$ then $(123)(456)x(654)(321)=(123)(456)y(654)(321)$, and multiplying both sides with $(654)(321)$ from the left and $(123)(456)$ from the right gives us $x=y$. However, this conjugation does not meet the fixed-point-free requirement: conjugation preserves cycle type, and every $x\in A_5$ fixes the point $6$, so $f(x)$ fixes the point $(123)(456)(6)=4$. An embedding whose nontrivial elements fix no point must use a copy of $A_5$ in $A_6$ that is not conjugate to the standard one, which is exactly what the dodecahedron hint provides.
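The fixed point is easy to exhibit concretely (a Python sketch with permutations as dicts; the sample element $(12345)$ is an arbitrary choice):

```python
def compose(p, q):  # (p o q)(i) = p(q(i))
    return {i: p[q[i]] for i in q}

sigma = {1: 2, 2: 3, 3: 1, 4: 5, 5: 6, 6: 4}   # (123)(456)
sigma_inv = {v: k for k, v in sigma.items()}
x = {1: 2, 2: 3, 3: 4, 4: 5, 5: 1, 6: 6}       # (12345) in A_5, fixes 6

fx = compose(sigma, compose(x, sigma_inv))      # f(x) = sigma x sigma^{-1}
print([i for i in fx if fx[i] == i])            # [4] -- f(x) fixes a point
```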
{ "language": "en", "url": "https://math.stackexchange.com/questions/209897", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 1 }
Question about the global dimension of End$_A(M)$, whereupon $M$ is a generator-cogenerator for $A$ Let $A$ be a finite-dimensional Algebra over a fixed field $k$. Let $M$ be a generator-cogenerator for $A$, that means that all proj. indecomposable $A$-modules and all inj. indecomposable $A$-modules occur as direct summands of $M$. For any indecomposable direct summand $N$ of $M$, denote the corresponding simple End$_A(M)$-module by $E_N$. My question is: Why is it enough to construct a proj. resolution with length $\leq 3$ for every simple module $E_N$ in order to prove that the global dimension of End$_A(M)$ is $\leq 3$? Is there a general theorem which states that fact? I would be very grateful for any hints and references concerning literature, respectively. Thank you very much.
The global dimension of a noetherian ring with finite global dimension is equal to the supremum of the projective dimensions of its simple modules. This is proved in most textbooks dealing with the subject. For example, this is proved in McConnell and Robson's Noncommutative Noetherian Rings (Corollary 7.1.14). If the ring is semiprimary (that is, if its Jacobson radical is nilpotent and the corresponding quotient semisimple) then you can drop the hypothesis that the global dimension be finite. This covers your case. You can find this theorem of Auslander in Lam's Lectures on Modules and Rings.
{ "language": "en", "url": "https://math.stackexchange.com/questions/210045", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Example of a *-homomorphism that is faithful on a dense *-subalgebra, but not everywhere Let $A,B$ be C*-algebras and let $\varphi: A \to B$ be a $*$-homomorphism. Suppose that $\ker( \varphi) \cap D = \{0\}$ where $D$ is a dense $*$-subalgebra of $A$. Does it follow that $\varphi$ is injective? I'm pretty sure the answer is "no". At various times I've had to upgrade injectivity on a dense subalgebra to injectivity on the whole algebra, but this has always involved very specific situations or extra hypotheses. I don't think I've ever seen an example which shows this is generally false though. Added: In hindsight, the following equivalent formulation of this question would have been slightly cleaner. Find a nonzero, closed ideal $I$ in a C*-algebra $A$ such that $D \cap I = \{0\}$ for some dense $*$-subalgebra $D \subset A$? Jonas Meyer gives a very simple example. Take $A=C[0,2]$, $I$ to be the ideal of functions which vanish on $[0,1]$, and $D$ to be the polynomial functions in $A$. A more elaborate example is obtained by taking $A=C^*(G)$, the full group C*-algebra of a discrete, nonamenable group $G$; $D = \mathbb{C} G$, the copy of the group algebra in $A$; and $I$ to be the kernel of projection down to the reduced group C*-algebra $C^*_r(G)$ None's answer appears to be erroneous.
If $A=C[0,2]$ and $B=C[0,1]$, define $\phi:A\to B$ to be the restriction map $\phi(f)=f|_{[0,1]}$. Let $D\subset A$ be the algebra of polynomial functions on $[0,2]$. Then $\ker(\phi)\cap D=\{0\}$ because no nonzero polynomial function vanishes on $[0,1]$. However, $\phi$ is not injective because for example it sends the nonzero continuous function $f(t)=\max\{0,t-1\}$ on $[0,2]$ to the zero function on $[0,1]$. Note that $D$ is dense by the Weierstrass approximation theorem.
{ "language": "en", "url": "https://math.stackexchange.com/questions/210100", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 2, "answer_id": 0 }
Second derivative "formula derivation" I've been trying to understand how the second order derivative "formula" works: $$\lim_{h\to0} \frac{f(x+h) - 2f(x) + f(x-h)}{h^2}$$ So, the rate of change of the rate of change for an arbitrary continuous function. It basically feels right, since it samples "the after $x+h$ and the before $x-h$" and the $h^2$ is there (due to the expected /h/h -> /h*h), but I'm having trouble finding the equation on my own. It is basically a derivative of a derivative, right? Newton's notation writes it as $f''$ and Leibniz's as $\frac{\partial^2{y}}{\partial{x}^2}$, which dissolve into: $$(f')'$$ and $$\frac{\partial{}}{\partial{x}}\frac{\partial{f}}{\partial{x}}$$ So, the first derivative shows the rate of change of a function's value relative to its input. The second derivative shows the rate of change of the actual rate of change, suggesting information relating to how frequently it changes. The original one is rather straightforward: $$\frac{\Delta y}{\Delta x} = \lim_{h\to0} \frac{f(x+h) - f(x)}{x + h - x} = \lim_{h\to0} \frac{f(x+h) - f(x)}{h}$$ And it can easily be shown that $f'(x) = nx^{n-1} + \dots$ is correct for the more forthcoming of polynomial functions. So, my logic suggests that to get the derivative of a derivative, one only needs to send the derivative function as input to finding the new derivative. I'll drop the $\lim_{h\to0}$ for simplicity: $$f'(x) = \frac{f(x+h) - f(x)}{h}$$ So, the derivative of the derivative should be: $$f''(x) = \lim_{h\to0} \frac{f'(x+h) - f'(x)}{h}$$ $$f''(x) = \lim_{h\to0} \frac{ \frac{ f(x+2h) - f(x+h)}{h} - \frac{ f(x+h) - f(x)}{h} }{h}$$ $$f''(x) = \lim_{h\to0} \frac{ \frac{ f(x+2h) - f(x+h) - f(x+h) + f(x)}{h} }{h}$$ $$f''(x) = \lim_{h\to0} \frac{ f(x+2h) - f(x+h) - f(x+h) + f(x) }{h^2}$$ $$f''(x) = \lim_{h\to0} \frac{ f(x+2h) - 2f(x+h) + f(x) }{h^2}$$ What am I doing wrong? Perhaps it is the mess of it all, but I just can't see it. Please help.
The only problem is that you’re looking at the wrong three points: you’re looking at $x+2h,x+h$, and $x$, and the version that you want to prove is using $x+h,x$, and $x-h$. Start with $$f\,''(x)=\lim_{h\to 0}\frac{f\,'(x)-f\,'(x-h)}h\;,$$ and you’ll be fine. To see that this really is equivalent to looking at $$f\,''(x)=\lim_{h\to 0}\frac{f\,'(x+h)-f\,'(x)}h\;,$$ let $k=-h$; then $$\begin{align*} f\,''(x)&=\lim_{h\to 0}\frac{f\,'(x)-f\,'(x-h)}h\\ &=\lim_{-k\to0}\frac{f\,'(x)-f\,'(x-(-k))}{-k}\\ &=\lim_{k\to 0}\frac{f\,'(x-(-k))-f\,'(x)}k\\ &=\lim_{k\to 0}\frac{f\,'(x+k)-f\,'(x)}k\;, \end{align*}$$ and renaming the dummy variable back to $h$ completes the demonstration.
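A numerical sanity check of the symmetric quotient (Python; the function and point are arbitrary):

```python
from math import sin

f, x = sin, 1.0
exact = -sin(x)  # f''(x) for f = sin
for h in (0.1, 0.01, 0.001):
    approx = (f(x + h) - 2 * f(x) + f(x - h)) / h ** 2
    print(h, approx, abs(approx - exact))  # error shrinks like h^2
```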
{ "language": "en", "url": "https://math.stackexchange.com/questions/210264", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "36", "answer_count": 3, "answer_id": 0 }
Modules with projective dimension $n$ have non-vanishing $\mathrm{Ext}^n$ Let $R$ be a noetherian ring and $M$ a finitely generated $R$-module with projective dimension $n$. Then for every finitely generated $R$-module $N$ we have $\mathrm{Ext}^n(M,N)\neq 0$. Why? By definition, if the projective dimension is $n$, this means that $\mathrm{Ext}^{n+1}(M,-)=0$ and $\mathrm{Ext}^n(M,-)\neq 0$, so there exists an $N$ such that $\mathrm{Ext}^n(M,N)\neq 0$. Why is this true for every $N$? I found this theorem in these notes, page 6, Proposition 9. Did I misunderstand it? Is there a similar statement? This result is used on page 41, implication 3 implies 4, and is used in the case $N=R$. Is it true in this case?
Take $R=\mathbb Z\times\mathbb Z$, consider the elements $e_1=(1,0)$, $e_2=(0,1)\in R$, and the modules $M=R/(2e_1)$ and $N=Re_2$. Show that the projective dimension of $M$ is $1$ and compute $\operatorname{Ext}_R^1(M,N)$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/210304", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 0 }
A holomorphic function $f$, injective on $\partial D$, must be injective in $\bar{D}$? Prove: If $f$ is holomorphic on a neighborhood of the closed unit disc $\bar{D}$, and if $f$ is one-to-one on $\partial D$, then $f$ is one-to-one on $\bar{D}$. (Greene and Krantz's Function Theory of One Complex Variable (3rd), Ch. 5, Problem 17.) Can anyone provide a clue as to how to attack this problem ?
Some hints: The function $f$ restricted to $\partial D$ is an injective continuous map of $\partial D\sim S^1$ into ${\mathbb C}$. By the Jordan curve theorem the curve $\gamma:=f(\partial D)$ separates ${\mathbb C}\setminus\gamma$ into two connected domains $\Omega_{\rm int}$ and $\Omega_{\rm ext}$, called the interior and the exterior of $\gamma$. The points $a\in \Omega_{\rm int}$ are characterized by the fact that $\gamma$ has winding number $\pm1$ around them. Create a connection with the argument principle.
{ "language": "en", "url": "https://math.stackexchange.com/questions/210396", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 0 }
How to show that a linear span in $C[0,1]$ need not be closed Possible Duplicate: Non-closed subspace of a Banach space Let $X$ be an infinite dimensional normed space over $\mathbb{R}$. I want to find a set of vectors $(x_k)$ such that the linear span of $(x_k)$ is not closed. I feel like the set $P$, which consists of the polynomials in $X=C[0,1]$ (with the sup-norm), would be a good candidate, since the Weierstrass approximation theorem yields that the span of $P$ is dense in $X$. How can I show that this span is not closed?
You can take your favourite convergent sequence of polynomials (e.g. partial sums of $\exp x = \displaystyle \sum_{n = 0}^\infty \frac{x^n}{n!}$) and then prove that the limit is not in the span. This proves the span is not sequentially closed. Since $X$ is normed, it follows that the span isn't closed.
{ "language": "en", "url": "https://math.stackexchange.com/questions/210449", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Prove by induction $\sum_{i=1}^ni^3=\frac{n^2(n+1)^2}{4}$ for $n\ge1$ Prove the following statement $S(n)$ for $n\ge1$: $$\sum_{i=1}^ni^3=\frac{n^2(n+1)^2}{4}$$ To prove the basis, I substitute $1$ for $n$ in $S(n)$: $$\sum_{i=1}^11^3=1=\frac{1^2(2)^2}{4}$$ Great. For the inductive step, I assume $S(n)$ to be true and prove $S(n+1)$: $$\sum_{i=1}^{n+1}i^3=\frac{(n+1)^2(n+2)^2}{4}$$ Considering the sum on the left side: $$\sum_{i=1}^{n+1}i^3=\sum_{i=1}^ni^3+(n+1)^3$$ I make use of $S(n)$ by substituting its right side for $\sum_{i=1}^ni^3$: $$\sum_{i=1}^{n+1}i^3=\frac{n^2(n+1)^2}{4}+(n+1)^3$$ This is where I get a little lost. I think I expand the equation to be $$=\frac{(n^4+2n^3+n^2)}{4}+(n+1)^3$$ but I'm not totally confident about that. Can anyone provide some guidance?
Here's yet another way: You have $\frac{n^2(n+1)^2}{4}+(n+1)^3$ and you want $\frac{(n+1)^2(n+2)^2}{4}$ So manipulate it to get there; $\frac{n^2(n+1)^2}{4}+(n+1)^3 =$ $\frac{(n^2 + 4n + 4)(n+1)^2 - (4n + 4)(n+1)^2}{4}+(n+1)^3 =$ $\frac{(n+2)^2(n+1)^2}{4}- \frac{ (4n + 4)(n+1)^2}{4}+(n+1)^3 =$ $\frac{(n+2)^2(n+1)^2}{4}-(n + 1)(n+1)^2+(n+1)^3 =$ $\frac{(n+2)^2(n+1)^2}{4}-(n+1)^3+(n+1)^3 =$ $\frac{(n+2)^2(n+1)^2}{4}$
{ "language": "en", "url": "https://math.stackexchange.com/questions/210504", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 2 }
upper bound of exponential function I am looking for a tight upper bound of exponential function (or sum of exponential functions): $e^x<f(x)\;$ when $ \;x<0$ or $\displaystyle\sum_{i=1}^n e^{x_i} < g(x_1,...,x_n)\;$ when $\;x_i<0$ Thanks a lot!
Since you suggest in the comments you would like a polynomial bound, you can use any even Taylor polynomial for $e^x$. Proposition. $\boldsymbol{1 + x + \frac{x^2}{2!} + \cdots + \frac{x^n}{n!}}$ is an upper bound for $\boldsymbol{e^x}$ when $\boldsymbol{n}$ is even and $\boldsymbol{x \le 0}$. Proof. We wish to show $f(x) \ge 0$ for all $x$, where $f: (-\infty, 0] \to \mathbb{R}$ is the function defined by $f(x) = 1 + x + \frac{x^2}{2!} + \cdots + \frac{x^n}{n!} - e^x.$ Since $f(x) \to \infty$ as $x \to - \infty$, $f$ must attain an absolute minimum somewhere on the interval $(-\infty, 0]$.
* If $f$ has an absolute minimum at $0$, then for all $x$, $f(x) \ge f(0) = 1 - e^0 = 0$, so we are done.
* If $f$ has an absolute minimum at $y$ for some $y < 0$, then $f'(y) = 0$. But differentiating, $$ f'(y) = 1 + y + \frac{y^2}{2!} + \cdots + \frac{y^{n-1}}{(n-1)!} - e^y = f(y) - \frac{y^n}{n!}. $$ Therefore, for any $x$, $$ f(x) \ge f(y) = \frac{y^n}{n!} + f'(y) = \frac{y^n}{n!} > 0, $$ since $n$ is even. $\square$
Keep in mind that any polynomial upper bound will only be tight up to a certain point, because the polynomial will blow up to infinity as $x \to -\infty$. Also note, the same proof shows that the Taylor polynomial is a lower bound for $e^x$ when $n$ is odd and $x \le 0$.
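A grid check of the proposition (Python; the grid and degrees are arbitrary choices):

```python
import math

def taylor(x, n):  # Taylor polynomial of e^x of degree n at 0
    return sum(x ** k / math.factorial(k) for k in range(n + 1))

for n in (2, 4, 6):
    assert all(taylor(x, n) >= math.exp(x)
               for x in [-i / 10 for i in range(301)])
print("T_n(x) >= e^x on [-30, 0] for even n = 2, 4, 6")
```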
{ "language": "en", "url": "https://math.stackexchange.com/questions/210591", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 0 }
What does the value of a probability density function (PDF) at some x indicate? I understand that the probability mass function of a discrete random-variable X is $y=g(x)$. This means $P(X=x_0) = g(x_0)$. Now, a probability density function of of a continuous random variable X is $y=f(x)$. Wikipedia defines this function $y$ to mean In probability theory, a probability density function (pdf), or density of a continuous random variable, is a function that describes the relative likelihood for this random variable to take on a given value. I am confused about the meaning of 'relative likelihood' because it certainly does not mean probability! The probability $P(X<x_0)$ is given by some integral of the pdf. So what does $f(x_0)$ indicate? It gives a real number, but isn't the relative likelihood of a specific value for a CRV always zero?
'Relative likelihood' is indeed misleading. Look at it as a limit instead: $$ f(x)=\lim_{h \to 0}\frac{F(x+h)-F(x)}{h} $$ where $F(x) = P(X \leq x)$
{ "language": "en", "url": "https://math.stackexchange.com/questions/210630", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "20", "answer_count": 6, "answer_id": 4 }
Proof that Quantile Function characterizes Probability Distribution The quantile function is defined as $Q(u)= \inf \{x: F(x) \geq u\}$. It is well known that the distribution function characterizes the probability distribution in the following sense Theorem Let $X_{1}$ and $X_{2}$ be two real valued random variables with distribution functions $F_{1}$ and $F_{2}$ respectively. If $F_{1}(x)=F_{2}(x)$, $\forall x\in \mathbb{R}$ then $X_{1}$ and $X_{2}$ have the same probability distribution. I want to prove that the quantile function also characterizes the probability distribution; this fact is stated as a corollary of the above theorem in this book (see Corollary 1.2 on p. 19): Corollary: Let $X_{1}$ and $X_{2}$ be two real valued random variables with quantile functions $Q_{1}$ and $Q_{2}$ respectively. If $Q_{1}(u)=Q_{2}(u)$, $\forall u\in\left(0,1\right)$ then $X_{1}$ and $X_{2}$ have the same probability distribution. The proof in the book is based on the following facts Fact i: $Q(F(x)) \leq x$. Fact ii: $F( Q(u) ) \geq u$. Fact iii: $Q(u) \leq x$ iff $u \leq F(x)$. Fact iv: $Q(u)$ is nondecreasing. But I think it is wrong. Assuming $F_{1}(x_0) < F_{2}(x_0)$ for some fixed $x_0$, the author sets out to prove that this leads to a contradiction. Using facts (i) and (iv) he shows $Q_{2}(F_1(x_0)) < Q_{2}(F_2(x_0)) \leq x_0$. Then he applies fact (iii) to obtain $F_{1}(x_0) \leq F_{2}(x_0)$. The author claims that this is a contradiction. But clearly it's not, and the argument proves nothing. Am I missing something here? Does anyone know the correct proof of the corollary?
Changed since version discussed in first five comments: The key line in the proof of Corollary 1.2 in Severini's Elements of Distribution Theory book is Hence by part (iii) of Theorem 1.8, $F_2(x_0) \ge F_1(x_0)$ so that $F_1(x_0) \lt F_2(x_0)$ is impossible. As you say, $F_1(x_0) \lt F_2(x_0)$ in fact implies $F_1(x_0) \le F_2(x_0)$, i.e. $F_2(x_0) \ge F_1(x_0)$, so this is not a contradiction.
{ "language": "en", "url": "https://math.stackexchange.com/questions/210683", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
If Same Rank, Same Null Spaces? "If matrices B and AB have the same rank, prove that they must have the same null spaces." I have absolutely NO idea how to prove this one, been stuck for hours now. Even if you don't know the answer, any help is greatly appreciated.
I would begin by showing that the null space of $B$ is a subspace of the null space of $AB$. Next show that having the same rank implies they have the same nullity. Finally, what can you conclude when a subspace is the same dimension as its containing vector space?
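As a hedged numerical illustration of the hint (my addition, not a proof), one can take an invertible $A$, for which $\operatorname{rank}(AB)=\operatorname{rank}(B)$ holds automatically, and observe that the null spaces coincide; the matrix sizes and seed below are my own arbitrary choices.

```python
# Sketch with NumPy/SciPy: an invertible A preserves rank, and then the
# null spaces of B and AB agree (same dimension, and AB kills null(B)).
import numpy as np
from scipy.linalg import null_space

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))                                 # generically invertible
B = rng.standard_normal((4, 3)) @ rng.standard_normal((3, 5))   # rank 3 generically

print(np.linalg.matrix_rank(B), np.linalg.matrix_rank(A @ B))   # equal ranks
NB, NAB = null_space(B), null_space(A @ B)
print(NB.shape[1] == NAB.shape[1], np.allclose(A @ B @ NB, 0))  # True True
```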
{ "language": "en", "url": "https://math.stackexchange.com/questions/210731", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 0 }
Solve $3\log_{10}(x-15) = \left(\frac{1}{4}\right)^x$ $$3\log_{10}(x-15) = \left(\frac{1}{4}\right)^x$$ I am completely lost on how to proceed. Could someone explain how to find any real solution to the above equation?
Put \begin{equation*} f(x) = 3\log_{10}(x - 15) - \left(\dfrac{1}{4}\right)^x. \end{equation*} Then $f$ is an increasing function on $(15, +\infty)$. Moreover, $f(16)<0$ and $f(17)>0$. Therefore the given equation has exactly one solution, and it belongs to $(16,17)$.
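A small numerical check of the sign change, plus a bisection sketch to locate the root (my addition; the argument itself only needs $f(16)<0<f(17)$ and monotonicity):

```python
import math

def f(x):
    return 3 * math.log10(x - 15) - 0.25 ** x

print(f(16), f(17))  # negative (barely), then positive

lo, hi = 16.0, 17.0
for _ in range(60):          # bisection; f is increasing on (15, oo)
    mid = (lo + hi) / 2
    lo, hi = (mid, hi) if f(mid) < 0 else (lo, mid)
print(lo)                    # the unique root, just barely above 16
```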
{ "language": "en", "url": "https://math.stackexchange.com/questions/210810", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 1 }
Two convergent sequences in a metric space. Question: Let {$x_n$} and {$y_n$} be two convergent sequences in a metric space (E,d). For all $n \in \mathbb{N}$, we define $z_{2n}=x_n$ and $z_{2n+1}=y_n$. Show that {$z_n$} converges to some $l \in E$ $\longleftrightarrow$ $ \lim_{n \to \infty}x_n$= $\lim_{n \to \infty}y_n$=$l$. My Work: Since $x_n$ converges, $\exists N_1 \in \mathbb{N}$ s.t. $\forall n \geq N_1$, $d(x_n,l_1)<\epsilon$. Likewise, since $y_n$ converges, $\exists N_2 \in \mathbb{N}$ s.t. $\forall n\geq N_2$, $d(y_n,l_2)<\epsilon$. Because $z_{2n}=x_n$ and $z_{2n+1}=y_n$, pick $N=\max\{N_1,N_2\}$. Since eventually $2n+1>2n>N$, if $z_n$ converges to $l$, then $d(z_n,l)=d(x_{n/2},l)<\epsilon$ because of how we picked our $N$. Am I correct in this approach and should I continue this way, or am I wrong? This is a homework problem so please no solutions!!! Any help is appreciated. My work for the other way: If $\lim_{n \to \infty}x_n=\lim_{n \to \infty}y_n=l$, then we show $\lim_{n \to \infty}z_n=l$. Take $N \in \mathbb{N}$ with $N=\max(\frac{N_1}{2},\frac{N_2-1}{2})$. Since $d(x_n,l)=d(y_n,l)=d(z_{2n},l)=d(z_{2n+1},l)<\epsilon$, then for $n\geq N$, $d(z_n,l)<\epsilon$.
Hint: Remember the fact that "every convergent sequence is a Cauchy sequence". $(\Rightarrow)$ Assume $\lim_{n \to \infty} z_n=l$; then notice that $$ d(x_n,l)=d(z_{2n},l)\leq d(z_{2n},z_n)+d(z_n,l)<\dots $$
{ "language": "en", "url": "https://math.stackexchange.com/questions/210869", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 4, "answer_id": 3 }
$E$ is measurable, $m(E)< \infty$, and $f(x)=m[(E+x)\bigcap E]$ for all $x \in \mathbb{R}$ Question: $E$ is measurable, $m(E)< \infty$, and $f(x)=m[(E+x)\bigcap E]$ for all > $x \in \mathbb{R}$. Prove $\lim_{x \rightarrow \infty} f(x)=0$. First, since measure is translation invariant, I'm assuming that $(E+x)\bigcap E=E$. But then I had this thought: if $E=\{1,2,3\}$ and $x=1$, then $E+x = \{2,3,4\}$. So the intersection is just a single point. This will have measure zero. My question is, I'm not sure if this is the right train of thinking. And, if it is, I'm not sure how to make this rigorous.
Well, it seems you are a bit confused about which object lives where. By translation invariance, we indeed have $m(E)=m(E+x)$, but not $E=E+x$ as you wrote. Also, $\{1,2,3\}\cap\{2,3,4\}$ has two common elements, not just one. :) The hint in one of the comments to consider $E_n:=E\cap[-n,n]$ is a great idea, because in case $x>2n$, $E_n$ and $E_n+x$ are disjoint, and $$(E+x)\cap E = \bigcup_n \left( (E_n+x)\cap E_n \right) $$ so you can use continuity from below of $m$. Denote $f_n(x):=m((E_n+x)\cap E_n)$. By the above argument, if $x>2n$, then $f_n(x)=0$. We are looking for $$\lim_{x\to\infty} m((E+x)\cap E) = \lim_{x\to\infty}\lim_{n\to\infty} f_n(x)$$ and we want to exchange the limits ($f_n$ is nonnegative and bounded: $f_n(x)\le m(E)<\infty$).
{ "language": "en", "url": "https://math.stackexchange.com/questions/210946", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
seminorm & Minkowski Functional It is known that if $p$ is a seminorm on a real vector space $X$, then the set $A= \{x\in X: p(x)<1\}$ is convex, balanced, and absorbing. I tried to prove that the Minkowski functional $u_A$ of $A$ coincides with the seminorm $p$. I'm interested in proving that $u_A$ is less than or equal to $p$ on $X$. My idea is as follows. We let $x \in X$. Then we can choose $s>p(x)$. Then $p(s^{-1}x) = s^{-1}p(x)<1$. This means that $s^{-1}x$ belongs to $A$. Hence $u_A(x)$ is less than or equal to $s$. From here, how can we conclude that $u_A(x)$ is less than or equal to $p(x)$? Thanks in advance, juniven
What you proved is that $u_A(x)\leq s$ for every $s\in(p(x),\infty)$. In other words, $$ u_A(x)\leq p(x)+\varepsilon $$ for every $\varepsilon>0$. This implies that $u_A(x)\leq p(x)$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/211081", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
question regarding metric spaces Let $X$ be the surface of the earth. For any two points $a,b$ on the earth's surface, let $d(a,b)$ be the least time needed to travel from $a$ to $b$. Is this a metric on $X$? Kindly explain each step and the logic, especially for these two axioms: $d(a,b)=0$ iff $a=b$, and the triangle inequality.
This will generally not be a metric since the condition of symmetry is not fulfilled: It usually takes a different time to travel from $a$ to $b$ than to travel from $b$ to $a$. (I know that because I live on a hill. :-) The remaining conditions are fulfilled: * *The time required to travel from $a$ to $b$ is non-negative. *The time required to travel from $a$ to $b$ is zero if and only if $a=b$. *$d(a,c)\le d(a,b)+d(b,c)$ since you can always travel first from $a$ to $b$ and then from $b$ to $c$ in order to travel from $a$ to $c$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/211139", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Finding Tangent line from Parametric I need to find an equation of the tangent line to the curve $x=5+t^2-t$, $y=t^2+5$ at the point $(5,6)$. Setting $x=5$ and $y = 6$ and solving for $t$ gives me $t=0,1,-1$. I know I have to divide $dy/dt$ by $dx/dt$ to get the slope. But how do I know what $t$ value to use?
You have i) $x=5+t^2-t$ and ii) $y=t^2+5$, and $P=(5,6)$. From $P=(5,6)$ you get $x=5$ and $y=6$. From ii) you now get $t^2=1$, so $t=1$ or $t=-1$, and from i) (knowing that $t\in\{1,-1\}$) you get $t=1$. Your curve has the parametric representation $\gamma: I\subseteq\mathbb{R}\rightarrow {\mathbb{R}}^2: t\mapsto (5+t^2-t,t^2+5)$. Therefore, $\frac{d}{dt}\gamma=(2t-1,2t)$. Put $t=1$ into the derivative of $\gamma$: this gives the velocity vector $(1,2)$, so the slope of the tangent at $P$ is $\frac{2}{1}=2$ and the tangent line is $y-6=2(x-5)$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/211200", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Cross Product of Partial Orders I'm going to have similar questions on my test tomorrow. I am really stuck on this problem. I don't know how to start. Any sort of help will be appreciated. Thank you. Suppose that $(L_1, \le_1)$ and $(L_2, \le_2)$ are partially ordered sets. We define a partial order $\le$ on the set $L_1 \times L_2$ in the most obvious way: we say $(a,b)\le(c,d)$ if and only if $a\le_1 c$ and $b\le_2 d$. a) Verify that this is a partial order. Show by example that it may not be a total order. b) Show that if $(L_1, \le_1)$ and $(L_2, \le_2)$ are both lattices, then so is $(L_1 \times L_2, \le)$. c) Show that if $(L_1, \le_1)$ and $(L_2, \le_2)$ are both modular lattices, then so is $(L_1 \times L_2, \le)$. d) Show that if $(L_1, \le_1)$ and $(L_2, \le_2)$ are both distributive lattices, then so is $(L_1 \times L_2, \le)$. e) Show that if $(L_1, \le_1)$ and $(L_2, \le_2)$ are both Boolean algebras, then so is $(L_1 \times L_2, \le)$.
I will talk about part a, then you should give the other parts a try. They will follow in a similar manner (i.e. breaking $\leq$ into its components $\leq_1$ and $\leq_2$). To show $(L_1 \times L_2, \leq)$ is a partial order, we need to show it is reflexive, anti-symmetric, and transitive. Reflexivity: Given any $(x,y) \in L_1 \times L_2$ we want to show $(x,y) \leq (x,y)$. Looking at the definition of $\leq$, we are really asking whether both $x \leq_1 x$ and $y \leq_2 y$, which is true since $(L_1, \leq_1)$ and $(L_2, \leq_2)$ are both partial orders. Anti-symmetry: Suppose $(a,x) \leq (b,y)$ and $(b,y) \leq (a,x)$. This tells us a lot of information: * *From the first coordinates, we see $a \leq_1 b$ and $b \leq_1 a$, so $a = b$ (since $\leq_1$ is anti-symmetric). *From the second coordinates, we see $x \leq_2 y$ and $y \leq_2 x$, so $x = y$ (since $\leq_2$ is anti-symmetric). Combining those two observations gives $(a,x) = (b,y)$, so $\leq$ is anti-symmetric. Transitivity: Suppose $(a,x) \leq (b,y)$ and $(b,y) \leq (c,z)$. We want to show $(a,x) \leq (c,z)$. Our hypotheses give lots of information again: * *From the first coordinates, we see $a \leq_1 b$ and $b \leq_1 c$. Since $\leq_1$ is transitive, we know $a \leq_1 c$. *From the first coordinates, we see $x \leq_2 y$ and $y \leq_2 z$. Since $\leq_2$ is transitive, we know $x \leq_2 z$. Combining those two observations gives $(a,x) \leq (c,z)$, so $\leq$ is transitive.
{ "language": "en", "url": "https://math.stackexchange.com/questions/211266", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Number of prime divisors of element orders from character table. From wikipedia: It follows, using some results of Richard Brauer from modular representation theory, that the prime divisors of the orders of the elements of each conjugacy class of a finite group can be deduced from its character table (an observation of Graham Higman). Precisely how is this done? I would be happy with a reference. (In particular, I am looking for the location of the elements in a solvable group whose orders have the maximum number of prime divisors. If somebody has any extra information on that particular situation, that would also be appreciated.)
I think I probably wrote the quoted passage in Wikipedia. If we let $\pi$ be a prime ideal of $\mathbb{Z}[\omega]$ containing $p,$ where $\omega$ is a primitive complex $|G|$-th root of unity, then it is the case that two elements $x$ and $y$ of $G$ have conjugate $p^{\prime}$-part if and only if we have $\chi(x) \equiv \chi(y)$ (mod $\pi$) for each irreducible character $\chi$ of $G$. This is because the $\mathbb{Z}$-module spanned by the restrictions of irreducible characters to $p$-regular elements is the $\mathbb{Z}$-span of irreducible Brauer characters. The rows of the Brauer character table remain linearly independent (mod $\pi$), as Brauer showed. Furthermore, the value (mod $\pi$) of $\chi(x)$ only depends on the $p^{\prime}$-part of $x,$ since $\eta- 1 \in \pi$ whenever $\eta$ is a $p$-power root of unity. We now work inductively: for any prime $p$, we can recognise elements $g \in G$ which have $p$-power order, since $g$ has $p$-power order if and only if $\chi(g) \equiv \chi(1)$ (mod $\pi$) for all irreducible characters $\chi.$ If we choose a different prime $q,$ we can recognise $q$-elements by the same procedure. But then we can recognise the elements whose orders have the form $p^{a}q^{b}.$ Such an element $h$ must have $p^{\prime}$-part $z$ which has order $q^{b},$ so we have previously identified the possible $p^{\prime}$-parts. Furthermore, $h$ has $p^{\prime}$-part $z$ if and only if $\chi(h) \equiv \chi(z)$ (mod $\pi$) for all irreducible characters $\chi.$ And then, given a different prime $r,$ we can identify all elements whose orders have the form $p^{a}q^{b}r^{c}$ in a similar manner, etc.
{ "language": "en", "url": "https://math.stackexchange.com/questions/211324", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
Prime, followed by the cube of a prime, followed by the square of a prime. Other examples? The numbers 7, 8, 9, apart from being part of a really lame math joke, also have a unique property. Consecutively, they are a prime number, followed by the cube of a prime, followed by the square of a prime. Firstly, does this occurrence happen with any other triplet of consecutive numbers? More importantly, is there any way to predict or determine when and where these phenomena will occur, or do we just discover them as we go?
If you'll settle for a prime, cube of a prime, square of a prime in arithmetic progression (instead of consecutive), you've got $$5,27=3^3,49=7^2\qquad \rm{(common\ difference\ 22)}$$ and $$157,\ 343=7^3,\ 529=23^2 \qquad \rm{(common\ difference\ 186)}$$ and, no doubt, many more where those came from. A bit more exotic is the arithmetic progression $81,\ 125,\ 169$ with common difference 44: the 4th power of the prime 3, the cube of the prime 5, the square of the prime 13. So $$3^4,5^3,13^2$$ is an arithmetic progression of powers of primes, and the exponents are also in arithmetic progression.
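A hedged brute-force sketch (my addition) for the original question about consecutive numbers: search for triples $(p,\,q^3,\,r^2)$ of consecutive integers with $p,q,r$ prime; the search bound of $10^6$ is my arbitrary choice.

```python
from sympy import isprime, integer_nthroot

def is_prime_power(n, e):
    root, exact = integer_nthroot(n, e)   # integer e-th root and exactness flag
    return exact and isprime(root)

hits = [n for n in range(2, 10**6)
        if isprime(n) and is_prime_power(n + 1, 3) and is_prime_power(n + 2, 2)]
print(hits)  # up to 10**6 this finds only [7], i.e. the triple 7, 8, 9
```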
{ "language": "en", "url": "https://math.stackexchange.com/questions/211379", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 4, "answer_id": 3 }
Inverse of a Positive Definite Let $K$ be a nonsingular symmetric matrix; prove that if $K$ is positive definite, so is $K^{-1}$. My attempt: I have that $K = K^T$ so $x^TKx = x^TK^Tx = (Kx)^Tx$, and then I don't know what to do next.
inspired by the answer of kjetil b halvorsen To recap, a matrix $A \in \mathbb{C}^{n \times n}$ is HPD (Hermitian positive definite) iff $\forall x \in \mathbb{C}^n, x \neq 0 : x^*Ax > 0$. HPD matrices have full rank, therefore are invertible and $A^{-1}$ exists. Also, full rank matrices represent a bijection, therefore $\forall x \in \mathbb{C}^n \enspace \exists y \in \mathbb{C}^n : x = Ay$. We want to know if $A^{-1}$ is also HPD, that is, our goal is $\forall x \in \mathbb{C}^n, x \neq 0 : x^*A^{-1}x > 0$. Let $x \in \mathbb{C}^n, x \neq 0$. Because $A$ is a bijection, there exists $y \in \mathbb{C}^n$, $y \neq 0$, such that $x=Ay$. We can therefore write $$x^*A^{-1}x = (Ay)^*A^{-1}(Ay) = y^*A^*A^{-1}Ay = y^*A^*y = y^*Ay > 0,$$ which is what we wanted to prove.
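A quick numerical sanity check of the statement (my addition, independent of the proof): build a random symmetric positive definite $K$ and verify that both $K$ and $K^{-1}$ have positive eigenvalues; the size and seed are my own choices.

```python
import numpy as np

rng = np.random.default_rng(1)
M = rng.standard_normal((5, 5))
K = M @ M.T + 5 * np.eye(5)      # symmetric positive definite by construction

print(np.linalg.eigvalsh(K).min() > 0)                 # True
print(np.linalg.eigvalsh(np.linalg.inv(K)).min() > 0)  # True
```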
{ "language": "en", "url": "https://math.stackexchange.com/questions/211453", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "39", "answer_count": 4, "answer_id": 3 }
Set of points of continuity are $G_{\delta}$ Let $f: \mathbb{R} \rightarrow \mathbb{R}$ be a function. Show that the set of points at which $f$ is continuous is a $G_{\delta}$ set. $$A_n = \{ x \in \mathbb{R} \mid \exists \text{ an open ball } B(x,r) \text{ s.t. } |f(x'')-f(x')|<\frac{1}{n}, \forall x',x'' \in B(x,r)\}$$ I saw that this proof was already on here, but I wanted to confirm and flesh out more details. "$\Rightarrow$" If $f$ is continuous at $x$, then $|f(x'')-f(x')|<\frac{1}{n}$ for $x'',x' \in B(x, r_{n})$. That is, there is a ball of radius $r$ where $r$ depends on $n$. Then $x \in A_n$ and thus $x \in \cap A_n$. "$\Leftarrow$" If $x \in \cap A_n$, then there is an $\epsilon > 0$ and a $\delta > 0$ such that $x' , x'' \in B(x, \delta_n)$ for all $n$ and $$|f(x'')-f(x')|<\epsilon.$$ Take $\epsilon = \frac{1}{n}$.
Here's a slightly different approach. Let $G$ be the set of points where $f$ is continuous, $A_{n,x} = (x-\frac{1}{n}, x+ \frac{1}{n})$ is an open set where $f$ is continuous, and $A_n = \bigcup_{x \in G} A_{n,x}$. Since $A_n$ is union of open sets which is open, $\bigcap_{n \in \mathbb{N}} A_n $ is a $G_\delta$ set. We want to show $\bigcap_{n \in \mathbb{N}} A_n = G$. $\forall n \in \mathbb{N}, A_{n+1, x} \subset A_{n,x} \\ \Rightarrow \bigcup_{x \in G}A_{n+1,x} \subset \bigcup_{x \in G}A_{n,x} \\ \Rightarrow A_{n+1} \subset A_n \\ \Rightarrow \{A_n\} \text{ is a decreasing sequence of nested sets}$ Furthermore, $\bigcap_{n \in \mathbb{N}} A_n = \{x\} $. (i.e. intersection is non-empty by Nested Set Theorem / Cantor Theorem). Therefore, $\begin{align}\bigcap_{n \in \mathbb{N}} A_n &= \lim_{n \to \infty} A_n \\ &= \lim_{n \to \infty} \bigcup_{x \in G} A_{n,x} \\ &= \bigcup_{x \in G} (\lim_{n \to \infty} A_{n,x} ) \\ &= \bigcup_{x \in G} \{x\} \\ &= G\end{align}$ So $G$ is the intersection of open sets and is a $G_\delta$ set. Can someone see why interchanging the limit and union is legal?
{ "language": "en", "url": "https://math.stackexchange.com/questions/211511", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12", "answer_count": 2, "answer_id": 1 }
Riemann-Stieltjes integral, integration by parts (Rudin) Problem 17 of Chapter 6 of Rudin's Principles of Mathematical Analysis asks us to prove the following: Suppose $\alpha$ increases monotonically on $[a,b]$, $g$ is continuous, and $g(x)=G'(x)$ for $a \leq x \leq b$. Prove that, $$\int_a^b\alpha(x)g(x)\,dx=G(b)\alpha(b)-G(a)\alpha(a)-\int_a^bG\,d\alpha.$$ It seems to me that the continuity of $g$ is not necessary for the result above. It is enough to assume that $g$ is Riemann integrable. Am I right in thinking this? I have thought as follows: $\int_a^bG\,d\alpha$ exists because $G$ is differentiable and hence continuous. $\alpha(x)$ is integrable with respect to $x$ since it is monotonic. If $g(x)$ is also integrable with respect to $x$ then $\int_a^b\alpha(x)g(x)\,dx$ also exists. To prove the given formula, I start from the hint given by Rudin $$\sum_{i=1}^n\alpha(x_i)g(t_i)\Delta x_i=G(b)\alpha(b)-G(a)\alpha(a)-\sum_{i=1}^nG(x_{i-1})\Delta \alpha_i$$ where $g(t_i)\Delta x_i=\Delta G_i$ by the intermediate mean value theorem. Now the sum on the right-hand side converges to $\int_a^bG\,d\alpha$. The sum on the left-hand side would have converged to $\int_a^b\alpha(x)g(x)\,dx$ if it had been $$\sum_{i=1}^n \alpha(x_i)g(x_i)\Delta x$$ The absolute difference between this and what we have is bounded above by $$\max(|\alpha(a)|,|\alpha(b)|)\sum_{i=1}^n |g(x_i)-g(t_i)|\Delta x$$ and this can be made arbitrarily small because $g(x)$ is integrable with respect to $x$.
Compare with the following theorem, Theorem: Suppose $f$ and $g$ are bounded functions with no common discontinuities on the interval $[a,b]$, and the Riemann-Stieltjes integral of $f$ with respect to $g$ exists. Then the Riemann-Stieltjes integral of $g$ with respect to $f$ exists, and $$\int_{a}^{b} g(x)df(x) = f(b)g(b)-f(a)g(a)-\int_{a}^{b} f(x)dg(x)\,. $$
{ "language": "en", "url": "https://math.stackexchange.com/questions/211552", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11", "answer_count": 2, "answer_id": 1 }
What is the number of all possible relations/intersections of n sets? If $n$ is the number of sets, what is the number of all possible relations between them? For example, when $n = 2$: 1) A can intersect with B 2) A and B can be disjoint 3) A can be a subset of B 4) B can be a subset of A that leaves us with 4 possible relations. Now with $n = 3$ it gets trickier (A can intersect with B, but not C, or B can be a subset of C and intersect with A there, etc.). Wondering if there's any formula that can be made to calculate such possible relations. I've been working on this problem for the last couple of days and have read about Venn diagrams and Karnaugh maps, but still can't figure that one out. Any help is appreciated!
Disclaimer: Not an answer® I'd like to think about this problem not as sets, but as elements in a partial order. Suppose all sets are different. Define $\mathscr{P} = \langle\mathscr{P}(\bigcup_n A_n), \subseteq\rangle$ as the partial order generated by the subset relation on all "interesting" sets. Define the operation $\cap$ on $\mathscr{P}$ as $$ C = A\cap B \iff C = \sup_\subseteq \{D: D\subseteq A \wedge D\subseteq B\} $$ which is well defined because of, well, set theory.... The question could then be stated as: Given the sets $A_n$, let $\mathscr{G}$ be the subgraph of $\mathscr{P}$ generated by the $A_n$ and closed under the $\cap$ operation. How many non-isomorphic graphs can we get?
{ "language": "en", "url": "https://math.stackexchange.com/questions/211645", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 3, "answer_id": 2 }
Why not write $\sqrt{3}2$? Is it just for aesthetic purposes, or is there a deeper reason why we write $2\sqrt{3}$ and not $\sqrt{3}2$?
Certainly one can find old books in which $\sqrt{x}$ was set as $\sqrt{\vphantom{x}}x$, and just as $32$ does not mean $3\cdot2$, so also $\sqrt{\vphantom{32}}32$ would not mean $\sqrt{3}\cdot 2$, but rather $\sqrt{32}$. An overline was once used where round brackets are used today, so that, where we now write $(a+b)^2$, people would write $\overline{a+b}^2$. Probably that's how the overline in $\sqrt{a+b}$ originated. Today, an incessant battle that will never end tries to call students' attention to the fact that $\sqrt{5}z$ is not the same as $\sqrt{5z}$ and $\sqrt{b^2-4ac}$ is not the same as $\sqrt{b^2-4}ac$, the latter being what one sees written by students.
{ "language": "en", "url": "https://math.stackexchange.com/questions/211695", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "13", "answer_count": 4, "answer_id": 0 }
How to find the eigenvalues and eigenvector without computation? The given matrix is $$ \begin{pmatrix} 2 & 2 & 2 \\ 2 & 2 & 2 \\ 2 & 2 & 2 \\ \end{pmatrix} $$ So, how could I find the eigenvalues and eigenvectors without computation? Thank you.
For the eigenvalues, you can look at the matrix and extract some quick informations. Notice that the matrix has rank one (all columns are the same), hence zero is an eigenvalue with algebraic multiplicity two. For the third eigenvalue, use the fact that the trace of the matrix equals the sum of all its eigenvalues; since $\lambda_1=\lambda_2=0$, you easily get $\lambda_3=6$. For the eigenvectors corresponding to $\lambda=0$ sometimes it's not hard; in this case it's clear that $v_1=[1,-1,0]$ and $v_2=[0,1,-1]$ (to be normalized) are eigenvectors corresponding to $\lambda=0$. For the eigenvector corresponding to $\lambda_3$, it's not as obvious as before, and you might have to actually write down the system, finding $v_3=[1,1,1]$ (to be normalized).
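For what it's worth, here is a short numerical confirmation of this reasoning (my addition, not part of the answer):

```python
import numpy as np

A = 2 * np.ones((3, 3))
vals, vecs = np.linalg.eigh(A)   # A is symmetric, so eigh applies
print(np.round(vals, 10))        # [0. 0. 6.]
print(vecs[:, -1])               # proportional to (1, 1, 1), up to sign
```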
{ "language": "en", "url": "https://math.stackexchange.com/questions/211865", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Are there any memorization techniques that exist for math students? I just watched this video on Ted.com entitled: Joshua Foer: Feats of memory anyone can do and it got me thinking about memory from a programmers perspective, and since programming and mathematics are so similar I figured I post here as well. There are so many abstract concepts and syntactic nuances that are constantly encountered, and yet we still manage to retain that information. The memory palace may help in remembering someone's name, a sequence of numbers, or a random story, but are there any memorization techniques that can better aid those learning new math concepts?
For propositional logic operations, you can remember their truth tables as follows: Let 0 stand for falsity and 1 for truth. For the conjunction operation use the mnemonic of the minimum of two numbers. For the disjunction operation, use the mnemonic of the maximum of two numbers. For the truth table for the material conditional (x->y), you can use max(1-x, y). For negation you can use 1-x.
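A tiny script (my addition) verifying that the min/max/$1-x$ mnemonics reproduce the usual truth tables, with 0 as false and 1 as true:

```python
for x in (0, 1):
    for y in (0, 1):
        assert min(x, y) == (x and y)                        # conjunction
        assert max(x, y) == (x or y)                         # disjunction
        assert max(1 - x, y) == (1 if (not x or y) else 0)   # material conditional
    assert 1 - x == (0 if x else 1)                          # negation
print("all truth tables match")
```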
{ "language": "en", "url": "https://math.stackexchange.com/questions/211944", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "16", "answer_count": 6, "answer_id": 3 }
linear algebra problem please help Let $V=\mathbb{R}^4$ and let $W=\langle\begin{bmatrix}1&1&0&0\end{bmatrix}^t,\begin{bmatrix}1&0&1&0\end{bmatrix}^t\rangle$. We need to find subspaces $U$ and $T$ such that $V=W\oplus U$ and $V=W\oplus T$ but $U\ne T$.
HINT: Look at a simpler problem first. Let $X=\{\langle x,0\rangle:x\in\Bbb R\}$, a subspace of $\Bbb R^2$. Can you find subspaces $V$ and $W$ of $\Bbb R^2$ such that $\Bbb R^2=X\oplus V=X\oplus W$, but $V\ne W$?
{ "language": "en", "url": "https://math.stackexchange.com/questions/212017", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Arcsine law for Brownian motion Here is the question: $(B_t,t\ge 0)$ is a standard Brownian motion, starting at $0$. $S_t=\sup_{0\le s\le t} B_s$. $T=\inf\{t\ge 0: B_t=S_1\}$. Show that $T$ follows the arcsine law with density $g(t)=\frac{1}{\pi\sqrt{t(1-t)}}1_{]0,1[}(t)$. I used the Markov property to get the following equality: $P(T<t)=P(\sup_{t<s<1}B_s<S_t)=E(P(\sup_{0<s<1-t}(B_{t+s}-B_t)<S_t-B_t|F_t))=P(\hat{S}_{1-t}<S_t-B_t).$ where $\hat{S}_{1-t}$ is defined for the Brownian motion $\hat{B}_s=B_{t+s}-B_t$, which is independent of $F_t$. However the reflection principle tells us that $S_t-B_t$ has the same law as $S_t$, so we can also write that $P(T<t)=P(\hat{S}_{1-t}<S_t)$. To this point, we can calculate $P(T<t)$ because we know the joint density of $(\hat{S}_{1-t},S_t)$, but this calculation leads to a complicated form of integral and I cannot get the density $g$ at the end. Do you know how to get the arcsine law? Thank you.
Let us start from the formula $\mathbb P(T\lt t)=\mathbb P(\hat S_{1-t}\lt S_t)$, where $0\leqslant t\leqslant 1$, and $\hat S_{1-t}$ and $S_t$ are the maxima at times $1-t$ and $t$ of two independent Brownian motions. Let $X$ and $Y$ denote two i.i.d. standard normal random variables, then $(\hat S_{1-t},S_t)$ coincides in distribution with $(\sqrt{1-t}|X|,\sqrt{t}|Y|)$ hence $$ \mathbb P(T\lt t)=\mathbb P(\sqrt{1-t}|X|\lt\sqrt{t}|Y|)=\mathbb P(|Z|\lt\sqrt{t}), $$ where $Z=X/\sqrt{X^2+Y^2}$. Now, $Z=\sin\Theta$, where the random variable $\Theta$ is the argument of the two-dimensional random vector $(X,Y)$ whose density is $\mathrm e^{-(x^2+y^2)/2}/(2\pi)$, which is invariant by the rotations of center $(0,0)$. Hence $\Theta$ is uniformly distributed on $[-\pi,\pi]$ and $$ \mathbb P(T\lt t)=\mathbb P(|\sin\Theta|\lt\sqrt{t})=2\,\mathbb P(|\Theta|\lt\arcsin\sqrt{t})=\tfrac2\pi\,\arcsin\sqrt{t}. $$ The density of the distribution of $T$ follows by differentiation.
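As a hedged empirical check (my addition), one can approximate Brownian motion by a scaled random walk and compare the empirical law of the argmax time with $\frac2\pi\arcsin\sqrt t$; the step counts and seed below are my own choices.

```python
import numpy as np

rng = np.random.default_rng(2)
n_steps, n_paths = 1000, 20000
steps = rng.standard_normal((n_paths, n_steps)) / np.sqrt(n_steps)
# prepend the starting value 0 so the maximum over [0, 1] is handled correctly
paths = np.concatenate([np.zeros((n_paths, 1)), np.cumsum(steps, axis=1)], axis=1)
T = np.argmax(paths, axis=1) / n_steps   # time at which the running maximum is hit

for t in (0.1, 0.5, 0.9):
    empirical = np.mean(T <= t)
    theory = 2 / np.pi * np.arcsin(np.sqrt(t))
    print(t, round(empirical, 3), round(theory, 3))  # should roughly agree
```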
{ "language": "en", "url": "https://math.stackexchange.com/questions/212072", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 1, "answer_id": 0 }
Derivative wrt. to Lie bracket. Let $\mathbf{G}$ be a matrix Lie group, $\frak{g}$ the corresponding Lie algebra, $\widehat{\mathbf{x}} = \sum_i^m x_i G_i$ the corresponding hat-operator ($G_i$ the $i$th basis vector of the tangent space/Lie algebra $\frak{g}$) and $(\cdot)^\vee$ the inverse of $\widehat{\cdot}$: $$(X)^\vee := \text{ that } \mathbf{x} \text{ such that } \widehat{\mathbf{x}} = X.$$ Let us define the Lie bracket over $m$-vectors as: $[\mathbf{a},\mathbf{b}] = \left(\widehat{\mathbf{a}}\cdot\widehat{\mathbf{b}}-\widehat{\mathbf{b}}\cdot\widehat{\mathbf{a}}\right)^\vee$. (Example: For $\frak{so}(3)$, $[\mathbf{a},\mathbf{b}] = \mathbf{a}\times \mathbf{b}$ with $\mathbf{a},\mathbf{b} \in \mathbb{R}^3$.) Is there a common name for the derivative $$\frac{\partial [\mathbf{a},\mathbf{b}]}{\partial \mathbf{a}}\,?$$
I may be misunderstanding your question, but it seems like you are asking for the derivative of the map $$ \mathfrak g \to \mathfrak g, ~~ a \mapsto [a, b] $$ where $b \in \mathfrak g$ is fixed. Since $\mathfrak g$ is a vector space, the derivative at a point can be viewed as a map $\mathfrak g \to \mathfrak g$. But the above map is linear, so it is its own derivative at any point.
{ "language": "en", "url": "https://math.stackexchange.com/questions/212130", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
A subset of a compact set is compact? Claim: Let $S\subset T\subset X$ where $X$ is a metric space. If $T$ is compact in $X$ then $S$ is also compact in $X$. Proof: Given that $T$ is compact in $X$, for any open cover of $T$ there is a finite open subcover; denote it as $\left \{V_i \right \}_{i=1}^{N}$. Since $S\subset T\subset \bigcup_{i=1}^{N}V_i$, the collection $\left \{V_i \right \}_{i=1}^{N}$ also covers $S$, and hence $S$ is compact in $X$. Edited: I see why this is false, but in general, why is every closed subset of a compact set compact?
According to the definition of a compact set, we need that every open cover of the set $K$ contains a finite subcover. Hence, not every subset of a compact set is compact. Why are closed subsets of compact sets compact? Proof Suppose $F\subset K\subset X$, $F$ is closed in $X$, and $K$ is compact. Let $\{G_{\alpha}\}$ be an open cover of $F$. Because $F$ is closed, $F^{c}$ is open. If we add $F^{c}$ to $\{G_{\alpha}\}$, then we get an open cover $\Omega$ of $K$. Because $K$ is compact, $\Omega$ has a finite subcollection $\omega$ which covers $K$, and hence $F$. If $F^{c}$ is in $\omega$, we can remove it from the subcollection. We have found a finite subcollection of $\{G_{\alpha}\}$ that covers $F$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/212181", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "31", "answer_count": 6, "answer_id": 4 }
What does it mean for something to be true but not provable in peano arithmetic? Specifically, the Paris-Harrington theorem. In what sense is it true? True in Peano arithmetic but not provable in Peano arithmetic, or true in some other sense?
Peano Arithmetic is a particular proof system for reasoning about the natural numbers. As such it does not make sense to speak about something being "true in PA" -- there is only "provable in PA", "disprovable in PA", and "independent of PA". When we speak of "truth" it must be with respect to some particular model. In the case of arithmetic statements, the model we always speak about unless something else is explicitly specified is the actual (Platonic) natural numbers. Virtually all mathematicians expect these numbers to "exist" (in whichever philosophical sense you prefer mathematical objects to exist in) independently of any formal system for reasoning about them, and the great majority expect all statements about them to have objective (but not necessarily knowable) truth values. We're very sure that everything that is "provable in PA" is also "true about the natural numbers", but the converse does not hold: There exist sentences that are "true about the actual natural numbers" but not "provable in PA". This was famously proved by Gödel -- actually he gave a (formalizable, with a few additional technical assumptions) proof that a particular sentence was neither "provable in PA" nor "disprovable in PA", and a convincing (but not strictly formalizable) argument that this sentence is true about the actual natural numbers. Paris-Harrington shows that another particular sentence is of this kind: not provable in PA, yet true about the actual natural numbers.
{ "language": "en", "url": "https://math.stackexchange.com/questions/213253", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "20", "answer_count": 5, "answer_id": 0 }
Proofs with limit superior and limit inferior: $\liminf a_n \leq \limsup a_n$ I am stuck on proofs with subsequences. I do not really have a strategy or starting point with subsequences. NOTE: subsequential limits are limits of subsequences Prove: $a_n$ is bounded $\implies \liminf a_n \leq \limsup a_n$ Proof: Let $a_n$ be a bounded sequence. That is, $\forall n\ (|a_n| \leq A)$. If $a_n$ converges then $\liminf a_n = \lim a_n = \limsup a_n$ and we are done. Otherwise $a_n$ has a set of subsequential limits and we need to show $\liminf a_n \leq \limsup a_n$: This is where I am stuck...
Hint: Think about what the definitions mean. We have $$\limsup a_n = \lim_n \sup \{ a_k \textrm{ : } k \geq n\}$$ and $$\liminf a_n = \lim_n \inf \{ a_k \textrm{ : } k \geq n\}$$ What can you say about the individual terms $\sup \{a_k \textrm{ : } k \geq n\}$ and $\inf \{a_k \textrm{ : } k \geq n\}$ ?
{ "language": "en", "url": "https://math.stackexchange.com/questions/213327", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 0 }
Prove an inequality with a $\sin$ function: $\sin(x) > \frac2\pi x$ for $0<x<\frac\pi2$ $$\forall{x\in(0,\tfrac{\pi}{2})}:\ \sin(x) > \frac{2}{\pi}x $$ I suppose that solving $ \sin x = \frac{2}{\pi}x $ is the top difficulty of this exercise, but I don't know how to think through such cases in which there is an argument on the right side of a trigonometric equation.
Here is a simple solution. Let $0 < x < \pi/2$ be fixed. By Mean value theorem there exists $y \in (0, x)$ such that $$\sin(x)-\sin(0)= \cos(y)(x-0).$$ Thus $$\frac{\sin(x)}{x}= \cos(y).$$ Similarly there exists $z \in (x, \pi/2)$ such that $$\sin(\pi/2)-\sin(x)= \cos(z)(\pi/2-x).$$ Thus $$\frac{1-\sin(x)}{\pi/2- x}= \cos(z).$$ As $0 < y < z< \pi/2$ and $\cos$ is a strictly decreasing function in $[0, \pi/2]$ we see that $$ \cos (z) < \cos (y).$$ Thus $$\frac{1-\sin(x)}{\pi/2- x} < \frac{\sin(x)}{x}.$$ $$\Rightarrow \frac{1-\sin(x)}{\sin(x)} < \frac{\pi/2- x}{x}.$$ $$\Rightarrow \frac{1}{\sin(x)}- 1< \frac{\pi}{2x}- 1.$$ $$\Rightarrow \frac{2}{\pi}< \frac{\sin (x)}{x}.$$ Since $y \in (0, \pi/2)$, therefore $\cos(y) <1$. Hence it is also clear that $$\frac{\sin(x)}{x}= \cos(y) <1.$$ Hence for any $0 < x < \pi/2$, we get $$\frac 2 \pi<\frac{\sin(x)}{x}<1.$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/213382", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 4, "answer_id": 3 }
Tenenbaum and Pollard, Ordinary Differential Equations, problem 1.4.29, what am I missing? Tenenbaum and Pollard's "Ordinary Differential Equations," chapter 1, section 4, problem 29 asks for a differential equation whose solution is "a family of straight lines that are tangent to the circle $x^2 + y^2 = c^2$, where $c$ is a constant." Since the solutions will be lines, I start with the formula $y = m x + b$, and since the line is determined by a single parameter (the point on the circle to which the line is tangent) I expect the differential equation to be of order one. Differentiating, I get $y' = m$, so $y = y' x + b$. So now, I need an equation for $b$. The solution given in the text is $y = x y' \pm c \sqrt{(y')^2 + 1}$, implying $b = \pm c \sqrt{(y')^2 + 1}$, but try as I might I have been unable to derive this formula for $b$. I'm sure I'm missing something simple.
I'll assume the point $P=(x,y)$ lies on the circle $x^2+y^2=c^2$ in the first quadrant. The slope of the tangent at $P$ is $y'$, as you say. You need to express the $y$-intercept. Extend the tangent line until it meets the $x$-axis at $A$ and the $y$-axis at $B$, and call the origin $O$. Then the two triangles $APO$ and $OPB$ are similar. From this you can get the $y$-intercept, which is the point $(0,OB)$, by use of $$OB=OP\cdot\frac{AB}{OA}=OP\cdot\sqrt{\frac{OA^2+OB^2}{OA^2}}=OP\cdot\sqrt{1+\left(\frac{OB}{OA}\right)^2}.$$ And $y'=-OB/OA$, being the slope of the line joining $A$ to $B$, lying respectively on the $x$- and $y$-axes. Finally the $OP$ here is the constant $c$, the circle's radius.
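A symbolic sanity check (my addition): the tangent lines $y=mx\pm c\sqrt{m^2+1}$ to the circle do satisfy the differential equation from the book.

```python
import sympy as sp

x, m, c = sp.symbols('x m c', positive=True)
y = m * x + c * sp.sqrt(m**2 + 1)      # one family of tangent lines
yp = sp.diff(y, x)                     # y' = m
residual = y - (x * yp + c * sp.sqrt(yp**2 + 1))
print(sp.simplify(residual))           # 0, so the ODE is satisfied
```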
{ "language": "en", "url": "https://math.stackexchange.com/questions/213453", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Solving trigonometric equations of the form $a\sin x + b\cos x = c$ Suppose that there is a trigonometric equation of the form $a\sin x + b\cos x = c$, where $a,b,c$ are real and $0 < x < 2\pi$. An example equation would go the following: $\sqrt{3}\sin x + \cos x = 2$ where $0<x<2\pi$. How do you solve this equation without using the method that moves $b\cos x$ to the right side and squares both sides of the equation? And how is solving $\sqrt{3}\sin x + \cos x = 2$ equivalent to solving $\sin (x+ \frac{\pi}{6}) = 1$?
Riffing on @Yves' "little known" solutions ... The above trigonograph shows a scenario with $a^2 + b^2 = c^2 + d^2$, for $d \geq 0$, and we see that $$\theta = \operatorname{atan}\frac{a}{b} + \operatorname{atan}\frac{d}{c} \tag{1}$$ (If the "$a$" triangle were taller than the "$b$" triangle, the "$+$" would become "$-$". Effectively, we can take $d$ to be negative to get the "other" solution.) Observe that both $c$ and $d$ are expressible in terms of $a$, $b$, $\theta$: $$\begin{align} a \sin\theta + b \cos\theta &= c \\ b \sin\theta - a\cos\theta &= d \quad\text{(could be negative)} \end{align}$$ Solving that system for $\sin\theta$ and $\cos\theta$ gives $$\left.\begin{align} \sin\theta &= \frac{ac+bd}{a^2+b^2} \\[6pt] \cos\theta &= \frac{bc-ad}{a^2+b^2} \end{align}\quad\right\rbrace\quad\to\quad \tan\theta = \frac{ac+bd}{bc-ad} \tag{2}$$ We can arrive at $(2)$ in a slightly-more-geometric manner by noting $$c d = (a\sin\theta + b \cos\theta)d = c( b\sin\theta - a \cos\theta ) \;\to\; ( b c - a d)\sin\theta = \left( a c + b d \right)\cos\theta \;\to\; (2) $$ where each term in the expanded form of the first equation can be viewed as the area of a rectangular region in the trigonograph. (For instance, $b c \sin\theta$ is the area of the entire figure.)
{ "language": "en", "url": "https://math.stackexchange.com/questions/213545", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "21", "answer_count": 6, "answer_id": 2 }
definite and indefinite sums and integrals It just occurred to me that I tend to think of integrals primarily as indefinite integrals and sums primarily as definite sums. That is, when I see a definite integral, my first approach at solving it is to find an antiderivative, and only if that doesn't seem promising I'll consider whether there might be a special way to solve it for these special limits; whereas when I see a sum it's usually not the first thing that occurs to me that the terms might be differences of some function. In other words, telescoping a sum seems to be just one particular way among many of evaluating it, whereas finding antiderivatives is the primary way of evaluating integrals. In fact I learned about telescoping sums much later than about antiderivatives, and I've only relatively recently learned to see these two phenomena as different versions of the same thing. Also it seems to me that empirically the fraction of cases in which this approach is useful is much higher for integrals than for sums. So I'm wondering why that is. Do you see a systematic reason why this method is more productive for integrals? Or is it perhaps just a matter of education and an "objective" view wouldn't make a distinction between sums and integrals in this regard? I'm aware that this is rather a soft question, but I'm hoping it might generate some insight without leading to open-ended discussions.
Also, read about how Feynman learned some non-standard methods of integration (such as differentiating under the integral sign) and used these to evaluate various integrals that usually needed complex integration.
{ "language": "en", "url": "https://math.stackexchange.com/questions/213606", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "15", "answer_count": 3, "answer_id": 1 }
Get the equation of a circle when given 3 points Get the equation of a circle through the points $(1,1), (2,4), (5,3) $. I can solve this by simply drawing it, but is there a way of solving it (easily) without having to draw?
Big hint: Let $A\equiv (1,1)$,$B\equiv (2,4)$ and $C\equiv (5,3)$. We know that the perpendicular bisectors of the three sides of a triangle are concurrent.Join $A$ and $B$ and also $B$ and $C$. The perpendicular bisector of $AB$ must pass through the point $(\frac{1+2}{2},\frac{1+4}{2})$ Now find the equations of the straight lines AB and BC and after that the equation of the perpendicular bisectors of $AB$ and $BC$.Solve for the equations of the perpendicular bisectors of $AB$ and $BC$ to get the centre of your circle.
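An algebraic alternative to carrying out the bisector construction by hand (my addition): write the circle as $x^2+y^2+Dx+Ey+F=0$ and solve the linear system given by the three points.

```python
import numpy as np

pts = [(1, 1), (2, 4), (5, 3)]
A = np.array([[x, y, 1] for x, y in pts], dtype=float)
b = np.array([-(x**2 + y**2) for x, y in pts], dtype=float)
D, E, F = np.linalg.solve(A, b)
cx, cy = -D / 2, -E / 2                 # center of the circle
radius = np.sqrt(cx**2 + cy**2 - F)
print(cx, cy, radius)                   # center (3, 2), radius sqrt(5)
```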
{ "language": "en", "url": "https://math.stackexchange.com/questions/213658", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "33", "answer_count": 16, "answer_id": 0 }
The intersection of a line with a circle Get the intersections of the line $y=x+2$ with the circle $x^2+y^2=10$ What I did: $y^2=10-x^2$ $y=\sqrt{10-x^2}$ or $y=-\sqrt{10-x^2}$ $x+2 = y = \sqrt{10-x^2}$ If you continue, $x=-3$ or $x=1$, so you get 2 points $(1,3)$, $(-3,-1)$ But then, and here is where the problems come: $x+2=-\sqrt{10-x^2}$ I then, after a while, get the point $(-3\tfrac{1}{2}, -1\tfrac{1}{2})$, but this doesn't seem to be correct. What have I done wrong at the end?
Let the intersection be $(a,b)$; it must satisfy both of the given equations. So $b=a+2$ and also $a^2+b^2=10$. Putting $b=a+2$ in the given circle: $a^2+(a+2)^2=10$ $2a^2+4a+4=10\implies a=1$ or $-3$ If $a=1,b=a+2=3$ If $a=-3,b=-3+2=-1$ So, the intersections are $(-3,-1)$ and $(1,3)$
{ "language": "en", "url": "https://math.stackexchange.com/questions/213711", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
How to find subfields of a non-Galois field extension? Let $K/F$ be a finite field extension. If $K/F$ is Galois then it is well known that there is a bijection between subgroups of $Gal(K/F)$ and subfields of $K/F$. Since finding subgroups of a finite group is always easy (at least in the meaning that we can find every subgroup by brute-force or otherwise) this gives a nice way of finding subfields and proving they are the only ones. What can we do in the case that $K/F$ is not a Galois extension ? that is: How can I find all subfields of a non-Galois field extension ?
In the inseparable case there is an idea for a substitute Galois correspondence due, I think, to Jacobson: instead of considering subgroups of the Galois group, we consider (restricted) Lie subalgebras of the Lie algebra of derivations. I don't know much about this approach, but "inseparable Galois theory" seems to be a good search term.
{ "language": "en", "url": "https://math.stackexchange.com/questions/213778", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Volume of Region R Bounded By $y=x^3$ and $y=x$ Let R be the region bounded by $y=x^3$ and $y=x$ in the first quadrant. Find the volume of the solid generated by revolving R about the line $x=-1$
The region goes from $y=0$ to $y=1$. For an arbitrary $y$-value, say, $y=c$, $0\le c\le1$, what is the cross section of the region at height $c$? That is, what is the intersection of the region with the horizontal line $y=c$? What do you get when you rotate that cross-section around the line $x=-1$? Can you find the area of what you get when you rotate that cross-section? If you can, then you can integrate that area from 0 to 1 to get the volume.
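If you carry the washer setup through, here is a symbolic sketch (my addition, filling in the computation hinted at above): at height $y$ the cross-section runs from $x=y$ out to $x=y^{1/3}$, so rotation about $x=-1$ gives washers with outer radius $y^{1/3}+1$ and inner radius $y+1$.

```python
import sympy as sp

y = sp.symbols('y', positive=True)
outer, inner = sp.cbrt(y) + 1, y + 1
V = sp.simplify(sp.pi * sp.integrate(outer**2 - inner**2, (y, 0, 1)))
print(V)   # 23*pi/30
```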
{ "language": "en", "url": "https://math.stackexchange.com/questions/213846", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Matching in bipartite graphs I'm studying graph theory and the following question is driving me crazy. Any hint in any direction would be appreciated. Here is the question: Let $G = G[X, Y]$ be a bipartite graph in which each vertex in $X$ is of odd degree. Suppose that any two distinct vertices of $X$ have an even number of common neighbours. Show that $G$ has a matching covering every vertex of $X$.
Hint for one possible solution: Consider the adjacency matrix $M\in\Bbb F_2^{|X|\times|Y|}$ of the bipartite graph, i.e. $$M_{x,y}:=\left\{ \begin{align} 1 & \text{ if }x,y \text{ are adjacent} \\ 0 & \text{ else} \end{align} \right. $$ then try to prove it has rank $|X|$, and then, I think, using Gaussian elimination (perhaps only on the selected linearly independent columns) would produce a proper matching.
{ "language": "en", "url": "https://math.stackexchange.com/questions/213923", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
Prove $\lim _{x \to 0} \sin(\frac{1}{x}) \ne 0$ Prove $$\lim _{x \to 0} \sin\left(\frac{1}{x}\right) \ne 0.$$ I am unsure of how to prove this problem. I will ask questions if I have doubt on the proof. Thank you!
HINT Consider the sequences $$x_n = \dfrac1{2n \pi + \pi/2}$$ and $$y_n = \dfrac1{2n \pi + \pi/4}$$ and look at what happens to your function along these two sequences. Note that both sequences tend to $0$ as $n \to \infty$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/214010", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 5, "answer_id": 0 }
Find the limit without l'Hôpital's rule Find the limit $$\lim_{x\to 1}\frac{(x^2-1)\sin(3x-3)}{\cos(x^3-1)\tan^2(x^2-x)}.$$ I'm a little rusty with limits, can somebody please give me some pointers on how to solve this one? Also, l'Hôpital's rule isn't allowed in case you were thinking of using it. Thanks in advance.
$$\dfrac{\sin(3x-3)}{\tan^2(x^2-x)} = \dfrac{\sin(3x-3)}{3x-3} \times \left(\dfrac{x^2-x}{\tan(x^2-x)} \right)^2 \times \dfrac{3(x-1)}{x^2(x-1)^2}$$ Hence, $$\dfrac{(x^2-1)\sin(3x-3)}{\cos(x^3-1)\tan^2(x^2-x)} = \dfrac{\sin(3x-3)}{3x-3} \times \left(\dfrac{x^2-x}{\tan(x^2-x)} \right)^2 \times \dfrac{3(x-1)(x^2-1)}{x^2(x-1)^2 \cos(x^3-1)}\\ = \dfrac{\sin(3x-3)}{3x-3} \times \left(\dfrac{x^2-x}{\tan(x^2-x)} \right)^2 \times \dfrac{3(x+1)}{x^2 \cos(x^3-1)}$$ Now the first and second term on the right has limit $1$ as $x \to 1$. The last term limit can be obtained by plugging $x=1$, to give the limit as $$1 \times 1 \times \dfrac{3 \times (1+1)}{1^2 \times \cos(0)} = 6$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/214076", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Probability distribution for sampling an element $k$ times after $x$ independent random samplings with replacement In an earlier question ( probability distribution of coverage of a set after `X` independently, randomly selected members of the set ), Ross Rogers asked for the probability distribution for the coverage of a set of $n$ elements after sampling with replacement $x$ times, with uniform probability for every element in the set. Henry provided a very nice solution. My question is a slight extension of this earlier one: What is the probability distribution (mean and variance) for the number of elements that have been sampled at least $k$ times after sampling a set of $n$ elements with replacement, and with uniform probability, $x$ times?
What is not as apparent as it could be in the solution on the page you refer to is that the probability distributions in this kind of problems are often a mess while the expectations and variances can be much nicer. (By the way, you might be confusing probability distributions on the one hand, and expectations and variances on the other hand.) To see this in the present case, note that the number of elements sampled at least $k$ times is $$ N=\sum\limits_{i=1}^n\mathbf 1_{A_i}, $$ where $A_i$ is the event that element $i$ is sampled at least $k$ times. Now, $p_x=\mathbb P(A_i)$ does not depend on $i$ and is the probability that one gets at least $k$ heads in a game of $x$ heads-and-tails such that probability of heads is $u=\frac1n$. Thus, $$ p_x=\sum\limits_{s=k}^x{x\choose s}u^s(1-u)^{x-s}. $$ This yields $$ \mathbb E(N)=np_x. $$ Likewise, $r_x=\mathbb P(A_i\cap A_j)$ does not depend on $i\ne j$ and is the probability that one gets at least $k$ times result A and $k$ times result B in $x$ games where each game yields the results A, B and C with respective probabilities $u$, $u$ and $1-2u$. Thus, $r_x$ can be written down using multinomial distributions instead of the binomial distributions involved in $p_x$. This yields $\mathbb E(N^2)=np_x+n(n-1)r_x$, hence $\mathrm{var}(N)=\mathbb E(N^2)-(np_x)^2$ is $$ \mathrm{var}(N)=np_x+n(n-1)r_x-n^2p_x^2=np_x(1-p_x)+n(n-1)(r_x-p_x^2). $$
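A Monte Carlo check of $\mathbb E(N)=np_x$ (my addition), with small illustrative values $n=10$, $x=30$, $k=2$ chosen by me:

```python
import numpy as np
from scipy.stats import binom

rng = np.random.default_rng(3)
n, x, k = 10, 30, 2
samples = rng.integers(0, n, size=(100000, x))            # x draws with replacement
counts = np.apply_along_axis(np.bincount, 1, samples, minlength=n)
N = (counts >= k).sum(axis=1)                             # elements sampled >= k times

p_x = binom.sf(k - 1, x, 1 / n)                           # P(Binomial(x, 1/n) >= k)
print(N.mean(), n * p_x)                                  # should roughly agree
```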
{ "language": "en", "url": "https://math.stackexchange.com/questions/214132", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Find the probability mass function. Abe and Zach live in Springfield. Suppose Abe's friends and Zach's friends are each a random sample of 50 out of the 1000 people who live in Springfield. Find the probability mass function of them having $X$ mutual friends. I figured the expected value is $(1000)(\frac{50}{1000})^2 = \frac{5}{2}$ since each person in Springfield has a $(\frac{50}{1000})^2$ chance of being friends with both Abe and Zach. However, how do I generalize this expected value idea to create a probability mass function that returns the probability of having $X$ mutual friends?
The expected value is right. Here $X$ follows a hypergeometric distribution: conditioning on Abe's set of friends, Zach's friends are a uniformly random $50$-subset of the $1000$ inhabitants, of whom $50$ are Abe's friends, so $$P(X=k)=\frac{\binom{50}{k}\binom{950}{50-k}}{\binom{1000}{50}},\qquad 0\le k\le 50.$$ Indeed: $$E[X]=\frac{1}{\binom{1000}{50}}\sum_{k=1}^{50}k\binom{50}{k}\binom{950}{50-k}=\frac{50}{\binom{1000}{50}}\sum_{k=0}^{49}\binom{49}{k}\binom{950}{49-k}$$ $$\frac{\binom{1000}{50}}{50}\,E[X]=[x^{49}]\left((1+x)^{49} x^{49} (1+x^{-1})^{950}\right)=[x^{49}]\left((1+x)^{999}x^{-901}\right)=[x^{950}](x+1)^{999}=\binom{999}{950}=\binom{999}{49},$$ or, more simply, $\frac{\binom{1000}{50}}{50}\,E[X]$ is the number of ways to choose 49 stones from among 49 black stones and 950 white stones, so it is clearly $\binom{999}{49}$. $$E[X]=\frac{50^2}{1000}=2.5.$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/214220", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
function from A5 to A6 Possible Duplicate: Homomorphism between $A_5$ and $A_6$ Why is it true that every element of the image of the function $f: A_5\longrightarrow A_6$ (alternating groups) defined by $f(x)=(123)(456) x (654)(321)$ does not leave any element of $\{1,2,3,4,5,6\}$ fixed (except the identity)?
I think you are asking why the following is true: Every element of $A_6$ of the form $(123)(456)x(654)(321)$, with $x$ a nontrivial element of $A_5$, leaves no element of $\{1,2,3,4,5,6\}$ fixed. This is actually false: let $x = (12)(34)$. Then $f(x)$ fixes either $5$ or $6$, depending on how you define composition of permutations.
{ "language": "en", "url": "https://math.stackexchange.com/questions/214285", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Bounding the Gamma Function I'm trying to verify a bound for the gamma function $$ \Gamma(z) = \int_0^\infty e^{-t}t^{z - 1}\;dt. $$ In particular, for real $m \geq 1$, I'd like to show that $$ \Gamma(m + 1) \leq 2\left(\frac{3m}{5}\right)^m. $$ Knowing that the bound should be attainable, my first instinct is to split the integral as $$ \Gamma(m + 1) = \int_0^{3m/5} e^{-t}t^{m}\;dt + \int_{3m/5}^\infty e^{-t}t^m\;dt \leq (1 - e^{-3m/5})\left(\frac{3m}{5}\right)^m + \int_{3m/5}^\infty e^{-t}t^m\;dt. $$ Using integration by parts, $$ \int_{3m/5}^\infty e^{-t}t^m\;dt = e^{-3m/5}\left(\frac{3m}{5}\right)^m + m\int_{3m/5}^\infty e^{-t}t^{m-1}\;dt.$$ So the problem has been reduced to showing $$ m\int_{3m/5}^\infty e^{-t}t^{m-1}\;dt \leq \left(\frac{3m}{5}\right)^m. $$ But this doesn't seem to have made the problem any easier. Any help is appreciated, thanks.
I'll prove something that's close enough for my applications; in particular, that $$\Gamma(m + 1) \leq 3\left(\frac{3m}{5}\right)^m.$$ Let $0 < \alpha < 1$ be chosen later. We'll split $e^{-t}t^m$ as $(e^{-\alpha t}t^m)e^{-(1 - \alpha)t}$ and use this to bound the integral. First, take a derivative to find a maximum for $e^{-\alpha t}t^m$. $$\frac{d}{dt}e^{-\alpha t}t^m = -\alpha e^{-\alpha t}t^m + me^{-\alpha t}t^{m-1} = -\alpha e^{-\alpha t}t^{m - 1}\left(t - \frac{m}{\alpha}\right). $$ So $t = m / \alpha$ is a critical point, and in particular a maximum (increasing before and decreasing after, if you like). Then we can bound the integral $$ \Gamma(m + 1) = \int_0^\infty e^{-t}t^m\;dt \leq \left(\frac{m}{\alpha e}\right)^m \int_0^\infty e^{-(1 - \alpha)t}\;dt = \left(\frac{m}{\alpha e}\right)^m \left(\frac{1}{1 - \alpha}\right).$$ Choosing $\alpha = 5/(3e)$ and noting that $\frac{1}{1 - 5/(3e)} \leq 3$, we've proven $$ \Gamma(m + 1) \leq 3\left(\frac{3m}{5}\right)^m. $$
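A numerical spot check of the weaker bound proved here (my addition), done on the log scale to avoid overflow; the sample values of $m$ are my own choices.

```python
import math

for m in (1, 2, 5, 10, 50, 200):
    lhs = math.lgamma(m + 1)                     # log Gamma(m+1)
    rhs = math.log(3) + m * math.log(3 * m / 5)  # log of 3*(3m/5)**m
    print(m, lhs <= rhs)                         # True for each sample value
```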
{ "language": "en", "url": "https://math.stackexchange.com/questions/214422", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 2, "answer_id": 1 }
Show $m^p+n^p\equiv 0 \mod p$ implies $m^p+n^p\equiv 0 \mod p^2$ Let $p$ an odd prime. Show that $m^p+n^p\equiv 0 \pmod p$ implies $m^p+n^p\equiv 0 \pmod{p^2}$.
From little Fermat, $m^p \equiv m \pmod p$ and $n^p \equiv n \pmod p$. Hence, $p$ divides $m+n$ i.e. $m+n = pk$. $$m^p + n^p = (pk-n)^p + n^p = p^2 M + \dbinom{p}{1} (pk) (-n)^{p-1} + (-n)^p + n^p\\ = p^2 M + p^2k (-n)^{p-1} \equiv 0 \pmod {p^2}$$
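A quick numerical verification for small odd primes (my addition):

```python
for p in (3, 5, 7, 11):
    for m in range(1, 40):
        for n in range(1, 40):
            if (m**p + n**p) % p == 0:
                assert (m**p + n**p) % p**2 == 0
print("whenever p divides m^p + n^p, p^2 divides it as well")
```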
{ "language": "en", "url": "https://math.stackexchange.com/questions/214497", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Generators and cyclic group concept These statements are false according to my book. I am not sure why though * *In every cyclic group, every element is a generator *A cyclic group has a unique generator. Both statements seem to be opposites. I tried to give a counterexample * *I think it's because $\mathbb{Z}_4$, for example, has generators $1$ and $3$, but neither $2$ nor $0$ is a generator. *As shown in (1), we have two different generators, $1$ and $3$.
Take $\mathbb Z_n$. This group is cyclic, and it has $\phi(n)$ generators: namely, all the residues that are relatively prime to $n$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/214569", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Trigonometric bounds Is there a nice way to show: $\sin(x) + \sin(y) + \sin(z) \geq 2$ for all $x,y,z$ such that $0 \leq x,y,z \leq \frac{\pi}{2}$ and $x + y + z = \pi$?
Use the following inequality: $$\sin(x) \geq \frac{2}{\pi}x , \quad x \in [0,\pi/2].$$ Summing it over $x$, $y$, $z$ then gives $\sin(x)+\sin(y)+\sin(z) \geq \frac{2}{\pi}(x+y+z) = \frac{2}{\pi}\cdot\pi = 2$. And to prove this inequality, consider the function: $ f(x) = \frac{\sin(x)}{x} $ if $x \in (0, \pi/2]$ and $f(x) = 1$ if $x=0$. Now show $f$ decreases on $[0,\pi/2]$, so that $f(x)\ge f(\pi/2)=\frac{2}{\pi}$. Hint: Use the Mean Value Theorem.
{ "language": "en", "url": "https://math.stackexchange.com/questions/214624", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Find pair of polynomials a(x) and b(x) If $a(x) + b(x) = x^6-1$ and $\gcd(a(x),b(x))=x+1$, then find such a pair of polynomials $a(x)$, $b(x)$. Prove or disprove that there exists more than one distinct pair of such polynomials.
There is too much freedom. Let $a(x)=x+1$ and $b(x)=(x^6-1)-(x+1)$. Or else use $a(x)=k(x+1)$, $b(x)=x^6-1-k(x+1)$, where $k$ is any non-zero integer.
{ "language": "en", "url": "https://math.stackexchange.com/questions/214705", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Sobolev differentiability of composite function I was wondering about the following fact: if $\Omega$ is a bounded subset of $\mathbb{R}^n$ and $u\in W^{1,p}(\Omega)$ and $g\in C^1(\mathbb{R},\mathbb{R})$ such that $|g'(t)t|+|g(t)|\leq M$, is it true that $g\circ u \in W^{1,p}(\Omega)$? If $g'\in L^{\infty}$ this would be true, but here we don't have this kind of estimate...
You get $g' \in L^\infty$ from the assumptions, since $|g'(t)| \le M/|t|$, and $g'$ is continuous at $0$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/214893", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Uncountability of basis of $\mathbb R^{\mathbb N}$ Given a vector space $V$ over $\mathbb R$ such that the elements of $V$ are infinite tuples (i.e. $V=\mathbb R^{\mathbb N}$), how can one show that any basis of it is uncountable?
Take any almost disjoint family $\mathcal A$ of infinite subsets of $\mathbb N$ with cardinality $2^{\aleph_0}$. Construction of such a set is given here. I.e. for any two sets $A,B\in\mathcal A$ the intersection $A\cap B$ is finite. Notice that $$\{\chi_A; A\in\mathcal A\}$$ is a subset of $\mathbb R^{\mathbb N}$ which has cardinality $2^{\aleph_0}$. We will show that this set is linearly independent. This implies that any basis must have cardinality at least $2^{\aleph_0}$. (Since every independent set is contained in a basis - this can be shown using Zorn's lemma. You can find the proof in many places, for example these notes on applications of Zorn's lemma by Keith Conrad.) Suppose that, on the contrary, $$\chi_A=\sum_{i\in F} c_i\chi_{A_i}$$ for some finite set $F$ and $A,A_i\in\mathcal A$ (where $A_i\ne A$ for $i\in F$). The set $P=A\setminus \bigcup\limits_{i\in F}(A\cap A_i)$ is infinite. For any $n\in P$ we have $\chi_A(n)=1$ and $\sum\limits_{i\in F} c_i\chi_{A_i}(n)=0$. So the above equality cannot hold. You can find a proof about the dimension of the space $\mathbb R^{\mathbb R}$ (together with some basic facts about Hamel bases) here: Does $\mathbb{R}^\mathbb{R}$ have a basis? In fact, it can be shown that already smaller spaces must have dimension $2^{\aleph_0}$, see Cardinality of a Hamel basis of $\ell_1(\mathbb{R})$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/214984", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 0 }
Irreducibility of $X^{p-1} + \cdots + X+1$ Can someone give me a hint how to prove the irreducibility of $X^{p-1} + \cdots + X+1$, where $p$ is a prime, in $\mathbb{Z}[X]$? Our professor already gave us one, namely to substitute $X$ with $X+1$, but I couldn't make much of that.
Hint: Let $y=x-1$. Note that our polynomial is $\dfrac{x^p-1}{x-1}$, which is $\dfrac{(y+1)^p-1}{y}$. It is not difficult to show that $\binom{p}{k}$ is divisible by $p$ if $0\lt k\lt p$. Now use the Eisenstein Criterion.
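Spelling the hint out a little further (a short worked step): by the binomial theorem, $$\frac{(y+1)^p-1}{y} = y^{p-1} + \binom{p}{p-1}y^{p-2} + \cdots + \binom{p}{2}y + \binom{p}{1}.$$ Every non-leading coefficient $\binom{p}{k}$ with $0\lt k\lt p$ is divisible by $p$, and the constant term $\binom{p}{1}=p$ is not divisible by $p^2$, so the Eisenstein Criterion at $p$ applies.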
{ "language": "en", "url": "https://math.stackexchange.com/questions/215042", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12", "answer_count": 4, "answer_id": 2 }
Continuity of $f \cdot g$ and $f/g$ on standard topology. Let $f, g: X \rightarrow \mathbb{R}$ be continuous functions, where ($X, \tau$) is a topological space and $\mathbb{R}$ is given the standard topology. a)Show that the function $f \cdot g : X \rightarrow \mathbb{R}$,defined by $(f \cdot g)(x) = f(x)g(x)$ is continuous. b)Let $h: X \setminus \{x \in X | g(x) = 0\}\rightarrow \mathbb{R}$ be defined by $h(x) = \frac{f(x)}{g(x)}$ Show that $h$ is continuous.
The central fact is that the operations $$p:\ {\mathbb R}^2\to{\mathbb R},\quad (x,y)\mapsto x\cdot y$$ and similarly $q:\ (x,y)\mapsto {\displaystyle{x\over y}}$ are continuous where defined, and that $$h:\ X\to{\mathbb R}^2,\quad x\mapsto\bigl(f(x),g(x)\bigr)$$ is continuous if $f$ and $g$ are continuous: the preimage under $h$ of a basic open set $U\times V\subset\mathbb R^2$ is $f^{-1}(U)\cap g^{-1}(V)$, which is open in $X$. It follows that $f\cdot g=p\circ h$ is continuous, and similarly for ${\displaystyle{f\over g}}$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/215109", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Cartesian product of a set Question: What is $A \times A$, where $A = \{0, \pm1, \pm2, \dots\}$? Thinking: Is this a set, say $B = \{0, 1, 2, \dots \}$? This was in my homework; can you help me?
No, $A\times A$ is not a sequence. Neither is $A$: it’s just a set. By definition $A\times A=\{\langle a_1,a_2\rangle:a_1,a_2\in A\}$. Thus, $A\times A$ contains elements like $\langle 1,0\rangle$, $\langle -2,17\rangle$, and so on. Indeed, since $A$ is just the set of all integers, usually denoted by $\Bbb Z$, $A\times A$ is simply the set of all ordered pairs of integers. Thought of as points in the plane, $A\times A$ is the integer lattice: the set of all points both of whose coordinates are integers.
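If it helps to see concrete elements, here is a small Python sketch (purely illustrative and not from the original answer; since $A$ is infinite, only a finite window of it can be listed):

```python
from itertools import product

# A is all of Z, so we can only enumerate a finite window of it.
window = range(-2, 3)  # stands in for {..., -2, -1, 0, 1, 2, ...}

# A x A restricted to the window: all ordered pairs (a1, a2).
pairs = list(product(window, repeat=2))
print(pairs[:5])   # [(-2, -2), (-2, -1), (-2, 0), (-2, 1), (-2, 2)]
print(len(pairs))  # 25, i.e. 5 * 5 pairs from the 5-element window
```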
{ "language": "en", "url": "https://math.stackexchange.com/questions/215177", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Show that $\liminf \limits _{k\rightarrow \infty} f_k = \lim \limits _{k\rightarrow \infty} f_k$ Is there a way to show that $\liminf \limits _{k\rightarrow \infty} f_k = \lim \limits _{k\rightarrow \infty} f_k$? The only way I can think of is by showing $\liminf \limits _{k\rightarrow \infty} f_k = \limsup \limits _{k\rightarrow \infty} f_k$. Is there another way? Edit: Sorry, I should have mentioned that you can assume that $\{f_k\}_{k=0}^\infty$ is a sequence of (Lebesgue) measurable functions $f_k:E \rightarrow R_e$.
The following always holds: $\inf_{k\geq n} f_k \leq f_n \leq \sup_{k\geq n} f_k$. Note that the lower bound is non-decreasing in $n$ and the upper bound is non-increasing in $n$. Suppose $\alpha = \liminf_k f_k = \limsup_k f_k$, and let $\epsilon>0$. Then there exists an $N$ such that for all $n>N$ we have $\alpha -\inf_{k\geq n} f_k < \epsilon$ and $\sup_{k\geq n} f_k -\alpha < \epsilon$. Combining this with the above inequality yields $-\epsilon < f_n - \alpha< \epsilon$ for all $n>N$, from which it follows that $\lim_k f_k = \alpha$. Now suppose $\alpha = \lim_k f_k$. Let $\epsilon >0$; then there exists an $N$ such that $-\frac{\epsilon}{2}+\alpha < f_k< \frac{\epsilon}{2}+\alpha$ for all $k>N$. It follows that for $n>N$ we have $-\epsilon + \alpha < \inf_{k\geq n} f_k \leq \sup_{k\geq n} f_k < \epsilon+\alpha$, and hence $\liminf_k f_k = \limsup_k f_k = \alpha$. Hence the limit exists iff the $\liminf$ and $\limsup$ are equal.
{ "language": "en", "url": "https://math.stackexchange.com/questions/215258", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Why is $\Gamma\left(\frac{1}{2}\right)=\sqrt{\pi}$? It seems as if no one has asked this here before, unless I don't know how to search. The Gamma function is $$ \Gamma(\alpha)=\int_0^\infty x^{\alpha-1} e^{-x}\,dx. $$ Why is $$ \Gamma\left(\frac{1}{2}\right)=\sqrt{\pi}\text{ ?} $$ (I'll post my own answer, but I know there are many ways to show this, so post your own!)
This is a "proof". We know that the surface area of the $n-1$ dimensional unit sphere is $$ |S^{n-1}| = \frac{2\pi^{\frac{n}2}}{\Gamma(\frac{n}2)}. $$ On the other hand, we know that $|S^2|=4\pi$, which gives $$ 4\pi = \frac{2\pi^{\frac32}}{\Gamma(\frac32)} = \frac{2\pi^{\frac32}}{\frac12\Gamma(\frac12)}. $$ Solving, $\frac12\Gamma(\frac12) = \frac{2\pi^{3/2}}{4\pi} = \frac{\sqrt\pi}{2}$, and therefore $\Gamma(\frac12)=\sqrt\pi$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/215352", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "76", "answer_count": 11, "answer_id": 10 }
What gives rise to the normal distribution? I'd like to know if anyone has a generally friendly explanation of why the normal distribution ends up being an attractor for so many observed behaviors. I have a degree in math if you want to get technical, but I'd like to be able to explain it to my grandma as well.
To my mind the reason for the pre-eminence can at best be seen in what must be the most electrifying half page of prose in the scientific literature, where James Clerk Maxwell deduces the distribution law for the velocities of molecules of an ideal gas (now known as the Maxwell-Boltzmann law), thus founding the discipline of statistical physics. This can be found in his collected papers or, more accessibly, in Hawking's anthology "On the Shoulders of Giants". The only assumptions he uses are that the density depends on the magnitude of the velocity (and not on the direction) and that the components parallel to the axes are independent. Mathematically, this means that the only functions in three-dimensional space which have radial symmetry and split as a product of three functions of the individual variables are those which arise in the normal distribution.
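To make the last remark concrete (a sketch of the computation forced by Maxwell's two assumptions; regularity details omitted): if the velocity density factors as $f(x)f(y)f(z)$ and depends only on the speed, then $$f(x)\,f(y)\,f(z) = \phi(x^2+y^2+z^2)$$ for some function $\phi$. Setting $g=\log f$, the sum $g(x)+g(y)+g(z)$ depends only on $x^2+y^2+z^2$; differentiating in $x$ and comparing values at different points forces $g'(x)/x$ to be constant, so $g(x)=a-\lambda x^2$ and $$f(x) = C\,e^{-\lambda x^2},$$ with $\lambda>0$ so that the density is normalizable. This is exactly the Gaussian shape.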
{ "language": "en", "url": "https://math.stackexchange.com/questions/215432", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 5, "answer_id": 1 }
Function that sends $1,2,3,4$ to $0,1,1,0$ respectively I already got tired trying to think of a function $f:\{1,2,3,4\}\rightarrow \{0,1,1,0\}$ in other words: $$f(1)=0\\f(2)=1\\f(3)=1\\f(4)=0$$ Don't suggest division in integers; it will not pass for me. Are there ways to implement it with modulo, absolute value, and so on, without conditions?
Look up the example for Lagrange Interpolation; then it is easy to construct such a function from any sequence of inputs to any sequence of outputs. In this case: $$L(x)=\frac{1}{2}(x-1)(x-3)(x-4) + \frac{-1}{2}(x-1)(x-2)(x-4),$$ which simplifies to $$L(x)=\frac{-1}{2}(x-1)(x-4).$$ This could possibly explain Jasper's answer, but since the method of derivation was not mentioned, I cannot say for sure.
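A quick sanity check of the simplified interpolant (a throwaway sketch; the function name `L` just mirrors the formula above):

```python
from fractions import Fraction

def L(x):
    # Simplified Lagrange interpolant through (1,0), (2,1), (3,1), (4,0).
    return Fraction(-1, 2) * (x - 1) * (x - 4)

# Verify it hits the required values 0, 1, 1, 0 at x = 1, 2, 3, 4.
print([L(x) for x in (1, 2, 3, 4)])
# [Fraction(0, 1), Fraction(1, 1), Fraction(1, 1), Fraction(0, 1)]
```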
{ "language": "en", "url": "https://math.stackexchange.com/questions/215487", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 10, "answer_id": 9 }
Prove that the language $\{ww \mid w \in \{a,b\}^*\}$ is not FA (Finite Automata) recognisable. Hint: Assume that $|xy| \le k$ in the pumping lemma. I have no idea where to begin for this. Any help would be much appreciated.
It's also possible — and perhaps simpler — to prove this directly using the pigeonhole principle without invoking the pumping lemma. Namely, assume that the language $L = \{ww \,|\, w \in \{a,b\}^*\}$ is recognized by a finite state automaton with $n$ states, and consider the set $W = \{a,b\}^k \subset \{a,b\}^*$ of words of length $k$, where $2^k > n$. By the pigeonhole principle, since $|W| = 2^k > n$, there must be two distinct words $w,w' \in W$ such that, after reading the word $w$, the automaton is in the same state as after reading $w'$. But this means that, since the automaton accepts $ww \in L$, it must also accept $w'w \notin L$, which leads to a contradiction.
{ "language": "en", "url": "https://math.stackexchange.com/questions/215553", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Differentiability at 0 I am having a problem with this exercise. Please help. Let $\alpha >1$. Show that if $|f(x)| \leq |x|^\alpha$, then $f$ is differentiable at $0$.
Use the definition of the derivative. It is clear that $f(0)=0$, since $|f(0)| \le |0|^\alpha = 0$. Note that if $h\ne 0$ then $$\left|\frac{f(h)-f(0)}{h}\right| = \frac{|f(h)|}{|h|} \le |h|^{\alpha-1}.$$ Since $\alpha\gt 1$, $|h|^{\alpha-1}\to 0$ as $h\to 0$, so the difference quotient tends to $0$: $f$ is differentiable at $0$ with $f'(0)=0$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/215633", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Solve logarithmic equation I'm getting stuck trying to solve this logarithmic equation: $$ \log( \sqrt{4-x} ) - \log( \sqrt{x+3} ) = \log(x) $$ I understand that the first and second terms can be combined & the logarithms share the same base so one-to-one properties apply and I get to: $$ x = \frac{\sqrt{4-x}}{ \sqrt{x+3} } $$ Now if I square both sides to remove the radicals: $$ x^2 = \frac{4-x}{x+3} $$ Then: $$ x^2(x+3) = 4-x $$ $$ x^3 +3x^2 + x - 4 = 0 $$ Is this correct so far? How do I solve for x from here?
Fine so far. I would just use Wolfram Alpha, which shows there is a root at about $0.89329$. The exact value is a real mess. I tried the rational root theorem, which failed, so there are no rational roots. If I didn't have Alpha, I would go for a numeric solution. You can see there is a solution in $(0,1)$ because the left side is $-4$ at $x=0$ and $+1$ at $x=1$, so by the intermediate value theorem it crosses zero in between.
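If Alpha is not at hand, a few lines of bisection pin the root down numerically (a sketch; the function name `f` is mine):

```python
def f(x):
    return x**3 + 3*x**2 + x - 4

# Bisection on (0, 1): f(0) = -4 < 0 and f(1) = 1 > 0.
lo, hi = 0.0, 1.0
for _ in range(50):
    mid = (lo + hi) / 2
    if f(mid) < 0:
        lo = mid
    else:
        hi = mid
print(lo)  # approximately 0.89329
```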
{ "language": "en", "url": "https://math.stackexchange.com/questions/215732", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 0 }
How to convert a formula to CNF? I am trying to convert the formula: $((p \wedge \neg n) \vee (n \wedge \neg p)) \vee z$. I understand I need to apply the $z$ to each clause, which gives: $((p \wedge \neg n) \vee z) \vee ((n \wedge \neg p) \vee z)$. I know how to simplify this, but I am unsure how it will lead to the answer containing 2 clauses. Thanks.
To convert to conjunctive normal form we use the following rules. Double negation: $P\leftrightarrow \lnot(\lnot P)$. De Morgan's laws: $\lnot(P\lor Q)\leftrightarrow (\lnot P) \land (\lnot Q)$ and $\lnot(P\land Q)\leftrightarrow (\lnot P) \lor (\lnot Q)$. Distributive laws: $(P \lor (Q\land R))\leftrightarrow (P \lor Q) \land (P\lor R)$ and $(P \land (Q\lor R))\leftrightarrow (P \land Q) \lor (P\land R)$. So let us expand $((P\land \lnot N)\lor (N\land\lnot P))\lor Z$. Distributing once gives $(((P\land \lnot N)\lor N)\land ((P\land \lnot N)\lor \lnot P))\lor Z$, and distributing again gives $((P\lor N)\land (\lnot N\lor N)\land (P\lor \lnot P)\land (\lnot N \lor \lnot P)) \lor Z$. Then, noting that $(\lnot N\lor N)$ and $(P\lor \lnot P)$ are always true, we may remove them and get $((P\lor N)\land (\lnot N \lor \lnot P)) \lor Z$; it is cancelling these tautologies that gives a two-clause answer. Finally, distributing $Z$ inwards gives $(P\lor N\lor Z)\land (\lnot N \lor \lnot P \lor Z)$. This will in general not happen, though, and you may get more terms in your formula in CNF. Just so you know, you can also check these things on Wolfram Alpha.
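As a cross-check, a brute-force truth table confirms the two-clause CNF is equivalent to the original formula (a quick sketch; function names are mine):

```python
from itertools import product

def original(p, n, z):
    return ((p and not n) or (n and not p)) or z

def cnf(p, n, z):
    return (p or n or z) and ((not n) or (not p) or z)

# Compare the two formulas on all 8 truth assignments.
print(all(original(p, n, z) == cnf(p, n, z)
          for p, n, z in product([False, True], repeat=3)))  # True
```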
{ "language": "en", "url": "https://math.stackexchange.com/questions/215790", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
How do you prove that proof by induction is a proof? Are proofs by induction limited to cases where there is an explicit dependence on an integer, like sums? I cannot grasp the idea of induction being a proof in less explicit cases. What if you have a function that suddenly changes behavior? If a function is positive up to some limit, couldn't I prove by induction that it is always positive by starting my proof on the left of the limit? I feel like I am not getting a grasp on something fundamental and I hope you can help me with this. Thank you.
This math.SE question has a lot of great answers as far as induction over the reals goes. And as Austin mentioned, there are many cases in graph theory where you can use induction on the vertices or edges of a graph to prove a result. An example: If every two nodes of $G$ are joined by a unique path, then $G$ is connected and $n = e + 1$, where $n$ is the number of nodes and $e$ is the number of edges. $G$ is connected since any two nodes are joined by a path. To show $n = e + 1$, we use induction. Assume it's true for graphs with fewer than $n$ nodes (this means we're using strong induction). Removing any edge from $G$ breaks $G$ into two components, since paths are unique. Suppose their sizes are $n_1$ and $n_2$, with $n_1 + n_2 = n$, and that they contain $e_1$ and $e_2$ edges, so $e_1 + e_2 = e - 1$. By the induction hypothesis, $n_1 = e_1 + 1$ and $n_2 = e_2 + 1$; but then $$n = n_1 + n_2 = (e_1 + 1) + (e_2 + 1) = (e_1 + e_2) + 2 = e - 1 + 2 = e + 1.$$ So our statement is true by strong induction.
{ "language": "en", "url": "https://math.stackexchange.com/questions/215846", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "14", "answer_count": 6, "answer_id": 3 }
Proving that floor(n/2)=n/2 if n is an even integer and floor(n/2)=(n-1)/2 if n is an odd integer. How would one go about proving the following. Any ideas as to where to start? For any integer n, the floor of n/2 equals n/2 if n is even and (n-1)/2 if n is odd. Summarize: [n/2] = n/2 if n = even [n/2] = (n-1)/2 if n = odd Working through it, I try to initially set n = 2n for the even case but am stuck on how to show its a floor... thanks
You should set $n=2m$ for even numbers, where $m$ is an integer. Then $\frac n2=m$ and the floor of an integer is itself. The odd case is similar.
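For completeness, the odd case spelled out (a one-line computation): write $n = 2m+1$ with $m$ an integer; then $$\left\lfloor \frac{n}{2} \right\rfloor = \left\lfloor m + \frac{1}{2} \right\rfloor = m = \frac{n-1}{2},$$ since $m \le m + \frac{1}{2} < m+1$.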
{ "language": "en", "url": "https://math.stackexchange.com/questions/215909", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Solve the Relation $T(n)=T(n/4)+T(3n/4)+n$ Solve the recurrence relation: $T(n)=T(n/4)+T(3n/4)+n$. Also, specify an asymptotic bound. Clearly $T(n)\in \Omega(n)$ because of the additive $n$ term. The recursive nature hints at a possibly logarithmic factor in the runtime (because $T(n) = T(n/2) + 1$ is logarithmic, something similar may occur for the problem here). However, I'm not sure how to proceed from here. Even though the recurrence does not specify an initial value (i.e. $T(0)$), if I set $T(0) = 1$ some resulting values $(n, T(n))$ are: $(0, 1)$, $(100, 831)$, $(200, 1939)$, $(300, 3060)$, $(400, 4291)$, $(500, 5577)$, $(600, 6926)$, $(700, 8257)$, $(800, 9665)$, $(900, 10933)$. The question: Is there a technique that I can use to solve the recurrence in terms of $n$ and $T(0)$? If that proves infeasible, is there a way to determine the asymptotic behavior of the recurrence?
$T(n)=T\left(\dfrac{n}{4}\right)+T\left(\dfrac{3n}{4}\right)+n$, i.e. $T(n)-T\left(\dfrac{n}{4}\right)-T\left(\dfrac{3n}{4}\right)=n$. For the particular solution part, getting the closed-form solution is not a great problem. Let $T_p(n)=An\ln n$. Then $An\ln n-\dfrac{An}{4}\ln\dfrac{n}{4}-\dfrac{3An}{4}\ln\dfrac{3n}{4}\equiv n$, i.e. $An\ln n-\dfrac{An}{4}(\ln n-\ln4)-\dfrac{3An}{4}(\ln n+\ln3-\ln4)\equiv n$, which reduces to $\dfrac{(4\ln4-3\ln3)An}{4}\equiv n$. Therefore $\dfrac{(4\ln4-3\ln3)A}{4}=1$, so $A=\dfrac{4}{4\ln4-3\ln3}$ and $$T_p(n)=\dfrac{4n\ln n}{4\ln4-3\ln3}.$$ But finding the complementary solution part is less straightforward, since we would have to handle the functional equation $T_c(n)-T_c\left(\dfrac{n}{4}\right)-T_c\left(\dfrac{3n}{4}\right)=0$.
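A quick numeric check that the particular solution captures the growth (a sketch; integer floor division stands in for the exact real recurrence, so the ratio below only settles near a constant rather than exactly $1$):

```python
import math
from functools import lru_cache

A = 4 / (4 * math.log(4) - 3 * math.log(3))  # coefficient from the particular solution

@lru_cache(maxsize=None)
def T(n):
    # Integer stand-in for T(n) = T(n/4) + T(3n/4) + n, with T(0) = T(1) = 1.
    if n <= 1:
        return 1
    return T(n // 4) + T(3 * n // 4) + n

for n in (10**3, 10**4, 10**5):
    print(n, T(n) / (A * n * math.log(n)))  # levels off, consistent with Theta(n log n)
```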
{ "language": "en", "url": "https://math.stackexchange.com/questions/215984", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
$ \lim_{x \to \infty} \frac{xe^{x-1}}{(x-1)e^x} $ $$ \lim_{x \to \infty} \frac{xe^{x-1}}{(x-1)e^x} $$ I don't know what to do. At all. I've read the explanations in my book at least a thousand times, but they're over my head. Oh, and I'm not allowed to use L'Hospital's rule. (I'm guessing it isn't needed for limits of this kind anyway. This one is supposedly simple - a beginners problem.) Most of the answers I've seen on the Internet simply says "use L'Hospital's rule". Any help really appreciated. I'm so frustrated right now...
$$\lim_{x\to\infty}\frac{xe^{x-1}}{(x-1)e^x}=\lim_{x\to\infty}\frac{x}{x-1}\cdot\lim_{x\to\infty}\frac{e^{x-1}}{e^x}=1\cdot\frac{1}{e}=\frac{1}{e}$$ the first equality being justified by the fact that each of the right hand side limits exists finitely.
{ "language": "en", "url": "https://math.stackexchange.com/questions/216049", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 2 }
Prove DeMorgan's Theorem for indexed family of sets. Let $\{A_n\}_{n\in\mathbb N}$ be an indexed family of sets. Then: $(i) (\bigcup\limits_{n=1}^\infty A_n)' = \bigcap\limits_{n=1}^\infty (A'_n)$ $(ii) (\bigcap\limits_{n=1}^\infty A_n)' = \bigcup\limits_{n=1}^\infty (A'_n)$ I went from doing simple, straightforward indexed set proofs to this, and I don't even know where to start.
If $a\in (\bigcup_{n=1}^{\infty}A_{n})'$ then $a\notin A_{n}$ for any $n\in \mathbb{N}$, therefore $a\in A_{n}'$ for all $n\in \mathbb{N}$. Thus $a\in \bigcap_{n=1}^{\infty}A_{n}'$. Since $a$ was arbitrary, this shows $(\bigcup_{n=1}^{\infty}A_{n})' \subset \bigcap_{n=1}^{\infty}A_{n}'$. The other containment and the other problem are similar.
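For completeness, the reverse containment spelled out: if $a\in \bigcap_{n=1}^{\infty}A_{n}'$ then $a\in A_{n}'$, i.e. $a\notin A_{n}$, for every $n\in\mathbb N$, so $a\notin \bigcup_{n=1}^{\infty}A_{n}$ and hence $a\in (\bigcup_{n=1}^{\infty}A_{n})'$. Part (ii) follows the same pattern, or can be deduced by applying part (i) to the complements $A_{n}'$.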
{ "language": "en", "url": "https://math.stackexchange.com/questions/216149", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 1 }