Covariance of order statistics (uniform case) Let $X_1, \ldots, X_n$ be independent and uniformly distributed on $[0,1]$ and $X_{(1)}, \ldots, X_{(n)}$ the corresponding order statistics. I want to calculate $\operatorname{Cov}(X_{(j)}, X_{(k)})$ for $j, k \in \{1, \ldots, n\}$. The problem is of course to calculate $\mathbb{E}[X_{(j)}X_{(k)}]$. The joint density of $X_{(j)}$ and $X_{(k)}$ is given by $$f_{X_{(j)}, X_{(k)}}(x,y)=\binom{n}{k}\binom{k}{j-1}x^{j-1}(y-x)^{k-1-j}(1-y)^{n-k}$$ where $0\leq x\leq y\leq 1$. (I used the general formula here.) Sadly, I see no other way to calculate $\mathbb{E}[X_{(j)}X_{(k)}]$ than by $$\mathbb{E}[X_{(j)}X_{(k)}]=\binom{n}{k}\binom{k}{j-1}\int_0^1\int_0^y xy\,x^{j-1}(y-x)^{k-1-j}(1-y)^{n-k}\,dx\,dy.$$ But this integral is too much for me. I tried integration by parts, but got lost along the way. Is there a trick to do it? Did I even get the limits of integration right? Apart from that, I wonder if there's a smart approach to solve the whole problem more elegantly.
Thank you so much for posting this -- I too looked at this covariance integral (presented in David and Nagaraja's 2003 3rd edition Order Statistics text) and thought that it looked ugly. However, I fear that there may be a few small mistakes in your math on $\mathbb E[X_{(j)}X_{(k)}]$, assuming that I'm following you right. The joint density should have $(j-1)!$ in the denominator instead of $j!$ at the outset -- otherwise the $j!$ would entirely cancel out the $j!$ in the numerator of $B(j+1,\,k-j)$ instead of ending up with $j$ in the numerator of the solution... Right?
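As a quick numerical sanity check of the standard closed form $\operatorname{Cov}(X_{(j)},X_{(k)})=\frac{j(n-k+1)}{(n+1)^2(n+2)}$ for $j\le k$ (a Monte Carlo sketch assuming NumPy; the sample size and indices are arbitrary):

```python
import numpy as np

# Monte Carlo check of Cov(X_(j), X_(k)) for Uniform(0,1) order statistics.
# The standard closed form (see e.g. David & Nagaraja) for j <= k is
#   Cov = j*(n - k + 1) / ((n + 1)**2 * (n + 2)).
rng = np.random.default_rng(0)
n, j, k = 5, 2, 4                       # sample size and 1-based indices
samples = np.sort(rng.random((200_000, n)), axis=1)
xj, xk = samples[:, j - 1], samples[:, k - 1]
empirical = np.mean(xj * xk) - xj.mean() * xk.mean()
theoretical = j * (n - k + 1) / ((n + 1) ** 2 * (n + 2))
print(empirical, theoretical)           # both ~0.0159 for n=5, j=2, k=4
```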
{ "language": "en", "url": "https://math.stackexchange.com/questions/400677", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
Derive Cauchy-Bunyakovsky by taking expected value In my notes, it is said that taking expectation on both sides of this inequality $$\left|\frac{XY}{\sqrt{\mathbb{E}X^2\mathbb{E}Y^2}}\right|\le\frac{1}{2}\left(\frac{X^2}{\mathbb{E}X^2}+\frac{Y^2}{\mathbb{E}Y^2}\right)$$ can lead to the Cauchy-Bunyakovsky (Schwarz) inequality $$\mathbb{E}|XY|\le\sqrt{\mathbb{E}X^2\mathbb{E}Y^2}$$ I am not really good at taking expected values; can anyone guide me how to go about it? Note: I am familiar with the linearity and monotonicity of expected values; what I am unsure about is the derivation that leads to the inequality, especially when dealing with double expectation. Thanks.
You can simplify your inequality as follows. For the left side: $\left|\frac {XY}{\sqrt{EX^{2}EY^{2}}}\right|=\frac {|XY|}{\sqrt{EX^{2}EY^{2}}}$. For the right side, take the expectation: $\frac{1}{2}E\left( \frac{X^2}{EX^2}+\frac{Y^2}{EY^2}\right)= \frac{1}{2} E \left( \frac{X^2 EY^2+Y^2 EX^2}{EX^2 EY^2} \right)$. Now, $E(X^2 EY^2+Y^2 EX^2)=2\,EX^2EY^2$, using the fact that $EX^2$ and $EY^2$ are constants, so e.g. $E(X^2\,EY^2)=EX^2\,EY^2$. Taking expectations of both sides of the original inequality and plugging in, you get $\frac{E|XY|}{\sqrt{EX^2EY^2}}\le 1$, which is the result.
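For what it's worth, a tiny numerical sanity check of the resulting inequality $\mathbb E|XY|\le\sqrt{\mathbb EX^2\,\mathbb EY^2}$ (a sketch assuming NumPy; the distribution is an arbitrary choice):

```python
import numpy as np

# Sample check of E|XY| <= sqrt(E X^2 * E Y^2) on correlated Gaussians.
rng = np.random.default_rng(1)
x = rng.normal(size=100_000)
y = 0.7 * x + rng.normal(size=100_000)
lhs = np.mean(np.abs(x * y))
rhs = np.sqrt(np.mean(x**2) * np.mean(y**2))
print(lhs, rhs, lhs <= rhs)   # lhs < rhs, as the inequality predicts
```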
{ "language": "en", "url": "https://math.stackexchange.com/questions/400750", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Check if $\lim_{x\to\infty}{\log x\over x^{1/2}}=\infty$, $\lim_{x\to\infty}{\log x\over x}=0$ Could anyone tell me which of the following is/are true?

1. $\lim_{x\to\infty}{\log x\over x^{1/2}}=0$, $\lim_{x\to\infty}{\log x\over x}=\infty$
2. $\lim_{x\to\infty}{\log x\over x^{1/2}}=\infty$, $\lim_{x\to\infty}{\log x\over x}=0$
3. $\lim_{x\to\infty}{\log x\over x^{1/2}}=0$, $\lim_{x\to\infty}{\log x\over x}=0$
4. $\lim_{x\to\infty}{\log x\over x^{1/2}}=0$, but $\lim_{x\to\infty}{\log x\over x}$ does not exist.

For my part, I know $\lim_{x\to\infty}{1\over x}$ exists, and likewise for ${1\over x^{1/2}}$.
$3$ is correct, as $\log x$ grows slower than any positive power $x^{\alpha}$ ($\alpha>0$). So $x^{-1}$ and $x^{-\frac{1}{2}}$ will manage to pull it down to $0$. And $\displaystyle\lim_{x\to\infty}{1\over x}=\lim_{x\to\infty}{1\over \sqrt{x}}=0$.
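A quick numeric illustration (plain Python; the sample points are arbitrary):

```python
import math

# log(x)/sqrt(x) and log(x)/x both drift to 0 as x grows, supporting option 3.
for x in [1e2, 1e4, 1e8, 1e16]:
    print(x, math.log(x) / math.sqrt(x), math.log(x) / x)
```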
{ "language": "en", "url": "https://math.stackexchange.com/questions/400903", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Is $\sum\limits_{k=1}^{n-1}\binom{n}{k}x^{n-k}y^k$ always even? Is $$ f(n,x,y)=\sum^{n-1}_{k=1}{n\choose k}x^{n-k}y^k,\qquad\qquad\forall~n>0~\text{and}~x,y\in\mathbb{Z}$$ always divisible by $2$?
Hint: Recall binomial formula $$ (x+y)^n=\sum\limits_{k=0}^n{n\choose k} x^{n-k} y^k $$
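Following the hint, the sum equals $(x+y)^n-x^n-y^n$; a brute-force parity check over small values (plain Python sketch; the ranges tested are arbitrary):

```python
# f(n, x, y) = (x + y)**n - x**n - y**n is the given sum with the
# k = 0 and k = n terms removed; check that it is always even.
def f(n, x, y):
    return (x + y) ** n - x ** n - y ** n

assert all(f(n, x, y) % 2 == 0
           for n in range(1, 8)
           for x in range(-6, 7)
           for y in range(-6, 7))
print("f(n, x, y) is even in all tested cases")
```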
{ "language": "en", "url": "https://math.stackexchange.com/questions/400961", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 2 }
One of $2^1-1,2^2-1,...,2^n-1$ is divisible by $n$ for odd $n$ Let $n$ be an odd integer greater than 1. Show that one of the numbers $2^1-1,2^2-1,...,2^n-1$ is divisible by $n$. I know that pigeonhole principle would be helpful, but how should I apply it? Thanks.
Hints: $$(1)\;\;\;2^k-1=2^{k-1}+2^{k-2}+\ldots+2+1\;,\;\;k\in\Bbb N$$ $$(2)\;\;\;\text{Every natural number $\,n\,$ can be uniquely written in base two, with maximal power of two less than $\,n\,$.}$$
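For experimentation, a brute-force search for the promised $k$ (plain Python; the range of tested $n$ is arbitrary):

```python
# For each odd n > 1, find the least k in {1, ..., n} with n | 2^k - 1;
# the pigeonhole argument guarantees such a k exists.
for n in range(3, 30, 2):
    k = next(k for k in range(1, n + 1) if pow(2, k, n) == 1)
    print(f"n = {n:2d}: smallest k with n | 2^k - 1 is {k}")
```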
{ "language": "en", "url": "https://math.stackexchange.com/questions/401036", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
determine whether an integral is positive Given a standard normal variable $X\sim N\left(0,1\right)$ and constants $ \kappa \in \left[0,1\right)$ and $\tau \in \mathbb{R}$, I want to determine the sign of the following expression: \begin{equation} \int_{-\infty}^\tau \left(X-\kappa \tau \right) \phi(X)\,\text{d}X \end{equation} where $\phi$ is the PDF of $X$. Any comment will be appreciated. I would at least want to know if the sign of the expression can be determined given this information, or whether it hinges on the value of $\tau$.
Note that $$ \int_{-\infty}^\infty (X-\kappa\tau)\phi(X)\,\mathrm dX=E[X-\kappa\tau]=-\kappa\tau$$ and for $\tau>0$ $$ \int_{\tau}^\infty (X-\kappa\tau)\phi(X)\,\mathrm dX\ge\int_{\tau}^\infty (1-\kappa)\tau\phi(X)\,\mathrm dX>0.$$ So for $\tau>0$ your expression $$\int_{-\infty}^\tau (X-\kappa\tau)\phi(X)\,\mathrm dX=-\kappa\tau-\int_{\tau}^\infty (X-\kappa\tau)\phi(X)\,\mathrm dX$$ is negative.
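A numerical check (a SciPy sketch; the test values of $\kappa,\tau$ are arbitrary, and the closed form $-\phi(\tau)-\kappa\tau\Phi(\tau)$ used for comparison follows from $\int_{-\infty}^\tau x\phi(x)\,dx=-\phi(\tau)$):

```python
import numpy as np
from scipy.integrate import quad
from scipy.stats import norm

# For tau > 0 and kappa in [0, 1), the integral
#   int_{-inf}^{tau} (x - kappa*tau) phi(x) dx = -phi(tau) - kappa*tau*Phi(tau)
# is negative.
for kappa, tau in [(0.0, 0.5), (0.5, 1.0), (0.9, 2.0)]:
    val, _ = quad(lambda x: (x - kappa * tau) * norm.pdf(x), -np.inf, tau)
    closed = -norm.pdf(tau) - kappa * tau * norm.cdf(tau)
    print(kappa, tau, val, closed)   # val ≈ closed < 0
```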
{ "language": "en", "url": "https://math.stackexchange.com/questions/401109", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
A problem on matrices: Find the value of $k$ If $ \begin{bmatrix} \cos \frac{2 \pi}{7} & -\sin \frac{2 \pi}{7} \\ \sin \frac{2 \pi}{7} & \cos \frac{2 \pi}{7} \\ \end{bmatrix}^k = \begin{bmatrix} 1 & 0 \\ 0 & 1 \\ \end{bmatrix} $, then what is the least positive integral value of $k$? Actually, I have no idea how to solve this. I used trial and error and got $7$ as the answer. How do I solve this properly? Can you please offer your assistance? Thank you.
Powers of matrices should always be attacked with diagonalization, if feasible. Forget $2\pi/7$ for the moment, and look at $$ A=\begin{bmatrix} \cos\alpha & -\sin\alpha\\ \sin\alpha & \cos\alpha \end{bmatrix} $$ whose characteristic polynomial is, easily, $p_A(X)=1-2X\cos\alpha+X^2$. The discriminant is $4(\cos^2\alpha-1)=4(i\sin\alpha)^2$, so the eigenvalues of $A$ are \begin{align} \lambda&=\cos\alpha+i\sin\alpha\\ \bar{\lambda}&=\cos\alpha-i\sin\alpha \end{align} Finding the eigenvectors is easy: $$ A-\lambda I= \begin{bmatrix} -i\sin\alpha & -\sin\alpha\\ \sin\alpha & -i\sin\alpha \end{bmatrix} $$ and an eigenvector is $v=[i\quad 1]^T$. Similarly, an eigenvector for $\bar{\lambda}$ is $w=[-i\quad 1]^T$. If $$ S=\begin{bmatrix}i & -i\\1 & 1\end{bmatrix} $$ you get immediately that $$ S^{-1}=\frac{1}{2}\begin{bmatrix}-i & 1\\i & 1\end{bmatrix} $$ so, by well known rules, $$ A=SDS^{-1} $$ where $$ D= \begin{bmatrix} \cos\alpha+i\sin\alpha & 0 \\ 0 & \cos\alpha-i\sin\alpha \end{bmatrix}. $$ By De Moivre's formulas, you have $$ D^k= \begin{bmatrix} \cos(k\alpha)+i\sin(k\alpha) & 0 \\ 0 & \cos(k\alpha)-i\sin(k\alpha) \end{bmatrix}. $$ Since $A^k=S D^k S^{-1}$ your problem is now to find the minimum $k$ such that $\cos(k\alpha)+i\sin(k\alpha)=1$, that is, for $\alpha=2\pi/7$, $$ \begin{cases} \cos k(2\pi/7)=1\\ \sin k(2\pi/7)=0 \end{cases} $$ and you get $k=7$. This should not be a surprise, after all: the effect of $A$ on vectors is exactly rotating them by the angle $\alpha$. If you think of the vector $u=[x\quad y]^T$ as the complex number $z=x+iy$, when you do $Au$ you get $$ Au= \begin{bmatrix} \cos\alpha & -\sin\alpha\\ \sin\alpha & \cos\alpha \end{bmatrix} \begin{bmatrix}x\\y\end{bmatrix} =\begin{bmatrix} x\cos\alpha-y\sin\alpha\\x\sin\alpha+y\cos\alpha \end{bmatrix} $$ and $$ (x\cos\alpha-y\sin\alpha)+i(x\sin\alpha+y\cos\alpha)= (x+iy)(\cos\alpha+i\sin\alpha)=\lambda z $$ (notation as above). Thus $z$ is mapped to $\lambda z$, which is just $z$ rotated by an angle $\alpha$.
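To corroborate numerically (a small NumPy sketch):

```python
import numpy as np

# The matrix rotates by 2*pi/7, so its k-th power rotates by 2*pi*k/7;
# the smallest positive k giving the identity should be 7.
a = 2 * np.pi / 7
A = np.array([[np.cos(a), -np.sin(a)],
              [np.sin(a),  np.cos(a)]])
for k in range(1, 8):
    if np.allclose(np.linalg.matrix_power(A, k), np.eye(2)):
        print("least k:", k)   # prints: least k: 7
        break
```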
{ "language": "en", "url": "https://math.stackexchange.com/questions/401158", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 1 }
Jacobson Radical and Finite Dimensional Algebra In general, it is usually not the case that for a ring $R$, the Jacobson radical of $R$ has to be equal to the intersection of the maximal ideals of $R$. However, what I do like to know is, if we are given a finite-dimensional algebra $A$ over the field $F$, is it true that the Jacobson radical of $A$ is precisely the intersection of the maximal ideals of $A$?
Yes, it is true. See Yu. A. Drozd, V. V. Kirichenko, Finite Dimensional Algebras, Springer, 1994.
{ "language": "en", "url": "https://math.stackexchange.com/questions/401248", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 2, "answer_id": 1 }
Symmetric Matrices Using Pythagorean Triples Find symmetric matrices A =$\begin{pmatrix} a &b \\ c&d \end{pmatrix}$ such that $A^{2}=I_{2}$. Alright, so I've posed this problem earlier but my question is in regard to this problem. I was told that $\frac{1}{t}\begin{pmatrix}\mp r & \mp s \\ \mp s & \pm r \end{pmatrix}$ is a valid matrix $A$ as $A^{2}=I_{2}$, given the condition that $r^{2}+s^{2}=t^{2}$, that is, (r,s,t) is a Pythagorean Triple. Does anybody know why this works?
It works because $$A^2 = \frac{1}{t^2}\begin{pmatrix}r & s\\s & -r\end{pmatrix}\begin{pmatrix}r&s\\s&-r\end{pmatrix} = \frac{1}{t^2}\begin{pmatrix}r^2+s^2 & 0\\0 & r^2 + s^2\end{pmatrix}.$$ and you want the diagonals to be 1, i.e. $\frac{r^2 + s^2}{t^2} = 1$.
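A quick check with the $(3,4,5)$ triple (NumPy sketch):

```python
import numpy as np

# A = (1/t) [[r, s], [s, -r]] squares to the identity exactly when
# r^2 + s^2 = t^2, e.g. for the Pythagorean triple (3, 4, 5).
r, s, t = 3, 4, 5
A = np.array([[r, s], [s, -r]]) / t
print(A @ A)   # the 2x2 identity matrix (up to floating point)
```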
{ "language": "en", "url": "https://math.stackexchange.com/questions/401310", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Similar triangles question If I have a right triangle with sides $a$, $b$, and $c$ with $a$ being the hypotenuse, and another right triangle with sides $d$, $e$, and $f$ with $d$ being the hypotenuse, and $d$ has a length $x$ times that of $a$ with $x \in \mathbb R$, is it necessary for $e$ and $f$ to have a length $x$ times that of $b$ and $c$ respectively? EDIT: The corresponding non-right angles of both triangles are the same.
EDIT: This answer is now incorrect since the OP changed his question. This is a good way of visualizing the failure of your claim: Imagine the point C moving along the circle from A to the north pole. This gives you a continuum of non-similar right triangles with a given hypotenuse (in this case the diameter).
{ "language": "en", "url": "https://math.stackexchange.com/questions/401383", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
How to prove that End(R) is isomorphic to R? I tried to prove this: $R$ as a ring is isomorphic to $\operatorname{End}(R)$, where $R$ is considered as a left $R$-module. The map of the isomorphism is $$F:R\to \operatorname{End}(R), \quad F(r)=f_r,$$ where $f_r(a)=ar$. And I define the multiplication in $\operatorname{End}(R)$ by $(\cdot)$, where $h\cdot g=g\circ h$ for $g,h$ in $\operatorname{End}(R)$. Is this true?
It's true that $a\mapsto ar$ is a left module homomorphism. If we call this map $a\mapsto ar$ by $\theta_r$, then indeed $\theta:R\to End(_RR)$.

* Check that it's additive.
* Check that it's multiplicative. (You will absolutely need your rule that $f\circ g=g\cdot f$. The $\cdot$ operation you have given is the multiplication in $(End(R))^{op}$.)

If you instead try to show that $R\cong End(R_R)$ in the same way (with $\theta_r$ denoting $a\mapsto ra$), you will have better luck. Can you spot where the two cases are different?
{ "language": "en", "url": "https://math.stackexchange.com/questions/401437", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 3, "answer_id": 0 }
Find what values of $n$ give $\varphi(n) = 10$ For what values of $n$ do we get $\varphi(n) = 10$? Here, $\varphi$ is the Euler Totient Function. Now, just by looking at it, I can see that this happens when $n = 11$. Also, my friend told me that it happens when $n = 22$, but both of these are lucky guesses, or educated guesses. I haven't actually worked it out, and I don't know if there are any more. How would I go about answering this question?
Suppose $\varphi(n)=10$. If $p \mid n$ is prime then $p-1$ divides $10$. Thus $p$ is one of $2,3,11$. If $3 \mid n$, it does so with multiplicity $1$. But then there would exist $m \in \mathbb{N}$ such that $\varphi(m)=5$, and this quickly leads to a contradiction (e.g. note that such values are always even). Thus $n$ fits the form $2^a\cdot 11^b$, and we claim that $b=1$. If $b>1$ we have $11 \mid \varphi(n)$, a contradiction. As well, $b=0$ gives $\varphi(n)$ a power of $2$. Thus $n=2^a \cdot 11$, and it's easy to see from here that $n=11,22$ are the only solutions.
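A brute-force confirmation (a sketch assuming SymPy's totient; the search bound is arbitrary):

```python
from sympy import totient

# Exhaustive search: which n have phi(n) = 10?
print([n for n in range(1, 10_000) if totient(n) == 10])   # -> [11, 22]
```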
{ "language": "en", "url": "https://math.stackexchange.com/questions/401495", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 4, "answer_id": 0 }
every integer from 1 to 121 can be weighed with 5 weights that are powers of 3 We have a two-pan balance and 5 integer weights with which it is possible to weigh exactly every integer weight from 1 to 121 kg. The weights may all be placed on one pan, but some can also be placed in the pan together with the goods being weighed. The problem asks to find the 5 weights that make this possible, and also to prove that this set of five weights is the only one that solves the problem. Easily I found the 5 weights: 1, 3, 9, 27, 81, but I can't prove that this set of five weights is the only one that solves the problem. Can you help me? Thanks in advance!
A more general result says: given weights $w_1\le w_2 \le \dots \le w_n$, let $S_k = \sum_{i=1}^k w_i$ with $S_0 = 0$. Then everything from $1$ to $S_n$ is weighable iff each of the following inequalities holds: $$S_{k+1} \le 3S_k + 1 \text{ for } k = 0,\dots,n-1$$ Note that this is equivalent to $w_{k+1} \le 2 S_k +1$ for each $k$. If you wish, a proof is given here.
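For the weights $1,3,9,27,81$, weighability can also be verified directly: each weight gets a coefficient in $\{-1,0,1\}$ (goods' pan, unused, other pan). A brute-force Python sketch:

```python
from itertools import product

# A load W is weighable iff W = sum(c_i * w_i) for coefficients c_i in
# {-1, 0, +1} (weight on the goods' pan, unused, or on the other pan).
weights = [1, 3, 9, 27, 81]
reachable = {sum(c * w for c, w in zip(cs, weights))
             for cs in product((-1, 0, 1), repeat=5)}
print(all(W in reachable for W in range(1, 122)))   # True
```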
{ "language": "en", "url": "https://math.stackexchange.com/questions/401561", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 4, "answer_id": 2 }
Calculate the limit of two interrelated sequences? I'm given two sequences: $$a_{n+1}=\frac{1+a_n+a_nb_n}{b_n},b_{n+1}=\frac{1+b_n+a_nb_n}{a_n}$$ as well as an initial condition $a_1=1$, $b_1=2$, and am told to find: $\displaystyle \lim_{n\to\infty}{a_n}$. Given that I'm not even sure how to approach this problem, I tried anyway. I substituted $b_{n-1}$ for $b_n$ to begin the search for a pattern. This eventually reduced to: $$a_{n+1}=\frac{a_{n-1}(a_n+1)+a_n(1+b_{n-1}+a_{n-1}b_{n-1})}{1+b_{n-1}+a_{n-1}b_{n-1}}$$ Seeing no pattern, I did the same once more: $$a_{n+1}=\frac{a_{n-2}a_{n-1}(a_n+1)+a_n\left(a_{n-2}+(a_{n-1}+1)(1+b_{n-2}+a_{n-2}b_{n-2})\right)}{a_{n-2}+(a_{n-1}+1)(1+b_{n-2}+a_{n-2}b_{n-2})}$$ While this equation is atrocious, it actually reveals somewhat of a pattern. I can sort of see one emerging - though I'm unsure how I would actually express that. My goal here is generally to find a closed form for the $a_n$ equation, then take the limit of it. How should I approach this problem? I'm totally lost as is. Any pointers would be very much appreciated! Edit: While there is a way to prove that $\displaystyle\lim_{n\to\infty}{a_n}=5$ using $\displaystyle f(x)=\frac{1}{x-1}$, I'm still looking for a way to find the absolute form of the limit, $\displaystyle\frac{1+2a+ab}{b-a}$.
The answer is $a_n \to 5$, $b_n \to \infty$. I'm trying to prove that and I will edit this post if I figure out something. EDIT: I would write all this in a comment instead of an answer, but I cannot find how to do it... maybe I need more reputation to do this (low reputation = low privileges :P). Anyway, I still haven't solved it, but maybe some of this will help you. I will edit it when I think something out. EDIT: After many transformations and playing with numbers, I think that the limit, for $a<b$, is $$ \frac{ab + 2a +1}{b-a}$$ but I still cannot prove it. (In the statement above: $a = a_1$, $b = b_1$.)
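Iterating the recurrence numerically supports both claims (plain Python sketch; the iteration count is arbitrary):

```python
# Iterate the coupled recurrence: a_n settles at 5, matching the
# conjectured limit (1 + 2a + a*b)/(b - a) = 5 for a_1 = 1, b_1 = 2,
# while b_n grows without bound.
a, b = 1.0, 2.0
for _ in range(200):
    a, b = (1 + a + a * b) / b, (1 + b + a * b) / a   # simultaneous update
print(a)                              # ≈ 5.0
print((1 + 2 * 1 + 1 * 2) / (2 - 1))  # conjectured limit: 5.0
```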
{ "language": "en", "url": "https://math.stackexchange.com/questions/401637", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 4, "answer_id": 2 }
For any arrangement of numbers 1 to 10 in a circle, there will always exist three adjacent numbers that sum up to 17 or more I set out to solve the following question using the pigeonhole principle: Regardless of how one arranges the numbers $1$ to $10$ in a circle, there will always exist three adjacent numbers in the circle that sum up to $17$ or more. My outline [1] There are $10$ triples of adjacent numbers in the circle, and since each number appears in exactly three of them, the sum of all these triple-sums is $3\cdot 55=165$, for any arrangement of the numbers in the circle. [2] If all the adjacent triples summed to $16$ or less, then, since there are $10$ such triples, the total would be at most $160$; but we just said the total is always $165$, hence there must be a triple with sum $17$ or more. My query Could someone polish this into a mathematical proof and also clarify if I did make use of the pigeonhole principle.
We will show something stronger, namely that there exist 3 adjacent numbers that sum to 18 or more. Let the integers be $\{a_i\}_{i=1}^{10}$. WLOG, $a_1 = 1$. Consider $$a_2 + a_3 + a_4, \quad a_5 + a_6 + a_7, \quad a_8 + a_9 + a_{10}$$ The sum of these 3 numbers is $2+3 +\ldots + 10 = 54$. Hence, by the pigeonhole principle, one of them is at least $\lceil \frac{54}{3} \rceil = 18 $. I leave it to you to show that there is a construction where no 3 adjacent numbers sum to 19 or more, which shows that 18 is the best that we can do.
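The bound can also be confirmed exhaustively (plain Python sketch; it fixes $10$ in one position to remove rotational symmetry, and takes a few seconds):

```python
from itertools import permutations

# For each circular arrangement, compute the largest sum of 3 adjacent
# numbers, then minimize over all arrangements; the result is 18.
best = 3 * 10
for p in permutations(range(1, 10)):
    circ = (10,) + p
    m = max(circ[i] + circ[(i + 1) % 10] + circ[(i + 2) % 10]
            for i in range(10))
    best = min(best, m)
print(best)   # 18: every arrangement has a triple >= 18, and 18 is attained
```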
{ "language": "en", "url": "https://math.stackexchange.com/questions/401753", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 2, "answer_id": 0 }
Finding the determinant of $2A+A^{-1}-I$ given the eigenvalues of $A$ Let $A$ be a $2\times 2$ matrix whose eigenvalues are $1$ and $-1$. Find the determinant of $S=2A+A^{-1}-I$. Here I don't know how to find $A$ if eigenvectors are not given. If eigenvectors are given, then I can find using $A=PDP^{-1}$. Please solve for $A$; the rest I can do.
$$\begin{pmatrix} 0 &-1 \\ -1&0 \end{pmatrix}$$ The characteristic polynomial is $(x-1)(x+1)$.
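Note that the determinant does not depend on the choice of $A$: since $A$ is diagonalizable with eigenvalues $\pm 1$, the eigenvalues of $S=2A+A^{-1}-I$ are $2(1)+1-1=2$ and $2(-1)-1-1=-4$, so $\det S=-8$. A NumPy check on the matrix above:

```python
import numpy as np

# Eigenvalues 1 and -1 of A map to 2 and -4 under t -> 2t + 1/t - 1,
# so det(S) = 2 * (-4) = -8 regardless of the eigenvectors.
A = np.array([[0., -1.], [-1., 0.]])
S = 2 * A + np.linalg.inv(A) - np.eye(2)
print(np.linalg.det(S))   # -8.0
```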
{ "language": "en", "url": "https://math.stackexchange.com/questions/401821", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 3 }
Counterexample to inverse Leibniz alternating series test The alternating series test is a sufficient condition for the convergence of a numerical series. I am searching for a counterexample to its converse: i.e., a series (alternating, of course) which converges, but for which the hypotheses of the theorem fail. In particular, if one writes the series as $\sum (-1)^n a_n$, then $a_n$ should not be monotonically decreasing (it must still tend to zero, since the series converges).
If you want a conditionally convergent series in which the signs alternate, but we do not have monotonicity, look at $$\frac{1}{2}-1+\frac{1}{4}-\frac{1}{3}+\frac{1}{6}-\frac{1}{5}+\frac{1}{8}-\frac{1}{7}+\cdots.$$ It is not hard to show that this converges to the same number as its more familiar sister.
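Numerically, the partial sums indeed drift to $-\ln 2$, while the term sizes $\frac12,1,\frac14,\frac13,\dots$ are not monotone (plain Python sketch):

```python
import math

# Partial sums of 1/2 - 1 + 1/4 - 1/3 + 1/6 - 1/5 + ... approach -ln 2,
# the sum of the "more familiar sister" -1 + 1/2 - 1/3 + 1/4 - ...
s = 0.0
for k in range(1, 100_001):
    s += 1 / (2 * k)       # the positive term 1/(2k) comes first
    s -= 1 / (2 * k - 1)   # then the negative term -1/(2k-1)
print(s, -math.log(2))     # both ≈ -0.693147
```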
{ "language": "en", "url": "https://math.stackexchange.com/questions/401909", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Listing subgroups of a group I made a program to list all the subgroups of any group and I came up with satisfactory result for $\operatorname{Symmetric Group}[3]$ as $\left\{\{\text{Cycles}[\{\}]\},\left\{\text{Cycles}[\{\}],\text{Cycles}\left[\left( \begin{array}{cc} 1 & 2 \\ \end{array} \right)\right]\right\},\left\{\text{Cycles}[\{\}],\text{Cycles}\left[\left( \begin{array}{cc} 1 & 3 \\ \end{array} \right)\right]\right\},\left\{\text{Cycles}[\{\}],\text{Cycles}\left[\left( \begin{array}{cc} 2 & 3 \\ \end{array} \right)\right]\right\},\left\{\text{Cycles}[\{\}],\text{Cycles}\left[\left( \begin{array}{ccc} 1 & 2 & 3 \\ \end{array} \right)\right],\text{Cycles}\left[\left( \begin{array}{ccc} 1 & 3 & 2 \\ \end{array} \right)\right]\right\}\right\}$ It excludes the whole set itself though it can be added seperately. But in case of $SymmetricGroup[4]$ I am getting following $\left\{\{\text{Cycles}[\{\}]\},\left\{\text{Cycles}[\{\}],\text{Cycles}\left[\left( \begin{array}{cc} 1 & 2 \\ \end{array} \right)\right]\right\},\left\{\text{Cycles}[\{\}],\text{Cycles}\left[\left( \begin{array}{cc} 1 & 3 \\ \end{array} \right)\right]\right\},\left\{\text{Cycles}[\{\}],\text{Cycles}\left[\left( \begin{array}{cc} 1 & 4 \\ \end{array} \right)\right]\right\},\left\{\text{Cycles}[\{\}],\text{Cycles}\left[\left( \begin{array}{cc} 2 & 3 \\ \end{array} \right)\right]\right\},\left\{\text{Cycles}[\{\}],\text{Cycles}\left[\left( \begin{array}{cc} 2 & 4 \\ \end{array} \right)\right]\right\},\left\{\text{Cycles}[\{\}],\text{Cycles}\left[\left( \begin{array}{cc} 3 & 4 \\ \end{array} \right)\right]\right\},\left\{\text{Cycles}[\{\}],\text{Cycles}\left[\left( \begin{array}{cc} 1 & 2 \\ 3 & 4 \\ \end{array} \right)\right]\right\},\left\{\text{Cycles}[\{\}],\text{Cycles}\left[\left( \begin{array}{cc} 1 & 3 \\ 2 & 4 \\ \end{array} \right)\right]\right\},\left\{\text{Cycles}[\{\}],\text{Cycles}\left[\left( \begin{array}{cc} 1 & 4 \\ 2 & 3 \\ \end{array} \right)\right]\right\},\left\{\text{Cycles}[\{\}],\text{Cycles}\left[\left( \begin{array}{ccc} 1 & 2 & 3 \\ \end{array} \right)\right],\text{Cycles}\left[\left( \begin{array}{ccc} 1 & 3 & 2 \\ \end{array} \right)\right]\right\},\left\{\text{Cycles}[\{\}],\text{Cycles}\left[\left( \begin{array}{ccc} 1 & 2 & 4 \\ \end{array} \right)\right],\text{Cycles}\left[\left( \begin{array}{ccc} 1 & 4 & 2 \\ \end{array} \right)\right]\right\},\left\{\text{Cycles}[\{\}],\text{Cycles}\left[\left( \begin{array}{ccc} 1 & 3 & 4 \\ \end{array} \right)\right],\text{Cycles}\left[\left( \begin{array}{ccc} 1 & 4 & 3 \\ \end{array} \right)\right]\right\},\left\{\text{Cycles}[\{\}],\text{Cycles}\left[\left( \begin{array}{ccc} 2 & 3 & 4 \\ \end{array} \right)\right],\text{Cycles}\left[\left( \begin{array}{ccc} 2 & 4 & 3 \\ \end{array} \right)\right]\right\}\right\}$ The matrix form shows double transposition. Can someone please check for me if I am getting appropriate results? I doubt I am!!
I have the impression that you only list cyclic subgroups, and not even all of those. For $S_3$, the full group $S_3$ is missing as a subgroup (you are mentioning that in your question). For $S_4$, several subgroups are missing. In total, there should be $30$ of them, of which $17$ are cyclic; your list contains only $14$, so even the three cyclic subgroups generated by the $4$-cycles are absent. To give you a concrete example of a non-cyclic subgroup, the famous Klein four subgroup $$\{\operatorname{id},(12)(34),(13)(24),(14)(23)\}$$ is not contained in your list.
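One way to enumerate them programmatically (a sketch assuming SymPy, and using the fact that every subgroup of $S_4$ happens to be generated by at most two elements):

```python
from sympy.combinatorics import PermutationGroup
from sympy.combinatorics.named_groups import SymmetricGroup

# Close every pair of elements of S4 under the group operation and
# collect the distinct subgroups obtained.
G = SymmetricGroup(4)
elems = list(G.elements)
subgroups = {frozenset(PermutationGroup([a, b]).elements)
             for a in elems for b in elems}
print(len(subgroups))                      # 30
print(sorted(len(H) for H in subgroups))   # the subgroup orders
```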
{ "language": "en", "url": "https://math.stackexchange.com/questions/401971", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Does a closed walk necessarily contain a cycle? [HOMEWORK] I asked my professor and he said that a counterexample would be two nodes, by which the walk would go from one node to the other and back. This would be a closed walk but does not contain a cycle. But I am confused after looking at this again. Why is this not a cycle? Need there be another node?
I guess the answer depends on the exact definition of cycle. If it is as you wrote in your comment - a closed walk that starts and ends in the same vertex, and no vertex repeats on the walk (except for the start and end), then your example with two nodes is a cycle. However, a definition of a cycle usually contains a condition of non-triviality stating that a cycle has at least three vertices. So a graph with two vertices is not a cycle according to this definition.
{ "language": "en", "url": "https://math.stackexchange.com/questions/402049", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Calculate the limit: $\lim_{n\to+\infty}\sum_{k=1}^n\frac{\sin{\frac{k\pi}n}}{n}$ using the definite integral over the interval $[0,1]$. It seems to me like the Riemann integral definition: $\sum_{k=1}^n\frac{\sin{\frac{k\pi}{n}}}{n}=\frac1n(\sin{\frac{\pi}n}+...+\sin\pi)$ So $\Delta x=\frac 1n$ and $f(x)=\sin(\frac{\pi x}n)$ (not sure about $f(x)$). How do I proceed from this point?
I think it should be $$ \int_{0}^{1}\sin(\pi x)\,dx=\frac{2}{\pi} $$ EDIT: OK, the point here is the direct implementation of Riemann sums with $\Delta x = \frac{b-a}{n}$, $b=1$, $a=0$, and $x=\frac{k}{n}$.
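A numeric check of the Riemann sums (plain Python sketch):

```python
import math

# The Riemann sums (1/n) * sum_{k=1}^{n} sin(k*pi/n) approach
# the integral of sin(pi*x) over [0, 1], which equals 2/pi.
for n in [10, 100, 10_000]:
    s = sum(math.sin(k * math.pi / n) for k in range(1, n + 1)) / n
    print(n, s)
print("2/pi =", 2 / math.pi)   # ≈ 0.6366
```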
{ "language": "en", "url": "https://math.stackexchange.com/questions/402114", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 5, "answer_id": 3 }
Show that $\langle(1234),(12)\rangle = S_4$ I am trying to show that $\langle(1234),(12)\rangle = S_4$. I can multiply and get all $24$ permutations manually but isn't there a more compact solution?
Write $H$ for the subgroup generated by those two permutations. Then $(1234)(12)=(234)$, so $H$ certainly contains the elements of $\langle(1234)\rangle$, $\langle (234)\rangle$ and $(12)$. In particular $H$ contains elements of order $3$ and $4$, so $12$ divides $\vert H\vert$ by Lagrange's theorem, i.e. $\vert H\vert$ is $12$ or $24$ ($\vert H\vert$ must divide $24=\vert S_{4}\vert$). Since $(1234)\in H$ but such a permutation is not even, i.e. it doesn't belong to $A_{4}$, $H$ is not a subgroup of $A_{4}$. Hence $\vert H\vert$ must equal $24$ (the only subgroup of order $12$ in $S_{4}$ is $A_{4}$) and we're done.
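If you want to verify the generation claim by machine, closing the two generators under composition yields all $24$ elements (plain Python sketch; permutations are stored as 0-indexed tuples):

```python
from itertools import product

# Permutations are tuples p with p[i] = image of i; compose(p, q) means
# "apply q first, then p". A finite set closed under the operation inside
# a finite group is a subgroup, so iterated products give <gens>.
def compose(p, q):
    return tuple(p[q[i]] for i in range(len(q)))

gens = {(1, 2, 3, 0),   # the 4-cycle (1 2 3 4)
        (1, 0, 2, 3)}   # the transposition (1 2)
H = set(gens)
while True:
    new = {compose(p, q) for p, q in product(H, repeat=2)} - H
    if not new:
        break
    H |= new
print(len(H))   # 24 = |S_4|
```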
{ "language": "en", "url": "https://math.stackexchange.com/questions/402180", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 5, "answer_id": 4 }
A property of uniform spaces Is it true that $E\circ E\subseteq E$ for every entourage $E$ of every uniform space?
As noted in the comments, it clearly is not true in general. If an entourage $E$ has the property that $E\circ E\subseteq E$, then $E$ is a transitive, reflexive relation on $X$, and the entourage $E\cap E^{-1}$ is an equivalence relation on $X$. A uniform space whose uniformity has a base of equivalence relations is non-Archimedean, the uniform analogue of a non-Archimedean metric space; such a space is zero-dimensional, since it clearly has a base of clopen sets. Conversely, it’s known that every zero-dimensional topology is induced by a non-Archimedean uniformity. (I’ve seen this last result cited to B. Banaschewski, Über nulldimensionale Räume, Math. Nachr. $\bf13$ ($1955$), $129$-$140$, and to A.F. Monna, Remarques sur les métriques non-archimédiennes, Indag. Math. $\bf12$ ($1950$), $122$-$133$; ibid. $\bf12$ ($1950$), $179$-$191$, but I’ve not seen these papers.)
{ "language": "en", "url": "https://math.stackexchange.com/questions/402231", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Understanding the differential $dx$ when doing $u$-substitution I just finished taking my first year of calculus in college and I passed with an A. I don't think, however, that I ever really understood the entire $\frac{dy}{dx}$ notation (so I just focused on using $u'$), and now that I'm going to be starting calculus 2 and learning newer integration techniques, I feel that understanding the differential in an integral is important. Take this problem for an example: $$ \int 2x (x^2+4)^{100}dx $$ So solving this... $$u = x^2 + 4 \implies du = 2x dx \longleftarrow \text{why $dx$ here?}$$ And so now I'd have: $\displaystyle \int (u)^{100}du$ which is $\displaystyle \frac{u^{101}}{101} + C$ and then I'd just substitute back in my $u$. I'm so confused by all of this. I know how to do it from practice, but I don't understand what is really happening here. What happens to the $du$ between rewriting the integral and taking the anti-derivative? Why is writing the $dx$ so important? How should I be viewing this in when seeing an integral?
$dx$ is what is known as a differential. Informally, it is an infinitesimally small interval of $x$. Using this idea, it is clear from the definition of the derivative why $\frac{dy}{dx}$ denotes the derivative of $y$ with respect to $x$: $$y'=f'(x)=\frac{dy}{dx}=\lim_{x\to{x_0}}\frac{f(x)-f(x_0)}{x-x_0}$$ When doing u-substitution, you are defining $u$ to be an expression dependent on $x$. Treating $\frac{du}{dx}$ as a fraction (which the notation is designed to allow), to get $du$ you multiply both sides of the equation by $dx$: $$u=x^2+4$$ $$\frac{du}{dx}=2x$$ $$dx\frac{du}{dx}=2x\space{dx}$$ $$du=2x\space{dx}$$
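If you want to check the computation symbolically, differentiate the antiderivative (a sketch assuming SymPy):

```python
import sympy as sp

# d/dx [ (x^2 + 4)^101 / 101 ] should reproduce the integrand
# 2x (x^2 + 4)^100 from the u-substitution.
x = sp.symbols('x')
F = (x**2 + 4)**101 / 101
print(sp.simplify(sp.diff(F, x) - 2*x*(x**2 + 4)**100))   # 0
```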
{ "language": "en", "url": "https://math.stackexchange.com/questions/402303", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "18", "answer_count": 4, "answer_id": 3 }
Are there nonlinear operators that have the group property? To be clear: what I am actually talking about is a nonlinear operator on a finitely generated vector space $V$ of dimension $d(V)\in \mathbb{N}$, $d(V)>1$. I can think of several nonlinear operators on such a vector space, but none of them have the requisite properties of a group. In particular, but not exclusively, are there any such nonlinear operator groups that meet the definition of a Lie group?
How about the following construction. Let $M \in R_{d}(G)$ be a $d$-dimensional representation of some Lie group (e.g. $SL(2,\mathbb{R})$) and let $V$ be some $d$-dimensional vector space. Now define: $$ f(x,M,\epsilon)= \frac{M x}{(1+\epsilon w.x)} $$ where the vector $w$ is chosen so that $w.M= w$ for all $M\in SL(2,\mathbb{R})$. Such vectors exist; for instance, if $d=4$ and $R_4(G)=R_2(G) \otimes R_2(G)$ then $w=\begin{pmatrix} 0 \\ 1\\ -1\\ 0 \end{pmatrix}$ is one such vector. It is now not too difficult to convince oneself that: $$ \begin{eqnarray} f(x,I,0) = x \\ f(f(x,M_1,\epsilon_1),M_2,\epsilon_2) = f(x,M_2 M_1,\epsilon_1+\epsilon_2) \end{eqnarray}$$ (note the order $M_2M_1$, since $M_1$ acts first), so that the mapping $f$ is nonlinear and satisfies the group property.
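A numerical spot-check of this construction for $d=4$ (a NumPy sketch of my own; random matrices are normalized to determinant $+1$):

```python
import numpy as np

rng = np.random.default_rng(2)

def sl2():
    # random 2x2 real matrix normalized to determinant +1
    M = rng.normal(size=(2, 2))
    if np.linalg.det(M) < 0:
        M[:, [0, 1]] = M[:, [1, 0]]   # swap columns to flip the sign
    return M / np.sqrt(np.linalg.det(M))

w = np.array([0., 1., -1., 0.])

def f(x, M, eps):
    K = np.kron(M, M)                 # the 4-dim representation R2 (x) R2
    return K @ x / (1 + eps * (w @ x))

M1, M2 = sl2(), sl2()
x = rng.normal(size=4)
print(np.allclose(w @ np.kron(M1, M1), w))        # invariance of w: True
print(np.allclose(f(f(x, M1, 0.3), M2, 0.5),
                  f(x, M2 @ M1, 0.8)))            # group law: True
```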
{ "language": "en", "url": "https://math.stackexchange.com/questions/402350", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 2 }
How can I prove this closed form for $\sum_{n=1}^\infty\frac{(4n)!}{\Gamma\left(\frac23+n\right)\,\Gamma\left(\frac43+n\right)\,n!^2\,(-256)^n}$ How can I prove the following conjectured identity? $$\mathcal{S}=\sum_{n=1}^\infty\frac{(4\,n)!}{\Gamma\left(\frac23+n\right)\,\Gamma\left(\frac43+n\right)\,n!^2\,(-256)^n}\stackrel?=\frac{\sqrt3}{2\,\pi}\left(2\sqrt{\frac8{\sqrt\alpha}-\alpha}-2\sqrt\alpha-3\right),$$ where $$\alpha=2\sqrt[3]{1+\sqrt2}-\frac2{\sqrt[3]{1+\sqrt2}}.$$ The conjecture is equivalent to saying that $\pi\,\mathcal{S}$ is the root of the polynomial $$256 x^8-6912 x^6-814752 x^4-13364784 x^2+531441,$$ belonging to the interval $-1<x<0$. The summand came as a solution to the recurrence relation $$\begin{cases}a(1)=-\frac{81\sqrt3}{512\,\pi}\\\\a(n+1)=-\frac{9\,(2n+1)(4n+1)(4 n+3)}{32\,(n+1)(3n+2)(3n+4)}a(n)\end{cases}.$$ The conjectured closed form was found using computer based on results of numerical summation. The approximate numeric result is $\mathcal{S}=-0.06339748327393640606333225108136874...$ (click to see 1000 digits).
According to Mathematica, the sum is $$ \frac{3}{\Gamma(\frac13)\Gamma(\frac23)}\left( -1 + {}_3F_2\left(\frac14,\frac12,\frac34; \frac23,\frac43; -1\right) \right). $$ This form is actually quite straightforward if you write out $(4n)!$ as $$ 4^{4n}n!(1/4)_n (1/2)_n (3/4)_n $$ using rising powers ("Pochhammer symbols") and then use the definition of a hypergeometric function. The hypergeometric function there can be handled with equation 25 here: http://mathworld.wolfram.com/HypergeometricFunction.html: $$ {}_3F_2\left(\frac14,\frac12,\frac34; \frac23,\frac43; y\right)=\frac{1}{1-x^k},$$ where $k=3$, $0\leq x\leq (1+k)^{-1/k}$ and $$ y = \left(\frac{x(1-x^k)}{f_k}\right)^k, \qquad f_k = \frac{k}{(1+k)^{(1+1/k)}}. $$ Now setting $y=-1$, we get the polynomial equation in $x$ $$ \frac{256}{27} x^3 \left(1-x^3\right)^3 = -1,$$ which has two real roots, neither of them in the necessary interval $[0,(1+k)^{-1/k}=4^{-1/3}]$, since one is $-0.43\ldots$ and the other $1.124\ldots$. However, one of those roots, $x_1=-0.436250\ldots$ just happens to give the (numerically at least) right answer, so never mind that. Also, note that $$ \Gamma(1/3)\Gamma(2/3) = \frac{2\pi}{\sqrt{3}}. $$ The polynomial equation above is in terms of $x^3$, so we can simplify that too a little, so the answer is that the sum equals $$ \frac{3^{3/2}}{2\pi} \left(-1+(1-z_1)^{-1}\right), $$ where $z_1$ is a root of the polynomial equation $$ 256z(1-z)^3+27=0, \qquad z_1=-0.0830249175076244\ldots $$ (The other real root is $\approx 1.42$.) How did you find the conjectured closed form?
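For the record, a numerical comparison of the hypergeometric form with the conjectured radical closed form (a sketch assuming mpmath; it uses $\Gamma(1/3)\Gamma(2/3)=2\pi/\sqrt3$):

```python
from mpmath import mp, mpf, hyper, sqrt, cbrt, pi

mp.dps = 30
# Hypergeometric form of the sum, with 3/(Gamma(1/3)Gamma(2/3)) = 3*sqrt(3)/(2*pi).
S_hyp = 3 * sqrt(3) / (2 * pi) * (hyper([mpf(1)/4, mpf(1)/2, mpf(3)/4],
                                        [mpf(2)/3, mpf(4)/3], -1) - 1)
# Conjectured radical closed form from the question.
c = cbrt(1 + sqrt(2))
alpha = 2 * c - 2 / c
S_conj = sqrt(3) / (2 * pi) * (2 * sqrt(8 / sqrt(alpha) - alpha)
                               - 2 * sqrt(alpha) - 3)
print(S_hyp)    # -0.0633974832739364...
print(S_conj)   # matches to working precision
```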
{ "language": "en", "url": "https://math.stackexchange.com/questions/402432", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "51", "answer_count": 2, "answer_id": 0 }
Intuition of Addition Formula for Sine and Cosine The addition formulas are $$\sin(A+B)=\sin A\cos B+\sin B\cos A$$ for the sine function and $$\cos(A+B)=\cos A\cos B-\sin A\sin B$$ for the cosine function. I know how to derive both of the proofs using acute angles, which can be seen here http://en.wikibooks.org/wiki/Trigonometry/Addition_Formula_for_Cosines, but pretty sure those who have taken trig know what I'm talking about. So I know how to derive and prove both of the two-angle formulas using acute angles, but what I am completely confused about is where those triangles came from. For proving the two-angle cosine formula, we look at two acute angles, $A$ and $B$, where $A+B<90^\circ$, and keep on expanding. So my question is: where did those two triangles come from, and what is the intuition behind having two acute triangles on top of each other?
The bottom triangle is the right triangle used to compute sine and cosine of $\alpha$. The upper triangle is the right triangle used to compute sine and cosine of $\beta$, scaled and rotated so its base is the same as the hypotenuse of the lower triangle. We know the ratios of the sides of these triangles because of the definitions of sine and cosine. Making the base of the upper triangle the same length as the hypotenuse of the lower triangle allows relations to be drawn between the two triangles. Setting the base of the upper triangle to be aligned with the hypotenuse of the lower triangle creates a triangle with $\alpha+\beta$ as an angle.
{ "language": "en", "url": "https://math.stackexchange.com/questions/402487", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "27", "answer_count": 3, "answer_id": 2 }
$f$ and $g$ are holomorphic functions, $A=\{z:{1\over 2}<\lvert z\rvert<1\}$, $D=\{z: \lvert z-2\rvert<1\}$ $f$ and $g$ are holomorphic functions defined on $A\cup D$, where $A=\{z:{1\over 2}<\lvert z\rvert<1\}$ and $D=\{z:\lvert z-2\rvert <1\}$.

1. If $f(z)g(z)=0$ for all $z\in A\cup D$, then either $f(z)=0$ for all $z\in A$ or $g(z)=0$ for all $z\in A$.
2. If $f(z)g(z)=0$ for all $z\in D$, then either $f(z)=0$ for all $z\in D$ or $g(z)=0$ for all $z\in D$.
3. If $f(z)g(z)=0$ for all $z\in A\cup D$, then either $f(z)=0$ for all $z\in A\cup D$ or $g(z)=0$ for all $z\in A\cup D$.
4. If $f(z)g(z)=0$ for all $z\in A$, then either $f(z)=0$ for all $z\in A$ or $g(z)=0$ for all $z\in A$.

Except for $1$ and $3$, I can say $2$ and $4$ are true because of the identity theorem, right? Zeroes of a nonzero holomorphic function are isolated. For $3$ I can define $f(z)=z$ on $A$ and $0$ on $D$, and $g(z)=0$ on $A$ and $z^2$ on $D$.
You can use the identity theorem, but since that theorem doesn't directly apply to products, you should explain how you use the identity theorem. You can, for example, argue that if $f\cdot g = 0$ on some connected open set $S$, then there are $S_1,S_2$ with $f = 0$ on $S_1$, $g = 0$ on $S_2$, $S_1 \cap S_2 = \emptyset$ and $S_1 \cup S_2 = S$. Now if $f$ isn't identically zero on $S$, then due to the identity theorem $S_1$ contains only isolated points, hence for every $x \in S_1$ you have that $B_\epsilon(x) \setminus \{x\} \subset S_2$ for some $\epsilon > 0$. Using the identity theorem again, you get $g = 0$ on $S$. The same applies with $f$ and $g$ reversed, thus at least one of them is identically zero on $S$. This works, as you correctly stated, for (2) and (4). As Robert Israel commented, you get (1) from (4), and for (3) your counter-example indeed refutes the assertion.
{ "language": "en", "url": "https://math.stackexchange.com/questions/402618", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Exponential Growth, half-life time An exponential growth function has at time $t = 5$ a) the growth factor (I guess that is just the "$\lambda$") of $0.125$ - what is the half-life time? b) A growth factor of $64$ - what is the doubling time ("Verdopplungsfaktor", i.e. doubling factor)? For a), as far as I know the half-life time is $\displaystyle T_{1/n} = \frac{\ln(n)}{\lambda}$, but how do I use the fact that we are at $t = 5$? I don't understand the (b) part. Thanks
The growth factor tells you the relative growth between $f(x)$ and $f(x+1)$, i.e. it's $$ \frac{f(t+1)}{f(t)} \text{.} $$ If $f$ grows exactly exponentially, i.e. if $$ f(t) = \lambda\alpha^t = \lambda e^{\beta t} \quad\text{($\beta = \ln \alpha$ respectively $\alpha = e^\beta$)} \text{,} $$ then $$ \frac{f(t+1)}{f(t)} = \frac{\lambda\alpha^{t+1}}{\lambda\alpha^t} = \alpha = e^\beta \text{,} $$ meaning that, as you noticed, the growth factor doesn't depend on $t$ - it's constant. The half-life time is the time $h$ it takes to get from $f(t)$ to $f(t+h)=\frac{f(t)}{2}$. For a strictly exponential $f$, you have $$ f(t+h) = \frac{f(t)}{2} \Rightarrow \lambda\alpha^{t+h} = \frac{\lambda}{2}\alpha^t \Rightarrow \alpha^h = \frac{1}{2} \Rightarrow h = \log_\alpha \frac{1}{2} = -\frac{\ln 2}{\ln \alpha} = -\frac{\ln 2}{\beta} \text{.} $$ Similarly, the doubling time is the time $d$ it takes to get from $f(t)$ to $f(t+d) = 2f(t)$, and you have $$ f(t+d) = 2f(t) \Rightarrow \lambda\alpha^{t+d} = 2\lambda\alpha^t \Rightarrow \alpha^d = 2\Rightarrow d = \log_\alpha 2 = \frac{\ln 2}{\ln \alpha} = \frac{\ln 2}{\beta} \text{.} $$ Thus, you always have that $d = -h$ for doubling time $d$ and half-life time $h$, which of course makes sense. If you go forward $d$ units of time to double the value, then going backwards $d$ units halves the value, and similarly for going forward respectively backward $h$ units to halve respectively double the value. In your case, you get that the doubling time for (b) is $\frac{\ln 2}{\ln 64} = \frac{1}{6}$. For (a) you get that the half-life time is $-\frac{\ln 2}{\ln \frac{1}{8}} = \frac{1}{3}$. You can also derive these by observing that a growth factor of one-eighth means that going forward one unit of time makes the value decrease to one-eighth of the original value. Thus, going forward one-third of a unit makes the value decrease to one-half, since if it decreases to one-half three times, it overall decreases to one-eighth. Similarly, if the value increases to 64 times the original value when going forward one unit of time, you have to go forward one-sixth of a unit of time to have it increase to twice the value, since $2^6 = 64$.
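In code (plain Python):

```python
import math

# (a) growth factor 1/8: half-life    = -ln 2 / ln(1/8) = 1/3
# (b) growth factor 64:  doubling time =  ln 2 / ln 64   = 1/6
print(-math.log(2) / math.log(1 / 8))   # 0.3333...
print(math.log(2) / math.log(64))       # 0.1666...
```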
{ "language": "en", "url": "https://math.stackexchange.com/questions/402706", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Range of: $\sqrt{\sin(\cos x)}+\sqrt{\cos(\sin x)}$ Range of: $$\sqrt{\sin(\cos x)}+\sqrt{\cos(\sin x)}$$ Any help will be appreciated.
$$u=\cos(x),\qquad v=\sin(x)$$ $$\sqrt{\sin(u)}+\sqrt{\cos(v)}$$ We need $\sin(u)\ge 0$ and $\cos(v)\ge 0$: $$u\in[2\pi k,\pi+2\pi k],\quad k\in\mathbb{Z}$$ $$v\in\left[-\frac{\pi}{2}+2\pi k,\frac{\pi}{2}+2\pi k\right],\quad k\in\mathbb{Z}$$ Since $v=\sin x\in[-1,1]\subset\left[-\frac{\pi}{2},\frac{\pi}{2}\right]$, $v$ will always be in the necessary interval. But $u=\cos x\in[-1,1]$ lies in the necessary interval only when $\cos x\ge0$, that is, when $x\in\left[-\frac{\pi}{2}+2\pi k,\frac{\pi}{2}+2\pi k\right]$. So that is the domain; since the function is even and $2\pi$-periodic, it suffices to study it on $\left[0,\frac{\pi}{2}\right]$. On this interval $\cos x$ decreases, so $\sin(\cos x)$ decreases, and $\sin x$ increases within $[0,1]$, so $\cos(\sin x)$ decreases; hence the whole function is decreasing there. The maximum is therefore the value at $x=0$, namely $\sqrt{\sin 1}+1$, the minimum is the value at $x=\frac{\pi}{2}$, namely $\sqrt{\cos 1}$, and the range is $\left[\sqrt{\cos 1},\,1+\sqrt{\sin 1}\right]$.
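A quick numeric scan agrees (NumPy sketch):

```python
import numpy as np

# Scan the function on [0, pi/2] (enough by evenness and periodicity).
x = np.linspace(0, np.pi / 2, 1_000_001)
f = np.sqrt(np.sin(np.cos(x))) + np.sqrt(np.cos(np.sin(x)))
print(f.min(), f.max())   # ≈ 0.7351 = sqrt(cos 1), ≈ 1.9174 = 1 + sqrt(sin 1)
```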
{ "language": "en", "url": "https://math.stackexchange.com/questions/402797", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 4, "answer_id": 2 }
a real convergent sequence has a unique limit point How to show that a real convergent sequence has a unique limit point, viz. the limit of the sequence? I've used the result several times but I don't know how to prove it! Please help me!
I am assuming that limit points are defined as in Section $6.4$ of the book Analysis $1$ by the author Terence Tao. We assume that the sequence of real numbers $(a_{n})_{n=m}^{\infty}$ converges to the real number $c$. Then we have to show that $c$ is the unique limit point of the sequence. First, we shall show that $c$ is indeed a limit point. By the definition of convergence we have $$\forall\epsilon > 0\:\exists N_{\epsilon}\geq m\:\text{s.t.}\:\forall n\geq N_{\epsilon}\:|a_{n}-c|\leq\epsilon$$ We need to prove that $\forall\epsilon >0\:\forall N\geq m\:\exists n\geq N\:\text{s.t.}\:|a_{n}-c|\leq\epsilon$. Fix $\epsilon$ and $N$. If $N\leq N_{\epsilon}$ then we could pick and $n\geq N_{\epsilon}$. If $N>N_{\epsilon}$ then picking any $n\geq N > N_{\epsilon}$ would suffice. Now, we shall prove that $c$ is the unique limit point. Let us assume for the sake of contradiction that $\exists$ a limit point $c'\in\mathbb{R}$ and $c'\neq c$. Since the sequence converges to $c$, $\forall\epsilon > 0\:\exists M_{\epsilon}\geq m$ such that $\forall n\geq M_{\epsilon}$ we have $|a_{n}-c|\leq \epsilon/2$. Further, since $c'$ is a limit point, $\forall\epsilon > 0$ and $\forall N\geq m\:\exists k_{N}\geq N$ such that $|a_{k_{N}}-c'|\leq \epsilon/2$. In particular, $|a_{k_{M_{\epsilon}}}-c'|\leq \epsilon/2$ and $|a_{k_{M_{\epsilon}}}-c|\leq \epsilon/2$. Using the triangle inequality we get $\forall\epsilon > 0\: |c-c'|\leq\epsilon\implies |c-c'| = 0\implies c = c'$ which contradicts the assumption.
{ "language": "en", "url": "https://math.stackexchange.com/questions/402847", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 2 }
Pointwise supremum of a convex function collection In Hoang Tuy, Convex Analysis and Global Optimization, Kluwer, p. 46, I read: "A positive combination of finitely many proper convex functions on $R^n$ is convex. The upper envelope (pointwise supremum) of an arbitrary family of convex functions is convex". In order to prove the second claim the author sets the pointwise supremum as: $$ f(x) = \sup \{f_i(x) \mid i \in I\} $$ Then $$ \mathrm{epi} f = \bigcap_{i \in I} \mathrm{epi} f_i $$ As "the intersection of a family of convex sets is a convex set" the thesis follows. The claim on the intersections raises some doubts for me. If $(x,t)$ is in the epigraph of $f$, then $f(x) \leq t$ and, as $f(x)\geq f_i(x)$, also $f_i(x) \leq t$; therefore $(x,t)$ is in the epigraph of every $f_i$ and the intersection proposition follows. Now, what if, for some (not all) $\hat{i}$, $f_{\hat{i}}(x^0)$ is not defined? The sup still applies to the other $f_i$ and so, in as far as the sup is finite, $f(x^0)$ is defined. In this case when $(x^0,t) \in \mathrm{epi} f$ it is not $\in \mathrm{epi} f_{\hat{i}}$ too, since $x^0 \not \in \mathrm{dom}\, f_{\hat{i}}$. So it is simple to say that $f$ is convex over $\bigcap_{i \in I} \mathrm{dom} f_i\subset \mathrm{dom} f$, because in this set every $x \in \mathrm{dom}\, f_{\hat{i}}$. What can we say when $x \in \mathrm{dom}\, f$ but $x \not \in \mathrm{dom}\, f_{\hat{i}}$ for some $i$?
I think it is either assumed that the $f_i$ are defined on the same domain $D$, or that (following a common convention) we set $f_i(x)=+\infty$ if $x \notin \mathrm{Dom}(f_i)$. You can easily check that under this convention, the extended $f_i$ still remain convex and the claim is true.
{ "language": "en", "url": "https://math.stackexchange.com/questions/402919", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Solving $\sqrt{3x^2-2}+\sqrt[3]{x^2-1}= 3x-2 $ How can I solve the equation $$\sqrt{3x^2-2}+\sqrt[3]{x^2-1}= 3x-2$$ I know that it has two roots: $x=1$ and $x=3$.
Substituting $x = \sqrt{t^3+1}$ and squaring twice, we arrive at the equation $$ 36t^6-24t^5-95t^4+8t^3+4t^2-48t=0.$$ Its real roots are $t=0$ and $t=2$ (the latter root is found by testing rational candidates of the form $\pm \text{divisor}(48)/\text{divisor}(36)$), therefore $x=1$ and $x=3$.
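Both roots, and the polynomial in $t$, can be checked mechanically (a sketch assuming SymPy):

```python
import sympy as sp

# Verify the two roots of the radical equation and recover them from
# the polynomial in t, where x = sqrt(t^3 + 1).
x, t = sp.symbols('x t', real=True)
expr = sp.sqrt(3*x**2 - 2) + sp.cbrt(x**2 - 1) - (3*x - 2)
for root in (1, 3):
    print(root, sp.simplify(expr.subs(x, root)))   # both print 0
print(sp.real_roots(36*t**6 - 24*t**5 - 95*t**4
                    + 8*t**3 + 4*t**2 - 48*t))      # [0, 2]
```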
{ "language": "en", "url": "https://math.stackexchange.com/questions/402965", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Inverse Laplace transformation using residue method of transfer function that contains time delay I'm having a problem trying to inverse Laplace transform the following equation $$ h_0 = K_p \frac{1 - T s}{1 + T s} e ^ { - \tau s} $$ I've tried to solve this equation using the residue method and got the following. $$ y(t) = 2 K_p e ^ {- \tau s} e ^ {-t/T} $$ $$ y(t) = 2 K_p e ^ {\frac{\tau}{T}} e ^ {-t/T} $$ And that is clearly wrong. Is it so that you can't use the residue method on functions that contain time delay, or is it possible to do a "work around" or something to get to the right answer?
First do polynomial division to simplify the fraction: $$\frac{1-Ts}{1+Ts}=-1+\frac{2}{1+Ts}$$ Now expand $h_0$: $$h_0=-K_pe^{-\tau{s}}+2K_p\frac{1}{Ts+1}e^{-\tau{s}}$$ Recall the time-domain shift property (with $F=\mathcal{L}f$): $$\mathcal{L}(f(t-\tau))=F(s)e^{-\tau{s}}$$ Therefore $$\mathcal{L}^{-1}h_0=-K_p\delta{(t-\tau)}+2K_p g(t-\tau)$$ where $g(t)=\mathcal{L}^{-1}\frac{1}{Ts+1}$. To take the inverse Laplace transform of this term, recall the frequency-domain shift property: $$\mathcal{L}^{-1}F(s-a)=f(t)e^{at}$$ $$\frac{1}{Ts+1}=\frac{1}{T(s+\frac{1}{T})}=\frac{1}{T}\frac{1}{s+\frac{1}{T}}$$ Therefore the inverse Laplace transform of this term is: $$g(t)=\frac{1}{T}e^{-\frac{t}{T}}$$ Finally, putting all of it together, the full inverse Laplace transform of the original expression is: $$-K_p\delta(t-\tau)+\frac{2K_p}{T}e^{-\frac{t-\tau}{T}}\qquad (t\ge\tau)$$
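A numerical check of the result (a SciPy sketch; the parameter values are arbitrary, and the $\delta$ term's transform $-K_pe^{-\tau s}$ is handled analytically):

```python
import numpy as np
from scipy.integrate import quad

# Laplace-transform the claimed inverse and compare with
# Kp * (1 - T*s)/(1 + T*s) * exp(-tau*s) at a sample point s.
Kp, T, tau, s = 1.5, 0.7, 0.4, 2.0
tail, _ = quad(lambda t: (2 * Kp / T) * np.exp(-(t - tau) / T)
               * np.exp(-s * t), tau, np.inf)
lhs = -Kp * np.exp(-tau * s) + tail   # delta term contributes -Kp*exp(-tau*s)
rhs = Kp * (1 - T * s) / (1 + T * s) * np.exp(-tau * s)
print(lhs, rhs)   # both ≈ -0.1124
```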
{ "language": "en", "url": "https://math.stackexchange.com/questions/403042", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Show that $n \ge \sqrt{n+1}+\sqrt{n}$ (how) Can I show that $n \ge \sqrt{n+1}+\sqrt{n}$? It should be true for all $n \ge 5$. Tried it via induction:

* $n=5$: $5 \ge \sqrt{5} + \sqrt{6}$ is true.
* $n\implies n+1$: I need to show that $n+1 \ge \sqrt{n+1} + \sqrt{n+2}$. Starting with $n+1 \ge \sqrt{n} + \sqrt{n+1} + 1$ .. (now??)

Is this the right way?
Hint: $\sqrt{n} + \sqrt{n+1} \leq 2\sqrt{n+1}$. Can you take it from there?
{ "language": "en", "url": "https://math.stackexchange.com/questions/403090", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "14", "answer_count": 6, "answer_id": 3 }
Integral over a ball Let $a=(1,2)\in\mathbb{R}^{2}$ and $B(a,3)$ denote a ball in $\mathbb{R}^{2}$ centered at $a$ and of radius equal to $3$. Evaluate the following integral: $$\int_{B(a,3)}y^{3}-3x^{2}y \ dx dy$$ Should I use polar coordinates? Or is there any tricky solution to this?
Note that $\Delta(y^3-3x^2y)=0$, so that $y^3-3x^2y$ is harmonic. By the mean value property, we get that the mean value over the ball is the value at the center. Since the area of the ball is $9\pi$ and the value at the center is $2$, we get $$ \int_{B(a,3)}\left(y^3-3x^2y\right)\mathrm{d}x\,\mathrm{d}y=18\pi $$
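A direct numerical evaluation in polar coordinates confirms this (a SciPy sketch):

```python
import numpy as np
from scipy.integrate import dblquad

# Integrate y^3 - 3*x^2*y over the disk of radius 3 centered at (1, 2):
# x = 1 + r*cos(th), y = 2 + r*sin(th), with Jacobian r.
f = lambda r, th: ((2 + r*np.sin(th))**3
                   - 3*(1 + r*np.cos(th))**2 * (2 + r*np.sin(th))) * r
val, _ = dblquad(f, 0, 2*np.pi, lambda th: 0, lambda th: 3)
print(val, 18*np.pi)   # both ≈ 56.549
```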
{ "language": "en", "url": "https://math.stackexchange.com/questions/403162", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Prove $\sin \left(1/n\right)$ tends to $0$ as $n$ tends to infinity. I'm sure there is an easy solution to this but my mind has gone blank! Any help on proving that $\sin( \frac{1}{n})\longrightarrow0$ as $n\longrightarrow\infty$ would be much appreciated. This question was set on a course before continuity was introduced, just using basic sequence facts. I should have phrased it as: Find an $N$ such that for all $n > N$ $|\sin(1/n)| < \varepsilon$
Hint $$\quad0 \leq\sin\left(\frac{1}{n}\right)\leq \frac{1}{n}$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/403210", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 5, "answer_id": 2 }
A problem from distribution theory. Let $f$, $g\in C(\Omega)$, and suppose that $f \neq g$ in $C(\Omega)$. How can we prove that $f \neq g$ as distributions? Here's the idea of my proof. $f$ and $g$ are continuous functions, so they are locally integrable. Now, take any nonzero $\phi \in D(\Omega)$. Let us suppose that $\langle T_f,\phi\rangle =\langle T_g,\phi \rangle$ and $f\neq g$. $\langle T_f,\phi \rangle = \langle T_g,\phi \rangle$ $\implies \int_\Omega f(x) \phi(x)\, dx = \int_\Omega g(x) \phi(x)\, dx$ $\implies \int_\Omega \phi(x)[f(x)-g(x)]\, dx =0$, i.e., the area under the above function is zero. We know that $\phi$ is nonzero; we just need to prove that this integral is zero for every such $\phi$ only when $f(x)-g(x)=0$, and this will contradict our supposition that $f$ and $g$ are not equal. [Please help me prove the last point]
Let me add a little more detail to my comment. You are correct up to the $\int_{\Omega} \phi(x)[f(x)-g(x)]dx = 0$. Now, for $\varepsilon > 0$ and $x_0 \in \Omega$, let $B_{\varepsilon}(x_0)$ denote the ball of radius $\varepsilon$ centered at $x_0$. Then, you can find a test function $\psi$ with the following properties: (i) $\psi(x) > 0 \, \forall x \in B_{\varepsilon}(x_0)$ (ii) $\psi(x) = 0 \, \forall x \notin B_{\varepsilon}(x_0)$ (iii) $\int_{\Omega} \psi = \int_{B_{\varepsilon}(x_0)}\psi = 1.$ Now, choose $\phi = \psi$ and for any $x_0 \in \Omega$ $$ 0 = \int_{\Omega} \psi(x)[f(x)-g(x)]dx = \int_{B_{\varepsilon}(x_0)} \psi(x)[f(x)-g(x)]dx \approx f(x_0)-g(x_0) $$ As $\varepsilon \rightarrow 0$, the equation is exact, i.e. $f(x_0) - g(x_0) = 0$. Since $x_0$ was arbitrary, $f = g$ in $\Omega$, a contradiction.
{ "language": "en", "url": "https://math.stackexchange.com/questions/403298", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Prove if $n^2$ is even, then $n$ is even. I am just learning maths, and would like someone to verify my proof. Suppose $n$ is an integer, and that $n^2$ is even. If we add $n$ to $n^2$, we have $n^2 + n = n(n+1)$, and it follows that $n(n+1)$ is even. Since $n^2$ is even, $n$ is even. Is this valid?
We know that $n^2=n\times n$. We also know the parity rules for products:

* even $\times$ even = even
* even $\times$ odd = even
* odd $\times$ even = even
* odd $\times$ odd = odd

In the expression $n \times n$ both operands are the same number $n$, so only the first case (both even) and the last case (both odd) can occur. If $n$ were odd, the last rule would make $n^2$ odd; since $n^2$ is even, that case is ruled out, and we are left with the case where $n$ is even. Hence it is proved that if $n^2$ is even, then $n$ is even.
{ "language": "en", "url": "https://math.stackexchange.com/questions/403346", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "68", "answer_count": 15, "answer_id": 1 }
Is There A Function Of Constant Area? If I take a point $(x,y)$ and multiply the coordinates $x\times y$ to find the area $(A)$ defined by the rectangle formed with the axes, then is there a function $f(x)$ so that $xy = A$, regardless of what value of $x$ is chosen?
Take any desired real value $A$, then from $xy = A$, define $$f: \mathbb R\setminus \{0\} \to \mathbb R,\quad f(x) = y = \dfrac Ax$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/403401", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 2, "answer_id": 1 }
What is the answer to this limit what is the limit value of the power series: $$ \lim_{x\rightarrow +\infty} \sum_{k=1}^\infty (-1)^k \frac{x^k}{k^{k-m}}$$ where $m>1$.
For $\lim_{x\rightarrow +\infty} \sum_{k=1}^\infty (-1)^k \frac{x^k}{k^{k-m}}$, the ratio of consecutive terms is $\begin{align} \frac{x^{k+1}}{(k+1)^{k+1-m}}\big/\frac{x^k}{k^{k-m}} &=\frac{x k^{k-m}}{(k+1)^{k+1-m}}\\ &=\frac{x }{k(1+1/k)^{k+1-m}}\\ &=\frac{x }{k(1+1/k)^{k+1}(1+1/k)^{-m}}\\ &\approx\frac{x }{ke(1+1/k)^{-m}}\\ &=\frac{x (1+1/k)^{m}}{ke}\\ \end{align} $ For large $k$ and fixed $m$, $(1+1/k)^{m} \approx 1+m/k$, so the ratio is about $x/(ke)$, so the sum is like $\lim_{x\rightarrow +\infty} \sum_{k=1}^\infty (-1)^k \big(\frac{x}{ek}\big)^k$. Note: I feel somewhat uncomfortable about this conclusion, but I will continue anyway. Interesting how the $m$ goes away (unless I made a mistake, which is not unknown). In the answer pointed to by Mhenni Benghorbal, it is shown that this limit is $-1$, so it seems that this is the limit of this sum also.
{ "language": "en", "url": "https://math.stackexchange.com/questions/403478", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
When a quotient of a UFD is also a UFD? Let $R$ be a UFD and let $a\in R$ be nonzero element. Under what conditions will $R/aR$ be a UFD? A more specific question: Suppose $R$ is a regular local ring and let $I$ be a height two ideal which is radical. Can we find an element $a\in I$ such that $R/aR$ is a UFD?
This is indeed a complicated question, that has also been much studied. Let me just more or less quote directly from Eisenbud's Commutative Algebra book (all found in Exercise 20.17): The Noether-Lefschetz theorem: if $R = \mathbb{C}[x_1, \ldots x_4]$ is the polynomial ring in $4$ variables over $\mathbb{C}$, then for almost every homogeneous form $f$ of degree $\ge 4$, $R/(f)$ is factorial. On the other hand, in dimension $3$, there is a theorem of Andreotti-Salmon: Let $(R,P)$ be a $3$-dimensional regular local ring, and $0 \ne f \in P$. Then $R/(f)$ is factorial iff $f$ cannot be written as the determinant of an $n \times n$ matrix with entries in $P$, for $n > 1$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/403565", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 1, "answer_id": 0 }
Injectivity of a map between manifolds I'm learning the concepts of immersions at the moment. However I'm a bit confused when they define an immersion as a function $f: X\rightarrow Y$ where $X$ and $Y$ are manifolds with dim$X <$ dim$Y$ such that $df_x: T_x(X)\rightarrow T_y(Y)$ is injective. I was wondering why don't we let $f$ be injective and say that's the best case we can get for the condition dim$X <$ dim$Y$ (since under this condition we can't apply the inverse function theorem)? Also, does injectivity of $df_x$ imply the injectivity of $f$ (it seems that I can't prove it)? How should we picture an immersion (something like the tangent space of $X$ always "immerses" into the tangent space of $Y$)? Thanks for everyone's help!
I think a glance at the Wikipedia article helps. Immersions usually are not injective because the image can appear "knotted" in the target space. I do not think you need $\dim X<\dim Y$ in general. You can define it for $\dim X=\dim Y$; it is only because we need $df_{p}$ to have rank equal to $\dim X$ that you "need" $\dim X\le \dim Y$. I think you can find in classical differential topology/manifold books (like Boothby's) that an immersion is locally the inclusion map $$\mathbb{R}^{n}\times \{0\}\rightarrow \mathbb{R}^{m+n}$$ You can attempt to prove this via the inverse function theorem or the implicit function theorem. The proof is quite standard.
{ "language": "en", "url": "https://math.stackexchange.com/questions/403635", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 1 }
Does $x^T(M+M^T)x \geq 0 \implies x^TMx \geq 0$ hold in only one direction? I know this is true for the "if" part, but what about the "only if"? Can you give me one example when the "only if" part does not hold? I am not quite sure about this. I forgot to tell you that $M$ is real and $x$ arbitrary.
Well, $x^{T}Mx \geq 0 \implies x^{T}M^{T}x \geq 0$, since $x^{T}Mx$ is a scalar and therefore equals its own transpose $x^{T}M^{T}x$. Hence, $x^{T}(M^{T} + M)x \geq 0$. So this gives you one side. However, you wrote the converse in your title.
{ "language": "en", "url": "https://math.stackexchange.com/questions/403695", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
First moment inequality implies tail distribution inequality? Let $U,V$ be two continuous random variables, both with continuous CDF. Suppose that $\mathbb E V \geq \mathbb E U$. Can one conclude that $\mathbb P(V> x) \geq \mathbb P(U>x)$ for all $x\geq 0$? If not, what additional conditions are needed?
Here is an argument which may make you think it's not true: if $X$ is a positive random variable, then $E(X)=\int_0^{+\infty}P(X>t)\,dt$. The condition $P(V>x)\geqslant P(U>x)$ for every $x$ is therefore much stronger than $E(V)\geqslant E(U)$.
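To make the hint concrete, here is one explicit counterexample (my own choice of distributions, not from the answer): take $U$ uniform on $[0,1]$ and $V$ uniform on $[0.4,0.7]$; then $\mathbb EV = 0.55 \ge 0.5 = \mathbb EU$, yet $\mathbb P(V>0.8)=0 < 0.2 = \mathbb P(U>0.8)$. The condition $\mathbb P(V>x)\ge\mathbb P(U>x)$ for all $x$ is exactly first-order stochastic dominance, which is strictly stronger than comparing means. A quick simulation:

```python
import random

N = 200_000
U = [random.uniform(0.0, 1.0) for _ in range(N)]
V = [random.uniform(0.4, 0.7) for _ in range(N)]

print("E[U] ~", sum(U) / N)                        # ~0.50
print("E[V] ~", sum(V) / N)                        # ~0.55, so E[V] >= E[U]
print("P(U>0.8) ~", sum(u > 0.8 for u in U) / N)   # ~0.20
print("P(V>0.8) ~", sum(v > 0.8 for v in V) / N)   # 0.0: the tail inequality fails
```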
{ "language": "en", "url": "https://math.stackexchange.com/questions/403779", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Prove that certain elements are not in some ideal I have the following question: Is there a simple way to prove that $x+1 \notin \langle2, x^2+1\rangle_{\mathbb{Z}[x]}$ and $x-1 \notin \langle2, x^2+1\rangle_{\mathbb{Z}[x]}$ without using the fact that $\mathbb{Z}[x]/\langle2, x^2+1\rangle$ is an integral domain? Thanks for the help.
First a comment: it looks like the goal of the problem is to show that $(2,x^2+1)$ isn't a prime ideal (and hence the quotient isn't a domain), because $(x+1)^2=x^2+1+2x$ is in the ideal, and we hope to show that $x+1$ isn't in the ideal. If $x+1\in (2,x^2+1)$, then we would be able to find two polynomials in $\Bbb Z[x]$, say $a$ and $b$, such that $x+1=2a+(x^2+1)b$. Looking at the equation mod 2, you get $x+1=(x^2+1)\overline{b}$, where all the terms are in $\Bbb F_2[x]$. But since $\Bbb F_2[x]$ is a domain, the degrees on both sides have to match. If $\deg(b)>0$, then the degree of the right hand side would be at least 3, and even if the degree of $b$ were 0, the right hand side would have degree 2. It is impossible, then, for such an expression to be equal to $x+1$. By this contradiction, we conclude $x+1$ is not in the ideal. Finally, you can note that $x+1$ is in the ideal iff $x-1$ is, since $(x-1)+2=x+1$ and $2$ is in the ideal. Thus we have shown that $(2,x^2+1)$ is not a prime ideal, and it isn't even a semiprime ideal.
{ "language": "en", "url": "https://math.stackexchange.com/questions/403858", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
Evaluating $\int_{0}^{1} \frac{\ln^{n} x}{(1-x)^{m}} \, \mathrm dx$ On another site, someone asked about proving that $$ \int_{0}^{1} \frac{\ln^{n}x}{(1-x)^{m}} \, dx = (-1)^{n+m-1} \frac{n!}{(m-1)!} \sum_{j=1}^{m-1} (-1)^{j} s (m-1,j) \zeta(n+1-j), \tag{1} $$ where $n, m \in \mathbb{N}$, $n \ge m$, $m \ge 2$, and $s(m-1,j)$ are the Stirling numbers of the first kind. My attempt at proving $(1)$: $$ \begin{align}\int_{0}^{1} \frac{\ln^{n}x}{(1-x)^{m}} \, dx &= \frac{1}{(m-1)!} \int_{0}^{1} \ln^{n} x \sum_{k=m-1}^{\infty} k(k-1) \cdots (k-m+2) \ x^{k-m+1} \ dx \\ &= \frac{1}{(m-1)!} \sum_{k=m-1}^{\infty} k(k-1) \cdots (k-m+2) \int_{0}^{1} x^{k-m+1} \ln^{n} x \, dx \\ &= \frac{1}{(m-1)!} \sum_{k=m-1}^{\infty} k(k-1) \cdots (k-m+2) \frac{(-1)^{n} n!}{(k-m+2)^{n+1}}\\ &= \frac{1}{(m-1)!} \sum_{k=m-1}^{\infty} \sum_{j=0}^{m-1} s(m-1,j) \ k^{j} \ \frac{(-1)^{n} n!}{(k-m+2)^{n+1}} \\ &= (-1)^{n} \frac{n!}{(m-1)!} \sum_{j=0}^{m-1} s(m-1,j) \sum_{k=m-1}^{\infty} \frac{k^{j}}{(k-m+2)^{n+1}} \end{align} $$ But I don't quite see how the last line is equivalent to the right side of $(1)$. (Wolfram Alpha does say they are equivalent for at least a few particular values of $m$ and $n$.)
Substitute $x=e^{-t}$ and get that the integral is equal to $$(-1)^n \int_0^{\infty} dt \, e^{-t} \frac{t^n}{(1-e^{-t})^m} $$ Now use the expansion $$(1-y)^{-m} = \sum_{k=0}^{\infty} \binom{m+k-1}{k} y^k$$ and reverse the order of summation and integration to get $$\sum_{k=0}^{\infty} \binom{m+k-1}{k} \int_0^{\infty} dt \, t^n \, e^{-(k+1) t}$$ I then get as the value of the integral: $$\int_0^1 dx \, \frac{\ln^n{x}}{(1-x)^m} = (-1)^n\, n!\, \sum_{k=0}^{\infty} \binom{m+k-1}{k} \frac{1}{(k+1)^{n+1}}$$ Note that when $m=0$, the sum reduces to $1$; every term in the sum save that at $k=0$ is zero. Note also that this sum gives you the ability to express the integral in terms of a Riemann zeta function for various values of $m$, which will provide the Stirling coefficients.
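A quick numerical cross-check of the final identity (a sketch of mine using mpmath, not part of the answer), comparing direct quadrature of the integral against the series, for a few pairs $(n,m)$ with $n \ge m$:

```python
from mpmath import mp, quad, log, nsum, binomial, factorial, inf

mp.dps = 30

def lhs(n, m):
    return quad(lambda x: log(x) ** n / (1 - x) ** m, [0, 1])

def rhs(n, m):
    s = nsum(lambda k: binomial(m + k - 1, k) / (k + 1) ** (n + 1), [0, inf])
    return (-1) ** n * factorial(n) * s

for n, m in [(3, 2), (4, 3), (5, 2)]:
    print(n, m, lhs(n, m), rhs(n, m))  # the two columns should agree
```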
{ "language": "en", "url": "https://math.stackexchange.com/questions/403904", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 2, "answer_id": 0 }
The integral over a subset is smaller? In a previous question I had $A \subset \bigcup_{k=1}^\infty R_k$, where the $R_k$ in $\Bbb{R}^n$ are rectangles. I then proceeded to use the following inequality, which I am not really certain of: $\left|\int_A f\right| \le \left|\int_{\bigcup_{k=1}^\infty R_k} f \right|$. Does anyone know how to prove it? If it's wrong, what similar inequality should I use to prove the result here?
The inequality is not true in general (think of an $f$ that is positive on $A$ but such that it is negative outside $A$). But it does hold if $f\geq 0$. This is not an obstacle to you using it, because you would just have to split your function in its positive and negative part.
{ "language": "en", "url": "https://math.stackexchange.com/questions/403994", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Normal form of a vector field in $\mathbb {R}^2$ Edited after considering the comments Problem: What is the normal form of the vector field: $$\dot x_1=x_1+x_2^2$$ $$\dot x_2=2x_2+x_1^2$$ Solution: The eigenvalues of the matrix of the system linearised around $(0,0)$ are $2$ and $1$. We therefore have the only resonance $2=2\cdot 1+0\cdot 2$. The resonant vector monomial is $(0,x_1^2)$. The normal form is then $$\dot x_1=x_1$$ $$\dot x_2=2x_2+cx_1^2$$ Question: I believe this is correct, is it not?
I would use $y$ instead of $x$ in the normal form, since these are not the same variables. Otherwise, what you did is correct. (I don't know if the problem required the identification of a transformation between $x$ and $y$.)
{ "language": "en", "url": "https://math.stackexchange.com/questions/404062", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Proof that $1/\sqrt{x}$ is itself its sine and cosine transform As far as I understand, I have to calculate integrals $$\int_{0}^{\infty} \frac{1}{\sqrt{x}}\cos \omega x \operatorname{d}\!x$$ and $$\int_{0}^{\infty} \frac{1}{\sqrt{x}}\sin \omega x \operatorname{d}\!x$$ Am I right? If yes, could you please help me to integrate those? And if no, could you please explain me. EDIT: Knowledge of basic principles and definitions only is supposed to be used.
Let $$I_1(\omega)=\int_0^\infty \frac{1}{\sqrt{x}}\cdot \cos (\omega\cdot x)\space dx,$$ and $$I_2(\omega)=\int_0^\infty \frac{1}{\sqrt{x}}\cdot\sin (\omega\cdot x)\space dx.$$ Let $x=t^2/\omega$ such that $dx=2t/\omega\space dt$, where $t\in [0,\infty)$. It follows that $$I_1(\omega)=\frac{2}{\sqrt{\omega}}\cdot\int_0^\infty \cos (t^2)\space dt,$$ and $$I_2(\omega)=\frac{2}{\sqrt{\omega}}\cdot\int_0^\infty \sin (t^2)\space dt.$$ Recognize that both integrands are even and exploit symmetry. It follows that $$I_1(\omega)=\frac{1}{\sqrt{\omega}}\cdot\int_{-\infty}^{\infty} \cos (t^2)\space dt,$$ and $$I_2(\omega)=\frac{1}{\sqrt{\omega}}\cdot\int_{-\infty}^{\infty} \sin (t^2)\space dt.$$ Establish the equation $$I_1(\omega)-i\cdot I_2(\omega)=\frac{1}{\sqrt{\omega}}\cdot\int_{-\infty}^{\infty} (\cos (t^2)-i\cdot\sin (t^2))\space dt.$$ Applying Euler's formula in complex analysis gives $$I_1(\omega)-i\cdot I_2(\omega)=\frac{1}{\sqrt{\omega}}\cdot\int_{-\infty}^{\infty} e^{-i\cdot t^2}dt.$$ Let $t=i^{-1/2}\cdot u$ such that $dt=i^{-1/2}\space du$, where $u\in(-\infty,\infty)$: $$I_1(\omega)-i\cdot I_2(\omega)=\frac{i^{-1/2}}{\sqrt{\omega}}\cdot \int_{-\infty}^{\infty} e^{-u^2}du.$$ Evaluate the Gaussian integral: $$I_1(\omega)-i\cdot I_2(\omega)=i^{-1/2}\cdot \frac{\sqrt{\pi}}{\sqrt{\omega}}.$$ Make use of the general properties of the exponential function and logarithms in order to rewrite $i^{-1/2}$: $$I_1(\omega)-i\cdot I_2(\omega)=e^{\ln(i^{-1/2})}\cdot \frac{\sqrt{\pi}}{\sqrt{\omega}}=e^{-1/2\cdot \ln(i)}\cdot \frac{\sqrt{\pi}}{\sqrt{\omega}}.$$ In cartesian form, $i=0+i\cdot 1$. Therefore, in polar form, $i=1\cdot e^{i\cdot \pi/2}$. Taking the natural logarithm on both sides gives $\ln(i)=i\cdot\pi/2$. Substitution into the equation gives $$I_1(\omega)-i\cdot I_2(\omega)=e^{-i\cdot \pi/4}\cdot \frac{\sqrt{\pi}}{\sqrt{\omega}}.$$ Applying Euler's formula in complex analysis gives $$I_1(\omega)-i\cdot I_2(\omega)=(\cos(\pi/4)-i\cdot \sin(\pi/4))\cdot \frac{\sqrt{\pi}}{\sqrt{\omega}}=(\frac{1}{\sqrt{2}}-i\cdot \frac{1}{\sqrt{2}})\cdot \frac{\sqrt{\pi}}{\sqrt{\omega}}.$$ Expanding the terms reveals that $$I_1(\omega)=I_2(\omega)=\sqrt{\frac{\pi}{2\cdot\omega}}.$$
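If you want a numerical sanity check of the final value (my own sketch, assuming mpmath's quadosc can handle the conditionally convergent oscillatory tail by extrapolating over periods):

```python
from mpmath import mp, quadosc, cos, sin, sqrt, pi, inf, mpf

mp.dps = 15
w = mpf(3)  # an arbitrary omega > 0

Ic = quadosc(lambda x: cos(w * x) / sqrt(x), [0, inf], period=2 * pi / w)
Is = quadosc(lambda x: sin(w * x) / sqrt(x), [0, inf], period=2 * pi / w)
print(Ic, Is, sqrt(pi / (2 * w)))  # all three should agree, ~0.7236 for w = 3
```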
{ "language": "en", "url": "https://math.stackexchange.com/questions/404119", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 4, "answer_id": 3 }
What makes $5$ and $6$ so special that taking powers doesn't change the last digit? Why are $5$ and $6$ (and numbers ending with these respective last digits) the only (nonzero, non-one) numbers such that no matter what (integer) power you raise them to, the last digit stays the same? (by the way please avoid modular arithmetic) Thanks!
It's because, for any number x, if x^2 ends with x, then x raised to any positive integer power (excluding zero) will end with x. For x^2 to end with x, x(x-1) has to be a multiple of 10 raised to the number of digits in x. (Ex: if x = 5, then 10^1. If x = 25, then 10^2.) By following this procedure, I have come up with 25 and 625, which end with themselves when raised to any positive integer power excluding zero.
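A brute-force search (my own addition) confirms this and also turns up the companion family ending in 6 (76, 376, 9376, ...), which satisfies the same condition x^2 = x (mod 10^k):

```python
# k-digit numbers x whose square ends in x, i.e. x*x = x (mod 10^k)
for k in range(1, 5):
    lo, hi = 10 ** (k - 1), 10 ** k
    print(k, [x for x in range(lo, hi) if (x * x) % hi == x])
# prints: 1 [1, 5, 6] / 2 [25, 76] / 3 [376, 625] / 4 [9376]
```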
{ "language": "en", "url": "https://math.stackexchange.com/questions/404161", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "23", "answer_count": 8, "answer_id": 5 }
Hopf-Rinow Theorem for Riemannian Manifolds with Boundary I am a little rusty on my Riemannian geometry. In addressing a problem in PDE's I came across a situation that I cannot reconcile with the Hopf-Rinow Theorem. If $\Omega \subset \mathbb{R}^n$ is a bounded, open set with smooth boundary, then $\mathbb{R}^n - \Omega$ is a Riemannian manifold with smooth boundary. Since $\mathbb{R}^n - \Omega$ is closed in $\mathbb{R}^n$, it follows that $\mathbb{R}^n - \Omega$ is a complete metric space. However, the Hopf-Rinow Theorem seems to indicate that $\mathbb{R}^n - \Omega$ (endowed with the usual Euclidean metric) is not a complete metric space since not all geodesics $\gamma$ are defined for all time. Am I missing something here? Do the hypotheses of the Hopf-Rinow theorem have to be altered to accommodate manifolds with boundary?
Hopf-Rinow concerns, indeed, Riemannian manifolds with no boundary.
{ "language": "en", "url": "https://math.stackexchange.com/questions/404223", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 1, "answer_id": 0 }
Prove that $V$ is the direct sum of $W_1, W_2 ,\dots , W_k$ if and only if $\dim(V) = \sum_{i=1}^k \dim W_i$ Let $W_1,\dots,W_k$ be subspaces of a finite dimensional vector space $V$ such that $\sum_{i=1}^k W_i = V$. Prove: $V$ is the direct sum of $W_1, W_2 , \dots, W_k$ if and only if $\dim(V) = \sum_{i=1}^k \dim W_i$.
Define $B_i$ to be a basis of $W_i$ for $i \in \{1,\dots,k\}$. Since $\sum_{i=1}^k W_i = V$, we know that $\bigcup_{i=1}^k B_i$ is a spanning list of $V$. Now we add the condition $\sum_i \dim(B_i) = \sum_i \dim(W_i) = \dim(V)$. This means that $\bigcup_{i=1}^k B_i$ is a basis (a spanning list of $V$ whose length equals $\dim V$). Can you conclude the direct sum property from here? Conversely, if we define the bases $B_i$ as before and we know that the sum of the $W_i$ is direct, can you conclude (knowing that $\sum_i W_i = V$, which is given) that $\bigcup_{i=1}^k B_i$ is a basis of $V$?
{ "language": "en", "url": "https://math.stackexchange.com/questions/404290", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
What does it mean when one says "$B$ has no limit point in $A$"? If $B$ is a subset of a set $A$, what does the sentence "$B$ has no limit points in $A$" mean? I am aware that $x$ is a limit point of $A$, if for every neighbourhood $U$ of $x$, $(U-\{x\})\cap A$ is non-empty. Please let me know. Thank you.
Giving an example may be helpful for you. Example: Let $X=\mathbb R$ with usual topology. $A=\{x\in \mathbb R: x > 0\} \subseteq X$ and $B=\{1, \frac12, \frac13,... \frac1n,...\} \subset A$. It is not difficult to see that $B$ has no limit points in $A$ since the unique limit point 0 of $B$ is not in $A$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/404364", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Proving that ${n}\choose{k}$ $=$ ${n}\choose{n-k}$ I'm reading Lang's Undergraduate Analysis: Let ${n}\choose{k}$ denote the binomial coefficient, $${n\choose k}=\frac{n!}{k!(n-k)!}$$ where $n,k$ are integers $\geq0,0\leq k\leq n$, and $0!$ is defined to be $1$. Prove the following assertion: $${n\choose k}={n\choose n-k}$$ I proceeded by making the adequate substitutions: $${n\choose n-k}=\frac{n!}{\color{red}{(n-k)}!(n-\color{red}{(n-k)})!}$$ And then I simplified and achieved: $$\frac{n!}{(n-k)!-k!}$$ But I'm not sure how to proceed from here; I've noticed that this result is very similar to $\frac{n!}{k!(n-k)!}$. What should I do? I guess it has something to do with the statement about the nature of $n$ and $k$: $n,k$ are integers $\geq0,0\leq k\leq n$. So should I just change the minus sign to a plus sign and think of it as a product of $(n-k)!$? $$\frac{n!}{(n-k)!-k!}\Rightarrow\frac{n!}{(n-k)!+k!}\Rightarrow \frac{n!}{k!(n-k)!}$$ I'm in doubt because I've obtained the third result on Mathematica, but obtained the first with paper and pencil. I'm not sure if there are different rules for simplification with factorials. I'm not sure if this $(n-k)!+k!$ means a sum or a product in this case.
$$ \binom{n}{n-k}=\frac{n!}{(n-k)!\,\bigl(n-(n-k)\bigr)!}=\frac{n!}{(n-k)!\,k!}=\binom{n}{k} $$
{ "language": "en", "url": "https://math.stackexchange.com/questions/404417", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10", "answer_count": 5, "answer_id": 4 }
Onto and one-to-one Let $T$ be a linear operator on a finite dimensional inner product space $V$. If $T$ has an eigenvector, then so does $T^*$. Proof. Suppose that $v$ is an eigenvector of $T$ with corresponding eigenvalue $\lambda$. Then for any $x \in V$, $$ 0 = \langle0,x\rangle = \langle(T-\lambda I)v,x\rangle = \langle v, (T-\lambda I)^*x\rangle = \langle v,(T^*-\bar{\lambda} I)x\rangle $$ This means that $(T^*-\bar{\lambda} I)$ is not onto. WHY? (Of course the proof is not completed in here)
If you want to know why $T^{*}$ is not onto if $T$ is not onto, just observe that the matrix of $T^{*}$ is just the conjugate transpose of the matrix of $T$ and we know that a matrix and its conjugate transpose have the same rank (which is equal to the dimension of the range space).
{ "language": "en", "url": "https://math.stackexchange.com/questions/404471", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Find an integrable $g(x,y) \ge |e^{-xy}\sin x|$ I want to use Fubini theorem on $$\int_0^{A} \int_0^{\infty} e^{-xy}\sin x dy dx=\int_0^{\infty} \int_0^{A}e^{-xy}\sin x dx dy$$ Must verify that $\int_M |f|d(\mu \times \nu) < \infty$. I'm using the Lebesgue theorem, so far I've come up with $g(x,y)=e^{-y}$ but am not sure whether it's correct. My argument is that if $x\in (0,1)$ then the $\sin x$ part is going to ensure that the inequality holds.
Try $g(x,y)=x\mathrm e^{-xy}$, then $|\mathrm e^{-xy}\sin x|\leqslant g(x,y)$ for every nonnegative $x$ and $y$. Furthermore, $\int\limits_0^\infty g(x,y)\mathrm dy=1$ for every $x\gt0$ hence $\int\limits_0^A\int\limits_0^\infty g(x,y)\mathrm dy\mathrm dx=A$, which is finite.
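A rough numerical check of both claims (my sketch with mpmath; by Tonelli the order of integration for the nonnegative $g$ doesn't matter, and integrating over $y$ on the outside is the numerically robust choice):

```python
from mpmath import mp, quad, exp, sin, si, inf

mp.dps = 15
A = 2.5  # an arbitrary finite upper limit

# total mass of the dominating function g(x,y) = x*exp(-x*y): should equal A
g_tot = quad(lambda y: quad(lambda x: x * exp(-x * y), [0, A]), [0, inf])

# the swapped iterated integral of exp(-x*y)*sin(x) should equal Si(A)
swapped = quad(lambda y: quad(lambda x: exp(-x * y) * sin(x), [0, A]), [0, inf])
print(g_tot, swapped, si(A))  # expect 2.5, then two matching values
```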
{ "language": "en", "url": "https://math.stackexchange.com/questions/404549", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Product of two functions converging in $L^1(X,\mu)$ Let $f_n\to f$ in $L^1(X,\mu)$, $\mu(X)<\infty$, and let $\{g_n\}$ be a sequence of measurable functions such that $|g_n|\le M<\infty\ \forall n$ with some constant $M$, and $g_n\to g$ almost everywhere. Prove that $g_nf_n\to gf$ in $L^1(X,\mu)$. This is a question from one of my past papers, but unfortunately there is no solution provided. Here is as far as I have gotten: $$\int|gf-g_nf_n|=\int|(f-f_n)g_n+(g-g_n)f|\le\int|f-f_n||g_n|+\int|g-g_n||f|$$ $$\le M\int|f-f_n|+\int|g-g_n||f|$$ We know that $f_n\to f$ in $L^1$, so $\int|f-f_n|\to 0$, and by Lebesgue's bounded convergence theorem it follows that $\int|g-g_n|\to 0$. But I am unsure whether this also implies $\int|g-g_n||f|\to0$.
Hint: Observe that $2M|f|$ is an integrable bound for $|g_n - g|\cdot |f|$ and the latter converges a. e. to $0$. Now apply the bounded convergence theorem.
{ "language": "en", "url": "https://math.stackexchange.com/questions/404654", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Help $\lim_{k \to \infty} \frac{1-e^{-kt}}{k}=? $ What is the limit of $ \frac{1-e^{-kt}}{k}$ as $k \to \infty$? Is that just equal to $\frac{1}{\infty}=0$? Can anyone help? I am not sure if we can apply L'Hopital's rule.
HINT: If $t=0$, then $e^{-kt}=1$ and the expression is $0$ for every $k$. If $t>0, \lim_{k\to\infty}e^{-kt}=0$, so the limit is $0$. If $t<0$, write $t=-r^2$ (say); then $\lim_{k\to\infty}\frac{1-e^{-kt}}k=\lim_{k\to\infty}\frac{1-e^{kr^2}}k=\frac\infty\infty $ So, applying L'Hospital's rule, $\lim_{k\to\infty}\frac{1-e^{kr^2}}k=-r^2\cdot\lim_{k\to\infty}\frac{e^{kr^2}}1=-r^2\cdot\infty=-\infty$
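A tiny numerical illustration of the three cases (my own addition):

```python
import math

def f(k, t):
    return (1 - math.exp(-k * t)) / k

for t in (2.0, 0.0, -2.0):
    print("t =", t, [f(k, t) for k in (10, 50, 100)])
# t = 2 and t = 0 give values shrinking to 0; t = -2 blows up toward -infinity
```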
{ "language": "en", "url": "https://math.stackexchange.com/questions/404806", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 1, "answer_id": 0 }
definition of discriminant and traces of number field. Let $K=\Bbb Q [x]$ be a number field and $A$ be the ring of integers of $K$. Let $(x_1,\cdots,x_n)\in A^n$. What does $D(x_1,\cdots,x_n)$ usually mean? Is it $\det(Tr_{K/ \Bbb Q} (x_ix_j))$ or $\det(Tr_{A/ \Bbb Z} (x_ix_j))$? Or is it always the same value? I searched some definitions, but it is not explicitly stated.
I'm not entirely certain what $\operatorname{Tr}_{A/\mathbb{Z}}$ is, but the notation $D(x_1,\dots,x_n)$ or $\Delta(x_1,\dots,x_n)$ usually means the discriminant of $K$ with respect to the basis $x_1,\dots,x_n$, so I would say it's most likely the former. After all, $x_1,\dots,x_n\in A\subset K$, so it still makes sense to talk about the trace of these as elements of $K$, and that is what the definition of the discriminant of $K$ with respect to a basis is (assuming that $x_1,\dots,x_n$ are indeed a basis for $K/\mathbb{Q}$!).
{ "language": "en", "url": "https://math.stackexchange.com/questions/404880", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
If $x\equiv y \pmod{\gcd(a,b)}$, show that there is a unique $z\pmod{\text{lcm}(a,b)}$ with $z\equiv x\pmod a$ and $z\equiv y\pmod b$ If $x\equiv y \pmod{\gcd(a,b)}$, show that there is a unique $z\pmod{\text{lcm}(a,b)}$ with $z\equiv x\pmod{a}$ and $z\equiv y\pmod{b}$ What I have so far: Let $z \equiv x\pmod{\frac{a}{\gcd(a,b)}}$ and $ z \equiv y\pmod b $. Then by the chinese remainder theorem there is a unique $z\pmod{\text{lcm}(a,b)}$ which satisfies this... Is this the right approach here? I can't figure out how to get from $$z \equiv x\pmod{\frac{a}{\gcd(a,b)}}$$ what I need.
Put $d=\gcd(a,b)$ and $\delta=x\bmod d=y\bmod d$ (here "mod" is the remainder operation). Then the numbers $x'=x-\delta$, $y'=y-\delta$ are both divisible by$~d$. In terms of a new variable $z'=z-\delta$ we need to solve the system $$ \begin{align}z'&\equiv x'\pmod a,\\z'&\equiv y'\pmod b.\end{align} $$ Since $x',y',a,b$ are all divisible by $d$, any solution $z'$ will have to be as well; therefore we can divide everything by$~d$, and the system is equivalent to $$ \begin{align}\frac{z'}d&\equiv \frac{x'}d\pmod{\frac ad},\\ \frac{z'}d&\equiv \frac{y'}d\pmod{\frac bd}.\end{align} $$ Here the moduli $\frac ad,\frac bd$ are relatively prime, so by the Chinese remainder theorem there is a solution $\frac{z'}d\in\mathbf Z$, and it is unique modulo $\frac ad\times\frac bd$. Then the solutions for $z'$ will then form a single class modulo $\frac ad\times\frac bd\times d=\frac{ab}d=\operatorname{lcm}(a,b)$, and so will the solutions for $z=z'+\delta$.
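The proof is constructive and translates directly into code. Here is a sketch (the function name and the sample moduli are my own) that computes the unique $z$ modulo $\operatorname{lcm}(a,b)$ exactly as in the argument above, by reducing to an invertible coefficient modulo $b/d$:

```python
from math import gcd

def crt_pair(x, a, y, b):
    """Solve z = x (mod a) and z = y (mod b).
    Solvable iff gcd(a, b) divides x - y; the solution is unique mod lcm(a, b)."""
    d = gcd(a, b)
    if (x - y) % d:
        raise ValueError("no solution: gcd(a, b) does not divide x - y")
    # need a*k = y - x (mod b); divide by d, then a/d is invertible mod b/d
    s = pow(a // d, -1, b // d)      # modular inverse (Python 3.8+)
    k = ((y - x) // d * s) % (b // d)
    l = a * b // d                   # lcm(a, b)
    return (x + a * k) % l, l

print(crt_pair(3, 10, 7, 12))  # -> (43, 60): 43 = 3 mod 10 and 43 = 7 mod 12
```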
{ "language": "en", "url": "https://math.stackexchange.com/questions/404966", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 2 }
find a group with the property : a) Find a nontrivial group $G$ such that $G$ is isomorphic to $G \times G$. What I'm sure of is that $G$ must be infinite! But I have no idea how to get or construct such a group; I chose many $G$'s, but all of the homomorphisms were not injective. b) Find an infinite group in which every element has finite order, but for each positive integer n there is an element of order n. The group $G = (Z_1 \times Z_2 \times Z_3 \times Z_4 \times ...) $ satisfies the conditions except the one which says that every element has finite order. How can we use this group to reach the asked group?
For the second problem, you can use the subgroup of $\mathbb{Z}_1\times \mathbb{Z}_2\times \cdots\times \mathbb{Z}_n \times \cdots$ consisting of all sequences $(a_1,a_2,a_3,\dots)$ such that all but finitely many of the $a_i$ are $0$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/405042", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 4, "answer_id": 3 }
Determine the matrix of a quadratic function I'm given a quadratic form $\Phi:\mathbb{R}^3\longrightarrow\mathbb{R}$, for which we know that: * *$(0,1,0)$ and $(0,1,-1)$ are conjugated by the function *$(1,0,-1)$ belongs to the kernel *$\Phi(0,0,1)=1$ *The trace is $0$ From here, I know the matrix must be symmetric, so it will have up to six unique numbers, I have four conditions that I can apply, and I will get four equations that relate the numbers of the matrix, but I still need two to full determine it. Applying the above I get that the matrix must be of the form: $$A=\pmatrix{2c-1 & b & c\\b&-2c&-2c\\c & -2c & 1}$$ How do I determine $b$ and $c$?
Ok, solved it. The last two equations came from knowing the vector that was in the kernel, so it should be that $f_p[(1,0,-1),(x,y,z)]=0$, where $f_p$ is the polar form of $\Phi$: $f_p(u,v)=\frac{1}{2}[\Phi(u+v)-\Phi(u)-\Phi(v)]$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/405116", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Interesting Problems for NonMath Majors Sometime in the upcoming future, I will be doing a presentation as a college alumnus to a bunch of undergrads from an organization I was in during college. I did a dual major in mathematics and computer science; however, the audience that I am presenting to are not necessarily people who enjoy math. So, to get their attention, I was thinking about presenting an interesting problem in math, for example the birthday problem, and have them enjoy the field of math a little. I feel a question in the field of probability would interest them the most (due to its intuitiveness), although that's just a personal opinion. The audience studies a variety of majors, from sciences to engineering to literature and the arts. So here's my question: besides the birthday problem, are there any other interesting problems that would be easy to understand for people who have limited knowledge of calculus and would, hopefully, make them see math as an interesting subject and get their attention? (It doesn't have to be in the field of probability.)
I think geometry is the most attractive area that a "non-mathematician" can enjoy, and I believe that's the idea Serge Lang had when he prepared his encounters with high school students and his public dialogues. I refer you to these two accounts of those events: The Beauty of Doing Mathematics: Three Public Dialogues; Math!: Encounters with High School Students. I hope you can get access to these two books, because I think they might provide something helpful for you.
{ "language": "en", "url": "https://math.stackexchange.com/questions/405188", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
How can I prove $2\sup(S) = \sup (2S)$? Let $S$ be a nonempty bounded subset of $\mathbb{R}$ and $T = \{2s : s \in S \}$. Show $\sup T = 2\sup S$ Proof Consider $2s = s + s \leq \sup S + \sup S = 2\sup S $. $T \subset S$ where T is also bounded, so applying the lub property, we must have $\sup T \leq 2 \sup S$. On the other hand $2s + s - s \leq \sup T + \sup S - 3\sup S \implies 2\sup S \leq 2s + 2\sup S \leq \sup T $. Which gives the desired result. Okay I am really worried about my other direction. Especially $2\sup S \leq 2s + 2\sup S$, do I know that $2s$ is positive? Also in the beginning, how do I know that $\sup S \leq 2 \sup S$? How do I know that the supremum is positive?
You can't assume $s$ is positive, nor can you assume $\sup S$ is positive. Your proof also assumes a couple of other weird things: * *$T \subset S$ is usually not true. *$2s + s - s \le \sup T + \sup S - 3\sup S$ is not necessarily true. Why would $-s \le -3\sup S$? The first part of your proof is actually correct, ignoring the $T \subset S$ statement. What you are saying is that any element of $T$, say the element $2s$, is bounded above by $2 \sup S$; thus $2 \sup S$ is an upper bound on $T$; thus $2 \sup S \ge \sup T$ by the least upper bound property. For the second part of the proof, you need to show that $\sup T \ge 2 \sup S$. To do this, you need to show that $\frac{\sup T}{2}$ is an upper bound on $S$. This will imply $\frac{\sup T}{2} \ge \sup S$ by least upper bound property.
{ "language": "en", "url": "https://math.stackexchange.com/questions/405254", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
How to calculate: $\sum_{n=1}^{\infty} n a^n$ I've tried to calculate this sum: $$\sum_{n=1}^{\infty} n a^n$$ The point of this is to try to work out the "mean" term in an exponentially decaying average. I've done the following: $$\text{let }x = \sum_{n=1}^{\infty} n a^n$$ $$x = a + a \sum_{n=1}^{\infty} (n+1) a^n$$ $$x = a + a (\sum_{n=1}^{\infty} n a^n + \sum_{n=1}^{\infty} a^n)$$ $$x = a + a (x + \sum_{n=1}^{\infty} a^n)$$ $$x = a + ax + a\sum_{n=1}^{\infty} a^n$$ $$(1-a)x = a + a\sum_{n=1}^{\infty} a^n$$ Lets try to work out the $\sum_{n=1}^{\infty} a^n$ part: $$let y = \sum_{n=1}^{\infty} a^n$$ $$y = a + a \sum_{n=1}^{\infty} a^n$$ $$y = a + ay$$ $$y - ay = a$$ $$y(1-a) = a$$ $$y = a/(1-a)$$ Substitute y back in: $$(1-a)x = a + a*(a/(1-a))$$ $$(1-a)^2 x = a(1-a) + a^2$$ $$(1-a)^2 x = a - a^2 + a^2$$ $$(1-a)^2 x = a$$ $$x = a/(1-a)^2$$ Is this right, and if so is there a shorter way? Edit: To actually calculate the "mean" term of a exponential moving average we need to keep in mind that terms are weighted at the level of $(1-a)$. i.e. for $a=1$ there is no decay, for $a=0$ only the most recent term counts. So the above result we need to multiply by $(1-a)$ to get the result: Exponential moving average "mean term" = $a/(1-a)$ This gives the results, for $a=0$, the mean term is the "0th term" (none other are used) whereas for $a=0.5$ the mean term is the "1st term" (i.e. after the current term).
We give a mean proof, at least for the case $0\lt a\lt 1$. Suppose that we toss a coin that has probability $a$ of landing heads, and probability $1-a$ of landing tails. Let $X$ be the number of tosses until the first tail. Then $X=1$ with probability $1-a$, $X=2$ with probability $a(1-a)$, $X=3$ with probability $a^2(1-a)$, and so on. Thus $$E(X)=(1-a)+2a(1-a)+3a^2(1-a)+4a^3(1-a)+\cdots.\tag{$1$}$$ Note that by a standard convergence test, $E(X)$ is finite. Let $b=E(X)$. On the first toss, we get a tail with probability $1-a$. In that case, $X=1$. If on the first toss we get a head, it has been a "wasted" toss, and the expected number of tosses until the first tail is $1+b$. Thus $$b=(1-a)(1)+a(1+b).$$ Solve for $b$. We get $b=\dfrac{1}{1-a}$. But the desired sum $a+2a^2+3a^3+\cdots$ is $\dfrac{a}{1-a}$ times the sum in $(1)$. Thus the desired sum is $\dfrac{a}{(1-a)^2}$.
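Both the closed form and the coin-toss interpretation are easy to confirm numerically (a sketch of mine, not part of the answer):

```python
import random

a = 0.3
# truncated series vs the closed form a/(1-a)^2
print(sum(n * a**n for n in range(1, 200)), a / (1 - a) ** 2)   # both ~0.61224

# Monte Carlo estimate of E(X), the number of tosses until the first tail
def tosses():
    c = 1
    while random.random() < a:   # heads with probability a
        c += 1
    return c

N = 100_000
print(sum(tosses() for _ in range(N)) / N, 1 / (1 - a))          # both ~1.4286
```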
{ "language": "en", "url": "https://math.stackexchange.com/questions/405332", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "18", "answer_count": 4, "answer_id": 3 }
Proof identity of differential equation I would appreciate if somebody could help me with the following problem: Q: $f''(x)$ continuous in $\mathbb{R}$ show that $$ \lim_{h\to 0}\frac{f(x+h)+f(x-h)-2f(x)}{h^2}=f''(x)$$
Use L'Hospital's rule $$ \lim_{h\to 0}\frac{F(h)}{G(h)}=\lim_{h\to 0}\frac{F'(h)}{G'(h)}$$ You can use the rule if you have $\frac{0}{0}$ result. You need to apply it 2 times $$ \lim_{h\to 0}\frac{f(x+h)+f(x-h)-2f(x)}{h^2}= \lim_{h\to 0}\frac{f'(x+h)-f'(x-h)}{2h}=\lim_{h\to 0}\frac{f''(x+h)+f''(x-h)}{2}=f''(x)$$
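For intuition, the symmetric difference quotient is easy to check numerically; the error shrinks like $h^2$ for smooth $f$ (my own illustration, with $f = \sin$ so that $f''$ is known exactly):

```python
import math

f = math.sin
fpp = lambda x: -math.sin(x)   # exact second derivative for comparison

x = 0.7
for h in (1e-1, 1e-2, 1e-3, 1e-4):
    q = (f(x + h) + f(x - h) - 2 * f(x)) / h**2
    print(h, q, q - fpp(x))    # error ~ O(h^2); too small an h hits roundoff
```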
{ "language": "en", "url": "https://math.stackexchange.com/questions/405392", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 0 }
Find the polynomial $f(x)$ which have the following property Find the polynomial $p(x)=x^2+px+q$ for which $\max\{\:|p(x)|\::\:x\in[-1,1]\:\}$ is minimal. This is the 2nd exercise from a test I gave, and I didn't know how to resolve it. Any good explanations will be appreciated. Thanks!
Here's an informal argument that doesn't use calculus. Notice that $p(x)$ is congruent to $y = x^2$ (for example, simply complete the square). Now suppose that we chose our values for the coefficients $p,q$ carefully, and it resulted in producing the minimal value of $m$. Hence, we can think of the problem instead like this: By changing the vertex of $y=x^2$, what is the minimal value of $m$ such that for all $x\in [-1,1], -m \le p(x) \le m$? By symmetry, there are only two cases to consider (based on the location of the vertex). Case 1: Suppose the vertex is at $(1,-m)$ and that the parabola extends to the top left and passes through the point $(-1,m)$. Using vertex form, we have $p(x)=(x-1)^2-m$ and plugging in the second point yields $m=(-1-1)^2-m \iff 2m=4 \iff m = 2$. Case 2: Suppose that the vertex is at $(0, -m)$ and that the parabola extends to the top left and passes through the point $(-1,m)$. Using vertex form, we have $p(x)=x^2-m$ and plugging in the second point yields $m=(-1)^2-m \iff 2m = 1 \iff m = 1/2$. Since $m=1/2$ is smaller, we conclude that $\boxed{p(x)=x^2-\dfrac{1}{2}}$ is the desired polynomial.
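A crude numerical confirmation (my own sketch): the claimed minimizer achieves a max of exactly $1/2$ on $[-1,1]$, and a grid search over other coefficient pairs never beats it.

```python
import numpy as np

xs = np.linspace(-1.0, 1.0, 20_001)

def worst(p, q):
    return np.max(np.abs(xs**2 + p * xs + q))

print(worst(0.0, -0.5))   # 0.5, attained at x = -1, 0, 1

# grid search over (p, q): nothing does better than 0.5
best = min(worst(p, q) for p in np.linspace(-1, 1, 41) for q in np.linspace(-1, 1, 41))
print(best)
```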
{ "language": "en", "url": "https://math.stackexchange.com/questions/405480", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 5, "answer_id": 0 }
$y^2 - x^3$ not an embedded submanifold How can I show that the cuspidal cubic $y^2 = x^3$ is not an embedded submanifold of $\Bbb{R}^2$? By embedded submanifold I mean a topological manifold in the subspace topology equipped with a smooth structure such that the inclusion of the curve into $\Bbb{R}^2$ is a smooth embedding. I don't even know where to start, please help me. All the usual tricks I know of removing a point from a curve and seeing what happens don't work. How can I extract information about the cusp to conclude it is not? Also, can I put a smooth structure on it so it is an immersed submanifold? Thanks.
It is better to view $y$ as the independent variable and $x=y^{2/3}$. Since $2/3<1$, this has infinite slope at the origin for positive $y$ and infinite negative slope for negative $y$. Hence the origin is not a smooth point of this graph, which is therefore not a submanifold.
{ "language": "en", "url": "https://math.stackexchange.com/questions/405563", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 1 }
Pigeon Hole Principle; 3 know each other, or 3 don't know each other I found another question in my textbook. It seems simple, but the hardest part is to prove it. Here is the question: There are six persons in a party. Prove that either 3 of them recognize each other or 3 of them don't recognize each other. I heard the answer uses the Pigeonhole Principle, but I have no idea how to use it. Could somebody please tell me the way to solve it? Thanks for the attention, and sorry for the bad English and my messy post.
NOTE: in my answer I assume that the relation "know someone" is symmetric (i.e., A knows B if and only if B knows A). If this relation is not symmetric for you, then (I did not really check it, but) I believe the statement is not true. Choose a person A at the party. The following two situations are possible: (CASE 1) A knows at least three people, say B, C and D, at the party; (CASE 2) A doesn't know at least three people at the party. In (CASE 1), if at least a pair among {B,C}, {C,D} or {D,B} is formed by people that know each other, then you have three people that know each other (the pair together with A). If there is no such pair among these three, then B, C and D are three people that do not know each other. In (CASE 2) proceed similarly.
{ "language": "en", "url": "https://math.stackexchange.com/questions/405636", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Sum of Random Variables... Imagine we repeat the following loop some thousands of times:

```
array = []
for n = 1:10000
    k = 0
    while unifrnd(0,1) < 0.3
        k = k + 1
    end
    if k ~= 0
        array = [array, k]
    end
end
```

where "unifrnd(0,1)" means a random number drawn from the uniform distribution on the unit interval. My question is then: what is the value of k that is observed most often (except for k = 0)? And is that the expectation of k? Thanks very much
It appears you exit the loop the first time the random number is greater than $0.3$. In that case, the most probable value for $k$ is $0$. It occurs with probability $0.7$. The next most probable is $1$, which occurs with probability $0.3 \cdot 0.7$, because you need the first random number to be less than $0.3$ and the second to be greater than $0.3$ (which happens with probability $0.7$). In general, the probability of a value $k$ is $0.3^k\cdot 0.7$
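A direct simulation of the loop (my own Python transcription of the MATLAB-style pseudocode) matches this geometric distribution:

```python
import random
from collections import Counter

counts = Counter()
N = 100_000
for _ in range(N):
    k = 0
    while random.random() < 0.3:
        k += 1
    counts[k] += 1

for k in range(5):
    print(k, counts[k] / N, 0.3**k * 0.7)   # empirical frequency vs 0.3^k * 0.7
```

So among the recorded nonzero values, $k=1$ is the most frequent, and it is not the expectation: for this distribution $E(k)=0.3/0.7\approx 0.43$, and conditioning on $k\neq 0$ gives $1+0.3/0.7\approx 1.43$.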
{ "language": "en", "url": "https://math.stackexchange.com/questions/405699", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 4, "answer_id": 0 }
Distribution of $pX+(1-p) Y$ We have two independent, normally distributed RV's: $$X \sim N(\mu_1,\sigma^2_1), \quad Y \sim N(\mu_2,\sigma^2_2)$$ and we're interested in the distribution of $pX+(1-p) Y, \space p \in (0,1)$. I've tried to solve this via moment generating functions. Since $$X \perp Y \Rightarrow \Psi_{X+Y}(t) = \Psi_X(t) \Psi_Y(t)$$ where for $ N(\mu,\sigma^2)$ we'll have the MGF $$\Psi(t) = \exp\{ \mu t + \frac12 \sigma^2 t^2 \}$$ After computation I've got the MGF of $pX+(1-p)Y$ as $$\Psi_{pX+(1-p) Y}(t) = \exp\{ t(p \mu_1 +(1-p)\mu_2) + \frac{t^2}{2}(p^2 \sigma_1^2 +(1-p)^2 \sigma_2^2) \}$$ which would mean $$pX+(1-p) Y \sim N(p \mu_1 +(1-p)\mu_2, p^2 \sigma_1^2 +(1-p)^2 \sigma_2^2) $$ Is my approach correct? Intuitively it makes sense and the math also adds up.
Any linear combination $aX+bY$ of independent normally distributed random variables $X$ and $Y,$ where $a$ and $b$ are constants, i.e. not random, is normally distributed. You can show that by using moment-generating functions provided you have a theorem that says only normally distributed random variables can have the same m.g.f. that a normally distributed random variable has. Alternatively, show that $aX$ and $bY$ are normally distributed, and then compute the convolution of the two normal density functions, to see that it is also a normal density function. And basic formulas concerning the mean and the variance say \begin{align} \operatorname{E}(aX+bY) & = a\operatorname{E}(X) + b\operatorname{E}(Y), \\[6pt] \operatorname{var}(aX+bY) & = a^2 \operatorname{var}(X) + b^2\operatorname{var}(Y). \end{align}
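A quick Monte Carlo check of the mean and variance formulas for the combination in the question (my own sketch with arbitrary parameter values):

```python
import random
import statistics

mu1, s1, mu2, s2, p = 1.0, 2.0, -0.5, 0.5, 0.3
N = 200_000
Z = [p * random.gauss(mu1, s1) + (1 - p) * random.gauss(mu2, s2) for _ in range(N)]

print(statistics.mean(Z), p * mu1 + (1 - p) * mu2)                  # ~ -0.05
print(statistics.variance(Z), p**2 * s1**2 + (1 - p) ** 2 * s2**2)  # ~ 0.4825
```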
{ "language": "en", "url": "https://math.stackexchange.com/questions/405769", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Choosing a bound when it can be plus or minus? I.e. $\sqrt{4}$ My textbook glossed over how to choose integral bounds when using substitution and the value is sign-agnostic. Or I missed it! Consider the definite integral: $$ \int_1^4\! \frac{6^{-\sqrt{x}}}{\sqrt x} dx $$ Let $ u = -\sqrt{x} $ such that $$ du = - \frac{1}{2\sqrt{x}} dx $$ Now, if one wishes to alter the bounds of the integral so as to avoid substituting $ - \sqrt{x} $ back in for $ u $, how is the sign of the integral's bounds determined? Because: $ u(1) = -\sqrt 1 = -(\pm 1) = \pm 1 $ and $ u(4) = -\sqrt{4} = -(\pm2) = \pm2 $ How does one determine the correct bound? My textbook selected $ -1 $ and $-2 $ without explaining the choices.
It is convention that $\sqrt{x}$ denotes the nonnegative square root of $x$. Thus, you set $u(1) = -\sqrt{1}=-1$ and $u(4) = -\sqrt{4}=-2$. The only situation where you introduce the $\pm$ signs is when you are solving an equation such as $y^2=x$, in which case both $y=+\sqrt{x}$ and $y=-\sqrt{x}$ satisfy the original equation.
{ "language": "en", "url": "https://math.stackexchange.com/questions/405899", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
How does one derive $O(n \log{n}) =O(n^2)$? I was studying time complexity where I found that time complexity for sorting is $O(n\log n)=O(n^2)$. Now, I am confused how they found out the right-hand value. According to this $\log n=n$. So, can anyone tell me how they got that value? Here is the link where I found out the result.
What this equation means is that the class $O(n\log n)$ is included in the class $O(n^2)$. That is, if a sequence is eventually bounded above by a constant times $n \log n$, it will eventually be bounded above by a (possibly different) constant times $n^2$. Can you prove this? The notation is somewhat surprising at first, yes, but you get used to it.
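To spell out the inclusion the answer invites you to prove (a standard argument, written up by me): suppose $f(n) \le C\,n\log_2 n$ for all $n \ge n_0$. Since $\log_2 n \le n$ for every $n \ge 1$, we get
$$f(n) \le C\,n\log_2 n \le C\,n\cdot n = C n^2 \qquad \text{for } n \ge \max(n_0,1),$$
so $f \in O(n^2)$ with the same constant. The inclusion is strict: $n^2/(n\log_2 n) = n/\log_2 n \to \infty$, so $n^2 \notin O(n\log n)$.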
{ "language": "en", "url": "https://math.stackexchange.com/questions/405940", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 3 }
How do you divide a complex number with an exponent term? Ok, so basically I have this: $$ \frac{3+4i}{5e^{-3i}} $$ So basically, I converted the numerator into polar form and then converted it to exponent form using Euler's formula, but I can have two possible solutions. I can have $5e^{0.927i}$ (radians) or $5e^{53.13i}$ (degrees). So my real question is: does the exponent have to be in radians or degrees? I ended up with $e^{3.927i}$.
A complex number $z=x+iy$ can be written as $z=re^{i\theta}$, where $r=|z|=\sqrt{x^2+y^2}$ is the absolute value of $z$ and $\theta=\arg{z}=\operatorname{atan2}(y,x)$ is the angle between the $x$-axis and $z$ measured counterclockwise and in radians. In this case, we have $r=5$ and $\theta=\arctan\frac{4}{3}$ (since $x>0$, see atan2 for more information), so $3+i4=5e^{i\arctan\frac{4}{3}}$ and $$\frac{3+i4}{5e^{-i3}}=\frac{5e^{i\arctan\frac{4}{3}}}{5e^{-i3}}=e^{i(\arctan\frac{4}{3}+3)}\approx e^{i3.927}.$$
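A quick check with Python's cmath (my own addition; note that cmath reports the angle in radians, in the principal range $(-\pi,\pi]$):

```python
import cmath
import math

z = (3 + 4j) / (5 * cmath.exp(-3j))
r, theta = cmath.polar(z)
print(r, theta)                             # 1.0 and about -2.3562 radians
print(math.atan2(4, 3) + 3 - 2 * math.pi)   # the same angle: 3.9273 - 2*pi
```

So the $3.927$ in the question is correct up to a multiple of $2\pi$.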
{ "language": "en", "url": "https://math.stackexchange.com/questions/406015", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 2 }
Repeatedly assigning people to subgroups so everyone knows each other Say a teacher divides his students into subgroups once every class. The profile of subgroup sizes is the same everyday (e.g. with 28 students it might be always 8 groups of 3 and 1 group of 4). How can the teacher specify the subgroup assignments for a classes so that in the shortest number of classes, everyone has been in a subgroup with everyone else? Exhaustive search seems intractable, so what would be good optimization approach? Edit: To clarify, I'm assuming the subgroup structure is given and fixed. The optimization is just the assignment of students to groups across the days.
The shortest number of classes is 1, with a single subgroup that contains all the students. The second shortest number of classes is 3, which is achieved with one small subgroup and a large one. For example, ABCDEFG/HI, ABCDEHI/FG, ABCFGHI/DE. The proof that 2 classes is impossible: let X, Y be the two largest subgroups from the first day. Each member of X must meet each member of Y on the second day, but to do this would create a subgroup larger than either X or Y.
{ "language": "en", "url": "https://math.stackexchange.com/questions/406074", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Category with endomorphisms only What do we call a category in which the only morphisms are endomorphisms? And what do we call the subcategory obtained from another category by removing all morphisms except the endomorphisms?
Every category in which every arrow is an endomorphism is a coproduct (in the category of categories) of monoids (a monoid is a category with just one object). So a category with all morphisms endormophisms is a coproduct of monoids. I'm not aware of any specific terminology for it. Clearly the category of all coproducts of monoids admits an inclusion functor into the category of all categories. The construction you describe (of removing all non-endo morphisms) describes a right adjoint to this inclusion functor.
{ "language": "en", "url": "https://math.stackexchange.com/questions/406119", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 1, "answer_id": 0 }
Invariant subspaces of a linear operator that commutes with a projection I have an assignment problem, the statement is: Let $V$ be a vector space and $P:V \to V$ be a projection. That is, a linear operator with $P^2=P.$ We set $U:= \operatorname{im} P$ and $W:= \ker P.$ Further suppose that $T:V\to V$ is a linear operator such that $TP = PT.$ Prove that $TU \subseteq U$ and $TW\subseteq W.$ Here is my attempt: Suppose $u\in U:= \operatorname{im} P$ so $u= P(v)$ for some $v\in V.$ Then $Tu = TPv = PTv \in \operatorname{im} P := U$ so $TU\subseteq U.$ Suppose $w\in W:= \ker P$ so that $Pw=0.$ Then $P (Tw) = T(Pw) = T(0)=0$ so $Tw\in \ker P := W$ so $TW\subseteq W.$ It seems fine to me, but nowhere did I use that $P$ was a projection, I only used $TP=PT.$ Is my proof okay?
Yes, that's all. $P$ doesn't have to be a projection for this particular exercise. However, we can also start from the fact that $P$ is a projection in a solution: it projects onto the subspace $U$, in the direction of $W$, and we have $U\oplus W=V$ and $P|_U={\rm id}_U$. Having these, an operator $T$ commutes with $P$ iff both $U$ and $W$ are $T$-invariant subspaces. Your proof can be reformulated for one direction, and the other direction goes as follows: if $TU\subseteq U$ and $TW\subseteq W$, then $TP(u+w)=Tu$; since $Tu\in U$ we have $Tu=PTu$, and since $Tw\in W$ we have $PTw=0$, so finally $$TP(u+w)=Tu=PTu=PTu+PTw=PT(u+w)\,.$$
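A small numerical illustration of the equivalence (my own sketch): build a projection $P$ and a $T$ that is block-diagonal in the same basis, and check both invariances.

```python
import numpy as np

rng = np.random.default_rng(0)
B = rng.normal(size=(4, 4))                     # a random basis
Binv = np.linalg.inv(B)

P = B @ np.diag([1, 1, 0, 0]) @ Binv            # projection: P @ P == P
T = B @ np.block([[rng.normal(size=(2, 2)), np.zeros((2, 2))],
                  [np.zeros((2, 2)), rng.normal(size=(2, 2))]]) @ Binv

print(np.allclose(P @ P, P), np.allclose(T @ P, P @ T))   # True True

u = P @ rng.normal(size=4)                      # u in im(P)
print(np.allclose(P @ (T @ u), T @ u))          # True: T u stays in im(P)

w = (np.eye(4) - P) @ rng.normal(size=4)        # w in ker(P)
print(np.allclose(P @ (T @ w), 0))              # True: T w stays in ker(P)
```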
{ "language": "en", "url": "https://math.stackexchange.com/questions/406199", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Convex homogeneous function Prove (or disprove) that any CONVEX function $f$, with the property that $\forall \alpha\ge 0, f(\alpha x) \le \alpha f(x)$, is positively homogeneous; i.e. $\forall \alpha\ge 0, f(\alpha x) = \alpha f(x)$.
Maybe I'm missing something, but it seems to me that you don't even need convexity. Given the property you stated, we have that, for $\alpha>0$, $$f(x)=f(\alpha^{-1}\alpha x)\leq \alpha^{-1}f(\alpha x)$$ so that $\alpha f(x)\leq f(\alpha x)$ as well. Therefore, we have that $\alpha f(x)=f(\alpha x)$ for every $\alpha>0$. The equality for $\alpha=0$ follows by continuity of $f$ at zero (which is implied by convexity).
{ "language": "en", "url": "https://math.stackexchange.com/questions/406269", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Show the points $u,v,w$ are not collinear Consider triples of points $u,v,w \in R^2$, which we may consider as single points $(u,v,w) \in R^6$. Show that for almost every $(u,v,w) \in R^6$, the points $u,v,w$ are not collinear. I think I should use Sard's Theorem, simply because that is the only "almost every" statement in differential topology I've read so far. But I have no idea how to relate this to regular value etc, and to solve this problem. Another Theorem related to this problem is Fubini Theorem (for measure zero): Let $A$ be a closed subset of $R^n$ such that $A \cap V_c$ has measure zero in $V_c$ for all $c \in R^k$. Then $A$ has measure zero in $R^n$. Thank you very much for your help!
$u,v,$ and $w$ are collinear if and only if there is some $\lambda\in\mathbb{R}$ with $w=v+\lambda(v-u)$. We can thus define a smooth function $$\begin{array}{rcl}f:\mathbb{R}^5&\longrightarrow&\mathbb{R}^6\\(u,v,\lambda)&\longmapsto&(u,v,v+\lambda(v-u))\end{array}$$ By the equivalence mentioned in the first sentence, the image of $f$ is exactly the points $(u,v,w)$ in $\mathbb{R}^6$ with $u,v,$ and $w$ collinear. Now, because $5<6$, every point in $\mathbb{R}^5$ is a critical point, so that the entire image of $f$ has measure $0$, by Sard's theorem.
{ "language": "en", "url": "https://math.stackexchange.com/questions/406334", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Topology of uniform convergence on elements of $\gamma$ Let $\gamma$ be a cover of space $X$ and consider $C_\gamma (X)$ of all continuous functions on $X$ with values in the discrete space $D=\{0,1\}$ endowed with the topology of uniform convergence on elements of $\gamma$. What does "topology of uniform convergence on elements of $\gamma$" mean?
In general we have a metric co-domain $(R,d)$, so we consider (continuous) functions from $X$ to $R$, and we have a cover $\gamma$ of $X$. A subbase for the topology of uniform convergence on elements of $\gamma$ is given by sets of the form $S(A, f, \epsilon)$, for all $f \in C(X,R)$, $A \in \gamma$, $\epsilon>0$ real, where $S(A, f, \epsilon) = \{g \in C(X,R): \forall x \in A: \, d(f(x),g(x)) < \epsilon \}$. For the cover of singletons we get the pointwise topology, and for the cover $\{X\}$ we get the uniform metric, and also the cover by all compact sets is used (topology of compact convergence). In your case we have $\{0,1\}$ as codomain, so we can just consider all $S(A,f,1)$ as subbasic sets, and those are all functions that exactly coincide with $f$ on $A$, due to the discreteness of the codomain.
{ "language": "en", "url": "https://math.stackexchange.com/questions/406403", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Given $G = (V,E)$, a planar, connected graph with cycles, Prove: $|E| \leq \frac{s}{s-2}(|V|-2)$. $s$ is the length of smallest cycle Given $G = (V,E)$, a planar, connected graph with cycles, where the smallest simple cycle is of length $s$. Prove: $|E| \leq \frac{s}{s-2}(|V|-2)$. The first thing I thought about was Euler's Formula where $v - e + f = 2$. But I really could not connect $v$, $e$ or $f$ to the fact that we have a cycle with minimum length $s$. Any direction will be appreciated, thanks!
Let's use Euler's formula for this: $n-e+f=2$, where $n$ is the number of vertices, $e$ the number of edges and $f$ the number of faces. Let $d_1,d_2,...,d_f$ be the numbers of edges on the faces of our graph. Every face is bounded by a cycle, and since our smallest cycle has length $s$, we get $d_i \geq s$ for every face. Now $d_1+d_2+...+d_f = 2e \Rightarrow s \cdot f\leq 2e$. Since $f=2-n+e$, this gives $2e\geq s(2-n+e)\Rightarrow 2e\geq 2s-sn+se\Rightarrow s(n-2) \geq e(s-2)$. Now just divide and you'll get the desired result.
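The inequality is easy to spot-check, and in contrapositive form it certifies nonplanarity (the examples below are my own; the Petersen graph violates the bound for its girth, so it cannot be planar):

```python
# check e <= s/(s-2) * (n-2) for graphs with n vertices, e edges, girth s
checks = [
    ("K4",        4,  6, 3),   # planar: 6 <= 3*(4-2) = 6
    ("cube Q3",   8, 12, 4),   # planar: 12 <= 2*(8-2) = 12
    ("Petersen", 10, 15, 5),   # 15 <= (5/3)*8 = 13.33 fails -> not planar
]
for name, n, e, s in checks:
    bound = s / (s - 2) * (n - 2)
    print(name, e, "<=", round(bound, 2), e <= bound)
```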
{ "language": "en", "url": "https://math.stackexchange.com/questions/406487", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Dual space of $H^1(\Omega)$ I'm a bit confused, why do people not define $H^1(\Omega)^*$? Instead they only say that $H^{-1}(\Omega)$ is the dual of $H^1_0(\Omega).$ $H^1(\Omega)$ is a Hilbert space so it has a well-defined dual space. Can someone explain the issue with this?
As far as I remember, one usually defines $H^{-1}(\Omega)$ to be the dual space of $H^1(\Omega)$. The reason for that is that one usually does not identify $H^1(\Omega)^*$ with $H^1(\Omega)$ (which would be possible) but instead works with a different representation. E.g. one works with the $L^2$-inner product as dual pairing between $H^{-1}(\Omega)$ and $H^1(\Omega)$ (in case the element in $H^{-1}(\Omega)$ is an $L^2$-function).
{ "language": "en", "url": "https://math.stackexchange.com/questions/406568", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 2, "answer_id": 1 }
How can $4^x = 4^{400}+4^{400}+4^{400}+4^{400}$ have the solution $x=401$? How can $4^x = 4^{400} + 4^{400} + 4^{400} + 4^{400}$ have the solution $x = 401$? Can someone explain to me how this works in a simple way?
The number you're adding is being added N times. We can infer from this that if a number is added N times, it is being multiplied by N. Now, if a number is multiplied by itself, then we have an exponentiation! $$\underbrace{N+N+\cdots+N}_{N\text{ times}} = N\times N = N^2$$ So if you're adding a number, no matter how big it is or what power it carries, and you're adding it N times, you'll end up with $$\underbrace{N^k+N^k+\cdots+N^k}_{N\text{ times}} = N^k\times N = N^{k+1}$$ So, doing it with $4^{400}$, we've got $$4^{400}+4^{400}+4^{400}+4^{400} = 4^{400}\times 4 = 4^{401}$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/406642", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 6, "answer_id": 5 }
Elegant way to solve $n\log_2(n) \le 10^6$ I'm studying Thomas Cormen's Algorithms book and solving the tasks listed after each chapter. I'm curious about task 1-1, which comes right after Chapter 1. The question is: what is the best way to solve $n\lg(n) \le 10^6$, $n \in \mathbb Z$, $\lg(n) = \log_2(n)$? The simplest but longest way is substitution. Are there some elegant ways to solve this? Thank you! Some explanations: $n$ is what I should calculate (the total quantity of input elements), and $10^6$ is time in microseconds (the total algorithm running time). I should figure out $n_{\max}$.
For my money, the best way is to solve $n\log_2n=10^6$ by Newton's Method.
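A minimal Newton iteration for this (my own sketch): solve $g(n) = n\log_2 n - 10^6 = 0$, using $g'(n) = \log_2 n + 1/\ln 2$. It lands at $n \approx 62746.1$, so the largest integer with $n\lg n \le 10^6$ is $62746$:

```python
import math

def newton(target=1e6, n=2.0, tol=1e-12, iters=100):
    # solve g(n) = n*log2(n) - target = 0;  g'(n) = log2(n) + 1/ln(2)
    for _ in range(iters):
        step = (n * math.log2(n) - target) / (math.log2(n) + 1 / math.log(2))
        n -= step
        if abs(step) < tol:
            break
    return n

n = newton()
print(n, math.floor(n))                      # ~62746.08 -> n_max = 62746
print(62746 * math.log2(62746) <= 1e6)       # True
print(62747 * math.log2(62747) <= 1e6)       # False
```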
{ "language": "en", "url": "https://math.stackexchange.com/questions/406707", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 6, "answer_id": 2 }
Find a maximal ideal in $\mathbb Z[x]$ that properly contains the ideal $(x-1)$ I'm trying to find a maximal ideal in ${\mathbb Z}[x]$ that properly contains the ideal $(x-1)$. I know the relevant definitions, and that "a proper ideal $M$ in ${\mathbb Z}[x]$ is maximal iff ${\mathbb Z}[x]/M$ is a field." I think the maximal ideal I require will not be principal, but I can't find it. Any help would be appreciated. Thanks.
Hint: the primes containing $\,(x-1)\subset \Bbb Z[x]\,$ are in $1$-$1$ correspondence with the primes in $\,\Bbb Z[x]/(x-1)\cong \Bbb Z,\,$ by a basic property of quotient rings.
{ "language": "en", "url": "https://math.stackexchange.com/questions/406783", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
How to find the integral of implicitly defined function? Let $a$ and $b$ be real numbers such that $ 0<a<b$. The decreasing continuous function $y:[0,1] \to [0,1]$ is implicitly defined by the equation $y^a-y^b=x^a-x^b.$ Prove $$\int_0^1 \frac {\ln (y)} x \, dx=- \frac {\pi^2} {3ab}. $$
OK, at long last, I have a solution. Thanks to @Occupy Gezi and my colleague Robert Varley for getting me on the right track. As @Occupy Gezi noted, some care is required to work with convergent integrals. Consider the curve $x^a-x^b=y^a-y^b$ (with $y(0)=1$ and $y(1)=0$). We want to exploit the symmetry of the curve about the line $y=x$. Let $x=y=\tau$ be the point on the curve where $x=y$, and let's write $$\int_0^1 \ln y \frac{dx}x = \int_0^\tau \ln y \frac{dx}x + \int_\tau^1 \ln y \frac{dx}x\,.$$ We make a change of coordinates $x=yu$ to do the first integral: Since $\dfrac{dx}x = \dfrac{dy}y+\dfrac{du}u$, we get (noting that $u$ goes from $0$ to $1$ as $x$ goes from $0$ to $\tau$) \begin{align*} \int_0^\tau \ln y \frac{dx}x &= -\int_\tau^1 \ln y \frac{dy}y + \int_0^1 \ln y \frac{du}u \\ &= -\frac12(\ln y)^2\Big]_\tau^1 + \int_0^1 \ln y \frac{du}u = \frac12(\ln\tau)^2 + \int_0^1 \ln y \frac{du}u\,. \end{align*} Next, note that as $(x,y)\to (1,0)$ along the curve, $(\ln x)(\ln y)\to 0$ because, using $(\ln x)\ln(1-x^{b-a}) = (\ln y)\ln(1-y^{b-a})$, we have $$(\ln x)(\ln y) \sim \frac{(\ln y)^2\ln(1-y^{b-a})}{\ln (1-x^{b-a})} \sim \frac{(\ln y)^2 y^{b-a}}{a\ln y} = \frac1a y^{b-a}\ln y\to 0 \text{ as } y\to 0.$$ We now can make the "inverse" change of coordinates $y=xv$ to do the second integral. This time we must do an integration by parts first. \begin{align*} \int_\tau^1 \ln y \frac{dx}x &= (\ln x)(\ln y)\Big]_{(x,y)=(\tau,\tau)}^{(x,y)=(1,0)} + \int_0^\tau \ln x \frac{dy}y \\ & = -(\ln\tau)^2 + \int_0^\tau \ln x \frac{dy}y \\ &= -(\ln\tau)^2 - \int_\tau^1 \ln x \frac{dx}x + \int_0^1 \ln x \frac{dz}z \\ &= -\frac12(\ln\tau)^2 + \int_0^1 \ln x \frac{dz}z\,. \end{align*} Thus, exploiting the inherent symmetry, we have $$\int_0^1 \ln y\frac{dx}x = \int_0^1 \ln y \frac{du}u + \int_0^1 \ln x \frac{dz}z = 2\int_0^1 \ln x \frac{dz}z\,.$$ Now observe that \begin{multline*} x^a-x^b=y^a-y^b \implies x^a(1-x^{b-a}) = x^az^a(1-x^{b-a}z^{b-a}) \\ \implies x^{b-a} = \frac{1-z^a}{1-z^b}\,, \end{multline*} and so, doing easy substitutions, \begin{align*} \int_0^1 \ln x \frac{dz}z &= \frac1{b-a}\left(\int_0^1 \ln(1-z^a)\frac{dz}z - \int_0^1 \ln(1-z^b)\frac{dz}z\right) \\ &=\frac1{b-a}\left(\frac1a\int_0^1 \ln(1-w)\frac{dw}w - \frac1b\int_0^1 \ln(1-w)\frac{dw}w\right) \\ &= \frac1{ab}\int_0^1 \ln(1-w)\frac{dw}w\,. \end{align*} By expansion in power series, one recognizes that this dilogarithm integral gives us, at long last, $$\int_0^1 \ln y\frac{dx}x = \frac2{ab}\int_0^1 \ln(1-w)\frac{dw}w = \frac2{ab}\left(-\frac{\pi^2}6\right) = -\frac{\pi^2}{3ab}\,.$$ (Whew!!)
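Since the result is so hard-won, a numerical confirmation is reassuring (my own sketch using scipy; for $a=1$, $b=2$ the curve is just $y = 1-x$ and the integral is the classical $\int_0^1 \ln(1-x)/x\,dx = -\pi^2/6$):

```python
import math
from scipy.optimize import brentq
from scipy.integrate import quad

a, b = 1.0, 2.0                       # any 0 < a < b
tau = (a / b) ** (1.0 / (b - a))      # peak of g(y) = y^a - y^b on [0, 1]

def y_of_x(x):
    target = x**a - x**b
    if x == tau:
        return tau
    g = lambda y: y**a - y**b - target
    # the decreasing branch pairs each x with the root on the other side of tau
    return brentq(g, tau, 1.0) if x < tau else brentq(g, 1e-300, tau)

val, _ = quad(lambda x: math.log(y_of_x(x)) / x, 0, 1, points=[tau], limit=200)
print(val, -math.pi**2 / (3 * a * b))  # both ~ -1.644934 for a=1, b=2
```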
{ "language": "en", "url": "https://math.stackexchange.com/questions/406847", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "17", "answer_count": 2, "answer_id": 1 }
Evaluating the series $\sum\limits_{n=1}^\infty \frac1{4n^2+2n}$ How do we evaluate the following series: $$\sum_{n=1}^\infty \frac1{4n^2+2n}$$ I know that it converges by the comparison test. Wolfram Alpha gives the answer $1 - \ln(2)$, but I cannot see how to get it. The Taylor series of logarithm is nowhere near this series.
Hint: $\frac 1{4n^2+2n} = \frac 1{2n}-\frac 1{2n+1}$, so the series is a regrouping of the alternating harmonic series: $$\sum_{n=1}^\infty\left(\frac 1{2n}-\frac 1{2n+1}\right) = 1-\left(1-\frac 12+\frac 13-\frac 14+\cdots\right) = 1-\ln 2\,.$$
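A quick numerical spot-check of the resulting value (an illustrative Python snippet, not part of the hint; the partial sum converges like $1/4N$):

```python
import math

partial = sum(1.0 / (4*n*n + 2*n) for n in range(1, 200001))
print(partial, 1 - math.log(2))  # both approximately 0.3068528
```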
{ "language": "en", "url": "https://math.stackexchange.com/questions/406918", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12", "answer_count": 6, "answer_id": 1 }
Self-Paced Graduate Math Courses for Independent Study Does anyone know of any graduate math courses that are self-paced, for independent study? I am a high school math teacher at a charter school in Texas. While I am quite happy with where I am right now, my goal is to earn at least 18 graduate credits in math subjects so that I can teach higher-level math and be certified as a dual-credit math teacher. (A dual-credit class is an HS class where students earn high school as well as college credits at the same time.) I am aware of many online graduate classes offered by some respectable universities, but all of them are semester-based, which may not be very feasible since my full-time teaching is extremely demanding, not to mention that math is anything but a casual subject. I welcome any suggestions, even for programs from outside the US, as long as they are accredited and conducted in English. (For example, I was told that the college system in the Philippines is an exact "copy cat" of the US one.) For your information, I am quite comfortable studying independently; in fact, I took lots of prerequisite math classes successfully under this study mode. By the way, last year I took the GRE for this purpose; my verbal + quantitative score is a decent 1200 under the old scoring scale. Thank you very much for your time and help.
For a decent selection of grad courses, and to whet your appetite, try MIT OpenCourseWare: ocw.mit.edu.
{ "language": "en", "url": "https://math.stackexchange.com/questions/406960", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "17", "answer_count": 5, "answer_id": 4 }
A (not necessarily continuous) function on a compact metric space attaining its maximum. I am studying for an exam and my study partners and I are having a dispute about my reasoning for $f$ being continuous by way of open and closed pullbacks (see below). Please help me correct my thinking. Here is the problem and my proposed solution: Let $(K, d)$ be a compact metric space, and let $f: K \rightarrow \mathbb{R}$ be a function satisfying that for each $\alpha \in \mathbb{R}$ the set {$x \in K: f(x) \ge \alpha$} is a closed subset of $K$. Show that $f$ attains a maximum value on $K$. Proof: Notice that $A :=$ {$x \in K: f(x) \ge \alpha$} is precisely $f^{-1}[\alpha, \infty)$. Since $[\alpha, \infty)$ is closed in $\mathbb{R}$ and $A$ is assumed to be closed in $K$, then it follows that $f$ is continuous on $A$. On the other hand, $K-A = f^{-1}(-\infty, \alpha)$ is open in $K$ since $A$ is closed in $K$. And since $(-\infty, \alpha)$ is open in $\mathbb{R}$ and $K - A$ is open in $K$, then it follows that $f$ is continuous on $K - A$, hence $f$ is continuous on $K$. Since $K$ is compact and $f$ is continuous, then $f(K)$ is compact in $\mathbb{R}$. Compact sets in $\mathbb{R}$ are closed and bounded intervals. Thus $\sup{f(K)} = \max{f(K)} = f(x_0)$ for some $x_0 \in K$. Thus $f$ indeed attains its maximum value on $K$. $\blacksquare$
Here is a complete proof with sequential compactness: Suppose that $f$ has no maximum on $K$. Then there are two cases: Case 1: $\sup_{x \in K} f(x) = \infty$. Then for any $n \in \mathbb{N}$ there is some $x_n \in K$ such that $f(x_n) > n$. Since $K$ is compact there exists a subsequence $x_{n_k} \in K$ that converges to some $x \in K$. Let $A = \{y \in K\mid f(y) \ge f(x)+1\}$. Then $x_{n_k} \in A$ for sufficiently large $k$, but $x \not \in A$; since $x_{n_k}\to x$, the set $A$ is not closed, contradicting the hypothesis. Case 2: $\sup_{x \in K} f(x) = L \in \mathbb{R}$. Then for any $n \in \mathbb{N}$ there is some $x_n \in K$ such that $f(x_n) > L - \frac{1}{n}$. Again since $K$ is compact there exists a subsequence $x_{n_k} \in K$ that converges to some $x \in K$. Because $f$ attains no maximum, we have $f(x) < L$. Let $A = \{y \in K\mid f(y) \ge \frac{f(x)+L}{2}\}$. Then $x_{n_k} \in A$ for sufficiently large $k$ (since $f(x_{n_k}) > L-\frac1{n_k} > \frac{f(x)+L}{2}$ eventually), but again $x \not \in A$. Hence $A$ is not closed as in Case 1, again contradicting the hypothesis. $\blacksquare$
{ "language": "en", "url": "https://math.stackexchange.com/questions/407031", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 3, "answer_id": 1 }
How to solve $x\log(x) = 10^6$ I am trying to solve $$x\log(x) = 10^6$$ but can't find an elegant solution. Any ideas?
You won't find a "nice" answer, since this is a transcendental equation (no "algebraic" solution). There is a special function related to this called the Lambert W-function, defined by $ \ z = W(z) \cdot e^{W(z)} \ $ . The "exact" answer to your equation is $ \ x = e^{W( [\ln 10] \cdot 10^6)} \ . $ (I'm assuming you're using the base-10 logarithm here; otherwise you can drop the $ \ln 10 $ factor.)
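If a numerical value is wanted, SciPy exposes the Lambert W-function directly; a small sketch under the base-10 reading of $\log$ (illustrative, not from the answer):

```python
import numpy as np
from scipy.special import lambertw

c = np.log(10) * 1e6           # x*log10(x) = 1e6  <=>  x*ln(x) = c
x = np.exp(lambertw(c).real)   # equivalently x = c / W(c); roughly 1.9e5
print(x, x * np.log10(x))      # the second value should reproduce 1e6
```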
{ "language": "en", "url": "https://math.stackexchange.com/questions/407112", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 3, "answer_id": 2 }
Integral $\frac{\sqrt{e}}{\sqrt{2\pi}}\int^\infty_{-\infty}{e^{-1/2(x-1)^2}dx}$ gives $\sqrt{e}$. How? To calculate the expectation of $e^x$ for a standard normal distribution I eventually get, via exponential simplification: $$\frac{\sqrt{e}}{\sqrt{2\pi}}\int^\infty_{-\infty}{e^{-1/2(x-1)^2}dx}$$ When I plug this into Wolfram Alpha I get $\sqrt e$ as the result. I'd like to know the integration step(s) or other means I could use to obtain this result on my own from the point I stopped. I am assuming that Wolfram Alpha "knew" an analytical solution since it presents $\sqrt e$ as a solution as well as the numerical value of $\sqrt e $. Thanks in advance!
This is because $$\int_{\Bbb R} e^{-x^2}dx=\sqrt \pi$$ Note that your shift $x\mapsto x-1$ doesn't change the value of the integral, while $x\mapsto \frac{x}{\sqrt 2}$ multiplies it by $\sqrt 2$, giving the desired result, that is, $$\int_{\Bbb R} e^{-\frac 1 2(x-1)^2}dx=\sqrt {2\pi}$$
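For the skeptical, both facts are easy to confirm numerically; an illustrative check with SciPy:

```python
import numpy as np
from scipy.integrate import quad

val, _ = quad(lambda x: np.exp(-0.5 * (x - 1)**2), -np.inf, np.inf)
print(val, np.sqrt(2 * np.pi))                    # the integral equals sqrt(2*pi)
print(np.sqrt(np.e) * val / np.sqrt(2 * np.pi))   # so the expectation is sqrt(e) ~ 1.6487
```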
{ "language": "en", "url": "https://math.stackexchange.com/questions/407237", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Second pair of matching birthdays The "birthday problem" is well-known and well-studied. There are many versions of it and many questions one might ask. For example, "how many people do we need in a room to obtain at least a 50% chance that some pair shares a birthday?" (Answer: 23) Another is this: "Given $M$ bins, what is the expected number of balls I must toss uniformly at random into bins before some bin will contain 2 balls?" (Answer: $\sqrt{M \pi/2} +2/3$) Here is my question: what is the expected number of balls I must toss into $M$ bins to get two collisions? More precisely, how many expected balls must I toss to obtain the event "ball lands in occupied bin" twice? I need an answer for very large $M$, so solutions including summations are not helpful. Silly Observation: The birthday problem predicts we need about 25 US Presidents for them to share a birthday. It actually took 28 presidents to happen (Harding and Polk were both born on Nov 2). We see from the answers below that after about 37 US Presidents we should have a 2nd collision. However Obama is the 43rd and it still hasn't happened (nor would it have happened if McCain had won or Romney had won; nor will it happen if H. Clinton wins in 2016).
Suppose there are $n$ people, and we want to allow $0$ or $1$ collisions only. $0$ collisions is the birthday problem: $$\frac{M^{\underline{n}}}{M^n}$$ For 1 collision, we first choose which two people collide, ${n\choose 2}$, then the 2nd person must agree with the first $\frac{1}{M}$, then avoid collisions for the remaining people, getting $${n \choose 2}\frac{M^{\underline{n-1}}}{M^{n}}$$ Hence the desired answer is $$1-\frac{M^{\underline{n}}}{M^n}-{n \choose 2}\frac{M^{\underline{n-1}}}{M^{n}}$$ or $$ 1-\frac{M^{\underline{n-1}}(M-n+1+{n\choose 2})}{M^n}$$ When $M=365$, the minimum $n$ to get at least a 50% chance of more than 1 collision is $n=36$.
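The closed form is cheap to evaluate with exact rational arithmetic; a sketch in Python (function names are mine) that recovers $n=36$ for $M=365$:

```python
from fractions import Fraction
from math import comb, perm

def p_at_least_two_collisions(n, M):
    # 1 - P(0 collisions) - P(exactly 1 collision), per the formula above;
    # perm(M, n-1) is the falling factorial M^(n-1)
    return 1 - Fraction(perm(M, n - 1) * (M - n + 1 + comb(n, 2)), M**n)

M = 365
n = next(n for n in range(2, M) if p_at_least_two_collisions(n, M) >= Fraction(1, 2))
print(n)  # 36, matching the answer above
```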
{ "language": "en", "url": "https://math.stackexchange.com/questions/407307", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 2, "answer_id": 0 }
Permutations and Combinations with conditions Hello :) This is a question about permutations, but with conditions. 2 boys and 4 girls are to be arranged in a straight line. In how many ways can this be done if the two boys must be separated? (The order matters.) Thank you.
Total number of ways of arranging the people = 6!

Cases when the boys are together, treating the pair as a single block:

(2B) G G G G
G (2B) G G G
G G (2B) G G
G G G (2B) G
G G G G (2B)

Each of the above patterns can be filled in 2 * 4! ways: 4! orders for the girls, and a factor of 2 because the two boys can be interchanged within the block. So the boys are together in 5 * 2 * 4! = 2 * 5! = 240 arrangements.

Number of ways to separate the boys = 6! - 2 * 5! = 720 - 240 = 480
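Since there are only $6!=720$ arrangements, the answer can also be verified by brute force (illustrative Python):

```python
from itertools import permutations

people = ['B1', 'B2', 'g1', 'g2', 'g3', 'g4']
count = sum(1 for p in permutations(people)
            if abs(p.index('B1') - p.index('B2')) > 1)  # boys not adjacent
print(count)  # 480
```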
{ "language": "en", "url": "https://math.stackexchange.com/questions/407398", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 2 }
Smallest projective subspace containing a degree $d$ curve Is it true that the smallest projective subspace containing a degree $d$ curve inside $\mathbb{P}^n$ has dimension at most $d$? If not, is there any bound on the dimension? Generalization to varieties? For $d=1$ this is obvious. I think for the case that the curve is an embedding of $\mathbb{P}^1$ this is also true: Suppose the embedding is given by $n$ degree $d$ homogeneous polynomials $f_0,\dots,f_n$. For each $0\leq i\leq d$, let $p_i=(c_{0,i},\dots,c_{n,i})$ where $c_{j,i}$ is the coefficient of $x^iy^{d-i}$ in $f_j$ (or we ignore $p_i$ if all $c_{j,i}$ are zero). Then the curve is contained in the projective subspace spanned by all $p_i$.
I think your observation is correct for curves. Given a curve $C$ in $\mathbb{P}^n$ that is not contained in any hyperplane (equivalently, in any proper projective subspace) of $\mathbb{P}^n$, WLOG we may assume $C$ is irreducible. Let $\tilde C$ be the normalization of $C$; then we have a regular map $\phi: \tilde C\rightarrow \mathbb{P}^n$ which is an embedding outside a finite subset of $\tilde C$. Since $C$ is not contained in any hyperplane, $\phi$ is the map induced by the linear series $L$ of all hyperplane divisors. It is clear that $\dim L=\dim \mathbb{P}^n=n$, so for any hyperplane divisor $D$, we have $h^0(\tilde C,D)\geq \dim L+1=n+1$. On the other hand, $h^0(\tilde C,D)\leq \deg(D)+1$ since $D$ is an effective divisor, so $\deg C=\deg D\geq n$. Hence a degree $d$ curve spans a projective subspace of dimension at most $d$. Any generalizations to higher dimensional varieties? Guess: for a $k$-dimensional variety $X$ embedded in $\mathbb{P}^n$, if $X$ is not contained in any hyperplane, then $\deg X\geq n-k+1$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/407501", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Basis of a basis I'm having trouble understanding the concept of coordinates in Linear Algebra. Let me give an example: Consider the following bases of $\mathbb R^2$: $S_1=\{u_1=(1,-2),u_2=(3,-4)\}$ and $S_2=\{v_1=(1,3),v_2=(3,8)\}$. Let $w=(2,3)$ be a vector with coordinates in $S_1$; then $w=2u_1+3u_2=2(1,-2)+3(3,-4)=(11,-16)$. When I tried to find the coordinates of $w$ in $S_2$, I ran into the following problem: which basis does $(11,-16)$ belong to? I suppose the same one as $u_1$ and $u_2$, but which basis do $u_1$ and $u_2$ belong to? And if I discover the basis of $u_1$ and $u_2$, what's the basis of the basis of $u_1$ and $u_2$? I ran into an infinite-recurrence problem and was stuck there. Maybe I'm making things more complicated than they are, but it seems that there is a deeper, philosophical question inside this doubt: I couldn't see what a coordinate really is. I would be very grateful if anyone could help me with this.
The basis for everything, unless specified, is the standard basis $\{\textbf{e}_1=(1,0),\textbf{e}_2=(0,1)\}$
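Concretely, converting $w=(11,-16)$ (standard coordinates) into $S_2$-coordinates means solving a small linear system; a sketch with NumPy (the vectors come from the question):

```python
import numpy as np

# columns are v1 = (1, 3) and v2 = (3, 8), written in the standard basis
S2 = np.array([[1.0, 3.0],
               [3.0, 8.0]])
w_std = np.array([11.0, -16.0])   # w in standard coordinates

coords = np.linalg.solve(S2, w_std)
print(coords)                      # [-136.  49.], i.e. w = -136*v1 + 49*v2
```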
{ "language": "en", "url": "https://math.stackexchange.com/questions/407563", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 2 }
An interesting problem using the Pigeonhole principle I saw this problem: Let $A \subset \{1,2,3,\cdots,2n\}$ with $|A|=n+1$. Show that there exist $a,b \in A$ with $a \neq b$ such that $a$ and $b$ are coprime. I proved this one very easily by using the pigeonhole principle on the partition $\{1,2\},\{3,4\},\dots,\{2n-1,2n\}$. My question is: how can I prove or disprove the following? Let $A \subset \{1,2,3,\cdots,2n\}$ with $|A|=n+1$. Show that there exist $a,b \in A$ with $a \neq b$ and $a\mid b$. I can't find a suitable partition. Is this true?
Any number from the set $\{1,2,\dots,2n\}$ is of the form $2^{k}l$ where $k\ge 0$ and $l$ is odd with $1\le l\le 2n-1$. The number of odd numbers $l\le 2n-1$ is $n$. Now if we select $(n+1)$ numbers from $\{1,2,\dots,2n\}$, then there must be two numbers (among the selected ones) with the same odd part $l$. That is, we get $a,b$ with $a=2^{k_1}l$ and $b=2^{k_2}l$; as $a\ne b$ we have $k_1\ne k_2$. Now if $k_1>k_2$ then $b|a$, else $a|b$. This completes the proof.
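The statement is also easy to spot-check by brute force for small $n$ (an illustrative Python snippet):

```python
from itertools import combinations

def every_big_subset_has_divisible_pair(n):
    # check all (n+1)-subsets of {1, ..., 2n}; pairs (a, b) come out with a < b
    return all(any(b % a == 0 for a, b in combinations(A, 2))
               for A in combinations(range(1, 2*n + 1), n + 1))

print(all(every_big_subset_has_divisible_pair(n) for n in range(1, 8)))  # True
```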
{ "language": "en", "url": "https://math.stackexchange.com/questions/407648", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Is $C_2$ the correct Galois Group of $f(x)= x^3+x^2+x+1$? Let $\operatorname{f} \in \mathbb{Q}[x]$ where $\operatorname{f}(x) = x^3+x^2+x+1$. This is, of course, a cyclotomic polynomial. The roots are the fourth roots of unity, except $1$ itself. I get $\mathbb{Q}[x]/(\operatorname{f}) \cong \mathbb{Q}(\pm 1, \pm i) \cong \mathbb{Q}(i) = \{a+bi : a,b \in \mathbb{Q}\}.$ Let $\alpha : \mathbb{Q}(i) \to \mathbb{Q}(i)$ be a $\mathbb{Q}$-automorphism. We have: $$\alpha(a+bi) = \alpha(a)+\alpha(bi) = \alpha(a)+\alpha(b)\alpha(i) = a+b\alpha(i).$$ Since $\alpha(i)^2 = \alpha(i)\alpha(i) = \alpha(ii) = \alpha(-1)=-1$ we have $\alpha(i) = \pm\sqrt{-1} = \pm i$. There are then two $\mathbb{Q}$-automorphisms: the identity with $\alpha(z)=z$ and the conjugate $\alpha(z)=\overline{z}$. This tells me that the Galois Group is $S_2=\langle(12)\rangle.$ I've been using GAP software, and it says that the Galois Group is $\langle(13)\rangle$. I can see that $\langle(12)\rangle \cong \langle(13)\rangle$. However, $\langle(13)\rangle < S_3$. My suspicion is that this is because $x^3+x^2+x+1$ is reducible over $\mathbb{Q}$: $x^3+x^2+x+1 \equiv (x+1)(x^2+1)$. Is GAP telling me that the Galois Group of $x^3+x^2+x+1$ is $C_1\times C_2$? How should I think about the Galois Group of $x^3+x^2+x+1$? Is it $C_2$, is it a subgroup of $S_3$ which is isomorphic to $C_2$, or is it the product $C_1 \times C_2$? I realise that these are all isomorphic, but what's the best way to think of it?
The Galois group is the group of automorphisms of the splitting field. It acts on the roots of any splitting polynomial (such as $f$) by permuting the roots. In your case, there are three roots, $-1, i, -i$, and the automorphisms must leave $-1$ fixed. Since the action is also faithful, you can view $G$ (via this action) as a subgroup of $\operatorname{Sym}(\{-1,i,-i\})$, and of course it has only one nontrivial element, the transposition $(i\ \ {-i})$ that fixes $-1$.
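The factorization behind all of this is quick to confirm with a CAS; an illustrative SymPy check (not a GAP session):

```python
from sympy import symbols, factor, roots

x = symbols('x')
f = x**3 + x**2 + x + 1
print(factor(f))  # (x + 1)*(x**2 + 1)
print(roots(f))   # the roots are -1, I, -I, each simple
```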
{ "language": "en", "url": "https://math.stackexchange.com/questions/407759", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Proving existence of a surjection $2^{\aleph_0} \to \aleph_1$ without AC I'm quite sure I'm missing something obvious, but I can't seem to work out the following problem (web search indicates that it has a solution, but I didn't manage to locate one -- hence the formulation): Prove that there exists a surjection $2^{\aleph_0} \to \aleph_1$ without using the Axiom of Choice. Of course, this surjection is very trivial using AC (well-order $2^{\aleph_0}$). I have been looking around a bit, but an obvious inroad like injecting $\aleph_1$ into $\Bbb R$ in an order-preserving way is impossible. Hints and suggestions are appreciated.
One of my favorite ways is to fix a bijection between $\Bbb N$ and $\Bbb Q$, say $q_n$ is the $n$th rational. Now we map $A\subseteq\Bbb N$ to $\alpha$ if $\{q_n\mid n\in A\}$ has order type $\alpha$ (ordered with the usual order of the rationals), and $0$ otherwise. Because every countable ordinal can be embedded into the rationals, for every $\alpha<\omega_1$ we can find a subset $\{q_i\mid i\in I\}$ which is isomorphic to $\alpha$, and therefore $I$ is mapped to $\alpha$. Thus we have a surjection onto $\omega_1$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/407833", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "15", "answer_count": 2, "answer_id": 1 }
Using integration by parts to evaluate an integral I can't understand how to solve this problem: use integration by parts to evaluate this integral: $$\int x\sin(2x + 1) \,dx$$ Can anyone solve this so I can understand how to do it? Thanks :)
$\int uv'=uv-\int u'v$. Choose $u(x):=x$, $v'(x):=\sin(2x+1)$. Then $u'(x)=1$ and $v(x)=-\frac{\cos(2x+1)}{2}$. So $$ \int x\sin(2x+1)\,dx=-x\frac{\cos(2x+1)}{2}+\int\frac{\cos(2x+1)}{2}\,dx=-x\frac{\cos(2x+1)}{2}+\frac{\sin(2x+1)}{4}+C. $$
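A CAS confirms the result; a quick SymPy check (illustrative):

```python
from sympy import symbols, sin, integrate, simplify

x = symbols('x')
F = integrate(x * sin(2*x + 1), x)
print(F)                                        # -x*cos(2*x + 1)/2 + sin(2*x + 1)/4, up to a constant
print(simplify(F.diff(x) - x * sin(2*x + 1)))   # 0, so F is a valid antiderivative
```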
{ "language": "en", "url": "https://math.stackexchange.com/questions/407899", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 1 }
How to prove/show $1- (\frac{2}{3})^{\epsilon} \geq \frac{\epsilon}{4}$, given $0 \leq \epsilon \leq 1$? How to prove/show $1- (\frac{2}{3})^{\epsilon} \geq \frac{\epsilon}{4}$, given $0 \leq \epsilon \leq 1$? I found the inequality while reading a TCS paper, where it was taken as a fact while proving some theorems. I'm not a math major, and I'm not as fluent in proving inequalities such as these as I would like to be, hence I'd like to know why this is true (it does hold over the whole range of $\epsilon$ from $0$ to $1$), and how to go about proving such inequalities in general.
One of the most helpful inequalities about the exponential is $e^t\ge 1+t$ for all $t\in\mathbb R$. Using this, $$ \left(\frac32\right)^\epsilon=e^{\epsilon\ln\frac32}\ge 1+\epsilon\ln\frac32$$ for all $\epsilon\in\mathbb R$. Under the additional assumption that $ -\frac1{\ln\frac32}\le \epsilon< 4$, multiply with $1-\frac\epsilon4$ to obtain $$\begin{align}\left(\frac32\right)^\epsilon\left(1-\frac\epsilon4\right)&\ge \left(1+\epsilon\ln\frac32\right)\left(1-\frac\epsilon4\right)\\&=1+\epsilon\left(\ln\frac32-\frac14\right)-\frac{\ln\frac32}{4}\epsilon^2\\&=1+\frac{\ln\frac32}{4}\epsilon\cdot\left(4-\frac1{\ln\frac32}-\epsilon\right).\end{align}$$ Hence $\left(\frac32\right)^\epsilon\left(1-\frac\epsilon4\right)\ge1$ and ultimately $1-\frac\epsilon4\ge \left(\frac23\right)^\epsilon$ for all $\epsilon$ with $0\le\epsilon\le 4-\frac1{\ln\frac32}\approx1.53$.
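A direct numerical check of the original claim on $[0,1]$ (illustrative NumPy):

```python
import numpy as np

eps = np.linspace(0.0, 1.0, 100001)
gap = 1 - (2/3)**eps - eps/4
print(gap.min() >= 0)   # True; the gap vanishes only at eps = 0
```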
{ "language": "en", "url": "https://math.stackexchange.com/questions/407985", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 2 }