How can I interpret $\max(X,Y)$? My textbook says: Let $X$ and $Y$ be two stochastically independent, identically distributed random variables with distribution function $F$. Define $Z = \max (X, Y)$. I don't understand what is meant by this. I hope I translated it correctly. I would conclude from this that $X=Y$, and therefore $Z=X=Y$. How can I interpret $\max(X,Y)$?
What's the problem? Max is the usual maximum of two real numbers (or of two real-valued random variables), so we can define, more explicitly, $$ Z = \begin{cases} X & \text{if $X \ge Y$} \\ Y & \text{if $Y \ge X$.} \\ \end{cases} $$ So your conclusion is most surely wrong! There is no basis for concluding that $Z=X=Y$.
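If a concrete check helps, here is a small simulation (a sketch; it assumes, purely for illustration, that $X$ and $Y$ are independent Uniform(0,1), so $F(z)=z$ and $P(Z\le z)=F(z)^2$):

```python
import random

random.seed(0)
N = 100_000
z = 0.7
count_le = count_from_x = 0
for _ in range(N):
    x, y = random.random(), random.random()
    m = max(x, y)           # this is Z = max(X, Y)
    count_le += (m <= z)
    count_from_x += (m == x)
print(count_le / N, z**2)   # both ~0.49: P(Z <= z) = F(z)^2
print(count_from_x / N)     # ~0.5: Z equals X only about half the time
```

The last line is the point: $Z$ coincides with $X$ roughly half the time and with $Y$ the other half, so $Z=X=Y$ certainly fails.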
{ "language": "en", "url": "https://math.stackexchange.com/questions/278732", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 1 }
Some weird equations In our theoretical class the professor stated that from this equation ($C$ a constant) $$ x^2 + 4Cx - 2Cy = 0 $$ we can first get: $$ x = \frac{-4C + \sqrt{16 C^2 - 4(-2Cy)}}{2} $$ and then this one: $$ x = 2C \left[\sqrt{1 + \frac{y}{2C}} -1\right] $$ How is this even possible?
Here's the algebra: $$x^2 + 4Cx - 2Cy = (x+2C)^2-4C^2 - 2Cy = 0 $$ Thus: $$ (x+2C)^2 = 4C^2 + 2Cy = 2C(2C+y). $$ Take square roots: $$ x_1 = -2C + \sqrt{2C(2C+y)} =\frac{-4C + \sqrt{16C^2 +8Cy}}{2}$$ and $$ x_2 = -2C - \sqrt{2C(2C+y)}. $$ Finally, factoring $4C^2$ out of the square root (assuming $C>0$) gives $$x_1 = -2C + 2C\sqrt{1+\frac{y}{2C}} = 2C\left[\sqrt{1 + \frac{y}{2C}} -1\right],$$ which is exactly the professor's second form.
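A quick numerical spot-check of the chain of identities (plain Python; the test pairs $(C,y)$ with $C>0$ are arbitrary choices):

```python
from math import sqrt

for C, y in [(1.0, 3.0), (2.5, 0.7), (0.3, 5.0)]:
    x1 = (-4*C + sqrt(16*C**2 + 8*C*y)) / 2          # quadratic-formula root
    x1_alt = 2*C * (sqrt(1 + y/(2*C)) - 1)           # professor's second form
    print(abs(x1 - x1_alt), x1**2 + 4*C*x1 - 2*C*y)  # both ~0
```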
{ "language": "en", "url": "https://math.stackexchange.com/questions/278780", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 3 }
Prove that any function can be represented as a sum of two injections How to prove that for any function $f: \mathbb{R} \rightarrow \mathbb{R}$ there exist two injections $g,h \in \mathbb{R}^{\mathbb{R}} \ : \ g+h=f$. Could you help me?
Note that it suffices to find an injection $g$ such that $f+g$ is also an injection, as $f$ can then be written as the sum $(f+g)+(-g)$. Such a $g$ can be constructed by transfinite recursion. Let $\{x_\xi:\xi<2^\omega\}$ be an enumeration of $\Bbb R$. Suppose that $\eta<2^\omega$, and we’ve defined $g(x_\xi)$ for all $\xi<\eta$ in such a way that $g(x_\xi)\ne g(x_\zeta)$ and $(f+g)(x_\xi)\ne(f+g)(x_\zeta)$ whenever $\xi<\zeta<\eta$. Let $$S_\eta=\{g(x_\xi):\xi<\eta\}$$ and $$T_\eta=\{(f+g)(x_\xi)-f(x_\eta):\xi<\eta\}\;.$$ Then $|S_\eta\cup T_\eta|<2^\omega$, so we may choose $g(x_\eta)\in\Bbb R\setminus(S_\eta\cup T_\eta)$. Clearly $g(x_\eta)\ne g(x_\xi)$ for $\xi<\eta$, since $g(x_\eta)\notin S_\eta$. Moreover, $g(x_\eta)\notin T_\eta$, so for each $\xi<\eta$ we have $g(x_\eta)\ne(f+g)(x_\xi)-f(x_\eta)$ and hence $(f+g)(x_\eta)\ne(f+g)(x_\xi)$. Thus, the recursion goes through to define $g(x_\xi)$ for all $\xi<2^\omega$, and it’s clear from the construction that both $g$ and $f+g$ are injective.
{ "language": "en", "url": "https://math.stackexchange.com/questions/278838", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 1, "answer_id": 0 }
Proving convergence of sequence with induction I have a sequence defined as $a_{1}=\sqrt{a}$ and $a_{n}=\sqrt{1+a_{n-1}}$ and I need to prove that it has an upper bound and therefore is convergent. So I have assumed that the sequence has a limit and by squaring I got that the limit is $\frac{1+\sqrt{5}}{2}$ only $ \mathbf{if}$ it converges. What methods are there for proving convergence? I am trying to show that $a_{n}<\frac{1+\sqrt{5}}{2}$ by induction but could use some help since I have never done an induction proof before. Progress: Step 1 (Basis): Check that it holds for the lowest possible integer: Since $a_{0}$ is not defined, the lowest possible value is $2$. $a_{2}=\sqrt{1+a_{1}}=\sqrt{1+\sqrt{a}}=\sqrt{1+\sqrt{\frac{1+\sqrt{5}}{2}}}< \frac{1+\sqrt{5}}{2}$. Step 2: Assume it holds for $k\in \mathbb{N},k\geq 3$. If we can prove that it holds for $n=k+1$ we are done, and therefore it holds for all $k$. This is where I am stuck: $a_{k+1}=\sqrt{1+a_{k}}$. I don't know how to proceed because I don't know where I am supposed to end.
Let $f(x) = \sqrt{1+x}-x$. We find that $f'(x) = \frac 1 {2\sqrt{1+x}} -1 <0$. This means that $f$ is a strictly decreasing function. Set $\phi = \frac{1+\sqrt 5}{2}$. We know that $f(\phi)=0$. We must then have that $f(x)>0$ if $x<\phi$ and $f(x)<0$ if $x>\phi$. So $a_{n+1}>a_n$ if $a_n< \phi$ and $a_{n+1} < a_n$ if $a_n > \phi$. I claim that if $a_1<\phi$ then $a_n < \phi$ for all $n$. This is proven by induction. Assume that $a_k < \phi$. Then $a_{k+1}^2 = a_k +1 < 1+\frac{1+\sqrt 5}{2}= \frac{6+2\sqrt{5}}{4} =\phi^2$. So $a_{k+1} < \phi$ and by induction we get that $a_n < \phi$ for all $n$. If $a_1<\phi$ we thus know that we get a bounded increasing sequence, and all bounded increasing sequences converge. Dealing with the case $a_1=\sqrt{a}>\phi$ is left as an exercise.
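To see the monotone convergence numerically (plain Python; the start $a=0.5$, i.e. $a_1=\sqrt{0.5}<\phi$, is an arbitrary choice):

```python
from math import sqrt

phi = (1 + sqrt(5)) / 2   # the claimed limit
a_n = sqrt(0.5)           # a_1 = sqrt(a) with a = 0.5
for n in range(2, 12):
    a_n = sqrt(1 + a_n)   # a_n = sqrt(1 + a_{n-1})
    print(n, a_n, a_n < phi)   # increasing, always below phi
print("phi =", phi)
```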
{ "language": "en", "url": "https://math.stackexchange.com/questions/278902", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 3 }
Finding a topology on $ \mathbb{R}^2 $ such that the $x$-axis is dense The problem is the following Put a topology on $ \mathbb{R}^2$ with the property that the line $\{(x,0):x\in \mathbb{R}\}$ is dense in $\mathbb{R}^2$ My attempt If $(a,b)$ is in $\mathbb{R}^2$, then define an open set around $(a,b)$ as the strip between $d$ and $c$, inclusive, where $c$ has the opposite sign of $b$, and $d>b$ if $b$ is positive, otherwise $d<b$. Clearly this always contains the set $ \mathbb{R}\times\{0\}$. It also obeys the topological laws, i.e., the intersection of two open sets is open, and the union of any number of open sets is open. Thanks for your help
The following generalizes all solutions (EDIT: not all solutions, just those which give a topology on $\mathbb{R}^2$ homeomorphic to the standard topology). It doesn't have much topological content, but it serves to show how basic set theory can often be used to trivialize problems in other fields. A famous example of this phenomenon is Cantor's proof of the existence of (infinitely many) transcendental numbers. So, let $X$ be a dense subset of $\mathbb{R}^2$ such that $$\left|X\right| = \left|\mathbb{R}^2\setminus X\right| = \left|\mathbb{R}^2\right|$$ For instance, $X$ could be the set of points with irrational first coordinate. Now let $f$ and $g$ be bijections: $$f : \mathbb{R}\times\{0\} \to X$$ $$g : \mathbb{R}\times (\mathbb{R}\setminus\{0\}) \to \mathbb{R}^2\setminus X$$ Then $F = f \cup g$ is a bijection from the plane to itself which maps the $x$-axis onto $X$. The topology we want consists of those sets $A$ for which $F[A]$ is open in the standard topology.
{ "language": "en", "url": "https://math.stackexchange.com/questions/278968", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 3, "answer_id": 1 }
Four times the distance from the $x$-axis plus 9 times the distance from $y$-axis equals $10$. What geometric figure is formed by the locus of points such that the sum of four times the distance from the $x$-axis and nine times its distance from the $y$-axis is equal to $10$? I get $4x+9y=10$. So it is a straight line, but the given answer is a parallelogram. Can anyone tell me where my mistake is?
If $P$ has coordinates $(x,y)$, then $d(P,x\text{ axis})=|y|$ and $d(P,y\text{ axis})=|x|$. So your stated condition requires $4|y|+9|x|=10$, whose graph is a rhombus with vertices $\left(\pm\frac{10}{9},0\right)$ and $\left(0,\pm\frac{5}{2}\right)$, in particular a parallelogram.
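Since the figure itself did not survive in this copy, here is a small matplotlib sketch that reproduces it:

```python
import numpy as np
import matplotlib.pyplot as plt

# 4|y| + 9|x| = 10  =>  y = +/-(10 - 9|x|)/4 for |x| <= 10/9
x = np.linspace(-10/9, 10/9, 400)
y = (10 - 9 * np.abs(x)) / 4
plt.plot(x, y, "b", x, -y, "b")
plt.axhline(0, color="gray", lw=0.5)
plt.axvline(0, color="gray", lw=0.5)
plt.gca().set_aspect("equal")
plt.title(r"$4|y|+9|x|=10$")
plt.show()
```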
{ "language": "en", "url": "https://math.stackexchange.com/questions/279077", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Neighborhoods of half plane Define $H^n = \{(x_1, \dots, x_n)\in \mathbb R^n : x_n \ge 0\}$, $\partial H^n = \{(x_1, \dots, x_{n-1},0) : x_i \in \mathbb R\}$. $\partial H^n$ is a manifold of dimension $n-1$: As a subspace of $H^n$ it is Hausdorff and second-countable. If $U \subseteq \partial H^n$ is open in $H^n$ with the subspace topology of $\mathbb R^n$ then $f: (x_1, \dots, x_{n-1},0) \mapsto (x_1, \dots, x_{n-1}), \partial H^n \to \mathbb R^{n-1} $ is injective and continuous. Then its restriction to $U$ is a homeomorphism by invariance of domain. I am asked to show that a neighborhood $U'$ of $x \in \partial H^n$ is not homeomorphic to an open set $U \subseteq \mathbb R^n$. My try: $H^n$ carries the subspace topology, so a set $U' \subseteq H^n$ is open iff it is $U \cap H^n$ for some open set $U \subseteq \mathbb R^n$. $H^n$ is closed in $\mathbb R^n$. How do I prove that $U \cap H^n$ can't be open? Thank you for correcting me.
Suppose $U'$ is open in $\mathbb R^n$ and $U' \ni x \in \partial H^n$. Then $U'$ must contain an open ball $B$ around $x$ and so there must exist a point $y \in B$ with $y_n < 0$ and therefore $U' \not \subset H^n$. For the more general argument, suppose $\phi: U' \to \mathbb R^n$ is a homeomorphism. Then there is an open ball $B \subset \mathbb R^n$ containing $\phi(x)$ and so there is also a ball $B' \subset \phi^{-1}(B)$ that contains $x$. So we are done by the first paragraph.
{ "language": "en", "url": "https://math.stackexchange.com/questions/279198", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Do we have such a direct product decomposition of Galois groups? Let $L = \Bbb{Q}(\zeta_m)$ where we write $m = p^k n$ with $p$ a prime of $\Bbb{Z}$ and $(p,n) = 1$, and let $P$ be any prime of $\mathcal{O}_L$ lying over $p$. Notation: We write $E = E(P|p)$ to denote the inertia group and $D = D(P|p)$ the decomposition group. Now we have a tower of fields $$\begin{array}{c} L \\ | \\ L^E \\ | \\ L^D \\ | \\ \Bbb{Q}\end{array}$$ and it is clear that $L^E = \Bbb{Q}(\zeta_n)$ so that $E \cong (\Bbb{Z}/p^k\Bbb{Z})^\times$. My question is: I want to identify the decomposition group $D$. So this got me thinking: Do we have a decomposition into direct products $$D \cong E \times D/E?$$ This would be very convenient because $D/E$ is already known to be finite cyclic of order $f$, while $E$ I have already stated above. Note it is not necessarily given that $(e,f) = 1$, so we can't invoke Schur-Zassenhaus or anything like that.
You can write your $L$ as the compositum of $\mathbb{Q}(\zeta_{p^k})$ and $\mathbb{Q}(\zeta_{n})$. Since $(p,n)=1$, the two are disjoint over $\mathbb{Q}$, and so the Galois group of $L$ is isomorphic to the direct product of the two Galois groups, one of which is $E$. Let's call the other subgroup $H$. Now, every element $g$ of $G$ is uniquely a product of an element $\epsilon$ of $E$ and an element $h$ of $H$. $E$ is contained in $D$, so $\epsilon h$ fixes $P$ if and only if $h$ does. In other words, the decomposition group $D$ is generated by $E$ and the decomposition group of $P$ in ${\rm Gal}(\mathbb{Q}(\zeta_{p^kn})/\mathbb{Q}(\zeta_{p^k}))=H$, so is indeed a direct product.
{ "language": "en", "url": "https://math.stackexchange.com/questions/279264", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Confusion related to integral of a Gaussian I am a bit confused about calculating the integral of a Gaussian $$\int_{-\infty}^{\infty}e^{-x^{2}+bx+c}\:dx=\sqrt{\pi}e^{\frac{b^{2}}{4}+c}$$ Given above is the integral of a Gaussian. The integral of a Gaussian is Gaussian itself. But what is the mean and variance of this Gaussian obtained after integration?
The question is only meaningful if $\Im{b} \ne 0$. Let's say that, rather, $\Re{b} = 0$ and $b = i B$. Now you can assign a mean/variance to the resulting Gaussian. This, BTW, is related to the well-known fact that a Fourier transform of a Gaussian is a Gaussian.
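Side note: the closed form displayed in the question holds for any $b$ for which the integral converges; here is a quick numerical check for a real $b$ (a sketch assuming scipy is available; the test values are arbitrary):

```python
import numpy as np
from scipy.integrate import quad

b, c = 1.3, -0.2   # arbitrary real test values
val, err = quad(lambda x: np.exp(-x**2 + b*x + c), -np.inf, np.inf)
print(val)                                  # numerical integral
print(np.sqrt(np.pi) * np.exp(b**2/4 + c))  # closed form; should agree
```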
{ "language": "en", "url": "https://math.stackexchange.com/questions/279326", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Compute $\lim_{n\to\infty} \left(\sum_{k=1}^n \frac{H_k}{k}-\frac{1}{2}(\ln n+\gamma)^2\right) $ Compute $$\lim_{n\to\infty} \left(\sum_{k=1}^n \frac{H_k}{k}-\frac{1}{2}(\ln n+\gamma)^2\right) $$ where $\gamma$ - Euler's constant.
We have \begin{align} 2\sum_{k=1}^n \frac{H_k}{k} &= 2\sum_{k=1}^n \sum_{j=1}^k \frac{1}{jk} \\ &= \sum_{k=1}^n \sum_{j=1}^k \frac{1}{jk} + \sum_{k=1}^n \sum_{j=1}^k \frac{1}{jk} \\ &= \sum_{k=1}^n \sum_{j=1}^k \frac{1}{jk} + \sum_{j=1}^n \sum_{k=j}^n \frac{1}{jk}, \text{ swapping the order of summation on the second sum}\\ &= \sum_{k=1}^n \sum_{j=1}^k \frac{1}{jk} + \sum_{k=1}^n \sum_{j=k}^n \frac{1}{jk}, \text{ changing variables on the second sum}\\ &= \sum_{k=1}^n \sum_{j=1}^n \frac{1}{jk} + \sum_{k=1}^n \frac{1}{k^2} \\ &= \left(\sum_{k=1}^n \frac{1}{k} \right)^2 + \sum_{k=1}^n \frac{1}{k^2} \\ &= H_n^2+ H^{(2)}_n. \\ \end{align} Thus \begin{align*} \lim_{n\to\infty} \left(\sum_{k=1}^n \frac{H_k}{k}-\frac{1}{2}(\log n+\gamma)^2\right) &= \lim_{n\to\infty} \frac{1}{2}\left(H_n^2+ H^{(2)}_n-(\log n+\gamma)^2\right) \\ &= \lim_{n\to\infty} \frac{1}{2}\left((\log n + \gamma)^2 + O(\log n/n) + H^{(2)}_n-(\log n+\gamma)^2\right) \\ &= \frac{1}{2}\lim_{n\to\infty} \left( H^{(2)}_n + O(\log n/n) \right) \\ &= \frac{1}{2}\lim_{n\to\infty} \sum_{k=1}^n \frac{1}{k^2}\\ &= \frac{\pi^2}{12}. \end{align*}
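A quick numerical sanity check of the value $\pi^2/12 \approx 0.8224670$ (plain Python; $\gamma$ is hard-coded to double precision):

```python
from math import log, pi

gamma = 0.5772156649015329   # Euler-Mascheroni constant
H, S = 0.0, 0.0
n = 10**6
for k in range(1, n + 1):
    H += 1.0 / k   # harmonic number H_k
    S += H / k     # partial sum of H_k / k
print(S - 0.5 * (log(n) + gamma)**2)   # ~0.8224670...
print(pi**2 / 12)                      # 0.8224670334...
```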
{ "language": "en", "url": "https://math.stackexchange.com/questions/279380", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "18", "answer_count": 1, "answer_id": 0 }
A numerical inequality. If $a_1\ge a_2\ge \cdots \ge a_n\ge 0$, $b_1\ge b_2\ge \cdots \ge b_n\ge 0$ with $\sum_{j=1}^kb_j=1$ for some $1\le k\le n$. Is it true that $2\sum_{j=1}^na_jb_j\le a_1+\frac{1}{k}\sum_{j=1}^na_j$? The question above is answered in the negative below. Can one give a simple proof of the following weaker version? Under the same conditions, $\sum_{j=1}^na_jb_j\le \max\{a_1,\frac{1}{k}\sum_{j=1}^na_j\}$.
No, it isn't true. Let $a_j=b_j=1$ for $1\leq j\leq n$. Then $\sum_{j=1}^1b_j=1$, hence $k=1$, thus we've satisfied the hypotheses. However, $$2\sum_{j=1}^na_jb_j=2n\geq1+n$$ with equality if and only if $n=1$. EDIT: For the second case, let $a_j=1/10$ for all $j$, $b_1=1$ and $b_j=10$ for $j\geq 2$. Then we end up with $$n-0.9\geq n/10$$ with equality if and only if $n=1$ again.
{ "language": "en", "url": "https://math.stackexchange.com/questions/279491", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Evaluating: $\int_0^{\infty} \frac{1}{t^2} dt$ I am trying to evaluate the following integral $$\int_0^{\infty} \frac{1}{t^2} dt$$ At first it seems easy to me. I rewrite it as follows: $$\lim_{b \to \infty} \int_0^{b} \frac{1}{t^2} dt$$ and integrate $\frac{1}{t^2}$. I proceed as follows: $$\lim_{b\to\infty} \left[ -t^{-1}\right]_0^b$$ Then it results in $$\lim_{b\to\infty} -b^{-1}$$ and it converges to zero. However, WolframAlpha says the integral is divergent. I do not understand what is wrong with this reasoning in particular and what the right solution is. Many thanks in advance!
You have done something strange with the lower limit. In fact, your integral is generalized at both endpoints, since the integrand is unbounded near $0$. You get $$\int_0^\infty \frac{1}{t^2}\,dt = \lim_{\varepsilon \to 0^+} \int_\varepsilon^1 \frac{1}{t^2}\,dt + \lim_{b \to\infty} \int_1^b \frac{1}{t^2}\,dt$$ and while the second limit exists, the first does not.
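Numerically, using the exact antiderivative $-1/t$ for each piece (a small illustration in plain Python):

```python
# int_eps^1 dt/t^2 = 1/eps - 1 blows up as eps -> 0+,
# while int_1^b dt/t^2 = 1 - 1/b stays bounded.
for eps in (1e-1, 1e-3, 1e-6):
    print(f"integral from {eps} to 1: {1/eps - 1:.6g}")
for b in (10.0, 1e3, 1e6):
    print(f"integral from 1 to {b}: {1 - 1/b:.6g}")
```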
{ "language": "en", "url": "https://math.stackexchange.com/questions/279547", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 5, "answer_id": 0 }
How to evaluate $\lim_{n\rightarrow\infty}\int_{0}^{1}x^n f(x)dx$ How to evaluate $\lim\limits_{n\rightarrow\infty}\int_{0}^{1}x^n f(x)dx$? Well, I did one problem from Rudin's book: if $\int_{0}^{1}x^n f(x)dx=0$ for all $n\in\mathbb{N}$ then $f\equiv 0$, by the Stone-Weierstrass theorem. Please help me here.
Assuming $f$ is integrable you can use dominated convergence. If $f$ is positive, monotone convergence works too. In either case, the limit is $0$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/279622", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Linear transformation satisfies $T^n=T$; has eigenvalues? I have a linear transformation $T:V\to V$ over a field $F$ (assume $F$ finite if that is needed), which satisfies $T^n=T$. Prove that $T$ has an eigenvalue, or give a counterexample. Thanks
This is false. The matrix $$A = \left(\begin{matrix}0 & -1\\1 & 0 \end{matrix}\right)$$ satisfies $A^4 = 1$ (thus $A^5 = A$) but its characteristic polynomial $X^2+1$ has no real roots, so $A$ has no real eigenvalues. For an example over a finite field, consider $$A = \left(\begin{matrix}0 & 1\\1 & 1 \end{matrix}\right)$$ over $\mathbb F_2$. It satisfies $A^3 = 1$ (thus $A^4 = A$) but the characteristic polynomial $X^2+X+1$ has no root in $\mathbb F_2$.
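Both counterexamples are easy to confirm by brute force (a sketch using numpy for the integer matrix arithmetic):

```python
import numpy as np

# Over the reals: A^2 = -I, so A^4 = I and A^5 = A,
# while x^2 + 1 has no real roots.
A = np.array([[0, -1], [1, 0]])
print(np.array_equal(np.linalg.matrix_power(A, 5), A))      # True

# Over F_2: B^3 = I (mod 2), so B^4 = B (mod 2),
# while x^2 + x + 1 has no root in {0, 1}.
B = np.array([[0, 1], [1, 1]])
print(np.array_equal(np.linalg.matrix_power(B, 4) % 2, B))  # True
print([x for x in (0, 1) if (x*x + x + 1) % 2 == 0])        # []
```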
{ "language": "en", "url": "https://math.stackexchange.com/questions/279671", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Applications of Residue Theorem in complex analysis? Does anyone know the applications of Residue Theorem in complex analysis? I would like to do a quick paper on the matter, but am not sure where to start. The residue theorem The residue theorem, sometimes called Cauchy's residue theorem (one of many things named after Augustin-Louis Cauchy), is a powerful tool to evaluate line integrals of analytic functions over closed curves; it can often be used to compute real integrals as well. It generalizes the Cauchy integral theorem and Cauchy's integral formula. From a geometrical perspective, it is a special case of the generalized Stokes' theorem. The statement is as follows: Suppose $U$ is a simply connected open subset of the complex plane, and $a_1,\ldots,a_n$ are finitely many points of $U$ and $f$ is a function which is defined and holomorphic on $U\setminus\{a_1,\ldots,a_n\}$. If $\gamma$ is a rectifiable curve in $U$ which does not meet any of the $a_k$, and whose start point equals its endpoint, then $$\oint_\gamma f(z)\,dz=2\pi i\sum_{k=1}^n I(\gamma,a_k)\mathrm{Res}(f,a_k)$$ I'm sure many complex analysis experts are very familiar with this theorem. I was just hoping someone could enlighten me on its many applications for my paper. Thank you!
Other than as a fantastic tool to evaluate some difficult real integrals, complex integrals have many purposes. Firstly, contour integrals are used in Laurent series, generalizing real power series. The argument principle can tell us the difference between the number of roots and poles of a function inside the closed contour $C$: $$\oint_{C} {f'(z) \over f(z)}\, dz=2\pi i (\text{Number of Roots}-\text{Number of Poles})$$ and this has been used to prove many important theorems, especially relating to the zeros of the Riemann zeta function. Note that the residue of $\pi \cot (\pi z)f(z)$ at each integer $n$ is $f(n)$. Using a square contour offset from the integers by $\frac{1}{2}$, we note the contour integral vanishes as the contour gets large, and thus $$\sum_{n=-\infty}^\infty f(n) = -\pi \sum \operatorname{Res}\, \cot (\pi z)f(z)$$ where the residues are taken at the poles of $f$. While I have only mentioned a few basic uses, many, many others exist.
{ "language": "en", "url": "https://math.stackexchange.com/questions/279737", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10", "answer_count": 4, "answer_id": 1 }
Proving $\gcd(a, c) = \gcd(b, c)$ for $a + b = c^2$ I am trying to prove that, given positive integers $a, b, c$ such that $a + b = c^2$, $\gcd(a, c) = \gcd(b, c)$. I am getting a bit stuck. I have written down that $(a, c) = ra + sc$ and $(b, c) = xb + yc$ for some integers $r, s, x, y$. I am now trying to see how I can manipulate these expressions considering that $a + b = c^2$ in order to work towards $ra + sc = xb + yc$ which means $(a, c) = (b, c)$. Am I starting off correctly, or am I missing something important? Any advice would help.
I don't see a way to proceed using the approach you suggest - this doesn't mean that there isn't one (it's often a good method to work through). But I don't see it yet. But you can directly show that the same primes to the same powers divide each: Consider a prime $p$. Suppose that $p^\beta \,\|\, \gcd(a,c)$; then in particular $p^\beta \mid c^2 - a = b$, and so $p^\beta \mid \gcd(b,c)$. Suppose now that $p^\alpha \,\|\, \gcd(b,c)$. Then we know that $\alpha \geq \beta$ from this argument. Reversing the roles of $a$ and $b$ in the same argument shows that $\beta \geq \alpha$. Thus $\alpha = \beta$, and so the same primes to the same powers maximally divide $\gcd(a,c)$ and $\gcd(b,c)$. Further, this argument generalizes to cases that look like $a + b = c^n$ for any $n \geq 1$.
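The claim (including the $a+b=c^n$ generalization) is also easy to test empirically (plain Python; the ranges are arbitrary):

```python
from math import gcd
from random import randint, seed

seed(1)
for _ in range(10_000):
    c = randint(2, 1000)
    n = randint(1, 4)
    a = randint(1, c**n - 1)
    b = c**n - a               # so a + b = c^n
    assert gcd(a, c) == gcd(b, c)
print("no counterexample found")
```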
{ "language": "en", "url": "https://math.stackexchange.com/questions/279795", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 1 }
Question about limit of a product Is it possible that both $\displaystyle\lim_{x\to a}f(x)$ and $\displaystyle\lim_{x\to a}g(x)$ do not exist but $\displaystyle\lim_{x\to a}f(x)g(x)$ does exist? The reason I ask is that I was able to show that if $\displaystyle\lim_{x\to a}f(x)$ does not exist but both $\displaystyle\lim_{x\to a}g(x)$ and $\displaystyle\lim_{x\to a}f(x)g(x)$ do exist, then $\displaystyle\lim_{x\to a}g(x)=0$. However, I'm not sure whether the assumption on the existence of $\displaystyle\lim_{x\to a}g(x)$ is necessary.
Yes. As a simple example, I'll work with sequences. Let $f(n) = (-1)^n$ and $g(n) = (-1)^{n}$. Then $f(n)g(n) = 1$, so $1 = \lim_{n \to \infty} f(n)g(n)$, but neither of the individual limits exist.
{ "language": "en", "url": "https://math.stackexchange.com/questions/279851", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 2 }
prove the divergence of the Cauchy product of convergent series $a_{n}:=b_{n}:=\dfrac{(-1)^n}{\sqrt{n+1}}$ I am given these series, which converge: $a_{n}:=b_{n}:=\dfrac{(-1)^n}{\sqrt{n+1}}$ I tried the ratio test and came to $-1$, which is obviously wrong, because it must be $0<\theta<1$ so that the series converges. My steps: $\dfrac{(-1)^{n+1}}{\sqrt{n+2}}\cdot \dfrac{\sqrt{n+1}}{(-1)^{n}} = - \dfrac{\sqrt{n+1}}{\sqrt{n+2}} = - \dfrac{\sqrt{n+1}}{\sqrt{n+2}}\cdot \dfrac{\sqrt{n+2}}{\sqrt{n+2}} = - \dfrac{(n+1)\cdot (n+2)}{(n+2)\cdot (n+2)} = - \dfrac{n^2+3n+2}{n^2+4n+4} = -1 $ Did I do something wrong somewhere? And I tried to show, as the task says, that the Cauchy product diverges: $\sum_{k=0}^{n}\dfrac{(-1)^{n-k}}{\sqrt{n-k+1}}\cdot \dfrac{(-1)^{k}}{\sqrt{k+1}} = \dfrac{(-1)^n}{nk+n-k^2+1} = ..help.. = diverging $ I am stuck here on how to show that the product diverges; thanks for any help!
$\sum_{n=0}^\infty\dfrac{(-1)^n}{\sqrt{n+1}}$ is convergent by Leibniz's test, but it is not absolutely convergent (i.e. it is conditionally convergent). To show that the Cauchy product does not converge, use the inequality $$ x\,y\le\frac{x^2+y^2}{2}\quad x,y\in\mathbb{R}. $$ Then $$ \sqrt{n-k+1}\,\sqrt{k+1}\le\frac{n+2}{2} $$ and $$ \sum_{k=0}^n\frac{1}{\sqrt{n-k+1}\,\sqrt{k+1}}\ge\frac{2(n+1)}{n+2}. $$ This shows that the terms of the Cauchy product do not converge to $0$, and so the series diverges.
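One can watch this failure numerically: the Cauchy-product terms $c_n=\sum_{k=0}^n a_{n-k}a_k$ satisfy $|c_n|\ge \frac{2(n+1)}{n+2}$, so they stay away from $0$ (a small check in plain Python):

```python
from math import sqrt

for n in (10, 100, 1000):
    c_n = sum((-1)**n / (sqrt(n - k + 1) * sqrt(k + 1)) for k in range(n + 1))
    print(n, abs(c_n), 2 * (n + 1) / (n + 2))   # |c_n| >= 2(n+1)/(n+2) > 1
```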
{ "language": "en", "url": "https://math.stackexchange.com/questions/279890", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 2, "answer_id": 0 }
What is the limit of $\displaystyle\lim_{x\to 0}\frac{\tan x - \sin x}{x}$? I want to find the limit of this trigonometric function: $$\displaystyle\lim_{x\to 0}\frac{\tan x - \sin x}{x^n}$$ Note: $n \geq 1$
Checking separately the cases $\,n=1,2,3\,$, we find: $$n=1:\;\;\;\;\;\;\frac{\tan x-\sin x}{x}=\frac{1}{\cos x}\frac{\sin x}{x}(1-\cos x)\xrightarrow [x\to 0]{}1\cdot 1\cdot 0=0$$ $$n=2:\;\;\frac{\tan x-\sin x}{x^2}=\frac{1}{\cos x}\frac{\sin x}{x}\frac{1-\cos x}{x}\xrightarrow [x\to 0]{}1\cdot 1\cdot 0=0\;\;(\text{Applying L'Hospital})$$ $$n=3:\;\;\;\frac{\tan x-\sin x}{x^3}=\frac{1}{\cos x}\frac{\sin x}{x}\frac{1-\cos x}{x^2}\xrightarrow[x\to 0]{}1\cdot 1\cdot \frac{1}{2}=\frac{1}{2}\;\;(\text{Again L'H})$$ $$n\geq 4:\;\;\;\;\frac{\tan x-\sin x}{x^n}=\frac{1}{\cos x}\frac{\sin x}{x}\frac{1-\cos x}{x^2}\frac{1}{x^{n-3}}\xrightarrow[x\to 0]{}1\cdot 1\cdot \frac{1}{2}\cdot\frac{1}{\pm 0}$$ and the above either doesn't exist (if $\,n-3\,$ is odd) or is $\,\infty\,$, so for any $\,n\geq 4\,$ the limit doesn't exist in a finite form.
{ "language": "en", "url": "https://math.stackexchange.com/questions/279967", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 1 }
extension of a non-finite measure For a finite measure on a field $\mathcal{F_0}$ there always exists an extension to $\sigma(\mathcal{F_0})$. Can somebody give me an example of a non-finite measure on a field which cannot be extended to $\sigma(\mathcal{F_0})$? It would be even better if somebody could point out where, in the proof of the existence of an extension of a finite measure, the finiteness property is used.
Every measure can be extended from a field to the generated $\sigma$-algebra. The classical proof by Caratheodory does not rely on the measure being finite, so there is no such example. As Ilya mentioned in a comment, the extension may not be unique. Here is an explicit example: Let $\mathcal{F}$ be the field of subsets of $\mathbb{Q}$ generated by sets of the form $(a,b]\cap\mathbb{Q}$ with $a,b\in\mathbb{Q}$. Let $\mu(A)=\infty$ for $A\in\mathcal{F}\backslash\{\emptyset\}$. It is easy to see that $\sigma(\mathcal{F})=P(\mathbb{Q})$, the powerset. Now let $r>0$ be a real number. Then there is a unique measure $\mu_r$ such that $\mu_r\big(\{q\}\big)=r$ for all $q\in\mathbb{Q}$. So for each $r>0$, $\mu_r$ is a different extension of $\mu$. So there is a continuum of possible extensions of $\mu$ to $\sigma(\mathcal{F})$. The example is based on one in Counterexamples in Probability and Real Analysis by Wise and Hall.
{ "language": "en", "url": "https://math.stackexchange.com/questions/280022", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 0 }
Why is there always a basis for the Cartan orthonormal relative to the Killing form? I'm trying to understand a step in a proof: Let $\mathfrak{g}$ be semi-simple (finite dimensional) Lie-algebra over $\mathbb{C}$, $\mathfrak{h}\subset\mathfrak{g}$ a Cartan subalgebra and let $\kappa:\mathfrak{g}\times\mathfrak{g}\to\mathbb{C}$ be the Killing form. In this setting, the author of the proof chooses an orthonormal basis $h_1,\dots,h_n$ of $\mathfrak{h}$ relative to the Killing form, which is - to my understanding - a basis satisfying $\kappa(h_i,h_j)=\delta_{ij}$. Why is it always possible to find such an orthonormal basis? Thank you for your help!
The Killing form is symmetric and non-degenerate (Cartan's criterion). Such a bilinear form can always be diagonalized in a suitable basis, so in particular over $\mathbb{C}$, after rescaling the basis vectors, you can find an orthonormal basis.
{ "language": "en", "url": "https://math.stackexchange.com/questions/280090", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 3, "answer_id": 1 }
Number Theory and Congruency I have the following problem: $$2x+7=3 \pmod{17}$$ I know HOW to do this problem. It's as follows: $$2x=3-7\\ x=-2\equiv 15\pmod{17}$$ But I have no idea WHY I'm doing that. I don't really even understand what the problem is asking, I'm just doing what the book says to do. Can someone explain what this problem is asking and what I'm finding? Thanks
The $\bmod{17}$ congruence classes are represented by $0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16 $. You're trying to find out which of those classes $x$ belongs to. That's why you're doing that.
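You can also see this by brute force, or by multiplying by the inverse of $2$ modulo $17$ (plain Python; `pow(2, -1, 17)` needs Python 3.8+):

```python
# Which residue class x in {0, ..., 16} satisfies 2x + 7 = 3 (mod 17)?
print([x for x in range(17) if (2 * x + 7) % 17 == 3])   # [15]

# Directly: x = (3 - 7) * 2^(-1) (mod 17)
print((3 - 7) * pow(2, -1, 17) % 17)                     # 15
```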
{ "language": "en", "url": "https://math.stackexchange.com/questions/280157", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 6, "answer_id": 3 }
Lower bounds on numbers of arrangements of a Rubik's cube Last night, a friend of mine informed me that there were forty-three quintillion positions that a Rubik's Cube could be in and asked me how many there were for my Professor's Cube (5x5x5). So I gave him an upper bound: $$8!~3^8\cdot24!~2^{24}/2^{12}\cdot12!~2^{12}\cdot24!/4!^6\cdot24!/4!^6,$$ did some rough approximation, and said it was around 10^79. Then I decided I'd try to give a lower bound, and came up dry. In the Wikipedia article, we read that: * *The orientation of the eighth vertex depends on the other seven. *The 24 edge pieces that are next to vertices can't be flipped. *The twelfth central edge's orientation depends on the other eleven. *The parity of a permutation of the vertices and of the edge pieces are linked. The second point is easy to see by imagining the route that such an edge piece takes under all the various moves. (The Wikipedia article opts to give a mechanical reason instead, which is a questionable practice.) The other points are not too hard to see either. So we divide by $3\cdot2^{24}\cdot2\cdot2$ and get the answer, around $2.83\cdot10^{74}.$ My friend does not know group theory, and the only proof I know of the independence of the rest of the stuff uses group theory to a considerably greater extent than that which proves the dependent parts dependent. Can anyone think of a simpler proof of a reasonable lower bound? One that, say, at least puts the number greater than that for Rubik's Revenge (4x4x4), which is about $7.40\cdot10^{45}$?
Chris Hardwick has come up with a closed-form solution for the number of permutations for an $n\times n\times n$ cube: http://speedcubing.com/chris/cubecombos.html
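As a sanity check against the $2.83\cdot 10^{74}$ figure in the question, here is the commonly quoted closed form for the $5\times5\times5$ evaluated in Python (the formula itself is stated here without derivation and should be checked against the linked page):

```python
from math import factorial as f

# Assumed closed form: corners 8!*3^7, central edges 12!*2^10,
# wing edges 24!, and two orbits of 24 centres giving (24!/4!^6)^2.
N = f(8) * 3**7 * f(12) * 2**10 * f(24)**3 // f(4)**12
print(N)
print(f"{N:.2e}")   # ~2.83e+74, matching the figure in the question
```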
{ "language": "en", "url": "https://math.stackexchange.com/questions/280276", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
Proving $\sum\limits_{k=1}^{n}{\frac{1}{\sqrt[k+1]{k}}} \geq \frac{n^2+3n}{2n+2}$ How to prove the following inequalities without using Bernoulli's inequality? * *$$\prod_{k=1}^{n}{\sqrt[k+1]{k}} \leq \frac{2^n}{n+1},$$ *$$\sum_{k=1}^{n}{\frac{1}{\sqrt[k+1]{k}}} \geq \frac{n^2+3n}{2n+2}.$$ My proof: * * \begin{align*} \prod_{k=1}^{n}{\sqrt[k+1]{k}} &= \prod_{k=1}^{n}{\sqrt[k+1]{k\cdot 1 \cdot 1 \cdots 1}} \leq \prod^{n}_{k=1}{\frac{k+1+1+\cdots +1}{k+1}}\\ &=\prod^{n}_{k=1}{\frac{2k}{k+1}}=2^n \cdot \prod^{n}_{k=1}{\frac{k}{k+1}}=\frac{2^n}{n+1}.\end{align*} * $$\sum_{k=1}^{n}{\frac{1}{\sqrt[k+1]{k}}} \geq n \cdot \sqrt[n]{\prod_{k=1}^{n}{\frac{1}{\sqrt[k+1]{k}}}} \geq n \cdot \sqrt[n]{\frac{n+1}{2^n}}=\frac{n}{2}\cdot \sqrt[n]{n+1}.$$ It remains to prove that $$\frac{n}{2}\cdot \sqrt[n]{n+1} \geq \frac{n^2+3n}{2n+2}=\frac{n(n+3)}{2(n+1)},$$ or $$\sqrt[n]{n+1} \geq \frac{n+3}{n+1},$$ or $$(n+1) \cdot (n+1)^{\frac{1}{n}} \geq n+3.$$ We apply Bernoulli's inequality and we have: $$(n+1)\cdot (1+n)^{\frac{1}{n}}\geq (n+1) \cdot \left(1+n\cdot \frac{1}{n}\right)=(n+1)\cdot 2 \geq n+3,$$ which is equivalent to: $$2n+2 \geq n+3,$$ or $$n\geq 1,$$ and this is true because $n \neq 0$ and $n$ is a natural number. Can you give another solution without using Bernoulli's inequality? Thanks :-)
The inequality to be shown is $$(n+1)^{n+1}\geqslant(n+3)^n, $$ for every positive integer $n$. For $n = 1$ it is easy. For $n \ge 2$, apply the AM-GM inequality to the $n+1$ numbers consisting of $(n-2)$ copies of $n+3$, two copies of $\frac{n+3}{2}$, and one $4$; since these numbers are not all equal and their sum is $(n+1)^2$, we get $$(n+3)^n < \left(\frac{(n-2)(n+3) + \frac{n+3}{2} + \frac{n+3}{2} + 4}{n+1}\right)^{n+1} = \left(n+1\right)^{n+1}$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/280360", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
Finding the general solution of a quasilinear PDE This is a homework that I'm having a bit of trouble with: Find a general solution of: $(x^2+3y^2+3u^2)u_x-2xyuu_y+2xu=0~.$ Of course this should be done using the method of characteristics but I'm having trouble solving the characteristic equations since none of the equations decouple: $\dfrac{dx}{x^2+3y^2+3u^2}=-\dfrac{dy}{2xyu}=-\dfrac{du}{2xu}$ Any suggestions?
Follow the method in http://en.wikipedia.org/wiki/Characteristic_equations#Example: $(x^2+3y^2+3u^2)u_x-2xyuu_y+2xu=0$ $2xyuu_y-(x^2+3y^2+3u^2)u_x=2xu$ $2yu_y-\left(\dfrac{x}{u}+\dfrac{3y^2}{xu}+\dfrac{3u}{x}\right)u_x=2$ $\dfrac{du}{dt}=2$ , letting $u(0)=0$ , we have $u=2t$ $\dfrac{dy}{dt}=2y$ , letting $y(0)=y_0$ , we have $y=y_0e^{2t}=y_0e^u$ $\dfrac{dx}{dt}=-\left(\dfrac{x}{u}+\dfrac{3y^2}{xu}+\dfrac{3u}{x}\right)=-\dfrac{x}{2t}-\dfrac{3y_0^2e^{4t}}{2xt}-\dfrac{6t}{x}$ $\dfrac{dx}{dt}+\dfrac{x}{2t}=-\left(\dfrac{3y_0^2e^{4t}}{2t}+6t\right)\dfrac{1}{x}$ Let $w=x^2$ , Then $\dfrac{dw}{dt}=2x\dfrac{dx}{dt}$ $\therefore\dfrac{1}{2x}\dfrac{dw}{dt}+\dfrac{x}{2t}=-\left(\dfrac{3y_0^2e^{4t}}{2t}+6t\right)\dfrac{1}{x}$ $\dfrac{dw}{dt}+\dfrac{x^2}{t}=-\dfrac{3y_0^2e^{4t}}{t}-12t$ $\dfrac{dw}{dt}+\dfrac{w}{t}=-\dfrac{3y_0^2e^{4t}}{t}-12t$ I.F. $=e^{\int\frac{1}{t}dt}=e^{\ln t}=t$ $\therefore\dfrac{d}{dt}(tw)=-3y_0^2e^{4t}-12t^2$ $tw=\int(-3y_0^2e^{4t}-12t^2)~dt$ $tx^2=-\dfrac{3y_0^2e^{4t}}{4}-4t^3+f(y_0)$ $x^2=-\dfrac{3y_0^2e^{4t}}{4t}-4t^2+\dfrac{f(y_0)}{t}$ $x=\pm\sqrt{-\dfrac{3y_0^2e^{4t}}{4t}-4t^2+\dfrac{f(y_0)}{t}}$ $x=\pm\sqrt{-\dfrac{3y^2}{2u}-u^2+\dfrac{2f(ye^{-u})}{u}}$
{ "language": "en", "url": "https://math.stackexchange.com/questions/280411", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 1, "answer_id": 0 }
how to find inverse of a matrix in $\Bbb Z_5$ How do I find the inverse of a matrix in $\Bbb Z_5$? Please show me explicitly how to find the inverse of the matrix below. I was thinking of finding the inverse of each entry separately in $\Bbb Z_5$ and then forming the matrix from those. $$\begin{pmatrix}1&2&0\\0&2&4\\0&0&3\end{pmatrix}$$ Thank you.
Hint: Use the adjugate matrix. Answer: The cofactor matrix of $A$ is $\color{grey}{C_A= \begin{pmatrix} +\begin{vmatrix} 2 & 4 \\ 0 & 3 \end{vmatrix} & -\begin{vmatrix} 0 & 4 \\ 0 & 3 \end{vmatrix} & +\begin{vmatrix} 0 & 2 \\ 0 & 0 \end{vmatrix} \\ -\begin{vmatrix} 2 & 0 \\ 0 & 3 \end{vmatrix} & +\begin{vmatrix} 1 & 0 \\ 0 & 3 \end{vmatrix} & -\begin{vmatrix} 1 & 2 \\ 0 & 0 \end{vmatrix} \\ +\begin{vmatrix} 2 & 0 \\ 2 & 4 \end{vmatrix} & -\begin{vmatrix} 1 & 0 \\ 0 & 4 \end{vmatrix} & +\begin{vmatrix} 1 & 2 \\ 0 & 2 \end{vmatrix} \end{pmatrix}= \begin{pmatrix} 6 & 0 & 0 \\ -6 & 3 & 0 \\ 8 & -4 & 2 \end{pmatrix}=} \begin{pmatrix} 1 & 0 & 0 \\ -1 & 3 & 0 \\ 3 & 1 & 2 \end{pmatrix}$ (reducing entries mod $5$). Therefore the adjugate matrix of $A$ is $\color{grey}{\text{adj}(A)=C_A^T= \begin{pmatrix} 1 & 0 & 0 \\ -1 & 3 & 0 \\ 3 & 1 & 2 \end{pmatrix}^T=} \begin{pmatrix} 1 & -1 & 3 \\ 0 & 3 & 1 \\ 0 & 0 & 2 \end{pmatrix}$. Since $\det{(A)}=1$ in $\Bbb Z_5$, it follows that $A^{-1}=\text{adj}(A)= \begin{pmatrix} 1 & -1 & 3 \\ 0 & 3 & 1 \\ 0 & 0 & 2 \end{pmatrix}$. And we confirm this by multiplying the matrices: $\begin{pmatrix} 1 & -1 & 3 \\ 0 & 3 & 1 \\ 0 & 0 & 2\end{pmatrix} \begin{pmatrix} 1 & 2 & 0 \\ 0 & 2 & 4 \\ 0 & 0 & 3\end{pmatrix}= \begin{pmatrix} 1 & 0 & 5 \\ 0 & 6 & 15 \\ 0 & 0 & 6\end{pmatrix}= \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1\end{pmatrix}$ in $\Bbb Z_5$.
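If sympy is available, the whole computation can be verified in a few lines (a sketch):

```python
from sympy import Matrix

A = Matrix([[1, 2, 0], [0, 2, 4], [0, 0, 3]])
A_inv = A.inv_mod(5)   # inverse over Z_5
print(A_inv)           # Matrix([[1, 4, 3], [0, 3, 1], [0, 0, 2]]); note -1 = 4 mod 5
print((A_inv * A).applyfunc(lambda x: x % 5))   # identity matrix
```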
{ "language": "en", "url": "https://math.stackexchange.com/questions/280522", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 4, "answer_id": 2 }
Can you provide me historical examples of pure mathematics becoming "useful"? I am trying to think/know about something, but I don't know if my base premise is plausible. Here we go. Sometimes when I'm talking with people about pure mathematics, they usually dismiss it because it has no practical utility, but I guess that, according to the history of mathematics, the math that is useful today was once pure mathematics (I'm not so sure, but I guess that when calculus was invented, it had no practical application). Also, I guess that the development of pure mathematics is important because it allows us to think about non-intuitive objects before encountering some phenomena that are similar to these mathematical non-intuitive objects. With this in mind, can you provide me historical examples of pure mathematics becoming "useful"?
Group theory is commonplace in quantum mechanics to represent families of operators that possess particular symmetries. You can find some info here.
{ "language": "en", "url": "https://math.stackexchange.com/questions/280530", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "166", "answer_count": 34, "answer_id": 29 }
Non-Abelian simple group of order $120$ Let's assume that there exists simple non-Abelian group $G$ of order $120$. How can I show that $G$ is isomorphic to some subgroup of $A_6$?
A group of order 120 cannot be simple. Assume that there exists a simple non-abelian group $G$ of order 120. Then the number of Sylow 5-subgroups of $G$ is 6. Hence, the index of $N_{G}(P)$ in $G$ is 6 (where $P$ is a Sylow 5-subgroup of $G$). The action of $G$ on the six cosets of $N_{G}(P)$ gives a homomorphism $\phi$ of $G$ to $S_{6}$; its kernel is a proper normal subgroup, hence trivial, so $\phi$ is a monomorphism. We claim that $\operatorname{Im}\phi\leq A_{6}$. Otherwise $\operatorname{Im}\phi$ contains an odd permutation, and so $G$ has a normal subgroup of index 2, a contradiction. Hence, $G\cong \operatorname{Im}(\phi)\leq A_{6}$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/280657", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 3, "answer_id": 1 }
How do I apply multinomial laws in this question? The question is: assume I have 15 students in a class. The probability of obtaining an A grade is 0.3, a B grade 0.4, and a C grade 0.3. What is the probability that at least 2 students obtain an A? Do I need to apply this law? $$P=\frac{n!}{A!\,B!\,C!}\,p_1^{A}\,p_2^{B}\,p_3^{C}$$
We reword the question, perhaps incorrectly. If a student is chosen at random, the probabilities she obtains an A, B, and C are, respectively, $0.3$, $0.4$, and $0.3$. If $15$ students are chosen at random, what is the probability that at least $2$ of the students obtain an A? The probability that $a$ students get an A, $b$ get a B, and $c$ get a C, where $a+b+c=15$, is indeed given by the formula quoted in the post. It is $$\frac{15!}{a!b!c!}(0.3)^a(0.4)^b (0.3)^c.$$ However, the above formula is not the best tool for solving the problem. The probability a student gets an A is $0.3$, and the probability she doesn't is $0.7$. We are only interested in the number of A's, so we are in a binomial situation. We want the probability there are $2$ or more A's. It is easier to first find the probability of fewer than $2$ A's. This is the probability of $0$ A's plus the probability of $1$ A. If $X$ is the number of A's, $$\Pr(X\lt 2)=\binom{15}{0}(0.3)^0(0.7)^{15}+\binom{15}{1}(0.3)^1(0.7)^{14}.$$ It follows that the probability of $2$ or more A's is $$1-\left((0.7)^{15}+15(0.3)(0.7)^{14}\right).$$
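The final number is easy to evaluate (plain Python, using `math.comb` from Python 3.8+):

```python
from math import comb

p, n = 0.3, 15
# P(X < 2) = P(X = 0) + P(X = 1) for X ~ Binomial(15, 0.3)
p_less_than_2 = comb(n, 0) * (1 - p)**15 + comb(n, 1) * p * (1 - p)**14
print(1 - p_less_than_2)   # ~0.9647
```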
{ "language": "en", "url": "https://math.stackexchange.com/questions/280749", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
An inequality about sequences in a $\sigma$-algebra Let $(X,\mathbb X,\mu)$ be a measure space and let $(E_n)$ be a sequence in $\mathbb X$. Show that $$\mu(\lim\inf E_n)\leq\lim\inf\mu(E_n).$$ I am quite sure I need to use the following lemma. Lemma. Let $\mu$ be a measure defined on a $\sigma$-algebra $\mathbb X$. * *If $(E_n)$ is an increasing sequence in $\mathbb X$, then $$\mu\left(\bigcup_{n=1}^\infty E_n\right)=\lim\mu(E_n).$$ *If $(F_n)$ is a decreasing sequence in $\mathbb X$ and if $\mu(F_1)<+\infty$, then $$\mu\left(\bigcap_{n=1}^\infty F_n\right)=\lim\mu(F_n).$$ I know that $$\mu(\liminf_n E_n)=\mu\left(\bigcup_{i=1}^\infty\bigcap_{n=i}^\infty E_n\right)= \lim_i\mu\left(\bigcap_{n=i}^\infty E_n\right).$$ The first equality follows from the definition of $\lim\inf$ and the second from point 1 of the lemma above. Here is where I am stuck.
For every $i\geq 1$ we have that $$ \bigcap_{n=i}^\infty E_n\subseteq E_i $$ and so for all $i\geq 1$ $$ \mu\left(\bigcap_{n=i}^\infty E_n\right)\leq \mu(E_i). $$ Then $$ \mu(\liminf_n E_n)=\lim_{i\to\infty}\mu\left(\bigcap_{n=i}^\infty E_n\right)\leq \liminf \mu(E_i), $$ using the fact that if $(x_n)_{n\geq 1}$ and $(y_n)_{n\geq 1}$ are sequences with $x_n\leq y_n$ for all $n$, then $\liminf x_n\leq \liminf y_n$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/280840", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
Examples of familiar, easy-to-visualize manifolds that admit Lie group structures I have trouble learning Lie groups --- I have no canonical example to imagine while thinking of a Lie group. When I imagine a manifold it is usually some kind of a $2$D blanket or a circle/curve or a sphere, a torus etc. However I have a problem visualizing a Lie group. The best one I have thought of is $SO(2)$, which as far as I understand is just a circle. But a circle apparently lacks distinguished points, so I guess there is no way to canonically prescribe a neutral element to turn a circle into the group $SO(2)$. Examples I have seen so far start from a group and describe it as a group of matrices to show that the group is endowed with the structure of a manifold. I would appreciate the other way --- given a manifold, show that it is naturally a group. And such a manifold should be easily imaginable.
Think of $SO_2$ as the group of $2\times 2$ rotation matrices: $$ \left[\begin{array}{cc} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{array} \right]$$ or the group of complex numbers of unit length $e^{i\theta}$. You can convince yourself directly from definitions that either of these objects is a group under the appropriate multiplication, and that they are isomorphic to each other. This group (which is presented in two ways) is a 1-manifold because it admits smooth parametrization in one variable ($\theta$ here). Why does this group represent the circle $S^1$? Well, the matrices of the above form are symmetries of circles about the origin in $\mathbb{R}^2$, and the trace of $e^{i\theta}$ is the unit circle in the complex plane. I think it is best conceptually to think of Lie groups first as groups, and then to develop geometric intuition to "flavor" your algebraic construction.
{ "language": "en", "url": "https://math.stackexchange.com/questions/280896", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 4, "answer_id": 3 }
When do regular values form an open set? Let $f:M\to N$ be a $C^\infty$ map between manifolds. When is the set of regular values of $N$ an open set in $N$? There is a case which I sort of figured out: * *If $\operatorname{dim} M = \operatorname{dim} N$ and $M$ is compact, it is open by the following argument (fixed thanks to user7530 in the comments): Let $y\in N$. Suppose $f^{-1}(y)\neq \emptyset$. The stack of records theorem applies: $f^{-1}(y)=\{x_1,\dots,x_n\}$ and there is an open neighborhood $U$ of $y$ such that $f^{-1}(U)=\bigcup_{i=1}^n U_i$, with $U_i$ an open neighborhood of $x_i$ such that the $U_i$ are pairwise disjoint and $f$ maps $U_i$ diffeomorphically onto $U$. Now every point in $f^{-1}(U)$ is regular, since if $x_i'\in U_i$, then $f|_{U_i}:U_i\to U$ diffeomorphism $\Rightarrow$ $df_{x_i'}$ isomorphism (thanks to user 7530 for simplifying the argument). Now suppose $f^{-1}(y)=\emptyset$. Then there is an open neighborhood $V$ of $y$ such that every value in $V$ has no preimages. Indeed, the set $N\setminus f(M)$ is open, since $M$ compact $\Rightarrow$ $f(M)$ compact, hence closed. Therefore $V$ is an open neighborhood of $y$ where all values are regular, and we are done. Can we remove the compactness/dimension assumptions in some way?
The set of critical points is closed. You want that the image of this set under $f$ be closed. What about demanding that $f$ is closed? A condition that implies that $f$ is closed is to demand that $f$ is proper (i.e. preimages of compact sets are compact).
{ "language": "en", "url": "https://math.stackexchange.com/questions/280965", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12", "answer_count": 2, "answer_id": 1 }
Studying $ u_n = \int_0^1 (\arctan x)^n \mathrm dx$ I would like to find an equivalent of: $$ u_n = \int_0^1 (\arctan x)^n \mathrm dx$$ which might be: $$ u_n \sim \frac{\pi}{2n} \left(\frac{\pi}{4} \right)^n$$ $$ 0\le u_n\le \left( \frac{\pi}{4} \right)^n$$ So $$ u_n \rightarrow 0$$ In order to get rid of $\arctan x$ I used the substitution $$x=\tan \left(\frac{\pi t}{4n} \right) $$ which gives: $$ u_n= \left(\frac{\pi}{4n} \right)^{n+1} \int_0^n t^n\left(1+\tan\left(\frac{\pi t}{4n} \right)^2 \right) \mathrm dt$$ But this integral is not easier to study! Or: $$ t=(\arctan x)^n $$ $$ u_n = \frac{1}{n} \int_0^{(\pi/4)^n} t^{1/n}(1+\tan( t^{1/n})^2 ) \mathrm dt $$ How could I deal with this one?
Another (simpler) approach is to substitute $x = \tan{y}$ and get $$u_n = \int_0^{\frac{\pi}{4}} dy \: y^n \, \sec^2{y}$$ Now we perform an analysis not unlike Laplace's Method: as $n \rightarrow \infty$, the contribution to the integral is dominated by the value of the integrand at $y=\pi/4$. We may then say that $$u_n \sim \sec^2{\frac{\pi}{4}} \int_0^{\frac{\pi}{4}} dy \: y^n = \frac{2}{n+1} \left ( \frac{\pi}{4} \right )^{n+1} (n \rightarrow \infty) $$ The stated result follows.
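A quick numerical comparison of $u_n$ with the asymptotic form (plain Python with a midpoint rule; the step count is an arbitrary accuracy choice):

```python
from math import atan, pi

def u(n, steps=100_000):
    # midpoint-rule approximation of the integral of (arctan x)^n on [0, 1]
    h = 1.0 / steps
    return h * sum(atan((k + 0.5) * h)**n for k in range(steps))

for n in (5, 10, 20):
    un = u(n)
    asym = 2 / (n + 1) * (pi / 4)**(n + 1)
    print(n, un, asym, un / asym)   # the ratio slowly approaches 1
```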
{ "language": "en", "url": "https://math.stackexchange.com/questions/281017", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 3 }
Computing conditional probability combining events I know $P(E|A)$ and $P(E|B)$; how do I find $P(E|A,B)$? Assume $A,B$ are independent.
Consider this example: * *$A$ = first coin lands Head, *$B$ = second coin lands Head, *$E_1$ = odd number of Heads, *$E_2$ = even number of Heads. The first two conditional probabilities are both $1/2$. The third is $0$ for $E_1$ and $1$ for $E_2$. (Also: Welcome to Math.SE, and please do not try to make your questions as short as possible. Adding your thoughts, even tentative, is always a plus.)
{ "language": "en", "url": "https://math.stackexchange.com/questions/281099", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Book Recommendation for Integer partitions and $q$ series I have been studying number theory for a little while now, and I would like to learn about integer partitions and $q$ series, but I have never studied anything in the field of combinatorics, so are there any prerequisites or things I should be familiar with before I try to study the latter?
George Andrews has contributed greatly to the study of integer partitions. (The link with his name will take you to his webpage listing publications, some of which are accessible as pdf documents.) Also see, e.g., his classic text The Theory of Partitions and the more recent Integer Partitions. You can pretty much "jump right in" with the following, though their breadth of coverage may be more than you care to explore (in which case, they each have fine sections on the topics of interest to you, with ample references for more in depth study of each topic): Two books I highly recommend are Concrete Mathematics by Graham, Knuth, and Patashnik. Combinatorics: Topics, Techniques, and Algorithms by Peter J. Cameron. See his associated site for the text.
{ "language": "en", "url": "https://math.stackexchange.com/questions/281153", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 7, "answer_id": 1 }
Equation of the Plane I have been working through all the problems in my textbook and I have finally got to a difficult one. The problem asks: Find the equation of the plane that passes through the point $(-1,2,1)$ and contains the line of intersection of the planes $x+y-z =2$ and $2x-y+3z=1$. So far I found the direction vector of the line of intersection to be $<1,-2,1>$ and I have identified a point on this line when $x=0$ to be $(0,5,-1)$. I do not know how to find the desired plane from here. Any assistance would be appreciated.
The following determinant is another form of what Scott noted. The line of intersection has direction $\mathbf d=(1,1,-1)\times(2,-1,3)=(2,-5,-3)$, and setting $x=0$ in both plane equations gives the point $P_1=\left(0,\frac72,\frac32\right)$ on the line (so the direction vector and point in your attempt need rechecking). The plane through $P_0=(-1,2,1)$ containing the line is then $$\begin{vmatrix} x+1 & y-2 & z-1\\ 2 & -5 & -3\\ 1 & \frac32 & \frac12 \end{vmatrix}=0,$$ where the second row is $\mathbf d$ and the third row is $P_1-P_0$. Expanding yields $x-2y+4z+1=0$.
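A quick numerical cross-check with numpy (the point $(0,\frac72,\frac32)$ comes from setting $x=0$ in both plane equations):

```python
import numpy as np

n1, n2 = np.array([1, 1, -1]), np.array([2, -1, 3])
d = np.cross(n1, n2)             # line direction: (2, -5, -3)
p0 = np.array([-1.0, 2.0, 1.0])  # the given point
p1 = np.array([0.0, 3.5, 1.5])   # a point on the line
normal = np.cross(d, p1 - p0)    # (2, -4, 8), proportional to (1, -2, 4)
print(d, normal)
# The plane x - 2y + 4z + 1 = 0 contains p0 and every point p1 + t*d:
for p in (p0, p1, p1 + 3 * d):
    print(p[0] - 2 * p[1] + 4 * p[2] + 1)   # all zeros
```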
{ "language": "en", "url": "https://math.stackexchange.com/questions/281191", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 3, "answer_id": 0 }
On a special normal subgroup of a group Let $G$ be a group such that $H$ is a normal subgroup of $G$ with $Z(H)=1$ and $Inn(H)=Aut(H)$. Prove that there exists a normal subgroup $K$ of $G$ such that $G=H\times K$.
First, define the map $\psi: G \to \operatorname{Aut}(H)$ by $\psi_g(x) = gxg^{-1}$ for $x\in H$; this is well defined since $H$ is a normal subgroup, so $\psi_g(H) = H$. From $Z(H) = 1$, we know that the canonical map $H\to \operatorname{Inn}(H)$ is an isomorphism, and $\operatorname{Inn}(H) = \operatorname{Aut}(H)$ by hypothesis. Composing $\psi$ with the inverse of this isomorphism gives a homomorphism $\pi: G \to H$ that restricts to the identity on $H$, hence is surjective. Let $K=\ker\pi = C_G(H)$. Then $K$ is normal, $K\cap H = Z(H) = 1$, and $G=HK$ (given $g\in G$, put $h=\pi(g)$; then $h^{-1}g\in K$), so $G = H\times K$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/281245", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Connectedness of $\beta \mathbb{R}$ I am not very familiar with Stone-Čech compactification, but I would like to understand why the remainder $\beta\mathbb{R}\backslash\mathbb{R}$ has exactly two connected components.
I finally found something: For convenience, let $X= \mathbb{R} \backslash (-1,1)$. First, we show that $(\beta \mathbb{R})\backslash (-1,1)= \beta (\mathbb{R} \backslash (-1,1))$. Let $f_0 : X \to [0,1]$ be a continuous function. Obviously, $f_0$ can be extended to a continuous function $f : \mathbb{R} \to [0,1]$; then we extend $f$ to a continuous function $\tilde{f} : \beta \mathbb{R} \to [0,1]$ and we get by restriction a continuous function $\tilde{f}_0 : (\beta \mathbb{R}) \backslash (-1,1) \to [0,1]$. By the universal property of Stone-Čech compactification, we deduce that $\beta X= (\beta \mathbb{R}) \backslash (-1,1)$. With the same argument, we show that for a closed interval $I \subset X$, $\beta I = \text{cl}_{\beta X}(I)$. Let $F_1= \text{cl}_{\beta X} [1,+ \infty)$ and $F_2= \text{cl}_{\beta X} (-\infty,-1]$. Then, $F_1 \cup F_2 = \beta X$, $F_1 \cap F_2= \emptyset$ and $F_1$, $F_2$ are connected. Indeed, there exists a continuous function $h : X \to \mathbb{R}$ such that $h_{|[1,+ \infty)} \equiv 0$ and $h_{|(-\infty,-1]} \equiv 1$, so when we extend $h$ on $\beta X$ we get a continuous function $\tilde{h}$ such that $\tilde{h} \equiv 0$ on $F_1$ and $\tilde{h} \equiv 1$ on $F_2$; therefore $F_1 \cap F_2= \emptyset$. Finally, $F_1$ and $F_2$ are connected as closures of connected sets. So $(\beta \mathbb{R} ) \backslash (-1,1)$ has exactly two connected components; in fact, it is the same thing for $(\beta \mathbb{R}) \backslash (-n,n)$. Because $(\beta \mathbb{R}) \backslash \mathbb{R}= \bigcap\limits_{ n\geq 1 } (\beta \mathbb{R}) \backslash (-n,n)$, we deduce that $(\beta \mathbb{R}) \backslash \mathbb{R}$ has exactly two connected components (the intersection of a non-increasing sequence of connected compact sets is connected).
{ "language": "en", "url": "https://math.stackexchange.com/questions/281325", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 2, "answer_id": 1 }
How does one show that the set of rationals is topologically disconnected? Let $\mathbb{Q}$ be the set of rationals with its usual topology based on distance: $$d(x,y) = |x-y|$$ Suppose we can only use axioms about $\mathbb{Q}$ (and no axiom about $\mathbb{R}$, the set of reals). Then how can we show that $\mathbb{Q}$ is topologically disconnected, i.e. that there exist two disjoint non-empty open sets $X$ and $Y$ whose union is $\mathbb{Q}$? If we were allowed to use axioms about $\mathbb{R}$, then we could show that for any irrational number $a$: * *if $M$ is the intersection of $]-\infty, a[$ with the rationals, then $M$ is an open set of $\mathbb{Q}$ *if $N$ is the intersection of $]a, +\infty[$ with the rationals, then $N$ is an open set of $\mathbb{Q}$ *$\mathbb{Q}$ is the union of $M$ and $N$. QED. But if we are not allowed to use axioms about $\mathbb{R}$, just axioms about $\mathbb{Q}$?
The rationals is the union of two disjoint open sets $\{x\in\mathbb{Q}:x^2>2\}$ and $\{x\in\mathbb{Q}:x^2<2\}$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/281377", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 3, "answer_id": 0 }
How do I prove that this given set is open? Let $X$ be a topological space and $E$ be an open set. If $A$ is open in $\overline{E}$ and $A\subset E$, then how do I prove that $A$ is open in $X$? It seems trivial, but I'm stuck. Thank you in advance.
It is enough to show that $A=G\cap E$ for some open set $G\subset X$. Use the hypothesis and the fact that if $A=G\cap \overline{E}$ and $A\subset E$, then $A=G\cap E$ (why?).
{ "language": "en", "url": "https://math.stackexchange.com/questions/281459", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 2 }
What is $\lim\limits_{z \to 0} |z \cdot \sin(\frac{1}{z})|$ for $z \in \mathbb C$? What is $\lim\limits_{z \to 0} |z \cdot \sin(\frac{1}{z})|$ for $z \in \mathbb C^*$? I need it to determine the type of the singularity at $z = 0$.
We have $a_n=\frac{1}{n} \to 0$ as $n\to \infty$ and $$ \lim_{n \to \infty}\Big|a_n\sin\Big(\frac{1}{a_n}\Big)\Big|=\lim_{n \to \infty}\frac{|\sin n|}{n}=0, $$ but $$ \lim_{n \to \infty}\Big|ia_n\sin\Big(\frac{1}{ia_n}\Big)\Big|=\lim_{n \to \infty}\frac{e^n-e^{-n}}{2n}=\infty. $$ Therefore $\lim_{z \to 0}|z\sin(z^{-1})|$ does not exist. Notice that for every $k \in \mathbb{N}$ the limit $\lim_{z\to 0}|z^{k+1}\sin(z^{-1})|$ does not exist either. In fact $$ \lim_{n \to \infty}\Big|a_n^{k+1}\sin\Big(\frac{1}{a_n}\Big)\Big|=\lim_{n \to \infty}\frac{|\sin n|}{n^{k+1}}=0, $$ but $$ \lim_{n \to \infty}\Big|(ia_n)^{k+1}\sin\Big(\frac{1}{ia_n}\Big)\Big|=\lim_{n \to \infty}\frac{e^n-e^{-n}}{2n^{k+1}}=\infty. $$ It follows that $z=0$ is neither a pole nor a removable singularity. Hence $z=0$ is an essential singularity.
{ "language": "en", "url": "https://math.stackexchange.com/questions/281518", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Showing that $\displaystyle\int_{-a}^{a} \frac{\sqrt{a^2-x^2}}{1+x^2}dx = \pi\left (\sqrt{a^2+1}-1\right)$. How can I show that $\displaystyle\int_{-a}^{a} \frac{\sqrt{a^2-x^2}}{1+x^2}dx = \pi\left(\sqrt{a^2+1}-1\right)$?
Let $x = a \sin(y)$. Then we have $$\dfrac{\sqrt{a^2-x^2}}{1+x^2} dx = \dfrac{a^2 \cos^2(y)}{1+a^2 \sin^2(y)} dy $$ Hence, $$I = \int_{-a}^{a}\dfrac{\sqrt{a^2-x^2}}{1+x^2} dx = \int_{-\pi/2}^{\pi/2} \dfrac{a^2 \cos^2(y)}{1+a^2 \sin^2(y)} dy $$ Hence, $$I + \pi = \int_{-\pi/2}^{\pi/2} \dfrac{a^2 \cos^2(y)}{1+a^2 \sin^2(y)} dy + \int_{-\pi/2}^{\pi/2} dy = \int_{-\pi/2}^{\pi/2} \dfrac{1+a^2}{1+a^2 \sin^2(y)} dy\\ = \dfrac{1+a^2}2 \int_0^{2 \pi} \dfrac{dy}{1+a^2 \sin^2(y)}$$ Now $$\int_0^{2 \pi} \dfrac{dy}{1+a^2 \sin^2(y)} = \oint_{|z| = 1} \dfrac{dz}{iz \left(1 + a^2 \left(\dfrac{z-\dfrac1z}{2i}\right)^2 \right)} = \oint_{|z| = 1} \dfrac{4z^2 dz}{iz \left(4z^2 - a^2 \left(z^2-1\right)^2 \right)}$$ $$\oint_{|z| = 1} \dfrac{4z^2 dz}{iz \left(4z^2 - a^2 \left(z^2-1\right)^2 \right)} = \oint_{|z| = 1} \dfrac{4z dz}{i(2z + a(z^2-1))(2z - a(z^2-1))}$$ Now $$ \dfrac{4z}{(2z + a(z^2-1))(2z - a(z^2-1))} = \dfrac1{az^2 - a + 2z} - \dfrac1{az^2 - a - 2z}$$ $$\oint_{\vert z \vert = 1} \dfrac{dz}{az^2 - a + 2z} = \oint_{\vert z \vert = 1} \dfrac{dz}{a \left(z + \dfrac{1 + \sqrt{1+a^2}}a\right) \left(z + \dfrac{1 - \sqrt{1+a^2}}a\right)} = \dfrac{2 \pi i}{2 \sqrt{1+a^2}}$$ $$\oint_{\vert z \vert = 1} \dfrac{dz}{az^2 - a - 2z} = \oint_{\vert z \vert = 1} \dfrac{dz}{a \left(z - \dfrac{1 + \sqrt{1+a^2}}a\right) \left(z - \dfrac{1 - \sqrt{1+a^2}}a\right)} = -\dfrac{2 \pi i}{2 \sqrt{1+a^2}}$$ Hence, $$\oint_{|z| = 1} \dfrac{4z dz}{i(2z + a(z^2-1))(2z - a(z^2-1))} = \dfrac{2 \pi i}i \dfrac1{\sqrt{1+a^2}} = \dfrac{2 \pi}{\sqrt{1+a^2}}$$ Hence, we get that $$I + \pi = \left(\dfrac{1+a^2}2\right) \dfrac{2 \pi}{\sqrt{1+a^2}} = \pi \sqrt{1+a^2}$$ Hence, we get that $$I = \pi \left(\sqrt{1+a^2} - 1 \right)$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/281587", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10", "answer_count": 5, "answer_id": 2 }
Can these two double series representations of the $\eta$/$\zeta$ function be converted into each other? By an analysis of the matrix of Eulerian numbers (see p. 8) I came across the representation for the alternating Dirichlet series $\eta$: $$ \eta(s) = 2^{s-1} \sum_{c=0}^\infty \left( \sum_{k=0}^c(-1)^k \binom{1-s}{c-k}(1+k)^{-s} \right) \tag 1$$ The H. Hasse / K. Knopp form as a globally convergent series (see wikipedia) is $$\eta(s) = \sum_{c=0}^\infty \left( { 1\over 2^{c+1} } \sum_{k=0}^c (-1)^k \binom{c}{k}(1+k)^{-s} \right) \tag 2 $$ (Here I removed the leading factor of the $\zeta$-notation in the wikipedia article to arrive at the $\eta$-notation.) The difference between the formulae which made me most curious is the binomial expression, whose upper value is constant in the first formula but varies with $c$ in the second; the same effect shows up in the power-of-2 expression. I just tried to find a conversion from (1) to (2) but it seems to be more difficult than I hoped. Do I overlook something obvious here? Surely there must be a conversion, since the first formula comes from the Eulerian triangle and this is connected to the sums of like powers, but I hope there is an easier one... Q: "How can the formula (1) be converted into the form (2)?" or: "how can the equivalence of the two formulae be shown?" The first formula can be evaluated using the "sumalt" procedure in Pari/GP, which allows one to sum some divergent but alternating series. Here is a bit of code:

    myeta(s) = 2^(s-1)*sumalt(c=0,sum(k=0,c,(-1)^k*binomial(1-s,c-k)*(1+k)^(-s)))
    myzeta(s)= myeta(s)/(1-2^(1-s))
For $|z|<1$ and any $s$ $$-Li_s(-z) (1+z)^{1-s}= \sum_k z^k (-1)^{k+1}k^{-s}\sum_m z^m {1-s\choose m}=\sum_c z^c \sum_{k\le c} (-1)^{k+1}k^{-s}{1-s\choose c-k}$$ Interpret $-Li_s(-e^{2\pi it}) (1+e^{2\pi it})^{1-s}$ as $\lim_{r\to 1^-}-Li_s(-r e^{2\pi it}) (1+r e^{2\pi it})^{1-s}$. With enough partial summations we have that $Li_s(z)$ is continuous for $|z|\le 1,z\ne 1$ and for $\Re(s)\le 1,t\not \in \Bbb{Z}$, the value on the boundary $Li_s(e^{2i\pi t})$ is the analytic continuation of $Li_s(e^{2i\pi t}),\Re(s) > 1$. Also $-Li_s(-e^{2\pi it}) (1+e^{2\pi it})^{1-s}\in L^1(\Bbb{R/Z})$, thus $\sum_c e^{2i\pi t c} \sum_{k\le c} (-1)^{k+1}k^{-s}{1-s\choose c-k}$ is its Fourier series. Your claim is that the Fourier series converges at $t=0$, which is true, because $-Li_s(-e^{2\pi it}) (1+e^{2\pi it})^{1-s}$ is $C^1$ at $t=0$. Whence for all $s\in \Bbb{C}$ $$\sum_{c=0}^\infty \sum_{k\le c} (-1)^{k+1}k^{-s}{1-s\choose c-k}=2^{1-s}\eta(s)$$ I think it is quite different in spirit from the proof of (2).
{ "language": "en", "url": "https://math.stackexchange.com/questions/281711", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Complex numbers straight line proof Prove that the three distinct points $z_1,z_2$, and $z_3$ lie on the same straight line iff $z_3 - z_2 = c(z_2 - z_1)$ for some real number $c$, where the $z_i$ are complex. I know that two vectors are parallel iff one is a scalar multiple of the other, thus $z$ is parallel to $w$ iff $z = cw$. So, from that, does it mean that $z_3 - z_2$ and $z_2 - z_1$ are parallel, thus making the three points lie on the same line?
If $z_k=x_k+iy_k$ for $k=1,2,3$: as $z_3-z_2=c(z_2-z_1)$, if $c=0$ then $z_3=z_2$, and if $c=\infty$ then $z_2=z_1$, so $c$ is a non-zero finite number. $\implies x_3-x_2+i(y_3-y_2)=c\{x_2-x_1+i(y_2-y_1)\}$ Equating the real & the imaginary parts, $$x_3-x_2=c(x_2-x_1),y_3-y_2=c(y_2-y_1)$$ So, $$\frac{y_3-y_2}{x_3-x_2}=\frac{y_2-y_1}{x_2-x_1}\implies x_1(y_2-y_3)+x_2(y_3-y_1)+x_3(y_1-y_2)=0$$ Hence $z_1,z_2$ and $z_3$ are collinear. Alternatively, the area of the triangle with vertices $z_k=x_k+iy_k$ for $k=1,2,3$ is $$\frac12\det\begin{pmatrix}x_1&y_1 &1\\x_2 & y_2 & 1 \\ x_3 & y_3&1\end{pmatrix}$$ $$=\frac12\det\begin{pmatrix}x_1-x_2&y_1-y_2 &0\\x_2 & y_2 & 1 \\ x_3-x_2 & y_3-y_2&0\end{pmatrix}$$ applying $R_1'=R_1-R_2$ and $R_3'=R_3-R_2$ $$=\frac12\det\begin{pmatrix}x_1-x_2&y_1-y_2 &0\\x_2 & y_2 & 1 \\ -c(x_1-x_2) & -c(y_1-y_2)&0\end{pmatrix}$$ $$=-\frac c2\det\begin{pmatrix}x_1-x_2&y_1-y_2 &0\\x_2 & y_2 & 1 \\ x_1-x_2&y_1-y_2&0\end{pmatrix}=0$$ as the 1st & the 3rd rows are identical. Hence $z_1,z_2$ and $z_3$ are collinear.
{ "language": "en", "url": "https://math.stackexchange.com/questions/281784", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 1 }
Book recommendations for commutative algebra and algebraic number theory Are there any books which teach commutative algebra and algebraic number theory at the same time? Many commutative algebra books contain a few chapters on algebraic number theory at the end, but I don't need that. I'm searching for a book which motivates commutative algebra using algebraic number theory. My main aim is to learn algebraic number theory, but while doing so I also want to pick up enough commutative algebra to deal with algebraic geometry as well.
There's no law against reading more than one book at a time! Although algebraic number theory and algebraic geometry both use commutative algebra heavily, the algebra needed for geometry is rather broader in scope (for alg number theory you need to know lots about Dedekind domains, but commutative algebra uses a much wider class of rings). So I don't think you can expect that there will be a textbook on number theory which will also teach you all the algebra you need for algebraic geometry.
{ "language": "en", "url": "https://math.stackexchange.com/questions/281863", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "14", "answer_count": 2, "answer_id": 1 }
What is a 'critical value' in statistics? Here's where I encountered this word: The raw material needed for the manufacture of medicine has to be at least $97\%$ pure. A buyer analyzes the null hypothesis, that the proportion is $\mu_0=97\%$, with the alternative hypothesis that the proportion is higher than $97\%$. He decides to buy the raw material if the null hypothesis gets rejected with $\alpha = 0.05$. So if the calculated critical value is equal to $t_{\alpha} = 98 \%$, he'll only buy if he finds a proportion of $98\%$ or higher with his analysis. The risk that he buys a raw material with a proportion of $97\%$ (null hypothesis is true) is $100 \times \alpha = 5 \%$. I don't really understand what is meant by 'critical value'.
A critical value is the point (or points) on the scale of the test statistic beyond which we reject the null hypothesis, and is derived from the level of significance $\alpha$ of the test. You may be used to doing hypothesis tests like this: * *Calculate test statistics *Calculate p-value of test statistic. *Compare p-value to the significance level $\alpha$. However, you can also do hypothesis tests in a slightly different way: * *Calculate test statistic *Calculate critical value(s) based on the significance level $\alpha$. *Compare test statistic to critical value. Basically, rather than mapping the test statistic onto the scale of the significance level with a p-value, we're mapping the significance level onto the scale of the test statistic with one or more critical values. The two methods are completely equivalent. In the theoretical underpinnings, hypothesis tests are based on the notion of critical regions: the null hypothesis is rejected if the test statistic falls in the critical region. The critical values are the boundaries of the critical region. If the test is one-sided (like a $\chi^2$ test or a one-sided $t$-test) then there will be just one critical value, but in other cases (like a two-sided $t$-test) there will be two.
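For instance, here is a minimal sketch of the second workflow for a one-sided $z$-test at $\alpha = 0.05$ (assuming scipy; the test statistic value is a made-up placeholder):

    from scipy.stats import norm

    alpha = 0.05
    z_critical = norm.ppf(1 - alpha)   # upper critical value, about 1.645
    test_statistic = 2.1               # hypothetical value computed from the data
    print(z_critical, test_statistic > z_critical)   # True: reject the null hypothesis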
{ "language": "en", "url": "https://math.stackexchange.com/questions/281940", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
How is the number of points in the convex hull of five random points distributed? This is about another result that follows from the results on Sylvester's four-point problem and its generalizations; it's perhaps slightly less obvious than the other one I posted. Given a probability distribution in the plane, if we know the probability $p_5$ for five points to form a convex pentagon and the probability $p_4$ for four points to form a convex quadrilateral, how can we determine the distribution of the number of points in the convex hull of five points (where all the points are independently drawn from the given distribution)?
Denote the probability for the convex hull of the five points to consist of $k$ points by $x_k$. The convex hull has five points if and only if the five points form a convex pentagon, so $x_5=p_5$. Now let's determine the expected number of subsets of four of the five points that form a convex quadrilateral in two different ways. There are $5$ such subsets, and each has probability $p_4$ to form a convex quadrilateral, so the expected number is $5p_4$. On the other hand, if the convex hull has $5$ points, all $5$ subsets form a convex quadrilateral; if it has $4$ points, the convex hull itself and two of the other four quadrilaterals are convex, for a total of $3$, and if the convex hull has $3$ points, exactly one of the five quadrilaterals is convex (the one not including the hull vertex that the line joining the two inner points separates from the other two hull vertices). Thus we have $$ 5p_4=5x_5+3x_4+x_3\;. $$ Together with $x_5=p_5$ and $x_3+x_4+x_5=1$, that makes three linear equations for the three unknowns. The solution is $$ \begin{align} x_3&=\frac32-\frac52p_4+p_5\;,\\ x_4&=-\frac12+\frac52p_4-2p_5\;,\\ x_5&=\vphantom{\frac12}p_5\;. \end{align} $$ MathWorld gives $p_4$ and $p_5$ for points uniformly selected in a triangle and a parallelogram; here are the corresponding distributions: $$ \begin{array}{c|c|c|c} \text{shape}&p_4&p_5&x_3&x_4&x_5\\\hline \text{triangle}&\frac23&\frac{11}{36}&\frac5{36}&\frac59&\frac{11}{36}\\\hline \text{parallelogram}&\frac{25}{36}&\frac{49}{144}&\frac5{48}&\frac59&\frac{49}{144} \end{array} $$ The probability $x_4$ that the convex hull consists of four of the five points is the same in both cases; however, this probability is different for an ellipse. Here's code to check these results and estimate the values for an ellipse.
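The linked code is not reproduced here, but a minimal Monte Carlo sketch for the parallelogram row (sampling the unit square; any parallelogram is an affine image of it, and the hull probabilities are affine-invariant) might look like this, assuming numpy and scipy:

    import numpy as np
    from scipy.spatial import ConvexHull

    rng = np.random.default_rng(0)
    trials = 100_000
    counts = {3: 0, 4: 0, 5: 0}
    for _ in range(trials):
        pts = rng.random((5, 2))                     # 5 points, uniform in the unit square
        counts[len(ConvexHull(pts).vertices)] += 1

    print({k: v / trials for k, v in counts.items()})
    # compare with x3 = 5/48, x4 = 5/9, x5 = 49/144 from the table above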
{ "language": "en", "url": "https://math.stackexchange.com/questions/282147", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 1, "answer_id": 0 }
Derivative of $\sqrt{\sin (x^2)}$ I have problems calculating the derivative of $f(x)=\sqrt{\sin (x^2)}$. I know that $f'(\sqrt{2k \pi + \pi})= - \infty$ and $f'(\sqrt{2k \pi})= + \infty$ because $f$ has a derivative only if $ \sqrt{2k \pi} \leq |x| \leq \sqrt{2k \pi + \pi}$. The answer says that for all other values of $x$, $f'(0-)=-1$ and $f'(0+)=1$. Why is that? All I get is $f'(x)= \dfrac{x \cos x^2}{\sqrt{\sin (x^2)}} $.
I don't know if you did it this way, so I figured that I would at least display it. \begin{align} y &= \sqrt{\sin x^2}\\ y^2 &= \sin x^2\\ 2yy' &= 2x \cos x^2\\ y' &= \frac{x \cos x^2}{\sqrt{\sin x^2}} \end{align}
{ "language": "en", "url": "https://math.stackexchange.com/questions/282279", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 3 }
Logarithm as limit Wolfram's website lists this as a limit representation of the natural log: $$\ln{z} = \lim_{\omega \to \infty} \omega(z^{1/\omega} - 1)$$ Is there a quick proof of this? Thanks
$\ln z$ is the derivative of $t\mapsto z^t$ at $t=0$, so $$\ln z = \lim_{h\to 0}\frac{ z^h-1}h=\lim_{\omega\to \infty} \omega(z^{1/\omega}-1).$$
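A quick numerical sanity check (a minimal Python sketch):

    import math

    z = 5.0
    for omega in (10, 1_000, 100_000):
        print(omega, omega * (z ** (1 / omega) - 1))
    print(math.log(z))   # the target value, about 1.6094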
{ "language": "en", "url": "https://math.stackexchange.com/questions/282339", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 4, "answer_id": 2 }
At what speed should it be traveling if the driver aims to arrive at Town B at 2.00 pm? A car will travel from Town A to Town B. If it travels at a constant speed of 60 km/h, it will arrive at 3.00 pm. If travels at a constant speed of 80kh/h, it will arrive at 1.00 pm. At what speed should it be traveling if the driver aims to arrive at Town B at 2.00 pm?
The trip became $120$ minutes ($2$ hours) shorter by using $\frac34$ of a minute per kilometer ($80$ km/hr) instead of $1$ minute per kilometer ($60$ km/hr.) Since the savings from going faster was $\frac14$ of a minute per kilometer, the trip must be $480$ kilometers long, so it took $8$ hours at $60$ km/hr, and we set off at 7 AM. Therefore, to arrive at 2 PM, we should travel $480$ kilometers in $7$ hours, or $68\frac{4}{7}$ km/hr.
{ "language": "en", "url": "https://math.stackexchange.com/questions/282402", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
Upper bound for the absolute value of an inner product I am trying to prove the inequality $$ \left|\sum\limits_{i=1}^n a_{i}x_{i} \right| \leq \frac{1}{2}(x_{(n)} - x_{(1)}) \sum\limits_{i=1}^n \left| a_{i} \right| \>,$$ where $x_{(n)} = \max_i x_i$ and $x_{(1)} = \min_i x_i$, subject to the condition $\sum_i a_i = 0$. I've tried squaring and applying Samuelson's inequality to bound the distance between any particular observation and the sample mean, but am making very little headway. I also don't quite understand what's going on with the linear combination of observations out front. Can you guys point me in the right direction on how to get started with this thing?
Hint: $$ \left|\sum_i a_i x_i\right| = \frac{1}{2} \left|\sum_i a_i x_i\right| + \frac{1}{2} \left|\sum_i a_i \cdot (-x_i)\right| \>. $$ Now, * *What do you know about $\sum_i a_i x_{(1)}$ and $\sum_i a_i x_{(n)}$? (Use your assumptions.) *Recall the old saw: "There are only three basic operations in mathematics: Addition by zero, multiplication by one, and integration by parts!" (Hint: You won't need the last one.) Use this and the most basic properties of absolute values and positivity to finish this off.
{ "language": "en", "url": "https://math.stackexchange.com/questions/282462", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
Simple combinations - Party Lamps [IOI 98] You are given N lamps and four switches. The first switch toggles all lamps, the second the even lamps, the third the odd lamps, and last switch toggles lamps $1, 4, 7, 10, \dots $ Given the number of lamps, N, the number of button presses made (up to $10,000$), and the state of some of the lamps (e.g., lamp $7$ is off), output all the possible states the lamps could be in. Naively, for each button press, you have to try $4$ possibilities, for a total of $4^{10000}$ (about $10^{6020}$ ), which means there's no way you could do complete search (this particular algorithm would exploit recursion). Noticing that the order of the button presses does not matter gets this number down to about $10000^4$ (about $10^{16}$ ), still too big to completely search (but certainly closer by a factor of over $10^{6000}$ ). However, pressing a button twice is the same as pressing the button no times, so all you really have to check is pressing each button either 0 or 1 times. That's only $2^4 = 16$ possibilities, surely a number of iterations solvable within the time limit. Above is a simple problem with a brief explanation of the solution. What I am not able to conclude is the part where it says order doesn't matter and switches solution from $4^{10000}$ to $10000^4$. Any idea ?
The naive solution works in this way: There are four buttons we can push. We need to account for at most $10000$ button presses. Let's make it easier and say we only have to account for at most three button presses. Then our button-press 1 is either the first button, the second one, the third one, or the fourth one. Similarly for button presses 2 and 3. So there are four options at each of the three decisions, thus $4^3$. The same logic gets $4^{10000}$. (Actually, there are some other options: this counts only the $10000$-press cases; not, say, the $4586$-press cases. But the point is that it's too big, so the point stands.) The better solution works this way. Since each button always affects the same set of lamps, we can separate it into four problems about pressing a button at most $10000$ times. Again, let's count only the $10000$-press cases. We can think of a button press as a square, and then put the squares into four groups, where each group represents which button was pressed. It now doesn't matter in what order the button presses happened. All that matters is the number of times the buttons were pressed. So then you choose how to arrange $10000$ objects into $4$ groups, which is ${10000\choose 4}\approx 10000^4$ [for some definitions of $\approx$]. (Again, this simplifies things because we only considered the $10000$-press cases, but again the point is that it's too big.) That idea is taken to the extreme in the best solution, which notices that the parity (even-or-odd-ness) is the only thing that really matters about the number of times each button was pressed.
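To make the final count concrete, here is a minimal Python sketch enumerating the $2^4=16$ parity combinations (my own naming; it assumes, as in the original task, that all lamps start ON, and it ignores the extra consistency check that the four parities must be achievable with the given number of presses). It also reveals that some combinations coincide:

    from itertools import product

    def final_state(n, parities):
        # state of n lamps (all initially ON) after pressing each button an
        # even/odd number of times, encoded by parities = (p1, p2, p3, p4)
        p1, p2, p3, p4 = parities
        state = []
        for lamp in range(1, n + 1):
            s = 1 ^ p1                     # button 1 toggles every lamp
            if lamp % 2 == 0:
                s ^= p2                    # button 2 toggles the even lamps
            else:
                s ^= p3                    # button 3 toggles the odd lamps
            if lamp % 3 == 1:
                s ^= p4                    # button 4 toggles lamps 1, 4, 7, ...
            state.append(s)
        return tuple(state)

    states = {final_state(10, p) for p in product((0, 1), repeat=4)}
    print(len(states))   # prints 8: pressing buttons 2 and 3 together equals pressing button 1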
{ "language": "en", "url": "https://math.stackexchange.com/questions/282530", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Let $a,b$ and $c$ be real numbers; evaluate the determinant $|b^2c^2, bc, b+c; c^2a^2, ca, c+a; a^2b^2, ab, a+b|$. Let $a,b$ and $c$ be real numbers. Evaluate the following determinant: $$\begin{vmatrix}b^2c^2 &bc& b+c\cr c^2a^2&ca&c+a\cr a^2b^2&ab&a+b\cr\end{vmatrix}$$ After a long calculation I get that the answer is $0$. Is there any shorter process? Please help, someone. Thank you.
Imagine expanding along the first column. Note that the cofactor of $b^2c^2$ is $$(a+b)ac-(a+c)ab=a^2(c-b)$$ which is a multiple of $a^2$. The other two terms in the expansion along the first column are certainly multiples of $a^2$, so the determinant is a multiple of $a^2$. By symmetry, it's also a multiple of $b^2$ and of $c^2$. If $a=b$ then the first two rows are equal, so the determinant's zero, so the determinant is divisible by $a-b$. By symmetry, it's also divisible by $a-c$ and by $b-c$. So, the determinant is divisible by $a^2b^2c^2(a-b)(a-c)(b-c)$, a polynomial of degree $9$. But the determinant is a polynomial of degree $7$, so it must be identically zero.
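A one-minute symbolic check of the conclusion (a minimal sketch, assuming sympy is available):

    import sympy as sp

    a, b, c = sp.symbols('a b c')
    M = sp.Matrix([
        [b**2 * c**2, b * c, b + c],
        [c**2 * a**2, c * a, c + a],
        [a**2 * b**2, a * b, a + b],
    ])
    print(sp.expand(M.det()))   # prints 0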
{ "language": "en", "url": "https://math.stackexchange.com/questions/282655", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 5, "answer_id": 4 }
Combinatorics Problem: Box Riddle A huge group of people live a bizarre box based existence. Every day, everyone changes the box that they're in, and every day they share their box with exactly one person, and never share a box with the same person twice. One of the people of the boxes gets sick. The illness is spread by co-box-itation. What is the minimum number of people who are ill on day n? Additional information (not originally included in problem): Potentially relevant OEIS sequence: http://oeis.org/A007843
Just in case this helps someone: (In each step we must cover an $N\times N$ board with $N$ non-self-attacking rooks, diagonal forbidden). This gives the sequence (I start numbering at day 1 for $N=2$): (2,4,4,6,8,8,8,10,12,12,14,16,16,16) Updated: a. Brief explanation: each column-row corresponds to a person; the numbered $n$ cells show the pairings of sick people corresponding to day $n$ (day 1: pair {1,2}; day 2: pairs {1,4}, {2,3}) b. This, like most answers here, assumes that we are interested in a sequence of pairings that minimizes the number of sick people for all $n$. But it can be argued that the question is not clear on this, and that one might be interested in minimizing the number of sick people for one fixed $n$. In this case, the problem is simpler, see Daniel Martin's answer.
{ "language": "en", "url": "https://math.stackexchange.com/questions/282740", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "40", "answer_count": 4, "answer_id": 1 }
A function with a non-negative upper derivative must be increasing? I am trying to show that if $f$ is continuous on the interval $[a,b]$ and its upper derivative $\overline{D}f$ is such that $ \overline{D}f \geq 0$ on $(a,b)$, then $f$ is increasing on the entire interval. Here $\overline{D}f$ is defined by $$ \overline{D}f(x) = \lim\limits_{h \to 0} \sup\limits_{h, 0 < |t| \leq h} \frac{f(x+t) - f(x)}{t} $$ I am not sure where to begin, though. Letting $x,y \in [a,b]$ be such that $x \leq y$, suppose for contradiction that $f(x) > f(y)$, then continuity of $f$ means that there is some neighbourhood of $y$ such that $f$ takes on values strictly less than $f(x)$ on this neighbourhood. Now I think I would like to use this neighbourhood to argue that the upper derivative at $y$ is then negative, but I cannot see how to complete this argument. Any help is appreciated! :)
Probably not the best approach, but here is an idea: show that the MVT holds in this case: Lemma Let $[c,d]$ be a subinterval of $[a,b]$. Then there exists a point $e \in [c,d]$ so that $$\frac{f(d)-f(c)}{d-c}=\overline{D}f(e)$$ Proof: Let $g(x)=f(x)-\frac{f(d)-f(c)}{d-c}(x-c) \,.$ Then $g$ is continuous on $[c,d]$ and hence it attains an absolute max and an absolute minimum. Since $g(c)=g(d)$, either $g$ is constant, or one of them is attained at some point $e \in (c,d)$. In the first case you can prove that $\overline{D}g=0$ on $[c,d]$, otherwise it is easy to conclude that $\overline{D}g(e)=0$. Your claim follows immediately from here.
{ "language": "en", "url": "https://math.stackexchange.com/questions/282889", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 5, "answer_id": 0 }
Trace and Norm of a separable extension. If $L | K$ is a separable extension and $\sigma : L \rightarrow \bar K$ varies over the different $K$-embeddings of $L$ into an algebraic closure $\bar K$ of $K$, then how to prove that $$f_x(t) = \Pi (t - \sigma(x))?$$ $f_x(t)$ is the characteristic polynomial of the linear transformation $T_x:L \rightarrow L$ where $T_x(a)=xa$
First assume $L = K(x)$. By the Cayley-Hamilton Theorem, $f_x(x) = 0$, so $f_x$ is a multiple of the minimal polynomial of $x$ which is $\prod_\sigma (t-\sigma(x))$. Since both polynomials are monic and have the same degree, they are in fact equal. For the general case, choose a basis $b_1,\ldots,b_r$ of $L$ over $K(x)$. Then, as $K$-vector spaces, $L = \bigoplus_{i=1}^r K(x)b_i$, and $T_x$ acts on the direct summands separately. Therefore, the characteristic polynomial of $T_x: L \to L$ is the product of the characteristic polynomials of the restricted maps $T_x: K(x)b_i \to K(x)b_i$. All those restricted maps have the same characteristic polynomial, namely the minimal polynomial $g$ of $x$. So the characteristic polynomial of $T_x: L\to L$ is $g^{[L:K(x)]}$. Since every embedding $\tilde\sigma: K(x) \to \overline K$ can be extended to an embedding $\sigma: L \to \overline K$ in exactly $[L:K(x)]$ ways, this equals $\prod_\sigma (t-\sigma(x))$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/282966", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 1, "answer_id": 0 }
Why is this function entire? $f(z) = z^{-1} \sin{z} \exp(i tz)$ In problem 10.44 of Real & Complex Analysis, the author says $f(z) = z^{-1} \sin{z} \exp(i tz)$ is entire without explaining why. My guess is that $z = 0$ is a removable singularity, with $f(0) = 1$ and $f'(0) = 0$, but I cannot seem to prove it from the definitions of limit and derivative. The definition of derivative gives: $$ \left|\dfrac{\sin z \exp(itz)}{z^2}\right| $$ Is my intuition correct? How can I prove that the above goes to $0$ as $z \to 0$?
Note that $u:z\mapsto\sin(z)/z$ is indeed entire since $u(z)=\sum\limits_{n=0}^{+\infty}(-1)^nz^{2n}/(2n+1)!$ has an infinite radius of convergence. Multiplying $u$ by the exponential, also entire, does not change anything.
{ "language": "en", "url": "https://math.stackexchange.com/questions/283030", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Multilinear Functions I have a question regarding the properties of a multilinear function. This is for a linear algebra class. I know that for a multilinear function, $f(cv_1,v_2,\dots,v_n)=c\cdot f(v_1,v_2,\dots,v_n)$. Does this imply $f(cv_1,dv_2,\dots,v_n)=c\cdot d\cdot f(v_1,v_2,\dots,v_n)$? It is for a question involving a multilinear function $f:\mathbb{R}^2\times\mathbb{R}^2\times\mathbb{R}^2\to \mathbb{R}$. I am given eight values of $f$, each of which is composed of a combination of three unit vectors. They are: $f((1,0),(1,0),(1,0))=e$ $f((1,0),(1,0),(0,1))=\sqrt{7}$ $f((1,0),(0,1),(1,0))=0$ $f((1,0),(0,1),(0,1))=2$ $f((0,1),(1,0),(1,0))=\sqrt{5}$ $f((0,1),(1,0),(0,1))=0 $ $f((0,1),(0,1),(1,0))=\pi$ $f((0,1),(0,1),(0,1))=3$ Then I am asked to compute different values of $f$. For instance, $f((2,3),(-1,1),(7,4))$. How do I approach this question to solve for the value of $f$?
If I interpret your question correctly, then the clue is in $f:\mathbb{R}^2\times\mathbb{R}^2\times\mathbb{R}^2\to\mathbb{R}$, which seems to imply that the function is trilinear, that is, in three inputs of two dimensions each. In that case, $\begin{align*}f((2,3),(-1,1),(7,4)) =&2\cdot -1\cdot 7\cdot f((1,0),(1,0),(1,0)) \\+&2\cdot -1\cdot 4\cdot f((1,0),(1,0),(0,1)) \\+&2\cdot 1\cdot 7\cdot f((1,0),(0,1),(1,0)) \\+&2\cdot 1\cdot 4\cdot f((1,0),(0,1),(0,1)) \\+&3\cdot -1\cdot 7\cdot f((0,1),(1,0),(1,0)) \\+&3\cdot -1\cdot 4\cdot f((0,1),(1,0),(0,1)) \\+&3\cdot 1\cdot 7\cdot f((0,1),(0,1),(1,0)) \\+&3\cdot 1\cdot 4\cdot f((0,1),(0,1),(0,1)) \end{align*}$ I'll leave you to plug in the values, I'm too lazy. This might be easier to see if you realize that a bilinear function $f(x,y)=x'Ay$ (where $A$ is a matrix) works the same way.
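If you would rather let a machine do the plugging in, here is a minimal Python sketch of the same trilinear expansion (encoding $(1,0)$ as index $0$ and $(0,1)$ as index $1$; the names are mine):

    import math

    # f(e_i, e_j, e_k) from the table above, with e_0 = (1,0) and e_1 = (0,1)
    F = {
        (0, 0, 0): math.e,       (0, 0, 1): math.sqrt(7),
        (0, 1, 0): 0.0,          (0, 1, 1): 2.0,
        (1, 0, 0): math.sqrt(5), (1, 0, 1): 0.0,
        (1, 1, 0): math.pi,      (1, 1, 1): 3.0,
    }

    def f(u, v, w):
        # expand each argument in the standard basis and use trilinearity
        return sum(u[i] * v[j] * w[k] * F[i, j, k]
                   for i in (0, 1) for j in (0, 1) for k in (0, 1))

    print(f((2, 3), (-1, 1), (7, 4)))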
{ "language": "en", "url": "https://math.stackexchange.com/questions/283080", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
Application of fundamental theorem of calculus I have this problem: $$ \frac{d}{dx} \left( \int_{\sqrt{x}}^{x^2-3x} \tan(t) dt \right) $$ I know how to find the derivative of the integral from a constant $a$ to a variable $x$, so: $$ \frac{d}{dx} \left( \int_a^x f(t) dt \right) $$ but I don't know how to do it between two variables, in this case from $\sqrt{x}$ to $x^2-3x$. Thanks so much.
First we work formally: you can write your integral, say $F(x)=\int_a^{g(x)}f(t)\,dt-\int_a^{h(x)}f(t)\,dt$, where $f,g$ and $h$ are the functions appearing in your problem, and $a\in\mathbb R$ is constant. Next, you can apply the chain rule together with the fundamental theorem of calculus in order to differentiate the difference above. What is left? The existence of such an $a$: recall that by definition the upper and lower Riemann integrals are defined for bounded functions, so it is required that your integrand $\tan$ be bounded on one of the two possible integration intervals $I=[h(x),g(x)]$ or $J=[g(x),h(x)]$. This occurs only when $$\sqrt x,\,x^2-3x\in\Bigl(-\frac{\pi}2+k\pi,\frac{\pi}2+k\pi\Bigr),\ \text{for some integer}\ k\,.\tag{$\mathbf{I}$}$$ Since both $\sqrt{x}$ and $x^2-3x$ are continuous functions, the set of values $x$ satisfying the previous inclusion is non-empty (easy exercise left to you) and open in $\mathbb R$, so it is a countable union of open intervals. When you try to calculate the derivative, you are working locally, that is, in one of these intervals, so you simply choose a fixed element $a$ in that interval and proceed as stated at the beginning. If you are not familiar with the notion of "open set", then simply solve the condition $(\mathbf{I})$ explicitly and see what happens.
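Carrying this out for the concrete problem (on any of the intervals where such an $a$ exists), the chain rule together with the fundamental theorem gives $$\frac{d}{dx}\int_{\sqrt{x}}^{x^2-3x}\tan t\,dt=(2x-3)\tan\left(x^2-3x\right)-\frac{1}{2\sqrt{x}}\tan\left(\sqrt{x}\right).$$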
{ "language": "en", "url": "https://math.stackexchange.com/questions/283210", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 3, "answer_id": 0 }
How do I study these two sequences? Let $a_1=1$, $a_{n+1}=a_n+(-1)^n \cdot 2^{-n}$, $b_n=\frac{2 a_{n+1}-a_n}{a_n}$. (1) $\{a_n\}$ converges to $0$ and $\{b_n\}$ is a Cauchy sequence. (2) $\{a_n\}$ converges to a non-zero number and $\{b_n\}$ is a Cauchy sequence. (3) $\{a_n\}$ converges to $0$ and $\{b_n\}$ is not a Cauchy sequence. (4) $\{a_n\}$ converges to a non-zero number and $\{b_n\}$ is not a Cauchy sequence. Trial: Here $$\begin{align} a_1 &=1\\ a_2 &=a_1 -\frac{1}{2} =1 -\frac{1}{2} \\ a_3 &= 1 -\frac{1}{2} + \frac{1}{2^2} \\ \vdots \\ a_n &= 1 -\frac{1}{2} + \frac{1}{2^2} -\cdots +(-1)^{n-1} \frac{1}{2^{n-1}}\end{align}$$ $$\lim_{n \to \infty}a_n=\frac{1}{1+\frac{1}{2}}=\frac{2}{3}$$ Here I conclude that $\{a_n\}$ converges to a non-zero number. Am I right? I know the definition of a Cauchy sequence, but here I am stuck on how to check it. Please help.
We have $b_n=\frac{2 a_{n+1}-a_n}{a_n}=2\frac{a_{n+1}}{a_n}-1$. Since $a_n\to 2/3\neq 0$ as $n\to\infty$, we have $a_{n+1}/a_n\to 1$, so $b_n\to 2-1=1$; being convergent, $\{b_n\}$ is Cauchy as well.
{ "language": "en", "url": "https://math.stackexchange.com/questions/283294", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
How can I find this limit? If $a_n=(1+\frac{2}{n})^n$, then find $$\lim_{n \to \infty}(1-\frac{a_n}{n})^n.$$ Trial: Can I use $$\lim_{n \to \infty}a_n=e^2$$ and then conclude $$\lim_{n \to \infty}(1-\frac{a_n}{n})^n=\exp(-e^2)?$$ Please help.
We have $$(1-\frac{a_n}{n})^n=\left[\left(1-\frac{a_n}{n}\right)^{\frac{n}{-a_n}}\right]^{\frac{-a_n}{n}n}=\left[\left(1-\frac{a_n}{n}\right)^{\frac{n}{-a_n}}\right]^{-a_n},$$ with $$\lim_{n\to\infty}\left(1-\frac{a_n}{n}\right)^{\frac{n}{-a_n}}=e$$ and $$\lim_{n\to \infty}(-a_n)=-e^2.$$ Let $A_n=\left(1-\frac{a_n}{n}\right)^{\frac{n}{-a_n}}$ and $B_n=-a_n$; by the standard claim that $A_n^{B_n}\to A^B$ whenever $A_n\to A>0$ and $B_n\to B$, you get the result: the limit is $e^{-e^2}$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/283396", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 3, "answer_id": 0 }
Solving linear first order differential equation with hard integral I'm trying to solve this differential equation: $y'=x-1+xy-y$. After rearranging it I can see that it is a linear differential equation: $$y' + (1-x)y = x-1$$ So the integrating factor is $l(x) = e^{\int(1-x) dx} = e^{(1-x)x}$. That leaves me with an integral that I can't solve... I tried to solve it in Wolfram but the result is nothing I have ever done before in class, so I'm wondering if I made some mistake... This is the integral: $$ye^{(1-x)x} = \int (x-1)e^{(1-x)x} dx$$
A much easier way without an integrating factor: $y'=x-1+xy-y$, $y'=x-1+y(x-1)$, $y'=(x-1)(1+y)$, $\frac{dy}{1+y} = (x-1)\,dx$, $\ln|1+y| = \frac{x^2}{2} - x + C$. And you can do the rest.
{ "language": "en", "url": "https://math.stackexchange.com/questions/283450", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Prove that given a nonnegative integer $n$, there is a unique nonnegative integer $m$ such that $(m-1)^2 ≤ n < m^2$ Prove that given a nonnegative integer $n$, there is a unique nonnegative integer $m$ such that $(m-1)^2 ≤ n < m^2$ My first guess is to use an induction proof, so I started with $n = k = 0$: $(m-1)^2 ≤ 0 < m^2 $ So clearly, there is a unique $m$ satisfying this proposition, namely $m=1$. Now I try to extend it to the inductive step and say that if the proposition is true for any $k$, it must also be true for $k+1$. $(m-1)^2 + 1 ≤ k + 1 < m^2 + 1$ But now I'm not sure how to prove that. Any ideas?
It's too late to answer the question, but if it helps, you can also prove it by contradiction. Assume that there exists a $k\ne m$ satisfying the condition, say $k<m$, so $(k-1)^2\leq n<k^2$. The largest possible such $k$ is $k=m-1$ (smaller values of $k$ fail a fortiori). Then we have $(m-2)^2\leq n<(m-1)^2$, which contradicts the assumed statement $(m-1)^2\leq n<m^2$. So the solution has a unique value.
{ "language": "en", "url": "https://math.stackexchange.com/questions/283515", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 5, "answer_id": 4 }
$\lim_{x \to 0} \frac {(x^2-\sin x^2) }{ (e^ {x^2}+ e^ {-x^2} -2)} $ solution? I recently took a math exam where I had this limit to solve $$ \lim_{x \to 0} \frac {(x^2-\sin x^2) }{ (e^ {x^2}+ e^ {-x^2} -2)} $$ and I thought I did it right, since I proceeded like this: first I applied the Taylor expansion of the terms to second order, but since I found that the orders in the numerator and in the denominator weren't alike, I chose to try and scale down one order, and I found myself with this: $$\frac{(x^2-x^2+o(x^2) )}{( (1+x^2)+(1-x^2)-2+o(x^2) )}$$ which should be: $$\frac{0+o(x^2)}{0+o(x^2)}$$ which should lead to $0$. Well, my teacher marked this wrong, and I think I'm missing something: I either don't understand how to apply Taylor the right way, or my teacher made a mis-correction (I was never able to see where my teacher said I was wrong, so that's why I'm asking you guys). Can someone tell me if I really was wrong, and in case I was, explain how I should have solved this? Thanks a lot.
How is $\frac{0+o(x^2)}{0+o(x^2)}$ zero? You need to expand to a degree high enough to keep something nontrivial after cancellation! Note that $\sin(x^2)=x^2-\frac{x^6}{6}+o(x^{6})$ and $e^{x^2}+e^{-x^2}-2=x^4+o(x^{4})$, hence $$f(x)= \frac{\frac16 x^6 + o(x^6)}{x^4+o(x^4)}=\frac{x^2}{6}\cdot\frac{1 + o(1)}{1+o(1)}\to 0$$ So the value $0$ is in fact correct here, but the second-order expansion alone does not justify it.
{ "language": "en", "url": "https://math.stackexchange.com/questions/283585", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 6, "answer_id": 3 }
How many combinations of coloured dots (with restrictions)? My friend is designing a logo. The logo can essentially be reduced to 24 coloured dots arranged in a circle, and they can be either red or white. We want to produce an individual variation of this logo for each employee. That, if I have worked it out right (since this appears analogous to a 24-bit binary string), means we could have an individual logo for $2^{24}$ employees, obviously way more than we need. But of course, we don't really want logos that don't have a lot of white dots as they may look too sparse. So we stipulate that there must always be at least half + 1 = 13 white dots in the logo. How many combinations does that restrict us to? My initial thought is 12 (half) + 1 + $2^{11}$, but I'm not good enough to prove it. Also, how can we generalise this formula for $x$ dots, $y$ individual colours and at least $n$ colours of a single type? If that's too general, what about just the case $y = 2$ as we have above?
If rotations of the circle are allowed, you need to apply Pólya's coloring theorem. The relevant group for just rotations of 24 elements is $C_{24}$, whose cycle index is: $$ \zeta_{C_{24}}(x_1, \ldots x_{24}) = \frac{1}{24} \sum_{d \mid 24} \phi(d)x_d^{24 / d} = \frac{1}{24} \left( x_1^{24} + x_2^{12} + 2 x_3^{8} + 2 x_4^{6} + 2 x_6^4 + 4 x_8^3 + 4 x_{12}^2 + 8 x_{24} \right) $$ For 13 red and 11 white ones (use $r$ and $w$ for them) you want the coefficient of $r^{13} w^{11}$ in $\zeta_{C_{24}}(r + w, r^2 + w^2, \ldots, r^{24} + w^{24})$. The only term that can provide exponents 13 and 11 is the first one: $$ [r^{13} w^{11}] \zeta_{C_{24}}(r + w, r^2 + w^2, \ldots, r^{24} + w^{24}) = [r^{13} w^{11}] \frac{1}{24} (r + w)^{24} = \frac{1}{24} \binom{24}{13} $$ Flipping over is left as an exercise ;-) (I'm sure that as soon as I post this, somebody will post a simple reason why this is so by considering that 24 is even, and 13 and 11 odd...).
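For what it is worth, the final count can be evaluated directly; since $\gcd(13,24)=1$, every such coloring lies in a free orbit of the rotation group, so the division below is exact (a minimal Python sketch):

    from math import comb

    print(comb(24, 13) // 24)   # 104006 rotation-inequivalent colorings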
{ "language": "en", "url": "https://math.stackexchange.com/questions/283662", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 1 }
Weak convergence of a sequence of characteristic functions I am trying to produce a sequence of sets $A_n \subseteq [0,1] $ such that their characteristic functions $\chi_{A_n}$ converge weakly in $L^2[0,1]$ to $\frac{1}{2}\chi_{[0,1]}$. The sequence of sets $$A_n = \bigcup\limits_{k=0}^{2^{n-1} - 1} \left[ \frac{2k}{2^n}, \frac{2k+1}{2^n} \right]$$ seems like it should work to me, as their characteristic functions look like they will "average out" to $\frac{1}{2} \chi_{[0,1]}$ as needed. However, I'm having trouble completing the actual computation. Let $g \in L^2[0,1]$, then we'd like to show that $$ \lim_{n \to \infty} \int_{[0,1]} \chi_{A_n} g(x) dx = \int_{[0,1]} \frac{1}{2}\chi_{[0,1]} g(x) dx = \frac{1}{2} \int_{[0,1]} g(x) dx $$ We have that $$ \int_{[0,1]} \chi_{A_n} g(x) dx = \sum\limits_{k=0}^{2^{n-1}-1} \int_{\left[ \frac{2k}{2^n}, \frac{2k+1}{2^n} \right] } \chi_{A_n} g(x) dx $$ Now I am stuck, as I don't see how to use a limit argument to show that this goes to the desired limit as $ n \to \infty$. Does anyone have any suggestions on how to proceed? Any help is appreciated! :)
Suggestions: * *First consider the case where $g$ is the characteristic function of an interval. *Generalize to the case where $g$ is a step function. *Use density of step functions in $L^2$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/283737", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 1 }
Homogeneous system of linear equations over $\mathbb{C}$ I have two systems of linear equations and I need to verify if they are indeed the same system, and if they are I must rewrite each equation as a linear combination of the others.
In B, multiply 2nd equation by $i$, add to 1st equation (so $x_3$ disappears), solve for $x_1$ in terms of $x_2$ and $x_4$, substitute this into either original equation of B, solve for $x_3$ in terms of $x_2$ and $x_4$, compare with your answer for A.
{ "language": "en", "url": "https://math.stackexchange.com/questions/283853", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Have I justified that $\forall x \in \mathbb{R}$, $x > 1 \rightarrow x^2 > x$ Have I justified that $\forall x \in \mathbb{R}$, $x > 1 \rightarrow x^2 > x$ Here is what I would do if this were asked on a test and I was told to "justify" the answer. Let $x \in \mathbb{R}$ Assume $x$ is greater than $1$. Then $x * x > x$ , since $x$ is greater than $1$. Therefore $x^2 > x \square$ Not sure how that will fly with the grader or this community. What I would like to know is if I have correctly shown that the statement is true? How would you have justified that this is a true statement? It is these really obviously true or false statements that I have trouble proving.
It indeed does fly. If you multiply both sides of an inequality by a positive quantity, the inequality is preserved.
{ "language": "en", "url": "https://math.stackexchange.com/questions/283909", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Partial Derivatives involving Vectors and Matrices Let $Y$ be an $(N \times 1)$ vector, $X$ be an $N \times M$ matrix and $\lambda$ be an $M \times 1$ vector. I am wondering how I can evaluate the following partial derivative. \begin{align} \frac{\partial (Y-X\lambda)^T (Y-X\lambda)}{\partial \lambda_j} \end{align} where $j = 1, \ldots, M$. I run into such problems fairly often and would very much appreciate it if anyone could post a guide on differentiating expressions involving vectors and matrices (and in particular transposes / products of matrices).
See the entry on Matrix Calculus in Wikipedia, or search for "matrix calculus" on the internet. In your particular case, $$ \frac{\partial (Y-X\lambda)^T (Y-X\lambda)}{\partial\lambda^T}=-2(Y-X\lambda)^T X $$ and hence the partial derivative w.r.t. $\lambda_j$ is the $j$-th entry of the row vector $-2(Y-X\lambda)^T X$.
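One way to see this concretely is to expand the quadratic form first: $$(Y-X\lambda)^T(Y-X\lambda)=Y^TY-2Y^TX\lambda+\lambda^TX^TX\lambda,$$ and differentiating term by term with respect to $\lambda^T$ gives $-2Y^TX+2\lambda^TX^TX=-2(Y-X\lambda)^TX$.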
{ "language": "en", "url": "https://math.stackexchange.com/questions/283981", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Simple proof for uniqueness of solutions of linear ODEs? Consider the system of linear ODEs $\dot{x}(t)=Ax(t)$, $x(0)=x_0\in\mathbb{R}^n$. Does anyone know a simple proof showing that the solutions are unique that does not require resorting to more general existence/uniqueness results (e.g., those relating to the Picard iteration) nor solving for the solutions explicitly?
Since the students are engineers, why don't you want to show them explicit solutions, which surely they'd need to see anyway? If we knew about a matrix exponential $e^{At}$, then to show $x(t) = e^{At}x_0$ let's look at the $t$-derivative of $e^{-At}x(t)$, which is $$ e^{-At}x'(t) + (-Ae^{-At})x(t) = e^{-At}Ax(t) - Ae^{-At}x(t). $$ From the series definition of the matrix exponential, $A$ and $e^{Bt}$ commute if $A$ and $B$ commute, so $A$ and $e^{-At}$ commute. Thus $$ (e^{-At}x(t))' = e^{-At}Ax(t) - Ae^{-At}x(t) = Ae^{-At}x(t) - Ae^{-At}x(t) = 0. $$ Therefore $e^{-At}x(t)$ is a constant vector, and setting $t = 0$ tells us this constant vector has to be $x(0) = x_0$. Thus $e^{-At}x(t) = x_0$, so $x(t) = e^{At}x_0$ if we know that $e^{At}$ and $e^{-At}$ are inverses of each other. Note that this solution can be thought of as a higher-dimensional version of the integration-free proof that the only solution of the 1-dim. ODE $y'(t) = ay(t)$ with $y(0) = y_0$ is $y_0e^{at}$: if $y(t)$ is a solution then the derivative of $e^{-at}y(t)$ is $$ e^{-at}y'(t) - ae^{-at}y(t) = e^{-at}(ay(t)) - ae^{-at}y(t) = 0. $$ Thus $e^{-at}y(t)$ is a constant function, and at $t = 0$ we see the value is $y(0) = y_0$, so $e^{-at}y(t) = y_0$. Thus $y(t) = y_0e^{at}$. In higher dimensions we just need to be more careful about the order of multiplication (e.g., the way the product rule is formulated for matrix-valued functions).
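A quick numerical illustration that $x(t)=e^{At}x_0$ does satisfy $x'(t)=Ax(t)$ (a minimal sketch, assuming numpy and scipy; the matrix is an arbitrary example):

    import numpy as np
    from scipy.linalg import expm

    A = np.array([[0.0, 1.0], [-2.0, -3.0]])
    x0 = np.array([1.0, 0.0])
    t, h = 1.0, 1e-6
    x = expm(A * t) @ x0
    x_dot = (expm(A * (t + h)) @ x0 - x) / h   # forward-difference derivative
    print(np.allclose(x_dot, A @ x, atol=1e-4))   # True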
{ "language": "en", "url": "https://math.stackexchange.com/questions/284061", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 1 }
A question about polynomial rings This may be a trivial question. We say an ideal $I$ in a ring $R$ is $k$-generated iff $I$ is generated by at most $k$ elements of $R$. Let $F$ be a field. Is it true that every ideal in $F[x_1,x_2,\ldots,x_n]$ is $n$-generated? (This is true when $n=1$, because $F[x_1]$ is a PID.) Second question: Is it true that every ideal in $F[x_1,x_2,x_3,\ldots]$ is generated by a countable set of elements of $F[x_1,x_2,x_3,\ldots]$? Thank you
Since Qiaochu has answered your first question, I'll answer the second: yes, every ideal $I\subset F[x_1,x_2,x_3,...]$ is generated by a countable set of elements of $F[x_1,x_2,x_3,...]$. Indeed, let $G_n\subset I_n$ be a finite set of generators for the ideal $I_n=I\cap F[x_1,x_2,x_3,...,x_n]$ of the noetherian ring $F[x_1,x_2,x_3,...,x_n]$. The union $G=\bigcup_n G_n$ is then the required denumerable set generating the ideal $I$. The reason is simply that every polynomial $P\in I$ actually involves only finitely many variables $x_1,...,x_r$ so that $P\in F[x_1,x_2,x_3,...,x_r]$ for some $r$ and thus, since $P\in I_r$, one can write $P=\sum g_i\cdot f_i$ for some $g_i\in G_r\subset G$ and $f_i\in F[x_1,x_2,x_3,...,x_r]$. This proves that $G$ generates $I$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/284127", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 3, "answer_id": 1 }
How many 8-character passwords can be created with given constraints How many unique 8-character passwords can be made from the letters $\{a,b,c,d,e,f,g,h,i,j\}$ if a) The letters $a,b,c$ must appear at least two times. b) The letters $a,b,c$ must appear only once and $a$ and $b$ must appear before $c$. So for the first part I tried: The format of the password would be $aabbccxy$, where $x$ and $y$ can be any of the given characters. So for $xy$, I have $10^2=100$ variations and for the rest, I can shuffle them in $\frac{6!}{(2! \cdot 2! \cdot 2!)}=90$ ways (the division is so they won't repeat), which makes a total of $100*90=9000$ possibilities. Now I don't know how to count the permutations when $x$ and $y$ are in different places. I wanted to do another permutation and multiply by $9000$, this time taking all $8$ characters into account, so I get $\frac{8!}{(2!\cdot 2! \cdot 2!)}$, but when $x$ and $y$ have the same value there will still be repetition. As for the second, I have no idea how to approach it.
For the first: count the number of passwords that do not satisfy the condition, then subtract from the total number of passwords For the second: Lay down your 5 "non-a,b,c" letters in order. There are $7^5$ ways to do this. Then you have to lay down the letters a,b,c in the 6 "gaps" between the 5 letters (don't forget the ends): $|x|x|x|x|x|$ where "|" denotes a gap and $x$ denotes one of the 7 non-a,b,c letters. We just have to count the number of ways to place a,b,c in the gaps. Place c first, and see how many ways you can place a,b such that they appear before c.
{ "language": "en", "url": "https://math.stackexchange.com/questions/284259", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Derive the Quadratic Equation Find the Quadratic Equation whose roots are $2+\sqrt3$ and $2-\sqrt3$. Some basics: * *The general form of a Quadratic Equation is $ax^2+bx+c=0$ *In Quadratic Equation, $ax^2+bx+c=0$, if $\alpha$ and $\beta$ are the roots of the given Quadratic Equation, Then, $$\alpha+\beta=\frac{-b}{a}, \alpha\beta=\frac{c}{a}$$ I am here confused that how we can derive a Quadratic Equations from the given roots
Here $$-\frac ba=\alpha+\beta=2+\sqrt3+2-\sqrt3=4$$ and $$\frac ca=\alpha\beta=(2+\sqrt3)(2-\sqrt3)=2^2-3=1$$ So, the quadratic equation becomes $$x^2-4x+1=0$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/284338", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 5, "answer_id": 0 }
How many possible arrangements for a round robin tournament? How many arrangements are possible for a round robin tournament over an even number of players $n$? A round robin tournament is a competition where $n = 2k$ players play each other once in a heads-up match (like the group stage of a FIFA World Cup). To accommodate this, there are $n-1$ rounds with $\frac{n}{2}$ games in each round. For an arrangement of a tournament, let's say that the matches within an individual round are unordered, but the rounds in the tournament are ordered. For $n$ players, how many possible arrangements of the tournament can there be? ... I don't know if a formal statement is needed, but hey ... Let $P = \{ p_1, \ldots, p_n \}$ be a set of an even $n$ players. Let $R$ denote a round consisting of a set of pairs $(p_i,p_j)$ (denoting a match), such that $0<i<j\leq n$, and such that each player in $P$ is mentioned precisely once in $R$. Let $T$ be a tournament consisting of a tuple of $n-1$ valid rounds $(R_1, \ldots, R_{n-1})$, such that all rounds in $T$ are pair-wise disjoint (no round shares a match). How many valid constructions of $T$ are there for $n$ input players? The answer for 2 players is trivially 1. The answer for 4 players is 6. I believe the answer for 6 players to be 320. But how can this be solved in the general case?
This is almost the definition of a "$1$-factorization of $K_{2k}$", except that a $1$-factorization has an unordered set of matchings instead of a sequence of rounds. Since there are $2k-1$ rounds, this means that there are $(2k-1)!$ times as many tournaments, according to the definition above, as there are $1$-factorizations. Counting $1$-factorizations of $K_{2k}$ seems to be a nontrivial problem; see the Encyclopedia of Mathematics entry. The number of $1$-factorizations of $K_{2k}$ is OEIS sequence A000438. Also, see this paper for a count in the $k=7$ case.
{ "language": "en", "url": "https://math.stackexchange.com/questions/284416", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 1, "answer_id": 0 }
Relation between the notions connected and disconnect, confused In the textbook "Topology without tears" I found the definition. $(X, \tau)$ is diconnected iff there exists open sets $A,B$ with $X = A \cup B$ and $A \cap B = \emptyset$. In Walter Rudin: Principles of Analysis, I found. $E \subseteq X$ is connected iff it is not the union of two nonempty separated sets. Where two sets $A,B$ are separeted iff $A \cap \overline{B} = \emptyset$ and $\overline{A} \cap B = \emptyset$. So I am confused, why is in the first defintion nothing about the concept of separted sets said, for these two definitions are not logical negates of one another????
First, note that one should (in both versions) add that $A,B$ should be nonempty. If $A,B$ are open and disjoint, then also $\overline A$ and $B$ are disjoint as $\overline A$ is the intersection of all closed sets containing $A$, thus $\overline A$ is a subset of the closed set $X\setminus B$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/284507", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
Combinatorial Correctness of one-to-one functions Let $\lbrack k\rbrack$ be the set of integers $\{1, 2, \ldots, k\}$. What is the number of one-to-one functions from $m$ to $n$ if $m \leq n$? My answer is: $\dfrac{n!}{(n-m)!}$ My reasoning is the following: We have an $m$-step, independent process: Step 1: choose the first $m \in \lbrack m \rbrack$ to be mapped to a $n \in \lbrack n \rbrack$ There are $n$ choices. Step 2: choose the second $m \in \lbrack m \rbrack$ to be mapped to a $n \in \lbrack n \rbrack$ There are $n-1$ choices here since we cannot map to the $n$ in the previous step (as we must count one-to-one functions) Repeat this until for $1, 2, \ldots m$. This is $n(n-1)(n-2) \ldots (n-m+1) = \dfrac{n!}{(n-m)!}$ * *Is this correct? *Is my reasoning correct?
It is OK, modulo minor problems. You don't have to select the $m$'s, just go with the natural order 1, 2, ... Take a look at the notation suggested by Knuth et al in "Concrete Mathematics", it really does clean up much clutter.
{ "language": "en", "url": "https://math.stackexchange.com/questions/284576", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
If a and b are relatively prime and ab is a square, then a and b are squares. If $a$ and $b$ are two relatively prime positive integers such that $ab$ is a square, then $a$ and $b$ are squares. I need to prove this statement, so I would like someone to critique my proof. Thanks Since $ab$ is a square, the exponent of every prime in the prime factorization of $ab$ must be even. Since $a$ and $b$ are coprime, they share no prime factors. Therefore, the exponent of every prime in the factorization of $a$ (and $b$) are even, which means $a$ and $b$ are squares.
Yes, it suffices to examine the parity of exponents of primes. Alternatively, and more generally, we can use gcds to explicitly show $\rm\,a,b\,$ are squares. Writing $\,\rm(m,n,\ldots)\,$ for $\rm\, \gcd(m,n,\ldots)\,$ we have Theorem $\rm\ \ \color{#C00}{c^2 = ab}\, \Rightarrow\ a = (a,c)^2,\ b = (b,c)^2\: $ if $\rm\ \color{#0A0}{(a,b,c)} = 1\ $ and $\rm\:a,b,c\in \mathbb N$ Proof $\rm\ \ \ \ (a,c)^2 = (a^2,\color{#C00}{c^2},ac)\, =\, (a^2,\color{#C00}{ab},ac)\,=\, a\,\color{#0A0}{(a,b,c)} = a.\, $ Similarly $\rm \,(b,c)^2 = b.$ Yours is the special case $\rm\:(a,b) = 1\ (\Rightarrow\ (a,b,c) = 1)$. The above proof uses only universal gcd laws (associative, commutative, distributive), so it generalizes to any gcd domain/monoid (where, generally, prime factorizations need not exist, but gcds do exist).
{ "language": "en", "url": "https://math.stackexchange.com/questions/284636", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11", "answer_count": 3, "answer_id": 1 }
$\binom{n}{n+1} = 0$, right? I was looking at the identity $\binom{n}{r} = \binom{n-1}{r-1} + \binom{n-1}{r}, 1 \leq r \leq n$, so in the case $r = n$ we have $\binom{n}{n} = \binom{n-1}{n-1} + \binom{n-1}{n}$ that is $1 = 1 + \binom{n-1}{n}$ thus $\binom{n-1}{n} = 0$, am I right?
This is asking how many ways you can take $n$ items from $n-1$ items - there are none. So you are correct.
{ "language": "en", "url": "https://math.stackexchange.com/questions/284791", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Question about theta of $T(n)=4T(n/5)+n$ I have this recurrence relation $T(n)=4T(\frac{n}{5})+n$ with the base case $T(x)=1$ when $x\leq5$. I want to solve it and find it's $\theta$. I think i have solved it correctly but I can't get the theta because of this term $\frac{5}{5^{log_{4}n}}$ . Any help? $T(n)=4(4T(\frac{n}{5^{2}})+\frac{n}{5})+n$ $=4^{2}(4T(\frac{n}{5^{3}})+\frac{n}{5^{2}})+4\frac{n}{5}+n$ $=...$ $=4^{k}T(\frac{n}{5^{k}})+4^{k-1}\frac{n}{5^{k-1}}+...+4\frac{n}{5}+n$ $=...$ $=4^{m}T(\frac{n}{5^{m}})+4^{m-1}\frac{n}{5^{m-1}}+...+4\frac{n}{5}+n$ Assuming $n=4^{m}$ $=4^{m}T(\lceil(\frac{4}{5})^{m}\rceil)+((\frac{4}{5})^{m-1}+...+1)n$ $=n+\frac{1-(\frac{4}{5})^{m}}{1-\frac{4}{5}}n=n+5n-n^{2}\frac{5}{5^{log_{4}n}}$ $=6n-n^{2}\frac{5}{5^{log_{4}n}}$
An alternative approach is to prove that $T(n)\leqslant5n$ for every $n$. This holds for every $n\leqslant5$ and, if $T(n/5)\leqslant5(n/5)=n$, then $T(n)\leqslant4n+n=5n$. By induction, the claim holds. On the other hand, $T(n)\geqslant n$ for every $n\gt5$, hence $T(n)=\Theta(n)$.
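The bound $T(n)\leqslant 5n$ is also easy to test empirically (a minimal Python sketch, treating $n$ as a real number as in the recurrence):

    def T(n):
        if n <= 5:
            return 1
        return 4 * T(n / 5) + n

    for n in (10, 100, 10_000, 10**6):
        print(n, T(n), 5 * n)   # T(n) stays below 5n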
{ "language": "en", "url": "https://math.stackexchange.com/questions/284848", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
$AB-BA=I$ having no solutions The following question is from Artin's Algebra. If $A$ and $B$ are two square matrices with real entries, show that $AB-BA=I$ has no solutions. I have no idea on how to tackle this question. I tried block multiplication, but it didn't appear to work.
The eigenvalues of $AB$ and $BA$ are equal (over $\mathbb{C}$). If $AB-BA=I$, then $AB=BA+I$, so whenever $\lambda$ is an eigenvalue of $BA$, the number $\lambda+1$ is an eigenvalue of $AB$, hence also of $BA$. Iterating gives the infinitely many distinct eigenvalues $\lambda,\lambda+1,\lambda+2,\dots$, which is impossible for an $n\times n$ matrix, a contradiction. (Even quicker: take traces. $\operatorname{tr}(AB-BA)=\operatorname{tr}(AB)-\operatorname{tr}(BA)=0$, while $\operatorname{tr}(I)=n$, again a contradiction.)
{ "language": "en", "url": "https://math.stackexchange.com/questions/284901", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "37", "answer_count": 6, "answer_id": 5 }
How to find finite trigonometric products I wonder how to prove ? $$\prod_{k=1}^{n}\left(1+2\cos\frac{2\pi 3^k}{3^n+1} \right)=1$$ give me a tip
Let $S_n = \sum_{k=0}^n 3^k = \frac{3^{n+1}-1}{2}$. Then $$3^{n}- S_{n-1} = 3^{n} - \frac{3^{n}-1}{2} = \frac{3^{n}+1}{2} = S_{n-1}+1. $$ Now by induction we have the following product identity for $n \geq 0$: $$ \begin{eqnarray} \prod_{k=0}^{n}\left(z^{3^k}+1+z^{-3^k}\right) &=& \left(z^{3^{n}}+1+z^{-3^{n}}\right)\prod_{k=0}^{n-1}\left(z^{3^k}+1+z^{-3^k}\right) \\ &=& \left(z^{3^{n}}+1+z^{-3^{n}}\right) \left(\sum_{k=-S_{n-1}}^{S_{n-1}} z^k\right) \\ &=&\sum_{k=S_{n-1}+1}^{S_n}z^k + \sum_{k=-S_{n-1}}^{S_{n-1}}z^k+\sum_{k=-S_n}^{-S_{n-1}-1} z^k \\ &=& \sum_{k=-S_n}^{S_n} z^k \end{eqnarray} $$ Now take $z = \exp\left(\frac{\pi \, i}{3^n + 1}\right)$ and use that $z^{3^n+1}=-1$ to get $$\begin{eqnarray} \prod_{k=0}^n\left(1 + 2 \cos \left(\frac{2 \pi \,3^k}{3^n+1}\right)\right) &=& \sum_{k=-S_n}^{S_n}z^{2k} = \frac{z^{2S_n+1}-z^{-2S_n-1}}{z-z^{-1}} = \frac{z^{3^{n+1}}-z^{-3^{n+1}}}{z-z^{-1}} \\ &=& \frac{z^{3(3^n+1)-3} - z^{-3(3^n+1)+3}}{z-z^{-1}} = \frac{z^3-z^{-3}}{z-z^{-1}} = z^2 + 1 + z^{-2} \\ &=& 1 + 2\cos\left(\frac{2\pi}{3^n+1}\right) \end{eqnarray}$$ and your identity follows.
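A numerical check of the identity (a minimal Python sketch):

    import math

    def trig_product(n):
        N = 3**n + 1
        p = 1.0
        for k in range(1, n + 1):
            p *= 1 + 2 * math.cos(2 * math.pi * 3**k / N)
        return p

    for n in range(1, 7):
        print(n, trig_product(n))   # each value is 1 up to rounding error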
{ "language": "en", "url": "https://math.stackexchange.com/questions/284971", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10", "answer_count": 2, "answer_id": 1 }
Irreducibility preserved under étale maps? I remember hearing about this statement once, but cannot remember where or when. If it is true i could make good use of it. Let $\pi: X \rightarrow Y$ be an étale map of (irreducible) algebraic varieties and let $Z \subset Y$ be an irreducible subvariety. Does it follow that $\pi^{-1}(Z)$ is irreducible? If so, why? If not, do you know a counterexample? If necessary $X$ and $Y$ can be surfaces over $\mathbb{C}$, the map $\pi$ of degree two, and $Z$ a hyperplane section (i.e. it defines a very ample line bundle). Thanks! Edit: I assume the varieties $X$ and $Y$ to be projective.
Hmm... what about $\mathbb{A}^1 \setminus \{0\} \rightarrow \mathbb{A}^1 \setminus \{0\}$, with $z \mapsto z^2$? Then the preimage of $1$ is $\{\pm 1\}$, which is not irreducible.
{ "language": "en", "url": "https://math.stackexchange.com/questions/285117", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Finding solutions using trigonometric identities I have an exam tomorrow and it is highly likely that there will be a trig identity on it. To practice I tried this identity: $$2 \sin 5x\cos 4x-\sin x = \sin9x$$ We solved the identity, but we had to move terms from one side to another. My question is: what are the things that you can and cannot do with trig identities? And what must you avoid doing when proving them? Thank you
The intended techniques all follow from using the sine and cosine addition formulas and normalization, which you should have seen before. However, I wanted to point out that a more unified approach to the general problem of proving trig. identities is to work in the complex plane. For instance $e^{ix}=\cos(x)+i\sin(x)$ allows you to derive identities such as $\cos(x)=\frac{e^{ix}+e^{-ix}}{2}$ and $\sin(x)=\frac{e^{ix}-e^{-ix}}{2i}$. Let $w=e^{ix}$. Then you can turn your identity into an equivalent "factorization problem" for a sparse polynomial in $w$, of degree $18$. Try it out!
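For the identity at hand, the intended elementary route is a single product-to-sum step (spelled out here for concreteness): from $2\sin A\cos B=\sin(A+B)+\sin(A-B)$ with $A=5x$, $B=4x$, $$2\sin 5x\cos 4x=\sin 9x+\sin x,$$ and subtracting $\sin x$ from both sides gives the identity.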
{ "language": "en", "url": "https://math.stackexchange.com/questions/285189", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 2 }
limit of the sum $\frac{1}{n+1}+\frac{1}{n+2}+\cdots+\frac{1}{2n}$ Prove that: $\displaystyle \lim_{n\to \infty} \frac{1}{n+1}+\frac{1}{n+2}+\frac{1}{n+3}+\cdots+\frac{1}{2n}=\ln 2$ The only thing I could think of is that it can be written like this: $$ \lim_{n\to \infty} \sum_{k=1}^n \frac{1}{k+n} =\lim_{n\to \infty} \frac{1}{n} \sum_{k=1}^n \frac{1}{\frac{k}{n}+1}=\int_0^1 \frac{1}{x+1} \ \mathrm{d}x=\ln 2$$ Is my answer right? And are there other methods? (I'm sure there are.)
We are going to use the Euler-Mascheroni constant: with $\gamma_n := 1+\frac{1}{2}+\cdots+\frac{1}{n}-\ln n$, we have $\gamma_n\to\gamma$, hence $$\lim_{n\to\infty}\left(\left(1+\frac{1}{2}+\cdots+\frac{1}{2n}-\ln (2n)\right)-\left(1+\frac{1}{2}+\cdots+\frac{1}{n}-\ln n\right)\right)=\lim_{n\to\infty}(\gamma_{2n}-\gamma_{n})=0.$$ Since the quantity inside this limit equals $\left(\frac{1}{n+1}+\cdots+\frac{1}{2n}\right)-\ln 2$, the limit in question is $\ln 2$.
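A quick numeric illustration of the convergence (added for convenience):

```python
# H_{2n} - H_n approaches ln 2 as n grows.
from math import log

for n in [10, 1_000, 100_000]:
    s = sum(1 / k for k in range(n + 1, 2 * n + 1))
    print(n, s, log(2))
```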
{ "language": "en", "url": "https://math.stackexchange.com/questions/285308", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10", "answer_count": 7, "answer_id": 1 }
a theorem in topology Does anyone know of a theorem in topology which states that a compact manifold "parallelizable" with $N$ smooth independent vector fields must be an $N$-torus? And why are the vector fields here parallel to the manifold?
I think you are talking about a theorem due to V.I. Arnold: you can find more details in "Mathematical methods of classical mechanics", chapter 10. Here is the statement. Theorem: Let $M$ be an $n$-dimensional compact and connected manifold and let $Y_{1},...,Y_{n}$ be smooth vector fields on $M$, commuting with each other. If, for each $x \in M$, $(Y_{1}(x),...,Y_{n}(x))$ is a basis of the tangent space to $M$ at $x$, then $M$ is diffeomorphic to $\mathbf{T}^{n}$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/285354", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Why is $\lim_{x \to 0} {\rm li}(n^x)-{\rm li}(2^x)=\log\left(\frac{\log(n)}{\log(2)}\right)$? I'm trying to give at least some partial answers for one of my own questions (this one). There the following arose: Why is $\lim_{x \to 0} {\rm li}(n^x)-{\rm li}(2^x)=\log\left(\frac{\log(n)}{\log(2)}\right)$? Expanding at $x=0$ doesn't look reasonable to me since ${\rm li}(1)=-\infty$ and Wolfram only helps for concrete numbers, see here for example. Would a "$\infty-\infty$" version of L'Hospital work? Any help appreciated. Thanks,
Substituting $t=e^u$ gives $\operatorname{li}(n^x)-\operatorname{li}(2^x)=\int_{2^x}^{n^x}\frac{\mathrm{d}t}{\log(t)}=\int_{x\log(2)}^{x\log(n)}\frac{e^u}{u}\mathrm{d}u$, hence $$ \begin{align} \lim_{x\to0}\int_{2^x}^{n^x}\frac{\mathrm{d}t}{\log(t)} &=\lim_{x\to0}\left(\color{#C00000}{\int_{x\log(2)}^{x\log(n)}\frac{e^u-1}{u}\mathrm{d}u} +\color{#00A000}{\int_{x\log(2)}^{x\log(n)}\frac{1}{u}\mathrm{d}u}\right)\\ &=\color{#C00000}{0}+\lim_{x\to0}\big(\color{#00A000}{\log(x\log(n))-\log(x\log(2))}\big)\\ &=\log\left(\frac{\log(n)}{\log(2)}\right) \end{align} $$ Note Added: since $\lim\limits_{u\to0}\dfrac{e^u-1}{u}=1$, $\dfrac{e^u-1}{u}$ is bounded near $0$; therefore, its integral over an interval whose length tends to $0$ tends to $0$.
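For reassurance, a small numeric check (an illustrative sketch; it assumes mpmath's `li`, the principal-value logarithmic integral):

```python
# li(n^x) - li(2^x) should approach log(log(n)/log(2)) as x -> 0+.
from mpmath import li, log, mpf

n = 5
for x in [mpf('0.1'), mpf('0.01'), mpf('0.001')]:
    print(x, li(n**x) - li(2**x), log(log(n) / log(2)))
```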
{ "language": "en", "url": "https://math.stackexchange.com/questions/285406", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Reflection around a plane, parallel to a line I'm supposed to determine the matrix of the reflection of a vector $v \in \mathbb{R}^{3}$ around the plane $z = 0$, parallel to the line $x = y = z$. I think this means that, denoting the plane by $E$ and the line by $F$, we will have $\mathbb{R}^{3} = E \oplus F$ and thus for a vector $v$, we write $v = z + w$ where $z \in E$ and $w \in F$, and then we set $Rv = z + Rw$? Then I guess we'd have $Rw = -w$ making $Rv = z - w$. Here $R$ denotes the reflection. Is this the correct definition?
The definition is exactly as stated in the question.
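To make the computation concrete (a worked example filling in the question's setup): for a vector $v=(x,y,z)$, the component in $F=\operatorname{span}\{(1,1,1)\}$ must have third coordinate $z$, so $w=z\,(1,1,1)$, and the $E$-part is $(x-z,\,y-z,\,0)$. Hence $$Rv=(x-z,\,y-z,\,0)-z\,(1,1,1)=(x-2z,\;y-2z,\;-z),\qquad R=\begin{pmatrix}1&0&-2\\0&1&-2\\0&0&-1\end{pmatrix},$$ and one checks $R^2=I$, as a reflection should satisfy.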
{ "language": "en", "url": "https://math.stackexchange.com/questions/285490", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Intermediate Value Theorem guarantee I'm doing a review packet for Calculus and I'm not really sure what the question is asking for. The question is: Let $f$ be a continuous function on the closed interval $[-3, 6]$. If $f(-3)=-2$ and $f(6)=3$, what does the Intermediate Value Theorem guarantee? I get what the intermediate value theorem basically means, but I'm not really sure how to explain it.
Since $f(-3)=-2<0<3=f(6)$, we can guarantee that the function has a zero in the interval $[-3,6]$. We cannot conclude it has only one, though (there may be many zeros). EDIT: As has already been pointed out elsewhere, the IVT guarantees the existence of at least one $x\in[-3,6]$ such that $f(x)=c$ for any $c\in[-2,3]$. Note that the fact that there is a zero may be important (for example, you couldn't define a rational function over this domain with this particular function in the denominator), or you may be more interested in the fact that it attains the value $y=1$ for some $x\in(-3,6)$. I hope this helps make the solution a little bit more clear.
{ "language": "en", "url": "https://math.stackexchange.com/questions/285536", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
When the spectral radius of a matrix $B$ is less than $1$ then $B^n \to 0$ as $n$ goes to infinity How does one show the following fact? When the spectral radius of a matrix $B$ is less than $1$, then $B^n \to 0$ as $n$ goes to infinity. Thank you!
There is a proof on the Wikipedia page for spectral radius. Also there you will find the formula $\lim\limits_{n\to\infty}\|B^n\|^{1/n}$ for the spectral radius, from which this fact follows. However, the Wikipedia article's author(s) used the result in your question to prove the formula.
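If one takes Gelfand's spectral radius formula as given (the assumption here, to avoid the circularity just mentioned), the proof is short: since $\lim_{n\to\infty}\|B^n\|^{1/n}=\rho(B)<1$, pick any $q$ with $\rho(B)<q<1$; then there is an $N$ with $\|B^n\|\leq q^n$ for all $n\geq N$, and $q^n\to0$ forces $B^n\to0$.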
{ "language": "en", "url": "https://math.stackexchange.com/questions/285603", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Show that $\frac{f(n)}{n!}=\sum_{k=0}^n \frac{(-1)^k}{k!}$ Consider a function $f$ on the non-negative integers such that $f(0)=1,f(1)=0$ and $f(n)+f(n-1)=nf(n-1)+(n-1)f(n-2)$ for $n \geq 2$. Show that $$\frac{f(n)}{n!}=\sum_{k=0}^n \frac{(-1)^k}{k!}$$ Here $$f(n)+f(n-1)=nf(n-1)+(n-1)f(n-2)$$ $$\implies f(n)=(n-1)(f(n-1)+f(n-2))$$ Then I am stuck.
Let $$ g(n) = \sum_{k=0}^n(-1)^k\frac{n!}{k!} \tag{1} $$ then \begin{align} g(n) &= \sum_{k=0}^n(-1)^k\frac{n!}{k!} \\ &= n\sum_{k=0}^{n-1}(-1)^k\frac{(n-1)!}{k!} + (-1)^n\frac{n!}{n!} \\ &= ng(n-1)+(-1)^n \\ &= (n-1)g(n-1) +g(n-1)+(-1)^n \\ &= (n-1)g(n-1)+\Big((n-1)g(n-2)+(-1)^{n-1}\Big) + (-1)^n\\ &= (n-1)g(n-1) + (n-1)g(n-2) \end{align} so for any function $g$ that fulfills $(1)$ we have that $$ g(n)+g(n-1)=ng(n-1)+(n-1)g(n-2) $$ and with $g(0) = 1 = f(0)$, $g(1) = 0 = f(1)$ we conclude $g \equiv f$. Cheers!
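A short numeric cross-check (illustrative only):

```python
# f from the recurrence f(n) = (n-1)*(f(n-1) + f(n-2)) versus the
# partial sums sum_{k=0}^{n} (-1)^k / k!; the two columns agree.
from math import factorial

f = [1, 0]
for n in range(2, 10):
    f.append((n - 1) * (f[n - 1] + f[n - 2]))

for n in range(10):
    lhs = f[n] / factorial(n)
    rhs = sum((-1) ** k / factorial(k) for k in range(n + 1))
    print(n, lhs, rhs)
```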
{ "language": "en", "url": "https://math.stackexchange.com/questions/285672", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 3, "answer_id": 0 }
maximum modulus principle on $\lbrace z : |f(z)| \geq \alpha \rbrace$ Let $f(z)$ be an entire function that is not identically constant. Show that $$\lbrace z : |f(z)| \geq \alpha \rbrace = \text{cl }\lbrace z : |f(z)| > \alpha \rbrace.$$ This question was on our exam, and it was hinted that we had to apply the maximum modulus principle. I was wondering what that solution looked like, as my proof used the open mapping theorem.
Let's prove this by showing inclusion in both directions. Let $w$ be a limit point of $E = \{z : |f(z)| > a\}$. This means that there is a sequence $\{z_k\} \subset E$ so that $z_k \to w$ as $k \to \infty$. Since $f$ is continuous, it follows that $|f(w)| \ge a$, and hence $\operatorname{cl}(E) \subset \{z : |f(z)| \ge a\}$. Conversely, points $w$ with $|f(w)| > a$ already lie in $E \subset \operatorname{cl}(E)$, so let $w$ be a point with $|f(w)| = a$. By the maximum modulus principle (applied to the nonconstant entire function $f$), every neighborhood of $w$ contains a point $z$ so that $|f(z)| > |f(w)| = a$, i.e. $z \in E$. Thus every neighborhood of $w$ meets $E$, so $w \in \operatorname{cl}(E)$, and $\{z : |f(z)| \ge a\} \subset \operatorname{cl}(E)$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/285725", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
What is the homology group of the sphere with an annular ring? I'm trying to compute the homology groups of $\mathbb S^2$ with an annular ring whose inner circle is a great circle of $\mathbb S^2$ (figure: the space $X$). Calling this space $X$, $H_0(X)$ is easy: the space is path-connected, hence connected, so $H_0(X)=\mathbb Z$. When we triangulate this space, it's easy to see that $H_2(X)=\mathbb Z$. But I've found $H_1(X)$ difficult to compute; I don't know the fundamental group of it, so I can't use the Hurewicz theorem. I'm trying to find this using the triangulation, but there are many calculations. I have the following questions: 1. How can we use the Mayer-Vietoris theorem in this case? 2. What is the fundamental group of this space? 3. I know the homology groups of the sphere and the annulus; can this help in this case? I need help, please. Thanks a lot.
If I understand your space correctly, you can deformation retract it onto $S^2$: the annulus retracts onto its inner boundary circle, which already lies on the sphere. Hence $H_1(X)=H_1(S^2)=0$, and likewise $\pi_1(X)=\pi_1(S^2)=0$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/285793", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
vanishing of higher derived structure sheaf Given a field $k$ and a proper integral scheme $f:X\rightarrow \operatorname{Spec}(k)$, is it true that $f_{*}\mathcal{O}_{X}\cong \mathcal{O}_{\operatorname{Spec}(k)}$? Consider the normalization $\nu:X_1\rightarrow X$; let $g:X_1\rightarrow \operatorname{Spec}(k)$ be the structure morphism and assume that there is a quasi-isomorphism $\mathcal{O}_{\operatorname{Spec}(k)}\cong \mathbb{R}^{\cdot}g_{*}\mathcal{O}_{X_1}$. What can I say about $\mathbb{R}^{\cdot}f_{*}\mathcal{O}_{X}$? Thanks
The isomorphism $f_{*}\mathcal O_X=k$ holds if $k$ is algebraically closed. Otherwise, take a finite non-trivial extension $k'/k$ and $X=\mathrm{Spec}(k')$; you will get a counterexample. A sufficient condition over an arbitrary field is: $X$ proper, geometrically connected (necessary) and geometrically reduced. This amounts to proving the above isomorphism over an algebraically closed field. In this case, by general results $f_*\mathcal O_X=H^0(X, \mathcal O_X)$ is a finite $k$-algebra, integral because $X$ is integral, hence equal to $k$ (there is a direct proof using the properness of any morphism from $X$ to any separated algebraic variety). For your second question, consider the case of singular rational curves. I think there is not much one can say about the $H^1$; it can be arbitrarily large.
{ "language": "en", "url": "https://math.stackexchange.com/questions/285855", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
True/false question: limit of absolute function I have this true/false question that I think is true because I cannot really find a counterexample, but I find it hard to really prove it. I tried with the regular epsilon-delta definition of a limit but I can't find a closing proof. Can anyone help? If $\lim_{x \rightarrow a} | f(x) | = | A |$ then $ \lim_{x \rightarrow a}f(x) = A $
Let $f$ be the constant function $1$ and $A:=-1$. Then $\lim_{x\to a}|f(x)|=1=|A|$, while $\lim_{x\to a}f(x)=1\neq-1=A$, so the statement is false.
{ "language": "en", "url": "https://math.stackexchange.com/questions/285917", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Need help with an integration word problem. This appears to be unsolvable due to lack of information. I'm not sure I understand what to do with what's given to me to solve this. I know it has to do with the relationship between velocity, acceleration and time. At a distance of $45\,\mathrm{m}$ from a traffic light, a car traveling $15\,\mathrm{m/s}$ is brought to a stop at a constant deceleration. a. What is the value of the deceleration? b. How far has the car moved when its speed has been reduced to $3\,\mathrm{m/s}$? c. How many seconds would the car take to come to a full stop? Can somebody give me some hints as to where I should start? All I know from reading this is that $v_0=15\,\mathrm{m/s}$, and I have no idea what to do with the $45\,\mathrm{m}$ distance. I can't tell if it starts to slow down when it gets to $45\,\mathrm{m}$ from the light, or stops $45\,\mathrm{m}$ from the light. Edit: I do know that since acceleration is the change in velocity over a change in time, $v(t)=\int a\ dt=at+C$, where $C=v_0$. Also, $s(t)=\int (v_{0}+at)\ dt=s_0+v_0t+\frac{1}{2}at^2$. But I don't see a time variable to plug in to get the answers I need... or am I missing something?
Hint: Constant acceleration means that the velocity $v(t)=v(0)+at$ where $a$ is the acceleration. The position is then $s(t)=s(0)+v(0)t+\frac 12 at^2$. You should be able to use these to answer the questions.
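Carrying the hint through under the natural reading of the problem (an assumption: braking begins $45\,\mathrm{m}$ from the light and the car stops exactly at it), take $s(0)=0$, $v(0)=15$, $s(T)=45$, $v(T)=0$: $$0=15+aT,\qquad 45=15T+\tfrac{1}{2}aT^2\ \Longrightarrow\ 45=7.5\,T\ \Longrightarrow\ T=6\ \mathrm{s},\quad a=-2.5\ \mathrm{m/s^2}.$$ For part (b), $v^2=v_0^2+2as$ gives $3^2=15^2+2(-2.5)s$, so $s=\frac{225-9}{5}=43.2\ \mathrm{m}$.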
{ "language": "en", "url": "https://math.stackexchange.com/questions/285975", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Linear independence in Finite Fields How can we define linear independence in vectors over $\mathbb{F}_{2^m}$? Let vectors $v_1,v_2,v_3 \in \mathbb{F}_{2^m}$. If $v_1,v_2,v_3$ are linearly independent, then $\alpha_1v_1+\alpha_2v_2+\alpha_3v_3=0$ if and only if $\alpha_1=\alpha_2=\alpha_3=0$, and $\alpha_1,\alpha_2,\alpha_3 \in \mathbb{F}_2$? or $\mathbb{F}_{2^m}$? Thanks in advance
Linear independence is defined the same way in every vector space: $\{v_i\mid i\in I\}$ is a linearly independent subset of $V$ if $\sum_{i=1}^n \lambda_i v_i=0$ implies $\lambda_i=0$ for all $i$, where the $\lambda_i$ are in the field. In short, you definitely would not take the $\lambda_i$ from $F^m$. You are probably thinking of multiplying coordinate-wise. The definition of a linear combination, though, takes coefficients from the field (and $F^m$ is not a field). To address the edits, which radically changed the question: linear independence depends on the field (no pun intended). If you want the vectors to be linearly independent over $F$, then the $\lambda_i$ can only come from $F$. If you want them to be linearly independent over $F_{2^m}$, then the $\lambda_i$ are all from $F_{2^m}$. For a simple example, look at $F_2$ and $F_8$. If $x\in F_8\setminus F_2$, then $\{1,x\}$ is linearly independent over $F_2$, but it is linearly dependent over $F_8$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/286027", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
How to efficiently compute the determinant of a matrix using elementary operations? Need help to compute $\det A$ where $$A=\left(\begin{matrix}36&60&72&37\\43&71&78&34\\44&69&73&32\\30&50&65&38\end{matrix} \right)$$ How would one use elementary operations to calculate the determinant easily? I know that $\det A=1$
I suggest Gaussian elimination to upper-triangular form (or further), keeping track of the effect of each elementary operation on the determinant: swapping two rows multiplies it by $-1$, scaling a row by $c$ multiplies it by $c$, and adding a multiple of one row to another leaves it unchanged.
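A small sketch of this method in code (illustrative; exact arithmetic via fractions so no rounding creeps in):

```python
# Gaussian elimination tracking the sign flips from row swaps;
# row-addition steps do not change the determinant.
from fractions import Fraction

def det(rows):
    a = [[Fraction(x) for x in row] for row in rows]
    n, sign = len(a), 1
    for i in range(n):
        p = next((r for r in range(i, n) if a[r][i] != 0), None)
        if p is None:
            return Fraction(0)          # a zero column: determinant is 0
        if p != i:
            a[i], a[p] = a[p], a[i]     # swap multiplies det by -1
            sign = -sign
        for r in range(i + 1, n):
            m = a[r][i] / a[i][i]
            a[r] = [x - m * y for x, y in zip(a[r], a[i])]
    result = Fraction(sign)
    for i in range(n):
        result *= a[i][i]               # product of the pivots
    return result

A = [[36, 60, 72, 37], [43, 71, 78, 34], [44, 69, 73, 32], [30, 50, 65, 38]]
print(det(A))   # prints 1, matching the expected determinant
```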
{ "language": "en", "url": "https://math.stackexchange.com/questions/286080", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 0 }
Limit question with absolute value: $ \lim_{x\to 4^{\large -}}\large \frac{x-4}{|x-4|} $ How would I solve the following limit, as $\,x\,$ approaches $\,4\,$ from the left? $$ \lim_{x\to 4^{\large -}}\frac{x-4}{|x-4|} $$ Do I have to factor anything?
Hint: If $x \lt 4, |x-4|=4-x$. Now you can just divide.
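Spelling the hint out: for $x<4$, $$\frac{x-4}{|x-4|}=\frac{x-4}{4-x}=-1,$$ so the one-sided limit equals $-1$.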
{ "language": "en", "url": "https://math.stackexchange.com/questions/286140", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 0 }
Function in $L^1([0,1])$ that is not locally in any $L^{\infty}$ Can we find a function such that $f\in L^1([0,1])$ and for any $0\leq a<b\leq 1$ we have that $||f||_{L^{\infty}([a,b])}=\infty$?
Yes, we can. Consider $\{r_j,j\in\Bbb N\}$ an enumeration of the rational numbers of $[0,1]$ and $$f(x):=\sum_{j=1}^{+\infty}\frac{2^{-j}}{\sqrt{|x-r_j|}}.$$
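To see that this works (reasoning spelled out for completeness): each term satisfies $\int_0^1\frac{\mathrm{d}x}{\sqrt{|x-r_j|}}=2\sqrt{r_j}+2\sqrt{1-r_j}\leq 2\sqrt{2}$, so $\|f\|_{L^1}\leq 2\sqrt{2}\sum_j 2^{-j}<\infty$. On the other hand, any interval $[a,b]$ with $a<b$ contains some $r_j$, and near it $f(x)\geq 2^{-j}|x-r_j|^{-1/2}$ exceeds every bound on a set of positive measure, so $\|f\|_{L^\infty([a,b])}=\infty$.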
{ "language": "en", "url": "https://math.stackexchange.com/questions/286222", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 1, "answer_id": 0 }
Nondeterministic finite automaton proof I am having a really hard time working out the problem below. I am not sure I am even headed in the right direction with this logic. Swapping the accept and reject states alone is not sufficient to accept all strings of the language $\bar L$. We would need to swap the transition directions as well for $\bar L$ to be accepted. If I am not mistaken, $\bar L$ (L with the bar on top) is simply not L ($\sim$L), right? As an example, I created an NFA that accepts any string that has at least two zeros. Swapping the accept states with the reject states didn't really help me prove anything by counterexample. This is the problem: Your friend thinks that by swapping accept and reject states of an NFA that accepts a language L, the resulting NFA must accept the language $\bar L$. Prove by counterexample that your friend is incorrect.
$\bar L$ is the complement of $L$, that is, $\bar L$ is the set of strings that are not in $L$. Hint: make a nondeterministic automaton that accepts every string, built in such a way that if you switch the accepting and rejecting states it still accepts every string.
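One concrete instantiation of this hint (an illustrative example): over $\Sigma=\{a\}$, take states $p$ (start, accepting) and $q$ (non-accepting), an $\varepsilon$-transition $p\to q$, and loops $p\xrightarrow{a}p$, $q\xrightarrow{a}q$. Every string has a path ending in $p$, so $L=\Sigma^*$ and $\bar L=\emptyset$. After swapping accept and reject states, every string still has a path ending in $q$, now accepting, so the swapped NFA again accepts all of $\Sigma^*\neq\emptyset=\bar L$.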
{ "language": "en", "url": "https://math.stackexchange.com/questions/286295", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }