Q | A | meta
---|---|---
Prove that the eigenvalues of a real symmetric matrix are real I am having a difficult time with the following question. Any help will be much appreciated.
Let $A$ be an $n×n$ real matrix such that $A^T = A$. We call such matrices “symmetric.” Prove that the eigenvalues of a real symmetric matrix are real (i.e. if $\lambda$ is an eigenvalue of $A$, show that $\lambda = \overline{\lambda}$ )
| Consider the real operator $$u := (x \mapsto Ax)$$ for all $x \in \mathbb{R}^{n}$ and the complex operator $$\tilde{u} := (x \mapsto Ax) $$ for all $x \in \mathbb{C}^{n}$. Both operators have the same characteristic polynomial, say $p(\lambda) = \det(A - \lambda I)$. Since $A$ is symmetric, $\tilde{u}$ is a Hermitian operator. By the spectral theorem for Hermitian operators, all the eigenvalues (i.e. the roots of $p(\lambda)$) of $\tilde{u}$ are real. Hence, all the eigenvalues (i.e. the roots of $p(\lambda)$) of $u$ are real.
We have shown that the eigenvalues of a symmetric matrix are real numbers as a consequence of the fact that the eigenvalues of a Hermitian matrix are real.
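(For completeness, a hedged sketch of the direct computation that the Hermitian fact rests on, not claimed to be the answerer's intended route: if $Ax=\lambda x$ with $x\in\mathbb{C}^n$, $x\ne 0$, then since $A$ is real and $A^T=A$,
$$\overline{\lambda}\,\overline{x}^{T}x=\overline{(Ax)}^{T}x=\overline{x}^{T}A^{T}x=\overline{x}^{T}Ax=\lambda\,\overline{x}^{T}x,$$
and $\overline{x}^{T}x=\sum_i|x_i|^2>0$, hence $\lambda=\overline{\lambda}$.)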
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/354115",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "45",
"answer_count": 9,
"answer_id": 8
} |
Help in proving that $\nabla\cdot (r^n \hat r)=(n+2)r^{n-1}$
Show that$$\nabla \cdot (r^n \hat r)=(n+2)r^{n-1}$$ where $\hat r$ is the unit vector along $\bar r$.
Please give me some hint. I am clueless as of now.
| You can also use Cartesian coordinates, using the fact that $r \hat{r} = \vec{r} = (x,y,z)$.
\begin{align}
r^n \hat{r} &= r^{n-1} (x,y,z) \\
\nabla \cdot r^n \hat{r} & = \partial_x(r^{n-1}x) + \partial_y(r^{n-1}y) + \partial_z(r^{n-1}z)
\end{align}
Each term can be calculated:
$\partial_x(r^{n-1}x) = r^{n-1} + x (n-1) r^{n-2} \partial_x r$
$\partial_x r = \frac{x}{r}$. (Here, I used the fact that $r = \sqrt{x^2+y^2+z^2}$.)
The terms involving $y$ and $z$ are exactly the same with $y$ and $z$ replacing $x$.
And so,
\begin{align}
\nabla \cdot r^n \hat{r}& = r^{n-1} + x(n-1)r^{n-2}\frac{x}{r} + r^{n-1} + y(n-1)r^{n-2}\frac{y}{r} + r^{n-1} + z(n-1)r^{n-2}\frac{z}{r}\\
& = 3r^{n-1}+(n-1)(x^2 + y^2 + z^2)r^{n-3} \\
& = 3r^{n-1} + (n-1)r^2 r^{n-3} \\
& = (n+2)r^{n-1}
\end{align}
Of course, polar is the easiest :)
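As a sanity check (my addition, not part of the original answer), the Cartesian computation can be spot-checked with SymPy; the point $(1,2,3)$ and the exponent $n=5$ are arbitrary choices:

```python
import sympy as sp

x, y, z, n = sp.symbols('x y z n', positive=True)
r = sp.sqrt(x**2 + y**2 + z**2)

# components of r^n * r_hat = r^(n-1) * (x, y, z)
F = (r**(n - 1) * x, r**(n - 1) * y, r**(n - 1) * z)
div = sp.diff(F[0], x) + sp.diff(F[1], y) + sp.diff(F[2], z)

# spot-check at an arbitrary point and exponent: both printed numbers should agree
vals = {x: 1, y: 2, z: 3, n: 5}
print(div.subs(vals).evalf(), ((n + 2) * r**(n - 1)).subs(vals).evalf())
```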
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/354179",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
} |
Is there a continuous function with these properties (not piecewise) In short, I'm wondering if I can find a continuous non-piecewise function with these properties. I've found one that was close but not perfect. It's actually really useful to have something like this to scale it. Sorry about the lack of formatting (EDIT: Thank you)
$$\begin{aligned}
f(0) &= 1\\
f(x) &> 0\\
f'(0) &> 1\\
f'(x) &> 0\\
f''(0) &= 0\\
\lim_{x\to\infty} f '(x) &= 1 \\
\lim_{x\to -\infty} f(x) &= 0
\end{aligned}
$$
The second and fourth constraints were added afterwards, so keep that in mind while viewing comments/answers
Bonus points if you can find a function that has an elementary integral and that approaches y = x + 1
I've managed to get a function that displays all of the properties except the inflection at zero (and obviously the decrease in $f'(x)$), being hyperbolic (simply $\sqrt{1 + x^2 / 4}+ x / 2$).
The actual purpose isn't for mathematical purposes, it's for a program. In short I need to normalize a value so that it cannot drop below 0 but has no cap therein, and should approach additivity but with diminishing returns, rather than increasing that I would get with the hyperbolic one above. Thus, for most of the positive portion I need it to be concave down. If it's integrable I can do more things with it without breaking time-independence but most of the time it won't matter.
| After playing around with various compositions and games, here's an analytic one:
$$f(x) = \frac 2 \pi \frac {e^x} {e^x + 1} x \arctan x + e^{2x - (2 + \frac 1 \pi)x^2}$$
The first part gives you the limits at $\pm \infty$ and the second part makes adjustments at $0$.
So what was this good for anyway? :)
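A quick SymPy sanity check of the listed properties (my addition, not the answerer's; it only evaluates at $0$ exactly and at large $|x|$ numerically rather than proving the limits):

```python
import sympy as sp

x = sp.symbols('x')
c = 2 + 1 / sp.pi
f = (2 / sp.pi) * sp.exp(x) / (sp.exp(x) + 1) * x * sp.atan(x) + sp.exp(2 * x - c * x**2)
fp = sp.diff(f, x)

print(f.subs(x, 0))                                # 1
print(fp.subs(x, 0))                               # 2  (> 1)
print(sp.simplify(sp.diff(f, x, 2).subs(x, 0)))    # 0
print(fp.subs(x, 50).evalf(), f.subs(x, -50).evalf())  # ~1 and ~0
```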
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/354245",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 3,
"answer_id": 0
} |
Condition for $\det(A^{T}A)=0$ Is it always true that
$\det(A^{T}A)=0$ for an $n \times m$ matrix $A$ with $n<m$?
From some notes I am reading on Regression analysis, and from some trials, it would appear this is true.
It is not a result I have seen, surprisingly.
Can anyone provide a proof?
Thanks.
| From the way you wrote it, the product $A^TA$ is of size $m \times m$. However, its rank is at most $n$, which is smaller than $m$. The matrix $A^T A$, being square and of non-maximal rank, has determinant $0.$
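A small numerical illustration (my addition; the sizes $3\times 5$ and the random matrix are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 3, 5                      # n < m
A = rng.standard_normal((n, m))
M = A.T @ A                      # m x m, but rank(M) <= rank(A) <= n < m

print(M.shape)                   # (5, 5)
print(np.linalg.matrix_rank(M))  # at most 3
print(np.linalg.det(M))          # ~0, up to floating-point noise
```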
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/354323",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 0
} |
Proof that the sum of the cubes of any three consecutive positive integers is divisible by three. So this question has less to do with the proof itself and more to do with whether my chosen method of proof is evidence enough. It can actually be shown by the Principle of Mathematical Induction that the sum of the cubes of any three consecutive positive integers is divisible by 9, but this is not what I intend to show and not what the author is asking. I believe that the PMI is not the author's intended path for the reader, hence why they asked to prove divisibility by 3. So I did a proof without using the PMI. But is it enough?
It's from Beachy-Blair's Abstract Algebra Section 1.1 Problem 21. This is not for homework, I took Abstract Algebra as an undergraduate. I was just going through some problems that I have yet to solve from the textbook for pleasure.
Question: Prove that the sum of the cubes of any three consecutive positive integers is divisible by 3.
So here's my proof:
Let $a \in \mathbb{Z}^+$.
Define \begin{equation} S(x) = x^3 + (x+1)^3 + (x+2)^3 \end{equation}
So,
\begin{equation}S(a) = a^3 + (a+1)^3 + (a+2)^3\end{equation}
\begin{equation}S(a) = a^3 + (a^3 + 3a^2 + 3a + 1) + (a^3 +6a^2 + 12a +8) \end{equation}
\begin{equation}S(a) = 3a^3 + 9a^2 + 15a + 9 \end{equation}
\begin{equation}S(a) = 3(a^3 + 3a^2 + 5a + 3) \end{equation}
Hence, $3 \mid S(a)$.
QED
| Your solution is fine, provided you intended to prove that the sum is divisible by $3$.
If you intended to prove divisibility by $9$, then you've got more work to do!
If you're familiar with working $\pmod 3$, note @Math Gems comment/answer/alternative. (Though to be honest, I would have proceeded as did you, totally overlooking the value of Math Gems approach.)
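For reference, here is a sketch of the mod-$3$ route alluded to above (my wording, not necessarily Math Gems' exact argument): since $n^3\equiv n\pmod 3$ for every integer $n$ (check $n\equiv 0,1,2$, or use Fermat's little theorem),
$$x^3+(x+1)^3+(x+2)^3\equiv x+(x+1)+(x+2)=3x+3\equiv 0\pmod 3.$$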
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/354384",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "12",
"answer_count": 8,
"answer_id": 0
} |
If $J$ is the $n×n$ matrix of all ones, and $A = (l−b)I +bJ$, then $\det(A) = (l − b)^{n−1}(l + (n − 1)b)$ I am stuck on how to prove this by induction.
Let $J$ be the $n×n$ matrix of all ones, and let $A = (l−b)I +bJ$. Show that $$\det(A) = (l − b)^{n−1}(l + (n − 1)b).$$
I have shown that it holds for $n=2$, and I'm assuming that it holds for the $n=k$ case, $$(l-b)^{k-1}(l+(k-1)b)$$ but I'm having trouble proving that it holds for the $k+1$ case. Please help.
| I think that it would be better to use $J_n$ for the $n \times n$ matrix of all ones, (and similarly $A_n, I_n$) so it is clear what the dimensions of the matrices are.
Proof by induction on $n$ that $\det(A_n)=(l-b)^n+nb(l-b)^{n-1}$:
When $n=1, 2$, this is easy to verify. We have $\det(A_1)=\det(l)=l=(l-b)^1+b(l-b)^0$ and $\det(A_2)=\det\left(\begin{array}{cc} l & b \\ b & l \end{array}\right)=l^2-b^2=(l-b)^2+2b(l-b)$.
Suppose that the statement holds for $n=k$. Consider $$A_{k+1}=(l-b)I_{k+1}+bJ_{k+1}=\left(\begin{array}{ccccc} l & b & b & \ldots & b \\ b & l & b & \ldots & b \\ \ldots & \ldots & \ldots &\ldots &\ldots \\ b & b & b & \ldots & l \end{array}\right)$$
Now subtracting the second row from the first gives
\begin{align}
\det(A_{k+1})& =\det\left(\begin{array}{ccccc} l & b & b & \ldots & b \\ b & l & b & \ldots & b \\ \ldots & \ldots & \ldots &\ldots &\ldots \\ b & b & b & \ldots & l \end{array}\right) \\
& =\det\left(\begin{array}{ccccc} l-b & b-l & 0 & \ldots & 0 \\ b & l & b & \ldots & b \\ \ldots & \ldots & \ldots &\ldots &\ldots \\ b & b & b & \ldots & l \end{array}\right) \\
& =(l-b)\det(A_k)-(b-l)\det\left(\begin{array}{cccccc} b & b & b & b & \ldots & b \\ b & l & b & b & \ldots & b \\ b & b & l & b & \ldots & b \\ \ldots & \ldots & \ldots &\ldots &\ldots & \ldots \\ b & b & b & b & \ldots & l \end{array}\right)
\end{align}
Now taking the matrix in the last line above and subtracting the first row from all other rows gives an upper triangular matrix:
\begin{align}
\det\left(\begin{array}{cccccc} b & b & b & b & \ldots & b \\ b & l & b & b & \ldots & b \\ b & b & l & b & \ldots & b \\ \ldots & \ldots & \ldots &\ldots &\ldots & \ldots \\ b & b & b & b & \ldots & l \end{array}\right) & =\det\left(\begin{array}{cccccc} b & b & b & b & \ldots & b \\ 0 & l-b & 0 & 0 & \ldots & 0 \\ 0 & 0 & l-b & 0 & \ldots & 0 \\ \ldots & \ldots & \ldots &\ldots &\ldots & \ldots \\ 0 & 0 & 0 & 0 & \ldots & l-b \end{array}\right) \\
& =b(l-b)^{k-1}
\end{align}
Therefore we have (using the induction hypothesis)
\begin{align}
\det(A_{k+1}) & =(l-b)\det(A_k)-(b-l)(b(l-b)^{k-1}) \\
& =(l-b)((l-b)^k+kb(l-b)^{k-1})+(l-b)(b(l-b)^{k-1}) \\
& =(l-b)^{k+1}+(k+1)b(l-b)^k
\end{align}
We are thus done by induction.
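A quick symbolic check of the closed form for small $n$ (my addition, using SymPy; the helper name is mine):

```python
import sympy as sp

l, b = sp.symbols('l b')

def check(n):
    A = (l - b) * sp.eye(n) + b * sp.ones(n, n)
    formula = (l - b)**(n - 1) * (l + (n - 1) * b)
    return sp.simplify(A.det() - formula) == 0

print([check(n) for n in range(1, 6)])   # [True, True, True, True, True]
```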
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/354453",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
} |
Units (invertibles) of polynomial rings over a field
If $R$ is a field, what are the units of $R[X]$?
My attempt: Let $f,g \in R[X]$ and $f(X)g(X)=1$. Then the only solution for the equation is both $f,g \in {R}$. So $U(R[X])=R\setminus\{0\}$, the nonzero elements of $R$.
Is this correct ?
| If $f=a_0+a_1x+\cdots+a_mx^m$ has degree $m$, i.e. $a_m\ne 0$, and $fg=1$ for some $g=b_0+\cdots+b_n x^n$ (and $b_n\ne 0$), then observe that $$0=\deg(1)=\deg(fg)=\deg(a_0b_0+\cdots+a_mb_nx^{n+m})=m+n$$ as $a_mb_n\ne 0$. Hence $m+n=0$ and so $m=n=0$ as $m,n\ge 0$. Hence $f\in R^*$ and $R[x]^*=R^*$ (the star denotes the set of units).
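(A side remark, my addition: the step $a_mb_n\ne 0$ is exactly where the field, or more generally integral-domain, hypothesis enters. Over a coefficient ring with zero divisors the conclusion can fail: in $(\mathbb{Z}/4\mathbb{Z})[x]$,
$$(1+2x)^2=1+4x+4x^2=1,$$
so $1+2x$ is a unit of positive degree.)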
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/354505",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "13",
"answer_count": 1,
"answer_id": 0
} |
Euler's proof for the infinitude of the primes I am trying to recast the proof of Euler for the infinitude of the primes in modern mathematical language, but am not sure how it is to be done. The statement is that:
$$\prod_{p\in P} \frac{1}{1-1/p}=\prod_{p\in P} \sum_{k\geq 0} \frac{1}{p^k}=\sum_n\frac{1}{n}$$
Here $P$ is the set of primes. What bothers me is the second equality above, which is obtained by the distributive law, applied not necessarily finitely many times. Is that justified?
| It might be instructive to see the process of moving from a heuristic argument to a rigorous proof.
Probably the simplest thing to do when considering a heuristic argument involving infinite sums (or infinite products or improper integrals or other such things) is to consider its finite truncations. i.e. what can we do with the following?
$$\prod_{\substack{p \in P \\ p < B}} \frac{1}{1 - 1/p}$$
Well, we can repeat the first step easily:
$$\prod_{\substack{p \in P \\ p < B}} \frac{1}{1 - 1/p}
= \prod_{\substack{p \in P \\ p < B}} \sum_{k=0}^{+\infty} \frac{1}{p^k}$$
Because all summations involved are all absolutely convergent (I think that's the condition I want? my real analysis is rusty), we can distribute:
$$ \ldots = \sum_{n} \frac{1}{n} $$
where the summation is over only those integers whose prime factors are all less than $B$.
At this point, it is easy to make two very rough bounds:
$$ \sum_{n=1}^{B-1} \frac{1}{n} \leq \prod_{\substack{p \in P \\ p < B}} \frac{1}{1 - 1/p}
\leq \sum_{n=1}^{+\infty} \frac{1}{n} $$
And now, we can take the limit as $B \to +\infty$ and apply the squeeze theorem:
$$ \prod_{\substack{p \in P}} \frac{1}{1 - 1/p}
= \sum_{n=1}^{+\infty} \frac{1}{n} $$
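A small numerical illustration of the squeeze (my addition; the cutoffs $B$ are arbitrary). The truncated product equals the sum of $1/n$ over the $B$-smooth integers, so it dominates the harmonic sum up to $B-1$, and both grow without bound:

```python
from sympy import primerange

def truncated_product(B):
    prod = 1.0
    for p in primerange(2, B):
        prod *= 1.0 / (1.0 - 1.0 / p)
    return prod

def harmonic(B):
    return sum(1.0 / n for n in range(1, B))

for B in (10, 100, 1000, 10000):
    # harmonic(B) <= truncated_product(B), and both keep growing with B
    print(B, round(harmonic(B), 3), round(truncated_product(B), 3))
```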
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/354645",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9",
"answer_count": 1,
"answer_id": 0
} |
If a group has 14 men and 11 women, how many different teams can be made with $6$ people that contains exactly $4$ women?
A group has $14$ men and $11$ women.
(a) How many different teams can be made with $7$ people?
(b) How many different teams can be made with $6$ people that contains exactly $4$ women?
Answer key to a is $257$ but I can't figure out how to get $257$? There's no answer key to b though, but here's my attempt:
$$\binom{25}{6} - \left[\binom{14}{6} + \binom{14}{5} + \binom{14}{4} + \binom{14}{3} + \binom{14}{1} + \binom{11}{6} + \binom{11}{5}\right]$$
What I'm trying to do here is subtracting all men, all women, 5 men, 4 men, 3 men, 1 men, and 5 women team from all possible combination of team.
Thanks
| Hint:
For the first one, the number of ways you can choose $k$ women and $m$ men is $\binom{11}{k}\cdot\binom{14}{m}$, summed over all $k+m=7$.
For the second one, the number of ways you can choose $4$ women is $\binom{11}{4}$
AND the number of ways of choosing $2$ men is $\binom{14}{2}$; when you have an AND, you gotta multiply them.
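Following the corrected hint, a quick computation (my addition; `math.comb` requires Python 3.8+):

```python
from math import comb

# (b): exactly 4 of the 11 women AND 2 of the 14 men
print(comb(11, 4) * comb(14, 2))          # 330 * 91 = 30030

# (a): sum over k women and 7-k men; Vandermonde's identity gives comb(25, 7)
print(sum(comb(11, k) * comb(14, 7 - k) for k in range(0, 8)), comb(25, 7))
```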
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/354735",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Another Series $\sum\limits_{k=2}^\infty \frac{\log(k)}{k}\sin(2k \mu \pi)$ I ran across an interesting series in a paper written by J.W.L. Glaisher. Glaisher mentions that it is a known formula but does not indicate how it can be derived.
I think it is difficult.
$$\sum_{k=2}^\infty \frac{\log(k)}{k}\sin(2k \mu \pi) = \pi \left(\log(\Gamma(\mu)) +\frac{1}{2}\log \sin(\pi \mu)-(1-\mu)\log(\pi)- \left(\frac{1}{2}-\mu\right)(\gamma+\log 2)\right)$$
Can someone suggest a method of attack?
$\gamma$ is the Euler-Mascheroni Constant.
Thank You!
| It suffices to do these integrals:
$$
\begin{align}
\int_0^1 \log(\Gamma(s))\;ds &= \frac{\log(2\pi)}{2}
\tag{1a}\\
\int_0^1 \log(\Gamma(s))\;\cos(2k \pi s)\;ds &= \frac{1}{4k},\qquad k \ge 1
\tag{1b}\\
\int_0^1 \log(\Gamma(s))\;\sin(2k \pi s)\;ds &= \frac{\gamma+\log(2k\pi)}{2k\pi},\qquad k \ge 1
\tag{1c}
\\
\int_0^1 \frac{\log(\sin(\pi s))}{2}\;ds &= \frac{-\log 2}{2}
\tag{2a}
\\
\int_0^1 \frac{\log(\sin(\pi s))}{2}\;\cos(2k \pi s)\;ds &= \frac{-1}{4k},\qquad k \ge 1
\tag{2b}
\\
\int_0^1 \frac{\log(\sin(\pi s))}{2}\;\sin(2k \pi s)\;ds &= 0,\qquad k \ge 1
\tag{2c}
\\
\int_0^1 1 \;ds &= 1
\tag{3a}
\\
\int_0^1 1 \cdot \cos(2k \pi s)\;ds &= 0,\qquad k \ge 1
\tag{3b}
\\
\int_0^1 1 \cdot \sin(2k \pi s)\;ds &= 0,\qquad k \ge 1
\tag{3c}
\\
\int_0^1 s \;ds &= \frac{1}{2}
\tag{4a}
\\
\int_0^1 s \cdot \cos(2k \pi s)\;ds &= 0,\qquad k \ge 1
\tag{4b}
\\
\int_0^1 s \cdot \sin(2k \pi s)\;ds &= \frac{-1}{2k\pi},\qquad k \ge 1
\tag{4c}
\end{align}
$$
Then for $f(s) = \pi \left(\log(\Gamma(s)) +\frac{1}{2}\log \sin(\pi s)-(1-s)\log(\pi)- \left(\frac{1}{2}-s\right)(\gamma+\log 2)\right)$, we get
$$
\begin{align}
\int_0^1 f(s)\;ds &= 0
\\
2\int_0^1f(s) \cos(2k\pi s)\;\;ds &= 0,\qquad k \ge 1
\\
2\int_0^1f(s) \sin(2k\pi s)\;\;ds &= \frac{\log k}{k},\qquad k \ge 1
\end{align}
$$
and the formula follows as a Fourier series:
$$
f(s) = \sum_{k=1}^\infty \frac{\log k}{k}\;\sin(2 k\pi s),\qquad 0 < s < 1.
$$
Reference: Gradshteyn & Ryzhik, Table of Integrals, Series, and Products:
(1a) 6.441.2
(1b) 6.443.3
(1c) 6.443.1
(2a) 4.384.3
(2b) 4.384.3
(2c) 4.384.1
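A numerical spot-check of two of the tabulated integrals (my addition; $k=3$ is an arbitrary choice):

```python
from mpmath import mp, quad, loggamma, sin, cos, pi, euler, log

mp.dps = 30
k = 3

# (1b): integral of log(Gamma(s)) cos(2 k pi s) over [0,1]; expected 1/(4k)
i_1b = quad(lambda s: loggamma(s) * cos(2 * k * pi * s), [0, 1])
print(i_1b, 1 / (4 * k))

# (1c): integral of log(Gamma(s)) sin(2 k pi s) over [0,1]; expected (gamma + log(2 k pi)) / (2 k pi)
i_1c = quad(lambda s: loggamma(s) * sin(2 * k * pi * s), [0, 1])
print(i_1c, (euler + log(2 * k * pi)) / (2 * k * pi))
```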
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/354791",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "15",
"answer_count": 2,
"answer_id": 0
} |
Function design: a logarithm asymptotic to one? I want to design an increasing monotonic function asymptotic to $1$ when $x\to +\infty $ that uses a logarithm.
Also, the function should have "similar properties" to $\dfrac{x}{1+x}$, i.e.:
*
*increasing monotonic
*$f(x)>0$ when $x>0$
*gets close to 1 "quickly",
say $f(10)>0.8$
| How about
$$
f(x)=\frac{a\log(1+x)}{1+a\log(1+x)},\quad a>0?
$$
Notes:
*
*I have used $\log(1+x)$ instead of $\log x$ to avoid issues near $x=0$ and to make it more similar to $x/(1+x)$.
*Choose $a$ large enough to have $f(10)>0.8$.
*You can see the graph of $f$ (for $a=1$) compared to $x/(1+x)$.
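To make note 2 concrete (my computation, with the natural logarithm):
$$f(10)=\frac{a\log 11}{1+a\log 11}>\frac45\iff a\log 11>4\iff a>\frac{4}{\log 11}\approx 1.67,$$
so e.g. $a=2$ works.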
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/354852",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
If $z$ is a complex number of unit modulus and argument theta If $z$ is a complex number such that $|z|=1$ and $\text{arg} z=\theta$, then what is $$\text{arg}\frac{1 + z}{1+ \overline{z}}?$$
| Multiplying both numerator and denominator by $z$, we get:
$$\arg\left(\frac{1+z}{1+\bar{z}}\right)=\arg\left(\frac{z+z^{2}}{z+1}\right)=\arg\left(\frac{z(1+z)}{1+z}\right)=\arg\left(z\right)$$
We are told that $\arg(z)=\theta$, therefore:
$$\arg\left(\frac{1+z}{1+\bar{z}}\right)=\theta$$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/354922",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 5,
"answer_id": 1
} |
Is the gradient of a function in $H^2_0$ in $H^1_0$? Suppose we have $f\in H^2_0(U)$, so $f$ is the limit of some sequence $(g_n)$ of smooth compactly supported functions on $U\subset\mathbb{R}^n$ (assume bounded & smooth boundary) and $f$ is in the Sobolev space $H^2(U)$. Does this imply that $\frac{\partial f}{\partial x_i}\in H^1_0(U)$ for all $i \in \{1,...,n\}$?
Clearly $f\in H^1(U)$ but I'm not sure if it's the limit of a sequence in $C^{\infty}_c(U)$. Can we just take the sequence $(\frac{\partial g_n}{\partial x_i})$, or does this derivative not commute with the limit $n\rightarrow\infty$?
| If $g_n$ converges towards $f$ in $H^2(U)$, you have
$$ \sum_{|\alpha|\le 2}\left\| \partial^\alpha f - \partial^\alpha g_n \right\|^2_{L^2(U)} \to 0,$$
where the sum ranges over all multiindices $\alpha$, with $|\alpha| \le 2$.
In order to prove $\partial g_n / \partial x_i \to \partial f / \partial x_i$ in $H^1(U)$, you have to prove$$ \sum_{|\alpha|\le 1}\left\| \partial^\alpha \partial_{x_i} f - \partial^\alpha \partial_{x_i} g_n \right\|^2_{L^2(U)} \to 0.$$
Now you use
$$\sum_{|\alpha|\le 1}\left\| \partial^\alpha \partial_{x_i} f - \partial^\alpha \partial_{x_i} g_n \right\|^2_{L^2(U)} \le \sum_{|\alpha|\le 2}\left\| \partial^\alpha f - \partial^\alpha g_n \right\|^2_{L^2(U)}.$$
Hence, the (components of the) gradient belong to $H_0^1(U)$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/354987",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Let $A$ be any uncountable set, and let $B$ be a countable subset of $A$. Prove that the cardinality of $A = A - B $ I am going over my professors answer to the following problem and to be honest I am quite confused :/ Help would be greatly appreciated!
Let $A$ be any uncountable set, and let $B$ be a countable subset of $A$. Prove that $|A| = |A - B|$
The answer key that I am reading right now follows this idea:
It says that $A-B$ is infinite and proceeds to define a new denumerable subset of $A-B$, called $C$. Of course, since $C$ is countably infinite, we can write $C$ as $\{c_1,c_2,c_3,\ldots\}$.
Once we have a set $C$, we know that the union of $C$ and $B$ must be denumerable (from another proof) since $B$ is countable and $C$ is denumerable.
This is where I start to have trouble. The rest of the solution goes like this...
Since the union of $C$ and $B$ is denumerable, there is a bijective function $f$ that maps the union of $C$ and $B$ to $C$ again. The solution then proceeds to define another function $h$ that maps $A$ to $A-B$.
I am just so lost. The thing is I don't even understand the point of constructing a new subset $C$ or defining functions like $f$ or $h$.
So I suppose my question is in general, how would one approach this problem? I am not mathematically inclined unfortunately, and a lot of the steps in almost all of these problems seems arbitrary and random. Help would be really appreciated on this problem and some general ideas on how to solve problems like these!!!
Thank you so very much!
| $|A-B|\leq |A|$ is obvious.
For the reverse inequality,
Use $|A|=|A\times A|$.
Denote the bijection $A\rightarrow A\times A$ by $\phi$, then $\phi(B)$ embeds into a countable subset of $A\times A$.
Then consider the projection $\pi_1$ onto the first coordinate, of the set $\phi(B)$.
Namely $\pi_1 \phi(B)$.
Since $|\pi_1\phi(B)|\leq |B|$, and $|B|<|A|$, we see that $A-\pi_1\phi(B)$ is nonempty. Pick any element $x \in A-\pi_1\phi(B)$, then we see that
$\{x\}\times A$ embeds into $(A\times A)-\phi(B)$.
This gives the reverse inequality $|A|\leq |A-B|$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/355049",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 4,
"answer_id": 0
} |
$X(n)$ and $Y(n)$ divergent doesn't imply $X(n)+Y(n)$ divergent. Please, give me an example where $X(n)$ and $Y(n)$ are both divergent series, but $(X(n) + Y(n))$ converges.
| Try $x_n = n, y_n = -n$. Then both $x_n,y_n$ clearly diverge, but $x_n+y_n = 0$ clearly converges.
Or try $x_n = n, y_n = \frac{1}{n}-n$ if you want something less trivial. Again both $x_n,y_n$ diverge, but $x_n+y_n = \frac{1}{n}$ converges.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/355082",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 4,
"answer_id": 1
} |
Alternate proof for $2^{2^n}+1$ ends with 7, n>1. I have a proof by induction that $2^{2^n}+1$ ends with 7. I've been trying to prove that within the theory of rings and ideals, but haven't achieved it yet. The statement is equivalent to $2^{2^n}-6$ ends with zero, so
Prove that for $$ e \in \mathbb{N} : e=2^n \Rightarrow 2^e-6 \in (10)\subset\mathbb{Z}$$
I'm not sure if this is the easiest equivalent statement to prove this in the language of rings. any help?
or alternatively $$ e\in \bar{0}\in\mathbb{Z}_4 \Rightarrow 2^e-6 \in \bar{0} \in \mathbb{Z}_{10} $$
is also proven by induction.
| Here is an alternate proof, it is basically the same idea as TA or Marvis, but probably presented in a "elementary" way.
For $n \geq 2$ then
$$2^{2^n}-16=16^{2^{n-2}}-16=16[16^{\alpha}-1]=16(16-1)(\mbox{junk})$$
Since 16 is even and 15 is divisible by 5, it follows that $2^{2^n}-16$ is a multiple of 10, so $2^{2^n}$ ends in 6 and $2^{2^n}+1$ ends in 7.
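A quick check of the last digit for small $n$ (my addition):

```python
# last digit of 2^(2^n) + 1 for n = 2..7, via modular exponentiation
for n in range(2, 8):
    print(n, (pow(2, 2**n, 10) + 1) % 10)   # prints 7 every time
```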
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/355148",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 2
} |
Order of infinite dimension norms I know that
$$\|{f}\|_{L^1(0,L)}\leq\|{f}\|_{L^2(0,L)}\leq\|{f}\|_{\mathscr{C}^1(0,L)}\leq\|{f}\|_{\mathscr{C}^2(0,L)}\leq\|{f}\|_{\mathscr{C}^{\infty}(0,L)}$$
But I don't know where to put in this chain this norm:
$\|{f}\|_{L^{\infty}}=\inf\{C\in\mathbb{R},\left|f(x)\right|\leq{C}\text{ almost everywhere}\}$
Thanks pals.
| If the norm $\mathcal C^1$ is defined by $\sup_{0<x<L}|f(x)|+\sup_{0<x<L}|f'(x)|$, then we have
$$\lVert f\rVert_{L^2}\leqslant \sqrt L\lVert f\rVert_{L^{\infty}}\leqslant \sqrt L\lVert f\rVert_{\mathcal C^1}.$$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/355206",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
indexed family of sets, not pairwise disjoint but whole family is disjoint I've seen this problem before, but can't remember how to finish it:
Define an indexed family of sets $ \{A_i : i \in \mathbb{N} \}$ in which for any $m,n\in \mathbb{N}, A_m \cap A_n \not= \emptyset$ and $\bigcap A_i = \emptyset$. The closest I came was something to the effect of $A_i = \{(0, 1/n): n \in \mathbb{N} \}$, but I know that doesn't meet the last criterion.
Suggestions on how to fix/finish it? Am I even as close as I think I am?
| How about $A_i =\{n\in\mathbb N\mid n>i\}$? But your choice, more properly written $A_i=(0,1/i)$, is also fine.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/355265",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 2
} |
Given a dense subset of $\mathbb{R}^n$, can we find a line that intersects it in a dense set? I have some difficulties in the following question.
Let $S$ be a dense subset in $\mathbb{R}^n$. Can we find a straight line $L\subset\mathbb{R}^n$ such that $S\cap L$ is a dense subset of $L$.
Note. From the counterexample of Brian M. Scott, I would like to ask more. If we suppose that $S$ has full measure, could we find a straight line $L$ such that $S\cap L$ is a dense subset of $L$?
| For each $n>1$ it’s possible to construct a dense subset $D$ of $\Bbb R^n$ such that every straight line in $\Bbb R^n$ intersects $D$ in at most two points.
Let $\mathscr{B}=\{B_n:n\in\omega\}$ be a countable base for $\Bbb R^n$. Construct a set $D=\{x_n:n\in\omega\}\subseteq\Bbb R^n$ recursively as follows. Given $n\in\omega$ and the points $x_k$ for $k<n$, no three of which are collinear, observe that there are only finitely many straight lines containing two points of $\{x_k:k<n\}$; let their union be $L_n$. $L_n\cup\{x_k:k<n\}$ is a closed, nowhere dense set in $\Bbb R^n$, so it does not contain the open set $B_n$, and we may choose $x_n\in B_n\setminus\big(L_n\cup\{x_k:k<n\}\big)$.
Clearly $D$, so constructed, is dense in $\Bbb R^n$, since it meets every member of the base $\mathscr{B}$, and by construction no three points of $D$ are collinear.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/355339",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
} |
Defining a metric space I'm studying for actuarial exams, but I always pick up mathematics books because I like to challenge myself and try to learn new branches. Recently I've bought Topology by D. Kahn and am finding it difficult. Here is a problem that I think I'm am answering sufficiently but any help would be great if I am off.
If $d$ is a metric on a set $S$, show that
$$d_1(x,y)=\frac{d(x,y)}{1+d(x,y)}$$ is a metric on $S$.
The conditions for being a metric are $d(X,Y)\ge{0}, d(X,Y)=0$ iff $X=Y$, $d(X,Y)=d(Y,X)$, and $d(X,Y)\le{d(X,Z)+d(Z,Y)}$. Thus, we simply go axiom by axiom.
1) Since both $d(x,y)\ge{0}$ and $1+d(x,y)\ge{0},$ it is clear that $d_1(x,y)\ge{0}$. (Is this a sufficient analysis?)
2) $d_1(x,x)=\frac{d(x,x)}{1+d(x,x)}=\frac{0}{1+0}=0$.
3) $d_1(x,y)=\frac{d(x,y)}{1+d(x,y)}=\frac{d(y,x)}{1+d(y,x)}=d_1(y,x).$
4) $d_1(x,y)=\frac{d(x,y)}{1+d(x,y)}\le{\frac{d(x,z)+d(z,y)}{1+d(x,z)+d(z,y)}}=\frac{d(x,z)}{1+d(x,z)+d(z,y)}+\frac{d(z,y)}{1+d(x,z)+d(z,y)}\lt\frac{d(x,z)}{1+d(x,z)}+\frac{d(z,y)}{1+d(z,y)}=d_1(x,z)+d_1(z,y).$
However, #4 is strictly less, not less than or equal to, according to my analysis, so where did I go wrong?
| There is something wrong in 4), just as Brian comments. Here I offer a proof for you:
Proof: Notice that $f(x)=\frac{x} {1+x}$ is increasing on $\mathbb R^+$: to see this, let $g(x)=\frac{1}{x+1}$. It is easy to see that $g(x)$ is decreasing on $\mathbb R^+$, and note that $f(x)+g(x)=1$. Therefore, $f(x)$ is an increasing function on $\mathbb R^+$.
Since $d(x,y) \le d(x,z)+d(z,y)$, we have $d_1(x,y)=\frac{d(x,y)}{1+d(x,y)}\le{\frac{d(x,z)+d(z,y)}{1+d(x,z)+d(z,y)}}=\frac{d(x,z)}{1+d(x,z)+d(z,y)}+\frac{d(z,y)}{1+d(x,z)+d(z,y)}\le\frac{d(x,z)}{1+d(x,z)}+\frac{d(z,y)}{1+d(z,y)}=d_1(x,z)+d_1(z,y).$
Hope this is helpful for you.
Added: the last comparison is only "$\le$", not "$\lt$", since $d(z,y)$ and $d(x,z)$ could be zero.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/355493",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 2,
"answer_id": 0
} |
How to read this expression? How can I read this expression :
$$\frac{1}{4} \le a \lt b \le 1$$
Means $a,b$ lies between $\displaystyle \frac{1}{4}$ and $1$?
Or is $a$ less the $b$ also less than equal to $1$?
So $a+b$ won't be greater than $1$?
| Think of the number line. The numbers $\{\tfrac{1}{4}, a, b, 1\}$ are arranged from left to right. The weak inequalities on either end indicate that $a$ could be $\tfrac{1}{4}$ and $b$ could be $1$. However, the strict inequality in the middle indicates that $a$ never equals $b$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/355584",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 1
} |
What does it mean to have a determinant equal to zero? After looking in my book for a couple of hours, I'm still confused about what it means for a $(n\times n)$-matrix $A$ to have a determinant equal to zero, $\det(A)=0$.
I hope someone can explain this to me in plain English.
| Take a $2 \times 2$ matrix, call it $A$, and plot its column vectors in a coordinate system.
A = [[2, 1], [4, 2]]  (NumPy-style notation of a matrix)
The following two vectors are read off from the columns of $A$:
x = [2, 4]
y = [1, 2]
If you plot them, you can see that they lie in the same span (one is a scalar multiple of the other). That means the vectors $x$ and $y$ do not form a parallelogram of nonzero area. Hence, $\det(A)$ is zero: the determinant measures the area of the parallelogram formed by the vectors.
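A quick check of this example (my addition, using NumPy as in the notation above):

```python
import numpy as np

A = np.array([[2, 1],
              [4, 2]])
print(np.linalg.det(A))          # 0.0 (up to rounding): columns [2,4] and [1,2] are parallel
print(np.linalg.matrix_rank(A))  # 1: the two column vectors only span a line
```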
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/355644",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "117",
"answer_count": 11,
"answer_id": 5
} |
How can I calculate the expected number of changes of state of a discrete-time Markov chain? Assume we have a 2 state Markov chain with the transition matrix:
$$
\left[
\begin{array}{cc}
p & 1-p\\
1-q & q
\end{array}
\right]
$$
and we assume that the first state is the starting state. What is the expected number of state transitions in $T$ periods? (I want to count the number of state changes from state 1 to state 2, and the other way round).
| 1. Let $s=(2-p-q)^{-1}$, then $\pi_0=(1-q)s$, $\pi_1=(1-p)s$, defines a stationary distribution $\pi$. If the initial distribution is $\pi$, at each step the distribution is $\pi$ hence the probability that a jump occurs is
$$
r=(1-p)\pi_0+(1-q)\pi_1=2(1-p)(1-q)s.
$$
In particular, the mean number of jumps during $T$ periods is exactly $rT$.
2. By a coupling argument, for any distribution $\nu$, the difference between the number of jumps of the Markov chain with initial distribution $\nu$ and the number of jumps of the Markov chain with initial distribution $\pi$ is exponentially integrable.
3. Hence, for any starting distribution, the mean number of jumps during $T$ periods is $rT+O(1)$.
4. Note that
$$
\frac1r=\frac12\left(\frac1{1-p}+\frac1{1-q}\right),
$$
and that $\frac1{1-p}$ and $\frac1{1-q}$ are the mean lengths of the intervals during which the chain stays at $0$ and $1$ before a transition occurs to $1$ and $0$ respectively. Hence the formula. For a Markov chain with $n$ states and transition matrix $Q$, one would get
$$
\frac1r=\frac1n\sum_{k=1}^n\frac1{1-Q_{kk}}.
$$
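A Monte Carlo sanity check of the $rT$ formula (my addition; the values of $p$, $q$, $T$ and the function name are arbitrary):

```python
import random

def mean_jumps(p, q, T, trials=20000):
    """Monte Carlo estimate of the expected number of state changes in T steps,
    starting from state 0, for the chain with P(stay in 0) = p, P(stay in 1) = q."""
    total = 0
    for _ in range(trials):
        state, jumps = 0, 0
        for _ in range(T):
            stay = p if state == 0 else q
            if random.random() >= stay:     # leave the current state
                state = 1 - state
                jumps += 1
        total += jumps
    return total / trials

p, q, T = 0.7, 0.4, 200
r = 2 * (1 - p) * (1 - q) / (2 - p - q)
print(mean_jumps(p, q, T), r * T)   # close, up to the O(1) correction
```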
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/355829",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Proving that $L = \{0^k \mid \text{$k$ is composite}\}$ is not regular by pumping lemma
Suppose $L = \{0^k \mid \text{$k$ is composite}\}$. Prove that this language is not regular.
What bugs me in this lemma is that when I choose a string in $L$ and try to consider all cases of dividing it into three parts so that in each case it violates lemma, I always find one case that does not violate it. A bit of help will be appreciated.
Thanks in advance.
My attempt:
*
*Suppose that $L$ is regular.
*Choosing string $x = 0^{2k}$ where $k$ is prime ($2k$ pumping constant)
*We can divide $x$ into three parts $u, v, w$ such that:
$$|uv| \le 2k \qquad |v| > 0\qquad uv^iw \in L \text{ for $i \ge 0$}$$
*If $u$ and $w$ are empty, all conditions are met.
It is the same when I change $2$ for any other number. Maybe I'm choosing wrong.
| You can’t assume that the pumping constant is even. If you want to start with a word of the form $0^{2k}$ for some $k$, that’s fine, but you can’t take $2k$ to be the pumping constant $p$; you can only assume that $2k\ge p$. But trying to use the pumping lemma directly to prove that $L$ is not regular is going to be a bit difficult. I would use the fact that the regular languages are closed under complementation, so if $L$ is regular, so is
$$L'=\{0^n:n\text{ is prime}\}\;.$$
Now apply the pumping lemma to $L'$. I’ve done it in the spoiler-protected text below, but I think that you can probably do it yourself without the help.
Let $p$ be the pumping length, and start with $x=0^n$ for some prime $n\ge p$. Decompose $x$ in the usual way as $uvw$, so that $|uv|\le p$, $|v|>0$, and $uv^kw\in L$ for $k\ge 0$. Let $a=|uw|$ and $m=|v|>0$; then for each $k\ge 0$ you have $|uv^kw|=a+km$. In other words, the lengths of the words $uv^kw$ form an arithmetic sequence with first term $a$ and constant difference $m$. You should have no trouble showing that this sequence must contain a composite number.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/355900",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Let $a$ be a prime element in a PID $R$. Show that $R/(a)$ is a field. Let $a$ be a prime element in a PID $R$. Show that $R/(a)$ is a field.
My attempt: Since $a$ is prime, $(a)$ is a prime ideal of $R$. Since $R$ is a PID, every nonzero prime ideal of $R$ is maximal. This implies that $(a)$ is maximal and hence $R/(a)$ is a field.
Is my proof correct?
| Almost. But, you should just snip out ", $R$ is also a UFD and hence". The fact that every non-zero prime is maximal is a fact true about PIDs but NOT about UFDs. Indeed, consider that $\mathbb{Z}[x]/(x)$ is an integral domain but not a field.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/355968",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Additive set function properties I am reading an introduction to measure theory, which starts by defining $\sigma$-rings then additive set functions and their properties, which are given without proof. I was able to prove two of them which are very easy: $\phi(\emptyset)=0$ and $\phi(A_1\cup \ldots\cup A_n)=\displaystyle\sum_{i=1}^n\phi(A_i)$ for pairwise disjoint $A_1,\ldots,A_n$.
There are three more which I couldn't do which are :
1- $\phi(A_1\cup A_2)+\phi(A_1 \cap A_2)=\phi(A_1)+\phi(A_2)$ for any two sets $A_1 $ and $A_2$.
2- If $\phi(A) \ge 0$ for all $A$ and $A_1 \subset A_2$ then $\phi(A_1) \le \phi(A_2)$.
3- If $A_1 \subset A_2$ and $|\phi(A_1)|< + \infty$ then $\phi(A_2-A_1)=\phi(A_2)-\phi(A_1)$.
Where $\phi$ is an additive set function.
I need some hints on how to proceed.
| I was able to use the third proposition to prove the first.
Let $B \subset A$. We have $(A \setminus B) \cap B = \emptyset$; then since $\phi$ is additive we have $\phi(A \setminus B) + \phi(B)=\phi((A\setminus B) \cup B)=\phi(A)$. Then $\phi(A \setminus B)=\phi(A)-\phi(B)$; this proves the third one.
Now for the first: $A \cup B= (A\setminus (A\cap B)) \cup (B \setminus (A\cap B)) \cup(A \cap B)$, a disjoint union. Then $\phi(A \cup B)=\phi(A\setminus (A\cap B)) + \phi(B \setminus (A\cap B)) + \phi(A \cap B)= \phi(A)-\phi(A\cap B)+\phi(B)- \phi(A\cap B) + \phi(A \cap B)$
Hence upon adding $\phi(A \cap B) $ to both sides we get $\phi(A \cup B) + \phi(A \cap B)= \phi(A) + \phi (B)$. Can anyone help with the second one?
Edit: Here's the second one. Since $\phi(A) \ge 0$ for all sets, we have $\phi(A\setminus B)\ge 0$, hence by the third property $\phi(A)-\phi(B) \ge 0$ when $B \subset A$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/356058",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 0
} |
Logarithm calculation result I am carrying out a review of a network protocol, and the author has provided a function to calculate the average steps a message needs to take to traverse a network.
It is written as $$\log_{2^b}(N)$$
Does the positioning of the $2^b$ pose any significance during calculation? I can't find an answer either way. The reason is, they have provided the results of their calculations and according to their paper, the result would be $1.25$ (given $b= 4$ and $N= 32$).
Another example was given this time $N= 50$, $b=4$ giving a result of $1.41$.
I don't seem to be able to get the same result if I were to apply the calculation and so it's either my method/order of working or their result is incorrect (which I doubt).
Can someone help to provide the correct way of calculating the values, and confirm the initial results?
My initial calculation was to calculate $\log(2^4) \cdot 32$... Clearly it's totally wrong (maths is not a strong point for me).
| The base of the logarithm is $2^b$. You want to find an $x$ such that $(2^b)^x = N$, i.e. $2^{bx} = N$.
You can rewrite that as $$x = \dfrac{\log N}{b}$$ if you take the $\log$ to base-2.
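For instance (my addition), Python's `math.log(x, base)` reproduces the paper's numbers:

```python
import math

print(math.log(32, 16))   # ≈ 1.25
print(math.log(50, 16))   # ≈ 1.41
```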
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/356125",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
An ambulance problem involve sum of two independent uniform random variables An ambulance travels back and forth at a constant speed along a road of length $L$. At a certain moment of time, an accident occurs at a point uniformly distributed on the road.[That is, the distance of the point from one of the fixed ends of the road is uniformly distributed over ($0$,$L$).] Assuming that the ambulance's location at the moment of the accident is also uniformly distributed, and assuming independence of the variables, compute the distribution of the distance of the ambulance from the accident.
Here is what I have so far:
$X$ = point where the accident happened
$Y$ = location of the ambulance at the moment.
$D = |X-Y|$, represents the distance between the accident and the ambulance
$P(D \leq d) = \mathop{\iint}_{(x,y)\in C} f(x,y)\, dx\, dy$
where $C$ is the set of points where $|X-Y| \leq d$
I'm having trouble setting up the limit for the integral. It would be greatly appreciated if someone can upload a picture of the area of integration.
| Here is a rough sketch of the integration region:
The $x$ and $y$ axes go between $0$ and $L$. The "shaded" (for lack of a better word) region represents those $X$ and $Y$ such that $|X-Y| \le d$.
The integration region is split in 3 pieces, which I hope you can see from this admittedly crude diagram:
$$P(|X-Y| \le d) = \frac{1}{L^2} \left [\int_0^d dx \: \int_0^{d+x} dy + \int_d^{L-d} dx \: \int_{-d+x}^{d+x} dy + \int_{L-d}^{L} dx \: \int_{-d+x}^{L} dy \right ]$$
So you can check, the result I get is
$$P(|X-Y| \le d) = \frac{d}{L} \left ( 2 - \frac{d}{L} \right)$$
You can also see it from the difference between the area of the whole region minus the area of the 2 right triangles outside the "shaded" region.
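A Monte Carlo check of the result (my addition; $L=10$ and $d=3$ are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
L, d, N = 10.0, 3.0, 1_000_000
X = rng.uniform(0, L, N)
Y = rng.uniform(0, L, N)

estimate = np.mean(np.abs(X - Y) <= d)
formula = (d / L) * (2 - d / L)
print(estimate, formula)     # both close to 0.51
```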
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/356287",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 2,
"answer_id": 1
} |
Principal axis of a matrix I try to find the definition of the main axis of a matrix.
I saw this phrase in some exercise:
Let $A$ be a positive matrix, $f:G\longrightarrow \mathbb{R}$ a smooth function, $G$ an open set in $\mathbb{R}^n$. I need to find the orthogonal coordinate transformation $y=Px$ such that the main axes in the $y$ coordinates are the principal axes of $A$.
The book says to diagonalize $A$: $PAP^t=D$ and to choose $P$ to be the transformation.
What is the definition of the principal axis of a matrix?
thanks.
| Often, principal axes of a matrix refer to its eigenvectors. With this diagonalization, $P$ is the matrix of eigenvectors.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/356361",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Examples of 2D wave equations with analytic solutions I need to numerically solve the following wave equation$$\nabla^2\psi(\vec{r},t) - \frac{1}{c(\vec{r})^2}\frac{\partial^2}{\partial t^2}\psi(\vec{r},t) = -s(\vec{r},t)$$ subject to zero initial conditions $$\psi(\vec{r},0)=0, \quad \left.\frac{\partial}{\partial t}\psi(\vec{r},t)\right|_{t=0}=0$$ where $\vec{r} \in \mathbb{R}^2$ and $t \in \mathbb{R}$.
The problem is that I don't know if my numerical solution is right or not, so I wonder if there are some simple cases where the analytic solution can be calculated (besides the green's function, i.e. the solution when $c(\vec{r}) \equiv $ const and $s(\vec{r},t) = \delta(\vec{r},t)$), so I can compare it with my numerical solution.
Thanks!
| Solution using Green functions and using Sommerfeld radiation condition, in cylindrical coordinates.
\begin{eqnarray}
u_s(\rho, \phi, t) = \frac{1}{2\pi} \int_{-\infty}^{\infty} \left[ -i \pi H_0^{(2)}\left(k | \sigma - \sigma_s| \right) \right] F(\omega)\, e^{i\omega t}\, d\omega
\end{eqnarray}
Where $s$ denotes the source position.
More details and implementation can be found in this post here from Computational Science.It uses the following reference: Morse and Feshbach, 1953, p. 891 - Methods of theoretical physics: New York, MacGraw-Hill Book Co., Inc.
Here are two snapshots in $\mathbb{R}^2$ on a $[200, 200]$ lattice with a space increment of 100 meters (the increment is not shown on the axes) and velocity 8000 m/s.
The source function $ s(\vec{r},t) = \delta(\rho, \phi)f(t) $ below is sampled with $\Delta t = 0.05 $ seconds and is obviously placed at the origin $(\rho=0,\phi=0)$. In fact it can be anything you want as long as you have its Fourier transform.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/356447",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 2
} |
Find the area of the surface obtained by revolving $\sqrt{1-x^2}$ about the x-axis? So I began by choosing my formulas:
Since I know the curve is being rotated around the x-axis I choose $2\pi\int yds$
where $y=f(x)=\sqrt{1-x^2}$
$ds=\sqrt{1+[f'(x)]^2}$
When I compute ds, I find that $ds=\sqrt{x^6-2x^4+x^2+1}$
Therefore, my integral becomes: $2\pi\int(1-x^2)^{\frac{1}{2}}\sqrt{x^6-2x^4+x^2+1}dx$
Am I on the right track, because this integral itself seems very hard to solve?
| No.
$$f'(x) = -\frac{x}{\sqrt{1-x^2}} \implies 1+f'(x)^2 = \frac{1}{1-x^2}$$
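Carrying the hint through (my computation, assuming the intended limits are $-1\le x\le 1$):
$$2\pi\int_{-1}^{1}\sqrt{1-x^2}\cdot\frac{1}{\sqrt{1-x^2}}\,dx=2\pi\int_{-1}^{1}dx=4\pi,$$
the surface area of the unit sphere, as expected.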
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/356511",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Does Seperable + First Countable + Sigma-Locally Finite Basis Imply Second Countable? A topological space is separable if it has a countable dense subset. A space is first countable if it has a countable basis at each point. It is second countable if there is a countable basis for the whole space. A collection of subsets of a space is locally finite if each point has a neighborhood which intersects only finitely many sets in the collection. A collection of subsets of a space is sigma-locally finite (AKA countably locally finite) if it is the union of countably many locally finite collections.
My question is, if a space is separable, first countable, and has a sigma-locally finite basis, must it also be second countable? I think the answer is yes, because I haven't found any counterexample here.
Any help would be greatly appreciated.
Thank You in Advance.
EDIT: I fixed my question. I meant that the space should have a locally finite basis, not be locally finite itself, which doesn't really mean much.
| For example:
Helly Space;
Right Half-Open Interval Topology;
Weak Parallel Line Topology.
These spaces are all separable, first countable and paracompact, but not second countable.
Note that a paracompact space is one in which every open cover has a refinement that is a single locally finite collection.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/356595",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 3,
"answer_id": 2
} |
Proof of Proposition/Theorem V in Gödel's 1931 paper? Proposition V in Gödel's famous 1931 paper is stated as follows:
For every recursive relation $ R(x_{1},...,x_{n})$ there is an n-ary "predicate" $r$ (with "free variables" $u_1,...,u_n$) such that, for all n-tuples of numbers $(x_1,...,x_n)$, we have:
$$R(x_1,...,x_n)\Longrightarrow Bew[Sb(r~_{Z(x_1)}^{u_1}\cdot\cdot\cdot~_{Z(x_n)}^{u_n})] $$
$$\overline{R}(x_1,...x_n)\Longrightarrow Bew[Neg~Sb(r~_{Z(x_1)}^{u_1}\cdot\cdot\cdot~_{Z(x_n)}^{u_n})]$$
Gödel "indicate(s) the outline of the proof" and basically says, in his inductive step, that the construction of $r$ can be formally imitated from the construction of the recursive function defining relation $R$.
I have been trying to demonstrate the above proposition with more rigor, but to no avail. I have, however, consulted "On Undecidable Propositions of Formal Mathematical Systems," the lecture notes taken by Kleene and Rosser from Gödel's 1934 lecture, which have been much more illuminating; but still omits the details in the inductive step from recursive definition, stating "the proof ... is too long to give here."
So can anyone give me helpful hint for the proof of the above proposition, or even better, a source where I can find such a demonstration? Thanks!
| I'm not completely familiar with Gödel's notation, but I think this is equivalent to theorem 60 in Chapter 2 of The Logic of Provability by George Boolos, which has fairly detailed proofs of this sort of thing (all in chapter 2).
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/356674",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10",
"answer_count": 2,
"answer_id": 1
} |
Permutation for finding the smallest positive integer Let $\pi = (1,2)(3,4,5,6,7)(8,9,10,11)(12) \in S_{12}$. Find the smallest positive integer $k$ for which $$\pi^{(k)}=\pi \circ \pi \circ\ldots\circ \pi = \iota$$
Generalize. If a $\pi$'s disjoint cycles have length $n_1, n_2,\dots,n_t$, what is the smallest integer $k$ so that $\pi^{(k)} = \iota$?
I'm confused with this question. A clear explanation would be appreciated.
| $\iota$ represents the identity permutation: every element in $\{1, 2, ..., 12\}$ is mapped to itself:
$\quad \iota = (1)(2)(3)(4)(5)(6)(7)(8)(9)(10)(11)(12)$.
Recall the definition of the order of any element of a finite group.
In the case of a permutation $\pi$ in $S_n$, the exponent $k$ in your question represents the order of $\pi$.
That is, if $\;\pi^k = \underbrace{\pi\circ \pi \circ \cdots \circ \pi}_{\large k \; times}\; = \iota,\;$ and if $k$ is the least such positive integer such that $\pi^k = \iota$, then $k$ is the order of $\pi$.
A permutation expressed as the product of disjoint cycles has order $k$ equal to the least common multiple of the lengths of the cycles.
So, if $\pi = (1,2)(3,4,5,6,7)(8,9,10,11)(12) \in S_{12}$,
Then the lengths $n_i$ of the 4 disjoint cycles of $\pi$ are, in order of their listing above, $n_1 = 2, \; n_2 = 5, \; n_3 = 4,\;n_4 = 1.\;$.
So the order $k$ of $\pi$ is given by the least common multiple $$\;\text{lcm}\,(n_1, n_2, n_3, n_4) = \operatorname{lcm}(2, 5, 4, 1) = 20.$$ That is, $\pi^k = \pi^{20} = \iota,\;$ and there is NO positive integer $n<k = 20\,$ such that $\pi^n = \iota$.
What this means is that $$\underbrace{\pi \circ \pi \circ \cdots \circ \pi}_{\large 20 \; factors} = \iota$$ and $$\underbrace{\pi \circ \pi \circ \cdots \circ \pi}_{ n \; factors,\;1 \,\lt n\, \lt 20} \neq \iota$$
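A quick check (my addition; `math.lcm` with several arguments needs Python 3.9+, and the dictionary encoding of $\pi$ is mine):

```python
from math import lcm   # Python 3.9+

print(lcm(2, 5, 4, 1))   # 20

# brute force: apply pi = (1,2)(3,4,5,6,7)(8,9,10,11)(12) until we reach the identity
perm = {1: 2, 2: 1, 3: 4, 4: 5, 5: 6, 6: 7, 7: 3,
        8: 9, 9: 10, 10: 11, 11: 8, 12: 12}
current, k = dict(perm), 1
while any(current[i] != i for i in perm):
    current = {i: perm[current[i]] for i in perm}   # compose with pi once more
    k += 1
print(k)   # 20
```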
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/356734",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
How do i prove that $\det(tI-A)$ is a polynomial? In wikipedia, it's said "$\det(tI-A)$ can be explicitly evaluated using exterior algebra", but i have not learned exterior algebra yet and i just want to know whether it is polynomial, not how it looks like.
How do i prove that $\det(tI-A)$ is a polynomial in $\mathbb{F}[t]$ where $\mathbb{F}$ is a field and $A$ is an $n\times n$ matrix?
| To prove that $\det (tI-A)$ is a polynomial you must know some definition, or some properties, of the determinant. The most straightforward and least mystical approach is to use Laplace's formula: http://en.wikipedia.org/wiki/Laplace_expansion
This would give a rather quick way of proving that $\det (tI-A)$ is a polynomial, and for almost the same amount of work, that it is a monic polynomial of degree $n$ (if $A$ is an $n\times n$ matrix).
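A hedged sketch of that route (my wording): expanding along the first row,
$$\det(tI-A)=\sum_{j=1}^{n}(-1)^{1+j}\,\bigl(t\,\delta_{1j}-a_{1j}\bigr)\,M_{1j}(t),$$
where each entry $t\,\delta_{1j}-a_{1j}$ is a polynomial in $t$ and each minor $M_{1j}(t)$ is, by induction on the size, a polynomial in $t$; sums and products of polynomials are polynomials, so $\det(tI-A)$ is one too. Tracking the $j=1$ term also shows it is monic of degree $n$.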
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/356875",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 2,
"answer_id": 0
} |
Find minimum in a constrained two-variable inequation I would appreciate if somebody could help me with the following problem:
Q: find minimum
$$9a^2+9b^2+c^2$$
where $a^2+b^2\leq 9, c=\sqrt{9-a^2}\sqrt{9-b^2}-2ab$
| Maybe this comes to your rescue.
Consider $b \ge a \ge 0$
Expand: $(\sqrt{9-a^2}\sqrt{9-b^2}-2ab)^2=(9-a^2)(9-b^2)+4a^2b^2-4ab \sqrt{(9-a^2)(9-b^2)}$.
This attains minimum when $4ab \sqrt{(9-a^2)(9-b^2)}$ is maximum.
Applying AM-GM :
$\dfrac{(9-a^2)+(9-b^2)}{2} \ge \sqrt{(9-a^2)(9-b^2)} \implies 9- \dfrac{a^2+b^2}{2} \ge \sqrt{(9-a^2)(9-b^2)}$
$\dfrac{a^2+b^2}{2} \ge ab \implies 18 \ge 4ab$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/357035",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 0
} |
$f^{-1}(U)$ is regular open set in $X$ for regular open set $U$ in $Y$, whenever $f$ is continuous. Let $f$ be a continuous function from space $X$ to space $Y$. If $U$ is regular open set in $Y$, it it true that $f^{-1}(U)$ is a regular open set in $X$?
| Not necessarily. Consider the absolute value function $x \mapsto | x |$, and the inverse image of the regular open set $(0,1)$: the preimage $(-1,0)\cup(0,1)$ is open, but not regular open, since the interior of its closure is $(-1,1)$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/357098",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11",
"answer_count": 2,
"answer_id": 0
} |
Quicksort analysis problem This is a problem from a probability textbook, not a CS one, if you are curious. Since I'm too lazy to retype the $\LaTeX$ I will post an ugly stitched screenshot:
This seems ridiculously hard to approach, and it doesn't help that all the difficult problems have no solutions in the textbook (the uselessly easy ones do :P ). How would I attack it?
| The question is already broken into pieces in order to help you out.
a) This is the law of total expectation, using the fact that the pivot is chosen randomly.
b) Once we have a pivot, we need to split the remaining $n-1$ numbers into $2$ groups (one comparison each), and then solve the two sub-problems; one of of size $i-1$ and one of size $n-i$, respectively. The recursion comes from plugging in the result from part b to the formula from part a.
c) You can derive this from the recursion in part b.
d) Use the recursion from part c to work out what $C_{n+1}$ should be, using the fact that the harmonic sum $\sum_{i=1}^{n+1} \frac{1}{i}$ is approximately $\log (n+1)$ for large $n$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/357203",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
How can I prove that $xy\leq x^2+y^2$? How can I prove that $xy\leq x^2+y^2$ for all $x,y\in\mathbb{R}$ ?
| $$x^2+y^2-xy=\frac{(2x-y)^2+3y^2}4=\frac{(2x-y)^2+(\sqrt3y)^2}4$$
Now, the square of any real number is $\ge0$.
So $(2x-y)^2+(\sqrt3y)^2\ge0$; equality occurs iff each term is $0$, i.e. $x=y=0$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/357272",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "38",
"answer_count": 25,
"answer_id": 0
} |
Linear Recurrence Relations I'm having trouble understanding the process of solving simple linear recurrence relation problems. The problem in the book is this:
$$
0=a_{n+1}-1.5a_n,\ n \ge 0
$$
What is the general process, and purpose, of solving this? Unfortunately there is a very large language barrier between my professor and myself, which is quite a problem.
| The general solution to the equation
$$a_{n+1} = k a_n$$
is
$$a_n = B \cdot k^n$$
for some constant $B$, which is related to an initial condition.
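A tiny numerical illustration (my addition; $a_0=2$ is an arbitrary initial condition, so $B=a_0$ here):

```python
a0, k = 2.0, 1.5          # a_0 is arbitrary; k = 1.5 comes from the recurrence
a = [a0]
for _ in range(10):
    a.append(k * a[-1])    # a_{n+1} = 1.5 * a_n

closed = [a0 * k**n for n in range(11)]   # B * k^n with B = a_0
print(all(abs(u - v) < 1e-9 for u, v in zip(a, closed)))   # True
```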
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/357297",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
} |
Examples of $\kappa$-Fréchet-Urysohn spaces. We say that
A space $X$ is $\kappa$-Fréchet-Urysohn at a point $x\in X$, if whenever $x\in\overline{U}$, where $U$ is a regular open subset of $X$, some
sequence of points of $U$ converges to $x$.
I'm looking for some examples of $\kappa$-Fréchet-Urysohn space. I guess it is not true that every compact Hausdorff space is a $\kappa$-Fréchet-Urysohn But how about compact Hausdorff homogeneous spaces?
| What examples are you looking for?
I think, for some exotic examples you should search Engelking’s “General Topology” (on Frechet-Urysohn spaces) and part 10 “Generalized metric spaces”
of the “Handbook of Set-Theoretic Topology”.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/357384",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 2,
"answer_id": 1
} |
An analytic function is onto All sets are subsets of $\mathbb{C}$. Suppose $f: U \to D$ is analytic where $U$ is bounded and open, and $D$ is the open unit disk.
Now suppose we can continuously extend $f$ to $\bar{f}: \bar{U} \to \bar D$, such that $\bar{f}(\partial U) \subseteq \partial D$. To show that $f$ is onto, I was thinking maybe I could show that $f(U)$ is a dense subset of $\bar{D}$ , and since $\bar{f}(U) = f(U)$ is open by the open mapping theorem, it must be $D$. But to do this I would need to know that $f(\partial U) = \partial D$. Is this true?
Some advice or other approaches would be greatly appreciated. Thank you.
| Hint: If $f$ is not onto then $D$ containts a point $w$ which is on the boundary of $f(U)$. Take a sequence in $U$ with $f(z_k)\to w$, and use compactness.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/357452",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
} |
Learning Combinatorial Species. I have been reading the book conceptual mathematics(first edition) and I'm also about halfway through Diestel's Graph theory (4th edition). I was wondering if I was able to start learning about combinatorial species. This is very interesting to me because I love combinatorics and it makes direct use of category therory.
Also, what are some good resources to understand the main ideas behind the area and understand the juiciest parts of it?
Regards, and thanks to Anon who told me about this area of math.
| For an easy to understand introduction,
http://dept.cs.williams.edu/~byorgey/pub/species-pearl.pdf
seems to be nice. But it leans more towards the computer science applications.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/357508",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 3,
"answer_id": 0
} |
Distribution of a random variable
$X_1$, $X_2$, $X_3$ are independent random variables, each with an exponential distribution, but with means of $2.0, 5.0, 10.0$ respectively. Let $Y$= the smallest or minimum value of these three random variables. Derive and identify the distribution of $Y$. (The distribution function may be useful).
How do I solve this question? Do I plug in each mean to the exponential distribution? I would appreciate it if someone could explain this to me, thanks.
| The wiki on exponential distribution has an answer to that. The answer of course is exponential distribution.
http://en.wikipedia.org/wiki/Exponential_distribution#Distribution_of_the_minimum_of_exponential_random_variables
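For completeness, the computation behind that link: since the $X_i$ are independent,
$$P(Y>y)=P(X_1>y)\,P(X_2>y)\,P(X_3>y)=e^{-y/2}e^{-y/5}e^{-y/10}=e^{-0.8y},\qquad y\ge 0,$$
so $Y$ is exponential with rate $0.8$, i.e. with mean $1/0.8=1.25$.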
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/357581",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 2
} |
Density of sum of two independent uniform random variables on $[0,1]$ I am trying to understand an example from my textbook.
Let's say $Z = X + Y$, where $X$ and $Y$ are independent uniform random variables with range $[0,1]$. Then the PDF
is
$$f(z) = \begin{cases}
z & \text{for $0 < z < 1$} \\
2-z & \text{for $1 \le z < 2$} \\
0 & \text{otherwise.}
\end{cases}$$
How was this PDF obtained?
Thanks
| Following jay-sun's hint, note that $f_X (z-y) = 1$ if and only if $0 \le z-y \le 1$. So we need
$$ z-1 \le y \le z $$
However, since $z \in [0, 2]$, this range of $y$ need not be contained in $[0, 1]$ (the support of $f_Y$), and the value $1$ is a good splitting point for $z$, because $z-1 \in [-1, 1]$.
Consider (i): if $z-1 \le 0$, then $-1 \le z-1 \le 0$, that is $z \in [0, 1]$, and the admissible range is $y \in [0, z]$. So $\int_{-\infty}^{\infty}f_X(z-y)f_Y(y)\,dy = \int_0^{z} 1\, dy=z$ if $z \in [0, 1]$.
Consider (ii): if $z-1 \ge 0$, that is $z \in [1, 2]$, the admissible range is $y \in [z-1, 1]$, and $\int_{-\infty}^{\infty}f_X(z-y)f_Y(y)\,dy = \int_{z-1}^{1} 1\, dy = 2-z$ if $z \in [1, 2]$.
To sum up, clip the range of integration to the $y$ for which both $f_X(z-y)$ and $f_Y(y)$ equal $1$.
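For readers who like to sanity-check the formula numerically, here is a small simulation sketch (not part of the original answer; it assumes NumPy is available):
import numpy as np
rng = np.random.default_rng(0)
z = rng.uniform(0, 1, 10**6) + rng.uniform(0, 1, 10**6)   # samples of Z = X + Y
edges = np.linspace(0, 2, 41)                              # 40 bins of width 0.05
hist, _ = np.histogram(z, bins=edges, density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
f = np.where(centers < 1, centers, 2 - centers)            # the claimed triangular PDF
print(np.max(np.abs(hist - f)))                            # should be small (sampling noise only)
The empirical histogram matches the triangular density above to within sampling error.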
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/357672",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "86",
"answer_count": 6,
"answer_id": 1
} |
Find an orthogonal vector to 2 vector I have the following problem:
A B C D are the 4 consecutive summit of a parallelogram, and have the following coordinates
A(1,-1,1);B(3,0,2);C(2,3,4);D(0,2,3)
I must find a vector that is orthogonal to both CB and CD.
How? Is there some kind of formula?
Thanks,
| The cross product of two vectors is orthogonal to both, and has magnitude equal to the area of the parallelogram bounded on two sides by those vectors. Thus, if you have:
$$\vec{CB} = \langle3-2, 0-3, 2-4\rangle = \langle1, -3, -2\rangle$$
$$\vec{CD} = \langle0-2, 2-3, 3-4\rangle = \langle-2, -1, -1\rangle$$
Compute the following, which is an answer to your question: $$\langle1, -3, -2\rangle\times\langle-2, -1, -1\rangle = \langle1, 5, -7\rangle$$
Note, though, that there are infinitely many vectors that are orthogonal to $\vec{CB}$ and $\vec{CD}$. However, these are all non-zero scalar multiples of the cross product. So, you can multiply your cross product by any (non-zero) scalar.
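If you want to check the arithmetic by machine, a quick sketch (assuming NumPy; not part of the original answer):
import numpy as np
CB = np.array([3, 0, 2]) - np.array([2, 3, 4])    # (1, -3, -2)
CD = np.array([0, 2, 3]) - np.array([2, 3, 4])    # (-2, -1, -1)
n = np.cross(CB, CD)
print(n)                                          # [ 1  5 -7]
print(np.dot(n, CB), np.dot(n, CD))               # 0 0, confirming orthogonality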
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/357746",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Understanding the fundamentals of pattern recognition I'm learning now about sequences and series: patterns in short. This is part of my Calc II class. I'm finding I'm having difficulty in detecting all of the patterns that my text book is asking me to solve. My question at this point isn't directly about a homework problem (yet anyway), but instead help in understanding why certain statements are made in the example.
So, the example from the book:
$$
1 + \frac{1}{2}+\frac{1}{4}+\frac{1}{8}+\frac{1}{16}+...\\
\begin{array}{lcc}
\text{Partial Sum} & \text{Value} & \text{Sugg. Expression} \\
s_1 = 1 & 1 & 2 - 1 \\
s_2 = 1 + \frac{1}{2} & \frac{3}{2} & 2 - \frac{1}{2} \\
s_3 = 1 + \frac{1}{2} + \frac{1}{4} & \frac{7}{4} & 2 - \frac{1}{4} \\
& ... & \\
s_n = 1 + \frac{1}{2} + \frac{1}{4} + ... + \frac{1}{2^{n-1}} & \frac{2^n - 1}{2^{n-1}} & 2 - \frac{1}{2^{n-1}}
\end{array}
$$
Why is that for $s_1$ they say, "Suggested Expression" is 2-1? 4 - 3 also yields 1. Granted, the suggested values in the textbook are much simpler to work with. However, I'd like to know what wisdom leads the authors to say that 2-1 is the suggested expression instead of some other expression also yielding 1.
It is also interesting to me that this process is necessary when this sequence is quite easily seen as $\sum_{n=1}^{\infty}\frac{1}{2^{n-1}}$. This section is all about convergence and divergence. Am I learning these extra steps because using the rule I've just outlined doesn't show what it converges to? Also, in typing this question, I think I've just discovered something the textbook was saying: a series isn't convergent unless the limit of its terms is 0. That is: $\lim_{n\to\infty}\frac{1}{2^{n-1}} = 0$. It's amazing what one finds when looking for something else.
Thanks,
Andy
| The suggested expressions weren’t found one at a time: they’re synthesized from the whole pattern. We don’t seriously consider $4-3$ for the first one, for instance, because it doesn’t fit nicely with anything else. I’d describe the thought process this way. First we calculate the first few partial sums:
$$\begin{array}{r|cc}
n&1&2&3&4&5&6\\ \hline
s_n&1&\frac32&\frac74&\frac{15}8&\frac{31}{16}&\frac{63}{32}
\end{array}$$
At this point I can pursue either of two lines of thought.
*
*The partial sums seem to be getting very close to $2$. Perhaps they’re doing so in some regular, easily identifiable fashion? Let’s add another line to the table: $$\begin{array}{r|cc}
n&1&2&3&4&5&6\\ \hline
s_n&1&\frac32&\frac74&\frac{15}8&\frac{31}{16}&\frac{63}{32}\\ \hline
2-s_n&1&\frac12&\frac14&\frac18&\frac1{16}&\frac1{32}
\end{array}$$ Now that was very informative: the denominators of the new entries are instantly recognizable as powers of $2$, specifically $2^{n-1}$, and it looks very much as if $2-s_n=\frac1{2^{n-1}}$, or $$s_n=2-\frac1{2^{n-1}}\;.$$ This is the line of thought that leads to the suggested expressions in the example.
*The denominators of $s_n$ are instantly recognizable as powers of $2$, specifically $2^{n-1}$, and the numerators seem to be one less than the next higher power of $2$, or $2^n-1$. It looks very much as if $$s_n=\frac{2^n-1}{2^{n-1}}\;.$$
A little algebra of course shows that the conjectures are the same: $2-\dfrac1{2^{n-1}}=\dfrac{2^n-1}{2^{n-1}}$.
Without seeing the example in full I can’t be sure, but I suspect that the suggested expressions are there because they make it immediately evident that
$$\lim_{n\to\infty}s_n=\lim_{n\to\infty}\left(2-\frac1{2^{n-1}}\right)=2-\lim_{n\to\infty}\frac1{2^{n-1}}=2\;,$$
since clearly $\lim\limits_{n\to\infty}\dfrac1{2^{n-1}}=0$. Essentially the same idea is at the heart of the proof that
$$\sum_{n\ge 0}x^n=\frac1{1-x}$$
if $|x|<1$; if that argument hasn’t yet appeared in your text, this example may be part of the preparation.
Finally, note the correction of your final remark that joriki made in the comments.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/357874",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
How to generate random symmetric positive definite matrices using MATLAB? Could anybody tell me how to generate random symmetric positive definite matrices using MATLAB?
| The algorithm I described in the comments is elaborated below. I will use $\tt{MATLAB}$ notation.
function A = generateSPDmatrix(n)
% Generate a dense n x n symmetric, positive definite matrix
A = rand(n,n); % generate a random n x n matrix
% construct a symmetric matrix; either of the following works:
%   A = 0.5*(A+A');    or    A = A*A';
A = 0.5*(A+A');
% The first is significantly faster: O(n^2) compared to O(n^3)
% since A(i,j) < 1 by construction and a symmetric diagonally dominant matrix
% is symmetric positive definite, which can be ensured by adding nI
A = A + n*eye(n);
end
A few changes adapt the construction to the sparse case.
function A = generatesparseSPDmatrix(n,density)
% Generate a sparse n x n symmetric, positive definite matrix with
% approximately density*n*n non zeros
A = sprandsym(n,density); % generate a random n x n matrix
% since A(i,j) < 1 by construction and a symmetric diagonally dominant matrix
% is symmetric positive definite, which can be ensured by adding nI
A = A + n*speye(n);
end
In fact, if the desired eigenvalues of the random matrix are known and stored in the vector rc, then the command
A = sprandsym(n,density,rc);
will construct the desired matrix. (Source: MATLAB sprandsym website)
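For readers working outside MATLAB, a rough NumPy analogue of the dense construction above (an illustrative sketch, not part of the original answer):
import numpy as np
def generate_spd_matrix(n, seed=0):
    # random symmetric positive definite matrix via diagonal dominance
    rng = np.random.default_rng(seed)
    A = rng.random((n, n))           # entries in [0, 1)
    A = 0.5 * (A + A.T)              # symmetrize
    return A + n * np.eye(n)         # diagonally dominant, hence positive definite
A = generate_spd_matrix(5)
print(np.linalg.eigvalsh(A).min() > 0)   # True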
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/357980",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "43",
"answer_count": 5,
"answer_id": 1
} |
What is the real life use of hyperbola? The point of this question is to compile a list of applications of hyperbola because a lot of people are unknown to it and asks it frequently.
| Applications of hyperbola
Dulles Airport, designed by Eero Saarinen, has a roof in the
shape of a hyperbolic paraboloid. The hyperbolic paraboloid is a three-dimensional
surface that is a hyperbola in one cross-section, and a parabola in another cross section.
Gear transmission is another great application of a pair of hyperboloidal gears.
And hyperbolic structures are used in the cooling towers of nuclear reactors. Doesn't that make the hyperbola a great deal on earth? :)
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/358056",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "20",
"answer_count": 8,
"answer_id": 2
} |
factor group is cyclic. Prove that a factor group of a cyclic group is cyclic.
I didn't understand last two lines of proof ..
Therefore $gH=(aH)^i$ for any coset $gH$.
so $G/H$ is cyclic , by definition of cyclic groups.
How does $gH=(aH)^i$ for any coset $gH$ prove the factor group to be cyclic?
Please explain.
| I just wanted to mention that more generally, if $G$ is generated by $n$ elements, then every factor group of $G$ is generated by at most $n$ elements:
Let $G$ be generated by $\{x_1,\ldots x_n\}$, and let $N$ be a normal subgroup of $G$. Then every coset of $N$ in $G$ can be expressed as a product of the cosets $Nx_1,\ldots, Nx_n$. So the set $\{Nx_1,\ldots,Nx_n\}$ generates $G/N$, and this set contains at most $n$ elements.
(Note that the cosets $Nx_i$ will not all be distinct if $N$ is non-trivial, but it's fine to write the set this way, just as $\{x^2 \mid x\in \mathbb{R}\}$ is a perfectly valid description of the set of non-negative real numbers.)
The result about cyclic groups is then just the special case $n=1$ of this.
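Spelling out the $n=1$ case that the question asks about: if $G=\langle a\rangle$ and $g\in G$, then $g=a^i$ for some $i$, so $gH=a^iH=(aH)^i$. Thus every element of $G/H$ is a power of the single element $aH$, i.e. $G/H=\langle aH\rangle$, which is exactly the definition of $G/H$ being cyclic.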
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/358114",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 2
} |
Simplifying $\sum\limits_{k=1}^{n-1} (2k + \log_2(k) - 1)$ I'm trying to simplify the following summation:
$$\sum_{k=1}^{n-1} (2k + \log_2(k) - 1)$$.
I've basically done the following:
$$\sum_{k=1}^{n-1} (2k + \log_2(k) - 1) \\
=
\sum_{k=1}^{n-1} 2k + \sum_{k=1}^{n-1} \log_2(k) - \sum_{k=1}^{n-1} 1\\
=
\frac{n(n-1)}{2} + \sum_{k=1}^{n-1} \log_2(k) - (n-1)$$
Now I'm trying to do deal with this term $\sum_{k=1}^{n-1} \log_2(k)$, but I'm a bit confused.
My gut tells me I can do the following:
$$\sum_{k=1}^{n-1} \log_2(k)\\
=
\log_2(1) + \log_2(2) + \ldots + \log_2(n-1)\\
=
\log_2(\prod_{k=1}^{n-1} k)\\
=
\log_2((n-1)!)$$
Using that $\log_a(b) + \log_a(c) = \log_a(b \cdot c)$.
However, I'm not convinced this is an entirely valid reasoning because I can't find any rules/identities for dealing with $\sum_{k=1}^{n-1} \log_2(k)$.
Is this correct or are there any rules to apply?
| (Just so people know this has been answered)
You're right:
$$\sum_{k=1}^{n} \log k = \log (n!)$$
If you want to justify it formally, you can try using induction.
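A sketch of that induction: the base case is $\log 1=\log(1!)=0$, and the inductive step is
$$\sum_{k=1}^{n+1}\log k=\Big(\sum_{k=1}^{n}\log k\Big)+\log(n+1)=\log(n!)+\log(n+1)=\log\big((n+1)!\big).$$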
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/358180",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
} |
Find $x,y$ such that $x=4y$ and $1$-$9$ occur in $x$ or $y$ exactly once.
$x$ is a $5$-digits number, while $y$ is $4$-digits number. $x=4y$, and they used up all numbers from 1 to 9. Find $x,y$.
Can someone give me some ideas please? Thank you.
| Here are the $x,y$ pairs a quick bit of code found.
15768 3942
17568 4392
23184 5796
31824 7956
No insight to offer at the moment I'm afraid..
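The "quick bit of code" is not shown; a minimal brute-force sketch along the same lines (a hypothetical reconstruction in Python, not the original program) could be:
# x = 4*y, x has 5 digits, y has 4 digits, together they use 1..9 exactly once
for y in range(2500, 10000):                   # 4*y must have 5 digits, so y >= 2500
    x = 4 * y
    if sorted(str(x) + str(y)) == list("123456789"):
        print(x, y)
# prints the four pairs listed above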
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/358286",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 0
} |
Whats the percentage of somebody getting homework in class there is a 25% chance you get homework in one class
there is a 40% chance you get homework in another class
what is the probability you get homework in both classes
| It is between 0 and 0.25, based on the information you have given. However, if you assume they are independent, i.e. they are two teachers who do not discuss whether they should give homework on the same day or not,
then P(A and B) = P(A)P(B) = 0.25 * 0.4 = 0.1
A is getting work from the 1st class, B is getting work from the 2nd class.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/358354",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Evaluating $\lim \limits_{n\to \infty}\,\,\, n\!\! \int\limits_{0}^{\pi/2}\!\! \left(1-\sqrt [n]{\sin x} \right)\,\mathrm dx$
Evaluate the following limit:
$$\lim \limits_{n\to \infty}\,\,\, n\!\! \int\limits_{0}^{\pi/2}\!\! \left(1-\sqrt [n]{\sin x} \right)\,\mathrm dx $$
I have done the problem .
My method:
First I applied L'Hôpital's rule as it can be made of the form $\frac0 0$. Then I used weighted mean value theorem and using sandwich theorem reduced the limit to an integral which could be evaluated using properties of define integration .
I would like to see other different ways to solve for the limit.
| You can obtain a closed-form solution; in fact, if $Re(1/n)>-1$ the integral collapses to:
$$\int_{0}^{\pi/2}\left[1-(\sin(x))^{1/n}\right]dx=\frac{1}{2} \left(\pi -\frac{2 \sqrt{\pi } n \Gamma
\left(\frac{n+1}{2 n}\right)}{\Gamma
\left(\frac{1}{2 n}\right)}\right)$$
So we define:
$$y(n)=\frac{n}{2} \left(\pi -\frac{2 \sqrt{\pi } n \Gamma
\left(\frac{n+1}{2 n}\right)}{\Gamma
\left(\frac{1}{2 n}\right)}\right)$$
And performing the limit:
$$\lim_{n \rightarrow + \infty}y(n)=-\frac{1}{4} \pi \left[\gamma +\psi
^{(0)}\left(\frac{1}{2}\right)\right]$$
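Since $\psi^{(0)}\!\left(\frac{1}{2}\right)=-\gamma-2\ln 2$, the closed form simplifies to
$$\lim_{n \rightarrow + \infty}y(n)=-\frac{\pi}{4}\left(\gamma-\gamma-2\ln 2\right)=\frac{\pi}{2}\ln 2\approx 1.0888,$$
consistent with the heuristic $n\left(1-\sin^{1/n}x\right)\to-\ln\sin x$ and the classical value $\int_0^{\pi/2}-\ln\sin x\,dx=\frac{\pi}{2}\ln 2$.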
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/358405",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "23",
"answer_count": 6,
"answer_id": 4
} |
dealing with sum of squares (1) I need to be able to conclude that there are $a, b \in \Bbb Z$, not 0, such that $|a| < √p,\ |b| < √p$
and $$a^2 + 2b^2 ≡ 0\ (mod\ p)$$
I'm not sure how to go about this at all. But apparently it is supposed to help me show (2) that
there are $a, b \in \Bbb Z$, such that either $$a^2 + 2b^2 = p$$ or $$a^2 + 2b^2 = 2p$$
Any idea how to go about 1 or 2?
| Hint $\ $ Apply the Aubry-Thue result (Thue's lemma): for a prime $p$ and any integer $c$ with $p\nmid c$, the congruence $x \equiv c\,y \pmod p$ has a solution with $0 < |x|,\,|y| < \sqrt p$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/358462",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 2
} |
integrate $\int_0^{2\pi} e^{\cos \theta} \cos( \sin \theta) d\theta$ How to integrate
$ 1)\displaystyle \int_0^{2\pi} e^{\cos \theta} \cos( \sin \theta) d\theta$
$ 2)\displaystyle \int_0^{2\pi} e^{\cos \theta} \sin ( \sin \theta) d\theta$
| Let $\gamma$ be the unitary circumference positively parametrized going around just once.
Consider $\displaystyle \int _\gamma \frac{e^z}{z}\,dz$.
On the one hand $$\begin{align}
\int _\gamma \frac{e^z}{z}\mathrm dz&=\int \limits_0^{2\pi}\frac{e^{e^{i\theta}}}{e^{i\theta}}ie^{i\theta}\mathrm d\theta\\
&=i\int _0^{2\pi}e^{\cos (\theta)+i\sin (\theta )}\mathrm d\theta\\
&=i\int _0^{2\pi}e^{\cos (\theta )}[\cos (\sin (\theta))+i\sin (\sin (\theta))\textbf{]}\mathrm d\theta.
\end{align}$$
On the other hand Cauchy's integral formula gives you: $\displaystyle \int _\gamma \frac{e^z}{z}\mathrm dz=2\pi i$.
$\large \color{red}{\text{FINISH HIM!}}$
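To finish it off: comparing real and imaginary parts of the two evaluations of $\int_\gamma \frac{e^z}{z}\,dz$ gives
$$\int_0^{2\pi} e^{\cos\theta}\cos(\sin\theta)\,d\theta=2\pi,\qquad \int_0^{2\pi} e^{\cos\theta}\sin(\sin\theta)\,d\theta=0.$$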
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/358548",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 4,
"answer_id": 2
} |
How to expand $(a_0+a_1x+a_2x^2+...a_nx^n)^2$? I know you can easily expand $(x+y)^n$ using the binomial expansion. However, is there a simple summation formula for the following expansion?
$$(a_0+a_1x+a_2x^2+...+a_nx^n)^2$$
I found something called the multinomial theorem on wikipedia but I'm not sure if that applies to this specific problem. Thanks.
| To elaborate on the answer vonbrand gave, perhaps the following will help:
$$(a + b)^2 \;\; = \;\; a^2 + b^2 + 2ab$$
$$(a + b + c)^2 \;\; = \;\; a^2 + b^2 + c^2 + 2ab + 2ac + 2bc$$
$$(a + b + c + d)^2 \;\; = \;\; a^2 + b^2 + c^2 + d^2 + 2ab + 2ac + 2ad + 2bc + 2bd + 2cd$$
In words, the square of a polynomial is the sum of the squares of all the terms plus the sum of double the products of all pairs of different terms.
(added a few minutes later) I just realized Denise T wrote "... is there a simple summation formula for the following expansion", but maybe what I said could still be of help to others. What led me to initially respond was her comment (which initially I misunderstood) "What does the i,j notation in a summation mean?", which suggested to me that perhaps she didn't understand sigma notation and thus vonbrand's answer was much too advanced.
(added 3 days later) Because of Denise T's comment, where she said in part "I am only familiar with basic sigma notation", and because of Wikipedia's notational-heavy treatment of the Multinomial theorem that in my opinion isn't very useful to someone who doesn't already mostly know the topic, I thought it would be useful to include a development that focuses more on the underlying ideas and technique. I would have written this earlier, but I've been extremely busy the past few days.
Notation: In what follows, ellipses (i.e. $\dots$) denote the continuation of a finite sum, not the continuation of an infinite sum.
Recall the (extended) distributive law:
$$A(x+y+z+\dots) \; = \; Ax + Ay + Az + \dots$$
Using $A = (a+b+c+\dots)$, we get
$$(a+b+c+\dots)(x+y+z+\dots)$$
$$= \; (a+b+c+\dots)x \; + \; (a+b+c+\dots)y \; + \; (a+b+c+\dots)z \; + \; \dots$$
Now imagine expanding each of the terms on the right side. For example, one such term to imagine expanding is $(a+b+c+\dots)y.$ It should be clear from this that the expansion of $(a+b+c+\dots)(x+y+z+\dots)$ consists of all possible products of the form
(term from left sum) times (term from right sum)
The above can be considered a left brain approach. A corresponding right brain approach would be the well known school method of dividing a rectangle with dimensions $(a+b+c+\dots)$ by $(x+y+z+\dots)$ into lots of tiny rectangles whose dimensions are $a$ by $x$, $a$ by $y,$ etc. For example, see this diagram. Of course, the rectangle approach assumes $a,$ $b,$ $\dots,$ $x,$ $y,$ $\dots$ are all positive.
Now let's consider the case where the two multinomials, $(a+b+c+\dots)$ and $(x+y+z+\dots),$ are equal.
$$(a+b+c+\dots)(a+b+c+\dots)$$
$$= \; (a+b+c+\dots)a \; + \; (a+b+c+\dots)b \; + \; (a+b+c+\dots)c \; + \; \dots$$
After expanding each of the terms on the right side, such as the term $(a+b+c+\dots)b,$ we find that the individual products of the form
(term from left sum) times (term from right sum)
can be classifed into the these two types:
Type 1 product: $\;\;$ term from left sum $\;\; = \;\;$ term from right sum
Type 2 product: $\;\;$ term from left sum $\;\; \neq \;\;$ term from right sum
Examples of Type 1 products are $aa,$ $bb,$ $cc,$ etc. Examples of Type 2 products are $ab,$ $ba,$ $ac,$ $ca,$ $bc,$ $cb,$ etc.
When all the Type 1 products are assembled together, we get
$$a^2 + b^2 + c^2 + \dots$$
When all the Type 2 products are assembled together, we get
$$(ab + ba) \; + \; (ac + ca) \; + \; (bc + cb) \; + \; \dots$$
$$=\;\; 2ab \; + \; 2ac \; + \; 2bc \; + \; \dots$$
Here is a way of looking at this that is based on the area of rectangles approach I mentioned above. The full expansion arises from adding all the entries in a Cayley table (i.e. a multiplication table) for a binary operation on the finite set $\{a,\;b,\,c,\;\dots\}$ in which the binary operation is commutative. The Type 1 products are the diagonal entries in the Cayley table and the Type 2 products are the non-diagonal entries in the Cayley table. Because the operation is commutative, the sum of the non-diagonal entries can be obtained by doubling the sum of the entries above the diagonal (or by doubling the sum of the entries below the diagonal).
In (abbreviated) sigma notation we have
$$\left(\sum_{i}a_{i}\right)^2 \;\; = \;\; (type \; 1 \; products) \;\; + \;\; (type \; 2 \; products)$$
$$= \;\; \sum_{\begin{array}{c}
(i,j) \\ i = j \end{array}} a_{i}a_{j} \;\; + \;\; \sum_{\begin{array}{c}
(i,j) \\ i \neq j \end{array}} a_{i}a_{j}$$
$$= \;\; \sum_{\begin{array}{c}
(i,j) \\ i = j \end{array}} a_{i}a_{j} \;\; + \;\; 2\sum_{\begin{array}{c}
(i,j) \\ i < j \end{array}} a_{i}a_{j}$$
In older advanced "school level" algebra texts from the 1800s, such as the texts by George Chrystal and Hall/Knight and Elias Loomis and Charles Smith and William Steadman Aldis and Isaac Todhunter, you can often find the following even more abbreviated notation used:
$$(\Sigma)^2 \;\; = \;\;(\Sigma a^2) \; + \; 2(\Sigma ab)$$
In this notation $(\Sigma a^2)$ represents the sum of all expressions $a^2$ where $a$ varies over the terms in the multinomial being squared, and $(\Sigma ab)$ represents the sum of all expressions $ab$ (with $a \neq b$) where $a$ and $b$ vary over the terms in the multinomial being squared (with the selection being "unordered", so that for example once you choose "$b$ and $c$" you don't later choose "$c$ and $b$").
This older notation is especially helpful in stating and using expansions of degree higher than $2$:
$$(\Sigma)^3 \;\; = \;\; (\Sigma a^3) \; + \; 3(\Sigma a^2b) \; + \; 6(\Sigma abc)$$
$$(\Sigma)^4 \;\; = \;\; (\Sigma a^4) \; + \; 4(\Sigma a^3b) \; + \; 6(\Sigma a^2b^2) \; + \; 12(\Sigma a^2bc) \; + \; 24(\Sigma abcd)$$
Thus, using the above formula for $(\Sigma)^3$, we can immediately expand $(x+y+z+w)^3$. Altogether, there will be $20$ distinct types of terms:
$$(\Sigma a^3) \;\; = \;\; x^3 \; + \; y^3 \; + \; z^3 \; + \; w^3$$
$$3(\Sigma a^2b) \;\; = \;\; 3( x^2y + x^2z + x^2w + y^2x + y^2z + y^2w + z^2x + z^2y + z^2w + w^2x + w^2y + w^2z)$$
$$6(\Sigma abc) \;\; = \;\; 6( xyz \; + \; xyw \; + \; xzw \; + \; yzw)$$
Here is why cubing a multinomial produces the pattern I gave above. By investigating what happens when you multiply $(a+b+c+\dots)$ by the expanded form of $(a+b+c+\dots)^2,$ you'll find that
$$(a+b+c+\dots)(a+b+c+\dots)(a+b+c+\dots)$$
can be expanded by adding all individual products of the form
(term from left) times (term from middle) times (term from right)
The various products that arise can be classifed into the these three types:
Type 1 product: $\;\;$ all $3$ terms are equal to each other
Type 2 product: $\;\;$ exactly $2$ terms are equal to each other
Type 3 product: $\;\;$ all $3$ terms are different from each other
Examples of Type 1 products are $aaa,$ $bbb,$ $ccc,$ etc. Examples of Type 2 products are $aab$ $aba,$ $baa,$ $aac,$ $aca,$ $caa,$ etc. Examples of Type 3 products are $abc,$ $acb,$ $bac,$ $bca,$ $cab,$ $cba,$ etc. Note that each algebraic equivalent of a Type 2 product, such as $a^2b,$ shows up $3$ times, which explains why we multiply $(\Sigma a^2b)$ by $3.$ Also, each algebraic equivalent of a Type 3 product, such as $abc,$ shows up $6$ times, and hence we multiply $(\Sigma abc)$ by $6.$
Those who have understood most things up to this point and who can come up with an explanation for why the $4$th power of a multinomial produces the pattern I gave above (an explanation similar to what I gave for the $3$rd power of a multinomial) are probably now at the point where the Wikipedia article Multinomial theorem can be attempted. See also Milo Brandt's excellent explanation in his answer to Is there a simple explanation on the multinomial theorem?
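If you want to confirm one of these patterns symbolically, a small sketch using SymPy (assuming it is installed; not part of the original answer):
from sympy import symbols, expand
a, b, c, d = symbols('a b c d')
lhs = expand((a + b + c + d)**2)
rhs = expand(a**2 + b**2 + c**2 + d**2 + 2*(a*b + a*c + a*d + b*c + b*d + c*d))
print(lhs - rhs)   # 0: the squares of all terms plus twice all pairwise products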
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/358604",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 0
} |
How to solve $(1+x)^{y+1}=(1-x)^{y-1}$ for $x$? Suppose $y \in [0,1]$ is some constant, and $x \in [y,1]$. How to solve the following equation for $x$:
$\frac{1+y}{2}\log_2(1+x)+\frac{1-y}{2}\log_2(1-x)=0$ ?
Or equivalently $1+x = (1-x)^{\frac{y-1}{y+1}}$?
Thanks very much.
| If we set $f = \frac{1+x}{1-x}$ and $\eta = \frac{y+1}{2}$ some manipulation yields the following:
$$2f^{\eta} - f - 1 = 0$$
For rational $\eta = \frac{p}{q}$, this can be converted to a polynomial in $f^{\frac{1}{q}}$, and is likely "unsolveable" exactly, and would require numerical methods (like Newton-Raphson etc, which will work for irrational $\eta$ too).
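As a concrete illustration of the numerical route (a sketch under the substitution above, not part of the original answer): for $y\in(0,1)$ we have $\eta\in(\tfrac12,1)$, and $g(f)=2f^{\eta}-f-1$ satisfies $g(1)=0$, $g'(1)=2\eta-1>0$ and $g(f)\to-\infty$ as $f\to\infty$, so the nontrivial root $f^*>1$ can be bracketed and bisected; then $x=(f^*-1)/(f^*+1)$.
def solve_x(y, tol=1e-12):
    # nontrivial solution of (1+x)^(y+1) = (1-x)^(y-1) for 0 < y < 1
    eta = (y + 1) / 2
    g = lambda f: 2 * f**eta - f - 1
    lo, hi = 1.0 + 1e-9, 2.0
    while g(hi) > 0:                  # g -> -infinity, so this terminates
        hi *= 2
    for _ in range(200):              # plain bisection keeps g(lo) > 0 >= g(hi)
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if g(mid) > 0 else (lo, mid)
        if hi - lo < tol:
            break
    f = 0.5 * (lo + hi)
    return (f - 1) / (f + 1)
print(solve_x(0.5))                   # approx 0.839; plug back in to verify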
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/358685",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 2
} |
Ways to fill a $n\times n$ square with $1\times 1$ squares and $1\times 2$ rectangles I came up with this question when I'm actually starring at the wall of my dorm hall. I'm not sure if I'm asking it correctly, but that's what I roughly have:
So, how many ways (pattern) that there are to fill a $n\times n:n\in\mathbb{Z}_{n>0}$ square with only $1\times 1$ squares and $1\times 2$ rectangles?
For example, for a $2\times 2$ square:
*
*Four $1\times 1$ squares; 1 way.
*Two $1\times 1$ squares and one $1\times 2$ rectangle; $4$ ways total since we can rotate it to get different pattern.
*Two $1\times 2$ rectangles; 2 ways total: placed horizontally or vertically.
$\therefore$ There's a total of $1+4+2=\boxed{7}$ ways to fill a $2\times 2$ square.
So, I'm just wondering if there's a general formula for calculating the ways to fill a $n\times n$ square.
Thanks!
| We can probably give some upper and lower bounds though. Let $t_n$ be the number of ways to tile an $n\times n$ square in the manner you described. At each cell, we may have $5$ possibilities: either a $1\times 1$ square, or $4$ kinds of $1\times 2$ rectangles going up, right, down, or left. This gives you the upper bound $t_n \leq 5^{n^2}$.
For the lower bound, consider a $2n\times 2n$ rectangle, and divide it into $n^2$ $2\times 2$ blocks, starting from the top left and putting a $2\times 2$ square, putting another $2\times 2$ square to its right and so on... For each of these $2\times 2$ squares, we have $7$ possible distinct ways of tiling (the count from the question). This gives the lower bound $t_{2n} \geq 7^{n^2}$. Obviously, $t_{2n+1} \geq t_{2n},\,n \geq 1$, and therefore $t_n \geq 7^{\lfloor \frac{n}{2}\rfloor ^2}$.
Hence,
\begin{align}
7^{\lfloor \frac{n}{2}\rfloor ^2} \leq t_n \leq 5^{n^2},
\end{align}
or roughly (if $n$ is even),
\begin{align}
(7^{1/4})^{n^2} \leq t_n \leq 5^{n^2}.
\end{align}
BTW, $7^{1/4} \geq 1.6$. So, at least we know $\log t_n \in \Theta(n^2)$.
Note: Doing the $3\times 3 $ case for the lower bound, we get $(131)^{1/9} \geq 1.7$ which is slightly better.
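For small $n$ the counts $t_n$ can be checked by brute force; a short sketch (not part of the original answer) reproducing $t_2=7$ and $t_3=131$:
def count_tilings(n):
    # number of ways to tile an n x n board with 1x1 squares and 1x2 dominoes
    filled = [[False] * n for _ in range(n)]
    def rec():
        for i in range(n):
            for j in range(n):
                if not filled[i][j]:
                    total = 0
                    filled[i][j] = True             # 1x1 square
                    total += rec()
                    if j + 1 < n and not filled[i][j + 1]:
                        filled[i][j + 1] = True     # horizontal 1x2
                        total += rec()
                        filled[i][j + 1] = False
                    if i + 1 < n and not filled[i + 1][j]:
                        filled[i + 1][j] = True     # vertical 1x2
                        total += rec()
                        filled[i + 1][j] = False
                    filled[i][j] = False
                    return total
        return 1                                    # board completely filled
    return rec()
print([count_tilings(n) for n in (1, 2, 3)])        # [1, 7, 131]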
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/358755",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "23",
"answer_count": 4,
"answer_id": 0
} |
Give an example of a simply ordered set without the least upper bound property. In Theorem 27.1 in Topology by Munkres, he states "Let $X$ be a simply ordered set having the least upper bound property. In the order topology, each closed interval in $X$ is compact."
(The LUB property is if a subset is bounded above, then it has a LUB.)
I don't understand how you could have a simply ordered set (a chain) WITHOUT the LUB property. If a subset is bounded and it is a chain, then how can it not have a LUB?
Can someone give an example?
Thanks!
| According to the strict definitions given by the OP, the null set fails to have a Least Upper Bound while still being simply ordered.
The Least Upper Bound of a set, as defined at the Wikipedia page he links to requires that it be a member of that set. The null set, having no members, clearly lacks a LUB.
However, the definition given for being simply ordered does not require that the set have any elements. Indeed, a set can only lack the property if it has a pair of elements that are not comparable.
So, the null set is indeed Simply Ordered without having the Least Upper Bound property.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/358814",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11",
"answer_count": 7,
"answer_id": 4
} |
The Radical of $SL(n,k)$ For an algebraically closed field $k$, I'd like to show that the algebraic group $G=SL(n,k)$ is semisimple. Since $G$ is connected and nontrivial, this amounts to showing that the radical of $G$, denoted $R(G)$, is trivial. $R(G)$ can be defined as the unique largest normal, solvable, connected subgroup of $G$.
I know that the group of $n$th roots of unity of $k$ is inside of $G$, and it is normal and solvable (being in the center of $G$) but not connected, having one irreducible component for each root of unity. What are other normal subgroups in $G$? How can I show that $R(G)=e?$
| The fact is that the quotient $\mathrm{PSL}_n(k)$ of $\mathrm{SL}_n(k)$ by its center is simple. Since the center of $\mathrm{SL}_n(k)$ consists, as you say, of the $n$th roots of unity, this shows that there are no nontrivial connected normal subgroups of $\mathrm{SL}_n(k)$.
The fact that the projective special linear group is simple is not entirely trivial. There is a proof using Tits systems in the famous Bourbaki book on Lie groups and Lie algebras (chapter 4), which is of course more general. A more elementary approach, just using linear algebra, can be found in Grove's book "Classical groups and geometric algebra".
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/358841",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
$\mathbb Q/\mathbb Z$ is an infinite group I'm trying to prove that $\mathbb Q/\mathbb Z$ is an infinite abelian group, the easiest part is to prove that this set is an abelian group, I'm trying so hard to prove that this group is infinite without success.
This set is defined to be equivalences classes of $\mathbb Q$, where $x\sim y$ iff $x-y\in \mathbb Z$.
I need help here.
thanks a lot
| Another hint: prove that for
$$n,k\in\Bbb N\;,\;\;n\neq k\;,\;\;\;\frac{1}{n}+\Bbb Z\neq\frac{1}{k}+\Bbb Z$$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/358907",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 8,
"answer_id": 4
} |
How can the graph of an equivalence relation be conceptualized? Consider a generic equivalence relation $R$ on a set $S$. By definition, if we partition $S$ using the relation $R$ into $\pi_S$, whose members are the congruence classes $c_1, c_2...$ then
$aRb \text{ iff a and b are members of the same congruence class in } \pi_S$.
But what is the domain and codomain of $R$? Is it $S \rightarrow S $ or $S^2 \rightarrow \text{{true,false}}$?
The reason I ask is to have an idea of the members of $graph(R)$. Does it only contain ordered pairs which are equivalent, i.e. $(a,b)$; or all elements of $S \times S$ followed by whether they are equivalent, i.e. $((a,b), true)$?
Moreover, what would the image of $s \in S$ under $R$ look like? If one suggests that $R(s)$ would return a set of all the equivalent members, then the former definition is fitting.
I suppose the source of confusion is that we rarely think of equivalence relations as 'mapping' from an input to an output, instead it tells us if two objects are similar in some way.
| The domain is $S\times S$ and codomain is {true, false}. Said another way, you can just think of any relation as a subset of $S\times S$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/358986",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Dimensions: $\bigcap^{k}_{i=1}V_i \neq \{0\}$ Let $V$ be a vector space of dimension $n$ and let $V_1,V_2,\ldots,V_k \subset V$ be subspaces of
$V$. Assume that
\begin{eqnarray}
\sum^{k}_{i=1} \dim(V_i) > n(k-1).
\end{eqnarray}
To show that $\bigcap^{k}_{i=1}V_i \neq \{0\}$, what must be done? Also, could there be an accompanying schematic/diagram to show the architecture of the spaces' form; that is, something like what's shown here.
| Hint: take complements. That is, pick vector spaces $W_i$ with $\dim(W_i) + \dim(V_i) = n$ and $W_i \cap V_i = \{0\}$.
EDIT: thanks, Ted
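For reference, one way to finish, independent of the particular hint above (treat it as a sketch): since $\dim(U\cap V)=\dim U+\dim V-\dim(U+V)\ge\dim U+\dim V-n$, induction on $k$ gives
$$\dim\Big(\bigcap_{i=1}^{k}V_i\Big)\;\ge\;\sum_{i=1}^{k}\dim(V_i)-(k-1)n\;>\;0,$$
so the intersection contains a nonzero vector.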
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/359052",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 5,
"answer_id": 3
} |
Which is the function that this sequence of functions converges Prove that $$ \left(\sqrt x, \sqrt{x + \sqrt x}, \sqrt{x + \sqrt {x + \sqrt x}}, \ldots\right)$$ in $[0,\infty)$ is convergent and I should find the limit function as well.
For give a idea, I was plotting the sequence and it's look like
| Note that every term of the sequence is bounded by $\sqrt{x}+1$, as can be shown easily by induction.
For the limit $y=f(x)$ we have the following:
$\sqrt{x+y}=y$
$x+y=y^2$
$0=y^2-y-x$
$\Delta= 1+4x $
So $y=f(x)=\frac{1+\sqrt{1+4x}}{2}$ for $x>0$, and $f(0)=0$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/359145",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 1
} |
Are there any other constructions of a finite field with characteristic $p$ except $\Bbb Z_p$? I mean, $\Bbb Z_p$ is an instance of $\Bbb F_p$, I wonder if there are other ways to construct a field with characteristic $p$?
Thanks a lot!
| Just to supplement the other answers: As stated in the other answers, for every prime power $p^r$, $r>0$, there is a unique (up to isomorphism) field with $p^r$ elements. There are also infinite fields of characteristic $p$, for instance if $F$ is any field of characteristic $p$ (e.g., $\mathbb Z_p$), the field $F(t)$ (the field of fractions of the polynomial ring $F[t]$, with $t$ an indeterminate), is an infinite field of characteristic $p$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/359212",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 5,
"answer_id": 0
} |
Can you find function which satisfies $f(ab)=\frac{f(a)}{f(b)}$? Can you find function which satisfies $f(ab)=\frac{f(a)}{f(b)}$? For example $log(x)$ satisfies condition $f(ab)=f(a)+f(b)$ and $x^2$ satisfies $f(ab)=f(a)f(b)$?
| Let us reformulate the question: classify all maps $f : G \rightarrow H$, not assumed to be group morphisms, that satisfy the condition $f(ab)=f(a)f(b)^{-1}$.
A simple calculation shows that $f(e)=f(x)f(x^{-1})^{-1}= f(x^{-1})f(x)^{-1}$, so $f(x)=f(e)^{-1}f(x^{-1})=f(e)f(x^{-1})$, and hence $f(e)^{-1}=f(e)$. Now $f(x)=f(ex)=f(e)f(x)^{-1}=f(e)^{-1}f(x)^{-1}=(f(x)f(e))^{-1}=(f(x)f(e)^{-1})^{-1}=f(xe)^{-1}=f(x)^{-1}$, so even without assuming a group morphism, every element of the image is its own inverse. Hence $f$ is a group morphism: $f(ab)=f(a)f(b)^{-1}=f(a)f(b)$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/359277",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 2
} |
Why is this function Lipschitz? Let $f:A \to B$ where $A$, $B \subset \mathbb{R}^n$.
Suppose
$$\lVert f(y_1) - f(y_2)\rVert_{\ell_\infty} \geq C\lVert y_1 - y_2 \rVert_{\ell_\infty}$$
This tells us that $f$ is one to one and that the inverse is Lipschitz.
I am told that $f$ is bi-Lipschitz; so $f$ is also Lipschitz, but I don't see why?
| I was inaccurate: in the general setting of your problem, where the only thing we know about $A$ is that $A \subset \mathbb{R}^n$, it is not true that $f \in W^{1,\infty}(A)$ implies $f \in \text{Lip}(A)$. Nevertheless, what is true is the following:
$\textbf{Theorem:}$ Let $U$ be open and bounded, with $\partial U$ of class $C^1$. Then $u \colon U \to \mathbb{R}$ is Lipschitz continuous if and only if $u \in W^{1,\infty}(U)$.
This is proved in Evans's Partial Differential Equations: it is Theorem 4 of the additional topics in Chapter 5.
For the case in which $U$ is unbounded you just need to read the section dedicated to extensions.
The theorem is sharp: any weakening of the hypotheses makes it false, and a web search turns up counterexamples. (Even the hypothesis that $\partial U$ is of Lipschitz class is too weak.)
If your $A$ is a general open set, what is true is that $u \in W^{1,\infty}_{loc}(A) \Longleftrightarrow u \in \text{Lip}_{loc}(A)$.
I really hope this helps!
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/359342",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Simplicial cohomology of $ \Bbb{R}\text{P}^2$ I've managed to confuse myself on a simple cohomology calculation. I'm working with the usual $\Delta$-complex on $X = \mathbf{R}\mathbf{P}^2$ and I've computed the complex as $\newcommand{Z}{\mathbf{Z}}$
$$ 0 \to \Z \oplus \Z \stackrel{\partial^0}{\to} \Z \oplus \Z \oplus \Z \stackrel{\partial^1}{\to} \Z \oplus \Z \to 0 $$
with $\partial^0$ given by $(l, m) \mapsto (-l+m, -l+m, 0)$ and $\partial^1$ by $(l,m,n) \mapsto (l+m-n, -l+m+n)$. Then $\mathrm{Ker}(\partial^0) = \left<(1,1)\right> \cong \Z$ and $\mathrm{Im}(\partial^0) = \left<(1,1,0)\right> \cong \Z$. For $\partial^1$ I got $\mathrm{Ker}(\partial^1) = \left<(1,0,1)\right> \cong \Z$ and I'm pretty confident about everything so far.
Now for $\mathrm{Im}(\partial^1)$ I first got $2\Z \oplus 2\Z$, since $(2, 0)$ and $(0, 2)$ are both in the image while $(1, 0)$ and $(0, 1)$ are not. I don't see what's wrong with this logic, but it doesn't give the right answer: $H^2(X) \cong \Z \oplus \Z / (2\Z \oplus 2\Z) \cong \Z/2\Z \oplus \Z/2\Z$ while I believe the correct answer has only one copy.
A second approach I tried is the "isomorphism theorem" which says $\mathrm{Im}(\partial^1) \cong \Z \oplus \Z \oplus \Z / \mathrm{Ker}(\partial^1) = (\Z \oplus \Z \oplus \Z) / \Z \cong \Z \oplus \Z$. But then $H^2(X) \cong \Z \oplus \Z / (\Z \oplus \Z) = 0$ is still wrong.
What's wrong with both of these approaches, and what's the correct one?
EDIT: I just realised that of course $\Z \oplus \Z \cong 2\Z \oplus 2\Z$ so both approaches actually give the same answer for $\mathrm{Im}(\partial^1)$. More specifically I think it is generated by $\left<(1, 1), (1, -1)\right>$. So I can only assume I'm computing the quotient $\Z^2/\mathrm{Im}(\partial^1)$ incorrectly.
To be very precise, we have the isomorphism
$$ H^2(X) = \Z \oplus \Z / \mathrm{Im}(\partial^1) \stackrel{\cong}{\to} \Z $$
given by $(m, n) + \left<(1, 1), (1, -1)\right> \mapsto m + n$. Since $(m,n) \sim (0, m+n)$ this map is injective; and it is obviously surjective because $(n, 0)$ always maps to $n$ for any $n \in \Z$.
This is so weird......
EDIT: Of course, the problem with the above "isomorphism" is that it is not actually a well-defined homomorphism, as it doesn't agree on $(1, 1)$ and $(1, -1)$ (hence we mod out $2\Z$...)
| Assuming that you have computed your cochain complex correctly, the problem with your first approach is that for a general subgroup $N \le A \oplus B$ it is not true that
$$(A \oplus B)/N \cong A/(A\cap N) \oplus B/(B\cap N);$$ here $N = \operatorname{im}\partial^1$ is strictly larger than $2\Bbb{Z} \oplus 2\Bbb{Z}$.
Instead to calculate $H^2(X)$ you will need to work with generators and relations. Define $a := (1,0)$ and $b:= (0,1)$, your basis vectors of $\Bbb{Z} \oplus \Bbb{Z}$. A basis for the image of $\partial^1$ is given by $a -b$ and $a + b$. So when you quotient out by $\operatorname{im} \partial^1$, you are effectively saying that
$$H^2(X) \cong \langle a,b | a +b = a-b= 0\rangle$$
The relations $a + b= 0$ and $a - b = 0$ combine to give $2a = 0$, $a = -b$. This means
$$H^2(X) \cong \langle a | 2a = 0 \rangle \cong \Bbb{Z}/2\Bbb{Z}.$$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/359418",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Number of conjugacy classes of the reflection in $D_n$.
Consider the conjugation action of $D_n$ on $D_n$. Prove that the
number of conjugacy classes of the reflections are
$\begin{cases} 1 &\text{ if } n=\text{odd} \\ 2 &\text{ if }
n=\text{even} \end{cases} $
I tried this:
Let $\sigma$ be a reflection and $\rho$ the standard rotation of $D_n$.
$$\rho^l\cdot\sigma\rho^k\cdot\rho^{-l}=\sigma\rho^{k-2l}$$
$$\sigma\rho^l\cdot\sigma\rho^k\cdot\rho^{-l}\sigma=\sigma\rho^{-k+2l}$$
If $n$ is even, it depends on $k$ if $-k+2l$ will stay even.
But if $n$ is odd, then at some point $-k+2l=|D_n|$ and therefore you will also get the even elements. So independent of $k$ you will get all the elements. Is this the idea ?
| Hint. The Orbit-Stabilizer theorem gives you that $[G:C_G(g)]$ is the size of the conjugacy class containing $g$. When $n$ is odd, a reflection $g$ commutes only with the identity and itself (why?), so its conjugacy class has $[G:C_G(g)]=|G|/2=n$ elements, which are easily identified as the $n$ reflections. Now, use this same technique to figure out the answer for the case of even $n$, keeping in mind that the dihedral group of order $2n$ has a nontrivial center when $n$ is even (why?).
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/359511",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
How to evaluate powers of powers (i.e. $2^3^4$) in absence of parentheses? If you look at $2^{3^4}$, what is the expected result? Should it be read as $2^{(3^4)}$ or $(2^3)^4$? Normally I would use parentheses to make the meaning clear, but if none are shown, what would you expect?
(In this case, the formatting gives a hint, because I can either enter $2^{3^4}$ or ${2^3}^4$. If I omit braces in the MathML expression, the output is shown as $2^3^4$. Just suppose all three numbers were displayed in the same size, and the same vertical offset between 2, 3 and 3, 4.)
| In the same way that any expressions in brackets inside other brackets are done before the rest of the things in the brackets, I'd say that one works from the top down in such a case.
i.e. because we do $(a*b)$ first in $((a*b)*c)*d$, I'd imagine it'd be the expected thing to do $x^{(y^z)}$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/359577",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "12",
"answer_count": 2,
"answer_id": 1
} |
Reversing the Gram Matrix Let $A$ be a $M\times N$ real matrix, then $B=A^TA$ is the gramian of $A$. Suppose $B$ is given, is $A$ unique? Can I say something on it depending on $M$ and $N$.
| $A$ will definitely not be unique without some pretty serious restrictions. The simplest case to think about might be to consider $M\times 1$ 'matrices', i.e. column vectors. Then, $A^TA$ is simply the norm-squared of $A$, so for instance $A^TA=1$ would hold for any vector with norm $1$ (i.e. the unit sphere in $\Bbb{R}^M$).
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/359635",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
} |
Condition number question Please, help me with this problem:
Let $A$ a matrix of orden $100$,
$$A\ =\ \left(\begin{array}{ccccc}
1 & 2 & & & \\
& 1 & 2 & & \\
& & \ddots & \ddots & \\
& & & 1 & 2\\
& & & & 1
\end{array}\right).$$
Show that $\mbox{cond}_2(A) \geq 2^{99}$.
Thanks in advance.
| We consider a general order $n$. Calculate $\|Ax\|/\|x\|$ with $x=(1,1,\ldots,1)^T$ to get a lower bound $p=\sqrt{\frac{9(n-1)+1}{n}}$ for $\sigma_1(A)$. Compute $\|Ax\|/\|x\|$ for $x=\left((-2)^{n-1},(-2)^{n-2},\,\ldots,\,-2,1\right)^T$ to get an upper bound $q=\sqrt{\frac{3}{4^n-1}}$ for $\sigma_n(A)$. Now $\frac pq$ is a lower bound for $\operatorname{cond}_2(A)$ and you may show that it is $\ge2^{n-1}$.
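A quick numerical sanity check of the bound for moderate $n$ (a sketch assuming NumPy; small $n$ is used since for $n=100$ the condition number is of order $2^{99}\approx 6\times 10^{29}$):
import numpy as np
for n in (5, 10, 20):
    A = np.eye(n) + 2 * np.eye(n, k=1)             # 1's on the diagonal, 2's just above it
    print(n, np.linalg.cond(A, 2) >= 2**(n - 1))   # expect True for each n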
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/359684",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Standard Young Tableaux and Bijection to saturated chains in Young Lattice I'm reading Sagan's book The Symmetric Group and am quite confused.
I was under the assumption that any tableau with entries weakly increasing along a row and strictly increasing down a column would be considered standard Young tableau, e.g.
$$1\; 2$$
$$2\; 3$$
would be a standard Young tableau. But Sagan proposes that there is a simple bijection between standard Young tableaux and saturated $\emptyset-\lambda$ chains in the Young lattice. But this wouldn't make sense for the above tableau, since you could take both:
*
*$\emptyset \prec (1,0) \prec (1,1) \prec (2,1) \prec (2,2)$
*$\emptyset \prec (1,0) \prec (2,0) \prec (2,1) \prec (2,2)$
I believe I am missing something, can someone please clarify?
| Actually your Young tableau corresponds to the chain $$\begin{array}{cccccc}\emptyset & \prec & \bullet & \prec & \bullet & \bullet & \prec & \bullet & \bullet \\ & & & & \bullet & & & \bullet & \bullet\end{array}$$
that is, $\emptyset \prec (1,0) \prec (2,1) \prec (2,2)$, which is not saturated.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/359744",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
The Best of Dover Books (a.k.a the best cheap mathematical texts) Perhaps this is a repeat question -- let me know if it is -- but I am interested in knowing the best of Dover mathematics books. The reason is because Dover books are very cheap and most other books are not: For example, while something like Needham's Visual Complex Analysis is a wonderful book, most copies of it are over $100.
In particular, I am interested in the best of both undergraduate and graduate-level Dover books. As an example, I particularly loved the Dover books Calculus of Variations by Gelfand & Fomin and Differential Topology by Guillemin & Pollack.
Thanks.
(P.S., I am sort of in an 'intuition-appreciation' kick in my mathematical studies (e.g., Needham))
EDIT: Thank you so far. I'd just like to mention that the books need not be Dover, just excellent and affordable at the same time.
| Though it lacks any treatment of cardinal functions, Stephen Willard’s General Topology remains one of the best treatments of point-set topology at the advanced undergraduate or beginning graduate level. Steen & Seebach, Counterexamples in Topology, is not a text, but it is a splendid reference; the title is self-explanatory.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/359784",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "142",
"answer_count": 23,
"answer_id": 14
} |
Why aren't logarithms defined for negative $x$? Given a logarithm is true, if and only if, $y = \log_b{x}$ and $b^y = x$ (and $x$ and $b$ are positive, and $b$ is not equal to $1$)[1], are true, why aren't logarithms defined for negative numbers?
Why can't $b$ be negative? Take $(-2)^3 = -8$ for example. Turning that into a logarithm, we get $3 = \log_{(-2)}{(-8)}$ which is an invalid equation as $b$ and $x$ are not positive! Why is this?
| For the real, continuous exponentiation operator -- the used in the definition of the real, continuous logarithm -- $(-2)^3$ is undefined, because it has a negative base.
The motivation stems from continuity: If $(-2)^3$ is defined, then $(-2)^{\pi}$ and $(-2)^{22/7}$ should both be defined as well, and be "close" in value to $(-2)^3$, because $\pi$ and $22/7$ are "close" to 3. But the same line of reasoning says that $(-2)^{22/7}$ should be $8.8327...$, which isn't very close to $-8$ at all, and what could $(-2)^{\pi}$ possibly mean?!?!
You can define a "discrete logarithm" operator that corresponds to the discrete exponentiation operator (i.e. the one that defines $(-2)^3$ to be repeated multiplication, and thus $8$), but this has less general utility -- my expectation is that one would be much better off learning just the continuous logarithm and dealing with the signs "manually" in those odd cases where you want to solve questions involving discrete exponentiation with negative bases. (and eventually learn the complex logarithm)
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/359855",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 5,
"answer_id": 2
} |
On the weak closedness I have some difficulties in this question.
Let $X$ be a nonreflexive Banach space and $K\subset X$ be a nonempty, convex, bounded and closed in norm. We consider $K$ as a subset of $X^{**}$. I would like to ask whether $K$ is closed w.r.t the topology $\sigma(X^{**}, X^*)$.
Thank you for all comments and kind help.
| No, this fails in every nonreflexive space for $K=\{x\in X:\|x\|\le 1\}$, the closed unit ball of $X$.
Indeed, any neighborhood of a point $p\in X^{**}$ contains the intersection of finitely many "slabs" $\{x^{**}\in X^{**}: a_j< \langle x^{**}, x^*_j\rangle < b_j \}$ for some $x^*_j\in X^*$and $a_j,b_j\in \mathbb R$. It is easy to see that $\bigcap_j \{x\in X: a_j< \langle x, x^*_j\rangle < b_j \}$ is nonempty as well; indeed, $\bigcap_j \{x \in X: \langle x, x^*_j\rangle = (a_j+b_j)/2\}$ is a subspace of finite codimension. Thus, $X$ is dense in $X^{**}$. Since the embedding of $X$ into $X^{**}$ is isometric, the unit ball of $X$ is dense in the unit ball of $X^{**}$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/359933",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Finding the limit of a weird sequence function? Do you know any convergent sequence of continuous functions, that his limit function is a discontinuous function in infinitely many points of its domain (?)
| Let $$f_n(x) = \begin{cases} n\hat{x} & \hat{x}\in[0,\frac{1}{n}) \\ 1 & \hat{x}\in[\frac{1}{n},\frac{n-1}{n}) \\ n-n\hat{x} & \hat{x}\in[\frac{n-1}{n},1)\end{cases},$$ where $\hat{x}$ is the fractional part of $x$. It is not too hard to see that $f_n$ is continuous for every $n\in\mathbb{N}$, but $\lim_{n\to\infty} f_n = f_\infty$, where $f_\infty(x) = \begin{cases} 1 & x\notin\mathbb{Z} \\ 0 & x\in\mathbb{Z}\end{cases}$, which is clearly not continuous at any integer.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/360007",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 1
} |
Measure theory problem no. 12, page 92, book by Stein and Shakarchi Show that there are $f \in L^1(\Bbb{R}^d,m)$ and a sequence $\{f_n\}$ with $f_n \in L^1(\Bbb{R}^d,m)$ such that $\|f_n - f\|_{L^1} \to 0$, but $f_n(x) \to f(x)$ for no $x$.
| Let us see the idea for $\mathbb{R}$. Let $f_1$ be the characteristic function of $[0,0.5]$, $f_2$ that of $[0.5,1]$, $f_3$ that of $[0,0.25]$, and so on. They form the desired sequence in $L^1([0,1])$. The rest is an exercise, because $\mathbb{R}$ can be covered by a countable number of intervals.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/360069",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
About injectivity of induced homomorphisms on quotient rings Let $A, B$ be commutative rings with identity, let $f: A \rightarrow B$ be a ring homomorphism (with $f(1) = 1$), let $\mathfrak{a}$ be an ideal of $A$, $\mathfrak{b}$ an ideal of $B$ such that $f(\mathfrak{a}) \subseteq \mathfrak{b}$. Then there is a well-defined homomorphism
\begin{align}
&\bar f: &&A / \mathfrak{a} &&\rightarrow &&B / \mathfrak{b} \\
& &&a + \mathfrak{a} &&\mapsto &&f(a) + \mathfrak{b}.
\end{align}
It's clear to me that if $\mathfrak{a} = \mathfrak{b}^c$, then $f$ injective $\implies \bar f$ injective.
Question: Under what conditions does $\bar f$ injective $\implies f$ injective hold?
| Let $\bar f$ be injective. Then $f(a)\in \mathfrak{b}$ implies $a\in \mathfrak{a}$. This means $\mathfrak{a} \supseteq \mathfrak{b}^c$, hence $\mathfrak{a}=\mathfrak{b}^c$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/360142",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
$\mathbb{R^+}$ is the disjoint union of two nonempty sets, each closed under addition. I saw Using Zorn's lemma show that $\mathbb R^+$ is the disjoint union of two sets closed under addition. and have a question related to the answer (I'm not sure if this is the right place to post it);
Why don't we just take
$\mathcal{A}=\{A\subseteq \mathbb{R^+}\ :\text{A is closed under addition and all elements of A are irrational}\}$,
$\mathcal{B}=\{B\subseteq \mathbb{R^+}\ :\text{B is closed under addition and all elements of B are rational}\}$
and order both of them with the partial order $\subseteq$ so that they will satisfy the chain condition? Then, they have maximal elements $\bigcup{\mathcal{A}}$ and $\bigcup{\mathcal{B}}$ respectively. So it remains to verify that $\bigcup{\mathcal{A}}$ is the set of all irrationals and $\bigcup{\mathcal{B}}$ is the set of all rationals. Is this approach correct?
| There is a reason why you shouldn't try to explicitly construct such a decomposition. If $\mathbb{R}^{+}$ is a disjoint union of $A$ and $B$, where both $A$ and $B$ are closed under addition, then neither $A$ nor $B$ is Lebesgue measurable or has the Baire property, since if $X$ is either a non-meager set of reals with the Baire property or a set of positive Lebesgue measure, then $X + X$ has non-empty interior. It is well known that such sets cannot be provably defined within set theory (ZFC).
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/360208",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 1
} |
Proving $\sqrt{2}\in\mathbb{Q_7}$? Why does Hensel's lemma imply that $\sqrt{2}\in\mathbb{Q_7}$?
I understand Hensel's lemma, namely:
Let $f(x)$ be a polynomial with integer coefficients, and let $m$, $k$ be positive integers such that $m \leq k$. If $r$ is an integer such that $f(r) \equiv 0 \pmod{p^k}$ and $f'(r) \not\equiv 0 \pmod{p}$ then there exists an integer $s$ such that $f(s) \equiv 0 \pmod{p^{k+m}}$ and $r \equiv s \pmod{p^{k}}$.
But I don't see how this has anything to do with $\sqrt{2}\in\mathbb{Q_7}$?
I know a $7$-adic number $\alpha$ is a $7$-adically Cauchy sequence $a_n$ of rational numbers. We write $\mathbb{Q}_7$ for the set of $7$-adic numbers.
A sequence $a_n$ of rational numbers is $p$-adically Cauchy if $|a_{m}-a_n|_p \to 0$ as $m,n \to \infty$.
How do we show $\sqrt{2}\in\mathbb{Q_7}$?
| This is how I see it and, perhaps, it will help you: we can easily solve the polynomial equation
$$p(x)=x^2-2=0\pmod 7\;\;(\text{ i.e., in the ring (field)} \;\;\;\Bbb F_7:=\Bbb Z/7\Bbb Z)$$
and we know that there's a solution $\,w:=\sqrt 2=3\in \Bbb F_{7}\,$
Since the root $\,w\,$ is simple (i.e., $\,p'(w)\neq 0\,$), Hensel's Lemma gives us lifts of the root, meaning: for any $\,k\ge 2\,$, there exists an integer $\,w_k\,$ s.t.
$$(i)\;\;\;\;p(w_k)=0\pmod {7^k}\;\;\wedge\;\;w_k=w_{k-1}\pmod{7^{k-1}}$$
Now, if you know the inverse limit definition of the $\,p$-adic integers then we're done as the above shows the existence of $\,\sqrt 2\in\Bbb Q_p\,$ , otherwise you can go by some other definition (say, infinite sequences and etc.) and you get the same result.
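To make the lifting concrete, here is a small computational sketch (not part of the original answer) that finds the successive approximations $w_k$ to $\sqrt 2$ in $\Bbb Z_7$ by brute force:
p, w = 7, 3                     # 3^2 = 9 = 2 (mod 7)
lifts = [w]
for k in range(1, 5):
    for t in range(p):          # exactly one lift works, since p'(w) = 2w is a unit mod 7
        cand = w + t * p**k
        if cand**2 % p**(k + 1) == 2:
            w = cand
            break
    lifts.append(w)
print(lifts)                    # [3, 10, 108, 2166, 4567]; lifts[i]**2 == 2 (mod 7**(i+1))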
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/360282",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "12",
"answer_count": 5,
"answer_id": 3
} |
Series with complex terms, convergence Could you tell me how to determine convergence of series with terms being products of real and complex numbers, like this:
$\sum_{n=1} ^{\infty}\frac{n (2+i)^n}{2^n}$ , $ \ \ \ \ \sum_{n=1} ^{\infty}\frac{1}{\sqrt{n} +i}$?
I know that $\sum (a_n +ib_n)$ is convergent iff $\sum a_n$ converges and $\sum b_n$ converges.
(How) can I use it here?
| We have
$$\left|\frac{n(2+i)^n}{2^n}\right|=n\left(\frac{\sqrt{5}}{2}\right)^n\not\to0$$
so the series $\displaystyle\sum_{n=1}^\infty \frac{n(2+i)^n}{2^n}$ is divergent.
For the second series we have
$$\frac{1}{\sqrt{n}+i}\sim_\infty\frac{1}{\sqrt{n}}$$
then the series $\displaystyle\sum_{n=1}^\infty \frac{1}{\sqrt{n}+i}$ is also divergent.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/360345",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Shooting game - a probability question
In a shooting game, the probability for Jack to hit a target is 0.6.
Suppose he makes 8 shots, find the probabilities that he can hit the
target in more than 5 shots.
I find this question in an exercise and do not know how to solve it. I have tried my best but my answer is different from the one in the answer key. Can anyone suggest a clear solution to it?
My trial: (0.6^6)(0.4^2)+(0.6^7)(0.4)+0.6^8
But it is wrong...
(The answer is 0.3154, which is correct to 4 significant figures)
| Total shots: $n = 8$.
Success probability: $p = 0.6$ (the probability that Jack hits the target).
Failure probability: $q = 1 - p = 1 - 0.6 = 0.4$ (the probability that Jack misses).
P(he hits the target in more than 5 shots, i.e. in 6, 7 or 8 of them)
$$= P(X = 6) + P(X = 7) + P(X = 8) = \binom{8}{6}(0.6)^{6}(0.4)^{2} + \binom{8}{7}(0.6)^{7}(0.4)^{1} + \binom{8}{8}(0.6)^{8}(0.4)^{0} \approx 0.3154$$
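As a quick sanity check, a few lines of Python (using the standard library's `math.comb`) reproduce the 0.3154 from the answer key:

```python
from math import comb

n, p = 8, 0.6
# Sum the binomial probabilities P(X = k) for k = 6, 7, 8.
prob = sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(6, n + 1))
print(round(prob, 4))  # 0.3154
```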
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/360402",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 3
} |
Polynomials not dense in holder spaces How to prove that the polynomials are not dense in Holder space with exponent, say, $\frac{1}{2}$?
| By exhibiting a function that cannot be approximated by polynomials in the norm of $C^{1/2}$, such as $f(x)=\sqrt{x}$ on the interval $[0,1]$. The proof is divided into steps below; you might not need to read all of them.
Let $p$ be a polynomial.
$(p(x)-p(0))/x\to p'(0)$ as $x\to 0^+$
$|p(x)-p(0)|/x^{1/2}\to 0$ as $x\to 0^+$
$|(f(x)-p(x))-(f(0)-p(0))|/x^{1/2}\to 1$ as $x\to 0^+$
$\|f-p\|_{C^{1/2}}\ge 1$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/360537",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
} |
Graph Concavity Test I'm studying for my final, and I'm having a problem with one of the questions. Everything before hand has been going fine and is correct, but I'm not understanding this part of the concavity test.
$$f(x) = \frac{2(x+1)}{3x^2}$$
$$f'(x) =-\frac{2(x+2)}{3x^3}$$
$$f''(x) = \frac{4(x+3)}{3x^4}$$
For the increasing and decreasing test I found that the critical point is -2:
$$2(x+2)=0$$
$$x = -2$$
(This part is probably done differently than most of you do this), here's the chart of the I/D
2(x+2), Before -2 you will get a negative number, and after -2 you will get a positive number.
Therefore, f'(x), before -2 will give you a negative number, and after you will get a positive number, so f(x) before -2 will be decreasing and after it will be increasing. Where -2 is your local minimum.
(By this I mean; 2(x+2), any number before -2 (ie. -10) it will give you a negative number.)
As for the concavity test, I did the same thing basically;
$$4(x + 3) = 0$$
$$x = -3$$
However, my textbook says that x = 0 is also a critical point, I don't understand where you get this from. If anyone can explain this I would appreciate it, also if there's a more simple way of doing these tests I would love to hear it, thanks.
| If I understand your question correctly, you are working with the function $f(x)$ above and want to determine its concavity. First of all, note that, as noted in the comments above, $x=0$ is also a critical point for $f$: remember the definition of a critical point of a function (a point where the derivative is zero or does not exist). Secondly, you can see that $x=0$ cannot give a local max or local min because $f$ is undefined at this point. $x=0$ is not an inflection point either, because the sign of $f''$ does not change between $x<0$ and $x>0$ in a small neighborhood: it stays positive, so $f$ is concave upward around the origin.
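If you want to double-check the derivatives and the sign of $f''$ symbolically, here is a short sketch assuming SymPy is available:

```python
import sympy as sp

x = sp.symbols('x')
f = 2 * (x + 1) / (3 * x**2)

f1 = sp.simplify(sp.diff(f, x))       # equal to -2*(x + 2)/(3*x**3)
f2 = sp.simplify(sp.diff(f, x, 2))    # equal to 4*(x + 3)/(3*x**4)
print(f1, f2)

# Sign change of f'' at x = -3 (an inflection point)...
print(f2.subs(x, -4), f2.subs(x, -2))                              # negative, positive
# ...but no sign change across x = 0, where f is simply undefined.
print(f2.subs(x, -sp.Rational(1, 2)), f2.subs(x, sp.Rational(1, 2)))  # both positive
```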
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/360604",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
} |
transformation of integral from 0 to infinity to 0 to 1 How do I transform the integral $$\int_0^\infty e^{-x^2} dx$$ from $(0, \infty)$ to $(0, 1)$? I have to devise a Monte Carlo algorithm to solve this further, so any advice would be of great help.
| Pick your favorite invertible, increasing function $f : (0,1) \to (0,+\infty)$. Make a change of variable $x = f(y)$.
Or, pick your favorite invertible, increasing function $g : (0,+\infty) \to (0,1)$. Make a change of variable $y = g(x)$.
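For example (one choice among many), with $g(x)=x/(1+x)$, i.e. the substitution $x=y/(1-y)$, $dx=dy/(1-y)^2$, the integral becomes $\int_0^1 e^{-(y/(1-y))^2}(1-y)^{-2}\,dy$, which a plain Monte Carlo average over $(0,1)$ can estimate. A rough Python sketch (the exact value is $\sqrt{\pi}/2\approx 0.8862$):

```python
import math
import random

def integrand(y):
    # After the change of variable x = y / (1 - y), dx = dy / (1 - y)^2.
    x = y / (1.0 - y)
    return math.exp(-x * x) / (1.0 - y) ** 2

random.seed(0)
n = 100_000
# Plain Monte Carlo: average the transformed integrand over uniform samples in (0, 1).
estimate = sum(integrand(random.random()) for _ in range(n)) / n
print(estimate, math.sqrt(math.pi) / 2)  # both should be close to 0.8862
```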
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/360675",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 1
} |
Prove that $\tan(75^\circ) = 2 + \sqrt{3}$ My (very simple) question to a friend was how do I prove the following using basic trig principles:
$\tan75^\circ = 2 + \sqrt{3}$
He gave this proof (via a text message!)
$1. \tan75^\circ$
$2. = \tan(60^\circ + (30/2)^\circ)$
$3. = (\tan60^\circ + \tan(30/2)^\circ) / (1 - \tan60^\circ \tan(30/2)^\circ) $
$4. \tan (30/2)^\circ = \dfrac{(1 - \cos30^\circ)}{ \sin30^\circ}$
Can this be explained more succinctly as I'm new to trigonometry and a little lost after (2.) ?
EDIT
Using the answers given I'm almost there:
1. $\tan75^\circ$
2. $\tan(45^\circ + 30^\circ)$
3. $\sin(45^\circ + 30^\circ) / \cos(45^\circ + 30^\circ)$
4. $(\sin30^\circ\cos45^\circ + \sin45^\circ\cos30^\circ) / (\cos30^\circ\cos45^\circ - \sin45^\circ\sin30^\circ)$
5. $\dfrac{\frac{1}{2\sqrt{2}} + \frac{\sqrt{3}}{2\sqrt{2}}}{\frac{\sqrt{3}}{2\sqrt{2}} - \frac{1}{2\sqrt{2}}}$
6. $\dfrac{1 + \sqrt{3}}{\sqrt{3} - 1}$
7. multiply throughout by $\sqrt{3} + 1$
Another alternative approach:
1. $\tan75^\circ$
2. $\tan(45^\circ + 30^\circ)$
3. $\dfrac{\tan45^\circ + \tan30^\circ}{1-\tan45^\circ\tan30^\circ}$
4. $\dfrac{1 + 1/\sqrt{3}}{1-1/\sqrt{3}}$
5. multiply numerator and denominator by $\sqrt{3}$ to arrive at point 6 in the list above
| You can instead use $\tan (75^\circ)=\tan(45^\circ+30^\circ)$ and plug into the formula by Metin. Reason: your $15^\circ$ value is not so easy to evaluate directly.
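A quick numerical check of the target identity (Python shown, but any language would do):

```python
import math

# Both prints should agree: tan(75°) = 2 + sqrt(3) ≈ 3.7320508...
print(math.tan(math.radians(75)))
print(2 + math.sqrt(3))
```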
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/360747",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9",
"answer_count": 4,
"answer_id": 2
} |
Suggest an Antique Math Book worth reading? I'm not a math wizard, but I recently started reading through a few math books to prepare myself for some upcoming classes and I'm starting to really get into it. Then I noticed a few antique math books at a used bookstore and bought them thinking that, if nothing else, they would look cool on my bookshelf. But as it turns out, I enjoy both reading and collecting them. I find myself constantly browsing used book stores, thrift stores, antique stores ect. looking for the next book to add to my library.
So do you know of an antique book that you found interesting, helpful, or historically relevant?
(Just some insight- some of the books I have that I like are: The Laws of Thought by George Boole; Mathematical Methods of Statistics by Harald Cramer; and Introduction to Mathematical Analysis by F.L. Griffin. I've also enjoyed reading online about probablity, logic, and math history. But any area of mathematics is fine as I'm still discovering which areas interest me.)
| For a book that is not going to teach you any new math, but will give you a window into how a mathematical personality might think or act, I would recommend I Want to be a Mathematician by Paul Halmos. Quite a fun read, full of all of the joys and nuisances of being a high class working mathematician.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/360801",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "16",
"answer_count": 11,
"answer_id": 9
} |
Brent's algorithm
Use Brent's algorithm to find all real roots of the equation
$$9-\sqrt{99+2x-x^2}=\cos(2x),\\ x\in[-8,10]$$
I am having difficulty understanding Brent's algorithm. I looked at an example in wikipedia and in my book but the examples given isn't the same as this question. Any help will be greatly appreciated.
| The wikipedia entry you cite explains Brent's algorithm as a modification of other methods. Write down all the algorithms mentioned there and see how they feed into Brent's. Perhaps try one or two iterations of each to get a feel for how they work.
Try to write Brent's algorithm down as a program in some language you are familiar with. Make sure your program follows the logic given.
Run your program on the function given (or some simpler one), and have it tell you what it is doing each step. Look over the explanation to see why that step makes sense, check what the alternative would have been.
As a result, you will have a firm grasp of the algorithm (and of some others, with their shortcomings and strong points), and thus of why it is done the way it is.
[Presumably that is what this assignment is all about...]
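If a reference implementation helps you check your own results, SciPy's `brentq` implements Brent's method; a rough sketch (assuming SciPy is installed, and not a substitute for writing your own) that brackets sign changes of $g(x)=9-\sqrt{99+2x-x^2}-\cos(2x)$ on $[-8,10]$ and refines each bracket:

```python
import math
from scipy.optimize import brentq

def g(x):
    # Root-finding form of the equation: g(x) = 0  <=>  9 - sqrt(99 + 2x - x^2) = cos(2x)
    return 9.0 - math.sqrt(99.0 + 2.0 * x - x * x) - math.cos(2.0 * x)

# Scan [-8, 10] with a fine grid; run Brent's method on every sign change.
a, b, steps = -8.0, 10.0, 2000
grid = [a + (b - a) * i / steps for i in range(steps + 1)]
roots = []
for lo, hi in zip(grid[:-1], grid[1:]):
    if g(lo) == 0.0:
        roots.append(lo)
    elif g(lo) * g(hi) < 0.0:
        roots.append(brentq(g, lo, hi))
print(roots)
```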
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/360881",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Find the coordinates of any stationary points on the curve $y = {1 \over {1 + {x^2}}}$ and state it's nature I know I could use the quotient rule and determine the second differential and check if it's a max/min point, the problem is the book hasn't covered the quotient rule yet and this section of the book concerns exercises related only to the chain rule, so I was wondering what other method is there to determine the nature of the stationary point (which happens to be (0,1)) given that this is an exercise that is meant to utilise the chain rule.
$${{dy} \over {dx}} = - {{2x} \over {{{({x^2} + 1)}^2}}}$$
So essentially my question is can I determine ${{{d^2}y} \over {d{x^2}}}$ by using the chain rule (which I dont think you can) and thus the nature of stationary point, or would I have to determine the nature of the stationary point another way?
I think I may have overlooked something, any help would be appreciated. Thank you.
| As stated in the comments below, you can check whether a "stationary point" (a point where the first derivative is zero), is a maximum or minimum by using the first derivative. Evaluate points on each side of $x = 0$ to determine on which side it is decreasing (where $f'(x)$ is negative) and which side it is increasing (where $f'(x)$ is positive).
Increasing --> stationary --> decreasing $\implies$ maximum.
Decreasing --> stationary --> increasing $\implies$ minimum.
In your case, we have $f'(x) > 0$ to the left of $x = 0$ (so $f$ is increasing there) and $f'(x) < 0$ to the right of $x = 0$ (so $f$ is decreasing there), hence the point $\;(0, 1)\,$ is a local maximum of $f(x)$.
With respect to the second derivative:
While the quotient rule can simplify the evaluation of $\dfrac{d^2y}{dx^2}$, you can evaluate the second derivative of your given function by finding the derivative of $\;\displaystyle {{dy} \over {dx}} = {{-2x} \over {{{({x^2} + 1)}^2}}}\;$ by using the chain rule and the product rule:
Given $\quad \dfrac{dy}{dx} = (-2x)(x^2 + 1)^{-2},\;$
then using the product rule we get $$\frac{d^2y}{dx^2} = -2x \cdot \underbrace{\frac{d}{dx}\left((x^2 + 1)^{-2}\right)}_{\text{use chain rule}} + (x^2 + 1)^{-2}\cdot \dfrac{d}{dx}(-2x)$$
$$\frac{d^2y}{dx^2} = \frac{6x^2 - 2}{\left(x^2 + 1\right)^3}$$
Note: The product rule, if you haven't yet learned it, is as follows:
If $\;f(x) = g(x)\cdot h(x)\;$ (i.e., if $\,f(x)\,$ is the product of two functions, which we'll call $g(x)$ and $h(x)$ respectively), then
$$f'(x) = g(x)h'(x) + g'(x)h(x)\tag{product rule}$$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/360957",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
The angle between two unit vectors is not what I expected Ok imagine a vector with only X and Z components that make a 45 degree angle with the positive X axis. It's a unit vector. Now also imagine a unit vector that has the same direction as the positive x axis. Now imagine rotating both of these around the Z axis.
I expect the angle between these vectors to still be 45 degrees. But it's not. If you don't believe me look here. Angle between two 3D vectors is not what I expected.
Another way to think about it is to draw a 45 degree angle between two lines on a piece of paper. Now stand the paper up, and rotate the paper. The angle between the lines are still 45 degrees.
Why is my way of thinking wrong?
| If you start with the vectors $(1,0,0)$ and $(1/\sqrt{2},0,1/\sqrt{2})$, and rotate both by $45^\circ$ about the $z\text{-axis}$, then you end up with $(1/\sqrt{2},1/\sqrt{2},0)$ and $(1/2,1/2,1/\sqrt{2})$. The second point is not $(1/\sqrt{3},1/\sqrt{3},1/\sqrt{3})$ as you imagined. If you think about it, the $z\text{-coordinate}$ cannot be changed by this rotation. If the $z\text{-axis}$ is vertical, and the $x\text{-}y$ plane is horizontal, then the height of the point above the plane is not changed by rotation about the $z\text{-axis}$. The height remains $1/\sqrt{2}$, and the length of the horizontal coordinate remains $1/\sqrt{2}$ as well. That would not be the case if the final vector were what you thought it was.
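A short numerical check of the rotation and of the angle between the rotated vectors (a sketch using NumPy):

```python
import numpy as np

theta = np.pi / 4  # rotate 45 degrees about the z-axis
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0,            0.0,           1.0]])

u = np.array([1.0, 0.0, 0.0])
v = np.array([1.0 / np.sqrt(2), 0.0, 1.0 / np.sqrt(2)])

u_rot, v_rot = Rz @ u, Rz @ v
# Both vectors are unit vectors, so the dot product is the cosine of the angle.
angle = np.degrees(np.arccos(np.clip(u_rot @ v_rot, -1.0, 1.0)))
print(u_rot, v_rot, angle)  # the angle stays 45 degrees
```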
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/361036",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Show that $(2+i)$ is a prime ideal
Consider the set Gaussian integer $\mathbb{Z}[i]$. Show that $(2+i)$ is a prime ideal.
I try to come out with a quotient ring such that the set Gaussian integers over the ideal $(2+i)$ is either a field or integral domain. But I failed to see what is the quotient ring.
| The quotient ring you are after must be $\Bbb Z[i]/I$ where $I=(2+i)$, otherwise it would not tell you much about the status is the ideal $I$. You must know that multiplication by a complex number is a combination of rotation and scaling, and so multiplication by $2+i$ is the unique such operation that sends $1\in\Bbb C$ to $2+i$. Therefore the image of the grid (lattice) $\Bbb Z[i]\subset\Bbb C$ is the grid $I\subset\Bbb Z[i]$ with orthogonal spanning vectors $2+i$ and $(2+i)i=-1+2i$. The square spanned by those two vectors has area $|\begin{smallmatrix}2&-1\\1&\phantom+2\end{smallmatrix}|=5$, so the density of the points of $I$ should be smaller than that of $\Bbb Z[i]$ by a factor $5$. This is a not entirely rigorous argument showing that the quotient ring $\Bbb Z[i]/I$ should have $5$ elements. There aren't very many rings with $5$ elements, so you should be able to guess which $5$ elements you can choose as representatives of the classes in $\Bbb Z[i]/I$. Now go ahead and show that all elements of $\Bbb Z[i]$ can be transformed into one and only one of those $5$ elements by adding an element of $I$.
By the way, since the quotient is a finite ring, it will be a field if and only if it is an integral domain: it is either both of them or none of them. In the case at hand it is both (every ring with a prime number of elements is an integral domain and also a field).
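To see the five residue classes concretely: since $2+i\equiv 0$ forces $i\equiv -2$, every $a+bi$ should be congruent to $a-2b\pmod 5$; a small sketch (the helper name is just illustrative) verifies that the difference is always divisible by $2+i$ in $\Bbb Z[i]$:

```python
# z is divisible by 2 + i in Z[i] exactly when z * (2 - i) / 5 has integer real
# and imaginary parts (multiplying by the conjugate and dividing by the norm 5).
def divisible_by_2_plus_i(z):
    w = z * (2 - 1j) / 5
    return abs(w.real - round(w.real)) < 1e-9 and abs(w.imag - round(w.imag)) < 1e-9

for a in range(-5, 6):
    for b in range(-5, 6):
        r = (a - 2 * b) % 5           # representative in {0, 1, 2, 3, 4}
        assert divisible_by_2_plus_i(complex(a, b) - r)
print("every a + bi is congruent to one of 0, 1, 2, 3, 4 modulo (2 + i)")
```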
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/361099",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 6,
"answer_id": 4
} |
First Order Logic: Formula for $y$ is the sum of non-negative powers of $2$ As the title states, is it possible to write down a first order formula that states that $y$ can be written as the sum of non-negative powers of $2$.
I have been trying for the past hour or two to get a formula that does so (if it is possible), but It seems to not work.
Here's my attempt:
Let $\varphi(y)$ be the formula $(\exists n < 2y)(\exists v_0 < 2)\cdots (\exists v_n < 2)(y = v_0\cdot 1 + v_1\cdot 2 + \cdots + v_n2^n)$.
In the above, the $\mathcal{L}$-language is $\{+,\cdot, 0, s\}$ where $s$ is the successor function. But the problem with the formula above is that when $n$ is quantified existentially as less than $2y$, $n$ does not appear in $2^n$ when we write it out as products of $2$ $n$ times. I think this is the problem.
My other attempts at this problem happen to be the same issue, where $n$ is quantified but does not appear in the statement, such as the example provided above.
If you can give me any feedback, that would be great. Thanks for your time.
Edit: I guess that when I write $2^n$, I mean $(s(s(0)))^n$.
| In the natural numbers, the formula $\theta(x) \equiv x = x$ works. Think about binary notation.
More seriously, once you have developed the machinery to quantify over finite sequences, it is not so hard to write down the formula. Let $\phi(x)$ define the set of powers of 2. The formula will look like this:
$$
(\exists \sigma)(\exists \tau)[\, (\forall n < |\sigma|)[\phi(\sigma(n))] \land |\tau| = |\sigma| + 1 \land \tau(0) = 0 \land (\forall n < |\sigma|) [ \tau(n+1) = \tau(n) + \sigma(n)] \land x =\tau(|\sigma|)]
$$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/361190",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Directional derivative of a scalar field in the direction of fastest increase of another such field Suppose $f,g : \mathbb{R}^n \rightarrow \mathbb{R}$ are scalar fields. What expression represents the directional derivative of $f$ in the direction in which $g$ is increasing the fastest?
| The direction of fastest increase of $g$ is that of the gradient of $g$, so (wherever $\text{grad}(g)\neq 0$) the directional derivative of $f$ in that direction is $\text{grad}(f)\bullet \text{grad}(g)/\lVert\text{grad}(g)\rVert$; if you take the non-normalized direction vector $\text{grad}(g)$ itself, it is just $\text{grad}(f)\bullet \text{grad}(g)$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/361249",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Neatest proof that set of finite subsets is countable? I am looking for a beautiful way of showing the following basic result in elementary set theory:
If $A$ is a countable set then the set of finite subsets of $A$ is
countable.
I proved it as follows but my proof is somewhat fugly so I was wondering if there is a neat way of showing it:
Let $|A| \le \aleph_0$. If $A$ is finite then $P(A)$ is finite and hence countable. If $|A| = \aleph_0$ then there is a bijection $A \to \omega$ so that we may assume that we are talking about finite subsets of $\omega$ from now on. Define a map $\varphi: [A]^{<\aleph_0} \to (0,1) \cap \mathbb Q$ as $B \mapsto \sum_{n \in \omega} \frac{\chi_B (n)}{2^n}$. Then $\varphi$ is injective hence the claim follows. (The proof of which is what is the core-fugly part of the proof and I omit it.).
| A proof for finite subsets of $\mathbb{N}$:
For every $n \in \mathbb{N}$, there are finitely many finite sets $S \subseteq \mathbb{N}$ whose sum $\sum S = n$.
Then we can enumerate every finite set by enumerating all $n \in \mathbb{N}$ and, for each $n$, listing the (finitely many) sets $S$ with $\sum S = n$.
Since every finite set has such an $n$, every finite set is enumerated. QED.
If you want the proof to hold for any countable $A$, first define any injective function $f: A \to \mathbb{N}$.
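A small generator sketch of this enumeration for finite subsets of $\mathbb{N}$ (the function name is just illustrative; every finite set $S$ satisfies $S\subseteq\{0,\dots,\sum S\}$, so only finitely many candidates need to be scanned for each $n$):

```python
from itertools import chain, combinations, count

def finite_subsets_of_N():
    """Enumerate all finite subsets of {0, 1, 2, ...}, each exactly once."""
    for n in count():
        # Every finite set with sum n is a subset of {0, ..., n},
        # so there are only finitely many candidates at this stage.
        candidates = chain.from_iterable(
            combinations(range(n + 1), k) for k in range(n + 2))
        for s in candidates:
            if sum(s) == n:
                yield set(s)

gen = finite_subsets_of_N()
print([next(gen) for _ in range(10)])
```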
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/361320",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "21",
"answer_count": 13,
"answer_id": 8
} |
G is a group, H is a subgroup of G, and N a normal subgroup of G. Prove that N is a normal subgroup of NH. So I have already proved that NH is a subgroup of G. To prove that N is a normal subgroup of NH, I said that we need to show $xNx^{-1}$ is a subgroup of NH for all $x\in NH$.
Or am I defining it wrong?
| For any subgroup $K$ of $G$ with $N \subset K \subset G$, you can show that $N$ is a normal subgroup of $K$. Also, you can easily show $N \subset NH$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/361380",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
} |
Uniform convergence for $x\arctan(nx)$ I am to check the uniform convergence of this sequence of functions : $f_{n}(x) = x\arctan(nx)$ where $x \in \mathbb{R} $. I came to a conclusion that $f_{n}(x) \rightarrow \frac{\left|x\right|\pi}{2} $. So if $x\in [a,b]$ then $\sup_{x \in [a,b]}\left|f_n(x)- \frac{\left|x\right|\pi}{2}\right|\rightarrow 0$ as $n\to\infty.$ Now, how do I check the uniform convergence?
$$\sup_{x\in\mathbb{R}}\left|f_n(x)-\frac{\left|x\right|\pi}{2}\right| = ?$$
Thanks in advance!
| Hint: Use the fact that
$$\arctan t+\arctan\left(\frac 1t\right)=\frac{\pi}2$$
for all $t>0$ and that $f_n$ are even.
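Numerically, the hint gives $\left|f_n(x)-\frac{\left|x\right|\pi}{2}\right| = |x|\arctan\!\big(\tfrac{1}{n|x|}\big)$ for $x\neq 0$, and the supremum behaves like $1/n$; a small check (a sketch using NumPy):

```python
import numpy as np

x = np.linspace(0.0, 1e4, 200_001)                 # f_n is even, so x >= 0 suffices
for n in (1, 10, 100):
    error = np.max(np.abs(x * np.arctan(n * x) - np.abs(x) * np.pi / 2))
    print(n, error, 1.0 / n)                       # the sup is essentially 1/n
```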
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/361439",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
} |
How to prove that $\det(M) = (-1)^k \det(A) \det(B)?$ Let $\mathbf{A}$ and $\mathbf{B}$ be $k \times k$ matrices and $\mathbf{M}$ is the block matrix
$$\mathbf{M} = \begin{pmatrix}0 & \mathbf{B} \\ \mathbf{A} & 0\end{pmatrix}.$$
How to prove that $\det(\mathbf{M}) = (-1)^k \det(\mathbf{A}) \det(\mathbf{B})$?
| Here is one way among others:
$$
\left( \matrix{0&B\\A&0}\right)=\left( \matrix{0&I_k\\I_k&0}\right)\left( \matrix{A&0\\0&B}\right).
$$
I assume you are allowed to use the block diagonal case, which gives $\det A\cdot\det B$ for the matrix on the far right. Just in case, this follows for instance from Leibniz formula.
Now it boils down to
$$
\det\left( \matrix{0&I_k\\I_k&0}\right)=(-1)^k.
$$
This is the matrix of a permutation made of $k$ transpositions. So the determinant is $(-1)^k$ after $k$ elementary operations (transpositions in this case) leading to the identity matrix.
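A quick randomized check of the identity (a sketch using NumPy):

```python
import numpy as np

rng = np.random.default_rng(0)
for k in (1, 2, 3, 4, 5):
    A = rng.standard_normal((k, k))
    B = rng.standard_normal((k, k))
    # Assemble the block matrix M = [[0, B], [A, 0]].
    M = np.block([[np.zeros((k, k)), B], [A, np.zeros((k, k))]])
    lhs = np.linalg.det(M)
    rhs = (-1) ** k * np.linalg.det(A) * np.linalg.det(B)
    assert np.isclose(lhs, rhs)
print("det(M) = (-1)^k det(A) det(B) verified for k = 1..5")
```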
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/361552",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 5,
"answer_id": 0
} |
Time dilation by special relativity When reading about special relativity and time dilation I encounter a problem;
Here is a link: Time dilation in GPS
On page 1, under the header "2. Time dilation by special relativity", it says: Since $(1-x)^{-1/2} \approx 1 + x/2$ for small $x$, we get...
How is $(1-x)^{-1/2} \approx 1 + x/2$ for small $x$? I really don't get it.
Thank you in advance!
| That is a mathematical approximation that is valid when $x\ll 1$ (meaning $x$ is much less than one). It can easily be demonstrated if you know calculus: for any function $f(x)$ that is defined and differentiable at $x=0$, $f(x)=f(0)+f'(0)x+\ldots$ (this is called a Taylor series expansion). The first two terms are enough if $x$ is small; otherwise the approximation requires more terms to be good.
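For example, comparing the two sides numerically for a few small $x$ shows the error shrinking like $x^2$:

```python
for x in (0.1, 0.01, 0.001):
    exact = (1 - x) ** -0.5
    approx = 1 + x / 2
    print(x, exact, approx, exact - approx)  # the error shrinks roughly like 3x^2/8
```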
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/361657",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |