Q | A | meta
---|---|---
How can one show that $\int _0 ^1 \left((h')^2-h^2\right)dx \ge0 $ for all $h\in C^1[0,1]$ with $h(0) = 0$?
| By the fundamental theorem of Calculus:
$$
\left|h(x)\right| = \left|\int_0^x h'(t) \,dt\right| \le \int_0^x \left|h'(t)\right| \,dt \le \int_0^1 \left|h'(x)\right| \,dx
$$
By Cauchy-Schwarz (or Jensen's) inequality:
$$
\left|h(x)\right|^2 \le \left(\int_0^1 \left|h'(x)\right| \,dx\right)^2 \le \int_0^1 \left|h'(x)\right|^2 \,dx
$$
Integrate both sides with respect to $x$ from $0$ to $1$ to get:
$$
\int_0^1 \left|h(x)\right|^2\,dx \le \int_0^1 \left|h'(x)\right|^2 \,dx
$$
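As a quick numerical sanity check of this final inequality (my own addition, not part of the proof; the test function $h(x)=\sin 3x$ and the uniform grid are arbitrary choices):

```python
import numpy as np

# Check that the mean of h'^2 dominates the mean of h^2 on [0,1],
# i.e. the integral inequality above, for a sample h with h(0) = 0.
x = np.linspace(0.0, 1.0, 100_001)
h = np.sin(3 * x)            # satisfies h(0) = 0
hp = 3 * np.cos(3 * x)       # its derivative

print((hp**2).mean() >= (h**2).mean())   # True (≈ 4.29 vs ≈ 0.52)
```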
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/346638",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Probability Question relating to a prison break I am stuck on a question regarding a prisoner trapped in a cell with 3 doors, where each door is chosen with an associated probability (say $.5$ for door $A$, $.3$ for door $B$ and $.2$ for door $C$). The first door leads back to his own cell after traveling $2$ days, whereas the second door leads back to his own cell after $3$ days and the third to freedom after $1$ day.
"A prisoner is trapped in a cell containig three doors. The first door leads to a tunnel that returns him to his cell after two days of travel. The second leads to a tunnel that returns him to his cell after three days of travel. The third door leads immediately to freedom.
a) Assuming that the prisoner will always select doors 1,2,and 3 with probability 0.5, 0.3, 0.2 what is teh expected number of days until he reaches freedom?
b) Assuming that the prisoner is always equally likely to choose among those doors that he not used, what is the expected number of days until he reaches freedom? (In this version, for instance, if the prisoner initially tries door1, then when he returns to the cell, he will now select only from doors 2 and 3)
c) For parts (a) and (b) find the variance of the number of days until the prisoner reaches freedom.
"
In the problem I was able to find $E[X]$ (the expected number of days until the prisoner is free, where $X$ is the number of days to be free). Where I get stuck is how to find the variances for this problem. I do not know how to find $E[X^2 \mid \text{door } 1 \text{ (or } 2 \text{ or } 3\text{) chosen}]$. My understanding is that he does not learn from choosing the wrong door. Could anyone help me out? Thank you very much.
| Let's go through it for part (a) first. Let $X$ denote the number of days until this prisoner gains freedom. I think you already have $E[X|D=1], E[X|D=2], E[X|D=3]$:
$E[X|D=1] = E[X + 2]$
$E[X|D=2] = E[X + 3]$
$E[X|D=3] = E[0]$
So we have
$E[X^2|D=1] = E[(X + 2)^2] = E[X^2] + 4E[X] + 4$
$E[X^2|D=2] = E[(X + 3)^2]= E[X^2] + 6E[X] + 9$
$E[X^2|D=3] = E[0^2] = 0 $
and now you can solve for $E[X^2]$, and hence $\operatorname{Var}(X)$, because you have $E[X]$ already.
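To make the hint concrete, here is a small sympy sketch (my addition) solving the two linear equations above for part (a), assuming door 3 contributes $0$ days as in this answer:

```python
import sympy as sp

EX, EX2 = sp.symbols('EX EX2')

# Condition on the first door: E[X] = 0.5(E[X]+2) + 0.3(E[X]+3) + 0.2*0
eq1 = sp.Eq(EX, sp.Rational(1, 2)*(EX + 2) + sp.Rational(3, 10)*(EX + 3))
# Same conditioning for the second moment, using the expansions above
eq2 = sp.Eq(EX2, sp.Rational(1, 2)*(EX2 + 4*EX + 4) + sp.Rational(3, 10)*(EX2 + 6*EX + 9))

sol = sp.solve([eq1, eq2], [EX, EX2])
print(sol[EX], sol[EX2], sol[EX2] - sol[EX]**2)   # 19/2, 204, Var = 455/4
```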
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/346717",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 0
} |
Group theory - left/right $H$-cosets and quotient sets $G/H$ and $H \setminus G$. Let $G$ be a group and $H$ be a subgroup of $G$. The left $H$-cosets are the sets $gH, g \in G$. The set of left $H$-cosets is the quotient set $G/H$. The right $H$-cosets are the sets $Hg, g\in G$. The set of right $H$-cosets is the quotient $H \setminus G.$ The set $G$ is the union of the left (respectively, right) $H$-cosets, each of which has $\left|H\right|$ elements. We deduce that the order of $H$ divides the order of $G$, and that the number of left $H$-cosets equals the number of right $H$-cosets.
What I wrote can be found in Groups and Symmetries by Yvette Kosmann-Schwarzbach.
What should I understand from that statement ?
Is it OK if I say that: $$G=gH \cup Hg?$$
$$\left|G/H\right|=\left|H \setminus G\right|?$$
What does it mean that the number of left $H$-cosets equals the number of right $H$-cosets?
And why do we deduce that the order of $H$ divides the order of $G$?
Thanks :)
| It means that $G=\cup_{g\in G} gH=\cup_{g\in G} Hg$, and that the map $x\mapsto gx$ is a bijection between $H$ and $gH$, therefore $|gH|=|H|$. It can be deduced from this that $|G|=|G/H||H|$. A similar statement holds for right cosets.
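A concrete illustration (my addition, not from the answer): enumerating the cosets of a two-element subgroup of $S_3$ in Python, with permutations stored as tuples, exhibits both counting claims directly:

```python
from itertools import permutations

def compose(p, q):                   # (p∘q)(i) = p[q[i]]
    return tuple(p[q[i]] for i in range(len(p)))

G = list(permutations(range(3)))     # S_3, order 6
H = [(0, 1, 2), (1, 0, 2)]           # the subgroup {e, (0 1)}

left  = {frozenset(compose(g, h) for h in H) for g in G}   # all cosets gH
right = {frozenset(compose(h, g) for h in H) for g in G}   # all cosets Hg

print(len(left), len(right))         # 3 3: equally many left and right cosets
print(len(G) == len(left) * len(H))  # True: |G| = |G/H| |H|
```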
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/346783",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
} |
Right translation - left coset - orbits We can remark that the left coset $gH$ of $g \in G$ relative to a subgroup $H$ of $G$ is the orbit of $g$ under the action of $H \subset G$ acting by right translation.
What is that right translation? and how can I prove that the orbit of $g$ under the action of $H \subset G$ acting by right translation is $gH$ ?
| Right translation can equally be read as "right multiplication", except that the word "multiplication" carries an (unwanted) implication of commutativity.
As to your second query, let the subgroup $H$ act on $G$ by right multiplication:
$$h \cdot g = gh \qquad \forall g \in G \quad \forall h \in H$$
For any $g \in G$, the orbit $H \cdot g$ is the set
$$\{ h \cdot g : h \in H \} = \{ gh: h \in H \} = gH.$$
Note that we didn't need the operation to be commutative here.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/346856",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Trouble with wording of this math question "Find the derivative as follows (you need not simplify)":
a) $y = 2^x f (x)$, where $f (x)$ is a differentiable function and so is $f '(x)$: find $\frac{d^2x}{dx^2}$.
That's the exact wording of the question, and no additional information is given outside of this question. Can anyone make sense out of this question? I'm not looking for a solved answer, just an idea of what I'm actually supposed to be doing. Thanks
| We are given:
$$y = 2^x f(x)$$
where $f(x)$ and $f'(x)$ are differentiable functions, and we are asked to find the second derivative.
This is just an application of the product rule, so
$\displaystyle \frac{d}{dx} \left(2^x f(x)\right) = 2^x (f(x) \log(2) + f'(x))$
$\displaystyle \frac{d^2}{dx^2} \left(2^x f(x)\right) = 2^x (f(x) \log(2)^2 + 2\log(2) f'(x) + f''(x))$
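As a hedged check of this computation (my addition), sympy reproduces the same second derivative with `f` left symbolic:

```python
import sympy as sp

x = sp.symbols('x')
f = sp.Function('f')

d2 = sp.diff(2**x * f(x), x, 2)
print(sp.simplify(d2))
# matches 2**x * (log(2)**2 * f(x) + 2*log(2)*f'(x) + f''(x)), up to ordering
```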
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/346919",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Determine and classify all singular points Determine and find residues for all singular points $z\in \mathbb{C}$ for
(i) $\frac{1}{z\sin(2z)}$
(ii) $\frac{1}{1-e^{-z}}$
Note: I have worked out (i), but (ii) still seems not easy.
| Thanks to Mhenni Benghorbal for the hint! I have worked out (i) so far:
The singular points for (i) are $\frac{k\pi}{2}, k \in \mathbb{Z}$,
The case $k=0$ was justified by Mhenni,
For $k \neq 0$ the singular points are simple poles, since $$\lim_{z \to \frac{k\pi}{2}}\frac{z-\frac{k\pi}{2}}{z\sin(2z)}=\lim_{z \to \frac{k\pi}{2}}(-1)^k\frac{z-\frac{k\pi}{2}}{z\sin(2(z-\frac{k\pi}{2}))}=(-1)^k\frac{1}{k\pi}\neq0$$ which also gives the corresponding residues.
For (ii): Clearly the singularities are $2k\pi i, k\in\mathbb{Z}$, which are isolated poles (as their reciprocals correspond to isolated zeros).
But I have a problem computing limits such as $$\lim_{z \to 0} \frac{z e^z}{e^z-1}.$$
Note that L'Hôpital's rule may not apply here, and the method of series expansion does not seem applicable here either.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/346990",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Why $\log(n!)$ isn't zero I have wondered why $\log (n!)$ isn't zero for $n \in \mathbb N$.
Because I think that $\log (1)$ is zero, so after multiplying by all the following numbers the result should stay zero.
Thanks in advance.
| Might as well make an answer of it.
$$\begin{align*}
\lg(n!)&=\lg(1\cdot2\cdot3\cdot\ldots\cdot n)\\
&=\lg 1+\lg 2+\lg 3+\ldots+\lg n\\
&=\lg 2+\lg 3+\ldots+\lg n\;,
\end{align*}$$
so it won’t be $0$ unless $n=1$ (or $n=0$): you’re adding the logs, not multiplying them.
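A quick numeric illustration (my addition):

```python
import math

n = 10
print(math.log(math.factorial(n)))              # 15.104412573075516
print(sum(math.log(k) for k in range(1, n+1)))  # same value: the logs add, they don't multiply
```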
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/347048",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
How to prove $\sum\limits_{k=0}^n{n \choose k}(k-1)^k(n-k+1)^{n-k-1}= n^n$? How do I prove the following identity directly?
$$\sum_{k=0}^n{n \choose k}(k-1)^k(n-k+1)^{n-k-1}= n^n$$
I thought about using the binomial theorem for $(x+a)^n$, but got stuck, because I realized that my $x$ and $a$ in this case are dynamic variables. Any hints? Don't give me the answer; I really want to think through this on my own, but a nudge in the correct direction would be awesome. Thanks!
| A nice proof uses the Lagrange–Bürmann inversion formula. Start by defining:
\begin{equation}
C(z) = z e^{C(z)}
\end{equation}
which gives the expansion:
\begin{equation}
e^{\alpha C(z)} = \alpha \sum_{n \ge 0} \frac{(\alpha + n)^{n - 1} z^n}{n!}
\end{equation}
Then you have:
\begin{equation}
e^{(\alpha + \beta) C(z)} = e^{\alpha C(z)} \cdot e^{\beta C(z)}
\end{equation}
Expanding and comparing both sides gives Abel's binomial theorem:
\begin{equation}
(\alpha + \beta) (\alpha + \beta + n)^{n - 1}
= \sum_{0 \le k \le n}
\binom{n}{k}
(\alpha + k)^{k - 1}
(\beta + n - k)^{n - k - 1}
\end{equation}
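For the skeptical, here is a brute-force check of the original identity for small $n$ (my addition; `Fraction` handles the exponent $-1$ that appears at $k=n$, where the base is $1$):

```python
from math import comb
from fractions import Fraction

def lhs(n):
    # sum_k C(n,k) (k-1)^k (n-k+1)^(n-k-1)
    return sum(comb(n, k) * Fraction(k - 1)**k * Fraction(n - k + 1)**(n - k - 1)
               for k in range(n + 1))

print(all(lhs(n) == n**n for n in range(1, 9)))   # True
```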
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/347124",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "21",
"answer_count": 6,
"answer_id": 2
} |
Conditional probabilities, urns I found this question interesting and apparently it has to do with conditional probabilities:
An urn contains six black balls and some white ones. Two balls are drawn simultaneously. They have the same color with probability 0.5. How many white balls are in the urn?
As far as I am concerned I would say it is two white balls...
| Long hint/walkthrough: Let the number of white balls be denoted $w$. The probability of pulling two white balls will be $\frac{w}{6+w}\cdot\frac{w-1}{6+w-1}$, since the probability of choosing a white ball first is $P(w_1)=\frac{w}{w+6}$ and, since there is then one less white ball, the probability of choosing another is $P(w_2)=\frac{w-1}{6+w-1}$. To find the probability that both these events occur we multiply their probabilities: $P(w_1\cap w_2)=P(w_1)\cdot P(w_2)$. Note that this is only the probability of finding two white balls. What will be the probability of finding two black balls? To find the probability that one or the other of two mutually exclusive events occurs, we add their probabilities ($P(x\cup y)=P(x)+P(y)$). So what is the probability of choosing two balls of the same color?
Can we find a mathematical way to express the probability of choosing one ball of each color? If so, we can set these equations equal and have $P(w_1\cap w_2)+P(b_1\cap b_2)=P(b_1\cap w_2)+P(w_1\cap b_2)$. Since our only variable should be $w$ this will allow us to solve.
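Following this walkthrough, a small sympy sketch (my addition) sets $P(\text{same color})=\tfrac12$ and solves; note it returns two admissible answers, neither of which is the guessed two:

```python
import sympy as sp

w = sp.symbols('w', positive=True)
# P(two white) + P(two black), with 6 black balls in the urn
p_same = (w*(w - 1) + 6*5) / ((w + 6)*(w + 5))
print(sp.solve(sp.Eq(p_same, sp.Rational(1, 2)), w))   # [3, 10]
```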
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/347187",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 3,
"answer_id": 2
} |
Strictly increasing function on positive integers giving value between $100$ and $200$ I'm looking for some sort of function $f$ that can take any integer $n>0$ and give a real number $100 \le m \lt 200$ such that if $a \lt b$ then $f(a) \lt f(b)$. How can I do that? I'm a programmer and I need this for an application of mine.
| $f(n)=200-2^{-n}$ satisfies your criteria.
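Since the asker is a programmer, the same function in code (my addition; note the floating-point caveat in the comment):

```python
def f(n: int) -> float:
    """Strictly increasing on positive integers, always in [100, 200)."""
    # Caveat: in float64 this saturates at 200.0 for n beyond about 1074;
    # use an exact rational/decimal type if that matters.
    return 200 - 2.0 ** (-n)

print(f(1), f(2), f(50))   # 199.5 199.75 199.99999999999997...
```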
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/347252",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 2
} |
Irreducible components of the variety $V(X^2+Y^2-1,X^2-Z^2-1)\subset \mathbb{C}^3.$ I want to find the irreducible components of the variety $V(X^2+Y^2-1, \ X^2-Z^2-1)\subset \mathbb{C}^3$ but I am completely stuck on how to do this. I have some useful results that can help me decompose $V(F)$ when $F$ is a single polynomial, but the problem seems much harder even with just two polynomials. Can someone please help me?
EDIT: In trying to answer this question, I knew it would be useful to know if the ideal $I=(X^2+Y^2-1, X^2-Z^2-1)$ was a prime ideal of $\mathbb{C}[X,Y,Z]$ but I'm finding it hard to describe the quotient ring. Is it a prime ideal?
| The ideal $(x^2 + y^2 - 1,x^2 - z^2 - 1)$ is equal to the ideal $(y^2 + z^2 ,x^2 - z^2 - 1)$. This is because
\begin{eqnarray*} (y^2 + z^2) + (x^2 - z^2 - 1) &=& y^2 + x^2 - 1\\
(x^2 + y^2 - 1) - (x^2 - z^2 - 1) &=& y^2 + z^2. \end{eqnarray*}
Thus we get
\begin{eqnarray*} V(x^2 + y^2 - 1,x^2 - z^2 - 1) &=& V( y^2 + z^2,x^2 - z^2 - 1) \\
&=& V\left( (y+zi)(y- zi),x^2 - z^2 - 1\right) \\
&=& V(y+zi,x^2 - z^2 - 1) \cup V(y-zi,x^2 - z^2 - 1).\end{eqnarray*}
Now we claim that the ideals $(y+zi,x^2 - z^2 - 1)$ and $(y-zi,x^2 - z^2 - 1)$ are prime ideals. I will only show that the former is prime because the proof for the latter is similar. By the Third Isomorphism Theorem we have
\begin{eqnarray*} \Bbb{C}[x,y,z]/(y+ zi,x^2 - z^2 - 1) &\cong& \Bbb{C}[x,y,z]/(y+zi) \bigg/ (y+ zi,x^2 - z^2 - 1)/ (y + zi) \\
&\cong& \Bbb{C}[x,z]/(x^2 - z^2 - 1)\end{eqnarray*}
because $\Bbb{C}[x,y,z]/(y + zi) \cong \Bbb{C}[x,z]$. At this point there are two ways to proceed, one of which is showing that $x^2 - z^2 - 1$ is irreducible. However there is a nicer approach which is the following. Writing
\begin{eqnarray*} x &=& \frac{x+z}{2} + \frac{x-z}{2} \\
z &=& \frac{z + x}{2} + \frac{z-x}{2}\end{eqnarray*}
this shows that $\Bbb{C}[x,z] = \Bbb{C}[x+z,x-z]$. Then upon factoring $x^2 - z^2 - 1$ as $(x+z)(x-z) - 1$ the quotient $\Bbb{C}[x,z]/(x^2 - z^2 - 1)$ is isomorphic to $\Bbb{C}[u][v]/(uv - 1)$ where $u = x+z, v = x-z$. Now recall that
$$\left(\Bbb{C}[u] \right)[v]/(uv - 1) \cong \left(\Bbb{C}[u]\right)_{u} $$
where the subscript denotes localisation at the multiplicative set $\{1,u,u^2,u^3 \ldots \}$. Since the localisation of an integral domain is an integral domain, this completes the proof that $(y+zi,x^2 - z^2 - 1)$ is prime and hence a radical ideal.
Now use Hilbert's Nullstellensatz to complete the proof that your algebraic set decomposes into irreducibles as claimed in Andrew's answer.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/347325",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "19",
"answer_count": 2,
"answer_id": 1
} |
Funny integral inequality Assume $f(x) \in C^1([0,1])$ and $\int_0^{\frac{1}{2}}f(x)\,\text{d}x=0$; show that:
$$\left(\int_0^1f(x)\text{d}x\right)^2 \leq \frac{1}{12}\int_0^1[f'(x)]^2\text{d}x$$
and how to find the smallest constant $C$ which satisfies
$$\left(\int_0^1f(x)\text{d}x\right)^2 \leq C\int_0^1[f'(x)]^2\text{d}x$$
| Solution 2:
By Schwarz, we have
$$\int_{0}^{\frac{1}{2}}[f'(x)]^2dx\int_{0}^{\frac{1}{2}}x^2dx\ge\left(\int_{0}^{\frac{1}{2}}xf'(x)dx\right)^2=\left[\dfrac{1}{2}f(\dfrac{1}{2})-\int_{0}^{\frac{1}{2}}f(x)dx\right]^2$$
so
$$\int_{0}^{\frac{1}{2}}[f'(x)]^2dx\ge 24\left[\dfrac{1}{2}f(\dfrac{1}{2})-\int_{0}^{\frac{1}{2}}f(x)dx\right]^2$$
By the same method on $[\frac12,1]$, we have
$$\int_{\frac{1}{2}}^{1}[f'(x)]^2dx\ge 24\left[\dfrac{1}{2}f(\dfrac{1}{2})-\int_{\frac{1}{2}}^{1}f(x)dx\right]^2$$
and use $2(a^2+b^2)\ge (a+b)^2$
then we have
$$\int_{0}^{1}[f'(x)]^2dx\ge 12\left(\int_{0}^{1}f(x)dx-2\int_{0}^{\frac{1}{2}}f(x)dx\right)^2$$
Since $\int_0^{\frac{1}{2}}f(x)\,dx=0$ by hypothesis, this is the desired inequality with constant $\frac{1}{12}$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/347385",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11",
"answer_count": 3,
"answer_id": 1
} |
Proving $\,f$ is constant. Let $\,f:[a,b] \rightarrow \Bbb R $ be continuous
and $\int_a^b f(x)g(x)\,dx=0$, whenever $g:[a,b] \rightarrow \Bbb R $ is continuous and $\int_a^b g(x)\,dx=0$.
Show that $f$ is a constant function.
I tried a bunch of things, including the mid-point integral theorem(?), but to no avail.
I'd appreciate an explanation of a solution because I really don't see where to go with this one.
| Suppose $f$ is nonconstant.
Define $g(x) = f(x)-\bar{f}$, where $\bar{f}:= \frac{1}{b-a} \int_a^b f(x)dx$. Then $\int_a^b g = 0$.
Then $$\int_a^b f(x)g(x)dx = \int_a^b f(x) \big(f(x)-\bar{f}\big) dx = \int_a^b \big(f(x)-\bar{f}\big)^2 dx >0$$ The reason that this last term is larger than zero is that $f$ is non-constant, so we can find at least one value $x \in [a,b]$ such that $f(x) \neq \bar{f}$. By continuity, there exists some interval $[c,d] \ni x$ such that $f(y) \neq \bar{f}$ for $y \in [c,d]$, and thus $(f(y)-\bar{f})^2>0$ for $y\in [c,d]$.
Thus if $f$ is nonconstant, then we can find at least one $g$ for which $\int_a^b g = 0$ and $\int_a^b fg >0$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/347441",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 1
} |
Differential Forms, Exterior Derivative I have a question regarding differential forms.
Let $\omega = dx_1\wedge dx_2$. What would $d\omega$ equal? Would it be 0?
| The differential form $\omega = dx_1 \wedge dx_2$ has constant coefficients, hence we have $$ d\omega = d(dx_1 \wedge dx_2) = d(1) \wedge dx_1 \wedge dx_2 \pm 1 \, ddx_1 \wedge dx_2 \pm 1 \, dx_1 \wedge ddx_2$$ and because $d(1)=0$ and $d^2 = 0$, we have $$ d \omega = 0.$$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/347494",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 3,
"answer_id": 2
} |
Prove set is dense This is a pretty basic and general question.
I have to prove (if true) that the sum of two dense sets is dense as well.
Let $A, B$ be nonempty dense sets in $\mathbb R$. Then $A+B=\{a+b\mid a\in A, b\in B\}$ is also dense.
Can anyone give me a pointer as to how one may prove this (just the method)? Is it algebraic, or does it use $\sup/\inf/\text{sequences}$, etc.?
Thanks!
| Hint:

*If $A$ is dense in $\mathbb{R}$, then $A+2013 = \{a+2013 \mid a \in A\}$ is also dense in $\mathbb{R}$.

*A union of dense sets is dense; in particular, $A+B = \bigcup_{b \in B}(A+b)$.
Good luck!
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/347554",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Vector Fields Question 4 I am struggling with the following question:
Prove that any left invariant vector field on a Lie group is complete.
Any help would be great!
| Call your Lie group $G$ and your vector field $V$.
It suffices to show that there exists $\epsilon > 0$ such that given $g \in G$ (notice the order of the quantifiers!) there exists an integral curve $\gamma_g : (-\epsilon, \epsilon) \rightarrow G$ with $\gamma_g (0) = g$, i.e. a curve $\gamma$ starting at $g$ with $\gamma_\ast (\frac{d}{dt}|_s) = V_{\gamma(s)}$. (A uniform $\epsilon$ suffices because any integral curve can then be extended past any finite time in steps of $\epsilon/2$, so the flow is defined for all time.)
Now there exists an integral curve $\gamma_e$ through the identity, $e$, defined on some open neighborhood $(-\epsilon, \epsilon)$ of $0$. This is the $\epsilon$ we choose. For any $g \in G$ we use the left invariance of $V$ to check that $L_g \circ \gamma_e$ is an integral curve: $L_{g\ast} (\gamma_{e\ast} (\frac{d}{dt}|_s)) = L_{g\ast} ( V_{\gamma_e (s)}) = V_{L_g \circ \gamma_e (s)}$. This completes the proof.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/347617",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Minimize $\|A-XB\|_F$ subject to $Xv=0$ Assume we are given two matrices $A, B \in \mathbb R^{n \times m}$ and a vector $v \in \mathbb R^n$. Let $\|\cdot\|_F$ be the Frobenius norm of a matrix. How can we solve the following optimization problem in $X \in \mathbb R^{n \times n}$?
$$\begin{array}{ll} \text{minimize} & \|A-XB\|_F\\ \text{subject to} & Xv=0\end{array}$$
Can this problem be converted to a constrained least squares problem with the optimization variable being a vector instead of a matrix? If so, does this way work?
Are there some references about solving such constrained linear least Frobenius norm problems?
Thanks!
| As the others show, the answer to your question is affirmative. However, I don't see the point of doing so, when the problem can actually be converted into an unconstrained least squares problem.
Let $Q$ be a real orthogonal matrix that has $\frac{v}{\|v\|}$ as its last column. For instance, you may take $Q$ as a Householder matrix:
$$
Q = I - 2uu^T,\ u = \frac{v-\|v\|e_n}{\|v-\|v\|e_n\|},
\ e_n=(0,0,\ldots,0,1)^T.
$$
Then $Xv=0$ means that $XQe_n=0$, which in turn implies that $XQ$ is a matrix of the form $XQ = (\widetilde{X},0)$, where $\widetilde{X}\in\mathbb{R}^{n\times(n-1)}$. So, if you partition $Q^TB$ as $Q^TB = \pmatrix{\widetilde{B}\\ \ast}$, where $\widetilde{B}$ is $(n-1)\times m$, then your minimisation problem can be rewritten as the unconstrained least-square problem
$$\min_{\widetilde{X}\in\mathbb{R}^{n\times(n-1)}} \|A-\widetilde{X}\widetilde{B}\|_F.$$
And the least-norm solution is given by $\widetilde{X} = A\widetilde{B}^+$, where $\widetilde{B}^+$ denotes the Moore-Penrose pseudoinverse of $\widetilde{B}$. Once $\widetilde{X}$ is obtained, you can recover $X = (\widetilde{X},0)Q^T$.
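Here is a minimal numpy sketch of this recipe (my addition; the helper name `min_frobenius` and the random test data are my own choices, following the Householder construction above):

```python
import numpy as np

def min_frobenius(A, B, v):
    """Minimize ||A - X B||_F subject to X v = 0, following the recipe above."""
    n = len(v)
    e = np.zeros(n); e[-1] = np.linalg.norm(v)
    u = v - e
    if np.linalg.norm(u) < 1e-12:               # v is already a multiple of e_n
        Q = np.eye(n)
    else:
        u = u / np.linalg.norm(u)
        Q = np.eye(n) - 2.0 * np.outer(u, u)    # Householder: last column is v/||v||
    Btilde = (Q.T @ B)[:-1, :]                  # drop the last row of Q^T B
    Xtilde = A @ np.linalg.pinv(Btilde)         # least-norm unconstrained solution
    return np.hstack([Xtilde, np.zeros((n, 1))]) @ Q.T

rng = np.random.default_rng(0)
A, B, v = rng.random((4, 5)), rng.random((4, 5)), rng.random(4)
X = min_frobenius(A, B, v)
print(np.allclose(X @ v, 0))                    # True: the constraint holds
```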
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/347753",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 4,
"answer_id": 0
} |
Vertical line test
A vertical line crossing the x-axis at a point $a$ will meet the set in exactly one point $(a, b)$ if $f(a)$ is defined, and $f(a) = b$.
If the vertical line meets the set of points in two points, is $f(a)$ then undefined?
| The highlighted proposition is one way of describing the vertical line test, which determines whether $f$ is a function.
If there is one and only one point of intersection between $x = a$ and $f(x)$, then $f$ is a function.
If there are two or more points of intersection between $x = a$ and $f(x)$, then $f$ maps the given $x = a$ to two (or more) distinct values, $f(a) = b, f(a) = c, \; b \neq c$, and hence fails to be a function, by definition of a function. $f$ may, however, be a relation.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/347816",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 0
} |
Adjust percentage I'm stuck on what I would think is a simple problem.
A group of $3$ people are selling a product through a store.
Under the current arrangement, the store gets $30$% of the price the product is sold for. The group of $3$ get the remaining $70$%.
The group of $3$ split up the remaining $70$% as $25$%, $25$%, and $20$%.
If we considered the remaining $70$% as $100$% which would be divided up in the same proportions, what would those percentages be?
| If you are confused by the percentages, it is always better to write down statements to make it easier.
If $70$% is equivalent to $100$%, then the first $25$% is equivalent to
$(25*100/70)\% = (2500/70)\% \approx 35.714\%$
Similarly, the second $25$% is also equivalent to
$(25*100/70)\% = (2500/70)\% \approx 35.714\%$
And the $20$% is equivalent to
$(20*100/70)\% = (2000/70)\% \approx 28.571\%$
Hope the answer is clear!
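The same rescaling in a few lines of Python (my addition; the labels are placeholders):

```python
shares = {'first': 25, 'second': 25, 'third': 20}
total = sum(shares.values())                               # 70
print({k: round(100 * s / total, 3) for k, s in shares.items()})
# {'first': 35.714, 'second': 35.714, 'third': 28.571}
```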
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/347922",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
In Search of a More Elegant Solution I was asked to determine the maximum and minimum value of $$f(x,y,z)=(3x+4y+5z^{2})e^{-x^{2}-y^{2}-z^{2}}$$ on $\mathbb{R}^{3}$.
Now, I employed the usual strategy; in other words, calculating the partial derivatives, setting each to zero, and then solving for $x,y,z$ before comparing the values at the stationary points. I obtained $$M=5e^{-3/4}$$ as the maximum value and $$m=(-5e^{-1/2})/{\sqrt{2}}$$ as the minimum value, both of which turned out to be correct. However, as I decided to solve for $x,y,z$ by the method of substitution, the calculations became somewhat hostile.
I'm sure there must be a simpler way to arrive at the solutions, and I would be thrilled if someone here would be so generous as to share such a solution.
| We have $$\frac\partial{\partial x}f(x,y,z)=(3-2x(3x+4y+5z^2))e^{-x^2-y^2-z^2}$$
$$\frac\partial{\partial y}f(x,y,z)=(4-2y(3x+4y+5z^2))e^{-x^2-y^2-z^2}$$
$$\frac\partial{\partial z}f(x,y,z)=(10z-2z(3x+4y+5z^2))e^{-x^2-y^2-z^2}$$
At a stationary point, either $z=0$, in which case $3y=4x$ and $x=\pm\frac3{10}\sqrt 2 $;
or $3x+4y+5z^2=5$, in which case $x=\frac3{10}$ and $y=\frac25$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/348013",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
} |
Find the value of: $\lim_{x\to\infty}x\left(\sqrt{x^2-1}-\sqrt{x^2+1}\right)=-1$ How can I show/explain the following limit?
$$\lim_{x\to\infty} \;x\left(\sqrt{x^2-1}-\sqrt{x^2+1}\right)=-1$$
Some trivial transformation seems to be eluding me.
| The expression can be multiplied by its conjugate, and then:
$$\begin{align}
\lim_{x\to\infty} x\left(\sqrt{x^2-1}-\sqrt{x^2+1}\right)
&= \lim_{x\to\infty} x\left(\sqrt{x^2-1}-\sqrt{x^2+1}\right)\left(\frac{\sqrt{x^2-1}+\sqrt{x^2+1}}{\sqrt{x^2-1}+\sqrt{x^2+1}}\right) \cr
&=\lim_{x\to\infty} x\left(\frac{x^2-1-x^2-1}{\sqrt{x^2-1}+\sqrt{x^2+1}}\right) \cr
&=\lim_{x\to\infty} x\left(\frac{-2}{\sqrt{x^2-1}+\sqrt{x^2+1}}\right) \cr
&=\lim_{x\to\infty} \frac{-2}{\frac{\sqrt{x^2-1}}{x} + \frac{\sqrt{x^2+1}}{x}} \cr
&=\lim_{x\to\infty} \frac{-2}{\sqrt{\frac{x^2}{x^2}-\frac{1}{x^2}} + \sqrt{\frac{x^2}{x^2}+\frac{1}{x^2}}} \cr
&=\frac{-2}{\sqrt{1-0} + \sqrt{1+0}} \cr
&= \frac{-2}{1+1} \cr
&= -1\end{align}$$
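Incidentally, this algebra matters numerically too: a small float experiment (my addition) shows the original form suffers catastrophic cancellation for large $x$, while the rationalized form is stable:

```python
from math import sqrt

x = 1e8
naive  = x * (sqrt(x*x - 1) - sqrt(x*x + 1))         # 0.0: x*x ± 1 round to the same float
stable = x * (-2 / (sqrt(x*x - 1) + sqrt(x*x + 1)))  # -1.0: the rationalized form
print(naive, stable)
```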
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/348071",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 0
} |
Does a such condition imply differentiability? Let function $f:\mathbb{R}\to \mathbb{R}$ be such that
$$
\lim_{\Large{(y,z)\rightarrow (x,x) \atop y\neq z}} \frac{f(y)-f(z)}{y-z}=0.
$$
Does it then follow that $f'(x)=0$?
| What is $f'(x)=\displaystyle\lim_{y\to x}\dfrac{f(y)-f(x)}{y-x}$?
Let $\epsilon>0$. From the hypothesis it follows that there exists a $\delta>0$ such that if $y\neq z$ and $0<\|(y,z)-(x,x)\|<\delta$ then $\left|\dfrac{f(y)-f(z)}{y-z}\right|<\epsilon$.
Now if $0<|y-x|<\delta$ then $0<\|(y,x)-(x,x)\|=|y-x|<\delta$ and $y\neq x$.
Therefore $\left|\dfrac{f(y)-f(x)}{y-x}\right|<\epsilon$.
It follows that $f'(x)=\displaystyle\lim_{y\to x}\dfrac{f(y)-f(x)}{y-x}=0.$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/348139",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Best Fake Proofs? (A M.SE April Fools Day collection) In honor of April Fools Day $2013$, I'd like this question to collect the best, most convincing fake proofs of impossibilities you have seen.
I've posted one as an answer below. I'm also thinking of a geometric one where the "trick" is that it's very easy to draw the diagram wrong and have two lines intersect in the wrong place (or intersect when they shouldn't). If someone could find and link this, I would appreciate it very much.
| Let me prove that the number $1$ is a multiple of $3$.
To accomplish such a wonderful result we are going to use the symbol $\equiv$ to denote "congruent modulo $3$". Thus, what we need to prove is that $1 \equiv 0$. Next I give you the proof:
$$1\equiv 4 \quad\Rightarrow\quad 2^1\equiv 2^4 \quad\Rightarrow\quad 2\equiv 16 \quad\Rightarrow\quad 2\equiv 1 \quad\Rightarrow\quad 2-1\equiv 1-1 \quad\Rightarrow\quad 1\equiv 0.$$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/348198",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "242",
"answer_count": 27,
"answer_id": 21
} |
A question on estimates of surface measures If $\mathcal{H}^s $ is $s$ dimensional Hausdorff measure on $ \mathbb{R}^n$, is the following inequality true for all $ x \in \mathbb{R}^n,\ R,t > 0 $ ?
$$ \mathcal{H}^{n-1}(\partial B(x,t)\cap B(0,R)) \leq \mathcal{H}^{n-1}(\partial B(0,R)) $$ If the answer is not affirmative then a concrete counter example of two such open balls would be very helpful.
| The definition of $\mathcal H^{s}$ implies that it does not increase under $1$-Lipschitz maps: $\mathcal H^s(f(E))\le \mathcal H^s(E)$ if $f$ satisfies $|f(a)-f(b)|\le |a-b|$ for all $a,b\in E$.
The nearest point projection $\pi:\mathbb R^n \to \overline B(x,t)$ onto the closed ball is a $1$-Lipschitz map. (Note that when $y\in B(x,t)$, the nearest point to $y$ is $y$ itself; that is, the projection is the identity map on the target set.) It remains to show that $$\partial B(x,t)\cap B(0,R) \subset \pi(\partial B(0,R))\tag1$$
Given a point $y\in \partial B(x,t)\cap B(0,R)$, consider the half-line $\{y+sn: s\ge 0\}$, where $n$ is an outward normal vector to $\partial B(x,t)$ at $y$. This half-line intersects $\partial B(0,R)$ at some point $z$. An easy geometric argument shows that $\pi(z)=y$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/348290",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Le Cam's theorem and total variation distance Le Cam's theorem bounds the total variation distance between a sum of independent Bernoulli variables and a Poisson random variable with the same mean. In particular it tells you that the sum is approximately Poisson in a specific sense.
Define
$$S_n = X_1+\dots+X_n \text{ and } \lambda_n = p_1+\dots+p_n$$
where $P(X_i = 1) = p_i$.
The theorem states that
$$\sum_{k=0}^{\infty}\left| P(S_n=k)-\frac{\lambda_n^k e^{-\lambda_n}}{k!}\right| < 2\sum_{i=1}^n p_i^2.$$
I am having problems understanding what this tells you about the relationship between their cdfs, that is, between $P(S_n < x)$ and $P(\operatorname{Poiss}(\lambda_n) < x)$. In particular, given $n$ and $x$, can you give a bound on the difference, or can you say that as $n$ grows the difference tends to zero?
| Let $Y_n$ be any Poisson random variable with parameter $\lambda_n$. Then, for every $x$,
$$
\left|P(S_n < x)-P(Y_n < x)\right|=\left|\sum_{k\lt x} P(S_n=k)-P(Y_n=k)\right|\leqslant\sum_{k=0}^{\infty}\left| P(S_n=k)-P(Y_n=k)\right|.
$$
Hence,
$$
\left|P(S_n < x)-P(Y_n < x)\right| < 2\sum_{i=1}^n p_i^2.$$
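A numerical illustration of the bound (my addition; the Bernoulli probabilities are arbitrary, the exact pmf of $S_n$ is built by convolution, and the sum is truncated at $K=60$, which captures essentially all the Poisson mass here):

```python
import numpy as np
from scipy.stats import poisson

p = np.array([0.10, 0.20, 0.05, 0.30])
pmf = np.array([1.0])
for pi in p:                              # exact pmf of S_n by convolution
    pmf = np.convolve(pmf, [1.0 - pi, pi])

K = 60
full = np.zeros(K); full[:pmf.size] = pmf
lhs = np.abs(full - poisson.pmf(np.arange(K), p.sum())).sum()
print(lhs, "<", 2 * (p**2).sum())         # the left side comes out well below 0.285
```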
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/348364",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Struggling with an integral with trig substitution I've got another problem with my CalcII homework. The problem deals with trig substitution for integrals following this pattern: $\sqrt{a^2 + x^2}$. So, here's the problem:
$$\int_{-2}^2 \frac{\mathrm{d}x}{4 + x^2}$$
I graphed the function and because of symmetry, I'm using the integral: $2\int_0^2 \frac{\mathrm{d}x}{4 + x^2}$
Since the denominator is not of the form: $\sqrt{a^2 + x^2}$ but is basically what I want, I ultimately decided to take the square root of the numerator and denominator:
$$2 \int_0^2 \frac{\sqrt{1}}{\sqrt{4+x^2}}\mathrm{d}x = 2 \int_0^2 \frac{\mathrm{d}x}{\sqrt{4+x^2}}$$
From there, I now have, using the following: $\tan\theta = \frac{x}{2} => x = 2\tan\theta => dx = 2\sec^2\theta d\theta$
$$
\begin{array}{rcl}
2\int_{0}^{2}\frac{\mathrm{d}x}{4+x^2} & = & \sqrt{2}\int_{0}^{2}\frac{\mathrm{d}x}{\sqrt{4+x^2}} \\
& = & \sqrt{2}\int_{0}^{2}\frac{2\sec^2(\theta)}{\sqrt{4+4\tan^2(\theta)}}\mathrm{d}\theta \\
& = & \sqrt{2}\int_{0}^{2}\frac{2\sec^2(\theta)}{2\sqrt{1+\tan^2(\theta)}}\mathrm{d}\theta \\
& = & \sqrt{2}\int_{0}^{2}\frac{\sec^2(\theta)}{\sqrt{\sec^2(\theta)}}\mathrm{d}\theta \\
& = & \sqrt{2}\int_{0}^{2}\frac{\sec^2(\theta)}{\sec(\theta)}\mathrm{d}\theta \\
& = & \sqrt{2}\int_{0}^{2}\sec(\theta)\mathrm{d}\theta \\
& = & \sqrt{2}\left[\ln\left|\sec(\theta)+\tan(\theta)\right|\right]_{0}^{2} \\
& = & \sqrt{2}\left[ \ln\left|\frac{\sqrt{4+x^2}}{2}+\frac{x}{2}\right| \right]_{0}^{2}
\end{array}
$$
I'm not sure if I've correctly made the integral look like the pattern it's supposed to have. That is, trig substitutions are supposed to be for $\sqrt{a^2 + x^2}$ (in this case that is, there are others). This particular problem is an odd numbered problem and the answer is supposed to be $\frac{\pi}{4}$. I'm not getting that. So, the obvious question is, what am I doing wrong? Also note, I had trouble getting the absolute value bars to produce for the ln: don't know what I did wrong there either.
Thanks for any help,
Andy
| Hint: you can cut your work considerably by using the trig substitution directly into the proper integral, and proceeding (no place for taking the square root of the denominator):
You have $$2\int_0^2 \frac{dx}{4+x^2}\quad\text{and NOT} \quad 2\int_0^2 \frac{dx}{\sqrt{4+x^2}}$$
But that's good, because this integral (on the left) is what you have, and it is already in the form where it is appropriate to use the following substitution:
Let $x = 2 \tan \theta$, which you'll see is standard for integrals of this form.
As suggested by Andrew in the comments, and as shown in Wikipedia, we can arrive at the general result:
Given any integral in the form
$$\int\frac{dx}{{a^2+x^2}}$$
we can substitute
$$x=a\tan(\theta),\quad dx=a\sec^2(\theta)\,d\theta, \quad \theta=\arctan\left(\tfrac{x}{a}\right)$$
Substituting gives us:
$$
\begin{align} \int\frac{dx}{{a^2+x^2}}
& = \int\frac{a\sec^2(\theta)\,d\theta}{{a^2+a^2\tan^2(\theta)}} \\ \\
& = \int\frac{a\sec^2(\theta)\,d\theta}{{a^2(1+\tan^2(\theta))}} \\ \\
& = \int \frac{a\sec^2(\theta)\,d\theta}{{a^2\sec^2(\theta)}} \\ \\
& = \int \frac{d\theta}{a} \\ &= \tfrac{\theta}{a}+C \\ \\
& = \tfrac{1}{a} \arctan \left(\tfrac{x}{a}\right)+C \\ \\
\end{align}
$$
Note, you would have gotten precisely the correct result had you not taken the square root of $\sec^2\theta$ in the denominator, i.e., if you had not evaluated the integral of the square root of your function.
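A quick numeric confirmation that the original definite integral is indeed $\pi/4$ (my addition):

```python
from math import pi
from scipy.integrate import quad

val, err = quad(lambda x: 1.0 / (4.0 + x*x), -2.0, 2.0)
print(val, pi / 4)   # 0.7853981633974483 0.7853981633974483
```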
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/348431",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 4,
"answer_id": 1
} |
Cayley's Theorem question: examples of groups which aren't symmetric groups. Basically, Cayley's Theorem says that every finite group, say $G$, is isomorphic to a subgroup of the group $S_G$ of all permutations of $G$.
My question: why is there the word "subgroup of"? If we omit this word, is the statement wrong? Brief examples would be nice.
Thank you guys so much!
| The symmetric group $S_n$ has order $n!$, whereas there exists a group of every order (e.g. $\mathbb{Z}_n$ has order $n$). So, for instance, a group of order $3$ cannot be isomorphic to any full symmetric group, whose orders are $1, 2, 6, 24, \ldots$; hence "subgroup of" cannot be omitted.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/348510",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 5,
"answer_id": 2
} |
Interior of closure of an open set The question is: is the interior of the closure of an open set equal to the interior of the set?
That is, is this true:
$(\overline{E})^\circ=E^\circ$
($E$ open)
Thanks.
| Let $\varepsilon>0$; I claim there is an open set of measure (or total length, if you like) less than $\varepsilon$ whose closure is all of $\mathbb R$.
To see this, simply enumerate the rationals $\{r_n\}$ and then for each $n\in\mathbb N$ choose an open interval about $r_n$ of length $\varepsilon/2^n$. The union $E$ of those intervals has the desired property: $(\overline{E})^\circ=\mathbb R$, while $E^\circ=E$ has measure less than $\varepsilon$, so the two interiors differ and the answer to the question is no.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/348569",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 2,
"answer_id": 1
} |
Learning Mathematics using only audio. Are there any mathematics audio books or other audio sources for learning mathematics, for example math podcasts which really go into detail? I ask this because the trip from my house to the school takes about an hour, and staring at a screen in the car makes me dizzy. I know about podcasts and the like, but those don't really teach you math; they talk about math news and mathematicians.
I ask this because many topics can be understood just by giving it a lot of thought and don't necessarily require pen and paper.
Regards.
| I can't really point to a source, but I find the question quite relevant, as audiobooks on mathematical subjects can be important for blind people as well.
LearnOutLoud has a repository of audiobooks and podcasts about math and statistics, and related novels as well. Nevertheless, it seems to offer no advanced math material.
PapersOutLoud may be a good project.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/348697",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "19",
"answer_count": 3,
"answer_id": 0
} |
Double dot product vs double inner product Anything involving tensors has 47 different names and notations, and I am having trouble getting any consistency out of it.
This document (http://www.polymerprocessing.com/notes/root92a.pdf) clearly ascribes to the colon symbol (as "double dot product"):
$\mathbf{T}:\mathbf{U}=T_{ij} U_{ji}$
while this document (http://www.foamcfd.org/Nabla/guides/ProgrammersGuidese3.html) clearly ascribes to the colon symbol (as "double inner product"):
$\mathbf{T}:\mathbf{U}=T_{ij} U_{ij}$
Same symbol, two different definitions. To make matters worse, my textbook has:
$\mathbf{\epsilon}:\mathbf{T}$
where $\epsilon$ is the Levi-Civita symbol $\epsilon_{ijk}$ so who knows what that expression is supposed to represent.
Sorry for the rant/crankiness, but it's late, and I'm trying to study for a test which is apparently full of contradictions. Any help is greatly appreciated.
| I know this might not serve your question as it comes very late, but I myself am struggling with this as part of a continuum mechanics graduate course. The way I want to think about this is to compare it to a 'single dot product.' For example:
\begin{align}
\textbf{A} \cdot \textbf{B} &= A_{ij}B_{kl} (e_i \otimes e_j) \cdot (e_k \otimes e_l)\\
&= A_{ij} B_{kl} (e_j \cdot e_k) (e_i \otimes e_l) \\
&= A_{ij} B_{kl} \delta_{jk} (e_i \otimes e_l) \\
&= A_{ij} B_{jl} (e_i \otimes e_l)
\end{align}
Where the dot product occurs between the basis vectors closest to the dot product operator, i.e. $e_j \cdot e_k$. So now $\mathbf{A} : \mathbf{B}$ would be as following:
\begin{align}
\textbf{A} : \textbf{B} &= A_{ij}B_{kl} (e_i \otimes e_j):(e_k \otimes e_l)\\
&= A_{ij} B_{kl} (e_j \cdot e_k) (e_i \cdot e_l) \\
&= A_{ij} B_{kl} \delta_{jk} \delta_{il} \\
&= A_{ij} B_{jl} \delta_{il}\\
&= A_{ij} B_{ji}
\end{align}
But I found that a few textbooks give the following result:
$$ \textbf{A}:\textbf{B} = A_{ij}B_{ij}$$
But based on the operation carried out before, this is actually the result of $$\textbf{A}:\textbf{B}^t$$ because
\begin{align}
\textbf{A} : \textbf{B}^t &= A_{ij}B_{kl} (e_i \otimes e_j):(e_l \otimes e_k)\\
&= A_{ij} B_{kl} (e_j \cdot e_l) (e_i \cdot e_k) \\
&= A_{ij} B_{kl} \delta_{jl} \delta_{ik} \\
&= A_{ij} B_{il} \delta_{jl}\\
&= A_{ij} B_{ij}
\end{align}
But I finally found why this is not the case!
The definition of the double contraction is not the way the operation above was carried out; rather, it is as follows:
\begin{align}
\textbf{A} : \textbf{B}^t &= \textbf{tr}(\textbf{AB}^t)\\
&= \textbf{tr}(\textbf{BA}^t)\\
&= \textbf{tr}(\textbf{A}^t\textbf{B})\\
&= \textbf{tr}(\textbf{B}^t\textbf{A}) = \textbf{A} : \textbf{B}^t\\
\end{align}
and if you do the exercise, you'll find that:
$$\textbf{A}:\textbf{B} = A_{ij} B_{ij} $$
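Both index conventions and their trace forms are easy to check numerically with einsum (my addition):

```python
import numpy as np

rng = np.random.default_rng(0)
A, B = rng.random((3, 3)), rng.random((3, 3))

print(np.isclose(np.einsum('ij,ji->', A, B), np.trace(A @ B)))    # A_ij B_ji = tr(AB)
print(np.isclose(np.einsum('ij,ij->', A, B), np.trace(A @ B.T)))  # A_ij B_ij = tr(AB^T)
```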
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/348739",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "21",
"answer_count": 3,
"answer_id": 0
} |
Can we extend any metric space to any larger set? Let $(X,d)$ be a metric space and $X\subset Y$. Can $d$ be extended to $Y^2$ so that $(Y,d)$ is a metric space?
Edit:
How about extending any $(\Bbb Z,d)$ to $(\Bbb R,d)$?
| Let $Z=Y\setminus X$. Let $\kappa=|X|$. If $|Z|\ge\kappa$, we can index $Y=\{z(\xi,x):\langle\xi,x\rangle\in\kappa\times X\}$ in such a way that $z(0,x)=x$ for each $x\in X$. Now define
$$\overline d:Y\times Y\to\Bbb R:\langle z(\xi,x),z(\eta,y)\rangle\mapsto\begin{cases}
d(x,y),&\text{if }\xi=\eta\\
d(x,y)+1&\text{if }\xi\ne\eta\;.
\end{cases}$$
Then $\overline d$ is a metric on $Y$ extending $d$.
If $|Z|<\kappa$, you can still use the same basic idea. Let $\varphi:Z\to X$ be any injection, and define
$$\overline d:Y\times Y\to\Bbb R:\langle x,y\rangle\mapsto\begin{cases}
d(x,y),&\text{if }x,y\in X\\
d(\varphi(x),\varphi(y)),&\text{if }x,y\in Z\\
d(x,\varphi(y))+1,&\text{if }x\in X\text{ and }y\in Z\\
d(\varphi(x),y)+1,&\text{if }x\in Z\text{ and }y\in X\;.
\end{cases}$$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/348776",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 1
} |
How do I show that these sums are the same? My textbook says that I should check that
$$ \sum_{i=0}^\infty \frac{\left( \lambda\mathtt{I} + \mathtt{J}_k \right)^i}{i!} $$
is in fact the same as the product of sums
$$ \left( \sum_{i=0}^\infty \frac{\left( \lambda\mathtt{I}\right)^i}{i!} \right) \cdot
\left( \sum_{j=0}^k\frac{\left( \mathtt{J}_k \right)^j}{j!} \right)$$
where $ \mathtt{J}_k $ is all zeros except the first superdiagonal, which is all ones.
But I can't figure out how to do it.
[edit]
To clarify: I'm working towards a definition of $f(\mathtt{A})$ where $f$ is a "nice" function, and $\mathtt{A}$ is an arbitrary square matrix.
The text basically goes like this.
$\mathtt{B} = f(\mathtt{A})$ defined as $b_{ij} = f(a_{ij})$ is a bad idea because
$f(\mathtt{A})$ with $f(x) = x^2$ is then generally not the same as $\mathtt{A}^2$, and so on.
BUT, we know that for numbers $e^x = \sum_{k=0}^{\infty} \frac{x^k}{k!}$, so let's try this for matrices.
Then it goes on to show that for diagonal matrices the power series gives the same result as applying the function to the diagonal elements; this is then extended to diagonalizable matrices, and then to Jordan blocks, and that's where these sums come in.
| Hints:
$$(1)\;\;\;\;\;\;\;\;\;\;\;\;\;\;\sum_{k=0}^\infty\frac{X^k}{k!}=e^X$$
$$(2)\;\;\;\;\;\;\;\;J_k^{n}=0\;,\;\;\;\text{where$\,n\,$ is the number of rows of the matrix}\;J_k$$
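Combining the two hints, a short scipy check (my addition; the size $k=4$ and $\lambda=0.7$ are arbitrary) confirms that $e^{\lambda I + J_k}$ equals $e^\lambda$ times the finite sum:

```python
import numpy as np
from scipy.linalg import expm
from math import exp, factorial

k, lam = 4, 0.7
J = np.diag(np.ones(k - 1), 1)         # ones on the first superdiagonal, so J^k = 0
lhs = expm(lam * np.eye(k) + J)
rhs = exp(lam) * sum(np.linalg.matrix_power(J, j) / factorial(j) for j in range(k))
print(np.allclose(lhs, rhs))           # True
```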
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/348869",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Does every section of $J^r L$ come from some section $s\in H^0(C,L)$, with $L$ line bundle on a compact Riemann surface? I am working with jet bundles on compact Riemann surfaces. So if we have a line bundle $L$ on a compact Riemann surface $C$ we can associate to it the $r$-th jet bundle $J^rL$ on $C$, which is a bundle of rank $r+1$. If we have a section $s\in H^0(C,L)$ then there is an induced section $D^rs\in H^0(C,J^rL)$ which is defined, locally on an open subset $U\subset C$ trivializing both $L$ and $\omega_C$, as the $(r+1)$-tuple $(f,f',\dots,f^{(r)})$, where $f\in O_C(U)$ represents $s$ on $U$.
Question 1. Does every section of $J^rL$ come from some $s\in
H^0(C,L)$ this way?
Question 2. Do you know of any reference for a general description of
the transition matrices attached to $J^rL$? I only know them for $r=1$
up to now and I am working on $r=2$.
Thank you in advance.
| This is rather old so maybe you figured out the answers already.
Answer to Q1 is No. Not every global section of $J^r L$ comes from the "prolongation" of a section of $L$, not even locally. Consider for example the section in $J^1(\mathcal{O}_\mathbb{C})$ given in coordinates by $(0,1)$ (constant sections $0$ and $1$). This is obviously not of the form $(f,f')$.
The second question: maybe you can find the explicit formulas for the transitions of charts in Saunders, "The Geometry of Jet Bundles".
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/348938",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 1,
"answer_id": 0
} |
Hyperbolic cosine I have an A level exam question I'm not too sure how to approach:
a) Show $1+\frac{1}{2}x^2>x, \forall x \in \mathbb{R}$
b) Deduce $ \cosh x > x$
c) Find the point P such that it lies on $y=\cosh x$ and its perpendicular distance from the line $y=x$ is a minimum.
I understand how to show the first statement, by finding the discriminant of $1+\frac{1}{2}x^2-x>0$, but trying to apply this to part b doesn't seem to work:
$$\cosh x > x \Rightarrow \frac{1}{2}(e^x+e^{-x}) > x \Rightarrow e^x+e^{-x}>2x \Rightarrow e^{2x}-2xe^x+1>0$$
Applying $\Delta>0$ to this because I know the two functions do not intersect:
$$\Delta>0 \Rightarrow 4x^2-4>0 \Rightarrow x^2>1 \Rightarrow |x|>1$$
This tells me that, in fact, the two do meet, but only when $|x|>1$. What have I done wrong here?
This last bit I don't know how to approach, I was thinking maybe vectors in $\mathbb{R}^2$ were involved to find perpendicular distances?
Thanks in advance.
| a) You're right, you can do this with the discriminant and it is very natural. But you can also use the well-known inequality $2ab\leq a^2+b^2$, which follows from the expansion of $(a-b)^2\geq 0$. So you get
$$
2x=2\cdot x\cdot 1\leq x^2+1^2=x^2+1<x^2+2\qquad \forall x\in\mathbb{R}.
$$
Then divide by $2$.
b) By definition, $\cosh x$ is the even part of $e^x$. Since $e^x=\sum_{n\geq 0}\frac{x^n}{n!}$ for every $x$, you are left with
$$
\cosh x=\sum_{n\geq 0}\frac{x^{2n}}{(2n)!}=1+\frac{x^2}{2}+\frac{x^4}{24}+\ldots\qquad\forall x\in\mathbb{R}.
$$
Now, using inequality a) and the fact that every term of the series is nonnegative:
$$
\cosh x=1+\frac{x^2}{2}+\frac{x^4}{24}+\ldots\geq 1+\frac{x^2}{2}> x\qquad\forall x\in\mathbb{R}.
$$
c) The distance from the point $(x_0,y_0)$ to the line $x-y=0$ is
$$
d=\frac{|x_0-y_0|}{\sqrt{1^2+(-1)^2}}=\frac{1}{\sqrt{2}}|x_0-y_0|.
$$
So for a point $(x,\cosh x)$ on the graph of the hyperbolic cosine, we get
$$
d(x)=\frac{1}{\sqrt{2}}|x-\cosh x|=\frac{1}{\sqrt{2}}(\cosh x-x)
$$
as $\cosh x > x$ for every $x$. Now study the variations of $d(x)$ to find its minimum. It will occur at a critical point, $d'(x_0)=0$. And actually, there is only one such critical point, the solution of $\sinh x=1$. So your minimum occurs at
$$
x_0=\mbox{arsinh} 1=\log (1+\sqrt{2})\qquad y_0=\cosh x_0=\sqrt{2}.
$$
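A numeric cross-check of this minimum (my addition, using scipy's scalar minimizer):

```python
from math import cosh, log, sqrt
from scipy.optimize import minimize_scalar

res = minimize_scalar(lambda x: (cosh(x) - x) / sqrt(2))   # the distance d(x)
print(res.x, log(1 + sqrt(2)))   # both ≈ 0.881373587
print(cosh(res.x), sqrt(2))      # both ≈ 1.414213562
```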
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/349032",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 4,
"answer_id": 0
} |
$\mbox{Ker} \;S$ is $T$-invariant when $TS=ST$ Let $T,S:V\to V$ be linear transformations such that $TS=ST$; then $\ker(S)$ is $T$-invariant.
My solution:
$$\{T(v)\in V:TS(v)=0 \}=\{T(v)\in V:ST(v)=0 \}\subseteq\ker(S)$$
If it's right, then why is $$\{T(v)\in V:ST(v)=0 \}=\ker(S)?$$
Thank you.
| What you wrote is not correct. You simply have to check that if $v$ belongs to $\mbox{Ker} S$, then $Tv$ also lies in $\mbox{Ker} S$. So assume $Sv=0$. Then
$$STv=TSv=T0=0$$
where the last equality holds because a linear transformation always sends $0$ to $0$.
Therefore $v \in \mbox{Ker} S$ implies $Tv\in \mbox{Ker} S$. In other words $$T(\mbox{Ker} S)\subseteq\mbox{Ker} S$$ that is $\mbox{Ker} S$ is invariant under $T$.
Note: if $S$ is injective, this is of course trivial. But if not, this says that the eigenspace of $S$ with respect to the eigenvalue $0$ is invariant under $T$. More generally, every eigenspace of $S$ is invariant under $T$. When $S$ and $T$ are two commuting diagonalizable matrices, this is the key remark when showing that they are simultaneously diagonalizable.
Note: as pointed out by Marc van Leeuwen, it is not more difficult to show that $\mbox{Im}\, S$ is invariant under $T$, as $TSv=STv$. And finally, every polynomial in $S$ still commutes with $T$, so you can replace $S$ by $p(S)$ everywhere if you want.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/349127",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
If $e^A$ and $e^B$ commute, do $A$ and $B$ commute? It is known that if two matrices $A,B \in M_n(\mathbb{C})$ commute, then $e^A$ and $e^B$ commute. Is the converse true?
If $e^A$ and $e^B$ commute, do $A$ and $B$ commute?
Edit: Additionally, what happens in $M_n(\mathbb{R})$?
Nota Bene: As a corollary of the counterexamples below, we deduce that if $A$ is not diagonal then $e^A$ may be diagonal.
| Here's an example over $\mathbb{R}$, modeled after Harald's answer: let
$$A=\pmatrix{0&-2\pi\\ 2\pi&0}.$$
Again, $e^A=I$. Now choose any $B$ that doesn't commute with $A$.
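A numeric check of this counterexample (my addition; $B$ is just one arbitrary matrix that doesn't commute with $A$):

```python
import numpy as np
from scipy.linalg import expm

A = 2 * np.pi * np.array([[0.0, -1.0], [1.0, 0.0]])
B = np.array([[1.0, 2.0], [3.0, 4.0]])

print(np.allclose(expm(A), np.eye(2)))                    # True: e^A = I
print(np.allclose(expm(A) @ expm(B), expm(B) @ expm(A)))  # True: exponentials commute
print(np.allclose(A @ B, B @ A))                          # False: A and B do not
```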
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/349180",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "61",
"answer_count": 5,
"answer_id": 2
} |
Is there an algorithm to find all subsets of a set? I'm trying to find a way to find all subsets of a set.
Is there an algorithm to calculate this?
| An algorithm is a type of finite procedure operating on finite data as input and generating a finite output. So you can only have an algorithm to find the subsets of $\Sigma$ if $\Sigma$ is finite. (You've been given some hints for that case, but it is important to stress that these hints only work for finite $\Sigma$.)
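For the finite case, a standard Python sketch (my addition, using the well-known itertools recipe):

```python
from itertools import chain, combinations

def powerset(iterable):
    """All subsets of a finite iterable, listed by increasing size."""
    s = list(iterable)
    return chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))

print(list(powerset([1, 2, 3])))
# [(), (1,), (2,), (3,), (1, 2), (1, 3), (2, 3), (1, 2, 3)]
```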
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/349220",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "15",
"answer_count": 10,
"answer_id": 8
} |
Is $\mathbb{R}$ a subset of $\mathbb{R}^2$? Is it correct to say that $\mathbb{R}$ is a subset of $\mathbb{R}^2$? Or, put more generally, given $n,m\in\mathbb{N}$, $n<m$, is $\mathbb{R}^n$ a subset of $\mathbb{R}^m$?
Also, strictly related to that: what is then the "relationship" between the set $\{(x,0)\in\mathbb{R}^2,x\in\mathbb{R}\}\subset\mathbb{R}^2$ and $\mathbb{R}$? Do they coincide (I would say no)? As vector spaces, do they have the same dimension (I would say yes)?
If you could give me a reference book for this kind of stuff, I would really appreciate it.
Thank you very much in advance.
(Please correct the tags if they are not appropriate)
| I wouldn't say so; even though every one-dimensional subspace of $\mathbb{R}^n$ is isomorphic to $\mathbb{R}$, there is no natural embedding.
A more or less funny thing is that even though nearly everyone says that $\mathbb{R}\not\subset\mathbb{R}^2$, many mathematicians say that $\mathbb{R}\subset\mathbb{C}$,
even though there is a canonical transformation from $\mathbb{C}$ to $\mathbb{R}^2$.
I guess at some point one stops distinguishing between things which are isomorphic but not the same.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/349329",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 3,
"answer_id": 1
} |
Lie Groups induce Lie Algebra homomorphisms I am having a difficult time showing that if $\phi: G \rightarrow H$ is a Lie group homomorphism, then $d\phi: \mathfrak{g} \rightarrow \mathfrak{h}$ satisfies the property that for any $X, Y \in \mathfrak{g},$ we have that $d\phi([X, Y]_\mathfrak{g}) = [d\phi(X), d\phi(Y)]_\mathfrak{h}$.
Any help is appreciated!
| Let $x \in G$. Since $\phi$ is a Lie group homomorphism, we have that
$$\phi(xyx^{-1}) = \phi(x) \phi(y) \phi(x)^{-1} \tag{$\ast$}$$
for all $y \in G$. Differentiating $(\ast)$ with respect to $y$ at $y = 1$ in the direction of $Y \in \mathfrak{g}$ gives us
$$d\phi(\mathrm{Ad}(x) Y) = \mathrm{Ad}(\phi(x)) d\phi(Y). \tag{$\ast\ast$}$$
Differentiating $(\ast\ast)$ with respect to $x$ at $x = 1$ in the direction of $X \in \mathfrak{g}$, we obtain
$$d\phi(\mathrm{ad}(X) Y) = \mathrm{ad}(d\phi(X)) d\phi(Y)$$
$$\implies d\phi([X, Y]_{\mathfrak{g}}) = [d\phi(X), d\phi(Y)]_{\mathfrak{h}}.$$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/349416",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 1,
"answer_id": 0
} |
Best estimate for random values Due to work-related issues I can't discuss the exact question I want to ask, but I thought of a silly little example that conveys the same idea.
Let's say the number of candies that come in a package is a random variable with mean $\mu$ and standard deviation $s$; after about 2 months of data gathering we've got about 100000 measurements and a pretty good estimate of $\mu$ and $s$.
Let's say that said candy comes in 5 flavours that are NOT identically distributed (we know the mean and standard deviation for each flavour; let's call them $\mu_1$ through $\mu_5$ and $s_1$ through $s_5$).
Let's say that next month we will get a new batch (several packages) of candy from our supplier and we would like to estimate the amount of candy we will get for each flavour. Is there a better way than simply assuming that we'll get "around" the mean for each flavour, taking into account that the total amount of candy we'll get is around $\mu$?
I have access to all the measurements made, so if anything is needed (higher order moments, other relevant data, etc.) I can compute it and update the question as needed.
Cheers and thanks!
| It depends on your definition of "better."
You need to define your risk function. If your risk function is MSE, you can do better than simply using the sample means. The idea is to use shrinkage, which as the name suggests means to shrink all your $\mu_i$ estimates slightly towards 0. The amount of shrinkage should be proportional to the sample variance $s^2$ of your data (noisier data calls for more shrinkage) and inversely proportional to the number of data points $n$ that you collect. Note that the James-Stein estimator is only better for $m \ge 3$ flavors. In general, some form of regularization is always wise in empirical problems.
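As a very rough sketch of what shrinkage looks like in code (my addition; the true means, noise level, and shrinkage target $0$ are all illustrative assumptions, and James–Stein needs $m \ge 3$ estimates, satisfied here with $m=5$ flavours):

```python
import numpy as np

rng = np.random.default_rng(0)
mu = np.array([3.0, 5.0, 2.0, 7.0, 4.0])   # true per-flavour means (unknown in practice)
sigma, n = 2.0, 10_000                     # per-package sd and number of packages

xbar = rng.normal(mu, sigma / np.sqrt(n))  # the observed sample means
se2 = sigma**2 / n                         # variance of each sample mean
m = len(xbar)
shrunk = (1 - (m - 2) * se2 / np.dot(xbar, xbar)) * xbar   # James-Stein, toward 0
print(xbar)
print(shrunk)
```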
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/349483",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 3,
"answer_id": 1
} |
Solve $\frac{1}{2x}+\frac{1}{2}\left(\frac{1}{2x}+\cdots\right)$ If
$$\displaystyle \frac{1}{2x}+\frac{1}{2}\left(\frac{1}{2x}+ \frac{1}{2}\left(\frac{1}{2x} +\cdots\right) \right) = y$$
then what is $x$?
I was thinking of expanding the brackets and trying to notice a pattern, but as it effectively goes to infinity, I don't think I can expand it properly, can I?
| Expand. The first term is $\frac{1}{2x}$.
The sum of the first two terms is $\frac{1}{2x}+\frac{1}{4x}$.
The sum of the first three terms is $\frac{1}{2x}+\frac{1}{4x}+\frac{1}{8x}$.
And so on.
The sum of the first $n$ terms is
$$\frac{1}{2x}\left(1+\frac{1}{2}+\frac{1}{4}+\cdots+\frac{1}{2^{n-1}}\right).$$
As $n\to\infty$, the inner sum approaches $2$. So the whole thing approaches $\frac{1}{x}$.
So we are solving the equation $\frac{1}{x}=y$.
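One can watch this convergence numerically by unfolding the nesting step by step (my addition):

```python
x = 4.0
s = 0.0
for _ in range(60):          # unfold the nesting: s_new = 1/(2x) + s/2
    s = 1 / (2 * x) + s / 2
print(s, 1 / x)              # both 0.25
```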
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/349548",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 4,
"answer_id": 2
} |
Bounds on $ \sum\limits_{n=0}^{\infty }{\frac{a..\left( a+n-1 \right)}{\left( a+b \right)...\left( a+b+n-1 \right)}\frac{{{z}^{n}}}{n!}}$ I have a confluent hypergeometric function $ _{1}{{F}_{1}}\left( a,a+b,z \right)$ where $z<0$ and $a,b>0$ are integers.
I am interested in finding bounds on the value it can take, or an approximation for it. Since $$0<\frac{a..\left( a+n-1 \right)}{\left( a+b \right)...\left( a+b+n-1 \right)}<1, $$
I was thinking that ${{e}^{z}}$ would be an upper bound. Is that right?
I am not sure about a lower bound; can this go to $-\infty$?
$$ _{1}{{F}_{1}}\left( a,a+b,z \right)=\sum\limits_{n=0}^{\infty }{\frac{a..\left( a+n-1 \right)}{\left( a+b \right)...\left( a+b+n-1 \right)}\frac{{{z}^{n}}}{n!}}\le {{e}^{z}}$$
Thanks!
| You may be interested in the asymptotic formula,
$$
{}_1F_1(a,a+b,z) = \frac{\Gamma(a+b)}{\Gamma(b)} (-z)^{-a} + O(z^{-a-1})
$$
as $\operatorname{Re} z \to -\infty$ (see, e.g., [1]).
Note, in particular, that it is not true that ${}_1F_1(a,a+b,x) \leq e^x$ for $x \in \mathbb{R}$ large and negative.
[1] Bateman Manuscript Project, Higher Transcendental Functions. Vol. 1, p. 248,255,278. [pdf]
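As a quick numerical illustration of this asymptotic behaviour (an addition, assuming the mpmath library is available; the parameter values are arbitrary):

```python
from mpmath import mp, hyp1f1, gamma, exp

mp.dps = 30
a, b = 1, 2                        # so the second parameter is a + b = 3
for x in [-10, -50, -200]:
    val = hyp1f1(a, a + b, x)
    leading = gamma(a + b) / gamma(b) * (-x) ** (-a)
    print(x, val, leading, exp(x))  # val tracks the power law, not e^x
```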
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/349631",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 0
} |
Larger Theory for root formula Consider the quadratic equation:
$$ax^2 + bx + c = 0$$
and the linear equation:
$$bx + c = 0$$.
We note the solution of the linear equation is
$$x = -\frac{c}{b}.$$
We note the solution of the quadratic equation is
$$\frac{-b \pm \sqrt{b^2 - 4ac}}{2a}$$.
Suppose we take the limit as $a$ approaches 0 in the quadratic formula: ideally we should get the expression $-\frac{c}{b}$ left over, but this is clearly not the case.
This means that the quadratic formula does not generalize the linear formula, it is instead streamlined to only solve quadratic equations.
What would the general formula be? Basically what is the formula such that if $a$ is non-zero is equivalent to the quadratic equation and if $a = 0$ breaks down to the linear case?
| Hint $\ $ First rationalize the numerator, then take the limit as $\rm\:a\to 0.$
$$\rm \frac{-b + \sqrt{b^2 - 4ac}}{2a}\ =\ \frac{2c}{-b -\sqrt{b^2 - 4ac}}\ \to\ -\frac{c}b\ \ \ as\ \ \ a\to 0 $$
Remark $\ $ The quadratic equation for $\rm\,\ z = 1/x\,\ $ is $\rm\,\ c\ z^2+ b\ z + a = 0\,\ $ hence
$$\rm z\ =\ \dfrac{1}{x}\ =\ \dfrac{-b \pm \sqrt{b^2-4\:a\:c}}{2\:c} $$
Inverting the above now yields the sought limit as $\rm\:a\to 0,\:$ so effectively removing the apparent singularity at $\rm\: a = 0\:.\ $
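A quick symbolic check of this limit (an illustrative addition, assuming SymPy):

```python
import sympy as sp

a, b, c = sp.symbols('a b c', positive=True)
root = (-b + sp.sqrt(b**2 - 4*a*c)) / (2*a)
print(sp.limit(root, a, 0, '+'))   # -c/b, recovering the linear case
```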
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/349682",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Restricted Permutations and Combinations Tino, Colin, Candice, Derek, Esther, Mary and Ronald are famous artist. Starting next week, they will take turns to display their work
and each artist's work will be on display at the London Show for
exactly one week so that the display of the artworks will last the next seven
weeks. In how many ways can a display schedule be developed if
$(a)$. There is no restriction. ANS 7! EASY
Suppose that a revised timetable has been drawn up and the artists in $(a)$ are
to display their work in groups of twos or threes so that the entire exercise
takes at most $3$ weeks. In how many ways can Colin and Candice find themselves
in the same group?
Hint: We need to explore all possible scenarios - the possible distributions of the
teams in terms of numbers are: $2,2,3; 2,3,2;$ and $3,2,2.$ Candice and Colin can be in
the $1st, 2nd$ or $3rd$ week. This thinking gives us $9$ possible scenarios
in which Adam and Brian may end up being in the same team. Answer: $150$
How to get the $150$ I have scratched my head to no avail.
What I assumed is that if they are both in a group that has only two people, we count that group as one person and if they are in a group that has $3$ people we count the group as two people.
I tried:
In both cases the $n$ is reduced to $7-2$, and $r$ to either $3-2$ or $2-2$.
Hence: $(5C1*2!*3!*2! + 5C0*2!*2!*2!) * 3!$
and many other futile attempts.
The problem is :
1). Classification, how do I classify this problem? As a strictly permutation or strictly combination or mixed permutation or combination?
2). Is sampling with or without replacement?
3). How do I change $n$ and $r$, by subtracting one from each? Or by subtracting p, the number of objects that will always occur? $7-2=5?$ or $7-1 =6?$
The correct reasoning approach will be greatly appreciated.
| For (a) I also find $7!$.
For the three week problem, let us start by assuming $3,2,2$. We can multiply by $3$ at the end to take care of cyclic permutations of weeks. The two pairs are differently named bins in this case. The two C's can be together in one of the twos in $2\text{(which week)}*{5 \choose 3}\text{(who is in the triplet)}=20$ ways. They can be two of the triplet in $5\text{(the other member of the triplet)}*{4 \choose 2}\text{(the first other couple)}=30$ ways. Multiplying by $3$ gives a total of $150$ ways.
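A brute-force enumeration in Python confirms the count (an illustrative addition; artists 0 and 1 stand in for Colin and Candice):

```python
from itertools import combinations

artists = range(7)
total = together = 0
for sizes in [(3, 2, 2), (2, 3, 2), (2, 2, 3)]:
    for g1 in combinations(artists, sizes[0]):
        rest = [x for x in artists if x not in g1]
        for g2 in combinations(rest, sizes[1]):
            g3 = tuple(x for x in rest if x not in g2)
            total += 1
            together += any({0, 1} <= set(g) for g in (g1, g2, g3))
print(total, together)   # 630 schedules in all, 150 with the pair together
```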
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/349758",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Proving the symmetry of the Ricci tensor? Consider the Ricci tensor :
$$R_{\mu\nu}=\partial_{\rho}\Gamma_{\nu\mu}^{\rho}-\partial_{\nu}\Gamma_{\rho\mu}^{\rho}+\Gamma_{\rho\lambda}^{\rho}\Gamma_{\nu\mu}^{\lambda}-\Gamma_{\nu\lambda}^{\rho}\Gamma_{\rho\mu}^{\lambda}$$
In the most general case, is this tensor symmetric ? If yes, how to prove it ?
| This criticism misunderstands the previous post, by assuming different conventions about the ordering of indices on the curvature tensor. The previous post assumes the convention that
$$2\nabla_{[j}\nabla_{k]}v^\ell = {R^\ell}_{ijk}v^i.$$
With this convention, the argument is correct.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/349817",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 5,
"answer_id": 4
} |
Easiest and most complex proof of $\gcd (a,b) \times \operatorname{lcm} (a,b) =ab.$ I'm looking for an understandable proof of this theorem, and also a complex one involving beautiful math techniques such as analytic number theory, or something else. I hope you can help me on that. Thank you very much
| Let $\ell= \text{lcm}(a,b), g=\text{gcd}(a,b)$ for some $a,b$ positive integers.
Division algorithm: there exist integers $q,r$ with $0\leq r < \ell$ such that $ab = q\ell + r$. Observing that both $a$ and $b$ divide both $ab$ and $q\ell$, we conclude they both divide $r$. As $r$ is a common multiple, we must have $\ell \leq r$ or $r\leq 0$, so $r=0$.
Therefore $s=\frac{ab}{\ell}$ is an integer. Observe that $\frac{a}{s}=\frac{\ell}{b}$ and $\frac{b}{s}=\frac{\ell}{a}$ are both integers, so $s$ is a common divisor. Thus $g\geq s = \frac{ab}{\ell}$.
On the other hand, $\frac{ab}{g}=a\frac{b}{g}=b\frac{a}{g}$ where $\frac{b}{g}$ and $\frac{a}{g}$ are both integers, so $\frac{ab}{g}$ is a common multiple of $a$ and $b$. As $\frac{ab}{g}>0$ we conclude $\frac{ab}{g}\geq \ell$, therefore $\frac{ab}{\ell}\geq g$ and we are done.
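A small brute-force check of the identity (an illustrative addition; `lcm_brute` deliberately avoids using the identity itself):

```python
from math import gcd

def lcm_brute(a, b):
    # Smallest common multiple, found by direct search.
    m = max(a, b)
    while m % a or m % b:
        m += 1
    return m

for a in range(1, 41):
    for b in range(1, 41):
        assert gcd(a, b) * lcm_brute(a, b) == a * b
print("gcd(a,b) * lcm(a,b) == a*b for all 1 <= a, b <= 40")
```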
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/349858",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "35",
"answer_count": 11,
"answer_id": 9
} |
Concatenation of 2 finite Automata I have some problems understanding the algorithm of concatenation of two NFAs.
For example: how do I concatenate A1 and A2?
A1:
# a b
- - -
-> s {s} {s,p}
p {r} {0}
*r {r} {r}
A2:
# a b
- - -
-> s {s} {p}
p {s} {p,r}
*r {r} {s}
Any help would be greatly appreciated.
| We connect the accepting states of A1 to the starting point of A2, assuming that -> means the start state and * means an accepting state. I labelled the states according to the original automata, deleted the * from r1 and the -> from s2, and added s2 to every transition that enters r1 (once an A1-word could be accepted, we can jump to A2):
# a b
-- -- --
-> s1 {s1} {s1,p1}
p1 {r1,s2} 0
r1 {r1,s2} {r1,s2}
s2 {s2} {p2}
p2 {s2} {p2,r2}
*r2 {r2} {s2}
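As a sanity check (an addition, reading `{0}`/`0` in the tables as the empty set), this Python sketch simulates both original NFAs and the combined one, and verifies that the combined automaton accepts a word exactly when it splits into an A1-word followed by an A2-word:

```python
from itertools import product

A1 = {'s': {'a': {'s'}, 'b': {'s', 'p'}},
      'p': {'a': {'r'}, 'b': set()},
      'r': {'a': {'r'}, 'b': {'r'}}}
A2 = {'s': {'a': {'s'}, 'b': {'p'}},
      'p': {'a': {'s'}, 'b': {'p', 'r'}},
      'r': {'a': {'r'}, 'b': {'s'}}}
C  = {'s1': {'a': {'s1'}, 'b': {'s1', 'p1'}},
      'p1': {'a': {'r1', 's2'}, 'b': set()},
      'r1': {'a': {'r1', 's2'}, 'b': {'r1', 's2'}},
      's2': {'a': {'s2'}, 'b': {'p2'}},
      'p2': {'a': {'s2'}, 'b': {'p2', 'r2'}},
      'r2': {'a': {'r2'}, 'b': {'s2'}}}

def accepts(delta, start, accepting, word):
    # Standard NFA subset simulation.
    states = {start}
    for ch in word:
        states = set().union(*(delta[q][ch] for q in states))
    return bool(states & accepting)

for n in range(8):
    for w in map(''.join, product('ab', repeat=n)):
        direct = accepts(C, 's1', {'r2'}, w)
        split = any(accepts(A1, 's', {'r'}, w[:i]) and
                    accepts(A2, 's', {'r'}, w[i:]) for i in range(len(w) + 1))
        assert direct == split, w
print("combined NFA accepts exactly L(A1)L(A2) on all words up to length 7")
```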
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/349994",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Applications of group cohomology to algebra I started learning about group cohomology (of finite groups) from two books: Babakhanian and Hilton&Stammbach.
The theory is indeed natural and beautiful, but I could not find many examples to its uses in algebra.
I am looking for problems stated in more classical algebraic terms which are solved elegantly or best understood through the notion of group cohomology. What I would like to know the most is: what can we learn about a finite group $G$ by looking at its cohomology groups relative to various $G$-modules?
The one example I did find is $H^2(G,M)$ classifying extensions of $M$ by $G$.
So, my question is:
What problems on groups/rings/fields/modules/associative algebras/Lie algebras are solved or best understood through group cohomology?
Examples in algebraic number theory are also welcome (this is slightly less interesting from my current perspective, but I do remember the lecturer mentioning this concept in a basic algebraic number theory course I took some time ago).
| Here's a simple example off the top of my head. A group is said to be finitely presentable if it has a presentation with finitely many generators and relations. This, in particular, implies that $H_2(G)$ is of finite rank. (You can take nontrivial coefficient systems here too.) So you get a nice necessary condition for finite presentability.
The proof of this fact is simple. If $G$ is finitely presented, you can build a finite $2$-complex that has $G$ as its fundamental group. To get an Eilenberg-MacLane space $K(G,1)$ you add $3$-cells to kill all of $\pi_2$, then you add $4$-cells to kill all of $\pi_3$, etc. You end up building a $K(G,1)$ with a finite $2$-skeleton.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/350065",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "13",
"answer_count": 2,
"answer_id": 0
} |
Integrate: $\oint_c (x^2 + iy^2)ds$ How do I integrate the following with $|z| = 2$ and $s$ is the arc length? The answer is $8\pi(1+i)$ but I can't seem to get it.
$$\oint_c (x^2 + iy^2)ds$$
| Parametrize $C$ as $\gamma(t) = 2e^{it} = 2(\cos t + i \sin t)$ for $t \in [0, 2\pi]$.
From the definition of the path integral, we have:
$$
\oint_C f(z) \,ds = \int_a^b f(\gamma(t)) \left|\gamma'(t)\right| \,dt
$$
Plug in the given values to get:
\begin{align}
\oint_C (x^2 + iy^2) \,ds &= \int_0^{2\pi} 4(\cos^2{t} + i \sin^2{t})\left| 2ie^{it}\right| \,dt \\
&= 8\left(\int_0^{2\pi} \cos^2 t\,dt + i \int _0^{2\pi} \sin^2 t\,dt\right)
\end{align}
The last 2 integrals should be straightforward. Both evaluate to $\pi$. Hence, the original integral evaluates to $8\pi(i + 1)$.
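A quick numerical confirmation (an addition, assuming NumPy; a plain Riemann sum over the full period is very accurate for this smooth periodic integrand):

```python
import numpy as np

t = np.linspace(0, 2 * np.pi, 200000, endpoint=False)
x, y = 2 * np.cos(t), 2 * np.sin(t)
integrand = (x**2 + 1j * y**2) * 2.0        # |gamma'(t)| = |2ie^{it}| = 2
val = integrand.sum() * (2 * np.pi / t.size)
print(val, 8 * np.pi * (1 + 1j))            # both ~ 25.1327 + 25.1327j
```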
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/350137",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Square root is operator monotone This is a fact I've used a lot, but how would one actually prove this statement?
Paraphrased: given two positive operators $X, Y \geq 0$, how can you show that $X^2 \leq Y^2 \Rightarrow X \leq Y$ (or that $X \leq Y \Rightarrow \sqrt X \leq \sqrt Y$, but I feel like the first version would be easier to work with)?
Note: $X$ and $Y$ don't have to commute, so $X^2 - Y^2$ is not necessarily $(X+Y)(X-Y)$.
| Here is a proof which works more generally for $x,y\ge 0$ in a $C^*$-algebra, where the spectral radius satisfies $r(z)\le \|z\|=\sqrt{\|z^*z\|}$ for every element $z$, and $r(t)=\|t\|$ for every normal element $t$. The main difference with @user1551's argument is that we will use the invertibility of $y$. Other than that, the idea is essentially the same.
If $y$ is not invertible, replace $y$ by $y+\epsilon1$ in the following argument, and then let $\epsilon$ tend to $0$ at the very end.
Now assume that $y$ is invertible and note that
$$x^2\le y^2\quad\Rightarrow\quad y^{-1}x^2y^{-1}\le y^{-1}y^2y^{-1}=1\quad\Rightarrow\quad\left\|y^{-1}x^2y^{-1}\right\|\le 1.$$
Since $t=y^{-1/2}xy^{-1/2}$ and $z=xy^{-1}$ are similar, they have the same spectral radius and therefore
$$\left\|t\right\|=r(t)=r\left(z\right)\le\|z\|=\sqrt{\|z^*z\|}=\sqrt{\left\|y^{-1}x^2y^{-1}\right\|}\le 1.$$
It follows that $y^{-1/2}xy^{-1/2}\le 1$, whence $x\le y$ as desired.
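A random-matrix spot check of the monotonicity (an addition; it tests $X \le Y \Rightarrow \sqrt X \le \sqrt Y$ numerically, assuming SciPy is available):

```python
import numpy as np
from scipy.linalg import sqrtm

rng = np.random.default_rng(0)
for _ in range(100):
    M = rng.standard_normal((4, 4))
    X = M @ M.T                                  # X >= 0
    N = rng.standard_normal((4, 4))
    Y = X + N @ N.T                              # Y >= X
    S = np.real(sqrtm(Y) - sqrtm(X))             # should be >= 0
    assert np.linalg.eigvalsh((S + S.T) / 2).min() > -1e-6
print("sqrt(Y) - sqrt(X) was positive semidefinite in every trial")
```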
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/350188",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "15",
"answer_count": 3,
"answer_id": 1
} |
Order of an element in a group G Suppose that $a$ is an element of order $n$ in a group $G$. Prove:
i) $a^i = a^j$ if and only if $i \equiv j \pmod n$;
ii) if $d = (m,n)$, then the order of $a^m$ is $n/d$;
I was trying to teach myself this and came to this question. How would you solve it? Can someone please show how?
| If $i \equiv j \pmod n$, then $i = j + kn$ for some $k \in \Bbb Z$. It follows that:
$$
a^i = a^{j + kn} = a^ja^{kn} = a^j (a^n)^k = a^j
$$
If you haven't proved the power properties $a^{p+q} = a^pa^q$ and $a^{pq} = (a^p)^q$ for $p, q \in \Bbb Z^+$, this is a good exercise to do now. Try using induction.
Now, if $a^i = a^j$, then:
$$
a^i a^{-j} = a^{i - j} = 1
$$
And since $n$ is the order of $a$, this is only possible if $i - j$ is a multiple of $n$. (i) follows.
The property $(a^n)^{-1} = a^{-n}$ was used here. Again, try proving it via induction.
For (ii), assume the order of $a^m$ is a number smaller than $n/d$, and try to use (i) to reach a contradiction.
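For experimentation, here is a small numeric check of (ii) in the cyclic group $(\Bbb Z/101)^\times$ (an illustrative addition; $2$ happens to be a generator there, so $n=100$):

```python
from math import gcd

p = 101

def order(g):
    # Order of g in (Z/p)^*, by repeated multiplication.
    k, x = 1, g % p
    while x != 1:
        x = x * g % p
        k += 1
    return k

a = 2
n = order(a)                       # n == 100 here
for m in range(1, 60):
    assert order(pow(a, m, p)) == n // gcd(m, n)
print("order(a^m) == n / gcd(m, n) for m = 1..59")
```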
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/350341",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
What series test would you use on these and why? I just started learning series and I am trying to put everything together. I have a few random problems, just to see what kind of strategy you would use here...
*
*$\displaystyle\sum_{n=1}^\infty\frac{n^n}{(2^n)^2}$
*$\displaystyle\sum_{n=1}^\infty\frac2{(2n - 1)(2n + 1)}$; telescoping series?
*$\displaystyle\sum_{n=1}^\infty\frac1{n(1 + \ln^2 n)}$
*$\displaystyle\sum_{n=1}^\infty\frac1{\sqrt{n (n + 1)}}$; integral test? Because you could do a u-substitution?
| General and mixed hints:
For (2), telescope:
$$\frac{2}{(2n-1)(2n+1)}=\left(\frac{1}{2n-1}-\frac{1}{2n+1}\right)$$
For (1), the $n$-th root of the general term is
$$\frac{n}{4}\xrightarrow[n\to\infty]{}\infty$$
For (3), Cauchy condensation gives
$$\frac{2^n}{2^n(1+n^2\log^22)}\le\frac{1}{\log^22}\cdot\frac{1}{n^2}$$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/350401",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
} |
Can the matrix $A=\begin{bmatrix} 0 & 1\\ 3 & 3 \end{bmatrix}$ be diagonalized over $\mathbb{Z}_5$? I'm stuck on finding eigenvalues that are in the field; please help.
Given matrix:
$$
A= \left[\begin{matrix}
0 & 1\\
3 & 3
\end{matrix}\right]
$$
whose entries are from $\mathbb{Z}_5 = \{0, 1, 2, 3, 4\}$, find, if possible, matrices $P$ and $D$ over $\mathbb{Z}_5$ such that $P^{−1} AP = D$.
I have found the characteristic polynomial: $x^2-3x-3=0$
Since it's over $\mathbb{Z}_5$, $x^2-3x-3=x^2+2x+2=0$.
But from there I'm not sure how to find the eigenvalues, once I get the eigenvalues that are in the field it will be easy to find the eigenvectors and create the matrix $P$.
| Yes, it can be diagonalized over $\Bbb Z_5$, because:
For $\lambda^2 -3\lambda-3=0$ over $\Bbb Z_5$ we have $\Delta=9+12=4+2=6$ (since $9\equiv 4$ and $12\equiv 2$ in $\Bbb Z_5$), so $\Delta=1$,
and therefore $\lambda_1=\frac{3+1}{2}=2$ and $\lambda_2=\frac{3-1}{2}=1$.
For $\lambda_1=2$ we solve
$$\left( \left[\begin{matrix} 0 & 1\\ 3 & 3 \end{matrix}\right]-\left[\begin{matrix} 2 & 0\\ 0 & 2 \end{matrix}\right] \right)\left[\begin{matrix} x\\ y \end{matrix}\right]=0,$$
i.e. $-2x+y=0$ and $3x+y=0$, which over $\Bbb Z_5$ both reduce to $y=2x$.
So the eigenspace of $\lambda_1$ is $\{(0,0),(1,2),(2,4),(3,1),(4,3)\}$; it has dimension $1$, with basis $\{(1,2)\}$.
For $\lambda_2=1$:
$$\left( \left[\begin{matrix} 0 & 1\\ 3 & 3 \end{matrix}\right]-\left[\begin{matrix} 1 & 0\\ 0 & 1 \end{matrix}\right] \right)\left[\begin{matrix} x\\ y \end{matrix}\right]=0,$$
i.e. $-x+y=0$ and $3x+2y=5x=0$, so $y=x$.
The eigenspace of $\lambda_2$ is $\{(0,0),(1,1),(2,2),(3,3),(4,4)\}$, again of dimension $1$, with basis $\{(1,1)\}$.
With respect to the basis $\{(1,2),(1,1)\}$, i.e. $P=\left[\begin{matrix} 1 & 1\\ 2 & 1 \end{matrix}\right]$, the matrix is diagonal:
$$D=P^{-1}AP=\left[\begin{matrix} 2 & 0\\ 0 & 1 \end{matrix}\right].$$
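A quick check of $P^{-1}AP=D$ over $\Bbb Z_5$ (an illustrative addition; `pow(x, -1, p)` computes a modular inverse on Python 3.8+):

```python
import numpy as np

p = 5
A = np.array([[0, 1], [3, 3]])
P = np.array([[1, 1], [2, 1]])             # columns: eigenvectors (1,2) and (1,1)
det = int(P[0, 0] * P[1, 1] - P[0, 1] * P[1, 0]) % p
adj = np.array([[P[1, 1], -P[0, 1]], [-P[1, 0], P[0, 0]]])
P_inv = (pow(det, -1, p) * adj) % p        # inverse via adjugate, mod 5
print((P_inv @ A @ P) % p)                 # [[2 0] [0 1]]
```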
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/350470",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 2,
"answer_id": 1
} |
Separating $\frac{1}{1-x^2}$ into multiple terms I'm working through an example that contains the following steps:
$$\int\frac{1}{1-x^2}dx$$
$$=\frac{1}{2}\int\frac{1}{1+x} - \frac{1}{1-x}dx$$
$$\ldots$$
$$=\frac{1}{2}\ln{\frac{1+x}{1-x}}$$
I don't understand why the separation works. If I attempt to re-combine the terms, I get this:
$$\frac{1}{1+x} - \frac{1}{1-x}$$
$$=\frac{1-x}{1-x}\frac{1}{1+x} - \frac{1+x}{1+x}\frac{1}{1-x}$$
$$=\frac{1-x - (1+x)}{1-x^2}$$
$$=\frac{-2x}{1-x^2} \ne \frac{2}{1-x^2}$$
Or just try an example, and plug in $x = 2$:
$$2\frac{1}{1-2^2} = \frac{-2}{3}$$
$$\frac{1}{1+2} -\frac{1}{1-2} = \frac{1}{3} + 1 = \frac{4}{3} \ne \frac{-2}{3}$$
Why can $\frac{1}{1-x^2}$ be split up in this integral, when the new terms do not equal the old term?
| The thing is $$\frac{1}{1-x}\color{red}{+}\frac 1 {1+x}=\frac{2}{1-x^2}$$
What you might have seen is
$$\frac{1}{x-1}\color{red}{-}\frac 1 {x+1}=\frac{2}{1-x^2}$$
Note the denominator is reversed in the sense $1-x=-(x-1)$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/350564",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
} |
PRA: Rare event approximation with $P(A\cup B \cup \neg C)$? The rare event approximation for events $A$ and $B$ means the upper-bound approximation $P(A\cup B)=P(A)+P(B)-P(A\cap B)\leq P(A)+P(B)$. Now by the inclusion-exclusion principle $$P(A\cup B\cup \neg C)= P(A)+P(B)+P(\neg C)-P(A\cap B)-P(A\cap \neg C) -P(B\cap\neg C) +P(A\cap B\cap \neg C) \leq P(A)+P(B)+P(\neg C)+P(A\cap B\cap \neg C)$$
and now, by Wikipedia's general form of the inclusion-exclusion principle, does the rare-event approximation mean removing the minus terms from that general form, i.e. the bound below?
$$\mathbb P\left(\bigcup_{i=1}^{n}A_i\right)\leq \sum_{\substack{I\subseteq\{1,\dots,n\}\\ |I|\ \text{odd}}}\mathbb P(A_I),\qquad A_I:=\bigcap_{i\in I}A_i,$$
where the largest odd value of $|I|$ is $2h-1$, the last odd number in $\{1,2,3,\dots,n\}$.
Example
For example in the case of three events, is the below true rare-event
approximation?
$$P(A\cup B\cup \neg C) \leq P(A)+P(B)+P(\neg C)+P(A\cap B\cap \neg C)$$
P.s. I am studying probability-risk-assessment course, Mat-2.3117.
| Removing all the negatives certainly gives an upper bound. But if one looks at the logic of the inclusion-exclusion argument, whenever we have just added, we have added too much (except possibly, at the very end). So at any stage just before we start subtracting again, our truncated expression gives an upper bound.
Thus one obtains upper bounds by truncating after the first sum, or the third, or the fifth, and so on.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/350633",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
We break a unit length rod into two pieces at a uniformly chosen point. Find the expected length of the smaller piece We break a unit length rod into two pieces at a uniformly chosen point. Find the
expected length of the smaller piece
| With probability ${1\over2}$ each the break takes place in the left half, resp. in the right half of the rod. In both cases the average length of the smaller piece is ${1\over4}$. Therefore the overall expected length of the smaller piece is ${1\over4}$.
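A one-line Monte Carlo check (an illustrative addition, assuming NumPy):

```python
import numpy as np

u = np.random.default_rng(1).random(10**6)    # uniform break point on [0, 1]
print(np.minimum(u, 1 - u).mean())            # ~ 0.25
```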
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/350679",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 4,
"answer_id": 0
} |
Section of unions of open subschemes I'm stuck at a line in Hartshorne's text (p.g. 82). Could someone help me please?
Fact. Suppose that $X$ is a scheme having $U$ and $V$ as two non-empty disjoint open subsets of $X$. Then $\mathcal{O}_X(U \cup V) = \mathcal{O}_X(U) \times \mathcal{O}_X(V)$.
I know how to prove this when $X$ is affine, but I don't know how to reduce the general case to this affine case. The fact sounds intuitively reasonable: since $U$ and $V$ are disjoint, a "function" is defined on $U \cup V$ iff it is defined independently on $U$ and on $V$. However, I can't prove this rigorously, since I'm stuck at the first difficulty with algebraic geometry in the language of schemes: there is no formula for $\mathcal{O}_X(U)$! It would be nice to see a rigorous proof of this fact. Thanks!
| There is a canonical homomorphism $(\rho^{U\cup V}_U, \rho^{U\cup V}_V) : \Gamma(U\cup V) \to \Gamma(U) \times \Gamma(V)$ induced by the restriction homomorphisms. This being an isomorphism follows directly from the fact that the structure sheaf is a sheaf: injectivity is precisely the fact that a section of $U \cup V$ restricts to 0 on both $U$ and $V$ iff it is 0 on the union; and surjectivity is precisely the fact that two sections on $U$ and $V$ respectively can be glued to a section on the union (since they agree trivially on the empty intersection).
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/350799",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Reasoning about the gamma function using the digamma function I am working on analyzing the following expression:
$\log\Gamma(\frac{1}{2}x) - \log\Gamma(\frac{1}{3}x)$
If I'm understanding correctly, the above is an increasing function, which can be demonstrated by the following argument using the digamma function $\frac{\Gamma'}{\Gamma}(x) = \int_0^\infty\left(\frac{e^{-t}}{t} - \frac{e^{-xt}}{1-e^{-t}}\right)dt$:
$\frac{\Gamma'}{\Gamma}(\frac{1}{2}x) - \frac{\Gamma'}{\Gamma}(\frac{1}{3}x) = \int_0^\infty\frac{1}{1-e^{-t}}\left(e^{-\frac{1}{3}xt} - e^{-\frac{1}{2}xt}\right)dt > 0 \quad (x > 1)$
Please let me know if this reasoning is incorrect or if you have any corrections.
Thanks very much!
-Larry
| This answer is provided with help from J.M.
$\log\Gamma(\frac{1}{2}x) - \log\Gamma(\frac{1}{3}x)$ is an increasing function. This can be shown using this series for $\psi$:
The function is increasing if we can show: $\frac{d}{dx}(\log\Gamma(\frac{1}{2}x) - \log\Gamma(\frac{1}{3}x)) > 0$
We can show this using the digamma function $\psi(x)$:
$$\frac{d}{dx}(\log\Gamma(\frac{1}{2}x) - \log\Gamma(\frac{1}{3}x)) = \frac{\psi(\frac{1}{2}x)}{2} - \frac{\psi(\frac{1}{3}x)}{3}$$
$$\frac{\psi(\frac{1}{2}x)}{2} - \frac{\psi(\frac{1}{3}x)}{3} = -\gamma + \sum_{k=0}^\infty(\frac{1}{k+1} - \frac{1}{k + {\frac{1}{2}}}) + \gamma - \sum_{k=0}^\infty(\frac{1}{k+1} - \frac{1}{k+\frac{1}{3}})$$
$$= \sum_{k=0}^\infty(\frac{1}{k+\frac{1}{3}} - \frac{1}{k+\frac{1}{2}})$$
Since for all $k\ge 0$: $k + \frac{1}{3} < k + \frac{1}{2}$, it follows that for all $k\ge0$: $\frac{1}{k+\frac{1}{3}} > \frac{1}{k+\frac{1}{2}}$ and therefore: $$\sum_{k=0}^\infty(\frac{1}{k+\frac{1}{3}} - \frac{1}{k+\frac{1}{2}}) > 0.$$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/350857",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Name of $a*b=c$ and $b*a=-c$ $A_+=(A,+,0,-)$ is a noncommutative group where inverse elements are $-a$
$A_*=(A,*)$ is not associative and is not commutative
$\mathbf A=(A,+,*)$ is a structure where
1) if $a*b=c$ then $b*a=-c$ holds and
2) $(a*b)+a=b+(a*b)$
A - What is the structure $\mathbf A$ called?
B-What is the name of 1) and 2) in abstract algebra (even not in the same structure)?
C - And what is this structure? Has it already been studied, and does it have other interesting (or obvious) properties that I don't see?
Thanks in advance, and I apologize for errors in my English.
Update
As Lord_Farin explained to me, $\mathbf A=(A,+,*)$ can't be such a structure (closed), since property 2) implies that $a*0$ and $0*a$ are absorbing elements of $(A,+)$, and that is impossible because $A_+$ is a non-trivial group (by the definition).
Anyways I notice that my questions B (about property 2) ) and C are still open in the case of a "generic" $A_+$ .
| From 2) we have $(a*0)+a = 0+(a*0) = a*0 = (a*0)+0$ which contradicts the fact that $(A,+)$ is a group (which implies "left-multiplication" by $(a*0)$, i.e. $x \mapsto (a*0)+x$, is injective). Thus the structure $(A,+,*)$ cannot exist.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/350942",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Rank of a matrix. Let a non-zero column matrix $A_{m\times 1}$ be multiplied with a non-zero row matrix $B_{1\times n}$ to get a matrix $X_{m\times n}$ . Then how to find rank of $X$?
| Let me discuss a shortcut for finding the rank of a matrix.
The rank of a matrix equals the number of independent rows, that is, the number of independent equations in the homogeneous linear system the matrix defines.
The number of equations equals the number of rows, and the number of variables in each equation equals the number of columns.
Suppose there is a 3x3 matrix with rows:
row 1 : 1 2 3
row 2 : 3 4 2
row 3 : 4 5 6
So there will be three equations:
x + 2y + 3z = 0 (1)
3x + 4y + 2z = 0 (2)
4x + 5y + 6z = 0 (3)
None of the above equations can be obtained by adding or subtracting the other two, or by multiplying or dividing a single equation by a constant. So there are three independent equations, and the rank of the above matrix is 3.
Consider another matrix of order 3x3 with rows:
row 1 : 10 11 12
row 2 : 1 2 7
row 3 : 11 13 19
The equations are:
10x + 11y + 12z = 0 (4)
x + 2y + 7z = 0 (5)
11x + 13y + 19z = 0 (6)
Equation (6) can be obtained by adding equations (4) and (5). So there are only two independent equations, and the rank of this matrix is 2.
This method can be applied to a matrix of any order.
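For the original question, where $X = AB$ with $A$ a nonzero column and $B$ a nonzero row, the rank is $1$; the examples above also check out numerically (an illustrative addition, assuming NumPy):

```python
import numpy as np

A = np.array([[1.0], [2.0], [-3.0]])          # nonzero m x 1 column
B = np.array([[4.0, 0.0, 5.0, 1.0]])          # nonzero 1 x n row
print(np.linalg.matrix_rank(A @ B))           # 1

M1 = np.array([[1, 2, 3], [3, 4, 2], [4, 5, 6]])
M2 = np.array([[10, 11, 12], [1, 2, 7], [11, 13, 19]])
print(np.linalg.matrix_rank(M1), np.linalg.matrix_rank(M2))   # 3 2
```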
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/350996",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 2
} |
How to compute the area of the shadow?
If we cannot use integrals, then how do we compute the area of the shadow?
It seems easy, but it actually isn't.
Thanks!
| Let $a$ be a side of the square. Consider the following diagram
The area we need to calculate is as follows.
$$\begin{eqnarray} \color{Black}{\text{Black}}=(\color{blue}{\text{Blue}}+\color{black}{\text{Black}})-\color{blue}{\text{Blue}}. \end{eqnarray}$$
Note that the blue area can be calculated as
$$\begin{eqnarray}\color{blue}{\text{Blue}}=\frac14a^2\pi-2\cdot\left(\color{orange}{\text{Yellow}}+\color{red}{\text{Red}}\right).\end{eqnarray}$$
We already know most of the lengths. What's stopping us from calculating the black area is lack of known angles. Because of symmetry, almost any angle would do the trick.
It's fairly easy to calculate angles of triangle $\begin{eqnarray}\color{orange}{\triangle POA}\end{eqnarray}$, if we use cosine rule.
$$\begin{eqnarray}
|PA|^2&=&|AO|^2+|PO|^2-2\cdot|AO|\cdot|PO|\cos\angle POA\\
a^2&=&\frac{a^2}{4}+\frac{2a^2}{4}-2\cdot\frac a2\cdot\frac{a\sqrt2}{2}\cdot\cos\angle POA\\
4a^2&=&3a^2-2a^2\sqrt2\cos\angle POA\\
1&=&-2\sqrt2\cos\angle POA\\
\cos\angle POA&=&-\frac{1}{2\sqrt2}=-\frac{\sqrt2}{4}.
\end{eqnarray}$$
Now, because of symmetry, we have $\angle POA=\angle POB$, so $\angle AOB=360^\circ-2\angle POA$. So the cosine of angle $\angle AOB$ can be calculated as follows:
$$\begin{eqnarray}
\cos\angle AOB&=&\cos(360^\circ-2\angle POA)=\cos(2\pi-2\angle POA)\\
\cos\angle AOB&=&\cos(-2\angle POA)=\cos(2\angle POA)\\
\cos\angle AOB&=&\cos^2(\angle POA)-\sin^2(\angle POA)\\
\cos\angle AOB&=&\cos^2(\angle POA)-(1-\cos^2(\angle POA))\\
\cos\angle AOB&=&2\cos^2(\angle POA)-1\\
\cos\angle AOB&=&2\cdot\left(-\frac{\sqrt2}{4}\right)^2-1=-\frac34\\
\end{eqnarray}$$
From this, we can easily calculate the sine of angle $\angle AOB$, using Pythagorean identity.
$$ \sin\angle AOB=\sqrt{1-\frac9{16}}=\sqrt\frac{16-9}{16}=\frac{\sqrt7}4 $$
Going this way, I believe it's not hard to calculate other angles and use known trigonometry-like formulas for area. Then you can easily pack it together using the first equation with colors.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/351218",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 4,
"answer_id": 1
} |
Rudin Theorem 1.35 - Cauchy Schwarz Inequality Any motivation for the sum that Rudin considers in his proof of the Cauchy-Schwarz Inequality?
Theorem 1.35 If $a_1,...,a_n$ and $b_1, ..., b_n$ are complex numbers, then
$$\Biggl\vert\sum_{j=1}^n a_j\overline{b_j}\Biggr\vert^2 \leq \sum_{j=1}^n|a_j|^2\sum_{j=1}^n|b_j|^2.$$
For the proof, he considers this sum to kick it off:
$$\sum_{j=1}^n \vert Ba_j - Cb_j\vert^2, \text{ where } B = \sum_{j=1}^n \vert b_j \vert^2 \text{ and } C = \sum_{j=1}^na_j\overline{b_j}.$$
I don't see where it comes from. Any help?
Thank-you.
| He does it because it works. Essentially, as you see, $$\sum_{j=1}^n |Ba_j-Cb_j|^{2}$$ is always greater than or equal to zero. He then shows that $$\tag 1 \sum_{j=1}^n |Ba_j-Cb_j|^{2}=B(AB-|C|^2),$$ where $A=\sum_{j=1}^n|a_j|^2$,
and, having assumed $B>0$, this means $AB-|C|^2\geq 0$, which is the Cauchy-Schwarz inequality.
ADD Let's compare two different proofs of Cauchy Schwarz in $\Bbb R^n$.
PROOF1. We can see the Cauchy Schwarz inequality is true whenever ${\bf x}=0 $ or ${\bf{y}}=0$, so discard those. Let ${\bf x}=(x_1,\dots,x_n)$ and ${\bf y }=(y_1,\dots,y_n)$, so that $${\bf x}\cdot {\bf y}=\sum_{i=1}^n x_iy_i$$
We wish to show that $$|{\bf x}\cdot {\bf y}|\leq ||{\bf x}||\cdot ||{\bf y}||$$
Define $$X_i=\frac{x_i}{||{\bf x}||}$$
$$Y_i=\frac{y_i}{||{\bf y}||}$$
Because for any $x,y$ $$(x-y)^2\geq 0$$ we have that $$x^2+y^2\geq 2xy$$ Using this with $X_i,Y_i$ for $i=1,\dots,n$ we have that $$X_i^2 + Y_i^2 \geqslant 2{X_i}{Y_i}$$
and summing up through $1,\dots,n$ gives $$\eqalign{
& \frac{{\sum\limits_{i = 1}^n {y_i^2} }}{{||{\bf{y}}|{|^2}}} + \frac{{\sum\limits_{i = 1}^n {x_i^2} }}{{||{\bf{x}}|{|^2}}} \geqslant 2\frac{{\sum\limits_{i = 1}^n {{x_i}{y_i}} }}{{||{\bf{x}}|| \cdot ||{\bf{y}}||}} \cr
& \frac{{||{\bf{y}}|{|^2}}}{{||{\bf{y}}|{|^2}}} + \frac{{||{\bf{x}}|{|^2}}}{{||{\bf{x}}|{|^2}}} \geqslant 2\frac{{\sum\limits_{i = 1}^n {{x_i}{y_i}} }}{{||{\bf{x}}|| \cdot ||{\bf{y}}||}} \cr
& 2 \geqslant 2\frac{{\sum\limits_{i = 1}^n {{x_i}{y_i}} }}{{||{\bf{x}}|| \cdot ||{\bf{y}}||}} \cr
& ||{\bf{x}}|| \cdot ||{\bf{y}}|| \geqslant \sum\limits_{i = 1}^n {{x_i}{y_i}} \cr} $$
NOTE How may we add the absolute value signs to conclude?
PROOF2
We can see the Cauchy Schwarz inequality is true whenever ${\bf x}=0 $ or ${\bf{y}}=0$, or $y=\lambda x$ for some scalar. Thus, discard those hypotheses. Then consider the polynomial (here $\cdot$ is inner product) $$\displaylines{
P(\lambda ) = \left\| {{\bf x} - \lambda {\bf{y}}} \right\|^2 \cr
= ( {\bf x} - \lambda {\bf{y}})\cdot({\bf x} - \lambda {\bf{y}}) \cr
= {\left\| {\bf x} \right\|^2} - 2\lambda {\bf x} \cdot {\bf{y}} + {\lambda ^2}{\left\| {\bf{y}} \right\|^2} \cr} $$
Since ${\bf x}\neq \lambda{\bf y}$ for any $\lambda \in \Bbb R$, $P(\lambda)>0$ for each $\lambda\in\Bbb R$. It follows the discriminant is negative, that is $$\Delta = b^2-4ac={\left( {-2\left( {{\bf x} \cdot y} \right)} \right)^2} - 4{\left\| {\bf x} \right\|^2}{\left\| {\bf{y}} \right\|^2} <0$$ so that $$\displaylines{
{\left( {{\bf x}\cdot {\bf{y}}} \right)^2} <{\left\| {\bf x} \right\|^2}{\left\| {\bf{y}} \right\|^2} \cr
\left| {{\bf x} \cdot {\bf{y}}} \right| <\left\| {\bf x}\right\| \cdot \left\| {\bf{y}} \right\| \cr} $$ which is Cauchy-Schwarz, with equality if and only if ${\bf x}=\lambda {\bf y}$ for some $0\neq \lambda \in\Bbb R$ or either vector is null.
One proof shows the Cauchy Schwarz inequality is a direct consequence of the known fact that $x^2\geq 0$ for each real $x$. The other is shorter and sweeter, and uses the fact that a norm is always nonnegative, and properties of the inner product of vectors in $\Bbb R^n$, plus that fact that a polynomial in $\Bbb R$ with no real roots must have negative discriminant.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/351288",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "12",
"answer_count": 1,
"answer_id": 0
} |
Find a simple formula for $\binom{n}{0}\binom{n}{1}+\binom{n}{1}\binom{n}{2}+...+\binom{n}{n-1}\binom{n}{n}$
$$\binom{n}{0}\binom{n}{1}+\binom{n}{1}\binom{n}{2}+...+\binom{n}{n-1}\binom{n}{n}$$
All I could think of so far is to turn this expression into a sum. But that does not necessarily simplify the expression. Please, I need your help.
| Hint: it's the coefficient of $T$ in the binomial expansion of $(1+T)^n(1+T^{-1})^n$, which is equivalent to saying that it's the coefficient of $T^{n+1}$ in the expansion of $(1+T)^n(1+T^{-1})^nT^n=(1+T)^{2n}$.
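Spelled out, the hint gives $\sum_{k}\binom nk\binom n{k+1}=\binom{2n}{n+1}$, which a short check confirms (an illustrative addition):

```python
from math import comb

for n in range(1, 12):
    assert sum(comb(n, k) * comb(n, k + 1) for k in range(n)) == comb(2 * n, n + 1)
print("sum_k C(n,k) C(n,k+1) == C(2n, n+1) for n = 1..11")
```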
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/351363",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 7,
"answer_id": 3
} |
Solve recursive equation $ f_n = \frac{2n-1}{n}f_{n-1}-\frac{n-1}{n}f_{n-2} + 1$ Solve recursive equation:
$$ f_n = \frac{2n-1}{n}f_{n-1}-\frac{n-1}{n}f_{n-2} + 1$$
$f_0 = 0, f_1 = 1$
What I have done so far:
$$ f_n = \frac{2n-1}{n}f_{n-1}-\frac{n-1}{n}f_{n-2} + 1- [n=0]$$
I multiplied it by $n$ and I have obtained:
$$ nf_n = (2n-1)f_{n-1}-(n-1)f_{n-2} + n- n[n=0]$$
$$ \sum nf_n x^n = \sum(2n-1)f_{n-1}x^n-\sum (n-1)f_{n-2}x^n + \sum n x^n $$
$$ \sum nf_n x^n = \sum(2n-1)f_{n-1}x^n-\sum (n-1)f_{n-2}x^n + \frac{1}{(1-z)^2} - \frac{1}{1-z} $$
But I do not know what to do with the parts involving $n$. I suppose that differentiation or integration could be useful here, but I am not sure. Any HINTS?
| Let's take a shot at this:
$$
f_n - f_{n - 1} = \frac{n - 1}{n} (f_{n - 1} - f_{n - 2}) + 1
$$
This immediately suggests the substitution $g_n = f_n - f_{n - 1}$, so $g_1 = f_1 - f_0 = 1$:
$$
g_n - \frac{n - 1}{n} g_{n - 1} = 1
$$
First order linear non-homogeneous recurrence, the summing factor $n$ is simple to see here:
$$
n g_n - (n - 1) g_{n - 1} = n
$$
Summing:
$$
\begin{align*}
\sum_{2 \le k \le n} (k g_k - (k - 1) g_{k - 1}) &= \sum_{2 \le k \le n} k \\
n g_n - 1 \cdot g_1 &= \frac{n (n + 1)}{2} - 1 \\
g_n &= \frac{n + 1}{2} \\
f_n - f_{n - 1} &= \frac{n + 1}{2} \\
\sum_{1 \le k \le n} (f_k - f_{k - 1})
&= \sum_{1 \le k \le n} \frac{k + 1}{2} \\
f_n - f_0 &= \frac{1}{2} \left( \frac{n (n + 1)}{2} + n \right) \\
f_n &= \frac{n (n + 3)}{4}
\end{align*}
$$
Maxima tells me this checks out. Pretty!
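Here is the kind of check Maxima would do, written in Python (an illustrative addition):

```python
f = {0: 0.0, 1: 1.0}
for n in range(2, 30):
    f[n] = (2 * n - 1) / n * f[n - 1] - (n - 1) / n * f[n - 2] + 1
assert all(abs(f[n] - n * (n + 3) / 4) < 1e-9 for n in f)
print("f_n == n(n+3)/4 for n = 0..29")
```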
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/351405",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 3,
"answer_id": 1
} |
Integral solutions of hyperboloid $x^2+y^2-z^2=1$ Are there integral solutions to the equation $x^2+y^2-z^2=1$?
| We can take the equation to $x^2 + y^2 = 1 + z^2$, so if we pick a $z$ then we just need to find all possible ways of expressing $z^2 + 1$ as a sum of two squares (as noted in the comments, we always have at least one way: $z^2 + 1 = z^2 + 1^2$). This is a relatively well known problem and there will be multiple possible solutions for $x$ and $y$; there is another question on this site about efficiently finding the solutions, should you wish to do so.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/351491",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 9,
"answer_id": 4
} |
Bounded partial derivatives imply continuity As stated in my notes:
Remark: Suppose $f: E \to \mathbb{R}$, $E \subseteq \mathbb{R}^n$, and $p \in E$. Also, suppose that $D_if$ exists in some neighborhood of $p$, say, $N(p, h)$ where $h>0$. If all partial derivatives of $f$ are bounded, then $f$ is continuous on $E$.
I found a sketch of the proof here. I'm wondering if I can adapt this proof as follows:
$f(x_1+h_1,...,x_n+h_n)-f(x_1,...,x_n)=[f(x_1+h_1,...,x_n+h_n)-f(x_1,x_2+h_2,...,x_n+h_n)]+[f(x_1,x_2+h_2,...,x_n+h_n)-f(x_1,x_2,x_3+h_3,...,x_n+h_n)]+...+[f(x_1,...,x_{n-1},x_n+h_n)-f(x_1,...,x_n)]$
However, I'm not sure how to apply the contraction principle to finish off the proof. Is there a more efficient way to prove the above remark?
| The proof is a combination of two facts:
*
*A function of one real variable with a bounded derivative is Lipschitz.
*Let $Q\subset \mathbb R^n$ be a cube aligned to coordinate axes. If a function $f:Q\to\mathbb R$ is Lipschitz in each variable separately, then it is Lipschitz.
The proof of 2 involves a telescoping sum such as
$$\begin{split}
f(x,y,z)-f(x',y',z')&= f(x,y,z)-f(x',y,z) \\ & + f(x',y,z)-f(x',y',z)\\&+f(x',y',z)-f(x',y',z')
\end{split}$$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/351544",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 2,
"answer_id": 0
} |
How to prove boundary of a subset is closed in $X$? Suppose $A\subseteq X$. Prove that the boundary $\partial A$ of $A$ is closed in $X$.
My knowledge:
*
*$A^{\circ}$ is the interior
*$A^{\circ}\subseteq A \subseteq \overline{A}\subseteq X$
My proof was as follows:
To show $\partial A = \overline{A} \setminus A^{\circ}$ is closed, we have to show that the complement $( \partial A) ^C = X\setminus{}\partial A =X \setminus (\overline{A} \setminus A^{\circ})$ is open in $X$. This is the set $A^{\circ}\cup (X \setminus\overline{A})$.
Then I claim that $A^{\circ}$ is open by definition ($a\in A^{\circ} \implies \exists \epsilon>0: B_\epsilon(a)\subseteq A$). As this is true for all $a$, by the definition of open sets, $A^{\circ}$ is open.
My next claim is that $X \setminus \overline{A}$ is open. This is true because the complement is $\overline{A}$ is closed in $X$, hence $X \setminus \overline{A}$ is open in $X$.
My concluding claims are: We have a union of two open sets in $X$. By a proposition in my textbook, this set is open in $X$. Therefore its complement is closed, which is what we had to show.
What about this ?
| From your definition, directly,
$$
\partial A=\overline{A}\setminus \mathring{A}=\overline{A}\cap (X\setminus \mathring{A})
$$
is the intersection of two closed sets. Hence it is closed.
No need to prove that the complement is open, it just makes it longer and more complicated.
Also, keep in mind that a set $S$ is open in $X$ if and only if its complement $X\setminus S$ is closed in $X$. This should be pavlovian.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/351619",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9",
"answer_count": 2,
"answer_id": 0
} |
Matrix Manifolds Question I am not sure at all how to do the following question. Any help is appreciated. Thank you.
Consider $SL_n \mathbb{R}$ as a group and as a topological space with
the topology induced from $\mathbb{R}^{n^2}$. Show that if $H \subset SL_n \mathbb{R}$ is an abelian subgroup, then the closure $\overline{H}$ of $H$ in $SL_n \mathbb{R}$ is also an abelian subgroup.
| Hint: The map $\overline{H}\times \overline{H}\to \overline{H}$ defined by $(a,b)\mapsto aba^{-1}b^{-1}$ is continuous. Since $\overline{H}$ is Hausdorff, and the map is constant on a dense subset of its domain, it must be constant everywhere.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/351734",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Why does the series $\sum\limits_{n=2}^\infty\frac{\cos(n\pi/3)}{n}$ converge? Why does this series
$$\sum\limits_{n=2}^\infty\frac{\cos(n\pi/3)}{n}$$
converge? Can't you use a limit comparison with $1/n$?
| Note that $$\cos(n\pi/3) = 1/2, \ -1/2, \ -1, \ -1/2, \ 1/2, \ 1, \ 1/2, \ -1/2, \ -1, \ \cdots $$ (starting from $n=1$), so your series is just 3 alternating (and convergent) series interleaved. Exercise: Prove that if $\sum a_n, \sum b_n$ are both convergent, then the sequence $$a_1, a_1+b_1, a_1+b_1+a_2, a_1+b_1+a_2+b_2, \cdots $$ is convergent. Applying that twice proves your series converges.
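Numerically, the partial sums indeed settle down (an addition; by the standard identity $\sum_{n\ge1}\cos(n\theta)/n=-\ln(2\sin(\theta/2))$, the full sum from $n=1$ vanishes at $\theta=\pi/3$, so the sum from $n=2$ should be $-\cos(\pi/3)=-\tfrac12$):

```python
import numpy as np

n = np.arange(2, 2_000_001)
print(np.sum(np.cos(n * np.pi / 3) / n))   # ~ -0.5
```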
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/351856",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 1
} |
Probability that a stick randomly broken in five places can form a tetrahedron Edit (June. 2015) This question has been moved to MathOverflow, where a recent write-up finds a similar approximation as leonbloy's post below; see here.
Randomly break a stick in five places.
Question: What is the probability that the resulting six pieces can form a tetrahedron?
Clearly satisfying the triangle inequality on each face is a necessary but not sufficient condition; an example is provided below.
Furthermore, another commenter kindly points to a reference that may be of help in resolving this problem. In particular, it relates the question of when six numbers can be edges of a tetrahedron to a certain $5 \times 5$ determinant.
Finally, a third commenter points out that since one such construction is possible, there is an admissible neighborhood around this arrangement, so that the probability is in fact positive.
In any event, this problem is far harder than the classic $2D$ "form a triangle" one.
Several numerical attacks can be found below; I will be grateful if anyone can provide an exact solution.
| Suppose the stick pieces are $s_1$ (longest) through $s_6$ (shortest).
Picture the tetrahedron with the longest side $s_1$ out of view. Then $s_2$ is the spine, and any combination of pairs from $\{s_3,s_4,s_5,s_6\}$ can make the two side triangles. Hence $s_3+s_6$ needs to be longer than $s_2$ ($P=0.25$), and $s_4+s_5$ needs to be longer than $s_2$ ($P=0.25$),
so $P(\text{can form})=0.25\cdot 0.25=0.0625$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/351913",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "120",
"answer_count": 8,
"answer_id": 6
} |
Transitive closure proof (Pierce, ex. 2.2.7) Simple exercise taken from the book Types and Programming Languages by Benjamin C. Pierce.
This is a definition of the transitive closure of a relation R.
First, we define the sequence of sets of pairs:
$$R_0 = R$$
$$R_{i+1} = R_i \cup \{ (s, u) | \exists t, (s, t) \in R_i, (t, u) \in R_i \}$$
Finally, define the relation $R^+$ as the union of all the $R_i$:
$$R^+=\bigcup_i R_i$$
Show that $R^+$ is really the transitive closure of R.
Questions:
*
*I would like to see the proof (I don't have enough mathematical background to make it myself).
*Isn't the final union superfluous? Won't $R_n$ be the union of all previous sequences?
| We need to show that $R^+$ contains $R$, is transitive, and is minimal among all such relations.
$R\subseteq R^+$ is clear from $R=R_0\subseteq \bigcup R_i=R^+$.
Transitivity:
By induction on $j$, show that $R_i\subseteq R_j$ if $i\le j$.
Assume $(a,b), (b,c)\in R^+$. Then $(a,b)\in R_i$ for some $i$ and $(b,c)\in R_j$ for some $j$. This implies $(a,b),(b,c)\in R_{\max(i,j)}$ and hence $(a,c)\in R_{\max(i,j)+1}\subseteq R^+$.
Now for minimality, let $R'$ be transitive and containing $R$.
By induction show that $R_i\subseteq R'$ for all $i$, hence $R^+\subseteq R'$, as was to be shown.
As for your specific question #2:
Yes, $R_n$ contains all previous $R_k$ (a fact the proof above uses as an intermediate result). But $R_n$ is not merely the union of all previous $R_k$, nor does there necessarily exist a single $n$ for which $R_n$ already equals $R^+$.
For example, on $\mathbb N$ take the relation $aRb\iff a=b+1$. Then $aR^+b\iff a>b$, but $aR_nb$ implies that additionally $a\le b+2^n$.
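On a finite relation the iteration does reach a fixed point, so it can be run directly; here is a minimal Python sketch (an addition; as the example above shows, for infinite relations no single $R_n$ suffices):

```python
def transitive_closure(R):
    # Iterate R_{i+1} = R_i  U  {(s, u) : (s, t), (t, u) in R_i} to a fixed point.
    closure = set(R)
    while True:
        new = {(s, u) for (s, t) in closure for (t2, u) in closure if t == t2}
        if new <= closure:
            return closure
        closure |= new

R = {(a, a + 1) for a in range(6)}        # successor relation on {0, ..., 6}
print(sorted(transitive_closure(R)))      # all pairs (a, b) with a < b
```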
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/351990",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
} |
Orthogonality of Legendre Functions The Legendre Polynomials satisfy the following orthogonality condition:
The definite integral of $P(n,x) \cdot P(m,x)$ from $-1$ to $1$ equals $0$, if $m$ is not equal to $n$: $$\int_{-1}^1 P(n,x) \cdot P(m,x) dx = 0. \qquad (m \neq n)$$
Based on this, I am trying to evaluate the integral from $-1$ to $1$ of $x \cdot P(n-1,x) \cdot P(n,x)$ for some given $n$:
$$\int_{-1}^1 x \cdot P(n-1,\;x) \cdot P(n,x) dx.$$
If I integrate this by parts, letting $x$ be one function and $P(n-1,x) \cdot P(n,x)$ be the other function, then I get zero, but according to my textbook, its value is non-zero. What am I doing wrong?
| Integration by parts is $\int f'g+\int fg'=fg\ (+C)$, so for the definite integral, it is
$$\int_a^b f'g+\int_a^b fg'=[fg]_a^b=f(b)g(b)-f(a)g(a)\,.$$
Now we have $f=x$ and $g'=P_{n-1}(x)\cdot P_n(x)$. That is, $g$ is an antiderivative of $P_{n-1}\cdot P_n$. The orthogonality relation only allows us to conclude that $g(1)-g(-1)=[g]^1_{-1}=0$. Not less and not more. In particular, $g$ need not be identically $0$. So, integration by parts now yields:
$$\int_{-1}^1 g+\int_{-1}^1 x\cdot P_{n-1}(x)\cdot P_n(x)=1\cdot g(1)-(-1)\cdot g(-1)=2\cdot g(1)\,.$$
This can be evaluated, knowing the explicit form of the $P_n$'s, but then probably it's not simpler than simply writing these in the original integral...
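For reference, the standard value is $\int_{-1}^1 x\,P_{n-1}(x)P_n(x)\,dx=\frac{2n}{4n^2-1}$, which SymPy confirms for small $n$ (an illustrative addition):

```python
import sympy as sp

x = sp.symbols('x')
for n in range(1, 7):
    val = sp.integrate(x * sp.legendre(n - 1, x) * sp.legendre(n, x), (x, -1, 1))
    print(n, val, sp.Rational(2 * n, 4 * n**2 - 1))   # the two columns agree
```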
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/352045",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Absolute convergence of the series $\sum\limits_{n=1}^{\infty} (-1)^n \ln\left(\cos \left( \frac{1}{n} \right)\right)$ This sum
$$\sum_{n=1}^{\infty} (-1)^n \ln\left(\cos \left( \frac{1}{n} \right)\right)$$
apparently converges absolutely, but I'm having trouble understanding how so.
First of all, doesn't it already fail the alternating series test? The $B_{n+1}$ term is greater than the $B_n$ term, correct?
| Since $$1-\cos{x}\underset{x\to{0}}{\sim}{\dfrac{x^2}{2}}\;\; \Rightarrow \;\; \cos{\dfrac{1}{n}}={1-\dfrac{1}{2n^2}} +o\left(\dfrac{1}{n^2} \right),\;\; n\to\infty$$
and
$$\ln(1+x)\underset{x\to{0}}{\sim}{x},$$
thus $$\ln\left(\cos { \dfrac{1}{n} }\right)\underset{n\to{\infty}}{\sim}{-\dfrac{1}{2n^2}},$$ and the series converges absolutely by comparison with $\sum \frac{1}{2n^2}$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/352101",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 3
} |
How will studying "stochastic process" help me as mathematician?? I wish to decide if I should take a course called "INTRODUCTION TO STOCHASTIC PROCESSES" which will be held next semester in my University.
I can make an uneducated guess that stochastic processes are important in mathematics. But I am also curious to know how, i.e., in what fields/methods will a basic understanding of "stochastic processes" help me do better mathematics?
| A similar question has been asked before.
Stochastic processes are very useful in actuarial science and mathematical finance.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/352153",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Counting strictly increasing and non-decreasing functions $f$ is non-decreasing if $x \lt y$ implies $f(x) \leq f(y)$ and increasing if $x < y$ implies $f(x) < f(y)$.
*
*How many $f: [a]\to [b]$ are nondecreasing?
*How many $f: [a] \to [b]$ are strictly increasing?
Where $[a]=\{1,2\ldots a\}$ and $[b]=\{1,2\ldots b\}$
| Strictly increasing is easy: we need to choose the $a$ items in $[b]$ that will be the range of our function, so there are $\binom{b}{a}$ of them.
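Both counts can be brute-forced for small $a,b$ (an illustrative addition; the nondecreasing count matches $\binom{a+b-1}{a}$, the number of size-$a$ multisets from $[b]$):

```python
from itertools import product
from math import comb

for a in range(1, 6):
    for b in range(1, 7):
        nd = st = 0
        for f in product(range(1, b + 1), repeat=a):   # all functions [a] -> [b]
            nd += all(f[i] <= f[i + 1] for i in range(a - 1))
            st += all(f[i] < f[i + 1] for i in range(a - 1))
        assert st == comb(b, a) and nd == comb(a + b - 1, a)
print("strictly increasing: C(b,a); nondecreasing: C(a+b-1,a)")
```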
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/352233",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 2,
"answer_id": 0
} |
Parallel transport for a conformally equivalent metric Suppose $M$ is a smooth manifold equipped with a Riemannian metric $g$. Given a curve $c$, let $P_c$ denote parallel transport along $c$. Now suppose you consider a new metric $g'=fg$ where $f$ is a smooth positive function. Let $P_c'$ denote parallel transport along $c$ with respect to $g'$. How are $P_c$ and $P_c'$ related?
A similar question is: let $K:TTM \rightarrow TM$ denote the connection map associated to $g$ and $K'$ the one associated to $g'$. How are $K$ and $K'$ related?
In case it's helpful, recall the definition of $K$: given $V\in T_{(x,v)}TM$, let $z(t)=(c(t),v(t))$
be a curve in $TM$ such that $z(0)=(x,v)$ and $\dot{z}(0)=V$. Then set
$$K(V):=\nabla_{t}v(0).$$
| Both the parallel transport and the connection map are determined by the connection, in your case this is the Levi-Civita connection of metric $g$ whose transformation is known (see e.g. this answer).
For the connection map you already have a formula in the definition, just use the facts and get the expression.
With regards to the parallel transport I guess the best way would be to start with the equations
$$
\dot{V}^{k}(t)= - V^{j}(t)\dot{c}^{i}(t)\Gamma^{k}_{ij}(c(t))
$$
that describe the parallel transport (see the details e.g. in J.Lee's "Riemannian manifolds. An Introduction to Curvature").
The Christoffel symbols of the conformally rescaled metric are given in this Wikipedia article. Using them we get the equations of the conformally related parallel transport.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/352306",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Convergence of the infinite series $ \sum_{n = 1}^\infty \frac{1} {n^2 - x^2}$ How can I prove that for every $ x \notin \mathbb Z$ the series
$$ \sum_{n = 1}^\infty \frac{1} {n^2 - x^2}$$
converges uniformly in a neighborhood of $ x $?
| Apart from the first few summands, we have $n^2-y^2>\frac12n^2$ for all $y\approx x$, hence the tail is (uniformly near $x$) bounded by $2\sum_{n>N}\frac1{n^2}$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/352339",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 0
} |
Law of Quadratic Reciprocity Equivalent Statement
Let $p,q$ be two distinct odd primes. Then $(\frac{q}p)=1 \iff p\equiv\pm\beta^2 \pmod{4q}$ for some odd $\beta$. Show that this statement is equivalent to the Law of Quadratic Reciprocity.
I'm trying to grapple with what the question is actually asking me to show.
Do I split into various cases of what $p$ and $q$ could possibly be (ie. $1 \pmod 4$ and $3 \pmod 4$) and then show that in each case, the statement holds?
| We do one of the four cases. Because $p$ and $q$ both of the shape $4k+1$ is "too easy" and does not fully illustrate the problems we can bump into, we deal with the case $p$ of the form $4k+3$ and $q$ of the form $4k+1$.
Suppose that $(q/p)=1$, with $p$ of the form $4k+3$ and $q$ of the form $4k+1$. We want to show that $p\equiv \pm \beta^2\pmod{4q}$ for some odd $\beta$.
Note that by Quadratic Reciprocity we have $(p/q)=1$. So $p$ is a quadratic residue modulo $q$. This means that $p\equiv \alpha^2\pmod{q}$ for some $\alpha$. But $-1$ is a quadratic residue of $q$, since $q$ is of the form $4k+1$. So $-1\equiv \gamma^2\pmod{q}$ for some $\gamma$, and therefore
$$p\equiv -(\alpha\gamma)^2\pmod{q}.$$
Without loss of generality we may assume that $\alpha\gamma$ is odd. If it isn't, replace it by $q-\alpha\gamma$.
Since the square of an odd number is congruent to $1$ modulo $4$, we have $$p\equiv -(\alpha\gamma)^2\pmod{4}.$$
It follows that $p\equiv -(\alpha\gamma)^2\pmod{4q}$.
The reverse direction is straightforward. Reverse directions are not really needed if we deal with the "forward" direction in all four cases.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/352408",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 0
} |
Notation for "absolute value" in multiplicative group. In an additive number group (e.g. $(\mathbb{Z},+)$) there is a well known notation for absolute value, namely $|a|$, which coincides with $\max(a,-a)$, for $a \in \mathbb{Z}$.
When the context is a multiplicative number group instead, is there a similar notation, which would coincide with $\max(a,\frac{1}{a})$?
| If you're working with a multiplicative group $G\subseteq\Bbb{R}$, you can definitely say
$$
\operatorname{abs}(g) := \max\{g,g^{-1}\}\quad\textrm{for }g\in G.
$$
The question is whether or not it is useful to the study of the group $G$ in any way.
Also, when it comes to the question of notation, $\left|g\right|$ is normally used to mean the order of $g\in G$, which is the smallest $n\in\Bbb{N}$ such that $g^n = e$ (where $e\in G$ is the identity) or equivalently, the order of the subgroup of $G$ generated by $g$. As far as I know, there is no standard notation for $\max\{g,g^{-1}\}$ when $g\in G\subseteq\Bbb{R}$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/352526",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Prove that $2222^{5555}+5555^{2222}=3333^{5555}+4444^{2222} \pmod 7$ I am utterly new to modular arithmetic and I am having trouble with this proof.
$$2222^{5555}+5555^{2222}=3333^{5555}+4444^{2222} \pmod 7$$
It's because $2+5=3+4=7$, but it's not so clear for me with the presence of powers.
Maybe some explanation would help.
EDITED: Fixed a serious typo.
EDIT: Since some arguments against it have appeared, here is:
WolframAlpha
EDIT: The above is incorrect. I appreciate the proofs that it is wrong. Sorry, everyone.
| First recall that, as $7$ is prime, $x^6 \equiv 1 \pmod{7}$ whenever $7 \nmid x$. Now, we have
$$ 2222 = \begin{cases} 2 \pmod{6} \\ 3 \pmod{7} \end{cases}, \quad 3333 = 1 \pmod{7}$$
$$4444 = -1 \pmod{7}, \quad 5555 = \begin{cases} 5 \pmod{6} \\ 4 \pmod{7} \end{cases}$$
Then we can reduce each side of the equation to
$$ 3^5 + 4^2 = 1^5 + (-1)^2 \pmod{7}$$
Then the LHS is $0$ but the RHS is $2$, so the statement is false.
EDIT: For reference, I'm testing the conjecture $2222^{5555} + 5555^{2222} = 3333^{5555} + 4444^{2222}$.
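Indeed, a direct modular-exponentiation check gives the same verdict (an illustrative addition):

```python
lhs = (pow(2222, 5555, 7) + pow(5555, 2222, 7)) % 7
rhs = (pow(3333, 5555, 7) + pow(4444, 2222, 7)) % 7
print(lhs, rhs)   # 0 2, so the two sides differ mod 7
```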
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/352607",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 2
} |
Darts on a ruler probability If two points are selected at random on an interval from 0 to 1.5 inches, what is
the probability that the distance between them is less than or equal to 1/4"?
| Draw the square with corners $(0,0)$, $(1.5,0)$, $(1.5,1.5)$, and $(0,1.5)$.
Imagine the points are chosen one at a time. Let random variable $X$ be the first chosen point, and $Y$ the second chosen point. We are invited to assume that $X$ and $Y$ are uniformly distributed in the interval $[0,1.5]$ and independent. (Uniform distribution is highly implausible with real darts.)
Then $(X,Y)$ is uniformly distributed in the square just drawn.
Consider the two lines $y=x+\frac{1}{4}$ and $y=x-\frac{1}{4}$.
The two points are within $\frac{1}{4}$ inch from each other if the random variable $(X,Y)$ falls in the part of our square between the two lines.
Call that part of the square $A$. Then our probability is the area of $A$ divided by the area of the whole square.
Remark: It is easier to find first the area of the part of the square which is not in $A$. This consists of two isosceles right triangles with legs $\frac{5}{4}$, so their combined area is $\frac{25}{16}$. The area of the whole square is $\frac{9}{4}$, so the area of $A$ is $\frac{11}{16}$.
Thus our probability is $\dfrac{\frac{11}{16}}{\frac{9}{4}}=\dfrac{11}{36}$.
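A Monte Carlo check of the $11/36$ answer (an illustrative addition, assuming NumPy):

```python
import numpy as np

x, y = np.random.default_rng(2).uniform(0, 1.5, (2, 10**6))
print(np.mean(np.abs(x - y) <= 0.25), 11 / 36)   # both ~ 0.3056
```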
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/352698",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
} |
Exact number of events to get the expected outcomes Suppose that in a competition 11 matches are to be played, each having one of 3
distinct outcomes as possibilities. In how many ways can one predict the
outcomes of all 11 matches such that exactly 6 of the predictions turn out to
be correct?
| The $6$ matches on which our prediction is correct can be chosen in $\binom{11}{6}$ ways. For each of these choices, we can make wrong predictions on the remaining $5$ matches in $2^5$ ways. Thus the total number is
$$\binom{11}{6}2^5.$$
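A brute-force count over all $3^{11}$ prediction sequences agrees (an illustrative addition; by symmetry the true outcome can be fixed arbitrarily):

```python
from itertools import product
from math import comb

truth = (0,) * 11
count = sum(sum(p == t for p, t in zip(pred, truth)) == 6
            for pred in product(range(3), repeat=11))
print(count, comb(11, 6) * 2**5)   # both 14784
```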
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/352771",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
} |
Show that $\lim \limits_{n\rightarrow\infty}\frac{n!}{(2n)!}=0$ I have to show that $\lim \limits_{n\rightarrow\infty}\frac{n!}{(2n)!}=0$
I am not sure if it's correct, but I did it like this:
$(2n)!=(2n)\cdot(2n-1)\cdot(2n-2)\cdot ...\cdot(2n-(n-1))\cdot (n!)$, so I have $$\displaystyle \frac{1}{(2n)\cdot(2n-1)\cdot(2n-2)\cdot ...\cdot(2n-(n-1))}$$ and $$\lim \limits_{n\rightarrow \infty}\frac{1}{(2n)\cdot(2n-1)\cdot(2n-2)\cdot ...\cdot(2n-(n-1))}=0.$$ Is this correct? If not, why?
| Hint:
$$ 0 \leq \lim_{n\to \infty}\frac{n!}{(2n)!} \leq \lim_{n\to \infty} \frac{n!}{(n!)^2} = \lim_{k \to \infty, k = n!}\frac{k}{k^2} = \lim_{k \to \infty}\frac{1}{k} = 0.$$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/352849",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 5,
"answer_id": 3
} |
Is the vector $(3,-1,0,-1)$ in the subspace of $\Bbb R^4$ spanned by the vectors $(2,-1,3,2)$, $(-1,1,1,-3)$, $(1,1,9,-5)$? Is the vector $(3,-1,0,-1)$ in the subspace of $\Bbb R^4$ spanned by the vectors $(2,-1,3,2)$, $(-1,1,1,-3)$, $(1,1,9,-5)$?
| To find whether $(3,-1,0,-1)$ is in the span of the other vectors, solve the system:
$$(3,-1,0,-1)=\lambda_1(2,-1,3,2)+\lambda _2(-1,1,1,-3)+\lambda _3(1,1,9,-5)$$
If you get a solution, then the vector is in the span. If you don't get a solution, then it isn't.
It's worth noting that the span of $(2,-1,3,2), (-1,1,1,-3), (1,1,9,-5)$ is exactly the set $\left\{\lambda_1(2,-1,3,2)+\lambda _2(-1,1,1,-3)+\lambda _3(1,1,9,-5):\lambda _1, \lambda _2, \lambda _3\in \Bbb R\right\}$
Alternatively you can consider the matrix $\begin{bmatrix}2& -1 &3 & 2\\ -1 &1 &1 &-3\\ 1 & 1 & 9 & -5 \\ 3 &-1 &0 & -1\end{bmatrix}$. Compute its determinant. If it's not $0$, then the four vectors are linearly independent. If it is $0$ they are linearly dependent. What does that tell you?
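For a concrete check (my addition), compare the rank of the matrix of spanning vectors with the rank of the augmented matrix; they agree exactly when the vector lies in the span:

```python
import numpy as np

v = np.array([[2, -1, 3, 2], [-1, 1, 1, -3], [1, 1, 9, -5]], dtype=float)
t = np.array([3, -1, 0, -1], dtype=float)

r1 = np.linalg.matrix_rank(v)                  # rank of the spanning set
r2 = np.linalg.matrix_rank(np.vstack([v, t]))  # rank after adjoining the target
print(r1, r2)  # 2 3 -- the rank jumps, so (3,-1,0,-1) is NOT in the span
```

Note that the rank of the spanning set is only $2$ here (the third vector is twice the first plus three times the second), so the $4\times 4$ determinant is $0$ regardless; the rank comparison settles the question either way.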
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/352915",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
} |
Rate of change of a cube's area with respect to the space diagonal The space diagonal of a cube shrinks at $0.02\ \rm m/s$. How fast is the area shrinking when the space diagonal is $0.8\ \rm m$ long?
I try:
Space Diagonal = $s_d = \sqrt{a^2+b^2+c^2}=\sqrt{3a^2}$ Where $a$ is the length of one side.
Area = $a^2$
Rate of change for $s_d$ with respect to $a$
$${\mathrm d\over \mathrm da}s_d={\mathrm d\over \mathrm da}\sqrt{3a^2}={\sqrt{3}a \over \sqrt{a^2}}$$
Rate of change for $\rm area$ with respect to $a$
$${\mathrm d\over \mathrm da}\mathrm{area}={\mathrm d\over \mathrm da}a^2={2a}$$
I'm stuck when it comes to calculating one thing from another! However, I have no problem when it comes to position, velocity and acceleration! Can anybody solve this?
| The simplest way is to express the area $a$ of the cube as a function of the length $d$ of the space diagonal. Given $d$ the side length $s$ of the cube is
$$s={1\over\sqrt{3}} \ d\ ,$$
and the total surface area $a$ then becomes
$$a=6s^2=2 d^2\ .$$
Now all quantities appearing here are in fact functions of $t$; therefore at all times $t$ we have
$$a(t)=2d^2(t)\ .$$
It follows that
$$a'(t)=4d (t) \ d'(t)\ ,$$
and the data $d(t_0)=0.8$ m, $d'(t_0)=-0.02$ m/sec imply that $a'(t_0)=-0.064$ m$^2$/sec.
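A symbolic check of this computation (my addition, using SymPy):

```python
import sympy as sp

t = sp.symbols('t')
d = sp.Function('d')(t)          # space diagonal as a function of time
a = 2 * d**2                     # total surface area in terms of the diagonal

da_dt = sp.diff(a, t)            # 4*d*d'
val = da_dt.subs(sp.Derivative(d, t), sp.Rational(-1, 50)).subs(d, sp.Rational(4, 5))
print(val)  # -8/125, i.e. -0.064 m^2/sec
```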
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/352987",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Fourier series of a function Consider $$ f(t)= \begin{cases} 1 \mbox{ ; } 0<t<1\\ 2-t \mbox{ ; } 1<t<2 \end{cases}$$
Let $f_1(t)$ be the Fourier sine series and $f_2(t)$ the Fourier cosine series of $f$; then $f_1(t)=f_2(t)$ for $0<t<2$. Write the form of the series (without computing the coefficients) and graph $f_1$ and $f_2$ on $[-4,4]$ (including the endpoints $\pm 4$) using *'s to identify the value of the series at points of discontinuity.
I think we have:
$f_1(t)=\sum \limits_{n=1}^{\infty} b_n \sin \frac{n \pi t}{2}$
$f_2(t)=\frac{a_0}{2}+\sum \limits_{n=1}^{\infty} a_n \cos \frac{n \pi t}{2}$
I think we have $f_2=1$ and for $0<t<2, f_1=f_2=1$
Can we do anything else? Can someone help me with the end?
Thank you
| First, let's plot the function.
We know that at jump discontinuities the series converges to the arithmetic mean of the one-sided limits, so the first approximation is just taking $\frac{1}{2}$.
[Plots omitted: the original answer showed the function, this first approximation, the cosine terms, and the sine terms.]
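Since the plots themselves are not reproduced above, here is a sketch (my addition) that computes partial sums of both series numerically, using the standard half-range coefficient formulas on $[0,2]$:

```python
import numpy as np
from scipy.integrate import quad
import matplotlib.pyplot as plt

f = lambda t: 1.0 if t < 1 else 2.0 - t          # the given f on (0, 2)
L, N = 2.0, 30
t = np.linspace(-4, 4, 2000)

a0 = (2 / L) * quad(f, 0, L, points=[1])[0]
cos_sum = np.full_like(t, a0 / 2)                # partial sum of f_2
sin_sum = np.zeros_like(t)                       # partial sum of f_1
for n in range(1, N + 1):
    an = (2 / L) * quad(lambda x: f(x) * np.cos(n * np.pi * x / L), 0, L, points=[1])[0]
    bn = (2 / L) * quad(lambda x: f(x) * np.sin(n * np.pi * x / L), 0, L, points=[1])[0]
    cos_sum += an * np.cos(n * np.pi * t / L)
    sin_sum += bn * np.sin(n * np.pi * t / L)

plt.plot(t, cos_sum, label='cosine series $f_2$ (even, period 4)')
plt.plot(t, sin_sum, label='sine series $f_1$ (odd, period 4)')
plt.legend()
plt.show()
```

The odd (sine) extension jumps at $t=0,\pm 4$, where the series converges to $0$, the midpoint of the jump; the even (cosine) extension is continuous there.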
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/353044",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Generating functions. Number of solutions of equation.
Let's consider two equations
$x_1+x_2+\cdots+x_{19}=9$, where
$x_i \le 1$
and
$x_1+x_2+\cdots+x_{10}=10, $ where $ x_i \le 5$
The point is to determine which equation has the greater number of solutions.
What I have found is:
the number of solutions of the first equation: $\binom{19}{9}=92378$
generating function for the second equation:
$$\left(\frac{1-x^6}{1-x}\right)^{10}=(1-x^6)^{10} \cdot (1-x)^{-10}$$
Here I do not know how to find the coefficient of $x^{10}$.
Wolfram said that it is $85228$, so theoretically I have the solution, but I would like to know a more generic way to solve such problems. Any ideas?
| Generating functions is a generic way.
To continue your attempt, you can apply the Binomial theorem, which applies to negative exponents too!
We have that
$$ (1-x)^{-r} = \sum_{n=0}^{\infty} \binom{-r}{n} (-x)^n$$
where
$$\binom{-r}{n} = \dfrac{-r \times (-r -1) \times \dots \times (-r - n +1)}{n!} = (-1)^n\dfrac{r(r+1)\dots(n+r-1)}{n!} $$
Thus
$$ \binom{-r}{n} =(-1)^n\binom{n + r- 1}{n}$$
And so
$$ (1-x)^{-r} = \sum_{n=0}^{\infty} \binom{n +r -1}{n} x^n$$
Now your problem becomes finding the coefficient in the product of two polynomials, as you can truncate the infinite one for exponents $\gt 10$.
I will leave that to you.
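Carrying it out (my addition): only the $j=0,1$ terms of $(1-x^6)^{10}$ reach $x^{10}$, so the coefficient is $\binom{19}{9}-\binom{10}{1}\binom{13}{9}=92378-7150=85228$, matching Wolfram. SymPy confirms:

```python
import sympy as sp

x = sp.symbols('x')
expr = (1 - x**6)**10 * (1 - x)**(-10)
# Expand as a series and read off the coefficient of x^10.
coeff = sp.series(expr, x, 0, 11).removeO().coeff(x, 10)
print(coeff)  # 85228
```

So the first equation, with $\binom{19}{9}=92378$ solutions, has more.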
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/353164",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 1
} |
True or False: Every finite dimensional vector space can be made into an inner product space with the same dimension. Every finite dimensional vector space can be made into an inner product space with the
same dimension.
| I think it depends on what field you are using for your vector space. If it is $\mathbb{R}$ or $\mathbb{C}$, the answer is definitely "yes" (see the comments, which are correct). I am pretty sure it is "yes" if your field is a subfield of $\mathbb{C}$ that is closed (the word "stable" also seems to be standard) under complex conjugation, such as the algebraic numbers. Otherwise, e.g. if your field is $\mathbb{F}_2$, I don't know. I consulted Wikipedia's article on inner product spaces and they only dealt with the case where the field was the reals or the complex numbers.
EDIT: Marc's answer is better than mine. See his comment regarding subfields of $\mathbb{C}$. Some such subfields are not stable under complex conjugation and cannot be used as a field for an inner product space. I am pretty sure that if $\mathbb{K}$ is any ordered field (which may or may not be a subfield of $\mathbb{R}$) you can use it as the field for an inner product space.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/353252",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 2,
"answer_id": 1
} |
Solution of a Sylvester equation? I'd like to solve $AX -BX + XC = D$ for the matrix $X$, where all matrices have real entries, $X$ is rectangular, $B$ and $C$ are symmetric, $A$ is an outer product (i.e., $A=vv^T$ for some real vector $v$), and $D$ is not symmetric. The matrices $A,B,C,D$ are fixed while $X$ is the unknown.
How can this equation be solved? Secondly, is there any case where the solution has a closed form?
| More generally, Sylvester's equation of the form
$$AX+XB=C$$ can be put into the form
$$M\cdot \textrm{vec}X=L$$ for larger matrices $M$ and $L$.
Here $\textrm{vec}X$ is a stack of all columns of matrix $X$.
How to find the matrix $M$ and $L$, is shown in chapter 4 of this book: http://www.amazon.com/Topics-Matrix-Analysis-Roger-Horn/dp/0521467136
Indeed, $M=(I\otimes A)+(B^T\otimes I)$, and $L=\textrm{vec}C$, where $\otimes$ denotes the Kronecker product.
In the special case where $M$ is invertible, we have $\textrm{vec}X=M^{-1}L$.
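Numerically (my addition): the question's equation $AX-BX+XC=D$ is $(A-B)X+XC=D$, a standard Sylvester equation in $A-B$ and $C$, so either the Kronecker construction above or SciPy's solver applies:

```python
import numpy as np
from scipy.linalg import solve_sylvester

rng = np.random.default_rng(0)
m, n = 4, 3
v = rng.standard_normal((m, 1))
A = v @ v.T                                   # rank-one outer product vv^T
B = rng.standard_normal((m, m)); B = B + B.T  # symmetric
C = rng.standard_normal((n, n)); C = C + C.T  # symmetric
D = rng.standard_normal((m, n))

X = solve_sylvester(A - B, C, D)              # solves (A-B) X + X C = D
print(np.allclose(A @ X - B @ X + X @ C, D))  # True
```

A unique solution exists exactly when $A-B$ and $-C$ share no eigenvalue; for generic data, as here, that holds.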
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/353329",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 2,
"answer_id": 0
} |
Evaluate $\int_0^\pi\frac{1}{1+(\tan x)^\sqrt2}\ dx$ How can we evaluate $$\int_0^\pi\frac{1}{1+(\tan x)^\sqrt2}\ dx$$ Can you keep this at Calculus 1 level please? Please include a full solution if possible. I tried this every way I knew and I couldn't get it.
| This is a Putnam problem from years ago. There is no Calc I solution of which I'm aware. You need to put a parameter (new variable) in place of $\sqrt 2$ and then differentiate the resulting function of the parameter (this is usually called "differentiating under the integral sign"). Most students don't even learn this in Calc III!
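For what it's worth, a numerical look (my addition) at the piece on $(0,\pi/2)$, where $(\tan x)^{\sqrt2}$ is real, agrees with the value $\pi/4$ that the symmetry $x\mapsto \pi/2-x$ gives for this whole family of integrals:

```python
from math import pi, sqrt, tan
from scipy.integrate import quad

# The integrand is real-valued only where tan x >= 0, i.e. on (0, pi/2).
val, err = quad(lambda x: 1.0 / (1.0 + tan(x) ** sqrt(2)), 0, pi / 2)
print(val, pi / 4)  # both ~0.7853981...
```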
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/353414",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9",
"answer_count": 2,
"answer_id": 1
} |
Homeomorphism vs diffeomorphism in the definition of k-chain In "Analysis and Algebra on Differentiable Manifolds", 1st Ed., by Gadea and Masqué, in Problem 3.2.4, the student is asked to prove that circles can not be boundaries of any 2-chain in $\mathbb{R}^2-\{0\}$. I understand the solution which makes use of the differential of the angle $\theta$.
The pdf at http://www.math.upenn.edu/~ryblair/Math%20600/papers/Lec17.pdf mentions that a singular k-cube $c$ is a homeomorphism of the unit k-cube.
Since discs are homeomorphic to unit squares, if $c$ is only required to be a homeomorphism, the circle can be a boundary of a 2-chain. But a disc is not diffeomorphic to the unit square.
Is it correct to say that for the exercise to make sense, the definition of a singular k-cube to consider has to be the one using a diffeomorphism?
| As discussed in the comments, the unit circle defines a singular 1-simplex, $\sigma$ (i.e. a continuous map from the closed interval) which is not the boundary of any singular 2-chain (i.e. any formal sum of continuous maps from the standard 2-simplex) in the punctured plane. One way to see this is by noting that the punctured plane deformation retracts onto the unit circle and that $\sigma$ represents a generator for $H_1(S^1)$ (which can be seen by using a Mayer-Vietoris sequence, for instance). It should be mentioned that you can see the importance of the punctured origin in the definition of the 1-form $d\theta$ you mention, which does not extend smoothly to the entire plane.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/353486",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Write in array form, product of disjoint cycles, product of 2-cycles... In symmetric group $S_7$, let $A= (2 3 5)(2 7 5 4)$ and $B= (3 7)(3 6)(1 2 5)(1 5)$.
Write $A^{-1}$, $AB$, and $BA$ in the following ways:
(i) Array Form
(ii) Product of Disjoint Cycles
(iii) Product of $2$-Cycles
Also are any of $A^{-1}, AB$, or $BA$ in $A_7$?
| I won’t do the problem, but I will answer the same questions for the permutation $A$; see if you can use that as a model. I assume throughout that cycles are applied from left to right.
$A=(235)(2754)=(235)(2754)(1)(6)$; that means that $A$ sends $1$ to $1$, $2$ to $3$, $3$ to $5$ to $4$, $4$ to $2$, $5$ to $2$ to $7$, $6$ to $6$, and $7$ to $5$. Thus, its two-line or array representation must be $$\binom{1234567}{1342765}\;.$$
Now that we have this, it’s easy to find the disjoint cycles. Start with $1$; it goes to $1$, closing off the cycle $(1)$. The next available input is $2$; it goes to $3$, which goes to $4$, which goes to $2$, giving us the cycle $(234)$. The next available input is $5$; it goes to $7$, which goes right back to $5$, and we have the cycle $(57)$. Finally, $(6)$ is another cycle of length $1$. Thus, $A=(1)(234)(57)(6)$ or, if you’re supposed to ignore cycles of length $1$, simply $(234)(57)$.
Finally, you should know that a cycle $(a_1a_2\dots a_n)$ can be written as a product of $2$-cycles in the following way:
$$(a_1a_2\dots a_n)=(a_1a_2)(a_1a_3)\dots(a_1a_n)\;.$$
Thus, $A=(23)(24)(57)$.
One further hint: one easy way to find $A^{-1}$ is to turn the two-line representation of $A$ upside-down (and then shuffle the columns so that the top line is in numerical order).
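You can verify all of this with SymPy, whose permutations also compose left to right (my addition; point $0$ is kept as an unused fixed point so the labels $1$ through $7$ can be used directly):

```python
from sympy.combinatorics import Permutation

# SymPy applies products left to right: (p*q)(i) = q(p(i)).
A = Permutation([[2, 3, 5]], size=8) * Permutation([[2, 7, 5, 4]], size=8)
print(A.array_form)         # [0, 1, 3, 4, 2, 7, 6, 5] -- matches the array above
print(A.cyclic_form)        # [[2, 3, 4], [5, 7]]
print((A**-1).cyclic_form)  # [[2, 4, 3], [5, 7]], i.e. (243)(57)
```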
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/353577",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
How to show measurability of a function implies existence of bounding simple functions If $(X,\mathscr{M},\mu)$ is a measure space with $\mu(X) < \infty$, $(X,\overline{\mathscr{M}},\overline{\mu})$ is its completion, and $f\colon X \to \mathbb{R}$ is bounded, then $f$ is $\overline{\mathscr{M}}$-measurable (and hence in $L^1(\overline{\mu})$) iff there exist sequences $\{\phi_n\}$ and $\{\psi_n\}$ of $\mathscr{M}$-measurable simple functions such that $\phi_n \le f \le \psi_n$ and $\int (\psi_n - \phi_n)d \mu < n^{-1}$. In this case, $\lim \int \phi_n d \mu = \lim \int \psi_n d \mu = \int f d \bar{\mu}$.
I am able to prove everything except the part that $f$ is $\overline{\mathscr{M}}$-measurable $\implies$ there exist sequences $\{\phi_n\}$ and $\{\psi_n\}$ of $\mathscr{M}$-measurable simple functions such that $\phi_n \le f \le \psi_n$ and $\int (\psi_n - \phi_n)d \mu < n^{-1}$.
I know that $f$ is $\overline{\mathscr{M}}$-measurable $\implies$ there exists an $\mathscr{M}$-measurable function $g$ s.t. $f=g$ $\overline{\mu}$-almost everywhere but I'm not sure where to proceed after that
Thank you!
| A set $N$ is $(\mathcal M,\mu)$-negligible if we can find $N'\in\mathcal M$ such that $\mu(N')=0$ and $N\subset N'$.
Recall that
$$\overline{\mathcal M}^{\mu}=\{B\cup N,B\in\mathcal M,N\mbox{ is }(\mathcal M,\mu)-\mbox{negligible}\}.$$
It can indeed be shown that the latter collection is a $\sigma$-algebra, the smallest containing both $\mathcal M$-measurable sets and $(\mathcal M,\mu)$-negligible ones.
First, as $f$ is $\overline{\mathcal M}^{\mu}$-measurable, it can be approximated pointwise by simple functions, that is, linear combinations of characteristic functions of elements of $\overline{\mathcal M}^{\mu}$. So if we deal with the case $f=\chi_S$, where $S\in\overline{\mathcal M}^{\mu}$, we write it as $S=B\cup N$, and we notice that
$$\chi_B\leqslant \chi_S\leqslant \chi_{B\cup N'},$$
where $N'$ is as in the definition of negligible.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/353636",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
How use Maple12 to solve a differential equation by using Euler's method? Consider the differential equation $y^{\prime}=y-2$ with initial condition $y\left(0\right)=1$.
a) Use Euler's method with 4 steps of size 0.2 to estimate $y\left(0.8\right)$
I know how to do this by hand; however, I have Maple 12 installed and was trying to figure out how to do this with Maple, and then make a graph showing each step of the function. Any suggestions? I have tried looking on MaplePrimes, but it keeps pointing me to functions for newer versions of Maple, which I don't have.
I posted this question to use as a model, because I have solved this problem by hand, and it will help me adapt the method to other differential equations.
ps. I hope this is the proper place to ask this question, if not please tell me where would be a better place.
| Maybe this will help; just change the initial condition and the step size:
http://homepages.math.uic.edu/~hanson/MAPLE/euler.html
I am not sure it is for Maple 12, but I think the commands would be the same; just try it, and if there are errors, post them here.
You can also ask at
https://stackoverflow.com/questions
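In case it helps, here is the same computation in plain Python rather than Maple (my addition), so you can check whatever Maple produces against it:

```python
# Euler's method for y' = y - 2, y(0) = 1, four steps of size 0.2.
f = lambda t, y: y - 2
t, y, h = 0.0, 1.0, 0.2
for _ in range(4):
    y += h * f(t, y)
    t += h
    print(f"t = {t:.1f}, y ~ {y:.4f}")
# Estimate: y(0.8) ~ -0.0736; the exact solution y = 2 - e^t gives ~ -0.2255.
```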
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/353704",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Notation for $X - \mathbb{E}(X)$? Let $X$ be a random variable with expectation value $\mathbb{E}(X)=\mu$.
Is there a (reasonably standard) notation to denote the "centered" random variable $X - \mu$?
And, while I'm at it, if $X_i$ is a random variable, $\forall\,i \in \mathbf{n} \equiv \{0,\dots,n-1\}$, and if $\overline{X} = \frac{1}{n}\sum_{i\in\mathbf{n}} X_i$, is there a notation for the random variable $X_i - \overline{X}$? (This second question is "secondary". Feel free to disregard it.)
| I've never seen any specific notation for these. They are such simple expressions that there wouldn't be much to gain by abbreviating them further. If you feel you must, you could invent your own, or just say "let $Y = X - \mu$".
One way people often avoid writing out $X - \mu$ is by a statement like "without loss of generality, we can assume $E[X] = 0$" (provided of course that we actually can).
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/353757",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 0
} |
Clarification of sequence space definition Let $(x_n)$ denote a sequence whose $n$th term is $x_n$, and $\{x_n\,:\,n\in\mathbb{N}\}$ denote the set of all elements of the sequence. I have a text that states
Note that $\{x_n\,:\,n\in\mathbb{N}\}$ can be a finite set even though $(x_n)$ is an infinite sequence.
To me this seems to be a contradiction. Can anyone restate this quotation to shed light on its meaning? I am confused about the difference between $(x_n)$, $x_n$, and $\{x_n\,:\,n\in\mathbb{N}\}$. Thanks all.
| A sequence of real numbers is a function, not a set. Thus, for instance, the sequence $(x_n)$ is actually a function $f:\mathbb N \to \mathbb R$, where we have the equality $x_n =f(n)$. Now, the image of the function is the set $\{x_n\mid n\in \mathbb N\}$, which is a very different thing. An example where this associated set is finite, even though the sequence is infinite, is the sequence $(x_n)$ where $x_n=1$ for all $n$. Then the associated set is just $\{1\}$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/353817",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 1
} |
Euclidean circle question Let $c_1$ be a circle with center $O$. Let angle $ABC$ be an inscribed angle of the circle $c_1$.
i) If $O$ and $B$ are on the same side of the line $AC$, what is the relationship between $\angle ABC$ and $ \angle AOC$?
ii) If $O$ and $B$ are on opposite sides of the line $AC$, what is the relationship between $\angle ABC$ and $\angle AOC$?
I guess $\angle ABC=\frac{1}{2} \angle AOC$, but I don't know how to explain it.
|
Here $O$ and $B$ are on the same side of the line $AC$; you can figure out the other part. Write $y=\angle OBA$ and $x=\angle OBC$, so that $\angle ABC=x+y$; since $OA=OB=OC$ are radii, the triangles $AOB$ and $COB$ are isosceles, and
$\angle AOB=180-2y$ and $\angle COB=180-2x$.
Hence $\angle AOB+ \angle COB=360-2(x+y) \implies \angle AOC=2(x+y)=2 \angle ABC$.
This is widely known as the Inscribed Angle Theorem, as robjohn said in his comment. :)
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/353876",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Mystery about irrational numbers I'm new here as you can see.
There is a mystery about $\pi$ that I heard before and want to check if it's true. They told me that if I convert the digits of $\pi$ into letters, eventually I could read the Bible, any book ever written, and even the history of my life! This happens because $\pi$ is irrational and will display all kinds of finite combinations if we keep looking at its digits.
If that's true then I could use this argument for any irrational.
My question is: Is this true?
|
$\pi$ is just another number, like $5.243424974950134566032 \dots$; you can use your argument on this one too. Continue its digits with the number of particles in the universe, the number of stars, the number of pages in the Bible, the number of letters in the Bible, and so on, and DO NOT STOP doing so: if you stop, the decimal expansion terminates and the number becomes rational.
On the other hand, there are irrational numbers whose decimal expansions are built from just two digits. Take, for example, $0.k0kk0kkk0kkkk0\ldots$, where $0<k \le 9$: the blocks of $k$'s keep growing, so the expansion never repeats, yet no digit other than $0$ and $k$ ever appears. An irrational number is just defined as a number that is NOT rational, in other words, one that cannot be expressed in the form $\dfrac{p}{q}$.
When I was reading about $\pi$, I found this picture: Crazy $\pi$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/353939",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 1
} |
Calculus word problem: expansion of air How would I solve this problem?
The adiabatic law for expansion of air is $PV^{1.4}=C$, where $P$ is pressure, $V$ is volume, and $C$ is a certain constant. At a given instant the volume is 30 cubic feet and the pressure is 60 psi. At what rate is the pressure changing if the volume is decreasing at a rate of 2 cubic feet per second?
I know that $\frac{dv}{dt}=-2$
$P(1.4)\frac{dv}{dt}+V^{1.4}\frac{dp}{dt}=0$
$60(1.4)(-2)+30^{1.4}\frac{dp}{dt}=0$
$-168+116.9417\frac{dp}{dt}=0$
$168=116.9417\frac{dp}{dt}$
$\frac{dp}{dt}=1.4366$, but would this be right?
| Note:
$PV^{\gamma}=C \implies \dfrac{dP}{dt}\cdot V^{\gamma}+(\gamma)V^{(\gamma-1)} \cdot \dfrac{dV}{dt}\cdot P=0$
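To see the numbers fall out of that formula, here is a SymPy check (my addition):

```python
import sympy as sp

t = sp.symbols('t')
P = sp.Function('P')(t)
V = sp.Function('V')(t)
gamma = sp.Rational(7, 5)  # 1.4

# Differentiate P*V**gamma = C implicitly and solve for dP/dt.
eq = sp.diff(P * V**gamma, t)
dPdt = sp.solve(sp.Eq(eq, 0), sp.Derivative(P, t))[0]
val = dPdt.subs(sp.Derivative(V, t), -2).subs({P: 60, V: 30})
print(val)  # 28/5, i.e. the pressure is increasing at 5.6 psi per second
```

Note the extra factor of $V^{\gamma-1}$ from the chain rule: it is what the attempt in the question leaves out.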
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/353989",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Prove two inequalities about limit inferior and limit superior I wish to prove the following two inequalities:
Suppose $X$ is a subset of $\Bbb R$, $f$ and $g$ are functions $X\to \Bbb R$, and $x_{0}\in X$ is a limit point. Then: $$\limsup_{x\to x_0}(f(x)+g(x))\le \limsup_{x\to x_0}f(x)+\limsup_{x\to x_0}g(x)$$ and $$\liminf_{x\to x_0}f(x)+\liminf_{x\to x_0}g(x)\le \liminf_{x\to x_0}(f(x)+g(x)).$$
I did try proof by contradiction, but I could not get the desired results. Any help with them, please.
| Contradiction is not recommended, as there is a natural direct approach.
1- limsup: recall that
$$
\limsup_{x\rightarrow x_0}h(x)=\inf_{\epsilon>0}\sup_{0<|x-x_0|<\epsilon}h(x)=\lim_{\epsilon\to 0^{+}}\sup_{0<|x-x_0|<\epsilon}h(x)
$$
where the rhs is the limit of a nonincreasing function of $\epsilon$. Note that the condition $x\in X$ should be added everywhere we take a sup above; let's say it is here implicitly, to lighten the notation.
The only thing you need to obtain the desired inequality is essentially the fact that for two nonempty subsets $A,B$ of $\mathbb{R}$
$$
\sup (A+B)\leq \sup A+\sup B.
$$
I prove it at the end below. Now fix $\epsilon>0$. The latter provides the second inequality in the following
$$
\limsup_{x\rightarrow x_0}\,(f(x)+g(x))\leq \sup_{0<|x-x_0|<\epsilon}(f(x)+g(x))\leq \sup_{0<|x-x_0|<\epsilon}f(x)+\sup_{0<|x-x_0|<\epsilon}g(x),
$$
while the first inequality is simply due to the fact that the limsup is the inf, hence a lower bound, of the $\epsilon$ suprema.
The desired inequality follows by letting $\epsilon$ tend to $0$ in the rhs.
2- liminf: you can use a similar argument with $\inf (A+B)\geq \inf A+\inf B$. Or you can simply use the following standard trick
$$
\liminf_{x\rightarrow x_0}h(x)=-\limsup_{x\rightarrow x_0}-h(x)
$$
which follows from $\inf S=-\sup (-S)$ (good exercise on sup/inf upper/lower bound manipulation). It only remains to apply the limsup inequality to $-f$ and $-g$.
Sup inequality proof: it is trivial if one of the two sets $A,B$ is not bounded above. So assume both are bounded above. Now recall that $\sup S$ is the least upper bound of the set $S$, when it is bounded above. In particular, it is an upper bound of $S$. For every $a\in A$, we have $a\leq \sup A$, and for every $b\in B$, we get $b\leq \sup B$. So for every $x=a+b\in A+B$, we have $x\leq \sup A+\sup B$. Thus $\sup A+\sup B$ is an upper bound of $A+B$. Hence it is not smaller than the least upper bound of the latter, namely $\sup(A+B)$. QED.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/354066",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |