Looking for intuition behind coin-flipping pattern expectation I was discussing the following problem with my son:
Suppose we start flipping a (fair) coin, and write down the sequence; for example it might come out HTTHTHHTTTTH.... I am interested in the expected number of flips to obtain a given pattern. For example, it takes an expected 30 flips to get HHHH. But here's the (somewhat surprising) thing: it takes only 20 expected flips to get HTHT.
The tempting intuition is to think that any pattern XXXX is equiprobable since, in batches of 4 isolated flips, this is true. But when we are looking for embedded patterns like this, things change. My son wanted to know why HTHT was so much more likely to occur before HHHH but I could not articulate any kind of satisfying explanation. Can you?
| Suppose we have a 4-slot queue. By the state we mean the longest tail of the coin sequence so far that matches the pattern $XXXX$ from the left. If there is no match, we denote the state by $\varnothing$. For instance, the state of the sequence $$TTTTHTHHTTTHTH,$$ given the pattern $XXXX = HTHT$, is $HTH$, and the state for the pattern $TTTT$ is $\varnothing$.
Now suppose the pattern is $XXXX = HHHH$. If you flip a $T$ and fail to complete the pattern, the state collapses all the way to $\varnothing$, so you have to start over from the beginning.
But if the pattern is $XXXX = HTHT$ and the previous state is either $H$ or $HTH$, then the state collapses only to $H$ when you fail. Thus you do not have to start over from the beginning in this case. This difference allows the pattern to be completed faster, resulting in the shorter expected time.
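For readers who want to see the numbers $30$ and $20$ come out of an experiment, here is a minimal Monte Carlo sketch (Python, not part of the original answer):
```python
import random

def expected_flips(pattern, trials=100_000):
    """Average number of fair-coin flips until `pattern` first appears."""
    total = 0
    for _ in range(trials):
        seq = ""
        while not seq.endswith(pattern):
            seq += random.choice("HT")
        total += len(seq)
    return total / trials

print(expected_flips("HHHH"))  # ~30
print(expected_flips("HTHT"))  # ~20
```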
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/151262",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 3,
"answer_id": 2
} |
Product of two ideals doesn't equal the intersection The product of two ideals is defined as the set of all finite sums $\sum f_i g_i$, with $f_i$ an element of $I$, and $g_i$ an element of $J$. I'm trying to think of an example in which $IJ$ does not equal $I \cap J$.
I'm thinking of letting $I = 2\mathbb{Z}$, and $J = \mathbb{Z}$, and $I\cap J = 2\mathbb{Z}$? Can someone point out anything faulty about this logic of working with an even ideal and then an odd ideal?
Thanks in advance.
| Maybe it is helpful for you to realise what really happens for ideals in the integers. You probably know that any ideal in $\mathbb Z$ is of the form $(a)$ for $a\in \mathbb Z$, i.e. is generated by one element. The elements in $(a)$ are all integers which are divisible by $a$.
If we are given two ideals $(a)$ and $(b)$, their intersection consists of those numbers which are divisible by $a$ and divisible by $b$. Their product consists of all numbers which are divisible by the product $ab$.
If $a$ and $b$ are coprime, these two ideals are the same. E.g. all numbers which are divisible by $2$ and $3$ are also divisible by $6$, and vice versa. If they are not coprime the situation changes: if a number is divisible by $4$ and by $2$, it is not necessarily divisible by $8$.
Another way of saying that two integers $a$, $b$ are coprime is that there exist $x,y$ such that $xa+yb=1$ (cf. Euclidean algorithm). In the language of ideals this translates to $(a)+(b)=\mathbb Z$, and the circle closes.
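As a concrete sanity check of the non-coprime case, here is a small sketch (Python; the choice $I=(2)$, $J=(4)$ is just for illustration) confirming that $4\in I\cap J$ while $4\notin IJ=(8)$:
```python
# In Z, take I = (2) and J = (4); for principal ideals (a)(b) = (ab), so IJ = (8).
N = 1000
window = range(-N, N + 1)
I = {n for n in window if n % 2 == 0}
J = {n for n in window if n % 4 == 0}

print(4 in (I & J))   # True:  4 lies in the intersection I ∩ J = (4)
print(4 % 8 == 0)     # False: 4 does not lie in the product IJ = (8)
```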
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/151317",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "12",
"answer_count": 4,
"answer_id": 2
} |
Finding the asymptotic limit of an integral. I'm having trouble finding the asymptotic of the integral
$$ \int^{1}_{0} \ln^\lambda \frac{1}{x} dx$$
as $\lambda \rightarrow + \infty$.
Can anyone help?
Thank you!
| Let
$-\log x=u$; then the integral becomes
$$\int\limits_0^1 {{{\left( { - \log x} \right)}^\lambda }dx} = \int\limits_0^{ + \infty } {{e^{ - u}}{u^\lambda }du} $$
This is Euler's famous Gamma function, $\Gamma(\lambda+1)$, which has an asymptotic formula by Stirling
$$\int\limits_0^{ + \infty } {{e^{ - u}}{u^\lambda }du} \sim {\left( {\frac{\lambda }{e}} \right)^\lambda }\sqrt {2\pi \lambda } $$
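A quick numerical comparison (a Python sketch, not part of the original answer) shows how good the Stirling approximation already is for moderate $\lambda$:
```python
import math

# Compare Gamma(lam + 1) = integral_0^inf e^{-u} u^lam du with Stirling's formula
for lam in (5, 20, 50):
    exact = math.gamma(lam + 1)
    stirling = (lam / math.e) ** lam * math.sqrt(2 * math.pi * lam)
    print(lam, exact / stirling)  # the ratio tends to 1 as lam grows
```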
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/151404",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Convex functions in integral inequality Let $\mu,\sigma>0$ and define the function $f$ as follows:
$$
f(x) = \frac{1}{\sigma\sqrt{2\pi}}\exp\left(-\frac{(x-\mu)^2}{2\sigma ^2}\right)
$$
How can I show that
$$
\int\limits_{-\infty}^\infty x\log|x|f(x)\mathrm dx\geq \underbrace{\left(\int\limits_{-\infty}^\infty x f(x)\mathrm dx\right)}_\mu\cdot\left(\int\limits_{-\infty}^\infty \log|x| f(x)\mathrm dx\right)
$$
which is also equivalent to $\mathsf E[ X\log|X|]\geq \underbrace{\mathsf EX}_\mu\cdot\mathsf E\log|X|$ for a random variable $X\sim\mathscr N(\mu,\sigma^2).$
| Below is a probabilistic and somewhat noncomputational proof.
We ignore the restriction to the normal distribution in what follows below. Instead, we consider a mean-zero random variable $Z$ with a distribution symmetric about zero and set $X = \mu + Z$ for $\mu \in \mathbb R$.
Claim: Let $X$ be described as above such that $\mathbb E X\log|X|$ is finite for every $\mu$. Then, for $\mu \geq 0$,
$$
\mathbb E X \log |X| \geq \mu \mathbb E \log |X| \>
$$
and for $\mu < 0$,
$$\mathbb E X \log |X| \leq \mu \mathbb E \log |X| \>.$$
Proof. Since $X = \mu + Z$, we observe that
$$
\mathbb E X \log |X| = \mu \mathbb E \log |X| + \mathbb E Z \log |\mu + Z| \>,
$$
and so it suffices to analyze the second term on the right-hand side.
Define
$$
f(\mu) := \mathbb E Z \log|\mu+Z| \>.
$$
Then, by symmetry of $Z$, we have
$$
f(-\mu) = \mathbb E Z \log|{-\mu}+Z| = \mathbb E Z \log|\mu-Z| = - \mathbb E \tilde Z \log|\mu + \tilde Z| = - f(\mu) \>,
$$
where $\tilde Z = - Z$ has the same distribution as $Z$ and the last equality follows from this fact. This shows that $f$ is odd as a function of $\mu$.
Now, for $\mu \neq 0$,
$$
\frac{f(\mu) - f(-\mu)}{\mu} = \mathbb E \frac{Z}{\mu} \log \left|\frac{1+ Z/\mu}{1- Z/\mu}\right| \geq 0\>,
$$
since $x \log\left|\frac{1+x}{1-x}\right| \geq 0$ for all $x$. Since $f$ is odd, the left-hand side above equals $2f(\mu)/\mu$, from which we conclude that $f(\mu) \geq 0$ for all $\mu > 0$.
Thus, for $\mu > 0$, $\mu \mathbb E \log |X|$ is a lower bound on the quantity of interest and for $\mu < 0$, it is an upper bound.
NB. In the particular case of a normal distribution, $X \sim \mathcal N(\mu,\sigma^2)$ and $Z \sim \mathcal N(0,\sigma^2)$. The moment condition stated in the claim is satisfied.
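The claim is easy to probe numerically. Below is a minimal Monte Carlo sketch (Python; the parameters $\mu=1.5$, $\sigma=2$ are arbitrary illustrative choices):
```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma = 1.5, 2.0                      # illustrative values with mu > 0
X = rng.normal(mu, sigma, 2_000_000)

lhs = np.mean(X * np.log(np.abs(X)))      # estimates E[X log|X|]
rhs = mu * np.mean(np.log(np.abs(X)))     # estimates mu * E[log|X|]
print(lhs, rhs, lhs >= rhs)               # expect True
```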
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/151474",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11",
"answer_count": 3,
"answer_id": 1
} |
The number of elements which are squares in a finite field. Meanwhile reading some introductory notes about the projective special linear group $PSL(2,q)$ wherein $q$ is the cardinal number of the field; I saw:
....in a finite field of order $q$, the number of elements ($≠0$) which are squares is $q-1$ if $q$ is an even number and is $\frac{1}{2}(q-1)$ if $q$ is an odd number..." .
I can see it through $\mathbb Z_5$ or $GF(2)$. Any hints for proving above fact? Thanks.
| Another way to prove it, way less elegant than Dustan's but perhaps slightly more elementary: let $$a_1,a_2,...,a_{q-1}$$ be the non zero residues modulo $\,q\,,\,q$ an odd prime . Observe that $\,\,\forall\,i\,,\,\,a_i^2=(q-a_i)^2 \pmod q\,$ , so that all the quadratic residues must be among $$a_1^2\,,\,a_2^2\,,...,a_m^2\,\,,\,m:=\frac{q-1}{2}
$$
Note that, taking the representatives $a_i=i$, for all $\,1\leq i,j\leq m\,$ we have $$a_i+a_j\not\equiv 0 \pmod q\,:$$ otherwise $a_i=-a_j=q-a_j$, but $a_i\leq m$ while $q-a_j\geq q-m=m+1$, which is absurd.
Finally, we prove that no two of the above $\,\,(q-1)/2\,\,$ elements are equal. The following is done modulo $\,q$: $$a_i^2=a_j^2\Longrightarrow (a_i-a_j)(a_i+a_j)=0\Longrightarrow a_i-a_j=0$$ since we already showed that $\,\,a_i+a_j\neq 0$
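A short brute-force check (a Python sketch, not part of the original answer) of the count $\frac12(q-1)$ for small odd primes $q$:
```python
# Count the distinct nonzero squares modulo small odd primes q
for q in (5, 7, 11, 13, 503):
    squares = {pow(a, 2, q) for a in range(1, q)}
    print(q, len(squares), (q - 1) // 2)  # the two counts agree
```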
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/151530",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "29",
"answer_count": 5,
"answer_id": 0
} |
Infinite Degree Algebraic Field Extensions In I. Martin Isaacs Algebra: A Graduate Course, Isaacs uses the field of algebraic numbers $$\mathbb{A}=\{\alpha \in \mathbb{C} \; | \; \alpha \; \text{algebraic over} \; \mathbb{Q}\}$$ as an example of an infinite degree algebraic field extension. I have done a cursory google search and thought about it for a little while, but I cannot come up with a less contrived example.
My question is
What are some other examples of infinite degree algebraic field extensions?
| Another simple example is the extension obtained by adjoining all roots of unity.
Since adjoining a primitive $n$-th root of unity gives you an extension of degree $\varphi(n)$ and $\varphi(n)=n-1$ when $n$ is prime, you get algebraic numbers of arbitrarily large degree when you adjoin all roots of unity.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/151586",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "18",
"answer_count": 5,
"answer_id": 1
} |
Flux of a vector field I've been trying to solve a flux integral with Gauss' theorem so a little input would be appreciated.
Problem statement: Find the flux of ${\bf{F}}(x,y,z) = (x,y,z^2)$ upwards through the surface ${\bf r}(u,v) = (u \cos v, u \sin v, u), \hspace{1em} (0 \leq u \leq 2; 0 \leq v \leq \pi)$
OK. I notice that $z = u$ so $0 \leq z \leq 2$. Furthermore I notice that $x^2 + y^2 = z^2$ so $x^2 + y^2 \leq 4$. It makes sense to use cylindrical coordinates, so $(0 \leq r \leq 2)$ and $(0 \leq \theta \leq 2 \pi)$. Finally $\operatorname{div} {\bf F} = 2(z+1)$. With this in mind I set up my integral
\begin{align*}
2\int ^{2 \pi} _0 \int ^2 _0 \int _0 ^2 (z+1)rdrdzd\theta &= \int ^{2 \pi} _0 \int ^2 _0[(z+1)r^2]_0 ^2 dzd\theta \\
&= 4\int ^{2 \pi} _0 \int ^2 _0 z + 1 dzd\theta\\
&= 4\int ^{2 \pi} _0 [1/2 z^2 + z]_0 ^2 d\theta \\
&= 16 \int _0 ^{2 \pi}d\theta \\
&= 32 \pi
\end{align*}
And I'm not sure how to continue from this point so if anyone can offer help it would be appreciated.
Thanks!
| I am not convinced that your integration limits are in order. The domain of integration is the volume below a half cone, so I would proceed as follows:
$$\begin{align*}
2\int_{0}^{\pi}\int_{0}^{2}\int_{0}^{r}\left(z+1\right)r\,dz\,dr\,d\theta &= 2\int_{0}^{\pi}\int_{0}^{2}\left(\frac{r^{3}}{2}+r^{2}\right)dr\,d\theta \\
&= 2\int_{0}^{\pi}\left[\left.\left(\frac{r^{4}}{8}+\frac{r^{3}}{3}\right)\right|_{0}^{2}\right]d\theta \\
&= 2\pi\left(2+\frac{8}{3}\right)=\frac{28\pi}{3}
\end{align*}$$
Then by Gauss' theorem you will have calculated the flux
EDIT: arithmetical error in the second transition
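The triple integral is easy to check symbolically; here is a sympy sketch (not part of the original answer):
```python
import sympy as sp

r, z, theta = sp.symbols('r z theta', positive=True)
# Integrate div F = 2(z+1) over the half-cone region: 0 <= z <= r, 0 <= r <= 2, 0 <= theta <= pi
flux = 2 * sp.integrate((z + 1) * r, (z, 0, r), (r, 0, 2), (theta, 0, sp.pi))
print(flux)  # 28*pi/3
```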
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/151716",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
When is something "obvious"? I try to be a good student but I often find it hard to know when something is "obvious" and when it isn't. Obviously (excuse the pun) I understand that it is specific to the level at which the writer is pitching the statement. My teacher is fond of telling a story that goes along the lines of
A famous maths professor was giving a lecture during which he
said "it is obvious that..." and then he paused at length in thought, and then
excused himself from the lecture temporarily. Upon his return some fifteen minutes later he said "Yes, it is obvious that...." and continued the lecture.
My teacher's point is that this only comes with a certain mathematical maturity and even eludes the best mathematicians at times.
I would like to know :
*
*Are there any ways to develop a better sense of this, or does it just come with time and practice?
*Is this quote a true quote? If so, who is it attributable to, and if not, is it a mathematical urban legend or just something that my teacher likely made up?
| Mathematical statements are only ever evaluated by individuals, and since individuals differ in mathematical ability, nothing is obvious to everyone, or even reliably to yourself. The crux of the joke is that the statement was only obvious to the professor after reflection, which is deliberately ironic, since there would be no point in reflecting if something were truly obvious. Hence the point is that if even the expert professor had to make sure it was obvious, then students should check axioms all the more diligently, no matter how hallowed the axiom.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/151782",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "33",
"answer_count": 10,
"answer_id": 6
} |
Doubly exponential sequence behaviour from inequality I am investigating a strictly decreasing sequence $(a_i)_{i=0}^\infty$ in $(0, 1)$, with $\lim_{i\to\infty}a_i=0$, such that there exist constants $K>1$ and $m\in\mathbb{N}$ such that $$\frac{a_{i-1}^m}{K} \leq a_i \leq K a_{i-1}^m$$ for all $i$. Even though $K>1$, is it of the right lines to conclude that $a_i \sim \alpha^{m^i}$ for some constant $0<\alpha<1$?
Thanks,
DW
| [Edit: Now that the question has been changed a day later, I removed analysis of the old version of the first inequality. Perhaps sometime I will update to fully answer the new question. The following still applies to the second inequality.]
For the second inequality, $$a_i\leq Ka_{i-1}^m\implies a_i\leq K^{1+m+m^2+\cdots+m^{i-1}}a_0^{m^i}\leq \left(K^{2/m}a_0\right)^{m^i}.$$ You could apply a similar inequality with each $N\in\mathbb N$ in place of $0$ to get $$a_i\leq \left(K^{2/(m^{N+1})}a_N^{1/m^N}\right)^{m^i}.$$ By choosing $N$ such that $K^{2/m}a_N<1$, you at least get $\displaystyle{a_i=O\left(\alpha^{m^{i}}\right)}$ with $\alpha=\left(K^{2/m}a_N\right)^{1/m^N}\in(0,1)$.
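A quick numeric illustration (a Python sketch; $K=2$, $m=2$, $a_0=0.1$ are arbitrary choices satisfying $K^{2/m}a_0<1$) of the extreme case $a_i = K a_{i-1}^m$ against the bound $\left(K^{2/m}a_0\right)^{m^i}$:
```python
K, m, a0 = 2.0, 2, 0.1
alpha = K ** (2 / m) * a0           # bound base from the N = 0 case; here 0.2 < 1

a = a0
for i in range(1, 7):
    a = K * a ** m                  # extreme case of the recurrence
    print(i, a, alpha ** (m ** i))  # a_i stays below alpha**(m**i)
```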
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/151811",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Find all $P$ with $P(x^2)=P(x)^2$ The following problem is from Golan's book on linear algebra, chapter 4. I have posted a proposed answer below.
Problem: Let $F$ be a field. Find all nonzero polynomials $P\in F[x]$ satisfying
$$P(x^2)=[P(x)]^2.$$
| Assume first that $F$ is a field with characteristic not equal to 2. The only ones are 1 and $x^n$, $n\in \mathbb{N}$.
Let $a_n$ denote the coefficient of $x^n$ in $P$. We see immediately that all $a_n=0$ for odd $n>0$. Examining the constant coefficient, we see $a_0=a_0^2\Rightarrow a_0=1$ or $a_0=0$.
Now proceed by induction. Consider the case where $a_0=1$. Assume we have shown $a_n=0$ for all $n<k$, $n\neq 1$. We will show $a_k=0$. If $k$ is odd, we are done. If $k$ is even, the coefficient of $x^{k}$ in $P(x^2)$ is $a_{k/2}$, so it is 0. We evaluate $[P(x)]^2$ and ignore higher order terms, and see
$$(a_kx^{k}+1)^2=a_k^2x^{2k}+2a_kx^k+1$$
and the only way for the coefficient of $x^k$ to vanish here is for $a_k$ to be 0.
The case with $a_0=0$ is similar. Assume we have shown $a_n=0$ for all $n<k$. The coefficient of $x^{2k}$ in $P(x^2)$ is $a_{k}$. If evaluate $[P(x)]^2=[...a_kx^k]^2$ and ignore higher order terms again, we get $a_k^2x^{2k}$. So $a_k=1$ or $a_k=0$. If $a_k=0$, we continue the induction. If $a_k=1$, we factor $x^k$ out of the original polynomial and are reduced to the first case.
In a field of characteristic 2 however, I believe that any polynomial with all coefficients equal to 0 or 1 works. Just use the "freshman's dream." Further, because equating coefficients on both sides gives $a_n^2=a_n$, these are the only ones that work.
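The characteristic-$2$ claim can be checked mechanically for any particular $0/1$ polynomial; here is a sympy sketch (the polynomial $x^5+x^2+1$ is an arbitrary example):
```python
import sympy as sp

x = sp.symbols('x')
f = x**5 + x**2 + 1                            # arbitrary 0/1 coefficients
lhs = sp.Poly(f.subs(x, x**2), x, modulus=2)   # P(x^2) over GF(2)
rhs = sp.Poly(f**2, x, modulus=2)              # P(x)^2 over GF(2)
print(lhs == rhs)                              # True: the freshman's dream
```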
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/151876",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
} |
(Regular) wreath product of nilpotent groups Is the wreath product of two nilpotent groups always nilpotent?
I know the answer is no due to a condition "The regular wreath product A wr B of a group A by a group B is nilpotent if and only if A is a nilpotent p-group of finite exponent and B is a finite p-group for the same prime p", but I can't easily construct a counterexample to show it.
| Following Jug's suggestion: let $\,\,A:=C_3=\langle a\rangle\,,\,B:=C_2=\langle c\rangle\,$, with the regular action of $\,B\,$ on itself, and form the (regular) wreath product $$A\wr B\cong \left(C_3\times C_3\right)\rtimes_R C_2$$ Take the elements $$\pi=((1,1),c)\,\,,\,\,\sigma=((a,a^2),1)$$ It's now easy to check that $$\pi^2=\sigma^3=1\,\,,\,\,\pi\sigma\pi=\sigma^2$$ so we get that $\,\,\langle \pi\,,\,\sigma\rangle\cong S_3\,\,$ and thus $\,\,A\wr B\,\,$ can't be nilpotent, though both $\,\,A,B\,\,$ are (they're even abelian...)
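One can confirm this with a computer algebra system; here is a sketch using sympy's permutation groups, realizing $C_3\wr C_2$ on six points:
```python
from sympy.combinatorics import Permutation, PermutationGroup

a1 = Permutation([[0, 1, 2]], size=6)              # a on the first copy of C_3
a2 = Permutation([[3, 4, 5]], size=6)              # a on the second copy of C_3
c = Permutation([[0, 3], [1, 4], [2, 5]], size=6)  # c swaps the two blocks
G = PermutationGroup([a1, a2, c])

print(G.order())       # 18 = |C_3 x C_3| * |C_2|
print(G.is_nilpotent)  # False
```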
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/151953",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Prove that $\sin(x+\frac{\pi}{n})$ converges uniformly to $\sin(x)$. I've just starting learning uniform convergence and understand the formal definition. What I've got so far is:
$|\sin(x+ \frac{\pi}{n}) - \sin(x)| < \epsilon \ \ \ \ \forall x \in \mathbb{R} \ \ \ \ $ for $n \geq N, \epsilon>0$
LHS = $|2\cos(x+\frac{\pi}{2n})\cdot \sin(\frac{\pi}{2n})| < \epsilon $
Am I going down the right route here? I've done some examples fine, but when trig is involved on all space, I get confused as to what I should be doing...
Any help at all would be VERY much appreciated, I have an analysis exam tomorrow and need to be able to practice this.
Thanks.
| Use the fact that the sine function's derivative has absolute value of at most one to see that
$$|\sin(x) - \sin(y)| \le |x - y|.$$
In particular $|\sin(x+\pi/n) - \sin(x)| \le \pi/n$, a bound independent of $x$, so the convergence is uniform.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/152026",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 4,
"answer_id": 0
} |
Localisation is isomorphic to a quotient of polynomial ring I am having trouble with the following problem.
Let $R$ be an integral domain, and let $a \in R$ be a non-zero element. Let $D = \{1, a, a^2, ...\}$. I need to show that $R_D \cong R[x]/(ax-1)$.
I just want a hint.
Basically, I've been looking for a surjective homomorphism from $R[x]$ to $R_D$, but everything I've tried has failed. I think the fact that $f(a)$ is a unit, where $f$ is our mapping, is relevant, but I'm not sure. Thanks
| Here's another answer using the universal property in another way (I know it's a bit late, but is it ever too late?)
As for universal properties in general, the ring satisfying the universal property described by Arturo Magidin in his answer is unique up to isomorphism. Thus to show that $R[x]/(ax-1) \simeq R_D$, it suffices to show that $R[x]/(ax-1)$ has the same universal property !
But that is quite easy: let $\phi: R\to T$ be a ring morphism such that $\phi(a) \in T^{\times}$.
Using the universal property of $R[x]$, we get a unique morphism $\overline{\phi}$ extending $\phi$ with $\overline{\phi}(x) = \phi(a)^{-1}$.
Quite obviously, $ax -1 \in \operatorname{Ker}\overline{\phi}$. Thus $\overline{\phi}$ factorizes uniquely through $R[x]/(ax-1)$.
Thus we get a unique morphism $\mathcal{F}: R[x]/(ax-1) \to T$ with $\mathcal{F}\circ \pi = \phi$, where $\pi$ is the canonical map $R\to R[x]/(ax-1)$. This shows that $\pi: R \to R[x]/(ax-1)$ has the universal property of the localization, thus it is isomorphic to the localization.
This is essentially another way of seeing Arturo Magidin's answer
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/152236",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "21",
"answer_count": 5,
"answer_id": 0
} |
Moment of inertia of an ellipse in 2D I'm trying to compute the moment of inertia of a 2D ellipse about the z axis, centered on the origin, with major/minor axes aligned to the x and y axes. My best guess was to try to compute it as:
$$4\rho \int_0^a \int_0^{\sqrt{b^2(1 - x^2/a^2)}}(x^2 +y^2)\,dydx$$
... I couldn't figure out how to integrate that. Is there a better way or a trick, or is the formula known? I'd also be happy with a good numerical approximation given a and b.
| Use 'polar' coordinates, as in $\phi(\lambda, \theta) = (\lambda a \cos \theta, \lambda b \sin \theta)$, with $(\lambda, \theta) \in S = (0,1] \times [0,2 \pi]$. It is straightforward to compute the Jacobian determinant as
$$ J_{\phi}(\lambda, \theta) = |\det D\phi(\lambda, \theta)| = \lambda a b.$$
Let $E = \{ (x,y) \,|\, 0 <(\frac{x}{a})^2 + (\frac{y}{b})^2 \leq 1 \}$. (Eliminating $(0,0)$ makes no difference to the integral, and is a technicality for the change of variables below.) We have $E = \phi (S)$, and
$$\begin{align}
I &= \rho \int_{\phi ( S)} (x^2+y^2) \, dx dy \\
&= \rho \int_{S} \lambda^2 (a^2 \cos^2 \theta+ b^2 \sin^2 \theta) \lambda a b \, d \lambda d \theta \\
&= \rho a b \int_{0}^1 \lambda^3 \, d \lambda \int_0^{2 \pi} a^2 \cos^2 \theta+ b^2 \sin^2 \theta \, d\theta \\
&= \rho \pi a b \frac{a^2+b^2}{4}.
\end{align}$$
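A Monte Carlo sanity check of the closed form (a Python sketch; $a=3$, $b=2$, $\rho=1$ are illustrative values):
```python
import numpy as np

rng = np.random.default_rng(1)
a, b, rho = 3.0, 2.0, 1.0
n = 4_000_000

# Sample uniformly in the bounding box and keep the points inside the ellipse
x = rng.uniform(-a, a, n)
y = rng.uniform(-b, b, n)
inside = (x / a) ** 2 + (y / b) ** 2 <= 1

I_mc = rho * (4 * a * b) * np.mean((x**2 + y**2) * inside)
print(I_mc, rho * np.pi * a * b * (a**2 + b**2) / 4)  # the two agree closely
```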
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/152277",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
A Banach space is reflexive if and only if its dual is reflexive How to show that a Banach space $X$ is reflexive if and only if its dual $X'$ is reflexive?
| Here's a different, more geometric approach that comes from Folland's book, exercise 5.24
Let $\widehat X$, $\widehat{X^*}$ be the natural images of $X$ and $X^*$ in $X^{**}$ and $X^{***}$.
Define $\widehat X^0 = \{F\in X^{***}: F(\widehat x) = 0 \text{ for all } \widehat x \in \widehat X\}$
1) It isn't hard to show that $\widehat{X^*} \bigcap \widehat X^0 = \{0\}$.
2) Furthermore, $\widehat{X^*} + \widehat X^0 = X^{***}$. To show this, let $f\in X^{***}$, and define $l \in X^*$ by $l(x) = f(\widehat x)$ for all $x\in X$.
Then $f(\phi) = \widehat l(\phi) + [f(\phi) - \widehat l(\phi)]$.
Clearly $\widehat l \in \widehat{X^*}$, and we claim $f - \widehat l \in \widehat X^0$. Let $\widehat x \in \widehat X$. Then $f(\widehat x) - \widehat l ( \widehat x) = f(\widehat x) - \widehat x (l) = f(\widehat x) - l(x) = 0$
Now that 1) and 2) are verified, we prove the claim:
If $X$ is reflexive, then $\widehat X^0 = \{0\}$, and so $X^{***} = \widehat{X^*}$, so $X^*$ is reflexive.
If $X^*$ is reflexive, then $X^{***} = \widehat{X^*}$, so $\widehat X^0 = \{0\}$. Since $\widehat X$ is a closed subspace of $X^{**}$ (on the assumption that $X$ is Banach), if $\widehat X$ were a proper subspace of $X^{**}$, we would be able to use Hahn-Banach to construct an $F \in X^{***}$ such that $F$ is zero on $\widehat X$ and has $\|F\| = 1$. This, however, would contradict $\widehat X^0 = \{0\}$. So we conclude $\widehat X = X^{**}$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/152343",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "39",
"answer_count": 4,
"answer_id": 3
} |
Probability problem of 220 people randomly selecting only 12 of 35 exclusive options. There are 220 people and 35 boxes filled with trinkets.
Each person takes one trinket out of a random box.
What is the probability that the 220 people will have grabbed a trinket from exactly 12 different boxes?
I'm trying to calculate the probability of grabbing a trinket from at most 12 boxes, $P(12)$, then calculate $P(11)$, with the answer being $P(12)-P(11)$, but I'm drawing a blank.
$P(12) = 1-(23/35)^{220}$ doesn't look right to me.
| If the probability that the first $n$ people have chosen from exactly $c$ boxes out of a possible $t$ total boxes [$t=35$ in this case] is $p(n,c,t)$ then $$p(n,c,t)=\frac{c \times p(n-1,c,t)+(t-c+1)\times p(n-1,c-1,t)}{t}$$ starting with $p(0,c,t)=0$ and $p(n,0,t)=0$ except $p(0,0,t)=1$.
Using this gives $p(220,12,35) \approx 4.42899922 \times 10^{-94}$. This is close to but not exactly the naive ${35 \choose 12}\times \left(\frac{12}{35}\right)^{220} \approx 4.42899948\times 10^{-94}$.
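The recurrence is easy to run directly; a minimal Python sketch (not part of the original answer):
```python
from functools import lru_cache

@lru_cache(maxsize=None)
def p(n, c, t):
    """Probability that the first n people have drawn from exactly c of t boxes."""
    if n == 0:
        return 1.0 if c == 0 else 0.0
    if c == 0:
        return 0.0
    return (c * p(n - 1, c, t) + (t - c + 1) * p(n - 1, c - 1, t)) / t

print(p(220, 12, 35))  # ~4.42899922e-94, as in the answer
```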
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/152403",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Inverse of transformation matrix I am preparing for a computer 3D graphics test and have a sample question which I am unable to solve.
The question is as follows:
For the following 3D transformation matrix M, find its inverse. Note that M is a composite matrix built from fundamental geometric affine transformations only. Show the initial transformation sequence of M, invert it, and write down the final inverted matrix of M.
$M =\begin{pmatrix}0&0&1&5\\0&3&0&3\\-1&0&0&2\\0&0&0&1\end{pmatrix} $
I only know basic linear algebra and I don't think it is the purpose to just invert the matrix but to use the information in the question to solve this.
Can anyone help?
Thanks
| I know this is old, but the inverse of a transformation matrix is just the inverse of the matrix. For a transformation matrix $M$ which transforms some vector $\mathbf a$ to position $\mathbf v$, then to get a matrix which transforms some vector $\mathbf v$ to $\mathbf a$ we just multiply by $M^{-1}$
$M\cdot \mathbf a = \mathbf v \\
M^{-1} \cdot M \cdot \mathbf a = M^{-1} \cdot \mathbf v \\
\mathbf a = M^{-1} \cdot \mathbf v$
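For this particular $M$, a numerical inverse makes a quick cross-check; a numpy sketch (the matrix in the comment is the hand-computed expectation):
```python
import numpy as np

M = np.array([[ 0, 0, 1, 5],
              [ 0, 3, 0, 3],
              [-1, 0, 0, 2],
              [ 0, 0, 0, 1]], dtype=float)

# Expected: [[0, 0, -1, 2], [0, 1/3, 0, -1], [1, 0, 0, -5], [0, 0, 0, 1]]
print(np.linalg.inv(M))
```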
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/152462",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 2,
"answer_id": 1
} |
Finding solutions to equation of the form $1+x+x^{2} + \cdots + x^{m} = y^{n}$ Exercise $12$ in Section $1.6$ of Nathanson's : Methods in Number Theory book has the following question.
*
*When is the sum of a geometric progression equal to a power? Equivalently, what are the solutions of the exponential diophantine equation $$1+x+x^{2}+ \cdots +x^{m} = y^{n} \qquad \cdots \ (1)$$ in integers $x,m,n,y$ greater than $2$? Check that
\begin{align*}
1 + 3 + 3^{2} + 3^{3} + 3^{4} & = 11^{2}, \\\ 1 + 7 + 7^{2} + 7^{3} &= 20^{2}, \\\ 1 + 18 +18^{2} &= 7^{3}.
\end{align*}
These are the only known solutions of $(1)$.
The Wikipedia link doesn't reveal much about the above question. My question here would be to ask the following:
*
*Are there any other known solutions to the above equation. Can we conjecture that this equation can have only finitely many solutions?
Added: Alright. I had posted this question on Mathoverflow some time after I had posed here. This user by name Gjergji Zaimi had actually given me a link which tells more about this particular question. Here is the link:
*
*https://mathoverflow.net/questions/58697/
| I liked your question very much. The cardinality of the solution set of the above equation depends on the values of $m,n$.
Let me break your problem into some cases. There are three cases possible.
*
*When $m = 1$ and $n = 1$, you know that there are infinitely many solutions.
*When $m=2$ and $n=1$ you know that a conic may have infinitely many rational points or finitely many rational points. In a broader sense, this circle of problems leads to genus-$1$ curves, where the elliptic curves are included (when $m=2,n=3$), as well as to hyperelliptic curves (when $m=2, n\ge 4$). In this case the number of points on the curve is studied using the conjecture of Birch and Swinnerton-Dyer: it gives you a measure of the cardinality, whether infinite or finite, by considering the $L$-functions associated to the curves.
*When $m \ge 2, n \ge 4$ it may represent some curve of higher genus. So by the standard theorem of Faltings, it has finitely many rational points, given that the curve has genus $g \ge 2$.
I will update this answer if I find something more interesting. Thank you.
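If one wants to see the three quoted solutions fall out of a computer search, here is a small brute-force sketch (Python; the search ranges are arbitrary cutoffs):
```python
def nth_root(N, n):
    """Return y > 1 with y**n == N, or None (float guess refined by exact checks)."""
    r = round(N ** (1.0 / n))
    for y in (r - 1, r, r + 1):
        if y > 1 and y ** n == N:
            return y
    return None

for x in range(2, 200):
    s = 1 + x
    for m in range(2, 12):
        s += x ** m                    # s = 1 + x + ... + x^m
        for n in range(2, 6):
            y = nth_root(s, n)
            if y:
                print(f"1 + {x} + ... + {x}^{m} = {y}^{n}")
# Finds exactly (x, m, n, y) = (3, 4, 2, 11), (7, 3, 2, 20), (18, 2, 3, 7)
```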
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/152530",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "20",
"answer_count": 1,
"answer_id": 0
} |
Proving that $\lim\limits_{x \to 0}\frac{e^x-1}{x} = 1$ I was messing around with the definition of the derivative, trying to work out the formulas for the common functions using limits. I hit a roadblock, however, while trying to find the derivative of $e^x$. The process went something like this:
$$\begin{align}
(e^x)' &= \lim_{h \to 0} \frac{e^{x+h}-e^x}{h} \\
&= \lim_{h \to 0} \frac{e^xe^h-e^x}{h} \\
&= \lim_{h \to 0} e^x\frac{e^{h}-1}{h} \\
&= e^x \lim_{h \to 0}\frac{e^h-1}{h}
\end{align}
$$
I can show that $\lim_{h\to 0} \frac{e^h-1}{h} = 1$ using L'Hôpital's, but it kind of defeats the purpose of working out the derivative, so I want to prove it in some other way. I've been trying, but I can't work anything out. Could someone give a hint?
| Let's say $y=e^h -1$; then $\lim_{h \rightarrow 0} \dfrac{e^h -1}{h} = \lim_{y \rightarrow 0}{\dfrac{y}{\ln{(y+1)}}} = \lim_{y \rightarrow 0} {\dfrac{1}{\dfrac{\ln{(y+1)}}{y}}} = \lim_{y \rightarrow 0}{\dfrac{1}{\ln{(y+1)}^\frac{1}{y}}}$. It is easy to prove that $\lim_{y \rightarrow 0}{(y+1)}^\frac{1}{y} = e$. Then, using limits of composite functions, $\lim_{y \rightarrow 0}{\dfrac{1}{\ln{(y+1)}^\frac{1}{y}}} = \dfrac{1}{\ln{(\lim_{y \rightarrow 0}{(y+1)^\frac{1}{y}})}} = \dfrac{1}{\ln{e}} = \dfrac{1}{1} = 1.$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/152605",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "24",
"answer_count": 8,
"answer_id": 2
} |
Showing the divergence of $ \int_0^{\infty} \frac{1}{1+\sqrt{t}\sin(t)^2} dt$ How can I show the divergence of
$$ \int_0^x \frac{1}{1+\sqrt{t}\sin(t)^2} dt$$
as $x\rightarrow\infty?$
| For $t \gt 0$:
$$
1 + t \ge 1 + \sqrt{t}\sin^2t
$$
(indeed, $\sqrt{t}\sin^2 t \le \sqrt{t} \le t$ for $t \ge 1$, while $\sqrt{t}\sin^2 t \le \sqrt{t}\,t^2 = t^{5/2} \le t$ for $0 < t \le 1$).
Or:
$$
\frac{1}{1 + t} \le \frac{1}{1 + \sqrt{t}\sin^2t}
$$
Now consider:
$$
\int_0^x \frac{dt}{1 + t} \le \int_0^x \frac{dt}{1 + \sqrt{t}\sin^2t}
$$
The LHS diverges as $x \to +\infty$, so the RHS does too.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/152649",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
What is the Taylor series for $g(x) =\frac{ \sinh(-x^{1/2})}{(-x^{1/2})}$, for $x < 0$?
What is the Taylor series for $$g(x) = \frac{\sinh((-x)^{1/2})}{(-x)^{1/2}}$$, for $x < 0$?
Using the standard Taylor Series:
$$\sinh x = x + \frac{x^3}{3!} + \frac{x^5}{5!} + \frac{x^7}{7!} + \cdots$$
I substituted in $x = x^{1/2}$, since $x < 0$, it would simply be $x^{1/2}$
getting,
$$\sinh(x^{1/2}) = x^{1/2} + \frac{x^{3/2}}{3!} + \frac{x^{5/2}}{5!} + \frac{x^{7/2}}{7!} + \cdots$$
Then to get the Taylor series for $\sinh((-x)^{1/2})/((-x)^{1/2})$, would I just divide each term by $x^{1/2}$?
This gives me, $1+\frac{x}{3!}+\frac{x^2}{5!}+\frac{x^3}{7!}$
Is this correct?
Thanks for any help!
| As Arturo pointed out in a comment, it has to be $(-x)^{\frac{1}{2}}$ in order to be defined for $x<0$; then you have:
$$\sinh x = x + \frac{x^3}{3!} + \frac{x^5}{5!} + \frac{x^7}{7!}+\dots$$
Substituting $x$ with $(-x)^{\frac{1}{2}}$ we get:
$$\sinh (-x)^{\frac{1}{2}} = (-x)^{\frac{1}{2}} + \frac{({(-x)^{\frac{1}{2}}})^3}{3!} + \frac{({(-x)^{\frac{1}{2}}})^5}{5!} + \frac{({(-x)^{\frac{1}{2}}})^7}{7!}+\dots$$
Dividing by $(-x)^{\frac{1}{2}}$:
$$\frac{\sinh (-x)^{\frac{1}{2}}}{(-x)^{\frac{1}{2}}} = 1 + \frac{({(-x)^{\frac{1}{2}}})^2}{3!} + \frac{({(-x)^{\frac{1}{2}}})^4}{5!} + \frac{({(-x)^{\frac{1}{2}}})^6}{7!}+\dots$$
And after simplification:
$$\frac{\sinh (-x)^{\frac{1}{2}}}{(-x)^{\frac{1}{2}}} = 1 - \frac{x}{3!} + \frac{x^2}{5!} - \frac{x^3}{7!}+\dots$$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/152714",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Given an alphabet with 6 non-distinct integers, how many distinct 4-digit integers are there?
How many distinct four-digit integers can one make from the digits
$1$, $3$, $3$, $7$, $7$ and $8$?
I can't really think how to get started with this, the only way I think might work would be to go through all the cases. For instance, two $3$'s and two $7$'s as one case, one $1$, two $3$'s and one $8$ as another. This seems a bit tedious though (especially for a larger alphabet) and so I'm here to ask if there's a better way.
Thanks.
| Distinct numbers with two $3$s and two $7$s: $\binom{4}{2}=6$.
Distinct numbers with two $3$s and one or fewer $7$s: $\binom{4}{2}3\cdot2=36$.
Distinct numbers with two $7$s and one or fewer $3$s: $\binom{4}{2}3\cdot2=36$.
Distinct numbers with one or fewer $7$s and one or fewer $3$s: $4\cdot3\cdot2\cdot1=24$.
Total: $6+36+36+24=102$
With larger alphabets,
Suppose there are $a$ numbers with 4 or more in the list, $b$ numbers with exactly 3 in the list, $c$ numbers with exactly 2 in the list, and $d$ numbers with exactly 1 in the list.
Distinct numbers with all 4 digits the same: $a$
Distinct numbers with 3 digits the same: $\binom{4}{3}(a+b)(a+b+c+d-1)$
Distinct numbers with 2 pairs of digits: $\binom{4}{2}\binom{a+b+c}{2}$
Distinct numbers with exactly 1 pair of digits: $\binom{4}{2}(a+b+c)\binom{a+b+c+d-1}{2}2!$
Distinct numbers with no pair of digits: $\binom{a+b+c+d}{4}4!$
Total: $a+4(a+b)(a+b+c+d-1)+6\binom{a+b+c}{2}+12(a+b+c)\binom{a+b+c+d-1}{2}+24\binom{a+b+c+d}{4}$
Apply to the previous case: $a=b=0$, $c=2$, and $d=2$:
$0+0+6\binom{2}{2}+12(2)\binom{3}{2}+24\binom{4}{4}=102$
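The count is small enough to verify by exhaustive enumeration; a quick Python sketch (not in the original answer):
```python
from itertools import permutations

# Distinct 4-digit strings drawn from the multiset {1, 3, 3, 7, 7, 8}
print(len(set(permutations("133778", 4))))  # 102
```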
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/152768",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Quadratic equation related to physics problem - how to proceed? It's a physics-related problem, but it has a nasty equation:
Let the speed of sound be $340\dfrac{m}{s}$, then let a heavy stone fall into the well. How deep is the well when you hear the impact after $2$ seconds?
The formula for the time it takes the stone to fall and the subsequent sound of impact to travel upwards is simple enough:
$t = \sqrt{\dfrac{2s}{g}} + \dfrac{s}{v}$
for $s$ = distance, $g$ = local gravity and $t$ = time.
This translates to said nasty equation:
$gs^2 - 2sv^2 - 2gstv + gt^2v^2 = 0$
Now I need to solve this in terms of $s$, but I'm at a loss as to how to accomplish this. How to proceed?
| Hint: if you insert the values of $g, t$ and $v$ you have a quadratic equation in $s$. Even if you just regard $g, t$ and $v$ as constants, you can plug this into the quadratic formula.
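Plugging in the numbers and solving the quadratic (a Python sketch; $g=9.81\,\mathrm{m/s^2}$ is an assumed value, and the smaller root is the physical one since the sound must return within the $2$ seconds):
```python
import math

g, v, t = 9.81, 340.0, 2.0
# g s^2 - 2(v^2 + g t v) s + g t^2 v^2 = 0
a, b, c = g, -2 * (v**2 + g * t * v), g * t**2 * v**2

s = (-b - math.sqrt(b**2 - 4 * a * c)) / (2 * a)
print(s)  # ~18.5 m
```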
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/152839",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
expressing $x^3 /1000 - 100x^2 - 100x + 3$ in big theta Hello, can somebody help me in expressing $x^3/1000 - 100x^2 - 100x + 3$ in big-theta notation? It looks like $\Theta(x^3)$ to me, but at $x =0$ this polynomial gives a value of $3$, and multiplying $x^3$ by any constant won't help at all. Is there a standard way to approach this kind of problem?
| More generally, given an arbitrary real polynomial $p(x)=a_nx^n+\cdots+a_1x+a_0$ with $a_n>0$, let us denote by $M$ a number greater than all of $p$'s real roots. We have
$$\lim_{x\to\infty}\frac{p(x)}{x^n}=a_n+a_{n-1}(0)+\cdots+a_1(0)+a_0(0)=a_n>0.$$
Now $p$ is continuous and has no roots beyond $M$ so it cannot change sign beyond $M$; at the same time the limit is positive so it must be the case that $p(x)>0$ for all $x>M$. Also, $x^{-n}p(x)-a_n$ tends to zero as $x\to\infty$ and has no singularities so it must be bounded, hence also given $0<L<a_n$ there is some $N$ such that it is $\le L$ in magnitude for all $x>N$ (by the definition of a limit at infinity).
Claim:
$$-L+a_n\le \frac{p(x)}{x^n}\le a_n+\frac{|a_{n-1}|}{M}+\cdots+\frac{|a_1|}{M^{n-1}}+\frac{|a_0|}{M^n} \quad \text{for all }x>\max\{M,N\}.$$
Proof. The left inequality holds because $-L$ is a lower bound for $x^{-n}p(x)-a_n$ when $x>N$. For the right inequality, note that we may take $M>0$; then for $x>M$ we have $a_k\le|a_k|$ and $1/x^\ell<1/M^\ell$, the latter because $x\mapsto 1/x^\ell$ is a decreasing function on $x>0$ for $\ell>0$, hence $a_k/x^\ell \le |a_k|/x^\ell \le |a_k|/M^\ell$. Apply this to each term in $p(x)/x^n$ and then add the inequalities together to get the right-hand inequality above.
This demonstrates that $p(x)/x^n$ is squeezed between two positive reals for sufficiently large $x$; multiply both sides by $x^n$ and we have shown $p(x)$ fulfills the definition of $\Theta(x^n)$.
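A numeric look at the ratio $p(x)/x^3$ for the polynomial in the question (a Python sketch) makes the squeeze visible:
```python
p = lambda x: x**3 / 1000 - 100 * x**2 - 100 * x + 3

for x in (10**6, 10**8, 10**10):
    print(x, p(x) / x**3)  # approaches the leading coefficient 1/1000
```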
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/152909",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Are isomorphic structures really indistinguishable? I always believed that in two isomorphic structures what you could tell for the one you would tell for the other... is this true? I mean, I've heard about structures that are isomorphic but different with respect to some property and I just wanted to know more about it.
EDIT: Let me add clearer information about what I want to talk about. In practice, when we talk about some structured set, we can view the structure in many different ways (as lots of you observed). For example, when someone speaks about $\mathbb{R}$, one could see it as an ordered field with a particular lub property, while others may view it with more structure added (for example as a metric space or a vector space and so on). Analogously (and surprisingly!), even if we say that $G$ is a group and $G^\ast$ is a permutation group, we are talking about different mathematical objects, even if they are isomorphic as groups! In fact there are groups that are isomorphic (w.r.t. group isomorphisms) but have different properties, for example, when seen as permutation groups.
| I'm not sure, if this is what you are referring to, but here goes...
There are questions that are easy to decide in one structure, but much more difficult in another isomorphic structure. The discrete logarithm problem comes to mind. The additive group
$G_1=\mathbf{Z}_{502}$ is generated by $5$, and to a given $x\in G_1$ finding a multiplier $n$ such that
$$
5n=(5+5+\cdots 5)=x
$$
is easy, as the generalized Euclidean algorithm will do it for us.
The multiplicative group $G_2=\mathbf{Z}_{503}^*$ is also cyclic of order $502$ and also generated by $5$. Yet, to a given $x\in G_2$ the problem of finding an exponent (now an exponent as the group is multiplicative) $n$ such that
$$
5^n=(5\cdot5\cdot5\cdots5)=x
$$
is more difficult. The difference in difficulty becomes more pronounced as the size of the groups grows.
The problem is that describing an isomorphism is not enough to translate a question from one structure to the other, if you cannot also describe its inverse.
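The contrast is easy to see in code; a Python sketch (requires Python 3.8+ for the modular inverse via `pow`; the target $x=123$ is arbitrary):
```python
x = 123

# Additive group Z_502, generator 5: invert the multiplier with a modular inverse
n_add = pow(5, -1, 502) * x % 502
assert 5 * n_add % 502 == x

# Multiplicative group Z_503^*, generator 5: brute-force search for the exponent
n_mul = next(k for k in range(502) if pow(5, k, 503) == x)
assert pow(5, n_mul, 503) == x

print(n_add, n_mul)
```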
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/152980",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "13",
"answer_count": 4,
"answer_id": 2
} |
A good reference to begin analytic number theory I know a little bit about basic number theory, much about algebra/analysis, I've read most of Niven & Zuckerman's "Introduction to the theory of numbers" (first 5 chapters), but nothing about analytic number theory. I'd like to know if there would be a book that I could find (or notes from a teacher online) that would introduce me to analytic number theory's classical results. Any suggestions?
Thanks for the tips,
| I'm quite partial to Apostol's books, and although I haven't read them (yet) his analytic number theory books have an excellent reputation.
Introduction to Analytic Number Theory (Difficult undergraduate level)
Modular Functions and Dirichlet Series in Number Theory (can be considered a continuation of the book above)
I absolutely plan to read them in the future, but I'm going through some of his other books right now.
Ram Murty's Problems in Analytic Number Theory is stellar as it has a ton of problems to work out!
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/153022",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "30",
"answer_count": 8,
"answer_id": 1
} |
harmonic function question Let $u$ and $v$ be real-valued harmonic functions on $U=\{z:|z|<1\}$. Let $A=\{z\in U:u(z)=v(z)\}$. Suppose $A$ contains a nonempty open set. Prove $A=U$.
Here is what I have so far: Let $h=u-v$. Then $h$ is harmonic. Let $X$ be the set of all $z$ such that $h(z)=0$ in some open neighborhood of $z$. By our assumptions on $A$, $X$ is not empty. Let $z\in X$. Then $h(z)=0$ on some open set $V$ containing $z$. If $x\in V$, then $h(w)=0$ in some open set containing $x$, namely $V$. So $X$ is open.
I want to show $X$ is also closed but I am having trouble doing so. Any suggestions?
| Each real harmonic function $h$ on a simply connected domain determines, uniquely up to an additive constant, a holomorphic function $f\in\mathcal{O}(U)$ such that
$$
\mathrm{Im}(f)=h
$$
$$
\mathrm{Re}(f)=
\int\limits_{(x_0,y_0)}^{(x,y)}\left(\frac{\partial h}{\partial y}dx-\frac{\partial h}{\partial x}dy\right)+C
$$
If $h=0$ on some ball $B\subset A$, then the corresponding function satisfies $f=C$ on $B$. Since $B$ is open, by the uniqueness principle $f=C$ on all of $U$. Hence $h=\mathrm{Im}(f)=0$ on $U$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/153059",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 0
} |
What is $\lim_{(x,y)\to(0,0)} \frac{(x^3+y^3)}{(x^2-y^2)}$? In class, we were simply given that this limit is undefined since along the paths $y=\pm x$, the function is undefined.
Am I right to think that this should be the case for any function, where the denominator is $x^2-y^2$, regardless of what the numerator is?
Just wanted to see if this is a quick way to identify limits of this form.
Thanks for the discussion and help!
| For your function, in the domain of $f$ (so $x\ne \pm y$), to compute the limit you can set $x=r\cos\theta, y=r\sin\theta$, and plug it in. You get $\lim\limits_{r\to 0} \frac{r^3(\cos^3\theta+\sin^3\theta)}{r^2(\cos^2\theta-\sin^2\theta)} =\lim\limits_{r\to 0} \frac{r(\cos^3\theta+\sin^3\theta)}{(\cos^2\theta-\sin^2\theta)}$, and you can easily see that this is $0$ for any $\theta$ in the domain of $f$ (you need to avoid $\theta = \frac{n\pi}{2}-\pi/4$). Of course, if you are considering the whole plane, then the limit does not exist, because the function isn't even defined at $y=x$, so you can't compute the limit along that path.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/153134",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 4,
"answer_id": 2
} |
Proof of Eberlein–Smulian Theorem for a reflexive Banach spaces Looking for the proof of Eberlein-Smulian Theorem.
Searching for the proof is what I have been busy with this morning. Some of my friends recommend Haim Brezis (Functional Analysis, Sobolev Spaces and Partial
Differential Equations). After searching the book, I only found the statement of the theorem. Is the proof very difficult to grasp? Why does Haim Brezis skip it in his book?
Please I need a reference where I can find the proof in detail.
Theorem:(Eberlein-Smul'yan Theorem) A Banach space $E$ is reflexive if and
only if every (norm) bounded sequence in $E$ has a subsequence which converges
weakly to an element of $E$.
| Kôsaku Yosida, Functional Analysis, Springer 1980, Chapter V, Appendix, section 4. (This appears to be the 6th edition).
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/153194",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 2,
"answer_id": 1
} |
Why doesn't this find the mid point? I saw a simple question and decided to try an alternate method to see if I could get the same answer; however, it didn't work out how I had expected.
Given $A(4, 4, 2)~$ and $~B(6, 1, 0)$, find the coordinates of the
midpoint $M$ of the line $AB$.
I realize that this is quite easy just taking $\frac{1}{2}(A+B) = (5, \frac{5}{2}, 1)$; however, I don't understand why this doesn't give me the same answer:
If I take $\frac{1}{2}\vec{AB}~$ I would have thought that I would be half way to B from A which would be the midpoint right? but, of course I get:
$\frac{1}{2}\vec{AB} = \frac{1}{2}(2, -3, -2) = (1, -\frac{3}{2}, -1)$
Is it just because this is a directional vector which doesn't indicate position in any way, and I am trying to halve the direction/angle or something?
| That's right...your calculation doesn't take into account position in any way. You are going half the distance from $A$ to $B$, but starting at the origin, not at $A$. Try $A+\frac{1}{2}\vec{AB}$
EDIT: It occured to me that I should point out: $$A+\frac{1}{2}\vec{AB}=A+\frac{1}{2}(B-A)=\frac{1}{2}(A+B)$$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/153288",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
} |
Probability of a baseball team winning next 2 games Given their previous performance, the probability of a particular baseball team winning any given game is 4/5.
The probability that the team will win their next 2 games is...
I'm confused on how to start this question. Any help is appreciated.
| The probability of this particular baseball team winning any given game is 4/5.
Assuming the games are independent, the probability that the team will win its next 2 games is the probability of winning the 1st game $*$ the probability of winning the 2nd game:
$$P = (4/5) * (4/5)$$
$$P = 16/25$$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/153347",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Intermediate fields of cyclotomic splitting fields and the polynomials they split Consider the splitting field $K$ over $\mathbb Q$ of the cyclotomic polynomial $f(x)=1+x+x^2 +x^3 +x^4 +x^5 +x^6$. Find the lattice of subfields of K and for each subfield $F$ find polynomial $g(x) \in \mathbb Z[x]$ such that $F$ is the splitting field of $g(x)$ over $\mathbb Q$.
My attempt: We know the Galois group to be the cyclic group of order 6. It has two proper subgroups of order 2 and 3 and hence we are looking for only two intermediate field extensions of degree 3 and 2.
$\mathbb Q[\zeta_7+\zeta_7^{-1}]$ is a real subfield.
$\mathbb Q[\zeta_7-\zeta_7^{-1}]$ is also a subfield.
How do I calculate the degree and minimal polynomial?
| Somehow, the theme of symmetrization often doesn't come across very clearly in many expositions of Galois theory. Here is a basic definition:
Definition. Let $F$ be a field, and let $G$ be a finite group of automorphisms of $F$. The symmetrization function $\phi_G\colon F\to F$ associated to $G$ is defined by the formula
$$
\phi_G(x) \;=\; \sum_{g\in G} g(x).
$$
Example. Let $\mathbb{C}$ be the field of complex numbers, and let $G\leq \mathrm{Aut}(\mathbb{C})$ be the group $\{\mathrm{id},c\}$, where $\mathrm{id}$ is the identity automorphism, and $c$ is complex conjugation. Then $\phi_G\colon\mathbb{C}\to\mathbb{C}$ is defined by the formula
$$
\phi_G(z) \;=\; \mathrm{id}(z) + c(z) \;=\; z+\overline{z} \;=\; 2\,\mathrm{Re}(z).
$$
Note that the image of $\phi$ is the field of real numbers, which is precisely the fixed field of $G$. This example generalizes:
Theorem. Let $F$ be a field, let $G$ be a finite group of automorphisms of $F$, and let $\phi_G\colon F\to F$ be the associated symmetrization function. Then the image of $\phi_G$ is contained in the fixed field $F^G$. Moreover, if $F$ has characteristic zero, then $\mathrm{im}(\phi_G) = F^G$.
Of course, since $\phi_G$ isn't a homomorphism, it's not always obvious how to compute a nice set of generators for its image. However, in small examples the goal is usually just to produce a few elements of $F^G$, and then prove that they generate.
Let's apply symmetrization to the present example. You are interested in the field $\mathbb{Q}(\zeta_7)$, whose Galois group is cyclic of order $6$. There are two subgroups of the Galois group to consider:
The subgroup of order two: This is the group $\{\mathrm{id},c\}$, where $c$ is complex conjugation. You have already used your intuition to guess that $\mathbb{Q}(\zeta_7+\zeta_7^{-1})$ is the corresponding fixed field. The basic reason that this works is that $\zeta_7+\zeta_7^{-1}$ is the symmetrization of $\zeta_7$ with respect to this group.
The subgroup of order three: This is the group $\{\mathrm{id},\alpha,\alpha^2\}$, where $\alpha\colon\mathbb{Q}(\zeta_7)\to\mathbb{Q}(\zeta_7)$ is the automorphism defined by $\alpha(\zeta_7) = \zeta_7^2$. (Note that this indeed has order three, since $\alpha^3(\zeta_7) = \zeta_7^8 = \zeta_7$.) The resulting symmetrization of $\zeta_7$ is
$$
\mathrm{id}(\zeta_7) + \alpha(\zeta_7) + \alpha^2(\zeta_7) \;=\; \zeta_7 + \zeta_7^2 + \zeta_7^4.
$$
Therefore, the corresponding fixed field is presumably $\mathbb{Q}(\zeta_7 + \zeta_7^2 + \zeta_7^4)$.
All that remains is to find the minimal polynomials of $\zeta_7+\zeta_7^{-1}$ and $\zeta_7 + \zeta_7^2 + \zeta_7^4$. This is just a matter of computing powers until we find some that are linearly dependent. Using the basis $\{1,\zeta_7,\zeta_7^2,\zeta_7^3,\zeta_7^4,\zeta_7^5\}$, we have
$$
\begin{align*}
\zeta_7 + \zeta_7^{-1} \;&=\; -1 - \zeta_7^2 - \zeta_7^3 - \zeta_7^4 - \zeta_7^5 \\
(\zeta_7 + \zeta_7^{-1})^2 \;&=\; 2 + \zeta_7^2 + \zeta_7^5 \\
(\zeta_7 + \zeta_7^{-1})^3 \;&=\; -3 - 3\zeta_7^2 - 2\zeta_7^3 - 2\zeta_7^4 - 3\zeta_7^5
\end{align*}
$$
In particular, $(\zeta_7+\zeta_7^{-1})^3 + (\zeta_7+\zeta_7^{-1})^2 - 2(\zeta_7+\zeta_7^{-1}) - 1 = 0$, so the minimal polynomial for $\zeta_7+\zeta_7^{-1}$ is $x^3 + x^2 - 2x - 1$. Similarly, we find that
$$
(\zeta_7 + \zeta_7^2 + \zeta_7^4)^2 \;=\; -2 - \zeta_7 - \zeta_7^2 - \zeta_7^4
$$
so the minimal polynomial for $\zeta_7 + \zeta_7^2 + \zeta_7^4$ is $x^2+x+2$.
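Both minimal polynomials can be confirmed symbolically; a sympy sketch (not part of the original answer):
```python
import sympy as sp

x = sp.symbols('x')
zeta = sp.exp(2 * sp.pi * sp.I / 7)

print(sp.minimal_polynomial(zeta + 1 / zeta, x))           # x**3 + x**2 - 2*x - 1
print(sp.minimal_polynomial(zeta + zeta**2 + zeta**4, x))  # x**2 + x + 2
```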
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/153410",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 1,
"answer_id": 0
} |
How does trigonometric substitution work? I have read my book, watched the MIT lecture and read Paul's Online Notes (which was pretty much worthless, no explanations just examples) and I have no idea what is going on with this at all.
I understand that if I need to find something like $$\int \frac { \sqrt{9-x^2}}{x^2}dx$$
I can't use any other method except this one. What I do not get is pretty much everything else.
It is hard to visualize the bounds of the substitution that will keep it positive but I think that is something I can just memorize from a table.
So this is similar to u substitution except that I am not using a single variable but expressing x in the form of a trig function. How does this not change the value of the problem? To me it seems like it would, algebraically how is something like $$\int \frac { \sqrt{9-x^2}}{x^2}dx$$ the same as $$\int \frac {3\cos x}{9\sin^2 x}3\cos x \, dx$$
It feels like if I were to put in numbers for $x$ that it would be a different answer.
Anyways just assuming that works I really do not understand at all what happens next.
"Returning" to the original variable to me should just mean plugging back in what you had from before the substitution but for whatever unknown and unexplained reason this is not true. Even though on problems before I could just plug back in my substitution of $u = 2x$, $\sin2u = \sin4x$ that would work fine but for whatever reason no longer works.
I am not expected to do some pretty complex trigonometric manipulation with the use of a triangle which I do not follow at all, luckily though this process is not explained at all in my book so I think I am just suppose to memorize it.
Then when it gets time for the answer there is no explanation at all but out of nowhere inverse sin comes in for some reason.
$$\frac {- \sqrt{9-x^2}}{x} - \sin^{-1} (x/3) +c$$
I have no idea happened but neither does the author apparently since there is no explanation.
| There are some basic trigonometric identities which are not hard to memorise, one of the easiest and most important being $\,\,\cos^2x+\sin^2x=1\,\,$ , also known as the Trigonometric Pythagoras Theorem.
From here we get $\,1-\sin^2x=\cos^2x\,$ , so (watch the algebra!)$$\sqrt{9-x^2}=\sqrt{9\left(1-\left(\frac{x}{3}\right)^2\right)}=3\sqrt{1-\left(\frac{x}{3}\right)^2}$$From here, the substitution proposed for the integral is $$\displaystyle{\sin\theta=\frac{x}{3}\Longrightarrow \cos\theta\, d\theta=\frac{1}{3}dx}\,\,,\,x=3\sin\theta\,,\,dx=3\cos\theta\,d\theta$$ so in the integral we get$$\int \frac{\sqrt{9-x^2}}{x^2}\,dx =\int \frac{3\sqrt{1-\left(\frac{x}{3}\right)^2}}{x^2}\to\int\frac{\rlap{/}{3}\sqrt{1-\sin^2\theta}}{\rlap{/}{9}\sin^2\theta}\,\rlap{/}{3}\cos\theta\,d\theta=$$$$\int\frac{\cos\theta\,\cos\theta}{\sin^2\theta}\,d\theta$$which is what you have in your book...:)
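For what it's worth, a computer algebra system reproduces the book's antiderivative; a sympy sketch (the output may differ by an additive constant or an algebraically equivalent form):
```python
import sympy as sp

x = sp.symbols('x')
print(sp.integrate(sp.sqrt(9 - x**2) / x**2, x))
# expected: -sqrt(9 - x**2)/x - asin(x/3)
```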
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/153468",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "23",
"answer_count": 6,
"answer_id": 5
} |
A surjective homomorphism between finite free modules of the same rank I know a proof of the following theorem using determinants.
For some reason, I'd like to know a proof without using them.
Theorem
Let $A$ be a commutative ring.
Let $E$ and $F$ be finite free modules of the same rank over $A$.
Let $f:E → F$ be a surjective $A$-homomorphism.
Then $f$ is an isomorphism.
| This answer is not complete. See the comments below.
The modules $E$ and $F$ being free of finite rank $n$ over $A$ means that they each have a finite basis over $A$. Take $y \in F$; since $f$ is surjective, some $x \in E$ maps to $y$. Pick a basis $\langle e_1, \dots, e_n \rangle$ of $E$ over $A$, so $x = a_1e_1 + \dotsb + a_ne_n$ for some $a_i \in A$. Then for our arbitrary element $y \in F$,
$$
y = f(a_1e_1 + \dotsb + a_ne_n) = a_1f(e_1) + \dotsb + a_nf(e_n)
\,
$$
so $\langle f(e_1),\dotsc, f(e_n)\rangle$ generates $F$. Since $F$ has the same rank as $E$, these generators must form a basis (this needs to be proven. See darij grinberg's comment below). Since these generators form a basis
$$
0 = f(\alpha_1e_1 + \dotsb + \alpha_ne_n) = \alpha_1f(e_1) + \dotsb + \alpha_nf(e_n)
$$
only when the $\alpha_i$ are all zero, so $f$ is injective and hence an isomorphism. ${_\square}$
I don't see why we need $A$ to be a commutative ring. Since we're specifying that $E$ and $F$ have the same rank, I assume they have the invariant dimension property; otherwise commutativity would imply this. Also we're only talking about a single map $f \colon E \to F$ and don't need to talk about the module structure on $\mathrm{Hom}_R(E,F)$, for which we would need $R$ to be commutative.
Also I've seen it asked as an exercise, is this still true if we assume $f$ is injective instead of surjective? The answer is no, looking at the counterexample $\mathbf{Z} \to \mathbf{Z}$ where $1 \mapsto 2$, regarding $\mathbf{Z}$ as a rank $1$ free module over itself.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/153516",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11",
"answer_count": 2,
"answer_id": 1
} |
How to solve $x_j y_j = \sum_{i=1}^N x_i$ I have N equations and am having trouble with finding a solution.
$$\left\{\begin{matrix}
x_1 y_1 = \sum_{i=1}^N x_i\\
x_2 y_2 = \sum_{i=1}^N x_i\\
\vdots\\
x_N y_N = \sum_{i=1}^N x_i
\end{matrix}\right.$$
where $x_i$, ($i = 1, 2, \cdots, N$) is an unknown and $y_i$, ($i = 1, 2, \cdots, N$) is a known variable.
Given the $y_i$'s, I have to find the $x_i$'s, but I don't know where to start, or even whether a solution exists.
| *
*$x_i= 0$ $\forall i$ is always a solution.
2 Suppose that $y_i \ne 0$ $\forall i$. Then $x_1 = \frac{1}{y_1} \sum x_i$, and summing over all indices we get $\sum x_i = \sum \frac{1}{y_i} \sum x_i$. So we must either have $\sum x_i = 0$ or $\sum \frac{1}{y_i} = 1$
2.a The case $\sum x_i = 0$ gives only the trivial solution $x_i=0 $
2.b Otherwise, if we are given $\sum \frac{1}{y_i} = 1$, then $x_i = \frac{\alpha}{y_i}$ is a solution for any $\alpha$
3 If $y_j=0$ for some $j$, then we must have $\sum x_i =0$ and $x_i=0$ for all $i$ with $y_i\ne 0$. This provides extra solutions if there is more than one zero-valued $y_j$. E.g., say $y_1=y_2=y_3=0$ and $y_j \ne 0$ for $j>3$; then any ${\bf x}$ with $x_j=0$ for $j>3$ and $x_1+x_2+x_3=0$ is a solution.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/153590",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
} |
Simplify integral of inverse of derivative. I need to simplify function $g(x)$ which I describe below.
Let $F(y)$ be the inverse of $f'(\cdot)$ i.e. $F = \left( f'\right)^{-1}$ and $f(x): \mathbb{R} \to \mathbb{R}$, then $$g(x) =\int_a^x F(y)dy$$ Is it possible to simplify $g(x)$?
| Let $t = F(y)$. Then we get that $y = F^{-1}(t) = f'(t)$. Hence, $dy = f''(t) dt$. Hence, we get that
\begin{align}
g(x) & = \int_{F(a)}^{F(x)} t f''(t) dt\\
& = \left. \left(t f'(t) - f(t) \right) \right \rvert_{F(a)}^{F(x)}\\
& = F(x) f'(F(x)) - f(F(x)) - (F(a) f'(F(a)) - f(F(a)))\\
& = xF(x) - f(F(x)) - aF(a) + f(F(a))
\end{align}
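As a sanity check (not in the original answer), take $f(x)=x^2/2$, so that $f'(x)=x$ and $F(y)=y$. Directly, $g(x)=\int_a^x y\,dy=\frac{x^2-a^2}{2}$, while the formula gives
$$xF(x) - f(F(x)) - aF(a) + f(F(a)) = x^2-\frac{x^2}{2}-a^2+\frac{a^2}{2}=\frac{x^2-a^2}{2},$$
as expected. Note also that $xF(x)-f(F(x))$ is precisely the Legendre transform $f^*(x)$ when $f$ is convex, so the result can be read as $g(x)=f^*(x)-f^*(a)$.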
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/153625",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Probability of getting two consecutive 7s without getting a 6 when two dice are rolled Two dice are rolled, over and over, until either A or B wins. A wins if we get two consecutive 7s, and B wins if we get a single 6 at any time.
What is the probability of A winning the game?
| Let $p_A$ and $p_B$ the winning probabilities for $A$ and $B$, and $p_6$ and $p_7$ the probabilities to roll 6 and 7.
Now, by regarding the different possibilities for the first roll (and if one starts with 7, also the second roll), we find:
$$p_A=p_6 \cdot 0 + p_7 (p_6 \cdot 0 +p_7+(1-p_6-p_7)p_A) + (1-p_6-p_7)p_A.$$
This is a linear equation for $p_A$.
We have $p_6=\frac5{36}$, $p_7=\frac{6}{36}$, and therefore:
$$p_A=\frac{p_7^2}{p_7p_6+p_7^2+p_6} =\frac{6}{41}.$$
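Not in the original answer: a minimal Monte Carlo sketch in Python to corroborate $p_A=\frac{6}{41}\approx 0.1463$.

import random

def a_wins():
    # roll two dice until B sees a sum of 6, or A sees two consecutive 7s
    prev7 = False
    while True:
        s = random.randint(1, 6) + random.randint(1, 6)
        if s == 6:
            return False           # B wins
        if s == 7:
            if prev7:
                return True        # A wins: two consecutive 7s
            prev7 = True
        else:
            prev7 = False

n = 10**6
print(sum(a_wins() for _ in range(n)) / n)   # ~ 0.146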
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/153729",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 1
} |
Simple geometric proof for Snell's law of refraction Snell's law of refraction can be derived from Fermat's principle
that light travels paths that minimize the time using simple
calculus. Since Snell's law only involves sines I wonder whether
this minimum problem has a simple geometric solution.
| Perhaps this will help, if you are looking at a non Calculus approach.
Consider two parallel rays $A$ and $B$ coming through the medium $1$ (say air) to the medium $2$ (say water). Upon arrival at the interface $\mathcal{L}$ between the two media (air and water), they continue their parallel course in the directions $U$ and $V$ respectively.
Let us assume that at time $t=0$, light ray $A$ arrives at the interface $\mathcal{L}$ at point $C$, while ray $B$ is still shy of the surface by a distance $PD$. $B$ travels at the speed $v_{1}=\frac{c}{n_{1}}$ and arrives at $D$ in $t$ seconds. During this time interval, ray $A$ continues its journey through the medium $2$ at a speed $v_{2}=\frac{c}{n_{2}}$ and reaches the point $Q$.
We can formulate the rest, geometrically (looking at the parallel lines) from the figure. Let $x$ denote the distance between $C$ and $D$.
\begin{eqnarray*}
x \sin\left(\theta_{i}\right) &=& PD \\
&=& v_{1} t \\
&=& \frac{c}{n_{1}} t \\
x \sin\left(\theta_{r}\right) &=& CQ \\
&=& v_{2} t \\
&=& \frac{c}{n_{2}} t
\end{eqnarray*}
Thus,
\begin{eqnarray*}
n_{1} \sin\left(\theta_{i}\right) &=& \frac{c}{x} t \\
n_{2} \sin\left(\theta_{r}\right) &=& \frac{c}{x} t
\end{eqnarray*}
Rearranging this takes us to Snell's law as we know it.
\begin{eqnarray*}
\frac{n_{2} }{n_{1}} &=& \frac{\sin\left(\theta_{i}\right) }{ \sin\left(\theta_{r}\right)}
\end{eqnarray*}
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/153775",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "19",
"answer_count": 2,
"answer_id": 1
} |
Derivative of a linear transformation. We define derivatives of functions as linear transformations of $R^n \to R^m$. Now, talking about the derivative of such a linear transformation:
as we know, if $x \in R^n$, then
$A(x+h)-A(x)=A(h)$, because of the linearity of $A$, which implies that $A'(x)=A$, where $A'$ is the derivative of $A$.
What does this mean? I am not getting the point, I think.
| $A$, seen as a linear map, is its own derivative: at every point $x$, the derivative $A'(x)$ is the (constant) matrix $A$. This is because the derivative at a point is, by definition, the linear map that best approximates the function there, and a linear map is already its own best linear approximation.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/153836",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "20",
"answer_count": 2,
"answer_id": 1
} |
Prove: The weak closure of the unit sphere is the unit ball. I want to prove that in an infinite dimensional normed space $X$, the weak closure of the unit sphere $S=\{ x\in X : \| x \| = 1 \}$ is the unit ball $B=\{ x\in X : \| x \| \leq 1 \}$.
Here is my attempt with what I know:
I know that the weak closure of $S$ is a subset of $B$ because $B$ is norm closed and convex, so it is weakly closed, and $B$ contains $S$.
But I need to show that $B$ is a subset of the weak closure of $S$.
For small $\epsilon > 0$ and some $x^*_1,...,x^*_n \in X^*$, I let $U=\{ x : |\langle x, x^*_i \rangle| < \epsilon ,\ i = 1,...,n \} $;
then $U$ is a weak neighbourhood of $0$.
What I think I need to show now is that $U$ intersects $S$, but I don't know how.
| With the same notation as in your question: Notice that if $x_i^*(x) = 0$ for all $i$, then $x \in U$, and therefore the intersection of the kernels $\bigcap_{i=1}^n \mathrm{ker}(x_i^*)$ is contained in $U$. Since the codimension of $\mathrm{ker}(x^*_i)$ is at most $1$, the intersection has codimension at most $n$ (exercise: prove this). But since $X$ is infinite dimensional, this means the intersection is infinite dimensional, and in particular contains a line. Since any line going through $0$ intersects $S$, $U$ intersects $S$.
The same argument can be applied to any point in $B$ (any line going through a point in $B$ intersects $S$), and since you've proved the other inclusion, the weak closure of $S$ is $B$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/153889",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "25",
"answer_count": 1,
"answer_id": 0
} |
History of Modern Mathematics Available on the Internet I have been meaning to ask this question for some time, and have been spurred to do so by Georges Elencwajg's fantastic answer to this question and the link contained therein.
In my free time I enjoy reading historical accounts of "recent" mathematics (where, to me, recent means within the last 100 years). A few favorites of mine being Alexander Soifer's The Mathematical Coloring Book, Allyn Jackson's two part mini-biography of Alexander Grothendieck (Part I and Part II) and Charles Weibel's History of Homological Algebra.
My question is then:
What freely available resources (i.e. papers, theses, articles) are there concerning the history of "recent" mathematics on the internet?
I would like to treat this question in a manner similar to my question about graph theory resources, namely as a list of links to the relevant materials along with a short description. Perhaps one person (I would do this if necessary) could collect all the suggestions and links into one answer which could be used a repository for such materials.
Any suggestions I receive in the comments will listed below.
Suggestions in the Comments:
*[Gregory H. Moore, The emergence of open sets, closed sets, and limit points in analysis and topology](http://mcs.cankaya.edu.tr/~kenan/Moore2008.pdf)
| Babois's thesis on the birth of the cohomology of groups.
Beaulieu on Bourbaki
Brechenmacher on the history of matrices
Demazure's eulogy of Henri Cartan
Serre's eulogy of Henri Cartan
Dolgachev on Cremona and algebraic cubic surfaces
The Hirzebruch-Atiyah correspondence on $K$-theory
Krömer's thesis on the beginnings of category theory
Raynaud on Grothendieck and schemes.
Rubin on the solving of Fermat's last theorem.
Schneps's review of the book The Grothendieck-Serre Correspondence
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/153958",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "22",
"answer_count": 2,
"answer_id": 1
} |
Integral of $\int_0^1 x\sqrt{2- \sqrt{1-x^2}}dx$ I have no idea how to do this; it seems so complex I do not know what to do.
$$\int_0^1 x\sqrt{2- \sqrt{1-x^2}}dx$$
I tried to do double trig identity substitution but that did not seem to work.
| Here is how I would do it, and for simplicity I would simply look at the indefinite integral.
First make the substitution $u = x^2$ so that $du = 2xdx$. We get: $$\frac{1}{2} \int \sqrt{2-\sqrt{1-u}} du$$ Then, make the substitution $v = 1-u$ so that $dv = -du$. We get: $$-\frac{1}{2} \int \sqrt{2 - \sqrt{v}} dv$$ Then make the substitution $w = \sqrt{v}$ so that $dw = \frac{1}{2\sqrt{v}} dv$ meaning that $dv = 2w \text{ } dw$. So we get: $$-\int \sqrt{2-w} \text{ } w \text{ } dw$$ Now make the substitution $s = 2-w$ so that $ds = -dw$ to get: $$-\int \sqrt{s}(s-2) ds$$ The rest should be straightforward.
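For completeness (not in the original answer): $-\int \sqrt{s}\,(s-2)\,ds = \int \left(2s^{1/2}-s^{3/2}\right)ds = \frac{4}{3}s^{3/2}-\frac{2}{5}s^{5/2}+C$. Tracking the limits through the substitutions, $x:0\to 1$ corresponds to $s:1\to 2$, so the original definite integral equals
$$\left[\frac{4}{3}s^{3/2}-\frac{2}{5}s^{5/2}\right]_1^2=\frac{16\sqrt{2}-14}{15}\approx 0.575.$$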
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/154015",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
} |
Sum with binomial coefficients: $\sum_{k=1}^m \frac{1}{k}{m \choose k} $ I got this sum, in some work related to another question:
$$S_m=\sum_{k=1}^m \frac{1}{k}{m \choose k} $$
Are there any known results about this (bounds, asymptotics)?
| Consider the following random experiment. First, one chooses a nonempty subset $X$ of $\{1,2,\ldots,m\}$, each with equal probability. Then, one uniformly randomly selects an element $n$ of $X$. The event of interest is when $n=\max(X)$.
Fix $k\in\{1,2,\ldots,m\}$. The probability that $|X|=k$ is $\frac{1}{2^m-1}\,\binom{m}{k}$. The probability that the maximum element of $X$, given $X$ with $|X|=k$, is chosen is $\frac{1}{k}$. Consequently, the probability that the desired event happens is given by $$\sum_{k=1}^m\,\left(\frac{1}{2^m-1}\,\binom{m}{k}\right)\,\left(\frac{1}{k}\right)=\frac{S_m}{2^m-1}\,.$$
Now, consider a fixed element $n\in\{1,2,\ldots,m\}$. Then, there are $\binom{n-1}{j-1}$ possible subsets $X$ of $\{1,2,\ldots,m\}$ such that $n=\max(X)$ and $|X|=j$. The probability of getting such an $X$ is $\frac{\binom{n-1}{j-1}}{2^m-1}$. The probability that $n=\max(X)$, given $X$, is $\frac{1}{j}$. That is, the probability that $n=\max(X)$ is $$\begin{align}
\sum_{j=1}^{n}\,\left(\frac{\binom{n-1}{j-1}}{2^m-1}\right)\,\left(\frac{1}{j}\right)
&=\frac{1}{2^m-1}\,\sum_{j=1}^n\,\frac1j\,\binom{n-1}{j-1}=\frac{1}{2^m-1}\,\left(\frac{1}{n}\,\sum_{j=1}^n\,\frac{n}{j}\,\binom{n-1}{j-1}\right)
\\
&=\frac{1}{2^m-1}\,\left(\frac{1}{n}\,\sum_{j=1}^n\,\binom{n}{j}\right)=\frac{1}{2^m-1}\left(\frac{2^n-1}{n}\right)\,.
\end{align}$$
Finally, it follows that $\frac{S_m}{2^m-1}=\sum_{n=1}^m\,\frac{1}{2^m-1}\left(\frac{2^n-1}{n}\right)$. Hence,
$$\sum_{k=1}^m\,\frac{1}{k}\,\binom{m}{k}=S_m=\sum_{n=1}^m\,\left(\frac{2^n-1}{n}\right)\,.$$
It can then be shown by induction on $m>3$ that $\frac{2^{m+1}}{m}<S_m<\frac{2^{m+1}}{m}\left(1+\frac{2}{m}\right)$.
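Not in the original answer: a short numeric check of the identity and of the stated bounds, in Python.

from math import comb

for m in range(1, 20):
    lhs = sum(comb(m, k) / k for k in range(1, m + 1))
    rhs = sum((2**n - 1) / n for n in range(1, m + 1))
    assert abs(lhs - rhs) < 1e-9 * rhs
    if m > 3:  # the bounds stated above for m > 3
        assert 2**(m + 1) / m < lhs < 2**(m + 1) / m * (1 + 2 / m)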
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/154060",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "18",
"answer_count": 7,
"answer_id": 1
} |
probability of a horse winning a race. Let's suppose ten horses are participating in a race and each horse has an equal chance
of winning the race. I am required to find the following:
(a) the probability that horse A wins the race followed by horse B.
(b) the probability that horse C becomes either first or second in the race.
I know there are $10 \cdot 9 \cdot 8 $ ways of having first, second or third.
Since each horse has an equal chance of winning, each has probability of 1/10. Would I be right in saying that the probability that A wins followed by B is $\frac{1}{10} \cdot \frac{1}{10} $?
Is it okay if I do this for (b)? $\frac{1}{10} +\frac{1}{10} $?
| The answer to (a) is $1/90$ because of the non-replacement method: horse A has a $1/10$ chance of being first, and horse B is then one of the $9$ remaining horses, with a $1/9$ chance of being second. Multiplying $1/10$ times $1/9$ gets $1/90$.
The answer to (b) is an addition problem, since the events "C is first" and "C is second" are mutually exclusive. By symmetry each horse is equally likely to occupy any given finishing position, so each event has probability $1/10$ (for second place: $\frac{9}{10}\cdot\frac{1}{9}=\frac{1}{10}$), and the total is $\frac{1}{10}+\frac{1}{10}=\frac{1}{5}$. This is just a statistical question, not a real one, because in practice the percentages change based on post position, horse ability, jockey, trainer, etc.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/154107",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 3
} |
Subring of polynomials Let $k$ be a field and $A=k[X^3,X^5] \subseteq k[X]$.
Prove that:
a. $A$ is a Noetherian domain.
b. $A$ is not integrally closed.
c. $dim(A)=?$ (the Krull dimension).
I suppose that the first follows from $A$ being a subring of $k[X]$, but I don't know about the rest.
Thank you in advance.
| a) Not every subring of a noetherian ring is noetherian (there are plenty of counterexamples), so this doesn't work here. Instead, use Hilbert's Basis Theorem.
b) The element $X^2 = \frac{X^5}{X^3}$ is in $\mathrm{Quot}(A)$. Try to show that it is integral over $A$, but not in $A$.
c) The dimension is the transcendence degree of $\mathrm{Quot}(A)$ over $k$. But this field is easy to compute.
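For b), one concrete way to carry this out (not in the original answer): $X^2$ is a root of the monic polynomial $T^3 - X^6 \in A[T]$, since $X^6 = X^3\cdot X^3 \in A$; but $X^2 \notin A$, because every element of $A$ is a $k$-linear combination of monomials $X^m$ with $m$ in the numerical semigroup generated by $3$ and $5$, which does not contain $2$.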
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/154162",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Piecewise functions: Got an example of a real world piecewise function? Looking for something beyond a contrived textbook problem concerning jelly beans or equations that do not represent anything concrete. Not just a piecewise function for its own sake. Anyone?
| As Wim mentions in the comments, piecewise polynomials are used a fair bit in applications. In designing profiles and shapes for cars, airplanes, and other such devices, one usually uses pieces of Bézier or B-spline curves (or surfaces) during the modeling process, for subsequent machining. In fact, the continuity/smoothness conditions for such curves (usually continuity up to the second derivative) are important here, since during machining, an abrupt change in the curvature can cause the material for the modeling, the mill, or both, to crack (remembering that velocity and acceleration are derivatives of position with respect to time might help to understand why you want smooth curves during machining).
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/154217",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "23",
"answer_count": 18,
"answer_id": 4
} |
Numerical solution of the Laplace equation on circular domain I was solving the Laplace equation in MATLAB numerically. However, I have problems when the domain is not rectangular.
The equation is as follows:
$$
\frac{\partial^2 u}{\partial x^2}+\frac{\partial^2 u}{\partial y^2}=0
$$
domain is circular
$$
x^2 + y^2 < 16
$$
and boundary condition
$$
u(x,y)= x^2y^2
$$
How should I start with solving this equation numerically?
| Perhaps begin by rewriting the problem in polar coordinates:
$$\frac{\partial^{2}u}{\partial r^{2}}+\frac{1}{r}\frac{\partial u}{\partial r}+\frac{1}{r^{2}}\frac{\partial^{2}u}{\partial\theta^{2}}=0$$
$$r^2<16$$
$$\left. u(r,\theta)\right|_D=r^4\cos^2\theta\sin^2\theta$$
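Not part of the original answer (and the question asks about MATLAB, but the idea carries over): a minimal Gauss–Seidel finite-difference sketch in Python for the polar form above. The grid sizes, sweep count, and the crude closure at the center (via the mean-value property of harmonic functions) are all assumptions.

import numpy as np

R, nr, nt = 4.0, 40, 120                 # radius; assumed grid resolution
r = np.linspace(R / nr, R, nr)           # rings, avoiding r = 0
th = np.linspace(0, 2 * np.pi, nt, endpoint=False)
dr, dt = r[1] - r[0], th[1] - th[0]

u = np.zeros((nr, nt))
u[-1] = R**4 * np.cos(th)**2 * np.sin(th)**2   # boundary data u = x^2 y^2

for _ in range(5000):                    # Gauss-Seidel sweeps (assumed count)
    for i in range(nr - 1):              # boundary ring i = nr-1 stays fixed
        # one step below ring 0 is the center, estimated by the ring mean
        um = u[i - 1] if i > 0 else np.full(nt, u[0].mean())
        up = u[i + 1]
        lap_t = (np.roll(u[i], 1) + np.roll(u[i], -1)) / (r[i] * dt)**2
        coef = 2 / dr**2 + 2 / (r[i] * dt)**2
        # discretization of u_rr + u_r/r + u_tt/r^2 = 0, solved for u[i]
        u[i] = ((up + um) / dr**2 + (up - um) / (2 * r[i] * dr) + lap_t) / coef

The result can be checked against the exact harmonic extension here: writing the boundary data as $\frac{r^4}{8}(1-\cos 4\theta)$ gives $u(r,\theta)=32-\frac{r^4}{8}\cos 4\theta$.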
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/154257",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Reflections generating isometry group I was reading an article and it states that every isometry of the upper half plane model of the hyperbolic plane is a composition of reflections in hyperbolic lines, but does not seem to explain why this is true. Could anyone offer any insight? Thanks.
| An isometry $\phi:M\to N$ between connected Riemannian manifolds $M$ and $N$ is completely determined by its value at a single point $p$ and its differential at $d\phi_p$.
Take any isometry $\phi$ of $\mathbb{H}^2$. Connect $i$ and $\phi(i)$ by a (unique) shortest geodesic and let $C$ be a perpendicular bisector of the connecting geodesic. Then the reflection $r_C$ across $C$ maps $i$ to $\phi(i)$. Now take an orthonormal basis $e_j$ at $i$. It is mapped to $d\phi e_j\in T_{\phi(i)}\mathbb{H}^2$. Linear algebra tells us that in $T_{\phi(i)}\mathbb{H}^2$ we can map $d\phi e_j$ to $dr_C e_j$ by a reflection across a line (if $\phi$ is orientation-preserving) or by a rotation (if $\phi$ is orientation-reversing). A two-dimensional rotation can be written as a composition of two reflections.
By the exponential map, the lines across which we're reflecting map to geodesics in $\mathbb{H}^2$ and the reflections extend to reflections of $\mathbb{H}^2$ across those geodesics.
We have therefore written $\phi$ as a composition of reflections.
In some cases, e.g. in Thurston, Three-Dimensional Geometry and Topology, the isometry group of $\mathbb{H}^2$ is defined as the group generated by reflections across circles (e.g., in the Poincare model). Chapter two of that book has a discussion of hyperbolic geometry and exercises comparing the various perspectives on isometries, e.g., as $$\mathrm{PSL}(2;\mathbb{R}) \cong SO^{+}(2,1) = \mbox{Möbius transformations with real coefficients} = \langle\mbox{refl. across circles}\rangle.$$You might also check Chapter B (I think) of Benedetti and Petronio, Lectures on Hyperbolic Geometry.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/154327",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 2,
"answer_id": 0
} |
Is Koch snowflake a continuous curve? For the Koch snowflake, does there exist a continuous map from $[0,1]$ to it?
The actual construction of the map may be impossible, but how can one establish the existence of such a continuous map? Or can we consider the limit of a sequence of continuous maps? But a sequence of continuous maps may not have a continuous limit.
| Consider the snowflake curve as the limit of the curves $(\gamma_n)_{n\in \mathbb N}$, in the usual way, starting with $\gamma_0$ which is just a equilateral triangle of side length 1. Then each $\gamma_n$ is piecewise linear, consisting of $3\cdot 4^n$ pieces of length $3^{-n}$ each; for definiteness let us imagine that we parameterize it such that $|\gamma_n'(t)| = 3(\frac 43)^n$ whenever it exists.
Now, it always holds that $|\gamma_{n+1}(t)-\gamma_n(t)|\le 3^{-n}$ for every $t$ (because each step of the iteration just changes the curve between two corners in the existing curve, but keeps each corner and its corresponding parameter value unchanged). This means that the $\gamma_n$'s converge uniformly towards their pointwise limit: At every $t$ the distance between $\gamma_n(t)$ and $\lim_{i\to\infty}\gamma_i(t)$ is at most $\sum_{i=n}^\infty (1/3)^i$ which is independent of $t$ and goes to $0$ as $n\to\infty$.
Because uniform convergence preserves continuity, the limiting curve is a continuous function from $[0,1]$ to the plane.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/154382",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10",
"answer_count": 1,
"answer_id": 0
} |
"8 Dice arranged as a Cube" Face-Sum Equals 14 Problem I found this here:
Sum Problem
Given eight dice. Build a $2\times 2\times2$ cube, so that the sum of the points on each side is the same.
$\hskip2.7in$
Here is one of 20 736 solutions with the sum 14.
You find more at the German magazine "Bild der Wissenschaft 3-1980".
Now my question:
Is $14$ the only possible face sum? At least, in the example given, it seems to be related to the fact that on every face two dice-pairs show up, having $n$ and $7-n$ pips. Is this necessary? Sufficient it is...
| No, 14 is not the only possibility.
For example:
Arrange the dice, so that you only see 1,2 and 3 pips and all the 2's are on the upper and lower face of the big cube. This gives you face sum 8.
Please ask your other questions as separate questions if you are still interested.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/154451",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10",
"answer_count": 2,
"answer_id": 0
} |
Computing the length of a finite group Can someone suggest a GAP or MAGMA command (or code) to obtain the length $l(G)$ of a finite group $G$, i.e. the maximum length of a strictly descending chain of subgroups in $G$?
Thanks in advance.
| Just to get you started, here is a very short recursive Magma function to compute
this. You could do something similar in GAP. Of course, it will only work in reasonable time for small groups. On my computer it took about 10 seconds to do $A_8$. To do better you would need to do something more complicated like working up through the subgroup lattice. It is not an easy function to compute exactly.
Len := function(G)
    // the trivial group has length 0
    if #G eq 1 then return 0; end if;
    // $$ is Magma's recursive self-reference:
    // l(G) = 1 + max of l(M) over the maximal subgroups M of G
    return 1 + Max([$$(m`subgroup) : m in MaximalSubgroups(G)]);
end function;
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/154563",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 0
} |
Find the area of overlap of two triangles Suppose we are given two triangles $ABC$ and $DEF$. We can assume nothing about them other than that they are in the same plane. The triangles may or may not overlap. I want to algorithmically determine the area (possibly $0$) of their overlap; call it $T_{common}$.
We have a multitude of ways of determining the areas of $ABC$ and $DEF$; among the "nicest" are the Heronian formula, which is in terms of the side lengths alone, and
$T = \frac{1}{2} \left| \det\begin{pmatrix}x_A & x_B & x_C \\ y_A & y_B & y_C \\ 1 & 1 & 1\end{pmatrix} \right| = \frac{1}{2} \big| x_A y_B - x_A y_C + x_B y_C - x_B y_A + x_C y_A - x_C y_B \big|$
which is in terms of the coordinates alone.
Obviously, there does exist a function from $A,B,C,D,E,F$ to $T_{common}$, but my question is: is there a "nice" (or even not-"nice") expression for $T_{common}$ in terms of the $x$ and $y$ coordinates of $A,B,C,D,E,F$?
I've drawn out on paper what I think are the various cases, but my issues with this approach are: identifying the case is a job in itself, which I can't easily see how to algorithmise ("just look at a picture" doesn't work for a computer); even within each case the algebra is fiddly and error-prone; and I have little confidence that I've enumerated all possible cases and got the computations right!
In my imagination there is a neat approach using ideas from analysis (treating the triangles as functions from $\mathbb{R}^2$ to $\{0,1\}$ and... multiplying them??) but I have no idea whether that's just a flight of fancy or something workable.
| Sorry about the comment -- I hit the return key prematurely.
This isn't really an answer (except in the negative sense).
The common (overlap) area is a function of the coordinates of the 6 points, so it's a mapping from $R^{12}$ into $R$. Think about one of the points moving around, while the other 5 are fixed. When the moving point passes into (or out of) the other triangle, some of the partial derivatives of the area function will be discontinuous (this seems intuitively clear -- the proof is left to someone who has more skill and patience than I do). Anyway, if you believe me, this means that the area function is not smooth. Therefore there can not be some simple formula like Heron's formula that gives the area (because this simple formula would give a smooth function).
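The question does ask for an algorithm even when no nice formula exists, so (not part of the original answer) here is a minimal Sutherland–Hodgman clipping sketch in Python; it assumes both triangles are listed in counterclockwise order, and the example triangles at the end are made up.

def clip_polygon(subject, clipper):
    # Sutherland-Hodgman: clip `subject` against the convex polygon `clipper`
    def inside(p, a, b):        # p is on the left of the directed line a->b
        return (b[0]-a[0])*(p[1]-a[1]) - (b[1]-a[1])*(p[0]-a[0]) >= 0
    def cross_pt(p, q, a, b):   # intersection of line pq with line ab
        x1, y1 = p; x2, y2 = q; x3, y3 = a; x4, y4 = b
        den = (x1-x2)*(y3-y4) - (y1-y2)*(x3-x4)
        t = ((x1-x3)*(y3-y4) - (y1-y3)*(x3-x4)) / den
        return (x1 + t*(x2-x1), y1 + t*(y2-y1))
    out = list(subject)
    for i in range(len(clipper)):
        a, b = clipper[i], clipper[(i+1) % len(clipper)]
        if not out:
            break
        inp, out = out, []
        s = inp[-1]
        for e in inp:
            if inside(e, a, b):
                if not inside(s, a, b):
                    out.append(cross_pt(s, e, a, b))
                out.append(e)
            elif inside(s, a, b):
                out.append(cross_pt(s, e, a, b))
            s = e
    return out

def area(poly):                 # shoelace formula
    n = len(poly)
    return 0.5 * abs(sum(poly[i][0]*poly[(i+1) % n][1]
                         - poly[(i+1) % n][0]*poly[i][1] for i in range(n)))

T1 = [(0, 0), (4, 0), (0, 4)]   # example triangles, CCW order
T2 = [(1, 1), (5, 1), (1, 5)]
print(area(clip_polygon(T1, T2)))   # 2.0

Since the intersection of two convex sets is convex, the clipped polygon's area is exactly the overlap $T_{common}$; the varying vertex count of the output also illustrates the case-by-case behavior described above.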
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/154628",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "12",
"answer_count": 5,
"answer_id": 1
} |
Operators from $\ell^\infty$ into $c_0$ I have the following question related to $\ell^\infty(\mathbb{N}).$ How can I construct a bounded, linear operator from $\ell^\infty(\mathbb{N})$ into $c_0(\mathbb{N})$ which is non-compact?
It is clear that $\ell^\infty$ is a Grothendieck space with the Dunford-Pettis property, hence any operator from $\ell^\infty$ into a separable Banach space must be strictly singular. But I do not know of any such example which is non-compact.
| A bounded operator $T:\ell_\infty\rightarrow c_0$ has the form $Tx=(x_n^*(x))$ for some weak$^*$ null sequence $(x_n^*)$ in $\ell_\infty^*$. A set $K\subset c_0$ is relatively compact if and only if there is a $x\in c_0$ such that $|k_n|\le |x_n|$ for all $k\in K$ and all $n\ge1$. From these two facts, it follows that $T(B({\ell_\infty}))$ is relatively compact if and only if the representing sequence $(x_n^*)$ is norm-null.
So, you need only find a sequence in $\ell_\infty^*$ that is weak$^*$ null, but not norm null.
Such a sequence exists in $\ell_\infty^*$ since: 1) weak$^*$ convergent sequences in $\ell_\infty^*$ are weakly convergent ($\ell_\infty^*$ has the Grothendieck property), and 2) $\ell_\infty^*$ does not have the Schur property (weakly convergent sequences are norm convergent).
(There may be a less roundabout way of showing the result of the preceding paragraph.)
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/154693",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 1,
"answer_id": 0
} |
How to "rotate" a polar equation? Take a simple polar equation like r = θ/2 that graphs out to:
But, how would I achieve a rotation of the light-grey plot in this image (roughly 135 degrees)? Is there a way to easily shift the plot?
| A way to think about this is that you want to shift all $\theta$ to $\theta'=\theta +\delta$, where $\delta$ is the amount by which you want to rotate. This question has significance if you want to rotate some equation which is a function of $\theta$. In the case $r=\theta$, that becomes $r=\theta+\delta$.
Of course, if the independent variable in our polar equation is a non-identity function of $\theta$, you might be able to use the angle-sum identities to help you out:
$$
\sin(\alpha + \beta) = \sin \alpha \cos \beta + \cos \alpha \sin \beta \\
\cos(\alpha + \beta) = \cos \alpha \cos \beta - \sin \alpha \sin \beta
$$
In case anyone is trying to program this in a Cartesian setting like I was (for a music visualizer), where I wanted my spiral's rotation to be a function of time, $r = \theta(t)$: normally, when solving $r=\theta$, i.e. $\sqrt{x^2+y^2}=\theta=\arctan\left(\frac{y}{x}\right)$, you can substitute as follows.
$$
\sqrt{x^2+y^2}= \arctan\left(\frac{\sin(\theta+t)}{\cos(\theta+t)}\right) = \arctan\left(\frac{\sin\theta \cos t+\cos \theta \sin t}{\cos \theta \cos t - \sin \theta \sin t}\right)
= \arctan\left(\frac{y \cos t +x\sin t }{x\cos t - y \sin t}\right)
$$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/154743",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 2,
"answer_id": 1
} |
$\mid\theta-\frac{a}{b}\mid< \frac{1}{b^{1.0000001}}$, question related to the dirichlet theorem The question is:
A certain real number $\theta$ has the following property: There exist infinitely many rational numbers $\frac{a}{b}$(in reduced form) such that:
$$\mid\theta-\frac{a}{b}\mid< \frac{1}{b^{1.0000001}}$$
Prove that $\theta$ is irrational.
I just don't know how I could somehow relate $b^{1.0000001}$ to $b^2$ or $2b^2$ so that the Dirichlet theorem can be applied. Or are there other ways to approach the problem?
Thank you in advance for your help!
| Hint: Let $\theta=\frac{p}{q}$, where $p$ and $q$ are relatively prime. Look at
$$\left|\frac{p}{q}-\frac{a}{b}\right|.\tag{$1$}$$
Bring to the common denominator $bq$. Then if the top is non-zero, it is $\ge 1$, and therefore Expression $(1)$ is $\ge \frac{1}{bq}$.
But if $b$ is large enough, then $bq<b^{1.0000001}$.
Edit: The above shows that if $\theta$ is rational, there cannot be arbitrarily large $b$ such that
$$\left|\theta-\frac{a}{b}\right|<\frac{1}{b^{1.0000001}}.\tag{$2$}$$
Of course, if we replace the right-hand side by $b^{1.0000001}$, then there are arbitrarily large such $b$. Indeed if we replace it by any fixed $\epsilon\gt 0$, there are arbitrarily large such $b$, since any real number can be approximated arbitrarily closely by rationals. Thus if in the original problem one has $b^{1.0000001}$, and not its reciprocal, it must be a typo.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/154823",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 1,
"answer_id": 0
} |
Indefinite integral of secant cubed $\int \sec^3 x\>dx$ I need to calculate the following indefinite integral:
$$I=\int \frac{1}{\cos^3(x)}dx$$
I know what the result is (from Mathematica):
$$I=\tanh^{-1}(\tan(x/2))+(1/2)\sec(x)\tan(x)$$
but I don't know how to integrate it myself. I have been trying some substitutions to no avail.
Equivalently, I need to know how to compute:
$$I=\int \sqrt{1+z^2}dz$$
which follows after making the change of variables $z=\tan x$.
| We have an odd power of cosine. So there is a mechanical procedure for doing the integration. Multiply top and bottom by $\cos x$. The bottom is now $\cos^4 x$, which is $(1-\sin^2 x)^2$. So we want to find
$$\int \frac{\cos x\,dx}{(1-\sin^2 x)^2}.$$
After the natural substitution $t=\sin x$, we arrive at
$$\int \frac{dt}{(1-t^2)^2}.$$
So we want the integral of a rational function. Use the partial fractions machinery to find numbers $A$, $B$, $C$, $D$ such that
$$\frac{1}{(1-t^2)^2}=\frac{A}{1-t}+\frac{B}{(1-t)^2}+ \frac{C}{1+t}+\frac{D}{(1+t)^2}$$
and integrate.
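To finish (not in the original answer): the standard computation gives $A=B=C=D=\frac14$, so
$$\int \frac{dt}{(1-t^2)^2} = \frac14\left(-\ln|1-t|+\frac{1}{1-t}+\ln|1+t|-\frac{1}{1+t}\right)+C = \frac14\ln\left|\frac{1+t}{1-t}\right|+\frac{t}{2(1-t^2)}+C.$$
Substituting back $t=\sin x$ yields
$$\int \sec^3 x\,dx = \frac12\ln|\sec x+\tan x|+\frac12\sec x\tan x + C,$$
which agrees with the form quoted in the question, via $\tanh^{-1}(\tan(x/2))=\frac12\ln|\sec x+\tan x|$.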
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/154900",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "20",
"answer_count": 8,
"answer_id": 1
} |
Can a function with support on a finite interval have a Fourier transform which is zero on a finite interval? If $f$ has support on $[-x_0,x_0]$, can its Fourier transform $\tilde{f}$ be zero on $[-p_0,p_0]$? If so, what is the maximum admissible product $x_0p_0$?
| Let's assume that $f$ is not identically zero.
In this answer, it is shown that if $f$ has compact support, then $\hat{f}$ is entire. A non-zero entire function cannot be zero on a set with a limit point. Thus, if $\hat{f}=0$ on $[-p_0,p_0]$, then $p_0=0$, and therefore the maximum of $x_0p_0$ is $0$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/154943",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Need help with the integral $\int \frac{2\tan(x)+3}{5\sin^2(x)+4}\,dx$ I'm having a problem resolving the following integral; I've spent almost all day trying.
Any help would be appreciated.
$$\int \frac{2\tan(x)+3}{5\sin^2(x)+4}\,dx$$
| Hint:
*Multiply the numerator and denominator by $\sec^{2}(x)$. Then you have $$\int \frac{2 \tan{x} +3}{5\sin^{2}(x)+4} \times \frac{\sec^{2}(x)}{\sec^{2}{x}} \ dx$$
*Now the denominator becomes $5\tan^{2}(x) + 4\cdot \bigl(1+\tan^{2}(x)\bigr) = 9\tan^{2}(x)+4$, and you can put $t=\tan{x}$.
*So your new integral in terms of $t$ is: $\displaystyle \int \frac{2t+3}{9t^{2}+4} \ dt = \int\frac{2t}{9t^{2}+4} \ dt + \int\frac{3}{9t^{2}+4} \ dt$ (both pieces are evaluated below).
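For completeness (not part of the original hint), those two pieces evaluate to
$$\int\frac{2t}{9t^{2}+4}\,dt = \frac19\ln\left(9t^{2}+4\right)+C, \qquad \int\frac{3}{9t^{2}+4}\,dt = \frac12\arctan\frac{3t}{2}+C,$$
so with $t=\tan x$ the integral is $\frac19\ln\left(9\tan^{2}x+4\right)+\frac12\arctan\left(\frac{3\tan x}{2}\right)+C$.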
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/155022",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 5,
"answer_id": 0
} |
Counting matrices over $\mathbb{Z}/2\mathbb{Z}$ with conditions on rows and columns I want to solve the following seemingly combinatorial problem, but I don't know where to start.
How many matrices in $\mathrm{Mat}_{M,N}(\mathbb{Z}_2)$ are there such that the sum of entries in each row and the sum of entries in each column is zero? More precisely, find the cardinality of the set
$$
\left\{A\in\mathrm{Mat}_{M,N}(\mathbb{Z}/2\mathbb{Z}): \forall j\in\{1,\ldots,N\}\quad \sum\limits_{k=1}^M A_{kj}=0,\quad \forall i\in\{1,\ldots,M\}\quad \sum\limits_{l=1}^N A_{il}=0 \right\}
$$.
Thanks for your help.
| If you consider the entries of the matrices as unknowns, you have $N\cdot M$ unknowns and $N+M$ linear equations.
If you think a little bit, you find out that these equations are not independent, but you get linearly independent equations if you omit one of them.
So the solution space has dimension $NM-N-M+1$, hence it contains $2^{NM-N-M+1}=2^{(N-1)(M-1)}$ elements.
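As a quick sanity check (not in the original answer), for $M=N=2$ the formula predicts $2^{(2-1)(2-1)}=2$ matrices; indeed only the zero matrix and the all-ones matrix have all row and column sums equal to $0$ over $\mathbb{Z}/2\mathbb{Z}$.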
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/155057",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 2,
"answer_id": 0
} |
How to prove that the sum and product of two algebraic numbers is algebraic? Suppose $E/F$ is a field extension and $\alpha, \beta \in E$ are algebraic over $F$. Then it is not too hard to see that when $\alpha$ is nonzero, $1/\alpha$ is also algebraic. If $a_0 + a_1\alpha + \cdots + a_n \alpha^n = 0$, then dividing by $\alpha^{n}$ gives $$a_0\frac{1}{\alpha^n} + a_1\frac{1}{\alpha^{n-1}} + \cdots + a_n = 0.$$
Is there a similar elementary way to show that $\alpha + \beta$ and $\alpha \beta$ are also algebraic (i.e. finding an explicit formula for a polynomial that has $\alpha + \beta$ or $\alpha\beta$ as its root)?
The only proof I know for this fact is the one where you show that $F(\alpha, \beta) / F$ is a finite field extension and thus an algebraic extension.
| Okay, I'm giving a second answer because this one is clearly distinct from the first one. Recall that finding a polynomial $p(x) \in F[x]$ that has $\alpha+\beta$ or $\alpha \beta$ as a root is equivalent to exhibiting $\alpha+\beta$ or $\alpha\beta$ as an eigenvalue of a square matrix over $F$ (the eigenvalue living in some algebraic extension of $F$), since you can link the polynomial $p(x)$ to its companion matrix $C(p(x))$, which has characteristic polynomial precisely $p(x)$; hence the eigenvalues of the companion matrix are the roots of $p(x)$.
If $\alpha$ is an eigenvalue of $A$ with eigenvector $x \in V$ and $\beta$ is an eigenvalue of $B$ with eigenvector $y \in W$, then using the tensor product of $V$ and $W$, namely $V \otimes W$, we can compute
$$
(A \otimes I + I \otimes B)(x \otimes y) = (Ax \otimes y) + (x \otimes By) = (\alpha x \otimes y) + (x \otimes \beta y) = (\alpha + \beta) (x \otimes y)
$$
so that $\alpha + \beta$ is an eigenvalue of $A \otimes I + I \otimes B$. Also,
$$
(A \otimes B)(x \otimes y) = (Ax \otimes By) = (\alpha x \otimes \beta y) = \alpha \beta (x \otimes y)
$$
hence $\alpha \beta$ is an eigenvalue of the matrix $A \otimes B$. If you want explicit expressions for the polynomials you are looking for, you can just compute the characteristic polynomial of the tensor products.
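Not in the original answer: a small numeric illustration in Python (a floating-point sketch, not exact arithmetic) for $\alpha=\sqrt2$, $\beta=\sqrt3$.

import numpy as np

A = np.array([[0., 2.], [1., 0.]])  # companion matrix of t^2 - 2, eigenvalues +-sqrt(2)
B = np.array([[0., 3.], [1., 0.]])  # companion matrix of t^2 - 3, eigenvalues +-sqrt(3)
I = np.eye(2)

S = np.kron(A, I) + np.kron(I, B)   # eigenvalues +-sqrt(2) +- sqrt(3)
P = np.kron(A, B)                   # eigenvalues +-sqrt(6)

print(np.poly(S).round(6))          # [1, 0, -10, 0, 1]: char. poly x^4 - 10x^2 + 1
print(np.sort(np.linalg.eigvals(S).real))

Here $x^4-10x^2+1$ is exactly the minimal polynomial of $\sqrt2+\sqrt3$ over $\mathbb{Q}$.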
Hope that helps,
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/155122",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "51",
"answer_count": 5,
"answer_id": 2
} |
Natural question about weak convergence. Let $u_k, u \in H^{1}(\Omega)$ be such that $u_k \rightharpoonup u$ (weak convergence) in $H^{1}(\Omega)$. Is it true that $u_{k}^{+}\rightharpoonup u^{+}$ in $\{u\geqslant 0\}$? You can make hypotheses on $\Omega$ if you need to.
| I get the idea from Richard's answer.
Let $\Omega:=(0,2\pi)$ and $u_k(x):=\frac{\cos(kx)}{k+1}$. Then $\{u_k\}$ converges weakly to $0$ in $H^1(\Omega)$ (as it's bounded in $H^1(\Omega)$, and $\int_{\Omega}(u_k\varphi+u'_k\varphi')dx\to 0$ for all test functions $\varphi$).
Assume that $\{u_k^+\}$ converges weakly to $0$ in $H^1(\Omega)$. Up to a subsequence, using the fact that $L^2(\Omega)$ is a Hilbert space, we can assume that $(v_k^+)'$ and $v_k^+$ converge weakly in $L^2(\Omega)$, where $v_k:=u_{n_k}$, respectively to $f$ and $g$. It's due to the fact that these sequences are bounded in $L^2$.
Then writing the definition $g'=f$. We also have $f=0$, hence (by connectedness of $\Omega$), $g$ is constant and should be equal to $0$.
But $$\int_{\Omega}|\sin(kx)|dx\geqslant \int_{\Omega}\sin^2(kx)dx=\pi,$$
while we should have that $2u_k^+-u_k=2u^+_k-u_k^++u_k^-=|u_k|$ weakly converges to $0$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/155170",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 1,
"answer_id": 0
} |
How to solve this quartic equation? For the quartic equation:
$$x^4 - x^3 + 4x^2 + 3x + 5 = 0$$
I tried Ferrari so far and a few others but I just can't get its complex solutions. I know it has no real solutions.
| $$x^4 - x^3 + 4x^2 + \underbrace{3x}_{4x-x} + \overbrace{5}^{4+1} = \\\color{red}{x^4-x^3}+4x^2+4x+4\color{red}{-x+1}\\={x^4-x^3}-x+1+4(x^2+x+1)\\={x^3(x-1)}-(x-1)+4(x^2+x+1)\\=(x-1)(x^3-1)+4(x^2+x+1)\\=(x-1)(x-1)(x^2+x+1)+4(x^2+x+1)\\=(x^2+x+1)\left((x-1)^2+4\right)=(x^2+x+1)(x^2-2x+5)$$
As for the roots, I assume you could solve those two quadratic equations and you could find the results on Wolfram|Alpha.
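Explicitly (not in the original answer), the quadratic formula gives the four roots
$$x=\frac{-1\pm i\sqrt{3}}{2} \qquad\text{and}\qquad x=\frac{2\pm\sqrt{4-20}}{2}=1\pm 2i.$$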
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/155242",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 4,
"answer_id": 2
} |
Effective cardinality Consider $X,Y \subseteq \mathbb{N}$.
We say that $X \equiv Y$ iff there exists a bijection between $X$ and $Y$.
We say that $X \equiv_c Y$ iff there exist a bijective computable function between $X$ and $Y$.
Can you show me some examples in which the two concepts disagree?
| The structure given by the computable equivalence (which is defined as $A\leq_T B$ and $B \leq_T A$) is called the Turing degrees, and it has a very rich structure (unlike the one given by ordinary bijections).
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/155373",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 2,
"answer_id": 1
} |
Evaluating $ \int_1^{\infty} \frac{\{t\} (\{t\} - 1)}{t^2} dt$ I am interested in a proof of the following.
$$ \int_1^{\infty} \dfrac{\{t\} (\{t\} - 1)}{t^2} dt = \log \left(\dfrac{2 \pi}{e^2}\right)$$
where $\{t\}$ is the fractional part of $t$.
I obtained a circuitous proof for the above integral. I'm curious about other ways to prove the above identity. So I thought I would post here and look at others' suggestions and answers.
I am particularly interested in different ways to go about proving the above.
I'll hold off from posting my proof for some time to see what different proofs I get for this.
| Let's consider the following way, involving some known results about celebrated integrals with fractional parts:
$$ \int_1^{\infty} \dfrac{\{t\} (\{t\} - 1)}{t^2} dt = \int_1^{\infty} \dfrac{\{t\}^2}{t^2} dt - \int_1^{\infty} \dfrac{\{t\}}{t^2} dt = \int_0^1 \left\{\frac{1}{t}\right\}^2 dt- \int_0^1 \left\{\frac{1}{t}\right\} dt = (\ln(2\pi) -\gamma-1)-(1-\gamma)=\ln(2\pi)-2=\log \left(\dfrac{2 \pi}{e^2}\right).$$
REMARK: there is a theorem that establishes a way of calculating the value of the integral below for $m\geq1$:
$$\int_0^1 \left\{\frac{1}{x}\right\}^m dx$$
The proof is complete.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/155430",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9",
"answer_count": 2,
"answer_id": 0
} |
$\{1,1\}=\{1\}$, origin of this convention Is there any book that explicitly contains the convention that a representation of a set containing repeated elements is the same as the one without repeated elements?
Like $\{1,1,2,3\} = \{1,2,3\}$.
I have looked over a few books and it didn't mention such thing. (Wikipedia has it, but it does not cite source).
In my years learning mathematics in both the US and Hungary, this convention is known and applied. However, recently I noticed some Chinese students claim they have never seen this before, and I don't remember seeing it in any book either.
I never found a book that explicitly says what the rules are by which $\{a_1,a_2,a_3,\ldots,a_n\}$ specifies a set. Some people believe it can only specify a set if $a_i\neq a_j \Leftrightarrow i\neq j$. The convention shows that doesn't have to be satisfied.
| I took a quick look through some of the likelier candidates on my shelves. The following introductory discrete math texts all explicitly point out, with at least one example, that neither the order of listing nor the number of times an element is listed makes any difference to the identity of a set:
*Winfried K. Grassman & Jean-Paul Tremblay, Logic and Discrete Mathematics: A Computer Science Perspective
*Ralph P. Grimaldi, Discrete and Combinatorial Mathematics: An Applied Introduction, 4th ed.
*Richard Johnsonbaugh, Discrete Mathematics, 4th ed.
*Bernard Kolman, Robert C. Busby, & Sharon Ross, Discrete Mathematical Structures for Computer Science, 3rd ed.
*Edward Scheinerman, Mathematics: A Discrete Introduction, 2nd ed.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/155475",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "12",
"answer_count": 5,
"answer_id": 3
} |
Real life application of Gaussian Elimination I would normally use Gaussian Elimination to solve a linear system. If we have more unknowns than equations we end up with an infinite number of solutions. Are there any real life applications of these infinite solutions? I can think of solving puzzles like Sudoku but are there others?
| One important application is this: Given the corner points of a convex hull $\{\mathbf v_1,\cdots,\mathbf v_m \}$ in $n$ dimensions, s.t. $m > n+1$, and a point $\mathbf c$ inside the convex hull, find an enclosing simplex of $\mathbf c$ (of size $r \le n+1$). To solve the problem, one can find a solution to $\alpha_1 \mathbf v_1+\cdots+\alpha_m \mathbf v_m=\mathbf c$ and $\alpha_1+\cdots+\alpha_m=1$. Once the solution is found, one can use Carathéodory's theorem to reduce the number of non-zero ${\alpha_i}'s$ to $r$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/155568",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 2
} |
Prove that $4^{2n} + 10n -1$ is a multiple of 25 Prove that if $n$ is a positive integer then $4^{2n} + 10n - 1$ is a multiple of $25$
I see that proof by induction would be the logical thing here, so I start by trying $n=1$, and it is fine. Then I assume the statement is true and substitute $n$ by $n+1$, so I have the following:
$4^{2(n+1)} + 10(n+1) - 1$
And I have to prove that the above is a multiple of 25. I tried simplifying it but I can't seem to get it right. Any ideas? Thanks.
| $25\mid 10n-(1-4^{2n}) \iff 5\mid 2n - \dfrac{1-(-4)^{2n}}{5}.\ $ Now, via $\dfrac{1-x^k}{1-x} = 1+x+\cdots+x^{k-1}$, we easily calculate that, mod $5$:
$$\frac{1-(-4)^{2n}}{1-(-4)} = 1+1+\cdots+1^{2n-1} \equiv 2n \quad\text{by}\ \ {-4}\equiv 1.$$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/155637",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9",
"answer_count": 7,
"answer_id": 2
} |
Simplify these expressions with radical sign 2 My question is
1) Rationalize the denominator:
$$\frac{1}{\sqrt{2}+\sqrt{3}+\sqrt{5}}$$
My answer is:
$$\frac{\sqrt{12}+\sqrt{18}-\sqrt{30}}{18}$$
My question is
2) $$\frac{1}{\sqrt{2}+\sqrt{3}-\sqrt{5}}+\frac{1}{\sqrt{2}-\sqrt{3}-\sqrt{5}}$$
My answer is: $$\frac{1}{\sqrt{2}}$$
I would also like to know whether my solutions are right.
Thank you,
| $\begin{eqnarray*}
(\sqrt{2}+\sqrt{3}+\sqrt{5})(\sqrt{12}+\sqrt{18}-\sqrt{30}) & = & (\sqrt{2}+\sqrt{3}+\sqrt{5})(2\sqrt{3}+3\sqrt{2}-\sqrt{2}\sqrt{3}\sqrt{5})\\& = & 12,
\end{eqnarray*}$
if you expand out the terms, so your first answer is incorrect. The denominator should be $12$.
$\begin{eqnarray*}
(\sqrt{2}+\sqrt{3}-\sqrt{5})(\sqrt{2}-\sqrt{3}-\sqrt{5}) & = & (\sqrt{2}-\sqrt{5})^2-\sqrt{3}^2\\& = & 7-2\sqrt{10}-3\\& = & 2\sqrt{2}(\sqrt{2}-\sqrt{5}),
\end{eqnarray*}$
and so when your fractions in the second part are given common denominators, you'll have exactly $\cfrac{1}{\sqrt{2}}$ after cancellation, so your second answer is correct.
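For the record (not in the original answer), that cancellation runs
$$\frac{1}{\sqrt{2}+\sqrt{3}-\sqrt{5}}+\frac{1}{\sqrt{2}-\sqrt{3}-\sqrt{5}} = \frac{(\sqrt{2}-\sqrt{3}-\sqrt{5})+(\sqrt{2}+\sqrt{3}-\sqrt{5})}{2\sqrt{2}(\sqrt{2}-\sqrt{5})} = \frac{2(\sqrt{2}-\sqrt{5})}{2\sqrt{2}(\sqrt{2}-\sqrt{5})} = \frac{1}{\sqrt{2}}.$$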
Note: In general, if you want to see if two fractions are the same (as in the first problem), cross-multiplication is often a useful way to see it.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/155690",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
} |
Constructing arithmetic progressions It is known that in the sequence of primes there exists arithmetic progressions of primes of arbitrary length. This was proved by Ben Green and Terence Tao in 2006. However the proof given is a nonconstructive one.
I know the following theorem from Burton gives some criteria on how large the common difference must be.
Let $n > 2$. If all the terms of the arithmetic progression
$$
p, p+d, \ldots, p+(n-1)d
$$
are prime numbers then the common difference $d$ is divisible by every prime $q <n$.
So for instance if you want a sequence of primes in arithmetic progression of length $5$ ie
$$
p, p+d, \ldots, p+4d
$$
you need that $d \geq 6$. Using this we can get that the prime $p=5$ and $d = 6$ will result in a sequence primes in arithmetic progression of length $5$.
So my question is what are the known techniques for constructing a sequence of primes of length $k$? How would one find the "first" prime in the sequence or even the "largest prime" that would satisfy the sequence (assuming there is one)? Also, while the theorem gives a lower bound for $d$, is it known if it is the sharpest lowest bound there is?
NOTE: This is not my area of research so this question is mostly out of curiosity.
| This may not answer the question, but I would like to point out that more recent work of Green and Tao has proven even stronger results.
Specifically, Green and Tao give exact asymptotics for the number of solutions to systems of linear equations in the prime numbers, and their paper Linear Equations in the Primes was published in the Annals in 2010. In particular, this tells us asymptotics for the number of $k$-term arithmetic progressions in the primes up to $N$.
For example, as $N\rightarrow \infty$, we can count the asymptotic number of 4-tuples of primes $p_1<p_2<p_3<p_4\leq N$ which lie in arithmetic progression, and it equals $$(1+o(1))\frac{N^2}{\log^4(N)} \frac{3}{4}\prod_{p\geq 5} \left(1-\frac{3p-1}{(p-1)^3}\right).$$
Do note however, that Green and Tao's paper made two major assumptions. They assumed the Möbius Nilsquence conjecture (MN) and the Gowers Inverse norm conjecture (GI). In a paper published in the Annals in 2012, Green and Tao resolve the MN conjecture, proving that the Möbius Function is strongly orthogonal to nilsequences. Recently, Green, Tao and Ziegler resolved the Gowers inverse conjecture, and their paper is currently on the arxiv. (It has not yet been published) This means that we unconditionally have asymptotics for the number of primes in a $k$ term arithmetic progression.
If you would like to learn more, I suggest reading Julia Wolf's excellent survey article Arithmetic and polynomial progressions in the primes, d'après Gowers, Green, Tao and Ziegler. It is very recent, as it was written for the Bourbaki lectures two months ago.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/155822",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "14",
"answer_count": 3,
"answer_id": 2
} |
$\bar{\partial}$-Poincaré lemma This is the $\bar{\partial}$-Poincaré lemma: Given a holomorphic function $f:U\subset \mathbb{C} \to \mathbb{C}$, locally on $U$ there is a holomorphic function $g$ such that: $$\frac{\partial g}{\partial \bar z}=f$$
The author says that this is a local statement, so we may assume $f$ has compact support and is defined on the whole plane $\mathbb{C}$; my question is why she says that... thanks.
*Added*
$f,g$ are supposed to be $C^k$, not holomorphic; by definition $$\frac{\partial g}{\partial \bar z}=0$$ would hold if $g$ were holomorphic...
| I don't have the book, and thus I can't check the statement.
However, I believe that the statement holds for smooth $f$.
Basically we want to construct/find $g$ as the following integral:
$$g(z) = \frac{1}{2 \pi i}\int_{w\in \mathbb{C}} \frac{f(w)}{z-w} d\overline{w}\wedge dw$$
In order to do this, $f$ must be defined over the whole complex plane.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/155868",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 2,
"answer_id": 0
} |
Show that for all $\lambda \geq 1~$ $\frac{\lambda^n}{e^\lambda} < \frac{C}{\lambda^2}$
Show that for any $n \in \mathbb N$ there exists $C_n > 0$ such that for all $\lambda \geq 1$
$$ \frac{\lambda^n}{e^\lambda} < \frac{C_n}{\lambda^2}$$
I can see that both sides of the inequality have a limit of $0$ as $\lambda \rightarrow \infty$ since, on the LHS, repeated application of L'Hôpital's rule will render the $\lambda^n$ term as a constant eventually, while the $e^{\lambda}$ term will remain, and the RHS is obvious.
I can also see that the denominator of the LHS will become large faster than the RHS denominator, but I can't seem to think of anything that will show that the inequality is true for all the smaller intermediate values.
| The function $\lambda \mapsto \lambda^{n+2}$ is strictly increasing for positive $\lambda$ and also $e^{\lambda} > \lambda$. Combining this you get
$$e^{\lambda} = \left( e^{\frac{\lambda}{n+2}} \right)^{n+2} > \left( \frac{\lambda}{n + 2} \right)^{n+2}$$
for all positive $\lambda$ and therefore
$$\frac{\lambda^n}{e^\lambda} < \frac{\lambda^n (n+2)^{n+2}}{\lambda^{n+2}} = \frac{(n+2)^{n+2}}{\lambda^2}.$$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/155928",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 2
} |
Why are bump functions compactly supported? Smooth and compactly supported functions are called bump functions. They play an important role in mathematics and physics.
In $\mathbb{R}^n$ and $\mathbb{C}^n$, a set is compact if and only if it is closed and bounded.
It is clear why we like to work with functions that have a bounded support. But what is the advantage of working with functions that have a support that is also closed? Why do we often work with compactly supported functions, and not just functions with bounded support?
| *On spaces such as open intervals and (more generally) domains in $\mathbb R^n$, compactness of support tells us much more than its boundedness. Any function $f\colon (0,1)\to\mathbb R$ has bounded support, since the space $(0,1)$ itself is bounded. But if the support is compact, that means that $f$ vanishes near $0$ and near $1$. (Generally, near the boundary of the domain).
*On the other hand, when bump functions are considered on infinite-dimensional spaces (which does not happen nearly as often), the support is assumed bounded, not compact. A compact subset of an infinite-dimensional Banach space has empty interior, and so cannot support a nonzero continuous function. If you are interested in this subject (which is a subset of the geometry of Banach spaces), see Smooth Bump Functions and the Geometry of Banach Spaces: A Brief Survey by R. Fry and S. McManus, Expo. Math. 20 (2002): 143-183
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/156036",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 4,
"answer_id": 1
} |
How do I show that this function is always $> 0$
Show that $$f(x) = 1 + x + \frac{x^2}{2!} + \frac{x^3}{3!} +
\frac{x^4}{4!} > 0 ~~~ \forall_x \in \mathbb{R}$$
I can show that the sum of the first 3 terms is $> 0$ for all $x$:
$2\left(1 + x + \frac{x^2}{2}\right) = (x+1)^2 + 1 > 0$
But, I'm having trouble with the last two terms. I tried to show that the following was true:
$\frac{x^3}{3!} \leq \frac{x^4}{4!}$
$4x^3 \leq x^4$
$4 \leq x$
which is not true for all $x$.
I tried taking the derivative, and all that I could ascertain was that the function became more and more increasing as $x \rightarrow \infty$ and more and more decreasing as $x \rightarrow -\infty$, but I couldn't seem to prove that there were no roots to go with this property.
| Hint:
$$f(x) = \frac{1}{4} + \frac{(x + 3/2)^2}{3} +\frac{x^2(x+2)^2}{24}$$
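Expanding the hint (not in the original answer) confirms it term by term:
$$\frac{x^2(x+2)^2}{24}=\frac{x^4}{24}+\frac{x^3}{6}+\frac{x^2}{6}, \qquad \frac14+\frac{(x+3/2)^2}{3}=\frac{x^2}{3}+x+1,$$
and $\frac{x^2}{6}+\frac{x^2}{3}=\frac{x^2}{2}$, so the pieces sum to $f(x)$. Each summand in the hint is nonnegative, and the constant $\frac14$ makes the total strictly positive.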
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/156117",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 4,
"answer_id": 1
} |
Derivative of an implicit function I am asked to take the derivative of the following equation for $y$:
$$y = x + xe^y$$
However, I get lost. I thought that it would be
$$\begin{align}
& y' = 1 + e^y + xy'e^y\\
& y'(1 - xe^y) = 1 + e^y\\
& y' = \frac{1+e^y}{1-xe^y}
\end{align}$$
However, the text book gives me a different answer.
Can anyone help me with this?
Thank you and sorry if I got any terms wrong, my math studies were not done in English... :)
| You can simplify things as follows:
$$y' = \frac{1+e^y}{1-xe^y} = \frac{x+xe^y}{x(1-xe^y)} = \frac{y}{x(1-y+x)}$$
Here in the last step we used $y=x+xe^y$ and $xe^y=y-x$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/156265",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 1,
"answer_id": 0
} |
Prove that a metric space which contains a sequence with no convergent subsequence also contains a cover by open sets with no finite subcover. I really need help with this question:
Prove that a metric space which contains a sequence with no convergent subsequence also contains a cover by open sets with no finite subcover.
| Let $(a_n)$ be a sequence in the metric space $M$ that doesn't have any convergent subsequence. The set $\{a_n\}$ consists of isolated points (that is, it doesn't have any accumulation points; otherwise you could take a convergent subsequence), and it's infinite (because if it wasn't, one of the points would repeat infinitely in the sequence and we could get a constant subsequence). Now, for each $n$ take an open ball $B_{r_n}(a_n)$ around $a_n$ with radius $r_n$ small enough not to contain any other terms in the sequence, constructing an (infinite) family of open sets $\{B_{r_n}(a_n)\}$.
Now if you know some facts about compactness you could argue:
The set $\{a_n\}$ is closed, and then since it's a subset of the metric space $M$, if $M$ is compact then so is $\{a_n\}$. However, note that $\{B_{r_n}(a_n)\}$ is an open cover of $\{a_n\}$, but contains no finite subcover. Then $\{a_n\}$ isn't compact, and neither is $M$ (that is, not every open cover of $M$ admits a finite subcover).
Or otherwise, like suggested by Martin Sleziak (and showing a direct proof):
consider the open cover $\{B_{r_n}(a_n)\} \cup (M \setminus \{a_n\})$, which doesn't admit a finite subcover.
It's a nice exercise trying to fill in the details.
An example is given in the space of bounded sequences of real numbers, with the supremum norm: for $i \in \mathbb{N}$, let $e_i$ to be the sequence with a one at the $i$-th position and zeroes elsewhere; then $(e_i)$ is a sequence with no convergent subsequence and $\{e_i\}$ doesn't have accumulation points (even more: it's uniformly discrete).
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/156351",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
In center-excenter configuration in a right angled triangle My question is:
Given triangle $ABC$, where angle $C=90°$.
Prove that the set $\{ s , s-a , s-b , s-c \}$ is identical to $\{ r , r_1 , r_2 , r_3 \}$.
$s=$semiperimeter, $r_1,r_2,r_3$ are the ex-radii.
Any help to solve this would be greatly appreciated.
| This problem has some interesting facts behind it, so I use a pure geometry method to solve it and exhibit those facts:
First we draw a picture.
$\triangle ABC$, $I$ is the incenter, $A0,B0,C0$ are the tangent points of the incircle. $R1,R2,R3$ are the excircle centers; their tangent points are $A1,B1,C1,A2,B2,C2,A3,B3,C3$.
For the incircle, we have $CA0=CB0=s-c$, $BA0=BC0=s-b$, $AB0=AC0=s-a$.
Now look at circle $R2$: $BC2$ and $BA1$ are the tangent lines, so $BC2=BA1$;
for the same reason, we have
$CA1=CB2,AC2=AB2$, and $CB2=AC-AB2$.
since $BA1=BC+CA1=BC+CB2=BC+AC-AB2$ and $BC2=AB+AC2=AB+AB2$,
we have
$BC+AC-AB2=AB+AB2$, i.e. $AB2=\dfrac{BC+AC-AB}{2}=s-c=CB0$ and $CB2=AC-AB2=AC-CB0=AB0=s-b$.
By the same reasoning, we have
$AC2=BC0=s-b$,
$BC2=AC0=s-a$, $BA2=CA0=s-c$, $BA0=CA2=s-a$.
The facts above hold for any triangle, as we have not yet used the right angle at $C$.
now we check $R1-A2-C-B1$, since $R1A2=R1B1=r1$, so $R1-A2-C-B1$ is a square! we get
$r1=CA2=s-a$,
with same reason , we get
$ r2=CB2=s-b$,
we also konw $r3=R3A3=CB3$ ,
since $\angle B3AC2=\angle R3AB3$($AR3$ is bisector), so $AC2=AB3$,
$ r3=CB3=AC+AB3=AC+AC2=b+s-b=s$,
clearly:
$ r=IA0=CB0=s-c$
now we show the interesting fact:
$ r+r1+r2+r3=s-c+s-b+s-a+s=2s=a+b+c $
$ r^2+r1^2+r2^2+r3^2=(s-c)^2+(s-b)^2+(s-a)^2+s^2=4S^2+a^2+b^2+c^2-2s(a+b+c)=4S^2+a^2+b^2+c^2-2S*2S=a^2+b^2+c^2$
we rewrite the again:
In Right angle:
$ r+r1+r2+r3==a+b+c $
$ r^2+r1^2+r2^2+r3^2=a^2+b^2+c^2$
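As a numerical sanity check, here is a small sketch on the 3-4-5 triangle, using the standard radius formulas $r=\Delta/s$ and $r_1=\Delta/(s-a)$ etc. (these formulas are assumed here, not proved above):

```python
# Right angle at C, hypotenuse c; check {r, r1, r2, r3} = {s-c, s-b, s-a, s}.
a, b, c = 3.0, 4.0, 5.0
s = (a + b + c) / 2                  # s = 6
area = a * b / 2                     # Delta = 6
r, r1, r2, r3 = area / s, area / (s - a), area / (s - b), area / (s - c)
print(sorted([r, r1, r2, r3]))                           # [1.0, 2.0, 3.0, 6.0]
print(sorted([s - c, s - b, s - a, s]))                  # the same set
print(r + r1 + r2 + r3, a + b + c)                       # 12.0 12.0
print(r**2 + r1**2 + r2**2 + r3**2, a**2 + b**2 + c**2)  # 50.0 50.0
```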
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/156415",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Prove that 16, 1156, 111556, 11115556, 1111155556… are squares. I'm 16 years old, and I'm studying for my maths exam coming this Monday. In the chapter "sequences and series", there is this exercise:
Prove that a positive integer formed by $k$ times digit 1, followed by $(k-1)$
times digit 5 and ending on one 6, is the square of an integer.
I'm not a native English speaker, so my translation of the exercise might be a bit crappy. What it says is that 16, 1156, 111556, 11115556, 1111155556, etc. are all squares of integers. I'm supposed to prove that. I think my main problem is that I don't see the link between these numbers and sequences.
Of course, we assume we use a decimal numeral system (= base 10)
Can anyone point me in the right direction (or simply prove it, if it is difficult to give a hint without giving the whole proof)? I think it can't be that difficult, since I'm supposed to solve it.
For sure, by using the word "integer", I mean "natural number" ($\in\mathbb{N}$)
Thanks in advance.
As TMM pointed out, the square roots are 4, 34, 334, 3334, 33334, etc...
This sequence is given by any of the following descriptions:
*
*$t_n = t_{n-1} + 3\cdot 10^{n-1}$
*$t_n = \lfloor\frac{1}{3}\cdot 10^{n}\rfloor + 1$
*$t_n = 10\,t_{n-1} - 6$
But I still don't see how to make progress on my proof. A human being can see the pattern in these numbers and can tell it will stay correct as $k$ goes to $\infty$, but this isn't enough for a mathematical proof.
| Multiply one of these numbers by $9$, and you get $100...00400...004$, which is $100...002^2$.
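(For instance, $111556 \cdot 9 = 1004004 = 1002^2$.) A small sketch checking the first few cases of this pattern:

```python
# Verify 9*N_k = (10^k + 2)^2 and that N_k itself is a perfect square.
from math import isqrt

for k in range(1, 8):
    n = int('1' * k + '5' * (k - 1) + '6')   # k ones, k-1 fives, one 6
    root = isqrt(n)
    assert root * root == n
    assert 9 * n == (10**k + 2) ** 2
    print(n, '=', root, '** 2')              # roots are 4, 34, 334, ...
```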
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/156462",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "41",
"answer_count": 10,
"answer_id": 2
} |
Formula to estimate sum to nearly correct : $\sum_{n=1}^\infty\frac{(-1)^n}{n^3}$ Estimate the sum correct to three decimal places :
$$\sum_{n=1}^\infty\frac{(-1)^n}{n^3}$$
This problem is in my homework. I found that $n = 22$ when I used Maple to solve this (with some programming). But in my homework the teacher said to find a formula for this problem.
Thanks :)
| For alternating sums $\sum(-1)^n a_n$ with $a_n> 0$ strictly decreasing to $0$, there is a simple way to estimate the remainder $\sum^\infty_{k=n} (-1)^k a_k$: its absolute value is at most the first omitted term, $a_n$.
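A sketch of the resulting stopping rule for this sum (here "three decimal places" is read as an error below $5\times 10^{-4}$; stricter readings give a larger $n$, such as the $n=22$ from Maple):

```python
# Sum until the first omitted term 1/(n+1)^3 drops below 5e-4.
n, s = 0, 0.0
while 1.0 / (n + 1) ** 3 >= 5e-4:
    n += 1
    s += (-1) ** n / n ** 3
print(n, s)   # 12 -0.9012...; exact value is -(3/4)*zeta(3) = -0.901542...
```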
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/156518",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 5,
"answer_id": 0
} |
Evaluating $\int \frac{dx}{x^2 - 2x} dx$ $$\int \frac{dx}{x^2 - 2x}$$
I know that I have to complete the square so the problem becomes.
$$\int \frac{dx}{(x - 1)^2 -1}dx$$
Then I set up my A B and C stuff
$$\frac{A}{x-1} + \frac{B}{(x-1)^2} + \frac{C}{-1}$$
With that I find $A = -1, B = -1$ and $C = 0$ which I know is wrong.
I must be setting up the $A, B, C$ thing wrong but I do not know why.
| Added: "I know that I have to complete the square" is ambiguous. I interpreted it as meaning that the OP thought that completing the square was necessary to solve the problem.
Completing the square is not a universal tool. To find the integral efficiently, you certainly do not need to complete the square.
The simplest approach is to use partial fractions. The bottom factors as $x(x-2)$. Find numbers $A$ and $B$ such that
$$\frac{1}{x^2-2x}=\frac{A}{x-2}+\frac{B}{x}.$$
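If you want to check your constants afterwards, here is a small sympy sketch (it gives $A=\tfrac12$ and $B=-\tfrac12$, and the corresponding antiderivative):

```python
# Partial fraction decomposition and the resulting antiderivative.
import sympy as sp

x = sp.symbols('x')
print(sp.apart(1 / (x**2 - 2*x)))         # -1/(2*x) + 1/(2*(x - 2))
print(sp.integrate(1 / (x**2 - 2*x), x))  # log(x - 2)/2 - log(x)/2, up to ordering
```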
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/156566",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 4,
"answer_id": 0
} |
Combination of cards
From a deck of 52 cards, how many five card poker hands can be formed
if there is a pair (two of the cards are the same number, and none of
the other cards are the same number)?
I believe you can pick out the first card by ${_4}C_2$, as there are 4 cards which would be the same number (as there are 4 suits). I would pick two from here.
From here on though, I am unsure. I believe it involves the numbers 48, 44, and 40, as after every picking you cannot have an identical number anywhere, so there would be 4 less to choose from. However, I don't believe I can just do $_{48}C_1 * _{44}C_1,..$ as I am not simply selecting 1 card from 48 and removing 4 random ones.
The answer is $1098240$.
| We will use the notation $\binom{n}{r}$, which is more common among mathematicians, where you write ${}_nC_r$.
The kind of card that we have a pair of can be chosen in $\binom{13}{1}$ ways. For each choice of kind, the actual cards can be chosen in $\binom{4}{2}$ ways.
(By kind we mean things like Ace, or $7$, or Queen.)
For each choice made so far, we now count the number of ways to pick the rest of the cards. The kinds of cards we have singletons of can be chosen in $\binom{12}{3}$ ways. For each such choice, the actual cards can be chosen in $4^3$ ways. This is because if, for example, we are to have a $7$, a $10$, and a Queen, the $7$ can be picked in $4$ ways, as can the $10$, as can the Queen. The total number of one pair hands is therefore
$$(13)\binom{4}{2}\binom{12}{3}4^3.$$
Compute. We get $1098240$.
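A brute-force cross-check of this count (a sketch; it enumerates all $\binom{52}{5}=2{,}598{,}960$ hands, so it takes a few seconds):

```python
from itertools import combinations
from math import comb

deck = [(rank, suit) for rank in range(13) for suit in range(4)]
count = 0
for hand in combinations(deck, 5):
    ranks = [r for r, _ in hand]
    # one pair: exactly one rank occurs twice, the other three occur once
    if sorted(ranks.count(r) for r in set(ranks)) == [1, 1, 1, 2]:
        count += 1
print(count)                                 # 1098240
print(13 * comb(4, 2) * comb(12, 3) * 4**3)  # 1098240
```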
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/156617",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Index notation clarification Previously, I have seen matrix notation of the form $T_{ij}$ and all the indices have been in the form of subscripts, such that $T_{ij}x_j$ implies contraction over $j$. However, recently I saw something of the form $T_i^j$ which seems to work not entirely differently from what I was used to. What is the difference? and how do they decide which index to write as a superscript and which a subscript? What is the point of writing them this way? Is there a difference?
(A link to a good reference explaining how these indices work would also be appreciated!)
Thanks.
| Mostly it's just a matter of the author's preference.
The staggered index notation $T^i{}_j$ works great in conjunction with the Einstein summation convention, where one of the rules is that an index that is summed over must appear once as a subscript and once as a superscript. Usually the indices of an ordinary vector's components are written as superscripts, so the contraction becomes $T^i{}_j x^j$.
This rule becomes relevant when one is working with multiple bases, in which case subscript and superscript indices behave differently under a change of basis. Writing the matrix with staggered indices then serves as a reminder that you're planning to use the matrix to represent a linear transformation, rather than to represent a bilinear form, for which both indices are always on the same level. This agrees with the fact that the matrix of a linear transformation and the matrix of a bilinear form respond differently to basis changes.
These considerations are most weighty in contexts where one needs to juggle a lot of basis changes -- or just to be sure that what one is writing does not depend on the particular choice of basis -- such as differential geometry. On the other hand, in introductory texts where this is less of an issue, there's an argument that explaining the rules for different kinds of indices will just confuse the student without really adding to his understanding (as I may have confused you in the above paragraph).
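If a numerical illustration helps, here is a sketch of the two transformation laws just mentioned (the matrices and the change of basis $P$ are arbitrary choices, nothing canonical):

```python
# Components of a linear map T^i_j conjugate under a basis change, while a
# bilinear form B_ij picks up P^T on both sides; basis-independent quantities
# come out the same either way.
import numpy as np

rng = np.random.default_rng(0)
P = rng.normal(size=(3, 3))          # change-of-basis matrix
T = rng.normal(size=(3, 3))          # linear map
B = rng.normal(size=(3, 3))          # bilinear form
x, y = rng.normal(size=3), rng.normal(size=3)

Pinv = np.linalg.inv(P)
x2, y2 = Pinv @ x, Pinv @ y          # new components of the same vectors
T2 = Pinv @ T @ P                    # one superscript, one subscript
B2 = P.T @ B @ P                     # two subscripts

print(np.allclose(P @ (T2 @ x2), T @ x))     # True: T x is the same vector
print(np.allclose(x2 @ B2 @ y2, x @ B @ y))  # True: B(x, y) is the same scalar
```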
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/156733",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
} |
Set theory puzzles - chess players and mathematicians I'm looking at "Basic Set Theory" by A. Shen. The very first 2 problems are: 1) can the oldest mathematician among chess players and the oldest chess player among mathematicians be 2 different people? and 2) can the best mathematician among chess players and the best chess player among mathematicians be 2 different people? I think the answers are no, and yes, because a person can only have one age, but they can have separate aptitudes for chess playing and for math. Is this correct?
| (1) Think of it in terms of sets. Let $M$ be the set of mathematicians, $C$ the set of chess players. Both are asking for the oldest person in $C\cap M$.
(2) Absolutely fantastic reasoning, though perhaps less simply set-theoretically described.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/156816",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11",
"answer_count": 2,
"answer_id": 0
} |
Solving for $c$ in $a(b + cd) \equiv 0 \mod e$ If I have a modulo operation like this:
$$(a ( b+cd ) ) \equiv 0 \pmod{e},$$
How can I derive $c$ in terms of the other variables present here? I.e., what function $f$ can be used such that:
$$c = f (a,b,d,e) $$
And what is the implication of a mod operation's result being $0$ in terms of simplifying the equation.
Thank you very much for your time.
| EDIT
My original solution to this problem was, in hindsight, radically overcomplicated. I have it below the modified post.
We have $a(b + dx) \equiv 0 \mod e$, or equivalently $ab + adx \equiv 0$, or equivalently $(ad)x \equiv -ab \mod e$. This is a special case of the very well understood problem of solving $ax \equiv b \mod m$. It turns out that there are solutions iff $(ad, e) \mid (-ab)$, and if there are any solutions then there are exactly $(ad,e)$ of them.
This is the Linear Congruence Theorem. In fact, my previous answer is sort of a skim of the ideas behind the proof, in a sense. And although it gives a clear record of my overcomplicating a problem, I think there is a certain amount of illustrative-ness behind it.
Original answer
So we have $a(b + cd) \equiv 0\mod e$.
If we are lucky, in particular if $a,b,c,d,e \neq 0$ and $(a,e) = (b,e) = (c,e) = (d,e) = 1$, i.e. everything is coprime with $e$, then we have that $c \equiv d^{-1} (-b) \mod e$. In fact, we don't actually require $b$ or $c$ to be coprime to $e$ for this to work, as we just wanted to 'cancel' off the $a$ and be able to write down $d^{-1}$.
Otherwise, the solution may not be well-defined, or rather is a family of solutions. To vastly simplify, suppose we have something like $6(1 + c)\equiv 0 \mod 18$. Then $c \equiv 2, 5, 8, 11, 14, 17$ are all solutions.
The general idea to finding the solutions to problems like $6(1+c) \equiv 0 \mod 18$ involve noting that $(6,18) = 6$, and so by dividing out by $6$ we get $(1+c) \equiv 0 \mod 3$. This is not the same equation, but it tells us that $c \equiv 2 \mod 3$, and a quick check shows that $2,5,8,11,14,17$ are all solutions. What we really looked at were the congruences $(1+c) \equiv 0, 3, 6, 9, 12, 15 \mod 18$, as multiplying all of these by $6$ yield the original congruence. In fact, you might see that there are $6$ of them - and this is not a fluke.
So in your case, with $a(b+cd) \equiv 0 \mod e$, you might first do a 'reduction' to account for $(a,e) > 1$. Afterwards, you can effectively ignore the $a$, but remember that it incorporates $(a,e)$ distinct solutions. You are then left with $cd \equiv -b$ modulo the reduced modulus $e/(a,e)$, and this is a classic problem. I link to the Linear Congruence Theorem below for more on this.
Let's do a quick example, illustrating the method and a potential problem. $3(1 + 2c) \equiv 0 \mod 18$. We see that $(3,18) = 3$, so we look at $(1 + 2c) \equiv 0 \mod 6$, or $2c \equiv 5 \mod 6$. But what we really wanted was $1+2c \equiv 0, 1+2c \equiv 6,$ and $1+2c \equiv 12 \mod 18$. So we get that $2c \equiv 5, 11, 17 \mod 18$. We could proceed, but it's easy to see here that none of these have a solution. Working $\mod 18$, the left side is always even and the right sides are always odd.
It turns out that the congruence $ax \equiv b \mod m$ has a solution iff $(a,m)\mid b$. So if we define $e' := e/(a,e)$, then $a(b + xd) \equiv 0 \mod e$ will have solutions iff $(d,e') \mid (e'-b)$. In the (non)example above, we required that $(2,6) = 2$ divide $1$, which it doesn't. But if you repeat with $3(2 + 2c) \equiv 0 \mod 18$, there will be solutions. (Answer: they are $2, 5, 8, 11, 14, 17$, with a total of $6$ coming from the $3$ from $(3,18) = 3$, each having $2 = (2, 6)$ solutions of their own. Food for thought: is it possible that there are fewer than $(a,e)(d,e')$ solutions due to some overlap?)
To save this answer from becoming too long, I will recommend sources of study. For further reading, I recommend looking into what wikipedia calls the Linear Congruence Theorem, which talks of the general solvability of equations like $ax \equiv b \mod m$, and the Chinese Remainder Theorem. In addition, almost any introductory book on number theory will cover this sort of reasoning. I am particularly fond of recommending Rosen's Elementary Number Theory and Ireland and Rosen's Classical Introduction to Modern Number Theory, which is harder, and by a different Rosen.
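Since all the moduli here are tiny, a brute-force sketch is an easy way to double-check the worked examples:

```python
# Enumerate c mod e and keep the residues with a*(b + c*d) = 0 (mod e).
def solve(a, b, d, e):
    return [c for c in range(e) if a * (b + c * d) % e == 0]

print(solve(6, 1, 1, 18))   # [2, 5, 8, 11, 14, 17]
print(solve(3, 1, 2, 18))   # []  -- the unsolvable example
print(solve(3, 2, 2, 18))   # [2, 5, 8, 11, 14, 17]
```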
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/156914",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
} |
On the zeta sum $\sum_{n=1}^\infty[\zeta(5n)-1]$ and others For p = 2, we have,
$\begin{align}&\sum_{n=1}^\infty[\zeta(pn)-1] = \frac{3}{4}\end{align}$
It seems there is a general form for odd p. For example, for p = 5, define $z_5 = e^{\pi i/5}$. Then,
$\begin{align} &5 \sum_{n=1}^\infty[\zeta(5n)-1] = 6+\gamma+z_5^{-1}\psi(z_5^{-1})+z_5\psi(z_5)+z_5^{-3}\psi(z_5^{-3})+z_5^{3}\psi(z_5^{3}) = 0.18976\dots \end{align}$
with the Euler-Mascheroni constant $\gamma$ and the digamma function $\psi(z)$.
*
*Anyone knows how to prove/disprove this?
*Also, how do we split $\psi(e^{\pi i/p})$ into its real and imaginary parts so as to express the above purely in real terms?
More details in my blog.
| $$
\begin{align}
\sum_{n=1}^\infty\left[\zeta(pn)-1\right] & = \sum_{n=1}^\infty \sum_{k=2}^\infty \frac{1}{k^{pn}} \\
& = \sum_{k=2}^\infty \sum_{n=1}^\infty (k^{-p})^n \\
& = \sum_{k=2}^\infty \frac{1}{k^p-1}
\end{align}
$$
Let $\omega_p = e^{2\pi i/p} = z_p^2$, then we can decompose $1/(k^p-1)$ into partial fractions
$$
\frac{1}{k^p-1} = \frac{1}{p}\sum_{j=0}^{p-1} \frac{\omega_p^j}{k-\omega_p^j}
= \frac{1}{p}\sum_{j=0}^{p-1} \omega_p^j \left[\frac{1}{k-\omega_p^j}-\frac{1}{k}\right]
$$
where we are able to add the term in the last equality because $\sum_{j=0}^{p-1}\omega_p^j = 0$. So
$$
p\sum_{n=1}^\infty\left[\zeta(pn)-1\right] = \sum_{j=0}^{p-1}\omega_p^j\sum_{k=2}^{\infty}\left[\frac{1}{k-\omega_p^j}-\frac{1}{k}\right]
$$
Using the identities
$$
\psi(1+z) = -\gamma-\sum_{k=1}^\infty\left[\frac{1}{k+z}-\frac{1}{k}\right]
= -\gamma+1-\frac{1}{1+z}-\sum_{k=2}^\infty\left[\frac{1}{k+z}-\frac{1}{k}\right]\\
\psi(1+z) = \psi(z)+\frac{1}{z}
$$
for $z$ not a negative integer, and
$$
\sum_{k=2}^\infty\left[\frac{1}{k-1}-\frac{1}{k}\right]=1
$$
by telescoping, so finally
$$
\begin{align}
p\sum_{n=1}^\infty\left[\zeta(pn)-1\right]
& = 1+\sum_{j=1}^{p-1}\omega_p^j\left[1-\gamma-\frac{1}{1-\omega_p^j}-\psi(1-\omega_p^j)\right] \\
& = \gamma-\sum_{j=1}^{p-1}\omega_p^j\psi(2-\omega_p^j)
\end{align}
$$
So far this applies for all $p>1$. Your identities will follow by considering that when $p$ is odd $\omega_p^j = -z_p^{2j+p}$, so
$$
\begin{align}
p\sum_{n=1}^\infty\left[\zeta(pn)-1\right]
& = \gamma+\sum_{j=1}^{p-1}z_p^{2j+p}\psi(2+z_p^{2j+p})\\
& = \gamma+\sum_{j=1}^{p-1}z_p^{2j+p}\left[\frac{1}{1+z_p^{2j+p}}+\frac{1}{z_p^{2j+p}}+\psi(z_p^{2j+p})\right] \\
& = \gamma+p-1+S_p+\sum_{j=1}^{p-1}z_p^{2j+p}\psi(z_p^{2j+p})
\end{align}
$$
where
$$
\begin{align}
S_p & = \sum_{j=1}^{p-1}\frac{z_p^{2j+p}}{1+z_p^{2j+p}} \\
& = \sum_{j=1}^{(p-1)/2}\left(\frac{z_p^{2j-1}}{1+z_p^{2j-1}}+\frac{z_p^{1-2j}}{1+z_p^{1-2j}}\right) \\
& = \sum_{j=1}^{(p-1)/2}\frac{2+z_p^{2j-1}+z_p^{1-2j}}{2+z_p^{2j-1}+z_p^{1-2j}} \\
& = \frac{p-1}{2}
\end{align}
$$
which establishes your general form.
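For what it's worth, the general form checks out numerically; here is a sketch for $p=5$ using mpmath (whose digamma accepts the complex arguments):

```python
from mpmath import mp, exp, pi, digamma, zeta, nsum, inf

mp.dps = 25
p = 5
z = exp(pi * 1j / p)                      # z_p = e^{i pi / p}

lhs = p * nsum(lambda n: zeta(p * n) - 1, [1, inf])
rhs = mp.euler + (p - 1) + (p - 1) / 2 + sum(
    z**(2*j + p) * digamma(z**(2*j + p)) for j in range(1, p))
print(lhs)        # 0.18976...
print(rhs.real)   # agrees with lhs; the imaginary part is ~0
```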
I don't have an answer for your second question at this time.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/156954",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "19",
"answer_count": 1,
"answer_id": 0
} |
Why is $\log_{-2}{4}$ complex? With the logarithm being the inverse of the exponential function, it follows that $ \log_{-2}{4}$ should equal $2$, since $(-2)^2=4$. The change of base law, however, implies that $\log_{-2}{4}=\frac{\log{4}}{\log{-2}}$, which is a complex number. Why does this occur when there is a real solution?
| The exponential function is not invertible on the complexes. Correspondingly, the complex logarithm is not a function, it is a multi-valued function. For example, $\log(e)$ is not $1$ -- instead it is the set of all values $1 + 2 \pi \mathbf{i} n$ over all integers $n$.
How are you defining $\log_a(b)$? If you are defining it by $\log(b) / \log(a)$, then it too is a multi-valued function. The values of $\log(4)/\log(-2)$ range over all values $(\ln 4 + 2 \pi \mathbf{i} m)/(\ln 2 + \pi \mathbf{i} + 2 \pi \mathbf{i} n)$, where $m$ and $n$ are integers.
Do note that the set of values of this multi-valued function does include $2$; e.g. when $m =1$ and $n=0$.
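A quick cmath sketch of both statements (the principal branch, and the branch $m=1$, $n=0$ that recovers $2$):

```python
import cmath, math

print(cmath.log(4) / cmath.log(-2))   # principal value: a complex number

num = math.log(4) + 2j * math.pi      # m = 1
den = math.log(2) + 1j * math.pi      # n = 0
print(num / den)  # ~ (2+0j), since ln 4 + 2*pi*i = 2*(ln 2 + pi*i)
```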
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/157088",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "17",
"answer_count": 2,
"answer_id": 1
} |
"Negative" versus "Minus" As a math educator, do you think it is appropriate to insist that students say "negative $0.8$" and
not "minus $0.8$" to denote $-0.8$?
The so called "textbook answer" regarding this question reads:
A number and its opposite are called additive inverses of each other because their sum is zero, the identity element for addition. Thus, the numeral $-5$ can be read "negative five," "the opposite of five," or "the additive inverse of five."
This question involves two separate, but related issues; the first is discussed at an elementary level here. While the second, and more advanced, issue is discussed here. I also found this concerning use in elementary education.
I recently found an excellent historical/cultural perspective on What's so baffling about negative numbers? written by a Fields medalist.
| As a retired teacher, I can say that I tried very hard for many years to get my students to use the term "negative" instead of "minus", but after so many years of trying, I was finally happy if they could understand the concept, and stopped worrying so much about whether they used the correct terminology!
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/157127",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "55",
"answer_count": 26,
"answer_id": 9
} |
Which of the following are Dense in $\mathbb{R}^2$? Which of the following sets are dense in $\mathbb R^2$ with respect to the usual topology.
*
*$\{ (x, y)\in\mathbb R^2 : x\in\mathbb N\}$
*$\{ (x, y)\in\mathbb R^2 : x+y\in\mathbb Q\}$
*$\{ (x, y)\in\mathbb R^2 : x^2 + y^2 = 5\}$
*$\{ (x, y)\in\mathbb R^2 : xy\neq 0\}$.
Any hint is welcome.
| *
*No. It's a bunch of parallel lines. These are vertical and the go through the integer points on the $x$-axis.
*Yes, and this one is interesting: it is a union of parallel lines with slope $-1$ and $y$-intercept at the various rationals, and it's dense in the plane.
*No. It's a circle.
*Yes. It's the plane with the $x$ and $y$ axes excised, which is dense in the plane.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/157164",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 4,
"answer_id": 2
} |
Matrix commutator question Here's a nice question I heard on IRC, courtesy of "tmyklebu."
Let $A$, $B$, and $C$ be $2\times 2$ complex matrices. Define the commutator $[X,Y]=XY-YX$ for any matrices $X$ and $Y$. Prove
$$[[A,B]^2,C]=0.$$
| Here's a better argument (not posted at midnight...) which shows that the result holds over any field: we don't need the matrices to be complex.
As in the other answer, the trace of $[A,B]$ is $0$. Therefore, the characteristic polynomial of $[A,B]$ is $x^2+\det[A,B]$. By the Cayley-Hamilton Theorem,
$$[A,B]^2 = -\det[A,B]I.$$
Therefore, $[A,B]^2$ is a scalar matrix, and therefore lies in the center of $M_{2\times 2}(\mathbf{F})$. We conclude that $[[A,B]^2,C]=0$ for any matrix $C\in M_{2\times 2}(\mathbf{F})$.
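A quick random test of the identity (a sketch with numpy; five random complex triples):

```python
import numpy as np

rng = np.random.default_rng(1)
comm = lambda X, Y: X @ Y - Y @ X

for _ in range(5):
    A, B, C = (rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
               for _ in range(3))
    K = comm(A, B)
    print(np.allclose(comm(K @ K, C), 0))   # True every time
```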
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/157242",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 1
} |
Calculating a statistic for multiple runs I have a simple, general question regarding calculating a statistic for $N$ runs of the same experiment. Suppose I would like to calculate the mean of the values returned by some test. Each run of the test generates $\langle x_1, \ldots, x_n \rangle$, possibly of different length. Let's say the statistic is the mean. Which approach would be better and why:
*
*Sum all values from the M runs, and then divide by the total number of values
*for each run calculate the average, and then average across all the run averages
I believe one of the above might be under- or overestimating the mean slightly, and I don't know which. Thanks for your answers.
| $\def\E{{\rm E}}\def\V{{\rm Var}}$Say you have $M$ runs of lengths $n_1,\dots,n_M$. Denote the $j$th value in the $i$th run by $X^i_j$, and let the $X^i_j$ be independent and identically distributed, with mean $\mu$ and variance $\sigma^2$.
In your first approach you calculate
$$\mu_1 = \frac{1}{n_1+\cdots n_M} \sum_{i=1}^M \sum_{j=1}^{n_i} X^i_j$$
and in your second approach you calculate
$$\mu_2 = \frac{1}{M} \sum_{i=1}^M \left( \frac{1}{n_i} \sum_{j=1}^{n_i} X^i_j\right)$$
You can compute their expectations:
$$\E(\mu_1) = \frac{1}{n_1+\cdots n_M} \sum_{i=1}^M \sum_{j=1}^{n_i} \mu = \frac{(n_1+\cdots n_M)\mu}{n_1+\cdots n_M} = \mu$$
vs
$$\E(\mu_2) = \frac{1}{M} \sum_{i=1}^M \left( \frac{1}{n_i} \sum_{j=1}^{n_i}\mu \right) = \frac{1}{M} ( M\mu ) = \mu$$
so the estimator is unbiased in both cases. However, if you calculate the variances you will find that
$$\V(\mu_1) = \frac{\sigma^2}{n_1+\cdots n_M}$$
and
$$\V(\mu_2) = \frac{1}{M} \left( \sum_{i=1}^M \frac{1}{n_i} \right) \sigma^2$$
With a little effort, you can show that
$$\V(\mu_1)\leq \V(\mu_2)$$
where the inequality is strict except when $n_1=n_2=\cdots=n_M$, i.e. when all of the runs produce the same amount of output. If you need to be convinced of this, work through the details in the case $M=2$, $n_1=1$ and $n_2=N >1$.
Therefore it is better to take your first approach, of summing up the output of all runs and dividing by the total length of the output. The expectation is the same in either case, but the variance is lower with the first approach.
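A simulation sketch of the two estimators with very unequal run lengths ($n_1=1$, $n_2=50$, standard normal data) makes the variance gap visible:

```python
import numpy as np

rng = np.random.default_rng(0)
mu1, mu2 = [], []
for _ in range(20000):
    run1 = rng.normal(size=1)     # n_1 = 1
    run2 = rng.normal(size=50)    # n_2 = 50
    mu1.append((run1.sum() + run2.sum()) / 51)   # pool, then divide
    mu2.append((run1.mean() + run2.mean()) / 2)  # average of run averages
print(np.var(mu1))   # ~ 1/51           ~ 0.0196
print(np.var(mu2))   # ~ (1/4)(1+1/50)  ~ 0.255
```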
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/157282",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Limits of Subsequences If $s=\{s_n\}$ and $t=\{t_n\}$ are two nonzero decreasing sequences converging to 0, such that $s_n ≤t_n$ for all $n$. Can we find subsequences $s ′$ of $s$ and $t ′$ of $t$ such that $\lim \frac{s'}{t'}=0$ , i.e., $s ′$ decreases more rapidly than $t ′$ ?
| Yes, we can. So we have two positive decreasing sequences with $s_n \leq t_n$, and $s_n \to 0, t_n \to 0$.
Then we can let $t' \equiv t$. As $\{t_n\}$ is positive, $t_1 > 0$. As $s_n \to 0$, there is some $k$ s.t. $s_k < t_1/1$. Similarly, there is some $l > k$ s.t. $s_l < t_2/2$. Continuing in this fashion, we see that we can find a subsequence $s'$ of $s$ with $s'_n < t_n/n$, so that $\dfrac{s'_n}{t_n} < \dfrac{1}{n} \to 0$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/157352",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Recurrence relation $T_{k+1} = 2T_k + 2$ I have a sequence of numbers in the binary system, as follows:
0, 10, 110, 1110, 11110, 111110, 1111110, 11111110, ...
I want to understand: is there a general formula for my sequence?
I found that this sequence satisfies the following rule:
(Number * 2) + 2
but I don't know whether this formula is correct, or whether there is a closed form (such as there is for the Fibonacci numbers) for my sequence.
| $T_{k+1} = 2T_k + 2$. Adding $2$ to both sides, we get that $$\left(T_{k+1}+2 \right) = 2 T_k + 4 = 2 \left( T_k + 2\right)$$
Calling $T_k+2 = u_k$, we get that $u_{k+1} = 2u_k$. Hence, $u_{k+1} = 2^{k+1}u_0$. This gives us $$\left(T_{k}+2 \right) = 2^k \left( T_0 + 2\right) \implies T_k = 2^{k+1} - 2 +2^kT_0$$
Since, $T_0 = 0$, we get that $$T_k = 2^{k+1} - 2$$ where my index starts from $0$.
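A small sketch that runs the recurrence, checks the closed form, and prints the binary pattern:

```python
T = 0
for k in range(8):
    assert T == 2 ** (k + 1) - 2     # closed form T_k = 2^(k+1) - 2
    print(bin(T)[2:])                # 0, 10, 110, 1110, ...
    T = 2 * T + 2                    # recurrence T_{k+1} = 2*T_k + 2
```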
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/157408",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Show that $\frac{n}{\sigma(n)} > (1-\frac{1}{p_1})(1-\frac{1}{p_2})\cdots(1-\frac{1}{p_r})$ If $n=p_1^{k_1}p_2^{k_2}\cdots p_r^{k_r}$ is the prime factorization of $n>1$ then show that :
$$1>\frac{n}{ \sigma (n)} > \left(1-\frac{1}{p_1}\right)\left(1-\frac{1}{p_2}\right)\cdots\cdots\left(1-\frac{1}{p_r}\right)$$
I have solved the $1^\text{st}$ inequality ($1>\frac{n}{\sigma(n)}$) and tried some manipulations on the right-hand side of the $2^\text{nd}$ inequality, but can't get much further. Please help.
| Note that the function $\dfrac{n}{\sigma(n)}$ is multiplicative. Hence, if $n = p_1^{k_1}p_2^{k_2} \ldots p_m^{k_m}$, then we have that $$\dfrac{n}{\sigma(n)} = \dfrac{p_1^{k_1}}{\sigma \left(p_1^{k_1} \right)} \dfrac{p_2^{k_2}}{\sigma \left(p_2^{k_2} \right)} \ldots \dfrac{p_m^{k_m}}{\sigma \left(p_m^{k_m} \right)}$$ Hence, it suffices to prove it for $n = p^k$ where $p$ is a prime and $k \in \mathbb{Z}^+$.
Let $n=p^k$; then $\sigma(n) = \dfrac{p^{k+1}-1}{p-1}$. This gives us that $$\dfrac{n}{\sigma(n)} = p^k \times \dfrac{p-1}{p^{k+1}-1} = \dfrac{p^{k+1} - p^k}{p^{k+1} - 1} = 1 - \dfrac{p^k-1}{p^{k+1}-1}.$$ Since $p > 1$, we have that $p(p^k-1) < p^{k+1} - 1 \implies \dfrac{p^k-1}{p^{k+1}-1} < \dfrac1p \implies 1 - \dfrac{p^k-1}{p^{k+1}-1} > 1 - \dfrac1p$. Hence, if $n=p^k$, then $$\dfrac{n}{\sigma(n)} > \left( 1 - \dfrac1p\right)$$
Since, $\dfrac{n}{\sigma(n)}$ is multiplicative, we have the desired result.
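A brute-force sketch of the double inequality over a small range, using sympy's divisor_sigma and primefactors:

```python
from math import prod
from sympy import divisor_sigma, primefactors

for n in range(2, 5001):
    lower = prod(1 - 1 / p for p in primefactors(n))
    ratio = n / int(divisor_sigma(n))
    assert lower < ratio < 1
print("verified for 2 <= n <= 5000")
```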
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/157459",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
} |
hausdorff, intersection of all closed sets Can you please help me with this question?
Let $X$ be a topological space.
Show that the following two conditions are equivalent:
*
*$X$ is Hausdorff
*for all $x\in X$, the intersection of all closed sets that contain a neighborhood of $x$ is $\{x\}$.
Thanks a lot!
| HINTS:
*
*If $x$ and $y$ are distinct points in a Hausdorff space, they have disjoint open nbhds $V_x$ and $V_y$, and $X\setminus V_y$ is a closed set containing $V_x$.
*If $F$ is a closed set containing an open nbhd $V$ of $x$, then $V$ and $X\setminus F$ are disjoint open sets.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/157534",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 2,
"answer_id": 0
} |
A space is normal iff every pair of disjoint closed subsets have disjoint closed neighbourhoods.
A space is normal iff every pair of disjoint closed subsets have disjoint closed neighbourhoods.
Given space $X$ and two disjoint closed subsets $A$ and $B$.
I have shown necessity: if $X$ is normal, then by Urysohn's lemma there exists a continuous map $f:X \to [0,1]$ such that $f(A)=\{0\}$ and $f(B)=\{1\}$; then $f^{-1}([0,\tfrac13])$ and $f^{-1}([\tfrac23,1])$ are two disjoint closed neighbourhoods of $A$ and $B$ (they contain the open sets $f^{-1}([0,\tfrac13))$ and $f^{-1}((\tfrac23,1])$ respectively).
But how to show the sufficiency?
| If $A$ and $B$ have disjoint closed neighborhoods $U$ and $V$, then by definition of neighborhood we know that $A\subseteq \mathrm{int}(U)$ and $B\subseteq\mathrm{int}(V)$. Now, the interior of $U$ is an open neighborhood of $A$, the interior of $V$ is an open neighborhood of $B$, and so $A$ and $B$ have disjoint open neighborhoods.
Thus, if a space satisfies your requirement, then it is normal.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/157603",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
} |
Why do $\mathbb{C}$ and $\mathbb{H}$ generate all of $M_2(\mathbb{C})$? For this question, I'm identifying the quaternions $\mathbb{H}$ as a subring of $M_2(\mathbb{C})$, so I view them as the set of matrices of form
$$
\begin{pmatrix}
a & b \\ -\bar{b} & \bar{a}
\end{pmatrix}.
$$
I'm also viewing $\mathbb{C}$ as the subfield of scalar matrices in $M_2(\mathbb{C})$, identifying $z\in\mathbb{C}$ with the diagonal matrix with $z$ along the main diagonal.
Since $\mathbb{H}$ contains $j=\begin{pmatrix} 0 & 1 \\ -1 & 0\end{pmatrix}$ and $k=\begin{pmatrix} 0 & i \\ i & 0\end{pmatrix}$, I know that
$$
ij+k=\begin{pmatrix} 0 & 2i\\ 0 & 0 \end{pmatrix}
$$
and
$$
-ij+k= \begin{pmatrix} 0 & 0\\ 2i & 0 \end{pmatrix}
$$
are in the generated subring. I'm just trying to find matrices of form $\begin{pmatrix} a & 0 \\ 0 & 0 \end{pmatrix}$ and $\begin{pmatrix} 0 & 0\\ 0 & d \end{pmatrix}$ for $a,d\neq 0$ to conclude the generated subring is the whole ring. How can I get these remaining two pieces? Thanks.
| Hint: Use linear combinations of
$$
jk=\pmatrix{i&0\cr0&-i\cr}\qquad\text{and the scalar}\qquad \pmatrix{i&0\cr 0&i\cr}.
$$
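Spelled out, the hint produces all four matrix units; here is a sympy sketch combining it with the $E_{12}$ and $E_{21}$ pieces already found in the question:

```python
from sympy import Matrix, I, eye

j = Matrix([[0, 1], [-1, 0]])
k = Matrix([[0, I], [I, 0]])
iI = I * eye(2)                    # the scalar i

E12 = (iI * j + k) / (2 * I)       # ij + k = [[0, 2i], [0, 0]]
E21 = (-iI * j + k) / (2 * I)      # -ij + k = [[0, 0], [2i, 0]]
E11 = (j * k + iI) / (2 * I)       # jk + iI = diag(2i, 0)
E22 = (iI - j * k) / (2 * I)       # iI - jk = diag(0, 2i)
print(E11, E12, E21, E22)          # the four matrix units
```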
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/157662",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
a function that maps half planes Define
$H^{+}=\{z:y>0\}$
$H^{-}=\{z:y<0\}$
$L^{+}=\{z:x>0\}$
$L^{-}=\{z:x<0\}$
$f(z)=\frac{z}{3z+1}$ maps which of the portions above onto which, and vice versa? I will be glad if anyone can tell me how to handle this type of problem. By inspection?
| HINTS
*
*Fractional transformations/Möbius transformations take circles and lines to circles and lines, i.e. they are 'circilinear.' They also preserve connected regions.
*If you find out what happens to the boundaries, you'll know almost everything (except for in which side of the boundary the image resides); in one of those silly word plays, the image of the boundary is the boundary of the image.
*Once you know where the upper half plane, say, goes, you know where the lower half plane goes automatically.
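If you want to check your reasoning numerically, here is a small sketch probing $f$ at a few test points (avoiding the pole at $z=-1/3$):

```python
def f(z):
    return z / (3 * z + 1)

for z in [1j, -1j, 2j, 1 + 0j, -1 + 0j]:
    print(z, '->', f(z))
# f has real coefficients, so the real axis maps into the real axis,
# and f(i) = (3 + i)/10 has positive imaginary part: H+ -> H+, H- -> H-.
```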
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/157724",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Nearest matrix in doubly stochastic matrix set Suppose $\mathcal{D}_N$ denotes the set of $N\times N$ doubly stochastic matrices. Given any element $M\in \mathcal{D}_N$, the singular value decomposition of $M$ is $$ M=USV'$$
where $U$ and $V$ are two $N\times N$ orthogonal matrices and $S$ is an $N \times N$ diagonal matrix.
Let $P$ be the 'closest' orthogonal matrix to $M$, i.e. $P=\arg\min_{X\in\mathcal{O}}||X-M||_F^2$, where $\mathcal{O}$ represents the set of $N\times N$ orthogonal matrices. Note that such a $P$ may not be unique; in this case, we choose any of them. One conclusion about $P$ is $P=UV'$, where $U$ and $V$ are defined before (although they can be non-unique, we just choose any of them).
$M_1 \in \mathcal{D}_N$, which is 'closest' to $P$. More specifically
$$ M_1 = \arg\min_{X\in\mathcal{D}} ||X - P||_F ^2 $$
Similarly, if $M_1$ is not unique, we choose any of them (this should not happen, actually: since we may imagine it as a 'ball' approaching a 'polytope', there should be only one minimum).
My question is :
The statement: $M_1=M$ if and only if $M$ is a permutation matrix
Does this statement always hold true?
Actually, if $M$ is a permutation matrix, $M_1=M$, this is obvious, since $S=I$, and $P=M$.
However, does another direction always hold true? If so, how to prove this, otherwise, how to give a counter-example?
Thanks for any suggestions!
| I didn't exactly get your question. But the solution to the optimization problem you are looking at is always a permutation matrix. This follows from Birkhoff's theorem, which states that every doubly stochastic matrix is a convex combination of the permutation matrices. Hence, the permutation matrices form the corners of the convex set of all doubly stochastic matrices. The objective function you have here is a convex function, and thus the minimum should be attained at one of the corner points, which are all permutation matrices.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/157775",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 1
} |
Multiplicative Self-inverse in Fields I assume there are only two multiplicative self-inverses in each field with characteristic bigger than $2$ (the field is finite, but I think it holds in general). In a field $F$ with $\operatorname{char}(F)>2$, a multiplicative self-inverse $a \in F$ is an element such that
$$ a \cdot a = 1.$$
I think in each field they are $1$ and $-1$. Any ideas how to prove that?
| Hint $\rm\ x^2\! =\! 1\!\iff\! (x\!-\!1)(x\!+\!1) = 0\! \iff\! x = \pm1,\:$ by $\rm\:ab=0\:\Rightarrow\: a=0\:\ or\:\ b=0\:$ in a field.
This may fail if the latter property fails, i.e. if nontrivial zero-divisors exist. Consider, for example, $\rm\ x^2 = 1\:$ has $4$ roots $\rm\:x = \pm1, \pm 3\:$ in $\rm\:\mathbb Z/8 = $ integers mod $8,\:$ i.e. $\rm\:odd^2 \equiv 1\pmod 8$.
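A two-line sketch makes the contrast visible (prime moduli give exactly $\pm1$; composite moduli with zero-divisors can give more):

```python
for m in [7, 8, 11, 15]:
    print(m, [x for x in range(m) if x * x % m == 1])
# 7 [1, 6]   8 [1, 3, 5, 7]   11 [1, 10]   15 [1, 4, 11, 14]
```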
Rings satisfying the latter property (no zero-divisors) are called (integral) domains. They are characterized by a generalization of the above, viz. a ring $\rm\: D\:$ is a domain $\iff$ every nonzero polynomial $\rm\ f(x)\in D[x]\ $ has at most $\rm\ deg\ f\ $ roots in $\rm\:D.\:$ For the simple proof see my post here, where I illustrate it constructively in $\rm\: \mathbb Z/m\: $ by showing that, given any $\rm\:f(x)\:$ with more roots than its degree,$\:$ we can quickly compute a nontrivial factor of $\rm\:m\:$ via a $\rm\:gcd$. The quadratic case of this result is at the heart of many integer factorization algorithms, which try to factor $\rm\:m\:$ by searching for a nontrivial square root in $\rm\: \mathbb Z/m,\:$ e.g. a square root of $1$ that is not $\:\pm 1$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/157831",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 2,
"answer_id": 0
} |
Evaluation of $\lim\limits_{x\rightarrow0} \frac{\tan(x)-x}{x^3}$ One of the previous posts made me think of the following question: Is it possible to evaluate this limit without L'Hopital and Taylor?
$$\lim_{x\rightarrow0} \frac{\tan(x)-x}{x^3}$$
| Here is a different approach. Let $$L = \lim_{x \to 0} \dfrac{\tan(x) - x}{x^3}$$
Replacing $x$ by $2y$, we get that
\begin{align}
L & = \lim_{y \to 0} \dfrac{\tan(2y) - 2y}{(2y)^3} = \lim_{y \to 0} \dfrac{\dfrac{2 \tan(y)}{1 - \tan^2(y)} - 2y}{(2y)^3}\\
& = \lim_{y \to 0} \dfrac{\dfrac{2 \tan(y)}{1 - \tan^2(y)} - 2 \tan(y) + 2 \tan(y) - 2y}{(2y)^3}\\
& = \lim_{y \to 0} \dfrac{\dfrac{2 \tan^3(y)}{1 - \tan^2(y)} + 2 \tan(y) - 2y}{(2y)^3}\\
& = \lim_{y \to 0} \left(\dfrac{2 \tan^3(y)}{8y^3(1 - \tan^2(y))} + \dfrac{2 \tan(y) - 2y}{8y^3} \right)\\
& = \lim_{y \to 0} \left(\dfrac{2 \tan^3(y)}{8y^3(1 - \tan^2(y))} \right) + \lim_{y \to 0} \left(\dfrac{2 \tan(y) - 2y}{8y^3} \right)\\
& = \dfrac14 \lim_{y \to 0} \left(\dfrac{\tan^3(y)}{y^3} \dfrac1{1 - \tan^2(y)} \right) + \dfrac14 \lim_{y \to 0} \left(\dfrac{\tan(y) - y}{y^3} \right)\\
& = \dfrac14 + \dfrac{L}4
\end{align}
Hence, $$\dfrac{3L}{4} = \dfrac14 \implies L = \dfrac13$$
EDIT
In Hans Lundmark's answer, evaluating the desired limit boils down to evaluating $$S=\lim_{x \to 0} \dfrac{\sin(x)-x}{x^3}$$ The same idea as above can be used to evaluate $S$ as well.
Replacing $x$ by $2y$, we get that \begin{align}
S & = \lim_{y \to 0} \dfrac{\sin(2y) - 2y}{(2y)^3} = \lim_{y \to 0} \dfrac{2 \sin(y) \cos(y) - 2y}{8y^3}\\
& = \lim_{y \to 0} \dfrac{2 \sin(y) \cos(y) - 2 \sin(y) + 2 \sin(y) - 2y}{8y^3}\\
& = \lim_{y \to 0} \dfrac{2 \sin(y) - 2y}{8y^3} + \lim_{y \to 0} \dfrac{2 \sin(y) \cos(y)-2 \sin(y)}{8y^3}\\
& = \dfrac14 \lim_{y \to 0} \dfrac{\sin(y) - y}{y^3} - \dfrac14 \lim_{y \to 0} \dfrac{\sin(y) (1 - \cos(y))}{y^3}\\
& = \dfrac{S}4 - \dfrac14 \lim_{y \to 0} \dfrac{\sin(y) 2 \sin^2(y/2)}{y^3}\\
& = \dfrac{S}4 - \dfrac18 \lim_{y \to 0} \dfrac{\sin(y)}{y} \dfrac{\sin^2(y/2)}{(y/2)^2}\\
& = \dfrac{S}4 - \dfrac18 \lim_{y \to 0} \dfrac{\sin(y)}{y} \lim_{y \to 0} \dfrac{\sin^2(y/2)}{(y/2)^2}\\
& = \dfrac{S}4 - \dfrac18\\
\dfrac{3S}4 & = - \dfrac18\\
S & = - \dfrac16
\end{align}
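Both limits are easy to sanity-check numerically (a sketch; $x$ should not be taken too small, or floating-point cancellation takes over):

```python
import math

for x in [1e-1, 1e-2, 1e-3]:
    print((math.tan(x) - x) / x**3, (math.sin(x) - x) / x**3)
# left column -> 1/3, right column -> -1/6
```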
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/157903",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10",
"answer_count": 5,
"answer_id": 2
} |
Combining a radical and simplifying? How would I combine and simplify the following radical:
$$\sqrt {\frac{A^2}{2}} - \sqrt \frac{A^2}{8}$$
| $$\sqrt {\frac{A^2}{2}} - \sqrt \frac{A^2}{8}\\=\frac{|A|}{\sqrt 2}-\frac{|A|}{2\sqrt 2}\\=\frac{2|A|-|A|}{2\sqrt 2}\\=\frac{|A|}{2\sqrt 2}\frac{\sqrt 2}{\sqrt 2}\\=\frac{\sqrt 2|A|}{4}$$
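A one-line sympy check (with $A$ declared real, which is what produces the absolute value):

```python
import sympy as sp

A = sp.symbols('A', real=True)
print(sp.simplify(sp.sqrt(A**2 / 2) - sp.sqrt(A**2 / 8)))  # sqrt(2)*Abs(A)/4
```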
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/157967",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 0
} |