Show that in any group of order $23 \cdot 24$, the $23$-Sylow subgroup is normal. Let $P_k$ denote the $k$-Sylow subgroup and let $n_k$ denote the number of conjugates of $P_k$. $n_2 \equiv 1 \pmod 2$ and $n_2 \mid 69 \implies n_2 = 1, 3, 23, 69$. $n_3 \equiv 1 \pmod 3$ and $n_3 \mid 184 \implies n_3 = 1, 4, 46, 184$. $n_{23} \equiv 1 \pmod{23}$ and $n_{23} \mid 24 \implies n_{23} = 1, 24$. Suppose for contradiction that $n_{23} = 24$. Then $|N(P_{23})| = 552/24 = 23$. So the normalizer of $P_{23}$ only contains the identity and elements of order $23$. It is impossible for $n_2$ to equal $23$ or $69$, or for $n_3$ to equal $46$ or $184$, since we would then have more than $552$ elements in $G$. Case 1: Let $n_2 = 1$. Then we have $P_2 \triangleleft G$, and so we have a subgroup $H = P_2P_{23}$ of $G$. We know that $|H| = \frac{|P_2||P_{23}|}{|P_2 \cap P_{23}|} = 184$. We also know that the $23$-Sylow subgroup of $H$ is normal in $H$, so its normalizer has order at least $184$. Since a $p$-Sylow subgroup is the largest subgroup of order $p^k$ for some $k$, and since the $23$-Sylow subgroups of $H$ and $G$ both have order $23$, they must coincide. But that means that elements of order not equal to $23$ normalize $P_{23}$, which is a contradiction. Case 2: Suppose that $n_3 = 1$. Then we have $P_3 \triangleleft G$, and so we have a subgroup $K = P_3P_{23}$ of $G$. Since $|K| = \frac{|P_3||P_{23}|}{|P_3 \cap P_{23}|} = 69$, the $23$-Sylow subgroup of $K$ is normal. Again, as in Case 1, we have an element of order not equal to $23$ that normalizes $P_{23}$. Case 3: Let $n_3 = 4$ and $n_2 = 3$. But then $|N(P_2)| = 552/3 = 184$. So this is the same as Case 1, since we have a subgroup of $G$ of order $184$. Do you think my answer is correct? Thanks in advance
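The candidate lists for each $n_p$ can be double-checked mechanically. A small Python sketch (the helper name is mine); note it also turns up $46$ as a divisor of $184$ that is $\equiv 1 \pmod 3$, which the same element-counting argument rules out:

```python
def sylow_counts(order, p):
    """Candidates for n_p: divisors of order/p^k that are congruent to 1 mod p."""
    m = order
    while m % p == 0:
        m //= p
    return [d for d in range(1, m + 1) if m % d == 0 and d % p == 1]

# |G| = 23 * 24 = 552
sylow_counts(552, 23)  # [1, 24]
sylow_counts(552, 2)   # [1, 3, 23, 69]
sylow_counts(552, 3)   # [1, 4, 46, 184]
```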
As commented back in the day, the OP's solution is correct. Promoting a modified (IMHO simplified) form of the idea from Thomas Browning's comment to an alternative answer. The group $G$ contains $23\cdot24-24\cdot22=24$ elements outside the Sylow $23$-subgroups. Call the set of those elements $X$. Clearly $X$ consists of full conjugacy classes. If $s\in G$ is an element of order $23$, then consider the conjugation action of $P=\langle s\rangle$ on $X$. The identity element is obviously in an orbit by itself. By Orbit-Stabilizer either $X\setminus\{1_G\}$ is a single orbit, or the action of $P$ has another non-identity fixed point in $X$.

* In the former case all the elements of $X\setminus\{1_G\}$ must share the same order, being conjugates, in violation of Cauchy's theorem, which implies that there must exist elements of orders $2$ and $3$ in $X$ at least.
* In the latter case $P$ is centralized, hence normalized, by a non-identity element of $X$. But this contradicts the fact that $P$ is known to have $24=[G:P]$ conjugate subgroups, and hence must be self-normalizing, $N_G(P)=P$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/463823", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "27", "answer_count": 1, "answer_id": 0 }
Show that $S=\{\frac{p}{2^i}: p\in\Bbb Z, i \in \Bbb N \}$ is dense in $\Bbb R$. Show that $S=\{\frac{p}{2^i}: p\in\Bbb Z, i \in \Bbb N \}$ is dense in $\Bbb R$. Just found this given as an example of a dense set while reading, and I couldn't convince myself of this claim's truthfulness. It kind of bugs me and I wonder if you guys have any idea why it is true. (I thought of taking two rational numbers that I know exist in any real neighborhood and averaging them in some way, but I didn't get far with that idea..) Thank you!
I like to think of the answer intuitively. Represent $p$ in binary (base 2). Then $\frac{p}{2^i}$ is simply a number with finitely many binary digits. Conversely, any number whose binary representation has finitely many digits can be written as $\frac{p}{2^i}$. To show the set is dense in $\Bbb R$, we have to show that given any real number $a$, we can find elements $b$ of the set arbitrarily close to $a$. As a consequence, you can find infinitely many numbers from the set in any interval; that's the intuitive meaning of "dense". If you look at the binary picture, it should be clear that the set is dense: truncating the binary expansion of any real number after $i$ digits gives an element of the set within $2^{-i}$ of it.
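The truncation idea is easy to check numerically; a minimal sketch using only the standard library:

```python
import math

x = math.pi  # stand-in for an arbitrary real number
for i in (4, 8, 16, 32):
    p = round(x * 2**i)   # nearest integer numerator
    approx = p / 2**i     # an element of S
    # rounding to the nearest p gives an error of at most half a spacing
    assert abs(x - approx) <= 2**-(i + 1)
```

Each doubling of $i$ halves the guaranteed distance, which is exactly the density claim.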
{ "language": "en", "url": "https://math.stackexchange.com/questions/463884", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
Proof read from "A problem seminar" May you help me judge the correctness of my proof?: Show that if $a$ and $b$ are positive integers, then $$\left(a+\frac{1}{2}\right)^n+\left(b+\frac{1}{2}\right)^n$$ is an integer for only finitely many positive integers $n$. We want $n$ so that $$\left(a+\frac{1}{2}\right)^n+\left(b+\frac{1}{2}\right)^n\equiv0\pmod{1}$$ So we know by the binomial theorem that $(an+b)^k\equiv b^k\pmod{n}$ for positive $k$. Then, $$\left(a+\frac{1}{2}\right)^n\equiv(1/2)^n\pmod{1}$$ and similarly with the $b$. So $$\left(a+\frac{1}{2}\right)^n+\left(b+\frac{1}{2}\right)^n\equiv 2\cdot(1/2)^n\pmod{1}$$ Therefore, we want $2\cdot(1/2)^n$ to be an integer, so that $2^n\mid 2$; clearly, the only positive option is $n=1$. (Editing, my question got prematurely posted. Done)
Our expression can be written as $$\frac{(2a+1)^n+(2b+1)^n}{2^n}.$$ If $n$ is even, then $(2a+1)^n$ and $(2b+1)^n$ are both the squares of odd numbers. Any odd perfect square is congruent to $1$ modulo $8$. So their sum is congruent to $2$ modulo $8$, and therefore cannot be divisible by any $2^n$ with $n\gt 1$. So we can assume that $n$ is odd. For odd $n$, we have the identity $$x^n+y^n=(x+y)(x^{n-1}-x^{n-2}y+\cdots +y^{n-1}).$$ Let $x=2a+1$ and $y=2b+1$. Note that $x^{n-1}-x^{n-2}y+\cdots +y^{n-1}$ is a sum of an odd number of terms, each odd, so it is odd. Thus the highest power of $2$ that divides $(2a+1)^n+(2b+1)^n$ is the highest power of $2$ that divides $(2a+1)+(2b+1)$. Since $(2a+1)+(2b+1)\ne 0$, there is a largest $n$ such that our expression is an integer. Remark: The largest $n$ such that our expression is an integer can be made quite large. You might want to see for example what happens if we let $2a+1=2049$ and $2b+1=2047$. Your proposed proof suggests, in particular, that $n$ cannot be greater than $1$. I suggest that when you are trying to write out a number-theoretic argument, you avoid fractions as much as possible and deal with integers only.
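For the suggested example $2a+1 = 2049$, $2b+1 = 2047$ (so $x + y = 4096 = 2^{12}$), a quick brute-force check, consistent with the answer's analysis (odd $n$ work exactly while $n \le 12$):

```python
def is_integer_value(a, b, n):
    # (a + 1/2)^n + (b + 1/2)^n is an integer iff 2^n divides (2a+1)^n + (2b+1)^n
    return ((2*a + 1)**n + (2*b + 1)**n) % 2**n == 0

a, b = 1024, 1023  # 2a+1 = 2049, 2b+1 = 2047
[n for n in range(1, 20) if is_integer_value(a, b, n)]  # [1, 3, 5, 7, 9, 11]
```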
{ "language": "en", "url": "https://math.stackexchange.com/questions/463980", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Quartic Equation having Galois Group as $S_4$ Suppose $f(x)\in \mathbb{Z}[x]$ be an irreducible Quartic polynomial with Galois Group as $S_4$. Let $\theta$ be a root of $f(x)$ and set $K=\mathbb{Q}(\theta)$.Now, the Question is: Prove that $K$ is an extension of degree $\mathbb{Q}$ of degree 4 which has no proper Subfields? Are there any Galois Extensions of $\mathbb{Q}$ of degree 4 with no proper sub fields. As i have adjoined a root of irreducible quartic, I can see that $K$ is of degree $4$ over $\mathbb{Q}$. But, why does there is no proper subfield of $K$ containing $\mathbb{Q}$. suppose $L$ is proper subfield of $K$, then $L$ has to be of degree $2$ over $\mathbb{Q}$. So, $L$ is Galois over $\mathbb{Q}$. i.e., $L$ is normal So corresponding subgroup of Galois group has to be normal. I tried working in this way but could not able to conclude anything from this. any help/suggestion would be appreciated. Thank You
As has been remarked, the non-existence of intermediate fields is equivalent to $S_{3}$ being a maximal subgroup of $S_{4}.$ If not, then there is a subgroup $H$ of $S_{4}$ with $[S_{4}:H] = [H:S_{3}] = 2.$ Now $S_{3} \lhd H$ and $S_{3}$ contains all Sylow $3$-subgroups of $H.$ But $S_{3}$ has a unique Sylow $3$-subgroup, which is therefore normal in $H.$ Hence $H$ contains all Sylow $3$-subgroups of $S_{4}$, as $H \lhd S_{4}$ and $[S_{4}:H] = 2.$ Then since $H$ only has one Sylow $3$-subgroup, $S_{4}$ has only one Sylow $3$-subgroup, a contradiction (for example, $\langle (123) \rangle$ and $\langle (124) \rangle$ are different Sylow $3$-subgroups of $S_{4}$).
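The final counterexample can be verified by brute force over $S_4$; a sketch (all helper names are mine), which finds the eight $3$-cycles and the four distinct Sylow $3$-subgroups they generate:

```python
from itertools import permutations

identity = tuple(range(4))

def compose(p, q):
    # permutations as tuples of images: (p o q)(i) = p[q[i]]
    return tuple(p[i] for i in q)

def order(p):
    q, n = p, 1
    while q != identity:
        q, n = compose(p, q), n + 1
    return n

three_cycles = [p for p in permutations(range(4)) if order(p) == 3]
sylow3 = {frozenset({identity, p, compose(p, p)}) for p in three_cycles}
len(sylow3)  # 4 distinct Sylow 3-subgroups of S_4
```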
{ "language": "en", "url": "https://math.stackexchange.com/questions/464113", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 2, "answer_id": 0 }
Finding an orthogonal basis of the subspace spanned by given vectors Let $W$ be the subspace spanned by the given vectors. Find a basis for $W^\perp$. $$v_1=(2,1,-2);\quad v_2=(4,0,1)$$ Well I did the following to find the basis. $$(x,y,z)\cdot(2,1,-2)=0$$ $$(x,y,z)\cdot(4,0,1)=0$$ If you simplify this into a linear system: $$2x + y - 2z = 0$$ $$4x + z = 0$$ Now by placing this in a matrix and performing row reduction I get $$ w = \left[ \begin{array}{ccc|c} 1 &\ 0 &\ 1/4 &0\\ 0&\ 1 &\ -5/2 &0 \end{array} \right]$$ By solving this I get $$x=-\tfrac{1}{4} t,\qquad y=\tfrac{5}{2} t,\qquad z=t$$ By this I get the basis to be $$[-1/4,\ 5/2,\ 1]$$ I am not sure the answer is correct, since the spanning vector can come out differently (up to a scalar) depending on the method. Please tell me if I used the correct method. Thank you
Since you work in $\mathbb R^3$, simply take $v_3=v_1\wedge v_2=(1,-10,-4)$.
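A quick sanity check that the cross product is orthogonal to both spanning vectors (and is a scalar multiple of your $[-1/4,\,5/2,\,1]$):

```python
def cross(u, v):
    return (u[1]*v[2] - u[2]*v[1],
            u[2]*v[0] - u[0]*v[2],
            u[0]*v[1] - u[1]*v[0])

def dot(u, v):
    return sum(a*b for a, b in zip(u, v))

v1, v2 = (2, 1, -2), (4, 0, 1)
v3 = cross(v1, v2)        # (1, -10, -4)
dot(v1, v3), dot(v2, v3)  # (0, 0)
```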
{ "language": "en", "url": "https://math.stackexchange.com/questions/464174", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Why is there never a proof that extending the reals to the complex numbers will not cause contradictions? The number $i$ is essentially defined for a property we would like to have: to allow taking square roots of negative reals, to solve any polynomial, etc. But there is never a proof that this cannot give rise to contradictions, and this bothers me. For instance, one would probably like to define division by zero, since undefined values are simply annoying. We can introduce the number "$\infty$" if we so choose, but by doing so, we can argue contradictory statements with it, such as $1=2$ and so on that you've doubtless seen before. So since the definition of an extra number to have certain properties that previously did not exist may cause contradictions, why aren't we so rigorous with the definition of $i$? edit: I claim we aren't, simply because no matter how long and hard I look, I never find anything close to what I'm looking for. Just a definition that we assume is compatible with everything we already know.
There are several ways to introduce the complex numbers rigorously, but simply postulating the properties of $i$ isn't one of them. (At least not unless accompanied by some general theory of when such postulations are harmless.) The most elementary way to do it is to look at the set $\mathbb R^2$ of pairs of real numbers and then study the two functions $f,g:\mathbb R^2\times \mathbb R^2\to\mathbb R^2$: $$ f((a,b),(c,d)) = (a+c, b+d) \qquad g((a,b),(c,d))=(ac-bd,ad+bc) $$ It is then straightforward to check that

* $(\mathbb R^2,f,g)$ satisfies the axioms for a field, with $(0,0)$ being the "zero" of the field and $(1,0)$ being the "one" of the field,
* the subset of pairs with second component being $0$ is a subfield that's isomorphic to $\mathbb R$,
* the square of $(0,1)$ is $(-1,0)$, which we've just identified with the real number $-1$, so let's call $(0,1)$ $i$, and
* every element of $\mathbb R^2$ can be written as $a+bi$ with real $a$ and $b$ in exactly one way, namely $(a,b)=(a,0)+(b,0)(0,1)$.

With this construction in mind, if we ever find a contradiction involving complex number arithmetic, this contradiction can be translated to a contradiction involving plain old (pairs of) real numbers. Since we believe that the real numbers are contradiction-free, so are the complex numbers.
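The construction can be carried out quite literally in code, with the same $f$ and $g$ as above; spot-checking axioms this way is of course not a proof, just an illustration:

```python
def f(z, w):  # addition on R^2
    (a, b), (c, d) = z, w
    return (a + c, b + d)

def g(z, w):  # multiplication on R^2
    (a, b), (c, d) = z, w
    return (a*c - b*d, a*d + b*c)

i = (0, 1)
g(i, i)  # (-1, 0), i.e. the real number -1

# spot-check one field axiom: multiplication distributes over addition
z, w, u = (1, 2), (3, -4), (-5, 7)
g(z, f(w, u)) == f(g(z, w), g(z, u))  # True
```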
{ "language": "en", "url": "https://math.stackexchange.com/questions/464262", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "59", "answer_count": 8, "answer_id": 6 }
Please help on this Probability problem A bag contains 5 red marbles and 7 green marbles. Two marbles are drawn randomly one at a time, and without replacement. Find the probability of picking a red and a green, without order. This is how I attempted the question: I first got $P(\text{Red})= 5/12$ and $P(\text{Green})= 7/11$ and multiplied the two: $$\frac{5}{12}\times \frac{7}{11}= \frac{35}{132}$$ Then I got $P(\text{Green})= 7/12$ and $P(\text{Red})= 5/11$ $\implies$ $$\frac{7}{12} \times \frac{5}{11}= \frac{35}{132}$$ So I decided that $$P(\text{G and R}) \;\text{ or }\; P(\text{R and G}) =\frac{35}{132} + \frac{35}{132} =\frac{35}{66}$$ Is this correct?
Very nice and successful attempt. You recognized that there are two ways one can draw a red and a green marble, given two draws: Red then Green, or Green then Red. You took into account that the marbles are not replaced. And your computations are correct: you multiplied when you needed to multiply and added when you needed to add: $$\left[P(\text{1. Red}) \times P(\text{2. Green})\right]+ \left[P(\text{1. Green}) \times P(\text{2. Red})\right]$$ Your method and result are correct.
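Exhaustive enumeration of all $12 \cdot 11$ ordered draws confirms the value, with exact fractions:

```python
from fractions import Fraction

marbles = ['R']*5 + ['G']*7
hits = total = 0
for i in range(12):
    for j in range(12):
        if i == j:
            continue  # drawn without replacement
        total += 1
        if {marbles[i], marbles[j]} == {'R', 'G'}:
            hits += 1

Fraction(hits, total)  # Fraction(35, 66)
```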
{ "language": "en", "url": "https://math.stackexchange.com/questions/464344", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 0 }
Mean value of the rotation angle is 126.5° In the paper "Applications of Quaternions to Computation with Rotations" by Eugene Salamin, 1979 (click here), they get 126.5 degrees as the mean value of the rotation angle of a random rotation (by integrating quaternions over the 3-sphere). How can I make sense of this result? If rotation angle around a given axis can be 0..360°, should not the mean be 180? or 0 if it can be −180..180°? or 90° if you take absolute values?
First, SO(3) of course has its unique invariant probability measure. Hence, “random rotation” is a well-defined SO(3)-valued random variable. Each rotation (an element of SO(3)) has a uniquely defined rotation angle θ, from 0 to 180° (π), because of the axis–angle representation. (Note that the axis is undefined for θ = 0 and has two possible values for θ = 180°, but θ itself has no ambiguity.) Hence, “angle of a random rotation” is a well-defined random angle. Why is its average closer to one end (180°) than to the other (0)? In short, because there are many 180° rotations, whereas the rotation by zero angle is unique (the identity map). Note that I ignore the Spin(3) → SO(3) covering that is important in quaternionic discourse, but it won’t change the result: the Haar measure on Spin(3) projected onto SO(3) gives the Haar measure on SO(3), hence there is no difference whether we do the computations on the sphere S3 of unit quaternions (the same as Spin(3)) or directly on SO(3).
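The figure can be reproduced by Monte Carlo over uniformly random unit quaternions, i.e. uniform points on the 3-sphere; a sketch (the 1979 paper's exact computation is an integral, this is just the sampling analogue). The exact mean works out to π/2 + 2/π radians ≈ 126.48°:

```python
import math, random

random.seed(0)
N = 100_000
total = 0.0
for _ in range(N):
    # uniform point on S^3: four standard gaussians, normalized
    q = [random.gauss(0, 1) for _ in range(4)]
    norm = math.sqrt(sum(c*c for c in q))
    w = abs(q[0]) / norm             # |cos(theta/2)|, folding q ~ -q
    total += 2 * math.acos(min(1.0, w))

math.degrees(total / N)  # ≈ 126.5
```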
{ "language": "en", "url": "https://math.stackexchange.com/questions/464419", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
How to find the degrees between 2 vectors when I have $\arccos$ just in radian mode? I'm trying to write in Java a function which finds the angle, in degrees, between 2 vectors, according to the following equation - $$\cos{\theta} = \frac{\vec{u} \cdot \vec{v}}{\|\mathbf{u}\|\|\mathbf{v}\|}$$ but in Java the Math.acos method returns the angle in radians, so what do I have to do after I calculate $\frac{\vec{u} \cdot \vec{v}}{\|\mathbf{u}\|\|\mathbf{v}\|}$ to get it in degrees?
You can compute the angle, in degrees, by computing the angle in radians, and then multiplying by $\dfrac {360}{2\pi} = \dfrac {180\; \text{degrees}}{\pi\; \text{radians}}$: $$\theta = \arccos\left(\frac{\vec{u} \cdot \vec{v}}{\|\mathbf{u}\|\|\mathbf{v}\|}\right)\cdot \frac {180}{\pi}$$
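In Java the conversion step is `Math.toDegrees`; the runnable sketch below uses Python, where the same role is played by `math.degrees`:

```python
import math

def angle_degrees(u, v):
    dot = sum(a*b for a, b in zip(u, v))
    cos_theta = dot / (math.hypot(*u) * math.hypot(*v))
    return math.degrees(math.acos(cos_theta))

angle_degrees((1, 0), (1, 1))  # ≈ 45.0
```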
{ "language": "en", "url": "https://math.stackexchange.com/questions/464557", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Expected number of people sitting in the right seats. There was a popular interview question from a while back: there are $n$ people being seated on an airplane, and the first person comes in and sits at a random seat. Everyone else who comes in either sits in his seat, or if his seat has been taken, sits in a random unoccupied seat. What is the probability that the last person sits in his correct seat? The answer to this question is $1/2$ because everyone looking to sit in a random seat has an equal probability of sitting in the first person's seat as the last person's. My question is: what is the expected number of people sitting in their correct seat? My take: this would be $\sum_{i=1}^n p_i$ where $p_i$ is the probability that person $i$ sits in the right seat. $p_1 = 1/n$ $p_2 = 1 - 1/n$ $p_3 = 1 - \left(1/n + 1/(n(n-1))\right)$ $p_4 = 1 - \left(1/n + 2/(n(n-1)) + 1/(n(n-1)(n-2))\right)$ Is this correct? And does it generalize to $p_i$ having a coefficient of $\max(0, i-1)$ on the $1/(n(n-1))$ term, a coefficient of $\max(0, i-2)$ on the $1/(n(n-1)(n-2))$ term, etc.? Thanks.
I found this question and the answer might be relevant: Seating of $n$ people with tickets into $n+k$ chairs with 1st person taking a random seat. The answer there states that the probability of a person not sitting in his seat is $\frac{1}{k+2}$, where $k$ is the number of seats left after he takes a seat. This makes sense for person $i$: among the $n-i+2$ seats $1, i, i+1, \ldots, n$, each is equally likely to be the first of them taken, and person $i$ is displaced exactly when seat $i$ is taken first, so the probability he sits in his own seat is $\frac{n-i+1}{n-i+2}$. So $k = 0$ for the last person and $k = n-2$ for the second person. The answer then should just be $1/n + \sum_{i = 2}^{n} \frac{n-i+1}{n-i+2}$.
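The closed form can be compared against a direct simulation of the boarding process; a sketch (all naming is mine):

```python
import random
from fractions import Fraction

def expected_correct(n):
    # person 1 correct w.p. 1/n; person i >= 2 correct w.p. (n-i+1)/(n-i+2)
    return Fraction(1, n) + sum(Fraction(n - i + 1, n - i + 2) for i in range(2, n + 1))

def simulate(n, trials, rng):
    total = 0
    for _ in range(trials):
        free = list(range(n))
        seats = {}
        s = rng.choice(free); free.remove(s); seats[0] = s   # person 0 sits at random
        for p in range(1, n):
            # everyone keeps their own seat unless it is already taken
            if p in free:
                free.remove(p); seats[p] = p
            else:
                s = rng.choice(free); free.remove(s); seats[p] = s
        total += sum(seats[p] == p for p in range(n))
    return total / trials

n = 6
float(expected_correct(n)), simulate(n, 20000, random.Random(1))  # the two agree closely
```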
{ "language": "en", "url": "https://math.stackexchange.com/questions/464625", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "13", "answer_count": 7, "answer_id": 2 }
What does it mean "adjoin A to B"? What does it mean that we can obtain $\mathbb{C}$ from $\mathbb{R}$ by adjoining $i$? Or that we can also adjoin $\sqrt{2}$ to $\mathbb{Q}$ to get $\mathbb{Q}(\sqrt{2})=\{a+b \sqrt{2}\mid a,b \in \mathbb{Q}\}$?
It means exactly what you have written there. Let $F$ be a field and $\alpha$ be a root of a polynomial $f(x)$ that is irreducible of degree $d > 1$ in $F[x]$. Then we say we can adjoin $\alpha$ to $F$ by considering all linear combinations of field elements of $F$ with scalar multiples of powers of $\alpha$ up to $d-1$, or rather: $F(\alpha) = \{a_{1} + a_{2}\alpha + a_{3} \alpha^{2} + \cdots + a_{d}\alpha^{d-1}\mid a_{1},\ldots,a_{d} \in F\}$. Note that $\alpha \notin F$, because it is a root of a polynomial irreducible of degree $d > 1$ over $F$. However, $F(\alpha)$ certainly contains $\alpha$. In this light, we can view field extensions as a way of "extending" base fields to include elements they wouldn't otherwise have. You'll notice this is consistent with the definition of $\mathbb{C}$ from $\mathbb{R}$: $i$ is a root of the irreducible polynomial $x^{2} + 1$ in $\mathbb{R}[x]$. Hence, if we define $\mathbb{C} = \mathbb{R}(i)$, then $\mathbb{C} = \{a + bi \mid a, b \in \mathbb{R}\}$.
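For the $\mathbb{Q}(\sqrt 2)$ example, the arithmetic of $a + b\sqrt 2$ can be modelled directly on pairs $(a, b)$ of rationals; a sketch:

```python
from fractions import Fraction as Q

def add(x, y):
    return (x[0] + y[0], x[1] + y[1])

def mul(x, y):
    # (a + b*sqrt2)(c + d*sqrt2) = (ac + 2bd) + (ad + bc)*sqrt2
    a, b = x; c, d = y
    return (a*c + 2*b*d, a*d + b*c)

sqrt2 = (Q(0), Q(1))
mul(sqrt2, sqrt2)  # (Fraction(2, 1), Fraction(0, 1)): sqrt2 squares to 2

# 1 + sqrt2 is invertible inside the set: (1 + sqrt2)(-1 + sqrt2) = 1
mul((Q(1), Q(1)), (Q(-1), Q(1)))  # (Fraction(1, 1), Fraction(0, 1))
```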
{ "language": "en", "url": "https://math.stackexchange.com/questions/464686", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
Laplace operator's interpretation (Laplacian) What is your interpretation of the Laplace operator? When evaluating the Laplacian of some scalar field at a given point one gets a value. What does this value tell us about the field or its behaviour in the given spot? I can grasp the meaning of gradient and divergence. But viewing the Laplace operator as the divergence of the gradient gives me the interpretation "sources of the gradient", which to be honest doesn't make sense to me. It seems a bit easier to interpret the Laplacian in certain physical situations, or to interpret Laplace's equation; that might be a good place to start. Or misleading. I seek an interpretation that would be as universal as the gradient's interpretation seems to me - applicable, correct and understandable on any scalar field. PS The text of this question is taken from Physics StackExchange. I found it useful for people who search in the Math StackExchange forum.
It's enlightening to note that the adjoint of $\nabla$ is $-\text{div}$, so that $-\text{div} \nabla$ has the familiar pattern $A^T A$, which recurs throughout linear algebra. Hence you would expect (or hope) $-\text{div} \nabla$ to have the properties enjoyed by a symmetric positive definite matrix -- namely, the eigenvalues are nonnegative and there is an orthonormal basis of eigenvectors. $-\text{div}$ is sometimes called the "convergence", and this viewpoint suggests that the Laplacian should have been defined as the convergence of the gradient.
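The $A^TA$ pattern is easy to see in the discrete setting: take $D$ to be a finite-difference "gradient", and $D^TD$ is the usual graph Laplacian, automatically positive semidefinite since $x^TD^TDx = \|Dx\|^2$. A 1-D sketch:

```python
n = 5
# forward-difference "gradient" D: (n-1) x n
D = [[0]*n for _ in range(n - 1)]
for k in range(n - 1):
    D[k][k], D[k][k + 1] = -1, 1

# L = D^T D, the graph Laplacian (with Neumann-type ends)
L = [[sum(D[k][i]*D[k][j] for k in range(n - 1)) for j in range(n)] for i in range(n)]
L[1]  # [-1, 2, -1, 0, 0]: the familiar [-1, 2, -1] stencil

# x^T L x equals |D x|^2 >= 0 for every x
x = [1, -2, 3, 0, 1]
quad = sum(x[i]*L[i][j]*x[j] for i in range(n) for j in range(n))
Dx = [sum(D[k][i]*x[i] for i in range(n)) for k in range(n - 1)]
quad == sum(d*d for d in Dx)  # True (both equal 44)
```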
{ "language": "en", "url": "https://math.stackexchange.com/questions/464756", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Classification of all semisimple rings of a certain order I'd appreciate it if you tell me where to begin in order to solve this question: Classify (up to ring isomorphism) all semisimple rings of order 720. Could the Wedderburn-Artin Structural Theorem be applicable?
Yes, you should definitely apply Artin-Wedderburn. The thing you gain from knowing the ring is finite is that the ring will be a product of matrix rings over fields, since finite division rings are fields. Hopefully you know that all finite fields are of prime power order. Now then, an n by n matrix ring over a field with q elements clearly has $q^{n^2}$ matrices. Start deducing what the possibilities are :)
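Following that hint, the deduction can be mechanized: the ring splits into its 2-, 3- and 5-parts (orders $2^4$, $3^2$, $5$), and for each order $p^e$ one counts multisets of simple factors $M_n(\mathbb F_{p^m})$ with $\sum m n^2 = e$. A counting sketch (mine, not a proof), which suggests $6 \cdot 2 \cdot 1 = 12$ classes if I haven't miscounted:

```python
def count_semisimple(e):
    # multisets of pairs (n, m), each standing for a factor M_n(F_{p^m}),
    # with total exponent sum m*n^2 = e
    pairs = [(n, m) for n in range(1, e + 1) for m in range(1, e + 1) if m*n*n <= e]

    def count(rem, idx):
        if rem == 0:
            return 1
        if idx == len(pairs):
            return 0
        n, m = pairs[idx]
        skip = count(rem, idx + 1)
        use = count(rem - m*n*n, idx) if m*n*n <= rem else 0
        return skip + use

    return count(e, 0)

# 720 = 2^4 * 3^2 * 5
count_semisimple(4) * count_semisimple(2) * count_semisimple(1)  # 12
```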
{ "language": "en", "url": "https://math.stackexchange.com/questions/464819", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
When polynomial is power $P(x)$ is a polynomial with real coefficients, and $k>1$ is an integer. For any $n\in\Bbb Z$, we have $P(n)=m^k$ for some $m\in\Bbb Z$. Show that there exists a polynomial $H(x)$ with real coefficients such that $P(x)=(H(x))^k$, and $\forall n\in\Bbb Z$, $H(n)$ is an integer. This is an old question, but I never saw a complete proof. Thanks a lot!
The result is Corollary 3.3 in this paper.
{ "language": "en", "url": "https://math.stackexchange.com/questions/464902", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
$\sum_{k=1}^nH_k = (n+1)H_n-n$. Why? This is motivated by my answer to this question. The Wikipedia entry on harmonic numbers gives the following identity: $$ \sum_{k=1}^nH_k=(n+1)H_n-n $$ Why is this? Note that I don't just want a proof of this fact (It's very easily done by induction, for example). Instead, I want to know if anyone's got a really nice interpretation of this result: a very simple way to show not just that this relation is true, but why it is true. Has anyone got a way of showing that this identity is not just true, but obvious?
I suck at making pictures, but I try nevertheless. Write $n+1$ rows of the sum $H_n$: $$\begin{matrix} 1 & \frac12 & \frac13 & \dotsb & \frac1n\\ \overline{1\Big\vert} & \frac12 & \frac13 & \dotsb & \frac1n\\ 1 & \overline{\frac12\Big\vert} & \frac13 & \dotsb & \frac1n\\ 1 & \frac12 & \overline{\frac13\Big\vert}\\ \vdots & & &\ddots & \vdots\\ 1 & \frac12 &\frac13 & \dotsb & \overline{\frac1n\Big\vert} \end{matrix}$$ The total sum is obviously $(n+1)H_n$. The part below the diagonal is obviously $\sum\limits_{k=1}^n H_k$. The part above (and including) the diagonal is obviously $\sum_{k=1}^n k\cdot\frac1k = n$. It boils down of course to the same argument as Raymond Manzoni gave, but maybe the picture makes it even more obvious.
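The identity itself is a one-liner to confirm with exact arithmetic, which at least rules out typos in the picture:

```python
from fractions import Fraction

def H(n):
    return sum(Fraction(1, k) for k in range(1, n + 1))

all(sum(H(k) for k in range(1, n + 1)) == (n + 1)*H(n) - n for n in range(1, 30))
# True
```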
{ "language": "en", "url": "https://math.stackexchange.com/questions/464957", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 6, "answer_id": 4 }
Spectrum and tower decomposition I'm trying to read "Partitions of Lebesgue space in trajectories defined by ergodic automorphisms" by Belinskaya (1968). In the beginning of the proof of theorem 2.7, the author considers an ergodic automorphism $R$ of a Lebesgue space whose spectrum contains all $2^i$-th roots of unity ($i=1,2,\ldots$), and then he/she claims that for each $i$ there exists a system of sets ${\{D_i^k\}}_{k=1}^{2^i}$ such that $R D_i^k=D_i^{k+1}$ for $k=1, \ldots, 2^i-1$ and $R D_i^{2^i}=D_i^1$. I'm rather a newbie in ergodic theory and I don't know where this claim comes from. I would appreciate any explanation.
Let $R$ be an ergodic automorphism of a Lebesgue space. Let $\omega$ be a root of unity in the spectrum of $R$, and $n$ be the smallest positive integer such that $\omega^n = 1$. Let $f$ be a non-zero eigenfunction corresponding to the eigenvalue $\omega$. Then: $f^n \circ R = (f \circ R)^n = (\omega f)^n = f^n,$ and $f^n$ is an eigenfunction for the eigenvalue $1$. Without loss of generality, we can assume that $f^n$ is bounded (you just have to take $f/|f|$ instead whenever $f \neq 0$). Then, since the transformation is ergodic, $f^n$ is constant. Up to multiplication by a constant, we can assume that $f^n \equiv 1$. Thus $f$ takes its values in the group generated by $\omega$. For all $0 \leq k < n$, let $A_k$ be the set $\{f = \omega^k\}$. This is a partition of the whole space. Moreover, if $x \in A_k$, then $f \circ R (x) = \omega f(x) = \omega \cdot \omega^k = \omega^{k+1}$, so $R (x) \in A_{k+1}$ (with the addition taken modulo $n$), and conversely. Hence, $R A_k = A_{k+1}$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/465029", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Mathematical Analysis advice Claim: Let $\delta>0$, $n\in \Bbb N$. Then $\lim_{n\rightarrow\infty} I_{n}$ exists, where $I_{n}=\int_{0}^{\delta} \frac{\sin nx}{x}\, dx$. Proof: $f(x) =\frac{\sin nx}{x}$ has a removable discontinuity at $x=0$ and so we let $f(0) =n$. The substitution $x = \frac{t}{n}$ is continuous and monotone for $t\in[0,n\delta]$; hence, $I_{n}=\int_{0}^{n\delta} \frac{\sin t}{t}\, dt$. For all $p\geq 1$, given any $\epsilon > 0$, $\exists n_{0} > \frac{2}{\epsilon\delta}$ such that $\forall n > n_{0}$, $$|I_{n+p}-I_{n}| = \left\lvert\int_{n\delta}^{(n+p)\delta} \frac{\sin t}{t}\, dt \right\rvert = \frac{1}{n\delta}\left\lvert\int_{n\delta}^{M} \sin t\, dt \right\rvert \leq \frac{2}{n\delta} < \epsilon,$$ for some $M \in [n\delta, (n+p)\delta]$, by Bonnet's theorem. Is my proof valid? Thank you.
I can suggest an alternative path. Prove that $$\lim_{a\to 0^+}\lim_{b \to\infty}\int_a^b\frac{\sin x}xdx$$ exists as follows: integrating by parts $$\int_a^b \frac{\sin x}xdx=\left.\frac{1-\cos x}x\right|_a^b+\int_a^b\frac{1-\cos x}{x^2}dx$$ Then use $$\frac{1-\cos h}h\stackrel{h\to 0}\to 0$$ $$\frac{1-\cos h}{h^2}\stackrel{h\to 0}\to\frac 1 2$$ $$\int_1^\infty\frac{dt}{t^2}=1<+\infty$$ Then it will follow your limit is indeed that integral, since changing variables and since $\delta >0$ $$\int_0^{n\delta}\frac{\sin x}xdx\to\int_0^\infty\frac{\sin x}xdx$$ Then it remains to find what that improper Riemann integral equals to. One can prove it equals $\dfrac \pi2$. First, one uses that $$1 +2\sum_{k=1}^n\cos kx=\frac{\sin \left(n+\frac 1 2\right)x}{\sin\frac x 2}$$ from where it follows $$\int_0^\pi\frac{\sin \left(n+\frac 1 2\right)x}{\sin\frac x 2}dx=\pi$$ since all the cosine integrals vanish. Now, the function $$\frac{2}{t}-\frac{1}{\sin\frac t 2}$$ is continuous on $[0,\pi]$ (after removing the singularity at $t=0$). Thus, by the Riemann-Lebesgue Lemma, $$\mathop {\lim }\limits_{n \to \infty } \int_0^\pi {\sin \left( {n + \frac{1}{2}} \right)x\left( {\frac{2}{x} - {{\left( {\sin \frac{x}{2}} \right)}^{ - 1}}} \right)dx} = 0$$ It follows that $$\mathop {\lim }\limits_{n \to \infty } \int_0^\pi {\frac{{\sin \left( {n + \frac{1}{2}} \right)x}}{x}dx} = \frac{\pi }{2}$$ so $$\mathop {\lim }\limits_{n \to \infty } \int_0^{\pi \left( {n + \frac{1}{2}} \right)} {\frac{{\sin x}}{x}dx} = \frac{\pi }{2}$$ Since we know the integral already exists, we conclude $$\int_0^\infty {\frac{{\sin x}}{x}dx} = \frac{\pi }{2}$$
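The limiting value $\pi/2$ is easy to corroborate numerically; a crude midpoint-rule sketch (the $\approx 1/X$ oscillation of the tail is visible in the first value):

```python
import math

def si(X, steps=200_000):
    # midpoint rule for Si(X) = integral of sin(t)/t on [0, X]; integrand -> 1 at t = 0
    h = X / steps
    return h * sum(math.sin((k + 0.5)*h) / ((k + 0.5)*h) for k in range(steps))

si(50.0)   # ≈ 1.55, i.e. pi/2 minus a cos(X)/X-sized oscillation
si(400.0)  # ≈ 1.57, closer to pi/2
```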
{ "language": "en", "url": "https://math.stackexchange.com/questions/465096", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 2 }
Parametric equations, eliminating the parameter $\,x = t^2 + t,\,$ $y= 2t-1$ $$x = t^2 + t\qquad y= 2t-1$$ So I solve $y$ for $t$ $$t = \frac{1}{2}(y+1)$$ Then I am supposed to plug it into the equation of $x$ which is where I lose track of the logic. $$x = \left( \frac{1}{2}(y+1)\right)^2 + \frac{1}{2}(y+1) = \frac{1}{4}y^2 + y+\frac{3}{4}$$ That is now my answer? I am lost. This is x(y)? How is this valid? I don't understand.
Let's assume you are walking on an xy-plane. Your x-position (or east-west position) at a certain time t is given by $x = t^2 + t$, and your y-position is $y = 2t - 1$. What if you want to know the whole path you traced out, without caring about when you stepped on where? Eliminate t: $$x = \frac{1}{4} y^2 + y + \frac{3}{4}$$ If a pair $(x, y)$ satisfies that, it means that at some time in the past or in the future, you stepped or will step on that point.
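You can convince yourself the elimination is right by checking that every point on the parametric path satisfies the resulting equation; a sketch with exact rationals:

```python
from fractions import Fraction as F

def x_of(t): return t*t + t
def y_of(t): return 2*t - 1

all(x_of(t) == F(1, 4)*y_of(t)**2 + y_of(t) + F(3, 4)
    for t in [F(-3), F(-1, 2), F(0), F(2), F(7, 3)])
# True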
{ "language": "en", "url": "https://math.stackexchange.com/questions/465214", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 2 }
Showing probability no husband next to wife converges to $e^{-1}$ Inspired by these questions:

* Probability of Couples sitting next to each other (Sitting in a Row)
* Probability question about married couples
* Four married couples, eight seats. Probability that husband sits next to his wife?
* In how many ways can n couples (husband and wife) be arranged on a bench so no wife would sit next to her husband?
* No husband can sit next to his wife in this probability question

the more general question of the probability that seating $n$ couples (i.e. $2n$ individuals) in a row at random means that no couples are sat together can be expressed using inclusion-exclusion as $$\displaystyle\sum_{i=0}^n (-2)^i {n \choose i}\frac{(2n-i)!}{(2n)!}$$ which for small $n$ takes the values:

n   Probability of no couple together
1   0             0
2   1/3           0.3333333
3   1/3           0.3333333
4   12/35         0.3428571
5   47/135        0.3481481
6   3655/10395    0.3516114
7   1772/5005     0.3540460
8   20609/57915   0.3558491

This made me wonder whether it converges to $e^{-1} \approx 0.3678794$ as $n$ increases, like other cases such as the secretary/dating problem and $\left(1-\frac1n\right)^n$ do. So I tried the following R code (using logarithms to avoid overflows)

couples <- 1000000
together <- 0:couples
sum( (-1)^together * exp( log(2)*together + lchoose(couples,together) + lfactorial(2*couples - together) - lfactorial(2*couples) ) )

which indeed gave a figure of $0.3678794$. How might one try to prove this limit?
I observe that each term with $i$ fixed approaches a nice limit. We have $$ 2^i \frac{n(n-1)(n-2)\cdots(n-i+1)}{i!} \frac1{(2n-i+1)(2n-i+2)\cdots(2n)} $$ or $$ \frac1{i!} \frac{2n}{2n} \frac{2(n-1)}{2n-1} \cdots \frac{2(n-i+1)}{(2n-i+1)} \sim \frac 1{i!} $$ This gives you the series, assuming the limits (defining terms with $i>n$ to be zero) may be safely exchanged, $$\lim_{n\to\infty} \sum_{i=0}^\infty [\cdots] = \sum_{i=0}^\infty \lim_{n\to\infty} [\cdots] = \sum_{i=0}^\infty (-1)^i \frac1 {i!} \equiv e^{-1}$$ Justifying the limit interchange I haven't thought about but I suspect this can be shown to be fine without too much effort... Edit: You can probably use the Weierstrass M-test.
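The convergence can also be watched directly; a Python version of the log-space sum from the question (mirroring the R code), checked against an exact-fraction computation:

```python
import math

def p_no_couple(n):
    total = 0.0
    for i in range(n + 1):
        log_term = (i*math.log(2)
                    + math.lgamma(n + 1) - math.lgamma(i + 1) - math.lgamma(n - i + 1)
                    + math.lgamma(2*n - i + 1) - math.lgamma(2*n + 1))
        total += (-1)**i * math.exp(log_term)
    return total

p_no_couple(8)     # ≈ 0.3558, the n = 8 row of the table
p_no_couple(5000)  # close to exp(-1) ≈ 0.3678794
```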
{ "language": "en", "url": "https://math.stackexchange.com/questions/465318", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 2, "answer_id": 0 }
Proof that equality on categorical products is componentwise equality I want to prove that for the categorical product as defined here, for $x,y \in \prod X_i$ it holds that $$ x = y \textrm{ iff } \forall i \in I : \pi_i(x) = \pi_i(y). $$ The direction from left to right is trivial, but I am not able to prove the other direction: that if all the components are equal, then the elements are equal. I tried to substitute the identity morphisms in the universal property, but I always get the wrong "types" in the functions involved. Any hints?
In arbitrary categories, there is a notion of "generalized element": A generalized element of an object $A$ is any morphism into $A$ (from any object of the category). A morphism $A\to B$ can be applied to a generalized element $Z\to A$ just by composing them to get a generalized element $Z\to B$. In these terms, the result you want can be proved: A generalized element of $\prod_iX_i$ is determined by its images under all the projections $\pi_i$. But "proved" here is too grandiose a term; this fact is just part of the categorical definition of product.
{ "language": "en", "url": "https://math.stackexchange.com/questions/465398", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Induced homomorphism between fundamental groups of a retract is surjective I'm trying to understand why the induced map $i_*: \pi_1(A) \rightarrow \pi_1(X)$ is surjective, for $A$ being a retract of $X$ and $i: A \rightarrow X$ being the inclusion map? For homotopy retracts it's obvious, but for retracts it seems I miss something.
Any loop in $A$ is also a loop in $X$. What does $f_*$ do to an element of $\pi_1(X)$ that is a loop in $A$? More categorically, if $i:A\to X$ is the inclusion map (so that $f\circ i=\mathrm{id}_A$), then $f_*\circ i_*=\mathrm{id}_{\pi_1(A)}$ because $\pi_1$ is a functor. Since $\mathrm{id}_{\pi_1(A)}$ is surjective we must have that $f_*$ is surjective. Regarding your edited question, the map $i_*:\pi_1(A)\to \pi_1(X)$ does not have to be surjective, regardless of whether or not there is a retraction $f:X\to A$. For example, let $X$ be any space with a non-trivial fundamental group and let $A=\{x\}$ be a point in $X$. There is an obvious retraction $f:X\to A$ (the constant map to $x$). But $\pi_1(A)$ is trivial and hence $i_*:\pi_1(A)\to\pi_1(X)$ cannot be surjective.
{ "language": "en", "url": "https://math.stackexchange.com/questions/465466", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
If $A$ is compact and $B$ is Lindelöf, will $A \cup B$ be Lindelöf? I have 2 different questions: As we know, a space $Y$ is Lindelöf if each open covering contains a countable subcovering. (1): If $A$ is compact and $B$ is a Lindelöf space, will $A \cup B$ be Lindelöf? If so, how can we prove it? (2): A topological space is called $KC$ when each compact subset is closed. Is the Cartesian product of KC spaces also a KC space? Does it matter whether the product is finite or infinite?
I am facing a notational problem: what is a $KC$ space? The answer to your first question is the following. Lindelöf space: a space $X$ is said to be Lindelöf if every open cover of the space has a countable subcover. Consider an open cover $P = \{P_{\alpha}: \alpha \in J, P_{\alpha}$ is open in $A \cup B\}$. Now $P$ gives covers of both $A$ and $B$, say $P_1$ and $P_2$, where $P_1 = \{P_{\alpha} \cap A\}$ and $P_2 = \{P_{\alpha} \cap B\}$. Since $A$ is compact, $P_1$ has a finite subcover, say $P_1^{'}$ (you may write out its elements yourself). Since $B$ is Lindelöf, you get a countable subcover of $P_2$, say $P_2^{'}$. For each element of $P_1^{'}$ and $P_2^{'}$, collect the corresponding sets from your original cover $P$; together they form a countable subcover of $A \cup B$. So $A \cup B$ is Lindelöf. There were some LaTeX problems in this answer; thank you for the corrections.
{ "language": "en", "url": "https://math.stackexchange.com/questions/465521", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Can a 2D person walking on a Möbius strip prove that it's on a Möbius strip? Or on another non-orientable surface: can a 2D walker on a non-orientable surface prove that the surface is non-orientable, or does it always take an observer from the next dimension up to prove that an entity of a lower dimension is non-orientable? That is, does it always take the next dimension to prove that an entity of the current dimension is non-orientable?
If he has a friend then they both can paint their right hands blue and left hands red. His friend stays where he is, he goes once around the strip, now his left hand and right hand are switched when he compares them to his friends hands.
{ "language": "en", "url": "https://math.stackexchange.com/questions/465594", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "27", "answer_count": 4, "answer_id": 0 }
Expected value of game involving 100-sided die The following question is from a Jane Street interview. You are given a 100-sided die. After you roll once, you can choose to either get paid the dollar amount of that roll OR pay one dollar for one more roll. What is the expected value of the game? (There is no limit on the number of rolls.) P.S. I think the question assumes that we are rational.
Let $v$ denote the expected value of the game. If you roll some $x\in\{1,\ldots,100\}$, you have two options:

* Keep the $x$ dollars.
* Pay the \$$1$ continuation fee and roll the die once again. The expected value of the next roll is $v$. Thus, the net expected value of this option turns out to be $v-1$ dollars.

You choose one of these two options based on whichever provides you with a higher gain. Therefore, if you rolled $x$, your payoff is $\max\{x,v-1\}$. Now, the expected value of the game, $v$, is given as the expected value of these payoffs: \begin{align*} v=\frac{1}{100}\sum_{x=1}^{100}\max\{x,v-1\}\tag{$\star$}, \end{align*} since each $x$ has a probability of $1/100$ and given a roll of $x$, your payoff is exactly $\max\{x,v-1\}$. This equation is not straightforward to solve. The right-hand side sums up those $x$ values for which $x>v-1$, and for all those values of $x$ with $x\leq v-1$, you add $v-1$ to the sum. This pair of summations gives you $v$. The problem is that you don't know where to separate the two summations, since the threshold value based on $v-1$ is exactly what you need to compute. This threshold value can be guessed using a numerical computation, based on which one can confirm the value of $v$ rigorously. This turns out to be $v=87\frac{5}{14}$. Incidentally, this solution also reveals that you should keep rolling the die for a \$$1$ fee as long as you roll $86$ or less, and accept any amount of $87$ or more. ADDED$\phantom{-}$In response to a comment, let me add further details on the computation. Solving the equation ($\star$) is complicated by the possibility that the solution may not be an integer (indeed, ultimately it is not).
As explained above, however, ($\star$) can be rewritten in the following way: \begin{align*} v=\frac{1}{100}\left[\sum_{x=1}^{\lfloor v\rfloor-1}(v-1)+\sum_{x=\lfloor v\rfloor}^{100}x\right],\tag{$\star\star$} \end{align*} where $\lfloor\cdot\rfloor$ is the floor function (rounding down to the nearest integer; for example: $\lfloor1\mathord.356\rfloor=1$; $\lfloor23\mathord.999\rfloor=23$; $\lfloor24\rfloor=24$). Now let’s pretend for a moment that $v$ is an integer, so that we can obtain the following equation: \begin{align*} v=\frac{1}{100}\left[\sum_{x=1}^{v-1}(v-1)+\sum_{x=v}^{100}x\right]. \end{align*} It is algebraically tedious yet conceptually not difficult to show that this is a quadratic equation with roots \begin{align*} v\in\left\{\frac{203\pm3\sqrt{89}}{2}\right\}. \end{align*} The larger root exceeds $100$, so we can disregard it, and the smaller root is approximately $87\mathord.349$. Of course, this is not a solution to ($\star\star$) (remember, we pretended that the solution was an integer, and the result of $87\mathord.349$ does not conform to that assumption), but this should give us a pretty good idea about the approximate value of $v$. In particular, this helps us formulate the conjecture that $\lfloor v\rfloor=87$. Upon substituting this conjectured value of $\lfloor v\rfloor$ back into ($\star\star$), we now have the exact solution $v=87\frac{5}{14}$, which also confirms that our heuristic conjecture that $\lfloor v\rfloor=87$ was correct.
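The "numerical computation" mentioned above can be a simple fixed-point iteration on ($\star$): repeatedly replace $v$ by $\frac{1}{100}\sum_x \max\{x, v-1\}$. The sketch below is my own (the iteration scheme and iteration count are not from the answer); the map has slope strictly below $1$ on the relevant range, so the iteration settles quickly at $v = 87\frac{5}{14}$.

```python
def game_value(sides=100, fee=1, iters=500):
    # Iterate v <- E[max(roll, v - fee)]; the map is a contraction here,
    # so this converges to the expected value of the game.
    v = 0.0
    for _ in range(iters):
        v = sum(max(x, v - fee) for x in range(1, sides + 1)) / sides
    return v

v = game_value()
print(v)   # ~87.3571, i.e. 87 + 5/14
print(min(x for x in range(1, 101) if x > v - 1))   # 87, the smallest roll worth keeping
```

The second printout recovers the stopping rule from the answer: re-roll on $86$ or less, keep $87$ or more.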
{ "language": "en", "url": "https://math.stackexchange.com/questions/465651", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 4, "answer_id": 1 }
What's 4 times more likely than 80%? There's an 80% probability of a certain outcome; we get some new information that means that outcome is 4 times more likely to occur. What's the new probability as a percentage, and how do you work it out? As I remember it, the question was posed like so: Suppose there's a student, Tom W, and you are asked to estimate the probability that Tom is a student of computer science. Without any other information you would only have the base rate to go by (the percentage of total students enrolled in computer science); suppose this base rate is 80%. Then you are given a description of Tom W's personality; suppose from this description you estimate that Tom W is 4 times more likely to be enrolled in computer science. What is the new probability that Tom W is enrolled in computer science? The answer given in the book is 94.1%, but I couldn't work out how to calculate it! Another example in the book is with a base rate of 3%; 4 times more likely than this is stated as 11%.
The only way I see to make sense of this is to divide by $4$ the probability it does not happen. Here we obtain $20/4=5$, so the new probability is $95\%$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/465718", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "146", "answer_count": 6, "answer_id": 5 }
Evaluating $\int_0^\infty \frac{1}{x+1-u}\cdot \frac{\mathrm{d}x}{\log^2 x+\pi^2}$ using real methods. By reading the German Wikipedia article about integrals (see here), I stumbled upon this entry: $$ \color{black}{ \int_0^\infty \frac{1}{x+1-u}\cdot \frac{\mathrm{d}x}{\log^2 x+\pi^2} =\frac{1}{u}+\frac{1}{\log(1-u)}\,, \qquad u \in (0,1)} $$ where the result was proven using complex analysis. Is there any method to show the equality using real methods? Any help will be appreciated =)
I'm not sure about the full solution, but there is a way to find an interesting functional equation for this integral. First, let's get rid of the silly restriction on $u$. By numerical evaluation, the integral exists for all $u \in (-\infty,1)$ Now let's introduce the more convenient parameter: $$v=1-u$$ $$I(v)=\int_0^{\infty} \frac{dx}{(v+x)(\pi^2+\ln^2 x)}$$ Now let's make a change of variable: $$x=e^t$$ $$I(v)=\int_{-\infty}^{\infty} \frac{e^t dt}{(v+e^t)(\pi^2+t^2)}$$ $$I(v)=\int_{-\infty}^{\infty} \frac{(v+e^t) dt}{(v+e^t)(\pi^2+t^2)}-v \int_{-\infty}^{\infty} \frac{ dt}{(v+e^t)(\pi^2+t^2)}=1-v J(v)$$ Now let's make another change of variable: $$t=-z$$ $$I(v)=\int_{-\infty}^{\infty} \frac{e^{-z} d(-z)}{(v+e^{-z})(\pi^2+z^2)}=\int_{-\infty}^{\infty} \frac{ dz}{(1+v e^z)(\pi^2+z^2)}=\frac{1}{v} J \left( \frac{1}{v} \right)$$ Now we get: $$1-v J(v)=\frac{1}{v} J \left( \frac{1}{v} \right)=I(v)$$ $$v J(v)+\frac{1}{v} J \left( \frac{1}{v} \right)=1$$ $$v \in (0,\infty)$$ For example, we immediately get the correct value: $$J(1)=I(1)=\int_0^{\infty} \frac{dx}{(1+x)(\pi^2+\ln^2 x)}=\frac{1}{2}$$ We can also check that this equation works for the known solution (which is actually valid on the whole interval $v \in (0,\infty)$, except for $v=1$). $$I(v)=\frac{1}{1-v}+\frac{1}{\ln v}$$ $$J(v)=-\frac{1}{1-v}-\frac{1}{v \ln v}$$ $$J \left( \frac{1}{v} \right)=\frac{v}{1-v}+\frac{v}{\ln v}$$ $$1-v J(v)=\frac{1}{v} J \left( \frac{1}{v} \right)$$ Now this is not a solution of course (except for $I(1)$), but it's a big step made without any complicated integration techniques. Basically, if we define: $$f(v)=vJ(v)$$ $$I(v)=1-f(v)$$ We need to solve a simple functional equation: $$I(v)+I \left( \frac{1}{v} \right)=1$$
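Both the closed form quoted at the end and the functional equation $I(v) + I(1/v) = 1$ can be checked numerically. The sketch below uses my own quadrature choices (the substitution $t = \tan u$ to compactify the domain, plus a midpoint rule) applied to the $t$-integral form of $I(v)$ derived above:

```python
import math

def I(v, n=200000):
    # I(v) = integral over t in (-inf, inf) of e^t / ((v+e^t)(pi^2+t^2)) dt,
    # compactified with t = tan(u), dt = (1 + t^2) du, and a midpoint rule.
    a, b = -math.pi / 2, math.pi / 2
    h = (b - a) / n
    total = 0.0
    for i in range(n):
        t = math.tan(a + (i + 0.5) * h)
        if t < -700:        # e^{-t} would overflow; the integrand is ~0 there anyway
            continue
        # e^t/(v+e^t) rewritten as 1/(1 + v e^{-t}) to avoid overflow for large t
        total += (1.0 + t * t) / ((1.0 + v * math.exp(-t)) * (math.pi ** 2 + t * t))
    return total * h

v = 2.0
closed = 1 / (1 - v) + 1 / math.log(v)
print(I(v), closed)         # both ~0.4427
print(I(v) + I(1 / v))      # ~1, the functional equation
```

Note that the closed form is being tested here for $v > 1$ as well, consistent with the remark that it holds on all of $v \in (0,\infty)$ except $v = 1$.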
{ "language": "en", "url": "https://math.stackexchange.com/questions/465790", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10", "answer_count": 2, "answer_id": 1 }
Can this difference operator be factorised? If a difference operator is defined as $$LY_i=\left(-\epsilon\dfrac{D^+ -D^-}{h_1}+aD^-\right)Y_i,\quad 1\leq i\leq N$$ suppose $Y_N$ and $Y_0$ are given and that the difference operators are defined as follows: $D^+V_i=(V_{i+1}-V_i)/h_1$, $D^-V_i=(V_i-V_{i-1})/h_1$. How is it possible to write the difference operator as $$LY_i=(Y_N-Y_0)\left(-\epsilon\dfrac{D^+ -D^-}{h_1}+aD^-\right)\psi_i?$$ I am thinking that a telescoping trick is used, but I am failing to see how. If my question is not clear, could someone clarify the second $L_\epsilon^NY_i$ on page 57 of the excerpt attached below.
$-\frac{\epsilon}{h^2} (Y_{i+1}-2Y_i + Y_{i-1}) + \alpha(Y_i-Y_{i-1}) = 0 \;\;\; (1)$ $-\frac{\epsilon}{h^2} Y_{N+1} + \left ( \frac{2\epsilon}{h^2} + \alpha \right )Y_N - \left ( \frac{\epsilon}{h^2} + \alpha \right ) Y_{N-1} = 0$ $\dots$ $Y_{N} = \frac{\epsilon Y_{N+1}}{2\epsilon + \alpha h^2}+\frac{\epsilon + \alpha h^2}{2\epsilon + \alpha h^2} Y_{N} \equiv aY_{N+1} + (1-a)Y_{N-1} = aY_{N+1}(1+(1-a) + (1-a)^2 + \dots+ (1-a)^N)+(1-a)^{N+1}Y_0 = a\frac{1-(1-a)^{N+1}}{a}Y_{N+1} +(1-a)^{N+1}Y_0 = (1-(1-a)^{N+1})Y_{N+1}+(1-a)^{N+1}Y_0$ $Y_{N-1} = (1-(1-a)^{N})Y_{N+1}+(1-a)^{N}Y_0$ Hence, (1) can be rewritten as a function of $Y_0$ and $Y_{N+1}$. Collecting the terms by $Y_0$ and $Y_{N+1}$ and defining your $\phi$ appropriately, you should get the desired. Not worth the bounty but should point you in the right direction.
{ "language": "en", "url": "https://math.stackexchange.com/questions/465837", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Is there a simpler way to express the fraction $\frac{x}{x+y}$? Can I simplify this expression, perhaps into two expressions $\frac{x}{x+y}$ or is that already simplified as much as possible?
The given expression uses two operations (one division and one addition). If we judge simplicity by the number of operations, only an expression with one operation would be simpler, but the expression equals none of $x+y$, $x-y$, $y-x$, $xy$, $\frac xy$, $\frac yx$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/465932", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 4, "answer_id": 3 }
Evaluating a 2-variable limit Could you help me evaluate this limit? $$ \lim_{x\to 0}\frac{1}{x}\cdot\left[\arccos\left(\frac{1}{x\sqrt{x^{2}- 2x\cdot \cos(y)+1}}-\frac{1}{x}\right)-y\right] $$
Notice: I changed what I think is a typo; otherwise the limit is undefined. By the Taylor series we have (and we denote $a=\cos(y)$) $$\frac{1}{\sqrt{x^{2}-2xa+1}}=1+xa+x^2(\frac{3}{2}a^2-\frac{1}{2})+O(x^3)$$ so $$\frac{1}{x\sqrt{x^{2}-2xa+1}}-\frac{1}{x}=a+x(\frac{3}{2}a^2-\frac{1}{2})+O(x^2)$$ Now using $$\arccos(a+\alpha x)=\arccos(a)-\frac{\alpha}{\sqrt{1-a^2}}x+O(x^2)$$ we have $$\arccos(\frac{1}{x\sqrt{x^{2}-2xa+1}}-\frac{1}{x})=\arccos(a)-\frac{\frac{3}{2}a^2-\frac{1}{2}}{\sqrt{1-a^2}}x+O(x^2)$$ so if we suppose that $y\in(0,\pi)$ (so that $\arccos(\cos y)=y$ and $\sqrt{1-a^2}\neq 0$) then $$\lim_{x\to 0}\frac{1}{x}\cdot\left[\arccos\left(\frac{1}{x\sqrt{x^{2}-2x\cdot \cos(y)+1}}-\frac{1}{x}\right)-y\right]=-\frac{\frac{3}{2}a^2-\frac{1}{2}}{\sqrt{1-a^2}}$$
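A quick numerical sanity check of this expansion (my own script, with the sample value $y = 1$, which lies in the valid range): evaluate the bracketed expression at a small $x$ and compare it with the claimed limit $-\frac{\frac32 a^2 - \frac12}{\sqrt{1-a^2}}$.

```python
import math

def F(x, y):
    # the expression inside the limit, before letting x -> 0
    a = math.cos(y)
    arg = 1.0 / (x * math.sqrt(x * x - 2 * x * a + 1.0)) - 1.0 / x
    return (math.acos(arg) - y) / x

def limit(y):
    a = math.cos(y)
    return -(1.5 * a * a - 0.5) / math.sqrt(1.0 - a * a)

y = 1.0
print(F(1e-5, y), limit(y))   # both ~0.0738
```

The agreement improves linearly as $x$ shrinks, as the $O(x^2)$ remainder in the expansion predicts.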
{ "language": "en", "url": "https://math.stackexchange.com/questions/465973", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Does $\det(A + B) = \det(A) + \det(B)$ hold? Considering two $n \times n$ matrices, does the following hold true: $$\det(A+B) = \det(A) + \det(B)$$ Can anything be said about $\det(A+B)$? If $A$ and $B$ are symmetric (or maybe even of the form $\lambda I$), can anything be said then?
Although the determinant function is not linear in general, I have a way to construct matrices $A$ and $B$ such that $\det(A + B) = \det(A) + \det(B)$, where neither $A$ nor $B$ contains a zero entry and all three determinants are nonzero: Suppose $A = [a_{ij}]$ and $B = [b_{ij}]$ are 2 x 2 real matrices. Then $\det(A + B) = (a_{11} + b_{11})(a_{22} + b_{22}) - (a_{12} + b_{12})(a_{21} + b_{21})$ and $\det(A) + \det(B) = (a_{11} a_{22} - a_{12} a_{21}) + (b_{11} b_{22} - b_{12} b_{21})$. These two determinant expressions are equal if and only if $a_{11} b_{22} + b_{11} a_{22} - a_{12} b_{21} - b_{12} a_{21} = $ $\det \left[ \begin{array}{cc} a_{11} & a_{12}\\ b_{21} & b_{22} \end{array} \right]$ + $\det \left[ \begin{array}{cc} b_{11} & b_{12}\\ a_{21} & a_{22} \end{array} \right]$ = 0. Therefore, if we choose any nonsingular 2 x 2 matrix $ A = [a_{ij}]$ with nonzero entries and then create $B = [b_{ij}]$ such that $b_{11} = - a_{21}, b_{12} = - a_{22}, b_{21} = a_{11},$ and $b_{22} = a_{12}$, we have solved our problem. For example, if we take $$A = \begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix} \quad \text{and}\quad B = \begin{bmatrix} -3 & -4 \\ 1 & 2\end{bmatrix} ,$$ then $\det(A) = -2, \det(B) = -2, $ and $\det(A + B) = -4$, as required.
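The construction is easy to verify mechanically. The sketch below (my own check, not part of the answer) confirms the specific pair from the answer and then the general recipe, taking $B = [[-c,-d],[a,b]]$ for $A = [[a,b],[c,d]]$, on random integer matrices:

```python
import random

def det2(m):
    # determinant of a 2x2 matrix given as nested lists
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

def add2(a, b):
    return [[a[i][j] + b[i][j] for j in range(2)] for i in range(2)]

# The specific pair from the answer: det(A) = det(B) = -2, det(A+B) = -4
A = [[1, 2], [3, 4]]
B = [[-3, -4], [1, 2]]
assert det2(A) == -2 and det2(B) == -2 and det2(add2(A, B)) == -4

# The general recipe: for A = [[a,b],[c,d]], take B = [[-c,-d],[a,b]]
for _ in range(1000):
    a, b, c, d = (random.randint(-9, 9) for _ in range(4))
    assert det2(add2([[a, b], [c, d]], [[-c, -d], [a, b]])) \
        == det2([[a, b], [c, d]]) + det2([[-c, -d], [a, b]])
print("construction verified")
```

Expanding symbolically, both sides of the general case equal $2ad - 2bc$, which is why the recipe works for every choice of $a, b, c, d$.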
{ "language": "en", "url": "https://math.stackexchange.com/questions/466043", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "28", "answer_count": 5, "answer_id": 1 }
Three quotient-ring isomorphism questions I need some help with the following isomorphisms. Let $R$ be a commutative ring with ideals $I,J$ such that $I \cap J = \{ 0\}$. Then * *$I+J \cong I \times J$ *$(I+J)/J \cong I$ *$(R/I)/\bar{J} \cong R/(I+J) \quad \text{where} \quad \bar{J}=\{x+I \in R/I: x \in J \}$ For the first item $\theta: I \times J \ \rightarrow I+J: \ (x,y) \mapsto x+y$ is clearly a surjective homomorphism. It's injective because $I \cap J = \{ 0\}$. For the second item, the mapping $\eta \ : \ I+J \rightarrow I \ : \ x+y \mapsto x$ is well-defined by item 1, surjective, and the kernel equals $0+J$. Now we can use the first isomorphism theorem. For the third item, I tried to find a define a mapping: $$\phi: \quad R/I \ \rightarrow \ R/(I+J) \quad : \quad x + I \ \mapsto \ x+I+J $$ And I tried to show that $\bar{J}$ is the kernel, but it didn't totally feel okay because I got confused. Is the following correct? $$ x \in \ker(\phi) \ \iff \ x+ I \in I+J \iff x+I \in \bar{J} $$ I would appreciate it if you could tell me if I made mistakes. Could you provide me a little information about the third item? It seems like a blur to me.
The line $ x \in \ker(\phi) \ \iff \ x+ I \in I+J \iff x+I \in \bar{J} $ is wrong, because $x+I$ can in principle not be an element of $I+J$: $I+J$ is an ideal, hence a set of elements of $R$, while $x+I$ is a coset of an ideal and hence also a set of elements of $R$. You could write instead $ x + I \in \ker(\phi) \iff x \in I+J \iff x+I \in (I+J)/I = \overline{J}.$ Are you familiar with the third isomorphism theorem for rings? If $I \subseteq J$ are ideals of $R$, then $J/I$ is an ideal of $R/I$ and $(R/I)/(J/I) \cong R/J$. Note that in your case $\overline{J} = (I+J)/I$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/466170", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
How to find the minimum of $3\sqrt{x^2+y^2}+5\sqrt{(x-1)^2+(y-1)^2}+\sqrt{5}(\sqrt{(x-1)^2+y^2}+\sqrt{x^2+(y-1)^2})$ Find the minimum of $$3\sqrt{x^2+y^2}+5\sqrt{(x-1)^2+(y-1)^2}+\sqrt{5}\left(\sqrt{(x-1)^2+y^2}+\sqrt{x^2+(y-1)^2}\right)$$ I guess the minimum is $6\sqrt{2}$, but I can't prove it. Thank you.
If $v_1 = (0,0), v_2 = (1,1), v_3 = (0,1)$, and $v_4 = (1,0)$ and $p = (x,y)$, then you are trying to minimize $$3|p - v_1| + 5|p - v_2| + \sqrt{5}|p - v_3| + \sqrt{5}|p - v_4|$$Note that if $p$ is on the line $y = x$, moving it perpendicularly away from the line will only increase $|p - v_1|$ and $|p - v_2|$, and it is not too hard to show it also increases $|p - v_3| + |p - v_4|$. So the minimum has to occur on the line $y = x$. So letting $p = (t,t)$ your problem becomes to minimize $$3\sqrt{2}t + 5\sqrt{2}(1 - t) + 2\sqrt{5}\sqrt{2t^2 - 2t + 1}$$ This can be minimized through calculus... maybe there's a slick geometric way too.
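Carrying out that last step numerically confirms the questioner's guess. A ternary search is my own choice here; it applies because the one-variable function is convex (a linear term plus the square root of a positive-definite quadratic). The minimum is $6\sqrt{2}$, attained at $t = 3/4$.

```python
import math

def g(t):
    # the one-dimensional function on the diagonal p = (t, t):
    # 3*sqrt(2)*t + 5*sqrt(2)*(1-t) simplifies to sqrt(2)*(5 - 2t)
    return math.sqrt(2) * (5 - 2 * t) + 2 * math.sqrt(5) * math.sqrt(2 * t * t - 2 * t + 1)

# ternary search on [0, 1]; valid since g is convex, hence unimodal
lo, hi = 0.0, 1.0
for _ in range(200):
    m1, m2 = lo + (hi - lo) / 3, hi - (hi - lo) / 3
    if g(m1) < g(m2):
        hi = m2
    else:
        lo = m1
t_min = (lo + hi) / 2
print(t_min, g(t_min), 6 * math.sqrt(2))   # t = 3/4, minimum 6*sqrt(2) ~ 8.4853
```

Setting $g'(t) = 0$ by hand gives $16t^2 - 16t + 3 = 0$, i.e. $t = 3/4$ (the root with $t > 1/2$), and $g(3/4) = \frac72\sqrt2 + \frac52\sqrt2 = 6\sqrt2$ exactly.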
{ "language": "en", "url": "https://math.stackexchange.com/questions/466244", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Partition Topology I am trying to prove the following equivalence: "Let $X$ be a set and $R$ be a partition of $X$, that is: i) $(\forall A,B \in R, A \neq B) \colon A \cap B = \emptyset$ ii) $ \bigcup_{A \in R} A = X$ We say that a topology $\tau$ on $X$ is a partition topology iff $\tau = \tau(R)$ for some partition $R$ of $X$. Then a topology $\tau$ is a partition topology iff every open set in $\tau$ is also a closed set." I am trying to prove $\Leftarrow$. I have tried using Zorn's lemma to prove the existence of a kind of maximal refinement of $\tau$ so as to find the partition that could generate $\tau$, but I am getting nowhere. I would truly appreciate any help possible...
Alternative hint: $R$ consists of the closures of the one-point sets.
{ "language": "en", "url": "https://math.stackexchange.com/questions/466321", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
How can some statements be consistent with intuitionistic logic but not classical logic, when intuitionistic logic proves not not LEM? I've heard that some axioms, such as "all functions are continuous" or "all functions are computable", are compatible with intuitionistic type theories but not their classical equivalents. But if they aren't compatible with LEM, shouldn't that mean they prove not LEM? But not LEM means not (A or not A) which in particular implies not A - but that implies (A or not A). What's gone wrong here?
If $A$ is a sentence (ie has no free variables), then your reasoning is correct and in fact $\neg (A \vee \neg A)$ is not consistent with intuitionistic logic. However, all the instances of excluded middle that are contradicted by the statements "all functions are continuous" and "all functions are computable" are for formulas of the form $A(x)$ where $x$ is a free variable. To give an explicit example, working over Heyting arithmetic (HA), let $A(n)$ be the statement that the $n$th Turing machine halts on input $n$. Then, it is consistent with HA that $\forall n\;A(n) \vee \neg A(n)$ is false. That is, $\neg (\forall n \; A(n) \vee \neg A(n))$ is consistent with HA, and is in fact implied by $\mathsf{CT}_0$ (essentially the statement that all functions are computable). Note that even in classical logic this doesn't directly imply $\neg A(n)$, which would be equivalent to $\forall n \; \neg A(n)$. What we could do in classical logic is deduce $\exists n \; \neg (A(n) \vee \neg A(n))$ and continue as before, but this does not work in intuitionistic logic.
{ "language": "en", "url": "https://math.stackexchange.com/questions/466501", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 3, "answer_id": 0 }
A gamma function identity I am given the impression that the following is true (for at least all positive $\lambda$; maybe even true for any complex $\lambda$) $$ \left\lvert \frac{\Gamma(i\lambda + 1/2)}{\Gamma(i\lambda)} \right\rvert^2 = \lambda \tanh (\pi \lambda) $$ It would be great if someone could help derive this.
Using the Euler's reflection formula $$\Gamma(z)\Gamma(1-z)=\frac{\pi}{\sin\pi z},$$ we get (for real $\lambda$) \begin{align} \left|\frac{\Gamma\left(\frac12+i\lambda\right)}{\Gamma(i\lambda)}\right|^2&= \frac{\Gamma\left(\frac12+i\lambda\right)\Gamma\left(\frac12-i\lambda\right)}{\Gamma(i\lambda)\Gamma(-i\lambda)}=\\ &=(-i\lambda) \frac{\Gamma\left(\frac12+i\lambda\right)\Gamma\left(\frac12-i\lambda\right)}{\Gamma(i\lambda)\Gamma(1-i\lambda)}=\\ &=(-i\lambda)\frac{\pi/\sin\pi\left(\frac12-i\lambda\right)}{\pi/\sin\pi i\lambda}=\\ &=-i\lambda\frac{\sin \pi i \lambda}{\cos\pi i \lambda}=\\ &=\lambda \tanh\pi \lambda. \end{align} This will not hold if $\lambda$ is complex.
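A numerical check of the identity for a few real $\lambda$. Python's standard library has no complex gamma function, so the sketch below uses the standard Lanczos approximation ($g = 7$, nine coefficients) together with the reflection formula; this gamma implementation is a common textbook snippet, not part of the derivation above.

```python
import cmath, math

# Lanczos approximation (g = 7, 9 coefficients) for the complex Gamma function
_p = [0.99999999999980993, 676.5203681218851, -1259.1392167224028,
      771.32342877765313, -176.61502916214059, 12.507343278686905,
      -0.13857109526572012, 9.9843695780195716e-6, 1.5056327351493116e-7]

def gamma(z):
    if z.real < 0.5:                       # reflection formula for the left half-plane
        return cmath.pi / (cmath.sin(cmath.pi * z) * gamma(1 - z))
    z -= 1
    x = _p[0]
    for i in range(1, len(_p)):
        x += _p[i] / (z + i)
    t = z + 7.5
    return math.sqrt(2 * math.pi) * t ** (z + 0.5) * cmath.exp(-t) * x

for lam in (0.3, 1.0, 2.5):
    lhs = abs(gamma(0.5 + 1j * lam) / gamma(1j * lam)) ** 2
    rhs = lam * math.tanh(math.pi * lam)
    print(lhs, rhs)   # the two columns should agree closely
```

The test at $\lambda = 1$ checks against $\tanh\pi \approx 0.99627$, consistent with the reflection-formula derivation.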
{ "language": "en", "url": "https://math.stackexchange.com/questions/466614", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 1, "answer_id": 0 }
Showing that the function $f(x,y)=x+y-ye^x$ is non-negative in the region $x+y≤1,x≥0,y≥0$ OK, since it's been so long since I took calculus, I just want to make sure I'm not doing anything wrong here. Given $f:\mathbb{R}^2\rightarrow \mathbb{R}$ defined as $f(x,y)=x+y-ye^x$, I would like to show that the function is nonnegative in the region $x+y\leq 1, \;\;x\geq 0, \;\;y\geq 0$. My game plan is as follows:

1. Show the function is non-negative on the boundary of the region.
2. Show the function takes a positive value in the interior of the region.
3. Show that the function has no critical points in the interior of the region.
4. Conclude by continuity that the function is non-negative everywhere in the region.

Is the above sufficient, or am I doing something wrong? Would there be a better way to show this?
I'll try using Lagrange multipliers. The function is: $$f(x,y) = x + y - ye^x$$ and the constraints are: $$g(x,y) = x+y \leq 1$$ $$h(x) = x \geq 0$$ $$j(y) = y \geq 0$$ So using Lagrange multipliers we have: $$F(x,y,\lambda,\lambda_1,\lambda_2) = x + y - ye^x - \lambda(x+y-1) - \lambda_1(x) - \lambda_2(y)$$ Now we take partial derivatives: $$F_x = 1 - ye^x - \lambda - \lambda_1 = 0$$ $$F_y = 1 - e^x - \lambda - \lambda_2 = 0$$ $$\lambda(x+y-1) = 0$$ $$\lambda_1(x) = 0$$ $$\lambda_2(y) = 0$$ Now we have 8 cases: 1) $\lambda = \lambda_1 = \lambda_2 = 0$ This implies one solution $(x,y) = (0,1)$ 2) $\lambda = \lambda_1 = y = 0$ Now in $F_x$ we have $1=0$, which is not possible, so this case doesn't give a solution. 3) $\lambda = x = \lambda_2 = 0$ Now in $F_x$ we have $y + \lambda_1 = 1$; because all $\lambda$ values are nonnegative, we get $y \leq 1$. So the solutions are $(x,y) = (0,y)$, where $0 \leq y \leq 1$ 4) $\lambda = x = y = 0$ This simply implies one solution $(x,y) = (0,0)$ 5) $x + y - 1 = \lambda_1 = \lambda_2 = 0$ This implies a solution that we've already obtained: $(x,y) = (0,1)$ 6) $x + y - 1 = \lambda_1 = y = 0$ This simply implies one solution $(x,y) = (1,0)$ 7) $x + y - 1 = x = \lambda_2 = 0$ This simply implies one solution $(x,y) = (0,1)$ 8) $x + y - 1 = x = y = 0$ This case doesn't imply any solution, because it's a contradiction. Now we have 4 distinct solutions and we check them all: 1) $(x,y) = (0,1)$ $$f(x,y) = x + y - ye^x = 1 - 1 = 0$$ 2) $(x,y) = (0,y)$ $$f(x,y) = x + y - ye^x = y - y = 0$$ 3) $(x,y) = (0,0)$ $$f(x,y) = x + y - ye^x = 0$$ 4) $(x,y) = (1,0)$ $$f(x,y) = x + y - ye^x = 1$$ This means that $f(x,y)$ has a minimum of $0$ at the points $(0,y)$, $0 \leq y \leq 1$, and a maximum of $1$ at the point $(1,0)$. Q.E.D.
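A brute-force grid check (my own script; the grid resolution is an arbitrary choice) supports the same conclusion: the minimum over the region is $0$, attained along the edge $x = 0$, where $f(0,y) = y - ye^0 = 0$.

```python
import math

def f(x, y):
    return x + y - y * math.exp(x)

n = 400
# evaluate f on all grid points of the triangle x >= 0, y >= 0, x + y <= 1
worst = min(f(i / n, j / n)
            for i in range(n + 1)
            for j in range(n + 1)
            if i + j <= n)
print(worst)   # 0.0, attained on the edge x = 0
```

Interior grid values stay strictly positive, matching the multiplier analysis that places all minimizers on the segment $x = 0$, $0 \le y \le 1$.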
{ "language": "en", "url": "https://math.stackexchange.com/questions/466695", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Why is $\pi r^2$ the area of a circle Why is $\pi r^2$ the area of a circle? I learned this formula ages ago and I'm just using it like most people do, but I don't think I will truly understand how circles work until I understand why this formula works. So I want to understand why it works and not just how. Please don't use complicated symbols.
The simplest explanation is that the area of any shape has to be in units of area, that is, in units of length squared. In a circle, the only "number" describing it is the radius $r$ (with units of length), so the area must be proportional to $r^2$. So for some constant $b$, $$A=b r^2$$ Now, to find the constant $b$, I think the easiest way is to look at this Wikipedia diagram: This shows how, when you subdivide the circle into many equal small triangles and rearrange them, the area becomes that of a rectangle with height $r$ and length equal to half the circumference of the circle, which is $\pi r$, by the definition of $\pi$.
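The subdivision idea can be turned into a small numerical demonstration: $n$ equal triangles with apex at the centre have total area $\frac{n}{2}r^2\sin\frac{2\pi}{n}$, which approaches $\pi r^2$ as $n$ grows. (The triangle-area formula here is the standard inscribed-polygon one, a slight variant of the rearrangement in the diagram.)

```python
import math

def area_by_triangles(r, n):
    # n isosceles triangles with apex angle 2*pi/n at the centre;
    # each has area (1/2) * r^2 * sin(2*pi/n)
    return n * 0.5 * r * r * math.sin(2 * math.pi / n)

for n in (6, 96, 10**6):
    print(area_by_triangles(1.0, n))    # climbs toward pi = 3.14159...
print(area_by_triangles(2.0, 10**6))    # and toward pi * 2^2 for r = 2
```

The $r = 2$ line also illustrates the units argument: doubling the radius quadruples every triangle, hence the whole area.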
{ "language": "en", "url": "https://math.stackexchange.com/questions/466762", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 5, "answer_id": 0 }
Primes between $n$ and $2n$ I know that there exists a prime between $n$ and $2n$ for all $2\leq n \in \mathbb{N}$. Which number is the fourth number that has just one prime in its gap? The first three numbers are $2$, $3$ and $5$. I checked with a computer up to $15000$ and couldn't find the next one. Maybe you can prove that there is no other number with this condition? Also, when I say a number $n$ has one prime in its gap, it means the set $X = \{x: x$ is prime and $n<x<2n\}$ has only one element. Thanks for any help.
There is no other such $n$. For instance, In 1952, Jitsuro Nagura proved that for $n ≥ 25$, there is always a prime between $n$ and $(1 + 1/5)n$. This immediately means that for $n \ge 25$, we have one prime between $n$ and $\frac{6}{5}n$, and another prime between $\frac{6}{5}n$ and $\frac65\frac65n = \frac{36}{25}n < 2n$. In fact, $\left(\frac{6}{5}\right)^3 < 2$ as well, so we can be sure that for $n \ge 25$, there are at least three primes between $n$ and $2n$. As you have already checked all $n$ up to $25$ (and more) and found only $2$, $3$, $5$, we can be sure that these are the only ones. The number of primes between $n$ and $2n$ only gets larger as $n$ increases: it follows from the prime-number theorem that $$ \lim_{n \to \infty} \frac{\pi(2n) - \pi(n)}{n/\log n} = 2 - 1 = 1,$$ so the number of primes between $n$ and $2n$, which is $\pi(2n) - \pi(n)$, is actually asymptotic to $\frac{n}{\log n}$ which gets arbitrarily large.
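A small sieve (my own check, in the same spirit as the questioner's search up to $15000$) confirms that among $n \le 1000$ only $n = 2, 3, 5$ have exactly one prime strictly between $n$ and $2n$; combined with Nagura's bound for $n \ge 25$, this closes the argument.

```python
def sieve(limit):
    # standard sieve of Eratosthenes; is_p[k] is True iff k is prime
    is_p = [False, False] + [True] * (limit - 1)
    for i in range(2, int(limit ** 0.5) + 1):
        if is_p[i]:
            for j in range(i * i, limit + 1, i):
                is_p[j] = False
    return is_p

is_p = sieve(2000)

def count_between(n):
    # primes x with n < x < 2n
    return sum(is_p[x] for x in range(n + 1, 2 * n))

exactly_one = [n for n in range(2, 1000) if count_between(n) == 1]
print(exactly_one)   # [2, 3, 5]
```

For $n \ge 25$ the count never dips below $3$, matching the $\left(\frac65\right)^3 < 2$ argument above.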
{ "language": "en", "url": "https://math.stackexchange.com/questions/466844", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 2, "answer_id": 1 }
prove $\sum\limits_{n\geq 1} (-1)^{n+1}\frac{H_{\lfloor n/2\rfloor}}{n^3} = \zeta^2(2)/2-\frac{7}{4}\zeta(3)\log(2)$ Prove the following: $$\sum\limits_{n\geq 1}(-1)^{n+1}\frac{H_{\lfloor n/2\rfloor}}{n^3} = \frac{1}{2}\zeta(2)^2-\frac{7}{4}\zeta(3)\log(2)$$ I was able to prove the formula above and am interested in what approach you would take.
The challenge is interesting, but easy if we know some classical infinite sums with harmonic numbers: http://mathworld.wolfram.com/HarmonicNumber.html (typing mistake corrected) I was sure that the formula for $\sum\frac{H_{k}}{(2k+1)^3}$ was in all the mathematical handbooks among the list of sums of the same kind. I just realized that it is missing in the Wolfram article referenced above. Sorry for that. Then, see: http://www.wolframalpha.com/input/?i=sum+HarmonicNumber%28n%29%2F%282n%2B1%29%5E3+from+n%3D1to+infinity One can find in the literature some papers dealing with sums of harmonic numbers, and even more with sums of polygamma functions. The harmonic numbers are directly related to particular values of polygamma functions, so when we face a problem involving harmonic numbers, it is a good idea to transform it into a problem about polygamma functions. For example, in the paper "On Some Sums of Digamma and Polygamma Functions" by Michael Milgram, one can find the methods and a lot of formulas with proofs: http://arxiv.org/ftp/math/papers/0406/0406338.pdf From this, one could derive a general formula for $\sum\limits_{n\geq 1}\frac{H_n}{(an+b)^p}$ with any $a, b$ and integer $p>2$. Less ambitiously, the case $a=2$, $b=1$, $p=3$ is considered below:
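Independently of the polygamma machinery, the identity itself is easy to confirm numerically. The sketch below is my own (the cutoff and the hard-coded value of Apéry's constant $\zeta(3)$ are assumptions of the script, not part of the answer):

```python
import math

N = 200000                       # even cutoff; terms decay like log(n)/n^3
s, h = 0.0, 0.0                  # h tracks H_{floor(n/2)} as n advances
for n in range(1, N + 1):
    if n % 2 == 0:
        h += 1.0 / (n // 2)      # floor(n/2) just increased, extend the harmonic sum
    s += (1.0 if n % 2 == 1 else -1.0) * h / n ** 3

zeta2 = math.pi ** 2 / 6
zeta3 = 1.2020569031595942854    # Apery's constant, zeta(3)
rhs = 0.5 * zeta2 ** 2 - 1.75 * zeta3 * math.log(2)
print(s, rhs)                    # both ~ -0.105200
```

Because the series is alternating with rapidly shrinking terms, the truncation error at this cutoff is far below the printed digits.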
{ "language": "en", "url": "https://math.stackexchange.com/questions/467002", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "16", "answer_count": 1, "answer_id": 0 }
$O(n,\mathbb R)$ of all orthogonal matrices is a closed subset of $M(n,\mathbb R).$ Let $M(n,\mathbb R)$ be endowed with the norm $(a_{ij})_{n\times n}\mapsto\sqrt{\sum_{i,j}|a_{ij}|^2}.$ Then the set $O(n,\mathbb R)$ of all orthogonal matrices is a closed subset of $M(n,\mathbb R).$ My Attempt: Let $f:M(n,\mathbb R)\to M(n,\mathbb R):A\mapsto AA^t.$ Choose a sequence $\{A_k=(a^k_{ij})\}\subset M(n,\mathbb R)$ such that $A_k\to A=(a_{ij})$ for chosen $A\in M(n,\mathbb R).$ Then $\forall~i,j,$ $a_{ij}^k\to a_{ij}$ in $\mathbb R.$ Now $A_kA_k^t=(\sum_{p=1}^n a_{ip}^ka_{jp}^k)~\forall~k\in\mathbb Z^+.$ Choose $i,j\in\{1,2,...,n\}.$ Then for $p=1,2,...,n;~a_{ip}^k\to a_{ip},~a_{jp}^k\to a_{jp}$ in $\mathbb R\implies \sum_{p=1}^n a_{ip}^ka_{jp}^k\to \sum_{p=1}^n a_{ip}a_{jp}$ in $\mathbb R.$ So $(\sum_{p=1}^n a_{ip}^ka_{jp}^k)\to (\sum_{p=1}^n a_{ip}a_{jp})\implies A_kA_k^t\to AA^t.$ So $f$ is continuous on $M(n,\mathbb R).$ Now $O(n,\mathbb R)=f^{-1}(\{I\}).$ The singleton set $\{I\}$ being closed in $M(n,\mathbb R),$ $O(n,\mathbb R)$ is closed in $M(n,\mathbb R).$ I'm not absolutely sure about the steps. Is't a correct attempt?
It would be quicker to observe that $f$ is a vector of polynomials in the natural coordinates, and polynomials are continuous, so $f$ is continuous.
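A small illustrative sketch of this (the perturbation scheme and tolerances are my own choices, not from the answer): each entry of $f(A) = AA^t$ is a polynomial in the entries of $A$, so matrices that are entrywise close have close images under $f$.

```python
import math

def f(A):
    # f(A) = A A^t; each entry is a polynomial in the entries of A
    n = len(A)
    return [[sum(A[i][p] * A[j][p] for p in range(n)) for j in range(n)]
            for i in range(n)]

theta = 0.7
A = [[math.cos(theta), -math.sin(theta)],
     [math.sin(theta),  math.cos(theta)]]          # orthogonal, so f(A) = I

def deviation(eps):
    Ak = [[entry + eps for entry in row] for row in A]   # A_k -> A as eps -> 0
    fA, fAk = f(A), f(Ak)
    return max(abs(fAk[i][j] - fA[i][j]) for i in range(2) for j in range(2))

devs = [deviation(e) for e in (1e-1, 1e-3, 1e-6)]
# devs shrink with eps, illustrating continuity of f
```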
{ "language": "en", "url": "https://math.stackexchange.com/questions/467089", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 1, "answer_id": 0 }
The origin of $\pi$ How was $\pi$ originally found? Was it originally found using the ratio of the circumference to diameter of a circle of was it found using trigonometric functions? I am trying to find a way to find the area of the circle without using $\pi$ at all but it seems impossible, or is it? If i integrate the circle I get: $$4\int_{0}^{1}\sqrt{1-x^{2}}dx=4\left [ \frac{\sin^{-1} x}{2}+\frac{x\sqrt{1-x^{2}}}{2} \right ]_{0}^{1}=\pi $$ But why does $\sin^{-1} 1=\frac{\pi }{2}$? Is it at all possible to find the exact area of the circle without using $\pi$?
To answer "Is it at all possible to find the exact area of the circle without using $\pi$?": hello, $A=CR/2$, where $C$ is the circumference and $R$ the radius. "How was $\pi$ originally found?" Maybe Pythagoras and Euclid, with $a^2+b^2=c^2$, found the areas of squares. Then Archimedes found $3+10/71<\pi<3+1/7$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/467149", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 5, "answer_id": 4 }
Abelian Groups and Number Theory What is the connection between "Finite Abelian Groups" and "Chinese Remainder Theorem"? (I have not seen the "abstract theory" behind Chinese Remainder Theorem and also its proof. On the other hand, I know abstract group theory and classification of finite abelian groups. Please, give also a motivation to study "Chinese Remainder Theorem from "Group Theory point of view".)
Let $m$ and $n$ be coprime, and let $a$ and $b$ be any integers. According to the Chinese remainder theorem, there exists a unique solution modulo $mn$ to the pair of equations $$x \equiv a \mod{m}$$ $$x \equiv b \mod{n}$$ Now the map $(a,b) \mapsto x$ is an isomorphism of rings from $\mathbb{Z}/m\mathbb{Z} \oplus \mathbb{Z}/n\mathbb{Z}$ to $\mathbb{Z}/mn\mathbb{Z}$. Conversely, if we are given an isomorphism of rings $\phi: \mathbb{Z}/m\mathbb{Z} \oplus \mathbb{Z}/n\mathbb{Z} \rightarrow \mathbb{Z}/mn\mathbb{Z}$, then $x = \phi(a,b)$ is a solution to the pair of equations since in this case $\phi(x,x) = x = \phi(a,b)$.
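A small computational illustration of this isomorphism (a sketch; the helper names are my own): solve the pair of congruences with the extended Euclidean algorithm, then check that $(a,b)\mapsto x$ is a bijection compatible with addition and multiplication.

```python
def egcd(p, q):
    # returns (g, u, v) with u*p + v*q == g == gcd(p, q)
    if q == 0:
        return p, 1, 0
    g, u, v = egcd(q, p % q)
    return g, v, u - (p // q) * v

def crt(a, m, b, n):
    # unique x mod m*n with x ≡ a (mod m) and x ≡ b (mod n); needs gcd(m, n) == 1
    g, u, v = egcd(m, n)
    assert g == 1
    # v*n ≡ 1 (mod m) and u*m ≡ 1 (mod n), so this x solves both congruences
    return (b * u * m + a * v * n) % (m * n)

m, n = 5, 7
x = crt(2, m, 3, n)
print(x, x % m, x % n)        # 17 2 3
```

The bijectivity check below is exactly the "unique solution modulo $mn$" statement, and the two ring checks mirror the fact that the map respects $+$ and $\times$ coordinatewise.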
{ "language": "en", "url": "https://math.stackexchange.com/questions/467219", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
singleton null vector set linearly dependent, but other singletons are linearly independent set Why the set $\{\theta_v\}$ where $\theta_v$ is the null vector of a vector space is a dependent set intuitively (what is the source of dependence) and the singleton vector set which are non-null are independent sets ? (btw, I know how to show it mathematically, but not understanding the intuition behind this). Does it really make sense to think of independence of a singleton set ?
The intuition is the following: the null vector only spans a zero-dimensional space, whereas any other vector spans a one-dimensional space. This is captured by the following thought: A set of vectors $\{ \bar v_1, \bar v_2, ..., \bar v_n\}$ is linearly independent iff $span(\{ \bar v_1, ..., \bar v_n\})$ is not spanned by a proper subset of $\{ \bar v_1, \bar v_2, ..., \bar v_n\}$. Now, the space spanned by $\{ \bar o\}$ is already spanned by a proper subset of $\{ \bar o\}$ namely $\emptyset$. For a non-zero $\bar v, span(\bar v)$ is not spanned by any proper subset of $\{ \bar v \}$. Edit: Definition: Let $S=\{ \bar v_i:i\in I\}$ be a subset of a vector space $V$, then $$span(S):= \{ \sum_{i \in I}c_i\bar v_i: \bar v_i \in S, c_i \in \Bbb F, c_i=0 \mbox { for almost all } i \}.$$ Taking $I=\emptyset$, i.e. $S=\emptyset$, we get $$span(\emptyset )= \{ \sum_{i \in \varnothing }c_i\bar v_i \} = \{ \bar o \},$$ because by definition the empty sum equals $\bar o$ (just like in arithmetic, where the empty sum equals $0$ and the empty product equals $1$, or in set theory, where the empty union equals $\emptyset$).
{ "language": "en", "url": "https://math.stackexchange.com/questions/467396", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Integral $ \lim_{n\rightarrow\infty}\sqrt{n}\int\limits_0^1 \frac {f(x)dx}{1 + nx^2} = \frac{\pi}{2}f(0) $ Show that for $ f(x) $ a continuous function on $ [0,1] $ we have \begin{equation} \lim_{n\rightarrow\infty}\sqrt{n}\int\limits_0^1 \frac {f(x)dx}{1 + nx^2} = \frac{\pi}{2}f(0) \end{equation} It is obvious that \begin{equation} \sqrt{n}\int\limits_0^1 \frac {f(x)dx}{1 + nx^2} = \int\limits_0^1 f(x) d [\arctan(\sqrt{n}x)] \end{equation} and for any $ x \in (0, 1] $ \begin{equation} \lim_{n\rightarrow\infty}{\arctan(\sqrt{n}x)} = \frac{\pi}{2}, \end{equation} so the initial statement looks very reasonable. But we can't even integrate by parts because $ f(x) $ is in general non-smooth! Can anybody help please?
Hint: Make the change of variables $ y=\sqrt{n}x .$
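A numerical sketch of why the substitution works (the choice of $f$, the value of $n$, and the tolerances are all mine): after $y=\sqrt{n}\,x$ the integral becomes $\int_0^{\sqrt n} f(y/\sqrt n)\,\frac{dy}{1+y^2}$, which for large $n$ is close to $f(0)\int_0^\infty \frac{dy}{1+y^2} = \frac{\pi}{2}f(0)$.

```python
import math

def transformed(f, n, steps=200_000):
    # midpoint rule for ∫_0^{√n} f(y/√n) / (1 + y²) dy, which equals √n ∫_0^1 f(x)/(1 + n x²) dx
    top = math.sqrt(n)
    h = top / steps
    return h * sum(f((k + 0.5) * h / top) / (1 + ((k + 0.5) * h) ** 2)
                   for k in range(steps))

f = lambda x: math.exp(-x)        # continuous on [0, 1] with f(0) = 1
val = transformed(f, 1_000_000)
print(val, math.pi / 2)           # val is already within roughly 0.01 of π/2
```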
{ "language": "en", "url": "https://math.stackexchange.com/questions/467562", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 0 }
Is a locally compact space a KC-space if and only if it is Hausdorff? A topological space is called a $US$-space provided that each convergent sequence has a unique limit. We know that for locally compact spaces,‎ ‎$ ‎T‎_{2} ‎‎‎\equiv KC‎‎$. We have: ‎‎$ ‎T_2‎ ‎\Rightarrow ‎KC‎‎‎ ‎\Rightarrow ‎US‎\Rightarrow ‎T_1‎ $‎. Is it possible to say "for locally compact spaces, $ ‎US‎\Rightarrow T_2$? ( It means $T_2 \equiv US$)
GEdgar has given one example in the comments. Start with the ordinal space $\omega_1$, and add two points, $p$ and $q$. For each $\alpha<\omega_1$ let $U_\alpha(p)=\{p\}\cup(\alpha,\omega_1)$ and $U_\alpha(q)=\{q\}\cup(\alpha,\omega_1)$, and take $\{U_\alpha(p):\alpha<\omega_1\}$ and $\{U_\alpha(q):\alpha<\omega_1\}$ as local bases at $p$ and $q$, respectively. The resulting space $X$ is not $T_2$, since $p$ and $q$ do not have disjoint nbhds, but it is compact, locally compact by any definition, and $US$. ($X$ is $US$ because $\omega_1$ is $T_2$, and the only sequences converging to $p$ or to $q$ are trivial ones.) The same idea can be applied to $\beta\omega$. Fix $p\in\beta\omega\setminus\omega$, let $q$ be a new point not in $\beta\omega$, and let $X=\beta\omega\cup\{q\}$. Topologize $X$ by making $\beta\omega$ an open subset of $X$ with its usual topology and making $U\subseteq X$ an open nbhd of $q$ iff $q\in U$, and $\{p\}\cup\big(U\setminus\{q\}\big)$ is an open nbhd of $p$ in $\beta\omega$. (In other words, we make $q$ a second copy of $p$.) The only convergent sequences in $X$ are the trivial ones, so $X$ is $US$, and it’s clear that $X$ has the other required properties.
{ "language": "en", "url": "https://math.stackexchange.com/questions/467587", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
A sum of zeros of an infinite Hadamard product I was experimenting with pairs of zeros of the following function ($i$ = imaginary unit): $\displaystyle \xi_{int}(s) := \xi_{int}(0) \prod_{n=1}^\infty \left(1- \frac{s}{+ ni} \right) \left(1- \frac{s}{{- ni}} \right) = \frac{\sinh(\pi s)}{s}$ and plugged these zeros into the following (paired) sum: $f(s):=\displaystyle \sum_{n=1}^\infty \left( \frac{s^{ni}}{ni} +\frac{s^{-ni}}{-ni} \right)$ which can be nicely transformed into this closed form: $f(s) := i( \ln(1-s^i) - \ln(1-s^{-i}))$ I then found that numerically: $f(e) = \pi-1$ and $f(\pi) = \pi-ln(\pi)$ but also that: $\displaystyle \lim_{x \to 0} f(1 \pm xi) = \pi$ however I struggle to properly mathematically derive these outcomes. Grateful for any help. Thanks.
If $s\ne e^{2n\pi},\quad n\in \mathbb{Z}$, you have $$f(s)=i\ln\left(\frac{1-s^i}{1-s^{-i}}\right)=i\ln(-s^i)\\ =i\ln((se^{(2k+1)\pi})^i)=i\ln\left(r^ie^{-\theta+i(2k+1)\pi}\right),\quad (k\in \mathbb{Z})\\= -((2k+1)\pi+\ln r)-i\theta$$where $s=re^{i\theta}$. So, $f(s)$ is actually multi-valued. One value of $f(e)$ is thus $\pi -1$ which is obtained by putting $k=-1$ in the above equation. Similarly, putting $k=-1$ in the equation for $f(\pi)$, you get the result you've got numerically. Also, $$f(1\pm xi)=f(\sqrt{1+x^2}e^{i\theta})=\pi-\frac{1}{2}\ln(1+x^2)-i\theta\quad (k=-1)$$ where $$\theta=\pm \tan^{-1}x$$So, as $x\rightarrow 0$, $\theta\rightarrow 0$ and $f(1\pm xi)\rightarrow \pi$
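The quoted numerical values can be checked directly with the principal branch of the logarithm (this snippet and its tolerances are my own sketch); for $1<s<e^{2\pi}$ the principal branch gives $f(s)=\pi-\ln s$, which matches the $k=-1$ value above.

```python
import cmath, math

def f(s):
    # i (ln(1 - s^i) - ln(1 - s^{-i})) with the principal branch of log
    si = cmath.exp(1j * cmath.log(s))
    return 1j * (cmath.log(1 - si) - cmath.log(1 - 1 / si))

print(f(math.e))    # ≈ (2.14159...+0j), i.e. π - 1
print(f(math.pi))   # ≈ π - ln(π)
```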
{ "language": "en", "url": "https://math.stackexchange.com/questions/467646", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Simple integration (area under the curve) - help I'm currently doing a simple integration question: Here is my working/solution so far: I have calculated this several times and only be seem to be getting a negative number as the final result. I know this is wrong as it is an area that needs to be calculated and therefore cannot be a negative number. Any help on this is very much appreciated. Thank you!
It seems your work is in an image which is appearing weirdly on my screen. I'll just outline my work. $ \displaystyle\int \left( 5 + 4\sqrt{x} - x^4 \right) \, \mathrm{d}x = \displaystyle\int 5 \, \mathrm{d}x + \displaystyle\int 4 \cdot x^{\frac{1}{2}} \, \mathrm{d}x - \displaystyle\int x^4 \, \mathrm{d}x $ $ = 5x + \dfrac {8}{3} x^{\frac{3}{2}} - \dfrac {x^5}{5} + \mathcal{C} $ Now, apply the limits of integration. At $ x = 3 $, the antiderivative's value is $ \approx -19.7 $. At $ x = 2 $, it is $ \approx 11.1 $. The answer is, thus: $$ \approx -19.7 - 11.1 = \boxed {-30.8}. $$
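A quick numeric cross-check of the antiderivative against Simpson's rule, assuming the integrand $5 + 4\sqrt{x} - x^4$ that the antiderivative $5x + \frac{8}{3}x^{3/2} - \frac{x^5}{5}$ corresponds to (the step count is an arbitrary choice of mine):

```python
def integrand(x):
    return 5 + 4 * x ** 0.5 - x ** 4

def F(x):
    # the antiderivative found above
    return 5 * x + (8 / 3) * x ** 1.5 - x ** 5 / 5

def simpson(f, a, b, n=2000):
    # composite Simpson's rule; n must be even
    h = (b - a) / n
    s = f(a) + f(b) + sum(f(a + k * h) * (4 if k % 2 else 2) for k in range(1, n))
    return s * h / 3

exact = F(3) - F(2)
approx = simpson(integrand, 2, 3)
print(exact, approx)     # both ≈ -30.886
```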
{ "language": "en", "url": "https://math.stackexchange.com/questions/467704", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
What is the main use of Lie brackets in the Lie algebra of a Lie group? I am beginner in Lie group theory, and I can't find the answer a question I am asking myself : I know that the Lie algebra $\mathfrak g$ of a Lie group $G$ is more or less the tangent vector of $G$ at the identity, so that $\mathfrak g$ have a very interesting property : linearity. However $\mathfrak g$ has another property : it is stable under Lie brackets $[.,.]$. For me when I study Lie groups I always find linearity of Lie algebras really important, and I don't see and I didn't find why the stability under Lie brackets is important. What is the main result/property of Lie groups using this property? That would be great if you could light me!
A good question. There are many aspects of the situation... At least one fundamental structure can be understood in the following way. First, imagining that $t$ is an "infinitesimal", so that $t^3=0$ (not $t^2=0$!) (or equivalent...), and imagining that elements of the Lie group near the identity are $g=1+tx$ and $h=1+ty$ (with $x,y$ in the Lie algebra, and with inverses $g^{-1}=1-tx+t^2x^2$, $h^{-1}=1-ty+t^2y^2$) observe that $(1+tx)(1+ty)(1+tx)^{-1}(1+ty)^{-1}=1+t^2(xy-yx)$. Thus, we care about $xy-yx=[x,y]$. E.g., for matrix Lie groups, so that $x,y$ are matrices, this makes sense, where $xy-yx$ is in the matrix algebra. Yes, several issues are left hanging after this walk-through, but the symbol-pattern proves to be excellent, in essentially all incarnations.
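The symbol-pattern can be tested with honest $2\times2$ matrices (a sketch of mine, using the exact inverses $g^{-1}, h^{-1}$ rather than first-order approximations): the group commutator $ghg^{-1}h^{-1}$ of $g = I + tX$, $h = I + tY$ agrees with $I + t^2[X,Y]$ up to order $t^3$.

```python
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def inv2(A):
    # exact inverse of a 2x2 matrix
    (a, b), (c, d) = A
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

X = [[0.0, 1.0], [0.0, 0.0]]
Y = [[0.0, 0.0], [1.0, 0.0]]     # [X, Y] = XY - YX = [[1, 0], [0, -1]]

def defect(t):
    g = [[1.0, t], [0.0, 1.0]]   # I + tX
    h = [[1.0, 0.0], [t, 1.0]]   # I + tY
    comm = matmul(matmul(g, h), matmul(inv2(g), inv2(h)))   # g h g⁻¹ h⁻¹
    target = [[1 + t * t, 0.0], [0.0, 1 - t * t]]           # I + t² [X, Y]
    return max(abs(comm[i][j] - target[i][j]) for i in range(2) for j in range(2))

for t in (0.1, 0.01, 0.001):
    print(t, defect(t))          # the defect shrinks like t³
```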
{ "language": "en", "url": "https://math.stackexchange.com/questions/467797", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "15", "answer_count": 3, "answer_id": 0 }
Function that is identically zero Is it true that: Any rational function $f$ on $\mathbb{C}^2$ that vanishes on $S=\{(x,y)\in\mathbb{C}^2 : x=ny \text{ for some } n \in \mathbb{Z}\}$ must be identically zero. I have a theorem that says any rational function that vanishes on an open set in Zariski topology must be identically zero, but I can't seem to prove that $S$ is open. Actually, I don't even think $S$ is open.
If the rational function $f = \frac{p}{q}$ vanishes on $S$, then at each point of $S$, so does either the polynomial $p$ or the polynomial $q$. Which means that the polynomial $pq$ vanishes on the whole of $S$. However, if this polynomial is non-zero, this means that $(x-ny)$ is a factor of $pq$ for all $n$, and therefore $pq$ is of infinite degree. This is clearly absurd, so $pq$ must be identically $0$. It cannot be $q$, so therefore it must be $p$ that is identically $0$, and hence also $f$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/467933", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Evaluating an improper integral using complex analysis I am trying to evaluate the improper integral $I:=\int_{-\infty}^\infty f(x)dx$, where $$ f(z) := \frac{\exp((1+i)z)}{(1+\exp z)^2}. $$ I tried to do this by using complex integration. Let $L,L^\prime>0$ be real numbers, and $C_1, C_2, C_3, C_4$ be the line segments that go from $-L^\prime$ to $L$, from $L$ to $L+2\pi i$, from $L + 2\pi i$ to $-L^\prime+2\pi i$ and from $-L^\prime+2\pi i$ to $-L^\prime$, respectively. Let $C = C_1 + C_2 + C_3 + C_4$. Here we have (for sufficiently large $L$ and $L^\prime$) $$ \left|\int_{C_2}f(z) dz\right| \le \int_0^{2\pi}\left|\frac{\exp((1+i)(L+iy))}{(\exp(L+iy)+1)^2}i\right| dy \le \int_0^{2\pi}\frac{1}{(1-e^{-L})(e^L - 1)}dy\rightarrow0\quad(L\rightarrow\infty), $$ $$ \left|\int_{C_4}f(z)dz\right|\le\int_0^{2\pi}\left|\frac{\exp((1+i)(-L^\prime+iy))}{(\exp(-L^\prime + iy) + 1)^2}(-i)\right|dy\le\int_0^{2\pi}\frac{e^{-L^\prime}}{(1-e^{-L^\prime})^2}dy\rightarrow 0\quad(L^\prime\rightarrow\infty), $$ and $$ \int_{C_3}f(z)dz = e^{-2\pi}\int_{C_1}f(z)dz. $$ Thus $$I = \lim_{L,L^\prime\rightarrow\infty}\frac{1}{ (1 + e^{-2\pi})}\oint_Cf(z)dz.$$ Within the perimeter $C$ of the rectangle, $f$ has only one pole: $z = \pi i$. Around this point, $f$ has the expansion $$ f(z) = \frac{O(1)}{(-(z-\pi i)(1 + O(z-\pi i)))^2} =\frac{O(1)(1+O(z-\pi i))^2}{(z-\pi i)^2} = \frac{1}{(z-\pi i)^2} + O((z-\pi i)^{-1}), $$ and thus the order of the pole is 2. Its residue is $$ \frac{1}{(2-1)!}\frac{d}{dz}\Big|_{z=\pi i}(z-\pi i)^2f(z) = -\pi \exp(i\pi^2) $$ (after a long calculation) and we have finally $I=-\exp(i\pi^2)/2i(1+\exp(-2\pi))$. My question is whether this derivation is correct. I would also like to know if there are easier ways to do this (especially, those of calculating the residue). I would appreciate if you could help me work on this problem.
\begin{eqnarray*} \int_{-\infty}^{\infty} {{\rm e}^{\left(1\ +\ {\rm i}\right)x} \over \left(1 + {\rm e}^{x}\right)^2}\,{\rm d}x & = & \int_{0}^{\infty}\left\lbrack% {{\rm e}^{\left(-1\ +\ {\rm i}\right)x} \over \left(1 + {\rm e}^{-x}\right)^2} + {{\rm e}^{-\left(1\ +\ {\rm i}\right)x} \over \left(1 + {\rm e}^{-x}\right)^2} \right\rbrack {\rm d}x \\ & = & 2\,\Re\int_{0}^{\infty} {{\rm e}^{-\left(1\ -\ {\rm i}\right)x} \over \left(1 + {\rm e}^{-x}\right)^2}\,{\rm d}x = 2\,\Re\int_{0}^{\infty} {\rm e}^{-\left(1\ -\ {\rm i}\right)x} \sum_{n = 1}^{\infty}\left(-1\right)^{n}\,n\,{\rm e}^{-\left(n - 1\right)x}\,{\rm d}x \\ & = & 2\,\Re\sum_{n = 1}^{\infty}\left(-1\right)^{n}\,n \int_{0}^{\infty}{\rm e}^{-\left(n - {\rm i}\right)x}\,{\rm d}x = 2\,\Re\sum_{n = 1}^{\infty}\left(-1\right)^{n}\,{n \over n - {\rm i}} \\ & = & 2\,\Re\sum_{n = 1}^{\infty}\left(% -\,{2n - 1\over 2n - 1 - {\rm i}} + {2n \over 2n - {\rm i}} \right) \\ & = & 2\,\Re\sum_{n = 1}^{\infty}\left\lbrack% \left(-1 - {{\rm i} \over 2n - 1 - {\rm i}}\right) + \left(1 + {{\rm i} \over 2n - {\rm i}}\right) \right\rbrack \\ & = & 2\,\Im\sum_{n = 1}^{\infty}\left( {1 \over -{\rm i} + 2n} - {1 \over -{\rm i} + 2n - 1} \right) = 2\,\Im\sum_{n = 1}^{\infty} {\left(-1\right)^{n} \over -{\rm i} + n} \\ & = & 2\,\Im\left\lbrack\sum_{n = 0}^{\infty} {\left(-1\right)^{n} \over -{\rm i} + n} - {1 \over -{\rm i}} \right\rbrack = -2 + 2\,\Im\sum_{n = 0}^{\infty}{\left(-1\right)^{n} \over -{\rm i} + n} \\[1cm]&& \end{eqnarray*} $$ \int_{-\infty}^{\infty} {{\rm e}^{\left(1\ +\ {\rm i}\right)x} \over \left(1 + {\rm e}^{x}\right)^2}\,{\rm d}x = -2 + 2\,\Im\beta\left(-{\rm i}\right) $$
{ "language": "en", "url": "https://math.stackexchange.com/questions/468019", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 3, "answer_id": 2 }
Plotting large equation in mathematica I have this rather large equation which needs to be solved with respect to I1 so that I can plot it against X: -1 - (0.742611 I1 (1/(-(-14 + 16/I2)^2 + (1 - 15/I1 + 16/I2)^2) - ( 30 (1 - 15/I1 + 16/I2))/( I1 (-(-14 + 16/I2)^2 + (1 - 15/I1 + 16/I2)^2)^2)) X^1.5)/((1.36- I1) (I1/(-(-14 + 16/I2)^2 + (1 - 15/I1 + 16/I2)^2))^2.5) + ( 0.495074 X^1.5)/((1.36- I1) (I1/(-(-14 + 16/I2)^2 + (1 - 15/I1 + 16/I2)^2))^1.5) + ( 0.495074 I1 X^1.5)/((1.36- I1)^2 (I1/(-(-14 + 16/I2)^2 + (1 - 15/I1 + 16/I2)^2))^1.5) Mathematica can't solve it using Solve because it is rather complex and therefore I can't plot the solution. Is there some way I can achieve this, maybe using another program? I am not sure if my question is really clear so please ask if you need any further info.
Suppose we define equation as follows: equation=yourBigEquation; Now you can solve it numerically using NSolve producing a table of values (I select only $I1\in\mathbb{R}$ here; start from $X=10^{-10}$ because for $X=0$ there're no usable solutions): sol=I1/.Table[NSolve[equation, I1, Reals], {X, 10^-10, 1, 1/500}]; And now plot it: ListPlot[Transpose[sol], Joined->True, PlotRange->All, DataRange->{10^-10, 1}]
{ "language": "en", "url": "https://math.stackexchange.com/questions/468175", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Do there exist some relations between Functional Analysis and Algebraic Topology? As the title: does there exist some relations between Functional Analysis and Algebraic Topology. As we have known, the tools developed in Algebraic Topology are used to classify spaces, especially the geometrical structures in finite dimensional Euclidean space. But when we come across some infinite dimensional spaces, such as Banach spaces, do the tools in Algebraic Topology also take effect? Moreover, are there some books discussing such relation? My learning background is listed following:basic algebra(group, ring, field, polynomial); Rudin's real & complex analysis and functional analysis; general topology(Munkres level). Any viewpoint will be appreciated.
See Atiyah–Singer index theorem: http://en.wikipedia.org/wiki/Atiyah%E2%80%93Singer_index_theorem.
{ "language": "en", "url": "https://math.stackexchange.com/questions/468272", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11", "answer_count": 4, "answer_id": 1 }
Find the 12th term and the sum of the first 12 terms of a geometric sequence. A geometric series has a first term $\sqrt{2}$ and a second term $\sqrt{6}$ . Find the 12th term and the sum of the first 12 terms. I can get to the answers as irrational numbers using a calculator but how can I can obtain the two answers in radical form $243 * \sqrt{6}$ and $364 \left(\sqrt{6}+\sqrt{2}\right)$ ? The closest I get with the 12th term is $\sqrt{2} \left(\sqrt{6} \over \sqrt{2}\right)^{(12-1)}$ or $\sqrt{2} * 3^\left({11\over 2}\right)$ And for the sum ${\sqrt{2}-\sqrt{2}*(\sqrt{3})^{12} \over 1 - \sqrt{3}}$
So, the common ratio $=\frac{\sqrt6}{\sqrt2}=\sqrt3$ So, the $n$th term $=\sqrt2(\sqrt3)^{n-1}\implies$ the $12$th term $=\sqrt2(\sqrt3)^{12-1}=\sqrt2(\sqrt3)^{11}$ Now, $\displaystyle(\sqrt3)^{11}=\sqrt3 \cdot 3^5=243\sqrt3$, so the $12$th term $=\sqrt2\cdot243\sqrt3=243\sqrt6$ The sum of the first $n$ terms is $\displaystyle \sqrt2\cdot\frac{(\sqrt3)^n-1}{\sqrt3-1}$ $\implies$ the sum of the first $12$ terms $=\displaystyle \sqrt2\cdot\frac{(\sqrt3)^{12}-1}{\sqrt3-1}=\sqrt2\cdot\frac{(3^6-1)(\sqrt3+1)}{(\sqrt3-1)(\sqrt3+1)}$ (rationalizing the denominator ) $\displaystyle=\frac{(3^3-1)(3^3+1)\sqrt2(\sqrt3+1)}2=364(\sqrt6+\sqrt2)$
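Both closed forms check out numerically (a throwaway sketch with an arbitrary tolerance):

```python
from math import sqrt

a, r = sqrt(2), sqrt(6) / sqrt(2)          # first term and common ratio √3
terms = [a * r ** k for k in range(12)]

print(terms[11], 243 * sqrt(6))                 # 12th term: both ≈ 595.226
print(sum(terms), 364 * (sqrt(6) + sqrt(2)))    # sum: both ≈ 1406.39
```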
{ "language": "en", "url": "https://math.stackexchange.com/questions/468331", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Solve $\int \sqrt{7x + 4}\,dx$ I need to solve the following integral $$\int \sqrt{7x + 4}\,dx$$ I did the following steps: \begin{align} \text{Let} \, u &= 7x+4 \quad \text{Let} \, du = 7 \, dx \\ \int &\sqrt{u} \, du\\ &\frac{2 (7x+4)^{3/2}}{3} \end{align} The solution is: $\frac{2 (7x+4)^{3/2}}{21}$. I am having some trouble understanding where the denominator, $21$, comes from (is it because you integrate $du$ also and thus, $7 dx$ becomes $\frac{1}{7}$?). I believe this is some elementary step that I am missing. Can someone please explain to me this? Thanks! P.S Is it correct to say "solve the integral"?
When you made the u-substitution, you took $u=7x+4$ and hence $du=7 dx$. You forgot this factor of 7! In particular, $dx=du/7$. It helps to write out the $dx$ in the integral: $$\int \sqrt{7x+4} dx=\int \frac{\sqrt{u}}{7} du.$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/468397", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Trigonometric Identities Like $A \sin(x) + B \cos(y) = \cdots$ Are there any identities for trigonometric equations of the form: $$A\sin(x) + B\sin(y) = \cdots$$ $$A\sin(x) + B\cos(y) = \cdots$$ $$A\cos(x) + B\cos(y) = \cdots$$ I can't find any mention of them anywhere, maybe there is a good reason why there aren't identities for these? Thanks!
$A \, \cos(x) + B \, \cos(y)= C \, \cos(z)$, where, $$ C = \sqrt{(A \, \cos(x) + B \, \cos(y))^2 + (A \, \sin(x) + B \, \sin(y))^2}, $$ and $$ z = \tan^{-1}\left(\frac{A \, \sin(x) + B \, \sin(y)}{A \, \cos(x) + B \, \cos(y)}\right). $$
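This is just phasor addition: $Ae^{ix} + Be^{iy} = Ce^{iz}$, and taking real parts gives the identity. A numeric sketch (using `atan2` instead of $\tan^{-1}$ to avoid the branch ambiguity; the sample values are arbitrary):

```python
import math

def combine(A, x, B, y):
    # write A e^{ix} + B e^{iy} = C e^{iz} and return (C, z)
    re = A * math.cos(x) + B * math.cos(y)
    im = A * math.sin(x) + B * math.sin(y)
    return math.hypot(re, im), math.atan2(im, re)

A, x, B, y = 2.0, 0.7, 3.0, 1.9
C, z = combine(A, x, B, y)
print(C * math.cos(z), A * math.cos(x) + B * math.cos(y))   # equal
print(C * math.sin(z), A * math.sin(x) + B * math.sin(y))   # the sine version holds too
```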
{ "language": "en", "url": "https://math.stackexchange.com/questions/468475", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
Why is the Lebesgue-Stieltjes measure a measure? I'm having difficulty convincing myself the Lebesgue-Stieltjes measure is indeed a measure. The Lebesgue-Stieltjes measure is defined as such: Given a nondecreasing, right-continuous function $g$, let $\mathcal{H}_1$ denote the algebra of half-open intervals in $\mathbb{R}$. We define the Lebesgue-Stieltjes measure to be $\lambda: \mathcal{H}_1 \rightarrow[0,\infty]$, with $\lambda(I)=0$ if $I=\emptyset$, $ \lambda(I)=g(b)-g(a)$ if $I=(a,b]$, $-\infty\leq a < b < \infty$ and $\lambda(I)=g(\infty)-g(a)$ if $I=(a,\infty)$, $-\infty\leq a < \infty$. Showing countable subadditivity is done by a careful application of the $\epsilon2^{-n}$ trick, and while I couldn't do this on my own, this can be found in most analysis textbooks. What about the other inequality to show $\sigma$-additivity? Does anyone know of a resource that proves this or could share how to do this? I suspect this requires quite a bit more trickery than the proof of subadditivity.
Firstly, note that the measure defined here is a Radon measure (that is $\lambda(B)<\infty$ for any bounded borel set $B$). hence it is also $\sigma$-finite (Because $\mathbb{R}=\bigcup_{n\in\mathbb{Z}}(n,n+1]$). So if I can only show that the measure $\lambda$ is $\sigma$-additive on the semifield $\{(a,b]:-\infty\leq a\leq b\leq\infty\}$ (showing this is trivial), then it would be so over $\mathcal{B}$ by Caratheodory Extension Theorem.
{ "language": "en", "url": "https://math.stackexchange.com/questions/468545", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10", "answer_count": 2, "answer_id": 0 }
Distance Between Subsets in Connected Spaces Suppose $\langle X, d \rangle$ is a metric space. For any two sets $F,G \subseteq X$, by abuse of notation define $d(F,G) = \inf \{ d(f,g): f \in F, g \in G \}$. Let $\rho > 0$, $x \in X$, and $E \subseteq X$ be such that the open ball of radius $\rho$ centered at $x$ has non-empty intersection with both $E$ and its complement. If $X$ is connected, does it follow that $d(B_{\rho}(x) \cap E, B_{\rho}(x) \cap E^c) = 0$, where $E^c$ is the complement of $E$ in $X$? If the statement is false, are there obvious natural conditions that one could place on the metric space in question (rather than on the open ball centered at $x$) that guarantee that $d(B_{\rho}(x) \cap E, B_{\rho}(x) \cap E^c) = 0$?
Counterexamples for connected spaces have already been given by Daniel Fischer and Stefan H. It turns out that connectedness is somewhat tangential to the issue. The property you are after is inherited by dense subspaces, so it also applies to $\mathbb{Q}^n$ for example. That means it makes sense to look for conditions on the completion of $X$. A reasonable sufficient condition is that the completion is a length space, because in a length space all open balls are path-connected.
{ "language": "en", "url": "https://math.stackexchange.com/questions/468608", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Evaluating $\int_0^{\infty} {y^2 \cos^2(\frac{\pi y}{2}) \over (y^2-1)^2} dy$ I´m having trouble with the following integral $$ \int_0^{\infty} {y^2 \cos^2(\frac{\pi y}{2}) \over (y^2-1)^2} dy $$ I have tried lots of approaches and nothing works. Mathematica says it does not converge but that is not true. It appears in a Physical problem (it is the energy of a system) and the answer should be (by conservation of energy): $\frac{\pi^2}{8}$ but I cannot show it.
\begin{align*} I &= {1 \over 2}\int_{-\infty}^{\infty} {y^2 \cos^{2}\left(\pi y/2\right) \over \left(y^{2} - 1\right)^{2}}\,{\rm d}y = {1 \over 8}\int_{-\infty}^{\infty}y\cos^{2}\left(\pi y/2\right)\left\lbrack% {1 \over \left(y - 1\right)^{2}} - {1 \over \left(y + 1\right)^{2}} \right\rbrack \,{\rm d}y \\[5mm]&= {1 \over 8}\int_{-\infty}^{\infty}y\cos^{2}\left(\pi y/2\right)\left\lbrack% {1 \over \left(y - 1\right)^{2}} + {1 \over \left(y - 1\right)^{2}} \right\rbrack \,{\rm d}y = {1 \over 4}\int_{-\infty}^{\infty}{y\cos^{2}\left(\pi y/2\right) \over \left(y - 1\right)^{2}} \,{\rm d}y \\[5mm]&= {1 \over 4}\int_{-\infty}^{\infty}{\sin^{2}\left(\pi y/2\right) \over y^{2}} \,{\rm d}y = {1 \over 4}\int_{0}^{\pi}{\rm d}\pi'\,{1 \over 2}\int_{-\infty}^{\infty} {\sin\left(\pi' y\right) \over y} \,{\rm d}y = {\pi \over 8}\int_{-\infty}^{\infty}{\sin\left(y\right) \over y}\,{\rm d}y \\[5mm]&= {\pi \over 8}\int_{-\infty}^{\infty}\,{\rm d}y\, {1 \over 2}\int_{-1}^{1}\,{\rm d}k\,{\rm e}^{{\rm i}ky} = {\pi^{2} \over 8}\int_{-1}^{1}\,{\rm d}k\, \int_{-\infty}^{\infty}\,{{\rm d}y \over 2\pi}\,{\rm e}^{{\rm i}ky} = {\pi^{2} \over 8}\int_{-1}^{1}\,{\rm d}k\,\delta\left(k\right) = {\Large{\pi^{2} \over 8}} \end{align*}
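The closed form can be sanity-checked numerically; the singularity at $y=1$ is removable, since $\cos^2(\pi y/2)$ vanishes to second order there (the cutoff and step size below are arbitrary choices of mine):

```python
import math

def integrand(y):
    c = math.cos(math.pi * y / 2)
    return y * y * c * c / (y * y - 1) ** 2   # removable singularity at y = 1

def midpoint(f, a, b, steps):
    # midpoint rule; the sample points never hit y = 1 exactly
    h = (b - a) / steps
    return h * sum(f(a + (k + 0.5) * h) for k in range(steps))

approx = midpoint(integrand, 0.0, 400.0, 200_000)
print(approx, math.pi ** 2 / 8)   # approx is within ~0.005 of π²/8 ≈ 1.2337 (the tail beyond 400 is ≈ 1/800)
```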
{ "language": "en", "url": "https://math.stackexchange.com/questions/468664", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Calculate $\int \frac{dx}{x\sqrt{x^2-1}}$ I am trying to solve the following integral $$\int \frac{dx}{x\sqrt{x^2-1}}$$ I did the following steps by letting $u = \sqrt{x^2-1}$ so $\text{d}u = \dfrac{x}{\sqrt{{x}^{2}-1}}\,\text{d}x$ then \begin{align} &\int \frac{\sqrt{x^2-1} \, \text{d}u}{x \sqrt{x^2-1}} \\ &\int \frac{1}{x} \text{d}u \\ &\int \frac{1}{\sqrt{u^2+1}} \text{d}u\\ \end{align} Now, this is where I am having trouble. How can I evaluate that? Please provide only hints. Thanks! EDIT: The problem specifically states that one must use substitution with $u = \sqrt{x^2-1}$. This problem is from the coursera course for Single Variable Calculus.
$$ \begin{aligned}\int \frac{d x}{x \sqrt{x^{2}-1}} =\int \frac{1}{x^{2}} d\left(\sqrt{x^{2}-1}\right) =\int \frac{d\left(\sqrt{x^{2}-1}\right)}{\left(\sqrt{x^{2}-1}\right)^{2}+1} =\tan ^{-1}\left(\sqrt{x^{2}-1}\right)+C \end{aligned} $$
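The antiderivative can be verified by differentiating numerically (a throwaway check; the sample point and step size are arbitrary):

```python
import math

F = lambda x: math.atan(math.sqrt(x * x - 1))      # the antiderivative found above
integrand = lambda x: 1 / (x * math.sqrt(x * x - 1))

x, h = 2.0, 1e-5
central = (F(x + h) - F(x - h)) / (2 * h)          # central difference ≈ F'(x)
print(central, integrand(2.0))     # both ≈ 0.288675, i.e. 1/(2√3)
```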
{ "language": "en", "url": "https://math.stackexchange.com/questions/468727", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 7, "answer_id": 4 }
Are the square roots of all non-perfect squares irrational? I was asked to define a non-perfect square. Now obviously, the first definition that comes to mind is a square that has a root that is not an integer. However, in the examples, 0.25 was considered a perfect square. And the square itself + its root were both not integers. Is it that all non-perfect squares have irrational roots, e.g. $\sqrt{2}$?
In the integers, a perfect square is one that has an integral square root, like $0,1,4,9,16,\dots$ The square root of all other positive integers is irrational. In the rational numbers, a perfect square is one of the form $\frac ab$ in lowest terms where $a$ and $b$ are both perfect squares in the integers. So $0.25=\frac 14$ is a perfect square in the rationals because both $1$ and $4$ are perfect squares in the integers. Any rational that has a reduced form where one of the numerator and denominator is not a perfect square in the integers is not a perfect square. For example, $\frac 12$ is not a perfect square in the rationals. $1$ is a perfect square in the integers, but $2$ is not, and there is no rational that can be squared to give $\frac 12$
{ "language": "en", "url": "https://math.stackexchange.com/questions/468781", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10", "answer_count": 2, "answer_id": 1 }
Check condition normal subgroup in these three examples Is the subgroup H of G is a normal subgroup of G, for: $$ i)\ G = S_5, \ H = \{id, (1,2)\} $$ $$ ii) \ G = (Sym(\mathbb{N}), \circ), \ H = \{f\in Sym(\mathbb{N}) : f(0) = 0 \}$$ $$ iii) \ G = S_4, \ H = \{id, (1,2,3), (1,3,2) \} $$ I know, that subgroup is normal, if: $$ \forall g\in G \ \ gH = Hg \ \ - \ \ \ H \lhd G $$ But I don't know how. Could you solve example number one and two? Thanks for your help. I know that $$ S_5 = \{id, (1,2,3,4,5), ..., (5,4,3,2,1)\} \ \ \ \ 5! = 120 $$ but I don't understand it.
While the condition for a subgroup being normal is correct, it is often not the most efficient or at least intuitive to check this via using it. Multiplying by $g^{-1}$ you get equivalently $gHg^{-1} = H$ for all $g \in G$. So, to get you started you could just calculate $ghg^{-1}$ for a couple of $g\in G$ and $h \in H$ and see if all the elements are actually in $H$. If you just find one where this is not true, you are done, and can conclude the group is not normal. If you do not seem to find such a counterexample you might start to suspect the subgroup is actually normal and then try to show that indeed for all $g \in G$ and $h \in H$ you have $ghg^{-1} \in H$. If you can show this you are also done and the group is a normal subgroup. Note: what I just said means $gHg^{-1} \subset H$ for all $g \in G$ (not equality). But this is yet another condition for a subgroup to be a normal subgroup.
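For example (i), a single conjugation already settles it. A small sketch (encoding permutations as Python dicts on $\{1,\dots,5\}$ is my own choice): conjugating $h=(1\,2)$ by $g=(1\,3)$ gives $(2\,3)\notin H$, so $H$ is not normal in $S_5$.

```python
def perm(swaps):
    p = {i: i for i in range(1, 6)}
    p.update(swaps)
    return p

def compose(p, q):                  # (p ∘ q)(i) = p(q(i))
    return {i: p[q[i]] for i in range(1, 6)}

def inverse(p):
    return {v: k for k, v in p.items()}

h = perm({1: 2, 2: 1})              # the transposition (1 2)
g = perm({1: 3, 3: 1})              # the transposition (1 3)
conj = compose(compose(g, h), inverse(g))     # g h g⁻¹
print(conj)      # fixes 1, swaps 2 and 3: the transposition (2 3)
```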
{ "language": "en", "url": "https://math.stackexchange.com/questions/468977", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Why is lasso not strictly convex I thought that a convex function which attains its minimum value at a unique point must be strictly convex, so I don't see why the lasso penalty is not strictly convex. For example, in the two-dimensional case, $\|x\|_1$ attains its minimum value at $(0,0)$, and this minimizer is unique.
It seems you think that a strictly convex function is a convex function which has a unique minimizer. This is the wrong definition, and the source of confusion here. What is true is that strict convexity is related to uniqueness of minimizer: if a strictly convex function attains its minimum, then it does so at exactly one point. However, having a unique minimizer does not imply strict convexity, or any convexity at all. It's important to remember that there are two different notions of strict convexity. Namely, we have: * *Strictly convex functions: $u(ta+(1-t)b)<tu(a)+(1-t)u(b)$ for $0<t<1$ *Strictly convex norms: $\|ta+(1-t)b\|<t\|a\|+(1-t)\|b\|$ for $0<t<1$, unless $a$ and $b$ are parallel vectors. A norm can never be a strictly convex function in the sense of definition 1, because for any nonzero vector $x$ we have $$\|2x\|=\frac12({\|x\|+\|3x\|}),\quad \text{where }\ 2x=\frac12(x+3x)$$ This is why the definition of a strictly convex norm requires non-parallel vectors. But $\|\cdot\|_1$ fails the second definition too: for example, $$\|e_1+e_2\|=\frac12({\|2e_1\|+\|2e_2\|}),\quad \text{where }\ e_1+e_2=\frac12(2e_1+2e_2)$$ and $e_1,e_2$ are standard basis vectors.
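The failure of the second definition is a one-line computation. A quick numeric check of the $e_1, e_2$ example from the answer:

```python
def norm1(v):
    return sum(abs(x) for x in v)

u, w = (2, 0), (0, 2)                       # 2*e1 and 2*e2, not parallel
mid = tuple((a + b) / 2 for a, b in zip(u, w))
# strict convexity of the norm would require norm1(mid) < (norm1(u) + norm1(w)) / 2,
# but the 1-norm gives equality:
assert norm1(mid) == (norm1(u) + norm1(w)) / 2 == 2.0
```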
{ "language": "en", "url": "https://math.stackexchange.com/questions/469062", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Eigenvector/value in linear transformation. I came across this problem For (a), I wrote $$T(3x,4y) = \lambda (3x, 4y)$$ Since $T$ is a reflection, $\lambda = 1$ That is as far as I got (b) I simply have no idea. I only know how to find $A$ through brute force.
Since the point $(4,3)$ is on the line, the reflection will take this point to itself; so this will be an eigenvector corresponding to $\lambda =1$. Similarly, the point $(3,-4)$ is on the line through the origin perpendicular to the given line; so the reflection will take $(3,-4)$ to its negative $(-3,4)$, and so $(3,-4)$ is an eigenvector corresponding to $\lambda =-1$.
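Assuming the line in question is the one through the origin spanned by $(4,3)$ (consistent with the answer), the standard reflection matrix $\frac{1}{a^2+b^2}\begin{pmatrix}a^2-b^2 & 2ab\\ 2ab & b^2-a^2\end{pmatrix}$ with $(a,b)=(4,3)$ lets us check both eigenvectors exactly:

```python
from fractions import Fraction as F

# reflection across the line through the origin with direction (4, 3):
# A = 1/25 * [[7, 24], [24, -7]]
M = [[F(7, 25), F(24, 25)],
     [F(24, 25), F(-7, 25)]]

def apply(M, v):
    return (M[0][0]*v[0] + M[0][1]*v[1],
            M[1][0]*v[0] + M[1][1]*v[1])

assert apply(M, (4, 3)) == (4, 3)      # eigenvector for eigenvalue +1
assert apply(M, (3, -4)) == (-3, 4)    # eigenvector for eigenvalue -1
```

Using exact rationals avoids any floating-point fuzz in the check.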
{ "language": "en", "url": "https://math.stackexchange.com/questions/469202", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
The unsolved mathematical light beam problem I have the following problem: Imagine that you have a sphere sitting at the interface of two media (like water and oil). The position (the height) of the interface relative to the center of the sphere is fixed and known (I drew a picture for two different situations). Now think about this: a beam of light (parallel rays) that encloses an angle alpha with the interface (the direction is supposed to be: the rays first enter the lower medium and then go to the upper one) hits this interface with the sphere. Now the question is: can we find an analytical expression for the maximum area of the sphere perpendicular to the direction of the rays that is defined by the rays that enter the sphere without going first into the other medium? If this is unclear, look at the picture: In situation 1 I drew 2 rays (of the infinitely many that enter at this angle) that fulfill the condition that they first hit the sphere without going into the upper medium. Now especially the left one is crucial, since if I had chosen one that was even slightly more shifted to the left side, that one would have been in the upper medium, so no longer a reasonable candidate. The right one is not a restriction. The right picture is poorly drawn, since I wanted to have one where both rays restrict the accessible area. But I think you have the idea now. What do I mean by area? I am looking for the biggest area (= projection of the surface area of the points that fulfill this onto a plane) perpendicular to the direction of the rays inside the sphere, that consists of rays that enter the sphere and fulfill the property above. So in the first picture this would probably be the area going through the center and "somewhat enclosed by the two rays", and in the second picture this should be the "imaginary interface" inside the sphere. If you have any questions, please do not hesitate to post them.
Let us assume that the radius of the sphere is $R$, the height of the center above the separation plane is $h$ (if the center is below the plane, $h$ is negative), and the angle the rays make with the vertical line is $\beta\in(0,\pi/2)$ (it is $\beta=\frac \pi 2-\alpha$ on your picture). Only the case $|h|<R$ is interesting. Now, 3 possibilities may present themselves: 1) $h\le -R\sin\beta$. Then we want the area of the projection of the circle of radius $\sqrt{R^2-h^2}$ to the slanted plane. It is just $A=\pi(R^2-h^2)\cos\beta$. 2) $h> R\sin\beta$. Then the answer is $A=\pi R^2$ (trivially; the border plane doesn't matter for the rays that hit the sphere at all). 3) $-R\sin\beta<h<R\sin\beta$. Then there are two parts: the part of the equator and the part of the cross-cut. The equator projects faithfully and the cross-cut with the $\cos\beta$ factor. All we need is to find the area of each. The separating line lies at the distance $H=\frac{|h|}{\sin\beta}$ from the center of the equator disk (for $h>0$, we need the small segment of the equator disk and the large segment of the separation disk; for $h<0$, it is the opposite). The length of that line is $2\sqrt{R^2-H^2}$. Now, if we have a disk of radius $r$ and a chord of length $2\ell$, the (small) segment that is cut off has the area $r^2\arcsin\frac\ell r-\ell\sqrt{r^2-\ell^2}$ (sector minus triangle). Now, let's bring everything together. If $h\ge 0$, then we get $$ A=R^2\arcsin\frac {\sqrt{R^2\sin^2\beta-h^2}}{R\sin\beta}-\frac {h\sqrt{R^2\sin^2\beta-h^2}}{\sin^2\beta}+ \\ \cos\beta\left[(R^2-h^2)\left(\pi-\arcsin\frac{\sqrt{R^2\sin^2\beta-h^2}}{\sin\beta\sqrt{R^2-h^2}}\right) +\frac{h\cos\beta\sqrt{R^2\sin^2\beta-h^2}}{\sin^2\beta}\right]\,.
$$ If $h\le 0$, we get $$ A=R^2\left(\pi-\arcsin\frac {\sqrt{R^2\sin^2\beta-h^2}}{R\sin\beta}\right) +\frac{|h|\sqrt{R^2\sin^2\beta-h^2}}{\sin^2\beta}+ \\ \cos\beta\left[(R^2-h^2)\arcsin\frac{\sqrt{R^2\sin^2\beta-h^2}}{\sin\beta\sqrt{R^2-h^2}} -\frac{|h|\cos\beta\sqrt{ R^2\sin^2\beta-h^2}}{\sin^2\beta} \right]\,. $$ I agree that this expression may make a nervous person pass out, and I admit that I might have made a stupid typo somewhere when $\LaTeX$-ing it, but I believe I wrote enough for you to be able to make an independent derivation. Enjoy! :)
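The "sector minus triangle" building block used above is easy to validate on its own. A small sketch (function names are mine) that compares the closed form $r^2\arcsin\frac{\ell}{r}-\ell\sqrt{r^2-\ell^2}$ with direct numerical integration of the strip beyond the chord:

```python
import math

def segment_area(r, l):
    # small circular segment cut off by a chord of length 2*l: sector minus triangle
    return r*r*math.asin(l/r) - l*math.sqrt(r*r - l*l)

def segment_area_numeric(r, l, m=200_000):
    # midpoint-rule integration of 2*sqrt(r^2 - x^2) from the chord out to x = r
    d = math.sqrt(r*r - l*l)          # distance from the centre to the chord
    h = (r - d) / m
    return sum(2*math.sqrt(max(r*r - (d + (i + 0.5)*h)**2, 0.0)) * h
               for i in range(m))

assert abs(segment_area(1, 1) - math.pi/2) < 1e-12   # full chord through centre: half disk
for r, l in [(1, 0.5), (2, 1), (3, 2.9)]:
    assert abs(segment_area(r, l) - segment_area_numeric(r, l)) < 1e-4
```

The two big displayed formulas are just this segment formula applied to the equator disk and the cross-cut disk with the appropriate $\cos\beta$ weight, so checking the block checks most of the mechanics.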
{ "language": "en", "url": "https://math.stackexchange.com/questions/469263", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 3, "answer_id": 1 }
Need help creating a Context-Free Grammar I'm trying to generate a CFG for the following $L$, but I'm stuck on how to do this. $$L = \{0^i1^j2^k\mid i<j+k\}$$
Since I don't know anything about context-free grammars, I'll feel free to give what might be a full solution to half the problem, or might be completely wrong. Remember that last bit: might very well be completely wrong. The good news is that I would venture to guess that even if it's wrong, it's probably not completely wrong. Elements of $L$ look like this: $$0^{p+q}1^j2^k=0^p(0^q1^j)2^k,$$ where either: * *$p<k$ and $q \le j$, or *$p \le k$ and $q < j$. Option 1: \begin{array}{cl} A \to A2 \mid B2 \mid 2& \text{We can have as many $2$s at the end as we like, but at least one.}\\ B \to 0B2 \mid C \mid 02& \text{We can add $0$s to the left as we add $2$s to the right.} \\ C \to 1 \mid 01 \mid 0C1 \mid C1 &\text{We can add as many $1$s as we like, and as many zeros as $1$s.} \end{array}
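In the spirit of the answer's own caveat, the proposed grammar can be checked by brute force: enumerate everything it derives up to a fixed length and compare with the target language $\{0^i1^j2^k : i<j+k\}$. A Python sketch (all helper names are mine):

```python
from itertools import product

rules = {"A": ["A2", "B2", "2"],
         "B": ["0B2", "C", "02"],
         "C": ["1", "01", "0C1", "C1"]}

def generated(maxlen):
    # leftmost-derivation search; every production is length-nondecreasing,
    # so sentential forms longer than maxlen can be pruned safely
    seen, frontier, terminal = set(), ["A"], set()
    while frontier:
        form = frontier.pop()
        i = next((k for k, ch in enumerate(form) if ch in rules), None)
        if i is None:
            terminal.add(form)
            continue
        for rhs in rules[form[i]]:
            new = form[:i] + rhs + form[i + 1:]
            if len(new) <= maxlen and new not in seen:
                seen.add(new)
                frontier.append(new)
    return terminal

def target(maxlen):
    # all strings 0^i 1^j 2^k with i < j + k, up to the given length
    return {"0"*i + "1"*j + "2"*k
            for i, j, k in product(range(maxlen + 1), repeat=3)
            if i + j + k <= maxlen and i < j + k}

G, T = generated(6), target(6)
assert G <= T            # soundness: every generated string satisfies i < j + k
assert "011" in T - G    # ...but strings with k = 0 (ending in a 1) are never produced
```

So the grammar is sound but, consistent with the "half the problem" disclaimer, it misses the strings with no $2$s at all (every $A$-production ends in a $2$); those would need an extra branch handling $i<j$, $k=0$.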
{ "language": "en", "url": "https://math.stackexchange.com/questions/469367", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Why does the following inequality holds for any weak solution $u\in C^1(B_1)$ of uniformly elliptic equation $D_i(a_{ij}D_ju)=0$? Now I'm studying "Elliptic Partial Differential Equations" by Q.Han and F. Lin. Throughout the section 5 of the chapter 1, $u\in C^1(B_1)$ is a weak solution of $$D_i(a_{ij}D_j u)=0$$ where $0<\lambda|\xi|^2\leq a_{ij}\xi_i\xi_j\leq \Lambda|\xi|^2$. In this setting, it says that for any $0<\rho<r\leq 1$, $$\int_{B_\rho}u^2\leq c\left(\frac{\rho}{r}\right)^\mu\int_{B_r}u^2$$ where $\mu$ depends only on $n$, $\lambda$ and $\Lambda$. It is presented as a remark of the following lemma: if $u$ is such a weak solution, then we have $$\int_{B_{R/2}}u^2\leq \theta\int_{B_R}u^2$$ where $\theta=\theta(n,\lambda,\Lambda)\in(0,1)$ and $0<R\leq1$. This lemma is straightforward by using the Poincare inequality and the Cacciopolli inequality. However, I have no idea how to get the first inequality from this lemma. (The author says it follows by iterating the result of this lemma.) Is there any one can help?
Let $f(r)=\int_{B_r} u^2$. We can forget the whole PDE thing and just work with this nonnegative increasing function of $r$, which satisfies $$f(r/2)\le \theta f(r),\quad 0\le r\le 1\tag1$$ Given $0<\rho<r\le 1$, let $k$ be the largest integer such that $2^k\rho\le r$. (It's possible that $k=0$.) Applying the inequality (1) $k$ times, we find that $$f(2^{-k}r)\le \theta^k f(r)$$ By the monotonicity of $f$, $$f(\rho) \le \theta^k f(r)\tag2$$ It remains to relate $k$ to $\rho/r$. The maximality of $k$ implies $2^{k+1}\rho>r$. Hence, $(1/2)^{k+1}<\rho/r$. Raising both sides to power $\mu = \log\theta/\log(1/2)$ yields $\theta^{k+1}<(\rho/r)^\mu$. Hence, (2) implies $$f(\rho) \le \theta^{-1} \left(\frac{\rho}{r}\right)^\mu f(r) $$ which has the required form.
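The argument is purely about the abstract function $f$, so it can be stress-tested numerically: build a synthetic nondecreasing step function on $(0,1]$ that satisfies $f(r/2)\le\theta f(r)$ by construction (a stand-in for $\int_{B_r}u^2$, not an actual PDE solution), and check the derived bound at random pairs $\rho<r$. A sketch, with all names mine:

```python
import random, math

theta = 0.7
mu = math.log(theta) / math.log(0.5)   # chosen so that theta = (1/2)**mu

# f is constant on each dyadic block (2^{-k-1}, 2^{-k}], and each step down
# shrinks the value by a random factor <= theta, so f(r/2) <= theta * f(r)
K = 40
rng = random.Random(0)
vals = [1.0]
for _ in range(K):
    vals.append(vals[-1] * rng.uniform(0.2, theta))

def f(r):
    k = min(int(math.floor(-math.log2(r))), K)   # r lies in (2^{-k-1}, 2^{-k}]
    return vals[k]

for _ in range(1000):
    r = rng.uniform(1e-6, 1.0)
    rho = rng.uniform(1e-9, r)
    assert f(rho) <= (1 / theta) * (rho / r)**mu * f(r) * (1 + 1e-12)
```

The `(1 + 1e-12)` factor only absorbs floating-point round-off; the inequality itself is the one proved above, with $\mu=\log\theta/\log(1/2)$.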
{ "language": "en", "url": "https://math.stackexchange.com/questions/469443", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Are these proofs on partial orders correct? I'm trying to prove whether the following are true or false for partial orders $P_1$ and $P_2$ over the same set $S$. 1) $P_1 \cup P_2$ is reflexive? True: since $P_1$ and $P_2$ both contain all the pairs $\{ (x,x) : x \in S\}$, we know the union also does. Thus $P_1 \cup P_2$ is reflexive. That set of $(x,x)$ elements is called $\Delta$, or the diagonal of $S$. 2) $P_1 \cup P_2$ is transitive? False: suppose $x\le y$ and $y\le z$ in $P_1 \cup P_2$. How do we know that $x\le y$ isn't from $P_1$ while $y\le z$ is only from $P_2$? There's nothing guaranteeing that we have $x\le z$. 3) $P_1 \cup P_2$ is antisymmetric? False: for all we know, $x\le y$ in $P_1$ and $y\le x$ in $P_2$ with $x \ne y$. That would be the complete opposite of antisymmetry: symmetry (for those two elements, at least).
1) Your answer is correct. One can just as easily say something stronger: if $R_1$ is a reflexive relation on a set $X$ and $R_2$ is any relation on $X$ containing $R_1$, then $R_2$ is also reflexive. 2) Your conclusion is correct, but you should nail it down by giving a specific counterexample. Can you find one? It suffices to take $X = \{1,2,3\}$. 3) To be sure, antisymmetric means that if $x \leq y$ and $y \leq x$, then $x = y$. Again your conclusion is correct but should be justified. Here an easy example would be to take $P_1$ to be any linear order $\leq$ on a set with more than one element -- e.g. the standard ordering on $\mathbb{Z}$ or $\mathbb{R}$ -- and $P_2$ to be the dual ordering, i.e., $x \leq_2 y \iff y \leq x$. Then $P_1 \cup P_2$ is the total relation $x R y$ for all $x,y$. This is not antisymmetric.
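Concrete counterexamples on $X=\{1,2,3\}$ are small enough to verify mechanically. A Python sketch (relations as sets of pairs, all names mine):

```python
from itertools import product

S = {1, 2, 3}
diag = {(x, x) for x in S}
P1 = diag | {(1, 2)}          # the partial order with 1 <= 2
P2 = diag | {(2, 3)}          # the partial order with 2 <= 3

def is_partial_order(R):
    refl = all((x, x) in R for x in S)
    anti = all(not ((x, y) in R and (y, x) in R and x != y)
               for x, y in product(S, S))
    trans = all((x, z) in R
                for (x, y1) in R for (y2, z) in R if y1 == y2)
    return refl and anti and trans

assert is_partial_order(P1) and is_partial_order(P2)

U = P1 | P2
assert (1, 2) in U and (2, 3) in U and (1, 3) not in U   # union is not transitive

P2d = {(y, x) for (x, y) in P1}      # the dual order of P1, as in the answer
U2 = P1 | P2d
assert (1, 2) in U2 and (2, 1) in U2  # antisymmetry fails: 1 != 2
```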
{ "language": "en", "url": "https://math.stackexchange.com/questions/469508", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 2 }
Is the regularization of an otherwise diverging two-sided sum always equal to zero? As a first example, take the divergent series of all powers of two $1+2+4+8+...=\sum\limits_{k=0}^\infty 2^k$ which can be regularized by using the analytical continuation of the geometric series $\sum\limits_{k=0}^\infty q^k = \frac1{1-q}\Big|_{|p|<1}$ to obtain $1+2+4+8+...=-1$, while on the other hand, the sum $\frac12 + \frac14 + \frac18 + ... = 1$, such that $$\sum_{k=-\infty}^\infty 2^k = 0$$ As a second example, take $... -3-2-1+0+1+2+3+...$, which is clearly zero as well (while the half-sided sum requires (Riemann) zeta regularization to obtain $1+2+3+4+...=-\frac1{12}$). But is this generally the case or did I just pick some exceptional examples? As a third example - that I am not sure about - take $...+1+1+1+1+...$: $$\underbrace{...+1+1+1}_{=\zeta(0)=-\frac12} + \underbrace{1}_{\stackrel{\text{from}}{k=0}} + \underbrace{1+1+1+1+...}_{=\zeta(0)=-\frac12} = 0$$ - I am not sure here since I pretend that $\sum_{k=1}^\infty\frac1{(-k)^s}\Big|_{s=0}$ is also $\zeta(0)$ due to the expression's symmetry.
It's too much to ask that the regularization of any two-sided divergent series be equal to zero. Clearly there is an extra symmetry in the examples you picked, both sides being given by the same expression. Otherwise, one could define either side separately to be any arbitrary divergent series and get all kinds of answers. It's clearly true for any geometric series $\cdots + q^{-2} + q^{-1} + 1 + q + q^2 + \cdots$ if you regularize the two sides separately. The sum $\sum_{k=1}^\infty q^k = \frac{q}{1-q}$ plus the sum $\sum_{k=1}^\infty q^{-k} = \frac{1/q}{1-1/q} = \frac{-1}{1-q}$ is $-1$ which cancels out with the $q^0$ term to give 0. It works trivially for the odd $\zeta$-sums, as in $\sum_{n=-1}^{-\infty} \frac{1}{n^{2k+1}} = -\zeta(2k+1)$, but fails for the even ones, such as $$\cdots + \frac{1}{(-3)^2} + \frac{1}{(-2)^2}+\frac{1}{(-1)^2} = 1 + \frac{1}{2^2} + \frac{1}{3^2} + \cdots = \frac{\pi^2}{6}$$ I played around with these divergent sums a while ago and found the same examples as you have, but no others. The third one was especially tantalizing, but I eventually convinced myself it's a coincidence (though of course there's no proof of that). One subtle phenomenon is the lack of "shift-invariance" when we assign limits to divergent sums. It's known that we can define, in a non-unique way, a linear function $\lim_{n\rightarrow \infty}$ on all sequences, such that it agrees with the normal limit on convergent ones, as long as we don't expect $\lim_{n \rightarrow \infty} a_{n} = \lim_{n \rightarrow \infty} a_{n+1}$. This can already be seen in the sums you gave. For example take $$a_n = \left\{ \begin{array}{c} 1\text{ if }n\text{ is odd}\\0\text{ otherwise}\end{array}\right.\ \ \ \ \ b_n = \left\{ \begin{array}{c} 1\text{ if }n\text{ is even}\\0\text{ otherwise}\end{array}\right.$$ Then $a_n + b_n$ is the constant 1 sequence, but $b_n = a_{n+1}$. If they have the same (non-zero) limit, they can't cancel out to 0.
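The geometric-series cancellation is an exact algebraic identity, $\frac{1/q}{1-1/q} + 1 + \frac{q}{1-q} = \frac{-1}{1-q} + 1 + \frac{q}{1-q} = 0$, which a quick numeric check confirms at sample points:

```python
# numeric check of the two-sided geometric cancellation at several q (q != 0, 1)
for q in (2.0, 3.0, -0.5, 10.0, 0.25):
    right = q / (1 - q)          # analytic continuation of q + q^2 + q^3 + ...
    left = (1/q) / (1 - 1/q)     # analytic continuation of 1/q + 1/q^2 + ...
    assert abs(left + 1 + right) < 1e-12
```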
{ "language": "en", "url": "https://math.stackexchange.com/questions/469568", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 3, "answer_id": 0 }
Number of solutions of exponential equation Can anyone tell me how to find number of solutions $(x+a)^x=b$? For example $(x+1)^x=-1$ has four complex solutions, $(x+3)^x=10$ has two solutions,one positive one negative, and $(x-4)^x=-10$ hasn't any solutions. PS.Sorry for my bad English,I hope you understand my question.
How to solve $(x+a)^x=b$? Well, to begin, as you already gather, we must allow for $x \in \mathbb{C}$. Then the question begs the question how do we define $z^x$ for complex $x$. Use the complex exponential function: $$ z^x = \exp (x \log (z)) $$ clearly $z=0$ is a problem. Moreover, this is a set of values due to the fact that $\log (z)$ is multiply valued. Consider then, $(x+a)^x=b$ becomes: $$ \exp (x \log (x+a)) =b $$ If $b \neq 0$ then it follows that, $$ x \log (x+a) = \log b $$ Assume $x \neq 0$ and divide by $x$, $$ \log (x+a) = \frac{1}{x}\log b. $$ Exponentiate, note $\exp (\log (z))=z$ $$ x+a = \exp \left( \frac{1}{x}\log b \right). $$ This is clearly transcendental, perhaps that was clear from the outset. But, at best we can hope to characterize solutions with a special function. Perhaps the Lambert W-function. According to (1) under "Generalizations" in the Wikipedia article, $$ e^{-cx} = a_o(x-r) $$ has solution $$ x = r + \frac{1}{c} \text{W} \left( \frac{ce^{-cr}}{a_o} \right)$$ This is lovely, but $e^{-cx}$ should look more like $e^{c/x}$ to match the problem we face. Perhaps a champion of the Lambert-$W$ function will appear and show us the light.
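For real solutions, the transcendental equation can at least be solved numerically, which confirms the question's claim that $(x+3)^x=10$ has one positive and one negative real root. A sketch using plain bisection on $g(x)=x\log(x+3)-\log 10$ (valid for $x>-3$; all names mine, and this says nothing about complex roots):

```python
import math

def g(x):                 # log of (x+3)^x minus log 10, defined for x > -3
    return x * math.log(x + 3) - math.log(10)

def bisect(f, lo, hi, tol=1e-12):
    flo = f(lo)
    for _ in range(200):
        mid = (lo + hi) / 2
        if hi - lo < tol:
            break
        if (f(mid) > 0) == (flo > 0):
            lo, flo = mid, f(mid)
        else:
            hi = mid
    return (lo + hi) / 2

r1 = bisect(g, 0.1, 3.0)        # positive root, near 1.52
r2 = bisect(g, -2.95, -2.05)    # negative root in (-3, -2), near -2.59
for r in (r1, r2):
    assert abs((r + 3) ** r - 10) < 1e-8
```

The sign conditions at the bracket endpoints (g(0.1)<0<g(3), and g(-2.95)>0>g(-2.05)) guarantee bisection converges in each interval.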
{ "language": "en", "url": "https://math.stackexchange.com/questions/469675", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
How can I prove that a set of natural numbers always have a minimum? Let's say I have a finite not-empty set named A, which is a set of natural numbers. How do I prove it has a minimum? (In Calculus)
You can't unless you use the condition that $A\ne \emptyset$. Assume $A\subseteq \mathbb N$ has no minimal element. Then prove by induction that $$\{1,\ldots,n\}\cap A=\emptyset$$ holds for all $n\in \mathbb N$. The induction step $n\to n+1$ goes as follows: assume $\{1,\ldots,n\}\cap A=\emptyset$. If $n+1\in A$, this would imply that $n+1$ is a minimal element of $A$, contrary to the assumption. Therefore $n+1\notin A$ and hence $\{1,\ldots,n+1\}\cap A=\emptyset$. Then note that $\{1,\ldots,n\}\cap A=\emptyset$ for all $n$ implies $A=\emptyset$ (as $n\in A$ implies $n\in \{1,\ldots,n\}\cap A$).
{ "language": "en", "url": "https://math.stackexchange.com/questions/469820", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Calculating $E[(X-E[X])^3]$ by mgf Calculate by mgf $E[(X-E[X])^3]$ where a. $X\sim B(n,p)$ b. $X\sim N(\mu,\sigma)$ Before I begin, I thought of writing $Y=X-E[X]$; then I'd differentiate $M_Y(t)$ three times, substitute $t=0$, and solve both questions, but I'm not sure about the distribution of $Y$. My question is: "Does subtracting a constant change the distribution of a random variable $\sim B(n,p)\text{ or } N(\mu,\sigma)$?" EDIT: I got $np(2p-1)(p-1)$ in the binomial case and $0$ in the normal case. The next question is why the binomial result tends to $0$ as $n\to\infty$.
Yes, it changes the distribution. For one thing, the mean changes by that constant. In the binomial case, the distribution is no longer binomial. In the normal case, the new distribution is normal, mean $\mu-c$, variance $\sigma^2$, where $c$ is the constant you subtracted. We will look at the problem in two ways. The second way, which is the better way, uses the fact that the mgf of $X-c$ is a close relative of the mgf of $X$. First way: One perhaps slightly painful but mechanical way to find the expectation of $(X-E(X))^3$ is to expand the cube. For simplicity write $\mu$ for $E(X)$. So we want $E(X^3-3\mu X^2+3\mu^2X-\mu^3)$. By the linearity of expectation, the mean of this expanded object is $$E(X^3)-3\mu E(X^2)+3\mu^2 E(X)-\mu^3.$$ Now all the missing bits can be picked up from the mgf of $X$. Second way: Let $Y=X-\mu$, where $\mu=E(X)$. Recall that the mgf of $Y$ is $E(e^{tY})$. This is $E(e^{t(X-\mu)})$, which is $e^{-t\mu} E(e^{tX})$. We have found that the moment generating function of $Y=X-\mu$ is $e^{-\mu t}$ times the moment generating function of $X$. Now for your two problems do this (i) Write down the mgf of $X$; (ii) Multiply by $e^{-\mu t}$. Now you have the mgf of $Y$. You can read off $E(Y^3)$ from the moment generating function of $Y$. For the normal, the answer you get should not come as a surprise, since $Y$ is normal with mean $0$ and variance $\sigma^2$.
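Both routes lead to the same third central moment, and the binomial result can be checked directly against the pmf (helper name mine). Note that $np(2p-1)(p-1)$ equals the usual textbook form $np(1-p)(1-2p)$:

```python
import math

def third_central_moment_binomial(n, p):
    # E[(X - np)^3] computed directly from the binomial pmf
    mu = n * p
    return sum(math.comb(n, k) * p**k * (1 - p)**(n - k) * (k - mu)**3
               for k in range(n + 1))

for n in (3, 7, 12):
    for p in (0.2, 0.5, 0.8):
        direct = third_central_moment_binomial(n, p)
        formula = n * p * (2*p - 1) * (p - 1)   # the expression from the question
        assert abs(direct - formula) < 1e-9

# On the question's trailing edit: E[(X - np)^3] itself grows like n, but the
# *standardized* third moment (the skewness) does vanish as n grows:
# (1 - 2p) / sqrt(n p (1 - p)) -> 0.
```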
{ "language": "en", "url": "https://math.stackexchange.com/questions/469875", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
I've solved this problem, but why is this differentiable? Let $\alpha:\mathbb R\rightarrow \mathbb R^3$ be a smooth curve (i.e., $\alpha \in C^\infty(\mathbb R)$). Suppose there exists $X_0$ such that every normal line to $\alpha$ passes through $X_0$. Show that $\alpha$ is part of a circumference. Part of solution: Let $n(s)$ be the normal vector to $\alpha$ at $s$. For every $s \in \mathbb R$ there exists a unique real number $\lambda$ such that $\alpha(s)+\lambda n(s)=X_0$. Now let $\lambda(s)$ be the function that associates this number to every real number $s$. I've finished this question, but I differentiated $\lambda(s)$ and I don't know why I was allowed to do that. Can someone tell me why $\lambda$ is differentiable?
You can write $$\lambda(s)=\langle X_0-\alpha(s), n(s)\rangle$$ supposing $\|n(s)\|=1$ (otherwise you divide by this norm). That expression is obtained with sums and products of smooth functions (at most divisions by non-vanishing smooth functions), hence it is smooth.
{ "language": "en", "url": "https://math.stackexchange.com/questions/469957", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Fundamental group of a complex algebraic curve residually finite? Is the analytic fundamental group of a smooth complex algebraic curve (considered as a Riemann surface) residually finite?
Yes. Recall that topologically such a surface is a $g$-holed torus minus $n$ points. Except in the cases $(g, n) = (1, 0), (0, 0), (0, 1), (0, 2)$ such a surface, call it $S$, has negative Euler characteristic, so by the uniformization theorem its universal cover is the upper half plane $\mathbb{H}$. Since the action of $\pi_1(S)$ on $\mathbb{H}$ by covering transformations is an action by biholomorphic maps, $\pi_1(S)$ embeds into $\text{PSL}_2(\mathbb{R})$. And any finitely generated subgroup of $\text{PSL}_2(\mathbb{R})$ is residually finite; the argument is nearly identical to the argument that any finitely generated linear group is residually finite. The exceptional cases are straightforward to verify individually.
{ "language": "en", "url": "https://math.stackexchange.com/questions/470050", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 1, "answer_id": 0 }
Let $X$ be such that $S=e^X$. You are given that $M_X(u)=e^{5u+2u^2}$ Suppose, for the stock market, the price of a certain stock $S$ has density function $f_S(s)=\frac{1}{ts\sqrt{2\pi}} e^{\frac{-1}{2}\left(\frac{\ln(s)-m}{t}\right)^2}$ where $s>0$ and $-\infty<m<\infty $ and $t>0$ are constants. Let $X$ be such that $S=e^X$. You are given that $M_X(u)=e^{5u+2u^2}$. Given that $S$ is greater than $50$, what is the probability that $S$ is between $70$ and $90$? I know that if $M_X(u)=e^{5u+2u^2}$ then $X$ has a normal distribution with parameters $\mu= 5 $ and $\sigma^2= 4 $, but now I don't know what I should do. Thanks for your help, have a nice day :)
We are told that $X\gt \ln(50)$, and want to find the probability that $\ln(70)\lt X\lt \ln(90)$. Let $A$ be the event $\ln(70)\lt X\lt \ln(90)$, and $B$ the event $X\gt \ln(50)$. We want $\Pr(A|B)$. This is $\dfrac{\Pr(A\cap B)}{\Pr(B)}$. Note that in our case we have $A\cap B=A$. So we need to find $\Pr(A)$ and $\Pr(B)$. These are standard normal distribution calculations.
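Carrying the standard normal calculation through numerically (names mine; note the event $A$ is contained in $B$, so $\Pr(A\cap B)=\Pr(A)$):

```python
import math

def Phi(z):  # standard normal CDF via the error function
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

mu, sigma = 5.0, 2.0        # X ~ N(5, 4), and S = e^X
def F(s):                   # P(S <= s) = Phi((ln s - mu) / sigma)
    return Phi((math.log(s) - mu) / sigma)

p_A = F(90) - F(70)         # P(70 < S < 90); this event implies S > 50
p_B = 1 - F(50)             # P(S > 50)
answer = p_A / p_B          # the conditional probability, roughly 0.068
assert 0.05 < answer < 0.09
```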
{ "language": "en", "url": "https://math.stackexchange.com/questions/470203", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Proof: for a pretty nasty limit Let $$ f(x) = \lim_{n\to\infty} \dfrac {[(1x)^2]+[(2x)^2]+\ldots+[(nx)^2]} {n^3}.$$ Prove that $f(x)$ is a continuous function. Edit: $[\,\cdot\,]$ is the greatest integer function.
In general, if $[z]$ is the integer part of $z$, $D(n) =\sum_{k=1}^n f(kx) -\sum_{k=1}^n [f(kx)] =\sum_{k=1}^n (f(kx)-[f(kx)]) $. Since $0 \le z-[z] < 1$, $0 \le D(n) < n$. For your case, $f(x) = x^2$, so $0 \le \sum_{k=1}^n (kx)^2 -\sum_{k=1}^n [(kx)^2] < n$ or $0 \le \dfrac1{n^3}\sum_{k=1}^n (kx)^2 -\dfrac1{n^3}\sum_{k=1}^n [(kx)^2] < \frac{n}{n^3} =\frac1{n^2} $. Therefore, since $\lim_{n \to \infty} \frac1{n^3}\sum_{k=1}^n (kx)^2 =\lim_{n \to \infty} \frac{x^2}{n^3}\sum_{k=1}^n k^2 =\lim_{n \to \infty} \frac{x^2}{n^3}\frac{n(n+1)(2n+1)}{6} =\frac{x^2}{3} $, $\lim_{n \to \infty} \frac1{n^3}\sum_{k=1}^n [(kx)^2] =\frac{x^2}{3} $.
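So the limit is $f(x)=x^2/3$, which is continuous, and the convergence is easy to observe numerically for large $n$ (function name mine):

```python
import math

def approx(x, n):
    # (1/n^3) * sum_{k=1}^{n} floor((k x)^2)
    return sum(math.floor((k * x) ** 2) for k in range(1, n + 1)) / n**3

for x in (1.0, math.sqrt(2), 2.7, -1.3):
    assert abs(approx(x, 100_000) - x * x / 3) < 1e-3
```

The deviation is of size roughly $x^2/(2n)$ plus the floor-error contribution of at most $1/n^2$, matching the squeeze estimate in the answer.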
{ "language": "en", "url": "https://math.stackexchange.com/questions/470247", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 2 }
Continuous bounded functions in $L^1$ Is a continuous function in $L^1$ bounded? I know that continuous functions are always bounded on a compact interval. But how do I prove it?
If we study $L^1(0,\infty)$ or $L^1(\Bbb R)$, then you can have an unbounded continuous integrable function. We can build it in the following way: let's start with $f= 0$. Then we add positive continuous "bumps" at $n=1,2$,etc with each "bump" being higher, say, of magnitude $n$, but adding only $1/n^2$ to the integral. Like this our function remains integrable, continuous, yet unbounded. One can explicitly build such a function, I'll outline the important details First, we take $$g(x)=\begin{cases} e^{-\frac{1}{1-x^2}},&|x|<1,\\0,&|x|\ge 1.\end{cases}$$ It's possible to show that this function is $\mathcal C^{\infty}(\Bbb R)$, its support is $[-1,1]$, it's positive, and its integral is finite (let's call it $I$). Its supremum is $e^{-1}$. Now let's study $$g_n(x):=ng\left( (x-n)n^3 \right).$$ It's still continuous, positive, its support is $[n-n^{-3}, n+n^{-3}]$. Its integral is $\frac{I}{n^2}$, and its supremum is $ne^{-1}$. We take the sum $$G(x):=\sum_{k\ge 3}g_k(x).$$ It's possible to show that $G$ is continuous, unbounded, positive, integrable.
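The scaling claims behind the construction ($\int g_n = I/n^2$ while $\sup g_n = n/e$) can be verified numerically; a sketch with a plain trapezoidal rule (all names mine):

```python
import math

def g(x):  # the smooth bump: support [-1, 1], maximum e^{-1} at x = 0
    return math.exp(-1.0 / (1 - x * x)) if abs(x) < 1 else 0.0

def g_n(x, n):  # bump centred at n: height n/e, support width 2/n^3
    return n * g((x - n) * n**3)

def integral(f, a, b, m=200_001):  # plain trapezoidal rule
    h = (b - a) / (m - 1)
    s = 0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, m - 1))
    return s * h

I = integral(g, -1, 1)                                    # about 0.444
I3 = integral(lambda x: g_n(x, 3), 3 - 3**-3, 3 + 3**-3)
assert abs(I3 - I / 9) < 1e-6                # integral of g_n is I / n^2
assert abs(g_n(3, 3) - 3 / math.e) < 1e-12   # while sup g_n = n / e keeps growing
```

Summing the bumps therefore gives a total integral bounded by $I\sum 1/n^2$ while the peaks $n/e$ run off to infinity, exactly as the answer describes.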
{ "language": "en", "url": "https://math.stackexchange.com/questions/470313", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
how many elements are there in this field $\mathbb Z_2[x]/\langle x^3+x^2+1\rangle $? I understand it is a field, as the ideal $\langle x^3+x^2+1\rangle $ is maximal because the polynomial is irreducible over $\mathbb Z_2$. But I want to know how many elements there are in this field and how to find that out.
You have $x^3=x^2+1$ so you can eliminate powers of $x$ greater than $2$ and every element of the field is represented by a polynomial of order less than $3$. Such polynomials have form $ax^2+bx+c$ and there are two choices in $\mathbb Z_2$ for each of $a,b,c$ so eight candidates. It remains to confirm that the eight elements are distinct (which is true because if two were equal we would get a factor of an irreducible polynomial). Just to clarify, if we set $p(x)=x^3+x^2+1$ and we have any polynomial $f(x)$ we can use the division algorithm to write $f(x)=p(x)q(x)+r(x)$ where the degree of $r(x)$ is less than the degree of $p(x)$ - so $r(x)$ is a representative of the same element of the field as $f(x)$ - they differ by a multiple of $p(x)$. And $r(x)$ has degree at most 2. Which is what we need. It is sometimes easier to compute $r(x)$ by using methods other than the division algorithm. An equivalent method is to treat $p(x)$ as if it is zero (since multiples of $p(x)$ count for nothing in the quotient field) (or simply $p(x)\equiv 0$ because it is in the same coset as zero). If we set $x^3+x^2+1\equiv 0$ and remember that twice anything is zero because our base field is $\mathbb Z_2$, we find that $x^3\equiv x^2+1$ - we can use this as an identity in the quotient field when we are doing explicit computations. The equivalence here is often written as an equality. This second insight is one which will become familiar. In the quotient field, because we can treat $p(x)$ as if it is zero, we find that $x$ behaves as if it is a root of $p(x)$. So factoring by the ideals generated by irreducible polynomials is a way of creating new fields in which those polynomials have roots.
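The whole structure fits in 3-bit integers, so it can be enumerated explicitly. A sketch representing each residue $ax^2+bx+c$ as the bitmask $abc$ and reducing with $x^3 \equiv x^2+1$ (names mine):

```python
# elements of GF(2)[x]/(x^3 + x^2 + 1) as 3-bit masks: bit i <-> x^i
MOD = 0b1101  # x^3 + x^2 + 1

def mul(a, b):
    # carry-less multiplication, then reduction mod x^3 + x^2 + 1
    r = 0
    for i in range(3):
        if (b >> i) & 1:
            r ^= a << i
    for i in (5, 4, 3):               # clear any bits of degree 3 and above
        if (r >> i) & 1:
            r ^= MOD << (i - 3)
    return r

elements = list(range(8))             # the 2^3 = 8 residues a x^2 + b x + c
assert len(set(elements)) == 8

# field check: every nonzero element has a multiplicative inverse
for a in range(1, 8):
    assert any(mul(a, b) == 1 for b in range(1, 8))

# the identity x^3 = x^2 + 1 in the quotient: x is 0b010, x^2 + 1 is 0b101
x = 0b010
assert mul(mul(x, x), x) == 0b101
```

So there are exactly $8 = 2^3$ elements, and the invertibility check confirms the quotient really is a field.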
{ "language": "en", "url": "https://math.stackexchange.com/questions/470379", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
A proof in vectors If it is given that: $$ \vec{R} + \dfrac{\vec{R}\cdot(\vec{B}\times(\vec{B}\times\vec{A}))} {|\vec{A} \times \vec{B} |^2}\vec{A} + \dfrac{\vec{R}\cdot(\vec{A}\times(\vec{A}\times\vec{B}))} {|\vec{A} \times \vec{B} |^2}\vec{B} = \dfrac{K(\vec{A}\times\vec{B})} {|\vec{A} \times \vec{B} |^2} $$ then prove that $$K= [ \vec{R} \vec{A} \vec{B} ]$$ I don't know how to even begin. Any ideas?
Let $\vec{r} = x \vec{a}+y \vec{b}+ z \left(\vec {a} \times \vec {b}\right)$. If we wish to compute, say $x$, we need to take the dot product with a vector that is perpendicular to $\vec{b}$ as well as $\left(\vec {a} \times \vec {b}\right)$, and that vector is $\vec{b} \times \left(\vec {a} \times \vec {b}\right)$. We will need that $\left[\vec {a} \ \ \vec {b} \ \left(\vec {a} \times \vec {b}\right)\right] = \left[\left(\vec {a} \times \vec {b}\right) \ \ \vec {a} \ \ \vec {b} \ \right] = \left|\vec{a} \times \vec{b}\right|^2$, and similarly $\left[\vec {b} \ \ \vec {a} \ \left(\vec {b} \times \vec {a}\right)\right] = \left|\vec{a} \times \vec{b}\right|^2$. Taking the dot product with $\vec{b} \times \left(\vec {a} \times \vec {b}\right)$, we get $\vec{r}\cdot \left(\vec{b} \times \left(\vec {a} \times \vec {b}\right)\right)=x \left|\vec{a} \times \vec{b}\right|^2$. In this way we have $\vec{r} = \dfrac{\left[\vec {r} \ \ \vec {b} \ \left(\vec {a} \times \vec {b}\right)\right] }{\left|\vec{a} \times \vec{b}\right|^2} \vec{a}+ \dfrac{\left[\vec {r} \ \ \vec {a} \ \left(\vec {b} \times \vec {a}\right)\right] }{\left|\vec{a} \times \vec{b}\right|^2} \vec{b}+\dfrac{\left[\vec{r} \ \ \vec{a} \ \ \vec{b} \right]}{\left|\vec{a} \times \vec{b}\right|^2} \left(\vec{a} \times \vec{b}\right)$. Moving the first two terms to the other side reproduces the coefficients in the given equation (note that $\vec{b} \times\left(\vec{b}\times\vec{a}\right) = -\,\vec{b} \times\left(\vec{a}\times\vec{b}\right)$ and $\vec{a} \times\left(\vec{a}\times\vec{b}\right) = -\,\vec{a} \times\left(\vec{b}\times\vec{a}\right)$, which is where the signs come from). It's now clear that $K = \left[\vec{r} \ \ \vec{a} \ \ \vec{b} \right]$
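As a sanity check, the identity as stated in the question, with $K=[\vec R\,\vec A\,\vec B]=\vec R\cdot(\vec A\times\vec B)$, can be verified componentwise on random vectors (pure-Python sketch, all names mine):

```python
import random

def cross(u, v):
    return (u[1]*v[2] - u[2]*v[1],
            u[2]*v[0] - u[0]*v[2],
            u[0]*v[1] - u[1]*v[0])

def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

rng = random.Random(1)
for _ in range(100):
    A, B, R = (tuple(rng.uniform(-1, 1) for _ in range(3)) for _ in range(3))
    AxB = cross(A, B)
    n2 = dot(AxB, AxB)                 # |A x B|^2
    if n2 < 1e-3:
        continue                       # skip nearly parallel A, B
    cA = dot(R, cross(B, cross(B, A))) / n2
    cB = dot(R, cross(A, cross(A, B))) / n2
    K = dot(R, AxB)                    # the scalar triple product [R A B]
    for i in range(3):
        assert abs(R[i] + cA * A[i] + cB * B[i] - K * AxB[i] / n2) < 1e-9
```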
{ "language": "en", "url": "https://math.stackexchange.com/questions/470436", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Does a graph contain a 3-cycle or a 4-cycle Given a graph $\mathscr G$ that has 100 nodes, each with degree $\ge 70$, can you show that this graph contains a 3-cycle and/or a 4-cycle? The graph in question represents 100 people at an event, and they each know $\ge 70$ people. I need to prove that there are 3 people there who know each other. So I translated it to finding a 3-cycle in the graph. I've approached it this way: 3 people know 70 people minimum. So any 3 nodes $n \in \mathscr G$ have $\sum \deg(n) \ge 3\cdot 70$. This means that these 3 people have at least $(3\cdot 70) - 100 = 110$ neighbours in common, counted with multiplicity. These nodes can be friends with 2 or 3 of the persons. This is where I'm not so sure anymore: suppose each node has only been counted twice. This means that 2 of the 3 persons should have $55$ nodes in common, which leaves out the 3rd person, who knows $70$ other people. If two people have $55$ nodes in common, they each have a minimum of $15$ unique nodes. This makes a total of $85$ nodes for the both of them. So, $100 - 85 = 15$. This means that there are only 15 people who are still left unconnected. So, the third person should be connected to at least 55 other nodes too. Worst case, 30 of these nodes are the unique nodes from persons 1 and 2. So at least 25 nodes are connected to persons 1, 2 and 3, which gives us a 3-cycle. Is this a valid proof?
Divide the group in half. The fifty people in one half must each know at least 20 people in their half. The maximal girth 5 cage graph for 50 vertices is the Hoffman-Singleton graph, with degree 7. If each person knows more than 57 people, a 3 or 4 cycle is also forced.
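The degree-versus-girth tension the answer appeals to is the Moore bound: a graph of minimum degree $d$ and girth $5$ needs at least $d^2+1$ vertices, so minimum degree $20$ on $50$ vertices forces a 3- or 4-cycle. The extremal case for $d=3$ is the Petersen graph, which is small enough to check directly (a sketch, all names mine):

```python
from itertools import combinations

# Petersen graph: vertices are 2-subsets of {0..4}, edges join disjoint subsets
V = [frozenset(c) for c in combinations(range(5), 2)]
adj = {v: {w for w in V if not (v & w)} for v in V}

assert len(V) == 10
assert all(len(adj[v]) == 3 for v in V)       # 3-regular

def girth():
    best = float("inf")
    for s in V:                               # BFS from every vertex
        dist, parent = {s: 0}, {s: None}
        queue = [s]
        for v in queue:
            for w in adj[v]:
                if w not in dist:
                    dist[w], parent[w] = dist[v] + 1, v
                    queue.append(w)
                elif parent[v] != w:          # non-tree edge closes a cycle
                    best = min(best, dist[v] + dist[w] + 1)
    return best

assert girth() == 5   # attains the Moore bound n >= d^2 + 1 with n = 10, d = 3
```

With $n=50$ and $d=20$ the bound $d^2+1=401$ is far above $50$, so no triangle-free, 4-cycle-free graph of that degree exists on a half of the party.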
{ "language": "en", "url": "https://math.stackexchange.com/questions/470509", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
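The degree condition in the question already forces every edge into a triangle: two adjacent vertices of degree $\ge 70$ among $100$ vertices share at least $70+70-100=40$ neighbours. Here is that bound checked on a hypothetical example graph, the complete 4-partite graph $K_{25,25,25,25}$, which satisfies the degree condition:

```python
n = 100
part = lambda v: v // 25                      # four parts of 25 vertices each
adj = lambda u, v: u != v and part(u) != part(v)

deg = [sum(adj(u, v) for v in range(n)) for u in range(n)]
assert min(deg) >= 70                         # every degree is 75 here

u, v = 0, 25                                  # any edge (endpoints in different parts)
assert adj(u, v)
common = sum(adj(u, w) and adj(v, w) for w in range(n))
assert common >= deg[u] + deg[v] - n          # inclusion-exclusion bound: at least 40
assert common > 0                             # u, v and a common neighbour form a 3-cycle
```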
Are there any integer solutions to $2^x+1=3^y$ for $y>2$? For what values of $x$ and $y$ does the equality $2^x+1=3^y$ hold? It is quite obvious the equality holds for $x=1,y=1$ and $x=3,y=2$. But beyond that I cannot see why $x$ and $y$ cannot take larger values than these.
I am assuming that $x$ and $y$ are supposed to be integers. The claim follows from the fact that if $x>3$, then the order of the residue class of $3$ in the group $\mathbb{Z}_{2^x}^*$ (sometimes denoted by $U_{2^x}$) is $2^{x-2}$. In other words: for $3^y-1$ to be divisible by $2^x$ the exponent $y$ has to be a multiple of $2^{x-2}$. There are several proofs for this fact in this site. But when $y\ge 2^{x-2}$, then it shouldn't be too hard to see that $$3^y\ge3^{2^{x-2}}>2^x.$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/470568", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 4, "answer_id": 1 }
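Both ingredients of the answer, the order of $3$ modulo $2^x$ and the claim that $(1,1)$ and $(3,2)$ are the only solutions, are easy to spot-check by brute force (the search ranges below are arbitrary):

```python
def order_mod(a, m):
    # multiplicative order of a modulo m (assumes gcd(a, m) = 1)
    k, p = 1, a % m
    while p != 1:
        p = p * a % m
        k += 1
    return k

# the order of 3 in Z_{2^x}^* is 2^(x-2) for x >= 3
for x in range(3, 12):
    assert order_mod(3, 2**x) == 2**(x - 2)

# brute-force search for solutions of 2^x + 1 = 3^y
sols = [(x, y) for x in range(1, 40) for y in range(1, 25) if 2**x + 1 == 3**y]
assert sols == [(1, 1), (3, 2)]
```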
Counting votes, as long as one has more votes all the way through. * *Two competitors won $n$ votes each. How many ways are there to count the $2n$ votes, in a way that one competitor is always ahead of the other? *One competitor won $a$ votes, and the other won $b$ votes. $a>b$. How many ways are there to count the votes, in a way that the first competitor is always ahead of the other? (They can have the same amount of votes along the way) I know that the first question is the same as the number of different legal strings built of brackets, which is equal to the Catalan number; $\frac{1}{n+1}{2n\choose n}$, by the proof with the grid. I am unsure about how to go about solving the second problem.
We can use a similar idea to the solution to #1: The total number of ways the votes can be counted is $a+b \choose a$, so we have to subtract the number of ways competitor B can get ahead of competitor A: In any count where B gets ahead of A, change all the votes up to and including the first vote where B takes the lead; this will give a count where A gets $a+1$ votes and B gets $b-1$ votes. Conversely, in any count where A gets $a+1$ votes and B gets $b-1$ votes, changing all the votes up to and including the first vote where A gets the lead gives a vote sequence of the first type. Thus the answer is $\binom{a+b}{a}-\binom{a+b}{a+1}$. After referring to the link provided by Brian Scott, I realized that I am answering a slightly different question, which is the number of ways the votes can be counted so that A never gets behind B in the count, not the number of ways that A always remains ahead of B. (In #1, the answer is $C_n$ to this question, not to the question in which A always remains ahead of B.) For the version of the problem as stated, use the fact that A must get the first vote and then apply the same argument to the remaining $a+b-1$ votes.
{ "language": "en", "url": "https://math.stackexchange.com/questions/470617", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
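A brute-force check of the final count, in the "A never falls behind B" version that the answer settles on (small $a$, $b$ only, since it enumerates permutations):

```python
from itertools import permutations
from math import comb

def never_behind(a, b):
    # count distinct orderings of a A-votes and b B-votes in which
    # A's running total never drops below B's
    seqs = set(permutations('A' * a + 'B' * b))
    good = 0
    for s in seqs:
        lead, ok = 0, True
        for vote in s:
            lead += 1 if vote == 'A' else -1
            if lead < 0:
                ok = False
                break
        good += ok
    return good

for a in range(1, 5):
    for b in range(0, a + 1):
        assert never_behind(a, b) == comb(a + b, a) - comb(a + b, a + 1)
```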
Calculate the determinant of a matrix multiplied by itself confirmation If $\det B = 4$, is $\det(B^{10}) = 4^{10}$? Does that also mean that $\det(B^{-2}) = \frac{1}{\det(B)^2}$? Or do I have this completely wrong?
$det(B^2)$ means $det(B \cdot B)$: first you multiply the two matrices, which gives a matrix, and then you find the determinant of that matrix. Similarly, $det(B^{10})$ means you first multiply 10 copies of $B$, obtaining a matrix, and then take its determinant. Since the determinant is multiplicative, $det(B^{10}) = det(B)^{10} = 4^{10}$. In the case of a negative power, $B^{-m}$ (where $m$ is a positive integer) is defined via $B^{-1}$, namely as $(B^{-1})^m$; hence $B^{-m}$ is defined whenever $B^{-1}$ exists. Here $det(B) = 4 \neq 0$, so $B$ is not singular and $B^{-1}$ exists. So the expression makes sense, and indeed $det(B^{-2}) = det(B^{-1})^2 = \frac{1}{det(B)^2}$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/470676", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
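A small sanity check of both determinant identities, using exact rational arithmetic on a hand-picked $2\times 2$ matrix with determinant $4$:

```python
from fractions import Fraction

def mat_mul(A, B):
    return [[A[0][0]*B[0][0] + A[0][1]*B[1][0], A[0][0]*B[0][1] + A[0][1]*B[1][1]],
            [A[1][0]*B[0][0] + A[1][1]*B[1][0], A[1][0]*B[0][1] + A[1][1]*B[1][1]]]

def det(A):
    return A[0][0]*A[1][1] - A[0][1]*A[1][0]

def mat_pow(A, n):
    R = [[Fraction(1), Fraction(0)], [Fraction(0), Fraction(1)]]
    for _ in range(n):
        R = mat_mul(R, A)
    return R

def inverse(A):
    d = det(A)
    return [[A[1][1]/d, -A[0][1]/d], [-A[1][0]/d, A[0][0]/d]]

B = [[Fraction(2), Fraction(1)], [Fraction(0), Fraction(2)]]   # det B = 4
assert det(B) == 4
assert det(mat_pow(B, 10)) == 4**10                # det(B^10) = det(B)^10
assert det(mat_pow(inverse(B), 2)) == Fraction(1, 4**2)   # det(B^-2) = 1/det(B)^2
```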
Generalised composition factors Let $A$ be a semiprimary ring. A simple module $L$ is said to be a generalised composition factor of $M$ if there are $M'$ and $M''$, $M'' \subset M'$, submodules of $M$, such that $M'/M'' \cong L$. Suppose $L$ is a generalised composition factor of $M$. Is it possible to have a submodule $N$ of $M$ such that $L$ is neither a generalised composition factor of $N$ nor a generalised composition factor of $M/N$?
I feel very foolish for just noticing this now... The answer to the question is no. Let $P$ be the projective cover of the simple module $L$ (and let $Q$ be its injective hull). It is easy to see that $L$ is a generalised composition factor of a module $M$ if and only if $\operatorname{Hom}_A(P,M)\neq 0$ (if and only if $\operatorname{Hom}_A(M,Q)\neq 0$). Consider the exact sequence $$0 \longrightarrow N \longrightarrow M \longrightarrow M/N \longrightarrow 0.$$ Because $\operatorname{Hom}_A(P,-)$ is exact (in fact we only need the functor to be half exact) then $\operatorname{Hom}_A(P,M)\neq 0$ if and only if $\operatorname{Hom}_A(P,N)\neq 0$ or $\operatorname{Hom}_A(P,M/N) \neq 0$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/470763", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Finite set of congruences Is it true that for every $c$ there is a finite set of congruences $a_i \pmod{n_i}$, with $$c = n_1<n_2<n_3<\dots<n_k, \tag{1}$$ so that every integer satisfies at least one of the congruences in (1)?
You are referring to Covering Systems of congruences. The link, and the name, will let you explore what is a quite large literature. You may also want to look at this survey by Carl Pomerance. If the $n_i$ are strictly increasing, they cannot be chosen arbitrarily.
{ "language": "en", "url": "https://math.stackexchange.com/questions/470845", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
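As an illustration, the classic covering system usually quoted in this literature, namely $0 \pmod 2$, $0 \pmod 3$, $1 \pmod 4$, $5 \pmod 6$, $7 \pmod{12}$, can be verified mechanically: since all the moduli divide $12$, it suffices to check one full period:

```python
cover = [(0, 2), (0, 3), (1, 4), (5, 6), (7, 12)]

def covered(n):
    return any(n % m == a for a, m in cover)

# every residue class mod lcm of the moduli (= 12) is hit,
# so every integer satisfies at least one congruence
assert all(covered(n) for n in range(12))

# the moduli are distinct and strictly increasing, as in (1)
moduli = [m for _, m in cover]
assert moduli == sorted(moduli) and len(set(moduli)) == len(moduli)
```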
Show that the Euler characteristic of $O(3)$ is zero. Show that the Euler characteristic of $O(3)$ is zero. Consider a non-zero vector $v$ in the tangent space at the identity matrix. Denote the corresponding matrix multiplication by $\phi_A$. Define the vector field $F$ by $F(A)=(\phi_A)_*(v)$, where $\phi_*$ is the derivative of $\phi$, and $v$ is a tangent vector at the identity with a fixed direction. So $$A = \begin{pmatrix} \cos (\pi/2) & -\sin (\pi/2) & 0 \\ \sin(\pi/2) & \cos(\pi/2) & 0\\ 0&0&1 \end{pmatrix}$$ is homotopic to the identity map. Then how shall I proceed? Thank you~
Are you familiar with the theory of Lie Groups? You can just take any non-zero vector at the identity and translate it everywhere, generating a non-vanishing smooth vector field on $O(3)$. From here it's easy with Poincare-Hopf.
{ "language": "en", "url": "https://math.stackexchange.com/questions/470903", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Limit of $x \log x$ as $x$ tends to $0^+$ Why is the limit of $x \log x$ as $x$ tends to $0^+$, $0$? * *The limit of $x$ as $x$ tends to $0$ is $0$. *The limit of $\log x$ as $x$ tends to $0^+$ is $-\infty$. *The limit of products is the product of each limit, provided each limit exists. *Therefore, the limit of $x \log x$ as $x$ tends to $0^+$ should be $0 \times (-\infty)$, which is undefined and not $0$.
By $x=e^{-y}$ with $y \to \infty$ we have $$x \log x =-\frac y {e^y} \to 0$$ which can be easily proved by the definition of $e^y$ or by induction and extended to reals.
{ "language": "en", "url": "https://math.stackexchange.com/questions/470952", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "45", "answer_count": 5, "answer_id": 4 }
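A numerical illustration of the limit (sample points only, of course, not a proof):

```python
import math

vals = [x * math.log(x) for x in (10.0**-k for k in range(1, 13))]

# x*log(x) is negative on (0, 1), but its magnitude shrinks to 0
assert all(v < 0 for v in vals)
assert all(abs(vals[i + 1]) < abs(vals[i]) for i in range(len(vals) - 1))
assert abs(vals[-1]) < 1e-10
```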
Solving Riccati-like matrix inequality How can I find $P_2$ in the following inequality? $$S_{ai} = Q + G P_2 G'- P_2 - \bar{\gamma}^2 G P_2 C' R^{-1} C P_2 G' \prec 0$$ where $R = \beta C P_2 C'+ V$, $\beta=0.9$, $\bar{\gamma} = 0.9$, and $$G = \begin{pmatrix} 0.5437 & 0.0768 \\ 0.2040 & -1.1470 \end{pmatrix}, \quad C = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}, \quad Q = \begin{pmatrix} 0.1363 & 0.1527 \\ 0.1527 & 2.9000 \end{pmatrix}.$$ I think it is not directly solvable as an LMI. Please help me if you see any way of solving this inequality.
$$ \begin{align} Q + G P_2 G'- P_2 - \bar{\gamma}^2 G P_2 C' R^{-1} C P_2 G' &\prec 0 \\[5mm]&\Updownarrow \textrm{if }R\prec 0\\[5mm] \begin{pmatrix}Q + G P_2 G'- P_2 &\bar{\gamma} G P_2 \\ \bar{\gamma}P_2 G'& \beta(C'C)P_2(C'C)\end{pmatrix}&\prec 0 \end{align} $$
{ "language": "en", "url": "https://math.stackexchange.com/questions/471040", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Can I decompose a compact set into a finite number of convex sets? My problem is in a finite-dimensional space. I look at $\mathcal{X}$, the support of a function $f$ that is continuous and has bounded support. \begin{eqnarray} \mathcal{X}_o & = & \{x \in \Omega, f(x)>0 \} \\ \mathcal{X} & = & \bar{\mathcal{X}}_o \\ \lVert x \rVert < M& & \; \forall x\in \mathcal{X} \end{eqnarray} I already performed many tricks using the Borel-Lebesgue property (given a union of open sets covering a compact set, $\mathcal{X} \subset \cup_i B_i$, we can cover $\mathcal{X}$ with a finite number of open sets $\mathcal{X} \subset \cup_{i \in J} B_i$ with $|J|$ finite) [or the Heine–Borel theorem]. For one point, I cover $\mathcal{X}$ by convex open sets $B_i$ (open balls). I would need $B_i \cap \mathcal{X}$ to be convex for all $i$. Can we prove the existence of such a decomposition? PS: it seems provable to me. It seems that such a cover can be built with balls $B_i$ small enough. Or, if I can decompose $\mathcal{X}$ into a finite number of compact convex sets $\cup_{i=1}^m C_i = \mathcal{X}$, then I would just do the Borel-Lebesgue trick on each of those $C_i$. Otherwise, if somebody has a counterexample of a compact set that cannot be covered by a finite number of convex sets, it would solve the problem. Thank you very much Arnaud
As Daniel Fischer said (in a comment, unfortunately), the answer is negative. One counterexample is a round annulus $1\le |x|\le 2$ in dimension $2$ (or higher). Any convex subset of that can meet at most one point of the inner bounding circle.
{ "language": "en", "url": "https://math.stackexchange.com/questions/471204", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Defining $W^{k,p}(M)$ for non-integers $k$ and $p$ and manifold $M$ For $k$ and $p$ not necessarily integer, and on a smooth manifold $M$, how to define the Sobolev space $W^{k,p}(M)$? I've only seen definitions for $p=2$.
For an integer $k$, one defines $W^{k,p}$ to be (roughly speaking) the space of $L^p$ functions whose $k$-th derivative is $L^p$. More precisely, the Fourier transform takes a degree-$k$ differential operator to a degree-$k$ polynomial, so we can use weak derivatives to reconceptualize $W^{k,p}$ (for integer $k$ still) as the space of $L^P$ functions whose Fourier transforms, when multiplied by degree-$k$ polynomials, transform back into $L^p$. That is, letting $F$ denote the Fourier transform, $u\in W^{k,p}$ if $F^{-1}q(\xi)Fu\in L^p$ where $q(\xi) = (1 + |\xi|^2)^{k/2}$ is a degree-$k$ polynomial in $\xi$. No law says that in this definition, $k$ has to be an integer, so for $k$ non-integer, define $$ W^{k,p} = \{ u\in L^p\ |\ F^{-1}(1+|\xi|^2)^{k/2}Fu \in L^p\}. $$ As a mnemonic, the "W" in "Sobolev space" stands for "weak derivative." (I don't know if it actually means that, but it's how I'm remembering it.) Weak derivatives don't need to have integer order, so this lets us extend Sobolev spaces to non-integer order. For a general manifold, one can use coordinate charts and a partition of unity to patch together a definition for $W^{k,p}(M)$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/471270", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
equivalence between a problem and its variational formulation let $\Omega$ be an open, bounded and regular set in $\mathbb{R}^n$, and let $\overline{\Omega}_1$ and $\overline{\Omega}_2$ be a partition of $\Omega$: $\Omega = \overline{\Omega_1} \cup \overline{\Omega_2}.$ We put $\Gamma=\partial \Omega_1 \cap \partial \Omega_2$, the interface between $\Omega_1$ and $\Omega_2$ ($\Gamma \subset \Omega$), and we put $u_i=u|_{\Omega_i}$. Consider the problem: $$ \begin{cases} &-k_i\Delta u_i=f,x\in \Omega_i, i=1,2\\ &u_i=0, x\in \partial \Omega\\ & u_1=u_2, x \in \Gamma\\ &k_1 \nabla u_1 \cdot n = k_2 \nabla u_2 \cdot n, x \in \Gamma \end{cases} $$ where $k$ is the piecewise constant function with $k(x)=k_i>0$ on $\Omega_i$, $i=1,2$, and $f\in L^2(\Omega).$ I found that the variational formulation of this problem is: find $u\in H^1_0(\Omega)$ such that $$\int_{\Omega} k\nabla u \cdot \nabla v dx = \int_{\Omega} fvdx,\quad \forall v \in H^1_0(\Omega)$$ My question is: how do we prove that the solution of this variational formulation is a solution of the original problem?
Once you are sure your weak formulation is right, suppose your weak solution is regular enough that you can integrate by parts, then do it and choose your test functions $v$ in spaces of smooth functions. You'll probably need to do this several times, once for each boundary condition. For example you may first try with functions such that $supp (v) \subset \Gamma$. See what cancels out in the integral equation and you'll get some equality of integrals for every $v$ in this space. Because of the choice of space for the test functions, this will imply one of the boundary conditions. And so on. Note that you might have to do some manipulations using what you just found out in previous steps.
{ "language": "en", "url": "https://math.stackexchange.com/questions/471332", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Probability defective items Two shipments of parts are received. The first shipment contains 1000 parts with 10% defective and the second contains 2000 parts with 5% defective. Two parts are selected from a shipment selected at random and they are found to be good. Find the probability that the tested parts were from the first shipment.
Let $A$ be the event the parts were selected from the first shipment, and let $G$ be the event they are both good. We want the probability they are from the first shipment, given they are both good. So we want $\Pr(A|G)$. By the definition of conditional probability, we have $$\Pr(A|G)=\frac{\Pr(A\cap G)}{\Pr(G)}.\tag{1}$$ We find the two probabilities on the right-hand side of (1). The event $G$ can happen in two disjoint ways: (i) we select from the first shipment, and both items are good or (ii) we select from the second shipment, and both items are good. We find the probability of (i). The probability we choose from the first shipment is $\frac{1}{2}$. Given that we selected from the first shipment, the probability they were both good is $\frac{900}{1000}\cdot \frac{899}{999}$. Thus the probability of (i) is $\dfrac{1}{2}\cdot\dfrac{900}{1000}\cdot\dfrac{899}{999}$. Similarly, find the probability of (ii). Add to get $\Pr(G)$. Note that the numerator in Formula (1) is just what we called the probability of (i). Remark: We interpreted "$10\%$ bad" literally, as in exactly $10$ percent, that is, exactly $100$ bad items in the group of $1000$. However, another reasonable interpretation is that the first shipment comes from a supplier who has a $10\%$ bad rate. Then we would replace $\frac{900}{1000}\cdot \frac{899}{999}$ by $\left(\frac{90}{100}\right)^2$. Numerically, it makes no practical difference.
{ "language": "en", "url": "https://math.stackexchange.com/questions/471403", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
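Putting numbers to the computation above, under the literal interpretation (exactly $100$ defective parts in each shipment), with exact rational arithmetic:

```python
from fractions import Fraction as F

# P(choose shipment i) = 1/2; two parts drawn without replacement
p1_good = F(900, 1000) * F(899, 999)      # both good | shipment 1
p2_good = F(1900, 2000) * F(1899, 1999)   # both good | shipment 2

p_good = F(1, 2) * p1_good + F(1, 2) * p2_good
p_first_given_good = (F(1, 2) * p1_good) / p_good
print(float(p_first_given_good))          # about 0.473
```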
Property of cyclic quadrilaterals proof! http://en.wikipedia.org/wiki/Cyclic_quadrilateral This article states that: "Another necessary and sufficient condition for a convex quadrilateral ABCD to be cyclic is that an angle between a side and a diagonal is equal to the angle between the opposite side and the other diagonal.[3] That is, for example, $$\angle ACB = \angle ADB$$" I haven't found a proof for this online; can somebody prove this or point me to a proof? Also, does this mean that if you draw in the diagonals, then of the 4 triangles you create, the opposite ones are similar?
The proof is based on the following theorem: The angles subtended by a chord at the circumference of a circle are equal, if the angles are on the same side of the chord; conversely, if two points on the same side of a chord subtend equal angles over it, then the four points are concyclic. If $ABCD$ is cyclic, then $C$ and $D$ lie on the same side of the chord $AB$, hence $$\angle{ACB} = \angle{ADB}.$$ Conversely, if $\angle{ACB} = \angle{ADB}$, then by the converse statement $D$ lies on the arc through $C$ over the chord $AB$, that is, on the circle through $A$, $B$ and $C$, so the quadrilateral is cyclic. Hence this is a necessary and sufficient condition for the quadrilateral to be cyclic. (Once the quadrilateral is known to be cyclic, the opposite angles also satisfy $\angle{ABC} + \angle{ADC} = 180^\circ$, the well-known characterization, since $\angle{ABC}$ and $\angle{ADC}$ subtend the two complementary arcs of the chord $AC$.)
{ "language": "en", "url": "https://math.stackexchange.com/questions/471471", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 2 }
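A numerical check of both facts on a concrete cyclic quadrilateral; the four points below are an arbitrary choice on the unit circle, taken in cyclic order:

```python
import math

def angle_at(P, Q, R):
    # angle QPR at vertex P between rays P->Q and P->R, in degrees
    v1 = (Q[0] - P[0], Q[1] - P[1])
    v2 = (R[0] - P[0], R[1] - P[1])
    c = (v1[0]*v2[0] + v1[1]*v2[1]) / (math.hypot(*v1) * math.hypot(*v2))
    return math.degrees(math.acos(max(-1.0, min(1.0, c))))

pt = lambda deg: (math.cos(math.radians(deg)), math.sin(math.radians(deg)))
A, B, C, D = pt(0), pt(80), pt(160), pt(250)   # cyclic order on the unit circle

acb = angle_at(C, A, B)            # angle ACB
adb = angle_at(D, A, B)            # angle ADB
assert abs(acb - adb) < 1e-9       # equal inscribed angles on chord AB
assert abs(acb - 40.0) < 1e-9      # half the 80-degree central angle

# opposite angles of the cyclic quadrilateral sum to 180 degrees
dab = angle_at(A, D, B)
bcd = angle_at(C, B, D)
assert abs(dab + bcd - 180.0) < 1e-9
```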
Prove AB is hermitian if A is hermitian and B is hermitian If $A$ and $B$ are two hermitian transformations, prove that $AB$ is hermitian if $AB = BA$, knowing that a hermitian transformation is one such that $(T(f), g) = (f, T(g))$ and basic axioms for inner products: $(x,y) = (y,x)$, $(x,x) > 0$, $(cx, y) = c(x,y)$, and $(x,y+z) = (x,y) + (x,z)$.
$$\langle ABf, g \rangle = \langle Bf, Ag \rangle = \langle f, BAg \rangle = \langle f, ABg \rangle.$$ Here * *I "=" $\longleftarrow$ $A$ is Hermitian, *II "=" $\longleftarrow$ $B$ is Hermitian, *III "=" $\longleftarrow$ $AB = BA$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/471546", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
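A concrete check with $2\times2$ complex matrices; $B$ is taken to be a polynomial in $A$ purely so that the hypothesis $AB=BA$ holds automatically:

```python
def mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def add(A, B):
    return [[x + y for x, y in zip(r, s)] for r, s in zip(A, B)]

def conj_T(A):
    n = len(A)
    return [[A[j][i].conjugate() for j in range(n)] for i in range(n)]

A = [[2 + 0j, 1 - 1j],
     [1 + 1j, 3 + 0j]]              # Hermitian
B = add(mul(A, A), A)               # A^2 + A: Hermitian, and commutes with A

assert conj_T(A) == A and conj_T(B) == B
assert mul(A, B) == mul(B, A)       # AB = BA
assert conj_T(mul(A, B)) == mul(A, B)   # so AB is Hermitian
```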
Archimedean property concept I want to know what the "big deal" about the Archimedean property is. Abbott states it is an important fact about how $\Bbb Q$ fits inside $\Bbb R.$ First, I want to know if the following statements are true: The Archimedean property states that $\Bbb N$ isn't bounded above--some natural number can be found such that it is greater than some specified real number. The Archimedean property also states that there is some rational $\frac1n, n \in\Bbb N$ such that it is less than some specified real number. Secondly, what do the above statements imply about the connection between $\Bbb Q$ and $\Bbb R?$ Does it imply that $\Bbb R$ fills the gaps of $\Bbb Q$ and $\Bbb N;$ is it the proof for $\Bbb R$ completing $\Bbb Q?$ Lastly, what insights have you obtained from the Archimedean property? I am sorry if some of my questions are unclear.
Your statements, strictly interpreted, are not true. You need to change the order of quantifiers. You say "The Archimedean property states that $\Bbb{N}$ isn't bounded above--some natural number can be found such that it is greater than any real number." There is no natural number "such that it is greater than any real number." Rather, for any positive real number $x$, there is a natural number $n$ such that $n > x$. Similarly, you say "The Archimedean property also states that there is some rational $1/n,n∈ℕ$ such that it is less than any real number." Rather, for any positive real number $x$, there is a positive integer $n$ such that $1/n < x$. Be careful out there.
{ "language": "en", "url": "https://math.stackexchange.com/questions/471615", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "13", "answer_count": 6, "answer_id": 2 }
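The corrected quantifier order can be made concrete: given $x$ first, a witness $n$ can then be produced, here simply $\lfloor x\rfloor + 1$:

```python
import math

def n_above(x):
    # for a given positive real x, produce a natural number n > x
    return math.floor(x) + 1

for x in (0.3, 1.0, 2.5, 123456.789, 1e6 + 0.5):
    n = n_above(x)
    assert n > x
    # and 1/m < x for a suitable m: take any m > 1/x
    m = n_above(1 / x)
    assert 1 / m < x
```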
Probability of being up in roulette A player bets $\$1$ on a single number in a standard US roulette, that is, 38 possible numbers ($\frac{1}{38}$ chance of a win each game). A win pays 35 times the stake plus the stake returned, otherwise the stake is lost. So, the expected loss per game is $\left(\frac{1}{38}\right)(35) + \left(37/38\right)(-1) = -\frac{2}{38}$ dollars, and in 36 games $36\left(-\frac{2}{38}\right) = -1.89$ dollars. But, the player is up within 35 games if he wins a single game, thus the probability of being up in 35 games is $1 - \left(\frac{37}{38}\right)^{35} = 0.607$. And even in 26 games, the probability of being up is still slightly greater than half. This is perhaps surprising as it seems to suggest you can win at roulette if you play often enough. I'm assuming that this result is offset by a very high variance, but wouldn't that also imply you could win big by winning multiple times? Can someone with a better statistics brain shed some light onto this problem, and extend my analysis? Thanks.
This is perhaps surprising as it seems to suggest you can win at roulette if you play often enough. The optimal way of "winning" at roulette is bold play: bet your entire fortune or the amount needed to reach the sum you want to end up with, whichever is less. However, in super-fair games, when $p > 1/2$, timid play is optimal.
{ "language": "en", "url": "https://math.stackexchange.com/questions/471659", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 2 }
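The figures in the question are easy to reproduce, and putting them next to the (negative) expectation makes the variance point concrete:

```python
p = 1 / 38                          # single-number win probability
ev_per_bet = p * 35 + (1 - p) * (-1)
assert abs(ev_per_bet - (-2 / 38)) < 1e-12

p_up_35 = 1 - (37 / 38) ** 35       # up within 35 games iff at least one win
assert abs(p_up_35 - 0.607) < 0.001

p_up_26 = 1 - (37 / 38) ** 26
assert p_up_26 > 0.5                # still (barely) better than even

# yet the expected profit over 35 bets is negative regardless
assert 35 * ev_per_bet < 0
```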
When colimit of subobjects is still a subobject? What are the conditions on a category (or on a certain object) that will guarantee that the colimit of a family of subobjects of a given object is a subobject of the same object? Update: To clarify the question - let $C$ be a category with arbitrary colimits. Consider a family $\mathcal{I}=\{X_i\to X\}$ of subobjects of $X$, such that $\mathcal{I}$ is a semilattice w.r.t inclusion relation of subobjects. Then one can take the colimit $lim_{\to \mathcal{I}}(X_i)$ in $C$. What are the conditions on the category $C$ that guarantee that the canonical map $lim_{\to \mathcal{I}}(X_i)\to X$ is a monomorphism, i.e. $lim_{\to \mathcal{I}}(X_i)$ is a subobject of X?
I assume that your question is: If $\{X_i \to X\}$ is a diagram of subobjects of $X$, and $\mathrm{colim}_i X_i$ is a colimit in the ambient category, when is the induced morphism $\mathrm{colim}_i X_i \to X$ again a monomorphism and therefore exhibits the colimit as a subobject of $X$? Well without restrictions, of course this fails terribly. Consider any of your favorite concrete algebraic or topological categories and look at discrete colimits, i.e. coproducts. It also fails for coequalizers. But many categories enjoy the following property: A directed colimit of monomorphisms is a monomorphism. Notice that for abelian categories this is part of Grothendieck's axiom AB5. If this property is satisfiesd, and the diagram $\{X_i\}$ is directed, then of course $\mathrm{colim}_i X_i \to \mathrm{colim}_i X = X$ is a monomorphism. For example, you can consider directed colimits of subrings of a ring, of subfields of a field, of subspaces of a space, etc.
{ "language": "en", "url": "https://math.stackexchange.com/questions/471809", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 1, "answer_id": 0 }
Prove that the sequence $(3n^2+4)/(2n^2+5)$ converges to $3/2$ Prove directly from the definition that the sequence $\left( \dfrac{3n^2+4}{2n^2+5} \right)$ converges to $\dfrac{3}{2}$. I know that the definition of the limit of a sequence requires that for every $\varepsilon > 0$ there is an $N$ such that $|a_n - L| < \varepsilon$ for all $n > N$. However I do not know how to prove this using the definition. Any help is kindly appreciated.
Hints: for an arbitrary $\,\epsilon>0\;$ : $$\left|\frac{3n^2+4}{2n^2+5}-\frac32\right|=\left|\frac{-7}{2(2n^2+5)}\right|<\epsilon\iff2n^2+5>\frac7{2\epsilon}\iff$$ $$(**)\;\;2n^2>\frac7{2\epsilon}-5\ldots$$ Be sure you can prove you can choose some $\,M\in\Bbb N\;$ s.t. for all $\,n>M\;$ the inequality (**) is true.
{ "language": "en", "url": "https://math.stackexchange.com/questions/471896", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 5, "answer_id": 2 }
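The hint turns directly into a recipe for choosing $M$: given $\epsilon$, any integer $M$ with $2M^2 > \frac{7}{2\epsilon}-5$ works, as a quick check on sample values confirms:

```python
import math

def a(n):
    return (3 * n**2 + 4) / (2 * n**2 + 5)

for eps in (0.1, 0.01, 0.001, 1e-6):
    bound = 7 / (2 * eps) - 5                      # need 2 n^2 > bound
    M = math.isqrt(max(0, math.ceil(bound / 2))) + 1
    assert 2 * M**2 > bound
    for n in range(M + 1, M + 200):                # sample of n > M
        assert abs(a(n) - 1.5) < eps
```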
Sphere-sphere intersection is not a surface In my topology lecture, my lecturer said that when two spheres intersect each other, the intersecting region is not a surface. Well, my own understanding is that the intersecting region should look like two contact lenses combined together, back to back. The definition of a surface that I have now is that every point has a neighbourhood which is homeomorphic to a disc. I don't see why the intersecting region is not a surface. It would be better if someone could guide me through this with a picture.
In general 2 spheres on $\mathbb{R}^3$ intersect on a circle which is a curve as you can simply imagine. You don't need to see a picture to visualize that I think..
{ "language": "en", "url": "https://math.stackexchange.com/questions/471970", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 2 }
How does definition of nowhere dense imply not dense in any subset? In some topological space $X$, a set $N$ is nowhere dense iff $\text{Int}\left(\overline{N}\right)=\emptyset$, where Int is the interior, and overbar is closure. How can I show this is equivalent to the statement "any non-empty open subset of $X$ contains an open non-empty subset containing no elements of $N$."? I can use the following equivalences: $ N\text{ is nowhere dense}\iff\overline{N}\text{ is nowhere dense}\iff\text{Ext}\left(N\right)\text{ is dense in }X\;\iff\overline{N}^{c}\text{ is dense in }X $ My attempt: 1) I assumed the quoted statement is false - that there is a non-empty open set $V$ in which $N$ is dense. It follows that $V\subset\overline{N}$ . On the other hand, N is nowhere dense, so $V\subset\overline{\text{Ext}\left(N\right)}$ also holds. I concluded that $V\subset\partial N$ , but I don't see what this achieves. 2) I think I might have a proof - is the following correct? $\text{Int}\left(N\right)=\emptyset\iff\text{Int}\left(\overline{N}\right)=\emptyset$, so we can in particular take $N_{1}=\overline{N}\setminus\partial N$, and have $\text{Int}\left(N_{1}\right)=\emptyset$. But $\text{Int}\left(N_{1}\right)=N_{1}$, so $N_{1}=\emptyset$. Additionally, $\text{Int}\left(N_{1}\right)$ is the greatest subset of $N_{1}$ (itself), so any subset of it must be empty. The boundary contains no open subsets, so the family of nowhere dense sets whose closure is $\overline{N}$ are just closed sets: they're all unions of the empty set with a subset of the boundary - a closed set. This means the greatest open subset of a nowhere dense set is $\emptyset$.
Note that $\overline N^c \text{ is dense in }X \Longleftrightarrow \overline N^c\cap U\neq \emptyset \text{ for all }U\neq \emptyset \text{ open in }X$. Note further that $\overline N^c$ is open. Now, let $U$ be a non-empty open subset of $X$, by the above $V=\overline N^c\cap U$ is a non-empty open subset of $U$ containing no elements of $N$, because $V\subseteq \overline N^c\subseteq N^c$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/472074", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 0 }
How to solve this difficult system of equations? $$1+4\lambda x^{3}-4\lambda y = 0$$ $$4\lambda y^{3}-4\lambda x = 0$$ $$x^{4}+y^{4}-4xy = 0$$ I can't deal with it. How to solve this?
As noted in several comments, the second equation yields $x=y^{3}$. Hence, by the third equation, $$y^{12}=3y^{4} \implies y^{8}=3$$ Assuming $y$ is real, we get $y=\pm 3^{1/8}$ and $x=\pm3^{3/8}$. By the first equation, then, (taking the positive roots): $$1+4\lambda3^{9/8}-4\lambda3^{1/8}=0 \implies 8\lambda3^{1/8}=-1$$ So $\lambda=\frac{-1}{8}3^{-1/8}$. If we were to take the negative roots, $\lambda$ would be positive. NB: Notice that none of $x,y,\lambda$ can be zero due to equation $1$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/472125", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 1 }
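Plugging the positive-branch roots back into all three equations confirms the answer numerically:

```python
x, y = 3 ** (3 / 8), 3 ** (1 / 8)
lam = -1 / (8 * 3 ** (1 / 8))

eq1 = 1 + 4 * lam * x**3 - 4 * lam * y
eq2 = 4 * lam * y**3 - 4 * lam * x
eq3 = x**4 + y**4 - 4 * x * y

assert abs(eq1) < 1e-9 and abs(eq2) < 1e-9 and abs(eq3) < 1e-9
assert abs(y**8 - 3) < 1e-9        # y^8 = 3, as derived
```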