what's the name of the theorem: the median to the hypotenuse of a right triangle is always half of it This question is related to one of my previous questions. The answer to that question included a theorem: "The median on the hypotenuse of a right triangle equals one-half the hypotenuse". When I wrote the answer out and showed it to a friend of mine, he basically asked me how I knew that the theorem was true, and whether the theorem had a name. So, my question: - Does this theorem have a name? - If not, what would be the best way to describe it during a math test? Or is it better to write out the full proof every time?
Here is a proof without words: complete the right triangle to a rectangle whose diagonal is the hypotenuse. The diagonals of a rectangle are equal and bisect each other, so the median from the right angle to the hypotenuse is half of the other diagonal, which has the same length as the hypotenuse.
{ "language": "en", "url": "https://math.stackexchange.com/questions/240819", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 5, "answer_id": 2 }
Combinatorics problem with negative possibilities I know how to solve the basic number-of-solutions equations, such as "find the number of positive integer solutions to $x_1 + x_2 + x_3 = 12$". But I have no clue how to do this problem: Find the number of solutions of $x_1+x_2-x_3-x_4 = 0$ in integers between $-4$ and $4$, inclusive. If I try to solve it like the basic equations, I get $C(n+r-1,r) = C(0+9-1,9) = C(8,9)$, which is obviously improper. Can someone point me in the right direction on how to solve this type of problem?
Put $x_i+4=:y_i$. Then we have to solve $$y_1+y_2=y_3+y_4$$ in integers $y_i$ between $0$ and $8$ inclusive. For given $p\geq0$ the equation $y_1+y_2=p$ has $p+1$ solutions if $0\leq p\leq 8$, and $17-p$ solutions if $9\leq p\leq 16$. It follows that the total number $N$ of solutions is given by $$N=\sum_{p=0}^8(p+1)^2+\sum_{p=9}^{16}(17-p)^2=2\sum_{k=1}^8 k^2+ 9^2=489\ .$$
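The count is small enough to verify by brute force. Here is a minimal Python sketch of such a check (the bounds come straight from the problem statement):

```python
from itertools import product

# Count integer solutions of x1 + x2 - x3 - x4 = 0 with each x_i in [-4, 4].
count = sum(1 for x in product(range(-4, 5), repeat=4)
            if x[0] + x[1] - x[2] - x[3] == 0)
print(count)  # 489, matching the closed-form sum above
```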
{ "language": "en", "url": "https://math.stackexchange.com/questions/240890", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 1 }
How to fill in this multiplication table? The following multiplication table was given to me as a class exercise. I should have all the necessary information to fill it in completely. However, I'm not sure how to take advantage of the relations I am given to fill it in. The Question A group has four elements $a,b,c$ and $d$, subject to the rules $ca = a$ and $d^2 = a$. Fill in the entire multiplication table. \begin{array}{c|cccc} \cdot & a & b & c & d \\ \hline a& & & & \\ b& & & & \\ c& a & & & \\ d& & & & a \end{array} I imagine I might proceed like this: To find $ab$, write $a = d^2$ and thus $ab = d^2b = d\cdot db$... but my chain of reasoning always stops around here.
Since each element has an inverse, each row and each column of the table must contain all four elements. After filling in the row and the column of the identity element $c$, we have three $a$'s in the table, and it follows that the only place for the last $a$ is given by $b^2=a$. Trying to distribute four $d$'s in the table, we are led to $ab=ba=d$. The only place for the remaining $b$'s is now given by $ad=da=b$, and the remaining products are then $=c$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/240936", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 2 }
Existence and uniqueness of solution for a seemingly trivial 1D non-autonomous ODE So I was trying to do some existence and uniqueness results beyond the trivial setting. So consider the 1D non-autonomous ODE given by $\dot{y} = f(t) - g(t) y $ where $f,g \geq 0$ are integrable and $f(t),g(t) \rightarrow 0$ for $t \rightarrow \infty$. How would I go about proving the existence and uniqueness for the solution of such an ODE for $t \rightarrow \infty$? Just by a contraction argument?
Assuming you meant $\dot{y} = f(t) - g(t) y$, this is a linear differential equation and has the explicit solutions $y(t) = \mu(t)^{-1} \int \mu(t) f(t)\ dt$ where $\mu(t) = \exp(\int g(t)\ dt)$, from which it is clear that you have global existence of solutions. Uniqueness follows from the standard existence and uniqueness for ODE.
{ "language": "en", "url": "https://math.stackexchange.com/questions/241007", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Coherent sheaves on a non-singular algebraic variety Grothendieck wrote in his letter to Serre(Nov. 12,1957) that every coherent algebraic sheaf on a non-singular algebraic variety(not necessarily quasi-projective) is a quotient of a direct sum of sheaves defined by divisors. I think "sheaves defined by divisors" means locally free sheaves of rank one(i.e. invertible sheaves). How do you prove this?
This is proved for any noetherian separated regular scheme in SGA 6, exposé II, Corollaire 2.2.7.1. (I learned this result from a comment here: such schemes are "divisorial".) To see that this answers your question, look at op. cit. Définition 2.2.3(ii).
{ "language": "en", "url": "https://math.stackexchange.com/questions/241086", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 1, "answer_id": 0 }
Help me understand the (continuous) uniform distribution I think I didn't pay attention to uniform distributions because they're too easy. So I have this problem: It takes a professor a random time between 20 and 27 minutes to walk from his home to school every day. If he has a class at 9.00 a.m. and he leaves home at 8.37 a.m., find the probability that he reaches his class on time. I am not sure I know how to do it. I think I would use $F(x)$, and I tried to look up how to figure it out but could only find the answer $F(x)=(x-a)/(b-a)$. So I input the numbers and got $(23-20)/(27-20)$, which is $3/7$, but I am not sure that is the correct answer, though it seems right to me. I'm not here for homework help (I am not being graded on this problem or anything), but I do want to understand the concepts. Too often I just learn how to do math and don't "really" understand it. So I would like to know how to properly do uniform distribution problems (of a continuous variable) and maybe how to find $F(x)$. I thought it was the integral but I didn't get the same answer. Remember I want to understand this. Thanks so much for your time.
Your walking time lies in the interval $[a,b]=[20,27]$, and the uniform distribution gives $$\int_{a}^{x} \frac{dt}{27-20}.$$ Your starting point, 8:37, is your time 0 and you want to make it to your class by 9:00. Your minimum walking time is 20 minutes, which would still leave you on time, but for a walking time of more than 23 minutes you will be late. Hence we want our walking time in $[20,23]$, and these are the bounds of our integral. Hence the desired probability is $3/7$.
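As a sanity check, here is a small Python sketch of both the exact CDF computation and a simulation (the variable names are my own):

```python
import random

a, b = 20, 27       # walking time is uniform on [a, b] minutes
deadline = 23       # 8:37 plus 23 minutes is 9:00

# Exact probability from the CDF F(x) = (x - a) / (b - a)
print((deadline - a) / (b - a))          # 0.42857... = 3/7

# Monte Carlo check of the same probability
trials = 10**5
on_time = sum(random.uniform(a, b) <= deadline for _ in range(trials))
print(on_time / trials)                  # approximately 3/7
```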
{ "language": "en", "url": "https://math.stackexchange.com/questions/241138", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Find all complex numbers $z$ satisfying the equation I need some help on this question. How do I approach this question? Find all complex numbers $z$ satisfying the equation $$ (2z - 1)^4 = -16. $$ Should I remove the power of $4$ of $(2z-1)$ and also do the same for $-16$?
The answer to this problem lies in the roots of a polynomial. From an ocular inspection we know we will have complex roots, and they always come in conjugate pairs! The degree of our equation is 4, and since no real $z$ can satisfy $(2z-1)^4=-16$, we will have two pairs of complex conjugates. The fact that there are exactly four roots (counted with multiplicity) is the Fundamental Theorem of Algebra. Addendum A key observation is how to represent $-1$ in its general form using Euler's identity: odd powers of $e^{i\pi}$ equal $-1$, i.e. $e^{i\pi(2k+1)}=-1$ for every integer $k$.
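To make the addendum concrete: writing $-16 = 16\,e^{i\pi(2k+1)}$ and taking fourth roots gives $2z-1 = 2e^{i\pi(2k+1)/4}$ for $k=0,1,2,3$, so the roots are $\frac{1\pm\sqrt2}{2} \pm \frac{\sqrt2}{2}i$, two conjugate pairs as predicted. A short Python sketch of this computation:

```python
import cmath

# Fourth roots of -16 are 2*exp(i*pi*(2k+1)/4), so z = (1 + 2*exp(i*pi*(2k+1)/4)) / 2.
roots = [(1 + 2 * cmath.exp(1j * cmath.pi * (2 * k + 1) / 4)) / 2 for k in range(4)]
for z in roots:
    print(z, (2 * z - 1) ** 4)  # the second value is -16 (up to rounding) for each root
```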
{ "language": "en", "url": "https://math.stackexchange.com/questions/241206", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 2 }
The difference between m and n in calculating a Fourier series I am studying for an exam in Differential Equations, and one of the topics I should know about is Fourier series. Now, I am using Boyce 9e, and in there I found the general equation for a Fourier series: $$\frac{a_0}{2} + \sum_{m=1}^{\infty} \left(a_m \cos\frac{m \pi x}{L} + b_m \sin\frac{m \pi x}{L}\right)$$ I also found the equations to calculate the coefficients in terms of $n$, where $n$ is any nonnegative integer: $$a_n = \frac{1}{L} \int_{-L}^{L} f(x) \cos\frac{n \pi x}{L}\,dx$$ $$b_n = \frac{1}{L} \int_{-L}^{L} f(x) \sin\frac{n \pi x}{L}\,dx$$ I noticed that the coefficients are calculated in terms of $n$, but are used in the general equation in terms of $m$. I also noticed that at the end of some exercises in my book, they convert from $n$ to $m$. So my question is: what is the difference between $n$ and $m$, and why can't I calculate my coefficients in terms of $m$ directly? Why do I have to calculate them in terms of $n$, and then convert them? I hope that some of you can help me out!
You should know by now that $n$ and $m$ are just dummy indices. You can interchange them as long as they represent the same thing, namely an arbitrary natural number. If $$f(x) = \sum\limits_{n = 1}^\infty {{b_n}\sin \frac{{n\pi x}}{L}}$$ we can multiply both sides by ${\sin \frac{{m\pi x}}{L}}$ and integrate from $x = 0$ to $x = L$, for example, as follows: $$\int_0^L {f(x)\sin \frac{{m\pi x}}{L}dx = \sum\limits_{n = 1}^\infty {{b_n}\int_0^L {\sin \frac{{n\pi x}}{L}\sin \frac{{m\pi x}}{L}dx}}}$$ but the right-hand side is $$\sum\limits_{n = 1}^\infty {{b_n}\frac{L}{2}{\delta _{nm}} = \left\{ {\begin{array}{*{20}{c}} 0&{n \ne m} \\ {{b_m}\frac{L}{2}}&{n = m} \end{array}} \right.}$$ where ${{\delta _{nm}}}$ is the Kronecker delta. It is just a compact way of stating that sines are orthogonal, i.e. $$\int_0^L {\sin \frac{{n\pi x}}{L}\sin \frac{{m\pi x}}{L}dx = \frac{L}{2}}$$ if $n = m$, and $0$ otherwise. So why did we use ${b_m}$? We used ${b_m}$ because the integral evaluates to $0$ when $n \ne m$, and the only term that "survives" is ${b_m}$ because it corresponds to the case $n = m$. Therefore, we can write $$\int_0^L {f(x)\sin \frac{{m\pi x}}{L}} dx = {b_m}\frac{L}{2}$$ and solve for ${b_m}$: $${b_m} = \frac{2}{L}\int_0^L {f(x)\sin \frac{{m\pi x}}{L}}dx.$$ We can solve for ${a_m}$ in exactly the same way because cosines are also orthogonal. At the end we can always change $m$ to $n$. This method for finding the Fourier coefficients works in general and is often referred to as the "Fourier trick" in physics. Overall, we can use $$\int_{ - L}^L {\sin \frac{{n\pi x}}{L}\sin \frac{{m\pi x}}{L}dx = \left\{ {\begin{array}{*{20}{c}} 0&{n \ne m} \\ L&{n = m \ne 0} \end{array}} \right.}$$ $$\int_{ - L}^L {\cos \frac{{n\pi x}}{L}\cos \frac{{m\pi x}}{L}dx = \left\{ {\begin{array}{*{20}{c}} 0&{n \ne m} \\ L&{n = m \ne 0} \\ {2L}&{n = m = 0} \end{array}} \right.}$$ $$\int_{ - L}^L {\sin \frac{{n\pi x}}{L}\cos \frac{{m\pi x}}{L}dx = 0}$$ to derive the famous Fourier coefficients.
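The orthogonality relation at the heart of the trick is easy to check numerically; here is a small Python sketch (with $L$ chosen arbitrarily):

```python
import numpy as np
from scipy.integrate import quad

L = 2.0
for n in range(1, 4):
    for m in range(1, 4):
        val, _ = quad(lambda x: np.sin(n * np.pi * x / L) * np.sin(m * np.pi * x / L), 0, L)
        print(n, m, round(val, 6))  # L/2 = 1.0 when n == m, and 0 otherwise
```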
{ "language": "en", "url": "https://math.stackexchange.com/questions/241272", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
What is the meaning of a.s.? What is the meaning of a.s. behind a limit formula (I found this in a paper about stochastic processes) , or sometimes P-a.s.?
It means almost surely. P-a.s. means almost surely with respect to the probability measure P. For more details, look up "almost sure convergence" on Wikipedia. Let me give some insight: when working with convergence of sequences of random variables (in general, stochastic processes), it is not necessary for convergence to happen for all $w \in \Omega$, where $\Omega$ is the sample space. Instead, it is fine if the set where it doesn't converge is a set of measure 0, since most of the results go through. If you take a measure theory course, you will be able to appreciate this even more.
{ "language": "en", "url": "https://math.stackexchange.com/questions/241337", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 1, "answer_id": 0 }
Solve for a variable inside multiple powers in terms of the powers. I'm a programmer working to write test software. Currently it estimates the values it needs by testing with a brute-force algorithm. I'm trying to improve the math behind the software so that I can calculate the solution(s) instead. I seem to have come across an equation that is beyond my ability. $$(Bx_1 + 1)^{y_2}=(Bx_2 + 1)^{y_1}$$ My goal is to have $B$ as a function of everything else, or have an algorithm solve for it. I feel like it should be possible, but I'm not even sure how to begin.
Okay, so I came up with an approach that allows me to approximate the answer if $\frac {y_2}{y_1}$ is rational (which it will be in my case because I have limited precision). I can re-express the original equation as $$(Bx_1+1)^{\frac {y_2}{y_1}}=Bx_2+1$$ If $\frac {y_2}{y_1}$ is rational I can write it as $\frac ND$ where $N$ and $D$ are integers. Substituting this back in and clearing the fraction from the exponent, I get $$(Bx_1+1)^N=(Bx_2+1)^D$$ Here is the fun part: now I can do a binomial expansion on each side, which gives me one high-order polynomial whose roots I can approximate. It's very messy but should get the job done.
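Here is a sketch of that plan in Python (the helper name `solve_B` and the sample inputs are mine; it assumes $N$ and $D$ are small positive integers):

```python
import numpy as np
from numpy.polynomial import polynomial as P

def solve_B(x1, x2, N, D):
    """Real roots B of (B*x1 + 1)**N == (B*x2 + 1)**D, via binomial expansion."""
    lhs = P.polypow([1, x1], N)        # ascending coefficients of (1 + B*x1)**N
    rhs = P.polypow([1, x2], D)        # ascending coefficients of (1 + B*x2)**D
    diff = np.zeros(max(len(lhs), len(rhs)))
    diff[:len(lhs)] += lhs
    diff[:len(rhs)] -= rhs
    roots = np.roots(diff[::-1])       # np.roots expects descending order
    return sorted(r.real for r in roots if abs(r.imag) < 1e-9)

# B = 0 is always a trivial root, since both sides equal 1 there.
print(solve_B(2.0, 3.0, 3, 2))         # [-0.375, 0.0, 0.0]
```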
{ "language": "en", "url": "https://math.stackexchange.com/questions/241431", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Counterexample to show that the class of Moscow spaces is not closed hereditary. What is a counterexample to show that the class of Moscow spaces is not closed hereditary? (A space $X$ is called Moscow if the closure of every open $U \subseteq X$ is the union of a family of G$‎_{‎\delta‎‎‎}$‎-subsets of $X$.)
This is essentially copied from A. V. Arhangel'skii, Moscow spaces and topological groups, Top. Proc., 25; pp.383-416: Let $D ( \tau )$ be an uncountable discrete space, and $\alpha D ( \tau )$ the one point compactification of $D ( \tau )$. Then $D ( \tau )$ is a Moscow space, and $D ( \tau )$ is G$_\delta$-dense in $\alpha D ( \tau )$, while $\alpha D ( \tau )$ is not a Moscow space. Indeed, let $U$ be any infinite countable subset of $D ( \tau )$. Then $U$ is open in $\alpha D ( \tau )$, and $\overline{U} = U \cup \{ \alpha \}$, where $\alpha$ is the only non-isolated point in $\alpha D ( \tau )$. Every G$_\delta$-subset of $\alpha D ( \tau )$ containing the point $\alpha$ is easily seen to be uncountable; therefore, $\overline{U}$ is not the union of any family of G$_\delta$-subsets of $\alpha D ( \tau )$. Since $\alpha D ( \tau )$ is a closed subspace of a Tychonoff cube, we conclude that the class of Moscow spaces is not closed hereditary.
{ "language": "en", "url": "https://math.stackexchange.com/questions/241492", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Getting the Total value from the Nett I can't figure out this formula; I need some help to write it out for a PHP script. I have a value of $\$80$. $\$80$ is the profit from a total sale of $\$100$; $20\%$ is the percentage margin for the respective product. Now I just have the $\$80$ and want to get the total figure of $\$100$. How can I get the total figure and what would be the formula? Your help is much appreciated. Thanks in advance.
If you mean that you get $80$ from $100$ after a $20$ per cent discount and want to reverse the process, multiply by $\frac{100}{100-20}=\frac{5}{4}$. The formula you need is $y=\frac{5x}{4}$, so if $x=80$, then $y=100$.
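For the script mentioned in the question, the same formula as a tiny Python sketch (the helper name is hypothetical; porting to PHP is a one-liner):

```python
def total_from_net(net, margin_percent):
    """Reverse a percentage margin: net = total * (1 - margin/100)."""
    return net * 100 / (100 - margin_percent)

print(total_from_net(80, 20))  # 100.0
```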
{ "language": "en", "url": "https://math.stackexchange.com/questions/241613", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Finding a bound on the probability of a random set First-time user here. English is not my native language, so I apologize in advance. I am taking a final in a few weeks for a graph theory class, and one of the sample problems is exactly the same as the $k$-edge problem, except that we must prove a different bound: if each vertex is selected with probability $\frac{1}{2}$, then the probability of the selected set being an independent set is $\geq (\frac{2}{3})^k$ (instead of $\frac{1}{2^k}$, as in the original problem). Here is what I am trying: I am looking into calculating the number of independent sets for each possible number of vertices $i$, for $i=0$ to $i=n$. Once I calculate the probability of there being an independent set for the number of vertices $n$, I can take the expectation and union bound.
I was the one who asked the original question that you linked to. Just for fun I tried to see if I could prove a better bound, and I was able to prove one even better than that. Here's a hint: you can use indicator variables and the second moment method.
{ "language": "en", "url": "https://math.stackexchange.com/questions/241681", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Convergence of alternating series based on prime numbers I've been experimenting with some infinite series, and I've been looking at this one, $$\sum_{k=1}^\infty (-1)^{k+1} {1\over p_k}$$ where $p_k$ is the k-th prime. I've summed up the first 35 terms myself and got a value of about 0.27935, and this doesn't seem close to a relation of any 'special' constants, except maybe $\frac12\gamma $. My question is, has the sum of this series been proven to have a particular closed form? If so, what is this value?
As mentioned, this series has an expansion given by the OEIS. This series is mentioned in many sources, such as Mathworld, Wells, Robinson & Potter and Weisstein. These sources all seem to imply that, though the series converges, no known "closed form" for this sum exists.
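The partial sums are easy to reproduce; a short Python sketch using SymPy (the cutoffs are arbitrary):

```python
from sympy import primerange

primes = list(primerange(2, 110000))   # more than 10,000 primes

def partial_sum(n):
    return sum((-1) ** (k + 1) / p for k, p in enumerate(primes[:n], start=1))

print(partial_sum(35))     # ~0.2793, matching the 35-term value in the question
print(partial_sum(10000))  # consecutive partial sums bracket a limit near 0.2696
```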
{ "language": "en", "url": "https://math.stackexchange.com/questions/241728", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 1, "answer_id": 0 }
Basic theory about divisibility and modular arithmetic I am awfully bad with number theory, so if someone can provide a quick solution to this, it will be very much appreciated! Prove that if $p$ is a prime with $p \equiv 1 \pmod 4$ then there is an integer $m$ such that $p$ divides $m^2 +1$.
I will assume that you know Wilson's Theorem, which says that if $p$ is prime, then $(p-1)!\equiv -1\pmod{p}$. Let $m=\left(\frac{p-1}{2}\right)!$. We show that if $p\equiv 1\pmod{4}$, then $m^2\equiv -1\pmod{p}$. This implies that $p$ divides $m^2+1$. The idea is to pair $1$ with $p-1$, $2$ with $p-2$, $3$ with $p-3$, and so on until at the end we pair $\frac{p-1}{2}$ with $\frac{p+1}{2}$. To follow the argument, you may want to work with a specific prime, such as $p=13$. So we pair $1$ with $12$, $2$ with $11$, $3$ with $10$, $4$ with $9$, $5$ with $8$, and finally $6$ with $7$. Thus for any $a$ from $1$ to $\frac{p-1}{2}$, we pair $a$ with $p-a$. Note that $a(p-a)\equiv -a^2\pmod{p}$. So the product of all the numbers from $1$ to $p-1$ is congruent modulo $p$ to the product $$(-1^2)(-2^2)(-3^2)\cdots\left(-\left(\frac{p-1}{2}\right)^2\right).$$ This is congruent to $$(-1)^{\frac{p-1}{2}}m^2.$$ But $p-1$ is divisible by $4$, so $\frac{p-1}{2}$ is even, and therefore our product is congruent to $m^2$. But our product is congruent to $(p-1)!$, and therefore, by Wilson's Theorem, it is congruent to $-1$. We conclude that $m^2\equiv -1\pmod{p}$, which is what we wanted to show.
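A quick numerical check of the key identity $m^2\equiv-1\pmod p$ for $m=\left(\frac{p-1}{2}\right)!$ (sample primes chosen to satisfy $p\equiv1\pmod4$):

```python
from math import factorial

for p in [5, 13, 17, 29, 101]:          # primes congruent to 1 mod 4
    m = factorial((p - 1) // 2) % p
    print(p, (m * m) % p == p - 1)       # True: m^2 = -1 = p - 1 (mod p)
```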
{ "language": "en", "url": "https://math.stackexchange.com/questions/241808", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Computing $\operatorname{Ext}^1_R(R/x,M)$ How to compute $\operatorname{Ext}^1_R(R/(x),M)$ where $R$ is a commutative ring with unit, $x$ is a nonzerodivisor and $M$ an $R$-module? Thanks.
There is an alternative way to do this problem rather than taking a projective resolution. Consider the short exact sequence of $R$-modules $$0 \to R \stackrel{x}{\to} R \to R/(x) \to 0$$ where the multiplication-by-$x$ map is injective because $x$ is not a zero divisor in $R$. Now we recall a general fact from homological algebra that says any SES of $R$-modules gives rise to an LES in Ext. We only need to care about the part $$0 \to \textrm{Hom}_R(R/(x),M) \to \textrm{Hom}_R(R,M) \stackrel{f}{\to} \textrm{Hom}_R(R,M) \to \textrm{Ext}^1_R(R/(x),M) \to 0 \to 0\ldots $$ where the zeros appear because $R$ as a module over itself is free (and hence projective), so that $\textrm{Ext}^1_R(R,M) = 0$. Now we recall that $\textrm{Hom}_R(R,M) \cong M$ because any homomorphism from $R$ to $M$ is completely determined by the image of $1$. It is easily seen now that under this identification $\textrm{im}\, f \cong xM$, so that $$\textrm{Ext}^1_R(R/(x),M) \cong M/xM.$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/241874", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
prove sum of divisors of a square number is odd I don't know how to prove that the sum of all divisors of a square number is always odd. Example: $27$ is divisible by $1,3,9,27$; $27^2 = 729$ is divisible by $1,3,9,27,81,243,729$; $\sigma_1 \text{(divisor function)} = 1 + 3 + 9 + 27 + 81 + 243 + 729 = 1093$ is odd. I think it is somehow connected to the fact that every odd divisor gets some kind of a pair when a number is squared and $1$ doesn't, but I can't formalize it. Need help.
The divisors $1=d_1<d_2<\cdots <d_k=n^2$ can be partitioned into pairs $(d, \frac {n^2}d)$, except that there is no partner for $n$ itself. Therefore, the number of divisors is odd. Thus if $n$ itself is odd (and so are all its divisors), we have the sum of an odd number of odd numbers, hence the result is odd. But if $n$ is even, we can write it as $n=m2^k$ with $m$ odd. The odd divisors of $n^2$ are precisely the divisors of $m^2$. Their sum is odd, and all other (even) divisors do not affect the parity of the sum.
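The claim is easy to confirm experimentally; a minimal SymPy sketch:

```python
from sympy import divisors

for n in range(1, 30):
    sigma = sum(divisors(n * n))
    assert sigma % 2 == 1, (n, sigma)   # the divisor sum of a square is odd
print("verified for n = 1..29")
```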
{ "language": "en", "url": "https://math.stackexchange.com/questions/241935", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11", "answer_count": 6, "answer_id": 0 }
$p$ an odd prime, $p \equiv 3 \pmod 8$. Show that $2^{(\frac{p-1}{2})}\cdot(p-1)! \equiv 1 \pmod p$ $p$ an odd prime, $p \equiv 3 \pmod 8$. Show that $2^{\left(\frac{p-1}{2}\right)}\cdot(p-1)! \equiv 1 \pmod p$ From Wilson's theorem: $(p-1)!\equiv -1 \pmod p$. Hence, we need to show that $2^{\left(\frac{p-1}{2}\right)} \equiv -1 \pmod p. $ We know that $2^{p-1} \equiv 1 \pmod p.$ Hence: $2^{\left(\frac{p-1}{2}\right)} \equiv \pm 1 \pmod p. $ How do I show that this must be the negative option?
All the equalities below are in the ring $\mathbb{Z}/p\mathbb{Z}$. Note that $-1 = (p-1)! = \prod_{i=1}^{p-1}i = \prod_{i=1}^{\frac{p-1}{2}}(2i-1)\prod_{i=1}^{\frac{p-1}{2}} 2i = 2^{\frac{p-1}{2}} \prod_{i=1}^{\frac{p-1}{2}}(2i-1)\prod_{i=1}^{\frac{p-1}{2}} i$ Now let $S_1, S_2$ be the sets of, respectively, all odd and all even numbers in $\left \{ 1, \cdots, \frac{p-1}{2} \right \}$ and $S_3$ be the set of all even numbers in $\left \{ \frac{p+1}{2}, \ldots, p-1 \right \}$. Note that $\prod_{i=1}^{\frac{p-1}{2}} i = \prod _{j \in S_1}j \prod _{k \in S_2}k = (-1)^{|S_1|} \prod _{j \in S_1}(-j)\prod _{k \in S_2}k $ $= (-1)^{|S_1|} \prod _{t \in S_3}t \prod _{k \in S_2}k =(-1)^{|S_1|} \prod_{i=1}^{\frac{p-1}{2}}(2i)$ So $\prod_{i=1}^{\frac{p-1}{2}}(2i-1)\prod_{i=1}^{\frac{p-1}{2}} i = (-1)^{|S_1|}\prod_{i=1}^{\frac{p-1}{2}}(2i-1) \prod_{i=1}^{\frac{p-1}{2}} 2i = (-1)^{|S_1|} (p-1)! = (-1)^{|S_1| +1} $ Now we have $-1 = 2^{\frac{p-1}{2}} \prod_{i=1}^{\frac{p-1}{2}}(2i-1)\prod_{i=1}^{\frac{p-1}{2}} i = (-1)^{|S_1| + 1} \cdot 2^{\frac{p-1}{2}} $ i.e. $\boxed{2^{\frac{p-1}{2}} = (-1)^{|S_1|}} $ Now $|S_1| = \frac{p+1}{4}$ if $\frac{p-1}{2}$ is odd and $|S_1| = \frac{p-1}{4}$ if $\frac{p-1}{2}$ is even. So if $p \equiv 3, 5 \pmod 8 $ we have $2^{\frac{p-1}{2}} = -1$. If $p \equiv 1,7 \pmod 8$ we have $2^{\frac{p-1}{2}} = 1$.
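The boxed conclusion (equivalently, the value of $2^{\frac{p-1}{2}} \bmod p$ according to $p \bmod 8$) can be checked numerically; a short Python sketch:

```python
from sympy import primerange

for p in primerange(3, 60):
    val = pow(2, (p - 1) // 2, p)       # always 1 or p - 1
    sign = 1 if val == 1 else -1
    print(p, p % 8, sign)               # sign is -1 exactly when p % 8 is 3 or 5
```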
{ "language": "en", "url": "https://math.stackexchange.com/questions/242007", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 0 }
Why the principle of counting does not match with our common sense The principle of counting says that "the number of odd integers, which is the same as the number of even integers, is also the same as the number of integers overall." This does not match my common sense (I am not a mathematician, but a CS student). Can some people here help me to reach a mathematician's level of thinking for this problem? I have searched the net a lot (Wikipedia also).
Because sets of numbers can be infinitely divisible. See this Reddit comment. I think his intuition comes from the fact that the world is discrete in practice. You have 2x more atoms in [0, 2cm] than in [0, 1cm]. If you are not looking at something made of atoms, let's say you have 2x more Planck lengths in [0, 2cm] than in [0, 1cm]. See what I mean? The OP's intuition can be correct for physical things in our world, but mathematics goes beyond that, with rational numbers being infinitely divisible. As soon as there is a limit to how much you can divide things, even if it's one million digits after the decimal point, the OP's intuition is valid.
{ "language": "en", "url": "https://math.stackexchange.com/questions/242057", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 4, "answer_id": 3 }
Showing that $\langle p\rangle=\int\limits_{-\infty}^{+\infty}p |a(p)|^2 dp$ How do I show that $$\int \limits_{-\infty}^{+\infty} \Psi^* \left(-i\hbar\frac{\partial \Psi}{\partial x} \right)dx=\int \limits_{-\infty}^{+\infty} p \left|a(p)\right|^2dp\tag1$$ given that $$\Psi(x)=\frac{1}{\sqrt{2 \pi \hbar}}\int \limits_{-\infty}^{+\infty} a(p) \exp\left(\frac{i}{\hbar} px\right)dp\tag2$$ My attempt: $$\frac {\partial \Psi(x)}{\partial x} = \frac{1}{\sqrt{2\pi \hbar}} \int\limits_{-\infty}^{+\infty} \frac{\partial}{\partial x} \left(a(p)\exp\left(\frac{i}{\hbar} px\right)\right)dp\tag3$$ $$=\frac{1}{\sqrt{2\pi \hbar}} \int\limits_{-\infty}^{+\infty} a(p) \cdot \exp\left(\frac{i}{\hbar} px\right)\frac{i}{\hbar}p \cdot dp\tag4$$ Multiplying by $-i\hbar$: $$-i\hbar \frac {\partial \Psi}{\partial x}=\frac{1}{\sqrt{2\pi \hbar}} \int\limits_{-\infty}^{+\infty} a(p) \cdot \exp\left(\frac{i}{\hbar} px\right)p \cdot dp\tag5$$ At this point I'm stuck because I don't know how to evaluate the integral without knowing $a(p)$. And yet, the right hand side of equation (1) doesn't have $a(p)$ substituted in.
The conclusion follows from the Fourier inversion formula (in distribution sense): $$\begin{align*} &\int_{-\infty}^{\infty} \Psi^{*} \left( -i\hbar \frac{\partial \Psi}{\partial x} \right) \, dx \\ &= \int_{-\infty}^{\infty} \left( \frac{1}{\sqrt{2\pi\hbar}} \int_{-\infty}^{\infty} a(p)^{*}e^{-ipx/\hbar} \, dp \right) \left( \frac{1}{\sqrt{2\pi\hbar}} \int_{-\infty}^{\infty} p' a(p')e^{ip'x/\hbar} \, dp' \right) \, dx \\ &= \frac{1}{2\pi\hbar} \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} p' a(p)^{*}a(p') e^{i(p'-p)x/\hbar} \, dp'dp dx \\ &= \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} p' a(p)^{*}a(p') \left( \frac{1}{2\pi\hbar} \int_{-\infty}^{\infty} e^{i(p'-p)x/\hbar} \, dx \right) \, dp' dp \\ &= \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} p' a(p)^{*}a(p') \delta(p-p') \, dp' dp \\ &= \int_{-\infty}^{\infty} p a(p)^{*}a(p) \, dp = \int_{-\infty}^{\infty} p \left| a(p) \right|^2 \, dp. \end{align*}$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/242117", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Dihedral group $D_{8}$ as a semidirect product $V\rtimes C_2$? How do I show that the dihedral group $D_{8}$ (order $8$) is a semidirect product $V\rtimes \left\langle \alpha \right\rangle$, where $V$ is the Klein group and $\alpha$ is an automorphism of order two?
I like to think about $D_8$ as the group of symmetries of a square. So our group $V$ is given by the identity, the two reflections that have no fixed vertices, and their product, which is the 180°-rotation. Semidirect products can be characterized as split short exact sequences of groups. This is just a fancy way to say the following: Given a group $G$ and a normal subgroup $N\subset G$, denote by $\pi:G\to G/N$ the projection map. Then $G\cong N\rtimes G/N$ iff there exists a homomorphism $\phi: G/N\to G$ such that $\pi\circ\phi=\mathrm{id}_{G/N}$. This $\phi$ is called a splitting homomorphism. Back to our dihedral group: $D_8/V\cong C_2$. In order to define a splitting homomorphism $C_2\to D_8$ we need to find an element of order 2 that is not contained in $V$. Such an element is given by a reflection through one of the diagonals of our square. It is clear that $\pi$ doesn't map this element to $0\in C_2$ because it does not lie in $V$. So we have constructed a homomorphism $\phi: C_2\to D_8$ such that $\pi\circ\phi=\mathrm{id}_{C_2}$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/242202", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 3 }
How to explain that division by $0$ yields infinity to a 2nd grader How do we explain that dividing a positive number by $0$ yields positive infinity to a 2nd grader? The way I intuitively understand this is $\lim_{x \to 0}{a/x}$ but that's asking too much of a child. There's got to be an easier way. In response to the comments about it being undefined, granted, it is undefined, but it's undefined because of flipping around $0$ in positive or negative values and is in any case either positive or negative infinity. Yet, $|\frac{2}{0}|$ equals positive infinity in my book. How do you convey this idea?
Let us consider that any number divided by zero is undefined. You can explain it to the kid this way: division is actually splitting things. For example, consider that you have 4 chocolates; if you have to distribute those 4 chocolates among 2 of your friends, you would divide: $4/2 = 2$. Now consider this: you have 4 chocolates and you don't want to distribute them among any of your friends (that is like distributing to 0 friends). Division does not even come into the picture in such cases, and $4/0$ makes no sense. Hence in such cases it is said to be UNDEFINED.
{ "language": "en", "url": "https://math.stackexchange.com/questions/242258", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "16", "answer_count": 13, "answer_id": 1 }
example of irreducible transient Markov chain Can anyone give me a simple example of an irreducible (all elements communicate) and transient Markov chain? I can't think of any such chain, yet it exists (but it has to have an infinite number of elements). Thanks
A standard example is asymmetric random walk on the integers: consider a Markov chain with state space $\mathbb{Z}$ and transition probability $p(x,x+1)=3/4$, $p(x,x-1)=1/4$. There are a number of ways to see this is transient; one is to note that it can be realized as $X_n = X_0 + \xi_1 + \dots + \xi_n$ where the $\xi_i$ are iid biased coin flips; then the strong law of large numbers says that $X_n/n \to E[\xi_i] = 1/2$ almost surely, so that in particular $X_n \to +\infty$ almost surely. This means it cannot revisit any state infinitely often. Another example is simple random walk on $\mathbb{Z}^d$ for $d \ge 3$. Proving this is transient is a little more complicated but it should be found in most graduate probability texts.
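A quick simulation illustrates the drift in the first example (step probabilities as above; the walk length is arbitrary):

```python
import random

n = 10**5
x, returns = 0, 0
for _ in range(n):
    x += 1 if random.random() < 0.75 else -1   # +1 w.p. 3/4, -1 w.p. 1/4
    if x == 0:
        returns += 1
print(x / n)      # close to the drift E[xi] = 1/2
print(returns)    # typically very small: the walk drifts off to +infinity
```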
{ "language": "en", "url": "https://math.stackexchange.com/questions/242311", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 1, "answer_id": 0 }
How do you compute the number of reflexive relations? Given a set with $n$ elements I know that there are $2^{n^2}$ relations, because there are $n$ rows and $n$ columns and each entry is either $1$ or $0$, but I don't know how to compute the number of reflexive relations. I am very dumb. Can someone help me go through the thought process?
A relation $R$ on $A$ is a subset of $A\times A$. If $R$ is reflexive, each of the $n$ ordered pairs $(a,a)$ must be in $R$. So the remaining $n^2-n$ ordered pairs of the type $(a,b)$ where $a\neq b$ may or may not be in $R$. So each such ordered pair has 2 choices: to be in $R$ or not to be in $R$. Hence the number of reflexive relations is $2^{n^2-n}$.
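For small $n$ the formula $2^{n^2-n}$ can be confirmed by enumerating every relation and testing reflexivity; a brute-force Python sketch:

```python
from itertools import product

def count_reflexive(n):
    pairs = [(a, b) for a in range(n) for b in range(n)]
    count = 0
    for choice in product([False, True], repeat=len(pairs)):
        relation = {p for p, keep in zip(pairs, choice) if keep}
        if all((a, a) in relation for a in range(n)):
            count += 1
    return count

for n in range(1, 4):
    print(n, count_reflexive(n), 2 ** (n * n - n))  # counts agree: 1, 4, 64
```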
{ "language": "en", "url": "https://math.stackexchange.com/questions/242385", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
Prove that $S_n$ is doubly transitive on $\{1, 2,..., n\}$ for all $n \geqslant 2$. Prove that $S_n$ is doubly transitive on $\{1, 2,\ldots, n\}$ for all $n \geqslant 2$. I understand that transitive implies only one orbit, but...
In fact, $S_n$ acts $n$-fold transitively on $\{1,\ldots,n\}$ (hence the claim follows from $n\ge 2$), i.e. for $n$ different elements $i_1,\ldots ,i_n$ (which can only be $1,\ldots,n$ in some order) you can prescribe any $n$ different images $j_1,\ldots,j_n$ and then find (exactly) one element $\sigma\in S_n$ such that $\sigma(i_k)=j_k$ for all $k$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/242434", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
using contour integration I am trying to understand using contour integration to evaluate definite integrals. I still don't understand how it works for rational functions in $x$. So can anyone please elaborate on this method using a particular function, say $\int_0^{\infty} \frac {1}{1+x^3} \ dx$? I'd appreciate that. I mean, I know what I should be doing but am having problems applying it. So, basically I am interested in how to proceed rather than "Take this contour and so on". I'd appreciate any help you people can give me. Thanks.
What we really need for contour integration by residues to work is a closed contour. An endpoint of $\infty$ doesn't matter so much because we can treat it as a limit as $R \to \infty$, but an endpoint of $0$ is a problem. Fortunately, this integrand is symmetric under rotation by $2 \pi/3$ radians. So we consider a wedge-shaped contour $\Gamma = \Gamma_1 + \Gamma_2 + \Gamma_3$ going in a straight line $\Gamma_1$ from $0$ to $R$ on the real axis, then a circular arc $\Gamma_2$ on $|z|=R$ to $R e^{2\pi i/3}$, then a straight line $\Gamma_3$ back to $0$. We have $$\eqalign{\int_{\Gamma_1} \dfrac{dz}{1+z^3} &= \int_0^R \dfrac{dx}{1+x^3} \cr \int_{\Gamma_3} \dfrac{dz}{1+z^3} &= -e^{2\pi i/3} \int_0^R \dfrac{dx}{1+x^3} = \dfrac{1 - \sqrt{3}i}{2} \int_0^R \dfrac{dx}{1+x^3} \cr \left|\int_{\Gamma_2} \dfrac{dz}{1+z^3} \right| &\le \dfrac{CR}{R^3 - 1} \to 0\ \text{as $R \to \infty $}}$$ for some constant $C$. Now $f(z) = \dfrac{1}{1+z^3} = \dfrac{1}{(z+1)(z-e^{\pi i/3})(z-e^{-\pi i/3})}$ has one singularity inside $\Gamma$, namely at $z = e^{\pi i/3}$ (if $R > 1$), a simple pole with residue $$\dfrac{1}{(e^{\pi i/3}+1)(e^{\pi i/3} - e^{-\pi i/3})} = -\dfrac{1}{6} - \dfrac{\sqrt{3}}{6} i $$ Thus $$ \dfrac{3 - \sqrt{3}i}{2} \int_0^\infty f(x)\ dx = \lim_{R \to \infty} \oint_\Gamma f(z)\ dz = 2 \pi i \left(-\dfrac{1}{6} - \dfrac{\sqrt{3}}{6} i\right)$$ $$ \int_0^\infty f(x)\ dx = \dfrac{4 \pi i}{3 - \sqrt{3} i} \left(-\dfrac{1}{6} - \dfrac{\sqrt{3}}{6} i\right) = \dfrac{2 \sqrt{3} \pi}{9} $$
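As a numerical check of the final value (using SciPy's adaptive quadrature):

```python
import numpy as np
from scipy.integrate import quad

val, err = quad(lambda x: 1 / (1 + x**3), 0, np.inf)
print(val)                          # 1.2091995...
print(2 * np.sqrt(3) * np.pi / 9)   # same value, from the residue computation
```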
{ "language": "en", "url": "https://math.stackexchange.com/questions/242514", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 2, "answer_id": 0 }
Two curious "identities" on $x^x$, $e$, and $\pi$ A numerical calculation on Mathematica shows that $$I_1=\int_0^1 x^x(1-x)^{1-x}\sin\pi x\,\mathrm dx\approx0.355822$$ and $$I_2=\int_0^1 x^{-x}(1-x)^{x-1}\sin\pi x\,\mathrm dx\approx1.15573$$ A furthur investigation on OEIS (A019632 and A061382) suggests that $I_1=\frac{\pi e}{24}$ and $I_2=\frac\pi e$ (i.e., $\left\vert I_1-\frac{\pi e}{24}\right\vert<10^{-100}$ and $\left\vert I_2-\frac\pi e\right\vert<10^{-100}$). I think it is very possible that $I_1=\frac{\pi e}{24}$ and $I_2=\frac\pi e$, but I cannot figure them out. Is there any possible way to prove these identities?
You made a very nice observation! Often it is more important to make a good guess than just to solve a prescribed problem. So it is surprising that you made a correct guess, especially considering the complexity of the formula. I found a solution to the second integral here, and you can also find a solution to the first integral at the link on this site.
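The numerical evidence in the question is easy to reproduce; a SciPy sketch (quadrature near the endpoints is slightly delicate, but the default tolerance suffices for a sanity check):

```python
import numpy as np
from scipy.integrate import quad

f1 = lambda x: x**x * (1 - x)**(1 - x) * np.sin(np.pi * x)
f2 = lambda x: x**(-x) * (1 - x)**(x - 1) * np.sin(np.pi * x)

print(quad(f1, 0, 1)[0], np.pi * np.e / 24)  # both ~0.355822
print(quad(f2, 0, 1)[0], np.pi / np.e)       # both ~1.155727
```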
{ "language": "en", "url": "https://math.stackexchange.com/questions/242587", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "78", "answer_count": 3, "answer_id": 1 }
An inequality about the gradient of a harmonic function Let $G$ a open and connected set. Consider a function $z=2R^{-\alpha}v-v^2$ with $R$ that will be chosen suitably small, where $v$ is a harmonic function in $G$, and satisfies $$|x|^\alpha\leqslant v(x)\leqslant C_0|x|^\alpha. \ \ (*)$$ Then, $$\Delta z+f(z)=-2|\nabla v|^2+f(z)\leqslant-C|x|^{2\alpha-2}f(0)+Kz,$$ where $K$ is the Lipschitz constant of the function $f$. ($f$ is a Lipschitz-continuous function). I have no idea how to obtain this second inequality and I don't know if $(*)$ is really necessary.
This is an answer to a question raised in comment to another answer. Since it is of independent interest (perhaps of more interest than the original answer), I post it as a separate answer. Question: How do we construct homogeneous harmonic functions that are positive on a cone? Answer. Say, we want a positive homogeneous harmonic function $v$ on the cone $K=\{x\in\mathbb R^n:x_n>\kappa|x|\}$ where $\kappa\in (-1,1)$. Let $\alpha$ denote the degree of homogeneity of $v$: that is, $v(rx)=r^\alpha v(x)$ for all $x\in K$. The function $v$ is determined by the number $\alpha$ and by the restriction of $v$ to the unit spherical cap $S\cap K$. The Laplacian of $v$ can be decomposed into radial and tangential terms: $$\Delta v = v_{rr}+\frac{n-1}{r}v_r+\frac{1}{r^2}\Delta_S v$$ where $\Delta_S$ is the Laplace-Beltrami operator on the unit sphere $S$. Using the homogeneity of $v$, we find $v_r = \frac{\alpha }{r}v$ and $v_{rr}=\frac{\alpha(\alpha-1)}{r^2}v$. On the unit sphere $r=1$, and the radial term of $\Delta v$ simplifies to $\alpha(n+\alpha-2)v$. Therefore, the restriction of $v$ to $S\cap K$ must be an eigenfunction of $\Delta_S$: $$\Delta_S v + \mu v =0, \ \ \ \ \mu = \alpha(n+\alpha-2) $$ Where do we get such a thing from? Well, there is a well-developed theory of eigenfunctions and eigenvalues for $\Delta_S$ with the Dirichlet boundary conditions. In particular, it is known that the eigenfunction corresponding to the lowest eigenvalue $\mu_1$ has constant sign. So this is what we use for $v$. There are two drawbacks: * *(a) we cannot choose $\alpha$ ourselves, because $\alpha(n+\alpha-2)=\mu_1$ and $\mu_1$ is determined by the domain $S\cap K$. *(b) the function $v$ vanishes on the boundary of spherical cap, and therefore is not bounded away from $0$. Concerning (a), we know that $\mu_1$ is monotone with respect to domain: larger domains (in the sense of inclusion) have lower value of $\mu_1$. (Physical interpretation: a bigger drum emits lower frequencies.) Therefore, the value of $\alpha$ decreases when the domain is enlarged. Also, when $K$ is exactly half-space $\{x_n>0\}$, we know our positive harmonic function directly: it's $v(x)=x_n$, which is homogeneous of degree $\alpha=1$. Therefore, in the cones that are smaller than half-space we have $\alpha>1$, and in the cones that are larger than half-space we have $\alpha<1$. Concerning (b), we do the following: carry out the above construction in a slightly larger cone $K'$ and then restrict $v$ to $K$. Since the closure of $K\cap S$ is a compact subset of $K'\cap S$, the function $v$ attains a positive minimum there. We can multiply $v$ by a constant and achieve $1\le v\le C$ on $K\cap S$, hence $|x|^\alpha\le v(x)\le C|x|^\alpha$ on all of $K$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/242648", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Can a cubic that crosses the x axis at three points have imaginary roots? I have a cubic polynomial, $x^3-12x+2$, and when I try to find its roots by hand, I get two complex roots and one real one. Same if I use Mathematica. But when I plot the graph, it crosses the x-axis at three points. So if a cubic crosses the x-axis at three points, can it have imaginary roots? I think not, but I might be wrong.
The three roots are, approximately, $z = -3.545$, $0.167$ or $3.378$. A corollary of the fundamental theorem of algebra is that a cubic has, when counted with multiplicity, exactly three roots over the complex numbers. If your cubic has three real roots then it will not have any other roots. I've checked the plot, and you're right: there are three real roots. All I can think is that maybe you have made a mistake when substituting your complex "root" into the equation. Perhaps you might like to post the root and we can check it for you. Another thing to convince you there's an error. Let $z= r_1,r_2,r_3$ be the three real roots. If $z=c$ is your complex root then the conjugate $z = \overline{c}$ must also be a root. Thus: $$z^3-12z+2 = (z-r_1)(z-r_2)(z-r_3)(z-c)(z-\overline{c}) \, . $$ Hang on! That means your cubic equation starts $z^5 + \cdots$ and isn't a cubic after all. Contradiction! Either it doesn't have three real roots, or your complex "root" is not a root after all. Beware that computer programs use numerical solutions and you get rounding errors.
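A numerical check with NumPy confirms the three real roots:

```python
import numpy as np

roots = np.roots([1, 0, -12, 2])           # coefficients of z^3 - 12z + 2
print(roots)                               # ~3.378, 0.167, -3.545 (all real)
print(np.polyval([1, 0, -12, 2], roots))   # each value is ~0
```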
{ "language": "en", "url": "https://math.stackexchange.com/questions/242703", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 4, "answer_id": 1 }
coordinate system, nonzero vector field I'm interested in the following result (chapter 5, theorem 7 in volume 1 of Spivak's Differential Geometry): Let $X$ be a smooth vector field on an $n$-dimensional manifold M with $X(p)\neq0$ for some point $p\in M$. Then there exists a coordinate system $x^1,\ldots,x^n$ for $U$ (an open subset of $M$ containing $p$) in which $X=\frac{\partial}{\partial x^1}$. Could someone please explain, in words, how to prove this (or, if you have the book, how Spivak proves this)? I've read Spivak's proof, and have a few questions about it: 1) How is he using the assumption $X(p)\neq0$? 2) Why can we assume $X(0)=\frac{\partial}{\partial t^1}|_0$ (where $t^1,\ldots,t^n$ is the standard coordinate system for $\mathbb{R}^n$ and WLOG $p=0\in\mathbb{R}^n$)? 3) How do we know that in a neighborhood of the origin in $\mathbb{R}^n$, there's a unique integral curve through each point $(0,a^2,\ldots,a^n)$?
As a partial answer: uniqueness of integral curves I believe boils down to the existence and uniqueness theorem from ordinary differential equations. Furthermore, any coordinate chart can be translated so that 0 maps to the point $p$ on the manifold, whatever point you're interested in. You're just doing a composition. If you have $\varphi:\mathbb{R}^n\rightarrow M$ and $\varphi(x_0) = p$ then $\phi(x) = \varphi(x + x_0)$ is the new map. It's easily seen to be smooth and satisfy all the properties you need. Because you can always do this, we often just assume it is done and take $\varphi$ to be the map taking 0 to $p$ without further comment.
{ "language": "en", "url": "https://math.stackexchange.com/questions/242767", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
$S_n$ acting transitively on $\{1, 2, \dots, n\}$ I am reading Dummit and Foote, and in Section 4.1: Group Actions and Permutation Representations they give the following example of a group action: The symmetric group $G = S_n$ acts transitively in its usual action as permutations on $A = \{1, 2, \dots, n\}$. Note that the stabilizer in $G$ of any point $i$ has index $n = | A |$ in $S_n$ (My italics) I am having trouble seeing why the stabilizer has index $n$ in $S_n$. Is this due to the fact that we have $n$ elements of $A$ and thus $n$ distinct (left) cosets of the stabilizer $G_i$? And how would I see this if I were to work with the permutation representation of this action? The action is not very clear to me.
$\sigma \in \textrm{ Stab } (i)$ for $1\le i \le n$ iff $\sigma $ fixes $i.$ You just have to count the number of permutations that fix $i$ - working in the usual order, there are $n-1$ choices for the image of the first element in the domain, $n-2$ for the second, and so on, so that $|\textrm{ Stab } (i) | = (n-1)!.$ Apply Lagrange's Theorem.
{ "language": "en", "url": "https://math.stackexchange.com/questions/242812", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 2 }
What is the difference between Tautochrone curve and Brachistochrone curve as both are cycloid? What is the difference between Tautochrone curve and Brachistochrone curve as both are cycloid? If possible, show some reference please?
Mathematically, they both are the same curve, but they arise from slightly different but related problems. While the Brachistochrone is the path between two points that takes the shortest time to traverse given only constant gravitational force, the Tautochrone is the curve where, no matter at what height you start, any mass will reach the lowest point in equal time, again given constant gravity. These origins can be seen in the names (Greek): ταὐτό (tauto) — the same; βράχιστος (brachistos) — the shortest; χρόνος (chronos) — time. Both problems are solved via variational calculus. Here is an illustration of the Tautochrone from Wikipedia (by Claudio Rocchini). By comparison, the Brachistochrone problem is illustrated by Maxim Razin on Wikipedia.
{ "language": "en", "url": "https://math.stackexchange.com/questions/242885", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 1, "answer_id": 0 }
Are there infinitely many integers $n\ge 0$ such that $10^{2^n}+1$ is prime? It is clear that 11 and 101 are primes whose digit sum is 2. I wonder whether there are more, or infinitely many, such primes. At first, I thought of the numbers $10^n+1$. Soon, I saw that $n\neq km$ for odd $k>1$, otherwise $10^m+1$ is a factor. So, here is my question: Are there infinitely many integers $n\ge 0$ such that $10^{2^n}+1$ is prime? After a few minutes: I found that if $n=2$, $10^{2^n}+1=10001=73\times137$, not a prime; if $n=3$, $10^{2^n}+1=17\times5882353$, not a prime; if $n=4$, $10^{2^n}+1=353\times449\times641\times1409\times69857$, not a prime. Now I wonder if 11 and 101 are the only two primes with this property.
Many people wonder the same thing you do. Wilfrid Keller keeps track of what they find out. So far: prime for $n=0$ and $n=1$ only; known to be composite for all other $n$, $2\le n\le23$, and many other values of $n$. The first value for which primality status is unknown is $n=24$.
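The small cases are quick to reproduce with SymPy (testing $n\le4$; larger $n$ quickly becomes expensive):

```python
from sympy import isprime, factorint

for n in range(5):
    N = 10 ** (2 ** n) + 1
    print(n, "prime" if isprime(N) else factorint(N))
```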
{ "language": "en", "url": "https://math.stackexchange.com/questions/242949", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 3, "answer_id": 0 }
Multiple root of a polynomial modulo $p$ In my lecture notes on algebraic number theory they deal with the polynomial $$f=X^3+X+1, $$ and they say that if $f$ has multiple factors modulo a prime $p > 3$, then $f$ and $f' = 3X^2+1$ have a common factor modulo this prime $p$, and this is the linear factor $f − (X/3)f'$. Could you please help me see why this works? And moreover, how far can this be generalized?
If $f$ has a multiple factor, say $h$ (in any field containing the current base field), then with appropriate $g$ we have $$f(x)=h(x)^2\cdot g(x).$$ If you take its derivative, $f'(x)=2h(x)h'(x)g(x)+h(x)^2g'(x)$, it is still a multiple of $h(x)$, so $h$ is a common factor of $f$ and $f'$. If polynomials $u$ and $v$ have a common factor, then all of their linear combinations will have that common factor. Now in your particular example, note that the written $f-(X/3)f'$ is already linear (hence surely irreducible), so if there is a common factor, it must be this one.
{ "language": "en", "url": "https://math.stackexchange.com/questions/243093", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
can open sets be covered with another open set not much bigger? Is it correct that, for any open set $S \subset \mathbb{R}^n$, there exists an open set $D$ such that $S \subset D$ and $D \setminus S$ has measure zero? I think it is correct and I guess I have seen the proof somewhere before, but I cannot find it in any of my books. If it is wrong, please give me a counter-example. Also, is the same correct for a closed set whose interior is not of measure zero?
Assume that $S\subset D$ with $S,D$ open and $\mu(D\setminus S)=0$. If $D$ is not contained in $\overline S$, then the nonempty open set $D\setminus \overline S$ has positive measure. Therefore, $D\subseteq \overline S$. Therefore any open set $S$ with the property that $$\tag1\partial S\subseteq \partial(\mathbb R^n\setminus \overline S)$$ is a counterexample to your conjecture: Any open ball around a point in $\partial S$ then intersects $\mathbb R^n\setminus \overline S$ in a nonempty open set of positive measure. For example, an open ball or virtually any open set with a smooth boundary has property (1).
{ "language": "en", "url": "https://math.stackexchange.com/questions/243205", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Need someone to show me the solution Need someone to show me the solution and tell me how! $$(P÷N) × (N×(N+1)÷2) + N×(1-P) = N×(1-(P÷2)) + (P÷2)$$
\begin{align} \dfrac{P}{N} \times \dfrac{N(N+1)}2 + N\times (1-P) & = \underbrace{P \times \dfrac{N+1}{2} + N \times (1-P)}_{\text{Cancelling out the $N$ from the first term}}\\ & = \underbrace{\dfrac{PN + P}2 + N - NP}_{\text{$P \times (N+1) = PN + P$ and $N \times (1-P) = N - NP$}}\\ & = \underbrace{\dfrac{PN + P +2N - 2NP}2}_{\text{Take the lcm $2$.}}\\ & = \underbrace{\dfrac{P + 2N -NP}2}_{PN - 2NP = -NP}\\ & = \dfrac{P}2 + \dfrac{2N - NP}2\\ & = \underbrace{N\dfrac{2-P}2 + \dfrac{P}2}_{\text{Factor out $N$ from $2N-NP$}}\\ & = \underbrace{N \left(1 - \dfrac{P}2\right) + \dfrac{P}2}_{\text{Making use of the fact that $\dfrac{2-P}2 = \dfrac22 - \dfrac{P}2 = 1 - \dfrac{P}2$}} \end{align}
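The whole chain can also be verified symbolically; a one-liner with SymPy:

```python
from sympy import symbols, simplify

P, N = symbols('P N')
lhs = (P / N) * (N * (N + 1) / 2) + N * (1 - P)
rhs = N * (1 - P / 2) + P / 2
print(simplify(lhs - rhs))  # 0, so the identity holds
```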
{ "language": "en", "url": "https://math.stackexchange.com/questions/243345", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Question about Fatou's lemma in Rick Durrett's book. In Probability Theory and Examples, Theorem $1.5.4$, Fatou's Lemma, says: If $f_n \ge 0$ then $$\liminf_{n \to \infty} \int f_n d\mu \ge \int \left(\liminf_{n \to \infty} f_n \right) d\mu. $$ In the proof, the author says: Let $E_m \uparrow \Omega$ be sets of finite measure. I'm confused: without any information on the measure $\mu$, how can we guarantee this kind of sequence of sets must exist? Has the author missed some additional condition on $\mu$?
At the beginning of section 1.4 where Durrett defines the integral he assumes that the measure $\mu$ is $\sigma$-finite, so I guess it is a standing assumption about all integrals in this book that the underlying measure satisfies this. Remember that it is a probability book, so his main interest are finite measures.
{ "language": "en", "url": "https://math.stackexchange.com/questions/243393", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Limit $\lim_{x\rightarrow \infty} x^3 e^{-x^2}$ using L'Hôpital's rule I am trying to solve a limit using L'Hôpital's rule involving $e^x$. So my question is how to find $$\lim_{x\rightarrow \infty} x^3 e^{-x^2}$$ I know how to get to this point, but I'm lost after that: $$\lim_{x\rightarrow \infty} \frac{x^3}{e^{x^2}}$$
$$\begin{align} \lim_{x\rightarrow \infty} \dfrac{x^3}{e^{x^2}} &=\lim_{x\rightarrow \infty} \dfrac{3x^2}{2x\,e^{x^2}}\\ &=\lim_{x\rightarrow \infty} \dfrac{3x}{2e^{x^2}}\\ &=\lim_{x\rightarrow \infty} \dfrac{3}{4x\,e^{x^2}}\\ &=0 \end{align}$$ L'Hôpital's rule is applied in the first and third steps (an $\frac{\infty}{\infty}$ form each time); the second step just cancels a factor of $x$.
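SymPy confirms the limit directly, as a sanity check on the repeated L'Hôpital steps:

```python
from sympy import symbols, limit, oo, exp

x = symbols('x')
print(limit(x**3 * exp(-x**2), x, oo))  # 0
```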
{ "language": "en", "url": "https://math.stackexchange.com/questions/243457", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
The wedge sum of two circles has the fixed point property? I'm trying to find a continuous map from the wedge sum to itself for which this property fails. I couldn't find one; I need help. Thanks
If by circle you mean $S^1$, and the fixed point property is the claim that every continuous map into itself has a fixed point, then for two circles $A$ and $B$ joined at a point $x$: consider the map that rotates $A$ by 90 degrees and sends all of $B$ to the image of $x$ under that rotation. This map is continuous and has no fixed point.
{ "language": "en", "url": "https://math.stackexchange.com/questions/243588", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
On Ceva's Theorem? The famous Ceva's Theorem on a triangle $\Delta \text{ABC}$, $$\frac{AJ}{JB} \cdot \frac{BI}{IC} \cdot \frac{CK}{KA} = 1,$$ is usually proven using the property that the area of a triangle of a given height is proportional to its base. Is there any other proof of this theorem (using a different property)? EDIT: I would like it if someone could use the proof of Menelaus' Theorem.
I’m sure there is a slick proof lurking in $\mathbb{C}$. This is not it. We first prove the left implication. Place the origin $O$ at the concurrent point as figured. Since ratios of lengths are invariant under dilations and rotations, WLOG let the line through $B$ and $K$ be the real axis and scale the triangle such that $B=1$. As $I,J,K$ lie on the edges of $\triangle ABC$ $$ \begin{split} I&=A+t_1(1-A)\\J&=1+t_2(C-1)\\K&=C+t_3(A-C),\end{split}\tag{*}$$ for real $t_i$, from which it follows $$\tag{**}\frac{ A-I}{ I-B}\cdot \frac{ B-J}{ J-C}\cdot \frac{ C-K}{ K-A}=\frac{t_1t_2t_3}{(1-t_1)(1-t_2)(1-t_3)}.$$ Further, $$\begin{split} I&=r_1C\\J&=r_2A \\K&=r_3.\end{split}\tag{***}$$ If we equate $(*)$ and $(***)$ and solve for the real $r_i$, then we get three complex equations in $r_i,a_i,c_i, t_i$ with imaginary parts $0$. Solving these three imaginary parts for $t_i$ then gives $$\begin{split} t_1=\frac{a_1c_2-a_2c_1}{ a_1c_2-a_2c_1-c_2} &\implies \frac{1}{1-t_1}=\frac{c_2+a_2c_1-a_1c_2}{c_2}\\t_2=\frac{a_2}{a_1c_2+a_2-a_2c_1}&\implies \frac{1}{1-t_2}=\frac{a_1c_2+a_2-a_2c_1}{a_1c_2-a_2c_1}\\t_3=\frac{c_2}{c_2-a_2}&\implies \frac{1}{1-t_3}=\frac{a_2-c_2}{a_2}\end{split}$$ and plugging these into $(**)$ gives the desired result. To prove the right implication we follow basically the same procedure, i.e. place the origin $O$ at the intersection of Ceva lines $CI$ and $AJ$, rotate and scale so that $B=1$ and then prove $K\in\mathbb{R}$ by assuming the Ceva product is $1$, hence the Ceva lines are concurrent. The same algebraic manipulations as before show $$\begin{split} \text{Im}(K)&=t_3a_2-c_2(1-t_3)\\&=t_3a_2+c_2\frac{t_1t_2t_3}{(1-t_1)(1-t_2)}\\&=t_3a_2 +c_2\frac{a_2c_1-a_1c_2}{c_2}\frac{a_2}{a_1c_2-a_2c_1}t_3\\&=0\end{split}$$ and we conclude the result. $\qquad\square$ Not nice, but maybe you can streamline the argument somewhere.
{ "language": "en", "url": "https://math.stackexchange.com/questions/243663", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 4, "answer_id": 3 }
Counterexample to the Sobolev Embedding Theorem? Is there a counterexample showing that the Sobolev Embedding Theorem fails above the critical exponent? More precisely, please help me construct a Sobolev function $u\in W^{1,p}(R^n),\,p\in[1,n)$, such that $u\notin L^q(R^n)$, where $q>p^*:=\frac{np}{n-p}$.^-^
Here is how you can do this on the unit ball $\{x | \|x \| \le 1\}$: Set $u(x) = \|x\|^{-\alpha}$. Then $\nabla u$ is easy to find. Now you can compute $\|u\|_{L^q}$ and $\|u\|_{W^{1,p}}$ using polar coordinates. Play around until the $L^q$ norm is infinite while the $W^{1,p}$ norm is still finite.
{ "language": "en", "url": "https://math.stackexchange.com/questions/243733", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11", "answer_count": 1, "answer_id": 0 }
The smallest ring containing the square root of 2 and the rational numbers Can anyone prove that the smallest ring containing $\sqrt{2}$ and the rational numbers consists of all the numbers of the form $a+b\sqrt{2}$ (with $a,b$ rational)?
That ring must surely contain all numbers of the form $a+b\sqrt 2$ with $a,b\in\mathbb Q$ because these can be obtained by ring operations. Since that set is closed under addition and multiplication (because $(a+b\sqrt 2)+(c+d\sqrt2)=(a+c)+(b+d)\sqrt 2$ and $(a+b\sqrt2)\cdot(c+d\sqrt 2)=(ac+2bd)+(ad+bc)\sqrt 2$), it is already a ring, hence nothing bigger is needed.
{ "language": "en", "url": "https://math.stackexchange.com/questions/243786", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Showing that $ (1-\cos x)\left |\sum_{k=1}^n \sin(kx) \right|\left|\sum_{k=1}^n \cos(kx) \right|\leq 2$ I'm trying to show that: $$ (1-\cos x)\left |\sum_{k=1}^n \sin(kx) \right|\left|\sum_{k=1}^n \cos(kx) \right|\leq 2$$ It is equivalent to show that: $$ (1-\cos x) \left (\frac{\sin \left(\frac{nx}{2} \right)}{ \sin \left( \frac{x}{2} \right)} \right)^2 |\sin((n+1)x)|\leq 4 $$ Any idea ?
Using the identity $$ 1 - \cos x = 2\sin^2\left(\frac{x}{2}\right), $$ we readily identify the left-hand side as $$ 2 \sin^2 \left(\frac{nx}{2} \right) \left|\sin((n+1)x)\right|, $$ which is clearly less than or equal to $2$.
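Numerically the identity and the bound are easy to confirm (a sketch with NumPy; $n=7$ is an arbitrary choice):

```python
import numpy as np

n = 7
x = np.linspace(1e-4, 2*np.pi - 1e-4, 4000)   # avoid sin(x/2) = 0 at the endpoints
form = (1 - np.cos(x)) * (np.sin(n*x/2)/np.sin(x/2))**2 * np.abs(np.sin((n+1)*x))
closed = 2*np.sin(n*x/2)**2 * np.abs(np.sin((n+1)*x))

print(np.max(np.abs(form - closed)))   # ~1e-12: the two expressions agree
print(closed.max())                    # <= 2, comfortably below the required 4
```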
{ "language": "en", "url": "https://math.stackexchange.com/questions/243834", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
K critical graphs connectivity and cut vertex Show that a $k$-critical graph is connected. Furthermore, show that it does not have a vertex whose removal disconnects the graph (such a vertex is known as a cut vertex). I have managed to prove, I think, the first part: Let's assume $G$ is not connected. Since $\chi(G) = k$ (if $G_1,G_2,\ldots,G_r$ are the components of a disconnected graph $G$, then $\chi(G) = \max_{1\leq i\leq r} \chi(G_i)$), there is a component $G_1$ of $G$ such that $\chi(G_1)=k$. If $v$ is any vertex of $G$ which is not in $G_1$, then $G_1$ is a component of the subgraph $G - v$. Therefore, $\chi(G - v) = \chi(G_1) = k$. This contradicts the fact that $G$ is $k$-critical. Hence $G$ is connected. But I can't manage to prove the second part, i.e. that there is no cut vertex. Can anyone help?
Your statement is a special case of a more general theorem I was researching when I came across your question: T: A cut in a $k$-critical graph is not a clique. Proof: Assume that a cut $S$ in a $k$-critical graph $G=(V,E)$ is a clique. Let $\{C_1 \dots C_r\}$ be the components of $G \setminus S$. For each subgraph of $G$ of the form $C_i\cup S$, find a proper coloring $\phi_i$ with $k-1$ or fewer colors. Let $\{v_1 \dots v_m\}$ be the vertices of $S$; they receive pairwise different colors in each $\phi_i$ because $S$ is a clique. Hence you can permute the colors on $v_1 \dots v_m$ in such a way that $\phi_i(v_j)=j$. Finally, you unite all the colorings $\phi_1 \dots \phi_r$, thus getting a proper coloring of $G$ with only $k-1$ colors. This contradicts the $k$-criticality of $G$ and proves the theorem. Because a single vertex is a clique in every graph, your statement is proven. PS: how is resurrecting old threads looked upon in these parts?
{ "language": "en", "url": "https://math.stackexchange.com/questions/243887", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 0 }
Induction: How to prove propositions with universal quantifiers? In my book, they prove propositions about sums like this one with mathematical induction: $$1 + 3 + 5 + \cdots + (2n-1) = n^2$$ in all the exercises. However, recently I took some exercises from a different paper and instead of these it told me to prove this: $$\forall n \in N (11 / (10^{2n+1} + 1 ))$$ Or perhaps this: $$ \forall n \in N (n < 2^n) $$ (I couldn't find how to write the symbol for the set of natural numbers; $N$ stands for it here.) And now I'm lost. This is what I did with the first one: Prove that the proposition works for $n=1$: $$ 11 / (10^3+1) \implies 11/1001 \implies \exists x \in N(11x=1001)$$ Which is true, if you take $x = 91$. Assume $$\forall n \in N (11 / (10^{2n+1} + 1 ))$$ We have to prove: $$\forall n \in N (11 / (10^{2n+3} + 1 ))$$ We prove it: Which I don't know how to do. Curiously enough, my book only shows exercises with sums, so I guess that this exercise can be, somehow, written as a sum? I am not sure about that. Any ideas?
You are misunderstanding induction here. The $\forall$ is always part of it. For example, the real statement for your first result is $$\forall n\in\mathbb N: 1+3+...+(2n-1)=n^2$$ In general, if your statement is $$\forall n\in\mathbb N: P(n),$$ the principle of mathematical induction says it is enough to show: $$P(1)$$ and $$\forall n\in\mathbb N: P(n)\implies P(n+1)$$ So to prove that $11\mid 10^{2n+1}+1$, you first show $11\mid 10^{3}+1$ and then prove that if $11\mid 10^{2n+1}+1$ then $11\mid 10^{2n+3}+1$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/243959", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Basic Counting Problem I was reading a probability book and am having trouble conceptually with one of the examples. The following is a modification. Let's say that we have $3$ coins that we want to randomly assign into $3$ bins, with equal probability. We can label these coins $a_1$, $a_2$, $a_3$. What is the probability that all $3$ bins will be filled? The solution is: All possible combinations of assigning these coins to bin locations is $3^3 = 27$. The possible ways that all 3 bins can be filled is $3!$. The final probability is $6/27 = 2/9$. Alternatively this could be derived as $(3/3)\cdot(2/3)\cdot(1/3) = 2/9$. Now what if the coins are not labeled and are considered interchangeable. There are now $10$ configurations in which these bins can be filled: $\binom{3+3-1}{3} = 10$. Only one of these configurations will have all bins filled. Thus the probability here is $1/10$. Shouldn't these probabilities be the same? Am I missing something with the second scenario?
It depends on whether you're thinking of choosing each arrangement of balls with equal probability, or whether you place the balls in each bin with equal probability. I would say that the most natural way to think about this is the latter, since this is most likely how it would happen in real life. When dealing with events that are independent like the placement of the unlabelled balls, I would think about it as placing one first, then another and finally a third. This is kind of a way to artificially label the balls, which makes intuitive sense. In the real world, you would probably place the balls one after another. At the very least, you would decide which one goes where one after another. If you did decide on the arrangement of them all at once, the chances are you would not be doing so independently, and the outcome you describe with probability $\frac{1}{10}$ would occur. This idea of making events happen one after another is extremely useful throughout probability (actually, it is useful in combinatorics, to decide how many ways there are of doing things much more easily by splitting up a complicated scenario into smaller ones - it applies to probability when we have events happening with equal probability, and then the probability of an event occurring is the number of ways it can happen divided by the total number of events which could happen.) To bring it back to your example, assuming the balls are placed independently we can say that, if we want to fill up all three bins, the first one definitely goes into a bin, the second one goes into an empty one with probability $\frac{2}{3}$ and the third one goes into the final empty one with probability $\frac{1}{3}$, giving a total probability of $\frac{2}{9}$ for the event. If, however, we assume that each arrangement is equally likely, then we get the answer $\frac{1}{10}$. This is because some arrangements are more likely to happen than others when the balls are placed independently, so the probabilities get skewed accordingly. For example, we have already established that the event of them all being in different bins can happen in $6$ different ways. The event of them all being in the same bin can only happen in $3$ different ways because there are $3$ different bins, so it should occur with half the probability of the balls all being spread evenly.
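A quick simulation of the independent-placement model (a sketch with NumPy) reproduces $2/9$ rather than $1/10$:

```python
import numpy as np

rng = np.random.default_rng(1)
trials = 200_000
bins = rng.integers(0, 3, size=(trials, 3))            # each ball placed independently
all_filled = (np.sort(bins, axis=1) == [0, 1, 2]).all(axis=1)
print(all_filled.mean(), 2/9)                          # both ~0.2222
```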
{ "language": "en", "url": "https://math.stackexchange.com/questions/244016", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 3 }
How to use mathematical induction with inequalities? I've been using mathematical induction to prove propositions like this: $$1 + 3 + 5 + \cdots + (2n-1) = n^2$$ Which is an equality. I am, however, unable to solve inequalities. For instance, this one: $$ 1 + \frac{1}{2} + \frac{1}{3} + \cdots + \frac{1}{n} \leq \frac{n}{2} + 1 $$ Every time my book solves one, it seems to use a different approach, making it hard to analyze. I wonder if there is a more standard procedure for working with mathematical induction (inequalities). There are a lot of questions related to solving this kind of problem. Like these: * *How to prove $a^n < n!$ for all $n$ sufficiently large, and $n! \leq n^n$ for all $n$, by induction? - in this one, the asker was just given hints (it was homework) *How to prove $n < n!$ if $n > 2$ by induction? Ilya gave an answer, but there was little explanation (and I'd like some more details on the procedure) *how: mathematical induction prove inequation Also little explanation. Solving it with one line is great, but I'd prefer large blocks of text instead. Can you give me a more in depth explanation of the whole procedure?
I'm not sure what you expect exactly, but here is how I would do the inequality you mention. We start with the base step (as it is usually called); the important point is that induction is a process where you show that if some property holds for a number, it holds for the next. First step is to prove it holds for the first number. So, in this case, $n=1$ and the inequality reads $$ 1<\frac12+1, $$ which obviously holds. Now we assume the inductive hypothesis, in this case that $$ 1+\frac12+\cdots+\frac1n<\frac{n}2+1, $$ and we try to use this information to prove it for $n+1$. Then we have $$ 1+\frac12+\cdots+\frac1n+\frac1{n+1}=\left(1+\frac12+\cdots+\frac1n\right)+\frac1{n+1}. $$ I inserted the brackets to show that we have the sum we know about, through the inductive hypothesis: so $$ 1+\frac12+\cdots+\frac1n+\frac1{n+1}<\frac{n}2+1+\frac1{n+1}. $$ Now comes the nontrivial part (though not hard in this case), where we need to somehow get $(n+1)/2+1$. Note that this is equal to $n/2+1$ (which we already have) plus $1/2$. And this suggests the proof: as $n\geq1$, $1/(n+1)\leq1/2$. So $$ 1+\frac12+\cdots+\frac1n+\frac1{n+1}<\frac{n}2+1+\frac1{n+1}\leq\frac{n}2+1+\frac12=\frac{n+1}2+1. $$ So, assuming the inequality holds for $n$, we have shown it holds for $n+1$. So, by induction, the inequality holds for all $n$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/244097", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 4, "answer_id": 2 }
How are complex numbers useful to real number mathematics? Suppose I have only real number problems, where I need to find solutions. By what means could knowledge about complex numbers be useful? Of course, the obvious applications are: * *contour integration *understanding the radius of convergence of power series *algebra with $\exp(ix)$ instead of $\sin(x)$ No need to elaborate on these ones :) I'd be interested in some more suggestions! In a way this question is asking how to show the advantage of complex numbers for real number mathematics of (scientific) everyday problems. Ideally these examples should provide a considerable insight and not just reformulation. EDIT: These examples are the most real world I could come up with. I could imagine an engineer doing work that leads to some real world product in a few months, who might need integrals or sine/cosine. Basically I'm looking for examples that can be shown to a large audience of laymen for the work they already do. Examples like quantum mechanics are hard to justify, because due to many-particle problems QM rarely makes any useful predictions (where experiments aren't needed anyway). Anything closer to application?
I have used complex numbers to solve real life problems: - Digital Signal Processing, Control Engineering: Z-Transform. - AC Circuits: Phasors. This is a handful of applications broadly labeled under load-flow studies and resonant frequency devices (with electric devices modeled into resistors, inductors, capacitors at AC steady state). - Analog Computers and Control Engineering: Laplace Transform. Not sure if it falls into Complex Numbers, but since it has (x,y) form - CNC programming: scaling, rotating coordinates. - Rotating Dynamic Balancers. ... maybe some more but I can't recall.
{ "language": "en", "url": "https://math.stackexchange.com/questions/244240", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 5, "answer_id": 4 }
Poker, number of three of a kind, multiple formulas I wanted to calculate some poker hands; for a three of a kind I inferred, 1) every card rank can form a 'three of a kind' and there are 13 card ranks, 2) there are $\binom{4}{3}$ ways to choose three cards out of the four suits of every card rank, and 3) for the remaining cards I can choose two out of 49 cards, i.e. $\binom{49}{2}$. Together the formula is $$ 13 \cdot \binom{4}{3} \cdot \binom{49}{2} = 61152 $$ But on Wikipedia I found a different formula, namely $$ \binom{13}{1} \binom{4}{3} \binom{12}{2} \left( \binom{4}{1} \right)^2 = 54912 $$ which also makes total sense to me (1. card rank, 2. subset of suits, 3. choose from the remaining card ranks, 4. assign suits). But I can't see why my first formula is wrong, can anybody explain this to me?
We can count more or less like you did, using $\dbinom{13}{1}\dbinom{4}{3}\dbinom{48}{2}$ (note the small change), and then subtracting the full houses. Or else after we have picked the kind we have $3$ of, and the actual cards, we can pick the two "useless" cards. The kinds of these can be chosen in $\dbinom{12}{2}$ ways. Once the kinds have been chosen, the actual cards can be chosen in $\dbinom{4}{1}^2$ ways, for a total of $$\binom{13}{1}\binom{4}{3}\binom{12}{2}\binom{4}{1}^2.$$
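Wikipedia's count can be confirmed by brute force over all $\binom{52}{5}=2{,}598{,}960$ hands (a sketch in plain Python; it takes a little while but finishes):

```python
from itertools import combinations
from collections import Counter

deck = [(rank, suit) for rank in range(13) for suit in range(4)]
trips = 0
for hand in combinations(deck, 5):
    counts = sorted(Counter(rank for rank, _ in hand).values())
    if counts == [1, 1, 3]:            # exactly three of a kind (no full house, no quads)
        trips += 1
print(trips)                           # 54912
```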
{ "language": "en", "url": "https://math.stackexchange.com/questions/244340", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
If $f^2$ is Riemann Integrable is $f$ always Riemann Integrable? Problem: Suppose that $f$ is a bounded, real-valued function on $[a,b]$ such that $f^2\in R$ (i.e. it is Riemann-Integrable). Must it be the case that $f\in R$ ? Thoughts: I think that this is not necessarily true, but I am having trouble refuting or even proving the above. Of course, the simplest way to prove that it is not necessarily true would be to give an example, but I am unable to think of one! I also have tried using $\phi(y)=\sqrt y$ and composing this with $f^2$ (to try show $f$ is continuous); however, the interval $[a,b]$ may contain negative numbers so I can't utilise $\phi$ in that case. Question: Does there exist a function $f$ such that $f^2\in R$ but $f$ $\not\in R$ ? Or conversely, if $f^2\in R$ does this always imply $f$ $\in R$ ? (If so, could you provide a way of proving this).
$$f=2\cdot\mathbf 1_{[a,b]\cap\mathbb Q}-1$$ Here $f^2\equiv 1$ is Riemann integrable, while $f$ itself is discontinuous at every point of $[a,b]$ and hence not Riemann integrable.
{ "language": "en", "url": "https://math.stackexchange.com/questions/244409", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11", "answer_count": 4, "answer_id": 0 }
Using Zorn's lemma show that $\mathbb R^+$ is the disjoint union of two sets closed under addition. Let $\Bbb R^+$ be the set of positive real numbers. Use Zorn's Lemma to show that $\Bbb R^+$ is the union of two disjoint, non-empty subsets, each closed under addition.
Let $\mathcal{P}$ the set of the disjoint pairs $(A,B)$, where $A,B\subseteq\mathbb{R}^+$ are not empty and each one is closed under addition and multiplication by a positive rational number. Note that $\mathcal{P}\neq\emptyset$ because if we consider $X=\mathbb{Q}^+$ and $Y=\{n\sqrt{2}:n\in\mathbb{Q}^+\}$, then $(X,Y)\in\mathcal{P}$. Define $\leq$ as follows: $(X_1,Y_1)\leq(X_2,Y_2)$ if and only if $X_1\subseteq X_2$ and $Y_1\subseteq Y_2$, for all $(X_1,Y_1),(X_2,Y_2)\in\mathcal{P}$. Clearly, $(\mathcal{P},\leq)$ is a partially ordered set. Furthermore, it is easy to see that every chain in $(\mathcal{P},\leq)$ has an upper bound. We can now apply Zorn's lemma, so $(\mathcal{P},\leq)$ has a maximal element. Let $(A,B)\in\mathcal{P}$ a maximal element of $(\mathcal{P},\leq)$. We only have to show that $A\cup B=\mathbb{R}^+$. Suppose that $\mathbb{R}^+\not\subseteq A\cup B$. Therefore, there exists $x\in\mathbb{R}^+$ such that $x\not\in A\cup B$. Consider $A_x=\{kx+a:k\in\mathbb{Q}^+\cup\{0\}\textrm{ and }a\in A\}$ and $B_x=\{kx+b:k\in\mathbb{Q}^+\cup\{0\}\textrm{ and }b\in B\}$. Then $A\subseteq A_x$, $B\subseteq B_x$ and $A_x,B_x\subseteq\mathbb{R}^+$. It is easy to verify that $A_x$ and $B_x$ are closed under addition and multiplication by a positive rational number. If either $A_x\cap B=\emptyset$ or $A\cap B_x=\emptyset$, then $(A,B)$ would not be a maximal element of $(\mathcal{P},\leq)$. Therefore, $A_x\cap B\neq\emptyset$ and $A\cap B_x\neq\emptyset$, so there is some $q_0\in\mathbb{Q}^+$ and $a\in A$ such that $q_0x+a\in B$, and there is some $q_1\in\mathbb{Q}^+$ and $b\in B$ such that $q_1x+b\in A$. Note that $q_0,q_1\neq 0$ because $A\cap B=\emptyset$. It is easy to see that $q_0q_1x+q_1a+q_0b\in A\cap B$, so $A\cap B\neq\emptyset$, a contradiction. Therefore, $A\cup B=\mathbb{R}^+$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/244456", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 4, "answer_id": 3 }
Combinatorial Proof Of Binomial Double Counting Let $a$, $b$, $c$ and $n$ be non-negative integers. By counting the number of committees consisting of $n$ sentient beings that can be chosen from a pool of $a$ kittens, $b$ crocodiles and $c$ emus in two different ways, prove the identity $$\sum\limits_{\substack{i,j,k \ge 0; \\ i+j+k = n}} {{a \choose i}\cdot{b \choose j}\cdot{c \choose k} = {a+b+c \choose n}}$$ where the sum is over all non-negative integers $i$, $j$ and $k$ such that $i+j+k=n.$ I know that this is some kind of combinatorial proof. My biggest problem is that I've never really done a proof.
$$\sum_{\substack{i+j+k=n\\ i,\,j,\,k\,\geq\,0}} \binom{a}{i}\binom{b}{j}\binom{c}{k} = \binom{a+b+c}{n}:\ ?$$ \begin{align} \sum_{\substack{i+j+k=n\\ i,\,j,\,k\,\geq\,0}} \binom{a}{i}\binom{b}{j}\binom{c}{k} &=\sum_{\ell_{a},\,\ell_{b},\,\ell_{c}\,\geq\,0}\binom{a}{\ell_{a}}\binom{b}{\ell_{b}}\binom{c}{\ell_{c}}\, \delta_{\ell_{a}+\ell_{b}+\ell_{c},\,n} \\ &=\sum_{\ell_{a},\,\ell_{b},\,\ell_{c}\,\geq\,0}\binom{a}{\ell_{a}}\binom{b}{\ell_{b}}\binom{c}{\ell_{c}} \oint_{\vert z\vert=1}\frac{1}{z^{-\ell_{a}-\ell_{b}-\ell_{c}+n+1}}\,\frac{{\rm d}z}{2\pi{\rm i}} \\ &=\oint_{\vert z\vert=1}\frac{1}{z^{n+1}} \left[\sum_{\ell_{a}\geq 0}\binom{a}{\ell_{a}}z^{\ell_{a}}\right] \left[\sum_{\ell_{b}\geq 0}\binom{b}{\ell_{b}}z^{\ell_{b}}\right] \left[\sum_{\ell_{c}\geq 0}\binom{c}{\ell_{c}}z^{\ell_{c}}\right] \frac{{\rm d}z}{2\pi{\rm i}} \\ &=\oint_{\vert z\vert=1}\frac{(1+z)^{a}(1+z)^{b}(1+z)^{c}}{z^{n+1}}\,\frac{{\rm d}z}{2\pi{\rm i}} =\oint_{\vert z\vert=1}\frac{(1+z)^{a+b+c}}{z^{n+1}}\,\frac{{\rm d}z}{2\pi{\rm i}} \\ &=\binom{a+b+c}{n} \end{align}
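The identity is also easy to confirm by brute force for sample values (a sketch; the particular $a,b,c,n$ are arbitrary):

```python
from math import comb

a, b, c, n = 5, 7, 4, 6
lhs = sum(comb(a, i) * comb(b, j) * comb(c, n - i - j)
          for i in range(n + 1) for j in range(n - i + 1))
print(lhs, comb(a + b + c, n))   # both 8008
```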
{ "language": "en", "url": "https://math.stackexchange.com/questions/244504", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 2 }
Finding generalized eigenbasis * *For a complex square matrix $M$, a maximal set of linearly independent eigenvectors for an eigenvalue $\lambda$ is determined by solving $$ (M - \lambda I) x = 0. $$ for a basis in the solution subspace directly as a homogeneous linear system. *For a complex square matrix $M$, a generalized eigenvector for an eigenvalue $\lambda$ with algebraic multiplicity $c$ is defined as a vector $u$ s.t. $$ (M - \lambda I)^c u = 0. $$ I wonder if a generalized eigenbasis in Jordan decomposition is also determined by finding a basis in the solution subspace of $(M - \lambda I)^c u = 0$ directly in the same way as for an eigenbasis? Or it is more difficult to solve directly as a homogeneous linear system, and some tricks are helpful? Thanks!
Look at the matrix $$M=\pmatrix{1&1\cr0&1\cr}$$ Taking $\lambda=1$, $c=2$, Then $(M-\lambda I)^c$ is the zero matrix, so any two linearly independent vectors will do as a basis for the solution space of $(M-\lambda I)^cu=0$. But that's not what you want: first, you want as many linearly independent eigenvectors as you can find, then you can go hunting for generalized eigenvectors.
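To see this in a computer algebra system (a sketch using SymPy, with the matrix above):

```python
import sympy as sp

M = sp.Matrix([[1, 1],
               [0, 1]])
N = M - sp.eye(2)                 # M - lambda*I for lambda = 1

print(N.nullspace())              # a single eigenvector direction: (1, 0)
print((N**2).nullspace())         # all of the plane, since (M - I)^2 = 0

P, J = M.jordan_form()            # SymPy picks a genuine Jordan chain for the basis
print(P, J)
```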
{ "language": "en", "url": "https://math.stackexchange.com/questions/244567", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Value of $\lim_{n\to \infty}\frac{1^n+2^n+\cdots+(n-1)^n}{n^n}$ I remember that a couple of years ago a friend showed me and some other people the following expression: $$\lim_{n\to \infty}\frac{1^n+2^n+\cdots+(n-1)^n}{n^n}.$$ As shown below, I can prove that this limit exists by the monotone convergence theorem. I also remember that my friend gave a very dubious "proof" that the value of the limit is $\frac{1}{e-1}$. I cannot remember the details of the proof, but I am fairly certain that it made the common error of treating $n$ as a variable in some places at some times and as a constant in other places at other times. Nevertheless, numerical analysis suggests that the value my friend gave was correct, even if his methods were flawed. My question is then: What is the value of this limit and how do we prove it rigorously? (Also, for bonus points, What might my friend's original proof have been and what exactly was his error, if any?) I give my convergence proof below in two parts. In both parts, I define the sequence $a_n$ by $a_n=\frac{1^n+2^n+\cdots+(n-1)^n}{n^n}$ for all integers $n\ge 2$. First, I prove that $a_n$ is bounded above by $1$. Second, I prove that $a_n$ is increasing. (1) The sequence $a_n$ satisfies $a_n<1$ for all $n\ge 2$. Note that $a_n<1$ is equivalent to $1^n+2^n+\cdots+(n-1)^n<n^n$. I prove this second statement by induction. Observe that $1^2=1<4=2^2$. Now suppose that $1^n+2^n+\cdots+(n-1)^n<n^n$ for some integer $n\ge 2$. Then $$1^{n+1}+2^{n+1}+\cdots+(n-1)^{n+1}+n^{n+1}\le(n-1)(1^n+2^n+\cdots+(n-1)^n)+n^{n+1}<(n-1)n^n+n^{n+1}<(n+1)n^n+n^{n+1}\le n^{n+1}+(n+1)n^n+\binom{n+1}{2}n^{n-1}+\cdots+1=(n+1)^{n+1}.$$ (2) The sequence $a_n$ is increasing for all $n\ge 2$. We must first prove the following preliminary proposition. (I'm not sure if "lemma" is appropriate for this.) (2a) For all integers $n\ge 2$ and $2\le k\le n$, $\left(\frac{k-1}{k}\right)^n\le\left(\frac{k}{k+1}\right)^{n+1}$. We observe that $k^2-1\le kn$, so upon division by $k(k^2-1)$, we get $\frac{1}{k}\le\frac{n}{k^2-1}$. By Bernoulli's Inequality, we may find: $$\frac{k+1}{k}\le 1+\frac{n}{k^2-1}\le\left(1+\frac{1}{k^2-1}\right)^n=\left(\frac{k^2}{k^2-1}\right)^n.$$ A little multiplication and we arrive at $\left(\frac{k-1}{k}\right)^n\le\left(\frac{k}{k+1}\right)^{n+1}$. We may now first apply this to see that $\left(\frac{n-1}{n}\right)^n\le\left(\frac{n}{n+1}\right)^{n+1}$. Then we suppose that for some integer $2\le k\le n$, we have $\left(\frac{k}{n}\right)^n\le\left(\frac{k+1}{n+1}\right)^{n+1}$. Then: $$\left(\frac{k-1}{n}\right)^n=\left(\frac{k}{n}\right)^n\left(\frac{k-1}{k}\right)^n\le\left(\frac{k+1}{n+1}\right)^{n+1}\left(\frac{k}{k+1}\right)^{n+1}=\left(\frac{k}{n+1}\right)^{n+1}.$$ By backwards (finite) induction from $n$, we have that $\left(\frac{k}{n}\right)^n\le\left(\frac{k+1}{n+1}\right)^{n+1}$ for all integers $1\le k\le n$, so: $$a_n=\left(\frac{1}{n}\right)^n+\left(\frac{2}{n}\right)^n+\cdots+\left(\frac{n-1}{n}\right)^n\le\left(\frac{2}{n+1}\right)^{n+1}+\left(\frac{3}{n+1}\right)^{n+1}+\cdots+\left(\frac{n}{n+1}\right)^{n+1}<\left(\frac{1}{n+1}\right)^{n+1}+\left(\frac{2}{n+1}\right)^{n+1}+\left(\frac{3}{n+1}\right)^{n+1}+\cdots+\left(\frac{n}{n+1}\right)^{n+1}=a_{n+1}.$$ (In fact, this proves that $a_n$ is strictly increasing.) By the monotone convergence theorem, $a_n$ converges. I should note that I am not especially well-practiced in proving these sorts of inequalities, so I may have given a significantly more complicated proof than necessary. 
If this is the case, feel free to explain in a comment or in your answer. I'd love to get a better grip on these inequalities in addition to finding out what the limit is. Thanks!
The limit is $\frac{1}{e-1}$. I wrote a paper on this sum several years ago and used the Euler-Maclaurin formula to prove the result. The paper is "The Euler-Maclaurin Formula and Sums of Powers," Mathematics Magazine, 79 (1): 61-65, 2006. Basically, I use the Euler-Maclaurin formula to swap the sum with the corresponding integral. Then, after some asymptotic analysis on the error term provided by Euler-Maclaurin we get $$\lim_{n\to \infty}\frac{1^n+2^n+\cdots+(n-1)^n}{n^n} = \sum_{k=0}^{\infty} \frac{B_k}{k!},$$ where $B_k$ is the $k$th Bernoulli number. The exponential generating function of the Bernoulli numbers then provides the $\frac{1}{e-1}$ result. I should mention that I made a mistake in the original proof, though! The correction, as well as the generalization $$\lim_{n\to \infty}\frac{1^n+2^n+\cdots+(n+k)^n}{n^n} = \frac{e^{k+1}}{e-1}$$ are contained in a letter to the editor (Mathematics Magazine 83 (1): 54-55, 2010).
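The value is easy to corroborate numerically (a sketch; plain floating point, where the tiny $(k/n)^n$ terms simply underflow to $0$):

```python
import math

def a(n):
    return sum((k / n)**n for k in range(1, n))

for n in (10, 100, 1000, 10000):
    print(n, a(n))                       # increasing toward the limit
print("1/(e-1) =", 1 / (math.e - 1))     # 0.5819767...
```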
{ "language": "en", "url": "https://math.stackexchange.com/questions/244657", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "15", "answer_count": 3, "answer_id": 2 }
Can someone show me how to prove the following? I have $f(x)=g(ax+b)$, where $a$ and $b$ are constant. I need to show that $\nabla f(x)=a\nabla g(x)$ and $\nabla^2 f(x)=a^2\nabla^2 g(x)$... I was thinking that the final answer should have $ax+b$ in it, but apparently it can be shown that the above is true???
In this problem $f(x) = g(h(x))$, where $h(x) = ax + b$. I'm going to consider the case where $a$ is a matrix rather than a scalar, because it's useful and no more difficult. You can assume $a$ is a scalar if you'd like. Let's establish some notation. Recall that if $F:\mathbb R^n \to \mathbb R^m$ is differentiable at $x$, then $F'(x)$ is an $m \times n$ matrix. In the special case where $m = 1$, $F'(x)$ is a $1 \times n$ matrix. I'm going to use the convention that $\nabla F(x) = F'(x)^T$, so $\nabla F(x)$ is a column vector rather than a row vector. Then $G(x) = \nabla F(x)$ is a function from $\mathbb R^n \to \mathbb R^n$, and $\nabla^2 F(x) = G'(x)$, which is an $n \times n$ matrix. The chain rule tells us that \begin{align} f'(x) &= g'(h(x))h'(x) \\ &= g'(ax + b) a. \end{align} It follows that \begin{align} \nabla f(x) &= a^T g'(ax+b)^T \\ &= a^T \nabla g(ax + b). \end{align} That is our formula for $\nabla f(x)$. Preparing to use the chain rule again, we can express $\nabla f(x)$ as $\nabla f(x) = w(h(x))$, where $w(x) = a^T \nabla g(x)$. Note that $w'(x) = a^T \nabla^2 g(x)$. Applying the chain rule to $z(x) = \nabla f(x) = w(h(x))$, we see that \begin{align} \nabla^2 f(x) &= w'(h(x))h'(x) \\ &= a^T \nabla^2 g(ax + b) a. \end{align} This is our formula for $\nabla^2 f(x)$.
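A finite-difference check of the gradient formula (a sketch with NumPy; the particular smooth $g$ below is an arbitrary choice of mine, not from the question):

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 3, 3
A = rng.standard_normal((m, n))       # plays the role of `a` (here a matrix)
b = rng.standard_normal(m)

g      = lambda y: np.sum(np.sin(y)) + 0.5 * (y @ y)   # an arbitrary smooth g
grad_g = lambda y: np.cos(y) + y                       # its gradient
f      = lambda x: g(A @ x + b)

def num_grad(h, x, eps=1e-6):
    e = np.eye(len(x))
    return np.array([(h(x + eps*e[i]) - h(x - eps*e[i])) / (2*eps)
                     for i in range(len(x))])

x = rng.standard_normal(n)
print(np.allclose(num_grad(f, x), A.T @ grad_g(A @ x + b), atol=1e-5))   # True
```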
{ "language": "en", "url": "https://math.stackexchange.com/questions/244705", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
How to understand $\operatorname{cf}(2^{\aleph_0}) > \aleph_0$ As a corollary of König's theorem, we have $\operatorname{cf}(2^{\aleph_0}) > \aleph_0$. On the other hand, we have $\operatorname{cf}(\aleph_\omega) = \aleph_0$. Why can't the logic in the latter equation apply to the former one? To be precise, why can't we have $\sup \{2^n : n<\omega\} = 2^{\aleph_0}$?
Note that in ordinal arithmetic $2^\omega$ is the supremum of $2^n$ for all $n<\omega$, so it is $\omega$ and thus has cofinality $\aleph_0$. $\omega$ and $\aleph_0$ are the same set but they nevertheless behave differently in practice -- because tradition is to use the notation $\omega$ for operations where a limiting process is involved and $\aleph_0$ for operations where the "countably infinite" is used all at once in a single step. Cardinal exponentiation $2^{\aleph_0}$ is such an all-at-once process.
{ "language": "en", "url": "https://math.stackexchange.com/questions/244790", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 1 }
Infinitely many primes of the form $4n+1$: proof Question: Are there infinitely many primes of the form $4n+3$ and $4n+1$? My attempt: Suppose on the contrary that there exist finitely many primes of the form $4n+3$, say $k+1$ of them: $3,p_1,p_2,....,p_k$. Consider $N = 4p_1p_2p_3...p_k+3$; $N$ cannot be a prime of this form. So suppose that $N=q_1...q_r$, where $q_i\in P$. Claim: At least one of the $q_i$'s is of the form $4n+3$. Proof of my claim: $N$ is odd $\Rightarrow q_1,...,q_r$ are odd $\Rightarrow q_i \equiv 1\ (\text{mod }4)$ or $q_i \equiv 3\ (\text{mod }4)$. If all $q_1,...,q_r$ were of the form $4n+1$, then, since $(4n+1)(4m+1)=16nm+4n+4m+1 = 4(\cdots) +1$, we would have $N=q_1...q_r \equiv 1\ (\text{mod }4)$. But $N=4p_1..p_k+3$, i.e. $N\equiv 3\ (\text{mod }4)$, which is a contradiction. Therefore, at least one $q_j \equiv 3\ (\text{mod }4)$. Such a $q_j$ satisfies $q_j=p_i$ for some $1\leq i \leq k$ or $q_j =3$. If $q_j=p_i\neq 3$ then $q_j \mid N = 4p_1...p_k + 3$ together with $q_j \mid 4p_1...p_k$ gives $q_j \mid 3$, so $q_j=3$. Contradiction! If $q_j=3$ (so $q_j\neq p_i$ for $1\leq i \leq k$) then $q_j \mid N = 4 p_1...p_k + 3$ gives $q_j \mid 4p_1...p_k$, so $q_j=p_t$ for some $1 \leq t \leq k$. Contradiction! In fact, there must also be infinitely many primes of the form $4n+1$ (according to my search), but the above method does not work for that proof. I could not understand why it does not work. Could you please show me? Regards
There are infinitely many primes in both the arithmetic progressions $4k+1$ and $4k-1$. Euclid's proof of the infinitude of primes can be easily modified to prove the existence of infinitely many primes of the form $4k-1$. Sketch of proof: assume that the set of these primes is finite, given by $\{p_1=3,p_2=7,\ldots,p_k\}$, and consider the huge number $M=4p_1^2 p_2^2\cdots p_k^2-1$. $M$ is a number of the form $4k-1$, hence by the fundamental theorem of Arithmetic it has a prime divisor of the same form. But $\gcd(M,p_j)=1$ for any $j\in[1,k]$, hence we have a contradiction. Given that there are infinitely many primes in the AP $4k-1$, is it possible that there are just a finite number of primes in the AP $4k+1$? That does not seem reasonable, and indeed it does not occur. Let us define, for any $n\in\mathbb{N}^+$, $\chi_4(n)$ as $1$ if $n=4k+1$, as $-1$ if $n=4k-1$, as $0$ if $n$ is even. $\chi_4(n)$ is a periodic and multiplicative function (a Dirichlet character) associated with the $L$-function $$ L(\chi_4,s)=\sum_{n\geq 1}\frac{\chi_4(n)}{n^s}=\!\!\!\!\prod_{p\equiv 1\!\!\pmod{4}}\left(1-\frac{1}{p^s}\right)^{-1}\prod_{p\equiv 3\!\!\pmod{4}}\left(1+\frac{1}{p^s}\right)^{-1}. $$ The last equality follows from Euler's product, which allows us to state $$ L(\chi_4,s)=\prod_{p}\left(1+\frac{1}{p^s}\right)^{-1}\prod_{p\equiv 1\!\!\pmod{4}}\frac{p^s+1}{p^s-1}=\frac{\zeta(2s)}{\zeta(s)}\prod_{p\equiv 1\!\!\pmod{4}}\frac{p^s+1}{p^s-1}. $$ If the primes in the AP $4k+1$ were finite, the limit of the RHS as $s\to 1^+$ would be $0$. On the other hand, $$ \lim_{s\to 1^+}L(\chi_4,s)=\sum_{n\geq 0}\frac{(-1)^n}{2n+1}=\int_{0}^{1}\sum_{n\geq 0}(-1)^n x^{2n}\,dx=\int_{0}^{1}\frac{dx}{1+x^2}=\frac{\pi}{4}\color{red}{\neq} 0 $$ so there have to be infinitely many primes of the form $4k+1$, too. With minor adjustments, the same approach shows that there are infinitely many primes in both the APs $6k-1$ and $6k+1$.
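For what it's worth, a simple sieve (a sketch) shows the two classes growing together and staying roughly balanced:

```python
def primes_upto(n):
    sieve = bytearray([1]) * (n + 1)
    sieve[0:2] = b"\x00\x00"
    for p in range(2, int(n**0.5) + 1):
        if sieve[p]:
            sieve[p*p::p] = bytearray(len(range(p*p, n + 1, p)))
    return [i for i, s in enumerate(sieve) if s]

ps = primes_upto(10**6)
print(sum(p % 4 == 1 for p in ps), sum(p % 4 == 3 for p in ps))
# roughly equal counts (near 39000 each), as the theorem predicts
```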
{ "language": "en", "url": "https://math.stackexchange.com/questions/244915", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "39", "answer_count": 7, "answer_id": 0 }
Correlated Poisson Distribution $X_1$ and $X_2$ are discrete stochastic variables. They can both be modeled by a Poisson process with arrival rates $\lambda_1$ and $\lambda_2$ respectively. $X_1$ and $X_2$ have a constant correlation $\rho$. Is there an analytic equation that describes the joint probability mass function $P(X_1= i,X_2= k)$?
Consider this model that could generate correlated Poisson variables. Let $Y$, $Y_1$ and $Y_2$ be three independent Poisson variable with parameters $r$, $\lambda_1$ and $\lambda_2$. Let $$X_i=Y_i+Y$$ for $i=1,2$. Then $X_1$ and $X_2$ are both Poisson with parameters $\lambda_1$ and $\lambda_2$. They have the correlation $$\rho=\frac{r}{\sqrt{(\lambda_1+r)(\lambda_2+r)}}$$ Now the joint distribution can be derived as $$P[X_1=i,X_2=j]=e^{-(r+\lambda_1+\lambda_2)}\sum_{k=0}^{i\wedge j}\frac{r^k}{k!}\frac{\lambda_1^{(i-k)}}{(i-k)!}\frac{\lambda_2^{(j-k)}}{(j-k)!}$$ The case for a bivariate Poisson process is immediate from here. You could look at the Johnson and Kotz book on multivariate discrete distributions for more information (this construction of a bivariate Poisson distribution is not unique). Also, it has the drawback that $\rho \in [0, \min(\lambda_1, \lambda_2)/\sqrt{\lambda_1\lambda_2} ]$ when $\lambda_1 \neq \lambda_2$ as discussed by Genest et al. 2018.
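A quick Monte Carlo check of the correlation formula (a sketch with NumPy; the parameter values are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(2)
lam1, lam2, r = 2.0, 3.0, 1.5          # assumed sample parameters
N = 500_000

Y  = rng.poisson(r,    N)              # the shared component
X1 = rng.poisson(lam1, N) + Y
X2 = rng.poisson(lam2, N) + Y

print(np.corrcoef(X1, X2)[0, 1])               # empirical correlation
print(r / np.sqrt((lam1 + r) * (lam2 + r)))    # ~0.378 from the formula above
```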
{ "language": "en", "url": "https://math.stackexchange.com/questions/244989", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 2, "answer_id": 0 }
Example of two dependent random variables that satisfy $E[f(X)f(Y)]=Ef(X)Ef(Y)$ for every $f$ Does anyone have an example of two dependent random variables that satisfy this relation: $E[f(X)f(Y)]=E[f(X)]E[f(Y)]$ for every function $f(t)$? Thanks. *edit: I still couldn't find an example. I think one should be of two identically distributed variables, since all the "moments" need to factor: $E[X^iY^i]=E[X^i]E[Y^i]$. That's plum hard...
If you take dependent random variables $X$ and $Y$, and set $X^{'} = X - E[X]$ and $Y^{'} = Y - E[Y]$, then $E[f(X^{'})f(Y^{'})]=E[f(X^{'})]E[f(Y^{'})]=0$ as long as $f$ preserves the zero expected value. I guess you cannot show this for all $f$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/245050", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "23", "answer_count": 5, "answer_id": 3 }
The graph of a smooth real function is a submanifold Given a function $f: \mathbb{R}^n \rightarrow \mathbb{R}^m $ which is smooth, show that $$\operatorname{graph}(f) = \{(x,f(x)) \in \mathbb{R}^{n+m} : x \in \mathbb{R}^n\}$$ is a smooth submanifold of $\mathbb{R}^{n+m}$. I'm honestly completely unsure of where or how to begin this problem. I am interested in definitions and perhaps hints that can lead me in the right direction.
The map $\mathbb R^n\to \mathbb R^{n+m}$ given by $t\mapsto (t, f(t))$ has the Jacobi matrix $\begin{pmatrix}I_n\\f'(t)\end{pmatrix}$, which has full rank $n$ for all $t$ (because of the identity submatrix). This means that its image is a manifold. Is there anything unclear about it? How is this a proof that it is a manifold? A manifold of dimension $n$ is a set $X$ such that for each $x\in X$ there exists a neighborhood $H_x\subset X$ such that $H_x$ is homeomorphic to an open subset of $\mathbb R^n$. In this case, the whole $X=\operatorname{graph}(f)$ is homeomorphic to $\mathbb R^n$. The definition of a manifold varies; often the homeomorphism is required to be a diffeomorphism, which is true here as well. Think of it this way: a manifold $X$ of dimension $2$ is something in which, wherever someone makes a dot with a pen, I can cut out a piece of $X$ and say to this person: "See, my piece is almost like a piece of paper, it's just a bit curvy." The definition of a manifold might seem strange here because you can take the neighborhood to be the whole of $X$. This is not always the case: a sphere is a manifold as well, but the whole sphere is not homeomorphic to $\mathbb R^2$; you have to take only some cut-out of it.
{ "language": "en", "url": "https://math.stackexchange.com/questions/245165", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "16", "answer_count": 2, "answer_id": 0 }
If $u''>0$ in $\mathbf{R}^+$ then $u$ is unbounded? If $u$ is a positive function such that $u''>0$ on the whole of $\mathbf{R}^+$, is $u$ then unbounded? In fact, I know that if $u''>0$ then $u$ is strictly convex. I think that implies $u$ is coercive. I want to prove it.
$$ u(x) = e^{-x} $$ EDIT: if you actually meant the entire real line $\mathbb R,$ then any $C^2$ function $u(x)$ really is unbounded. Proof: as $u'' > 0,$ we know that $u'$ cannot always be $0.$ As a result, it is nonzero at some $x=a.$ If $u'(a) > 0,$ then for $x > a$ we have $u(x) > u(a) + (x-a) u'(a),$ which is unbounded. If, instead, $u'(a) < 0,$ then for $x < a$ we have $u(x) > u(a) + (x-a) u'(a),$ which is unbounded as $(x-a)$ is negative. Both of these follow from the finite Taylor theorem. Examples with minimal growth include $$ x + \sqrt{1 + x^2} $$ and $$ -x + \sqrt{1 + x^2} $$ Note that $C^2$ is not required; it suffices that the second derivative always exists and is always positive, by Taylor's theorem with remainder.
{ "language": "en", "url": "https://math.stackexchange.com/questions/245218", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
$i,j,k$ Values of the $\Theta$ Matrix in Neural Networks So I'm looking at these two neural networks and walking through how the $i$, $j$, $k$ values of $\Theta$ correspond to the layer and the node numbers. Either there are redundant values or I'm missing how the subscripts actually map from node to node. $\Theta^i_{jk}$ ... where this is read as " Theta superscript i subscript jk " As shown here: It looks like the $\Theta$ value corresponding to the node circled in teal would be $\Theta^2_{12}$ ... where: * *superscript $i=2$ ( layer 2 ) *$j=1$ ( node number within the subsequent layer ? ) *$k=2$ ( node number within the current layer ? ) If I'm matching the pattern correctly I think the $j$ value is the node to the right of the red circled node ... and the $k$ value is the teal node... Am I getting this right? Because between the above image and this one: That seems to be the case ... can I get a confirmation on this?
Yes: $\Theta^i_{jk}$ is the weight applied to the activation of node $k$ in layer $i$ when computing the activation of node $j$ in the next layer $i+1$. The superscript names the layer the weights map from, $k$ indexes the source node in that layer, and $j$ indexes the destination node in the following layer.
{ "language": "en", "url": "https://math.stackexchange.com/questions/245284", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Arc Length: Difficulty With The Integral The question is to find the arc length of a portion of a function. $$y=\frac{3}{2}x^{2/3}\text{ on }[1,8]$$ I couldn't quite figure out how to evaluate the integral, so I appealed to the solution manual for aid. I don't quite understand what they did in the 5th step. Could someone perhaps elucidate it for me?
The expression in brackets is precisely what you need for the substitution $u=x^{2/3}+1$ to work.
{ "language": "en", "url": "https://math.stackexchange.com/questions/245341", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Proof of $\frac{Y^{\lambda}-\lambda}{\sqrt{\lambda}}\to Z\sim N(0,1)$ in distribution as $\lambda\to\infty$? This is an exercise of the Central Limit Theorem: Let $Y^{\lambda}$ be a Poisson random variable with parameter $\lambda>0$. Prove that $\frac{Y^{\lambda}-\lambda}{\sqrt{\lambda}}\to Z\sim N(0,1)$ in distribution as $\lambda\to\infty$. I've shown that $$ Z_n\to Z\sim N(0,1) $$ in distribution using the CLT, where $Z_n=(Y^n-n)/\sqrt{n}$. A naive attempt to go on is considering $$ Y^{n}\leq Y^{\lambda}\leq Y^{n+1}\tag{*} $$ where $n\leq\lambda\leq n+1$ and somehow using the squeeze theorem. But both (*) and the squeeze theorem in convergence in distribution are NOT justified. How can I go on? Or do I need an alternative direction?
The squeeze theorem in convergence in distribution can be made fully rigorous in the situation you describe--but the shortest proof here might be through characteristic functions. Recall that if $Y^\lambda$ is Poisson with parameter $\lambda$, $\varphi_\lambda(t)=\mathbb E(\mathrm e^{\mathrm itY^\lambda})$ is simply $\varphi_\lambda(t)=\mathrm e^{-\lambda(1-\mathrm e^{\mathrm it})}$. Thus $\mathbb E(\mathrm e^{\mathrm itZ^\lambda})=\mathrm e^{-\mathrm it\sqrt{\lambda}}\varphi_\lambda(t/\sqrt{\lambda})=\mathrm e^{-g_\lambda(t)}$ with $$ g_\lambda(t)=\mathrm it\sqrt{\lambda}+\lambda-\lambda\mathrm e^{\mathrm it/\sqrt{\lambda}}. $$ Expanding the exponential up to second order yields $$ g_\lambda(t)=\mathrm it\sqrt{\lambda}+\lambda-\lambda\cdot(1+\mathrm it/\sqrt{\lambda}-t^2/2\lambda)+o(1)\to\tfrac12t^2. $$ Thus, for every $t$, $\mathbb E(\mathrm e^{\mathrm itZ^\lambda})\to\mathrm e^{-t^2/2}=\mathbb E(\mathrm e^{\mathrm itZ})$ where $Z$ is standard normal, hence $Z^\lambda\to Z$ in distribution.
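The convergence is visible numerically as well (a sketch; SciPy's KS test against the standard normal, with $\lambda = 400$ an arbitrary choice):

```python
import numpy as np
from scipy import stats

lam = 400.0
rng = np.random.default_rng(3)
z = (rng.poisson(lam, 200_000) - lam) / np.sqrt(lam)

print(z.mean(), z.var())          # ~0 and ~1
print(stats.kstest(z, "norm"))    # small KS statistic: close to N(0,1)
```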
{ "language": "en", "url": "https://math.stackexchange.com/questions/245379", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
How can I give a bound on the $L^2$ norm of this function? I came across this question in an old qualifying exam, but I am stumped on how to approach it: For $f\in L^p((1,\infty), m)$ ($m$ is the Lebesgue measure), $2<p<4$, let $$(Vf)(x) = \frac{1}{x} \int_x^{10x} \frac{f(t)}{t^{1/4}} dt$$ Prove that $$||Vf||_{L^2} \leqslant C_p ||f||_{L^p}$$ for some finite number $C_p$, which depends on $p$ but not on $f$.
Using Hölder's inequality, we have for $x>1$, \begin{align} |V(f)(x)|&\leqslant \lVert f\rVert_{L^p}\left(\int_x^{10 x}t^{-\frac p{4(p-1)}}dt\right)^{\frac{p-1}p}\frac 1x\\ &=\lVert f\rVert_{L^p}A_p\left(x^{-\frac p{4(p-1)}+1}\right)^{\frac{p-1}p}\frac 1x\\ &=A_p\lVert f\rVert_{L^p}x^{\frac{p-1}p-\frac 14}\frac 1x\\ &=A_p\lVert f\rVert_{L^p}x^{-\frac 1p-\frac 14}. \end{align} To conclude, we have to check that $\int_1^{+\infty}x^{-\frac 2p-\frac 12}dx$ is convergent. As $p<4$, it's the case.
{ "language": "en", "url": "https://math.stackexchange.com/questions/245460", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Compute the length of an equilateral triangle's side given the area? Given the area of an equilateral triangle, what is an algorithm to determine the length of a side?
Let $s$ be the side, and $A$ the area. Drop a perpendicular from one vertex to the opposite side. By the Pythagorean Theorem, the height of the triangle is $\sqrt{s^2-\frac{1}{4}s^2}=\frac{s\sqrt{3}}{2}$. It follows that $$A=\frac{s^2\sqrt{3}}{4}.$$ Thus $$s^2=\frac{4A}{\sqrt{3}},$$ and therefore $$s=\sqrt{\frac{4A}{\sqrt{3}}}.$$ There are several ways to rewrite the above expression.
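As a tiny helper implementing the last formula (a sketch; the function name is mine):

```python
import math

def side_from_area(area):
    # invert area = s**2 * sqrt(3) / 4
    return math.sqrt(4 * area / math.sqrt(3))

print(side_from_area(math.sqrt(3) / 4))   # 1.0, the unit equilateral triangle
```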
{ "language": "en", "url": "https://math.stackexchange.com/questions/245518", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Find an angle in a given triangle $\triangle ABC$ has sides $AC = BC$ and $\angle ACB = 96^\circ$. $D$ is a point in $\triangle ABC$ such that $\angle DAB = 18^\circ$ and $\angle DBA = 30^\circ$. What is the measure (in degrees) of $\angle ACD$?
Let $O$ be the circumcenter of $\triangle ABD$; since $OA=OD$ and the central angle satisfies $\widehat{AOD}=2\widehat{ABD}=60^\circ$, the triangle $\triangle DAO$ is equilateral. Since $\widehat{BAD}=18^\circ$, we get $\widehat{BAO}=42^\circ$, i.e. $O$ is the reflection of $C$ about $AB$; that is, $AOBC$ is a rhombus. Hence $AD=AO=AC$, so $\triangle CAD$ is isosceles with apex angle $\widehat{CAD}=42^\circ-18^\circ=24^\circ$ and therefore $\widehat{ADC}=\widehat{ACD}=78^\circ$, done.
{ "language": "en", "url": "https://math.stackexchange.com/questions/245608", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 5, "answer_id": 3 }
Conditions for Schur decomposition and its generalization Let $M$ be a $n$ by $n$ matrix over a field $F$. When $F$ is $\mathbb{C}$, $M$ always has a Schur decomposition, i.e. it is always unitarily similar to a triangular matrix, i.e. $M = U T U^H$ where $U$ is some unitary matrix and $T$ is a triangular matrix. * *I was wondering for an arbitrary field $F$, what are some conditions for $M$ to admit Schur decomposition? *Consider a generalization of Schur decomposition, $M = P T P^{-1}$ where $P$ is some invertible matrix and $T$ is a triangular matrix. I was wondering what some conditions are for $M$ to admit such an decomposition? Note that $M$ admit such an decomposition when $F$ is $\mathbb{C}$, since it always has Schur decomposition. Thanks!
If the characteristic polynomial factors into linear factors then the Jordan decomposition provides your triangular matrix. If you have a similar triangular matrix then the characteristic polynomial of $M$ is the characteristic polynomial of $T$, which clearly factors into linear factors. So the criterion is exactly the same as for the Jordan decomposition. The similar triangular matrix is just a lazy variant of the Jordan decomposition.
{ "language": "en", "url": "https://math.stackexchange.com/questions/245659", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Must-read papers in Operator Theory I have basically finished my grad school applications and have some time at hand. I want to start reading some classic papers in Operator Theory so as to breathe more culture here. I have read some when doing specific problems but have never systematically study the literature. I wonder whether someone can give some suggestions on where to start since this area has been so highly-developed. Maybe to focus the attention let's, say, try to make a list of the top 20 must-read papers in Operator Theory. I believe this must be a very very difficult job, but maybe some more criteria would make it a little bit easier. * *I can only read English and Chinese and it's a pity since I know many of the founding fathers use other languages. *I prefer papers that give some kind of big pictures, since I can always pick up papers related to specific problems when I need them (but this is not a strict restriction). *I would like to focus on the theory itself, not too much on application to physics. *I have already done a rather thorough study of literature related to the invariant subspace problem, so I guess we can omit this important area. Thanks very much!
Cuntz - Simple $C^*$-algebras generated by isometries The Cuntz algebras are very important in various places in C*-algebra theory.
{ "language": "en", "url": "https://math.stackexchange.com/questions/245744", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 1, "answer_id": 0 }
Prove that $\lim_{x \rightarrow 0} \frac{1}{x}\int_0^x f(t) dt = f(0)$. Assume $f: \mathbb{R} \rightarrow \mathbb{R}$ is continuous. Prove that $\lim_{x \rightarrow 0} \frac{1}{x}\int_0^x f(t) dt = f(0)$. I'm having a little confusion about proving this. So far, it is clear that $f$ is continuous at 0 and $f$ is Riemann integrable. So with that knowledge, I am trying to use the definition of continuity. So $|\frac{1}{x}\int_0^x f(t) dt - f(0)|=|\frac{1}{x}(f(x)-f(0))-f(0)|$. From here, I'm not sure where to go. Any help is appreciated. Thanks in advance.
$\def\e{\varepsilon}\def\abs#1{\left|#1\right|}$As $f$ is continuous at $0$, for $\e > 0$ there is an $\delta > 0$ such that $\abs{f(x) - f(0)} \le \e$ for $\abs x \le \delta$. For these $x$ we have \begin{align*} \abs{\frac 1x \int_0^x f(t)\, dt - f(0)} &= \abs{\frac 1x \int_0^x \bigl(f(t) - f(0)\bigr)\,dt}\\ &\le \frac 1x \int_0^x \abs{f(t) - f(0)}\, dt\\ &\le \frac 1x \int_0^x \e\,dt\\ &= \e \end{align*} So $\abs{f(0) - \frac 1x \int_0^x f(t)\,dt} \le \e$ for $\abs x \le \delta$, as wished.
{ "language": "en", "url": "https://math.stackexchange.com/questions/245821", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 3, "answer_id": 1 }
Does the Laplace transform biject? Someone wrote on the Wikipedia article for the Laplace trasform that 'this transformation is essentially bijective for the majority of practical uses.' Can someone provide a proof or counterexample that shows that the Laplace transform is not bijective over the domain of functions from $\mathbb{R}^+$ to $\mathbb{R}$?
For "the majority of practical uses" it is important that the Laplace transform ${\cal L}$ is injective. This means that when you have determined a function $s\mapsto F(s)$ that suits your needs, there is at most one process $t\mapsto f(t)$ such that $F$ is its Laplace transform. You can then look up this unique $f$ in a catalogue of Laplace transforms. This injectivity of ${\cal L}$ is the content of Lerch's theorem and is in fact an essential pillar of the "Laplace doctrine". The theorem is proven first for special cases where we have an inversion formula, and then extended to the general case. The difference between "injectivity" and "bijectivity" here is that we don't have a simple description of the space of all Laplace transforms $F$. But we don't need to know all animals when we want to analyze a zebra. Lerch's theorem tells us that it has a unique pair of parents.
{ "language": "en", "url": "https://math.stackexchange.com/questions/245884", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 3, "answer_id": 0 }
Is there a $SL(2,\mathbb{Z})$-action on $\mathbb{Z}$? Is there a $SL(2,\mathbb{Z})$-action on $\mathbb{Z}$? I read this somewhere without proof and I am not sure if this is true. Thank you for your help.
(This is completely different to my first 'answer', which was simply wrong.) Denote by $\text{End}(\mathbb{Z})$ the semi-group of group endomorphisms of $\mathbb{Z}$. Since $\mathbb{Z}$ is cyclic, any endomorphism is determined by the image of the generator $1$, and since $1 \mapsto n$ is an endomorphism for any $n\in\mathbb{Z}$, this is all of them. Since $SL(2,\mathbb{Z})$ is a group, all of its elements are invertible, so must map to invertible endomorphisms, i.e. automorphisms. Obviously these are given only by $n = \pm 1$ in the notation above. So $\text{Aut}(\mathbb{Z}) \cong \mathbb{Z}_2$. So the question becomes: is there a non-trivial homomorphism $\phi : SL(2,\mathbb{Z}) \to \mathbb{Z}_2$? Since $\mathbb{Z}_2$ is Abelian, $\phi$ must factor through the Abelianisation of $SL(2,\mathbb{Z})$, which is$^*$ $\mathbb{Z}_{12}$. There is a unique surjective homomorphism $\mathbb{Z}_{12} \to \mathbb{Z}_2$, and therefore a unique surjective $\phi$, which gives a unique non-trivial action of $SL(2,\mathbb{Z})$ on $\mathbb{Z}$. Unfortunately, I can't see an easy way to decide whether a given $SL(2,\mathbb{Z})$ matrix maps to $1$ or $-1$, but maybe somebody else can. $^*$A proof of this can be found at the link provided in the comments.
{ "language": "en", "url": "https://math.stackexchange.com/questions/245936", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 1 }
Complex Analysis and Limit point help So $S$ is a complex sequence $(a_n)_{n=1}^{\infty}$ whose limit points form a set $E$. How do I prove that every limit point of $E$ is also a member of the set $E$? I think epsilons will need to be used but I'm not sure. Thanks.
Let $z$ be a limit point of $E$, and take any $\varepsilon>0$. There is some $x\in E$ with $\lvert x-z\rvert<\varepsilon/2$. And since $x\in E$, there are infinitely many members of $S$ within an $\varepsilon/2$-ball around $x$. They will all be within an $\varepsilon$-ball around $z$, and you're done.
{ "language": "en", "url": "https://math.stackexchange.com/questions/246004", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Example 2, Chpt 4 Advanced Mathematics (I) $$\int \frac{x+2}{2x^3+3x^2+3x+1}\, \mathrm{d}x$$ I can get it down to this: $$\int \frac{2}{2x+1} - \frac{x}{x^2+x+1}\, \mathrm{d}x $$ I can solve the first part but I don't exactly follow the method in the book. $$ = \ln \vert 2x+1 \vert - \frac{1}{2}\int \frac{\left(2x+1\right) -1}{x^2+x+1}\, \mathrm{d}x $$ $$= \ln \vert 2x+1 \vert - \frac{1}{2} \int \frac{\mathrm{d}\left(x^2+x+1\right)}{x^2+x+1} + \frac{1}{2}\int \dfrac{\mathrm{d}x}{\left(x+\dfrac{1}{2}\right)^2 + \frac{3}{4}} $$ For the 2nd part: I tried $ u = x^2+x+1 $ and $\mathrm{d}u = (2x+1)\, \mathrm{d}x$, which leaves me with $\frac{\mathrm{d}u - 1}{2} = x\, \mathrm{d}x$, which seems wrong. Because $x^2+x+1$ doesn't factor, I don't see how partial fractions will help again; setting the numerator to $Ax+B$ isn't helpful.
The post indicates some difficulty with finding $\int \frac{dx}{x^2+x+1}$. We solve a more general problem. But I would suggest for your particular problem, you follow the steps used, instead of using the final result. Suppose that we want to integrate $\dfrac{1}{ax^2+bx+c}$, where $ax^2+bx+c$ is always positive, or always negative. We complete the square. In order to avoid fractions, note that equivalently we want to find $$\int \frac{4a\,dx}{4a^2 x^2+4abx+4ac}.$$ So we want to find $$\int \frac{4a\,dx}{(2ax+b)^2 + (4ac-b^2)}.$$ Let $$2ax+b=u\sqrt{4ac-b^2}.$$ Then $2a\,dx=\sqrt{4ac-b^2}\,du$. Our integral simplifies to $$\frac{2}{\sqrt{4ac-b^2}}\int\frac{du}{u^2+1},$$ and we are finished.
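A computer algebra system reproduces both the partial fraction decomposition and the antiderivative for the original integral (a sketch using SymPy):

```python
import sympy as sp

x = sp.symbols('x')
expr = (x + 2) / (2*x**3 + 3*x**2 + 3*x + 1)

print(sp.apart(expr))         # 2/(2*x + 1) - x/(x**2 + x + 1)
print(sp.integrate(expr, x))  # a log term plus an arctan term, matching the steps above
```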
{ "language": "en", "url": "https://math.stackexchange.com/questions/246135", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Find all singularities of$ \ \frac{\cos z - \cos (2z)}{z^4} \ $ How do I find all singularities of$ \ \frac{\cos z - \cos (2z)}{z^4} \ $ It seems like there is only one (z = 0)? How do I decide if it is isolated or nonisolated? And if it is isolated, how do I decide if it is removable or not removable? If it is non isolated, how do I decide the orders of the singularities? Thanks!!!
$$\cos z = 1-\frac{z^2}{2}+\cdots$$ $$\cos2z=1-\frac{(2z)^2}{2}+\cdots$$ $$\frac{\cos z-\cos2z}{z^4}= \frac{3}{2z^2}+\left(\frac{-15}{4!}+a_1z^2 +\cdots\right),$$ hence at $z=0$ there is a pole of order $2$; in particular the singularity is isolated and not removable.
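SymPy's series expansion confirms the principal part (a sketch):

```python
import sympy as sp

z = sp.symbols('z')
f = (sp.cos(z) - sp.cos(2*z)) / z**4

print(sp.series(f, z, 0, 4))
# leading terms 3/(2*z**2) - 5/8 + ... : principal part 3/(2 z^2), a pole of order 2
```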
{ "language": "en", "url": "https://math.stackexchange.com/questions/246285", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Finding asymptotes of exponential function and one-sided limit Find the asymptote, as $x \to \infty$, of $$ f(x)=x\cdot\exp\left(\dfrac{2}{x}\right)+1. $$ How is it done?
A related problem. We will use the Taylor series of the function $e^t$ at the point $t=0$, $$ e^t = 1+t+\frac{t^2}{2!}+\frac{t^3}{3!}+\dots .$$ $$ x\,e^{2/x}+1 = x ( 1+\frac{2}{x}+ \frac{1}{2!}\frac{2^2}{x^2}+\dots )+1=x+3+\frac{2^2}{2!}\frac{1}{x}+\frac{2^3}{3!}\frac{1}{x^2}+\dots$$ $$ = x+3+O(1/x).$$ Now, you can see when $x$ goes to infinity, then you have $$ x\,e^{2/x}+1 \sim x+3 $$ Here is the plot of $x\,e^{2/x}+1$ and the oblique asymptote $x+3$ (figure omitted).
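A quick numerical check that $x\,e^{2/x}+1-(x+3)\to 0$ (a sketch; the difference behaves like $2/x$):

```python
import math

for x in (10.0, 100.0, 1000.0):
    print(x, x * math.exp(2 / x) + 1 - (x + 3))   # ~0.214, ~0.0201, ~0.002
```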
{ "language": "en", "url": "https://math.stackexchange.com/questions/246386", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 3, "answer_id": 0 }
Identity for $\zeta(k- 1/2) \zeta(2k -1) / \zeta(4k -2)$? Is there a nice identity known for $$\frac{\zeta(k- \tfrac{1}{2}) \zeta(2k -1)}{\zeta(4k -2)}?$$ (I'm dealing with half-integral $k$.) Equally, an identity for $$\frac{\zeta(s) \zeta(2s)}{\zeta(4s)}$$ would do ;)
Let $$F(s) = \frac{\zeta(s)\zeta(2s)}{\zeta(4s)}.$$ Then clearly the Euler product of $F(s)$ is $$F(s) = \prod_p \frac{\frac{1}{1-1/p^s}\frac{1}{1-1/p^{2s}}}{\frac{1}{1-1/p^{4s}}}= \prod_p \left( 1 + \frac{1}{p^s} + \frac{2}{p^{2s}} + \frac{2}{p^{3s}} + \frac{2}{p^{4s}} + \frac{2}{p^{5s}} + \cdots\right).$$ Now introduce $$ f(n) = \prod_{p^2|n} 2.$$ It follows that $$ F(s) = \sum_{n\ge 1} \frac{f(n)}{n^s}.$$ We can use this e.g. to study the average order of $f(n)$, given by $$ \frac{1}{n} \sum_{k=1}^n f(k).$$ The function $F(s)$ has a simple pole at $s=1$ and the Wiener–Ikehara theorem applies. The residue is $$\operatorname{Res}_{s=1} F(s) = \frac{\zeta(2)}{\zeta(4)} = \frac{15}{\pi^2}$$ so that finally $$ \frac{1}{n} \sum_{k=1}^n f(k) \sim \frac{15}{\pi^2}.$$ In fact I would conjecture that we can do better and we ought to have $$ \frac{1}{n} \sum_{k=1}^n f(k) \sim \frac{15}{\pi^2} + \frac{6}{\pi^2}\zeta\left(\frac{1}{2}\right) n^{-1/2}.$$
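For what it's worth, a rough numeric sketch (implementation mine, using trial division for the answer's $f(n)=\prod_{p^2\mid n}2$) checking the average order against $15/\pi^2 \approx 1.5198$:

```python
import math

def f(n):
    # f(n) = 2^(number of primes p with p^2 | n)
    count, d = 0, 2
    while d * d <= n:
        if n % d == 0:
            e = 0
            while n % d == 0:
                n //= d
                e += 1
            if e >= 2:
                count += 1
        d += 1
    return 2 ** count

N = 100000
avg = sum(f(k) for k in range(1, N + 1)) / N
print(avg, 15 / math.pi**2)   # both near 1.5198; agreement improves like N**-0.5
```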
{ "language": "en", "url": "https://math.stackexchange.com/questions/246455", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
What is vector division? My question is: We have addition, subtraction and multiplication of vectors. Why can we not define vector division? What is division of vectors?
The quotient of two vectors is a quaternion by definition. (The product of two vectors can also be regarded as a quaternion, according to the choice of a unit of space.) A quaternion is a relative factor between two vectors that acts respectively on the vector's two characteristics length and direction; through its tensor or modulus, the ratio of lengths taken as a positive number; and the versor or radial quotient, the ratio of orientations in space, taken as being equal to an angle in a certain plane. The versor has analogues in the $+$ and $-$ signs of the real numbers, and in the argument or phase of the complex numbers; in space, a versor is described by three numbers: two to identify a point on the unit-sphere which is the axis of positive rotation, and one to identify the angle around that axis. (The angle is canonically taken to be positive and less than a straight angle, so the axis of positive rotation is reversed when the two vectors are exchanged in their plane.) The tensor and versor which describe a vector quotient together have four numbers in their specification (therefore a quaternion). Two quaternions are multiplied or divided by multiplying or dividing their respective tensors and versors. A versor has a representation as a great circle arc, connecting the points where a sphere is pierced by the dividend and divisor rays going from its center; these arcs have the same condition for equality as vectors do, viz. equal magnitude, and parallel direction. But on a sphere, no two arcs are parallel unless they are part of the same great circle. So, the vector-arcs are compared or compounded by moving one end of each to the line of intersection of their two planes, then taking the third side of the spherical triangle as the arc to be the product or the quotient of the versors.
{ "language": "en", "url": "https://math.stackexchange.com/questions/246594", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "46", "answer_count": 8, "answer_id": 4 }
Fixed point of $\cos(\sin(x))$ I can show that $\cos(\sin(x))$ is a contraction on $\mathbb{R}$ and hence by the Contraction Mapping Theorem it will have a unique fixed point. But what is the process for finding this fixed point? This is in the context of metric spaces, I know in numerical analysis it can be done trivially with fixed point iteration. Is there a method of finding it analytically?
The Jacobi–Anger expansion gives an expression for your formula as: $\cos(\sin(x)) = J_0(1)+2 \sum_{n=1}^{\infty} J_{2n}(1) \cos(2nx)$. Since the "harmonics" in the sum rapidly damp to zero, to second order the equation for the fixed point can be represented as: $x= J_0(1) + 2[J_2(1)\cos(2x) + J_4(1)\cos(4x)]$. Using Wolfram Alpha to solve this I get $x\approx 0.76868\ldots$
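Since $\cos(\sin(\cdot))$ is a contraction, plain fixed-point iteration also pins the value down numerically; a minimal sketch:

```python
import math

x = 1.0
for _ in range(100):
    x = math.cos(math.sin(x))
print(x)   # ~0.768169..., close to the truncated-series estimate 0.76868 above
```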
{ "language": "en", "url": "https://math.stackexchange.com/questions/246647", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 3, "answer_id": 0 }
Results of dot product for complex functions Suppose we are given a $C^1$ function $f(t):\mathbb{R} \rightarrow \mathbb{C}$ with $f(0) = 1$, $\|f(t)\| = 1$ and $\|f'(t)\| = 1$. I have already proven that $\langle f(t), f'(t)\rangle = 0$ for all $t$. Now I have to show that either $f'(t) = if(t)$ or $f'(t) = -i f(t)$. How do I go about showing this? (I am terribly sorry for the horrible title, I could not think of a good one).
Presumably by $\langle f(t) , f'(t) \rangle = 0$, you mean that $\text{Re}\, f(t) \overline{f'(t)} = 0$ (note that if $z_1,z_2 \in \mathbb{C}$ and $z_1 \overline{z_2} = 0$, then you must have either $z_1 = 0$ or $z_2 = 0$, so here the product itself cannot vanish). If $\text{Re}\, f(t) \overline{f'(t)} = 0$, then $f(t) \overline{f'(t)} = i \zeta(t)$, where $\zeta$ is real valued. $\zeta$ is continuous, and furthermore, $|f(t) \overline{f'(t)}| = 1 =|\zeta(t)|$. Consequently, $\zeta$ is either the constant $1$ or $-1$. Multiplying $f(t) \overline{f'(t)} = i \zeta(t)$ on both sides by $f'(t)$ gives $f(t) = i \zeta(t) f'(t)$, from which the result follows.
{ "language": "en", "url": "https://math.stackexchange.com/questions/246718", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Probability Problem with $n$ keys A woman has $n$ keys, one of which will open a door. a) If she tries the keys at random, discarding those that do not work, what is the probability that she will open the door on her $k^{\mathrm{th}}$ try? Attempt: On her first try, she will have the correct key with probability $\frac1n$. If this does not work, she will throw it away and on her second attempt, she will have the correct key with probability $\frac1{n-1}$. So on her $k^{\mathrm{th}}$ try, the probability is $\frac1{n-(k-1)}$. This does not agree with my solutions. b) The same as above but this time she does not discard the keys if they do not work. Attempt: We want the probability on her $k^{\mathrm{th}}$ try. So we want to consider the probability that she must fail on her first $k-1$ attempts. Since she keeps all her keys, the correct one is chosen with probability $\frac1n$ for each trial. So the desired probability is $(1-\frac{1}{n})^{k-1} (\frac1n)^k$. Again, this does not agree with the solutions. I can't really see any mistake in my logic. Can anyone offer any advice? Many thanks
For $(a)$, probability that she will open on the first try is $\dfrac1n$. You have this right. However, the probability that she will open on the second try is when she has failed in her first attempt and succeeded in her second attempt. Hence, the probability is $$\underbrace{\dfrac{n-1}n}_{\text{Prob of failure in her $1^{st}$ attempt.}} \times \underbrace{\dfrac1{n-1}}_{\text{Prob of success in $2^{nd}$ attempt given failure in $1^{st}$ attempt.}} = \dfrac1n$$ Probability that she will open on the third try is when she has failed in her first and second attempts and succeeded in her third attempt. Hence, the probability is $$\underbrace{\dfrac{n-1}n}_{\text{Prob of failure in her $1^{st}$ attempt.}} \times \underbrace{\dfrac{n-2}{n-1}}_{\text{Prob of failure in $2^{nd}$ attempt given failure in $1^{st}$ attempt.}} \times \underbrace{\dfrac1{n-2}}_{\text{Prob of success in $3^{rd}$ attempt given failure in first two attempts.}} = \dfrac1n$$ Hence, the probability she opens in her $k^{th}$ attempt is $\dfrac1n$. (Also note that the probabilities must add up to one, i.e. $$\sum_{k=1}^{n} \dfrac1n = 1,$$ which is not the case in your answer). For $(b)$, the probability that she will open on her $k^{th}$ attempt is the probability she fails in her first $(k-1)$ attempts and succeeds in her $k^{th}$ attempt. The probability for this is $$\underbrace{\dfrac{n-1}{n}}_{\text{Fails in $1^{st}$ attempt}} \times \underbrace{\dfrac{n-1}{n}}_{\text{Fails in $2^{nd}$ attempt}} \times \cdots \underbrace{\dfrac{n-1}{n}}_{\text{Fails in $(k-1)^{th}$ attempt}} \times \underbrace{\dfrac1{n}}_{\text{Succeeds in $k^{th}$ attempt}} = \left(1-\dfrac1n \right)^{k-1} \dfrac1n$$ Again a quick check here is that the sum $$\sum_{k=1}^{\infty} \left(1-\dfrac1n \right)^{k-1} \dfrac1n$$ should be $1$. Note that here her number of tries could be arbitrarily large since she doesn't discard the keys from her previous tries.
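A quick Monte Carlo sketch for part (a) (helper name mine; key 0 plays the role of the working key) showing the uniform $1/n$ distribution:

```python
import random

def attempt_with_discard(n):
    keys = list(range(n))        # key 0 opens the door
    random.shuffle(keys)         # random trial order; keys are discarded after use
    return keys.index(0) + 1     # the attempt on which the door opens

n, runs = 5, 200000
counts = [0] * (n + 1)
for _ in range(runs):
    counts[attempt_with_discard(n)] += 1
print([round(c / runs, 3) for c in counts[1:]])   # each entry close to 1/n = 0.2
```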
{ "language": "en", "url": "https://math.stackexchange.com/questions/246855", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "13", "answer_count": 3, "answer_id": 2 }
Finding the distance between a line and a vector, given a projection So my question has two parts: a) Let L be a line given by y=2x, find the projection of $\vec{x}$=$\begin{bmatrix}5\\3\end{bmatrix}$ onto the line L. So, for this one: proj$_L$($\vec{x}$) = $\frac{\vec{x}\bullet \vec{y}}{\vec{y}\bullet \vec{y}}$$\times \vec{y}$ = $\left(\frac{\begin{bmatrix}5\\3\end{bmatrix} \bullet \begin{bmatrix}2\\1\end{bmatrix}}{\begin{bmatrix}2\\1\end{bmatrix} \bullet \begin{bmatrix}2\\1\end{bmatrix}}\right) \times \begin{bmatrix}2\\1\end{bmatrix}$ = $\frac{13}{5} \times \begin{bmatrix}2\\1\end{bmatrix}$ = \begin{bmatrix}5.2\\2.6\end{bmatrix} b) Using the above, find the distance between L and the terminal point of x. Here is where I am stuck... my instinct is to just do: $\begin{bmatrix}5\\3\end{bmatrix} - \begin{bmatrix}5.2\\2.6\end{bmatrix}$ = $\begin{bmatrix}-.2\\.4\end{bmatrix}$ but I'm sure this is incorrect... how would I solve this?
Yes, yes, almost done: you need the length of this distance vector; use the Pythagorean theorem. One moment, though: your line is $y=2x$, so it contains $\pmatrix{1\\2}$ rather than $\pmatrix{2\\1}$ (and its normal vector is $\pmatrix{2\\-1}$).
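A small NumPy sketch (my addition) carrying out the corrected computation with the direction vector $(1,2)$:

```python
import numpy as np

x = np.array([5.0, 3.0])
v = np.array([1.0, 2.0])              # direction vector of the line y = 2x
proj = (x @ v) / (v @ v) * v          # projection of x onto the line
print(proj)                           # [2.2 4.4]
print(np.linalg.norm(x - proj))       # distance ~ 3.1305 (= 7/sqrt(5))
```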
{ "language": "en", "url": "https://math.stackexchange.com/questions/246931", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Transition from introduction to analysis to more advanced analysis I am currently studying intro to analysis, learning some things about basic topology in metric spaces, and have almost finished the course. I am thinking of taking some more advanced analysis. Would it be demanding to take a course like functional analysis or real analysis with only the knowledge from an intro analysis course? Do such courses require more mathematical background?
I'm currently taking a graduate functional analysis course having only taken introductory analysis (I majored in physics). It's manageable, but knowing measure theory and Lebesgue integration would definitely have helped.
{ "language": "en", "url": "https://math.stackexchange.com/questions/247004", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 4, "answer_id": 2 }
If $f$ is entire and $z=x+iy$, prove that for all $z \in \mathbb{C}$, $\left(\frac{\partial^2}{\partial x^2}+\frac{\partial^2}{\partial y^2}\right)|f(z)|^2= 4|f'(z)|^2$. I'm kind of stuck on this problem; I have been working on it for days and cannot reach the conclusion of the proof.
Let $\frac{\partial}{\partial z} = \tfrac{1}{2}(\frac{\partial}{\partial x} - i \frac{\partial}{\partial y})$ and $\frac{\partial}{\partial \overline{z}} = \tfrac{1}{2}(\frac{\partial}{\partial x} + i \frac{\partial}{\partial y})$. Then $\frac{\partial^2}{\partial x^2} + \frac{\partial^2}{\partial y^2} = 4 \frac{\partial}{\partial z} \frac{\partial}{\partial \overline{z}}$ so $$ (\frac{\partial^2}{\partial x^2} + \frac{\partial^2}{\partial y^2}) |f|^2 = 4 \frac{\partial}{\partial z} \frac{\partial}{\partial \overline{z}} f \, \overline{f} = 4 \frac{\partial}{\partial z}(0 \cdot \overline{f} + f \, \overline{f'}) = 4(f' \, \overline{f'} + f \cdot 0) = 4|f'|^2. $$
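A SymPy sketch (not in the original answer) checking the identity for one sample entire function, here $f(z)=z^3+2z$:

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)
w = sp.symbols('w')
F = w**3 + 2 * w                     # a sample entire function; any polynomial works
z = x + sp.I * y
f = F.subs(w, z)
fp = sp.diff(F, w).subs(w, z)

u = sp.expand_complex(f * sp.conjugate(f))         # |f|^2 as a real function of x, y
lhs = sp.diff(u, x, 2) + sp.diff(u, y, 2)          # the Laplacian of |f|^2
rhs = 4 * sp.expand_complex(fp * sp.conjugate(fp)) # 4 |f'|^2
print(sp.expand(lhs - rhs))                        # prints 0
```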
{ "language": "en", "url": "https://math.stackexchange.com/questions/247056", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Prove the isomorphism of cyclic groups $C_{mn}\cong C_m\times C_n$ via categorical considerations As the title suggests, I am trying to prove $C_{mn}\cong C_m\times C_n$ when $\gcd{(m,n)}=1$, where $C_n$ denotes the cyclic group of order $n$, using categorical considerations. Specifically, I am trying to show $C_{mn}$ satisfies the characteristic property of the group product, which would then imply an isomorphism since both objects would be final objects in the same category. $C_{mn}$ does come with projection homomorphisms, namely the maps $\pi^{mn}_m: C_{mn} \rightarrow C_m$ and $\pi^{mn}_n: C_{mn} \rightarrow C_n$ which are defined by mapping elements of $C_{mn}$ to the residue classes mod the subscript. From here I have gotten a bit lost though, as I cannot see where $m$ and $n$ being relatively prime comes in. I am guessing it would make the product map commute, but I cannot see it. Any ideas? Note This is not homework. Also, I understand there are other ways to prove this, namely by considering the cyclic subgroup generated by the element $(1_m,1_n) \in C_m\times C_n$ and noting that the order of this element is the least common multiple of $m$ and $n$ and then using its relation to $\gcd{(m,n)}$. This then shows $\langle (1_m,1_n)\rangle$ has order $mn$ and is cyclic, hence must be isomorphic to $C_{mn}$. Also, $C_m\times C_n$ has order $mn$, so $C_{mn}\cong C_m\times C_n=\langle (1_m,1_n)\rangle$, which completes the proof.
Just follow the definition: Let $X$ be any group, and $f:X\to C_n$, $g:X\to C_m$ homomorphisms. Now you need a unique homomorphism $h:X\to C_{nm}$ which makes both triangles with $\pi_n$ and $\pi_m$ commute. And constructing this $h$ requires basically the Chinese Remainder Theorem (and is essentially the same as constructing the isomorphism $C_n\times C_m\to C_{nm}$ right away): for each pair $(f(x),g(x))$ we have to assign a unique $h(x)\in C_{nm}$ such that, so to say, $h(x)\equiv f(x) \pmod n$ and $h(x)\equiv g(x) \pmod m$.
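Behind the categorical argument is of course the Chinese Remainder Theorem; a tiny sketch (with hypothetical moduli $m=4$, $n=9$) checking that the reduction map is a bijection when $\gcd(m,n)=1$:

```python
from math import gcd

m, n = 4, 9
assert gcd(m, n) == 1
# the map C_{mn} -> C_m x C_n,  x |-> (x mod m, x mod n)
images = {(x % m, x % n) for x in range(m * n)}
print(len(images) == m * n)   # True: bijective, so C_{mn} is isomorphic to C_m x C_n
```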
{ "language": "en", "url": "https://math.stackexchange.com/questions/247135", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12", "answer_count": 2, "answer_id": 0 }
Prove $\lfloor \log_2(n) \rfloor + 1 = \lceil \log_2(n+1) \rceil $ This is a question a lecturer gave me. I'm more than willing to come up with the answer. But I feel I'm missing something in logs. I know the rules, $\log(ab) = \log(a) + \log(b)$ but that's all I have. What should I read, look up to come up with the answer?
Well, even after your edit, this is not an identity in general. It is valid only for integral $n$. For a counter-example of why this is not valid for arbitrary positive $n$, take $n = 1.5$. Then, $$\lfloor \log_2(n) \rfloor + 1 = \lfloor \log_2(1.5) \rfloor + 1 = 1$$ and $$\lceil \log_2(n + 1) \rceil = \lceil \log_2(2.5) \rceil = 2$$ However, for $n \in \mathbb N$ and $n > 0$, this identity holds, and can be proven as follows: Let $2^k \le n < 2^{k+1}$ for some integer $k \ge 0$. Therefore, let $n = 2^k + m$, where $k \ge 0$ and $0 \le m < 2^{k}$, and $k, m$ are integers. Then, $$\lfloor \log_2(n) \rfloor + 1 = \lfloor \log_2(2^k + m) \rfloor + 1 = k + 1$$ Notice that $\log_2(2^k + m) = k$ when $m = 0$. Otherwise, $k < \log_2(2^k + m) < k+1$, and hence the above result. Also, $$\lceil \log_2(n + 1) \rceil = \lceil \log_2(2^k + m + 1) \rceil = k + 1$$ This last equation is clear for $m < 2^k -1$, where $2^k + m + 1 < 2^{k+1}$. When $m=2^k-1$, notice that $\log_2(2^k+2^k-1+1) = \log_2(2^{k+1}) = k+1$ is an integer, and therefore $\lceil k+1\rceil=k+1$. Hence, this identity is valid only for positive integers ($n \in \mathbb N$).
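A brute-force sketch (my addition) confirming the identity for small positive integers; note that both sides also coincide with `n.bit_length()`:

```python
import math

for n in range(1, 1 << 16):
    assert math.floor(math.log2(n)) + 1 == math.ceil(math.log2(n + 1))
    assert n.bit_length() == math.ceil(math.log2(n + 1))   # integer-exact restatement
print("identity verified for 1 <= n < 2**16")
```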
{ "language": "en", "url": "https://math.stackexchange.com/questions/247182", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
An upper bound for $\sum_{i = 1}^m \binom{i}{k}\frac{1}{2^i}$? Does anyone know of a reasonable upper bound for the following: $$\sum_{i = 1}^m \frac{\binom{i}{k}}{2^i},$$ where $k$ and $m$ are fixed positive integers, and we assume that $\binom{i}{k} = 0$ whenever $k > i$. One trivial upper bound uses the inequality $\binom{i}{k} \le \binom{i}{\frac i2}$, and the fact that $\binom{i}{\frac{i}{2}} \le \frac{2^{i+1}}{\sqrt{i}}$, to give a bound of $$2\sum_{i = 1}^m \frac{1}{\sqrt{i}},$$ where $\sum_{i = 1}^m \frac{1}{\sqrt{i}}$ is upper bounded by $2\sqrt{m}$, resulting in a bound of $4\sqrt{m}$. Can we do better? Thanks! Yair
Another estimate for $\sum_{i=1}^m\frac1{\sqrt i}$ is $$2\sqrt{m-1}-2=\int_1^{m-1} x^{-1/2}\, dx \le \sum_{i=1}^m\frac1{\sqrt i}\le1+ \int_1^m x^{-1/2}\, dx=2\sqrt m-1. $$ Thus we can discard the summands with $i<k$ (they vanish anyway, since $\binom{i}{k}=0$ for $i<k$) and obtain the sharper upper bound $$ 4\sqrt m+2 -4\sqrt{k-2}.$$
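A quick numeric sketch comparing the sum with both bounds (and suggesting, numerically, that the sum in fact stays below $2$, so these square-root bounds are far from tight):

```python
import math

def S(m, k):
    # math.comb(i, k) is 0 when k > i, matching the question's convention
    return sum(math.comb(i, k) / 2**i for i in range(1, m + 1))

m = 400
for k in [5, 20, 50]:
    print(k, S(m, k), 4 * math.sqrt(m), 4 * math.sqrt(m) + 2 - 4 * math.sqrt(k - 2))
```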
{ "language": "en", "url": "https://math.stackexchange.com/questions/247261", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10", "answer_count": 4, "answer_id": 2 }
Reference about Fredholm determinants I am searching for a reference book on Fredholm determinants. I am mainly interested in applications to probability theory, where cumulative distribution functions of limit laws are expressed in terms of Fredholm determinants. I would like to answer questions like : * *How to express a Fredholm determinant on $L^2(\mathcal{C})$, where $\mathcal{C}$ is a contour in $\mathbb{C}$ and the kernel takes a parameter $x$, as a deteminant on $L^2(x, +\infty)$ ; and vice versa. *Which types of kernels give which distributions. For example, in which cases we get the cumulative distribution function of the gaussian distribution ? These questions are quite vague, but I mostly need to be more familiar with the theory and the classical tricks in $\mathbb{C}$. I found the book "Trace ideals ans their applications", of Simon Barry, but I wonder if an other reference exists, ideally with applications to probability theory.
Nearly every book on random matrices deals with the subject. For a recent example, see Section 3.4 of An Introduction to Random Matrices by Anderson, Guionnet and Zeitouni.
{ "language": "en", "url": "https://math.stackexchange.com/questions/247333", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Limiting distribution and initial distribution of a Markov chain For a Markov chain (can the following discussion be for either discrete time or continuous time, or just discrete time?), * *if for an initial distribution i.e. the distribution of $X_0$, there exists a limiting distribution for the distribution of $X_t$ as $t \to \infty$, I wonder if there exists a limiting distribution for the distribution of $X_t$ as $t \to \infty$, regardless of the distribution of $X_0$? *When talking about limiting distribution of a Markov chain, is it in the sense that some distributions converge to a distribution? How is the convergence defined? Thanks!
* *No, let $X$ be a Markov process in which every state is absorbing, i.e. if you start from $x$ then you always stay there. For any initial distribution $\delta_x$, there is a limiting distribution which is also $\delta_x$ - but this distribution is different for different initial conditions. *The convergence of distributions of Markov chains is usually discussed in terms of $$ \lim_{t\to\infty}\|\nu P_t - \pi\| = 0 $$ where $\nu$ is the initial distribution and $\pi$ is the limiting one; here $\|\cdot\|$ is the total variation norm. AFAIK there is at least a strong theory for the discrete-time case, see e.g. the book by S. Meyn and R. Tweedie "Markov Chains and Stochastic Stability" - the first edition you can easily find online. In fact, there are also extensions of this theory by the same authors to the continuous-time case - just check out their work to start with.
{ "language": "en", "url": "https://math.stackexchange.com/questions/247395", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Convergence of series $\sum\limits_{n=2}^\infty\frac{n^3+1}{n^4-1}$ Investigate the series for convergence and if possible, determine its limit: $\sum\limits_{n=2}^\infty\frac{n^3+1}{n^4-1}$ My thoughts Consider the sequence $s_n = \frac{n^3+1}{n^4-1}, n \ge 2$. I have tried different things to no avail. I suspect I must find a smaller series which diverges, in order to prove that the given series diverges via the comparison test. Could you give me some hints as a comment? Then I'll try to update my question, so you can double-check it afterwards. Update $$s_n \gt \frac{n^3}{n^4} = \frac1n$$ which means that $$\lim\limits_{n\to\infty} s_n > \lim\limits_{n\to\infty}\frac1n$$ but $$\sum\limits_{n=2}^\infty\frac1n = \infty$$ so $$\sum\limits_{n=2}^\infty s_n = \infty$$ thus the series $\sum\limits_{n=2}^\infty s_n$ also diverges. The question is: is this formally sufficient?
$$\frac{n^3+1}{n^4-1}\gt\frac{n^3}{n^4}=\frac1n\;.$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/247483", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 0 }
Show that if matrices A and B are elements of G, then AB is also an element of G. Let $G$ be the set of $2 \times 2$ matrices of the form \begin{pmatrix} a & b \\ 0 & c\end{pmatrix} such that $ac$ is not zero. Show that if matrices $A$ and $B$ are elements of $G$, then $AB$ is also an element of $G$. Do I just need to show that $AB$ has a non-zero determinant?
Proving that $AB$ has a non-zero determinant is not enough, because not every $2\times2$ matrix with non-zero determinant is an element of $G$. You also need to prove that $AB$ has the stated upper-triangular shape. For a matrix of that shape the determinant is the product of the diagonal entries, so shape plus non-zero determinant guarantees that $AB$ lies in $G$.
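A symbolic sketch (my addition) of the shape computation with SymPy:

```python
import sympy as sp

a1, b1, c1, a2, b2, c2 = sp.symbols('a1 b1 c1 a2 b2 c2')
A = sp.Matrix([[a1, b1], [0, c1]])
B = sp.Matrix([[a2, b2], [0, c2]])
P = A * B
print(P)         # Matrix([[a1*a2, a1*b2 + b1*c2], [0, c1*c2]])
print(P.det())   # a1*a2*c1*c2, nonzero whenever a1*c1 != 0 and a2*c2 != 0
```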
{ "language": "en", "url": "https://math.stackexchange.com/questions/247565", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Derivatives vs Integration * *Given that the continuous function $f: \Bbb R \longrightarrow \Bbb R$ satisfies $$\int_0^\pi f(x) ~dx = \pi,$$ Find the exact value of $$\int_0^{\pi^{1/6}} x^5 f(x^6) ~dx.$$ *Let $$g(t) = \int_t^{2t} \frac{x^2 + 1}{x + 1} ~dx.$$ Find $g'(t)$. For the first question: The way I understand this is that the area under $f(x)$ from $0$ to $\pi$ is $\pi$. Doesn't this mean that the function can be $f(x)=1$? Are there other functions that satisfy this definition? The second line in part one also confuses me, specifically the $x^6$ part! For the second question: Does this have something to do with the Second Fundamental Theorem of Calculus? I see that there are two variables, $x$ and $t$, involved in this equation.
For the first question, there are infinitely many functions other than $1$ that satisfy $$\int_0^{\pi} f(x) dx = \pi$$ For instance, a couple of other examples are $$f(x) = 2- \dfrac{2x}{\pi}$$ and $$f(x) = \dfrac{2x}{\pi}$$ To evaluate $$\int_0^{\pi^{1/6}} x^5 f(x^6) dx$$ make the substitution $t = x^6$ and see what happens... For the second question, yes, make use of the fundamental theorem of calculus, i.e. if $$g(t) = \int_{a(t)}^{b(t)} f(x) dx$$ then $$g'(t) = f(b(t)) b'(t) - f(a(t)) a'(t)$$
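A SymPy sketch for the first part, using one concrete $f$ satisfying the hypothesis (here $f(x)=2x/\pi$, for which $\int_0^\pi f = \pi$); the value should come out as $\pi/6$:

```python
import sympy as sp

x = sp.symbols('x')
f = lambda t: 2 * t / sp.pi          # a sample f with integral pi over [0, pi]
val = sp.integrate(x**5 * f(x**6), (x, 0, sp.pi**sp.Rational(1, 6)))
print(val)                           # pi/6, i.e. (1/6) * integral of f over [0, pi]
```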
{ "language": "en", "url": "https://math.stackexchange.com/questions/247634", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Notation for repeated application of function If I have the function $f(x)$ and I want to apply it $n$ times, what is the notation to use? For example, would $f(f(x))$ be $f_2(x)$, $f^2(x)$, or anything less cumbersome than $f(f(x))$? This is important especially since I am trying to couple this with a limit toward infinity.
In the course I took on bifurcation theory we used the notation $$f^{\circ n}(x).$$
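For what it's worth, a minimal Python sketch (helper name `iterate` is mine) of $n$-fold application, including the kind of limit mentioned in the question:

```python
import math

def iterate(f, n):
    # return the n-fold composite: f applied n times
    def g(x):
        for _ in range(n):
            x = f(x)
        return x
    return g

print(iterate(math.cos, 100)(1.0))   # ~0.739085, the fixed point of cos as n grows
```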
{ "language": "en", "url": "https://math.stackexchange.com/questions/247710", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "35", "answer_count": 8, "answer_id": 1 }
Find a convex combination of scalars given a point within them. I've been banging my head on this one all day! I'm going to do my best to explain the problem, but bear with me. Given a set of numbers $S = \{X_1, X_2, \dots, X_n\}$ and a scalar $T$, where it is guaranteed that there is at least one member of $S$ that is less than $T$ and at least one member that's greater than $T$, I'm looking for an algorithm to create a Convex Combination of these scalars that equals $T$. For example, for the set $\{2,4\}$ and the scalar $3$, the answer is: $$.5 \cdot 2 + .5 \cdot 4 = 3.$$ I believe in many cases there are infinitely many solutions. I'm looking for a generalized algorithm/formula to find these coefficients. Additionally, I would like for the coefficient weights to be distributed as evenly as possible (of course while still adding up to 1.) For instance, for the set $\{1,2,4\}$ and the scalar $3$, a technically valid solution would be the same as the first example but with the coefficient for $1$ assigned a weight of 0 - but it would be preferable to assign a non-zero weight. I may not be thinking through this last part very clearly :)
If $X_1<T<X_2$, then $T$ is a weighted average of $X_1$ and $X_2$ with weights $\dfrac{X_2-T}{X_2-X_1}$ and $\dfrac{T-X_1}{X_2-X_1}$, as can be checked by a bit of algebra. Now suppose $X_3$ is also $>T$. Then $T$ is a weighted average of $X_1$ and $X_3$, and you can find the weights the same way. Now take $40\%$ of the weight assigned to $X_2$ in the first case, and assign it to $X_2$, and $60\%$ of the weight assigned to $X_3$ in the second case and assign it to $X_3$, and let the weight assigned to $X_1$ be $40\%$ of the weight it got in the first case plus $60\%$ of the weight it got in the second case, and you've got another solution. And as with $40$ and $60$, so also with $41$ and $59$, and so on, and you've got infinitely many solutions. But don't say "infinite solutions" if you mean "infinitely many solutions". "Infinite solutions" means "solutions, each one of which, by itself, is infinite". Later note in response to comments: Say you write $4$ as an average of $3$ and $5$ with weights $1/2$, $1/2$. And you write $4$ as an average of $3$ and $7$ with weights $3/4$, $1/4$. Then you have $4$ as a weighted average of $3$, $5$, and $7$ with weights $1/2,\ 1/2,\ 0$. And you have $4$ as a weighted average of $3$, $5$, and $7$ with weights $3/4,\ 0,\ 1/4$. So find a weighted average of $(1/2,\ 1/2,\ 0)$ and $(3/4,\ 0,\ 1/4)$. For example, $40\%$ of the first plus $60\%$ of the second is $(0.65,\ 0.2,\ 0.15)$. Then you have $4$ as a weighted average of $3$, $5$, and $7$ with weights $0.65$, $0.2$, and $0.15$. And you can come up with infinitely many other ways to write $4$ as a weighted average of $3$, $5$, and $7$ by using other weights than $0.4$ and $0.6$.
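A small sketch of this blending idea as an algorithm (function name mine; it averages all two-point pair solutions, which spreads weight across every element strictly below or above $T$):

```python
def convex_weights(S, T):
    # every pair (lo, hi) with lo < T < hi gives a two-point solution;
    # averaging all of them leaves no usable element with zero weight
    pairs = [(i, j) for i, a in enumerate(S) for j, b in enumerate(S) if a < T < b]
    w = [0.0] * len(S)
    for i, j in pairs:
        lo, hi = S[i], S[j]
        w[i] += (hi - T) / (hi - lo) / len(pairs)
        w[j] += (T - lo) / (hi - lo) / len(pairs)
    return w   # weights sum to 1; elements equal to T keep weight 0 here

S, T = [1, 2, 4], 3
w = convex_weights(S, T)
print(w, sum(w), sum(wi * si for wi, si in zip(w, S)))   # ..., 1.0, 3.0
```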
{ "language": "en", "url": "https://math.stackexchange.com/questions/247748", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 5, "answer_id": 2 }
How to prove that $n^{\frac{1}{3}}$ is not a polynomial? I'm reading Barbeau's Polynomials; there's an exercise: How to prove that $n^{\frac{1}{3}}$ is not a polynomial? I've made this question and with the first answer as an example, I guess I should assume that: $$n^{\frac{1}{3}}=a_pn^p+a_{p-1}n^{p-1}+\cdots+ a_0n^0$$ And then I should make some kind of operation on both sides; the resulting contradiction would complete the proof. But I have no idea what operation I should do in order to prove that.
If $t^{1/3}$ were a polynomial, then its degree would be at least one (because it is not constant). This would imply that $$ \lim_{t\to\infty}\frac{t^{1/3}}t\ne0, $$ since for a polynomial of degree at least one the quotient tends to a nonzero constant or to infinity. But, precisely, the limit above is indeed zero. So $t^{1/3}$ cannot be a polynomial.
{ "language": "en", "url": "https://math.stackexchange.com/questions/247800", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 6, "answer_id": 1 }
how to prove $437\,$ divides $18!+1$? (NBHM 2012) I was solving some problems and I came across this problem. I didn't understand how to approach it. Can we solve this without actually calculating $18!\,\,?$
Note that $437=(19)(23)$. We prove that $19$ and $23$ divide $18!+1$. That is enough, since $19$ and $23$ are relatively prime. The fact that $19$ divides $18!+1$ is immediate from Wilson's Theorem, which says that if $p$ is prime then $(p-1)!\equiv -1\pmod{p}$. For $23$ we need to calculate a bit. We have $22!\equiv -1\pmod{23}$ by Wilson's Theorem. Now $(18!)(19)(20)(21)(22)=22!$. But $19\equiv -4\pmod{23}$, $20\equiv -3\pmod{23}$, and so on. So $(19)(20)(21)(22)\equiv 24\equiv 1\pmod{23}$. It follows that $18!\equiv 22!\pmod{23}$, and we are finished.
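With arbitrary-precision integers this is also a one-liner to confirm; a tiny sketch:

```python
import math

n = math.factorial(18) + 1
print(n % 437, n % 19, n % 23)   # 0 0 0, matching the two Wilson-theorem steps
```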
{ "language": "en", "url": "https://math.stackexchange.com/questions/247879", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "18", "answer_count": 1, "answer_id": 0 }
$\mathrm{Spec}(R)\!=\!\mathrm{Max}(R)\!\cup\!\{0\}$ $\Rightarrow$ $R$ is a PID Is the following true: If $R$ is a commutative unital ring with $\mathrm{Spec}(R)\!=\!\mathrm{Max}(R)\!\cup\!\{0\}$, then $R$ is a PID. If yes, how can one prove it? Since $0$ is a prime ideal, $R$ is a domain. Thus we must prove that every ideal is principal. I'm not sure if this link (first answer) helps.
As mentioned, there are easy counterexamples. However, it is true for UFDs since PIDs are precisely the $\rm UFDs$ of dimension $\le 1,\:$ i.e. such that prime ideals $\ne 0$ are maximal. Below is a sketch of a proof of this and closely related results. Theorem $\rm\ \ \ TFAE\ $ for a $\rm UFD\ D$ $(1)\ \ $ prime ideals are maximal if nonzero $(2)\ \ $ prime ideals are principal $(3)\ \ $ maximal ideals are principal $(4)\ \ \rm\ gcd(a,b) = 1\ \Rightarrow\ (a,b) = 1$ $(5)\ \ $ $\rm D$ is Bezout $(6)\ \ $ $\rm D$ is a $\rm PID$ Proof $\ $ (sketch of $1 \Rightarrow 2 \Rightarrow 3 \Rightarrow 4 \Rightarrow 5 \Rightarrow 6 \Rightarrow 1$) $(1\Rightarrow 2)$ $\rm\ \ P\supset (p)\ \Rightarrow\ P = (p)$ $(2\Rightarrow 3)$ $\ \: $ Clear. $(3\Rightarrow 4)$ $\ \ \rm (a,b) \subsetneq P = (p)\ $ so $\rm\ (a,b) = 1$ $(4\Rightarrow 5)$ $\ \ \rm c = \gcd(a,b)\ \Rightarrow\ (a,b) = c\ (a/c,b/c) = (c)$ $(5\Rightarrow 6)$ $\ \ \rm 0 \ne I \subset D\:$ Bezout is generated by an element with the least number of prime factors $(6\Rightarrow 1)$ $\ \ \rm P \supset (p),\ a \not\in (p)\ \Rightarrow\ (a,p) = (1)\ \Rightarrow\ P = (p)$
{ "language": "en", "url": "https://math.stackexchange.com/questions/247976", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 2 }
The sufficient and necessary condition for a function approaching a continuous function at $+\infty$ Problem Suppose $f:\Bbb R^+\to\Bbb R$ satisfies $$\forall\epsilon>0,\exists E>0,\forall x_0>E,\exists\delta>0,\forall x(\left|x-x_0\right|<\delta): \left|f(x)-f(x_0)\right|<\epsilon\tag1$$ Can we conclude that there's some continuous function $g:\Bbb R^+\to\Bbb R$ such that $$\lim_{x\to+\infty}(f(x)-g(x))=0\tag2$$ Re-describe Let $\omega_f(x_0)=\limsup_{x\to x_0}\left|f(x)-f(x_0)\right|$, we can re-describe the first condition (1) as this: $$\lim_{x_0\to+\infty}\omega_f(x_0)=0\tag3$$ Motivation In fact, I'm discovering the sufficient and necessary condition of (2). It's easier to show that (2) implies (3), i.e. (1), because $$\left|f(x)-f(x_0)\right|\le\left|f(x)-g(x)\right|+\left|g(x)-g(x_0)\right|+\left|g(x_0)-f(x_0)\right|$$ Take $\lim_{x_0\to+\infty}\limsup_{x\to x_0}$ for both sides, we'll get the result.
Choose $E_0 := 0$ and, using (1), $(E_n)_n \uparrow \infty$ corresponding to $\varepsilon_n := \frac{1}{n}$ for $n \geq 1$. If $E_n < x \leq E_{n+1}$, choose $\delta_x > 0$ such that $f(y) \in B_{1/n}(f(x))$ for all $y \in B_{\delta_x} (x)$, according to (1). For $n \geq 1$, the balls $(B_{\delta_x}(x))_{x \in [E_n, E_{n+1}]}$ cover the compact interval $[E_n, E_{n+1}]$, so we can choose finitely many points $E_n = x^{(n)}_1 < x^{(n)}_2 < \ldots < x^{(n)}_{M_n} \leq E_{n+1}$ such that $$\bigcup_{k=1}^{M_n} {B_{\delta_{x^{(n)}_k}}(x^{(n)}_k)} \supseteq [E_n, E_{n+1}].$$ Set $M_0 = 1$, $x^{(0)}_1 = 0$. The set of points $\{x^{(n)}_k \: | \: n \geq 0,\, 1 \leq k \leq M_n\}$ partitions $\mathbb{R}^+$, so we can define the graph of $g:\: \mathbb{R}^+ \rightarrow \mathbb{R}$ as the polygon joining the points $(x^{(n)}_k, f(x^{(n)}_k))$ in the order of the $x^{(n)}_k$. The function $g$ is continuous by definition and it is easy to see that the limit of $f(x) - g(x)$ for $x \to \infty$ is $0$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/248039", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Is $\omega_{\alpha}$ sequentially compact? For an ordinal $\alpha \geq 2$, let $\omega_{\alpha}$ be as defined here. It is easy to show that $\omega_{\alpha}$ is limit point compact, but is it sequentially compact?
I think I finally found a solution: In a well-ordered set, every sequence admits a non-decreasing subsequence. Indeed, if $(x_n)$ is any sequence, let $n_0$ be such that $x_{n_0}= \min \{ x_n : n\geq 0 \}$, and $n_1$ such that $x_{n_1}= \min \{ x_n : n > n_0 \}$, and so on; here, $(x_{n_i})$ is a non-decreasing subsequence. Because a non-decreasing sequence is convergent iff it admits a cluster point, limit point compactness and sequential compactness are equivalent.
{ "language": "en", "url": "https://math.stackexchange.com/questions/248083", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
How can there be infinitely many bases of $\mathbb R^n$ if $\mathbb R^n$ does not admit more than $n$ linearly independent vectors? How can there be infinitely many bases of $\mathbb R^n$ if $\mathbb R^n$ does not admit more than $n$ linearly independent vectors? Note also that each basis of $\mathbb R^n$ has the same number $n$ of vectors.
Let $E=\{e_1,...,e_n\}$ be the standard basis in $\mathbb{R}^n$. For each $\lambda\neq 0$, let $E_\lambda = \{\lambda e_1, e_2,...,e_n\}$. Then each $E_\lambda$ is a distinct basis of $\mathbb{R}^n$. However, each $E_\lambda$ has exactly $n$ elements.
{ "language": "en", "url": "https://math.stackexchange.com/questions/248179", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 5, "answer_id": 3 }