Prove that $\sum \frac{a_n}{a_n+3}$ diverges Suppose $a_n>0$ for each $n\in \mathbb{N}$ and $\sum_{n=0}^{\infty} a_n $ diverges. How would one go about showing that $\sum_{n=0}^{\infty} \frac{a_n}{a_n+3}$ diverges?
Let $b_n=\dfrac{a_n}{a_n+3}$. If the $a_n$ are unbounded, then $b_n$ does not approach $0$, and therefore $\sum b_n$ diverges. If the $a_n$ are bounded by $B$, then $b_n\ge \dfrac{1}{B+3} a_n$, and $\sum b_n$ diverges by comparison with $\sum a_n$.
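A quick numerical illustration (my own sketch, not part of the original argument), taking the bounded example $a_n = 1/n$ with bound $B = 1$:

```python
# Bounded example: a_n = 1/n has bound B = 1, so the answer's comparison
# gives b_n = a_n/(a_n + 3) >= a_n/(B + 3) = a_n/4.
def partial_sums(N):
    S_a = S_b = 0.0
    for n in range(1, N + 1):
        a = 1.0 / n
        b = a / (a + 3.0)
        assert b >= a / 4.0          # the comparison bound, term by term
        S_a += a
        S_b += b
    return S_a, S_b

S_a, S_b = partial_sums(10**6)       # S_b >= S_a/4, so S_b diverges with S_a
```

The partial sums of $\sum b_n$ dominate a fixed fraction of the harmonic partial sums, which is exactly the content of the comparison step.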
{ "language": "en", "url": "https://math.stackexchange.com/questions/262969", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Prove every odd integer is the difference of two squares I know that I should use the definition of an odd integer ($2k+1$), but that's about it. Thanks in advance!
Eric and orlandpm already showed how this works for consecutive squares, so this is just to show how you can arrive at that conclusion just using the equations. So let the difference of two squares be $A^2-B^2$ and odd numbers be, as you mentioned, $2k+1$. This gives you $A^2-B^2=2k+1$. Now you can add $B^2$ to both sides to get $A^2=B^2+2k+1$. Since $B$ and $k$ are both just constants, they could be equal, so assume $B=k$ to get $A^2=k^2+2k+1$. The second half of this equation is just $(k+1)^2$, so $A^2=(k+1)^2$, giving $A = ±(k+1)$, so for any odd number $2k+1$, $(k+1)^2-k^2=2k+1$.
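The final identity $(k+1)^2-k^2=2k+1$ is trivially machine-checkable over a range of $k$, including negative $k$ (which covers the negative odd numbers):

```python
# Check (k+1)^2 - k^2 = 2k + 1 for a range of k; as k runs over the
# integers, 2k + 1 runs over all odd integers.
for k in range(-1000, 1001):
    assert (k + 1) ** 2 - k ** 2 == 2 * k + 1
```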
{ "language": "en", "url": "https://math.stackexchange.com/questions/263101", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "64", "answer_count": 9, "answer_id": 3 }
Construction of an increasing function from a general function Supposing $f: [0,\infty) \to [0,\infty)$. The goal is to make an increasing function from $f$ using the following rule:- If $t_1 \leq t_2$ and $f(t_1) > f(t_2)$ then change the value of $f(t_1)$ to $f(t_2)$. After this change, we have $f(t_1) = f(t_2)$. Let $g$ be the function resulting from applying the above rule for all $t_1,t_2$ recursively (recursively because if $f(t_2)$ changes then the value of $f(t_1)$ needs to be re-computed) Is it correct to treat $g$ as a well defined (increasing) function? Thanks, Phanindra
Consider the function $f(x)=1/x$ if $x>0$, with also $f(0)=0$. Then for example $f(1)$ will, for any $n>1$, get changed to $n$ on considering that $f(1/n)=n>f(1)=1$. Once this is done there will still be plenty of other $m>1$ for which $f(1/m)=m$ where $m>n$, so that $f(1)$ will have to be changed again from its present value of $n$ to the new value $m>n$. In this way, for this example, there will not be a finite value for $f(1)$ as the process is iterated, and the resulting function will not be defined at $x=1$. EDIT: As mkl points out in a comment, the interpretation in the above example has the construction backward. When $f(a)>f(b)$ where $a<b$ the jvp construction is to replace $f(a)$ by the "later value" $f(b)$. In this version there is no problem with infinite values occurring, as a value of $f(x)$ is only decreased during the construction, and the decreasing is bounded below by $0$ because the original $f$ is nonnegative. In fact, if $g(x)$ denotes the constructed function, and if we interpret the "iterative procedure" in a reasonable way, it seems one has $$g(x)=\inf \{f(t):t \ge x \},$$ which is a nondecreasing function for any given $f(x)$. Note that Stefan made exactly this suggestion.
{ "language": "en", "url": "https://math.stackexchange.com/questions/263159", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
How do we prove that $f(x)$ has no integer roots, if $f(x)$ is a polynomial with integer coefficients and $f(2)= 3$ and $f(7) = -5$? I've been thinking and trying to solve this problem for quite some time (like a month or so), but haven't achieved any success so far, so I finally decided to post it here. Here is my problem: If $f(x)$ is a polynomial with integer coefficients and $f(2)= 3$ and $f(7) = -5$, then prove that $f(x)$ has no integer roots. All I can think of is this: (1) if we want to prove that $f(x)$ has no integer roots, then by the integer root theorem its coefficient of highest power will not be equal to 1, but how can I use this fact (which I don't know how to use)? (2) How do I make use of the given data that $f(2)= 3$ and $f(7) = -5$? (3) Assuming $f(x)$ to be a polynomial of degree $n$ and replacing $x$ with $2$ and $7$ and trying to make use of the given data creates only a mess. Now, if someone could tell me how to approach these types of problems, rather than just giving a few hints on how to solve this particular problem, I would greatly appreciate his/her help.
Let's define a new polynomial by $g(x)=f(x+2)$. Then we are told $g(0)=3, g(5)=-5$ and $g$ will have integer roots if and only if $f$ does. We can see that the constant term of $g$ is $3$. Because the coefficients are integers, when we evaluate $g(5)$, we get terms that are multiples of $5$ plus the constant term $3$, so $g(5)$ must be congruent to $3 \pmod 5$. But we are told $g(5)=-5$, and $-5 \equiv 0 \pmod 5$, not $3$: a contradiction. Therefore there is no polynomial that meets the requirement. As the antecedent is false, the implication is true. This is an example of the statement that for all polynomials $p(x)$ with integer coefficients, $a,b \in \mathbb Z \implies (b-a) \mid p(b)-p(a)$
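The closing divisibility fact is what kills the hypothesis here: $7-2=5$ must divide $f(7)-f(2)=-8$, which it does not. A random-polynomial check of the fact (my own sketch):

```python
import random

def divisibility_holds(coeffs, a, b):
    """Check (b - a) | p(b) - p(a) for p with the given integer coefficients."""
    p = lambda x: sum(c * x**i for i, c in enumerate(coeffs))
    return (p(b) - p(a)) % (b - a) == 0

random.seed(0)
for _ in range(1000):
    coeffs = [random.randint(-50, 50) for _ in range(random.randint(1, 6))]
    assert divisibility_holds(coeffs, 2, 7)

# In particular p(7) - p(2) is always a multiple of 5, but -5 - 3 = -8 is not.
assert (-5 - 3) % 5 != 0
```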
{ "language": "en", "url": "https://math.stackexchange.com/questions/263229", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 1 }
Given real numbers: define integers? I have only a basic understanding of mathematics, and I was wondering and could not find a satisfying answer to the following: Integer numbers are just special cases (a subset) of real numbers. Imagine a world where you know only real numbers. How are integers defined using mathematical operations? Knowing only the set of complex numbers $a + bi$, I could define real numbers as complex numbers where $b = 0$. Knowing only the set of real numbers, I would have no idea how to define the set of integer numbers. While searching for an answer, most definitions of integer numbers talk about real numbers that don't have a fractional part in their notation. Although correct, this talks about notation, assumes that we know about integers already (the part left of the decimal separator), and it does not use any mathematical operations for the definition. Do we even know what integer numbers are, mathematically speaking?
How about the image of $$\lfloor x\rfloor:=x-\frac{1}{2}+\frac{1}{\pi}\sum_{k=1}^\infty\frac{\sin(2\pi k x)}{k}$$ But this is nonsense; we sum over the positive integers, and as such, we may as well just define the integers directly by $$x_0:=0\\x_{k+1}=x_k+1\\ x_{-k}=-x_k$$
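Out of curiosity, the Fourier-series formula can be checked numerically at non-integer points (my sketch; the truncation level is arbitrary and convergence is slow, on the order of $1/\text{terms}$):

```python
import math

def floor_via_series(x, terms=100000):
    """Truncated version of x - 1/2 + (1/pi) * sum_{k>=1} sin(2 pi k x)/k.
    At non-integer x the full series equals floor(x)."""
    s = sum(math.sin(2 * math.pi * k * x) / k for k in range(1, terms + 1))
    return x - 0.5 + s / math.pi

for x in [0.3, 1.75, 4.2]:
    assert abs(floor_via_series(x) - math.floor(x)) < 0.05
```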
{ "language": "en", "url": "https://math.stackexchange.com/questions/263284", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "68", "answer_count": 9, "answer_id": 6 }
Circle geometry: nonparallel tangent and secant problem If a secant and a tangent of a circle intersect at a point outside the circle, then prove that the area of the rectangle formed by the two line segments corresponding to the secant is equal to the area of the square formed by the line segment corresponding to the tangent. I find this question highly confusing. I do not know what it means. If you could please explain it to me, and solve it if possible.
Others have answered this, but here is a source of further information: http://en.wikipedia.org/wiki/Power_of_a_point Here's a problem in which the result is relied on: http://en.wikipedia.org/wiki/Regiomontanus%27_angle_maximization_problem#Solution_by_elementary_geometry The result goes all the way back (23 centuries) to Euclid (the first human who ever lived, with the exception of those who didn't write books on geometry that remain famous down to the present day): http://aleph0.clarku.edu/~djoyce/java/elements/bookIII/propIII36.html
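Numerically, the power-of-a-point identity says the product of the two secant distances from the external point equals the squared tangent length, for any secant direction; a sketch (my own, with an arbitrary circle, point, and direction):

```python
import math

r = 2.0                              # circle of radius r centered at the origin
P = (5.0, 1.0)                       # external point: |P|^2 > r^2
d2 = P[0]**2 + P[1]**2

# A secant P + t*u (u a unit vector) meets the circle where
#   t^2 + 2 (P.u) t + (|P|^2 - r^2) = 0,
# so by Vieta the product of the two distances t1*t2 is |P|^2 - r^2.
theta = 3.0                          # a direction that actually cuts the circle
u = (math.cos(theta), math.sin(theta))
b = 2 * (P[0] * u[0] + P[1] * u[1])
c = d2 - r * r
disc = b * b - 4 * c
assert disc > 0                      # two intersection points
t1 = (-b - math.sqrt(disc)) / 2
t2 = (-b + math.sqrt(disc)) / 2

tangent_sq = d2 - r * r              # squared tangent length from P (Pythagoras)
assert abs(t1 * t2 - tangent_sq) < 1e-9
```

The "rectangle" in the problem statement has sides $t_1$ and $t_2$, and the "square" has side equal to the tangent length; the assertion is exactly the equality of their areas.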
{ "language": "en", "url": "https://math.stackexchange.com/questions/263349", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Game theory: Nash equilibrium in asymmetric payoff matrix I have a utility function describing the desirability of an outcome state. I weigh the expected utility with the probability of the outcome state occurring. I find the expected utility of an action, a, with $EU(a) = \sum\limits_{s'} P(Result(a) = s' \mid s)U(s')$ where Result(a) denotes the outcome state after executing a. There is no global set of actions; the sets of actions available to the agents are not identical. Player1 / Player2 | Action C | Action D | ----------------------------------------------------- Action A | (500,-500) | (-1000,1000) | ----------------------------------------------------- Action B | (-5,-5) | ** (200,20) ** | ----------------------------------------------------- Is this a valid approach? All examples of Nash equilibria I can find use identical action sets for both agents.
Set of concepts aimed at decision making in situations of competition and conflict (as well as of cooperation and interdependence) under specified rules. Game theory employs games of strategy (such as chess) but not of chance (such as rolling dice). A strategic game represents a situation where two or more participants are faced with choices of action, by which each may gain or lose, depending on what others choose to do or not to do. The final outcome of a game, therefore, is determined jointly by the strategies chosen by all participants. http://en.docsity.com/answers/68695/what-type-of-study-is-the-game-theory
{ "language": "en", "url": "https://math.stackexchange.com/questions/263404", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
For what values of $a$ does this improper integral converge? $$\text{Let}\;\; I=\int_{0}^{+\infty}{x^{\large\frac{4a}{3}}}\arctan\left(\frac{\sqrt{x}}{1+x^a}\right)\,\mathrm{d}x.$$ I need to find all $a$ such that $I$ converges.
Hint 1: Near $x=0$, $\arctan(x)\sim x$ whereas near $x=+\infty$, $\arctan(x)\sim\pi/2$. Hint 2: Near $x=0$, consider $a\ge0$ and $a\lt0$. Near $x=+\infty$, consider $a\ge\frac12$ and $a\lt\frac12$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/263547", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Integers that satisfy $a^3= b^2 + 4$ Well, here's my question: Are there any integers, $a$ and $b$ that satisfy the equation $b^2$$+4$=$a^3$, such that $a$ and $b$ are coprime? I've already found the case where $b=11$ and $a =5$, but other than that? And if there do exist other cases, how would I find them? And if not how would I prove so? Thanks in advance. :)
$a=5, b=11$ is one satisfying it. I don't think this is the only pair.
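A brute-force search (my own sketch; the bound 1000 is arbitrary) finds only $(a,b)=(2,2)$, which is not coprime, and $(a,b)=(5,11)$; this matches the known list of integral points on the Mordell curve $b^2=a^3-4$:

```python
from math import isqrt, gcd

# Search a^3 = b^2 + 4 over 1 <= a <= 1000, b >= 0.
solutions = []
for a in range(1, 1001):
    b2 = a**3 - 4
    if b2 >= 0:
        b = isqrt(b2)
        if b * b == b2:
            solutions.append((a, b))

# Only (2, 2) (not coprime) and (5, 11) (coprime) appear.
coprime = [(a, b) for a, b in solutions if gcd(a, b) == 1]
```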
{ "language": "en", "url": "https://math.stackexchange.com/questions/263622", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 4, "answer_id": 1 }
How many subsets of $\mathbb{N}$ have the same cardinality as $\mathbb{N}$? How many subsets of $\mathbb{N}$ have the same cardinality as $\mathbb{N}$? I realize that any of the class of functions $f:x\to (n\cdot x)$ gives a bijection between $\mathbb{N}$ and the subset of $\mathbb{N}$ whose members are the multiples of $n$. So, we have at least a countable infinity of sets which have the same cardinality as $\mathbb{N}$. But, we could remove a single element from any countably infinite subset of the natural numbers and we will still end up with a countably infinite subset of $\mathbb{N}$. So (the reasoning here doesn't seem quite right to me), do there exist uncountably many infinite subsets of $\mathbb{N}$ with the same cardinality as $\mathbb{N}$? Also, is the class of all bijections between $\mathbb{N}$ and a given countably infinite subset uncountably infinite or countably infinite?
As great answers have been given already, I'd merely like to add an easy way to show that the set of finite subsets of $\mathbb{N}$ is countable: Observe that $$\operatorname{Fin}(\mathbb{N}) = \bigcup_{n\in\mathbb{N}}\left\{ A\subseteq\mathbb{N}: \max(A) = n \right\},$$ which is a countable union of finite sets as for each $n\in\mathbb{N}$ there certainly are less than $2^n = \left|\mathcal{P}(\{1,\ldots,n \})\right|$ subsets of $\mathbb{N}$ whose largest element is $n$. Hence, $\operatorname{Fin}(\mathbb{N})$ is countable itself. From here on, you can use Asaf's argument to show that the set of infinite subsets of $\mathbb{N}$ (which are precisely the sets with the same cardinality as $\mathbb{N}$) must be uncountable.
{ "language": "en", "url": "https://math.stackexchange.com/questions/263677", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "15", "answer_count": 7, "answer_id": 3 }
Fixed point in a continuous map Possible Duplicate: Periodic orbits Suppose that $f$ is a continuous map from $\mathbb R$ to $\mathbb R$, which satisfies $f(f(x)) = x$ for each $x \in \mathbb{R}$. Does $f$ necessarily have a fixed point?
Here's a somewhat simpler (in my opinion) argument. It's essentially the answer in Amr's link given in the first comment to the question, but simplified a bit to treat just the present question, not a generalization. Start with any $a\in\mathbb R$. If we're very lucky, $f(a)=a$ and we're done. If we're not that lucky, let $b=f(a)$; by assumption $a=f(f(a))=f(b)$. Since we weren't lucky, $a\neq b$. Suppose for a moment that $a<b$. Then the function $g$ defined by $g(x)=f(x)-x$ is positive at $a$ and negative at $b$, so, by the intermediate value theorem, it's zero at some $c$ (between $a$ and $b$). That means $f(c)=c$, and we have the desired fixed point, under the assumption $a<b$. The other possibility, $b<a$, is handled by the same argument with the roles of $a$ and $b$ interchanged. As user1551 noted, we need $f(f(x))=x$ for only a single $x$, since we can then take that $x$ as the $a$ in the argument above.
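The intermediate value step can be mirrored numerically by bisection on $g(x)=f(x)-x$; a sketch (mine) with the sample involution $f(x)=1-x$:

```python
def bisect_fixed_point(f, a, b, tol=1e-12):
    """Find a zero of g(x) = f(x) - x on [a, b], assuming g(a) > 0 > g(b)."""
    g = lambda x: f(x) - x
    assert g(a) > 0 and g(b) < 0
    while b - a > tol:
        m = (a + b) / 2
        if g(m) > 0:
            a = m
        else:
            b = m
    return (a + b) / 2

f = lambda x: 1.0 - x          # an involution: f(f(x)) = x for all x
a = 0.0                        # here b = f(a) = 1 > a, so g(a) > 0 > g(b)
c = bisect_fixed_point(f, a, f(a))
assert abs(f(c) - c) < 1e-9    # fixed point at x = 1/2
```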
{ "language": "en", "url": "https://math.stackexchange.com/questions/263753", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 0 }
A Few Questions Concerning Vectors In my textbook, they provide a theorem to calculate the angle between two vectors: $\cos\theta = \Large\frac{\vec{u} \cdot \vec{v}}{\|\vec{u}\|\|\vec{v}\|}$ My questions are, why does the angle have to be $0 \le \theta \le \pi$; and why do the vectors have to be in standard position? Also, on the next page, the author writes, "the zero vector is orthogonal to every vector because $0 \cdot \vec{u} = 0$;" why is that so?
Given two points $x$ and $y$ on the unit sphere $S^{n-1}\subset{\mathbb R}^n$ the spherical distance between them is the length of the shortest arc on $S^{n-1}$ connecting $x$ and $y$. The shortest arc obviously lies in the plane spanned by $x$ and $y$, and drawing a figure of this plane one sees that the length $\phi$ of the arc in question can be computed by means of the scalar product as $$\phi =\arccos(x\cdot y)\ \ \in[0,\pi]\ .$$ This length is then also called the angle between $x$ and $y$. When $u$ and $v$ are arbitrary nonzero vectors in ${\mathbb R}^n$ then $u':={u\over |u|}$ and $v':={v\over |v|}$ lie on $S^{n-1}$. Geometrical intuition tells us that $\angle(u,v)=\angle(u',v')$. Therefore one defines the angle $\phi\in[0,\pi]$ between $u$ and $v$ as $$\phi:=\arccos{u\cdot v\over|u|\>|v|}\ .$$
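A small computational companion (my sketch): the same formula, with the quotient clamped to $[-1,1]$ so floating-point round-off cannot push $\arccos$ out of its domain, always returns an angle in $[0,\pi]$, which is one way to see why the angle is restricted to that range.

```python
import math

def angle(u, v):
    """Angle in [0, pi] between nonzero vectors u and v in R^n."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    # Clamp: round-off can push the ratio slightly outside [-1, 1].
    c = max(-1.0, min(1.0, dot / (nu * nv)))
    return math.acos(c)

assert abs(angle((1, 0), (0, 1)) - math.pi / 2) < 1e-12   # orthogonal
assert abs(angle((1, 1), (2, 2))) < 1e-6                  # parallel: angle 0
assert abs(angle((1, 0), (-1, 0)) - math.pi) < 1e-12      # opposite: angle pi
```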
{ "language": "en", "url": "https://math.stackexchange.com/questions/263815", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 5, "answer_id": 4 }
Proof that $\sqrt{5}$ is irrational In my textbook the following proof is given for the fact that $\sqrt{5}$ is irrational: $ x = \frac{p}{q}$ and $x^2 = 5$. We choose $p$ and $q$ so that they have no common factors, so we know that $p$ and $q$ aren't both divisible by $5$. $$\left(\dfrac{p}{q}\right)^2 = 5\\ \text{ so } p^2=5q^2$$ This means that $p^2$ is divisible by 5. But this also means that $p$ is divisible by 5. $p=5k$, so $p^2=25k^2$ and so $q^2=5k^2$. This means that both $q$ and $p$ are divisible by 5, and since that can't be the case, we've proven that $\sqrt{5}$ is irrational. What bothers me with this proof is the beginning, in which we choose a $p$ and $q$ so that they haven't got a common factor. How can we know for sure that there exists a $p$ and $q$ with no common factors such that $x=\dfrac{p}{q} = \sqrt{5}$? Because it seems that step could be used for every number. Edit: I found out what started my confusion: I thought that any fraction with numerator 1 had a common factor, since every integer can be divided by 1. This has given me another question: Are confusions like this the reason why 1 is not considered a prime number?
Exactly what MSEoris said: you can always reduce a fraction to the point where numerator and denominator have no common factors. If $\frac{p}{q}$ had a common factor $n$, then $nk_0 = p$ and $nk_1 = q$, so $\frac{p}{q} = \frac{nk_0}{nk_1} = \frac{k_0}{k_1}$. Now if $k_0, k_1$ have a common factor, do the same again; the process must terminate, since the denominator strictly decreases at each step, so you eventually get a fraction with no common factor.
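In code, the repeated cancellation collapses into a single division by the greatest common divisor (a sketch of mine using Python's math.gcd):

```python
from math import gcd

def reduce_fraction(p, q):
    """Divide out the gcd so the result has no common factor > 1."""
    g = gcd(p, q)
    return p // g, q // g

p, q = reduce_fraction(84, 36)
assert (p, q) == (7, 3)
assert gcd(p, q) == 1   # fully reduced: no further common factor
```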
{ "language": "en", "url": "https://math.stackexchange.com/questions/263864", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 8, "answer_id": 5 }
Convergence in $L^{p}$ spaces Set $$f_n= n1_{[0,1/n]}$$ For $0<p\le\infty $ , one has that $\{f_n\}_n$ is in $L^p(\mathbb R)$. For which values of $p$ is $\{f_n\}_n$ a Cauchy sequence in $L^p$? Justify your answer. This was a comp question I was not able to answer. I don't mind getting every detail of the proof. What I know for sure is that for $p=1$ the integral of $f_n$ equals $1$ no matter the value of $n$, while $f_n \to 0$ almost everywhere, so the sequence cannot converge in $L^1$; since $L^1$ is complete, $\{f_n\}_n$ is therefore not Cauchy in $L^1$. I do not know how to be more rigorous on this problem. Any help much appreciated.
Note that, we have $$\Vert f_{2n} -f_n\Vert_p^p = n^p \left(\dfrac1n - \dfrac1{2n}\right) + (2n-n)^p \dfrac1{2n} = \dfrac{n^{p-1}}2 + \dfrac{n^{p-1}}2 \geq 1 \,\,\,\,\,\,\, \forall p \geq 1$$ For $p<1$, and $m>n$ we have $$\Vert f_m - f_n\Vert_p^p = n^p \left(\dfrac1n - \dfrac1m\right) + (m-n)^p \dfrac1m < n^p \dfrac1n + \dfrac{m^p}m = n^{p-1} + m^{p-1} < 2 n^{p-1} \to 0$$ EDIT Note that \begin{align} f_m(x) & =\begin{cases} m & x \in [0,1/m]\\ 0 & \text{else}\end{cases}\\ f_n(x) & =\begin{cases} n & x \in [0,1/n]\\ 0 & \text{else}\end{cases} \end{align} Since $m>n$, we have $$f_m(x) - f_n(x) = \begin{cases} (m-n) & x \in [0,1/m]\\ -n & x \in [1/m,1/n]\\ 0 & \text{else}\end{cases}$$ Hence, $$\vert f_m(x) - f_n(x) \vert^p = \begin{cases} (m-n)^p & x \in [0,1/m]\\ n^p & x \in [1/m,1/n]\\ 0 & \text{else}\end{cases}$$ Hence, $$\int \vert f_m(x) - f_n(x) \vert^p d \mu = (m-n)^p \times \dfrac1m + n^p \left(\dfrac1n - \dfrac1m\right)$$
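Plugging numbers into the closed form $\Vert f_m-f_n\Vert_p^p=(m-n)^p\frac1m+n^p\left(\frac1n-\frac1m\right)$ derived above shows the dichotomy directly (my sketch, taking $m=2n$):

```python
def norm_p_p(m, n, p):
    """Closed form of ||f_m - f_n||_p^p derived above (m > n)."""
    return (m - n) ** p / m + n ** p * (1.0 / n - 1.0 / m)

# p = 1: with m = 2n the gap equals 1 for every n, so the sequence
# is not Cauchy in L^1 (and the gap only grows for p > 1).
for n in [10, 100, 1000]:
    assert norm_p_p(2 * n, n, 1.0) > 0.999

# p = 1/2: the same gap is 1/sqrt(n) -> 0, consistent with Cauchy for p < 1.
gaps = [norm_p_p(2 * n, n, 0.5) for n in [10, 100, 1000]]
assert gaps[0] > gaps[1] > gaps[2]
assert gaps[2] < 0.1
```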
{ "language": "en", "url": "https://math.stackexchange.com/questions/263917", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Sums of two probability density functions If the weighted sum of 2 probability density functions is also a probability density function, then what is the relationship between the random variables of these 3 probability density functions?
I think you might mean, "What happens if I'm not sure which of two distributions a random variable will be drawn from?" That is one situation where you need to take a pointwise weighted sum of two PDFs, where the weights have to add to 1. Suppose you have three coins in your pocket, two fair coins and one which lands as 'Heads' two thirds of the time. You draw one at random from your pocket and flip it. Then the PMF is \begin{align*}f(x)&=2/3\times\left.\cases{1/2, x=\text{'Heads'}\\1/2,x=\text{'Tails'}}\right\}+1/3\times\left.\cases{2/3, x=\text{'Heads'}\\1/3, x=\text{'Tails'}}\right\}\\&=\cases{5/9, x=\text{'Heads'},\\4/9,x=\text{'Tails'}.}\end{align*} The formula is simple: for any value of x, add the values of the PMFs at that value of x, weighted appropriately. If the sum of the weights is 1, then the sum of the values of the weighted sum of your PMFs will be 1, so the weighted sum of your PMFs will be a probability distribution. The same principle applies when adding continuous PDFs. Suppose you have a large group of geese where the female geese have body weights following an N(3,1) distribution and the male geese have weights following an N(4,1) distribution. You toss your unfair coin, and if, with probability 2/3, it is heads, you choose a random female goose, and otherwise choose a random male goose; then the weight of the goose has PDF $f(x)=\frac{2}{3}\times \frac{1}{\sqrt{2 \pi}}e^{-\frac{1}{2}(x-3)^2}+\frac{1}{3}\times \frac{1}{\sqrt{2 \pi}}e^{-\frac{1}{2}(x-4)^2}.$ You can even integrate over infinitely many PDFs, in which case your weight function is another PDF. For example, suppose you have a robot with an Exp(1) life span programmed to move left or right at a fixed speed with equal probability in each arbitrarily short time interval, independent of its movement in every other time interval (this is called Brownian motion). 
Its position after time $t$ follows an N(0,t) distribution, so its position at the end of its life span has PDF \begin{align*}f(x)=\int_0^\infty e^{-t} \frac{1}{\sqrt{2\pi t}}e^{-\frac{1}{2}x^2/t} \,\text{d}t.\end{align*} This is a very open field. Play with it yourself for a while and see where it takes you. References: Taleb, Nassim Nicholas (2007), The Black Swan; same author (2013), Collected Scientific Papers, https://docs.google.com/file/d/0B_31K_MP92hUNjljYjIyMzgtZTBmNS00MGMwLWIxNmQtYjMyNDFiYjY0MTJl/edit?hl=en_GB
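The coin example above can be reproduced in a few lines (my own sketch):

```python
# Mixture of the two coin PMFs with weights 2/3 (fair) and 1/3 (biased).
fair   = {"Heads": 1/2, "Tails": 1/2}
biased = {"Heads": 2/3, "Tails": 1/3}
weights = (2/3, 1/3)

mixture = {x: weights[0] * fair[x] + weights[1] * biased[x] for x in fair}

assert abs(mixture["Heads"] - 5/9) < 1e-12
assert abs(mixture["Tails"] - 4/9) < 1e-12
assert abs(sum(mixture.values()) - 1.0) < 1e-12   # still a PMF
```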
{ "language": "en", "url": "https://math.stackexchange.com/questions/263974", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "16", "answer_count": 3, "answer_id": 0 }
Covariance of Brownian Bridge? I am confused by this question. We all know that the Brownian Bridge can also be expressed as: $$Y_t=bt+(1-t)\int_a^b \! \frac{1}{1-s} \, \mathrm{d} B_s $$ where the Brownian motion will end at b at $t = 1$ almost surely. Hence I can write it as: $$Y_t = bt + I(t)$$ where $I(t)$ is a stochastic integral, and in this case it is a martingale. Since it is a martingale, the co-variance can be calculated as: \begin{array} {lcl} E[Y_t Y_s] & = & b^2 ts + E(I(t)I(s)] \\ & = & b^2 ts + E\{(I(t)-I(s))* I(s) \} + E [I(s)^2] \\& =&b^2 ts + Var[I(s)] + b^2s^2 \\ & = & b^2 ts + b^2 s^2 + s(1-s) \end{array} Hence the variance is just $ b^2 s^2 + s(1-s)$. However I read online, the co-variance of the Brownian Bridge should be $s(1-t)$. I am really confused. Please advise. Thanks so much!
I think the given representation of the Brownian Bridge is not correct. It should read $$Y_t = a \cdot (1-t) + b \cdot t + (1-t) \cdot \underbrace{\int_0^t \frac{1}{1-s} \, dB_s}_{=:I_t} \tag{1}$$ instead. Moreover, the covariance is defined as $\mathbb{E}((Y_t-\mathbb{E}Y_t) \cdot (Y_s-\mathbb{E}Y_s))$, so you forgot to subtract the expectation value of $Y$ (note that $\mathbb{E}Y_t \not= 0$). Here is a proof using the representation given in $(1)$: $$\begin{align} \mathbb{E}Y_t &= a \cdot (1-t) + b \cdot t \\ \Rightarrow \text{cov}(Y_s,Y_t) &= \mathbb{E}((Y_t-\mathbb{E}Y_t) \cdot (Y_s-\mathbb{E}Y_s)) = (1-t) \cdot (1-s) \cdot \mathbb{E}(I_t \cdot I_s) \\ &= (1-t) \cdot (1-s) \cdot \underbrace{\mathbb{E}((I_t-I_s) \cdot I_s)}_{\mathbb{E}(I_t-I_s) \cdot \mathbb{E}I_s = 0} + (1-t) \cdot (1-s) \mathbb{E}(I_s^2) \tag{2} \end{align}$$ for $s \leq t$ where we used the independence of $I_t-I_s$ and $I_s$. By Itô's isometry, we obtain $$\mathbb{E}(I_s^2) = \int_0^s \frac{1}{(1-r)^2} \, dr = \frac{1}{1-s} -1.$$ Thus we conclude from $(2)$: $$\text{cov}(Y_t,Y_s) = (1-t) \cdot (1-s) \cdot \left( \frac{1}{1-s}-1 \right) = s-t \cdot s = s \cdot (1-t).$$
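As a numerical cross-check (my own sketch, using the equivalent construction $Y_t = B_t - tB_1$ of the standard bridge with $a=b=0$, which has the same covariance), the empirical covariance matches $s(1-t)$:

```python
import random
import math

random.seed(1)
N = 100_000
s, t = 0.3, 0.7

cov_sum = 0.0
for _ in range(N):
    # Build B_s, B_t, B_1 from independent Gaussian increments.
    g1 = random.gauss(0, math.sqrt(s))
    g2 = random.gauss(0, math.sqrt(t - s))
    g3 = random.gauss(0, math.sqrt(1 - t))
    B_s, B_t, B_1 = g1, g1 + g2, g1 + g2 + g3
    Y_s, Y_t = B_s - s * B_1, B_t - t * B_1   # bridge: Y_0 = Y_1 = 0
    cov_sum += Y_s * Y_t                       # mean is 0, so this estimates cov

cov_hat = cov_sum / N
assert abs(cov_hat - s * (1 - t)) < 0.01       # s(1-t) = 0.09 here
```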
{ "language": "en", "url": "https://math.stackexchange.com/questions/264067", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 1, "answer_id": 0 }
Examples of non-isomorphic abelian groups which are part of exact sequences Suppose $A_1$, $A_2$, $A_3$ and $B_1$, $B_2$, and $B_3$ are two short exact sequences of abelian groups. I am looking for two such short sequences where $A_1$ and $B_1$ is isomorphic and $A_2$ and $B_2$ are isomorphic but $A_3$ and $B_3$ are not. (Similarly I would like examples in which two of the other pairs are isomorphic but the third pair are not, etc)
For the first pair take $$0 \longrightarrow \mathbb{Z} \stackrel{2}{\longrightarrow} \mathbb{Z} \longrightarrow \mathbb{Z} / 2\mathbb{Z} \longrightarrow 0$$ and $$0 \longrightarrow \mathbb{Z} \stackrel{3}{\longrightarrow} \mathbb{Z} \longrightarrow \mathbb{Z} / 3\mathbb{Z} \longrightarrow 0.$$ For sequences with non-isomorphic first pairs you can use an infinite direct sum of $\mathbb{Z}$'s and include one or two copies of $\mathbb{Z}$. The quotient will be the infinite direct sum again so the second and third pairs are isomorphic but the first pair will be non-isomorphic. Finally for non isomorphic central pairs take $$0 \longrightarrow \mathbb{Z} / 2\mathbb{Z} \longrightarrow \mathbb{Z} / 2\mathbb{Z} \times \mathbb{Z} / 2\mathbb{Z} \longrightarrow \mathbb{Z} / 2\mathbb{Z} \longrightarrow 0$$ and $$0 \longrightarrow \mathbb{Z} / 2\mathbb{Z} \stackrel{2}{\longrightarrow} \mathbb{Z} / 4\mathbb{Z} \longrightarrow \mathbb{Z} / 2\mathbb{Z} \longrightarrow 0.$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/264124", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Integrating $\int_0^{\infty} u^n e^{-u} du $ I have to work out the integral of $$ I(n):=\int_0^{\infty} u^n e^{-u} du $$ Somehow, the answer goes to $$ I(n) = nI(n - 1)$$ and then using the Gamma function, this gives $I(n) = n!$ What I do is this: $$ I(n) = \int_0^{\infty} u^n e^{-u} du $$ Integrating by parts gives $$ I(n) = -u^ne^{-u} + n \int u^{n - 1}e^{-u} $$ Clearly the stuff in the last bit of the integral is now $I(n - 1)$, but I don't see how using the limits gives you the answer. I get this $$ I(n) = \left( \frac{-(\infty)^n}{e^{\infty}} + nI(n - 1) \right) - \left( \frac{-(0)^n}{e^{0}} + nI(n - 1) \right) $$ As exponential is "better" than powers, or whatever it's called, I get $$ I(n) = (0 + I(n - 1)) + ( 0 + nI(n - 1)) = 2nI(n - 1)$$ Does the constant just not matter in this case? Also, how do I use the Gamma function from here? How do I notice that it comes into play? Nothing tells me that $\Gamma(n) = (n - 1)!$, or does it?
You have $$ I(n) = \lim_{u\to +\infty}u^ne^{-u}-0^ne^{-0}+nI(n-1) $$ But $$\lim_{u\to +\infty}u^ne^{-u}=\lim_{u\to +\infty}\frac{u^n}{e^{u}}=...=0$$ and so $$ I(n) =0-0+nI(n-1)=nI(n-1) $$
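To see the limits at work numerically, here is a rough check (my own sketch: truncate the improper integral at $u=60$, where the integrand is negligible, and use a trapezoidal sum):

```python
import math

def I(n, upper=60.0, steps=60000):
    """Trapezoidal approximation of the integral of u^n e^{-u} on [0, upper]."""
    h = upper / steps
    f = lambda u: u**n * math.exp(-u)
    total = 0.5 * (f(0.0) + f(upper))
    for k in range(1, steps):
        total += f(k * h)
    return total * h

for n in range(6):
    assert abs(I(n) - math.factorial(n)) < 1e-3
    assert abs(I(n) - math.gamma(n + 1)) < 1e-3   # Gamma(n+1) = n!
```

The last assertion is exactly the connection asked about in the question: $I(n)=\Gamma(n+1)=n!$.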
{ "language": "en", "url": "https://math.stackexchange.com/questions/264172", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 4, "answer_id": 2 }
The way into set theory Given that I am going through Munkres's book on topology, I had to glance at the topics included in the first chapter, like the Axiom of Choice, the maximum principle, and the equivalence of the former and the latter. Given all this, I doubt that I know enough set theory, or, more precisely and suiting to my business, I lack a good deal of rigor in my ingredients. I wanted to know whether research is conducted on set theory as an independent branch. Is there any book that covers all about set theory, like the axioms, the axiom of choice, and other advanced topics in it? I have heard about the Bourbaki book, but am helpless at getting any soft copy of that book.
I'd recommend "Naive Set Theory" by Halmos. It is a fun read, in a leisurely style; it starts from the axioms and covers the Axiom of Choice and its equivalents. Also, see this XKCD. http://xkcd.com/982/
{ "language": "en", "url": "https://math.stackexchange.com/questions/264252", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11", "answer_count": 2, "answer_id": 0 }
How can I show the Coercivity of this function? Let $S$ be the set of real positive matrices, $\lambda>0$ and $f:S\rightarrow\mathbb{R}$ defined by $$f(X)=\langle X,X\rangle-\lambda\log\det(X) $$ where $\langle X,X\rangle=\operatorname{trace}(X^\top X)$. How can one show that $f$ is coercive?
Let $\mu = \max \{\det X : \langle X,X\rangle=1, X\ge 0\}$. The homogeneity of determinant implies that $\log \det X\le \log \mu+\frac{n}{2}\log \langle X,X\rangle$ for all $X\ge 0$. Therefore, $$f(X)\ge \langle X,X\rangle -\lambda \log \mu - \frac{\lambda n}{2}\log \langle X,X\rangle $$ which is $\ge \frac12 \langle X,X\rangle$ when $\langle X,X\rangle$ is large enough.
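The constant $\mu$ in the answer is finite: by AM–GM applied to the squared eigenvalues, $\det X \le (\langle X,X\rangle/n)^{n/2}$ for positive semidefinite $X$, so $\mu \le n^{-n/2}$. A random check of that bound (my own sketch, in pure Python to avoid dependencies):

```python
import random

random.seed(0)

def random_spd(n):
    """Random symmetric positive definite matrix A = M^T M + I (nested lists)."""
    M = [[random.gauss(0, 1) for _ in range(n)] for _ in range(n)]
    return [[sum(M[k][i] * M[k][j] for k in range(n)) + (1.0 if i == j else 0.0)
             for j in range(n)] for i in range(n)]

def det(A):
    """Determinant by forward elimination; SPD pivots are positive, no pivoting needed."""
    A = [row[:] for row in A]
    n, d = len(A), 1.0
    for i in range(n):
        d *= A[i][i]
        for r in range(i + 1, n):
            f = A[r][i] / A[i][i]
            for c in range(i, n):
                A[r][c] -= f * A[i][c]
    return d

n = 4
for _ in range(50):
    A = random_spd(n)
    inner = sum(A[i][j] ** 2 for i in range(n) for j in range(n))  # <A, A>
    assert 0 < det(A) <= (inner / n) ** (n / 2) + 1e-9
```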
{ "language": "en", "url": "https://math.stackexchange.com/questions/264328", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Fun but serious mathematics books to gift advanced undergraduates. I am looking for fun, interesting mathematics textbooks which would make good studious holiday gifts for advanced mathematics undergraduates or beginning graduate students. They should be serious but also readable. In particular, I am looking for readable books on more obscure topics not covered in a standard undergraduate curriculum which students may not have previously heard of or thought to study. Some examples of suggestions I've liked so far: (1) On Numbers and Games, by John Conway; (2) Groups, Graphs and Trees: An Introduction to the Geometry of Infinite Groups, by John Meier; (3) Ramsey Theory on the Integers, by Bruce Landman. I am not looking for pop math books, Gödel, Escher, Bach, or anything of that nature. I am also not looking for books on 'core' subjects unless the content is restricted to a subdiscipline which is not commonly studied by undergrads (e.g., Finite Group Theory by Isaacs would be good, but Abstract Algebra by Dummit and Foote would not).
Modern Graph theory by Bela Bollobas counts as fun if they're interested in doing exercises which can be approached by clever intuitive arguments; it's packed full of them.
{ "language": "en", "url": "https://math.stackexchange.com/questions/264371", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "263", "answer_count": 40, "answer_id": 7 }
Fun but serious mathematics books to gift advanced undergraduates. I am looking for fun, interesting mathematics textbooks which would make good studious holiday gifts for advanced mathematics undergraduates or beginning graduate students. They should be serious but also readable. In particular, I am looking for readable books on more obscure topics not covered in a standard undergraduate curriculum which students may not have previously heard of or thought to study. Some examples of suggestions I've liked so far: (1) On Numbers and Games, by John Conway; (2) Groups, Graphs and Trees: An Introduction to the Geometry of Infinite Groups, by John Meier; (3) Ramsey Theory on the Integers, by Bruce Landman. I am not looking for pop math books, Gödel, Escher, Bach, or anything of that nature. I am also not looking for books on 'core' subjects unless the content is restricted to a subdiscipline which is not commonly studied by undergrads (e.g., Finite Group Theory by Isaacs would be good, but Abstract Algebra by Dummit and Foote would not).
Dissections: Plane & Fancy by Frederickson, and the second side of the same coin: The Banach--Tarski Paradox by Tomkowicz and Wagon.
{ "language": "en", "url": "https://math.stackexchange.com/questions/264371", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "263", "answer_count": 40, "answer_id": 39 }
Solutions for the equation $a^2+b^2=c^2$ For $a^2+b^2=c^2$ such that $a, b, c \in \mathbb{Z}$, do we know whether the number of solutions is finite or infinite? We know $a=3, b=4, c=5$ is one of the solutions.
Let $m,n$ be any two positive integers such that $m < n$. Then $$a = n^2 - m^2,\;\; b = 2mn,\;\;c = n^2 + m^2$$ gives $a^2+b^2=c^2$. Since there are infinitely many such pairs $(m,n)$, the equation has infinitely many integer solutions.
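As a quick sanity check (not part of the original answer), Euclid's parametrization can be verified numerically; here is a sketch in Python, with a function name of my own choosing:

```python
def pythagorean_triple(m, n):
    """Euclid's parametrization: for integers 0 < m < n,
    return (a, b, c) = (n^2 - m^2, 2mn, n^2 + m^2)."""
    assert 0 < m < n
    return n * n - m * m, 2 * m * n, n * n + m * m

# The identity a^2 + b^2 = c^2 holds for every pair (m, n),
# so there are infinitely many integer solutions.
for n in range(2, 20):
    for m in range(1, n):
        a, b, c = pythagorean_triple(m, n)
        assert a * a + b * b == c * c

print(pythagorean_triple(1, 2))  # (3, 4, 5)
```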
{ "language": "en", "url": "https://math.stackexchange.com/questions/264444", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 0 }
Sets of second category (topology) A set is of first category if it is a countable union of nowhere dense sets, and otherwise it is of second category. How can we prove that the irrational numbers are of second category and the rationals are of first category?
$\mathbb Q = \bigcup_{q \in \mathbb Q} \{ q \}$, and each singleton $\{q\}$ is nowhere dense, hence the rationals are a countable union of nowhere dense sets. Assume the irrationals are also a countable union of nowhere dense sets: $I = \bigcup_{n \in \mathbb N} U_n$. Then $\mathbb R = \bigcup_{q \in \mathbb Q} \{ q \} \cup \bigcup_{n \in \mathbb N} U_n$ is also a countable union of nowhere dense sets, contradicting the Baire category theorem: $\mathbb R$ is a complete metric space and hence of second category in itself.
{ "language": "en", "url": "https://math.stackexchange.com/questions/264510", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Mind-blowing mathematics experiments We've all heard of some mind-blowing phenomena involving the sciences, such as the double-slit experiment. I was wondering if there are similar experiments or phenomena which seem very counter-intuitive but can be explained using mathematics? I mean things such as the Monty Hall problem. I know it is not exactly an experiment or phenomenon (you could say a thought-experiment), but things along the lines of this (so with connections to real life). I have stumbled across this interesting question, and these are the types of phenomena I have in mind. This question however only discusses differential geometry.
If you let $a_1=a_2=a$, and $a_{n+1}=20a_n-19a_{n-1}$ for $n=2,3,\dots$, then it's obvious that you just get the sequence $a,a,a,\dots$. But if you try this on a calculator with, say, $a=\pi$, you find that after a few iterations you start getting very far away from $\pi$. It's a good experiment/demonstration on accumulation of round-off error.
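The experiment is easy to reproduce in any language with IEEE floating point; here is a sketch in Python (the helper name is my own):

```python
import math

def iterate(a, steps):
    """Run the recurrence a_{n+1} = 20 a_n - 19 a_{n-1} with a_1 = a_2 = a."""
    prev, cur = a, a
    for _ in range(steps):
        prev, cur = cur, 20 * cur - 19 * prev
    return cur

# Exactly, every term equals a.  In floating point, tiny rounding
# errors are amplified by a factor of roughly 19 per step, since the
# recurrence has 19^n among its solutions.
print(iterate(1.0, 25))       # 1.0 is exactly representable: stays 1.0
print(iterate(math.pi, 25))   # drifts far away from pi
```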
{ "language": "en", "url": "https://math.stackexchange.com/questions/264560", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "40", "answer_count": 11, "answer_id": 5 }
Why is 'abuse of notation' tolerated? I've personally tripped up on a few concepts that came down to an abuse of notation, and I've read of plenty more on stack exchange. It seems to all be forgiven with a wave of the hand. Why do we tolerate it at all? I understand if later on in one's studies if things are assumed to be in place, but there are plenty of textbooks out there assuming certain things are known before teaching them. This is a very soft question, but I think it ought to be asked.
When one writes/talks mathematics, in 99.99% of the cases the intended recipient of what one writes is a human, and humans are amazing machines: they are capable of using context, guessing, and all sorts of other information when decoding what we write/say. It is generally immensely more efficient to take advantage of this.
{ "language": "en", "url": "https://math.stackexchange.com/questions/264610", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "91", "answer_count": 10, "answer_id": 1 }
Exactness of Colimits Let $\mathcal A$ be a cocomplete abelian category, let $X$ be an object of $\mathcal A$ and let $I$ be a set. Let $\{ X_i \xrightarrow{f_i} X\}_{i \in I}$ be a set of subobjects. This means we get an exact sequence $$ 0 \longrightarrow X_i \xrightarrow{f_i} X \xrightarrow{q_i}X/X_i \longrightarrow 0 $$ for each $i \in I$. It is supposed to follow (Lemma 5 in the wonderful answer to this question) that there is an exact sequence $$ \operatorname{colim} X_i \longrightarrow X \longrightarrow\operatorname{colim} X/X_i \longrightarrow 0 $$ from the fact that the colimit functor preserves colimits (and in particular, cokernels). However I do not see why this follows. The family of exact sequences I mentioned above is equivalent to specifying the exact sequence $$ 0 \longrightarrow X_\bullet \xrightarrow{f} \Delta X \xrightarrow{q} X / X_\bullet \longrightarrow 0 $$ in the functor category $[I, \mathcal A]$, where $\Delta X$ is the constant functor sending everything to $X$. However applying the colimit functor to this sequence does not give the one we want, because the colimit of $\Delta X$ is the $I$th power of $X$ since $I$ is discrete. Can anybody help with this? Thank you and Merry Christmas in advance!
I think you may have misquoted the question, because if $I$ is (as you wrote) merely a set, then a colimit over it is just a direct sum. Anyway, let me point out why "the colimit functor preserves colimits (and in particular cokernels)" is relevant. Exactness of a sequence of the form $A\to B\to C\to0$ is equivalent to saying that $B\to C$ is the cokernel of $A\to B$. So the short exact sequence you began with contains some cokernel information (plus some irrelevant information thanks to the $0$ at the left end), and what you're trying to prove is also cokernel information. The latter comes from the former by applying a colimit functor, provided colim$(X)$ is just $X$. That last proviso is why I think you've misquoted the question, since it won't be satisfied if $I$ is just a set (with 2 or more elements).
{ "language": "en", "url": "https://math.stackexchange.com/questions/264666", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Extra-Challenging olympiad inequality question We have the set $\{X_1,X_2,X_3,\dotsc,X_n\}$. Given that $X_1+X_2+X_3+\dotsb +X_n = n$, prove that: $$\frac{X_1}{X_2} + \frac{X_2}{X_3} + \dotsb + \frac{X_{n-1}}{X_n} + \frac{X_n}{X_1} \leq \frac{4}{X_1X_2X_3\dotsm X_n} + n - 4$$ EDIT: yes, ${X_k>0}$ , forgot to mention :)
Let $$ \begin{eqnarray} L(x_1,\ldots, x_n) &=& \frac{x_1}{x_2} + \frac{x_2}{x_3} + \ldots + \frac{x_n}{x_1} \\ R(x_1,\ldots, x_n) &=& \frac{4}{x_1 x_2 \ldots x_n} + n - 4 \\ f(x_1,\ldots, x_n) &=& R(x_1,\ldots, x_n) - L(x_1,\ldots, x_n) \end{eqnarray} $$ The goal is to prove that $f(x_1,\ldots, x_n) \ge 0$ for all $n$, given $x_i > 0$ and $\Sigma_i x_i = n$. Proof [by complete induction] The base case $n = 1$ is trivial. Now we prove the inequality for $n \ge 2$ assuming it holds for all values below $n$. The sequence $s = x_1,\ldots, x_n$ is split in two subsequences : $s_\alpha = x_u,\ldots, x_{v-1}$ and $s_\beta = x_v, x_{v+1}, \ldots x_1 \ldots x_{u-1}$, where $1 \le u < v \le n$. Let $k = v-u$ such that the lengths of the sequences are $k$ and $n-k$. We write $\pi_\alpha, \pi_\beta$ for the products of the sequences, respectively, and $\sigma_\alpha, \sigma_\beta$ for their sums. With appropriate rescaling we get $f(\frac{k}{\sigma_\alpha} s_\alpha) \ge 0$ and $f(\frac{n-k}{\sigma_\beta} s_\beta) \ge 0$, by induction hypothesis, as $1 \le k \le n-1$. It suffices now to show that $f(s) - f(\frac{k}{\sigma_\alpha} s_\alpha) - f(\frac{n-k}{\sigma_\beta} s_\beta) \ge 0$. Therefore we prove $R(s) - R(\frac{k}{\sigma_\alpha} s_\alpha) - R(\frac{n-k}{\sigma_\beta} s_\beta) \ge 0$ and $L(\frac{k}{\sigma_\alpha} s_\alpha) + L(\frac{n-k}{\sigma_\beta} s_\beta) - L(s)\ge 0$, in turn. 
$$ \begin{eqnarray} R(s) - R(\frac{k}{\sigma_\alpha} s_\alpha) &-& R(\frac{n-k}{\sigma_\beta} s_\beta) \\ &=& \frac{4}{\pi_\alpha \pi_\beta} + n - 4 - \left(\frac{4 \sigma_\alpha^k}{k^k \pi_\alpha} +k -4 \right) - \left(\frac{4 \sigma_\beta^{n-k}}{(n-k)^{n-k} \pi_\beta} + n -k -4 \right) \\ & =& \frac{4}{\pi_\alpha \pi_\beta} \left( 1 + \pi_\alpha \pi_\beta - \frac{ \sigma_\alpha^k}{k^k} \pi_\beta - \frac{\sigma_\beta^{n-k}}{(n-k)^{n-k}} \pi_\alpha \right) \\ & =& \frac{4}{\pi_\alpha \pi_\beta} \left( \left( \frac{ \sigma_\alpha^k}{k^k} - \pi_\alpha \right) \left(\frac{\sigma_\beta^{n-k}}{(n-k)^{n-k}} - \pi_\beta \right) + 1 - \frac{\sigma_\beta^{n-k}}{(n-k)^{n-k}} \frac{ \sigma_\alpha^k}{k^k} \right) \end{eqnarray} $$ Both factors in the first product are positive: using Jensen's inequality we get $\log(\frac{ \sigma_\alpha^k}{k^k}) = k \log (\frac{\sigma_\alpha}{k}) \ge \Sigma_{i=u}^{v-1} \log(x_i) = \log(\pi_\alpha)$, and similarly for the second factor. Furthermore we have $\log(\frac{\sigma_\beta^{n-k}}{(n-k)^{n-k}} \frac{ \sigma_\alpha^k}{k^k}) = n \left( \frac{n-k}{n} \log(\frac{\sigma_\beta}{n-k}) + \frac{k}{n} \log(\frac{ \sigma_\alpha}{k}) \right) \le n \log(\frac{\sigma_\beta}{n} + \frac{\sigma_\alpha}{n}) = 0$ again using Jensen's inequality. So the remaining terms are also positive. Concerning the L-part we have: $$ \begin{eqnarray} L(\frac{k}{\sigma_\alpha} s_\alpha) &+& L(\frac{n-k}{\sigma_\beta} s_\beta) - L(s) \\ &=& (\ldots + \frac{x_{v-1}}{x_u}) + (\ldots +\frac{x_{u-1}}{x_v}) - (\ldots + \frac{x_{v-1}}{x_v} + \ldots + \frac{x_{u-1}}{x_u} + \ldots) \\ &=& (x_{v-1} - x_{u-1}) (\frac{1}{x_u} - \frac{1}{x_v}) \end{eqnarray} $$ This is positive if $x_{v-1} \ge x_{u-1} \wedge x_{v} \ge x_{u}$, or $x_{v-1} \le x_{u-1} \wedge x_{v} \le x_{u}$. As we did not pose any constraints on $u$ and $v$ yet, it now remains to show that for any sequence one can always find $1 \le u < v \le n$ for which this is fulfilled. 
First, if $x_{i-1} \le x_i \le x_{i+1}$ for some $i$, or $x_{i-1} \ge x_i \ge x_{i+1}$ (for $n$ odd this is always the case), then we can simply choose $u=i-1, v=i$. So now assume we have a "crown" of successive up and down transitions while looping through the sequence. If for some $i, j$ with $x_i \le x_{i+1}$ and $x_j \le x_{j+1}$, none of the intervals $[x_i \le x_{i+1}]$ and $[x_j \le x_{j+1}]$ contains the other completely, then appropriate $u$ and $v$ can be defined. So now assume that all "up-intervals" $[x_i, x_{i+1}]$ with $x_i \le x_{i+1}$ are strictly ordered (by containment). Looping through these up-intervals we must encounter a local maximum such that $[x_{i-2}, x_{i-1}] \subseteq [x_{i}, x_{i+1}] \supseteq [x_{i+2}, x_{i+3}]$. Hence $x_{i-1} \le x_{i+1}$ and $x_{i} \le x_{i+2}$, so with $u=i, v=i+2$ also this last case is covered.
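The inequality can also be sanity-checked numerically for random admissible tuples; here is a sketch in Python (this is of course a check, not a proof, and the small relative tolerance absorbs floating-point rounding in near-equality cases such as $n=2$):

```python
import random

def lhs(x):
    """Cyclic sum x_1/x_2 + x_2/x_3 + ... + x_n/x_1."""
    n = len(x)
    return sum(x[i] / x[(i + 1) % n] for i in range(n))

def rhs(x):
    """4/(x_1 x_2 ... x_n) + n - 4."""
    n = len(x)
    prod = 1.0
    for v in x:
        prod *= v
    return 4.0 / prod + n - 4

random.seed(0)
for _ in range(2000):
    n = random.randint(1, 6)
    raw = [random.uniform(0.05, 1.0) for _ in range(n)]
    s = sum(raw)
    x = [v * n / s for v in raw]           # rescale so sum(x) == n
    assert lhs(x) <= rhs(x) + 1e-6 * rhs(x)
print("no counterexample found")
```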
{ "language": "en", "url": "https://math.stackexchange.com/questions/264730", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 1, "answer_id": 0 }
Limit of a function whose values depend on $x$ being odd or even I couldn't find an answer through google or here, so i hope this isn't a duplicate. Let $f(x)$ be given by: $$ f(x) = \begin{cases} x & : x=2n\\ 1/x & : x=2n+1 \end{cases} $$ Find $\lim_{x \to \infty} f(x).$ The limit is different depending on $x$ being odd or even. So what's the limit of $f(x)$? Attempt: this limit doesn't exist because we have different values for $x \to \infty$ which could be either odd or even. My doubt is $\infty$ can't be both even and odd at the same time so one should apply. So what do you ladies/gents think? Also, when confronted with functions like these and one needs to know the limit to gain more information about the behavior of $f(x)$, how should the problem be attacked?
Your first statement following the word "attempt" has the correct intuition: "this limit doesn't exist because we have different values for" ... $\lim_{x\to \infty} f(x) $, which depends on x "which could be either odd or even." (So I'm assuming we are taking $x$ to be an integer, since the property of being "odd" or "even" must entail $x \in \mathbb{Z}$). The subsequent doubt you express originates from the erroneous conclusion that $\infty$ must be even or odd. It is neither. $\infty$ is not a number, not in the sense of it being odd or even, and not in the sense that we can evaluate $f(\infty)$! (We do not evaluate the $\lim_{x \to \infty} f(x)$ AT infinity, only as $x$ approaches infinity.) When taking the limit of $f(x)$ as $x \to \infty$, we look at what happens as $x$ approaches infinity, and as you observe, $x$ oscillates between odd and even values, so as $x \to \infty$, $f(x)$, as defined, also oscillates: at extremely large $x$, $f(x)$ oscillates between extremely large values and extremely small values (approaching $\infty$ when $x$ is even, and approaching zero when $x$ is odd). Hence the limit does not exist. Note: when $x$ goes to some finite number, say $a$, you may be tempted to simply "plug it in" to $f(x)$ when evaluating $\lim_{x\to a}f(x)$. You should be careful, though. This only works when $f(x)$ is continuous at that point $a$. Limits are really only about what happens as $x$ approaches an x-value, or infinity, not what $f(x)$ actually is at that value. This is important to remember.
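The oscillation is easy to see numerically; here is a small sketch (restricting to positive integers, where odd/even makes sense; not part of the original answer):

```python
def f(n):
    """n if n is even, 1/n if n is odd (n a positive integer)."""
    return n if n % 2 == 0 else 1 / n

# Along even inputs f blows up; along odd inputs f tends to 0.
# No single number is approached, so lim_{n -> oo} f(n) does not exist.
for n in [10**6, 10**6 + 1, 10**6 + 2, 10**6 + 3]:
    print(n, f(n))
```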
{ "language": "en", "url": "https://math.stackexchange.com/questions/264764", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
$f$ continuous in $[a,b]$ and differentiable in $(a,b)$ without lateral derivatives at $a$ and $b$ Does anyone know an example of a real function $f$ continuous in $[a,b]$ and differentiable in $(a,b)$ such that the lateral derivatives $$ \lim_{h \to 0^{+}} \frac{f(a+h)- f(a)}{h} \quad \text{and} \quad \lim_{h \to 0^{-}} \frac{f(b+h)- f(b)}{h}$$ don't exist?
$$f(x) = \sqrt{x-a} + \sqrt{b-x} \,\,\,\,\,\, \forall x \in [a,b]$$
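One can watch the one-sided difference quotient at an endpoint blow up numerically; here is a sketch in Python for the concrete choice $a=0$, $b=1$ (helper names are my own):

```python
import math

a, b = 0.0, 1.0

def f(x):
    return math.sqrt(x - a) + math.sqrt(b - x)

def right_quotient(h):
    """One-sided difference quotient of f at the left endpoint a."""
    return (f(a + h) - f(a)) / h

# Since f(a + h) - f(a) ~ sqrt(h) near h = 0, the quotient grows
# like 1/sqrt(h): the right-hand derivative at a does not exist.
for h in [1e-2, 1e-4, 1e-6, 1e-8]:
    print(h, right_quotient(h))
```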
{ "language": "en", "url": "https://math.stackexchange.com/questions/264829", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 0 }
Integral $\int_{0}^{1}\ln x \, dx$ I have a question about the integral of $\ln x$. When I try to calculate the integral of $\ln x$ from 0 to 1, I always get the following result: $$\int_0^1 \ln x \, dx = x(\ln x -1) \Big|_0^1 = 1(\ln 1 -1) - 0 (\ln 0 -1).$$ Is the second part of the calculation indeterminate or 0? What am I doing wrong? Thanks Joachim G.
Looking sideways at the graph of $\log(x)$ you can also see that $$\int_0^1\log(x)dx = -\int_0^\infty e^{-x}dx = -1.$$
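The value $-1$ is also easy to confirm numerically; here is a sketch using the midpoint rule, which sidesteps the singularity because it never evaluates $\ln 0$:

```python
import math

# Midpoint rule for the improper integral of ln(x) over (0, 1].
n = 1_000_000
h = 1.0 / n
approx = h * sum(math.log((k + 0.5) * h) for k in range(n))
print(approx)  # close to the exact value -1
```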
{ "language": "en", "url": "https://math.stackexchange.com/questions/264887", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 2, "answer_id": 1 }
Prove that $\frac{a^2}{a+b}+\frac{b^2}{b+c}+\frac{c^2}{c+a}\geq\frac{1}{2}(a+b+c)$ for positive $a,b,c$ Prove the following inequality: for $a,b,c>0$ $$\frac{a^2}{a+b}+\frac{b^2}{b+c}+\frac{c^2}{c+a}\geq\frac{1}{2}(a+b+c)$$ What I tried is using the substitution: $p=a+b+c$, $q=ab+bc+ca$, $r=abc$. But I cannot reduce $a^2(b+c)(c+a)+b^2(a+b)(c+a)+c^2(a+b)(b+c)$ in terms of $p,q,r$.
Hint: $ \sum \frac{a^2 - b^2}{a+b} = \sum (a-b) = 0$. (How is this used?) Hint: $\sum \frac{a^2 + b^2}{a+b} \geq \sum \frac{a+b}{2} = a+b+c$ by AM-GM. Hence, $\sum \frac{ a^2}{ a+b} \geq \frac{1}{2}(a+b+c)$.
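Both hints can be checked numerically for random positive triples; a sketch (the helper is my own, and the check is of course not a proof):

```python
import random

def cyclic_sum(term, a, b, c):
    return term(a, b) + term(b, c) + term(c, a)

random.seed(1)
for _ in range(1000):
    a, b, c = (random.uniform(0.1, 10.0) for _ in range(3))
    # first hint: the cyclic sum of (a^2 - b^2)/(a + b) telescopes to 0
    t = cyclic_sum(lambda x, y: (x * x - y * y) / (x + y), a, b, c)
    assert abs(t) < 1e-9
    # the inequality itself
    s = cyclic_sum(lambda x, y: x * x / (x + y), a, b, c)
    assert s >= (a + b + c) / 2 - 1e-9
print("both hints check out")
```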
{ "language": "en", "url": "https://math.stackexchange.com/questions/264931", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 6, "answer_id": 3 }
Finding power series representation How can we find a power series representation of a function and indicate its radius of convergence? For example, how can we find a power series representation of the following function? $$f(x) = \frac{x^3}{(1 + 3x^2)^2}$$
1) Write down the long familiar power series representation of $\dfrac{1}{1-t}$. 2) Differentiate term by term to get the power series representation of $\dfrac{1}{(1-t)^2}$. 3) Substitute $-3x^2$ everywhere that you see $t$ in the result of 2). 4) Multiply term by term by $x^3$. For the radius of convergence, once you have obtained the series, the Ratio Test will do the job. Informally, our orginal geometric series converges when $|t|\lt 1$. So the steps we took are OK if $3x^2\lt 1$, that is, if $|x|\lt \frac{1}{\sqrt{3}}$.
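Carrying out these four steps gives $\frac{x^3}{(1+3x^2)^2} = \sum_{n\ge 1} n(-3)^{n-1}x^{2n+1}$ for $|x| < 1/\sqrt 3$, which can be checked numerically (a sketch, comparing the truncated series to the closed form):

```python
def f(x):
    return x**3 / (1 + 3 * x**2) ** 2

def series(x, terms=80):
    # sum_{n >= 1} n * (-3)^(n-1) * x^(2n+1)
    return sum(n * (-3) ** (n - 1) * x ** (2 * n + 1)
               for n in range(1, terms + 1))

x = 0.3                       # inside the radius 1/sqrt(3) ~ 0.577
print(f(x) - series(x))       # essentially zero
```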
{ "language": "en", "url": "https://math.stackexchange.com/questions/265007", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Uncountable closed set of irrational numbers Could you construct an actual example of an uncountable set of irrational numbers that is closed (in the topological sense)? I can find countable examples that are closed, like $\{ \sqrt{2} + \sqrt{2}/n \}_{n=1}^\infty \cup \{ \sqrt2 \}$, but how does one construct an uncountable example? At least one uncountable example must exist, since otherwise the rational numbers form a Bernstein set and are non-measurable.
Explicit example: translation of a Cantor-like set. Consider the Cantor set $$C := \Big\{ \sum \limits_{n=1}^{+\infty} \frac{\varepsilon_n}{4^n}\ \mid\ (\varepsilon_n)_n \in \{0,1\}^{\mathbb{N}}\Big\}.$$ It is uncountable and closed. Consider now the number $$x := \sum \limits_{n=1}^{+\infty} \frac{2}{4^{n^2}}.$$ The closed set which will answer the question is $$K := x + C = \{x+c,\ c\in C\}$$ Indeed, let us take an element $c$ of $C$, and distinguish two cases: * *$c$ is rational, in which case $c+x$ is irrational since $x$ is irrational *$c$ is irrational and can be written uniquely as $c = \sum \limits_{n=1}^{+\infty} \frac{\varepsilon_n}{4^n}$ with $\varepsilon_n \in \{0,1\}$ for all $n$. Then the base $4$ representation of $c+x$ is $\sum \limits_{n=1}^{+\infty} \frac{\varepsilon_n + 2\cdot 1_{\sqrt{n} \in \mathbb{N}}}{4^n}$. Thus the coefficients at non perfect-square-positions are $0$ or $1$, while the coefficients at perfect-square-positions are $2$ or $3$. Hence, the base $4$ representation cannot be periodic, so $c+x$ is not rational.
{ "language": "en", "url": "https://math.stackexchange.com/questions/265072", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "25", "answer_count": 7, "answer_id": 6 }
Proving a Geometric Progression Formula, Related to Geometric Distribution I am trying to prove a geometric progression formula that is related to the formula for the second moment of the geometric distribution. Specifically, I am wondering where I am going wrong, so I can perhaps learn a new technique. It is known, and I wish to show: $$ m^{(2)} = \sum_{k=0}^\infty k^2p(1-p)^{k-1} = p\left(\frac{2}{p^3} - \frac{1}{p^2}\right) = \frac{2-p}{p^2} $$ Now, dividing by $p$ both sides, and assigning $a = 1-p$ yields: $$ \sum_{k=1}^\infty k^2a^{k-1}=\frac{2}{(1-a)^3}-\frac{1}{(1-a)^2} \qquad \ldots \text{(Eq. 1)} $$ I want to derive the above formula. I know: $$ \sum_{k=0}^\infty ka^{k-1}=\frac{1}{(1-a)^2} $$ Multiplying the left side by $1=\frac aa$, and multiplying both sides by $a$, $$ \sum_{k=0}^\infty ka^k = \frac{a}{(1-a)^2} $$ Taking the derivative of both sides with respect to $a$, the result is: $$ \sum_{k=0}^{\infty}\left[a^k + k^2 a^{k-1}\right] = \frac{(1-a)^2 - 2(1-a)(-1)a}{(1-a)^4} = \frac{1}{(1-a)^2}+\frac{2a}{(1-a)^3} $$ Moving the known formula $\sum_{k=0}^\infty a^k = \frac{1}{1-a}$ to the right-hand side, the result is: $$ \sum_{k=0}^\infty k^2 a^{k-1} = \frac{1}{(1-a)^2} + \frac{2a}{(1-a)^3} - \frac{1}{1-a} $$ Then, this does not appear to be the same as the original formula (Eq. 1). Where did I go wrong? Thanks for assistance.
You have $$\sum_{k=0}^{\infty} ka^k = \dfrac{a}{(1-a)^2}$$ Differentiating with respect to $a$ gives us $$\sum_{k=0}^{\infty} k^2 a^{k-1} = \dfrac{(1-a)^2 - a \times 2 \times (a-1)}{(1-a)^4} = \dfrac{1-a + 2a}{(1-a)^3} = \dfrac{a-1+2}{(1-a)^3}\\ = \dfrac2{(1-a)^3} - \dfrac1{(1-a)^2}$$
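The identity is easy to test numerically for a few values of $a \in (0,1)$ (a sketch; the series is truncated far enough that the tail is negligible):

```python
def series_sum(a, terms=3000):
    # sum_{k >= 0} k^2 a^(k-1); the k = 0 term vanishes
    return sum(k * k * a ** (k - 1) for k in range(1, terms))

def closed_form(a):
    return 2 / (1 - a) ** 3 - 1 / (1 - a) ** 2

for a in (0.1, 0.5, 0.9):
    print(a, series_sum(a), closed_form(a))
```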
{ "language": "en", "url": "https://math.stackexchange.com/questions/265142", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Helly's selection theorem (For sequence of monotonic functions) Let $\{f_n\}$ be a sequence of monotonically increasing functions on $\mathbb{R}$. Let $\{f_n\}$ be uniformly bounded on $\mathbb{R}$. Then, there exists a subsequence $\{f_{n_k}\}$ pointwise convergent to some $f$. Now, assume $f$ is continuous on $\mathbb{R}$. Here, I want to prove that $f_{n_k}\rightarrow f$ uniformly on $\mathbb{R}$. How do I prove this? I have proven that "$\forall \epsilon>0,\exists K\in\mathbb{N}$ such that $k\geq K \Rightarrow \forall x\in\mathbb{R}, |f(x)-f_{n_k}(x)|<\epsilon \bigvee f_{n_k}(x) < \inf f + \epsilon \bigvee \sup f - \epsilon < f_{n_k}(x)$". The argument is in the link below. I don't understand why the above statement implies "$f_{n_k}\rightarrow f$ uniformly on $\mathbb{R}$". Please explain how. Reference ; http://www.math.umn.edu/~jodeit/course/SP6S06.pdf Thank you in advance!
It's absolutely my fault that I didn't even read (c) in the link. I extend the theorem in the link and my argument below is going to prove: "If $K$ is a compact subset of $\mathbb{R}$ and $\{f_n\}$ is a sequence of monotonic functions on $K$ such that $f_n\rightarrow f$ pointwise on $K$, then $f_n\rightarrow f$ uniformly on $K$." (There may exist both $n,m$ such that $f_n$ is monotonically increasing while $f_m$ is monotonically decreasing.) Pf> Since $K$ is closed in $\mathbb{R}$, the complement of $K$ is a disjoint union of at most countably many open intervals. Let $K^C=\bigsqcup (a_i,b_i)$. Define: $$ g_n(x) = \begin{cases} f_n(x) &\text{if }x\in K \\ \frac{x-a_i}{b_i-a_i}f_n(b_i)+\frac{b_i-x}{b_i-a_i}f_n(a_i) & \text{if }x\in(a_i,b_i)\bigwedge a_i,b_i\in\mathbb{R} \\ f_n(b_i) &\text{if }x\in(a_i,b_i)\bigwedge a_i=-\infty \\ f_n(a_i) &\text{if }x\in(a_i,b_i)\bigwedge b_i=\infty \end{cases} $$ Then, $g_n$ is monotonic on $\mathbb{R}$ and $g_n\rightarrow g$ pointwise on $\mathbb{R}$, and $g$ is a continuous extension of $f$. Let $\alpha=\inf K$ and $\beta=\sup K$. Then, by the argument in the link, $g_n\rightarrow g$ uniformly on $[\alpha,\beta]$. Hence, $f_n\rightarrow f$ uniformly on $K$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/265211", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 1, "answer_id": 0 }
What exactly is steady-state solution? In solving differential equation, one encounters steady-state solutions. My textbook says that steady-state solution is the limit of solutions of (ordinary) differential equations when $t \rightarrow \infty$. But the steady-state solution is given as $f(t)$, and this means that the solution is a function of $t$ - so what is this $t$ being in limit?
Example from dynamics: You can picture for yourself a cantilever beam which is loaded by a force at its tip say: $F(t) = \sin(t)$. At $t=0$ the force is applied, then you get the transient state, after some time the system will become in equilibrium: the steady-state. In this state no changes are applied to the system. You can expand this thinking to other differential equations as well. Hope that helps.
{ "language": "en", "url": "https://math.stackexchange.com/questions/265262", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "15", "answer_count": 6, "answer_id": 3 }
Absoluteness of $ \text{Con}(\mathsf{ZFC}) $ for Transitive Models of $ \mathsf{ZFC} $. Is $ \text{Con}(\mathsf{ZFC}) $ absolute for transitive models of $ \mathsf{ZFC} $? It appears that $ \text{Con}(\mathsf{ZFC}) $ is a statement only about logical syntax. Taking any $ \in $-sentence $ \varphi $, we can write $ \text{Con}(\mathsf{ZFC}) $ as $ \mathsf{ZFC} \nvdash (\varphi \land \neg \varphi) $, which appears to be an arithmetical $ \in $-sentence. If this is true, then I think one can get a quick proof of $$ \mathsf{ZFC} + \text{Con}(\mathsf{ZFC}) \nvdash \langle \text{There exists a transitive model of $ \mathsf{ZFC} $} \rangle, $$ assuming that $ \mathsf{ZFC} + \text{Con}(\mathsf{ZFC}) $ is consistent. Proof If $$ \mathsf{ZFC} + \text{Con}(\mathsf{ZFC}) \vdash \langle \text{There exists a transitive model of $ \mathsf{ZFC} $} \rangle, $$ then let $ M $ be such a transitive model. By the absoluteness of $ \text{Con}(\mathsf{ZFC}) $, we see that $ M \models \mathsf{ZFC} + \text{Con}(\mathsf{ZFC}) $. Hence, $ \mathsf{ZFC} + \text{Con}(\mathsf{ZFC}) $ proves the consistency of $ \mathsf{ZFC} + \text{Con}(\mathsf{ZFC}) $. By Gödel’s Second Incompleteness Theorem, $ \mathsf{ZFC} + \text{Con}(\mathsf{ZFC}) $ is therefore inconsistent. Contradiction. $ \blacksquare $ Question: Is $ \text{Con}(\mathsf{ZFC}) $ absolute for transitive models, and is the above proof correct? Thanks for any clarification.
Yes, $\text{Con}(\mathsf{ZFC})$ is an arithmetic statement ($\Pi^0_1$ in particular, because it says a computer program that looks for an inconsistency will never halt) so it is absolute to transitive models, and your proof is correct. By the way, there are a couple of ways you can strengthen it. First, arithmetic statements are absolute to $\omega$-models (models with the standard integers, which may nevertheless have nonstandard ordinals) so $\text{Con}(\mathsf{ZFC})$ does not prove the existence of an $\omega$-model of $\mathsf{ZFC}$. Second, the existence of an $\omega$-model of $\mathsf{ZFC}$ does not prove the existence of a transitive model of $\mathsf{ZFC}$, because the existence of an $\omega$-model of $\mathsf{ZFC}$ is a $\Sigma^1_1$ statement, and $\Sigma^1_1$ statements are absolute to transitive models.
{ "language": "en", "url": "https://math.stackexchange.com/questions/265379", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11", "answer_count": 1, "answer_id": 0 }
Math question please, Rolle's theorem? I have to prove that the equation $$x^5 +3x- 6=0$$ can't have more than one real root. So the function is continuous and has a derivative (both in $\mathbb{R}$). In $\mathbb{R}$ there must be an interval where $f'(c)=0$, and if I prove this, then the equation has at least one real root. So $5x^4+3 =0$... this equation is only true for $x=0$. How to prove that this is the only root?
So you want to prove $5x^4+3=0$ has only one root, at $0$? That's not true: $5x^4+3>0$ for every real $x$, so $f'(x)=5x^4+3$ never vanishes at all. By Rolle's theorem, if $f(x)=x^5+3x-6$ had two real roots, $f'$ would vanish somewhere between them, which is impossible. This establishes the proof.
{ "language": "en", "url": "https://math.stackexchange.com/questions/265429", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 4, "answer_id": 2 }
Combinatorics alphabet If say I want to arrange the letters of the alphabet a,b,c,d,e,f such that e and f cannot be next to each other. I would think the answer was $6\times4\times4\times3\times2$ as there are first 6 letters then 4 as e cannot be next to f. Thanks.
The $6$ letters without any restriction can be arranged in $6!$ ways. If we glue $e,f$ together into a single block, the resulting $5$ objects can be arranged in $5!$ ways, and $e,f$ can be ordered within the block in $2!$ ways, giving $2!(5!)$ arrangements with $e,f$ adjacent. So, the required number of arrangements is $6!-2(5!)=480$.
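The count can be confirmed by brute-force enumeration (a quick sketch, not part of the original answer):

```python
from itertools import permutations

count = sum(
    1
    for p in permutations("abcdef")
    if abs(p.index("e") - p.index("f")) > 1   # e and f not adjacent
)
print(count)  # 480, matching 6! - 2*5!
```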
{ "language": "en", "url": "https://math.stackexchange.com/questions/265474", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Average limit superior Let $\mathcal{l}_\mathbb{R}^\infty$ be the space of bounded sequences in $\mathbb{R}$. We define a map $p: \mathcal{l}_\mathbb{R}^\infty\to\mathbb{R}$ by $$p(\underline x)=\limsup_{n\to\infty} \frac{1}{n}\sum_{k=1}^n x_k.$$ My notes claim that $$\liminf_{n\to\infty} x_n\le p(\underline x)\le \limsup_{n\to\infty} x_n.$$ I haven't found a neat way to show that this holds (only a rather complicated argument). Is there an easy, intuitive way ?
Let $A = \liminf_{n \to \infty} x_n$ and $B = \limsup_{n \to \infty} x_n$. For any $\epsilon > 0$, there is $N$ such that for all $k > N$, $A - \epsilon \le x_k \le B + \epsilon$. Let $S_n = \displaystyle \sum_{k=1}^n x_k$. Then for $n > N$, $$ S_N + (n-N) (A - \epsilon) \le S_n \le S_N + (n-N) (B + \epsilon) $$ and so $$ \eqalign{ A - \epsilon &= \lim_{n \to \infty} \frac{S_N + (n-N) (A - \epsilon)}{n} \le \liminf_{n \to \infty} \dfrac{S_n}{n}\cr \limsup_{n \to \infty} \dfrac{S_n}{n} &\le \lim_{n \to \infty} \frac{ S_N + (n-N) (B + \epsilon)}{n} = B + \epsilon} $$ Now take $\epsilon \to 0+$.
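A quick numerical illustration with an oscillating sequence (a sketch, not part of the proof): the Cesàro averages of a $0$-$1$ sequence land between $\liminf x_n = 0$ and $\limsup x_n = 1$.

```python
def cesaro_averages(xs):
    """Return the running averages S_n / n of the sequence xs."""
    out, partial = [], 0.0
    for n, x in enumerate(xs, start=1):
        partial += x
        out.append(partial / n)
    return out

xs = [n % 2 for n in range(1, 10001)]     # 1, 0, 1, 0, ...
avgs = cesaro_averages(xs)
print(avgs[-1])   # 0.5, squarely inside [liminf, limsup] = [0, 1]
```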
{ "language": "en", "url": "https://math.stackexchange.com/questions/265538", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Cohen–Macaulayness of $R=k[x_1, \dots,x_n]/\mathfrak p$ I'm looking for some help for the following question: Let $k$ be a field and $R=k[x_1, \dots ,x_n]$. Show that $R/\mathfrak p$ is Cohen–Macaulay if $\mathfrak p$ is a prime ideal with $\operatorname{height} \mathfrak p \in\lbrace 0,1,n-1,n \rbrace$. My proof: If $\operatorname{height} p=0$ then for all $q\in \operatorname{Max}(R)$ $\operatorname{height} pR_q=0$ therefore $pR_q\in \operatorname{Min}(R_q)=\lbrace 0\rbrace$, so $pR_p=(0)$ and it's done but If $\operatorname{height} pR_q=1$ then $\operatorname{grade} pR_p=\operatorname{height} pR_p=1$.Thus there exist regular sequence $z\in pR_q$ and we have $\dim R_q/zR_q=\dim R_q-\operatorname{height}zR_q=n-1$.since that $R_q/zR_q$ is Cohen-Macaulay then $\operatorname{depth}R_q/zR_q=\dim R_q/zR_q=n-1$.here,we know that $‎\exists y_1,...,y_{n-1}\in qR_q $ s.t $y_1,...,y_{n-1}$ is $R_q/zR_q$-sequence.Now, if we can make a $R_q/pR_q$-sequence has a length $n-1$ It's done because $\dim R_q/pR_q=\dim R_q-\operatorname{height}pR_q=n-1$. A similar argument works for n,n-1.
Hint. $R$ integral domain, $\operatorname{ht}(p)=0\Rightarrow p=(0)$. If $\operatorname{ht}(p)=1$, then $p$ is principal. If $\operatorname{ht}(p)=n−1,n$, then $\dim R/p=1,0$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/265596", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
an equality related to the point measure For any positive measure $\rho$ on $[-\pi, \pi]$, prove the following equality: $$\lim_{N\to\infty}\int_{-\pi}^{\pi}\frac{\sum_{n=1}^Ne^{in\theta}}{N}d\rho(\theta)=\rho(\{0\}).$$ Remark: It is easy to check that for any fixed positive number $0<\delta<\pi$, $$|\int_{\delta}^{\pi}\frac{\sum_{n=1}^Ne^{in\theta}}{N}d\rho(\theta)|\leq \int_{\delta}^{\pi}\frac{2}{2\sin(\frac{\delta}{2})N}d\rho(\theta)\to 0\text{ as }N\to\infty,$$ so I think we have to show that $$\int_{-\delta}^{\delta}\frac{\sum_{n=1}^Ne^{in\theta}}{N}d\rho(\theta)\sim \int_{-\delta}^{\delta}d\rho(\theta)\sim \rho(\{0\})?$$ Maybe we also have to choose $\delta=\delta(N)$ etc.
Try to show that $\int_{-\delta}^\delta e^{in\theta}d\rho(\theta)\rightarrow \rho(\{0\})$, a kind of generalized Riemann Lebesgue Lemma. Then your result will follow by the fact that you are taking a Cesaro average of a sequence that converges. I believe you need some kind of sigma finite condition on your $\rho$ for this to work though.
{ "language": "en", "url": "https://math.stackexchange.com/questions/265664", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Is the tangent function (like in trig) and tangent lines the same? So, a 45 degree angle in the unit circle has a tan value of 1. Does that mean the slope of a tangent line from that point is also 1? Or is something different entirely?
The $\tan$ function can be described in four different ways, and each adds to a fuller understanding of the tan function.

1. First, the basics: the value of $\tan$ is equal to the value of $\sin$ over $\cos$: $$\tan(45^\circ)=\frac{\sin(45^\circ)}{\cos(45^\circ)}=\frac{\frac{\sqrt{2}}{2}}{\frac{\sqrt{2}}{2}}=1$$
2. So the $\tan$ function for a given angle does give the slope of the radius, but only when the radius is one (the unit circle). For instance, when the radius is 2, then $2\tan(45^\circ)=2$, but the slope of the 45 degree angle is still 1.
3. The value of $\tan$ for a given angle is the length of the line tangent to the circle at the point where the angle intersects the circle, measured from the point of intersection (A) to the $x$-axis (E).
4. The value of the tangent can also be described as the length of the segment of the line $x=r$ (a vertical line intersecting the $x$-axis where $x$ equals the radius of the circle) from $y=0$ to where the vertical line intersects the angle.

The explanations in examples 3 and 4 might seem counterintuitive at first, but if you think about it, you can see that they are really just reflections across a line of half the specified angle. Image to follow. The images included are both from Wikipedia.
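Description 1, and the radius-independence of the slope in description 2, are easy to confirm numerically (a small sketch, not part of the original answer):

```python
import math

theta = math.radians(45)

# description 1: tan = sin / cos, and tan(45 degrees) = 1
assert math.isclose(math.tan(theta), math.sin(theta) / math.cos(theta))
assert math.isclose(math.tan(theta), 1.0)

# description 2: the slope of the radius does not depend on the radius
for r in (1.0, 2.0, 10.0):
    x, y = r * math.cos(theta), r * math.sin(theta)
    assert math.isclose(y / x, math.tan(theta))

print("all checks pass")
```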
{ "language": "en", "url": "https://math.stackexchange.com/questions/265706", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 3, "answer_id": 0 }
Who are the most inspiring communicators of math for a general audience? I have a podcast series (http://wildaboutmath.com/category/podcast/ and on Itunes https://itunes.apple.com/us/podcast/sol-ledermans-podcast/id588254197) where I interview people who have a passion for math and who have inspired others to take an interest in the subject. I've interviewed Alfred Posamentier, Keith Devlin, Ed Burger, James Tanton, and other math popularizers I know. I'm trying to get an interview set up with Ian Stewart and I'll see if I can do interviews with Steven Strogatz and Cliff Pickover in 2013. Who do you know, famous or not, who I should try to get for my series? These people don't need to be authors. They can be game designers, teachers, toy makers, bloggers or anyone who has made a big contribution to helping kids or adults enjoy math more.
I recommend Art Benjamin. He's a dynamic speaker, has given lots of math talks to general audiences (mostly on tricks for doing quick mental math calculations, I think), and is an expert on combinatorial proof techniques (e.g. he's coauthor of Proofs That Really Count). Benjamin is a math professor at Harvey Mudd College.
{ "language": "en", "url": "https://math.stackexchange.com/questions/265763", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "16", "answer_count": 13, "answer_id": 3 }
Question With Regards To Evaluating A Definite Integral When evaluating the definite integral below $$\int_{0}^{\pi}(2\sin\theta + \cos3\theta)\,d\theta$$ I get this: $$\left [-2\cos\theta + \frac{\sin3\theta}{3} \right ]_{0}^{\pi} $$ In the above expression I see that $-2$ is a constant which was taken outside the integral sign while performing the integration. Now the question is: should the $-2$ be distributed throughout, or does it only apply to $\cos\theta$? This is what I mean. Is it $$-2\left[\cos(\pi) + \frac{\sin3(\pi)}{3} - \left ( \cos(0) + \frac{\sin3(0)}{3} \right ) \right]?$$ Or does the $-2$ stay only with $\cos\theta$?
$$\int_{0}^{\pi}(2\sin\theta + \cos3\theta)d\theta=\left [-2\cos\theta + \frac{\sin3\theta}{3} \right ]_{0}^{\pi} $$ as you noted, so the $-2$, as you can see in @Nameless's answer, belongs only to the cosine term, not to all terms.
{ "language": "en", "url": "https://math.stackexchange.com/questions/265825", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Problem from "Differential topology" by Guillemin I am strugling one of the problems of "Differential Topology" by Guillemin: Suppose that $Z$ is an $l$-dimensional submanifold of $X$ and that $z\in Z$. Show that there exsists a local coordinate system $\left \{ x_{1},...,x_{k} \right \}$ defined in a neighbourhood $U$ of $z$ such that $Z \cap U$ is defined by the equations $x_{l+1}=0,...,x_{k}=0$. I assume that the solution should be based on the Local Immersion theorem, which states that "If $f:X\rightarrow Y $ is an immersion at $x$, then there exist a local coordinates around $x$ and $y=f(x)$ such that $f(x_{1},...,x_{k})=(x_{1},...,x_{k},0,...,0)$". I would really appreciate any pointers on how to attack this problem.
I found the answers from Henry T. Horton at the questions Why the matrix of $dG_0$ is $I_l$. and Augument, and injectivity. very helpful for solving this question. To repeat your question, which is found in Guillemin & Pollack's Differential Topology on page 18, Problem 2: Suppose that $Z$ is an $l$-dimensional submanifold of $X$ and that $z \in Z$. Show that there exists a local coordinate system {$x_1, \dots, x_k$} defined in a neighborhood $U$ of $z$ in $X$ such that $Z \cap U$ is defined by the equations $x_{l+1}=0, \dots, x_k=0$. Here is my attempt. Consider the following diagram: $$\begin{array}{ccc} Z & \stackrel{i}{\longrightarrow} & X\\ \uparrow{\varphi} & & \uparrow{\psi} \\ C & \stackrel{(x_1, \dots, x_l) \mapsto (x_1, \dots, x_l, 0, \dots, 0)}{\longrightarrow} & C^\prime \end{array} $$ Since $$i(x_1, \dots, x_l) = (x_1, \dots, x_l,0,\dots, 0) \Rightarrow di_x(v_1, \dots, v_l) = (v_1, \dots, v_l, 0,\dots, 0),$$ clearly $di_x$ is injective, thus the inclusion map $i: Z \rightarrow X$ is an immersion. Then we choose a parametrization $\varphi: C \rightarrow Z$ around $z$, and a parametrization $\psi: C^\prime \rightarrow X$ around $i(z)$. The map $C \rightarrow C^\prime$ sends $(x_1, \dots, x_l) \mapsto (x_1, \dots, x_l, 0, \dots, 0)$. The points of $X$ in the neighborhood $\psi(C^\prime)$ around $z$ that lie in $Z$ are exactly those with $x_{l+1} = \cdots = x_k=0$, and this concludes the proof.
{ "language": "en", "url": "https://math.stackexchange.com/questions/265901", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10", "answer_count": 1, "answer_id": 0 }
Proving the stabilizer is a subgroup of the group to prove the Orbit-Stabiliser theorem I have to prove the OS theorem. The OS theorem states that for some group $G$, acting on some set $X$, we get $$ |G| = |\mathrm{Orb}(x)| \cdot |G_x| $$ To prove this, I said that this can be written as $$ |\mathrm{Orb}(x)| = \frac{|G|}{|G_x|}$$ In order to prove the RHS, I can say that we can use Lagrange theorem, assuming that the stabiliser is a subgroup of the group G, which I'm pretty sure it is. I don't really know how I'd go about proving this though. Also, I was thinking, would this prove the whole theorem or just the RHS? I didn't just want to Google a proof, I wanted to try and come up with one myself because it's easier to remember. Unless there is an easier proof?
You’re on the right track. By definition $G_x=\{g\in G:g\cdot x=x\}$. Suppose that $g,h\in G_x$; then $$(gh)\cdot x=g\cdot(h\cdot x)=g\cdot x=x\;,$$ so $gh\in G_x$, and $G_x$ is closed under the group operation. Moreover, $$g^{-1}\cdot x=g^{-1}\cdot(g\cdot x)=(g^{-1}g)\cdot x=1_G\cdot x=x\;,$$ so $g^{-1}\in G_x$, and $G_x$ is closed under taking inverses. Thus, $G_x$ is indeed a subgroup of $G$. To finish the proof, you need only verify that there is a bijection between left cosets of $G_x$ in $G$ and the orbit of $x$. Added: The idea is to show that just as all elements of $G_x$ act identically on $x$ (by not moving it at all), so all elements of a left coset of $G_x$ act identically on $x$. If we can also show that each coset acts differently on $x$, we’ll have established a bijection between left cosets of $G_x$ and members of the orbit of $x$. Let $h\in G$ be arbitrary, and suppose that $g\in hG_x$. Then $g=hk$ for some $k\in G_x$, and $$g\cdot x=(hk)\cdot x=h\cdot(k\cdot x)=h\cdot x\;.$$ In other words, every $g\in hG_x$ acts on $x$ the same way $h$ does. Let $\mathscr{G}_x=\{hG_x:h\in G\}$, the set of left cosets of $G_x$, and let $$\varphi:\mathscr{G}_x\to\operatorname{Orb}(x):hG_x\mapsto h\cdot x\;.$$ The function $\varphi$ is well-defined: if $gG_x=hG_x$, then $g\in hG_x$, and we just showed that in that case $g\cdot x=h\cdot x$. It’s clear that $\varphi$ is a surjection: if $y\in\operatorname{Orb}x$, then $y=h\cdot x=\varphi(hG_x)$ for some $h\in G$. To complete the argument you need only show that $\varphi$ is injective: if $h_1G_x\ne h_2G_x$, then $\varphi(h_1G_x)\ne\varphi(h_2G_x)$. This is perhaps most easily done by proving the contrapositive: suppose that $\varphi(h_1G_x)=\varphi(h_2G_x)$, and show that $h_1G_x=h_2G_x$.
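Not in the original answer, but the counting in the theorem is easy to check on a small example: $S_3$ acting on $\{0,1,2\}$, with permutations represented as tuples (my own setup, purely for illustration):

```python
from itertools import permutations

# S_3 acting on {0, 1, 2}; a permutation p acts by x -> p[x].
G = list(permutations(range(3)))

x = 0
orbit = {p[x] for p in G}                 # Orb(x) = {g.x : g in G}
stabilizer = [p for p in G if p[x] == x]  # G_x = {g in G : g.x = x}

# Orbit-stabilizer: |G| = |Orb(x)| * |G_x|
assert len(G) == len(orbit) * len(stabilizer)
print(len(G), len(orbit), len(stabilizer))  # 6 3 2
```

Here the left cosets of $G_x$ correspond exactly to the three points of the orbit, as in the bijection $\varphi$ above.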
{ "language": "en", "url": "https://math.stackexchange.com/questions/265963", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 4, "answer_id": 0 }
Given a ratio of the height of two similar triangles and the area of the larger triangle, calculate the area of the smaller triangle Please help, I've been working on this problem for ages and I can't seem to get the answer. The heights of two similar triangles are in the ratio 2:5. If the area of the larger triangle is 400 square units, what is the area of the smaller triangle?
It may help you to view your ratio as a fraction in this case. Right now your ratio is for one-dimensional measurements, like height, so if you were to calculate the height of the large triangle based on the height of the small triangle being (for example) 3, you would write: $3 \times \frac52 =$ height of the large triangle Or, to go the other way (knowing the height of the large triangle to be, say, 7) you would write: $7 \times \frac25 = $ height of the small triangle But this is for single-dimensional measurements. For a two-dimensional measurement like area, simply square the ratio (also called the scalar): area of small triangle $\times (\frac52)^2 =$ area of large triangle. This can be extended to three-dimensional measurements by cubing the ratio/fraction. (you'll know which fraction to use because one increases the quantity while the other decreases. So if you find the area of the large triangle to be smaller than the small triangle, you've used the wrong one!)
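For the concrete numbers in the question (height ratio $2:5$, large area $400$), the rule above gives $400\cdot(2/5)^2 = 64$ square units; a two-line check:

```python
ratio = 2 / 5          # small : large, for one-dimensional lengths
large_area = 400

# Area is two-dimensional, so the length ratio gets squared.
small_area = large_area * ratio ** 2
print(small_area)  # 64.0
```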
{ "language": "en", "url": "https://math.stackexchange.com/questions/266044", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Simple integral help How do I integrate $$\int_{0}^1 x \bigg\lceil \frac{1}{x} \bigg\rceil \left\{ \frac{1}{x} \right\}\, dx$$ Where $\lceil x \rceil $ is the ceiling function, and $\left\{x\right\}$ is the fractional part function
Split the integral up into segments $S_m=[1/m,1/(m+1)]$ with $[0,1]= \cup_{m=1}^\infty S_m$. In the segment $m$, we have $\lceil 1/x \rceil=m+1$ and $\{1/x\} = 1/x- \lfloor 1/x\rfloor = 1/x - m$ (apart from values of $x$ on the boundary which do not contribute to the integral). This yields $$\begin{align}\int_0^1 x \bigg\lceil \frac{1}{x} \bigg\rceil \left\{ \frac{1}{x} \right\}\, dx &= \sum_{m=1}^\infty \int_{S_m}x \bigg\lceil \frac{1}{x} \bigg\rceil \left\{ \frac{1}{x} \right\}\, dx \\ &= \sum_{m=1}^\infty \int_{1/(m+1)}^{1/m} x (m+1)\left(\frac1x -m \right)\, dx\\ &= \sum_{m=1}^\infty \frac{1}{2m(1+m)}\\ &=\frac{1}{2}. \end{align}$$
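As a numerical sanity check (not part of the derivation), we can compare the partial sums of $\frac{1}{2m(m+1)}$ with a direct midpoint Riemann-sum estimate of the integral:

```python
import math

# The per-segment values 1/(2m(m+1)) telescope to 1/2.
partial = sum(1 / (2 * m * (m + 1)) for m in range(1, 100000))

# Direct midpoint Riemann sum of x * ceil(1/x) * frac(1/x) on (0, 1].
N = 100000
total = 0.0
for i in range(N):
    x = (i + 0.5) / N
    inv = 1 / x
    total += x * math.ceil(inv) * (inv - math.floor(inv)) / N

print(partial, total)  # both close to 0.5
```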
{ "language": "en", "url": "https://math.stackexchange.com/questions/266110", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "17", "answer_count": 3, "answer_id": 1 }
Algebraic manipulation of normal, $\chi^2$ and Gamma probability distributions If $X_1, \ldots, X_n \sim N(\mu, \sigma^2)$, then $$ \frac{n - 1}{\sigma^2}S^2 \sim \chi^2_{n - 1} $$ where $S^2 = \frac{1}{n-1}\sum_{i=1}^n (x_i^2- \bar{x})^2$, and there's a direct relationship between the $\chi^2_p$ and Gamma($\alpha, \beta$) distributions: $$ \chi^2_{n - 1} = \text{Gamma}(\tfrac{n-1}{2}, 2). $$ But then why is $$ S^2 \sim \text{Gamma}(\tfrac{n-1}{2}, \tfrac{2\sigma^2}{n-1}) \,? $$ And why do we multiply the reciprocal with $\beta$ and not $\alpha$? Is it because $\beta$ is the scale parameter? In general, are there methods for algebraic manipulation around the "$\sim$" other than the standard transformation procedures? Something that uses the properties of location/scale families perhaps?
Suppose $X\sim \operatorname{Gamma}(\alpha,\beta)$, so that the density is $cx^{\alpha-1} e^{-x/\beta}$ on $x>0$, and $\beta$ is the scale parameter. Let $Y=kX$. The density function of $Y$ is $$ \frac{d}{dx} \Pr(Y\le x) = \frac{d}{dx}\Pr(kX\le x) = \frac{d}{dx} \Pr\left(X\le\frac x k\right) = \frac{d}{dx}\int_0^{x/k} cu^{\alpha-1} e^{-u/\beta} \, du $$ $$ = c\left(\frac x k\right)^{\alpha-1} e^{-(x/k)/\beta} \cdot\frac1k $$ $$ =(\text{constant})\cdot x^{\alpha-1} e^{-x/(k\beta)}. $$ So it's a Gamma distribution with parameters $\alpha$ and $k\beta$.
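An empirical illustration of the scaling argument, using Python's standard-library `random.gammavariate` (which takes the same shape/scale parameters as above; that convention is an implementation detail assumed here): multiplying $\operatorname{Gamma}(\alpha,\beta)$ samples by $k$ should match $\operatorname{Gamma}(\alpha,k\beta)$ in mean and variance.

```python
import random
import statistics

random.seed(0)
alpha, beta, k = 3.0, 2.0, 5.0
n = 100000

scaled = [k * random.gammavariate(alpha, beta) for _ in range(n)]  # k * Gamma(alpha, beta)
direct = [random.gammavariate(alpha, k * beta) for _ in range(n)]  # Gamma(alpha, k*beta)

m1, m2 = statistics.fmean(scaled), statistics.fmean(direct)
v1, v2 = statistics.pvariance(scaled), statistics.pvariance(direct)

# Gamma(alpha, beta) has mean alpha*beta and variance alpha*beta^2,
# so both samples should have mean near 30 and variance near 300.
print(m1, m2)
print(v1, v2)
```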
{ "language": "en", "url": "https://math.stackexchange.com/questions/266175", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Definition of $L^0$ space From Wikipedia: The vector space of (equivalence classes of) measurable functions on $(S, Σ, μ)$ is denoted $L^0(S, Σ, μ)$. This doesn't seem connected to the definition of $L^p(S, Σ, μ), \forall p \in (0, \infty)$ as being the set of measurable functions $f$ such that $\int_S |f|^p d\mu <\infty$. So I wonder if I miss any connection, and why use the notation $L^0$ if there is no connection? Thanks and regards!
Note that when we restrict ourselves to the probability measures, then this terminology makes sense: $L^p$ is the space of those (equivalence classes of) measurable functions $f$ satisfying $$\int |f|^p<\infty.$$ Therefore $L^0$ should be the space of those (equivalence classes of) measurable functions $f$ satisfying $$\int |f|^0=\int 1=1<\infty,$$ that is the space of all (equivalence classes of) measurable functions $f$. And it is indeed the case.
{ "language": "en", "url": "https://math.stackexchange.com/questions/266216", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "23", "answer_count": 4, "answer_id": 3 }
Summing elements of a sequence Let the sequence $a_n$ be defined as $a_n = 2^n$ where $n = 0, 1, 2, \ldots $ That is, the sequence is $1, 2, 4, 8, \ldots$ Now assume I am told that a certain number is obtained by taking some of the numbers in the above sequence and adding them together (e.g. $a_4 + a_{19} + a_5$), I may be able to work out what numbers from the sequence were used to arrive at the number. For example, if the number given is 8, I know that the sum was simply $a_3$, since this is the only way to get 8. If i'm told the number is 12, I know that the sum was simply $a_3 + a_2$. If I'm told what the resulting number, can you prove that I can always determine which elements of the above sequence were used in the sum to arrive at that number?
Fundamental reason: The division algorithm. For any number $a\geq 0$, we know there exists a unique $q_0\geq 0$ and $0\leq r_0<2$ such that $$a=2q_0+r_0.$$ Similarly, we know there exists a unique $q_1\geq 0$ and $0\leq r_1<2$ such that $$q_0=2q_1+r_1.$$ We can define the sequences $q_0,q_1,\ldots$ and $r_0,r_1,\ldots$ in this way, i.e. $q_{n+1}\geq 0$ and $0\leq r_{n+1}< 2$ are the unique solutions to $$q_n=2q_{n+1}+r_{n+1}.$$ Note that $0\leq r_n<2$ is just equivalent to $r_n\in\{0,1\}$. Also note that if $q_n=0$ for some $n$, then $q_k=0$ and $r_k=0$ for all $k>n$. Because $\frac{1}{2} q_n\geq q_{n+1}$ for any $n$, and because every $q_n$ is a non-negative integer, we will eventually have $q_n=0$ for some $n$. Let $N$ be the smallest index such that $q_N=0$. Then $$\begin{align} a&=2q_0+r_0\\ &=2(2q_1+r_1)+r_0\\ &\cdots\\ &=2(2(\cdots (2q_N+r_N)+r_{N-1}\cdots )+r_1)+r_0\\ &=2^Nr_N+2^{N-1}r_{N-1}+\cdots+r_0\\ \end{align}$$ is a sum of terms from the sequence $a_n=2^n$, specifically those $a_n$'s for which the corresponding $r_n=1$ instead of 0.
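The repeated division in this proof is exactly the usual binary-expansion algorithm; a short sketch (the function name is my own):

```python
def powers_of_two_summing_to(a):
    """Return the distinct powers 2^n whose sum is a, via repeated division by 2."""
    powers, n = [], 0
    q = a
    while q > 0:
        q, r = divmod(q, 2)   # q_n = 2*q_{n+1} + r_{n+1}, with r in {0, 1}
        if r == 1:
            powers.append(2 ** n)
        n += 1
    return powers

print(powers_of_two_summing_to(12))  # [4, 8]
assert sum(powers_of_two_summing_to(12)) == 12
```

The uniqueness of each quotient/remainder pair is what makes the resulting set of powers unique, which answers the question.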
{ "language": "en", "url": "https://math.stackexchange.com/questions/266270", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 5, "answer_id": 3 }
$a_{n}$ converges and $\frac{a_{n}}{n+1}$ too? I have a sequence $a_{n}$ which converges to $a$, then I have another sequence which is based on $a_{n}$: $b_{n}:=\frac{a_{n}}{n+1}$, now I have to show that $b_{n}$ also converges to $a$. My steps: $$\frac{a_{n}}{n+1}=\frac{1}{n+1}\cdot a_{n}=0\cdot a=0$$ But this is wrong, why am I getting this? My steps seem to be okay according to what I have learned till now; can someone show me the right way please? And then I am asked to find another two sequences like $a_n$ and $b_n$ but where $a_n$ diverges and $b_n$ converges based on $a_n$. I said: let $a_n$ be a diverging sequence then $$b_n:=\frac{1}{a_n}$$ the reciprocal of $a_n$ should converge. Am I right?
For $a_n=1$, clearly $a_n \to 1$ and $b_n \to 0$. So the result you are trying to prove is false. In fact, because product is continuous, $\lim \frac{a_n}{n+1} = (\lim a_n) (\lim \frac{1}{n+1})=0$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/266345", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
L'Hospital's Rule Question. Show that if $x$ is an element of $\mathbb R$ then $$\lim_{n\to\infty} \left(1 + \frac xn\right)^n = e^x $$ (HINT: Take logs and use L'Hospital's Rule) I'm not too sure how to go about answering this, or how to put it in the form $\frac{f'(x)}{g'(x)}$ in order to apply L'Hospital's Rule. So far I've simply taken logs and brought the power in front, leaving me with the task of showing $$ n\log \left(1+ \frac xn\right) \to x. $$
$$\lim_{n\to\infty} (1 + \frac xn)^n =\lim_{n\to\infty} e^{n\ln(1 + \frac xn)} $$ The limit $$\lim_{n\to\infty} n\ln(1 + \frac xn)=\lim_{n\to\infty} \frac{\ln(1 + \frac xn)}{\frac1n}=\lim_{n\to\infty} \frac{\frac{1}{1 + \frac xn}\frac{-x}{n^2}}{-\frac1{n^2}}=\lim_{n\to\infty} \frac{x}{1 + \frac xn}=x$$ By continuity of $e^x$, $$\lim_{n\to\infty} (1 + \frac xn)^n =\lim_{n\to\infty} e^{n\ln(1 + \frac xn)}=e^x $$
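Numerically the convergence is easy to observe (illustration only, with $x=2$):

```python
import math

x = 2.0
for n in (10, 1000, 100000):
    approx = (1 + x / n) ** n
    print(n, approx, abs(approx - math.exp(x)))
# the gap to e^2 shrinks roughly like 1/n
```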
{ "language": "en", "url": "https://math.stackexchange.com/questions/266411", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 5, "answer_id": 2 }
Is it possible to prove everything in mathematics by theorem provers such as Coq? Coq has been used to provide formal proofs to the Four Colour theorem, the Feit–Thompson theorem, and I'm sure many more. I was wondering - is there anything that can't be proved in theorem provers such as Coq? A little extra question is if everything can be proved, will the future of mathematics consistent of a massive database of these proofs that everyone else will build on top of? To my naive mind, this feels like a much more rigorous way to express mathematics.
It is reasonable to believe that everything that has been (or can be) rigorously proved can be proved in such an explicitly formal way that a "stupid" proof verification system can give its thumbs up. While typical everyday proofs may have some informal handwaving parts in them, these should always be formalizable in such an "obvious" manner that one does not bother to carry it out; indeed, one is totally convinced that formalizing is in principle possible, and otherwise one wouldn't call it a proof at all. In fact, I sometimes have the habit of guiding people (e.g. if they keep objecting to Cantor's diagonal argument) to the corresponding page at the Proof Explorer and asking them to point out which particular step they object to. For some theorems and proofs this approach may help you get rid of any doubts casting a shadow on the proof: isn't there possibly some sub-sub-case on page 523 that was left out? But then again: have you checked the validity of the code of your theorem verifier? Is even the hardware bug-free? (Remember the Pentium bug?) Would you trust a proof that $10000\cdot10000=100000000$ consisting of putting millions of pebbles into $10000$ rows and columns and counting them more than you would trust computing the same result by long multiplication?
{ "language": "en", "url": "https://math.stackexchange.com/questions/266501", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "13", "answer_count": 4, "answer_id": 0 }
Test for convergence the series $\sum_{n=1}^{\infty}\frac{1}{n^{(n+1)/n}}$ Test for convergence the series $$\sum_{n=1}^{\infty}\frac{1}{n^{(n+1)/n}}$$ I'd like to make up a collection with solutions for this series, and any new solution will be rewarded with upvotes. Here is what I have at the moment Method 1 We know that for all positive integers $n$, $n<2^n$, and this yields $$n^{(1/n)}<2$$ $$n^{(1+1/n)}<2n$$ Then, it turns out that $$\frac{1}{2} \sum_{n=1}^{\infty}\frac{1}{n} \rightarrow \infty \le\sum_{n=1}^{\infty}\frac{1}{n^{(n+1)/n}}$$ Hence $$\sum_{n=1}^{\infty}\frac{1}{n^{(n+1)/n}}\rightarrow \infty$$ EDIT: Method 2 If we consider the maximum of $f(x)=x^{(1/x)}$ reached for $x=e$ and denote it by $c$, then $$\sum_{n=1}^{\infty}\frac{1}{c \cdot n} \rightarrow \infty \le\sum_{n=1}^{\infty}\frac{1}{n^{(n+1)/n}}$$ Thanks!
Let $a_n = 1/n^{(n+1)/n}$. Then $$\begin{eqnarray*} \frac{a_n}{a_{n+1}} &\sim& 1+\frac{1}{n} - \frac{\log n}{n^2} \qquad (n\to\infty). \end{eqnarray*}$$ The series diverges by Bertrand's test.
{ "language": "en", "url": "https://math.stackexchange.com/questions/266547", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 2 }
Computing left derived functors from acyclic complexes (not resolutions!) I am reading a paper where the following trick is used: To compute the left derived functors $L_{i}FM$ of a right-exact functor $F$ on an object $M$ in a certain abelian category, the authors construct a complex (not a resolution!) of acyclic objects, ending in $M$, say $A_{\bullet} \to M \to 0$, such that the homology of this complex is acyclic, and this homology gets killed by $F$. Thus, they claim, the left-derived functors can be computed from this complex. Why does this claim follow? It seems like it should be easy enough, but I can't seem to wrap my head around it.
Compare with a projective resolution $P_\bullet\to M\to 0$. By projectivity, we obtain (from the identity $M\to M$) a complex morphism $P_\bullet\to A_\bullet$, which induces $F(P_\bullet)\to F(A_\bullet)$. With a bit of diagram chasing you should find that $H_\bullet(F(P_\bullet))$ is the same as $H_\bullet(F(A_\bullet))$. A bit more explicitly: We can build a resolution of complexes $$\begin{matrix} &\downarrow && \downarrow&&\downarrow\\ 0\leftarrow &A_2&\leftarrow&P_{2,1}&\leftarrow&P_{2,2}&\leftarrow\\ &\downarrow && \downarrow&&\downarrow\\ 0\leftarrow &A_1&\leftarrow&P_{1,1}&\leftarrow&P_{1,2}&\leftarrow\\ &\downarrow && \downarrow&&\downarrow\\ 0\leftarrow &M&\leftarrow &P_{0,1}&\leftarrow&P_{0,2}&\leftarrow\\ &\downarrow && \downarrow&&\downarrow\\ &0&&0&&0 \end{matrix} $$ i.e. the $P_{i,j}$ are projective and all rows are exact. The downarrows are found recursively using projectivity so that all squares commute: If all down maps are called $f$ and all left maps $g$, then $f\circ g\colon P_{i,j}\to P_{i-1,j-1}$ maps into the image of $g\colon P_{i-1,j}\to P_{i-1,j-1}$ because $g\circ(f\circ g)=f\circ g\circ g=0$, hence $f\circ g$ factors through $P_{i-1,j}$, thus giving the next $f\colon P_{i,j}\to P_{i-1,j}$. We can apply $F$ and take direct sums across diagonals, i.e. let $B_k=\bigoplus_{i+j=k} FP_{i,j}$. Then $d:=(-1)^if+g$ makes this a complex. What interests us here is that we can walk from the lower row to the left column by diagram chasing, thus finding that $H_\bullet(F(P_{0,\bullet}))=H_\bullet(F(A_\bullet))$. Indeed: Start with $x_0\in FP_{0,k}$ with $Fg(x_0)=0$. Then we find $y_1\in FP_{1,k}$ with $Ff(y_1)=x_0$. Since $Ff(Fg(y_1))=Fg(Ff(y_1))=Fg(x_0)=0$, we find $y_2\in FP_{2,k-1}$ with $Ff(y_2)=Fg(y_1)$, and so on until we end up with a cycle in $A_k$. Convince yourself that the choices involved don't make a difference in the end (i.e. up to boundaries). Also, the chase can be performed just as well from the left column to the bottom row ...
{ "language": "en", "url": "https://math.stackexchange.com/questions/266654", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 1, "answer_id": 0 }
Quotient Group G/G = {identity}? I know this is a basic question, but I'm trying to convince myself of Wikipedia's statement. "The quotient group $G / G$ is isomorphic to the trivial group." I write the definition for left multiplication because left cosets = right cosets. $ G/G = \{g \in G : gG\} $ But how is this isomorphic to the trivial group, $ \{id_G\} $? $gG$ can't be simplified to $id_G$ ? Thank you.
If $G$ is a group and $N$ is normal in $G$, then $G/N$ is the quotient group. As a group, $G/N$ consists of the cosets of the normal subgroup $N$ in $G$, and these cosets themselves satisfy the group axioms because of the normality of $N$. Now $G$ is clearly normal in $G$, so $G/G$ consists of the single coset that is all of $G$. Thus this group has only one element, hence it must be isomorphic to the trivial group.
{ "language": "en", "url": "https://math.stackexchange.com/questions/266718", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10", "answer_count": 8, "answer_id": 5 }
Want to show $\sum_{n=2}^{\infty} \frac{1}{2^{n}\cdot n}$ converges Want to show that $\sum_{n=2}^{\infty} \frac{1}{2^{n}\cdot n}$ converges. I am trying to show this by proving that the partial sums are bounded. I have tried doing this by induction but am not seeing how to get past the inductive step. Do I need to instead look for a closed form? Thanks
Since you mentioned induction: Let $s_m = \sum_{n=2}^{m} \frac{1}{2^{n}\cdot n}$. Then $s_m \leq 1-\frac{1}{m}$. $P(2)$ is obvious, while $P(m) \Rightarrow P(m+1)$ reduces to $$1-\frac{1}{m}+\frac{1}{2^{m+1}(m+1)} \leq 1- \frac{1}{m+1}$$ which is equivalent to: $$m \leq 2^{m+1}$$
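The bound can be checked numerically for the first few partial sums; as an aside (a standard power-series fact, not needed for the boundedness argument), the full series actually sums to $\ln 2 - \frac12 \approx 0.19315$:

```python
import math

s = 0.0
for m in range(2, 21):
    s += 1 / (2 ** m * m)
    assert s <= 1 - 1 / m  # the inductive bound s_m <= 1 - 1/m
print(s, math.log(2) - 0.5)  # the partial sums approach ln(2) - 1/2
```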
{ "language": "en", "url": "https://math.stackexchange.com/questions/266788", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 8, "answer_id": 4 }
difficulty understanding branch of the logarithm Here is a past qual question: Prove that the function $\log(z+ \sqrt{z^2-1})$ can be defined to be analytic on the domain $\mathbb{C} \setminus (-\infty,1]$ (Hint: start by defining an appropriate branch of $ \sqrt{z^2-1}$ on $\mathbb{C}\setminus (-\infty,1]$ ) It seems like a typical language problem, and I do not see the point of being overly rigorous here, but I think I have difficulty understanding the branch cut. Could someone explain it in a simple way and sketch the solution to the problem? Any help will be much appreciated.
Alternatively, you can just take the standard branch for $\sqrt{z}$ excluding $(-\infty,0]$ and then compute $\sqrt{z-1}\sqrt{z+1}$ which is defined for $z+1,z-1\notin(-\infty,0]$, that is, for $z\notin(-\infty,1]$
{ "language": "en", "url": "https://math.stackexchange.com/questions/266857", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
how to find the nth number in the sequence? consider the sequence of numbers below, 2 5 10 18 31 52 . . . the sequence goes on like this. My Question is, How to find the nth term in the sequence? thanks.
The sequence can be expressed in many ways. As Matt N. and M. Strochyk mentioned: $$ a_{n+2}= a_{n}+a_{n+1}+3,$$ $$ a_1 = 2,\quad a_2 = 5 \quad (n\in \mathbb{N})$$ Or as this one, for example: $$ a_{n+1}= a_{n}+\frac{(n-1)n(2n-1)}{12}-\frac{(n-1)n}{4}+2n+1,$$ $$ a_1 = 2 \quad (n\in \mathbb{N})$$ It's interesting that the term: $$ b_n = \frac{(n-1)n(2n-1)}{12}-\frac{(n-1)n}{4}+2n+1$$ gives five Fibonacci numbers $(3, 5, 8, 13, 21)$ for $1 \leq n \leq 5$.
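Both expressions can be checked directly against the listed terms $2, 5, 10, 18, 31, 52$ (a sketch; the helper name `b` mirrors the notation above):

```python
target = [2, 5, 10, 18, 31, 52]

# First form: a_{n+2} = a_n + a_{n+1} + 3, started from a_1 = 2, a_2 = 5.
a = [2, 5]
while len(a) < len(target):
    a.append(a[-2] + a[-1] + 3)
assert a == target

# Second form: a_{n+1} = a_n + b_n with the cubic increment b_n.
def b(n):
    return (n - 1) * n * (2 * n - 1) / 12 - (n - 1) * n / 4 + 2 * n + 1

c = [2]
for n in range(1, len(target)):
    c.append(c[-1] + b(n))
assert c == target

print([b(n) for n in range(1, 6)])  # the increments 3, 5, 8, 13, 21
```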
{ "language": "en", "url": "https://math.stackexchange.com/questions/266906", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 7, "answer_id": 6 }
Show $\lim\limits_{n\to\infty} \sqrt[n]{n^e+e^n}=e$ Why is $\lim\limits_{n\to\infty} \sqrt[n]{n^e+e^n}$ = $e$? I couldn't get this result.
Taking logs, you must show that $$\lim_{n \rightarrow \infty} {\ln(n^e + e^n) \over n} = 1$$ Applying L'hopital's rule, this is equivalent to showing $$\lim_{n \rightarrow \infty}{en^{e-1} + e^n \over n^e + e^n} = 1$$ Which is the same as $$\lim_{n \rightarrow \infty}{e{n^{e-1}\over e^n} + 1 \over {n^e \over e^n}+ 1} = 1$$ By applying L'hopital's rule enough times, any limit of the form $\lim_{n \rightarrow \infty}{\displaystyle {n^a \over e^n}}$ is zero. So one has $$\lim_{n \rightarrow \infty}{e{n^{e-1}\over e^n} + 1 \over {n^e \over e^n}+ 1} = {e*0 + 1 \over 0 + 1}$$ $$ = 1$$ (If you're wondering why you can just plug in zero here, the rigorous reason is that the function ${\displaystyle {ex + 1 \over y + 1}}$ is continuous at $(x,y) = (0,0)$.)
{ "language": "en", "url": "https://math.stackexchange.com/questions/266977", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "16", "answer_count": 8, "answer_id": 7 }
A non-linear maximisation We know that $x+y=3$ where x and y are positive real numbers. How can one find the maximum value of $x^2y$? Is it $4,3\sqrt{2}, 9/4$ or $2$?
By AM-GM $$\sqrt[3]{2x^2y} \leq \frac{x+x+2y}{3}=2 $$ with equality if and only if $x=x=2y$. Second solution This one is more complicated, and artificial (since I needed to know the max in advance). $$x^2y=3x^2-x^3=-4+3x^2-x^3+4=4- (x-2)^2(x+1)\leq 4$$ since $(x-2)^2(x+1) \geq 0$.
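A brute-force scan over the constraint $x+y=3$ confirms the maximum value $4$ at $x=2$, $y=1$ (illustration only):

```python
# Scan x in (0, 3) on a fine grid; y = 3 - x, and we maximize x^2 * y.
best_x, best_val = max(
    ((x, x * x * (3 - x)) for x in (i / 10000 for i in range(1, 30000))),
    key=lambda t: t[1],
)
print(best_x, best_val)  # 2.0 4.0
```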
{ "language": "en", "url": "https://math.stackexchange.com/questions/267055", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 0 }
Convergence of the series Im trying to resolve the next exercise: $$\sum_{n=1}^\infty\ e^{an}n^2 \text{ , }a\in R $$ I dont know in which ranges I should separe the a value for resolving the limit and finding out the convergence.
Write it as $\sum_{n=1}^\infty\ r^n n^2$ where $r = e^a$ satisfies $0 < r$. If $r \ge 1$ (i.e., $a \ge 0$), the sum clearly diverges. If $r < 1$ (i.e., $a < 0$), you can get an explicit formula for $\sum_{n=1}^m\ r^n n^2$ which will show that the sum converges. Therefore the sum converges for $a < 0$ and diverges for $a \ge 0$. The $e^{an}$ seems like a distraction to hide the true nature of the problem.
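For the convergent case $a<0$, the explicit formula alluded to is the standard identity $\sum_{n\ge1} n^2 r^n = \frac{r(1+r)}{(1-r)^3}$ for $|r|<1$ (quoted here as a known fact, not derived in the answer); a quick check:

```python
r = 0.3  # i.e. a = log(0.3) < 0, the convergent case
partial = sum(n * n * r ** n for n in range(1, 200))
closed = r * (1 + r) / (1 - r) ** 3
print(partial, closed)  # both approximately 1.13703
```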
{ "language": "en", "url": "https://math.stackexchange.com/questions/267116", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 6, "answer_id": 5 }
2 heads or more in 3 coin toss formula what is the formula to calculate the probabilities of getting 2 heads or more in 3 coin toss ? i've seen a lot of solution but almost all of them were using method of listing all of possible combination like HHT,HTH,etc what i am trying to ask is the formula and/or method to calculate this using a formula and no need to list all of the possible combination. listing 3 coin toss combination is easy(8 possible combination),but suppose i change the coins to dice or say 20-side dice. that would take a long time to list all the possible combination.
The simplest is by symmetry. The chance of at least two heads equals the chance of at least two tails, and if you add them you get exactly $1$ because one or the other has to happen. Thus the chance is $\frac 12$. This approach is not always available.
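For completeness, the general formula the asker was after (the probability of at least $k$ heads in $n$ fair tosses is $\sum_{j=k}^{n}\binom{n}{j}/2^n$) agrees with the symmetry argument in the $n=3$, $k=2$ case:

```python
from math import comb

def p_at_least(k, n):
    """P(at least k heads in n fair coin tosses)."""
    return sum(comb(n, j) for j in range(k, n + 1)) / 2 ** n

print(p_at_least(2, 3))    # 0.5, matching the symmetry argument
print(p_at_least(10, 20))  # no enumeration needed for larger cases
```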
{ "language": "en", "url": "https://math.stackexchange.com/questions/267186", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 5, "answer_id": 1 }
Infinite sum of floor functions I need to compute this (convergent) sum $$\sum_{j=0}^\infty\left(j-2^k\left\lfloor\frac{j}{2^k}\right\rfloor\right)(1-\alpha)^j\alpha$$ But I have no idea how to get rid of the floor thing. I thought about some variable substitution, but it didn't take me anywhere.
We'll let $M=2^k$ throughout. Note that $$f(j)=j-M\left\lfloor\frac{j}{M}\right\rfloor$$ is just the modulus operator - it is equal to the smallest nonnegative $n$ such that $j\equiv n\pmod {M}$ So that means $f(0)=0, f(1)=1,...f(M-1)=M-1,$ and $f(j+M)=f(j)$. This means that we can write: $$F(z)=\sum_{j=0}^{\infty} f(j)z^{j}= \left(\sum_{j=0}^{M-1} f(j)z^{j}\right)\left(\sum_{i=0}^\infty z^{Mi}\right)$$ But $$\sum_{i=0}^\infty z^{Mi} = \frac{1}{1-z^{M}}$$ and $f(j)=j$ for $j=0,...,2^k-1$, so this simplifies to: $$F(z)=\frac{1}{1-z^{M}}\sum_{j=0}^{M-1} jz^j$$ Finally, $$\sum_{j=0}^{M-1} jz^j = z\sum_{j=1}^{M-1} jz^{j-1} =z\frac{d}{dz}\frac{z^M-1}{z-1}=\frac{(M-1)z^{M+1}-Mz^{M}+z}{(z-1)^2}$$ So: $$F(z)=\frac{(M-1)z^{M+1}-Mz^{M}+z}{(z-1)^2(1-z^{M})}$$ Your final sum is: $$\alpha F(1-\alpha)$$ bearing in mind, again, that $M=2^k$
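The closed form can be verified numerically (a sketch with $k=3$, i.e. $M=8$, and $\alpha=0.1$, so $z=1-\alpha=0.9$):

```python
M = 8    # M = 2**k with k = 3
z = 0.9  # corresponds to alpha = 0.1

# Direct evaluation of F(z) = sum_j (j mod M) z^j, truncated far out.
direct = sum((j % M) * z ** j for j in range(2000))

# Closed form derived in the answer.
closed = ((M - 1) * z ** (M + 1) - M * z ** M + z) / ((z - 1) ** 2 * (1 - z ** M))
print(direct, closed)  # agree to many decimal places
```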
{ "language": "en", "url": "https://math.stackexchange.com/questions/267248", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
show that the interval of the form $[0,a)$ or $(a, 1]$ is open set in metric subspace $[0,1]$ but not open in $\mathbb R^1$ On the metric subspace $S = [0,1]$ of the Euclidean space $\mathbb R^1 $, every interval of the form $A = [0,a)$ or $(a, 1]$ where $0<a<1$ is open set in S. These sets are not open in $\mathbb R^1$ Here's what I attempted to show that $A$ is open in $S$. I have no idea how it is not open in $\mathbb R^1$. Let $M = \mathbb R^1$, $x \in A = [0,a)$ . If $x = 0$, $ r \leq \min \{a, 1-a\}, \\\ B_S(0; r) = B_M(0,r)\cap[0,1] = (-r, r) \cap[0,1] = [0,r) \subseteq A $ If $x \neq 0, r \leq \min \{x,|a-x|, 1-x\}, \\B_S(x; r) = B_M(x,r)\cap[0,1] = (x-r, x+r) \cap[0,1] \subset (0,x) \text{ or } (x, 1) \subset A$
Hint: To show $(x,1]$ not open in $\mathbb R$ simply show that there is no open neighborhood of $1$ included in the half-closed interval.
{ "language": "en", "url": "https://math.stackexchange.com/questions/267293", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 3, "answer_id": 1 }
Isosceles triangle Let $ \triangle ABC $ be $C$-isosceles and let $ P\in (AB) $ be a point such that $ m\left(\widehat{PCB}\right)=\phi $. Express $AP$ in terms of $C$, $c$ and $\tan\phi$. Edited problem statement (same as above but in different words): Let $ \triangle ABC $ be an isosceles triangle with a right angle at $C$. Denote $\left | AB \right |=c$. Point $P$ lies on $AB$ ($P\neq A,B$) and angle $\angle PCB=\phi$. Express $\left | AP \right |$ in terms of $c$ and $\tan\phi$.
Edited for revised question Dropping the perpendicular from $C$ onto $AB$ will help. Call the point $E$. Also drop the perpendicular from $P$ onto $BC$, and call the point $F$. Then drop the perpendicular from $F$ onto $AB$, and call the point $G$. This gives a lot of similar and congruent triangles. $$\tan \phi = \dfrac{|PF|}{|CF|} = \dfrac{|FB| }{ |CF|} = \dfrac{ |GB| }{|EG| } = \dfrac{ |PB| }{|AP| }= \dfrac{ c-|AP| }{|AP| }$$ so $$|AP| = \dfrac{c}{ 1+\tan \phi}.$$
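The final formula can be verified with coordinates (a hypothetical placement of my own, not part of the answer): put $C=(0,0)$, $B=(1,0)$, $A=(0,1)$, so $c=\sqrt2$; the ray from $C$ at angle $\phi$ above $CB$ meets $AB$ at $P=(1-s,s)$ with $s=\tan\phi/(1+\tan\phi)$.

```python
import math

phi = 0.3
c = math.sqrt(2)              # |AB| for the unit right isosceles triangle
t = math.tan(phi)
s = t / (1 + t)               # parameter of P along AB, measured from B
P = (1 - s, s)
A = (0.0, 1.0)

# angle PCB really is phi (C is at the origin, CB along the x axis)
assert abs(math.atan2(P[1], P[0]) - phi) < 1e-12
# and |AP| agrees with c / (1 + tan(phi))
AP = math.hypot(P[0] - A[0], P[1] - A[1])
assert abs(AP - c / (1 + t)) < 1e-12
```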
{ "language": "en", "url": "https://math.stackexchange.com/questions/267363", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
Mathematical competitions for adults. I live in Mexico City. However, I am not so interested in whether these exist in my vicinity; I mainly want to learn whether they exist at all, because if they do, it might be easier to make them more popular in other places. Are there mathematical competitions for adults? I have been in a couple of mathematical competitions and they are fun. However, it's sort of a race against the clock, because you only have a couple of chances until you are too old. Are there mathematical competitions for people of all ages? I have looked this up but found none. It makes sense to separate people from second grade and 7th grade. But, for example, in sports, once it comes to adults they are separated into different leagues. This could work to make it fair for mathematicians and mathematics enthusiasts alike. Thanks for your answer. PS: When I say adults I mean people who need not still be studying in college; it includes people who have regular jobs.
Actually I know only one. I was looking for the same thing and found your question. "Championnat International des Jeux Mathématiques et Logiques" http://www.animath.fr/spip.php?article595 Questions and answers must be in French. But there is no requirement for participation: any age and nationality are welcome. Questions won't be translated for you, but I'm pretty sure that if you answer in English, graders won't mind. Just ask. ;)
{ "language": "en", "url": "https://math.stackexchange.com/questions/267416", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 1, "answer_id": 0 }
$A\unlhd G$ , $B\unlhd G$ and $C\unlhd G$ then $A(B∩C)$ is a normal subgroup of $G$ If $A$ is normal in $G$, $B$ is normal in $G$ and $C$ is normal in $G$, how can I show that $$A(B∩C)\unlhd G\,?$$ Thanks!
You know that if $B$ and $C$ are subgroups of a group, then so is their intersection $B\cap C$. Moreover, if one of the subgroups $A$ and $B\cap C$ is normal in $G$, then a standard theorem tells us that $A(B\cap C)\leq G$ as well. Now show the normality of $A(B\cap C)$ in $G$. In fact, show that for all $x\in A(B\cap C)$ and all $g\in G$ we have $g^{-1}xg\in A(B\cap C)$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/267508", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Closed set in $\ell^1$ Show that the set $$ B = \left\lbrace(x_n) \in \ell^1 : \sum_{n\geq 1} n|x_n|\leq 1\right\rbrace$$ is compact in $\ell^1$. Hint: You can use without proof the diagonalization process to conclude that every bounded sequence $(x_n)\in \ell^\infty$ has a subsequence $(x_{n_k})$ that converges in each component, that is, $\lim_{k\rightarrow\infty} (x_{n_k}^{(i)})$ exists for all $i$. Moreover, $B$ is obviously closed in the $\ell^1$-norm. My try: Every bounded sequence $(x_n) \in \ell^\infty$ has a subsequence $(x_{n_k})$ that converges in each component. That is, $\lim_{k\rightarrow\infty} (x_{n_k}^{(i)})$ exists for all $i$. And all sequences in $B$ are bounded in the $\ell^1$-norm. I want to show that every sequence $(x_n) \in B$ has a Cauchy subsequence. Choose $N$ and $M$ such that for $l,k > M$ we have $|x_{n_k}^{(i)} - x_{n_l}^{(i)}| < \frac{1}{N^2}$. Then $$\sum_{i=1}^N |x_{n_k}^{(i)} - x_{n_l}^{(i)}| + \sum_{i = N+1} ^\infty |x_{n_k}^{(i)} - x_{n_l}^{(i)}| \leqslant \frac{1}{N} + \frac{1}{N+1} \sum_{i = N+1} ^\infty i|x_{n_k}^{(i)} - x_{n_l}^{(i)}| \leqslant \frac{3}{N+1}$$ It feels wrong to combine $M,N$ like this; is it? What can I do instead?
We can use and show the following: Let $K\subset \ell^1$. This set has a compact closure for the $\ell^1$ norm if and only if the following conditions are satisfied: * *$\sup_{x\in K}\lVert x\rVert_{\ell^1}$ is finite, and *for all $\varepsilon>0$, we can find $N$ such that for all $x\in K$, $\sum_{k\geqslant N}|x_k|<\varepsilon$. These conditions are equivalent to precompactness, that is, that for all $r>0$, we can find finitely many elements $x^1,\dots,x^N$ such that the balls centered at $x^j$ and for radius $r$ cover $K$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/267563", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 0 }
Boundedness of an integral operator Let $K_n \in L^1([0,1]), n \geq 1$ and define a linear map $T$ from $L^\infty([0,1])$ to sequences by $$ Tf = (x_n), \;\; x_n =\int_0^1 K_n(x)f(x)dx$$ Show that $T$ is a bounded linear operator from $L^\infty([0,1])$ to $\ell^\infty$ iff $$\sup_{n\geq 1} \int_0^1|K_n(x)| dx \lt \infty$$ My try: $(\Leftarrow)$ $$\sup_n |x_n| = \sup_n |\int_0^1 K_n(x)f(x) dx| \leq \sup_n\int_0^1 |K_n(x)f(x)| dx \leq \|f\|_\infty \sup_n\int_0^1 |K_n(x)|dx $$ $(\Rightarrow)$ I can't get the absolute value right. I was thinking of uniform boundedness and that every coordinate can be written with the help of a linear functional. But then I only end up with $\sup_{\|f\| = 1} |\int_0^1 K_n(x) f(x) dx | \lt \infty$. Can I choose my $f$ so that I get what I want?
Yes, you can choose $f$ as you want. If $T$ is bounded then $$ \exists C>0\qquad\left\vert \int_0^1K_n(x)f(x)\,dx\right\vert\leq C \Vert f\Vert_{L^\infty}\qquad \forall f\in L^\infty\quad \forall n\in\mathbb{N}. $$ Fix $m\in\mathbb{N}$, if we take $f=\text{sign}(K_m)\in L^\infty$ then $$ \int_0^1 \vert K_m(x)\vert\,dx\leq C. $$ Repeat this construction for every $m\in\mathbb{N}$ and you obtain $$ \sup_{n\in\mathbb{N}}\int_0^1 \vert K_n(x)\vert\,dx\leq C<+\infty. $$
{ "language": "en", "url": "https://math.stackexchange.com/questions/267612", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 1 }
convergence of weighted average It is well known that for any sequence $\{x_n\}$ of real or complex numbers which converges to a limit $x$, the sequence of averages of the first $n$ terms also converges to $x$. That is, the sequence $\{a_n\}$ defined by $$a_n = \frac{x_1+x_2+\ldots + x_n}{n}$$ converges to $x$. How "severe" a weighting function $w(n)$ can we choose such that the sequence of weighted averages $\{b_n\}$ defined by $$b_n = \frac{w(1)x_1 + w(2)x_2 + \ldots + w(n)x_n}{w(1)+w(2)+\ldots+w(n)} $$ still converges to $x$? Is it possible to choose $w(n)$ such that $\{b_n\}$ is divergent?
Weighted averages belong to the class of matrix summation methods. Define $$W:=\left(\begin{matrix}W_{1,1},W_{1,2},\ldots\\W_{2,1},W_{2,2},\ldots\\\vdots\\\end{matrix}\right)$$ Represent the sequence $\{x_n\}$ by the infinite vector $X:=\left(\begin{matrix}x_1\\x_2\\\vdots\end{matrix}\right)$, and $\{b_n\}$ by the vector $B:=\left(\begin{matrix}b_1\\b_2\\\vdots\end{matrix}\right)$. Then we have $$B=WX.$$ In our case $W_{i,j}:=\frac{w(j)}{w(1)+\ldots+w(i)}$, for $j\leq i$, and $W_{i,j}:=0$, for $j>i$. The summation method is called regular if it transforms convergent sequences into convergent sequences with the same limit. For matrix summation methods we have Silverman-Toeplitz theorem, that says that a matrix summation method is regular if and only if the following are satisfied: * *$\lim_{i\rightarrow\infty} W_{i,j}=0$, for every $j\in\mathbb{N}$ (entries converge to zero along columns) *$\lim_{i\rightarrow\infty}\sum_{j=1}^{\infty}W_{i,j}=1$ (rows add up to $1$) *$\sup_{i}\sum_{j=1}^{\infty}|W_{i,j}|<\infty$ (the sums of the absolute values on the rows are bounded.) In your case $2$ and $3$ are satisfied (if you assume, as in the comment that $w(i)\geq0$), therefore you get the result if and only if $$\sum w(i)=\infty.$$
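A quick numerical illustration of the last condition, with example weights of my own (not from the answer): $w(i)=1$ has $\sum w(i)=\infty$ and the weighted averages of $x_n = 1 + 1/n$ approach the limit $1$, while the summable weights $w(i)=2^{-i}$ make the averages stall near a weighted mean of the early terms.

```python
def weighted_avgs(xs, w):
    # running weighted averages b_n = (sum w(i) x_i) / (sum w(i))
    num = den = 0.0
    out = []
    for i, x in enumerate(xs, start=1):
        num += w(i) * x
        den += w(i)
        out.append(num / den)
    return out

xs = [1 + 1 / n for n in range(1, 5001)]         # x_n -> 1

b_div = weighted_avgs(xs, lambda i: 1.0)         # sum of weights diverges
b_conv = weighted_avgs(xs, lambda i: 2.0 ** -i)  # sum of weights converges

assert abs(b_div[-1] - 1) < 0.01     # tracks the limit 1
assert b_conv[-1] > 1.5              # stuck near 1 + ln 2, not near the limit 1
```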
{ "language": "en", "url": "https://math.stackexchange.com/questions/267669", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10", "answer_count": 1, "answer_id": 0 }
What is the value of the given limit? Possible Duplicate: How can I prove Infinitesimal Limit Let $$\lim_{x\to 0}f(x)=0$$ and $$\lim_{x\to 0}\frac{f(2x)-f(x)}{x}=0$$ Then what is the value of $$\lim_{x\to 0}\frac{f(x)}{x}$$
If $\displaystyle\lim_{x\to0}\frac{f(x)}{x}=L$ then $$ \lim_{x\to0}\frac{f(2x)}{x} = 2\lim_{x\to0}\frac{f(2x)}{2x} = 2\lim_{u\to0}\frac{f(u)}{u} = 2L. $$ Then $$ \lim_{x\to0}\frac{f(2x)-f(x)}{x} = \lim_{x\to0}\frac{f(2x)}{x} - \lim_{x\to0}\frac{f(x)}{x} =\cdots $$ etc. Later note: What is written above holds in cases in which $\displaystyle\lim_{x\to0}\frac{f(x)}{x}$ exists. The question remains: If both $\lim_{x\to0}f(x)=0$ and $\lim_{x\to0}(f(2x)-f(x))/x=0$ then does it follow that $\displaystyle\lim_{x\to0}\frac{f(x)}{x}$ exists?
{ "language": "en", "url": "https://math.stackexchange.com/questions/267715", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
The control of norm in quotient algebra Let $B_1,B_2$ be two Banach spaces and $L(B_i,B_j),K(B_i,B_j)(i,j=1,2)$ spaces of bounded and compact linear operator between them respectively. If $T \in L(B_1,B_1)$, we have a $S \in K(B_1,B_2)$ and a constant $c>0$ such that for any $v \in B_1$,$${\left\| {Tv} \right\|_{{B_1}}} \le c{\left\| v \right\|_{{B_1}}} + {\left\| {Sv} \right\|_{{B_2}}}.$$ My question is, can we find a $A \in K(B_1,B_1)$, such that ${\left\| {T - A} \right\|_{L({B_1},{B_1})}} \le c$?
To start from a very simple case: If $B_1$ is a Hilbert space and $S$ is finite-dimensional such that we have $$ \|Tv\| \le c\|v\| + \|Sv\| \quad \forall v$$ then we can find a finite-dimensional $A$ satisfying $$\tag{1} \|Av\| \le \|Sv\|$$ and $$\tag{2} \|(T-A)v\| \le c \|v\|$$ for every $v \in B_1$. Proof: Write $B_1 = (\ker S)^\bot \oplus_2 \ker S$. We define $A$ separately on each summand: On $\ker S$, we can clearly choose $A = 0$. On its annihilator, we can work with an orthonormal basis: For each vector $e_n$, we set $$ Ae_n = \frac{\|Se_n\|}{c + \|Se_n\|} Te_n$$ (note that we know $Se_n \ne 0$) which immediately gives us $$ \|Ae_n\| \le \|Se_n\|$$ and $$ \|(T-A)e_n\| = \left\|\left(1 - \frac{\|Se_n\|}{c + \|Se_n\|} \right)Te_n\right\| \le c $$ Since (1) and (2) are thus satisfied both on all of $\ker S$ and whenever $v$ is member of the orthonormal basis for $(\ker S)^\bot$, i.e. $v = e_n$, they are satisfied for every $v \in B_1$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/267786", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 1, "answer_id": 0 }
Find all entire functions $f$ such that for all $z\in \mathbb{C}$, $|f(z)|\ge \frac{1}{|z|+1}$ Find all entire functions $f$ such that for all $z\in \mathbb{C}$, $|f(z)|\ge \frac{1}{|z|+1}$ This is one of the past qualifying exams that I was working on and I think that I have to find the function that involved with $f$ that is bounded and use Louiville's theorem to say that the function that is found is constant and conclude something about $f$. I can only think of using $1/f$ so that $\frac{1}{|f(z)|} \le |z|+1$ but $|z|+1$ is not really bounded so I would like to ask you for some hint or idea. Any hint/ idea would be appreciated. Thank you in advance.
Suppose $f$ is not constant. As an entire non-constant function it must have some sort of singularity at infinity. It cannot be a pole (because then it would have a zero somewhere, cf the winding-number proof of the FTA), so it must be an essential singularity. Then $zf(z)$ also has an essential singularity at infinity. But $|zf(z)|\ge \frac{|z|}{|z|+1}$, which goes towards $1$, which contradicts Big Picard for $zf(z)$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/267941", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 3, "answer_id": 1 }
move a point up and down along a sphere I have a problem where I have a sphere and one point that can be anywhere on that sphere's surface. The sphere is centered at the point (0,0,0). I now need to get two new points, one just a little below and another just a little above the original point, in reference to the Y axis. If needed, or if it is simpler to solve, the points can be about 15º above and below the original point, viewing the movement on a 2D circle. Thank you in advance for any help given. EDIT: This is to be used on a world globe where the selected point will never be on the top or bottom. EDIT: I'm using the latitude and longitude suggested by rlgordonma and user1551; what I'm doing is adding and subtracting a fixed value to ϕ. These two appear correctly, at least they appear to be in place: the original point is in the middle of the two bars. The sphere has R=1; all the coords I'm putting here are rounded because they are too big (computer processed). coord: (0.77, 0.62, 0.11) coord: (0.93, -0.65, 0.019) These don't: coord: (-0.15, 0.59, 0.79) coord: (-0.33, 0.73, -0.815) There are other occurrences of both, but I didn't want to put all of them here. calcs: R = 1 φ = arctan(y/x) θ = arccos(z/1) //to move up only one is used φ = φ + π/50 //to move down only one is used φ = φ - π/50 (x,y,z)=(sinθ cosφ, sinθ sinφ, cosθ)
The conversion between Cartesian and spherical coordinates is $$ (x,y,z) = (R \sin{\theta} \cos {\phi},R \sin{\theta} \sin {\phi}, R \cos{\theta})$$ where $R$ is the radius of the earth/sphere, $\theta = $ $+90^{\circ}$- latitude, and $\phi=$ longitude.
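Applied to the calculation in the question, the practical consequence is that the up/down offset must be added to the polar angle $\theta$, not to the longitude $\phi$ (shifting $\phi$ moves the point sideways, which would explain the inconsistent results reported above). A sketch of my own, with a hypothetical step of $\pi/50$ and the $z$ axis as the pole, as in the formula above; if $y$ is "up" in your engine, swap the roles of $y$ and $z$:

```python
import math

def nudged_points(x, y, z, step=math.pi / 50):
    """Two points slightly above/below (x, y, z) on the unit sphere,
    obtained by shifting the polar angle theta while keeping phi fixed."""
    theta = math.acos(z)        # polar angle (R = 1)
    phi = math.atan2(y, x)      # atan2 handles all four quadrants, unlike atan(y/x)
    pts = []
    for t in (theta - step, theta + step):      # towards the pole, away from it
        pts.append((math.sin(t) * math.cos(phi),
                    math.sin(t) * math.sin(phi),
                    math.cos(t)))
    return pts

p_up, p_down = nudged_points(0.77, 0.62, 0.11)
for px, py, pz in (p_up, p_down):
    assert abs(px * px + py * py + pz * pz - 1) < 1e-9      # still on the sphere
    assert abs(math.atan2(py, px) - math.atan2(0.62, 0.77)) < 1e-9  # same longitude
```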
{ "language": "en", "url": "https://math.stackexchange.com/questions/268064", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 3, "answer_id": 1 }
Proof by induction on $\{1,\ldots,m\}$ instead of $\mathbb{N}$ I often see proofs that claim to be by induction, but where the variable we induct on doesn't take values in $\mathbb{N}$ but only in some set $\{1,\ldots,m\}$. Imagine for example that we have to prove an equality that encompasses $n$ variables on each side, where $n$ can only range through $n\in\{1,\ldots,m\}$ (imagine that for $n>m$ the function relating the variables on one of the sides isn't well-defined): For $n=1$ imagine that the equality is easy to prove by manipulating the variables algebraically, and that for some $n\in\{1,\ldots,m\}$ we can also show the equation holds provided it holds for $n\in\{1,\ldots,n-1\}$. Then we have proved the equation for every $n\in\{1,\ldots,m\}$, but does this qualify as a proof by induction? (Correct me if I'm wrong: If the equation were indeed definable and true for every $n\in\mathbb{N}$, we could - although we are only interested in the case $n\in\{1,\ldots,m\}$ - "extend" it to $\mathbb{N}$ and then use "normal" induction to prove it holds for every $n\in \mathbb{N}$; since then it would also hold for $n\in\{1,\ldots,m\}$.)
If the statement in question really does not "work" if $n>m$, then necessarily the induction step $n\to n+1$ at least somewhere uses that $n<m$. You may view this as actually proving by induction $$\tag1\forall n\in \mathbb N\colon (n>m\lor \phi(m,n))$$ That is, you first show $$\tag2\phi(m,1)$$ (which of course implies $1>m\lor \phi(m,1)$) and then show $$\tag3(n>m\lor \phi(m,n))\Rightarrow (n+1>m\lor \phi(m,n+1)),$$ where you can more or less ignore the trivial case $n\ge m$. Of course $(2)$ and $(3)$ establish a perfectly valid proof of $(1)$ by induction on $n$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/268152", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Differing properties due to differing topologies on the same set I have been working on this problem from Principles of Topology by Croom: "Let $X$ be a set with three different topologies $S$, $T$, $U$ for which $S$ is weaker than $T$, $T$ is weaker than $U$, and $(X,T)$ is compact and Hausdorff. Show that $(X,S)$ is compact but not Hausdorff, and $(X,U)$ is Hausdorff but not compact." I managed to show that $(X,T)$ compact implies $(X,S)$ compact, and that $(X,T)$ Hausdorff implies $(X,U)$ Hausdorff. However, I realized that there must be an error (right?) when it comes to $(X,T)$ compact implying that $(X,U)$ is noncompact, since if $X$ is finite (which it could be, as the problem text doesn't specify), then it is compact regardless of the topology on it. What are the minimal conditions that we need in order for there to be a counterexample to $(X,T)$ compact implying $(X,U)$ compact? Just that $X$ is not finite? Does this also impact showing $(X,T)$ Hausdorff implies $(X,S)$ is not Hausdorff?
If $X$ is finite, the only compact Hausdorff topology on $X$ is the discrete topology, so $T=\wp(X)$. In this case there is no strictly finer topology $U$. If $X$ is infinite, the discrete topology on $X$ is not compact, so if $T$ is a compact Hausdorff topology on $X$, there is always a strictly finer topology.
{ "language": "en", "url": "https://math.stackexchange.com/questions/268218", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Sampling from a $2$d normal with a given covariance matrix How would one sample from the $2$-dimensional normal distribution with mean $0$ and covariance matrix $$\begin{bmatrix} a & b\\b & c \end{bmatrix}$$ given the ability to sample from the standard ($1$-dimensional) normal distribution? This seems like it should be quite simple, but I can't actually find an answer anywhere.
Say you have a random variable $X\sim N(0,E)$ where $E$ is the identity matrix. Let $A$ be a matrix. Then $Y:=AX\sim N(0,AA^T)$. Hence you need to find a matrix $A$ with $AA^T = \left[\matrix{a & b \\ b & c}\right]$. There is no unique solution to this problem. One popular method is the Cholesky decomposition, where you find a lower triangular matrix $L$ with $LL^T$ equal to the given covariance matrix. Another method is to perform a principal axis transform $$ \left[\matrix{a & b \\ b & c}\right] = U^T\left[\matrix{\lambda_1 & 0 \\ 0 & \lambda_2}\right]U $$ with $UU^T=E$ and then take $$ A = U^T\left[\matrix{\sqrt{\lambda_1} & 0 \\ 0 & \sqrt{\lambda_2}}\right]U $$ as a solution. This is then the only symmetric positive definite solution. It is also called the square root of the positive-definite symmetric matrix $AA^T$. More generally, for a distribution $X\sim N(\mu,\Sigma)$ it holds that $AX+b\sim N(A\mu+b,A\Sigma A^T)$, see Wikipedia.
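Here is a minimal standard-library sketch of the Cholesky route for the $2\times 2$ case (my own code, not from the answer); the explicit factor $L=\left[\matrix{\sqrt{a} & 0 \\ b/\sqrt{a} & \sqrt{c-b^2/a}}\right]$ assumes $a>0$ and $ac-b^2>0$:

```python
import math, random

def sample_2d_normal(a, b, c, rng=random):
    """One draw from N(0, [[a, b], [b, c]]) via the explicit 2x2 Cholesky
    factor L, chosen so that L L^T equals the covariance matrix."""
    z1, z2 = rng.gauss(0, 1), rng.gauss(0, 1)   # two independent N(0,1) draws
    l11 = math.sqrt(a)
    l21 = b / l11
    l22 = math.sqrt(c - b * b / a)
    return l11 * z1, l21 * z1 + l22 * z2

rng = random.Random(0)
a, b, c = 2.0, 0.8, 1.0
pts = [sample_2d_normal(a, b, c, rng) for _ in range(200_000)]
n = len(pts)
assert abs(sum(x * x for x, _ in pts) / n - a) < 0.05   # Var(X)  is about a
assert abs(sum(x * y for x, y in pts) / n - b) < 0.05   # Cov(X,Y) is about b
```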
{ "language": "en", "url": "https://math.stackexchange.com/questions/268298", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11", "answer_count": 4, "answer_id": 2 }
A Question on $p$-groups. Suppose $G$ is a group with order $p^{n}$ ($p$ is a prime). Do we know when we can find the subgroups of $G$ of order $p^{2}, p^{3}, \cdots, p^{n-1}$?
(Approach modified in light of Don Antonio's comment below question). Another way to proceed, which may not be so common in textbooks, and which produces a normal subgroup of each possible index, is as follows. Suppose we have a non-trivial normal subgroup $Q$ of $P$ (possibly $Q = P$). We will produce a normal subgroup $R$ of $P$ with $[Q:R] = p.$ Note that $P$ permutes the maximal subgroups of $Q$ by conjugation. Let $\Phi(Q)$ denote the intersection of the maximal subgroups of $Q$, and suppose that $[Q:\Phi(Q)] = p^{s}.$ Then $Q/\Phi(Q)$ is Abelian of exponent $p$. All its maximal subgroups have index $p,$ and there are $\frac{p^{s}-1}{p-1}$ of them. There is a bijection between the set of maximal subgroups of $Q$ and the set of maximal subgroups of $Q/\Phi(Q).$ Hence the number of maximal subgroups of $Q$ is $1 + p + \ldots + p^{s-1},$ which is congruent to $1$ (mod $p$), and each of them has index $p$ in $Q$. The maximal subgroups of $Q$ are permuted under conjugation by $P$ in orbits of $p$-power lengths. At least one orbit must have length $1$. This orbit contains a subgroup $R$ of $Q$ with $[Q:R] = p$ and $x^{-1}Rx = R$ for all $x \in P.$ In other words, we have found a normal subgroup $R$ of $P$ which is contained in $Q$ and satisfies $[Q:R] = p.$
{ "language": "en", "url": "https://math.stackexchange.com/questions/268430", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 2 }
A complex equation I want to solve the following equation: $g(s)f(s)=0$, where $f$ and $g$ are complex-valued functions defined on the complex plane and are not analytic. My question is: If I assume that $f(s)≠0$, can I deduce that $g(s)=0$ without any further complications? I am a little confused about this case: if $f=x+iy$ and $g=u+iv$, then $fg=ux-vy+i(uy+vx)$, and $fg$ can be zero if $ux-vy=0,\ uy+vx=0$ without the implications $x=y=0$, $u=v=0$.
First, note that your question is really just about individual complex numbers, not about complex-valued functions. Now, as you note, if $(x + i y)(u + i v) = 0$ then this implies that $x u - v y = x v + u y = 0$. However, the only way these equations can hold is if either $x + i y = 0$ or $u + i v = 0$. Multiplying the first equation through by $v$ and the second by $u$, we have $x u v - v^2 y = 0$ and $x u v + u^2 y = 0$. Subtracting, $(u^2 + v^2)y = 0$. Since $u, v, y \in \mathbb{R}$, this means that either $y = 0$ or $u = v = 0$. In the latter case, we have $u + i v = 0$. In the former, since $y = 0$ we have $x (u + i v) = 0$, so $x u = x v = 0$. Either $x = 0$, in which case $x + i y = 0$, or $u = v = 0$, in which case $u + i v = 0$ again.
{ "language": "en", "url": "https://math.stackexchange.com/questions/268511", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
how many numbers like $119$ How many 3-digit numbers have this property, like $119$: $119$ divided by $2$ leaves remainder $1$; $119$ divided by $3$ leaves remainder $2$; $119$ divided by $4$ leaves remainder $3$; $119$ divided by $5$ leaves remainder $4$; $119$ divided by $6$ leaves remainder $5$.
You seek numbers which, when divided by $k$ (for $k=2,3,4,5,6$) gives a remainder of $k-1$. Thus the numbers you seek are precisely those which are one less than a multiple of $k$ for each of these values of $k$. To find all such numbers, consider the lowest common multiple of $2$, $3$, $4$, $5$ and $6$, and count how many multiples of this have three digits.
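The count can also be confirmed by brute force; the five remainder conditions are together equivalent to $n+1$ being a multiple of $\operatorname{lcm}(2,\ldots,6)=60$:

```python
hits = [n for n in range(100, 1000)
        if all(n % k == k - 1 for k in range(2, 7))]

assert hits[0] == 119                          # the example from the question
assert all((n + 1) % 60 == 0 for n in hits)    # equivalent characterization
assert len(hits) == 15                         # 119, 179, ..., 959
```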
{ "language": "en", "url": "https://math.stackexchange.com/questions/268619", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 5, "answer_id": 2 }
Question on determining the splitting field It is not hard to check that the three roots of $x^3-2=0$ are $\sqrt[3]{2}, \sqrt[3]{2}\zeta_3, \sqrt[3]{2}\zeta_3^{2}$, hence the splitting field for $x^3-2$ over $\mathbb{Q}$ is $\mathbb{Q}[\sqrt[3]{2}, \sqrt[3]{2}\zeta_3, \sqrt[3]{2}\zeta_3^{2}]$. However, since $\sqrt[3]{2}\zeta_3^{2}$ can be computed from $\sqrt[3]{2}$ and $\sqrt[3]{2}\zeta_3$, the splitting field is $\mathbb{Q}[\sqrt[3]{2}, \sqrt[3]{2}\zeta_3]$. In the case $x^5-2=0$, in the book Galois Theory by J.S. Milne, the author says that the splitting field is $\mathbb{Q}[\sqrt[5]{2}, \zeta_5]$. My question is : * *How can the other roots of $x^5-2$ be represented in terms of $\sqrt[5]{2}, \zeta_5$, so that he can write that the splitting field is $\mathbb{Q}[\sqrt[5]{2}, \zeta_5]$? *Is the splitting field for $x^n -a$ over $\mathbb{Q}$ equal to $\mathbb{Q}[\alpha,\zeta_n]$, where $\alpha$ is the real $n$-th root of $a$ (say with $a>0$)?
Note that $(\alpha\zeta_n^k)^n = \alpha^n\zeta_n^{nk}=\alpha^n=a$, $0\le k<n$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/268676", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Are there real-life relations which are symmetric and reflexive but not transitive? Inspired by Halmos (Naive Set Theory) . . . For each of these three possible properties [reflexivity, symmetry, and transitivity], find a relation that does not have that property but does have the other two. One can construct each of these relations and, in particular, a relation that is symmetric and reflexive but not transitive: $$R=\{(a,a),(a,b),(b,a),(b,b),(c,c),(b,c),(c,b)\}.$$ It is clearly not transitive since $(a,b)\in R$ and $(b,c)\in R$ whilst $(a,c)\notin R$. On the other hand, it is reflexive since $(x,x)\in R$ for all cases of $x$: $x=a$, $x=b$, and $x=c$. Likewise, it is symmetric since $(a,b)\in R$ and $(b,a)\in R$ and $(b,c)\in R$ and $(c,b)\in R$. However, this doesn't satisfy me. Are there real-life examples of $R$? In this question, I am asking if there are tangible and not directly mathematical examples of $R$: a relation that is reflexive and symmetric, but not transitive. For example, when dealing with relations which are symmetric, we could say that $R$ is equivalent to being married. Another common example is ancestry. If $xRy$ means $x$ is an ancestor of $y$, $R$ is transitive but neither symmetric nor reflexive. I would like to see an example along these lines within the answer. Thank you.
Actors $x$ and $y$ have appeared in the same movie at least once.
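The required combination of properties, for the explicit relation $R$ from the question, can also be checked mechanically (the helper names here are my own):

```python
def is_reflexive(R, X):
    return all((x, x) in R for x in X)

def is_symmetric(R):
    return all((y, x) in R for (x, y) in R)

def is_transitive(R):
    return all((x, z) in R
               for (x, y1) in R for (y2, z) in R if y1 == y2)

X = {"a", "b", "c"}
R = {("a", "a"), ("a", "b"), ("b", "a"), ("b", "b"),
     ("c", "c"), ("b", "c"), ("c", "b")}
assert is_reflexive(R, X) and is_symmetric(R) and not is_transitive(R)
```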
{ "language": "en", "url": "https://math.stackexchange.com/questions/268726", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "155", "answer_count": 15, "answer_id": 3 }
Trouble with form of remainder of $\frac{1}{1+x}$ While asking a question here I got the following equality $\displaystyle\frac{1}{1+t}=\sum\limits_{k=0}^{n}(-1)^k t^{k}+\frac{(-1)^{n+1}t^{n+1}}{1+t}$ I'm trying to prove this with Taylor's theorem. I got that $f^{(n)}(x)=\displaystyle\frac{(-1)^nn!}{(1+x)^{n+1}}$, so the first part of the summation is done, but I can't seem to reach the remainder: according to Rudin, the (Lagrange) form of the remainder is $\displaystyle\frac{(-1)^{n+1}}{(1+c)^{n+2}}t^{n+1}$. I know the equality can be proved by expanding the sum on the right, but I want to get the result with the theorem.
Taylor will not help you here. Hint: $\sum_{k=0}^n r^k=\frac{1-r^{n+1}}{1-r}$ for all positive integers $n$, when $r\neq 1$. Take $r=$...
{ "language": "en", "url": "https://math.stackexchange.com/questions/268768", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 1 }
What does it mean $\int_a^b f(G(x)) dG(x)$? - An exercise question on measure theory I am reading Folland's book and the definitions are as follows (p. 108). Let $G$ be a continuous increasing function on $[a,b]$ and let $G(a) = c, G(b) = d$. What is asked in the question is: If $f$ is a Borel measurable and integrable function on $[c,d]$, then $\int_c^d f(y)dy = \int_a^b f(G(x))dG(x)$. In particular, $\int_c^d f(y) dy = \int_a^b f(G(x))G'(x)dx$ if $G$ is absolutely continuous. As you can see from the title, I did not understand what $\int_a^b f(G(x))\,dG(x)$ means. Also, I am stuck on the whole exercise. If someone can help, I will be very happy! Thanks.
This is likely either Riemann-Stieltjes or Lebesgue-Stieltjes integration (most likely the latter, given the context).
{ "language": "en", "url": "https://math.stackexchange.com/questions/268821", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 0 }
For any two sets $A,B$ , $|A|\leq|B|$ or $|B|\leq|A|$ Let $A,B$ be any two sets. I really think that the statement $|A|\leq|B|$ or $|B|\leq|A|$ is true. Formally: $$\forall A\forall B[\,|A|\leq|B| \lor\ |B|\leq|A|\,]$$ If this statement is true, what is the proof ?
This is true in $ZFC$ because of Zermelo's Well-Ordering Theorem; given two sets $A,B$, since they are well-orderable there exist alephs $\aleph_{\alpha},\aleph_{\beta}$ with $|A|=\aleph_{\alpha}$ and $|B|=\aleph_{\beta}$, since alephs are comparable, the cardinalities of $A$ and $B$ are comparable. Furthermore, this is equivalent to the axiom of choice: Now suppose the cardinalities of any two sets are comparable, let us prove Zermelo's Well-Ordering Theorem from this. Let $X$ be a set, then there must be an ordinal $\alpha$ such that $|X|\leq |\alpha|$, for otherwise the set of all ordinals equipotent to a subset of $X$ would contain the proper class of all ordinals; for all ordinals $\alpha$, $|\alpha|\leq |X|$ by the comparability condition, a contradiction. Hence $|X|\leq |\alpha|$ for some ordinal $\alpha$, and thus $X$ is well-orderable. Hence the condition of comparability is equivalent to Zermelo's Well-Ordering Theorem, and thus it is equivalent to the axiom of choice.
{ "language": "en", "url": "https://math.stackexchange.com/questions/268942", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "23", "answer_count": 3, "answer_id": 1 }
Check my proof of an algebraic statement about fractions I tried to prove the part c) of "Problem 42" from the book "Algebra" by Gelfand. Fractions $\frac{a}{b}$ and $\frac{c}{d}$ are called neighbor fractions if their difference $\frac{cb-ad}{db}$ has numerator ±1, that is, $cb-ad = ±1$. Prove that: b) If $\frac{a}{b}$ and $\frac{c}{d}$ are neighbor fractions, then $\frac{a+c}{b+d}$ is between them and is a neighbor fraction for both $\frac{a}{b}$ and $\frac{c}{d}$. Me: It is easy to prove. c) no fraction $\frac{e}{f}$ with positive integer $e$ and $f$ such that $f < b+d$ is between $\frac{a}{b}$ and $\frac{c}{d}$. Me: we know that $\frac{a+c}{b+d}$ is between $\frac{a}{b}$ and $\frac{c}{d}$. The statement says that if we make the denominator smaller than $b+d$, the fraction can't be between $\frac{a}{b}$ and $\frac{c}{d}$ with any numerator. Let's prove it: 0) Assume that $\frac{a}{b}$ < $\frac{c}{d}$, and $cb-ad = 1$, ($cb = ad + 1$). I also assume that $\frac{a}{b}$ and $\frac{c}{d}$ are positive. 1) Start with the fraction $\frac{a+c}{b+d}$, let $n$ and $m$ denote the changes of the numerator and denominator, so we get $\frac{a+c+n}{b+d+m}$ ($n$ and $m$ may be negative). We want it to be between the two fractions: $\frac{a}{b} < \frac{a+c+n}{b+d+m} < \frac{c}{d}$ 2) Let's see what the consequences will be if the new fraction is bigger than $\frac{a}{b}$: $\frac{a+c+n}{b+d+m} > \frac{a}{b}$ $b(a+c+n) > a(b+d+m)$ $ba+bc+bn > ba+ad+am$ $bc+bn > ad+am$ but $bc = ad + 1$ by the definition, so $(ad + 1) + bn > ad + am$ $bn - am > -1$ All the variables denote the natural numbers, so if a natural number is bigger than -1 it implies that it is greater or equal $0$. $bn - am \geq 0$ 3) Let's see what the consequences will be if the new fraction is less than $\frac{c}{d}$: $\frac{a+c+n}{b+d+m} < \frac{c}{d}$ ... 
$cm - dn \geq 0$ 4) We've got two equations, I will call them p-equations, because they will be the base for our proof (they both have to be right): $bn - am \geq 0$ $cm - dn \geq 0$ 5) Suppose $\frac{a}{b} < \frac{a+c+n}{b+d+m} < \frac{c}{d}$. What do $n$ and $m$ have to be? It was conjectured that if $m$ is negative, then for any $n$ this inequality will not hold. Actually, if $m$ is negative, $n$ can only be less than or equal to $0$, because when the denominator is getting smaller, the fraction is getting bigger. 6) Suppose that $m$ is negative and $n = 0$. Then the second p-equation can't be true: $-cm - d\cdot 0 \geq 0 \implies -cm \geq 0$ 7) If both $n$ and $m$ are negative, the p-equations can't both be true. I will get rid of the negative signs so we can treat $n$ and $m$ as positive: $(-bn) - (-am) \geq 0$ $(-cm) - (-dn) \geq 0$ $am - bn \geq 0$ $dn - cm \geq 0$ If something is greater than or equal to $0$ then we can multiply it by a positive number and it will still be greater than or equal to $0$, so multiply by $d$ and $b$: $d(am - bn) \geq 0$ $b(dn - cm) \geq 0$ $da\cdot m - dbn \geq 0$ $dbn - bc\cdot m \geq 0$ But $bc$ is greater than $da$ by the definition. You can already see that the equations can't both be true, but I will show it algebraically: by the definition $bc = da + 1$, then $dam - dbn \geq 0$ $dbn - (da + 1)m \geq 0$ $dam - dbn \geq 0$ $dbn - dam - m \geq 0$ If two expressions are greater than or equal to $0$, then if we add them together, the sum will still be greater than or equal to $0$. $(dam - dbn) + (dbn - dam - m) \geq 0$ $-m \geq 0$ This is impossible (I changed $n$ and $m$ from negative to positive before by playing with negative signs). QED. If $n$ and $m$ are positive, the p-equations can both be true; I won't go through it here because it is irrelevant to our problem. But it is common sense that I can choose $n$ and $m$ big enough to situate $\frac{a+c+n}{b+d+m}$ between any two fractions. PS: maybe my proof is too cumbersome, but I want to know whether it is right or not.
Also advices how to make it simpler are highly appreciated.
First of all, the proof is correct and I congratulate you on the excellent effort. I will only offer a few small comments on the writing. It's not clear until all the way down at (5) that you intend to do a proof by contradiction, and even then you never make it explicit. It's generally polite to state at the very beginning of a proof if you plan to make use of contradiction, contrapositive, or induction. Tiny detail, maybe even a typo: $n$ and $m$ are integers, not necessarily naturals, so the statement at the end of (2) needs to reflect that. But for integers, also $x>-1$ implies $x\geq 0$, so it's not a big deal. You didn't really need to make $n$ and $m$ positive, since the only place you use positivity is at the very, very end, where you need $m>0$ to derive the contradiction. You don't even use it when you multiply by $d$, since that relied on the expressions being positive and not the individual numbers themselves. This is the only place where I can really see simplifying the proof. As it stands, it would make the reader more comfortable if you named the positive versions of $(n,m)$ as $(N,M)$ or $(n',m')$ or something. Finally, as you hint at, you don't need to consider the positive-positive case. But perhaps you should be more explicit about why this is, earlier in the proof.
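Part (c) can also be sanity-checked numerically. Below is a small Python sketch (the helper name `between_with_small_denominator` and the brute-force search bounds are inventions for illustration): it searches for a fraction $e/f$ with $f < b+d$ strictly between $\frac{a}{b}$ and $\frac{c}{d}$, finds none for a neighbor pair, and does find one for a non-neighbor pair.

```python
from fractions import Fraction

def between_with_small_denominator(a, b, c, d):
    """Brute-force search for a fraction e/f with positive e and
    0 < f < b + d lying strictly between a/b and c/d.
    Returns the first one found, or None.  Assumes a/b < c/d."""
    lo, hi = Fraction(a, b), Fraction(c, d)
    for f in range(1, b + d):            # every denominator smaller than b + d
        for e in range(1, c * f + 1):    # e/f < c/d already forces e <= c*f
            if lo < Fraction(e, f) < hi:
                return Fraction(e, f)
    return None

# 1/2 and 2/3 are neighbors (cb - ad = 2*2 - 1*3 = 1): nothing fits below b+d = 5.
no_witness = between_with_small_denominator(1, 2, 2, 3)
# 1/3 and 2/3 are NOT neighbors (cb - ad = 2*3 - 1*3 = 3): 1/2 fits with denominator 2.
witness = between_with_small_denominator(1, 3, 2, 3)
```

The contrast between the two searches is exactly the content of part (c): the neighbor condition is what rules out small denominators.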
{ "language": "en", "url": "https://math.stackexchange.com/questions/269044", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 1, "answer_id": 0 }
How to calculate $\sum \limits_{x=0}^{n} \frac{n!}{(n-x)!\,n^x}\left(1-\frac{x(x-1)}{n(n-1)}\right)$ What are the asymptotics of the following sum as $n$ goes to infinity? $$ S =\sum\limits_{x=0}^{n} \frac{n!}{(n-x)!\,n^x}\left(1-\frac{x(x-1)}{n(n-1)}\right) $$ The sum comes from a CDF related to sampling with replacement. Consider a random process where integers are sampled uniformly with replacement from $\{1...n\}$. Let $X$ be a random variable that represents the number of samples until either a duplicate is found or both the values $1$ and $2$ have been found. So if the samples were $1,6,3,5,1$ then $X=5$, and if they were $1,6,3,2$ then $X=4$. This sum is therefore $\mathbb{E}(X)$. We therefore know that $S = \mathbb{E}(X) \leq \text{mean time to find a duplicate} \sim \sqrt{\frac{\pi}{2} n}$. Taking the first part of the sum, $$\sum_{x=0}^{n} \frac{n!}{(n-x)!\,n^x} = \left(\frac{e}{n} \right)^n \Gamma(n+1,n) \sim \left(\frac{e}{n} \right)^n \frac{n!}{2} \sim \sqrt{\frac{\pi}{2} n}. $$
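For what it's worth, the identity $S=\mathbb{E}(X)$ can be checked by simulation. The sketch below (function names are made up; it assumes $n\ge 2$ so that $n(n-1)\ne 0$) compares a Monte Carlo estimate of $\mathbb{E}(X)$ with the sum computed directly:

```python
import random

def sample_X(n, rng):
    """One realization of X: draw uniformly from {1, ..., n} with replacement
    until a duplicate appears or both 1 and 2 have been seen."""
    seen = set()
    count = 0
    while True:
        v = rng.randint(1, n)
        count += 1
        if v in seen or {1, 2} <= (seen | {v}):
            return count
        seen.add(v)

def exact_sum(n):
    """S = sum_{x=0}^{n} n!/((n-x)! n^x) * (1 - x(x-1)/(n(n-1))), for n >= 2."""
    total = 0.0
    term = 1.0                  # term = n!/((n-x)! n^x), built up iteratively
    for x in range(n + 1):
        total += term * (1 - x * (x - 1) / (n * (n - 1)))
        term *= (n - x) / n
    return total

n = 30
rng = random.Random(0)          # fixed seed for reproducibility
trials = 20000
estimate = sum(sample_X(n, rng) for _ in range(trials)) / trials
exact = exact_sum(n)
```

The two numbers agree to within Monte Carlo noise, which matches the tail-sum derivation $\mathbb{E}(X)=\sum_{x\ge 0}\mathbb{P}(X>x)$.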
Let $N_n$ denote a Poisson random variable with parameter $n$, then $$ S_n=\frac{n-2}{n-1}n!\left(\frac{\mathrm e}n\right)^n\mathbb P(N_n\leqslant n)+\frac2{n-1}. $$ As a consequence, $\lim\limits_{n\to\infty}S_n/\sqrt{n}=\sqrt{\pi/2}$. To show this (for a shorter proof, see the end of this post), first rewrite each parenthesis in $S_n$ as $$ 1-\frac{x(x-1)}{n(n-1)}=\frac2n(n-x)-\frac1{n(n-1)}(n-x)(n-x-1). $$ Thus, $\displaystyle S_n=n!\,(2U_n-V_n)$ with $$ U_n=\frac1n\sum_{x=0}^n\frac1{n^x}\frac{n-x}{(n-x)!}=\sum_{y=1}^n\frac1{n^y}\frac1{(n-y)!}=n^{-n}\sum_{x=0}^{n-1}\frac{n^x}{x!}, $$ and $$ V_n=\frac1{n(n-1)}\sum_{x=0}^n\frac1{n^x}\frac{(n-x)(n-x-1)}{(n-x)!}=\frac{n}{n-1}\sum_{z=2}^n\frac1{n^z}\frac1{(n-z)!}, $$ that is, $$ V_n=\frac{n}{n-1}n^{-n}\sum_{x=0}^{n-2}\frac{n^x}{x!}. $$ Introducing $$ W_n=n^{-n}\sum_{x=0}^n\frac{n^x}{x!}, $$ this can be rewritten as $$ U_n=W_n-\frac1{n!},\qquad V_n=\frac{n}{n-1}\,\left(W_n-\frac2{n!}\right), $$ hence $$ S_n=n!\left(2\left(W_n-\frac1{n!}\right)-\frac{n}{n-1}\left(W_n-\frac2{n!}\right)\right)=n!\frac{n-2}{n-1}W_n+\frac2{n-1}. $$ The proof is complete since $$ W_n=n^{-n}\mathrm e^n\cdot\mathrm e^{-n}\sum_{x=0}^n\frac{n^x}{x!}=n^{-n}\mathrm e^n\cdot\mathbb P(N_n\leqslant n). $$ Edit: Using the distribution of $N_n$ and the change of variable $x\to n-x$, one gets directly $$ S_n=n!\left(\frac{\mathrm e}n\right)^n\left(\mathbb P(N_n\leqslant n)-\mathbb E(u_n(N_n))\right), $$ with $$ u_n(t)=\frac{(n-t)(n-t-1)}{n(n-1)}\,\mathbf 1_{t\leqslant n}. $$ Since $N_n/n\to1$ almost surely when $n\to\infty$, $u_n(N_n)\to0$ almost surely. Since $u_n\leqslant1$ uniformly, $\mathbb E(u_n(N_n))\to0$. On the other hand, by the central limit theorem, $\mathbb P(N_n\leqslant n)\to\frac12$, hence Stirling's equivalent applied to the prefactor of $S_n$ yields the equivalent of $S_n$. Edit-edit: By Berry-Esseen theorem, $\mathbb P(N_n\leqslant n)=\frac12+O(\frac1{\sqrt{n}})$. By the central limit theorem, $\mathbb E(u_n(N_n))=O(\frac1n)$. 
By Stirling's approximation, $n!\left(\frac{\mathrm e}n\right)^n=\sqrt{2\pi n}+O(\frac1{\sqrt{n}})$. Hence, $S_n=\sqrt{\pi n/2}+T_n+o(1)$ where $\limsup\limits_{n\to\infty}|T_n|\leqslant 2C\sqrt{2\pi}(1+\mathrm e^{-1})$ for any constant $C$ making Berry-Esseen upper bound true, hence $\limsup\limits_{n\to\infty}|T_n|\lt3.3$.
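A quick numerical check of the conclusion $S_n/\sqrt{n}\to\sqrt{\pi/2}$ (a Python sketch; the partial products $\frac{n!}{(n-x)!\,n^x}$ are built with the recurrence "multiply by $(n-x)/n$" to avoid huge factorials):

```python
import math

def S(n):
    """Compute S_n directly in floating point, term by term."""
    total = 0.0
    term = 1.0                  # term = n!/((n-x)! n^x) at the current x
    for x in range(n + 1):
        total += term * (1 - x * (x - 1) / (n * (n - 1)))
        term *= (n - x) / n
    return total

limit = math.sqrt(math.pi / 2)  # about 1.2533
ratios = {n: S(n) / math.sqrt(n) for n in (10**2, 10**4, 10**6)}
```

The ratios drift toward $\sqrt{\pi/2}$; the residual offset of order $1/\sqrt{n}$ is consistent with the bounded correction term $T_n$ above.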
{ "language": "en", "url": "https://math.stackexchange.com/questions/269093", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 3, "answer_id": 0 }
Diagonalizable unitarily Schur factorization Let $A$ be $n x n$ matrix. What exactly is the difference between unitarily diagonalizable and diagonalizable matrix $A$? Can that be that it is diagonalizable but not unitarily diagonalizable? What are the conditions for Schur factorization to exist? For a (unitarily) diagonalizable matrix is it necessary that Schur factorization exists and vice versa? Thanks a lot!
Diagonalization means to decompose a square matrix $A$ into the form $PDP^{-1}$, where $P$ is invertible and $D$ is a diagonal matrix. If $P$ is chosen as a unitary matrix, the aforementioned decomposition is called a unitary diagonalization. It follows that every unitarily diagonalizable matrix is diagonalizable. The converse, however, is not true in general. Note that if $A$ is unitarily diagonalizable as $UDU^\ast$ ($U^\ast=U^{-1}$), then the columns of $U$ are the eigenvectors of $A$ and they are orthonormal because $U$ is unitary. Therefore, a diagonalizable matrix is unitarily diagonalizable if and only if it has an orthonormal eigenbasis. So, a matrix like $$ A=\begin{pmatrix}1&1\\0&1\end{pmatrix}\begin{pmatrix}2&0\\0&1\end{pmatrix} \begin{pmatrix}1&1\\0&1\end{pmatrix}^{-1} $$ is diagonalizable but not unitarily diagonalizable. As for Schur triangulation, it is a decomposition of the form $A=UTU^\ast$, where $U$ is unitary and $T$ is triangular. Every square complex matrix has a Schur triangulation. Note that if $A=UTU^\ast$ is a Schur triangulation, then you can read off the eigenvalues of $A$ from the diagonal entries of $T$. It follows that a real matrix that has non-real eigenvalues cannot possess a Schur triangulation over the real field. For example, the eigenvalues of $$ A=\begin{pmatrix}0&-1\\1&0\end{pmatrix} $$ are $\pm\sqrt{-1}$. So it is impossible to triangulate $A$ as $UTU^\ast$ using a real unitary matrix (i.e. a real orthogonal matrix) $U$ and a real triangular matrix $T$. Yet there exist complex $U$ and complex $T$ such that $A=UTU^\ast$. If a matrix $A$ is unitarily diagonalizable as $A=UDU^\ast$, the diagonalization is automatically a Schur triangulation because every diagonal matrix $D$ is by itself a triangular matrix. Yet the converse is not true in general. For example, every nontrivial Jordan block, such as $$ A=\begin{pmatrix}0&1\\0&0\end{pmatrix}, $$ is a triangular matrix that is not unitarily diagonalizable.
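A concrete check of the first example, in plain Python (no linear-algebra library; the eigenvectors were found by hand and are only verified here): multiplying out $PDP^{-1}$ gives $A=\begin{pmatrix}2&-1\\0&1\end{pmatrix}$, whose eigenvectors $(1,0)$ and $(1,1)$ are not orthogonal, so no orthonormal eigenbasis exists.

```python
def matmul(A, B):
    """Multiply two square matrices given as lists of rows."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

P = [[1, 1], [0, 1]]
P_inv = [[1, -1], [0, 1]]            # inverse of P, so P * P_inv = I
D = [[2, 0], [0, 1]]

A = matmul(matmul(P, D), P_inv)      # A = P D P^{-1}

# Eigenvectors of A: (1, 0) for eigenvalue 2, (1, 1) for eigenvalue 1.
v1, v2 = (1, 0), (1, 1)
dot = v1[0] * v2[0] + v1[1] * v2[1]  # nonzero => eigenvectors not orthogonal
```

Since both eigenspaces are one-dimensional, every eigenbasis consists of scalar multiples of $v_1$ and $v_2$, and none of those pairs is orthogonal.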
{ "language": "en", "url": "https://math.stackexchange.com/questions/269164", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Compact subspaces of the Poset On page 172, James Munkres' textbook Topology(2ed), there is a theorem about compact subspaces of the real line: Let $X$ be a simply-ordered set having the least upper bound property. In the order topology, each closed interval in $X$ is compact. My question is whether there is a generalized theorem about a Poset(or a lattice, complete lattice, maybe). Is there some elegant way to define a topology on Poset?
Many topologies have been defined on partial orders and lattices of various types. One of the most important is the Scott topology. Let $\langle P,\preceq\rangle$ be a partial order. A set $A\subseteq P$ is an upper set if ${\uparrow\!\!x}\subseteq A$ whenever $x\in A$, where ${\uparrow\!\!x}=\{y\in P:x\preceq y\}$. A set $U\subseteq P$ is open in the Scott topology iff $U$ is an upper set with the following property: if $D\subseteq P$ is a directed set in $P$, and $\bigvee D\in U$, then $D\cap U\ne\varnothing$. (In this case $U$ is said to be inaccessible by directed joins.) The upper topology is the topology that has $\{P\,\setminus\!\downarrow\!\!x:x\in P\}$ as a subbase. The lower topology is generated by the subbase $\{P\setminus{\uparrow\!\!x}:x\in P\}$. The Lawson topology is the join (coarsest common refinement) of the Scott and lower topologies. A number of interval topologies have also been defined, the first ones by Frink and by Birkhoff; this paper deals with a number of such topologies. These terms should at least give you a start for further search if you’re interested.
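For a finite poset the Scott topology is easy to compute: every directed subset of a finite poset has a greatest element, so every upper set is automatically inaccessible by directed joins, and the Scott-open sets are exactly the upper sets (this is the Alexandrov topology). A small Python sketch (the four-element poset is made up for illustration):

```python
from itertools import combinations

# A toy poset on {a, b, c, d}:  a <= c, b <= c, c <= d  (plus reflexivity
# and the transitive consequences a <= d, b <= d).
elements = ['a', 'b', 'c', 'd']
leq = {(x, x) for x in elements} | {
    ('a', 'c'), ('b', 'c'), ('c', 'd'), ('a', 'd'), ('b', 'd')}

def is_upper_set(U):
    """U is an upper set iff x in U and x <= y together imply y in U."""
    return all(y in U for x in U for y in elements if (x, y) in leq)

# In a finite poset every directed set has a greatest element, so the
# Scott-open sets are exactly the upper sets.
scott_opens = [set(c) for r in range(len(elements) + 1)
               for c in combinations(elements, r) if is_upper_set(set(c))]
```

For this poset the Scott topology has six open sets: $\varnothing$, $\{d\}$, $\{c,d\}$, $\{a,c,d\}$, $\{b,c,d\}$, and the whole poset.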
{ "language": "en", "url": "https://math.stackexchange.com/questions/269219", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 1 }
Difference between $u_t + \Delta u = f$ and $u_t - \Delta u = f$? What is the difference between these 2 equations? Instead of $\Delta$ change it to some general elliptic operator. Do they have the same results? Which one is used for which?
The relation boils down to time-reversal, replacing $t$ by $-t$. This makes a lot of difference in the equations that model diffusion. The diffusion processes observed in nature are normally not reversible (2nd law of thermodynamics). In parallel to that, the backward heat equation $u_t=-\Delta u$ exhibits peculiar and undesirable features such as loss of regularity and non-existence of solution for generic data. Indeed, you probably know that the heat equation $u_t=\Delta u$ has a strong regularizing effect: for any integrable initial data $u(\cdot,0)$ the solution $u(x,t)$ is $C^\infty$ smooth, and also real-analytic with respect to $x $ for any fixed $t>0$. When the direction of time flow is reversed, this effect plays against you: there cannot be a solution unless you have real-analytic data to begin with. And even then the solution can suddenly blow up and cease to exist. Consider the fundamental solution of the heat equation, and trace it backward in time, from nice Gaussians to $\delta$-function singularity.
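The loss of regularity is already visible on a single Fourier mode: under $u_t=\Delta u$ the mode $e^{ikx}$ evolves as $e^{-k^2t}e^{ikx}$, and reversing time turns the damping factor into an amplification factor that grows without bound in $k$. A minimal numeric illustration (values of $k$ and $t$ chosen arbitrarily):

```python
import math

def mode_factor(k, t):
    """Amplitude factor of the Fourier mode e^{ikx} after time t under
    u_t = Laplacian(u): the mode coefficient solves c' = -k^2 c,
    so c(t) = exp(-k^2 t) c(0)."""
    return math.exp(-k * k * t)

t = 0.1
ks = (1, 5, 10)
forward = [mode_factor(k, t) for k in ks]    # heat flow: high modes damped
backward = [mode_factor(k, -t) for k in ks]  # time reversed: high modes blow up
```

The forward factors shrink rapidly in $k$ (this is the smoothing effect), while the backward factors grow like $e^{k^2 t}$, which is why rough initial data admit no backward solution.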
{ "language": "en", "url": "https://math.stackexchange.com/questions/269282", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 2, "answer_id": 1 }
$\infty - \infty = 0$ ? I am given this sequence with square root. $a_n:=\sqrt{n+1000}-\sqrt{n}$. I have read that sequence converges to $0$, if $n \rightarrow \infty$. Then I said, well, it may be because $\sqrt{n}$ goes to $\infty$, and then $\infty - \infty = 0$. Am I right? If I am right, why am I right? I mean, how can something like $\infty - \infty = 0$ happen, since the first $\infty$ which comes from $\sqrt{n+1000}$ is definitely bigger than $\sqrt{n}.$?
$$\sqrt{n+1000}-\sqrt n=\frac{1000}{\sqrt{n+1000}+\sqrt n}\xrightarrow [n\to\infty]{}0$$ But you're not right, since for example $$\sqrt n-\sqrt\frac{n}{2}=\frac{\frac{n}{2}}{\sqrt n+\sqrt\frac{n}{2}}\xrightarrow [n\to\infty]{}\infty$$ In fact, a difference "$\,\infty-\infty\,$" in limit theory can be anything.
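Numerically, the two behaviors are easy to see side by side (a quick Python check of both limits):

```python
import math

ns = (10**2, 10**6, 10**10)
# sqrt(n + 1000) - sqrt(n) = 1000 / (sqrt(n+1000) + sqrt(n)) -> 0
a = [math.sqrt(n + 1000) - math.sqrt(n) for n in ns]
# sqrt(n) - sqrt(n/2) = (n/2) / (sqrt(n) + sqrt(n/2)) -> infinity
b = [math.sqrt(n) - math.sqrt(n / 2) for n in ns]
```

Both sequences are of the form "$\infty-\infty$", yet the first shrinks toward $0$ while the second grows without bound.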
{ "language": "en", "url": "https://math.stackexchange.com/questions/269337", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "15", "answer_count": 7, "answer_id": 2 }
How to be sure that the $k$th largest singular value is at least 1 of a matrix containing a k-by-k identity In section 8.4 of the report of ID software, it says that the $k$th largest singular value of a $k \times n$ matrix $P$ is at least 1 if some subset of its columns makes up a $k\times k$ identity. I tried to figure it out but couldn't be sure of that. Any ideas on how to prove it?
The middle part of the SVD (which contains the singular values) does not change if you permute columns, so you may put the $k$ columns mentioned first. So assume the matrix has the form $A=[\begin{smallmatrix}I&B\end{smallmatrix}]$ where $I$ is a $k\times k$ identity, and $B$ is $k\times(n-k)$. Compute $$AA^T=I+BB^T\ge I$$ and conclude that the eigenvalues of $AA^T$ are all at least $1$.
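A numeric sanity check in plain Python for $k=2$ (the matrix $B$ is an arbitrary example; the eigenvalues of the symmetric $2\times2$ Gram matrix are computed with the quadratic formula):

```python
import math

# A = [I | B] with k = 2, n = 5: the first two columns form a 2x2 identity.
k, m = 2, 3
B = [[0.3, -1.2, 0.7],
     [2.0,  0.5, -0.4]]

# A A^T = I I^T + B B^T = I + B B^T, a symmetric k x k matrix.
G = [[(1.0 if i == j else 0.0) + sum(B[i][t] * B[j][t] for t in range(m))
      for j in range(k)] for i in range(k)]

# Eigenvalues of the symmetric 2x2 matrix G via the quadratic formula.
tr = G[0][0] + G[1][1]
det = G[0][0] * G[1][1] - G[0][1] * G[1][0]
disc = math.sqrt(tr * tr / 4.0 - det)
eigs = (tr / 2.0 - disc, tr / 2.0 + disc)

# Singular values of A are square roots of these; the k-th largest is the smaller.
sigma_k = math.sqrt(eigs[0])
```

Since $BB^T$ is positive semidefinite, both eigenvalues of $G=I+BB^T$ are at least $1$, so the $k$-th largest singular value of $A$ is at least $1$, exactly as claimed.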
{ "language": "en", "url": "https://math.stackexchange.com/questions/269411", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
non constant bounded holomorphic function on some open set this is an exercise I came across in Rudin's "Real and complex analysis" Chapter 16. Suppose $\Omega$ is the complement set of $E$ in $\mathbb{C}$, where $E$ is a compact set with positive Lebesgue measure in the real line. Does there exist a non-constant bounded holomorphic function on $\Omega$? Especially, do this for $\Omega=[-1,1]$. Some observations: Suppose there exists such function $f$, then WLOG, we may assume $f$ has no zeros points in $\Omega$ by adding a large enough positive constant, then, $\Omega$ is simply-connected implies $\int_{\gamma}fdz=0$, for any closed curve $\gamma\subset \Omega$, how to deduce any contradiction?
Reading Exercise 8 of Chapter 16, I imagine Rudin interrogating the reader. Let $E\subset\mathbb R$ be a compact set of positive measure, let $\Omega=\mathbb C\setminus E$, and define $f(z)=\int_E \frac{dt}{t-z}$. Now answer me! a) Is $f$ constant? b) Can $f$ be extended to an entire function? c) Does $zf(z)$ have a limit at $\infty$, and if so, what is it? d) Is $\sqrt{f}$ holomorphic in $\Omega$? e) Is $\operatorname{Re}f$ bounded in $\Omega$? (If yes, give a bound) f) Is $\operatorname{Im}f$ bounded in $\Omega$? (If yes, give a bound) g) What is $\int_\gamma f(z)\,dz$ if $\gamma$ is a positively oriented loop around $E$? h) Does there exist a nonconstant bounded holomorphic function on $\Omega$? Part h) appears to come out of the blue, especially since $f$ is not bounded: we found that in part (e). But it is part (f) that's relevant here: $\operatorname{Im}f$ is indeed bounded in $\Omega$ (Hint: write it as a real integral, notice that the integrand has constant sign, extend the region of integration to $\mathbb R$, and evaluate directly). Therefore, $f$ maps $\Omega$ to a horizontal strip. It's a standard exercise to map this strip onto a disk by some conformal map $g$, thus obtaining a bounded function $g\circ f$.
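The bound behind (f) — for $z=x+iy$, $\operatorname{Im}f(z)=\int_E \frac{y\,dt}{(t-x)^2+y^2}$, whose absolute value is less than $\int_{\mathbb R}\frac{|y|\,dt}{(t-x)^2+y^2}=\pi$ — can be sanity-checked by quadrature. A sketch with $E=[-1,1]$ (the midpoint rule and step count are ad hoc choices):

```python
import math

def f(z, a=-1.0, b=1.0, steps=100000):
    """Midpoint-rule approximation of f(z) = integral over E = [a, b]
    of dt / (t - z), for z off the real segment E."""
    h = (b - a) / steps
    return sum(h / ((a + (j + 0.5) * h) - z) for j in range(steps))

points = [0.5 + 0.3j, -2.0 + 0.01j, 0.1 + 0.001j, 3.0 - 0.5j]
im_values = [f(z).imag for z in points]
```

Even for $z$ hugging the segment (the third point), the imaginary part stays just below $\pi$; it is this strip-valued range that the final conformal map $g$ turns into a bounded function.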
{ "language": "en", "url": "https://math.stackexchange.com/questions/269478", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Proving that $\alpha^{n-2}\leq F_n\leq \alpha^{n-1}$ for all $n\geq 1$, where $\alpha$ is the golden ratio I got stuck on this exercise. It is Theorem 1.15 on page 14 of Robbins' Beginning Number Theory, 2nd edition. Theorem 1.15. $\alpha^{n-2}\leq F_n\leq \alpha^{n-1}$ for all $n\geq 1$. Proof: Exercise. (image of relevant page)
Using Binet's Fibonacci Number Formula, with $\alpha+\beta=1$, $\alpha\beta=-1$ and $\beta<0$: \begin{align} F_n-\alpha^{n-1}= & \frac{\alpha^n-\beta^n}{\alpha-\beta}-\alpha^{n-1} \\ = & \frac{\alpha^n-\beta^n-(\alpha-\beta)\alpha^{n-1}}{\alpha-\beta} \\ = & \beta\frac{(\alpha^{n-1}-\beta^{n-1})}{\alpha-\beta} \\ = & \beta\cdot F_{n-1}\le 0 \end{align} if $n\ge 1$. \begin{align} F_n-\alpha^{n-2}= & \frac{\alpha^n-\beta^n}{\alpha-\beta}-\alpha^{n-2} \\ = & \frac{\alpha^{n-1}\cdot \alpha-\beta^n-\alpha^{n-1}+\beta\cdot \alpha^{n-2}}{\alpha-\beta} \\ = & \frac{\alpha^{n-1}\cdot (1-\beta)-\beta^n-\alpha^{n-1}+\beta\cdot \alpha^{n-2}}{\alpha-\beta} \\ = & \frac{\beta(\alpha^{n-2}-\alpha^{n-1})-\beta^n}{\alpha-\beta} \\ = &\frac{\beta\cdot \alpha^{n-2}(1-\alpha)-\beta^n}{\alpha-\beta} \\ = & \frac{\beta\cdot \alpha^{n-2}(\beta)-\beta^n}{\alpha-\beta} \\ = & \beta^2\frac{\alpha^{n-2}-\beta^{n-2}}{\alpha-\beta} \\ = & \beta^2F_{n-2}\ge 0 \end{align} if $n\ge 1$.
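A quick numerical verification of the two inequalities (pure Python; the convention $F_1=F_2=1$ matches the indexing above):

```python
alpha = (1 + 5 ** 0.5) / 2            # golden ratio

def fib(n):
    """Fibonacci numbers with F_1 = F_2 = 1 (and F_0 = 0)."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

# alpha^(n-2) <= F_n <= alpha^(n-1) for n = 1, ..., 30
checks = [alpha ** (n - 2) <= fib(n) <= alpha ** (n - 1) for n in range(1, 31)]
```

Both bounds are tight only at the small cases ($n=1$ on the right, $n=2$ on the left), matching the factors $\beta\,F_{n-1}$ and $\beta^2 F_{n-2}$ in the derivation.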
{ "language": "en", "url": "https://math.stackexchange.com/questions/269538", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 3 }
Eigenvalues of the matrix $(I-P)$ Let $P$ be a strictly positive $n\times n$ stochastic matrix. I hope to find out the stability of a system characterized by the matrix $(I-P)$. So I'm interested in knowing under what condition on the entries of $P$ do all the eigenvalues of the matrix $(I-P)$ lie (not necessarily strictly) within the unit disk in the complex plane? The set of eigenvalues of $P$ is shifted by $(1-\sigma(P))$, which would require additional conditions to be placed on $P$ so as to keep them within the unit disk. Or can anybody point out a direction towards which I can look for the answer? I don't really know what subject of linear algebra I should go for...
One sufficient condition (that is not necessary) is that all diagonal entries of $P$ are greater than or equal to $1/2$. If this is the case, by Gersgorin disc theorem, all eigenvalues of $I-P$ will lie inside the closed disc centered at $1/2$ with radius $1/2$, and hence lie inside the closed unit disc as well.
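The Gersgorin condition can be checked row by row without computing any eigenvalues: for a stochastic $P$, the disc of $I-P$ from row $i$ has centre $1-p_{ii}$ and radius $\sum_{j\ne i}p_{ij}=1-p_{ii}$, so it stays in the unit disk exactly when $p_{ii}\ge 1/2$. A sketch (the example matrices are invented; the test is sufficient, not necessary):

```python
def discs_inside_unit_disk(P):
    """Sufficient test via Gersgorin: every disc of I - P, centred at
    1 - p_ii with radius sum_{j != i} p_ij, lies in the closed unit disk.
    Failing this test does NOT prove an eigenvalue escapes."""
    n = len(P)
    return all(abs(1 - P[i][i]) + sum(P[i][j] for j in range(n) if j != i) <= 1
               for i in range(n))

# Stochastic matrix with all diagonal entries >= 1/2: condition holds.
P_good = [[0.6, 0.2, 0.2],
          [0.1, 0.8, 0.1],
          [0.25, 0.25, 0.5]]
# Diagonal entries < 1/2: the Gersgorin discs escape the unit disk.
P_bad = [[0.2, 0.4, 0.4],
         [0.4, 0.2, 0.4],
         [0.4, 0.4, 0.2]]
```

For stochastic rows the check reduces to $2(1-p_{ii})\le 1$, i.e. the diagonal condition stated above.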
{ "language": "en", "url": "https://math.stackexchange.com/questions/269598", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Linear independence of $\sin(x)$ and $\cos(x)$ In the vector space of $f:\mathbb R \to \mathbb R$, how do I prove that functions $\sin(x)$ and $\cos(x)$ are linearly independent. By def., two elements of a vector space are linearly independent if $0 = a\cos(x) + b\sin(x)$ implies that $a=b=0$, but how can I formalize that? Giving $x$ different values? Thanks in advance.
Although I'm not confident about this, maybe you can use power series for $\sin x$ and $\cos x$? I'm working on a similar exercise, but mine restricts both functions to the interval $[0,1]$.
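For what it's worth, the evaluation idea from the question ("giving $x$ different values") already works: if $a\cos x+b\sin x=0$ for all $x$, then $x=0$ forces $a=0$ and $x=\pi/2$ forces $b=0$. A tiny numeric illustration of that argument:

```python
import math

def combo(a, b, x):
    return a * math.cos(x) + b * math.sin(x)

# If a*cos + b*sin were the zero function, it would vanish at EVERY x;
# evaluating at x = 0 returns a, and at x = pi/2 returns b.
a, b = 2.0, -3.0                       # any (a, b) != (0, 0)
values = [combo(a, b, x) for x in (0.0, math.pi / 2)]
```

Since one of the two evaluations is nonzero whenever $(a,b)\ne(0,0)$, the only linear combination that vanishes identically is the trivial one.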
{ "language": "en", "url": "https://math.stackexchange.com/questions/269668", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "35", "answer_count": 11, "answer_id": 7 }