Find a polynomial only from its roots Given $\alpha,\,\beta,\,\gamma$ three roots of $g(x)\in\mathbb Q[x]$, a monic polynomial of degree $3$. We know that $\alpha+\beta+\gamma=0$, $\alpha^2+\beta^2+\gamma^2=2009$ and $\alpha\,\beta\,\gamma=456$. Is it possible to find the polynomial $g(x)$ only from these? I've been working with the degree of the extension $\mathbb Q \subseteq \mathbb Q(\alpha,\,\beta,\,\gamma)$. I've found that it must be $3$ because $g(x)$ is the irreducible polynomial of $\gamma$ over $\mathbb Q(\alpha,\,\beta)$. But something doesn't hold: it seems as if some of these roots would have to fail to be algebraic, or something similar. Maybe this approach is totally wrong. Does anyone know how to deal with this problem?
The polynomial is $(x-a)(x-b)(x-c)$ with the roots being $a,b,c$. By saying "three roots" you imply all these are different. Note that when multiplied out and coefficients are collected you have three symmetric functions in the roots. For example the constant term is $-abc$, while the degree 2 coefficient is $-(a+b+c)$. The degree 1 coefficient is $ab+ac+bc$, which can be written as $$\frac{(a+b+c)^2-(a^2+b^2+c^2)}{2}.$$ So it looks like you can get all the coefficients of the monic from the givens you have. Note: Just saw copper.hat's remark, essentially saying what's in this answer. I'll leave it up for now in case the poser of the question needs it (or can even use it...)
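For concreteness, here is the computation the answer describes, carried out with the given data: since $a+b+c=0$, $ab+ac+bc=\frac{0^2-2009}{2}=-\frac{2009}{2}$, and $abc=456$, expanding $(x-a)(x-b)(x-c)$ gives $$g(x)=x^3-(a+b+c)x^2+(ab+ac+bc)x-abc=x^3-\frac{2009}{2}\,x-456.$$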
{ "language": "en", "url": "https://math.stackexchange.com/questions/233051", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Simple linear recursion $x_n=\frac{x_{n-1}}{a}+\frac{b}{a}$ with $a>1, b>0$ and $x_0>0$ I tried to solve it using the generating function, but it does not work because of the $\frac{b}{a}$ term, so maybe you have an idea.
Hint: Let $x_n=y_n+c$, where we will choose $c$ later. Then $$y_n+c=\frac{y_{n-1}+c}{a}+\frac{b}{a}.$$ Now can you choose $c$ so that the recurrence for the $y$'s has no pesky constant term? Remark: There is a fancier version of the above trick. Our recurrence (if $b\ne 0$) is not homogeneous. To solve it, we find the general solution of the homogeneous recurrence obtained by removing the $b/a$ term, and add to it some fixed particular solution of the non-homogeneous recurrence. In this case it is easy to find such a particular solution. Look for a constant solution.
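Carrying the hint through, a short sketch of the closed form: choosing $c=\frac{b}{a-1}$ makes the constant term vanish (since $c=\frac{c}{a}+\frac{b}{a}$ holds exactly for this $c$), so $y_n=\frac{y_{n-1}}{a}=\frac{y_0}{a^n}$ and $$x_n=\left(x_0-\frac{b}{a-1}\right)\frac{1}{a^n}+\frac{b}{a-1}\;\longrightarrow\;\frac{b}{a-1}\quad (n\to\infty),$$ because $a>1$.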
{ "language": "en", "url": "https://math.stackexchange.com/questions/233109", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
List of interesting integrals for early calculus students I am teaching Calc 1 right now and I want to give my students more interesting examples of integrals. By interesting, I mean ones that are challenging, not as straightforward (though not extremely challenging like Putnam problems or anything). For example, they have to do a $u$-substitution, but what to pick for $u$ isn't as easy to figure out as it is usually. Or, several options for $u$ work so maybe they can pick one that works but they learn that there's not just one way to do everything. So far we have covered trig functions, logarithmic functions, and exponential functions, but not inverse trig functions (though we will get to this soon so those would be fine too). We have covered $u$-substitution. Things like integration by parts, trig substitution, and partial fractions and all that are covered in Calc 2 where I teach. So, I really don't care much about those right now. I welcome integrals over those topics as answers, as they may be useful to others looking at this question, but I am hoping for integrals that are of interest to my students this semester.
I remember having fun with integrating some step functions, for example: $$\int_{0}^{2} \lfloor x \rfloor - 2 \left\lfloor \frac{x}{2} \right\rfloor \,\mathrm{d}x.$$ My professor for calculus III liked to make us compute piecewise functions, so it would force us to use the Riemann sum definition of the integral.
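As a quick sanity check (a minimal sketch in Python, using the midpoint rule to avoid sampling exactly at the jumps): the integrand above is $0$ on $[0,1)$ and $1$ on $[1,2)$, so the integral is $1$.

```python
from math import floor

def f(x):
    return floor(x) - 2 * floor(x / 2)

n = 100_000
h = 2 / n
# midpoint rule: avoids evaluating exactly at the jump points x = 1, 2
approx = sum(f((i + 0.5) * h) for i in range(n)) * h
print(approx)  # ~ 1.0
```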
{ "language": "en", "url": "https://math.stackexchange.com/questions/233162", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "79", "answer_count": 18, "answer_id": 2 }
Finding a certain integral basis for a quadratic extension This is a problem in the first chapter of Dino Lorenzini's book on arithmetic geometry. Let $A$ be a PID with field of fractions $K$ and $L/K$ a quadratic extension (no separability assumption). Let $B$ be the integral closure of $A$ in $L$. Now, assuming that $B$ is a f.g. $A$-module, the problem asks us to show that $B=A[b]$ for some $b\in B$. Obviously, we know that $B$ is free of rank $2$ as an $A$-module, so there must be some integral basis $\{ b_1,b_2\}$. However, I don't see how one of these can be assumed to be $1$. Similarly, any $A$-submodule, including ideals of $B$, must be generated by at most two elements. Other things I've tried include using the fact that $B$ must have dimension $1$. However, I've failed to see how any of this could be applied.
Consider the quotient $B/A$. What can you say about it?
{ "language": "en", "url": "https://math.stackexchange.com/questions/233219", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Convergence properties of a moment generating function for a random variable without a finite upper bound. I'm stuck on a homework problem which requires that I prove the following: Say $X$ is a random variable without a finite upper bound (that is, $F_X(x) < 1$ for all $x \in \mathbb{R}$). Let $M_X(s)$ denote the moment-generating function of $X$, so that $$M_X(s) = \mathbb{E}[e^{sX}].$$ How can I show that $$\lim_{s\rightarrow\infty} \frac{\log(M_X(s))}{s} = \infty\,?$$
Consider the limit when $s\to+\infty$ of the inequality $$ s^{-1}\log M_X(s)\geqslant x+s^{-1}\log(1-F_X(x)). $$
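To see where this inequality comes from (one added step): for every fixed $x$, $$ M_X(s)=\mathbb E[e^{sX}]\geqslant \mathbb E\left[e^{sX}\mathbf 1_{\{X>x\}}\right]\geqslant e^{sx}\,\mathbb P(X>x)=e^{sx}\left(1-F_X(x)\right), $$ and taking logarithms and dividing by $s>0$ gives the displayed bound. Since $F_X(x)<1$ for every $x$, letting $s\to+\infty$ and then $x\to+\infty$ yields the claim.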
{ "language": "en", "url": "https://math.stackexchange.com/questions/233312", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Inscribed and Escribed Squares Assume a circle of diameter $d$. Inscribe a square $A$ centred in the circle with its diagonal equal to the diameter of the circle. Now escribe a square $B$ with the sides equal to the diameter of the circle. Show how to obtain the ratio of the area of square $A$ to the area of square $B$.
This can be done by a computation. The outer square $B$ has area $d^2$. Let the side of the inner square $A$ be $s$. Then by the Pythagorean theorem, $s^2+s^2=d^2$. But $s^2$ is the area of the inner square, and we are finished. But there is a neater way! Rotate the inner square $A$ about the centre of the circle, until the corners of the inner square are the midpoints of the sides of the outer square. (Sorry that I cannot draw a picture: I hope these words are enough for you to do it.) Now draw the two diagonals of the inner square. As a check on the correctness of your picture, the diagonals of the inner square are parallel to the sides of the outer square. We have divided the outer square into $8$ congruent isosceles right triangles. And the inner square is made up of $4$ of these triangles. So the outer square $B$ has twice the area of the inner square.
{ "language": "en", "url": "https://math.stackexchange.com/questions/233399", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Extra $100 after borrowing and shopping I took \$1000 from my friend James and \$500 from Bond. While walking to the shops I lost \$1000 so now I only have \$500. I did some shopping, spending \$300 so now I have \$200 left. I gave \$100 back to James and \$100 back to Bond. Now my liabilities are \$900 for James and \$400 for Bond, so my total liabilities are \$1300. Total liabilities + Shopping = \$1300 + \$300 = \$1600, but I only borrowed \$1500. Where did the extra \$100 come from?
That $\$1300$ already includes the $\$300$, along with the $\$1000$ you lost--that was your net loss of money for the day--you don't need to add it again. The $\$900$ and the $\$400$ you still owe is just another way of reaching the same number.
{ "language": "en", "url": "https://math.stackexchange.com/questions/233476", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 7, "answer_id": 1 }
Trace of a matrix to the $n$ Why is it that if $A(t), B(t)$ are two $n\times n$ complex matrices and $${d\over dt}A=AB-BA$$ then the trace of the matrix $A^n$ where $n\in \mathbb Z$ is a constant for all $t$?
Note that $\operatorname{Tr}(FE)=\operatorname{Tr}(EF)$ in general. $n>0$: using this cyclicity, $$\operatorname{Tr}(A^n)' = n\operatorname{Tr}\!\left(A'(t)\, A^{n-1}\right) = n\operatorname{Tr}\!\left((AB - BA)A^{n-1}\right) = n\left(\operatorname{Tr}(BA^{n})-\operatorname{Tr}(BA^{n})\right) = 0.$$ $n=0$: $A^0 = I$, so we are done. $n<0$: Check that $(A^{-1})' = A^{-1} B - BA^{-1}$, so this case is reduced to the first case.
{ "language": "en", "url": "https://math.stackexchange.com/questions/233577", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 1 }
Linear operator on a vector space $V$ such that $T^2 -T +I=0$. Let $T$ be a linear operator on a vector space $V$ such that $T^2 -T +I=0$. Then: (1) $T$ is one-one but not onto; (2) $T$ is onto but not one-one; (3) $T$ is invertible; (4) no such $T$ exists. Could anyone give me just a hint?
$$ T^2-T+I=0 \iff T(I-T)=I=(I-T)T, $$ i.e. $T$ is invertible and $T^{-1}=I-T$. In particular $T$ is injective and surjective.
{ "language": "en", "url": "https://math.stackexchange.com/questions/233666", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 0 }
Prove the transcendence of the number $e$ How can one prove that the number $e=2.718281\ldots$ is a transcendental number? The truth is I have no idea how to do it. Could you recommend a book or reference on this topic? Thank you. Are there many proofs of the transcendence of $e$? I would like to read several proofs of the transcendence of $e$.
You might be interested in the Lindemann-Weierstrass theorem, which is useful for proving the transcendence of numbers, e.g., $\pi$ and $e$. If you read further, you'll see that the transcendence of both $\pi$ and $e$ are direct "corollaries" of the Lindemann-Weierstrass theorem. Indeed, $e^x$ is transcendental if $x$ is algebraic and $x \neq 0\,$ (by the Lindemann–Weierstrass theorem). A sketch of a (much) more elementary proof is given here.
{ "language": "en", "url": "https://math.stackexchange.com/questions/233704", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Find the kernel of a linear transformation of $P_2$ to $P_1$ For some reason, this particular problem is throwing me off: Find the kernel of the linear transformation: $T: P_2 \rightarrow P_1$ $T(a_0+a_1x+a_2x^2)=a_1+2a_2x$ Since the kernel is the set of all vectors in $V$ that satisfy $T(\vec{v})=\vec{0}$, it's obvious that $a_0$ can be any real number. What matters, if I understand correctly, is that $a_1$ and $a_2$ should equal 0 in order to produce the zero vector (i.e. $0+2(0)x$). Granted that what I stated is correct, why would my book say that the $\ker(T)=\{a_0: a_0 \; \text{is real}\}$? Yes, $a_0$ can be any real number, but what must $a_1$ or $a_2$ equal? I don't see it specified within the set. Perhaps it's implied - I'm not sure. Let me add some more detail: Again, if I understand correctly, I could make a system of equations as such: $a_1 = 0$ $2a_2 = 0$ From that I can translate it into a matrix and find that $a_1$ and $a_2$ really do equal zero.
Your argument is totally correct. Your book means that $\ker(T)=\{a_0+0\cdot x+0\cdot x^2|a_0\in \mathbb{R}\}$, i.e. $a_1=0$ and $a_2=0$, which is the same as you proved.
{ "language": "en", "url": "https://math.stackexchange.com/questions/233781", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 1, "answer_id": 0 }
An inequality for $W^{k,p}$ norms Let $u \in W_0^{2,p}(\Omega)$, for $\Omega$ a bounded subset of $\mathbb R^n$. I am trying to obtain the bound $$\|Du\|_p \leq \epsilon \|D^2 u\|_p + C_\epsilon \|u\|_p$$ for any $\epsilon > 0$ (here $C_\epsilon$ is a constant that depends on $\epsilon$, and $\|.\|_p$ is the $L^p$ norm). I tried deducing this from the Poincare inequality, but that does not seem to get me anywhere. I also tried proving the one dimensional case first, but was no more able to do that than the $L^p$ case. Any suggestions for how to proceed with this problem?
Such inequalities appear all over the place in PDE theory. They all can be seen as instances of Ehrling's lemma. Here, you have $$ (W^{2,p}_0(\Omega), ||\;||_3) \hookrightarrow (W^{1,p}_0(\Omega), ||\;||_2) \hookrightarrow (L^p(\Omega), ||\;||_1) $$ where $$ ||u||_3 = ||D^2u||_p, ||u||_2 = ||Du||_p, ||u||_1 = ||u||_p. $$ The first inclusion is compact, the second continuous and hence from Ehrling's lemma you have for any $\epsilon > 0$ a constant $C(\epsilon) > 0$ such that $$ ||u||_2 \leq \epsilon ||u||_3 + C(\epsilon)||u||_1. $$ The fact that $||\;||_2$ is an equivalent norm for the Sobolev space $W^{1,p}_0(\Omega)$ is the Poincaré inequality. The fact that $||\;||_3$ is an equivalent norm for the Sobolev space $W^{2,p}_0(\Omega)$ can itself be seen as an application of Ehrling's lemma together with the Poincaré inequality.
{ "language": "en", "url": "https://math.stackexchange.com/questions/233858", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Prove $3^{2n+1} + 2^{n+2}$ is divisible by $7$ for all $n\ge0$ Expanding the equation out gives $(3^{2n}\times3)+(2^n\times2^2) \equiv 0\pmod{7}$ Is this correct? I'm a little hazy on my index laws. Not sure if this is what I need to do? Am I on the right track?
Note that $$3^{2n+1} = 3^{2n} \cdot 3^1 = 3 \cdot 9^n$$ and $$2^{n+2} = 4 \cdot 2^n$$ Note that $9^{3k} \equiv 1 \pmod{7}$ and $2^{3k} \equiv 1 \pmod{7}$. If $n \equiv 0 \pmod{3}$, then $$3 \cdot 9^n + 4 \cdot 2^n \equiv (3+4) \pmod{7} \equiv 0 \pmod{7}$$ If $n \equiv 1 \pmod{3}$, then $$3 \cdot 9^n + 4 \cdot 2^n \equiv (3 \cdot 9 + 4 \cdot 2) \pmod{7} \equiv 35 \pmod{7} \equiv 0 \pmod{7}$$ If $n \equiv 2 \pmod{3}$, then $$3 \cdot 9^n + 4 \cdot 2^n \equiv (3 \cdot 9^2 + 4 \cdot 2^2) \pmod{7} \equiv 259 \pmod{7} \equiv 0 \pmod{7}$$ EDIT What you have written can be generalized a bit. In general, $$(x^2 + x + 1) \vert \left((x+1)^{2n+1} + x^{n+2} \right)$$ The case you are interested in is when $x=2$. The proof follows immediately from the factor theorem. Note that $\omega$ and $\omega^2$ are roots of $(x^2 + x + 1)$. If we let $f(x) = (x+1)^{2n+1} + x^{n+2}$, then $$f(\omega) = (\omega+1)^{2n+1} + \omega^{n+2} = (-\omega^2)^{2n+1} + \omega^{n+2} = \omega^{4n} (-\omega^2) + \omega^{n+2} = \omega^{n+2} \left( 1 - \omega^{3n}\right) = 0$$ Similarly, $$f(\omega^2) = (\omega^2+1)^{2n+1} + \omega^{2(n+2)} = (-\omega)^{2n+1} + \omega^{2n+4} = -\omega^{2n+1} + \omega^{2n+1} \omega^3 = -\omega^{2n+1} + \omega^{2n+1} = 0$$
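A quick brute-force sanity check of the claim (a minimal sketch in Python):

```python
# Verify 3^(2n+1) + 2^(n+2) is divisible by 7 for the first hundred n.
assert all((3 ** (2 * n + 1) + 2 ** (n + 2)) % 7 == 0 for n in range(100))
print("holds for n = 0..99")
```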
{ "language": "en", "url": "https://math.stackexchange.com/questions/233937", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 2 }
infinitely many primes p which are not congruent to $-1$ modulo $19$. While trying to answer a question, I discovered one that I felt to be remarkably similar. The question I found is 'Argue that there are infinitely many primes $p$ that are not congruent to $1$ modulo $5$.' I believe this has been proven (a brief summary of this proof follows), following the Euclid proof that there are infinitely many primes. First, assume that there are only finitely many primes not congruent to $1 \pmod 5$. I then multiply them all except $2$ together to get $N \equiv 0 \pmod 5$. Consider the factors of $N+2$, which is odd and $\equiv 2 \pmod 5$: it cannot be divisible by any prime on the list, as it has remainder $2$ when divided by them. If it is prime, we have exhibited a prime $\not \equiv 1 \pmod 5$ that is not on the list. If it is not prime, it must have a factor that is $\not \equiv 1 \pmod 5$. This is because the product of primes $\equiv 1 \pmod 5$ is still $\equiv 1 \pmod 5$. I can't take credit for much of any of the above proof, because nearly all of it came from Ross Millikan (http://math.stackexchange.com/questions/231534/infinitely-many-primes-p-that-are-not-congruent-to-1-mod-5). Either way, I'm trying to use this proof to answer the following question, and I'm having a very difficult time doing so. My question: I wish to prove that there are infinitely many primes $p$ which are not congruent to $-1$ modulo $19$.
Let $p_1,p_2,\dots,p_n$ be any collection of odd primes, and let $N=19p_1p_2\cdots p_n+2$. A prime divisor of $N$ cannot be one of the $p_i$, and $N$ is odd, so it is not $2$ either. And $N$ has at least one prime divisor which is not congruent to $-1$ modulo $19$, else we would have $N\equiv \pm 1\pmod{19}$, whereas $N\equiv 2\pmod{19}$. Remark: Not congruent is generally far easier to deal with than congruent.
{ "language": "en", "url": "https://math.stackexchange.com/questions/234012", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Prime $p$ with $p^2+8$ prime I need to prove that there is only one prime number $p$ such that $p^2+8$ is prime, and to find that prime. Anyway, I just guessed and the answer is $3$, but how do I prove that?
Any integer can be written as $6c$, $6c\pm1$, $6c\pm2=2(3c\pm1)$, or $6c+3=3(2c+1)$. Clearly $6c$, $6c\pm2$, $6c+3$ cannot be prime for $c\ge 1$, so any prime $>3$ can be written as $6a\pm 1$ where $a\ge 1$. Then $p^2+8=(6a\pm 1)^2+8=3(12a^2\pm4a+3)$, so $p^2+8>3$ is divisible by $3$ and hence not prime. So the only prime is $3$. More generally, any number $p$ not divisible by $3$ can be written as $3b\pm1$. Now $(3b\pm1)^2+(3c-1)=3(3b^2\pm2b+c)$, so $p^2+(3c-1)$ is divisible by $3$, and $p^2+(3c-1)>3$ if $p>3$ and $c\ge1$, hence not prime. So a necessary condition for $p^2+(3c-1)$ to be prime is $3\mid p$; that is, if $p^2+(3c-1)$ is prime, then $3\mid p$. If $p$ needs to be prime, then $p=3$; here $c=3$.
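A quick computational check (a sketch in Python; sympy is assumed to be available):

```python
from sympy import isprime

# search for primes p < 10^4 with p^2 + 8 also prime
print([p for p in range(2, 10_000) if isprime(p) and isprime(p * p + 8)])  # [3]
```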
{ "language": "en", "url": "https://math.stackexchange.com/questions/234077", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 4, "answer_id": 1 }
Which kind of product of non-zero cardinal numbers yields zero? Let $I$ be a non-empty set, and let $\kappa_i$ be a non-zero cardinal number for each $i \in I$. Without AC, it seems that $\prod_{i \in I}\kappa_i=0$ can be true (though I still cannot believe it). But what properties must $I$ and the $\kappa_i$ have? Can $\prod_{i \in I}\kappa_i\ne 0$ be proved without AC when $I$ and each $\kappa_i$ are all well-orderable? Conversely, if $I$ is not well-orderable, or if some $\kappa_i$ is not well-orderable, does $\prod_{i \in I}\kappa_i=0$ necessarily hold?
The question is based on presuppositions that might not be true in the absence of AC. Let's consider the simplest non-trivial case, the product of countably many copies of 2, that is, $\prod_{n\in\mathbb N}\kappa_n$ where $\kappa_n=2$ for all $n$. A reasonable way to define this product would be: Take a sequence of sets $A_n$ of the prescribed cardinalities $\kappa_n$, let $P$ be the set of all functions $f$ that assign to each $n\in\mathbb N$ an element $f(n)\in A_n$, and then define the product to be the cardinality of $P$. Unfortunately, the cardinality of $P$ can depend on the specific choice of the sets $A_n$. On the one hand, it is consistent with ZF that there is a sequence of 2-element sets $A_n$ for which there is no choice function; that is, the $P$ defined above is empty. So these $A_n$'s lead to a value of 0 for the product. On the other hand, we could take $A_n=\{0,1\}$ for all $n$, and then there are lots of elements in $P$, for example the constant function with value 0. Indeed, for any subset $X$ of $\mathbb N$, its characteristic function is a member of $P$. The resulting value for the product of countably many 2's would then be the cardinality of the continuum. The moral of this story is that, in order for infinite products to be well-defined, one needs AC (or at least some special cases of it), even when the index set and all the factors in the product are well-orderable. Digging into the problem a bit more deeply, one finds that the natural attempt to prove that "the cardinality of $P$ is independent of the choice of $A_n$'s" involves the following step. If we have a second choice, say the sets $B_n$, and we know that each $A_n$ has the same cardinality as the corresponding $B_n$, so we know that there are bijections $A_n\to B_n$ for all $n$, then we need to fix such bijections --- to choose a specific such bijection for each $n$. Then those chosen bijections can be used to define a bijection between the resulting two versions of $P$. But choosing those bijections is an application of the axiom of choice.
{ "language": "en", "url": "https://math.stackexchange.com/questions/234170", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 0 }
$\lim\limits_{x\to\infty}f(x)^{1/x}$ where $f(x)=\sum\limits_{k=0}^{\infty}\cfrac{x^{a_k}}{a_k!}$. Does the following limit exist? What is the value of it if it exists? $$\lim\limits_{x\to\infty}f(x)^{1/x}$$ where $f(x)=\sum\limits_{k=0}^{\infty}\cfrac{x^{a_k}}{a_k!}$ and $\{a_k\}\subset\mathbb{N}$ satisfies $a_k<a_{k+1},k=0,1,\cdots$ $\bf{EDIT:}$ I'll show that $f(x)^{1/x}$ is not necessarily monotonically increasing for $x>0$. Since $\lim\limits_{x\to+\infty}\big(x+2\big)^{1/x}=1$, for any $M>0$, we can find some $L > M$ such that $\big(2+L\big)^{1/L}<\sqrt{3}$. It is easy to see that: $$\sum_{k=N}^\infty \frac{x^k}{k!} = \frac{e^{\theta x}}{N!}x^N\leq \frac{x^N}{N!}e^x,\quad \theta\in(0,1)$$ Hence we can choose $N$ big enough such that for any $x\in[0,L]$ $$\sum_{k=N}^\infty \frac{x^k}{k!} \leq 1$$ Now let the (strictly increasing) sequence $(a_k)$ be $0,1,N,N+1,N+2,\dots$, i.e. $$a_k=\begin{cases}k,& k=0,1\\ k-2+N,& k\geq 2\end{cases}$$ Then $f(x)= 1+x+\sum\limits_{k=N}^\infty\frac{x^k}{k!}$ and $$f(2)^{1/2} \geq \sqrt{3} > (2+L)^{1/L} \geq f(L)^{1/L}$$ which shows that $f(x)^{1/x}$ is not monotonically increasing on $[2,L]$.
This limit does not exist in general. First observe that for any polynomial $P$ with non-negative coefficients we have $$ \lim_{x\to\infty} P(x)^{1/x} = 1$$ and $$ \lim_{x\to\infty} (e^x - P(x))^{1/x} = \lim_{x\to\infty} e (1-e^{-x}P(x))^{1/x} = e.$$ For ease of notation let $$ e_n(x) = \sum_{k=n}^\infty \frac{x^k}{k!} = e^x - \sum_{k=0}^{n-1} \frac{x^k}{k!}.$$ Note that $\lim\limits_{n\to\infty} e_n(x) = 0$ for every fixed $x$. Now define a power series of the form $$ f(x) = \sum_{i=1}^\infty \sum_{k=m_i}^{n_i} \frac{x^k}{k!}, $$ along with partial sums $$ P_j(x) = \sum_{i=1}^j \sum_{k=m_i}^{n_i} \frac{x^k}{k!}, $$ where $1 \le m_1 \le n_1 < m_2 \le n_2 < \ldots$ are chosen inductively below. We want to find increasing sequences $(x_i)$ and $(y_i)$ with $x_i \to \infty$, $y_i \to \infty$, and $f(x_i)^{1/x_i} \le \frac32$ and $f(y_i)^{1/y_i} \ge 2$, which obviously implies non-existence of $\lim\limits_{x\to\infty} f(x)^{1/x}$. Having already defined $m_i$, $n_i$, $x_i$, $y_i$ for $i < j$, we know that $\lim\limits_{x\to\infty} P_{j-1}(x)^{1/x} = 1$, so there exists $x_{j}>j+x_{j-1}$ such that $P_{j-1}(x_j)^{1/x_j} \le \frac54$. Then there exists $m_{j}>n_{j-1}$ such that $$(P_{j-1}(x_j)+e_{m_{j}}(x_{j}))^{1/x_j} \le \frac32,$$ which implies that whatever choices we make for $n_j$, $m_{j+1}$, etc., we always get $$f(x_j)^{1/x_j} \le (P_{j-1}(x_j)+e_{m_{j}}(x_{j}))^{1/x_j} \le \frac32.$$ We also know that $$\lim\limits_{x\to\infty} (P_{j-1}(x) + e_{m_j}(x))^{1/x} = e>2,$$ so there exists $y_j > x_j$ with $$ (P_{j-1}(y_j) + e_{m_j}(y_j))^{1/y_j} >2. $$ Furthermore, there exists $n_j > m_j$ with $$ (P_{j-1}(y_j) + e_{m_j}(y_j)- e_{n_j+1}(y_j))^{1/y_j} >2.$$ Lastly, this implies $$ f(y_j)^{1/y_j} \ge P_j (y_j)^{1/y_j} = (P_{j-1}(y_j) + e_{m_j}(y_j)- e_{n_j+1}(y_j))^{1/y_j} >2.$$ By pushing this idea a little further, one can achieve $\liminf\limits_{x\to\infty} f(x)^{1/x} = 1$ and $\limsup\limits_{x\to\infty} f(x)^{1/x} = e$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/234237", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12", "answer_count": 2, "answer_id": 1 }
lagrange multiplier with interval constraint Given a function $g(x,y,z)$ we need to maximize it given constraints $a<x<b, a<y<b$. If the constraints were given as a function $f(x,y,z)$ the following equation could be used. $\nabla f(x,y,z) = \lambda \nabla g(x,y,z)$ How would I set up the initial equation given an interval constraint. Or how would I turn the interval constraint into a function constraint. EDIT:: Added $a<y<b$ to the constraints.
Maximize $g$ ignoring the constraint. If the solution fulfills the constraint, you're done. If not, there's no maximum, since it would have to lie on the boundary, but the boundary is excluded by the constraint.
{ "language": "en", "url": "https://math.stackexchange.com/questions/234303", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Does taking $\nabla\times$ infinitely many times of an arbitrary vector field exist? Is it possible to get the value of: \begin{equation} \underbrace{\left[\nabla\times\left[\nabla\times\left[\ldots\nabla\times\right.\right.\right.}_{\infty\text{-times taking curl operator}}\mathbf{V}\left.\left.\left.\right]\right]\ldots\right] = ? \end{equation} for any possible value of the vector field $\mathbf{V}$?
Two applications of $\nabla\times$ yield $\nabla \times (\nabla \times F) = -\nabla^2 F + \nabla(\nabla \cdot F)$. Why? Well, setting $F = \sum_i F_i e_i$ where $e_i$ is the standard cartesian frame of $\mathbb{R}^3$ allows the formula: $$ (\nabla \times F)_k = \sum_{ij} \epsilon_{ijk} \partial_i F_j $$ Curling once more, $$ [\nabla \times (\nabla \times F)]_m = \sum_{kl}\epsilon_{klm}\partial_k\sum_{ij} \epsilon_{ijl} \partial_i F_j $$ But, the antisymmetric symbol is constant and we can write this as $$ [\nabla \times (\nabla \times F)]_m = \sum_{ijkl}\epsilon_{klm}\epsilon_{ijl} \partial_k \partial_i F_j $$ A beautiful identity states: $$ \sum_{l}\epsilon_{klm}\epsilon_{ijl} = -\sum_{l}\epsilon_{kml}\epsilon_{ijl} = -\delta_{ki}\delta_{mj}+\delta_{kj}\delta_{mi}$$ Hence, $$ [\nabla \times (\nabla \times F)]_m = \sum_{ijk}(-\delta_{ki}\delta_{mj}+\delta_{kj}\delta_{mi}) \partial_k \partial_i F_j = \sum_i [-\partial_i^2F_m+\partial_m(\partial_iF_i)]$$ and the claim follows since $m$ is arbitrary. Now, let's try for 3: $$ \nabla \times (\nabla \times (\nabla \times F)) = \nabla \times \bigl[-\nabla^2 F + \nabla(\nabla \cdot F)\bigr] =\nabla \times (-\nabla^2 F)$$ I used the fact that the curl of a gradient is zero. This need not be trivial; take $F = \langle x^2z,0,0\rangle$ as an example. I suppose I could have shot for a four-folded curl by doubly applying the identity. $$ \nabla \times (\nabla \times (\nabla \times (\nabla \times F))) =?$$ Set $G = -\nabla^2 F $ since we know the gradient term vanishes; then $$ \nabla \times (\nabla \times G) = -\nabla^2 G + \nabla(\nabla \cdot G) = \nabla^2(\nabla^2 F)-\nabla [\nabla \cdot (\nabla^2 F)]$$ So, there's the four-folded curl. Well, I see no reason this terminates. I guess you can give it a name. I propose we call (ordered as the edit indicates) $ \nabla \times \nabla \times \cdots \times \nabla = \top$
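A quick symbolic check of the double-curl identity on the example field $\langle x^2z,0,0\rangle$ mentioned above (a sketch in Python; sympy is assumed to be available):

```python
from sympy.vector import CoordSys3D, curl, divergence, gradient

N = CoordSys3D('N')
F = N.x**2 * N.z * N.i  # the example field <x^2 z, 0, 0>

def vec_laplacian(V):
    # scalar Laplacian applied to each Cartesian component
    comps = [V.dot(u) for u in (N.i, N.j, N.k)]
    lap = [c.diff(N.x, 2) + c.diff(N.y, 2) + c.diff(N.z, 2) for c in comps]
    return lap[0] * N.i + lap[1] * N.j + lap[2] * N.k

lhs = curl(curl(F))
rhs = -vec_laplacian(F) + gradient(divergence(F))
print(lhs - rhs)  # prints the zero vector
```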
{ "language": "en", "url": "https://math.stackexchange.com/questions/234362", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "17", "answer_count": 3, "answer_id": 0 }
Exponential operator on a Hilbert space Let $T$ be a linear operator from $H$ to itself. If we define $\exp(T)=\sum_{n=0}^\infty \frac{T^n}{n!}$ then how do we prove the function $f(\lambda)=exp(\lambda T)$ for $\lambda\in\mathbb{C}$ is differentiable on a Hilbert space?
$$\frac{f(\lambda)-f(0)}{\lambda}=\frac{\exp(\lambda T)-Id}{\lambda} = \frac1\lambda\left( \sum_{n=1}^{\infty} \frac{\lambda^nT^n}{n!} \right) = \sum_{n=1}^{\infty} \frac{\lambda^{n-1}T^n}{n!}$$ For bounded $T$, the right-hand side converges to $T$ in operator norm as $\lambda\to 0$ (the tail $n\geq 2$ is bounded in norm by $|\lambda|\,\lVert T\rVert^2e^{|\lambda|\,\lVert T\rVert}$), so $f$ is differentiable at $0$ with $f'(0)=T$. Differentiability at an arbitrary point $\lambda_0$ then follows from the identity $f(\lambda)=f(\lambda_0)f(\lambda-\lambda_0)$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/234418", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Describing A Congruence Class The question is, "Give a description of each of the congruence classes modulo 6." Well, I began saying that we have a relation, $R$, on the set $Z$, or, $R \subset Z \times Z$, where $x,y \in Z$. The relation would then be $R=\{(x,y)|x \equiv y~(mod~6)\}$ Then, $[n]_6 =\{x \in Z|~x \equiv n~(mod~6)\}$ $[n]_6=\{x \in Z|~6|(x-n)\}$ $[n]_6=\{x \in Z|~x-n=6k \text{ for some } k \in Z\}$, where $n \in Z$ As I looked over what I did, I started to think that this would not describe all of the congruence classes modulo 6. Also, what would I say $k$ is? After despairing, I looked at the answer key, and they talked about there only being 6 equivalence classes. Why are there only six of them? It also says that you can describe equivalence classes as one set; how would I do that?
Let’s start with your correct description $$[n]_6=\{x\in\Bbb Z:x\equiv n\!\!\!\pmod 6\}=\{x\in\Bbb Z:6\mid x-n\}$$ and actually calculate $[n]_6$ for some values of $n$. * *$[0]_6=\{x\in\Bbb Z:6\mid x-0\}=\{x\in\Bbb Z:6\mid x\}=\{x\in\Bbb Z:x=6k\text{ for some }k\in\Bbb Z\}$; this is just the set of all multiples of $6$, so $[0]_6=\{\dots,-12,-6,0,6,12,\dots\}$. *$[1]_6=\{x\in\Bbb Z:6\mid x-1\}=\{x\in\Bbb Z:x-1=6k\text{ for some }k\in\Bbb Z\}$; this isn’t quite so nice, but we can rewrite it as $\{x\in\Bbb Z:x=6k+1\text{ for some }k\in\Bbb Z\}$, the set of integers that are one more than a multiple of $6$; these can be described as the integers that leave a remainder of $1$ when divided by $6$, and $[1]_6=\{\dots,-11,-5,1,7,13,\dots\}$. More generally, if $x$ is any integer, we can write it as $x=6k+r$ for integers $k$ and $r$ such that $0\le r<6$: $r$ is the remainder when $x$ is divided by $6$. Then $$\begin{align*} [r]_6&=\{x\in\Bbb Z:6\mid x-r\}\\ &=\{x\in\Bbb Z:x-r=6k\text{ for some }k\in\Bbb Z\}\\ &=\{x\in\Bbb Z:x=6k+r\text{ for some }k\in\Bbb Z\}\\ &=\{6k+r:k\in\Bbb Z\}\; \end{align*}$$ the set of all integers leaving a remainder of $r$ when divided by $6$. You know that the only possible remainders are $0,1,2,3,4,5$, so you know that this relation splits $\Bbb Z$ into exactly six equivalence classes, $[0]_6,[1]_6,[2]_6,[3]_6,[4]_6$, and $[5]_6$.
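A small illustration (a sketch in Python) of how the relation splits a stretch of integers into exactly six classes:

```python
from collections import defaultdict

classes = defaultdict(list)
for x in range(-12, 13):
    classes[x % 6].append(x)  # Python's % returns the remainder in 0..5
for r in sorted(classes):
    print(f"[{r}]_6 : {classes[r]}")
```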
{ "language": "en", "url": "https://math.stackexchange.com/questions/234483", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Domain, codomain, and range This question isn't typically associated with the level of math that I'm about to talk about, but I'm asking it because I'm also doing a separate math class where these terms are relevant. I just want to make sure I understand them because I think I may end up getting answers wrong when I'm over thinking things. In my first level calculus class, we're now talking about critical values and monotonic functions. In one example, the prof showed us how to find the critical values of a function $$f(x)=\frac{x^2}{x-1}$$ He said we have to find the values where $f' (x)=0$ and where $f'(x)$ is undefined.$$f'(x)=\frac{x^2-2x}{(x-1)^2}$$ Clearly, $f'(x)$ is undefined at $x=1$, but he says that $x=1$ is not in the domain of $f(x)$, so therefore $x=1$ is not a critical value. Here's where my question comes in: Isn't the "domain" of $f(x)$ $\mathbb{R}$, or $(-\infty,\infty)$? If my understanding of Domain, Codomain, and Range is correct, then wouldn't it be the "range" that excludes $x=1$?
$x=1$ is not in the domain because when $x=1$, $f(x)$ is undefined. And by definition, strictly speaking, a function defined on a domain $X$ maps every element in the domain to one and only one element in the codomain. The domain and codomain of a function depend upon the set on which $f$ is defined and the set to which elements of the domain are being mapped; both are usually made explicit by including the notation $f: X \to Y$, e.g. along with defining $f(x)$ for $x\in X$. $X$ is then taken to be the domain of $f$, and $Y$ the codomain of $f$, though you'll find that some people interchange the terms "codomain" and "range". So "range" is a bit ambiguous, depending on the text used and how it is defined, because "range" is sometimes defined to be the set of all values $y$ such that there is some $x \in X$ for which it is true that $f(x) = y$, i.e. $f[X]$. One way to circumvent any ambiguity related to use of "range" to refer to $f[X]$ is to note that many prefer to define $f[X]$ to be the "image" of $X$ under $f$, often denoted by $\text{Im}f(x)$, with the understanding that $f[X] = \text{Im}f(x) \subseteq Y.\;\; f[X]=\text{Im}f(x) = Y$ when $f$ is onto $Y$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/234562", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 3, "answer_id": 2 }
A limit $\lim_{n\rightarrow \infty}\sum_{k=1}^{n}\frac{k\sin\frac{k\pi}{n}}{1+(\cos\frac{k\pi}{n})^2}$, maybe related to a Riemann sum Find $$\lim_{n\rightarrow \infty}\sum_{k=1}^{n}\frac{k\sin\frac{k\pi}{n}}{1+(\cos\frac{k\pi}{n})^2}$$ I think this may be related to a Riemann sum, but I can't deal with the $k$ in front of $\sin$.
If there is no typo, then the answer is $\infty$. Indeed, let $m$ be any fixed positive integer and consider the final $m$ consecutive terms: $$ \sum_{k=n-m}^{n-1} \frac{k \sin \frac{k \pi}{n}}{1 + \cos^2 \frac{k \pi}{n}} = \sum_{k=1}^{m} \frac{(n-k) \sin \frac{k \pi}{n}}{1 + \cos^2 \frac{k \pi}{n}}. $$ As $n \to \infty$, each term converges to $\frac{k\pi}{2}$, in view of the substitution $x = \frac{k\pi}{n}$ and the following limit $$ \lim_{x\to 0}\frac{\sin x}{x(1 + \cos^2 x)} = \frac{1}{2}. $$ Thus $$ \liminf_{n\to\infty} \sum_{k=1}^{n} \frac{k \sin \frac{k \pi}{n}}{1 + \cos^2 \frac{k \pi}{n}} \geq \lim_{n\to\infty} \sum_{k=n-m}^{n-1} \frac{k \sin \frac{k \pi}{n}}{1 + \cos^2 \frac{k \pi}{n}} = \sum_{k=1}^{m} \frac{k \pi}{2} = \frac{m(m+1)}{4}\pi. $$ Now letting $m \to \infty$, we obtain the desired result. If instead the sum is normalized by $\frac1n$ (so that it becomes a genuine Riemann sum), we have $$ \lim_{n\to\infty} \sum_{k=1}^{n} \frac{\frac{k}{n} \sin \frac{k \pi}{n}}{1 + \cos^2 \frac{k \pi}{n}} \frac{1}{n} = \frac{1}{\pi^2} \int_{0}^{\pi} \frac{x \sin x}{1 + \cos^2 x} \, dx = \frac{1}{4}. $$
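A numeric check of the normalized limit (a sketch in Python):

```python
from math import sin, cos, pi

def riemann(n):
    return sum((k / n) * sin(k * pi / n) / (1 + cos(k * pi / n) ** 2)
               for k in range(1, n + 1)) / n

print(riemann(1_000), riemann(10_000))  # both close to 0.25
```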
{ "language": "en", "url": "https://math.stackexchange.com/questions/234635", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Unimodular matrix definition? I'm a bit confused. Based on Wikipedia: In mathematics, a unimodular matrix M is a square integer matrix having determinant +1, 0 or −1. Equivalently, it is an integer matrix that is invertible over the integers. So the determinant could be +1, 0 or −1. But a matrix is invertible only if its determinant is non-zero! In fact, from Wolfram: A unimodular matrix is a real square matrix A with determinant det(A) = -1|+1. Which is the right answer?
Well spotted. In a case like this, it's a good idea to check the article's history (using the "View history" link at the top). In the present case, the error was introduced only two days ago by an anonymous user in this edit (which I just reverted).
{ "language": "en", "url": "https://math.stackexchange.com/questions/234765", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
What exactly does conjugation mean? In group theory, the mathematical definition of "conjugation" is: $$ (g, h) \mapsto g h g^{-1} $$ But what exactly does this mean, in layman's terms?
The following is equivalent to the second paragraph of Marc van Leeuwen's answer, but I think it might help emphasize how natural conjugation really is. With notation as in Marc's answer, let me write $h'$ for the conjugate $ghg^{-1}$. Then $h'$ is obtained by shifting $h$ along $g$ in the sense that, whenever $h$ sends an element $x\in X$ to another element $y$, then $h'$ sends $g(x)$ to $g(y)$. If, as people sometimes do, one regards a function $h$ as a set of ordered pairs, then $h'$ is obtained by applying $g$ to both components in all those ordered pairs.
{ "language": "en", "url": "https://math.stackexchange.com/questions/234825", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "18", "answer_count": 5, "answer_id": 1 }
Prove that: Every $\sigma$-finite measure is semifinite. I am trying to prove every $\sigma$-finite measure is semifinite. This is what I have tried: Definition of $\sigma$-finiteness: Let $(X,\mathcal{M},\mu)$ be a measure space. Then $ \mu$ is $\sigma$-finite if $X = \bigcup_{i=1}^{\infty}E_i$ where $E_i \in \mathcal{M}$ and $\mu(E_i) < \infty$ for all $ i \in \mathbb N$. (Real Analysis: Modern Techniques and Their Applications, 2nd Edition, by Folland.) Definition of semifiniteness: $\mu $ is semifinite if for each $E \in \mathcal{M}$ with $\mu(E) = \infty$ $\exists$ $F \subset E$ with $F \in \mathcal{M}$ and $0 < \mu(F) < \infty$. So, take $A$ s.t. $\mu(A) = \infty$. We know $X \cap A = A$. Then $A = A \cap \bigcup E_j$, hence $A = \bigcup (E_j \cap A)$. By subadditivity, $$\infty = \mu(A) = \mu\left(\bigcup E_j \cap A\right) \leq \sum_1^{\infty} \mu(E_j \cap A) $$ OK, I am here. But I do not understand how to continue, or even whether this is the right approach. Thanks.
We can find $N$ such that $\mu\left(A\cap E_N\right)>0$ (otherwise, we would have for each $n$ that $\mu\left(A\cap\bigcup_{j=1}^nE_j\right)=0$ and $\mu\left(A\right)=\lim_{n\to +\infty}\mu\left(A\cap\bigcup_{j=1}^nE_j\right)$), and we have $\mu\left(A\cap E_N\right)\leqslant \mu\left( E_N\right)<+\infty$. Furthermore, $A \cap E_N\subset A$, hence the choice $F:=A\cap E_N$ does the job. This proves that $\mu$ is semi-finite. The converse is not true: counting measure on the subsets of $[0,1]$ is semi-finite but not $\sigma$-finite.
{ "language": "en", "url": "https://math.stackexchange.com/questions/234898", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "19", "answer_count": 2, "answer_id": 1 }
Probability of winning in the lottery In the lottery, $5$ numbers are drawn from $35$ numbers, and for $3$ correctly guessed numbers there is a third prize. What is the probability that we will win the third prize if we buy one ticket with $5$ numbers?
You can choose $5$ numbers from $35$ in $\binom{35}{5}$ ways. To match exactly $3$ of your $5$ ticket numbers, there are $\binom{5}{3}=10$ possibilities, and the $2$ other drawn numbers are chosen from the $30$ numbers that are not on your ticket, in $\binom{30}{2}=435$ ways. So the total number of draws that win the third prize is $10\times435=4350$, and the probability is $\frac{4350}{\binom{35}{5}}=\frac{4350}{324632}\approx 0.0134$.
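The same count in Python (a minimal sketch):

```python
from math import comb

ways = comb(5, 3) * comb(30, 2)   # draws matching exactly 3 ticket numbers
total = comb(35, 5)               # all possible draws
print(ways, total, ways / total)  # 4350 324632 ~0.0134
```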
{ "language": "en", "url": "https://math.stackexchange.com/questions/234956", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Find an equation of the plane that passes through the point $(1,2,3)$, and cuts off the smallest volume in the first octant. *help needed please* Find an equation of the plane that passes through the point $(1,2,3)$, and cuts off the smallest volume in the first octant. This is what I've done so far... Let $a,b,c$ be the points where the plane cuts the $x,y,z$ axes --> $\frac{x}{a} + \frac{y}{b} + \frac{z}{c} = 1$, where $a,b,c >0$. I saw that a solution for this question was to use a Lagrange multiplier. The solution goes as follows... The product $abc$ will be equal to $6$ times the volume of the tetrahedron $OABC$ (could someone explain to me why this is so?). Minimize $f(a,b,c) = abc$ given the condition $\frac1a + \frac2b + \frac3c -1=0$: $f(a,b,c) = abc + \lambda (\frac1a + \frac2b + \frac3c -1)$ 2nd query to the question... $f_a = \lambda g_a \Rightarrow bc - \frac\lambda {a^2} ; a = \sqrt \frac \lambda {bc} \\f_b = \lambda g_b \Rightarrow ac - \frac{2\lambda}{b^2} ; b = \sqrt \frac {2\lambda}{ac} \\f_c = \lambda g_c \Rightarrow ab - \frac{3\lambda}{c^2} ; c = \sqrt \frac {3\lambda}{ab}$ Using the values of $a,b,c$ in $\frac1a+\frac2b+\frac3c = 1\Rightarrow \lambda =\frac{abc}{a+2b+3c}$. May I know how I should proceed to solve for the unknowns?
The volume of a pyramid (of any shaped base) is $\frac13A_bh$, where $A_b$ is the area of the base and $h$ is the height (perpendicular distance from the base to the opposing vertex). In this particular case, we're considering a triangular pyramid, with the right triangle $OAB$ as a base and opposing vertex $C$. The area of the base is $\frac12ab$, and the height is $c$, so the volume of the tetrahedron is $\frac16abc$--equivalently, $abc$ is $6$ times the volume of the tetrahedron.
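To actually finish the optimization, one can check numerically (a brute-force sketch in Python, assuming the minimum is attained at an interior critical point) that the minimizer is $a=3$, $b=6$, $c=9$, i.e. the plane $\frac{x}{3}+\frac{y}{6}+\frac{z}{9}=1$ with tetrahedron volume $\frac{abc}{6}=27$:

```python
best = None
for i in range(1, 400):
    for j in range(1, 400):
        a, b = 0.05 * i, 0.05 * j
        rest = 1 - 1 / a - 2 / b     # what remains for the 3/c term
        if rest > 0:
            c = 3 / rest             # forces 1/a + 2/b + 3/c = 1
            v = a * b * c
            if best is None or v < best[0]:
                best = (v, a, b, c)
print(best)  # ~ (162.0, 3.0, 6.0, 9.0); volume = 162/6 = 27
```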
{ "language": "en", "url": "https://math.stackexchange.com/questions/235041", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 0 }
Numbers to the Power of Zero I have been a witness to many a discussion about numbers to the power of zero, but I have never really been sold on any claims or explanations. This is a three-part question; the parts are as follows: (1) Why does $n^{0}=1$ when $n\neq 0$? How does that get defined? (2) What is $0^{0}$? Is it undefined? If so, why does it not equal $1$? (3) What is the equation that defines exponents? I can easily write a small program to do it (see below), but what about in equation format? I just want a little discussion about numbers to the power of zero, for some clarification. Code for Exponents (Ruby): def find_exp(x, n); total = 1; n.times { total *= x }; total; end
To define $x^0$, we cannot just use the definition of repeated multiplication; we have to understand how the laws of exponents work. For $x\neq 0$ we can define $x^0$ to be: $$x^0 = x^{n - n} = \frac{x^n}{x^n}.$$ Now, let us set $x^n = a$ (note that $a\neq 0$). It then simplifies as $$\frac{x^n}{x^n} = \frac{a}{a} = 1.$$ So that's why $x^0 = 1$ for any number $x\neq 0$. Now, you were asking what $0^0$ means. Well, the same manipulation gives: $$0^0 = 0^{n - n} = \frac{0^n}{0^n} = \frac{0}{0}.$$ Here is where it gets confusing. It is tempting to say that $\frac{0}{0}$ equals either $0$ or $1$, but it turns out that $\frac{0}{0}$ is consistent with infinitely many values (any $y$ satisfies $0\cdot y=0$). Therefore, it is indeterminate. Because we mathematicians want to define it as some exact value, which is not possible here, we just say that it is undefined. NOTE: in many contexts (combinatorics, power series) one nevertheless adopts the convention $0^0 = 1$, so that the rule $x^0 = 1$ can be used uniformly. I hope this clarifies all your doubts.
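For what it's worth, many programming languages adopt the $0^0=1$ convention; a one-line check in Python:

```python
print(2 ** 0, (-3.5) ** 0, 0 ** 0)  # 1 1.0 1 -- Python returns 1 for 0**0
```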
{ "language": "en", "url": "https://math.stackexchange.com/questions/235081", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "17", "answer_count": 10, "answer_id": 7 }
The inverse of the adjacency matrix of an undirected cycle Is there an expression for $A^{-1}$, where $A_{n \times n}$ is the adjacency matrix of an undirected cycle $C_n$, in terms of $A$? I want this expression because I want to compute $A^{-1}$ without actually inverting $A$. As one answer suggests, $A$ is non-invertible for certain values of $n$ (namely when $n$ is a multiple of $4$).
For $n=4$, the matrix in question is $$\pmatrix{0&1&0&1\cr1&0&1&0\cr0&1&0&1\cr1&0&1&0\cr}$$ which is patently noninvertible.
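A one-line numeric confirmation (a sketch in Python; numpy assumed available):

```python
import numpy as np

A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]])
print(np.linalg.det(A))  # 0.0 (up to rounding): rows 1 and 3 coincide, so A is singular
```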
{ "language": "en", "url": "https://math.stackexchange.com/questions/235244", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 3, "answer_id": 0 }
$\epsilon$-$\delta$ proof involving differentiation in a defined neighborhood The problem states: Suppose $f'(b) = M$ and $M <0$. Find $\delta>0$ so that if $x\in (b-\delta, b)$, then $f(x) > f(b).$ This intuitively makes sense, but I am not exactly sure how to find $\delta$. I greatly appreciate any help I can receive.
Remember that the definition of the derivative implies that $$ \lim_{x\to b^-}\frac{f(b)-f(x)}{b-x}=M. $$ But $M<0$ and $b-x>0$. So there is a $\delta>0$ such that the quotient is negative (say, less than $M/2$) for all $x\in(b-\delta,b)$; for such $x$ we get $f(b)-f(x)<0$, i.e. $f(x)>f(b)$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/235319", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Finding a conformal map from the exterior of unit disk onto the exterior of an ellipse Find a conformal bijection $f(z):\mathbb{C}\setminus D\rightarrow \mathbb{C}\setminus E(a, b)$ where $E(a, b)$ is the ellipse $\{x + iy : \frac{x^2}{a}+\frac{y^2}{b}\leq1\}$ Here $D$ denotes the closed unit disk. I hate to ask having not given the question a significant amount of thought, but due to illness I missed several classes, really need to catch up, and the text book we're using (Ahlfors) doesn't seem to have anything on the mapping of circles to ellipses except a discussion of level curves on pages 94-95. and I can't figure out how to get there through composition of the normal elementary maps (powers, exponential and logarithmic), and fractional linear transformations take circles into circles and are therefore useless for figuring this out. I prefer hints, thanks in advance.
The conformal map $z\mapsto z+z^{-1}$ sends $\{|z|>R\}$ onto the exterior of an ellipse with semi-axes $A=R+R^{-1}$ and $B=R-R^{-1}$. Note that $A^2-B^2=4$. Thus, you should multiply the given $a,b$ by a constant $C$ such that $(Ca)^2-(Cb)^2=4$, then solve $Ca=R+R^{-1}$ for $R$. After applying the map given above, the final step is $z\mapsto z/C$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/235498", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Finding the limit of $x_1=0 , x_{n+1}=\frac{1}{1+x_n}$ I have had big problems finding the limit of the sequence $x_1=0 , x_{n+1}=\frac{1}{1+x_n}$. So far I've only succeeded in proving that for $n\geq2$: $x_n>0\Rightarrow x_{n+1}>0$ (Hopefully that much is correct: It is true for $n=2$, and for $x_{n+1}>0$ exactly when $\frac{1}{1+x_n}>0$, which leads to the inequality $x_n>-1$ which is true by the induction assumption that $x_n>0$.) On everything else I failed to come up with answers that make sense (such as proving that $x_{n+1}>x_n \forall n\geq1$). I'm new to recursive sequences, so it all seems like a world behind the mirror right now. I'd appreciate any help, thanks!
It is obvious that $f:x\mapsto\frac1{1+x}$ is a monotonically decreasing continuous function $\mathbf R_{\geq0}\to\mathbf R_{\geq0}$, and it is easily computed that $\alpha=\frac{-1+\sqrt5}2\approx0.618$ is its only fixed point (solution of $f(x)=x$). So $f^2:x\mapsto f(f(x))$ is a monotonically increasing function that maps the interval $[0,\alpha)$ into itself. Since $x_3=f^2(x_1)=\frac12>0=x_1$ one now sees by induction that $(x_1,x_3,x_5,...)$ is an increasing sequence bounded by $\alpha$. It then has a limit, which must be a fixed point of $f^2$ (the function mapping each term of the sequence to the next term). One checks that on $ \mathbf R_{\geq0}$ the function $f^2$ has no other fixed point than the one of $f$, which is $\alpha$, so that must be value of the limit. The sequence $(x_2,x_4,x_6,...)$ is obtained by applying $f$ to $(x_1,x_3,x_5,...)$, so by continuity of $f$ it is also convergent, with limit $f(\alpha)=\alpha$. Then $\lim_{n\to\infty}x_n=\alpha$.
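A quick numerical illustration of the convergence (a sketch in Python):

```python
x = 0.0                        # x_1 = 0
for _ in range(40):
    x = 1 / (1 + x)            # x_{n+1} = 1/(1 + x_n)
print(x, (5 ** 0.5 - 1) / 2)   # both ~ 0.6180339887
```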
{ "language": "en", "url": "https://math.stackexchange.com/questions/235578", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 5, "answer_id": 0 }
Chain rule application I want to find $y'$ where $$ y = \frac{\frac{b}{a}}{1+ce^{-bt}}.$$ But I don't want to use the quotient rule for differentiation; I want to use the chain rule. My solution is: Write $$y=\frac{b}{a}\cdot \frac{1}{1+ce^{-bt}}.$$ Then in $$\frac{1}{1+ce^{-bt}},$$ the inner function is $1+ce^{-bt}$ and the outer function is $$\frac{1}{1+ce^{-bt}}.$$ Hence using the chain rule we have $$\left(\frac{1}{1+ce^{-bt}}\right)'= \frac{-1}{(1+ce^{-bt})^2} \cdot -bce^{-bt} = \frac{bce^{-bt}}{(1+ce^{-bt})^2}.$$ Thus $$y'= \frac{\frac{b^2}{a}ce^{-bt}}{(1+ce^{-bt})^2}.$$ Am I correct?
Yes. The outer function is $s\mapsto \displaystyle\frac1s$, or you can call its variable anything. And '$\cdot -bce^{-bt}$' should be in parentheses: $\cdot (-bce^{-bt})$; otherwise it seems correct.
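One can also confirm the computation symbolically (a sketch in Python; sympy assumed available):

```python
from sympy import symbols, exp, diff, simplify

a, b, c, t = symbols('a b c t', positive=True)
y = (b / a) / (1 + c * exp(-b * t))
claimed = (b**2 / a) * c * exp(-b * t) / (1 + c * exp(-b * t))**2
print(simplify(diff(y, t) - claimed))  # 0
```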
{ "language": "en", "url": "https://math.stackexchange.com/questions/235646", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
question about normal subgroups If $N$ is a normal subgroup of $G$ and $M$ is a normal subgroup of $G$, and if $MN=\{mn|m\in M,n\in N\}$, prove that $MN$ is a subgroup of $G$ and that $MN$ is a normal subgroup of $G$. The attempt: I tried just starting by showing that $MN$ is a subgroup of $G$. I said let $a=m_1 n_1$ for some $m_1 \in M$ and $n_1 \in N $ and let $b=m_2 n_2$ for some $m_2 \in M$ and $n_2 \in N$, and we need to show $a*b^{-1}$ $\in MN$. So I get $a*b^{-1}$=$m_1n_1n_2^{-1}m_2^{-1}=m_1n_3m_2^{-1}$ but then I don't know how to show that this is in $MN$. Tips on this or the next part of the problem?
$$m_1n_3m_2^{-1}=m_1m_2^{-1}(m_2n_3m_2^{-1})\in MN$$ since $m_2n_3m_2^{-1}\in N$ by normality of $N$. For normality of $MN$: for any $g\in G$, $g(mn)g^{-1}=(gmg^{-1})(gng^{-1})\in MN$. General Lemma: if $\,M,N\,$ are subgroups of $\,G\,$, then $\,MN\,$ is a subgroup iff $\,MN=NM\,$. In particular, if $\,M\triangleleft G\,$ or $\,N\triangleleft G\,$, then $\,MN=NM\,$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/235693", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Disjoint Equivalence Why do equivalence classes on a particular set have to be disjoint? What's the intuition behind it? I'd appreciate your help. Thank you!
The idea behind an equivalence relation is to generalize the notion of equality. The idea behind the equality relation is that something is only equal to itself. So two distinct objects are not equal. With equivalence relation, if so, we allow two things to be "almost equal" (namely equal where it count, and we don't care about their other distinctive properties). So the equivalence class of an object is the class of things which are "almost equal" to it. Clearly if $x$ and $y$ are almost equal they have to have the same class of almost equal objects; and similarly if they are not almost equal then it is impossible to have an object almost equal to both.
{ "language": "en", "url": "https://math.stackexchange.com/questions/235749", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 3, "answer_id": 2 }
A question on $\liminf$ and $\limsup$ Let us take a sequence of functions $f_n(x)$. When one writes $\sup_n f_n$, I understand what it means: the supremum equals the least upper bound of the values $f_n(x)$ at every $x$. The infimum is defined similarly. Then, when one writes $\limsup f_n$, I understand the following: there are convergent subsequences of $f_n$, call them $f_{n_k}$, and collecting their limits in a set $E$ we have $$\limsup f_n = \sup E$$ First question: are these definitions right? Second question: I do not understand the notion of convergent subsequences. What does it really mean? And why are they necessary in the first place; why are they important? Thanks.
(1) For any $ x $ there are indices $ n_{k(x)} $ such that \begin{equation} \limsup f_n(x) = \lim f_{n_{k(x)}}(x). \end{equation} (2) Maybe $ f_n(x) $ does not converge. But there are subindices $ n_k $ such that $ f_{n_k}(x) $ converges.
{ "language": "en", "url": "https://math.stackexchange.com/questions/235839", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 2 }
Finding the norm of the operators How do I find the norm of the following operators, i.e. how do I find $\lVert T_z\rVert$ and $\lVert l\rVert$? 1) Let $z\in \ell^\infty$ and $T_z\colon \ell^p\to\ell^p$ with $$(T_zx)(n)=z(n)\cdot x(n).$$ My thought was to use the Banach-Steinhaus theorem, but it seems straightforward and I don't know if I am right: $\lVert T_zx\rVert _p \leqslant\lVert z\rVert \cdot n\cdot\lVert x\rVert_p \cdot n=n^2\lVert x\rVert _p$, so if I choose $x=1$ then I get $\lVert T_z\rVert =n^2$. 2) Let $0\leqslant t_1\leqslant\cdots\leqslant t_n=1$ and $\alpha_1,\dots,\alpha_n \in K$, $l\colon C([0,1])\to K$ with $l(x)=\sum_{i=1}^n \alpha_i x(t_i)$. How do I find the operator norm in this case as well? I am quite sure I am not right. I would be glad if I could get some help; hints would be great! Thanks in advance.
1) As for each $n$, $|z(n)x(n)|^p\leqslant \lVert z\rVert_{\infty}^p|x(n)|^p$, we certainly have $\lVert T_z\rVert\leqslant \lVert z\rVert_{\infty}$. To get the other inequality, fix $\delta>0$ and pick $k$ such that $|z(k)|\geqslant \lVert z\rVert_{\infty}-\delta$ (the case $z=0$ is obvious); testing $T_z$ on the $k$-th standard basis vector gives $\lVert T_z\rVert\geqslant \lVert z\rVert_{\infty}-\delta$. 2) We assume the $t_j$ distinct. Let $f_j$ be a continuous map such that $f_j(t_j)=e^{i\theta_j}$, where $e^{i\theta_j}\alpha_j=|\alpha_j|$, and $f_j(t_k)=0$ if $k\neq j$. We can choose the $f_j$'s such that $\lVert \sum_{j=1}^nf_j\rVert=1$; then $l(\sum_jf_j)=\sum_j|\alpha_j|$, and since $|l(x)|\leqslant\sum_j|\alpha_j|\,\lVert x\rVert$ always, $\lVert l\rVert=\sum_{j=1}^n|\alpha_j|$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/235978", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Summation over exponent $\sum_{i=0}^k 4^i= \frac{4^{k+1}-1}3$ Why does $$\sum_{i=0}^k 4^i= \frac{4^{k+1}-1}{3}\,?$$ Where does that $3$ come from? OK, from your answers I looked it up in the Wikipedia article on geometric progressions, but to derive the formula it says to multiply by $(1-r)$, not $(r-1)$; why is this case different?
Here is an easy mnemonic. If you have a geometric sum, then $$\sum {\rm geometric} = {{\rm first} - {\rm blast}\over {1 - {\rm commonRatio}}}.$$ In this case, first is the first term, blast is the term one beyond the last, and commonRatio is the common ratio of the terms. If the sum is finite and ${\rm commonRatio} > 1$, reverse the subtractions in the numerator and denominator for greater prettiness. In your sum, first $=1$, blast $=4^{k+1}$, and commonRatio $=4$, so the sum is $\frac{4^{k+1}-1}{4-1}=\frac{4^{k+1}-1}{3}$; that is where the $3$ comes from (multiplying by $(r-1)$ instead of $(1-r)$ just flips the sign of both numerator and denominator).
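A tiny check of the identity (a sketch in Python; note the exact integer division):

```python
# Check sum_{i=0}^{k} 4^i == (4^(k+1) - 1) / 3 for small k.
assert all(sum(4 ** i for i in range(k + 1)) == (4 ** (k + 1) - 1) // 3
           for k in range(30))
print("identity holds for k = 0..29")
```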
{ "language": "en", "url": "https://math.stackexchange.com/questions/236106", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 5, "answer_id": 2 }
Example of a general random variable with finite mean but infinite variance Given a probability triple $(\Omega, \mathcal{F}, \mu)$ of Lebesgue measure $[0,1]$, find a random variable $X : \Omega \to \mathbb{R}$ such that the expected value $E(X)$ converges to a finite, positive value, but $E(X^2)$ diverges.
An example is a random variable $X$ having a Student-$t$ distribution with $\nu = 2$ degrees of freedom. Its mean is $E[X] = 0$ (the mean exists for $\nu > 1$), but its second moment $E[X^2] = \operatorname{Var}[X] = \infty$ for $1 < \nu \le 2$. Edit: Oh wait, finite positive. Well, $X+1$, I guess.
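A hedged aside: on the given triple one can also write an explicit example directly. Take $X(\omega)=\omega^{-1/2}$ for $\omega\in(0,1]$ (and, say, $X(0)=0$). Then $$E(X)=\int_0^1 \omega^{-1/2}\,d\omega = 2 \quad\text{(finite and positive)},\qquad E(X^2)=\int_0^1 \omega^{-1}\,d\omega = \infty.$$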
{ "language": "en", "url": "https://math.stackexchange.com/questions/236181", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12", "answer_count": 3, "answer_id": 0 }
Question on a proof of a sequence I have some questions 1) In the forward direction of the proof, it employs the inequality $|x_{k,i} - a_i| \leq (\sum_{j=1}^{n} |x_{k,j} - a_j|^2)^{\frac{1}{2}}$. What exactly is this inequality? 2) In the backwards direction they claim to use the inequality $\epsilon/n$. I thought that when we choose $\epsilon$ in our proofs, it shouldn't depend on $n$ because $n$ is always changing?
$\def\abs#1{\left|#1\right|}$(1) We have that $$ \abs{x_{k,i} - a_i}^2 \le \sum_{j=1}^n \abs{x_{k,j} - a_j}^2 $$ for sure, as adding nonnegative numbers makes the expression bigger. Now, exploiting the monotonicity of $\sqrt{\cdot}$, we have $$ \abs{x_{k,i} - a_i} = \left(\abs{x_{k,i} - a_i}^2\right)^{1/2} \le \left(\sum_{j=1}^n \abs{x_{k,j} - a_j}^2\right)^{1/2} $$ (2) When you talk about sequences $(x_n)$, where you use $n$ to index the sequence's elements, your $\epsilon > 0$ must not depend on $n$. But in this case, $n$ denotes the (fixed, not changing for different elements $x_k$) dimension of $\mathbb R^n$, and you are talking about a sequence $(x_k)$ in $\mathbb R^n$; choosing $\epsilon/n$ is therefore perfectly legitimate.
{ "language": "en", "url": "https://math.stackexchange.com/questions/236268", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Are there five complex numbers satisfying the following equalities? Can anyone help on the following question? Are there five complex numbers $z_{1}$, $z_{2}$ , $z_{3}$ , $z_{4}$ and $z_{5}$ with $\left|z_{1}\right|+\left|z_{2}\right|+\left|z_{3}\right|+\left|z_{4}\right|+\left|z_{5}\right|=1$ such that the smallest among $\left|z_{1}\right|+\left|z_{2}\right|-\left|z_{1}+z_{2}\right|$, $\left|z_{1}\right|+\left|z_{3}\right|-\left|z_{1}+z_{3}\right|$, $\left|z_{1}\right|+\left|z_{4}\right|-\left|z_{1}+z_{4}\right|$, $\left|z_{1}\right|+\left|z_{5}\right|-\left|z_{1}+z_{5}\right|$, $\left|z_{2}\right|+\left|z_{3}\right|-\left|z_{2}+z_{3}\right|$, $\left|z_{2}\right|+\left|z_{4}\right|-\left|z_{2}+z_{4}\right|$, $\left|z_{2}\right|+\left|z_{5}\right|-\left|z_{2}+z_{5}\right|$, $\left|z_{3}\right|+\left|z_{4}\right|-\left|z_{3}+z_{4}\right|$, $\left|z_{3}\right|+\left|z_{5}\right|-\left|z_{3}+z_{5}\right|$ and $\left|z_{4}\right|+\left|z_{5}\right|-\left|z_{4}+z_{5}\right|$is greater than $8/25$? Thanks!
Suppose you have solutions and express $z_i$ as $r_i e^{i\theta_i}$. (I use $s_i = \sin( \theta_i )$ and $c_i = \cos( \theta_i )$ to make notation shorter.) Then $$\begin{align*} |z_i| + |z_j| - |z_i + z_j| & = |r_i e^{i\theta_i}| + |r_j e^{i\theta_j}| - |r_i e^{i\theta_i} + r_j e^{i\theta_j}| \\ & = r_i + r_j - |r_i(c_i +is_i) + r_j(c_j +is_j) | \\ & = r_i + r_j - |(r_ic_i+r_jc_j) + i(r_is_i+r_js_j)| \\ & = r_i + r_j - \sqrt{(r_ic_i+r_jc_j)^2 + (r_is_i+r_js_j)^2} \\ & = r_i + r_j - \sqrt{ ( r_i^2c_i^2 + 2r_ir_jc_ic_j + r_j^2c_j^2 ) + ( r_i^2s_i^2 + 2r_ir_js_is_j + r_j^2s_j^2 ) } \\ & = r_i + r_j - \sqrt{ r_i^2( c_i^2 + s_i ^2 ) + r_j^2(c_j^2 + s_j^2) + 2r_ir_j(c_ic_j+s_is_j) } \\ |z_i| + |z_j| - |z_i + z_j| & = r_i + r_j - \sqrt{r_i^2 + r_j^2+2r_ir_j\cos(\theta_i-\theta_j) } \end{align*} $$ Then $$\begin{align*} |z_i| + |z_j| - |z_i + z_j| > \frac{8}{25} & \Leftrightarrow r_i + r_j - \sqrt{r_i^2 + r_j^2+2r_ir_j\cos(\theta_i-\theta_j) } > \frac{8}{25} \\ & \Leftrightarrow r_i + r_j - \frac{8}{25} > \sqrt{r_i^2 + r_j^2+2r_ir_j\cos(\theta_i-\theta_j) } \\ & \Leftrightarrow ( r_i + r_j - \frac{8}{25} ) ^ 2 > r_i^2 + r_j^2+2r_ir_j\cos(\theta_i-\theta_j) \\ & \Leftrightarrow r_i^2 + r_j^2 + (\frac{8}{25})^2 + 2r_ir_j - 2\frac{8}{25} r_i - 2\frac{8}{25} r_j > r_i^2 + r_j^2+2r_ir_j\cos(\theta_i-\theta_j) \\ & \Leftrightarrow (\frac{8}{25})^2 + 2r_ir_j - 2\frac{8}{25} r_i - 2\frac{8}{25} r_j > 2r_ir_j\cos(\theta_i-\theta_j) \\ & \Leftrightarrow (\frac{8}{25})^2 + 2r_ir_j( 1 - \cos(\theta_i-\theta_j) ) - 2\frac{8}{25} r_i - 2\frac{8}{25} r_j > 0 \\ \end{align*} $$ I might update later but that's all I have for now :/ But I would try to express $r_i$ as a function of $r_{\sigma(i)}$ with $\sigma$ a permutation. And by doing that, you would probably get an ugly way of calculating any $r_j$ from all the $\theta_k$ where $j,k \in \{\sigma(i)^n, n\in \mathbb{N}\}$.
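Pushing the last display a little further suggests the answer to the question is no: among five arguments some pair satisfies $|\theta_i-\theta_j|\le 2\pi/5$ (pigeonhole on the circle), and since $$|z_i+z_j|^2-(r_i+r_j)^2\cos^2\tfrac{\theta_i-\theta_j}{2}=\tfrac{1}{2}\left(1-\cos(\theta_i-\theta_j)\right)(r_i-r_j)^2\ge 0,$$ that pair has $|z_i|+|z_j|-|z_i+z_j|\le (r_i+r_j)\left(1-\cos\frac{\pi}{5}\right)\le 1-\cos 36^\circ\approx 0.19<\frac{8}{25}$. A quick random search (a sketch; the helper name and the sampling scheme are my own choices) is consistent with this:

    import random, cmath

    def smallest_gap(zs):
        # smallest of |z_i| + |z_j| - |z_i + z_j| over all pairs i < j
        return min(abs(zi) + abs(zj) - abs(zi + zj)
                   for k, zi in enumerate(zs) for zj in zs[k+1:])

    best = 0.0
    for _ in range(100000):
        zs = [random.random() * cmath.exp(1j * random.uniform(0, 2 * cmath.pi))
              for _ in range(5)]
        s = sum(abs(z) for z in zs)
        zs = [z / s for z in zs]      # normalize so the moduli sum to 1
        best = max(best, smallest_gap(zs))

    print(best)   # stays below about 0.19 in practice, well under 8/25 = 0.32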
{ "language": "en", "url": "https://math.stackexchange.com/questions/236313", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
Coin game - applying Kelly criterion I'm looking at a simple coin game where I have \$100, variable betting allowed, and 100 flips of a fair coin where H=2x stake+original stake, T=lose stake. * *If I'm asked to maximise the expected final net worth $N$, am I meant to simply bet a fraction of $\frac{1}{4}$ (according to the Wikipedia article on the Kelly criterion)? *What if I'm asked to maximise the expectation of $\ln(100+N)$? Does this change my answer? Thanks for any help.
The Wikipedia essay says bet $p-(q/b)$, where $p$ is the probability of winning, $q=1-p$ of losing, and $b$ is the payment (not counting the dollar you bet) on a one dollar bet. For your game, $p=q=1/2$ and $b=2$ so, yes, bet one-fourth of your current bankroll. Sorry, I'm not up to thinking about the logarithmic question.
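A sketch of both the formula and a brute-force check of the log-growth criterion that Kelly betting optimizes (the variable names are my own):

    import math

    p, q, b = 0.5, 0.5, 2.0          # win prob, loss prob, net odds (2-to-1)
    f_star = p - q / b               # Kelly fraction
    print(f_star)                    # 0.25

    # expected log-growth per flip when betting a fixed fraction f
    g = lambda f: p * math.log(1 + b * f) + q * math.log(1 - f)
    # a grid search confirms the maximum sits at f = 0.25
    print(max((f / 1000 for f in range(1000)), key=g))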
{ "language": "en", "url": "https://math.stackexchange.com/questions/236394", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
how many ways can the letters in ARRANGEMENT be arranged Using all the letters of the word ARRANGEMENT, how many different words using all the letters at a time can be made such that both A's, both E's, both R's and both N's occur together?
"ARRANGEMENT" is an eleven-letter word. If there were no repeating letters, the answer would simply be $11!=39916800$. However, since there are repeating letters, we have to divide to remove the duplicates accordingly. There are 2 As, 2 Rs, 2 Ns, 2 Es Therefore, there are $\frac{11!}{2!\cdot2!\cdot2!\cdot2!}=2494800$ ways of arranging it.
{ "language": "en", "url": "https://math.stackexchange.com/questions/236471", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10", "answer_count": 2, "answer_id": 1 }
What do I use to find the image and kernel of a given matrix? I had a couple of questions about a matrix problem. What I'm given is: Consider a linear transformation $T: \mathbb R^5 \to \mathbb R^4$ defined by $T( \vec{x} )=A\vec{x}$, where $$A = \left(\begin{array}{crc} 1 & 2 & 2 & -5 & 6\\ -1 & -2 & -1 & 1 & -1\\ 4 & 8 & 5 & -8 & 9\\ 3 & 6 & 1 & 5 & -7 \end{array}\right)$$ * *Find $\mathrm{im}(T)$ *Find $\ker(T)$ My questions are: What do they mean by the transformation? What do I use to actually find the image and kernel, and how do I do that?
I could give an explanation for the most appreciated answer of why the image is calculated in this way. The image of a matrix is basically all the vectors you can obtain after this linear transformation. Let's say $A$ is a $2 \times 2$ matrix $$A=\pmatrix {a_1 & b_1\\ a_2 & b_2}.$$ If we apply $A$ as a linear transformation to the standard basis, a.k.a. the identity matrix, we get $A$ itself. However, we can view this transformation as sending the basis vectors to the columns of $A$: $(1,0)$ to $(a_1, a_2)$, and $(0,1)$ to $(b_1, b_2)$. Therefore, the image of $A$ is just the span of the basis vectors after this linear transformation; in this case, $\text{span}((a_1, a_2), (b_1, b_2))$. This is the reason why we need to get the rref of the transpose of $A$: we are simply extracting linearly independent basis vectors after this linear transformation. If there's anything unclear, I really recommend you watch this video made by 3Blue1Brown; it shows this in a visual way. Here's the link: https://www.youtube.com/watch?v=uQhTuRlWMxw
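If it helps to experiment, here is a short SymPy sketch for the matrix in the question (SymPy's columnspace returns a basis of the image, nullspace a basis of the kernel, and rref the row reduction behind both):

    from sympy import Matrix

    A = Matrix([[ 1,  2,  2, -5,  6],
                [-1, -2, -1,  1, -1],
                [ 4,  8,  5, -8,  9],
                [ 3,  6,  1,  5, -7]])

    print(A.columnspace())   # basis vectors of im(T), a subspace of R^4
    print(A.nullspace())     # basis vectors of ker(T), a subspace of R^5
    print(A.rref())          # the row-reduced form behind both computations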
{ "language": "en", "url": "https://math.stackexchange.com/questions/236541", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "34", "answer_count": 4, "answer_id": 3 }
What does brace below the equation mean? An example of what I am trying to understand is found on this page, at Eq. 3. There are two braces under the equation... What is the definition of the brace(s) and how does it relate to Sp(t) and S[k]? This is what 4 years of calculus gets you 20+ years later... http://en.wikipedia.org/wiki/Poisson_summation_formula Thanks
It is a shortcut to let you know that the expression above the brace is equal to the label below it (either by definition or by calculation).
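In LaTeX the notation is produced with \underbrace, which draws the brace under an expression and puts the label in the subscript slot; for example:

    \underbrace{1 + 2 + \cdots + n}_{n(n+1)/2}
    \qquad
    \underbrace{\sum_{n=-\infty}^{\infty} s(t+nT)}_{s_P(t)}

In the Wikipedia equation the two braces name the sub-expressions they sit under, i.e. they identify one side with $S_p(t)$ and relate the other to the samples $S[k]$.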
{ "language": "en", "url": "https://math.stackexchange.com/questions/236621", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
No Nonzero multiplication operator is compact Let $f,g \in L^2[0,1]$, multiplication operator $M_g:L^2[0,1] \rightarrow L^2[0,1]$ is defined by $M_g(f(x))=g(x)f(x)$. Would you help me to prove that no nonzero multiplication operator on $L^2[0,1]$ is compact. Thanks.
We show that if $g$ is not the equivalence class of the null function, then $M_g$ is not compact. Let $c>0$ such that $\lambda(\{x,|g(x)|>c\})>0$ (such a $c$ exists by assumption). Let $S:=\{x,|g(x)|>c\}$, $H_1:=L^2[0,1]$, $H_2:=\{f\in H_1, f=f\chi_S\}$. Then $T\colon H_2\to H_2$ given by $T(f)=M_g(f)$ is onto. Indeed, if $h\in H_2$, then $T(h\cdot \chi_S \cdot g^{-1})=h\cdot\chi_S=h$. As $H_2$ is a closed subspace of $H_1$, it's a Banach space. This gives, by the open mapping theorem, that $T$ is open. It's also compact, so $T(B(0,1))$ is open and has compact closure. By Riesz's theorem, $H_2$ is finite dimensional. But for each $N$, we can find $N+1$ disjoint subsets of $S$ which have positive measure, and their characteristic functions will be linearly independent, which gives a contradiction.
{ "language": "en", "url": "https://math.stackexchange.com/questions/236669", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 2, "answer_id": 1 }
Convergence of $\sum_{n=1}^\infty \frac{a_n}{n}$ with $\lim(a_n)=0$. Is it true that if $(a_n)_{n=1}^\infty$ is any sequence of positive real numbers such that $$\lim_{n\to\infty}(a_n)=0$$ then, $$\sum_{n=1}^\infty \frac{a_n}{n}$$ converges? If yes, how to prove it?
It is false. For $n\gt 1$, let $a_n=\dfrac{1}{\log n}$. The divergence can be shown by noting that $\int_2^\infty \frac{dx}{x\log x}$ diverges. (An antiderivative is $\log\log x$.)
{ "language": "en", "url": "https://math.stackexchange.com/questions/236776", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 3, "answer_id": 0 }
How to show that two equivalence classes are either equal or have an empty intersection? For $x \in X$, let $[x]$ be the set $[x] = \{a \in X | \ x \sim a\}$. Show that given two elements $x,y \in X$, either a) $[x]=[y]$ or b) $[x] \cap [y] = \varnothing$. How I started it is, if $[x] \cap [y]$ is not empty, then $[x]=[y]$, but then I am kind of lost.
The problem with your "start" is that you are assuming exactly what you want to prove. You need to apply what you know about the properties of an equivalence relation, in this case denoted by $\;\sim\;$ You'll need to use the definitions of $[x], [y]$: $$[x] = \{a \in X | \ x \sim a\} \text{ and}\;\;[y] = \{a \in X | y \sim a\}.\tag{1}$$ Note that $[x]$ and $[y]$ are defined to be sets, which happen also to be equivalence classes. To prove that two sets are equal, show that each is a subset of the other. $$\text{Now, suppose that}\;\; [x] \cap [y] \neq \varnothing.\tag{2}$$ Then there must be at least one element $a\in X$ that is in both equivalence classes. So we have $a \in [x]$ and $a\in [y]$. Here's where the definitions given by $(1)$ come into play, together with the definition of an equivalence relation (the fact that $\sim$ is reflexive, symmetric, transitive): since $x\sim a$ and $y\sim a$, symmetry gives $a\sim y$, and transitivity gives $x\sim y$ (and, by symmetry again, $y\sim x$). Now take any $b\in[y]$: from $x\sim y$ and $y\sim b$, transitivity gives $x\sim b$, so $b\in[x]$; hence $[y]\subset[x]$. Exchanging the roles of $x$ and $y$ gives $[x]\subset[y]$. And so we have $$[x]\subset [y] \;\;\text{and}\;\; [y]\subset [x]\iff [x]=[y].$$ Therefore, having assumed $(2)$, it follows that $[x] = [y]$. The only other option is that $(2)$ is false, in which case we have $[x] \cap [y] = \varnothing$. $\therefore$ either $[x] = [y]$ OR $[x] \cap [y] = \varnothing$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/236851", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 0 }
RSA solving for primes p and q knowing n = pq and p - q I was also given these: $p+q=n-\phi(n)+1$ $p-q=\sqrt((p+q)^2-4n)$ $\phi(n)=(p-1)(q-1)$ $p>q$ I've been trying to manipulate this as a system of equations, but it's just not working out for me. I found a similar problem on this site, but instead of $pq$ and $p-q$ being known, it had $pq$ and $\phi(pq)$, so that didn't help. The $\phi(n)$ function has never been mentioned in this class before, so I can't use any definitions of it (aside from the one given above) without proving them. Could someone please point me in the right direction here?
We have $$(p+q)^2=(p-q)^2+4pq.$$ Calculating the right-hand side is very cheap. Then calculating $p+q$ is cheap, a mild variant of Newton's Method. Once we know $p+q$ and $p-q$, calculating $p$ and $q$ is very cheap.
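In code, using Python's exact integer square root (so this also works at cryptographic sizes; the small primes here are just for illustration):

    from math import isqrt

    def recover_pq(n, p_minus_q):
        p_plus_q = isqrt(p_minus_q**2 + 4 * n)   # (p+q)^2 = (p-q)^2 + 4pq
        p = (p_plus_q + p_minus_q) // 2
        q = (p_plus_q - p_minus_q) // 2
        assert p * q == n
        return p, q

    print(recover_pq(61 * 53, 61 - 53))          # (61, 53)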
{ "language": "en", "url": "https://math.stackexchange.com/questions/236916", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Cancellation laws for function composition Okay I was asked to make a conjecture about cancellation laws for function composition. I figured it would go something like "For all sets $A$ and functions $g: A \rightarrow B$ and $h: A \rightarrow B$, $f \circ g = f \circ h$ implies that $g=h$." I'm pretty sure $g=h$ isn't always true, but is there a property of $f$ that makes this true?
Notice that if there are distinct $b_1,b_2\in B$ such that $f(b_1)=f(b_2)$, you won’t necessarily be able to cancel $f$: there might be some $a\in A$ such that $g(a)=b_1$ and $h(a)=b_2$, but you’d still have $(f\circ g)(a)=(f\circ h)(a)$. Thus, you want $f$ to be injective (one-to-one). Can you prove that that’s sufficient?
{ "language": "en", "url": "https://math.stackexchange.com/questions/237001", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Calculate a point on the line at a specific distance. I have two points which make a line $l$, let's say $(x_1,y_1), (x_2,y_2)$. I want a new point $(x_3,y_3)$ on the line $l$ at a distance $d$ from $(x_2,y_2)$, in the direction away from $(x_1,y_1)$. How should I do this in one or two equations?
A point $(x,y)$ is on the line between $(x_1,y_1)$ and $(x_2,y_2)$ if and only if, for some $t\in\mathbb{R}$, $$(x,y)=t(x_1,y_1)+(1-t)(x_2,y_2)=(tx_1+(1-t)x_2,ty_1+(1-t)y_2)$$ You need to solve $$\begin{align*}d&=\|(x_2,y_2)-(tx_1+(1-t)x_2,ty_1+(1-t)y_2)\|=\sqrt{(tx_2-tx_1)^2+(ty_2-ty_1)^2}\\ &=\sqrt{t^2}\sqrt{(x_2-x_1)^2+(y_2-y_1)^2}\hspace{5pt}\Rightarrow\hspace{5pt} |t|=\frac{d}{\sqrt{(x_2-x_1)^2+(y_2-y_1)^2}}\end{align*}$$ You will have two values of $t$. For $t>0$ this point will be in the direction to $(x_1,y_1)$ and for $t<0$ it will be in the direction away from $(x_1,y_1)$.
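For the programming side, the same computation in coordinate form: normalize the direction from $(x_1,y_1)$ to $(x_2,y_2)$ and step a distance $d$ past the second point (the function name is mine):

    import math

    def point_beyond(x1, y1, x2, y2, d):
        # unit vector pointing from (x1, y1) toward (x2, y2)
        length = math.hypot(x2 - x1, y2 - y1)
        ux, uy = (x2 - x1) / length, (y2 - y1) / length
        # step d further along that direction, i.e. away from (x1, y1)
        return x2 + d * ux, y2 + d * uy

    print(point_beyond(0, 0, 1, 0, 2))   # (3.0, 0.0)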
{ "language": "en", "url": "https://math.stackexchange.com/questions/237090", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 0 }
Existence of two solutions I am having a problem with the following exercise. I need to show that $x^2 = \cos x$ has two solutions. Thank you in advance.
Let $f(x)=x^2-\cos x$. Note that the curve $y=f(x)$ is symmetric about the $y$-axis. It will thus be enough to show that $f(x)=0$ has a unique positive solution. That there is a unique negative solution follows by symmetry. There is a positive solution, since $f(0)\lt 0$ and $f(100)\gt 0$. (Then use the Intermediate Value Theorem.) For uniqueness of the positive solution, note that $f'(x)=2x+\sin x$. In the interval $(0,\pi/2)$, $f'(x)$ is positive because both terms are positive. And for $x\ge \pi/2$, we have $f'(x)\ge \pi-1$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/237159", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 0 }
How do I get the residue of the given function? I'm reading the solution of the integral: $$\int\limits_{-\infty}^{\infty} dx\frac{e^{ax}}{1+e^x}$$ by the residue method. And I understood everything, but how to get the residue of $\frac{e^{az}}{1+e^z}$ (the book just states that the residue is $-e^{i\pi a}$). I know there is a simple pole at $z=i\pi$ and that is the point where I want the residue. Since it is a simple pole I tried using the formula $a_{-1}=f(x)(z-z_0)$ by using the series expansion of the exponential function and I got to this formula $$a_{-1}=-e^{i\pi a}\left[\frac{\left(1+\sum\limits_{n=1}^{\infty}\frac{(z-i\pi)^{n-1}}{n!}(z-i\pi)\right)^a}{\sum\limits_{n=1}^{\infty}\frac{(z-i\pi)^{n-1}}{n!}}\right]_{z=i\pi}$$but I believe thats wrong and I couldn't find my mistake or another way of solving it.
$$\lim_{z\to\pi i}(z-\pi i)\frac{e^{az}}{1+e^z}\stackrel{\text{L'Hôpital}}{=} \lim_{z\to\pi i}\frac{e^{az}}{e^z} = -e^{a\pi i}.$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/237236", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Why is the set of all real numbers uncountable? I understand Cantor's diagonal argument, but it just doesn't seem to jive for some reason. Lets say I assign the following numbers ad infinitum... * *$1\to 0.1$ *$2\to 0.2$ *$3\to 0.3$ ... *$10\to 0.10$ *$11\to 0.11$ and so on... How come there's supposedly at least one more real number than you can map to a member of $\mathbb{N}$?
How come there's supposedly at least one more real number than you can map to a member of $\mathbb N$? Well, suppose there isn't - that Cantor's conclusion, his theorem, is wrong, because our enumeration covers all real numbers. Wonderful, but let us see what happens when we take our enumeration and apply Cantor's diagonal technique to obtain a real number that can't be in this sequence. But that contradicts our supposition! Hence our supposition - that we can have an enumeration of all reals - is false. That the argument can be applied to any enumeration is what it takes for Cantor's theorem to be true. Wilfrid Hodges wrote an excellent survey of wrong refutations of Cantor's argument, his An Editor Recalls Some Hopeless Papers (Postscript). Section 7, dealing with how the counterfactual assumption confuses, might be of particular interest.
{ "language": "en", "url": "https://math.stackexchange.com/questions/237300", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 4, "answer_id": 2 }
Given this transformation matrix, how do I decompose it into translation, rotation and scale matrices? I have this problem from my Graphics course. Given this transformation matrix: $$\begin{pmatrix} -2 &-1& 2\\ -2 &1& -1\\ 0 &0& 1\\ \end{pmatrix}$$ I need to extract translation, rotation and scale matrices. I also have the answer (which is $TRS$): $$T=\begin{pmatrix} 1&0&2\\ 0&1&-1\\ 0&0&1\end{pmatrix}\\ R=\begin{pmatrix} 1/\sqrt2 & -1/\sqrt2 &0 \\ 1/\sqrt2 & 1/\sqrt2 &0 \\ 0&0&1 \end{pmatrix}\\ S=\begin{pmatrix} -2\sqrt2 & 0 & 0 \\ 0 & \sqrt2 & 0 \\ 0& 0& 1 \end{pmatrix}$$ I just have no idea (except for the translation matrix) how I would get to this solution.
It appears you are working with Affine Transformation Matrices, which is also the case in the other answer you referenced, which is standard for working with 2D computer graphics. The only difference between the matrices here and those in the other answer is that yours use the square form, rather than a rectangular augmented form. So, using the labels from the other answer, you would have $$ \left[ \begin{array}{ccc} a & b & t_x\\ c & d & t_y\\ 0 & 0 & 1\end{array}\right]=\left[\begin{array}{ccc} s_{x}\cos\psi & -s_{x}\sin\psi & t_x\\ s_{y}\sin\psi & s_{y}\cos\psi & t_y\\ 0 & 0 & 1\end{array}\right] $$ The matrices you seek then take the form: $$ T=\begin{pmatrix} 1 & 0 & t_x \\ 0 & 1 & t_y \\ 0 & 0 & 1 \end{pmatrix}\\ R=\begin{pmatrix} \cos{\psi} & -\sin{\psi} &0 \\ \sin{\psi} & \cos{\psi} &0 \\ 0 & 0 & 1 \end{pmatrix}\\ S=\begin{pmatrix} s_x & 0 & 0 \\ 0 & s_y & 0 \\ 0 & 0 & 1 \end{pmatrix} $$ If you need help with extracting those values, the other answer has explicit formulae.
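Here is a sketch of the extraction in code, under the convention displayed above ($a=s_x\cos\psi$, $b=-s_x\sin\psi$, $c=s_y\sin\psi$, $d=s_y\cos\psi$); note the decomposition is only determined up to the sign flip $(s_x,s_y,\psi)\mapsto(-s_x,-s_y,\psi+\pi)$, which this sketch resolves by taking $s_y\ge0$:

    import math

    def decompose_trs(a, b, c, d, tx, ty):
        # assumes the linear part really has the stated form
        #   [[sx*cos(psi), -sx*sin(psi)],
        #    [sy*sin(psi),  sy*cos(psi)]]
        # and resolves the sign ambiguity by taking sy >= 0
        psi = math.atan2(c, d)                        # from (sy sin, sy cos)
        sx = a * math.cos(psi) - b * math.sin(psi)    # exact: cos^2 + sin^2 = 1
        sy = c * math.sin(psi) + d * math.cos(psi)    # equals hypot(c, d)
        return (tx, ty), psi, (sx, sy)

    # a matrix built with psi = 45 degrees, sx = -2*sqrt(2), sy = sqrt(2)
    print(decompose_trs(-2, 2, 1, 1, 2, -1))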
{ "language": "en", "url": "https://math.stackexchange.com/questions/237369", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "83", "answer_count": 2, "answer_id": 1 }
Limit:$ \lim\limits_{n\rightarrow\infty}\left ( n\bigl(1-\sqrt[n]{\ln(n)} \bigr) \right )$ I find it difficult to evaluate $$\lim_{n\rightarrow\infty}\left ( n\left(1-\sqrt[n]{\ln(n)} \right) \right )$$ I tried to use the fact that $$\frac{1}{1-n} \geqslant \ln(n)\geqslant 1+n$$ which gives $$\lim_{n\rightarrow\infty}\left ( n\left(1-\sqrt[n]{\ln(n)} \right) \right ) \geqslant \lim_{n\rightarrow\infty} n(1-\sqrt[n]{1+n}) =\lim_{n\rightarrow\infty}n \cdot\lim_{n\rightarrow\infty}(1-\sqrt[n]{1+n})$$ $$(1-\sqrt[n]{1+n})\rightarrow -1\Rightarrow\lim_{n\rightarrow\infty}\left ( n\left(1-\sqrt[n]{\ln(n)} \right) \right )\rightarrow-\infty$$ Is this correct? If not, what am I doing wrong?
Use Taylor! With $u=\frac{\log\log n}{n}\to0$, $$n(1-\sqrt[n]{\log n}) = n (1-e^{u}) = n\left(-u+O(u^2)\right) = - \log\log n + O\!\left(\frac{(\log\log n)^2}{n}\right)$$ which clearly tends to $-\infty$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/237446", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 4, "answer_id": 3 }
Prove sum is bounded I have the following sum: $$ \sum\limits_{i=1}^n \binom{i}{i/2}p^\frac{i}{2}(1-p)^\frac{i}{2} $$ where $p<\frac{1}{2}$. I need to prove that this sum is bounded, i.e. it doesn't go to infinity as $n$ goes to infinity.
And instead of an explicit bound, you may use Stirling's formula, which yields $\displaystyle {n \choose n/2} \sim \sqrt{2 / \pi} \cdot n^{-1/2} 2^n$ as $n \to \infty$ (through even $n$). Since $\binom{i}{i/2}\le 2^i$, the $i$-th term is at most $\left(2\sqrt{p(1-p)}\right)^i$, and $2\sqrt{p(1-p)} < 1$ precisely when $p \ne \frac12$; so for $p<\frac12$ the series is dominated by a convergent geometric series, and Stirling shows this bound is sharp up to the factor $\sqrt{2/\pi}\, i^{-1/2}$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/237495", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 2 }
is $0.\overline{99}$ the same as $\lim_{x \to 1} x$? So we had an interesting discussion the other day about 0.999... repeated to infinity, actually being equal to one. I understand the proof, but I'm wondering then if you had the function... $$ f(x) = x* \frac{(x-1)}{(x-1)} $$ so $$ f(1) = NaN $$ and $$ \lim_{x \to 1} f(x) = 1 $$ what would the following be equal to? $$ f(0.\overline{999}) = ? $$
If $0.\overline9=1$ then $f(0.\overline9)$ is as undefined as $f(1)$ is. However indeed $\lim_{x\to 1}f(x)=1$ as you said. The reason for the above is simple. If $a$ and $b$ are two terms, and $a=b$ then $f(a)=f(b)$, regardless to what $f$ is or what are the actual terms. Once you agreed that $0.\overline9=1$ we have to have $f(0.\overline9)=f(1)$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/237571", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 4, "answer_id": 2 }
4 Points on Circumference of Circle and center This is actually a computer science question in that I need to create a program that will determine the center of a circle given $4$ points on its circumference. Does anyone know the algorithm, theorem or method? I think it has something to do with cyclic quadrilaterals. Thanks.
The perpendicular bisector of a chord of a circle goes through the center of the circle. Therefore, if you have two chords, then the perpendicular bisectors intersect at exactly the center of the circle. Here is a picture of what I'm describing. So, given four points on the circle, draw chords between pairs of them, draw the perpendicular bisectors of the chords, and find the point of intersection.
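In code, expanding $|P-A|^2=|P-B|^2=|P-C|^2$ for the center $P=(u_x,u_y)$ gives two linear equations with the closed-form solution below; any three of your four points determine the center, and the fourth can serve as a consistency check (the function name is mine):

    def circumcenter(ax, ay, bx, by, cx, cy):
        # center (ux, uy) is equidistant from A, B, C; expanding
        # |P-A|^2 = |P-B|^2 = |P-C|^2 gives two linear equations
        d = 2 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
        ux = ((ax**2 + ay**2) * (by - cy) + (bx**2 + by**2) * (cy - ay)
              + (cx**2 + cy**2) * (ay - by)) / d
        uy = ((ax**2 + ay**2) * (cx - bx) + (bx**2 + by**2) * (ax - cx)
              + (cx**2 + cy**2) * (bx - ax)) / d
        return ux, uy

    print(circumcenter(1, 0, 0, 1, -1, 0))   # (0.0, 0.0): points on the unit circle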
{ "language": "en", "url": "https://math.stackexchange.com/questions/237652", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Generating function for binomial coefficients $\binom{2n+k}{n}$ with fixed $k$ Prove that $$ \frac{1}{\sqrt{1-4t}} \left(\frac{1-\sqrt{1-4t}}{2t}\right)^k = \sum\limits_{n=0}^{\infty}\binom{2n+k}{n}t^n, \quad \forall k\in\mathbb{N}. $$ I tried already by induction over $k$ but i have problems showing the statement holds for $k=0$ or $k=1$.
Due to a recent comment on my other answer, I took a second look at this question and tried to apply a double generating function. $$ \begin{align} &\sum_{n=0}^\infty\sum_{k=-n}^\infty\binom{2n+k}{n}x^ny^k\\ &=\sum_{n=0}^\infty\sum_{k=n}^\infty\binom{k}{n}\frac{x^n}{y^{2n}}y^k\\ &=\sum_{n=0}^\infty\frac{x^n}{y^{2n}}\frac{y^n}{(1-y)^{n+1}}\\ &=\frac1{1-y}\frac1{1-\frac{x}{y(1-y)}}\\ &=\frac{y}{y(1-y)-x}\\ &=\frac1{\sqrt{1-4x}}\left(\frac{1+\sqrt{1-4x}}{1+\sqrt{1-4x}-2y}-\frac{1-\sqrt{1-4x}}{1-\sqrt{1-4x}-2y}\right)\\ &=\frac1{\sqrt{1-4x}}\left(\frac{1+\sqrt{1-4x}}{1+\sqrt{1-4x}-2y}+\color{#C00000}{\frac{2x/y}{1+\sqrt{1-4x}-2x/y}}\right)\tag{1} \end{align} $$ The term in red contains those terms with negative powers of $y$. Eliminating those terms yields $$ \begin{align} \sum_{n=0}^\infty\sum_{k=0}^\infty\binom{2n+k}{n}x^ny^k &=\frac1{\sqrt{1-4x}}\frac{1+\sqrt{1-4x}}{1+\sqrt{1-4x}-2y}\\ &=\frac1{\sqrt{1-4x}}\sum_{k=0}^\infty\left(\frac{2y}{1+\sqrt{1-4x}}\right)^k\\ &=\frac1{\sqrt{1-4x}}\sum_{k=0}^\infty\left(\frac{1-\sqrt{1-4x}}{2x}\right)^ky^k\tag{2} \end{align} $$ Equating identical powers of $y$ in $(2)$ shows that $$ \sum_{n=0}^\infty\binom{2n+k}{n}x^n=\frac1{\sqrt{1-4x}}\left(\frac{1-\sqrt{1-4x}}{2x}\right)^k\tag{3} $$
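For anyone who wants to double-check $(3)$ symbolically, a small SymPy sketch comparing series coefficients for the first few $k$:

    from sympy import symbols, sqrt, series, binomial

    x = symbols('x')
    for k in range(4):
        f = 1 / sqrt(1 - 4*x) * ((1 - sqrt(1 - 4*x)) / (2*x))**k
        s = series(f, x, 0, 6).removeO()
        assert all(s.coeff(x, n) == binomial(2*n + k, n) for n in range(6))
    print("coefficients match binomial(2n+k, n)")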
{ "language": "en", "url": "https://math.stackexchange.com/questions/237810", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "19", "answer_count": 6, "answer_id": 1 }
Rolle's Theorem Let $f$ be a continuous function on $[a,b]$ and differentiable on $(a,b)$, where $a<b$. Suppose $f(a)=f(b)$. Prove that there exist numbers $c_{1},c_{2},...,c_{2012}$ $\in$ $(a,b)$ satisfying $c_{1} < c_{2} <...< c_{2012}$ and $f'(c_{1})+f'(c_{2})+...+f'(c_{2012})=0$. I believe it has something to do with Rolle's Theorem, judging by the hypotheses. However, I can't seem to find a way to tackle this problem. Any help is appreciated, thanks!
Hint: Given $n\in\mathbb{N}$, consider the function $g_n(x)=\sum_{k=0}^{n-1}f(a+\frac{(b-a)(x+k)}{n})$, $x\in[0,1]$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/237884", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 3, "answer_id": 2 }
Mathematical induction on an inequality: $2^n \ge 3n^2 +5$ for $n\ge8$ I want to prove $2^n \ge 3n^2 +5$--call this statement $S(n)$--for $n\ge8$ Basis step with $n = 8$, which gives $\text{LHS} \ge \text{RHS}$, so $S(8)$ is true. Then I proceed to the inductive step by assuming $S(k)$ is true, so $2^k \ge 3k^2 +5 $ Then $S(k+1)$ is $2^{k+1} \ge 3(k+1)^2 + 5$ I need to prove it, so I continue with $2^{k+1} \ge 3k^2 +5$ $2(2^k) \ge 2(3k^2 +5)$ I'm stuck here and don't know how to continue; please explain it to me step by step. I've searched for a whole day already; all the answers I found give the result without explanation, and I can't understand them. Sorry for the trouble.
The missing step (because there is indeed a missing step) is that $2\cdot(3k^2+5)\geqslant3(k+1)^2+5$. This inequality is equivalent to $3k^2-6k+2\geqslant0$, which obviously holds for every $k\geqslant8$ since $3k^2-6k+2=3k(k-2)+2$, hence you are done. The structure of the proof that $S(k)$ implies $S(k+1)$ for every $k\geqslant8$ is as follows: * *Assume that $S(k)$ holds, that is, $2^k\geqslant3k^2+5$, and that $k\geqslant8$. *Then $2^{k+1}=2\cdot2^k$ hence $2^{k+1}\geqslant2\cdot(3k^2+5)$. *Furthermore, $2\cdot(3k^2+5)\geqslant3(k+1)^2+5$ (the so-called missing step). *Hence $2^{k+1}\geqslant3(k+1)^2+5$, which is $S(k+1)$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/237958", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Help solving $\frac{1}{{2\pi}}\int_{-\infty}^{+\infty}{{e^{-{{\left({\frac{t}{2}} \right)}^2}}}{e^{-i\omega t}}dt}$ I need help with what seems like a pretty simple integral for a Fourier Transformation. I need to transform $\psi \left( {0,t} \right) = {\exp^{ - {{\left( {\frac{t}{2}} \right)}^2}}}$ into $\psi(0,\omega)$ by solving: $$ \frac{1}{{2\pi }}\int_{ - \infty }^{ + \infty } {\psi \left( {0,t} \right){e^{ - i\omega t}}dt} $$ So far I've written (using Euler's formula): $$\psi \left( {0,\omega } \right) = \frac{1}{{2\pi }}\int_{ - \infty }^{ + \infty } {\psi \left( {0,t} \right){e^{ - i\omega t}}dt} = \frac{1}{{2\pi }}\left( {\int_{ - \infty }^{ + \infty } {{e^{ - {{\left( {\frac{t}{2}} \right)}^2}}}\cos \omega tdt - i\int_{ - \infty }^{ + \infty } {{e^{ - {{\left( {\frac{t}{2}} \right)}^2}}}\sin \omega tdt} } } \right)$$ $$ \begin{array}{l} = \frac{1}{{2\pi }}\left( {{I_1} - i{I_2}} \right)\\ \end{array}$$ I just don't recall a way to solve this integrals by hand. Wolfram Alpha tells me that the result of the first integral is ${I_1} = 2\sqrt \pi {e^{ - {\omega ^2}}}$ and for the second $I_{2}=0$. But on my notes I have ${I_1} = 2\sqrt \pi {e^{ - {{\left( {{\omega ^2}/2} \right)}^2}}}$. Can anybody tell me how one can solve this type of integrals and if the result from Wolfram Alpha is accurate? Any help will be appreciated.
If you complete the square in the argument of the exponentials, $$ -\frac{1}{4}(t^2 + 4i \omega t) = -\frac{1}{4}(t^2+4i\omega t -4\omega^2) -\omega^2 = -\frac{1}{4}(t+2i\omega)^2-\omega^2. $$ After the change of variables $u=\frac{t}{2}+i\omega$ (so $dt = 2\,du$), the integral becomes $$2e^{-\omega^2}\int_{-\infty +i\omega}^{\infty +i\omega} e^{-u^2}\ du,$$ which, apart from the pesky factors of $i\omega$ in the bounds, is a standard Gaussian integral equal to $\sqrt{\pi}$. As a physicist, I'm inclined to just sweep these factors under the rug, but we can do better: if we form a closed contour in the complex plane along the paths $(-\infty,i0)\to(\infty,i0)\to(\infty,i\omega)\to(-\infty,i\omega)\to(-\infty,i0)$, the integrand kills the contributions from the segments moving in the imaginary direction, and the overall contour integral is zero since $e^{-u^2}$ is entire (it has no poles). So our integral is equal to the one along the real axis, namely $\sqrt{\pi}$. Altogether, $$\psi(0,\omega)=\frac{1}{2\pi}\cdot 2\sqrt{\pi}\,e^{-\omega^2}=\frac{e^{-\omega^2}}{\sqrt{\pi}},$$ i.e. $I_1 = 2\sqrt{\pi}\, e^{-\omega^2}$ and $I_2 = 0$, so the Wolfram Alpha result is the accurate one.
{ "language": "en", "url": "https://math.stackexchange.com/questions/238028", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
Is there a large time behaviour of (compound) Poisson processes similar to the law of the iterated logarithm for Brownian motion? Can someone please point me to a reference/answer me this question? From the law of the iterated logarithm, we see that Brownian motion with drift converges to $\infty$ or $-\infty$. For a Poisson process $N_t$ with rate $\lambda$, is there a similar thing? As an exercise, I am considering what happens if the Brownian motion exponential martingale (plus an additional drift $r$) is replaced by a Poisson one. More explicitly, that is $\exp(aN_t-(e^a-1)\lambda t + rt)$; how would this behave as $t$ tends to $\infty$?
Since $N_t/t\to\lambda$ almost surely, $X_t=\exp(aN_t-(e^a-1)\lambda t + rt)=\exp(\mu t+o(t))$ almost surely, with $\mu=(1+a-\mathrm e^a)\lambda+r$. If $\mu\ne0$, this yields that $X_t\to0$ or that $X_t\to+\infty$ almost surely, according to the sign of $\mu$. If $\mu=0$, the central limit theorem indicates that $N_t=\lambda t+\sqrt{\lambda t}Z_t$ where $Z_t$ converges in distribution to a standard normal random variable $Z$, hence $X_t=\exp(a\sqrt{\lambda t}Z_t)$ and $X_t$ diverges in distribution (except in the degenerate case $a=r=0$) in the sense that, for every positive $x\leqslant y$, $\mathbb P(X_t\leqslant x)\to\frac12$ and $\mathbb P(X_t\geqslant y)\to\frac12$, hence $\mathbb P(x\leqslant X_t\leqslant y)\to0$. Note: The LIL is a much finer result than all those above.
{ "language": "en", "url": "https://math.stackexchange.com/questions/238112", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Prove that if $g^2=e$ for all $g$ in $G$ then $G$ is Abelian. Prove that if $g^2=e$ for all $g$ in $G$ then $G$ is Abelian. This question is from group theory in Abstract Algebra and no matter how many times my lecturer teaches it for some reason I can't seem to crack it. (Please note that $e$ in the question is the group's identity.) Here's my attempt though... First I understand Abelian means that if $g_1$ and $g_2$ are elements of a group $G$ then they are Abelian if $g_1g_2=g_2g_1$... So, I begin by trying to play around with the elements of the group based on their definition... $$(g_2g_1)^r=e$$ $$(g_2g_1g_2g_2^{-1})^r=e$$ $$(g_2g_1g_2g_2^{-1}g_2g_1g_2g_2^{-1}...g_2g_1g_2g_2^{-1})=e$$ I assume that the $g_2^{-1}$'s and the $g_2$'s cancel out so that we end up with something like, $$g_2(g_1g_2)^rg_2^{-1}=e$$ $$g_2^{-1}g_2(g_1g_2)^r=g_2^{-1}g_2$$ Then ultimately... $$g_1g_2=e$$ I figure this is the answer. But I'm not totally sure. I always feel like I do too much in the pursuit of an answer when there's a simpler way. Reference: Fraleigh p. 49 Question 4.38 in A First Course in Abstract Algebra.
Given $g^2=e$ for all $g\in G$, we get $g=g^{-1}$ for all $g\in G$. Let $a,b\in G$. Now $ab=a^{-1}b^{-1} =(ba)^{-1} =ba$. So $ab=ba$ for all $a,b\in G$. Hence $G$ is an abelian group.
{ "language": "en", "url": "https://math.stackexchange.com/questions/238171", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "47", "answer_count": 14, "answer_id": 4 }
Basis for this $\mathbb{P}_3$ subspace. Just had an exam where the last question was: Find a basis for the subset of $\mathbb{P}_3$ where $p(1) = 0$ for all $p$. I answered $\{t,t^2-1,t^3-1\}$, but I'm not entirely confident in the answer. Did I think about the question in the wrong way?
Another way to get to the answer: $P_3=\{ax^3+bx^2+cx+d : a,b,c,d \in \mathbf{R}\}$. For $p(x)=ax^3+bx^2+cx+d$ in $P_3$, $p(1)=0$ is $$a+b+c+d=0$$ So, you have a "system" of one linear equation in 4 unknowns. Presumably, you have learned how to find a basis for the vector space of all solutions to such a system, or, to put it another way, a basis for the nullspace of the matrix $$\pmatrix{1&1&1&1\cr}$$ One such basis is $$\{(1,-1,0,0),(1,0,-1,0),(1,0,0,-1)\}$$ which corresponds to the answer $$\{x^3-x^2,x^3-x,x^3-1\}$$ one of an infinity of correct answers to the question.
{ "language": "en", "url": "https://math.stackexchange.com/questions/238220", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
What is the Cumulative Distribution Function of the following random variable? Suppose that we have $2n$ iid random variables $X_1,\dots,X_n,Y_1,\dots,Y_n$ where $n$ is a large number. I want to find $P\left(k\sum_i X_iY_i+\left(\sum_i X_i\right)\left(\sum_j Y_j\right)<c\right)$ for any integer $c$. Since $n$ is a large number and all the random variables are iid, using the central limit theorem we can say that $k\sum_i X_iY_i$, $\sum_i X_i$ and $\sum_j Y_j$ are approximately normal random variables, and $\left(\sum_i X_i\right)\left(\sum_j Y_j\right)$ is the product of two normal random variables, which would have a normal product distribution. So $k\sum_i X_iY_i+\left(\sum_i X_i\right)\left(\sum_j Y_j\right)$ is the sum of one normal and one normal-product random variable, which are dependent. Now the question is: how can we find $P\left(k\sum_i X_iY_i+\left(\sum_i X_i\right)\left(\sum_j Y_j\right) \le c\right)$ for any integer $c$?
$$Z = \sum_{i=1}^n \sum_{j=1}^n X_i Y_j = \left(\sum_{i=1}^n X_i\right)\left(\sum_{j=1}^n Y_j\right)$$ If $n$ is large, $S_X = \sum_i X_i$ and $S_Y = \sum_j Y_j$ are approximately normal. They have means $n\mu$ and standard deviations $\sqrt{n} \sigma$ where each $X_i$ and $Y_j$ have mean $\mu$ and standard deviation $\sigma$. Of course they are independent. Thus $E[Z] = E[S_X] E[S_Y] = n^2 \mu^2$ and $E[Z^2] = E[S_X^2] E[S_Y^2] = (n^2 \mu^2 + n \sigma^2)^2$, so the variance of $Z$ is $\text{Var}(Z) = E[Z^2] - E[Z]^2 = n^2 \sigma^4 + 2 n^3 \sigma^2 \mu^2$. The product of independent normal random variables with means $n\mu$ and standard deviations $\sqrt{n}\,\sigma$ has, according to Maple, moment generating function $$ M_Z(t) = E[e^{tZ}] = \frac{1}{\sqrt{1 - n^2 \sigma^4 t^2}} \exp\left(\frac{n^2 \mu^2 t}{1 - n \sigma^2 t}\right)$$ for $t < 1/(n \sigma^2)$. EDIT: If $\mu \ne 0$, it would be better to separate out the effect of the mean. So let $X_i = \mu + \sigma U_i$ and $Y_i = \mu + \sigma V_i$, where $U_i$ and $V_i$ have mean $0$ and standard deviation $1$. Then $$Z = n^2 \mu^2 + n \mu \sigma \sum_{i=1}^n (U_i + V_i) + \sigma^2 \sum_{i=1}^n \sum_{j=1}^n U_i V_j$$ Now $n \mu \sigma \sum_{i=1}^n (U_i + V_i)$ is approximately normal with mean $0$ and standard deviation $\sqrt{2} n^{3/2} \mu \sigma$, while $\sigma^2 \sum_{i=1}^n \sum_{j=1}^n U_i V_j$ has mean $0$ and standard deviation $n \sigma^2$. For large $n$ this term is negligible compared to the $n^{3/2}$ term. So a good approximation to the distribution of $Z$ is normal with mean $n^2 \mu^2$ and standard deviation $\sqrt{2} n^{3/2} \mu \sigma$. You asked about $(k-1) \sum_i X_i Y_i+ Z$: call this $(k-1) T + Z$. If we separate out the effect of the mean, $$T = n \mu^2 + \mu \sigma \sum_{i=1}^n (U_i + V_i) + \sigma^2\sum_{i=1}^n U_i V_i$$ where $\mu \sigma \sum_{i=1}^n (U_i + V_i)$ has mean $0$ and standard deviation $\sqrt{2n} \mu \sigma$ and $\sigma^2 \sum_{i=1}^n U_i V_i$ has mean $0$ and standard deviation $\sqrt{n} \sigma^2$. Again, these terms are negligible compared to the $n^2$ and $n^{3/2}$ terms.
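These moment formulas are easy to sanity-check by simulation; a sketch (the normal law, seed and sample size are arbitrary choices here, since only the first two moments of the $X_i$, $Y_j$ matter):

    import numpy as np

    rng = np.random.default_rng(0)
    n, mu, sigma, reps = 50, 1.5, 2.0, 100000
    X = rng.normal(mu, sigma, size=(reps, n))
    Y = rng.normal(mu, sigma, size=(reps, n))
    Z = X.sum(axis=1) * Y.sum(axis=1)

    print(Z.mean(), n**2 * mu**2)                                  # ~ n^2 mu^2
    print(Z.var(), n**2 * sigma**4 + 2 * n**3 * sigma**2 * mu**2)  # ~ Var(Z)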
{ "language": "en", "url": "https://math.stackexchange.com/questions/238337", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
In the ring $\mathbb Z[i]$ explain why our four units $\pm1$ and $\pm i$ divide every $u\in\mathbb Z[i]$. In the ring $\mathbb Z[i]$ explain why our four units $\pm1$ and $\pm i$ divide every $u\in\mathbb Z[i]$. This is obviously a elementary question, but Gaussian integers are relatively new to me. I found this exercise in the textbook, and my professor overlooked it, but i'm curious. Is this basically the same thing as, for a lack of a better term, "normal" integers? As in, $\pm1$ divides everything?
As important as it is to understand why the units divide everything, it's also important to understand why those are the only units (as they are probably being referred to as "the units" in class). Let $z=a+bi$. Then we can define the norm of $z$ to be $N(z)=|z|^2=z\overline z = a^2+b^2$. Note, some people call $|z|$ the norm, and I'm not entirely sure which is more standard. Hopefully no confusion will arise here. The norm satisfies two important properties. First, $N(xy)=N(x)N(y)$, and second, if $z$ is a Gaussian integer, then $N(z)$ is an integer. Because of the first property, if $z$ has an inverse in the Gaussian integers, then $1=N(1)=N(z z^{-1})=N(z)N(z^{-1})$, and so $N(z^{-1})=N(z)^{-1}$. The only numbers such that $x$ and $x^{-1}$ are both integers are $\pm 1$, and since the norm is always non-negative (being the sum of square of real numbers), we just have to solve the equation $$ N(z)=a^2+b^2=1. $$ Since every non-zero square of an integer is at least $1$, this has no solutions other than $z=\pm 1, \pm i$. We are not done yet because we have only shown that if $z$ has an inverse then $N(z)=1$. We must check that each of these solutions is actually invertible. However, for any nonzero complex number, we have $z^{-1}=\frac{\overline z}{N(z)}$, so $$(1)(1)=(-1)(-1)=(i)(-i)=1,$$ and so the inverses of these Gaussian integers are still integers. By definition, an element $r$ of a ring is called a unit if there exists an $s$ such that $rs=sr=1$. In this case, $s$ is called the inverse of $r$, and is unique when it exists, so it is often written as $r^{-1}$. What we have shown so far is that the only units in $\mathbb Z[i]$ are $\pm 1, \pm i$. As Gerry Myerson wrote, if $u$ is a unit, then for any $r$ in the ring, we can write $r=1r=(uu^{-1})r=u(u^{-1}r)$. Therefore, $r$ is divisible by $u$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/238406", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 2 }
When is a factorial of a number equal to its triangular number? Consider the set of all natural numbers $n$ for which the following proposition is true. $$\sum_{k=1}^{n} k = \prod_{k=1}^{n} k$$ Here's an example: $$\sum_{k=1}^{3}k = 1+2+3 = 6 = 1\cdot 2\cdot 3=\prod_{k=1}^{3}k$$ Therefore, $3$ is in this set. Does this set include more than just $3$? If so, is this set finite or infinite? Furthermore, can this set be described by a rule or formula? [Just a tidbit: This question indicates the triangular number $1+2+3+\cdots+n$ is called the termial of $n$ and is denoted $n?$. I'm all for it; let's see if it catches on.] [Another tidbit: the factorial of $n$, written $n!$ and called "$n$-factorial," is abbreviated "$n$-bang" in spoken word.]
The other answers are fine but it shouldn't be necessary to actually carry out the induction to see that the solution set is finite: the triangular numbers grow quadratically and factorials grow super-exponentially. A more interesting problem would be: how many solutions $(m,n)$ are there to $\sum_{k=1}^{m}{k} = \prod_{k=1}^{n}{k}$? In addition to $(1,1)$ and $(3,3)$ we also have $(15,5)$.
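A brute-force search (a sketch; the cutoffs are arbitrary) makes the scarcity concrete and confirms no further coincidences in this range:

    from math import factorial

    facts = {factorial(n): n for n in range(1, 20)}
    for m in range(1, 10**6):
        t = m * (m + 1) // 2
        if t in facts:
            print(m, facts[t])   # prints (1, 1), (3, 3), (15, 5)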
{ "language": "en", "url": "https://math.stackexchange.com/questions/238462", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 2 }
numbers' pattern It is known that $$\begin{array}{ccc}1+2&=&3 \\ 4+5+6 &=& 7+8 \\ 9+10+11+12 &=& 13+14+15 \\ 16+17+18+19+20 &=& 21+22+23+24 \\ 25+26+27+28+29+30 &=& 31+32+33+34+35 \\ \ldots&=&\ldots \end{array}$$ There is something similar for square numbers: $$\begin{array}{ccc}3^2+4^2&=&5^2 \\ 10^2+11^2+12^2 &=& 13^2+14^2 \\ 21^2+22^2+23^2+24^2 &=& 25^2+26^2+27^2 \\ \ldots&=&\ldots \end{array}$$ As such, I wonder if there are similar 'consecutive numbers' identities for cubic or higher powers. Of course, we know it is impossible for the following to hold (by Fermat's Last Theorem): $$k^3+(k+1)^3=(k+2)^3 $$
Let's start by proving the basic sequences and then where and why trying to step it up to cubes fails. I don't prove anything just reduce the problem to a two variable quartic Diophantine equation. Lemma $1 + 2 + 3 + 4 + \ldots + n = T_1(n) = \frac{n(n+1)}{2}$. Corollary $(k+1) + (k+2) + \ldots + (k+n) = -T_1(k) + T_1(k+n)$ The first sequence of identities is $$-T_1(s(n)) + T_1(s(n)+n+1) = -T_1(s(n)+n+1) + T_1(s(n)+2n+1)$$ so computing ? f(x) = (x*(x+1))/2 ? (-f(s)+f(s+n+1))-(-f(s+n+1)+f(s+2*n+1)) % = -n^2 + (s + 1) we find $s(n) = n^2-1$ and prove it. Lemma $1^2 + 2^2 + 3^2 + 4^2 + \ldots + n^2 = T_2(n) = \frac{n(n+1)(2n+1)}{6}$. The second sequence of identities is $$-T_2(s(n)) + T_2(s(n)+n+1) = -T_2(s(n)+n+1) + T_2(s(n)+2n+1)$$ so computing ? f(x) = (x*(x+1)*(2*x+1))/6 ? (-f(s-1)+f(s-1+n+1))-(-f(s-1+n+1)+f(s-1+2*n+1)) % = -2*n^3 + (-2*s - 1)*n^2 + s^2 this is a weird quadratic equation in two integers with some solutions (n,s) = (1,3), (2,10), (3,21), (4,36), (5,55), (6,76), ... the discriminant of the polynomial (as a polynomial in $s$) is $2^2 n^2 (n+1)^2$ so actually we can solve it and that explains where there's one solution for each $n$. Now lets try for cubes.. but at this point we know it's not going to work ? f(x) = ((x^2+x)/2)^2 ? (-f(s-1)+f(s-1+n+1))-(-f(s-1+n+1)+f(s-1+2*n+1)) % = -7/2*n^4 + (-6*s - 3)*n^3 + (-3*s^2 - 3*s - 1/2)*n^2 + s^3 so this is too complicated to actually solve but if anyone proves this doesn't have solutions for positive $n$ that will show there are no such cubic sequences. For reference $$7n^4 + (12s + 6)n^3 + (6s^2 + 6s + 1)n^2 - 2s^3 = 0$$ is the Diophantine equation that obstructs a cubic sequence from existing. Maybe you could conclude by the Mordell Conjecture that there's no infinite family of sequences of identities for cubic and higher power sums, if you can show these polynomials are always irreducible.
{ "language": "en", "url": "https://math.stackexchange.com/questions/238532", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 3, "answer_id": 2 }
About the asymptotic behaviour of $\sum_{n\in\mathbb{N}}\frac{x^{a_n}}{a_n!}$ Let $\{a_n\}_{n\in\mathbb{N}}$ be an increasing sequence of natural numbers, and $$ f_A(x)=\sum_{n\in\mathbb{N}}\frac{x^{a_n}}{a_n!}. $$ There are some cases in which the limit $$ l_A=\lim_{x\to+\infty} \frac{1}{x}\,\log(f_A(x)) $$ does not exist. However, if $\{a_n\}_{n\in\mathbb{N}}$ is an arithmetic progression, we have $l_A=1$ (it follows from a straightforward application of the discrete Fourier transform). Consider now the case $a_n=n^2.$ * *Is it true that there exists a positive constant $c$ for which $$\forall x>0,\quad e^{-x}f_A(x)=\sum_{k\in\mathbb{N}}x^k\left(\sum_{0\leq j\leq\sqrt{k}}\frac{(-1)^{k-j^2}}{(j^2)!\,(k-j^2)!}\right)\geq c\;?$$ *Is it true that $l_A=1$?
It is true that $l_A=1$. The logic is similar to my answer to $\lim\limits_{x\to\infty}f(x)^{1/x}$ where $f(x)=\sum\limits_{k=0}^{\infty}\cfrac{x^{a_k}}{a_k!}$. Firstly, the terms after $n=3x$ don't matter: $$\sum_{n=3x}^\infty x^n/n! <\sum_{n=3x}^\infty x^n /(3x/e)^n=C$$ by the Stirling approximation. But for $n<3x$ there is going to be a perfect square $s$ between $n$ and $n+2\sqrt{3x}$ (this is just $(k+1)^2-k^2=2k+1$ and $k< \sqrt{3x}$). Then the values $x^s/s!$ and $x^n/n!$ differ by a factor of at most $(3x)^{2\sqrt{3x}}$. So if I multiply each term $x^s/s!$ by that ratio and take at least $2\sqrt{3x}$ copies, I will have for each $x^n/n!$ (with $n<3x$) at least one term at least as big. This means that $$f_A(x) \cdot (3x)^{2\sqrt{3x}}\cdot 2\sqrt{3x} +C > e^x$$ and so $l_A=1$. (I think the ratio of terms can actually be made $3^{2\sqrt{3x}}$, but it works as is too.)
{ "language": "en", "url": "https://math.stackexchange.com/questions/238615", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 3, "answer_id": 1 }
Plotting an integral of a function in Octave I try to integrate a function and plot it in Octave. Integration itself works, i.e. I can evaluate the function g like g(1.5) but plotting fails. f = @(x) ( (1) .* and((0 < x),(x <= 1)) + (-1) .* and((1 <x),(x<=2))); g = @(x) (quadcc(f,0,x)); x = -1.0:0.01:3.0; plot(x,g(x)); But receive the following error: quadcc: upper limit of integration (B) must be a single real scalar As far as I can tell this is because the plot passes a vector (namely x) to g which passes it down to quadcc which cannot handle vector arguments for the third argument. So I understand what's the reason for the error but have no clue how to get the desired result instead. N.B. This is just a simplified version of the real function I use, but the real function is also constant on a finite set of intervals ( number of intervals is less than ten if that matters). I need to integrate the real function 3 times in succession (f represents a jerk and I need to determine functions for acceleration, velocity and distance). So I cannot compute the integrals by hand like I could in this simple case.
You could use cumtrapz instead of quadcc: cumtrapz(x, f(x)) returns the cumulative (running) trapezoidal integral over the whole sample grid, so it accepts your vector x directly and you avoid calling a scalar quadrature routine once per plot point. Since your integrand is piecewise constant, the trapezoidal rule is essentially exact here (apart from the grid intervals straddling the jumps), and you can feed the output of one cumtrapz straight into the next to get acceleration, velocity and distance in succession.
{ "language": "en", "url": "https://math.stackexchange.com/questions/238697", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
$M : K$ need not be radical Show that $M : K$ need not be radical, where $L : K$ is a radical extension in $\mathbb{C}$ and $M$ is an intermediate field.
Let $K = \Bbb Q$ and $M$ be the splitting field of $X^3 - 3X + 1 \in \Bbb Q[X]$. $M$ can be embedded into $\Bbb R$, so it is not a radical extension by casus irreducibilis. However, $X^3 - 3X + 1$ has a solvable Galois group $C_3$ (the cyclic group of order $3$), so $M$ can be embedded into some field $L$ that is radical over $K = \Bbb Q$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/238748", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Tangent Spaces and Morphisms of Affine Varieties In page 205 of "Algebraic Curves, Algebraic Manifolds and Schemes" by Shokurov and Danilov, the tangent space $T_x X$ of an affine variety $X$ at a point $x \in X$ is defined as the subspace of $K^n$, where $K$ is the underlying field, such that $\xi \in T_x X$ if $(d_x g)(\xi)=0$ for any $g \in I$, where $I$ is the ideal of $K[T_1,\cdots,T_n]$ that defines $X$ and by definition $(d_x g)(\xi)=\sum_{i=1}^n \frac{\partial g}{\partial T_i}(x) \xi_i$, where partial derivatives are formal. So far so good. Next, it is mentioned, that if $f:X \rightarrow Y$ is a morphism of affine varieties, then we obtain a well-defined map $d_x f : T_x X \rightarrow T_{f(x)} Y$. How is this mapped defined and why is it well-defined?
If $X\subset K^n$ and $Y\subset K^m$ are affine subvarieties, the map $f:X\to Y$ is the restriction of some polynomial map $F:K^n\to K^m: x\mapsto (F_1(x),...,F_m(x))$, where the $F_i$'s are polynomials $F_i\in K[T_1,...,T_n]$. The map $d_x f : T_x X \rightarrow T_{f(x)} Y$ is the restriction to $T_x(X)$ of the linear map given by the Jacobian $$d_xF=Jac(F)(x)=(\frac {\partial F_i}{\partial X_j}(x)):K^n\to K^m$$ [The subspace $T_xX \subset K^n$ is the set of solutions of the humongous (but extremely redundant!) system of linear equations $\Sigma \frac {\partial g}{\partial X_j}(x)\xi_j=0$ where $g$ runs through $I(X)$] The only thing to check is that we have in $K^m$, writing $y=f(x)$: $$(d_xf)(T_xX)\subset T_y(Y) $$ This means that we must show that $$(d_yh)(d_xf(v))=0 \quad (?)$$ for all $v\in T_xX$ and all $h\in I(Y)$. This follows from the following two facts: a) For all $h\in I(Y)$ we have $h\circ F\in I(X)$, since $F$ maps $X$ into $Y$. b) Functoriality of the differential: $d_x(h\circ F)=d_yh\circ d_xF$. And now if $v\in T_xX$ we can write $$(d_yh)(d_xf(v))=(d_yh)(d_xF(v))=d_x(h\circ F)(v)=0$$ since $h\circ F\in I(X)$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/238868", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 4, "answer_id": 0 }
proof: set countable iff there is a bijection In class we had the following definition of a countable set: A set $M$ is countable if there is a bijection between $\mathbb N$ and $M$. In our exam today, we had the following thesis given: If $A$ is a countable set, then there is a bijection $\mathbb N\rightarrow A$. So I am really not sure if the thesis, and therefore the equivalence in the definition, is right. So is it correct? And how do you prove it? Thanks a lot!
Suppose that $A$ is countable by your definition; then there is a bijection $f:\Bbb N\to A$. Because $f$ is a bijection, $f^{-1}$ is also a bijection, so it’s the desired bijection from $A$ to $\Bbb N$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/238934", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
Exponents in Odd and Even Functions I was hoping someone could show or explain why it is that a function of the form $f(x) = ax^d + bx^e + cx^g+\cdots $ going on for some arbitrary length will be an odd function assuming $d, e, g$ and so on are all odd numbers, and likewise why it will be even if $d, e, g$ and so on are all even numbers. Furthermore, why is it if say, $d$ and $e$ are even but $g$ is odd that $f(x)$ will then become neither even nor odd? Thanks.
If the exponents are all odd, then $f(x)$ is the sum of odd functions, and hence is odd. If the exponents are all even, then $f(x)$ is the sum of even functions, and hence is even. As far as your last question, the sum of an odd function and an even function (neither identically zero) is neither even nor odd. Proof: Sum of Odd Functions is Odd: Given two odd functions $f$ and $g$, we have $f(-x) = -f(x)$ and $g(-x) = -g(x)$. Hence: \begin{align*} f(-x) + g(-x) &= -f(x) - g(x) \\ &= -(f+g)(x) \\ \implies (f+g) & \text{ is odd if $f$ and $g$ are odd.} \end{align*} Proof: Sum of Even Functions is Even: Given two even functions $f$ and $g$, we have $f(-x) = f(x)$ and $g(-x) = g(x)$. Hence: \begin{align*} f(-x) + g(-x) &= f(x) + g(x) \\ \implies (f+g) &\text{ is even if $f$ and $g$ are even.} \end{align*} Proof: Sum of an odd function and an even function is neither odd nor even. If $f$ is odd and $g$ is even, \begin{align*} f(-x) + g(-x) &= -f(x) + g(x) = -(f-g)(x). \end{align*} If $f+g$ were even, this would have to equal $(f+g)(x)$, forcing $f(x)=0$ for all $x$; if $f+g$ were odd, it would have to equal $-(f+g)(x)$, forcing $g(x)=0$ for all $x$. So as long as neither $f$ nor $g$ is identically zero, $(f+g)$ is neither odd nor even.
{ "language": "en", "url": "https://math.stackexchange.com/questions/238980", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 1 }
Construction of an integrable function with a function in $L^2$ I have this really simple question, but I cannot figure out the answer. Suppose that $f\in L^2([0,1])$. Is it true that $f/x^5$ will be in $L^1([0,1])$? Thanks! Edit: I was interested in $f/x^{1/5}$.
The statement $$f \in L^2([0,1])\Rightarrow fx^{-\frac{1}{5}} \in L^1([0,1])$$ is true. It is an easy application of Hölder inequality. Infact, by hypothesis $f\in L^2([0,1])$; on the other hand, we have $g(x)=x^{-1/5}\in L^2([0,1])$ so by Hölder (indeed, this is the case "Cauchy-Schwarz") $$ \Vert fg \Vert_1 \le \Vert f\Vert_2 \Vert g \Vert_2 $$ hence $fg\in L^1$ (and you get also an upper bound for its $L^1$-norm).
{ "language": "en", "url": "https://math.stackexchange.com/questions/239013", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
The dimension of the component of a variety Mumford claimed the following result: If $X$ is an $r$-dimensional variety and $f_{1},...,f_{k}$ are polynomial functions on $X$. Then every component of $X\cap V(f_{1},...,f_{k})$ has dimension $\ge r-k$. He suggested this is a simple corollary of the following result: Assume $\phi:X^{r}\rightarrow Y^{s}$ is a dominating regular map of affine varieties. Then for all $y\in Y$, the dimension of components of $\phi^{-1}(y)$ is at least $r-s$. However it is not clear to me how the two statements are connected. Let $Y=X- V(f_{1},...,f_{k})$ where the dominating regular map is the evaluating map by $f_{i}$. Then $Y$ should have dimension at least $r-k$ by the definition. So in particular the components of $o$'s preimage should have dimension at most $r-(r-k)=k$, and least $r-r=0$. I feel something is wrong in my reasoning and so hopefully someone can correct me. I realized something may be wrong in my reasoning: Notice two extremes $\dim(Y)$ can be by letting $f_{i}$ be one of the generating polynomials of $\mathscr{U},X=V(\mathscr{U})$, in this case $\dim(Y)=0$. And on the other hand if $f_{i}$ does not vanish at all on $X$, then $\dim(Y)=r$. So $Y$ seems not a good choice as it is insensitive to the value to $k$. However, a reverse way of reasoning might be possible by using $Y=X\cap V(f_{1},..f_{i},f_{k})$ while claiming the component can only have dimension at most $k$. Then by inequality we have the desired result. But I do not see how $X\rightarrow Y$ by quotient map to be a regular map.
To deduce the corollary, let $Y$ be $k$-dimensional affine space, and let $\phi$ be the map sending a point $x \in X$ to $(f_1(x), f_2(x), \ldots, f_k(x))$. Then $X \cap V(f_1, \ldots, f_k)$ is the preimage $\phi^{-1}(0,0,\ldots,0)$, hence, by the result, its dimension is at least $r-k$. (The fact that $\phi$ might not be dominating is unimportant: We can always replace $Y$ with the image of $\phi$.)
{ "language": "en", "url": "https://math.stackexchange.com/questions/239122", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Showing that $|||A-B|||\geq \frac{1}{|||A^{-1}|||}$? $A,B\in M_n$, $A$ is non-singular and $B$ is singular. If $|||\cdot|||$ is any matrix norm on $M_n$, how can one show that $|||A-B||| \geq \frac{1}{|||A^{-1}|||}$? The hint is to let $B=A[I-A^{-1}(A-B)]$, but I don't know how to use it. I appreciate any help! Update: it is $\geq$, not $\leq$. Sorry!
Let's sharpen the hint to $A^{-1}B = I - A^{-1}(A-B)$. First you should check that this identity is correct. Now pick any $v$ such that $Bv = 0$ and $\|v \| = 1$. By assumption, such a $v$ exists. Apply both sides of the identity, play around with it, take norms, see if you can get something that resembles the statement that you want to prove. Remember the definition of a matrix norm: $|||C||| = \sup_{\|x\| = 1} \|Cx\|$. There is also a formula that relates $|||CD|||$ to $|||C|||$ and $|||D|||$. Check your notes and try to use it.
{ "language": "en", "url": "https://math.stackexchange.com/questions/239230", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Basis, dense subset and an inequality Let $V \subset H$, where $V$ is separable in the Hilbert space $H$. So there is a basis $w_i$ in $V$ such that, for each $m$, $w_1, ..., w_m$ are linearly independent and the finite linear combinations are dense in $V$. Let $y \in H$, and define $y_m = \sum_{i=1}^m a_{im}w_i$ such that $y_m \to y$ in $H$ as $m \to \infty$. Then, why is it true that $\lVert y_m \rVert_H \leq C\lVert y \rVert_H$? I think if the $w_i$ were orthonormal this is true, but they're not. So how to prove this statement?
It is not true. Choose $y=0$ and $a_{1,m} = \frac{1}{m}$. Then $y_m \to y$, but it is never the case that $\|y_m\| \leq C \|y\|$. Elaboration: This is because $\|y_m\| = \frac{1}{m} \|w_1\|$; the $w_i$ are linearly independent, hence non-zero. Hence $\|y_m\| = \frac{1}{m} \|w_1\| > 0$ for all $m$. There is no choice of $C$ that will satisfy the inequality $\|y_m\| \leq C \|y\|$. The above is true even if the $w_i$ are orthonormal. (I think you need to be more explicit about your choice of $a_{im}$. A 'nice' choice would be to let $y_m$ be the closest point to $y$ in $\text{sp}\{w_i\}_{i=1}^m$. This is what the first part of the answer by Berci does below.)
{ "language": "en", "url": "https://math.stackexchange.com/questions/239300", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Tate $p$-nilpotent theorem Tate $p$-nilpotent Theorem. If $P$ is a Sylow $p$-subgroup of $G$ and $N$ is a normal subgroup of $G$ such that $P \cap N \leq \Phi (P)$, then $N$ is $p$-nilpotent. My question is the following: If $P \cap N \leq \Phi (P)$ for only one Sylow p-subgroup of $G$, is $N$ $p$-nilpotent? Remark: $G$ may have more than one Sylow for the prime $p$.
That situation is not possible. Let $P$ be a Sylow $p$-subgroup such that $P \cap N \leqslant \Phi(P)$ and consider $Q\cap N$ for another Sylow $p$-subgroup $Q$. We have that there is a $g$ so that $P^g=Q$, and since $N$ is normal, $$(P\cap N)^g=P^g\cap N^g=P^g \cap N=Q\cap N\leq \Phi(P)^g=\Phi(P^g)=\Phi(Q).$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/239358", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Solve the recurrence relation: $T(n) = \sqrt{n} T \left(\sqrt n \right) + n$ $$T(n) = \sqrt{n} T \left(\sqrt n \right) + n$$ The master method does not apply here, and the recursion tree goes a long way, so the iteration method would be preferable. The answer is $\Theta (n \log \log n)$. Can anyone arrive at the solution?
Let $n = m^{2^k}$. We then get that $$T(m^{2^k}) = m^{2^{k-1}} T (m^{2^{k-1}}) + m^{2^{k}}.$$ Writing $f_m(k) := T(m^{2^k})$, this becomes \begin{align} f_m(k) & = m^{2^{k-1}} f_m(k-1) + m^{2^k} = m^{2^{k-1}}(m^{2^{k-2}} f_m(k-2) + m^{2^{k-1}}) + m^{2^k}\\ & = 2 m^{2^k} + m^{3 \cdot 2^{k-2}} f_m(k-2) \end{align} $$m^{3 \cdot 2^{k-2}} f_m(k-2) = m^{3 \cdot 2^{k-2}} (m^{2^{k-3}} f_m(k-3) + m^{2^{k-2}}) = m^{2^k} + m^{7 \cdot 2^{k-3}} f_m(k-3)$$ Hence, $$f_m(k) = 2 m^{2^k} + m^{3 \cdot 2^{k-2}} f_m(k-2) = 3m^{2^k} + m^{7 \cdot 2^{k-3}} f_m(k-3)$$ In general, it is not hard to see that $$f_m(k) = \ell m^{2^k} + m^{(2^{\ell}-1)2^{k-\ell}} f_m(k-\ell)$$ Here $\ell$ can go up to $k$; taking $f_m(0) = T(m) = m$ as the base case, this gives us $$f_m(k) = km^{2^k} + m^{(2^{k}-1)} f_m(0) = km^{2^k} + m^{(2^{k}-1)} m^{2^0} = (k+1) m^{2^k}$$ This gives us $$f_m(k) = (k+1) m^{2^k} = n \left(\log_2(\log_m n) + 1\right) = \mathcal{O}(n \log_2(\log_2 n))$$ since $$n=m^{2^k} \implies \log_m(n) = 2^k \implies \log_2(\log_m(n)) = k$$
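To see the asymptotics numerically, here is a rough Python sketch (the base case $T(n)=n$ for $n\le 2$ is my own choice, mirroring $f_m(0)=m$ above; inputs of the form $2^{2^k}$ keep the recursion exact in floating point):

```python
import math

def T(n):
    if n <= 2:
        return n                     # assumed base case T(n) = n
    return math.sqrt(n) * T(math.sqrt(n)) + n

for k in range(2, 8):
    n = 2.0 ** (2 ** k)              # n = 2^(2^k), so log2(log2(n)) = k
    print(k, T(n) / (n * math.log2(math.log2(n))))   # ratio (k+1)/k -> 1
```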
{ "language": "en", "url": "https://math.stackexchange.com/questions/239402", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "13", "answer_count": 5, "answer_id": 0 }
Straightening the boundary in concrete examples Let $\Omega \subset \mathbb{R}^d$ be open and with $C^1$ boundary $\Gamma$. For any given point $x_0 \in \Gamma$ we know there's a neighborhood where $\Gamma$ is the graph of some $C^1$ function $\gamma : \mathbb{R}^{d - 1} \longrightarrow \mathbb{R}, x' \longmapsto \gamma ( x') = x_d$. We can use it to straighten the boundary with the local diffeomorphism $$ T ( x', x_d) = ( x', x_d - \gamma ( x')), $$ and its differential $D T$ has a nice $( d - 1) \times ( d - 1)$ identity matrix as first block and a bottom row $\nabla T_d = ( - \nabla \gamma, 1)$ which is proportional to the vector $\vec{n}$ normal to $\Gamma$ at each point, say $c ( x) \vec{n} ( x) = \nabla T_d ( x)$, where $c ( x) = - \| \nabla T_d ( x) \|$. For my calculations in concrete examples with parametrized domains, etc., I want $\nabla T_d$ to actually be the outward pointing normal: I need this $c ( x)$ to be $- 1$. If I try to impose the condition after constructing $T$, then I have to integrate expressions which I'm just not capable of. I can try to throw it at some symbolic integration software, but there has to be some other way, right? In almost every book on PDEs it's stated that this $T$ may be normalized so as to have the property I mention. But how?
If $\phi(x')$ denotes the $d$-th component of the normal vector at $(x',\gamma(x'))$, then first of all it is immediate from the graph structure that $\phi(x') \ne 0$. Let $S(x',y_d) = (x',\phi(x')y_d)$, and write $\tilde{T} = S \circ T$. Then $\tilde{T}$ is a $\mathcal{C}^1$ diffeomorphism which straightens the boundary, and it is normalized, as can be checked easily with the chain rule: $$ DS (x',y_d) = \left[ \begin{array}{c|c} \mathrm{Id} & 0 \\ \hline \nabla \phi(x') y_d & \phi(x') \end{array} \right] $$ In particular $$ DS (x',0) = \left[ \begin{array}{c|c} \mathrm{Id} & 0 \\ \hline 0 & \phi (x') \end{array} \right] $$ So on the boundary $\Gamma$ you get $$ D\tilde{T}(x',\gamma(x')) = DS(x',0) DT(x',\gamma(x')) =\left[ \begin{array}{c|c} \mathrm{Id} & 0 \\ \hline -\phi(x')\nabla \gamma(x') & \phi (x') \end{array} \right] $$ I.e., the last row is a multiple of the outer normal, and since the $d$-th entry is the same, it is equal to the outer normal.
{ "language": "en", "url": "https://math.stackexchange.com/questions/239506", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Subset of a finite set is finite We define $A$ to be a finite set if there is a bijection between $A$ and a set of the form $\{0,\ldots,n-1\}$ for some $n\in\mathbb N$. How can we prove that a subset of a finite set is finite? It is of course sufficient to show that for a subset of $\{0,\ldots,n-1\}$. But how do I do that?
I have run into this old question and was surprised that no one seemed to have said the following. One can work with this definition of a finite set: a set $A$ is finite if every injection $A\rightarrow A$ is a bijection. (Note: this definition does not require the set $\mathbb{N}$; it is Dedekind-finiteness, which agrees with the counting definition in the presence of the axiom of choice.) Now let $B$ be a finite set and $A\subset B$. Suppose that $A$ is not finite. Then, by definition, there exists a function $f:A\rightarrow A$ that is injective but not surjective. Now define $F:B\rightarrow B$ as follows. $$ F(x)=\begin{cases} f(x) & \text{if $x\in A$}\\ x & \text{if $x\in B\setminus A$} \end{cases} $$ Clearly $F$ is injective but not surjective, contradicting the finiteness of $B$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/239566", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "24", "answer_count": 7, "answer_id": 6 }
Why is $\operatorname{Var}(X+Y) = \operatorname{Cov}(X,X) + \operatorname{Cov}(X,Y) + \operatorname{Cov}(Y,X) + \operatorname{Cov}(Y,Y)$ I know $\operatorname{Cov}(X,Y) = E[(X-u_x)(Y-u_y)]$ and $$ \operatorname{Cov}(X+Y, Z+W) = \operatorname{Cov}(X,Z) + \operatorname{Cov}(X,W) + \operatorname{Cov}(Y,Z) + \operatorname{Cov}(Y,W), $$ but how does one get $$ \operatorname{Var}(X+Y) = \operatorname{Cov}(X,X) + \operatorname{Cov}(X,Y) + \operatorname{Cov}(Y,X) + \operatorname{Cov}(Y,Y)? $$
A quick way: Note from the definition of variance that $\text{Var}(T)=\text{Cov}(T,T)$. Now in your formula for $\text{Cov}(X+Y, Z+W)$, set $Z=X$ and $W=Y$. You will get exactly the formula you want to derive. A slow way: We can work with just your basic defining formula for covariance. Note that $$\text{Var}(X+Y)=E(((X+Y)-(\mu_X+\mu_Y))^2).$$ Rearranging a bit, we find that this is $$E(((X-\mu_X)+(Y-\mu_Y))^2).$$ Expand the square, and use the linearity of expectation. We get $$E((X-\mu_X)^2) +E((Y-\mu_Y)^2)+2E((X-\mu_X)(Y-\mu_Y)).$$ The first term is $\text{Var}(X)$, which is the same as $\text{Cov}(X,X)$. A similar remark can be made about the second term. And $\text{Cov}(X,Y)=\text{Cov}(Y,X)=E((X-\mu_X)(Y-\mu_Y))$. Remark: There is a variant of the formula for covariance, and variance, which is very useful in computations. Suppose we want the covariance of $X$ and $Y$. This is $E((X-\mu_X)(Y-\mu_Y))$. Expand the product, and use the linearity of expectation. We get $$E(XY)-E(\mu_XY)-E(\mu_Y X)+E(\mu_X\mu_Y).$$ But $\mu_X$ and $\mu_Y$ are constants. So for example $E(\mu_X Y)=\mu_XE(Y)=\mu_X\mu_Y$. So we conclude that $$\text{Cov}(X,Y)=E(XY)-\mu_X\mu_Y.$$ A special case of this is the important $$\text{Var}(X)=E(X^2)-\mu_X^2=E(X^2)-(E(X))^2.$$ The above formulas for covariance would have made it easier to derive the formula of your problem, or at least to type the answer. For $$\text{Var}(X+Y)=E((X+Y)^2)-(\mu_X+\mu_Y)^2.$$ Expand each square, use the linearity of expectation, and rearrange. We get $$(E(X^2)-\mu_X^2)+(E(Y^2)-\mu_Y^2)+2(E(XY)-\mu_X\mu_Y),$$ which is exactly what we want.
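If you want to convince yourself numerically, a quick Monte Carlo sketch (the names and the particular dependence between $X$ and $Y$ are arbitrary choices of mine; agreement is only up to sampling error):

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal(10**6)
Y = 0.5 * X + rng.standard_normal(10**6)   # deliberately correlated with X

lhs = np.var(X + Y)
C = np.cov(X, Y)                            # 2x2 sample covariance matrix
rhs = C[0, 0] + C[0, 1] + C[1, 0] + C[1, 1]
print(lhs, rhs)                             # should agree to a few decimals
```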
{ "language": "en", "url": "https://math.stackexchange.com/questions/239624", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
$G=\langle a,b\mid aba=b^2,bab=a^2\rangle$ is not metabelian of order $24$ This is my self-study exercise: Let $G=\langle a,b\mid aba=b^2,bab=a^2\rangle$. Show that $G$ is not metabelian. I know I have to show that $G'$ is not an abelian subgroup. The index of $G'$ in $G$ is 3, and running the Todd–Coxeter algorithm to find a presentation of $G'$ is a long and tedious technique (honestly, I did it, but not to the end). Moreover, GAP tells me that $|G|=24$. May I ask if there is an emergency exit for this problem? Thanks for any hint. :)
$abab=a^3=b^3$, so $Z := \langle a^3 \rangle$ is central. Modulo $Z$, we get the standard presentation $\langle a,b \mid a^3, b^3, (ab)^2 \rangle$ of $A_4$. Also, modulo $G'$, we have $a^2=b$, $b^2=a$, so $a^3=1$, and hence $Z \le G'$. Also, $ab,ba \in G'$ and $abba = a^2ba^2=bab^3ab=baabb^3$, so $G'$ is not abelian provided that $Z$ is nontrivial. So to prove the group is not metabelian we need to prove that $Z$ is nontrivial, and the only sensible way of doing that, other than by coset enumeration, which is very tedious to do by hand, is to find an explicit homomorphic image of the group in which $Z$ is nontrivial. Knowing that $G$ is a nonsplit central extension of $Z$ by $A_4$, we might suspect at this stage that $G \cong {\rm SL}_2(3)$, which might help us find an explicit map, like the one described by Jack Schmidt in his comment.
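If you want to hunt for such an explicit image without GAP, here is a brute-force Python sketch of my own (it only searches for a pair of matrices satisfying the relations with $a^3\ne 1$; by itself it does not prove the presentation defines ${\rm SL}_2(3)$):

```python
import itertools
import numpy as np

def sl23():
    # all 2x2 matrices over GF(3) with determinant 1
    for e in itertools.product(range(3), repeat=4):
        M = np.array(e).reshape(2, 2)
        if (M[0, 0] * M[1, 1] - M[0, 1] * M[1, 0]) % 3 == 1:
            yield M

def mul(A, B):
    return (A @ B) % 3

mats = list(sl23())
assert len(mats) == 24                       # |SL_2(3)| = 24
I = np.eye(2, dtype=int)
for a, b in itertools.product(mats, repeat=2):
    if (np.array_equal(mul(mul(a, b), a), mul(b, b)) and
            np.array_equal(mul(mul(b, a), b), mul(a, a)) and
            not np.array_equal(mul(mul(a, a), a), I)):
        print(a, b, sep="\n")                # a pair with a^3 != 1, so Z is nontrivial
        break
```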
{ "language": "en", "url": "https://math.stackexchange.com/questions/239691", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
A Book for abstract Algebra I am self learning abstract algebra. I am using the book Algebra by Serge Lang. The book has different definitions for some algebraic structures. (For example, according to that book rings are defined to have multiplicative identities. Also modules are defined slightly differently....etc) Given that I like the book, is it OK to keep reading this book or should I get another one? Thank you
There is a less famous but very nice book, Abstract Algebra by Paul B. Garrett, and then there is the old book A Survey of Modern Algebra by Birkhoff and Mac Lane.
{ "language": "en", "url": "https://math.stackexchange.com/questions/239734", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "30", "answer_count": 10, "answer_id": 8 }
$\int 2^x \ln(x)\, \mathrm{d}x$ I found this problem by a typo. My homework problem was $\int 2^x \ln(2) \, \mathrm{d}x$, which is $2^x + C$ by the Fundamental Thm of Calculus. I want to be able to solve what I wrote down incorrectly in my homework. What I wrote down, and what I now want to solve, is $\int 2^x \ln(x)\, \mathrm{d}x$, and I got it wrong. :( I used integration by parts. $$\int u \, \mathrm{d}v = uv - \int v\, \mathrm{d}u$$ $$\begin{array}{l l} u = \ln(x) & du = \frac{1}{x}\mathrm{d}x \\ \mathrm{d}v = 2^x\mathrm{d}x & v = \frac{2^x}{\ln (2)} \\ \end{array}$$ I got this integral: $$\frac{\ln(x)2^x}{\ln 2} - \int \frac{2^x}{x\ln 2}\, \mathrm{d}x$$ Another round of integration by parts: $$\begin{array}{l l} u = \frac{2^x}{\ln 2} & du = 2^x\mathrm{d}x \\ \mathrm{d}v = \frac{1}{x}\mathrm{d}x & v = \ln(x) \end{array} $$ $$\int 2^x \ln(x)\, \mathrm{d}x = \frac{\ln(x)2^x}{\ln 2} - \left[ \frac{2^x \ln x}{\ln 2} - \int \ln(x) 2^x\, \mathrm{d}x \right]$$ My final answer is $$ \frac{\ln(x)2^x}{\ln 2} -\frac{2^x \ln x}{\ln 2}= 0$$ What did I do wrong?
First, you made a mistake here: $$\int 2^x \ln(x)\, \mathrm{d}x = \frac{\ln(x)2^x}{\ln 2} - \left[ \frac{2^x \ln x}{\ln 2} - \int \ln(x) 2^x\, \mathrm{d}x \right]\Rightarrow \frac{\ln(x)2^x}{\ln 2} -\frac{2^x \ln x}{\ln 2}= 0$$ You can't just cancel the integrals, as you will lose the constant of integration. For example, $$\int \frac{1}{x}\, \mathrm{d}x = \int x^{\prime}\frac{1}{x}\, \mathrm{d}x= x\frac{1}{x} - \int x\frac{-1}{x^2}\, \mathrm{d}x =1+\int \frac{1}{x}\, \mathrm{d}x$$ If you cancel the integrals then $1=0$, which is impossible. When canceling integrals one must never forget the constant of integration $c$. In our case $c=0$. To see this, $$\int 2^x \ln(x)\, \mathrm{d}x = \frac{\ln(x)2^x}{\ln 2} - \frac{2^x \ln x}{\ln 2} + \int \ln(x) 2^x\, \mathrm{d}x \Rightarrow 0=\frac{\ln(x)2^x}{\ln 2} - \frac{2^x \ln x}{\ln 2}+c=c$$ which leads to $0=0$. Why did this come up? You integrated by parts once and then did the reverse, and got back to your starting point. Now, how can $\int 2^x \ln(x)\, \mathrm{d}x$ be evaluated? It can't be written as a combination of elementary functions (polynomial, exponential, logarithmic, trigonometric and hyperbolic functions and their inverses). I will show this for $\int e^x \ln(x)\, \mathrm{d}x$. $$\int e^x \ln(x)\, \mathrm{d}x = \int (e^x)^{\prime} \ln(x)\, \mathrm{d}x=e^x\ln x- \int e^x (\ln(x))^{\prime}\, \mathrm{d}x=e^x\ln x-\int \frac{e^x}{x}\, \mathrm{d}x $$ The last integral is not elementary, as can be shown by the Risch algorithm. For more information look here: Exponential Integral. And no, I don't think there is any book covering this topic.
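For what it's worth, a computer algebra system agrees; a short SymPy sketch (the exact output formatting may vary by version):

```python
import sympy as sp

x = sp.symbols('x', positive=True)
print(sp.integrate(sp.exp(x) * sp.log(x), x))   # expect exp(x)*log(x) - Ei(x)
```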
{ "language": "en", "url": "https://math.stackexchange.com/questions/239788", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Find the sum of the first $n$ terms of $\sum^n_{k=1}k^3$ The question: Find the sum of the first $n$ terms of $$\sum^n_{k=1}k^3$$ [Hint: consider $(k+1)^4-k^4$] [Answer: $\frac{1}{4}n^2(n+1)^2$] My solution: $$\begin{align} \sum^n_{k=1}k^3&=1^3+2^3+3^3+4^3+\cdots+(n-1)^3+n^3\\ &=\frac{n}{2}[\text{first term} + \text{last term}]\\ &=\frac{n(1^3+n^3)}{2} \end{align}$$ What am I doing wrong?
For a geometric solution, you can see theorem 3 on the last page of this PDF. Sorry I did not have time to type it here. This solution was published by Abu Bekr Mohammad ibn Alhusain Alkarachi in about A.D. 1010 (Exercise 40 of appendix E, page A38 of Stewart Calculus 5th edition).
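In the meantime, the closed form is easy to check numerically; a tiny Python sketch:

```python
for n in (1, 2, 10, 100):
    lhs = sum(k**3 for k in range(1, n + 1))
    rhs = (n * (n + 1) // 2) ** 2      # = (1/4) n^2 (n+1)^2
    assert lhs == rhs
print("closed form verified")
```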
{ "language": "en", "url": "https://math.stackexchange.com/questions/239909", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 5, "answer_id": 3 }
Dual of $\ell_\infty(X)$ Given a Banach space $X$. Consider the space $\ell_\infty(X)$ which is the $\ell_\infty$-sum of countably many copies of $X$. Is there any accessible respresentation of the dual space $\ell_\infty(X)^*$? In particular, is this dual space isomorphic to the space of finitely additive $X^*$-valued measures on the powerset of $\mathbb N$ equipped with the semivariation norm? Any references will be appreciated.
There is no good description of the dual of $\ell_\infty(X)$ as far as I know. If $X$ is finite dimensional, then the answer to your second question is yes. Otherwise, it is no, for there is no way to define an action of a finitely additive $X^*$-valued measure on $\ell_\infty(X)$ if the ball of $X$ is not compact.
{ "language": "en", "url": "https://math.stackexchange.com/questions/240072", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 1, "answer_id": 0 }
quadratic and bilinear forms Does a quadratic form always come from a symmetric bilinear form? We know that $q(x)=b(x,x)$ defines a quadratic form $q$ when $b$ is a symmetric bilinear form. But when we just take an arbitrary bilinear form $b(x,y)$ and write $x$ instead of $y$, does it give us a quadratic form?
If $b$ is a symmetric bilinear form, we get a quadratic form $q\colon V \to \mathbb{R}$ by setting $q(v)=b(v,v)$. Conversely, if $q\colon V \to \mathbb{R}$ is a quadratic form, we can define $b(v,w):=\frac 12\big(q(v+w)-q(v)-q(w)\big)$. The vital point is this: if you start from an arbitrary (not necessarily symmetric) bilinear form $b$ and set $q(v)=b(v,v)$, then $q$ is still a quadratic form, but the polarization $\frac 12\big(q(v+w)-q(v)-q(w)\big)$ leads to $\frac 12\big(b(v,w)+b(w,v)\big)$, the symmetric part of $b$, not $b$ itself.
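A concrete toy example: on $V=\mathbb R^2$ take the non-symmetric form $b\big((x_1,x_2),(y_1,y_2)\big)=x_1y_2$. Then $q(x)=b(x,x)=x_1x_2$ is a perfectly good quadratic form, but polarizing $q$ returns $\frac 12(x_1y_2+x_2y_1)$, the symmetric part of $b$; the original non-symmetric $b$ cannot be recovered from $q$.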
{ "language": "en", "url": "https://math.stackexchange.com/questions/240139", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 3, "answer_id": 2 }
$L_1$ norm of Gaussian random variable OK, this is a bit confusing. Let $g$ be a Gaussian random variable (normalized, i.e. with mean $0$ and standard deviation $1$). Then, in the expression $$\|g\|_{L_1}=\frac{1}{\sqrt{2\pi}}\int_{\mathbb{R}}|x|\exp\left(\frac{-x^2}{2}\right)dx =\sqrt{\frac{2}{\pi}},$$ shouldn't the term $|x|$ not appear in the integral?
If $X$ is a random variable with density $f$, and $\phi$ is a measurable function, then $E[\phi(X)]=\int_{\Bbb R}\phi(t)f(t)dt$. As the $L^1$ norm of a random variable $X$ is $E[|X|]$, we have, when $X$ is normally distributed, the announced result.
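If it helps, the value $\sqrt{2/\pi}\approx 0.7979$ is easy to confirm by simulation; a one-off Python sketch:

```python
import numpy as np

samples = np.abs(np.random.default_rng(2).standard_normal(10**6))
print(samples.mean(), np.sqrt(2 / np.pi))   # both approximately 0.7979
```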
{ "language": "en", "url": "https://math.stackexchange.com/questions/240202", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Method of Undetermined Coefficients - how to assume the form of a third degree equation. An example differential equations question asks me to solve $$y''' - 2y'' -4y'+8y = 6xe^{2x}$$ I began by solving the homogeneous equation with $m^3 - 2m^2 -4m+8 =0$ and getting the answer $$y(x) = c_1e^{2x} + c_2xe^{2x}+c_3e^{-2x} $$ The second part of the solution involves assuming a form for the particular solution. Because $g(x)$ is $6xe^{2x}$, I assumed the solution would be of the form $(Ax+B)e^{2x}$; however, it turns out that after differentiating three times it gets extremely complicated. Is there a better way? Also, the textbook solutions manual uses the form $(Ax^3 + Bx^2)e^{2x}$. How did it arrive at that? (There's no accompanying explanation.)
Your equation is $y'''-2y''-4y'+8y=6xe^{2x}$. Now rewrite the derivatives in $D$-operator form: $$y'''\to D^3y,\\ y''\to D^2y, \; \; \text{and} \;\;y'\to Dy.$$ Rearranging with respect to the operator $D$, we get our equation as $$D^3y-2D^2y-4Dy+8y=6xe^{2x}$$ or, by factoring, $$(D^3-2D^2-4D+8)y=(D-2)^2(D+2)y=6xe^{2x},$$ which is the factorization you found before. Note that considering the corresponding homogeneous equation $$(D-2)^2(D+2)y=0$$ we get $(D-2)^2=0,\;\; (D+2)=0$, which leads us to write the complementary solution as $$y_c(x) = c_1e^{2x} + c_2xe^{2x}+c_3e^{-2x},$$ as you correctly did above. Now have a look at some facts: (1) If $y=\text{constant}$ then $y'=0$, i.e. $Dy=0$. Here the operator $D$ annihilates any constant $y$ ($Dc=0$). (2) If $y=cx$, where $c$ is a constant, then $y''=0$, i.e. $D^2y=0$. This means that the operator polynomial $P(D)=D^2$ annihilates $y=cx$ $\big(P(D)(cx)=D^2(cx)=cD^2x=c\,(x)''=0\big)$. Generally, $D^{n+1}$ annihilates not only the function $y=cx^{n}$ but every polynomial $$y=c_0+c_1x+c_2x^2+\cdots+c_nx^n,$$ that is, $P(D)y=D^{n+1}y=0$. (3) Similarly, the differential operator $(D-\alpha)^n$ annihilates each of the following functions, and every linear combination of them: $$e^{\alpha x},xe^{\alpha x},x^2e^{\alpha x},\ldots,x^{n-1}e^{\alpha x}.$$ Now look at the RHS of your original equation, namely $6xe^{2x}$. Can we guess which differential polynomial annihilates it? By fact (3) it is $(D-2)^2$: indeed, $(D-2)^2 \left(6xe^{2x}\right)=0$. Don't worry about numeric coefficients like $6$ here at all. Now consider what we have achieved so far: $$(D-2)^2(D+2)y=6xe^{2x}.$$ Apply the operator $(D-2)^2$ to both sides: $$(D-2)^2\left((D-2)^2(D+2)y\right)=(D-2)^2\,6xe^{2x}=0$$ or $$(D-2)^4(D+2)y=0.$$ In fact, we have found a differential operator $P(D)=(D-2)^4(D+2)$ which annihilates $y$. Now, for a while, forget our original equation and look at $(D-2)^4(D+2)y=0$; suppose someone gave this to us and asked which functions $y$ satisfy it. We reply: (1) Since we have the factor $(D-2)$, $y$ contains terms of the form $e^{2x}$. (2) Since $(D-2)$ occurs with power $4$, $y$ contains the terms $Ae^{2x},\; Bxe^{2x}, \;Cx^2e^{2x}, \text{ and } Ex^3e^{2x}$. Note that you multiply $e^{2x}$ by $A,\; Bx,\; Cx^2,\; Ex^3$, exactly until the power of $x$ reaches $4-1=3$. (3) And, since we have the factor $(D+2)$, $y$ has the term $Fe^{-2x}$. So we are done. Our candidate function satisfying the original equation is $$y=Ae^{2x}+ Bxe^{2x}+Cx^2e^{2x}+Ex^3e^{2x}+Fe^{-2x}.$$ Now set aside the terms which already generate $y_c(x)$ and take the rest as what we were looking for: $$y_p=Cx^2e^{2x}+Ex^3e^{2x},$$ where $C,E$ are unknown constants.
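As a sanity check of the ansatz (the symbols below are mine), one can plug $y_p=(Cx^2+Ex^3)e^{2x}$ into the left-hand side with SymPy and solve for the coefficients:

```python
import sympy as sp

x, C, E = sp.symbols('x C E')
y_p = (C * x**2 + E * x**3) * sp.exp(2 * x)
residual = (y_p.diff(x, 3) - 2 * y_p.diff(x, 2) - 4 * y_p.diff(x) + 8 * y_p
            - 6 * x * sp.exp(2 * x))
# residual = e^{2x} times a linear polynomial in x, so two sample points pin it down
sol = sp.solve([residual.subs(x, 0), residual.subs(x, 1)], [C, E])
print(sol)   # expect {C: -3/16, E: 1/4}, i.e. y_p = (x^3/4 - 3x^2/16) e^{2x}
```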
{ "language": "en", "url": "https://math.stackexchange.com/questions/240279", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Line integral vs Arc Length I am trying to understand when to compute a line integral and when to compute arc length. I know the formula for arc length varies based on $dx$ or $dy$, like so: $s=\int_a^b \sqrt{1+[f'(x)]^2} \, \mathrm{d} x$ for the arc length; and here's a line integral equation: $\int_c f\,ds=\int_a^b f(r(t))\cdot r'(t)\,dt$. I just don't understand how the two are different/similar. Don't they both compute the same thing?
Suppose you have a curve $C$ parametrized as $\mathbf{g}(t)$ for $0\le t\le 1$. Then the arc length of $C$ is defined as $$\int_0^1\|\mathbf{g}'(t)\|\ dt$$ An intuitive way of think of the above integral is to interpret the derivative $\mathbf{g}'$ as velocity. Then the above integral is basically the statement that the net distance traveled is equal to the speed times time. The line integral itself is also concerned with arc length. More specifically, the scalar line integral is concerned with the arc length of a curve along with a weight $f$ at each small segment of the curve. To give a simplified example, suppose that you have an ideal elastic $C$. Further suppose that you have a function $f$ which assigns a value for each point of the elastic. Think of this $f$ as a stretch factor. If a point $p$ of the elastic is assigned a number $f(p)$, then we stretch the elastic locally around $p$ by a factor of $f(p)$. If we now add up the stretched lengths, what we have is a line integral $$\int_C f\ ds = \int_0^1 f(\mathbf{g}(t))\,\|\mathbf{g}'(t)\|\ dt$$ This line integral will be larger or smaller than the actual length of the elastic depending on how the elastic is stretched or compressed as a whole. But if our function is $f(p) = 1$, then that corresponds to stretching the elastic at each point by a factor of $1$, i.e. leaving the elastic alone. If we add up the untouched lengths segments of the elastic, all we do is recover the actual arc length of the elastic. This is why arc-length is given by $$\int_C 1\ ds = \int_0^1\|\mathbf{g}'(t)\|\ dt$$ an unweighted line integral.
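To make the comparison concrete, here is a small numerical sketch (my own example: $C$ is the upper unit semicircle, so $\|\mathbf g'(t)\|=1$):

```python
import numpy as np
from scipy.integrate import quad

speed = lambda t: 1.0                          # ||g'(t)|| for g(t) = (cos t, sin t)
arc_length, _ = quad(speed, 0, np.pi)          # unweighted case f = 1, gives ~pi

f = lambda px, py: py                          # a weight on the curve
line_int, _ = quad(lambda t: f(np.cos(t), np.sin(t)) * speed(t), 0, np.pi)
print(arc_length, line_int)                    # ~3.14159 and ~2.0
```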
{ "language": "en", "url": "https://math.stackexchange.com/questions/240346", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "15", "answer_count": 1, "answer_id": 0 }
Show that if m/n is a good approximation of $\sqrt{2}$ then $(m+2n)/(m+n)$ is better Claim: If $m/n$ is a good approximation of $\sqrt{2}$ then $(m+2n)/(m+n)$ is better. My attempt at the proof: Let $d$ be the distance between $\sqrt{2}$ and some estimate $s$, so we have $d=s-\sqrt{2}$. Define $d'=m/n-\sqrt{2}$ and $d''=(m+2n)/(m+n)-\sqrt{2}$. To prove the claim, show $d''<d'$. Substituting in for $d'$ and $d''$ yields $\sqrt{2}<m/n$. This result doesn't make sense to me, and I was wondering whether there is another way I could approach the proof, or if I am missing something.
Assume $\dfrac mn\ne\sqrt2;$ otherwise $\dfrac mn$ is $\sqrt2$, not an approximation. Then $d'\ne0$ so we can compute $\dfrac {d''}{d'}=\dfrac{\dfrac{m+2n}{m+n}-\sqrt2}{\dfrac mn-\sqrt2}= \dfrac n{m+n}\dfrac{m+2n-\sqrt2(m+n)}{m-\sqrt2n}$ $=\dfrac n{m+n}\dfrac{m-\sqrt2n-\sqrt2(m-\sqrt2n)}{m-\sqrt2n}=\dfrac {1}{1+\dfrac mn}\left(1-\sqrt2\right).$ We could assume $\dfrac mn\ge0$ (otherwise $\dfrac mn$ is not "a good approximation of $\sqrt2$"), and $-1<1-\sqrt2<0$ since $1<\sqrt2<2$. From here it is easy to see that $|d''|<|d'|,$ and $d''$ and $d'$ have opposite signs.
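Watching the iteration numerically makes the sign-flipping and the shrinking of $d$ visible; a short Python sketch starting from $1/1$:

```python
from fractions import Fraction

m, n = 1, 1
for _ in range(6):
    print(Fraction(m, n), float(Fraction(m, n)) - 2 ** 0.5)
    m, n = m + 2 * n, m + n     # the map m/n -> (m+2n)/(m+n)
```

The errors alternate in sign and shrink roughly by the factor $\left|1-\sqrt2\right|/(1+m/n)$ derived above.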
{ "language": "en", "url": "https://math.stackexchange.com/questions/240420", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 3, "answer_id": 1 }
Radius, diameter and center of graph The eccentricity $ecc(v)$ of $v$ in $G$ is the greatest distance from $v$ to any other node. The radius $rad(G)$ of $G$ is the value of the smallest eccentricity. The diameter $diam(G)$ of $G$ is the value of the greatest eccentricity. The center of $G$ is the set of nodes $v$ such that $ecc(v) = rad(G)$. Find the radius, diameter and center of the graph. I appreciate as much help as possible. I tried following an example and still didn't get it. When you count the distance from one node to another, do you count the starting node, or the ending node? And when you count, do you count the nodes above and below, or how exactly do you count? :)
I think that for the path graph $P_n$ the diameter is $n-1$, but the radius is $(n-1)/2$ rounded up to the nearest integer, i.e. $\lceil (n-1)/2\rceil$. For example, $P_3$ has radius $1$, not $2$.
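Since the question's graph is not shown, here is a sketch on a stand-in path graph using networkx, which has all of these notions built in:

```python
import networkx as nx

G = nx.path_graph(5)                    # the path P_5 on nodes 0..4
print(nx.eccentricity(G))               # {0: 4, 1: 3, 2: 2, 3: 3, 4: 4}
print(nx.radius(G), nx.diameter(G))     # 2 and 4
print(nx.center(G))                     # [2]
```

Distances here count edges, not nodes: the distance from a node to itself is $0$, and to an adjacent node it is $1$, which answers the counting question above.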
{ "language": "en", "url": "https://math.stackexchange.com/questions/240556", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "20", "answer_count": 3, "answer_id": 1 }
How to finish proof that $T$ has an infinite model? I'm trying to prove the following: If $T$ is a first-order theory with the property that for every natural number $n$ there is a natural number $m>n$ such that $T$ has an $m$-element model then $T$ has an infinite model. My thoughts: If $M$ is an $n$-element model then $\varphi_n = \exists v_1, \dots, v_n \bigwedge_{1\le i<j\le n} (v_i \neq v_j)$ (the conjunction has to run over all pairs $i<j$, not just consecutive indices, to force $n$ distinct elements) is true in $M$. Can I use this to show that $T$ has an infinite model? How? Perhaps combine it with the compactness theorem somehow? Thanks for your help.
This is a standard fact. The result you are looking for is exactly the compactness theorem, but you can also do it directly. Just take an ultraproduct of a sequence $M_i$ of larger and larger finite models. Since every one of these models $T$, so does the ultraproduct, by Łoś's theorem.
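If you prefer to finish your own compactness approach, here is a sketch filling in the hint: let $T' = T \cup \{\varphi_n : n \in \mathbb N\}$, where $\varphi_n$ asserts the existence of $n$ pairwise distinct elements. Any finite subset of $T'$ involves only finitely many $\varphi_n$, say all with index $\le n_0$; by hypothesis $T$ has a finite model with more than $n_0$ elements, and that model satisfies the whole finite subset. By compactness $T'$ has a model, and such a model satisfies every $\varphi_n$, hence is infinite.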
{ "language": "en", "url": "https://math.stackexchange.com/questions/240622", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Recursive Integration over Piecewise Polynomials: Closed form? Is there a closed form to the following recursive integration? $$ f_0(x) = \begin{cases} 1/2 & |x|<1 \\ 0 & |x|\geq1 \end{cases} \\ f_n(x) = 2\int_{-1}^x(f_{n-1}(2t+1)-f_{n-1}(2t-1))\mathrm{d}t $$ It's very clear that this converges against some function and that quite rapidly, as seen in this image, showing the first 8 terms: Furthermore, the derivatives of it have some very special properties. Note how the (renormalized) derivatives consist of repeated and rescaled functions of the previous degree which is obviously a result of the definition of the recursive integral: EDIT I found the following likely Fourier transform of the expression above. I do not have a formal proof but it holds for all terms I tried it with (first 11). $$ \mathcal{F}_x\left[f_n(x)\right](t)=\frac{1}{\sqrt{2\pi}}\frac{2^n \sin \left(2^{-n} t\right)}{t} \prod _{k=1}^n \frac{2^{k} \sin \left(2^{-k} t\right)}{t} $$ Here an image of how that looks like (first 10 terms in Interval $[-8\pi,8\pi]$): With this, my question alternatively becomes: What, if there is one, is the closed form inverse fourier transform of $\mathcal{F}_x\left[f_n(x)\right](t)=\frac{1}{\sqrt{2\pi}}\frac{2^n \sin \left(2^{-n} t\right)}{t} \prod _{k=1}^n \frac{2^{k} \sin \left(2^{-k} t\right)}{t}$, especially for the case $n\rightarrow\infty$? As a side note, it turns out, that this particular product is a particular Browein integral (Wikipedia) using as a sequence $a_k = 2^{-k}$ which exactly sums to 1. The extra term in the front makes this true for the finite sequence as well. In the limit $k \to \infty$, that term just becomes $1$, not changing the product at all. It is therefore just a finite depth correction.
Suppose $f$ is a fixed point of the iterations. Then $$f(x) = 2\int_{-1}^x\big(f(2t+1)-f(2t-1)\big)\,\mathrm{d}t,$$ which, upon differentiating both sides by $x$, implies that $$f'(x) = 2\big(f(2x+1)-f(2x-1)\big).$$ I'll assume that $f$ vanishes outside $[-1,1]$, which you can presumably prove from the initial conditions. Then we get $$f'(x) = \begin{cases} 2f(2x+1) & \text{if }x\le0, \\ -2f(2x-1) & \text{if }x>0. \end{cases}$$ This is pretty close to the definition of the Fabius function. In fact, your function would be $\frac{\text{Fb}'(\frac{x}{2}+1)}{2}$ The Fabius function is smooth but nowhere analytic, so there isn't going to be a nice closed form for your function.
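One can also watch the convergence numerically; a rough grid sketch (the discretization choices here are mine):

```python
import numpy as np

xs = np.linspace(-1.0, 1.0, 2001)
dx = xs[1] - xs[0]
f = np.where(np.abs(xs) < 1, 0.5, 0.0)               # f_0

def ev(g, t):
    # evaluate the current iterate at points t, zero outside [-1, 1]
    return np.interp(t, xs, g, left=0.0, right=0.0)

for _ in range(8):
    integrand = 2 * (ev(f, 2 * xs + 1) - ev(f, 2 * xs - 1))
    areas = (integrand[1:] + integrand[:-1]) * dx / 2  # trapezoid rule
    f = np.concatenate([[0.0], np.cumsum(areas)])      # f_n(x) = int_{-1}^x ...

print(f[len(xs) // 2])   # value at x = 0, which should settle near 1
```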
{ "language": "en", "url": "https://math.stackexchange.com/questions/240687", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "18", "answer_count": 3, "answer_id": 0 }
Sufficient condition for differentiability at endpoints. Let $f:[a,b]\to \mathbb{R}$ be differentiable on $(a,b)$ with derivative $g=f^{\prime}$ there. Assertion: If $\lim_{x\to b^{-}}g(x)$ exists and is a real number $\ell$ then $f$ is differentiable at $b$ and $f^{\prime}(b)=\ell$? Is this assertion correct? If so provide hints for a formal $\epsilon-\delta$ argument. If not, can it be made true if we strengthen some conditions on $g$ (continuity in $(a,b)$ etc.)? Provide counter-examples. I personally think that addition of the continuity of $g$ in the hypothesis won't change anything as for example $x\sin \frac{1}{x}$ has a continuous derivative in $(0,1)$ but its derivative oscillates near $0$. I also know that the converse of this is not true. Also if that limit is infinite, then $f$ is not differentiable at $b$ right?
Since $$ f(b+h)-f(b)=f'(\xi)h $$ for some $\xi \in (b+h,b)$ (here $h<0$), you can let $h \to 0^{-}$ and conclude that $f'(b)=\lim_{x \to b^-}f'(x)$. On the other hand, consider $f(x)=x^2 \sin \frac{1}{x}$ (with $f(0)=0$). It is easy to check that $\lim_{x \to 0} f'(x)$ does not exist, and yet $f'(0)=0$. Edit: this answer tacitly assumes that $f$ is continuous at $b$. The question does not contain this assumption, although it should be clear that a function discontinuous at a point can't be differentiable there.
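Spelling the first paragraph out as the requested $\epsilon$-$\delta$ argument: given $\epsilon>0$, pick $\delta>0$ such that $|f'(x)-\ell|<\epsilon$ for all $x\in(b-\delta,b)$, where $\ell=\lim_{x\to b^-}f'(x)$. For any $h\in(-\delta,0)$ the mean value theorem yields $\xi\in(b+h,b)\subset(b-\delta,b)$ with $\frac{f(b+h)-f(b)}{h}=f'(\xi)$, hence $\left|\frac{f(b+h)-f(b)}{h}-\ell\right|=|f'(\xi)-\ell|<\epsilon$. This is precisely the statement $f'(b)=\ell$ as a one-sided derivative.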
{ "language": "en", "url": "https://math.stackexchange.com/questions/240757", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }