Existence of geodesic on a compact Riemannian manifold I have a question about the existence of geodesics on a compact Riemannian manifold $M$. Is there an elementary way to prove that in each nontrivial free homotopy class of loops, there is a closed geodesic $\gamma$ on $M$?
Let $[\gamma]$ be a nontrivial free homotopy class of loops and let $l=\inf_{\beta\in[\gamma]}l(\beta)$, where $l(\beta)$ is the length of the curve $\beta.$ We will show that there is a geodesic $\beta$ in $[\gamma]$ such that $l(\beta)=l.$ Let $\beta_n$ be a sequence of loops in $[\gamma]$ such that $l(\beta_n)\to l.$ The first intuition is that the sequence $\beta_n$ itself converges to the desired curve, but this is not quite true. We can assume without loss of generality that each $\beta_n$ is a piecewise geodesic parameterized proportionally to arc length on $[0,1].$ Let us show that $(\beta_n)$ has a subsequence that converges uniformly to a continuous loop. In fact, as the curves $\beta_n$ are parameterized proportionally to arc length and their lengths are bounded, say by $L$, we have $$ d(\beta_n(t_1),\beta_n(t_2))\le L\,|t_1-t_2|. $$ Therefore the family $\{\beta_n\}$ is uniformly bounded and equicontinuous, and since $M$ is compact, it follows from the Arzelà–Ascoli theorem that there exists a subsequence $\beta_{n_j}$ that converges uniformly to a continuous loop $\beta_0:[0,1]\to M.$ Now let $t_0<t_1<\cdots<t_n$ be a finite partition of $[0,1]$ such that each $\beta_0([t_i,t_{i+1}])$ is contained in a totally normal neighborhood, and consider the piecewise geodesic $\beta:[0,1]\to M$ that on each $[t_i,t_{i+1}]$ coincides with the minimizing geodesic segment connecting the points $\beta_0(t_i)$ and $\beta_0(t_{i+1}).$ An argument by contradiction shows that $l(\beta)=l,$ and a shortcut argument then shows that $\beta$ is a smooth closed geodesic.
{ "language": "en", "url": "https://math.stackexchange.com/questions/255247", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11", "answer_count": 1, "answer_id": 0 }
Dirichlet Series for $\#\mathrm{groups}(n)$ What is known about the Dirichlet series given by $$\zeta(s)=\sum_{n=1}^{\infty} \frac{\#\mathrm{groups}(n)}{n^{s}}$$ where $\#\mathrm{groups}(n)$ is the number of isomorphism classes of finite groups of order $n$. Specifically: does it converge? If so, where? Do the residues at any of its poles have interesting values? Can it be expressed in terms of the classical Riemann zeta function? Is this even an interesting object to think about? Mathematica has a list of $\#\mathrm{groups}(n)$ for $1 \le n \le 2047$. Plotting the partial sum seems to indicate that it does converge and has a pole at $s=1$.
According to a sci.math posting by Avinoam Mann which I found at http://www.math.niu.edu/~rusin/known-math/95/numgrps the best upper bound is #groups(n) $\le n^{c(\log n)^2}$ for some constant $c$, and for $n=2^m$ Higman's lower bound $\#\mathrm{groups}(2^m)\ge 2^{\frac{2}{27}m^2(m-6)}$ has the same shape. The terms $\#\mathrm{groups}(2^m)/2^{ms}$ are therefore unbounded for every fixed $s$, so your Dirichlet series diverges for all $s$, having arbitrarily large terms. See also https://oeis.org/A000001 (the very first entry in the OEIS), which is where I got the link above.
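Just to put some numbers to this (an illustration only, not part of the argument): the table below hardcodes the first sixteen values of OEIS A000001, and the helper computes partial sums of the series for a given $s$. The partial sums look tame, which is exactly why the plot in the question is misleading — the divergence comes from the terms at $n=2^m$ far beyond this range.

```python
# First values of #groups(n) for n = 1..16, taken from OEIS A000001.
NUM_GROUPS = [1, 1, 1, 2, 1, 2, 1, 5, 2, 2, 1, 5, 1, 2, 1, 14]

def partial_sum(s):
    """Partial sum over n = 1..16 of #groups(n) / n**s."""
    return sum(g / n ** s for n, g in enumerate(NUM_GROUPS, start=1))

print(partial_sum(1.0))   # looks tame for small n ...
print(partial_sum(2.0))   # ... but terms at n = 2^m eventually dominate for any s
```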
{ "language": "en", "url": "https://math.stackexchange.com/questions/255296", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 2, "answer_id": 0 }
Infinitely valued functions Is it possible to define a multiple integral or multiple sum of infinite order? Something like $\int\int\int\int\cdots$ where there is an infinite number of integrals, or $\sum\sum\sum\sum\cdots$. Do such functions of infinitely many variables exist (something like $\mathbb R^\infty \rightarrow \mathbb R^n$)?
Yes, it is possible to define multiple integrals or sums to infinite order: here is my definition: for every function $f$ let $$\int\int\int\cdots \int f:=1$$ and $$\sum\sum\cdots\sum f:=1.$$ As you can see, I defined those objects. But OK, I understand that you are looking for some definitions granting some usual properties of the integral. Here is another answer: it is possible to define integrals of functions between Banach spaces. There are measures on infinite dimensional Banach spaces (for example Gaussian measures) so this might be the concept which is "meaningful" for you. For example you can consider a Gaussian measure on the space of continuous functions $C([0,1])$ induced by a Wiener process and you can calculate integrals with respect to that measure. With some mental gymnastics you can think about those measures and integrals in a way you asked about.
{ "language": "en", "url": "https://math.stackexchange.com/questions/255370", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 3, "answer_id": 2 }
Procedures to find solution to $a_1x_1+\cdots+a_nx_n = 0$ Suppose that $x_1, \dots,x_n$ are given as an input. Then we want to find $a_1,\ldots,a_n$ that satisfy $a_1x_1 + a_2x_2+a_3x_3 + a_4x_4+\cdots +a_nx_n =0$. (including the case where such $a$ set does not exist.) How does one find this easily? (So I am asking for an algorithm.) Edit: all numbers are non-zero integers.
Such $a_i$ do always exist (for example, we can let $a_1 = \cdots = a_n = 0$). The whole set of solutions is an $(n-1)$-dimensional subspace of $k^n$ (all of $k^n$ if $x_1 = \cdots = x_n= 0$).
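Since the question asks for an algorithm: here is one concrete way (the helper name is mine, not from the answer) to produce a nontrivial integer solution for $n\ge2$ — take $a_1=x_2$, $a_2=-x_1$, and $a_i=0$ otherwise, since $x_2x_1-x_1x_2=0$.

```python
# Produce one nontrivial solution of a_1*x_1 + ... + a_n*x_n = 0 for n >= 2.
# (Hypothetical helper for illustration; not every a_i is nonzero here.)

def nontrivial_solution(xs):
    if len(xs) < 2:
        raise ValueError("need at least two coefficients")
    return [xs[1], -xs[0]] + [0] * (len(xs) - 2)

xs = [3, 5, -7, 11]
sol = nontrivial_solution(xs)
print(sol, sum(c * x for c, x in zip(sol, xs)))   # dot product is 0
```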
{ "language": "en", "url": "https://math.stackexchange.com/questions/255457", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 1 }
Questions about $f: \mathbb{R} \rightarrow \mathbb{R}$ with bounded derivative I came across a problem that says: Let $f:\mathbb{R} \rightarrow \mathbb{R}$ be a function. If $|f'|$ is bounded, then which of the following option(s) is/are true? (a) The function $f$ is bounded. (b) The limit $\lim_{x\to\infty}f(x)$ exists. (c) The function $f$ is uniformly continuous. (d) The set $\{x \mid f(x)=0\}$ is compact. I am stuck on this problem. Please help. Thanks in advance for your time.
(a) & (b) are false: Consider $f(x)=x$ $\forall$ $x\in\mathbb R$; (c) is true: $|f'|\le M$ on $\mathbb R$ implies, by the Mean Value Theorem, $|f(x)-f(y)|\le M|x-y|$ for all $x,y$, so $f$ is Lipschitz and hence uniformly continuous [See: Related result]; (d) is false: $f=0$ on $\mathbb R\implies\{x:f(x)=0\}=\mathbb R$, which is closed but not bounded.
{ "language": "en", "url": "https://math.stackexchange.com/questions/255652", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 5, "answer_id": 4 }
Proof by induction for Stirling Numbers I am asked this: For any real number x and positive integer k, define the notation [x,k] by the recursion [x,k+1] = (x-k) [x,k] and [x,1] = x. If n is any positive integer, one can now express the monomial x^n as a polynomial in [x,1], [x,2], . . . , [x,n]. Find a general formula that accomplishes this, and prove that your formula is correct. I was able to figure out using stirling numbers that the formula is: $$X^n=\sum^{n}_{k=1} {n\choose k}(X_k)$$ where $[x,k]$ is a decreasing factorial $X_k = x(x-1)...(x-k+1)$ How can I prove the formula above by using induction?
I prefer the notation $x^{\underline k}$ for the falling power, so I’ll use that. You don’t want binomial coefficients in your expression: you want Stirling numbers of the second kind, denoted by $\left\{n\atop k\right\}$, and you want to show by induction on $n$ that $$x^n=\sum_{k=1}^n\left\{n\atop k\right\}x^{\underline k}\tag{1}$$ for any $n\in\Bbb Z^+$. (The formula $(1)$ becomes valid for $n=0$ as well if you start the summation at $k=0$.) The base case $n=1$ is clear, since $x^{\underline 1}=x$, and $\left\{1\atop 1\right\}=1$. For the induction step you’ll need the Pascal-like recurrence relation satisfied by the Stirling numbers of the second kind: $$\left\{{n+1}\atop k\right\}=k\left\{n\atop k\right\}+\left\{n\atop{k-1}\right\}\;.$$ (If you’re not familiar with it, you’ll find a combinatorial proof here.) It’s also useful to note that $x^{\underline{k+1}}=x^{\underline k}(x-k)$, so $x\cdot x^{\underline k}=x^{\underline{k+1}}+kx^{\underline k}$. Now assume $(1)$ for $n$, and try to prove that $$x^{n+1}=\sum_{k=1}^{n+1}\left\{{n+1}\atop k\right\}x^{\underline k}\;.$$ Start out in the usual way: $$\begin{align*} x^{n+1}&=x\cdot x^n\\ &=x\sum_{k=1}^n\left\{n\atop k\right\}x^{\underline k}\\ &=\sum_{k=1}^n\left\{n\atop k\right\}x\cdot x^{\underline k}\\ &=\sum_{k=1}^n\left\{n\atop k\right\}\left(x^{\underline{k+1}}+kx^{\underline k}\right)\\ &=\sum_{k=1}^n\left\{n\atop k\right\}x^{\underline{k+1}}+\sum_{k=1}^n\left\{n\atop k\right\}kx^{\underline k}\\ &=\sum_{k=2}^{n+1}\left\{n\atop{k-1}\right\}x^{\underline k}+\sum_{k=1}^n\left\{n\atop k\right\}kx^{\underline k}&&\left(\text{shift index in first sum}\right)\\ &=\sum_{k=1}^{n+1}\left\{n\atop{k-1}\right\}x^{\underline k}+\sum_{k=1}^{n+1}\left\{n\atop k\right\}kx^{\underline k}&&\left(\text{since }\left\{n\atop0\right\}=0=\left\{n\atop{n+1}\right\}\right)\;.\\ \end{align*}$$ At this point you’re almost done; I’ll leave to you the little that remains.
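If you want to convince yourself of $(1)$ numerically before proving it, here is a quick check (just a sanity test, not a proof) that implements the Pascal-like recurrence above and verifies the identity for small $n$ and integer $x$:

```python
from functools import lru_cache

# Stirling numbers of the second kind via the recurrence
# S(n+1, k) = k*S(n, k) + S(n, k-1), with S(0, 0) = 1.
@lru_cache(maxsize=None)
def stirling2(n, k):
    if n == 0 and k == 0:
        return 1
    if n == 0 or k == 0:
        return 0
    return k * stirling2(n - 1, k) + stirling2(n - 1, k - 1)

def falling(x, k):
    """Falling power x^(underline k) = x (x-1) ... (x-k+1)."""
    result = 1
    for i in range(k):
        result *= x - i
    return result

# Check x^n = sum_k S(n,k) * x^(underline k) for n = 1..7 and several x.
for n in range(1, 8):
    for x in range(-3, 6):
        assert x ** n == sum(stirling2(n, k) * falling(x, k) for k in range(1, n + 1))
print("identity (1) verified for n = 1..7")
```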
{ "language": "en", "url": "https://math.stackexchange.com/questions/255730", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
The compactness of $\{x_n=\cos nt\}_{n\in\mathbb{N}}\subset L_2[-\pi,\pi]$ Is the set $\{x_n=\cos (nt): n\in\mathbb{N}\}$ in $L_2[-\pi,\pi]$ closed or compact? I don't know how to prove it.
As the elements are pairwise orthogonal with $\lVert x_n\rVert_{L^2}^2=\pi$, we have for $n\neq m$ that $$\lVert x_n-x_m\rVert_{L^2}^2=2\pi,$$ proving that the set cannot be compact (it is not even precompact, as no finite $\varepsilon$-net exists for $\varepsilon=1$). But it is a closed set: its points are at mutual distance $\sqrt{2\pi}$, so it has no limit points, and every convergent sequence in it is eventually constant.
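A quick numerical sanity check (an illustration, not part of the proof): approximate the $L^2[-\pi,\pi]$ distance between $\cos(nt)$ and $\cos(mt)$ by a midpoint Riemann sum and compare it with the exact value $2\pi$ coming from orthogonality.

```python
import math

def l2_dist_sq(n, m, steps=200_000):
    """Midpoint-rule approximation of the squared L^2[-pi, pi] distance
    between cos(n t) and cos(m t)."""
    a, b = -math.pi, math.pi
    h = (b - a) / steps
    return sum(
        (math.cos(n * (a + (i + 0.5) * h)) - math.cos(m * (a + (i + 0.5) * h))) ** 2
        for i in range(steps)
    ) * h

print(l2_dist_sq(2, 3))   # approximately 2*pi = 6.2832...
```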
{ "language": "en", "url": "https://math.stackexchange.com/questions/255792", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Proof that an analytic function that takes on real values on the boundary of a circle is a real constant Possible Duplicate: Let f(z) be entire function. Show that if $f(z)$ is real when $|z| = 1$, then $f(z)$ must be a constant function using Maximum Modulus theorem I'm having trouble proving that an analytic function that takes on only real values on the boundary of a circle is a real constant. I started by writing $f(r, \theta) = u(r, \theta) + i v(r, \theta)$. By definition, $v(r, \theta ) = 0$ on the boundary, so $\frac{d}{d\theta} v = 0$ there, and in fact the $n$th derivative of $v(r,\theta)$ with respect to $\theta$ is $0$. The Cauchy–Riemann equations in polar coordinates imply that $\frac{d}{dr} u(r, \theta ) = 0$. Unfortunately, I'm stuck here - I think I need to prove that all $n$th derivatives of $u$ and $v$ with respect to both $r$ and $\theta$ are $0$, so that I can move in any direction without inducing a change in $f$, but at this point I'm stuck. I've played around with this a fair bit but keep running in circles (no pun intended), so there must be something simple that I am missing. What am I doing wrong, and how does one complete the proof?
You know that if $f=u+iv$, then $u$ and $v$ are harmonic. Now, by assumption you have that $v$ is zero on the boundary of the disk. But, by the two extrema principles, you know that the maximum and minimum of $v$ occur on the boundary of your disc, and so clearly this implies that $v$ is identically zero. Thus, $f=u$, and so $f$ maps the disk into $\mathbb{R}$. But, by the open mapping theorem this implies that $f=u=\text{constant}$ else the image of the disk would be an open subset of $\mathbb{C}$ sitting inside $\mathbb{R}$ which is impossible.
{ "language": "en", "url": "https://math.stackexchange.com/questions/255848", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Linear Algebra Proof If $A$ is an $m\times n$ matrix and $M = (A \mid b)$ the augmented matrix for the linear system $Ax = b$, show that either $(i)$ $\operatorname{rank}A = \operatorname{rank}M$, or $(ii)$ $\operatorname{rank}A = \operatorname{rank}M - 1$. My attempt: The rank of a matrix is the dimension of its range space. Let the column vectors of $A$ be $a_1,\ldots,a_n$. If $\operatorname{rank}A = r$, then the $r$ pivot columns of $A$ form a basis of the range space of $A$; the pivot columns are linearly independent. For the matrix $M = (A \mid b)$, there are only two cases. Case $(i)$: $b$ is in the range of $A$. Then the range space of $M$ is the same as the range space of $A$, therefore $\operatorname{rank}M = \operatorname{rank}A$. I am stuck on how to do case $(ii)$.
Suppose the columns of $A$ have exactly $r$ linearly independent vectors. If $b$ lies in their span, then $\operatorname{rank} A=r=\operatorname{rank} M$. If not, then the columns of $A$ together with $b$ have exactly $(r+1)$ linearly independent vectors, so that $\operatorname{rank} A+1=r+1=\operatorname {rank} M$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/255898", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Using the SLLN to show that the Sample Mean of Arrivals tends to the Arrival Rate for a simple Poisson Process Let $N_t = N([0,t])$ denote a Poisson process with rate $\lambda = 1$ on $[0,\infty)$. I am wondering how I can use the Law of Large Numbers to formally argue that: $$\frac{N_t}{t} \rightarrow \lambda \quad \text{ a.s.} $$ As it stands, I can almost prove the required result but I have to assume that $t \in \mathbb{Z}_+$. With this assumption, I can define Poisson random variables on intervals of size $1$ as follows $$N_i = N([i-1,i])$$ where $$\mathbb{E}[N([i-1,i])] = \text{Var}[N([i-1,i])] = 1$$ and $$N_t = N([0,t]) = \sum_{i=1}^t N([i-1,i]) = \sum_{i=1}^t N_i$$ Accordingly, we can use the Law of Large Numbers to state the result above... Given that $t \in \mathbb{R}_+$, this proof needs to be tweaked in some way... But I'm not exactly sure how to do it. Intuitively speaking, I believe that the correct approach would be to decompose $N[0,t]$ into $N[0,\lfloor t\rfloor]$ and $N[\lfloor t\rfloor, t]$, and argue that the latter term $\rightarrow 0$ almost surely. However, I'm not sure how to formally state this.
If $n\leqslant t\lt n+1$, then $N_n\leqslant N_t\leqslant N_{n+1}$ hence $$ \frac{n}t\cdot\frac{N_{n}}{n}\leqslant\frac{N_t}t\leqslant\frac{n+1}t\cdot\frac{N_{n+1}}{n+1}. $$ When $t\to\infty$, $\frac{n}t\to1$ and $\frac{n+1}t\to1$ because $n$ is the integer part of $t$, hence $t-1\lt n\leqslant t$. Furthermore, $\frac{N_n}n\to\lambda$ (the result you showed) because $n\to\infty$, and $\frac{N_{n+1}}{n+1}\to\lambda$ for the same reason. You are done.
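A Monte Carlo illustration of the statement (a simulation only — the proof is the sandwich argument above): simulate a rate-$1$ Poisson process by summing i.i.d. $\mathrm{Exp}(1)$ inter-arrival times, count arrivals up to time $t$, and observe $N_t/t$ close to $\lambda=1$.

```python
import random

def poisson_count(t, rng):
    """Number of arrivals of a rate-1 Poisson process in [0, t], simulated
    from exponential inter-arrival times."""
    total, count = 0.0, 0
    while True:
        total += rng.expovariate(1.0)
        if total > t:
            return count
        count += 1

rng = random.Random(0)
t = 50_000.0
print(poisson_count(t, rng) / t)   # close to lambda = 1
```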
{ "language": "en", "url": "https://math.stackexchange.com/questions/255958", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
greatest common divisor is 7 and the least common multiple is 16940 How many such number-pairs are there for which the greatest common divisor is 7 and the least common multiple is 16940?
Let the two numbers be $7a$ and $7b$. Note that $16940=7\cdot 2^2\cdot 5\cdot 11^2$. We make a pair $(a,b)$ with gcd $1$ and lcm $2^2\cdot 5\cdot 11^2$ as follows. We "give" $2^2$ to one of $a$ and $b$, and $2^0$ to the other. We give $5^1$ to one of $a$ and $b$, and $5^0$ to the other. Finally, we give $11^2$ to one of $a$ and $b$, and $11^0$ to the other. There are $2^3$ choices, and therefore $2^3$ ordered pairs such that $\gcd(a,b)=1$ and $\text{lcm}(a,b)=2^2\cdot 5\cdot 11^2$. If we want unordered pairs, divide by $2$. Here we used implicitly the Unique Factorization Theorem: Every positive integer can be expressed in an essentially unique way as a product of primes. There was nothing really special about $7$ and $16940$: any problem of this shape can be solved in basically the same way.
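For a quick sanity check of the counting argument (an illustration only), one can brute-force the pairs over the divisors of $16940$:

```python
from math import gcd

# Enumerate ordered pairs (x, y) with gcd(x, y) = 7 and lcm(x, y) = 16940.
# Both x and y must divide 16940, so it suffices to search its divisors.
N = 16940
divisors = [d for d in range(1, N + 1) if N % d == 0]
pairs = [(x, y) for x in divisors for y in divisors
         if gcd(x, y) == 7 and x * y // gcd(x, y) == N]

print(len(pairs))                               # 8 ordered pairs, matching 2^3
print(len({tuple(sorted(p)) for p in pairs}))   # 4 unordered pairs
```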
{ "language": "en", "url": "https://math.stackexchange.com/questions/256035", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
How can this isomorphism be valid? How can $\mathbb{Z}_4 \times \mathbb{Z}_6 / <(2,3)> \cong \mathbb{Z}_{12} = \mathbb{Z}_{4} \times \mathbb{Z}_{3}$? I am not convinced at the least that $\mathbb{Z}_{12}$ is isomorphic to $\mathbb{Z}_4 \times \mathbb{Z}_6 / <(2,3)>$ For instance, doesn't $<1>$ have an order of 12 in $\mathbb{Z}_{12}$? And no element of $\mathbb{Z}_4 \times \mathbb{Z}_6 / <(2,3)>$ can even have an order of $12$ no? What is the maximum order of $\mathbb{Z}_4 \times \mathbb{Z}_6 / <(2,3)>$? I know if the denominator isn't there, it is $lcm(4,6) = 12$, but even if it weren't there, I don't see how either component can produce an element of order 12. What I mean is that the first component is in $\mathbb{Z}_4$, so all elements have max order 6 and likewise $\mathbb{Z}_6$, have order 6, so how can any element exceed the order of their group?
I presume you mean ring isomorphism. If so, you should specify what the $0$, the $1$, the sum and the multiplication are; of course, those are rather implicit, but there lies your doubt. As @KReiser points out, the unit in $\mathbb{Z}_4 \times \mathbb{Z}_6 / \langle(2,3)\rangle$ is (the class of) $(1,1)$. In order to prove the isomorphism to $\mathbb{Z}_{12}$, you can define the bijective function and show that it is an isomorphism of rings. That is, let $\Phi$ be the morphism that maps $1\in\mathbb{Z}_{12}$ to the class of $(1,1)\in \mathbb{Z}_4 \times \mathbb{Z}_6 / \langle(2,3)\rangle$. First show that it is a morphism, then that it is injective and surjective. Since both rings have the same number of elements (twelve on each side, because $\langle(2,3)\rangle$ has order $2$ in the group of order $24$), injectivity and surjectivity are equivalent here, and the two rings thus turn out to be isomorphic.
{ "language": "en", "url": "https://math.stackexchange.com/questions/256097", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
What is the difference between necessary and sufficient conditions? * *If $p\implies q$ ("$p$ implies $q$"), then $p$ is a sufficient condition for $q$. *If $\lnot p\implies \lnot q$ ("not $p$ implies not $q$"), then $p$ is a necessary condition for $q$. I don't understand what sufficient and necessary mean in this case. How do you know which one is necessary and which one is sufficient?
Suppose first that $p$ implies $q$. Then knowing that $p$ is true is sufficient (i.e., enough evidence) for you to conclude that $q$ is true. It’s possible that $q$ could be true even if $p$ weren’t, but having $p$ true ensures that $q$ is also true. Now suppose that $\text{not-}p$ implies $\text{not-}q$. If you know that $p$ is false, i.e., that $\text{not-}p$ is true, then you know that $\text{not-}q$ is true, i.e., that $q$ is false. Thus, in order for $q$ to be true, $p$ must be true: without that, you automatically get that $q$ is false. In other words, in order for $q$ to be true, it’s necessary that $p$ be true; you can’t have $q$ true while $p$ is false.
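The two equivalences used implicitly here can be checked mechanically with a truth table (a small illustration, nothing more): $p\Rightarrow q$ agrees with its contrapositive $\lnot q\Rightarrow\lnot p$, and $\lnot p\Rightarrow\lnot q$ (the "necessary" direction) agrees with $q\Rightarrow p$.

```python
from itertools import product

def implies(a, b):
    """Material implication: a -> b is (not a) or b."""
    return (not a) or b

for p, q in product([False, True], repeat=2):
    # p -> q is equivalent to its contrapositive not-q -> not-p ...
    assert implies(p, q) == implies(not q, not p)
    # ... and not-p -> not-q (p necessary for q) is equivalent to q -> p.
    assert implies(not p, not q) == implies(q, p)
print("both equivalences hold on all four truth-table rows")
```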
{ "language": "en", "url": "https://math.stackexchange.com/questions/256171", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "21", "answer_count": 3, "answer_id": 0 }
Prove divisibility law $\,b\mid a,c\,\Rightarrow\, b\mid ka + lc$ for all $k,l\in\Bbb Z$ We have to prove $b|a$ and $b|c \Rightarrow b|ka+lc$ for all $k,l \in \mathbb{Z}$. I thought it would be enough to say that $b$ can be expressed both as $b=ka$ and $b=lc$. Now we can reason that since $ka+lc=2b$ and $b|2b$, it directly follows that $b|ka+lc$? But I'm not sure if that works for any value of $k$ and $l$ (namely $k$ and $l$ are defined through quotient between $a$ and $c$, respectively). What am I missing?
An alternative presentation of the solution (perhaps slightly less elementary than the already proposed answers) is to work in the quotient ring. You can write that in $\mathbb{Z}/b\mathbb{Z}$ \begin{align*} \overline{ka+lc}&=\bar{k}\bar{a}+\bar{l}\bar{c}\\ &=\bar{0}+\bar{0}\\ &=\bar{0} \end{align*} where $\bar{a}=\bar{c}=\bar{0}$ because $b | a$ and $b | c$.
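A brute-force check of the statement over a small range (just to see the claim in action, not a proof):

```python
# Verify b | a and b | c  =>  b | (k*a + l*c) for many small instances,
# generating a and c directly as multiples of b.
for b in range(1, 11):
    for i in range(-5, 6):
        for j in range(-5, 6):
            a, c = b * i, b * j
            for k in range(-3, 4):
                for l in range(-3, 4):
                    assert (k * a + l * c) % b == 0
print("verified for all b <= 10 in the tested range")
```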
{ "language": "en", "url": "https://math.stackexchange.com/questions/256237", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 2 }
Square of the sum of n positive numbers I have a following problem When we want to write $a^2 + b^2$ in terms of $(a \pm b)^2$ we can do it like that $$a^2 +b^2 = \frac{(a+b)^2}{2} + \frac{(a-b)^2}{2}.$$ Can we do anything similar for $a_1^2 + a_2^2 + \ldots + a_n^2$ ? I can add the assumption that all $a_i$ are positive numbers. I mean to express this as combination of their sums and differences. I know that this question is a little bit naive but I'm curious whether it has an easy answer.
Yes. You have to sum over all of the possibilities of $a\pm b\pm c$: $$4(a^2+b^2+c^2)=(a+b+c)^2+(a+b-c)^2+(a-b+c)^2+(a-b-c)^2$$ This can be extended to $n$ terms by: $$\sum_{k=1}^n a_k^2=\sum_{\substack{\alpha\in\{-1,1\}^n\\ \alpha_1=1}}\frac{\big(\sum_{i=1}^{n}\alpha_ia_i\big)^2}{2^{n-1}}$$ ($\alpha$ is a multi-index with entries that are either $-1$ or $1$, except the first, which is always $1$)
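The identity is easy to verify by machine (an illustration): average $(\sum_i \alpha_i a_i)^2$ over all $2^{n-1}$ sign vectors whose first entry is $+1$, and compare with $\sum_i a_i^2$.

```python
from itertools import product

def sum_of_squares_via_signs(a):
    """Evaluate the right-hand side of the sign-sum identity above."""
    n = len(a)
    total = 0
    for signs in product([1, -1], repeat=n - 1):
        alpha = (1,) + signs           # first sign fixed to +1
        total += sum(s * x for s, x in zip(alpha, a)) ** 2
    return total / 2 ** (n - 1)

print(sum_of_squares_via_signs([3, 4]))      # 25.0 = 3^2 + 4^2
print(sum_of_squares_via_signs([1, 2, 3]))   # 14.0 = 1 + 4 + 9
```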
{ "language": "en", "url": "https://math.stackexchange.com/questions/256309", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
How to prove the given series is divergent? Given the series $$ \sum_{n=1}^{+\infty}\left[e-\left(1+\frac{1}{n}\right)^{n}\right], $$ show that it is divergent. The standard ratio and root criteria give a limit of $1$ here, so some other method seems to be needed. Any suggestions?
Let $x > 1$. Then the inequality $$\frac{1}{t} \leq \frac{1}{x}(1-t) + 1$$ holds for all $t \in [1,x]$ (the right hand side is a straight line between $(1,1)$ and $(x, \tfrac{1}{x})$ in $t$) and in particular $$\log(x) = \int_1^x \frac{dt}{t} \leq \frac{1}{2} \left(x - \frac{1}{x} \right)$$ for all $x > 1$. Substitute $x \leftarrow 1 + \tfrac{1}{n}$ to get $$ \log \left(1 + \frac{1}{n} \right) \leq \frac{1}{2n} + \frac{1}{2(n+1)}$$ and after multiplying by $n$ $$\log\left(1 + \frac{1}{n} \right)^n \leq 1 - \frac{1}{2(n+1)}.$$ Use this together with the estimate $e^x \leq (1-x)^{-1}$ for all $x < 1$ to get $$\left(1 + \frac{1}{n} \right)^n \leq e \cdot e^{-\displaystyle\frac{1}{2(n+1)}} \leq e \cdot \left(1 - \frac{1}{2n+3} \right)$$ or $$e - \left(1 + \frac{1}{n} \right)^n \geq \frac{e}{2n+3}.$$ This shows that your series diverges.
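A numeric check of the final bound (an illustration — the comparison with the harmonic series $\sum e/(2n+3)$ then gives divergence):

```python
import math

# Check e - (1 + 1/n)^n >= e / (2n + 3) for a large range of n.
for n in range(1, 10_000):
    assert math.e - (1 + 1 / n) ** n >= math.e / (2 * n + 3)
print("bound holds for n = 1..9999")
```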
{ "language": "en", "url": "https://math.stackexchange.com/questions/256359", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
Use two solutions to a high order linear homogeneous differential equation with constant coefficients to say something about the order of the DE OK, this one utterly baffles me. I am given two solutions to an $n$th-order homogeneous differential equation with constant coefficients. Using the solutions, I am supposed to put a restriction on $n$ (such as $n\ge5$). I have no idea what method, theorem, or definition is useful to do this. My current "theory" is that I must find all the different derivatives of the solutions and tally up how many unique derivatives they have. This is wrong, but am I going in the right direction? The specific solutions for the example are $t^3$ and $te^t\sin t$. These solutions are to an $n$th-order homogeneous differential equation with constant coefficients, which means that $n \ge {}$? Thanks in advance.
A related problem. We will use the annihilator method. Note that, since you are given two solutions of the ODE with constant coefficients, their linear combination is a solution of the ODE too. This means the function $$ y(t) = c_1 t^3 + c_2 te^{t}\sin(t) $$ satisfies the ODE. Applying the operator $D^4((D-1)^2+1)^2,$ where $D=\frac{d}{dt},$ to the above equation gives $$D^4((D-1)^2+1)^2 y(t) = 0.$$ From the left-hand side of the above equation, one can see that the differential equation has order at least $8$, that is, $n\geq 8.$
{ "language": "en", "url": "https://math.stackexchange.com/questions/256429", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Proving that cosine is uniformly continuous This is what I've already done. Can't think of how to proceed further $$|\cos(x)-\cos(y)|=\left|-2\sin\left(\frac{x+y}{2}\right)\sin\left(\frac{x-y}{2}\right)\right|\leq\left|\frac{x+y}{2}\right||x-y|$$ What should I do next?
Hint: Any continuous function is uniformly continuous on a closed, bounded interval, so $\cos$ is uniformly continuous on $[-2\pi,0]$ and $[0,2\pi]$; now use the $2\pi$-periodicity of $\cos$ to pass from these intervals to all of $\mathbb R$. (Alternatively, in your estimate bound $\left|\sin\frac{x+y}{2}\right|$ by $1$ instead of by $\left|\frac{x+y}{2}\right|$, which gives $|\cos x-\cos y|\le|x-y|$ directly.)
{ "language": "en", "url": "https://math.stackexchange.com/questions/256498", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 0 }
Open affine neighborhood of points $X$ is a variety and there are $m$ points $x_1,x_2,\cdots,x_m$ on $X$. Can we find an open affine set which contains all $x_i$s?
Such a variety is sometimes called an FA-scheme (finite-affine). Quasi-projective schemes over affine schemes (e.g. quasi-projective varieties over a field) are FA. On the other hand, there are varieties which are not FA. Kleiman proved that a proper smooth variety over an algebraically closed field is FA if and only if it is projective. Some more details can be found in § 2.2 in this paper. There is an easy proof for projective varieties $X$ over a field. Just take a homogeneous polynomial $F$ which doesn't vanish at any of $x_1,\dots, x_m$. Then the principal open subset $D_+(F)$ is an affine open subset containing the $x_i$'s. The existence of $F$ is given by the graded version of the classical prime avoidance lemma: Edit Let $R$ be a graded ring, let $I$ be a homogeneous ideal generated by elements of positive degrees. Suppose that any homogeneous element of $I$ belongs to the union of finitely many prime homogeneous ideals $\mathfrak p_1, \dots, \mathfrak p_m$. Then $I$ is contained in one of the $\mathfrak p_i$'s. A (sketch of) proof can be found in Eisenbud, § 3.2. For the above application, take $\mathfrak p_i$ to be the prime homogeneous ideal corresponding to $x_i$ and $I=R_+$ to be the (irrelevant) ideal generated by the homogeneous elements of positive degrees. As $R_+$ is not contained in any $\mathfrak p_i$, the avoidance lemma says there exists a homogeneous $F\in R_+$ not in any of the $\mathfrak p_i$'s. This method can be used to prove that any quasi-projective variety $X$ is FA (embed $X$ into a projective variety $\overline{X}$ and take $I$ to be a homogeneous ideal defining the closed subset $\overline{X}\setminus X$; let $F\in I$ be homogeneous and not in any $\mathfrak p_i$, then $D_+(F)$ is affine, contains the $x_i$'s and is contained in $X$).
{ "language": "en", "url": "https://math.stackexchange.com/questions/256648", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 1, "answer_id": 0 }
Infinite series question from analysis Let $a_n > 0$ and suppose that for all $n$ $$\sum\limits_{j=n}^{2n} a_j \le \dfrac 1n. $$ Prove or give a counterexample to the statement $$\sum\limits_{j=1}^{\infty} a_j < \infty$$ Not sure where to start, a push in the right direction would be great. Thanks!
Consider the sum of sums $\sum_{k=0}^\infty \sum_{j=2^k}^{2^{k+1}} a_j$: the inner blocks together cover every index $j$, and by hypothesis each block is at most $2^{-k}$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/256699", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Is there any function like this? $f$ is a function which is continuous on $\Bbb R$, and $f^2$ is differentiable at $x=0$. Suppose $f(0)=1$. Must $f$ be differentiable at $0$? I feel that $f$ need not be differentiable at $x=0$ even though $f^2$ is, but I cannot find a counterexample to disprove differentiability. Does anyone have an example?
Hint: $$\frac{f(x)-1}{x}=\frac{f(x)^2-1}{x}\frac{1}{f(x)+1}\xrightarrow [x\to 0]{}...?$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/256785", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Give an example of a sequence of real numbers with subsequences converging to every real number I'm unsure how to construct such an example: a sequence of real numbers with subsequences converging to every real number.
A related question that you can try: Let $(a_k)_{k\in\mathbb{N}}$ be a real sequence such that $\lim_k a_k=0$, and set $s_n=\sum_{k=1}^na_k$. Then the set of subsequential limits of the sequence $(s_n)_{n\in\mathbb{N}}$ is connected.
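A standard example (not spelled out in the answer above, but well known) is any enumeration of the rationals: every real is a limit of rationals, so every real is a subsequential limit. The sketch below lists the rationals $p/q$ in $[0,3)$ by increasing denominator — a finite initial segment of an enumeration of $\mathbb Q$, enough for a demo — and greedily extracts a subsequence converging toward $\sqrt 2$.

```python
from fractions import Fraction
import math

def rationals_upto(max_q):
    """Rationals p/q in [0, 3) listed by increasing denominator q."""
    return [Fraction(p, q) for q in range(1, max_q + 1) for p in range(3 * q)]

target = math.sqrt(2)
sub, err = [], 1.0
for r in rationals_upto(200):
    if abs(float(r) - target) < err:   # keep only terms strictly closer than before
        sub.append(r)
        err = abs(float(r) - target)
print(float(sub[-1]))                  # close to 1.41421...
```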
{ "language": "en", "url": "https://math.stackexchange.com/questions/256840", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 2 }
How does addition of the identity matrix to a square matrix change the determinant? Suppose there is an $n \times n$ matrix $A$. If we form the matrix $B = A+I$ where $I$ is the $n \times n$ identity matrix, how does $|B|$ - the determinant of $B$ - change compared to $|A|$? And what about the case where $B = A - I$?
As others have already pointed out, there is no simple relation. Here is one more answer, for intuition. Consider the (restrictive) condition that $A_{n \times n}$ is diagonalizable; then $$\det(A) = \lambda_0 \cdot \lambda_1 \cdot \lambda_2 \cdots \lambda _{n-1}. $$ Now suppose you add the identity matrix. The determinant changes to $$\det(B) = (\lambda_0+1) \cdot (\lambda_1+1) \cdot (\lambda_2+1) \cdots (\lambda _{n-1} +1).$$ I think it is obvious how irregularly the result depends on the given eigenvalues of $A$. If some $\lambda_k=0$, then $\det(A)=0$, but that zero factor changes to $(\lambda_k+1)$ and $\det(B)$ need not be zero. The other way round: if some factor $\lambda_k=-1$, then adding $I$ makes that factor $\lambda_k+1=0$ and the determinant $\det(B)$ becomes zero. If some $0 \gt \lambda_k \gt -1$, then the determinant may change its sign... So there is no hope of making one single statement about the behavior of $B$ relative to $A$ - except what @pritam linked to, or unless you would accept a statement like $$\det(A)=e_n(\Lambda) \to \det(B)= \sum_{j=0}^n e_j(\Lambda) $$ where $ \Lambda = \{\lambda_k\}_{k=0..n-1} $ and $e_j(\Lambda)$ denotes the $j$'th elementary symmetric polynomial over $\Lambda$... (And this is only for diagonalizable matrices.)
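A tiny illustration of the point (pure Python, diagonal matrices only, so the determinant is just the product of the diagonal eigenvalues):

```python
from math import prod

def det_diag(eigs):
    """Determinant of a diagonal matrix with the given eigenvalues."""
    return prod(eigs)

eigs = [2, -1, 3]
print(det_diag(eigs))                      # det(A)   = -6
print(det_diag([x + 1 for x in eigs]))     # det(A+I) =  0   (the -1 eigenvalue kills it)
print(det_diag([x - 1 for x in eigs]))     # det(A-I) = -4   (no visible relation to -6)
```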
{ "language": "en", "url": "https://math.stackexchange.com/questions/256969", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 6, "answer_id": 1 }
Interesting matrix Let $a(k,m)$, $k,m\geq 0$, be an infinite matrix. The set $$T_k=\{(a(k,0),a(k,1),\dots,a(k,i),\dots),\,(a(k,0),a(k+1,1),\dots,a(k+i,i),\dots)\}$$ is called the angle of the matrix at $a(k,0)$; $a(k,0)$ is the edge of $T_k$; $a(k,i)$ and $a(k+i,i)$, $i>0$, are conjugate elements of $T_k$; $(a(k,0),a(k,1),\dots,a(k,i),\dots)$ is the horizontal ray of $T_k$; $(a(k,0),a(k+1,1),\dots,a(k+i,i),\dots)$ is the diagonal ray of $T_k$. The elements of the diagonal ray of $T_0$ are $1$; the elements above the diagonal ray of $T_0$ are $0$; the elements of the edge of $T_k$, $k>0$, are $0$; each element of the diagonal ray of $T_k$, $k>0$, is the sum of its conjugate and the elements of the horizontal ray of $T_k$ placed to the left. Prove that the sum of the elements of row $k$ is the partition function $p(k)$.
This is a very unnecessarily complicated, ambiguous and partly erroneous reformulation of a simple recurrence relation for the number $a(k,m)$ of partitions of $k$ with greatest part $m$. It works out if the following changes and interpretations are made: * *both instances of $k\gt1$ are replaced by $k\ge1$, *"his conjugate" is interpreted as "its upper conjugate" (each entry has two conjugates), and *"on the left" is interpreted as "to the left of its upper conjugate". The resulting recurrence relation is $$ a(k,m)=\sum_{i=1}^ma(k-m,i)\;, $$ which simply states that a partition of $k$ with greatest part $m$ arises by adding a part $m$ to any partition of $k-m$ with greatest part not greater than $m$.
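The recurrence is easy to implement and check against the partition numbers (a sketch with the boundary conventions chosen as in the interpretation above: $a(0,0)=1$, $a(k,m)=0$ for $m>k$ or $m\le0$):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def a(k, m):
    """Number of partitions of k with greatest part m, via
    a(k, m) = sum_{i=1}^{m} a(k - m, i)."""
    if k == 0 and m == 0:
        return 1
    if k <= 0 or m <= 0 or m > k:
        return 0
    if m == k:
        return 1                    # the single-part partition [k]
    return sum(a(k - m, i) for i in range(1, m + 1))

def p(k):
    """Row sum = partition function p(k)."""
    return sum(a(k, m) for m in range(1, k + 1))

print([p(k) for k in range(1, 11)])   # 1, 2, 3, 5, 7, 11, 15, 22, 30, 42
```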
{ "language": "en", "url": "https://math.stackexchange.com/questions/257039", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Limiting distribution Let $(q_n)_{n>0}$ be a real sequence such that $0<q_n<1$ for all $n>0$ and $\lim_{n\to \infty} q_n = 0$. For each $n > 0$, let $X_n$ be a random variable, such that $P[X_n =k]=q_n(1−q_n)^{k−1}, (k=1,2,...)$. Prove that the limit distribution of $\frac{X_n}{\mathbb{E}[X_n]}$ is exponential with parameter 1. I see that $\mathbb{E}[X_n] = \frac{1}{q_n}$ but after that I don't really know where to go from there. Are there any tips please?
First we calculate the characteristic function of $X_n$: $$\Phi_{X_n}(\xi) = \sum_{k=1}^\infty \underbrace{q_n}_{(q_n-1)+1} \cdot (1-q_n)^{k-1} \cdot e^{\imath \, k \cdot \xi} = - \sum_{k=1}^\infty (1-q_n)^k \cdot (e^{\imath \, \xi})^k+e^{\imath \, \xi} \sum_{k=1}^\infty (1-q_n)^{k-1} \cdot e^{\imath \, (k-1) \cdot \xi} \\ = - \left( \frac{1}{1-(1-q_n) \cdot e^{\imath \, \xi}} - 1 \right) + e^{\imath \, \xi} \cdot \left( \frac{1}{1-(1-q_n) \cdot e^{\imath \, \xi}} \right) \\ = \frac{1}{1-(1-q_n) \cdot e^{\imath \, \xi}} \cdot (-1+(1-(1-q_n) \cdot e^{\imath \, \xi})+e^{\imath \, \xi}) = \frac{q_n \cdot e^{\imath \, \xi}}{1-(1-q_n) \cdot e^{\imath \, \xi}}$$ From this we easily obtain the characteristic function of $Y_n := \frac{X_n}{\mathbb{E}X_n} = q_n \cdot X_n$: $$\Phi_{Y_n}(\xi) = \Phi_{X_n}(\xi \cdot q_n) = \frac{q_n \cdot e^{\imath \, q_n \cdot \xi}}{1-(1-q_n) \cdot e^{\imath \, q_n \cdot \xi}}$$ Now we let $n \to \infty$; this is a $\frac{0}{0}$ form in $q_n$, so applying the Bernoulli–l'Hôpital rule (differentiating numerator and denominator with respect to $q_n$) gives $$ \lim_{n \to \infty} \Phi_{Y_n}(\xi) = \lim_{n \to \infty} \frac{e^{\imath \, q_n \cdot \xi} \cdot (1+q_n \cdot \imath \, \xi)}{-e^{\imath \, q_n \cdot \xi} \cdot (\imath \, \xi \cdot (1-q_n) -1)} = \frac{1}{1-\imath \xi}$$ (since $q_n \to 0$ as $n \to \infty$). Thus we have shown that the characteristic functions converge pointwise to the characteristic function of the exponential distribution with parameter 1. By Lévy's continuity theorem we obtain $Y_n \to \text{Exp}(1)$ in distribution.
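A simulation backs this up (a sketch with NumPy; `rng.geometric` draws from exactly the pmf above, supported on $k=1,2,\dots$):

```python
import numpy as np

rng = np.random.default_rng(0)
q = 0.001                              # a small q_n, so E[X_n] = 1/q is large
x = rng.geometric(q, size=200_000)     # P[X = k] = q (1-q)^(k-1)
y = x * q                              # Y_n = X_n / E[X_n]

for t in (0.5, 1.0, 2.0):
    print(t, np.mean(y <= t), 1 - np.exp(-t))   # empirical CDF vs Exp(1) CDF
```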
{ "language": "en", "url": "https://math.stackexchange.com/questions/257092", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
countable group, uncountably many distinct subgroups? I need to know whether the following statement is true or false: Every countable group $G$ has only countably many distinct subgroups. I have not found any counterexample to disprove the statement, but I have a vague idea for a disproof: if $G$ has uncountably many distinct subgroups, must it have uncountably many elements?
Let $(\mathbb{Q},+)$ be the group of the rational numbers under addition. For any set $A$ of primes, let $G_A$ be the set of all rationals $a/b$ (in lowest terms) such that every prime factor of the denominator $b$ is in $A$. It is clear that $G_A$ is a subgroup of $\mathbb{Q}$, and that $G_A = G_{A'}$ iff $A = A'$. Since there are uncountably many sets of primes, this produces uncountably many distinct subgroups of the countable group $\mathbb{Q}$.
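For concreteness, membership in $G_A$ can be tested in a few lines (a sketch; `in_G` is a helper name I made up):

```python
from fractions import Fraction

def in_G(r, A):
    """Membership test for G_A: every prime factor of r's denominator lies in A."""
    d = r.denominator
    for p in A:
        while d % p == 0:
            d //= p
    return d == 1

A = {2, 5}
a, b = Fraction(3, 20), Fraction(7, 8)                       # both lie in G_A
print(in_G(a, A), in_G(b, A), in_G(a + b, A), in_G(-a, A))   # closed under + and -
print(in_G(Fraction(1, 3), A))                               # False: 3 is not in A
```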
{ "language": "en", "url": "https://math.stackexchange.com/questions/257175", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "34", "answer_count": 3, "answer_id": 1 }
Evaluate $\lim_{n\to\infty}\sum_{k=1}^{n}\frac{k}{n^2+k^2}$ Considering the sum as a Riemann sum, evaluate $$\lim_{n\to\infty}\sum_{k=1}^{n}\frac{k}{n^2+k^2} .$$
$$\sum_{k=1}^n\frac{k}{n^2+k^2}=\frac{1}{n^2}\sum_{k=1}^n\frac{k}{1+\left(\frac{k}{n}\right)^2}=\frac{1}{n}\sum_{k=1}^n\frac{\frac{k}{n}}{1+\left(\frac{k}{n}\right)^2}\xrightarrow [n\to\infty]{}\int_0^1\frac{x}{1+x^2}\,dx=\ldots$$
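The remaining integral evaluates to $\tfrac12\ln(1+x^2)\big|_0^1=\tfrac12\ln 2$, and the partial sums do converge to it (a quick check; `partial_sum` is a name of mine):

```python
import math

def partial_sum(n):
    return sum(k / (n ** 2 + k ** 2) for k in range(1, n + 1))

limit = 0.5 * math.log(2)   # value of the integral of x/(1+x^2) over [0, 1]
for n in (10, 100, 10_000):
    print(n, partial_sum(n), limit)
```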
{ "language": "en", "url": "https://math.stackexchange.com/questions/257248", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 3, "answer_id": 2 }
Conditions for matrix similarity Two things that are not clear to me from the Wikipedia page on "Matrix similarity": * *If the geometric multiplicity of an eigenvalue is different in two matrices $A$ and $B$ then $A$ and $B$ are not similar? *If all eigenvalues of $A$ and $B$ coincide, together with their algebraic and geometric multiplicities, then $A$ and $B$ are similar? Thanks!
Intuitively, if $A,B$ are similar matrices, then they represent the same linear transformation, but in different bases. Using this concept, it must be that the eigenvalue structure of two similar matrices is the same, since the existence of eigenvalues/eigenvectors does not depend on the choice of basis. So, to answer (1), if the eigenvalue structure is different, such as having different multiplicities, then $A,B$ cannot be similar. To address (2), the answer is no. Even with matching geometric and algebraic multiplicities, $A,B$ may have different Jordan block structure; for example, $A$ could have three Jordan blocks of size $2,2,2$, and $B$ could have three Jordan blocks of size $3,2,1$. Then the algebraic and geometric multiplicities of both would be, respectively, $6$ and $3$. However, if both $A,B$ are diagonalizable, then $A = PDP^{-1}$ and $B = QDQ^{-1}$, where $D$ is the diagonal matrix, and hence $A = PQ^{-1} BQP^{-1}$, so in this more specific case, they would be similar.
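The Jordan-block counterexample can be checked numerically (a sketch; `jordan_nilpotent` is a hypothetical helper): similar matrices must have equal ranks of every power, but here $\mathrm{rank}(A^2)\ne\mathrm{rank}(B^2)$ even though the multiplicities match.

```python
import numpy as np

def jordan_nilpotent(sizes):
    """Block-diagonal nilpotent matrix with Jordan blocks (eigenvalue 0) of given sizes."""
    n = sum(sizes)
    J = np.zeros((n, n))
    pos = 0
    for s in sizes:
        for i in range(s - 1):
            J[pos + i, pos + i + 1] = 1
        pos += s
    return J

A = jordan_nilpotent([2, 2, 2])   # eigenvalue 0: alg. mult. 6, geom. mult. 3
B = jordan_nilpotent([3, 2, 1])   # same multiplicities, different block sizes

print(np.linalg.matrix_rank(A), np.linalg.matrix_rank(B))          # 3 3 (geom. mult. 6-3)
print(np.linalg.matrix_rank(A @ A), np.linalg.matrix_rank(B @ B))  # 0 1: not similar
```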
{ "language": "en", "url": "https://math.stackexchange.com/questions/257322", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 1, "answer_id": 0 }
Building a space with given homology groups Let $m \in \mathbb{N}$. Can we have a CW complex $X$ of dimension at most $n+1$ such that $\tilde{H_i}(X)$ is $\mathbb{Z}/m\mathbb{Z}$ for $i =n$ and zero otherwise?
To expand on the comments: The $i$-th homology of a cell complex is defined to be $\mathrm{ker}\partial_{i} / \mathrm{im}\partial_{i+1}$ where $\partial_{i+1}$ is the boundary map from the $i+1$-th chain group to the $i$-th chain group. Geometrically, this map is the attaching map that identifies the boundary of the $i+1$ cells with points on the $i$-cells. For example you could identify the boundary of a $2$-cell (a disk) with points on a $1$-cell (a line segment). In practice you construct a cell complex inductively so you will have already identified the end bits of the line segment with some $0$-cells (points). Assume the zero skeleton is just one point and you attach one line segment. Then we have $S^1$ and identify the boundary of $D^2$ with it. This attaching map is a map $f: S^1 \to S^1$. You can do this in many ways, the most obvious is the identity map. This map has degree one. The resulting space is a disk, which is contractible, so all of its reduced homology groups vanish. If you take $f: S^1 \to S^1$ to be the map $t \mapsto 2t$ you wrap the boundary around twice and what you get is the real projective plane, which has the homology you want in $i=1$, namely $\mathbb Z/2\mathbb Z$ (check it). See here for the degree of a map. Now generalise: attach an $(n+1)$-cell to $S^n$ along a degree-$m$ map $S^n \to S^n$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/257412", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
differentiability and compactness I have no idea how to show whether this statement is false or true: If every differentiable function on a subset $X\subseteq\mathbb{R}^n$ is bounded then $X$ is compact. Thank you
Some hints: * *By the Heine-Borel property for Euclidean space, $X$ is compact if and only if $X$ is closed and bounded. *My inclination is to prove the contrapositive: If $X$ is not compact, then there exists a differentiable function on $X$ which is unbounded. *If $X$ is not compact, then either it isn't bounded, or it isn't closed. As a first step, perhaps show why the contrapositive statement must be true if $X$ isn't bounded?
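To make the two failure modes in the hints concrete (a sketch): if $X=(0,1]$ is bounded but not closed, $f(x)=1/x$ is differentiable on $X$ yet unbounded; if $X$ is closed but unbounded, $f(x)=x$ already does the job.

```python
f = lambda x: 1 / x   # differentiable on X = (0, 1], but unbounded near 0
values = [f(10.0 ** -k) for k in range(1, 6)]
print(values)         # grows without bound as x -> 0+
```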
{ "language": "en", "url": "https://math.stackexchange.com/questions/257473", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Find $F'(x)$ given $ \int_x^{x+2} (4t+1) \ \mathrm{dt}$ Given the problem find $F'(x)$: $$ \int_x^{x+2} (4t+1) \ \mathrm{dt}$$ I just feel stuck and don't know where to go with this, we learned the second fundamental theorem of calculus today but i don't know where to plug it in. What i did: * *chain rule doesn't really take into effect here(*1) so just replace t with $x$ *$F'(x) = 4x + 1$ though the answer is just 8, what am i doing wrong?
Let $g(t)=4t+1$, and let $G(t)$ be an antiderivative of $g(t)$. Note that $$F(x)=G(x+2)-G(x).\tag{$1$}$$ In this case, we could easily find $G(t)$. But let's not, let's differentiate $F(x)$ immediately. Since $G'(t)=g(t)=4t+1$, we get $$F'(x)=g(x+2)-g(x)=[4(x+2)+1]-[4x+1].$$ This right-hand side simplifies to $8$.
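A quick numerical cross-check that the derivative really is the constant $8$ (a sketch; the antiderivative $G(t)=2t^2+t$ is one valid choice):

```python
def F(x):
    G = lambda t: 2 * t ** 2 + t     # an antiderivative of g(t) = 4t + 1
    return G(x + 2) - G(x)

h = 1e-6
for x in (-3.0, 0.0, 5.0):
    print((F(x + h) - F(x - h)) / (2 * h))   # about 8 at every x
```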
{ "language": "en", "url": "https://math.stackexchange.com/questions/257520", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Can the graph of a bounded function ever have an unbounded derivative? Can the graph of a bounded function ever have an unbounded derivative? I want to know if $f$ has bounded variation then its derivative is bounded. The converse is obvious. I think the answer is "yes". If the graph were to have an unbounded derivative, it would coincide with a vertical line.
Oh, sure. I'm sure there are lots of examples, but because of the work I do, some modification of the entropy function comes to mind. Consider the following function: $$ f:\mathbb{R}\rightarrow\mathbb{R}, \quad f(x) \triangleq \begin{cases} x \log |x| & x \neq 0 \\ 0 & x = 0 \end{cases} $$ It is not difficult to verify that this function is continuous and has a derivative of $$f'(x) = \log|x| + 1$$ for nonzero $x$. So $f'(x)$ is unbounded at the origin; but $f(x)$ is unbounded as well, so we're not quite there. We can create the function we seek by multiplying $f$ by a well-chosen envelope that drives the function to $0$ at the extremes. For instance: $$ g:\mathbb{R}\rightarrow\mathbb{R}, \quad g(x) \triangleq e^{-x^2} f(x) = \begin{cases} x e^{-x^2} \log |x| & x \neq 0 \\ 0 & x = 0 \end{cases} $$ The first derivative for nonzero $x$ is $$ g'(x) = e^{-x^2} \cdot \left( ( 1 - 2 x^2 ) \log|x| + 1 \right) $$ which remains unbounded. Attached is a plot of $g$ and $g'$. EDITED to add: I notice that a number of other answers have chosen a bounded domain. From my perspective that is a bit incomplete. After all, we often consider such functions using the extended real number line, and in that context they are not bounded. There are certainly many functions that satisfy the original poster's conditions without resorting to a bounded domain.
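A numerical sketch of the claim (assuming the formulas above): $g$ stays small near the origin while $|g'|$ grows like $|\log x|$.

```python
import math

def g(x):
    return 0.0 if x == 0 else x * math.exp(-x * x) * math.log(abs(x))

def g_prime(x):
    return math.exp(-x * x) * ((1 - 2 * x * x) * math.log(abs(x)) + 1)

xs = [10.0 ** -k for k in range(1, 12)]
print([round(g(x), 6) for x in xs])        # stays tiny near 0
print([round(g_prime(x), 1) for x in xs])  # |g'| grows like |log x|
```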
{ "language": "en", "url": "https://math.stackexchange.com/questions/257584", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "25", "answer_count": 5, "answer_id": 0 }
Find the rank of Hom(G,Z)? (1) Prove that for any finitely generated abelian group G, the set Hom(G, Z) is a free Abelian group of finite rank. (2) Find the rank of Hom(G,Z) if the group G is generated by three generators x, y, z with relations 2x + 3y + z = 0, 2y - z = 0
(i) Apply the structure theorem: write $$G \simeq \mathbb{Z}^r \oplus_i \mathbb{Z}/d_i$$ Now from here we compute $$Hom(\mathbb{Z}^r \oplus_i \mathbb{Z}/d_i, \mathbb{Z}) \simeq Hom(\mathbb{Z}^r, \mathbb{Z}) \oplus_i Hom(\mathbb{Z}/d_i, \mathbb{Z}) \simeq \mathbb{Z}^r$$ (ii) We just need to find the rank of the free part of $G$, we have it cut out as the cokernel of the map $\mathbb{Z}^2 \rightarrow \mathbb{Z}^3$, given by $$\begin{pmatrix} 2 & 0 \\ 3 & 2 \\ 1 & -1 \end{pmatrix} \sim \begin{pmatrix} 2 & 2 \\ 3 & 5 \\ 1 & 0 \end{pmatrix} \sim \begin{pmatrix} 0 & 2 \\ 0 & 5 \\ 1 & 0 \end{pmatrix} \sim \begin{pmatrix} 0 & 0\\ 0 & 1 \\ 1 & 0 \end{pmatrix}$$ So if I didn't goof that up, our group is simply $\mathbb{Z}^3/\mathbb{Z}^2 \simeq \mathbb{Z}$, and hence Hom is again $\mathbb{Z}$.
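The column reduction can be double-checked without doing the row operations by hand: the invariant factors of the presentation matrix are $d_1$ and $d_2/d_1$, where $d_k$ is the gcd of all $k\times k$ minors (a stdlib sketch; `math.gcd` with several arguments needs Python 3.9+):

```python
from math import gcd
from itertools import combinations

# columns = the relations 2x + 3y + z = 0 and 2y - z = 0
M = [[2, 0],
     [3, 2],
     [1, -1]]

d1 = gcd(*(abs(e) for row in M for e in row))
minors = [M[i][0] * M[j][1] - M[i][1] * M[j][0] for i, j in combinations(range(3), 2)]
d2 = gcd(*(abs(m) for m in minors))
print(d1, d2 // d1)   # invariant factors 1, 1: cokernel is Z, so Hom(G, Z) = Z
```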
{ "language": "en", "url": "https://math.stackexchange.com/questions/257752", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Kullback-Liebler divergence The Kullback-Liebler divergence between two distributions with pdfs $f(x)$ and $g(x)$ is defined by $$\mathrm{KL}(F;G) = \int_{-\infty}^{\infty} \ln \left(\frac{f(x)}{g(x)}\right)f(x)\,dx$$ Compute the Kullback-Lieber divergence when $F$ is the standard normal distribution and $G$ is the normal distribution with mean $\mu$ and variance $1$. For what value of $\mu$ is the divergence minimized? I was never instructed on this kind of divergence so I am a bit lost on how to solve this kind of integral. I get that I can simplify my two normal equations in the natural log but my guess is that I should wait until after I take the integral. Any help is appreciated.
I cannot comment (not enough reputation). Vincent: You have the wrong pdf for $g(x)$, you have a normal distribution with mean 1 and variance 1, not mean $\mu$. Hint: You don't need to solve any integrals. You should be able to write this as pdf's and their expected values, so you never need to integrate. Outline: Firstly, $ \log\left({f(x) \over g(x) }\right)= -{1 \over 2} \left( x^2 - (x-\mu )^2 \right) $. Expand and simplify; you don't even need to write out $f(x)$ explicitly again. See where that takes you.
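Following the outline, the expectation under $f$ comes out to $\mu^2/2$, minimized at $\mu=0$. A numerical integration agrees (a sketch; `kl_normal_shift` is a name of mine):

```python
import math

def kl_normal_shift(mu, n=100_000, lim=12.0):
    """Midpoint-rule value of KL(N(0,1) || N(mu,1)); should come out mu^2 / 2."""
    h = 2 * lim / n
    total = 0.0
    for i in range(n):
        x = -lim + (i + 0.5) * h
        f = math.exp(-x * x / 2) / math.sqrt(2 * math.pi)
        log_ratio = -0.5 * (x * x - (x - mu) ** 2)    # log f(x) - log g(x)
        total += f * log_ratio * h
    return total

for mu in (0.0, 1.0, 2.0):
    print(mu, kl_normal_shift(mu), mu ** 2 / 2)
```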
{ "language": "en", "url": "https://math.stackexchange.com/questions/257821", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Properties of $\det$ and $\operatorname{trace}$ given a $4\times 4$ real valued matrix Let $A$, be a real $4 \times 4$ matrix such that $-1,1,2,-2$ are its eigenvalues. If $B=A^4-5A^2+5I$, then which of the following are true? * *$\det(A+B)=0$ *$\det (B)=1$ *$\operatorname{trace}(A-B)=0 $ *$\operatorname{trace}(A+B)=4$ Using Cayley-Hamilton I get $B=I$, and I know that $\operatorname{trace}(A+B)=\operatorname{trace}(A)+\operatorname{trace}(B)$. From these facts we can obtain easily about 2,3,4 but I am confused in 1. How can I verify (1)? Thanks for your help.
The characteristic equation of $A$ is given by $(t-1)(t+1)(t+2)(t-2)=0$, which implies $t^{4}-5t^{2}+4=0$. Now $A$ must satisfy its characteristic equation, which gives $A^{4}-5A^{2}+4I=0$, and so we see that $B=A^{4}-5A^{2}+4I+I=0+I=I$. Hence, the eigenvalues of $(A+B)$ are given by $(-1+1),(1+1),(2+1),(-2+1)$, that is $0,2,3,-1$. [Without loss of generality, one can take $A$ to be a diagonal matrix, which would change neither the trace nor the determinant.] So we see that $\det(A+B)$ is the product of its eigenvalues, which is $0$. Also we see that the trace of $(A+B)$ is the sum of its eigenvalues, which is $(0+2+3-1)=4$. Also, $B$ being the identity matrix, $\det(B)=1$. So the options $(1)$, $(2)$ and $(4)$ are true.
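The whole argument can be replayed numerically on the diagonal representative (a sketch with NumPy):

```python
import numpy as np

A = np.diag([-1.0, 1.0, 2.0, -2.0])   # WLOG diagonal: trace and det only see eigenvalues
B = np.linalg.matrix_power(A, 4) - 5 * np.linalg.matrix_power(A, 2) + 5 * np.eye(4)

print(np.allclose(B, np.eye(4)))   # True: B = I
print(np.linalg.det(A + B))        # 0  -> (1) holds
print(np.linalg.det(B))            # 1  -> (2) holds
print(np.trace(A - B))             # -4 -> (3) fails
print(np.trace(A + B))             # 4  -> (4) holds
```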
{ "language": "en", "url": "https://math.stackexchange.com/questions/257912", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 5, "answer_id": 1 }
Vector perpendicular to timelike vector must be spacelike? Given $\mathbb{R}^4$, we define the Minkowski inner product on it by $$ \langle v,w \rangle = -v_1w_1 + v_2w_2 + v_3w_3 + v_4w_4$$ We say a vector is spacelike if $ \langle v,v\rangle >0 $, and it is timelike if $ \langle v,v \rangle < 0 $. How can I show that if $v$ is timelike and $ \langle v,w \rangle = 0$ , then $w$ is either the zero vector or spacelike? I've tried to use the polarization identity, but don't have any information regarding the $\langle v+w,v+w \rangle$ term in the identity. Context: I'm reading a book on Riemannian geometry, and the book gives a proof of a more general result: if $z$ is timelike, then its perpendicular subspace $z^\perp$ is spacelike. It does so using arguments regarding the degeneracy index of the subspace, which I don't fully understand. Since the statement above seems fairly elementary, I was wondering whether it would be possible to give an elementary proof of it as well. Any help is appreciated!
The accepted answer by @user1551 is certainly good, but an intuitive physical explanation may be needed, I think. A timelike vector in special relativity can be thought of as some kind of velocity of some object. And we can find a particular reference frame in which the object is at rest, i.e. with only time component non-zero. With appropriate normalization, the coordinate components of the timelike vector $v$ are $$ v=(1,0,0,0) $$ which means $v$ is actually the first basis vector of this reference frame. And the other three basis vectors were already there when we specified this frame. So, "extending the timelike vector $v$ to an orthonormal basis" physically means a choice of inertial reference frame. What then follows is trivial. Since $v$'s only non-zero component is the time component and $\langle v,w \rangle=0$, $w$'s time component must be zero. Then it's either the zero vector or spacelike.
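The claim is also easy to stress-test numerically (a sketch; `minkowski` is a helper name of mine — given a timelike $v$, solve $\langle v,w\rangle=0$ for $w$'s time component and check the result is spacelike):

```python
import numpy as np

eta = np.diag([-1.0, 1.0, 1.0, 1.0])      # matrix of the Minkowski inner product

def minkowski(v, w):
    return v @ eta @ w

rng = np.random.default_rng(1)
for _ in range(300):
    v = rng.normal(size=4)
    # force v timelike: |v_0| strictly larger than the spatial norm
    v[0] = (1.0 if v[0] >= 0 else -1.0) * (np.linalg.norm(v[1:]) + rng.random() + 0.1)
    assert minkowski(v, v) < 0
    w = rng.normal(size=4)
    w[0] = (v[1:] @ w[1:]) / v[0]          # solve <v, w> = 0 for the time component
    assert abs(minkowski(v, w)) < 1e-9
    assert minkowski(w, w) > 0             # w came out spacelike every time
print("orthogonal complement of a timelike vector: spacelike, as claimed")
```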
{ "language": "en", "url": "https://math.stackexchange.com/questions/257980", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 2 }
Consider the quadratic form $q(x,y,z)=4x^2+y^2−z^2+4xy−2xz−yz $ over $\mathbb{R}$ then which of the following are true Consider the quadratic form $q(x,y,z)=4x^2+y^2-z^2+4xy-2xz-yz$ over $\mathbb{R}$. Then which of the following are true? 1. range of $q$ contains $[1,\infty)$ 2. range of $q$ is contained in $[0,\infty)$ 3. range $=\mathbb{R}$ 4. range is contained in $[-N, \infty)$ for some large natural number $N$ depending on $q$ I am completely stuck on it. How should I solve this problem?
If you consider that for $x=0$ and $y=0$ we have that $q$ maps onto $(-∞,0]$ because $q(0,0,z)=-z^2$, and for $x=0$,$z=0$ we have that $q$ maps onto $[0,∞)$, then as a whole $q$ maps onto $(-∞,∞) = \mathbb{R}$.
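The two one-variable slices are easy to tabulate (a quick sketch):

```python
def q(x, y, z):
    return 4 * x * x + y * y - z * z + 4 * x * y - 2 * x * z - y * z

print([q(0, 0, t) for t in range(4)])   # 0, -1, -4, -9: arbitrarily negative
print([q(0, t, 0) for t in range(4)])   # 0, 1, 4, 9: arbitrarily positive
```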
{ "language": "en", "url": "https://math.stackexchange.com/questions/258058", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
Noetherian module implies Noetherian ring? I know that a finitely generated $R$-module $M$ over a Noetherian ring $R$ is Noetherian. I wonder about the converse. I believe it has to be false and I am looking for counterexamples. Also, I wonder: does $M$ Noetherian imply that $R$ is Noetherian? And does $M$ Noetherian imply that $M$ is finitely generated? That is, do both implications fail or only one of them?
Let $R$ be a commutative non-Noetherian ring and let $\mathcal m$ be a maximal ideal. Then $R/\mathcal m$ is finitely generated and Noetherian - it only has two sub-$R$-modules. Note that, even if $R$ isn't Noetherian, it contains a maximal ideal, by Krull's Theorem.
{ "language": "en", "url": "https://math.stackexchange.com/questions/258131", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 4, "answer_id": 1 }
Finding a Pythagorean triple $a^2 + b^2 = c^2$ with $a+b+c=40$ Let's say you're asked to find a Pythagorean triple $a^2 + b^2 = c^2$ such that $a + b + c = 40$. The catch is that the question is asked at a job interview, and you weren't expecting questions about Pythagorean triples. It is trivial to look up the answer. It is also trivial to write a computer program that would find the answer. There is also plenty of material written about the properties of Pythagorean triples and methods for generating them. However, none of this would be of any help during a job interview. How would you solve this in an interview situation?
$$a^2=(c-b)(c+b) \Rightarrow b+c = \frac{a^2}{c-b}$$ $$a+\frac{a^2}{c-b}=40$$ For simplicity let $c-b=\alpha$; then $$a^2+\alpha a-40\alpha =0$$ Since this equation has integral solutions, $$\Delta=\alpha^2+160 \alpha$$ is a perfect square. Thus $$\alpha^2+160 \alpha =\beta^2$$ Or $$(\alpha+80)^2=\beta^2+80^2 \,.$$ This way we reduced the problem to finding all Pythagorean triples of the type $(80, ??, ??)$. This can be easily done if you know the formula, or by setting $$80^2=(\alpha+80-\beta)(\alpha+80+\beta)$$ and solving.
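And if whiteboard algebra fails under interview pressure, a brute-force search is three lines and confirms the unique answer $(8,15,17)$:

```python
# Brute-force: all ordered triples a <= b with a + b + c = 40 and a^2 + b^2 = c^2
triples = [(a, b, 40 - a - b)
           for a in range(1, 40)
           for b in range(a, 40 - a)
           if a * a + b * b == (40 - a - b) ** 2]
print(triples)   # [(8, 15, 17)]
```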
{ "language": "en", "url": "https://math.stackexchange.com/questions/258204", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10", "answer_count": 6, "answer_id": 2 }
Estimation with method of maximum likelihood Can anybody help me to generate the estimator of equation: $$Y_i = \beta_0 + \beta_1X_{i1} + \beta_2X_{i2}+\cdots+\beta_4X_{i4}+\varepsilon_i$$ using method of maximum likelihood, where $\varepsilon_i$ are independent variables which have normal distribution $N(0,\sigma^2)$
This is given by least squares estimation. To see this, write $$ L(\beta, \sigma^2 | Y) = \prod_i (2\pi\sigma^2)^{-1/2} \exp\left(\frac {-1} {2\sigma^2} (Y_i - \beta_0 - \sum_j \beta_j X_{ij})^2\right) = (2\pi\sigma^2)^{-n/2} \exp\left(\frac {-1} {2\sigma^2} \sum_i(Y_i - \beta_0 - \sum_j \beta_j X_{ij})^2\right) $$ Maximization of this is equivalent to minimization of $\sum_i (Y_i - \beta_0 - \sum_j\beta_j X_{ij})^2$, which is the definition of least squares estimation. If $\mathbf X$ is such that $\langle \mathbf X\rangle_{ij} = X_{ij}$ and $\mathbf Y$ is such that $\langle\mathbf Y\rangle_{i} = Y_{i}$ then we can rewrite this as the minimization of $Q(\beta) = \|\mathbf Y - \mathbf X\beta\|^2$ with respect to $\beta$, and there are several ways to see that $\hat \beta = (\mathbf X^T \mathbf X)^{-1} \mathbf X^T \mathbf Y$ is the minimizer. The most expedient is probably to calculate $\partial Q/ \partial \beta$ and set equal to $0$ which gives $$ -2 \mathbf X^T \mathbf Y + 2 (\mathbf X^T \mathbf X) \beta = 0 $$ and note that $Q$ is a strictly convex function of $\beta$ which ensures that the solution to this equation is a minimum of $Q$. Note however that we are relying on $\mathbf X^T \mathbf X$ being invertible, i.e. $\mathbf X$ has linearly independent columns; if this fails, $Q$ is not strictly convex and any generalized inverse $(\mathbf X^T\mathbf X)^-$ can be used to replace $(\mathbf X^T \mathbf X)^{-1}$.
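A small simulation shows the normal-equations solution recovering the true coefficients (a sketch; the design matrix and parameter values are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
X = np.column_stack([np.ones(n), rng.normal(size=(n, 4))])   # intercept + 4 regressors
beta_true = np.array([1.0, 2.0, -1.0, 0.5, 3.0])
y = X @ beta_true + rng.normal(scale=0.1, size=n)            # eps ~ N(0, 0.1^2)

# ML estimate = least squares solution of the normal equations (X^T X) b = X^T y
beta_hat = np.linalg.solve(X.T @ X, X.T @ y)
print(beta_hat)   # close to beta_true
```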
{ "language": "en", "url": "https://math.stackexchange.com/questions/258269", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Category theory text that defines composition backwards? I've always struggled with the convention that if $f:X \rightarrow Y$ and $g:Y \rightarrow Z$, then $g \circ f : X \rightarrow Z$. Constantly reflecting back and forth is inefficient. Does anyone know of a category theory text that defines composition the other way? So that $f \circ g$ means what we would normally mean by $g \circ f$.
I recall that the following textbooks on category theory have compositions written from left to right. * *Freyd, Scedrov: "Categories, Allegories", North-Holland Publishing Co., 1990 . *Manes: "Algebraic Theories", GTM 26, Springer-Verlag, 1976. *Higgins: "Notes on Categories and Groupoids", Van Nostrand, 1971 (available as TAC Reprint No 7). Other examples appear in group theory and ring theory, e.g. * *Lambek: "Lectures on rings and modules", Chelsea Publishing Co., 1976 (2ed). or several books by P.M. Cohn. But in order to avoid confusion, authors usually do not use the symbol $\circ$ for this. In particular when (as with noncommutative rings) it is helpful to have both readings available (so that module homomorphisms and scalars act on opposite sides). For instance, as far as I remember, Lambek uses $\ast$ instead.
{ "language": "en", "url": "https://math.stackexchange.com/questions/258344", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10", "answer_count": 3, "answer_id": 0 }
How to do $\frac{ \partial { \mathrm{tr}(XX^TXX^T)}}{\partial X}$ How to do the derivative \begin{equation} \frac{ \partial {\mathrm{tr}(XX^TXX^T)}}{\partial X}\quad ? \end{equation} I have no idea where to start.
By definition the derivative of $F(X)=tr(XX^TXX^T)$, at the point $X$, is the unique linear functional $DF(X):{\rm M}_{n\times n}(\mathbb{R})\to \mathbb{R}$ such that $$ F(X+H)=F(X)+DF(X)\cdot H+r(H) $$ with $\lim_{H\to 0} \frac{r(H)}{\|H\|}=0$. Let's get $DF(X)(H)$ and $r(H)$ by the expansion of $F(X+H)$. But first we must do an algebraic manipulation to expand $(X+H)(X+H)^T(X+H)(X+H)^T$. In fact, \begin{align} (X+\color{red}{H})(X+\color{red}{H})^T(X+\color{red}{H})(X+\color{red}{H})^T =& (X+\color{red}{H})(X^T+\color{red}{H}^T)\big(XX^T+X\color{red}{H}^T+\color{red}{H}X^T+\color{red}{H}\color{red}{H}^T\big) \\ =&(X+\color{red}{H})\Big(X^TXX^T+X^TX\color{red}{H}^T+X^T\color{red}{H}X^T+X^T\color{red}{H}\color{red}{H}^T \\ &\hspace{12mm}+\color{red}{H}^TXX^T+\color{red}{H}^TX\color{red}{H}^T+\color{red}{H}^T\color{red}{H}X^T+\color{red}{H}^T\color{red}{H}\color{red}{H}^T\Big) \\ =&\;\;\;\;\,XX^TXX^T+XX^TX\color{red}{H}^T+XX^T\color{red}{H}X^T+XX^T\color{red}{H}\color{red}{H}^T \\ &+X\color{red}{H}^TXX^T+X\color{red}{H}^TX\color{red}{H}^T+X\color{red}{H}^T\color{red}{H}X^T+X\color{red}{H}^T\color{red}{H}\color{red}{H}^T \\ &+\color{red}{H}X^TXX^T+\color{red}{H}X^TX\color{red}{H}^T+\color{red}{H}X^T\color{red}{H}X^T+\color{red}{H}X^T\color{red}{H}\color{red}{H}^T \\ &+\color{red}{H}\color{red}{H}^TXX^T+\color{red}{H}\color{red}{H}^TX\color{red}{H}^T+\color{red}{H}\color{red}{H}^T\color{red}{H}X^T+\color{red}{H}\color{red}{H}^T\color{red}{H}\color{red}{H}^T \end{align} Extracting $XX^TXX^T$ and the portions where $H$ or $H^T$ appears only once and applying $tr$ we have \begin{align} F(X+H)=&tr\Big( (X+\color{red}{H})(X^T+\color{red}{H}^T)(X+\color{red}{H})(X^T+\color{red}{H}^T) \Big) \\ =&\underbrace{tr \big(XX^TXX^T\big)}_{F(X)} +\underbrace{tr\big( XX^TX\color{red}{H}^T+XX^T\color{red}{H}X^T +X\color{red}{H}^TXX^T+\color{red}{H}X^TXX^T \big)}_{DF(X)\cdot H} \\ &+tr\Big(XX^T\color{red}{H}\color{red}{H}^T 
+X\color{red}{H}^TX\color{red}{H}^T+X\color{red}{H}^T\color{red}{H}X^T+X\color{red}{H}^T\color{red}{H}\color{red}{H}^T \\ &\hspace{12mm}+\color{red}{H}X^TX\color{red}{H}^T+\color{red}{H}X^T\color{red}{H}X^T+\color{red}{H}X^T\color{red}{H}\color{red}{H}^T \\ &\underbrace{\hspace{12mm}+\color{red}{H}\color{red}{H}^TXX^T+\color{red}{H}\color{red}{H}^TX\color{red}{H}^T+\color{red}{H}\color{red}{H}^T\color{red}{H}X^T+\color{red}{H}\color{red}{H}^T\color{red}{H}\color{red}{H}^T\Big)}_{r(H)} \end{align} Here $\|H\|=\sqrt{tr(HH^T)}$ is the Frobenius norm and $\displaystyle\lim_{H\to 0}\frac{r(H)}{\|H\|}=0$. Then the total derivative is \begin{align} \mathcal{D}F(X)\cdot H = & tr\bigg(XX^TXH^T\bigg)+ tr\bigg(XX^THX^T\bigg) \\ + & tr\bigg(XH^TXX^T \bigg)+ tr\bigg(HX^TXX^T \bigg). \\ \end{align} The directional derivative is $$ \frac{\partial}{\partial V}F(X)=\mathcal{D}F(X)\cdot V $$ and the partial derivative is $$ \frac{\partial}{\partial E_{ij}}F(X)=\mathcal{D}F(X)\cdot E_{ij}. $$ Here $E_{ij}$ denotes the matrix whose $(i,j)$ entry is $1$ and whose other entries are $0$.
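By cyclicity of the trace, each of the four terms above equals $\langle XX^TX, H\rangle_F$, so the gradient matrix is $4\,XX^TX$ — a simplification we can verify against finite differences (a sketch):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(4, 4))

def F(M):
    return np.trace(M @ M.T @ M @ M.T)

grad = 4 * X @ X.T @ X    # all four trace terms collapse to <X X^T X, H>_F

h = 1e-6
num = np.zeros_like(X)
for i in range(4):
    for j in range(4):
        E = np.zeros_like(X); E[i, j] = h
        num[i, j] = (F(X + E) - F(X - E)) / (2 * h)
print(np.max(np.abs(num - grad)))   # tiny: the formula matches
```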
{ "language": "en", "url": "https://math.stackexchange.com/questions/258521", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 2 }
Smallest number of games per match to get $0.8$ chance to win the match. If the first person to win $n$ games wins the match, what is the smallest value of $n$ such that $A$ has a better than $0.8$ chance of winning the match? For $A$ having a probability of $0.70$, I get smallest $n = 5$ (Meaning there must be $5$ games per match for $A$ to have a $0.8$ chance to win.) I got this by doing $1 - 0.7^5 = 0.832$. Although I would have thought it would have been lower.
Using the same method as before, with A having probability $0.7$ of winning each game, the probabilities of A winning the match are about $0.7$ for $n=1$, $0.784$ for $n=2$, $0.837$ for $n=3$, $0.874$ for $n=4$ and $0.901$ for $n=5$. So the answer is $n=3$ to exceed $0.8$. $1-0.7^5$ is the answer to the question "What is the probability B wins at least one game before A wins 5 games?"
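The table of match-win probabilities comes from a negative-binomial sum: A wins the match by winning game $n+k$ for some $k<n$ losses along the way. A short script reproduces it (helper name mine):

```python
from math import comb

def p_match(p, n):
    """P(A reaches n wins before B does), per-game win probability p."""
    q = 1 - p
    return sum(comb(n - 1 + k, k) * p ** n * q ** k for k in range(n))

for n in range(1, 6):
    print(n, round(p_match(0.7, n), 3))   # 0.7, 0.784, 0.837, 0.874, 0.901
```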
{ "language": "en", "url": "https://math.stackexchange.com/questions/258604", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Confused where and why inequality sign changes when proving probability inequality "Let A and B be two events in a sample space such that 0 < P(A) < 1. Let A' denote the complement of A. Show that if P(B|A) > P(B), then P(B|A') < P(B)." This was my proof: $$ P(B| A) > P(B) \hspace{1cm} \frac{P(B \cap A)}{P(A)} > P(B) $$ $$P(B \cap A) + P(B \cap A') = P(B) \implies P(B \cap A) = P(B) - P(B \cap A') $$ Subbing this into the above equation gives $$ P(B) - P(B \cap A') > P(B)P(A) $$ I think the inequality was supposed to change there, but I don't know why. Carrying on with the proof and dividing both sides by P(B) and rearranging gives $$ 1 - P(A) > \frac{P(B \cap A')}{P(B)} $$ $$ P(A') > \frac{P(B \cap A')}{P(B)} $$ Rearrange to get what you need: $$ P(B) < \frac{P(B \cap A')}{P(A')} = P(B |A') $$ Why does the inequality change at that point? EDIT: Figured it out. It's in the last line where the inequality holds.
In general $P(B)=P(A)P(B|A) + P(A')P(B|A')$. What happens if $P(B|A)>P(B)$ and $P(B|A')\geq P(B)$? Hint: Use $P(A)+P(A')=1$ and $P(A)>0$ and $P(A')\geq 0$ to get a contradiction. Your proof was right up to (and including) this step: $$P(A') > \frac{P(B \cap A')}{P(B)}$$ From here, multiply both sides by $\frac{P(B)}{P(A')}$ and you get: $$P(B) > \frac{P(B\cap A')}{P(A')} = P(B|A')$$ That was what you wanted to prove.
{ "language": "en", "url": "https://math.stackexchange.com/questions/258689", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Closed form for $\sum_{n=2}^\infty \frac{1}{n^2\log n}$ I had attempted to evaluate $$\int_2^\infty (\zeta(x)-1)\, dx \approx 0.605521788882.$$ Upon writing out the zeta function as a sum, I got $$\int_2^\infty \left(\frac{1}{2^x}+\frac{1}{3^x}+\cdots\right)\, dx = \sum_{n=2}^\infty \frac{1}{n^2\log n}.$$ This sum is mentioned in the OEIS. All my attempts to evaluate this sum have been fruitless. Does anyone know of a closed form, or perhaps, another interesting alternate form?
A closed form means an expression containing only elementary functions. For your case no such form exists. For more information, read these links: http://www.frm.utn.edu.ar/analisisdsys/MATERIAL/Funcion_Gamma.pdf http://en.wikipedia.org/wiki/Hölder%27s_theorem http://en.wikipedia.org/wiki/Gamma_function#19th-20th_centuries:_characterizing_the_gamma_function http://divizio.perso.math.cnrs.fr/PREPRINTS/16-JourneeAnnuelleSMF/difftransc.pdf http://www.tandfonline.com/doi/abs/10.1080/17476930903394788?journalCode=gcov20 Some background is needed for your understanding; good luck with these references.
{ "language": "en", "url": "https://math.stackexchange.com/questions/258752", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "17", "answer_count": 1, "answer_id": 0 }
Solving for $y$ with $\arctan$ I know this is a very low level question, but I honestly can't remember how this is done. I want to solve for y with this: $$ x = 2.0 \cdot \arctan\left(\frac{\sqrt{y}}{\sqrt{1 - y}}\right) $$ And I thought I could do this: $$ \frac{\sqrt{y}}{\sqrt{1 - y}} = \tan\left(\frac{x}{2.0}\right) $$ But it seems like I've done something wrong getting there. Could someone break down the process to get to $y =$ ? Again, I know this is very basic stuff, but clearly I'm not very good at this.
So, since Henry told me that I wasn't wrong, I continued and got a really simple answer. Thanks! x = 2 * arctan(sqrt(y)/sqrt(1 - y)) sqrt(y)/sqrt(1 - y) = tan(x/2) 1/(sqrt(1 - y) * sqrt(1/y)) = tan(x/2) 1/tan(x/2) = sqrt(1/y - 1) 1/(tan(x/2))^2 + 1 = 1/y y = (tan(x/2))^2/((tan(x/2))^2 + 1) Thanks again to Henry! EDIT Followed by: y = ((1 - cos(x)) / (1 + cos(x))) / (1 + (1 - cos(x))/(1 + cos(x))) = (1 - cos(x)) / ((1 + cos(x)) * (1 + (1 - cos(x))/(1 + cos(x)))) = (1 - cos(x)) / ((1 + cos(x)) + (1 - cos(x))) = (1 - cos(x)) / 2
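The round trip is easy to verify numerically (a quick sketch): start from $y$, compute $x = 2\arctan(\sqrt{y}/\sqrt{1-y})$, then recover $y$ via $(1-\cos x)/2$.

```python
import math

for y in (0.1, 0.25, 0.5, 0.9):
    x = 2 * math.atan(math.sqrt(y) / math.sqrt(1 - y))
    print(y, (1 - math.cos(x)) / 2)   # recovers y up to rounding
```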
{ "language": "en", "url": "https://math.stackexchange.com/questions/258819", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Why the Picard group of a K3 surface is torsion-free Let $X$ be a K3 surface. I want to prove that $Pic(X)\simeq H^1(X,\mathcal{O}^*_X)$ is torsion-free. From D.Huybrechts' lectures on K3 surfaces I read that if $L$ is torsion then the Riemann-Roch formula would imply that $L$ is effective. But then if a section $s$ of $L$ has zeroes then $s^k\in H^0(X,L^k)$ has also zeroes, so no positive power of $L$ can be trivial. What I am missing is how the Riemann-Roch theorem can imply that if $L$ is torsion then $L$ is effective?
If $L$ is torsion, then $L^k=O_X$ (tensor power) for some $k\ge 1$. Since $X$ is a K3 surface, its canonical bundle is trivial, so $c_1(X)=0$. Furthermore, since $X$ is regular, we get $h^1(O_X)=0$. Thus, $\chi(O_X)=2$. Now the RRT says $$\chi(L)=\chi(O_X) + \tfrac 12 c_1(L)^2$$ Thus, $\chi(O_X)=\chi(L^k)=\chi(O_X)+\tfrac 12 c_1(L^k)^2$, so $c_1(L^k)^2=0$. By general Chern polynomial lore, $c_1(L^k)=k\cdot c_1(L)$, so $c_1(L)^2=0$. But this means that $$h^0(L)-h^1(L)+h^2(L)=\chi(L) = \chi(O_X) = 2.$$ By Serre Duality, you have $H^2(X,L)\cong H^0(X,L^\ast)^\ast$. If $H^0(X,L^\ast)=H^0(X,L^{k-1})$ is nontrivial and $L\ne O_X$, then we'd be done since $H^0(X,L)$ would have to be non-trivial. Therefore, we may assume $h^2(L)=0$. Putting this all together we get $h^0(L)=2+h^1(L)> 0$ as required.
{ "language": "en", "url": "https://math.stackexchange.com/questions/258898", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10", "answer_count": 2, "answer_id": 0 }
Volume integration help A volume sits above the figure in the $xy$-plane bounded by the equations $y = \sqrt{x}$, $y = −x$ for $0 ≤ x ≤ 1$. Each $x$ cross section is a half-circle, with diameter touching the ends of the curves. What is the volume? a) Sketch the region in the $xy$ plane. b) What is the area of a cross-section at $x$? c) Write an integral for the volume. d) Find the value of the integral.
The question has been essentially fully answered by JohnD: The picture does it all. The cross section at $x$ has diameter $AB$, where $A$ is the point where the vertical line "at" $x$ meets the curve $y=\sqrt{x}$, and $B$ is the point where the vertical line at $x$ meets the line $y=-x$. So the distance $AB$ is $\sqrt{x}-(-x)$, that is, $\sqrt{x}+x$. So the radius at $x$ is $\dfrac{\sqrt{x}+x}{2}$. The area of the half-circle with this radius is $A(x)$, where $$A(x)=\frac{\pi}{2}\left(\frac{\sqrt{x}+x}{2}\right)^2.$$ The required volume is $$\int_0^1 A(x)\,dx.$$ Once the setup has been done, the rest is just computation. We want to integrate $\dfrac{\pi}{8}(\sqrt{x}+x)^2$. Expand the square, and integrate term by term.
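Carrying out the last step the answer leaves to the reader: expanding the square gives $\int_0^1 \frac{\pi}{8}(x + 2x^{3/2} + x^2)\,dx = \frac{\pi}{8}\left(\frac12 + \frac45 + \frac13\right) = \frac{49\pi}{240}$. A quick numerical check of that value (a midpoint-rule sketch, not from the original answer):

```python
import math

def cross_section_area(x):
    # Half-circle of radius (sqrt(x) + x) / 2, as set up in the answer.
    r = (math.sqrt(x) + x) / 2
    return math.pi * r * r / 2

def volume(n=200_000):
    # Midpoint-rule approximation of the integral of A(x) over [0, 1].
    h = 1.0 / n
    return sum(cross_section_area((i + 0.5) * h) for i in range(n)) * h

exact = 49 * math.pi / 240   # from integrating x + 2x^(3/2) + x^2 term by term
print(volume(), exact)
```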
{ "language": "en", "url": "https://math.stackexchange.com/questions/258977", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
why a geodesic is a regular curve? In most definitions of the geodesic, it is required to be a regular curve,i.e. a smooth curve satisfying that the tangent vector along the curve is not 0 everywhere. I don't know why.
Suppose $\gamma:[a,b]\to M$ be a smooth curve on a Riemannian manifold $M$ with Riemannian metric $\langle\cdot,\cdot\rangle$. Then we have $$\tag{1}\frac{d}{dt}\langle\frac{d\gamma}{dt},\frac{d\gamma}{dt}\rangle=\langle\frac{D}{dt}\frac{d\gamma}{dt},\frac{d\gamma}{dt}\rangle+\langle\frac{d\gamma}{dt},\frac{D}{dt}\frac{d\gamma}{dt}\rangle$$ where $\frac{D}{dt}$ is the covariant derivative along the curve $\gamma$. By definition, $\gamma$ is a geodesic if and only if $\frac{D}{dt}\frac{d\gamma}{dt}=0$ for all $t\in[a,b]$, which implies together with $(1)$ that $$\frac{d}{dt}\langle\frac{d\gamma}{dt},\frac{d\gamma}{dt}\rangle=0\mbox{ for all }t\in[a,b],$$ which implies that the function $\langle\frac{d\gamma}{dt},\frac{d\gamma}{dt}\rangle$ is constant, i.e. $$\langle\frac{d\gamma}{dt},\frac{d\gamma}{dt}\rangle=C$$ for some constant $C$. Thus, if $C\neq 0$, then by definition $\gamma$ is a regular curve. And if $C=0$, we have $\langle\frac{d\gamma}{dt},\frac{d\gamma}{dt}\rangle= 0$, or equivalently, $\frac{d\gamma}{dt}=0$ for all $t\in[a,b]$, which implies $\gamma(t)=p$ for all $t\in[a,b]$ for some point $p\in M$, i.e. $\gamma$ degenerates to a point in $M$. Therefore, if $\gamma$ is a nontrivial geodesic in the sense that it does not degenerate to a point, $\gamma$ must be a regular curve.
{ "language": "en", "url": "https://math.stackexchange.com/questions/259035", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 1, "answer_id": 0 }
Determine the PDF of $Z = XY$ when the joint pdf of $X$ and $Y$ is given The joint probability density function of random variables $ X$ and $ Y$ is given by $$p_{XY}(x,y)= \begin{cases} & 2(1-x)\,\,\,\,\,\,\text{if}\,\,\,0<x \le 1, 0 \le y \le 1 \\ & \,0\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\text{otherwise.} \end{cases} $$ Determine the probbility density function of $ Z = XY$.
There are faster methods, but it can be a good idea, at least once or twice, to calculate the cumulative distribution function, and then differentiate to find the density. The upside of doing it that way is that one can retain reasonably good control over what's happening. (There are also a number of downsides!) So we want $F_Z(z)$, the probability that $Z\le z$. Let's do the easy bits first. It is clear that $F_Z(z)=0$ if $z\le 0$. And it is almost as clear that $F_Z(z)=1$ if $z\ge 1$. So from now on we suppose that $0\lt z\lt 1$. Draw a picture of our square. For fixed $z$ between $0$ and $1$, draw the first quadrant part of the curve with equation $xy=z$. This curve is a rectangular hyperbola, with the positive $x$ and $y$ axes as asymptotes. We want the probability that $(X,Y)$ lands in the part of our square which is on the "origin side" of the hyperbola. So we need to integrate our joint density function over this region. There is some messiness in evaluating this integral: we need to break up the integral at $x=z$. We get $$F_Z(z)= \Pr(Z\le z)=\int_{x=0}^z \left(\int_{y=0}^1 (2-2x)\,dy\right)\,dx + \int_{x=z}^1 \left(\int_{y=0}^{z/x} (2-2x)\,dy\right)\,dx. $$ Not difficult after that. We get $F_Z(z)=z^2-2z\ln z$. Differentiate for the density.
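The formula $F_Z(z)=z^2-2z\ln z$ can be checked by simulation. Here is a Monte Carlo sketch (not from the original answer): the joint density $2(1-x)$ does not depend on $y$, so $X$ and $Y$ are independent, $X$ with density $2(1-x)$ (sampled by inverting its CDF $F(x)=2x-x^2$) and $Y$ uniform.

```python
import math
import random

random.seed(0)

def sample_xy():
    # Invert F(x) = 2x - x^2: solving u = 2x - x^2 gives x = 1 - sqrt(1 - u).
    x = 1 - math.sqrt(1 - random.random())
    y = random.random()   # Y is uniform on [0, 1]
    return x, y

def empirical_cdf(z, n=200_000):
    count = 0
    for _ in range(n):
        x, y = sample_xy()
        if x * y <= z:
            count += 1
    return count / n

z = 0.3
est = empirical_cdf(z)
exact = z * z - 2 * z * math.log(z)   # F_Z(z) = z^2 - 2 z ln z
print(est, exact)
```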
{ "language": "en", "url": "https://math.stackexchange.com/questions/259098", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Evaluate $\lim\limits_{x \to \infty}\left (\sqrt{\frac{x^3}{x-1}}-x\right)$ Evaluate $$ \lim_{x \to \infty}\left (\sqrt{\frac{x^3}{x-1}}-x\right) $$ The answer is $\frac{1}{2}$, have no idea how to arrive at that.
Multiply and divide by $\sqrt{x^3/(x-1)}+x$, simplify and take the limit.
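Filling in the hint: after multiplying and dividing by the conjugate, $$\sqrt{\frac{x^3}{x-1}}-x=\frac{\frac{x^3}{x-1}-x^2}{\sqrt{\frac{x^3}{x-1}}+x}=\frac{x^2}{(x-1)\left(\sqrt{\frac{x^3}{x-1}}+x\right)},$$ and as $x\to\infty$ the denominator behaves like $2x^2$, giving the limit $\frac12$. A quick numerical sketch (not part of the original answer):

```python
import math

def f(x):
    return math.sqrt(x**3 / (x - 1)) - x

# The values approach 1/2 as x grows, consistent with the conjugate trick.
for x in (1e3, 1e6, 1e9):
    print(x, f(x))
```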
{ "language": "en", "url": "https://math.stackexchange.com/questions/259210", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 6, "answer_id": 2 }
How to find $(-64\mathrm{i}) ^{1/3}$? How to find $$(-64\mathrm{i})^{\frac{1}{3}}$$ This is a complex variables question. I need help by show step by step. Thanks a lot.
For any $n\in\mathbb{Z}$, $$\left(-64i\right)^{\frac{1}{3}}=\left(64\exp\left[\left(\frac{3\pi}{2}+2\pi n\right)i\right]\right)^{\frac{1}{3}}=4\exp\left[\left(\frac{\pi}{2}+\frac{2\pi n}{3}\right)i\right]=4\exp\left[\frac{3\pi+4\pi n}{6}i\right]=4\exp \left[\frac{\left(3+4n\right)\pi}{6}i\right]$$ The cube roots in polar form are: $$4\exp\left[\frac{\pi}{2}i\right] \quad\text{or}\quad 4\exp\left[\frac{7\pi}{6}i\right] \quad\text{or}\quad 4\exp\left[\frac{11\pi}{6}i\right]$$ and in Cartesian form: $$4i \quad\text{or}\quad -2\sqrt{3}-2i \quad\text{or}\quad 2\sqrt{3}-2i$$
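The three roots can be double-checked numerically. This sketch (not part of the original answer) generates them from the polar form and cubes each one:

```python
import cmath, math

target = -64j
r = abs(target) ** (1 / 3)        # modulus of each cube root: 64^(1/3) = 4
theta = cmath.phase(target)       # principal argument of -64i
roots = [r * cmath.exp(1j * (theta + 2 * math.pi * n) / 3) for n in range(3)]
for w in roots:
    print(w, w**3)                # each w**3 should be (numerically) -64i
```

Up to rounding, the three printed roots are $4i$, $-2\sqrt3-2i$ and $2\sqrt3-2i$, matching the Cartesian forms in the answer.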
{ "language": "en", "url": "https://math.stackexchange.com/questions/259347", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 2 }
Proof of convexity from definition ($x^Tx$) I have to prove that function $f(x) = x^Tx, x \in R^n$ is convex from definition. Definition: Function $f: R^n \rightarrow R$ is convex over set $X \subseteq dom(f)$ if $X$ is convex and the following holds: $x,y \in X, 0 \leq \alpha \leq 1 \rightarrow f(\alpha x+(1-\alpha) y)) \leq \alpha f(x) + (1-\alpha)f(y)$. I got this so far: $(\alpha x + (1-\alpha)y)^T(\alpha x + (1-\alpha)y) \leq \alpha x^Tx + (1-\alpha)y^Ty$ $\alpha^2 x^Tx + 2\alpha(1-\alpha)x^Ty + (1-\alpha)^2y^Ty \leq \alpha x^Tx + (1-\alpha)y^Ty$ I don´t know how to prove this inequality. It is clear to me, that $\alpha^2 x^Tx \leq \alpha x^Tx$ and $(1-\alpha)^2y^Ty \leq (1-\alpha)y^Ty$, since $0 \leq\alpha \leq 1$, but what about $2\alpha(1-\alpha)x^Ty$? I have to prove this using the above definition. Note: In Czech, the words "convex" and "concave" may have opposite meaning as in some other languages ($x^2$ is a convex function for me!). Thanks for any help.
You can also just take the Hessian and see that it is positive definite (since this function is Gateaux differentiable); in fact this means that the function is strictly convex. To finish your computation from the definition directly, subtract the left side from the right side: $$\alpha f(x) + (1-\alpha)f(y) - f(\alpha x + (1-\alpha)y) = \alpha(1-\alpha)\left(x^Tx - 2x^Ty + y^Ty\right) = \alpha(1-\alpha)\|x-y\|^2 \ge 0,$$ which takes care of the cross term $2\alpha(1-\alpha)x^Ty$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/259415", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 2 }
Intersection of a closed subscheme and an open subscheme of a scheme Let $X$ be a scheme. Let $Z$ be a closed subscheme of $X$. Let $U$ be an open subscheme of $X$. Then $Y = U \cap Z$ is an open subscheme of $Z$. Can we identify $Y$ with $U\times_X Z$?
Yes. This doesn't have anything to do with closed subscheme. If $p: Z \to X$ is a morphism of schemes and $U \subset X$ is open subscheme, then the fibre product is $p^{-1}(U)$ with open subscheme structure.
{ "language": "en", "url": "https://math.stackexchange.com/questions/259482", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Nature of D-finite sets. A is called D-finite if A does not contain a countably infinite subset. With the above strange definition I need to show the following two properties: * *For a D-finite set A, and finite B, the union of A and B is D-finite. *The union of two D-finite sets is D-finite. By the way, can we construct such a D-finite set? Only hints... Thank you.
Hints only: The first property may be shown directly. The second however... Try showing what happens when the union of two sets is not D-finite. Hope it helps.
{ "language": "en", "url": "https://math.stackexchange.com/questions/259551", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 2 }
Eigenvalues for LU decomposition In general I know that the eigenvalues of A are not the same as U for the decomposition but for one matrix I had earlier in the year it was. Is there a special reason this happened or was it just a coincidence? The matrix was $A = \begin{bmatrix}-1& 3 &-3 \\0 &-6 &5 \\-5& -3 &1\end{bmatrix}$ with $U = \begin{bmatrix}-1& 3 &-3 \\0 &-6 &5 \\0& 0 &1\end{bmatrix}$ if needed $L = \begin{bmatrix}1& 0 &0 \\0 &1 &0 \\5& 3 &1\end{bmatrix}$ The eigenvalues are the same as $U$ which are $-1$,$-6$ and $1$. When I tried to do it the normal way I ended up with a not so nice algebra problem to work on which took way to long. Is there some special property I am missing here? If not is there an easy way to simplify $\mathrm{det}(A-\lambda I)$ that I am missing? Thank you!
It's hard to say if this is mere coincidence or part of a larger pattern. This is like asking someone to infer the next number to a finite sequence of given numbers. Whatever number you say, there is always some way to explain it. Anyway, here's the "pattern" I see. Suppose $$ A = \begin{pmatrix}B&u\\ v^T&\gamma\end{pmatrix}, $$ where * *$B$ is a 2x2 upper triangular matrix; *the two eigenvalues of $B$, say $\lambda$ and $\mu$, are distinct and $\neq\gamma$; *$u$ is a right eigenvector of $B$ corresponding to the eigenvalue $\mu$; *$v$ is a left eigenvector of $B$ corresponding to the eigenvalue $\lambda$. Then $A$ has the following LU decomposition: $$ A = \begin{pmatrix}B&u\\ v^T&\gamma\end{pmatrix} =\underbrace{\begin{pmatrix}I_2&0\\ kv^T&1\end{pmatrix}}_{L} \quad \underbrace{\begin{pmatrix}B&u\\0&\gamma\end{pmatrix}}_{U} $$ where $k=\frac1\lambda$ if $\lambda\neq0$ or $0$ otherwise. The eigenvalues of $U$ are clearly $\lambda,\mu$ and $\gamma$. Since $u$ and $v$ are right and left eigenvectors of $B$ corresponding to different eigenvalues, we have $v^Tu=0$. Therefore \begin{align} (v^T, 0)A &=(v^T, 0)\begin{pmatrix}B&u\\ v^T&\gamma\end{pmatrix} =(v^TB,\, v^Tu)=\lambda(v^T,0),\\ A\begin{pmatrix}u\\0\end{pmatrix} &=\begin{pmatrix}B&u\\ v^T&\gamma\end{pmatrix}\begin{pmatrix}u\\0\end{pmatrix} =\begin{pmatrix}Bu\\v^Tu\end{pmatrix} =\mu\begin{pmatrix}u\\0\end{pmatrix},\\ A\begin{pmatrix}\frac{1}{\gamma-\mu}u\\1\end{pmatrix} &=\begin{pmatrix}B&u\\ v^T&\gamma\end{pmatrix} \begin{pmatrix}\frac{1}{\gamma-\mu}u\\1\end{pmatrix} =\gamma\begin{pmatrix}\frac{1}{\gamma-\mu}u\\1\end{pmatrix}. \end{align} So, the eigenvalues of $U$ are also the eigenvalues of $A$.
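As a concrete check on the coincidence discussed above (a sketch added for verification, not part of the original answer), plain Python confirms that $LU=A$ and that $-1,-6,1$, the diagonal entries of $U$, really are roots of $\det(A-\lambda I)$:

```python
A = [[-1, 3, -3], [0, -6, 5], [-5, -3, 1]]
L = [[1, 0, 0], [0, 1, 0], [5, 3, 1]]
U = [[-1, 3, -3], [0, -6, 5], [0, 0, 1]]

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def det3(M):
    # Cofactor expansion along the first row of a 3x3 matrix.
    return (M[0][0] * (M[1][1] * M[2][2] - M[1][2] * M[2][1])
          - M[0][1] * (M[1][0] * M[2][2] - M[1][2] * M[2][0])
          + M[0][2] * (M[1][0] * M[2][1] - M[1][1] * M[2][0]))

def char_poly_value(M, lam):
    # det(M - lam * I)
    shifted = [[M[i][j] - (lam if i == j else 0) for j in range(3)]
               for i in range(3)]
    return det3(shifted)

print(matmul(L, U) == A)                                   # True
print([char_poly_value(A, lam) for lam in (-1, -6, 1)])    # [0, 0, 0]
```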
{ "language": "en", "url": "https://math.stackexchange.com/questions/259650", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Number of Divisor which perfect cubes and multiples of a number n = $2^{14}$$3^9$$5^8$$7^{10}$$11^3$$13^5$$37^{10}$ How many positive divisors that are perfect cubes and multiples of $2^{10}$$3^9$$5^2$$7^{5}$$11^2$$13^2$$37^{2}$. I'm able to solve number of perfect square and number of of perfect cubes. But the extra condition of multiples of $2^{10}$$3^9$$5^2$$7^{5}$$11^2$$13^2$$37^{2}$ is confusing, anyone can give me a hint?
The numbers you are looking for must be perfect cubes. If you split them into powers of primes, they can have a factor $2^0$, $2^3$, $2^6$, $2^9$ and so on but not $2^1, 2^2, 2^4$ etc. because these are not cubes. The same goes for powers of $3, 5, 7$ and any other primes. The numbers must also be multiples of $2^{10}$ so can have factors $2^{12}, 2^{15}, 2^{18}$ etc. because $2^9, 2^6$ and so on are not multiples of $2^{10}$. The numbers must divide $2^{14}$, which leaves only $2^{12}$ because $2^{15}, 2^{18}$ and so on don't divide $2^{14}$. You get another factor $3^9$, $5^3$ or $5^6$, $7^6$ or $7^9$, $11^3$, $13^3$, and $37^3$, $37^6$ or $37^9$. For most primes you have one choice; for $5$ and $7$ you have two choices, and for $37$ you have three choices - so the total is $2 \times 2 \times 3 = 12$ numbers.
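A brute-force enumeration of the admissible exponent for each prime (a sketch added for verification, not part of the original answer) counts the divisors directly:

```python
# Exponent of each prime in n, and the minimum exponent the divisor must have.
n_exp   = {2: 14, 3: 9, 5: 8, 7: 10, 11: 3, 13: 5, 37: 10}
min_exp = {2: 10, 3: 9, 5: 2, 7: 5, 11: 2, 13: 2, 37: 2}

total = 1
for p in n_exp:
    # A perfect-cube divisor uses an exponent that is a multiple of 3,
    # at least min_exp[p] (multiplicity condition), at most n_exp[p] (divisibility).
    choices = [e for e in range(0, n_exp[p] + 1, 3) if e >= min_exp[p]]
    print(p, choices)
    total *= len(choices)
print(total)   # the number of such divisors
```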
{ "language": "en", "url": "https://math.stackexchange.com/questions/259714", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Evaluation of Derivative Using $\epsilon−\delta$ Definition Consider the function $f \colon\mathbb R \to\mathbb R$ defined by $f(x)= \begin{cases} x^2\sin(1/x); & \text{if }x\ne 0, \\ 0 & \text{if }x=0. \end{cases}$ Use $\varepsilon$-$\delta$ definition to prove that the limit $f'(0)=0$. Now I see that h should equals to delta; and delta should equal to epsilon in this case. Thanks for everyone contributed!
$$\left|{\dfrac{f(h)-f(0)}{h}}\right|=\left|{\dfrac{h^2 \sin{\dfrac{1}{h}}}{h}}\right|= \left|{h \sin{\dfrac{1}{h}}}\right|\le\left|h\right|<\varepsilon.$$ Choose $\delta=\varepsilon.$ (Note the difference quotient uses $f(h)=h^2\sin\frac1h$ exactly as in the statement of the problem, so no extra factor of $2$ appears.)
{ "language": "en", "url": "https://math.stackexchange.com/questions/259795", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Use of $\mathbb N$ & $\omega$ as index sets Why all the properties of a sequence or a series or a sequence of functions or a series of functions remain unchanged irrespective of which of $\mathbb N$ & $\omega$ we are using as an index set? Is it because $\mathbb N$ is equivalent to $\omega$?
It is because $\omega$ and $\mathbb N$ are just different names for the same set. Their members are the same, and so by the Axiom of Extensionality they are the same set.
{ "language": "en", "url": "https://math.stackexchange.com/questions/259878", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Probability of a label appearing on a frazzle This is an exercise from a probability textbook: A frazzle is equally likely to contain $0,1,2,3$ defects. No frazzle has more than three defects. The cash price of each frazzle is set at \$ $10-K^2$, where $K$ is the number of defects in it. Gummed labels, each representing $\$ 1$, are placed on each frazzle to indicate its price. What is the probability that a randomy selected label will end up on a frazzle which has exactly two defects? Since the frazzles are equally likely to have $0,1,2,3$ defects, I may argue that a label is equally likely to appear on any of them. On the other hand, frazzles with less defects are more expensive, therefore requiring more labels, from this perspective, a label is most likely to appear on a frazzle with no defects. I am confused here.
It is not equally likely to go on any of the frazzles, because more labels will go to the frazzles with 0 defects than those with 3 defects, for example. Frazzles with 0, 1, 2, 3 defects draw 10, 9, 6 and 1 labels respectively. So say you had 4 million frazzles. Since 0, 1, 2 or 3 defects are equally likely, suppose you have 1 million of each type. Then you have 10 million labels on those with 0 defects, 9 million labels on those with 1 defect, 6 million labels on those with 2 defects, and 1 million labels on those with 3 defects. So you used a total of 26 million labels, and 6 million of those labels went to frazzles with exactly two defects. Thinking about this example should lead you to understanding what the answer to your question is.
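The counting in the example above boils down to a two-line computation. A small sketch (not part of the original answer):

```python
from fractions import Fraction

# Price (= number of $1 labels) for a frazzle with k defects: 10 - k^2.
labels = [10 - k * k for k in range(4)]     # [10, 9, 6, 1]

# Defect counts are equally likely, so with equally many frazzles of each
# type, the chance a random label sits on a two-defect frazzle is:
p = Fraction(labels[2], sum(labels))
print(labels, p)                            # 6/26 = 3/13
```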
{ "language": "en", "url": "https://math.stackexchange.com/questions/260014", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
How to prove the derivative of position is velocity and of velocity is acceleration? How has it been proven that the derivative of position is velocity and the derivative of velocity is acceleration? From Google searching, it seems that everyone just states it as fact without any proof behind it.
The derivative is the slope of the function. So if the function is $f(x)=5x-3$, then $f'(x)=5$, because the derivative is the slope of the function. Velocity is the rate of change of position, so it's the slope of the position. Acceleration is the rate of change of velocity, so it's the slope of the velocity. Since derivatives are about slope, that is how the derivative of position is velocity, and the derivative of velocity is acceleration. So if the position can be expressed with the function $f(x)=x^2 - 3x + 7$, then the derivative would be $f'(x)=2x-3$ since that is the slope of the function at any given point, and since it is the slope of the position function, it is velocity. Same for acceleration: $f''(x)=2$, which is the derivative of velocity, and the slope of velocity is acceleration. This is how the derivative of position is velocity and the derivative of velocity is acceleration. NOTE: These functions are entirely hypothetical and were created on the spur of the moment.
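The slope picture can be made concrete with finite differences. Using the same hypothetical position function as above (a numerical sketch, not a proof), the measured slope of position matches $2t-3$ and the measured slope of that matches $2$:

```python
def s(t):
    # Hypothetical position function from the answer: s(t) = t^2 - 3t + 7
    return t * t - 3 * t + 7

h = 1e-5

def v(t):
    # Velocity = slope of position, approximated by a central difference.
    return (s(t + h) - s(t - h)) / (2 * h)

def a(t):
    # Acceleration = slope of velocity.
    return (v(t + h) - v(t - h)) / (2 * h)

print(v(2.0), a(2.0))   # close to 2*2 - 3 = 1 and to the constant 2
```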
{ "language": "en", "url": "https://math.stackexchange.com/questions/260097", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "15", "answer_count": 3, "answer_id": 2 }
Help with combinations The sum of all the different ways to multiply together $a,b,c,d,\ldots$ is equal to $$(a+1)(b+1)(c+1)(d+1)\cdots$$ right? If this is true? why is it true?
Yes it is true, give it a try... Ok, sorry. Here's a little more detail: We have the identity $$ \prod_{j=1}^n ( \lambda-X_j)=\lambda^n-e_1(X_1,\ldots,X_n)\lambda^{n-1}+e_2(X_1,\ldots,X_n)\lambda^{n-2}-\cdots+(-1)^n e_n(X_1,\ldots,X_n). $$ $\biggr[$use $\lambda=-1$ to get $ \prod_{j=1}^n ( -1-X_j)=(-1)^n\prod_{j=1}^n ( 1+X_j)$$\biggr]$ This can be proven by a double mathematical induction with respect to the number of variables n and, for fixed n, with respect to the degree of the homogeneous polynomial. I really thought that giving it a try would show some insight. Hope this helps,
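Intuitively the identity holds because expanding $(a+1)(b+1)(c+1)\cdots$ means choosing, from each factor, either the variable or the $1$; each such choice produces the product over one subset of the variables, with the empty choice giving $1$. (So if you do not want to count the empty product, the sum of the nonempty subset products is $(a+1)(b+1)\cdots - 1$.) A small enumeration sketch, with made-up values standing in for $a,b,c,d$:

```python
from itertools import combinations
from math import prod

values = [2, 3, 5, 7]   # stand-ins for a, b, c, d

# Sum of the products over every subset (the empty product counts as 1).
subset_sum = sum(prod(c) for r in range(len(values) + 1)
                 for c in combinations(values, r))

product_form = prod(v + 1 for v in values)
print(subset_sum, product_form)   # both equal 3*4*6*8 = 576
```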
{ "language": "en", "url": "https://math.stackexchange.com/questions/260171", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Have I calculated this integral correctly? I have this integral to calculate: $$I=\int_{|z|=2}(e^{\sin z}+\bar z)dz.$$ I do it this way: $$I=\int_{|z|=2}e^{\sin z}dz+\int_{|z|=2}\bar zdz.$$ The first integral is $0$ because the function is holomorphic everywhere and it is a contour integral. As for the second one, I have $$\int_{|z|=2}\bar zdz = \int_0^{2\pi}e^{-i\theta}\cdot 2 d\theta=-\int_0^{-2\pi}e^{i\tau}\cdot 2 d\tau=\int_0^{2\pi}e^{i\tau}\cdot 2 d\tau=\int_{|z|=2}zdz=0$$ because the function is now holomorphic. It seems fishy to me. Is it correct?
If $z = 2e^{i \theta}$, then $$\bar{z} dz = 2e^{-i \theta}2i e^{i \theta} d \theta = 4i d \theta$$ Hence, $$\int_{\vert z \vert = 2} \bar{z} dz = \int_0^{2 \pi} 4i d \theta = 8 \pi i$$
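A numerical Riemann sum over the parametrized circle confirms the value $8\pi i$ (and in fact the integrand $\bar z\,dz = 4i\,d\theta$ is constant, so the sum is exact up to rounding). A quick sketch, not part of the original answer:

```python
import cmath, math

n = 100_000
total = 0j
for k in range(n):
    t = 2 * math.pi * (k + 0.5) / n
    z = 2 * cmath.exp(1j * t)                          # point on |z| = 2
    dz = 2j * cmath.exp(1j * t) * (2 * math.pi / n)    # z'(t) dt
    total += z.conjugate() * dz

print(total, 8j * math.pi)   # both approximately 25.1327i
```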
{ "language": "en", "url": "https://math.stackexchange.com/questions/260228", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Why is the absolute value needed with the scaling property of fourier tranforms? I understand how to prove the scaling property of Fourier Transforms, except the use of the absolute value: If I transform $f(at)$ then I get $F\{f(at)\}(w) = \int f(at) e^{-jwt} dt$ where I can substitute $u = at$ and thus $du = a dt$ (and $\frac{du}{a} = dt$) which gives me: $ \int f(u) e^{-j\frac{w}{a}u} \frac{du}{a} = \frac{1}{a} \int f(u) e^{-j\frac{w}{a}u} du = \frac{1}{a} F \{f(u)\}(\frac{w}{a}) $ But, according to various references, it should be $ \frac{1}{|a|} F \{f(u)\}(\frac{w}{a}) $ and I don't understand WHY or HOW I get/need the absolute value here?
Think about the range of the variable $t$ in the integral that gives the transform. How do the 'endpoints' of this improper integral transform under $t\to at$? Can you see how this depends on the sign of $a$?
{ "language": "en", "url": "https://math.stackexchange.com/questions/260310", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 2, "answer_id": 1 }
Equation to determine radius for a circle that should intersect a given point? Simple question. I tried Google but I don't know what search keywords to use. I have two points on a $2D$ plane. Point 1 $=(x_1, y_1)$ and Point 2 $=(x_2, y_2)$. I'd like to draw a circle around Point 1, and the radius of the circle should be so that it intersects exactly with Point 2. What is the equation to determine the required radius?
The radius is simply the distance between the two points. So use the standard Euclidean distance formula, $r=\sqrt{(x_2-x_1)^2+(y_2-y_1)^2}$.
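In code this is a one-liner; `math.hypot` computes exactly this distance. A minimal sketch with made-up points:

```python
import math

def radius(x1, y1, x2, y2):
    # Euclidean distance between the circle's centre and the point it must pass through.
    return math.hypot(x2 - x1, y2 - y1)

print(radius(0, 0, 3, 4))   # 5.0, the classic 3-4-5 triangle
```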
{ "language": "en", "url": "https://math.stackexchange.com/questions/260436", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
Find the determinant of $A$ satisfying $A^{-1}=I-2A.$ I am stuck with the following problem: Let $A$ be a $3\times 3$ matrix over real numbers satisfying $A^{-1}=I-2A.$ Then find the value of det$(A).$ I do not know how to proceed. Can someone point me in the right direction? Thanks in advance for your time.
No such $A$ exists. Hence we cannot speak of its determinant. Suppose $A$ is real and $A^{-1}=I-2A$. Then $A^2-\frac12A+\frac12I=0$. Hence the minimal polynomial $m_A$ of $A$ must divide $x^2-\frac12x+\frac12$, which has no real root. Therefore $m_A(x)=x^2-\frac12x+\frac12$. But the minimal polynomial and characteristic polynomial $p_A$ of $A$ must have identical irreducible factors, and this cannot happen because $p_A$ has degree 3 and $m_A$ is an irreducible polynomial of degree 2. Edit: The OP says that the question appears on an entrance exam paper, and four answers are given: (a) $1/2$, (b) $−1/2$, (c) $1$, (d) $2$. It seems that there's a typo in the exam question and $A$ is probably 2x2. If this is really the case, then the above argument shows that the characteristic polynomial of $A$ is $x^2-\frac12x+\frac12$. Hence $\det A = 1/2$.
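Assuming the 2x2 reading, a concrete example bears this out: the companion matrix of $x^2-\frac12x+\frac12$ (a made-up example, not from the exam) satisfies $A^{-1}=I-2A$ and has determinant $1/2$.

```python
# Companion matrix of x^2 - x/2 + 1/2:
A = [[0.0, -0.5],
     [1.0,  0.5]]

det = A[0][0] * A[1][1] - A[0][1] * A[1][0]

# Inverse of a 2x2 matrix via the adjugate formula.
Ainv = [[ A[1][1] / det, -A[0][1] / det],
        [-A[1][0] / det,  A[0][0] / det]]

I_minus_2A = [[1 - 2 * A[0][0],     -2 * A[0][1]],
              [    -2 * A[1][0], 1 - 2 * A[1][1]]]

print(det)           # 0.5
print(Ainv)          # [[1.0, 1.0], [-2.0, 0.0]]
print(I_minus_2A)    # the same matrix, so A^{-1} = I - 2A holds
```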
{ "language": "en", "url": "https://math.stackexchange.com/questions/260512", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "14", "answer_count": 4, "answer_id": 0 }
the Green function $G(x,t)$ of the boundary value problem $\frac{d^2y}{dx^2}-\frac{1}{x}\frac{dy}{dx} = 1$ the Green function $G(x,t)$ of the boundary value problem $\frac{d^2y}{dx^2}-\frac{1}{x}\frac{dy}{dx} = 1$ , $y(0)=y(1)=0$ is $G(x,t)= f_1(x,t)$ if $x≤t$ and $G(x,t)= f_2(x,t)$ if $t≤x$ where (a)$f_1(x,t)=-\frac{1}{2}t(1-x^2)$ ; $f_2(x,t)=-\frac{1}{2t}x^2(1-t^2)$ (b)$f_1(x,t)=-\frac{1}{2x}t^2(1-x^2)$ ; $f_2(x,t)=-\frac{1}{2t}x^2(1-t^2)$ (c)$f_1(x,t)=-\frac{1}{2t}x^2(1-t^2)$ ; $f_2(x,t)=-\frac{1}{2}(1-x^2)$ (d)$f_1(x,t)=-\frac{1}{2t}x^2(1-x^2)$ ; $f_2(x,t)=-\frac{1}{2x}t^2(1-x^2)$ then which are correct. i am not getting my calculation right. my answer was ver similar to them but does not match completely to them.at first i multiply the equation by (1/x) both side and convert it to a S-L equation but not getting my answer right.any help from you please.
A Green's function of a problem in self-adjoint form is symmetric, i.e. $G(x,t)=G(t,x)$, which here means that interchanging $x$ and $t$ in $f_1$ must give $f_2$. Checking the four options, only (b) has this symmetry, so (b) is the answer.
{ "language": "en", "url": "https://math.stackexchange.com/questions/260633", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Can't argue with success? Looking for "bad math" that "gets away with it" I'm looking for cases of invalid math operations producing (in spite of it all) correct results (aka "every math teacher's nightmare"). One example would be "cancelling" the 6's in $$\frac{64}{16}.$$ Another one would be something like $$\frac{9}{2} - \frac{25}{10} = \frac{9 - 25}{2 - 10} = \frac{-16}{-8} = 2 \;\;.$$ Yet another one would be $$x^1 - 1^0 = (x - 1)^{(1 - 0)} = x - 1\;\;.$$ Note that I am specifically not interested in mathematical fallacies (aka spurious proofs). Such fallacies produce shockingly wrong ends by (seemingly) valid means, whereas what I am looking for all cases where one arrives at valid ends by (shockingly) wrong means. Edit: fixed typo in last example.
Slightly contrived: Given $n = \frac{2}{15}$ and $x=\arccos(\frac{3}{5})$, find $\frac{\sin(x)}{n}$. $$ \frac{\sin(x)}{n} = \mathrm{si}(x) = \mathrm{si}x = \boxed{6} $$
{ "language": "en", "url": "https://math.stackexchange.com/questions/260656", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "289", "answer_count": 42, "answer_id": 21 }
Showing that complicated mixed polynomial is always positive I want to show that $\left(132 q^3-175 q^4+73 q^5-\frac{39 q^6}{4}\right)+\left(-144 q^2+12 q^3+70 q^4-19 q^5\right) r+\left(80 q+200 q^2-243 q^3+100 q^4-\frac{31 q^5}{2}\right) r^2+\left(-208 q+116 q^2+24 q^3-13 q^4\right) r^3+\left(80-44 q-44 q^2+34 q^3-\frac{23 q^4}{4}\right) r^4$ is strictly positive whenever $q \in (0,1)$ (numerically, this holds for all $r \in \mathbb{R}$, although I'm only interested in $r \in (0,1)$). Is that even possible analytically? Any idea towards a proof would be greatly appreciated. Many thanks! EDIT: Here is some more information. Let $f(r) = A + Br + Cr^2 + Dr^3 + Er^4$ be the function as defined above. Then it holds that $f(r)$ is a strictly convex function in $r$ for $q \in (0,1)$, $f(0) > 0$, $f'(0) < 0$, and $f'(q) > 0$. Hence, for the relevant $q \in (0,1)$, $f(r)$ attains its minimum for some $r^{min} \in (0,q)$. $A$ is positive and strictly increasing in $q$ for the relevant $q \in (0,1)$, $B$ is negative and strictly decreasing in $q$ for the relevant $q \in (0,1)$, $C$ is positive and strictly increasing in $q$ for the relevant $q \in (0,1)$, $D$ is negative and non-monotonic in $q$, and $E$ is positive and strictly decreasing in $q$ for the relevant $q \in (0,1)$.
Because you want to show that this is always positive, consider what happens when $q$ and $r$ get really big. The polynomials with the largest powers will dominate the result. You can solve this quite easily by approximating the final value using a large number of inequalities.
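Before attempting a proof it is worth mapping the difficulty numerically. The sketch below (an exploratory check, not a proof) scans a grid over $(0,1)^2$; it also evaluates the expression at $q=r=1$, where the coefficients sum to exactly $0$ (one can check $A+B+C+D+E = 20.25-81+121.5-81+20.25=0$ at $q=1$), so the infimum over the open square is approached at that corner and any proof has to control the boundary behaviour there.

```python
def f(q, r):
    A = 132*q**3 - 175*q**4 + 73*q**5 - 39*q**6/4
    B = -144*q**2 + 12*q**3 + 70*q**4 - 19*q**5
    C = 80*q + 200*q**2 - 243*q**3 + 100*q**4 - 31*q**5/2
    D = -208*q + 116*q**2 + 24*q**3 - 13*q**4
    E = 80 - 44*q - 44*q**2 + 34*q**3 - 23*q**4/4
    return A + B*r + C*r**2 + D*r**3 + E*r**4

# Coarse grid scan of the interior of the unit square.
grid = [k / 20 for k in range(1, 20)]
m = min(f(q, r) for q in grid for r in grid)
print(m)          # smallest grid value, positive but small near q = r = 1
print(f(1, 1))    # exactly 0 at the corner
```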
{ "language": "en", "url": "https://math.stackexchange.com/questions/260672", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 0 }
$H_1\triangleleft G_1$, $H_2\triangleleft G_2$, $H_1\cong H_2$ and $G_1/H_1\cong G_2/H_2 \nRightarrow G_1\cong G_2$ Find a counterexample to show that if $ G_1 $ and $G_2$ groups, $H_1\triangleleft G_1$, $H_2\triangleleft G_2$, $H_1\cong H_2$ and $G_1/H_1\cong G_2/H_2 \nRightarrow G_1\cong G_2$ I tried but I did not have success, I believe that these groups are infinite.
The standard counterexample to that implication is the quaternion group $Q_8$ and the dihedral group $D_4$. Both groups have order $2^3=8$, so that every maximal subgroup (i.e. one of order 4) is normal. The cyclic group of order 4 is contained in both groups and the quotient has order 2 in both cases. So all assertions are satisfied but $D_4\ncong Q_8$, of course.
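One quick way to see $D_4\ncong Q_8$ is to count elements of order 2: $D_4$ has five ($r^2$ and four reflections) while $Q_8$ has only one ($-1$). This can be verified computationally; the sketch below (an illustration, not part of the original answer) builds $D_4$ as permutations of the square's vertices and $Q_8$ as 2x2 complex matrices generated by $i$ and $j$.

```python
def closure(gens, op, identity):
    # Generate the whole group from the generators by repeated multiplication.
    elems = {identity}
    frontier = set(gens)
    while frontier:
        elems |= frontier
        frontier = {op(a, b) for a in elems for b in elems} - elems
    return elems

# D4 as permutations of the square's vertices 0, 1, 2, 3.
def pcomp(p, q):
    return tuple(p[q[i]] for i in range(4))

rot, flip, pid = (1, 2, 3, 0), (0, 3, 2, 1), (0, 1, 2, 3)
D4 = closure({rot, flip}, pcomp, pid)

# Q8 as 2x2 complex matrices (stored as flat tuples) generated by i and j.
def mmul(a, b):
    return (a[0]*b[0] + a[1]*b[2], a[0]*b[1] + a[1]*b[3],
            a[2]*b[0] + a[3]*b[2], a[2]*b[1] + a[3]*b[3])

qi, qj, mid = (1j, 0, 0, -1j), (0, 1, -1, 0), (1, 0, 0, 1)
Q8 = closure({qi, qj}, mmul, mid)

def order_two(elems, op, identity):
    return sum(1 for g in elems if g != identity and op(g, g) == identity)

print(len(D4), len(Q8))                       # 8 8
print(order_two(D4, pcomp, pid),              # 5 involutions in D4
      order_two(Q8, mmul, mid))               # 1 involution in Q8
```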
{ "language": "en", "url": "https://math.stackexchange.com/questions/260761", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 2 }
How to integrate this $\int\frac{\mathrm{d}x}{{(4+x^2)}^{3/2}} $ without trigonometric substitution? I have been looking for a possible solution but they are with trigonometric integration.. I need a solution for this function without trigonometric integration $$\int\frac{\mathrm{d}x}{{(4+x^2)}^{3/2}}$$
$$\frac{1}{\left(4+x^2\right)^{3/2}}=\frac{1}{8}\cdot\frac{1}{\left(1+\left(\frac{x}{2}\right)^2\right)^{3/2}}$$ Now try $$x=2\sinh u\implies dx=2 \cosh u\,du\implies$$ $$\int\frac{dx}{\left(4+x^2\right)^{3/2}}=\frac{1}{8}\int\frac{2\,du\cosh u}{(1+\sinh^2u)^{3/2}}=\frac{1}{4}\int\frac{du}{\cosh^2u}=\ldots $$
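Finishing the substitution: $\frac14\int \operatorname{sech}^2 u\,du=\frac14\tanh u + C$, and since $\sinh u = \frac x2$ and $\cosh u = \frac{\sqrt{4+x^2}}{2}$, this is $\frac{x}{4\sqrt{4+x^2}}+C$. A finite-difference check of that antiderivative (a numerical sketch, not part of the original answer):

```python
import math

def integrand(x):
    return 1 / (4 + x * x) ** 1.5

def F(x):
    # Candidate antiderivative obtained from tanh(u)/4 with x = 2 sinh(u).
    return x / (4 * math.sqrt(4 + x * x))

# Central-difference check that F' matches the integrand at a few points.
h = 1e-6
for x in (-3.0, 0.5, 2.0):
    approx = (F(x + h) - F(x - h)) / (2 * h)
    print(x, approx, integrand(x))
```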
{ "language": "en", "url": "https://math.stackexchange.com/questions/260831", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 2 }
What exactly is infinity? On Wolfram|Alpha, I was bored and asked for $\frac{\infty}{\infty}$ and the result was (indeterminate). Another two that give the same result are $\infty ^ 0$ and $\infty - \infty$. From what I know, given $x$ being any number, excluding $0$, $\frac{x}{x} = 1$ is true. So just what, exactly, is $\infty$?
I am not much of a mathematician, but I kind of think of infinity as a behavior of increasing without bound at a certain rate rather than a number. That's why I think $\infty \div \infty$ is an undetermined value, you got two entities that keep increasing without bound at different rates so you don't know which one is larger. I could be wrong, but this is my understanding though.
{ "language": "en", "url": "https://math.stackexchange.com/questions/260876", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "53", "answer_count": 13, "answer_id": 10 }
Identity for central binomial coefficients On Wikipedia I came across the following equation for the central binomial coefficients: $$ \binom{2n}{n}=\frac{4^n}{\sqrt{\pi n}}\left(1-\frac{c_n}{n}\right) $$ for some $1/9<c_n<1/8$. Does anyone know of a better reference for this fact than wikipedia or planet math? Also, does the equality continue to hold for positive real numbers $x$ instead of the integer $n$ if we replace the factorials involved in the definition of the binomial coefficient by Gamma functions?
It appears to be true for $x > .8305123339$ approximately: $c_x \to 0$ as $x \to 0+$.
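For integer $n$ the stated bound $\frac19<c_n<\frac18$ can be checked directly by solving the displayed equation for $c_n$. A quick sketch (not part of the original answer):

```python
import math

def c(n):
    # Solve binom(2n, n) = 4^n / sqrt(pi*n) * (1 - c_n / n) for c_n.
    return n * (1 - math.comb(2 * n, n) * math.sqrt(math.pi * n) / 4**n)

for n in (1, 2, 5, 10, 100):
    print(n, c(n))   # values increase toward 1/8 but stay above 1/9
```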
{ "language": "en", "url": "https://math.stackexchange.com/questions/260933", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Proof for: $(a+b)^{p} \equiv a^p + b^p \pmod p$ a, b are integers. p is prime. I want to prove: $(a+b)^{p} \equiv a^p + b^p \pmod p$ I know about Fermat's little theorem, but I still can't get it I know this is valid: $(a+b)^{p} \equiv a+b \pmod p$ but from there I don't know what to do. Also I thought about $(a+b)^{p} = \sum_{k=0}^{p}\binom{p}{k}a^{k}b^{p-k}=\binom{p}{0}b^{p}+\sum_{k=1}^{p-1} \binom{p}{k} a^{k}b^{p-k}+\binom{p}{p}a^{p}=b^{p}+\sum_{k=1}^{p-1}\binom{p}{k}a^{k}b^{p-k}+a^{p}$ Any ideas? Thanks!
First of all, $a^p \equiv a \pmod p$ and $b^p \equiv b \pmod p$ implies $a^p + b^p \equiv a + b \pmod p$. Also, $(a+b)^p \equiv a + b \pmod p$. By transitivity of modulo, combine the above two results and get $(a+b)^p \equiv a^p + b^p \pmod p$. Done.
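The chain $(a+b)^p \equiv a+b \equiv a^p + b^p \pmod p$ is easy to spot-check numerically (a throwaway sketch):

```python
import random

# Empirical check of (a+b)^p == a^p + b^p (mod p) for several primes.
random.seed(0)
for p in [2, 3, 5, 7, 11, 13, 101]:
    for _ in range(200):
        a = random.randrange(-1000, 1000)
        b = random.randrange(-1000, 1000)
        assert pow(a + b, p, p) == (pow(a, p, p) + pow(b, p, p)) % p
print("congruence holds for all samples")
```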
{ "language": "en", "url": "https://math.stackexchange.com/questions/261014", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "16", "answer_count": 2, "answer_id": 0 }
Proving an Entire Function is a Polynomial I had this question on last semesters qualifying exam in complex analysis, and I've attempted it several times since to little result. Let $f$ be an entire function with $|f(z)|\geq 1$ for all $|z|\geq 1$. Prove that $f$ is a polynomial. I was trying to use something about $f$ being uniformly convergent to a power series, but I can't get it to go anywhere.
Picard's Theorem proves this instantly; which states: Let $f$ be a transcendental (non-polynomial) entire function. Then $f-a$ must have infinitely many zeros for every $a$ (except for possibly one exception, called the lacunary value). For example, $e^z-a$ will have infinitely many zeros except for $a=0$ and so the lacunary value of $e^z$ is zero. Your inequality implies that $f$ and $f-\frac{1}{2}$ have only a finite number of zeros. Thus $f$ cannot be transcendental. Of course this is what we call hitting a tac with a sludge hammer. A more realistic approach might be the following: Certainly $f$ has a finite number of zeros say $a_1,\ldots,a_n$, so write $f=(z-a_1)\cdots(z-a_n)\cdot h$, where $h$ is some non-zero entire function. Then the inequalities above give us $|\frac{1}{h}|\le \max\left\{\max_{z\in D(0,2)}|\frac{1}{h}|, |(z-a_1)\cdots (z-a_n)|\right\}$ on the entire complex plane. Said more simply $|\frac{1}{h}|<|p(z)|$ for every $z\in C$ for some polynomial $p$. That implies $\frac{1}{h}$ is a polynomial. But remember that $\frac{1}{h}$ is nonzero, so $h$ is a constant and $f$ must therefore be a polynomial.
{ "language": "en", "url": "https://math.stackexchange.com/questions/261081", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 2, "answer_id": 1 }
Show $S = f^{-1}(f(S))$ for all subsets $S$ iff $f$ is injective Let $f: A \rightarrow B$ be a function. How can we show that for all subsets $S$ of $A$, $S \subseteq f^{-1}(f(S))$? I think this is a pretty simple problem but I'm new to this so I'm confused. Also, how can we show that $S = f^{-1}(f(S))$ for all subsets $S$ iff $f$ is injective?
$S \subseteq f^{-1}(f(S)):$ Choose $a\in S.$ To show $a\in f^{-1}(f(S))$ it suffices to show that $\exists$ $a'\in S$ such that $a\in f^{-1}(f(a'))$ i.e. to show $\exists$ $a'\in S$ such that $f(a)=f(a').$ Now take $a=a'.$ $S = f^{-1}(f(S))$ $\forall$ $A \subset S$ $\iff f$ is injective: * *$\Leftarrow:$ Let $f$ be injective. Choose $s'\in f^{-1}(f(S))\implies f(s')\in f(S)\implies \exists$ $s\in S$ such that $f(s')=f(s)\implies s'=s$ (since $f$ is injective) $\implies s'\in S.$ So $f^{-1}(f(S))\subset S.$ Reverse inclusion has been proved earlier. Therefore $f^{-1}(f(S))= S.$ *$\Rightarrow:$ Let $f^{-1}(f(S))= S$ $\forall$ $A \subset S.$ Let $f(s_1)=f(s_2)$ for some $s_1,s_2\in S.$ Then $s_1\in f^{-1}(f(\{s_2\})=\{s_2\}\implies s_1=s_2\implies f$ is injective.
{ "language": "en", "url": "https://math.stackexchange.com/questions/261157", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "19", "answer_count": 4, "answer_id": 0 }
Understanding $\frac {b^{n+1}-a^{n+1}}{b-a} = \sum_{i=0}^{n}a^ib^{n-i}$ I'm going through a book about algorithms and I encounter this. $$\frac {b^{n+1}-a^{n+1}}{b-a} = \sum_{i=0}^{n}a^ib^{n-i}$$ How is this equation formed? If a theorem has been applied, what theorem is it? [Pardon me for asking such a simple question. I'm not very good at maths.]
Multiply both sides by $b-a$, watch the cross terms cancel (the sum telescopes), and you will have your answer.
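To see the cancellation concretely, here is a small brute-force check of the identity over sample integers (my own sketch):

```python
# Check b^(n+1) - a^(n+1) == (b - a) * sum(a^i * b^(n-i), i = 0..n)
# over a grid of integer samples.
for a in range(-4, 5):
    for b in range(-4, 5):
        for n in range(0, 6):
            rhs = (b - a) * sum(a ** i * b ** (n - i) for i in range(n + 1))
            assert b ** (n + 1) - a ** (n + 1) == rhs
print("identity verified on all samples")
```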
{ "language": "en", "url": "https://math.stackexchange.com/questions/261230", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
In the induction proof for $(1+p)^n \geq 1 + np$, a term is dropped and I don't understand why. In What is Mathematics, pg. 15, a proof of $(1+p)^n \geq 1 + np$, for $p>-1$ and positive integer $n$ goes as follows: * *Substitute $r$ for $n$, then multiply both sides by $1+p$, obtaining: $(1+p)^{r+1}\geq 1+rp+p+rp^2$ *"Dropping the positive term $rp^2$ only strengthens this inequality, so that $(1+p)^{r+1}\geq 1+rp+p$, which shows that the inequality will hold for $r+1$." I don't understand why the $rp^2$ term can be dropped -- if we're trying to prove that the inequality holds, and dropping $rp^2$ strengthens the inequality, then why are we allowed to drop it? Thanks!
In $1.$ we have shown that $$(1+p)^{r+1}\geq 1+rp+p+rp^2$$ But we also know that $r \geq 1$ (because we're doing an induction proof from $1$ upwards); and obviously $p^2 \ge 0$ (because $p$ is real); so we know that $rp^2 \ge 0$. Therefore $$1+rp+p+rp^2 \ge 1+rp+p$$ So putting these two together gives $$(1+p)^{r+1}\geq 1+rp+p$$ as required. In short, if we know that $a \ge b + c$, and we know $c$ is non-negative, we can immediately conclude that $a \ge b$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/261347", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 5, "answer_id": 3 }
n×n matrices A with complex enteries Let U be set of all n×n matrices A with complex enteries s.t. A is unitary. then U as a topological subspace of $\mathbb{C^{n^{2}}} $ is * *compact but not connected. *connected but not compact. *connected and compact. *Neither connected nor compact I am stuck on this problem . Can anyone help me please..... I don't know where yo begin........
For connectedness, examine the set of possible determinants, and whether or not you can find a path of unitary matrices between two unitary matrices with different determinants. For compactness, look at sequences of unitary matrices and examine whether or not one can be constructed to not have a convergent subsequence. Once you have an affirmative or negative answer to the above paragraphs, you pick the corresponding alternative, and you're done.
{ "language": "en", "url": "https://math.stackexchange.com/questions/261433", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
How to calculate $\overline{\cos \phi}$ How do you calculate $\overline{\cos \phi}$? Where $\phi\in\mathbb{C}$. I try to proof that $\cos \phi \cdot \overline{\cos \phi} +\sin \phi \cdot \overline{\sin \phi}=1$?
$$ \cos(x+iy) = \cos x \cos (iy) - \sin x \sin(iy) $$ $$ \overline {\cos(x+iy)} = \cos x \cos (iy) + \sin x \sin(iy) = \cos x \cos (-iy) - \sin x \sin(-iy) = \cos(x-iy) $$ (Conjugation flips only the sign of the second term because $\cos(iy)=\cosh y$ is real while $\sin(iy)=i\sinh y$ is purely imaginary.)
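The resulting identity $\overline{\cos z}=\cos\bar z$ (and likewise for $\sin$) is easy to spot-check numerically; a sketch using Python's `cmath`:

```python
import cmath

# conj(cos z) == cos(conj z), and the same for sin, at a few sample points
for z in [1 + 2j, -0.3 + 0.7j, 2.5 - 1.2j]:
    assert abs(cmath.cos(z).conjugate() - cmath.cos(z.conjugate())) < 1e-12
    assert abs(cmath.sin(z).conjugate() - cmath.sin(z.conjugate())) < 1e-12
print("conjugation identities verified")
```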
{ "language": "en", "url": "https://math.stackexchange.com/questions/261508", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
Testing Convergence of $\sum \sqrt{\ln{n}\cdot e^{-\sqrt{n}}}$ What test should i apply for testing the convergence/divergence of $$\sum_{n=1}^{\infty} \sqrt{\ln{n}\cdot e^{-\sqrt{n}}}$$ Help with hints will be appreciated. Thanks
The $n$-th term is equal to $$\frac{\sqrt{\log n}}{e^{\sqrt{n}/2}}.$$ The intuition is that the bottom grows quite fast, while the top does not grow fast at all. In particular, after a while the top is $\lt n$. If we can show, for example, that after a while $e^{\sqrt{n}/2}\gt n^3$, then by comparison with $\sum \frac{1}{n^2}$ we will be finished. So is it true that in the long run $e^{\sqrt{n}/2}\gt n^3$? Equivalently, is it true that in the long run $\sqrt{n}/2\gt 3\log n$? Sure, in fact $\lim_{n\to\infty}\dfrac{\log n}{\sqrt{n}}=0$, by L'Hospital's Rule, and in other ways. Remark: A named test that works well here is the Cauchy Condensation Test. I believe that a more "hands on" confrontation with the decay rate of the $n$-th term is more informative.
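A numerical companion to this argument (my sketch): the crossover where $\sqrt n/2$ overtakes $3\log n$ happens a little above $n=2000$, and once it does it never reverses (the difference is increasing for $n>144$), so checking from $n=2500$ on:

```python
import math

# e^(sqrt(n)/2) eventually dominates n^3, so the terms fall below 1/n^2.
for n in range(2500, 20001):
    assert math.exp(math.sqrt(n) / 2) > n ** 3

# The partial sums settle quickly: the n-th term is sqrt(log n)*e^(-sqrt(n)/2).
term = lambda n: math.sqrt(math.log(n)) * math.exp(-math.sqrt(n) / 2)
S = sum(term(n) for n in range(2, 5001))
print(S)                 # a finite value; the tail beyond n = 5000 is tiny
assert term(5000) < 1e-14
```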
{ "language": "en", "url": "https://math.stackexchange.com/questions/261578", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
Finding a dominating function for this sequence Let $$f_n (x) = \frac{nx^{1/n}}{ne^x + \sin(nx)}.$$ The question is: with the dominated convergence theorem find the limit $$ \lim_{n\to\infty} \int_0^\infty f_n (x) dx. $$ So I need to find an integrable function $g$ such that $|f_n| \leq g$ for all $n\in \mathbf N$. I tried $$ \frac{nx^{1/n}}{ne^x + \sin(nx)} = \frac{x^{1/n}}{e^x + \sin(nx)/n} \leq \frac{x^{1/n}}{e^x - 1} \leq \frac{x^{1/n}}{x}. $$ But I can't get rid of that $n$. Can anyone give me a hint?
We have \begin{align} \left| \frac{nx^{1/n}}{ne^x + \sin(nx)} \right|= & \frac{|x^{1/n}|}{|e^x + \sin(nx)/n|} & \\ \leq & \frac{\max\{1,x\}}{|e^x + \sin(nx)/n|} & \mbox{by } |x^{1/n}|\leq \max\{1,x\} \\ \leq & \frac{\max\{1,x\}}{|e^x -\epsilon |} & \mbox{if } |e^x + \sin(nx)/n|\geq |e^x -\epsilon| \\ \end{align} Note that for every $\epsilon\in(0,1)$ there exists $N$ (any $N\geq 1/\epsilon$ will do) such that $$ \left|\frac{1}{n}\sin(nx)\right|\leq\frac{1}{n}<\epsilon \implies |e^x + \sin(nx)/n|\geq |e^x -\epsilon| $$ for all $n>N$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/261628", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
Working out digits of Pi. I have always wondered how the digits of π are calculated. How do they do it? Thanks.
The Chudnovsky algorithm, which just uses the very rapidly converging series $$\frac{1}{\pi} = 12 \sum^\infty_{k=0} \frac{(-1)^k (6k)! (13591409 + 545140134k)}{(3k)!(k!)^3 640320^{3k + 3/2}},$$ was used by the Chudnovsky brothers, who are some of the points on your graph. It is also the algorithm used by at least one arbitrary precision numerical library, mpmath, to compute arbitrarily many digits of $\pi$. Here is the relevant part of the mpmath source code discussing why this series is used, and giving a bit more detail on how it is implemented (and if you want, you can look right below that to see exactly how it is implemented). It actually uses a method called binary splitting to evaluate the series faster.
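A minimal double-precision sketch of the series (variable names are mine; real computations use exact big-integer binary splitting, as the mpmath source describes):

```python
import math

def chudnovsky_inv_pi(terms=3):
    # Partial sum of the Chudnovsky series for 1/pi, in double precision.
    # Each term adds roughly 14 correct digits, so 2 terms already exhaust
    # a float; 640320^(3k + 3/2) is split into 640320^(3k) and 640320^1.5.
    s = 0.0
    for k in range(terms):
        num = (-1) ** k * math.factorial(6 * k) * (13591409 + 545140134 * k)
        den = math.factorial(3 * k) * math.factorial(k) ** 3 * 640320 ** (3 * k)
        s += num / den
    return 12 * s / 640320 ** 1.5

print(1 / chudnovsky_inv_pi())   # agrees with math.pi to double precision
```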
{ "language": "en", "url": "https://math.stackexchange.com/questions/261694", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 4, "answer_id": 2 }
Constrain Random Numbers to Inside a Circle I am generating two random numbers to choose a point in a circle randomly. The circles radius is 3000 with origin at the center. I'm using -3000 to 3000 as my bounds for the random numbers. I'm trying to get the coordinates to fall inside the circle (ie 3000, 3000 is not in the circle). What equation could I use to test the limits of the two numbers because I can generate a new one if it falls out of bounds.
Compare $x^2+y^2$ with $r^2$ and reject / retry if $x^2+y^2\ge r^2$.
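In code the rejection loop is just a couple of lines (a sketch; `random.uniform` draws candidates from the bounding square, and about $\pi/4\approx 78.5\%$ of them are accepted):

```python
import random

def random_point_in_circle(r=3000.0):
    # Rejection sampling: draw from the bounding square, retry until the
    # point falls strictly inside the circle.
    while True:
        x = random.uniform(-r, r)
        y = random.uniform(-r, r)
        if x * x + y * y < r * r:
            return x, y

random.seed(0)
pts = [random_point_in_circle() for _ in range(10_000)]
assert all(x * x + y * y < 3000.0 ** 2 for x, y in pts)
print("all sampled points lie inside the circle")
```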
{ "language": "en", "url": "https://math.stackexchange.com/questions/261754", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Class Group of $\mathbb{Q}(\sqrt{-47})$ Calculate the group of $\mathbb{Q}(\sqrt{-47})$. I have this: The Minkowski bound is $4,36$ approximately. Thanks!
Here is another attempt. In case I made any mistakes, let me know and I will either try and fix it, or delete my answer. We have Minkowski bound $\frac{2 \sqrt{47}}{\pi}<\frac{2}{3}\cdot 7=\frac{14}{3}\approx 4.66$. So let us look at the primes $2$ and $3$: $-47\equiv 1$ mod $8\quad\Rightarrow\quad 2$ is split, i.e. $(2)=P\overline P$ for some prime ideals $P,\overline P$. NB: In fact we have $P=(2,\delta)$ and $\overline P=(2,\overline \delta)$ with $\delta=\frac{1+\sqrt{-47}}{2}$ and $\overline\delta=\frac{1-\sqrt{-47}}{2}$. But this is going to be irrelevant in the rest of the proof. $-47\equiv 1$ mod $3\quad\Rightarrow\quad 3$ is split, i.e. $(3)=Q \overline Q$ for some prime ideals $Q,\overline Q$. So the class group has at most 5 elements with representatives $(1),P,\overline P, Q, \overline Q$. Note that $P$ is not principal, because $N(\frac{a+b\sqrt{-47}}{2})=\frac{a^2+47b^2}{4}=2$ does not have an integer solution (because $8$ is not a square). So $P$ does not have order $1$. Suppose $P$ has order $2$. Then $P^2$ is a principal ideal with $N(P^2)=N(P)^2=2^2=4$. The only elements with norm $4$ are $\pm2$. But $P^2$ cannot be $(2)$, because $2$ is split. Suppose $P$ has order $3$. Then $P^3$ is a principal ideal with $N(P^3)=N(P)^3=2^3=8$. But $N(\frac{a+b\sqrt{-47}}{2})=\frac{a^2+47b^2}{4}=8$ does not have an integer solution (because $32$ is not a square). Suppose $P$ has order $4$. Then $P^4$ is a principal ideal with $N(P^4)=16$. The only elements with norm $16$ are $\pm4$. But $P^4$ cannot be $(4)$, because $(4)=(2)(2)=P\overline P P\overline P$ is the unique factorisation, and $P\ne \overline P$. Suppose $P$ has order $5$. Then $P^5$ is a principal ideal with $N(P^5)=32$. And, indeed, the element $\frac{9+\sqrt{-47}}{2}$ has norm $32$. So $P^5=(\frac{9+\sqrt{-47}}{2})$. Hence the class group is cyclic of order $5$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/261828", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 0 }
The particular solution of the recurrence relation I cannot find out why the particular solution of $a_n=2a_{n-1} +3n$ is $a_{n}=-3n-6$ here is the how I solve the relation $a_n-2a_{n-1}=3n$ as $\beta (n)= 3n$ using direct guessing $a_n=B_1 n+ B_2$ $B_1 n+ B_2 - 2 (B_1 n+ B_2) = 3n$ So $B_1 = -3$, $B_2 = 0$ the particular solution is $a_n = -3 n$ and the homo. solution is $a_n = A_1 (-2)^n$ Why it is wrong??
using direct guessing $a_n=B_1 n+ B_2$ $B_1 n+ B_2 - 2 (B_1 (n-1)+ B_2) = 3n$ then $B_1 - 2B_1 = 3$ $2 B_1 - B_2 =0$ The solution will be $B_1 = -3, B_2=-6$
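A quick check (my sketch) that $a_n=-3n-6$ really satisfies $a_n-2a_{n-1}=3n$, and that adding the homogeneous part $A_1\cdot 2^n$ (the characteristic root of $a_n=2a_{n-1}$ is $2$) keeps it a solution:

```python
# Particular solution: a_n = -3n - 6 satisfies a_n - 2*a_{n-1} = 3n.
a = lambda n: -3 * n - 6
for n in range(1, 50):
    assert a(n) - 2 * a(n - 1) == 3 * n

# General solution: add any multiple of the homogeneous solution 2^n.
for A1 in (-2, 0, 5):
    full = lambda n: A1 * 2 ** n - 3 * n - 6
    for n in range(1, 30):
        assert full(n) - 2 * full(n - 1) == 3 * n
print("particular and general solutions verified")
```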
{ "language": "en", "url": "https://math.stackexchange.com/questions/261885", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
3-D geometry : three vertices of a ||gm ABCD is (3,-1,2), (1,2,-4) & (-1,1,2). Find the coordinate of the fourth vertex. The question is Three vertices of a parallelogram ABCD are A(3,-1,2), B(1,2,-4) and C(-1,1,2). Find the coordinate of the fourth vertex. To get the answer I tried the distance formula, equated AB=CD and AC=BD.
If you have a parallelogram ABCD, then you know the vectors $\vec{AB}$ and $\vec{DC}$ need to be equal as they are parallel and have the same length. Since we know that $\vec{AB}=(-2,\,3,-6)$ you can easily calculate $D$ since you (now) know $C$ and $\vec{CD}(=-\vec{AB})$. We get for $\vec{OD}=\vec{OC}+\vec{CD}=(-1,\,1,\,2)+(\,2,-3,\,6)=(\,1,-2,\,8)$ and hence $D(\,1,-2,\,8)$.
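The arithmetic in one line (a sketch): $D = C - \vec{AB} = C - (B - A)$, componentwise.

```python
# D = C - (B - A), componentwise
A = (3, -1, 2)
B = (1, 2, -4)
C = (-1, 1, 2)
D = tuple(c - (b - a) for a, b, c in zip(A, B, C))
print(D)  # (1, -2, 8)
```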
{ "language": "en", "url": "https://math.stackexchange.com/questions/261946", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 3, "answer_id": 0 }
Eigen-values of $AB$ and $BA$? Let $A,B \in M(n,\mathbb{C})$ be two $n\times n$ matrices. I would like know how to prove that eigen-value of $AB$ is the same as the eigen-values of $BA$.
You can prove $|\lambda I-AB|=|\lambda I-BA|$ by computing the determinant of the block matrix $$ \left( \begin{array}{cc} \lambda I & A \\ B & I \\ \end{array} \right) $$ in two different ways (via the two Schur complements).
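One can also confirm the statement exactly on random integer matrices — a self-contained sketch using exact rational arithmetic; since both characteristic polynomials are monic of degree $n$, agreement at $n+1$ sample points forces them to be identical:

```python
from fractions import Fraction
import random

def det(m):
    # Determinant by fraction-exact Gaussian elimination.
    m = [row[:] for row in m]
    n = len(m)
    d = Fraction(1)
    for i in range(n):
        piv = next((r for r in range(i, n) if m[r][i] != 0), None)
        if piv is None:
            return Fraction(0)
        if piv != i:
            m[i], m[piv] = m[piv], m[i]
            d = -d
        d *= m[i][i]
        for r in range(i + 1, n):
            f = m[r][i] / m[i][i]
            for c in range(i, n):
                m[r][c] -= f * m[i][c]
    return d

def matmul(a, b):
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def charpoly_at(m, lam):
    # Evaluates det(lam*I - m).
    n = len(m)
    s = [[(lam if i == j else Fraction(0)) - m[i][j] for j in range(n)]
         for i in range(n)]
    return det(s)

random.seed(1)
n = 3
A = [[Fraction(random.randint(-5, 5)) for _ in range(n)] for _ in range(n)]
B = [[Fraction(random.randint(-5, 5)) for _ in range(n)] for _ in range(n)]
for lam in range(n + 2):
    assert charpoly_at(matmul(A, B), Fraction(lam)) == \
           charpoly_at(matmul(B, A), Fraction(lam))
print("characteristic polynomials of AB and BA agree")
```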
{ "language": "en", "url": "https://math.stackexchange.com/questions/262034", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Solving the integral of a Modified Bessel function of the second kind I would like to find the answer for the following integral $$\int x\ln(x)K_0(x) dx $$ where $K_0(x)$ is the modified Bessel function of the second kind and $\ln(x)$ is the natural-log. Do you have any ideas how to find? Thanks in advance!
Here's what Mathematica found: Looks like an integration by parts to me (combined with an identity for modified Bessel functions).
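The Mathematica output itself is not reproduced here, but integration by parts with $\int xK_0(x)\,dx=-xK_1(x)$ and $K_0'(x)=-K_1(x)$ suggests the antiderivative $-K_0(x)-x\ln(x)\,K_1(x)+C$ (my reconstruction — verify before relying on it). A self-contained numerical check, computing $K_\nu$ from its integral representation:

```python
import math

def K(nu, x, tmax=30.0, n=8000):
    # K_nu(x) = integral_0^inf exp(-x*cosh t) * cosh(nu*t) dt, trapezoid rule.
    h = tmax / n
    f = lambda t: math.exp(-x * math.cosh(t)) * math.cosh(nu * t)
    return h * (0.5 * (f(0.0) + f(tmax)) + sum(f(i * h) for i in range(1, n)))

# Candidate antiderivative of x*ln(x)*K_0(x):
F = lambda x: -K(0, x) - x * math.log(x) * K(1, x)

for x in (0.7, 2.0):
    h = 1e-4
    fd = (F(x + h) - F(x - h)) / (2 * h)   # numerical derivative of F
    assert abs(fd - x * math.log(x) * K(0, x)) < 1e-4
print("antiderivative checks out numerically")
```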
{ "language": "en", "url": "https://math.stackexchange.com/questions/262180", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
Card probabilities Five cards are dealt from a standard deck of 52. What is the probability that the 3rd card is a Queen? What I dont understand here is how to factor in when one or both of the first two cards drawn are also Queens.
All orderings of the $52$ cards in the deck are equally likely. So the probability the third card in the deck is a Queen is exactly the same as the probability that the $17$-th card in the deck is a Queen, or that the first card in the deck is a Queen: They are all equal to $\dfrac{4}{52}$. The fact that $5$ cards were dealt is irrelevant.
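A simulation makes the symmetry argument tangible (my sketch; by symmetry the estimate should sit near $4/52\approx 0.0769$ no matter which position is checked):

```python
import random

random.seed(0)
deck = ["Q"] * 4 + ["x"] * 48
trials = 100_000
hits = 0
for _ in range(trials):
    random.shuffle(deck)
    if deck[2] == "Q":          # the 3rd card dealt
        hits += 1
print(hits / trials)            # close to 4/52 = 0.0769...
```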
{ "language": "en", "url": "https://math.stackexchange.com/questions/262238", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 2 }
Show That This Complex Sum Converges For complex $z$, show that the sum $$\sum_{n = 1}^{\infty} \frac{z^{n - 1}}{(1 - z^n)(1 - z^{n + 1})}$$ converges to $\frac{1}{(1 - z)^2}$ for $|z| < 1$ and $\frac{1}{z(1 - z)^2}$ for $|z| > 1$. Hint: Multiply and divide each term by $1 - z$, and do a partial fraction decomposition, getting a telescoping effect. I tried following the hint, but got stuck on performing a partial fraction decomposition. After all, since all polynomials can be factored in $\mathbb{C}$, how do I know what the factors of an arbitrary term are? I tried writing $$\frac{z^{n - 1}(1 - z)}{(1 - z^n)(1 - z^{n + 1})(1 - z)} = \frac{z^{n - 1}}{(1 - z)^3(1 + z + \dotsb + z^{n - 1})(1 + z + \dotsb + z^n)} - \frac{z^n}{(1 - z)^3(1 + z + \dotsb + z^{n - 1})(1 + z + \dotsb + z^n)},$$ but didn't see how this is helpful.
HINT: Use $$ \frac{z^{n}-z^{n+1}}{(1-z^n)(1-z^{n+1})} = \frac{1}{1-z^n} - \frac{1}{1-z^{n+1}} $$
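Following the hint through gives a telescoping sum, and the two claimed closed forms check out numerically (my sketch):

```python
def S(z, N=200):
    # Partial sum of the series.
    return sum(z ** (n - 1) / ((1 - z ** n) * (1 - z ** (n + 1)))
               for n in range(1, N + 1))

assert abs(S(0.5) - 1 / (1 - 0.5) ** 2) < 1e-9             # |z| < 1 case
assert abs(S(0.3 + 0.4j) - 1 / (1 - (0.3 + 0.4j)) ** 2) < 1e-9
assert abs(S(2.0) - 1 / (2.0 * (1 - 2.0) ** 2)) < 1e-9     # |z| > 1 case
print("both closed forms confirmed")
```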
{ "language": "en", "url": "https://math.stackexchange.com/questions/262308", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Pigeon-hole Principle: Does this proof have a typo? This was an example of generalized pigeon-hole principle. Ten dots are placed within a square of unit size. The textbook then shoes a box divided into 9 equal squares. Then there three dots that can be covered by a disk of radius 0.5. The proof: Divide our square into four equal parts by it's diagonals (from one corner to the other), then by the generalized pigeon-hole principle, at least one of these triangles will contain three of our points. The proof follows as the radius of the circumcircle of these triangles is shorter than 0.5. But wait! The statement said three dots can be covered by a disk of radius 0.5. Typo?
The proof is basically correct, but yes, there is a typo: the circumcircle of each of the four triangles has radius exactly $0.5$, not less than $0.5$. If $O$ is the centre of the square, and $A$ and $B$ are adjacent corners, the centre of the circumcircle of $\triangle AOB$ is the midpoint of $\overline{AB}$, from which the distance to each of $A,O$, and $B$ is $0.5$. The circle of radius $0.5$ and centre at the midpoint of $\overline{AB}$ contains $\triangle AOB$, as required.
{ "language": "en", "url": "https://math.stackexchange.com/questions/262363", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
What is Cumulative Distribution Function of this random variable? Suppose that we have $n$ independent random variables, $x_1,\ldots,x_n$ such that each $x_i$ takes value $a_i$ with success probability $p_i$ and value $0$ with failure probability $1-p_i$ ,i.e., \begin{align} P(x_1=a_1) & = p_1,\ P(x_1=0)= 1-p_1 \\ P(x_2=a_2) & = p_2,\ P(x_2=0) = 1-p_2 \\ & \vdots \\ P(x_n=a_n) & = p_n,\ P(x_n=0)=1-p_n \end{align} where $a_i$'s are positive Real numbers. What would be the CDF of the sum of these random variables? That is, what would be $P(x_1+\cdots+x_n\le k)$ ? and how can we find it in an efficient way?
This answer addresses a previous version of the question, in which the $x_i$ were independent Bernoulli random variables with parameters $p_i$. $P\{\sum_{i=1}^n x_i = k\}$ equals the coefficient of $z^k$ in $(1-p_1+p_1z)(1-p_2+p_2z)\cdots(1-p_n+p_nz)$. This can be found by developing the Taylor series for this function. It is not much easier than grinding out the answer by brute force.
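For the current version of the question (general values $a_i$), the same generating-function idea gives an efficient dynamic program when the $a_i$ are integers: multiply out $\prod_i(1-p_i+p_iz^{a_i})$ by convolution. A sketch (the names and the integer-$a_i$ assumption are mine), cross-checked against brute-force enumeration:

```python
from itertools import product

def pmf_of_sum(vals_probs):
    # vals_probs: list of (a_i, p_i); X_i = a_i with prob p_i, else 0.
    # Builds the distribution of the sum by repeated convolution.
    dist = {0: 1.0}
    for a, p in vals_probs:
        new = {}
        for s, q in dist.items():
            new[s] = new.get(s, 0.0) + q * (1 - p)
            new[s + a] = new.get(s + a, 0.0) + q * p
        dist = new
    return dist

params = [(1, 0.2), (3, 0.5), (2, 0.9), (5, 0.4)]
dist = pmf_of_sum(params)
cdf_at = lambda k: sum(q for s, q in dist.items() if s <= k)

# Brute-force check over all 2^n outcomes.
for k in range(0, 12):
    brute = 0.0
    for outcome in product([0, 1], repeat=len(params)):
        pr, s = 1.0, 0
        for bit, (a, p) in zip(outcome, params):
            pr *= p if bit else (1 - p)
            s += a * bit
        if s <= k:
            brute += pr
    assert abs(cdf_at(k) - brute) < 1e-12
print("convolution CDF matches brute force")
```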
{ "language": "en", "url": "https://math.stackexchange.com/questions/262426", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 3, "answer_id": 1 }
Let $f$ be a continuous function on [$0, 1$] with $f(0) =1$. Let $ G(a) = 1/a ∫_0^a f(x)\,dx$ then which of the followings are true? Let $f$ be a continuous function on [$0, 1$] with $f(0) =1$. Let $ G(a) = 1/a ∫_0^af(x)\,dx$ then which of the followings are true? * *$\lim_{(a\to 0)} G(a)=1/2$ *$\lim_{(a\to0)} G(a)=1$ *$\lim_{(a\to 0)} G(a)=0$ *The limit $\lim_{(a\to 0)G(a)}$ does not exist. I am completely stuck on it. How should I solve this?
Note that $G(a)$ is the mean (or average) value of the function on the interval $[0,a]$. Here’s an intuitive argument that should help you see what’s going on. The function $f$ is continuous, and $f(0)=1$, so when $x$ is very close to $0$, $f(x)$ must be close to $1$. Thus, for $a$ close to $0$, $f(x)$ should be close to $1$ for every $x\in[0,a]$, and therefore its mean value should also be close to $1$. From that it should be easy to pick out the right answer, but it would also be a good exercise for you to try to prove that the answer really is right.
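Numerically, with the concrete choice $f(x)=\cos x$ (so $f(0)=1$ and $G(a)=\sin a/a$ — an example of mine, not from the problem), the mean value visibly tends to $1$ as $a\to 0$:

```python
import math

def G(a, n=1000):
    # Midpoint-rule approximation of (1/a) * integral_0^a cos(x) dx.
    h = a / n
    return sum(math.cos((i + 0.5) * h) for i in range(n)) * h / a

for a in (1.0, 0.1, 0.01):
    print(a, G(a))              # approaches 1 as a shrinks
assert abs(G(0.01) - 1.0) < 1e-4
```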
{ "language": "en", "url": "https://math.stackexchange.com/questions/262499", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 4, "answer_id": 0 }
How can I solve this differential equation? How can I find a solution of the following differential equation: $$\frac{d^2y}{dx^2} =\exp(x^2+ x)$$ Thanks!
$$\frac{d^2y}{dx^2}=f(x)$$ Integrating both sides with respect to x, we have $$\frac{dy}{dx}=\int f(x)~dx+A=\phi(x)+A$$ Integrating again $$y=\int \phi(x)~dx+Ax+B=\chi(x)+Ax+B$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/262559", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
$\sqrt{(a+b-c)(b+c-a)(c+a-b)} \le \frac{3\sqrt{3}abc}{(a+b+c)\sqrt{a+b+c}}$ Suppose $a, b, c$ are the lengths of three triangular edges. Prove that: $$\sqrt{(a+b-c)(b+c-a)(c+a-b)} \le \frac{3\sqrt{3}abc}{(a+b+c)\sqrt{a+b+c}}$$
As the hint given in the comments says (I denote by $S$ the area of $ABC$ and by $R$ the radius of its circumcircle), if you multiply your inequality by $\sqrt{a+b+c}$, the left side becomes $4S$ by Heron's formula and you'll get $$4S \leq \frac{3\sqrt{3}abc}{a+b+c}$$ which is equivalent to $$a+b+c \leq 3\sqrt{3}\frac{abc}{4S}=3\sqrt{3}R.$$ This inequality is quite well known. If you want a proof, you can write $a=2R \sin A$ (and the other two equalities) and get the equivalent inequality $$ \sin A +\sin B +\sin C \leq \frac{3\sqrt{3}}{2}$$ which is an easy application of the Jensen inequality for the concave function $\sin : [0,\pi] \to [0,1]$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/262619", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Escalator puzzle equation I'm trying to understand the escalator puzzle. A man visits a shopping mall almost every day and he walks up an up-going escalator that connects the ground and the first floor. If he walks up the escalator step by step it takes him 16 steps to reach the first floor. One day he doubles his stride length (walks up climbing two steps at a time) and it takes him 12 steps to reach the first floor. If the escalator stood still, how many steps would there be on sight? The solution, apparently, is as follows: $16x = 12(x+1)$, so $x=3$, so the answer is 48. But why can we say $12(x+1)$? First, he covers 16 steps and the motion of the escalator gives him a multiplier of $x$ to cover a total of $16x$ steps. That makes sense. But why is this the same as 12 steps with a multiplier of $(x+1)$?
Let $d$ be the number of steps on the stationary escalator, $v$ the man's climbing speed and $x$ the escalator's speed, both measured in steps per unit time. In case 1 he moves at speed $v+x$ relative to the ground, so the trip takes $d/(v+x)$ units of time, during which he himself takes $$v\cdot\frac d{v+x}=16$$ steps. In case 2 he covers two steps per stride at the same stride rate, so his speed doubles to $2v$ and he makes $$v\cdot\frac d{2v+x}=12$$ strides. Eliminating $vd$ gives $16(v+x)=12(2v+x)$, so $x=2v$; then $vd=16(v+x)=48v$, hence $d=48$. When the escalator stands still, there are $48$ steps to climb.
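A tiny discrete simulation confirms the bookkeeping (my sketch; one tick = one stride, and the escalator speed is $x=2v=2$ steps per tick):

```python
def strides_to_top(total_steps, stride, escalator_speed):
    # Each tick the man climbs `stride` steps while the escalator
    # carries him `escalator_speed` steps further.
    climbed, strides = 0, 0
    while climbed < total_steps:
        climbed += stride + escalator_speed
        strides += 1
    return strides

# With 48 steps and escalator speed 2 (twice the single-step speed):
assert strides_to_top(48, 1, 2) == 16   # one step per stride
assert strides_to_top(48, 2, 2) == 12   # two steps per stride
print("48 steps reproduces both observations")
```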
{ "language": "en", "url": "https://math.stackexchange.com/questions/262731", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 2 }
Is this an equivalence relation (reflexivity, symmetry, transitivity) Let $\theta(s):\mathbb{C}\to \mathbb{R}$ be a well defined function. I define the following relation in $\mathbb{C}$. $\forall s,q \in \mathbb{C}: s\mathbin{R}q\iff\theta(s)\ne 0 \pmod {2\pi}$ (and) $\theta(q)\ne 0 \pmod {2\pi}$ The function $\pmod {2\pi}$ is the addition $\pmod {2\pi}$ My question: Is this an equivalence relation (reflexivity, symmetry, transitivity)? The formula of $\theta(s)$ is not important for this question.
Your relation is $$sRq\iff \theta(s)\not \equiv 0\text{ and }\theta(q)\not \equiv 0 \mod 2\pi$$ for $s,q\in \mathbb{C}$. For symmetry: $$sRq\iff \theta(s)\not \equiv 0\text{ and }\theta(q)\not \equiv 0 \mod 2\pi \iff qRs$$ For transitivity: $$sRq\text{ and }qRp\iff \theta(s)\not \equiv 0\text{ and }\theta(q)\not \equiv 0\text{ and }\theta(p)\not \equiv 0 \mod 2\pi\implies sRp$$ Reflexivity is: $$sRs\iff \theta(s)\not \equiv 0\mod 2\pi$$ That clearly depends on your choice of $\theta$. Therefore, $R$ is an equivalence relation iff $$\theta(\mathbb{C})\cap \left\{2k\pi:k\in \mathbb{Z}\right\}=\emptyset$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/262794", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Products of infinitely many measure spaces. Applications? * *What are some typical applications of the theory for measures on infinite product spaces? *Are there any applications that you think are particularly interesting - that make the study of this worthwhile beyond finite products, Fubini-Tonelli. *Are there theorems that require, or are equivalent to, certain choice principles (AC, PIT, etc)? (similar to Tychonoff in topology) Sorry for being so vague, I am just trying to get a feel for this new area before diving head-first into the technical details.
Infinite products of measure spaces are used very frequently in probability. Probabilists are frequently interested in what happens asymptotically as a random process continues indefinitely. The Strong Law of Large Numbers, for example, tells us that if $\{X_i\}_i$ is a sequence of independent, identically distributed random variables with finite mean $\mu$ then the sum $\frac{1}{n}\sum_{i=1}^n X_i$ converges almost surely to $\mu$. But how do we find infinitely many independent random variables to which we can apply this theorem? The most common way to produce these variables is with the infinite product. For example, say we want to flip a coin infinitely many times. A way to model this would be to let $\Omega$ be the probability space $\{-1,1\}$ where $P(1) = P(-1) = \frac{1}{2}$. Then we consider the probability space $\prod_{i=1}^\infty \Omega$, and let $X_i$ be the $i$th component. Then the $X_i$ are independent identically distributed variables.
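The coin-flip construction can be illustrated directly (a sketch of mine; finitely many draws stand in for one point of the infinite product space):

```python
import random

random.seed(0)
n = 100_000
# One sample path omega = (X_1, X_2, ...), truncated at n coordinates.
flips = [random.choice((-1, 1)) for _ in range(n)]
running_mean = sum(flips) / n
print(running_mean)   # SLLN: close to the common mean mu = 0
```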
{ "language": "en", "url": "https://math.stackexchange.com/questions/262843", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
What's the probability of a gambler losing \$10 in this dice game? What about making \$5? Is there a third possibility? Can you please help me with this question: In a gambling game, each turn a player throws 2 fair dice. If the sum of numbers on the dice is 2 or 7, the player wins a dollar. If the sum is 3 or 8, the player loses a dollar. The player starts to play with 10 dollars and stops the game if he loses all his money or if he earns 5 dollars. What's the probability for the player to lose all the money and what's the probability to finish the game as a winner? If there some 3rd possibility to finish the game? If yes, what's its probability? Thanks a lot!
[edit: Apparently I misread the question. The player starts out with 10 dollars and not five.] Given that "rolling a 2 or 7" and "rolling a 3 or 8" have the same probability (both occur with probability 7/36), the problem of the probability of a player earning a certain amount of money before losing a different amount of money is the same as the problem of the Gambler's Ruin. What's different when considering individual rounds is that there's a possibility of a tie. But because a tie leaves everything unchanged, the Gambler's Ruin still applies simply because we can simply consider only events that do change the state of each player's money. Therefore, the probability that the player makes \$5 before losing \$10 is the same probability as flipping coins against somebody with $5, or 2/3. And the probability of the opposite event is 1/3. The third outcome, that the game goes on forever, has a probability that vanishes to zero.
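A simulation of the reduced game (ties ignored, so each decisive round is a fair $\pm\$1$ step — my sketch) reproduces the $2/3$:

```python
import random

random.seed(1)

def reaches_target(bankroll=10, target=15):
    # Fair random walk absorbed at 0 (ruin) or at target (wins $5).
    while 0 < bankroll < target:
        bankroll += 1 if random.random() < 0.5 else -1
    return bankroll == target

trials = 20_000
wins = sum(reaches_target() for _ in range(trials))
print(wins / trials)        # should be near 10/15 = 2/3
```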
{ "language": "en", "url": "https://math.stackexchange.com/questions/262925", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }