Good sources on studying fractals (the mathematical, and not just the pretty pictures version)? Particularly, I'm interested in learning about the dimensions (whether it's always possible to find them, and if so, a concrete way of calculating them) of different types of fractals (given by the Hausdorff dimension, according to a few sources), but particularly iterated function systems, other properties of fractals in general, theorems regarding fractals and their properties, and possibly also open problems regarding fractals. It'd be preferable if linear algebra methods could be used to study fractals. I have a background in undergraduate level analysis, abstract algebra and topology.
The author who would be a great start for looking at fractals constructed by iterated function systems and then studying their features like Hausdorff dimension is Kenneth Falconer. His books from the 90's are a great base to start from. If you want to see what is an amazing use of linear algebra to study functions and Laplacians on fractals Robert Strichartz has a book aimed at exactly your background. The spectral decimation method that Strichartz writes is beautiful when recast into a matrix formulation.
{ "language": "en", "url": "https://math.stackexchange.com/questions/447445", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 1 }
Solving for $(x,y): 2+\frac1{x+\frac1{y+\frac15}}=\frac{478}{221}$ Solving for $x,y\in\mathbb{N}$: $$2+\dfrac1{x+\dfrac1{y+\dfrac15}}=\frac{478}{221}$$ This doesn't make any sense; I made $y+\frac15=\frac{5y+1}5$, and so on, but it turns out to be a very complicated fraction on the left hand side, and I don't even know if what I got is correct. Is there a more mathematical way to approach this problem?
Yes, there is a very large and important mathematical theory, called the theory of Continued Fractions. These have many uses, both in number theory and in analysis (approximation of functions). Let's go backwards. The number $\frac{478}{221}$ is $2+\frac{36}{221}$, which is $2+\frac{1}{\frac{221}{36}}$. But $\frac{221}{36}=6+\frac{5}{36}$, which is $6+\frac{1}{\frac{36}{5}}$. Finally, $\frac{36}{5}=7+\frac{1}{5}$. Now compare the results of these calculations with your expression.
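For readers who want to check the expansion mechanically, here is a small Python sketch of my own (not part of the original answer) that repeats exactly the floor-and-invert steps described above; the function name is an arbitrary choice.

```python
from fractions import Fraction

def continued_fraction(q):
    """Expand a positive rational q by repeatedly taking the integer part and inverting the remainder."""
    terms = []
    while True:
        a = q.numerator // q.denominator
        terms.append(a)
        q -= a
        if q == 0:
            return terms
        q = 1 / q

print(continued_fraction(Fraction(478, 221)))  # [2, 6, 7, 5]
```

The output $[2,6,7,5]$ matches the computation above, i.e. $x=6$ and $y=7$ in the original equation.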
{ "language": "en", "url": "https://math.stackexchange.com/questions/447512", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Covariant derivative of $Ricc^2$ How to derivate (covariant derivative) the expressions $R\cdot Ric$ and $Ric^2$ where $Ric^2$ means $Ric \circ Ric$? Here, $Ric$ is the Ricci tensor seen as a operator and $R$ is the scalar curvature of a Riemannian manifold.
We want to compute $$(\nabla(R\cdot Rc))(X)$$ Connections commute with contractions, so we start by considering the expression (where we contract $X$ with $Rc$, obtaining $Rc(X)$) $$\nabla(R\cdot Rc\otimes X) = (\nabla (R\cdot Rc))\otimes X + R\cdot Rc\otimes (\nabla X)$$ Taking the contraction and moving terms around, we get $$(\nabla(R\cdot Rc))(X) = \nabla(R\cdot Rc(X)) - R\cdot Rc(\nabla X)$$ For the second one, notice that $$Rc\circ Rc = Rc\otimes Rc$$ after contraction, so we proceed as before and start with $$\nabla[Rc\otimes Rc\otimes Y] = [\nabla(Rc\otimes Rc)]\otimes Y + Rc\otimes Rc\otimes \nabla Y$$ We contract and move terms to get $$[\nabla(Rc\circ Rc)](Y) = \nabla(Rc(Rc(Y))) - Rc(Rc(\nabla Y))$$ Now notice that using twice the first equality we derived we obtain $$\nabla(Rc(Rc(Y))) = (\nabla Rc)(Rc(Y)) + Rc((\nabla Rc(Y))) =$$ $$= (\nabla Rc)(Rc(Y)) + Rc((\nabla Rc)(Y) +Rc(\nabla Y)))$$ Putting all together we obtain $$\nabla(Rc\circ Rc) = (\nabla Rc)\circ Rc + Rc\circ(\nabla Rc)$$ This was an explicit derivation of everything. You can also use the various rules on the connections on the bundles associated to the tangent bundle to get directly $$\nabla(R\cdot Rc) = dR\otimes Rc + R\nabla Rc$$ (which agrees with the first result, if you work it out a bit) and $$\nabla(Rc\circ Rc) = \nabla(Rc\otimes Rc) = (\nabla Rc)\otimes Rc + Rc\otimes(\nabla Rc) = (\nabla Rc)\circ Rc + Rc\circ(\nabla Rc)$$ where there is a contraction of the tensors in the intermediate steps.
{ "language": "en", "url": "https://math.stackexchange.com/questions/447569", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Calculating derivative by definition vs not by definition I'm not entirely sure I understand when I need to calculate a derivative using the definition and when I can do it normally. The following two examples confused me: $$ g(x) = \begin{cases} x^2\cdot \sin(\frac {1}{x}) & x \neq 0 \\ 0 & x=0 \end{cases} $$ $$ f(x) = \begin{cases} e^{\frac {-1}{x}} & x > 0 \\ -x^2 & x\leq 0 \end{cases} $$ I understand that I can differentiate normally for any $x$ that's not 0 (in both of these examples). I'm confused because I saw an example with $f(x)$ in which they calculated the $f'(x)$ by differentiating: $$ f'(x) = \begin{cases} \frac {1}{x^2}\cdot e^{\frac {-1}{x}} & x > 0 \\ -2x & x\leq0 \end{cases} $$ and then they calculated $f'(0)$ not using the definition but by $ \lim_{x\to0^-} f'(x)$ $\lim_{x\to0^+} f'(x)$ For $g(x)$ though I know that $g'(0)$ exists (using definition) but $$g'(x) = \begin{cases} 2x\cdot \sin(\frac {1}{x})-\cos(\frac {1}{x}) & x \ne 0 \\ 0 & x=0 \end{cases}$$ and you can't calculate $\lim_{x\to0^-} g'(x)$ or $\lim_{x\to0^+} g'(x)$ since $\lim_{x\to0^{+/-}} 2x\cdot \sin(\frac {1}{x})-\cos(\frac {1}{x}) $ doesn't exist. So what's the difference between these two? When can I just differentiate normally like in the first example ($f(x)$) and when do I have to use the definition like in the second example ($g(x)$)? I'd appreciate the help. Edit: When I"m referring to the definition I'm referring to the following: $\lim_{h \rightarrow 0} \dfrac{f(x+h) - f(x)}{h}$
Given a function $f(x)$ and a point $x_0$, it's possible to take $f'(x_0)$ by differentiating normally only if $\lim\limits_{x \rightarrow x_0} f'(x)$ exists. However, this is not the case for the $g(x)$ that you gave. As $x \rightarrow 0$, $-\cos\left(\frac{1}{x}\right)$ oscillates, and, in fact, in any small neighbourhood of $0$, there are points $y$ and $z$ such that $-\cos\left(\frac{1}{y}\right) = -1$ and $-\cos\left(\frac{1}{z}\right) = 1$. Thus, the limit as $x \rightarrow 0$ of this part of the function $g'(x)$ gotten by normal differentiation does not exist, and so the limit of the whole function does not exist, and therefore, it cannot be used as the derivative at that point. However, using the limit definition of the derivative, it is possible to find that $\frac{dg}{dx}(0) = 0$.
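A quick numerical illustration (my own sketch, not from the answer) of the two facts used here: the difference quotient of $g$ at $0$ tends to $0$, while $g'(x)$ keeps oscillating near $0$.

```python
import math

def g(x):
    return x * x * math.sin(1 / x) if x != 0 else 0.0

# difference quotient at 0: (g(h) - g(0)) / h = h * sin(1/h), which tends to 0
for h in (1e-2, 1e-4, 1e-6):
    print(h, (g(h) - g(0)) / h)

# g'(x) = 2x sin(1/x) - cos(1/x) sampled at x = 1/(k*pi), where cos(1/x) = (-1)^k
for k in (10**4, 10**4 + 1, 10**6, 10**6 + 1):
    x = 1 / (k * math.pi)
    print(x, 2 * x * math.sin(1 / x) - math.cos(1 / x))  # values alternate near -1 and +1
```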
{ "language": "en", "url": "https://math.stackexchange.com/questions/447637", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 0 }
Solve a set of non linear Equations on Galois Field I have the following set of equations: $$M_{1}=\frac{y_1-y_0}{x_1-x_0}$$ $$M_{2}=\frac{y_2-y_0}{x_2-x_0}$$ $M_1, M_2, x_1, y_1, x_2, y_2,$ are known and they are chosen from a $GF(2^m).$ I want to find $x_0,y_0$ I ll restate my question. Someone chose three distinct x0,x1,x2, as well as y0,y1,y2, then computed M1, M2, and finally revealed M1,M2,x1,y1,x2,y2, but not x0,y0 to us.All the variables are chosen from a Galois Field. I want to recover the unknown $x_0,y_0.$ Is it possible to accomplish that? If a set of nonlinear equations have been constructed with the aforementioned procedure e.g. $$M_1=\frac{k_1-(y_0+(\frac{y_1-y_0}{x_1-x_0})(l_1-x_0))}{(l_1-x_0)(l_1-x_1)}$$ $$M_2=\frac{k_2-(y_0+(\frac{y_1-y_0}{x_1-x_0})(l_2-x_0))}{(l_2-x_0)(l_2-x_1)}$$ $$M_3=\frac{k_3-(y_0+(\frac{y_1-y_0}{x_1-x_0})(l_3-x_0))}{(l_3-x_0)(l_3-x_1)}$$ $$M_4=\frac{k_4-(y_0+(\frac{y_1-y_0}{x_1-x_0})(l_4-x_0))}{(l_4-x_0)(l_4-x_1)}$$ where $x_0,y_0 x_1,y_1$ are the unknown GF elements. Can I recover the unknown elements? My question was if the fact that the set of equations is defined on a Galois Field imposes any difficulties to find its solution. If not I suppose that the set can be solved. Is this true? Has mathematica or matlab any package that will help me to verify it? When I tried to solve a system similar to the one above posted I found out that ${x_i}^{2}, 0\leq i \leq 2$ has come up. I think that I should have to compute the square root of the x0. Is it possible in GF? Is it also possible to compute the $\sqrt[1/n]{x_0}$?
The first system can be solved in the usual way, provided the "slopes" $M_i$ are distinct. Solve each for the knowns $y_k$, $k=1,2$ and subtract. You can then get to $$x_0=\frac{M_2x_2-M_1x_1-y_2+y_1}{M_2-M_1},$$ and then use one of the equations you already formed with this $x_0$ plugged in to get $y_0.$ Since this method only uses addition/subtraction multiplication/(nonzero)division it works in any field, in particular in your Galois field.
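As a sanity check of the recovery formula (not part of the original answer), here is a sketch in the toy field $GF(2^4)$ with modulus $x^4+x+1$; the field size, the sample secret values and all helper names are my own choices, and addition/subtraction are XOR because the characteristic is $2$.

```python
# Toy check in GF(2^4) with modulus x^4 + x + 1 (0b10011); values below are arbitrary.
M, MOD = 4, 0b10011

def gf_mul(a, b):
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & (1 << M):      # reduce once the degree reaches M
            a ^= MOD
    return r

def gf_inv(a):
    # a^(2^M - 2) = a^(-1) for a != 0 (Lagrange in the multiplicative group)
    r, e = 1, (1 << M) - 2
    while e:
        if e & 1:
            r = gf_mul(r, a)
        a = gf_mul(a, a)
        e >>= 1
    return r

def gf_div(a, b):
    return gf_mul(a, gf_inv(b))

# secret point (x0, y0) and two revealed points
x0, y0, x1, y1, x2, y2 = 3, 5, 7, 2, 9, 11
M1 = gf_div(y1 ^ y0, x1 ^ x0)
M2 = gf_div(y2 ^ y0, x2 ^ x0)

# recover x0 via the formula from the answer, then y0 from the first equation
x0_rec = gf_div(gf_mul(M2, x2) ^ gf_mul(M1, x1) ^ y2 ^ y1, M2 ^ M1)
y0_rec = y1 ^ gf_mul(M1, x1 ^ x0_rec)
print(x0_rec == x0, y0_rec == y0)   # True True
```

Since only field operations are used, the same recovery works in any $GF(2^m)$, provided $M_1\neq M_2$.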
{ "language": "en", "url": "https://math.stackexchange.com/questions/447716", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Cauchy theorem for a rectangle. Here $\delta R$ will give the boundary of a rectangle taken positively. This is a theorem of the book Complex Analysis: An Introduction to the Theory of Analytic Functions of One Variable by L. V. Ahlfors, chapter 4: Complex Integration. Let $f(z)$ be analytic in the set $R'$ obtained from a rectangle $R$ by omitting a finite number of points $\zeta_j$. If it is true that $$\lim_{z\rightarrow \zeta_j}(z-\zeta_j)f(z) = 0$$ for all $j$, then $$\int_{\delta R}f(z)dz = 0$$ It is sufficient to prove the case for a single exceptional point $\zeta$, for evidently $R$ can be divided into smaller rectangles, each of which contains at most one $\zeta_j$. We divide $R$ into nine rectangles, as shown in the figure, and apply ... My doubt is the inequality $$\int_{\delta R_o} \frac{|dz|}{|z-\zeta|} < 8.$$ Where does this inequality come from? Please give me some clues.
A favorite trick for estimating a line integral $\int_\gamma f$ is that it is bounded by $M\cdot L(\gamma)$, where $|f(z)|\leq M$ for all $z$ in the trace of $\gamma$, and $L(\gamma)$ the length of the curve. Here we can use for $M$ the reciprocal of the minimum distance of $z$ from $\zeta$, that is the minimum distance from the edge of the square to its center. And the length should be easy to figure out. Concretely, if (as in Ahlfors' figure) $R_0$ is a square with center $\zeta$ and side length $s$, then $|z-\zeta|\ge s/2$ on $\delta R_0$ while $L(\delta R_0)=4s$, so the integral is at most $4s\cdot\frac{2}{s}=8$, and strictly less since $|z-\zeta|>s/2$ except at four points. That's how they got this estimate.
{ "language": "en", "url": "https://math.stackexchange.com/questions/447782", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 0 }
Differentiability of $|xy|^{\alpha}$ I was asked to prove that $|xy|^{\alpha}$ is differentiable at $(0,0)$ if $\alpha > \frac{1}{2}$. Since both the partial derivatives are zero, I concluded that this function is differentiable if and only if the following holds: $$ \lim\limits_{(x,y)\to (0,0)} \frac{|xy|^{\alpha}}{\sqrt{x^{2} + y^{2}}} = 0$$ However, I am not sure how to show this. What I tried is: $$\frac{|xy|}{\sqrt{x^{2}+y^{2}}} \leq \frac{1}{\sqrt{2}}$$ Hence the given expression is less than or equal to $$ \frac{1}{\sqrt{2}}|xy|^{\alpha - \frac{1}{2}}$$ Now I conclude that this goes to zero as $x,y$ go to zero? I was just wondering if I am correct in all my steps? Any help would be appreciated. EDIT: Can I use other norms on $\mathbb{R}^{2}$ instead of the Euclidean norm to conclude matters of differentiability? I ask this as it would be easier to work with other norms.
Polar co-ordinates. $\displaystyle \lim_{(x,y)\to(0,0)}\frac{|xy|^{\alpha}}{\sqrt{x^2+y^2}}=\lim_{r\to 0}r^{2\alpha-1}|\sin \theta \cos \theta|^{\alpha}$, which $\to 0$ if $\alpha>\frac{1}{2}$, since the trigonometric factor is bounded.
{ "language": "en", "url": "https://math.stackexchange.com/questions/447853", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Number of trees with a fixed edge Consider a vertex set $[n]$. By Cayley's theorem there are $n^{(n-2)}$ trees on $[n]$, but how can one count the following slightly modified version: What is the number of trees on $[n]$ vertices where the edge $\{1,2\}$ is definitely contained in the trees?
It seems that you can adjust this proof. First, we denote by $S_n$ the number of trees with one of your edges fixed, and then we follow, as to count the number of ways the directed edges can be added to form rooted trees where your edge is added first. As in original proof, we can pick the root in $n$ ways, and add the edges in $(n-2)!$ permutations (the difference is that we add your edge first). We arrive at $S_n n(n-2)!$ sequences. Secondly, we will add edges one by one, but we will start with your edge (which is already picked). The only thing we need to pick is its direction. Then, we proceed usually with the rest, that is if there are $k$ forests, we can pick starting point in $n$ ways (any vertex) and ending point in $(k-1)$ (only the roots). When multiplied together we get (similarly to the original proof) $$ 2\prod_{k=2}^{n-1}n(k-1) = 2n^{n-2}(n-2)!$$ Comparing the two we get $$S_n n (n-2)! = 2n^{n-3}n(n-2)!$$ hence $$S_n = 2n^{n-3}.$$ I hope this helps ;-)
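A brute-force check of the count $2n^{n-3}$ (my own sketch, not part of the answer), using the standard bijection between labelled trees on $[n]$ and Prüfer sequences of length $n-2$:

```python
from itertools import product

def prufer_to_edges(seq, n):
    """Decode a Pruefer sequence (labels 1..n, length n-2) into the edge set of a labelled tree."""
    degree = {v: 1 for v in range(1, n + 1)}
    for v in seq:
        degree[v] += 1
    edges = []
    for v in seq:
        leaf = min(u for u in degree if degree[u] == 1)  # smallest current leaf
        edges.append(frozenset((leaf, v)))
        degree[v] -= 1
        del degree[leaf]
    u, w = degree                      # exactly two vertices remain; join them
    edges.append(frozenset((u, w)))
    return edges

n = 5
target = frozenset((1, 2))
count = sum(target in prufer_to_edges(seq, n)
            for seq in product(range(1, n + 1), repeat=n - 2))
print(count, 2 * n ** (n - 3))   # 50 50
```

For $n=5$ this prints `50 50`, in agreement with $2\cdot 5^{2}=50$ out of $5^{3}=125$ labelled trees.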
{ "language": "en", "url": "https://math.stackexchange.com/questions/447945", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 3, "answer_id": 1 }
Let $R, P , Q$ be relations, prove that the following statement is tautology Let $P, R, Q$ be relations. Prove that: $\exists x(R(x) \vee P(x)) \to (\forall y \neg R(y) \to (\exists xQ(x) \to \forall x \neg P(x)))$ is a tautology. How do I do so? please help.
Relations of arity 1 (that is the relations that take only one argument) are just subsets of the universe. Therefore, your formula can be translated into $$(R \cup P \neq \varnothing) \to (R = \varnothing \to (Q \neq \varnothing \to P = \varnothing))$$ and then simplified into $$(R \neq \varnothing \lor P \neq \varnothing) \to (R = \varnothing \to (Q \neq \varnothing \to P = \varnothing)),$$ and further into $$(r \lor p) \to (\neg r \to (q \to \neg p)),$$ where $r := (R \neq \varnothing)$, etc. Now, let's set $\neg r$, $p$ and $q$, then $$(0\lor 1) \to (1 \to (1 \to 0)) = 1 \to (1 \to 0) = 1 \to 0 = 0,$$ which shows that your formula is not a tautology. To generate appropiate example, we set $U = \{\bullet\}$ and $R = \varnothing, P = U, Q = U$. Now we evaluate $$\exists x_1(R(x_1) \vee P(x_1)) \to (\forall y_2 \neg R(y_2) \to (\exists x_3Q(x_3) \to \neg \exists x_4 P(x_4))) $$ by setting $x_1 = \bullet$, $x_3 = \bullet$ and $x_4 = \bullet$ (observe that I have changed $\forall x \neg P(x)$ to $\neg \exists x_4 P(x_4)$, so that I can put some value there). What we get is $$(R(\bullet) \vee P(\bullet)) \to (\forall y_2 \neg R(y_2) \to (Q(\bullet) \to \neg P(\bullet))). $$ Now, $\forall y_2 \neg R(y_2)$ is true, so we get exactly the same expression as before, i.e. your formula is false in this case. What would make it a tautology is, for example, $\land$ in the beginning $$\exists x_1(R(x_1) \land P(x_1)) \to (\forall y_2 \neg R(y_2) \to (\exists x_3Q(x_3) \to \forall x_4 \neg P(x_4))) $$ which could be more easily seen with $$(R \cap P \neq \varnothing) \to (R = \varnothing \to (Q \neq \varnothing \to P = \varnothing)).$$ If we substitute $\alpha := (Q \neq \varnothing \to P = \varnothing)$, that is $$(R \cap P \neq \varnothing) \to (R = \varnothing \to \alpha),$$ then we can observe that $\alpha$ does not matter, because $(R \cap P \neq \varnothing)$ implies $R \neq \varnothing$. To do it more formally, consider two cases: * *$R = \varnothing$, then the left side of the top-most implication is false, hence all is true. *$R \neq \varnothing$, then the left side of the $R = \varnothing \to \alpha$ is false, so the right side of the top-most implication is true, hence, the expression is true. I hope this helps.
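The counterexample can be checked mechanically; here is a small sketch of mine (not part of the answer) that evaluates the formula over the one-element universe $U=\{\bullet\}$ with $R=\varnothing$, $P=Q=U$, encoding $\bullet$ as `0`:

```python
U = [0]                       # one-element universe
R, P, Q = set(), {0}, {0}     # R empty, P = Q = U

def implies(a, b):
    return (not a) or b

exists_R_or_P = any(x in R or x in P for x in U)
forall_not_R  = all(x not in R for x in U)
exists_Q      = any(x in Q for x in U)
forall_not_P  = all(x not in P for x in U)

value = implies(exists_R_or_P, implies(forall_not_R, implies(exists_Q, forall_not_P)))
print(value)  # False, so the formula is not a tautology
```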
{ "language": "en", "url": "https://math.stackexchange.com/questions/448021", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Evaluating $\lim\limits_{x \to 0} \frac1{1-\cos (x^2)}\sum\limits_{n=4}^{\infty} n^5x^n$ I'm trying to solve this limit but I'm not sure how to do it. $$\lim_{x \to 0} \frac1{1-\cos(x^2)}\sum_{n=4}^{\infty} n^5x^n$$ I thought of finding the function that represents the sum but I had a hard time finding it. I'd appreciate the help.
Note that the first term is $$\frac 1 {1-(1-x^4/2+O(x^8))} = \frac 2 {x^4}(1+O(x^4))$$ and that the second term is $$4^5 x^4(1+O(x)),$$ so the limit of the product is $2\cdot 4^5 = 2048$. (The question keeps changing; this is for the $\cos x^2$ denominator.)
{ "language": "en", "url": "https://math.stackexchange.com/questions/448100", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 5, "answer_id": 3 }
What rational numbers have rational square roots? All rational numbers have the fraction form $$\frac a b,$$ where a and b are integers($b\neq0$). My question is: for what $a$ and $b$ does the fraction have rational square root? The simple answer would be when both are perfect squares, but if two perfect squares are multiplied by a common integer $n$, the result may not be two perfect squares. Like:$$\frac49 \to \frac 8 {18}$$ And intuitively, without factoring, $a=8$ and $b=18$ must qualify by some standard to have a rational square root. Once this is solved, can this be extended to any degree of roots? Like for what $a$ and $b$ does the fraction have rational $n$th root?
We give a fairly formal statement and proof of the result described in the post. Theorem: Let $a$ and $b$ be integers, with $b\ne 0$. Suppose that $\frac{a}{b}$ has a rational square root. Then there exists an integer $e$, and integers $m$ and $n$, such that $a=em^2$ and $b=en^2$, Proof: It is enough to prove the result for positive $b$. For if $b$ is negative and $\frac{a}{b}$ has a square root, then we must have $a\le 0$. Thus $\frac{a}{b}=\frac{|a|}{|b|}$. If we know that there are integers $e$, $m$, $n$ such that $|a|=em^2$ and $|b|=en^2$, then $a=(-e)m^2$ and $b=(-e)n^2$. So suppose that $b\gt 0$, and $a\ge 0$. Let $d$ be the greatest common divisor of $a$ and $b$. Then $a=da^\ast$, and $b=db^\ast$, for some relatively prime $a^\ast$ and $b^\ast$. It will be sufficient to prove that each of $a^\ast$ and $b^\ast$ is a perfect square. Since $\frac{a^\ast}{b^\ast}$ is a square, there exist relatively prime integers $m$ and $n$ such that $\frac{a^\ast}{b^\ast}=\left(\frac{m}{n}\right)^2$. With some algebra we reach $$a^\ast n^2=b^\ast m^2.$$ By Euclid's Lemma, since $b^\ast$ divides the product on the left, and is relatively prime to $a^\ast$, we have that $b^\ast$ divides $n^2$. Also, because $n^2$ divides the expression on the right, and $n^2$ is relatively prime to $m^2$, we have $n^2$ divides $b^\ast$. Since $b^\ast$ is positive, we conclude that $b^\ast=n^2$. Now it is easy to show that $a^\ast=m^2$. A similar theorem can be stated and proved for $k$-th roots.
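A computational restatement of the theorem (my own sketch, not part of the answer): $\frac ab$ has a rational square root exactly when, after cancelling the gcd, both the numerator and the denominator are perfect squares.

```python
from math import gcd, isqrt

def has_rational_sqrt(a, b):
    """True iff the rational a/b (b != 0) is the square of a rational number."""
    if b < 0:
        a, b = -a, -b
    if a < 0:
        return False                 # negative rationals have no rational square root
    if a == 0:
        return True
    g = gcd(a, b)
    a, b = a // g, b // g            # reduce to lowest terms
    return isqrt(a) ** 2 == a and isqrt(b) ** 2 == b

print(has_rational_sqrt(8, 18))   # True:  8/18 = 4/9 = (2/3)^2
print(has_rational_sqrt(2, 1))    # False: sqrt(2) is irrational
```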
{ "language": "en", "url": "https://math.stackexchange.com/questions/448172", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "25", "answer_count": 8, "answer_id": 0 }
How to prove that $\lim\limits_{x\to0}\frac{\tan x}x=1$? How to prove that $$\lim\limits_{x\to0}\frac{\tan x}x=1?$$ I'm looking for a method besides L'Hospital's rule.
One way to look at it is to consider an angle subtended by two line segments, both of length $r$, where the angle between them is $x$ (we take $x$ to be small and positive). If you draw this out, you can see there are "3 areas" you can compare: the triangle enclosed by the two segments and the straight line joining their endpoints, the circular sector, and lastly a right-angled triangle. Sorry I can't provide a diagram, I'm new to maths.stackexchange :) You get the following result: $$\tfrac12 r^2\sin x < \tfrac12 r^2 x < \tfrac12 r^2\tan x$$ for small $x>0$; after simplification we get $\sin x < x < \tan x$. Dividing by $\tan x$ yields $$\cos x < \frac{x}{\tan x} < 1.$$ Taking the limit as $x \to 0$, both outer expressions tend to $1$, so by the squeeze theorem $\lim_{x\to 0}\frac{x}{\tan x}=1$. Now the limit of $\frac{\tan x}{x}$ as $x$ approaches $0$ will be the reciprocal of this. I should mention I am assuming early foundational results regarding limits in an Analysis course. Hence, the limit is $1$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/448207", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11", "answer_count": 12, "answer_id": 8 }
convert ceil to floor Mathematically, why is this true? $$\left\lceil\frac{a}{b}\right\rceil= \left\lfloor\frac{a+b-1}{b}\right\rfloor$$ Assume $a$ and $b$ are positive integers. Is this also true if $a$ and $b$ are real numbers?
If $b\mid a$, both sides equal $\frac{a}{b}$, since $\lfloor\frac{a+b-1}{b}\rfloor=\lfloor\frac{a}{b}+\frac{b-1}{b}\rfloor=\frac{a}{b}$. Now suppose $b\nmid a$. Let $\lfloor\frac{a+b-1}{b}\rfloor=n \implies nb+r=a+b-1$ where $0\leq r<b$; in fact $r\le b-2$, since $r=b-1$ would force $b\mid a$. So $\frac{a}{b}+1=n+\frac{r+1}{b}$ with $0<\frac{r+1}{b}<1$. So $\lfloor\frac{a}{b}+1\rfloor=\lceil\frac{a}{b}\rceil=n$. For real $a$ and $b$ this is not true in general, as shown by Lyj.
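A quick check (my own sketch, not from the answer): the identity holds for small positive integers, while $a=\tfrac12$, $b=1$ shows it can fail for real numbers.

```python
import math

# integer case: ceil(a/b) computed exactly as -(-a // b)
ok = all((a + b - 1) // b == -(-a // b)
         for a in range(1, 200) for b in range(1, 50))
print(ok)  # True

# real case: a = 0.5, b = 1 breaks the identity
a, b = 0.5, 1
print(math.ceil(a / b), math.floor((a + b - 1) / b))  # 1 0
```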
{ "language": "en", "url": "https://math.stackexchange.com/questions/448300", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10", "answer_count": 5, "answer_id": 2 }
I don't see how Cauchy's proof of AM $\ge$ GM holds for all cases? I am reading Maxima and Minima Without Calculus by Ivan Niven and on pages $24-26$ he gives Cauchy's proof for the $AM-GM$ . The general idea of the proof is that $P_{n}$ is the proposition $$(a_{1}+a_{2}+\cdot \cdot \cdot a_{n}) \ge n(a_{1} a_{2} \cdot \cdot \cdot a_{n})^{1/n}$$ The proof proves that if $P_{n}$ holds, then $P_{n-1}$ and $P_{2n}$ also hold. I have a problem with the part that shows that $P_{n-1}$ holds. It goes something like this: $g$ is the geometric mean of $a_{1} , a_{2} , \cdot \cdot \cdot a_{n-1}$ so that $g=(a_{1} , a_{2} , \cdot \cdot \cdot a_{n-1})^{1/n-1}$ Replace $a_{n}$ with $g$: $( a_{1}+ a_{2}+ \cdot \cdot \cdot +a_{n-1}+ g) \ge n ( a_{1} a_{2} \cdot \cdot \cdot g)^{1/n}$ The $RHS$ becomes $n(g^{n-1} g)^{1/n}=ng$ So we have $( a_{1}+ a_{2}+ \cdot \cdot \cdot +a_{n-1}+ g) \ge ng$ , or $( a_{1}+ a_{2}+ \cdot \cdot \cdot +a_{n-1}) \ge (n-1) g$ , therefore if $P_{n}$ holds, then $P_{n-1}$ does also. I understand the logic of the proof, but the problem I have is when he replaced $a_{n}$ with $g$ . Obviously, the last term is not necessarily equal to the the geometric mean of the rest of the terms. So how does this proof work for all cases? Thanks.
We wish to show that the case of the inequality with $n$ non-negative numbers proves the case with $n-1$ numbers. Take any $n-1$ non-negative numbers. We may put any non-negative number for $a_n$ since we already know it to be true in the case of $n$ numbers. We simply choose $a_n=g$ where $g$ is as you defined it.
{ "language": "en", "url": "https://math.stackexchange.com/questions/448429", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
helix and covering space of the unit circle Does a bounded helix; for instance $\{(\cos 2\pi t, \sin 2\pi t, t); -5\leq t\leq5\}$ in $\mathbb R^3$ with the projection map $(x,y,z)\mapsto (x,y)$ form a covering space for the unit circle $\mathbb S^1$? If yes, does it disprove: There is a bijection between the fundamental group of a space and the fibers of the base point into a simply connected covering space?
Yes it's a covering map. Since the fundamental group of $S^1$ is $\mathbb{Z}$, and since each fiber is the set of $(\cos(t_0),\sin(t_0),t_0+2\pi k)$, $k \in \mathbb{Z}$, there is no conflict to the bijection you mention. Note the question has been reformulated and this answer is no longer relevant; I'm awaiting further clarification from OP as to what the rephrased bijection assertion means. The re-edit now has the parameter $t$ bounded, i.e. $t \in [-5,5]$, in the helix parametrization $(\cos 2\pi t,\sin 2\pi t, 2\pi t).$ This is not a covering space, since for $t$ near an end, say near 5, the neighborhood of $(0,0)$ in the base is not covered by an open interval along the helix. In a simple covering space over the circle, if you are on a sheet of the cover over any point in the base, you should be able to move back or forth a bit and remain on that sheet, locally.
{ "language": "en", "url": "https://math.stackexchange.com/questions/448492", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Applications of logic What are some applications of symbolic logic? I tried using Google and Bing but just got a bunch of book recommendations, and links to articles I did not understand.
It helps mathematicians explain or describe something, i.e. a specific statement, in such a way that other mathematicians will understand it. It also makes the statement open to manipulation, and it is less time-consuming to write a formula than the whole statement in words. Moreover, it is a universal language understood by all, so nothing is lost in translating text, and finally it lets you focus on the important part of the statement.
{ "language": "en", "url": "https://math.stackexchange.com/questions/448552", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 3 }
How to show that there is no condition that can meet two inequalities? Here's an excerpt from Spivak's Calculus, 4th Edition, page 96: If we consider the function $$ f(x)= \left\{ \matrix{0, x \text{ irrational} \\ 1, x \text{ rational}} \right. $$ then, no matter what $a$ is, $f$ does not approach a number $l$ near $a$. In fact, we cannot make $|f(x)-l|<\frac{1}{4}$ no matter how close we bring $x$ to $a$, because in any interval around $a$ there are numbers $x$ with $f(x)=0$, and also numbers $x$ with $f(x)=1$, so that we would need $|0-l| < \frac{1}{4}$ and also $|1-l| < \frac{1}{4}$. How do I show that there is no $l$ that can satisfy both the inequality $|0-l| < \frac{1}{4}$ and the inequality $|1-l| < \frac{1}{4}$? Thank you in advance for any help provided.
When dealing with inequalities involving absolute values, it's often convenient to rewrite them as inequalities without absolute values. In your case, $$|0-l| < \frac{1}{4}$$ gives: $$-\frac{1}{4} < l < \frac{1}{4}$$ And $$|1-l| < \frac{1}{4}$$ gives: $$\frac{3}{4} < l < \frac{5}{4}$$ And now it's easy to spot the contradiction: no $l$ can be both smaller than $\frac{1}{4}$ and larger than $\frac{3}{4}$. It's easy to sometimes overlook cases when rewriting absolute-value inequalities, but it's good practice.
{ "language": "en", "url": "https://math.stackexchange.com/questions/448654", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 2 }
Relation between integral and distribution Let $\mu$ be a (positive) measure on $\mathbb{R}^d$ and $f$ be a $\mu$-measurable function on $\mathbb{R}^d$. How to prove that \begin{equation} \int_{\mathbb{R}^d} |f(x)|^p d\mu(x)=p\int_{0}^{\infty} \gamma^{p-1} \mu(\{x\in \mathbb{R}^d:|f(x)|>\gamma\}) d\gamma \end{equation} for every $1\leq p<\infty$ and every $\gamma>0$. I found in Folland book that we have to prove the following equation: \begin{align} \int_{\mathbb{R}^d} \phi(|f(x)|) d\mu(x)= -\int_{0}^{\infty} \phi(\gamma) d(\mu(\{x\in \mathbb{R}^d: |f(x)|>\gamma\})) \end{align} for non-negative function $\phi$ and using integration by part to get the result. Could we get some direct proof?
You can do it using the Indicator-Fubini trick. Here's how it goes: $$\int_{\mathbb{R}^d}|f(x)|^pd\mu(x) = \int_{\mathbb{R}^d}\int_0^{|f(x)|}pu^{p-1} dud\mu(x)$$ $$ = \int_{\mathbb{R}^d}\int_0^{\infty}1_{\{u < |f(x)|\}}pu^{p-1} dud\mu(x)$$ By Fubini Theorem, $$ = \int_0^{\infty}pu^{p-1}\int_{\mathbb{R}^d}1_{\{u < |f(x)|\}} d\mu(x)du$$ $$ = \int_0^{\infty}pu^{p-1}\mu\{x \in \mathbb{R^d}: |f(x)| > u \}du$$ $\blacksquare$
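A numerical sanity check of the layer-cake formula (my own sketch): take $f(x)=x$ on $[0,1]$ with Lebesgue measure and $p=2$, so both sides should equal $\tfrac13$, and $\mu(\{|f|>\gamma\})=1-\gamma$ for $0\le\gamma\le 1$.

```python
p = 2
N = 100_000
left = right = 0.0
for i in range(N):
    g = (i + 0.5) / N                        # midpoint rule on [0, 1]
    left  += g ** p / N                      # integral of |f|^p with f(x) = x
    right += p * g ** (p - 1) * (1 - g) / N  # p * g^(p-1) * mu(|f| > g)
print(left, right)  # both approximately 1/3
```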
{ "language": "en", "url": "https://math.stackexchange.com/questions/448757", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Number Theory divisibilty How can I check if $$12^{2013} + 7^{2013}$$ is divisible by $19$? Also, how can I format my questions to allow for squares instead of doing the ^ symbol.
Proposition : $a+b$ divides $a^m+b^m$ if $m$ is odd Some proofs : $1:$ Let $a+b=c,$ $a^m+b^m=a^m+(c-a)^m\equiv a^m+(-a)^m\pmod c\equiv \begin{cases} 2a^m &\mbox{if } m \text{ is even } \\ 0 & \mbox{if } m \text{ is odd } \end{cases}\pmod c $ $2:$ If $m$ is odd, $a^m+b^m=a^m-(-b)^m$ is divisible by $a-(-b)=a+b$ as $\frac{A^r-B^r}{A-B}=A^{r-1}+A^{r-2}B+A^{r-3}B^2+\cdots+A^2B^{r-3}+AB^{r-2}+B^{r-1}$ which is an integer if $A,B$ are integers and integer $r\ge0$ $3:$ Inductive proof: $\underbrace{a^{2n+3}+b^{2n+3}}=a^2\underbrace{(a^{2n+1}+b^{2n+1})}-b^{2n+1}(a^2-b^2)\equiv a^2\underbrace{(a^{2n+1}+b^{2n+1})}\pmod {(a+b)}$ So, $a^{2(n+1)+1}+b^{2(n+1)+1}$ will be divisible by $a+b$ if $(a^{2n+1}+b^{2n+1})$ is divisible by $a+b$ Now clearly,$(a^{2n+1}+b^{2n+1})$ is divisible by $a+b$ for $n=0,1$ Hence the proposition will hold for all positive integer $n$ (By induction)
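Applied to the original question: with $a=12$, $b=7$, $a+b=19$ and $m=2013$ odd, the proposition gives $19\mid 12^{2013}+7^{2013}$. A one-line modular check (my own sketch, not part of the answer):

```python
print((pow(12, 2013, 19) + pow(7, 2013, 19)) % 19)  # 0, so 19 divides 12^2013 + 7^2013
```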
{ "language": "en", "url": "https://math.stackexchange.com/questions/448828", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 0 }
characteristic prime or zero Let $R$ be a ring with $1$ and without zero-divisors. I have to show that the characteristic of $R$ is a prime or zero. This is my attempt: This is equivalent to finding the kernel of the homomorphism $f\colon \mathbb{Z}\rightarrow R$ which has the form $n\mathbb{Z}$. There are two cases. Suppose $f$ is injective, then only $f(0)=0$. Suppose now that $f$ is not injective, thus there exist $m,n\in \mathbb{Z}$ such that $f(m)=f(n)$. This implies $f(m-n)=0$ thus $m-n$ is in the kernel and so is every multiple of $m-n$ thus $(m-n)\mathbb{Z}$ is the kernel (I think this is not 100% correct). Suppose now that $m-n$ is not a prime. Then there exist $a,b\in \mathbb{Z}$ such that $m-n=ab$. $f(a)\neq 0\neq f(b)$ because then $a\mathbb{Z}$ or $b\mathbb{Z}$ would be the kernel because $a<m-n$ and also $b<m-n$. Then $0=f(ab)=f(a)f(b)$. This contradicts the fact that there are no zero-divisors. Is this proof correct? If not what is wrong? Thanks.
The claim being proved here is not true; it fails to hold for the zero ring. It doesn't have nontrivial zero divisors (because it doesn't have nontrivial elements at all), yet its characteristic $1$ is neither zero nor prime. The flaw in the proof is that it assumes that just because $m-n$ is positive and nonprime, it is the product of integers strictly between $0$ and $m-n$. That is not true for $m-n=1$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/448866", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 1 }
Is $x^2-y^2=1$ Merely $\frac 1x$ Rotated $-45^\circ$? Comparing their graphs and definitions of hyperbolic angles seems to suggest so, aside from the $\sqrt{2}$ factor.
Almost. $$x^2-y^2=1\iff (x+y)(x-y)=1$$ By rotating by $-45^\circ$ you move the point $(x,y)$ to $(\hat x,\hat y) = \frac1{\sqrt 2}(x-y,x+y)$, so what you really get is $\hat y = \frac1{\hat x\sqrt 2}$
{ "language": "en", "url": "https://math.stackexchange.com/questions/448961", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "15", "answer_count": 4, "answer_id": 2 }
Prove that the angle $\theta=\arccos(-12/17)$ is constructible using ruler and compass. Could I just do this? Proof: if we want to show $\arccos\left(\frac{-12}{17}\right)$ is constructible, can't I just say, take $x_0=\cos(\theta)=-\frac{12}{17}$ implies $17x_0+12=0$ which says that $f(x)=17x+12$ is a polynomial with root $x_0$. But $f(x)$ is irreducible by Eisenstein's criterion with $p=3$. So $f(x)$ is the minimal polynomial with $x_0$ as a root, thus $\left[\mathbb{Q}(x_0):\mathbb{Q}\right]=1$ which is equal to $2^m$ for $m=0$.
The way I would do it is with a triangle whose three sides are $51$, $92$, and $133$. Then Law of Cosines.
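Checking that this triangle really produces the given angle (a small sketch of my own): by the Law of Cosines, the angle opposite the side $133$ has cosine $\frac{51^2+92^2-133^2}{2\cdot 51\cdot 92}$.

```python
from fractions import Fraction

a, b, c = 51, 92, 133
print(Fraction(a * a + b * b - c * c, 2 * a * b))  # -12/17
```

Since the three sides are integers, the triangle, and hence the angle $\arccos(-12/17)$, is constructible with ruler and compass.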
{ "language": "en", "url": "https://math.stackexchange.com/questions/449002", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Do groups, rings and fields have practical applications in CS? If so, what are some? This is ONE thing about my undergraduate studies in computer science that I haven't been able to 'link' in my real life (academic and professional). Almost everything I studied I've observed be applied (directly or indirectly) or has given me Aha! moments understanding the principles behind the applications. Groups, Rings and Fields have always eluded me. I always thought they were useful (instinctively) but failed to see where/how. Are they just theoretical concepts without practical applications? I hope not. So what are their applications, especially in the field of computer science. No matter how arcane/remote their use I still want to know.
I always thought they were useful (instinctively) but failed to see where/how. Are they just theoretical concepts without practical applications? I'm 100% sure you've written programs $p_1,p_2,p_3$ before, which take data $\mathrm{in}$, and after the code did what it should, you get computed data $\mathrm{out}$ in return - as in $\mathrm{out}=p_1(p_2(p_3(\mathrm{in})))$. And then you realized you can define a new lumped program $p_{12}=p_1\circ p_2$ which takes $p_3(\mathrm{in})$ as input. At this point you've already used associativity of function composition, $$p_1\circ (p_2\circ p_3)=(p_1\circ p_2)\circ p_3,$$ which is one of the starting points in investigating the structures you listed. Numbers "$3+(-3)=0$," vectors "$\vec v\oplus(-\vec v)=\vec 0$," invertible functions "$f\circ f^{-1}=\mathrm{id}$,"... are all pretty much groups, and you've applied their properties before. The study of mathematical fields like group, ring or field theory is an investigation of such common mathematical structures. Take out a book on the subject and see how the theorems translate for those examples.
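A tiny sketch of the point about associativity (mine, not from the answer): composing three "programs" in either grouping gives the same function.

```python
def compose(f, g):
    return lambda x: f(g(x))

p1 = lambda x: x + 1
p2 = lambda x: 2 * x
p3 = lambda x: x ** 2

left  = compose(p1, compose(p2, p3))   # p1 o (p2 o p3)
right = compose(compose(p1, p2), p3)   # (p1 o p2) o p3
print(all(left(x) == right(x) for x in range(-10, 11)))  # True
```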
{ "language": "en", "url": "https://math.stackexchange.com/questions/449066", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "88", "answer_count": 12, "answer_id": 6 }
Geometric series sum $\sum_2^\infty e^{3-2n}$ $$\sum_2^\infty e^{3-2n}$$ The formulas for these things are so ambiguous I really have no clue on how to use them. $$\frac {cr^M}{1-r}$$ $$\frac {1e^2}{1-\frac{1}{e}}$$ Is that a wrong application of the formula and why?
The sum of a geometric series: $$\sum_{k=0}^{\infty} ar^k = \frac{a}{1-r}$$ In your case: $$r = e^{-2},\ a = e^3$$ Just remember to subtract the first two terms ($k=0$ and $k=1$, i.e. $e^3$ and $e$), since your sum starts at $n=2$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/449113", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 2 }
Coloring sides of a polygon In how many ways we can color the sides of a $n$-agon with two colors? (rotation is indistinguishable!)
We can answer the following equivalent and more general question: how many necklaces with $n$ beads can be formed from an unlimited supply of $k$ distinct beads? Here, two necklaces are considered the same if one can be transformed into the other by shifting beads circularly. To formalize this, define a string $S=s_1s_2\ldots s_n$, and define an operation $f$ on $S$ that maps $s_i\mapsto s_{i+1}$, where $s_n\mapsto s_1$. Then, we define the period of a string $S$ to be the minimal number $x$ such that $$\underbrace{f(f(f(\cdots f(S)\cdots )))}_{x \text{ times }} = S.$$ Convince yourself that $x|n$. Now, consider a string of length $n$ with period $d$. Each of the $d$ shifts of this string will yield the same necklace when the leading element is joined with the trailing element of the string, and this particular necklace can be constructed only by these $d$ strings. Thus, if $\text{Neck}(n)$ denotes the number of necklaces of length $n$, and $\text{Str}(d)$ denotes the number of strings with period $d$, we have $$\text{Neck}(n)=\sum_{d|n}\frac{\text{Str}(d)}{d}.$$ The number of strings of length $n$ is obviously $k^n$ since there are $k$ distinct beads to choose from. Note that we can express the number of strings of length $n$ in a different way, namely $\sum_{d|n}\text{Str}(d)$. Thus, $\text{Str}(d) = k^d * \mu$, by Möbius inversion, so we get $$\text{Neck}(n)=\sum_{d|n}\frac{1}{d}\sum_{d'|d}k^{d'}\mu\left(\frac{d}{d'}\right)=\frac{1}{n}\sum_{d|n}\varphi(d)k^{n/d},$$ where the last step follows by Möbius inversion on $n=\sum_{d|n}\varphi(d)$. Note: $(f*g)(n)=\sum_{d|n}f(d)g(n/d)$ denotes Dirichlet convolution.
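A brute-force check of the final formula for small cases (my own sketch, not part of the answer): count orbits of rotation directly by taking the lexicographically smallest rotation as a canonical form, and compare with $\frac1n\sum_{d\mid n}\varphi(d)k^{n/d}$.

```python
from itertools import product
from math import gcd

def phi(n):
    return sum(1 for i in range(1, n + 1) if gcd(i, n) == 1)

def necklaces_brute(n, k):
    seen = set()
    for s in product(range(k), repeat=n):
        seen.add(min(s[i:] + s[:i] for i in range(n)))  # canonical rotation
    return len(seen)

def necklaces_formula(n, k):
    return sum(phi(d) * k ** (n // d) for d in range(1, n + 1) if n % d == 0) // n

for n, k in [(4, 2), (6, 2), (5, 3)]:
    print(n, k, necklaces_brute(n, k), necklaces_formula(n, k))  # 6 6, 14 14, 51 51
```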
{ "language": "en", "url": "https://math.stackexchange.com/questions/449292", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 2, "answer_id": 1 }
Rational number to the power of irrational number = irrational number. True? I suggested the following problem to my friend: prove that there exist irrational numbers $a$ and $b$ such that $a^b$ is rational. The problem seems to have been discussed in this question. Now, his inital solution was like this: let's take a rational number $r$ and an irrational number $i$. Let's assume $$a = r^i$$ $$b = \frac{1}{i}$$ So we have $$a^b = (r^i)^\frac{1}{i} = r$$ which is rational per initial supposition. $b$ is obviously irrational if $i$ is. My friend says that it is also obvious that if $r$ is rational and $i$ is irrational, then $r^i$ is irrational. I quickly objected saying that $r = 1$ is an easy counterexample. To which my friend said, OK, for any positive rational number $r$, other than 1 and for any irrational number $i$ $r^i$ is irrational. Is this true? If so, is it easily proved? If not, can someone come up with a counterexample? Let's stick to real numbers only (i.e. let's forget about complex numbers for now).
Consider $2^{\log_2 3}$
{ "language": "en", "url": "https://math.stackexchange.com/questions/449431", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "29", "answer_count": 2, "answer_id": 0 }
Integration $\int \frac{\sqrt{x^2-4}}{x^4}$ Problem : Integrate $\int \frac{\sqrt{x^2-4}}{x^4}$ I tried : Let $x^2-4 =t^2 \Rightarrow 2xdx = 2tdt$ $\int \frac{\sqrt{x^2-4}}{x^4} \Rightarrow \frac{t^3 dt}{\sqrt{t^2+4}(t^4-8t+16)}$ But I think this made the integral too complicated... please suggest how to proceed.. Thanks..
Besides @Ron's answer, you can see that the integral here has the form of a differential binomial. Let us write the integrand as follows: $$\int(x^2-4)^{1/2}x^{-4}~dx$$ So $m=-4,~~p=1/2,~~n=2$, hence $\frac{m+1}{n}+p=-1\in\mathbb Z$, and the method says that you can use the following nice substitution: $$x^2-4=t^{2}x^2$$
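With the substitution $x^2-4=t^2x^2$ the integral reduces to $\int \frac{t^2}{4}\,dt$, which leads to the antiderivative $\frac{(x^2-4)^{3/2}}{12x^3}$ (this closed form is derived here, not stated in the answer). A quick numerical check of it with Simpson's rule:

```python
import math

def f(x):
    return math.sqrt(x * x - 4) / x ** 4

def F(x):
    return (x * x - 4) ** 1.5 / (12 * x ** 3)   # candidate antiderivative

a, b, n = 2.5, 5.0, 1000                         # n must be even for Simpson's rule
h = (b - a) / n
s = f(a) + f(b) + sum((4 if i % 2 else 2) * f(a + i * h) for i in range(1, n))
print(s * h / 3, F(b) - F(a))                    # the two values agree closely
```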
{ "language": "en", "url": "https://math.stackexchange.com/questions/449507", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 4, "answer_id": 0 }
Zeroes of a holomorphic map $f:\mathbb{C}^n\to \mathbb{C}$ for $n\geq 2$ Why can the zeros of a holomorphic map $f:\mathbb{C}^n\to \mathbb{C}$ with $n\geq 2$ have no isolated zeros (or poles if we write it as meromorphic)? Someone says the $n$-times Cauchy Integral formula is enough, but how does it work?
Ted is of course perfectly correct. :) Alternatively the claim follows from Hartogs' theorem, https://en.wikipedia.org/wiki/Hartogs%27_extension_theorem, whose proof may be the origin of the Cauchy integral formula hint you received.
{ "language": "en", "url": "https://math.stackexchange.com/questions/449567", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
$6$ people, $3$ rooms, $1$ opening door $6$ people spread in $3$ distinguishable rooms, every room needs one person who opens the door. There are ${6 \choose 3}\cdot 3 \cdot 2$ options to choose the three door opener persons and let them open one certain room, so that every room is opened by one person. Further, there are then $3$ options to have all left three guys in one of the rooms, $3\cdot 2$ options to have exactly one other guy in every room, and ${3 choose 2}\cdot 3 \cdot 2$ options to have one guy in one room, two in the other and none in the last room. So in total there are ${6 \choose 3}\cdot 3 \cdot 2 \cdot 3 \cdot 3 \cdot 2 {3 \choose 2} \cdot 3 \cdot 2$ options to have all rooms opened by exactly one person and spread the others in these rooms. If only there are 3 people who can open the doors (only they have keys for all the rooms). There are only $3 \cdot 2 \cdot 3 \cdot 3 \cdot 2 {3 \choose 2} \cdot 3 \cdot 2$ options left, right?
If you choose an order for the six people and put the first two into room $1$ the next two into room $2$ and the final pair into room $3$, and then you put the first named of each pair as the door-opener, every one of the $6!$ orders of people gives you a unique arrangement of people in accordance with the criteria in the problem. And from the arrangement of people, you can recover a unique order for the six people. So I reckon you should be looking at $6!=720$ possibilities. The factor $5$ comes from your $\binom 63$ which seems to go missing somewhere.
{ "language": "en", "url": "https://math.stackexchange.com/questions/449637", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
Why "integralis" over "summatorius"? It is written that Johann Bernoulli suggested to Leibniz that he (Leibniz) change the name of his calculus from "calculus summatorius" to "calculus integralis", but I cannot find their correspondence wherein Bernoulli explains why he thinks "integralis" is preferable to "summatorius." Can someone enlighten me? Thank you.
If you wanted the correspondence, the initial quotation states the exact point. As for the reason, some say that it was to rival Newton in proving that he had invented calculus first. The Bernoullis, too, were involved in the controversy. This page explains the controversy.
{ "language": "en", "url": "https://math.stackexchange.com/questions/449686", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "13", "answer_count": 3, "answer_id": 1 }
Compute $\lim_{n\to\infty} nx_n$ Let $(x_n)_{n\ge2}$, $x_2>0$, that satisfies recurrence $x_{n+1}=\sqrt[n]{1+n x_n}-1, n\ge 2$. Compute $\lim_{n\to\infty} nx_n$. It's clear that $x_n\to 0$, and probably Stolz theorem would be helpful. Is it really necessary to use this theorem?
Since $(1+x)^n\geqslant nx+1$ we obtain that $x_n\geqslant x_{n+1}$. As the sequence is positive and decreasing, it must converge. Call this limit $\ell$. Consider the non-negative functions $$f_n(x)=\frac{\log(1+nx)}n$$ They have the property that $$\log(1+x_{n+1})=f_n(x_n)$$ Since $x_n$ is decreasing, and since the $f_n$ are decreasing, meaning that $f_{n+1}\leq f_n$, yet each one of them is increasing, meaning $f_n(x)\leq f_n(y)$ if $x\leq y$, we have $$f_{n+1}(x_{n+1})\leq f_n(x_{n+1})\leq f_n(x_n) $$ The limit thus exists. We would like to argue that $f_n(x_n)\to 0$. Since $x_n$ decreases and is non-negative, we can work inside a compact interval $[0,M]$. In this interval, the continuous $f_n$ converge monotonically to $0$, thus by Dini's theorem, they converge uniformly. Thus, $\{f_n\}_{n\geqslant 1}$ is an equicontinuous family, whence $$\lim_{n\to\infty}f_n(x_n)=\lim_{n\to\infty}f_n(\ell)=0$$ But then $\log(1+\ell)=0\implies \ell =0$. Hint Let ${\left( {1 + {x_{n + 1}}} \right)^n} = 1 + n{x_n} = {y_n}$. Since $x_n\to 0$ we may expand $$\log \left( {1 + {x_{n + 1}}} \right) = {x_{n + 1}} + {x_{n + 1}}O\left( {{x_{n + 1}}} \right)$$ for sufficiently large $n$, so if we let $y_n-1=\omega_n$; $$\log \left( 1+\omega_n \right) = \frac{n}{{n + 1}} \omega_{n+1} + n{x_{n + 1}}O\left( {{x_{n + 1}}} \right)$$ Thus if the limit exists, it must be $0$. Let $\mathscr F=\{f_i\}_{i\in I}$ be a family of functions $f_i:A\to\Bbb R$. We say $\mathscr F$ is equicontinuous on $A$ if for each $\epsilon>0$ there exists $\delta >0$ such that for each $x,y\in A$ and $f_i\in\mathscr F$ $$|x-y|<\delta\implies |f_i(x)-f_i(y)|<\epsilon$$ Note every function in an equicontinuous family of functions is automatically uniformly continuous, for example.
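A numerical illustration (my own sketch, not from the answer): iterating the recurrence from $x_2=1$ shows $n x_n$ decreasing to $0$, as the argument above predicts.

```python
x = 1.0            # x_2 > 0, an arbitrary starting value
n = 2
while n < 100_000:
    x = (1 + n * x) ** (1.0 / n) - 1   # x_{n+1} from x_n
    n += 1
    if n in (10, 100, 1_000, 10_000, 100_000):
        print(n, n * x)                # n * x_n shrinks towards 0
```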
{ "language": "en", "url": "https://math.stackexchange.com/questions/449748", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 3, "answer_id": 0 }
A problem about the convergence of an improper integral Let $f:\mathbb R\longrightarrow\mathbb R$ be a function with $$f(x)=\frac{1}{3}\int_x^{x+3} e^{-t^2}dt$$ and consider $g(x)=x^nf(x)$ where $n\in\mathbb Z$. I have to discuss the convergence of the integral $$\int_{-\infty}^{+\infty}g(x)dx$$ at the varying of $n\in\mathbb Z$. Any idea? We have an integral of an "integral function", and I'm a bit confused! Thanks in advance.
Basically, you are considering the integral $$ I := \frac{1}{3}\int_{-\infty}^{\infty}x^n f(x) dx= \frac{1}{3}\int_{-\infty}^{\infty}x^n \int_{x}^{x+3}e^{-t^2}dt \,dx.$$ Changing the order of integration yields $$ I = \frac{1}{3}\int_{-\infty}^{\infty}e^{-t^2} \int_{t-3}^{t}x^ndx \,dt $$ $$ = \frac{1}{3}\int_{-\infty}^{\infty}e^{-t^2}\left(\frac{t^{n+1}-(t-3)^{n+1}}{n+1} \right) \,dt $$ $$ = \frac{1}{3(n+1)}\int_{-\infty}^{\infty}t^{n+1} e^{-t^2} dt - \frac{1}{3(n+1)}\int_{-\infty}^{\infty}(t-3)^{n+1} e^{-t^2} dt $$ $$\implies I = \frac{1}{3(n+1)} I_1 - \frac{1}{3(n+1)} I_2. $$ Now, note that for $n\in \mathbb{N} \cup \left\{ 0\right\}$, $I_1$ is convergent (see the analysis for the case $I_2$ at the end of the answer) and can be evaluated in terms of the gamma function as $$ I_1 = \frac{1}{2}\, \left( 1+ \left( -1 \right)^{n+1} \right) \Gamma \left( \frac{n}{2} +1 \right).$$ Examining the convergence of $I_2$ for $n\in \mathbb{N} \cup \left\{ 0\right\}$, we have $$ I_2 = \int_{-\infty}^{\infty}(t-3)^{n+1} e^{-t^2} dt = \int_{-\infty}^{\infty}y^{n+1} e^{-(y+3)^2} dy $$ $$ = (-1)^{n+1} \int_{0}^{\infty}y^{n+1} e^{-(y-3)^2} dy + \int_{0}^{\infty}y^{n+1} e^{-(y+3)^2} dy $$ $$ I_2 = I_{21} + I_{22}. $$ Now, both of the integrals $I_{21}$ and $I_{22}$ converge, since for $y\ge 0$ $$ y^{n+1}e^{-(y-3)^2} \leq (n+1)!\, e^{y} e^{-(y-3)^2}, \quad y^{n+1}e^{-(y+3)^2}\leq (n+1)!\, e^{y} e^{-(y+3)^2} $$ and $$ \int_{0}^{\infty} e^{y} e^{-(y-3)^2}dy <\infty,\quad \int_{0}^{\infty} e^{y} e^{-(y+3)^2}dy <\infty. $$ You can check Gaussian Integrals for the last two inequalities. Note: For the case where $n$ is a negative integer, we have 1) If $n=-1$, then $I_1$ and $I_2$ converge. 2) If $n$ is a negative even integer, then $I_1$ and $I_2$ are undefined. 3) If $n$ is a negative odd integer, then $I_1$ and $I_2$ diverge to infinity.
{ "language": "en", "url": "https://math.stackexchange.com/questions/449813", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 6, "answer_id": 4 }
Why is $1/n^{1/3}$ convergent? I thought because $p<1$ it would be divergent, but apparently not. Why is that?
You, certainly, know the series $\sum (1/n)$ and know that it is divergent. There is a nice approach in which we can test the divergence or convergence. That is the Quotient Test or Limit comparison test. According to it, if $$\lim_{n\to\infty}\frac{u_n}{v_n}=A\neq0, ~~\text{or}~~ A=\infty$$ then $\sum u_n$ and $\sum v_n$ have the same destiny. Here, take $\sum v_n=\sum (1/n)$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/449868", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 5, "answer_id": 2 }
Men and Women: Committee Selection There is a club consisting of six distinct men and seven distinct women. How many ways can we select a committee of three men and four women?
There are "$6$-choose-$3$" $=\binom{6}{3}$ ways to select the men, and "$7$-choose-$4$" =$\binom{7}{4}$ ways to select the women. That's because $(1)$ We have a group of $6$ men, from which we need to choose $3$ for the committee. $(2)$ We have $7$ women from which we need to choose $4$ to sit in on the committee. Since anyone is either on the committee or not, there is no need to consider order: position or arrangement of those chosen for the committee isn't of concern here, so we don't need to consider permuting the chosen men or women. So we just *multiply*$\;$ "ways of choosing men" $\times$ "ways of choosing women". [Recall th Rule of the Product, also known as the multiplication principle.] $$\binom 63 \cdot \binom 74 = \dfrac{6!}{3!3!} \cdot \dfrac{7!}{4!3!} = \dfrac{6\cdot 5\cdot 4}{3!}\cdot \dfrac{7\cdot 6\cdot 5}{3!}$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/449940", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Every bounded function has an inflection point? Hello from a first time user! I'm working through a problem set that's mostly about using the first and second derivatives to sketch curves, and a question occurred to me: Let $f(x)$ be a function that is twice differentiable everywhere and whose domain is $ \Bbb R$. If $f(x)$ is bounded, then must $f$ have at least one inflection point? The answer is clearly yes, but I can't think of how to prove it. (I thought of using the converse of the MVT, but Google revealed that the converse is untrue.) In the case where $f''(x)$ is always positive, for example, the three possibilities for $f(x)$ are nicely illustrated by considering the three branches of $g(x)$ (disregarding the vertical asymptotes), $g(x)=\left |\frac {1}{(x-a)(x-b)}\right |$, where $a < b$. $(-\infty, a)$: $g'(x)$ is bound below, unbounded above, and $g(x)$ is unbounded above $(a, b)$ : $g'(x)$ is unbounded below and above, and $g(x)$ is unbounded above $(b, \infty)$: $g'(x)$ is unbounded below, bound above, and $g(x)$ is unbounded above. In all three cases, $g''(x) > 0$ implies that $g(x)$ is unbounded. Is all this right? How can we prove that the answer to the above question about $f(x)$ is true?
If $f''>0$ identically and $f$ is bounded, then $f'\leq 0$ identically (for otherwise, $f(\infty)=\infty$). Likewise, $f'\geq 0$ since otherwise $f(-\infty)=-\infty$. Hence $$f'=0$$ identically, so $f$ is constant. The result follows similarly in the case $f''<0$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/450001", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Simplifying compound fraction: $\frac{3}{\sqrt{5}/5}$ I'm trying to simplify the following: $$\frac{3}{\ \frac{\sqrt{5}}{5} \ }.$$ I know it is a very simple question but I am stuck. I followed through some instructions on Wolfram which suggests that I multiply the numerator by the reciprocal of the denominator. The problem is I interpreted that as: $$\frac{3}{\ \frac{\sqrt{5}}{5} \ } \times \frac{5}{\sqrt{5}},$$ Which I believe is: $$\frac{15}{\ \frac{5}{5} \ } = \frac{15}{1}.$$ What am I doing wrong?
This means $$ 3\cdot \frac{5}{\sqrt{5}}=3\cdot\frac{(\sqrt{5})^2}{\sqrt{5}} =3\sqrt{5} $$ You're multiplying by the reciprocal of the denominator twice. Another way to see it is multiplying numerator and denominator by the same number: $$ \frac{3}{\frac{\sqrt{5}}{5}}=\frac{3\sqrt{5}}{\frac{\sqrt{5}}{5}\cdot\sqrt{5}} =\frac{3\sqrt{5}}{1} $$
{ "language": "en", "url": "https://math.stackexchange.com/questions/450158", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 8, "answer_id": 2 }
Question on monotonicity and differentiability Let $f:[0,1]\rightarrow \Re$ be continuous. Assume $f$ is differentiable almost everywhere and $f(0)>f(1)$. Does this imply that there exists an $x\in(0,1)$ such that $f$ is differentiable at $x$ and $f'(x)<0$? My gut feeling is yes but I do not see a way to prove it. Any thoughts (proof/counterexample)? Thanks!
As stated in another answer the Cantor function is a counterexample. You would need to assume differentiability for all $x \in (0,1)$. "Almost everywhere" is not good enough.
{ "language": "en", "url": "https://math.stackexchange.com/questions/450230", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 2, "answer_id": 1 }
Differentiating $\tan\left(\frac{1}{ x^2 +1}\right)$ Differentiate: $\displaystyle \tan \left(\frac{1}{x^2 +1}\right)$ Do I use the quotient rule for this question? If so how do I start it off?
We use the chain rule to evaluate $$ \dfrac{d}{dx}\left(\tan \frac{1}{x^2 +1}\right)$$ Since we have a function which is a composition of functions: $\tan(f(x))$, where $f(x) = \dfrac 1{1+x^2}$, this screams out chain-rule! Now, recall that $$\dfrac{d}{dx}(\tan x) = \sec^2 x$$ and to evaluate $f'(x) = \dfrac d{dx}\left(\dfrac 1{1 + x^2}\right)$, we can use either the quotient rule, or the chain rule. Using the latter, we have $$\dfrac{d}{dx}\left(\dfrac 1{1 + x^2}\right)= \dfrac{d}{dx}(1 + x^2)^{-1} = -(1 + x^2)^{-2}\cdot \dfrac d{dx}(1+ x^2) = -\dfrac{2x}{(1+ x^2)^2}$$ Now, we put the "chain" together: $$\dfrac d{dx}\left(\tan \left(\frac{1}{ x^2 +1}\right)\right) = \dfrac{d}{dx}\Big(\tan(f(x)\Big)\cdot \Big(f'(x)\Big) = \sec^2 \left(\dfrac 1{1 + x^2}\right)\cdot \left(-\dfrac{2x}{(1+ x^2)^2}\right)$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/450296", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 5, "answer_id": 1 }
$5^m = 2 + 3^n$ help what to do how to solve this for natural numbers $5^m = 2 + 3^n$ i did this $5^m = 2 + 3^n \Rightarrow 5^m \equiv 2 \pmod 3 \Rightarrow m \equiv 1 \pmod 2$ now if i put it like this $ 5^{2k+1} = 2 + 3^n $ what to do ?? now is this right another try : $ m = n \Rightarrow 5^n - 3^n = 2 = 5 - 3 \Rightarrow (m_1,n_1)= (1,1) $ we can prove by induction : $5^n - 3^n > 2 \quad \forall n > 1 \Rightarrow m = n = 1 $ another case : $m > n \Rightarrow 5^m > 5^n \geq 3^n+2 \quad \forall n \geq 1 \Rightarrow 5^m - 3^n > 2 \Rightarrow \emptyset$ $m < n \Rightarrow no \ sol. $ by putting some values
I like to reformulate the problem a bit to adapt it to my usances. $$5^m = 2+3^n \\5^m = 5-3+3^n \\ 5(5^{m-1}-1) = 3(3^{n-1}-1) $$ $$ \tag 1 {5^a-1 \over 3} = {3^b-1 \over 5} \\ \small \text{ we let a=m-1 and b=n-1 for shortness}$$ Now we get concurring conditions when looking at powers of the primefactor decomposition of the lhs and the rhs. I must introduce two shorthand-notations for the following. We write $\qquad 1. \qquad [a:p] = 1 $ if p is a factor in a, and $ [a:p] = 0 $ if not (this is the "Iverson-bracket") $\qquad 2. \qquad \{a,p\} = m $ meaning, that m is the exponent, to which p is a primefactor in a . We begin to look at p=3 in the lhs and q=5 in the rhs in(1) (both can occur only once in the numerators) and then at powers of the primefactor p=2 , p=7 and p=13 which must be equal on both sides. It shows a systematic approach, which can also be taken for similar problems. The primefactors, which must occur differently: The occurence of the primefactor 3 in the lhs is determined by the formula $$ \{5^a-1,3\} = [a:2](1 + \{a,3\}) $$ and because it is allowed to occur exactly once, a must be even and not divisible by 3 so $$a = \pm 2 \pmod 6) \tag 2$$ The occurence of the primefactor 5 in the rhs is determined by the formula $$ \{3^b-1,5\} = [b:4](1 + \{b,5\}) $$ and because it is allowed to occur exactly once, b must be divisible by 4 and not divisible by 5 so $$b = (4,8,12,16) \pmod {20} \tag 3$$ The primefactors, which must occur equally: Looking at the primefactor 2 we have for the lhs: $$ \{5^a-1,2\} = 2 + \{a,2\} \tag {3.1} $$ and for the rhs $$ \{3^b-1,2\} = 1 + [b:2]+ \{b,2\}) \tag {3.2} $$ We have from the previous that b must be divisible by 4 so the exponent of primefactor 2 must be at least 4 by (3.2) and thus a must also be divisible by 4 (and must in fact have the same number of primefactors 2 as b). Looking at the primefactor 7 we have for the lhs: $$ \{5^a-1,7\} = [a:6] (1 + \{a,7\}) \tag {4.1}$$ and for the rhs $$ \{3^b-1,7\} = [b:6](1 + \{b,7\}) \tag {4.2} $$ From this because a cannot be divisible by 6 by the needed equality of powers of primefactor 7 also b cannot be divisible by 6. Looking at the primefactor 13 we have for the lhs: $$ \{5^a-1,13\} = [a:4] (1 + \{a,13\}) \tag {5.1} $$ and for the rhs $$ \{3^b-1,13\} = [b:3](1 + \{b,13\}) \tag {5.2}$$ Now we get contradictory conditions: We know already that a must be divisible by 4 thus the primefactor p=13 shall occur in the lhs of (1) by (5.1). But because b is even but never divisible by 6, it is also never divisible by 3 and thus the primefactor 13 does not occur in the rhs of (1) by (5.2). Conclusion: no further solution after the "trivial" one for n=m=1
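A small brute-force confirmation for small exponents (my own sketch, not part of the answer): searching $1\le m,n\le 200$ finds only the trivial solution.

```python
solutions = [(m, n) for m in range(1, 201) for n in range(1, 201) if 5 ** m == 2 + 3 ** n]
print(solutions)  # [(1, 1)]
```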
{ "language": "en", "url": "https://math.stackexchange.com/questions/450365", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 2, "answer_id": 0 }
Sum of a geometric series $\sum_0^\infty \frac{1}{2^{1+2n}}$ $$\sum_0^\infty \frac{1}{2^{1+2n}}$$ So maybe I have written the sequence incorrectly, but how do I apply the $\frac{1}{1 - r}$ formula for summing a geometric sequence to this? When I do it I get something over one, which is wrong because this is supposed to model a percentage of something.
This is $$ \frac12\sum_{n=0}^\infty\frac1{4^n} =\frac12\cdot\frac1{1-\frac14} =\frac12\cdot\frac43 =\frac23 $$ The common ratio here is $r=\frac14$ (not $\frac12$), so once the constant factor $\frac12$ is pulled out front, the $\frac1{1-r}$ formula gives a value below $1$, as it should.
{ "language": "en", "url": "https://math.stackexchange.com/questions/450589", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
$\lim_{(x,y)\to (0,0)} \frac{x^m y^n}{x^2 + y^2}$ exists iff $m+ n > 2$ I would like to prove, given $m,n \in \mathbb{Z}^+$, $$\lim_{(x,y)\to (0,0)}\frac{x^ny^m}{x^2 + y^2}\ \text{exists} \iff m+n>2.$$ (My gut tells me this should hold for $m,n \in \mathbb{R}^{>0}$ as well.) The ($\Rightarrow$) direction is pretty easy to show by contrapositive by familiar limit tricks. The ($\Leftarrow$) direction is giving me more trouble. So far my strategy has been to use the arithmetic mean: $$\left| \frac {x^n y^m} {x^2 + y^2} \right| \leq \frac {x ^ {2n} + y ^ {2m}} {2 ( x^2 + y^2 ) } \leq \frac {x^ {2 (n-1)}} {2} + \frac { y^ {2 (m-1)} } {2},$$ but that's not helping if $m=1$, say. Any ideas? Apologies if this is a repeat...I couldn't find this on the site.
If $m+n>2$, you can divide into two cases, by observing that you can't have $m<2$ and $n<2$. First case: $m\ge2$. $$ \lim_{(x,y)\to (0,0)}\frac{x^ny^m}{x^2 + y^2} = \lim_{(x,y)\to (0,0)}\frac{y^2}{x^2 + y^2}x^ny^{m-2} $$ where $n\ge1$ or $m-2\ge1$. The fraction is bounded, while the other factor tends to zero. Similarly for $n\ge2$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/450669", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Ratio test: Finding $\lim \frac{2^n}{n^{100}}$ $$\lim \frac{2^n}{n^{100}}$$ as $n$ goes to infinity, of course. I know that the ratio test uses the form $\frac{a_{n+1}}{a_n}$, which here gives $$\frac {\frac{2^{n+1}}{(n+1)^{100}}}{\frac{2^n}{n^{100}}}$$ $$\frac{2n^{100}}{(n+1)^{100}}$$ I am not clever enough to evaluate that limit. To me it looks like it should go to zero, using the logic that exponents increase much more quickly than twice. Also I feel like I could reduce it down to $\frac{2}{n}$, but that gives an incorrect answer.
In Prove that $n^k < 2^n$ for all large enough $n$ I showed that if $n$ and $k$ are integers and $k \ge 2$ and $n \ge k^2+1$, then $2^n > n^k$. Set $k = 101$. Then, for $n \ge 101^2+1 = 10202$, $2^n > n^{101}$ or $\dfrac{2^n}{n^{100}} > n$ so $\lim_{n \to \infty} \dfrac{2^n}{n^{100}} = \infty$. This obviously shows that $\lim_{n \to \infty} \dfrac{2^n}{n^k} = \infty$ for any positive integer $k$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/450733", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 1 }
divisibility test Let $n=a_m 10^m+a_{m-1}10^{m-1}+\dots+a_2 10^2+a_1 10+a_0$, where the $a_k$'s are integers with $0\le a_k\le 9$, $k=0,1,2,\dots,m$, be the decimal representation of $n$, and let $S=a_0+\dots+a_m$, $T=a_0-a_1+\dots+(-1)^ma_m$. Could anyone tell me how, and why, the divisibility of $n$ by $2,3,\dots,9$ can be decided from the divisibility of $S$ and $T$ by $2,3,\dots,9$? I do not see why we introduce $S,T$ at all. The same question for the base-$1000$ representation $n=a_m (1000)^m+a_{m-1}(1000)^{m-1}+\dots+a_2 (1000)^2+a_1 (1000)+a_0$, $0\le a_k\le 999$.
If $m=1$ then $S$ and $T$ uniquely determine the number. In any case $S$ determines the number mod $3$ and $9$, and $T$ determines the number mod $11$, as André pointed out. Unfortunately, as André also pointed out, even the combination of $S$ and $T$ does not generally determine divisibility by any of the numbers $2,4,5,6,7,$ or $8$. Here are some examples with $m=2$: * *$336$ and $633$ have the same $S$ and $T$, but the first is divisible by $8$ (and hence by $2$ and $4$) and also by $6$ and $7$, while the other is not divisible by any of those $5$ numbers. *$105$ and $501$ have the same $S$ and $T$, but the first is divisible by $5$ and $7$, while the other is not divisible by either of those numbers.
{ "language": "en", "url": "https://math.stackexchange.com/questions/450818", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 2 }
Power series for $(1+x^3)^{-4}$ I am trying to find the power series for the sum $(1+x^3)^{-4}$ but I am not sure how to find it. Here is some work: $$(1+x^3)^{-4} = \frac{1}{(1+x^3)^{4}} = \left(\frac{1}{1+x^3}\right)^4 = \left(\left(\frac{1}{1+x}\right)\left(\frac{1}{x^2-x+1}\right)\right)^4$$ I can now use $$\frac{1}{(1-ax)^{k+1}} = \left(\begin{array}{c} k \\ 0 \end{array}\right)+\left(\begin{array}{c} k+1 \\ 1 \end{array}\right)ax+\left(\begin{array}{c} k+2 \\ 2 \end{array}\right)a^2x^2+\dots$$ on the $\frac{1}{1+x}$ part but I am not sure how to cope with the rest of formula.
You could use the binomial expansion as noted in the previous answer but just for fun here's an alternative Note that: $$ \dfrac{1}{\left( 1+y \right) ^{4}}=-\dfrac{1}{6}\,{\frac {d^{3}}{d{y}^{3}}} \dfrac{1}{ 1+y } $$ and that the geometric series gives: $$ \dfrac{1}{ 1+y }=\sum _{n=0}^{\infty } \left( -y \right) ^{n}$$ so by: $${\frac {d ^{3}}{d{y}^{3}}} \left( -y \right) ^{ n} = \left( -1 \right) ^{n}{y}^{n-3} \left( n-2 \right) \left( n-1 \right) n$$ we have: $$ \dfrac{1}{\left( 1+y \right) ^{4}}=-\dfrac{1}{6}\,\sum _{n=0}^{\infty } \left( -1 \right) ^{n}{y}^{n-3} \left( n-2 \right) \left( n-1 \right) n$$ and putting $y=x^3$ gives: $$\dfrac{1}{\left( 1+x^3 \right) ^{4}}=-\dfrac{1}{6}\,\sum _{n=0}^{\infty }{x}^{3\,n-9 } \left( -1 \right) ^{n} \left( n-2 \right) \left( n-1 \right) n$$ then noting that the first 3 terms are zero because of the $n$ factors we can shift the index by letting $n\rightarrow m+3$ to get: $$\dfrac{1}{\left( 1+x^3 \right) ^{4}}=\dfrac{1}{6}\,\sum _{m=0}^{\infty }{x}^{3m} \left( -1 \right) ^{m} \left( 1+m \right) \left( m+2 \right) \left( m+3 \right) $$ For comparison the binomial expansion would tell you that: $$\dfrac{1}{\left( 1+x^3 \right) ^{4}}=\sum _{m=0}^{\infty }{-4\choose m}{x}^ {3\,m}$$ from which it follows, by the uniqueness of Taylor series, that: $${-4\choose m}=\dfrac{1}{6} \left( -1 \right) ^{m} \left( 1+m \right) \left( m+2 \right) \left( m+3 \right)=(-1)^m\dfrac{(3+m)!}{3!\,m!}$$
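If it helps to see the two expansions agree, here is a small SymPy comparison of the first few coefficients against the closed form derived above (purely a symbolic sanity check):

```python
import sympy as sp

x = sp.symbols('x')
series = sp.series(1 / (1 + x**3)**4, x, 0, 16).removeO()
for m in range(5):
    coeff = series.coeff(x, 3*m)                                    # coefficient of x^(3m)
    closed = sp.Rational((-1)**m * (m + 1) * (m + 2) * (m + 3), 6)  # (-1)^m (m+1)(m+2)(m+3)/6
    print(m, coeff, closed)
```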
{ "language": "en", "url": "https://math.stackexchange.com/questions/450900", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 0 }
How to evaluate a zero of the Riemann zeta function? Here is a super naive question from a physicist: Given the zeros of the Riemann zeta function, $\zeta(s) = \sum_{n=1}^\infty n^{-s}$, how do I actually evaluate them? On this web page I found a list of zeros. Well I guess if the values, call one ${a_\text{zero}}$, are just given in decimal expansion, then I'd have to run a program and see how it approaches zero $$\zeta({a_\text{zero}}) = \sum_{n=1}^\infty n^{-{a_\text{zero}}} = \frac{1}{1^{a_\text{zero}}} + \frac{1}{2^{a_\text{zero}}} + \frac{1}{3^{a_\text{zero}}} + \cdots\approx 0. \qquad ;\Re(a)>1$$ But are there also analytical solutions, let's call one ${b_\text{zero}}$, which I can plug in and see $\zeta({b_\text{zero}}) = 0$ exactly? Specifically the non-trivial ones with imaginary part are of interest. What I find curious is that these series, with each term being a real number multiplied by some $n^{-i\cdot \text{Im}(b_\text{zero})}$, then also kind of resemble Fourer series.
The Riemann Zeta function has zeros as follows: (1) From the Euler product it follows that $\zeta(s)\not=0$ for $\mathrm{Re}\,s>1$. Taking the functional equation into account shows that the only zeros outside the critical strip $\{ s\in\mathbb C\mid 0\leq\mathrm{Re}\,s\leq1\}$ are the trivial zeros $-2,-4,-6,\ldots$ (2) Beyond the trivial zeros the zeta function also has zeros within the critical strip $S = \{ s \in \mathbb{C} \mid 1 > \mathrm{Re} \, s > 0 \} $. These are the non-trivial zeros. Little is yet known about these, and your link to the numerical calculations of Andrew Odlyzko refers exactly to those zeros. No analytical (closed-form) solution for them is available. To your point on the Fourier series analogy: it is possible to write out the zeta function for $\mathrm{Re}\,s>1$ as a so-called partition function (which resembles the form of a Fourier series): $$Z(T):= \sum_{n=1}^\infty \exp \left(\frac{-E(n)}{k_B T}\right) = \sum_{n=1}^\infty \exp \left(\frac{-E_0 \log n}{k_B T}\right) \equiv \sum_{n=1}^\infty \exp \left(-s \log n\right) = \sum_{n=1}^\infty \frac{1}{n^s} = \zeta (s)$$ and so forth. But this was not the question, I guess.
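As a purely numerical illustration of "evaluating" $\zeta$ at a non-trivial zero (the zeros themselves are only known numerically), something like the following mpmath snippet can be used; `zetazero(n)` returns the $n$-th zero on the critical line:

```python
from mpmath import mp, zeta, zetazero

mp.dps = 25                    # working precision (decimal places)
rho = zetazero(1)              # first non-trivial zero, about 0.5 + 14.134725...i
print(rho)
print(zeta(rho))               # should be ~0 to within the working precision
print(zeta(0.5 + 14.134725141734693j))   # same check with a typed-in approximation
```

Note that for $\mathrm{Re}\,s\le 1$ the library is of course not summing the divergent series $\sum n^{-s}$, but evaluating the analytic continuation.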
{ "language": "en", "url": "https://math.stackexchange.com/questions/450972", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
What is the meaning of $(2n)!$ I came across something that confused me $$(2n)!=?$$ What does this mean: $$2!n!, \quad 2(n!)$$ or $$(2n)!=(2n)(2n-1)(2n-2)...n...(n-1)(n-2)...1$$ Which one is right? The exercise is to show that $$(n+1)\bigg|\left(\begin{array}{c}2n\\n\end{array}\right)$$Then I thought of using the combination formula $\left(\begin{array}{c}n\\k\end{array}\right)=\frac{n!}{k!(n-k)!}$ to decrease my expression, but then I came across $$(2n)!$$
Hint: You can verify by a computation that $$\frac{1}{n+1}\binom{2n}{n}=\binom{2n}{n}-\binom{2n}{n+1}.$$ Detail: We have $$\frac{1}{n+1}\binom{2n}{n}=\frac{1}{n(n+1)}\frac{(2n)!}{(n-1)!n!}=\left(\frac{1}{n}-\frac{1}{n+1}\right) \frac{(2n)!}{(n-1)!n!} .$$ Now $$\frac{1}{n}\frac{(2n)!}{(n-1)!n!} =\binom{2n}{n}\qquad\text{and}\qquad \frac{1}{n+1}\frac{(2n)!}{(n-1)!n!} =\binom{2n}{n+1} .$$
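A quick empirical check of the divisibility claim for small $n$ (the identity above is the real proof; $\binom{2n}{n}/(n+1)$ is the $n$-th Catalan number):

```python
from math import comb

for n in range(1, 13):
    c = comb(2*n, n)
    print(n, c, c % (n + 1) == 0)   # True each time: (n+1) divides C(2n, n)
```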
{ "language": "en", "url": "https://math.stackexchange.com/questions/451044", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 0 }
How to find minimum of sum of mod functions? How to find minimum value of $$|x-1| + |x-2| + |x-31| + |x-24| + |x-5| + |x-6| + |x-17| + |x-8| + \\|x-9| + |x-10| + |x-11| + |x-12|$$ and also where it occurs ? I know the procedure for find answer for small problems like the following what is the minimum value of |x-2| + |x+3 | + |x-5| and where it occurs ? Here it is easy to visualise and give answer or even possible mathematically using equations as Let the point be at distance c from $x = -3$ so the sum of the distances from the $3$ points $x =2$ , $x=-3$ , $x=5$ are $5-c$ , $8-c$ and $c$ respectively ,so total distance is minimum when it is 8 as $x=-3$ and $x=5$ are the farthest and they are $8$ units away so the required point must lie within this region so $$ 5-c + 8-c + c = 8 $$ $$ c = 5 $$ so the point where minimum value occurs is at $x =2$ to find the minimum value substitute it in equation to get value as 8. But In this problem it is hard to do it this way as there are more points . Is there any easy way to find the answer . Thanks
Let's make it generic, you want to minimise $$f(x) = \sum_{k = 1}^n \lvert x - p_k\rvert,$$ where, without loss of generality, $p_1 \leqslant p_2 \leqslant \ldots \leqslant p_n$. How does the value of $f(x)$ change if you move $x$ * *left of $p_1$, *between $p_k$ and $p_{k+1}$, *right of $p_n$? A simple counting argument finds the location of the minimum.
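Carrying that counting argument out on the numbers in the question (a sketch: with an even number of points, any $x$ between the two middle order statistics minimises the sum, and the minimum value is the sum of the upper half minus the sum of the lower half):

```python
p = sorted([1, 2, 31, 24, 5, 6, 17, 8, 9, 10, 11, 12])

def f(x):
    return sum(abs(x - q) for q in p)

lo, hi = p[len(p)//2 - 1], p[len(p)//2]   # the two middle values
print(lo, hi, f(lo), f(hi))               # the minimum is attained everywhere on [lo, hi]
```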
{ "language": "en", "url": "https://math.stackexchange.com/questions/451201", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Solving for radius of a combined shape of a cone and a cylinder where the cone is base is concentric with the cylinder? I have a solid that is a combined shape of a cylinder and a concentric cone (a round sharpened pencil would be a good example) Know values are: Total Volume = 46,000 Height to Base Ratio = 2/1 (Height = cone height + cylinder height) (Base = Diameter) Angle of cone slope = 30 degrees (between base and slope of cone) How do you solve for the Radius? 2 let say the ratio is from the orignal length Know value: Total Volume = V = 46,000 Original Height to Base Ratio = t = 2/1 (Height = cone height + cylinder height + distance shortened) (Base = Diameter) Angle of cone slope = θ = 30 degrees (between base and slope of cone) distance shortened from original Height x = 3 (Let h be the height of the cylinder. Then h+rtanθ+x=2(2r) How do you solve for the Radius?
Since the $30^\circ$ angle is between the base and the slant, cone height $= R\tan 30^\circ = \dfrac{R}{\sqrt 3}$. Total height $= 2\times(2R) = 4R$, so cylinder height = total height $-$ cone height $$= \left(4 -\frac{1}{\sqrt3}\right)R. $$ Total volume $$ V_{total} = V_{cyl}+ V_{cone} = \pi R^2 \left(4- \frac{1}{\sqrt 3}\right) R + \frac\pi3 R^2 \cdot \frac{R}{\sqrt3} = 46000, $$ which is an equation in the single unknown $R^3$, from which $R$ can be calculated.
{ "language": "en", "url": "https://math.stackexchange.com/questions/451277", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Double integrals over general regions Set up iterated integrals for both orders of integration. Then evaluate the double integral using the easier order and explain why it's easier. $$\int\int_{D}{ydA}, \text{$D$ is bounded by $y=x-2, x=y^2$}$$ I'm having trouble with setting up the integral for both type I and type 2 and choosing which one would be easier. I'm sure I'll be able to integrate it with no problem, but setting them up is where I would need some help.
Plotting both $y=x-2$ and $x=y^2$ (in Wolfram Alpha, say) shows that the curves intersect at $(1,-1)$ and $(4,2)$. The region $D$ lies between the parabola and the line, so it is described by $y^2\le x\le y+2$ with $y\in[-1,2]$; in terms of $x$ it stretches from the vertex of the parabola at $x=0$ out to $x=4$.
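For concreteness, here is one way the two orders can be written down (a sketch; the type II order is the easier one because the region needs no splitting there):
$$\iint_D y\,dA=\int_{-1}^{2}\int_{y^2}^{\,y+2} y\,dx\,dy \qquad\text{versus}\qquad \int_{0}^{1}\int_{-\sqrt x}^{\sqrt x} y\,dy\,dx+\int_{1}^{4}\int_{x-2}^{\sqrt x} y\,dy\,dx.$$
The first order reduces at once to $\int_{-1}^{2}y\,(y+2-y^2)\,dy=\frac94$, and the split second order gives the same value.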
{ "language": "en", "url": "https://math.stackexchange.com/questions/451336", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 1 }
Lower bound on the probability of maximum of $n$ i.i.d. chi-square random variables exceeding a value close to their number of degrees of freedom I am wondering if there is a tight lower bound on the probability of a maximum of $n$ i.i.d. chi-square random variables, each with degree of freedom $d$ exceeding a value close to $d$. Formally, I need to lower bound the following expression: $$P(\max_i X_i\geq d+\delta)$$ where each $X_i\sim\chi^2_d$ for $i=1,2,\ldots,n$ and $\delta$ is small. Ideally, I would like an expression involving elementary functions of $n$, $d$, and $\delta$. I am interested in the asymptotics and assume that $n$ and $d$ are large. What I tried We know that $P(\max_i X_i\geq d+\delta)=1-P(X<d+\delta)^n$ where $X\sim\chi^2_d$. Therefore, I tried to upper bound $P(X<d+\delta)$ using the CLT, which yields the normal approximation to chi-squared distribution, and the lower bounds on the Q-function. However, the resultant overall bound is not tight, and I am hoping something better exists.
I'm not sure if this is helpful, but taking a look at what Wikipedia has to say about the cdf of the Chi-Square distribution, it appears that \begin{align*} P[X < d+\delta]^n &= \left( \left( \frac{d+\delta}{2}\right)^{d/2} e^{-d/2} \sum_{k=0}^\infty \frac{\left(\frac{d+\delta}{2}\right)^k}{\Gamma(d/2 + k + 1)}\right)^n \\ &= \left( e^{-d/2} \sum_{y=d/2}^\infty \frac{\left(\frac{d+\delta}{2}\right)^y}{\Gamma(y + 1)}\right)^n \\ &= e^{n\delta/2}\left(e^{-(d+\delta)/2} \sum_{y=d/2}^\infty \frac{\left(\frac{d+\delta}{2}\right)^y}{\Gamma(y + 1)}\right)^n, \end{align*} and so the term $$ e^{-(d+\delta)/2}\sum_{y=d/2}^\infty \frac{\left(\frac{d+\delta}{2}\right)^y}{\Gamma(y + 1)} $$ is bound above by $1$ (this is certainly clear for even $d$). It may be that the larger $d$ is, the further the term is from $1$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/451413", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Help with $\lim_{x \to -1} x^{3}-2x=1$ I believe that I completed this problem correctly but I could use a second set of eye's to verify that I used the right methods. Also if you have a suggestion for a better method of how to solve this I would appreciate any advice. Prove $$\lim_{x \to -1} x^{3}-2x=1$$ Given $\epsilon > 0,$ let ; $\delta= \min\left \{{1\over2}, {4\epsilon\over11} \right \} $ If $\; 0<\left | x+1 \right | \Rightarrow \delta \; then \; {-1\over4} < \left ( x-{1\over2} \right )^{2}-{5\over4} < {11\over4} $ so $\left | x^{3} - 2x - 1 \right | = \left | x+1 \right |\left | x^{2}-x-1 \right |=\left | x+1 \right |\left | \left ( x-{1\over2} \right )^{2}-{5\over4} \right |<{11\delta\over4}=\epsilon $ scratch work, let $\delta<{1\over2} \Rightarrow \left | x+1\right | < \delta < {1\over2} \Rightarrow{-1\over2} < x+1 < {1\over2} \Rightarrow -2 < x-{1\over2} < -1 \Rightarrow 4 > \left ( x -{1\over2} \right )^{2} > 1 \Rightarrow {11\over4} > \left ( x-{1\over2}\right )^{2}-{5\over4} > {-1\over4}$
Your method looks like it was taken "out of the blue": why did you choose $\;\delta\,$ as you did? Why did you do those odd-looking calculations in the third line (which I didn't understand right away, btw)? I propose the following: for an arbitrary $\,\epsilon>0\,$: $$|x^3-2x-1|=|(x+1)(x^2-x-1)|<\epsilon\iff |x+1|<\frac\epsilon{|x^2-x-1|}$$ Now the estimation "trick": for $\,x\,$ "pretty close" to $\,-1\,$, we get $\,x^2-x-1\,$ "pretty close" to $\,1\,$ (you can either use this freely or formally prove by limits that $\,x^2-x-1\xrightarrow[x\to -1]{}1\;$). Concretely, if $\,|x+1|<0.1\,$ then $\,|x^2-x-1-1|=|x+1|\,|x-2|<0.1\cdot 3.1<1\,$, so $\,|x^2-x-1|<2\,$. Thus we can choose, for example, $$\delta:=\min\left\{0.1\,,\,\frac\epsilon2\right\}$$ and then $$|x+1|<\delta\implies |(x^3-2x)-1|=|x+1|\,|x^2-x-1|<\frac\epsilon2\cdot 2=\epsilon$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/451479", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Probability of getting a certain sum of two dice; confusion about order If you roll two six-sided dice, the probability of obtaining a $7$ (as a sum) is $6/36$. Here is what is confusing me. Aren't $(5,2)$ and $(2,5)$ the same thing? So we shouldn't really double count? Thus by that logic, wouldn't the actual answer be $3/21$ instead? EDIT: My $21$ possibilities came from $\{ (1,1), (1,2), (1,3), (1,4), (1,5), (1,6), (2,2), (2,3), \dots, (5,6), (6,6) \}$
We can perfectly well decide that the outcomes are double $1$, a $1$ and a $2$, double $2$, and so on, as in your proposal. That would give us $21$ different outcomes, not $36$. However, these $21$ outcomes are not all equally likely. So although they are a legitimate collection of outcomes, they are not easy to work with when we are computing probabilities. By way of contrast, if we imagine that we are tossing a red die and a blue die, and record as an ordered pair (result on red, result on blue) then, with a fair die fairly tossed, all outcomes are equally likely. Equivalently, we can imagine tossing one die, then the other, and record the results as an ordered pair. You can compute probabilities using your collection of outcomes, if you keep in mind that for example double $1$ is half as likely as a $1$ and a $2$. The answers will be the same, the computations more messy, and more subject to error.
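If it helps to make the counting concrete, a small enumeration of ordered rolls shows both numbers at once (the $6/36$ for a sum of $7$, and the fact that the $21$ unordered outcomes are not equally likely):

```python
from itertools import product
from collections import Counter

ordered = list(product(range(1, 7), repeat=2))                      # 36 equally likely outcomes
print(sum(1 for a, b in ordered if a + b == 7), "/", len(ordered))  # 6 / 36

unordered = Counter(tuple(sorted(r)) for r in ordered)              # 21 unordered outcomes
print(unordered[(2, 5)], unordered[(3, 3)])                         # 2 vs 1: (2,5) is twice as likely as double 3
```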
{ "language": "en", "url": "https://math.stackexchange.com/questions/451579", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 0 }
Is there a name for this type of logical fallacy? Consider a statement of the form: $A$ implies $B$, where $A$ and $B$ are true, but $B$ is not implied by $A$. Example: As $3$ is odd, $3$ is prime. In this case, it is true that $3$ is odd, and that $3$ is prime, but the implication is false. If $9$ had been used instead of $3$, the first statement would be true, but the second wouldn't, in which case it is clear that the implication is false. Is there a name for this sort of logical fallacy?
This is not strictly a fallacy; it is a consequence of the truth-table definition of implication, in which a statement with a true conclusion is necessarily given the truth-value true: for any formula $A\rightarrow B$, if $B$ is given the truth-value T, then, by the truth-table definition of $ \rightarrow $, it follows that $A\rightarrow B$ is true. The only invalid statement of this sort is a statement $A\rightarrow B$ in which $A$ is true and $B$ is false. This is the essence of what logic is about: we want to avoid starting with a true statement, arguing correctly, and ending up with a false statement. Logic is largely designed to avoid this, since correct reasoning is reasoning that preserves truth, i.e. takes true premises to true conclusions.
{ "language": "en", "url": "https://math.stackexchange.com/questions/451646", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 7, "answer_id": 5 }
Find the projection of the point on the plane I want to find the projection of the point $M(10,-12,12)$ on the plane $2x-3y+4z-17=0$. The normal of the plane is $N(2,-3,4)$. Do I need to use Gram–Schmidt process? If yes, is this the right formula? $$\frac{N\cdot M}{|N\cdot N|} \cdot N$$ What will the result be, vector or scalar? Thanks!
You can use calculus to minimize the distance (easier: the square of the distance) of M from the generic point of the plane. Use the equation of the plane to drop a variable, obtaining a function of two independent variables: compute the partial derivatives and find the stationary point. That's all.
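If you would rather have a direct formula than the calculus route, one standard shortcut is to step from $M$ back along the normal by the signed distance to the plane; with the numbers of this question that sketch reads
$$M' \;=\; M-\frac{2\cdot 10-3\cdot(-12)+4\cdot 12-17}{2^2+(-3)^2+4^2}\,N \;=\; M-\frac{87}{29}\,N \;=\;(10,-12,12)-3\,(2,-3,4)\;=\;(4,-3,0),$$
and indeed $2\cdot 4-3\cdot(-3)+4\cdot 0-17=0$, so the result is the point $(4,-3,0)$ (a triple of coordinates, not a scalar).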
{ "language": "en", "url": "https://math.stackexchange.com/questions/451722", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
How to show $\lim_{x \to 1} \frac{x + x^2 + \dots + x^n - n}{x - 1} = \frac{n(n + 1)}{2}$? I am able to evaluate the limit $$\lim_{x \to 1} \frac{x + x^2 + \dots + x^n - n}{x - 1} = \frac{n(n + 1)}{2}$$ for a given $n$ using l'Hôspital's (Bernoulli's) rule. The problem is I don't quite like the solution, as it depends on such a heavy weaponry. A limit this simple, should easily be evaluable using some clever idea. Here is a list of what I tried: * *Substitute $y = x - 1$. This leads nowhere, I think. *Find the Taylor polynomial. Makes no sense, it is a polynomial. *Divide by major term. Dividing by $x$ got me nowhere. *Find the value $f(x)$ at $x = 1$ directly. I cannot as the function is not defined at $x = 1$. *Simplify the expression. I do not see how I could. *Using l'Hôspital's (Bernoulli's) rule. Works, but I do not quite like it. If somebody sees a simple way, please do let me know. Added later: The approach proposed by Sami Ben Romdhane is universal as asmeurer pointed out. Examples of another limits that can be easily solved this way: * *$\lim_{x \to 0} \frac{\sqrt[m]{1 + ax} - \sqrt[n]{1 + bx}}{x}$ where $m, n \in \mathbb{N}$ and $a, b \in \mathbb{R}$ are given, or *$\lim_{x \to 0} \frac{\arctan(1 + x) - \arctan(1 - x)}{x}$. It sems that all limits in the form $\lim_{x \to a} \frac{f(x)}{x - a}$ where $a \in \mathbb{R}$, $f(a) = 0$ and for which $\exists f'(a)$, can be evaluated this way, which is as fast as finding $f'$ and calculating $f'(a)$. This adds a very useful tool into my calculus toolbox: Some limits can be evaluated easily using derivatives if one looks for $f(a) = 0$, without the l'Hôspital's rule. I have not seen this in widespread use; I propose we call this Sami's rule :).
The given limit is $$\lim_{x\rightarrow 1}\frac{\sum_{k=1}^nx^k-n}{x-1}\\ =\lim_{x\rightarrow 1}\frac{\sum_{k=1}^n(x^k-1)}{x-1}\\ =\sum_{k=1}^n \lim_{x\rightarrow 1} \frac{(x^k-1)}{x-1}$$ Now, $$\lim_{x\rightarrow 1} \frac{(x^k-1)}{x-1}\\ =\lim_{x\rightarrow 1} (\sum_{j=0}^{k-1}x^j)=k$$ Hence the given limit becomes $$\sum_{k=1}^n k=\frac{n(n+1)}{2}$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/451799", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "49", "answer_count": 8, "answer_id": 5 }
Is taking cokernels coproduct-preserving? Let $\mathcal{A}$ be an abelian category, $A\,A',B$ three objects of $\mathcal{A}$ and $s: A\to B$, $t: A' \to B$ morphisms. Is the cokernel of $(s\amalg t): A\coprod A'\to B$ the coproduct of the cokernels of $s$ and $t$? In case it's wrong: Is it true if we restrict $s$ and $t$ to be monomorphisms?
Colimits preserve colimits, so colimits do preserve coproducts, and cokernels are colimits. However, this means something different than what you suggest. Usually, $s \coprod t$ is used to mean the morphism $A \coprod A' \to B \coprod B$; with this meaning, we do have $$ \text{coker}(s \coprod t) = \text{coker}(s) \coprod \text{coker}(t)$$ If I let $(s,t)$ denote the morphism $A \coprod A' \to B$, then if I've not made an error, what we do have is a pushout diagram $$ \begin{matrix} B &\to& \text{coker}(s) \\\downarrow & & \downarrow \\\text{coker}(t) &\to& \text{coker}(s,t) \end{matrix} $$ or equivalently, we have an exact sequence $$ B \to \text{coker}(s) \oplus \text{coker}(t) \to \text{coker}(s,t) \to 0 $$
{ "language": "en", "url": "https://math.stackexchange.com/questions/451891", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Angles between two vertices on a dodecahedron Say $20$ points are placed across a spherical planet, and they are all spaced evenly, forming the vertices of a dodecahedron. I would like to calculate the distances between the points, but that requires me to find out the angles between the vertices. From the origin of the dodecahedron, how would I find the angle between two adjacent vertices on the same face, and the angle between two vertices on the same face but not connected by an edge?
It is maybe not a good habit to use short-cut formulas/constructions from Wikipedia, since a real mathematician should construct his geometric or analytic geometric figures herself/himself, however, I believe that it is necessary when you have not enough time. Wikipedia already constructed a dodec with 20 vertices: $(\pm 1,\pm 1, \pm 1), (0,\pm\phi,\pm\phi^{-1}), (\pm\phi^{-1},0,\pm\phi), (\pm\phi,\pm\phi^{-1},0).$ If we take two of them which are vertices of an edge we can compute the first angle asked. For example, $(1,1,1)$ and $(\phi,\phi^{-1},0)$ are such vertices! By using the identities $\phi^{-2}=2-\phi$ and $\phi^2=\phi+1$, we compute the angle which sees an edge from the origin: $$(1,1,1)\cdot(\phi,\phi^{-1},0)=\sqrt{1^2+1^2+1^2}\sqrt{\phi^2+\phi^{-2}}\cos\theta\implies\theta=\arccos\frac{\sqrt{5}}{3}\approx 41.81^{\circ}$$ For the angle which sees a diagonal of a face, we take $(1,1,1)$ and $(-1,1,1)$. Hence, $$(1,1,1)\cdot(-1,1,1)=\sqrt{3}\sqrt{3}\cos\theta\implies\theta=\arccos(\frac{1}{3})\approx 70.53^{\circ}.$$
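If you want to double-check those two values numerically from the quoted coordinates, a few NumPy lines will do (illustration only):

```python
import numpy as np

phi = (1 + 5**0.5) / 2
a = np.array([1.0, 1.0, 1.0])          # a vertex
b = np.array([phi, 1/phi, 0.0])        # an adjacent vertex (shares an edge with a)
c = np.array([-1.0, 1.0, 1.0])         # same face as a, but not adjacent

def angle(u, v):
    return np.degrees(np.arccos(u @ v / (np.linalg.norm(u) * np.linalg.norm(v))))

print(angle(a, b))   # ~41.81 degrees
print(angle(a, c))   # ~70.53 degrees
```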
{ "language": "en", "url": "https://math.stackexchange.com/questions/451943", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 4, "answer_id": 2 }
how to draw graphs of ODE's In order to solve this question How to calculate $\omega$-limits I'm trying to learn how to draw graphs of ODE's. For example, let $p\in \mathbb R^2$ in the case of the field $Y=(Y_1, Y_2)$, given by: $Y_1=-y_2+y_1(y_1^2+y_2^2)\sin\left(\dfrac{\pi}{\sqrt{y_1^2+y_2^2}}\right)$ $Y_2=y_1+y_2(y_1^2+y_2^2)\sin\left(\dfrac{\pi}{\sqrt{y_1^2+y_2^2}}\right)$ I need help. Thanks so much
This is a nice example of what a nonlinear term can do to a stable, but not asymptotically stable, equilibrium. It helps to introduce the polar radius $\rho=\sqrt{y_1^2+y_2^2}$, because this function satisfies the ODE $$\frac{d\rho }{dt} = \frac{y_1}{\rho}\frac{dy_1}{dt}+\frac{y_2}{\rho}\frac{dy_2}{dt} = \rho^3\sin \frac{\pi}{\rho} \tag1$$ The analysis of equilibria of this ODE tells you about the orbits of the original system: * *There are stable closed orbits of radius $\rho=\dfrac{1}{2k }$, $k=1,2,\dots$ *There are unstable closed orbits of radius $\rho=\dfrac{1}{2k-1 }$, $k=1,2,\dots$ *In between, the orbits are spirals converging to the nearest stable closed orbit. *Outside, in the region $\rho>1$, the orbits go off into infinity in a hurry (in such a hurry that they get there in finite time). The plot given by Amzoti illustrates all of the above points.
{ "language": "en", "url": "https://math.stackexchange.com/questions/452007", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 2 }
Prove that $\sqrt 2 + \sqrt 3$ is irrational I have proved in earlier exercises of this book that $\sqrt 2$ and $\sqrt 3$ are irrational. Then, the sum of two irrational numbers is an irrational number. Thus, $\sqrt 2 + \sqrt 3$ is irrational. My first question is, is this reasoning correct? Secondly, the book wants me to use the fact that if $n$ is an integer that is not a perfect square, then $\sqrt n$ is irrational. This means that $\sqrt 6$ is irrational. How are we to use this fact? Can we reason as follows: $\sqrt 6$ is irrational $\Rightarrow \sqrt{2 \cdot 3}$ is irrational. $\Rightarrow \sqrt 2 \cdot \sqrt 3$ is irrational $\Rightarrow \sqrt 2$ or $\sqrt 3$ or both are irrational. $\Rightarrow \sqrt 2 + \sqrt 3$ is irrational. Is this way of reasoning correct?
If $\sqrt 3 +\sqrt 2$ is rational/irrational, then so is $\sqrt 3 -\sqrt 2$ because $\sqrt 3 +\sqrt 2=\large \frac {1}{\sqrt 3- \sqrt 2}$ . Now assume $\sqrt 3 +\sqrt 2$ is rational. If we add $(\sqrt 3 +\sqrt 2)+(\sqrt 3 -\sqrt 2)$ we get $2\sqrt 3$ which is irrational. But the sum of two rationals can never be irrational, because for integers $a, b, c, d$ $\large \frac ab+\frac cd=\frac {ad+bc}{bd}$ which is rational. Therefore, our assumption that $\sqrt 3 +\sqrt 2$ is rational is incorrect, so $\sqrt 3 +\sqrt 2$ is irrational.
{ "language": "en", "url": "https://math.stackexchange.com/questions/452078", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "43", "answer_count": 9, "answer_id": 2 }
Understanding "Divides" aka "|" as used logic What are the rules for using the divides operator aka "$\mid$"? Is it false to say $2\mid5$ since $5/2$ = $2.5$ and $2.5\notin\mathbb{Z}$? Or does my question imply a misunderstanding? I am seeing this for the first time in the 6.042J, Lecture 2: Induction on MIT OCW. Thanks for the help.
That's correct. We use the divide operator to denote the following: We say for two integers $a$, $b$, that $a$ divides $b$ (or $a|b$) if $b = ka$ for some integer $k$.
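In a program (e.g. for the 6.042 exercises) the same test is just a remainder check; for instance, in Python:

```python
print(5 % 2)         # 1, so 2 does not divide 5
print(10 % 2 == 0)   # True, so 2 | 10
```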
{ "language": "en", "url": "https://math.stackexchange.com/questions/452146", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 5, "answer_id": 0 }
Given a rational number $p/q$, show that the equation $\frac{1}{x} + \frac{1}{y} = \frac{p}{q}$ has only finite many positive integer solutions. How can i solve this, Given a rational number $p/q$, show that the equation $\frac{1}{x} + \frac{1}{y} = \frac{p}{q}$ has only finite many positive integer solutions. I thought let $\frac pq=\frac1r$, but solving this gets $(x-r)(y-r)=r^2$, which gives the no. of solutions to be $k(r^2)$ where $k(r^2)$ is the no. of divisors of $r$. Thanks.
Hint: A solution always satisfies $$ \frac{x + y}{xy} = \frac{p}{q} $$. Conclude that $xy = r q$ for some positive integer $r$. Now what?
{ "language": "en", "url": "https://math.stackexchange.com/questions/452213", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 2 }
Closure of a certain subset in a compact topological group Suppose that $G$ is a compact Hausdorff topological group and that $g\in G$. Consider the set $A=\{g^n : n=0,1,2,\ldots\}$ and let $\bar{A}$ denote the closure of $A$ in $G$. Is it true that $\mathbf{\bar{A}}$ is a subgroup of $\mathbf{G}$? From continuity of multiplication and the fact that $A\cdot A\subseteq A$ it is clear that $\bar{A}\cdot\bar{A}\subseteq\bar{A}$. therefore, $a,b\in \bar{A}$ yields $a\cdot b\in \bar{A}$. However, I am having trouble showing that inverses of elements in $\bar{A}$ are also in $\bar{A}$.
I really like Fischer's solution. I would like to post an answer based on Fischer's answer in the "filling the blank" spirit - certainly helpful for beginners like me. We need to get help from the two theorems below (refer to section 1.15 in Bredon's Topology and Geometry): * *In a topological group $G$ with unity element $1$, the symmetric neighborhoods of $1$ form a neighborhood basis at $1$. *If $G$ is a topological group, $U$ is any neighborhood of $1$ and $n$ is any positive integer, then there exists a symmetric neighborhood $V$ of $1$ such that $V^n\subset U$. Since $A.A\subset A$, by the same argument using the continuity of the multiplication map that proves the closure of a subgroup is a subgroup, we have $\bar{A}.\bar{A}\subset\bar{A}$. Therefore, if $g^{-1}\in\bar{A}$, then $g^{-n}\in\bar{A}$ for any $n\in\textbf{N}$, and so the group $\langle g\rangle\subset\bar{A}$. Taking closures in $A\subset\langle g\rangle\subset\bar{A}$ yields $\bar{A}=\bar{\langle g\rangle}$, which means $\bar{A}$ is a subgroup, being the closure of a subgroup. We now show that $g^{-1}\in \bar{A}$. The case where $g$ has finite order is trivial, so let $g$ have infinite order. Suppose $g^{-1}\notin\bar{A}$. Then by definition there is a neighborhood of $g^{-1}$ disjoint from $A$; since translation is a homeomorphism, it can be taken of the form $g^{-1}U'$ for a neighborhood $U'$ of $1$, so $g^{-1}U'\cap A=\emptyset$. Since the symmetric neighborhoods form a neighborhood basis at $1$, we can choose a symmetric neighborhood $U$ of $1$ such that $g^{-1}U\cap A=\emptyset$. Since $A$ is an infinite set in a compact space, $A$ has a limit point $p$. We can choose a symmetric neighborhood $V$ of $1$ such that $V^2\subset U$. Then $pV$, being a neighborhood of a limit point of $A$, must contain at least two distinct points $g^m$, $g^n$ ($m, n\in \textbf{N}$), i.e. $g^m=pa$, $g^n=pb$ for some $a,b\in V$. Since $V$ is symmetric and $V^2\subset U$, we have $b^{-1}a=u$ for some $u\in U$, and hence $g^m=pa=pbu\in g^n U\cap A$. Now, if $m>n$, then $g^{m-n-1}\in g^{-1}U\cap A$ (contradicting our definition of $U$). If $n>m$, then since $U^{-1}=U$, $g^m\in g^n U^{-1}$, and so $g^n\in g^m U\cap A$ (also contradicting our definition of $U$). Therefore $g^{-1}\in \bar{A}$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/452297", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 5, "answer_id": 4 }
Solving for $x$: $3^x + 3^{x+2} = 5^{2x-1}$ $3^x + 3^{x+2} = 5^{2x-1}$ Pretty lost on this one. I tried to take the natural log of both sides but did not get the result that I desire. I have the answer but I would like to be pointed in the right direction. Appreciated if you can give me some hints to this question, thanks!
Just take $3^x$ out as a common factor on the left side; then $$3^x(1+3^2)=5^{2x-1}\Rightarrow \ln\left({3^x(1+3^2)}\right) = \ln\left(5^{2x-1}\right)\Rightarrow x\ln 3 + \ln 10 = (2x-1)\ln 5\Rightarrow \ln 10+\ln 5=2x\ln 5-x\ln 3\Rightarrow x=\frac{\ln 10+\ln 5}{2\ln 5-\ln 3}$$
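Plugging the closed form back into the original equation is a quick numerical sanity check:

```python
from math import log

x = (log(10) + log(5)) / (2 * log(5) - log(3))
print(x)                                   # ~1.845
print(3**x + 3**(x + 2), 5**(2*x - 1))     # the two sides agree up to rounding
```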
{ "language": "en", "url": "https://math.stackexchange.com/questions/452346", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 3 }
show numbers (mod $p$) are distict and nonzero Let's start with the nonzero numbers, mod $p$, $1$, $2$, $\cdots$, $(p-1)$, and multiply them all by a nonzero $a$ (mod $p$). Notice that if we multiply again by the inverse of $a$ (mod $p$) we get back the numbers $1$, $2$, $\cdots$, $(p-1)$. But my question is how the above process show that the numbers $a\cdot 1$ mod $p$, $a\cdot 2$ mod $p$, $\cdots$, $a\cdot (p-1)$ mod $p$ are distinct and nonzero? Thanks in advance.
It is important to point out that $\Bbb Z_p$ is a field if (and only if) $p$ is prime. This means in particular that every element that is not $0$ has an inverse. Since $p\not\mid a$ is equivalent to $a\not\equiv 0\mod p$, $a$ has an inverse $a^{-1}$. But then $$x\equiv y\mod p\iff ax\equiv ay\mod p$$ since we can reverse the equalities by multiplication by $a$ or $a^{-1}$. Moreover, since $p\not\mid a$ and $p\not\mid x$ (by assumption), $p\not\mid ax$, that is $$a\not\equiv 0,x\not\equiv 0\implies ax\not\equiv 0$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/452432", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
we need to show gcd is $1$ I need to show if $(a,b)=1,n$ is an odd positive integer then $\displaystyle \left(a+b,{a^n+b^n\over a+b}\right)\mid n.$ let $\displaystyle \left(a+b,{a^n+b^n\over a+b}\right)=d$ $\displaystyle d\mid {a^n+b^n\over a+b}=(a+b)^2(a^{n-3}-a^{n-4}b\dots+b^{n-3})-2ab(a^{n-3}\dots+b^{n-3})-ab^{n-2}$ $d\mid (a+b)$ so from the rest can I conclude $d=1$ as $(a,b)=1$?
Modulo $a+b$, you have $a\equiv -b$, hence $a^{n-1-k}b^{k}\equiv (-1)^{n-1-k}b^{n-1}=(-1)^{k}b^{n-1}$ (using that $n$ is odd), so $$ \frac{a^n+b^n}{a+b}=a^{n-1}-a^{n-2}b+a^{n-3}b^2\mp\cdots -ab^{n-2}+b^{n-1}\equiv nb^{n-1}\pmod{a+b}.$$ Now $(a,b)=1$ forces $(a+b,b)=1$, so the common divisor $d$ of $a+b$ and $\frac{a^n+b^n}{a+b}$ is coprime to $b^{n-1}$; since $d$ divides $nb^{n-1}$, it divides $n$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/452510", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
How to calculate weight of positive and negative values. We have used formula formula to calculate weight as, $$ w_1 = \frac{s_1}{s_1 + s_2 + s_3};$$ $$ w_2 = \frac{s_2}{s_1 + s_2 + s_3};$$ $$ w_3 = \frac{s_3}{s_1 + s_2 + s_3};$$ However, their is possibility of negative and positive numbers. Even all can be negative or positive. How to calculate weight in this type of situation. For us -/+ are according to strict algebraic rules. i.e bigger the negative number smaller will be its value. Thanks.
I'm a little confused by your question. When you say the weight can be positive or negative, do you mean just the value can be negative or positive because of the measurement technique, or is it actually a negative weight? I would assume the first (for example, if you slow down really fast in an elevator, and measure your weight in that frame, you'll actually have 'negative' weight). With that said, I think what you're after is the RMS value (root mean squared). It's a common technique used to measure velocity since velocity is a vector and can have negative components, but often we care only about its magnitude. If that is the case for your weight, then do the following. $w_1 = (s_1^2/(s_1^2 + s_2^2 + s_3^2))^{1/2}$ $w_2 = (s_2^2/(s_1^2 + s_2^2 + s_3^2))^{1/2}$ $w_3 = (s_3^2/(s_1^2 + s_2^2 + s_3^2))^{1/2}$ If you indeed just want the average and the weight can be negative, do exactly what the formulas you provided tell you to do.
{ "language": "en", "url": "https://math.stackexchange.com/questions/452566", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Which of these numbers is greater: $\sqrt[5]{5}$ or $\sqrt[4]{4}$? I know that this is the question of elementary mathematics but how to logically check which of these numbers is greater: $\sqrt[5]{5}$ or $\sqrt[4]{4}$? It seems to me that since number $5$ is greater than $4$ and we denote $\sqrt[5]{5}$ as $x$ and $\sqrt[4]{4}$ as $y$ then $x^5 > y^4$.
The function $$n^{1/n}=e^{(1/n)\log n}$$ goes to $1$ as $n$ becomes infinite. Also, taking derivatives shows that it is monotonically decreasing whenever $n>e.$ If we consider only integers now, we have that for $n=4,5,6,\ldots,$ the sequence decreases. It follows that $$4^{1/4}>5^{1/5}.$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/452635", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 2 }
Closed representation of this integral I was wondering whether there is some easy closed representation for $\int_R^{\infty} e^{-k(r-R)}r^{l+1} dr$, where $l\in \mathbb{N_0}$ $k>0$ and $R>0$.
$$ I=\int_R^{\infty} e^{-k(r-R)}r^{l+1} dr= e^{kR}\int_R^{\infty} e^{-kr}r^{l+1} dr. $$ Now, using the change of variables $kr=t$ gives $$ = e^{kR}\int_R^{\infty} e^{-kr}r^{l+1} dr = \frac{e^{kR}}{k^{l+2}}\int_{Rk}^{\infty} e^{-t}t^{l+1} dt $$ $$I = \frac{e^{kR}}{k^{l+2}}\Gamma( l+2, Rk ),$$ where $ \Gamma(s,x) $ is the upper incomplete gamma function.
{ "language": "en", "url": "https://math.stackexchange.com/questions/452702", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Prove that every closed ball in $\Bbb R^n$ is sequentially compact. Question: Prove that every closed ball in $\Bbb R^n$ is sequentially compact. A subset $E$ of $\Bbb R^n$ is said to be squentially compact $\iff$ every sequence $x_k\in E$ has convergent subsequence whose limit belongs to $E$ Solution: Let $B_R(a)$ be closed ball. Let $x_k$ be a sequence in $B_R(a)$ Then, $$\vert\vert x_k-a\vert\vert \le M$$ for $M>0$ By the triangle inequality, $$\vert\vert x_k-a\vert\vert \le \vert \vert x_k\vert \vert +\vert\vert a\vert \le M \ \ \Rightarrow \vert\vert x_k\vert\vert \le \vert\vert a\vert\vert +M $$ So the sequence $x_k$ is bounded. By Bolzano W. Theorem, $x_k$ has convergent subsequences. Now I need to show that these convergent subsequences have a limit point in $B_R(a)$. But how? Please explain this part. Thank you:)
First, note that a closed ball is a closed set. To prove this, let $E\subset \mathbb{R}^n$ be a closed ball of radius $R$ around $y$. Let $x\in E^{C}$. Now, let $r=d(x,y)-R>0$ and $z\in B_{r}(x)$. Then $$d(y,x)\le d(y,z)+d(z,x)<d(y,z)+r\\ \Rightarrow d(y,z)>R$$ Hence $z\notin E$, and so the open ball $B_r(x)$ is totally contained in $E^{C}$; hence $E^{C}$ is open and $E$ is closed. Now, since you've got a convergent (sub)sequence in a closed set, it is going to converge to some point in the set itself, by definition of a closed set.
{ "language": "en", "url": "https://math.stackexchange.com/questions/452789", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 1 }
Finding a point within a 2D triangle I'm not sure how to approach the following problem and would love some help, thanks! I have a two-dimensional triangle ABC for which I know the cartesian coordinates of points $A$, $B$ and $C$. I am trying to find the cartesian coordinates of a point $P$. I know the lengths (distances) $PA$, $PB$ and $PC$. How can I find the coordinates for $P$? Thanks.
Let $(x,y)$ be the coordinates of $P$ and $(x_i,y_i)$ the coordinates of $A,B,C$ respectively $(i=1,2,3)$. Since you know the lengths $PA, PB, PC$, you get three equations $$(x-x_i)^2+(y-y_i)^2=d_{i}^2,\ i=1,2,3$$ Subtracting equation $j$ from equation $i$ kills the quadratic terms and leaves something linear: $$2x(x_j-x_i)+2y(y_j-y_i)+x_i^2-x_j^2+y_i^2-y_j^2=d_i^2-d_j^2$$ From these equations solve for $(x,y)$. Actually only two of these difference equations are needed to pin down $P$ (provided $A,B,C$ are not collinear). This method, for $3$D, is called triangulation. The equation that you get from the pair $A,B$ is $$2x(x_2-x_1)+2y(y_2-y_1)=d_1^2-d_2^2-x_1^2+x_2^2-y_1^2+y_2^2\tag{1}$$ Similarly, the equation from the pair $B,C$ is $$2x(x_3-x_2)+2y(y_3-y_2)=d_2^2-d_3^2-x_2^2+x_3^2-y_2^2+y_3^2\tag{2}$$ So the equations are now in the form $$a_1x+b_1y=c_1\\ a_2x+b_2y=c_2$$ which have the solution $$x=\frac{b_2c_1-b_1c_2}{a_1b_2-a_2b_1}\\ y=\frac{a_1c_2-a_2c_1}{a_1b_2-a_2b_1}$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/452845", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Testing for convergence of this function For the integral $$\int_2^\infty \dfrac{x+1}{(x-1)(x^2+x+1)}dx .$$ Can I know if it's convergent or not? If it does can I know how to evaluate it? I tried to use $u$ substitution but it didn't work.
Hint: Apply partial fractions. $$\frac{x+1}{(x-1)(x^2+x+1)}=\frac A{x-1}+\frac{Bx+C}{x^2+x+1}$$ $$x+1=(A+B)x^2+(A-B+C)x+(A-C)$$ $$\therefore A=\frac23,B=-\frac23,C=-\frac13$$ Now we know that: $$\int \frac{x+1}{(x-1)(x^2+x+1)}dx=\frac23\int\frac1{x-1}dx-\frac13\int\frac{2x+1}{x^2+x+1}dx\\ =\frac23\ln|x-1|-\frac13\ln(x^2+x+1)=\frac13\ln\frac{(x-1)^2}{x^2+x+1}$$ Therefore, \begin{align*} \int^\infty_2\frac{x+1}{(x-1)(x^2+x+1)}dx&=\lim_{n\to \infty}\int^n_2\frac{x+1}{(x-1)(x^2+x+1)}dx \\ \\ &=\lim_{n\to\infty}\left(\frac13\ln\frac{(n-1)^2}{n^2+n+1}-\frac13\ln\frac17\right)\\ \\ &=-\frac13\ln\frac17\\ \\ &=\boxed{\dfrac13\ln7} \end{align*}
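For those who like a machine check alongside the hint, SymPy confirms both the antiderivative and the value of the improper integral (a verification only; the hint above is the intended method):

```python
import sympy as sp

x = sp.symbols('x')
f = (x + 1) / ((x - 1) * (x**2 + x + 1))
F = sp.log((x - 1)**2 / (x**2 + x + 1)) / 3
print(sp.simplify(sp.diff(F, x) - f))        # 0, so F is an antiderivative on (1, oo)
print(sp.integrate(f, (x, 2, sp.oo)))        # log(7)/3
```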
{ "language": "en", "url": "https://math.stackexchange.com/questions/452930", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 0 }
The Effect of Perspective on Probability My friend and I are tearing each other to bits over this, hope someone can help. Coin flip experiment: Define a single trial as 10 coin flips of a fair coin. Perform an arbitrarily large number of trials. At some number of trials n, you notice that your distribution is extremely skewed in one direction (i.e., the "average" of your 10-flip sets is far away from 5 heads and 5 tails). My reaction: Because you are guaranteed to hit a 5H/5T mean as n approaches infinity, the probability that the next n trials contains an equal skew in the opposite direction increases. In other words, given 2*n* trials, if the first n are skewed in one direction, than the remaining n are probably skewed in the other direction such that the overall distribution of your 2*n* trials is normal and centered around 5H/5T. My friend's reaction: It doesn't matter if your first n trials is skewed, the next n trials should still represent an unmodified 5H/5T distribution regardless. The probability of the next n trials being skewed in the opposite direction is unchanged and low. Who's right, and why?
If you want to consider something really interesting, consider that as you add more flips, the probability of getting exactly half heads and tails goes down, and this isn't hard to compute mathematically: for $2n$ flips it is $2n \choose n$ divided by $2^{2n}$. The denominator will grow as you increase the number of flips. For 2 flips, this is 2/4 = 1/2. For 4 flips, this is 3/8. For 6 flips, this is 20/64 = 5/16. Now, as this continues, it will keep diminishing as it is a specific outcome compared to being "near" the middle. Just something to consider: while you may get close to the middle, the exact middle keeps having lower and lower probability.
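The exact sequence mentioned above is easy to tabulate; a few lines of Python make the steady decrease visible:

```python
from math import comb

for n in range(1, 7):
    num, den = comb(2*n, n), 2**(2*n)     # P(exactly n heads in 2n flips)
    print(f"{2*n:2d} flips: {num}/{den} = {num/den:.4f}")
```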
{ "language": "en", "url": "https://math.stackexchange.com/questions/452980", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 2 }
Fourier Transform of short pulse I'm trying to take the fourier transform of a short laser pulse, represented by $E(t) = E_oe^{-(t/\Delta T)^2}\times e^{-i \omega t}$ E is the electric field of the laser pulse. $E_o$ and $\Delta T$ are both constants. Specifically I want to know if there are beats in the fourier transform, and what their frequency is. Any help would be great.
The $e^{-i\omega t}$ factor in your pulse only shifts the spectrum in frequency; apart from that the pulse is a Gaussian pulse, which means that the Fourier transform will also be Gaussian, and it will be $$\large \mathcal{E}(f)=E_0\Delta T\sqrt{\pi}e^{-\Delta T^2 \pi^2\left(f+\frac{\omega}{2\pi}\right)^2}$$ So the spectrum has its peak at the frequency $-\frac{\omega}{2\pi}$, it decreases monotonically as you move away from that peak, and the power is mostly concentrated within about $\displaystyle \frac{1}{\sqrt{2}\,\pi \Delta T}$ of the peak.
{ "language": "en", "url": "https://math.stackexchange.com/questions/453062", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Limit of $n^2\sqrt{1-\cos(1/n)+\sqrt{1-\cos(1/n)+\ldots}}$ when $n \to \infty$ Compute the limit: $$\lim_{n \to \infty} n^2\sqrt{1-\cos(1/n)+\sqrt{1-\cos(1/n)+\sqrt{1-\cos(1/n)+\ldots}}}$$
Hints: * *For every $a\gt0$, $b=\sqrt{a+\sqrt{a+\sqrt{a+\cdots}}}$ is such that $b^2=a+b$ and $b\gt0$, thus $b=\frac12+\frac12\sqrt{1+4a}$. *When $n\to\infty$, $1-\cos(1/n)\to0$. *When $a\to0$, $\frac12+\frac12\sqrt{1+4a}\to1$. *Hence the limit you are after is $\lim\limits_{n\to\infty}n^2\cdot1=+\infty$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/453139", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 0 }
show that $\int_{0}^{\infty} \frac {\sin^3(x)}{x^3}dx=\frac{3\pi}{8}$ show that $$\int_{0}^{\infty} \frac {\sin^3(x)}{x^3}dx=\frac{3\pi}{8}$$ using different ways thanks for all
Using the formula found in my answer, $$ \begin{aligned} \int_{0}^{\infty} \frac{\sin ^{3} x}{x^{3}} &=\frac{\pi}{2^{3} \cdot 2 !}\left[\left(\begin{array}{l} 3 \\ 0 \end{array}\right) 3^{2}-\left(\begin{array}{l} 3 \\ 1 \end{array}\right) 1^{2}\right] \\ &=\frac{3 \pi}{8} \end{aligned} $$
{ "language": "en", "url": "https://math.stackexchange.com/questions/453198", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "13", "answer_count": 8, "answer_id": 7 }
On the Space of Continuous Linear Operators on LCTVS Suppose that $X$ is a locally convex topological vector space (LCTVS) and that $L(X)$ denotes the space of all continuous linear operators on $X$. Question. How can we construct a topology on $L(X)$ which compatible with the vector space structure of $L(X)$? I need help on this. Thanks in advance....
There is no natural topology on the dual space of a locally convex space unless the original space has a norm. In this case, if the original space is complete, then the $\sup_{B}$ norm is a norm on the dual space. Here $B$ is the unit ball in the original space. I think the conventional way to endow a topology on the dual space is as follows. Assume $H$ has a topology defined by a family of seminorms such that $\bigcap \{x:p(x)=0\}=\{0\}$. The subbase in this case is all sets of the form $\{x:p(x-x_{0})<\epsilon\}$. So a set $U$ in $H$ is open if and only if at every point $x_{0}\in H$, we have $p_{i},\epsilon_{i},i=\{1\cdots n\}$ such that $\bigcap^{n}_{j=1}\{x\in H:p_{i}(x-x_{0})<\epsilon_{i}\}\subset U$. Assuming this topology already well defined on $H$, we define a "weak-star topology" on $H^{*}$ by considering the family of seminorms given by $$\mathcal{P}=\{p_{x}:x\in H\}$$where $p_{x}(f)=f(x)$. Since $H^{*}$ is also a topological vector space, the above $p_{x}$ make it into a locally convex space in the same way as $H$. Some author claim that this is the only "natural" topology for the dual space of a locally convex space in the absence of a norm (See Conway). There is also a related personal note by Tao at here.
{ "language": "en", "url": "https://math.stackexchange.com/questions/453288", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
One-to-one functions between vectors of integers and integers, with easily computable inverses I'm trying to find functions that fit certain criteria. I'm not sure if such functions even exist. The function I'm trying to find would take vectors of arbitrary integers for the input and would output an integer. It is one-to-one. Both the function and its inverse should be easy to compute with a computer. The inverse should output the original vector, with the elements in the same indices. Just something I've been trying to think of. Thanks for all your help.
What you need is a bijection $\alpha : \mathbb{N}^2 \to \mathbb{N}$ which is easy to compute in both forward and backward directions. For example, $$\begin{align} \mathbb{N}^2 \ni (m,n) & \quad\stackrel{\alpha}{\longrightarrow}\quad \frac{(m+n)(m+n+1)}{2} + m \in \mathbb{N}\\ \mathbb{N} \ni N & \quad\stackrel{\alpha^{-1}}{\longrightarrow}\quad (N - \frac{u(u+1)}{2},\frac{u(u+3)}{2} - N) \in \mathbb{N}^2, \end{align}$$ where $u = \lfloor\sqrt{2N+\frac14}-\frac12\rfloor$. Once you have such a bijection, then given any $k \ge 1$ natural numbers $x_1, x_2, \ldots x_k$, you can encode it into a single natural number as: $$( x_1, \ldots, x_k ) \mapsto \alpha(k,\alpha(x_1,\alpha(\ldots,\alpha( x_{k-1}, x_{k} )))$$ To decode this number, you apply $\alpha^{-1}$ once to get $k$ and $\alpha(x_1,\alpha(\ldots,\alpha( x_{k-1}, x_{k} )))$. Knowing $k$, you know how many times you need to apply $\alpha^{-1}$ to the second piece and get all the $x_k$ back. Other cases like: * *encoding signed instead of unsigned integers *allow encoding of zero number of integers can be handled in similar manner.
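A direct transcription of this pairing and the vector encoding into Python might look as follows (natural numbers only, and at least one element per vector — handling signed integers or the empty vector needs the extra steps sketched at the end of the answer, e.g. a zigzag map for signs):

```python
from math import isqrt

def pair(m, n):
    # Cantor pairing: N = (m+n)(m+n+1)/2 + m
    return (m + n) * (m + n + 1) // 2 + m

def unpair(N):
    u = (isqrt(8 * N + 1) - 1) // 2          # same as floor(sqrt(2N + 1/4) - 1/2)
    m = N - u * (u + 1) // 2
    return m, u - m

def encode(xs):
    # alpha(k, alpha(x1, alpha(..., alpha(x_{k-1}, x_k))))
    code = xs[-1]
    for x in reversed(xs[:-1]):
        code = pair(x, code)
    return pair(len(xs), code)

def decode(code):
    k, rest = unpair(code)
    xs = []
    for _ in range(k - 1):
        x, rest = unpair(rest)
        xs.append(x)
    xs.append(rest)
    return xs

print(decode(encode([3, 1, 4, 1, 5])))   # [3, 1, 4, 1, 5]
```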
{ "language": "en", "url": "https://math.stackexchange.com/questions/453436", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Prove that $\int_{I}f=0 \iff$ the function $f\colon I\to \Bbb R$ is identically $0$. Let $I$ be a generalized rectangle in $\Bbb R^n$ Suppose that the function $f\colon I\to \Bbb R$ is continuous. Assume that $f(x)\ge 0$, $\forall x \in I$ Prove that $\int_{I}f=0 \iff$ the function $f\colon I\to \Bbb R$ is identically $0$. My idea is that For $(\impliedby)$ Since $f\colon I\to \Bbb R$ is identically zero, $$f(I)=0$$ Then $$\int_{I}f=\int 0=0$$ For $(\implies)$, Since $f$ is continuous, the function is integrable. i.e $\int _{I} f $ exists. I need the show that $\int f=0$ but how? Hopefully, other solution is true. Please check this. And how to continue this? Thank you:)
This may be only a minor variation on an earlier answer, but maybe it adds something. Suppose there's some point $x_0$ where $f(x_0)>0$. Let $\varepsilon=f(x_0)/2$. Then by continuity, there is some $\delta>0$ such that for $x$ in the open interval with endpoints $x_0\pm\delta$, the distance between $f(x)$ and $f(x_0)$ is less than $\varepsilon$. That means $f(x)>f(x_0)/2$ on that interval. Hence $$ \int_{x_0-\delta}^{x_0+\delta} f(x)\,dx > 2\delta\cdot\frac{f(x_0)}{2} = \delta f(x_0)>0. $$ Maybe one reason I feel I ought to post this is that there is a question: intuitively, the statement that if a function is positive on an interval, then its integral over that interval is positive, seems obvious. But how does one prove it without doing something like what I did above? What I did gives a partition of the interval, $\{a,x_0-\delta,x_0+\delta,b\}$, where $a,b$ are the endpoints, for which the lower Riemann sum is positive. Or if you like Lebesgue's definition, it gives a simple function dominated by $f$, whose integral is positive.
{ "language": "en", "url": "https://math.stackexchange.com/questions/453502", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 0 }
Orientation of manifold given by external normal field Consider the unit sphere $S^1$ of $\mathbb{R}^2$. This is a 1-dimensional manifold. And an orientation $\sigma$ of $S^1$ is given by the orientated atlas $\left\{\phi_1,\phi_2\right\}$ with the maps $$ \phi_1\colon (-\pi,\pi)\to S^1\setminus (-1,0), t\mapsto (\cos t,\sin t)\\ \phi_2\colon (0,2\pi)\to S^1\setminus (1,0), t\mapsto (\cos t,\sin t) $$ Moreover an orientation of $S^1$ is given by the external normal field $$ \nu\colon S^1\to\mathbb{R}^2. $$ Show that the orientation given by the external normal field is identical with the orientation $\sigma$ above. Do I have to show that the external normal field $\nu$ is positive orientated related to $\sigma$? Or something different?
I think you want to show that, if you pick a point $p\in S^{1}\subset\mathbb{R}^{2}$, then $\nu(p),v(p)$ is positively oriented in $T_{p}\mathbb{R}^2$, where $v(p)$ is the oriented basis for $T_{p}S^{1}$ you get from the atlas $\{\phi_1,\phi_2\}$. Is this what you meant by "the external normal field $\nu$ is positive oriented related to $\sigma$"?
{ "language": "en", "url": "https://math.stackexchange.com/questions/453549", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Why do I disagree with my calculator? I followed the order of opperations but my answer disagrees with my calulator's. Problem: $331.91 - 1.03 - 19.90 + 150.00$ Calculator answer: $460.98$ My answer: $162.98$ Why the discrepancy?
The calculator is correct of course (at least in the magnitude of the answer; I haven't actually calculated it, and I won't). I bet you are having problems with those negative terms. For this particular exercise, and infinitely many more like it, you can take another approach, more intuitive than just following rules like a robot. Look at what you are being asked: it's a sum. A sum with negative and positive numbers. We can already see that the positive numbers exceed in magnitude the negative ones, so there is no way in hell that your answer is correct. Structurally you could add all positives, add all negatives and then add those two groups, just like you can identify the 'subject' and 'predicate' part in a grammatical sentence. In my experience, that has always minimized sign-related errors.
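Carrying out that grouping with the numbers in the question: the positives give $331.91+150.00=481.91$, the negatives give $1.03+19.90=20.93$, and $481.91-20.93=460.98$, which is exactly the calculator's answer.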
{ "language": "en", "url": "https://math.stackexchange.com/questions/453631", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 6, "answer_id": 1 }
Formal (series/sum/derivative...) I have come across a lot of cases where terms such as formal sum rather than simply sum is used, similarly in case of derivatives/infinite series/power series. As I understand in case of series/sum, the term formal is used when the notion of convergence is not clear. I would appreciate any precise definition or explanation of where formal is used. I also cannot get how it relates to derivatives. Also, where else is "formal" used?
A formal sum is where we write something using a $+$ symbol, or some other notation normally used for sums, even when there may be no actual operation defined. For example, someone may have defined a quaternion as a formal sum of a scalar and a (3-dimensional) vector. Before this definition, at least in that book, there was no defined "sum" of a scalar plus a vector. After that definition, our author may tell us what it means for two quaternions to be equal $$ \lambda + \mathbf{x} = \mu + \mathbf{y} \quad \Longleftrightarrow \quad\text{??} $$ how to add quaternions $$ (\lambda + \mathbf{x}) + (\mu + \mathbf{y}) = \text{??} $$ how to multiply quaternions $$ (\lambda + \mathbf{x}) \cdot (\mu + \mathbf{y}) = \text{??} $$ and so on.
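Just to illustrate how those blanks typically get filled in (the standard quaternion rules, stated here only as one example): equality means $\lambda=\mu$ and $\mathbf{x}=\mathbf{y}$; addition is $(\lambda+\mu)+(\mathbf{x}+\mathbf{y})$; and multiplication is $$(\lambda + \mathbf{x}) \cdot (\mu + \mathbf{y}) = (\lambda\mu - \mathbf{x}\cdot\mathbf{y}) + (\lambda\mathbf{y} + \mu\mathbf{x} + \mathbf{x}\times\mathbf{y}),$$ where $\cdot$ and $\times$ are the usual dot and cross products. Before such rules are laid down, the formal sum $\lambda+\mathbf{x}$ is nothing more than an ordered pair written with a plus sign.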
{ "language": "en", "url": "https://math.stackexchange.com/questions/453690", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12", "answer_count": 2, "answer_id": 0 }
Determinant of PSD matrix and PSD submatrix inequality I'm reading this paper and in the appendix I see the following statement: For $A \in R^{m\times m}, B \in R^{n\times m}, C \in R^{n\times n}$, if $D = \begin{bmatrix}A & B\\B^T & C\end{bmatrix}$ is positive semi-definite then, $det(D) \leq det(A)det(C)$ This is given without proof as a property of psd matrices. This doesn't seem axiomatic to me and it's not obvious. Can you point to a reference or give a proof of this? I suspect it's pretty simple, but I'm missing it. I've never formally studied linear algebra so it might just be a gap in my education. Some things I notice: $A$ and $C$ are principal submatrices of $D$. I know a determinant of a $2\times 2$ matrix is $a_{1,1}a_{2,2} - a_{1,2}a_{2,1}.$ Because $D$ is psd and has larger dimensions than $A$ or $C$, it seems like the second term is subtracting more than the second term for $A$ or $C$ would. But that statement is pretty imprecise and doesn't convince me that it's true.
Lemma: If $A$ and $B$ are symmetric positive-definite, then $\det(A+B) \geq \det(A)$. This follows from Sylvester's determinant theorem: if $L$ is a Cholesky factor of $B$, $$\det(A+B) = \det(A)\det(I + L^TA^{-1}L) \geq \det(A)$$ since $L^TA^{-1}L$ is symmetric positive-definite, and adding $I$ shifts the spectrum by one. Now for your original problem, if $D$ is singular the statement is obvious. If it's strictly positive-definite, write $$D=\left[\begin{array}{cc}A & 0\\B^T & I\end{array}\right]\left[\begin{array}{cc}I & A^{-1}B\\0 & C - B^TA^{-1}B\end{array}\right].$$ The matrix $C-B^T A^{-1}B$ is symmetric, and also positive-definite, since if $v$ were an eigenvector with eigenvalue $\lambda\le0$, we would have $$(-A^{-1}Bv\quad v)^T\,D\,(-A^{-1}Bv\quad v) = (-A^{-1}Bv\quad v)^T(0 \quad \lambda v) = \lambda\|v\|^2\le0,$$ contradicting the strict positive-definiteness of $D$. Thus by the lemma, $\det(C) \geq \det(C-B^TA^{-1}B)$, and your statement follows.
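To spell out how the stated inequality then follows: the first factor in that decomposition is block lower-triangular and the second is block upper-triangular, so taking determinants gives $$\det(D)=\det(A)\,\det\!\left(C-B^TA^{-1}B\right)\le\det(A)\det(C),$$ where the last step uses $\det(C)\ge\det(C-B^TA^{-1}B)$ as in the lemma, applied with the positive semi-definite matrix $B^TA^{-1}B$ (if it is only semi-definite, replace it by $B^TA^{-1}B+\varepsilon I$ and let $\varepsilon\to0$).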
{ "language": "en", "url": "https://math.stackexchange.com/questions/453840", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
How is the Inverse Fourier Transform derived from the Fourier Transform? Where does the $\frac{1}{2 \pi}$ come from in this pair? Please also try to explain Plancherel's theorem and Parseval's theorem! $ X(j \omega)=\int_{-\infty}^\infty x(t) e^{-j \omega t}d t$ $ x(t)=\frac{1}{2 \pi} \int_{-\infty}^{\infty} X(j \omega)e^{j \omega t}d \omega $
I am going to "derive" this heuristically, because the question is concerned with the origin of the $1/(2 \pi)$ factor. Questions about integrability, order of integration, limts, etc., are to be smoothed over here (unless, of course, I have erred somewhere in the derivation - then all bets are of course off). Consider the FT $$\hat{f}(k) = \int_{-\infty}^{\infty} dx \, f(x) \, e^{i k x}$$ Now let's assume that the inverse FT may be written as $$f(x) = A \int_{-\infty}^{\infty} dk \, \hat{f}(k) \, e^{-i k x}$$ where we are to show, again heuristically, that $A = 1/(2 \pi)$. To do this, I am going to rewrite the IFT as follows: $$f(x) = A \lim_{N \to \infty} \int_{-N}^{N} dk \, \hat{f}(k) \, e^{-i k x}$$ Now, substitute the above definition of the FT into the integral, and reverse the order of integration (to repeat, I am assuming that $f$ is such this step and all the others are OK). In doing this, we get $$f(x) = A \lim_{N \to \infty} \int_{-\infty}^{\infty} dx' \, f(x') \, \int_{-N}^{N} dk \, e^{i k (x'-x)}$$ Evaluating the inner integral, we get a single integral back: $$f(x) = A \lim_{N \to \infty} \int_{-\infty}^{\infty} dx' \, f(x') \frac{e^{i N (x'-x)}-e^{-i N (x'-x)}}{i (x'-x)} = 2 A \lim_{N \to \infty} \int_{-\infty}^{\infty} dx' \, f(x') \frac{\sin{N (x-x')}}{x-x'}$$ Now, as $N \to \infty$, the sinc kernel behaves as a distribution which is characterized by a sifting property. Thus, we may, in this limit, take $f$ out of the integral and replace it with the value at $x'=x$. Thus we have $$f(x) = 2 A f(x) \lim_{N \to \infty} \int_{-\infty}^{\infty} dx' \frac{\sin{N (x-x')}}{x-x'}$$ or, simplifying things a bit and incorporating $N$ into the integral, we get that $$1 = 2 A \int_{-\infty}^{\infty} dy \frac{\sin{y}}{y} = 2 A \pi$$ Thus, $A = 1/(2 \pi)$ if the above steps are valid.
{ "language": "en", "url": "https://math.stackexchange.com/questions/453946", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Is this function convex or not? Is this function convex? $$ f(\mathbf y) = { \left| \sum_{i=1}^{K} y_i^2e^{-j\frac{2\pi}Np_il} \right| \over\sum_{i=1}^{K}y_i^2} $$ where $ P = \{p_1,p_2,\cdots,p_K\} \subset\{1,2,\cdots,N\} $. I tried to plot it and see whether it is convex or not, but since we can only plot in two variables, I can check it only for $K=2$. So I'm looking for an analytic answer.
No. The function is homogeneous of order zero, i.e., $f(ty)=f(y)$ for any nonzero $t\in\mathbb{R}$ and any $y$ in the domain. If such a function is convex, it is necessarily constant. To see this, pick any two linearly independent $x$, $y\in\mathbb{R}^n$, and let $z=x+y$, so that any two of $x$, $y$, $z$ are linearly independent. Since $z=x+y$, $x=z-y$ and $y=z-x$, each of the three vectors is twice the average of two vectors at which $f$ takes the values of the other two (using $f(-x)=f(x)$ and $f(-y)=f(y)$, again by homogeneity). Hence, by homogeneity and convexity, $f(z)\le\frac{f(x)+f(y)}{2}$, $f(x)\le\frac{f(y)+f(z)}{2}$, and $f(y)\le\frac{f(x)+f(z)}{2}$. Adding the three inequalities gives $f(x)+f(y)+f(z)\le f(x)+f(y)+f(z)$, so each of them must be an equality, and this forces $f(x)=f(y)=f(z)$. In particular, $f(x)=f(y)$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/454094", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
solve differential equation $x^2\frac{dy}{dx}=2y^2+yx$ How can I solve this equation? $x^2\frac{dy}{dx}=2y^2+yx$ I tried to separate variables, but I always have both $x$ and $y$ on one side of the equation.
HINT: Divide both sides by $x^2$ to get $$\frac{dy}{dx}=2\left(\frac yx\right)^2+\frac yx\text{ which is a function of } \frac yx$$ So, we can put $\frac yx=v\iff y=vx $ to separate the variables
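Carrying the hint through, just as a sketch: with $y=vx$ we have $\frac{dy}{dx}=v+x\frac{dv}{dx}$, so $$x^2\left(v+x\frac{dv}{dx}\right)=2v^2x^2+vx^2\implies x\frac{dv}{dx}=2v^2\implies\frac{dv}{2v^2}=\frac{dx}{x}.$$ Integrating gives $-\frac{1}{2v}=\ln|x|+C$, i.e. $v=-\frac{1}{2(\ln|x|+C)}$, hence $$y=-\frac{x}{2\ln|x|+C'}\qquad(C'=2C),$$ together with the singular solution $y=0$ that was lost when dividing by $v^2$.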
{ "language": "en", "url": "https://math.stackexchange.com/questions/454150", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
What Are the Relations for the Polar Hypercomplex form $a + bi + cj + dk$? Olariu in "Complex Numbers in $N$ Dimensions" has polar hypercomplex numbers described by its generators as \begin{gather} \alpha^2 = \beta, \\ \beta^2 = 1, \\ \gamma^2 = \beta, \\ \alpha\beta =\beta\alpha = \gamma, \\ \alpha\gamma =\gamma\alpha = 1,\\ \beta\gamma = \gamma\beta = \alpha. \end{gather} A commutative hypercomplex number can be expressed as a linear combination of real and non-real roots of $1$ and $-1$. My goal is to find an expression for these polar hypercomplex numbers as $a + bi + cj + dk$. Any help, reference would be greatly appreciated.
I will attempt an answer at this point. First, I think you would like us to read section 3.4 which is found at pages 113-137. I make no claim to understand all those results, clearly the author has spent some time developing exponentials and trigonometric functions as well as studying the structure of zero-divisors and so forth. That said, the basic structure here is simply an algebra $\mathcal{A} = \mathbb{R} \oplus \alpha \mathbb{R} \oplus \beta \mathbb{R} \oplus \gamma \mathbb{R}$ where the multiplication is given by the relations in your post. \begin{gather} \alpha^2 = \beta, \\ \beta^2 = 1, \\ \gamma^2 = \beta, \\ \alpha\beta =\beta\alpha = \gamma, \\ \alpha\gamma =\gamma\alpha = 1,\\ \beta\gamma = \gamma\beta = \alpha. \end{gather} You ask for an expression. I would start with $X=t+x\alpha+y\beta+ z\gamma$ this is a typical number in $\mathcal{A}$. We can multiply them as follows, suppose $A=a+b\alpha+c\beta+ d\gamma$ is another number in $\mathcal{A}$ then \begin{align} AX &= (a+b\alpha+c\beta+ d\gamma)( t+x\alpha+y\beta+ z\gamma) \\ &= a( t+x\alpha+y\beta+ z\gamma)+b\alpha( t+x\alpha+y\beta+ z\gamma)+ \\ &\qquad +c\beta ( t+x\alpha+y\beta+ z\gamma)+ d\gamma ( t+x\alpha+y\beta+ z\gamma) \\ &= 1(at+dx+cy+bz)+\alpha(bt+ax+dy+cz) \\ &\qquad + \beta( ct+bx+ay+dz)+ \gamma( dt+cx+by+az) \end{align} Our notation here is that $e_1=1$ and $e_2 = \alpha$, $e_3=\beta$ and $e_4=\gamma$. It follows that I can read off a matrix representative of the number $A$ as follows: $$ M_A = \left[ \begin{array}{cccc} a & d & c & b \\ b & a & d & c \\ c & b & a & d \\ d & c & b & a \end{array} \right] $$ The matrix above represents the linear map $L_A: \mathcal{A} \rightarrow \mathcal{A}$ defined by $L_A(X)=AX$ with respect to the basis of generators. Notice in particular, $$ M_1 = \left[ \begin{array}{cccc} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{array} \right] \ M_{\alpha} = \left[ \begin{array}{cccc} 0 & 0 & 0 & 1 \\ 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \end{array} \right] \ M_{\beta}=\left[ \begin{array}{cccc} 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \end{array} \right] \ M_{\gamma}=\left[ \begin{array}{cccc} 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ 1 & 0 & 0 & 0 \end{array} \right] $$ You can verify that $M_AM_B = M_{AB}$ hence $M: \mathcal{A} \rightarrow \mathbb{R}^{ 4 \times 4}$ provides an algebra homomorphism. Interesting, this reminds me of $\mathcal{A}_2 = \mathbb{R} \oplus j\mathbb{R} \oplus j^2 \mathbb{R}\oplus j^3\mathbb{R}$ where $j^4=1$ which has matrix representatives of the following form: $$ M_{a+bj+cj^2+dj^3} = \left[ \begin{array}{cccc} a & d & c & b \\ b & a & d & c \\ c & b & a & d \\ d & c & b & a \end{array} \right]$$ Apparently, your algebra is isomorphic to the cyclotomic numbers of order $4$. Identify that $\alpha = j$, $\beta=j^2$ and $\gamma=j^3$. You can recover all your relations by imposing $j^4=1$. Woke up this morning and it occurred to me you might be after the isomorphism of $\mathbb{R} \oplus j\mathbb{R} \oplus j^2 \mathbb{R}\oplus j^3\mathbb{R}$ to $\mathbb{C} \oplus \mathcal{H}$ where $\mathbb{C}$ is the usual complex numbers and $\mathcal{H}$ are the hyperbolic numbers. The space $\mathcal{A}_3=\mathbb{C} \oplus \mathcal{H}$. I'll use $\mathbb{C} = \mathbb{R} \oplus i \mathbb{R}$ and $\mathcal{H} = \mathbb{R} \oplus \eta \mathbb{R}$ where $i^2=-1$ and $\eta^2=1$. 
Be careful though, the identity for $\mathcal{A}_3$ in our current context is $(1,1)$ $$\mathcal{A}_3 = \{ (z_1,z_2) \ | \ z_1 \in \mathbb{C}, z_2 \in \mathcal{H} \}$$ In this notation $(0,\eta)^2 = (0, \eta^2)= (0,1)$ and $(i,0)^2 = (i^2, 0)= (-1,0)$. The isomorphism $\Psi$ from $\mathcal{A}_2$ to $\mathcal{A}_3$ will be fixed by its image on the generator $j$. Clearly we need: $$ \Psi(j) = (x_1+iy_1,x_2+\eta y_2)$$ such that $(x_1+iy_1,x_2+\eta y_2)^4=(1,1)$ but this is to ask: $$ (x_1+iy_1)^4=1, \qquad (x_2+\eta y_2)^4=1 $$ One solution is given by: $$ x_1 = 0, \ \ y_1 = 1, \ \ x_2=0, y_2 = 1$$ note $(i,\eta)^2 = (i^2,\eta^2) = (-1,1)$ and $(i,\eta)^4 = (i^4,\eta^4) = (1,1)$. Therefore, $\Psi(j) = (i,\eta)$ and you can compute (omitting the $\Psi$) that: $$ j = (i,\eta) = \alpha $$ $$ j^2 = (-1,1) = \beta $$ $$ j^3 = (-i,\eta) = \gamma $$ So you can use a polar representation in terms of sine and cosine for the copy of the complex numbers; however, as I suspected from the outset, there is a copy of the hyperbolic numbers implicit within your algebra. Now you can use the relations above to make that explicit. Incidentally, anytime you have a semi-simple real associative algebra which is commutative, it will allow an isomorphism to a direct sum of copies of $\mathbb{R}$ and $\mathbb{C}$ (and since $\mathcal{H}\cong\mathbb{R}\oplus\mathbb{R}$, copies of $\mathcal{H}$ fit into this picture). If we allow noncommutativity then the algebra is isomorphic to a direct sum of matrix algebras over $\mathbb{R}$, $\mathbb{C}$ or the quaternions $\mathbb{H}$.
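As a quick consistency check on this identification: $(i,\eta)\,(-i,\eta)=(-i^2,\eta^2)=(1,1)$, matching $\alpha\gamma=1$, and $(i,\eta)^2=(-1,1)$ corresponds to $\beta$, with $(-1,1)^2=(1,1)$ matching $\beta^2=1$.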
{ "language": "en", "url": "https://math.stackexchange.com/questions/454204", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
What is $S^3/\Gamma$? Let $G$ be a group and $H$ a subgroup of $G$. I know $G/H$ is the quotient space, but I have no idea what $S^3/\Gamma$ is, where $S^3$ is the sphere and $\Gamma$ is a finite subgroup of $SO(4)$. In this case $S^3$ is not given as a group and $\Gamma$ is not a subgroup of the sphere. So, what is $S^3/\Gamma$? Thanks for your help.
$\mathrm{SO}(4)$ acts on $S^3$ in the obvious way if we think of $S^3$ as the set of unit vectors in $\Bbb R^4$. Then any subgroup $\Gamma \leq \mathrm{SO}(4)$ acts on $S^3$ as well. $S^3/\Gamma$ is the orbit space of this $\Gamma$-action, i.e. $$S^3/\Gamma = S^3/\!\sim$$ where $$x \sim y \quad \iff \quad x = g \cdot y \text{ for some } g \in \Gamma.$$
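A standard concrete example: take $\Gamma=\{I,-I\}\subset\mathrm{SO}(4)$ (note $\det(-I)=(-1)^4=1$). The action then identifies antipodal points $x\sim -x$, and $S^3/\Gamma$ is the real projective space $\mathbb{RP}^3$. More generally, when $\Gamma$ acts freely on $S^3$, the orbit space is again a manifold, a so-called spherical space form.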
{ "language": "en", "url": "https://math.stackexchange.com/questions/454340", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 0 }
GRE test prep question [LCM and divisors] Let $S$ be the set of all positive integers $n$ such that $n^2$ is a multiple of both $24$ and $108$. Which of the following integers are divisors of every integer $n$ in $S$ ? Indicate all such integers: $A:12$ $B:24$ $C:36$ $D:72$ The answers are $A$ and $C$ First I took the lcm of $24$ and $108$ which is $2^3\times3^3$ but then it says that "the prime factorization of a square number must contain only even exponents. Thus, the least multiple of $(2^3)(3^3)$ that is a square is $(2^4)(3^4)$" Can somebody explain why that is true? What if the lcm was $2^3\times3^4$ ? Would I just make it $2^4\times3^4$ ? Help!
Let $n=p_1^{a_1}\cdots p_r^{a_r}$ where the $p_i$ are primes, so $n^2=p_1^{2a_1}\cdots p_r^{2a_r}$. As you observed, $n^2$ must be a multiple of $LCM(24, 108)=2^{3} 3^{3}$, so $2a_1\ge 3$ and $2a_2\ge 3$ with $p_1=2$ and $p_2=3$. Therefore $a_1\ge 2$ and $a_2\ge 2$, so $n$ is a multiple of $2^{2} 3^{2}=36$. Thus $S$ consists of all positive multiples of $36$, so the integers which divide every integer in $S$ are simply the divisors of $36$.
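To connect this with the answer choices: since $36\in S$, any integer dividing every element of $S$ must divide $36$. Both $12$ and $36$ do, while $24\nmid36$ and $72\nmid36$; conversely $12$ and $36$ divide every multiple of $36$. So exactly the choices $A$ and $C$ work.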
{ "language": "en", "url": "https://math.stackexchange.com/questions/454409", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
show that $\int_{0}^{\pi/2}\tan^ax \, dx=\frac {\pi}{2\cos(\frac{\pi a}{2})}$ Show that $$\int_{0}^{\pi/2}\tan^ax \, dx=\frac {\pi}{2\cos(\frac{\pi a}{2})}$$ I think we can solve it by contour integration, but I don't know how. If someone can solve it in both ways, using complex and real analysis, that would be even better for me. Thanks to all.
Sorry for being late $$ \begin{aligned} \int_0^{\frac{\pi}{2}} \tan ^a x d x&=\int_0^{\frac{\pi}{2}} \sin ^a x \cos ^{-a} x d x \\ & =\int_0^{\frac{\pi}{2}} \sin ^{2\left(\frac{a+1}{2}\right)-1} x \cos ^{2\left(\frac{-a+1}{2}\right)-1} x d x \\ & =\frac{1}{2} B\left(\frac{a+1}{2}, \frac{-a+1}{2}\right) \\ & =\frac{1}{2} \pi \csc \frac{(a+1) \pi}{2} \\ & \end{aligned} $$ Applying the Euler-reflection property $$ B(x, 1-x)=\pi \csc (\pi x) \quad x \notin \mathbb{Z}, $$ we have $$\boxed{\int_0^{\frac{\pi}{2}} \tan ^a x d x =\frac{\pi}{2 \cos \left(\frac{\pi a}{2}\right)}} $$
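Two small remarks that may help when reading the last step: it uses $\sin\frac{(a+1)\pi}{2}=\cos\frac{\pi a}{2}$, and the whole computation requires $-1<a<1$ (so that $\frac{a+1}{2}>0$ and $\frac{1-a}{2}>0$ in the Beta integral), which is exactly the range in which the original integral converges.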
{ "language": "en", "url": "https://math.stackexchange.com/questions/454483", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 2 }
Double integral in polar coordinates Use polar coordinates to find the volume of the given solid inside the sphere $$x^2+y^2+z^2=16$$ and outside the cylinder $$x^2+y^2=4$$ When I try to solve the problem, I keep getting the wrong answer, so I don't know if it's an arithmetic error or if I'm setting it up incorrectly. I've been setting the integral like so: $$\int_{0}^{2\pi}\int_{2}^4\sqrt{16-r^2}rdrd\theta$$ Is that the right set up? If so then I must have made an arithmetic error, if it's not correct, could someone help explain to me why it's not that? Thanks so much!
It's almost correct. Recall that the integrand is usually of the form $z_\text{upper}-z_\text{lower}$, where each $z$ defines the lower and upper boundaries of the solid. As it is currently set up, you are treating the sphere as a hemisphere, where your lower boundary is the $xy$-plane. Hence, you need to multiply by $2$, since we are technically doing: $$ \left(\sqrt{16-r^2} \right) - \left(-\sqrt{16-r^2} \right) $$
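If it helps as a check, carrying the corrected integral through: $$2\int_{0}^{2\pi}\!\int_{2}^{4}\sqrt{16-r^2}\,r\,dr\,d\theta = 2\cdot 2\pi\left[-\tfrac{1}{3}\left(16-r^2\right)^{3/2}\right]_{2}^{4} = 4\pi\cdot\tfrac{1}{3}\,12^{3/2} = 32\sqrt{3}\,\pi.$$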
{ "language": "en", "url": "https://math.stackexchange.com/questions/454546", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
In plane geometry is it possible to represent the product of two line segments (p and q) as a line segment? In plane geometry the product of two line segments p and q can be represented as the area of a rectangle with sides p and q. Or at least that is the premise assumed here. Assuming that is correct, the question is: Can this product be represented by a simple line segment, instead of an area?
Yes it is possible. We are given two line segments, of lengths $a$ and $b$. Draw two lines, say perpendicular to each other (it doesn't really matter), meeting at some point $O$. On one of the lines, which we call the $x$-axis, make a point $A$ such that $OA=a$. (Straightedge and compass can do this.) On the other line, which we call the $y$-axis, make a point $B$ such that $OB=b$. On the $x$-axis, put a point $X$ such that $OX$ has unit length. Join $X$ and $B$. Through $A$, draw the line parallel to $XB$. This meets the $y$-axis at some point $P$. By similar triangles, we have $\frac{b}{1}=\frac{OP}{a}$. It follows that $OP$ has length $ab$. Remark: By a small modification of the basic idea, we can also construct a line segment of length $\dfrac{a}{b}$. Note that we need to define, perhaps arbitrarily, some line segment as the unit line segment in order to carry out the construction.
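For the remark about $\dfrac{a}{b}$, one possible way to carry out the modification (a sketch, with hypothetical labels): on the $x$-axis mark $X'$ with $OX'=b$, keep the unit point $X$ with $OX=1$, and on the $y$-axis mark $B'$ with $OB'=a$. Join $X'$ and $B'$, and through $X$ draw the line parallel to $X'B'$, meeting the $y$-axis at $Q$. Similar triangles give $\frac{OQ}{1}=\frac{OB'}{OX'}=\frac{a}{b}$, so $OQ$ has length $\dfrac{a}{b}$.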
{ "language": "en", "url": "https://math.stackexchange.com/questions/454620", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Why is there no explicit formula for the factorial? I am somewhat new to summations and products, but I know that the sum of the first $n$ positive integers is given by: $$\sum_{k=1}^n k = \frac{n(n+1)}{2} = \frac{n^2+n}{2}$$ However, I know that no such formula is known for the simplest product (in my opinion) - the factorial: $$n!=\prod_{k=1}^n k = \prod_{k=2}^n k$$ I don't know if that is how products work, but I would really like to know! So my question is why is there no explicit formula (in terms of $n$, other than $n(n-1)\cdots2\cdot1$) for the product of the first $n$ integers? Is there a proof that says that one cannot exist or is it that one has not been discovered? By explicit formula I mean a non-functional equation that does not require $n$ steps to calculate - just like the summation formula does not require $n$ additions.
Euler discovered two formulas that give $n!$: the infinite product $$n!=\prod_{k=1}^{\infty}\frac{\left(1+\frac 1k\right)^{n}}{1+\frac nk}$$ and the integral $$n!=\int_0^1(-\ln x)^n\,dx.$$ Reference this link for further reading. http://eulerarchive.maa.org/hedi/HEDI-2007-09.pdf
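For instance, the integral form can be checked directly: substituting $x=e^{-t}$ gives $$\int_0^1(-\ln x)^n\,dx=\int_0^{\infty}t^n e^{-t}\,dt=\Gamma(n+1)=n!,$$ though of course this is an integral representation rather than a finite algebraic expression in $n$ of the kind that exists for $\sum_{k=1}^n k$.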
{ "language": "en", "url": "https://math.stackexchange.com/questions/454689", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "27", "answer_count": 5, "answer_id": 3 }
Why can't $x^k+5x^{k-1}+3$ be factored? I have a polynomial $P(x)=x^k+5x^{k-1}+3$, where $k\in\mathbb{Z}$ and $k>1$. Now I have to show that you can't factor $P(x)$ into two polynomials with degree $\ge1$ and only integer coefficients. How can I show this?
Assume $k\ge2$. By the rational root theorem, only $\pm1$ and $\pm3$ are candidates for rational roots - and by inspection they are not roots. Therefore, if $P(x)=Q(x)R(x)$ with $q:=\deg Q>0$, $r:=\deg R>0$, we conclude that $q\ge2$ and $r\ge 2$. Modulo $3$ we have $P(x)\equiv x^k+2x^{k-1}\equiv(x-1)x^{k-1}\pmod 3$, hence wlog. $Q(x)\equiv (x-1)x^{q-1}\pmod 3$, $R(x)\equiv x^r\pmod 3$. This implies that the constant terms of both $Q$ and $R$ are multiples of $3$, hence the constant term of $P$ would have to be a multiple of $9$ - contradiction.
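As a quick sanity check for $k=2$: $x^2+5x+3$ has discriminant $25-12=13$, which is not a perfect square, so it has no rational roots and hence no factorization into integer linear factors, consistent with the general argument.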
{ "language": "en", "url": "https://math.stackexchange.com/questions/454756", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Searching sum of constants in a polynomial Consider the polynomial $f(x)=x^3+ax+b$, where $a$ and $b$ are constants. If $f(x+1004)$ leaves a remainder of $36$ upon division by $x+1005$, and $f(x+1005)$ leaves a remainder of $42$ upon division by $x+1004$, what is the value of $a+b$?
The general fact to be used is that if $$P(x)=(x-a)Q(x)+R,$$ where $P$ and $Q$ are polynomials and $R$ is a constant, then evaluating at $x=a$ we get $P(a)=R$. This is called the polynomial remainder theorem. You are looking for $f(1)-1=a+b$. We have that $f(x+1005)$ gives remainder $42$ after division by $x+1004=x-(-1004)$. This means that $f(-1004+1005)=f(1)=1+a+b=42$. From this we get that $a+b=41$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/454830", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Is there a closed form solution for partial sums of $1/(2^{2^0}) + 1/(2^{2^1}) + 1/(2^{2^2}) + \ldots$ Title says it all, this is such a classical looking series, $$\frac1{2^{2^0}} + \frac1{2^{2^1}} + \frac1{2^{2^2}} + \ldots.$$ So, I was just wondering, is there a closed form solution known for the partial sums? If so, can someone post? I've looked around a bit and haven't found a closed form solution yet.
I doubt there's a closed expression for it, but the partial sums arise here: http://oeis.org/A085010 and in particular the decimal expansion of the infinite series is here: http://oeis.org/A007404 There is a (paywall) article called "Simple continued fractions for some numbers" where the problem being considered is to find the continued fraction expansion of $\sum_{n=0}^\infty \frac{1}{U^{2^n}}$. It doesn't seem to be known how to find a closed form expression for the sum however.
{ "language": "en", "url": "https://math.stackexchange.com/questions/454910", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Prove that $P(X)$ has exactly $\binom nk$ subsets of $X$ of $k$ elements each. Let the set $X$ consist of $n$ members. $P(X)$ is the power set of $X$. Prove that the set $P(X)$ has exactly $$\binom nk = \frac{n!}{k!(n-k)!}$$ subsets of $X$ of $k$ elements each. Hence, show that $P(X)$ contains $2^n$ members. Hint: use the binomial expansion of $(1 + 1)^n$. I'm trying to get more into math (proof writing first) so I got this from a book about Real Analysis. It's a short first sub-chapter which was an intro review of set theory and I have absolutely no idea how to do this particular problem given what has been said in 8 pages. I 'know' that it's true, since I've tried it. I can also somewhat prove that $P(X)$ has $2^n$ elements by thinking of binary numbers. I can also see that the binomial expansion of $(1 + 1)^n$ does give the number of subsets of $X$ that have $k$ elements via the coefficients. I just don't know how they all fit together given that the author thinks they should. I've also seen some questions in here proving the summation of $\binom nk = 2^n$, which was supposed to help, but I couldn't understand them at all given my limited math/combinatorics knowledge.
I think some hints might be more appropriate here than actually answering the question: 1) If you take $k$ out of $n$ elements, you automatically also pick out the other $n-k$ of the $n$ elements. The order of these elements doesn't matter. 2) All subsets of a finite set with $n$ elements have a certain number of elements: $|\mathcal{P}(X)| = $ the number of subsets = the number of subsets having no elements + the number of subsets having exactly one element + ... + the number of subsets having exactly $n$ elements. How does this translate to $(1+1)^n$ using the first part of your question?
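For completeness, here is one standard way to get the count itself (a sketch): there are $n(n-1)\cdots(n-k+1)=\frac{n!}{(n-k)!}$ ways to choose $k$ elements one after another in order, and each $k$-element subset arises from exactly $k!$ such ordered choices, so there are $\frac{n!}{k!(n-k)!}=\binom nk$ subsets with $k$ elements. The second hint then gives $$|\mathcal{P}(X)|=\sum_{k=0}^{n}\binom nk=(1+1)^n=2^n.$$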
{ "language": "en", "url": "https://math.stackexchange.com/questions/454974", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
$24\mid n(n^{2}-1)(3n+2)$ for all $n$ natural I am having problems with this statement: "Prove that for every natural $n$, $24\mid n(n^2-1)(3n+2)$." My attempt at a resolution: $24\mid n(n^2-1)(3n+2)$ holds if $3\cdot8\mid n(n^2-1)(3n+2)$. Since $$n(n^2-1)(3n+2)=(n-1)n(n+1)(3n+2),$$ we get $3\mid n(n^{2}-1)(3n+2)$, but does $8\mid n(n^{2}-1)(3n+2)$? I would not have succeeded without the help of everyone here who posts tips, ideas, etc. Thank you.
Hint: $(n-1)n(n+1)$ is a product of three consecutive integers, so it is always divisible by $3$; and if $n$ is odd, then $n-1$ and $n+1$ are consecutive even numbers, so the product is also divisible by $8$. In that case you don't even have to bother about $3n+2$. Now think about what happens to $3n+2$ if $n$ is even.
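To spell out the even case the hint points at (a sketch): if $n=2m$, then $n(3n+2)=2m\cdot2(3m+1)=4m(3m+1)$, and $m(3m+1)$ is always even (if $m$ is odd then $3m+1$ is even), so $8\mid n(3n+2)$. Combined with $3\mid(n-1)n(n+1)$, which holds for every $n$, this settles the even case as well, and altogether $24\mid n(n^2-1)(3n+2)$ for all natural $n$.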
{ "language": "en", "url": "https://math.stackexchange.com/questions/455043", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 1 }
Sum of $\sum_2^\infty e^{3-2n}$ $$\sum_2^\infty e^{3-2n}$$ I have only memorized the formula for the sum starting at index zero. So I reindex: $$\sum_0^\infty e^{-2n-1}$$ This gives me the sum as $$\frac{1}{1- \frac{1}{e} }$$ This is wrong. Why?
There is an $e^3$ factor that we can bring to the front. For the rest, we want $$\sum_{2}^\infty e^{-2n}.\tag{1}$$ It is nice to write out (1) at greater length. It is $$\frac{1}{e^4}+\frac{1}{e^6}+\frac{1}{e^8}+\frac{1}{e^{10}}+\cdots.\tag{2}$$ Now if you know that when $|r|\lt 1$ then $a+ar+ar^2+ar^3+\cdots=\frac{a}{1-r}$, we are finished, for we have $a=\frac{1}{e^4}$ and $r=\frac{1}{e^2}$. But if we don't want to remember that formula, take out a common factor of $\frac{1}{e^4}$. We get that (2) is equal to $$\frac{1}{e^4}\left(1+\frac{1}{e^2}+\frac{1}{e^4}+\frac{1}{e^6}+\cdots\right).$$ Now by the remembered formula, this is $$\frac{1}{e^4}\cdot \frac{1}{1-\frac{1}{e^2}}.$$ Finally, remember about the $e^3$ we had temporarily forgotten about, and simplify.
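Putting the pieces together, for reference: $$\sum_{n=2}^{\infty}e^{3-2n}=e^{3}\cdot\frac{1}{e^4}\cdot\frac{1}{1-\frac{1}{e^2}}=\frac{e^{-1}}{1-e^{-2}}=\frac{e}{e^{2}-1}\approx 0.4255.$$ This also pinpoints the error in the attempt: the common ratio of the reindexed series is $\frac{1}{e^2}$, not $\frac{1}{e}$, and its first term is $\frac1e$ rather than $1$.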
{ "language": "en", "url": "https://math.stackexchange.com/questions/455094", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 5, "answer_id": 4 }
An example of a simple module which does not occur in the regular module? Let $K$ be a field and $A$ be a $K$-algebra. I know that if $A$ is an artinian algebra, then by the Krull-Schmidt Theorem $A$, as a left regular module, can be written as a direct sum of indecomposable $A$-modules, that is $A=\oplus_{i=1}^n S_i$ where each $S_i$ is an indecomposable $A$-module. Moreover, each $S_i$ contains only one maximal submodule, which is given by $J_i= J(A)S_i$, and every simple $A$-module is isomorphic to some $S_i/J_i$. My question is: can you please tell me an example of a non-semisimple algebra, or a ring, such that it has a simple module which does not occur in the regular module. By occur I mean it has to be isomorphic to a simple submodule of the regular module.
Consider the algebra $K[T]$. The simple $K[T]$-modules are of the form $K[T]/(P(T))$ for some irreducible polynomial $P$. These do not occur as submodules of $K[T]$, since every nonzero submodule of $K[T]$ contains a free module, and hence is infinite dimensional over $K$.
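A completely explicit instance: take $P(T)=T$, so the simple module is $K[T]/(T)\cong K$ with $T$ acting as $0$. It cannot be isomorphic to a submodule of the regular module: a nonzero submodule of $K[T]$ is a nonzero ideal, so it contains some nonzero polynomial $f$ together with $Tf, T^2f,\dots$, which have distinct degrees and are therefore linearly independent over $K$.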
{ "language": "en", "url": "https://math.stackexchange.com/questions/455223", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 1 }
Can a function with just one point in its domain be continuous? For example if my function is $f:\{1\}\longrightarrow \mathbb{R}$ such that $f(1)=1$. I have the following context: 1) According to the definition given in Spivak's book and also in wikipedia, since $\lim_{x\to1}f$ doesn't exist because $1$ is not an accumulation point, then the function is not continuous at $1$ (otherwise it should be $\lim_{x\to 1}f=f(1)$). 2) According to this answer, as far as I can understand, a function is continuous at an isolated point. I don't understand. Edit: Spivak's definition of limit: The function $f$ approaches $l$ near $a$ means $\forall \epsilon > 0 \; \exists \delta > 0 \; \forall x \; [0<|x-a|<\delta\implies |f(x)-l|<\epsilon]$ Spivak's definition of continuity: The function $f$ is continuous at $a$ if $\lim_{x\to a}f(x)=f(a)$
Based on the definitions Spivak gave, I suspect that (as discussed in comments) his definition of continuity is based on the assumption that we're dealing with functions defined everywhere, or at the very least having domains with no isolated points. His definition does indeed break down (badly) for functions such as yours. A related (but more general) definition given for continuity at a point $a$ of the domain of a function $f$ is something like $$\forall\epsilon>0\:\exists\delta>0\:\forall x\in\operatorname{dom}f\:\bigl[|x-a|<\delta\implies |f(x)-f(a)|<\epsilon\bigr]$$ This is provably equivalent to: (i) $a$ is isolated in $\operatorname{dom}f$, or (ii) $a$ is a point of accumulation of $\operatorname{dom}f$ and $\lim_{y\to a}f(y)=f(a)$. The key to the proof is that for a point of accumulation $a$ of $\operatorname{dom}f,$ we say $\lim_{x\to a}f(x)=l$ iff $$\forall\epsilon>0,\exists\delta>0:\forall x\color{red}{\in\operatorname{dom}f},\:\bigl[0<|x-a|<\delta\implies |f(x)-l|<\epsilon\bigr]$$ Note that this definition also varies subtly and critically from Spivak's.
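Applied to your example, just to make it concrete: for $f\colon\{1\}\to\mathbb{R}$ with $f(1)=1$, take any $\epsilon>0$ and, say, $\delta=1$. The only $x\in\operatorname{dom}f$ with $|x-1|<\delta$ is $x=1$ itself, and then $|f(x)-f(1)|=0<\epsilon$. So under this more general definition $f$ is continuous at $1$, even though $\lim_{x\to1}f(x)$ is not defined, because $1$ is not an accumulation point of the domain.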
{ "language": "en", "url": "https://math.stackexchange.com/questions/455296", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "18", "answer_count": 6, "answer_id": 4 }