Proof of parallel lines The quadrilateral ABCD is inscribed in circle W. F is the intersection point of AC and BD. BA and CD meet at E. Let the projection of F on AB and CD be G and H, respectively. Let M and N be the midpoints of BC and EF, respectively. If the circumcircle of triangle MNH only meets segment CF at Q, and the circumcircle of triangle MNG only meets segment BF at P, prove that PQ is parallel to BC. I do not know where to begin.
Possibly the last steps of a proof This is no full proof, just some observations which might get you started, but which just as well might be leading in the completely wrong direction. You could start from the end, i.e. with the last step of your proof, and work backwards. $P$ is the midpoint of $FB$ and $Q$ is the midpoint of $FC$. Therefore the triangle $FBC$ is similar to $FPQ$, and since they share the angle at $F$ and their edges at $F$ lie along common rays, the edges opposite $F$ have to be parallel. So your next question is: why are these the midpoints? You can observe that $NP$ is parallel to $AB=AE$, and $NQ$ is parallel to $CD=CE$. Since $N$ is the midpoint of $FE$, the $\triangle FNP$ is the result of dilating $\triangle FEB$ by a factor of $\frac12$ with center $F$. Likewise for $\triangle FNQ$ and $\triangle FEC$. So this explains why $P$ and $Q$ are midpoints as observed, but leaves the question as to why these lines are parallel. Bits and pieces I don't have the answer to that question yet. But I have a few other observations which I have not proven either but which might be useful as pieces of this puzzle. *$\measuredangle DBE = \measuredangle ECA = \measuredangle NMG = \measuredangle HMN$. The first equality is due to the cocircularity of $ABCD$, but the others are unexplained so far. *$\measuredangle MGN = \measuredangle NHM$, which implies that the circles $MGN$ and $MHN$ have equal radius, and the triangles formed by these three points each are congruent.
{ "language": "en", "url": "https://math.stackexchange.com/questions/431742", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11", "answer_count": 2, "answer_id": 0 }
Implication with a there-exists quantifier When I negate $ \forall x \in \mathbb R, T(x) \Rightarrow G(x) $ I get $ \exists x \in \mathbb R, T(x) \wedge \neg G(x) $ and NOT $ \exists x \in \mathbb R, T(x) \Rightarrow \neg G(x) $, right? What would it mean if I said $ \exists x \in \mathbb R, T(x) \Rightarrow \neg G(x) $? I know in symbolic logic a statement like $ \forall x \in \mathbb R, T(x) \Rightarrow G(x) $ means every $T$ is a $G$, but what claim am I making between $T$ and $G$ with $ \exists x \in \mathbb R, T(x) \Rightarrow G(x) $, in simple everyday English if you can? Thanks.
You're correct that the negation of $\forall x (T(x) \rightarrow G(x))$ is $\exists x (T(x) \wedge \neg G(x))$. The short answer is that $\exists x (\varphi(x) \rightarrow \psi(x))$ doesn't really have a good English translation. You could try turning this into a disjunction, so that $\exists x (\varphi(x) \rightarrow \psi(x))$ becomes $\exists x (\neg \varphi(x) \vee \psi(x))$. But this is equivalent to $\exists x \neg\varphi(x) \vee \exists x \psi(x)$, which just says "There either exists something which is not $\varphi$, or there exists something which is $\psi$." That's the best you're going to get though. $\exists x (\varphi(x) \rightarrow \psi(x))$ is just something that doesn't have a good translation because it's a rather weak statement. Contrast this with the dual problem $\forall x (\varphi(x) \wedge \psi(x))$, which is a rather strong statement, saying "Everything is both a $\varphi$ and a $\psi$."
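If you want to convince yourself mechanically, both equivalences can be brute-forced over a small finite domain (a throwaway check of my own, not part of the answer; on a 3-element domain there are $8\times 8=64$ interpretations of $T$ and $G$):

```python
from itertools import product

def check_equivalences(domain_size=3):
    """Brute-force both equivalences over every interpretation of T and G
    on a small finite domain."""
    domain = range(domain_size)
    count = 0
    for T_vals in product([False, True], repeat=domain_size):
        for G_vals in product([False, True], repeat=domain_size):
            T = lambda x: T_vals[x]
            G = lambda x: G_vals[x]
            # not forall x (T(x) -> G(x))   <=>   exists x (T(x) and not G(x))
            assert (not all((not T(x)) or G(x) for x in domain)) == \
                   any(T(x) and not G(x) for x in domain)
            # exists x (T(x) -> G(x))  <=>  (exists x not T(x)) or (exists x G(x))
            assert any((not T(x)) or G(x) for x in domain) == \
                   (any(not T(x) for x in domain) or any(G(x) for x in domain))
            count += 1
    return count

print(check_equivalences())   # -> 64
```

Every one of the 64 interpretations satisfies both equivalences, which is exactly what the propositional rewriting above predicts.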
{ "language": "en", "url": "https://math.stackexchange.com/questions/431842", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 0 }
"uniquely written" definition I'm having troubles with this definition: My problem is with the uniquely part, for example the zero element: $0=0+0$, but $0=0+0+0$ or $0=0+0+0+0+0+0$. Another example, if $m \in \sum_{i=1}^{10} G_i$ and $m=g_1+g_2$, with $g_1\in G_1$ and $g_2\in G_2$, we have: $m=g_1+g_2$ or $m=g_1+g_2+0+0$. It seems they can't be unique! I really need help. Thanks a lot.
Well, notice what the definition says: for each $m \in M$, you write $m= \sum\limits_{\lambda \in \Lambda} g_{\lambda}$ where the sum runs over all $\lambda$ at once, with $g_\lambda \in G_\lambda$ and all but finitely many $g_\lambda = 0$. Uniqueness means the whole family $(g_\lambda)_{\lambda\in\Lambda}$ is unique. So for $0$, the only representation is the family in which every $g_\lambda$ is $0$: writing $0=0+0$ or $0=0+0+0$ does not produce different families, just different abbreviations of the same one. Likewise $m=g_1+g_2$ and $m=g_1+g_2+0+0$ are the same family $(g_1, g_2, 0, \dots, 0)$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/431931", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Proving the normed linear space, $V, ||a-b||$ is a metric space (Symmetry) The following theorem is given in Metric Spaces by O'Searcoid Theorem: Suppose $V$ is a normed linear space. Then the function $d$ defined on $V \times V$ by $(a,b) \to ||a-b||$ is a metric on $V$ Three conditions of a metric are fairly straight-forward. By the definitions of a norm, I know that $||x|| \ge 0$ and only holds with equality if $x=0$. Thus $||a-b||$ is non-negative and zero if and only if $a=b$. The triangle inequality of a normed linear space requires: $||x+y|| \le ||x|| + ||y||$. Let $x = a - b$ and $y = b - c$. Then $||a - c|| \le || a - b || + || b - c||$ satisfying the triangle inequality for a metric space. What I am having trouble figuring out is symmetry. The definition of a linear space does not impose any condition of a symmetry. I know from the definition of a linear space that given two members of $V$, $u$ and $v$ they must be commutative, however, I do not see how that could extend here. Thus what I would like to request help with is demonstrating $||a - b|| = ||b - a||$.
$$\|a-b\|=\|(-1)(b-a)\|=|-1|\cdot\|b-a\|=\|b-a\|$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/431999", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 0 }
How can I calculate a $4\times 4$ rotation matrix to match a 4d direction vector? I have two 4d vectors, and need to calculate a $4\times 4$ rotation matrix to point from one to the other. Edit: I'm getting an idea of how to do it conceptually: find the plane in which the vectors lie, calculate the angle between the vectors using the dot product, then construct the rotation matrix based on the two. The trouble is I don't know how to mechanically do the first or last of those three steps. I'm trying to program objects in 4-space, so an ideal solution would be computationally efficient too, but that is secondary.
Consider the plane $P\subset \mathbf{R}^4$ containing the two vectors given to you. Calculate the inner product and get the angle between them. Call the angle $x$. Now there is a 2-d rotation $A$ by angle $x$ inside $P$. And consider the identity $I$ on the orthogonal complement of $P$ in $\mathbf{R}^4$. Now $A\oplus I$ is the required 4d matrix.
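A sketch of this construction in code (my own, with an assumed function name `rotation_from_to`; it assumes the two vectors are not parallel, since otherwise the plane $P$ is not unique): build an orthonormal basis $e_1, e_2$ of $P$, get the angle $\theta$, and assemble $R = I + (\cos\theta-1)(e_1e_1^\top + e_2e_2^\top) + \sin\theta\,(e_2e_1^\top - e_1e_2^\top)$, which rotates by $\theta$ inside $P$ and is the identity on $P^\perp$:

```python
import math

def rotation_from_to(a, b):
    """Return the 4x4 matrix rotating direction a onto direction b,
    acting as the identity on the orthogonal complement of span{a, b}.
    Assumes a and b are not parallel."""
    dot = lambda u, v: sum(ui * vi for ui, vi in zip(u, v))
    norm = lambda u: math.sqrt(dot(u, u))
    e1 = [ai / norm(a) for ai in a]                 # first basis vector of P
    p = dot(b, e1)
    w = [bi - p * e1i for bi, e1i in zip(b, e1)]    # component of b orthogonal to e1
    e2 = [wi / norm(w) for wi in w]                 # second basis vector of P
    theta = math.atan2(dot(b, e2), dot(b, e1))      # angle from a to b inside P
    c, s = math.cos(theta), math.sin(theta)
    # R = I + (cos t - 1)(e1 e1^T + e2 e2^T) + sin t (e2 e1^T - e1 e2^T)
    return [[(1.0 if i == j else 0.0)
             + (c - 1) * (e1[i] * e1[j] + e2[i] * e2[j])
             + s * (e2[i] * e1[j] - e1[i] * e2[j])
             for j in range(4)] for i in range(4)]
```

Applying $R$ to the unit vector along `a` yields the unit vector along `b`, and $R$ is orthogonal with determinant $+1$, so this is computationally cheap (no eigen-decomposition needed).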
{ "language": "en", "url": "https://math.stackexchange.com/questions/432057", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 6, "answer_id": 4 }
Math Parlor Trick A magician asks a person in the audience to think of a number $\overline {abc}$. He then asks them to sum up $\overline{acb}, \overline{bac}, \overline{bca}, \overline{cab}, \overline{cba}$ and reveal the result. Suppose it is $3194$. What was the original number? The obvious approach was modular arithmetic. $(100a + 10c + b) + (100b + 10a + c) + (100b + 10c + a) + (100c + 10a + b) + (100c + 10b + a) = 3194$ $122a + 212b + 221c = 3194$ Since $122, 212, 221 \equiv 5 \pmod 9$ and $3194 \equiv 8 \pmod 9$, we get $5(a + b + c) \equiv 8 \pmod 9$. So $a + b + c = 7$ or $16$ or $25$. Trial and error produces the result $358$. Any other, more elegant method?
Let $S$ be the sum. Grouping the units, tens and hundreds contributions of the five permutations gives the exact decomposition $$S=(2a+2b+c)+10\,(2a+b+2c)+100\,(a+2b+2c).$$ Now just solve this for $a$, $b$ and $c$; note that the three groups can carry, so $2a+2b+c$ need not equal the last digit of $S$ itself, only be congruent to it $\bmod\ 10$. The original number is then $100a+10b+c$. Memorize the decomposition and do the arithmetic in your head.
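Whatever formula you use on stage, the trick is easy to confirm by brute force (my own check, not part of the answer), and it also shows the announced sum determines the number uniquely here:

```python
def recover(S):
    """Return all three-digit numbers abc whose five permutations
    acb, bac, bca, cab, cba sum to S."""
    hits = []
    for n in range(100, 1000):
        a, b, c = n // 100, (n // 10) % 10, n % 10
        perms = [100*a + 10*c + b, 100*b + 10*a + c, 100*b + 10*c + a,
                 100*c + 10*a + b, 100*c + 10*b + a]
        if sum(perms) == S:
            hits.append(n)
    return hits

print(recover(3194))   # -> [358]
```

Only $358$ works, consistent with the trial-and-error answer in the question.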
{ "language": "en", "url": "https://math.stackexchange.com/questions/432118", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Two questions on topology and continuous functions I have two questions: 1.) I have been thinking for a while about the fact that in general the union of closed sets will not be closed, but I could not find a counterexample; does anybody have one available? 2.) The other one is that I thought one could possibly say that a function $f$ is continuous iff we have $f(\overline{M})=\overline{f(M)}$ (in the second part this should mean the closure of $f(M)$). Is this true?
(1) Take the closed sets $$\left\{\;C_n:=\left[0\,,\,1-\frac1n\right]\;\right\}_{n\in\Bbb N}\implies \bigcup_{n\in\Bbb N}C_n=[0,1)$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/432244", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 1 }
Infimum and supremum of the empty set Let $E$ be an empty set. Then, $\sup(E) = -\infty$ and $\inf(E)=+\infty$. I thought it is only meaningful to talk about $\inf(E)$ and $\sup(E)$ if $E$ is non-empty and bounded? Thank you.
There might be different preferences as to how one should define this. I am not sure that I understand exactly what you are asking, but maybe the following can be helpful. If we consider subsets of the real numbers, then it is customary to define the infimum of the empty set as being $\infty$. This makes sense since the infimum is the greatest lower bound and every real number is a lower bound. So $\infty$ could be thought of as the greatest such. The supremum of the empty set is $-\infty$. Again this makes sense since the supremum is the least upper bound. Any real number is an upper bound, so $-\infty$ would be the least. Note that when talking about supremum and infimum, one has to start with a partially ordered set $(P, \leq)$. That $(P, \leq)$ is partially ordered means that $\leq$ is reflexive, antisymmetric, and transitive. So let $P = \mathbb{R} \cup \{-\infty, \infty\}$. Define $\leq$ the "obvious way", so that $a\leq \infty$ for all $a\in \mathbb{R}$ and $-\infty \leq a$ for all $a\in \mathbb{R}$. With this definition you have a partial order, and in this setup the infimum and the supremum are as mentioned above. So you don't need non-emptiness.
{ "language": "en", "url": "https://math.stackexchange.com/questions/432295", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "41", "answer_count": 6, "answer_id": 3 }
Bott periodicity and homotopy groups of spheres I studied the Bott periodicity theorem for the unitary group $U(n)$ and the orthogonal group $O(n)$ using Milnor's book "Morse Theory". Is there a method, using this theorem, to calculate $\pi_{k}(S^{n})$? (For example $U(1) \simeq S^1$, so $\pi_1(S^1)\simeq \mathbb{Z}$.)
In general, no. However there is a strong connection between Bott Periodicity and the stable homotopy groups of spheres. It turns out that $\pi_{n+k}(S^{n})$ is independent of $n$ for all sufficiently large $n$ (specifically $n \geq k+2$). We call the groups $\pi_{k}^{S} = \lim \pi_{n+k}(S^{n})$ the stable homotopy groups of spheres. There is a homomorphism, called the stable $J$-homomorphism $J: \pi_{k}(SO) \rightarrow \pi_{k}^{S}$. The Adams conjecture says that $\pi_{k}^{S}$ is a direct summand of the image of $J$ with the kernel of another computable homomorphism. By Bott periodicity we know the homotopy groups $\pi_{k}(SO)$ and the definition of $J$, so the Bott Periodicity theorem is an important step in computations of stable homotopy groups of spheres (a task which is by no means complete).
{ "language": "en", "url": "https://math.stackexchange.com/questions/432387", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Integral $\int_0^\frac{\pi}{2} \sin^7x \cos^5x\, dx $ I'm asked to find this definite integral but unfortunately I'm floundering; can someone please point me in the right direction? $$\int_0^\frac{\pi}{2} \sin^7x \cos^5x\, dx $$ step 1: break up sin and cos so that I can use substitution $$\int_0^\frac{\pi}{2} \sin^7(x) \cos^4(x) \cos(x) \, dx$$ step 2: apply a trig identity $$\int_0^\frac{\pi}{2} \sin^7x\ (1-\sin^2 x)^2 \cos x \, dx$$ step 3: use $u$-substitution $$ \text{let}\,\,\, u= \sin(x),\ du=\cos(x)\,dx $$ step 4: apply the substitution $$\int_0^\frac{\pi}{2} u^7 (1-u^2)^2 du $$ step 5: expand and distribute and change limits of integration $$\int_0^1 u^7-2u^9+u^{11}\ du $$ step 6: integrate $$(1^7-2(1)^9+1^{11})-0$$ I would just end up with $1$, however the book answer is $$\frac {1}{120}$$ How can I be so far off?
$$ \begin{aligned} I=\int_0^{\frac{\pi}{2}} \sin ^7 x \cos ^5 x\,dx & \stackrel{x\mapsto\frac{\pi}{2}-x}{=} \int_0^{\frac{\pi}{2}} \cos ^7 x \sin ^5 x\,d x \\ \text{so}\quad I&=\frac{1}{2} \int_0^{\frac{\pi}{2}} \sin ^5 x\cos ^5 x\left(\sin ^2 x+\cos ^2 x\right) d x \\ &=\frac{1}{64} \int_0^{\frac{\pi}{2}} \sin ^5(2 x)\,d x \\ &=-\frac{1}{128} \int_0^{\frac{\pi}{2}}\left(1-\cos ^2 2 x\right)^2 d(\cos 2 x) \\ &=-\frac{1}{128}\left[\cos 2 x-\frac{2 \cos ^3 2 x}{3}+\frac{\cos ^5 2 x}{5}\right]_0^{\frac{\pi}{2}} \\ &=-\frac{1}{128}\left(-\frac{8}{15}-\frac{8}{15}\right)=\frac{1}{120} \end{aligned} $$
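For comparison, the $u$-substitution route from the question also gives $\frac1{120}$ once step 6 actually integrates term by term (the slip was plugging the endpoints into the integrand instead of antidifferentiating first). Exact arithmetic with Python's `fractions`:

```python
from fractions import Fraction

# Integrate u^7 - 2u^9 + u^11 from 0 to 1, term by term:
# u^8/8 - 2u^10/10 + u^12/12 evaluated at 1
value = Fraction(1, 8) - Fraction(2, 10) + Fraction(1, 12)
print(value)   # -> 1/120
```

So both routes agree with the book's answer.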
{ "language": "en", "url": "https://math.stackexchange.com/questions/432446", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 6, "answer_id": 5 }
Finding the rotation angles of a plane I have the following question: First I have a global coordinate system x y and z. Second I have a plane defined somewhere in this coordinate system, and I also have the normal vector of this plane. Assuming that the plane originally was lying in the xy plane and the normal vector of the plane was pointing in the same direction as global Z: How can I calculate the three rotation angles around the x y and z axis by which the plane was rotated to its current position? I have read this question but I do not understand the solution, since math is not directly my top skill. If anyone knows a good TCL library that can be used to program this task, please let me know. A numerical example for the normal vector $(8, -3, 2)$ would probably greatly increase my chance of understanding a solution. Thanks! What I have tried: I projected the normal vector on each of the global planes and calculated the angle between the x y z axes. If I use these angles in a rotation matrix I have (and I'm sure it is correct) and try to rotate the original vector by these three angles, I do not get the result I was hoping for.
First off, there is no simple rule (at least as far as I know) to represent a rotation as a cascade of single axis rotations. One mechanism is to compute a rotation matrix and then compute the single axis rotation angles. I should also mention that there is not a unique solution to the problem you are posing, since there are different cascades of rotations that will yield the same final position. Let's first build two reference frames, $A$ and $B$. $A$ will denote the standard Euclidean basis. $B$ will denote the reference frame which results from applying the desired rotation. I will use subscript notation when referring to a vector in a specific reference frame, so $v_A$ is a vector whose coordinates are defined with respect to the standard Euclidean basis, and $v_B$ is defined w.r.t. $B$. All we know so far about $B$ is that the vector $(0,0,1)_B = \frac{1}{\sqrt{77}}(8, -3, 2)_A$. We have one degree of freedom to select how $(0,1,0)_B$ is defined in reference frame $A$. We only require that it be orthogonal to $\frac{1}{\sqrt{77}}(8, -3, 2)_A$. So we would like to find a unit vector such that $$8x-3y+2z=0$$ One solution is $\frac{1}{\sqrt{6}}(1,2,-1)_A$, which you can verify. This forces our hand for how the vector $(1,0,0)_B$ is represented w.r.t. $A$: taking the cross product of the other two basis vectors, so that the basis stays right-handed and the matrix is a proper rotation, $$(1,0,0)_B=\frac{1}{\sqrt{462}}(1,-10,-19)_A$$ So the rotation matrix from $B$ to $A$ is $$R_B^A=\left( \begin{array}{ccc} \frac{1}{\sqrt{462}} & \frac{1}{\sqrt{6}} & \frac{8}{\sqrt{77}} \\ \frac{-10}{\sqrt{462}} & \frac{2}{\sqrt{6}} & \frac{-3}{\sqrt{77}} \\ \frac{-19}{\sqrt{462}} & \frac{-1}{\sqrt{6}} & \frac{2}{\sqrt{77}} \end{array} \right)$$ Given any vector with coordinates defined in the Euclidean basis (such as a point in the $xy$-plane), multiplying by the inverse of this matrix will yield the representation in our new reference frame. If you want to search for the three axis rotations, I refer you to Wolfram MathWorld's page on Euler Angles. Hope this helps!
Let me know if anything is unclear or if you think you see any mistakes.
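A numerical sanity check of the matrix (my own verification, not part of the answer). Note the first column is taken as $e_2\times e_3=\frac{1}{\sqrt{462}}(1,-10,-19)$, which makes $\det R=+1$, a proper rotation; flipping its sign gives $\det R=-1$, i.e. a reflection rather than a rotation:

```python
import math

s462, s6, s77 = math.sqrt(462), math.sqrt(6), math.sqrt(77)
R = [[  1/s462, 1/s6,  8/s77],
     [-10/s462, 2/s6, -3/s77],
     [-19/s462, -1/s6, 2/s77]]

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(3)) for i in range(3)]

# columns are orthonormal
for i in range(3):
    for j in range(3):
        dot = sum(R[k][i] * R[k][j] for k in range(3))
        assert abs(dot - (1.0 if i == j else 0.0)) < 1e-12

# det R = +1, and R maps (0,0,1) to the unit normal (8,-3,2)/sqrt(77)
det = (R[0][0]*(R[1][1]*R[2][2] - R[1][2]*R[2][1])
     - R[0][1]*(R[1][0]*R[2][2] - R[1][2]*R[2][0])
     + R[0][2]*(R[1][0]*R[2][1] - R[1][1]*R[2][0]))
assert abs(det - 1.0) < 1e-12
assert all(abs(u - v) < 1e-12
           for u, v in zip(matvec(R, [0, 0, 1]), [8/s77, -3/s77, 2/s77]))
```

With the matrix verified, extracting Euler angles from it is the standard exercise described on the MathWorld page the answer links to.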
{ "language": "en", "url": "https://math.stackexchange.com/questions/432502", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Triangle integral with vertices Evaluate $$I=\iint\limits_R \sin \left(\frac{x+y}{2}\right)\cos\left(\frac{x-y}{2}\right)\, dA,$$ where $R$ is the triangle with vertices $(0,0),(2,0)$ and $(1,1)$. Hint: use $u=\dfrac{x+y}{2},v=\dfrac{x-y}{2}$. Can anyone help me with this question? I am very lost. I know you can make the integral $\sin(u)\cos(v)$, but then what to do?
$\sin\left(\frac{x+y}{2}\right)\cos\left(\frac{x-y}{2}\right)=\frac{1}{2}\left(\sin x+\sin y\right)$, so you can also integrate directly, without the substitution. The line joining $(0,0)$ and $(2,0)$ has equation $y=0$ with $0\leq x\leq 2$; the second line is $y=-x+2$; the third line is $y=x$. The integral becomes: $$\begin{aligned} I&=\frac{1}{2}\left(\int\limits_{0}^{1}\int\limits_{0}^{x}+\int\limits_{1}^{2}\int\limits_{0}^{2-x}\right)(\sin x+\sin y)\,dy\,dx \\ &=\frac{1}{2}\left(\int\limits_0^1\left[y\sin x-\cos y\right]_0^x dx+\int\limits_1^2\left[y\sin x-\cos y\right]_0^{2-x}dx\right) \\ &=\frac{1}{2}\left(\left[x-x\cos x\right]_0^1+\left[(x-2)\cos x-\sin x+x+\sin(2-x)\right]_1^2\right) \\ &=\frac{1}{2}\bigl((1-\cos 1)+(1-\sin 2+\cos 1)\bigr)=1-\frac{\sin 2}{2}. \end{aligned}$$
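As a sanity check (my own, not part of the answer): the hinted substitution $u=\frac{x+y}{2}$, $v=\frac{x-y}{2}$ has Jacobian $2$ and maps the triangle to $\{0\le v\le u\le 1\}$, giving $I=2\int_0^1\int_0^u\sin u\cos v\,dv\,du=1-\frac{\sin 2}{2}\approx 0.5454$, and a brute-force midpoint rule over the original triangle agrees:

```python
import math

def f(x, y):
    return math.sin((x + y) / 2) * math.cos((x - y) / 2)

# midpoint rule over the triangle (0,0), (2,0), (1,1):
# the region is 0 <= y <= min(x, 2 - x)
n = 400                  # grid cells per unit length
h = 1.0 / n
total = 0.0
for i in range(2 * n):
    x = (i + 0.5) * h
    for j in range(n):
        y = (j + 0.5) * h
        if y <= x and y <= 2 - x:
            total += f(x, y) * h * h

exact = 1 - math.sin(2) / 2   # from the u,v substitution, Jacobian 2
assert abs(total - exact) < 2e-2
```

The small discrepancy is just the midpoint rule's boundary error on the triangle's edges.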
{ "language": "en", "url": "https://math.stackexchange.com/questions/432573", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 2 }
Favourite applications of the Nakayama Lemma Inspired by a recent question on the nilradical of an absolutely flat ring, what are some of your favourite applications of the Nakayama Lemma? It would be good if you outlined a proof for the result too. I am also interested to see the Nakayama Lemma prove some facts in Algebraic Geometry if possible. Here are some facts which one can use the Nakayama lemma to prove. * *A local ring that is absolutely flat is a field - proof given here. *Every set of $n$ - generators for a free module of rank $n$ is a basis - proof given here. *For any integral domain $R$ (that is not a field) with fraction field $F$, it is never the case that $F$ is a f.g. $R$ - module. Sketch proof: if $F$ is f.g. as a $R$ - module then certainly it is f.g. as a $R_{\mathfrak{m}}$ module for any maximal ideal $\mathfrak{m}$. Then $\mathfrak{m}_{\mathfrak{m}}F = F$ and so Nakayama's Lemma implies $F = 0$ which is ridiculous.
You might be interested in this. It contains some applications of Nakayama's Lemma in Commutative Algebra and Algebraic Geometry.
{ "language": "en", "url": "https://math.stackexchange.com/questions/432659", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "21", "answer_count": 5, "answer_id": 4 }
If $\lim_{x \rightarrow \infty} f(x)$ is finite, is it true that $ \lim_{x \rightarrow \infty} f'(x) = 0$? Does finite $\lim_{x \rightarrow \infty} f(x)$ imply that $\lim_{x \rightarrow \infty} f'(x) = 0$? If not, could you provide a counterexample? It's obvious for constant function. But what about others?
Simple counterexample: $f(x) = \frac{\sin x^2}{x}$. UPDATE: It may seem that such an answer is an inexplicable lucky guess, but it is not. I strongly suggest looking at Brian M. Scott's answer to see why. His answer reveals exactly the reasoning that should first happen in one's head. I started thinking along the same lines, and then I just replaced those triangular bumps with $\sin x^2$, which oscillates more and more quickly as $x$ goes to infinity.
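To see both claims numerically (my own check): $f(x)=\frac{\sin x^2}{x}\to 0$, while $f'(x)=2\cos x^2-\frac{\sin x^2}{x^2}$ keeps swinging between roughly $-2$ and $2$:

```python
import math

f = lambda x: math.sin(x * x) / x
fp = lambda x: 2 * math.cos(x * x) - math.sin(x * x) / (x * x)  # f'(x)

x_peak = math.sqrt(2 * math.pi * 10**6)              # x^2 = 2*pi*k, so cos(x^2) = 1
x_trough = math.sqrt(2 * math.pi * 10**6 + math.pi)  # x^2 = 2*pi*k + pi, cos = -1

assert abs(f(x_peak)) < 1e-3    # f is already tiny out here...
assert fp(x_peak) > 1.9         # ...yet f' is near +2 at this point
assert fp(x_trough) < -1.9      # ...and near -2 just a little further on
```

So $f$ flattens out in value while its slope never settles down, which is exactly why the implication fails.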
{ "language": "en", "url": "https://math.stackexchange.com/questions/432736", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10", "answer_count": 5, "answer_id": 2 }
Determining whether a coin is fair I have a dataset where an ostensibly 50% process has been tested 118 times and has come up positive 84 times. My actual question: * *IF a process has a 50% chance of testing positive and *IF you then run it 118 times *What is the probability that you get AT LEAST 84 successes? My gut feeling is that, the more tests are run, the closer to a 50% success rate I should get and so something might be wrong with the process (That is, it might not truly be 50%) but at the same time, it looks like it's running correctly, so I want to know what the chances are that it's actually correct and I've just had a long string of successes.
Of course, 118 is in the "small numbers regime", where one can easily (use a computer to) calculate the probability exactly. By WolframAlpha, the probability that you get at least 84 successes $\;\;=\;\; \frac{\displaystyle\sum_{s=84}^{118}\:\binom{118}s}{2^{118}}$ $=\;\; \frac{392493659183064677180203372911}{166153499473114484112975882535043072} \;\;\approx\;\; 0.00000236224 \;\;\;\; $.
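The same exact computation takes a few lines of Python (using `math.comb`, available since Python 3.8):

```python
from math import comb

# P(at least 84 successes in 118 fair trials), computed exactly
num = sum(comb(118, s) for s in range(84, 119))
p = num / 2**118
print(p)   # about 2.36e-06
```

So the chance of a genuinely fair 50% process producing 84 or more positives in 118 runs is around 2 in a million, which is strong evidence the process is not really 50%.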
{ "language": "en", "url": "https://math.stackexchange.com/questions/432807", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Norms in extended fields let's have some notation to start with: $K$ is a number field and $L$ is an extension of $K$. Let $\mathfrak{p}$ be a prime ideal in $K$ and let its norm with respect to $K$ be denoted as $N_{\mathbb{Q}}^K(\mathfrak{p})$. My question is this: If $|L:K|=n$, what is $N_{\mathbb{Q}}^L(\mathfrak{p})$? I would like to think that $N_{\mathbb{Q}}^L(\mathfrak{p})=\left(N_{\mathbb{Q}}^K(\mathfrak{p})\right)^n$, i.e. if $L$ is a quadratic extension of $K$, then $N_{\mathbb{Q}}^L(\mathfrak{p})=\left(N_{\mathbb{Q}}^K(\mathfrak{p})\right)^2$. Is this right? I feel that the proof would involve using the definition that $N_K^L(x)$ is the determinant of the multiplication-by-$x$ matrix (here, $K$ and $L$ are arbitrary). Thanks!
The question is simple if one notices the following equality: Let $p\in \mathfrak p\cap \mathbb Z$ be a prime integer in $\mathfrak p$, and let $f$ be the inertia degree of $\mathfrak p$. Then $N^K_{\mathbb Q}\mathfrak p=p^f$. Now the result follows from the multiplicativity of the inertia degree $f$. P.S. The above equality can be found in any standard algebraic number theory text.
{ "language": "en", "url": "https://math.stackexchange.com/questions/432874", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
How do we take the second-order total differential? This is the total differential $$df=dx\frac {\partial f}{\partial x}+dy\frac {\partial f}{\partial y}.$$ How do we take higher-order total differentials, i.e. what is $d^2 f$? Suppose I have $f(x,y)$ and I want the second-order total differential $d^2f$.
I will assume that you are referring to the Fréchet derivative. If $U\subseteq\mathbb{R}^n$ is an open set and we have functions $\omega_{j_1,\dots,j_p}:U\to\mathbb{R}$, then $$ D\left(\sum_{j_1,\dots,j_p} \omega_{j_1,\dots,j_p} dx_{j_1}\otimes\cdots\otimes dx_{j_p}\right) = \sum_{j_1,\dots,j_p}\sum_{j=1}^n \frac{\partial\omega_{j_1,\dots,j_p}}{\partial x_{j}} dx_j\otimes dx_{j_1}\otimes\cdots\otimes dx_{j_p}. $$ Here $dx_{i}$ is the projection onto the $i$th coordinate, and if $\alpha,\beta$ are multilinear forms then $\alpha\otimes\beta$ is the multilinear form defined by $(\alpha\otimes\beta)(x,y)=\alpha(x)\beta(y)$. For example, let $f(x,y)=x^3+x^2 y^2+y^3$. Then \begin{align} Df&=(3x^2+2xy^2)dx+(2x^2y+3y^2)dy; \\ D^2f&=(6x+2y^2)dx\otimes dx+4xy(dx\otimes dy+dy\otimes dx)+(2x^2+6y)dy\otimes dy; \\ D^3f&=6dx\otimes dx\otimes dx+4y(dx\otimes dx\otimes dy+dx\otimes dy\otimes dx+dy\otimes dx\otimes dx)\\ &\qquad+4x(dx\otimes dy\otimes dy+dy\otimes dx\otimes dy+dy\otimes dy\otimes dx) \\ &\qquad+6dy\otimes dy\otimes dy. \end{align} Since $D^p f(x)$ is always a symmetric multilinear map if $f$ is of class $C^p$, you might want to simplify the above by using the symmetric product (of tensors).
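A quick finite-difference check of the $D^2f$ coefficients in the example above (my own verification sketch, evaluated at the point $(1,2)$):

```python
# f(x, y) = x^3 + x^2 y^2 + y^3, as in the worked example
f = lambda x, y: x**3 + x**2 * y**2 + y**3
h = 1e-4
x0, y0 = 1.0, 2.0

# central second differences
fxx = (f(x0 + h, y0) - 2 * f(x0, y0) + f(x0 - h, y0)) / h**2
fyy = (f(x0, y0 + h) - 2 * f(x0, y0) + f(x0, y0 - h)) / h**2
fxy = (f(x0 + h, y0 + h) - f(x0 + h, y0 - h)
       - f(x0 - h, y0 + h) + f(x0 - h, y0 - h)) / (4 * h**2)

# exact values from D^2 f: 6x + 2y^2 = 14, 4xy = 8, 2x^2 + 6y = 14
assert abs(fxx - 14) < 1e-4
assert abs(fxy - 8) < 1e-4
assert abs(fyy - 14) < 1e-4
```

The three numbers are exactly the coefficients of $dx\otimes dx$, the mixed terms, and $dy\otimes dy$ in $D^2f$, which is the symmetry mentioned at the end.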
{ "language": "en", "url": "https://math.stackexchange.com/questions/432955", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 3, "answer_id": 1 }
Odds of being correct X times in a row Is there a simple way to know what the chances are of being correct for a given number of opportunities? To keep this simple: I am either right or wrong with a 50/50 chance. What are the odds that I'll be correct 7 times in a row or 20 or simply X times? ... and can the answer be put in simple terms ;)
You guys are making it too complicated. It goes like this: double the previous odds each time. 1st time: 1 in 2 (win a single coin toss); 2nd time: 1 in 4 (win 2 consecutive coin tosses); 3rd time: 1 in 8; 4th time: 1 in 16; 5th time: 1 in 32; 6th time: 1 in 64; and in general, the $n$th time is 1 in $2^n$ (the number of attempts goes in the exponent). If the odds aren't 50/50, or if there are multiple independent events, multiply the probabilities together: 1 in 4 is the fraction $1/4$, 2 in 17 is $2/17$, and so on. An example would be: "What are the odds of winning both a 1 in 3 bet and a 1 in 6 bet?" $\frac13 \times \frac16 = \frac1{18}$, or just multiply the 3 and 6 to make it easy. If you want the odds of winning one and losing another, use the complement for the loss: losing a 1 in 4 bet happens with probability $3/4$. "What are the odds of winning a 1 in 4 bet and a 1 in 6 bet, then losing a 1 in 4 bet?" $\frac14 \times \frac16 \times \frac34 = \frac1{32}$.
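In code, with exact fractions (my own illustration; note the probability of losing a 1-in-4 bet is the complement $3/4$):

```python
from fractions import Fraction

def streak(x):
    """Probability of x consecutive wins at 50/50 odds."""
    return Fraction(1, 2) ** x

assert streak(1) == Fraction(1, 2)
assert streak(7) == Fraction(1, 128)
assert streak(20) == Fraction(1, 2**20)    # about 1 in a million

# independent events multiply; losing a 1-in-4 bet has probability 3/4
p = Fraction(1, 4) * Fraction(1, 6) * Fraction(3, 4)
assert p == Fraction(1, 32)
```

Using `Fraction` keeps everything exact, so you see "1 in $2^X$" directly instead of a rounded decimal.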
{ "language": "en", "url": "https://math.stackexchange.com/questions/433003", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 4, "answer_id": 3 }
How to determine the arc length of ellipse? I want to determine the length of an arc from the ellipse in the picture below: How can I determine the length of $d$?
Giving a Mathematica calculation. Same result as coffeemath (+1) In[1]:= ArcTan[3.05*Tan[5Pi/18]/2.23] Out[1]= 1.02051 In[2]:= x=3.05 Cos[t]; In[3]:= y=2.23 Sin[t]; In[4]:= NIntegrate[Sqrt[D[x,t]^2+D[y,t]^2],{t,0,1.02051}] Out[4]= 2.53143
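For anyone without Mathematica, here is the same computation in plain Python (my own port, using Simpson's rule in place of `NIntegrate`):

```python
import math

a, b = 3.05, 2.23
t1 = math.atan(a * math.tan(5 * math.pi / 18) / b)   # about 1.02051

def speed(t):
    # |d/dt (a cos t, b sin t)| = sqrt(a^2 sin^2 t + b^2 cos^2 t)
    return math.sqrt((a * math.sin(t))**2 + (b * math.cos(t))**2)

# Simpson's rule on [0, t1] with n (even) subintervals
n = 1000
h = t1 / n
s = speed(0) + speed(t1)
for i in range(1, n):
    s += (4 if i % 2 else 2) * speed(i * h)
arc = s * h / 3
print(arc)   # close to 2.53143
```

Both the parameter limit and the arc length match the Mathematica session above.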
{ "language": "en", "url": "https://math.stackexchange.com/questions/433094", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "41", "answer_count": 4, "answer_id": 1 }
divergence of $\int_{2}^{\infty}\frac{dx}{x^{2}-x-2}$ I ran into this question and I have been sitting on it for a long time. Why does this integral diverge: $$\int_{2}^{\infty}\frac{dx}{x^{2}-x-2}$$ Thank you very much in advance. Yaron.
$$\int_{2}^{\infty}\frac{dx}{x^{2}-x-2} = \int_{2}^{\infty}\frac{dx}{(x - 2)(x+1)}$$ Hence at $x = 2$, the integrand is undefined (the denominator "zeroes out" at $x = 2$ and $x = -1$), so $x = 2$ is not in the domain of the integrand. Although we could find the indefinite integral, e.g., using partial fractions, the indefinite integral of the term with denominator $(x - 2)$ would be $$A\ln|x - 2| + C$$ (here $A = \frac13$), which is also undefined when $x = 2.$ Recall, the limits of integration are exactly that: limits. $$\lim_{x\to 2^+} \left(A\ln|x - 2| + C\right) = -\infty$$ and hence, since the limit diverges, so too the integral diverges.
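You can watch the divergence numerically (my own illustration): partial fractions give $\frac{1}{(x-2)(x+1)}=\frac{1/3}{x-2}-\frac{1/3}{x+1}$, so integrating from $2+\varepsilon$ instead of $2$ grows like $\frac13\ln\frac1\varepsilon$ as $\varepsilon\to 0$:

```python
import math

# antiderivative from partial fractions: F(x) = (1/3) ln|(x-2)/(x+1)|
F = lambda x: math.log((x - 2) / (x + 1)) / 3

def tail_integral(eps, upper=10.0):
    """Integral of 1/((x-2)(x+1)) from 2 + eps to upper."""
    return F(upper) - F(2 + eps)

vals = [tail_integral(10.0**-k) for k in (1, 3, 5, 7)]
# each step shrinks eps by 100, adding about (1/3) ln 100 = 1.5 to the value
assert all(v2 > v1 + 1 for v1, v2 in zip(vals, vals[1:]))
```

The value keeps growing without bound as the lower limit approaches $2$, which is precisely the divergence.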
{ "language": "en", "url": "https://math.stackexchange.com/questions/433155", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 0 }
Valid Alternative Proof to an Elementary Number Theory question in congruences? So, I've recently started teaching myself elementary number theory (since it does not require any specific mathematical background and it seems like a good way to keep my brain in shape until my freshman college year) using Elementary Number Theory by Underwood Dudley. I've run across the following question: Show that every prime (except $2$ or $3$) is congruent to $1$ or $3$ (mod $4$) Now I am aware of one method of proof, which is looking at all the residues of $p$ (mod $4$) and eliminating the impossible ones. But before that, I found another possible way of proving it. Since $p$ must be odd: $p$ $\equiv$ $1$ (mod $2$) We can write this as: 2$|$$(p-1)$ But since $p$ is odd we can also say: 2$|$$(p-3)$ If $a|b$ and $c|d$ then $ac|bd$, then it follows that: $4|(p-1)(p-3)$ The $3$ possibilities then are: $1.$ $4|(p-1)(p-3)$ $2.$ $4|(p-1)$ $3.$ $4|(p-3)$ Thus, by the definition of congruence, we have the 3 possibilities: $1.$ $p \equiv 1$ (mod $4$) $2.$ $p \equiv 3$ (mod $4$) $3.$ $4|(p-1)(p-3)$, i.e. $4 \mid p^2-4p+3$, therefore $p^2-4p+3 = 4m$ Then $p^2+3 = 4m +4p$. Set $m+p=z$ Then from $p^2+3 = 4z$ it follows that $p^2 \equiv -3$ (mod $4$) (Is this correct?) Can anyone please tell me if this is a valid proof? Thank you in advance. EDIT: Taken the first possibility into account. I also realize that there are much simpler proofs but I was curious as to whether this approach also works.
I don't think there is anything wrong with your proof, but to put it more simply, the congruence $$ p \equiv 1 \pmod{2} $$ is equivalent to $$ \text{either}\quad p \equiv 1 \pmod{4}, \quad\text{or}\quad p \equiv 3 \pmod{4}. $$ This is simply because if $p \equiv 0, 2 \pmod{4}$, then $p \equiv 0 \pmod{2}$. And then, note that the prime $3$ does fit in: $3 \equiv 3 \pmod{4}$.
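A throwaway exhaustive check of the statement for small primes (mine, not part of the answer):

```python
def is_prime(n):
    """Trial-division primality test, fine for small n."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

odd_primes = [p for p in range(3, 10000) if is_prime(p)]
assert all(p % 4 in (1, 3) for p in odd_primes)
# and both residues actually occur:
assert any(p % 4 == 1 for p in odd_primes)
assert any(p % 4 == 3 for p in odd_primes)
```

Of course this proves nothing by itself, but it is a nice way to sanity-check a claim before attempting a proof.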
{ "language": "en", "url": "https://math.stackexchange.com/questions/433216", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 0 }
An odd integer minus an even integer is odd. Prove or Disprove: An odd integer minus an even integer is odd. I am assuming you would define an odd integer and an even integer. Then you would use quantifiers to show the result is odd or even. I am unsure of how to show this.
Instead of a pure algebraic argument, which I don't dislike, it's also possible to see visually. Any even number can be represented by an array consisting of 2 x n objects, where n represents some number of objects. An odd number will be represented by a "2 by n" array with an item left over. Odd numbers don't have "evenly matched" rows or columns (depending on how you depict the array). Removing or subtracting an even number of items, then, will remove pairs of objects or items in the array. The leftover item (the "+1") will still remain and cannot have a "partner." Thus, an odd minus an even (or plus an even, for that matter) must be an odd result.
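Algebraically the picture above is the one-line identity $(2m+1)-2n = 2(m-n)+1$, and a quick exhaustive check (my own, purely illustrative) confirms it on a range of integers:

```python
odds = range(-99, 100, 2)       # odd integers -99, -97, ..., 99
evens = range(-100, 101, 2)     # even integers -100, -98, ..., 100
# in Python, n % 2 == 1 for every odd n, including negative n
assert all((o - e) % 2 == 1 for o in odds for e in evens)
```

Every odd-minus-even difference in the range is odd, matching the leftover "unpaired item" in the array picture.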
{ "language": "en", "url": "https://math.stackexchange.com/questions/433294", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 5, "answer_id": 1 }
The image of $I$ is an open interval and maps $\mathbb{R}$ diffeomorphically onto $\mathbb{R}$? This is Problem 3 in Guillemin & Pollack's Differential Topology on Page 18. So that means I just started and am struggling with the beginning. So I would be expecting a less involved proof: Let $f: \mathbb{R} \rightarrow \mathbb{R}$ be a local diffeomorphism. Prove that the image of $f$ is an open interval and maps $\mathbb{R}$ diffeomorphically onto this interval. I am rather confused with this question. So just the identity works as $f$, right? * *The derivative of the identity is still the identity; it is non-singular at any point. So it is a local diffeomorphism. *$I$ maps $\mathbb{R} \rightarrow \mathbb{R}$, and the latter is open. *$I$ is smooth and bijective, and its inverse $I$ is also smooth. Hence it maps diffeomorphically. Thanks to @Zev Chonoles's comment. Now I realized what I am asked to prove, though I am still at a loss on how.
If $f$ is a local diffeomorphism then the image must be connected; try to classify the connected subsets of $\mathbb{R}$ into four categories. Since $f$ is an open map, this leaves you with only one option. I do not know if this is the proof the author has in mind.
{ "language": "en", "url": "https://math.stackexchange.com/questions/433357", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Probability Question about Tennis Games! $2^{n}$ players enter a single elimination tennis tournament. You can assume that the players are of equal ability. Find the probability that two particular players meet each other in the tournament. I couldn't make a serious attempt on the question, hope you can excuse me this time.
We will make the usual unreasonable assumptions. We also assume (and this is the way tournaments are usually arranged) that the initial locations of the players on the left of your picture are chosen in advance. We also assume (not so reasonable) that the locations are all equally likely. Work with $n=32$, like in your picture. It is enough to show the pattern. Call the two players A and B. Without loss of generality, we may, by symmetry, assume that A is the player at the top left of your picture. Maybe they meet in the first round. Then player B has to be the second player in your list. The probability of this is $\frac{1}{31}$. Maybe they meet in the second round. For that, B has to be $3$ or $4$ on your list, and they must both win. The probability is $\frac{2}{31}\frac{1}{2^2}$. Maybe they meet in the third round. For that, B must be one of players $5$ to $8$, and each player must win twice. The probability is $\frac{4}{31}\frac{1}{2^4}$. Maybe they meet in the fourth round, probability $\frac{8}{31}\frac{1}{2^6}$. Or maybe they meet in the final, probability $\frac{16}{31}\frac{1}{2^8}$. Add up. For the sake of the generalization, note we have a finite geometric series.
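The round-by-round probabilities above can be summed exactly with rational arithmetic; a quick sketch for the $n=32$ case described in the answer:

```python
from fractions import Fraction

# Probability B is among the 2^k opponents A can meet in round k+1,
# times the probability both win the k matches needed to get there:
# (2^k / 31) * (1/2)^(2k), for rounds k = 0, 1, 2, 3, 4.
p_meet = sum(Fraction(2**k, 31) * Fraction(1, 2) ** (2 * k) for k in range(5))

print(p_meet)  # 1/16
```

Summing the finite geometric series in general gives $1/2^{n-1}$ for $2^n$ players, which is $1/16$ here.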
{ "language": "en", "url": "https://math.stackexchange.com/questions/433430", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 1 }
Regression towards the mean v/s the Gambler's fallacy Suppose you toss a (fair) coin 9 times, and get heads on all of them. Wouldn't the probability of getting a tails increase from 50/50 due to regression towards the mean? I know that that shouldn't happen, as the tosses are independent event. However, it seems to go against the idea of "things evening out".
This is interesting because it shows how tricky the mind can be. I arrived at this web site after reading the book by Kahneman, "Thinking, Fast and Slow". I do not see contradiction between the gambler´s fallacy and regression towards the mean. According to the regression principle, the best prediction of the next measure of a random variable is the mean. This is precisely what is assumed when considering that each toss is an independent event; that is, the mean (0.5 probability) is the best prediction. This applies for the next event’s theoretical probability and there is no need for a next bunch of tosses. The reason we are inclined to think that after a “long” run of repeated outcomes the best prediction is other than the mean value of a random variable has to do with heuristics. According to Abelson's first law of statistics, "Chance is lumpy". I quote Abelson: "People generally fail to appreciate that occasional long runs of one or the other outcome are a natural feature of random sequences." Some studies have shown that persons are bad generators of random numbers. When asked to write down a series of chance outcomes, subjects tend to avoid long runs of either outcome. They write sequences that quickly alternate between outcomes. This is so because we expect random outcomes to be "representative" of the process that generates them (a problem related to heuristics). Therefore, assuming that the best prediction for a tenth toss in you example should be other than 0.5, is a consequence of what unconsciously (“fast thinking”) we want to be represented in our sample. Fool gamblers are bad samplers. Alfredo Hernandez
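A quick Monte Carlo sketch makes the independence point in the original question concrete: conditioning on nine heads in a row leaves the tenth toss at (about) 50/50. The trial count and seed here are arbitrary choices.

```python
import random

random.seed(1)
heads_after, runs_found = 0, 0
while runs_found < 500:
    flips = [random.random() < 0.5 for _ in range(10)]
    if all(flips[:9]):            # nine heads in a row
        runs_found += 1
        heads_after += flips[9]   # record the tenth toss

freq = heads_after / runs_found
print(freq)  # ~0.5, no drift toward tails
```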
{ "language": "en", "url": "https://math.stackexchange.com/questions/433492", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11", "answer_count": 3, "answer_id": 1 }
Is a semigroup $G$ with left identity and right inverses a group? Hungerford's Algebra poses the question: Is it true that a semigroup $G$ that has a left identity element and in which every element has a right inverse is a group? Now, If both the identity and the inverse are of the same side, this is simple. For, instead of the above, say every element has a left inverse. For $a \in G$ denote this left inverse by $a^{-1}$. Then $$(aa^{-1})(aa^{-1}) = a(a^{-1}a)a^{-1} = aa^{-1}$$ and we can use the fact that $$cc = c \Longrightarrow c = 1$$ to get that inverses are in fact two-sided: $$ aa^{-1} = 1$$ From which it follows that $$a = 1 \cdot a = (aa^{-1})a = a (a^{-1}a) = a \cdot 1$$ as desired. But in the scenario given we cannot use $cc = c \Longrightarrow c = 1$, and I can see no other way to prove this. At the same time, I cannot find a counter-example. Is there a simple resolution to this question?
If $cc=c$ then $c$ is called an idempotent element. A semigroup with a left unit and right inverses is called a left-right system, or shortly an $(l, r)$ system. If you take all the idempotent elements of an $(l,r)$ system they also form an $(l,r)$ system, called an idempotent $(l,r)$ system. In such a system the product of two elements is equal to the second element, because if you take $f$ and $g$ to be two such elements, and $e$ is the unit, then $fg=feg=fff^{-1}g=ff^{-1}g=eg=g.$ So each element in such a system is a left unit and also a right inverse, and no such system with more than one element is a group, because no element can be a right unit. It is now also easy to see that, if you define the product of two elements to be equal to the second one, you get an idempotent $(l,r)$ system, which is obviously not a group. For more details and some extra facts check Henry B. Mann's 1944 article "On certain systems which are almost groups". You can easily google it and find it online. A reference to this article appears near the beginning of the book "The Theory of Groups" by Marshall Hall.
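The degenerate system at the end — where the product of two elements is always the second one — is small enough to verify mechanically. A sketch on a three-element carrier, checking associativity, the left-identity and right-inverse axioms, and the absence of a right identity:

```python
S = [0, 1, 2]
op = lambda a, b: b          # multiplication returns the second argument

# Associativity: (ab)c = a(bc) = c for all a, b, c.
assert all(op(op(a, b), c) == op(a, op(b, c))
           for a in S for b in S for c in S)

# Every element e is a left identity: e*x = x.
assert all(op(e, x) == x for e in S for x in S)

# Fix one left identity e; every a then has a right inverse, namely e: a*e = e.
e = 0
assert all(op(a, e) == e for a in S)

# But no element is a right identity, so S is not a group.
assert not any(all(op(x, r) == x for x in S) for r in S)
```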
{ "language": "en", "url": "https://math.stackexchange.com/questions/433546", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "23", "answer_count": 4, "answer_id": 3 }
Solving for X in a simple matrix equation system. I am trying to solve for X in this simple matrix equation system: $$\begin{bmatrix}7 & 7\\2 & 4\\\end{bmatrix} - X\begin{bmatrix}5 & -1\\6 & -4\\\end{bmatrix} = E $$ where $E$ is the identity matrix. If I multiply $X$ with $\begin{bmatrix}5 & -1\\6 & -4\\\end{bmatrix}$ I get the following system: $$\begin{bmatrix}5x_1 & -1x_2\\6x_3 & -4x_4\\\end{bmatrix}$$ By subtracting this from $\begin{bmatrix}7 & 7\\2 & 4\\\end{bmatrix}$ I get $\begin{bmatrix}7 - 5x_1 & 7 + 1x_2\\2 - 6x_3 & 4 + 4x_4\\\end{bmatrix} = \begin{bmatrix}1 & 0\\0 & 1\\\end{bmatrix}$ Which gives me: $7-5x_1 = 1$ $7+1x_2 = 0$ $2-6x_3 = 0$ $4+4x_4 = 1$ These are not the correct answers, can anyone help me out here? Thank you!
Since $\begin{pmatrix}7&7\\2&4\end{pmatrix}-X\begin{pmatrix}5&-1\\6&-4\end{pmatrix}=\begin{pmatrix}1&0\\0&1\end{pmatrix}$, we obtain: $\begin{pmatrix}6&7\\2&3\end{pmatrix}=\begin{pmatrix}5x_1+6x_2&-x_1-4x_2\\5x_3+6x_4&-x_3-4x_4\end{pmatrix}$, where $X=\begin{pmatrix}x_1&x_2\\x_3&x_4\end{pmatrix}$. Now you can multiply both sides of the equation by $\frac{1}{-14}\begin{pmatrix}-4&1\\-6&5\end{pmatrix}$ =(inverse of $\begin{pmatrix}5&-1\\6&-4\end{pmatrix}$), to find: $X=\frac{1}{-14}\begin{pmatrix}6&7\\2&3\end{pmatrix}\begin{pmatrix}-4&1\\-6&5\end{pmatrix}=\frac{1}{-14}\begin{pmatrix}-66&41\\-26&17\end{pmatrix}$. Hope this helps.
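The result can be checked numerically; a quick sketch with NumPy, rearranging the equation to $X = (A - I)M^{-1}$:

```python
import numpy as np

A = np.array([[7.0, 7.0], [2.0, 4.0]])
M = np.array([[5.0, -1.0], [6.0, -4.0]])

# A - X M = I  =>  X = (A - I) M^{-1}
X = (A - np.eye(2)) @ np.linalg.inv(M)

assert np.allclose(A - X @ M, np.eye(2))
assert np.allclose(X, np.array([[-66.0, 41.0], [-26.0, 17.0]]) / -14.0)
```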
{ "language": "en", "url": "https://math.stackexchange.com/questions/433681", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 1 }
How does one combine proportionality? this is something that often comes up in both Physics and Mathematics, in my A Levels. Here is the crux of the problem. So, you have something like this : $A \propto B$ which means that $A = kB \tag{1}$ Fine, then you get something like : $A \propto L^2$ which means that $A = k'L^2 \tag{2}$ Okay, so from $(1)$ and $(2)$ that they derive : $$A \propto BL^2$$ Now how does that work? How do we derive from the properties in $(1)$ and $(2)$, that $A \propto BL^2$. Thanks in advance.
Suppose a variable $A$ depends on two independent factors $B,C$; then $A\propto B\implies A=kB$, but here $k$ is a constant w.r.t. $B$, not $C$; in fact, $k=f(C)\tag{1}$ Similarly, $A\propto C\implies A=k'C$, but here $k'$ is a constant w.r.t. $C$, not $B$; in fact, $k'=g(B)\tag{2}$ From $(1)$ and $(2)$, $f(C)B=g(B)C\implies \frac{f(C)}{C}=\frac{g(B)}{B}$, and since the left side depends only on $C$ while the right depends only on $B$, both equal a constant $k''$, so $f(C)=k''C$. Putting it in $(1)$ gives $A=k''CB\implies A\propto BC\tag{Q.E.D.}$
{ "language": "en", "url": "https://math.stackexchange.com/questions/433754", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 2, "answer_id": 1 }
pressure in earth's atmosphere as a function of height above sea level While I was studying the measurements of pressure at earth's atmosphere,I found the barometric formula which is more complex equation ($P'=Pe^{-mgh/kT}$) than what I used so far ($p=h\rho g$). So I want to know how this complex formula build up? I could reach at the point of $${dP \over dh}=-{mgP \over kT}$$ From this how can I obtain that equation. Please give me a Mathematical explanation.
If $\frac{dP}{dh} = -\frac{mgP}{kT}$, then $\frac{1}{P} \frac{dP}{dh} = -\frac{mg}{kT}$, or $\frac{d(\ln P)}{dh} = -\frac{mg}{kT}$. Integrating with respect to $h$ over the interval $[h_0, h]$ yields $\ln(P(h)) - \ln(P(h_0)) = -\frac{mg}{kT}(h - h_0)$, or $\ln\left(\frac{P(h)}{P(h_0)}\right) = -\frac{mg}{kT}(h - h_0)$, or $\frac{P(h)}{P(h_0)} = \exp\left(-\frac{mg}{kT}(h - h_0)\right)$, or $P(h) = P(h_0)\exp\left(-\frac{mg}{kT}(h - h_0)\right)$. If we now take $h_0 = 0$, and set $P = P(0)$, $P' = P(h)$, the desired formula is had.
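As a numerical illustration of the formula: the physical constants below (mean molecular mass of air, an isothermal temperature, sea-level pressure) are illustrative round numbers of my own choosing, not values from the question.

```python
import math

k = 1.381e-23      # Boltzmann constant, J/K
m = 4.81e-26       # mean mass of an air molecule, kg (assumed value)
g = 9.81           # m/s^2
T = 288.0          # K, isothermal-atmosphere assumption
P0 = 101325.0      # sea-level pressure, Pa

H = k * T / (m * g)                  # scale height kT/(mg), about 8.4 km
P = lambda h: P0 * math.exp(-h / H)

print(round(H))          # scale height in metres
print(round(P(5000.0)))  # pressure at 5 km, roughly half of P0
```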
{ "language": "en", "url": "https://math.stackexchange.com/questions/433816", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Proof of Riemann-Lebesgue lemma: what does "integration by parts in each variable" mean? I was reading the proof of the Riemann-Lebesgue lemma on Wikipedia, and something confused me. It says the following: What does the author mean by "integration by parts in each variable"? If we integrate by parts with respect to $x$, then (filling in the limits, which I believe are $-\infty$ and $\infty$) I get $$\hat{f}(z) = \left[\frac{-1}{iz}f(x)e^{-izx}\right]_{-\infty}^{\infty} + \frac{1}{iz}\int_{-\infty}^{\infty}e^{-izx}f'(x)dx.$$ I think I am missing something here...it's not clear to me why the limit at $-\infty$ of the first term should exist. Can anyone clarify this for me? Thanks very much.
This question was answered by @danielFischer in the comments.
{ "language": "en", "url": "https://math.stackexchange.com/questions/433956", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Proving an operator is self-adjoint I have a linear operator $T:V \to V$ (where $V$ is a finite-dimensional vector space) such that $T^9=T^8$ and $T$ is normal, I need to prove that $T$ is self-adjoint and also that $T^2=T$. Would appreciate any help given. Thanks a million!
Hint. Call the underlying field $\mathbb{F}$. As $T$ is normal and its characteristic polynomial can be split into linear factors $\underline{\text{over }\, \mathbb{F}}$ (why?), $T$ is unitarily diagonalisable over $\mathbb{F}$. Now the rest is obvious.
{ "language": "en", "url": "https://math.stackexchange.com/questions/434018", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
domain of square root What is the domain and range of $\sqrt{3-t} - \sqrt{2+t}$? I consider the domain to be the intersection of the domains of the two square roots. My domain is $[-2,3]$. Is it right? Are there methods for figuring out the domain and range in this kind of problem?
You are right about the domain. As to the range, use the fact that as $t$ travels from $-2$ to $3$, $\sqrt{3-t}$ is decreasing, and $\sqrt{2+t}$ is increasing, so the difference is decreasing.
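A small numeric sketch confirms both endpoint values and the monotonicity, which together give the range $[-\sqrt 5, \sqrt 5]$:

```python
import math

f = lambda t: math.sqrt(3 - t) - math.sqrt(2 + t)

# Endpoint values: f(-2) = sqrt(5), f(3) = -sqrt(5)
assert abs(f(-2) - math.sqrt(5)) < 1e-12
assert abs(f(3) + math.sqrt(5)) < 1e-12

# f is strictly decreasing on [-2, 3], so the range is [-sqrt(5), sqrt(5)]
ts = [-2 + 5 * i / 1000 for i in range(1001)]
assert all(f(a) > f(b) for a, b in zip(ts, ts[1:]))
```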
{ "language": "en", "url": "https://math.stackexchange.com/questions/434111", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
$\sum^{\infty}_{n=1} \frac {x^n}{1+x^{2n}}$ interval of convergence I need to find interval of convergence for this series: $$\sum^{\infty}_{n=1} \frac {x^n}{1+x^{2n}}$$ I noticed that if $x=0$ then series is convergent. Unfortunately, that’s it.
If $\left| x\right|<1$ then $$ \left| \frac{x^n}{1+x^{2n}} \right|\leq \left| x\right|^n $$ because $1+x^{2n}\geq1$. Now $\sum_{n=1}^{\infty} \left| x\right|^n$ is a convergent geometric series. Similarly, if $\left| x\right|>1$ then $$ \left| \frac{x^n}{1+x^{2n}} \right|\leq \frac{1}{ \left| x\right|^n} $$ because $1+x^{2n}\geq x^{2n}$. Now $\sum_{n=1}^{\infty} \frac{1}{\left| x\right|^n}$ is a convergent geometric series. If $x=1$ or $x=-1$, the terms equal $\pm\tfrac12$, so they do not tend to $0$ and the series is divergent.
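A numeric sketch of the comparison bounds: for $|x|<1$ and $|x|>1$ the partial sums settle quickly (the tails are dominated by geometric series), while at $x=1$ every term equals $1/2$:

```python
term = lambda x, n: x**n / (1 + x ** (2 * n))

def partial(x, N):
    return sum(term(x, n) for n in range(1, N + 1))

# |x| < 1 and |x| > 1: partial sums converge
for x in (0.5, -0.5, 2.0, -3.0):
    assert abs(partial(x, 200) - partial(x, 100)) < 1e-12

# x = 1: terms do not tend to 0, so the series diverges
assert term(1.0, 10**6) == 0.5
```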
{ "language": "en", "url": "https://math.stackexchange.com/questions/434255", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Why does $\oint\mathbf{E}\cdot{d}\boldsymbol\ell=0$ imply $\nabla\times\mathbf{E}=\mathbf{0}$? I am looking at Griffith's Electrodynamics textbook and on page 76 he is discussing the curl of electric field in electrostatics. He claims that since $$\oint_C\mathbf{E}\cdot{d}\boldsymbol\ell=0$$ then $$\nabla\times\mathbf{E}=\mathbf{0}$$ I don't follow this logic. Although I know that curl of $\mathbf{E}$ in statics is $\mathbf{0}$, I can't see how you can simply apply Stokes' theorem to equate the two statements. If we take Stokes' original theorem, we have $\oint\mathbf{E}\cdot{d}\boldsymbol\ell=\int\nabla\times\mathbf{E}\cdot{d}\mathbf{a}=0$. How does this imply $\nabla\times\mathbf{E}=\mathbf{0}$? Griffiths seem to imply that this step is pretty easy, but I can't see it!
The surface of integration (and hence the area element $d\mathbf{a}$) is arbitrary, which means that $\nabla\times\mathbf{E}=0$ must hold everywhere, not just locally: if the curl were nonzero at some point, you could choose a small surface around that point on which $\int\nabla\times\mathbf{E}\cdot d\mathbf{a}$ would not vanish.
{ "language": "en", "url": "https://math.stackexchange.com/questions/434291", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11", "answer_count": 5, "answer_id": 4 }
simple limit but I forget how to prove it I have to calculate the following limit $$\lim_{x\rightarrow -\infty} \sqrt{x^2+2x+2} - x$$ It looks like an indeterminate form. I tried to rewrite it as follows: $$\lim_{x\rightarrow -\infty} \sqrt{x^2+2x+2} - \sqrt{|x|^2}$$ but it seems like a dead end. Can anyone suggest a solution? Thanks for your help
Clearly $$\lim_{x\rightarrow -\infty} \sqrt{x^2+2x+2} - x=+\infty+\infty=+\infty$$ But \begin{gather*}\lim_{x\rightarrow +\infty} \sqrt{x^2+2x+2} - x="\infty-\infty"=\\ =\lim_{x\rightarrow +\infty} \frac{(\sqrt{x^2+2x+2} - x)(\sqrt{x^2+2x+2} + x)}{\sqrt{x^2+2x+2} + x}=\lim_{x\rightarrow +\infty} \frac{2x+2}{\sqrt{x^2+2x+2} + x}=\lim_{x\rightarrow +\infty} \frac{2+2/x}{\sqrt{1+2/x+2/x^2} + 1}=1 \end{gather*}
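A numeric sanity check of the two limits discussed above (the $x\to-\infty$ case blows up, the $x\to+\infty$ case tends to $1$):

```python
import math

f = lambda x: math.sqrt(x * x + 2 * x + 2) - x

# As x -> -inf both terms are large and positive: the limit is +infinity
assert f(-1e6) > 1.9e6

# As x -> +inf the "infinity - infinity" difference tends to 1
assert abs(f(1e6) - 1.0) < 1e-3
```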
{ "language": "en", "url": "https://math.stackexchange.com/questions/434370", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 2 }
$d(f \times g)_{x,m} = df_x \times dg_m$? (a) $d(f \times g)_{x,m} = df_x \times dg_m$? Also, (b) does $d(f \times g)_{x,m}$ carry $\tilde{x} \in T_x X, \tilde{m} \in T_m M$ to the tangent space of $f$ cross the tangent space of $g$? Furthermore, (c) does $df_x \times dg_m$ carry $\tilde{x} \in T_x X, \tilde{m} \in T_m M$ to the tangent space of $f$ cross the tangent space of $g$? Thank you~
I don't know what formalism you are adopting, but starting from this one it can be done intuitively. I will state the construction without proof of compatibility and so on. An element of $T_x X$ is a differential from locally defined functions $C^{1}_x(X)$ to $\mathbb{R}$ induced by a locally defined curve $\gamma_x(t):(-\epsilon,\epsilon)\rightarrow X,\gamma_x(0)=x$ by $\gamma_x (f)=\frac{df(\gamma_x(t))}{dt}\big|_{t=0}$. $T_x X\times T_m M$ is identified canonically with $T_{(x,m)}(X\times M)$ in the following way. Given $\gamma_x,\gamma_m$, then $\gamma_{(x,m)}(t)=(\gamma_x(t),\gamma_m(t))$; given $\gamma_{(x,m)}$ then $\gamma_x(t)=p_X\gamma_{(x,m)}(t)$ where $p_X$ is the projection, and likewise for $m$. For a map $F:X\rightarrow Y$, the differential is defined as $dF_x(\gamma_x)(g)=\gamma_x(gF)$. Now what you want is a commutative diagram \begin{array}{ccc} T_{(x,m)}(X\times M) & \cong & T_x X\times T_m M \\ \downarrow & & \downarrow \\ T_{(y,n)}(Y\times N) & \cong & T_y Y\times T_n N \end{array} Let's now check it. $\gamma_{(x,m)}$ is mapped to $\gamma_{(x,m)}(g(F,G))$; by the definition of the tangent vector, we see it is induced by the curve $(F,G)\gamma_{(x,m)}=(Fp_X \gamma_{(x,m)},Gp_M \gamma_{(x,m)})$, which then projects to $Fp_X \gamma_{(x,m)}$ and $Gp_M \gamma_{(x,m)}$. If we factored earlier, it would first be taken to $p_X \gamma_{(x,m)}$ and $p_M \gamma_{(x,m)}$, then by $F,G$ respectively to $Fp_X \gamma_{(x,m)}$ and $Gp_M \gamma_{(x,m)}$, which is the same.
{ "language": "en", "url": "https://math.stackexchange.com/questions/434492", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Inequality involving Maclaurin series of $\sin x$ Question: If $T_n$ is $\sin x$'s $n$th Maclaurin polynomial. Prove that $\forall 0<x<2, \forall m \in \Bbb N,~ T_{4m+3}(x)<\sin(x)<T_{4m+1}(x)$ Thoughts I think I managed to prove the sides, proving that $T_3>T_1$ and adding $T_{4m}$ on both sides, but about the middle, I frankly have no clue...
Your thoughts are not quite right. Firstly, $T_{3} < T_{1}$ for $0 < x < 2$, and for that matter showing $T_{3} > T_{1}$ would be antithetical to what you are looking to prove. Secondly, truncated Taylor series are not additive in the way that you are claiming, i.e. it is NOT true that $T_{3} + T_{4m} = T_{4m + 3}$. However, consider the Maclaurin series for $\sin x$ and write out all sides of that inequality as follows: $$T_{4m+3} < \sin x < T_{4m+1}$$ is the same as writing: $$x - \frac{x^{3}}{3!} + \frac{x^{5}}{5!} + \cdots + \frac{x^{4m+1}}{(4m+1)!} - \frac{x^{4m+3}}{(4m+3)!} < x - \frac{x^{3}}{3!} + \cdots < x - \frac{x^{3}}{3!} + \frac{x^{5}}{5!} + \cdots + \frac{x^{4m+1}}{(4m+1)!}$$ It is obvious from here that since $x> 0$, $T_{4m+3} < T_{4m+1}$. We must now complete the proof. All this amounts to proving is that for $0 < x < 2$, the following inequalities hold: $$\sum_{n = 2m+2}^{\infty} \frac{(-1)^{n}x^{2n+1}}{(2n+1)!} > 0$$ $$\sum_{n = 2m+1}^{\infty} \frac{(-1)^{n}x^{2n+1}}{(2n+1)!} < 0$$ Why this is what we need can be seen by looking at the terms of the Maclaurin series for $\sin x$ and comparing with each side of the inequality. I leave these details to you... feel free to comment if you need additional help. I imagine it will help to expand what these inequalities are out of series form.
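A numerical spot-check of the inequality for small $m$ (restricted to degrees and points where the gaps stay well above double-precision rounding):

```python
import math

def T(n, x):
    """Maclaurin polynomial of sin of odd degree n."""
    return sum((-1) ** k * x ** (2 * k + 1) / math.factorial(2 * k + 1)
               for k in range(n // 2 + 1))

for m in (0, 1):
    for x in (0.5, 1.0, 1.5, 1.9):
        assert T(4 * m + 3, x) < math.sin(x) < T(4 * m + 1, x)
```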
{ "language": "en", "url": "https://math.stackexchange.com/questions/434620", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
A function/distribution which satisfies an integral equation. (sounds bizzare) I think I need a function, $f(x)$ such that $\large{\int_{x_1}^{x_2}f(x)\,d{x} = \frac{1}{(x_2-x_1)}}$ $\forall x_2>x_1>0$. Wonder such a function been used or studied by someone, or is it just more than insane to ask for such a function? I guess it has to be some distribution, and if so, I'd like to know more about it.
No such $f(x)$ can exist. Such a function $f(x)$ would have antiderivative $F(x)$ that satisfies $$F(x_2)-F(x_1)=\frac{1}{x_2-x_1}$$ for all $x_2>x_1>0$. Take the limit as $x_2\rightarrow \infty$ of both sides; the right side is 0 (hence the limit exists), while the left side is $$\left(\lim_{x\rightarrow \infty} F(x)\right)-F(x_1)$$ Hence $F(x_1)=\lim_{x\rightarrow \infty} F(x)$ for all $x_1$. Hence $F(x)$ is constant, but no constant function satisfies the relationship desired.
{ "language": "en", "url": "https://math.stackexchange.com/questions/434695", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Soccer betting combinations for accumulators I would like to bet on soccer games, on every possible combination. For example, I bet on $10$ different games, and each soccer game can go three ways: either a win, draw, or loss. How many combinations would I have to use in order to get a guaranteed win by betting $10$ matches with every combination possible?
If you bet on every possibility, in proportion to the respective odds, the market is priced so that you can only get back about 70% of your original stake no matter what.
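For the counting part of the question: covering every outcome of 10 three-way matches needs $3^{10}$ separate accumulator bets. The 5% per-match overround below is a hypothetical figure of my own (not from the answer), just to illustrate why full coverage guarantees a loss:

```python
combos = 3 ** 10
print(combos)  # 59049 bets needed to cover every outcome

# With an assumed 5% overround per match, the fair-odds payout shrinks
# by a factor of 1/1.05 for each leg of the 10-match accumulator.
payout_fraction = (1 / 1.05) ** 10
print(round(payout_fraction, 2))  # only ~0.61 of the total stake comes back
```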
{ "language": "en", "url": "https://math.stackexchange.com/questions/434759", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
Ultrafilter condition: If A is a subset of X, then either A or X \ A is an element of U My confusion concerns ultrafilters on sets that are themselves power sets. If $X=\{\emptyset,\ \{1\},\ \{2\},\ \{3\},\ \{4\},\ \{1,2\},\ \{1,3\},\ \{1,4\},\ \{2,3\},\ \{2,4\},\ \{3,4\},\ \{1,2,3\},\ \{1,2,4\},\ \{1,3,4\},\ \{2,3,4\},\ \{1,2,3,4\}\ \}$ and the upset $\{1\}=U=\{\ \{1\},\ \{1,2\},\ \{1,3\},\ \{1,4\},\ \{1,2,3\},\ \{1,2,4\},\ \{1,3,4\},\ \{1,2,3,4\}\ \}$ is supposedly a principal ultrafilter (for visual delineation see https://en.wikipedia.org/wiki/Filter_%28mathematics%29), then how do I satisfy the criteria that "If $A$ is a subset of $X$, then either $A$ or $X\setminus A$ is an element of $U$"? For example, could I let $A=\{\emptyset,\{2\}\}$ such that $A\notin U$, but $X\setminus A\notin U$ because $\{2,3,4\}\in X\setminus A,\{2,3,4\}\notin U$? My understanding of the distinction between an element and a subset is unrefined, particularly with regard to power sets.
It is fine to have filters on a power set of another set. But then the filter is a subset of the power set of the power set of that another set; rather than our original set which is the power set of another set. But in the example that you gave, note that $W\in U$ if and only if $\{1\}\in W$. Indeed $\{1\}\notin A$, but therefore it is in its complement and so $X\setminus A\in U$. If things like that confuse you, just replace the sets with other elements, $1,\ldots,16$ for example, and consider the ultrafilter on that set. Then simply translate it back to your original $X$.
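The "either $A$ or its complement" condition for this principal ultrafilter can be checked exhaustively over the ground set $\{1,2,3,4\}$; a small sketch:

```python
from itertools import chain, combinations

ground = frozenset({1, 2, 3, 4})

def subsets(s):
    s = list(s)
    return [frozenset(c) for c in
            chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))]

# Principal ultrafilter generated by {1}: all subsets containing 1
U = {A for A in subsets(ground) if 1 in A}

# For every subset A of the ground set, exactly one of A, ground \ A lies in U
for A in subsets(ground):
    assert (A in U) != ((ground - A) in U)
```

Note that $U$ is a set of subsets of $\{1,2,3,4\}$: the condition is quantified over subsets $A$ of the ground set, not over subsets of the power set $X$, which is where the confusion in the question arises.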
{ "language": "en", "url": "https://math.stackexchange.com/questions/434863", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Proving $\sum_{n=-\infty}^\infty e^{-\pi n^2} = \frac{\sqrt[4] \pi}{\Gamma\left(\frac 3 4\right)}$ Wikipedia informs me that $$S = \vartheta(0;i)=\sum_{n=-\infty}^\infty e^{-\pi n^2} = \frac{\sqrt[4] \pi}{\Gamma\left(\frac 3 4\right)}$$ I tried considering $f(x,n) = e^{-x n^2}$ so that its Mellin transform becomes $\mathcal{M}_x(f)=n^{-2z} \Gamma(z)$ so inverting and summing $$\frac{1}{2}(S-1)=\sum_{n=1}^\infty f(\pi,n)=\sum_{n=1}^\infty \frac{1}{2\pi i}\int_{c-i\infty}^{c+i\infty}n^{-2z} \Gamma(z)\pi^{-z}\,dz = \frac{1}{2\pi i}\int_{c-i\infty}^{c+i\infty}\zeta(2z) \Gamma(z) \pi^{-z}\,dz$$ However, this last integral (whose integrand has poles at $z=0,\frac{1}{2}$ with respective residues of $-\frac 1 2$ and $\frac 1 2$) is hard to evaluate due to the behavior of the function as $\Re(z)\to \pm\infty$ which makes a classic infinite contour over the entire left/right plane impossible. How does one go about evaluating this sum?
I am not sure if it will ever help, but the following identity can be proved: $$ S^2 = 1 + 4 \sum_{n=0}^{\infty} \frac{(-1)^n}{\mathrm{e}^{(2n+1)\pi} - 1}. $$
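Both the stated identity and the closed form from the question check out numerically to high precision; a quick sketch truncating the rapidly convergent sums:

```python
import math

S = sum(math.exp(-math.pi * n * n) for n in range(-20, 21))
rhs = 1 + 4 * sum((-1) ** n / math.expm1((2 * n + 1) * math.pi)
                  for n in range(40))

assert abs(S * S - rhs) < 1e-12                            # the identity above
assert abs(S - math.pi ** 0.25 / math.gamma(0.75)) < 1e-12  # pi^(1/4)/Gamma(3/4)
```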
{ "language": "en", "url": "https://math.stackexchange.com/questions/434933", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "52", "answer_count": 4, "answer_id": 2 }
countability of limit of a set sequence Let $S_n$ be the set of all binary strings of length $2n$ with equal number of zeros and ones. Is it correct to say $\lim_{n\to\infty} S_n$ is countable? I wanted to use it to solve this problem. My argument is that each of $S_n$s is countable (in fact finite) thus their union would also be countable. Then $\lim_{n\to\infty}S_n$ should also be countable as it is contained in the union.
The collection of all finite strings of $0$'s and $1$'s is countably infinite. The subcollection of all strings that have equal numbers of $0$'s and $1$'s is therefore countably infinite. I would advise not using the limit notation to denote that collection. The usual notation for this kind of union is $\displaystyle\bigcup_n S_n$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/434999", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
How to construct a non-piecewise differentiable S curve with constant quick turning flat tails and a linear slope? I need to find an example of a non-piecewise differentiable $f:\mathbb{R}\to\mathbb{R}$ such that $$ \begin{cases} f(x)=C_1 &\text{ for } x<X_1,\\ C_1 < f(x) < C_2 &\text{ for } X_1 < x < X_2,\\ f(x)=C_2 &\text{ for } X_2<x. \end{cases} $$ For example, $\log(1+e^{Ax})$ sort of meets some of the characteristics in that * *it quickly flatlines around 0 for large A *it has a linear slope for 0 < x < Xc But it * *does not flatline for Xc < x *I can't seem to control the slope of the linear section So am looking for something different Is there some other differentiable function that exhibits the above behavior?
After a bit of trying to mix various segments I came up with the following for my problem f(x) = (1/(1+e^(100*(x-1))))(1/(1+e^(-100(x+1))))x-(1/(1+e^(100(x+1))))+(1/(1+e^(-100*(x-1)))) which at least seems to be in the right direction to meet all the requirements * *Has a slope between -1 to +1. *Flattens quickly for x > 1 and x < -1. How quickly can be controlled by changing the constant 100 *is continuous and not piecewise-defined *is differentiable I still haven't generalized it yet, but the general idea is to build it out of 3 segments * *(1/(1+e^(100*(x-1))))(1/(1+e^(-100(x+1)))) multiplied by a slope and X for the sloping part *bottom flatline -(1/(1+e^(100*(x+1)))) *top flat line (1/(1+e^(-100*(x-1))))
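The formula is easier to read (and to check) with the three sigmoid pieces factored out; a sketch with $k = 100$ controlling how sharply the tails flatten:

```python
import math

def sig(z):
    """Numerically safe logistic function 1/(1+e^-z)."""
    if z >= 0:
        return 1.0 / (1.0 + math.exp(-z))
    e = math.exp(z)
    return e / (1.0 + e)

def f(x, k=100.0):
    sloped = sig(k * (1 - x)) * sig(k * (1 + x)) * x   # ~x on (-1, 1)
    bottom = -sig(-k * (1 + x))                        # ~-1 for x < -1
    top = sig(k * (x - 1))                             # ~+1 for x > 1
    return sloped + bottom + top

assert abs(f(0.0)) < 1e-6          # passes through the origin
assert abs(f(0.5) - 0.5) < 1e-3    # linear section, slope 1
assert abs(f(2.0) - 1.0) < 1e-3    # top flatline
assert abs(f(-2.0) + 1.0) < 1e-3   # bottom flatline
```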
{ "language": "en", "url": "https://math.stackexchange.com/questions/435074", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
Is this true? $f(g(x))=g(f(x))\iff f^{-1}(g^{-1}(x))=g^{-1}(f^{-1}(x))$. Is this true? Given $f,g\colon\mathbb R\to\mathbb R$. $f(g(x))=g(f(x))\iff f^{-1}(g^{-1}(x))=g^{-1}(f^{-1}(x))$. I met this problem when dealing with a coding method, but I'm really not familiar with functions. Please help. Thank you.
Of course that is only true if $f^{-1}$ and $g^{-1}$ exist. But then, it's easy to show: Be $y=f^{-1}(g^{-1}(x))$. Then obviously $g(f(y))=g(f(f^{-1}(g^{-1}(x)))) = g(g^{-1}(x)) = x$. On the other hand, by assumption $f(g(y))=g(f(y))=x$. Therefore $g^{-1}(f^{-1}(x)) = g^{-1}(f^{-1}(f(g(y)))) = g^{-1}(g(y)) = y = f^{-1}(g^{-1}(x))$
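A toy check of the statement with a commuting pair of my own choosing (translations by 3 and 5; this example is not from the question):

```python
f = lambda x: x + 3
g = lambda x: x + 5
f_inv = lambda x: x - 3
g_inv = lambda x: x - 5

for x in range(-10, 11):
    assert f(g(x)) == g(f(x))                  # f and g commute
    assert f_inv(g_inv(x)) == g_inv(f_inv(x))  # so do their inverses
    # the proof's key step: g(f(y)) = x for y = f_inv(g_inv(x))
    assert g(f(f_inv(g_inv(x)))) == x
```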
{ "language": "en", "url": "https://math.stackexchange.com/questions/435143", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 5, "answer_id": 3 }
Numerical solutions to wave equation Does the wave equation always have an analytical solution given well-behaved boundary/initial conditions? If not, under what conditions does the wave equation need to be solved numerically? This figure of a simple 1D-problem seems to have been generated numerically. Any recommended reading for general theory on the wave equation is also welcome.
The 1D wave equation has a general closed-form solution (d'Alembert's formula), but numerical solutions can still be an interesting exercise. Numerical solutions become genuinely useful when you are solving some variation of the wave equation with an additional term in it which makes it unsolvable analytically.
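A minimal finite-difference sketch of $u_{tt} = c^2 u_{xx}$ on $[0,1]$ (leapfrog in time, centered in space, fixed ends) of the kind likely used to generate figures such as the one in the question; the grid sizes and CFL number are arbitrary choices:

```python
import numpy as np

nx, nt = 201, 300
c, dx = 1.0, 1.0 / 200
dt = 0.5 * dx / c                 # CFL number 0.5 -> stable scheme
r2 = (c * dt / dx) ** 2

x = np.linspace(0.0, 1.0, nx)
u = np.exp(-200.0 * (x - 0.5) ** 2)   # initial displacement, zero velocity
u_prev = u.copy()

for _ in range(nt):
    u_next = np.zeros_like(u)          # fixed (u = 0) boundaries
    u_next[1:-1] = (2 * u[1:-1] - u_prev[1:-1]
                    + r2 * (u[2:] - 2 * u[1:-1] + u[:-2]))
    u_prev, u = u, u_next

# the pulse splits into two half-amplitude travelling pulses and stays bounded
assert np.all(np.isfinite(u))
assert 0.3 < np.max(np.abs(u)) < 0.8
```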
{ "language": "en", "url": "https://math.stackexchange.com/questions/435213", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
How to show that this limit $\lim_{n\rightarrow\infty}\sum_{k=1}^n(\frac{1}{k}-\frac{1}{2^k})$ is divergent? How to show that this limit $\lim_{n\rightarrow\infty}\sum_{k=1}^n(\frac{1}{k}-\frac{1}{2^k})$ is divergent? I applied integral test and found the series is divergent. I wonder if there exist easier solutions.
Each partial sum of your series is the difference between the partial sum of the harmonic series, and the partial sum of the geometric series. The latter are all bounded by 1. Since the harmonic series diverges, your series does also.
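Numerically, the partial sums track the harmonic numbers minus a bounded (less than $1$) geometric part, so they grow without bound; a quick sketch:

```python
import math

def partial(N):
    return sum(1.0 / k - 0.5 ** k for k in range(1, N + 1))

# H_N grows like ln N + 0.577..., while the geometric part stays below 1
for N in (10, 10**3, 10**5):
    H_N = math.log(N) + 0.5772156649
    assert abs(partial(N) - (H_N - 1.0)) < 0.1

assert partial(10**5) > 10   # keeps growing without bound
```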
{ "language": "en", "url": "https://math.stackexchange.com/questions/435306", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Definable sets à la Jech Jech in Set Theory, p. 175 defines definable sets over a given model $(M,\in)$ (where $M$ is a set) as those sets (= subsets of $M$) $X$ with a formula $\phi$ in the set of formulas of the language $\lbrace \in \rbrace$ and some $a_1,\dots,a_n \in M$ such that $$ X = \lbrace x \in M : (M,\in) \models \phi[x,a_1,\dots, a_n]\rbrace$$ Jech defines $$ \text{def}(M) := \lbrace X \subset M : X\ \text{is definable over }\ (M,\in)\rbrace $$ So far, everything is clear to me. But then, Jech claims: Clearly, $M \in \text{def}(M)$ and $M \subset \text{def}(M) \subset \text{P}(M)$. It is clear to me that and why $M \in \text{def}(M)$ and that and why $\text{def}(M) \subset \text{P}(M)$. But I cannot see at once that and why $M \subset \text{def}(M)$ for arbitrary sets $M$. Is this a typo in Jech's Set Theory or did I misunderstand something?
If $M$ is transitive and $a \in M$ then $a \subseteq M$ and moreover $$a = \{x \in M : (M,{\in}) \vDash x \in a\}.$$ So $a \in \operatorname{def}(M)$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/435447", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
$A$ is a subset of $B$ if and only if $P(A) \subset P(B)$ I had to prove the following for a trial calculus exam: $A\subset B$ if and only if $P(A) \subset P(B)$ where $P(A)$ is the set of all subsets of $A$. Can someone tell me if my approach is correct and please give the correct proof otherwise? $PROOF$: $\Big(\Longrightarrow\Big)$ assume $A\subset B$ is true. Then $\forall$ $a\in A$, $a\in B$ Then for $\forall$ A, the elements $a_1, a_2,$ ... , $a_n$ in A are also in B. Hence $P(A)\subset P(B)$ $\Big(\Longleftarrow\Big) $ assume $P(A) \subset P(B)$ is true. We prove this by contradiction so assume $A\not\subset B$ Then there is a set $A$ with an element $a$ in it, $a\notin$ B. Hence $P(A) \not\subset P(B)$ But we assumed $P(A) \subset P(B)$ is true. We reached a contradiction. Hence if $P(A) \subset P(B)$ then $A\subset B$. I proved both directions now; please correct me if I did something wrong :-)
$(\Rightarrow)$ Given any $x\in P(A)$ then $x\subset A$. So, by hypothesis, $x\subset B$ and so $x\in P(B)$. The converse is easier, as noted by @Asaf, below.
{ "language": "en", "url": "https://math.stackexchange.com/questions/435514", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 4, "answer_id": 1 }
Algebraic Solutions to Systems of Polynomial Equations Given a system of rational polynomials in some number of variables with at least one real solution, I want to prove that there exists a solution that is a tuple of algebraic numbers. I feel like this should be easy to prove, but I can't determine how to. Could anyone give me any help? I have spent a while thinking about this problem, and I can't think of any cases where it should be false. However I have absolutely no idea how to begin to show that it is true, and I can't stop thinking about it. Are there any simple properties of the algebraic numbers that would imply this? I don't want someone to prove it for me, I just need someone to point me in the direction of a proof, or show me how to find a counterexample if it is actually false (which would be very surprising and somewhat enlightening). If anyone knows anything about this problem I would be thankful if they could give me a little bit of help.
Here's a thought. Let's look at the simplest non-trivial case. Let $P(x,y)$ be a polynomial in two variables with rational (equivalently, for our purposes, integer) coefficients, and a real zero. If that zero is isolated, then $P$ is never negative (or never positive) in some neighborhood of that zero, so the graph of $z=P(x,y)$ is tangent to the plane $z=0$, so the partial derivatives $P_x$ and $P_y$ vanish at the zero, so if you eliminate $x$ from the partials (by, say, taking the resultant) you get a one-variable polynomial that vanishes at the $y$-value, so the $y$-value must be algebraic, so the $x$-value must be algebraic. If the zero is not isolated, then $P$ vanishes at some nearby point with at least one algebraic (indeed, rational) coordinate, but that point must then have both coordinates algebraic. Looking at the general case, many polynomials in many variables, you ought to be able to use resultants to get down to a single polynomial, and then do an induction on the number of variables --- we've already done the base case.
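The elimination step via resultants can be sketched concretely (assuming sympy is available; the polynomials here are a toy example, not the general case):

```python
import sympy as sp

x, y = sp.symbols('x y')

# Toy elimination: for the system P = x^2 + y^2 - 1 = 0, G = x - y = 0, taking the
# resultant in x eliminates x and leaves a one-variable polynomial with integer
# coefficients whose roots are the possible y-values, so those y are algebraic.
P = x**2 + y**2 - 1
G = x - y
R = sp.resultant(P, G, x)
print(R)
assert sp.simplify(R - (2*y**2 - 1)) == 0   # y = ±1/√2, an algebraic number
```

The same `sp.resultant` call is what one would iterate in the induction on the number of variables sketched above.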
{ "language": "en", "url": "https://math.stackexchange.com/questions/435576", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 2, "answer_id": 0 }
Edge coloring in a graph How do I color edges in a graph? I actually want to ask you specifically about one method that I've heard about - to find a dual (?) graph and color its vertices. What is the dual graph here? Is it really the dual graph, or maybe something different? If so, what is this? The graph I'm talking about has G* sign.
One method of finding an edge colouring of a graph is to find a vertex colouring of its line graph. The line graph is formed by placing a vertex for every edge in the original graph, and connecting two of these vertices whenever the corresponding edges of the original graph share an endpoint. By finding a vertex colouring of the line graph we obtain a colour for each edge, and if two edges of the original graph share an endpoint they will be adjacent in the line graph and so have different colours in our colouring. From this we can see that a vertex colouring of the line graph gives an edge colouring of our initial graph.

{ "language": "en", "url": "https://math.stackexchange.com/questions/435652", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Prove that $(\sup\{y\in\Bbb R^{\ge0}:y^2\le x\})^2=x$ This question is a simple one from foundations of analysis, regarding the definition of the square root function. We begin by defining $$\sqrt x:=\sup\{y\in\Bbb R^{\ge0}:y^2\le x\}:=\sup S(x)$$ for $x\ge0$, and now we wish to show that it satisfies its defining property, namely $\sqrt x^2=x$. By dividing into cases $x\le1$ (where $1$ is an upper bound for $S(x)$) and $x\ge1$ (where $x$ is an upper bound for $S(x)$), we know $S(x)$ is upper bounded and hence the function is well-defined for all $x\ge0$. It follows from the intermediate value theorem applied to $f(y)=y^2$ on $[0,\max(1,x)]$ that $\sqrt x^2=x$, if we can prove that $f(y)$ is continuous, but suffice it to say that I would like to prove the IVT later in generality, when I will have the definition $|x|=\sqrt{xx^*}$ (which is used in the definition of continuity), so I hit a circularity problem. Thus I need a "bootstrap" version of the IVT for this particular case, i.e. I can't just invoke this theorem. What is the cleanest way to get to the goal here? Assume I don't have any theorems to work from except basic algebra.
Since you have the l.u.b. axiom, you can use that, for any two bounded sets $A,B$ of nonnegative reals, we have $$(\sup A) \cdot (\sup B)=\sup( \{a\cdot b:a \in A,b \in B\}).$$ Applying this to $A=B=S(x)$ we want to find the sup of the set of products $a\cdot b$ where $a^2\le x$ and $b^2 \le x.$ First note any such product is at most $x$: without loss assume $a \le b$, then we have $ab \le b^2 \le x.$ This much shows $\sqrt{x}\cdot \sqrt{x} \le x$ for your definition of $\sqrt{x}$ as a sup. Second, since we may use any (independent) choices of $a,b \in S(x)$ we may in particular take them both equal, and obtain the products $t\cdot t=t^2$ which, given the definition of $S(x)$, have supremum $x$, which shows that $\sqrt{x}\cdot \sqrt{x} \ge x$.
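The supremum definition can also be made computational: bisection on "is $y^2 \le x$?" converges to $\sup S(x)$ without ever invoking a pre-existing square root (a Python sketch; the function name is mine):

```python
def sqrt_as_sup(x, iters=100):
    """Approximate sup{y >= 0 : y*y <= x} by bisection (x >= 0)."""
    lo, hi = 0.0, max(1.0, x)   # hi is an upper bound for S(x), as in the question
    for _ in range(iters):
        mid = (lo + hi) / 2
        if mid * mid <= x:      # mid ∈ S(x): the sup is at least mid
            lo = mid
        else:                   # mid is an upper bound for S(x)
            hi = mid
    return lo

for x in [0.25, 2.0, 9.0, 123.456]:
    s = sqrt_as_sup(x)
    assert abs(s * s - x) < 1e-9
print(sqrt_as_sup(2.0))
```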
{ "language": "en", "url": "https://math.stackexchange.com/questions/435715", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 3, "answer_id": 0 }
Comparing Areas under Curves I remembered back in high school AP Calculus class, we're taught that for a series: $$\int^\infty_1\frac{1}{x^n}dx:n\in\mathbb{R}_{\geq2}\implies\text{The integral converges.}$$ Now, let's compare $$\int^\infty_1\frac{1}{x^2}dx\text{ and }\int^\infty_1\frac{1}{x^3}dx\text{.}$$ Of course, the second integral converges "faster" since it's cubed, and the area under the curve would be smaller in value than the first integral. This is what's bothering me: I found out that $$\int^\infty_{1/2}\frac{1}{x^2}dx=\int^\infty_{1/2}\frac{1}{x^3}dx<\int^\infty_{1/2}\frac{1}{x^4}dx$$ Can someone explain to me when is this happening, and how can one prove that the fact this is right? Thanks!
$$\text{The main reason that: }\int^\infty_{1/2}\frac{1}{x^2}dx=\int^\infty_{1/2}\frac{1}{x^3}dx$$ $$\text{is because although that }\int^\infty_1\frac{1}{x^2}dx>\int^\infty_1\frac{1}{x^3}dx$$ $$\text{remember that }\int^1_{1/2}\frac{1}{x^2}dx<\int^1_{1/2}\frac{1}{x^3}dx\text{.}$$ $$\text{So, in this case: }\int^\infty_1\frac{1}{x^2}dx+\int^1_{1/2}\frac{1}{x^2}dx=\int^\infty_1\frac{1}{x^3}dx+\int^1_{1/2}\frac{1}{x^3}dx,$$ $$\text{which means that: }\int^\infty_{1/2}\frac{1}{x^2}dx=\int^\infty_{1/2}\frac{1}{x^3}dx$$
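The three tail integrals can be checked exactly, since $\int_{1/2}^\infty x^{-n}\,dx = \dfrac{(1/2)^{1-n}}{n-1} = \dfrac{2^{n-1}}{n-1}$ for $n \ge 2$ (a Python sketch using exact rational arithmetic):

```python
from fractions import Fraction

def tail_integral(n, a=Fraction(1, 2)):
    """∫_a^∞ x^(-n) dx = a^(1-n)/(n-1) for n >= 2, computed exactly."""
    return a ** (1 - n) / (n - 1)

for n in (2, 3, 4):
    print(n, tail_integral(n))
# the n=2 and n=3 tails are both exactly 2; the n=4 tail is 8/3
assert tail_integral(2) == tail_integral(3) == 2
assert tail_integral(4) == Fraction(8, 3)
```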
{ "language": "en", "url": "https://math.stackexchange.com/questions/435785", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Notation of random variables I am really confused about capitalization of variable names in statistics. When should a random variable be presented by uppercase letter, and when lower case? For a probability $P(X \leq x)$, what do $x$ and $X$ mean here?
You need to dissociate $x$ from $X$ in your mind—sometimes it matters that they are "the same letter" but in general this is not the case. They are two different characters and they mean two different things and just because they have the same name when read out loud doesn't mean anything. By convention, a lot of the time we give random variables names which are capital letters from around the end of the alphabet. That doesn't have to be the case—it's arbitrary—but it's a convention. So just as an example here, let's let $X$ be the random variable which represents the outcome of a single roll of a die, so that $X$ takes on values in $\{1,2,3,4,5,6\}$. Now I believe you understand what would be meant by something like $P(X\leq 2)$: it's the probability that the die comes up either a 1 or a 2. Similarly, we could evaluate numbers for $P(X\leq 4)$, $P(X\leq 6)$, $P(X\leq \pi)$, $P(X\leq 10000)$, $P(X\leq -230)$ or $P(X\leq \text{any real number that you can think of})$. Another way to say this is that $P(X\leq\text{[blank]})$ is a function of a real variable: we can put any number into [blank] that we want and we end up with a unique number. Now a very common symbol for denoting a real variable is $x$, so we can write this function as $P(X\leq x)$. In this expression, $X$ is fixed, and $x$ is allowed to vary over all real numbers. It's not super significant that $x$ and $X$ are the same letter here. We can similarly write $P(X\leq y)$ and this would be the same function. Where it really starts to come in handy that $x$ and $X$ are the same letter is when you are dealing with things like joint distributions where you have more than one random variable, and you are interested in the probability of for instance $P(X\leq \text{[some number] and } Y\leq\text{[some other number]})$ which can be written more succinctly as $P(X\leq x,Y\leq y)$. 
Then, just to account for the fact that it's hard to keep track of a lot of symbols at the same time, it's convenient that $x$ corresponds to $X$ in an obvious way. By the way, for a random variable $X$, the function $P(X\leq x)$ a very important function. It is called the cumulative distribution function and is usually denoted by $F_X$, so that $$F_X(x)=P(X\leq x)$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/435846", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12", "answer_count": 4, "answer_id": 2 }
Complex integration: $\int _\gamma \frac{1}{z}dz=\log (\gamma (b))-\log(\gamma (a))?$ Let $\gamma$ be a closed path defined on $[a,b]$ with image in the complex plan except the upper imaginary axis, ($0$ isn't in this set). Then $\frac{1}{z}$ has an antiderivative there and it is $\log z$. Therefore $\int _\gamma \frac{1}{z}dz=\log (\gamma (b))-\log(\gamma (a))=0$ because it is a closed path. Now let $\psi(t)=e^{it}+3$, $0\leq t\leq 2\pi$. Then $\psi'(t)=ie^{it}, 0\leq t\leq 2\pi$. So $$\int _\psi\frac{1}{z}dz=\int _0^{2\pi}\frac{ie^{it}}{e^{it}}dt=2\pi i$$ but $\psi$ is a closed path so there's something wrong. What's going on here?
The denominator in the second expression should be $e^{it}+3$ instead of $e^{it}$.
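With that correction, the two situations can be checked numerically: the circle around $3$ gives $0$ (the antiderivative argument applies, since $0$ is not enclosed), while the circle around the origin gives $2\pi i$ (a Python sketch):

```python
import cmath, math

def contour_integral_one_over_z(center, N=20000):
    """Numerically integrate ∮ dz/z over the circle z(t) = center + e^{it}, 0 <= t <= 2π."""
    total = 0.0 + 0.0j
    dt = 2 * math.pi / N
    for k in range(N):
        t = k * dt
        z = center + cmath.exp(1j * t)       # ψ(t)
        dz = 1j * cmath.exp(1j * t) * dt     # ψ'(t) dt
        total += dz / z
    return total

I_shifted = contour_integral_one_over_z(3)   # circle around 3: does not enclose 0
I_origin = contour_integral_one_over_z(0)    # unit circle: encloses 0
print(I_shifted, I_origin)
assert abs(I_shifted) < 1e-6
assert abs(I_origin - 2j * math.pi) < 1e-6
```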
{ "language": "en", "url": "https://math.stackexchange.com/questions/435924", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Find maximum and minimum of $f(x,y) = x^2 y^2 - 2x - 2y$ in $0 \leq x \leq y \leq 5$. Find maximum and minimum of $f(x,y) = x^2 y^2 - 2x - 2y$ in $0 \leq x \leq y \leq 5$. So first we need to check inside the domain, I got only one point $A(1,1)$ where $f(1,1) = -3$. and after further checking it is a saddle point. Now we want to check on the edges, we have 3 edges: $(1) x=y, 0\leq x\leq y \leq 5$ and $(2) y=5, 0 \leq x \leq 5$ and $(3) x=0, 0\leq y \leq 5$. So I started with each of them, using Lagrange. I started with the first and got: $l(x,y,\lambda) = x^2y^2 - 2x - 2y + \lambda (x-y)$. $l_x(x,y) = 2xy^2 - 2 + \lambda = 0$ $l_y(x,y) = 2x^2y - 2 - \lambda = 0 $ $x-y = 0 \rightarrow x=y$. But then what to do ? I always get $x,y = \sqrt[3]{1+ 0.5\lambda}$, which gets me nowhere. Any help would be appreciated
Well, we have $f(x,y)=x^2y^2-2x-2y$ considered on the region $0\le x\le y\le 5$, the triangle with vertices $(0,0)$, $(5,5)$ and $(0,5)$. So $f_x=2xy^2-2,~~f_y=2x^2y-2$ and solving $f_x=f_y=0$ gives us the critical point $x=y=1$ which is on the border of the area. Now think about the border: $$y=5,~~ 0\le x\le 5 \;\;\;\; x=0,~~0\le y\le 5 \;\;\;\; y=x,~~ 0\le x\le 5$$ If $y=5,~~ 0\le x\le 5$, then $$f(x,y)\to f(x)=25x^2-2x-10$$ and it is easy for you to find the critical points, which are perhaps the best candidates on this border. If $x=0,~~0\le y\le 5$, then $$f(x,y)\to f(y)=-2y,$$ and there is a similar way of finding such points on this border.
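The boundary analysis can be cross-checked by brute force over a grid on the region (a Python sketch; the grid resolution is arbitrary). The one-variable restriction $25x^2-2x-10$ on the edge $y=5$ has its minimum at $x=1/25$, giving $-10.04$, and the maximum $605$ occurs at $(5,5)$:

```python
def f(x, y):
    return x**2 * y**2 - 2*x - 2*y

# Brute-force scan of the closed region 0 <= x <= y <= 5 on a fine grid.
n = 1000
best_max = best_min = (f(0.0, 0.0), 0.0, 0.0)
for i in range(n + 1):
    x = 5.0 * i / n
    for j in range(i, n + 1):
        y = 5.0 * j / n
        v = f(x, y)
        if v > best_max[0]:
            best_max = (v, x, y)
        if v < best_min[0]:
            best_min = (v, x, y)

print("max:", best_max)   # 605 at (5, 5)
print("min:", best_min)   # -10.04 at (0.04, 5), on the edge y = 5
assert abs(best_max[0] - 605) < 1e-9
assert abs(best_min[0] - (-10.04)) < 1e-3 and abs(best_min[2] - 5.0) < 1e-12
```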
{ "language": "en", "url": "https://math.stackexchange.com/questions/436005", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Only 'atomic' vectors as part of the base of a vector space? Given a vector subspace $U_1=${$\left(\begin{array}{c} \lambda+µ \\ \lambda \\ µ \end{array}\right)\in R^3$: $\lambda,µ \in R$ } Determine a possible base of this vector subspace. As far as I know, the base of a vector space is a number of vectors from which every vector in the given vector (sub-)space can be constructed. My suggestion for a base of the given vectorspace would be: $$\left(\begin{array}{c} \lambda \\ 0 \\ 0 \end{array}\right)+ \left(\begin{array}{c} µ \\ 0 \\ 0 \end{array}\right)+ \left(\begin{array}{c} 0 \\ \lambda \\ 0 \end{array}\right)+ \left(\begin{array}{c} 0 \\ 0 \\ µ \end{array}\right) $$ None of these vectors can be written as a linear combination of the other vectors. What also came to my mind as a possible solution was: $$\left(\begin{array}{c} \lambda+µ \\ 0 \\ 0 \end{array}\right)+ \left(\begin{array}{c} 0 \\ \lambda \\ 0 \end{array}\right)+ \left(\begin{array}{c} 0 \\ 0 \\ µ \end{array}\right) $$ Which - if any - of these two is a valid base for the given vectorspace? Is the second one invalid, because it can be written like the first one?
The first thing you need to know is that a subspace's dimension cannot exceed the containing space's dimension. Since the number of vectors constituting a base is equal to the dimension, your first suggestion is wrong, as it suggests that the subspace is of dimension 4 in $\mathbb{R}^3$, which is only of dimension 3. Then, if a subspace's dimension is equal to the dimension of the containing space, they are equal. This means that if your second suggestion is correct, the subspace $U_1$ is equal to $\mathbb{R}^3$, which is also false (take for instance the vector $(1,0,0)$) As a general rule, you need to factor the scalars appearing in the definition (here $\lambda$ and $\mu$), like Listing suggested, and the basis will naturally appear.
{ "language": "en", "url": "https://math.stackexchange.com/questions/436057", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Limits and exponents and e exponent form So I know that $\underset{n\rightarrow \infty}{\text{lim}}\left(1+\frac {1}{n}\right)^n=e$ and that we're not allowed to see it as $1^\infty$ because that'd be incorrect. Why is then that we can do the same thing with (for example): $$\lim_{n\rightarrow \infty} \left(1+\sin\left(\frac {1}{n}\right)\right)^{n\cos\left(\frac {1}{n}\right)}= \lim_{n\rightarrow \infty} \left(\left(1+\sin\left(\frac {1}{n}\right)\right)^\frac {1}{\sin\left(\frac {1}{n}\right)} \right)^{n\cdot\cos\left(\frac {1}{n}\right)\sin\left(\frac{1}{n}\right)}$$ What I mean by that is that in the example of $e$ we can't say it' $\lim \left(1+\frac {1}{n}\right)^{\lim (n)}=1^\infty$ While in the second example that's exactly what we do (we say that the limit of the base is $e$ while the limit of the exponent is 1 which is why the entire expression is equal to $e$. Can anyone explain to me the difference between the two?
In fact there's no difference between the two examples; indeed if you have a function $h$ such that $h(n)\to\infty$ then $$\lim_{n\to\infty}\left(1+\frac{1}{h(n)}\right)^{h(n)}=\lim_{n\to\infty}\left(1+\frac{1}{n}\right)^{n}=e$$ and of course if we have another function $f$ such that $f(n)\to a\in\mathbb R$ then $$\lim_{n\to\infty}\left(\left(1+\frac{1}{h(n)}\right)^{h(n)}\right)^{f(n)}=e^a.$$
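A quick numerical illustration (Python sketch): here $h(n)=1/\sin(1/n)\to\infty$ and the outer exponent $n\cos(1/n)\sin(1/n)\to 1$, so the whole expression should approach $e$.

```python
import math

def a(n):
    return (1 + math.sin(1/n)) ** (n * math.cos(1/n))

# The exponent n·cos(1/n)·sin(1/n) of the "e-shaped" inner expression tends to 1,
# so the whole expression should tend to e.
for n in (10**3, 10**6, 10**9):
    print(n, a(n))
assert abs(a(10**6) - math.e) < 1e-3
```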
{ "language": "en", "url": "https://math.stackexchange.com/questions/436122", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 4, "answer_id": 0 }
How to show that right triangle is intersection of two rectangles in Cartesian coordinates? I am trying to do the following. Given the triangle $$T:=\left\{(x,y)\mid 0\leq x\leq h,0\leq y\leq k,\frac{x}{h}+\frac{y}{k}\leq1\right\}$$ find two rectangles $R$ and $S$ such that $R\cap S=T$, $\partial R\cap T$ is the two legs of $T$, and $\partial S\cap T$ is the hypotenuse of $T$ union the vertex of $T$. The rectangle $R$ should be of smallest area. For example, for $h=4$ and $k=3$, Clearly, $$R=\left\{(x,y)\mid 0\leq x\leq h,0\leq y\leq k\right\}.$$ I am having trouble coming up with a description of $S$ to show that $R\cap S=T$. I have that $S$ has side lengths $$\sqrt{h^2+k^2}\qquad\textrm{and}\qquad\frac{hk}{\sqrt{h^2+k^2}}$$ and corners at $$(0,k)\qquad(h,0)\qquad\left(\frac{h^3}{h^2+k^2},-\frac{h^2k}{h^2+k^2}\right)\qquad\left(-\frac{k^3}{h^2+k^2},\frac{hk^2}{h^2+k^2}\right).$$ How do I describe $S$ in a way that I can show $R\cap S=T$?
Note that $\frac{x}{h}+\frac{y}{k}=1$ is equivalent to $y=-\frac{k}{h}x+k$. Define $$H_1=\left\{(x,y)\mid y\leq-\frac{k}{h}x+k\right\}\\ H_2=\left\{(x,y)\mid y\leq\frac{h}{k}x+k\right\}\\ H_3=\left\{(x,y)\mid y\geq\frac{h}{k}x-\frac{h^2}{k}\right\}\\ H_4=\left\{(x,y)\mid y\geq-\frac{k}{h}x\right\}.$$ (Note the directions of the inequalities: $H_3$ and $H_4$ are the half-planes on the side of their boundary lines that contains $R$.) Show that $H_1\cap H_2\cap H_3\cap H_4$ is such a rectangle that fits your condition for $S$, and that $$H_1\cap R=T\\ H_2\cap R=R\\ H_3\cap R=R\\ H_4\cap R=R;$$ thus, $H_1\cap H_2\cap H_3\cap H_4\cap R=T$. (Note that $\partial H_1\parallel\partial H_4$, $\partial H_2\parallel\partial H_3$, and $\partial H_1\perp\partial H_2$, according to the slopes in the equations given for the definitions of $H_1$, $H_2$, $H_3$, and $H_4$.)
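A numerical spot-check for $h=4$, $k=3$ (a Python sketch; here $H_3$ and $H_4$ are taken with "$\geq$", i.e. as the half-planes on the side of their boundary lines that contains $R$):

```python
import random

h, k = 4.0, 3.0

def in_T(x, y):
    return 0 <= x <= h and 0 <= y <= k and x/h + y/k <= 1

def in_R(x, y):
    return 0 <= x <= h and 0 <= y <= k

def in_S(x, y):
    # Strip between the hypotenuse line and its parallel through the origin,
    # cut by the two perpendicular lines through (0, k) and (h, 0).
    return (y <= -k/h * x + k and        # H1
            y <= h/k * x + k and         # H2
            y >= h/k * x - h*h/k and     # H3
            y >= -k/h * x)               # H4

random.seed(0)
for _ in range(100000):
    x = random.uniform(-2, 6)
    y = random.uniform(-2, 5)
    assert (in_R(x, y) and in_S(x, y)) == in_T(x, y)
print("R ∩ S = T on 100000 random samples")
```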
{ "language": "en", "url": "https://math.stackexchange.com/questions/436202", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
Testing polynomial equivalence Suppose I have two polynomials, P(x) and Q(x), of the same degree and with the same leading coefficient. How can I test if the two are equivalent in the sense that there exists some $k$ with $P(x+k)=Q(x)$? P and Q are in $\mathbb{Z}[x].$
The condition of $\mathbb{Z}[x]$ isn't required. Suppose we have two polynomials $P(x)$ and $Q(x)$, whose coefficients of $x^i$ are $P_i$ and $Q_i$ respectively. If they are equivalent in the sense that $P(x+k) = Q(x)$, then (1) their degrees must be the same, which we denote as $n$; (2) their leading coefficients must be the same, which we denote as $a_n=P_n=Q_n$; (3) $P(x+k) - a_n(x+k)^n = Q(x) - a_n x^n$. By considering the coefficient of $x^{n-1}$ in the last equation, this tells us that $P_{n-1} + nk a_n = Q_{n-1}$. This allows you to calculate $k$ in terms of the various knowns, in which case you can just substitute in and check if we have equivalence. We can simply check that $Q(i) = P(i+k)$ for $n+1$ distinct values of $i$, which tells us that they agree as degree $n$ polynomials.
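The test above can be sketched in code (assuming sympy is available; the function name `shift_equivalent` is mine):

```python
import sympy as sp

x = sp.symbols('x')

def shift_equivalent(P, Q):
    """Return k with P(x+k) == Q(x) if it exists, else None."""
    P, Q = sp.Poly(P, x), sp.Poly(Q, x)
    n = P.degree()
    if n != Q.degree() or P.LC() != Q.LC():
        return None
    if n == 0:
        return sp.Integer(0) if P == Q else None
    # Comparing coefficients of x^(n-1) in P(x+k) = Q(x) forces
    #   P_{n-1} + n*k*a_n = Q_{n-1},  so k is determined:
    k = (Q.coeff_monomial(x**(n-1)) - P.coeff_monomial(x**(n-1))) / (n * P.LC())
    return k if sp.expand(P.as_expr().subs(x, x + k) - Q.as_expr()) == 0 else None

P = x**3 - 6*x + 2
Q = sp.expand(P.subs(x, x + 5))
print(shift_equivalent(P, Q))           # recovers k = 5
print(shift_equivalent(P, x**3 + 1))    # None: not a shift of P
```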
{ "language": "en", "url": "https://math.stackexchange.com/questions/436273", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
solving an equation of the type: $t \sin (2t)=2$ where $0<t<3\pi$ Need to solve: How many solutions are there to the equation $t\sin (2t)=2$ where $0<t<3 \pi$? I am currently studying calc 3 and came across this and realized I don't have a clue as to how to get started on it.
As an alternate approach, you could rewrite the equation as $$\frac{1}{2}\sin{2t}=\frac{1}{t}$$ and then observe that since $\frac{1}{t}\le\frac{1}{2}$ for $t\ge2$, the graph of $y=\frac{1}{t}$ will intersect the graph of $y=\frac{1}{2}\sin{2t}$ twice in each interval $[n\pi,(n+1)\pi]$ for $n\ge1$.
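Counting the intersections this way gives two in $(\pi, 3\pi/2)$ and two in $(2\pi, 5\pi/2)$, so four in total; a numerical sign-change count confirms it (a Python sketch, reliable here because the crossings are transversal):

```python
import math

def f(t):
    return t * math.sin(2*t) - 2

# Count sign changes of f on a fine grid over (0, 3π); each one is a solution
# of t·sin(2t) = 2.
N = 300000
a, b = 1e-9, 3 * math.pi
ts = [a + (b - a) * i / N for i in range(N + 1)]
crossings = sum(1 for u, v in zip(ts, ts[1:]) if f(u) * f(v) < 0)
print(crossings)  # 4: two in (π, 3π/2) and two in (2π, 5π/2)
assert crossings == 4
```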
{ "language": "en", "url": "https://math.stackexchange.com/questions/436376", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
calculate the derivative using fundamental theorem of calculus This is a GRE prep question: What's the derivative of $f(x)=\int_x^0 \frac{\cos xt}{t}\mathrm{d}t$? The answer is $\frac{1}{x}[1-2\cos x^2]$. I guess this has something to do with the first fundamental theorem of calculus but I'm not sure how to use that to solve this problem. Thanks in advance.
The integral does not exist, consequently it is not differentiable. The integral does not exist, because for each $x$ ($x>0$, the case of negative $x$ is dealt with similarly) there is some $\varepsilon > 0$ such that $\cos(xt)> 1/2$ for all $t$ in the $t$-range $0\le t \le \varepsilon$. If one splits the integral $\int_x^0 = \int_\varepsilon^0 + \int_x^\varepsilon$, then the second integral exists, because the integrand is continuous in the $t$-range $\varepsilon \le t\le x $. The first is by definition $\lim_{\eta\rightarrow 0+}\int_\varepsilon^\eta$ and larger than $(-\ln \eta + \ln \varepsilon)/2$. So the limit does not exist.
{ "language": "en", "url": "https://math.stackexchange.com/questions/436567", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 2 }
Algebra and Substitution in Quadratic Form―Einstein Summation Notation Schaum's Outline to Tensor Calculus ― chapter 1, example 1.5 ――― If $y_i = a_{ij}x_j$, express the quadratic form $Q = g_{ij}y_iy_j$ in terms of the $x$-variables. Solution: I can't substitute $y_i$ directly because it contains $j$ and there's already a $j$ in the given quadratic form. So $y_i = a_{i \huge{j}}x_{\huge{j}} = a_{i \huge{r}}x_{\huge{r}}$. This implies $ y_{\huge{j}} = a_{{\huge{j}}r}x_r.$ But I already used $r$ (in the sentence before the previous) so need to replace $r$ ――― $ y_j = a_{j \huge{r}}x_{\huge{r}} = a_{j \huge{s}}x_{\huge{s}}.$ Therefore, by substitution, $Q = g_{ij}(a_{ir}x_r)(a_{js}x_s)$ $$ = g_{ij}a_{ir}a_{js}x_rx_s. \tag{1}$$ $$= h_{rs}x_rx_s, \text{ where } h_{rs} = g_{ij}a_{ir}a_{js}. \tag{2}$$ Equation ($1$): Why can they commute $a_{js}$ and $x_r$? How are any of the terms commutative? Equation ($2$): How does $rs$ get to be the subscript of $h$? Why did they define $h_{rs}$?
In equation (1) $a_{js}$ and $x_r$ commute because these are just regular (reals or complex) numbers using standard multiplication which is commutative. Equation (2) $h_{rs}$ is defined to save space more than anything, it's the coefficients of the polynomial in $x_i$.
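The index bookkeeping in the example maps directly onto `numpy.einsum`, which makes it easy to verify numerically (a sketch, assuming numpy is available):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
g = rng.standard_normal((n, n))
a = rng.standard_normal((n, n))
x = rng.standard_normal(n)

y = a @ x                                  # y_i = a_{ij} x_j
Q_direct = y @ g @ y                       # Q = g_{ij} y_i y_j

# Q = g_{ij} a_{ir} a_{js} x_r x_s, exactly equation (1):
Q_sub = np.einsum('ij,ir,js,r,s->', g, a, a, x, x)

# h_{rs} = g_{ij} a_{ir} a_{js}, as in equation (2):
h = np.einsum('ij,ir,js->rs', g, a, a)
Q_h = np.einsum('rs,r,s->', h, x, x)

print(Q_direct, Q_sub, Q_h)
assert np.allclose(Q_direct, Q_sub) and np.allclose(Q_direct, Q_h)
```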
{ "language": "en", "url": "https://math.stackexchange.com/questions/436635", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
A simple inequality in calculus? I have to solve this inequality: $$\left(\left[\dfrac{1}{s}\right] + 1 \right) s < 1,$$ where $ 0 < s < 1 $. I guess that $s$ must be in this range: $\left(0,\dfrac{1}{2}\right]$.But I do not know if my guess is true. If so, how I can prove it? Thank you.
try $$\left(\left[\dfrac{1}{s}\right] \right) < \frac{1}{s}-1$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/436712", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
p-adic Eisenstein series I'm trying to understand the basic properties of the p-adic Eisenstein series. Let $p$ be a prime number. Define the group $X = \begin{cases} \mathbb{Z}_p\times \mathbb{Z}/(p-1)\mathbb{Z} & \mbox{if }p \neq2 \\ \mathbb{Z}_2 & \mbox{if } p=2 \end{cases}$ where $\mathbb{Z}_p$ is the ring of $p$-adic integers. If $k\in X$ and $d$ is a positive integer then is it true that $d^{k-1}\in \mathbb{Z}_p$? If so, why? Thank you for your help.
Like your previous question, there's a slight philosophical issue: the question should not be "is $d^{k-1} \in \mathbb{Z}_p$", but "when and how is $d^{k-1}$ defined"? It's far from obvious what the definition should be, but once you know what the conventional definition is, the fact that it gives you something in $\mathbb{Z}_p$ whenever it is defined is totally obvious :-) So we have to do something to define $d^{x}$ for $x \in X$, and it will only work if $d \in \mathbb{Z}_p^\times$. The usual definition is as follows. Let me assume $p \ne 2$ -- you can work out the necessary modifications for $p = 2$ yourself. Suppose $x = (x_1, x_2)$ where $x_1 \in \mathbb{Z}_p$ and $x_2 \in \mathbb{Z} / (p-1)\mathbb{Z}$. Write $d = \langle d \rangle \tau(d)$, where $\tau(d)$ is a $(p-1)$-st root of unity and $\langle d \rangle$ is in $1 + p\mathbb{Z}_p$ (this can always be done, in a unique way, for any $d \in \mathbb{Z}_p^\times$). Then we define $$ d^x = \langle d \rangle^{x_1} \tau(d)^{x_2} $$ which is well-defined (using your previous question) and lies in $\mathbb{Z}_p^\times$. It's easy to check that this agrees with the "natural" definition of $d^x$ when $x \in \mathbb{Z}$ (which lives inside $X$ in the obvious way). In fact $X$ is exactly the set of group homomorphisms $\mathbb{Z}_p^\times \to \mathbb{Z}_p^\times$. If $k \in X$ we can now define $d^{k-1} = d^k / d$, where $d^k$ is defined as above. There's no sensible definition of $d^x$ for $x \in X$ and $d \in p\mathbb{Z}$, which is why the definition of the coefficient of $q^n$ in the $p$-adic Eisenstein series involves a sum over only those divisors of $n$ coprime to $p$.
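The decomposition $d = \langle d \rangle \tau(d)$ can be computed to finite precision: iterating $d \mapsto d^p$ converges $p$-adically to the Teichmüller representative $\tau(d)$ (a Python sketch working mod $p^8$; the iteration count is generous rather than sharp):

```python
p, prec = 5, 8
mod = p ** prec

def teichmuller(d):
    """τ(d): the (p-1)-st root of unity congruent to d mod p, to precision p^prec.
    Iterating d -> d^p gains at least one p-adic digit of agreement per step."""
    t = d % mod
    for _ in range(prec * 2):   # more than enough iterations for this precision
        t = pow(t, p, mod)
    return t

d = 2
t = teichmuller(d)
angle = (d * pow(t, -1, mod)) % mod      # ⟨d⟩ = d / τ(d), mod p^prec

print("τ(2) mod 5^8 =", t)
assert t % p == d                        # τ(d) ≡ d (mod p)
assert pow(t, p - 1, mod) == 1           # τ(d)^(p-1) = 1
assert angle % p == 1                    # ⟨d⟩ ∈ 1 + pZ_p
```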
{ "language": "en", "url": "https://math.stackexchange.com/questions/436774", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Partial fraction integration $\int \frac{dx}{(x-1)^2 (x-2)^2}$ $$\int \frac{dx}{(x-1)^2 (x-2)^2} = \int \frac{A}{x-1}+\frac{B}{(x-1)^2}+\frac{C}{x-2}+\frac{D}{(x-2)^2}\,dx$$ I use the cover up method to find that B = 1 and so is C. From here I know that the cover up method won't really work and I have to plug in values for x but that won't really work either because I have two unknowns. How do I use the coverup method?
To keep in line with the processes you are learning, we have: $$\frac{1}{(x-1)^2 (x-2)^2} = \frac{A}{x-1}+\frac{B}{(x-1)^2}+\frac{C}{x-2}+\frac{D}{(x-2)^2}$$ So we want to find $A, B, C, D$ given $$A(x-1)(x-2)^2 + B(x-2)^2 + C(x-1)^2(x-2) + D(x-1)^2 = 1$$ As you found, when $x = 1$, we have $B = 1$, and when $x = 2$, we have $D = 1$. Now, we need to solve for the other two unknowns by creating a system of two equations and two unknowns: $A, C$, given our known values of $B, D = 1$. Let's pick an easy values for $x$: $x = 0$, $x = 3$ $$A(x-1)(x-2)^2 + B(x-2)^2 + C(x-1)^2(x-2) + D(x-1)^2 = 1\quad (x = 0) \implies$$ $$A(-1)((-2)^2) + B\cdot (-2)^2 + C\cdot (1)\cdot(-2) + D\cdot (-1)^2 = 1$$ $$\iff - 4A + 4B - 2C + D = 1 $$ $$B = D = 1 \implies -4A + 4 - 2C + 1 = 1 \iff 4A + 2C = 4\tag{x = 0}$$ Similarly, $x = 3 \implies $ $2A + 1 + 4C + 4 = 1 \iff 2A + 4C = -4 \iff A + 2C = -2\tag{x = 3}$ Now we have a system of two equations and two unknowns and can solve for A, C. And solving this way, gives of $A = 2, C= -2$ Now we have $$\int\frac{dx}{(x-1)^2 (x-2)^2} = \int \frac{2}{x-1}+\frac{1}{(x-1)^2}+\frac{-2}{x-2}+\frac{1}{(x-2)^2}\,dx$$
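The decomposition (and the resulting integral) can be double-checked symbolically (assuming sympy is available):

```python
import sympy as sp

x = sp.symbols('x')
expr = 1 / ((x - 1)**2 * (x - 2)**2)
decomp = sp.apart(expr, x)
print(decomp)

target = 2/(x - 1) + 1/(x - 1)**2 - 2/(x - 2) + 1/(x - 2)**2
assert sp.simplify(decomp - target) == 0

# and the antiderivative, for reference:
print(sp.integrate(expr, x))
```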
{ "language": "en", "url": "https://math.stackexchange.com/questions/436855", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 0 }
How can I calculate this determinant? Please can you give me some hints to deal with this: Let $a_1, a_2, \dots, a_n \in \mathbb{R}$. Calculate $\det A$ where $A=(a_{ij})_{1\leqslant i,j\leqslant n}$ and $$a_{ij}=\begin{cases} a_i & \text{for } i+j=n+1,\\ 0 & \text{otherwise.} \end{cases}$$
Hint: The matrix looks like the following (for $n=4$; it gives the idea though): $$ \begin{bmatrix} 0 & 0 & 0 & a_1\\ 0 & 0 & a_2 & 0\\ 0 & a_3 & 0 & 0\\ a_4 & 0 & 0 & 0 \end{bmatrix} $$ What happens if you do a cofactor expansion in the first column? Try using induction.
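Following the hint, the induction yields $\det A = (-1)^{n(n-1)/2}\, a_1 a_2 \cdots a_n$; a small brute-force check by cofactor expansion (a Python sketch, fine for small $n$):

```python
def det(M):
    """Cofactor expansion along the first column (fine for small matrices)."""
    n = len(M)
    if n == 1:
        return M[0][0]
    total = 0
    for i in range(n):
        minor = [row[1:] for k, row in enumerate(M) if k != i]
        total += (-1)**i * M[i][0] * det(minor)
    return total

def antidiag(a):
    """Matrix with a_i at positions i+j = n+1 (1-based), zeros elsewhere."""
    n = len(a)
    return [[a[i] if i + j == n - 1 else 0 for j in range(n)] for i in range(n)]

for a in ([3], [3, 5], [2, 3, 5], [2, 3, 5, 7], [1, 2, 3, 4, 5]):
    n = len(a)
    prod = 1
    for v in a:
        prod *= v
    assert det(antidiag(a)) == (-1)**(n*(n-1)//2) * prod
print("det = (-1)^(n(n-1)/2) · a_1···a_n verified for n <= 5")
```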
{ "language": "en", "url": "https://math.stackexchange.com/questions/436931", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Sequence of natural numbers Numbers $1,2,...,n$ are written in sequence. It's allowed to exchange any two elements. Is it possible to return to the starting position after an odd number of movements? I know that is necessarily an even number of movements but I can't explain that!
Basically, if you make an odd number of switches, then at least one of the numbers has only been moved once (unless you repeat the same switch over and over which is an easy case to explain). But if you start in some configuration and move a number only once and want to return to the start, you must move again. Try induction -- an easy base case is $n=2$. A cleaner invariant: each exchange of two elements flips the parity of the number of inversions (pairs that appear out of order), and the starting sequence $1,2,\dots,n$ has zero inversions, so after an odd number of exchanges the inversion count is odd and you cannot be back at the start.
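The inversion-parity invariant is easy to demonstrate by simulation (a Python sketch; the number of moves and sequence length are arbitrary):

```python
import random

def inversions(seq):
    return sum(1 for i in range(len(seq))
                 for j in range(i + 1, len(seq)) if seq[i] > seq[j])

random.seed(1)
n = 8
seq = list(range(1, n + 1))        # starting position 1, 2, ..., n  (0 inversions)
moves = 0
for _ in range(501):               # an odd number of random exchanges
    i, j = random.sample(range(n), 2)
    before = inversions(seq) % 2
    seq[i], seq[j] = seq[j], seq[i]
    moves += 1
    # every exchange of two elements flips the parity of the inversion count:
    assert inversions(seq) % 2 != before

# after an odd number of moves the inversion count is odd, so we cannot be
# back at the sorted starting position (which has 0 inversions):
assert moves % 2 == 1 and inversions(seq) % 2 == 1
assert seq != list(range(1, n + 1))
```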
{ "language": "en", "url": "https://math.stackexchange.com/questions/437006", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 2, "answer_id": 1 }
Solution to $y'' - 2y = 2\tan^3x$ I'm struggling with this nonhomogeneous second order differential equation $$y'' - 2y = 2\tan^3x$$ I assumed that the form of the solution would be $A\tan^3x$ where A was some constant, but this results in a mess when solving. The back of the book reports that the solution is simply $y(x) = \tan x$. Can someone explain why they chose the form $A\tan x$ instead of $A\tan^3x$? Thanks in advance.
Have you learned variation of parameters? This is a method, rather than lucky guessing :) http://en.wikipedia.org/wiki/Variation_of_parameters
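Whatever method produced it, the book's answer is easy to verify directly: $y = \tan x$ gives $y'' = 2\tan x(\tan^2 x + 1)$, so $y'' - 2y = 2\tan^3 x$. A symbolic check (assuming sympy is available):

```python
import sympy as sp

x = sp.symbols('x')
y = sp.tan(x)

# check that y = tan x solves  y'' - 2y = 2 tan^3 x
lhs = sp.diff(y, x, 2) - 2*y
assert sp.simplify(lhs - 2*sp.tan(x)**3) == 0
print(sp.simplify(lhs))
```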
{ "language": "en", "url": "https://math.stackexchange.com/questions/437053", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Solving for the integrating factor in a Linear Equation with Variable Coefficients So I am studying Diff Eq and I'm looking through the following example. Solve the following equation: $(dy/dt)+2y=3 \rightarrow μ(t)*(dy/dt)+2*μ(t)*y=3*μ(t) \rightarrow (dμ(t)/dt)=2*μ(t) \rightarrow (dμ(t)/dt)/μ(t)=2 \rightarrow (d/dt)\ln|μ(t)|=2 \rightarrow \ln|μ(t)|=2*t+C \rightarrow μ(t)=c*e^{2*t} \rightarrow μ(t)=e^{2*t}$ So I have two questions regarding this solved problem. It appears that the absolute value sign is just tossed out of the problem without saying that as a result $c \ge 0$, is this not correct and if not why? Secondly and more importantly, I was confused by the assumption that $c=1$. Why should it be $1$ and would the answer differ if another number were selected is it just an arbitrary selection that doesn't influence the end result and just cancels out anyways?
Method 1: Calculus We have: $y' + 2y = 3$. Let's use calculus to solve this and see why these statements are okay. We have: $$\displaystyle \frac{\dfrac{dy}{dt}}{y - \dfrac{3}{2}} = -2$$ Integrating both sides yields: $$\displaystyle \int \frac{dy}{\left(y - \dfrac{3}{2}\right)} = -2 \int dt$$ We get: $\ln\left|y - \dfrac{3}{2}\right| = -2t + c$. Let's take the exponential of both sides; we get: $$\left|y - \dfrac{3}{2}\right| = e^{-2t + c} = e^{c}e^{-2t} = c e^{-2t}$$ Do you see what happened to the constant now? Now, let's use the definition of absolute value and see why it does not matter. For $y \ge \dfrac{3}{2}$, we have: $$\left(y - \dfrac{3}{2}\right) = c e^{-2t} \rightarrow y = c e^{-2t} +\dfrac{3}{2}$$ For $y \lt \dfrac{3}{2}$, we have: $$-\left(y - \dfrac{3}{2}\right) = c e^{-2t} \rightarrow y = -c e^{-2t} + \dfrac{3}{2}$$ However, we know that $c$ is an arbitrary constant, so we can rewrite this as: $$y = c e^{-2t} + \dfrac{3}{2}$$ We could also leave it as $-c$ if we choose, but it is dangerous to keep those pesky negatives around. Note: sketching the graph of $\left|y - \dfrac{3}{2}\right|$ (a V shape with its vertex at $y = \dfrac{3}{2}$) makes the two cases visible. Now, can you use this approach and see why it is identical to the integrating factor (it is exactly the same reasoning)? For your second question: You could make $c$ be anything you want. Let it be $y = ke^{-2t} + \dfrac{3}{2}$. Take the derivative and substitute back into the ODE and see if you get $3 = 3$ (you do). If they gave you initial conditions, then it would be a specific value, so the authors are being a little sloppy. They should have said something like $y(0) = \dfrac{5}{2}$, which would lead to $c = 1$. Let's work this statement: $y = ke^{-2t} + \dfrac{3}{2}$, $y' = -2 k e^{-2t}$. Substituting back into the original DEQ yields: $y' + 2y = -2 k e^{-2t} + 2(ke^{-2t} + \dfrac{3}{2}) = 3$, and $3 = 3$. What if we let $c = 1$? We have: $y' + 2y = -2 e^{-2t} + 2(e^{-2t} + \dfrac{3}{2}) = 3$, and $3 = 3$. So, you see that we can let $c$ be anything, unless given an IC.
Method 2: Integrating Factor Here is a step-by-step solution using an integrating factor. * *$y' + 2 y = 3$ *$\mu y' + 2 \mu y = 3 \mu$ *$\dfrac{d}{dt}(\mu y) = \mu y' + \mu' y$ *Choose $\mu$ so that $\mu' = 2 \mu \rightarrow \mu = e^{2t}$ *We have: $y'+2y = 3$, so: *$e^{2t}y' + 2e^{2t}y = 3e^{2t}$ *$\dfrac{d}{dt}(e^{2t}y) = 3 e^{2t}$ *$e^{2t} y = \dfrac{3}{2}e^{2t} + c$, thus *$y(t) = \dfrac{3}{2} + c e^{-2t} = c e^{-2t}+ \dfrac{3}{2}$
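Both methods can be sanity-checked numerically. The sketch below (my own check, not part of the original answer) differentiates $y(t)=ce^{-2t}+\frac{3}{2}$ by a central difference and confirms that $y'+2y=3$ for several choices of $c$, and that the initial condition $y(0)=\frac{5}{2}$ forces $c=1$:

```python
import math

def y(t, c):
    return c * math.exp(-2 * t) + 1.5

def dy(t, c, h=1e-6):
    # central-difference approximation to y'(t)
    return (y(t + h, c) - y(t - h, c)) / (2 * h)

# y' + 2y = 3 holds for *any* constant c ...
for c in [-2.0, 0.0, 1.0, 5.0]:
    for t in [0.0, 0.5, 2.0]:
        assert abs(dy(t, c) + 2 * y(t, c) - 3.0) < 1e-5

# ... while an initial condition pins c down: y(0) = 5/2 gives c = 1
assert abs(y(0.0, 1.0) - 2.5) < 1e-12
print("all checks passed")
```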
{ "language": "en", "url": "https://math.stackexchange.com/questions/437141", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
How can I show that $\sqrt{1+\sqrt{2+\sqrt{3+\sqrt\ldots}}}$ exists? I would like to investigate the convergence of $$\sqrt{1+\sqrt{2+\sqrt{3+\sqrt{4+\sqrt\ldots}}}}$$ Or more precisely, let $$\begin{align} a_1 & = \sqrt 1\\ a_2 & = \sqrt{1+\sqrt2}\\ a_3 & = \sqrt{1+\sqrt{2+\sqrt 3}}\\ a_4 & = \sqrt{1+\sqrt{2+\sqrt{3+\sqrt 4}}}\\ &\vdots \end{align}$$ Easy computer calculations suggest that this sequence converges rapidly to the value 1.75793275661800453265, so I handed this number to the all-seeing Google, which produced: * *OEIS A072449 * "Nested Radical Constant" from MathWorld Henceforth let us write $\sqrt{r_1 + \sqrt{r_2 + \sqrt{\cdots + \sqrt{r_n}}}}$ as $[r_1, r_2, \ldots r_n]$ for short, in the manner of continued fractions. Obviously we have $$a_n= [1,2,\ldots n] \le \underbrace{[n, n,\ldots, n]}_n$$ but as the right-hand side grows without bound (It's $O(\sqrt n)$) this is unhelpful. I thought maybe to do something like: $$a_{n^2}\le [1, \underbrace{4, 4, 4}_3, \underbrace{9, 9, 9, 9, 9}_5, \ldots, \underbrace{n^2,n^2,\ldots,n^2}_{2n-1}] $$ but I haven't been able to make it work. I would like a proof that the limit $$\lim_{n\to\infty} a_n$$ exists. The methods I know are not getting me anywhere. I originally planned to ask "and what the limit is", but OEIS says "No closed-form expression is known for this constant". The references it cites are unavailable to me at present.
For any $n\ge4$, we have $\sqrt{2n} \le n-1$. Therefore \begin{align*} a_n &\le \sqrt{1+\sqrt{2+\sqrt{\ldots+\sqrt{(n-2)+\sqrt{(n-1) + \sqrt{2n}}}}}}\\ &\le \sqrt{1+\sqrt{2+\sqrt{\ldots+\sqrt{(n-2)+\sqrt{2(n-1)}}}}}\\ &\le\ldots\\ &\le \sqrt{1+\sqrt{2+\sqrt{3+\sqrt{2(4)}}}}. \end{align*} Hence $\{a_n\}$ is a monotonic increasing sequence that is bounded above, and therefore converges.
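To see the bound in action, here is a quick numerical check (my own, not from the answer): evaluating $a_n$ from the inside out shows the sequence increasing toward $1.7579327566\ldots$ and staying below the explicit bound $\sqrt{1+\sqrt{2+\sqrt{3+\sqrt 8}}}$ from the last line:

```python
import math

def a(n):
    # evaluate sqrt(1 + sqrt(2 + ... + sqrt(n))) from the inside out
    val = 0.0
    for k in range(n, 0, -1):
        val = math.sqrt(k + val)
    return val

bound = math.sqrt(1 + math.sqrt(2 + math.sqrt(3 + math.sqrt(8))))

assert a(5) < a(10) < a(15)   # monotone increasing
assert a(40) < bound          # bounded above, as proved
print(a(40), bound)           # the limit ~1.757932756618 sits below ~1.7610
```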
{ "language": "en", "url": "https://math.stackexchange.com/questions/437209", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "63", "answer_count": 7, "answer_id": 3 }
How many cones pass through a given conic section? Given a conic section in the $xy$-plane, how many cones (infinite double cone) in the surrounding 3D space intersect the $xy$-plane at that conic? Is the family continuous, with a nice parametization? At least one must exist, and I expect symmetry in the conic to give a few such cones by reflection, but are there more than that? Edit: Following Peter Smith's answer, it seems possible that a continuum of such cones exist. If that were to be the case, what is the locus of the apexes of those cones?
To take the simplest case, take the circle to be centred at $(0, 0, 0)$ in the $xy$-plane; and now take any point $(0, 0, z)$ with $z \neq 0$. Then plainly there is a double cone of rays which pass through $(0, 0, z)$ and some point on the circle (and this is a right circular cone). So there are continuum-many distinct such cones (i.e. as many as there are points $(0, 0, z)$) which have the given circle as section in the $xy$-plane. [This observation generalizes, mutatis mutandis, to other sorts of conic section: you'll get continuum-many possibilities.]
{ "language": "en", "url": "https://math.stackexchange.com/questions/437267", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
What went wrong? Calculate mass given the density function Calculate the mass: $$D = \{1 \leq x^2 + y^2 \leq 4 , y \leq 0\},\quad p(x,y) = y^2.$$ So I said: $M = \iint_{D} {y^2\, dx\,dy} = [\text{polar coordinates}] = \int_{\pi}^{2\pi}d\theta \int_{1}^{2} r^3\sin^2\theta\, dr$. But when I calculated that I got the answer $0$ which is wrong, it should be $\frac{15\pi}{8}$. Can someone please tell me what I did wrong?
You have the set-up correct, but you have incorrectly computed the integral. Let's work it out together. $$\int_{\pi}^{2\pi}\int_{1}^{2} r^3\sin^2\theta \,dr\,d\theta = \int_{\pi}^{2\pi} \sin^2\theta\, d\theta \int_{1}^{2} r^3\,dr = \int_{\pi}^{2\pi} \sin^2\theta \,d\theta \left(\frac{2^4}{4} - \frac{1^4}{4}\right) = \frac{15}{4}\int_{\pi}^{2\pi} \sin^2\theta\, d\theta$$ Note that an antiderivative of $\sin^2 x$ is $\frac{1}{2}(x - \sin x\cos x)$, so $$\int_{\pi}^{2\pi} \sin^2\theta\, d\theta = \frac{1}{2}\big((2\pi - \sin(2\pi)\cos(2\pi)) - (\pi - \sin(\pi)\cos(\pi))\big) = \frac{\pi}{2}$$ Therefore the mass is $\frac{\pi}{2}\cdot\frac{15}{4} = \frac{15\pi}{8}$.
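As a numerical cross-check (my own sketch, not part of the original answer), a midpoint rule applied to each one-dimensional factor of the separated integral reproduces $\frac{15\pi}{8}\approx 5.8905$:

```python
import math

def midpoint(f, a, b, n=20000):
    # composite midpoint rule for a 1-D integral
    h = (b - a) / n
    return h * math.fsum(f(a + (i + 0.5) * h) for i in range(n))

ang = midpoint(lambda t: math.sin(t) ** 2, math.pi, 2 * math.pi)  # = pi/2
rad = midpoint(lambda r: r ** 3, 1.0, 2.0)                        # = 15/4
mass = ang * rad

print(mass, 15 * math.pi / 8)
assert abs(mass - 15 * math.pi / 8) < 1e-6
```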
{ "language": "en", "url": "https://math.stackexchange.com/questions/437331", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Lang $SL_2$ two formulas for Harish transform Let $G = SL_2$ and give it the standard Iwasawa decomposition $G = ANK$. Let: $$D(a) = \alpha(a)^{1/2} - \alpha(a)^{-1/2} := \rho(a) - \rho(a)^{-1}.$$ Now, Lang defines ($SL_2$, p.69) the Harish transform of a function $f \in C_c(G,K)$ to be $$Hf(a) := \rho(a)\int_Nf(an)dn = |D(a)|\int_{A\setminus G} f(x^{-1}ax)d\dot x$$ My trouble is in understanding why the two definitions agree for $\rho(a)\neq 1$. In the second integral, we are integrating over $A\setminus G$, so we can write $x = nk$, whence $$f(x^{-1}ax) = f((nk)^{-1}ank) = f(n^{-1}an) $$ since $f \in C_c(G,K)$, i.e. it is invariant w.r.t. conjugation by elements in $K$. But now, I don't know how to get rid of the remaining $n^{-1}$ and get the factor before the integral.
This equality is not at all obvious. Just before that section, it was proven that $$ \int_{A\backslash G} f(x^{-1}ax)\;dx\;=\; {\alpha(a)\over |D(a)|} \int_K\int_N f(kank^{-1})\;dn\;dk $$ for arbitrary $f\in C_c(G)$. For $f$ left and right $K$-invariant, the outer integral goes away, leaving just the integral over $N$. The key point is the identity proven another page or two earlier, something like $$ \int_N f(a^{-1}nan^{-1})\,dn\;=\; {1\over |\alpha(a)^{-1}-1|}\int_N f(n)\,dn $$ which follows from multiplying out.
{ "language": "en", "url": "https://math.stackexchange.com/questions/437412", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
convergence to a generalized Euler constant and relation to the zeta series Let $0 \leq a \leq 1$ be a real number. I would like to know how to prove that the following sequence converges: $$u_n(a)=\sum_{k=1}^n k^a- n^a \left(\frac{n}{1+a}+\frac{1}{2}\right)$$ For $a=1$: $$u_n(1)=\sum\limits_{k=1}^{n} k- n \left(\frac{n}{1+1}+\frac{1}{2}\right)= \frac{n(n+1)}{2}-\frac{n(n+1)}{2}=0$$ so $u_n(1)$ converges to $0$. For $a=0$: $$u_n(0)=\sum\limits_{k=1}^{n} 1- \left(\frac{n}{1+0}+\frac{1}{2}\right) = n-n+\frac{1}{2}=\frac{1}{2}$$ so $u_n(0)$ converges to $1/2$. In general, the only idea I have in mind is the Cauchy integral criterion, but it does not work because $k^a$ is an increasing function. Does the proof involve the zeta series?
From this answer you have an asymptotics $$ \sum_{k=1}^n k^a = \frac{n^{a+1}}{a+1} + \frac{n^a}{2} + \frac{a n^{a-1}}{12} + O(n^{a-3}) $$ Use it to prove that $u_n(a)$ converges.
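A quick numerical experiment (my own, in Python) is consistent with the asymptotics: the error in $u_n(a)$ should decay like $\frac{a\,n^{a-1}}{12}$, and indeed for $a=\frac12$ the sequence settles down (numerically it approaches a value near $-0.2079$, consistent with a constant term $\zeta(-a)$ in the expansion):

```python
import math

def u(n, a):
    # u_n(a) = sum_{k=1}^n k^a - n^a (n/(1+a) + 1/2), summed accurately
    s = math.fsum(k ** a for k in range(1, n + 1))
    return s - n ** a * (n / (1 + a) + 0.5)

a = 0.5
vals = {n: u(n, a) for n in (10000, 20000, 40000)}
print(vals)

# successive values get closer together, as the O(n^{a-1}) error term predicts
assert abs(vals[40000] - vals[20000]) < abs(vals[20000] - vals[10000])
assert abs(vals[40000] - vals[20000]) < 5e-4
```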
{ "language": "en", "url": "https://math.stackexchange.com/questions/437507", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Minimum and maximum of $ \sin^2(\sin x) + \cos^2(\cos x) $ I want to find the maximum and minimum value of this expression: $$ \sin^2(\sin x) + \cos^2(\cos x) $$
Since $\sin^2 t=\frac{1-\cos 2t}{2}$ and $\cos^2 t=\frac{1+\cos 2t}{2}$, your expression equals $$1+\frac12\big(\cos(2\cos x)-\cos(2\sin x)\big).$$ So we optimize $1+\frac12(\cos u-\cos v)$ under the constraint $u^2+v^2=4$ (take $u=2\cos x$, $v=2\sin x$). $\cos$ is an even function, so we can say that we optimize $1+\frac12\big(\cos(2u)-\cos(2\sqrt{1-u^2})\big)$, $u\in [0,1]$, which should be doable.
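A brute-force scan (my own check, not part of the answer) agrees with this reduction: numerically, the minimum and maximum of $\sin^2(\sin x)+\cos^2(\cos x)$ come out as $\cos^2 1\approx 0.2919$ (attained at $x=0$) and $1+\sin^2 1\approx 1.7081$ (attained at $x=\frac\pi2$):

```python
import math

def f(x):
    return math.sin(math.sin(x)) ** 2 + math.cos(math.cos(x)) ** 2

# dense grid over one full period [0, 2*pi)
xs = [2 * math.pi * i / 100000 for i in range(100000)]
vals = [f(x) for x in xs]

fmin, fmax = min(vals), max(vals)
print(fmin, fmax)

assert abs(fmin - math.cos(1.0) ** 2) < 1e-6
assert abs(fmax - (1 + math.sin(1.0) ** 2)) < 1e-6
```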
{ "language": "en", "url": "https://math.stackexchange.com/questions/437556", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
I need a better explanation of $(\epsilon,\delta)$-definition of limit I am reading the $\epsilon$-$\delta$ definition of a limit here on Wikipedia. * *It says that $f(x)$ can be made as close as desired to $L$ by making the independent variable $x$ close enough, but not equal, to the value $c$. So this means that $f(x)$ defines $y$ or the output of the function. So when I say $f(x)$ close as desired to $L$, I actually mean the result of the calculation that has taken place and produced a $y$ close to $L$ which sits on the $y$-axis? * How close is "close enough to $c$" depends on how close one wants to make $f(x)$ to $L$. So $c$ is actually the $x$'s that I am putting into my $f$ function. So one is picking $c$'s that are $x$'s and entering them into the function, and he actually is picking those $c$'s (sorry, $x$'s) to make his result closer to $L$, which is the limit of an approaching value of $y$? * It also of course depends on which function $f$ is, and on which number $c$ is. Therefore let the positive number $\epsilon$ be how close one wishes to make $f(x)$ to $L$; OK, so now one picks a letter $\epsilon$ which means error, and that letter is the value of "how much one needs to be close to $L$". So it is actually the $y$ value, or the result of the function again, that needs to be close of the limit which is the $y$-coordinate again? * strictly one wants the distance to be less than $\epsilon$. Further, if the positive number $\delta$ is how close one will make $x$ to $c$, Er, this means $\delta=x$, or the value that will be entered into $f$? * and if the distance from $x$ to $c$ is less than $\delta$ (but not zero), then the distance from $f(x)$ to $L$ will be less than $\epsilon$. Therefore $\delta$ depends on $\epsilon$. The limit statement means that no matter how small $\epsilon$ is made, $\delta$ can be made small enough. 
So essentially the $\epsilon$-$\delta$ definition of the limit is the corresponding $y$, $x$ definition of the function that we use to limit it around a value? Are my conclusions wrong? I am sorry, but it seems like an "Amazing Three Cup Shuffle Magic Trick" to me when my teacher tries to explain this to me. I always get lost as to what the letters $\epsilon$, $\delta$, $c$, $y$, and $x$ mean, when the function has $x$ and $y$ only.
If you are a concrete or geometrical thinker you might find it easier to think in these terms. You are player $X$ and your opponent is player $Y$. Player $Y$ chooses any horizontal lines they like, symmetric about $L$, but not equal to it. You have to choose two vertical lines symmetric about $c$ - these create a rectangle with $Y$'s lines. If $f(x)$ stays within the rectangle for every $x$ strictly between your vertical lines (except possibly $x=c$ itself), you win. If you always win, whatever $Y$ does, you have a limit. If $Y$ has a winning strategy, you don't.
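The game can be made concrete in code. In this sketch (my own example, not from the answer), player $X$ plays $f(x)=x^2$ near $c=2$, $L=4$, and replies to any band of half-width $\epsilon$ with $\delta=\min(1,\epsilon/5)$; since $|x-2|<1$ forces $|x+2|<5$, we get $|x^2-4|=|x-2|\,|x+2|<5\delta\le\epsilon$, so $X$ always wins:

```python
def f(x):
    return x * x

c, L = 2.0, 4.0

def delta_for(eps):
    # player X's winning reply to player Y's band of half-width eps
    return min(1.0, eps / 5.0)

for eps in [1.0, 0.1, 0.001]:
    d = delta_for(eps)
    # sample points with 0 < |x - c| < delta and check f stays in Y's band
    xs = [c + d * t / 1000 for t in range(-999, 1000) if t != 0]
    assert all(abs(f(x) - L) < eps for x in xs)
print("player X wins for every tested epsilon")
```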
{ "language": "en", "url": "https://math.stackexchange.com/questions/437615", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 0 }
Is Dirichlet function Riemann integrable? "Dirichlet function" is meant to be the characteristic function of rational numbers on $[a,b]\subset\mathbb{R}$. On one hand, a function on $[a,b]$ is Riemann integrable if and only if it is bounded and continuous almost everywhere, which the Dirichlet function satisfies. On the other hand, the upper integral of Dirichlet function is $b-a$, while the lower integral is $0$. They don't match, so that the function is not Riemann integrable. I feel confused about which explanation I should choose...
The Dirichlet function $f : [0, 1] \to \mathbb R$ is defined by $$f(x) = \begin{cases} 1, & x \in \mathbb Q \\ 0, & x \in [0, 1] - \mathbb Q \end{cases}$$ That is, $f$ is one at every rational number and zero at every irrational number. This function is not Riemann integrable. If $P = \{I_1, I_2, \ldots, I_n\}$ is a partition of $[0, 1]$, then $M_k = \sup_{I_k} f = 1$ and $m_k = \inf_{I_k} f = 0$, since every interval of non-zero length contains both rational and irrational numbers. It follows that $U(f; P) = 1$ and $L(f; P) = 0$ for every partition $P$ of $[0, 1]$, so $U(f) = 1$ and $L(f) = 0$ are not equal. The Dirichlet function is discontinuous at every point of $[0, 1]$ (in particular it is not continuous almost everywhere, so the first criterion in the question does not apply), and the moral of this example is that the Riemann integral of a highly discontinuous function need not exist.
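The failure can be illustrated in code, with one caveat: every floating-point number is rational, so the rational/irrational tag has to be tracked explicitly. In this sketch (my own illustration), tagging each subinterval with a rational sample point drives the Riemann sum to $1$, while irrational tags drive it to $0$, mirroring $U(f;P)=1$ and $L(f;P)=0$:

```python
from fractions import Fraction

def f(x):
    # Dirichlet indicator: 1 on (explicitly tagged) rationals, 0 otherwise
    return 1 if isinstance(x, Fraction) else 0

class IrrationalTag:
    """Stand-in for an irrational sample point, e.g. k/n + sqrt(2)/(2n)."""

n = 1000
width = Fraction(1, n)

riemann_rational = sum(f(Fraction(k, n)) * width for k in range(n))
riemann_irrational = sum(f(IrrationalTag()) * width for _ in range(n))

print(riemann_rational, riemann_irrational)  # 1 and 0
assert riemann_rational == 1
assert riemann_irrational == 0
```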
{ "language": "en", "url": "https://math.stackexchange.com/questions/437711", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "32", "answer_count": 4, "answer_id": 0 }
Evaluating $\int_0^1 \int_0^{\sqrt{1-x^2}}e^{-(x^2+y^2)} \, dy \, dx\ $ using polar coordinates Use polar coordinates to evaluate $\int_0^1 \int_0^{\sqrt{1-x^2}}e^{-(x^2+y^2)} \, dy \, dx\ $ I understand that we need to change $x^2+y^2$ to $r^2$ and then we get $\int_0^1 \int_0^{\sqrt{1-x^2}} e^{-(r^2)} \, dy \, dx\ $. Then I know I need to change the bounds with respect to $dy$ but I am unsure on how to do that and further. Please help me.
Hints: $$\bullet\;\;\;x=r\cos\theta\;,\;\;y=r\sin\theta\;,\;0\le\theta\le \frac\pi2\;\text{(why?). The Jacobian is}\;\;|J|=r$$ So the integral is $$\int\limits_0^{\pi/2}\int\limits_0^1 re^{-r^2}\,dr\,d\theta$$
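Carrying the hint to its end gives $\int_0^{\pi/2}\!\int_0^1 re^{-r^2}\,dr\,d\theta=\frac\pi2\cdot\frac12(1-e^{-1})=\frac\pi4(1-e^{-1})$. The sketch below (my own check) confirms this with a midpoint rule on the radial factor:

```python
import math

def g(r):
    return r * math.exp(-r * r)

# midpoint rule for the radial integral; the integrand has no theta dependence
n = 200000
h = 1.0 / n
radial = h * math.fsum(g((i + 0.5) * h) for i in range(n))
val = (math.pi / 2) * radial

exact = math.pi / 4 * (1 - math.exp(-1))
print(val, exact)
assert abs(val - exact) < 1e-9
```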
{ "language": "en", "url": "https://math.stackexchange.com/questions/437758", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 2 }
The preimage of continuous function on a closed set is closed. My proof is very different from my reference, hence I am wondering if I got this right? Apparently, $F$ is continuous, and the identity matrix is closed. Now we want to show that the preimage of a continuous function on a closed set is closed. Let $D$ be a closed set. Consider a sequence $x_n \to x_0$ in which $x_n \in f^{-1}(D)$, and we will show that $x_0 \in f^{-1}(D)$. Since $f$ is continuous, we have a convergent sequence $$\lim_{n\to \infty} f(x_n) = f(x_0) = y.$$ But each $f(x_n) \in D$ and $D$ is closed, so the limit $y = f(x_0)$ lies in $D$; hence $x_0 \in f^{-1}(D)$. So the preimage is closed since it contains all its limit points. Thank you.
Yes, it looks right. Alternatively, given a continuous map $f: X \to Y$, if $D \subseteq Y$ is closed, then $X \setminus f^{-1}(D) = f^{-1}(Y \setminus D)$ is open, so $f^{-1}(D)$ is closed.
{ "language": "en", "url": "https://math.stackexchange.com/questions/437829", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "35", "answer_count": 3, "answer_id": 0 }
The tangent plane of orthogonal group at identity. Why the tangent plane of orthogonal group at identity is the kernel of $dF_I$, the derivative of $F$ at identity, where $F(A) = AA^T$? Thank you ~
Proposition Let $Z$ be the preimage of a regular value $y\in Y$ under the smooth map $f: X \to Y$. Then the kernel of the derivative $df_x:T_x(X) \to T_y(Y)$ at any point $x \in Z$ is precisely the tangent space to $Z, T_x(Z).$ Proof: Since $f$ is constant on $Z$, $df_x$ is zero on $T_x(Z)$. But $df_x: T_x(X)\to T_y(Y)$ is surjective, so the dimension of the kernel of $df_x$ must be $$\dim T_x(X) - \dim T_y(Y) = \dim X - \dim Y = \dim Z.$$ Thus $T_x(Z)$ is a subspace of the kernel that has the same dimension as the complete kernel; hence $T_x(Z)$ must be the kernel. Proposition on Page 24, Guillemin and Pollack, Differential Topology Jellyfish, you should really read your textbook!
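Numerically (my own sketch, not from the book): since $F(I+\varepsilon H)=I+\varepsilon(H+H^T)+\varepsilon^2HH^T$, the derivative is $dF_I(H)=H+H^T$, whose kernel is exactly the skew-symmetric matrices, i.e. the tangent space of the orthogonal group at the identity:

```python
import numpy as np

def F(A):
    return A @ A.T

n, eps = 3, 1e-6
I = np.eye(n)
rng = np.random.default_rng(0)
M = rng.standard_normal((n, n))
skew = M - M.T   # a direction in the claimed kernel (skew-symmetric)
sym = M + M.T    # a symmetric direction, transverse to the orthogonal group

d_skew = (F(I + eps * skew) - F(I)) / eps
d_sym = (F(I + eps * sym) - F(I)) / eps

assert np.linalg.norm(d_skew) < 1e-4            # skew directions are killed
assert np.linalg.norm(d_sym - 2 * sym) < 1e-4   # dF_I(H) = H + H^T
print("kernel of dF_I = skew-symmetric matrices (numerically)")
```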
{ "language": "en", "url": "https://math.stackexchange.com/questions/437892", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
problem of probability and distribution Suppose there are 1 million parts of which $1\%$ are defective, i.e. the 1 million parts contain $10000$ defective parts. Now suppose we take samples of different sizes from the 1 million, like $10\%$, $30\%$, $50\%$, $70\%$, $90\%$ of the 1 million parts, and we need to calculate the probability of finding at most $5000$ defective parts in these samples. As the 1 million parts have $1\%$ defective parts, the probability of success $p$ is $0.01$ and of failure $q$ is $0.99$. Now suppose we add $100,000$ parts to the one million parts, which makes a total of $1,100,000$ parts, but these newly added $100,000$ parts do not have any defective parts. So now what will be the value of the success probability $p$ among the total $1,100,000$ parts for finding $5000$ defective parts? Please also give justification for the choice of $p$.
There was an earlier problem of which this is a variant. In the solution to that problem I did a great many computations. For this problem, the computations are in the same style, with a different value of $p$, the probability that any one item is defective. Some of the computations in the earlier answer were done for "general" (small) $p$, so they can be repeated with small modification. We had $10000$ defectives in a population of $1000000$, and added $100000$ non-defectives. So the new probability of a defective is $p=\frac{10000}{1100000}\approx 0.0090909$. We are taking a large sample, presumably without replacement. So the distribution of the number $X$ of defectives in a sample of size $n$ is hypergeometric, not binomial. However, that does not make a significant difference for the sample sizes of interest. As was the case in the earlier problem, the probability of $\le 5000$ defectives in a sample of size $n$ is nearly $1$ for $n$ well below a certain $n_0$, drops rapidly through (nearly) $0.5$ at $n_0$, and then falls very rapidly to $0$ as $n$ increases further. In the earlier problem, we had $n_0=500000$. In our new situation, the appropriate $n_0$ is obtained by solving the equation $$n_0\,p=5000.$$ We get $n_0=550000$. If $n$ is significantly below $550000$, then $\Pr(X\le 5000)$ will be nearly $1$. For example, that is the case already at $n=500000$. However, for $n$ quite close to $550000$, such as $546000$, the probability is not close to $1$. Similarly, on the other side of $550000$ but close, like $554000$, the probability that $X\le 5000$ will not be close to $0$. In the earlier answer, you were supplied with all the formulas to do any needed calculations if you want to explore the "near $550000$" region in detail.
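The shape of this curve can be seen with a normal approximation to the binomial (my own sketch; the hypergeometric correction is negligible here), using $\Pr(X\le 5000)\approx\Phi\big((5000.5-np)/\sqrt{np(1-p)}\big)$:

```python
import math

p = 10000 / 1100000          # = 1/110, about 0.0090909
n0 = 5000 / p                # the crossover sample size

def phi(z):
    # standard normal CDF
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

def prob_at_most_5000(n):
    mu = n * p
    sd = math.sqrt(n * p * (1 - p))
    return phi((5000.5 - mu) / sd)

print(round(n0))  # 550000
for n in (500000, 546000, 550000, 554000, 600000):
    print(n, round(prob_at_most_5000(n), 4))

assert prob_at_most_5000(500000) > 0.999
assert abs(prob_at_most_5000(550000) - 0.5) < 0.01
assert prob_at_most_5000(600000) < 0.001
```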
{ "language": "en", "url": "https://math.stackexchange.com/questions/437952", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
motivation of additive inverse of a Dedekind cut set My understanding behind motivation of additive inverse of a cut set is as follows : For example, for the rational number 2 the inverse is -2. Now 2 is represented by the set of rational numbers less than it and -2 is represented by the set of rational numbers less than it. So, if the cut set $\alpha$ represents a rational number then the inverse of $\alpha$ is the set $\{-p-r : p\notin \alpha , r >0\}$. But if the cut set does not represent a rational number then is the above definition is correct ? I think we will miss the first rational number which does not belong to $\alpha$ intuitively. Should not the set $\{-p : p\notin \alpha \}$ be the inverse now ? Confused.
I think the confusion arises when we are trying to identify a rational number, say $2$, with the cut $\{ x\mid x \in \mathbb{Q}, x < 2\}$. When using Dedekind cuts as a definition of real numbers it is important to stick to some convention and follow it properly. For example, to represent a real number we choose either 1) the set containing the smaller rationals, 2) the set containing the larger rationals, 3) or both of the sets. At the same time, after choosing one of these alternatives it is important to clarify whether the set contains an end point (like a least member in option 2) or a greatest member in option 1)) or not. In this particular question I believe the definition uses option 1) with the criterion that there is no greatest member in the set. When this definition is adopted and you define the additive inverse of a real number, you must ensure that the set corresponding to the additive inverse does not have a greatest member. This needs to be taken care of only when the cut represents a rational number.
{ "language": "en", "url": "https://math.stackexchange.com/questions/438031", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Is there any field of characteristic two that is not $\mathbb{Z}/2\mathbb{Z}$ Is there any field of characteristic two that is not $\mathbb{Z}/2\mathbb{Z}$? That is, if a field is of characteristic 2, then does this field have to be $\{0,1\}$?
To a beginner, knowing how one could think of an answer is at least as important as knowing an answer. For examples in Algebra, one needs (at least) two things: A catalogue of the basic structures that appear commonly in important mathematics, and methods of constructing new structures from old. Your catalogue and constructions will naturally expand as you study more, so you don't need to worry about this consciously. The moral I am trying to impart is the following: Instead of trying to construct a particular a structure with particular properties "from scratch" (like I constantly tried to when I was starting to learn these things), first search your basic catalogue. If that doesn't turn up anything, more often than not your search will hint at some basic construction from one of these examples that will work. When you start learning field theory, your basic catalogue should include all of the finite fields, the rationals, real numbers, complex numbers, and algebraic numbers. Your basic constructions should be subfields, field extensions, fields of fractions and algebraic closures. You should also have the tools from your basic ring theory; constructions like a quotient ring and ring extensions also help with this stuff. For example, Chris came up with his answer by starting with the easy example of a field of characteristic two and wanting to make it bigger. So he extended it with an indeterminate $X$ and as a result he got the field of rational functions with coefficients in $\mathbb{Z}/(2).$ Asaf suggested two ways: making it bigger by taking the algebraic closure, or extending the field by a root of a polynomial (I personally like to see this construction as a certain quotient ring).
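For a concrete small example beyond $\mathbb{Z}/2\mathbb{Z}$, here is a sketch (my own, in Python) of the four-element field $\mathbb{F}_4=\mathbb{F}_2[t]/(t^2+t+1)$: it has characteristic $2$ (since $1+1=0$) and every nonzero element is invertible:

```python
# Elements of F_4 written as pairs (a, b) meaning a + b*t, coefficients in Z/2,
# subject to the relation t^2 = t + 1 (i.e. we work modulo t^2 + t + 1).
def add(x, y):
    return (x[0] ^ y[0], x[1] ^ y[1])

def mul(x, y):
    a, b = x
    c, d = y
    # (a + bt)(c + dt) = ac + (ad + bc)t + bd*t^2, and t^2 = t + 1
    return ((a * c + b * d) % 2, (a * d + b * c + b * d) % 2)

elems = [(a, b) for a in (0, 1) for b in (0, 1)]
zero, one = (0, 0), (1, 0)

assert add(one, one) == zero                          # characteristic 2
for x in elems:
    if x != zero:
        assert any(mul(x, y) == one for y in elems)   # x is invertible
print("F_4 is a field of characteristic 2 with", len(elems), "elements")
```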
{ "language": "en", "url": "https://math.stackexchange.com/questions/438107", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 2 }
Prove that $\frac{100!}{50!\cdot2^{50}} \in \Bbb{Z}$ I'm trying to prove that : $$\frac{100!}{50!\cdot2^{50}}$$ is an integer . For the moment I did the following : $$\frac{100!}{50!\cdot2^{50}} = \frac{51 \cdot 52 \cdots 99 \cdot 100}{2^{50}}$$ But it still doesn't quite work out . Hints anyone ? Thanks
We have $100$ people at a dance class. How many ways are there to divide them into $50$ dance pairs of $2$ people each? (Of course we will pay no attention to gender.) Clearly there is an integer number of ways. Let us count the ways. We solve first a different problem. This is a tango class. How many ways are there to divide $100$ people into dance pairs, one person to be called the leader and the other the follower? Line up the people. There are $100!$ ways to do this. Now go down the line, pairing $1$ and $2$ and calling $1$ the leader, pairing $3$ and $4$ and calling $3$ the leader, and so on. We obtain each leader-follower division in $50!$ ways, since the groups of $2$ can be permuted. So there are $\dfrac{100!}{50!}$ ways to divide the people into $50$ leader-follower pairs to dance the tango. Now solve the original problem. To just count the number of democratic pairs, note that interchanging the leader/follower tags produces the same pair division. So each democratic pairing gives rise to $2^{50}$ leader/follower pairings. It follows that there are $\dfrac{100!}{2^{50}\cdot 50!}$ democratic pairings.
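The counting argument can be checked directly (my own sketch, not part of the answer): the quotient is an integer, and it equals the product of the odd numbers $1\cdot3\cdot5\cdots99$ (the double factorial $99!!$), since $100!=(2\cdot4\cdots100)(1\cdot3\cdots99)=2^{50}\,50!\cdot 99!!$:

```python
from math import factorial, prod

num = factorial(100)
den = factorial(50) * 2 ** 50

assert num % den == 0            # the quotient is an integer
pairings = num // den

# same count as the product of odd numbers 1*3*5*...*99
assert pairings == prod(range(1, 100, 2))
print(pairings)
```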
{ "language": "en", "url": "https://math.stackexchange.com/questions/438169", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "31", "answer_count": 7, "answer_id": 3 }
Proving by induction: $2^n > n^3 $ for any natural number $n > 9$ I need to prove that $$ 2^n > n^3\quad \forall n\in \mathbb N, \;n>9.$$ Now that is actually very easy if we prove it for real numbers using calculus. But I need a proof that uses mathematical induction. I tried the problem for a long time, but got stuck at one step - I have to prove that: $$ k^3 > 3k^2 + 3k + 1 $$ Hints???
For your "subproof": Try proof by induction (another induction!) for $k \geq 7$ $$k^3 > 3k^2 + 3k + 1$$ And you may find it useful to note that $k\leq k^2, 1\leq k^2$ $$3k^2 + 3k + 1 \leq 3(k^2) + 3(k^2) + 1(k^2) = 7k^2 \leq k^3 \quad\text{when}??$$
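Both inequalities are easy to spot-check over a finite range (my own sanity check; the induction of course covers all $n$):

```python
# helper inequality: k^3 > 3k^2 + 3k + 1 for k >= 7
for k in range(7, 2000):
    assert k ** 3 > 3 * k ** 2 + 3 * k + 1

# main claim: 2^n > n^3 for n > 9
for n in range(10, 2000):
    assert 2 ** n > n ** 3
print("checked up to 2000")
```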
{ "language": "en", "url": "https://math.stackexchange.com/questions/438260", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 6, "answer_id": 0 }
Norm inequality (upper bound) Do you think this inequality is correct? I have tried to prove it, but I cannot. Please help me. Assume that $\|X\| < \|Y\|$, where $\|X\|, \|Y\|\in (0,1)$ and $\|Z\| \gg \|X\|,\;\|Z\| \gg \|Y\|$. Prove that $$\|X+Z\|-\|Y+Z\| \leq \|X\|-\|Y\|$$ and that if $Z$ is increased, the left hand side becomes smaller. I tried some examples and it seemed that the inequality is correct, but I cannot prove it. Thank you very much.
The inequality is false as stated. Let $$ \begin{align} X &= (0.5,0)\\ Y &= (-0.7,0)\\ Z &= (z,0), 1 \ll z \end{align}$$ This satisfies all the conditions given. We have that $$ \|X + Z\| - \|Y + Z\| = z + 0.5 - (z - 0.7) = 1.2 \not\leq -0.2 = \|X\| - \|Y\| $$ From the Calculus point of view, in $n$ dimensions, we can write $$ \|X\| = \sqrt{x_1^2 + x_2^2 + \cdots + x_n^2} $$ We have that when $\|X\| \ll \|Z\|$, we can approximate $\|X+Z\| \approx \|Z\| + X\cdot \nabla(\|Z\|)$. Now, $\nabla(\|Z\|) = \frac{Z}{\|Z\|}$ by a direct computation, so we have that $$ \|X + Z\| - \|Y + Z\| \approx (X-Y) \cdot \frac{Z}{\|Z\|} $$ From this formulation we see that even in the cases where $X,Y$ are infinitesimal the inequality you hoped for cannot hold true. However, the right hand side of this approximation can be controlled by Cauchy inequality to get (using that $Z / \|Z\|$ is a unit vector). $$ (X-Y) \cdot \frac{Z}{\|Z\|} \leq \|X - Y\| $$ So perhaps what you are thinking about is the following corollary of the triangle inequality Claim: If $X,Y,Z$ are vectors in $\mathbb{R}^n$, then $$ \|X + Z\| - \|Y + Z\| \leq \|X - Y \| $$ Proof: We write $$ X + Z = (X - Y) + (Y + Z) $$ so by the triangle inequality $$ \|X + Z\| = \|(X - Y) + (Y+Z)\| \leq \|X - Y\| + \|Y + Z\| $$ rearranging we get $$ \|X + Z\| - \|Y + Z\| \leq \|X - Y\| $$ as desired. Remark: if we re-write the expression using $-Z$ instead of $Z$, the same claim is true in an arbitrary metric space: Let $(S,d)$ be a metric space. Let $x,y,z$ be elements of $S$. Then $$ d(x,z) - d(y,z) \leq d(x,y) $$.
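The counterexample and the triangle-inequality corollary are both easy to verify numerically (my own sketch, using NumPy):

```python
import numpy as np

X = np.array([0.5, 0.0])
Y = np.array([-0.7, 0.0])
Z = np.array([1000.0, 0.0])   # ||Z|| >> ||X||, ||Y||

lhs = np.linalg.norm(X + Z) - np.linalg.norm(Y + Z)
rhs = np.linalg.norm(X) - np.linalg.norm(Y)
print(lhs, rhs)               # about 1.2 vs -0.2: the proposed bound fails

assert lhs > rhs + 1.0        # violated by a wide margin
# ... but the corollary from the claim does hold:
assert lhs <= np.linalg.norm(X - Y) + 1e-12
```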
{ "language": "en", "url": "https://math.stackexchange.com/questions/438319", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
What is the difference between exponential symbol $a^x$ and $e^x$ in mathematics symbols? I want to know the difference between the exponential expressions $a^x$ and $e^x$ in mathematical notation; please give me some examples of both. I asked this question because the derivative rules table below contains both $a^x$ and $e^x$, and I don't know when I should use one and when I should use the other. Derivative rules table: [Derivative rules table source]
The two are essentially the same formula stated in different ways. They can be derived from each other as follows: Note that $$\frac{d}{dx}(e^x)=e^x \ln(e) = e^x$$ is a special case of the formula for $a^x$ because $e$ has the special property that $\ln (e) =1$ Also $a^x=e^{\ln(a) x}$, which is another way into the derivative for $a^x$. $$\frac{d}{dx}(e^{rx})=re^{rx}$$ by the chain rule. Let $r=\ln (a)$.
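A quick numerical check (my own) confirms that both rules in the table agree with a central-difference derivative, and that they coincide when $a=e$:

```python
import math

def deriv(f, x, h=1e-6):
    return (f(x + h) - f(x - h)) / (2 * h)

a, x0 = 3.0, 1.25

# d/dx a^x = a^x ln a   (and for a = e, ln e = 1 recovers d/dx e^x = e^x)
assert abs(deriv(lambda t: a ** t, x0) - a ** x0 * math.log(a)) < 1e-4
assert abs(deriv(math.exp, x0) - math.exp(x0)) < 1e-4

# a^x = e^{x ln a}, the identity behind the second derivation
assert abs(a ** x0 - math.exp(x0 * math.log(a))) < 1e-12
print("both rules check out")
```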
{ "language": "en", "url": "https://math.stackexchange.com/questions/438364", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 4, "answer_id": 1 }
Proving a set of linear functionals is a basis for a dual space I've seen some similar problems on the stackexchange and I want to be sure I am at least approaching this in a way that is sensible. The problem as stated: Let $V= \Bbb R^3$ and define $f_1, f_2, f_3 \in V^*$ as follows: $f_1(x,y,z)= x-2y ,\; f_2(x,y,z)= x+y+z,\;f_3(x,y,z)= y-3z$ Prove that $f_1, f_2, f_3$ is a basis for $V^*$ and and then find a basis for V for which it is the dual basis. Here's my problem: the question feels a bit circular. But this is what I attempted: To show that the linear functionals $f$ are a basis, we want that $f_i(x_j)=\delta_{ij}$, or that $f_i(x_j)=1$ if $i=j$ and that it is zero otherwise. That means that we want to set this up so that $$1= f_1(x,y,z)= x-2y$$ $$0= f_2(x,y,z)= x+y +z$$ $$0= f_3(x,y,z)= y-3z$$ That gives us three equations and three unknowns. Solving them we get $2x-2z=1$ for $x-z=\frac{1}{2}$ and $z=x-\frac{1}{2}$ and subbing into the equation for $f_3$ I get $0=y-3x-\frac{3}{2}$ which gets us $1=x-6x+3$ or $x=\frac{2}{5}$. That gives us $y=\frac{-3}{10}$ and $z=\frac{-1}{10}$. OK, this is where I am stuck on the next step. I just got what should be a vertical matrix I think, with the values $(\frac{2}{5}, \frac{-3}{10}, \frac{-1}{10})$ but I am not sure where to go from here. I am not entirely sure I set this up correctly. thanks EDIT: I do know that I have to show that $f_1, f_2, f_3 $ are linearly independent. That I think I can manage, but I am unsure how to fit it into the rest of the problem or if I am even approaching this right.
What about a direct approach? Suppose $\,a,b,c\in\Bbb R\,$ are such that $$af_1+bf_2+cf_3=0\in V^*\implies\;\forall\,v:=(x,y,z)\in\Bbb R^3\;,\;\;af_1v+bf_2v+cf_3v=0\iff$$ $$a(x-2y)+b(x+y+z)+c(y-3z)=0\iff$$ $$\iff (a+b)x-(2a-b-c)y+(b-3c)z=0$$ As the above is true for all $\;x,y,z\in\Bbb R\,$ , we must have $$\begin{align*}\text{I}&\;\;\;\;a+b=0\\\text{II}&\;\;\;\;2a-b-c=0\\\text{III}&\;\;\;\;b-3c=0\end{align*}\;\;\implies a=-b\;,\;\;c=2a-b=-3b\;,\;\;c=\frac13b\implies a=b=c=0$$ and we're done.
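The same conclusion, plus the dual basis the question asks for, drops out of a small linear-algebra computation (my own sketch, using NumPy): put the coefficients of $f_1,f_2,f_3$ in the rows of a matrix $A$; $\det A\ne0$ gives independence, and the columns of $A^{-1}$ are the vectors $v_j$ with $f_i(v_j)=\delta_{ij}$:

```python
import numpy as np

# rows = coefficients of f1, f2, f3 acting on (x, y, z)
A = np.array([[1.0, -2.0, 0.0],
              [1.0,  1.0, 1.0],
              [0.0,  1.0, -3.0]])

assert abs(np.linalg.det(A) + 10) < 1e-9     # det = -10 != 0: independent

V = np.linalg.inv(A)                         # columns are the dual basis
assert np.allclose(A @ V, np.eye(3))         # f_i(v_j) = delta_ij

# first dual-basis vector matches the (2/5, -3/10, -1/10) found in the question
assert np.allclose(V[:, 0], [0.4, -0.3, -0.1])
print(V.T)   # each row is one dual-basis vector
```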
{ "language": "en", "url": "https://math.stackexchange.com/questions/438449", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
Can anyone provide me a step-by-step proof for proving a function IS onto/surjective? I've seen the definition, I've seen several examples and anti-examples (e.g. the typical x squared example). I get the idea, but I can't seem to find a proof for proving that a function IS onto, with proper explanation start to finish. Given: * *$f: R$ $\rightarrow$ $R$ *$f(x) = -3x + 4$ Prove that the above function is "onto." I know that this IS onto, but what would a dry, stone cold proof look like for this, that says like Step 1 with justification, step 2 with justification, and so on? The closest thing I could find to what I'm looking for: http://courses.engr.illinois.edu/cs173/sp2009/lectures/lect_15_supp.pdf in Section 3. It says to prove that g(x) = x - 8 is onto, and it does so by setting x to be (y + 8). But...why choose that value? What formula or strategy is there for determining what x should be? It appears as though you want x to be whatever will get rid of other stuff (+8 against a -8). So with some basic algebra, I think I can set x to $-\frac13$y + $\frac43$. And this is valid by the definition of real numbers, yes? This properly cancels everything out so that f(x) = y. Is that really the end of the proof?.....or am I way off the track?
What you need to do to prove that a function is surjective is to take each value $y$ and find - any way you can - a value of $x$ with $f(x)=y$. If you succeed for every possible value of $y$, then you have proved that $f$ is surjective. So we take $x=-\cfrac 13y+ \cfrac 43$ as you suggest. This is well-defined (no division by zero, for example). Then $f(x)=-3x+4=-3\left(-\cfrac 13y+ \cfrac 43\right)+4=y-4+4=y$. So your formula covers every $y$ at once. And because you have covered every $y$ you can say that you have a surjection.
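To make the strategy concrete, the same check can be run mechanically (my own illustration, not part of the proof): for each target $y$, plug the candidate preimage $x=-\frac13 y+\frac43$ back into $f$ and confirm you recover $y$.

```python
def f(x):
    return -3 * x + 4

def preimage(y):
    # the candidate x from the algebra above: x = -y/3 + 4/3
    return -y / 3 + 4 / 3

# f(preimage(y)) should return exactly y for every real y
checks = [f(preimage(y)) for y in (-10.0, 0.0, 0.5, 7.0)]
```

Of course the finitely many sample values here are no substitute for the algebraic argument — they only confirm that the formula was derived correctly.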
{ "language": "en", "url": "https://math.stackexchange.com/questions/438501", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Integral representation of cosh x On Wolfram MathWorld, there's apparently an integral representation of $\cosh x$ that I'm unfamiliar with. I'm trying to prove it, but I can't figure it out. It goes \begin{equation}\cosh x=\frac{\sqrt{\pi}}{2\pi i}\int_{\gamma-i\infty}^{\gamma+i\infty} \frac{ds}{\sqrt{s}}\,e^{s+\frac{1}{4}\frac{x^{2}}{s}},\qquad \gamma >0\end{equation} The contour is taken along a vertical line with positive real part. I thought at first glance to use the residue theorem but it seems to be of no use here.
Expanding the difficult part of the exponential in a power series, the integral becomes $$ I = \sqrt\pi \sum_{k\geq0} \frac{(x^2/4)^k}{k!} \frac{1}{2\pi i}\int_{\Re s=\gamma} s^{-k-1/2}e^{s}\,ds. $$ The integral here is the inverse Laplace transform of the function $s^{-k-1/2}$ evaluated at the point $t=1$, given by $$ \mathcal{L}^{-1}[s^{-k-1/2}](t) = \frac1{2\pi i}\int_{\Re s=\gamma}s^{-k-1/2}e^{st}\,ds. $$ So we can look it up (http://mathworld.wolfram.com/LaplaceTransform.html): $$ \frac1{2\pi i}\int_{\Re s=\gamma}s^{-k-1/2}e^{s}\,ds = \frac{1}{\Gamma(k+1/2)}, $$ which also satisfies $$ \frac{\Gamma(1/2)}{\Gamma(k+1/2)} = \frac{1}{\frac12\times\frac32\times\cdots\times(k-\frac12)}, $$ where $\Gamma(1/2)=\sqrt\pi$. Simplifying, we get $$ \sum_{k\geq0} \frac{(x^2/4)^k}{k!} \frac{\sqrt\pi}{\Gamma(k+1/2)} = \sum_{k\geq0}\frac{x^{2k}}{(2k)!} = \cosh x. $$
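The final simplification — that $\sqrt\pi\,(x^2/4)^k/\bigl(k!\,\Gamma(k+\tfrac12)\bigr)=x^{2k}/(2k)!$, so the series sums to $\cosh x$ — is easy to confirm numerically. A quick sketch of my own using only the standard library (`math.gamma`):

```python
import math

def cosh_via_series(x, terms=40):
    # sum over k of (x^2/4)^k / k! * sqrt(pi) / Gamma(k + 1/2),
    # which should telescope to sum of x^(2k) / (2k)! = cosh(x)
    return sum((x * x / 4.0) ** k / math.factorial(k)
               * math.sqrt(math.pi) / math.gamma(k + 0.5)
               for k in range(terms))
```

Forty terms are far more than enough here; the series converges faster than factorially.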
{ "language": "en", "url": "https://math.stackexchange.com/questions/438588", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Trigonometry Equations. Solve for $0 \leq A \leq 360$, giving solutions correct to the nearest minute where necessary, a) $\cos^2 A -8\sin A \cos A +3=0$ Can someone please explain how to solve this? I've tried it myself with no luck. Thanks!
HINT: $\cos^2 A=\frac{1+\cos 2A}{2},$ $\sin A\cos A=\frac{\sin 2A}{2}$ and $\sin^2 2A+\cos^2 2A=1$
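To see the hint in action (a numeric sketch of my own): the substitutions turn the equation into $\frac{1+\cos 2A}{2}-4\sin 2A+3=0$, i.e. $\cos 2A-8\sin 2A=-7$, and a simple sign-change scan over $[0^\circ,360^\circ)$ finds its four roots — two of which are exactly $\arctan\frac23$ and $\arctan 2$.

```python
import math

def original(A):
    return math.cos(A) ** 2 - 8 * math.sin(A) * math.cos(A) + 3

def rewritten(A):
    # after applying the half-angle substitutions from the hint
    return (1 + math.cos(2 * A)) / 2 - 4 * math.sin(2 * A) + 3

def roots_deg(f, steps=3600):
    # locate sign changes on a 0.1-degree grid, then bisect
    found = []
    for i in range(steps):
        a, b = 360.0 * i / steps, 360.0 * (i + 1) / steps
        fa, fb = f(math.radians(a)), f(math.radians(b))
        if fa * fb < 0:
            for _ in range(60):
                m = 0.5 * (a + b)
                if fa * f(math.radians(m)) <= 0:
                    b = m
                else:
                    a, fa = m, f(math.radians(m))
            found.append(0.5 * (a + b))
    return found

roots = roots_deg(original)
```

The two forms agree at every angle, and `roots` comes out as approximately $33.69^\circ$, $63.43^\circ$, $213.69^\circ$, $243.43^\circ$.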
{ "language": "en", "url": "https://math.stackexchange.com/questions/438648", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 1 }
$\int \frac{dz}{z\sqrt{(1-{1}/{z^2})}}$ over $|z|=2$ I need help in calculating the integral of $$\int \frac{dz}{z\sqrt{\left(1-\dfrac{1}{z^2}\right)}}$$ over the circle $|z|=2$. (We're talking about the principal branch of the square root.) I'm trying to remember what methods we used to calculate this sort of integral in my CA textbook. I remember the main idea was using the residue theorem, but I don't remember the specifics of calculating residues. Thank you for your help!
$$\frac{1}{z \sqrt{1-\frac{1}{z^{2}}}} = \frac{1}{z} \Big( 1 + \frac{1}{2z^{2}} + O(z^{-4}) \Big) \quad \text{for } |z| >1 \implies \int_{|z|=2} \frac{1}{z \sqrt{1-\frac{1}{z^{2}}}} \ dz = 2 \pi i (1) = 2 \pi i $$
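The value $2\pi i$ can also be confirmed by brute-force numerical integration over the parametrization $z=2e^{it}$ (a sketch of my own; `cmath.sqrt` is the principal branch, and on $|z|=2$ the quantity $1-1/z^2$ stays within $1/4$ of $1$, so the branch cut along the negative reals is never crossed).

```python
import cmath
import math

def integrand(z):
    # principal branch of the square root via cmath.sqrt
    return 1.0 / (z * cmath.sqrt(1 - 1 / z ** 2))

def contour_integral(f, radius=2.0, n=4000):
    # Riemann sum of f(z(t)) z'(t) dt over z(t) = radius * e^{it};
    # for a smooth periodic integrand this converges extremely fast
    total = 0 + 0j
    for k in range(n):
        t = 2 * math.pi * k / n
        z = radius * cmath.exp(1j * t)
        dz = 1j * radius * cmath.exp(1j * t) * (2 * math.pi / n)
        total += f(z) * dz
    return total

integral = contour_integral(integrand)
```

Only the $1/z$ term of the Laurent expansion contributes, so `integral` agrees with $2\pi i$ to high precision.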
{ "language": "en", "url": "https://math.stackexchange.com/questions/438714", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Congruence in rings Let $R$ be a commutative (and probably unitary, if you like) ring and $p$ a prime number. If $x_1,\ldots,x_n\in R$ are elements of $R$, then we have $(x_1+\cdots+x_n)^p\equiv x_1^p+\cdots+x_n^p$ mod $pR$. Why is this true? I tried to show that in $R/pR$ their congruence classes are equal, but without sucess.
Just compute ;-) ... we have - as $R$ is commutative - by the multinomial theorem $$ (x_1 + \cdots + x_n)^p = \sum_{\nu_1 + \cdots + \nu_n = p} \frac{p!}{\nu_1! \cdots \nu_n!} x_1^{\nu_1} \cdots x_n^{\nu_n} $$ If all $\nu_i < p$, the denominator contains no factor $p$ (as $p$ is prime), hence $\frac{p!}{\nu_1! \cdots \nu_n!} \equiv 0 \pmod p$; that is, the only terms which survive reduction mod $pR$ are those where one $\nu_i = p$, hence the others are $0$, so $$ (x_1 + \cdots + x_n)^p = \sum_{\nu_1 + \cdots + \nu_n = p} \frac{p!}{\nu_1! \cdots \nu_n!} x_1^{\nu_1} \cdots x_n^{\nu_n} \equiv x_1^p + \cdots + x_n^p \pmod{pR}. $$
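The congruence is easy to spot-check in a concrete commutative ring such as $\Bbb Z/p\Bbb Z$ — and the primality of $p$ really matters, as a composite modulus shows. A quick sketch of my own:

```python
import random

def dream_holds(m, xs):
    # does (x1 + ... + xn)^m equal x1^m + ... + xn^m in Z/mZ ?
    return pow(sum(xs), m, m) == sum(pow(x, m, m) for x in xs) % m

random.seed(0)
prime_checks = [dream_holds(p, [random.randrange(10 ** 6) for _ in range(5)])
                for p in (2, 3, 5, 7, 11, 13)]

# with a composite modulus the congruence can fail:
# (1 + 1)^4 = 16 = 0 mod 4, but 1^4 + 1^4 = 2 mod 4
composite_counterexample = dream_holds(4, [1, 1])
```

Every prime check passes, while the modulus-$4$ example fails, matching the role of $p$ being prime in the argument above.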
{ "language": "en", "url": "https://math.stackexchange.com/questions/438819", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
sum of exterior angles of a closed broken line in space I am looking for a simple proof of the following fact: The sum of exterior angles of any closed broken line in space is at least $2 \pi$. I believe it equals $2 \pi$ if and only if the closed broken line is a convex planar polygon.
Quoting Curves of Finite Total Curvature by J. Sullivan: Lemma 2.1. (See [Mil50, Lemma 1.1] and [Bor47].) Suppose $P$ is a polygon in $\mathbb E^d$. If $P'$ is obtained from $P$ by deleting one vertex $v_n$, then $\operatorname{TC}(P')\leq\operatorname{TC}(P)$. We have equality here if $v_{n-1}v_nv_{n+1}$ are collinear in that order, or if $v_{n-2}v_{n-1}v_nv_{n+1}v_{n+2}$ lie convexly in some two-plane, but never otherwise. This total curvature $\operatorname{TC}$ is the sum of exterior angles you are asking about. You could reduce every closed broken line to a triangle by successive removal of vertices. Since the total curvature of a triangle is always $2\pi$, this gives the lower bound you assumed. And with the other condition, you can argue that in the case of equality the last vertex you deleted must have been in the same plane as the triangle, and subsequently every vertex deleted before that, and hence the whole curve must have been a planar and convex polygon.
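Both ends of this argument are easy to check numerically (a sketch of my own, not from [Mil50]): the turning angles of a convex planar polygon sum to exactly $2\pi$, while a genuinely non-planar closed broken line exceeds it.

```python
import math

def total_curvature(vertices):
    # sum of exterior (turning) angles at each vertex of a closed broken line
    n = len(vertices)
    total = 0.0
    for i in range(n):
        a, b, c = vertices[i - 1], vertices[i], vertices[(i + 1) % n]
        u = tuple(b[k] - a[k] for k in range(3))   # incoming edge
        v = tuple(c[k] - b[k] for k in range(3))   # outgoing edge
        dot = sum(u[k] * v[k] for k in range(3))
        nu = math.sqrt(sum(x * x for x in u))
        nv = math.sqrt(sum(x * x for x in v))
        cos_angle = max(-1.0, min(1.0, dot / (nu * nv)))
        total += math.acos(cos_angle)
    return total

# convex planar example: a regular hexagon in the z = 0 plane
hexagon = [(math.cos(2 * math.pi * k / 6), math.sin(2 * math.pi * k / 6), 0.0)
           for k in range(6)]
tc_hex = total_curvature(hexagon)

# a genuinely non-planar closed broken line (one vertex lifted out of plane)
skew = [(0, 0, 0), (1, 0, 0), (1, 1, 1), (0, 1, 0)]
tc_skew = total_curvature(skew)
```

Here `tc_hex` is exactly $2\pi$ ($6$ turns of $\pi/3$), while `tc_skew` works out to $\frac{3\pi}{2}+\frac{2\pi}{3}=\frac{13\pi}{6}>2\pi$, as the lemma predicts.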
{ "language": "en", "url": "https://math.stackexchange.com/questions/438909", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Show that the matrix $A+E$ is invertible. Let $A$ be an invertible matrix, and let $E$ be an upper triangular matrix with zeros on the diagonal. Assume that $AE=EA$. Show that the matrix $A+E$ is invertible. WLOG, we can assume $E$ is in Jordan form. If $A$ is in Jordan form, it's trivial. If $A$ is not in Jordan form, how can we use $AE=EA$ to transform $A$ to Jordan form? Any suggestion? Thanks.
$E^n=0$ and since $A,E$ commute you have $$A^{2n+1}=A^{2n+1}+E^{2n+1}=(A+E)(A^{2n}-A^{2n-1}E+...+E^{2n})$$ Since $A^{2n+1}$ is invertible, it follows that $A+E$ is invertible. P.S. I only used in the proof that $E$ is nilpotent and commutes with $A$, so more generally it holds (in any ring) that if $A$ is invertible, $E$ is nilpotent and $AE=EA$, then $A\pm E$ are invertible.
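A numerical illustration of the factorisation trick (a sketch of my own with `numpy`; the particular choice $A=2I+E^2$ is just a convenient invertible matrix that commutes with $E$, not the only one):

```python
import numpy as np

n = 4
E = np.triu(np.ones((n, n)), k=1)   # strictly upper triangular, so E^n = 0
A = 2 * np.eye(n) + E @ E           # commutes with E; eigenvalues all 2, so invertible

m = 2 * n + 1                       # odd exponent with m >= n, so E^m = 0
# A^m = A^m + E^m = (A + E) * S with S = sum_k (-1)^k A^(m-1-k) E^k,
# hence (A + E)^{-1} = S A^{-m}
S = sum((-1) ** k
        * np.linalg.matrix_power(A, m - 1 - k)
        @ np.linalg.matrix_power(E, k)
        for k in range(m))
inverse_of_sum = S @ np.linalg.inv(np.linalg.matrix_power(A, m))
identity_check = inverse_of_sum @ (A + E)
```

`identity_check` comes out as the identity matrix, so the explicit inverse built from the factorisation really works.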
{ "language": "en", "url": "https://math.stackexchange.com/questions/438976", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Solving Bessel integration What would be the solution of the Bessel integral $$b=k A(t)\int_0^{\infty} J_0 (k \rho)\, e^{-\rho^2/R^2}\, \rho \, d\rho$$ Can I solve that by using this formula? $$c= \int_0^{\infty}J_0(t) e^{-pt}\, dt= \frac{1}{\sqrt{1+p^2}}$$
According to Gradshteyn and Ryzhik, we have: $$\int_0^{\infty}x^{\mu}\exp(-\alpha x^2)J_{\nu}(\beta x)dx = \frac{\beta^{\nu}\Gamma\left(\frac{1}{2}\nu+\frac{1}{2}\mu+\frac{1}{2}\right)}{2^{\nu+1}\alpha^{\frac{1}{2}(\mu+\nu+1)}\Gamma(\nu+1)}\mbox{}_1 F_1\left(\frac{\nu+\mu+1}{2};\mbox{ }\nu+1;\mbox{ }-\frac{\beta^2}{4\alpha}\right).$$ Here $\mbox{}_1F_1$ is a hypergeometric function. Inputting the proper values gives $$\int_0^{\infty}\rho\exp\left(-\frac{\rho^2}{R^2}\right)J_0(k\rho)d\rho = \frac{\Gamma(1)}{2\frac{1}{R^2}\Gamma(1)}\mbox{}_1F_1\left(1;1;-\frac{k^2R^2}{4}\right).$$ Using a property of the hypergeometric function ($_1F_1(a;a;x) = \exp(x)$) we get: $$\frac{R^2}{2}\exp\left(-\frac{k^2R^2}{4}\right).$$
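Since the table lookup does all the work here, it is reassuring to confirm the closed form numerically. The sketch below uses only the standard library, computing $J_0$ from its integral representation $J_0(x)=\frac1\pi\int_0^\pi\cos(x\sin t)\,dt$ and truncating the outer integral at $8R$ (both of these choices are my own, not part of the answer).

```python
import math

def j0(x, m=300):
    # J0(x) = (1/pi) * int_0^pi cos(x sin t) dt, trapezoid rule;
    # the integrand's odd derivatives vanish at both endpoints,
    # so the trapezoid rule is extremely accurate here
    h = math.pi / m
    s = 0.5 * (math.cos(x * math.sin(0.0)) + math.cos(x * math.sin(math.pi)))
    s += sum(math.cos(x * math.sin(i * h)) for i in range(1, m))
    return s * h / math.pi

def simpson(f, a, b, m=2000):
    # composite Simpson rule, m even
    h = (b - a) / m
    s = f(a) + f(b)
    s += sum((4 if i % 2 else 2) * f(a + i * h) for i in range(1, m))
    return s * h / 3.0

def lhs(k, R):
    # truncated quadrature of int_0^inf rho exp(-rho^2/R^2) J0(k rho) drho
    return simpson(lambda r: r * math.exp(-r * r / (R * R)) * j0(k * r),
                   0.0, 8.0 * R)

def rhs(k, R):
    return R * R / 2.0 * math.exp(-k * k * R * R / 4.0)
```

For several $(k,R)$ pairs the two sides agree to well within quadrature error, supporting the Gradshteyn–Ryzhik formula and the $\mbox{}_1F_1(a;a;x)=e^x$ simplification.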
{ "language": "en", "url": "https://math.stackexchange.com/questions/439034", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
An odd question about induction. Given $n$ $0$'s and $n$ $1$'s distributed in any manner whatsoever around a circle, show, using induction on $n$, that it is possible to start at some number and proceed clockwise around the circle to the original starting position so that, at any point during the cycle, we have seen at least as many $0$'s as $1$'s.
Alternatively: Count the total number of ways to arrange $0$s and $1$s around a circle ($2n$ binary digits in total), consider the number of Dyck words of length $2n$, i.e., the $n$th Catalan number, and then use the Pigeonhole Principle. "QED"
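Independent of which proof you choose, the statement itself is easy to verify exhaustively for small $n$ (a brute-force sketch of my own): every circular arrangement of $n$ zeros and $n$ ones admits at least one valid starting point.

```python
from itertools import combinations

def has_good_start(arr):
    # is there a starting position from which every clockwise prefix
    # contains at least as many 0's as 1's?
    n = len(arr)
    for s in range(n):
        zeros = ones = 0
        for i in range(n):
            if arr[(s + i) % n] == 0:
                zeros += 1
            else:
                ones += 1
            if ones > zeros:    # prefix fails: strictly more 1's than 0's
                break
        else:
            return True
    return False

def all_arrangements_ok(n):
    # enumerate arrangements by choosing which of the 2n positions hold 1's
    for one_positions in combinations(range(2 * n), n):
        arr = [0] * (2 * n)
        for i in one_positions:
            arr[i] = 1
        if not has_good_start(arr):
            return False
    return True

results = [all_arrangements_ok(n) for n in range(1, 6)]
```

For $n\le5$ this checks all $\binom{2n}{n}$ arrangements directly, and every one of them has a good starting point.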
{ "language": "en", "url": "https://math.stackexchange.com/questions/439105", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 0 }
Uniform grid on a disc Do there exist any known methods of drawing a uniform grid on a disk? I am looking for a map that converts a grid on a square to a grid on a disk.
There are many possibilities to map a square on a disk. For example one possibility is: $$ \phi(x,y) = \frac{(x,y)}{\sqrt{1+\min\{x^2,y^2\}}} $$ which moves the points along the line through the origin. If you also want the map to maintain the infinitesimal area, it's a little bit more complicated. One possibility is to look for a map $\phi(x,y)$ which sends the disk to a rectangle while keeping the vertical lines vertical. On each vertical band you can subdivide the strip into equal parts. This means that you impose $\phi(x,y) = (f(x),y/\sqrt{1-x^2})$. The condition that the map preserves the area becomes: $$ \frac{f'(x)}{\sqrt{1-x^2}} = 1 $$ i.e. $f'(x) = \sqrt{1-x^2}$ which, by integration, gives $$ f(x) = \frac 1 2 (\arcsin x + x\sqrt{1-x^2}). $$
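The key identity $f'(x)=\sqrt{1-x^2}$ for the stated $f$ is easy to verify with a central difference (a quick sketch of my own). Note also that $f(\pm1)=\pm\pi/4$, so the image rectangle $[-\pi/4,\pi/4]\times[-1,1]$ has area $\pi$, matching the area of the unit disk.

```python
import math

def f(x):
    # f(x) = (arcsin x + x * sqrt(1 - x^2)) / 2
    return 0.5 * (math.asin(x) + x * math.sqrt(1.0 - x * x))

def f_prime_numeric(x, h=1e-6):
    # central difference approximation of f'(x)
    return (f(x + h) - f(x - h)) / (2.0 * h)

samples = [-0.9, -0.5, 0.0, 0.3, 0.8]
errors = [abs(f_prime_numeric(x) - math.sqrt(1.0 - x * x)) for x in samples]
```

The numerical derivative matches $\sqrt{1-x^2}$ to roughly machine-limited central-difference accuracy at every sample point.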
{ "language": "en", "url": "https://math.stackexchange.com/questions/439161", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Prove that $\vdash p \lor \lnot p$ is true using natural deduction I'm trying to prove that $p \lor \lnot p$ is true using natural deduction. I want to do this without using any premises. As it's done in a second using a truth table and because it is so intuitive, I would think that this proof shouldn't be too difficult, but I am not able to construct one.
If you have the definition Cpq := ANpq and you have that A-commutes, and Cpp, then you can do this really quickly. Since we have Cpp, by the definition we have ANpp. By A-commutation we then have ApNp. More formally, I'll first point out that natural deduction allows for definitions as well as uniform substitution on theses. So, again, here's our definition Cpq := ANpq. (or C := AN for short) First one of the lemmas: * * * *p hypothesis *Cpp 1-1 C-in Now for another of the lemmas: * * * *Apq hypothesis * * * * *p hypothesis * * * * *Aqp 2 A-in * * *CpAqp 2-3 C-in * * * * *q hypothesis * * * * *Aqp 5 A-in * * *CqAqp 5-6 C-in * * *Aqp 1, 4, 7 A-out *CApqAqp Now we have the following sequence of theses: 1 Cpp by the above 2 CApqAqp by the above 3 ANpp definition applied to thesis 1 4 CANppApNp 2 p/Np, q/p (meaning in thesis 2 we substitute p with Np and q with p) 5 ApNp 3, 4 C-out
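Whatever proof system you use, it is a useful cross-check that each thesis above is a tautology. Here is a tiny evaluator of my own for the Łukasiewicz (Polish) notation used in this answer ($A$ = or, $N$ = not, $C$ = material implication):

```python
from itertools import product

def evaluate(expr, env):
    # recursive-descent evaluation of a prefix (Polish) formula;
    # returns (truth value, index just past the parsed subformula)
    def go(i):
        ch = expr[i]
        if ch == 'N':
            v, j = go(i + 1)
            return (not v), j
        if ch in 'AC':
            a, j = go(i + 1)
            b, k = go(j)
            return (a or b, k) if ch == 'A' else ((not a) or b, k)
        return env[ch], i + 1        # a propositional variable
    value, _ = go(0)
    return value

def tautology(expr, variables):
    # true iff expr evaluates to True under every assignment
    return all(evaluate(expr, dict(zip(variables, vals)))
               for vals in product([False, True], repeat=len(variables)))
```

Running it over the theses — `Cpp`, `CApqAqp`, the substitution instance `CANppApNp`, and the goal `ApNp` — confirms each is a tautology, while a non-thesis like `Apq` is correctly rejected.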
{ "language": "en", "url": "https://math.stackexchange.com/questions/439291", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 5, "answer_id": 4 }
Given that $\cos x =-3/4$ and $90^\circ<x<180^\circ$, find $\tan x$ and $\csc x$ Given that $\;\cos x =-\frac{3}{4}\,$ and $\,90^\circ<x<180^\circ,\,$ find $\,\tan x\,$ and $\,\csc x.$ This question is quite unusual compared to the rest of the questions in the chapter; can someone please explain how it is solved? I tried the Pythagorean Theorem, but no luck. Is it possible to teach me how to use the circle diagram?
As $90^\circ< x<180^\circ,\tan x <0,\sin x>0$ So, $\sin x=+\sqrt{1-\cos^2x}=...$ $\csc x=\frac1{\sin x}= ... $ and $\tan x=\frac{\sin x}{\cos x}=...$
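Numerically (a sketch of my own): `math.acos` already returns the second-quadrant angle here, and the exact values work out to $\sin x=\frac{\sqrt7}{4}$, $\tan x=-\frac{\sqrt7}{3}$, $\csc x=\frac{4}{\sqrt7}$.

```python
import math

x = math.acos(-0.75)     # acos lands in [0, pi]; here 90 deg < x < 180 deg
sin_x = math.sin(x)      # positive in the second quadrant: sqrt(7)/4
tan_x = math.tan(x)      # negative in the second quadrant: -sqrt(7)/3
csc_x = 1.0 / sin_x      # 4/sqrt(7)
```

This matches the sign analysis above: with $x\approx138.59^\circ$, $\tan x<0$ and $\csc x>0$.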
{ "language": "en", "url": "https://math.stackexchange.com/questions/439369", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 4, "answer_id": 2 }