How many subgroups of $\Bbb Z_5\times \Bbb Z_6$ are isomorphic to $\Bbb Z_5\times \Bbb Z_6$ I am trying to find the answer to the question in the title. The textbook's answer is only $\Bbb Z_5\times \Bbb Z_6$ itself. But I think along the following lines: since 5 and 6 are relatively prime, $\Bbb Z_5\times \Bbb Z_6$ is isomorphic to $\mathbb{Z}_{30}$. And also, since 2, 3 and 5 are pairwise relatively prime, $\Bbb Z_2\times \Bbb Z_3\times \Bbb Z_5$ should also be isomorphic to $\Bbb Z_5\times \Bbb Z_6$. Am I wrong? Thank you
The underlying sets of these groups are by definition $$\mathbb{Z}_2 \times \mathbb{Z}_3 \times \mathbb{Z}_5=\{(a,b,c):a \in \mathbb{Z}_2 \text{ and } b \in \mathbb{Z}_3 \text{ and } c \in \mathbb{Z}_5\}$$ and $$\mathbb{Z}_5 \times \mathbb{Z}_6=\{(a,b):a \in \mathbb{Z}_5 \text{ and } b \in \mathbb{Z}_6\}.$$ So while $\mathbb{Z}_5 \times \mathbb{Z}_6$ is trivially a subgroup of $\mathbb{Z}_5 \times \mathbb{Z}_6$, the group $\mathbb{Z}_2 \times \mathbb{Z}_3 \times \mathbb{Z}_5$ is not even a subset of $\mathbb{Z}_5 \times \mathbb{Z}_6$. In fact, they have no elements in common. However, the groups $(\mathbb{Z}_2 \times \mathbb{Z}_3 \times \mathbb{Z}_5,+)$ and $(\mathbb{Z}_5 \times \mathbb{Z}_6,+)$ are isomorphic (they have essentially the same structure). In fact, they are isomorphic to infinitely many other groups.
{ "language": "en", "url": "https://math.stackexchange.com/questions/361719", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Maximal compact subgroups of $GL_n(\mathbb{R})$. The subgroup $O_n=\{M\in GL_n(\mathbb{R}) \mid {}^t\!M M = I_n\}$ is closed in $GL_n(\mathbb{R})$ because it's the inverse image of the closed set $\{I_n\}$ by the continuous map $X\mapsto {}^t\!X X$. $O_n$ is also bounded in $GL_n(\mathbb{R})$, for example this is clear by considering the norm $\|X\| = \sqrt{\operatorname{tr}({}^t\!X X)}$ (elements of $O_n$ are bounded by $\sqrt{n}$), so $O_n$ should be a compact subgroup of $GL_n(\mathbb{R})$. I see it claimed without proof in many places that $O_n$ is a maximal compact subgroup. How can we see this?
Let $G \subset GL_n(\mathbb{R})$ be a compact group containing $O(n)$ and let $M \in G$. Using polar decomposition, $$M=OS \ \text{for some} \ O \in O(n), \ S \in S_n^{++}(\mathbb{R}).$$ Since $O(n) \subset G$, we deduce that $S \in G$. Because $G$ is compact, $(S^n)$ has a convergent subsequence; since $S$ is diagonalizable, this is possible if and only if the eigenvalues of $S$ are $\leq 1$. The same argument works for $S^{-1}$, so $1$ is the only eigenvalue of $S$, i.e. $S=I_n$, hence $M=O \in O(n)$. Consequently, $G= O(n)$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/361821", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 3, "answer_id": 1 }
Help with proof that $I = \langle 2 + 2i \rangle$ is not a prime ideal of $Z[i]$ (Note: $Z[i] = \{a + bi\ |\ a,b\in Z \}$) This is what I have so far. Proof: If $I$ is a prime ideal of $Z[i]$ then $Z[i]/I$ must also be an integral domain. Now (I think this next step is right, I'm not sure though), $$ Z[i]/I = \{a+bi + \langle 2 + 2i \rangle\ | a,b,\in Z \}. $$ So, let $a=b=2$, then we can see that $$ (2+2i) \cdot \langle 2 + 2i \rangle = \langle 4 - 4 \rangle = 0 $$ Thus, $Z[i]/I$ has a zero-divisor. Thus, $Z[i]/I$ is not an integral domain which means that $I$ is not a prime ideal of $Z[i]$. $\square$ Now if my proof is right, am I right to think that $(2+2i) \cdot \langle 2 + 2i \rangle $ represents the following: $$ (2+2i) \cdot \langle 2 + 2i \rangle = (2+2i) \cdot \{2 +2i^2, 4 + 4i^2, 6 +6i^2, 8 + 8i^2, \ldots \} = \{0, 0, 0, 0, \ldots\} = 0? $$
A much simpler argument: Note that $2,1+i\notin \langle 2+2i\rangle$, yet $2\cdot (1+i)=2+2i\in \langle 2+2i\rangle$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/361875", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 2 }
A question concerning fundamental groups and whether a map is null-homotopic. Is it true that if $X$ and $Y$ are topological spaces, and $f:X \rightarrow Y$ is a continuous map and the induced group homomorphism $\pi_1(f):\pi_1(X) \rightarrow \pi_1(Y)$ is the trivial homomorphism, then we have that $f$ is null-homotopic?
Take $X=S^{2}$, $Y=S^{2}$, and the map $f(x)=-x$. This map has degree $-1 \neq 0$, therefore it is not nullhomotopic. However, $\pi_{1} (S^{2})$ is trivial, so the induced map will be between trivial groups, and is thus trivial. The claim you're making is too strong because it asserts that whenever $Y$ is simply connected, then any continuous map into $Y$ is null homotopic.
{ "language": "en", "url": "https://math.stackexchange.com/questions/361949", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 0 }
Find the last two digits of $ 7^{81} ?$ I came across the following problem and do not know how to tackle it. Find the last two digits of $ 7^{81} ?$ Can someone point me in the right direction? Thanks in advance for your time.
$\rm{\bf Hint}\ \ mod\,\ \color{#c00}2n\!: \ a\equiv b\, \Rightarrow\, mod\,\ \color{#c00}4n\!:\ \, a^2 \equiv b^2\ \ by \ \ a^2\! = (b\!+\!\color{#c00}2nk)^2\!=b^2\!+\!\color{#c00}4nk(b\!+\!nk)\equiv b^2$ $\rm So,\, \ mod\,\ \color{}50\!:\, 7^{\large 2}\!\equiv -1\Rightarrow mod\ \color{}100\!:\,\color{#0a0}{7^{\large 4} \equiv\, 1}\:\Rightarrow\:7^{\large 1+4n}\equiv\, 7\, \color{#0a0}{(7^{\large 4})}^{\large n} \equiv\, 7 \color{#0a0}{(1)}^{\large n} \equiv 7 $
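A quick numeric confirmation of the hint's conclusion (added here as a sanity check; it assumes Python's built-in three-argument `pow` for modular exponentiation):

```python
# Last two digits of 7**81 = 7**81 mod 100.
print(pow(7, 81, 100))  # -> 7, i.e. the last two digits are 07
```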
{ "language": "en", "url": "https://math.stackexchange.com/questions/362012", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 5, "answer_id": 3 }
Power Series Solution for $e^xy''+xy=0$ $$e^xy''+xy=0$$ How do I find the power series solution to this equation, or rather, how should I go about dealing with the $e^x$? Thanks!
When trying to find a series to represent something, it's important to decide what kind of a series you want. Even if you're dealing with power series, a small change in notation between $\displaystyle \sum a_n x^n$ and $\displaystyle\sum \frac{a_nx^n}{n!}$ can lead to substantial changes. In particular, we have that, if $f(x)=\sum a_n x^n$, then $$e^x f(x) = \left(\sum \frac{x^n}{n!}\right) \left(\sum a_n x^n\right) = \sum \left(\sum_{i+j=n}\frac{a_i}{j!} \right) x^n, $$ which, while it will lead to a perfectly good recurrence yielding a power series solution for a problem like this, is somewhat awkward, unwieldy, and likely not to lead to a recurrence that you can recognize or explicitly solve. However, if $f(x)=\sum \frac{a_n x^n}{n!}$, then $$e^x f(x) = \left(\sum \frac{x^n}{n!}\right) \left(\sum \frac{a_n x^n}{n!}\right) = \sum \left(\sum_{i+j=n}\frac{a_i n!}{i!j!} \right) \frac{x^n}{n!}=\sum \left(\sum_{k\leq n}a_k \binom{n}{k} \right) \frac{x^n}{n!}, $$ which is a nicer looking expression. Additionally, the power series expansion of $f'(x)$ has a nice form due to the cancellation between the $n$ in $n!$ and the $n$ in $D(x^n)=nx^{n-1}$. With this in mind, I suggest a slight change on the hint of @M.Strochyk. Hint: Expand $\displaystyle y(x)=\sum \frac{a_nx^n}{n!}$ and $\displaystyle e^x=\sum \frac{x^n}{n!}$ and substitute them into the equation.
{ "language": "en", "url": "https://math.stackexchange.com/questions/362089", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 0 }
Proving two graphs are isomorphic I need to prove that the following two countable undirected graphs $G_1$ and $G_2$ are isomorphic: Set of vertices of $G_1$ is $\mathbb{N}$ and there is an edge between $i$ and $j$ if and only if the $j$ th bit of the binary representation of $i$ is $1$ or the $i$ th bit of the binary representation of $j$ is $1$. In the other graph $G_2$, the set of vertices is $\mathbb{N}_+ := \lbrace n\in\mathbb{N} : n>0\rbrace$ and there is an edge between $n$ and $m$, for $n>m$, if and only if $n$ is divisible by $p_m$, the $m$ th prime. Any hints or ideas would be appreciated.
HINT: These are both the Rado graph, which is the unique countable graph with the following extension property: if $U$ and $V$ are disjoint finite sets of vertices of the graph, there is a vertex $x$ connected to each vertex in $U$ and to no vertex in $V$. The link actually demonstrates this for $G_1$, and the same article proves uniqueness. Thus, you need only prove that $G_2$ has the extension property.
{ "language": "en", "url": "https://math.stackexchange.com/questions/362151", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Coefficients of series given by generating function How do I find the coefficients of this infinite series, given by the generating function $$g(x)=\sum_{n=0}^{\infty}a_nx^n=\frac{1-11x}{1-(3x^2+10x)}$$ I tried to expand it like the Fibonacci sequence, using geometric series and the binomial theorem, but without any success.
Use (1) the fact that $1-3x^2-10x=(1-ax)(1-bx)$ for some $a$ and $b$, (2) the fact that $$ \frac{1-11x}{(1-ax)(1-bx)}=\frac{c}{1-ax}+\frac{d}{1-bx}, $$ for some $c$ and $d$, and (3) the fact that, for every $e$, $$ \frac1{1-ex}=\sum_{n\geqslant0}e^nx^n. $$ Then put all these pieces together to deduce that, for every $n\geqslant0$, $$ a_n=c\cdot a^n+d\cdot b^n. $$ Edit: You might want to note that $g(0)=1$ hence $a_0=1$, which yields $c+d=1$. Likewise, $1/(1-u)=1+u+o(u)$ when $u\to0$ hence $g(x)=(1-11x)(1+10x+o(x))=1-x+o(x)$. This shows that $g'(0)=-1$ thus $a_1=-1$ and $ca+(1-c)b=-1$, that is, $c=(1+b)/(b-a)$. Finally, $a$ and $b$ are the roots of the denominator $1-3x^2-10x$ and, for every $n\geqslant0$, $$ a_n=\frac{(1+b)\cdot a^n-(1+a)\cdot b^n}{b-a}. $$
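As a sanity check (not part of the original answer), the coefficients can also be generated from the recurrence $a_n = 10a_{n-1}+3a_{n-2}$ read off the denominator $1-10x-3x^2$, and compared with the closed form; a minimal Python sketch, using the roots $a,b = 5\pm2\sqrt7$ of the factorization $(1-ax)(1-bx)$:

```python
import math

r7 = math.sqrt(7)
a, b = 5 + 2 * r7, 5 - 2 * r7          # 1 - 10x - 3x^2 = (1 - ax)(1 - bx)

def closed_form(n):
    return ((1 + b) * a**n - (1 + a) * b**n) / (b - a)

# Recurrence read off the denominator: a_n = 10 a_{n-1} + 3 a_{n-2}
coeffs = [1, -1]
for n in range(2, 10):
    coeffs.append(10 * coeffs[-1] + 3 * coeffs[-2])

for n, c in enumerate(coeffs):
    assert abs(closed_form(n) - c) < 1e-6 * max(1.0, abs(c))
print(coeffs)  # [1, -1, -7, -73, -751, ...]
```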
{ "language": "en", "url": "https://math.stackexchange.com/questions/362223", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 2 }
Number of bases of an n-dimensional vector space over a q-element field. If I have an n-dimensional vector space over a field with q elements, how can I find the number of bases of this vector space?
There are $q^n-1$ ways of choosing the first element, since we can't choose zero. The subspace generated by this element has $q$ elements, so there are $q^n-q$ ways of choosing the second element. Repeating this process, we have $$(q^n-1)(q^n-q)\cdots(q^n-q^{n-1})$$ for the number of ordered bases. If you want unordered bases, divide this by $n!$.
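For small cases this formula is easy to confirm by brute force; here is a sketch in Python (added for illustration), counting tuples of vectors in $\mathbb{F}_q^n$ whose span is the whole space:

```python
from itertools import product

def count_ordered_bases(n, q):
    vectors = list(product(range(q), repeat=n))

    def span(vs):
        # Grow the span by adding all F_q-multiples of each chosen vector.
        s = {(0,) * n}
        for v in vs:
            s = {tuple((x + c * y) % q for x, y in zip(u, v))
                 for u in s for c in range(q)}
        return s

    return sum(1 for vs in product(vectors, repeat=n)
               if len(span(vs)) == q ** n)

# Compare with (q^n - 1)(q^n - q) for n = 2, q = 3:
print(count_ordered_bases(2, 3), (9 - 1) * (9 - 3))  # 48 48
```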
{ "language": "en", "url": "https://math.stackexchange.com/questions/362312", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 1, "answer_id": 0 }
How to calculate max iterations needed to equally increase a row of numbers by some value each iteration? I don't know whether the title describes the main idea of my question, so forgive me for it. I have 6 numbers whose values can vary from 0 to 100, but an initial value cannot be more than 35. As an example, here is my number list: 20, 31, 15, 7, 18, 29. In one iteration we can distribute some value (5, 7, 10, 15 or so) among these numbers. Let it be 15. And each number must be increased at least once per iteration. So one iteration may look like: 20 + 3 => 23 31 + 2 => 33 15 + 3 => 18 7 + 5 => 12 18 + 1 => 19 29 + 1 => 30 The question is: how to calculate the max number of iterations for any number row with a constant distributed value per iteration? One should know that iterations must stop once one number reaches the value 100. What math field should I learn to get more info on questions like this?
You start with $a_1, \ldots, a_n$, have to distribute $d\ge n$ in each round and stop at $m>\max a_i$. Then the maximal number of rounds is bounded by two effects:

* The maximal value grows by at least one per round, so it reaches $m$ after at most $m-\max a_i$ rounds.
* The total grows by $d$ each round, so you exceed $n\cdot (m-1)$ after at most $\left\lceil\frac{n(m-1)+1-\sum a_i}{d}\right\rceil$ rounds.

Interestingly, these two conditions are all there is, i.e. the maximal number of rounds is indeed $$ \min\left\{m-\max a_i,\left\lceil\frac{n(m-1)+1-\sum a_i}{d}\right\rceil\right\}.$$ This can be achieved by (in each round) distributing one point to each term and then repeatedly increasing the smallest term until everything has been distributed (see the sketch below). The field of math this belongs to would be combinatorics.
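A quick sanity check of the formula against a direct simulation of the greedy strategy just described (a Python sketch using the numbers from the original question; the function names are made up here):

```python
import math

def max_rounds_formula(a, d, m):
    n = len(a)
    return min(m - max(a),
               math.ceil((n * (m - 1) + 1 - sum(a)) / d))

def max_rounds_greedy(a, d, m):
    a = list(a)
    rounds = 0
    while max(a) < m:
        a = [x + 1 for x in a]         # everyone gets at least one point
        for _ in range(d - len(a)):    # the rest goes to the current minimum
            a[a.index(min(a))] += 1
        rounds += 1
    return rounds

a = [20, 31, 15, 7, 18, 29]
print(max_rounds_formula(a, 15, 100), max_rounds_greedy(a, 15, 100))  # 32 32
```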
{ "language": "en", "url": "https://math.stackexchange.com/questions/362421", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Nice examples of groups which are not obviously groups I am searching for some groups where it is not so obvious that they are groups. In the lecture script there are only examples like $\mathbb{Z}$ under addition and other things like that. I don't think that these examples are helpful for understanding the real properties of a group, when one only looks at such trivial examples. I am searching for some more exotic examples, like the power set of a set together with the symmetric difference, or an elliptic curve with its group law.
I always found the fact that braid groups are groups at all quite interesting. The elements of the group are all the different braids you can make with, say, $n$ strings. The group operation is concatenation. The identity is the untangled braid. But the fact that inverses exist is not obvious.
{ "language": "en", "url": "https://math.stackexchange.com/questions/362446", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "325", "answer_count": 31, "answer_id": 25 }
Counter-examples of homeomorphism Briefly speaking, we know that a map $f$ between $2$ topological spaces is a homeomorphism if $f$ is a bijection and both $f$ and its inverse are continuous. So, can anyone give me counterexamples (preferably simple ones) of non-homeomorphism maps $f$ between $2$ topological spaces, where exactly one of the following properties is satisfied (a few examples for each property)?

* $f$ is bijective and continuous, but its inverse is not continuous.
* $f$ is bijective and the inverse is continuous, but $f$ itself is not continuous.

In addition, can we think about some examples of topologies that are path-connected? I will understand the concept of homeomorphism much better if I know some simple counterexamples. I hope you can help me out. Thanks!
$1.$ Let $X$ be the set of real numbers, with the discrete topology, and let $Y$ be the reals, with the ordinary topology. Let $f(x)=x$. Then $f$ is continuous, since every subset of $X$ is open. But $f^{-1}$ is not continuous. $2.$ In doing $1$, we have basically done $2$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/362592", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 6, "answer_id": 1 }
Nicolas Bourbaki, Algebra I, Chapter 1, § 2, Ex. 12 Nicolas Bourbaki, Algebra I, Chapter 1, § 2, Ex. 12: ($E$ is a semigroup with an associative law (written multiplicatively), and $\gamma_a(x)=ax$.) Under a multiplicative law on $E$, let $ a \in E $ be such that $\gamma_a $ is surjective. (a) Show that, if there exists $u$ such that $ua=a$, then $ux=x$ for all $x\in E$. (b) For an element $b\in E$ to be such that $ba$ is left cancellable, it is necessary and sufficient that $\gamma_a$ be surjective and that $b$ be left cancellable. For those interested in part (a), a simple proof is that for every $x\in E$ there exists $x^\prime \in E$ such that $ax^\prime=x$; consequently $ua=a \Rightarrow uax^\prime=ax^\prime \Rightarrow ux=x$. In (b), surjectivity of $\gamma_a$ and left cancellability of $b$ are required. However, I am concerned with the "sufficiency" portion of part (b). When $E$ is an infinite set there can always be a surjective function $\gamma_a$ which need not be injective, and the left translation by $b$ can be cancellable while $ba$ need not be left cancellable.
I have a Russian translation of Bourbaki. In it Ex. 12 looks as follows: "For $\gamma_{ba}$ to be a one-one mapping of $E$ into $E$, it is necessary and sufficient that $\gamma_{a}$ be a one-one mapping of $E$ onto $E$ and $\gamma_{b}$ be a one-one mapping of $E$ into $E$." So I guess that there is a misprint in the English translation. I wonder how it looks in the French original?
{ "language": "en", "url": "https://math.stackexchange.com/questions/362667", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 1, "answer_id": 0 }
Differential equation must satisfy its edge conditions. I have this variational problem: $$\text{Minimize} \; \int_0^1 \left( 12xt- \dot{x}^2-2 \dot{x} \right) \; dt$$ with the edge conditions $x(0)=0$ and $x(1)$ "free". From here I solve it: $$x(t)\to -t^3 +c_1t+c_2$$ Up to here everything should be correct. Now I must solve the equation and compute $c_1$ and $c_2$. However, I don't know what to do with the edge condition "$x(1)$ is free". How do I solve this, and what exactly does "free" mean? Result: $c_1$ should be $2$ and $c_2$ should be $0$. (P.S. If you can show it with Mathematica, that would be great!)
Denote the functional as $J(x)$: $$ J(x) = \int_0^1 \left( 12xt- \dot{x}^2-2 \dot{x} \right) dt $$ Then the minimizer $x$ satisfies the following (perturbing the minimum with $\epsilon y$): $$ \frac{d}{d\epsilon} J(x + \epsilon y)\Big\vert_{\epsilon = 0} =0$$ Simplifying the above gives us: $$ \int^1_0 (12 y t - 2\dot{x}\dot{y} - 2\dot{y})dt = 0 $$ Integration by parts yields: $$ \int^1_0 (12 y t + 2\ddot{x}y)dt - 2(\dot{x}+1)y\big|^1_0= 0 $$ Let's look at the boundary term: $(\dot{x}+1)y\big|^1_0 = (\dot{x}(1)+1)y(1) - (\dot{x}(0)+1)y(0)$. The term "free", from what I know, indicates a natural boundary condition. The essential boundary condition is $x(0)=0$, hence the test function satisfies $y(0)=0$ and the second term vanishes. On the natural boundary $t=1$, we do not impose anything on $y$'s value, hence we have to let $\dot{x}(1)+1 = 0$ to make the variational problem well-posed, and thus we get the differential equation $6 t + \ddot{x} = 0$. And the final answer is: "$x(1)$ is free" leads us to the natural boundary condition $$ \dot{x}(1)+1 = 0 $$ thus the coefficients are $c_1 = 2, c_2 = 0$.
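A quick symbolic check of the candidate solution (added for convenience; this assumes SymPy rather than the Mathematica the asker mentioned):

```python
import sympy as sp

t = sp.symbols('t')
x = -t**3 + 2*t                              # candidate: c1 = 2, c2 = 0

print(sp.simplify(sp.diff(x, t, 2) + 6*t))   # Euler-Lagrange equation: 0
print(x.subs(t, 0))                          # essential BC x(0) = 0:   0
print(sp.diff(x, t).subs(t, 1) + 1)          # natural BC x'(1) + 1:    0
```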
{ "language": "en", "url": "https://math.stackexchange.com/questions/362722", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Numbers that are divisible So I am given the following question: for natural numbers less than or equal to 120, how many are divisible by 2, 3, or 5? I solved it by the inclusion-exclusion principle and by using the least common multiple $\operatorname{lcm}(2, 3, 5)=30$. Are these the right ways to solve it, or are there other ways?
It is a right way. Inevitably there are others. For example, there are $\varphi(120)$ numbers in the interval $[1,120]$ which are relatively prime to $120$. Here $\varphi$ is the Euler $\varphi$-function. The numbers in our interval which are divisible by $2$, $3$, or $5$ are precisely the numbers in our interval which are not relatively prime to $120$. So $120-\varphi(120)$ gives the desired count. Compute $\varphi(120)$ by using the usual formula, and the fact that $120=2^3\cdot 3\cdot 5$. The Inclusion/Exclusion procedure is more versatile than the Euler $\phi$-function procedure.
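Concretely (a worked step added here), since $120=2^3\cdot 3\cdot 5$,
$$\varphi(120)=120\left(1-\tfrac12\right)\left(1-\tfrac13\right)\left(1-\tfrac15\right)=120\cdot\tfrac12\cdot\tfrac23\cdot\tfrac45=32,$$
so the desired count is $120-32=88$, which matches inclusion-exclusion: $60+40+24-20-12-8+4=88$.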
{ "language": "en", "url": "https://math.stackexchange.com/questions/362834", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Convergence of $\prod_{n=1}^\infty(1+a_n)$ The question is motivated by the following exercise in complex analysis: Let $\{a_n\}\subset{\Bbb C}$ such that $a_n\neq-1$ for all $n$. Show that if $\sum_{n=1}^\infty |a_n|^2$ converges, then the product $\prod_{n=1}^\infty(1+a_n)$ converges to a non-zero limit if and only if $\sum_{n=1}^\infty a_n$ converges. One can get a proof by using $|a_n|^2$ to bound $|\log(1+a_n)-a_n|$. Here is my question: is the converse of this statement also true? If "the product $\prod_{n=1}^\infty(1+a_n)$ converges to a non-zero limit if and only if $\sum_{n=1}^\infty a_n$ converges", then $\sum_{n=1}^\infty |a_n|^2$ converges.
I shall try to give examples where $\sum|a_n|^2$ is divergent and all possible combinations of convergence/divergence for $\prod(1+a_n)$ and $\sum a_n$. Let $a_{2n}=\frac1{\sqrt n}$ and $a_{2n+1}=\frac1{1+a_{2n}}-1=-\frac{1}{1+\sqrt n}$. Then $(1+a_{2n})(1+a_{2n+1})= 1$, hence the product converges. But $a_{2n}+a_{2n+1}=\frac1{n+\sqrt n}>\frac1{2n}$, hence $\sum a_n$ diverges. Let $a_{2n}=\frac1{\sqrt n}$ and $a_{2n+1}=-\frac1{\sqrt n}$. Then $a_{2n}+a_{2n+1}=0$, hence $\sum a_n$ converges. But $(1+a_{2n})(1+a_{2n+1})=1-\frac1n$; the $\log$ of this is $\sim -\frac1n$, hence $\sum \log(1+a_n)$ and also $\prod(1+a_n)$ diverges. Let $a_n=\frac1{\sqrt n}$. Then $\prod(1+a_n)$ and $\sum a_n$ diverge. It almost looks as if it is not possible to have both $\prod(1+a_n)$ and $\sum a_n$ convergent if $\sum |a_n|^2$ diverges because $\ln(1+a_n) = a_n-\frac12a_n^2\pm\ldots$, but here we go: If $n=4k+r$ with $r\in\{0,1,2,3\}$, let $a_n = \frac{i^r}{\sqrt k}$. Then the product of four such consecutive terms is $(1+\frac1{\sqrt k})(1+\frac i{\sqrt k})(1-\frac1{\sqrt k})(1-\frac i{\sqrt k})=1-\frac1{k^2}$, hence the log of these is $\sim -\frac1{k^2}$ and the product converges. The sum also converges (to $0$).
{ "language": "en", "url": "https://math.stackexchange.com/questions/362899", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 3, "answer_id": 0 }
Concise proof that every common divisor divides GCD without Bezout's identity? In the integers, it follows almost immediately from the division theorem and the fact that $a \mid x,y \implies a \mid ux + vy$ for any $u, v \in \mathbb{Z}$ that the least common multiple of $a$ and $b$ divides any other common multiple. In contrast, proving $e\mid a,b \implies e\mid\gcd(a,b)$ seems to be more difficult. In Elementary Number Theory by Jones & Jones, they do not try to prove this fact until establishing Bezout's identity. This Wikipedia page has a proof without Bezout's identity, but it is convoluted to my eyes. I tried my hand at it, and what I got seems no cleaner: Proposition: If $e \mid a,b$, then $e \mid \gcd(a,b)$. Proof: Let $d = \gcd(a,b)$. Then if $e \nmid d$, by the division theorem there's some $q$ and $c$ such that $d = qe + c$ with $0 < c < e$. We have $a = k_1 d$ and $b = k_2 d$, so by substituting we obtain $a = k_1 (qe + c)$ and $b = k_2 (qe + c)$. Since $e$ divides both $a$ and $b$, it must divide both $k_1 c$ and $k_2 c$ as well. This implies that both $k_1 c$ and $k_2 c$ are common multiples of $c$ and $e$. Now let $l = \operatorname{lcm}(e, c)$. $l$ divides both $k_1 c$ and $k_2 c$. Since $l = \phi c$ for some $\phi$, we have $\phi \mid k_1, k_2$, so $d \phi \mid a, b$. But we must have $\phi > 1$, otherwise $l = c$, implying $e \mid c$, which could not be the case since $c < e$. So $d \phi$ is a common divisor greater than $d$, which is a contradiction. $\Box$ Question: Is there a cleaner proof I'm missing, or is this seemingly elementary proposition just not very easy to prove without using Bezout's identity?
We can gain some insight by seeing what happens for other rings. A GCD domain is an integral domain $D$ such that $\gcd$s exist in the sense that for any $a, b \in D$ there exists an element $\gcd(a, b) \in D$ such that $e | a, e | b \Rightarrow e | \gcd(a, b)$. A Bézout domain is an integral domain satisfying Bézout's identity. Unsurprisingly, Bézout domains are GCD domains, and the proof is the one you already know. It turns out that the converse is false, so there exist GCD domains which are not Bézout domains; Wikipedia gives a construction. (But if you're allowing yourself the division algorithm, why the fuss? The path from the division algorithm to Bézout's identity is straightforward. In all of these proofs for $\mathbb{Z}$ the division algorithm is doing most of the work.)
{ "language": "en", "url": "https://math.stackexchange.com/questions/362975", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "22", "answer_count": 6, "answer_id": 1 }
Finding the median value on a probability density function Quick question here that I cannot find in my textbook or online. I have a probability density function as follows: $\begin{cases} 0.04x & 0 \le x < 5 \\ 0.4 - 0.04x & 5 \le x < 10 \\ 0 & \text{otherwise} \end{cases}$ Now I understand that for the median, the value of the integral must be $0.5$. We can set the integral from negative infinity to $m$, where $m$ represents the median, and solve. However, there are 2 pieces of the function here, so how would I do that? In the answer that I was provided, the prof. simply takes the first piece and applies what I said. How/why did he do that? Help would be greatly appreciated! Thank you :)
I don't know, let's find out. Maybe the median is in the $[0,5]$ part. Maybe it is in the other part. To get some insight, let's find the probability that our random variable lands between $0$ and $5$. This is $$\int_0^5(0.04)x\,dx.$$ Integrate. We get $0.5$. What a lucky break! There is nothing more to do. The median is $5$. Well, it wasn't entirely luck. Graph the density function. (We should have done that to begin with, geometric insight can never hurt.) We find that the density function is symmetric about the line $x=5$. So the median must be $5$. Remark: Suppose the integral had turned out to be $0.4$. Then to reach the median, we need $0.1$ more in area. Then the median $m$ would be the number such that $\int_5^m (0.4-0.04x)\,dx=0.1$.
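Explicitly, the integration step works out as
$$\int_0^5 0.04x\,dx = 0.02x^2\Big|_0^5 = 0.5.$$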
{ "language": "en", "url": "https://math.stackexchange.com/questions/363040", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Generating $n\times n$ random matrix with rank $n/2$ using matlab Can we generate an $n \times n$ random matrix having any desired rank? I have to generate an $n\times n$ random matrix having rank $n/2$. Thanks for your time and help.
Generate $U,V$ random matrices of size $n \times n/2$, then almost surely $A = U \cdot V^T$ is of rank $n/2$.
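A minimal numerical check of this construction (sketched here in Python/NumPy; the equivalent MATLAB would be along the lines of `A = randn(n,n/2)*randn(n,n/2)'` followed by `rank(A)`):

```python
import numpy as np

n = 8
U = np.random.randn(n, n // 2)
V = np.random.randn(n, n // 2)
A = U @ V.T                        # n x n, rank n/2 almost surely

print(np.linalg.matrix_rank(A))    # -> 4
```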
{ "language": "en", "url": "https://math.stackexchange.com/questions/363103", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 1, "answer_id": 0 }
$f$ is a bijective function with differentiable inverse at a single point Let $\Omega \subseteq \mathbb{R}^n$ and $p \in \Omega$. Let $f:U \to V$ be a bijection of open sets $p \in U \subseteq \Omega$ and $f(p) \in V \subseteq \mathbb{R}^n$. If $f^{-1}: V \to U$ is differentiable at $p$, then $df_p: \mathbb{R}^n \to \mathbb{R}^n$ is invertible. Suppose $f$ is a bijection. Since $f^{-1}$ is differentiable at $p$, it is continuous at $p$. Let $f^{-1}(f(x))=x$. Then \begin{align*} df^{-1}(f(x))&=1\\ df_p^{-1}df_p&=1\\ \end{align*} Following Dan Shved's suggestion I applied the chain rule, but I'm not sure that $f$ is differentiable - thus I'm not sure if $df_p$ exists. If $f^{-1}$ were continuously differentiable this would be easier because I could invoke the Inverse Function theorem.
The problem statement is either incorrect, incomplete, or both. Certainly, in order to say anything about $df_p$, the assumption on $f^{-1}$ should be made at $f(p)$. But the mere fact that $f^{-1}$ is differentiable at $f(p)$ is not enough. For example, $f(x)=x^{1/3}$ is a bijection of $(-1,1)\subset \mathbb R$ onto itself. Its inverse $f^{-1}(x)=x^3$ is differentiable at $0=f(0)$, but $f$ is not differentiable at $0$. The following amended statement is correct: Let $f:U \to V$ be a bijection of open sets $p \in U \subseteq \Omega$ and $f(p) \in V \subseteq \mathbb{R}^n$. If $f^{-1}: V \to U$ is differentiable at $\mathbf{f(p)}$, then $df_p: \mathbb{R}^n \to \mathbb{R}^n$ is invertible provided it exists. Indeed, the chain rule applies to $\mathrm{id}=f^{-1}\circ f$ at $p$ and yields $\mathrm{id}=df^{-1}_{f(p)} \circ df_p$. Hence $df_p$ is invertible.
{ "language": "en", "url": "https://math.stackexchange.com/questions/363178", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Trace of the matrix power Say I have the matrix $A = \begin{bmatrix} a & 0 & -c\\ 0 & b & 0\\ -c & 0 & a \end{bmatrix}$. What is the trace $\operatorname{tr}(A^{200})$? Thanks much!
You may do it by first computing matrix powers, and then you may calculate whatever you want. Now the question is how to calculate the matrix power for a given matrix, say $A$. Your goal here is to develop a useful factorization $A = PDP^{-1}$, where $A$ is an $n\times n$ matrix. The matrix $D$ is a diagonal matrix (i.e. entries off the main diagonal are all zeros). Then $A^k =PD^kP^{-1} $. $D^k$ is trivial to compute. Note that the columns of $P$ are $n$ linearly independent eigenvectors of $A$ (such a factorization exists whenever $A$ is diagonalizable, as it is here, since $A$ is real symmetric).
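For the particular matrix in the question this works out explicitly (a short computation added here): expanding $\det(A-\lambda I)$ along the second row gives
$$\det(A-\lambda I)=(b-\lambda)\big[(a-\lambda)^2-c^2\big],$$
so the eigenvalues are $b$, $a+c$, and $a-c$, and therefore
$$\operatorname{tr}(A^{200})=b^{200}+(a+c)^{200}+(a-c)^{200}.$$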
{ "language": "en", "url": "https://math.stackexchange.com/questions/363252", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 3, "answer_id": 0 }
Show that if $G$ is a group of order 6, then $G \cong \Bbb Z/6\Bbb Z$ or $G\cong S_3$ Show that if $G$ is a group of order 6, then $G \cong \Bbb Z/6\Bbb Z$ or $G\cong S_3$ This is what I tried: If there is an element $c$ of order 6, then $\langle c \rangle=G$. And we get that $G \cong \Bbb Z/6 \Bbb Z$. Assume there is no element of order 6. From Cauchy's theorem I know that there exists an element $a$ with $|a|=2$, and $b$ with $|b|=3$. As the inverse of $b$ doesn't equal itself, I now have 4 distinct elements: $e,a,b,b^{-1}$. As there are no elements of order $6$, we have two options left. Option 1: $c$ with $|c|=3$, and $c^{-1}$ with $|c^{-1}|=3$. Option 2: two different elements of order 2, $c,d$. My intuition says that for option 1, $H= \{e,b,b^{-1},c,c^{-1} \}$ would be a subgroup of order $5$, which would give a contradiction, but I don't know if this is true/how I could prove this. I also don't know how I could prove that option 2 must be $S_3$. I think that $G= \{e,a,b,b^{-1},c,d \}$. Looks very similar to $D_3$. But I don't know how I could prove this rigorously. Can anybody help me finish my proof? If there are other ways to see this, I'd be glad to hear them as well!
Instead of introducing a new element called $c$, we can use the group structure to show that the elements $ab$ and $ab^2$ are the final two elements of the group, that is $G=\{e,a,b,b^2,ab,ab^2\}$. Notice that if $ab$ were equal to any of $e,a,b$, or $b^{-1}=b^2$, we would arrive at the contradictions $a=b^{-1}$, $b=e$, $a=e$, and $a=b$ respectively. Similarly, see if you can show that $ab^2$ must be distinct from the elements $e,a,b,b^2,$ and $ab$. In a manner similar to the one above, we can now show that the element $ba$ (notice the order) can only be equal to either $ab$ or $ab^2$ in our list of elements of $G$ without arriving at some contradiction. If $ba=ab$, then $|ab|=6$, so that $G$ is cyclic. If $ba=ab^2$, see if you can write down a group isomorphism between $G$ and $S_3$, and show that the images of the elements of $G$ in $S_3$ satisfy the same relations their pre-images do in $G$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/363332", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
find the largest perfect number less than $10,000$ in Maple Can anyone tell me how to find the largest perfect number less than 10,000 in Maple? Actually, I know how to find all the perfect numbers less than or equal to 10,000, but I don't know how to find the largest one within the same code.
Well if you know how to find them all, I suppose you use a loop. So before your loop add a variable $max=0$. During the loop, for each perfect number $p$ you find, check if $p>max$ and if it is, then do $max=p$. The value of $max$ after the end of the loop will be the greatest number found ;)
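The same idea in code (sketched here in Python rather than Maple; the asker's existing Maple loop carries over directly, with a number counted as perfect when it equals the sum of its proper divisors):

```python
largest = 0
for n in range(2, 10001):
    if sum(d for d in range(1, n) if n % d == 0) == n:   # n is perfect
        if n > largest:
            largest = n
print(largest)   # -> 8128 (the perfect numbers up to 10000 are 6, 28, 496, 8128)
```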
{ "language": "en", "url": "https://math.stackexchange.com/questions/363424", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Which way does the inclusion go? Lemma Let $\mathcal{B}$ and $\mathcal{B'}$ be bases for topologies $\mathcal{T}$ and $\mathcal{T'}$, respectively, on $X$. Then the following are equivalent:

* $\mathcal{T'}$ is finer than $\mathcal{T}$.
* For each $x\in X$ and each basis element $B\in \mathcal{B}$ containing $x$, there is a basis element $B'\in \mathcal{B'}$ such that $x\in B' \subset B$.

Why don't we write "for each $x\in X$ and each basis element $B\in \mathcal{B}$ containing $x$, there is a basis element $B'\in \mathcal{B'}$ such that $x\in B' \supset B$" (isn't that also true?) instead? I can see that the original statement is true but it seems very counterintuitive.
The idea is that we need every $\mathcal{T}$-open set to be $\mathcal{T}'$-open. Since $\mathcal{B}$ is a basis for $\mathcal{T}$, then every $\mathcal{T}$-open set is a union of $\mathcal{B}$-elements (and every union of $\mathcal{B}$-elements is $\mathcal{T}$-open), so it suffices that every $\mathcal{B}$-element is $\mathcal{T}'$-open. Since $\mathcal{B}'$ is a basis for $\mathcal{T}'$, then we must show that every $\mathcal{B}$-element is a union of $\mathcal{B}'$-elements, which is what the Lemma shows.
{ "language": "en", "url": "https://math.stackexchange.com/questions/363521", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
High-order elements of $SL_2(\mathbb{Z})$ have no real eigenvalues Let $\gamma=\begin{pmatrix} a & b \\ c & d \end{pmatrix} \in SL_2(\mathbb{Z})$, $k$ the order of $\gamma$, i.e. $\gamma^k=1$ and $k=\min\{ l : \gamma^l = 1 \}$. I have to show that $\gamma$ has no real eigenvalues if $k>2$. The eigenvalues of $\gamma$ are $\gamma_{1,2} = \frac{1}{2} (a+d \pm \sqrt{(a+d)^2-4})$, i.e. I have to show that $(a+d)^2<4$ for $k>2$. How can I prove this? I have determined the first powers of $\gamma$ to get the condition directly from $\gamma^k = 1$ but I failed. Probably, there is an easier way?
Assume there is a real eigenvalue. Then the minimal polynomial of $\gamma$ is a divisor of $X^k-1$, has degree at most $2$, and has at least one real root. If its degree is $2$, the other root must also be real. The only real roots of unity are $\pm1$, so the minimal polynomial is one of $X-1$, $X+1$ or $(X-1)(X+1)=X^2-1$. All three are divisors of $X^2-1$, i.e. we find $\gamma^2=1$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/363621", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Counting binary sequences with $S$ $0$'s and $T$ $1$'s where every pre-sequence contains fewer $1$'s than $0$'s How many $S+T$-digit binary sequences with exactly $S$ $0$'s and $T$ $1$'s exist where in every pre-sequence the number of $1$'s is less than the number of $0$'s? Examples:

* the sequence $011100$ is bad, since the pre-sequence $011$ has more $1$'s than $0$'s.
* the sequence $010101000$ is good, since there is no pre-sequence with more $1$'s than $0$'s.
This is a famous problem often called Bertrand's Ballot Theorem. A good summary is given in the Wikipedia article cited. There are a number of nice proofs. Note that your statement is the classical one ("always ahead") but the example of a good sequence that you give shows that "never behind" is intended. If that is the case, go to the "ties allowed" section of the article. The number of good sequences turns out to be $$\binom{s+t}{s}\frac{s+1-t}{s+1}.$$
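For small $s,t$ the formula is easy to verify by enumerating all sequences (a Python sketch added as a check, with "ties allowed" as in the question's example):

```python
from itertools import permutations
from math import comb

def good(seq):
    # 1's never outnumber 0's in any prefix (ties allowed).
    ones = zeros = 0
    for b in seq:
        ones += b
        zeros += 1 - b
        if ones > zeros:
            return False
    return True

s, t = 4, 3
brute = len({p for p in permutations([0] * s + [1] * t) if good(p)})
print(brute, comb(s + t, s) * (s + 1 - t) // (s + 1))   # 14 14
```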
{ "language": "en", "url": "https://math.stackexchange.com/questions/363694", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Proving that the following limit exists How can I prove that the following limit exists? $$ \mathop {\lim }\limits_{x,y \to 0} \frac{{x^2 + y^4 }} {{\left| x \right| + 3\left| y \right|}} $$ I tried a lot of tricks. At least, assuming that this limit exists, I can prove using some special path (for example $y=x$) that the limit is zero. But how can I prove the existence?
There are more appropriate ways, but let's use the common hammer. Let $x=r\cos\theta$ and $y=r\sin\theta$. Substitute. The only other fact needed is that $|\sin\theta|+|\cos\theta|$ is bounded below. An easy lower bound is $\frac{1}{\sqrt{2}}$. When you substitute for $x$ and $y$ on top, you get an $r^2$ term, part of which cancels the $r$ at the bottom, and the other part of which kills the new top as $r\to 0$.
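Written out, the bound is (an explicit step added for completeness): for $r>0$,
$$\left|\frac{x^2+y^4}{|x|+3|y|}\right| = \frac{r^2\cos^2\theta+r^4\sin^4\theta}{r\,(|\cos\theta|+3|\sin\theta|)} \le \frac{r^2+r^4}{r/\sqrt2} = \sqrt2\,(r+r^3)\;\xrightarrow[r\to0]{}\;0.$$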
{ "language": "en", "url": "https://math.stackexchange.com/questions/363782", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 2 }
If I have $5^{200}$, can I rewrite it in terms of $2$'s and $3$'s to some powers? If I have $5^{200}$, can I rewrite it in terms of $2$'s and $3$'s to some powers? For example, if I had $4^{250}$ can be written in terms of $2$'s like so: $2^{500}$.
No. This is the fundamental theorem of arithmetic: every integer $n\geq 2$ can be written in exactly one way (up to the order of factors) as a product of powers of prime numbers.
{ "language": "en", "url": "https://math.stackexchange.com/questions/363876", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Showing that one cannot continuously embed $\ell^\infty$ in $\ell^1$. Is it possible to embed $\ell^\infty$ into $\ell^1$ continuously? I.e. can one find a continuous linear injection $I:\ell^\infty \to \ell^1$. I have reduced a problem I have been working on to showing that this cannot happen, but I don't see how to proceed from here.
Yes, it's possible; for example, you can set $$ I(a_1,a_2,a_3,\dots):=(\frac{a_1}{1^2},\frac{a_2}{2^2},\frac{a_3}{3^2},\dots). $$ This map is linear and injective, and it is continuous since $\|I(a)\|_1=\sum_n |a_n|/n^2 \le \frac{\pi^2}{6}\,\|a\|_\infty$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/363972", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Turing Decryption MIT example I am learning mathematics for computer science on OpenCourseWare. I have no clue how to understand the small mathematical problem below. Encryption: The message $m$ can be any integer in the set $\{0,1,2,\dots,p-1\}$; in particular, the message is no longer required to be a prime. The sender encrypts the message $m$ to produce $m^*$ by computing: $$m^* = \operatorname{remainder}(mk,p).$$ Multiplicative inverses are the key to decryption in Turing's code. Specifically, we can recover the original message by multiplying the encoded message by the inverse of the key: \begin{align*} m^*k^{-1} &\equiv \operatorname{remainder}(mk,p)\, k^{-1} && \text{(the def. (14.8) of $m^*$)} \\ &\equiv (mk) k^{-1} \pmod p && \text{(by Cor. 14.5.2)} \\ &\equiv m \pmod p. \end{align*} This shows that $m^*k^{-1}$ is congruent to the original message $m$. Since $m$ was in the range $0,1,\dots,p-1$, we can recover it exactly by taking a remainder: $m = \operatorname{rem}(m^*k^{-1},p)$ --- ??? Can someone please explain the above line (with question marks)? I don't understand it.
The line with the question mark is just a restatement of the explanation above in symbolic form. We have a message $m$ and encrypted message $m^* = \text{remainder}(mk,p)$. If we are given $m^*$ we can recover $m$ by multiplying by $k^{-1}$ and taking the remainder mod $p$. That is, $\text{remainder}(m^* \cdot k^{-1},p) = \text{remainder}(mkk^{-1},p) = \text{remainder}(m,p) = m$. This gives $m$ exactly (and not something else congruent to $m \mod p$) because $m$ is restricted to be in $0,1,\dots,p-1$.
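A tiny round-trip demonstration (added here; the prime, key, and message are made-up numbers, and Python 3.8+ provides `pow(k, -1, p)` for the modular inverse):

```python
p, k = 101, 37                   # hypothetical public prime and secret key
m = 42                           # message in {0, ..., p-1}

m_star = (m * k) % p             # encrypt
k_inv = pow(k, -1, p)            # multiplicative inverse of k mod p
print((m_star * k_inv) % p)      # decrypt -> 42
```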
{ "language": "en", "url": "https://math.stackexchange.com/questions/364041", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Calculating $\lim\limits_{n\to\infty}\frac{(\ln n)^{2}}{n}$ What is the value of $\lim\limits_{n\to\infty}\dfrac{(\ln n)^{2}}{n}$, and what is the proof? I can't find anything related to it among existing questions, only $\lim\limits_{n\to\infty}\dfrac{\ln n}{n}=0$, which I know can be proved via Cesàro's theorem.
We can get L'Hospital's Rule to work in one step. Express $\dfrac{\log^2 x}{x}$ as $\dfrac{\log x}{\sqrt{x}}\cdot\dfrac{\log x}{\sqrt{x}}$. L'Hospital's Rule gives limit $0$ for each part. Another approach is to let $x=e^y$. Then we want to find $\displaystyle\lim_{y\to\infty} \dfrac{y^2}{e^y}$. Note that for positive $y$, we have $$e^y\gt 1+y+\frac{y^2}{2!}+\frac{y^3}{3!}\gt \frac{y^3}{3!}.$$ It follows that $\dfrac{y^2}{e^y}\lt \dfrac{3!}{y}$, which is enough to show that the limit is $0$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/364139", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 2 }
Intuition behind Borsuk-Ulam Theorem I watched the following video to get more intuition behind the Borsuk-Ulam Theorem. The first part of the video was very clear to me: it considers only $R^2$, with points $A$ and $B$ moving along the equator, and during the video we track the temperatures of points $A$ and $B$ along the equator. The following is the picture from the second part. In the second part $R^3$ is considered, and instead of tracking the temperatures along the equator, we track the temperature along an arbitrary path from $A$ to $B$ along the sphere. But along this part we don't move $A$ and $B$, and there is no intersection of temperatures as there was in the first part (the most confusing phrase is at 4:45, "as $A$ goes to $B$ it goes from being colder than $B$ to hotter than $B$"; why? it just goes to $B$). I don't understand how the assumption that there is a point on the track whose temperature equals that at point $B$ can help us; even if it's true, it is not what we need, since we need the temperature at point $A$ to equal the temperature at point $B$. The second assumption is to consider all antipodal points with the same temperature, and to consider all the points on the track with the same temperature as the opposite point, so as a result we have a "club" of intermediate points with different temperatures, but each temperature equal to the temperature of the opposite point; given that, how can we connect them by a line? I have some problems in understanding the idea of the second part and would appreciate help.
Creator of the video here. "But along this part we don't move $A$ and $B$, and there is no intersection of temperatures as there was in the first part" Yeah, I didn't elaborate on this as much as the previous section, but the same thing is happening, just along an arbitrary path of connected opposite points, instead of a great circle. What also might be unclear is that B tracks the opposite point of A. Its path just isn't drawn. I hope that helps. Vsauce also released a video very recently which runs through my explanation (I think he got it from me - I'm credited in his video description), so maybe it would also be useful: https://www.youtube.com/watch?v=csInNn6pfT4
{ "language": "en", "url": "https://math.stackexchange.com/questions/364217", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
How to show that $C(\bigcup _{i \in I} A_i)$ is a supremum of a subset $\{C(A_i): i \in I \}$ of the lattice $L_C$ of closed subsets? According to Burris & Sankappanavar's "A Course in Universal Algebra," the set $L_C$ of closed subsets of a set $A$ forms a complete lattice under $\subseteq$. Here, a subset $X$ of $A$ is said to be closed if $C(X) = X$, where $C$ is a closure operator on $A$ in the sense that it satisfies C1 - C3 below: (For any $X, Y \subseteq A$) C1: $X \subseteq C(X)$ C2: $C^2(X) = C(X)$ C3: $X \subseteq Y \Rightarrow C(X) \subseteq C(Y)$. They say that the supremum of a subset $\{C(A_i): i \in I\}$ of the lattice $\langle L_C, \subseteq \rangle$ is $C(\bigcup _{i \in I} A_i)$. If so, it must be that $$C(\bigcup _{i \in I} A_i) \subseteq \bigcup _{i \in I} C (A_i)$$ (since $\bigcup _{i \in I} C (A_i)$ is also an upper bound). But, I cannot so far show how this is so. Postscript It was an error to think that the above inclusion had to hold if $C(\bigcup _{i \in I} A_i)$ is $\sup \{C(A_i): i \in I\}$. This inclusion does not follow, and its converse follows, actually, as pointed out by Brian and Abel. Still, $C(\bigcup _{i \in I} A_i)$ is the supremum of the set since, among the closed subsets of $A$, it is the set's smallest upper bound, as explained by Brian and Alexei. This question was very poorly and misleadingly stated. I will delete it if it's requested.
$\bigcup C(A_i)$ is not necessarily closed, and the smallest closed set containing it is $C[\bigcup C(A_i)]$. Now, $\bigcup A_i \subset \bigcup C(A_i)$, thus $C(\bigcup A_i) \subset C[\bigcup C(A_i)]$. Conversely, $A_i \subset \bigcup A_i$, so $C(A_i) \subset C(\bigcup A_i)$. Therefore, $\bigcup C(A_i) \subset C(\bigcup A_i)$, so $C[\bigcup C(A_i)] \subset C(\bigcup A_i)$. Thus, $C[\bigcup C(A_i)] = C(\bigcup A_i)$, Q.E.D.
{ "language": "en", "url": "https://math.stackexchange.com/questions/364283", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 1 }
Minimal Distance between two curves What is the minimal distance between the curves $y = |x| + 1$ and $y = \arctan(2x)$? Do I need to parametrize a point as $(\cos t, \sin t)$?
One shortcut here is to note that curves 1 and 2 (say $f(x)$, $g(x)$) have a common normal line passing through the two closest points. Therefore, since $f'(x) = 1$ for all $x>0$, just find where $g'(x) = 1$, i.e. \begin{align}&\frac{2}{4x^2 +1} = 1\\ &2 = 4x^2 + 1\\ &\bf{x = \pm 1/2}\end{align}
{ "language": "en", "url": "https://math.stackexchange.com/questions/364341", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
How to show $\dim_\mathcal{H} f(F) \leq \dim_\mathcal{H} F$ for any set $F \subset \mathbb{R}$ and $f$ continuously differentiable? Let $f: \mathbb{R} \to \mathbb{R}$ be differentiable with continuous derivative. I have to show that for all sets $F \subset \mathbb{R}$, the inequality $$\dim_\mathcal{H} f(F) \leq \dim_\mathcal{H} F$$ holds, where $\dim_\mathcal{H}$ denotes the Hausdorff dimension. For some strange reason, there seems to be no definition of the Hausdorff dimension in the provided lecture notes. I looked it up on wikipedia and don't really know how I can say anything about the Hausdorff dimension of the image of a continuously differentiable function. Could anyone give me some help? Thanks a lot in advance.
Hint: Show that the inequality is true if $f$ is Lipschitz. Then, deduce the general case from the following property: $\dim_{\mathcal{H}} \bigcup\limits_{i \geq 0} A_i= \sup\limits_{i \geq 0} \ \dim_{\mathcal{H}}A_i$. For a reference, there is Fractal Geometry by K. Falconer.
{ "language": "en", "url": "https://math.stackexchange.com/questions/364415", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Calculate eigenvectors I am given the $2\times2$ matrix $$A = \begin{bmatrix} -2 & -1 \\ 15 & 6 \end{bmatrix}$$ I calculated the eigenvalues to be 3 and 1. How do I find the vectors? If I plug the value $\lambda=3$ back into the characteristic matrix $A-\lambda I$, I get $$B = \begin{bmatrix} -5 & -1 \\ 15 & 3 \end{bmatrix}$$ Am I doing this right? What would the eigenvector be?
Remember what the word "eigenvector" means. If $3$ is an eigenvalue, then you're looking for a vector satisfying this: $$\begin{bmatrix} -2 & -1 \\ 15 & 6 \end{bmatrix}\begin{bmatrix} x \\ y\end{bmatrix} = 3\begin{bmatrix} x \\ y\end{bmatrix}$$ Solve that. You'll get infinitely many solutions, since every scalar multiple of a solution is also a solution.
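Worked out for this instance (added for concreteness): the first row gives $-2x-y=3x$, i.e. $y=-5x$, so $(1,-5)$ is an eigenvector for $\lambda=3$; indeed
$$\begin{bmatrix} -2&-1 \\ 15&6 \end{bmatrix}\begin{bmatrix} 1 \\ -5\end{bmatrix} = \begin{bmatrix} 3 \\ -15\end{bmatrix} = 3\begin{bmatrix} 1 \\ -5\end{bmatrix}.$$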
{ "language": "en", "url": "https://math.stackexchange.com/questions/364484", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
Solving a set of 3 Nonlinear Equations In the following 3 equations: $$ k_1\cos^2(\theta)+k_2\sin^2(\theta) = c_1 $$ $$ 2(k_2-k_1)\cos(\theta)\sin(\theta)=c_2 $$ $$ k_1\sin^2(\theta)+k_2\cos^2(\theta) = c_3 $$ $c_1$, $c_2$ and $c_3$ are given, and $k_1$, $k_2$ and $\theta$ are the unknowns. What is the best way to solve for the unknowns? Specifically, I need to solve many independent instances of this system in an algorithm. Therefore, ideally the solution method should be fast.
Add and subtract equations $1$ and $3$, giving the system $$\begin{cases}\begin{align}k_1+k_2&=c_1+c_3\\(k_1-k_2)\sin2\theta&=-c_2\\(k_1-k_2)\cos2\theta&=c_1-c_3\end{align}\end{cases}$$ Then you find $k_1-k_2$ and $2\theta$ by a polar-to-Cartesian transform, giving $$\begin{cases}\begin{align}k_1+k_2&=c_1+c_3,\\k_1-k_2&=\sqrt{c_2^2+(c_1-c_3)^2},\\2\theta&=\arctan_2(-c_2,c_1-c_3).\end{align}\end{cases}$$
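Since the asker needs to solve many independent instances quickly, here is a direct implementation of these closed-form steps (a Python/NumPy sketch with a round-trip check on made-up values):

```python
import numpy as np

def solve(c1, c2, c3):
    s = c1 + c3                          # k1 + k2
    r = np.hypot(c2, c1 - c3)            # |k1 - k2| (taking the + branch)
    theta = 0.5 * np.arctan2(-c2, c1 - c3)
    return (s + r) / 2, (s - r) / 2, theta

k1, k2, th = 3.0, 1.0, 0.7               # made-up ground truth
c1 = k1 * np.cos(th)**2 + k2 * np.sin(th)**2
c2 = 2 * (k2 - k1) * np.cos(th) * np.sin(th)
c3 = k1 * np.sin(th)**2 + k2 * np.cos(th)**2
print(solve(c1, c2, c3))                 # -> (3.0, 1.0, 0.7) up to rounding
```

One design note: if $k_1<k_2$, this solver returns the swapped pair together with $\theta$ shifted by $\pi/2$, which satisfies the same three equations.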
{ "language": "en", "url": "https://math.stackexchange.com/questions/364565", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 5, "answer_id": 4 }
Find $\arg\max_x \operatorname{corr}(Ax, Bx)$ for vector $x$, matrices $A$ and $B$ This is similar to, but not the same as, canonical correlation: For $(n \times m)$ matrices $A$ and $B$, and unit vector $(m \times 1)$ $x$, is there a closed-form solution to maximize the correlation between $Ax$ and $Bx$ w.r.t. $x$? Note that I am optimizing over just one vector (in contrast to canonical correlation).
Here is an answer for the case $m>n$. Write $x=(x_1,\ldots,x_m)^T,A=(a^{1},\ldots,a^{m}),B=(b^{1},\ldots,b^{m})$, so $Ax=\sum_{i\le m} x_ia^i$, $Bx=\sum_{i\le m} x_i b^i$. Since $m>n$, columns $a^i - b^i$ of the matrix $A-B$ are linearly dependent, i.e. there is $x$ such that $Ax=Bx$. For this $x$ we have ${\rm corr}(Ax,Bx)=1$, i.e. is maximal.
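A concrete version of this argument (a NumPy sketch added for illustration): a null vector of $A-B$ can be read off the SVD, and it makes the two images equal, hence perfectly correlated.

```python
import numpy as np

n, m = 3, 5                              # m > n, as in the answer
A = np.random.randn(n, m)
B = np.random.randn(n, m)

x = np.linalg.svd(A - B)[2][-1]          # right-singular vector for sigma = 0
print(np.allclose(A @ x, B @ x))         # True: Ax = Bx
print(np.corrcoef(A @ x, B @ x)[0, 1])   # -> 1.0 (up to rounding)
```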
{ "language": "en", "url": "https://math.stackexchange.com/questions/364643", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 1, "answer_id": 0 }
Graph of an inverse trig function. Which of the following is equivalent to the graph of $\arcsin(x)$? (a) Reflecting $\arccos(x)$ about the $y$-axis, then shifting down by $\pi/2$ units. (b) Reflecting $\arccos(x)$ about the $x$-axis, then shifting up by $\pi/2$ units. I think they are both the same thing. Can someone confirm this?
You can look at graphs of all three functions here: http://www.wolframalpha.com/input/?i=%7Barccos%28-x%29-pi%2F2%2C-arccos%28x%29%2Bpi%2F2%2Carcsin%28x%29%7D Do they look the same to you?
{ "language": "en", "url": "https://math.stackexchange.com/questions/364732", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
How to prove that $\|AB-B^{-1}A^{-1}\|_F\geq\|AB-I\|_F$ when $A$ and $B$ are symmetric positive definite? Let $A$ and $B$ be two symmetric positive definite $n \times n$ matrices. Prove or disprove that $$\|AB-B^{-1}A^{-1}\|_F\geq\|AB-I\|_F$$ where $\|\cdot\|_F$ denotes Frobenius norm. I believe it is true but I have no clue how to prove it. Thanks for your help.
For the Frobenius norm: Since $A$ and $B$ are positive definite, we can write $C=AB=QDQ^\dagger$, with $D$ being the diagonal matrix with the $n$ positive eigenvalues $\lambda_k$ and $Q$ a unitary matrix ($QQ^\dagger=QQ^{-1}=I$). So the claim becomes $$ ||C - C^{-1}|| \geq ||C - I|| $$ Since the Frobenius norm is invariant under coordinate rotations, i.e. $||QA||=||A||$, we can simplify this expression to $$ ||C - C^{-1}|| = || QDQ^\dagger - QD^{-1}Q^\dagger || = ||D - D^{-1}|| = \sqrt{\sum_{k=1}^n \left(\lambda_k-\lambda_k^{-1}\right)^2} $$ and $$ ||C -I|| = ||QDQ^\dagger - I || = ||D -I|| = \sqrt{\sum_{k=1}^n \left(\lambda_k-1\right)^2} $$ For all $\lambda_k>0$, $$ \sqrt{\sum_{k=1}^n \left(\lambda_k-\lambda_k^{-1}\right)^2} \geq \sqrt{\sum_{k=1}^n \left(\lambda_k-1\right)^2} $$ holds.
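The final scalar inequality deserves a one-line justification (added here): for $\lambda>0$,
$$\left(\lambda-\lambda^{-1}\right)^2=(\lambda-1)^2\left(\frac{\lambda+1}{\lambda}\right)^2\ge(\lambda-1)^2,$$
since $\frac{\lambda+1}{\lambda}=1+\frac1\lambda\ge1$.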
{ "language": "en", "url": "https://math.stackexchange.com/questions/364805", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
Inverse bit in Chinese Remainder Theorem I need to solve the system of equations: $$x \equiv 13 \mod 11$$ $$3x \equiv 12 \mod 10$$ $$2x \equiv 10 \mod 6.$$ So I have reduced this to $$x \equiv 2 \mod 11$$ $$x \equiv 4 \mod 10$$ $$x \equiv 2 \mod 3$$ so now I can use CRT. So to do that, I have done $$x \equiv \{ 2 \times (30^{-1} \mod 11) \times 30 + 4 \times (33^{-1} \mod 10) \times 33 + 2 \times (110^{-1} \mod 3) \times 110 \} \mod 330$$ $$= \{ 2 (8^{-1} \mod 11) \cdot 30 + 4(3^{-1} \mod 10)\cdot33 + 2(2^{-1} \mod 3) \cdot 110 \} \mod 330$$ but now I'm stuck on what to do. What do the inverse bits mean? If I knew that, I could probably simplify the rest myself.
Consider $3 \mod 5$. If I multiply this by $2$, I get $2 \cdot 3 \mod 5 \equiv 1 \mod 5$. Thus when I multiply by $2$, I get the multiplicative identity. This means that I might call $3^{-1} = 2 \mod 5$.
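Putting the whole computation together (a Python sketch added for completeness; `pow(a, -1, m)` computes the modular inverse in Python 3.8+):

```python
moduli   = [11, 10, 3]
residues = [2, 4, 2]
M = 330                                    # 11 * 10 * 3

x = sum(r * (M // m) * pow(M // m, -1, m)
        for r, m in zip(residues, moduli)) % M
print(x)                                   # -> 134
print([x % m for m in moduli])             # -> [2, 4, 2]
```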
{ "language": "en", "url": "https://math.stackexchange.com/questions/364899", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Probability of getting exactly 2 heads in 3 coin tosses when order is not important? I have been thinking about this problem for the past 3-4 hours; I came up with it myself, so it is not a homework exercise. Let's say I have 3 coins and I toss them. Here order is not important, so the sample space should be $\{TTT, HTT, HHT, HHH\}$, i.e. outcomes with 0, 1, 2, or 3 heads ($H$ being heads), since $P(T)=P(H)=1/2$; here we have fair coins only. Since each and every outcome is equally likely, the answer should be 1/4 (is this correct?). And if that is correct, all of the probabilities don't add up to one. Will I have to do some manipulation to make them add up to one, or am I doing something wrong? EDIT In my opinion, with order being not important, there should be only 4 possible outcomes. All of the answers have ignored that condition.
The sample space has size $2^3 = 8$ and consists of triples $$ \begin{array}{*{3}{c}} H&H&H \\ H&H&T \\ H&T&H \\ H&T&T \\ T&H&H \\ T&H&T \\ T&T&H \\ T&T&T \end{array} $$ The events $$ \begin{align} \{ 0 \text{ heads} \} &= \{TTT\}, \\ \{ 1 \text{ head} \} &= \{HTT, THT, TTH\}, \end{align} $$ and I'll let you figure out the other two. The probabilities are, for example, $$ P(\{ 1 \text{ head} \}) = \frac{3}{8}. $$ This is called a binomial distribution, and the sizes of the events "got $k$ heads out of $n$ coin flips" are called binomial coefficients.
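The count can also be confirmed by listing the sample space (a short Python enumeration, added as a check):

```python
from itertools import product

outcomes = list(product("HT", repeat=3))        # 8 equally likely triples
two_heads = [o for o in outcomes if o.count("H") == 2]
print(len(two_heads), len(outcomes))            # 3 8, so P(2 heads) = 3/8
```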
{ "language": "en", "url": "https://math.stackexchange.com/questions/364986", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 7, "answer_id": 1 }
Intuitive proofs that $\lim\limits_{n\to\infty}\left(1+\frac xn\right)^n=e^x$ At this link someone asked how to prove rigorously that $$ \lim_{n\to\infty}\left(1+\frac xn\right)^n = e^x. $$ What good intuitive arguments exist for this statement? Later edit: . . . where $e$ is defined as the base of an exponential function equal to its own derivative. I will post my own answer, but that shouldn't deter anyone else from posting one as well.
$$ e^x=\lim_{m\rightarrow \infty}\left(1+\frac{1}{m}\right)^{mx} $$ Let $mx=n$, so $m=\frac{n}{x}$ $$e^x=\lim_{n\rightarrow\infty}\left(1+\frac{x}{n}\right)^n$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/365029", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "39", "answer_count": 14, "answer_id": 8 }
Radicals in a fraction: simplification I cannot for the life of me figure out how this fraction got simplified. Please tell me how the first fraction got simplified into the second one. I've provided the initial fraction and its simplified answer: $$ -\frac{p \cdot (-1 /(2\sqrt{517-p}) )}{\sqrt{517-p}} = \frac{p}{2(517-p)} $$
$$ -\frac{p \cdot \frac{-1}{2\sqrt{517-p}}}{\sqrt{517-p}} \\\\$$ $$ -\frac{p \cdot -\frac{1}{2\sqrt{517-p}}}{\sqrt{517-p}} \\\\$$ $$ --\frac{p \cdot \frac{1}{2\sqrt{517-p}}}{\sqrt{517-p}} \\\\$$ $$ \frac{p \cdot \frac{1}{2\sqrt{517-p}}}{\sqrt{517-p}} \\\\$$ $$ \frac{\frac{p}{2\sqrt{517-p}}}{\sqrt{517-p}} \\\\$$ $$ \frac{p}{2\sqrt{517-p} \cdot \sqrt{517-p}} \\\\$$ $$ \frac{p}{2(517-p)} \\\\$$ Which step does not make sense to you?
{ "language": "en", "url": "https://math.stackexchange.com/questions/365083", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Combinatorial Proof I have trouble coming up with combinatorial proofs. How would you justify this equality? $$ n\binom {n-1}{k-1} = k \binom nk $$ where $n$ is a positive integer and $k$ is an integer.
We have a group of $n$ people, and want to count the number of ways to choose a committee of $k$ people with Chair. For the left-hand side, we select the Chair first, and then $k-1$ from the remaining $n-1$ to join her. For the right-hand side, we choose $k$ people, and select one of them to be Chair.
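For what it's worth (an addition, not from the original answer), a brute-force Python check of the identity:

```python
from math import comb

# n * C(n-1, k-1) and k * C(n, k) both count committees of k with a Chair
assert all(n * comb(n - 1, k - 1) == k * comb(n, k)
           for n in range(1, 30) for k in range(1, n + 1))
print("identity verified for 1 <= k <= n < 30")
```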
{ "language": "en", "url": "https://math.stackexchange.com/questions/365166", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 4, "answer_id": 1 }
Does the sequence converge? I am trying to determine whether the sequence $a_n=(\root n\of e-1)\cdot n$ is convergent. I know that the sequences $x_n=(1+1/n)^n$ and $y_n=(1+1/n)^{n+1}$ tend to the same limit, which is $e$. Can anyone prove whether the above sequence $a_n$ is convergent and, if so, find the limit? My attempt was to write $a_n$ as $a_n=n(e^{1/n}-1)$, take $1/n=m$ so that $a_n=\frac{1}{m}(e^m-1)$, and consider the limit $\lim_{x\to 0^+}\frac{e^x-1}{x}$, but I don't know how to continue. Thanks to everyone who solves this for me.
Let $e^{1/n}-1 = x$. We then have $\dfrac1n = \log(1+x) \implies n = \dfrac1{\log(1+x)}$. Now as $n \to \infty$, we have $x \to 0$. Hence, $$\lim_{n \to \infty}n(e^{1/n}-1) = \lim_{x \to 0} \dfrac{x}{\log(1+x)} = 1$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/365237", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Does there exist a positive integer $n$ such that it will be twice of $n$ when its digits are reversed? Does there exist a positive integer $n$ such that reversing its digits gives $2n$? We define $f(n)=m$, where the digits of $m$ and $n$ are reversed, such as $f(12)=21$, $f(122)=221$, $f(10)=01=1$; so we cannot say $f(f(n))=n$, but rather $f(f(n))=n/10^k$. So we need to find a solution to $f(n)=2n$. If $f(n)=2n$ and the first digit of $n$ is 2, then the last digit of $n$ is 1 or 6, and so on. So the first digit of $n$ is even. There are some solutions to the equation $f(n)=\frac{3}{2}n$, such as $n=4356, 43956$, but there is no solution to $f(n)=2n$ when $n<10^7$. Edit: Since Alon Amit has proved that $f(n)=2n$ has no positive solution, I wonder whether $f(n)=\frac{3}{2}n$ has only finitely many solutions. Any suggestion is appreciated; thanks in advance!
There is no such integer $n$. Suppose there is, and let $b = n \bmod 10$ be its units digit (in decimal notation) and $a$ its leading digit, so $a 10^k \leq n < (a+1)10^k$ for some $k$ and $1 \leq a < 10$. Since $f(n) = 2n$ is larger than $n$, and $f(n)$ has leading digit $b$ and at most as many digits as $n$, we must have $b > a$. At the same time, $2b \equiv a \bmod 10$ because $(2b \bmod 10)$ is the units digits of $2n$ and $a$ is the units digit of $f(n)$. This means that $a$ is even, as you pointed out. * *$a$ cannot be $0$, by definition. *If $a=2$, $b$ must be $1$ (impossible since $b>a$) or $6$. But the leading digit of $2n$ can only be $4$ or $5$, since $4\times 10^k \leq 2n < 6\times 10^k$ and the right inequality is strict (in plain English, when you double a number starting with $2$, the result must start with $4$ or $5$). *If $a=4$, by the same reasoning $b$ must be $7$ which again fails to be a possible first digit for twice a number whose leading digit is $4$. *If $a=6$, we have $b=8$. Impossible since $2n$ must start with $1$. *If $a=8$, $b$ must be $9$. Impossible again, for the same reason. So no $a$ is possible, QED. Edit: The OP has further asked if $f(n) = \frac{3}{2}n$ has only finitely many solutions. The answer to that is No: Consider $n=43999...99956$ where the number of $9$'s is arbitrary. One can check that $f(n) = \frac{3}{2}n$ for those values of $n$.
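As a small addendum (my addition, not part of the original answer), a brute-force Python search corroborates both claims: no solution of $f(n)=2n$ in a large range, and the infinite family $43\,9\cdots9\,56$ for $f(n)=\frac{3}{2}n$:

```python
def rev(n):
    return int(str(n)[::-1])

# no n below 10^6 with rev(n) = 2n
print([n for n in range(1, 10**6) if rev(n) == 2 * n])   # []

# the family 43 9...9 56 solves rev(n) = 3n/2
for k in range(5):
    n = int('43' + '9' * k + '56')
    assert 2 * rev(n) == 3 * n
print("family 43(9...9)56 verified")
```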
{ "language": "en", "url": "https://math.stackexchange.com/questions/365307", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 1, "answer_id": 0 }
Every injective function is an inclusion (up to a unique bijection) Let $X$ be a set and let $A$ be a subset of $X$. Let $i:A\longrightarrow X$ be the usual inclusion of $A$ in $X$. Then $i$ is an example of an injective function. I want to show that every injective function is of this kind. More precisely: for every set $Y$ and every injective function $f:X\longrightarrow Y$, there exist a subset $B$ of $Y$ and a bijection $g:X\longrightarrow B$ such that $f$ factors through $B$, i.e. $f=j\circ g$, where $j$ is the inclusion of $B$ in $Y$. Moreover, $g$ is unique with respect to this property. I can take $B:=f(X)$ and $g:=f$ (so that $g$ is the same as $f$ as a rule, but with a different codomain) and it is easily checked that everything works. Moreover $g$ is unique, since $j\circ g=f=j\circ g'$ implies $g=g'$ by injectivity of $j$. There is something that does not convince me at all in the uniqueness part. I mean, $g$ is unique if I fix $B=f(X)$, but what about the uniqueness of $B$? Is there a $B'$, different from $B$, and a $g'$ from $X$ to $B'$ bijective, such that $j'\circ g'=f$ holds?
No. If $j' \circ g' = f$ then $j'(g'(x)) = f(x)$ for all $x \in X$. But $j'$ is the inclusion of $B'$ in $Y$, so it acts as the identity on elements of $B'$, which the $g'(x)$ are, by definition of $g' : X \rightarrow B'$. Hence $g'(x) = f(x)$ for all $x \in X$, and since $g'$ is a bijection onto $B'$, it follows that $B' = f(X)$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/365358", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
When are the sections of the structure sheaf just morphisms to affine space? Let $X$ be a scheme over a field $K$ and $f\in\mathscr O_X(U)$ for some (say, affine) open $U\subseteq X$. For a $K$-rational point $P$, I can denote by $f(P)$ the image of $f$ under the map $$\mathscr O_X(U) \to \mathscr O_{X,P} \twoheadrightarrow \mathscr O_{X,P}/\mathfrak m_P = K.$$ This yields a map $f:U(K)\to K$. Giving $U$ the induced subscheme structure, when does this uniquely define a morphism $f:U\to\mathbb A_K^1$ of schemes? It certainly works when $X$ is a variety (and $K$ algebraically closed), so there should be some "minimal" set of conditions for this interpretation to make sense. Thanks a lot in advance!
The scheme $\mathrm{Spec}(k[X])=\mathbf{A}_k^1$ is the universal locally ringed space with a morphism to $\mathrm{Spec}(k)$ and a global section (namely $X$). What I mean by this is that for any locally ringed space $X$ with a morphism to $\mathrm{Spec}(k)$ (equivalently, $\mathscr{O}_X(X)$ is a $k$-algebra) and any global section $s\in\mathscr{O}_X(X)$, there is a unique morphism of locally ringed $k$-spaces $X\rightarrow\mathbf{A}_k^1$ such that $X\mapsto s$ under the pullback map $f^*:k[X]=\mathscr{O}_{\mathbf{A}_k^1}(\mathbf{A}_k^1)\rightarrow\mathscr{O}_X(X)$. This is a special case of my answer here: on the adjointness of the global section functor and the Spec functor
{ "language": "en", "url": "https://math.stackexchange.com/questions/365447", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 3, "answer_id": 0 }
An identity related to Legendre polynomials Let $m$ be a positive integer. I believe that the following identity $$1+\sum_{k=1}^m (-1)^k\frac{P(k,m)}{(2k)!}=(-1)^m\frac{2^{2m}(m!)^2}{(2m)!}$$ where $P(k,m)=\prod_{i=0}^{k-1} (2m-2i)(2m+2i+1)$, is true, but I don't see a quick proof. Anyone?
Clearly $P(k,m) = (-1)^k 4^k \cdot (-m)_k \cdot \left(m+\frac{1}{2}\right)_k$, where $(a)_k$ stands for the Pochhammer symbol. Thus the sum on the left-hand side of your equality is $$ 1 + \sum_{k=1}^\infty (-1)^k \frac{P(k,m)}{(2k)!} = \sum_{k=0}^\infty 4^k \frac{(-m)_k \left(m+\frac{1}{2}\right)_k}{ (2k)!} = \sum_{k=0}^\infty \frac{(-m)_k \left(m+\frac{1}{2}\right)_k}{ \left(\frac{1}{2}\right)_k k!} = {}_2F_1\left( -m, m + \frac{1}{2}; \frac{1}{2}; 1 \right) $$ Per identity 07.23.03.0003.01, ${}_2F_1(-m,b;c;1) = \frac{(c-b)_m}{(c)_m}$, so: $$ {}_2F_1\left( -m, m + \frac{1}{2}; \frac{1}{2}; 1 \right) = \frac{(-m)_m}{\left(\frac{1}{2}\right)_m} = \frac{(-1)^m m!}{ \frac{(2m)!}{m! 2^{2m}} } = (-1)^m \frac{4^m}{\binom{2m}{m}} $$ The quoted identity follows as a solution of the contiguity relation for $f_m(z) = {}_2F_1(-m,b;c;z)$: $$ (m+1)(z-1) f_m(z) + (2+c+2m-z(b+m+1)) f_{m+1}(z) - (m+c+1) f_{m+2}(z) = 0 $$ Setting $z=1$ and assuming that $f_m(1)$ is finite, the recurrence relation drops in order and can be solved in terms of Pochhammer symbols.
{ "language": "en", "url": "https://math.stackexchange.com/questions/365499", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Given one primitive root, how do you find all the others? For example: if $5$ is a primitive root of $p = 23$. Since $p$ is a prime there are $\phi(p - 1)$ primitive roots. Is this correct? If so, $\phi(p - 1) = \phi(22) = \phi(2) \phi(11) = 10$. So $23$ should have $10$ primitive roots? And, to find all the other primitive roots we need powers of $5$, say $k$, such that $\gcd(k, p - 1) = d> 1$. Again, please let me know if this is true or not. So, the possible powers of $5$ are: $1, 2, 11, 22$. But this only gives four other primitive roots. So I don't think I'm on the right path.
The possible powers of $5$ are all the $k$'s such that $\gcd(k,p-1)=1$, so $k$ is in the set $\{1, 3, 5, 7, 9, 13, 15, 17, 19, 21\}$ and $5^k$ is in the set $\{5, 10, 20, 17, 11, 21, 19, 15, 7, 14\}$, which is exactly of length 10.
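A short Python sketch (my addition, not part of the original answer) that recovers this list by brute force and by the coprime-power criterion:

```python
from math import gcd

def order(g, p):
    # multiplicative order of g modulo p
    k, x = 1, g % p
    while x != 1:
        x = x * g % p
        k += 1
    return k

p = 23
roots = sorted(g for g in range(1, p) if order(g, p) == p - 1)
powers = sorted(pow(5, k, p) for k in range(1, p - 1) if gcd(k, p - 1) == 1)
print(roots == powers, roots)   # True [5, 7, 10, 11, 14, 15, 17, 19, 20, 21]
```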
{ "language": "en", "url": "https://math.stackexchange.com/questions/365584", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 4, "answer_id": 3 }
Ordered groups - examples Let $G=BS(m,n)$ denote the Baumslag–Solitar group defined by the presentation $\langle a,b: b^m a=a b^n\rangle$. Question: Find $m,n$ such that $G$ is an ordered group, i.e. $G$ is a group on which a partial order relation $\le $ is given such that for any elements $x,y,z \in G$, from $x \le y$ it follows that $xz \le yz$ and $zx \le zy$.
From Wikipedia: A group $G$ is a partially ordered group if and only if there exists a subset $H\subset G$ such that: i) $1 \in H$ ii) If $a \in H$ and $b \in H$ then $ab \in H$ iii) If $a \in H$ then $x^{-1}ax \in H$ for each $x\in G$ iv) If $a \in H$ and $a^{-1} \in H$ then $a=1$ For every $n,m$ can you find such a subset? Here's one thought: let $n=m=1$. Let $H=\{a^n:n\in\mathbb N\}$, where we include $a^0=1$. I believe this satisfies all the conditions and thus $G=BS(1,1)$ (which is the free abelian group on two generators) has a partial ordering.
{ "language": "en", "url": "https://math.stackexchange.com/questions/365660", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
to get a MDS code from a hyperoval in a projective plane Explain how we can get an MDS code of length $q+2$ and dimension $q-1$ from a hyperoval in a projective plane $PG_2(q)$ with $q$ a power of 2. HINT: a hyperoval $Q$ is a set of $q+2$ points such that no three points in $Q$ are collinear. You are expected to get a $[q+2,q-1,4]$ code over $\mathbb F_q$ from this. Take the points to be the one-dimensional subspaces and the blocks (the lines) to be the two-dimensional subspaces.
As an addition to the answer of Jyrki Lahtonen: The standard way to get projective coordinates of the points of a hyperoval over $\mathbb F_q$ is to take the vectors $[1 : t : t^2]$ with $t\in\mathbb F_q$ together with $[0 : 1 : 0]$ and $[0 : 0 : 1]$. Placing these vectors into the columns of a matrix, in the example $\mathbb F_4 = \{0,1,a,a^2\}$ (with $a^2 + a + 1 = 0$) a possible check matrix of a $[6,3,4]$ MDS code is $$ \begin{pmatrix} 1 & 0 & 0 & 1 & 1 & 1 \\ 0 & 1 & 0 & 1 & a & a^2 \\ 0 & 0 & 1 & 1 & a^2 & a \end{pmatrix} $$ This specific code is also called Hexacode.
{ "language": "en", "url": "https://math.stackexchange.com/questions/365744", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Differentiably redundant functions. I am looking for a differentiably redundant function of order 6 from the following. (a) $e^{-x} + e^{-x/ 2} \cos({\sqrt{3x} \over 2})$ (b) $e^{-x} + \cos(x)$ (c) $e^{x/2}\sin({\sqrt{3x} \over 2})$ I know that (b) has order 4, but I cannot solve for (a) and (c). It would be a huge waste of time if I took the derivatives and calculated them, so there must be a simple way to solve this. According to the book, it is related to $1/2 \pm i\sqrt{3} /2$, but why is that?
"Differentiably redundant function of order $n$" is not a standard mathematical term: this is something that GRE Math authors made up for this particular problem. Define a function $f(x)$ to be differentiably redundant of order $n$ if the $n$th derivative $f^{(n)}(x)=f(x)$ but $f^{(k)}(x)\ne f(x)$ when $k<n$. Which of the following functions is differentiably redundant of order $6$? By the way, this is not a shining example of mathematical writing: "when $k<n$" should be "when $0<k<n$" and, more importantly, $\sqrt{3x}$ was meant to be $\sqrt{3}x$ in both (A) and (C). This looks like a major typo in the book. If you are familiar with complex numbers, the appearance of both $-1/2$ and $\sqrt{3}/2$ in the same formula is quite suggestive, especially since both exponential and trigonometric functions appear here. Euler's formula $e^{it}=\cos t+i\sin t$ should come to mind. Let $\zeta=-\frac12+i\frac{\sqrt{3}}{2}$: then $$e^{-x/2}\cos \frac{\sqrt{3}x}{2} = \operatorname{Re}e^{\zeta x},\qquad e^{-x/2}\sin \frac{\sqrt{3}x}{2} = \operatorname{Im}\, e^{\zeta x}$$ Differentiating $n$ times, you get the factor of $\zeta^n$ inside of $\operatorname{Re}$ and $\operatorname{Im}$. Then you should ask yourself: what is the smallest positive integer $n$ such that $\zeta^n=1$? Helpful article.
{ "language": "en", "url": "https://math.stackexchange.com/questions/365791", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Solving for $f(n+1)$ when $f(k)$ is known for $k=0,1,...,n$ I posted earlier about polynomials, but this is a different type of problem, I think. I seem to have an answer but I mistrust it.... A polynomial $f(x)$ where deg[$f(x)$]$\le{n}$ satisfies $f(k)=2^k$ for $k=0,1,...,n$. Find $f(n+1)$. So $f(k)=2^k \Rightarrow 2^{-k}f(k)-1=0.$ Thus, there exists a constant c such that $$2^{-k}f(k)-1=cx(x-1)(x-2)...(x-n)$$ Now, let $x=n+1$. Then $$2^{-(n+1)}f(n+1)-1=c(n+1)(n+1-1)(n+1-2)...(n+1-n)=c(n+1)!$$ Therefore, $f(n+1)=2^{n+1}[c(n+1)!+1]$. Plugging in known values of k we obtain $c=0$, which just shows that $f(n+1)=2^{n+1}$. Is this right? It seems right, but I've seen another problem of the sort and it plays out differently.
You are right to mistrust your answer: it's easy to check that it's incorrect in the case $n=1$ (and, for that matter, $n=0$). The mistake you made is in concluding that $2^{-k}f(k) - 1$ must have a certain form; that expression is not a polynomial, so you can't use results about polynomials to categorize it. In fact, you're not off by that much: the answer is that $f(n+1) = 2^{n+1}-1$. (This gives the right value for $n=0$ and $n=1$, and you can check without too much difficulty that it's also right for $n=2$; that's probably enough data to suggest the pattern.) To me, the nicest way to solve this problem is to prove it by induction on $n$. For clarity, let $f_n$ denote the polynomial described in the problem for a particular $n$. Then consider $$f_n(x+1) - f_n(x).$$ Given what we know about $f_n$, we can show that this new expression is a polynomial of degree at most $n-1$, and its values at $x=0,1,\dots,n-1$ are $1,2,\dots,2^{n-1}$. In other words, $f_n(x+1) - f_n(x)$ must equal $f_{n-1}(x)$! And now the induction hypothesis should give you what you want almost immediately.
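To corroborate the stated value (an addition, not from the original answer), here is a Python sketch using exact Lagrange interpolation over the rationals:

```python
from fractions import Fraction

def interp_at(points, x):
    """Evaluate the unique polynomial of degree <= n through `points` at x."""
    total = Fraction(0)
    for i, (xi, yi) in enumerate(points):
        term = Fraction(yi)
        for j, (xj, _) in enumerate(points):
            if j != i:
                term *= Fraction(x - xj, xi - xj)
        total += term
    return total

for n in range(1, 12):
    pts = [(k, 2**k) for k in range(n + 1)]
    assert interp_at(pts, n + 1) == 2**(n + 1) - 1
print("f(n+1) = 2^(n+1) - 1 checked for n = 1..11")
```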
{ "language": "en", "url": "https://math.stackexchange.com/questions/366053", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Disjunction: Why did the inclusive "OR" become the convention? In How to Prove It by Velleman, for defining disjunctions, he gives the difference between the exclusive "OR" and the inclusive "OR". Given two statements $P$ and $Q$, the disjunction is defined for them as: * *Inclusive: One of $P$ and $Q$, or both. *Exclusive: One of $P$ and $Q$, but not both. Quoting from his book: "In mathematics, or always means inclusive or, unless specified otherwise, ..." (Velleman, 2006, p.15) My question is: why did the inclusive definition of disjunction become the convention? Was it coincidental, or is there some aspect of the inclusive definition that makes it more convenient?
George Boole, when he originally developed his Laws of Thought to apply mathematics to reasoning, used the exclusive or. However, the system was quite cumbersome in comparison to modern methods. As others took up his ideas, they found that the inclusive or was far better behaved and easier to work with. For instance, suppose we want to say "It is not the case that P or Q but not both". We get "Either it is not the case that P and not the case that Q, or it is the case that both P and Q". Contrast this with "It is not the case that P or Q or both". To negate this, we have "It is not the case that P and it is not the case that Q".
{ "language": "en", "url": "https://math.stackexchange.com/questions/366122", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 4, "answer_id": 2 }
Summation of n-squared, cubed, etc. How do you, in general, derive a formula for the summation of n-squared, n-cubed, etc.? A clear explanation with a reference would be great.
In general, if you want to derive a formula for the sum of $1^k+\cdots+n^k,$ let $$f(x)=\sum_{j=0}^{k+1}a_jx^j$$ and solve $$f(x)-f(x-1)=x^k$$ for the coefficients $a_j$. The polynomial that results will have the property that $f(n)=1^k+\cdots+ n^k$ for all positive integers $n$.
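A sketch of this method in sympy (my addition, assuming one normalizes $f(0)=0$ so the constant term vanishes):

```python
import sympy as sp

x = sp.symbols('x')
k = 2                                             # derive 1^2 + ... + n^2
coeffs = sp.symbols(f'a1:{k + 2}')                # a1..a(k+1); constant term 0 since f(0)=0
f = sum(c * x**(j + 1) for j, c in enumerate(coeffs))
# match coefficients of f(x) - f(x-1) - x^k = 0
eqs = sp.Poly(sp.expand(f - f.subs(x, x - 1) - x**k), x).all_coeffs()
sol = sp.solve(eqs, coeffs)
print(sp.factor(f.subs(sol)))                     # x*(x + 1)*(2*x + 1)/6
```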
{ "language": "en", "url": "https://math.stackexchange.com/questions/366177", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 1 }
Understanding directional derivative and the gradient I'm having trouble understanding the proof of the directional derivative and the gradient. Could someone give me an easy-to-read proof of the directional derivative and explain why the gradient points in the direction of maximum increase? Thank you very much for any help! =)
As for the gradient pointing in the direction of maximum increase, recall that the directional derivative is given by the dot product $$\nabla f(x)\cdot\textbf{u},$$ where $$\nabla f(x)$$ is the gradient at the point $\textbf{x}$ and $\textbf{u}$ is the unit vector in the direction we are considering. Recall also that this directional derivative is the rate of increase/decrease of the function in the direction of $\textbf{u}$ at the point $\textbf{x}$. The dot product has two equivalent definitions: $$\textbf{u}\cdot\textbf{v}=u_1v_1+u_2v_2+...+u_nv_n$$ or $$\textbf{u}\cdot\textbf{v}=||\textbf{u}||\,||\textbf{v}||\cos(\theta),$$ where $\theta$ is the angle between the vectors. Using this second definition, the fact that $\textbf{u}$ is a unit vector, and knowledge of trigonometry, we see that the directional derivative $D\textbf{u}$ at $\textbf{x}$ is bounded: $$-||\nabla f(x)||=\cos(\pi)||\nabla f(x)||\leq \cos(\theta)||\nabla f(x)||$$ $$\leq \cos(0)||\nabla f(x)||=||\nabla f(x)||$$ Since $0\leq||\nabla f(x)||$, we see that the maximum rate of change must occur when $\theta=0$, that is, in the direction of the gradient. As for the directional derivative, consider the direction of the vector $(2,1)$. We want to know how the value of the function changes as we move in this direction at a point $\textbf{x}$. Well, for infinitesimally small units, we are moving $2$ units in the $x$ direction and $1$ unit in the $y$ direction, so the change in the function is $2$ times the change in $f$ that we get as we move $1$ unit in the $x$ direction plus $1$ times the change in $f$ that we get as we move $1$ unit in the $y$ direction. Finally, we divide by the norm of $(2,1)$ so that we have the change for $1$ unit in this direction.
{ "language": "en", "url": "https://math.stackexchange.com/questions/366227", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 3, "answer_id": 1 }
Study the equivalence of these norms I have two Hilbert spaces $H_1$ and $H_2$ and I consider a set of functions $f$ which decompose as $f=g+h$ with $g\in H_1$ and $h\in H_2$. I know that this decomposition is unique. So I define the following norm $$\Vert f\Vert=(\Vert g\Vert_{H_1}^2+\Vert h\Vert_{H_2}^2)^{\frac{1}{2}}$$ Is this equivalent to $$|||f|||=\Vert g\Vert_{H_1}+\Vert h\Vert_{H_2}$$? I've followed this reasoning: from sublinearity of square root I have $$\Vert f\Vert\leq|||f|||$$; for the other direction I observe that $$(\Vert g\Vert_{H_1}+\Vert h\Vert_{H_2})^2\leq 2(\Vert g\Vert_{H_1}^2+\Vert h\Vert_{H_2}^2)=2\Vert f\Vert^2$$ And so $$\Vert f\Vert\leq|||f|||\leq\sqrt{2}\Vert f\Vert$$ Is it right?
Your calculation should be right - it is just the equivalence of the $1$-norm and the $2$-norm on $\mathbb R^2$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/366296", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Computing the homology groups of a given surface Let $\triangle^2=\{(x,y)\in\mathbb{R}^2\mid 0\le x,y\wedge x+y\le1\}$ (that is, a right triangle). Define the equivalence relation $(t,0) \sim (0,t)\sim (t,1-t)$. I want to compute the homology groups of $\triangle^2/\sim$. An attempt at doing so was to define $U=\{(x,y)\in\mathbb{R}^2\mid 0< x,y\wedge x+y<1\}$ and $V=\triangle^2 \setminus \{(1/3,1/3)\}$. It is clear that $U\cup V = \triangle^2$ and so Mayer–Vietoris could be useful here. Noting the following facts: * *$V$ deformation retracts onto the boundary of the triangle, and since all three edges are identified it is homeomorphic to $S^1$, so $H_2(V)=0$, $H_1(V)=\mathbb{Z}$ and $H_0(V)=\mathbb{Z}$. *$U$ is contractible and so its positive-dimensional homology groups vanish, and its zero-dimensional homology group is $\mathbb{Z}$ *$U\cap V$ is again homotopy equivalent to $S^1$ At this point it's really easy to see (using M.V.) that $H_n(\triangle^2 / \sim ) = 0$ for $n>2$ and also for $n=0$. For lower values of $n$, taking the M.V. sequence for reduced homologies and plugging in the values I already know, I get $0\to H_2(\triangle^2 / \sim ) \to \mathbb{Z} \to \mathbb{Z}\to H_1(\triangle^2 / \sim )\to 0$. This is the point where I don't know what to do next, and any help would be appreciated.
Just knowing that sequence is exact is not enough since, for example, $H_2(\Delta^2/\sim) = 0 = H_1(\Delta^2/\sim)$ and $H_2(\Delta^2/\sim) = 0, H_1(\Delta^2/\sim) = \mathbb Z/n\mathbb Z$ both work. So you need to look at the actual map $H_1(U \cap V) \to H_1(U) \oplus H_1(V) \simeq 0 \oplus H_1(V)$, which is given by the two inclusions. But $U\cap V$ is a deformation retract of $V$ so that the inclusion $U \cap V \to V$ induces an isomorphism on homology. Thus the map $H_1(U \cap V) \to H_1(U) \oplus H_1(V)$ is an isomorphism so that $H_2(\Delta^2/\sim) = 0 = H_1(\Delta^2/\sim)$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/366372", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 1, "answer_id": 0 }
Contour integration to compute $\int_0^\infty \frac{\sin ax}{e^{2\pi x}-1}\,\mathrm dx$ How to show: $$\int_{0}^{\infty}\frac{\sin ax}{e^{2\pi x}-1}dx=\frac{1}{4}\frac{e^{a}+1}{e^{a}-1}-\frac{1}{2a}$$ integrating $\dfrac{e^{iaz}}{e^{2\pi z}-1}$ around a contour formed by the rectangle whose corners are $0, R, R+i, i$ (the rectangle being indented at $0$ and $i$) and letting $R \rightarrow \infty$.
For this particular contour, the integral $$\oint_C dz \frac{e^{i a z}}{e^{2 \pi z}-1}$$ is split into $6$ segments: $$\int_{\epsilon}^R dx \frac{e^{i a x}}{e^{2 \pi x}-1} + i \int_{\epsilon}^{1-\epsilon} dy \frac{e^{i a R} e^{-a y}}{e^{2 \pi R} e^{i 2 \pi y}-1} + \int_R^{\epsilon} dx \frac{e^{-a} e^{i a x}}{e^{2 \pi x}-1} \\+ i \int_{1-\epsilon}^{\epsilon} dy \frac{ e^{-a y}}{e^{i 2 \pi y}-1} + i \epsilon \int_{\pi/2}^0 d\phi \:e^{i \phi} \frac{e^{i a \epsilon e^{i \phi}}}{e^{2 \pi \epsilon e^{i \phi}}-1}+ i \epsilon \int_{2\pi}^{3 \pi/2} d\phi\: e^{-a} e^{i \phi} \frac{e^{i a \epsilon e^{i \phi}}}{e^{2 \pi \epsilon e^{i \phi}}-1}$$ The first integral is on the real axis, away from the indent at the origin. The second integral is along the right vertical segment. The third is on the horizontal upper segment. The fourth is on the left vertical segment. The fifth is around the lower indent (about the origin), and the sixth is around the upper indent, about $z=i$. We will be interested in the limits as $R \rightarrow \infty$ and $\epsilon \rightarrow 0$. The first and third integrals combine to form, in this limit, $$(1-e^{-a}) \int_0^{\infty} dx \frac{e^{i a x}}{e^{2 \pi x}-1}$$ The fifth and sixth integrals combine to form, as $\epsilon \rightarrow 0$: $$\frac{i \epsilon}{2 \pi \epsilon} \left ( -\frac{\pi}{2}\right) + e^{-a} \frac{i \epsilon}{2 \pi \epsilon} \left ( -\frac{\pi}{2}\right) = -\frac{i}{4} (1+e^{-a}) $$ The second integral vanishes as $R \rightarrow \infty$. The fourth integral, however, does not, and must be evaluated, at least partially. We rewrite it, as $\epsilon \rightarrow 0$: $$-\frac{1}{2} \int_0^1 dy \frac{e^{-a y} e^{- i \pi y}}{\sin{\pi y}} = -\frac{1}{2} PV\int_0^1 dy \: e^{-a y} \cot{\pi y} + \frac{i}{2} \frac{1-e^{-a}}{a}$$ By Cauchy's theorem, the contour integral is zero because there are no poles within the contour. Thus, $$(1-e^{-a}) \int_0^{\infty} dx \frac{e^{i a x}}{e^{2 \pi x}-1} -\frac{i}{4} (1+e^{-a}) -\frac{1}{2} PV \int_0^1 dy \: e^{-a y} \cot{\pi y} + \frac{i}{2} \frac{1-e^{-a}}{a}=0$$ where $PV$ represents the Cauchy principal value of that integral. Now take the imaginary part of the above equation and do the algebra; the stated result follows.
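As a numerical sanity check on the final identity (my addition, not part of the original answer; the integrand is finite at the origin, so ordinary quadrature applies):

```python
import numpy as np
from scipy.integrate import quad

def lhs(a):
    # sin(ax)/(e^{2 pi x} - 1) -> a/(2 pi) as x -> 0, so start just above 0
    val, _ = quad(lambda x: np.sin(a * x) / np.expm1(2 * np.pi * x), 1e-12, np.inf)
    return val

def rhs(a):
    return 0.25 * (np.exp(a) + 1) / (np.exp(a) - 1) - 1 / (2 * a)

for a in (0.5, 1.0, 3.0):
    print(a, lhs(a), rhs(a))   # the two columns agree
```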
{ "language": "en", "url": "https://math.stackexchange.com/questions/366437", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
Intermediate value-like theorem for $\mathbb{C}$? Is there an intermediate value-like theorem for $\mathbb{C}$? I know $\mathbb {C}$ isn't ordered, but if we have a function $f:\mathbb{C}\to\mathbb{C}$ that's continuous, what can we conclude about it? Also, if we have a continuous function $g:\mathbb{C}\to\mathbb{R}$ with $g(x)>0>g(y)$, does that imply there is a $z$ "between" them satisfying $g(z)=0$? Edit: I apologize if the question is vague and confusing. I really want to ask for which definition of "between" (for example, maybe the part of the plane dividing the points), and with a relaxed definition of intermediateness for the first part, can we prove any such results?
Consider $f(x) = e^{\pi i x }$. We know that $f(0) = 1, f(1) = -1$. But for no real value $r$ between 0 and 1 is $f(r) = 0$, or even real valued. Think about how this is a 'counter-example', and what aspect of $\mathbb{C}$ did we use. It could be useful to trace out this graph.
{ "language": "en", "url": "https://math.stackexchange.com/questions/366502", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 4, "answer_id": 1 }
Geodesics of conformal metrics in complex domains Let $U$ be a non-empty domain in the complex plane $\mathbb C$. Question: what is the differential equation of the geodesics of the metric $$m=\varphi(x,y) (dx^2+dy^2)$$ where $\varphi$ is a positive function on $U$ and where $x,y$ are the usual Euclidean coordinates on $\mathbb C\simeq \mathbb R^2$? Certainly, an answer can be found in many classical textbooks. But I'm interested in the (simpler) case when $\varphi=\lvert f(z)\rvert^2$ where $f$ is a holomorphic function of $z=x+iy$. And I didn't find in the classical literature a simple differential equation characterizing geodesics for metrics of the form $$m= \lvert f(z)\, dz\rvert^2.$$ Does anyone know the answer or a good reference? Many thanks in advance.
You should be worried about the zeroes of $f$; the geodesic equation degenerates at the points where the metric vanishes. At the points where $f\ne 0$ the local structure of geodesics is indeed simple. Let $F$ be an antiderivative of $f$, that is $F'=f$. The metric $|F'(z)|^2\,|dz|^2$ is exactly the pullback of the Euclidean metric under the map $F$. Therefore, geodesics are preimages of straight lines under $F$; note that $F$ is locally a diffeomorphism because $F'\ne 0$. Stated another way, geodesics are curves of the form $\mathrm{Re}\,(e^{i\theta}F)=c$ for some $\theta,c\in\mathbb R$. If you want a differential equation for parametrized geodesics $t\mapsto z(t)$, it is $\dfrac{d}{dt}\mathrm{Re}\,(e^{i\theta}F(z(t)))=0$. Example: two orthogonal families of geodesics for $f(z)=z^2$, that is, the metric $|z|^4|dz|^2$
{ "language": "en", "url": "https://math.stackexchange.com/questions/366551", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Proving that $4 \mid m - n$ is an equivalence relation on $\mathbb{Z}$ I have been able to figure out the distinct equivalence classes. Now I am having difficulties proving the relation IS an equivalence relation. $F$ is the relation defined on $\Bbb Z$ as follows: For all $(m, n) \in \Bbb Z^2,\ m F n \iff 4 | (m-n)$ equivalence classes: $\{-8,-4,0,4,8\}, \{-7,-3,1,5,9\}, \{-6,-2,2,6,10\}$ and $\{-5, -1, 3, 7, 11\}$
1) reflexivity: $mFm $ since $4|0$ 2) symmetry: $mFn \Rightarrow nFm$ since $4|\pm (m-n)$ 3) transitivity: if $4|(m-n)$ and $4|(n-r)$ then $4|\big((m-n)+(n-r)\big)$, i.e. $4|(m-r)$
{ "language": "en", "url": "https://math.stackexchange.com/questions/366614", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Factorial primes Factorial primes are primes of the form $n!\pm1$. (In this application I'm interested specifically in $n!+1$ but any answer is likely to apply to both forms.) It seems hard to prove that there are infinitely many primes of this form, though Caldwell & Gallot were courageous enough to conjecture that there are infinitely many and to give a conjectured density ($e^\gamma\log x$ below $x$). I'm looking in the opposite direction: how many composites are there of the form $n!\pm1$? It seems 'obvious' that the fraction of numbers of this form which are composite is 1, but I cannot even prove that there are infinitely many composites of this form. Has this been proved? (Perhaps there's even a proof easy enough to relate here?) Or on the other hand, is it known to be open?
Wilson's Theorem shows there are infinitely many composites. For if $p$ is prime, then $(p-1)!+1$ is divisible by $p$, and apart from the cases $p=2$ and $p=3$, the number $(p-1)!+1$ is greater than $p$. There are related ways to produce a composite. For example, let $p$ be a prime of the form $4k+3$. Then one of $\left(\frac{p-1}{2}\right)!\pm 1$ is divisible by $p$.
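A quick Python illustration of the Wilson's-theorem construction (my addition, not from the original answer):

```python
from math import factorial

# Wilson's theorem: p divides (p-1)! + 1 for prime p, and (p-1)! + 1 > p,
# so (p-1)! + 1 is composite for p >= 5
for p in (5, 7, 11, 13):
    n = factorial(p - 1) + 1
    print(p, n, n % p == 0, n // p)   # n = p * (printed cofactor)
```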
{ "language": "en", "url": "https://math.stackexchange.com/questions/366676", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 0 }
Cylindrical coordinates where $z$ axis is not axis of symmetry. I'm a little bit uncertain of how to set up the limits of integration when the axis of symmetry of the region is not centered at $z$ (this is for cylindrical coordinates). The region is bounded by $(x-1)^2+y^2=1$ and $x^2+y^2+z^2=4$. This is my attempt: Let $x=r\cos\theta$ and $y=r\sin\theta$. We are bounded on $z$ by $\pm\sqrt{4-r^2}$. We take $0\leq r\leq 2\cos\theta$ and $-\pi/2\leq \theta \leq \pi/2$ to account for the fact that the cylinder is not centered with the $z$ axis (shift on the $x$ axis). The volume is given by $$ V = \int\limits_{-\pi/2}^{\pi/2} \int\limits_0^{2\cos\theta}\int\limits_{-\sqrt{4-r^2}}^{\sqrt{4-r^2}} dz\,(r\,dr)\,d\theta \ . $$
I prefer to visualize the cross sections in $z$. Draw a picture of various cross-sections in $z$: there is an intersection region for each $z$ You have to find the points where the cross-sections intersect: $$4-z^2=4 \cos^2{\theta} \implies \sin{\theta} = \pm \frac{z}{2} \ .$$ For $\theta \in [-\arcsin{(z/2)},\arcsin{(z/2)}]$, the cross-sectional area integral at $z$ looks like $$\int_{-\arcsin{(z/2)}}^{\arcsin{(z/2)}} d\theta \: \int_0^\sqrt{4-z^2} dr \, r = (4 -z^2)\arcsin\left(\frac{z}{2}\right) \ .$$ For $\theta \in [-\pi/2,-\arcsin{(z/2)}] \cup [\arcsin{(z/2)},\pi/2]$, this area is $$2 \int_{\arcsin{(z/2)}}^{\pi/2} d\theta \: \int_0^{2 \cos{\theta}} dr \, r = \pi - 2 \arcsin\left(\frac{z}{2}\right) -z \sqrt{1-\frac{z^2}{4}} \ . $$ To get the volume, add the above two expressions and integrate over $z \in [-2,2]$; the expression looks odd, but we are really dealing with $|z|$ : $$V = 2 \int_{0}^2 \: \left(\pi + (2-z^2) \arcsin\left( \frac{z}{2}\right) -z \, \sqrt{1-\frac{z^2}{4}}\right)\,dz = \frac{16 \pi}{3} - \frac{64}{9} \ .$$
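A numerical double integral over the original cylindrical description (an addition, not from the original answer) confirms the closed form:

```python
import numpy as np
from scipy.integrate import dblquad

# V = int_{-pi/2}^{pi/2} int_0^{2 cos(theta)} 2 sqrt(4 - r^2) * r dr dtheta
V, _ = dblquad(lambda r, th: 2 * r * np.sqrt(max(4 - r * r, 0.0)),
               -np.pi / 2, np.pi / 2, lambda th: 0.0, lambda th: 2 * np.cos(th))
print(V, 16 * np.pi / 3 - 64 / 9)   # both about 9.6444
```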
{ "language": "en", "url": "https://math.stackexchange.com/questions/366746", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
How many ways are there to add the numbers in set $k$ to equal $n$? How many ways are there to add the numbers in set $k$ to equal $n$? For a specific example, consider the following: I have infinite pennies, nickels, dimes, quarters, and loonies (equivalent to 0.01, 0.05, 0.1, 0.25, and 1, for those who are not Canadian). How many unique ways are there to add these numbers to get a loonie? Some combinations would include: $1 = 1$ $1 = 0.25 + 0.25 + 0.25 + 0.25$ $1 = 0.25 + 0.25 + 0.25 + 0.1 + 0.1 + 0.05$ And so on. Several people have suggested the coin problem, though I have yet to find a source that explains this well enough for me to understand it.
You will have luck googling for this with the phrase "coin problem". I have a few links at this solution which will lead to the general method. There is a Project Euler problem (or maybe several of them) which ask you to compute absurdly large such numbers of ways, but the programs (I found out) can be just a handful of lines.
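To make that concrete (my addition, not part of the original answer), the standard coin-change dynamic program really is just a handful of lines:

```python
def count_ways(total, coins):
    # ways[v] counts multisets of coins summing to v
    ways = [0] * (total + 1)
    ways[0] = 1
    for c in coins:
        for v in range(c, total + 1):
            ways[v] += ways[v - c]
    return ways[total]

# pennies, nickels, dimes, quarters, loonies, in cents
print(count_ways(100, [1, 5, 10, 25, 100]))   # 243, counting the single loonie itself
```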
{ "language": "en", "url": "https://math.stackexchange.com/questions/366816", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 1 }
Unification and substitutions in first-order logic I am currently learning about first-order logic and various resolution techniques. When applying a substitution $\theta$ to two sentences $\alpha$ and $\beta$, for unification purposes, aside from SUBST($\theta$, $\alpha$) = SUBST($\theta$, $\beta$), does the resulting substitution have to be unique? What I mean is, when unifying, when we check if SUBST($\theta$, $\alpha$) = SUBST($\theta$, $\beta$), is it OK if SUBST($\theta$, $\beta$) = $\beta$ ? Thank you.
The most general unifier $\theta$ is unique in the sense that given any other unifier $\phi$, $\alpha \phi$ can be obtained from $\alpha \theta$ by a substitution, and the same for $\beta$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/366860", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Combinatorics Example Problem Ten weight lifters are competing in a team weightlifting contest. Of the lifters, 3 are from the United States, 4 are from Russia, 2 are from China, and 1 is from Canada. Part 1 If the scoring takes account of the countries that the lifters represent, but not their individual identities, how many different outcomes are possible from the point of view of scores? This part I understand: 10! / [ 3! 4! 2! ] = 12,600 Part 2 (don't understand this part) How many different outcomes correspond to results in which the United States has 1 competitor in the top three and 2 in the bottom three? This part confuses me. Here you have 10 slots. The first three slots must be, in some order: US, US, [Russia, China, Canada]. The last three slots must be, in some order: US, [Russia, China, Canada], [Russia, China, Canada]. I thought the answer would be this: $\binom{3}{2} \binom{1}{1} * \frac{7!}{4!\ 3!\ 2!} $ My reasoning: In the first 3 slots, you have to pick 2 of the 3 US people. Then you only have one remaining. You have 7! ways to organize the rest but have to take out repeats, so you divide by the factorials of the repeats, which are 4, 3, and 2. But my middle term is wrong.... My research shows me answers of 2 forms but I can't understand them: Method 1: $\binom{3}{1} \binom{3}{2} * \frac{7!}{4!\ 3!\ 2!}$ In this case, I don't understand why there's a 3 in the second binomial, $\binom{3}{2}$. We already selected ONE US person so shouldn't it be $\binom{2}{2}$? Method 2: $ \binom{7}{2} \binom{3}{1} \binom{5}{4} \binom{3}{2} \binom{1}{1} $ ? Sorry for the long post. Thanks again.
For the second, you pick the slots (not the people-we said all the people from one country were interchangeable) for the two US people in the bottom ${3 \choose 2}$ ways, but then have to pick which slot the US person in the top is in, which adds a factor ${3 \choose 1}$. Then of the seven left, you just have $\frac {7!}{4!2!}$ as there are no Americans among them. So Method 1 is off by the $3!$ in the denominator. I don't understand the terms in Method 2. Added: One way to look at the first part, which I think is the way you did, is to say there are $10!$ orders of the people, but if the Americans are interchangeable you divide by $3!$, for the Russians you divide by $4!$ and for the Chinese you divide by $2!$. Another way would be to say we pick three slots for Americans in ${10 \choose 3}$ ways, then pick slots for the Russians in ${7 \choose 4}$ ways, then pick slots for the Chinese in ${3 \choose 2}$ ways, giving ${10 \choose 3}{7 \choose 4}{3 \choose 2}=120\cdot 35 \cdot 3=12600$. Of course the answer is the same, but the approach is better suited to the second part. Now if you think of choosing slots for nationalities you get what I did before.
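A brute-force enumeration (my addition, using sympy's multiset permutations) agrees with the count $\binom{3}{1}\binom{3}{2}\frac{7!}{4!\,2!}=945$:

```python
from sympy.utilities.iterables import multiset_permutations

lineup = list('UUURRRRCCN')   # 3 US, 4 Russia, 2 China, 1 Canada
total = good = 0
for arr in multiset_permutations(lineup):
    total += 1
    if arr[:3].count('U') == 1 and arr[-3:].count('U') == 2:
        good += 1
print(total, good)   # 12600, 945
```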
{ "language": "en", "url": "https://math.stackexchange.com/questions/366920", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
p-adic modular form example In Serre's paper on $p$-adic modular forms, he gives the example (in the case of $p = 2,3,5$) of $\frac{1}{Q}$ and $\frac{1}{j}$ as $p$-adic modular forms, where $Q = E_4 = 1 + 540\sum \sigma_{3}(n)q^n$ is the normalized Eisenstein series of weight 4 and $j = \frac{\Delta}{Q^3}$ is the $j$-invariant. To see this, Serre remarks that the fact that $\frac{1}{Q}$ is a $p$-adic modular form follows from the observations that $\displaystyle\frac{1}{Q} = \lim_{m\to\infty} Q^{p^m} - 1$ and that $Q = 1 \mod p$. He remarks that similarly $\frac{1}{j}$ can similarly be shown to be $p$-adic modular weight 0 and that the space of weight 0 modular forms is precisely $\mathbb{Q}_p\langle \frac{1}{j} \rangle$. Forgive my ignorance, but could someone explain these facts in detail? Perhaps I am missing something obvious, but I don't understand why the $\displaystyle \lim_{m\to\infty} Q^{p^m} - 1 = \frac{1}{Q}$ is true and how to obtain the other statements he makes.
I'm not sure this can quite be correct. The problem is that $Q^{p^m}$ is going to tend to 1, so $Q^{p^m} - 1$ tends to 0, not $1/Q$. I think you may have misread the paper and what was meant was $1/Q = \lim_{m \to \infty} Q^{(p^m - 1)}$; if you're reading Serre's paper in the Antwerp volumes, then this is an easy mistake to make given that it's all typewritten! It shouldn't be too hard to convince yourself by induction that $Q^{p^m} = 1 \bmod p^{m+1}$. As for $1/j$, we have $1/j = \Delta / Q^3$ (there is a typo in your post, you write $j = \Delta / Q^3$ which is not quite right) so once you know that $1/Q$ is $p$-adic modular it follows immediately that $1/j$ is so.
{ "language": "en", "url": "https://math.stackexchange.com/questions/366993", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10", "answer_count": 1, "answer_id": 0 }
Let $f$ be a twice differentiable function on $\mathbb{R}$. Given that $f''(x)>0$ for all $x\in \mathbb{R}$.Then which is true? Let $f$ be a twice differentiable function on $\mathbb{R}$. Given that $f''(x)>0$ for all $x\in \mathbb{R}$, which of the following is true? 1.$f(x)=0$ has exactly two solutions on $\mathbb{R}$. 2.$f(x)=0$ has a positive solution if $f(0)=0$ and $f'(0)>0$. 3.$f(x)=0$ has no positive solution if $f(0)=0$ and $f'(0)>0$. 4.$f(x)=0$ has no positive solution if $f(0)=0$ and $f'(0)<0$. My thoughts: (1) is not true, as $f(x)=x^2+1$ is a counterexample. Now suppose the conditions in (2) hold. Then $f'(x)$ is increasing everywhere, so $f'(x)$ is never zero for any positive $x$; hence $f(x)=0$ cannot have a positive solution, since otherwise $f'(x)$ would have a zero between $0$ and $x$ by Rolle's Theorem. So (3) is true. Are my arguments right?
The correct answer for the above question is option 3. The second option is not correct because it has a counterexample: $$f(x)=e^x-1$$ $f$ satisfies the conditions $f(0)=0,f'(0)>0$, but $0$ is the only solution of $f(x)=0$. Hence $f(x)=0$ has no positive solution.
{ "language": "en", "url": "https://math.stackexchange.com/questions/367059", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Cardinality of the set of bijective functions on $\mathbb{N}$? I learned that the set of all one-to-one mappings of $\mathbb{N}$ onto $\mathbb{N}$ has cardinality $|\mathbb{R}|$. What about surjective functions and bijective functions?
Choose one natural number. How many are left to choose from? More rigorously, $$\operatorname{Aut}\mathbb{N} \cong \prod_{n \in \mathbb{N}} \mathbb{N} \setminus \{1, \ldots, n\} \cong \prod_{n \in \mathbb{N}} \mathbb{N} \cong \mathbb{N}^\mathbb{N} = \operatorname{End}\mathbb{N},$$ where $\{1, \ldots, 0\} := \varnothing$. The first isomorphism is a generalization of $\#S_n = n!$ Edit: but I haven't thought it through yet, I'll get back to you.
{ "language": "en", "url": "https://math.stackexchange.com/questions/367194", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12", "answer_count": 5, "answer_id": 2 }
a multiple choice question on monotone non-decreasing real-valued function Let $f$ be a monotone non-decreasing real-valued function on $\mathbb{R}$. Then $1$. $\lim _ {x \to a}f(x)$ exists at each point $a$. $2$. If $a<b$, then $\lim _ {x \to a+}f(x) \le \lim _ {x \to b-}f(x)$. $3$. $f$ is an unbounded function. $4$. The function $g(x)=e^{-f(x)}$ is a bounded function. $2$ looks obviously true, but I need some counterexamples to disprove the others. Can anyone help me, please? Thanks for your help.
1: monotone does not necessarily mean it's continuous 3: it was never said that $f$ is strictly monotone; it could be constant 4: $g$ is non-increasing and has a lower bound never attained ($0$). It has an upper bound only if $f$ has a lower bound, which gives many counterexamples
{ "language": "en", "url": "https://math.stackexchange.com/questions/367399", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
$f: \mathbb{R} \to \mathbb{R}$ satisfies $(x-2)f(x)-(x+1)f(x-1) = 3$. Evaluate $f(2013)$, given that $f(2)=5$ The function $f : \mathbb{R} \to \mathbb{R}$ satisfies $(x-2)f(x)-(x+1)f(x-1) = 3$. Evaluate $f(2013)$, given that $f(2) = 5$.
The conditions allow you to calculate $f(x+1)$ if you know $f(x)$. Try calculating $f(3), f(4), f(5), f(6)$, and looking for a pattern.
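Since the hint says to iterate, here is a short exact computation (my addition, not part of the original answer):

```python
from fractions import Fraction

f = Fraction(5)                        # f(2) = 5
for x in range(3, 2014):
    f = (3 + (x + 1) * f) / (x - 2)    # from (x-2) f(x) = 3 + (x+1) f(x-1)
print(f)                               # 2013^3 - 2013 - 1
print(f == 2013**3 - 2013 - 1)         # True: the values fit the pattern f(n) = n^3 - n - 1
```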
{ "language": "en", "url": "https://math.stackexchange.com/questions/367473", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 4, "answer_id": 1 }
If $A$ is a diagonalizable $n\times n$ matrix for which the eigenvalues are $0$ and $1$, then $A^2=A$. If $A$ is a diagonalizable $n\times n$ matrix for which the eigenvalues are $0$ and $1$, then $A^2=A$. I know how to prove this in the opposite direction, however I can't seem to find a way prove this. Could anyone please help?
Write $A = QDQ^{-1}$, where $D$ is a diagonal matrix with the eigenvalues, $0$s and $1$s, on the diagonal. The $A^2 = QDQ^{-1}QDQ^{-1} = QD^{2}Q^{-1}$. But $D^2 = D$, because when you square a diagonal matrix you square the entries on the diagonal and $1^2 = 1$ and $0^2 = 0$. Thus $$A^{2} = QD^{2}Q^{-1} = QDQ^{-1} = A$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/367570", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 0 }
Integrating $\frac {e^{iz}}{z}$ over a semicircle around $0$ of radius $\epsilon$ I am trying to find the value of $\int_{-\infty}^{\infty} \frac{\sin (x)}{x}\,dx$ using the residue theorem and a contour with a kink around $0$. For this, I need to find $\int_{C_\epsilon} \frac {e^{iz}} {z}\,dz$ where $C_\epsilon$ is the semicircle centred at $0$ with radius $\epsilon$ from $-\epsilon$ to $\epsilon$. I guess it is equal to half the residue of $\frac {e^{iz}} {z}$ at $0$. Is this true? Any help is appreciated.
Note that $\dfrac{e^{iz}}{z}=\dfrac1z+O(1)$. Integrating this counter-clockwise around the semicircle of radius $\epsilon$ is $$ \begin{align} \int_\gamma\frac{e^{iz}}{z}\,\mathrm{d}z &=\int_0^\pi\left(\frac1\epsilon e^{-i\theta}+O(1)\right)\,\epsilon\,ie^{i\theta}\,\mathrm{d}\theta\\ &=\int_0^\pi\frac1\epsilon e^{-i\theta}\,\epsilon\,ie^{i\theta}\,\mathrm{d}\theta +\int_0^\pi O(1)\,\epsilon\,ie^{i\theta}\,\mathrm{d}\theta\\[9pt] &=\pi i+O(\epsilon)\\[12pt] \end{align} $$ The residue at $0$ is $1$, so integrating around the full circle would give $2\pi i$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/367626", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 1 }
What is the closed formula for the following summation? Is there any closed formula for the following summation? $$\sum_{k=2}^n \frac{1}{\log_2(k)}$$
There is no closed form as such. However, you can use the Abel summation technique from here to derive the asymptotic. We have (writing $\log$ for the natural logarithm; switching base from the question's $\log_2$ only rescales the sum by the constant factor $\ln 2$) \begin{align} S_n & = \sum_{k=2}^n \dfrac1{\log(k)} = \int_{2^-}^{n^+} \dfrac{d \lfloor t \rfloor}{\log(t)} = \dfrac{n}{\log(n)} - \dfrac2{\log(2)} + \int_2^{n} \dfrac{dt}{\log^2(t)}\\ & =\dfrac{n}{\log(n)} - \dfrac2{\log(2)} + \int_2^{3} \dfrac{dt}{\log^2(t)} + \int_3^{n} \dfrac{dt}{\log^2(t)}\\ &\leq \dfrac{n}{\log(n)} \overbrace{- \dfrac2{\log(2)} + \int_2^{3} \dfrac{dt}{\log^2(t)}}^{\text{constant}} + \dfrac1{\log(3)}\int_3^{n} \underbrace{\dfrac{dt}{\log(t)}}_{\leq S_n}\\ \end{align} We then get that $$S_n \leq \dfrac{n}{\log(n)} + \text{constant} + \dfrac{S_n}{\log(3)} \implies S_n \leq \dfrac{\log(3)}{\log(3)-1} \left(\dfrac{n}{\log(n)} + \text{constant}\right) \tag{$\star$}$$ With a bit more effort you can show that $$S_n \sim \dfrac{n}{\log n}$$
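A quick numerical check of the asymptotic (my addition; the convergence of the ratio is quite slow):

```python
import math

for n in (10**4, 10**5, 10**6):
    s = sum(1 / math.log(k) for k in range(2, n + 1))
    print(n, s / (n / math.log(n)))   # ratio slowly tends to 1
```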
{ "language": "en", "url": "https://math.stackexchange.com/questions/367688", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Find the Sum $1\cdot2+2\cdot3+\cdots + (n-1)\cdot n$ Find the sum $$1\cdot2 + 2\cdot3 + \cdot \cdot \cdot + (n-1)\cdot n.$$ This is related to the binomial theorem. My guess is we use the combination formula . . . $C(n, k) = n!/(k!\,(n-k)!)$ so . . . for the first term $2 = C(2,1) = 2!/(1!(2-1)!) = 2$ but I can't figure out the second term $3 \cdot 2 = 6$ . . . $C(3,2) = 3$ and $C(3,1) = 3$; I can't get it to be 6. Right now I have something like . . . $$ C(2,1) + C(3,2) + \cdot \cdot \cdot + C(n, n-1) $$ The 2nd term doesn't seem to equal 6. What should I do?
As I have been directed to teach how to fish... this is a bit clunky, but works. Define rising factorial powers: $$ x^{\overline{m}} = \prod_{0 \le k < m} (x + k) = x (x + 1) \ldots (x + m - 1) $$ Prove by induction over $n$ that: $$ \sum_{0 \le k \le n} k^{\overline{m}} = \frac{n^{\overline{m + 1}}}{m + 1} $$ When $n = 0$, it reduces to $0 = 0$. Assume the formula is valid for $n$, and: $$ \begin{align*} \sum_{0 \le k \le n + 1} k^{\overline{m}} &= \sum_{0 \le k \le n} k^{\overline{m}} + (n + 1)^{\overline{m}} \\ &= \frac{n^{\overline{m + 1}}}{m + 1} + (n + 1)^{\overline{m}} \\ &= \frac{n \cdot (n + 1)^{\overline{m}} + (m + 1) (n + 1)^{\overline{m}}} {m + 1} \\ &= \frac{(n + m + 1) \cdot (n + 1)^{\overline{m}}}{m + 1} \\ &= \frac{(n + 1)^{\overline{m + 1}}}{m + 1} \end{align*} $$ By induction, it is valid for all $n$. Defining falling factorial powers: $$ x^{\underline{m}} = \prod_{0 \le k < m} (x - k) = x (x - 1) \ldots (x - m + 1) $$ you get a similar formula for the sum: $$ \sum_{0 \le k \le n} k^{\underline{m}} $$ You can see that $x^{\overline{m}}$ (respectively $x^{\underline{m}}$) is a monic polynomial of degree $m$, so any integral power of $x$ can be expressed as a combination of appropriate factorial powers, and so sums of polynomials in $k$ can also be computed with some work. By the way, the binomial coefficient: $$ \binom{\alpha}{k} = \frac{\alpha^{\underline{k}}}{k!} $$
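A small Python check of the rising-factorial sum and of the sum asked about in the question (my addition, not part of the original answer):

```python
from math import prod

def rising(x, m):
    return prod(x + i for i in range(m))   # x (x+1) ... (x+m-1)

n = 25
for m in range(1, 5):
    assert sum(rising(k, m) for k in range(n + 1)) == rising(n, m + 1) // (m + 1)

# the original sum: 1*2 + 2*3 + ... + (n-1)*n = (n-1) n (n+1) / 3
assert sum(k * (k + 1) for k in range(1, n)) == (n - 1) * n * (n + 1) // 3
print("formulas verified")
```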
{ "language": "en", "url": "https://math.stackexchange.com/questions/367749", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 2 }
Evaluating a trigonometric integral using residues Finding the trigonometric integral using the method of residues: $$\int_0^{2\pi} \frac{d\theta}{ a^2\sin^2 \theta + b^2\cos^2 \theta} = \frac{2\pi}{ab}$$ where $a, b > 0$. I can't seem to factor this; I got up to $\frac{(4/i)\,z}{b^2(z^2 + 1)^2 - a^2(z^2 - 1)^2}$. I think I should be pulling $a^2$ and $b^2$ out earlier, but I'm not too sure how.
Letting $z = e^{i\theta},$ we get $$\int_0^{2\pi} \frac{1}{a^2\sin^2\theta + b^2\cos^2\theta} d\theta = \int_{|z|=1} \frac{1}{iz} \frac{4}{-a^2(z-1/z)^2+b^2(z+1/z)^2} dz \\= \int_{|z|=1} \frac{1}{iz} \frac{4z^2}{-a^2(z^2-1)^2+b^2(z^2+1)^2} dz = -i\int_{|z|=1} \frac{4z}{-a^2(z^2-1)^2+b^2(z^2+1)^2} dz.$$ Now the location of the four simple poles of the new integrand is given by $$z_{0, 1} = \pm \sqrt{\frac{a+b}{a-b}} \quad \text{and} \quad z_{2, 3} = \pm \sqrt{\frac{a-b}{a+b}}.$$ We now restrict ourselves to the case $a > b > 0,$ so that only $z_{2,3}$ are inside the contour. The residues $w_{2,3}$ are given by $$ w_{2,3} =\lim_{z\to z_{2,3}} \frac{4z}{-2a^2(2z)(z^2-1)+2b^2(2z)(z^2+1)} = \lim_{z\to z_{2,3}} \frac{1}{-a^2(z^2-1)+b^2(z^2+1)} \\ = \lim_{z\to z_{2,3}} \frac{1}{z^2(b^2-a^2)+b^2+a^2}= \frac{1}{-(a-b)^2+b^2+a^2} = \frac{1}{2ab}.$$ It follows that the value of the integral is given by $$-i \times 2\pi i \times 2 \times \frac{1}{2ab} = \frac{2\pi}{ab}.$$ The case $b > a > 0$ is left to the reader. (In fact it is not difficult to see that even though in this case the poles are complex, it is once more the poles $z_{2,3}$ that are inside the unit circle.)
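For reassurance (my addition, not from the original answer), a direct numerical quadrature matches $2\pi/(ab)$:

```python
import numpy as np
from scipy.integrate import quad

for a, b in [(1, 2), (3, 5), (0.5, 4)]:
    val, _ = quad(lambda t: 1 / (a**2 * np.sin(t)**2 + b**2 * np.cos(t)**2),
                  0, 2 * np.pi)
    print(val, 2 * np.pi / (a * b))   # the two columns agree
```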
{ "language": "en", "url": "https://math.stackexchange.com/questions/367798", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
On a long proof On wikipedia there is a claim that the Abel–Ruffini theorem was nearly proved by Paolo Ruffini, and that his proof spanned over $500$ pages, is this really true? I don't really know much abstract algebra, and I know that the length of a paper will vary due to the size of the font, but what could possibly take $500$ pages to explain? Did he have to introduce a new subject part way through the paper or what? It also says Niels Henrik Abel published a proof that required just six pages, how can someone jump from $500$ pages to $6$?
Not only true, but not unique. The abc conjecture has a recent (2012) proposed proof by Shinichi Mochizuki that spans over 500 pages, over 4 papers. The record is the classification of finite simple groups which consists of tens of thousands of pages, over hundreds of papers. Very few people have read all of them, although the result is important and used frequently.
{ "language": "en", "url": "https://math.stackexchange.com/questions/367869", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "53", "answer_count": 2, "answer_id": 0 }
Difficulties performing Laurent Series expansions to determine Residues The following problems are from Brown and Churchill's Complex Variables, 8ed. From §71 concerning Residues and Poles, problem #1d: Determine the residue at $z = 0$ of the function $$\frac{\cot(z)}{z^4} $$ I really don't know where to start with this. I had previously tried expanding the series using the composition of the series expansions of $\cos(z)$ and $\sin(z)$ but didn't really achieve any favorable outcomes. If anyone has an idea on how I might go about solving this please let me know. For the sake of completion, the book lists the solution as $-1/45$. From the same section, problem #1e Determine the residue at $z = 0$ of the function $$\frac{\sinh(z)}{z^4(1-z^2)} $$ Recognizing the following expressions: $$\sinh(z) = \sum_{n=0}^{\infty} \frac{z^{(2n+1)}}{(2n +1)!}$$ $$\frac{1}{1-z^2} = \sum_{n=0}^{\infty} (z^2)^n $$ I have expanded the series thusly: $$\begin{aligned} \frac{\sinh(z)}{z^4(1-z^2)} &= \frac{1}{z^4} \bigg(\sum_{n=0}^{\infty} \frac{z^{(2n+1)}}{(2n +1)!}\bigg) \bigg(\sum_{n=0}^{\infty} (z^2)^n\bigg) \\ &= \bigg(\sum_{n=0}^{\infty} \frac{z^{2n - 3}}{(2n +1)!}\bigg) \bigg(\sum_{n=0}^{\infty} z^{2n-4} \bigg) \\\end{aligned} $$ I don't really know where to go from here. Any help would be great. Thanks.
A related problem. Let's consider your first problem $$ \frac{\cot(z)}{z^4}=\frac{\cos(z)}{z^4\sin(z)}. $$ First, determine the order of the pole of the function at the point $z=0$, which, in this case, is of order $5$. Once the order of the pole has been determined, we can use the formula $$r = \frac{1}{4!} \lim_{z\to 0} \frac{d^4}{dz^4}\left( z^5\frac{\cos(z)}{z^4\sin(z)} \right)=-\frac{1}{45}. $$ Note that the general formula for computing the residue of $f(z)$ at a point $z=z_0$ with a pole of order $n$ is $$r = \frac{1}{(n-1)!} \lim_{z\to z_0} \frac{d^{n-1}}{dz^{n-1}}\left( (z-z_0)^n f(z) \right) $$ Note: If $z=z_0$ is a pole of order one of $f(z)$, then the residue is $$ r = \lim_{z\to z_0}(z-z_0)f(z). $$
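Both residues can be checked in sympy (my addition, not part of the original answer):

```python
import sympy as sp

z = sp.symbols('z')
print(sp.residue(sp.cot(z) / z**4, z, 0))                   # -1/45
print(sp.residue(sp.sinh(z) / (z**4 * (1 - z**2)), z, 0))   # 7/6, the residue for problem 1e
```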
{ "language": "en", "url": "https://math.stackexchange.com/questions/367940", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Calculate the probability? A random function $rand()$ returns an integer between $1$ and $k$, each with probability $\frac{1}{k}$. After $n$ calls we obtain a sequence $\{b_i\}_{i=1}^n$, where $1\leq b_i\leq k$. Set $\mathbb{M}=\{b_1\}\cup\{b_2\}\cdots \cup\{b_n\}$. I want to know the probability that $\mathbb{M}\neq \{1, 2,\cdots, k\}$.
Hint: Obviously $n<k$ is trivial. Thereafter, the question becomes equivalent to solving What fraction of $n$-tuples with the digits $1,\ldots,k$ are in fact $n$-tuples formed from a strict subset of these numbers? or a surjection-counting problem. You can find a recursive solution by letting $p_{n,k}$ be the probability that a sequence of $n$ digits from $\{1,\ldots,k\}$ contains all $k$ values, and then considering the last digit of a sequence leading to $(n+1)$-tuples. Edit: I should make clear that a recursive 'solution' essentially is the best you can do, which is why I called it that! The numbers don't have a closed form. (See e.g. https://mathoverflow.net/questions/27071/counting-sequences-a-recurrence-relation for a discussion, once you've worked out the recursive form yourself.)
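Combining the hint with inclusion-exclusion gives a concrete count of surjective sequences, checkable by brute force for small $n,k$ (my addition, not part of the original answer):

```python
from itertools import product
from math import comb

n, k = 6, 4
# surjections from n draws onto all k values, by inclusion-exclusion
surj = sum((-1)**j * comb(k, j) * (k - j)**n for j in range(k + 1))
p_missing = 1 - surj / k**n                        # P(M != {1, ..., k})

# brute-force check over all k^n equally likely sequences
brute = sum(len(set(s)) < k for s in product(range(k), repeat=n)) / k**n
print(p_missing, brute)                            # identical
```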
{ "language": "en", "url": "https://math.stackexchange.com/questions/368033", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
How to prove every closed interval in R is compact? Let $[a,b]\subseteq \mathbb R$. As we know, it is compact. This is a very important result. However, the proof of the result may not be familiar to us. Here I want to collect the ways to prove $[a,b]$ is compact. Thanks for your help and any links.
Perhaps the shortest, slickest proof I know is by Real Induction. See Theorem 17 in the aforelinked note, and observe that the proof is six lines. More importantly, after you spend an hour or two familiarizing yourself with the formalism of Real Induction, the proof essentially writes itself.
{ "language": "en", "url": "https://math.stackexchange.com/questions/368108", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "64", "answer_count": 12, "answer_id": 8 }
Where does the relation $\nabla^2(1/r)=-4\pi\delta^3({\bf r})$ between Laplacian and Dirac delta function come from? It is often quoted in physics textbooks for finding the electric potential using Green's function that $$\nabla ^2 \left(\frac{1}{r}\right)=-4\pi\delta^3({\bf r}),$$ or more generally $$\nabla ^2 \left(\frac{1}{|| \vec x - \vec x'||}\right)=-4\pi\delta^3(\vec x - \vec x'),$$ where $\delta^3$ is the 3-dimensional Dirac delta distribution. However I don't understand how/where this comes from. Would anyone mind explaining?
Perhaps the simplest way to see it is the following. First, $$ \nabla ^2 \left(\frac{1}{r}\right) = \nabla \cdot \nabla \left( \frac 1 r \right) = \nabla \cdot \frac {-\mathbf {e_r}} {r^2}. $$ Consider a sphere of radius $r$ centered at the origin; the total flux of this field through its surface is $$ \text {Total flux} = 4 \pi r^2 \cdot \frac {-1} {r^2} = -4 \pi, $$ independent of $r$. Writing $v(r)$ for the volume of the sphere, the divergence at the origin is, by definition, $$ \lim_{\text {volume} \to 0} \frac {\text {Total flux}} {\text {Volume}} = \lim_{v(r) \to 0} \frac {-4 \pi} {v(r)}, $$ which diverges. So $$ \lim_{r \to 0} \left[ \nabla ^2 \left(\frac{1}{r}\right) \right] = \infty, \qquad \lim_{r\to 0} \int \nabla ^2 \left( \frac 1 r \right) dv = \lim_{r\to 0}\int \frac {-4 \pi} {v(r)} \, dv = -4\pi. $$ A direct computation shows the Laplacian is zero everywhere except at the origin (see the check below). So $\nabla^2(1/r)$ vanishes away from $\mathbf r = 0$, blows up at $\mathbf r = 0$, and integrates to $-4\pi$: these are precisely the defining properties of $-4\pi\delta^3(\mathbf r)$, hence $$ \nabla ^2 \left(\frac{1}{r}\right)=-4\pi\delta^3({\bf r}). $$
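The claim that the Laplacian vanishes away from the origin is a routine computation; here is a SymPy sketch of it (my own addition, assuming SymPy is available):

```python
import sympy as sp

x, y, z = sp.symbols('x y z', real=True)
r = sp.sqrt(x**2 + y**2 + z**2)

# Laplacian of 1/r in Cartesian coordinates
lap = sp.diff(1/r, x, 2) + sp.diff(1/r, y, 2) + sp.diff(1/r, z, 2)
print(sp.simplify(lap))  # 0, valid wherever r != 0
```

The distributional content of the identity, the $-4\pi\delta^3(\mathbf r)$, lives entirely at the one point this computation cannot see.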
{ "language": "en", "url": "https://math.stackexchange.com/questions/368155", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "41", "answer_count": 4, "answer_id": 3 }
Method of Characteristics $au_x+bu_y+u_t=0 $ $au_x+bu_y+u_t=0$, $u(x,y,0)=g(x,y)$; solve for $u(x,y,t)$. Our professor talked about solving this using the Method of Characteristics. However, I am confused about this method. Since it's the weekend, I think it might be faster to get a response here. In the lecture, he wrote down the following: Fix a point $(x,y,t)$ in $\mathbb{R}^3$. $h(s)=u(x+as,y+bs,t+s)$, along the line $φ(s)=(x+as,y+bs,t+s)=(x,y,t)+s(a,b,1)$. $h'(s)=u_xa+u_yb+u_t=0$ for all $s$. $h(-t)=u(x-at,y-bt,0)=g(x-at,y-bt)$ <----- $u$ equals this value for all points on the line $(x+as,y+bs,t+s)$. $h(0)=u(x,y,t)$, so $u(x,y,t)=g(x-at,y-bt)$. The first question I have is: why do we want to parametrize $x$, $y$ and $t$ this way? In addition, what is the characteristic system of this problem? If we have derived the formula already, why do we still need the characteristic system of equations? Thank you!
I think it's easiest just to concisely re-explain the method, so that's what I'll do. The idea: linear, first-order PDEs have preferred lines (generally curved) along which all the action happens. More specifically, because the differential bit takes the form of $\mathbf f \cdot \nabla u$ where in general $\mathbf f$ varies, it is actually always a directional derivative along the vector field $\mathbf f$. Therefore, along a line given by $\dot{\mathbf x}(s) = \mathbf f$, we expect the PDE to reduce to an ODE involving $\mathrm d u(\mathbf x(s))/ \mathrm d s$. In fact, by the chain rule, $\mathrm d u(\mathbf x(s))/ \mathrm d s = \dot{\mathbf x}(s) \cdot\nabla u = \mathbf f \cdot\nabla u$ which is exactly the term we said was in the PDE. So $\mathbf f \cdot\nabla u = h(\mathbf x)$ is equivalent to $$\mathrm d u(\mathbf x(s))/ \mathrm d s = h(\mathbf x(s))$$ Therefore, by finding $\mathbf x(s)$ we can find the ODE $u$ satisfies, and find the initial conditions relevant to each line by saying that at $s=0$ we are on the space where the initial conditions are given. Does this help? If you have a more specific question, ask away!
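If seeing the conclusion verified helps, here is a small SymPy sketch (my own illustration, not part of the lecture) confirming that $u(x,y,t)=g(x-at,\,y-bt)$ satisfies $au_x+bu_y+u_t=0$ for an arbitrary differentiable profile $g$:

```python
import sympy as sp

x, y, t, a, b = sp.symbols('x y t a b')
g = sp.Function('g')  # arbitrary (differentiable) initial profile

u = g(x - a*t, y - b*t)

# a*u_x + b*u_y + u_t should vanish identically, by the chain rule
pde = a*sp.diff(u, x) + b*sp.diff(u, y) + sp.diff(u, t)
print(sp.simplify(pde))  # 0
```

At $t=0$ the formula also reproduces the initial condition $u(x,y,0)=g(x,y)$, which is the other half of what the characteristics argument delivers.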
{ "language": "en", "url": "https://math.stackexchange.com/questions/368230", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 0 }
Rank of matrix of order $2 \times 2$ and $3 \times 3$ How can I calculate the rank of a matrix using the echelon method? $(a)\;\; \begin{pmatrix} 1 & -1\\ 2 & 3 \end{pmatrix}$ $(b)\;\; \begin{pmatrix} 2 & 1\\ 7 & 4 \end{pmatrix}$ $(c)\;\; \begin{pmatrix} 2 & 1\\ 4 & 2 \end{pmatrix}$ $(d)\;\; \begin{pmatrix} 2 & -3 & 3\\ 2 & 2 & 3\\ 3 & -2 & 2 \end{pmatrix}$ $(e)\;\; \begin{pmatrix} 1 & 2 & 3\\ 3 & 6 & 9\\ 1 & 2 & 3 \end{pmatrix}$ I know how to use the determinant method to calculate the rank of a given matrix, but the exercise asks for the echelon form. Please explain in detail. Thanks
Follow this link to find your answer. If you are left with any doubt after reading it, feel free to discuss.
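In the meantime, here is a concrete illustration (my own sketch, using SymPy as one possible tool) of the echelon method on matrix $(d)$: row-reduce and count the pivots.

```python
import sympy as sp

M = sp.Matrix([[2, -3, 3],
               [2,  2, 3],
               [3, -2, 2]])

rref_form, pivot_cols = M.rref()  # reduced row-echelon form + pivot columns
print(rref_form)                  # here: the 3x3 identity matrix
print(len(pivot_cols))            # number of pivots = rank = 3
print(M.rank())                   # built-in shortcut agrees
```

The same recipe gives rank $2$ for $(a)$ and $(b)$, and rank $1$ for $(c)$ and $(e)$, since in those last two every row is a multiple of the first row.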
{ "language": "en", "url": "https://math.stackexchange.com/questions/368322", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Finding the limit of function - exponential one Find the value of $\displaystyle \lim_{x \rightarrow 0}\left(\frac{1+5x^2}{1+3x^2}\right)^{\frac{1}{\large {x^2}}}$ We can write this limit function as: $$\lim_{x \rightarrow 0}\left(1+ \frac{2x^2}{1+3x^2}\right)^{\frac{1}{\large{x^2}}}$$ Please guide me on how to proceed with such limits.
Write it as $$\dfrac{(1+5x^2)^{1/x^2}}{(1+3x^2)^{1/x^2}}$$ and recall that $$\lim_{y \to 0} (1+ay)^{1/y} = e^a$$ to conclude what you want.
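As a quick cross-check (a sketch of my own, assuming SymPy is available), the two factors give $e^5/e^3 = e^2$:

```python
import sympy as sp

x = sp.symbols('x', positive=True)
expr = ((1 + 5*x**2) / (1 + 3*x**2))**(1/x**2)

print(sp.limit(expr, x, 0))                       # exp(2)
print(expr.subs(x, sp.Rational(1, 100)).evalf())  # ~7.38, close to e^2 = 7.389...
```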
{ "language": "en", "url": "https://math.stackexchange.com/questions/368382", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 0 }
Permutations of Symmetric Group of Order 3 Find an example, in the group $S_3$ of permutations of $\{1,2,3\}$, of elements $x,y\in S_3$ for which $x^2 = e = y^2$ but for which $(xy)^4$ $\not=$ e.
This group is isomorphic to the dihedral group of order 6. Every reflection in that group has order two, and the product of two distinct reflections is a nontrivial rotation, which necessarily has order 3. So take $x$ and $y$ to be two distinct reflections (i.e., transpositions in $S_3$): then $x^2 = e = y^2$, while $(xy)^4 = xy \neq e$ because $xy$ has order 3. See the check below for a concrete pair.
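Concretely (a sketch using SymPy's permutation tools; the pair $x=(1\,2)$, $y=(1\,3)$ is just one of several choices that work, written zero-indexed in the code):

```python
from sympy.combinatorics import Permutation

x = Permutation([1, 0, 2])  # swaps positions 0 and 1: the transposition (1 2)
y = Permutation([2, 1, 0])  # swaps positions 0 and 2: the transposition (1 3)

xy = x * y
print(x.order(), y.order())  # 2 2
print(xy.order())            # 3
print(xy**4 == xy)           # True: (xy)^4 = xy, which is not the identity
```

Because $xy$ has order 3, $(xy)^4 = (xy)^3(xy) = xy \neq e$, as required.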
{ "language": "en", "url": "https://math.stackexchange.com/questions/368433", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 5, "answer_id": 4 }
Construct matrix Let $B$ be any square matrix. Is it possible to construct an invertible matrix $Q_B$ such that $$\|Q_BBQ_B^{-1}\|_2\ \leq\ \rho(B)?$$ Thanks in advance for the help. Edit: $Q_B$ only needs to be invertible, not orthogonal.
Since $\rho(B)=\rho(Q_BBQ_B^{-1})$ and $\rho(A)\le\|A\|_2$ for every $A$, you are essentially asking whether $B$ is similar to some $A$ such that $\|A\|_2=\rho(A)$. This is possible if and only if every eigenvalue of $B$ of maximal modulus $\rho(B)$ is semisimple, i.e. all of its Jordan blocks are $1\times 1$. For the 'if' direction: $B$ is similar to $D\oplus N$, where $D$ is diagonal and carries the eigenvalues of modulus $\rho(B)$, and $\rho(N)<\rho(B)$. Conjugating each Jordan block of $N$ by $\operatorname{diag}(1,\varepsilon,\varepsilon^2,\dots)$ rescales its off-diagonal $1$'s to $\varepsilon$, so for small enough $\varepsilon$ the resulting block-diagonal matrix $A$ satisfies $\|A\|_2=\rho(B)$. For the 'only if' direction: if $\|A\|_2=\rho(A)=\rho>0$ but some eigenvalue $\lambda$ with $|\lambda|=\rho$ had a Jordan block of size $\ge 2$, then $\|A^n\|_2\ge c\,n\rho^{\,n-1}$ for some $c>0$, contradicting $\|A^n\|_2\le\|A\|_2^n=\rho^n$; and if $\rho=0$, the only matrix with $\|A\|_2=0$ is $A=0$. In particular, the construction always works when $B$ is diagonalizable: writing $B=PDP^{-1}$ and taking $Q_B=P^{-1}$ gives $Q_BBQ_B^{-1}=D$ diagonal, whose spectral norm is $\max_i|\lambda_i|=\rho(B)$. It fails, e.g., for a single nontrivial Jordan block.
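A quick numerical illustration of the diagonalizable case (a sketch of my own; the seed and dimension are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
B = rng.standard_normal((4, 4))  # a random matrix is generically diagonalizable

eigvals, P = np.linalg.eig(B)    # columns of P are eigenvectors: B = P D P^{-1}
A = np.linalg.inv(P) @ B @ P     # numerically diagonal

print(np.linalg.norm(A, 2))      # spectral norm of the conjugated matrix
print(max(abs(eigvals)))         # spectral radius of B: the two agree
```

Here $Q_B = P^{-1}$, so the conjugation $Q_B B Q_B^{-1} = P^{-1} B P$ is (up to round-off) the diagonal matrix of eigenvalues.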
{ "language": "en", "url": "https://math.stackexchange.com/questions/368526", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
How do I write a trig function that includes inverses in terms of another variable? It's been a while since I've used trig and I feel stupid asking this question, lol, but here goes: Given: $z = \tan(\arcsin(x))$ Question: How do I write something like that in terms of $x$? Thanks! And sorry for my dumb question.
$$z = \tan[\arcsin(x)]$$ $$\arctan(z) = \arctan[\tan(\arcsin(x))] = \arcsin(x)$$ $$\sin[\arctan(z)] = \sin[\arcsin(x)] = x$$ (The middle step uses the fact that $\arcsin(x) \in (-\pi/2, \pi/2)$, where $\arctan \circ \tan$ is the identity.) If you want the inverse-trig-free forms: for $\theta = \arcsin(x)$ we have $\sin\theta = x$ and $\cos\theta = \sqrt{1-x^2}$, so $$z = \frac{x}{\sqrt{1-x^2}}, \qquad x = \frac{z}{\sqrt{1+z^2}}.$$
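A CAS will confirm the simplification; for instance, with SymPy (a sketch of my own):

```python
import sympy as sp

x = sp.symbols('x')
print(sp.simplify(sp.tan(sp.asin(x))))  # x/sqrt(1 - x**2)
```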
{ "language": "en", "url": "https://math.stackexchange.com/questions/368603", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Uniform limit of holomorphic functions Let $\{f_n\}$ be a sequence of holomorphic functions defined on a domain $D \subset \mathbb{C}$. Suppose that there exists $f$ such that $f_n \to f$ uniformly. My question is: is it true that $f$ is holomorphic too?
You've already seen an approach using Morera's theorem from the other excellent answers. For a slightly more concrete demonstration of why $f$ is complex differentiable, you can use the fact that every $f_n$ satisfies Cauchy's integral formula, so by uniform convergence $f$ also satisfies Cauchy's integral formula. This allows you to differentiate $f$ by differentiating the integral.
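Concretely, the formula in question is the following (standard material, spelled out here for completeness): for any closed disc $\overline{D(z_0,r)}\subset D$ and any $z$ with $|z-z_0|<r$, $$f_n(z)=\frac{1}{2\pi i}\oint_{|\zeta-z_0|=r}\frac{f_n(\zeta)}{\zeta-z}\,d\zeta \;\longrightarrow\; \frac{1}{2\pi i}\oint_{|\zeta-z_0|=r}\frac{f(\zeta)}{\zeta-z}\,d\zeta,$$ where passing to the limit under the integral is legitimate by uniform convergence on the circle. The limit therefore equals $f(z)$, and differentiating under the integral sign gives $$f'(z)=\frac{1}{2\pi i}\oint_{|\zeta-z_0|=r}\frac{f(\zeta)}{(\zeta-z)^2}\,d\zeta,$$ so $f$ is complex differentiable at every point of $D$.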
{ "language": "en", "url": "https://math.stackexchange.com/questions/368664", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "32", "answer_count": 4, "answer_id": 3 }
Analytic functions of a real variable which do not extend to an open complex neighborhood Do such functions exist? If not, is it appropriate to think of real analytic functions as "slices" of holomorphic functions?
If $f$ is real analytic on an open interval $(a,b)$, then at every point $x_0\in (a,b)$ there is a power series $P_{x_0}(x)=\sum_{n=0}^\infty a_n(x-x_0)^n$ with radius of convergence $r(x_0)>0$ such that $f(x)=P_{x_0}(x)$ for all $x$ in $(a,b)\cap \{x:|x-x_0|<r(x_0)\}$. Then $f$ can be extended to an open neighborhood $B(x_0,r(x_0))$ of $x_0$ in $\mathbb{C}$ by the power series $P_{x_0}(z)$. Now let $O\subset \mathbb{C}$ be the union of the open balls $B(x_0,r(x_0))$, $x_0\in (a,b)$. Define $F(z)$, $z\in O$, such that $F(z)=P_{x_0}(z)$ if $z\in B(x_0,r(x_0))$ (for some $x_0\in (a,b)$). This is well-defined since any two analytic functions agreeing on a set with an accumulation point in a connected open set must be identically equal (and the intersection of two such balls is connected and meets $\mathbb{R}$ in an interval). So $F$ is an extension of $f$ to an open set in $\mathbb{C}$ containing $(a,b)$. Since every open set in $\mathbb{R}$ is a countable union of disjoint open intervals, $f$ can be so extended whenever its domain is open in $\mathbb{R}$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/368781", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 1 }
Let A = {a, c, 4, {4, 3}, {1, 4, 3, 3, 2}, d, e, {3}, {4}}. Which of the following is true?
Yes, you are correct. None of the options are true. $\{4, \{4\}\}$ is a subset of $A$, since $4 \in A$ and $\{4\}\in A$, but the set with which it is paired is not a subset of $A$, and none of the items listed as "elements of $A$" are, in fact, elements of $A$. For example, $\{1, 4, 3\} \not \subset A$ because $1 \notin A$ (nor is $3$). It is also the case that $\{1, 4, 3\} \notin A$, because $A$ does not contain the set $\{1, 4, 3\}$ as an element. Hence, none of the options $A),\, B),\,$ or $\, C)$ are true. That leaves only $(D)$ as being true. Be careful though: $\{1, 4, 3, 3, 2\} = \{1, 2, 3, 4\} \in A$, but $\{1, 4, 3, 3, 2\} = \{1, 2, 3, 4\} \not \subset A$. That is, the set itself is an element of $A$. It is true, however, that $\{\{1, 4, 3, 3, 2\}\} = \{\{1, 2, 3, 4\}\} \subset A$. That is, the subset of $A$ containing the element $\{1, 4, 3, 3, 2\}$ is a subset of $A$. Do you understand the difference?
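The element-versus-subset distinction can also be poked at directly in Python (a toy model of my own; frozensets stand in for the set-valued elements, since Python sets cannot contain mutable sets):

```python
A = {'a', 'c', 4, frozenset({4, 3}), frozenset({1, 4, 3, 3, 2}),
     'd', 'e', frozenset({3}), frozenset({4})}

print(4 in A)                         # True: 4 is an element
print(frozenset({4}) in A)            # True: {4} is an element
print({4, frozenset({4})} <= A)       # True: {4, {4}} is a subset of A
print(3 in A)                         # False: {3} is an element, but 3 is not
print(frozenset({1, 2, 3, 4}) in A)   # True: {1,4,3,3,2} = {1,2,3,4}
print({1, 2, 3, 4} <= A)              # False: 1, 2, 3 are not elements of A
```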
{ "language": "en", "url": "https://math.stackexchange.com/questions/368846", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
normed division algebra Can we prove that every division algebra over $\mathbb{R}$ or $\mathbb{C}$ is a normed division algebra? Or is there an example of a division algebra on which it is not possible to define a norm? The definition of a normed division algebra is here. Thanks!
The Frobenius theorem for (finite-dimensional) associative real division algebras states there are only $\mathbb{R},\mathbb{C},\mathbb{H}$, and the proof is elementary (it is given on Wikipedia in fact). If you don't care about finite-dimensional, then the transcendental field extension $\mathbb{R}(T)/\mathbb{R}$, where $\mathbb{R}(T)$ is the field of real-coefficient rational functions in the variable $T$, is a division algebra (it is a commutative field) but cannot carry a norm. Indeed, there are no infinite-dimensional normed division algebras. Finally, there are real division algebras that are not $\mathbb{R},\mathbb{C},\mathbb{H}$ or $\mathbb{O}$ (which are the only normed ones), which means there are division algebras that cannot carry a norm. It is still true that they all must have dimension $1,2,4$ or $8$ (see section 1.1 of Baez's The Octonions), but there are (for example) uncountably many isomorphism classes of two-dimensional real division algebras (unfortunately I don't have a reference for this handy).
{ "language": "en", "url": "https://math.stackexchange.com/questions/368925", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Why $f^{-1}(f(A)) \not= A$ Let $A$ be a subset of the domain of a function $f$. Why can $f^{-1}(f(A)) \not= A$? I was not able to find a function $f$ for which the two sides differ. Can you give an example or a hint? I am asking for an example function, which is not addressed here
Any noninjective function provides a counterexample. To be more specific, let $X$ be any set with at least two elements, $Y$ any nonempty set, $u$ in $X$, $v$ in $Y$, and $f:X\to Y$ defined by $f(x)=v$ for every $x$ in $X$. Then $A=\{u\}\subset X$ is such that $f(A)=\{v\}$ hence $f^{-1}(f(A))=X\ne A$. In general, for $A\subset X$, $A\subset f^{-1}(f(A))$ but the other inclusion may fail except when $f$ is injective. Another example: define $f:\mathbb R\to\mathbb R$ by $f(x)=x^2$ for every $x$. Then, $f^{-1}(f(A))=A\cup(-A)$ for every $A\subset\mathbb R$. For example, $A=[1,2]$ yields $f^{-1}(f(A))=[-2,-1]\cup[1,2]$.
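The second example is easy to poke at with a few lines of Python (a toy illustration of my own, restricting the domain to a finite set so the preimage can be enumerated):

```python
def f(x):
    return x * x  # not injective on a symmetric domain

domain = range(-3, 4)  # stands in for a piece of R
A = {1, 2}

image = {f(a) for a in A}                        # f(A) = {1, 4}
preimage = {x for x in domain if f(x) in image}  # f^{-1}(f(A)) = {-2, -1, 1, 2}

print(image, preimage)
print(A <= preimage, preimage == A)  # True False: A is a strict subset
```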
{ "language": "en", "url": "https://math.stackexchange.com/questions/368990", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 4, "answer_id": 2 }
Understanding the Hamiltonian function Based on this problem: $$\text{max} \int_0^2(-2tx-u^2) \, dt$$ We know that $$(1) \;-1 \leq u \leq 1, \; \; \; (2) \; \dot{x}=2u, \; \; \; (3) \; x(0)=1, \; \; \; \text{x(2) is free}$$ I can write down the Hamiltonian: $$H=-2tx-u^2+p2u$$ where $u(t)$ maximizes $H$, with: \begin{equation} u = \left\{\begin{array}{rc} 1 & p \geq 1 \\ p & -1 < p < 1 \\ -1 & p \leq -1 \end{array}\right. \end{equation} Now, can somebody help me understand how the last part is true, and why? I find it hard to see the bridge between $u$ and $p$.
$$ \frac{\partial H}{\partial u} = -2u + 2p \tag{1} $$ where $u$ is the control variable and $p$ is the costate. Since $H$ is concave in $u$, its unconstrained maximizer is found by setting (1) to zero, which gives $u = p$. When $|p| < 1$ this maximizer is feasible, so $u = p$; when $|p| \geq 1$ it lies outside $[-1, 1]$, and concavity means $H$ is monotone in $u$ up to the boundary, so the constrained maximizer is the nearer endpoint: $u = 1$ for $p \geq 1$ and $u = -1$ for $p \leq -1$. That is exactly the piecewise expression for $u(t)$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/369096", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Positive Outcome! I have a question on probability. Two people duel with six-sided dice; the winner is the one who rolls higher. The chances of winning are 50% since the terms are the same for both, but the person taking bets keeps a 10% commission. What is the best way to bet in this type of scenario? I'll give an illustration: I bet 100,000 and win, so the host gives me 180,000, which is 2x the original bet minus 10% of the payout. What is the best way to make a long-term profit? Currently I am using the martingale system, but I nearly went broke in a losing streak, as follows: I bet 200,000 and lost; then I bet 400,000 and lost; then I bet 900,000 and lost; and finally I bet 2,000,000 and won. But the payout was 3,600,000, which exceeded the 3,500,000 total of my bets by only 100,000. What system should I use?
To maximize your long-term growth rate, you should use Kelly gambling, which says the fraction of your bankroll to stake is proportional to the edge the odds give you. The intuitive form of the formula is $$ f^{*} = \frac{\text{expected net winnings}}{\text{net winnings if you win}} $$ If the odds are even (i.e. your scenario, but without the commission), the Kelly criterion says not to bet at all. The addition of a commission for the casino gives a negative result, which means you'll make a loss in the long term, whatever your strategy. So the short answer: playing this game is a losing proposition whatever your strategy. If it weren't, casinos wouldn't stay in business very long. The martingale system only works if there is no ceiling to the amount you can bet (remember that the bets increase exponentially, and there is only so much money in the world). If you do use it, each bet must at least be large enough that a win covers your existing losses and leaves a small profit.
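Plugging your numbers into the Kelly formula (a sketch of my own; the function name is invented): winning returns 1.8x your stake, i.e. net odds of 0.8 per unit staked, and the win probability is 0.5.

```python
def kelly_fraction(p_win, net_odds):
    """Kelly criterion: optimal fraction of bankroll to stake.

    p_win    -- probability of winning
    net_odds -- net winnings per unit staked on a win
                (1.8x returned on a 1x stake => net_odds = 0.8)
    """
    return (net_odds * p_win - (1 - p_win)) / net_odds

print(kelly_fraction(0.5, 1.0))  # 0.0    fair even-money game: no edge, no bet
print(kelly_fraction(0.5, 0.8))  # -0.125 negative edge: never bet
```

The negative value for your game is Kelly's way of saying the edge is against you; the only growth-optimal stake is zero.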
{ "language": "en", "url": "https://math.stackexchange.com/questions/369275", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Hyperbolic Functions Hey everyone, I need help with questions on hyperbolic functions. I was able to do part (a). I proved it for $\sinh(3y)$ by doing this: \begin{align*} \sinh(3y) &= \sinh(2y +y)\\ &= \sinh(2y)\cosh(y) + \cosh(2y)\sinh(y)\\ &= 2\sinh(y)\cosh(y)\cosh(y) + (\cosh^2(y)+\sinh^2(y))\sinh(y)\\ &= 2\sinh(y)(1+\sinh^2(y)) + (1+\sinh^2(y) + \sinh^2(y))\sinh(y)\\ &= 2\sinh(y) + 2\sinh^3(y) + \sinh(y) +2\sinh^3(y)\\ &= 4\sinh^3(y) + 3\sinh(y). \end{align*} Therefore, $0 = 4\sinh^3(y) + 3\sinh(y) - \sinh(3y)$. I have no clue what to do for part (b) and part (c), but I do see similarities between part (a) and part (b), as you can substitute $x = \sinh(y)$. But yeah, I'm stuck, and help would be very much appreciated.
Hint 1: Set $\color{#C00000}{x=\sinh(y)}$. Since $0=4\sinh^3(y)+3\sinh(y)-\sinh(3y)$, we have $$ 4x^3+3x-\sinh(3y)=0 $$ and by hypothesis, $$ 4x^3+3x-2=0 $$ So, if $\color{#C00000}{\sinh(3y)=2}$, both equations match. Solve for $x$. Hint 2: Set $c\,x=\sinh(y)$ for appropriate $c$.
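Carrying Hint 1 through (on the assumption, matching the hint, that part (b) asks for the real root of $4x^3+3x-2=0$): put $x=\sinh(y)$, so the equation becomes $\sinh(3y)=2$, giving $3y=\operatorname{arsinh}(2)=\ln(2+\sqrt5)$ and $$x=\sinh\!\left(\tfrac13\ln(2+\sqrt5)\right)=\tfrac12\left[(2+\sqrt5)^{1/3}-(2+\sqrt5)^{-1/3}\right].$$ In fact $(2+\sqrt5)^{1/3}=\tfrac{1+\sqrt5}{2}$ (cube it to check), so this collapses to $x=\tfrac12$, and indeed $4\left(\tfrac12\right)^3+3\cdot\tfrac12=2$.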
{ "language": "en", "url": "https://math.stackexchange.com/questions/369339", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Intuition behind the difference between derived sets and closed sets? I missed the lecture from my Analysis class where my professor talked about derived sets. Furthermore, nothing about derived sets is in my textbook. Upon looking in many topology textbooks, few even have the term "derived set" in their index, and many books only say "$A'$ is the set of limit points of $A$". But I am really confused about the difference between $A'$ and $\bar{A}$. For example, let $A=\{(x,y)∈ \mathbb{R}^{2} ∣x^2+y^2<1\}$; certainly $(1,0) \in A'$, but shouldn't $(0,0)∈A'$ too? In this way it would seem that $A \subseteq A'$. The definition of a limit point is a point $x$ such that every neighborhood of $x$ contains a point of $A$ other than $x$ itself. Then wouldn't $(0,0)$ fit this criterion? If I am wrong, why? And if I am not, can someone please give me some more intuitive examples that clearly illustrate the subtle difference between $A'$ and $\bar{A}$?
The key to the difference is the notion of an isolated point. If $X$ is a space, $A\subseteq X$, and $x\in A$, $x$ is an isolated point of $A$ if there is an open set $U$ such that $U\cap A=\{x\}$. If $X$ is a metric space with metric $d$, this is equivalent to saying that there is an $\epsilon>0$ such that $B(x,\epsilon)\cap A=\{x\}$, where $B(x,\epsilon)$ is the open ball of radius $\epsilon$ centred at $x$. It’s not hard to see that $x$ is an isolated point of $A$ if and only if $x\in A$ and $x$ is not a limit point of $A$. This means that in principle $\operatorname{cl}A$ contains three kinds of points: * *isolated points of $A$; *points of $A$ that are limit points of $A$; and *points of $X\setminus A$ that are limit points of $A$. If $A_I$ is the set of isolated points of $A$, $A_L$ is the set of limit points of $A$ that are in $A$, and $L$ is the set of limit points of $A$ that are not in $A$, then * *$A=A_I\cup A_L$; *$A'=A_L\cup L$; and *$\operatorname{cl}A=A_I\cup A_L\cup L$. In particular, if $A$ has no isolated points, so that $A_I=\varnothing$, then $A'=\operatorname{cl}A$. If, on the other hand, $A$ consists entirely of isolated points, like the sets $\Bbb Z$ and $\left\{\frac1n:n\in\Bbb Z^+\right\}$ in $\Bbb R$, then $A_L=\varnothing$, $A'=L$, and $\operatorname{cl}A=A\cup L$.
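To connect this back to your example: for the open disc $A=\{(x,y):x^2+y^2<1\}$, every point of $A$ is a limit point of $A$ (you are right that $(0,0)\in A'$), so $A_I=\varnothing$ and $A'=\operatorname{cl}A=\{(x,y):x^2+y^2\le 1\}$. At the other extreme, for $A=\left\{\frac1n:n\in\Bbb Z^+\right\}\subseteq\Bbb R$ every point is isolated, so $A_I=A$, $A_L=\varnothing$, and $L=\{0\}$; hence $A'=\{0\}$ while $\operatorname{cl}A=A\cup\{0\}$, and the two sets differ dramatically.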
{ "language": "en", "url": "https://math.stackexchange.com/questions/369427", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 2, "answer_id": 0 }
Correct way to calculate a complex integral? I have $$ \int_{[-i,i]} \sin(z)\,dz $$ Parametrizing the segment $[-i,i]$ I have, if $t\in[0,1]$ $$ z(t) = it + (1-t)(-i) = 2it-i, \quad \dot{z}(t) = 2i. $$ So $$ \int_{[-i,i]} \sin(z)\,dz = \int_0^1 \sin(2it-i)2i\, dt = -\cos(2it-i)|_0^1 = 0. $$ Am I correct?
Sine is an entire (analytic) function, so by Cauchy's theorem the integral is path-independent and you can evaluate it with an antiderivative: $$ \int_{[-i, i]} \sin z\ dz = -\left . \cos z \right |_{-i}^i = -\cos(i) + \cos(-i) = 0, $$ since cosine is even. So yes, your computation is correct.
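For the skeptical, a one-line symbolic check (a sketch of my own, using the slightly simpler but equivalent parametrization $z = it$, $t \in [-1, 1]$, of the same segment):

```python
import sympy as sp

t = sp.symbols('t', real=True)
z = sp.I * t                       # parametrize [-i, i]
integrand = sp.sin(z) * sp.diff(z, t)

print(sp.integrate(integrand, (t, -1, 1)))  # 0
```

Indeed $\sin(it)\cdot i = -\sinh(t)$, an odd function, so the integral over $[-1,1]$ vanishes by symmetry.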
{ "language": "en", "url": "https://math.stackexchange.com/questions/369488", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }