Q | A | meta
---|---|---
Invertibility of a linear transformation Given $T : \mathbb{R}^3 \to \mathbb{R}^3$ such that $T(x_1,x_2,x_3) = (3x_1,x_1-x_2,2x_1+x_2+x_3)$. Show that $(T^2-I)(T-3I) = 0$.
Solution 1: I can very easily write down the matrix representing $T$, calculate each of the terms in each set of parentheses, and multiply the two matrices to show that they equal zero.
Though this would show the desired statement, I think it's cumbersome and perhaps might be missing the point of the exercise. So:
Other Attempt: I can multiply out the above expression to get that $T^3-IT+(T^2)(-3I)+3I^2=0$. Now we can easily verify that $T$ is invertible by simply looking at the kernel and seeing it is $(x_1,x_2,x_3)=0$. Now what I want to conclude is that the matrix product above is also in the kernel since we have that each linear component is invertible, and so their sum should be, and the statement is almost trivially proven.
The problem is I'm not really comfortable with saying that since each linear component is invertible its sum must be, and I am having trouble showing this concretely.
Could anyone provide me with a small insight?
Thanks.
| Let $A=T^2-I$ and $B=T-3I$, and note that $A$ and $B$ commute, since both are polynomials in $T$. Because of this, proving that $A\cdot B = 0$ amounts to showing that every vector of some basis of $\Bbb R^3$ lies in $\text{ker}(A)\cup\text{ker}(B)$: for $v\in\text{ker}(B)$ we have $ABv=A(Bv)=0$, while for $v\in\text{ker}(A)$ we have $ABv=BAv=B(Av)=0$.
An easy calculation shows that $\text{ker}(B)=\langle (8,2,9) \rangle$. Now observe that
$$
\{v_1=(8,2,9),v_2=(0,1,0),v_3=(0,0,1)\}
$$
is a basis for $\Bbb R^3$. Then it is enough to show that $A(v_i)=T^2v_i-v_i=0$, i.e. $T^2v_i=v_i$, for $i=2,3$.
$$
\begin{align*}
T^2v_2 &= T(0,-1,1) = (0,1,-1+1) = v_2\\
T^2v_3 &= T(0,0,1) = (0,0,1) = v_3
\end{align*}
$$
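As a quick numerical sanity check of the identity (not a replacement for the argument above; this sketch assumes NumPy is available):

```python
import numpy as np

# Matrix of T(x1, x2, x3) = (3x1, x1 - x2, 2x1 + x2 + x3) in the standard basis
T = np.array([[3, 0, 0],
              [1, -1, 0],
              [2, 1, 1]], dtype=float)
I = np.eye(3)

# (T^2 - I)(T - 3I) should be the zero matrix
print((T @ T - I) @ (T - 3 * I))
```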
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/331215",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
What is $\int\frac{dx}{\sin x}$? I'm looking for the antiderivatives of $1/\sin x$. Is there even a closed form of the antiderivatives? Thanks in advance.
| Hint: Write this as $$\int \frac{\sin (x)}{\sin^2 (x)} dx=\int \frac{\sin (x)}{1-\cos^2(x)} dx.$$ Now let $u=\cos(x)$, and use the fact that $$\frac{1}{1-u^2}=\frac{1}{2(1+u)}+\frac{1}{2(1-u)}.$$
Added: I want to give credit to my friend Fernando who taught me this approach.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/331307",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 4,
"answer_id": 2
} |
Arrange $n$ men and $n$ women in a row, alternating women and men. A group contains n men and n women. How many ways are there to arrange these people in a row if the men and women alternate?
I got as far as:
There are $n$ [MW] blocks. So there are $n!$ ways to arrange these blocks.
There are $n$ [WM] blocks. So there are $n!$ ways to arrange these blocks.
Which makes the total ways to arrange the men and women in alternating blocks $2(n!)$
The correct answer is $2(n!)^2$
Where does the second power come from?
| You added where you needed to multiply. You're going to arrange $n$ men AND $n$ women in a row, not $n$ men OR $n$ women, so you've got $n!$ ways to do one task and $n!$ ways to do the other, making $(n!)^2$.
But after that there's this issue: Going from left to right, is the first person a man or a woman? You can do it either way, so you have $(n!)^2 + (n!)^2$.
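For a small empirical check of the count (purely illustrative; this brute force uses only Python's standard library and labels the people `M0, …, W0, …`):

```python
from itertools import permutations
from math import factorial

def alternating_count(n):
    """Count the rows of n men and n women in which the genders alternate."""
    people = [f"M{i}" for i in range(n)] + [f"W{i}" for i in range(n)]
    return sum(
        all(row[i][0] != row[i + 1][0] for i in range(2 * n - 1))
        for row in permutations(people)
    )

for n in (1, 2, 3):
    print(n, alternating_count(n), 2 * factorial(n) ** 2)   # the two counts agree
```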
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/331365",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 4,
"answer_id": 1
} |
The epsilon-delta definition of continuity As we know the epsilon-delta definition of continuity is:
For given $$\varepsilon > 0\ \exists \delta > 0\ \text{s.t. } 0 < |x - x_0| < \delta \implies |f(x) - f(x_0)| < \varepsilon $$
My question: Why wouldn't this work if the implication would be:
For given $$\varepsilon > 0\ \exists \delta > 0\ \text{s.t. } |f(x) - f(x_0)| < \varepsilon \implies 0 < |x - x_0| < \delta ?$$
| Consider the implications of using this definition for any constant function (which should all be continuous, if any function is to be continuous).
*
*In particular, for $c \in \mathbb{R}$ consider the constant function $f(x) = c$. Given $x_0 \in \mathbb{R}$, taking $\varepsilon = 1$, note that for any $\delta > 0$ if $x = x_0 + \delta$ we have that
*
*$| f(x) - f(x_0) | = | c - c | = 0 < 1 = \varepsilon$; but
*$| x - x_0 | = |(x_0 + \delta ) - x_0 | = \delta \not< \delta$.
Therefore the implication $| f(x) - f(x_0) | < \varepsilon \rightarrow | x - x_0 | < \delta$ does not hold. It follows that the function $f$ does not satisfy the given property.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/331445",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "27",
"answer_count": 7,
"answer_id": 0
} |
Missing dollar problem This sounds silly but I saw this and I couldn't figure it out so I thought you could help.
The below is what I saw.
You see a top you want to buy for $\$97$, but you don't have any money so you borrow $\$50$ from your mom and $\$50$ from your dad. You buy the top and have $\$3$ change, you give your mom $\$1$,your dad $\$1$, and keep $\$1$ for yourself. You now owe your mom $\$49$ and your dad $\$49$.
$\$49 + \$49 = \$98$ and you kept $\$1$. Where is the missing $\$1$?
| top = 97
from each parent: 48.50
48.50 + 48.50 = 97
remainder from each parent = 1.50 x 2 = 3.00
3.00 - (1.00 to each parent = 2) - (1.00 for yourself = 1) = 0
owed to each parent: 48.50 [97]
giving one dollar to each parent: 49.50 [99]
check your pocket for the remaining dollar. [100]
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/331556",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9",
"answer_count": 4,
"answer_id": 1
} |
Question 2.1 of Bartle's Elements of Integration The problem 2.1 of Bartle's Elements of Integration says:
Give an example of a function $f$ on $X$ to $\mathbb{R}$ which is not
$\boldsymbol{X}$-measurable, but is such that the functions $|f|$ and
$f^2$ are $\boldsymbol{X}$-measurable.
But, if one defines $f^{+}:= \max\{f(x), 0\}$ and $f^{-}:=\max\{-f(x),0\}$, then $f = f^{+} - f^{-}$ and $|f| = f^{+}+f^{-}$. Also, $f$ is $\boldsymbol{X}$-measurable iff $f^{+}$ and $f^{-}$ are measurable. Therefore, $|f|$ is measurable iff $f$ is measurable. Is the problem wrong?
Edit:
Here $X$ is a set and $\boldsymbol{X}$ a $\sigma$-algebra over $X$.
| Take any non-empty measurable set $U$ in $X$, and any non-empty non-measurable set $V$ in $U$. Let $f$ be the function that sends $U$ to $1$, except for the points in $V$, which are sent to $-1$. In other words, $f(x) = 1_{U\backslash V} - 1_V$.
Now $|f| =f^2 = 1_U$.
I see now that Bunder has written a similar idea (that I don't think is fleshed out yet as I see it so far).
EDIT (to match the edited question)
You ask:
But, if one defines $f^+:=\max\{f(x),0\}$ and $f^−:= \max\{−f(x),0\}$, then $f=f^+−f^−$ and $|f|=f^++f^−$. Also, $f$ is $X$-measurable iff $f^+$ and $f^−$ are measurable. Therefore, $|f|$ is measurable iff $f$ is measurable. Is the problem wrong?
What makes you think that $f$ is measurable iff $f^+$ and $f^-$ are measurable? In my example above, $f^+ = 1_{U\backslash V}$ and $f^- = 1_V$, neither of which is measurable. This is clear for $1_V$. To see that $f^+$ here is not measurable, note that $f^+ + 1_V = 1_U$, which is trivially measurable; so if $f^+$ were measurable, then $1_V = 1_U - f^+$ would be measurable as well, a contradiction.
In other words, it is not true that $f$ measurable iff $f^+, f^-$ measurable.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/331612",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Cantor set on the circle Draw a Cantor set C on the circle and consider the set A of all the chords between points of C. Prove that A is compact.
| $C$ is compact as it's closed and bounded. Then, $A$ is compact as it's the image of the compact set $C\times C\times [0,1]$ under the continuous map $\phi: {\Bbb R}^2\times {\Bbb R}^2\times [0,1]\to {\Bbb R}^2$ given by $\phi(x,y,\lambda)= \lambda x + (1-\lambda )y$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/331694",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
What Kind of Geometric Object Is Represented By An Equation I'm trying to understand the solution to a particular problem but can't seem to figure it out.
What kind of geometric object is represented by the equation:
$$(x_1, x_2, x_3, x_4) = (2,-3,1,4)t_1 + (4,-6,2,8)t_2$$
The answer is: a line in (1 dimensional subspace) in $\mathbb{R}^4$ that passes through the origin and is parallel to $u = (2,-3,1,4) = \frac{1}{2} (4,-6,2,8)$
I'm thinking it has something to do with the fact that $x_1 = 2x_3$, $x_2 = -3x_3$, $x_4 = 4x_3$, $x_3 = t_1+2t_2$ so the solution is $x_3$, which is a line (I think) but why would this line be in a 1-D subspace? What does that actually mean?
There's also another example:
$$(x_1, x_2, x_3, x_4) = (3, -2, 2, 5)t_1 + (6, -4, 4, 0)t_2$$
The answer: a plane in (2 dimensional subspace) in $\mathbb{R}^4$ that passes through origin and is parallel to $u = (3,-2,2,5)$ and $v = (6,-4,4,0)$
For this one, I don't even know why this object is a plane..
Can someone help connect the dots for me? Thanks!
| The first one is a line because the vector $(4,-6,2,8)$ is twice the vector $(2,-3,1,4)$. Thus your collection of points is just the collection of all points of the form $(t_1+2t_2)(2,-3,1,4)$. So it is the collection of all points of the form $t(2,-3,1,4)$. The multiples of a non-zero vector are just a line through the origin.
In the second example, you are getting the set of all linear combinations of $2$ linearly independent vectors in $4$-dimensional space. Whether to call this a plane is a matter of taste. If we think of a $2$-dimensional subspace of $\mathbb{R}^n$ as a plane, then it certainly qualifies.
Similarly, if you are given a set of $3$ linearly independent vectors $\{v_1,v_2,v_3\}$ in $\mathbb{R}^4$, then the set of all points of the form $t_1v_1+t_2v_2+t_3v_3$ is a $3$-dimensional subspace of $\mathbb{R}^4$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/331762",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Limit involving a hypergeometric function I am new to hypergeometric function and am interested in evaluating the following limit:
$$L(m,n,r)=\lim_{x\rightarrow 0^+} x^m\times {}_2F_1\left(-m,-n,-(m+n);1-\frac{r}{x}\right)$$
where $n$ and $m$ are non-negative integers, and $r$ is a positive real constant.
However, I don't know where to start. I did have Wolfram Mathematica symbolically evaluate this limit for various values of $m$, and the pattern seems to suggest the following expression for $L(m,n,r)$:
$$L(m,n,r)=r^m\prod_{i=1}^m\frac{n-i+1}{n+i}$$
which one can re-write using the Pochhammer symbol notation as follows:
$$L(m,n,r)=r^mn\frac{(n)_m}{n^{(m)}}$$
If the above is in fact correct, I am interested in learning how to derive it using "first principles" as opposed to the black box that is Wolfram Mathematica. I am really confused by the definition of the hypergeometric function ${}_2F_1(a,b,c;z)$, as the definition that uses the Pochhammer symbol on the Wikipedia page excludes the case that I have, where $c$ is a non-positive integer. Any help would be appreciated.
| OK, let's start with an integral representation of that hypergeometric:
$$_2F_1\left(-m,-n,-(m+n);1-\frac{r}{z}\right) \\= \frac{1}{B(-n,-m)} \int_0^1 dx \: x^{-(n+1)} (1-x)^{-(m+1)} \left[1-\left(1-\frac{r}{z}\right)x\right]^m$$
where $B$ is the beta function. Please do not concern yourself with poles involved in gamma functions of negative numbers for now: I will address this below.
As $z \rightarrow 0^+$, we find that
$$\begin{align}\lim_{z \rightarrow 0^+} z^m\ _2F_1\left(-m,-n,-(m+n);1-\frac{r}{z}\right) &= \frac{r^m}{B(-n,-m)} \int_0^1 dx \: x^{-(n-m+1)} (1-x)^{-(m+1)}\\ &= r^m \frac{B(-(n-m),-m)}{B(-n,-m)}\\ &= r^m \lim_{\epsilon \rightarrow 0^+} \frac{\Gamma(-n+m+\epsilon) \Gamma(-n-m+\epsilon)}{\Gamma(-n+\epsilon)^2} \end{align} $$
Note that, in that last line, I used the definition of the Beta function, along with a cautionary treatment of the Gamma function near negative integers, which are poles. (I am assuming that $n>m$.) The nice thing is that we have ratios of these Gamma function values, so the singularities will cancel and leave us with something useful.
I use the following property of the Gamma function (see Equation (41) of this reference):
$$\Gamma(x) \Gamma(-x) = -\frac{\pi}{x \sin{\pi x}}$$
Also note that, for small $\epsilon$
$$\sin{\pi (n-\epsilon)} \approx (-1)^{n+1} \pi \epsilon$$
Putting this all together (I leave the algebra as an exercise for the reader), I get that
$$\lim_{z \rightarrow 0^+} z^m \ _2F_1\left(-m,-n,-(m+n);1-\frac{r}{z}\right) = r^m \frac{n!^2}{(n+m)! (n-m)!}$$
which I believe is equivalent to the stated result.
That last statement is readily seen from writing out the product above:
$$\prod_{i=1}^m \frac{n-(i-1)}{n+1} = \frac{n}{n+1} \frac{n-1}{n+2}\frac{n-2}{n+3}\ldots\frac{n-(m-1)}{n+m}$$
The numerator of the above product is
$$\frac{n!}{(n-m)!}$$
and the denominator is
$$\frac{(n+m)!}{n!}$$
The result follows.
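A numerical check of the final formula is easy, because the hypergeometric series terminates at $k=m$, before the non-positive lower parameter causes any trouble. The sketch below (plain Python; the helper names are made up for the illustration) evaluates the terminating sum directly for one choice of $m,n,r$ and a small $x$:

```python
from math import factorial

def poch(a, k):
    """Pochhammer symbol (a)_k = a (a+1) ... (a+k-1)."""
    out = 1.0
    for j in range(k):
        out *= a + j
    return out

def f21_terminating(m, n, c, z):
    """2F1(-m, -n, c; z) as a terminating sum (valid since (-m)_k = 0 for k > m)."""
    return sum(poch(-m, k) * poch(-n, k) / poch(c, k) * z**k / factorial(k)
               for k in range(m + 1))

m, n, r = 3, 7, 2.5
x = 1e-9
approx = x**m * f21_terminating(m, n, -(m + n), 1 - r / x)
exact = r**m * factorial(n)**2 / (factorial(n + m) * factorial(n - m))
print(approx, exact)   # the two values should agree to several digits
```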
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/331833",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 1,
"answer_id": 0
} |
Prove inequality: $28(a^4+b^4+c^4)\ge (a+b+c)^4+(a+b-c)^4+(b+c-a)^4+(a+c-b)^4$ Prove: $28(a^4+b^4+c^4)\ge (a+b+c)^4+(a+b-c)^4+(b+c-a)^4+(a+c-b)^4$ with $a, b, c \ge0$
I can do this by: $EAT^2$ (expand all of the thing)
*
*$(x+y+z)^4={x}^{4}+{y}^{4}+{z}^{4}+4\,{x}^{3}y+4\,{x}^{3}z+6\,{x}^{2}{y}^{2}+6\,{x}^{2}{z}^{2}+4\,x{y}^{3}+4\,x{z}^{3}+4\,{y}^{3}z+6\,{y}^{2}{z}^{2}+4\,y{z}^{3}+12\,x{y}^{2}z+12\,xy{z}^{2}+12\,{x}^{2}yz$
*$(x+y-z)^4={x}^{4}+{y}^{4}+{z}^{4}+4\,{x}^{3}y-4\,{x}^{3}z+6\,{x}^{2}{y}^{2}+6\,{x}^{2}{z}^{2}+4\,x{y}^{3}-4\,x{z}^{3}-4\,{y}^{3}z+6\,{y}^{2}{z}^{2}-4\,y{z}^{3}-12\,x{y}^{2}z+12\,xy{z}^{2}-12\,{x}^{2}yz$
...
$$28(a^4+b^4+c^4)\ge (a+b+c)^4+(a+b-c)^4+(b+c-a)^4+(a+c-b)^4\\
\iff a^4 + b^4 + c^4 \ge a^2b^2+c^2a^2+b^2c^2 \text{ (which clearly holds by AM-GM)}$$
but are there any other, smarter ways?
| A nice way of tackling the calculations might be as follows:$$~$$
Let $x=b+c-a,y=c+a-b,z=a+b-c.$ Then the original inequality is just equivalent with
$$\frac74\Bigl((x+y)^4+(y+z)^4+(z+x)^4\Bigr)\geq x^4+y^4+z^4+(x+y+z)^4.$$
Now we can use the identity
$$\sum_{cyc}(x+y)^4=x^4+y^4+z^4+(x+y+z)^4-12xyz(x+y+z),$$
so that it suffices to check that
$$\frac37\Bigl(x^4+y^4+z^4+(x+y+z)^4\Bigr)\geq 12xyz(x+y+z),$$
which obviously follows from the AM-GM inequality: $(x+y+z)^4\geq \Bigl(3(xy+yz+zx)\Bigr)^2\geq 27xyz(x+y+z)$ and $x^4+y^4+z^4\geq xyz(x+y+z).$
Equality holds in the original inequality iff $x=y=z\iff a=b=c.$
$\Box$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/331954",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 0
} |
Why is the tensor product constructed in this way? I've already asked about the definition of tensor product here and now I understand the steps of the construction. I'm just in doubt about the motivation to construct it in that way. Well, if all that we want is to have tuples of vectors that behave linearly on addition and multiplication by scalar, couldn't we just take all vector spaces $L_1, L_2,\dots,L_p$, form their cartesian product $L_1\times L_2\times \cdots \times L_p$ and simply introduce operations analogous to that of $\mathbb{R}^n$ ?
We would get a space of tuples of vectors on which all those linear properties are obeyed. What's the reason/motivation to define the tensor product using the free vector space and that quotient to impose linearity? Can someone point out the motivation for that definition?
Thanks very much in advance.
| When I studied tensor products, I was lucky to find this wonderful article by Tom Coates. Starting with very simple functions on the product space, he explains the intuition behind tensor products very clearly.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/332011",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "19",
"answer_count": 5,
"answer_id": 0
} |
How to find non-cyclic subgroups of a group? I am trying to find all of the subgroups of a given group. To do this, I follow the following steps:
*
*Look at the order of the group. For example, if it is $15$, the subgroups can only be of order $1,3,5,15$.
*Then find the cyclic groups.
*Then find the non cyclic groups.
But I do not know how to find the non-cyclic subgroups. For example, let us consider the dihedral group $D_4$; then the subgroups have order $1,2,4$ or $8$. I can find all the cyclic subgroups. Then I saw that there are non-cyclic subgroups of order $4$. How can I find them? I appreciate any help. Thanks.
| In the $n=15=3\cdot 5$ case, recall that every group of order $p$ prime is cyclic. This leaves you with the subgroups of order $15$. How many are there?
Of course, this is not as easy in general. For general finite groups, the classification is a piece of work. Finite Abelian groups are easier, as they fall in the classification of finitely-generated Abelian groups.
Now, $D_4$ is not that bad. The only nontrivial thing is to find all the subgroups of order $4$. Cyclic ones correspond to order $4$ elements in $D_4$. Noncyclic ones are of the form $\{\pm 1,\pm z\}$ where $z$ is an order $2$ element in $D_4$. Since $D_4$ has eight elements, it is fairly easy to determine all these order $2$ and $4$ elements.
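If one wants to see all of this concretely, here is a purely illustrative brute force (not part of the argument), realizing $D_4$ as the eight symmetries of a square acting on its vertices and testing every subset containing the identity for closure:

```python
from itertools import combinations

# D_4 as permutations of the square's vertices 0,1,2,3 (tuples give images of 0,1,2,3)
r = (1, 2, 3, 0)            # rotation by 90 degrees
s = (1, 0, 3, 2)            # a reflection
def compose(p, q):          # (p o q)(i) = p(q(i))
    return tuple(p[q[i]] for i in range(4))

# generate the whole group from r and s
group = {(0, 1, 2, 3)}
frontier = [r, s]
while frontier:
    g = frontier.pop()
    if g not in group:
        group.add(g)
        frontier.extend(compose(g, h) for h in (r, s))
group = sorted(group)
assert len(group) == 8

identity = (0, 1, 2, 3)
subgroups = []
for size in (1, 2, 4, 8):
    for subset in combinations(group, size):
        if identity in subset and all(compose(a, b) in subset for a in subset for b in subset):
            subgroups.append(subset)
print(len(subgroups))       # D_4 has 10 subgroups in total
```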
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/332060",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11",
"answer_count": 6,
"answer_id": 0
} |
Looking for help with a proof that n-th derivative of $e^\frac{-1}{x^2} = 0$ for $x=0$. Given the function
$$
f(x) = \left\{\begin{array}{cc}
e^{- \frac{1}{x^2}} & x \neq 0
\\
0 & x = 0
\end{array}\right.
$$
show that $\forall_{n\in \Bbb N} f^{(n)}(0) = 0$.
So I have to show that nth derivative is always equal to zero $0$. Now I guess that it is about finding some dependencies between the previous and next differential but I have yet to notice one. Could you be so kind to help me with that?
Thanks in advance!
| What about a direct approach?:
$$f'(0):=\lim_{x\to 0}\frac{e^{-\frac{1}{x^2}}}{x}=\lim_{x\to 0}\frac{\frac{1}{x}}{e^{\frac{1}{x^2}}}\stackrel{\text{l'Hosp.}}=0$$
$$f''(0):=\lim_{x\to 0}\frac{\frac{2}{x^3}e^{-\frac{1}{x^2}}}{x}=\lim_{x\to 0}\frac{\frac{2}{x^4}}{e^\frac{1}{x^2}}\stackrel{\text{l'Hosp.}\times 2}=0$$
… and so on: the general statement follows by induction.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/332142",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 2,
"answer_id": 1
} |
Find all the symmetries of the $ℤ\subset ℝ$. Find all the symmetries of the $ℤ\subset ℝ$.
I'm not sure what is meant with this.
My frist thought was that every bijection $ℤ→ℤ$ is a symmetry of $ℤ$.
My second thought was that if I look at $ℤ$ as point on the real line, then many bijections would screw up the distance between points. Then I would say that the set of symmetries contains all the translation: $x↦x+a$ and the reflections in a point $a∈ℤ$, which gives, $x↦a-(x-a)$.
| Both of your thoughts are correct, but in different contexts. If we regard $\mathbb{Z}$ as just a set with no extra structure, then the symmetries of that set are just the bijections (in this case, we are thinking of $\mathbb{Z}$ as a bag of points). If, on the other hand, we add the extra structure of the distances, then the symmetries are more restricted: they are exactly those maps that preserve distance.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/332201",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Does π depends on the norm? If we take the definition of π in the form:
π is the ratio of a circle's circumference to its diameter.
There implicitly assumed that the norm is Euclidian:
\begin{equation}
\|\boldsymbol{x}\|_{2} := \sqrt{x_1^2 + \cdots + x_n^2}
\end{equation}
And if we take the Chebyshev norm:
\begin{equation}
\|x\|_\infty=\max\{ |x_1|, \dots, |x_n| \}
\end{equation}
The circle would transform into this:
And the π would obviously change it value into $4$.
Does this lead to any changes? Maybe on other definitions of π or anything?
| Under the Euclidean metric there are a number of constants whose values coincide; they are collectively denoted by the symbol $\pi$.
However, some of the coinciding values are independent of the metric, and some are coupled to the metric and the geometry under consideration.
In your example, would the calculation of areas remain the same? How would the value computed under the new metric need to be adjusted for the unit square? Would the $\pi$ arising from the calculation of area be the same as the one arising from the calculation of the perimeter?
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/332270",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Is this proof using the pumping lemma correct? I have this proof and it goes like this:
We have a language $L = \{w \in \{0,1\}^* \mid w = (00)^n1^m \text{ for } n > m \}$.
Then, the following proof is given:
There is a $p$ (pumping length) for $L$. Then we have a word $w = (00)^{p+1}1^p$ and $w$ element of $L$ and $|w| \le p$. $w$ can be written as $xyz$ with $y$ not empty and $|xy| \le p$.
This implies that $y=(00)^n$ for $0 < n \le p$.
Now for all $i \ge 1$ it means $xy^iz = (00)^{p+n(i-1)+1}1^p$ is an element of $L$.
Hence $L$ is regular.
I can clearly see that this proof is wrong (at least that's what I think). I can see that three things are wrong:
*
*The pumping lemma cannot be used to prove the regularity of a language!
*One thing bothers me: $y=(00)^n$ could never hold for $0 < n \le p$ because $n$ can be $p$ and therefore $y=(00)^p$ which means that the length of $y$ is $2p$! Condition no. 2 of the pumping lemma won't hold because $|y| = 2p$ and the 2nd condition of the pumping lemma is $|xy| \le p$.
*Another thing is that you should also consider $i = 0$. Oh, and if $i = 0$ and $n \ge 1$ we would get that $n \le m$ and not $n > m$!
My question is actually if my assumptions are correct because it didn't take me much time to figure this out. There is a chance that I missed something. Can someone clarify?
I'm not asking for a proof, so please don't give one. I just want to know if my assumptions are correct and if not what you would think is wrong?
| Yes, the proof is bogus, as Brian M. Scott's answer expertly explains.
It is easier to prove that the reverse of your language isn't regular, or (by slightly adjusting the proof of the lemma to place the pumped string near the end, not the start) that the language itself isn't regular.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/332340",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 1
} |
Is the Picard Group countable? Is the Picard group of a (smooth, projective) variety always countable?
This seems likely but I have no idea if it's true.
If so, is the Picard group necessarily finitely generated?
| No. Even for curves there is an entire variety which parametrizes $\operatorname{Pic}^0(X)$. It is called the Jacobian variety. The Jacobian is $g$-dimensional (where $g$ is the genus of the curve), so in particular if $g > 0$ and you are working over an uncountable field, the Picard group will be uncountable.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/332387",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 2,
"answer_id": 0
} |
Set Theory and surjective function exercise Let $E$ a set and $f:E\rightarrow P(E)$ any function. If $$A=\{a\in E:a\notin f(a)\}$$ Prove that $A$ has no preimage under $f$.
| Suppose $\exists a\in E$, $f(a)=A$. Then, is $a\in f(a)$?
If $a\in f(a)$, that means $a\notin A$, contradicting $a\in f(a)=A$.
If $a\notin f(a)$, that means $a\in A=f(a)$, again a contradiction.
Thus, such an $a$ does not exist, i.e. $A$ has no preimage.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/332453",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Understanding a Theorem regarding Order of elements in a cyclic group This is part of practice midterm that I have been given (our prof doesn't post any solutions to it) I'd like to know whats right before I write the midterm on Monday this was actually a 4 part question, I'm posting just 1 piece as a question in its own cause it was much too long.
Let $G$ be an abelian group.
Suppose that $a$ is in $G$ and has order $m$ (such that $m$ is finite) and that the positive integer $k$ divides $m$.
(ii) Let $a$ be in $G$ and $l$ be a positive integer. State the theorem which says $\langle a^l\rangle=\langle a^k\rangle$ for some $k$ which divides $m$.
I have a theorem in my textbook that says as follows.
Let G be a finite cyclic group of order n with $a \in G$ as a generator for any integer m, the subgroup generated by $a^{m}$ is the same as the subgroup generated by $ a^{d}$ where $d=(m,n)$.
if this is the correct theorem can someone please show me why? thanks
| For the theorem in your textbook, note that $(a^d)^{m/d}=a^m$ and $(a^m)^{b}=a^{bm}=a^{d-cn}=a^d$, where $b,c$ are integers with $bm+cn=d$ (they can be found using the Euclidean algorithm) and $a^n=e$.
This means they can generate each other. Therefore, the groups they generate are the same.
You can use this to prove the question.
The question says $\forall l>0, \exists k\mid m,\langle a^l\rangle=\langle a^k\rangle$. Using the theorem (with the order $m$ of $a$ in the role of $n$), we know that $\langle a^l\rangle=\langle a^d\rangle$ where $d=\gcd(l, m)$. Since $d$ divides $m$, pick $d$ as your $k$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/332510",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 0
} |
Can SAT instances be solved using cellular automata? I'm a high school student, and I have to write a 4000-word research paper on mathematics (as part of the IB Diploma Programme). Among my potential topics were cellular automata and the Boolean satisfiability problem, but then I thought that maybe there was a connection between the two. Variables in Boolean expressions can be True or False; cells in cellular automata can be "On" or "Off". Also, the state of some variables in Boolean expressions can depend on that of other variables (e.g. the output of a Boolean function), while the state of cells in cellular automata depend on that of its neighbors.
Would it be possible to use cellular automata to solve a satisfiablity instance? If so, how, and where can I find helpful/relevant information?
Thanks in advance!
| If you could find an efficient way to solve SAT, you'd become very rich and famous. That's not likely to happen when you're still in high school. What you might be able to do, though, is get your cellular automaton to go through all possible values of the variables, and check the value of the Boolean expression for each one.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/332583",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 2,
"answer_id": 0
} |
Let $S$ be a linear operator on $W=P_3(\mathbb R)$ defined by $S(p(x)) = p(x) - \frac {dp(x)} {dx}$ Let $S$ be a linear operator on $W = P_3(\Bbb R)$ defined by $S(p(x)) = p(x) - \frac {dp(x)} {dx}$.
(a) Find nullity (S) and rank (S)
(b) Is S an isophormism? If so, write down a formula for $S^{-1}$.
Please help me correct my working:
(a) A general element of $W = P_3(\Bbb R)$ is $p(x)=ax^3+bx^2+cx+d$.
$p(x) - \frac {dp(x)} {dx} = ax^3+bx^2+cx+d-3ax^2-2bx-c = ax^3+ (b-3a)x^2+ (c-2b)x + (d-c)$.
Then equating S(p(x))=0, a=0, b=0, c=0, d=0.
ker(S)=0.
Therefore, nullity(S)=0, rank(S)=4 since there are 4 column in the matrix.
(b) For any $h + ex + fx^2 + gx^3$, set $S(p(x)) = h + ex + fx^2 + gx^3$.
Through Gaussian elimination, $a = g$, $b = f+3g$, $c = e+2f+6g$, $d = h+e+2f+6g$. S is surjective and, from (a), S is injective.
Thus, S is an isomorphism.
But i got stuck here, how to find $S^{-1}$?
| (a) Looks fine, but instead of saying "since there are 4 column in the matrix" (what matrix?), I would say "since $\dim P_3(\mathbb{R})=4$".
(b) You have already found $S^{-1}$, haven't you? According to your calculation, $S^{-1}(h+ex+fx^2+gx^3)=gx^3 + (f+3g)x^2 + (e+2f+6g)x + (h+e+2f+6g)$.
BTW, you can show that $S$ is an isomorphism without finding $S^{-1}$: the result in (a) that the nullity of $S$ is zero already implies that $S$ is injective. As $S$ is linear and $P_3(\mathbb{R})$ is finite dimensional, it follows that $S$ is an isomorphism. But surely the Gaussian elimination part is still useful because you have to find $S^{-1}$.
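If you want to double-check the formula, here is a quick symbolic verification (optional; it assumes SymPy) that applying $S$ to it gives back $h+ex+fx^2+gx^3$:

```python
import sympy as sp

x, e, f, g, h = sp.symbols('x e f g h')

# candidate inverse from the Gaussian elimination above
p = g*x**3 + (f + 3*g)*x**2 + (e + 2*f + 6*g)*x + (h + e + 2*f + 6*g)

# S(p) = p - p' should reproduce the original polynomial h + e x + f x^2 + g x^3
print(sp.expand(p - sp.diff(p, x) - (h + e*x + f*x**2 + g*x**3)))   # expected: 0
```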
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/332641",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
defining equation of general projective line Let $\mathbb{P}$ be a projective $n$-space.
For $p=[a_0,\cdots,a_n], q=[b_0,\cdots,b_n]$ I know that the line passing through $p$ and $q$ is defined by the set $\{ [xa_0+yb_0,\cdots, xa_n+yb_n] | [x,y] \in \mathbb{P}^1\}$
I wonder what the defining equations of this projective line are. Can we formulate the defining equations of the line passing through two points?
| I am not sure if this is what you want. If $n=2$, you can think of the projective points $p,q$ as two vectors in the corresponding vector space. Then the projective line through these two points corresponds to the plane spanned by these two vectors. By taking the cross product of these two vectors, you then find the normal vector of that plane, and hence you can formulate the equation. If $n>2$, the normal vector is not unique, so I am not sure how you can formulate it.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/332707",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
} |
If $C\subseteq B \subseteq A$ then $A\backslash C = (A\backslash B)\cup (B\backslash C)$ Is it true that if $C\subseteq B \subseteq A$ then $A\backslash C = (A\backslash B)\cup (B\backslash C)$? By drawing Venn diagrams this clearly seems so, but I can't seem to prove it.
Similarly, is it true that if $C\subseteq A$ and $C\subseteq B$ then $A\backslash C \subseteq(A\Delta B)\cup(B\backslash C)$?
If they are, what is the proof? Otherwise, what is a counterexample?
| Let $x\in A-C$. Then $x\in A$ and $x\not\in C$. If $x\in B$, then $x\in B$ and $x\not\in C$, so $x\in B-C$, while if $x\not\in B$, then $x\in A$ and $x\not\in B$, so $x\in A-B$. This proves that $A-C\subset (A-B)\cup (B-C)$.
Now suppose that $C\subset B\subset A$. Let $y\in (A-B)\cup (B-C)$. If $y\in A-B$, then $y\in A$ and $y\not\in B$, so $y\in A$ and $y\not\in C$, so $y\in A-C$, while if $y\in B-C$, then $y\in B$ and $y\not\in C$, so $y\in A$ and $y\not\in C$, so $y\in A-C$. This proves that $(A-B)\cup (B-C)\subset A-C$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/332754",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 4,
"answer_id": 3
} |
Find lim sup $x_n$ Let $x_n = n(\sqrt{n^2+1} - n)\sin\dfrac{n\pi}8$ , $n\in\Bbb{N}$
Find $\limsup x_n$.
Hint: lim sup $x_n = \sup C(x_n)$.
How to make it into a fraction to find the cluster point of $x_n$?
| Expand: $n · \left( \sqrt{n²+1} - n \right) = n · \tfrac{1}{\sqrt{n²+1} + n}$ for all $n ∈ ℕ$.
Since $\tfrac{\sqrt{n²+1} + n}{n} = \sqrt{1+\tfrac{1}{n²}} + 1 \overset{n → ∞}{\longrightarrow} 2$, you have $n · \left( \sqrt{n²+1} - n \right) \overset{n → ∞}{\longrightarrow} \tfrac{1}{2}$.
Now $x_n = n · \left( \sqrt{n²+1} - n \right) · \sin \left(\tfrac{n · π}{8}\right) ≤ n · \left( \sqrt{n²+1} - n \right)$ for all $n ∈ ℕ$, so:
\begin{align*}
\limsup_{n→∞} x_n &= \limsup_{n→∞} \left( n · \left( \sqrt{n²+1} - n \right) · \sin \left(\tfrac{n · π}{8}\right) \right)\\ &≤ \limsup_{n→∞} \left( n · \left( \sqrt{n²+1} - n \right) \right)= \tfrac{1}{2}
\end{align*}
But $\sin \left(\tfrac{(16n+4)· π}{8}\right) = \sin \left(n·2π + \tfrac{π}{2}\right) = \sin \left(\tfrac{π}{2}\right)= 1$ for all $n ∈ ℕ$.
This means $x_{16n+4} = (16n+4) · \left( \sqrt{(16n+4)²+1} - (16n+4) \right) · \sin \left(\tfrac{(16n+4)· π}{8}\right) \overset{n → ∞}{\longrightarrow} \tfrac{1}{2}$.
Therefore $\tfrac{1}{2}$ is a limit point of the sequence $(x_n)_{n ∈ ℕ}$ as well as an upper bound of the limit points of that sequence.
So it’s the upper limit of that sequence.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/332803",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
An increasing probability density function? Could anyone come up with a probability density function which is:
*
*supported on [1,∞) (or [0,∞))
*increasing
*discrete
| I was having the same question, but in a bounded setting, so the distribution would be increasing with support $[a,b]$. It grows like a slow exponential; it is actually the distribution of the following:
Take $N$ repetitions of $4$ uniformly generated values between $A$ and $B$. Make the histogram of the max in each repetition. You get something increasing that is not linear; it looks like a slowly growing, exponential-like curve.
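A tiny simulation of this construction (a sketch assuming NumPy; the parameter values are arbitrary) shows the increasing histogram:

```python
import numpy as np

rng = np.random.default_rng(0)
N, k = 100_000, 4          # repetitions, samples per repetition
A, B = 0.0, 1.0            # support

samples = rng.uniform(A, B, size=(N, k)).max(axis=1)    # max of k uniforms
hist, edges = np.histogram(samples, bins=10, range=(A, B), density=True)
print(np.round(hist, 2))   # bin heights increase towards B, roughly like k*x**(k-1)
```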
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/332877",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
} |
Calculating determinant of a block diagonal matrix Given an $m \times m$ square matrix $M$:
$$
M = \begin{bmatrix}
A & 0 \\
0 & B
\end{bmatrix}
$$
$A$ is an $a \times a$ and $B$ is a $b \times b$ square matrix; and of course $a+b=m$. All the terms of A and B are known.
Is there a way of calculating the determinant of $M$ from the determinants of (or any other useful data from) the sub-matrices $A$ and $B$?
| Hint: It's easy to prove that
$$\det M=\det A\det B$$
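A quick numerical illustration of the hint (a minimal sketch assuming NumPy, with random blocks):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((2, 2))
B = rng.standard_normal((3, 3))

M = np.zeros((5, 5))
M[:2, :2] = A
M[2:, 2:] = B

print(np.linalg.det(M), np.linalg.det(A) * np.linalg.det(B))   # the two values should agree
```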
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/332910",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Intersection of irreducible sets in $\mathbb A_{\mathbb C}^3$ is not irreducible I am looking for a counterexample in order to answer to the following:
Is the intersection of two closed irreducible sets in $\mathbb
A_{\mathbb C}^3$ still irreducible?
The topology on $\mathbb A_{\mathbb C}^3$ is clearly the Zariski one; by irreducible set, I mean a set which cannot be written as a union of two proper closed subsets (equivalently, every open subset is dense).
I think the answer to the question is "No", but I do not manage to find a counterexample. I think I would be happy if I found two prime ideals (in $\mathbb C[x,y,z]$) s.t. their sum is not prime. Am I right? Is there an easier way?
Thanks.
| Choose any two distinct irreducible plane curves that meet in more than one point; they intersect in a finite number of points, and a finite set with more than one point is not irreducible.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/332973",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 1,
"answer_id": 0
} |
Find the general solution of $ y'''- y'' - 9y' +9y = 0 $ Find the general solution of $ y'''- y'' - 9y' +9y = 0 $
The answer is
$y=c_{1}e^{-3x}+c_{2}e^{3x}+c_{3}e^{x}$
How do I approach this problem?
| Try to substitute $e^{\lambda x}$ in the equation and solve the algebraic equation in $\lambda$ as is usually done for second order homogeneous ODEs.
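Concretely, substituting $e^{\lambda x}$ gives the characteristic equation $\lambda^3-\lambda^2-9\lambda+9=0$, which factors as $(\lambda-1)(\lambda-3)(\lambda+3)=0$ and so reproduces the stated general solution. An optional SymPy check:

```python
import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')

ode = sp.Eq(y(x).diff(x, 3) - y(x).diff(x, 2) - 9*y(x).diff(x) + 9*y(x), 0)
print(sp.dsolve(ode, y(x)))   # C1*exp(-3*x) + C2*exp(x) + C3*exp(3*x), up to constant labels
```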
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/333020",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 2
} |
convergence and limits How can I rewrite $\frac{1}{1+nx}$ and prove its absolute convergence as $n \rightarrow \infty$? Given $\epsilon > 0$, should I define $f_n(x)$ and $f(x)$?
Any help is hugely appreciated. Thank you
| It seems you're being given $$f_n(x)=\frac{1}{1+nx}$$ and asked to find its (pointwise) limit as $n \to \infty$. Note that if $x>0$, $\lim f_n(x)=0$. If $x=0$, $f_n(x)=1$ for each $n$, so $\lim f_n(x)=1$. There is a little problem when $x<0$, namely, when $x=-\frac 1 n $, so I guess we just avoid $x<0$. Thus, you can see for $x\geq 0$, your function is $$f(x)=\begin{cases}1 & \text{if } x=0\\0 & \text{if } x>0\end{cases}$$
In particular, convergence is not uniform over $[0,\infty)$ (since the limit function is not continuous), nor in $(0,\infty)$ because $$\sup_{(0,\infty)}|f_n-0| =1$$
for any $n$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/333108",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Find the angle between the main diagonal of a cube and a skew diagonal of a face of the cube I was told it was $90$ degrees, but then others say it is about $35.26$ degrees. Now I am unsure which one it is.
| If we assume the cube has unit side length and lies in the first octant with faces parallel to the coordinate planes and one vertex at the origin, then the the vector $(1,1,0)$ describes a diagonal of a face, and the vector $(1,1,1)$ describes the skew diagonal.
The angle between two vectors $u$ and $v$ is given by:
$$\cos(\theta)=\frac{u\cdot v}{|u||v|}$$
In our case, we have
$$\cos(\theta)=\frac{2}{\sqrt{6}}\quad\Longrightarrow\quad\theta\approx 35.26$$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/333173",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 4,
"answer_id": 2
} |
Why does the power rule work? If $$f(x)=x^u$$ then the derivative function will always be $$f'(x)=u*x^{u-1}$$
I've been trying to figure out why that makes sense and I can't quite get there.
I know it can be proven with limits, but I'm looking for something more basic, something I can picture in my head.
The derivative should be the slope of the function.
If $$f(x)=x^3=x^2*x$$ then the slope should be $x^2$. But it isn't. The power rule says it's $3x^2$.
I understand that it has to do with having variables where in a more simple equation there would be a constant. I'm trying to understand how that exactly translates into the power rule.
| This may be too advanced for you right now,
but knowing about the derivative of the log function
can be very helpful.
The basic idea is that
$(\ln(x))' = 1/x$,
where $\ln$ is the natural log.
Applying the chain rule,
$(\ln(f(x)))' = f'(x)/f(x)$.
For this case,
set $f(x) = x^n$.
Then $\ln(f(x)) = \ln(x^n) = n \ln(x)$.
Taking derivatives on both sides,
$(\ln(x^n))' = f'(x)/f(x) = f'(x)/x^n$
and $(\ln(x^n))' = (n \ln(x))' = n/x$,
so
$f'(x)/x^n = n/x$
or $f'(x) = n x^{n-1}$.
More generally,
if $f(x) = \prod a_i(x)/\prod b_i(x)$,
then
$\ln(f(x)) = \sum \ln(a_i(x))-\sum \ln(b_i(x))$
so
$\begin{align}
f'(x)/f(x) &= (\ln(f(x)))'\\
&= \sum (\ln(a_i(x)))'-\sum (\ln(b_i(x)))'\\
&=\sum (a_i(x))'/a_i(x)-\sum (b_i(x))'/b_i(x)\\
\end{align}
$,
so
$f'(x) = f(x)\left(\sum (a_i(x))'/a_i(x)-\sum (b_i(x))'/b_i(x)\right)
$.
Note that this technique,
called logarithmic differentiation,
generalizes both the
product and quotient rules for derivatives.
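For a concrete check of the general formula (an optional sketch assuming SymPy, with arbitrarily chosen factors):

```python
import sympy as sp

x = sp.symbols('x', positive=True)
a1, a2, b1 = x**3 + 1, sp.sin(x) + 2, x**2 + 2    # arbitrary factors
f = a1 * a2 / b1

# derivative via the logarithmic-differentiation formula ...
log_diff = f * (sp.diff(a1, x)/a1 + sp.diff(a2, x)/a2 - sp.diff(b1, x)/b1)

# ... agrees with the direct derivative
print(sp.simplify(log_diff - sp.diff(f, x)))   # expected: 0
```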
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/333213",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11",
"answer_count": 4,
"answer_id": 1
} |
Question about integration (related to uniform integrability) Consider a probability space $( \Omega, \Sigma, \mu) $ (we could also consider a general measure space). Suppose $f: \Omega -> \mathbb{R}$ is integrable. Does this mean that
$ \int |f| \chi(|f| >K) d\mu $ converges to 0 as K goes to infinity? N.B. $\chi$ is the characteristic/indicator function. I showed that if $f$ belongs to $L^2$ as well then we can use the Cauchy-Schwarz inequality and the Chebyshev inequality to show that this is indeed so. For the general case, I think that it is false, but I can't think of a counterxample. I tried $ \frac{1}{\sqrt{x}}$ on $[0,1]$ with the Lebesgue measure, but it didn't work. I can't think of another function that belongs to $L^1$ but not $L^2$! Could anyone please help with this by providing a counterexample or proof? Many thanks.
| Let $f \in L^1$, then we have $|f| \cdot \chi(|f|>k) \leq |f| \in L^1$ and $$|f| \cdot \chi(|f|>k) \downarrow |f| \cdot \chi(|f|=\infty)$$ (i.e. it's decreasing in $k$) since $\chi(|f|>n) \leq \chi(|f|>m)$ for all $m \leq n$. Thus, we obtain by applying dominated convergence theorem $$\lim_{k \to \infty} \int |f| \cdot \chi(|f|>k) \, d\mu = \int |f| \cdot \chi(|f|=\infty) \, d\mu = 0$$ since $\mu(|f|=\infty)=0$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/333266",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Formula for Product of Subgroups of $\mathbb Z$, Problem What is the product of $\mathbb{Z}_2$ and $\mathbb{Z}_5$ as subgroups of $\mathbb{Z}_6$?
Since $\mathbb{Z}_n$ is abelian, any subgroup should be normal. From my understanding of the subgroup product, this creates the following set: $\{ [0], [1], [2], [3], [4], [5] \}$ which has order 6 and is in fact $\mathbb{Z}_6$. However, the subgroup product formula
$$
|\mathbb{Z}_2\mathbb{Z}_5| = \frac{|\mathbb{Z}_2||\mathbb{Z}_5|}{|\mathbb{Z}_2 \cap \mathbb{Z}_5|} = \frac{2 \cdot 5}{2} = 5
$$
I feel like I'm doing something wrong in the subgroup product, in particular understanding what closure rules to follow when considering the individual product of elements.
| @Ittay Weiss made a complete illustration for you, but to note a good point about the subgroups of $\mathbb Z$, let us record:
If $m|n$ then $n\mathbb{Z}\leq m\mathbb{Z}$ (or $n\mathbb{Z}\lhd m\mathbb{Z}$).
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/333374",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
} |
Need to prove the sequence $a_n=1+\frac{1}{2^2}+\frac{1}{3^2}+\cdots+\frac{1}{n^2}$ converges I need to prove that the sequence $a_n=1+\frac{1}{2^2}+\frac{1}{3^2}+\cdots+\frac{1}{n^2}$ converges. I do not have to find the limit. I have tried to prove it by proving that the sequence is monotone and bounded, but I am having some trouble:
Monotonic:
The sequence seems to be monotone and increasing. This can be proved by induction: Claim that $a_n\leq a_{n+1}$
$$a_1=1\leq 1+\frac{1}{2^2}=a_2$$
Need to show that $a_{n+1}\leq a_{n+2}$
$$a_{n+1}=1+\frac{1}{2^2}+\frac{1}{3^2}+\cdots+\frac{1}{n^2}+\frac{1}{(n+1)^2}\leq 1+\frac{1}{2^2}+\frac{1}{3^2}+\cdots+\frac{1}{n^2}+\frac{1}{(n+1)^2}+\frac{1}{(n+2)^2}=a_{n+2}$$
Thus the sequence is monotone and increasing.
Boundedness:
Since the sequence is increasing it is bounded below by $a_1=1$.
Upper bound is where I am having trouble. All the examples I have dealt with in class have to do with decreasing functions, but I don't know what my thinking process should be to find an upper bound.
Can anyone enlighten me as to how I should approach this, and can anyone confirm my work thus far? Also, although I prove this using monotonicity and boundedness, could I have approached this by showing the sequence was a Cauchy sequence?
Thanks so much in advance!
| Notice that $ 2k^2 \geq k(k+1) \implies \frac{1}{k^2} \leq \frac{2}{k(k+1)}$.
$$ \sum_{k=1}^{\infty} \frac{2}{k(k+1)} = \frac{2}{1 \times 2} + \frac{2}{2 \times 3} + \frac{2}{3 \times 4} + \ldots $$
$$ \sum_{k=1}^{\infty} \frac{2}{k(k+1)} = 2\Big(\, \Big(1 - \frac{1}{2}\Big) + \Big(\frac{1}{2} - \frac{1}{3} \Big) + \Big(\frac{1}{3} - \frac{1}{4} \Big) + \ldots \Big)$$
$$ \sum_{k=1}^{\infty} \frac{2}{k(k+1)} = 2 (1) = 2 $$.
Therefore $ \sum_{k=1}^{\infty} \frac{1}{k^2} \leq 2$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/333417",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "28",
"answer_count": 8,
"answer_id": 5
} |
Finding the Expected Value and Variance of Random Variables This is an introductory math finance course, and for some reason, my prof has decided to ask us this question. We haven't learnt this type of material yet, and our textbook is close to NO help. If anyone has a clue on how to solve this problem, PLEASE help me! :)
Assume $X_1$, $X_2$, $X_3$ are random variables with the following quantitative characteristics:
$E(X_1) = 2$, $E(X_2) = -1$, $E(X_3) = 4$; $Var(X_1) = 4$, $Var(X_2) = 6$, $Var(X_3) = 8$;
$COV(X_1,X_2) = 1$, $COV(X_1,X_3) = -1$, $COV(X_2,X_3) = 0$
Find $E(3X_1 + 4X_2 - 6X_3)$ and $Var(3X_1 + 4X_2 - 6X_3)$.
| Well, just to expand on mne__povezlo's answer, I guess a more complete (and useful, in your case) formula for variance would be:
$$\mathrm{Var}\left(\sum_{i=1}^{n}a_{i}X_{i}\right)=\sum^n_{i=1}a_{i}^2\mathrm{Var}X_{i}+2\underset{1\le{i}<j\le{n}}{\sum\sum}a_{i}a_{j}\mathrm{Cov}\left(X_i,X_{j}\right)$$
Now what's left is just to plug in your numbers into the formula.
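For instance, with the numbers given in the question (a small sketch assuming NumPy; the covariance matrix simply collects the variances and covariances):

```python
import numpy as np

a = np.array([3, 4, -6])                  # coefficients of X1, X2, X3
mu = np.array([2, -1, 4])                 # E(X1), E(X2), E(X3)
Sigma = np.array([[4, 1, -1],             # covariance matrix: Var on the diagonal,
                  [1, 6,  0],             # Cov(Xi, Xj) off the diagonal
                  [-1, 0, 8]])

print(a @ mu)             # E(3X1 + 4X2 - 6X3)   -> -22
print(a @ Sigma @ a)      # Var(3X1 + 4X2 - 6X3) -> 480
```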
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/333478",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Rings and unity The set $R = \{[0], [2], [4], [6], [8]\}$ is a subring of $\mathbb Z_{10}$. (You do not need to
prove this.) Prove that it has a unity and explain why this is surprising.
Also, prove that it is a field and explain why that is also surprising.
This sis a HW Question.
The unity is not [0] is it ?? Could I get a hint ???
| Notice that $[6]\times[a]=[a]$ for $a=0,2,4,6,$ and $8$. See if you can do the rest!
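Since $R$ has only five elements, both claims can also be checked by brute force (a small optional sketch):

```python
R = [0, 2, 4, 6, 8]   # representatives of the classes in Z_10

# [6] acts as unity: 6*a = a (mod 10) for every a in R
print(all((6 * a) % 10 == a for a in R))

# every nonzero element has a multiplicative inverse with respect to that unity
print(all(any((a * b) % 10 == 6 for b in R) for a in R if a != 0))
```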
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/333542",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Evaluating $\int \frac{1}{{x^4+1}} dx$ I am trying to evaluate the integral
$$\int \frac{1}{1+x^4} \mathrm dx.$$
The integrand $\frac{1}{1+x^4}$ is a rational function (quotient of two polynomials), so I could solve the integral if I can find the partial fraction of $\frac{1}{1+x^4}$. But I failed to factorize $1+x^4$.
Any other methods are also welcome.
| Without using fractional decomposition:
$$\begin{align}\int\dfrac{1}{x^4+1}~dx&=\dfrac{1}{2}\int\dfrac{2}{x^4+1}~dx
\\&=\dfrac{1}{2}\int\dfrac{(x^2+1)-(x^2-1)}{x^4+1}~dx
\\&=\dfrac{1}{2}\int\dfrac{x^2+1}{x^4+1}~dx-\dfrac{1}{2}\int\dfrac{x^2-1}{x^4+1}~dx
\\&=\dfrac{1}{2}\int\dfrac{1+\dfrac{1}{x^2}}{x^2+\dfrac{1}{x^2}}~dx-\dfrac{1}{2}\int\dfrac{1-\dfrac{1}{x^2}}{x^2+\dfrac{1}{x^2}}~dx
\\&=\dfrac{1}{2}\left(\int\dfrac{1+\dfrac{1}{x^2}}{\left(x-\dfrac{1}{x}\right)^2+2}~dx-\int\dfrac{1-\dfrac{1}{x^2}}{\left(x+\dfrac{1}{x}\right)^2-2}~dx\right)
\\&=\dfrac{1}{2}\left(\int\dfrac{d\left(x-\dfrac{1}{x}\right)}{\left(x-\dfrac{1}{x}\right)^2+2}-\int\dfrac{d\left(x+\dfrac{1}{x}\right)}{\left(x+\dfrac{1}{x}\right)^2-2}\right)\end{align}$$
So, finally, the solution is $$\int\dfrac{1}{x^4+1}~dx=\dfrac{1}{4\sqrt2}\left(2\arctan\left(\dfrac{x^2-1}{\sqrt2x}\right)+\log\left(\dfrac{x^2+\sqrt2x+1}{x^2-\sqrt2x+1}\right)\right)+C$$
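As a sanity check on the final formula (optional; this sketch assumes mpmath and simply compares the numerical derivative of the antiderivative with the integrand at one sample point):

```python
import mpmath as mp

def F(x):
    return (2*mp.atan((x**2 - 1)/(mp.sqrt(2)*x))
            + mp.log((x**2 + mp.sqrt(2)*x + 1)/(x**2 - mp.sqrt(2)*x + 1))) / (4*mp.sqrt(2))

x0 = mp.mpf('1.3')
print(mp.diff(F, x0), 1/(1 + x0**4))   # the two values should agree
```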
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/333611",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 4,
"answer_id": 1
} |
Find $\int_0^\infty \frac{\ln ^2z} {1+z^2}{d}z$ How to find the value of the integral $$\int_{0}^{\infty} \frac{\ln^2z}{1+z^2}{d}z$$ without using contour integration - using usual special functions, e.g. zeta/gamma/beta/etc.
Thank you.
| Here's another way to go:
$$\begin{eqnarray*}
\int_0^\infty dz\, \dfrac{\ln ^2z} {1+z^2}
&=& \frac{d^2}{ds^2} \left. \int_0^\infty dz\, \dfrac{z^s} {1+z^2} \right|_{s=0} \\
&=& \frac{d^2}{ds^2} \left. \frac{\pi}{2} \sec\frac{\pi s}{2} \right|_{s=0} \\
&=& \frac{\pi^3}{8}.
\end{eqnarray*}$$
The integral $\int_0^\infty dz\, z^s/(1+z^2)$ can be handled with the beta function.
See some of the answers here, for example.
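Numerically (an optional check assuming mpmath):

```python
import mpmath as mp

val = mp.quad(lambda z: mp.log(z)**2 / (1 + z**2), [0, mp.inf])
print(val, mp.pi**3 / 8)   # the two numbers should agree
```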
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/333672",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 5,
"answer_id": 0
} |
Modular homework problem Show that:
$$[6]_{21}X=[15]_{21}$$
I'm stuck on this problem and I have no clue how to solve it at all.
| Well, we know that $\gcd\ (6,21)=3$ which divides $15$. So there will be solutions:
$$
\begin{align}
6x &\equiv 15 \pmod {21} \\
2x &\equiv 5 \pmod 7
\end{align}
$$
Because $2\times 4\equiv 1 \pmod 7$, we thus have:
$$
\begin{align}
x &\equiv 4\times 5 \pmod 7\\
&\equiv 6 \pmod 7
\end{align}
$$
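Brute force over the residues confirms this (a trivial check in Python):

```python
print([x for x in range(21) if (6 * x) % 21 == 15])   # [6, 13, 20], i.e. x ≡ 6 (mod 7)
```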
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/333822",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
} |
Every invertible matrix can be written as the exponential of one other matrix I'm looking for a proof of this claim: "every invertible matrix can be written as the exponential of another matrix".
I'm not familiar yet with logarithms of matrices, so I wonder if a proof exists, without them. I'll be happy with any proof anyways. I hope someone can help.
| I assume you are talking about complex $n\times n$ matrices. This is not true in general within real square matrices.
A simple proof goes by functional calculus. If $A$ is invertible, you can find a determination of the complex logarithm on some $\mathbb{C}\setminus e^{i\theta_0}[0,+\infty)$ which contains the spectrum of $A$. Then by holomorphic functional calculus, you can define $B:=\log A$ and it satisfies $e^B=A$.
Notes:
1) There is a formula that says $\det e^B=e^{\mbox{trace}\;B}$ (easy proof by Jordan normal form, or by density of diagonalizable matrices). Therefore the range of the exponential over $M_n(\mathbb{C})$ is exactly $GL_n(\mathbb{C})$, the group of invertible matrices.
2) For diagonalizable matrices $A$, it is very easy to find a log. Take $P$ invertible such that $A=PDP^{-1}$ with $D=\mbox{diag}\{\lambda_1,\ldots,\lambda_n\}$. If $A$ is invertible, then every $\lambda_j$ is nonzero so we can find $\mu_j$ such that $\lambda_j=e^{\mu_j}$. Then the matrix $B:=P\mbox{diag}\{\mu_1,\ldots,\mu_n\}P^{-1}$ satisfies $e^B=A$.
3) If $\|A-I_n\|<1$, we can define explicitly a log with the power series of $\log (1+z)$ by setting $\log A:=\log(I_n+(A-I_n))=\sum_{k\geq 1}(-1)^{k+1}(A-I_n)^k/k.$
4) For a real matrix $B$, the formula above shows that $\det e^B>0$. So the matrices with a nonpositive determinant don't have a log. The converse is not true in general. A sufficient condition is that $A$ has no negative eigenvalue. For a necesary and sufficient condition, one needs to consider the Jordan decomposition of $A$.
5) And precisely, the Jordan decomposition of $A$ is a concrete way to get a log. Indeed, for a block $A=\lambda I+N=\lambda(I+\lambda^{-1}N)$ with $\lambda\neq 0$ and $N$ nilpotent, take $\mu$ such that $\lambda=e^\mu$ and set $B:=\mu+\log(I+\lambda^{-1}N)=\mu+\sum_{k\geq 1}(-1)^{k+1}\lambda^{-k}N^k$ and note that this series has actually finitely many nonzero terms since $N$ is nilpotent. Do this on each block, and you get your log for the Jordan form of $A$. It only remains to go back to $A$ by similarity.
6) Finally, here are two examples using the above:
$$
\log\left( \matrix{5&1&0\\0&5&1\\0&0&5}\right)=\left(\matrix{\log 5&\frac{1}{5}&-\frac{1}{50}\\ 0&\log 5&\frac{1}{5}\\0&0&\log 5} \right)
$$
and
$$
\log\left(\matrix{-1&0\\0&1} \right)=\left(\matrix{i\pi&0\\0&0} \right)
$$
are two possible choices for the log of these matrices.
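Both examples are easy to confirm numerically (an optional sketch assuming NumPy and SciPy's `expm`):

```python
import numpy as np
from scipy.linalg import expm

# Jordan-block example: exp(log(5) I + N/5 - N^2/50) should give back the original matrix
L = np.array([[np.log(5), 1/5, -1/50],
              [0, np.log(5), 1/5],
              [0, 0, np.log(5)]])
print(np.round(expm(L), 10))         # [[5,1,0],[0,5,1],[0,0,5]]

# diag(-1, 1): a genuinely complex logarithm
B = np.array([[1j * np.pi, 0],
              [0, 0]])
print(np.round(expm(B).real, 10))    # diag(-1, 1)
```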
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/333902",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "13",
"answer_count": 3,
"answer_id": 1
} |
How to combine Bézier curves to a surface? My aim is to smooth the terrain in a video game. Therefore I contrived an algorithm that makes use of Bézier curves of different orders. But this algorithm is defined in a two dimensional space for now.
To shift it into the third dimension I need to somehow combine the Bézier curves. Given two curves, each in two dimensions, how can I combine them to build a surface?
I thought about something like a curve of a curve. Or maybe I could multiply them somehow. How can I combine them to achieve the desired behavior? Is there a common approach?
In the image you can see the input, on the left, and the output, on the right, of the algorithm that works in a two dimensional space. For the real application there is a three dimensional input.
The algorithm relies only on the adjacent tiles. Given these it will define the control points of the mentioned Bézier curve. Additionally it marks one side of the curve as inside the terrain and the other as outside.
| I would not use Bezier curves for this. Too much work to find the end-points and you end up with a big clumsy polynomial.
I would build a linear least squares problem minimizing the gradient (smoothing the slopes of hills).
First let's split each pixel into $6\times 6$ which will give the new smoothed resolution (just an example, you can pick any size you would like).
Now the optimization problem $${\bf v_o} = \min_{\bf v} \{\|{\bf D_x(v+d)}\|_F^2+\|{\bf D_y(v+d)}\|_F^2 + \epsilon\|{\bf v}\|_F^2\}$$
where $\bf d$ is the initial pixely blocky surface, and $\bf v$ is the vector of smoothing change you want to do.
Already after 2 iterations of a very fast iterative solver we can get results like this:
After clarification from OP, I realized it is more this problem we have ( but in 3D ).
Now the contours (level sets) to this smoothed binary function can be used to create an interpolative effect:
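A minimal NumPy sketch of this least-squares idea (illustrative only: dense matrices, an assumed square grid, and arbitrary parameter choices):

```python
import numpy as np

def smooth(blocky, factor=4, eps=1e-3):
    """Upsample a blocky (square) heightmap, then minimize gradient energy plus eps*||v||^2."""
    d = np.kron(blocky, np.ones((factor, factor))).ravel()
    n = blocky.shape[0] * factor               # side length of the upsampled grid

    def diff_matrix(m):                        # 1-D forward-difference matrix, (m-1) x m
        D = np.zeros((m - 1, m))
        D[np.arange(m - 1), np.arange(m - 1)] = -1
        D[np.arange(m - 1), np.arange(1, m)] = 1
        return D

    D1, I = diff_matrix(n), np.eye(n)
    Dx = np.kron(D1, I)                        # differences between rows
    Dy = np.kron(I, D1)                        # differences between columns
    A = Dx.T @ Dx + Dy.T @ Dy + eps * np.eye(n * n)
    b = -(Dx.T @ (Dx @ d) + Dy.T @ (Dy @ d))
    v = np.linalg.solve(A, b)                  # normal equations of the least-squares problem
    return (d + v).reshape(n, n)

blocky = np.random.default_rng(2).integers(0, 2, size=(6, 6)).astype(float)
terrain = smooth(blocky)                       # 24 x 24 smoothed heightmap
```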
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/333991",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Why this function is not uniformly continuous? Let a function $f: \mathbb R \rightarrow \mathbb R$ be such that $f(n)=n^2$ for $n \in \mathbb N$.
Why is $f$ not uniformly continuous?
Thanks
| Suppose it is. i.e. for every $\varepsilon>0$ there is some $\delta>0$ such that if $|x-y| \leq \delta$ then $|f(x)-f(y)| \leq \varepsilon$. Any interval $[n,n+1]$ can be broken into at most $\lceil \frac{1}{\delta} \rceil$ intervals with length$<\delta$ using some partition $x_0=n<x_1<\ldots<n+1=x_k$. Using the triangle inequality, $|f(n)-f(n+1)| \leq |f(n)-f(x_1)|+|f(x_1)-f(x_2)|+\ldots+|f(x_{k-1})-f(n+1)| \leq \lceil \frac{1}{\delta} \rceil \varepsilon$. But this can't hold for all $n$ because $|f(n)-f(n+1)|=1+2n$ grows larger than any bound.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/334087",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 1
} |
How to evaluate $\int^\infty_{-\infty} \frac{dx}{4x^2+4x+5}$? I need help with my calculus homework, guys. I can't find a way to integrate this; I tried to use partial fractions or u-substitutions, but it didn't work.
$$\int^\infty_{-\infty} \frac{dx}{4x^2+4x+5}$$
Thanks much for the help!
| *
*Manipulate the denominator to get $(2x+1)^2 + 4 = (2x+1)^2 + 2^2$.
*Let $u = 2x+1 \implies du = 2 dx \implies dx = \frac 12 du$,
*$\displaystyle \frac 12 \int_{-\infty}^\infty \dfrac{du}{u^2 + (2)^2} $
*use an appropriate trig substitution which you should recognize:
$$ \int\frac{du}{{u^2 + a^2}} = \frac{1}{a} \arctan \left(\frac{u}{a}\right)+C $$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/334225",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 4,
"answer_id": 2
} |
Bernstein Polynomials and Expected Value The first equation in this paper
http://www.emis.de/journals/BAMV/conten/vol10/jopalagzyl.pdf
is:
$$\displaystyle B_nf(x)=\sum_{i=0}^{n}\binom{n}{i}x^i(1-x)^{n-i}f\left(\frac{i}{n}\right)=\mathbb E f\left(\frac{S_{n,x}}{n}\right)$$
where $f$ is a Lipschitz continuous real function defined on $[0,1]$, and $S_{n,x}$ is a binomial random variable with parameters $n$ and $x$.
How is this equation proven?
| $$\mathbb E\big(g(S_{n,x})\big)=\sum_{i}\mathbb P\left(S_{n,x}=i\right)g(i)$$
where:
$$g(S_{n,x})=f\left(\frac{S_{n,x}}{n}\right)$$
Therefore we have:
$$\mathbb E\left(f\left(\frac{S_{n,x}}{n}\right)\right)=\sum_{i=0}^{n}\mathbb P(S_{n,x}=i)f\left(\frac{i}{n}\right)$$
But $$\mathbb P(S_{n,x}=i)=\binom{n}{i}x^i(1-x)^{n-i}$$
Therefore
$$\sum_{i=0}^{n}\binom{n}{i}x^i(1-x)^{n-i}f\left(\frac{i}{n}\right)=\mathbb E\left(f\left(\frac{S_{n,x}}{n}\right)\right)$$ which was the original statement.
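A quick Monte Carlo sanity check of this identity (a sketch; the sample size and the particular Lipschitz $f$ below are arbitrary choices of mine):

```python
import numpy as np
from math import comb

def bernstein(f, n, x):
    """Direct evaluation of B_n f(x) as the binomial-weighted sum above."""
    return sum(comb(n, i) * x**i * (1 - x)**(n - i) * f(i / n)
               for i in range(n + 1))

f = lambda t: abs(t - 0.3)                    # a Lipschitz function on [0, 1]
n, x = 50, 0.7
samples = np.random.binomial(n, x, size=200_000)   # draws of S_{n,x}
print(bernstein(f, n, x), f(samples / n).mean())   # the two values nearly agree
```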
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/334321",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Need to prove the sequence $a_n=(1+\frac{1}{n})^n$ converges by proving it is a Cauchy sequence
I am trying to prove that the sequence $a_n=(1+\frac{1}{n})^n$ converges by proving that it is a Cauchy sequence.
I don't get very far, see: for $\epsilon>0$ there must exist $N$ such that $|a_m-a_n|<\epsilon$, for $ m,n>N$
$$|a_m-a_n|=\bigg|\bigg(1+\frac{1}{m}\bigg)^m-\bigg(1+\frac{1}{n}\bigg)^n\bigg|\leq \bigg|\bigg(1+\frac{1}{m}\bigg)^m\bigg|+\bigg|\bigg(1+\frac{1}{n}\bigg)^n\bigg|\leq\bigg(1+\frac{1}{m}\bigg)^m+\bigg(1+\frac{1}{n}\bigg)^n\leq \quad?$$
I know I am supposed to keep going, but I just can't figure out the next step. Can anyone offer me a hint please? Or if there is another question that has been answered (I couldn't find any) I would gladly look at it.
Thanks so much!
| We have the following inequalities:
$\left(1+\dfrac{1}{n}\right)^n = 1 + 1 + \dfrac{1}{2!}\left(1-\dfrac{1}{n}\right)+\dfrac{1}{3!}\left(1-\dfrac{1}{n}\right)\left(1-\dfrac{2}{n}\right) + \ldots \leq 2 + \dfrac{1}{2} + \dfrac{1}{2^2} + \dots =3$
Similarly,
$
\begin{align*}
\left(1-\dfrac{1}{n^2}\right)^n &= 1 - {n \choose 1}\frac{1}{n^2} + {n \choose 2}\frac{1}{n^4} + \dots\\
&= 1 - \frac{1}{n} + \dfrac{1}{2!n^2}\left(1-\dfrac{1}{n}\right) - \dfrac{1}{3!n^3}\left(1-\dfrac{1}{n}\right)\left(1-\dfrac{2}{n}\right) + \ldots
\end{align*}
$
So,
$$
\left| \left(1-\dfrac{1}{n^2}\right)^{n} -\left(1-\frac{1}{n} \right)\right| \leq \dfrac{1}{2n^2} + \dfrac{1}{2^2n^2} + \dfrac{1}{2^3n^2} + \ldots = \dfrac{1}{n^2}.
$$
Now,
$
\begin{align*}
\left(1+\frac{1}{n+1}\right)^{n+1} - \left(1+\frac{1}{n}\right)^n &= \left(1+\frac{1}{n}-\frac{1}{n(n+1)}\right)^{n+1}-\left(1+\frac{1}{n}\right)^n \\
&=\left(1+\frac{1}{n}\right)^{n+1}\left\{ \left( 1- \frac{1}{(n+1)^2}\right)^{n+1} - \frac{n}{n+1}\right\}\\
&= \left(1+\frac{1}{n}\right)^{n+1}(1 - \frac{1}{n+1} + \text{O}(\frac{1}{n^2}) - \frac{n}{n+1})\\
& = \left(1+\frac{1}{n}\right)^{n+1}\text{O}(\frac{1}{n^2}) \\
&= \text{O}(\frac{1}{n^2}) \text{ (since $(1+1/n)^{n+1}$ is bounded) }.
\end{align*}
$
So, letting $a_k = (1+1/k)^k$ we have,
$|a_{k+1}-a_k| \leq C/k^2$ for some $C$ and hence,
$\sum_{ k \geq n } | a_{k+1} - a_k | \to 0$ as $n \to \infty$.
Since $|a_n - a_m| \leq \sum_{ k \geq \min\{m,n\}} |a_{k+1} - a_k|$, given $\epsilon > 0$ we may choose $N$ such that $\sum_{ k \geq N} |a_k - a_{k+1}| < \epsilon$, and then $|a_n - a_m| < \epsilon$ for all $n,m \geq N$, so the sequence is Cauchy.
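A small numerical illustration of the key estimate $|a_{k+1}-a_k|\le C/k^2$ (only a sketch, not part of the proof):

```python
import numpy as np

k = np.arange(1, 2000, dtype=float)
a = (1 + 1/k)**k
scaled = k[:-1]**2 * (a[1:] - a[:-1])          # k^2 (a_{k+1} - a_k)
print(scaled.min(), scaled.max())              # stays bounded, approaching e/2
```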
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/334382",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 2,
"answer_id": 0
} |
Does there exist a bijective mapping $f: \Delta\to\mathbb{C} $ Does there exist a bijective mapping $f: \Delta\to\mathbb{C} $?
Yet I have not found such example. Is it false (why?),please help me.
$ \Delta $ is the unit disk in $\mathbb {C}$.
| I would do this radius by radius. So in case we were talking about the open disk, all we’d need to do is get a nice map from $[0,1\rangle$ onto $[0,\infty\rangle$, like $x/(1-x)$. In case we were talking about the closed disk, you need a (discontinuous) map from $[0,1]$ onto $[0,1\rangle$, and follow it by the map you chose before. For this discontinuous map, one might send $1/n$ to $1/(n+1)$ for all $n\ge1$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/334444",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Expected value of a minimum
Under a group insurance policy, an insurer agrees to pay 100% of the medical bills incurred during the year by employees of a company, up to a max of $1$ million dollars. The total bills incurred, $X$, has pdf
$$
f_X(x) =
\begin{cases}
\frac{x(4-x)}{9}, & \text{for } 0 < x < 3\\
0& \text{else}
\end{cases}
$$
where $x$ is measured in millions. Calculate the total amount, in millions of dollars, the insurer would expect to pay under this policy.
So I was able to obtain part of the solution, which was
$$
E(\min(X,1)) = \int_0^1 x\cdot \frac{x(4-x)}{9} dx \tag1
$$
However, the solution has $(1)$, plus
$$
E(\min(X,1)) = \int_1^3 1\cdot \frac{x(4-x)}{9} dx \tag2
$$
What I don't understand is if the problem explicitly states that they agree to pay up to $1$ million, why would you even have to bother with $(2)$?
| For amounts greater than one million, they still pay out a million.
Let $x$ be the total bills in millions. For $x \in [0,1]$ the payout is $x$, for $x \in [1,\infty)$, the payout is $1$. Hence the payout as a function of $x$ is $p(x) = \min(x,1)$, and you wish to compute $Ep$.
\begin{eqnarray}
Ep &=& \int_0^\infty p(x) f_X(x) dx \\
&=& \int_0^1 x f_X(x) dx + \int_1^3 1 f_X(x)dx + \int_3^\infty 1 f_X(x)dx \\
&=& \frac{13}{108} + \frac{22}{27}+ 0 \\
&=& \frac{101}{108}
\end{eqnarray}
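As a sanity check (a sketch using scipy quadrature, not part of the original solution), integrating the payout $\min(x,1)$ against the density reproduces $\frac{101}{108}$:

```python
from scipy.integrate import quad

f = lambda x: x * (4 - x) / 9          # density on (0, 3)
payout = lambda x: min(x, 1.0)         # payout is capped at 1 (million)
value, _ = quad(lambda x: payout(x) * f(x), 0, 3, points=[1])
print(value, 101 / 108)                # both are approximately 0.9351851852
```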
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/334494",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Evaluate the Integral using Contour Integration (Theorem of Residues) $$
J(a,b)=\int_{0}^{\infty }\frac{\sin(b x)}{\sinh(a x)} dx
$$
This integral is difficult because contour integrals normally cannot be solved with a sin(x) term in the numerator because of singularity issues between the 1st quadrant to the 2nd quadrant in the upper half circle of the contour.
I've assumed:
$$
\sin(bx)=e^{ibz}
$$
since it effectively IS the sine term in the upper half circle.
I've also used the following substitutions:
$$
x=z ; z=re^{i\theta} ; z=\cos(\theta)+i\sin(\theta)
$$
From what I understand, the x range has to include all values somehow.
When I plug in the substitutions and the Euler forms of sin and sinh, the integral becomes:
$$
2\int_{-\infty }^{\infty }\frac{e^{ibx}}{e^{ax}-e^{ax}}dx
$$
Have I screwed up the subs? Did I reduce incorrectly? Not sure what to go from here. If anyone could help shed some light it would be very much appreciated.
| You can change the integration from $\int_{0}^{\infty}$ to $\frac{1}{2}\int_{-\infty}^{\infty}$, since the integrand is even; the $\cos$ part of $e^{ibx}$ will disappear, because it gives an odd integrand.
Rewrite the integrand as
$$
\frac{e^{ibx}}{e^{ax}-e^{-ax}}=\frac{\exp\{(a+ib)x\}}{\exp(2ax)-1}
$$
Take the contour $[-R,R]$, $[R,R+\frac{i\pi}{a}]$, $[R+\frac{i\pi}{a},-R+\frac{i\pi}{a}]$, $[-R+\frac{i\pi}{a}, -R]$. (We will take $R\rightarrow\infty$).
However the integrand has singularities at $x=0$ and $x=\frac{i\pi}{a}$, so detour those points by taking upper half circle of small radius $\epsilon$ at $x=0$, and lower half circle of small radius $\epsilon$ at $x=\frac{i\pi}{a}$. Then the contour does not have singularities inside. (We will take $\epsilon\rightarrow 0$).
Note that the integrals on the vertical sides and the small part of the horizontal lines around $0$ and $\frac{i\pi}{a}$ will vanish with $R\rightarrow\infty$ and $\epsilon\rightarrow 0$.
Denote the integral by $I=\int_{-\infty}^{\infty}\frac{e^{ibx}}{e^{ax}-e^{-ax}}dx$. Then by Residue Theorem, we have
$$
I-I\exp\{(a+ib)\frac{i\pi}{a}\}-\frac{1}{2}2\pi i \frac{1}{2a}-\frac{1}{2}2\pi i\frac{1}{2a}\exp\{(a+ib)\frac{i\pi}{a}\}=0
$$
We solve this for $I$, it now follows that
$$
J(a,b)=\int_{0}^{\infty }\frac{\sin(b x)}{\sinh(a x)} dx=-iI=\frac{\pi}{2a}\frac{1-\exp(-\frac{b}{a}\pi)}{1+\exp(-\frac{b}{a}\pi)}
$$
and this RHS is the integral that we wanted in the beginning.
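A numerical cross-check of the final formula (a sketch; the sample values of $a$ and $b$ are arbitrary):

```python
import numpy as np
from scipy.integrate import quad

a, b = 2.0, 3.0
# the singularity at 0 is removable: sin(bx)/sinh(ax) -> b/a
integrand = lambda x: b / a if x == 0 else np.sin(b*x) / np.sinh(a*x)
numeric, _ = quad(integrand, 0, np.inf)
closed = np.pi / (2*a) * (1 - np.exp(-b*np.pi/a)) / (1 + np.exp(-b*np.pi/a))
print(numeric, closed)   # both are approximately 0.77
```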
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/334567",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 1
} |
irreducibility of polynomials with integer coefficients Consider the polynomial
$$p(x)=x^9+18x^8+132x^7+501x^6+1011x^5+933x^4+269x^3+906x^2+2529x+1733$$
Is there a way to prove irreducubility of $p(x)$ in $\mathbb{Q}[x]$ different from asking to PARI/GP?
| This polynomial has the element $\alpha^2+\beta$ described in this question as a root. My answer to that question implies among other things that the minimal polynomial of that element is of degree 9, so this polynomial has to be irreducible.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/334635",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 1
} |
Solvability of $S_3\times S_3$ I know that the direct product of two solvable groups are solvable. The group $S_3$ is solvable, so $S_3\times S_3$ is solvable. But how am I going to establish the subnormal series of $S_3\times S_3$?or is there any simpler way to show its solvability?
Thanks.
| Perhaps this is one of those cases where you understand things better by looking at a more general setting.
Let $G, H$ be soluble groups, and let $G.H$ be any extension of $G$ by $H$. Then $G.H$ is soluble.
Start with a subnormal series with abelian factors that goes from $\{1\}$ to $G$. Then continue with a subnormal series with abelian factors of $H$, or to be precise, with the counterimages of the elements of such a series through the epimorphism $G.H \to H$.
In your case $G.H = G \times H$. So you simply start with the required subnormal series for $G$, and then from $G$ you continue with $G N$, with $N$ in the required subnormal series for $H$.
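Concretely, for $G = H = S_3$ (writing $A_3$ for the alternating subgroup) this recipe produces the subnormal series
$$\{1\}\ \trianglelefteq\ A_3\times\{1\}\ \trianglelefteq\ S_3\times\{1\}\ \trianglelefteq\ S_3\times A_3\ \trianglelefteq\ S_3\times S_3,$$
whose successive quotients $C_3,\ C_2,\ C_3,\ C_2$ are all abelian, which exhibits the solvability of $S_3\times S_3$ directly.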
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/334684",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 1
} |
Value of $\sum\limits^{\infty}_{n=1}\frac{\ln n}{n^{1/2}\cdot 2^n}$ Here is a series:
$$\displaystyle \sum^{\infty}_{n=1}\dfrac{\ln n}{n^{\frac12}\cdot 2^n}$$
It is convergent by d'Alembert's law. Can we find the sum of this series ?
| Consider
$$f(s):=\sum_{n=1}^\infty \frac {\left(\frac 12\right)^n}{n^s}=\operatorname{Li}_s\left(\frac 12\right)$$
with $\operatorname{Li}$ the polylogarithm then (since $\,n^{-s}=e^{-s\ln(n)}$) :
$$f'(s)=\frac d{ds}\operatorname{Li}_s\left(\frac 12\right)=-\sum_{n=1}^\infty \frac {\ln(n)}{n^s}\left(\frac 12\right)^n$$
giving minus your answer for $s=\frac 12$.
You may use the integrals defining the polylogarithm to get alternative formulations but don't hope much simpler expressions...
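If a numerical value is all that is needed, the series (and, equivalently, the polylogarithm-derivative expression above) is easy to evaluate; a small mpmath sketch, defining $\operatorname{Li}_s\left(\frac12\right)$ directly by its series:

```python
from mpmath import mp, nsum, diff, log, sqrt, inf

mp.dps = 25
direct = nsum(lambda n: log(n) / (sqrt(n) * 2**n), [1, inf])
Li = lambda s: nsum(lambda n: 0.5**n / n**s, [1, inf])   # Li_s(1/2) by its series
print(direct, -diff(Li, 0.5))   # the two agree, approximately 0.2905
```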
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/334742",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 0
} |
Are all $n\times n$ invertible matrices change-of-coordinates matrices in $\mathbb{R}^n$? More precisely, I'm trying to prove the following problem:
Assume that $\text{Span}\{\vec{v}_{1},\dots,\vec{v}_{k}\}=\mathbb{R}^{n}$
and that A
is an invertible matrix. Prove that $\text{Span}\{A\vec{v}_{1},\dots,A\vec{v}_{k}\}=\mathbb{R}^{n}$.
but I think I need the lemma in the title. I find it difficult to prove without large amounts of handwaving. Is the proposition in the title true, and if so, how to prove it?
| Actually, the opposite is true: the problem in your question is half of the proof of the proposition in the title, namely, that $\{Av_1,\dotsc,Av_k\}$ is a spanning set whenever $\{v_1,\dotsc,v_k\}$ is. So, let $x \in \mathbb{R}^n$. Then, of course, $A^{-1}x \in \mathbb{R}^n$, so $A^{-1}x = \sum_{i=1}^k \alpha_i v_i$ for some $\alpha_i \in \mathbb{R}$ -- note that I'm only using the fact that $\{v_1,\dotsc,v_k\}$ is a spanning set, and am assuming absolutely nothing concerning linear independence. Since $x = A(A^{-1}x)$, what follows?
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/334793",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 0
} |
Variation of Pythagorean triplets: $x^2+y^2 = z^3$ I need to prove that the equation $x^2 + y^2 = z^3$ has infinitely many solutions for positive $x, y$ and $z$.
I got to as far as $4^3 = 8^2$ but that seems to be of no help.
Can some one help me with it?
| the equation:
$X^2+Y^2=Z^3$
Has the solutions:
$X=2k^6+8tk^5+2(7t^2+8qt-9q^2)k^4+16(t^3+2qt^2-tq^2-2q^3)k^3+$
$+2(7t^4+12qt^3+6q^2t^2-28tq^3-9q^4)k^2+8(t^5+2qt^4-2q^3t^2-5tq^4)k+$
$+2(q^6-4tq^5-5q^4t^2-5q^2t^4+4qt^5+t^6)$
.................................................................................................................................................
$Y=2k^6+4(3q+t)k^5+2(9q^2+16qt+t^2)k^4+32qt(2q+t)k^3+$
$+2(-9q^4+20tq^3+30q^2t^2+12qt^3-t^4)k^2+4(-3q^5-tq^4+10q^3t^2+6q^2t^3+5qt^4-t^5)k-$
$-2(q^6+4tq^5-5q^4t^2-5q^2t^4-4qt^5+t^6)$
.................................................................................................................................................
$Z=2k^4+4(q+t)k^3+4(q+t)^2k^2+4(q^3+tq^2+qt^2+t^3)k+2(q^2+t^2)^2$
$q,t,k$ are arbitrary integers of any sign. After substituting numbers and obtaining a result, it may be necessary to divide by the greatest common divisor; this is done to obtain the primitive solutions.
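These formulas can be checked symbolically; a sketch with sympy (the variable names mirror the ones above):

```python
from sympy import symbols, expand

q, t, k = symbols('q t k')
X = (2*k**6 + 8*t*k**5 + 2*(7*t**2 + 8*q*t - 9*q**2)*k**4
     + 16*(t**3 + 2*q*t**2 - t*q**2 - 2*q**3)*k**3
     + 2*(7*t**4 + 12*q*t**3 + 6*q**2*t**2 - 28*t*q**3 - 9*q**4)*k**2
     + 8*(t**5 + 2*q*t**4 - 2*q**3*t**2 - 5*t*q**4)*k
     + 2*(q**6 - 4*t*q**5 - 5*q**4*t**2 - 5*q**2*t**4 + 4*q*t**5 + t**6))
Y = (2*k**6 + 4*(3*q + t)*k**5 + 2*(9*q**2 + 16*q*t + t**2)*k**4
     + 32*q*t*(2*q + t)*k**3
     + 2*(-9*q**4 + 20*t*q**3 + 30*q**2*t**2 + 12*q*t**3 - t**4)*k**2
     + 4*(-3*q**5 - t*q**4 + 10*q**3*t**2 + 6*q**2*t**3 + 5*q*t**4 - t**5)*k
     - 2*(q**6 + 4*t*q**5 - 5*q**4*t**2 - 5*q**2*t**4 - 4*q*t**5 + t**6))
Z = (2*k**4 + 4*(q + t)*k**3 + 4*(q + t)**2*k**2
     + 4*(q**3 + t*q**2 + q*t**2 + t**3)*k + 2*(q**2 + t**2)**2)
print(expand(X**2 + Y**2 - Z**3))   # output 0 means X^2 + Y^2 = Z^3 for all q, t, k
```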
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/334839",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "26",
"answer_count": 8,
"answer_id": 5
} |
How do I show $\sin^2(x+y)−\sin^2(x−y)≡\sin(2x)\sin(2y)$? I really don't know where I'm going wrong, I use the sum to product formula but always end up far from $\sin(2x)\sin(2y)$. Any help is appreciated, thanks.
| Just expand, note that $(a+b)^2-(a-b)^2 = 4 ab$.
Expand $\sin (x+y), \sin (x-y)$ in the usual way. Let $a = \sin x \cos y, b= \cos x \sin y$.
Then $\sin^2(x+y)−\sin^2(x−y)= 4 \sin x \cos y \cos x \sin y$.
Then note that $\sin 2 t = 2 \sin t \cos t$ to finish.
Note that the only trigonometric identity used here is $\sin (s+t) = \sin s \cos t + \sin t \cos s$.
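A quick numerical spot-check of the identity (not needed for the proof, just reassurance):

```python
import numpy as np

rng = np.random.default_rng(1)
x, y = rng.uniform(-np.pi, np.pi, size=(2, 5))
print(np.allclose(np.sin(x + y)**2 - np.sin(x - y)**2,
                  np.sin(2*x) * np.sin(2*y)))   # True
```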
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/334897",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 7,
"answer_id": 4
} |
Functions question help me? a) We have the function $f(x,y)=x-y+1.$ Find the values of $f(x,y)$ in the points of the parabola $y=x^2$ and build the graph $F(x)=f(x,x^2)$ .
So, some points of the parabola are $(0;0), (1;1), (2;4)$. I replace these in $f(x,y)$ and I have $f(x,y)=1,1,-1\dots$. The graph $f(x,x^2)$ must have the points $(0;1),(1;1)$ and $(2;-1)\,$ , right?
b)We have the function $f(x,y) =\sqrt x + g[\sqrt x-1]$.Find $g(x)$ and $f(x,y)$ if $f(x,1)=x$?
I dont even know where to start here :/
| (a)
*
*To find points on $f(x, y)$ that are also on the parabola $y = f(x) = x^2$: Solve for where $f(x, y) = x - y + 1$ and $y = f(x) = x^2$ intersect by putting $f(x, y) = f(x)$:
$$ x^2 = x - y + 1$$ and express as a value of $y$:
$$y = 1 + x - x^2\;\;\text{ and note}\;\; y = F(x) = f(x, x^2)\tag{1}$$
*
*Note that this function $F(x)$ is precisely $f(x, x^2)$. Find enough points in this equation to plot it. (The points that satisfy $(1)$ will be points on $f(x,y)$ which satisfy $y = x^2$.) You will find that $\;\;F(x) = 1 + x - x^2\;\;$ is also a parabola. (See image below.) We can manipulate $F(x)$ to learn where the vertex of the parabola is located: write $F(x): (y - \frac 54) = -(x - \frac 12)^2$, so the vertex is located at $(\frac 12, \frac 54)$. The negative sign indicates that the parabola opens downward.
$\quad F(x) = 1 + x - x^2:$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/334953",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 2
} |
Prove that $B$ is a basis of the space $V$. Let $V,W$ be nonzero spaces over a field $F$ and suppose that a set $B =\lbrace v_1, . . . , v_n \rbrace \subset V$ has the following property:
For any vectors $w_1, . . . ,w_n \in W$, there exists a unique linear transformation $T : V \rightarrow W$ such that $T(v_i) = w_i$ for all $i = 1, . . . , n$.
Prove that $B$ is a basis of the space $V$.
Can anyone help me with this? I have no idea how to prove this.
| Choose $w \in W$, $w \neq 0$. Define $T_k$ by $T_k(v_i) = \delta_{ik} w$. By assumption $T_k$ exists.
We want to show that $v_1,...,v_n$ are linearly independent. So suppose $\sum_i \alpha_i v_i = 0$. Then $T_k(\sum_i \alpha_i v_i) = \alpha_k w = 0$. Hence $\alpha_k = 0$.
Now we need to show that $v_1,...,v_n$ spans $V$. Define the transformation $Z(v_i) = 0$. By assumption, $Z$ is the unique linear transformation mapping all $v_i$ to zero. It follows that $V = \operatorname{sp}\{v_1,...,v_n\}$.
To see this, suppose $x \notin \operatorname{sp}\{v_1,...,v_n\}$. Then we can extend $Z$ to $\operatorname{sp}\{x,v_1,...,v_n\}$: Define $Z_+(v_i) = 0$, $Z_+(x) = w$, $Z_-(v_i) = 0$, $Z_-(x) = -w$. However, by uniqueness, we must have $Z = Z_+ = Z_-$, which is a contradiction. Hence no such $x$ exists.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/335019",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
} |
Find the complete solution to the simultaneous congruence. I'm having trouble understanding the steps involved to do this question so any step by step reasoning in solving the solution would help me study for my exam.
Thanks so much!
$$x\equiv 6 \pmod{14}$$
$$x\equiv 24 \pmod{29}$$
| Applying Easy CRT (below), noting that $\rm\displaystyle\, \frac{-18}{29}\equiv \frac{-4}{1}\ \ (mod\ 14),\ $ quickly yields
$$\begin{array}{ll}\rm x\equiv \ \ 6\ \ (mod\ 14)\\ \rm x\equiv 24\ \ (mod\ 29)\end{array}\rm \!\iff\! x\equiv 24\! +\! 29 \left[\frac{-18}{29}\, mod\ 14\right]\!\equiv 24\!+\!29[\!-4]\equiv -92\equiv 314\,\ (mod\ 406) $$
Theorem (Easy CRT) $\rm\ \ $ If $\rm\ m,n\:$ are coprime integers then $\rm\ n^{-1}\ $ exists $\rm\ (mod\ m)\ \ $ and
$\rm\displaystyle\quad \begin{eqnarray}\rm x&\equiv&\rm\ a\ \ (mod\ m) \\
\rm x&\equiv&\rm\ b\ \ (mod\ n)\end{eqnarray} \! \iff x\ \equiv\ b + n\ \bigg[\frac{a\!-\!b}{n}\ mod\ m\:\bigg]\ \ (mod\ mn)$
Proof $\rm\ (\Leftarrow)\ \ \ mod\ n\!:\,\ x\equiv b + n\ [\cdots]\equiv b,\ $ and $\rm\,\ mod\ m\!:\,\ x\equiv b + (a\!-\!b)\ n/n\: \equiv\: a\:.$
$\rm\ (\Rightarrow)\ \ $ The solution is unique $\rm\ (mod\ mn)\ $ since if $\rm\ x',x\ $ are solutions then $\rm\ x'\equiv x\ $ mod $\rm\:m,n\:$ therefore $\rm\ m,n\ |\ x'-x\ \Rightarrow\ mn\ |\ x'-x\ \ $ since $\rm\ \:m,n\:$ coprime $\rm\:\Rightarrow\ lcm(m,n) = mn\:.\ \ $ QED
Remark $\ $ I chose $\rm\: n,m = 29,14\ $ (vs. $\rm\, 14,29)\:$ since then $\rm\:n \equiv 1\,\ (mod\ m),\:$ making completely trivial the computation of $\rm\,\ n^{-1}\ mod\ m\,\ $ in the bracketed term in the formula.
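A brute-force check over one full period confirms the value, independently of the Easy CRT formula:

```python
# all residues modulo 14*29 = 406 satisfying both congruences
solutions = [x for x in range(14 * 29) if x % 14 == 6 and x % 29 == 24]
print(solutions)   # [314]
```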
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/335094",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 1
} |
Proof involving modulus and CRT Let m,n be natural numbers where gcd(m,n) = 1. Suppose x is an integer which satisfies
x ≡ m (mod n)
x ≡ n (mod m)
Prove that x ≡ m+n (mod mn).
I know that since gcd(m,n)=1 means they are relatively prime so then given x, gcd(x,n)=m and gcd(x,m)=n. I have trouble getting to the next steps in proving x ≡ m+n (mod mn). Which is gcd(x, mn)=m+n
| From the Chinese Remainder Theorem (uniqueness part) you know that, since $m$ and $n$ are relatively prime, the system of congruences has a unique solution modulo $mn$.
So we only need to check that $m+n$ works. For that, we only need to verify that $m+n\equiv n\pmod{m}$, and that $m+n\equiv m\pmod{n}$. That is very easy!
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/335166",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 0
} |
Convert segment of parabola to quadratic bezier curve How do I convert a segment of parabola to a cubic Bezier curve?
The parabola segment is given as a polynomial with two x values for the edges.
My target is to convert a quadratic piecewise polynomial to a Bezier path (a set of concatenated Bezier curves).
| You can do this in two steps, first convert the parabola segment to a quadratic Bezier curve (with a single control point), then convert it to a cubic Bezier curve (with two control points).
Let $f(x)=Ax^2+Bx+C$ be the parabola and let $x_1$ and $x_2$ be the edges of the segment on which the parabola is defined.
Then $P_1=(x_1,f(x_1))$ and $P_2=(x_2,f(x_2))$ are the Bezier curve start and end points
and $C=(\frac{x_1+x_2}{2},f(x_1)+f'(x_1)\cdot \frac{x_2-x_1}{2})$ is the control point for the quadratic Bezier curve.
Now you can convert this quadratic Bezier curve to a cubic Bezier curve by define the two control points as:
$C_1=\frac{2}{3}C+\frac{1}{3}P_1$ and
$C_2=\frac{2}{3}C+\frac{1}{3}P_2$.
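A small Python sketch of this two-step conversion (the function and variable names are my own):

```python
def parabola_to_cubic_bezier(A, B, C, x1, x2):
    """Cubic Bezier (P1, C1, C2, P2) for y = A*x^2 + B*x + C on [x1, x2]."""
    f = lambda x: A*x*x + B*x + C
    df = lambda x: 2*A*x + B
    P1, P2 = (x1, f(x1)), (x2, f(x2))
    # quadratic control point from the tangent at P1, as described above
    Cq = ((x1 + x2) / 2, f(x1) + df(x1) * (x2 - x1) / 2)
    C1 = (2/3 * Cq[0] + 1/3 * P1[0], 2/3 * Cq[1] + 1/3 * P1[1])
    C2 = (2/3 * Cq[0] + 1/3 * P2[0], 2/3 * Cq[1] + 1/3 * P2[1])
    return P1, C1, C2, P2

print(parabola_to_cubic_bezier(1, 0, 0, 0, 1))
# (0,0), (1/3,0), (2/3,1/3), (1,1) for y = x^2 on [0, 1]
```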
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/335226",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 2,
"answer_id": 0
} |
Finding the derivatives of sin(x) and cos(x) We all know that the following (hopefully):
$$\sin(x)=\sum^{\infty}_{n=0}(-1)^n \frac{x^{2n+1}}{(2n+1)!}\ , \ x\in \mathbb{R}$$
$$\cos(x)=\sum^{\infty}_{n=0}(-1)^n \frac{x^{2n}}{(2n)!}\ , \ x\in \mathbb{R}$$
But how do we find the derivates of $\sin(x)$ and $\cos(x)$ by using the definition of a derivative and the those definition above?
Like I should start by doing:
$$\lim_{h\to 0}\frac{\sin(x+h)-\sin(x)}{h}$$
But after that no clue at all. Help appreciated!
| The radius of convergence of the power series is infinity, so you can interchange summation and differentiation.
$\frac{d}{dx}\sin x = \frac{d}{dx} \sum^{\infty}_{n=0}(-1)^n \frac{x^{2n+1}}{(2n+1)!} = \sum^{\infty}_{n=0}(-1)^n \frac{d}{dx} \frac{x^{2n+1}}{(2n+1)!} = \sum^{\infty}_{n=0}(-1)^n \frac{x^{2n}}{(2n)!} = \cos x$
If you wish to use the definition, you can do:
$\frac{\sin(x+h)-\sin x}{h} = \frac{\sum^{\infty}_{n=0}(-1)^n \frac{(x+h)^{2n+1}}{(2n+1)!}-\sum^{\infty}_{n=0}(-1)^n \frac{x^{2n+1}}{(2n+1)!}}{h} = \sum^{\infty}_{n=0}(-1)^n \frac{1}{(2n+1)!}\frac{(x+h)^{(2n+1)}-x^{(2n+1)}}{h}$
Then $\lim_{h \to 0} \frac{(x+h)^{(2n+1)}-x^{(2n+1)}}{h} = (2n+1)x^{2n}$ (using the binomial theorem).
(Note $(x+h)^p-x^p = \sum_{k=1}^p \binom{p}{k}x^{p-k}h^k = p x^{p-1}h + \sum_{k=2}^p \binom{p}{k}x^{p-k}h^k$. Dividing across by $h$ and taking limits as $h \to 0$ shows that $\frac{d}{dx} x^p = p x^{p-1}$.)
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/335284",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 5,
"answer_id": 3
} |
Determinant of the transpose of elementary matrices Is there a 'nice' proof to show that $\det(E^T) = \det(E)$ where $E$ is an elementary matrix?
Clearly it's true for the elementary matrix representing a row being multiplied by a constant, because then $E^T = E$ as it is diagonal.
I was thinking for the "row-addition" type, it's clearly true because if $E_1$ is a matrix representing row-addition then it is either an upper/lower triangular matrix and so $\det(E_1)$ is equal to the product of the diagonals. If $E_1$ is an upper/lower triangular matrix, then $E_1^T$ is a lower/upper triangular matrix and so $\det(E_1^T) = \det(E_1)$ as the diagonal entries remain the same when the matrix is transposed.
How about for the "row-switching" matrix where rows $i$ and $j$ have been swapped on the identity matrix? Can we use the linearity of the rows in a matrix somehow?
Thanks for any help!
| You can use the fact that switching two rows or columns of a matrix changes the sign of the determinant. Switching two rows of $E$ makes it diagonal, then switch the corresponding columns and you have $E^T$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/335368",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
} |
degree of the extension of the normal closure So I am trying to do the following problem: if $K/F$ is an extension of degree $p$ (a prime), and if $L/F$ is a normal closure of $K/F$, then $[L:K]$ is not divisible by $p$.
This is what I tried. If $K/F$ is separable, then the primitive element theorem holds and we get $K=F(\alpha)$. Let $f(x)$ be the minimal polynomial of $\alpha$ over $F$. We know that $\deg(f)=p$. If $K$ contains all the roots of $f$, then it is normal, and we are done. If not, then there is a root $\beta$ such that adjoining it generates an extension of degree at most $p-1$. If we have all the roots then we are done; otherwise we keep adjoining roots, and at each step the maximum possible degree of the extension we obtain goes down, so $[L:K]$ is a divisor of $(p-1)!$, hence not divisible by $p$ since $p$ is prime.
I am not quite sure how to solve this problem when we don't know that the extension is separable. Any thoughts?
| If $[K:F]=p$, then for any $\alpha\in K\setminus F$, we get $K=F(\alpha)$.
Let $f(x)$ be the minimal polynomial of $\alpha$ over $F$; then the normal closure $L/F$ of $K/F$ is the splitting field of $f(x)$ over $K$. If we write $f(x)=g(x)(x-\alpha)$, then $L$ is the splitting field of $g(x)$ over $K$, but $\deg(g)=p-1$, so $[L:K]$ divides $(p-1)!$,
so it is not divisible by $p$ since $p$ is a prime.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/335429",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
halfspaces question How do I find the supporting halfspace inequality to epigraph of
$$f(x) = \frac{x^2}{|x|+1}$$
at point $(1,0.5)$
| For $x>0$, we have
$$
f(x)=\frac{x^2}{x+1}\quad\Rightarrow\quad f'(x)=\frac{x^2+2x}{(x+1)^2}.
$$
Hence $f'(1)=\frac{3}{4}$ and an equation of the tangent to the graph of $f$ at $(1,f(1))$ is
$$y=f'(1)(x-1)+f(1)=\frac{3}{4}(x-1)+\frac{1}{2}=\frac{3}{4}x-\frac{1}{4}.$$
An inequality defining the halfspace above this tangent is therefore
$$
y\geq \frac{3}{4}x-\frac{1}{4}.
$$
See here for a picture of the graph and the tangent.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/335490",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Is 1 divided by 3 equal to 0.333...? I have been taught that $\frac{1}{3}$ is 0.333.... However, I believe that this is not true, as 1/3 cannot actually be represented in base ten; even if you had infinite threes, as 0.333... is supposed to represent, it would not be exactly equal to 1/3, as 10 is not divisible by 3.
0.333... = 3/10 + 3/100 + 3/1000...
This occured to me while I discussion on one of Zeno's Paradoxes. We were talking about one potential solution to the race between Achilles and the Tortoise, one of Zeno's Paradoxes. The solution stated that it would take Achilles $11\frac{1}{3}$ seconds to pass the tortoise, as 0.111... = 1/9. However, this isn't that case, as, no matter how many ones you add, 0.111... will never equal precisely $\frac{1}{9}$.
Could you tell me if this is valid, and if not, why not? Thanks!
I'm not arguing that $0.333...$ isn't the closest that we can get in base 10; rather, I am arguing that, in base 10, we cannot accurately represent $\frac{1}{3}$
| The problematic part of the question is "no matter how many ones you add, 0.111... will never equal precisely 1/9."
In this (imprecise) context $0.111\ldots$ is an infinite sequence of ones; the sequence of ones does not terminate, so there is no place at which to add another one; each one is already followed by another one. Thus, $10\times0.111\ldots=1.111\ldots$ is precise. Therefore, $9\times0.111\ldots=1.000\ldots=1$ is precise, and $0.111\ldots=1/9$.
I say "imprecise" because we also say $\pi=3.14159\dots$ where ... there means an unspecified sequence of digits following. A more precise way of writing what, in the context of this question, we mean by $0.111\dots$ is $0.\overline{1}$ where the group of digits under the bar is to be repeated without end.
In this question, $0.333\ldots=0.\overline{3}$, and just as above, $10\times0.\overline{3}=3.\overline{3}$, and therefore, $9\times0.\overline{3}=3.\overline{0}=3$, which means $0.\overline{3}=3/9=1/3$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/335560",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "15",
"answer_count": 6,
"answer_id": 4
} |
How to prove this function is quasi-convex/concave? this is the function:
$$\displaystyle f(a,b) = \frac{b^2}{4(1+a)}$$
| For quasi convexity you have to consider for $\alpha\in R$ the set
$$\{(a,b)\in R^{2}: f(a,b)\leq \alpha\}
$$
If this set is convex for every $\alpha \in R$ you have quasi convexity.
So (on the domain where $1+a>0$) the sublevel set is described by the inequality
$$b^{2}\leq 4\alpha(1+a).$$
If you draw this set as a set in $R^{2}$ for fixed $\alpha\in R$, this should give you a clue about quasi-convexity...
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/335626",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
$\int \frac{dx}{x\log(x)}$ I think I'm having a bad day. I was just trying to use integration by parts on the following integral:
$$\int \frac{dx}{x\log(x)}$$
Which yields
$$\int \frac{dx}{x\log(x)} = 1 + \int \frac{dx}{x\log(x)}$$
Now, if I were to subtract
$$\int \frac{dx}{x\log(x)}$$
from both sides, it would seem that $0 = 1$. What is going on here?
Aside: I do know the integral evaluates to
$$\log(\log(x))$$
plus a constant.
| In a general case we have
$$\int\frac{f'(x)}{f(x)}dx=\log|f(x)|+C,$$
and in our case, take $f(x)=\log(x)$ so $f'(x)=\frac{1}{x}$ to find the desired result.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/335709",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 3,
"answer_id": 0
} |
2D Partial Integration I have a (probably very simple problem): I try to find the variational form of a PDE, at one time we have to partially integrate:
$\int_{\Omega_j} v \frac{\partial}{\partial x}E d(x,y)$ where v is our testfunction and E ist the function we try to find. We have $v, E: \mathbb{R}^2\rightarrow \mathbb{R}$.
Our approach was the partial integration: $\int_{\Omega_j} v \frac{\partial}{\partial x}E d(x,y) = - \int_{\Omega_j} E \frac{\partial}{\partial x} v d(x, y) + \int_{\partial\Omega_j} v E ds$
But I don't think that this makes sense, as we usually need the normal for the boundary integral, but how can we introduce a normal, when our functions are no vector functions?
I hope you understand my problem
| For the following, I refer to Holland, Applied Analysis by the Hilbert Space Method, Sec. 7.2. Let $D$ be a simply connected region in the plane, and $f$, $P$, and $Q$ be functions that map $D \rightarrow \mathbb{R}$. Then
$$\iint_D f \left ( \frac{\partial Q}{\partial x} - \frac{\partial P}{\partial y}\right ) \,dx\,dy = \int_{\partial D} f (P dx + Q dy) - \iint_D \left( \frac{\partial f}{\partial x} Q - \frac{\partial f}{\partial y} P \right )\,dx\,dy $$
This is a form of 2D integration by parts and is derived using differential forms. This applies to your problem, in that $f=v$, $P=0$, and $Q=E$. Your integration by parts then takes the form
$$\iint_{\Omega_j} v \frac{\partial E}{\partial x} \,dx\,dy = \int_{\partial \Omega_j} v \, E \, dy - \iint_{\Omega_j} E \frac{\partial v}{\partial x} \,dx\,dy$$
The important observation here is that the lack of a function that has a derivative with respect to $y$ in the original integral produces a boundary integral that is integrated over the $y$ direction only.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/335778",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Intersecting set systems and Erdos-Ko-Rado Theorem Suppose you have an $n$-element set, where $n$ is finite, and you want to make an intersecting family of $r$-subsets of this set. Each subset has to intersecting each other subset.
We may assume $r$ is not larger than $n/2$, because that would make the problem trivial, as any two $r$-subsets would intersect!
The Erdos-Ko-Rado theorem says that the way to make your intersecting system the largest is to choose an element and simply take the set of all $r$-sets containing that element. This family has size $\binom{n-1}{r-1}$. This type of family is sometimes called a "star".
This is not necessarily larger than every other method. If $n$ is even and $r=n/2$ you could take the family of all sets excluding a given element. That would give $\binom{n-1}{r-1}$.
*Question *
Suppose $n$ is even and $r=n/2$. Suppose we move to an $(n+1)$-set and $r=n/2$. The "star" method now gives a larger intersecting family. This is obvious, you have one more element to choose from, and the formula is given by Pascal's identity.
How do you get a larger intersecting system from the "exclusion" method? It's not obvious what to do, because when I try to make the system larger I always end up turning it into a star.
| If I understand your question correctly, you can't do better. Indeed from the proof of Erdos-Ko-Rado you can deduce that only the stars have size equal to $\binom{n-1}{r-1}$, when $2r<n$. In the exceptional case $r=n/2$ you can take one of every pair of complementary sets $A,A^c$.
EDIT: The intended question seems to be (equivalent to) the following. How large can an intersecting family of $r$-sets be if we stipulate that it is not a star?
The answer in this case was given by Hilton and Milner (see A. J. W. Hilton and E. C. Milner. Some Intersection Theorems For Systems of Finite Sets Q J Math (1967) 18(1): 369-384 doi:10.1093/qmath/18.1.369).
The largest such family is obtained by fixing an element $1$, an $r$-set $A_1 = \{2,\dots,r+1\}$, and requiring every further set to contain $1$ and to intersect $A_1$. Thus the maximum size is whatever the size of this family is.
Since you seem to be particularly interested in the case of $n$ odd and $r=\lfloor n/2\rfloor$, let's do the calculation in that case. The family you propose ("exclude two points") has size $\binom{n-2}{r} = \binom{n-2}{r-1}$, while the family that Hilton and Milner propose has size $$1 + \binom{n-1}{r-1} - \binom{n-r-1}{r-1} = \binom{n-1}{r-1} - r + 1 = \binom{n-2}{r-1} + \binom{2r-1}{r-2} - r + 1.$$
Sorry, they win. :)
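For concreteness, here is a quick comparison of the two counts in the odd case $n=2r+1$ (a sketch; `hm` is just my name for the Hilton-Milner size quoted above):

```python
from math import comb

def hm(n, r):
    """Hilton-Milner bound for an intersecting family that is not a star."""
    return 1 + comb(n - 1, r - 1) - comb(n - r - 1, r - 1)

for r in range(2, 8):
    n = 2 * r + 1
    print(n, r, comb(n - 2, r), hm(n, r))   # equal at r = 2, strictly larger after
```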
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/335844",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Help me to prove this integration Where the method used should be using complex analysis.
$$\int_{c}\frac{d\theta}{(p+\cos\theta)^2}=\frac{2\pi p}{(p^2-1)\sqrt{p^2-1}};c:\left|z\right|=1$$
thanks in advance
| Hint:
$$\cos \theta = \frac {z + z^{-1}} 2$$
Plug this in and refer to classic tools in complex analysis, such as the Cauchy formula or the residue theorem.
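Once the residue computation is done, the stated value is easy to sanity-check numerically (a sketch using real quadrature rather than the contour):

```python
import numpy as np
from scipy.integrate import quad

p = 2.0
numeric, _ = quad(lambda t: 1.0 / (p + np.cos(t))**2, 0, 2*np.pi)
print(numeric, 2*np.pi*p / (p**2 - 1)**1.5)   # both are approximately 2.4184
```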
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/335930",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 2
} |
$e^{\large\frac{-t}{5}}(1-\frac{t}{5})=0$ solve for t I'm given the following equation an i have to solve for $"t"$ This is actually the derivative of a function, set equal to zero:
$$f'(t) = e^\frac{-t}{5}(1-\frac{t}{5})=0$$
I will admit im just stuck and im looking for how to solve this efficiently.
steps $1,\;2$ - rewrote the equation and distributed:
$$\frac{1}{e^\frac{t}{5}}(1-\frac{t}{5}) \iff \frac{1-\frac{t}{5}}{e^\frac{t}{5}} $$
steps $3, \;4$ - common denominator of 5, multiply by reciprocal of denominator:
$$ \frac{\frac{5-t}{5}}{e^\frac{t}{5}} = \frac{5-t}{5}\cdot\frac{1}{e^\frac{t}{5}} = \frac{5-t}{5e^\frac{t}{5}} $$
step 5, set equal to zero; this is where I'm stuck:
$$f'(t) = \frac{5-t}{5e^\frac{t}{5}}=0$$
How do i go from here? And am I even doing this correctly? Any help would be greatly appreciated.
Miguel
| $e^{\dfrac{-t}{5}}\neq0$, because $e^{-t/5}=\frac{1}{e^{t/5}}$ is a positive real number for every real $t$.
Therefore, the other factor has to be zero.
Now you can solve it!
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/336012",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 1
} |
Prove $|Im(z)|\le |\cos (z)|$ for $z\in \mathbb{C}$. Let $z\in \mathbb{C}$ i.e. $z=x+iy$. Show that $|Im(z)|\le |\cos (z)|$.
My hand wavy hint was to consider $\cos (z)=\cos (x+iy)=\cos (x)\cosh (y)+i\sin (x)\sinh(y)$ then do "stuff".
Then I have $|\cos (z)|=|Re(z)+iIm(z)|$ and the result will be obvious.
Thanks in advance.
I am missing something trivial I know.
| $$\cos(z) = \cos(x) \cosh(y) + i \sin(x) \sinh(y) $$
Taking the norm squared:
$$ \cos^2(x) \cosh^2(y) + \sin^2(x) \sinh^2(y)$$
We are left with:
$$ \cos^2(x) \left(\frac{1}{2}\cosh(2y) + \frac{1}{2}\right) + \sin^2(x) \left(\frac{1}{2}\cosh(2y) - \frac{1}{2}\right)$$
Simplifying, we get:
$$ \frac{1}{2} \left(\cosh(2y) + \cos(2x) \right)$$
We might as well suppose $\cos{2x} = -1$ Our goal is to show $|$Im$(z)|^2 $ is smaller than this quantity.
That is,
$$ \begin{align} & & y^2 & \leq \frac{1}{2} \cosh{2y} - \frac{1}{2} \\ \iff & & y^2 &\leq (\sinh y)^2 \\ \iff & & y & \leq \sinh y \ \ \ \ \ \ \ \ \forall y\geq0\end{align}$$
A quick computation of the derivative shows that $\frac{d}{dy} y = 1$ but $\frac{d}{dy} \sinh y = \cosh{y}$. If we want to see that $\cosh y \geq 1$, we can differentiate it again and see that $\sinh y \geq 0$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/336077",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 1,
"answer_id": 0
} |
What is the property that allows $5^{2x+1} = 5^2$ to become $2x+1 = 2$? What is the property that allows $5^{2x+1} = 5^2$ to become $2x+1 = 2$? We just covered this in class, but the teacher didn't explain why we're allowed to do it.
| $5^{(2x+1)} = 5^2$
Multiplying both sides by $1/5^2$ we get
$\frac{5^{(2x+1)}}{5^2} = 1$
$\Rightarrow 5^{(2x+1)-2} = 1$
Taking log to the base 5 on both sides we get $2x+1-2=0$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/336133",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 5,
"answer_id": 3
} |
Prove $\sqrt{a} + \sqrt{b} + \sqrt{c} \ge ab + bc + ca$ Let $a,b,c$ are non-negative numbers, such that $a+b+c = 3$.
Prove that $\sqrt{a} + \sqrt{b} + \sqrt{c} \ge ab + bc + ca$
Here's my idea:
$\sqrt{a} + \sqrt{b} + \sqrt{c} \ge ab + bc + ca$
$2(\sqrt{a} + \sqrt{b} + \sqrt{c}) \ge 2(ab + bc + ca)$
$2(\sqrt{a} + \sqrt{b} + \sqrt{c}) - 2(ab + bc + ca) \ge 0$
$2(\sqrt{a} + \sqrt{b} + \sqrt{c}) - ((a+b+c)^2 - (a^2 + b^2 + c^2) \ge 0$
$2(\sqrt{a} + \sqrt{b} + \sqrt{c}) + (a^2 + b^2 + c^2) - (a+b+c)^2 \ge 0$
$2(\sqrt{a} + \sqrt{b} + \sqrt{c}) + (a^2 + b^2 + c^2) \ge (a+b+c)^2$
$2(\sqrt{a} + \sqrt{b} + \sqrt{c}) + (a^2 + b^2 + c^2) \ge 3^2 = 9$
And I'm stuck here.
I need to prove that:
$2(\sqrt{a} + \sqrt{b} + \sqrt{c}) + (a^2 + b^2 + c^2) \ge (a+b+c)^2$ or
$2(\sqrt{a} + \sqrt{b} + \sqrt{c}) + (a^2 + b^2 + c^2) \ge 3(a+b+c)$, because $a+b+c = 3$
In the first case using Cauchy-Schwarz Inequality I prove that:
$(a^2 + b^2 + c^2)(1+1+1) \ge (a+b+c)^2$
$3(a^2 + b^2 + c^2) \ge (a+b+c)^2$
Now I need to prove that:
$2(\sqrt{a} + \sqrt{b} + \sqrt{c}) + (a^2 + b^2 + c^2) \ge 3(a^2 + b^2 + c^2)$
$2(\sqrt{a} + \sqrt{b} + \sqrt{c}) \ge 2(a^2 + b^2 + c^2)$
$\sqrt{a} + \sqrt{b} + \sqrt{c} \ge a^2 + b^2 + c^2$
I need I don't know how to continue.
In the second case I tried proving:
$2(\sqrt{a} + \sqrt{b} + \sqrt{c}) \ge 2(a+b+c)$ and
$a^2 + b^2 + c^2 \ge a+b+c$
Using Cauchy-Schwarz Inequality I proved:
$(a^2 + b^2 + c^2)(1+1+1) \ge (a+b+c)^2$
$(a^2 + b^2 + c^2)(a+b+c) \ge (a+b+c)^2$
$a^2 + b^2 + c^2 \ge a+b+c$
But I can't find a way to prove that $2(\sqrt{a} + \sqrt{b} + \sqrt{c}) \ge 2(a+b+c)$
So please help me with this problem.
P.S
My initial idea, which is proving:
$2(\sqrt{a} + \sqrt{b} + \sqrt{c}) + (a^2 + b^2 + c^2) \ge 3^2 = 9$
maybe isn't the right way to prove this inequality.
| I will use the following lemma (the proof below):
$$2x \geq x^2(3-x^2)\ \ \ \ \text{ for any }\ x \geq 0. \tag{$\clubsuit$}$$
Start by multiplying our inequality by two
$$2\sqrt{a} +2\sqrt{b} + 2\sqrt{c} \geq 2ab +2bc +2ca, \tag{$\spadesuit$}$$
and observe that
$$2ab + 2bc + 2ca = a(b+c) + b(c+a) + c(a+b) = a(3-a) + b(3-b) + c(3-c)$$
and thus $(\spadesuit)$ is equivalent to
$$2\sqrt{a} +2\sqrt{b} + 2\sqrt{c} \geq a(3-a) + b(3-b) + c(3-c)$$
which can be obtained by summing up three applications of $(\clubsuit)$ for $x$ equal to $\sqrt{a}$, $\sqrt{b}$ and $\sqrt{c}$ respectively:
\begin{align}
2\sqrt{a} &\geq a(3-a), \\
2\sqrt{b} &\geq b(3-b), \\
2\sqrt{c} &\geq c(3-c). \\
\end{align}
$$\tag*{$\square$}$$
The lemma
$$2x \geq x^2(3-x^2) \tag{$\clubsuit$}$$
is true for any $x \geq 0$ (and also any $x \leq -2$) because
$$2x - x^2(3-x^2) = (x-1)^2x(x+2)$$
is a polynomial with roots at $0$ and $-2$, a double root at $1$ and a positive coefficient at the largest degree, $x^4$.
I hope this helps ;-)
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/336362",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "20",
"answer_count": 4,
"answer_id": 2
} |
Nth number of continued fraction Given a real number $r$ and a non-negative integer $n$, is there a way to accurately find the $n^{th}$ (with the integer being the $0^{th}$ number in the continued fraction. If this can not be done for all $r$ what are some specific ones, like $\pi$ or $e$. I already now how to do this square roots.
| You can do it recursively: $$\eqalign{f_0(r) &= \lfloor r \rfloor\cr
f_{n+1}(r) &= f_n\left( \frac{1}{r - \lfloor r \rfloor}\right)\cr}$$
Of course this may require numerical calculations with very high precision.
Actually, if $r$ is a rational number but you don't know it, no finite precision
numerical calculation will suffice.
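In code this recursion looks as follows (a sketch; exact rational arithmetic sidesteps the precision issue mentioned above):

```python
from fractions import Fraction
from math import floor

def cf_term(r, n):
    """n-th continued fraction term of r, via the recursion above."""
    for _ in range(n):
        r = 1 / (r - floor(r))
    return floor(r)

r = Fraction(355, 113)                       # its continued fraction is [3; 7, 16]
print([cf_term(r, n) for n in range(3)])     # [3, 7, 16]
```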
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/336416",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
After $t$ hours, the hour hand was where the minute hand had been, and vice versa
On Saturday, Jimmy started painting his toy helicopter between 9:00 a.m. and 10:00 a.m. When he finished between 10:00 a.m. and 11:00 a.m. on the same morning, the hour hand was exactly where the minute hand had been when he started, and the minute hand was exactly where the hour hand had been when he started. Jimmy spent $t$ hours painting. Determine the value of $t$.
Let $h$ represent the initial position of the hour hand.
Let $m$ represent the initial position of the minute hand.
Note that in this solution position will be represented in "minutes". For example, if the hour hand was initially at $9$, its position would be $45$.
$$45\le h \le 50$$
$$50 \le m \le 55$$
$$\implies 0 \le (m-h) \le 10$$
Time can be represented as the number of minutes passed since 12 a.m. (for example 1 am = 60, 2 am = 120 etc.)
Then:
$$60(\frac{h}{5}) + m + t = 60 (\frac{m}{5}) + h$$
$$\implies 12 h + m + t = 12m + h$$
$$\implies t = 11 m - 11 h$$
$$\implies t = 11(m-h)$$
$$0 \le t \le 120$$
$$\implies 0 \le 11(m-h) \le 120$$
$$\implies 0 \le m -h \le \frac{120}{11}$$
That was as far as I got. Could someone point me on the right path to complete the above solution? (if possible). I am not simply looking for a solution but instead a way to complete the above solution. Any help is appreciated, thank you.
| I'm not sure what you mean by "complete the above solution". The above attempt is actually pretty far from actually finding out what $m$ and $h$ are. However, you do get $t$ as a function of $m$ and $h$, which will be needed to compute the time elapsed.
To actually solve this problem, you have to make use of the fact that $m$ and $h$ encode the same information. If I know the exact position of the hour hand, then I know the exact time because the minute hand's information is encoded in the position of the hour hand between the two integer hours.
At the beginning, we have
$$ \frac{h-45}{5} = \frac{m}{60} $$
Then, at the end, we have
$$ \frac{m-50}{5} = \frac{h}{60} $$
Now, it's just a matter of solving a system of two equations/unknowns: http://www.wolframalpha.com/input/?i=h-45+%3D+m%2F12,+m-50+%3D+h%2F12
Plugging in these values for $h$ and $m$ into $t = 11(m-h)$ yields approximately $t=$ 50.8 minutes, or $t = 0.846$ hours.
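For the record, solving the two linear equations exactly (a sketch; `Fraction` keeps the arithmetic exact):

```python
from fractions import Fraction

# h - 45 = m/12  and  m - 50 = h/12
h = (Fraction(45) + Fraction(50, 12)) / (1 - Fraction(1, 144))
m = 50 + h / 12
t = 11 * (m - h)                      # minutes, from t = 11(m - h)
print(h, m, t, float(t) / 60)         # t = 7260/143, about 50.77 min = 0.846 h
```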
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/336481",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Help solving $\sqrt[3]{x^3+3x^2-6x-8}>x+1$ I'm working through a problem set of inequalities where we've been moving all terms to one side and factoring, so you end up with a product of factors compared to zero. Then by creating a sign diagram we've determined which intervals satisfy the inequality.
This one, however, has me stumped:
$$\sqrt[3]{x^3+3x^2-6x-8}>x+1$$
I managed to do some factoring, and think I might be on the right path, though I'm not totally sure. Here's what I have so far,
$$\sqrt[3]{x^3+3x^2-6x-8}-(x+1)>0\\
\sqrt[3]{(x-2)(x+4)(x+1)}-(x+1)>0\\
\sqrt[3]{(x-2)(x+4)}\sqrt[3]{(x+1)}-(x+1)>0\\
\sqrt[3]{(x-2)(x+4)}(x+1)^{1/3}-(x+1)>0\\
(x+1)^{1/3}\left(\sqrt[3]{(x-2)(x+4)}-(x+1)^{2/3}\right)>0$$
Is there a way to factor it further (the stuff in the big parentheses), or is there another approach I'm missing?
Update: As you all pointed out below, this was the wrong approach.
This is a lot easier than you are making it.
How very true :) I wasn't sure how cubing both sides would affect the inequality. Now things are a good deal clearer. Thanks for all your help!
| Hint: you may want to remove the root first, by noticing that $f(y)=y^3$ is a monotonic increasing function.
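Following the hint through: since $y\mapsto y^3$ is increasing, cubing both sides preserves the inequality, and what remains is linear. A sympy sketch of that step:

```python
from sympy import symbols, expand, solve_univariate_inequality

x = symbols('x', real=True)
ineq = x**3 + 3*x**2 - 6*x - 8 > expand((x + 1)**3)   # both sides cubed
print(solve_univariate_inequality(ineq, x, relational=False))   # x < -1
```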
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/336530",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 3
} |
Trignometry - Cosine Formulae c = 21
b = 19
B= $65^o$
solve A with cosine formulae
$a^2+21^2-19^2=2a(21)\cos 65^\circ$
yield an simple quadratic equation in variable a
but $\Delta=(-2(21)\cos 65^\circ)^2-4(21^2-19^2) < 0$ implies the triangle has no solution?
How to make sense of that? Why does this happen and in what situation? Please give a range, if any, thanks.
| Here is an investigation without directly using sine law:
From A draw a perpendicular line to BC. Let the intersection be H
BH is AB*cos(65) which is nearly 8.87
From Pythagoras law, AH is nearly 19.032
AC is 19. But we have arrived at a contradiction, since the hypotenuse AC is shorter than the other side AH, which immediately implies $\sin(C)>1$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/336612",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
If $E$ is generated by $E_1$ and $E_2$, then $[E:F]\leq [E_1:F][E_2:F]$? Suppose $K/F$ is an extension field, and $E_1$ and $E_2$ are two intermediate subfields of finite degree. Let $E$ be the subfields of $K$ generated by $E_1$ and $E_2$. I'm trying to prove that
$$ [E:F]\leq[E_1:F][E_2:F].$$
Since $E_1$ and $E_2$ are finite extensions, I suppose they have bases $\{a_1,\dots,a_n\}$ and $\{b_1,\dots,b_m\}$, respectively. If $E_1=F$ or $E_2=F$, then the inequality is actually equality, so I suppose both are proper extension fields. I think $E=F(a_1,\dots,a_n,b_1,\dots,b_m)$. Since $\{a_1,\dots,a_n,b_1,\dots,b_m\}$ is a spanning set for $E$ over $F$,
$$[E:F]\leq n+m\leq nm=[E_1:F][E_2:F]$$
since $m,n>1$.
Is this sufficient? I'm weirded out since the problem did not ask to show $[E:F]\leq [E_1:F]+[E_2:F]$ which I feel will generally be a better upper bound.
| The sum of the degrees in general is not going to be an upper bound. Consider $K = \Bbb{Q}(\sqrt[3]{2},e^{2\pi i/3})$. This is a degree $6$ extension of $\Bbb{Q}$. Take $E_1 = \Bbb{Q}(\sqrt[3]{2})$ and $E_2 = \Bbb{Q}(e^{2 \pi i/3})$. Then
$$[E_1 : \Bbb{Q}] + [E_2:\Bbb{Q} ] = 3 + 2 = 5$$
but $E = K$ that has degree $6$ over $\Bbb{Q}$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/336657",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
} |
Eigenvalues of $A$ Let $A$ be a $3*3$ matrix with real entries. If $A$ commutes with all $3*3$ matrices with real entries, then how many number of distinct real eigenvalues exist for $A$?
please give some hint.
thank you for your time.
| If $A$ commutes with $B$ then $$[A,B] = AB - BA = 0$$
$$AB=BA$$
$$ABB^{-1}=BAB^{-1}$$
$$A=BAB^{-1}$$
If $B$ has an inverse $B^{-1}$, then $A$ and $BAB^{-1}$ are similar, and similarity does not change the set of eigenvalues.
A $3\times 3$ matrix has $3$ eigenvalues, counted with multiplicity.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/336717",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 1
} |
Balls, Bags, Partitions, and Permutations We have $n$ distinct colored balls and $m$ similar bags( with the condition $n \geq m$ ). In how many ways can we place these $n$ balls into given $m$ bags?
My Attempt: For the moment, if we assume all the balls are of same color - we are counting partitions of $n$ into at most $m$ parts( since each bag looks same - it is not a composition of $n$, just a partition ).
But, given that each ball is unique, for every partition of $n$ ----> $\lambda : \{\lambda_1,\lambda_2, \cdots \lambda_m\}$, we've $\binom{n}{\lambda_1} * \binom{n-\lambda_1}{\lambda_2}*\binom{n-\lambda_1-\lambda_2}{\lambda_3}* \cdots * \binom{\lambda_{m-1}+\lambda_{m}}{\lambda_{m-1}}*\binom{\lambda_m}{\lambda_m}$ ways of arranging the balls into $m$ given baskets. We need to find this out for every partition of $n$ into $m$ parts.
Am i doing it right?
I'm wondering whether there is any closed form formula for this( i know that enumerating the number of partitions of $n$ is also complex. But, i don't know, the stated problem seemed simple at the first look. But, it isn't? Am i right? ).
P.S. I searched for this question thinking that somebody might have asked it earlier. But, couldn't find it. Please point me there if you find it.
| This is counting the maps from an $n$-set (the balls) to an $m$-set (the bags), up to permutations of the $m$-set (as the bags are similar). This is just one of the problems of the twelvefold way, and looking at the appropriate section you find that the result can only be expressed as a sum of Stirling numbers of the second kind, namely $\sum_{k=0}^m\genfrac\{\}0{}nk$. The $k$ in fact represents the number of nonempty bags here.
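A small sketch of this count in code (sympy's `stirling` defaults to the second kind):

```python
from sympy.functions.combinatorial.numbers import stirling

def ways(n, m):
    """Distinct balls n into m identical bags (bags may be left empty)."""
    return sum(stirling(n, k) for k in range(m + 1))

print(ways(4, 2))   # 8 = S(4,0) + S(4,1) + S(4,2) = 0 + 1 + 7
```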
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/336776",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Easy volumes question Note: I am trying to prepare; I already see the answer to this question but I don't understand it.
It says the volume of earth used for making the embankment = $\pi (R^2 - r^2)h$
But I don't understand: what is $R$? What is $r$? The whole part works out to $28\pi$, but I don't understand where the two radii $R$ and $r$ come from and what their values are.
The question is: A well of diameter 3 m is dug 14 m deep. The earth taken out has been spread evenly all around it in the shape of a circular ring of width 4 m. Find the height of the embankment.
I think that width refers to the diameter, but I am not sure about that either. It would be really helpful if someone explained the answer to me properly; my guide book sure doesn't D:
| Here $r$ is the radius of the well and $R$ is the outer radius of the ring: the well has diameter 3 m, so $r=1.5$ m, and the ring has (radial) width 4 m, so $R=1.5+4=5.5$ m. The total volume of earth taken out is $r^2\pi\cdot h=1.5^2\pi\cdot14$. The volume of the new ring-shaped pile is $(R^2-r^2)\pi\cdot H=(5.5^2-1.5^2)\pi H=28\pi H$, where you need to find $H$. Compare the two, and express $H$ (this gives $H=\frac{1.5^2\cdot 14}{28}=1.125$ m).
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/336839",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Counting Couples A group of m men and w women randomly sit in a single row at a theater. If a man and woman are seated next to each other, they form a "couple." "Couples" can overlap, which means one person can be a member of two "couples."
Question: What is the expected number of couples?
Comment:
I have a hard time with word problems that deal with "expectations".
| Continuing the computation we can calculate $E[C(C-1)]$.
We have $$\left.\left(\left(\frac{d}{dz}\right)^2 (P+Q)\right)\right|_{z=1} =
2\,{\frac {uv \left( -2\,uv-u+{u}^{2}+{v}^{2}-v \right) }{ \left( v-1+u \right) ^{3}}}.$$
After a straightforward calculation this transforms into
$$ E[C(C-1)] =
{\frac {2\,mw \,\left( 2\,mw-w-m \right) }{ \left( m+w-1 \right) \left( m+w \right) }}.$$
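For the first moment itself, linearity of expectation gives $E[C]=\frac{2mw}{m+w}$: each of the $m+w-1$ adjacent pairs is a couple with probability $\frac{2mw}{(m+w)(m+w-1)}$. A quick simulation agrees (a sketch; the names and sample sizes are my own):

```python
import numpy as np

def simulated_couples(m, w, trials=100_000, seed=0):
    rng = np.random.default_rng(seed)
    row = np.array([0] * m + [1] * w)
    total = 0
    for _ in range(trials):
        rng.shuffle(row)
        total += np.count_nonzero(row[1:] != row[:-1])   # adjacent man-woman pairs
    return total / trials

m, w = 5, 7
print(simulated_couples(m, w), 2 * m * w / (m + w))   # both are approximately 5.83
```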
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/336896",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 3
} |
How can I prove that $\|Ah\| \le \|A\| \|h\|$ for a linear operator $A$? On http://www.proofwiki.org/wiki/Definition:Norm_(Linear_Transformation) , it is stated that $||Ah|| \leq ||A||||h||$ where $A$ is an operator.
Is this a theorem of some sort? If so, how can it be proved? I've been trying to gather more information about this to no avail.
Help greatly appreciated!
| By definition:
$$
\|A\|=\sup_{\|x\|\leq 1}\|Ax||=\sup_{\|x\|= 1}\|Ax||=\sup_{x\neq 0}\frac{\|Ax\|}{\|x\|}.$$
In particular,
$$
\frac{\|Ax\|}{\|x\|}\leq \|A\|\qquad\forall x\neq 0\quad\Rightarrow\quad\|Ax\|\leq \|A\|\|x\|\qquad \forall x.
$$
Note that $\|A\|$ can alternatively be defined as the least $M\geq 0$ such that $\|Ax\|\leq M\|x\|$ for all $x$. When such an $M$ does not exist, one has $\|A\|=+\infty$ and one says that $A$ is unbounded.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/337040",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 0
} |
How to prove inverse of Matrix If you have an invertible upper triangular matrix $M$, how can you prove that $M^{-1}$ is also an upper triangular matrix? I already tried many things but don't know how to prove this. Please help!
| The cofactor of any element above the diagonal is zero. Reason: the matrix whose determinant is that cofactor will always be an upper triangular matrix with determinant zero, because either its last row is zero, or its first column is zero, or one of its diagonal elements is zero. This can easily be verified. Since the $(i,j)$ entry of $M^{-1}$ is the $(j,i)$ cofactor divided by $\det M$, this implies that the entries below the diagonal of the inverse will all be zero.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/337161",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 6,
"answer_id": 2
} |
How can I calculate or think about the large number 32768^1049088? I decided to ask myself how many different images my laptop's screen could display. I came up with (number of colors)^(number of pixels) so assuming 32768 colors I'm trying to get my head around the number, but I have a feeling it's too big to actually calculate.
Am I right that it's too big to calculate? If not, then how? If so then how would you approach grasping the magnitude?
Update: I realized a simpler way to get the same number is 2^(number of bits of video RAM) or "all the possible configurations of video RAM" - correct me if I'm wrong.
| Your original number is
$$2^{15\cdot 2^{20}}<2^{2^{24}}=10^{2^{24}\log_{10}2}<10^{5.1\cdot 10^{6}},$$
which is certainly computable since it has fewer than $5{,}100{,}000$ digits (about $4.7$ million, in fact).
The new, larger number is
$$2^{24\cdot 2^{20}}<2^{2^{25}}=10^{2^{25}\log_{10}2}<10^{1.1\cdot 10^{7}},$$
which is still computable since it has fewer than $11{,}000{,}000$ digits (about $7.6$ million, in fact).
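A direct digit count (a sketch; $1049088$ is the pixel count from the question):

```python
from math import log10

pixels = 1049088
for bits_per_pixel in (15, 24):
    digits = int(bits_per_pixel * pixels * log10(2)) + 1
    print(bits_per_pixel, digits)   # about 4.74 million and 7.58 million digits
```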
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/337227",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Analytic Geometry question (high school level) I was asked to find the focus and diretrix of the given equation: $y=x^2 -4$. This is what I have so far:
Let $F = (0, -\frac{p}{2})$ be the focus, $D = (x, \frac{p}{2})$ and $P = (x,y)$ which reduces to $x^2 = 2py$ for $p>0$. Now I have $x^2 = 2p(x^2 - 4)$ resulting in $ x^2 = \frac{-8p}{(1-2p)}$ I have no clue how to find the focus. I just know that it will be at $(0, -4+\frac{p}{2})$
Can I get help from some college math major? I went to the tutoring center at my high school but no one there understands what I'm talking about.
| I seem to remember the focal distance $p$ satisfies $4ap=1$ where the equation for the parabola is $y = ax^2 + bx + c$. Your focus will be $1/4$ above your vertex, and the directrix will be a horizontal line $1/4$ below your vertex.
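For concreteness, here is the worked check for the original equation (added here as an illustration, using the answer's $p$, the distance from vertex to focus): for $y=x^2-4$ we have $a=1$, so $4ap=1$ gives $p=\tfrac14$, and the vertex is $(0,-4)$. Hence
$$\text{focus}=\left(0,\,-4+\tfrac14\right)=\left(0,\,-\tfrac{15}{4}\right),\qquad \text{directrix: } y=-4-\tfrac14=-\tfrac{17}{4}.$$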
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/337338",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Generating function with quadratic coefficients. $h_k=2k^2+2k+1$. I need the generating function $$G(x)=h_0+h_1x+\dots+h_kx^k+\dots$$ I do not have to simplify this, yet I'd really like to know how Wolfram computed this sum as $$\frac{x(2x^2-2x+5)}{(1-x)^3}$$ when $|x|<1$. Rewrite Wolfram's answer as $$x(2x^2-2x+5)(1+x+x^2+\dots)^3=x(2x^2-2x+5)(1+2x+3x^2+\dots)(1+x+x^2+\dots),$$ but how would this give $G$?
| $$\sum_{k=0}^{\infty} x^k = \frac{1}{1-x}$$
$$\sum_{k=0}^{\infty}k x^k = \frac{x}{(1-x)^2}$$
$$\sum_{k=0}^{\infty}k^2 x^k = \frac{x(1+x)}{(1-x)^3}$$
$$G(x) = \sum_{k=0}^{\infty} (2 k^2+2 k+1) x^k$$
Combine the above expressions as defined by $G(x)$ and you should reproduce Wolfram.
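If you want to double-check the combination symbolically, here is a small sketch using sympy (assumed available; note the sum here starts at $k=0$, and the check compares coefficients rather than any particular printed form of the closed expression):
```python
import sympy as sp

x = sp.symbols('x')

# Weights 2, 2, 1 as dictated by h_k = 2k^2 + 2k + 1.
G = 2*x*(1 + x)/(1 - x)**3 + 2*x/(1 - x)**2 + 1/(1 - x)

poly = sp.series(G, x, 0, 8).removeO()
for n in range(8):
    assert poly.coeff(x, n) == 2*n**2 + 2*n + 1   # h_0 = 1, h_1 = 5, h_2 = 13, ...

print(sp.simplify(G))   # equivalent to (1 + x)^2 / (1 - x)^3
```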
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/337400",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
} |
Is the sum or product of idempotent matrices idempotent? If you have two idempotent matrices $A$ and $B$, is $A+B$ an idempotent matrix?
Also, is $AB$ an idempotent Matrix?
If both are true, can I see the proof? I am completely lost on how to prove both cases.
Thanks!
| Let $e_1$, $e_2 \in \mathbb{R}^n$ be linearly independent unit vectors with $c := \left\langle e_1,e_2\right\rangle \neq 0$, viewed as column vectors. For $i=1$, $2$, let $P_i := e_i e_i^T \in M_n(\mathbb{R})$ be the orthogonal projection onto $\mathbb{R}e_i$. Thus, $P_1$ and $P_2$ are idempotents with
$$
P_1 e_1 = e_1, \quad P_1 e_2 = c e_1, \quad P_2 e_1 = c e_2, \quad P_2 e_2 = e_2.
$$
Then:
*
*On the one hand,
$$
(P_1 + P_2)e_1 = e_1 + c e_2,
$$
and on the other hand,
$$
(P_1 + P_2)^2 e_1 = (P_1+P_2)(e_1+ce_2) = (1+c^2)e_1 + 2c e_2,
$$
so that $(P_1+P_2)^2 e_1 \neq (P_1+P_2)e_1$, and hence $(P_1+P_2)^2 \neq P_1+P_2$.
*On the one hand,
$$
P_1 P_2 e_1 = P_1 (c e_2) = c^2 e_1,
$$
and on the other hand,
$$
(P_1 P_2)^2 e_1 = P_1 P_2 (c^2 e_1) = c^4 e_1,
$$
so that since $0 < |c| < 1$, $(P_1 P_2)^2 e_1 \neq P_1 P_2 e_1$, and hence $(P_1 P_2)^2 \neq P_1 P_2$.
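A concrete numerical instance of this counterexample (a sketch assuming numpy is available, choosing $e_1=(1,0)$ and $e_2=\frac1{\sqrt2}(1,1)$, so $c=\frac1{\sqrt2}$):
```python
import numpy as np

e1 = np.array([1.0, 0.0])
e2 = np.array([1.0, 1.0]) / np.sqrt(2)   # c = <e1, e2> = 1/sqrt(2)

P1 = np.outer(e1, e1)
P2 = np.outer(e2, e2)
assert np.allclose(P1 @ P1, P1) and np.allclose(P2 @ P2, P2)  # both idempotent

S, M = P1 + P2, P1 @ P2
print(np.allclose(S @ S, S))   # False: the sum is not idempotent
print(np.allclose(M @ M, M))   # False: the product is not idempotent
```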
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/337457",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 1
} |
A question about topology regarding its conditions A question from a rookie.
As we know, $(X, T)$ is a topological space when the following conditions hold:
*
*The union of a family of $T$-sets, belongs to $T$;
*The intersection of a FINITE family of $T$-sets, belongs to $T$;
*The empty set and the whole $X$ belongs to $T$.
So when the condition 2 is changed into:
2'. The intersection of a family of $T$-sets, belongs to $T$;
can anyone give a legitimate topological space as a counterexample to condition 2'?
Thank you.
| The intersection of the intervals $(-1/n,\ 1/n)$ for $n\in\mathbb N$ is just $\{0\}$, which is not open in $\mathbb R$ with the Euclidean topology.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/337540",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
} |
Calculate:$\lim_{x \rightarrow (-1)^{+}}\left(\frac{\sqrt{\pi}-\sqrt{\cos^{-1}x}}{\sqrt{x+1}} \right)$ How can one calculate the following without using L'Hospital's rule?
$$\lim_{x \rightarrow (-1)^{+}}\left(\frac{\sqrt{\pi}-\sqrt{\cos^{-1}x}}{\sqrt{x+1}} \right)$$
| Let $\sqrt{\arccos(x)} = t$. We then have $x = \cos(t^2)$. Since $x \to (-1)^+$, we have $t^2 \to \pi^-$. Hence, we have
$$\lim_{x \to (-1)^+} \dfrac{\sqrt{\pi} - \sqrt{\arccos(x)}}{\sqrt{1+x}} = \overbrace{\lim_{t \to \sqrt{\pi}^-} \dfrac{\sqrt{\pi} - t}{\sqrt{1+\cos(t^2)}}}^{t = \sqrt{\arccos(x)}} = \underbrace{\lim_{y \to 0^+} \dfrac{y}{\sqrt{1+\cos((\sqrt{\pi}-y)^2)}}}_{y = \sqrt{\pi}-t}$$
$$1+\cos((\sqrt{\pi}-y)^2) = 1+\cos(\pi -2 \sqrt{\pi}y + y^2) = 1-\cos(2 \sqrt{\pi}y - y^2) = 2 \sin^2 \left(\sqrt{\pi} y - \dfrac{y^2}2\right)$$
Hence,
\begin{align}
\lim_{y \to 0^+} \dfrac{y}{\sqrt{1+\cos((\sqrt{\pi}-y)^2)}} & = \dfrac1{\sqrt2} \lim_{y \to 0^+} \dfrac{y}{\sin \left(\sqrt{\pi}y - \dfrac{y^2}2\right)}\\
& = \dfrac1{\sqrt2} \lim_{y \to 0^+} \dfrac{\left(\sqrt{\pi}y - \dfrac{y^2}2\right)}{\sin \left(\sqrt{\pi}y - \dfrac{y^2}2\right)} \dfrac{y}{\left(\sqrt{\pi}y - \dfrac{y^2}2\right)} = \dfrac1{\sqrt{2 \pi}}
\end{align}
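A quick numerical check of the value $\frac1{\sqrt{2\pi}}\approx0.3989$ (a sketch assuming numpy is available):
```python
import numpy as np

def f(x):
    return (np.sqrt(np.pi) - np.sqrt(np.arccos(x))) / np.sqrt(1.0 + x)

# Approach -1 from the right and compare with the claimed limit.
for eps in [1e-2, 1e-4, 1e-6, 1e-8]:
    print(eps, f(-1.0 + eps))
print("1/sqrt(2*pi) =", 1.0 / np.sqrt(2.0 * np.pi))
```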
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/337603",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 5,
"answer_id": 0
} |
Calculating line integral I'm working on this problem:
Calculate integral
\begin{equation}
\int_C\frac{z\arctan(z)}{\sqrt{1+z^2}}\,dz + (y-z^3)\,dx - (2x+z^3)\,dy,
\end{equation}
where the contour $C$ is defined by equations
$$
\sqrt{1-x^2-y^2}=z, \quad 4x^2+9y^2 = 1.
$$
Seems to me that I know the solution, but I have feeling that I could lost something. Would you help me to clarify this.
First, it is easy to parametrize the contour: $x=\frac{1}{2}\cos\varphi$, $y=\frac{1}{3}\sin\varphi$, $z=\sqrt{1-\frac{\cos^2\varphi}{4}-\frac{\sin^2\varphi}{9}}$ and $\varphi$ goes from $0$ to $2\pi$. So we will have the integral $\int_0^{2\pi}F(\varphi)\,d\varphi$, where the function $F(\varphi)$ is quite complicated.
But I thought about another method. The contour is symmetric and it would provide some simplifications:
When the variable $z$ goes up and down (on the contour), the values of the function $\frac{z\arctan(z)}{\sqrt{1+z^2}}$ are the same at such up-and-down points. So I can write
$$
\int_C\dots = \int_C (y-z^3)\,dx - (2x+z^3)\,dy.
$$
The same I can conclude for variable $x$ and function $z^3$ and for variable $y$ and function $z^3$. So
$$
\int_C\dots = \int_C y\,dx - (2x+z^3)\,dy = \int_C y\,dx - 2x\,dy.
$$
After that it is much easier to compute the integral using parameterization.
$$
\int_C y\,dx - 2x\,dy = \int_0^{2\pi}-\frac{1}{6}\sin^2\varphi -2\frac{1}{6}\cos^2\varphi \, d\varphi = -\frac{1}{6}\int_0^{2\pi}1+\cos^2\varphi\,d\varphi = -\frac{\pi}{2}
$$
So the answer is $-\frac{\pi}{2}$.
| It's absolutely fine to exploit the symmetries in the given problem. But we need a clear cut argument. Observing that a "variable goes up and down" doesn't suffice.
Relevant are the following symmetries in the parametrization of $C$:
$$x(\phi+\pi)=-x(\phi),\quad y(\phi+\pi)=-y(\phi),\quad z(\phi)=z(-\phi)=z(\phi+\pi)\ .$$
This implies
$$\dot x(\phi+\pi)=-\dot x(\phi),\quad \dot y(\phi+\pi)=-\dot y(\phi),\quad \dot z(-\phi)=-\dot z(\phi)\ .$$
When computing $W:=\int_C \bigl(P\ dx+Q\ dy+ R\ dz\bigr)$ for $P$, $Q$, $R$ as in your question we therefore immediately get
$$\eqalign{\int_C P\ dx&=\int_0^{2\pi}(y(\phi)-z^3(\phi))\dot x(\phi)\ d\phi=\int_0^{2\pi}y(\phi)\ \dot x(\phi)\ d\phi=-{1\over6}\int_0^{2\pi}\sin^2\phi\ d\phi=-{\pi\over6},\cr
\int_C Q\ dy&=\int_0^{2\pi}(-2x(\phi)-z^3(\phi))\dot y(\phi)\ d\phi=-2
\int_0^{2\pi}x(\phi)\ \dot y(\phi)\ d\phi=\ldots=-{\pi\over3},\cr
\int_C R\ dz&=\int_{-\pi}^\pi \tilde R(\phi)\dot z(\phi)\ d\phi=0\cr}$$
(where we have used that $\tilde R(-\phi)=\tilde R(\phi)$). It follows that $W=-{\pi\over2}$.
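For reassurance, the whole line integral can also be checked by direct numerical quadrature over the same parametrization, $\phi:0\to2\pi$ (a sketch assuming numpy and scipy are available):
```python
import numpy as np
from scipy.integrate import quad

a, b = 0.5, 1.0 / 3.0
x  = lambda t:  a * np.cos(t)
y  = lambda t:  b * np.sin(t)
z  = lambda t:  np.sqrt(1.0 - x(t)**2 - y(t)**2)
dx = lambda t: -a * np.sin(t)
dy = lambda t:  b * np.cos(t)
dz = lambda t: (a*a - b*b) * np.sin(t) * np.cos(t) / z(t)  # derivative of sqrt(1 - x^2 - y^2)

def integrand(t):
    R = z(t) * np.arctan(z(t)) / np.sqrt(1.0 + z(t)**2)
    P = y(t) - z(t)**3
    Q = -(2.0 * x(t) + z(t)**3)
    return R * dz(t) + P * dx(t) + Q * dy(t)

val, _ = quad(integrand, 0.0, 2.0 * np.pi)
print(val, -np.pi / 2)   # the two numbers should agree closely
```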
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/337649",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
} |
How fast can you determine if vectors are linearly independent? Let us suppose you have $m$ real-valued vectors of length $n$ where $n \geq m$.
How fast can you determine if they are linearly independent?
In the case where $m = n$ one way to determine independence would be to compute the determinant of the matrix whose rows are the vectors. I tried some googling and found that the best known algorithm to compute the determinant of a square matrix with $n$ rows runs in $O \left ( n^{2.373} \right )$. That puts an upper bound on the case where $m = n$. But computing the determinant seems like an overkill. Furthermore it does not solve the case where $n > m$.
Is there a better algorithm? What is the known theoretical lower bound on the complexity of such an algorithm?
| Please use the following steps:
*
*Arrange the vectors as the columns of a matrix.
*The vectors are always linearly dependent if the number of columns is greater than the number of rows (i.e. $m > n$).
*When the number of rows is at least the number of columns ($n \ge m$), the vectors are linearly independent if and only if elementary row operations can reduce the matrix to one whose columns are the distinct standard basis vectors (a vector with one component equal to $1$ and all other components $0$), i.e. there is a pivot in every column. Otherwise the vectors are linearly dependent.
So the only thing required is a fast algorithm for performing elementary row operations (Gaussian elimination) on an $n\times m$ matrix with $n \ge m$ and checking whether every column acquires a pivot; a small rank-based sketch is given below.
If someone thinks this answer is wrong, please prove it by giving some counterexamples.
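In floating-point practice this check is usually done via Gaussian elimination, QR, or a rank computation; a minimal sketch assuming numpy is available:
```python
import numpy as np

rng = np.random.default_rng(2)
n, m = 6, 4
V = rng.standard_normal((n, m))   # the m vectors of length n, stored as columns

# The columns are linearly independent exactly when the rank equals m.
print(np.linalg.matrix_rank(V) == m)
```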
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/337739",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 3,
"answer_id": 2
} |
How to calculate the number of pieces in the border of a puzzle? Is there any way to calculate how many border-pieces a puzzle has, without knowing its width-height ratio? I guess it's not even possible, but I am trying to be sure about it.
Thanks for your help!
BTW you might want to know that the puzzle has 3000 pieces.
| Obviously, $w\cdot h=3000$, and there are $2w+h-2+h-2=2w+2h-4$ border pieces. Since $3000=2^3\cdot 3\cdot 5^3$, possibilities are \begin{eqnarray}(w,h)&\in&\{(1,3000),(2,1500),(3,1000),(4,750),(5,600),(6,500),\\&&\hphantom{\{}(8,375),(10,300),(12,250),(15,200),(20,150),(24,125)\\ &&\hphantom{\{}(25,120),(30,100),(40,75),(50,60),(h,w)\},\end{eqnarray}
Considering this, your puzzle is probably $50\cdot60$ (I've never seen a puzzle whose side ratio is more extreme than $2:1$, i.e. with $\min(w,h)/\max(w,h)<1/2$), so there are $2\cdot50+2\cdot60-4=216$ border pieces. This is only $\frac{216\cdot100\%}{3000}=7.2\%$ of the puzzle pieces, which fits the usual standards.
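For completeness, the enumeration is easy to reproduce with a few lines of Python (an illustrative sketch):
```python
# List the possible w x h layouts of a 3000-piece puzzle and the border count 2w + 2h - 4.
N = 3000
for w in range(1, int(N**0.5) + 1):
    if N % w == 0:
        h = N // w
        print(f"{w:3d} x {h:4d}: {2*w + 2*h - 4} border pieces")
```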
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/337818",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 6,
"answer_id": 0
} |
Find the necessary and sufficient conditions for all $ 41 \mid \underbrace{11\ldots 1}_{n}$, $n\in N$. Find the necessary and sufficient condition on $n\in\mathbb N$ for $41 \mid \underbrace{11\ldots 1}_{n}$. Also, suppose $\underbrace{11\ldots 1}_{n} = 41\times p$ with
$p$ a prime number.
Find all possible values of $n$ satisfying this condition.
| The first sentence is asking that $41 \mid \frac {10^n-1}9$, which holds exactly when $n$ is a multiple of the period length of the decimal expansion of $\frac 1{41}$, namely $5$. The second statement forces $n$ to be that minimum value, $n=5$: indeed $11111=41\times271$ and $271$ is prime. Without it, any multiple of $5$ would work, but then $p$ would not be prime.
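A short computational check (a sketch; sympy is assumed available for the primality test):
```python
from sympy import isprime

repunit = lambda n: (10**n - 1) // 9

for n in range(1, 21):
    r = repunit(n)
    if r % 41 == 0:
        # Divisible at n = 5, 10, 15, 20; the cofactor is prime only at n = 5.
        print(n, "divisible; cofactor prime?", isprime(r // 41))
```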
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/337872",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
totally ordered group Suppose we have a nontrivial totally ordered group. Does this group have a maximum element?
A totally ordered group is a totally ordered structure $(G,\circ,\leq)$ such that $(G,\circ)$ is a group. I couldn't find a more exact definition.
| I assume you want the ordering to be compatible with the group operation, such that if $a \geq b$ and $c\geq d$ then $ac\geq bd$.
In this case, the group cannot have a maximal element, which we can see as follows: Assume $g$ is such a maximal element and let $h\in G$ with $h\geq 1$.
Now we have that $g\geq g$ and $h\geq 1$ so $gh\geq g$ which by maximality would mean $gh = g$ so $h = 1$.
But if $G$ is not trivial, it has an element $h$ with $h\geq 1$ and $h\neq 1$, which gives us our contradiction.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/337933",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
volume of "$n$-hedron" In $\mathbb{R}^n$, why does the "$n$-hedron" $|x_1|+|x_2|+\dots+|x_n| \le 1$ have volume $\cfrac{2^n}{n!}$? I came across this fact in some of Minkowski's proofs in the field of geometry of numbers.
Thank you.
| The $2^n$ comes because you have that many copies (one in each orthant) of the simplex $x_i \ge 0$, $x_1+\dots+x_n \le 1$.
The $n!$ comes from integrating up the volume. The area of the right triangle is $\frac 12$, the volume of the tetrahedron is $\frac 12 \cdot \frac 13$ and so on.
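A crude Monte Carlo check of the formula for small $n$ (an illustrative sketch assuming numpy is available; it samples the cube $[-1,1]^n$, whose volume is $2^n$):
```python
import numpy as np
from math import factorial

rng = np.random.default_rng(3)
for n in range(1, 5):
    pts = rng.uniform(-1.0, 1.0, size=(200_000, n))
    # Fraction of cube points with |x_1| + ... + |x_n| <= 1, rescaled by the cube volume.
    estimate = np.mean(np.abs(pts).sum(axis=1) <= 1.0) * 2**n
    print(n, round(float(estimate), 3), 2**n / factorial(n))
```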
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/337988",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 0
} |
Find a polynomial as $2836x^2-12724x+16129$ I found a polynomial function with integer coefficients:$f(x)=2836x^2-12724x+16129$
and $f(0)=127^2,f(1)=79^2,f(2)=45^2,f(3)=59^2,f(4)=103^2,f(5)=153^2.$
My question is: can we find a polynomial function $f(x)$ with integer coefficients, with no multiple roots, such that $f(0),f(1),f(2),f(3),\dots,f(k)$ are distinct square numbers? ($k>5$ is a given integer.)
Thanks all.
PS: I'm sorry, guys. I left out a very important condition: $f(x)$ should be a quadratic function $f(x)=ax^2+bx+c$ ($a,b,c$ are integers and $b^2-4ac\neq0$).
So the Lagrange interpolation method does not work.
I wonder whether there is always such a quadratic polynomial when $k$ is arbitrarily large.
| One such quadratic
$$p(t)=-4980t^2+32100t+2809$$
$p(0)=53^2,p(1)=173^2,p(2)=217^2,p(3)=233^2,p(4)=227^2,p(5)=197^2,p(6)=127^2$
Source: Polynomials, E. J. Barbeau.
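The claimed values are easy to verify computationally (an illustrative sketch):
```python
from math import isqrt

p = lambda t: -4980*t*t + 32100*t + 2809

for t in range(7):
    v = p(t)
    r = isqrt(v)
    print(t, v, r, r*r == v)   # roots 53, 173, 217, 233, 227, 197, 127 -- all perfect squares
```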
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/338037",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 4,
"answer_id": 0
} |
inscribed angles on circle
That's basically the problem. I keep getting $\theta=90-\phi/2$. But I have a feeling its not right. What I did was draw line segments BD and AC. From there you get four triangles. I labeled the intersection of BD and AC as point P. From exterior angles I got my answer.
| One way would be to let $E$ be the center of the circle. A standard result in geometry tells you that $\angle AEC=2\theta$. The two sides $AE$ and $CE$ have equal lengths, there are right angles at $A$ and $C$, and the sides $AD$ and $CD$ also have equal lengths. So the triangle $EAD$ is a right triangle congruent to $ECD$, and each contains half of $\angle AEC$, namely $\theta$; the remaining angle in each is therefore $90^\circ-\theta$. Doubling it, the angle you're looking for is $2(90^\circ-\theta)=180^\circ-2\theta$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/338101",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 1
} |
Surface integral over ellipsoid I've problem with this surface integral:
$$
\iint\limits_S {\sqrt{ \left(\frac{x^2}{a^4}+\frac{y^2}{b^4}+\frac{z^2}{c^4}\right)}}{dS}
$$, where
$$
S = \{(x,y,z)\in\mathbb{R}^3: \frac{x^2}{a^2} + \frac{y^2}{b^2} + \frac{z^2}{c^2}= 1\}
$$
| Let the ellipsoid $S$ be given by
$${\bf x}(\theta,\phi)=(a\cos\theta\cos\phi,b\cos\theta\sin\phi,c\sin\theta)\ .$$
Then for all points $(x,y,z)\in S$ one has
$$Q^2:={x^2\over a^4}+{y^2\over b^4}+{z^2\over c^4}={1\over a^2b^2c^2}\left(\cos^2\theta(b^2c^2\cos^2\phi+a^2c^2\sin^2\phi)+a^2b^2\sin^2\theta\right)\ .$$
On the other hand
$${\rm d}S=|{\bf x}_\theta\times{\bf x}_\phi|\>{\rm d}(\theta,\phi)\ ,$$
and one computes
$$\eqalign{|{\bf x}_\theta\times{\bf x}_\phi|^2&=\cos^4\theta(b^2c^2\cos^2\phi+a^2c^2\sin^2\phi)+a^2b^2\cos^2\theta\sin^2\theta\cr
&=\cos^2\theta\ (a^2b^2c^2\ \ Q^2)\ \cr}$$
It follows that your integral ($=:J$) is given by
$$\eqalign{J&=\int\nolimits_{\hat S} Q\ |{\bf x}_\theta\times{\bf x}_\phi|\>{\rm d}(\theta,\phi)=\int\nolimits_{\hat S}abc\ Q^2\ \cos\theta\ {\rm d}(\theta,\phi) \cr &={1\over abc}\int\nolimits_{\hat S}\cos\theta\left(\cos^2\theta(b^2c^2\cos^2\phi+a^2c^2\sin^2\phi)+a^2b^2\sin^2\theta\right)\ {\rm d}(\theta,\phi)\ ,\cr}$$
where $\hat S=[-{\pi\over2},{\pi\over2}]\times[0,2\pi]$. Using
$$\int_{-\pi/2}^{\pi/2}\cos^3\theta\ d\theta={4\over3},\quad \int_{-\pi/2}^{\pi/2}\cos\theta\sin^2\theta\ d\theta={2\over3},\quad \int_0^{2\pi}\cos^2\phi\ d\phi=\int_0^{2\pi}\sin^2\phi\ d\phi=\pi$$
we finally obtain
$$J={4\pi\over3}\left({ab\over c}+{bc\over a}+{ca\over b}\right)\ .$$
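The closed form is easy to confirm numerically for specific semi-axes (a sketch assuming numpy and scipy are available, using the same $(\theta,\phi)$ parametrization; the values of $a,b,c$ below are arbitrary):
```python
import numpy as np
from scipy.integrate import dblquad

a, b, c = 1.0, 2.0, 3.0

def integrand(theta, phi):
    # a*b*c * Q^2 * cos(theta), i.e. Q * |x_theta x x_phi|
    Q2 = (np.cos(theta)**2 * (b*b*c*c*np.cos(phi)**2 + a*a*c*c*np.sin(phi)**2)
          + a*a*b*b*np.sin(theta)**2) / (a*b*c)**2
    return a*b*c * Q2 * np.cos(theta)

val, _ = dblquad(integrand, 0.0, 2.0*np.pi, lambda phi: -np.pi/2, lambda phi: np.pi/2)
print(val, 4.0*np.pi/3.0 * (a*b/c + b*c/a + c*a/b))   # the two values should agree
```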
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/338155",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "12",
"answer_count": 3,
"answer_id": 1
} |
Integrating $x/(x-2)$ from $0$ to $5$ How would one go about integrating the following?
$$\int_0^5 \frac{x}{x-2} dx$$
It seems like you need to use long division, split it up into two integrals, and the use limits. I'm not quite sure about the limits part.
| Yes, exactly, you do want to use "long division"...
Note, dividing the numerator by the denominator gives you:
$$\int_0^5 {x\over{x-2}} \mathrm{d}x = \int_0^5 \left(1 + \frac 2{x-2}\right) \mathrm{d}x$$
Now simply split the integral into the sum of two integrals:
$$\int_0^5 \left(1 + \frac 2{x-2}\right) \mathrm{d}x \quad= \quad\int_0^5 \,\mathrm{d}x \;\; + \;\; 2\int_0^5 \frac 1{x-2} \,\mathrm{d}x$$
The problem, of course, is what happens with the limits of integration in the second integral: if $u = x-2$, the limits become $\big|_{-2}^{3}$, and the integral does not converge. There is a discontinuity at $u = 0$ (a vertical asymptote at $x = 2$), so the improper integral, evaluated as a limit as $u \to 0$, does not exist. Hence the sum of the integrals does not converge.
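Numerically you can watch the divergence by stopping just short of the singularity on each side (a sketch assuming scipy is available):
```python
import numpy as np
from scipy.integrate import quad

f = lambda x: x / (x - 2.0)

# Both one-sided pieces blow up (logarithmically) as eps -> 0,
# so the improper integral over [0, 5] does not converge.
for eps in [1e-1, 1e-3, 1e-5]:
    left, _  = quad(f, 0.0, 2.0 - eps)
    right, _ = quad(f, 2.0 + eps, 5.0)
    print(eps, left, right)
```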
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/338221",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 4,
"answer_id": 1
} |