Is Fibonacci sequence the minimum of unique pairwise sum sequence? Let $(a_n)_{n=1}^\infty$ be a strictly increasing (condition added per earlier answer of Amitesh Datta) sequence of natural numbers where all pairwise element sums are unique. Can anyone prove or disprove whether the Fibonacci sequence $(f_n)_{n=1}^\infty=(1,2,3,5,8,\cdots)$ is the "minimum" of such sequences, i.e., $f_n\le a_n$, for all such sequences $(a_n)$?
| No. The Mian-Chowla sequence is such a sequence, and it begins $1, 2, 4, 8, 13, \dots$; it also grows only polynomially fast, no faster than $n^3$.
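The Mian-Chowla sequence can be generated greedily: each term is the smallest integer that keeps all pairwise sums $a_i+a_j$ (with $i\le j$, doubles included) distinct. A small sketch, just to illustrate the claim:

```python
def mian_chowla(n):
    """Greedily build the first n Mian-Chowla terms: each new term is the
    smallest integer keeping all pairwise sums a_i + a_j (i <= j) distinct."""
    terms, sums = [], set()
    candidate = 1
    while len(terms) < n:
        new_sums = {candidate + t for t in terms} | {2 * candidate}
        if not (new_sums & sums):
            terms.append(candidate)
            sums |= new_sums
        candidate += 1
    return terms

print(mian_chowla(8))  # [1, 2, 4, 8, 13, 21, 31, 45]
```

Since this sequence grows only polynomially while the Fibonacci numbers grow exponentially, eventually $a_n < f_n$, which is the point of the answer.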
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/455389",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 0
} |
inversely proportional term in a task Let us consider the following problem:
The amount of time taken to paint a wall is inversely proportional to the number of painters working on the job. If it takes 3 painters 5 days to complete such a job, how many days longer will it take if there are only 2 painters working?
So let us recall the terminology of inverse proportionality: if one number is inversely proportional to another, it means that
$k=c\cdot 1/x$, where $k$ is the number inversely proportional to the number $c$ and $1/x$ is the coefficient of inverse proportionality. So in our case
$5=3\cdot 1/x$, so $x=3/5$; it means that one painter paints the wall in $3/5=0.6$ days, right? And $2$ painters will paint it in $2/0.6=3.3$ days, right? So the extra days would be $5-3.3=1.7$, but here
http://www.majortests.com/gre/numeric_entry_expl.php?exp=473031322437243236
the answer is $2.5$. Please help me.
| Let $d(n)$ be the number of days required when there are $n$ painters. We’re told that $d(n)$ is inversely proportional to $n$, so there is a constant $c$ such that $$d(n)=\frac{c}n\;.$$ We’re also told that $d(3)=5$, so $$5=d(3)=\frac{c}3\;,$$ and therefore $c=3\cdot5=15$. Therefore $$d(2)=\frac{15}2=7.5\;.$$ Thus, two painters will require $7.5-5=2.5$ more days than $3$ painters.
Another way to look at inverse proportionality is this: if $d$ and $n$ are inversely proportional, then multiplying $n$ by some factor $a$ causes $d$ to be multiplied by $\frac1a$. When we reduce the number of painters from $3$ to $2$, we’re multiplying $n$ by $\frac23$; $\frac1{2/3}=\frac32$, so the effect is to multiply $d$ by $\frac32$. And $\frac32\cdot5=\frac{15}2=7.5$, so once again we see that the increase in time required is $2.5$ days.
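The computation above is short enough to check directly; a tiny sketch, with the constant $c=15$ fixed by $d(3)=5$:

```python
def days(n, c=15.0):
    """Days needed by n painters, with d(n) = c/n and c fixed by d(3) = 5."""
    return c / n

assert days(3) == 5.0
print(days(2) - days(3))  # 2.5 extra days with only two painters
```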
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/455447",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Show that $\sin3\alpha \sin^3\alpha + \cos3\alpha \cos^3\alpha = \cos^32\alpha$
Show that $\sin3\alpha \sin^3\alpha + \cos3\alpha \cos^3\alpha = \cos^32\alpha$
I have tried $\sin^3\alpha(3\sin\alpha - 4 \sin^3\alpha) = 3\sin^4\alpha - 4\sin^6\alpha$ and $\cos^3\alpha(4\cos^3\alpha - 3\cos\alpha) = 4\cos^6\alpha - 3\cos^4\alpha$ to give
$$\sin3\alpha \sin^3\alpha + \cos3\alpha \cos^3\alpha = 3\sin^4\alpha - 4\sin^6\alpha + 4\cos^6\alpha - 3\cos^4\alpha$$
I can't work out how to simplify this to $\cos^32\alpha$.
I also noticed that the LHS of the question resembles $\cos(A-B)$, but I can't figure a way of making that useful.
| \begin{align}
L.H.S=& \sin 3\alpha \sin \alpha\sin^2\alpha+\cos 3\alpha\cos \alpha \cos^2 \alpha\\
\ =& \frac{1}{2}\left(\sin 3\alpha \sin \alpha (1-\cos 2\alpha)+\cos 3\alpha\cos \alpha(1+\cos 2\alpha)\right)\\
\ =& \frac{1}{2}\left(\sin 3\alpha \sin \alpha+ \cos 3\alpha\cos \alpha\right)+\frac{1}{2}\left(\cos 3\alpha\cos \alpha-\sin 3\alpha \sin \alpha\right)\cos 2\alpha\\
\ =& \frac{1}{2}\cos(3\alpha-\alpha)+\frac{1}{2}\cos(3\alpha+\alpha)\cos 2\alpha\\
\ =& \frac{1}{2}\cos 2\alpha(1+\cos 4\alpha)\\
\ =& \frac{1}{2}\cos 2\alpha \cdot 2\cos^2 2\alpha\\
\ =& \cos^3 2\alpha\hspace{6cm}\Box
\end{align}
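A numerical spot-check of the identity — not a proof, but reassuring when manipulating the half-angle substitutions above:

```python
import math

def lhs(a):
    return math.sin(3*a) * math.sin(a)**3 + math.cos(3*a) * math.cos(a)**3

def rhs(a):
    return math.cos(2*a)**3

# sample many angles; the two sides should agree to machine precision
for k in range(100):
    a = -3.0 + 0.06 * k
    assert abs(lhs(a) - rhs(a)) < 1e-12, a
print("identity holds at all sampled angles")
```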
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/455518",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 0
} |
Material derivative of a material vector field
On page 12 of An Introduction to Theoretical Fluid Dynamics, following the introduction of a material vector field $v_i(\mathbf a,t)=J_{ij}(\mathbf a,t)V_j(\mathbf a)$ the author wrote:
$$
\frac{\mathrm D \mathbf v}{\mathrm D t}
=
\left. \frac{\partial \mathbf v}{\partial t} \right| _ {\mathbf x}
+ \mathbf u \cdot \nabla \mathbf v
- \mathbf v \cdot \nabla \mathbf u
\equiv
v_t+\mathcal L_{\mathbf u} \mathbf v
= 0
$$
Question: Shouldn't the material derivative of $\mathbf v$ be the following? Where is the "extra" term with the negative sign from?
$$
\frac{\mathrm D \mathbf v}{\mathrm D t}
=
\left. \frac{\partial \mathbf v}{\partial t} \right| _ {\mathbf x}
+ \mathbf u \cdot \nabla \mathbf v
$$
Update: I believe it has something to do with Eqn. (1.22) which states that
$$
\left. \frac{\partial \mathbf v}{\partial t} \right |_{\mathbf a} =
\mathbf v\cdot\nabla\mathbf u
$$
| For some clarity the author has made the following calculation (I will explicitly give the variables that $\mathbf{v}$ depends on in each equation to avoid confusion)
$$\dfrac{\mathbf{Dv}}{\mathbf{D}t} = \dfrac{\text{d}\mathbf{v}(\mathbf{x}(t),t)}{\text{d}t}= \dfrac{\text{d}\mathbf{v}(\mathbf{a},t)}{\text{d}t} = \dfrac{\partial\mathbf{v}(\mathbf{a},t)}{\partial t}\tag{1}$$
But since $\mathbf{v}$ is a material vector field it satisfies the following differential equation
$$\dfrac{\partial\mathbf{v}(\mathbf{a},t)}{\partial t} = \mathbf{v}\cdot\nabla \mathbf{u}\tag{2}$$
By definition the material derivative (as it is really just a total derivative) is
$$\dfrac{\mathbf{Dv}}{\mathbf{D}t} = \dfrac{\partial\mathbf{v}(\mathbf{x}(t),t)}{\partial t} + \mathbf{u}\cdot\nabla\mathbf{v}(\mathbf{x}(t),t)\tag{3}$$
Then equation $(1)$ implies that
$$\dfrac{\mathbf{Dv}}{\mathbf{D}t}-\left.\dfrac{\partial\mathbf{v}}{\partial t}\right\vert_{\mathbf{a}} = 0$$
Rewriting this using equations $(2)$ and $(3)$ we arrive at
$$\left.\dfrac{\partial\mathbf{v}}{\partial t}\right\vert_{\mathbf{x}} + \mathbf{u}\cdot\nabla\mathbf{v}-\mathbf{v}\cdot\nabla \mathbf{u}=0$$
The fancy expression you see in this equation has a name, the Lie derivative
$$\mathcal{L}_\mathbf{u}\mathbf{v} = \mathbf{u}\cdot\nabla\mathbf{v}-\mathbf{v}\cdot\nabla \mathbf{u}$$
So finally we have the result
$$\mathbf{v}_t + \mathcal{L}_\mathbf{u}\mathbf{v} = 0$$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/455610",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
} |
Advanced undergraduate(?) Real Analysis book which is concise and has lots of interesting problems I have gone through the other book recommendations on Real Analysis, but I think my requirements and background are slightly different. I am a Physics undergrad teaching myself pure math. My journey in pure math has been highly non-linear. I have studied groups and rings (Dummit and Foote), some commutative algebra (Atiyah and Macdonald, ~3 chapters), and some representation theory (Fulton and Harris). I am looking for a challenging enough book for Real Analysis. It should cover the material in, e.g., baby Rudin, but I am thinking of something more concise but deeper, which has maybe more interesting and difficult problems.
I have done a course on Real Analysis taught from Bartle and Sherbert (I hope this text is not too obscure), but I wish to revisit the material and learn, maybe up to what a standard math undergrad is supposed to know, and also to develop my problem-solving skills.
Please feel free to close down the question.
| I like Kolmogorov and Fomin "Introductory Real Analysis" - which gives lots of examples and has plenty of good problems. But I'm not sure what kind of problem you are looking for.
If you are after challenging integrals and limits etc Hardy's "Pure Mathematics" has lots of those. I also think Apostol's Mathematical Analysis has a mix of challenging problems.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/455735",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 4,
"answer_id": 3
} |
How to check if two inequalities express the same thing? Suppose:
$$
\delta_1>0 \\
\delta_2>0 \\
\delta=\text{min}(\delta_1,\delta_2)
$$
If we know that the following is true:
$$
a-\delta_2<a-\delta<x<a \hspace{2cm} \text{ or } \hspace{2cm} a<x<a+\delta<a+\delta_1
$$
Can we conclude that the followng is true too?
$$
a-\delta_2\leq a-\delta < x < a+\delta \leq a+\delta_1
$$
I don't know how to formally show this.
Thank you in advance for any help provided.
| Yes, if you choose $|x-a|<\delta$.
Maybe I don't fully get this question, but your inequality is true for this specific $a$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/455824",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 0
} |
$f: X \to Y$ is a smooth map and $\omega$ is a $p$-form on $Y$, what is $\omega[f(x)]$? I am reading Guillemin and Pollack's Differential Topology Page 163:
If $f: X \to Y$ is a smooth map and $\omega$ is a $p$-form on $Y$, define a $p$-form $f^*\omega$ on $X$ as follows: $$f^*\omega(x) = (df_x)^*\omega[f(x)].$$
So my question is, what is $\omega[f(x)]$?
I have been trying to make this question self-contained, but here is the whole background: Definition of pullback. In that question, I have shown $(df_x)^*T(v_1, \dots, v_p) = T \circ df_x (v_1, \dots, v_p)$ and $\omega$ is $(df_x)^*T$. But I want the missing piece about what that $\omega[f(x)]$ is equal to, to make sense of the definition of $f^* \omega (x)$.
$\omega[f(x)]$ is the value of the form $\omega$ at the point $f(x)$. It is a vector in the $p$-th exterior power of the cotangent space at $f(x)$. (Remember that a $p$-form is a smooth function assigning to each point of a manifold such a vector.) I guess you were just confused by the use of square brackets, which were, I think, intended just to make it easier to read; round brackets would have been OK too, and it seems they didn't cause you any problems with the entirely analogous expression $f^*\omega(x)$ on the left side of the equation.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/455951",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
What exactly does linear dependence and linear independence imply? I have a very hard time remembering which is which between linear independence and linear dependence... that is, if I am asked to specify whether a set of vectors are linearly dependent or independent, I'd be able to find out if $\vec{x}=\vec{0}$ is the only solution to $A\vec{x}=\vec{0}$, but I would be stuck guessing whether that means that the vectors are linearly dependent or independent.
Is there a way that I can understand what the consequence of this trait is so that I can confidently answer such a question on a test?
| I've found the best way to understand this is as follows: a set of vectors is linearly dependent if you can write one of them in terms of the others.
When you multiply a matrix by a vector, $A\vec{x}$, that's shorthand for "multiply each column of $A$ by the corresponding entry of $\vec{x}$, and then add them together." If the columns in $A$ are linearly dependent, then there's some $\vec{x}$ that will allow them to cancel, giving you the zero vector.
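To make the "columns cancel" picture concrete, here is a small pure-Python sketch: for a square matrix the columns are linearly dependent exactly when the determinant vanishes, and in that case some nonzero $\vec{x}$ gives $A\vec{x}=\vec{0}$.

```python
def matvec(A, x):
    """Multiply each column of A by the matching entry of x and add them up."""
    return [sum(A[i][j] * x[j] for j in range(len(x))) for i in range(len(A))]

# Columns (1, 3) and (2, 6) are dependent: the second is twice the first.
A = [[1, 2],
     [3, 6]]
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
print(det)           # 0 -> columns are linearly dependent

x = [2, -1]          # 2*(first column) - 1*(second column)
print(matvec(A, x))  # [0, 0] -> a nonzero solution of Ax = 0
```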
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/456002",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10",
"answer_count": 7,
"answer_id": 2
} |
Pullback expanded form.
Definition. If $f: X \to Y$ is a smooth map and $\omega$ is a $p$-form on $Y$, define a $p$-form $f^*\omega$ on $X$ as follows: $$f^*\omega(x) = (df_x)^*\omega[f(x)].$$
According to Daniel Robert-Nicoud's nice answer to $f: X \to Y$ is a smooth map and $\omega$ is a $p$-form on $Y$, what is $\omega[f(x)]$?, locally, differential form can be written as
$$\omega_\alpha(y) = \alpha(y)dx^{i_1}\wedge\ldots\wedge dx^{i_p}$$
with $\alpha$ a smooth function. Then
$$f^*\omega_\alpha(x) = (df_x)^*[(\alpha\circ f)(x)dx^{i_1}\wedge\ldots\wedge dx^{i_p}].$$
Hence, if $$\omega_\alpha(y) = \alpha(y)dx^{i_1}\wedge\ldots\wedge dx^{i_p},$$
$$\theta_\beta(y) = \beta(y)dx^{j_1}\wedge\ldots\wedge dx^{j_q},$$
Do I get $\omega \wedge \theta$ such that
$$\omega_\alpha \wedge \theta_\beta(y)= \gamma(y)dx^{k_1}\wedge\ldots\wedge dx^{k_{p+q}}?$$
If so, how can I prove it?
And can I write $\gamma$ in terms of $\alpha$ and $\beta$?
Thank you~~~
| I suppose generally a $p$-form is a sum of such terms, but if we can understand how one such element pulls-back then linearity extends to $\sum_{i_1, \dots , i_p}\alpha_{i_1,\dots , i_p} dy^{i_1} \wedge \cdots \wedge dy^{i_p}$. That said, to calculate $\gamma$ you just have to sort out the sign needed to arrange the indices on the wedge of $\omega_{\alpha} \wedge \theta_{\beta}$. I prefer the notation, assuming $I \cap J = \emptyset$,
$$ (\alpha dy^I) \wedge (\beta dy^J) = \alpha \beta dy^I\wedge dy^J = \alpha \beta (-1)^{\sigma(I,J)}dy^K $$
here $\alpha\beta$ merely indicated the product of the scalar-valued functions $\alpha$ and $\beta$ and I have suppressed the $y$-dependence as it has little to do with the question. Here $\sigma(I,J)$ is the number of transpositions needed to rearrange $(I|J)$ into $K$.
Of course, you could just leave the $I$ and $J$ unchanged in which case the $\gamma = \alpha \beta$. I rearranged them because in some of what you are interested in reading there will be a supposition that the indices are arranged in increasing order so if $I = (1,2,5)$ and $J = (3,6)$ then you'll want $(1,2,5)(3,6) \rightarrow K = (1,2,3,5,6)$ which requires flipping $3$ and $5$ hence $\sigma(I,J) = 1$. Ok, usually we use "sgn" of a permutation to get this sign so my notation is a bit nonstandard.
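The sign bookkeeping can be automated. Assuming $I \cap J = \emptyset$ and both blocks sorted increasingly, $\sigma(I,J)$ equals the number of cross-inversions between the blocks — pairs $i\in I$, $j\in J$ with $i>j$, each needing one transposition. A sketch reproducing the example above:

```python
def sigma(I, J):
    """Number of cross-inversions between sorted blocks I and J: pairs
    (i, j) with i in I, j in J and i > j; each needs one transposition."""
    return sum(1 for i in I for j in J if i > j)

print(sigma((1, 2, 5), (3, 6)))        # 1 -- flip 5 past 3
print((-1) ** sigma((1, 2, 5), (3, 6)))  # overall sign -1
```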
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/456086",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Can one zero-pad data prior to Fourier transformation, then reverse the change afterwards? Suppose I have a set of $n$ points $\underline{x}\in\mathbb{C}^n$ with $n \in \mathbb{P}$ ($n$ is prime), and I want to find the Fourier transform of $\underline{x}$.
There are some prime-length Fourier algorithms out there, but what about this... What if I add a single constant, say zero, to the end of $\underline{x}$ to give $\underline{\hat{x}}$. If I compute the Fourier transform of $\underline{\hat{x}}$, I can use various faster, simpler algorithms. (For argument's sake, suppose the length of $\underline{\hat{x}}$ is highly composite, although this isn't strictly necessary, in which case I could use the Cooley-Tukey FFT).
The data I get from Fourier transforming $\underline{\hat{x}}$ is not the same as the data I would have gotten had I applied the same transform to $\underline{x}$; not least because the former is longer than the latter.
Is there some simple tweak I can make to the result of Fourier transforming $\underline{\hat{x}}$ to make it like the Fourier transform of $\underline{x}$? Surely there is a trivial relationship between the two results.
(I understand there may be a slight numerical inaccuracy)
| They are the same Fourier transform but just sampled at different locations. If your signal vector is of length $N$ and sampled at $f_s$ Hertz, then an $N$-point DFT will have a sample spacing of $f_s / N$ Hertz/sample with the samples ranging from $-f_s/2$ to $+f_s/2 - f_s/N$. If you zeropad your data out to $M>N$ samples by appending zeros to the end of it and then take the DFT, you'll get the same spectrum but with a sample spacing of $f_s/M$ Hertz/sample with the samples ranging from $-f_s/2$ to $+f_s/2 - f_s/M$. This technique is commonly used to manipulate the sample spacing.
Note, that this works in the other direction, too. A quick and easy way to upsample your data is to zeropad its DFT coefficients with zeros and then inverse transform it back to the time domain.
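The claim "same transform, different sample locations" can be seen with a hand-rolled DFT (pure Python, no libraries — a sketch, not production FFT code). When the padded length $M$ is a multiple of the original length $N$, every $N$-point bin reappears among the $M$-point bins:

```python
import cmath

def dft(x):
    """Naive O(N^2) discrete Fourier transform."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

x = [1.0, 2.0, 0.5, -1.0]      # N = 4
X = dft(x)
X_pad = dft(x + [0.0] * 4)     # zero-pad to M = 8

# The 8-point DFT samples the same underlying spectrum twice as densely:
# bin 2k of the padded transform equals bin k of the original.
for k in range(4):
    assert abs(X_pad[2 * k] - X[k]) < 1e-9
print("padded DFT agrees with original at every other bin")
```

For the question's prime-$N$-to-$(N{+}1)$ padding, the two bin grids $k/N$ and $k/(N{+}1)$ coincide only at DC, which is why no simple per-bin tweak recovers the $N$-point DFT — consistent with the answer's point that both are samples of the same spectrum at different frequencies.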
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/456151",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
What do $R$, $R^2$ and $R^{-1}$ represent? Question
What do $R$, $R^2$ and $R^{-1}$ represent when $R$ is a relation on the set of natural numbers?
I'm doing some homework, but the $R^2$ and $R^{-1}$ notation confuses me.
Does $R^2 = R*R$?
Does $R^{-1} = \frac{1}{R}$?
| $R^{-1}$ is the inverse relation: $R^{-1}=\{\langle \ell,k\rangle:\langle k,\ell\rangle\in R\}$. In other words, you get $R^{-1}$ from $R$ by turning each ordered pair in $R$ around. If $R=\{\langle n,n+1\rangle:n\in\Bbb N\}$, so that $R$ contains such pairs as $\langle 1,2\rangle$ and $\langle 7,8\rangle$, then $R^{-1}=\{\langle n+1,n\rangle:n\in\Bbb N\}$ and contains such pairs as $\langle 2,1\rangle$ and $\langle 8,7\rangle$.
$R^2$ is the composition of $R$ with itself. An ordered pair $\langle k,\ell\rangle$ is in $R^2$ if and only if there is an $n\in\Bbb N$ such that $\langle k,n\rangle$ and $\langle n,\ell\rangle$ are in $R$. In terms of the steppingstone analogy, if $R$ lets you get from $k$ to $\ell$ in two steps, then $R^2$ lets you do it in one. If $R$ is the relation that I used as an example in the first paragraph, $R$ lets you go from $k$ to $\ell$ in one step exactly when $\ell=k+1$, so it lets you get from $k$ to $\ell$ in two steps exactly when $\ell=k+2$. In this case, then, it turns out that
$$R^2=\{\langle n,n+2\rangle:n\in\Bbb N\}\;.$$
Note that if $R$ is transitive, and it lets you get from $k$ to $\ell$ in two steps, then by the definition of transitivity it already lets you get from $k$ to $\ell$ in one step. Thus, if $R$ is transitive you’ll always find that $R^2\subseteq R$: every pair in $R^2$ is already in $R$.
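These operations are easy to play with using finite relations as Python sets of pairs — a small sketch with the successor relation restricted to a finite range:

```python
def inverse(R):
    """R^{-1}: turn every ordered pair around."""
    return {(b, a) for (a, b) in R}

def square(R):
    """R^2: pairs (k, l) reachable in two R-steps via some middle n."""
    return {(k, l) for (k, n1) in R for (n2, l) in R if n1 == n2}

R = {(n, n + 1) for n in range(10)}   # successor relation on 0..10
print((2, 1) in inverse(R))           # True: pairs turned around
print(square(R) == {(n, n + 2) for n in range(9)})  # True: two steps at once
```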
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/456231",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
} |
Are real coefficients also complex coefficients? Can $x^2 + x +1$ be called a polynomial with complex coefficients?
I know that all real numbers are complex numbers, so does this hold here as well?
| The polynomial $x^2 + x +1$ can certainly be called a polynomial with complex coefficients, but moreover the idea of doing so has important mathematical applications. For example, when proving that a real symmetric matrix has a real eigenvalue, it is very convenient to extend the scalars to $\mathbb{C}$, find an eigenvalue over the complex domain, and then show that the eigenvalue found is actually real. Moreover, historically speaking, mathematicians realized the importance of complex numbers by noticing that in order to find the real roots of some cubic polynomials by Cardano's formula, one necessarily passes through the complex domain in computing them! I can provide historical references if you are interested.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/456300",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 3
} |
Convergence in metric and in measure Let $\mu$ be a finite measure on $(X, A)$, with the semimetric
$$ d(f,g) = \int \frac{|f-g|}{1+ |f-g|}d\mu$$
on all real-valued, A-measurable functions.
Show that $$\lim_n d(f_n, f) = 0$$ holds iff
$(f_n)$ converges to $f$ in measure.
I know that convergence in mean implies convergence in measure but
$ \int \frac{|f-g|}{1+ |f-g|}d\mu \leq \int {|f-g|}d\mu $
and also
$ \mu(\{x\in X : |f_n(x) - f(x)| > \epsilon\}) \leq \frac1\epsilon\int {|f_n-f|}\,d\mu $.
So I don't know what to do.
| First, a remark: the map $x\mapsto \frac x{1+x}$ is increasing over the set of non-negative real numbers, and is bounded by $1$.
If $f_n\to f$ in measure, fix $\varepsilon$ and integrate over $\{|f_n-f|>\varepsilon\}$ and $\{|f_n-f|\leqslant \varepsilon\}$.
Conversely, if $d(f_n,f)\to 0$, then $\frac{\varepsilon}{1+\varepsilon}\mu\{|f_n-f|>\varepsilon\}\to 0$ (integrating over the set $\{|f_n-f|>\varepsilon\}$).
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/456355",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9",
"answer_count": 1,
"answer_id": 0
} |
Does the Pigeonhole principle apply in this problem? I came across this problem a while ago at school during a math contest. I don't remember the exact instruction (word for word) but it went something like:
Pick at random two natural integers $A, B \in [0,100]$ and start a Fibonacci-like sequence.
A + B => C
B + C => D
etc.
If the sum goes strictly above 100, trim the first digit (leaving only the tens and units) and go on. Prove that no matter what the two random starting numbers are, the sequence will repeat itself at some point.
I thought at the time that the Pigeonhole principle could help me prove this easily, but I never achieved it. Any help on this problem would be great. Sorry for the lack of mathematical vocabulary; English isn't my native language and technical vocabulary is hard to use correctly.
| Look at the sequence of couples of consecutive terms. Observe that given two numbers $(a,b)$, the rest of the sequence is fully determined; therefore, it is sufficient to prove that no matter what the initial two numbers are, at some point you will fall back to two successive terms $(a^\prime,b^\prime)$ previously encountered.
Then the pigeonhole principle applies, since your numbers are always in $\{0,\dots,100\}$ — you have only $101^2$ possible different couples $(a,b)$, but your sequence of couples has infinitely many terms.
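The pigeonhole argument can be simulated directly: iterate the trimming rule, record each consecutive pair, and a repeat is guaranteed within $101^2 + 1$ steps. A sketch (the trimming rule — drop the hundreds digit when the sum exceeds 100 — is taken from the question):

```python
def steps_until_repeat(a, b):
    """Iterate the trimmed Fibonacci rule from (a, b) until a pair of
    consecutive terms recurs; return how many steps that took."""
    seen = {(a, b)}
    for step in range(1, 101 * 101 + 2):
        s = a + b
        a, b = b, s % 100 if s > 100 else s   # drop the hundreds digit
        if (a, b) in seen:
            return step
        seen.add((a, b))
    return None  # unreachable: there are only 101*101 distinct pairs

# every sampled starting pair in [0, 100]^2 must cycle
assert all(steps_until_repeat(a, b) is not None
           for a in range(0, 101, 10) for b in range(0, 101, 10))
print(steps_until_repeat(1, 1))
```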
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/456423",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
How to find the vertex of a 'non-standard' parabola? $ 9x^2-24xy+16y^2-20x-15y-60=0 $ I have to find the vertex of the parabola given by:
$$ 9x^2-24xy+16y^2-20x-15y-60=0 $$
I don't know what to do. I tried to bring it in the form:
$$ (x-a)^2 + (y-b)^2 = \dfrac {(lx+my+n)^2} {l^2+m^2} $$
but failed in doing so. Is there any other way to solve the problem? Or maybe you could help me bring the equation in the above form.
| Think of the standard equations of a parabola you know $y=x^2$ or $y^2=4ax$ - something squared = something linear, and the squared quantity and the linear quantity represent axes at right-angles to one another.
The vertex occurs where the squared quantity is equal to zero, ie on the axis of symmetry.
Now notice that the equation can be rewritten as $$(3x-4y)^2=5(4x+3y+12)$$
Check that the axes implied by this form are perpendicular.
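The rewriting can also be verified numerically: the difference between $(3x-4y)^2-5(4x+3y+12)$ and the original quadratic should vanish identically. A quick sketch:

```python
def original(x, y):
    return 9*x*x - 24*x*y + 16*y*y - 20*x - 15*y - 60

def rewritten(x, y):
    return (3*x - 4*y)**2 - 5*(4*x + 3*y + 12)

# agreement on a grid of integer points: the two degree-2 polynomials
# expand to exactly the same thing
assert all(original(x, y) == rewritten(x, y)
           for x in range(-5, 6) for y in range(-5, 6))
print("the two quadratic forms are identical")
```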
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/456476",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Logic problem, proposed by Euler A professor of mine asked me this challenge.
"(Proposed by Euler) A person bought horses and oxen. Paid 31 shields per horse and 20 per ox, and found that all oxen cost 7 shields more than all the horses. How many horses and oxen were bought? "
And I am not getting it...
Call $C$ the number of horses and $B$ the number of oxen...
From the data we have $31C=x$ and $20B=x+7$, so $$20B=31C+7\Rightarrow20B-31C=7$$ Now just solve the Diophantine equation; am I correct?
| Yes, that's right.
Note that this linear Diophantine equation has infinitely many solutions... I suppose the "most likely" solution is the one with positive number of horses and oxen that costs the least total.
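A brute-force search for the smallest positive solution of $20B-31C=7$ (variable names follow the question: $B$ oxen, $C$ horses):

```python
def smallest_solution(limit=1000):
    """Smallest positive solution (B oxen, C horses) of 20*B - 31*C = 7,
    searching by total head count so the cheapest purchase comes first."""
    for total in range(2, 2 * limit):
        for C in range(1, total):
            B = total - C
            if 20 * B - 31 * C == 7:
                return B, C
    return None

B, C = smallest_solution()
print(C, "horses and", B, "oxen")  # 3 horses and 5 oxen: 31*3 = 93, 20*5 = 100
```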
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/456545",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Prove that $pTq \longleftrightarrow |p| = |q|$ is an Equivalence Relation on $A$, the set of all points in the plane I want to prove that this relation is an equivalence relation on $A$.
*
*$A$ set of all points in the plane
*$pTq \longleftrightarrow |p| = |q|$ , |p| is the distance from origin.
About transitivity: are there counter-examples?
Reflexivity is obvious: for $(x,x)$ the distance is the same.
For symmetry: if $(x,y)\in T$ then $(y,x) \in T$, since the distances are the same.
If it is an equivalence relation, what are the equivalence classes, and what is the partition of the set?
I would like to get some suggestions.
| Notice that this relation $T$ is defined via the equality relation, which is the most natural equivalence relation, so $T$ inherits the same properties and is therefore also an equivalence relation.
Remark. You can use this method for any relation defined by
$$xRy\iff f(x)=f(y)$$
For the equivalence class of $x$:
$$[x]=\{y;\, xRy\}=\{y;\, |x|=|y|\}=C(O,|x|) $$
where $C(O,|x|)$ is the circle of center the origin of the plane and radius $|x|$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/456602",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Probability density function for radius within part of a sphere I would like to find the probability density function for radius within a given section of a sphere. For example, suppose I specify $\pi / 4 < \theta < \pi / 3$ and $\pi /7 < \phi < \pi /5 $ and $1 < r < 4$. If I select a point at random from within this region, what is the probability distribution of the resulting values?
| Define $V:[1,4]\rightarrow\mathbb{R}$ by
$$
V(r):=\int_{\pi/4}^{\pi/3}\int_{\pi/7}^{\pi/5}\int_1^r\rho^2\sin\phi\,d\rho\,d\phi\,d\theta.
$$
(Here, we have used the spherical coordinate transformation where $\rho$ is distance to the origin, $\theta$ is the angle formed in the $(x,y)$-plane, and $\phi$ is measured down from the positive $z$-axis. If you've used $\phi$ and $\theta$ differently, as sometimes happens, just adjust accordingly.)
Then the volume of your entire region is $V(4)$. Now, assuming you select your point uniformly at random in the sphere, the cumulative distribution function for the radius $R$ of the point in question is
$$
F_R(r):=P(R\leq r)=\begin{cases}0 & \text{if }r\leq 1\\ V(r)/V(4) & \text{if }1< r\leq 4\\1 & \text{if }r>4\end{cases}
$$
To find the density from this, just differentiate! You will use the fact that, by the Fundamental Theorem of Calculus,
$$
V'(r)=\int_{\pi/4}^{\pi/3}\int_{\pi/7}^{\pi/5}r^2\sin\phi\,d\phi\,d\theta.
$$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/456687",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Can $p^{q-1}\equiv 1 \pmod {q^3}$ for primes $p<q$? For a prime $q$, can it be that
$$
p^{q-1}\equiv 1 \pmod{q^k}
$$
for some prime $p<q$ and for $k\ge 3$?
There doesn't seem to be a case with $k=3$ and $q<90000$, and I also checked for small solutions with $3<k\le 20$ and found none.
If we remove the condition $p<q$ then there are always solutions, e.g. $15441^{16}\equiv 1 \pmod{17^5}$. Also for $k=2$ there are many, e.g. $71^{330} \equiv 1 \pmod {331^2}$.
| Let $w>1$ be any integer, let $q$ be an odd prime, and suppose $w^{q-1} \equiv 1 \pmod {q^3}$. Let $v$ be a primitive root mod $q^3$ with $v^h \equiv w \pmod {q^3}$. Then $v^{h(q-1)} \equiv 1 \pmod {q^3}$, so $h = q^2 k$ for some $k \ge 1$. Assume $k > 1$. Then $w^{(q-1)/k} \equiv 1 \pmod {q^3}$, and $v^{q^2 k - k} \equiv w/v^k \pmod {q^3}$, so $v^{(q^2 k - k)q^2} \equiv 1 \pmod {q^3}$ and therefore $(w/v^k)^{q^2} \equiv 1 \pmod {q^3}$. If the order of $w$ mod $q^3$ is $M$, then from $(w/v)^{q^2 M} \equiv 1 \pmod {q^3}$ we get $v^{q^2 M} \equiv 1 \pmod {q^3}$. Yet this implies $M = q-1$, so the order of $w$ mod $q^3$ is not $<(q-1)$ — contradiction. So $k = 1$, and $v^{q^2} \equiv w \pmod {q^3}$: the order of $w$ mod $q^3$ is $q-1$. If $w = p$ is a prime $< q$, then $p^{q-1} \equiv 1 \pmod {q^3}$ where $q-1$ is the order of $p$. Write $p = q - v$; then $(q-v)^q \equiv (q-v) \pmod {q^3}$. So $(q^2 v^{q-1} - v^q) \equiv (q-v) \pmod {q^3}$, therefore $v^{q-1}(q^2 - v) \equiv (q-v) \pmod {q^3}$; then $-vq \equiv q^2 - vq \pmod {q^3}$, giving $q^2 \equiv 0 \pmod {q^3}$ — contradiction. So if $p < q$, the order of $p$ mod $q^3$ cannot be $q-1$.
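The empty search the question reports is easy to reproduce on a much smaller range with three-argument `pow` (a sketch; the range $q<500$ is far below the question's $q<90000$):

```python
def primes_below(n):
    """Simple sieve of Eratosthenes."""
    sieve = [True] * n
    sieve[:2] = [False, False]
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p*p::p] = [False] * len(sieve[p*p::p])
    return [p for p, is_p in enumerate(sieve) if is_p]

ps = primes_below(500)
hits = [(p, q) for q in ps for p in ps
        if p < q and pow(p, q - 1, q ** 3) == 1]
print(hits)  # [] -- no primes p < q < 500 with p^(q-1) = 1 (mod q^3)
```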
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/456744",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "12",
"answer_count": 1,
"answer_id": 0
} |
Derivative of an integral $\sqrt{t}\sin t dt$ I need to find the derivative of this function. I know I need to separate the integrals into two and use the chain rule but I am stuck.
$$y=\int_\sqrt{x}^{x^3}\sqrt{t}\sin t~dt~.$$
Thanks in advance
| Hint
By the chain rule one easily proves:
If
$$F(x)=\int_{u(x)}^{v(x)}f(t)dt$$
then
$$F'(x)=f(v(x))v'(x)-f(u(x))u'(x)$$
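A numeric sanity check of this formula for the question's $F(x)=\int_{\sqrt x}^{x^3}\sqrt t\,\sin t\,dt$: compare a finite-difference derivative of $F$ (with the inner integral done by Simpson's rule) against $f(x^3)\cdot 3x^2 - f(\sqrt x)\cdot\frac{1}{2\sqrt x}$. A rough sketch with loosely chosen tolerances:

```python
import math

def f(t):
    return math.sqrt(t) * math.sin(t)

def simpson(g, a, b, n=2000):
    """Composite Simpson's rule with n (even) subintervals."""
    h = (b - a) / n
    s = g(a) + g(b)
    s += 4 * sum(g(a + h * i) for i in range(1, n, 2))
    s += 2 * sum(g(a + h * i) for i in range(2, n, 2))
    return s * h / 3

def F(x):
    return simpson(f, math.sqrt(x), x ** 3)

x, h = 2.0, 1e-5
numeric = (F(x + h) - F(x - h)) / (2 * h)           # central difference
formula = f(x ** 3) * 3 * x ** 2 - f(math.sqrt(x)) / (2 * math.sqrt(x))
assert abs(numeric - formula) < 1e-5
print(numeric, formula)
```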
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/456826",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
} |
Clifford Algebras What would be the best source to learn Clifford Algebras from? Anything online would suffice or any textual sources for that matter..
I'm interested in doing a project in the subject, but I'm not sure where to begin to learn. What would be the prerequisites to ensure a stable foundation?
Thanks
I'm also aware that they can be applied to Digital Image Processing.
| Clifford algebras arise in several mathematical contexts (e.g., spin geometry, abstract algebra, algebraic topology etc.). If you're just interested in the algebraic theory, then the prerequisites would probably be a solid background in abstract algebra. For example, I think linear algebra and ring theory are prerequisites but in practice, one should probably know more (e.g., for motivation and mathematical maturity). If you could elaborate further on your mathematical background, then I'm happy to provide more detailed suggestions.
I think this link provides a nice elementary introduction to Clifford algebras: http://www.av8n.com/physics/clifford-intro.htm. If that's too basic for you, then also have a look at: http://www.fuw.edu.pl/~amt/amt2.pdf. If you're familiar with algebraic topology, then the following paper is very interesting: http://www.ma.utexas.edu/users/dafr/Index/ABS.pdf.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/456968",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 2,
"answer_id": 0
} |
prove that a connected graph with $n$ vertices has at least $n-1$ edges Show that every connected graph with $n$ vertices has at least $n − 1$ edges.
How can I prove this? Conceptually, I understand that the following graph has 3 vertices and 2 edges:
a-----b-----c
with $a$, $b$ and $c$ being vertices, and $\{a,b\}$, $\{b,c\}$ being edges.
Is there some way to prove this logically?
--UPDATE--
Does this look correct? Any advice on how to improve this proof would be appreciated. Thank you.
| Hint: Let $\Gamma$ be a connected graph. If $T \subset \Gamma$ is a maximal subtree, then $|E(\Gamma)| \geq |E(T)|$ and $|V(\Gamma)|=|V(T)|$. (Where $E(\cdot)$ and $V(\cdot)$ is the set of edges and vertices respectively.)
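One way to make the maximal-subtree hint concrete is a breadth-first search: each vertex other than the root is first discovered along a distinct edge, so a connected graph on $n$ vertices has at least $n-1$ edges. A rough sketch:

```python
from collections import deque

def tree_edges(adj, root=0):
    """BFS a connected graph (adjacency lists); return the edges along
    which each vertex was first discovered (a spanning tree)."""
    seen, tree = {root}, []
    queue = deque([root])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                tree.append((u, v))
                queue.append(v)
    return tree

# the path a - b - c from the question, as vertices 0 - 1 - 2
adj = {0: [1], 1: [0, 2], 2: [1]}
print(len(tree_edges(adj)))  # 2 == n - 1 distinct edges found
```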
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/457042",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "30",
"answer_count": 8,
"answer_id": 5
} |
Compact variety which is not projective While reading Andreas Gathmann's notes on Algebraic Geometry, I stumbled upon this statement: "Projective varieties form a large class of “compact” varieties that do admit such a unified global description. In fact, the class of projective varieties is so large that it is not easy to construct a variety that is not (an open subset of) a projective variety.".
I know that we can sometimes glue affine varieties together and create compact spaces (in fact, Gathmann constructs $\mathbb{P^1}(\mathbb{C})$ as a compactification of $\mathbb{A}^1$). Also affine varieties are not compact unless they are single points. But my question is: is there an example of a variety which is "compact" but not projective?
Gathmann does not provide such an example, so maybe someone here can help.
| Such an example does not exist in dimension 1. For dimension 3, see Appendix B in Hartshorne, Example 3.4.1.
As Liu pointed out below, there is a list discussing related questions in Hartshorne, Chapter II, beneath Remark 4.10.2.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/457154",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 1,
"answer_id": 0
} |
How to prove $\int_{-\infty}^{+\infty} f(x)dx = \int_{-\infty}^{+\infty} f\left(x - \frac{1}{x}\right)dx?$ If $f(x)$ is a continuous function on $(-\infty, +\infty)$ and $\int_{-\infty}^{+\infty} f(x) \, dx$ exists. How can I prove that
$$\int_{-\infty}^{+\infty} f(x) \, dx = \int_{-\infty}^{+\infty} f\left( x - \frac{1}{x} \right) \, dx\text{ ?}$$
| We can write
\begin{align}
\int_{-\infty}^{\infty}f\left(x-x^{-1}\right)dx&=\int_{0}^{\infty}f\left(x-x^{-1}\right)dx+\int_{-\infty}^{0}f\left(x-x^{-1}\right)dx\\
&=\int_{-\infty}^{\infty}f(2\sinh\theta)\,e^{\theta}d\theta+\int_{-\infty}^{\infty}f(2\sinh\theta)\,e^{-\theta}d\theta\\
&=\int_{-\infty}^{\infty}f(2\sinh\theta)\,2\cosh\theta\,d\theta\\
&=\int_{-\infty}^{\infty}f(x)\,dx.
\end{align}
To pass from the first to the second line, we make the change of variables $x=e^{\theta}$ in the first integral and $x=-e^{-\theta}$ in the second one.
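As a numerical sanity check of the identity (not part of the proof), take the hypothetical choice $f(x)=e^{-x^2}$, so both sides should equal $\sqrt{\pi}$. A plain-Python midpoint rule works well here because its sample points avoid $x=0$, where $f(x-1/x)$ extends continuously by $0$:

```python
import math

def midpoint(g, a, b, n):
    """Composite midpoint rule for the integral of g over [a, b] (avoids endpoints)."""
    h = (b - a) / n
    return h * sum(g(a + (i + 0.5) * h) for i in range(n))

f = lambda x: math.exp(-x * x)               # integral over R is sqrt(pi)
g = lambda x: math.exp(-(x - 1 / x) ** 2)    # f(x - 1/x); underflows to 0 near x = 0

lhs = midpoint(f, -20.0, 20.0, 40000)
rhs = midpoint(g, -20.0, 20.0, 40000)
print(lhs, rhs)   # both ≈ sqrt(pi) ≈ 1.7724539
```

The truncation to $[-20,20]$ is harmless since both integrands are far below machine precision outside it.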
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/457231",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "75",
"answer_count": 7,
"answer_id": 0
} |
The contradiction method used to prove that the square root of a prime is irrational The contradiction method given in certain books to prove that sqare root of a prime is irrational also shows that sqare root of $4$ is irrational, so how is it acceptable?
e.g. Suppose $\sqrt{4}$ is rational,
$$\begin{align}
\sqrt{4} &=p/q \qquad\text{where } p \text{ and } q \text{ are coprime} \\
4 &=p^2/q^2\\
4q^2&=p^2 \tag{1} \\
4&\mid p^2\\
4&\mid p\\
\text {let }p&=4m \qquad\text{for some natural no. m} \\
p^2&=16m^2\\
4q^2&=16m^2 \qquad\text{(from (1) )}\\
q^2&=4m^2\\
4& \mid q^2\\
4&\mid q
\end{align}
$$
but this contradicts our assumption that $p$ and $q$ are coprime, since they have the common factor $4$. Hence $\sqrt{4}$ is not rational. But we know that it is rational. Why?
| The step from $4\mid p^2$ to $4\mid p$ is wrong. For example, take $p=6$. Then $4\mid 6^2$ but it is not true that $4\mid 6$.
In general, $q\mid p^2$ implies $q\mid p$ only for squarefree $q$. A number $q$ is "squarefree" if it is not divisible by any square larger than 1.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/457291",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 1
} |
Can $\sqrt{n} + \sqrt{m}$ be rational if neither $n,m$ are perfect squares? Can the expression $\sqrt{n} + \sqrt{m}$ be rational if neither $n,m \in \mathbb{N}$ are perfect squares? It doesn't seem likely; the only way that could happen is if, for example, $\sqrt{m} = a-\sqrt{n}, \ \ a \in \mathbb{Q}$, which I don't think is possible. But how to show it?
| A nice way to see things:
Assume that, $$(\sqrt{n}+\sqrt{m})=\frac{p}{q}$$
Then we have
$$(\sqrt{n}+\sqrt{m})=\frac{p}{q}\in\Bbb Q \implies
n+m+2\sqrt{nm} =(\sqrt{n}+\sqrt{m})^2 =\frac{p^2}{q^2}\in\Bbb Q \implies \sqrt{nm} =\frac{p^2}{2q^2}-\frac{n+m}{2}\in\Bbb Q $$
But if $ nm $ is not a perfect square then $\sqrt{nm}\not \in\Bbb Q$ (this can be easily proved using the fundamental theorem of arithmetic: decomposition into prime numbers).
Hence in this case we have $$\sqrt{n}+\sqrt{m}\not \in\Bbb Q$$
Remark: 1. $mn$ can be a perfect square even though neither $n$ nor $m$ is a perfect square (see the example below).
2. We can still have $\sqrt{n}+\sqrt{m}\not\in \Bbb Q$ even if $mn$ is a perfect square (see the example below).
Example: $n= 3$ and $ m = 12$ are not perfect square and $ nm = 36 =6^2.$
Moreover,
$$\sqrt{n}+\sqrt{m} = \sqrt{3}+\sqrt{12} =3\sqrt 3 \not \in\Bbb Q$$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/457382",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "41",
"answer_count": 5,
"answer_id": 0
} |
Resources to help an 8yo struggling with math Friends of mine asked me for suggestion for one of their children (age 8) who had bad scores at the local Star test (the family is based in California).
Both parents work, so they have also limited time/energies to go through math exercise with the kid (or may have time only at the end of the day, when the student's energies are depleted, too).
This is not to say that anything requiring parent support should automatically disqualified - it's just to make clear that parent assistance could be a limited resource, so either something that can be done more or less alone by the student, or that gets maximum bang-for-the-buck for the parents time would be preferred.
Books (including exercise workbooks)? Software? Online videos? Games (boardgames, computer games)?
| In my personal opinion getting him interested in mathematics is the best way to get him to get better at it, hands down. When I was a kid I played math games for kids on my computer, and I would also compete against my mom to see who could answer basic arithmetic questions (maybe TMI).
In other words if you can get the kid interested in math you are sure to see positive results since the key to proficiency is practice, and if there is passion the kid will practice without external motivation. Hope this helps.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/457446",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 4,
"answer_id": 3
} |
Prove the triangle inequality I want to prove the triangle inequality:
$x, y \in \mathbb{R}$. Then $|x+y| \leq |x| + |y|$.
I figured out that probably the cases:
*
*$x\geq0$ and $y \geq 0$
*$x<0$ and $y < 0$
*$x\geq0$ and $y < 0$
*$x<0$ and $y \geq 0$ <- Here I am not sure...
have to be proven. However, I have not figured out a concrete method. Are my assumptions true? How do I finish the proof with these assumptions? I really appreciate your answer!!!
| If both $x$ and $y$ are $0$ or $x=-y$ then the inequality is clear. Otherwise we note that for $x,y\in\mathbb{R}$ $x\le|x|$ and similarly $y\le|y|$, which follows from the definition of the absolute value.
This tells us that $x+y\le|x|+|y|$ which implies $\frac{x+y}{|x|+|y|}\le1$ since $|x|+|y|>0$.
Thus, $|x+y|=\left|\frac{x+y}{|x|+|y|}\right|(|x|+|y|)=\frac{1}{\operatorname{sgn}\left(\frac{x+y}{|x|+|y|}\right)}\cdot\frac{x+y}{|x|+|y|}(|x|+|y|)\le|x|+|y|$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/457513",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 5,
"answer_id": 3
} |
find the inverse of $x^2 + x + 1$ In $\mathbb{F}_2[x]$ modulo $x^4 + x + 1$
find the inverse of $x^2 + x + 1$
I am not 100% sure, but here is what I have,
using the Euclidean algorithm:
$x^4 + x + 1 = (x^3 + 1)(x + 1) + x$
$(x^3 + 1) = x * x * x + 1$
$1 = (x^3 + 1) - x * x * x $
| Using that $\;x^4=x+1\; $ in $\,\Bbb F_2[x]/(x^4+x+1)\;$ , prove that
$$x^2+x=(x^2+x+1)^{-1}\;\;\; (\text{ further hint:}\;(x^2+x+1)^3=1)$$
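The hinted inverse can be verified by direct computation in $\Bbb F_2[x]$. A small sketch (the bitmask encoding, bit $i$ holding the coefficient of $x^i$, is my own choice, not from the answer):

```python
# Polynomials over F_2 as bitmasks: bit i = coefficient of x^i.
def mulmod(a, b, mod):
    """Multiply two F_2[x] polynomials and reduce modulo `mod`."""
    r = 0
    while b:               # schoolbook multiplication, coefficients mod 2
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
    while r.bit_length() >= mod.bit_length():   # polynomial long division
        r ^= mod << (r.bit_length() - mod.bit_length())
    return r

MOD = 0b10011          # x^4 + x + 1
p   = 0b00111          # x^2 + x + 1
q   = 0b00110          # x^2 + x   (the claimed inverse)
print(bin(mulmod(p, q, MOD)))   # 0b1, i.e. the constant polynomial 1
```

It also confirms the further hint: squaring $p$ gives $x^2+x$, so $p^3 = p^2\cdot p = 1$.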
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/457594",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 2
} |
Do the densities of a Uniform[0,1/n] random variable converge pointwise to zero? I'm trying to think of densities that converge pointwise to a function that is not a density. It seems to me that the only way this is possible is if the densities converge to some constant.
Here is something that I thought of but doesn't quite work as far as I can see.
Let $X_n \sim Unif[0,1/n]$. Thus $X_n$ has density $f_n(x) = nI_{\{0\leq x \leq 1/n\}}$. Does $f_n(x)$ converge to zero pointwise? How do we deal with limits of functions like this? We can't use the product rule of limits because $n \rightarrow \infty$, and we can't use l'Hôpital's rule because indicator functions are not differentiable.
Whether or not this works, what are some other examples of densities that converge pointwise to a function that is not a density?
| For example, you can use the normal densities with mean $0$ and variance $n^2$. These flatten out nicely as $n\to\infty$. It is easy to see that
$$\lim_{n\to\infty} \frac{1}{n\sqrt{2\pi}}e^{-x^2/2n^2}=0$$
for all $x$.
Your proposed example of density function $f_n(x)$ equal to $n$ on $\left(0,\frac{1}{n}\right)$ and $0$ elsewhere also works. For given any positive $x$, if $\frac{1}{n}\lt x$ then $f_n(x)=0$, since then $x$ is outside the interval $\left(0,\frac{1}{n}\right)$. It follows immediately that $\lim_{n\to\infty} f_n(x)=0$. Thus we get pointwise convergence. Note that the convergence is not uniform.
One can play the same game with most standard continuous distributions. For example, one can imitate the normal example to get a family of exponential distribution densities converging pointwise to $0$.
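To see the pointwise convergence concretely: for any fixed $x>0$, $f_n(x)=0$ as soon as $n>1/x$, and the flattened normal densities also shrink to $0$. A quick illustration (my own sketch, not from the answer):

```python
import math

def unif_density(n, x):
    """Density of Uniform[0, 1/n]: n on [0, 1/n], else 0."""
    return n if 0 <= x <= 1 / n else 0.0

def normal_density(n, x):
    """Density of N(0, n^2)."""
    return math.exp(-x * x / (2 * n * n)) / (n * math.sqrt(2 * math.pi))

x = 0.01
print([unif_density(n, x) for n in (10, 100, 101, 1000)])  # [10, 100, 0.0, 0.0]
print(normal_density(10 ** 6, x))                          # ≈ 4e-07
```

Once $n$ passes $1/x = 100$, the point $x$ falls outside $[0,1/n]$ and the value is exactly $0$, matching the argument above.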
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/457688",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Definite integral of unknown function given some additional info Given
*
*$f$ integrable on [0,3]
*$\displaystyle\int_0^1 f(x)\,\mathrm{d}x = 1$,
*$f(x+1) = \frac{1}{2}f(x)$ for all x $\in [0, 2]$
How can I find $\displaystyle\int_0^3 f(x)\,\mathrm{d}x$ ?
I tried breaking it into as follows:
$\displaystyle\int_0^3 f(x)\,\mathrm{d}x = \displaystyle\int_0^1 f(x)\,\mathrm{d}x + \displaystyle\int_1^2 f(x)\,\mathrm{d}x + \displaystyle\int_2^3 f(x)\,\mathrm{d}x$
I'm not sure how to proceed from here...it's been a long time since I took calculus, and I am probably forgetting something really basic. I feel like the solution has something to do with substitution or the fundamental theorem. I notice that 1 and 2 are 0+1, 1+2 so substituting x+1 into f(x), or something along those lines?
| Hint:
$$\int_1^2f(t)\,dt=\int_0^1f(x+1)\,dx={1\over 2}\int_0^1f(x)\,dx={1\over 2}$$
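One concrete function satisfying both hypotheses is $f(x)=2\ln 2\cdot 2^{-x}$ (my own example, one choice among many): then $f(x+1)=\tfrac12 f(x)$ everywhere and $\int_0^1 f=1$. A numerical check confirms the answer $\int_0^3 f = 1+\tfrac12+\tfrac14=\tfrac74$:

```python
import math

# f(x) = 2*ln(2) * 2**(-x)  =>  f(x+1) = f(x)/2  and  integral over [0,1] is 1
f = lambda x: 2 * math.log(2) * 2 ** (-x)

def midpoint(g, a, b, n=100000):
    """Composite midpoint rule for the integral of g over [a, b]."""
    h = (b - a) / n
    return h * sum(g(a + (i + 0.5) * h) for i in range(n))

print(midpoint(f, 0, 1))   # ≈ 1
print(midpoint(f, 0, 3))   # ≈ 7/4 = 1 + 1/2 + 1/4
```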
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/457768",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 4,
"answer_id": 2
} |
Counting Real Numbers Forgive me if this is a novice question. I'm not a mathematics student, but I'm interested in mathematical philosophy.
Georg Cantor made an argument that the set of rational numbers is countable by showing a correspondence to the set of natural numbers. He did this by scanning rational numbers in a zigzag scheme starting at the top left corner of a 2D table of integers representing the numerator vs. denominator of every rational number. He also proved that the set of real numbers is uncountable through his famous diagonalization argument.
My question is, why can't real numbers also be counted in the same fashion by placing them in a 2D table of integers representing the whole vs. decimal parts of a real number i.e. like this:
0 1 2 3 4 ...
0 0.0 0.1 0.2 0.3 0.4 ...
1 1.0 1.1 1.2 1.3 1.4 ...
2 2.0 2.1 2.2 2.3 2.4 ...
3 3.0 3.1 3.2 3.3 3.4 ...
4 4.0 4.1 4.2 4.3 4.4 ...
. . . . . . .
. . . . . . .
. . . . . . .
and scanning them in a zigzag scheme starting at the top left corner? Negative reals can also be treated the same way as negative rationals (e.g. by pairing even natural numbers with positive real numbers, and odd natural numbers with negative real numbers).
| You only counted a subset of the reals, namely, the set including the integers as well as reals with one decimal place. You cannot count the reals this way, as you would have to count to an infinite number of decimal places, since some reals have no finite decimal representation.
To 'count' as you propose, you would need the top heading to be reals, as well as the side heading, making it all but impossible to actually count anything, due to the fact you would now have two of the infinite sets, which you are using to count the infinite set itself.
Between any two distinct real numbers lie infinitely many others; thus you cannot "count" the reals this way, as there are infinitely many reals between any two reals you can pick to "count".
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/457917",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
} |
Integrating $\int_0^\infty \frac{\log x}{(1+x)^3}\,\operatorname d\!x$ using residues I am trying to use residues to compute $$\int_0^\infty\frac{\log x}{(1+x)^3}\,\operatorname d\!x.$$My first attempt involved trying to take a circular contour with the branch cut being the positive real axis, but this ended up cancelling off the term I wanted. I wasn't sure if there was another contour I should use. I also had someone suggest using the substitution $x=e^z$, so the integral becomes $$\int_{-\infty}^\infty\frac{ze^z}{(1+e^z)^3}\,\operatorname d\!z$$so that the poles are the at the odd multiples of $i\pi$. I haven't actually worked this out, but it does not seem like the solution the author was looking for (this question comes from an old preliminary exam).
Any suggestions on how to integrate?
| Consider the integral
$$\oint_C dz \frac{\log^2{z}}{(1+z)^3}$$
where $C$ is a keyhole contour in the complex plane, about the positive real axis. This contour integral may be seen to vanish along the outer and inner circular contours about the origin, so the contour integral is simply equal to
$$\int_0^{\infty} dx \frac{\log^2{x}-(\log{x}+i 2 \pi)^2}{(1+x)^3} = -i 4 \pi \int_0^{\infty} dx \frac{\log{x}}{(1+x)^3}+4 \pi^2 \int_0^{\infty} dx \frac{1}{(1+x)^3}$$
By the residue theorem, the contour integral is also equal to $i 2 \pi$ times the residue at the pole $z=-1=e^{i \pi}$. In this case, with the triple pole, we have the residue being equal to
$$\frac12 \left [ \frac{d^2}{dz^2} \log^2{z}\right]_{z=e^{i \pi}} = 1-i \pi$$
Thus we have that
$$-i 4 \pi \int_0^{\infty} dx \frac{\log{x}}{(1+x)^3}+4 \pi^2 \frac12 = i 2 \pi + 2 \pi^2$$
which implies that
$$\int_0^{\infty} dx \frac{\log{x}}{(1+x)^3} = -\frac12$$
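A quick numerical cross-check of the result (my own sketch, using the substitution $x=e^u$ mentioned in the question, which turns the integrand into a smooth, rapidly decaying function of $u$):

```python
import math

# substitute x = e^u:  integral of log(x)/(1+x)^3 over (0, inf)
#                    = integral of u*e^u/(1+e^u)^3 over (-inf, inf)
def integrand(u):
    return u * math.exp(u) / (1 + math.exp(u)) ** 3

a, b, n = -40.0, 40.0, 80000          # tails beyond |u| = 40 are negligible
h = (b - a) / n
val = h * sum(integrand(a + (i + 0.5) * h) for i in range(n))
print(val)   # ≈ -0.5
```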
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/457977",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "15",
"answer_count": 3,
"answer_id": 2
} |
If $f$ is an entire function with $|f(z)|\le 100\log|z|$ and $f(i)=2i$, what is $f(1)$? Let $f$ be an entire function with $|f(z)|\le 100\log|z|$ for all $|z|\ge 2$, and $f(i)=2i$. Then $f(1)=?$
I have no idea how to solve this one!
$g(z)={f(z)\over \log|z|}$. Then can I say $g$ is constant by Liouville's theorem?
| You can't directly use Liouville's theorem, since dividing $f$ by $\log \lvert z\rvert$ or $\log z$ doesn't produce an entire function.
But you can use the Cauchy estimates to show that the bound on $\lvert f(z)\rvert$ actually implies that $f$ is constant: for $R > 2$ and $\lvert z\rvert \leqslant \frac{R}{2}$, you can compute
$$\left\lvert f'(z) \right\rvert = \left\lvert \frac{1}{2\pi i}
\int\limits_{\lvert \zeta \rvert = R} \frac{f(\zeta)}{(\zeta-z)^2}\, d\zeta\right\rvert \leqslant \frac{1}{2\pi} \int\limits_{\lvert\zeta\rvert = R} \frac{\lvert f(\zeta)\rvert}{\lvert \zeta - z\rvert^2}\, \lvert d\zeta\rvert \leqslant \frac{4R}{R^2}100\log R$$
and, since the right hand side tends to $0$ for $R \to \infty$, conclude that $f' \equiv 0$. Hence $f$ is constant, so $f(1)=f(i)=2i$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/458071",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
} |
Express spherical coordinates with different centers in terms of each other. Imagine that you have two spheres whose centers are a distance $R$ apart. It is well known how one would get the Cartesian position vector of each point of sphere 1 by using spherical coordinates with the reference frame at the center of sphere 1. I am looking for an expression that gives me a point written in spherical coordinates with reference point the center of sphere 1 in terms of spherical coordinates with reference point the center of sphere 2.
| Let the origin of your first spherical coordinates be the origin of the first Cartesian system being used, and the origin of your second spherical coordinates be at $X_0, Y_0, Z_0$ in the same Cartesian system.
Use spherical polar coordinates, with $\phi$ being the azimuth angle in the $X-Y$ plane, and $\theta$ being the altitude angle measured from the $Z$-axis
Consider a point P, with coordinates $R_1,\phi_1, \theta_1$ in the first spherical coordinate system. Then point P has Cartesian coordinates in the first cartesian system $X_1, Y_1, Z_1$ given by $$X_1=R_1\sin(\theta_1) \cos(\phi_1)$$ $$Y_1=R_1\sin(\theta_1)\sin(\phi_1)$$ $$Z_1=R_1\cos(\theta_1)$$
If we now consider a new Cartesian system parallel to the first and with origin at the origin of the second spherical system, the Cartesian coordinates of P in the second system would be$$X_2=X_1-X_0=R_1\sin(\theta_1) \cos(\phi_1)-X_0$$ $$Y_2=Y_1-Y_0=R_1\sin(\theta_1)\sin(\phi_1)-Y_0$$ $$Z_2=Z_1-Z_0=R_1\cos(\theta_1)-Z_0$$Finally, you can find the polar coordinates of P in the second spherical polar system, $R_2,\phi_2, \theta_2$:
$$R_2=\sqrt{X_2^2+Y_2^2+Z_2^2}$$ $$\phi_2=\arctan\frac{Y_2}{X_2}$$ $$\theta_2=\arccos\frac{Z_2}{R_2}$$
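The formulas above translate directly to code. A minimal sketch (the function name and test point are mine; `atan2` is used in place of a bare arctangent so that $\phi_2$ lands in the correct quadrant):

```python
import math

def sph1_to_sph2(r1, theta1, phi1, x0, y0, z0):
    """Convert (r, theta, phi) about origin 1 to spherical coordinates about
    origin 2 = (x0, y0, z0); theta is measured from the z-axis."""
    # first spherical coordinates -> Cartesian coordinates of the second frame
    x = r1 * math.sin(theta1) * math.cos(phi1) - x0
    y = r1 * math.sin(theta1) * math.sin(phi1) - y0
    z = r1 * math.cos(theta1) - z0
    r2 = math.sqrt(x * x + y * y + z * z)
    return r2, math.acos(z / r2), math.atan2(y, x)

# the point at r=5 on the z-axis of frame 1, seen from origin 2 = (0, 0, 3):
print(sph1_to_sph2(5.0, 0.0, 0.0, 0.0, 0.0, 3.0))   # (2.0, 0.0, 0.0)
```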
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/458123",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Prove the inequality $\sum_{1\le i<j\le n}\sqrt{a_ia_j}<\frac{n-1}{2}(a_1+a_2+\cdots+a_n)$ Please demonstrate this is true
This is the exercise:
$$\sqrt{a_1a_2}+\sqrt{a_1a_3}+\ldots+\sqrt{a_1a_n}+\sqrt{a_2a_3}+\ldots+\sqrt{a_{n-1}a_n}<\frac{n-1}2(a_1+a_2+a_3+\ldots+a_n).$$
I tried to solve it, but I couldn't do anything right.
This is my idea:
$\sqrt{a_1a_2}<\frac{a_1+a_2}2$ -because geometric mean < arithmetic mean;
$\sqrt{a_1a_3}<\frac{a_1+a_3}2$,
$\dots$,
$\sqrt{a_{n-1}a_n}<\frac{a_{n-1}+a_n}2$
Please give me the answer!!
$a_1,a_2,a_3,\ldots,a_n$ are real, positive numbers
| Use the AM-GM inequality, which states that:
$$\sqrt{a_ia_j}\le\frac{a_i+a_j}{2}$$
Then we have the following:
$$\sum_{i< j}\sqrt{a_ia_j}\le\sum_{i< j}\frac{a_i+a_j}{2}=\frac{1}{2}\sum_{i< j}(a_i+a_j)\\=\frac{1}{2}[(a_1+a_2)+(a_1+a_3)+\ldots+(a_1+a_n)+(a_2+a_3)+\ldots+(a_{n-1}+a_n)]$$
Now notice that in the sum on the right hand side, each $a_i$ appears $n-1$ times, so we have:
$$\sum_{i< j}\sqrt{a_ia_j}\le\frac{n-1}{2}(a_1+\ldots+a_n)$$
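A brute-force check of the inequality on random positive inputs (my own sketch; equality in AM–GM would require all the $a_i$ equal, which random draws avoid, so the strict inequality holds in every trial):

```python
import math, random

def lhs_rhs(a):
    """Return (sum of pairwise sqrt(a_i a_j), (n-1)/2 * sum of a)."""
    n = len(a)
    lhs = sum(math.sqrt(a[i] * a[j]) for i in range(n) for j in range(i + 1, n))
    return lhs, (n - 1) / 2 * sum(a)

random.seed(0)
for _ in range(1000):
    a = [random.uniform(0.01, 10.0) for _ in range(random.randint(2, 8))]
    lhs, rhs = lhs_rhs(a)
    assert lhs < rhs
print("inequality held in 1000 random trials")
```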
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/458204",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
} |
Dimension of $\mathbb{Q}\otimes_{\mathbb{Z}} \mathbb{Q}$ as a vector space over $\mathbb{Q}$ The following problem was subject of examination that was taken place in June. The document is here. Problem 1 states:
The tensor product $\mathbb{Q}\otimes_{\mathbb Z}\mathbb{Q}$ is a vector space
over $\mathbb{Q}$ by multiplication in the left factor, i.e.
$\lambda(x\otimes y)=(\lambda x)\otimes y$ for $\lambda, x,
y\in\mathbb{Q}$. What is the dimension of
$\mathbb{Q}\otimes_{\mathbb{Z}}\mathbb{Q}$ as a vector space over
$\mathbb{Q}$?
I only know the definition of tensor product for modules (via universal property). How does one go about calculating dimension of such a vector space?
Thanks!
| There is another solution using category theory : tensor product is the fiber coproduct in the category of commutative rings. $\mathbb{Z} \rightarrow \mathbb{Q}$ is an epimorphism since it's a localization.
If $A \rightarrow B$ is an epimorphism, and $B \rightarrow C, B \rightarrow D$ are any morphisms, then $C \coprod_A D \simeq C \coprod_B D$. Therefore, $\mathbb{Q} \otimes_{\mathbb{Z}} \mathbb{Q} = \mathbb{Q} \otimes_{\mathbb{Q}} \mathbb{Q}= \mathbb{Q}$, which has dimension $1$ as a vector space over $\mathbb{Q}$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/458264",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11",
"answer_count": 3,
"answer_id": 2
} |
Is The *Mona Lisa* in the complement of the Mandelbrot set? Here is a description of how to color pictures of the Mandelbrot set, more accurately the complement of the Mandelbrot set. Suppose we have a rectangular array of points. Say the array is $m$ by $n$. Suppose also we have a number of color names. Now suppose we assign the color name $j$ to a point in the array if the $j$-th iteration exceeds $2$. If the iterates do not exceed $2$ we color the point black. This process will yield a picture. By carefully positioning the array of points, can we get any picture we want? In particular, can we get a digital representation of the Mona Lisa?
I do not know how to begin to prove or disprove this. My guess is that we can probably get any pictures.
Edit
A different way to color the array would be to use color $c$ if the first iterate to exceed $2$ is iterate $i_{1}$, $i_{2}$, $\cdots$, $i_{j_{c}}$. The iterates for different colors should be distinct. If someone wishes to use infinite lists for the number of iterates that are assigned to a color, that would also be acceptable.
With this change the problem reduces to finding an $m$ by $n$ array where each point in the array has a different number of iterates before the value exceeds $2$.
| I think yes
Consider the sequence of "westernmost" islands increasing in period:
Here are some examples, islands of period 20, 30, 40, and 50 (images omitted); you can see them increasing in hairiness / spininess.
Zooming in near a sufficiently hairy high-period island, you can get very nearly parallel spines (image omitted).
There is a natural grid with a 4:1 aspect ratio between successive escape time level sets in neighbouring spine gaps. (Determined by measuring images: I have no proof of this seeming fact. But it's not a strictly necessary detail.)
If you choose the angle of the spines carefully, you can create an NxN grid with square cells that satisfies "distinct level set at each pixel sampling point in an (m,n) image array".
There is a slight fuzziness due to the imperfect parallelism and the quantized angles available for any given period island, but increasing the period far enough and that won't matter - eventually it gets good enough for the finite width of the level sets to take care of it.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/458400",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "13",
"answer_count": 2,
"answer_id": 0
} |
Solution Manual for Chapters 13 and 14, Dummit & Foote I bought the third edition of "Abstract Algebra" by Dummit and Foote. In my opinion this is the best "algebra book" that has been written.
I found several solution manuals, but none has solutions for Chapters 13 and 14 (field extensions and Galois theory, respectively).
Is there a solution manual for these chapters?
| If anyone is interested, I made a full solution manual for Chapter 13 - Field Theory.
You can find it here
https://positron0802.wordpress.com/algebra-dummit-chapter-13-solution/.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/458447",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11",
"answer_count": 3,
"answer_id": 0
} |
Sports competition team gaming Let $A,B,C,D,E,F$ be six teams in a sports competition, where every two teams play each other exactly once.
Now we know that Teams $A,B,C,D,E$ have already played $5,4,3,2,1$ games, respectively.
So, how do I figure out which team hasn't played a game with team $B$ yet?
| A has played all 5 other teams and E has only played one team. That means E must have only played A, so the only team B hasn't played must be E.
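The deduction can also be confirmed by brute force over all $2^{15}$ possible sets of played games (a sketch I wrote for this check, not part of the original answer):

```python
from itertools import combinations

teams = "ABCDEF"
all_pairs = list(combinations(teams, 2))           # the 15 possible games
target = {"A": 5, "B": 4, "C": 3, "D": 2, "E": 1}  # F's count is unconstrained

solutions = []
for mask in range(1 << len(all_pairs)):
    played = [p for i, p in enumerate(all_pairs) if mask >> i & 1]
    deg = {t: sum(t in p for p in played) for t in teams}
    if all(deg[t] == k for t, k in target.items()):
        solutions.append(set(played))

# In every consistent schedule, the one team B has not played is E:
print(all(("B", "E") not in s for s in solutions), len(solutions))   # True 1
```

The search finds exactly one consistent schedule, and in it B has played everyone except E, matching the reasoning above.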
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/458477",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Conditions under which $BA = I_{n}$, where $A\in\mathbb{C}^{m\times n}$ and $B\in\mathbb{C}^{n\times m}$ Let $A\in\mathbb{C}^{m\times n}$. I want to know what conditions I can apply on the matrix $B\in\mathbb{C}^{n\times m}$ such that the product $BA = I_{n}$, i.e., the matrix $B$ is a left inverse of the matrix $A$.
Please help me. Thanks for the help.
| I think that, as @OwenSizemore correctly suggested, if $n\leq m$ and A is a full rank matrix, then we can think of some Moore-Penrose Pseudoinverse type of solution.
If not, then I'm not sure if this is even possible. Given a vector $\mathbf{v}$ such that $\mathbf{Ax=v}$ we can't really get $\mathbf{x}$ since given a solution $\mathbf{x_0}$ to this equation, $\mathbf{x_0+n}$ is also a solution ($\mathbf{n}$ is any vector in the nullspace of $\mathbf{A}$), right?
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/458582",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Does there exist a theorem like this? Statement: Suppose $T\subseteq \mathbb{N}$; then the monomials $x^i$, $i\in T$, span a dense linear subspace of $C^0[a,b]$ iff $\sum_{i\in T} 1/i$ is divergent.
I heard it somewhere a long time ago, so there may be minor errors, but the meaning goes like this. I heard it was called a "Bernstein problem", but I never succeeded in searching for such a theorem on the web.
| An excellent reference, not mentioned in the Wikipedia article, is section 4.2 of Polynomials and Polynomial Inequalities by Borwein and Erdélyi. On 35 pages of that section the authors collect a huge number of variations of the theorem (and then return to it in 4.4 in the setting of rational functions).
Here is a sample.
Theorem 4.2.1 Suppose $(\lambda_n)$ is a sequence of distinct positive numbers. Then the span of $x^{\lambda_n}$ is dense in $C[0,1]$ if and only if
$$\sum_{n} \frac{\lambda_n}{\lambda_n^2+1}=\infty \tag1$$
Condition (1) simplifies to $\sum_n \lambda_n^{-1}=\infty$ when $\inf \lambda_n>0$.
And a rational version:
Theorem 4.4.1 Let $(\lambda_n)$ be any sequence of distinct real numbers. Then, for any $0<a<b$, the set
$$\left\{ \frac{\sum_{n=0}^N a_n x^{\lambda_n}}{\sum_{n=0}^N b_n x^{\lambda_n}} : a_n,b_n\in\mathbb R, \ N=1,2,3\dots \right\}$$
is dense in $C[a,b]$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/458762",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Example of semiprime ring A ring $R$ is semiprime if, for $x\in R$, $xyx=0$ for all $y\in R$ implies $x=0$, or equivalently, for $x\neq0$ there exists $y_{0}\in R$ such that $xy_{0}x\neq0$.
I found an example of a semiprime ring. However, I am not sure if I understand it properly.
$R=\left\{ \begin{pmatrix}a & 0\\
0 & b
\end{pmatrix},\; a,\, b\in\mathbb{Z}_{2}\right\} =\left\{ \begin{pmatrix}0 & 0\\
0 & 0
\end{pmatrix},\begin{pmatrix}0 & 0\\
0 & 1
\end{pmatrix},\begin{pmatrix}1 & 0\\
0 & 0
\end{pmatrix},\begin{pmatrix}1 & 0\\
0 & 1
\end{pmatrix}\right\}
$
This ring is semiprime because
$\begin{pmatrix}0 & 0\\
0 & 0
\end{pmatrix}y\begin{pmatrix}0 & 0\\
0 & 0
\end{pmatrix}=\begin{pmatrix}0 & 0\\
0 & 0
\end{pmatrix}$
Am I right? Could you give me some other examples of semiprime rings?
Thank you.
| The example you gave is indeed semiprime, but it is a complicated way to look at the ring $F_2^2$.
Given $(a,b)$ nonzero, one of $(a,b)(1,0)(a,b)$ or $(a,b)(0,1)(a,b)$ is nonzero.
Any semisimple or Von Neumann regular ring or prime ring is going to be semiprime.
I recommend that you try to show that the ring of linear transformations of any vector space is semiprime.
Rings that don't have nilpotent elements are also semiprime.
Another easy source is to take any semiprime ideal J in a ring R and use R/J.
Semiprime ideals are easy to find: they're just the intersections of prime ideals.
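The definition is also easy to check by brute force on small finite rings. A sketch (my own code, not from the answer) confirming that $F_2\times F_2$ is semiprime while $\Bbb Z/4\Bbb Z$ is not (take $x=2$: then $2y2=4y=0$ for every $y$):

```python
from itertools import product

def is_semiprime(ring, mul, zero):
    """Check: x y x = 0 for all y forces x = 0."""
    return all(x == zero or any(mul(mul(x, y), x) != zero for y in ring)
               for x in ring)

R = list(product([0, 1], repeat=2))              # F_2 x F_2, componentwise ops
mul2 = lambda x, y: (x[0] * y[0] % 2, x[1] * y[1] % 2)
print(is_semiprime(R, mul2, (0, 0)))             # True

Z4 = range(4)                                    # Z/4Z has the nilpotent 2
print(is_semiprime(Z4, lambda x, y: x * y % 4, 0))   # False
```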
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/458897",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
solving the differential equation $y'+\sin y+x \cos y+x=0$ How to solve the following differential equation?
$$y'+\sin y+x \cos y+x=0.$$
| HINT:
$$y'+\sin y+x \cos y+x=0 \implies \frac{dy}{dx}+\sin y=-x(1+\cos y)$$
$$\implies \frac{dy}{dx}\cdot\frac1{1+\cos y}+\frac{\sin y}{1+\cos y}=-x$$
Using $\cos 2z=2\cos^2z-1,\sin2z=2\sin z\cos z,$
$$\frac12\sec^2\frac y2 \frac{dy}{dx}+\tan\frac y2=-x$$
$$\implies \frac{d\left(\tan\frac y2\right)}{dx}+\tan\frac y2 \cdot1=-x$$
Multiplying either sides by the Integrating factor $e^{\int dx}=e^x,$
$$ e^x\cdot d\left(\tan\frac y2\right)+\tan\frac y2 \cdot e^xdx=-e^xxdx$$
so that the left hand side becomes $d\left(e^x\cdot\tan\frac y2\right)$
So, we have $$d\left(e^x\cdot\tan\frac y2\right)=-e^xxdx$$
Now integrate: since $\int -xe^x\,dx=(1-x)e^x+C$, this gives $e^x\tan\frac y2=(1-x)e^x+C$, i.e. $\tan\frac y2=1-x+Ce^{-x}$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/458967",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Prove that if $AB$ is invertible then $B$ is invertible. I know this proof is short but a bit tricky. So I suppose that $AB$ is invertible then $(AB)^{-1}$ exists. We also know $(AB)^{-1}=B^{-1}A^{-1}$. If we let $C=(B^{-1}A^{-1}A)$ then by the invertible matrix theorem we see that since $CA=I$(left inverse) then $B$ is invertible. Would this be correct?
Edit
Suppose $AB$ is invertible. There exists a matrix call it $X$ such that $XAB=I$. Let $C=XA$ Then $CB=I$ and it follows that $B$ is invertible by the invertible matrix theorem.
| $\;AB\;$ invertible $\;\implies \exists\;C\;$ s.t.$\;C(AB)=I\;$ , but using associativity of matrix multiplication:
$$I=C(AB)=(CA)B\implies B\;\;\text{is invertible and}\;\;CA=B^{-1}$$
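For square matrices the proof is constructive: $C=(AB)^{-1}A$ is a left (hence two-sided) inverse of $B$. A tiny plain-Python illustration with $2\times2$ matrices of my own choosing:

```python
def matmul(A, B):
    """Product of two 2x2 matrices given as nested lists."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def inv2(M):
    """Inverse of an invertible 2x2 matrix via the adjugate formula."""
    (a, b), (c, d) = M
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

A = [[2.0, 1.0], [0.0, 1.0]]
B = [[1.0, 3.0], [1.0, 4.0]]
C = matmul(inv2(matmul(A, B)), A)    # C = (AB)^{-1} A, a left inverse of B
print(matmul(C, B))                  # [[1.0, 0.0], [0.0, 1.0]]
```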
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/459021",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "25",
"answer_count": 8,
"answer_id": 0
} |
$\sum_{k=0}^{n/2} {n\choose{2k}}=\sum_{k=1}^{n/2} {n\choose{2k-1}}$, Combinatorial Proof: How am I supposed to prove combinatorially:
$$\sum_{k=0}^{\lfloor n/2\rfloor} {n\choose{2k}}=\sum_{k=1}^{\lceil n/2\rceil} {n\choose{2k-1}}$$
$${n\choose{0}}+{n\choose{2}}+{n\choose{4}}+\dots={n\choose{1}}+{n\choose{3}}+{n\choose{5}}+\cdots$$
Absolutely clueless.
| The question as currently posed can be answered by looking at the symmetry of the rows of Pascal's triangle corresponding to odd $n$ (which have an even number of elements). By definition
$\binom{n}{k}=\frac{n!}{k!\,(n-k)!}$.
Therefore ${n\choose{0}}={n\choose{n}}$, ${n\choose{1}}={n\choose{n-1}}$, and in general ${n\choose{k}}={n\choose{n-k}}$. Thus, the set of odd indexed elements and the set of even indexed elements in each row are identical.
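In fact, the identity holds for every $n\ge 1$, odd or even (expand $(1-1)^n$ by the binomial theorem), and each side equals $2^{n-1}$. A quick computational check:

```python
from math import comb

for n in range(1, 12):
    even = sum(comb(n, k) for k in range(0, n + 1, 2))
    odd = sum(comb(n, k) for k in range(1, n + 1, 2))
    assert even == odd == 2 ** (n - 1)
print("even-index and odd-index binomial sums agree for n = 1..11")
```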
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/459088",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 4,
"answer_id": 3
} |
Topologist's Sine Curve not a regular submanifold of $\mathbb{R^2}$? I am trying to work out the details of following example from page 101 of Tu's An Introduction to Manifolds:
Example 9.3. Let $\Gamma$ be the graph of the function $f(x) =
\sin(1/x)$ on the interval $]0, 1[$, and let $S$ be the union of
$\Gamma$ and the open interval $I=\{(0,y)\in \mathbb{R^2} |−1<y<1\}$.
The subset $S$ of $\mathbb{R^2}$ is not a regular submanifold for the
following reason: if $p$ is in the interval $I$, then there is no
adapted chart containing $p$, since any sufficiently small
neighborhood $U$ of $p$ in $\mathbb{R^2}$ intersects $S$ in infinitely
many components.
Using Tu's definitions, I need to show that given a neighborhood $U$ of $p$, there exists no homeomorphism $\phi$ from $U$ onto an open set of $V \subset\mathbb{R^2}$ with the property that $U \cap S$ is the pre-image with respect to $\phi$ of the $x$ or $y$ axes intersected with $V$.
I am not sure where the fact that $U \cap S$ has infinitely many components comes into play. I tried to derive a contradiction using this fact; but even if restrict my attention to connected neighborhoods $U$ of $p$, the intersection of the connected set $\phi(U)$ with the $x$ or $y$ axes might have infinitely many components (I think), so there's no contradiction with the fact that homeomorphism preserves components.
I would appreciate any help!
| One simple way to see that $S$ is not a regular submanifold around $p$ is that it is not locally Euclidean: while there exist open subsets of $\mathbb{R}$ with as many connected components as you like, there are no points like $p$ in $\mathbb{R}$, that is, with every neighborhood not connected.
So, in the sentence "since any sufficiently small neighborhood $U$ of $p$ in $\mathbb{R}^2$ intersects $S$ in infinitely many components" the important part (besides "many components") is "any neighborhood".
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/459164",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "14",
"answer_count": 3,
"answer_id": 2
} |
Computing Brauer characters of a finite group I am studying character theory from the book "Character Theory of Finite Groups" by Martin Isaacs. (I am not too familiar with valuations and algebraic number theory.)
In the last chapter on modular representation theory, Brauer characters, blocks, and defect groups are introduced.
My question is this: How do we find the irreducible Brauer characters and blocks, given a group and a prime?
For instance, let's say we have $p=3$ and the group $G = S_5$, the symmetric group.
An example of the precise calculation or method used to determine the characters would be very helpful. Thanks.
| This is a difficult question and you would probably need to learn more theory in order to understand the different methods available.
But one method that is often used in practice is to calculate the representations and then just find the Brauer character directly from the matrices of the representations. Of course, you have to express the traces of the matrices as sums of roots of unity over the finite field, and then lift this sum to the complex numbers to get the Brauer character, but that is not particularly difficult. (That is not completely true - since the lifting is not uniquely defined, you may have to work hard if you want to make it consistent.)
With ordinary (complex) character tables, it is generally much easier to calculate the characters than the matrices that define the representations, but that is not always the case with modular representations. There are fast algorithms for computing representations over finite fields, using the so-called MeatAxe algorithm.
I am more familiar with Magma than with GAP, and I expect there are similar commands in GAP, but in Magma I can just type
> G := Sym(5);
> I := AbsolutelyIrreducibleModules(G, GF(3));
and I get the five absolutely irreducible representations in characteristic three as group homomorphisms, and so I can just look at the images of elements from the different conjugacy classes. There is a Magma command that does this for you, giving the Brauer character table:
> [BrauerCharacter(i): i in I];
[
( 1, 1, 1, 0, 1, 1, 0 ),
( 1, -1, 1, 0, -1, 1, 0 ),
( 4, 2, 0, 0, 0, -1, 0 ),
( 4, -2, 0, 0, 0, -1, 0 ),
( 6, 0, -2, 0, 0, 1, 0 )
]
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/459230",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9",
"answer_count": 1,
"answer_id": 0
} |
contour integral with integration by parts Is there a complex version of integration-by-part? I saw someone used it but didn't find it in textbook. I tested integrals $\int_{\mathcal{C}}\frac{\log(x+1)}{x-2}\mathrm{d}x$ and $\int_{\mathcal{C}}\frac{\log(x-2)}{x+1}\mathrm{d}x$, where $\mathcal{C}$ encloses both -1 and 2. But the results do not match. Is it because they are not equal at the first place or I chose the wrong branch cut?
| Integration by parts is just the product rule and the Fundamental Theorem of Calculus. But you need well-defined analytic functions on your contour, which you don't have here.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/459322",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
An application of fixed point theorem I want to use the fixed point method to solve the equation to find $y$: $$ y = c_1 y^3 - c_2 y$$where $c_1, c_2$ are real valued constants. So I designed $$ y_{k+1} = c_1 y_k^3 - c_2 y_k$$ to approximate $y$. But I don't know what to do next. Also I want to know the convergence for this equation.
| Do you want to do this on a computer or by hand? Approximating things by hand usually makes little sense, so suppose by computer.
If so, then the thing to do is first to put some value of $y_0$ (probably close to $0$, or else $y_k \to \infty$ as $k \to \infty$, but also not precisely $0$, or else $y_k = 0$ for all $k$). Keep using the recursion to generate $y_k$, until you reach a good solution (a reasonable check is $y_{k+1} \simeq y_k$), or run out of patience ($k$ is very large, say $k \gg 1000$), or you see that the sequence diverges ($y_k$ is very large, say $y_k \gg 1000$). If you found an approximate solution, then the job is done. If not, then you probably should try again with a different choice of $y_0$ (it might be reasonable to put $y_0$ random).
Of course, it is usually much simpler to solve the equation by transforming it into the form: $$ y(c_1 y^2 - (c_2+1)) = 0$$
and then computing $y = 0$ or $y = \pm \sqrt{\frac{c_2+1}{c_1}}$. In some circumstances you explicitly don't want to use square roots, however. (E.g. you want to generalise the method afterwards, or you just want to learn how the fixed point method works, or you're programming and your language does not have the square root function...). I'm assuming it's one of those situations.
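Here is a minimal sketch of the loop described above. The constants $c_1=-0.5$, $c_2=-1.5$ are my own choice, picked so that the nonzero fixed points $y=\pm\sqrt{(c_2+1)/c_1}=\pm 1$ are attracting (the derivative of the map vanishes there) while $y=0$ is repelling; for many other constants the iteration simply diverges or collapses to $0$, which is exactly the convergence issue raised in the question.

```python
def fixed_point(c1, c2, y0, tol=1e-12, max_iter=1000):
    """Iterate y_{k+1} = c1*y_k**3 - c2*y_k until it settles or escapes."""
    y = y0
    for _ in range(max_iter):
        y_next = c1 * y**3 - c2 * y
        if abs(y_next - y) < tol:   # converged: y_{k+1} ~ y_k
            return y_next
        if abs(y_next) > 1000:      # diverged: give up on this y0
            return None
        y = y_next
    return None                     # ran out of patience

# With c1 = -0.5, c2 = -1.5 the map is y -> -0.5*y^3 + 1.5*y,
# whose fixed points are 0 (repelling) and +-1 (attracting).
print(fixed_point(-0.5, -1.5, 0.5))   # converges to ~ 1.0
```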
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/459392",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
How to find cubic non-snarks where the $\min(f_k)>6$ on surfaces with $\chi<0$? Henning once told me that,
[i]t follows from the Euler characteristic of the plane that the average face degree of a 3-regular planar graph with $F$ faces is $6-12/F$, which means that every 3-regular planar graph has at least one face with degree $5$ or lower.
I tried to understand and extend this and got the following:
Given a $k$-regular graph. Summing over the face degrees $f_n$ gives twice the number of edges $E$ and this is $k$ times the number of vertices $V$:
$$
\sum f_n = 2E =kV \tag{1}
$$
Further we have Euler's formula saying
$$
V+F = E +\chi,
$$
where $\chi$ is Euler's characteristic of the surface where the graph lives on. Again we insert $E=\frac k2V$ and get:
$$
F=\left( \frac k2 -1 \right)V+\chi\\
V=\frac{F-\chi}{\frac k2 -1}. \tag{2}
$$
Dividing $(1)$ by $F$ and inserting $(2)$ gives:
$$
\frac{\sum f_n}{F}= \frac{k(F-\chi)}{(\frac k2 -1)F}=\frac {2k}{k -2} \left( 1-\frac{\chi}{F}\right) \tag{3}
$$
or
$$
\sum f_n=\frac {2k}{k -2} \big( F-\chi\big). \tag{3$^\ast$}
$$
Plug in $k=3$ and $\chi=2$
(the characteristic of the plane), we get back Henning's formula, but when e.g. $\chi=-2$, so the surface we draw could be a double torus, we get the average degree to be:
$$
6 \left( 1+\frac{2}{F}\right)
$$
How to find cubic graphs where the $\min(f_k)>6$ on surfaces with $\chi<0$?
EDIT The graph should not be a snark.
| For orientable surfaces, here's a representative element of a family of non-snarky cubic graphs on an $n$-torus with $4n-2$ vertices, $6n-3$ edges and a single $(12n-6)$-sided face.
If it is a problem that some pairs of vertices have more than one edge going between them, that can easily be fixed with some local rearrangements (which can be chosen to preserve the green-blue Hamiltonian cycle and thus non-snarkiness).
For non-orientable surfaces, I think it is easiest to start by appealing to the classification theorem and construe the surface as a sphere with $k\ge 2$ cross-caps on it. Now if you start with a planar graph and place a cross-cap straddling one of its edges (so following the edge from one end to another will make you arrive in the opposite orientation), the net effect is to fuse the two faces the edge used to separate.
Therefore, start with an arbitrary planar non-snarky cubic graph with $2k-2$ vertices and $3k-3$ edges. Select an arbitrary spanning tree (comprising $2k-3$ edges), and place cross-caps on the $k$ edges not in the spanning tree. This will fuse all of the original graph's faces, and we're left with a graph with a single $(6k-6)$-sided face.
In each of the above cases, if you want more faces, simply subdivide the single one you've already got with new edges. The face itself, before you glue its edges together to form the final closed surface, is just a plain old $(12n-6)$- or $(6k-6)$-gon, so you can design the subdivision as a plane drawing.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/459476",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 0
} |
How much money should we take? I'm a new user so if my question is inappropriate, please comment (or edit maybe).
We want to define a dice game. We will be the casino and a customer will roll a die. I will assume the customer is a man. He can stop whenever he wants, and when he stops he takes money equal to the sum of his rolls so far. But if he rolls a 1 he must stop and he gets no money. For the sake of the casino, how much money should we charge at the beginning of the game, at minimum?
For example: if he rolls 2-5-1, he gets no money, but if he rolls a 4 and stops, he gets 4 dollars.
I don't have any good work on this problem, but I guess it is 10. Also, maybe this helps:
If the game were free and we could play it once a year, when should we stop? Obviously, if we have collected 1000 dollars we should stop, and if we have only gained 2 dollars we should keep playing.
The answer to this question is 20, because if we have collected $t$ so far, the expected gain from one more roll is $\frac26 + \frac36 + \frac46 + \frac56 + \frac66 -\frac t6 = \frac{20-t} 6 $.
Please don't hesitate to edit linguistic mistakes in my question. Thanks for any help.
| Let $f(n)$ be the expected final win for a player already having a balance of $n$ and employing the optimal strategy. Trivially, $f(n)\ge n$ as he might decide to stop right now.
However, if the optimal strategy tells him to play at least once more, we find that $f(n)=\frac16\cdot 0+\sum_{k=2}^6 \frac16f(n+k)$. Thus
$$\tag1 f(n)=\max\left\{n,\frac16\sum_{k=2}^6f(n+k)\right\}.$$
If the player always plays as if his current balance were one more than it actually is, the expected win with starting balance $n$ will be $f(n+1)-1+p$ where $p$ is his probability of ending with rolling a "1". Therefore $f(n)\ge f(n+1)-1$ and by induction $f(n+k)\le f(n)+k$. Then from $(1)$ we get
$$ f(n)\le \max\left\{n,\frac56f(n)+\frac{10}{3}\right\}$$
Especially, $f(n)>n$ implies $f(n)\le \frac56f(n)+\frac{10}3$, i.e. $f(n)\le 20$. Thus $f(n)=n$ if $n\ge 20$ and we can calculate $f(n)$ for $n<20$ backwards from $(1)$. We obtain step by step
$$\begin{align}f(19)&=\frac{115}6\\
f(18)&=\frac{55}3\\
&\vdots\\
f(0)&=\frac{492303203}{60466176}\approx 8.1418\end{align}$$
as the expected value of the game when starting with zero balance, and this is also the fair price.
We have at the same time found the optimal strategy: As $f(n)>n$ iff $n\le 19$, the optimal strategy is to continue until you have collected at least $20$ and then stop (this matches your remarks in your last paragraph).
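The backward recursion $(1)$ is easy to carry out exactly with rational arithmetic; a small sketch:

```python
from fractions import Fraction

f = {n: Fraction(n) for n in range(20, 26)}   # f(n) = n once n >= 20
for n in range(19, -1, -1):                   # compute (1) backwards
    f[n] = max(Fraction(n),
               Fraction(1, 6) * sum(f[n + k] for k in range(2, 7)))

print(f[19])   # 115/6
print(f[18])   # 55/3
print(f[0])    # the fair price, worked out above to 492303203/60466176 ~ 8.1418
```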
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/459602",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Are quantifiers a primitive notion? Are quantifiers a primitive notion? I know that one can be defined in terms the other one, so question can be posed, for example, like this: is universal quantifier a primitive notion? I know, that $\forall x P (x) $ can be viewed as a logical conjunction of a predicate $ P $ being applied to all possible variables in, for example, $\sf ZFC$. But how can one write such a statement down formally? Also, it seems you can not use the notion of a set to define the domain of discourse, because you are trying to build $\sf ZFC$ from scratch, and sets make sense only inside $\sf ZFC$. Obviously, I'm missing a lot here. Any help is appreciated.
Note for example in PA that even if $P(0)$ and $P(1)$ and $P(2)$ and ... are all theorems, it may happen that $\forall n\colon P(n)$ is not a theorem. Thus $\forall n\colon P(n)$ is in fact something different from $P(0)\land P(1)\land P(2)\land \ldots$, even if one were to accept such an infinite string as a wff (which is a box of Pandora that should not be opened, as in the next step such an infinite conjunction would require an infinite proof, and so on).
So this means that quantification does bear a "new" notion and should be considered primitive.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/459680",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11",
"answer_count": 3,
"answer_id": 1
} |
$X$ homeomorphic to $f(X)$
Let $X$, and $Y$ be topological spaces, and let $f:X\rightarrow Y$ be a continuous and one-to-one map.
When is $X$ homeomorphic to $f(X)$?
| Well... when $f$ is open (or closed). A nice criterion is: $X$ compact, $Y$ Hausdorff, then $f$ is a closed map. Indeed let $C\subset X$ be closed, then $C$ is compact. The continuous image of a compact set is compact, so $f(C)\subset Y$ is compact, and thus closed.
Note: I interpreted your question as: "When is $f$ a homeomorphism onto its image?" Obviously, as Daniel Fisher stated in a comment, $X$ and $f(X)$ can be homeomorphic without $f$ being a homeomorphism.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/459764",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Examples of uncountable sets with zero Lebesgue measure I would like examples of uncountable subsets of $\mathbb{R}$ that have zero Lebesgue measure and are not obtained by the Cantor set.
Thanks.
| Let $(r_n)_{n\in\mathbb{N}}$ be a dense sequence in $\mathbb{R}$ (it could, for example, be an enumeration of the rationals). For $k \in \mathbb{N}$, let
$$U_k = \bigcup_{n\in\mathbb{N}} (r_n - 2^{-(n+k)},\, r_n + 2^{-(n+k)}).$$
$U_k$ is a dense open set with Lebesgue measure $\leqslant 2^{2-k}$, thus
$$N = \bigcap_{k\in\mathbb{N}} U_k$$
is a set of the second category (Baire), hence uncountable, and has Lebesgue measure $0$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/459849",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "44",
"answer_count": 6,
"answer_id": 0
} |
Average of all 6 digit numbers that contain only digits $1,2,3,4,5$ How do I find the average of all $6$ digit numbers which consist of only digits $1,2,3,4$ and $5$?
Do I have to list all the possible numbers and then divide the sum by the count? There has to be a more efficient way, right?
Thank you!
| Don't want to completely give it away, but there are $5^6$ of these numbers as the first through sixth digits can all take on five different values. I'm sure there's something slicker you could do, but it should be easy to then sum them all up by evaluating the sum
$$
\sum_{a=1}^5 \sum_{b=1}^5 \sum_{c=1}^5 \sum_{d=1}^5 \sum_{e=1}^5 \sum_{f=1}^5 (a \cdot 10^5+b \cdot 10^4+ c \cdot 10^3+d \cdot 10^2+ e \cdot 10^1 + f \cdot 10^0)
$$
and dividing by the total number of them.
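There are only $5^6 = 15625$ such numbers, so the expected answer (each digit position averages $(1+2+3+4+5)/5 = 3$, which by linearity gives $3 \cdot 111111 = 333333$) can also be checked by brute force:

```python
from itertools import product

digits = range(1, 6)                       # digits 1..5 only
numbers = [int("".join(map(str, t))) for t in product(digits, repeat=6)]

total = sum(numbers)
count = len(numbers)
assert count == 5 ** 6                     # 15625 numbers in all
print(total // count, total % count)       # 333333 0 -- the average is exact
```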
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/459880",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "13",
"answer_count": 6,
"answer_id": 2
} |
How to check a set of ring is a subring? To check a subset of a given ring is a subring,
is it enough to check that the subset is closed under induced operations(multiplication and addition) or
do I also need to show that it contains 0 and additive inverses of each element?
| You do need to show that it contains an additive inverse for each of its elements. (For example, $\mathbb{N}$ is not a subring of $\mathbb{Z}$ though it is closed under addition and multiplication.) Provided that you know the subset is nonempty, this together with it being closed under addition will then imply that $0$ is in there.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/459944",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 4,
"answer_id": 2
} |
$(a^{n},b^{n})=(a,b)^{n}$ and $[a^{n},b^{n}]=[a,b]^{n}$? How to show that $$(a^{n},b^{n})=(a,b)^{n}$$ and $$[a^{n},b^{n}]=[a,b]^{n}$$ without using modular arithmetic? Seems to have very interesting applications.
Try: $(a^{n},b^{n})=d\Longrightarrow d\mid a^{n}$ and $d\mid b^n$
| Show that the common divisors of $a^n$ and $b^n$ all divide $(a,b)^n$ and that any divisor of $(a,b)^n$ divides $a^n$ and $b^n$ (the proofs are pretty straight forward). It might be useful to consider prime factorization for the second direction.
Similarly, show that $[a,b]^n$ is a multiple of $a^n$ and $b^n$ but that it is also the smallest such multiple (and again there, prime factorization might be useful in the second case). If you need more details just ask.
EDIT :
Let's use prime factorization, I think this way everything makes more sense. To show equality, it suffices to show that the prime powers dividing either side are the same. Let's show $(a^n, b^n) = (a,b)^n$ first.
Since $p$ is prime, $p$ divides $a$ if and only if it divides $a^n$ and similarly for $b$ ; so if $p$ does not divide $a$ or $b$, then $p^0 = 1$ is the greatest power of $p$ that divides both sides. If $p$ divides both $a$ and $b$, let $p^k$ be the greatest power of $p$ dividing $(a,b)$, so that $p^{kn}$ is the greatest power of $p$ dividing $(a,b)^n$. Since $p^k$ divides $a$, $p^{kn}$ divides $a^n$, and similarly for $b$. For obvious reasons the greatest power of $p$ dividing both $a^n$ and $b^n$ must be a power of $p^n$. But if $p^{(k+1)n}$ divided both $a^n$ and $b^n$, then $p^{(k+1)}$ would divide $a$ and $b$, contradicting the fact that $p^k$ is the greatest power of $p$ dividing $(a,b)$. Therefore $p^{kn}$ is the greatest power of $p$ dividing $(a^n,b^n)$ and the greatest power of $p$ dividing $(a,b)^n$, so taking the product over all primes, $(a^n,b^n) = (a,b)^n$.
For $[a^n,b^n] = [a,b]^n$ you can do very similar techniques as with the gcd, except all the 'greatest' are replaced by 'smallest' in the last proof, and 'division' is replaced by 'being a multiple of'.
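Both identities are also easy to spot-check numerically before proving them; a throwaway sketch:

```python
from math import gcd

def lcm(a, b):
    return a * b // gcd(a, b)

# a few arbitrary sample values of my choosing
for a in (4, 6, 12, 35):
    for b in (6, 10, 15, 49):
        for n in (1, 2, 3, 5):
            assert gcd(a**n, b**n) == gcd(a, b) ** n
            assert lcm(a**n, b**n) == lcm(a, b) ** n
print("both identities hold on all samples")
```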
Hope that helps,
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/460231",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 2
} |
Evaluate the integral $\int_{0}^{+\infty}\frac{\arctan \pi x-\arctan x}{x}dx$ Compute improper integral : $\displaystyle I=\int\limits_{0}^{+\infty}\dfrac{\arctan \pi x-\arctan x}{x}dx$.
| We have
$$\int_a^b \dfrac{\arctan \pi x-\arctan x}{x}dx=\int_{\pi a}^{\pi b}\dfrac{\arctan x}{x}dx-\int_{ a}^{ b}\dfrac{\arctan x}{x}dx\\=\int_{ b}^{\pi b}\dfrac{\arctan x}{x}dx-\int_{ a}^{ \pi a}\dfrac{\arctan x}{x}dx$$
and since the function $\arctan$ is increasing so
$$\arctan( b)\log\pi=\arctan( b)\int_b^{\pi b}\frac{dx}{x}\leq\int_{ b}^{\pi b}\dfrac{\arctan x}{x}dx\\ \leq\arctan(\pi b)\int_b^{\pi b}\frac{dx}{x}=\arctan(\pi b)\log\pi$$
so if $b\to\infty$ we have
$$\int_{ b}^{\pi b}\dfrac{\arctan x}{x}dx\to\frac{1}{2}\pi\log\pi$$
and by a similar method we prove that
$$\int_{ a}^{ \pi a}\dfrac{\arctan x}{x}dx\to 0,\quad a\to0$$
hence we conclude
$$\displaystyle I=\int\limits_{0}^{+\infty}\dfrac{\arctan \pi x-\arctan x}{x}dx=\frac{1}{2}\pi\log\pi$$
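The value can be confirmed numerically. Substituting $x=e^u$ (my own choice of check, not part of the proof) turns the integral into $\int_{-\infty}^{\infty}\big(\arctan(\pi e^u)-\arctan(e^u)\big)\,du$, whose integrand decays exponentially in both directions, so a plain trapezoid-style sum on a truncated range is extremely accurate:

```python
from math import atan, exp, pi, log

h = 0.01
steps = 8000      # cover u in [-40, 40]; the tails are ~e^(-40), negligible
val = h * sum(atan(pi * exp(-40 + i * h)) - atan(exp(-40 + i * h))
              for i in range(steps + 1))

print(val, (pi / 2) * log(pi))    # both ~ 1.7981
```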
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/460307",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "13",
"answer_count": 5,
"answer_id": 0
} |
Distributing persons into cars We want to distribute $10$ persons into $6$ different cars, knowing that each car can take at most three persons. How many ways are there to do it? The order of the persons inside a car is not important, and a car can be empty.
| If we put $i$ people in the 1-st car, there's $\binom{10}{i}$ ways to do this. Once this is done, we put $j$ people in the 2-nd car, and there's $\binom{10-i}{j}$ ways to do this. And so on, until we get to the final car, where we attempt to put in all of the unassigned passengers. If there's more than 3, we discard this case.
Hence the number of ways is: $$\scriptsize \sum_{i=0}^3 \binom{10}{i} \sum_{j=0}^3 \binom{10-i}{j} \sum_{k=0}^3 \binom{10-i-j}{k} \sum_{\ell=0}^3 \binom{10-i-j-k}{\ell} \sum_{m=0}^3 \binom{10-i-j-k-\ell}{m} [0 \leq 10-i-j-k-\ell-m \leq 3].$$
Here $[0 \leq 10-i-j-k-\ell-m \leq 3]$ takes the value $1$ if $0 \leq 10-i-j-k-\ell-m \leq 3$ is true and $0$ otherwise.
In GAP this is computed by
WithinBounds:=function(n)
if(n>=0 and n<=3) then return 1; fi;
return 0;
end;;
Sum([0..3],i->Binomial(10,i)*Sum([0..3],j->Binomial(10-i,j)*Sum([0..3],k->Binomial(10-i-j,k)*Sum([0..3],l->Binomial(10-i-j-k,l)*Sum([0..3],m->Binomial(10-i-j-k-l,m)*WithinBounds(10-i-j-k-l-m))))));
which returns $36086400$.
Alternatively, let $\mathcal{G}$ be the set of partitions of $\{1,2,\ldots,10\}$ of size at most $6$ with parts of size at most $3$. Given a partition $P \in \mathcal{G}$, there are $\binom{6}{|P|} |P|!$ ways to distribute the passengers among the cars in such a way to as give rise to the partition $P$ (after discarding empty cars). So, the number is also given by $$\sum_{P \in \mathcal{G}} \binom{6}{|P|} |P|!.$$
This is implemented in GAP via:
S:=Filtered(PartitionsSet([1..10]),P->Size(P)<=6 and Maximum(List(P,p->Size(p)))<=3);;
Sum(S,P->Binomial(6,Size(P))*Factorial(Size(P)));
which also returns $36086400$.
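The same count is quick to reproduce outside GAP; here is a Python sketch of the first nested-sum formula (note that once $0 \leq 10-i-j-k-\ell-m \leq 3$ holds, all the binomial arguments are automatically non-negative):

```python
from math import comb

total = 0
for i in range(4):
    for j in range(4):
        for k in range(4):
            for l in range(4):
                for m in range(4):
                    rest = 10 - i - j - k - l - m   # passengers left for car 6
                    if 0 <= rest <= 3:
                        total += (comb(10, i) * comb(10 - i, j)
                                  * comb(10 - i - j, k)
                                  * comb(10 - i - j - k, l)
                                  * comb(10 - i - j - k - l, m))
print(total)   # 36086400
```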
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/460359",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Question involving exponential tower of 19 Consider:
$$
y = \underbrace{19^{19^{\cdot^{\cdot^{\cdot^{19}}}}}}_{101 \text{ times}}
$$
with the tower containing a hundred $ 19$s. Take the sum of the digits of the resulting number. Again, add the digits of this new number and get the sum. Keep doing this process till you reach a single-digit number. Find the number.
Here's what I tried so far: Every number which is a power of $19$ has to end in either $1$ or $9$. Also, by playing around a bit, taking different powers of $19$, I found that regardless of whether the exponent is odd or even, the single-digit number obtained at the end is always $1$. I've been trying to prove this, but I have no idea how to do it. Can anyone help me out?
| $$10^0a_0+10^1a_1+10^2a_2+\ldots+10^na_n=(a_0+a_1+\ldots+a_n)+\text{a multiple of }9.$$
Therefore taking the sum of the digits of a number gives you a number that leaves the same remainder, in the division by $9$, as the one you had before (and it is also smaller as long as the original number is not $<10$). Therefore the process ends with a one-digit number that leaves the same remainder under division by $9$ are your number.
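In other words, repeated digit-summing computes the remainder mod $9$ (with $9$ standing in for a remainder of $0$), and since $19 \equiv 1 \pmod 9$, every power of $19$, in particular the whole tower, has digital root $1$. A quick check on a two-level tower:

```python
def digital_root(n):
    """Repeatedly sum the decimal digits until one digit remains."""
    while n >= 10:
        n = sum(int(d) for d in str(n))
    return n

n = 19 ** 19            # a two-level tower; huge, but exact in Python
print(n % 9)            # 1, since 19 = 1 (mod 9)
print(digital_root(n))  # 1
```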
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/460520",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
} |
Folliation and non-vanishing vector field. The canonical foliation on $\mathbb{R}^k$ is its decomposition into parallel sheets $\{t\} \times \mathbb{R}^{k-1}$ (as oriented submanifolds). In general, a foliation $\mathcal{F}$ on a compact, oriented manifold $X$ is a decomposition into $1-1$ immersed oriented manifolds $Y_\alpha$ (not necessarily compact) that is locally given (preserving all orientations) by the canonical foliation in a suitable chart at each point. For example, the lines in $\mathbb{R}^2$ of any fixed slope (possibly irrational) descend to a foliation on $T^2 = \mathbb{R}^2/\mathbb{Z}^2$.
(a) If $X$ admits a foliation, prove that $\chi(X) = 0$. (Hint: Partition of unity.)
(b) Prove (with suitable justification) that $S^2 \times S^2$ does not admit a foliation as defined above.
Theorem
A compact, connected, oriented manifold $X$ possesses a nowhere vanishing vector field if and only if its Euler characteristic is zero.
Question: How could $X$ in this problem satisfy the connectness property in the theorem? Can I just say if it is not connected, treat each connected component individually?
I assume that your manifold and foliation are smooth and the foliation is of codimension 1; otherwise see Jack Lee's comment. Then pick a Riemannian metric on $X$ and at each point $x\in X$ take the unit vector $u_x$ orthogonal to the leaf $F_x$ through $x$: there are two choices, but since your foliation is transversally orientable, you can make a consistent choice of $u_x$. Then $u$ is a nonvanishing vector field on $X$.
In fact, orientability is irrelevant: Clearly, it suffices to consider the case when $X$ is connected. Then you can pass to a 2-fold cover $\tilde{X}\to X$ so that the foliation ${\mathcal F}$ on $X$ lifts to a transversally oriented foliation on $\tilde{X}$. See Proposition 3.5.1 of
A. Candel, L. Conlon, "Foliations, I", Springer Verlag, 1999.
You should read this book (and, maybe, its sequel, "Foliations, II") if you want to learn more about foliations.
Then $\chi(\tilde{X})=0$. Thus, $\chi(X)=0$ too. Now, recall that a smooth compact connected manifold admits a nonvanishing vector field if and only if it has zero Euler characteristic. Thus, $X$ itself also admits a nonvanishing vector field.
Incidentally, Bill Thurston proved in 1976 (Annals of Mathematics) that the converse is also true: Zero Euler characteristic for a compact connected manifold implies existence of a smooth codimension 1 foliation. This converse is much harder.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/460665",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11",
"answer_count": 1,
"answer_id": 0
} |
What would have been our number system if humans had more than 10 fingers? Try to solve this puzzle:
The first expedition to Mars found only the ruins of a civilization.
From the artifacts and pictures, the explorers deduced that the
creatures who produced this civilization were four-legged beings with
a tentatcle that branched out at the end with a number of grasping
"fingers". After much study, the explorers were able to translate
Martian mathematics. They found the following equation:
$$5x^2 - 50x + 125 = 0$$
with the indicated solutions $x=5$ and $x=8$. The value $x=5$ seemed
legitimate enough, but $x=8$ required some explanation. Then the explorers
reflected on the way in which Earth's number system developed, and found
evidence that the Martian system had a similar history. How many fingers would
you say the Martians had?
$(a)\;10$
$(b)\;13$
$(c)\;40$
$(d)\;25$
P.S. This is not a home work. It's a question asked in an interview.
The correct answer is (a) 10.
In the Martian base $b$, the equation reads $5x^2 - (5b)x + (b^2+2b+5) = 0$, so the sum of the roots is $5b/5 = b$. With roots $5$ and $8$ this gives $b = 13$: the Martians had $13$ fingers. There is no comment about which number system the given answers refer to. As all other numbers refer to the Martian number system, we can safely assume the answers do as well, and $13$ written in base $13$ is $10$.
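The base can also be recovered mechanically: interpret the digit strings in an unknown base $b$ and demand that both quoted roots satisfy the equation (a small sketch; the digit $8$ appearing as a root forces $b > 8$):

```python
# "50" read in base b is 5*b, and "125" read in base b is b**2 + 2*b + 5
solutions = [b for b in range(9, 30)
             if all(5 * x * x - (5 * b) * x + (b * b + 2 * b + 5) == 0
                    for x in (5, 8))]
print(solutions)   # [13] -- the Martians counted in base 13,
                   # and 13 written in base 13 is "10": answer (a)
```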
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/460729",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "74",
"answer_count": 13,
"answer_id": 8
} |
difference between expected values of distributions let us assume two distributions $p$ and $p'$ over the set of naturals $N$.
Is the following property true?
$\sum_{n \in N} p(n) \cdot n \le \sum_{n \in N} p'(n) \cdot n$
IFF
for all $0 \le u \le 1$
$\sum_{n \in N} p(n) \cdot u^{n} \ge \sum_{n \in N} p'(n) \cdot u^n$
Thanks for your help!
| Call $X$ a random variable with distribution $p$ and $Y$ a random variable with distribution $p'$, then one considers the assertions:
*
*$E[X]\leqslant E[Y]$
*$E[u^X]\geqslant E[u^Y]$ for every $u$ in $[0,1]$
Assertion 2. implies assertion 1. because $X=\lim\limits_{u\to1}\frac1{1-u}\cdot(1-u^X)$ and the limit is monotone.
Assertion 1. does not imply assertion 2., as witnessed by the case $E[u^X]=\frac23+\frac13u$ and $E[u^Y]=\frac34+\frac14u^2$.
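The counterexample is easy to verify with exact arithmetic. Here $X$ takes the values $0,1$ with probabilities $\frac23,\frac13$ and $Y$ takes the values $0,2$ with probabilities $\frac34,\frac14$, which is just reading off the two generating functions above:

```python
from fractions import Fraction as F

pX = {0: F(2, 3), 1: F(1, 3)}        # E[u^X] = 2/3 + u/3
pY = {0: F(3, 4), 2: F(1, 4)}        # E[u^Y] = 3/4 + u^2/4

EX = sum(p * n for n, p in pX.items())
EY = sum(p * n for n, p in pY.items())
assert EX <= EY                      # assertion 1 holds: 1/3 <= 1/2

def gen(p, u):                       # E[u^X] for a rational u
    return sum(pr * u**n for n, pr in p.items())

u = F(1, 10)
print(gen(pX, u) < gen(pY, u))       # True: assertion 2 fails at u = 1/10
```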
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/460890",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Proof of strong Holder inequality Let $a>1$ and $f,g :\left(0,1\right) \rightarrow \left(0,\infty\right)$ measurable functions, $B$ a measurable subset of $\left(0,1\right)$ such that $$\left(\int_{C} f^2 dt\right)^{1/2} \left(\int_{C} g^2 dt\right)^{1/2} \geq a \int_{C} fg dt$$ for all $C$ measurable subset of $B$. Prove that $B$ has Lebesgue measure zero. Is the same true if we consider a probability measure and Borel subsets of $\left(0,1\right)$ and Borel functions?
| Yes, it's true in more general situations.
Let $\mu$ a positive measure on $X$, and $f,\, g \colon X \to (0,\,\infty)$ measurable. Let $a > 1$. Then every measurable $B$ with $\mu(B) > 0$ contains a measurable $C \subset B$ with
$$\left(\int_C f^2\,d\mu\right)^{1/2} \left(\int_C g^2\,d\mu\right)^{1/2} < a\cdot \int_C fg\,d\mu.$$
For $c > 1$, $n\in \mathbb{Z}$, measurable $M$ with $\mu(M) > 0$, and measurable $h\colon X \to (0,\,\infty)$, let
$$S(c,k,M,h) := \{ x \in M : c^k \leqslant h(x) < c^{k+1}\}.$$
Each $S(c,k,M,h)$ is measurable, and
$$M = \bigcup_{k \in \mathbb{Z}} S(c,k,M,h)$$
where the union is disjoint. Since $\mu(M) > 0$, at least one $S(c,n,M,h)$ has positive measure.
Choose $1 < c < \sqrt{a}$ and set $m_f = c^n$ where $n\in\mathbb{Z}$ is such that $A = S(c,n,B,f)$ has positive measure. Let $k \in\mathbb{Z}$ such that $C = S(c,k,A,g)$ has positive measure, and set $m_g = c^k$.
On $C$, we have $m_f \leqslant f(x) < c\cdot m_f$ and $m_g \leqslant g(x) < c\cdot m_g$, hence
$$\begin{align}
\int_C fg\,d\mu &\geqslant \int_C m_f\cdot m_g\, d\mu = m_f m_g \cdot \mu(C),\\
\left(\int_C f^2\,d\mu\right)^{1/2} \left(\int_C g^2\,d\mu\right)^{1/2}
&< \left(\int_C(c\cdot m_f)^2\, d\mu\right)^{1/2} \left(\int_C (c\cdot m_g)^2\, d\mu\right)^{1/2}\\
&= c\cdot m_f \sqrt{\mu(C)} \cdot c\cdot m_g \sqrt{\mu(C)}\\
&= c^2\cdot m_f m_g\cdot \mu(C)\\
&< a\cdot m_f m_g\cdot \mu(C)\\
&\leqslant a \int_C fg\,d\mu.
\end{align}$$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/460974",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
How to prove Disjunction Elimination rule of inference I've looked at the tableau proofs of many rules of inference (double-negation, disjunction is commutative, modus tollendo ponens, and others), and they all seem to use the so-called "or-elimination" (Disjunction Elimination) rule:
$$(P\vdash R), (Q\vdash R), (P \lor Q) \vdash R$$
(If $P\implies R$ and $Q\implies R$, and either $P$ or $Q$ (or both) are true, then $R$ must be true.)
It's often called the "proof by cases" rule, it makes sense, and I've seen the principle used in many mathematical proofs.
I'm trying to figure out how to logically prove this rule (using other rules of inference and/or replacement), however the proof offered is self-reliant! Is this an axiom?
(There's also the Constructive Dilemma rule, which looks like a more generalized version of Disjunction Elimination. Maybe the proof of D.E. depends on C.D.? or maybe C.D. is an extension of D.E.?)
| The rules of Disjunction Elimination and Constructive Dilemma are interchangeable.
You can prove Disjunction Elimination from Constructive Dilemma, and
you can prove Constructive Dilemma from Disjunction Elimination.
So whichever you have you can prove the other.
Your second question: is Disjunction Elimination an axiom?
In a strict logical sense axioms are formulas that are treated as self-evidently true.
Rules of inference are not formulas so in the strict sense they cannot be axioms.
But seen in a more relaxed way, they are treated as self-evidently true, so you could call it a metalogical axiom or something like it.
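For propositional rules like these, "self-evidently true" can at least be checked mechanically: the corresponding conditional formula is a tautology. A brute-force check over all truth assignments, covering both Disjunction Elimination and Constructive Dilemma:

```python
from itertools import product

def implies(a, b):
    """Material conditional a -> b."""
    return (not a) or b

booleans = (False, True)

# Disjunction Elimination: (P -> R), (Q -> R), (P or Q)  entail  R
de = all(
    implies(implies(p, r) and implies(q, r) and (p or q), r)
    for p, q, r in product(booleans, repeat=3)
)

# Constructive Dilemma: (P -> Q), (R -> S), (P or R)  entail  (Q or S)
cd = all(
    implies(implies(p, q) and implies(r, s) and (p or r), q or s)
    for p, q, r, s in product(booleans, repeat=4)
)

print(de, cd)  # True True
```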
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/461030",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 4,
"answer_id": 2
} |
Proving a Set is NOT a vector space Before I begin, I will emphasis I DO NOT want the full solution. I just want some hints.
Show that the set $S=\{\textbf{x}\in \mathbb{R}^3: x_{1} \leq 0$ and $x_{2}\geq 0 \}$ with the usual rules for addition and multiplication by a scalar in $\mathbb{R}^3$ is NOT a vector space by showing that at least one of the vector space axioms is not satisfied. Give a geometric interpretation of the result.
My solution (so far): To show this, I will provide a counter example, I have selected axiom 6 (closure under multiplication of a scalar).
$\textbf{x} = \begin{pmatrix}x_{1}\\ x_{2}\\ x_{3}\end{pmatrix}$
Let $\lambda = -1, x_{1} = -2, x_{2} = 2, x_{3}=1$
$\lambda \textbf{x} = \lambda \begin{pmatrix}x_{1}\\ x_{2}\\ x_{3}\end{pmatrix}$
$= -1 \begin{pmatrix}-2\\ 2\\ 1\end{pmatrix}$
$= \begin{pmatrix}2\\ -2\\ -1\end{pmatrix}$
Clearly $\begin{pmatrix}2\\ -2\\ -1\end{pmatrix} \notin S$, since $x_{1} \nleqslant 0$ and $x_{2} \ngeqslant 0$, so the axiom of closure under multiplication by a scalar does not hold. Hence $S$ is not a vector space.
My questions:
*
*Is my solution correct/reasoning? How can it be improved? (Please note I am new to Linear Algebra)
*Are there more axioms for which it doesn't hold besides the one I listed?
*It says to give a geometric interpretation of this result. I'm not sure how to go about doing this. Any hints?
| Absolutely! A single counterexample is all you need. Nice work.
In general elements of $S$ will not have additive inverses in $S$. (Can you determine the exceptions?) Otherwise, the axioms are satisfied.
Geometrically speaking, I recommend that you focus on the lack of additive inverses. Note that if $A$ is a set of vectors such that every element of $A$ has an additive inverse in $A,$ then $A$ will be symmetric about the origin. That, in itself, won't be sufficient to make $A$ a vector subspace, but it will be necessary. Your set $S$ here is an octant of $3$-space. In general, an octant will not be a vector subspace, but a union of octants may be. (When?)
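The counterexample in the question can be spot-checked in a couple of lines (a sketch; `in_S` is just the membership test for $S$):

```python
def in_S(v):
    """Membership test for S = {x in R^3 : x1 <= 0 and x2 >= 0}."""
    x1, x2, x3 = v
    return x1 <= 0 and x2 >= 0

v = (-2, 2, 1)
w = tuple(-1 * c for c in v)   # scalar multiple with lambda = -1

print(in_S(v))  # True
print(in_S(w))  # False: S is not closed under scalar multiplication
```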
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/461136",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 6,
"answer_id": 4
} |
Conjugates of $12^{1/5}+54^{1/5}-144^{1/5}+648^{1/5}$ over $\mathbb{Q}$ After much manual computation I found the minimal polynomial to be $x^5+330x-4170$, although I would very much like to know if there's a clever way to see this. I suspect there is from seeing that the prime factorisations of the four integers are as various powers of $2$ and $3$ and because the number can be written as: $$12^{1/5}+54^{1/5}-(12^{1/5})^2+(12^{1/5}\cdot 54^{1/5})$$
But I haven't yet been able to find anything better than manual computation of the fifth power.
However, from here I am lost, I'm not sure how to solve this equation. A quick internet search returns much information on the general case in terms of the "Bring-Gerrard normal form" but the book from which this problem was taken hasn't gone into any detail on general methods for solving polynomials, so I am trying to find a solution that doesn't require any heavy machinery.
| Let $x$ be your number.
You can immediately see that $x \in \Bbb Q(12^{1/5}, 54^{1/5})$ (or even $x \in \Bbb Q(2^{1/5},3^{1/5})$, from your remark about the prime factors).
It is not too hard to show that its normal closure is $\Bbb Q(\zeta_5,12^{1/5},54^{1/5}) ( = \Bbb Q(\zeta_5,2^{1/5},3^{1/5}))$, and from there you can investigate the relevant Galois group and find the conjugates.
However, you have found that the number is actually of degree $5$, which should be a surprise at first : the situation is simpler than that.
After investigating a bit, we see that $54 = 12^3 / 2^5$, and so $54^{1/5}$ is already in $\Bbb Q(12^{1/5})$, and letting $y = 12^{1/5}$, we have $x = y + y^3/2 - y^2 + y^4/2$.
So now the problem reduces to finding the conjugates of $y = 12^{1/5}$. Its minimal polynomial is clearly $y^5-12=0$, and its $5$ roots are the $\zeta_5^k 12^{1/5}$ where $\zeta_5$ is a primitive $5$th root of $1$. To get the conjugates of $x$ you simply replace $y$ with any of its conjugates in the above formula for $x$.
In simpler terms : your calculations should actually show that "if $y^5 = 12$, then $(y+y^3/2-y^2+y^4/2)$ is a root of $x^5+330x-4170 = 0$". By using $5$ different $y$ such that $y^5 = 12$, you obtain $5$ roots of $x^5+330x-4170 = 0$. However, proving that they are distinct, and that the polynomial is irreducible, doesn't seem easy to do.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/461188",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
} |
Proof a $2^n$ by $2^n$ board can be filled using L shaped trominoes and 1 monomino Suppose we have an $2^n\times 2^n$ board. Prove you can use any rotation of L shaped trominoes and a monomino to fill the board completely.
You can mix different rotations in the same tililng.
| Forgive me if what is below is ‘old hat’ or easily found on some other site. I only found this site whilst solving the similar problem: for which $n$ can an $n\times n$ square be covered without overlapping by T shapes each comprising four small squares. The fact that an $n\times n$ square can be almost covered by L shapes comprising three small squares, leaving any prechosen square free, is true for all $n\ge6$, $3\nmid n$. A very simple proof can be found in The Mathematical Gazette of November 1999.
Dr. R.B.J.T. Allenby (long retired from Leeds University)
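For the $2^n \times 2^n$ case asked about, the standard inductive proof is constructive: split the board into four quadrants, lay one tromino on the three central cells of the quadrants not containing the missing square, and recurse. A sketch of that construction (the monomino position $(0,0)$ is an arbitrary choice; the same recursion handles any hole):

```python
def tile(board, top, left, size, hr, hc, counter):
    """Tile a size x size sub-board, except its hole at (hr, hc),
    with L-trominoes labelled by consecutive integers."""
    if size == 1:
        return
    counter[0] += 1
    label = counter[0]
    half = size // 2
    # the four cells around the centre, one per quadrant
    centre = {(0, 0): (top + half - 1, left + half - 1),
              (0, 1): (top + half - 1, left + half),
              (1, 0): (top + half,     left + half - 1),
              (1, 1): (top + half,     left + half)}
    hole_quadrant = (int(hr >= top + half), int(hc >= left + half))
    sub_holes = {}
    for quadrant, (r, c) in centre.items():
        if quadrant == hole_quadrant:
            sub_holes[quadrant] = (hr, hc)
        else:
            board[r][c] = label      # one tromino covers these 3 cells
            sub_holes[quadrant] = (r, c)
    for (i, j), (r, c) in sub_holes.items():
        tile(board, top + i * half, left + j * half, half, r, c, counter)

n = 8                               # a 2^3 x 2^3 board
board = [[0] * n for _ in range(n)]
board[0][0] = -1                    # the monomino
tile(board, 0, 0, n, 0, 0, [0])
print(all(v != 0 for row in board for v in row))  # True: fully covered
```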
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/461252",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 4,
"answer_id": 3
} |
Where did this statistics formula come from: $E[X^2] = \mu^2 + \sigma^2$ I am studying statistics and I need some guidance as to where this formula came from. All I know is that $\displaystyle E[X^2] = \sum_{i=0}^n x_i^2\, p(x_i)$
| Edit: Thanks to Did's comment, and as an alternative answer.
You can use the following definition:
If $X$ is any random variable with distribution $F_{X}(x)$, then
$$\mu_{X}=\int \limits_{0}^{+\infty}\left({1-F_{X}(x)}\right)dx-\int\limits_{-\infty}^{0}F_{X}(x)dx,$$
$$\sigma_{X}^{2}=\int \limits_{0}^{+\infty} 2x \left( {1-F_{X}(x)+F_{X}(-x) } \right)dx-\mu_{X}^2$$
and then show that
$$E[X^2]=\int \limits_{0}^{+\infty} 2x \left( {1-F_{X}(x)+F_{X}(-x) } \right)dx$$
to conclude your equality.
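The identity in the title is just the variance formula $\sigma^2 = E[X^2] - \mu^2$ rearranged, and it can be checked exactly on a small discrete example (a fair die, using exact rational arithmetic; the example is my own, not from the answer above):

```python
from fractions import Fraction

outcomes = range(1, 7)              # a fair six-sided die
p = Fraction(1, 6)                  # each face has probability 1/6

mu = sum(p * k for k in outcomes)                 # E[X]     = 7/2
ex2 = sum(p * k * k for k in outcomes)            # E[X^2]   = 91/6
var = sum(p * (k - mu) ** 2 for k in outcomes)    # sigma^2  = 35/12

print(ex2 == mu ** 2 + var)  # True, exactly
```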
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/461307",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 2,
"answer_id": 1
} |
A basic question on the definition of order In the first chapter of Rudin's analysis book "order" on a set is defined as follows :
Let $S$ be a set. An order on $S$ is a relation, denoted by $<$, with the following two properties :
(i) If $x \in S$ and $y \in S$ then one and only one of the statements
$$ x < y, x=y, y<x $$ is true.
(ii) If $x,y,z \in S$, then $x < y$ and $y < z$ implies $x<z$.
How is this different from the usual partial/total order notation? This looks like a total order. Why is "order" defined like this? Moreover, he has not defined $=$ here.
| The root of your difficulty seems to be that people use different conventions for defining "order". The first issue arises in the definition of "partial order", by which some people mean a reflexive, transitive, and antisymmetric relation $\leq$, while other people mean an irreflexive, transitive relation $<$. Given a partial order on a set $A$ in either sense, one can easily define the corresponding order in the other sense, by adjoining or removing the pairs $(a,a)$ for all $a\in A$. So people don't usually worry too much about the distinction, but technically they are two different notions of order. I'm accustomed to calling the reflexive version a "partial order" and the irreflexive version a "strict partial order", but there are people who prefer to use the shorter name "partial order" for the irreflexive version.
Total (or linear) orders are then defined in the reflexive version by requiring $a\leq b$ or $b\leq a$ for all $a,b\in A$, and in the irreflexive version by instead requiring $a<b$ or $a=b$ or $b<a$.
Again, one can convert either sort of total order to the other sort, just as with partial orders.
To further increase the confusion, some people use "order" (without any adjective) to mean a partial order, while others use it to mean a total order. So the final result is that "order" can have any of four meanings. You just have to get used to that; you'll get no support for denouncing Rudin just because he chose a convention different from the one you learned first.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/461465",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 0
} |
Proving that pullback objects are unique up to isomorphism In Hungerford's Algebra he defines a pullback of morphisms $f_1 \in \hom(X_1,A)$ and $f_2 \in \hom(X_2,A)$ as a commutative diagram
$$\require{AMScd}
\begin{CD}
P @>{g_1}>>X_1\\
@V{g_2}VV @V{f_1}VV \\
X_2 @>{f_2}>> A
\end{CD}$$
satisfying the universal property that for any commutative diagram
$$\require{AMScd}
\begin{CD}
Q @>{h_1}>>X_1\\
@V{h_2}VV @V{f_1}VV \\
X_2 @>{f_2}>> A
\end{CD}$$
there exists a unique morphism $t: Q \to P$ such that $h_i = g_i \circ t$. He then asks the reader to establish that
For any other pullback diagram with $P'$ in the upper-left corner $P \cong P'$.
How do we obtain this isomorphism?
The obvious choice seems to be considering the two morphisms $t: P \to P'$, $t': P' \to P$ and show that they compose to the identity. To this end,
$$h_1 = g_1 \circ t \implies h_1\circ 1 = h_1 \circ t' \circ t$$
but we cannot cancel unless $h_1$ is monic. Can we claim that necessarily $t \circ t'$ is the identity, since comparing $(P,g_1,g_2)$ with itself there exists a unique morphism $t'': P \to P$?
| As said by Martin Brandenburg in the comments, you stated the universal property wrong (not relevant anymore since the edit of the original post).
A pullback of $f_1\colon X_1 \to A,f_2\colon X_2 \to A$ is a diagram
$$ \require{AMScd}
\begin{CD}
P @>{g_1}>>X_1\\
@V{g_2}VV @V{f_1}VV \\
X_2 @>{f_2}>> A
\end{CD} $$
satisfying that for any other diagram
$$\require{AMScd}
\begin{CD}
Q @>{h_1}>>X_1\\
@V{h_2}VV @V{f_1}VV \\
X_2 @>{f_2}>> A
\end{CD}$$
there exists a unique $t \colon Q \to P$ with $h_1 = g_1 \circ t$ and $h_2 = g_2 \circ t$ (the two evident triangles commute).
So now, if you have two pullbacks $P$ (with projections $g_1,g_2$) and $P'$ (with projections $g_1',g_2'$), there are $t \colon P \to P',\ t' \colon P' \to P$ with $g_i = g_i' \circ t$ and $g_i' = g_i \circ t'$ for $i = 1,2$.
Notably, the arrow $t' \circ t \colon P \to P$ satisfies $g_i \circ (t' \circ t) = g_i$, i.e. it makes the pullback diagram for $P$ commute. By the universal property of the pullback $P$, such an arrow is unique: do you see another arrow $P \to P$ satisfying the same property? Then it must equal $t' \circ t$.
Starting from here and elaborating a similar argument with the pullback $P'$, you should be able to prove the uniqueness up to isomorphism.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/461543",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 0
} |
Starting with $\frac{-1}{1}=\frac{1}{-1}$ and taking square root: proves $1=-1$ In this blog post, RJ. Lipton mentions an example of common mathematical traps. In particular, that ``square root is not a function''. He shows the following trap:
Start with:
$\frac{-1}{1}=\frac{1}{-1}$, then take the square root of both sides:
$$
\frac{\sqrt{-1}}{\sqrt{1}}=\frac{\sqrt{1}}{\sqrt{-1}}
$$
hence
$$
\frac{i}{1} = \frac{1}{i} \\
i^2=1 \enspace ,
$$
which contradicts the definition that $i^2=-1$.
Question 1: I know that the square root is not a function because it is multi-valued, but I still can not wrap my head around this example. Where was the problem exactly? Was it that we
*
*can not convert $\sqrt{1/-1}$ to $\sqrt{1}/\sqrt{-1}$?
*both the RHS and LHS are unordered sets?
*both?
Question 2: Also, does this problem only arise in equalities or in general algebraic manipulation? Because it would be a nightmare when manipulating an expression with fractional powers! Are there easy rules to determine what is safe to do with fractional powers? To see what I mean, there is another example of a similar trap:
One might easily think that $\sqrt[4]{16x^2y^7}$ is equivalent to $2x^{1/2}y^{7/4}$, which is not true for $x=-1$ and $y=1$.
| Without going into complex analysis, I think this is the simplest way I can explain this. Let $f(x) = \sqrt{x}$. Note that the (maximal) domain of $f$ is the set of all non-negative numbers. And how is this defined? $f(x) = \sqrt{x}$ is equal to a non-negative number $y$ such that $y^2=x$. In this sense, square root is a function! It is called the principal square root.
In contrast, the following correspondence is not a function: the relation $g$ takes in a value $x$ and returns a value $y$ such that $y^2=x$. For example, under $g$, 1 corresponds to two values, $1,-1$.
Now, the property of distributing a square root over a product is only proven (at least in precalculus) over the domain of the principal square root, that is, only for non-negative numbers. Given this, there is no basis for the step
$$\frac{\sqrt{1}}{\sqrt{-1}} = \frac{\sqrt{-1}}{\sqrt{1}}.$$
As to why this property is not true, the best explanation for now is that because $-1$ is not in the domain of the principal square root. Hence, $\sqrt{-1}$ does not actually make sense, as far as our definition of square root is concerned. In complex analysis, more can be said. As a commenter mentioned, this has something to do with branches of logarithms.
For your second question, I think it is safe that you always keep in mind what the domain of the function is. If you will get a negative number inside an even root, then you can't distribute the even root over products or quotients.
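This is easy to watch happen in floating-point complex arithmetic, where `cmath.sqrt` computes exactly the principal square root discussed above:

```python
import cmath

lhs = cmath.sqrt(-1 / 1)               # sqrt(-1), principal value: i
rhs = cmath.sqrt(1) / cmath.sqrt(-1)   # 1 / i = -i

print(lhs, rhs)
print(lhs == rhs)   # False: sqrt does not distribute over this quotient
```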
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/461695",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 0
} |
Rudin 4.22 Theorem Could you help me understand
why 1. f(H) = B and
why 2. $\bar A$ $\cap$ B is empty and
why 3. $\bar G$ $\cap$ H is empty?
| In 2. (@Antoine's solution) the closure is understood in the subspace topology.
In 3. An indirect proof is (technically) simpler.
Assume that $x\in \overline G\cap H$. Then $f(x)\in f(\overline G)\subseteq \overline A$, and $f(x)\in f(H)=B$, which is a contradiction because $\overline A\cap B=\emptyset$. In fact, $f(H)\subseteq B$ is enough for the conclusion, that is trivial.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/461746",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 4,
"answer_id": 2
} |
Injective function $f(x) = x + \sin x$ How can I prove that $f(x) = x + \sin x$ is injective function on set $x \in [0,8]$?
I think that I should show that for any $x_1, x_2 \in [0,8]$ such that $x_1 \neq x_2$ we have $f(x_1) \neq f(x_2)$ id est $x_1 + \sin x_1 \neq x_2 + \sin x_2$. But I don't know what can I do next.
Equivalently I can show that for any $x_1, x_2 \in [0,8]$ such that $f(x_1) = f(x_2)$ we have $x_1 = x_2$ but I have again problem.
| Hint: What is the sign of $f'(x)$. What does it tell you about $f(x)$?
Since $f'(x)\ge0$, the function is non-decreasing. So if $f(a)=f(b)$ for some $a<b$, then $f(x)$ would have to be constant on the interval $[a,b]$. This would imply $f'(x)=0$ for each $x\in[a,b]$. But there is no non-trivial interval such that $f'(x)$ is zero for each point of the interval.
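A crude numerical sanity check of this argument (sampling $f$ on a grid over $[0,8]$; near $x=\pi$, where $f'(x)=1+\cos x$ vanishes, the increments get tiny but stay positive):

```python
import math

f = lambda x: x + math.sin(x)
xs = [i * 0.01 for i in range(801)]          # grid on [0, 8]
vals = [f(x) for x in xs]

print(all(b > a for a, b in zip(vals, vals[1:])))  # True: strictly increasing
```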
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/461827",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
How many values of $n<50$ exist that satisfy$ (n-1)! \ne kn$ where n,k are natural numbers? How many natural numbers less than 50 exist that satisfy $ (n-1)! \ne kn$ where n,k are natural numbers and $n \lt 50$ ?
when n=1
$0!=1*1$
when n=2
$1!\ne2*1$
...
...
...
when n=49
$48!=\frac{48!}{49}*49$
Here $k = \frac{48!}{49}$ and $n =49$
| If you're only interested in the case of up to n = 49 then the answer isn't very enlightening... the answer is 16.
As the hints above suggest, you need to look at what happens when n is a prime, although you also need to take care in this case with n = 4 (can you see why?).
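The count of 16 is easy to confirm by brute force; by Wilson's theorem, the $n$ with $n \nmid (n-1)!$ are exactly the primes below 50 together with $n = 4$:

```python
import math

# n in 1..49 such that (n-1)! is NOT a multiple of n
bad = [n for n in range(1, 50) if math.factorial(n - 1) % n != 0]
print(bad)
print(len(bad))  # 16
```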
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/461914",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 2
} |
How to solve the Riccati's differential equation I found this question in a differential equation textbook as a question
The equation
$$
\frac{dy}{dx} =A(x)y^2 + B(x)y +C(x)
$$
is called Riccati's equation
show that if $f$ is any solution of the equation, then the transformation
$$
y = f + \frac{1}{v}
$$
reduces it to a linear equation in $v$.
I am not understanding this, what does $f$ mean? How can we find the solution with the help of the solution itself. I hope anyone could help me to solve this differential equation.
| The question is wrongly posed, which is probably why you don't comprehend it. Here's the correct one:
Let $y$ and $f$ be solutions to the above diff. equation such that $y=f+1/v$ for some function $v(x)$. Show that $v$ satisfies a linear diff. equation.
The solution is provided by Amzoti.
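For reference, the substitution can be carried out directly (a sketch of the standard computation; the linked solution by Amzoti is not reproduced in this entry). Writing $y = f + 1/v$ and using that $f$ itself solves the Riccati equation:

```latex
y' = f' - \frac{v'}{v^{2}}
   = A\Big(f + \tfrac{1}{v}\Big)^{2} + B\Big(f + \tfrac{1}{v}\Big) + C
   = \underbrace{Af^{2} + Bf + C}_{=\,f'} + \frac{2Af + B}{v} + \frac{A}{v^{2}},

\text{so, cancelling } f' \text{ and multiplying through by } -v^{2}:
\qquad v' = -\big(2A(x)f(x) + B(x)\big)\,v - A(x),
```

which is a first-order linear equation in $v$, as required.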
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/462075",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11",
"answer_count": 2,
"answer_id": 1
} |
Is the MLE strongly consistent and asymptotically efficient for exponential families? It is known that the Maximum Likelihood Estimator (MLE) is strongly consistent and asymptotically efficient under certain regularity conditions. By strongly consistent I mean that $\hat{\theta}_{MLE} \rightarrow \theta$ almost surely. By asymptotically efficient I mean that $\sqrt{n}(\hat{\theta}_{MLE}-\theta)\rightarrow N(0,I^{-1}(\theta))$ in distribution.
These regularity conditions are cumbersome to check so I was wondering if there is a general and easy to check case for when the regularity conditions hold. For example, do these regularity conditions always hold for exponential families?
I am not asking anyone to prove this, I am just wondering if someone knows the answer.
Regularity Conditions for Asymptotic Efficiency: http://en.wikipedia.org/wiki/Maximum_likelihood#Asymptotic_normality
Regularity Conditions for Strong Consistency: http://en.wikipedia.org/wiki/Maximum_likelihood#Consistency
| In a paper I read a while ago, which can be found here, the authors propose a set of conditions of eigenvalues which are weaker than the usual regularity conditions. They show that under this set of conditions, the usual MLE are strongly consistent for (nonlinear) exponential families.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/462124",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
} |
Why does factoring eliminate a hole in the limit? $$\lim _{x\rightarrow 5}\frac{x^2-25}{x-5} = \lim_{x\rightarrow 5} (x+5)$$
I understand that to evaluate a limit that has a zero ("hole") in the denominator we have to factor and cancel terms, and that the original limit is equal to the new and simplified limit. I understand how to do this procedurally, but I'd like to know why this works. I've only been told the methodology of expanding the $x^2-25$ into $(x-5)(x+5)$, but I don't just want to understand the methodology which my teacher tells me to "just memorize", I really want to know what's going on. I've read about factoring in abstract algebra, and about irreducible polynomials (just an example...), and I'd like to get a bigger picture of the abstract algebra in order to see why we factor the limit and why the simplified is equal to the original if it's missing the $(x-5)$, which has been cancelled. I don't want to just memorize things, I would really like to understand, but I've been told that this is "just how we do it" and that I should "practice to just memorize the procedure."
I really want to understand this in abstract algebra terms, please elaborate. Thank you very much.
| One of the definitions of $\lim_{x \to A} f(x) = B$ is:
$$\forall_{\varepsilon > 0}\exists_{\delta > 0}\forall_{0 < \left|x - A\right| < \delta}\left|f(x) - B\right| < \varepsilon$$
The intuition is that we can achieve arbitrary 'precision' (put in bounds on y axis) provided we get close enough (so we get the bounds on x axis). However the definition does not say anything about the value at the point $f(A)$ which can be undefined or have arbitrary value.
One method of proving the limit is to directly find a $\delta(\varepsilon)$. Hence we have the following formula (well defined as $x\neq 5$):
$$\forall_{0 < \left|x - 5\right| < \delta}\left|\frac{x^2-25}{x-5} - 10\right| < \varepsilon$$
As $x\neq 5$ (when $x = 5$ we would have $\left|x - 5\right| = 0$), we can factor the expression out
$$\forall_{0 < \left|x - 5\right| < \delta} \left|x + 5 - 10\right| < \varepsilon $$
$$\forall_{0 < \left|x - 5\right| < \delta} \left|x - 5 \right| < \varepsilon $$
Taking $\delta(\varepsilon) = \varepsilon$ we find that:
$$\forall_{\varepsilon > 0}\exists_{\delta > 0}\forall_{0 < \left|x - 5\right| < \delta}\left|\frac{x^2-25}{x-5} - 10\right| < \varepsilon$$
The key thing is that we don't care about value at the limit.
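Numerically this is visible too: the quotient is undefined at $x = 5$ itself, but takes values as close to $10$ as we like nearby (a quick check, assuming nothing beyond the definition):

```python
def f(x):
    return (x ** 2 - 25) / (x - 5)   # undefined at x = 5 itself

for h in (1e-2, 1e-4, 1e-6):
    print(h, f(5 + h), f(5 - h))     # both columns approach 10 as h shrinks
```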
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/462199",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "162",
"answer_count": 15,
"answer_id": 2
} |
Show that $f:I\to\mathbb{R}^{n^2}$ defined by $f(t)=X(t)^k$ is differentiable Let $I$ be a interval, $\mathbb{R}^{n^2}$ be the set of all $n\times n$ matrices and $X:I \to\mathbb{R}^{n^2}$ be a differentiable function. Given $k\in\mathbb{N}$, define $f:I\to\mathbb{R}^{n^2}$ by $f(t)=X(t)^k$. How to prove that $f$ is differentiable?
Thanks.
| Hint: The entries of $X(t)$ are obviously differentiable, and the entries of $f(t)$ are polynomials in the entries of $X(t)$.
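An alternative route to the hint: matrix powers obey a product rule, $\frac{d}{dt}X(t)^k = \sum_{j=0}^{k-1} X^{j} X' X^{k-1-j}$. A numeric sanity check for $k=2$ (the example curve is an arbitrary choice), comparing a central difference against $X'X + XX'$:

```python
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def matadd(A, B):
    return [[A[i][j] + B[i][j] for j in range(2)] for i in range(2)]

def X(t):                     # an arbitrary differentiable matrix curve
    return [[1.0, t], [t * t, 2.0]]

def Xp(t):                    # its entrywise derivative
    return [[0.0, 1.0], [2.0 * t, 0.0]]

def f(t):                     # f(t) = X(t)^2
    return matmul(X(t), X(t))

t, h = 0.5, 1e-6
numeric = [[(f(t + h)[i][j] - f(t - h)[i][j]) / (2 * h) for j in range(2)]
           for i in range(2)]
exact = matadd(matmul(Xp(t), X(t)), matmul(X(t), Xp(t)))   # X'X + XX'

err = max(abs(numeric[i][j] - exact[i][j]) for i in range(2) for j in range(2))
print(err)   # tiny: the two derivatives agree
```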
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/462247",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Possible values of difference of 2 primes Is it true that for any even number $2k$, there exists primes $p, q$ such that $p-q = 2k$?
Polignac's conjecture talks about having infinitely many consecutive primes whose difference is $2k$. This has not been proven or disproven.
This is a more general version of my question on the possible value of prime gaps.
Of course, the odd case is easily done.
| In short, this is an open problem (makes Polignac seem really tough then, doesn't it?).
The sequence A020483 of the OEIS tracks this in terms of $a(n) =$ the least prime $p$ such that $p + 2n$ is also prime. On the page it mentions that this is merely conjectured.
In terms of a positive result, I am almost certain that Chen's theorem stating that every even number can be written as either $p + q$ or $p + q_1q_2$ is attained by sieving methods both powerful enough and loose enough to also give that every even number can be written as either a difference of two primes or a difference of a prime and an almost-prime (or of an almost-prime and a prime). I think I've even seen this derived before.
This would come as a corollary of Polignac's conjecture or of the vastly stronger Schinzel's hypothesis H, which is one of those conjectures that feels really far from reach to me. I suppose that it has been proved on average over function fields (I think), so perhaps that's hopeful. On the other hand, so has the Riemann Hypothesis.
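Experimentally the conjecture looks very safe for small values; a brute-force check (the prime bound 500 is an arbitrary choice) that every even number up to 50 is a difference of two primes:

```python
def primes_below(limit):
    """Sieve of Eratosthenes: all primes p < limit."""
    sieve = [True] * limit
    sieve[0:2] = [False, False]
    for p in range(2, int(limit ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p::p] = [False] * len(sieve[p * p::p])
    return [i for i, is_p in enumerate(sieve) if is_p]

ps = primes_below(500)
prime_diffs = {p - q for p in ps for q in ps if p > q}
missing = [2 * k for k in range(1, 26) if 2 * k not in prime_diffs]
print(missing)  # []: every even number 2..50 is a difference of two primes
```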
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/462318",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10",
"answer_count": 1,
"answer_id": 0
} |
What is wrong in saying that the map $F:\mathbb{S}^1 \times I \to \mathbb{S}^1$ defined by $F(z,t)=z^{t+1}$ is a homotopy from $f$ to $g$? Let $\mathbb{S}^1$ be the unit circle of complex plane and $f,g:\mathbb{S}^1 \to \mathbb{S}^1$ be two maps defined by $f(z)=z$ and $g(z)=z^2$. What is wrong in saying that the map $F:\mathbb{S}^1 \times I \to \mathbb{S}^1$ defined by $F(z,t)=z^{t+1}$ is a homotopy from $f$ to $g$?
Can someone tell me please what is wrong? Thanks in advance.
| Raising a complex number to a non-integer power is more complicated than you're realizing. The "function" $z^{t}$ is really multivalued when $t\notin\mathbb{Z}$, and even after choosing a branch, it won't be continuous.
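The discontinuity is concrete: with the principal branch (which is what Python's complex power uses), $z \mapsto z^{3/2}$ jumps across the negative real axis, so $F(z,t)=z^{t+1}$ cannot be continuous in $z$ at $t = 1/2$. A small numeric illustration:

```python
import cmath, math

eps = 1e-3
z_above = cmath.exp(1j * (math.pi - eps))    # just above the negative real axis
z_below = cmath.exp(1j * (-math.pi + eps))   # just below it

w_above = z_above ** 1.5                     # principal branch of z^(3/2)
w_below = z_below ** 1.5

print(abs(z_above - z_below))   # ~0.002: the inputs are nearly equal
print(abs(w_above - w_below))   # ~2: the outputs jump across the branch cut
```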
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/462452",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Infinite Limit Problems Can someone help me to solve these problems?
*
*$$\lim_{n\to\infty}\sum_{k=1}^n\left|e^{2\pi ik/n}-e^{2\pi i(k-1)/n}\right|$$
*$$ \lim_{x\to\infty}\left(\frac{3x-1}{3x+1}\right)^{4x}$$
*$$\lim_{n\to\infty}\left(1-\frac1{n^2}\right)^n $$
(Original scan of problems at http://i.stack.imgur.com/4t12K.jpg)
These questions are from the ISI Kolkata Computer Science PhD entrance exam. The original questions are at http://www.isical.ac.in/~deanweb/sample/MMA2013.pdf
| For the first one, consider a geometric interpretation. Recall that when $a$ and $b$ are complex numbers, $|a-b|$ is the distance in the plane between the points $a$ and $b$. Peter Tamaroff says in a comment that the limit is $2\pi$, which I believe is correct.
Addendum: The points $e^{2\pi ik/n}$ are spaced evenly around the unit circle. The distance $\left|e^{2\pi ik/n} - e^{2\pi i(k-1)/n}\right|$ is the distance from one point to the next. So we are calculating the perimeter of a regular $n$-gon which, as $n$ increases, approximates the perimeter of the unit circle arbitrarily closely. The answer is therefore $2\pi$.
For the third one, factor $$\left(1-\frac1{n^2}\right)$$ as
$$ \left(1-\frac1{n}\right) \left(1+\frac1{n}\right)$$ and then apply the usual theorem about $\lim_{n\to\infty}\left(1+\frac an\right)^n$.
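Both claims are easy to sanity-check numerically: the first sum is the perimeter of a regular $n$-gon inscribed in the unit circle (tending to $2\pi$), and the third expression tends to $1$:

```python
import cmath, math

def polygon_perimeter(n):
    """Sum of |e^(2*pi*i*k/n) - e^(2*pi*i*(k-1)/n)| for k = 1..n."""
    pts = [cmath.exp(2j * math.pi * k / n) for k in range(n + 1)]
    return sum(abs(b - a) for a, b in zip(pts, pts[1:]))

print(polygon_perimeter(10000))        # close to 2*pi
print(2 * math.pi)

n = 10 ** 6
print((1 - 1 / n ** 2) ** n)           # close to 1
```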
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/462529",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 1
} |
How to approximate unknown two-dimensional function? I have a surface defined by values on a two-dimensional grid. I would like to find an approximate function which would give me a value for any arbitrary point within certain range of xs and ys.
My general idea is to construct some sort of polynomial, and then tweak its coefficients by using some sort of evolutionary algorithm until the polynomial behaves as I want it to. But how should my polynomial look in general?
| You might be interested in Lagrange interpolation (link only talks about one-variable version). Just input 100 points from your values and you'll probably get something good. You may need more points for it to be accurate at locations of high variance. If you insist on using an evolutionary algorithm, you might use it to choose near-optimal choices of data points, but in many cases you could probably do almost as well by spreading them around evenly.
Two-variable interpolation isn't on Wikipedia. But it is Google-able, and I found it described in sufficient generality in section 3 of this paper. Or, if you have access to MATLAB, a "static" 2D-interpolater has already been coded in this script (I haven't used this one, so it might be unwieldy or otherwise unsuited to your purposes).
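If the grid is regular and the surface reasonably smooth, plain bilinear interpolation (the 2-D analogue of linear interpolation, and roughly what such library routines do under the hood) may already suffice; a self-contained sketch, no polynomial fitting or evolutionary search needed:

```python
import bisect

def bilinear(xs, ys, grid, x, y):
    """Interpolate grid[i][j] = f(xs[i], ys[j]) at a point (x, y)
    inside the grid.  xs and ys must be sorted ascending."""
    i = min(max(bisect.bisect_right(xs, x) - 1, 0), len(xs) - 2)
    j = min(max(bisect.bisect_right(ys, y) - 1, 0), len(ys) - 2)
    tx = (x - xs[i]) / (xs[i + 1] - xs[i])
    ty = (y - ys[j]) / (ys[j + 1] - ys[j])
    return (grid[i][j] * (1 - tx) * (1 - ty)
            + grid[i + 1][j] * tx * (1 - ty)
            + grid[i][j + 1] * (1 - tx) * ty
            + grid[i + 1][j + 1] * tx * ty)

# demo grid sampled from f(x, y) = 2x + 3y, which bilinear interpolation
# reproduces exactly (it is exact on functions linear in each variable)
xs, ys = [0.0, 1.0, 2.0], [0.0, 1.0, 2.0]
grid = [[2 * x + 3 * y for y in ys] for x in xs]
print(bilinear(xs, ys, grid, 0.5, 1.25))   # 4.75
```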
Hope this helps you out!
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/462595",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Finding a Hopf Bifucation with eigenvalues I am trying to show that the following 2D system has a Hopf bifurcation at $\lambda=0$:
\begin{align}
x' =& y + \lambda x \\
y' =& -x + \lambda y - x^2y
\end{align}
I know that I could easily plot the system with a CAS but I wish to analytical methods. So, I took the Jacobian:
\begin{equation}
J = \begin{pmatrix} \lambda&1\\-1-2xy&\lambda-x^2\end{pmatrix}
\end{equation}
My book says I should look at the eigenvalues of the Jacobian and find where the real part of the eigenvalue switches from $-$ to $+$. This would correspond to where the
system changes stability. So I took the $\det(J)$:
\begin{align}
\det(J) =& -\lambda x^2 + 2xy + \lambda^2 + 1 = 0
\end{align}
I am stuck here with algebra and am not quite sure how to find out where the eigenvalues switch from negative real part to positive real part. I would like to use the
quadratic formula but the $2xy$ term throws me off.
How do I proceed? Thanks for all the help!
| Step 1: As jkn stated, the first step is to find the equilibrium, where $\dot x=\dot y=0$. It is easy to see that the equilibrium is $(0, 0)$.
Step 2: Compute the eigenvalues of the Jacobian at the equilibrium $(0, 0)$.
$$J= \left(\begin{array}{cc}
\lambda &1\\
-1& \lambda\\
\end{array}\right)$$
Thus the characteristic equation is $$\det (\tilde\lambda I-J)=0$$
The above equation admits the two eigenvalues $\lambda\pm i$; the imaginary part is $\omega=1$.
At $\lambda=0$ the dynamical system therefore has a pair of purely imaginary roots $\pm i$; moreover,
$$\frac{d\tilde\lambda}{d\lambda}|_{\lambda=0}=\frac{d}{d\lambda}|_{\lambda=0}(\lambda\pm i)=1>0$$
Therefore the dynamical system exhibits a Hopf bifurcation at $\lambda=0$.
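The eigenvalue crossing in Step 2 can be verified numerically (a small sketch using the closed form for $2\times 2$ eigenvalues):

```python
import cmath

def jacobian_eigs(lam):
    # Jacobian at the equilibrium (0, 0): [[lam, 1], [-1, lam]]
    tr, det = 2 * lam, lam * lam + 1
    disc = cmath.sqrt(tr * tr - 4 * det)
    return (tr + disc) / 2, (tr - disc) / 2

for lam in (-0.1, 0.0, 0.1):
    e1, e2 = jacobian_eigs(lam)
    print(lam, e1, e2)   # real parts: negative, zero, positive
```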
Step 3: Compute the normal form of the Hopf bifurcation. Consider
$$(J-\tilde\lambda I)\left(\begin{array}{cc}
q_1\\
q_2\\
\end{array}\right)=0$$
we get the eigenvector $$\left(\begin{array}{cc}
q_1\\
q_2\\
\end{array}\right)=\left(\begin{array}{cc}
i\\
1\\
\end{array}\right)$$.
On the other hand, consider$$(J^T-\overline{\tilde\lambda} I)\left(\begin{array}{cc}
p_1\\
p_2\\
\end{array}\right)=0$$
we get the eigenvector $$\left(\begin{array}{cc}
p_1\\
p_2\\
\end{array}\right)=\left(\begin{array}{cc}
i\\
1\\
\end{array}\right)$$
It is always possible to normalize $\mathbf p$ with respect to $\mathbf q$:
$$<\mathbf p, \mathbf q>=1, \mbox{ with } <\mathbf p, \mathbf q>=\bar p_1q_1+\bar p_2 q_2 $$
Thus $$\left(\begin{array}{cc}
p_1\\
p_2\\
\end{array}\right)=\frac{1}{2}\left(\begin{array}{cc}
i\\
1\\
\end{array}\right)$$
We denote the nonlinear term $$\mathbf F:=\left(\begin{array}{cc}
F_1\\
F_2\\
\end{array}\right)
=\left(\begin{array}{cc}
0\\
-x^2y\\
\end{array}\right)$$
Introduce the complex variable $$z:=<\left(\begin{array}{cc}
p_1\\
p_2\\
\end{array}\right),
\left(\begin{array}{cc}
x\\
y\\
\end{array}\right)>$$
Then the dynamical system becomes
$$\dot z=(\lambda+i)z+g(z, \bar z,\lambda) \mbox{ with }g(z, \bar z,\lambda) =<\left(\begin{array}{cc}
p_1\\
p_2\\
\end{array}\right),
\left(\begin{array}{cc}
F_1\\
F_2\\
\end{array}\right)>$$
Some direct computations show
$$g(z, \bar z,\lambda)=\frac{1}{2}(z^3-z^2\bar z-z\bar z^2+\bar z^3):=\frac{g_{30}}{6}z^3+\frac{g_{21}}{2}z^2\bar z+\frac{g_{12}}{2}z\bar z^2+\frac{g_{03}}{6}\bar z^3$$
We recall that the first Lyapunov coefficient is
$$l_1=\frac{1}{2\omega^2}\mbox{Re} (ig_{20}g_{11}+\omega g_{21})$$
Since $\omega=1$, $g_{20}=g_{11}=0$, $g_{21}=-1$, we obtain
$$l_1=-\frac{1}{2}<0$$
Therefore the Hopf bifurcation at $\lambda=0$ is supercritical.
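As a quick numerical sanity check (not part of the original argument), one can integrate the system reconstructed from the question's Jacobian, which is assumed here to be $\dot x=\lambda x+y$, $\dot y=-x+\lambda y-x^2y$, for a small $\lambda>0$ and watch a small stable limit cycle appear, as a supercritical Hopf bifurcation predicts. A minimal Python sketch with a hand-rolled RK4 step:

```python
import math

def rk4_step(f, state, dt):
    # One classical fourth-order Runge-Kutta step for an autonomous system.
    k1 = f(state)
    k2 = f([s + 0.5 * dt * k for s, k in zip(state, k1)])
    k3 = f([s + 0.5 * dt * k for s, k in zip(state, k2)])
    k4 = f([s + dt * k for s, k in zip(state, k3)])
    return [s + dt * (a + 2 * b + 2 * c + d) / 6
            for s, a, b, c, d in zip(state, k1, k2, k3, k4)]

def limit_cycle_radii(lam, x0=0.01, y0=0.0, dt=0.01, steps=60000):
    # System inferred from the question's Jacobian (an assumption):
    # x' = lam*x + y,  y' = -x + lam*y - x^2*y
    f = lambda s: [lam * s[0] + s[1],
                   -s[0] + lam * s[1] - s[0] ** 2 * s[1]]
    state = [x0, y0]
    radii = []
    for i in range(steps):
        state = rk4_step(f, state, dt)
        if i > steps // 2:          # discard the transient
            radii.append(math.hypot(state[0], state[1]))
    return min(radii), max(radii)

r_min, r_max = limit_cycle_radii(0.05)
```

For $\lambda=0.05$ the trajectory grows away from the origin and settles onto a cycle of radius roughly $\sqrt{8\lambda}\approx0.63$ (a rough averaging estimate) instead of decaying to zero, consistent with a supercritical Hopf bifurcation.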
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/462666",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Mathematical Games suitable for undergraduates I am looking for mathematical games for an undergraduate `maths club' for interested students. I am thinking of things like topological tic-tac-toe and singularity chess. I have some funding for this so it would be nice if some of these games were available to purchase, although I am open to making the equipment myself if necessary.
So the question is: what are some more examples of games that should be of interest to undergraduates in maths?
| Ticket to Ride is a very easy game to learn and can lead to some interesting discussions of graph theory. On a more bitter note, a game of Settlers of Catan never fails to provide a wonderful example of the difference between theoretical and empirical probability.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/462723",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11",
"answer_count": 7,
"answer_id": 3
} |
Finding all bifurcations in a 2D system I want to find all bifurcations for the system:
\begin{align}
x' =& -x+y\\
y' =& \frac{x^2}{1+x^2}-\lambda y
\end{align}
So far, I am using the idea that the intersection of the nullclines ($y'=0$ or $x'=0$) gives equilibrium points. So:
\begin{align}
x'=& 0 \implies x=y \\
y'=& 0 \implies y=\frac{x^2}{\lambda(1+x^2)}
\end{align}
Clearly $(0, 0)$ is an equilibrium point but there is also the possibility of two more equilibrium points when the parabola given
by the above equation intersects the line $y=x$. To find the intersection I solved:
\begin{align}
x =& \frac{x^2}{\lambda(1+x^2)}\\
\lambda(1+x^2) =& x\\
\lambda x^2-x+\lambda =& 0 \\
x =& \frac{1\pm \sqrt{1-4\lambda^2}}{2\lambda}
\end{align}
I have these two intersection points. Now I need to vary $\lambda$ to find where the curve passing through the intersection points
becomes tangent to $y=x$ and hence we would expect one equilibrium point instead of these two at this particular value of $\lambda$. Then continuing the variation of $\lambda$ in the same direction we would expect no equilibrium points from these two.
Hence we originally had $2$ equilibrium points and then they coalesced and finally annihilated each other. This is a saddle-node bifucation.
How do I show this for my variation of $\lambda$? Are there any other bifurcations?
EDIT: Consider the discriminant of the equation:
\begin{align}
x = \frac{1\pm\sqrt{1-4\lambda^2}}{2\lambda}
\end{align}
\begin{align}
1-4\lambda^2 =& 0 \\
1 =& 4\lambda^2 \\
1 =& \pm 2\lambda \\
\pm\frac{1}{2} =& \lambda
\end{align}
So, I plotted the system with $\lambda = \frac{1}{2}$: sage: P = plot(x^2/ .5*(1+x^2), 0, 2) + plot(x, 0, 2)
We no longer have two intersection points and instead have one. I was expecting the curves to be tangent. When does it happen that the curves are tangent? Actually I just realized that SAGE was not dividing by the $1+x^2$ term, so I added the extra set of parentheses and everything works as expected!
| Notice that $x$ is real only when the discriminant $1-4\lambda^2$ is nonnegative. That is, the nullclines $x'=0$ and $y'=0$ have no intersection away from the origin when $1-4\lambda^2<0$, which is exactly what happens for $|\lambda|>\frac12$.
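A quick numerical check of this (a short Python sketch, not part of the original answer): count the real roots of $\lambda x^2-x+\lambda=0$ on either side of $\lambda=\frac12$.

```python
import math

def equilibria(lam):
    # Real solutions of lam*x^2 - x + lam = 0 (the nontrivial equilibria).
    disc = 1 - 4 * lam ** 2
    if disc < 0:
        return []
    if disc == 0:
        return [1 / (2 * lam)]
    r = math.sqrt(disc)
    return [(1 - r) / (2 * lam), (1 + r) / (2 * lam)]

below = equilibria(0.4)  # discriminant 0.36 > 0: two equilibria
at = equilibria(0.5)     # discriminant 0: the two collide
above = equilibria(0.6)  # discriminant < 0: none remain
```

Two equilibria exist for $0<\lambda<\frac12$, collide at $\lambda=\frac12$, and disappear for $\lambda>\frac12$: the signature of a saddle-node bifurcation.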
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/462805",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Using the FTOC to find the derivative of the integral. I'm apologizing ahead of time because I don't know how to format an integral.
If I have the following integral: $$\int_{x^2}^5 (4x+2)\;dx.$$
I need to find the derivative, so could I do the following?
Multiply the integral by -1 and swap the limits of integration so that they are from 5 to x^2.
Then, use FTOC and say that the derivative will be $-((4x^2)+2))$.
Is this correct?
| You have
$$
f(x) = \int_{x^2}^5 4t + 2 \; dt
$$
and you want to find $f'(x)$.
First note that
$$
f(x) = -\int_5^{g(x)} 4t + 2\; dt
$$
where $g(x) = x^2$. So $f(x)$ is the composition of two functions:
$$
f(x) = h(g(x)).
$$
where
$$\begin{align}
h(x) &= -\int_5^x 4t + 2\; dt \quad \text{and}\\
g(x) &= x^2.
\end{align}
$$
So by the chain rule you have
$$
f'(x) = h'(g(x))\color{red}{g'(x)} = -(4g(x) + 2)\color{red}{g'(x)} = \dots
$$
It looks like you forgot the derivative of the inner function.
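To see the chain-rule factor concretely, here is a small numerical check (a sketch, using the closed form $f(x)=60-2x^4-2x^2$ obtained by integrating $4t+2$ directly):

```python
def f(x):
    # f(x) = integral from x^2 to 5 of (4t + 2) dt = [2t^2 + 2t] evaluated,
    # which works out to 60 - 2x^4 - 2x^2.
    return 60 - 2 * x ** 4 - 2 * x ** 2

def fprime_claimed(x):
    # -(4*g(x) + 2) * g'(x) with g(x) = x^2
    return -(4 * x ** 2 + 2) * (2 * x)

h = 1e-6
x = 1.3
numeric_derivative = (f(x + h) - f(x - h)) / (2 * h)  # central difference
```

The central difference agrees with $-(4x^2+2)\cdot 2x$, whereas the answer without the inner derivative $g'(x)=2x$ would be off by that factor.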
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/462871",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Does the graph of $y + |y| = x + |x|$ represent a function of $x$? The question is whether or not the graph $y + |y| = x + |x|$ represents a function of $x$. Explain why.
It looks like a weird graph, but is it a function? For each $x$, do you get exactly one $y$ value $f(x)=y$?
| It is a function only if for every $x$ you only have one $y$ value satisfying the equation.
Now look, for example, at the value $x=-1$ with $y=-1$, or $x=-1$ with $y=0$: both pairs satisfy the equation.
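The failure is easy to exhibit by brute force (a small Python sketch): for $x=-1$ the right side is $0$, and every $y\le 0$ makes the left side $0$ too.

```python
def satisfies(x, y):
    return y + abs(y) == x + abs(x)

# Many different y values pair with the single input x = -1:
solutions_for_minus_one = [y for y in [-5, -1, -0.5, 0, 1] if satisfies(-1, y)]
```

Since several $y$ values satisfy the equation for one $x$, the graph fails the vertical line test and does not represent a function of $x$.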
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/462933",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 2
} |
Show that if $a$ has order $3\bmod p$ then $a+1$ has order $6\bmod p$. Show that if $a$ has order $3\bmod p$ then $a+1$ has order $6\bmod p$. I know I am supposed to use primitive roots but I think this is where I am getting caught up. The definition of primitive root is "if $a$ is a least residue and the order of $a\bmod p$ is $\phi(m)$ then $a$ is a primitive root of $m$. But I really am not sure how to use this to my advantage in solving anything.
Thanks!
| Note that if $a=1$, $a$ has order $1$. Thus, we can assume $a\ne1$. Furthermore, $p\ne2$ since no element mod $2$ has order $3$. Therefore, $-1\ne1\pmod{p}$.
$$
\begin{align}
(a+1)^3
&=a^3+3a^2+3a+1\\
&=1+3a^2+3a+1\\
&=-1+3(1+a+a^2)\\
&=-1+3\frac{a^3-1}{a-1}\\
&=-1
\end{align}
$$
Therefore, $(a+1)^3=-1$ and $(a+1)^6=1$. (Shortened a la ccorn).
$$
\begin{align}
(a+1)^2
&=a^2+2a+1\\
&=a+(a^2+a+1)\\
&=a+\frac{a^3-1}{a-1}\\
&=a\\
&\ne1
\end{align}
$$
Therefore, $(a+1)^2\ne1$.
Thus, $(a+1)$ has order $6$.
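A brute-force verification of the statement (a Python sketch, checking all primes $p\equiv 1 \pmod 3$ below $100$, since those are exactly the odd primes for which elements of order $3$ exist):

```python
def order(a, p):
    # Multiplicative order of a modulo the prime p (assumes gcd(a, p) = 1).
    k, x = 1, a % p
    while x != 1:
        x = x * a % p
        k += 1
    return k

checked = 0
for p in [7, 13, 19, 31, 37, 43, 61, 67, 73, 79, 97]:
    for a in range(2, p):
        if order(a, p) == 3:
            assert order(a + 1, p) == 6
            checked += 1
```

Each of these primes has exactly two elements of order $3$ (the group $(\mathbb{Z}/p)^*$ is cyclic), so $22$ cases get checked, and in every one $a+1$ has order $6$.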
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/463010",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Using Van-Kampen in $S^1\times S^1$ as cylinder I was trying to use Van-Kampen theorem to compute $S^1 \times S^1$ using its representation as a cylinder.
$U$= from a little below the middle circle to the upper part of the cylinder.
$V$= from a little above the middle circle to the bottom part of the cylinder.
Then $U$ is homotopic to the upper $S^1$.
Similarly, $V$ is homotopic to the bottom $S^1$
The intersection of $U$ and $V$ is homotopic to the middle circle, $U \cap V = S^1$.
Then $\pi(U)=<\alpha>$, $\pi(V)=<\beta>$, $\pi(U\cap V)=<\gamma>$.
Now $\alpha$ and $\beta$ are homotopic, since the upper circle and the bottom circle are homotopic in the cylinder.
$\pi(X)=<\alpha,\beta \mid \alpha=\beta>=<\alpha>$.
I know this is wrong but I don't understand where. I know that using the rectangular representation of the torus we can use Van-Kampen, just taking a point out and then a circle and we will get $\pi(X)=<\alpha,\beta \mid \alpha*\beta*\alpha^{-1}*\beta^{-1}=1>$.
I am trying to use van Kampen in this example and I can't do it.
More generally, can we use van Kampen on $X\times Y$?
| The product $S^1\times S^1$ is not a cylinder. It's a torus.
So what you've done is a correct computation of the fundamental group, but of a wrong space.
In general if you want to compute the fundamental group of a product of spaces, you don't need van Kampen. The group $\pi_1(A\times B)$ is just going to be the product of the fundamental groups $\pi_1(A)\times\pi_1(B)$. It's easy to see, since any loop in $A\times B$ is just a product of loops in $A$ and in $B$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/463054",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Inverse of a function $e^x + 2e^{2x}$ The function is $f(x) = e^x+2e^{2x}$
So to find the inverse I went
$$WTS: x = ____$$
$$y = e^x+2e^{2x}$$
$$ log3(y)=log3(3e^{2x})$$
$$ log3(y) = 2x$$
$$ x=\frac{log3(y)}{2}$$
Am i correct?
| No, you are wrong: $\log$ is not a linear function. Set $g(x)=e^x$; then $2g(x)^2+g(x)-y=0$.
Solving this quadratic equation, and taking the positive root since $y>0$, gives $g(x)=e^x=-{1\over4}+{1\over 2}\sqrt{{1\over4}+2y}$, hence $x=\ln\left(-{1\over4}+{1\over 2}\sqrt{{1\over4}+2y}\right)$.
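As a sanity check of this root (a short Python sketch, not part of the original answer), invert with a logarithm and confirm the round trip $x \mapsto f(x) \mapsto x$:

```python
import math

def f(x):
    return math.exp(x) + 2 * math.exp(2 * x)

def f_inverse(y):
    # Positive root of 2g^2 + g - y = 0 with g = e^x, then x = ln(g).
    g = -0.25 + 0.5 * math.sqrt(0.25 + 2 * y)
    return math.log(g)

round_trip_errors = [abs(f_inverse(f(x)) - x) for x in [-2.0, -0.3, 0.0, 1.5]]
```

The round trip reproduces each $x$ to machine precision, confirming the choice of the positive root.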
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/463110",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 1
} |
Prove that $\sum\limits_{i=1}^{n-k} (-1)^{i+1} \cdot \frac{(k-1+i)!}{k! \cdot(i-1)!} \cdot \frac{n!}{(k+i)! \cdot (n-k-i)!}=1$
Prove that, for $n>k$, $$\sum\limits_{i=1}^{n-k} (-1)^{i+1} \cdot \frac{(k-1+i)!}{k! \cdot(i-1)!} \cdot \frac{n!}{(k+i)! \cdot (n-k-i)!}=1$$
I found this problem in a book at the library of my my university, but sadly the book doesn't show a solution. I worked on it quite a long time, but now I have to admit that this is above my (current) capabilities. Would be great if someone could help out here and write up a solution for me.
| Note first that two factorials nearly cancel out, leaving us with $\frac1{k+i}=\int_0^1t^{k+i-1}\mathrm dt$, hence the sum to be computed is
$$
S=\sum_{i=1}^{n-k}(-1)^{i+1}n{n-1\choose k}{n-k-1\choose i-1}\int_0^1t^{k+i-1}\mathrm dt
$$
Second, note that
$$
\sum_{i=1}^{n-k}(-1)^{i+1}{n-k-1\choose i-1}t^{k+i-1}=t^k\sum_{j=0}^{n-k-1}{n-k-1\choose j}(-t)^{j}=t^k(1-t)^{n-k-1}
$$
hence
$$
S=n{n-1\choose k}\int_0^1t^k(1-t)^{n-k-1}\mathrm dt=n{n-1\choose k}\mathrm{B}(k+1,n-k)
$$
where the letter $\mathrm{B}$ refers to the beta numbers, defined as
$$
\mathrm{B}(i,j)=\frac{(i-1)!(j-1)!}{(i+j-1)!}
$$
All this yields
$$
S=n{n-1\choose k}\frac{k!(n-k-1)!}{n!}=1
$$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/463197",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 2,
"answer_id": 0
} |
Estimate $\displaystyle\int_0^\infty\frac{t^n}{n!}e^{-e^t}dt$ accurately. How can I obtain good asymptotics for $$\gamma_n=\displaystyle\int_0^\infty\frac{t^n}{n!}e^{-e^t}dt\text{ ? }$$
[This has been already done] In particular, I would like to obtain asymptotics that show $$\sum_{n\geqslant 0}\gamma_nz^n$$
converges for every $z\in\Bbb C$.
N.B.: The above are the coefficients when expanding $$\Gamma \left( z \right) = \sum\limits_{n \geqslant 0} {\frac{{{{\left( { - 1} \right)}^n}}}{{n!}}\frac{1}{{n + z}}} + \sum\limits_{n \geqslant 0} {{\gamma _n}{z^n}} $$
ADD Write $${c_n} = \int\limits_0^\infty {{t^n}{e^{ - {e^t}}}dt} = \int\limits_0^\infty {{e^{n\log t - {e^t}}}dt} $$
We can use something similar to Laplace's method with the expansion $${p_n}\left( x \right) = g\left( {{\rm W}\left( n \right)} \right) + g''\left( {{\rm W}\left( n \right)} \right)\frac{{{{\left( {x - {\rm W}\left( n \right)} \right)}^2}}}{2}$$
where $g(t)=n\log t-e^t$. That is, let $$\begin{cases} w_n={\rm W}(n)\\
{\alpha _n} = n\log {w_n} - {e^{{w_n}}} \\
{\beta _n} = \frac{n}{{w_n^2}} + {e^{{w_n}}} \end{cases} $$
Then we're looking at something asymptotically equal to $${C_n} = \exp {\alpha _n}\int\limits_0^\infty {\exp \left( { - {\beta _n}\frac{{{{\left( {t - {w_n}} \right)}^2}}}{2}} \right)dt} $$
| $$
\begin{align}
\sum_{n=0}^\infty\gamma_nz^n
&=\sum_{n=0}^\infty\int_0^\infty\frac{t^nz^n}{n!}e^{-e^t}\,\mathrm{d}t\\
&=\int_0^\infty e^{tz}e^{-e^t}\,\mathrm{d}t\\
&=\int_0^\infty e^{t(z-1)}e^{-e^t}\,\mathrm{d}e^t\\
&=\int_{1}^\infty u^{z-1}e^{-u}\,\mathrm{d}u\\
&=\Gamma(z,1)
\end{align}
$$
The Upper Incomplete Gamma Function is an entire function. According to this answer, the power series for an entire function has an infinite radius of convergence.
$\color{#C0C0C0}{\text{idea mentioned in chat}}$
$$
\begin{align}
\int_0^\infty\frac{x^{n-1}}{(n-1)!}e^{-e^x}\,\mathrm{d}x
&=\frac1{(n-1)!}\int_1^\infty\log(t)^{n-1}e^{-t}\frac{\mathrm{d}t}{t}\\
&=\frac1{n!}\int_1^\infty\log(t)^ne^{-t}\,\mathrm{d}t\\
&=\frac1{n!}\int_1^\infty e^{-t+n\log(\log(t))}\,\mathrm{d}t\\
\end{align}
$$
Looking at the function $\phi(t)=-t+n\log(\log(t))$, we see that it reaches its maximum when $t\log(t)=n$; i.e. $t_0=e^{\mathrm{W}(n)}=\frac{n}{\mathrm{W}(n)}$.
Using the estimate
$$
\mathrm{W}(n)\approx\log(n)-\frac{\log(n)\log(\log(n))}{\log(n)+1}
$$
from this answer, at $t_0$,
$$
\begin{align}
\phi(t_0)
&=-n\left(\mathrm{W}(n)+\frac1{\mathrm{W}(n)}-\log(n)\right)\\
&\approx n\log(\log(n))
\end{align}
$$
and
$$
\begin{align}
\phi''(t_0)
&=-\frac{\mathrm{W}(n)+1}{n}\\
&\approx-\frac{\log(n)}{n}
\end{align}
$$
According to the Laplace Method, the integral would be asymptotic to
$$
\begin{align}
\frac1{n!}\sqrt{\frac{-2\pi}{\phi''(t_0)}}e^{\phi(t_0)}
&\approx\frac1{n!}\sqrt{\frac{2\pi n}{\log(n)}}\log(n)^n\\
&\approx\frac1{\sqrt{2\pi n}}\frac{e^n}{n^n}\sqrt{\frac{2\pi n}{\log(n)}}\log(n)^n\\
&=\frac1{\sqrt{\log(n)}}\left(\frac{e\log(n)}{n}\right)^n
\end{align}
$$
which dies away faster than $r^{-n}$ for any $r$.
Analysis of the Approximation to Lambert W
For $x\ge e$, the approximation
$$
\mathrm{W}(x)\approx\log(x)\left(1-\frac{\log(\log(x))}{\log(x)+1}\right)
$$
attains a maximum error of about $0.0353865$ at $x$ around $67.9411$.
At least that same precision is maintained for $x\ge\frac53$.
For all $x\gt1$, this approximation is an underestimate.
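These claims about the approximation can be checked numerically (a Python sketch, not part of the original answer; the reference value of $\mathrm{W}$ is computed by Newton's method on $we^w=x$):

```python
import math

def lambert_w(x, iterations=50):
    # Newton's method for w * e^w = x, x > 0.
    w = math.log(x) if x > math.e else 1.0
    for _ in range(iterations):
        ew = math.exp(w)
        w -= (w * ew - x) / (ew * (w + 1))
    return w

def w_approx(x):
    lx = math.log(x)
    return lx * (1 - math.log(lx) / (lx + 1))

err_at_peak = lambert_w(67.9411) - w_approx(67.9411)   # claimed ~0.0353865
underestimates = [lambert_w(t) - w_approx(t) for t in [3.0, 10.0, 1000.0]]
```

The error at $x\approx67.9411$ comes out near $0.0354$, and the approximation stays below the true $\mathrm{W}$ at every sampled point, matching the claims above.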
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/463272",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10",
"answer_count": 3,
"answer_id": 2
} |
Numerical inversion of characteristic functions I have a need to use the FFT in my work and am trying to learn how to use it. I am beginning by attempting to use the FFT to numerically invert the characteristic function of a normal distribution. So I have discretised the integral using the trapezoidal rule (I know, this is a very crude method), converted to a form consistent with the FFT and then run a programme to make the calculations. However when I plot the output, the density function gets thinner and thinner as I increase the number of discretisation steps. I don't know if this is because of my errors or because of the numerical problems associated with the trapezoidal rule. Would someone mind having a look at my working please?
Thanks...
$$
f(x) = \frac{1}{2\pi}\int_{-\infty}^{\infty}e^{-ix\xi}F(\xi)d\xi = \frac{1}{\pi}\int_{0}^{\infty}e^{-ix\xi}F(\xi)d\xi\\
\approx \frac{1}{\pi}\int_0^{\xi_{max}}e^{-ix\xi}F(\xi)d\xi
$$
Let $\xi_j=(j-1)\Delta\xi,\quad j=1,...,N$
Take $\xi_N=\xi_{max}$ and $\Delta\xi=\frac{\xi_{max}}{N-1}$
Set $F(\xi_j) = F_j$
Let $x_k = x_{min} + (k-1)\Delta x,\quad k=1,...,N \quad$ ($\Delta x$ will be determined later)
Set $f(x_k) = f_k$
Discretising integral using trapezoidal rule:
$$
f_k \approx \frac{1}{\pi}\int_0^{\xi_{max}}e^{-ix_j\xi}F(\xi)d\xi \\
= \frac{1}{\pi}\Delta\xi \left(\sum_{j=1}^{N}e^{-ix_k\xi_j}F_j - \frac{1}{2}e^{-ix_k\xi_1}F_1 - \frac{1}{2}e^{-ix_k\xi_N}F_N \right) \\
= \frac{1}{\pi}\Delta\xi \left(\sum_{j=1}^{N}e^{-i\Delta x \Delta\xi (j-1)(k-1)}e^{-i x_{min}(j-1)\Delta \xi}F_j - \frac{1}{2}F_1 - \frac{1}{2}e^{-ix_k\xi_{max}}F_N \right) \\
= \frac{1}{\pi}\Delta\xi \left(\sum_{j=1}^{N}e^{-i\frac{2\pi}{N} (j-1)(k-1)}e^{-i x_{min}(j-1)\Delta \xi}F_j - \frac{1}{2}F_1 - \frac{1}{2}e^{-ix_k\xi_{max}}F_N \right)
$$
where in the last step $\Delta x \Delta{\xi}$ has been set to $\frac{2\pi}{N}$ in order for the sum to be in the form required by FFT. Rearranging gives $\Delta x = \frac{2\pi}{N\Delta \xi} = \frac{2\pi(N-1)}{N\xi_{max}}.$
To centre about the mean $\mu$ set $x_{min} = \mu - \frac{N\Delta x}{2} = \mu - \frac{\pi}{\Delta \xi}.$
| Please have a look at the section on the numerical inversion of characteristic functions in the dissertation found here:
http://wiredspace.wits.ac.za//handle/10539/9273
It may be helpful.
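One way to isolate whether the problem is in the quadrature or in the FFT bookkeeping is a direct-sum sketch (Python, not the FFT version; all names here are illustrative). It evaluates the questioner's trapezoid rule pointwise, using the fact that for a real, even characteristic function the inversion integral reduces to a cosine transform:

```python
import math

def density_from_cf(x, cf, xi_max=20.0, N=2000):
    # Trapezoidal rule for (1/pi) * integral_0^xi_max cos(x*xi) * cf(xi) dxi,
    # valid when the characteristic function cf is real and even.
    d_xi = xi_max / (N - 1)
    total = 0.0
    for j in range(N):
        xi = j * d_xi
        weight = 0.5 if j in (0, N - 1) else 1.0   # trapezoid end weights
        total += weight * math.cos(x * xi) * cf(xi)
    return total * d_xi / math.pi

std_normal_cf = lambda xi: math.exp(-xi ** 2 / 2)
pdf_at_0 = density_from_cf(0.0, std_normal_cf)   # ~ 1/sqrt(2*pi)
pdf_at_1 = density_from_cf(1.0, std_normal_cf)
```

With the standard normal characteristic function this recovers the normal density to high accuracy, so if the FFT version thins out as $N$ grows, the discrepancy is likely in the $\Delta x\,\Delta\xi=2\pi/N$ grid bookkeeping rather than in the trapezoidal rule itself.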
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/463317",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
What's the difference between theorem, lemma and corollary? Can anybody explain to me the basic difference between a theorem, a lemma and a corollary?
We have been using it for a long time but I never paid any attention. I am just curious to know.
| Terence Tao (Analysis I, p. 25, n. 4):
From a logical point of view, there is no difference between a lemma, proposition,
theorem, or corollary - they are all claims waiting to be proved. However, we use
these terms to suggest different levels of importance and difficulty.
A lemma is an easily proved claim which is helpful for proving other propositions and theorems, but is usually not particularly interesting in its own right.
A proposition is a statement which is interesting in its own right, while
a theorem is a more important statement than a proposition which says something definitive on the subject, and often takes more effort to prove than a proposition or lemma.
A corollary is a quick consequence of a proposition or theorem that was proven recently.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/463362",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "118",
"answer_count": 10,
"answer_id": 6
} |
how did he conclude that? integral So the question is: Find all continuous functions such that $\displaystyle \int_{0}^{x} f(t) \, dt= (f(x))^2+C$.
Now the solution starts with this: clearly $f^2$ is differentiable at every point (its derivative is $f$), so $f(x)\ne0$. I have no idea how he concluded that; this is from Spivak's Calculus. If you differentiate, it's clearly $f(x)=f(x)f'(x)$, but he said that before even giving this formula. Help please?
EDIT: I know that $f(x)=f(x)f'(x)$. What I don't understand is this: "clearly $f^2$ is differentiable at every point (its derivative is $f$), so $f(x)\ne0$". Why mustn't $f(x)$ equal $0$?
| I interpret the problem as follows:
Find all continuous functions $f:\ \Omega\to{\mathbb R}$ defined in some open interval $\Omega$ containing the origin and satisfying the integral equation
$$\int_0^x f(t)\ dt=f^2(x)+ C\qquad(x\in\Omega)$$
for a suitable constant $C$.
Assume $f$ is such a function and that $f(x)\ne 0$ for some $x\in\Omega$. Then for $0<|h|\ll1$ we have
$$\int_x^{x+h} f(t)\ dt=f^2(x+h)-f^2(x)=\bigl(f(x+h)+f(x)\bigr)\bigl(f(x+h)-f(x)\bigr)\ ,$$
and using the mean value theorem for integrals we conclude that
$${f(x+h)-f(x)\over h}={f(x+\tau h)\over f(x+h)+f(x)}$$
for some $\tau\in[0,1]$. Letting $h\to0$ we see that $f$ is differentiable at $x$ and that $f'(x)={1\over2}$. It follows that the graph of $f$ is a line with slope ${1\over2}$ in all open intervals where $f$ is nonzero. When such a line coming from west-south-west arrives at the point $(a,0)$ on the $x$-axis then $f(a)=0$ by continuity, and similarly, when such a line starts at the point $(b,0)$ due east-north-east, then $f(b)=0$.
The above analysis leaves only the following possibility for such an $f$: There are constants $a$, $b$ with $-\infty\leq a\leq b\leq\infty$ such that
$$f(x)=\cases{-{1\over2}(a-x)\quad &$(x\leq a)$ \cr 0&$(a\leq x\leq b)$ \cr {1\over2}(x-b)&$(x\geq b)\ .$\cr}$$
It turns out that these $f$'s are in fact solutions to the original problem.
Proof. Assume for the moment
$$a\leq0\leq b\ .\tag{1}$$
When $0\leq x\leq b$ then $$\int_0^x f(t)\ dt=0=f^2(x)+0\ ,$$
and when $x\geq b$ then
$$\int_0^x f(t)\ dt=\int_b^x f(t)\ dt={1\over4}(t-b)^2\biggr|_b^x={1\over4}(x-b)^2= f^2(x)+0$$
as well. Similarly one argues for $x\leq0$.
In order to get rid of the assumption $(1)$ we note that when $f$ is a solution of the original problem then any translate $g(x):=f(x-c)$, $\>c\in{\mathbb R}$, is a solution as well: For all $x\in{\mathbb R}$ we have
$$\eqalign{\int_0^x g(t)\ dt&=\int_0^x f(t-c)\ dt=\int_{-c}^{x-c} f(t')\ dt'=\int_{-c}^0 f(t')\ dt'+\int_0^{x-c} f(t')\ dt'\cr
&=\int_{-c}^0 f(t')\ dt'+f^2(x-c)+C=g^2(x)+C'\ .\cr}$$
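The claimed family of solutions can also be checked numerically (a Python sketch, not part of the original proof, for the illustrative case $a=-1$, $b=2$, where $C=0$ because $0\in[a,b]$):

```python
def f(x, a=-1.0, b=2.0):
    # The piecewise-linear candidate solution from the proof.
    if x <= a:
        return (x - a) / 2
    if x >= b:
        return (x - b) / 2
    return 0.0

def integral_0_to_x(x, n=20000):
    # Trapezoidal approximation of the integral of f from 0 to x.
    sign = 1.0 if x >= 0 else -1.0
    lo, hi = (0.0, x) if x >= 0 else (x, 0.0)
    h = (hi - lo) / n
    s = 0.5 * (f(lo) + f(hi)) + sum(f(lo + i * h) for i in range(1, n))
    return sign * s * h

residuals = [abs(integral_0_to_x(x) - f(x) ** 2) for x in [-3.0, -0.5, 1.0, 4.0]]
```

The residual $\bigl|\int_0^x f - f(x)^2\bigr|$ is numerically zero at every sampled point, inside and outside $[a,b]$.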
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/463446",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Coin chosen is two headed coin in this probability question I have a probability question that reads:
Question:
A box has three coins. One has two heads, another two tails and the last is a fair coin. A coin is chosen at random, and comes up head. What is the probability that the coin chosen is a two headed coin.
My attempt:
P(two-headed coin | head) = P(two-headed coin and head) / P(head)
= (1/3) / (2/3) = 1/2
Not sure whether this is correct?
| For such a small number of options it's easy to count them.
The possible outcomes are:
heads or heads using the double head coin
tails or tails using the double tail coin
heads or tails using the fair coin
All these outcomes are equally likely. How many of these are heads and of those how many use the double headed coin?
$$Answer = \frac{2}{3}$$
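The counting argument is mirrored by a short enumeration (a Python sketch, added as a check):

```python
from fractions import Fraction

coins = {"HH": ("H", "H"), "TT": ("T", "T"), "fair": ("H", "T")}

# Picking a coin uniformly and then a face uniformly gives six equally
# likely (coin, face) outcomes.
outcomes = [(name, face) for name, faces in coins.items() for face in faces]
heads = [name for name, face in outcomes if face == "H"]
p_two_headed_given_head = Fraction(heads.count("HH"), len(heads))
```

Of the three equally likely head outcomes, two come from the two-headed coin, giving $2/3$.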
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/463517",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 9,
"answer_id": 0
} |
Zeroes about entire function Given an entire function $f(z)$ which satisfies $|f(z)|=1, \forall z \in \mathbb{R}$, the problem asks to show that there exists an entire function $g(z)$ such that $f(z)=\exp(g(z))$.
The only thing we need to show is that $f(z)$ admits no zeros on $\mathbb{C}$, so that we can define $g(z)$ by a standard single-valued branch of the logarithm. But is the assumption $|f(z)|=1, \forall z \in \mathbb{R}$ so strong that entire functions satisfying it cannot have zeroes? Intuitively we can take the example $f(z)=\exp(iz)$, which has an essential singularity at $\infty$, so it is hard to expect that we can turn the real line into the boundary of the unit circle and then use some kind of Maximum Principle, etc. Basically I do not get the picture of what the assumption is really saying.
| Consider the function $h(z) = \overline{f(\overline{z})}$. That is an entire function too, and hence so is $k(z) = f(z)\cdot h(z)$. On the real line, you have
$$k(x) = f(x)\cdot h(x) = f(x) \overline{f(x)} = \lvert f(x)\rvert^2 = 1,$$
hence $k \equiv 1$. That guarantees that $f$ has no zeros.
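The key identity $k(z)=f(z)\,\overline{f(\overline z)}\equiv 1$ can be illustrated numerically with a concrete function satisfying the hypothesis, say $f(z)=e^{iz}$ (a Python sketch using `cmath`, added for illustration):

```python
import cmath

def f(z):
    return cmath.exp(1j * z)   # |f| = 1 on the real axis

def k(z):
    # k(z) = f(z) * conj(f(conj(z))): entire, equal to 1 on R, hence == 1.
    return f(z) * f(z.conjugate()).conjugate()

samples = [0.3 + 2j, -5 - 1j, 1.7 + 0.01j, 10j]
deviation = max(abs(k(z) - 1) for z in samples)
```

Since $k\equiv 1$ everywhere, $f$ can never vanish, even though $|f(z)|=e^{-\operatorname{Im}z}$ becomes very small in the upper half plane.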
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/463664",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
What is the product of this by telescopic method? $$\prod_{k=0}^{\infty} \biggl(1+ {\frac{1}{2^{2^k}}}\biggr)$$
My teacher gave me this question and said that it is easy only if the idea strikes you the minute you read it. But I'm still thinking. Help!
P.S. This question is to be attempted by telescopic method.
| The terms of the product are $(1+1/2)(1+1/4)(1+1/16)(1+1/256)\cdots$ with each denominator being the square of the previous denominator.
Now if you multiply the product with $(1-1/2)$ you see telescoping action:
$(1-1/2)(1+1/2)=1-1/4$
$(1-1/4)(1+1/4)=1-1/16$
$(1-1/16)(1+1/16)=1-1/256$
Do you see the pattern developing?
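The telescoping can be verified exactly with rational arithmetic (a Python sketch, added as a check): multiplying the $n$-th partial product by $1-\frac12$ leaves $1-2^{-2^{n+1}}$, so the full product converges to $2$.

```python
from fractions import Fraction

partial_products = []
partial = Fraction(1)
for k in range(6):
    partial *= 1 + Fraction(1, 2 ** (2 ** k))
    partial_products.append(partial)

# Telescoping identity: (1 - 1/2) * P_n == 1 - 2^(-2^(n+1))
telescopes = [
    Fraction(1, 2) * partial_products[n] == 1 - Fraction(1, 2 ** (2 ** (n + 1)))
    for n in range(6)
]
```

Already after six factors the partial product is within $2^{-63}$ of the limit $2$.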
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/463758",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10",
"answer_count": 4,
"answer_id": 0
} |