Q | A | meta
---|---|---
Question on Showing points of discontinuities of a function are removable (or not) The question is as follows:
Given function: $F(x,y)=\frac{x + 2y}{\sin(x+y) - \cos(x-y)}$
Tasks:
a/ Find points of discontinuities
b/ Decide if the points (of
discontinuities) from part a are removable
Here is my work so far:
(1) For part a, I think the points of discontinuity should have the form $(0, \frac{\pi}{4} + n\pi)$ or $(\frac{\pi}{4} + n\pi, 0)$, since they make the denominator zero. For convenience in part b, I choose to deal specifically with the point $(0, \frac{\pi}{4})$
(2) Recall definition:
A point of discontinuity $x_0$ is removable if the limits of the function along the various paths approaching $x_0$ are equal to each other. In particular, if the function is 1-dimensional, we get the notion of "left" and "right" limits, but here we talk about paths from any possible direction. However, these limits need not be equal to $f(x_0)$, which can be defined or undefined.
(3)
I'm having trouble "finding" such paths @_@
I came across these two, by fixing the x-coordinate and varying the y-coordinate:
$F(x, x^2 - \frac{\pi}{4})$
and $F(x, x^2 - x - \frac{\pi}{4})$
They both have limit $\frac{\pi}{2\sqrt{2}}$ as $x$ approaches $0$ (by my calculation).
But what can I say about these results? I feel that the discontinuities of $F(x,y)$ should not be removable, but I don't know if my thinking is correct.
Would someone please help me on this question?
Thank you in advance ^^
| Observe that $$F(x,y) = \frac{x+2y}{2\cos(y+\pi/4)\sin(x-\pi/4)};$$ just use the formula for $\sin a - \sin b$, together with $\cos b = \sin(\pi/2 - b)$.
Thus the set of discontinuities consists of the points $(x, \pi/4 + n\pi)$ and $(\pi/4 + n\pi, y)$ for any real $x, y$, i.e. lines parallel to the $x$- and $y$-axes, i.e. a grid.
So we have to look at the points of intersection of the line $x + 2y = 0$ with the above grid.
For example, at $(\pi/4, -\pi/8)$ we have $F(x,y) = \frac{x+2y}{2\cos(y+\pi/4)\sin(x-\pi/4)}$, and $\cos(-\pi/8 + \pi/4)$ is non-zero. So consider $\lim_{(x,y)\to(\pi/4, -\pi/8)} \frac{x+2y}{\sin(x-\pi/4)}$. Make the substitution $x' = x-\pi/4$ and $y' = y+\pi/8$; we get $\lim_{(x',y')\to(0,0)} \frac{x'+2y'}{\sin x'}$. Along the curve $y'=0$ we get the limit $1$, while along the curve $x'+2y' = 0$ we get the limit $0$.
You can try this in general at the other points; we get an expression of the form $\lim_{(x',y')\to(0,0)} \frac{x'+2y'}{\pm\sin x'}$ or $\lim_{(x',y')\to(0,0)} \frac{x'+2y'}{\pm\sin y'}$. As in both cases the limit does not exist, the singularity is not removable at any of these points.
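As a purely numerical illustration of the last computation, one can evaluate $F$ near $(\pi/4,-\pi/8)$ along the two curves; the sketch below is plain Python and just confirms that the two path limits differ:

```python
import math

def F(x, y):
    return (x + 2*y) / (math.sin(x + y) - math.cos(x - y))

x0, y0 = math.pi / 4, -math.pi / 8
for t in [1e-1, 1e-3, 1e-5]:
    along_horizontal = F(x0 + t, y0)            # approach along y = -pi/8  (i.e. y' = 0)
    along_line       = F(x0 + t, -(x0 + t) / 2) # approach along x + 2y = 0
    print(t, along_horizontal, along_line)
# the first column tends to 1/(2*cos(pi/8)) ~ 0.54, the second is identically 0,
# so the limit at (pi/4, -pi/8) does not exist and the discontinuity is not removable
```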
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/385256",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
} |
When are the binomial coefficients equal to a generalization involving the Gamma function? Let $\Gamma$ be the Gamma function and abbreviate $x!:=\Gamma(x+1)$, $x>-1$.
For $\alpha>0$ let us generalize the binomial coefficients in the following way:
$$\binom{n+m}{n}_\alpha:=\frac{(\alpha n+\alpha m)!}{(\alpha n)!(\alpha m)!}$$
Of course for $\alpha=1$ this reduces to the ordinary binomial coefficients.
My question is:
How can one show that this is really the only case where they coincide?
That is, $\displaystyle\binom{n+m}{n}=\binom{n+m}{n}_\alpha$ for all $n,m\in\mathbb{N}$ implies $\alpha=1$.
Or even more general (but currently not of interest to me):
$\displaystyle\binom{n+m}{n}_\alpha=\binom{n+m}{n}_\beta$ implies $\alpha=\beta$.
I tried to use Stirling's formula: $\Gamma(z+1)\sim\sqrt{2\pi z}\bigl(\frac{z}{e}\bigr)^z$ as $z\to\infty$, but didn't get very far.
Thanks in advance for any help.
| Set, for example, $m=1$ and consider the limit $n\rightarrow\infty$. Then
$$ {\alpha n+\alpha\choose \alpha n}\sim \frac{(\alpha n)^{\alpha}}{\alpha!},\qquad
{n+1 \choose n}\sim n.$$
It is clear that the only possibility for both asymptotics to agree is $\alpha=1$. In the more general situation, the same argument shows that $\alpha=\beta$.
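A quick numerical illustration of these asymptotics, using log-Gamma to avoid overflow (the value $\alpha=2$ is just a sample):

```python
import math

def gen_binom(n, m, alpha):
    """Generalized binomial (alpha*n + alpha*m)! / ((alpha*n)! (alpha*m)!) via log-Gamma."""
    lg = math.lgamma
    return math.exp(lg(alpha*(n + m) + 1) - lg(alpha*n + 1) - lg(alpha*m + 1))

alpha = 2.0
for n in [10, 100, 1000]:
    exact = gen_binom(n, 1, alpha)
    asymptotic = (alpha*n)**alpha / math.gamma(alpha + 1)
    print(n, exact / asymptotic)        # the ratio tends to 1
# meanwhile the ordinary binomial (n+1 choose n) = n+1 grows only like n,
# so for large n the two can only agree when alpha = 1
```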
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/385320",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Existence of whole number between two real numbers $x$ and $x +1$? How to prove that there is a whole number, integer, between two real numbers $x$ and $x+1$ (in case $x$ is not whole)?
I need this for an exercise solution in my Topology class, so I can, probably, use more than just axioms from set theory.
Any ideas?
| This can be proved using the decimal expansion of $x$. If $x$ has the decimal expansion $n_0.n_1n_2n_3\cdots$, where $n_0$ is some whole number, then $x+1$ has the decimal expansion $(n_0+1).n_1n_2n_3\cdots$.
Then it is clear that $x < n_0+1 < x+1$ if $x$ is not a whole number itself.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/385429",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 2
} |
What does "Solve $ax \equiv b \pmod{337}$ for $x$" mean? I have a general question about modular equations:
Let's say I have this simple equation:
$$ax\equiv b \pmod{337}$$
I need to solve the equation.
What does "solve the equation" mean? There are an infinite number of $x$'s that will be correct.
Does $x$ need to be integer?
Thanks very much in advance,
Yaron.
| Write $p = 337$. This is a prime number.
First of all, if $a \equiv 0 \pmod{p}$ then there is a solution if and only if $b \equiv 0 \pmod{p}$, and then all integers $x$ are a solution.
If $a \not\equiv 0 \pmod{p}$, then the (infinite number of integer) solutions will form a congruence class modulo $p$. These can be found using Euclid's algorithm to find an inverse of $a$ modulo $p$, very much as you would do over the real numbers, say.
That is, use Euclid to find $u, v \in \Bbb{Z}$ such that $a u + p v = 1$, and then note that $x_{0} = u b$ is a solution, as $a x_{0} = a u b = b - p v b \equiv b \pmod{p}$. (Here $u$ is the inverse of $a$ modulo $p$, as $a u \equiv 1 \pmod{p}$.)
Then note that if $x$ is any solution, then $a (x - x_{0}) \equiv 0 \pmod{p}$, which happens if and only if $p \mid x - x_{0}$, so that $x \equiv x_{0} \pmod{p}$. Thus the set of solutions is the congruence class of $x_{0}$.
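For concreteness, here is a small computational sketch of this recipe (the function name and the sample values of $a$ and $b$ below are arbitrary placeholders):

```python
def solve_congruence(a, b, p=337):
    """Solve a*x = b (mod p) for a prime modulus p, following the recipe above."""
    a, b = a % p, b % p
    if a == 0:
        return "every integer x" if b == 0 else "no solution"
    u = pow(a, -1, p)      # inverse of a mod p (Python >= 3.8; extended Euclid under the hood)
    return (u * b) % p     # the full solution set is the congruence class of this x0 mod p

x0 = solve_congruence(5, 17)
print(x0, (5 * x0 - 17) % 337)   # the second value is 0, confirming a*x0 = b (mod 337)
```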
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/385503",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
What is the precise statement of the theorem that allows us to "localize" our knowledge of derivatives? Most introductory calculus courses feature a proof that
Proposition 1. For the function $f : \mathbb{R} \rightarrow \mathbb{R}$ such that $x \in \mathbb{R} \Rightarrow f(x)=x^2$ it holds that $x \in \mathbb{R} \Rightarrow f'(x)=2x$.
In practice though, we freely use the following stronger result.
Proposition 2. For all partial functions $f : \mathbb{R} \rightarrow \mathbb{R}$, setting $X = \mathrm{dom}(f)$, we have that for all $x \in X$, if there exists a neighborhood of $x$ that is a subset of $X$, call it $A$, such that $a \in A \Rightarrow f(a)=a^2$, then we have $f'(x)=2x$.
What is the precise statement of the theorem that lets us get from the sentences that we actually prove, like Proposition 1, to the sentences we actually use, like Proposition 2?
| I’ll give a first try in answering this:
How about: Let $f : D_f → ℝ$ and $g : D_g → ℝ$ be differentiable in an open set $D ⊂ D_f ∩ D_g$. If $f|_D = g|_D$, then $f'|_D = g'|_D$.
I feel this is not what you want. Did I misunderstand you?
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/385565",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 1
} |
Evaluating this integral : $ \int \frac {1-7\cos^2x} {\sin^7x \cos^2x} dx $ The question :
$$ \int \frac {1-7\cos^2x} {\sin^7x \cos^2x} dx $$
I tried dividing by $\cos^2 x$ and splitting the fraction.
That turned out to be complicated (at least for me!)
How do I proceed now?
| The integration is $$\int \frac{dx}{\sin^7x\cos^2x}-\int\csc^7xdx$$
Using this repeatedly, $$\frac{m-1}{m+n}\int\sin^{m-2}x\cos^nx\, dx=\frac{\sin^{m-1}x\cos^{n+1}x}{m+n}+\int \sin^mx\cos^nx\, dx,$$
$$\text{we can reach from }\int \frac{dx}{\sin^7x\cos^2x}\text{ to } \int \frac{\sin x\,dx}{\cos^2x}$$
Now use the Reduction Formula of $\int\csc^nxdx$ for the second/last integral
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/385636",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 4,
"answer_id": 1
} |
Symmetric Groups and Commutativity I just finished my homework which involved, among many things, the following question:
Let $S_{3}$ be the symmetric group on $\{1,2,3\}$. Determine the number of elements that commute with (23).
Now, solving this was unproblematic - for those interested, the answer is 2. However, it got me thinking whether or not there is a general solution to this type of question. Thus, my question is:
Let $S_{n}$ be the symmetric group on $\{1,\dots,n\}$. Determine the number of elements in $S_{n}$ that commute with $(ij)$ where $1\leq i,j \leq n $.
| *
*Let $\pi, \phi\in S_{\Omega}$. Then $\pi, \phi$ are disjoint if, whenever $\pi$ moves $\omega\in \Omega$, $\phi$ doesn't move $\omega$.
For example, $(2,3)$ and $(4,5)$ in $S_6$ are disjoint. Indeed, $\{2,3\}\cap\{4,5\}=\emptyset$.
Theorem: If $\pi, \phi\in S_{\Omega}$ are disjoint then $\pi\phi=\phi\pi$.
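For small $n$ one can also just count by brute force; a short sketch (the function name is mine) that reproduces the answer $2$ for $(2\,3)$ in $S_3$:

```python
from itertools import permutations

def count_commuting_with_transposition(n, a, b):
    """Count permutations of {0,...,n-1} commuting with the transposition swapping a and b."""
    t = list(range(n))
    t[a], t[b] = t[b], t[a]
    count = 0
    for p in permutations(range(n)):
        p_then_t = tuple(t[p[i]] for i in range(n))   # apply p first, then the transposition
        t_then_p = tuple(p[t[i]] for i in range(n))   # apply the transposition first, then p
        count += (p_then_t == t_then_p)
    return count

print(count_commuting_with_transposition(3, 1, 2))  # 2
print(count_commuting_with_transposition(5, 1, 2))  # 12 = 2 * 3!, i.e. 2 * (n-2)! in general
```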
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/385699",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 1
} |
Find the following integral: $\int {{{1 + \sin x} \over {\cos x}}dx} $ My attempt:
$\int {{{1 + \sin x} \over {\cos x}}dx} $,
given : $u = \sin x$
I use the general rule:
$\eqalign{
& \int {f(x)dx = \int {f\left[ {g(u)} \right]{{dx} \over {du}}du} } \cr
& {{du} \over {dx}} = \cos x \cr
& {{dx} \over {du}} = {1 \over {\cos x}} \cr
& so: \cr
& \int {{{1 + \sin x} \over {\cos x}}dx = \int {{{1 + u} \over {\cos x}}{1 \over {\cos x}}du} } \cr
& = \int {{{1 + u} \over {{{\cos }^2}x}}du} \cr
& = \int {{{1 + u} \over {\sqrt {1 - {u^2}} }}du} \cr
& = \int {{{1 + u} \over {{{(1 - {u^2})}^{{1 \over 2}}}}}du} \cr
& = \int {(1 + u){{(1 - {u^2})}^{ - {1 \over 2}}}} du \cr
& = {(1 - u)^{ - {1 \over 2}}} + u{(1 - {u^2})^{ - {1 \over 2}}}du \cr
& = {1 \over {({1 \over 2})}}{(1 - u)^{{1 \over 2}}} + u - {1 \over {\left( {{1 \over 2}} \right)}}{(1 - {u^2})^{{1 \over 2}}} + C \cr
& = 2{(1 - u)^{{1 \over 2}}} - 2u{(1 - {u^2})^{{1 \over 2}}} + C \cr
& = 2{(1 - \sin x)^{{1 \over 2}}} - 2(\sin x){(1 - {\sin ^2}x)^{{1 \over 2}}} + C \cr
& = {(1 - \sin x)^{{1 \over 2}}}(2 - 2\sin x) + C \cr} $
This is wrong, the answer in the book is:
$y = - \ln |1 - \sin x| + C$
Could someone please explain where I integrated wrongly?
Thank you!
| You replaced $\cos^2 x$ by $\sqrt{1-u^2}$, it should be $1-u^2$.
Remark: It is easier to multiply top and bottom by $1-\sin x$. Then we are integrating $\frac{\cos x}{1-\sin x}$, easy.
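A quick symbolic sanity check of the book's answer (sympy; it should print $0$ on the domain where everything is defined):

```python
import sympy as sp

x = sp.symbols('x')
integrand = (1 + sp.sin(x)) / sp.cos(x)
candidate = -sp.log(1 - sp.sin(x))       # the book's antiderivative, up to a constant
print(sp.simplify(sp.diff(candidate, x) - integrand))   # 0
```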
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/385764",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 6,
"answer_id": 1
} |
clarity on a question The original question is :
Let $0 < a_{1}<a_{2}<\dots<a_{mn+1}\;\;$ be $\;mn+1\;$ integers. Prove that you can either $\;m+1\;$ of them no one of which divides any other or $\;n+1\;$ of them each dividing the following.
(1966 Putnam Mathematical Competition)
The question has words missing , so could someone tell me what the corrected version of this question is
| I think it is this question
Given a set of $(mn + 1)$ unequal positive integers, prove that we can either $(1)$ find $m + 1$ integers $b_i$ in the set such that $b_i$ does not divide $b_j$ for any unequal $i, j,$ or $(2)$ find $n+1$ integers $a_i$ in the set such that $a_i$ divides $a_{i+1}$ for $i = 1, 2, \dots , n$.
27th Putnam 1966
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/385839",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
reflection groups and hyperplane arrangement We know that for the braid arrangement $A_\ell$ in $\mathbb{C}^\ell$: $$\Pi_{1 \leq i < j \leq \ell} (x_i - x_j)=0,$$
$\pi_1(\mathbb{C}^\ell - A_\ell) \cong PB_\ell$, where $PB_\ell$ is the pure braid group.
Moreover, the reflection group that is associated to $A_\ell$ is the symmetric group $S_\ell$, and it is known that there is an exact sequence $PB_\ell \rightarrow B_\ell \rightarrow S_\ell$.
My question is the following: let $L$ be a reflection arrangement (associated to the reflection group $G_L$) in $\mathbb{C}^\ell$. What is the connection between
$\pi_1(\mathbb{C}^\ell - L)$ and $G_L$, or the Artin group associated to $G_L$?
Thank you!
| I have found out that Brieskorn proved the following (using the above notations):
$$
\pi_1(\mathbb{C}^\ell - L) \cong \text{ker}(A_L \rightarrow G_L)
$$
where $A_L$ is the corresponding Artin group, $G_L$ the reflection group.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/385970",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Truth of Fundamental Theorem of Arithmetic beyond some large number Let $n$ be a ridiculously large number, e.g., $$\displaystyle23^{23^{23^{23^{23^{23^{23^{23^{23^{23^{23^{23^{23}}}}}}}}}}}}+5$$ which cannot be explicitly written down provided the size of the universe. Can a Prime factorization of $n$ still be possible? Does it enter the realms of philosophy or is it still a tangible mathematical concept?
| Yes, there exists a unique prime factorization.
No, we probably won't ever know what it is.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/386046",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 0
} |
When the ordinal sum equals the Hessenberg ("natural") sum Let $\alpha_1 \geq \ldots \geq \alpha_n$ be ordinal numbers. I am interested in necessary and sufficient conditions for the ordinal sum $\alpha_1 + \ldots + \alpha_n$ to be equal to the Hessenberg sum $\alpha_1 \oplus \ldots \oplus \alpha_n$, most quickly defined by collecting all the terms $\omega^{\gamma_i}$ of the Cantor normal forms of the $\alpha_i$'s and adding them in decreasing order.
Unless I am very much mistaken the answer is the following: for all $1 \leq i \leq n-1$, the smallest exponent $\gamma$ of a term $\omega^{\gamma}$ appearing in the Cantor normal form of $\alpha_i$ must be at least as large as the greatest exponent $\gamma'$ of a term $\omega^{\gamma'}$ appearing in the Cantor normal form of $\alpha_{i+1}$. And this holds just because if $\gamma' < \gamma$,
$\omega^{\gamma'} + \omega^{\gamma} = \omega^{\gamma} < \omega^{\gamma} + \omega^{\gamma'} = \omega^{\gamma'} \oplus \omega^{\gamma}$.
Nevertheless I ask the question because:
1) I want reassurance of this: I have essentially no experience with ordinal arithmetic.
2) Ideally I'd like to be able to cite a standard paper or text in which this result appears.
Bonus points if there happens to be a standard name for sequences of ordinals with this property: if I had to name it I would choose something like unlaced or nonoverlapping.
P.S.: The condition certainly holds if each $\alpha_i$ is of the form $\omega^{\gamma} + \ldots + \omega^{\gamma}$. Is there a name for such ordinals?
| You are exactly right about the "asymmetrically absorptive" nature of standard ordinal addition (specifically with regard to Cantor normal form). Your condition is necessary and sufficient (sufficiency is easy, and you've shown necessity). I don't know of any standard name for such sequences, though. As for your $\omega^\gamma+\cdots+\omega^\gamma$ bit, that isn't appropriate for Cantor normal form. We require the exponents to be listed in strictly decreasing order.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/386274",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "15",
"answer_count": 1,
"answer_id": 0
} |
How to find length of a rectangular tile when viewing at some angle I have a question on angles.
I have a rectangular tile. When looking at it straight on I can find the width of the tile, but how do I find the apparent width when I see the same rectangular tile at some angle? Below I have attached an image for more clarity. So how do I find y in the image below?
| It depends on your projection. If you assume orthogonal projection, so that the apparent length of line segments is independent of their distance the way your images suggest, then you cannot solve this, since a rectangle of any aspect ratio might appear as a rectangle of any other aspect ratio by simply aligning it with the image plane and then rotating it around one of its axes of symmetry. So you can't deduce the original aspect ratio from the apparent one, much less the original lengths.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/386333",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
prove or disprove invertible matrix with given equations Given a non-scalar matrix $A$ of size $n\times n$ over $\mathbb{R}$ that satisfies the following equation
$$A^2 + 2A = 3I$$
and a matrix $B$, also of size $n\times n$, given by $$B = A^2 + A- 6I$$
Is $B$ an invertible matrix?
| Hint: The first equation implies that $A^2+2A-3I=(A-I)(A+3I)=0$. Hence the minimal polynomial $m_A(x)$ of $A$ divides $(x-1)(x+3)$. What happens if $x+3$ is not a factor of $m_A(x)$?
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/386423",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
} |
Estimate derivatives in terms of derivatives of the Fourier transform. Let us suppose that $f: \mathbb{R}^n \to \mathbb{R}$ is a smooth function. Furthermore, for every $\alpha$ multi-index, there exists $C_\alpha > 0$ such that
$$
|D^\alpha f(\xi)| \leq \frac{C_\alpha}{(1+|\xi|)^{|\alpha|}}.
$$
Does it follow that, for every $\alpha$, there exists $C'_\alpha > 0$ such that
$$
|D^\alpha (\mathcal{F}^{-1}(f))(x)| \leq \frac{C'_\alpha}{|x|^{n+|\alpha|}}
$$
where $\mathcal{F}^{-1}$ is the inverse Fourier transform (which exists since $f \in \mathcal{S}'$)?
I tried to do it using the definition, but it is really messed up because $\mathcal{F}^{-1}$ is in general in $\mathcal{S}'$. For instance, if $f$ is a constant function, then its inverse transform is a dirac $\delta$, then I should give it a pointwise meaning, and I don't know when this is possible. Any help would be really appreciated.
| This settles it. See Theorem 9, it also settles regularity issues.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/386562",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Unable to solve expression for $x$ I'm trying to solve this expression for $x$:
$$\frac{x^n(n(1-x)+1)}{(1-x)^2}=0$$
I'm not sure where to begin (especially getting rid of the $x^n$ part), any hints or tips are appreciated.
| Hint: Try multiplying both sides by the denominator, to get rid of it. Note that this may introduce extraneous solutions if what we multiplied by is $0$, so you have to consider the case of $(x - 1)^2 = 0$ separately.
Finally, to solve an equation of the form $a \cdot b = 0$, you can divide by $a \neq 0$ on both sides to get $b = 0$, but this may lead to missing solutions if $a = 0$ is also a solution. So again, consider the cases $a = 0$ and $a \neq 0$ (and therefore $b = 0$) separately.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/386631",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Summation of independent discrete random variables? We have a summation of independent discrete random variables (rvs) $Y = X_1 + X_2 + \ldots + X_n$. Assume the rvs can take non-negative real values. How can we find the probability mass function of $Y$?
Is there any efficient method like the convolution for integer case?
| Since the random variables are continuous, you would speak of their probability density function (instead of the probability mass function). The probability density function (PDF) of $Y$ is simply the (continuous) convolution of the PDFs of the random variables $X_i$. Convolution of two continuous random variables is defined by
$$(p_{X_1}*p_{X_2})(x)=\int_{-\infty}^{\infty}p_{X_1}(x-y)p_{X_2}(y)\;dy$$
EDIT: I was assuming your RVs are continuous, but maybe I misunderstood the question. Anyway, if they are discrete then (discrete) convolution is also the correct answer.
EXAMPLE: Let $X_1$ and $X_2$ be two discrete random variables, where $X_1$ takes on values $1/2$ and $3/4$ with probabilities $0.5$, and $X_2$ takes on values $1/8$ and $1/4$ with probabilities $0.4$ and $0.6$, respectively. So we have $p_{X_1}(x)=0.5\delta(x-1/2) + 0.5\delta(x-3/4)$ and $p_{X_2}=0.4\delta(x-1/8)+0.6\delta(x-1/4)$. Let $Y=X_1+X_2$. Then $p_Y(x)$ is given by the convolution of $p_{X_1}(x)$ and $p_{X_2}(x)$:
$$p_Y(x)=0.2\delta(x-5/8)+0.3\delta(x-3/4)+0.2\delta(x-7/8)+0.3\delta(x-1)$$
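For the discrete case, a minimal computational sketch, representing each PMF as a dictionary from value to probability (the function name is mine):

```python
from collections import defaultdict

def convolve_pmf(p1, p2):
    """PMF of X1 + X2 for independent discrete X1, X2 given as {value: probability} dicts."""
    out = defaultdict(float)
    for v1, q1 in p1.items():
        for v2, q2 in p2.items():
            out[v1 + v2] += q1 * q2
    return dict(out)

pX1 = {0.5: 0.5, 0.75: 0.5}
pX2 = {0.125: 0.4, 0.25: 0.6}
print(convolve_pmf(pX1, pX2))
# {0.625: 0.2, 0.75: 0.3, 0.875: 0.2, 1.0: 0.3}, matching the example above
```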
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/386693",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Could someone please explain the theory behind finding if a given point is inside a circle on a grid? Let us say I have a grid of 1000 x 1000, and on that grid is drawn a circle, the circle could be anywhere.
If I then pick a random point from the grid with an x and y co-ordinate I can work out if the point is inside the circle by performing the following math,
xCoord = x co-ordinate;
yCoord = y co-ordinate;
xCenter = x co-ordinate of the center of the circle
yCenter = y co-ordinate of the center of the circle
radius = the circle's radius
((xCoord - xCenter) ^ 2 + (yCoord - yCenter) ^ 2) < (radius ^ 2)
If ((xCoord - xCenter) ^ 2 + (yCoord - yCenter) ^ 2) is less than the radius ^ 2, then it means that the co-ordinates were inside the circle.
I am struggling a lot to wrap my head around this and cannot seem to work out how it works out that the co-ordinates were inside the circle.
Could someone please break this down for me and explain how it is worked out (what is going on in a logical manner so to speak)?
Sorry if my question is formatted wrong it is my first question on this site.
Thanks
| Let me try to answer in words only. You have a circle. To fill it in, as with a paint program, it's every point whose distance from the center of the circle is less than the radius of the circle. Simple enough.
To test if a point is inside the circle, calculate the distance from the center point to your point. If less than the radius, it's in your circle.
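In code the test is just that distance comparison; a minimal sketch, squaring both sides so no square root is needed:

```python
def point_in_circle(x, y, x_center, y_center, radius):
    """True if (x, y) lies strictly inside the circle with the given center and radius."""
    dx = x - x_center
    dy = y - y_center
    return dx * dx + dy * dy < radius * radius   # compare squared distances

print(point_in_circle(3, 4, 0, 0, 6))   # True: the distance is 5, which is less than 6
print(point_in_circle(3, 4, 0, 0, 5))   # False: distance 5 is not strictly less than 5
```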
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/386774",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Mean number of particle present in the system: birth-death process, $E(X_t|X_0=i)$, $b_i=\frac{b}{i+1}$, $d_i=d$ Let $\{X_t\}$ be a birth–and–death process with birth rate
$$
b_i = \frac{b}{i+1},
$$
when $i$ particles are in the system, and a constant death rate
$$
d_i=d.
$$
Find the expected number of particles in the system at time $t$, given that $X_0=i$.
Define
$$
f(t)=E(X_t),
$$
and
$$
p_n=P\left(X_t=n | X_0=i \right).
$$
Using the forward equation,
$$
f'(t)=\sum_{n=1}^\infty n \left( p_{n-1} \frac{b}{n} + p_{n+1}d + p_n\left( 1 - \frac{b}{n+1}-d\right)\right).
$$
After simplification, I have
$$
f'(t)=p_0 b - \sum_{n=1}^\infty\frac{b}{n+1}p_n + d + f(t),
$$
and I don't see how to solve this differential equation.
| Not sure one can get explicit formulas for $E[X_t]$ but anyway, your function $f$ is not rich enough to capture the dynamics of the process.
The canonical way to go is to consider $u(t,s)=E[s^{X_t}]$ for every $t\geqslant0$ and, say, every $s$ in $(0,1)$. Then, pending some errors in computations done too quickly, the function $u$ solves an integro-differential equation similar to
$$
\frac{s}{1-s}\cdot\frac{\partial u}{\partial t}(t,s)=d\cdot(u(t,s)-u(t,0))-b\int_0^su(t,r)\mathrm dr,
$$
with initial condition $u(0,s)=s^i$. Assuming one can solve this (which does not seem obvious at first sight), your answer is
$$
E[X_t]=\frac{\partial u}{\partial s}(t,1).
$$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/386850",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Mnemonic for centroid of a bounded region The centroid of a region bounded by two curves is given by:
$ \bar{x} = \frac{1}{A}\int_a^b{x\left[f(x)-g(x)\right]dx} $
$ \bar{y} = \frac{1}{A}\int_a^b{\left[\frac{f(x)+g(x)}{2}(f(x)-g(x))\right]dx} = \frac{1}{2A}\int_a^b{\left(f^2(x) - g^2(x)\right)dx}$
where A is just the area of that region.
But I have a terrible time remembering those formulas (when taken in conjunction with all of the other things that need to be remembered), and which moment uses which formula. Does anybody know a good mnemonic to keep track of them?
Hopefully this isn't off topic. Thanks
| In order to remember those formulas, you have to use them repeatedly on many problems involving finding centroid of areas bounded by two curves. Have faith in the learning process and you will remember it after using it many times, just like playing online games.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/386955",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
} |
calculate the number of possible number of words One word can be at most 63 characters long. It can be a combination of:
*
*letters from a to z
*numbers from 0 to 9
*hyphen - but only if not in the first or the last character of the word
I'm trying to calculate the possible number of combinations for a given domain name. I took the stats facts from here:
https://webmasters.stackexchange.com/a/16997
I have a very poor, elementary level of math, so I got this address from a friend in order to ask this. If someone could write me a formula for how to calculate this, or give me the exact number, or any useful information, that would be great.
| Close - but 26 letters plus 10 numbers plus the hyphen is 37 characters total, so it would be
$$36^2 \cdot 37^{61}$$
Now granted, that's just the number of alphanumeric combinations; whether those combinations are actually words would require quite a bit of proofreading.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/386997",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 6,
"answer_id": 4
} |
Prove that connected graph G, with 11 vertices and 52 edges, is Hamiltonian Is this graph always, sometimes, or never Eulerian? Give a proof or a pair of examples to justify your answer
Could G contain an Euler trail? Must G contain an Euler trail? Fully justify your answer
| $G$ is obtained from $K_{11}$ by removing three edges $e_1,e_2,e_3$. We label now the vertices of $G$ the following way:
$$e_1=(1,3)$$
Label the unlabeled vertices of $e_2$ by the smallest unused odd numbers, and label the unlabeled vertices of $e_3$ by the smallest unused odd numbers. Note that by our choices $e_3 \neq (1,11)$, since if $e_3$ uses the vertex $1$, the remaining vertex is labeled by a number $\leq 9$.
Label the remaining vertices some random way.
Then $1-2-3-4-5-6-7-8-9-10-11-1$ is an Hamiltonian cycle in $G$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/387071",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
} |
Dense set in $L^2$ Let $ \Omega\subset \mathbb{R}^n$ with $m(\Omega^c)=0 $. Then how can we show that $ \mathcal{F}(C_{0}^{\infty}(\Omega))$ (here $ \mathcal{F}$ denotes the Fourier transform) is dense in $L^2$ (or $L^p$)?
Besides, I'm also interested to know if the condition that $m(\Omega^c)=0$ can be weakened to some more general set.
Thanks for your help.
| To deal with the case $\Omega=\Bbb R^n$, take $f\in L^2$. Then by Plancherel's theorem, theorem 12 in these lecture notes, we can find $g\in L^2$ such that $f=\mathcal F g$. Now approximate $g$ by smooth functions with compact support and use isometry.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/387149",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Triple Integral over a disk How do I integrate $$z = \frac{1}{x^2+y^2+1}$$ over the region above the disk $x^2+y^2 \leq R^2$?
| Use polar coordinates: $x^2+y^2 = r^2$, etc. An area element is $dx\, dy = r \, dr \, d\theta$. The integral over the disk is
$$\int_0^R dr \, r \: \int_0^{2 \pi} d\theta \frac{1}{1+r^2} = 2 \pi \int_0^R dr \frac{r}{1+r^2}$$
You can substitute $u=r^2$ to get for the integral
$$\pi \int_0^{R^2} \frac{du}{1+u}$$
I trust that you can evaluate this.
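If you want to check the whole computation symbolically, a short sympy sketch (with $R$ left as a parameter):

```python
import sympy as sp

r, theta, R = sp.symbols('r theta R', positive=True)
integrand = 1 / (1 + r**2)                                   # z = 1/(x^2 + y^2 + 1) in polar form
result = sp.integrate(integrand * r, (theta, 0, 2*sp.pi), (r, 0, R))
print(sp.simplify(result))                                   # pi*log(R**2 + 1)
```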
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/387226",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
probable squares in a square cake There is a probability density function defined on the square [0,1]x[0,1].
The pdf is finite, i.e., the cumulative density is positive only for pieces with positive area.
Now Alice and Bob play a game: Alice marks two disjoint squares, Bob chooses the square that contains the maximum probability, and Alice gets the other square. The goal of Alice is to maximize the probability in her square.
Obviously, in some cases Alice can assure herself a probability of 1/2, for example, if the pdf is uniform in [0,$1 \over 2$]x[0,1], she can cut the squares [0,$1 \over 2$]x[0,$1 \over 2$] and [0,$1 \over 2$]x[$1 \over 2$,1], both of which contain $1 \over 2$.
However, in other cases Alice can assure herself only $1 \over 4$, for example, if the pdf is uniform in [0,1]x[0,1].
Are there pdfs for which Alice cannot assure herself even $1 \over 4$ ?
What is the worst case for Alice?
| I think Alice can always assure herself at least $1 \over 4$ cdf, in the following way.
First, in each of the 4 corners, mark a square that contains $1 \over 4$ cdf. Since the pdf is finite, it is always possible to construct such a square, by starting from the corner and increasing the square gradually, until it contains exactly $1 \over 4$ cdf.
There is at least one corner, in which the side length of such a square will be at most $1 \over 2$ . Suppose this is the lower-left corner, and the side length is a, so square #1 is [0,a]x[0,a], with a <= $1 \over 2$.
Now, consider the following 3 squares:
*
*To the right of square #1: [a,1]x[0,1-a]
*On top of square #1: [0,1-a]x[a,1]
*On the top-right of square #1: [a,1]x[a,1]
The union of these squares covers the entire remainder after we remove square #1. This remainder contains $3 \over 4$ cdf. So, the sum of cdf in all 3 squares is at least $3 \over 4$ (probably more, because the squares overlap).
Among those 3, select the one with the greatest cdf. It must contain at least $1 \over 3$ of $3 \over 4$, i.e., at least $1 \over 4$. This is square #2.
So, Alice can always cut two squares that contain at least $1 \over 4$ cdf.
Note that this procedure relies on the fact (that I mentioned in the original question) that the pdf is finite. Otherwise, it may not always be possible to construct a square with $1 \over 4$ cdf.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/387338",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
} |
How to simplify $\frac{(\sec\theta -\tan\theta)^2+1}{\sec\theta \csc\theta -\tan\theta \csc \theta} $ How to simplify the following expression :
$$\frac{(\sec\theta -\tan\theta)^2+1}{\sec\theta \csc\theta -\tan\theta \csc \theta} $$
| The numerator becomes
$(\sec\theta -\tan\theta)^2+1=\sec^2\theta+\tan^2\theta-2\sec\theta\tan\theta+1=2\sec\theta(\sec\theta -\tan\theta)$
So, $$\frac{(\sec\theta -\tan\theta)^2+1}{\sec\theta \csc\theta -\tan\theta \csc \theta}$$
$$=\frac{2\sec\theta(\sec\theta -\tan\theta)}{\csc\theta(\sec\theta -\tan\theta)}=2\frac{\sec\theta}{\csc\theta}(\text{ assuming } \sec\theta -\tan\theta\ne0)$$
$$=2\frac{\sin\theta}{\cos\theta}=2\tan\theta$$
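A one-line symbolic check of the simplification (sympy; it should print $0$ wherever the expression is defined):

```python
import sympy as sp

t = sp.symbols('theta')
expr = ((sp.sec(t) - sp.tan(t))**2 + 1) / (sp.sec(t)*sp.csc(t) - sp.tan(t)*sp.csc(t))
print(sp.simplify(expr - 2*sp.tan(t)))   # 0
```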
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/387427",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 1
} |
Metric spaces and distance functions. I need to provide an example of a space of points X and a distance function d, such that the following properties hold:
*
*X has a countable dense subset
*X is uncountably infinite and has only one limit point
*X is uncountably infinite and every point of X is isolated
I'm really bad with finding examples... Any help will be greatly appreciated! Thank you.
(I got the third one)
| For the first question, hint: $\mathbb{Q}$ is a countable set.
For the third question, hint: think about the discrete metric on a space.
For the second question: Let $X=\{x\in\mathbb{R}\:|\: x>1 \mbox{ or }x=\frac{1}{n}, n\in\mathbb{N}_{\geq 1}\}\cup\{0\}$. Let
*
*$d(x,y)=1$ if $x\ne y$ and ($x> 1$ or $y>1$),
*$d(x,y)=|x-y|$ if $x\leq 1$ and $y\leq 1$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/387507",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
} |
Group $\mathbb Q^*$ as direct product/sum Is the group $\mathbb Q^*$ (rationals without $0$ under multiplication) a direct product or a direct sum of nontrivial subgroups?
My thoughts:
Consider subgroups $\langle p\rangle=\{p^k\mid k\in \mathbb Z\}$ generated by a positive prime $p$ and $\langle -1\rangle=\{-1,1\}$.
They are normal (because $\mathbb Q^*$ is abelian), intersect in $\{1\}$, and any $q\in \mathbb Q^*$ is uniquely written as a quotient of prime powers (finitely many).
So, I think $\mathbb Q^*\cong \langle -1\rangle\times \bigoplus_p\langle p\rangle\,$ where $\bigoplus$ is the direct sum.
And simply we can write $\mathbb Q^*\cong \Bbb Z_2\times \bigoplus_{i=1}^\infty \Bbb Z$.
Am I right?
| Yes, you're right.
Your statement can be generalized to the multiplicative group $K^*$ of the fraction field $K$ of a unique factorization domain $R$. Can you see how?
In fact, if I'm not mistaken it follows from this that for any number field $K$, the group $K^*$ is the product of a finite cyclic group (the group of roots of unity in $K$) with a free abelian group of countable rank, so of the form
$K^* \cong \mathbb{Z}/n\mathbb{Z} \oplus \bigoplus_{i=1}^{\infty} \mathbb{Z}.$
Here it is not enough to take the most obvious choice of $R$, namely the full ring of integers in $K$, because this might not be a UFD. But one can always choose an $S$-integer ring (obtained from $R$ by inverting finitely many prime ideals) with this property and then apply Dirichlet's S-Unit Theorem.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/387580",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "12",
"answer_count": 1,
"answer_id": 0
} |
Compute the Centroid of a Semicircle without Calculus Can the centroid of a semicircle be computed without deferring to calculus or a limiting procedure?
| The following may be acceptable to you as an answer. You can use the centroid theorem of Pappus.
I do not know whether you really mean half-circle (a semi-circular piece of wire), or a half-disk. Either problem can be solved using the theorem of Pappus.
When a region is rotated about an axis that does not go through the region, the volume of the solid generated is the area of the region times the distance travelled by the centroid.
A similar result holds when a piece of wire is rotated. The surface area of the solid is the length of the wire times the distance travelled by the centroid.
In the case of rotating a semi-circular disk, or a semi-circular piece of wire, the resulting volume (respectively, surface area) is known.
Remark: The result was known some $1500$ years before Newton was born. And the volume of a sphere, also the surface area, were known even before that. The ideas used to calculate volume and area have, in hindsight, limiting processes at their heart. So if one takes a broad view of the meaning of "calculus," we have not avoided it.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/387640",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Determining probability of certain combinations Say I have a set of numbers 1,2,3,4,5,6,7,8,9,10 and I take 10 C 4, which I know equals 210. But let's say I want to know how often 3 appears in those combinations; how do I determine that?
I now know the answer to this is $\binom{1}{1}$ $\binom{9}{3}$
I am trying to apply this to solve a problem in my math book. School is over so I can't ask my professor; I'm just trying to get a head start on next year.
There are 4 numbers S,N,M,K
S stands for the number of students in the class, N the number of kids going on the trip, M my friend circle including me, and K the number of my friends I need on the trip with me to enjoy myself.
I have to come up with a general solution to find out the probability that I will enjoy the trip if I am chosen to go on it.
So far I came up with $\binom{S-1}{N-1}$ /($\binom{M-1}{K}$ $\binom{N-K-1}{S-K-1}$)
it works for cases like 10 4 6 4 and 3 2 2 1, but doesn't work for 10 10 5 3
any help is appreciated
| Corrected:
First off, your fraction is upside-down: $\binom{S-1}{N-1}$ is the total number of groups of $N$ students that include you, so it should be the denominator of your probability, not the numerator. Your figure of $\binom{M-1}K\binom{N-K-1}{S-K-1}$ also has an inversion: it should be $\binom{M-1}K\binom{S-K-1}{N-K-1}$, where $\binom{S-K-1}{N-K-1}$ is the number of ways of choosing the $N-(K+1)$ students on the trip who are not you or your $K$ friends who are going.
After those corrections you have
$$\frac{\binom{M-1}K\binom{S-K-1}{N-K-1}}{\binom{S-1}{N-1}}\;.$$
The denominator counts all possible groups of $N$ students that include you. The first factor in the numerator is the number of ways to choose $K$ of your friends, and the second factor is the number of ways to choose enough other people (besides you and the $K$ friends already chosen) to make up the total of $N$. However, this counts any group of $N$ students that includes you and more than $K$ of your friends more than once: it counts such a group once for each $K$-sized set of your friends that it contains.
To avoid this difficulty, replace the numerator by
$$\binom{M-1}K\binom{S-M}{N-K-1}\;;$$
now the second factor counts the ways to fill up the group with students who are not your friends, so the product is the number of groups of $N$ students that contain you and exactly $K$ of your friends. Of course now you have to add in similar terms for each possible number of friends greater than $K$, since you’ll be happy as long as you have at least $K$ friends with you: it need not be exactly $K$.
There are $\binom{M-1}{K+1}\binom{S-M}{N-K-2}$ groups of $N$ that include you and exactly $K+1$ of your friends, another $\binom{M-1}{K+2}\binom{S-M}{N-K-3}$ that contain you and exactly $K+2$ of your friends, and so on, and the numerator should be the sum of these terms:
$$\sum_i\binom{M-1}{K+i}\binom{S-M}{N-K-1-i}\;.\tag{1}$$
You’ll notice that I didn’t specify bounds for $i$. $\binom{n}k$ is by definition $0$ when $k>n$ or $k<0$, so we really don’t have to specify them: only the finitely many terms that make sense are non-zero anyway.
Alternatively, you can count the $N$-person groups that include you and fewer than $K$ of your friends and subtract that from the $\binom{S-1}{N-1}$ groups that include you; the difference must be the number that include you and at least $K$ of your friends, i.e., the number that is counted by $(1)$. The number that include you and exactly $i$ of your friends is $\binom{M-1}i\binom{S-M}{N-1-i}$, so the number of groups that include you and fewer than $K$ of your friends is
$$\sum_{i=0}^{K-1}\binom{M-1}i\binom{S-M}{N-1-i}\;.$$
This is going to be a shorter calculation than $(1)$ if $K$ is small.
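A small computational sketch of formula $(1)$ divided by $\binom{S-1}{N-1}$ (the function name is mine), which also lets you test cases like the ones from the question:

```python
from math import comb

def prob_enjoy(S, N, M, K):
    """P(at least K of my M-1 friends go), given that I am one of the N chosen from S students."""
    total = comb(S - 1, N - 1)
    favorable = sum(comb(M - 1, i) * comb(S - M, N - 1 - i)
                    for i in range(K, min(M - 1, N - 1) + 1))
    return favorable / total

print(prob_enjoy(3, 2, 2, 1))    # 0.5: my one friend fills the single remaining spot half the time
print(prob_enjoy(10, 10, 5, 3))  # 1.0: everyone goes, so all my friends are certainly with me
```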
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/387713",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Proper way to define this multiset operator that does a pseudo-intersection? it's been a while since I've done anything with set theory and I'm trying to find a way to describe a certain operator.
Let's say I have two multisets:
$A = \{1,1,2,3,4\}$
$B = \{1,5,6,7\}$
How can I define the operator $\mathbf{O}$ such that
$ A \mathbf{O} B= \{1,1,1\}$
Thanks!
| Let us represent multisets by ordered pairs, $\newcommand{\tup}[1]{\langle #1\rangle}\tup{x,i}$ where $x$ is the element and $i>0$ is the number of times that $x$ is in the set.
Let me write the two two multisets in this notation now: $$A=\{\tup{1,2},\tup{2,1},\tup{3,1},\tup{4,1}\},\quad B=\{\tup{1,1},\tup{5,1},\tup{6,1},\tup{7,1}\}.$$
In this case we take those elements appearing in both sets and sum their counters, then: $$A\mathrel{\mathbf{O}}B=\{\tup{x,i+j}\mid\tup{x,i}\in A\land\tup{x,j}\in B\}=\{\tup{1,3}\}.$$
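In code, Python's `collections.Counter` is a convenient multiset representation; a minimal sketch of the operator (the helper name is mine):

```python
from collections import Counter

def common_with_summed_counts(a, b):
    """Keep the elements present in both multisets; their multiplicities are added."""
    return Counter({x: a[x] + b[x] for x in a.keys() & b.keys()})

A = Counter([1, 1, 2, 3, 4])
B = Counter([1, 5, 6, 7])
print(list(common_with_summed_counts(A, B).elements()))   # [1, 1, 1]
```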
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/387751",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Determining Fourier series for $\lvert \sin{x}\rvert$ for building sums My math problem is a bit more tricky than it sounds in the caption.
I have the following Task (which i in fact do not understand):
"Determine the Fourier series for $f(x)=\lvert \sin{x}\rvert$ in order to build the Sum for the series: $\frac{1}{1*3}+\frac{1}{3*5}+\frac{1}{5*7}+\dots$"
My approach: first, calculating the Fourier series. There is no period, Intervall or Point given. i think it must turn out to be something like this: $a_{n} = \frac{2}{\pi} \int_{0}^{\pi} |\sin x|\cos nx dx$ but what would be the next step to build the series?
and second: the other given series. I think its all about the uneven numbers, so i have in mind 2n-1 and 2n+1 are the two possible definitions. So it could be something like this:
$(\frac{1}{1\cdot 3}+\frac{1}{3\cdot 5}+\frac{1}{5\cdot 7}+\dots) = \sum\limits_{n=1}^{\infty} \frac{1}{(2n-1)(2n+1)}$
But I cannot make the connection between this series, its sum, and $|\sin x|$.
Still, I think the sum should be $\sum\limits_{n=1}^{\infty} \frac{1}{(2n-1)(2n+1)} = \frac{1}{2}$ (but I cannot prove it yet).
please help me!
P.S.: I know the other "Convergence of Fourier series for $|\sin{x}|$" question here at Stack Exchange, but I think it doesn't fit my problem. Besides, I don't understand their explanation, and no way is shown there to determine the solution by oneself.
P.P.S: edits were made only to improve LaTeX and/or language
| You were on the right track. First, calculate the Fourier series of $f(x)$ (you can leave out the magnitude signs because $\sin x \ge 0$ for $0\le x \le \pi$ ):
$$a_n=\frac{2}{\pi}\int_{0}^{\pi}\sin x\cos nx\;dx=
\left\{ \begin{array}{l}-\frac{4}{\pi}\frac{1}{(n+1)(n-1)},\quad n \text{ even}\\
0,\quad n \text{ odd} \end{array}\right .$$
For the constant term we get $\frac{2}{\pi}$. Therefore, the Fourier series is
$$f(x) = \frac{2}{\pi}-\frac{4}{\pi}\sum_{n \text{ even}}^{\infty}\frac{1}{(n+1)(n-1)}\cos nx=\\
=\frac{2}{\pi}\left ( 1-2\sum_{n=1}^{\infty}\frac{1}{(2n+1)(2n-1)}\cos 2nx \right )\tag{1}$$
Now we can evaluate the series. Set $x=\pi$, then we have $f(\pi)=0$ and $\cos 2n\pi = 1$. Evaluating (1) with $x=\pi$ we get
$$0 = 1 - 2\sum_{n=1}^{\infty}\frac{1}{(2n+1)(2n-1)}$$
which gives your desired result.
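A quick numerical check of both the Fourier series and the resulting sum (plain Python, truncating the series at a large index):

```python
import math

def fourier_abs_sin(x, terms=2000):
    """Truncated Fourier series of |sin x| from the formula above."""
    s = sum(math.cos(2*n*x) / ((2*n + 1)*(2*n - 1)) for n in range(1, terms + 1))
    return (2/math.pi) * (1 - 2*s)

print(fourier_abs_sin(1.3), abs(math.sin(1.3)))               # agree to about three decimal places
print(sum(1/((2*n - 1)*(2*n + 1)) for n in range(1, 10**5)))  # ~0.4999975, approaching 1/2
```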
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/387820",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
How to prove $n$ is prime? Let $n \gt 1$ and
$$\left\lfloor\frac n 1\right\rfloor + \left\lfloor\frac n2\right\rfloor + \ldots + \left\lfloor\frac n n\right\rfloor = \left\lfloor\frac{n-1}{1}\right\rfloor + \left\lfloor\frac{n-1}{2}\right\rfloor + \ldots + \left\lfloor\frac{n-1}{n-1}\right\rfloor + 2$$
and $\lfloor \cdot \rfloor$ is the floor function. How to prove that $n$ is a prime?
Thanks in advance.
| You know that
$$\left( \left\lfloor\frac n 1\right\rfloor - \left\lfloor\frac{n-1}{1}\right\rfloor \right)+\left( \left\lfloor\frac n2\right\rfloor - \left\lfloor\frac{n-1}{2}\right\rfloor\right) + \ldots + \left( \left\lfloor\frac{n}{n-1}\right\rfloor - \left\lfloor\frac{n-1}{n-1}\right\rfloor\right) + \left\lfloor\frac n n\right\rfloor=2 \,.$$
You know that
$$\left( \left\lfloor\frac n 1\right\rfloor - \left\lfloor\frac{n-1}{1}\right\rfloor \right)=1$$
$$\left\lfloor\frac n n\right\rfloor =1$$
$$\left( \left\lfloor\frac n k\right\rfloor - \left\lfloor\frac{n-1}{k}\right\rfloor\right) \geq 0, \qquad \forall 2 \leq k \leq n-1 \,.$$
Since these terms add up to $2$, the remaining differences must all be zero; thus for all $2 \leq k \leq n-1$ we have
$$ \left\lfloor\frac n k\right\rfloor - \left\lfloor\frac{n-1}{k}\right\rfloor = 0 \Rightarrow \left\lfloor\frac n k\right\rfloor = \left\lfloor\frac{n-1}{k}\right\rfloor $$
It is easy to prove that this means that $k \nmid n$. Since this is true for all $2 \leq k \leq n-1$, you are done.
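A quick brute-force check of the criterion for small $n$, comparing it against trial division (the helper name is mine):

```python
def floor_sum(n):
    return sum(n // k for k in range(1, n + 1))

for n in range(2, 200):
    criterion = (floor_sum(n) == floor_sum(n - 1) + 2)
    is_prime = all(n % k != 0 for k in range(2, n))
    assert criterion == is_prime
print("criterion matches primality for 2 <= n < 200")
```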
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/387885",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 2,
"answer_id": 1
} |
Examples of Diophantine equations with a large finite number of solutions I wonder, if there are examples of Diophantine equations (or systems of such equations) with integer coefficients fitting on a few lines that have been proven to have a finite, but really huge number of solutions?
Are there ones with so large number of solutions that we cannot write any explicit upper bound for this number using Conway chained arrow notation?
Update: I am also interested in equations with few solutions but where a value in a solution is very large itself.
| Let $b$ is a non-zero integer, and let $n$ is a positive integer.
The equation $y(x-b)=x^n$ has only finitely many integer solutions.
The first solution is:
$x=b+b^n$ and $y=(1+b^{n-1})^n$.
The second solution is:
$x=b-b^n$ and $y=-(1-b^{n-1})^n$,
cf. [1, page 7, Theorem 9] and [2, page 709, Theorem 2].
The number of integer solutions to $(y(x-2)-x^{\textstyle 2^n})^2+(x^{\textstyle 2^n}-s^2-t^2-u^2)^2=0$
grows quickly with $n$, see [3].
References
[1] A. Tyszka, A conjecture on rational arithmetic
which allows us to compute an upper bound for the
heights of rational solutions of a Diophantine
equation with a finite number of solutions,
http://arxiv.org/abs/1511.06689
[2] A. Tyszka, A hypothetical way to compute an upper
bound for the heights of solutions of a Diophantine
equation with a finite number of solutions,
Proceedings of the 2015 Federated Conference on
Computer Science and Information Systems
(eds. M. Ganzha, L. Maciaszek, M. Paprzycki),
Annals of Computer Science and Information Systems, vol. 5, 709-716,
IEEE Computer Society Press, 2015.
[3] A. Tyszka, On systems of Diophantine equations with a large number of integer solutions,
http://arxiv.org/abs/1511.04004
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/387950",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "29",
"answer_count": 8,
"answer_id": 7
} |
Probability related finance question: Need a more formal solution You are offered a contract on a piece of land which is worth $1,000,000$ USD $70\%$ of the time, $500,000$ USD $20\%$ percent of the time, and $150,000$ USD $10\%$ of the time. We're trying to max profit.
The contract says you can pay $x$ dollars for someone to determine the land's value from which you can decide whether or not to pay $300,000$ USD for the land. What is $x$? i.e., How much is this contract worth?
$700,000 + 100,000 + 15,000 = 815,000$ is the contract's worth.
So if we just blindly buy, we net ourselves $515$k.
I originally was going to say that the max we pay someone to value the land was $x < 515$k (right?) but that doesn't make sense. We'd only pay that max if we had to hire someone to determine the value. We'd still blindly buy the land all day since its Expected Value is greater than $300$k.
Out of curiosity: what is the max we'd pay someone to value the land?
Anyway, to think about this another way:
If we pay someone $x$ to see the value of the land, we don't pay the $150,000$ USD $10\%$ of the time and we still purchase the land if the land is worth more than $300,000$.
Quickest way to think of it is to say "valuing saves us $150,000$ USD $10\%$ of the time and does nothing the other $90\%$, so it is worth $15,000$ USD".
Is there a more formal way to think of this problem?
| There is no unique arbitrage-free solution to the pricing problem with $3$ outcomes, so you will need to impose more assumptions to get a numerical value for the land.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/388014",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
What is the centroid of a hollow spherical cap? I have a unit hollow sphere which I cut along a diameter to generate two equivalent hollow hemispheres. I place one of these hemispheres on an (x,y) plane, letting it rest on the circular planar face where the cut occurred.
If the hemisphere was solid, we could write that its centroid in the above case would be given as $(0,0,\frac{3}{8})$. Given that the hemisphere is hollow, can we now write its centroid as $(0,0,\frac{1}{2})$?
| Use $z_0=\displaystyle {{\int z ds}\over {\int ds}}$ where $z=\cos\phi$ and $ds=\sin\phi d\theta d\phi$. That gives your 1/2.
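A quick symbolic check of that surface integral over the upper unit hemisphere (sympy):

```python
import sympy as sp

phi, theta = sp.symbols('phi theta')
ds = sp.sin(phi)   # surface element of the unit sphere is sin(phi) dtheta dphi
num = sp.integrate(sp.cos(phi) * ds, (theta, 0, 2*sp.pi), (phi, 0, sp.pi/2))
den = sp.integrate(ds, (theta, 0, 2*sp.pi), (phi, 0, sp.pi/2))
print(num / den)   # 1/2
```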
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/388083",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 1
} |
Simple dice questions. There are two dice.
Assume dice are fair.
What does the following probability represent:
It's $$\frac{1}{6} + \frac{1}{6} - \left(\frac{1}{36} \right)$$
What does this represent:
$$\frac{1}{6} \cdot \frac{5}{6} + \frac{1}{6} \cdot \frac{5}{6}$$
This represents the probability of rolling just a single $X$ (where $X$ is a specified number on the dice, not a combined value where $X$ is between $1$ and $6$) right?
What does this represent: $1- \left(\frac{5}{6} \cdot\frac{5}{6} \right)$
What does $\frac{1}{6} + \frac{1}{6} - \frac{1}{36}$ represent? Is this also the probability of rolling a single 6? If so, why do we subtract the $\frac{1}{36}$ at the end (the probability of rolling both sixes)? Don't we want to include that possibility since we're looking for the probability of rolling a single 6?
| 1> Same as 3 [but using the inclusion-exclusion principle].
2> P[You get different outcomes on rolling the pair of dice twice]
3> P[you get a particular outcome at least once]
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/388132",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 3
} |
A formal proof required using real analysis If $\int_0^1f^{2n}(x)dx=0$, prove that $f(x)=0$, where $f$ is a real valued continuous function on [0,1]?
It is obvious, since $f^{2n}(x) \geq 0$, the only way this is possible is when $f(x)=0$.
I am looking for any other formal way of writing this proof i.e. using concepts from real analysis.
Any hints will be appreciated.
| Assume there exists $c\in[0,1]$ such that $f^{2n}(c)>0$, then by definition of continuity there exists $0<\delta< \min(c,1-c)$ such that $$|x-c|<\delta\implies |f^{2n}(x)-f^{2n}(c)|<\frac{1}{2} f^{2n}(c)$$
Especially, if $|x-c|<\delta$ then $f^{2n}(x)>\frac{1}{2} f^{2n}(c)$. Therefore
$$\int_0^1 f^{2n}(x) dx =\int_0^{c-\delta} f^{2n}(x) dx+\int_{c-\delta}^{c+\delta} f^{2n}(x) dx+\int_{c+\delta}^1 f^{2n}(x) dx\\
\ge 0+\int_{c-\delta}^{c+\delta} \frac{f^{2n}(c)}{2} dx+0 = \delta\, f^{2n}(c)>0 $$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/388200",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Partitions of an interval and convergence of nets Let $\mathscr{T}$ be the set of partitions $\tau = (\tau_0 = 0 < \tau_1 < \dots < \tau_N = 1)$ of the interval $[0,1]$ (where $N$ is not fixed). This becomes a directed set by setting $\tau < \tau^\prime$ iff $\tau^\prime$ is a subdivision.
Now we can look at the nets
$$ s_\tau := \sum_{j=1}^N (\tau_j - \tau_{j-1})^2$$
or
$$ r_\tau := N \sum_{j=1}^N (\tau_j - \tau_{j-1})^3$$
We should have both $s_\tau \longrightarrow 0$ and $r_\tau \longrightarrow 0$ in the sense of nets, but I don't know how to prove it. Does anybody have an idea?
| A general hint and a partial illustration.
As I understood, you must show that for each $\varepsilon>0$ there are partitions $\tau_1$ and $\tau_2$ such that $s_{\tau_1’}<\varepsilon$ and $r_{\tau_2’}<\varepsilon$ for each $\tau_1’>\tau_1$ and $\tau_2’>\tau_2$.
Since $(a+b)^2\ge a^2+b^2$, provided $a$ and $b$ are non-negative, $s_\tau$ can only decrease when the partition is refined. So
it is enough to show that for each real $\varepsilon>0$ there is a partition $\tau$ such that $s_\tau<\varepsilon$.
"language": "en",
"url": "https://math.stackexchange.com/questions/388275",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Unexpected approximations which have led to important mathematical discoveries On a regular basis, one sees at MSE approximate numerology questions like
*
*Prove $\log_{{1}/{4}} \frac{8}{7}> \log_{{1}/{5}} \frac{5}{4}$,
*Prove $\left(\dfrac{2}{5}\right)^{{2}/{5}}<\ln{2}$,
*Comparing $2013!$ and $1007^{2013}$
or yet the classical $\pi^e$ vs $e^{\pi}$. In general I don't like this kind of problems since
a determined person with a calculator can always find two numbers accidentally close to each other - and then ask others
to compare them without a calculator. An illustration I've quickly found myself (presumably it is as difficult as it is stupid): show that $\sin 2013$ is between $\displaystyle \frac{e}{4}$ and $\ln 2$.
However, sometimes there are deep reasons for "almost coincidence". One famous example is
the explanation of the fact that $e^{\pi\sqrt{163}}$ is an almost integer number (with more than $10$-digit accuracy) using the theory of elliptic curves with complex multiplication.
The question I want to ask is: which unexpected good approximations have led to important mathematical
developments in the past?
To give an idea of what I have in mind, let me mention Monstrous Moonshine where the observation
that $196\,884\approx 196\,883$ has revealed deep connections between modular functions,
sporadic finite simple groups and vertex operator algebras.
Many thanks in advance for sharing your insights.
| The most famous, most misguided, and most useful case of approximation fanaticism comes from Kepler's attempt to match the orbits of the planets to a nested arrangement of platonic solids. Fortunately, he decided to go with his data instead of his desires and abandoned the approximations in favor of Kepler's Laws.
Kepler's Mysterium Cosmographicum has unexpected close approximations, and they led to a major result in science.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/388345",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "62",
"answer_count": 3,
"answer_id": 2
} |
Determining power series for $\frac{3x^{2}-4x+9}{(x-1)^2(x+3)}$ I'm looking for the power series for $f(x)=\frac{3x^{2}-4x+9}{(x-1)^2(x+3)}$
My approach: the given function is a combination of two problems. First I made some transformations, so the function looks simpler:
$$\frac{3x^{2}-4x+9}{(x-1)^2(x+3)}=\frac{3x^{2}-4x+9}{x^3+x^2-5x+3}$$
Now I have two polynomials. I thought the problem might be even easier if I think of the function as
$$\frac{3x^{2}-4x+9}{x^3+x^2-5x+3}= (3x^{2}-4x+9)\cdot \frac{1}{x^3+x^2-5x+3}$$
The power series of $3x^{2}-4x+9$ is just $3x^{2}-4x+9$ itself, so I hoped I could find the power series by multiplying the series of the two easier functions... yeah, I am stuck.
$\sum\limits_{n=0}^{\infty}a_{n}\cdot x^{n}=(3x^{2}-4x+9)\cdot \ ...?... =$ Solution
| You can use the partial fraction decomposition:
$$
\frac{3x^{2}-4x+9}{(x-1)^2(x+3)}=
\frac{A}{1-x}+\frac{B}{(1-x)^2}+\frac{C}{1+\frac{1}{3}x}
$$
and sum up the series you get, which are known.
If you do the computation, you find $A=0$, $B=2$ and $C=1$, so
$$
\frac{3x^{2}-4x+9}{(x-1)^2(x+3)}=
\frac{2}{(1-x)^2}+\frac{1}{1+\frac{1}{3}x}
$$
The development of $(1-x)^{-2}$ can be deduced from the fact that
$$
\frac{1}{1-x}=\sum_{n\ge0}x^n
$$
so, by differentiating, we get
$$
\frac{1}{(1-x)^2}=\sum_{n\ge1}nx^{n-1}=\sum_{n\ge0}(n+1)x^n
$$
The power series for the other term is again easy:
$$
\frac{1}{1+\frac{1}{3}x}=\sum_{n\ge0}\frac{(-1)^n}{3^n}x^n
$$
so your power series development is
$$
\frac{3x^{2}-4x+9}{(x-1)^2(x+3)}=
\sum_{n\ge0}\left(2n+2+\frac{(-1)^n}{3^n}\right)x^n
$$
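As a quick sanity check of the coefficient formula, here is a small sympy sketch (the cut-off of six terms is arbitrary):

```python
import sympy as sp

x = sp.symbols('x')
f = (3*x**2 - 4*x + 9) / ((x - 1)**2 * (x + 3))

# Taylor coefficients at 0 should equal 2n + 2 + (-1)^n / 3^n
poly = sp.series(f, x, 0, 6).removeO()
for n in range(6):
    claimed = sp.Rational(2*n + 2) + sp.Rational(-1, 3)**n
    assert sp.simplify(poly.coeff(x, n) - claimed) == 0
print("coefficients match for n = 0..5")
```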
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/388413",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Subgroup transitive on the subset with same cardinality Maybe there is some very obvious insight that i miss here, but i've asked this question also to other people and nothing meaningful came out:
If you have a subgroup G of $S_n$(the symmetric group on n elements), you can consider the natural action of G on the subset of $\{1,...,n\}$; my question is this:what are the G that act transitively on the subsets of the same cardinality; that is, whenever $A,B \subseteq \{1,...,n\}$ with $|A|=|B|$, there is $g \in G$ so that $gA=B$.
| By the Livingstone Wagner Theorem, (Livingstone, D., Wagner, A., Transitivity of finite permutation groups on unordered sets. Math. Z. 90 (1965) 393–403), if $n \ge 2k$ and $G$ is transitive on $k$-subsets with $k \ge 5$, then $G$ is $k$-transitive. Using the classification of finite simple groups, $A_n$ and $S_n$ are the only finite $k$-transitive groups for $k \ge 6$, and the only 4- and 5-transitive groups are the Mathieu groups, so (at least for $n \ge 10$), the only permutation groups that are $k$-homogeneous for all $k$ are $A_n$ and $S_n$.
See also the similar discussion in Group actions transitive on certain subsets
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/388488",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 0
} |
Prove A is symmetric matrix iff $A$ is square and $x^T Ay = (Ax)^T y$ Prove A is a symmetric matrix iff $A$ is square and $x^T Ay = (Ax)^T y$. (for all $x,y \in \mathbb{R}^n$)
Going from the assumption that it is symmetric to the two conditions is fairly straightforward.
However, going the other way, I am stuck at proving $A^T = A$ using the second condition, being stuck at $x^T (A-A^T)y=0$.
Note T is for transpose!
| First, let's recall the rule for transposing a product: $(AB)^T = B^T A^T$.
Using this, we start with the given equation $ x^TAy=(Ax)^Ty$ and apply the rule above, which yields
$x^TAy = x^TA^Ty$.
Note that the equation holds for all $x, y$, i.e. $x^T(A-A^T)y=0$ for all $x,y$. Taking $x=e_i$ and $y=e_j$ (standard basis vectors) gives $(A-A^T)_{ij}=0$ for all $i,j$, so the only way that is possible is if $A = A^T$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/388555",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 1
} |
Congruence of invertible skew symmetric matrices I am asking for some hints to solve this excercise. Given an invertible skew symmetric matrix $A$, then show that there are invertible matrices $ R, R^T$ such that $R^T A R = \begin{pmatrix} 0 & Id \\ -Id & 0 \end{pmatrix}$, meaning that this is a block matrix that has the identity matrix in two of the four blocks and the lower one with a negative sign.
I am completely stuck!
| Hint: a skew-symmetric matrix commutes with its transpose (it is normal), and so it is diagonalizable over $\mathbb{C}$. Your block matrix on your right hand side is also skew-symmetric, and so it is also diagonalizable over $\mathbb{C}$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/388750",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 2,
"answer_id": 1
} |
Probability of retirement event This is elementary, but not clear to me.
Suppose I know that the mean age of retirement is $\mu$ and the standard deviation $\sigma$.
What is the probability that someone of age $x$, who has not yet retired,
will retire sometime in the next year,
i.e., between $x$ and $x+1$?
Clearly for small $x$, far below the mean, the probability is near zero, and for large $x$, it approaches $1$. So it is a type of cumulative distribution...
Thanks for your help!
| Integrate the Gaussian probability density function from x to x+1
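In code, the answer's suggestion amounts to a difference of normal CDF values; a hedged sketch with made-up values for $\mu$, $\sigma$ and $x$ (none of these are given in the question):

```python
from scipy.stats import norm

mu, sigma = 65.0, 5.0   # assumed example values for the mean and standard deviation
x = 63.0                # current age, also just an example

# Mass of the normal density between x and x+1, as the answer suggests
p = norm.cdf(x + 1, mu, sigma) - norm.cdf(x, mu, sigma)
print(p)
```

If one additionally wants to condition on not having retired by age $x$, a natural refinement is to divide this by the survival probability `1 - norm.cdf(x, mu, sigma)`.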
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/388830",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Do the premises logically imply the conclusion? $$b\rightarrow a,\lnot c\rightarrow\lnot a\models\lnot(b\land \lnot c)$$
I have generated an 8 row truth table, separating it into $b\rightarrow a$, $\lnot c\rightarrow\lnot a$ and $\lnot (b\land\lnot c)$. I know that if it was
$$\lnot c\rightarrow\lnot a\models\lnot(a \land \lnot c)$$
I would only need to check the right side with every value that makes the left side true to make sure the overall statement is true. How do I deal with more than one premise?
| The second premise $\neg c\to\neg a$ implies that $a\to c$.
The first premise $b\to a$ leads to $b\to a\to c$, which implies $\neg c\to\neg b$.
The two last statements clearly prevent $b\land\neg c$ from being true.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/388888",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Does there exist a matrix $P$ such that $P^n=M$ for a special matrix $M$? Consider the matrix
$$
M=\left(\begin{matrix}
0&0&0&1\\
0&0&0&0\\
0&0&0&0\\
0&0&0&0\\
\end{matrix}\right).
$$
Is there a matrix $P\in{\Bbb C}^{4\times 4}$ such that $P^n=M$ for some $n>1$?
One obvious fact is that if such $P$ exists, then $P$ must be nilpotent. However, I have no idea how to deal with this problem. Furthermore, what if $M$ is an arbitrary nilpotent matrix with index $k$?
| Since it has been proposed to treat this Question as a duplicate of the present one, it should be noted that there is a negative Answer to the "new" issue raised in Yeyeye's Answer here.
As earlier observed, since $M$ is nilpotent, $P^n = M$ with $n\gt 1$ requires that $P$ is nilpotent. It follows that the minimal polynomial for $P$ must divide $x^d$, where $d$ is the degree of nilpotency (i.e. the least power such that $P^d=0$).
Now the characteristic polynomial of $P$ will have degree $4$ (because $P$ is $4\times 4$), and thus $d\le 4$. So the minimal polynomial of $P$ has the form $x^k$ for $1\lt k \le d \le 4$. In other words, $P^4=0$ given the above information. So $P^3 = M \neq 0$ is the largest power $n=3$ that one can achieve.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/388970",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 2
} |
Let $W$ be a Wiener process and $X(t):=W^{2}(t)$ for $t\geq 0.$ Calculate $\operatorname{Cov}(X(s), X(t))$. Let $W$ be a Wiener process. If $X(t):=W^2(t)$ for $t\geq 0$, calculate $\operatorname{Cov}(X(s),X(t))$
| Assume WLOG that $t\geqslant s$. The main task is to compute $E(X_sX_t)$.
Write $X_t=(W_t-W_s+W_s)^2=(W_t-W_s)^2+2W_t(W_t-W_s)+W_s^2$, and use the fact that if $U$ and $V$ are independent random variables, so are $U^2$ and $V^2$.
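Carrying the hint through gives $E(X_sX_t)=s(t-s)+3s^2=st+2s^2$ for $s\le t$, hence $\operatorname{Cov}(X_s,X_t)=2\min(s,t)^2$. A quick Monte Carlo sketch to check this (the sample size, seed and values of $s,t$ are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
s, t, n = 0.7, 1.3, 200_000

# Simulate W_s and W_t = W_s + an independent increment
Ws = rng.normal(0.0, np.sqrt(s), n)
Wt = Ws + rng.normal(0.0, np.sqrt(t - s), n)

print(np.cov(Ws**2, Wt**2)[0, 1], 2 * min(s, t)**2)  # both close to 0.98
```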
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/389067",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
stability and asymptotic stability: unstable but asymptotically convergent solution of nonlinear system Consider nonlinear systems of the form $X(t)'=F(X(t))$, where $F$ is smooth (assume $C^\infty$). Is it possible to construct such a system (preferably planar system) so that $X_0$ is an unstable equilibrium, but all nearby solution curves tend to $X_0$ as $t \to \infty$? If so, how? A conceptual construction is enough. What will the phase portrait look like?
|
Definition (Verhulst 1996)
The equilibrium solution $X_c$ is called asymptotically stable if there exists a $\delta(t_0)$ such that
\begin{equation}
||X_0 - X_c|| \le \delta(t_0) \implies \lim_{t\rightarrow\infty} ||X(t;t_0,X_0) - X_c|| =0
\end{equation}
So no: you're essentially asking whether a system can be both asymptotically stable and unstable, which is a contradiction.
EDIT:
Glendinning's text refers to this definition as quasi-asymptotic stability to differentiate it from 'direct' asymptotic stability (solutions are both quasi-asymptotically stable and Lyupanov stable).
As an example he presents this equation:
\begin{equation}
\dot{r} = r(1-r^2) \qquad \dot{\theta} = 2\sin^2\left(\frac{1}{2}\theta\right),
\end{equation}
which has an unstable critical point at $(0,0)$ and a quasi-asymptotically stable critical point at $(1,0)$.
Points are attracted to the invariant circle at $r=1$. On this curve, the flow is semi-stable. If $\pi>\theta >0$ then the flow wraps around until it passes $\theta = \pi$. If $0> \theta>-\pi$ the trajectory falls into the critical point at $(1,0)$. This is an example of a homoclinic orbit.
In the example provided, the critical point at $(1,0)$ is a saddle, which, if all we considered was the linearised system, is unstable. Indeed it is Lyapunov unstable, as there is a trajectory that leaves any $\epsilon$-ball about $(1,0)$.
However, in nonlinear systems the linearisation is often not sufficient,
especially with saddles, which are often the cause of a whole bunch of interesting behaviour that isn't represented in the linearisation, like homoclinic and heteroclinic connections.
I think the confusion comes from how you/they define instability.
The texts I refer to define a critical point as unstable if it is not Lyapunov stable or 'quasi-asymptotically' stable. However, if all you go off is the linearisation, then I'd imagine the example is what you're after.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/389136",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Where is wrong in this proof Suppose $a=b$.
Multiplying by $a$ on both sides gives $a^2 = ab$. Then we subtract $b^2$ on both sides, and get
$a^2-b^2 = ab-b^2$.
Obviously, $(a-b)(a+b) = b(a-b)$, so dividing by $a - b$, we find
$a+b = b$.
Now, suppose $a=b=1$. Then $1=2$ :)
| By a simple example:
if $A = 0$, then $A \cdot 5 = A \cdot 7$, and dividing by $A$ we would get $5 = 7$.
So we may only divide by $A$ when $A$ is not equal to zero; in the proof above $a - b = 0$, so the division by $a-b$ is not allowed.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/389180",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
} |
what is the value of this trigonometric expression I want to find out value of this expression
$$\cos^2 48°-\sin^2 12°$$
Just hint the starting step.Is there any any formula regarding $\cos^2 A-\sin^2 B$?
| I've got a formula:
$$\cos(A+B)\cos(A-B)=\cos^2A-\sin^2B$$
so from this formula this question is now easy:
$$\cos^2 48^\circ-\sin^2 12^\circ=\cos 60^\circ\cos 36^\circ=\frac{\sqrt{5}+1}{8}$$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/389276",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
} |
Is the difference of two recursively enumerable sets, reducible to $K$? Is the difference of two recursively enumerable sets, reducible to $K$?
$W_x/W_y=\{z|z \in W_x \& z \notin W_y\}$
$K=\{x|\Phi_x(x) \downarrow\}$
$W_x= \text{dom}(\Phi_x)$
| No.
Let $\omega$ denote the set of natural numbers. $K$ is c.e. but incomputable. If a set $A$ and its complement $\bar{A} = \omega - A$ are both c.e., then $A$ must be computable. Hence $\bar{K} = \omega - K$, the complement of $K$, is not a c.e. set.
It is clear that $\omega$, the set of natural numbers, is c.e. (it is computable). $K$ is c.e. Thus
$\omega - K$ is a difference of two c.e. sets. $\omega - K = \bar{K}$, which as mentioned above is not c.e.
Suppose $\omega - K = \bar{K}$ were many-one reducible to $K$; in notation, $\bar{K} \leq_m K$. Since $K$ is c.e., $\bar{K}$ would have to be c.e. (see the little lemma below the line). This is a contradiction.
If $A \leq_m B$ (that is, $A$ is many-one reducible to $B$) and $B$ is c.e., then $A$ is c.e.
By definition of $A \leq_m B$, there is a computable function $f$ such that
$x \in A$ if and only if $f(x) \in B$.
Since $B$ is c.e., $B = W_n$ for some $n$. Define a new partial computable function
$\Psi(x) = \begin{cases}
0 & \quad \text{if }\Phi_n(f(x)) \downarrow \\
\uparrow & \quad \text{otherwise}
\end{cases}$
Since $\Psi$ is partial computable, it has a index $p$. That is, $\Psi = \Phi_p$. Then $A = W_p$ since $x \in A$ if and only if $f(x) \in B$ if and only if $\Phi_n(f(x)) \downarrow$ if and only if $\Phi_p(x) \downarrow$. $A$ is c.e.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/389342",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Prove $L=\{ 1^n| n\hspace{2mm}\text{is a prime number} \}$ is not regular. Prove $L=\{ 1^n| n\hspace{2mm}\text{is a prime number} \}$ is not regular.
It seems to use one Lemma: Pumping Lemma.
| In addition:
Instead of the Pumping lemma one can use the following fact: $L$ is regular iff it is a union of $\lambda$-classes for some left congruence $\lambda$ on the free monoid $A^*$ such that $|A^*/\lambda|<\infty$. Here $A=\{a\}$ (in your notation $a=1$), so $A^*$ is commutative and $\lambda$ is two-sided. The structure of the factor-monoid $A^*/\lambda =\langle {\bar a}\rangle$ is well-known -- it is defined by a relation ${\bar a}^{n+r}={\bar a^n}$. Therefore every $\lambda$-class is either a one-element set $\{a^k\}$ for $k< n$ or has the form $\{a^k,a^{k+r},a^{k+2r},\ldots\}$ for $k\ge n$. Since $L$ contains an infinite $\lambda$-class, we get a contradiction.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/389421",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Weaker/Stronger Topologies and Compact/Hausdorff Spaces In my topology lecture notes, I have written:
"By considering the identity map between different spaces with the same underlying set, it follows that for a compact, Hausdorff space:
$\bullet$ any weaker topology is compact, but not Hausdorff
$\bullet$ any stronger topology is Hausdorff, but not compact "
However, I'm struggling to see why this is. Can anyone shed some light on this?
| Hint. $X$ being a set, a topology $\tau$ is weaker than a topology $\sigma$ on $X$ if and only if the application
$$ (X, \sigma) \to (X,\tau), x \mapsto x $$
is continuous.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/389485",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10",
"answer_count": 2,
"answer_id": 1
} |
$\int_0^1 \frac{{f}(x)}{x^p} $ exists and finite $\implies f(0) = 0 $ Need some help with this question please.
Let $f$ be a continuous function and
let the improper inegral
$$\int_0^1 \frac{{f}(x)}{x^p} $$
exist and be finite for any $ p \geq 1 $.
I need to prove that
$$f(0) = 0 $$
In this question, I really wanted to somehow use integration by parts and/or the Fundamental Theorem of Calculus.
Or maybe even use Lagrange's mean value theorem,
but I couldn't find a way to set it up.
I'll really appreciate your help on proving this.
| Hint: Let
$$
g(t) = \int_t^1 \frac{f(x)}{x^p}dx
$$
Now investigate properties of $g(t)$ around $t=0$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/389527",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 2
} |
Even weighted codewords and puncturing My question is below:
Prove that if a binary $(n,M,d)$-code exists for which $d$ is even, then a binary $(n,M,d)$-code exists for which each codeword has even weight.
(Hint: Do some puncturing and extending.)
| Breaking this down to individual steps. Assume that $d=2t$ is an even integer.
Assume that an $(n,M,d)$ code $C$ exists.
*
*Show that puncturing the last bit from the words of $C$, you get a code $C'$ with parameters $(n-1,M,d')$, where $d'\ge d-1$. Actually we could puncture any bit, but let's be specific. Also we fully expect to have $d'=d-1$, but can't tell for sure, and don't care.
*Let us append each word of $C'$ by adding an extra bit chosen in such a way that the weight of the word is even. Call the resulting code $C^*$. Show that $C^*$ has parameters $(n,M,d^*)$, where $d^*\ge d'\ge 2t-1.$ Observe that all the words of $C^*$ have an even weight.
*Show that the minimum distance of $C^*$ must be an even number. Conclude that $d^*\ge d$.
Observe that we did not assume linearity at any step.
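Here is a small sketch of the puncture-and-extend construction on a toy $(4,4,2)$ code (the particular code is just an illustration, not part of the original argument):

```python
from itertools import combinations

def min_distance(code):
    return min(sum(a != b for a, b in zip(u, v)) for u, v in combinations(code, 2))

C = [(0,0,0,0), (1,1,0,0), (0,0,1,1), (1,1,1,1)]   # a (4, 4, 2) code, d = 2 is even

C_punct = [w[:-1] for w in C]                       # puncture the last coordinate
C_star  = [w + (sum(w) % 2,) for w in C_punct]      # extend with an overall parity bit

print(min_distance(C), min_distance(C_punct), min_distance(C_star))  # 2 1 2
print(all(sum(w) % 2 == 0 for w in C_star))                          # True: all weights even
```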
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/389573",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Introduction to Abstract Harmonic Analysis for undergraduate background I'm looking for a good starting book on the subject which only assumes standard undergraduate background.
In particular, I need to gain some confidence working with properties of Haar measures, so I can better understand the spaces $L^{p}(G)$ for a locally compact group $G$.
For some perspective on what I currently know, I can't yet solve this problem: Verifying Convolution Identities
Something with plenty of exercises would be ideal.
| I suggest the short book by Robert, "Introduction to the Representation Theory of Compact and Locally Compact Groups" which is leisurely and has plenty of exercises. The only prerequisite of this book is some familiarity of finite dimensional representations.
A second book you should look at is Folland's "A Course in Abstract Harmonic Analysis", which is more advanced, and requires more experience with analysis (having seen Banach spaces is not a bad thing), but the advantage of this book is that it has very clearly written proofs that are easy to follow (I do algebra mostly, and I find many analysis tracts a bit opaque in this regard). Unfortunately, this book does not have exercises, and should be approached once you have plenty of examples in mind.
Donald Cohn's "measure theory" has a large number of exercises on the basics of topological groups and Haar measure, but it doesn't do representation theory or much else on locally compact groups except an introduction.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/389665",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 0
} |
$d\mid p\!-\!1\Rightarrow x^d-1\pmod{\!p^2}$ has exactly $d$ roots for odd prime $p$ I'm trying to figure out the number of solutions to the congruence equation $x^d \equiv1 \pmod{p^2}$ where $p$ is prime and $d\mid p-1$.
For the congruence equation ${x^d}\equiv1 \pmod p$ where $p$ is prime and $d\mid p-1$ I've shown that there are exactly $d$ solutions modulo $p$.
I'm trying to use the above result to extend it to the higher power. I'm aware of something called Hensel's Lemma which says if a polynomial has a simple root modulo a prime $p$, then this root corresponds to a unique root of the same equation modulo any higher power of $p$. We can 'lift' the root iteratively up to higher powers.
I'm unsure exactly how this process works. All I have to start with is that I assume a solution of the form $m+np$ solves the 'base' congruence and I'm trying to somehow extend it to $p^2$.
Any help would be appreciated.
Thanks.
| Let $p$ be an odd prime, and work modulo $p^k$ (so $k$ does not have to be $2$). Then if $d$ divides $p-1$, the congruence $x^d\equiv 1\pmod{p^k}$ has exactly $d$ solutions.
To prove this, we use the fact that there is a primitive root $g$ of $p^k$, that is, a generator of the group of invertibles modulo $p^k$. Then for $1\le n\le (p-1)p^{k-1}$, we have $(g^n)^d\equiv 1\pmod{p^k}$ if and only if $nd$ is a multiple of $(p-1)p^{k-1}$. There are $d$ different such $n$, namely $i\frac{p-1}{d}p^{k-1}$ for $i=1,2,\dots, d$.
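A brute-force sketch that counts the solutions for a small example, say $p=7$ and $d=3$ (any odd prime and divisor of $p-1$ would do):

```python
def count_roots(d, p, k):
    m = p**k
    return sum(pow(x, d, m) == 1 for x in range(1, m))

for k in (1, 2, 3):
    print(k, count_roots(3, 7, k))   # exactly 3 solutions modulo 7, 49 and 343
```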
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/389710",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 3,
"answer_id": 1
} |
Arc length of logarithm function I need to find the length of $y = \ln(x)$ (natural logarithm) from $x=\sqrt3$ to $x=\sqrt8$.
So, if I am not mistake, the length should be $$\int^\sqrt8_\sqrt3\sqrt{1+\frac{1}{x^2}}dx$$
I am having trouble calculating the integral. I tried to do substitution, but I still fail to think of a way to integrate it. This is what I have done so far:
$$\sqrt{1+\frac{1}{x^2}}=u-\frac{1}{x}$$
$$x=\frac{2u}{u-1}$$
$$dx=\frac{2}{(u-1)^2}du$$
$$\sqrt{1+\frac{1}{x^2}}=u-\frac{1}{x}=u-\frac{1}{2}+\frac{1}{2u}$$
$$\int\sqrt{1+\frac{1}{x^2}}dx=2\int\frac{u-\frac{1}{2}+\frac{1}{2u}}{(u-1)^2}du$$
And I am stuck.
| $$\int^{\sqrt{8}}_{\sqrt{3}}\sqrt{1+\frac{1}{x^{2}}}\,dx=\int^{\sqrt{8}}_{\sqrt{3}}\frac{\sqrt{1+x^{2}}}{x}\,dx=\int^{\sqrt{8}}_{\sqrt{3}}\frac{1+x^{2}}{x\sqrt{1+x^{2}}}\,dx=\int^{\sqrt{8}}_{\sqrt{3}}\frac{x}{\sqrt{1+x^{2}}}\,dx+\int^{\sqrt{8}}_{\sqrt{3}}\frac{dx}{x\sqrt{1+x^{2}}}$$
$$=\frac{1}{2}\int^{\sqrt{8}}_{\sqrt{3}}\frac{(1+x^{2})'}{\sqrt{1+x^{2}}}\,dx-\int^{\sqrt{8}}_{\sqrt{3}}\frac{\left(\frac{1}{x}\right)'}{\sqrt{1+\frac{1}{x^{2}}}}\,dx=\sqrt{1+x^{2}}\,\Big|^{\sqrt{8}}_{\sqrt{3}}-\ln\left(\frac{1}{x}+\sqrt{1+\frac{1}{x^{2}}}\right)\Big|^{\sqrt{8}}_{\sqrt{3}}=1+\frac{1}{2}\ln\frac{3}{2}$$
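A numerical cross-check of the value (a sketch using scipy's quadrature):

```python
import numpy as np
from scipy.integrate import quad

val, _ = quad(lambda x: np.sqrt(1 + 1/x**2), np.sqrt(3), np.sqrt(8))
print(val, 1 + 0.5*np.log(1.5))   # both are about 1.2027
```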
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/389781",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 3,
"answer_id": 2
} |
Order of this group Given $k\in{\mathbb{N}}$, we denote by $\Gamma _2(p^k)$ the multiplicative group of all matrices $\begin{bmatrix}{a}&{b}\\{c}&{d}\end{bmatrix}$ with $a,b,c,d\in\mathbb{Z}$, $ad-bc = 1$, $a$ and $d$ equal to $1$ modulo $p^k$, and $b$ and $c$ multiples of $p^k$.
How can I show that $|\Gamma _2(p)/\Gamma _2(p^k)|\le p^{4k}$?
| By exhibiting a set of at most $N=p^{4k}$ matrices $A_1,\ldots,A_N\in\Gamma_2(p)$ such that for each $B\in\Gamma_2(p)$ we have $A_iB\in\Gamma_2(p^k)$ for some $i$.
Matrices of the form $A_i=I+pM_i$ suggest themselves.
Or much simpler: by counting how many matrices in $\Gamma_2(p)$ and $\Gamma_2(p^k)$ map to the same element of $SL_2(\mathbb Z/p^k\mathbb Z)$ under the canonical projection.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/389845",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
} |
How do I see that $F$ is a vector field defined on all of $\mathbb{R}^3$? $$\vec{F}(x,y,z)= y^3z^3\mathbf{i} + 2xyz^3\mathbf{j} + 3xy^2z^2\mathbf{k}$$
How do I see that $F$ is a vector field defined on all of $\mathbb{R}^3$? And then is there an easy way to check if it has continuous partial derivatives?
I am looking at a theorem and it states, If F is a vector field defined on all of R^3 whose component functions have continuous partial derivatives and curl F = 0, then F is a conservative vector field.
I don't know how to check those conditions. Could someone show with that problem given?
| The vector field $F$ is defined on all of $\mathbb{R}^3$ because all of its component functions are (there are no points where the functions are undefined, i.e., they make sense when plugging in any point of $\mathbb{R}^3$ into them). If say one of the component functions was $\frac{1}{x-y}$ then the vector field wouldn't be defined along the plane $x=y$. Moreover the component functions are continuous, as every polynomial function is continuous. To calculate the curl of the vector field just use the definition of curl, which involves just computing partial derivatives.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/389896",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Artinian rings are perfect Definition. A ring is called perfect if every flat module is projective.
Is there a simple way to prove that an Artinian ring is perfect (in the commutative case)?
| The local case is proved here, Lemma 10.97.2, and then extend the result to the non-local case by using that an artinian ring is isomorphic to a finite product of artinian local rings and a module $M$ over a finite product of rings $R_1\times\cdots\times R_n$ has the form $M_1\times\cdots\times M_n$ with $M_i$ an $R_i$-module.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/389965",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 1,
"answer_id": 0
} |
For any $11$-vertex graph $G$, show that $G$ and $\overline{G}$ cannot both be planar Let $G$ be a graph with 11 vertices. Prove that $G$ or $\overline{G}$ must be nonplanar.
This question was given as extra study material but a little stuck. Any intuitive explanation would be great!
| It seems the following. Euler formula implies that $E\le 3V-6$ for each planar graph. If both $G$ and $\bar G$ are planar, then $55=|E(K_{11})|\le 6|V(K_{11})|-12=54$, a contradiction.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/390039",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 1,
"answer_id": 0
} |
Total no. of ordered pairs $(x,y)$ in $x^2-y^2=2013$ Total no. of ordered pairs $(x,y)$ which satisfy $x^2-y^2=2013$
My try: $(x-y)(x+y) = 3 \times 11 \times 61$
If we calculate for positive integers, then $(x-y)(x+y)=1\times 2013 = 3\times 671=11\times 183=61\times 33$
my question is there is any better method for solving the given question.
thanks
| You can solve this pretty quickly, since you essentially need to solve a bunch of linear systems. One of them is e.g.
$$x-y = 3\times 11$$
$$x+y = 61$$
Just compute the inverse matrix of
$$\left[\begin{array}{cc}
1 & -1\\
1 & 1
\end{array}\right]$$
and multiply that with the vectors corresponding to the different combinations of factors.
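For completeness, a short enumeration of the factor pairs and the resulting solutions (a sketch; the question does not say whether negative $x$, $y$ are allowed, in which case each positive solution gives four sign choices):

```python
n = 2013
pos = []
for a in range(1, int(n**0.5) + 1):
    if n % a == 0:
        b = n // a
        if (a + b) % 2 == 0:               # x - y = a, x + y = b need the same parity
            pos.append(((a + b)//2, (b - a)//2))

print(pos)          # [(1007, 1006), (337, 334), (97, 86), (47, 14)]
print(len(pos))     # 4 solutions in positive integers
```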
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/390101",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Bernoulli differential equation help? We have the equation $$3xy' -2y=\frac{x^3}{y^2}$$ It is a type of Bernoulli differential equation. So, since the Bernoulli differential equation type is
$$y'+P(x)y=Q(x)y^n$$
I modify it a little to:
$$y'- \frac{2y}{3x} = \frac{x^2}{3y^2}$$
$$y'-\frac{2y}{3x}=\frac{1}{3}x^2y^{-2}$$
Now I divide both sides by $y^{-2}$. What should I do now?
| $$\text{We have $3xy^2 y'-2y^3 = x^3 \implies x (y^3)' - 2y^3 = x^3 \implies \dfrac{(y^3)'}{x^2} + y^3 \times \left(-\dfrac2{x^3}\right) = 1$}$$
$$\text{Now note that }\left(\dfrac1{x^2}\right)' = -\dfrac2{x^3}. \text{ Hence, we have }\dfrac{d}{dx}\left(\dfrac{y^3}{x^2}\right) = 1\implies \dfrac{y^3}{x^2} = x + c$$
$$\text{Hence, the solution to the differential equation is }\boxed{\color{blue}{y^3 = x^3 + cx^2}}$$
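A quick symbolic check of the boxed solution (a sketch with sympy; the positivity assumptions are only there to keep the cube root unambiguous):

```python
import sympy as sp

x, c = sp.symbols('x c', positive=True)
y = (x**3 + c*x**2)**sp.Rational(1, 3)   # the claimed solution, y^3 = x^3 + c*x^2

residual = 3*x*y**2*sp.diff(y, x) - 2*y**3 - x**3
print(sp.simplify(residual))             # 0
```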
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/390162",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Find quotient space on $\mathbb{N} $ On $\mathbb{N}$ is given equivalence relation R with $nRm \iff 4|n-m$. Topology on $\mathbb{N}$ is defined with $\tau=\{\emptyset\}\cup\{U\subseteq\mathbb{N}|n\in U \wedge m|n \implies m\in U\}$.
I need to find quotient space $(\mathbb{N}/R,\tau_{R})$.
I have solution: $\tau_{R}=\{\emptyset,\mathbb{N}/R,\{[1],[2],[3]\}\}$ where $\mathbb{N}/R=\{[1],[2],[3],[4]\}$.
But I have no idea how to prove that $p^{-1}[\{[1],[2],[3]\}]=\cup_{k\in \mathbb{N}_0}\{4k+1,4k+2,4k+3\}$, where $p$ is the quotient mapping, contains all divisors of its elements.
(For the other sets it's easy to find an element whose divisor is not in the set.)
| Note that $p^{-1}(\{[1],[2],[3]\}) = \mathbb{N}\backslash 4\mathbb{N}$.
We need to prove that $n\in\mathbb{N}\backslash 4\mathbb{N}$ and $m|n$ implies $m\in\mathbb{N}\backslash 4\mathbb{N}$ and we do this by contraposition.
Suppose $m\notin\mathbb{N}\backslash 4\mathbb{N}$, then $m = 4k$ and thus $m|n$ implies $n = pm = 4pk$, hence $n\notin\mathbb{N}\backslash 4\mathbb{N}$ concluding the argument.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/390250",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Composition of $\mathrm H^p$ function with Möbius transform Let $f:\mathbb D\rightarrow \mathbb C$ be a function in $\mathrm{H}^p$, i.e. $$\exists M>0,\text{ such that }\int_0^{2\pi}|f(re^{it})|^pdt\leq M<\infty,\forall r\in[0,1)$$
Consider a Möbius transform of the disk $\varphi :\mathbb D\rightarrow\mathbb D$, which generally does not fix $0$.
(or more generally consider a holomorphic function $g:\mathbb D \rightarrow \mathbb D$)
Is the composition $f\circ\varphi $ in $\mathrm H^p$?
If $\varphi $ fixes $0$ then we can apply Littlewood subordination theorem and derive the result. However, what can we say if $\varphi$ does not fix zero? Does it hold $f\circ\varphi $ in $\mathrm H^p$, or is there a counterexample?
| Yes. If $\varphi$ is a holomorphic map of unit disk into itself, the composition operator $f\mapsto f\circ \varphi$ is bounded on $H^p$. In fact,
$$\|f\circ \varphi\|_{H^p}\le \left(\frac{1+|\varphi(0)|}{1-|\varphi(0)|}\right)^{1/p}\|f \|_{H^p} \tag1$$
Original source:
John V. Ryff, Subordinate $H^p$ functions, Duke Math. J. Vol. 33, no. 2 (1966), 347-354.
Sketch of proof. The function $|f|^p$ is subharmonic and has uniformly bounded circular means (i.e., mean values on $|z|=r$, $r<1$). Let $u$ be its smallest harmonic majorant in $\mathbb D$. Then $u\circ \varphi$ is a harmonic majorant of $|f\circ \varphi|^p$. This implies that the circular means of $|f\circ \varphi|^p$ do not exceed $u(\varphi(0))$. In particular, $f\circ \varphi\in H^p$. Furthermore, the Harnack inequality for the disk yields $$u(\varphi(0))\le u(0)\frac{1+|\varphi(0)|}{1-|\varphi(0)|}$$ proving (1).
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/390378",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
What is the meaning of "independent events " and how can we logically conclude independence of two events in probability?
What is the meaning of "independent events " in probability
For e.g.: two events (say A and B) are independent; what I understand is that the occurrence of A is not affected by the occurrence of B. But I am not comfortable with this understanding: thinking this way, every pair of events I meet seems independent! Does there exist a more mathematical definition? (I don't want the formula associated with it.)
Another thing I want to know: in a real case or a problem, how do we
understand (logically) whether two events are independent?
That is without verifying that $P(A\cap B) = P(A)P(B) \Large\color{blue}{\star}$ how do we conclude that two events $A$ and $B$ are independent or dependent?.
Take for eg an example by "Tim"
Suppose you throw a die. The probability you throw a six (event $A$) is $\frac16$ and the
probability you throw an even number (event $B$) is $\frac12$.
And an event $C$ such that $A$ and $B$ both happen would mean $A$ happens (as here $A$ is a subset of $B$), hence its probability is also $\frac16$.
My thought:
The above example suggest something like this "Suppose I define two events $A$ and $B$ and if one of the even is a subset of the other then the events are not independent". Of course such an argument is not a sufficient condition for dependency , for eg: consider an event $A$ throwing a dice and getting an odd number and event $B$ getting 6. They aren't independent ($\Large\color{blue}{\star}$ isn't satisfied) . So again I improve my suggestion "Suppose I define two events $A$ and $B$ and if one of them is a subset of the other or their intersection is a null-set then the events are not independent $\Large\color{red}{\star}$"
So at last is $\Large\color{red}{\star}$ a sufficient condition?. Or does there exist a suffcient condition (other than satisfying the formulas $\Large\color{blue}{\star}$? And what is the proof, I can't prove my statement as the idea for me is not that much mathematical .
| I think you are asking why we need the notion of independent events at all when there seems to be no relation between them!
The notion of independent events helps in calculating the probabilities of combined events such as $P(A\cup B)$ and $P(A\cap B)$: for independent events these can be computed directly from $P(A)$ and $P(B)$, in contrast to the calculation for dependent events.
Independent events: the probability of one occurring does not depend on whether the other occurred.
Dependent events: the calculation of the probabilities depends on the other event.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/390432",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 4,
"answer_id": 3
} |
on the commutator subgroup of a special group Let $G'$ be the commutator subgroup of a group $G$ and $G^*=\langle g^{-1}\alpha(g)\mid g\in G, \alpha\in Aut(G)\rangle$.
We know that always $G'\leq G^*$.
It is clear that if $Inn(G)=Aut(G)$, then $G'=G^*$.
Also if $G$ is a non abelian simple group or perfect group, then $G'=G^*=G$.
Now, does there exist a group such that $Inn(G)\neq Aut(G)$ and $G'=G^*\neq G$?
Thank you
| Serkan's answer is part of a more general family and a more general idea.
The more general idea is to include non-inner automorphisms that create no new subgroup fusion. The easiest way to do this is with power automorphisms, and the simplest examples of those are automorphisms that raise a single generator to a power.
The more general family is parameterized by pairs $(n,d)$ where $n$ is a positive integer and $1 \neq d \neq n-1$, but $d$ divides $n-1$. Let $A=\operatorname{AGL}(1,\newcommand{\znz}{\mathbb{Z}/n\mathbb{Z}}\znz)$ consist of the affine transformations of the line over $\znz$. The derived subgroup $D=A'$ consists of the translations. Choose some element $\zeta$ of multiplicative order $d$ mod $n$, and let $G$ be the subgroup generated by multiplication by $\zeta$ and by the translations. Explicitly:
$$\begin{array}{rcl}
A &=& \left\{ \begin{bmatrix} \alpha & \beta \\ 0 & 1 \end{bmatrix} : \alpha \in \znz^\times, \beta \in \znz \right\} \\
D &=& \left\{ \begin{bmatrix} 1 & \beta \\ 0 & 1 \end{bmatrix} : \beta \in \znz \right\} \\
G &=& \left\{ \begin{bmatrix} \zeta^i & \beta \\ 0 & 1 \end{bmatrix} : 0 \leq i < d, \beta \in \znz \right\} \\
\end{array}$$
For example, take $n=5$ and $d=2$, to get $D$ is a cyclic group of order 5, $G$ is dihedral of order 10, and $A$ is isomorphic to the normalizer of a Sylow 5-subgroup ($D$) in $S_5$.
Notice that $Z(G)=1$ since $d\neq 1$, so $G \cong \operatorname{Inn}(G)$. We'll show $A \cong \operatorname{Aut}(G)$ so that $D=G'=G^*$ as requested. Since $d \neq n-1$, $A \neq G$.
Notice that $D=G'$ (not just $D=A'$) so $D$ is characteristic in $G$, so automorphisms of $G$ restrict to automorphisms of the cyclic group $D$. Let $f$ be an automorphism of $G$, and choose $\beta, \bar\alpha, \bar\beta \in \znz$ so that
$$\begin{array}{rcl}
f\left( \begin{bmatrix} 1 & 1 \\0 & 1 \end{bmatrix} \right)
&=& \begin{bmatrix} 1 & \beta \\ 0 & 1 \end{bmatrix} \\
f\left( \begin{bmatrix} \zeta & 0 \\0 & 1 \end{bmatrix} \right)
&=& \begin{bmatrix} \bar\alpha & \bar\beta \\ 0 & 1 \end{bmatrix}
\end{array}$$
Notice that $f$ is determined by these numbers since $G$ is generated by those two matrices, and that $\beta, \bar\alpha \in \znz^\times$. Now
$$
\begin{bmatrix} \zeta & 0 \\0 & 1 \end{bmatrix} \cdot
\begin{bmatrix} 1 & 1 \\0 & 1 \end{bmatrix} \cdot
{\begin{bmatrix} \zeta & 0 \\0 & 1 \end{bmatrix}}^{-1}
= \begin{bmatrix} 1 & \zeta \\0 & 1 \end{bmatrix}
$$
so applying $f$ we get
$$
\begin{bmatrix} 1 & \bar\alpha\cdot\beta \\0 & 1 \end{bmatrix}
=
\begin{bmatrix} \bar\alpha & \bar\beta \\0 & 1 \end{bmatrix} \cdot
\begin{bmatrix} 1 & \beta \\0 & 1 \end{bmatrix} \cdot
{\begin{bmatrix} \bar\alpha & \bar\beta \\0 & 1 \end{bmatrix}}^{-1}
= \begin{bmatrix} 1 & \zeta\cdot\beta \\0 & 1 \end{bmatrix}
$$
hence $\bar\alpha \beta = \zeta\beta$ and since $\beta \in \znz^\times$ (lest $D \cap \ker(f) \neq 1$), we get $\bar\alpha=\zeta$. Hence every automorphism of $G$ is conjugation by an element of $A$, namely $$\bar f = \begin{bmatrix} 1/\beta & \bar\beta/(\beta(\zeta-1)) \\ 0 & 1 \end{bmatrix} \in A. $$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/390478",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 3,
"answer_id": 2
} |
Newbie vector spaces question So browsing the tasks our prof gave us to test our skills before the June finals, I've encountered something like this:
"Prove that the kernel and image are subspaces of the space V: $\ker(f) < V, \operatorname{im}(f) < V$, where $<$ means a subspace."
Is it just me, or is there something wrong with the problem? I mean: I was rewriting the tasks from the blackboard by hand, so I may have made a mistake. Is a problem like this solvable, or did I mess up and should rather look for the task description from someone else? Because for the time being, I don't see anything to prove here, since we don't know what V is, right?
| Let $f$ be a linear transformation from $V$ to $V$. So what is the kernel of $f$? Indeed, it is $$\ker(f)=\{v\in V\mid f(v)=0_V\}$$ It is obvious that $\ker(f)\subseteq V$. Now take $a,b\in F$, the field associated to $V$, and let $v,w\in\ker(f)$. We have $$f(av+bw)=f(av)+f(bw)=af(v)+bf(w)= 0+0=0$$ So the subset $\ker(f)$ is closed under linear combinations (and contains $0_V$), hence it is a subspace of $V$. Argue the same way for the other subset, $\operatorname{im}(f)$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/390553",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Show that $\exp: \mathfrak{sl}(n,\mathbb R)\to \operatorname{SL}(n,\mathbb R)$ is not surjective It is well known that for $n=2$, this holds. The polar decomposition provides the topology of $\operatorname{SL}(n,\mathbb R)$ as the product of symmetric matrices and orthogonal matrices, which can be written as the product of exponentials of skew symmetric and symmetric traceless matrices. However I could not find out the proof that $\exp: \mathfrak{sl}(n,\mathbb R)\to\operatorname{SL}(n,\mathbb R)$ is not surjective for $n\geq 3$.
| Over $\mathbb{R}$, for a general $n$, a real matrix has a real logarithm if and only if it is nonsingular and in its (complex) Jordan normal form, every Jordan block corresponding to a negative eigenvalue occurs an even number of times. So, you may verify that $\pmatrix{-1&1\\ 0&-1}$ (as given by rschwieb's answer) and $\operatorname{diag}(-2,-\frac12,1,\ldots,1)$ (as given by Gokler's answer) are not matrix exponentials of a real traceless matrix.
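A numerical illustration (not a proof) for the $2\times 2$ example: scipy's principal matrix logarithm of this matrix is genuinely complex, consistent with the non-existence of a real logarithm.

```python
import numpy as np
from scipy.linalg import logm, expm

A = np.array([[-1., 1.],
              [ 0., -1.]])
L = logm(A)                        # principal logarithm; scipy may warn it is approximate
print(np.max(np.abs(L.imag)))      # nonzero imaginary part (about pi on the diagonal)
print(np.allclose(expm(L), A))     # True: it still exponentiates back to A
```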
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/390641",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 0
} |
Determine the number of elements of order 2 in AR
So I have completed parts a and b. For b I reduced R to Smith normal form and ended up with diagonal entries 1, 2, 6. From this I have said that the structure of the group is $Z_2 \oplus Z_6 \oplus Z$. But I have no idea whatsoever about part c.
| As stated in the comments, let us focus on $\,\Bbb Z_2\times\Bbb Z_6\,$ , which for simplicity (for me, at least) we'll better write multiplicatively as $\,C_2\times C_6=\langle a\rangle\times\langle b\rangle\,\;,\;\;a^2=b^6=1$
Suppose the first coordinate is $\,1\,$ , then the second one has to have order $\,2\,$ , and the only option is $\,(1,b^3)\,$ , so we go over elements with non-trivial first coordinate, and thus the second coordinate has to have order dividing two:
$$(a,1)\;,\;\;(a,b^3)$$
and that seems to be pretty much all there is: three involutions as any such one either has first coordinate trivial or not...
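A tiny brute-force check of the count (written additively, so the three involutions below correspond to $(1,b^3)$, $(a,1)$ and $(a,b^3)$):

```python
from itertools import product

mods = (2, 6)
order2 = [e for e in product(*(range(m) for m in mods))
          if e != (0, 0) and all(2*c % m == 0 for c, m in zip(e, mods))]
print(order2)   # [(0, 3), (1, 0), (1, 3)] -> exactly three elements of order 2
```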
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/390725",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Closed form for $\sum_{n=1}^\infty\frac{1}{2^n\left(1+\sqrt[2^n]{2}\right)}$ Here is another infinite sum I need you help with:
$$\sum_{n=1}^\infty\frac{1}{2^n\left(1+\sqrt[2^n]{2}\right)}.$$
I was told it could be represented in terms of elementary functions and integers.
| Notice that
$$
\frac1{2^n(\sqrt[2^n]{2}-1)}
-\frac1{2^n(\sqrt[2^n]{2}+1)}
=\frac1{2^{n-1}(\sqrt[2^{n-1}]{2}-1)}
$$
We can rearrange this to
$$
\left(\frac1{2^n(\sqrt[2^n]{2}-1)}-1\right)
=\frac1{2^n(\sqrt[2^n]{2}+1)}
+\left(\frac1{2^{n-1}(\sqrt[2^{n-1}]{2}-1)}-1\right)
$$
and for $n=1$,
$$
\frac1{2^{n-1}(\sqrt[2^{n-1}]{2}-1)}-1=0
$$
therefore, the partial sum is
$$
\sum_{n=1}^m\frac1{2^n(\sqrt[2^n]{2}+1)}
=\frac1{2^m(\sqrt[2^m]{2}-1)}-1
$$
Taking the limit as $m\to\infty$, we get
$$
\sum_{n=1}^\infty\frac1{2^n(\sqrt[2^n]{2}+1)}
=\frac1{\log(2)}-1
$$
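A numerical check of the closed form (a sketch; fifty terms are far more than enough for double precision):

```python
import math

s = sum(1 / (2**n * (1 + 2**(1 / 2**n))) for n in range(1, 50))
print(s, 1/math.log(2) - 1)   # both are about 0.442695
```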
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/390801",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "34",
"answer_count": 2,
"answer_id": 1
} |
Extending transvections/generating the symplectic group The context is showing that the symplectic group is generated by symplectic transvections.
At the very bottom of http://www-math.mit.edu/~dav/sympgen.pdf it is stated that any transvection on the orthogonal space to a hyperbolic plane (a plane generated by $u,v$ such that $(u,v)=1$ with respect to the bilinear form) can be extended to a transvection on the whole space with the plane contained in its fixed set.
Is there an easy way to see why this is true? If not, does anyone have a reference/solution?
Thanks.
| Assume that $V$ is the whole space, of dimension $2n$. Then the orthogonal space you mentioned (let it be $W$) is a $(2n-2)$-dimensional symplectic vector space (it is very easy to check the conditions). Then take another space spanned by $v$ and $w$ such that $v$, $w$ $\in W$ and $\omega(v,w)=1$. The new space orthogonal to this space is a symplectic vector space of dimension $2n-4$. You know the rest.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/390853",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Calculus II, Curve length question. Find the length of the curve
$x= \int_0^y\sqrt{\sec ^4(3 t)-1}dt, \quad 0\le y\le 9$
A bit stumped, without the 'y' in the upper limit it'd make a lot more sense to me.
Advice or solutions with explanation would be very appreciated.
| $$\frac{dx}{dy} = \sqrt{\sec^4{3 y}-1}$$
Arc length is then
$$\begin{align}\int_0^9 dy \sqrt{1+\left ( \frac{dx}{dy} \right )^2} &= \int_0^9 dy\, \sec^2{3 y} \\ &= \frac13 \tan{27} \end{align}$$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/390930",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
What are some relationships between a matrix and its transpose? All I can think of are that
If symmetric, they're equivalent
If A is orthogonal, then its transpose is equivalent to its inverse.
They have the same rank and determinant.
Is there any relationship between their images/kernels or even eigenvalues?
| Fix a ring $R$, and let $A \in M_n(R)$. The characteristic polynomial for $A$ is $$\chi_A(x)=\det (xI-A),$$
so that $\chi_{A^T}(x) = \det (x I -A^T)= \det ((xI-A)^T)=\det(xI-A)$. Since the eigenvalues of $A$ and $A^T$ are the roots of their respective characteristic polynomials, $A$ and $A^T$ have the same eigenvalues. (Moreover, they have the same characteristic polynomial.)
It follows that $A$ and $A^T$ have the same minimal polynomial as well: for any polynomial $p$ we have $p(A^T)=p(A)^T$, so $p(A)=0$ if and only if $p(A^T)=0$, and hence the two minimal polynomials divide each other.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/391035",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 3,
"answer_id": 1
} |
Infinitely many primes of the form $4n+3$ I've found at least 3 other posts$^*$ regarding this theorem, but the posts don't address the issues that I have.
Below is a proof that for infinitely many primes of the form $4n+3$, there's a few questions I have in the proof which I'll mark accordingly.
Proof: Suppose there were only finitely many primes $p_1,\dots, p_k$, which are of the form $4n+3$. Let $N = 4p_1\cdots p_k - 1$. This number is of the form $4n+3$ and is also not prime as it is larger than all the possible primes of the same form. Therefore, it is divisible by a prime $ \color{green}{ \text{(How did they get to this conclusion?)}}$. However, none of the $p_1,\dots, p_k$ divide $N$. So every prime which divides $N$ must be of the form $4n+1$ $ \color{green}{ \text{(Why must it be of this form?)}}$. But notice any two numbers of the form $4n+1$ form a product of the same form, which contradicts the definition of $N$. Contradiction. $\square$
Then as a follow-up question, the text asks "Why does a proof of this flavor fail for primes of the form $4n+1$? $ \color{green}{ \text{(This is my last question.)}}$
$^*$One involves congruences, which I haven't learned yet. The other is a solution-verification type question. The last one makes use of a lemma that is actually one of my questions, but wasn't a question in that post.
|
Therefore, it is divisible by a prime (How did they get to this conclusion?).
All integers are divisible by some prime!
So every prime which divides N must be of the form 4n+1 (Why must it be of this form?).
Because we've assumed that $p_1, \dots, p_k$ are the only primes of the form 4n+3. If none of those divide N, and 2 doesn't divide N, then all its prime factors must be of the form 4n+1.
"Why does a proof of this flavor fail for primes of the form 4n+1? (This is my last question.)
Can you do this yourself now? (Do you understand how the contradiction works in the proof you have? What happens if you multiply together two numbers of the form 4n+3?)
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/391096",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "36",
"answer_count": 4,
"answer_id": 0
} |
generalized eigenvector for 3x3 matrix with 1 eigenvalue, 2 eigenvectors
I am trying to find a generalized eigenvector in this problem. (I understand the general theory goes much deeper, but we are only responsible for a limited number of cases.)
I have found eigenvectors $\vec {u_1}$ and $\vec {u_2}.$
When I try $u_1$ and $u_2$ as $u_3$ into this equation:
$$ (A - I)u_4 = u_3$$
I get systems which are inconsistent.
How can I find the $u_3$? I've been told it has something to do with $(A - I)^3 = 0$, but that's about it.
| We are given the matrix:
$$\begin{bmatrix}2 & 1 & 1\\1 & 2 & 1\\-2 & -2 & -1\\\end{bmatrix}$$
We want to find the characteristic polynomial and eigenvalues by solving
$$|A -\lambda I| = 0 \rightarrow -\lambda^3+3 \lambda^2-3 \lambda+1 = -(\lambda-1)^3 = 0$$
This yields a single eigenvalue, $\lambda = 1$, with an algebraic multiplicity of $3$.
If we try and find eigenvectors, we setup and solve:
$$[A - \lambda I]v_i = 0$$
In this case, after row-reduced-echelon-form, we have:
$$\begin{bmatrix}1 & 1 & 1\\0 & 0 & 0\\0 & 0 & 0\\\end{bmatrix}v_i = 0$$
This leads to the two eigenvectors as he shows, but the problem is that we cannot use that to find the third as we get degenerate results, like you showed.
Instead, let's use the top-down chaining method to find three linearly independent generalized eigenvectors.
Since the RREF of
$$[A - 1 I] = \begin{bmatrix}1 & 1 & 1\\0 & 0 & 0\\0 & 0 & 0\\\end{bmatrix}$$
We have $E_3 = kernel(A - 1I)$ with dimension $= 2$, so there will be two chains.
Next, since
$$[A - 1 I]^2 = \begin{bmatrix}0 & 0 & 0\\0 & 0 & 0\\0 & 0 & 0\\\end{bmatrix}$$
the space Kernel $(A-1I)^2$ has dimension $=3$, which matches the algebraic multiplicity of $\lambda=1$.
Thus, one of the chains will have length $2$, so the other must have length $1$.
We now form a chain of $2$ generalized eigenvectors by choosing $v_2$ in kernel $(A-1I)^2$ such that $v_2$ is not in the kernel $(A-1I)$.
Since every vector is in kernel $(A-1I)^2$, and the first column of $(A-1I)$ is non-zero, we may choose:
$$v_2 = (1, 0, 0) \implies v_1 = (A-1I)v_2 = (1,1,-2)$$
To form a basis for $\mathbb R^3$, we need one additional chain of one generalized eigenvector. This vector must be an eigenvector that is independent from $v_1$. Since
$$E_3 = ~\text{span}~ \left(\begin{bmatrix}0\\1\\-1\\\end{bmatrix}, \begin{bmatrix}-1\\0\\1\\\end{bmatrix}\right).$$
and neither of these spanning vectors is itself a scalar multiple of $v_1$, we may choose either one of them. So let
$$w_1 = (0, 1, -1).$$
Now we have two chains:
$$v_2 \rightarrow v_1 \rightarrow 0$$
$$w_1 \rightarrow 0$$
So, to write the solution, we have:
$\displaystyle 1^{st}$ Chain
$$x_1(t) = e^t \begin{bmatrix}1\\1\\-2\\\end{bmatrix}$$
$$x_2(t) = e^t\left(t \begin{bmatrix}1\\1\\-2\\\end{bmatrix} + \begin{bmatrix}1\\0\\0\\\end{bmatrix}\right)$$
$\displaystyle 2^{nd}$ Chain
$$x_3(t) = e^t \begin{bmatrix}0\\1\\-1\\\end{bmatrix}$$
Thus, the general solution is:
$$x(t) = c_1 x_1(t) + c_2 x_2(t) + c_3 x_3(t)$$
Note, you can take any such linear combination $x(t)$ and verify that it is indeed a solution to $x' = Ax$.
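A small numpy sketch that verifies the chain computations above:

```python
import numpy as np

A = np.array([[ 2.,  1.,  1.],
              [ 1.,  2.,  1.],
              [-2., -2., -1.]])
N = A - np.eye(3)

v2 = np.array([1., 0., 0.])
v1 = N @ v2                         # = (1, 1, -2)
w1 = np.array([0., 1., -1.])

print(np.allclose(N @ N, 0))        # True: (A - I)^2 = 0
print(N @ v1, N @ w1)               # both zero vectors, so v1 and w1 are eigenvectors
print(np.linalg.matrix_rank(np.column_stack([v1, v2, w1])))   # 3: a basis of R^3
```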
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/391144",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Free online mathematical software What are the best free user-friendly alternatives to Mathematica and Maple available online?
I used Magma online calculator a few times for computational algebra issues, and was very much satisfied, even though the calculation time there was limited to $60$ seconds.
Very basic computations can be carried out with Wolfram Alpha. What if one is interested in integer relation detection or integration involving special functions, asymptotic analysis etc?
Thank you in advance.
Added: It would be nice to provide links in the answers so that the page becomes easily usable. I would also very much appreciate a short summary of what a particular software package is suitable/not suitable for. For example, Magma is in my opinion useless for doing the least numerics.
| I use Pari/GP. SAGE includes this as a component too, but I really like GP alone, as it is. In fact, GP comes with integer relation finding functions (as you mentioned) and has enough rational/series symbolic power that I have been able to implement Sister Celine's method for finding recurrence relations among hypergeometric sums in GP with ease.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/391224",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "28",
"answer_count": 9,
"answer_id": 5
} |
Integral of $\int^1_0 \frac{dx}{1+e^{2x}}$ I am trying to solve this integral and I need your suggestions.
I think about taking $1+e^{2x}$ and setting it as $t$, but I don't know how to continue now.
$$\int^1_0 \frac{dx}{1+e^{2x}}$$
Thanks!
| $$\int^{1}_{0}\frac{dx}{1+e^{2x}}=\int^{1}_{0}\frac{e^{-2x}\,dx}{1+e^{-2x}}=-\frac{1}{2}\int^{1}_{0}\frac{(1+e^{-2x})'\,dx}{1+e^{-2x}}=-\frac{1}{2}\ln(1+e^{-2x})\Big|^{1}_{0}=\frac{1}{2}\ln\frac{2e^{2}}{1+e^{2}}$$
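A numerical cross-check (a sketch with scipy):

```python
import numpy as np
from scipy.integrate import quad

val, _ = quad(lambda x: 1/(1 + np.exp(2*x)), 0, 1)
print(val, 0.5*np.log(2*np.e**2/(1 + np.e**2)))   # both are about 0.283
```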
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/391257",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 3,
"answer_id": 2
} |
Zorn's lemma and maximal linearly ordered subsets Let $T$ be a partially ordered set. We say $T$ is a tree if $\forall t\in T$ $\{r\in T\mid r < t\}$ is linearly ordered (such orders can be considered on connected graphs without cycles, i.e. on trees). By a branch we mean a maximal linearly ordered subset in $T$.
It is easy to prove that each tree has a branch using Zorn's lemma. However, the converse is also true (I read it recently in an article). Can anybody give a sketchy proof?
| Here is a nice way of proving the well-ordering principle from "Every tree has a branch":
Let $A$ be an infinite set, and let $\lambda$ be the least ordinal such that there is no injection from $\lambda$ into $A$. Consider the set $A^{<\lambda}$, that is all the functions from ordinals smaller than $\lambda$ into $A$, and order those by end-extension (which is exactly $\subseteq$ if you think about it for a moment).
It is not hard to verify that $(A^{<\lambda},\subseteq)$ is a tree. Therefore it has a branch. Take the union of that branch, and we have a "maximal" function $f$ whose domain is an ordinal $\alpha<\lambda$. Note that it has to be a maximal element, otherwise we can extend it and the branch was no branch.
If the range of $f$ is not all $A$ then there is some $a\in A$ which witnesses that, and we can extend $f\cup\{\langle\alpha,a\rangle\}$ to a strictly larger function. Since $f$ is maximal, it follows it has to be a surjection, and therefore $A$ can be well-ordered.
(Note: If we restrict to the sub-tree of the injective functions we can pull the same trick and end up with a bijection, but a surjection from an ordinal is just as good).
From this we have that every set can be well-ordered, and therefore the axiom of choice holds.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/391336",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 2,
"answer_id": 0
} |
Number of ways to arrange $n$ people in a line I came across this confusing question in Combinatorics.
Given $n \in \mathbb N$. We have $n$ people that are sitting in a row. We mark $a_n$ as the number of ways to rearrange them such that a person can stay in his seat or move one seat to the right or one seat to the left. Calculate $a_n$
This is one of the hardest combinatorics questions I have came across (I'm still mid-way through the course,) so if anyone can give me a direction I'll be grateful!
| HINT: It’s a little more convenient to think of this in terms of permutations: $a_n$ is the number of permutations $p_1 p_2\dots p_n$ of $[n]=\{1,\dots,n\}$ such that $|p_k-k|\le 1$ for each $k\in[n]$. Call such permutations good.
Suppose that $p_1 p_2\dots p_n$ is such a permutation. Clearly one of $p_{n-1}$ and $p_n$ must be $n$.
*
*If $p_n=n$, then $p_1 p_2\dots p_{n-1}$ is a good permutation of $[n-1]$. Moreover, any good permutation of $[n-1]$ can be extended to a good permutation of $[n]$ with $n$ at the end. Thus, there are $a_{n-1}$ good permutations of $[n]$ with $n$ as last element.
*If $p_{n-1}=n$, then $p_n$ must be $n-1$, so $p_1 p_2\dots p_{n-2}p_n$ is a good permutation of $[n-1]$ with $n-1$ as its last element. Conversely, if $q_1 q_2\dots q_{n-1}$ is a good permutation of $[n-1]$ with $q_{n-1}=n-1$, then $q_1 q_2\dots q_{n-2}nq_{n-1}$ is a good permutation of $[n]$ with $n$ as second-last element. How many good permutations of $[n-1]$ are there with $n-1$ as last element?
Alternatively, you could simply work out $a_k$ by hand for $k=0,\dots,4$, say, and see what turns up; you should recognize the numbers that you get.
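A brute-force enumeration for small $n$ makes the pattern from the hint visible (a sketch; positions are $0$-indexed here):

```python
from itertools import permutations

def a(n):
    return sum(all(abs(p[k] - k) <= 1 for k in range(n))
               for p in permutations(range(n)))

print([a(n) for n in range(1, 9)])   # [1, 2, 3, 5, 8, 13, 21, 34]: Fibonacci numbers
```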
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/391399",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Why do we use open intervals in most proofs and definitions? In my class we usually use intervals and balls in many proofs and definitions, but we almost never use closed intervals (for example, in Stokes Theorem, etc). On the other hand, many books use closed intervals.
Why is this preference? What would happen if we substituted "open" by "closed"?
| My guess is that it's because of two related facts.
*
*The advantage of open intervals is that, since every point in the interval has an open neighbourhood within the interval, there are no special points 'at the edge' like in closed intervals, which require being treated differently.
*Lots of definitions rely on the existence of a neighbourhood in their most formal aspect, like differentiability for instance, so key properties within the result may require special formulation at the boundary.
In PDE/functional analysis contexts for example boundaries are very subtle and important objects which are treated separately.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/391462",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10",
"answer_count": 2,
"answer_id": 0
} |
Computing $\int_0^{\pi\over2} \frac{dx}{1+\sin^2(x)}$? How would you compute$$\int_0^{\pi\over2} \frac{dx}{1+\sin^2(x)}\, \, ?$$
| HINT:
$$\int_0^{\pi\over2} \frac{dx}{1+\sin^2(x)}= \int_0^{\pi\over2} \frac{\csc^2xdx}{\csc^2x+1}=\int_0^{\pi\over2} \frac{\csc^2xdx}{\cot^2x+2}$$
Put $\cot x=u$
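Carrying the hint the rest of the way (just to show where it lands): with $u=\cot x$ we have $du=-\csc^2x\,dx$, and the limits $x\to0^+$, $x=\frac\pi2$ become $u\to\infty$, $u=0$, so
$$\int_0^{\pi\over2} \frac{\csc^2x\,dx}{\cot^2x+2}=\int_0^{\infty}\frac{du}{u^2+2}=\frac{1}{\sqrt2}\arctan\frac{u}{\sqrt2}\Big|_0^{\infty}=\frac{\pi}{2\sqrt2}.$$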
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/391537",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 4,
"answer_id": 1
} |
Building a tower using colorful blocks How many possibilities are there to build a tower of n height, using colorful blocks, where:
*
*white block is of 1 height
*black, red, blue, green, yellow, pink blocks are equal to 2 heights
I need to find the generating function formula for this.
So, for n = 1 I get 1 possibility,
for n = 2 I get 2 possibilities,
for n = 3 I get 3 possibilities
for n = 4 I get > 4 possibilities etc.
The generating function at this moment would be $1 + 2x + 3x^{2} + ...$. But I have no idea how can I find the general formula calculating this.
Could you give me any suggestions, or solutions (;-)) ?
| Let the number of towers of height $n$ be $a_n$. To build a tower of height $n$, you start with a tower of height $n - 1$ and add a white block ($a_{n - 1}$ ways) or with a tower of height $n - 2$ and add one of 6 height-2 blocks. In all, you can write:
$$
a_{n + 2} = a_{n + 1} + 6 a_n
$$
Clearly $a_0 = a_1 = 1$.
Define $A(z) = \sum_{n \ge 0} a_n z^n$, multiply the recurrence by $z^n$ and sum over $n \ge 0$ to get:
$$
\frac{A(z) - a_0 - a_1 z}{z^2} = \frac{A(z) - a_0}{z} + 6 A(z)
$$
You get:
$$
A(z) = \frac{1}{1 - z - 6 z^2}
= \frac{3}{5} \cdot \frac{1}{1 - 3 z} + \frac{2}{5} \cdot \frac{1}{1 + 2 z}
$$
This is just geometric series:
$$
a_n = \frac{3}{5} \cdot 3^n + \frac{2}{5} \cdot (-2)^n
= \frac{3^{n + 1} - (-2)^{n + 1}}{5}
$$
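A quick sanity check of the closed form against the recurrence (a small Python sketch, with my own function names):

```python
def a_closed(n):
    # a_n = (3^(n+1) - (-2)^(n+1)) / 5
    return (3 ** (n + 1) - (-2) ** (n + 1)) // 5

def a_rec(n):
    # a_{n+2} = a_{n+1} + 6 a_n with a_0 = a_1 = 1
    a, b = 1, 1
    for _ in range(n):
        a, b = b, b + 6 * a
    return a

assert all(a_closed(n) == a_rec(n) for n in range(20))
print([a_rec(n) for n in range(6)])  # [1, 1, 7, 13, 55, 133]
```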
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/391581",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 2
} |
Matrix Norm Inequality $\lVert A\rVert_\infty \leq \sqrt{n} \lVert A\rVert_2$ So I'm trying to prove that
$\lVert A\rVert_\infty \leq \sqrt{n} \lVert A\rVert_2$.
I've written the right hand side in terms of rows, but this method doesn't seem to be getting me anywhere.
Where else should I go?
| Writing $A=(A_1,\dots,A_n)^\mathrm{T}$ with $A_i$ being the $i$-th row of the matrix, let $A_j$ be the row for which
$$
\lVert A\rVert_\infty = \max_{1\leq i\leq n }\lVert A_i\rVert_1 = \lVert A_j\rVert_1 = \sum_{k=1}^n \left|A_{j,k}\right|
$$
Then
$$
n\lVert A\rVert_2^2 = n\sum_{i=1}^n \lVert A_i\rVert_2^2 \geq n\lVert A_j\rVert_2^2 \geq \lVert A_j\rVert_1^2 = \lVert A\rVert_\infty^2
$$
where the last inequality is "standard" (relation between 1 and 2-norm on $\mathbb{R}^n$, can be proven with Cauchy-Schwarz).
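A numerical spot check of the inequality as used here (a small numpy sketch; I read $\lVert A\rVert_2$ as the Frobenius norm, which is how the row-wise identity above treats it, and $\lVert A\rVert_\infty$ as the maximum absolute row sum):

```python
import numpy as np

rng = np.random.default_rng(0)  # fixed seed, arbitrary choice
for _ in range(1000):
    n = int(rng.integers(1, 8))
    A = rng.normal(size=(n, n))
    inf_norm = np.abs(A).sum(axis=1).max()  # max absolute row sum
    fro_norm = np.linalg.norm(A)            # Frobenius norm
    assert inf_norm <= np.sqrt(n) * fro_norm + 1e-12
```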
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/391744",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
} |
Radius of convergence of the Bernoulli polynomial generating function power series. The generating function of the Bernoulli Polynomials is: $$\frac{te^{xt}}{e^t-1}=\sum_{k=0}^\infty B_k(x)\frac{t^k}{k!}.$$
Would it be right to say that the radius of convergence of this power series is $2\pi$ ? I'm not sure since the power series above is in fact a double series: $$\sum_{k=0}^\infty\left(\sum_{j=0}^k {k\choose j}B_{k-j} x^j\right)\frac{t^k}{k!}.$$
What if I were to choose a fixed value for $x$? Would the radius be $2\pi$ then, even for the double power series?
| For every fixed $x=c$, the radius of convergence of the power series is $2\pi$. This is because $$\frac{ze^{cz}}{e^z-1}$$ is analytic everywhere except at $z=i2\pi n, n=\pm 1,\pm 2,\cdots$ (not at $0$ though.) The disk $B(0,2\pi)$ is the smallest one centered at $0$ that contains a singularity on its boundary, so the radius of convergence is $2\pi$.
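To see this numerically for a fixed $x=c$ (here $c=\tfrac3{10}$, an arbitrary choice), the root test on the coefficients $B_k(c)/k!$ should approach $\frac1{2\pi}\approx0.159$; a short sympy sketch:

```python
from sympy import bernoulli, factorial, Rational, pi, N

c = Rational(3, 10)  # an arbitrary fixed value of x
for k in (10, 20, 40, 80):
    coeff = abs(bernoulli(k, c)) / factorial(k)
    print(k, N(coeff ** Rational(1, k), 8))  # k-th root of the coefficient
print(N(1 / (2 * pi), 8))                    # the predicted limit 1/(2*pi)
```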
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/391807",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 1
} |
Does this ODE have an exact or well-established approximate analytical solution? The equation looks like this:
$$\frac{\mathrm{d}y}{\mathrm{d}t} = A + B\sin\omega t - C y^n,$$
where $A$, $B$, $C$ are positive constants, and $n\ge1$ is an integer. Actually I am mainly concerned with the $n=4$ case.
The $n=1$ case is trivial. Otherwise the only method I can think of is kind of an iterative approximation, in which a "clean" expression seems not easy to obtain.
Just in case it is too "easy" for the experts, we may generalize it to the form
$$\frac{\mathrm{d}y}{\mathrm{d}t} = f(t) - g(y).$$
If $\frac{dy}{dt} = F(t,y)$ has $F$ and $\partial_2F$ continuous on a rectangle then there exists a unique local solution at any particular point within the rectangle. This is in the basic texts. For $F(t,y) = f(t)-g(y)$, continuity requires continuity of $f$ and $g$, whereas continuity of the partial derivative of $F$ with respect to $y$ requires continuity of $g'$. Thus continuity of $f$ and continuous differentiability of $g$ suffice to guarantee the existence of a local solution. Of course, this is not an analytic expression just yet. That said, the proof of this theorem provides an iteratively generated approximation.
Alternatively, you could argue by Pfaff's theorem there exists an integrating factor which recasts the problem as an exact equation. So, you can rewrite $dy = [f(t)-g(y)]dt$ as
$I\,dy - I[f(t)-g(y)]\,dt=dG$ for some function $G$. Then the solution is simply $G(t,y)=k$, which locally provides functions which solve your given DEqn. Of course, the devil is in the detail of how to calculate $I$. The answer for most of us is magic.
Probably a better answer to your problem is to look at the Bernoulli problem. There a substitution is made which handles problems which look an awful lot like the one you state.
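If an explicit formula is not required, the equation integrates easily by standard numerical means; a minimal sketch for the $n=4$ case (all constants and the initial condition below are placeholder values, not taken from the question):

```python
import numpy as np
from scipy.integrate import solve_ivp

A, B, C, w, n = 1.0, 0.5, 0.2, 2.0, 4  # placeholder parameter values

def rhs(t, y):
    return [A + B * np.sin(w * t) - C * y[0] ** n]

sol = solve_ivp(rhs, (0.0, 50.0), [0.0], rtol=1e-8, atol=1e-10)
print(sol.y[0][-1])  # y(t) settles near (A/C)^(1/4), modulated by the forcing
```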
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/391862",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 0
} |
relationship of polar unit vectors to rectangular I'm looking at page 16 of Fleisch's Student's Guide to Vectors and Tensors. The author is talking about the relationship between the unit vector in 2D rectangular vs polar coordinate systems. They give these equations:
\begin{align}\hat{r} &= \cos(\theta)\hat{i} + \sin(\theta)\hat{j}\\
\hat{\theta} &= -\sin(\theta)\hat{i} + \cos(\theta)\hat{j}\end{align}
I'm just not getting it. I understand how, in rectangular coordinates, $x = r \cos(\theta)$, but the unit vectors are just not computing.
| The symbols on the left side of those equations don't make any sense. If you wanted to change to a new pair of coordinates $(\hat{u}, \hat{v})$ by rotating through an angle $\theta$, then you would have
$$
\left\{\begin{align}
\hat{u} &= (\cos \theta) \hat{\imath} + (\sin \theta)\hat{\jmath} \\
\hat{v} &= (-\sin \theta) \hat{\imath} + (\cos \theta)\hat{\jmath}.
\end{align}\right.
$$
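For what it's worth, the usual place such expressions come from is differentiating the position vector $\vec r = r\cos(\theta)\,\hat\imath + r\sin(\theta)\,\hat\jmath$ with respect to each polar coordinate and normalizing (this is one standard reading of $\hat r$ and $\hat\theta$, stated here only as context):
$$\hat r = \frac{\partial\vec r/\partial r}{\lVert\partial\vec r/\partial r\rVert} = \cos(\theta)\,\hat\imath+\sin(\theta)\,\hat\jmath,\qquad \hat\theta = \frac{\partial\vec r/\partial\theta}{\lVert\partial\vec r/\partial\theta\rVert} = -\sin(\theta)\,\hat\imath+\cos(\theta)\,\hat\jmath.$$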
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/391947",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10",
"answer_count": 5,
"answer_id": 1
} |
Differentiate $\log_{10}x$ My attempt:
$\eqalign{
& \log_{10}x = {{\ln x} \over {\ln 10}} \cr
& u = \ln x \cr
& v = \ln 10 \cr
& {{du} \over {dx}} = {1 \over x} \cr
& {{dv} \over {dx}} = 0 \cr
& {v^2} = {(\ln10)^2} \cr
& {{dy} \over {dx}} = {{\left( {{{\ln 10} \over x}} \right)} \over {2\ln 10}} = {{\ln10} \over x} \times {1 \over {2\ln 10}} = {1 \over {2x}} \cr} $
The right answer is: ${{dy} \over {dx}} = {1 \over {x\ln 10}}$ , where did I go wrong?
Thanks!
$$\log_{10}x = {{\ln x} \over {\ln 10}} = \dfrac{1}{\ln(10)}\ln x$$
No need for the quotient rule; in fact, that is what led to your mistakes, since $\dfrac 1 {\ln(10)}$ is a constant.
So we differentiate only the term that's a function of $x$: $$\dfrac{1}{\ln(10)}\frac d{dx}(\ln x)= \dfrac 1{x\ln(10)}$$
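If you want to double-check with a computer algebra system, a one-liner in sympy (assuming sympy is available) confirms it:

```python
import sympy as sp

x = sp.symbols('x', positive=True)
print(sp.diff(sp.log(x, 10), x))  # 1/(x*log(10))
```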
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/392011",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 4,
"answer_id": 2
} |
If $\lim_{t\to\infty}\gamma(t)=p$, then $p$ is a singularity of $\gamma$. I'm trying to solve this question:
Let $X$ be a vectorial field of class $C^1$ in an open set
$\Delta\subset \mathbb R^n$. Prove if $\gamma(t)$ is a trajectory of
$X$ defined in a maximal interval $(\omega_-,\omega_+)$ with
$\lim_{t\to\infty}\gamma(t)=p\in \Delta$, then $\omega_+=\infty$ and
$p$ is a singularity of $X$.
The first part is easy because $\gamma$ is contained in a compact for a large $t$, my problem is with the second part, I need help.
Thanks in advance
For $n \geq 0$, let $t_n \in (n,n+1)$ be such that $\gamma'(t_n)=\gamma(n+1)-\gamma(n)$ (use the mean value theorem; since $\gamma$ is vector-valued, apply it to each coordinate separately and run the argument below coordinate by coordinate).
Then $\gamma'(t_n)=X(\gamma(t_n)) \underset{n \to + \infty}{\longrightarrow} X(p)$ and $\gamma'(t_n) \underset{n \to + \infty}{\longrightarrow} p-p=0$. Thus $X(p)=0$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/392108",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Sequence of Functions Converging to 0 I encountered this question in a textbook. While I understand the intuition behind it I am not sure how to formally prove it.
Define the sequence of functions $(g_n)$ on $[0,1]$ to be $$g_{k,n}(x) = \begin{cases}1 & x \in \left[\dfrac{k}n, \dfrac{k+1}n\right]\\ 0 & \text{ else}\end{cases}$$
where $k \in \{0,1,2,\ldots,n-1\}$.
Prove the following statements:
1) $g_n \to 0$ with respect to the $L^2$ norm.
2) $g_n(x)$ doesn't converge to $0$ at any point in $[0,1]$.
3) There is a subsequence of $(g_n)$ that converges pointwise to $0$.
Thank you for your replies.
1) What is $\lVert g_n\rVert_2$? (This one is worked out below the hints.)
2) Show that for any $x$, $g_n(x)=0$ infinitely often and $g_n(x)=1$ infinitely often.
3) Note that $\frac1n\to 0 $ and $\frac2n\to 0$
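Spelling out the first hint, since the computation is short:
$$\lVert g_{k,n}\rVert_2=\left(\int_0^1 g_{k,n}(x)^2\,dx\right)^{1/2}=\left(\int_{k/n}^{(k+1)/n}1\,dx\right)^{1/2}=\frac{1}{\sqrt n}\longrightarrow 0\quad(n\to\infty).$$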
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/392169",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Do there exist some non-constant holomorphic functions such that the sum of the modulus of them is a constant Do there exist some non-constant holomorphic functions $f_1,f_2,\ldots,f_n$such that $$\sum_{k=1}^{n}\left|\,f_k\right|$$ is a constant? Can you give an example? Thanks very much
NO. Suppose $f, g$ are holomorphic functions on the unit disc.
$$
2\pi r M=2\pi r( |f(z_0)|+|g(z_0)|)=\left|\int_{|z-z_0|=r} f\,|dz|\right|+\left|\int_{|z-z_0|=r} g\,|dz|\right|\le \int_{|z-z_0|=r} (|f|+|g|)\,|dz|=2\pi r M
$$
so equality holds throughout the chain, and it follows that $f$ and $g$ are constant.
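(For reference, the first equality is the mean value property: parametrizing the circle $|z-z_0|=r$ by arc length, so that $|dz|=r\,d\theta$,
$$f(z_0)=\frac{1}{2\pi}\int_0^{2\pi}f(z_0+re^{i\theta})\,d\theta=\frac{1}{2\pi r}\int_{|z-z_0|=r}f\,|dz|,$$
and likewise for $g$.)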
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/392245",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Solve $\sqrt{2x-5} - \sqrt{x-1} = 1$ Although this is a simple question I for the life of me can not figure out why one would get a 2 in front of the second square root when expanding. Can someone please explain that to me?
Example: solve $\sqrt{(2x-5)} - \sqrt{(x-1)} = 1$
Isolate one of the square roots: $\sqrt{(2x-5)} = 1 + \sqrt{(x-1)}$
Square both sides: $2x-5 = (1 + \sqrt{(x-1)})^{2}$
We have removed one square root.
Expand right hand side: $2x-5 = 1 + 2\sqrt{(x-1)} + (x-1)$-- I don't understand?
Simplify: $2x-5 = 2\sqrt{(x-1)} + x$
Simplify more: $x-5 = 2\sqrt{(x-1)}$
Now do the "square root" thing again:
Isolate the square root: $\sqrt{(x-1)} = \frac{(x-5)}{2}$
Square both sides: $x-1 = (\frac{(x-5)}{2})^{2}$
Square root removed
Thank you in advance for your help
| To get rid of the square root, denote: $\sqrt{x-1}=t\Rightarrow x=t^2+1$. Then:
$$\sqrt{2x-5} - \sqrt{x-1} = 1 \Rightarrow \\
\sqrt{2t^2-3}=t+1\Rightarrow \\
2t^2-3=t^2+2t+1\Rightarrow \\
t^2-2t-4=0 \Rightarrow \\
t_1=1-\sqrt{5} \text{ (ignored, because $t>0$)},t_2=1+\sqrt{5}.$$
Now we can return to $x$:
$$x=t^2+1=(1+\sqrt5)^2+1=7+2\sqrt5.$$
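As for the $2$ that the question asks about: it comes from squaring a binomial, $(a+b)^2=a^2+2ab+b^2$, here with $a=1$ and $b=\sqrt{x-1}$:
$$\left(1+\sqrt{x-1}\right)^2=1^2+2\cdot1\cdot\sqrt{x-1}+\left(\sqrt{x-1}\right)^2=1+2\sqrt{x-1}+(x-1).$$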
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/392308",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 6,
"answer_id": 5
} |
Guides/tutorials to learn abstract algebra? I recently read up a bit on symmetry groups and was interested by how they apply to even the Rubik's cube. I'm also intrigued by how group theory helps prove that "polynomials of degree $\gt4$ are not generally solvable".
I love set theory and stuff, but I'd like to learn something else of a similar type. Learning about groups, rings, fields and what-have-you seems like an obvious choice.
Could anyone recommend any informal guides to abstract algebra that are written in (at least moderately) comprehensible language? (PDFs etc. would also be nice)
| I really think that Isaacs book Algebra: A Graduate Course introduces the group theory in detail without omitting any proof. It may sound difficult because of the adjective "Graduate" but I do not think that the explanations are that difficult to follow for undergraduates as long as they know how to write proofs.
The best free resources for algebra, in my opinion, are on Milne's website (http://www.jmilne.org/math/). Not every note is complete, but his excellent notes tell you which books to buy to corroborate them.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/392374",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "22",
"answer_count": 7,
"answer_id": 4
} |
properties of recursively enumerable sets $A \times B$ is an r.e.(recursively enumerable) set, I want to show that $A$ (or $B$) is r.e. ($A$ and $B$ are nonempty)
I need to find a formula. I've got an idea that I should use the symbolic definition of an r.e. set. That is, writing a formula for the function that specifies $A$ or $B$, assuming a formula exists for $A \times B$. I guess I must use Gödel number somewhere.
I should've mentioned that I asked this question over there (Computer Science) but since I am more interested in mathematical arguments and looking for formulas, I'd ask it here as well.
The notion of computable or c.e. is usually defined on $\omega$. To make sense of $A \times B$ being computable or c.e., you should identify ordered pairs $(x,y)$ with natural numbers under a bijective pairing function. For any of the usual standard pairing functions, the projection maps $\pi_1$ and $\pi_2$ are computable.
First note that $\emptyset \times B = \emptyset$. So the result does not hold if one of the sets is empty.
Suppose $A$ and $B$ are not empty. Suppose $A \times B$ is c.e. Then there is a total computable function $f$ with range $A \times B$. Then $\pi_1 \circ f$ and $\pi_2 \circ f$ are total computable functions with range $A$ and $B$, respectively. So $A$ and $B$ are c.e.
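For concreteness, here is one of the usual standard pairings, the Cantor pairing, together with its computable inverse (a minimal Python sketch; the function names are arbitrary):

```python
import math

def pair(x, y):
    # Cantor pairing: a bijection from N x N to N
    return (x + y) * (x + y + 1) // 2 + y

def unpair(z):
    # computable projections pi_1, pi_2 recovering (x, y) from pair(x, y)
    w = (math.isqrt(8 * z + 1) - 1) // 2  # largest w with w(w+1)/2 <= z
    t = w * (w + 1) // 2
    y = z - t
    x = w - y
    return x, y

assert all(unpair(pair(x, y)) == (x, y) for x in range(50) for y in range(50))
```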
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/392407",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 4,
"answer_id": 3
} |
Cyclic shifts when multiplied by $2$. I was trying to solve the following problem:
Find a number in base $10$, which when multiplied by $2$, results in a number which is a cyclic shift of the original number, such that the last digit (least significant digit) becomes the first digit.
I believe one such number is $105263157894736842$
This I was able to get by assuming the last digit and figuring out the others, till there was a repeat which gave a valid number.
I noticed that each time (with different guesses for the last digit), I got an $18$ digit number which was a cyclic shift of the one I got above.
Is there any mathematical explanation for the different answers being cyclic shifts of each other?
Suppose an $N$-digit number satisfies your condition, and write it as $10a + b$, where $b$ is the last digit. Then your condition implies that
$$ 2 (10a + b) = b\times 10^{N-1} + a $$
Or that $b \times (10^{N-1} -2 ) = 19 a$.
Clearly, $b$ is not a multiple of 19, so $10^{N-1} -2$ must be a multiple of 19, i.e. $10^{N-1}\equiv 2 \pmod{19}$. You should be able to verify (modular arithmetic; $10$ has order $18$ modulo $19$, and $10^{17}\equiv 2\pmod{19}$) that this happens if and only if $N\equiv 0 \pmod{18}$. This gives us that $N = 18k$, and the numbers have the form
$$\frac{ 10^{18k} - 1 } {19}\, b = 10 \left\lfloor \frac{10^{18k-1} } {19}\, b \right\rfloor + b$$
[The equality holds because the fractional part being discarded is $\frac{2b}{19} < 1$.]
I now refer you to "When is $\frac{1}{p}$ obtained as a cyclic shift", for you to reach your conclusion.
There is one exception, where we need the leading digit to be considered as 0 (not the typical definition, so I thought I'd point it out). We can get more solutions, obtained by concatenations of your number with itself, and several cyclic shifts (and again accounting for a possible leading digit of 0).
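A quick computational check of this family of solutions (a Python sketch; the helper name and the padding convention for a leading $0$ are mine):

```python
def shifted(m, ndigits):
    s = str(m).zfill(ndigits)       # allow a leading digit 0, as noted above
    return int(s[-1] + s[:-1])

for k in (1, 2):
    for b in range(1, 10):
        m = b * (10 ** (18 * k) - 1) // 19
        assert 2 * m == shifted(m, 18 * k), (b, k)

print(2 * (10 ** 18 - 1) // 19)  # 105263157894736842, the number in the question
```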
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/392475",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Approximation of alternating series $\sum_{n=1}^\infty a_n = 0.55 - (0.55)^3/3! + (0.55)^5/5! - (0.55)^7/7! + ...$ $\sum_{n=1}^\infty a_n = 0.55 - (0.55)^3/3! + (0.55)^5/5! - (0.55)^7/7! + ...$
I am asked to find the no. of terms needed to approximate the partial sum to be within 0.0000001 from the convergent value of that series. Hence, I tried find the remainder, $R_n = S-S_n$ by the improper integral
$\int_{n}^{\infty} \frac{ (-1)^n(0.55)^{2n-1}} {(2n-1)!} dn $
However, I don't know how to integrate this improper integral so is there other method to solve this problem?
| Hint:
$\sin x=x-\dfrac{x^3}{3!}+\dfrac{x^5}{5!}-\dfrac{x^7}{7!}+ \dots$
Your expression is simply $\sin (0.55)$.
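For the accuracy question: since this is an alternating series with terms decreasing in absolute value, the error of a partial sum is at most the first omitted term, so it suffices to find the smallest $m$ with $\frac{0.55^{2m+1}}{(2m+1)!}<10^{-7}$. A short check (a minimal Python sketch):

```python
from math import factorial

m = 1
while 0.55 ** (2 * m + 1) / factorial(2 * m + 1) >= 1e-7:
    m += 1
print(m)  # 4, so four terms of the series already give the required accuracy
```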
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/392534",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 4,
"answer_id": 3
} |
how to apply hint to question involving the pigeonhole principle The following question is from cut-the-knot.org's page on the pigeonhole principle
Question
Prove that however one selects 55 integers $1 \le x_1 < x_2 < x_3 < ... < x_{55} \le 100$, there will be some two that differ by 9, some two that differ by 10, a pair that differ by 12, and a pair that differ by 13. Surprisingly, there need not be a pair of numbers that differ by 11.
The Hint
Given a run of $2n$ consecutive integers: $a + 1, a + 2, ..., a + 2n - 1, a + 2n$, there are n pairs of numbers that differ by n: $(a+1, a+n+1), (a + 2, a + n + 2), \dots, (a + n, a + 2n)$. Therefore, by the Pigeonhole Principle, if one selects more than n numbers from the set, two are liable to belong to the same pair that differ by $n$.
I understood the hint but no concrete idea as to how to apply it, but here are my current insights:
My Insights
break the set of 100 possible choices into m number of 2n(where $n \in \{9,12,13\}$ ) consecutive numbers and since 55 numbers are to be chosen , show that even if one choses randomly there will be n+1 in one of the m partitions and if so, by the hint there will exist a pair of two numbers that differ by n.
| Here is $9$ done explicitly: Break the set into subsets with a difference of $9$: $\{1,10,19,28,\ldots,100\},\{2,11,20,\ldots 92\},\ldots \{9,18,27,\ldots 99\}$. Note that there is one subset with $12$ members and eight with $11$ members. If you want to avoid a pair with a difference of $9$ among your $55$ numbers, you can't pick a pair of neighbors from any set. That means you can only pick six from within each set, but that gives you only $54$ numbers, so with $55$ you will have a pair with difference $9$.
The reason this fails with $11$ is you get one subset with $10$ members and ten with $9$ members. You can pick $5$ out of each and avoid neighbors.
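The 5-per-class selection can be exhibited explicitly (a small Python sketch of one such 55-element set, verifying that no two chosen numbers differ by 11):

```python
classes = [[x for x in range(1, 101) if x % 11 == r] for r in range(11)]
chosen = sorted(c[i] for c in classes for i in range(0, len(c), 2))  # every other element per class
assert len(chosen) == 55
assert all(b - a != 11 for a in chosen for b in chosen)
```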
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/392605",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
} |
How to integrate $\int \sqrt{x^2+a^2}dx$ $a$ is a parameter. I have no idea where to start
| I will give you a proof of how they can get the formula above. As a heads up, it is quite difficult and long, so most people use the formula usually written in the back of the text, but I was able to prove it so here goes.
The idea is to, of course, do trig-substitution.
The form $$\sqrt{a^2+x^2} $$ suggests that $x=a\tan(\theta)$ would be a good one, because the expression simplifies to $$a\sec(\theta)$$
We can also observe that $dx$ will become $a\sec^2(\theta)d\theta$.
Therefore$$\int \sqrt{a^2+x^2}dx = a^2\int \sec^3(\theta)d\theta$$
Now there are two big things that we are going to do.
One is to do integration by parts to simplify this expression so that it looks a little better, and later we need to be able to integrate $\int \sec(\theta)d\theta$.
So the first step is this. It is well known and natural to let $u=\sec(\theta)$ and $dv=\sec^2(\theta)d\theta$ because the latter integrates to simply, $\tan(\theta)$.
Letting $A = \int \sec^3(\theta)d\theta$,you will get the following $$A = \sec(\theta)\tan(\theta) - \int{\sec(\theta)\tan^2(\theta)d\theta}$$
$$=\sec(\theta)\tan(\theta) - \left[\int\sec^3(\theta)d\theta-\int\sec(\theta)d\theta\right]$$
(using $\tan^2(\theta)=\sec^2(\theta)-1$); therefore,$$2A = \sec(\theta)\tan(\theta)+\int \sec(\theta)d\theta$$
Dividing both sides gives you $$A = \tfrac{1}{2}\left[\sec(\theta)\tan(\theta)+\int \sec(\theta)d\theta\right]$$
I hope you see now why all we need to be able to do is to integrate $\sec(\theta)$.
The chance that you know how is rather high because you are solving this particular problem, but let's just go through it for the hell of it.
This is a very common trick in integration using trig, but remember the fact that $\sec^2(\theta)$ and $\sec(\theta)\tan(\theta)$ are derivatives of $\tan(\theta)$ and $\sec(\theta)$, respectively. So this is what we do.
$$\int \sec(\theta)d\theta = \int {{\sec(\theta)(\sec(\theta)+\tan(\theta))} \over {\sec(\theta)+\tan(\theta)}} d\theta$$
Letting $w = \sec(\theta)+\tan(\theta)$,
$$= \int {dw \over w} = \ln|w|$$
So, long story short,
$$\int \sqrt{a^2+x^2}dx = \frac{a^2}{2}\left[\sec(\theta)\tan(\theta) + \ln|\sec(\theta)+\tan(\theta)|\right]$$
Back-substituting $\tan(\theta)=\frac{x}{a}$ and $\sec(\theta)=\frac{\sqrt{a^2+x^2}}{a}$, and absorbing the constant $-\frac{a^2}{2}\ln a$ into $C$,
$$= {x\sqrt{a^2+x^2}\over 2} + {{a^2\ln|x+\sqrt{a^2+x^2}|}\over 2} + C$$
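If you want to verify the final formula symbolically (a quick sympy check, assuming $a>0$):

```python
import sympy as sp

x = sp.symbols('x', real=True)
a = sp.symbols('a', positive=True)

F = x * sp.sqrt(a**2 + x**2) / 2 + a**2 / 2 * sp.log(x + sp.sqrt(a**2 + x**2))
print(sp.simplify(sp.diff(F, x) - sp.sqrt(a**2 + x**2)))  # 0
```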
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/392663",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 4,
"answer_id": 1
} |