Completing a proof Say we are given this:
Impossibility of ordering the complex numbers. As yet we have not defined a relation of the form $x < y$ if $x$ and $y$ are arbitrary complex numbers, for the reason that it is impossible to give a definition of $<$ for complex numbers which will have all the properties in Axioms 6 through 8.
To illustrate, suppose we were able to define an order relation $<$ satisfying Axioms 6, 7, and 8. Then, since $i \neq 0$, we must have either $i > 0$ or $i < 0$, by Axiom 6. Let us assume $i > 0$. Then, taking $x = y = i$ in Axiom 8, we get $i^2 > 0$, or $-1 > 0$. Adding 1 to both sides (Axiom 7), we get $0 > 1$. On the other hand, applying Axiom 8 to $-1 > 0$ yields $1 > 0$. Thus we have both $0 > 1$ and $1 > 0$, which, by Axiom 6, is impossible. Hence the assumption $i > 0$ leads us to a contradiction.
So we are given this "passage" and the question is
By reading the passage: suppose that $<$ is a relation on $\mathbb{C}$ that satisfies Axioms 6, 7, and 8, and show that the assumption $i < 0$ leads to a contradiction.
Axiom 6 - Exactly one of the relations $x = y$, $x<y$ and $x>y$ holds. Note $x>y$ means the same thing as $y<x$
Axiom 7 - If $x<y$, then for every $z$ we have $x+z <y+z$.
Axiom 8 - If $x>0$ and $y>0$, then $xy>0$.
So this is what I have, but I am really confused by that passage. Didn't we already complete the proof, because we ended up with a contradiction? Help on this one please.
| The passage proves only the case $i>0$; the case $i<0$ doesn't readily follow from it, though it is not hard either:
By Axiom 7, if $i<0$, we have $0=i+(-i)<0+(-i)=-i$, that is, $-i>0$. But then the passage can be applied again, as $(-i)^2=i^2=-1$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/324201",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
How to do LU factorisation of a 4 by 4 matrix using Gaussian elimination I have a 4 by 4 matrix,
A = [2 -2 0 0]
[2 -4 2 0]
[0 -2 4 -2]
[0 0 2 -4]
How would I use Gaussian elimination to find the LU factorisation of the matrix?
Please could someone explain how to do this? I have an exam where a similar question will come up, so I really want to be able to fully understand this. I can do and completely understand Gaussian elimination of a 3 by 3 matrix, but not when it is not a system of equations! I haven't seen anything like this before!
Many thanks
| Here's your matrix $A$, and you multiply it on the left with a 4 by 4 identity matrix (it's always going to be the same dimensions as your $A$ matrix). So it'll look like $[I]*[A]$, and then you do Gaussian elimination (GE) on your $A$ matrix, making sure you keep track of the row operations you do. An example row operation (the specific numbers aren't important to your question):
$$r_{2} = r_{2} - 2r_{1}$$
When you apply it to your Identity matrix, make sure you change it to:
$$c_{1} = c_{1} + 2c_{2}$$
Where you then apply it as a column operation to the identity matrix. You keep doing those same steps until you end with your lower triangular matrix ($L$) on the left, and an upper triangular matrix on the right ($U$). Then $A = LU$. I will post an actual example soon.
Note: If you have to switch rows, then you have to multiply by a Permutation matrix, P. The steps you would do it would be PAx = Pb. I will write out an example, and somehow figure out how to post it here.
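In the meantime, here is a minimal sketch of the same idea in code (assuming Python with `numpy`; the function name and the no-pivoting assumption are mine, not part of the answer above):

```python
import numpy as np

def lu_no_pivot(A):
    """Doolittle-style LU factorization A = L @ U, assuming no row swaps are needed."""
    n = A.shape[0]
    L = np.eye(n)
    U = A.astype(float).copy()
    for j in range(n - 1):
        for i in range(j + 1, n):
            L[i, j] = U[i, j] / U[j, j]   # multiplier used in the row operation
            U[i, :] -= L[i, j] * U[j, :]  # r_i = r_i - L[i,j] * r_j
    return L, U

A = np.array([[2, -2, 0, 0],
              [2, -4, 2, 0],
              [0, -2, 4, -2],
              [0, 0, 2, -4]])
L, U = lu_no_pivot(A)
print(np.allclose(L @ U, A))  # True
```

Note how each entry $L_{ij}$ is exactly the multiplier of the row operation $r_i = r_i - L_{ij} r_j$, which is the bookkeeping described above.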
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/324326",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
} |
Asymptotic bound $T(n)=T(n/3+\lg n)+1$ How would I go about finding the upper and lower bounds of $T(n)=T(n/3+\lg(n))+1$?
| Not sure how tight a bound you need, but here is an idea.
Compare your recurrence to the one that satisfies $X_n = X_{n/3} + 1$ (without the log) - what is the relationship between $T_n$ and $X_n$? Note that when you solve, $X_n = \Theta(\log n)$.
Next item is in the other direction. Compare $T_n$ to $Y_n = Y_{n-1}+1$, and note that $Y_n = \Theta(n)$.
If you fill in the blanks, you get an inequality bounding on both sides.
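To see the behavior concretely, here is a small numeric sketch (my own illustration, assuming Python; the cutoff $n\le 2$ is an arbitrary choice) that unrolls the recurrence and compares the step count to $\log n$:

```python
import math

def T(n):
    """Number of steps until n/3 + lg n drives n below the cutoff."""
    count = 0
    while n > 2:
        n = n / 3 + math.log2(n)
        count += 1
    return count

for n in [10**3, 10**6, 10**9, 10**12]:
    print(n, T(n), round(math.log(n), 1))  # T(n) grows like log n
```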
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/324390",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Power series infinity at every point of boundary Is there an example of a power series $f(z)=\sum_{k=0}^\infty a_kz^k$ with radius of convergence $0<R<\infty$ so that $\sum_{k=0}^\infty a_kw^k=\infty$ for all $w$ with $|w|=R$?
Thank you kindly.
| No there is not. In fact, there is no example of such power series $\sum_n a_n z^n$ such that $\sum_n a_n w^n = \infty$ for all $w$ in a set of positive measure in $\partial D$, where $D=\{|z|<R\}$. Indeed, suppose there exists such a power series $f$. By Abel's Theorem, we deduce that $f(z)$ has non-tangential boundary values $\infty$ on a set of positive measure in $\partial D$. This means that $1/f$ is a meromorphic function in $D$ with non-tangential boundary values $0$ on a set of positive measure in $\partial D$, and so $1/f$ is identically zero in $D$, by the Luzin-Privalov Theorem. So $f \equiv \infty$ in $D$, a contradiction.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/324444",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 5,
"answer_id": 1
} |
Evaluate the limit $\lim\limits_{n\to\infty}{\frac{n!}{n^n}\bigg(\sum_{k=0}^n{\frac{n^{k}}{k!}}-\sum_{k=n+1}^{\infty}{\frac{n^{k}}{k!}}\bigg)}$
Evaluate the limit
$$ \lim_{n\rightarrow\infty}{\frac{n!}{n^{n}}\left(\sum_{k=0}^{n}{\frac{n^{k}}{k!}}-\sum_{k=n+1}^{\infty}{\frac{n^{k}}{k!}} \right)} $$
I use $$e^{n}=1+n+\frac{n^{2}}{2!}+\cdots+\frac{n^{n}}{n!}+\frac{1}{n!}\int_{0}^{n}{e^{x}(n-x)^{n}dx}$$
but I don't know how to evaluate
$$ \lim_{n\rightarrow\infty}{\frac{n!}{n^{n}}\left(e^{n}-2\frac{1}{n!}\int_{0}^{n}{e^{x}(n-x)^{n}dx} \right) }$$
| In this answer, it is shown, using integration by parts, that
$$
\sum_{k=0}^n\frac{n^k}{k!}=\frac{e^n}{n!}\int_n^\infty e^{-t}\,t^n\,\mathrm{d}t\tag{1}
$$
Subtracting both sides from $e^n$ gives
$$
\sum_{k=n+1}^\infty\frac{n^k}{k!}=\frac{e^n}{n!}\int_0^n e^{-t}\,t^n\,\mathrm{d}t\tag{2}
$$
Substituting $t=n(s+1)$ and $u^2/2=s-\log(1+s)$ gives us
$$
\begin{align}
\Gamma(n+1)
&=\int_0^\infty t^n\,e^{-t}\,\mathrm{d}t\\
&=n^{n+1}e^{-n}\int_{-1}^\infty e^{-n(s-\log(1+s))}\,\mathrm{d}s\\
&=n^{n+1}e^{-n}\int_{-\infty}^\infty e^{-nu^2/2}\,s'\,\mathrm{d}u\tag{3}
\end{align}
$$
and
$$
\begin{align}
\Gamma(n+1,n)
&=\int_n^\infty t^n\,e^{-t}\,\mathrm{d}t\\
&=n^{n+1}e^{-n}\int_0^\infty e^{-n(s-\log(1+s))}\,\mathrm{d}s\\
&=n^{n+1}e^{-n}\int_0^\infty e^{-nu^2/2}\,s'\,\mathrm{d}u\tag{4}
\end{align}
$$
Computing the series for $s'$ in terms of $u$ gives
$$
s'=1+\frac23u+\frac1{12}u^2-\frac2{135}u^3+\frac1{864}u^4+\frac1{2835}u^5-\frac{139}{777600}u^6+O(u^7)\tag{5}
$$
In the integral for $\Gamma(n+1)$, the odd powers of $u$ in $(5)$ are cancelled and the even powers of $u$ are integrated over twice the domain as in the integral for $\Gamma(n+1,n)$. Thus,
$$
\begin{align}
2\Gamma(n+1,n)-\Gamma(n+1)
&=\int_n^\infty t^n\,e^{-t}\,\mathrm{d}t-\int_0^n t^n\,e^{-t}\,\mathrm{d}t\\
&=n^{n+1}e^{-n}\int_0^\infty e^{-nu^2/2}\,2\,\mathrm{odd}(s')\,\mathrm{d}u\\
&=n^{n+1}e^{-n}\left(\frac4{3n}-\frac8{135n^2}+\frac{16}{2835n^3}+O\left(\frac1{n^4}\right)\right)\\
&=n^ne^{-n}\left(\frac43-\frac8{135n}+\frac{16}{2835n^2}+O\left(\frac1{n^3}\right)\right)\tag{6}
\end{align}
$$
Therefore, combining $(1)$, $(2)$, and $(6)$, we get
$$
\begin{align}
\frac{n!}{n^n}\left(\sum_{k=0}^n\frac{n^k}{k!}-\sum_{k=n+1}^\infty\frac{n^k}{k!}\right)
&=\frac{e^n}{n^n}\left(\int_n^\infty e^{-t}\,t^n\,\mathrm{d}t-\int_0^n e^{-t}\,t^n\,\mathrm{d}t\right)\\
&=\frac43-\frac{8}{135n}+\frac{16}{2835n^2}+O\left(\frac1{n^3}\right)\tag{7}
\end{align}
$$
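As a sanity check on $(7)$, a short numeric experiment (a sketch assuming Python with `mpmath`; not part of the derivation above) shows the expression approaching $4/3$:

```python
from mpmath import mp, mpf, factorial, exp

mp.dps = 50  # enough digits to survive the cancellation in head - tail

def expr(n):
    n = mpf(n)
    head = sum(n**k / factorial(k) for k in range(int(n) + 1))  # sum_{k=0}^{n} n^k/k!
    tail = exp(n) - head                                        # sum_{k=n+1}^{inf} n^k/k!
    return factorial(n) / n**n * (head - tail)

for n in [10, 100, 1000]:
    print(n, expr(n))  # tends to 4/3 - 8/(135 n) + ...
```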
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/324503",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 2,
"answer_id": 1
} |
How to prove that $\lim\limits_{n\to\infty} \frac{n!}{n^2}$ diverges to infinity? $\lim\limits_{n\to\infty} \dfrac{n!}{n^2} = \lim\limits_{n\to\infty}\dfrac{\left(n-1\right)!}{n}$
I can understand that this will go to infinity because the numerator grows faster.
I am trying to apply L'Hôpital's rule to this; however, have not been able to figure out how to take the derivative of $\left(n-1\right)!$
So how does one take the derivative of a factorial?
| Dominic Michaelis's answer is the 'right' one for such a simple problem. This is just to demonstrate a trick that is often helpful in showing limits going off to $\infty$. Consider $$\sum_{n=1}^{\infty} \frac{n^2}{n!}$$ By the ratio test this converges, so the terms $\frac{n^2}{n!} \to 0$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/324549",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 8,
"answer_id": 0
} |
How can I prove this identity (by mathematical induction)? (rational product of sines) I would appreciate it if somebody could help me with the following problem:
Q: How can the following be proved (by mathematical induction)?
$$\prod_{k=1}^{n-1}\sin\frac{k \pi}{n}=\frac{n}{2^{n-1}}~(n\geq 2)$$
| Let
$$S_n=\prod_{k=1}^{n-1}\sin \frac{k\pi}{n}.$$
Solving the equation $(z+1)^n=1$ for $z\in\mathbb{C}$, we find
$$z=e^{i2k\pi/n}-1=2ie^{ik\pi/n}\sin\frac{k\pi}{n}=z_k,\quad k=0,\ldots,n-1.$$
Moreover $(x+1)^n-1=x\left((x+1)^{n-1}+(x+1)^{n-2}+\cdots+(x+1)+1\right)=xP(x).$
The roots of $P$ are $z_k,\ k=1,\ldots ,n-1$. By the relation between the coefficients of $P$ and its roots we have $$\sigma_{n-1}=(-1)^{n-1}n=\prod_{k=1}^{n-1}z_k.$$
On the other hand, we have
$$\prod_{k=1}^{n-1}z_k=2^{n-1}i^{n-1}\left(\prod_{k=1}^{n-1}e^{ik\pi/n}\right)\left(\prod_{k=1}^{n-1}\sin \frac{k\pi}{n}\right)=2^{n-1}i^{n-1}e^{i\pi(1+2+\cdots+(n-1))/n}S_n=2^{n-1}(-1)^{n-1}S_n.$$
We conclude.
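A quick numeric check of the identity for small $n$ (a sketch assuming Python 3.8+, not part of the proof):

```python
import math

for n in range(2, 10):
    lhs = math.prod(math.sin(k * math.pi / n) for k in range(1, n))
    print(n, lhs, n / 2**(n - 1))  # the two columns agree
```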
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/324609",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 0
} |
Given the product of a unitary matrix and an orthogonal matrix, can it be easily inverted _without_ knowing these factors? Given the product $M$ of a unitary matrix $U$ (i.e. $U^\dagger U=1$) and an orthogonal matrix $O$ (i.e. $O^TO=1$), can it be easily inverted without knowing $U$ and $O$?
Sure enough, if $M=UO$, then $M^{-1}=O^TU^\dagger$. But assuming you only know that $M$ is composed in such a way, but not how $U$ and $O$ actually look, does there still exist a simple formula for $M^{-1}$?
| Note that
$$M^\dagger M = O^\dagger\underbrace{U^\dagger U}_{=1} O = O^\dagger O$$
Therefore,
$$(M^\dagger M)(M^\dagger M)^T = O^\dagger \underbrace{O O^T}_{=1} O^* = (OO^T)^* = 1$$
So that
$$(M^\dagger)^{-1} = M(M^\dagger M)^T$$
And thus
$$M^{-1} = M^T M^* M^\dagger$$
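A numeric sanity check of the final formula (a sketch of mine, assuming `numpy`/`scipy`; a complex orthogonal $O=\exp(A)$ with $A^T=-A$ makes the test non-trivial):

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
n = 4
# Random unitary U = exp(iH) with H Hermitian.
X = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
U = expm(1j * (X + X.conj().T) / 2)
# Random complex orthogonal O = exp(A) with A antisymmetric: O^T O = 1.
Y = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
O = expm((Y - Y.T) / 2)
M = U @ O
Minv = M.T @ M.conj() @ M.conj().T  # claimed formula M^{-1} = M^T M^* M^dagger
print(np.allclose(Minv @ M, np.eye(n)))  # True
```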
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/324759",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
} |
How to choose the starting row when computing the reduced row echelon form? I'm having a hell of a time reducing matrices to reduced row echelon form. My main issue is which row to start simplifying values in, and based on what? I have this example
so again, the questions are:
1. Which row do I start simplifying values in?
2. Based on what criteria?
Our professor solved it in class with no fractions, but I could not do it, even though I know the 3 row operations performed on matrices.
| Where you start is not really a problem.
My tip:
* Always first make sure you make the first column: 1, 0, 0.
* Then proceed to making the second one: 0, 1, 0.
* And lastly, 0, 0, 1.
Step one:
$$\begin{pmatrix} 1&2&3&9 \\ 2&-1&1&8 \\ 3&0&-1&3\end{pmatrix}$$
row 3 - 3 times row 1
$$\begin{pmatrix} 1&2&3&9 \\ 2&-1&1&8 \\ 0&-6&-10&-24\end{pmatrix}$$
row 2 - 2 times row 1
$$\begin{pmatrix} 1&2&3&9 \\ 0&-5&-5&-10 \\ 0&-6&-10&-24\end{pmatrix}$$
Which simplifies to
$$\begin{pmatrix} 1&2&3&9 \\ 0&1&1&2 \\ 0&3&5&12\end{pmatrix}$$
Now you can proceed with step 2, and 3.
row 1 - 2 times row 2 and row 3 - 3 times row 2
$$\begin{pmatrix} 1&0&1&5 \\ 0&1&1&2 \\ 0&0&2&6\end{pmatrix}$$
Simplifies to
$$\begin{pmatrix} 1&0&1&5 \\ 0&1&1&2 \\ 0&0&1&3\end{pmatrix}$$
row 2 - row 3
$$\begin{pmatrix} 1&0&1&5 \\ 0&1&0&-1 \\ 0&0&1&3\end{pmatrix}$$
row 1 - row 3
$$\begin{pmatrix} 1&0&0&2 \\ 0&1&0&-1 \\ 0&0&1&3\end{pmatrix}$$
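If you want to check a hand computation, `sympy` can produce the reduced row echelon form directly (a small sketch of mine, not part of the answer above):

```python
from sympy import Matrix

M = Matrix([[1, 2, 3, 9],
            [2, -1, 1, 8],
            [3, 0, -1, 3]])
print(M.rref()[0])  # Matrix([[1, 0, 0, 2], [0, 1, 0, -1], [0, 0, 1, 3]])
```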
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/324798",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11",
"answer_count": 4,
"answer_id": 0
} |
Fibonacci identity proof I've been struggling with this identity for a while; how can I use a combinatorial proof to prove the Fibonacci identity $$F_2+F_5+\dots+F_{3n-1}=\frac{F_{3n+1}-1}{2}$$
I know that $F_n$ is number of tilings for the board of length $n-1$, so if I rewrite the identity and let $f_n$ be the number of tilings for the board of length $n$, then I got $$f_1+f_4+\dots+f_{3n-2}=\frac{f_{3n}-1}{2}$$
the only thing that I know so far is the Right hand side, $f_{3n}-1$ is the number of tilings for the $3n$ board with at least one $(1\times 2)$ tile (or maybe I am wrong), but I have no idea of what the fraction $\frac{1}{2}$ is doing here. Can anyone help?
(P.S.: In general, when it comes to this kind of combinatorial proof question, is it ok to rewrite the question in a different way? Or is it ok to rewrite this question as $2(f_1+f_4+\dots+f_{3n-1})=f_{3n}-1$, then process the proof?
Thank you for all your useful proofs, but this is an identity from a course that I am taking recently, and it is all about combinatorial proof, so some hint about how to find the number of tilings for the board of length $3n$ would be really helpful.
Thanks for dtldarek's help, I finally came up with:
Rewrite the identity as $2F_2+2F_5+\dots+2F_{3n-1}=F_{3n+1}-1$; then the left-hand side becomes $F_2+F_2+F_5+F_5+\dots+F_{3n-1}+F_{3n-1}=F_0+F_1+F_2+F_3+\dots+F_{3n-3}+F_{3n-2}+F_{3n-1}=\sum^{3n-1}_{i=0}F_{i}$, so the identity to prove is $\sum^{3n-1}_{i=0} F_i=F_{3n+1}-1$. Recalling that $f_n$ is the number of tilings of the board of length $n$, this is $\sum^{3n-2}_{i=0}f_i=f_{3n}-1$.
For the right-hand side, $f_{3n}$ is the number of tilings of a board of length $3n$, so $f_{3n}-1$ is the number of tilings of a $3n$ board that use at least one $1\times 2$ tile. Now, for the left-hand side, condition on the first domino covering cells $k$ and $k+1$: the cells before it can be filled in only one way (all squares), and the cells after cell $k+1$ can be tiled in $f_{3n-k-1}$ ways; finally sum over $k$ from $1$ to $3n-1$, which gives the left-hand side.
Is this OK? Did I change the meaning of the original identity?
| This may not be the quickest approach, but it seems fairly simple, using only the recursion equation $F_i+F_{i+1}=F_{i+2}$ and initial conditions, which I take to be $F_0=0$ and $F_1=1$. Notice first that you can apply the recursion equation to replace, on the left side of your formula, each term by the sum of the two preceding Fibonacci numbers, so this left side is equal to $F_0+F_1+F_3+F_4+F_6+F_7+\dots+F_{3n-3}+F_{3n-2}$, which skips exactly those terms that are present in your formula's left side. So, adding this form of your left side to the original form, you find that twice the left side is the sum of all the Fibonacci numbers up to and including $F_{3n-1}$. So what needs to be proved is that $\sum_{i=0}^{3n-1}F_i=F_{3n+1}-1$. That can be done by induction on $n$. The base case is trivial ($0+1+1=3-1$), and the induction step is three applications of the recursion equation. In $\sum_{i=0}^{3(n+1)-1}F_i$, apply the induction hypothesis to replace all but the last three terms with $F_{3n+1}-1$. To combine that with the last three terms, $F_{3n}+F_{3n+1}+F_{3n+2}$, use the recursion equation to replace $F_{3n}+F_{3n+1}$ and $F_{3n+1}+F_{3n+2}$ with $F_{3n+2}$ and $F_{3n+3}$, respectively, and then again to replace these last two results with $F_{3n+4}=F_{3(n+1)+1}$.
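A quick computational check of the identity for small $n$ (my sketch, using the same conventions $F_0=0$, $F_1=1$):

```python
def fib(m):
    a, b = 0, 1  # F_0 = 0, F_1 = 1
    for _ in range(m):
        a, b = b, a + b
    return a

for n in range(1, 10):
    lhs = sum(fib(3 * k - 1) for k in range(1, n + 1))  # F_2 + F_5 + ... + F_{3n-1}
    assert 2 * lhs == fib(3 * n + 1) - 1
print("identity verified for n = 1..9")
```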
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/324879",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 4,
"answer_id": 3
} |
How to minimize the amount of material used to make a shape of a given volume? A metal can company will make cylindrical shape cans of capacity 300 cubic centimeters. What is the bottom radius of the cans in order to use the least amount of the sheet metal in the production? Accurate to 2 decimal places.
| Hint:
* Write out the expressions for surface area and volume of cylinders. Here they are for reference:
$ A = 2 \pi r h + 2 \pi r^2 $
$ V = \pi r^2 h $
* We already know what the required volume is, so we can set $V = 300$.
* Can we combine our expressions for $A$ and $V$ and make progress that way?
ETA: curse my blurred vision!
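Carrying the hint to the end, a short `sympy` computation (a sketch of mine; it is not part of the hint) eliminates $h$ and minimizes the area:

```python
from sympy import symbols, pi, solve, diff, N

r = symbols('r', positive=True)
h = 300 / (pi * r**2)               # from V = pi r^2 h = 300
A = 2 * pi * r * h + 2 * pi * r**2  # surface area with h eliminated
r_opt = solve(diff(A, r), r)[0]     # r = (150/pi)^(1/3)
print(r_opt, N(r_opt, 4))           # about 3.628 cm
```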
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/325036",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Linear Transformations - Direct Sum Let $U, V$, and $W$ be finite dimensional vector spaces over a field.
Suppose that $V\subset U$ is a subspace. Show that there is a subspace $W\subset U$ such that $U=V\oplus W$.
The only thing I know about this problem is that you have to use the null space. I'm pretty much lost! Any help would be appreciated!
| Hint: Start with a basis for $V$, $\{v_1,\ldots,v_k\}$, and extend it to a basis of $U$, $\{v_1,\ldots,v_k,u_{k+1},\ldots,u_n\}$. Now find a subspace of $U$ which has the properties that you want by using that extended basis.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/325093",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Common basis for subspace intersection
Let $ W_1 = \textrm{span}\left\{\begin{pmatrix}1\\2\\3\end{pmatrix}, \begin{pmatrix}2\\1\\1\end{pmatrix}\right\}$, and $ W_2 = \textrm{span}\left\{\begin{pmatrix}1\\0\\1\end{pmatrix}, \begin{pmatrix}3\\0\\-1\end{pmatrix}\right\}$. Find a basis for $W_1 \cap W_2$
I first thought of solving the augmented matrix:
$ \begin{pmatrix}1 && 2 && 1 && 3\\2 && 1 && 0 && 0\\3 && 1 && 1 && -1\end{pmatrix}$
But this matrix can have 3 pivots, and so its column space dimension can be at most 3 (which doesn't make sense, since the basis I'm looking for must have dimension 2).
So, what is the correct way to solve this exercise?
| The two subspaces are not the same, because $W_2$ has no extent along the second axis while $W_1$ does. The intersection is then a line in the $xz$ plane, which $W_2$ spans. If you can find a vector of $W_1$ in that plane, that is your basis.
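This reasoning can be confirmed mechanically (a sketch of mine, assuming `sympy`): put the generators of $W_1$ and the negated generators of $W_2$ as columns and read the intersection off the null space.

```python
from sympy import Matrix

# Columns: generators of W1, then negated generators of W2.
M = Matrix([[1, 2, -1, -3],
            [2, 1,  0,  0],
            [3, 1, -1,  1]])
for vec in M.nullspace():
    a, b = vec[0], vec[1]
    print(a * Matrix([1, 2, 3]) + b * Matrix([2, 1, 1]))  # Matrix([[3], [0], [-1]])
```

The single null-space vector yields the basis vector $(3,0,-1)^T$, which indeed lies in the $xz$ plane.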
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/325162",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 2
} |
Is there a bijection between $\mathbb N$ and $\mathbb N^2$?
Is there a bijection between $\mathbb N$ and $\mathbb N^2$?
If I can show $\mathbb N^2$ is equipotent to $\mathbb N$, I can show that $\mathbb Q$ is countable. Please help. Thanks,
| Yes. Imagine starting at $(1,1)$ and then zig-zagging diagonally across the quadrant. I'll leave you to formulate it.
Hint: for every natural number $>1$ there's a set of elements of $\mathbb{N}^2$ that add up to that number. For $2$, there's $(1,1)$. For $3$, there's $(2,1)$ and $(1,2)$...
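One explicit formulation of the zig-zag is the Cantor pairing function; here is a small sketch (mine, with $\mathbb{N}$ taken to start at $0$) showing the map and its inverse:

```python
def pair(x, y):
    """Cantor pairing: a bijection from N^2 to N (0-based)."""
    return (x + y) * (x + y + 1) // 2 + y

def unpair(z):
    """Inverse of the Cantor pairing function."""
    w = int(((8 * z + 1) ** 0.5 - 1) // 2)  # index of the diagonal x + y = w
    y = z - w * (w + 1) // 2
    return w - y, y

assert all(unpair(pair(x, y)) == (x, y) for x in range(100) for y in range(100))
print(pair(0, 0), pair(1, 0), pair(0, 1), pair(2, 0))  # 0 1 2 3
```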
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/325236",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11",
"answer_count": 5,
"answer_id": 0
} |
Prove that if matrix $A$ is nilpotent, then $I+A$ is invertible. So my friend and I are working on this and here is what we have so far.
We want to show that $\exists \, B$ s.t. $(I+A)B = I$. We considered the fact that $A^k = 0$ for some positive $k$, so that $I - A^k = I$. Now, if $B = (I-A+A^2-A^3+ \cdots -A^{k-1})$, then $(I+A)B = I-A^k = I$. My question is: in matrix $B$, why is the sign for $A^{k-1}$ negative? Couldn't it be positive, in which case we'd get $(I+A)B = I + A^k$?
Thank you.
| It's the usual polynomial identity
$$
1 - x^{k} = (1 - x)(1 + x + x^{2} + \dots + x^{k-1}),
$$
where you are substituting $x = -A$.
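A concrete check (mine, assuming `numpy`) with a $3\times 3$ nilpotent matrix; note that for $k=3$ the last term of $B$ is $+A^2$, with sign $(-1)^{k-1}$ exactly as in the polynomial identity:

```python
import numpy as np

A = np.array([[0, 1, 0],
              [0, 0, 1],
              [0, 0, 0]])           # A^3 = 0, so k = 3
I = np.eye(3)
B = I - A + A @ A                   # I + (-A) + (-A)^2, i.e. x = -A in the identity
print(np.allclose((I + A) @ B, I))  # True, since (I+A)B = I - (-A)^3 = I
```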
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/325318",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 3,
"answer_id": 0
} |
Intersection points of a Triangle and a Circle How can I find all intersection points of the following circle and triangle?
Triangle
$$A:=\begin{pmatrix}22\\-1.5\\1 \end{pmatrix} B:=\begin{pmatrix}27\\-2.25\\4 \end{pmatrix} C:=\begin{pmatrix}25.2\\-2\\4.7 \end{pmatrix}$$
Circle
$$\frac{9}{16}=(x-25)^2 + (y+2)^2 + (z-3)^2$$
What I did so far was to determine the line equations of the triangle (a, b and c):
$a : \overrightarrow {OX} = \begin{pmatrix}27\\-2.25\\4 \end{pmatrix}+ \lambda_1*\begin{pmatrix}-1.8\\0.25\\0.7 \end{pmatrix} $
$b : \overrightarrow {OX} = \begin{pmatrix}22\\-1.5\\1 \end{pmatrix}+ \lambda_2*\begin{pmatrix}3.2\\-0.5\\3.7 \end{pmatrix} $
$c : \overrightarrow {OX} = \begin{pmatrix}22\\-1.5\\1 \end{pmatrix}+ \lambda_3*\begin{pmatrix}5\\-0.75\\3 \end{pmatrix} $
But I am not sure what I have to do next...
| The side $AB$ of the triangle has equation $P(t) = (1-t)A + tB$ for $0 \le t \le 1$. The $0 \le t \le 1$ part is important. If $t$ lies outside $[0,1]$, the point $P(t)$ will lie on the infinite line through $A$ and $B$, but not on the edge $AB$ of the triangle. Substitute $(1-t)A + tB$ into the circle equation, as others have suggested. This will give you a quadratic equation in $t$ that you can solve using the well-known formula.
But, a solution $t$ will give you a circle/triangle intersection point only if it lies in the range $0 \le t \le 1$. Solutions outside this interval can be ignored.
Do the same thing with sides $BC$ and $AC$.
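In code, the whole procedure is a few lines per edge (a sketch of mine; the names and the segment parametrization follow the answer):

```python
import numpy as np

A = np.array([22, -1.5, 1])
B = np.array([27, -2.25, 4])
C = np.array([25.2, -2, 4.7])
center, r2 = np.array([25, -2, 3]), 9 / 16

def edge_hits(P, Q):
    """Intersections of P(t) = (1-t)P + tQ, 0 <= t <= 1, with the sphere."""
    d, m = Q - P, P - center
    a, b, c = d @ d, 2 * m @ d, m @ m - r2  # a t^2 + b t + c = 0
    disc = b * b - 4 * a * c
    if disc < 0:
        return []
    ts = [(-b - np.sqrt(disc)) / (2 * a), (-b + np.sqrt(disc)) / (2 * a)]
    return [P + t * d for t in ts if 0 <= t <= 1]

for P, Q in [(A, B), (B, C), (A, C)]:
    print(edge_hits(P, Q))
```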
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/325411",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
} |
Prove $f(x)=ax+b$ Let $f(x)$ be a continuous function on $\mathbb R$ that, for all $x\in(-\infty,+\infty)$, satisfies
$$ \lim_{h\rightarrow+\infty}{[f(x+h)-2f(x)+f(x-h)]}=0. $$
Prove that $f(x)=ax+b$ for some $a,b\in\mathbb R$.
This is a problem from my exercise book, but I can't figure out the solution of it, I think the solution in my book is wrong. :( Any idea and proof of it are welcome! Thank you in advance.
| Given $x\in\mathbb{R}$, the limit can be rewritten as
$$f(x)=\frac{1}{2}\lim_{h\to\infty}[f(x+h)+f(x-h)].\tag{1}$$
Given $y\in\mathbb{R}$, replacing $h$ with $h+y$ or $h-y$ in $(1)$, we have
$$f(x)=\frac{1}{2}\lim_{h\to\infty}[f(x+y+h)+f(x-y-h)],\quad \forall x\in\mathbb{R}.\tag{2}$$
and
$$f(x)=\frac{1}{2}\lim_{h\to\infty}[f(x-y+h)+f(x+y-h)],\quad \forall x\in\mathbb{R}.\tag{3}$$
Replacing $x$ with $x+y$ or $x-y$ in $(1)$ respectively, we have:
$$f(x+y)=\frac{1}{2}\lim_{h\to\infty}[f(x+y+h)+f(x+y-h)]\tag{4}$$
and
$$f(x-y)=\frac{1}{2}\lim_{h\to\infty}[f(x-y+h)+f(x-y-h)].\tag{5}$$
Comparing $(2)+(3)$ and $(4)+(5)$, we have:
$$2f(x)=f(x+y)+f(x-y),\tag{6}$$
or equivalently,
$$f(\frac{x+y}{2})=\frac{1}{2}[f(x)+f(y)].\tag{7}$$
$(7)$ together with the continuity of $f$ implies that $f$ is both convex and concave on $\mathbb{R}$, so $f$ must be a linear function.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/325490",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "15",
"answer_count": 3,
"answer_id": 0
} |
Find $x$ such that $\sum_{k=1}^{2014} k^k \equiv x \pmod {10}$ Find $x$ such that $$\sum_{k=1}^{2014} k^k \equiv x \pmod {10}$$
I know the answer is $3$.
| We are going to compute the sum mod $2$ and mod $5$. The Chinese Remainder Theorem then gives us the result mod $10$.
Mod $2$, obviously $$k^k \equiv \begin{cases}0 & \text{if }k\text{ even,}\\1 & \text{if }k\text{ odd,}\end{cases}$$
so
$$\sum_{k = 1}^{2014} k^k \equiv \frac{2014}{2} = 1007 \equiv 1 \mod 2.$$
By Fermat, $k^k$ mod $5$ only depends on the remainder of $k$ mod $\operatorname{lcm}(5, 4) = 20$. So $$\sum_{k = 1}^{2014}k^k \equiv \underbrace{100}_{\equiv 0} \cdot \sum_{k=1}^{20} k^k + \sum_{k=1}^{14} k^k \\ \equiv 1 + 4 + 2 + 1 + 0 + 1 + 3 + 1 + 4 + 0 + 1 + 1 + 3 + 1 \\\equiv 3 \mod 5.$$
Combining the results mod $2$ and mod $5$, $$\sum_{k=1}^{2014} k^k \equiv 3\mod 10.$$
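A one-line brute-force confirmation (mine, assuming Python):

```python
print(sum(pow(k, k, 10) for k in range(1, 2015)) % 10)  # 3
```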
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/325529",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 1
} |
Is there an analytic function satisfying the following inequality? Is there an analytic function $f$ in $\mathbb{C}\backslash \{0\}$ s.t. for every $z\ne0$: $$|f(z)|\ge\frac{1}{\sqrt{|z|}}\, ?$$
| How about this:
Since $f(z)$ is analytic on $\mathbb{C}-\{0\}$, $g(z) = \frac{1}{(f(z))^2}$ is analytic on
$\mathbb{C}-\{0\}$. Also $\bigg|\frac{g(z)}{z}\bigg| \leq 1$.
I am sure you will finish the rest (think about the order of the pole at $0$ and use Liouville's Theorem).
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/325588",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
$f(z)= az$ if $f$ is analytic and $f(z_{1}+z_{2})=f(z_{1})+f(z_{2})$ If $f$ is an analytic function with $f(z_{1}+z_{2})=f(z_{1})+f(z_{2})$, how can we show that $f(z)= az$ where $a$ is a complex constant?
| It's true under weaker assumptions, but let's do it by assuming that $f$ is analytic.
Fix $w \in \mathbb{C}$. Since $f(z+w) = f(z)+f(w)$, it follows that $f'(z+w) = f'(z)$ for all $z$. Hence $f'$ is constant, say $f'(z) = a$ which implies that $f(z) = az+c$.
Plug in $z_1 = z_2 = 0$ in the defining equation to conclude that $f(0) = 0$, so $f(z) = az$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/325637",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
} |
A probability question that involves $5$ dice For five dice that are thrown, I am struggling to find the probability of one number showing exactly three times and a second number showing twice.
For the one number showing exactly three times, the probability is:
$$
{5 \choose 3} \times \left ( \frac{1}{6} \right )^{3} \times \left ( \frac{5}{6}\right )^{2}
$$
However, I understand I cannot just multiply this by
$$
{5 \choose 2} \times \left ( \frac{1}{6} \right )^{2} \times \left ( \frac{5}{6}\right )^{3}
$$
as this includes the probability of picking the original number twice which allows the possibility of the same number being shown $5$ times. I am unsure of what to do next, I tried to write down all the combinations manually and got $10$ possible outcomes so for example if a was the value found $3$ times and $b$ was the value obtained $2$ times one arrangement would be '$aaabb$'. However I still am unsure of what to do after I get $10$ different possibilities and I am not sure how I could even get the $10$ different combinations mathematically. Any hints or advice on what to do next would be much appreciated.
| First, I assume they will all come out in neat order: first the three of a kind in a row, then the pair of a different number in a row. The probability of that happening is
$$
\frac{1}{6^2}\cdot \frac{5}{6}\cdot\frac{1}{6} = \frac{5}{6^4}
$$
The first die can be anything, but the next two have to be equal to that, so the $\frac{1}{6^2}$ comes from there. Then the fourth die has to be different, and the odds of that happening is the $\frac{5}{6}$ above, and lastly, the last die has to be the same as the fourth.
Now, we assumed that the three equal dice would come out first. There are other orders, a total of $\binom{5}{3}$. Multiply them, and you get the final answer
$$
\binom{5}{3}\frac{5}{6^4}
$$
You might reason another way. Let the event $A$ be "there are exactly 3 of one number" and $B$ be "There are exactly 2 of one number". Then we have that
$$
P(A\cap B) = P(A) \cdot P(B|A) = 6\cdot{5 \choose 3} \cdot \frac{1}{6^3} \cdot\frac{5^2}{6^2} \cdot \frac{1}{5} = \binom{5}{3}\frac{5}{6^4}
$$
Where $P(A)$ is six times the quantity you already calculated on your own (your formula fixes the value of the triple, which can be any of the six faces), and $P(B|A)$ is the probability that there is an exact pair given that there is an exact triple, which again is the probability that the two last dice are equal, and that is $\frac{1}{5}$.
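Exhaustive enumeration over all $6^5$ outcomes agrees with $\binom{5}{3}\frac{5}{6^4} = \frac{25}{648}$ (a check of mine, not part of the answer):

```python
from itertools import product
from collections import Counter
from fractions import Fraction
from math import comb

hits = sum(sorted(Counter(roll).values()) == [2, 3]      # exactly a pair and a triple
           for roll in product(range(1, 7), repeat=5))
print(Fraction(hits, 6**5), Fraction(comb(5, 3) * 5, 6**4))  # 25/648 25/648
```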
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/325703",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 2
} |
Eigenvalues of a rotation How do I show that the rotation by a nonzero angle $\theta$ in $\mathbb{R}^2$ does not have any real eigenvalues? I know the matrix of a rotation, but I don't know how to show the above proposition.
Thank you
| The characteristic polynomial is
$$x^2-2\cos(\theta) x+1.$$
Since its discriminant is $4\cos^2(\theta)-4\leq 0$, the polynomial can only have a real zero
when $|\cos(\theta)|=1$, i.e. when $\theta\equiv 0$ or $\pi \pmod{2\pi}$.
Indeed, $x^2-2x+1=(x-1)^2$ and $x^2+2x+1=(x+1)^2$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/325778",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 3,
"answer_id": 2
} |
Property of $10\times 10 $ matrix Let $A$ be a $10 \times 10$ matrix such that each entry is either $1$ or $-1$. Is it true that $\det(A)$ is divisible by $2^9$?
| An answer based on the comments by Ludolila and Erick Wong:
The answer follows from three easily proven rules:
* Adding or subtracting a row of a matrix from another does not change its determinant.
* Multiplying a row of the matrix by a constant $c$ multiplies the determinant by that constant.
* The determinant of a matrix with integer entries is an integer.
Take a matrix $A=(a_{ij})\in M_{10}(\mathbb{R})$ such that all its entries are either $1$ or $-1$. If $a_{11}=-1$, multiply the first row by $-1$. For $2\le i\le10$, subtract $a_{i1}(a_{1\to})$ (where $a_{1\to}$ is the first row of $A$) from $a_{i\to}$.
Now rows $2$ through $10$ consist only of $0$'s and $\pm2$'s. Divide each of these nine rows by $2$ to obtain a matrix $B$ that has entries only in $\{-1,0,1\}$.
Note that $\det B = \pm 2^{-9} \det A$ following rules 1 and 2.
Following rule 3, $\det B$ is an integer, so $\det A = 2^9 \cdot n$ where $n$ is an integer.
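A randomized check with exact integer arithmetic (my addition, assuming `sympy` so that the determinants are exact):

```python
import random
from sympy import Matrix

random.seed(0)
for _ in range(10):
    A = Matrix(10, 10, lambda i, j: random.choice([-1, 1]))
    assert A.det() % 2**9 == 0
print("det divisible by 2^9 in all trials")
```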
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/325845",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 2,
"answer_id": 0
} |
Separation of function When can a function of 2 variables, say $h(x,y)$, be written as $$\sum_i f_i(x)g_i(y)\,?$$ I want to know what conditions on $h$ would ensure this kind of separation.
| If you don't require the sum to be finite then essentially anything expandable in a two dimensional Fourier Series will satisfy what you want. For example if $f(x,y)$ is defined on the unit square, then
$$f(x,y)=\sum_{m,n\in\mathbb{Z}}a_{m,n}e^{2\pi i m x}e^{2\pi i n y}$$
for appropriate coefficients $a_{m,n}$. For example, if $f(x,y)$ is continuously differentiable, then the series converges uniformly. Otherwise, you have almost everywhere convergence for any $L^2$ function by Carleson's theorem.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/325902",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Is there any specific formula for $\log{f(z)}$? Let $f(z)$ be a nonvanishing analytic function on a simply connected region $\Omega$. Then there is an analytic function $g(z)$ such that $e^{g(z)}=f(z)$. Is there any specific formula for $g(z)$?
(By specific formula I mean, for example, on the region $\mathbb{C}-\{x\le 0\}$ we know $$\log^{[k]}{z}=\log{|z|}+i\arg{z}+i2k\pi$$ where $k$ is an integer, and $\log^{[k]}{z}$ is holomorphic on $\mathbb{C}-\{x\le 0\}$.)
EDIT: Let me make my question clear. I know that we can use integral to define $\log{f}$. But that's not what I'm looking for. Let me take this example to explain what I want:
Let $f(z)=z^9$ and $\Omega$ the region $Re(z)>1$. Then there is a holomorphic function $g(z)$ on $\Omega$ such that $e^{g(z)}=f(z)=z^9$ and that $g(x)=9\log{x}$ for real $x>1$. In this case the formula I want is: $$g(z)=9\log|z|+9i\arg z$$ where $\arg z\in (−\pi,\pi)$.
| Hint: Try defining your function as an integral of a certain function from a fixed point $z_0$ to $z$ (the well-definedness [i.e. path independence] of which comes from simple connectedness).
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/325968",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
} |
Probability of rolling three dice without getting a 6 I am having trouble understanding how you get $91/216$ as the answer to this question.
Say a die is rolled three times.
What is the probability that at least one roll is a 6?
| There are two answers already that express the probability as $$1-\left(\frac56\right)^3 = \frac{91}{216}.$$
I'd like to point out that a more complicated, but more direct calculation gets to the same place. Let's let 6 represent a die that comes up a 6, and X a die that comes up with something else. Then we might distinguish eight cases for how the dice can come up:
666
66X
6X6
X66
6XX
X6X
XX6
XXX
We can easily calculate the probabilities for each of these eight cases. Each die has a $\frac16$ probability of showing a 6, and a $\frac56$ probability of showing something else, which we represented with X. To get the probability for a combination like 6X6 we multiply the three probabilities for the three dice; in this case $\frac16\cdot\frac56\cdot\frac16 = \frac5{216}$. This yields the following probabilities:
$$\begin{array}{|r|ll|}
\hline
\mathtt{666} & \frac16\cdot\frac16\cdot\frac16 & = \frac{1}{216} \\
\hline
\mathtt{66X} & \frac16\cdot\frac16\cdot\frac56 & = \frac{5}{216} \\
\mathtt{6X6} & \frac16\cdot\frac56\cdot\frac16 & = \frac{5}{216} \\
\mathtt{X66} & \frac56\cdot\frac16\cdot\frac16 & = \frac{5}{216} \\
\hline
\mathtt{6XX} & \frac16\cdot\frac56\cdot\frac56 & = \frac{25}{216} \\
\mathtt{X6X} & \frac56\cdot\frac16\cdot\frac56 & = \frac{25}{216} \\
\mathtt{XX6} & \frac56\cdot\frac56\cdot\frac16 & = \frac{25}{216} \\
\hline
\mathtt{XXX} & \frac56\cdot\frac56\cdot\frac56 & = \frac{125}{216} \\
\hline
\end{array}
$$
The cases that we want are those that have at least one 6, which are the first seven lines of the table, and the sum of the probabilities for these lines is $$\frac{1}{216}+\frac{5}{216}+\frac{5}{216}+\frac{5}{216}+
\frac{25}{216}+\frac{25}{216}+\frac{25}{216} = \color{red}{\frac{91}{216}}$$
just as everyone else said.
Since the first 7 lines together with the 8th line account for all possible throws of the dice, together they add up to a probability of $\frac{216}{216} = 1$, and that leads to the easier way to get to the correct answer: instead of calculating and adding the first 7 lines, just calculate the 8th line, $\frac{125}{216}$ and subtract it from 1.
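The same numbers fall out of a brute-force enumeration (a tiny check of mine):

```python
from itertools import product
from fractions import Fraction

rolls = list(product(range(1, 7), repeat=3))
print(Fraction(sum(6 in roll for roll in rolls), len(rolls)))  # 91/216
```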
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/326034",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 5,
"answer_id": 1
} |
What is a Ramsey Graph? What is a Ramsey graph, and what is its relation to Ramsey's theorem?
In Ramsey Theorem:
for a pairs of parameters (r,b) there exists an n such that for every (edge-)coloring of the complete graph on n vertices with colors r(ed) and b(lue) there will exist a complete subgraph on r vertices colored red or a complete subgraph on b vertices colored blue.
Ramsey Graph :
A Ramsey graph is a graph with n vertices, no clique of size s, and no independent set of size t.
I couldn't understand how the above two are related.
Can anyone explain what a Ramsey graph is, as simply as possible (in terms of coloring)?
| Instead of considering a complete graph $K_n$ whose edges are red and blue, just consider some graph $G$ with $n$ vertices. Let $\bar G$ be the complement of $G$. $\bar G$ contains a clique of size $s$ if and only if $G$ contains an independent set of size $s$.
Now consider $G$ as a subgraph of $K_n$. Color the edges of $G$ in $K_n$ red, and color the other edges of the $K_n$, that is the edges of $\bar G$, blue.
Now $K_n$ contains a red $K_r$ if and only if $G$ contains a clique of size $r$, and $K_n$ contains a blue $K_b$ if $\bar G$ contains a clique of size $b$, which is true if and only if $G$ contains an independent set of size $b$.
The Ramsey theorem says that for given $r$ and $b$ there is an $n$ such that (what you said about $K_n$). A Ramsey graph with $q$ vertices shows that the $n$ whose existence the theorem guarantees must be larger than $q$.
Here is an example. Let's take $r=b=3$. The Ramsey theorem says that there is some $n$ such that if we color edges of $K_n$ in red and blue, there is either a red triangle or a blue triangle.
There are many Ramsey graphs. For example, consider $K_2$. It has neither a clique of size $r=3$ nor an independent set of size $b=3$, and therefore it is a Ramsey graph for $r=3, b=3$; it shows that the $n$ of the previous paragraph must be bigger than 2.
Now consider $C_5$, the cycle on five vertices. It has neither a clique of size $r=3$ nor an independent set of size $b=3$, and therefore it is a Ramsey graph for $r=3, b=3$; it shows that the $n$ given by the Ramsey theorem must be bigger than 5.
The Ramsey theorem claims that when $n$ is large enough, there is no Ramsey graph with $n$ or more vertices.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/326120",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
} |
$\lim_{n\to\infty}(\sqrt{n^2+n}-\sqrt{n^2+1})$ How to evaluate $$\lim_{n\to\infty}(\sqrt{n^2+n}-\sqrt{n^2+1})$$
I'm completely stuck on it.
| A useful general approach to limits is, in your scratch work, to take every complicated term and replace it with a similar approximate term.
As $n$ grows large, $\sqrt{n^2 + n}$ looks like $\sqrt{n^2} = n$. More precisely,
$$ \sqrt{n^2 + n} = n + o(n) $$
where I've used little-o notation. In terms of limits, this means
$$ \lim_{n \to \infty} \frac{\sqrt{n^2 + n} - n}{n} = 0 $$
but little-o notation makes it much easier to express the intuitive idea being used.
Unfortunately, $\sqrt{n^2 + 1}$ also looks like $n$. Combining these estimates,
$$ \sqrt{n^2 + n} - \sqrt{n^2 + 1} = (n + o(n)) - (n + o(n)) = o(n) $$
Unfortunately, this cancellation has clobbered all of the precision of our estimates! All this analysis reveals is
$$ \lim_{n \to \infty} \frac{\sqrt{n^2 + n} - \sqrt{n^2+1}}{n} = 0 $$
which isn't good enough to answer the problem. So, we need a better estimate.
A standard way to get better estimates is differential approximation. While the situation at hand is a little awkward, there is fortunately a standard trick to deal with square roots, or any power:
$$ \sqrt{n^2 + n} = n \sqrt{1 + \frac{1}{n}} $$
and now we can invoke differential approximation (or Taylor series)
$$ f(x+h) = f(x) + h f'(x) + o(h) $$
with $f(x) = \sqrt{x}$ at $x=1$, $h = \frac{1}{n}$ to get
$$ \sqrt{n^2 + n} = n \left( 1 + \frac{1}{2n} + o\left(\frac{1}{n} \right)\right)
= n + \frac{1}{2} + o(1) $$
or equivalently in limit terms,
$$ \lim_{n \to \infty} \sqrt{n^2 + n} - n - \frac{1}{2} = 0$$
similarly,
$$ \sqrt{n^2 + 1} = n + o(1)$$
and we get
$$ \lim_{n \to \infty} \sqrt{n^2 + n} - \sqrt{n^2 + 1}
= \lim_{n \to \infty} (n + \frac{1}{2} + o(1)) - (n + o(1))
= \lim_{n \to \infty} \frac{1}{2} + o(1) = \frac{1}{2} $$
If we didn't realize that trick, there are a few other tricks to do, but there is actually a straightforward way to proceed too. Initially, simply taking the Taylor series for $g(x) = \sqrt{n^2 + x}$ around $x=0$ doesn't help, because that gives
$$ g(x) = n + \frac{1}{2} \frac{x}{n} + O(x^2) $$
the Taylor series for $h(x) = \sqrt{x^2 + x}$ doesn't help either. But this is why we pay attention to the remainder term! One form of the Taylor remainder says that:
$$ g(x) = n + \frac{1}{2} \frac{x}{n} - \frac{1}{8} \left( n^2 + c \right)^{-3/2} x^2$$
for some $c$ between $0$ and $x$. It's easy to bound this error term for $x > 0$.
$$ \left| \frac{1}{8} \left( n^2 + c \right)^{-3/2} x^2 \right|
\leq \left| \frac{1}{8} \left( n^2 \right)^{-3/2} x^2 \right|
= \left| \frac{x^2}{8 n^3} \right| $$
So, for $x > 0$,
$$ g(x) = n + \frac{1}{2} \frac{x}{n} + O\left( \frac{x^2}{n^3} \right) $$
(note I've switched to big-O). Plugging in $n$ gives
$$ g(n) = n + \frac{1}{2} + O\left( \frac{1}{n} \right) $$
which gives the approximation we need (better than we need, actually). (One could, of course, simply stick to limits rather than use big-O notation)
This is not the simplest way to solve the problem, but I wanted to demonstrate a straightforward application of the tools you have learned (or will soon learn) to solve a problem in the case that you can't find a 'clever' approach.
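A quick numeric check of the result (mine, assuming Python; $n$ is kept moderate to avoid floating-point cancellation):

```python
import math

for n in [10**2, 10**4, 10**6]:
    print(n, math.sqrt(n*n + n) - math.sqrt(n*n + 1))  # tends to 0.5
```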
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/326253",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 4,
"answer_id": 3
} |
Solution of $19 x \equiv 1 \pmod{35}$ $19 x \equiv 1 \pmod{35}$
For this, may I know how to get the smallest value of $x$? I know that Euler's theorem gives $19^{\varphi(35)} = 19^{24} \equiv 1 \pmod {35}$, so $x \equiv 19^{23}$ works. But I don't think it is the smallest.
| Hint $\rm\,\ mod\ 35\!:\,\ 19x\equiv 1\iff x\equiv \dfrac{1}{19}\equiv \dfrac{2}{38}\equiv \dfrac{2}3\equiv \dfrac{24}{36}\equiv\dfrac{24}1$
Remark $\ $ We used Gauss's algorithm for computing inverses $\rm\:mod\ p\:$ prime.
Beware $\ $ One can employ fractions $\rm\ x\equiv b/a\ $ in modular arithmetic (as above) only when the fractions have denominator $ $ coprime $ $ to the modulus $ $ (else the fraction may not uniquely exist, $ $ i.e. the equation $\rm\: ax\equiv b\,\ (mod\ m)\:$ might have no solutions, or more than one solution). The reason why such fraction arithmetic works here (and in analogous contexts) will become clearer when one learns about the universal properties of fraction rings (localizations).
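For comparison, the extended Euclidean algorithm produces the same inverse (a sketch of mine; Python 3.8+ can also do `pow(19, -1, 35)` directly):

```python
def inv_mod(a, m):
    """Extended Euclid: return x with a*x = 1 (mod m), assuming gcd(a, m) = 1."""
    r0, r1, x0, x1 = a, m, 1, 0
    while r1:
        q = r0 // r1
        r0, r1 = r1, r0 - q * r1
        x0, x1 = x1, x0 - q * x1
    return x0 % m

print(inv_mod(19, 35), pow(19, -1, 35))  # 24 24
```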
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/326321",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 5,
"answer_id": 2
} |
How to write the following expression in index notation? I would like to know how I can write $ ||\vec{a} \times(\nabla \times \vec{a})||^2 $ and $(\vec{a} \cdot (\nabla \times \vec{a}))^2$ in index notation, if $\vec{a}=(a_1,a_2,a_3)$.
Thank you for reading/replying
EDIT: found the second one: $(\vec{a} \cdot (\nabla \times \vec{a}))^2 = a_ia_ja_{k,i}a_{k,j}$
The first one can also be written as $ ||\vec{a} \times(\nabla \times \vec{a})||^2 = (a_ie_{ijk}a_{k,j})^2 $ but if one finds a better expression let me know!
| To do this I would use the Levi-Civita symbol and its properties in 3 dimensions.
(from Wikipedia:)
Definition:
\begin{equation}
\varepsilon_{ijk}=
\left\{
\begin{array}{l}
+1 \quad \text{if} \quad (i,j,k)\ \text{is}\ (1,2,3),(3,1,2)\ \text{or}\ (2,3,1)\\
-1 \quad \text{if} \quad (i,j,k)\ \text{is}\ (1,3,2),(3,2,1)\ \text{or}\ (2,1,3)\\
\ \ \ 0\quad \text{if} \quad i=j\ \text{or}\ j=k\ \text{or}\ k=i
\end{array}
\right.
\end{equation}
Vector product:
\begin{equation}
\vec a\times \vec b=\sum_{i=1}^3\sum_{j=1}^3\sum_{k=1}^3\varepsilon_{ijk}\vec e_ia^jb^k
\end{equation}
Component of a vector product:
\begin{equation}
(\vec a\times \vec b)_i=\sum_{j=1}^3\sum_{k=1}^3\varepsilon_{ijk}a^jb^k
\end{equation}
Scalar triple product:
\begin{equation}
\vec a\cdot(\vec b\times\vec c)=\sum_{i=1}^3\sum_{j=1}^3\sum_{k=1}^3\varepsilon_{ijk}a^ib^jc^k
\end{equation}
Useful properties:
\begin{equation}
\sum_{i=1}^3\varepsilon_{ijk}\varepsilon^{imn}=\delta_j^{\ m}\delta_k^{\ n}-\delta_j^{\ n}\delta_k^{\ m}
\end{equation}
\begin{equation}
\sum_{m=1}^3\sum_{n=1}^3\varepsilon_{jmn}\varepsilon^{imn}=2\delta_{\ j}^{i}
\end{equation}
\begin{equation}
\sum_{i=1}^3\sum_{j=1}^3\sum_{k=1}^3\varepsilon_{ijk}\varepsilon^{ijk}=6
\end{equation}
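The symbol and the contraction identities are easy to verify numerically (a sketch of mine, assuming `numpy`):

```python
import numpy as np

eps = np.zeros((3, 3, 3))
for (i, j, k), sign in [((0, 1, 2), 1), ((1, 2, 0), 1), ((2, 0, 1), 1),
                        ((0, 2, 1), -1), ((2, 1, 0), -1), ((1, 0, 2), -1)]:
    eps[i, j, k] = sign

d = np.eye(3)
# epsilon_{ijk} epsilon^{imn} = delta_j^m delta_k^n - delta_j^n delta_k^m
lhs = np.einsum('ijk,imn->jkmn', eps, eps)
rhs = np.einsum('jm,kn->jkmn', d, d) - np.einsum('jn,km->jkmn', d, d)
print(np.allclose(lhs, rhs), np.einsum('ijk,ijk->', eps, eps))  # True 6.0
```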
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/326379",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Radicals and direct sums. Let $A$ be a $K$-algebra and $M$, $N$ be right $A$-submodules of a right $A$-module $L$, with $M \cap N =0$. How do I show that $(M\oplus N) \text{rad} A = M \text{rad} A \oplus N \text{rad} A$? Let $m \in M$, $n\in N$, $x\in \text{rad} A$. Since $(m+n)x=mx+nx$, we have $(m+n)x \in M \text{rad} A \oplus N \text{rad} A$. Since $M, N$ are submodules, $mx \in M$ and $nx\in N$. Therefore $ M \text{rad} A \cap N \text{rad} A = 0$. Is this true? Thank you very much.
| Sure, it's obvious that for any $S \subseteq A$, we have $MS\subseteq M$ and $NS\subseteq N$. That means $MS\cap NS\subseteq M\cap N =\{0\}.$
But this has little to do with the sum being direct: the real question here is how to get the equality
$$(M\oplus N) \text{rad} A = M \text{rad} A \oplus N \text{rad} A.$$
The definition of the $A$ action on the direct sum by "distribution" proves that the left-hand side is contained in the right-hand side.
The other containment is also true, but seeing that hinges on your proper understanding of the definition of $MI$ where $M$ is an $A$ module and $I\lhd A$. Do you see why the final containment holds?
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/326427",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Prove $a\sqrt[3]{a+b}+b\sqrt[3]{b+c}+c\sqrt[3]{c+a} \ge 3 \sqrt[3]2$ Prove $a\sqrt[3]{a+b}+b\sqrt[3]{b+c}+c\sqrt[3]{c+a} \ge 3 \sqrt[3]2$, where $a+b+c=3$ and $a,b,c\in \mathbb{R}^+$.
I tried power mean inequalities but I still can't prove it.
| Here is my proof.
By the AM-GM inequality, we have
$$ a\sqrt[3]{a+b}=\frac{3\sqrt[3]{2}a(a+b)}{3\sqrt[3]{2(a+b)(a+b)}}\geq 3\sqrt[3]{2}\cdot \frac{a(a+b)}{2+2a+2b} $$
Thus, it suffices to prove that
$$ \frac{a(a+b)}{a+b+1}+\frac{b(b+c)}{b+c+1}+\frac{c(c+a)}{c+a+1}\geq 2 $$
Or
$$ \frac{a}{a+b+1}+\frac{b}{b+c+1}+\frac{c}{c+a+1}\leq 1 $$
After homogenizing, it becomes
$$\frac{a}{4a+4b+c}+\frac{b}{4b+4c+a}+\frac{c}{4c+4a+b}\leq \frac{1}{3} $$
Now multiply both sides by $4a+4b+4c$; we can rewrite the inequality as
$$ \frac{9ca}{4a+4b+c}+\frac{9ab}{4b+4c+a}+\frac{9bc}{4c+4a+b}\leq a+b+c $$
Using the Cauchy-Schwarz inequality, we have
$$ \frac{9}{4a+4b+c}=\frac{(2+1)^2}{2(2a+b)+(2b+c)}\le \frac{2}{2a+b}+\frac{1}{2b+c} $$
Therefore
\begin{align}
\sum{\frac{9ca}{4a+4b+c}}&\leq \sum{\left(\frac{2ca}{2a+b}+\frac{ca}{2b+c}\right)}\\
&=a+b+c
\end{align}
Hence we are done!
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/326500",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
} |
Limit of definite sum equals $\ln(2)$ I have to show the following equality:
$$\lim_{n\to\infty}\sum_{i=\frac{n}{2}}^{n}\frac{1}{i}=\log(2)$$
I've been playing with it for almost an hour, mainly with the Taylor expansion of $\ln(2)$. It looks very similar to what I need, but it has an alternating sign that gets in my way.
Can anyone point me in the right direction?
| Truncate the Maclaurin series for $\log(1+x)$ at the $2m$-th term, and evaluate at $x=1$. Take for example $m=10$. We get
$$1-\frac{1}{2}+\frac{1}{3}-\frac{1}{4}+\frac{1}{5}-\frac{1}{6}+\cdots+\frac{1}{19}-\frac{1}{20}.$$
Add $2\left(\frac{1}{2}+\frac{1}{4}+\frac{1}{6}+\cdots +\frac{1}{20}\right)$, and subtract the same thing, but this time noting that
$$ 2\left(\frac{1}{2}+\frac{1}{4}+\frac{1}{6}+\cdots +\frac{1}{20}\right)=1+\frac{1}{2}+\frac{1}{3}+\cdots+\frac{1}{10}.$$
We get
$$\left(1+\frac{1}{2}+\frac{1}{3}+\cdots+\frac{1}{10}+\frac{1}{11}+\cdots+\frac{1}{20}\right)-\left(1+\frac{1}{2}+\frac{1}{3}+\cdots+\frac{1}{10}\right).$$
There is nice cancellation, and we get
$$\frac{1}{11}+\frac{1}{12}+\cdots+\frac{1}{20}.$$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/326545",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 5,
"answer_id": 2
} |
How to calculate the asymptotic expansion of $\sum \sqrt{k}$? Denote $u_n:=\sum_{k=1}^n \sqrt{k}$. We can easily see that
$$ k^{1/2} = \frac{2}{3} (k^{3/2} - (k-1)^{3/2}) + O(k^{-1/2}),$$
hence $\sum_1^n \sqrt{k} = \frac{2}{3}n^{3/2} + O(n^{1/2})$, because $\sum_1^n O(k^{-1/2}) =O(n^{1/2})$.
With some more calculations, we get
$$ k^{1/2} = \frac{2}{3} (k^{3/2} - (k-1)^{3/2}) + \frac{1}{2} (k^{1/2}-(k-1)^{1/2}) + O(k^{-3/2}),$$
hence $\sum_1^n \sqrt{k} = \frac{2}{3}n^{3/2} + \frac{1}{2} n^{1/2} + C + O(n^{-1/2})$ for some constant $C$, because $\sum_n^\infty O(k^{-3/2}) = O(n^{-1/2})$.
Now let's go further. I have made the following calculation
$$k^{1/2} = \frac{2}{3} \Delta_{3/2}(k) + \frac{1}{2} \Delta_{1/2}(k) + \frac{1}{24} \Delta_{-1/2}(k) + O(k^{-5/2}),$$
where $\Delta_\alpha(k) = k^\alpha-(k-1)^{\alpha}$. Hence :
$$\sum_{k=1}^n \sqrt{k} = \frac{2}{3} n^{3/2} + \frac{1}{2} n^{1/2} + C + \frac{1}{24} n^{-1/2} + O(n^{-3/2}).$$
And one can continue ad vitam aeternam, but the only term I don't know how to compute is the constant term.
How do we find $C$ ?
| Let us substitute into the sum
$$\sqrt k=\frac{1}{\sqrt \pi }\int_0^{\infty}\frac{k e^{-kx}dx}{\sqrt x}. $$
Exchanging the order of summation and integration and summing the derivative of geometric series, we get
\begin{align*}
\mathcal S_N:=
\sum_{k=1}^{N}\sqrt k&=\frac{1}{\sqrt \pi }\int_0^{\infty}\frac{\left(e^x-e^{-(N-1)x}\right)-N\left(e^{-(N-1)x}-e^{-Nx}\right)}{\left(e^x-1\right)^2}\frac{dx}{\sqrt x}=\\&=\frac{1}{2\sqrt\pi}\int_0^{\infty}
\left(N-\frac{1-e^{-Nx}}{e^x-1}\right)\frac{dx}{x\sqrt x}.
\end{align*}
To extract the asymptotics of the above integral it suffices to slightly elaborate the method used to answer this question. Namely
\begin{align*}
\mathcal S_N&=\frac{1}{2\sqrt\pi}\int_0^{\infty}
\left(N-\frac{1-e^{-Nx}}{e^x-1}+\left(1-e^{-Nx}\right)\left(\frac1x-\frac12\right)-\left(1-e^{-Nx}\right)\left(\frac1x-\frac12\right)\right)\frac{dx}{x\sqrt x}=\\
&={\color{red}{\frac{1}{2\sqrt\pi}\int_0^{\infty}\left(1-e^{-Nx}\right)\left(\frac1x-\frac12-\frac{1}{e^x-1}\right)\frac{dx}{x\sqrt x}}}+\\&+
{\color{blue}{\frac{1}{2\sqrt\pi}\int_0^{\infty}
\left(N-\left(1-e^{-Nx}\right)\left(\frac1x-\frac12\right)\right)\frac{dx}{x\sqrt x}}}.
\end{align*}
The reason to decompose $\mathcal S_N$ in this way is that
* the red integral has an easily computable finite limit: since $\frac1x-\frac12-\frac{1}{e^x-1}=O(x)$ as $x\to 0$, we can simply neglect the exponential $e^{-Nx}$.
* the blue integral can be computed exactly.
Therefore, as $N\to \infty$, we have
$$\mathcal S_N={\color{blue}{\frac{\left(4N+3\right)\sqrt N}{6}}}+
{\color{red}{\frac{1}{2\sqrt\pi}\int_0^{\infty}\left(\frac1x-\frac12-\frac{1}{e^x-1}\right)\frac{dx}{x\sqrt x}+o(1)}},$$
and the finite part you are looking for is given by
$$C=\frac{1}{2\sqrt\pi}\int_0^{\infty}\left(\frac1x-\frac12-\frac{1}{e^x-1}\right)\frac{dx}{x\sqrt x}=\zeta\left(-\frac12\right).$$
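The constant can be confirmed numerically against the expansion from the question (my addition, assuming `mpmath`):

```python
from mpmath import mp, mpf, sqrt, zeta

mp.dps = 30
N = 100000
S = sum(sqrt(mpf(k)) for k in range(1, N + 1))
C = S - (2 * mpf(N)**mpf('1.5') / 3 + sqrt(N) / 2 + 1 / (24 * sqrt(N)))
print(C)           # about -0.2078862...
print(zeta(-0.5))  # same value
```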
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/326617",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11",
"answer_count": 3,
"answer_id": 0
} |
How to prove **using SVD** that nonsingular matrices are dense in $\mathbb{C}^{n \times n}$? How to show using SVD that the set of nonsingular matrices is dense in $\mathbb{C}^{n \times n}$? That is, for any $A \in \mathbb{C}^{n \times n}$, and any given $\varepsilon > 0$, there exists a nonsingular matrix $A_\varepsilon \in \mathbb{C}^{n \times n}$ such that:
$\left \| A-A_\varepsilon \right \| \le \varepsilon$.
| In a finite-dimensional space all norms are equivalent, so we may choose a submultiplicative (algebra) norm for which unitary matrices have norm $1$, e.g. the spectral norm.
Let $A=U\Sigma V^*$ be the singular value decomposition, where $U$ and $V$ are unitary matrices (so $||U||=||V||=1$) and $\Sigma=\operatorname{diag}(\sigma_n,\ldots,\sigma_1)$ with
$$\sigma_n\geq\cdots\geq\sigma_1\geq0.$$
Let $\Sigma_p=diag(\sigma_n+1/p,\ldots,\sigma_1+1/p)$ and $A_p=U\Sigma_pV^*\in GL_n(\mathbb{C})$ then we have:
$$||A-A_p||=||U(\Sigma-\Sigma_p)V^*||\leq||U||||\Sigma-\Sigma_p||||V^*||=||\Sigma-\Sigma_p||=\frac{1}{p}||I_n||,$$
so
$$\lim_{p\to\infty}||A-A_p||=0,$$
and we conclude.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/326666",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Need help proving this integration If $a>b>0$, prove that :
$$\int_0^{2\pi} \frac{\sin^2\theta}{a+b\cos\theta}\ d\theta = \frac{2\pi}{b^2} \left(a-\sqrt{a^2-b^2} \right) $$
| I'll do this one: $$\int_{0}^{2\pi}\frac{\cos(2\theta)}{a+b\cos(\theta)}d\theta.$$ If we know how to do this one, you can replace $\sin^{2}(\theta)$ by $\frac{1}{2}(1-\cos(2\theta))$ and do the same thing. If we are on the unit circle, we know that $\cos(\theta)=\frac{e^{i\theta}+e^{-i\theta}}{2}$, so by letting $z=e^{i\theta}$ we will get $$\cos(\theta)=\frac{z+\frac{1}{z}}{2}=\frac{z^2+1}{2z}$$ and $$\cos(2\theta)=\frac{z^2+\frac{1}{z^2}}{2}=\frac{z^4+1}{2z^2}.$$ Thus, if $\gamma: |z|=1$, the integral becomes $$\int_{\gamma}\frac{\frac{z^4+1}{2z^2}}{a+b\frac{z^2+1}{2z}}\frac{1}{iz}dz=\int_{\gamma}\frac{-i(z^4+1)}{2z^2(bz^2+2az+b)}dz.$$ Now the roots of $bz^2+2az+b$ are $z=\frac{-2a\pm \sqrt{4a^2-4b^2}}{2b}=\frac{-a}{b}\pm\frac{\sqrt{a^2-b^2}}{b}$; you can check that the only root inside $|z|=1$ is $z_1=\frac{-a}{b}+\frac{\sqrt{a^2-b^2}}{b}$. So the only singularities of the integrand inside $\gamma$ are $z_0=0$ and $z_1$, both poles. Find the residues and sum them to get the answer. Notice that this will only give you "half" the answer; you still have to do $$\frac{1}{2}\int_{0}^{2\pi}\frac{d\theta}{a+b\cos(\theta)}$$ in the same way to get the full answer.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/326714",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 4,
"answer_id": 3
} |
Laplace transform of the Bessel function of the first kind I want to show that $$ \int_{0}^{\infty} J_{n}(bx) e^{-ax} \, dx = \frac{(\sqrt{a^{2}+b^{2}}-a)^{n}}{b^{n}\sqrt{a^{2}+b^{2}}}\ , \quad \ (n \in \mathbb{Z}_{\ge 0} \, , \text{Re}(a) >0 , \, b >0 ),$$ where $J_{n}(x)$ is the Bessel function of the first kind of order $n$.
But the result I get using an integral representation of $J_{n}(bx)$ is off by a factor of $ \displaystyle \frac{1}{b}$, and I don't understand why.
$$ \begin{align} \int_{0}^{\infty} J_{n}(bx) e^{-ax} \, dx &= \frac{1}{2 \pi} \int_{0}^{\infty} \int_{-\pi}^{\pi} e^{i(n \theta -bx \sin \theta)} e^{-ax} \, d \theta \, dx \\ &= \frac{1}{2 \pi} \int_{-\pi}^{\pi} \int_{0}^{\infty} e^{i n \theta} e^{-(a+ib \sin \theta)x} \, dx \, d \theta \\ &= \frac{1}{2 \pi} \int_{-\pi}^{\pi} \frac{e^{i n \theta}}{a + ib \sin \theta} \, d \theta \\ &= \frac{1}{2 \pi} \int_{|z|=1} \frac{z^{n}}{a+\frac{b}{2} \left(z-\frac{1}{z} \right)} \frac{dz} {iz} \\ &= \frac{1}{i\pi} \int_{|z|=1} \frac{z^{n}}{bz^{2}+2az-b} \, dz \end{align}$$
The integrand has simple poles at $\displaystyle z= -\frac{a}{b} \pm \frac{\sqrt{a^{2}+b^{2}}}{b}$.
But only the pole at $\displaystyle z= -\frac{a}{b} + \frac{\sqrt{a^{2}+b^{2}}}{b}$ is inside the unit circle.
Therefore,
$$ \begin{align} \int_{0}^{\infty} J_{n}(bx) e^{-ax} \, dx &= \frac{1}{i \pi} \, 2 \pi i \ \text{Res} \left[ \frac{z^{n}}{bz^{2}+2az-b}, -\frac{a}{b} + \frac{\sqrt{a^{2}+b^{2}}}{b} \right] \\ &= {\color{red}{b}} \ \frac{(\sqrt{a^{2}+b^{2}}-a)^{n}}{b^{n}\sqrt{a^{2}+b^{2}}} . \end{align}$$
| Everything is correct up until your computation of the residue. Write
$$bz^2+2az-b=b(z-z_+)(z-z_-)$$
where
$$z_\pm=-\frac{a}{b}\pm\frac{\sqrt{a^2+b^2}}{b}$$
as you have determined. Now,
$${\rm Res}\Bigg(\frac{z^n}{b(z-z_+)(z-z_-)};\quad z=z_+\Bigg)=\lim_{z\to z_+} (z-z_+)\frac{z^n}{b(z-z_+)(z-z_-)}$$
and here you get the desired factor of $1/b$.
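A numeric verification of the corrected formula (mine, assuming `scipy`; the parameter values are arbitrary):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import jv

a, b, n = 1.3, 0.7, 2
lhs, _ = quad(lambda x: jv(n, b * x) * np.exp(-a * x), 0, np.inf)
s = np.sqrt(a**2 + b**2)
rhs = (s - a)**n / (b**n * s)
print(lhs, rhs)  # agree to quadrature accuracy
```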
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/326778",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 1,
"answer_id": 0
} |
On the Definition of Posets... In my book, the author defines posets formally in the following way:
Let $P$ be a set, and let $\le$ be a relationship on $P$ so that,
$a$. $\le$ is reflexive.
$b$. $\le$ is transitive.
$c$. $\le$ is antisymmetric.
Say for $a$: does this merely mean that every element $x\in P$ should always be related to itself? And for $b$: if $x$ has the relation to $y$ and $y$ has the relation to $z$, does this imply that $x$ has the relation to $z$?
Moreover, when trying to determine if something is a poset, do I just have to determine whether such a relation exists? And that relation is not necessarily the usual meaning of "$\le$"?
| Your statements are correct.
On your last question: A poset is a pair ($P$, $R$), where $P$ is a set and $R$ is a relation on $P$ (which must have properties a, b and c). So the relation is a part of the poset, there is no freedom to choose it.
Hence to determine if something is a poset, you don't have to determine if a suitable order relation exists (it always does), but if the set together with the given relation is a poset. The relation should always be clear from the context, in particular if the relation symbol is not ''$\leq$''.
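To make the "given relation" point concrete, here is a small illustration of mine (not from the answer above): checking the three axioms for one explicit finite relation, divisibility on $\{1,2,3,4,6,12\}$.

```python
# Hypothetical illustration: verify reflexivity, antisymmetry and
# transitivity for the divisibility relation on a finite set.
P = {1, 2, 3, 4, 6, 12}
R = {(a, b) for a in P for b in P if b % a == 0}   # a <= b  iff  a divides b

reflexive     = all((x, x) in R for x in P)
antisymmetric = all(x == y for (x, y) in R if (y, x) in R)
transitive    = all((x, z) in R for (x, y) in R for (w, z) in R if w == y)

print(reflexive, antisymmetric, transitive)   # True True True, so a poset
```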
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/326826",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 2,
"answer_id": 1
} |
Discrete math: Euler cycle or Euler tour/path? Could someone help explain to me how I can figure out if the graphs given are Euler cycle or Euler path? Is it through trial and error?
Here are some examples:
Would appreciate any help.
| A graph is Eulerian if it contains an Eulerian circuit, where an Eulerian circuit is a closed Eulerian trail. By an Eulerian trail we mean a trail that visits every edge of a graph once and only once. Now use the result that "a connected graph is Eulerian if and only if every vertex of $G$ has even degree"; likewise, a connected graph contains an Eulerian path (but no circuit) if and only if exactly two of its vertices have odd degree. Now you may distinguish easily, just by counting the odd-degree vertices (see the sketch below).
You must notice that an Eulerian path starts and ends at different vertices, while an Eulerian circuit starts and ends at the same vertex.
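Here is a small sketch of mine implementing the degree test just described, for a connected graph given as an adjacency list:

```python
# Hypothetical degree-parity test for a connected graph: all degrees
# even -> Eulerian circuit; exactly two odd -> Eulerian path; else neither.
def euler_type(adj):
    odd = [v for v, nbrs in adj.items() if len(nbrs) % 2 == 1]
    if not odd:
        return "Eulerian circuit"
    if len(odd) == 2:
        return "Eulerian path"
    return "neither"

triangle = {0: [1, 2], 1: [0, 2], 2: [0, 1]}                  # degrees 2,2,2
path     = {0: [1], 1: [0, 2], 2: [1]}                        # degrees 1,2,1
k4       = {0: [1, 2, 3], 1: [0, 2, 3], 2: [0, 1, 3], 3: [0, 1, 2]}
for g in (triangle, path, k4):
    print(euler_type(g))   # Eulerian circuit / Eulerian path / neither
```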
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/326898",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Continuous real-valued function and open subset Let $f$ be a continuous real-valued function defined on an open subset $U$ of $\mathbb{R}^n$.
Show that $\{(x,y):x\in{U},y>f(x)\}$ is an open subset of $\mathbb{R}^{n+1}$
Let $\forall{x}\in{X}, X\subset{U}$
Using the theorem, for a function $f$ mapping $S\subset{\mathbb{R}^n}$ into $\mathbb{R}^m$, it is equivalent to $f$ is continuous in $S$
so we can say $f(x)$ is continuous on $U$. Also, by following $U$, $f(x)$ is also open which is one of what I want to prove.
But how can we be sure that the set lives in $\mathbb{R}^{n+1}$ and not in $\mathbb{R}^{n+2}$, $\mathbb{R}^{n+3}$, ...?
| Hint: the function
$$
g:(x,y)\longmapsto y-f(x)
$$
is defined and continuous on $U\times \mathbb{R}$, which is open in $\mathbb{R}^{n+1}$. Now try to express your set with this function.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/326978",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Dilogarithm Identities Is there a cleaner way to write:
$$
f(x) = \operatorname{Li}_2(i x) - \operatorname{Li}_2(-i x)
$$
in terms of simpler functions? I don't know enough about dilogarithms, and the basic identities I see on wikipedia are not helping me.
| I can show you some approximations which might help:
For the case of $|x|<1$ we have that
$ i(\operatorname{Li}_2(-i x) - \operatorname{Li}_2(i x)) \approx 2x $
For the case of $|x|\ge1$ we have that
$ i(\operatorname{Li}_2(-i x) - \operatorname{Li}_2(i x)) \approx \pi \log(x) $
Another good approximation for any x is
$ i(\operatorname{Li}_2(-i x) - \operatorname{Li}_2(i x)) \approx \pi \operatorname{arcsinh}(x/2) $
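These claims are easy to test numerically; here is a sketch of mine using mpmath's polylog (the sample points are arbitrary):

```python
# Hypothetical numerical check of the three approximations.  The exact
# quantity i*(Li_2(-ix) - Li_2(ix)) is real for real x.
import mpmath as mp

def exact(x):
    return mp.re(1j*(mp.polylog(2, -1j*x) - mp.polylog(2, 1j*x)))

for x in (0.1, 0.5, 5, 50):                    # small and large |x|
    print(x, exact(x), 2*x, mp.pi*mp.log(x), mp.pi*mp.asinh(x/2))
```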
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/327061",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Understanding branch cuts for functions with multiple branch points My question was poorly worded and thus confusing. I'm going to edit it to make it clearer, and then I'm going to give a brief answer.
Take, for example, the function $$f(z) = \sqrt{1-z^{2}}= \sqrt{(1+z)(1-z)} = \sqrt{|1+z|e^{i \arg(1+z)} |1-z|e^{i \arg(1-z)}}.$$
If we restrict $\arg(1+z)$ to $-\pi < \arg(1+z) \le \pi$, then the half-line $(-\infty,-1]$ needs to be omitted.
But if we restrict $\arg(1-z)$ to $0 < \arg(1-z) \le 2 \pi$, why does the half-line $(-\infty, 1]$ need to be omitted and not the half-line $[1, \infty)$?
And if we define $f(z)$ in such a way, how do we show that $f(z)$ is continuous across $(-\infty,-1)$?
$ $
The answer to the first question is that $(1-z)$ is real and positive for $z \in (-\infty,1)$.
And with regard to the second question, to the left of $z=x=-1$ and just above the real axis,
$$f(x) = \sqrt{(-1-x)e^{i (\pi)} (1-x)e^{i (2 \pi)}} = e^{3 \pi i /2} \sqrt{x^{2}-1} = -i \sqrt{x^{2}-1} .$$
While to the left of $z=x=-1$ and just below the real axis,
$$f(x) = \sqrt{(-1-x)e^{i (-\pi)} (1-x)e^{i (0)}} = e^{-i \pi /2} \sqrt{x^{2}-1} = -i \sqrt{x^{2}-1} .$$
| I would recommend chapter 2.3 in Ablowitz, but I can try to explain in short.
Let
$$w : = (z^2-1)^{1/2} = [(z+1)(z-1)]^{1/2}.$$
Now, we can write
$$z-1 = r_1\,\exp(i\theta_1)$$ and similarly for
$$z+1 = r_2\,\exp(i\theta_2)$$ so that
$$w = \sqrt{r_1\,r_2}\,\exp(i(\theta_1+\theta_2)/2). $$
Notice that since $r_1$ and $r_2$ are $>0$ the square root sign is the old familiar one from real analysis, so just forget about it for now.
Now let us define $$\Theta:=\frac{\theta_1+\theta_2}{2}$$ so that $w$ can be written as
$$w = \sqrt{r_1r_2} \exp(\mathrm{i}\Theta).$$
Now, depending on how we choose the $\theta$'s, we get different branch cuts for $w$. For instance, suppose we choose both $$\theta_i \in [0,2\pi).$$ If you draw a phase diagram of $w$, i.e. check the values of $\Theta$ in different regions of the plane, you will see that there is a branch cut between $[-1,1].$ This is because just to the right of $1$ and above the real line both $\theta$'s are $0$, hence $\Theta = 0$, while just below both are $2\pi$, hence $\Theta = 4\pi/2=2\pi$, which implies that $w$ is continuous across this line (since $e^{i2\pi} = e^{i\cdot 0}$). Similarly, the same analysis to the left of $-1$ shows that $w$ is continuous across $x<-1$.
Now for the part $[-1,1]$, you will notice that just above this line $\theta_1 = \pi$ while $\theta_2 = 0$ so that $\Theta = \pi/2$ hence
$$w = i\,\sqrt{r_1r_2}.$$
Just below we still have $\theta_1 = \pi$ but $\theta_2 = 2\pi$ so that $\Theta = 3\pi/2 (= -\pi/2)$ hence $w = -i\,\sqrt{r_1r_2}$ is discontinuous across this line. Hope that helped some.
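Here is a quick numerical sketch of mine of exactly this phase analysis: evaluate $w$ with both angles taken in $[0,2\pi)$ just above and just below the real axis.

```python
# Hypothetical check: with theta_1, theta_2 in [0, 2*pi), the function
# w = sqrt(r1*r2)*exp(i*(theta1+theta2)/2) is continuous across x < -1
# and x > 1 but jumps across the cut (-1, 1).
import numpy as np

def w(z):
    t1 = np.angle(z - 1) % (2*np.pi)
    t2 = np.angle(z + 1) % (2*np.pi)
    return np.sqrt(abs(z - 1)*abs(z + 1)) * np.exp(1j*(t1 + t2)/2)

eps = 1e-9
for x in (-2.0, 0.0, 2.0):
    print(x, w(x + 1j*eps), w(x - 1j*eps))   # agree at +-2, differ at 0
```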
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/327120",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10",
"answer_count": 2,
"answer_id": 0
} |
Are there infinite sets of axioms? I'm reading Behnke's Fundamentals of mathematics:
If the number of axioms is finite, we can reduce the concept of a consequence to that of a tautology.
I got curious on this: Are there infinite sets of axioms? The only thing I could think about is the possible existence of unknown axioms and perhaps some belief that this number of axioms is infinite.
| Perhaps surprisingly even the classical (Łukasiewicz's) axiomatization of propositional logic has an infinite number of axioms. The axioms are all substitution instances of
1. $(p \to (q \to p))$
2. $((p \to (q \to r)) \to ((p \to q) \to (p \to r)))$
3. $((\neg p \to \neg q) \to (q \to p))$
so we have an infinite number of axioms.
Usually the important thing is not if the set of axioms is finite or infinite, but if it is decidable. We can only verify proofs in theories with decidable sets of axioms. If a set of axioms is undecidable, we can't verify a proof, because we can't tell if a formula appearing in it is an axiom or not. (If a set of axioms is only semi-decidable, we're able to verify correct proofs, but we're not able to refute incorrect ones.)
For example, if I construct a theory with the set of axioms given as
$T(\pi)$ is an axiom if $\pi$ is a representation of a terminating program.
Then I can "prove" that some program $p$ is terminating in a one-line proof by simply stating $T(p)$. But of course such theory has no real use because nobody can verify such a proof.
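To see what a decidable axiom set buys you in practice, here is a small illustration of mine: membership in the infinite set of instances of schema 1 above is decided by a trivial pattern match on parsed formulas.

```python
# Hypothetical illustration: the substitution instances of the schema
# (p -> (q -> p)) form an infinite axiom set whose membership is
# decidable.  Formulas are nested tuples; atoms are strings.
def is_axiom1(f):
    return (isinstance(f, tuple) and len(f) == 3 and f[0] == '->'
            and isinstance(f[2], tuple) and len(f[2]) == 3
            and f[2][0] == '->' and f[2][2] == f[1])

A = ('->', 'p', 'q')                               # an arbitrary formula
print(is_axiom1(('->', A, ('->', 'r', A))))        # True: an instance
print(is_axiom1(('->', 'p', ('->', 'q', 'r'))))    # False
```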
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/327201",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "30",
"answer_count": 7,
"answer_id": 0
} |
Find $E(\max(X,Y))$ for $X$, $Y$ independent standard normal Let $X,Y$ independent random variables with $X,Y\sim \mathcal{N}(0,1)$. Let $Z=\max(X,Y)$.
I already showed that the CDF $F_Z$ of $Z$ satisfies $F_Z(z)=F(z)^2$.
Now I need to find $EZ$.
Should I start like this ?
$$EZ=\int_{-\infty}^{\infty}\int_{-\infty}^\infty \max(x,y)\frac{1}{\sqrt{2\pi}}e^{-1/2x^2}\frac{1}{\sqrt{2\pi}}e^{-1/2y^2} dxdy$$
| To go back to $Z$ as a function of $(X,Y)$ once one has determined $F_Z$ is counterproductive. Rather, one could compute the density $f_Z$ as the derivative of $F_Z=\Phi^2$, that is, $f_Z=2\varphi\Phi$ where $\Phi$ is the standard normal CDF and its derivative $\varphi$ is the standard normal PDF, and use
$$
\mathbb E(Z)=\int zf_Z(z)\mathrm dz=\int 2z\varphi(z)\Phi(z)\mathrm dz.
$$
Since $z\varphi(z)=-\varphi'(z)$ and $\Phi'=\varphi$, an integration by parts yields
$$
\mathbb E(Z)=\int2\varphi\cdot\varphi=\frac2{\sqrt{2\pi}}\int\varphi(\sqrt2z)\mathrm dz=\frac2{\sqrt{2\pi}}\frac1{\sqrt2}=\frac1{\sqrt{\pi}}.
$$
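A quick Monte Carlo sanity check of mine for the value $1/\sqrt{\pi}\approx 0.5642$:

```python
# Hypothetical Monte Carlo check: E[max(X, Y)] for independent standard
# normals should be close to 1/sqrt(pi).
import numpy as np

rng = np.random.default_rng(0)
n = 10**6
z = np.maximum(rng.standard_normal(n), rng.standard_normal(n))
print(z.mean(), 1/np.sqrt(np.pi))   # both about 0.5642
```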
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/327245",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 2,
"answer_id": 0
} |
Correct way of saying that some value depends on another value x only by a function of x I would like to know what good and valid ways there are to say (in words) that some value f(x), which depends on a variable x, in fact only depends on x "through" some function of x.
Example: For $x\in\mathbb{R}$ let $\hat{x}:=\min(0,x)$ and let f be a function such that $f(x)=f(\hat{x})$ for all x. I want to express that "f only depends on x 'through' $\hat{x}$".
How to formulate this (in words rather than writing it down in formulas)?
While for the above example it may be easier to just write down the formula, there can of course be more complex situations, in particular when not dealing with numbers, for example that the expectation of a random variable "depends on the r.v. only through its distribution".
| Translating the formulas to words is one way: f is a function of x such that if we denote g(x) by y, f can be written as a function of y only. Perhaps not exactly what you are looking for.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/327337",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
What is the order of $2$ in $(\mathbb{Z}/n\mathbb{Z})^\times$? Is there some theorem that makes a statement about the order of $2$ in the multiplicative group of integers modulo $n$ for general $n>2$?
| Let me quote from this presentation of Carl Pomerance:
[...] the multiplicative order of $2 \pmod n$
appears to be very erratic and difficult to get hold of.
The presentation describes, however, some properties of this order. The basic facts have already been elucidated by @HagenvonEitzen in his comment.
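For experimenting with this erratic behaviour, sympy computes the multiplicative order directly; a usage sketch of mine (the sample moduli are arbitrary odd numbers):

```python
# Hypothetical usage sketch: multiplicative order of 2 mod n, defined
# only when gcd(2, n) = 1, i.e. for odd n.
from sympy.ntheory import n_order

for n in (3, 5, 7, 9, 11, 13, 15, 101):
    print(n, n_order(2, n))
```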
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/327412",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "12",
"answer_count": 3,
"answer_id": 0
} |
Convergence of $\sum_{n=1}^\infty (-1)^n(\sqrt{n+1}-\sqrt n)$ Please suggest some hint to test the convergence of the following series
$$\sum_{n=1}^\infty (-1)^n(\sqrt{n+1}-\sqrt n)$$
| We have
$$u_n=(-1)^n(\sqrt{n+1}-\sqrt{n})=\frac{(-1)^n}{\sqrt{n+1}+\sqrt{n}}$$
So the sequence $(|u_n|)_n$ converges to $0$ and is monotone decreasing, so by the alternating series test the series $\sum_n u_n$ is convergent.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/327474",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 2,
"answer_id": 1
} |
Number of Different Equivalence Classes? In the question given below, determine the number of different equivalence classes. I think the answer is infinite, as $b_1$ and $b_2$ can have either one $1$, or two $1$'s, or three $1$'s, etc. I just want to clarify if this is right, as some say the number of different equivalence classes should be $2$, given that two strings are either related or not.
Question: $\;S$ is the set of all binary strings of finite length, and
$$R = \{(b_1, b_2) \in S \times S \mid\; b_1\;\text{ and}\; b_2\; \text{ have the same number of }\;1's.\}$$
| The proof that the correct answer is infinite is quite easy.
First of all, let's note that the set of equivalence classes is not empty, since, trivially, "1" generates a class.
Suppose that the number of classes is finite. For each class $C$, define $d(C)$ to be the number of $1$'s in any string belonging to $C$; this is well defined, since all strings in one class have the same number of $1$'s, and we can compare these values using the usual order on the natural numbers.
Now, since we have supposed that the set $Z$ of all equivalence classes is finite, we can find the maximum of the set $\{d(C) \mid C \in Z\}$, say $M$.
It is now easy to write down a finite string consisting of $M+1$ ones. Its class $C$ satisfies $d(C)=M+1>M$, contradicting the maximality of $M$, which is absurd.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/327538",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Calculus, integration, Riemann sum help? Express as a definite integral and then evaluate the limit of the Riemann sum lim
$$
\lim_{n\to \infty}\sum_{i=0}^{n-1} (3x_i^2 + 1)\Delta x,
$$
where $P$ is the partition with
$$
x_i = -1 + \frac{3i}{n}
$$
for $i = 0, 1, \dots, n$ and $\Delta x \equiv x_i - x_{i-1}$.
I am completely and utterly confused as to how to even start this question. Any help/good links hugely appreciated!
| Let $f$ be a function, and let $[a,b]$ be an interval. Let $n$ be a positive integer, and let $\Delta x=\frac{b-a}{n}$. Let $x_0=a$, $x_1=a+\Delta x$, $x_2=a+2\Delta x$, and so on up to $x_n=a+n\Delta x$. So $x_i=a+i\Delta x$.
So far, a jumble of symbols. You are likely not to ever understand what's going on unless you associate a picture with these symbols.
So draw some nice function $f(x)$, say always positive, and take some interval $[a,b]$. For concreteness, let $a=1$ and $b=4$. Pick a specific $n$, like $n=6$. Then $\Delta x=\frac{3}{6}=\frac{1}{2}$.
So $x_0=1$, $x_1=1.5$, $x_2=2$, $x_3=2.5$, $x_4=3$, $x_5=3.5$, and $x_6=4$. Note that the points divide the interval from $a$ to $b$ into $n$ subintervals. These intervals all have width $\Delta x$.
Now calculate $f(x_0)\Delta x$. This is the area of a certain rectangle. Draw it. Similarly, $f(x_1)\Delta x$ is the area of a certain rectangle. Draw it. Continue up to $f(x_5)\Delta x$. Add up. The sum is called the left Riemann sum associated with the function $f$ and the division of $[1,4]$ into $6$ equal-sized parts.
The left Riemann sum is an approximation to the area under the curve $y=f(x)$, from $x=a$ to $x=b$. Intuitively, if we take $n$ very large, the sum will be a very good approximation to the area, and the limit as $n\to\infty$ of the Riemann sums is the integral $\displaystyle\int_a^b f(x)\,dx$.
Let us apply these ideas to your concrete example. It is basically a matter of pattern recognition. We have $x_0=-1$, $x_1=-1+\frac{3}{n}$, $x_2=-1+\frac{6}{n}$, and so on. These increase by $\frac{3}{n}$, so $\Delta x=\frac{3}{n}$.
We have $x_0=-1$, and $x_n=-1+\frac{3n}{n}=2$. So $a=-1$ and $b=2$.
Our sum is a sum of terms of the shape $(3x_i^2+1)\Delta x$. Comparing with the general pattern $f(x_i)\Delta x$, we see that $f(x)=3x^2+1$.
So for large $n$, the Riemann sum of your problem should be a good approximation to $\displaystyle\int_{-1}^2 (3x^2+1)\,dx$.
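To see the convergence numerically, here is a short sketch of mine computing the left Riemann sums of the problem and comparing them with $\int_{-1}^2 (3x^2+1)\,dx = \left[x^3+x\right]_{-1}^{2} = 12$:

```python
# Hypothetical numerical illustration: the left Riemann sums of
# f(x) = 3x^2 + 1 on [-1, 2] converge to the integral, 12.
import numpy as np

def left_riemann_sum(n):
    dx = 3.0/n
    x = -1 + dx*np.arange(n)        # x_0, ..., x_{n-1}
    return np.sum((3*x**2 + 1)*dx)

for n in (10, 100, 1000, 10**6):
    print(n, left_riemann_sum(n))   # approaches 12
```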
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/327609",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Monomial ordering problem I've got the following problem:
Let $\gamma$, $\delta$ $\in$ $\mathbb R_{> 0}$. The binary relation $\preceq$ on monomials in $X,Y$ is defined: $X^{m}Y^{n} \preceq X^{p}Y^{q}$ if and only if $\gamma m + \delta n \leq \gamma p + \delta q .$ Show that this is a monomial ordering if and only if $ \frac \gamma \delta $ is irrational.
So since $\preceq$ is a partial order, do I just need to show that it is a total order only when $ \frac \gamma \delta $ is irrational?
Not really sure how to go about this, any guidance would be great. Thanks.
| Every monomial order is a total order; in particular when $\gamma/\delta$ is a rational $a/b$, we have $X^b\preceq Y^a\preceq X^b$, so $\preceq$ is not a total order, so $\preceq$ is not a monomial order.
But not every total order is a monomial order. To verify that $\preceq$ is a monomial order you have to show:
1. it's a total order (easy)
2. $u\preceq v$ implies $uw\preceq vw$ (hint: just expand the definition of $\preceq$)
3. $\preceq$ is a well-ordering (hint: show that $\{(m,n)\in\mathbb N^2\mid \gamma m+\delta n\leq L\}$ is finite for any $L$)
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/327704",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Functions $f$ satisfying $ f\circ f(x)=2f(x)-x,\forall x\in\mathbb{R}$. How to prove that the continuous functions $f$ on $\mathbb{R}$ satisfying
$$f\circ f(x)=2f(x)-x,\forall x\in\mathbb{R},$$
are given by
$$f(x)=x+a,a\in\mathbb{R}.$$
Any hints are welcome. Thanks.
| If $f$ were bounded from below, then so would be $x=2f(x)-f(f(x))$. Therefore $f$ is unbounded, hence by the IVT surjective.
Also, $f(x)=f(y)$ implies $x=2f(x)-f(f(x))=2f(y)-f(f(y))=y$, hence $f$ is also injective and has a two-sided inverse. With this inverse we find $$\tag1f(x)+f^{-1}(x)=2x.$$
We conclude that $f$ is either strictly increasing or strictly decreasing. But in the latter case $f^{-1}$ would also be decreasing, contradicting $(1)$. Therefore $f$ is strictly increasing.
Following TMM's suggestion, define $g(x)=f(x)-x$.
Then using $(1)$ we see $f^{-1}(x)=2x-f(x)=x-g(x)$, hence $f(x-g(x))=x$ and $g(x-g(x))=g(x)$. By induction, $$\tag2g(x-ng(x))=g(x)$$ for all $x\in\mathbb R, n\in\mathbb N$.
Because $f$ is increasing, we conclude that
$$\tag3 x<y\implies g(x)<g(y)+(y-x).$$
If we assume that $g$ is not constant, there are $x_0,x_1\in\mathbb R$ with $g(x_0)g(x_1)>0$ and $\alpha:=\frac{g(x_1)}{g(x_0)}$ irrational (and positive!).
Wlog. $g(x_1)<g(x_0)$.
Because of this irrationality, for any $\epsilon>0$ we find $n,m$ with
$$x_0-ng(x_0) < x_1-mg(x_1)< x_0-ng(x_0)+\epsilon,$$
hence $g(x_0-ng(x_0)) < g(x_1-mg(x_1))+\epsilon$ by $(3)$.
Using $(2)$ we conclude $g(x_0)<g(x_1)+\epsilon$, contradiction.
Therefore $g$ is constant.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/327774",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "21",
"answer_count": 1,
"answer_id": 0
} |
Taylor polynomials expansion with substitution I am working on some practice exercises on Taylor Polynomial and came across this problem:
Find the third order Taylor polynomial of $f(x,y)=x + \cos(\pi y) + x\log(y)$ based at $a=(3,1).$
In the solution provided, the author makes a substitution such that $x=3+h$ and $y=1+k$. I am not sure why he makes this substitution. Also, why not just find the Taylor polynomial for $f(x,y)$, then plug in the values for $x$ and $y$ to solve for $f(x,y)$?
If you could provide some references for reading on this I would appreciate that as well.
Thanks in advance.
| It's generally a good policy to "always expand around zero". This means that you want to have variables that go to zero at your point of interest.
In your case, you want to have $h$ go to zero for $x$ and $k$ go to zero for $y$. For this to happen at $x=3$ and $y=1$, you want to use $x=3+h$ and $y = 1+k$.
One reason that you want to see what happens when $h \to 0$ is that $h^2$ (and higher powers) are small compared to $h$, so they can be disregarded when you are seeing what happens. If $h$ does not tend to zero, then $h^2$ and higher powers cannot be disregarded and, in fact, may dominate.
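A sympy sketch of mine making the shift concrete: compute the third-order Taylor polynomial of $f$ at $(3,1)$ directly from the partial derivatives, expressed in the shifted variables $h=x-3$ and $k=y-1$ (which indeed go to zero at the base point):

```python
# Hypothetical sympy sketch: third-order Taylor polynomial of
# f(x, y) = x + cos(pi*y) + x*log(y) at (3, 1), in h = x-3, k = y-1.
import sympy as sp

x, y, h, k = sp.symbols('x y h k')
f = x + sp.cos(sp.pi*y) + x*sp.log(y)

T = sum(sp.diff(f, x, i, y, j).subs({x: 3, y: 1})
        * h**i * k**j / (sp.factorial(i)*sp.factorial(j))
        for i in range(4) for j in range(4) if i + j <= 3)
print(sp.expand(T))
```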
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/327819",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Manipulating the equation!
The question asks to manipulate the equation $f(x,y)=e^{-x^2-y^2}$ so that its graph looks like the three shapes in the images I attached.
I got the first one: $e^{(-x^2+y^2)}\cos(x^2+y^2)$.
But I have no idea what to do for second and the third one. Please help me.
| Is the cosine in the exponent? In case not, Wolfram|Alpha gives something that looks like the original function.
If so, Wolfram|Alpha gives something closer to your second target.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/327879",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
r.v. Law of the min On a probability space $(\Omega, A, P)$, we are given a r.v. $(X,Y)$ with values in $\mathbb{R}^2$. If the law of $(X,Y)$ is $\lambda \mu e^{-\lambda x - \mu y } 1_{\mathbb{R}^2_+} (x,y)\, dx\, dy$, what is the law of $\min(X,Y)$?
| HINT
Let $Z=\min(X,Y)$. Start by computing, for some $z>0$
$$
1-F_Z(z) = \mathbb{P}\left(Z>z\right) = \mathbb{P}\left(X>z, Y>z\right)
$$
Notice that the probability of $(X,Y)$ factors, i.e. $X$ and $Y$ are independent, hence
$$
\mathbb{P}\left(X>z, Y>z\right) = \mathbb{P}\left(X>z\right) \mathbb{P}\left(Y>z\right)
$$
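Carrying the hint one step further suggests that $Z$ is again exponential, with rate $\lambda+\mu$; here is a simulation sketch of mine consistent with that:

```python
# Hypothetical simulation: min of independent Exp(lambda) and Exp(mu)
# should be Exp(lambda + mu); compare empirical and theoretical means.
import numpy as np

rng = np.random.default_rng(1)
lam, mu, n = 2.0, 3.0, 10**6
z = np.minimum(rng.exponential(1/lam, n), rng.exponential(1/mu, n))
print(z.mean(), 1/(lam + mu))   # both close to 0.2
```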
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/327956",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Question on measurable set I am having a hard time solving the following problem. Any help would be wonderful.
If $f: [0, 1] \rightarrow [0, \infty)$ is measurable, and we have that $\int_{[0, 1]}f \mathrm{d} m = 1$, must there exist a continuous function $g: [0, 1] \rightarrow [0, \infty)$ and a measurable set $E$ with $m(E) = \frac{3}{4}$ and $|f - g| < \frac{1}{100}$ on $E$.
| $m([f > 10]) \le 1/10$ by Chebyshev's inequality (here $[f>10]$ denotes the set $\{x : f(x) > 10\}$).
Uniformly approximate $f$ on $[0 < f \le 10]$ by a simple function, within say 1/200.
By linearity, reduce to characteristic function of a measurable set. But a measurable set is close to a finite union of intervals (the measure of their symmetric difference can be made small).
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/328039",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Problem involving combinations. In how many ways can $42$ candies (all the same) be distributed among 6 different infants such that each infant gets an odd number of candies?
I seem to think that we have 42 different objects, and 6 choices. So it should be 42C6. However, I'm not factoring in the "odd number of candies" part of the question, so I'm sure it's wrong. Any help is appreciated.
| You are looking for compositions of $42$ into six odd parts. If you give each child one candy and group the rest in pairs, this becomes the number of weak compositions of $18$ into six parts, which is given by ${23 \choose 5}=33649$. To prove the formula, put $24$ pairs in a row, select five of the gaps to split the row into six nonempty parts, and remove one pair from each part. Removing one pair from each part is what allows for not giving any more candies to one or more infants.
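A brute-force sketch of mine confirming the count by dynamic programming over odd parts:

```python
# Hypothetical brute-force check: count compositions of 42 into six odd
# parts; should print 33649 = C(23, 5).
from functools import lru_cache

@lru_cache(maxsize=None)
def ways(total, parts):
    if parts == 0:
        return 1 if total == 0 else 0
    return sum(ways(total - c, parts - 1) for c in range(1, total + 1, 2))

print(ways(42, 6))   # 33649
```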
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/328093",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 3,
"answer_id": 0
} |
Algebra Question from Mathematics GRE I just started learning algebra, and I came across a question from a practice GRE which I couldn't solve. http://www.wmich.edu/mathclub/files/GR8767.pdf #49
The finite group $G$ has a subgroup $H$ of order 7 and no element of $G$ other than the identity is its own inverse. What could the order of $G$ be?
Edit: This is a misreading of the problem. The problem intends that no element in G is its own inverse.
a) 27
b) 28
c) 35
d) 37
e) 42
I've already eliminated a) and d) due to Lagrange's theorem.
| Since $\rm a=a^{-1}\iff a^2=1$, you can use Cauchy's theorem to eliminate even orders: if $|G|$ were even, Cauchy's theorem would provide an element of order $2$, i.e. a non-identity element that is its own inverse. Together with your Lagrange argument, this leaves (c) $35$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/328134",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 4,
"answer_id": 2
} |
Solving equation $A^{(B^{(C^x)})} = C^{(B^{(A^x)})}$ for $x$ Recently I came across the equation
$$A^{(B^{(C^x)})} = C^{(B^{(A^x)})}$$
where $A \neq B \neq C$, and if $A, B, C > 1$ or if $0 < A,B,C < 1$, there exists
a unique solution for $x$.
Here is my attempt:
$$A^{(B^{(C^x)})} = C^{(B^{(A^x)})}$$
$$B^{(C^x)} = \log_A{C^{(B^{(A^x)})}}$$
$$C^x-A^x = \log_B{\log_A{C}}$$
$$C^x-A^x = \frac{\ln{(\frac{\ln{C}}{\ln{A}}})}{\ln B}$$
And I was stuck at $C^x-A^x$..
| By symmetry in $A\leftrightarrow C$, we may assume that either $0<C<A<1$ or $1<A<C$.
So far, by taking logarithms and reordering
$$\tag0A^{(B^{(C^x)})} = C^{(B^{(A^x)})}$$
$$B^{(C^x)}\ln A = B^{(A^x)}\ln C$$
$$C^x\ln B + \ln\ln A = A^x\ln B +\ln\ln C$$
$$\tag1C^x-A^x = \frac{\ln\ln C-\ln\ln A}{\ln B}$$
where the right hand side is constant.
We rewrite the left hand side as
$$ \tag2C^x-A^x = A^x\left(\left(\frac CA\right)^x-1\right).$$
If $C>A>1$, the first factor on the right is strictly positive and strictly increasing, while the second factor is strictly increasing (but might be negative).
However, the product need not be monotone for all $x$.
But from $B>1$ we infer that the right hand side in $(1)$ is positive, hence we can restrict to $x$ where the second factor in $(2)$ is positive, that is $x>0$. In that case, both factors in $(2)$ are positive and increasing, hence so is their product. This shows that at most one solution $x$ exists.
At $x=0$, we obtain $C^x-A^x=1-1=0$, whereas each factor $A^x$ and $\left(\frac CA\right)^x-1$ goes $\to+\infty$ as $x\to+\infty$. Therefore $x\mapsto C^x-A^x$ is a bijection $[0,\infty)\to[0,\infty)$ and there exists a unique solution.
The same discussion works in the case $0<A<C<1$, $0<B<1$ apart from changed signs and monotonicity.
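For concrete values the unique root is easy to find numerically; a sketch of mine with the arbitrary choices $A=2$, $B=3$, $C=5$:

```python
# Hypothetical numerical solution of C^x - A^x = (ln ln C - ln ln A)/ln B
# for the arbitrary test values A=2, B=3, C=5 (all > 1).
import numpy as np
from scipy.optimize import brentq

A, B, C = 2.0, 3.0, 5.0
rhs = (np.log(np.log(C)) - np.log(np.log(A))) / np.log(B)

x = brentq(lambda t: C**t - A**t - rhs, 0, 10)
print(x, A**(B**(C**x)) - C**(B**(A**x)))   # residual is negligible
```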
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/328189",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Differentials and implicit differentiation Consider this example. Suppose $x$ is a function of two variables $s$ and $t$,
$$x = \sin(s+t)$$
Taking the differential as in doing implicit differentiation [1],
$$dx = \cos(s+t)(ds+dt) = \cos(s+t)dt + \cos(s+t)ds$$
I know the right way of taking the differential of $x$ is by
$dx = \dfrac{\partial x}{\partial s}ds + \dfrac{\partial x}{\partial t}dt$ [2]
But why do the above method [1], which I cannot make any sense of mathematically, give the same result as [2]?
I do not understand method [1] because $s$ and $t$ are supposed to be independent variables not functions to be differentiated. i.e. $ds$ just means $s-s_0$, same with $dt$.
| This is a special case: you may consider a simple substitution $y = s + t$
$x = \sin(y)$
$dx = \cos(y)dy$
$dx = \cos(s+t)(ds + dt)$
if you do this with $x=s^t$
$y = s^t$
$x = y$
$dx = dy$
$dx = ts^{t-1}ds + s^t\ln(s)dt$
which isn't useful at all...
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/328245",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
find maximum for a function on a ball I have a question that's giving me a hard time; can someone please help me out with it?
I need to find maximun to the func:
$$
f(x) = 2-x+x^3+2y^3+3z^3
$$
on the ball
$$
\{ (x,y,z) | x^2+y^2+z^2\leq1 \}
$$
What I have tried to do is this: $$ (f_x,f_y,f_z)=0 $$ and then to check the Hessian matrix. I got that $$ x= - \sqrt{\frac{1}{3}}, y=0,z=0 $$ but according to Wolfram|Alpha I am wrong. Could you help? Thanks!
| First search for extrema in the interior (in the usual way, by setting the partial derivatives to zero).
Then observe that for a maximum on the boundary $x^2+y^2+z^2=1$ the gradient of $f$ need not be zero; use Lagrange multipliers here.
Look at the function
$$g(x,y,z,\lambda)= 2-x+x^3+2y^3+3z^3+ \lambda (x^2+y^2+z^2-1)$$
and search for the critical points of this function.
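A numerical cross-check sketch of mine using scipy's SLSQP solver with several random starts (the problem has more than one local optimum):

```python
# Hypothetical numerical cross-check: maximize f on the closed unit ball
# by minimizing -f under x^2 + y^2 + z^2 <= 1, with random restarts.
import numpy as np
from scipy.optimize import minimize

f    = lambda p: 2 - p[0] + p[0]**3 + 2*p[1]**3 + 3*p[2]**3
cons = [{'type': 'ineq', 'fun': lambda p: 1 - p @ p}]

rng  = np.random.default_rng(2)
best = max((minimize(lambda p: -f(p), rng.uniform(-1, 1, 3),
                     method='SLSQP', constraints=cons)
            for _ in range(20)), key=lambda r: -r.fun)
print(best.x, f(best.x))
```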
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/328318",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Linearly independent Let $S$ be a linearly independent subset of a vector space $V$, and let $v$ be a vector in $V$ that is not in $S$. Then $S\cup \{v\}$ is linearly dependent if and only if $v\in span\{S\}$.
proof)
If $S\cup \{v\}$ is linearly dependent then there are vectors $u_1,u_2,\dots,u_n$ in $S\cup \{v\}$ such that $a_1u_1+a_2u_2+\dots+a_nu_n=0$ for some nonzero scalars $a_1,a_2,\dots,a_n$.
Because $S$ is linearly independent, one of the $u_i$'s, say $u_1$ equals $v$.
[ADDITION]
The last part of the proof is this: Because S is linearly independent, one of the $u_i$'s, say $u_1$ equals $v$. Thus $a_1v+a_2u_2+\dots+a_nu_n=0$ and so $v$ can be written as a linear combination of $u_2,\dots,u_n$ which are in $S$. By definition of span, we have $v\in span(S)$.
I can't understand the last sentence. I think since $S$ is linearly independent and $S\cup \{v\}$ is linearly dependent so consequently $v$ can be written as the linear combination of $u_1,u_2,\dots,u_n$. But it has any relation to that sentence?
(+) I also want to ask a simple question here.
Any subset of a vector space that contains the zero vector is linearly dependent, because $0=1\cdot 0$. But that shows it holds when there is only one vector, the zero vector, with coefficient $a_1=1$.
Does it still hold when there are other, nonzero vectors in the subset?
| Your intuition is good but there is a problem with stating that $v$ is in the span of $S$ directly. Mainly if we have some linear combination:
$$(\ast) \qquad a_1s_1 + \cdots a_ns_n + a_{n+1}v = 0$$
where not all the $a_i$ are zero, we want to show that $a_{n+1}$ is not zero, so that we can move that term to the other side and divide by $-a_{n+1}$. That is,
$$a_1s_1 + \cdots a_ns_n = -a_{n+1}v$$
and so to finish the proof we only need to show that $-a_{n+1}$ is not zero so that we can divide by it. If $a_{n+1}$ were zero then $(\ast)$ would have a nontrivial linear combination equal to 0 which contradicts the fact that $S$ is linearly independent. So $a_{n+1}$ is not zero and we can divide by it so that $v$ is in the span of $S.$
This is essentially the step that your proof above is using: if you have some combination that is zero, then one of the terms must involve $v$, and it must have a nonzero coefficient by the linear independence of $S.$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/328426",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 0
} |
Math question functions help me? I have to find find $f(x,y)$ that satisfies
\begin{align}
f(x+y,x-y) &= xy + y^2 \\
f(x+y, \frac{y}x ) &= x^2 - y^2
\end{align}
So I first though about replacing $x+y=X$ and $x-y=Y$ in the first one but then what?
| Hint:
If $x+y=X$ and $x-y=Y$ then
\begin{align*}
x&=X-y \implies Y= X-y-y \implies 2y=X-Y \dots
\end{align*}
And be careful with the second for $x=0$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/328481",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 0
} |
Groupoids isomorphism Let $G, G'$ be two groups and $X=\{x,y\}$ be a set of two elements. Consider a groupoid $\mathcal{G}$ with objects from $X$ such that Hom$(x,x)=G$ and Hom$(y,y)=G'$.
Suppose Hom$(x,y) \neq \emptyset$, i.e. there is a morphism between $x$ and $y$. I think this is possible if and only if this morphism is an isomorphism between $G$ and $G'$ (in fact, every morphism in a groupoid must be invertible, moreover it must respect multiplication, so it is an isomorphism of these groups). Also, I know that any two such morphisms are conjugate.
I need to prove: "Given two groupoids $\mathcal{G}_1$ and $\mathcal{G}_2$ satisfying the conditions above (i.e. both have $X$ as set of objects and $G, G'$ as hom-sets Hom$(x,x)$ and Hom$(y,y)$), if there is a morphism from $x$ to $y$ in both groupoids, then $\mathcal{G}_1$ and $\mathcal{G}_2$ are isomorphic".
[Two groupoids are isomorphic if they are isomorphic as category, i.e. there is an "invertible" functor between the two].
It seems quite an obvious statement, but I cannot prove it. I start with a functor between the groupoids, say $F: \mathcal{G}_1 \to \mathcal{G}_2$. Then, what? Should I concretely construct such a functor and show it is an isomorphism? Or is there an "abstract nonsensical way" to do it?
| You should just construct the functor. The objects map to themselves as do the endomorphisms. All you really have to decide is how the map $F\colon\hom_{\mathcal G_1}(x, y) \to \hom_{\mathcal G_2}(x, y)$ is going to work.
Pick $\phi_1 \in \hom_{\mathcal G_1}(x, y)$ and $\phi_2 \in \hom_{\mathcal G_2}(x, y)$. What you need to do is show that every $f \in \hom_{\mathcal G_1}(x, y)$ can be written in the form $\phi_1f'$ where $f' \in G$, and similarly for $\mathcal G_2$. Then define $F(\phi_1f') = \phi_2f'$.
You'll have to show that $F$ respects composition and is a bijection on that homset, but that shouldn't be hard.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/328538",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Does it make sense to talk about $L^2$ inner product of two functions not necessarily in $L^2$? The $L^2$-inner product of two real functions $f$ and $g$ on a measure space $X$ with respect to the measure $\mu$ is given by
$$
\langle f,g\rangle_{L^2} := \int_X fg d\mu, $$
When $f$ and $g$ are both in $L^2(X)$, $|\langle f,g\rangle_{L^2}|\leqslant \|f\|_{L^2} \|g\|_{L^2} < \infty$.
I was wondering if it makes sense to talk about $\langle f,g\rangle_{L^2}$ when $f$ and/or $g$ may not be in $L^2(X)$? What are cases more general than $f$ and $g$ both in $L^2(X)$, when talking about $L^2(X)$ makes sense?
Thanks and regards!
| Yes it does make sense. For example, if you take $f\in L^p(X)$ and $g\in L^{p'}(X)$ with $\frac{1}{p}+\frac{1}{p'}=1$, then $\langle f,g\rangle_{L^2} $ is well defined and, by Hölder's inequality, $$|\langle f,g\rangle_{L^2}| \le \|f\|_p\|g\|_{p'}$$
With this notation it is said that the inner product on $L^2$ induces the duality between $p$ and $p'$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/328592",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
} |
Ideal of ideal needs not to be an ideal Suppose $I$ is an ideal of a ring $R$ and $J$ is an ideal of $I$; is there any counterexample showing $J$ need not be an ideal of $R$? The hint given in the book is to consider a polynomial ring with coefficients from a field. Thanks.
| Consider $R=\mathbb Q[x]$, and $I=xR$ be the most obvious ideal of $R$.
Note that we can define $J$ as a subset of $I$ to be an ideal of $I$ if $J$ is a subgroup of $(I,+)$ and $IJ\subseteq J$. Find a $J$ that is a super-set of $x^2R$ but does not contain all of $I=xR$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/328670",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10",
"answer_count": 2,
"answer_id": 0
} |
Simple probability problems which hide important concepts Together with a group of students we need to compose a course on probability theory having the form of a debate. In order to do that we need to decide on a probability concept simple enough so that it could be explained in 10-15 minutes to an audience with basic math knowledge. Still, the concept to be explained must be hidden in some tricky probability problems where intuition does not work.
Until now we have two leads:
1. the probability of the union is not necessarily the sum of the probabilities
2. Bayes' law (for a rare disease, the probability that you are not actually sick even though you tested positive can be very large)
The second one is clearly not intuitive, but it cannot be explained easily to a general audience.
Do you know any other probability issues which are simple enough to explain, but create big difficulties in problems when not applied correctly?
| There are three prisoners in Cook maximum security prison: Jack, Will and Mitchel. One of them is to be executed, each equally likely, and the prison guard knows which one. Jack has finished writing a letter to his mother and asks the guard whether he should give it to Will or Mitchel. The prison guard is in a dilemma, thinking that telling Jack the name of a person who goes free would let Jack find out his chances of being executed.
Why are both the guard and Jack wrong in their thinking?
You have four cases (X marks the prisoner to be executed; when Jack is the one, the guard names Will or Mitchel with probability $1/2$ each):

    Jack   Will   Mitchel   guard names   probability
    X      jail   jail      Will          1/6
    X      jail   jail      Mitchel       1/6
    jail   X      jail      Mitchel       1/3
    jail   jail   X         Will          1/3

Given that the guard names, say, Will, the probability that Jack is the one to be executed is $\frac{1/6}{1/6+1/3}=\frac{1}{3}$, exactly what it was before the guard spoke.
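A quick Monte Carlo sketch of mine of the same table: conditioned on the guard naming Will, Jack's chance stays $1/3$.

```python
# Hypothetical Monte Carlo check of the three-prisoners table:
# P(Jack executed | guard names Will) should remain 1/3.
import random

random.seed(0)
says_will = jack_and_says_will = 0
for _ in range(10**6):
    executed = random.choice(['Jack', 'Will', 'Mitchel'])
    if executed == 'Jack':
        named = random.choice(['Will', 'Mitchel'])
    else:                         # the guard must name the one who goes free
        named = 'Mitchel' if executed == 'Will' else 'Will'
    if named == 'Will':
        says_will += 1
        jack_and_says_will += (executed == 'Jack')
print(jack_and_says_will / says_will)   # about 1/3
```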
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/328757",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 4,
"answer_id": 3
} |
Existence of a sequence that has every element of $\mathbb N$ infinite number of times I was wondering if a sequence that contains every element of $\mathbb N$ an infinite number of times exists ($\mathbb N$ includes $0$). It feels like it should, but I just have a few doubts.
Like, assume that $(a_n)$ is such a sequence. Find the first $a_i \not = 1$ and the next $a_j, \ \ j > i, \ \ a_j = 1$. Swap $a_j$ and $a_i$. This can be done infinitely many times and the resulting sequence has $1$ at position $M \in \mathbb N$, no matter how large $M$ is. Thus $(a_n) = (1_n)$.
Also, consider powerset of $\mathbb N, \ \ 2^{\mathbb N}$. Then form any permutation of these sets, $(b_{n_{\{N\}}})$. Now simply flatten this, take first element $b_1$ and make it first element of $a$ and continue till you come to end of the first set and then the next element in $a$ is the first element of $b_2$... But again, if you start with set $\{1 | k \in \mathbb N\}$ (infinite sequence of ones), you will get the same as in the first case...
You would not be able to form a bijection between this sequence and a sequence that lists every natural number once, for if you started $(a_n)$ with listing every natural number once, you would be able to map only the first $\aleph_0$ elements of $(a_n)$ to $\mathbb N$.
But consider decimal expansion of $\pi$. Does it contain every natural number infinitely many times?
What am I not getting here?
What would the cardinality of such a sequence, if interpreted as a set, be?
| Let $a_n$ be the largest natural number $k$ such that $2^k$ divides $n+1$.
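A small sketch of mine listing the first terms of this "ruler sequence":

```python
# Hypothetical illustration: a_n = largest k with 2^k dividing n+1;
# every natural number appears infinitely often.
def a(n):
    k, m = 0, n + 1
    while m % 2 == 0:
        k, m = k + 1, m // 2
    return k

print([a(n) for n in range(16)])
# [0, 1, 0, 2, 0, 1, 0, 3, 0, 1, 0, 2, 0, 1, 0, 4]
```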
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/328821",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10",
"answer_count": 3,
"answer_id": 0
} |
Einstein Summation Notation Interpretation A vector field is called irrotational if its curl is zero. A vector field is called solenoidal if its divergence is zero. If A and B are irrotational, prove that A $ \times $ B is solenoidal.
I'm having a hard time with the proof equation that is required, and the steps that would go with it. I am defining $V$ as a vector.
$ \nabla \times V = 0 $ = irrotational
$ \nabla \cdot V = 0 $ = solenoidal
$ \nabla \times A = 0 $
$ \nabla \times B = 0 $
so therefore, ($ \nabla \times A $)+ ($ \nabla \times B) = \nabla \cdot (A \times B) $
would this be a correct setup? I'm having a hard time expanding this to E.S. form.
| This doesn't work; $(\nabla\times A)+(\nabla\times B)$ doesn't correspond to anything, and $\nabla\cdot(A\times B)$ doesn't expand to it (as noted by Henning Makholm in a comment, one of these is effectively a scalar and one effectively a vector). Instead, you want a form of the triple product identity $A\cdot(B\times C) = B\cdot(C\times A) = C\cdot (A\times B)$ - but this is where you need to be at least a little careful, because $\nabla$ isn't 'really' a vector.
As for Einstein summation form, the most important piece to keep in mind is the form for the cross-product: if $A\times B = C$, then $C^i = \epsilon^i{}_{jk}A^jB^k$ where $\epsilon$ is the so-called Levi-Civita symbol, which essentially represents the sign of the permutation of its coordinates (i.e., $\epsilon_{ijk}=0$ if any two of $i,j,k$ are pairwise equal, $\epsilon_{012}=\epsilon_{120}=\epsilon_{201}=1$, and $\epsilon_{021}=\epsilon_{210}=\epsilon_{102}=-1$). Writing out the triple-product identity in terms of this notation should make it clear how it works, and then substituting in your hypotheses should show you how to draw your conclusion.
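A symbolic verification sketch of mine (using sympy's vector module instead of index notation) of the identity the index computation produces, $\nabla\cdot(A\times B)=B\cdot(\nabla\times A)-A\cdot(\nabla\times B)$, which makes the conclusion immediate when both curls vanish:

```python
# Hypothetical symbolic check of div(A x B) = B . curl(A) - A . curl(B)
# for generic smooth fields, via sympy.vector.
import sympy as sp
from sympy.vector import CoordSys3D, curl, divergence

N = CoordSys3D('N')
a1, a2, a3, b1, b2, b3 = [sp.Function(s)(N.x, N.y, N.z)
                          for s in ('a1', 'a2', 'a3', 'b1', 'b2', 'b3')]
A = a1*N.i + a2*N.j + a3*N.k
B = b1*N.i + b2*N.j + b3*N.k

lhs = divergence(A.cross(B))
rhs = B.dot(curl(A)) - A.dot(curl(B))
print(sp.simplify(lhs - rhs))   # 0
```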
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/328879",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Rotating Matrix by $180$ degrees through another matrix To rotate a $2\times2$ matrix by $180$ degrees around the center point, I have the following formula:
$PAP$ = Rotated Matrix, where
$$P =\begin{bmatrix}
0 & 1\\
1 & 0
\end{bmatrix}$$
$$A= \begin{bmatrix}
a & b\\
c & d
\end{bmatrix}$$
And the resulting matrix will equal
\begin{bmatrix}
d & c\\
b & a
\end{bmatrix}
I need to have this in the form of:
$AP$ = Rotated matrix.
How would I get it to this form?
| There is no such $2 \times 2$ matrix $P$ which will do what you want for a general $A$. This is because you need to rearrange both the rows and columns and so need a matrix action on the left and on the right. To see this explicitly, define $$P = \left[\begin{array}{cc} p_1 & p_2 \\ p_3 & p_4 \end{array} \right] $$
Compute $AP$ and notice there is never an $a$ term which appears on the bottom row.
Edit for the case where $a,b,c,d$ are fixed variables and we can write $P$ in terms of them:
If $A$ is invertible, $$P = A^{-1}B$$ where $B$ is the rotated form of $A$. If you don't care about singular matrices (since most matrices are non-singular), then just use this. Otherwise expand out $AP$ as I mentioned before and find values for $P$ which will make it work, if possible.
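A small numpy sketch of mine of the $P=A^{-1}B$ construction for a sample invertible $A$:

```python
# Hypothetical numpy check: for invertible A, P = A^{-1} B (with B the
# 180-degree rotation of A) satisfies A P = B; note P depends on A.
import numpy as np

A = np.array([[1., 2.],
              [3., 4.]])
B = A[::-1, ::-1]             # entries rotated by 180 degrees
P = np.linalg.solve(A, B)     # P = A^{-1} B

print(np.allclose(A @ P, B))  # True
```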
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/328940",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 0
} |
How do I prove: If $A$ is an infinite set and $x$ is some element such that $x$ is not in $A$, then $A\sim A\cup \left\{x\right\}$. How do I prove: If $A$ is an infinite set and $x$ is some element such that $x$ is not in $A$, then $A\sim A\cup \left\{x\right\}$.
| Hint: Since $A$ is infinite there is some $A_0\subseteq A$ such that $|A_0|=|\Bbb N|$. Show that $\Bbb N$ has the wanted property, conclude that $A_0$ has it, and then conclude that $A$ has it as well.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/328998",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Representing an element mod $n$ as a product of two primes Given a positive integer $n$ and $x \in (\mathbb{Z}/n\mathbb{Z})^*$ what is the most efficient way to find primes $q_1,q_2$ st
$$q_1q_2 \equiv x \bmod n$$
when $n$ is large?
One option is just to take $q_1=2$ and then find the least prime $q_2 \equiv x/2 \bmod n.$ The bound on the least prime of this form is Linnik's theorem. Going by wikipedia (http://en.wikipedia.org/wiki/Linnik%27s_theorem) the best current result is $O(n^{5.2})$ which means that $O(n^{4.2})$ primality tests would be needed to find $q_2$ in the worst case. Can this be improved on?
| If $n$ is not unreasonably large, I'd take advantage of the birthday paradox. You should only need to collect about $O(\sqrt{n})$ distinct residues of primes before you find two that multiply to $x$. This does require $O(\sqrt{n})$ memory unlike your Linnik's solution which is essentially constant memory.
Since you wouldn't be targeting any particular residue class, adding one more residue to your collection is not much slower than just finding the next prime (it only takes $O(\log n)$ time to decide if that residue is already in your collection, and since the collection is not very big you wouldn't have to discard many values on average).
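A sketch of mine of the meet-in-the-middle idea: store the residues of primes seen so far and, for each new prime $q$, look up whether $x q^{-1} \bmod n$ has already occurred. The test values $x=17$, $n=101$ are arbitrary.

```python
# Hypothetical meet-in-the-middle sketch: find primes q1, q2 with
# q1*q2 = x (mod n) by storing residues of the primes seen so far.
from math import gcd
from sympy import nextprime

def two_prime_product(x, n):
    seen = {}                           # residue -> prime
    p = 1
    while True:
        p = nextprime(p)
        if gcd(p, n) != 1:
            continue
        want = x * pow(p, -1, n) % n    # need an earlier prime = want (mod n)
        if want in seen:
            return seen[want], p
        seen[p % n] = p

q1, q2 = two_prime_product(17, 101)
print(q1, q2, q1*q2 % 101)              # q1*q2 = 17 (mod 101)
```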
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/329077",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
} |
Normalized cross correlation via FHT - how can I get correlation score? I'm using the 2D Fast Hartley Transform to do fast correlation of two images in the frequency domain, which is the equivalent of NCC (normalized cross correlation) in the spatial domain.
However, with NCC, I can get a confidence metric that gives me an idea of how strong the correlation is at a certain offset. In the frequency domain version, I end up with a peak-finding problem in the inverse FHT after doing the correlation, so my question is:
Can I use the value of the peak that I find in the correlation image to derive the same (or similar) confidence metric that I can get from NCC? If so, how do I calculate it?
| Without knowing what you actually computed I can only assume that the following is probably what you want. I interpret "correlation image" as the cross correlation $I_1 \star I_2$ of the two images $I_1$ and $I_2$. Depending on normalizations in your FHT and IFHT there might be an additional scale factor. In what follows I assume that you used a normalized FHT that is exactly its own inverse. Otherwise you have to correct for the additional factor.
The FHT is only a tool to compute the correlation image, just as the FFT is. So it gives you exactly $I_1 \star I_2$. Therefore the confidence can be computed exactly the same way as for the FFT. That is, let $P$ be the peak value, $N$ the total number of pixels in either image, $\mu_1$ and $\sigma_1$ the mean and standard deviation of $I_1$ and $\mu_2$ and $\sigma_2$ the mean and standard deviation of $I_2$. Then the correlation coefficient between the two images at the offset of the peak is
$$
\frac{P - N \, \mu_1 \mu_2}{N \, \sigma_1 \sigma_2}.
$$
This is a value between $-1$ (maximal negative correlation) and $1$ (maximal positive correlation). The maximal score $1$ means that the two images are the same (up to an offset and gain correction of their intensities).
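A small numpy sketch of mine checking the formula at a single (zero) offset against np.corrcoef:

```python
# Hypothetical check: the peak-value formula reproduces the Pearson
# correlation coefficient at a given offset (offset 0 here).
import numpy as np

rng = np.random.default_rng(3)
I1 = rng.random((32, 32))
I2 = 0.7*I1 + 0.3*rng.random((32, 32))    # correlated test images

N = I1.size
P = np.sum(I1 * I2)                        # correlation value at offset 0
r = (P - N*I1.mean()*I2.mean()) / (N*I1.std()*I2.std())

print(r, np.corrcoef(I1.ravel(), I2.ravel())[0, 1])   # should match
```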
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/329141",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Solve $\frac{1}{x-1}+ \frac{2}{x-2}+ \frac{3}{x-3}+\cdots+\frac{10}{x-10}\geq\frac{1}{2} $ I would appreciate if somebody could help me with the following problem:
Q: find $x$
$$\frac{1}{x-1}+ \frac{2}{x-2}+ \frac{3}{x-3}+\cdots+\frac{10}{x-10}\geq\frac{1}{2} $$
| If the left side is $$f(x)=\sum_{k=1}^{10} \frac{k}{x-k},$$
then the graph of $f$ shows that $f(x)<0$ on $(-\infty,1)$, so no solutions there. For each $k=1..9$ there is a vertical asymptote at $x=k$ with the value of $f(x)$ coming down from $+\infty$ immediately to the right of $x=k$ and crossing the line $y=1/2$ between $k$ and $k+1$, afterwards remaining less than $1/2$ in the interval $(k,k+1)$.
This gives nine intervals of the form $(k,k+a_k]$ where $f(x) \ge 1/2$, and there is a tenth interval in which $f(x) \ge 1/2$ beginning at $x=10$ of the form $(10,a_{10}]$ where $a_{10}$ lies somewhere in the interval $[117.0538,117.0539].$ The values of the $a_k$ for $k=1..9$ are all less than $1$, starting out small and increasing with $k$, some approximations being
$$a_1=0.078,\ a_2=0.143,\ a_3=0.201,\ ...\ a_9=0.615.$$
The formula for finding the exact values of the $a_k$ for $k=1..10$ is a tenth degree polynomial equation which maple12 could not solve exactly, hence the numerical solutions above.
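The endpoints can be reproduced numerically; a sketch of mine using root bracketing on each interval:

```python
# Hypothetical numerical reproduction of the endpoints k + a_k: on each
# (k, k+1) the function decreases from +inf to -inf, so f(x) = 1/2 has
# exactly one root there; the last root lies in (10, 200).
import numpy as np
from scipy.optimize import brentq

f = lambda x: sum(k/(x - k) for k in range(1, 11)) - 0.5

eps = 1e-9
roots = [brentq(f, k + eps, k + 1 - eps) for k in range(1, 10)]
roots.append(brentq(f, 10 + eps, 200.0))
print(np.round(roots, 4))    # 1.078, 2.143, ..., 9.615, 117.0538...
```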
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/329300",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 0
} |
How do I show that $T$ is invertible? I'm really stuck on these linear transformations, so I have $T(x_1,x_2)=(-5x_1+9x_2,4x_1-7x_2)$, and I need to show that $T$ is invertible. So would I pretty much just say that this is the matrix: $$\left[\begin{matrix}-5&9\\4&-7\end{matrix}\right]$$ Then its inverse must be $\frac{1}{(-5)(-7)-(9)(4)}\left[\begin{matrix}-7&-9\\-4&-5\end{matrix}\right]=\left[\begin{matrix}7&9\\4&5\end{matrix}\right]$. But is that "showing" that $T$ is invertible? I'm also supposed to find a formula for $T^{-1}$. But that's the matrix I just found, right?
| I think it would be more in the spirit of the question (it sounds like it is an exercise in a course or book) to write down a linear map $S$ such that $S\circ T$ and $T\circ S$ are both the identity - the matrix you have written down tells you how to do this. Then such an $S$ is $T^{-1}$.
You should also note that there are different matrices that can represent the map $T$, but it is true that checking that any such matrix is invertible amounts to a proof that $T$ is invertible.
This non-uniqueness of matrices also means that I would disagree that the matrix you found is the same thing as "a formula for $T^{-1}$". You should say what the map $T^{-1}$ does to a point $(y_1,y_2)$.
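A one-line numpy check of mine confirming the matrix computation:

```python
# Hypothetical check that the two matrices are mutually inverse.
import numpy as np

A    = np.array([[-5, 9], [4, -7]])
Ainv = np.array([[ 7, 9], [4,  5]])
print(np.allclose(A @ Ainv, np.eye(2)), np.allclose(Ainv @ A, np.eye(2)))
```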
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/329367",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 5,
"answer_id": 3
} |
How to create a generating function / closed form from this recurrence? Let $f_n$ = $f_{n-1} + n + 6$ where $f_0 = 0$.
I know $f_n = \frac{n^2+13n}{2}$ but I want to pretend I don't know this. How do I correctly turn this into a generating function / derive the closed form?
| In two answers, it is derived that the generating function is
$$
\begin{align}
\frac{7x-6x^2}{(1-x)^3}
&=x\left(\frac1{(1-x)^3}+\frac6{(1-x)^2}\right)\\
&=\sum_{k=0}^\infty(-1)^k\binom{-3}{k}x^{k+1}+6\sum_{k=0}^\infty(-1)^k\binom{-2}{k}x^{k+1}\\
&=\sum_{k=1}^\infty(-1)^{k-1}\left(\binom{-3}{k-1}+6\binom{-2}{k-1}\right)x^k\\
&=\sum_{k=1}^\infty\left(\binom{k+1}{k-1}+6\binom{k}{k-1}\right)x^k\\
&=\sum_{k=1}^\infty\left(\binom{k+1}{2}+6\binom{k}{1}\right)x^k\\
&=\sum_{k=1}^\infty\frac{k^2+13k}{2}x^k\\
\end{align}
$$
Therefore, the general term is $f_k=\dfrac{k^2+13k}{2}$.
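A sympy sketch of mine matching the series coefficients against $(k^2+13k)/2$:

```python
# Hypothetical sympy check: series coefficients of the generating
# function agree with (k^2 + 13k)/2.
import sympy as sp

x = sp.symbols('x')
G = (7*x - 6*x**2)/(1 - x)**3
coeffs = sp.Poly(sp.series(G, x, 0, 8).removeO(), x).all_coeffs()[::-1]
print(coeffs)                                 # [0, 7, 15, 24, 34, 45, 57, 70]
print([(k**2 + 13*k)//2 for k in range(8)])   # the same list
```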
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/329424",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 5,
"answer_id": 3
} |
Integral of irrational function $$
\int \frac{\sqrt{\frac{x+1}{x-2}}}{x-2}dx
$$
I tried:
$$
t =x-2
$$
$$
dt = dx
$$
but it didn't work.
Do you have any other ideas?
| Let $y=x-2$ and integrate by parts to get
$$\int dx \: \frac{\sqrt{\frac{x+1}{x-2}}}{x-2} = -2 (x-2)^{-1/2} (x+1)^{1/2} + \int \frac{dy}{\sqrt{y (y+3)}}$$
In the second integral, complete the square in the denominator to get
$$\int \frac{dy}{\sqrt{y (y+3)}} = \int \frac{dy}{\sqrt{(y+3/2)^2-9/4}}$$
This integral may be solved using a substitution $y+3/2=\frac{3}{2} \sec{\theta}$, $dy = \frac{3}{2} \sec{\theta} \tan{\theta}\, d\theta$. Using the fact that
$$\int d\theta \sec{\theta} = \log{(\sec{\theta}+\tan{\theta})}+C$$
we may evaluate the integral exactly. I leave the intervening steps to the reader; I get
$$\int dx \: \frac{\sqrt{\frac{x+1}{x-2}}}{x-2} = -2 (x-2)^{-1/2} (x+1)^{1/2} + \log{\left[\frac{2}{3}\left(x-\frac{1}{2}\right)+\sqrt{\frac{4}{9}\left(x-\frac{1}{2}\right)^2-1}\right]}+C$$
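A quick spot-check sketch of mine: differentiate the final antiderivative numerically and compare with the integrand at a few points $x>2$.

```python
# Hypothetical spot-check: the numerical derivative of the closed-form
# antiderivative should match the integrand for x > 2.
import numpy as np

integrand = lambda x: np.sqrt((x + 1)/(x - 2))/(x - 2)
u = lambda x: (2.0/3.0)*(x - 0.5)
F = lambda x: (-2*np.sqrt(x + 1)/np.sqrt(x - 2)
               + np.log(u(x) + np.sqrt(u(x)**2 - 1)))

h = 1e-6
for x in (2.5, 3.0, 5.0, 10.0):
    print(x, (F(x + h) - F(x - h))/(2*h), integrand(x))   # columns agree
```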
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/329513",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Summations involving $\sum_k{x^{e^k}}$ I'm interested in the series
$$\sum_{k=0}^\infty{x^{e^k}}$$
I started "decomposing" the function as so:
$$x^{e^k}=e^{(e^k \log{x})}$$
So I believe that as long as $|(e^k \log{x})|<\infty$, we can compose a power series for the exponential. For example,
$$e^{(e^k \log{x})}=\frac{(e^k \log{x})^0}{0!}+\frac{(e^k \log{x})^1}{1!}+\frac{(e^k \log{x})^2}{2!}+\dots$$
Then I got a series for
$$\frac{(e^k \log{x})^m}{m!}=\sum_{j=0}^\infty{\frac{m^j (\log{x})^m}{m!\,j!}k^j}$$
THE QUESTION
I believe that we can then plug in the last series into the equation to get
$$\sum_{k=0}^\infty{x^{e^k}}=\sum_{k=0}^\infty{\sum_{j=0}^\infty{ \sum_{m=0}^\infty{\frac{m^j (\log{x})^m}{m!\,j!}k^j} }}$$
Is the order of summations correct? i.e. Must $\sum_k$ come before $\sum_j$?
Also, can we switch the order of the summations? If so, which order(s) of summations are correct?
If we consider values $\alpha,\beta$ and $t$ greater than zero with $\alpha\beta=2\pi$ then your series can be expressed in relation to several other sums under the following double exponential series identity:
$$\alpha \sum_{k=0}^\infty e^{te^{k\alpha}}=\alpha\left(\frac{1}{2}-\sum_{k=1}^\infty\frac{(-1)^{k}t^k}{k!(e^{k\alpha}-1)}\right)-\gamma-\ln(t)+2\sum_{k=1}^\infty\varphi(k\beta)$$
Where we have:
$$\varphi(\beta)=\frac{1}{\beta}\Im\left(\frac{\Gamma(i\beta+1)}{t^{i\beta}}\right)=\sqrt{\frac{\pi}{\beta\sinh(\pi \beta)}}\cos\left(\beta\log\left(\frac{\beta}{n}\right)-\beta-\frac{\pi}{4}-\frac{B_2}{1\times 2 }\frac{1}{\beta}+\cdots\right)$$
This along with a similar identity was stated without proof on page $279$ of Ramanujan's second notebook. In $1994$ Bruce C. Berndt and James Lee Hafner published a proof, which can be found here. Unfortunately I can't access the article without paying; however, just by looking at the identity I'm more than willing to bet they made use of the Poisson summation formula.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/329570",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
On the meaning of the second derivative When we want to find the velocity of an object we use the derivative to find this. However, I just learned that when you find the acceleration of the object you find the second derivative.
I'm confused on what is being defined as the parameters of acceleration. I always thought acceleration of an object is its velocity (d/t).
Furthermore, in the second derivative are we using the x value or the y value of interest. In the first derivative we were only concerned with the x value. Does this still hold true with the second derivative?
I would post pictures but apparently I'm still lacking 4 points.
| The position function is typically denoted $r=x(t)$. Velocity is the derivative of the position function with respect to time: $v(t)=\dfrac{dx(t)}{dt}$. Acceleration is the derivative of the velocity function with respect to time: $a(t)=\dfrac{dv(t)}{dt}$. This is equivalent to the second derivative of the position function with respect to time: $$\dfrac{d}{dt}\dfrac{d}{dt}x(t)=\dfrac{d}{dt}\dfrac{dx(t)}{dt}=\dfrac{d}{dt}v(t)=\dfrac{dv(t)}{dt}=a(t).$$
The derivative is taken because it gives the change of a function with respect to its input variable.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/329631",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 7,
"answer_id": 4
} |
$\forall m \exists n$, $mn = n$ True or False
Identify if the statement is true or false. If false, give a counterexample.
$\forall m \exists n$, $mn = n$, where $m$ and $n$ are integers.
I said that this statement was false; specifically, that it is false when $m$ is any integer other than $1$
Apparently this is incorrect; honestly though, I can't see how it is.
| As others have noted, the statement is true:
$$\forall m \exists n, \; mn = n, \;\;\; m,\,n \in \mathbb Z \tag {1}$$
For all $m$, there exists an $n$ such that $mn = n$. To show this is true we need only exhibit such an $n$, and $n = 0$ works: $m\cdot 0 = 0$ for all $m$.
Since there exists an $n$ ($n = 0$) such that for every $m$, $mn = n$, this is one case where one can switch the order of the quantifiers and preserve truth:
$$\exists n \forall m,\; mn = n,\;\;\;m, n\in \mathbb Z \tag{2}$$
Furthermore, this $n = 0$ is the unique $n$ satisfying $(1), (2)$. It is precisely the defining property of zero under multiplication, satisfied by and only by $0$.
The existence of a unique "something" is denoted: $\exists !$, giving us, the strongest (true) statement yet:
$$\exists! n\forall m,\;\; mn = n,\;\;\;m, n\in \mathbb Z\tag{3}$$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/329687",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 5,
"answer_id": 2
} |
Equilateral triangle geometric problem I have an equilateral triangle with unknown side $a$. Next, I pick a point $P$ inside the triangle. The distances are $|AP|=3$ cm, $|BP|=4$ cm, $|CP|=5$ cm.
It is the red triangle in the picture. The exercise is to calculate the area of the equilateral triangle (without using the law of cosines or the law of sines, just simple elementary arguments).
The first thing I did was to reflect the point $A$ across the opposite side $a$, obtaining $D$. Afterwards I constructed another equilateral triangle $\triangle PP_1C$.
Now it is possible to say something about the angles, namely that $\angle ABD=120^{\circ}$, $\angle PBP_1=90^{\circ} \implies \angle APB=150^{\circ}$ and $\alpha+\beta=90^{\circ}$
Now I have no more ideas. Could you help me finish the proof to get $a$ and hence the area of $\triangle ABC$? If you have alternative ideas for getting the area without reflecting the point $A$, that would be interesting.
| Well, since the distances form a Pythagorean triple, the choice was not that random. You are on the right track and reflection is a great idea, but you need to take it a step further.
Check that in the (imperfect) drawing below, $\triangle RBM$, $\triangle AMQ$, $\triangle MPC$ are equilateral, since they each have two equal sides enclosing angles of $\frac{\pi}{3}$. Furthermore, $S_{\triangle ARM}=S_{\triangle QMC}=S_{\triangle MBP}$, each having sides of length 3, 4, 5 respectively (sometimes known as the Egyptian triangle, as the ancient Egyptians are said to have known the method of constructing a right angle by marking 12 equal segments on a rope and tying it to poles to form a triangle; all this long before Pythagoras' theorem was conceived).
By construction the area of the entire polygon $ARBPCQ$ is $2S_{\triangle ABC}$
On the other hand
$$S_{ARBPCQ}= S_{\triangle AMQ}+S_{\triangle MPC}+S_{\triangle RBM}+3S_{\triangle ARM}\\=\frac{3^2\sqrt{3}}{4}+\frac{4^2\sqrt{3}}{4}+\frac{5^2\sqrt{3}}{4}+3\cdot\frac{1}{2}\cdot 3\cdot 4 = 18+\frac{25}{2}\sqrt{3}$$
Hence
$$S_{\triangle ABC}= 9+\frac{25\sqrt{3}}{4}$$
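A numeric sanity check of this result (a sketch using scipy; the coordinate placement and the initial guess are my own choices):
import numpy as np
from scipy.optimize import fsolve

def eqs(v):
    x, y, a = v                              # P = (x, y), side length a
    C = (a/2, a*np.sqrt(3)/2)                # with A = (0,0), B = (a,0)
    return [x**2 + y**2 - 9,                 # |AP| = 3
            (x - a)**2 + y**2 - 16,          # |BP| = 4
            (x - C[0])**2 + (y - C[1])**2 - 25]  # |CP| = 5

x, y, a = fsolve(eqs, [2.9, 0.9, 6.8])
print(np.sqrt(3)/4 * a**2, 9 + 25*np.sqrt(3)/4)   # both ~ 19.825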
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/329761",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 4,
"answer_id": 0
} |
How to simplify a square root How can the following:
$$
\sqrt{27-10\sqrt{2}}
$$
Be simplified to:
$$
5 - \sqrt{2}
$$
Thanks
| Set the nested radical as the difference of two square roots so that $$\sqrt{27-10\sqrt{2}}=(\sqrt{a}-\sqrt{b})$$ Then square both sides so that $$27-10\sqrt{2}=a-2\sqrt{a}\sqrt{b}+b$$ Set (1) $$a+b=27$$ and set (2) $$-2\sqrt{a}\sqrt{b}=-10\sqrt{2}$$ Square both sides of (2) to get $$4ab= 200$$ and solve for $b$ to get $$b=\frac{50}{a}$$ Replacing $b$ in (1) gives $$a+\frac{50}{a}=27$$ Multiply all terms by $a$ and convert to the quadratic equation $$a^2-27a+50=0$$ Solving the quadratic gives $a=25$ or $a=2$. Replacing $a$ and $b$ in the first difference of square roots formula above with $25$ and $2$ the solutions to the quadratic we have $$\sqrt{25}-\sqrt{2}$$ or $$5-\sqrt{2}$$
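For what it's worth, sympy can denest such radicals automatically; a one-line check (assuming a recent sympy, which exports sqrtdenest at the top level):
from sympy import sqrt, sqrtdenest
print(sqrtdenest(sqrt(27 - 10*sqrt(2))))   # expected: 5 - sqrt(2)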
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/329838",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9",
"answer_count": 7,
"answer_id": 4
} |
How to find $f:\mathbb{Q}^+\to \mathbb{Q}^+$ if $f(x)+f\left(\frac1x\right)=1$ and $f(2x)=2f\bigl(f(x)\bigr)$
Let $f:\mathbb{Q}^+\to \mathbb{Q}^+$ be a function such that
$$f(x)+f\left(\frac1x\right)=1$$
and
$$f(2x)=2f\bigl(f(x)\bigr)$$
for all $x\in \mathbb{Q}^+$. Prove that
$$f(x)=\frac{x}{x+1}$$
for all $x\in \mathbb{Q}^+$.
This problem is from my student.
| Some ideas:
$$\text{I}\;\;\;\;x=1\Longrightarrow f(1)+f\left(\frac{1}{1}\right)=2f(1)=1\Longrightarrow \color{red}{f(1)=\frac{1}{2}}$$
$$\text{II}\;\;\;\;\;\;\;\;f(2)=2f(f(1))=2f\left(\frac{1}{2}\right)$$
But we also know that
$$f(2)+f\left(\frac{1}{2}\right) =1$$
so from II we get
$$3f\left(\frac{1}{2}\right)=1\Longrightarrow \color{red}{f\left(\frac{1}{2}\right)=\frac{1}{3}}\;,\;\;\color{red}{f(2)=\frac{2}{3}}$$
and also:
$$ \frac{1}{2}=f(1)=f\left(2\cdot \frac{1}{2}\right)=2f\left(f\left(\frac{1}{2}\right)\right)=2f\left(\frac{1}{3}\right)\Longrightarrow \color{red}{f\left(\frac{1}{3}\right)=\frac{1}{4}}\;,\;\color{red}{f(3)=\frac{3}{4}}$$
One more step:
$$f(4)=f(2\cdot2)=2f(f(2))=2f\left(\frac{2}{3}\right)=2\cdot 2f\left(f\left(\frac{1}{3}\right)\right)=4f\left(\frac{1}{4}\right)$$
and thus:
$$1=f(4)+f\left(\frac{1}{4}\right)=5f\left(\frac{1}{4}\right)\Longrightarrow \color{red}{f\left(\frac{1}{4}\right)=\frac{1}{5}}\;,\;\;\color{red}{f(4)=\frac{4}{5}}$$
...and etc.
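A quick symbolic check that the claimed closed form is at least consistent with both functional equations (a sympy sketch; it verifies the identities, not uniqueness):
import sympy as sp
x = sp.symbols('x', positive=True)
f = lambda u: u / (u + 1)
print(sp.simplify(f(x) + f(1/x) - 1))    # 0
print(sp.simplify(f(2*x) - 2*f(f(x))))   # 0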
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/329894",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 1,
"answer_id": 0
} |
Rotating a system of points to obtain a point in a given place Given an arbitrary number of points which lie on the surface of a unit sphere, one of which (which I will call K) is <0, 0, 1> in a rotated system (i.e. the rotation matrix is unknown), I'm trying to figure out how to (un-)rotate the system so that this point becomes <0, 0, 1> in the original system.
The original method I came up with is to find the angle between the x and z components of K and rotate around the y-axis by that value:
double yrot = Math.atan2(point[0], point[2]);
for (int i = 0; i < p0.length; i++) {
    double[] p = p0[i];
    p0[i][0] = p[0] * Math.cos(yrot) - p[2] * Math.sin(yrot);
    p0[i][2] = p[0] * Math.sin(yrot) + p[2] * Math.cos(yrot);
}
Then I find the angle between the y and z components of K and rotate around the x-axis by that value:
double xrot = Math.atan2(point[1], point[2]);
for (int i = 0; i < p0.length; i++) {
    double[] p = p0[i];
    p0[i][1] = p[1] * Math.cos(xrot) - p[2] * Math.sin(xrot);
    p0[i][2] = p[1] * Math.sin(xrot) + p[2] * Math.cos(xrot);
}
This works great except for one issue: the magnitudes of the points are no longer 1.
Where am I going wrong here?
In the above code blocks, p0 refers to the system of points while point refers to K.
| Having the vector $K$ and knowing that it corresponds to $(0,0,1)$, the rotation axis can be obtained by taking the cross product $M = K\times(0,0,1)$; this yields a vector perpendicular to both $K$ and $(0,0,1)$, so you can rotate by the angle between $K$ and $(0,0,1)$ around the vector $M$. For that, the expression in this link will do.
Hope it helps :)
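For concreteness, here is a minimal numpy sketch of that recipe via Rodrigues' rotation formula (the function name and tolerance are my own choices; points is an array of row vectors and K the point that should map to (0, 0, 1)):
import numpy as np

def align_to_z(K, points):
    K = K / np.linalg.norm(K)
    z = np.array([0.0, 0.0, 1.0])
    axis = np.cross(K, z)                  # M = K x (0,0,1)
    s, c = np.linalg.norm(axis), K @ z     # sine and cosine of the angle
    if s < 1e-12:                          # K already (anti)parallel to z
        return points if c > 0 else points * np.array([1.0, -1.0, -1.0])
    a = axis / s
    ax = np.array([[0, -a[2], a[1]],
                   [a[2], 0, -a[0]],
                   [-a[1], a[0], 0]])      # cross-product matrix of the unit axis
    R = np.eye(3) + s * ax + (1 - c) * (ax @ ax)   # Rodrigues' formula
    return points @ R.T
Because R is orthogonal, every rotated point keeps magnitude 1 by construction.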
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/329953",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
(Probability Space) Shouldn't $\mathcal{F}$ always equal the power set of $\Omega$? This is from the wikipedia article about Probability Space:
A probability space consists of three parts:
1- A sample space, $\Omega$, which is the set of all possible outcomes.
2- A set of events $\mathcal{F}$, where each event is a set containing zero or more outcomes.
3- The assignment of probabilities to the events, that is, a function $P$ from events to probability levels.
I can not think of a case where $\mathcal{F}$ is not equal to the power set of $\Omega$. What is the purpose of the second part in this definition then?
| If $\Omega$ is an infinite set, you can run into problems if you try to define a measure for every set. This is a common issue in measure theory, and the reason why the notion of a $\sigma$-algebra exists. See, for instance:
The Vitali Set: http://en.wikipedia.org/wiki/Vitali_set
Banach Tarski Paradox: http://en.wikipedia.org/wiki/Banach%E2%80%93Tarski_paradox
For reasons why there are some sets so pathological that it's nonsensical to assign them a measure/probability.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/330002",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 1
} |
How to evaluate $\int_0^{\pi/2}x^2\ln(\sin x)\ln(\cos x)\ \mathrm dx$ Find the value of
$$I=\displaystyle\int_0^{\pi/2}x^2\ln(\sin x)\ln(\cos x)\ \mathrm dx$$
We have the information that
$$J=\displaystyle\int_0^{\pi/2}x\ln(\sin x)\ln(\cos x)\ \mathrm dx=\dfrac{\pi^2}{8}\ln^2(2)-\dfrac{\pi^4}{192}$$
| This is not quite a complete answer but goes a good way towards showing that the idea of @kalpeshmpopat is not so far off the mark - if we want to answer the question that was originally asked.
First, numerical investigation indicates that the correct integral is
$$I=\displaystyle\int_0^{\pi/2}x\ln(\sin x)\ln(\cos x)dx=\dfrac{(\pi\ln{2})^2}{8}-\dfrac{\pi^4}{192}.$$
Now, as @kalpeshmpopat points out, a simple substitution, together with the facts that $\cos(\frac{\pi}{2}-x)=\sin(x)$ and vice-versa, shows that
$$\displaystyle\int_0^{\pi/2}x\ln(\sin x)\ln(\cos x)dx=\int_0^{\pi/2}(\frac{\pi}{2}-x)\ln(\sin x)\ln(\cos x)dx.$$
Thus, if we add these two together we get
$$\displaystyle\int_0^{\pi/2} \frac{\pi}{2} \ln(\sin x)\ln(\cos x)dx=2I.$$
All that remains to show is that
$$\displaystyle\int_0^{\pi/2} \frac{\pi}{2} \ln(\sin x)\ln(\cos x)dx =
\frac{1}{96} \pi ^2 \left(6 \log ^2(4)-\pi^2\right),$$
which Mathematica can do. It's getting late, but my guess on this last integral would be to expand $\ln(\cos(x))$ into a power series (which is easy, since we know $\ln(1+y)$) and try to integrate $x^n \ln(\sin(x))$.
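A quick numerical confirmation of the closed form for $J$ (a sketch with mpmath, whose tanh-sinh quadrature handles the logarithmic endpoint singularities):
from mpmath import mp, quad, log, sin, cos, pi
mp.dps = 25
J = quad(lambda x: x*log(sin(x))*log(cos(x)), [0, pi/2])
print(J)                              # ~ 0.08545
print((pi*log(2))**2/8 - pi**4/192)   # agrees to working precision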
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/330057",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "60",
"answer_count": 6,
"answer_id": 0
} |
Relating volume elements and metrics. Does a volume element + uniform structure induce a metric? AFAIK a metric uniquely determines the volume element up to sign, since a metric determines the lengths of supplied vectors and the angles between them, but I do not see a way to derive a metric from the volume element. The volume form must heavily constrain the set of compatible metrics, though. So is there a nice statement of what the volume element tells you about a metric that is compatible with it?
If you have a volume element what is the least additional information you need to be able to derive a metric?
EDIT:
Is the minimum additional information uniform structure?
A volume element gives us a kind of notion of local scale, but much weaker than a metric. We can measure a volume, but we can't tell if it looks like a sphere, a pancake or a string (that is the essence of @Rhys's answer). To get a metric we need to be able to tell that a volume is "spherical", or to compare distances without assigning an actual value to them, since if we can do that then we can use the volume to determine the radius of a hypersphere in the limit of small volumes. So the last part of my question is really asking how to do that without implicitly specifying a metric. I believe that uniform structure does this. It specifies a set of entourages that are binary relations saying that points are within some unspecified distance of each other. An entourage determines a ball of that size around each point. To induce a metric, the uniform structure needs to be compatible with the volume element by assigning the same volume to all the balls induced by the same entourage in the limit of small spheres (delta-epsilonics required to make this formal). The condition will not hold for macroscopic spheres if the uniform structure is inducing a curved geometry. There must also be some conditions for the uniform structure to be compatible with the manifold structure.
Does this make sense?
| The volume form tells you very little about the metric. Let $V$ be an $n$-dimensional vector space, with volume form $v_1\wedge \ldots \wedge v_n$, where $\{v_1,\ldots,v_n\}$ are linearly independent elements of $V$ (any volume form can be written this way). Now define a metric by the condition that this be an orthonormal set; this metric gives you the above volume form. But it's far from unique, as there are many choices of $n$ vectors which give the same volume form.
The same story should apply on some small enough patch of a Riemannian manifold.
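A toy numerical instance of this non-uniqueness in $n = 2$ (a numpy sketch): both metrics below induce the same volume density $\sqrt{\det g}$, yet they measure the vector $e_1$ differently.
import numpy as np
G1 = np.eye(2)                     # standard metric
G2 = np.diag([4.0, 0.25])          # different metric, same volume form
print(np.sqrt(np.linalg.det(G1)), np.sqrt(np.linalg.det(G2)))  # both 1.0
e1 = np.array([1.0, 0.0])
print(e1 @ G1 @ e1, e1 @ G2 @ e1)  # squared lengths differ: 1.0 vs 4.0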
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/330126",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 1,
"answer_id": 0
} |
Example of a (dis)continuous function The following thought came to my mind: Suppose we have a function $f$ such that, for arbitrary $\varepsilon>0$, $f(a+\varepsilon)= 100\,000$ while $f(a) = 1$. Why is or isn't this function continuous?
I thought that with the epsilon-delta definition, where we would choose delta just bigger than $100\,000$, the function could be shown to be continuous. Am I wrong?
| Hint:
Choose $0<r<100\,000-1$.
Then $\forall~\epsilon>0,~|f(a+\epsilon)-f(a)|=100\,000-1>r.$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/330188",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 2
} |
Exercise in propositional logic. Which of the following arguments is valid?
A. If it rains, then the grass grows. The worms are not happy unless it rains. Therefore, If the worms are happy , then the grass grows.
B. If the wind howls, then the wolf howls. If the wind howls, then the birds sing. Therefore, if the birds sing, then the wolf howls.
C. If the sun shines, then it is day. If the stars shine, then the sun does not shine. Therefore, if the stars shine, it is not day
D. Both A and C.
| A. $(U \wedge (\neg V \Rightarrow \neg U) \wedge (V \Rightarrow W)) \Rightarrow W$
True. If the worms are happy, then it rains (by the contrapositive of the second premise), and then the grass grows.
B. $((U \Rightarrow V) \wedge (U \Rightarrow W)) \Rightarrow (V \Rightarrow W)$
Wrong. If the birds sing, you don't know anything else. In particular, you don't know whether the wind howls; only the converse is true.
C. $((U \Rightarrow V) \wedge (W \Rightarrow \neg U)) \Rightarrow (W \Rightarrow \neg V)$
Wrong. If the stars shine, then the sun does not shine, but you cannot conclude that it is not day, since $U \Rightarrow V$ does not imply $\neg U \Rightarrow \neg V$, only $\neg V \Rightarrow \neg U$.
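Since the question is about validity, a brute-force truth-table check settles each argument mechanically (a Python sketch; the variable letters follow the encodings above):
from itertools import product

imp = lambda p, q: (not p) or q

def valid(premises, conclusion):
    return all(conclusion(*v)
               for v in product([False, True], repeat=3)
               if all(p(*v) for p in premises))

# A: U = worms happy, V = it rains, W = grass grows
print(valid([lambda U, V, W: imp(not V, not U), lambda U, V, W: imp(V, W)],
            lambda U, V, W: imp(U, W)))        # True
# B: U = wind howls, V = wolf howls, W = birds sing
print(valid([lambda U, V, W: imp(U, V), lambda U, V, W: imp(U, W)],
            lambda U, V, W: imp(W, V)))        # False
# C: U = sun shines, V = it is day, W = stars shine
print(valid([lambda U, V, W: imp(U, V), lambda U, V, W: imp(W, not U)],
            lambda U, V, W: imp(W, not V)))    # False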
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/330278",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 1
} |
Find at least two ways to find $a, b$ and $c$ in the parabola equation I've been fighting with this problem for some hours now, and I decided to ask the clever people on this website.
The parabola with the equation $y=ax^2+bx+c$ goes through the points $P, Q$ and $R$. How can I find $a, b$ and $c$ in at least two different ways, when
$P(0,1), Q(1,0)$ and $R(-1,3)$?
| One way: you can set up the normal equations using the least-squares method.
First form the sum of squared deviations, say $S$; then set the partial derivatives of $S$ with respect to $a$, $b$ and $c$ equal to zero to obtain three normal equations, and solve these three equations for $a$, $b$ and $c$.
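The other obvious way is direct interpolation: the three points give three linear equations in $a, b, c$. A quick numerical check (a numpy sketch):
import numpy as np
pts = [(0, 1), (1, 0), (-1, 3)]
A = np.array([[x**2, x, 1] for x, _ in pts], dtype=float)
rhs = np.array([v for _, v in pts], dtype=float)
a, b, c = np.linalg.solve(A, rhs)
print(a, b, c)   # 0.5 -1.5 1.0, i.e. y = x^2/2 - 3x/2 + 1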
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/330342",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 1
} |
Exact differential equations. The test to tell if it's exact isn't working; am I doing something wrong? I got this differential equation:
$$(y^{3} + \cos t)\,y' = 2 + y \sin t,\text{ where }y(0) = -1$$
I tried to check the exactness condition $\partial M/\partial y = \partial N/\partial t$, but I can't seem to get the two sides to match. So what would the next step be to solve this problem?
| A related problem. Here is how to start. Write the ode as
$$ (y^3 + \cos t)\frac{dy}{dt} = 2 + y \sin t \implies (y^3 + \cos t)\,{dy} - (2 + y \sin t)\,dt=0 . $$
Now, you should be able to proceed.
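A sympy sketch of the rest, assuming the corrected reading of the equation above (treating $y$ as a symbol; the exactness test passes, and a potential function finishes the job):
import sympy as sp
t, y = sp.symbols('t y')
M = -(2 + y*sp.sin(t))   # dt-coefficient
N = y**3 + sp.cos(t)     # dy-coefficient
print(sp.simplify(sp.diff(M, y) - sp.diff(N, t)))   # 0, so the equation is exact

F = sp.integrate(N, y) - 2*t   # the -2t term makes F_t = M hold
print(sp.simplify(sp.diff(F, t) - M), sp.simplify(sp.diff(F, y) - N))  # 0 0
print(sp.Eq(F, F.subs({t: 0, y: -1})))   # y**4/4 + y*cos(t) - 2*t = -3/4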
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/330401",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Complement is connected iff Connected components are Simply Connected Let $G$ be an open subset of $\mathbb{C}$. Prove that $(\mathbb{C}\cup \{ \infty\})-G$ is connected if and only if every connected component of $G$ is simply connected.
| If the connected open set $H \subset \mathbb C$ is not simply connected, there is a simple closed curve $C$ in $H$ that is not homotopic to a point in $H$. Therefore there must be points inside $C$ that are not in $H$. Such a point and $\infty$ are in different connected components of $(\mathbb{C} \cup \{\infty\}) - G$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/330479",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 1,
"answer_id": 0
} |
Diophantine equation on an example I have one task here that could be solved by guessing the numbers, but the seminar leader said a Diophantine equation would also lead to a solution. Does anyone have an idea how that works? And could you please show it to me on this example?
The example:
The number $400$ shall be divided into $2$ summands; the first summand should be divisible by $7$ and the second summand should be divisible by $13$. Thank you for your help!
| You are looking for solutions to the diophantine equation:
$$7x+13y = 400.$$ Clearly, if you can find integer values $(x,y)$ that satisfy this equation, then $7x$ is your first summand and $13y$ your second summand.
To solve such a linear diophantine equation, you should use the Euclidean algorithm. Have you done this or do you want me to elaborate a bit more?
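Since the numbers are tiny, even a direct search shows what the Euclidean-algorithm route will produce (a Python sketch listing all splits with nonnegative summands):
for x in range(400 // 7 + 1):
    r = 400 - 7*x
    if r % 13 == 0:
        print(7*x, r)   # 49+351, 140+260, 231+169, 322+78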
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/330537",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
} |
I need to show that the intersection of $T$ and $\bar{I}$ is empty as well. Let's assume that $T\subset\mathbb{R}^{n}$ is open, $I\subset\mathbb{R}^n$, and $T\cap I=\varnothing$.
I need to show that the intersection of $T$ and $\bar{I}$ (the closure of $I$) is empty as well.
How can I show this? I have seen this question in a book, but I have no idea how to approach it. Thank you!
| Hint: Since $T$ is open, the complement of $T$ is closed. So now what happens when you take the closure of $I$? How could it possibly have non-empty intersection with $T$?
(Note that $I$ is contained in the complement of $T$, since $I\cap T = \emptyset$.)
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/330686",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
How many strings are there that use each character in the set $\{a, b, c, d, e\}$? How many strings are there that use each character in the set $\{a, b, c, d, e\}$ exactly once and that contain the sequence $ab$ somewhere in the string?
My intuition is to do the following:
$a \; b \; \_ \; \_ \; \_ + \_ \; a \; b \; \_ \; \_ + \_ \; \_ \; a \; b \; \_ + \_ \; \_ \; \_ \; a \; b$
$3! + 3! + 3! + 3! = 24$
Can someone explain why you have to use the product rule instead of adding the $3!$, so I can understand this in an intuitive way? Thanks!
| As Darren aptly pointed out:
Note that treating $\{a, b, c, d, e\}\;$ like a set of four objects where $a$ and $b$ are "glued together" to count as one object $\{ab\}$ gives a set of four elements $\{\{ab\}, c, d, e\}$, and the number of possible permutations (arrangements) of a set of $n = 4$ objects is $n! = 4! = 24$.
In a sense, that's precisely what you did in your post: treating $\underline{a\; b}$ as one "placeholder".
If you had remembered to add an additional term of $3!$ in your "intuitive" approach, (now edited to include it) note that $$3! + 3! + 3! + 3! = 4\cdot(3!) = 4\cdot (3\cdot 2\cdot 1) =4! = 24.$$
So in this way, you compute precisely what we computed above.
What's nice about combinatorics is that "double counting" - using different ways to compute the same result - is a much-utilized method of proof!
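And indeed the count is small enough to verify by brute force (a Python sketch using itertools):
from itertools import permutations
count = sum(1 for p in permutations("abcde") if "ab" in "".join(p))
print(count)   # 24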
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/330755",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 1
} |
probability density function In a book, the following sentence is stated about the probability density function at a point $a$: "it is a measure of how likely it is that the random variable will be near $a$." What is the meaning of this?
| The meaning is that if the probability density function is continuous at $a$,
then the probability that the random variable $X$ takes on values in a short interval of length $\Delta$ that contains $a$ is approximately $f_X(a)\Delta$. Thus, for example,
$$P\left\{a - \frac{\Delta}{2} < X < a + \frac{\Delta}{2}\right\} \approx f_X(a)\Delta$$
and the approximation improves as $\Delta \to 0$. In other words,
$$\lim_{\Delta \to 0}\frac{P\left\{a - \frac{\Delta}{2}
< X < a + \frac{\Delta}{2}\right\}}{\Delta} = f_X(a)$$
at all points $a$ where $f_X(\cdot)$ is continuous.
"it is a measure of how likely it is that the random variable will be near $a$."
If by "near $a$" we mean in the interval $\left(a-\frac{\Delta}{2}, a-\frac{\Delta}{2}\right)$ where $\Delta$ is some fixed small positive number,
then, assuming that $f_X(\cdot)$ is continuous at $a$ and $b$, the
probabilities that $X$ is near $a$ and near $b$ are respectively
proportional to $f_X(a)$ and $f_X(b)$ respectively, and so the values
of $f_X(\cdot)$ at $a$ and $b$ respectively can be used to compare
these probabilities. $f_X(a)$ and $f_X(b)$ respectively are a measure
of how likely $X$ is to be near $a$ and $b$.
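A numerical illustration of this limit for a standard normal density (a sketch using scipy; the point a = 1 is arbitrary):
from scipy.stats import norm
a, delta = 1.0, 1e-4
p = norm.cdf(a + delta/2) - norm.cdf(a - delta/2)   # P{a - Δ/2 < X < a + Δ/2}
print(p / delta)     # ~ 0.2420
print(norm.pdf(a))   # 0.24197..., the density at a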
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/330826",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 4,
"answer_id": 0
} |
How do I transform the left side into the right side of this equation? How does one transform the left side into the right side?
$$
(a^2+b^2)(c^2+d^2) = (ac-bd)^2 + (ad+bc)^2
$$
Expanding the left-hand side, you get
$$(a^2+b^2)(c^2+d^2)=a^2c^2+a^2d^2+b^2c^2+b^2d^2$$
Add and subtract $2abcd$
$$a^2c^2+a^2d^2+b^2c^2+b^2d^2=(a^2c^2-2abcd+b^2d^2)+(a^2d^2+2abcd+b^2c^2)$$
Completing the square, you can get
$$(a^2c^2-2abcd+b^2d^2)+(a^2d^2+2abcd+b^2c^2)=(ac-bd)^2+(ad+bc)^2$$
Therefore,
$$(a^2+b^2)(c^2+d^2)=(ac-bd)^2+(ad+bc)^2.$$
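A one-line symbolic confirmation of the identity (a sympy sketch):
import sympy as sp
a, b, c, d = sp.symbols('a b c d')
lhs = (a**2 + b**2) * (c**2 + d**2)
rhs = (a*c - b*d)**2 + (a*d + b*c)**2
print(sp.expand(lhs - rhs))   # 0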
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/330863",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 7,
"answer_id": 5
} |
Geometry of a subset of $\mathbb{R}^3$ Let $(x_1,x_2,x_3)\in\mathbb{R}^3$. How can I describe the geometry of vectors of the form
$$
\left( \frac{x_1}{\sqrt{x_2^2+x_3^2}}, \frac{x_2}{\sqrt{x_2^2+x_3^2}}, \frac{x_3}{\sqrt{x_2^2+x_3^2}} \right) \, ?
$$
Thank you all!
| To elaborate on user1551's answer a bit, notice that this locus you describe is a fibration of unit circles in the $yz$-plane over the $x$-axis. More precisely, you are interested in the image of the map $$(x_1,x_2,x_3)\mapsto \left(\frac{x_1}{\sqrt{x_2^2+x_3^2}},\frac{x_2}{\sqrt{x_2^2+x_3^2}},\frac{x_3}{\sqrt{x_2^2+x_3^2}}\right),$$
which we will denote by $X$. Now consider the map $f:X\to \mathbb{R}$ given by $(x,y,z)\mapsto x$. Clearly the map $f$ is surjective and the fiber over a point $r\in \mathbb{R}$ (meaning $f^{-1}(r)$) is a circle in the $yz$-plane.
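A quick numerical check that every image point satisfies $x_2^2 + x_3^2 = 1$, i.e. lies on one of these unit circles (a numpy sketch with random inputs):
import numpy as np
rng = np.random.default_rng(0)
p = rng.normal(size=(1000, 3))
q = p / np.linalg.norm(p[:, 1:], axis=1, keepdims=True)
print(np.allclose(q[:, 1]**2 + q[:, 2]**2, 1.0))   # True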
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/330933",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 2
} |
Euclidean Algorithm - GCD Word Problem
An oil company has a contract to deliver 100000 litres of gasoline. Their tankers can carry 2400 litres, and they can attach one trailer carrying 2200 litres to each tanker. All the tankers and trailers must be completely full on this contract; otherwise the gas would slosh around too much when going over some rough roads. Find the least number of tankers required to fulfill the contract [each trailer, if used, must be pulled by a full tanker].
Alright, so I know I can solve this with the Euclidean algorithm, where
$$100000 = 2200x + 4600y.$$
But how do I solve it with the variables there?
| Step 1. Divide everything through by $\gcd(2200,4600)$. (If that greatest common divisor doesn't divide 100000, then there's no solution - do you see why?) In this case, we get $500 = 11x + 23y$.
Step 2. With the remaining coefficients, use the extended Euclidean algorithm to find integers $m$ and $n$ such that $11m + 23n = 1$. Then $11(500m) + 23(500n) = 500$.
Step 3. One of the integers $500m$ and $500n$ is negative. However, you can add/subtract $23$ to/from $500m$ while subtracting/adding $11$ from/to $500n$ - it will remain a solution to $500 = 11x + 23y$. Keep doing that until both variables are positive. In this way you can even find all solutions (I find 21 solutions).
PS: To me it seems that the problem leads to the equation $100000 = 2400x+4600y$, rather than $2200x$.
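With the capacities from that last remark (2400 for a tanker alone, 2400 + 2200 = 4600 for a tanker pulling a trailer), a brute-force search confirms the minimum (a Python sketch):
best = None
for y in range(100000 // 4600 + 1):     # y tankers pulling trailers
    rest = 100000 - 4600*y
    if rest % 2400 == 0:
        x = rest // 2400                # x tankers without trailers
        if best is None or x + y < best[0]:
            best = (x + y, x, y)
print(best)   # (27, 11, 16): 27 tankers, 16 of them with trailers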
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/330987",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 0
} |
Is this function quasi convex? I have a function $f(x,y) = y(k_1x^2 + k_2x + k_3)$ which describes the chemical potential of a species ($y$ is mole fraction and $x$ is temperature).
I only want to check quasi convexity over a limited range.
$k_1$, $k_2$ and $k_3$ are coefficients of a polynomial function used to calculate the Gibbs energy of formation. They differ widely between different chemical species.
In my case they have the following restrictions:
$0<k_1<0.001$
$-0.1<k_2<0.1$
$-400<k_3<0$
Temperature: $100\le x\le 3000$
Mole fraction: $0\le y\le 1$
From the hessian matrix I know it is not convex or concave. But can I check whether it is quasi convex.
Thanks for the answer. So to check quasi convexity I have to find the roots of the polynomial. If I get equal or complex roots, then the function will be quasi convex.
Thanks for the update. However, I am confused about how the domain for $x_1,x_2$ was determined in the last step. Also, would there be any change in the domain if the function were divided by $x$, so that
$f(x,y)=y(k_1x+k_2+k_3/x)$
| Let $k_{1}=1, k_{2}=k_{3}=0$ and consider the level set
$$
\{(x,y)\in R^{2}: f(x,y)\leq 1\}
$$
This condition is equivalent to $y\leq\tfrac{1}{x^{2}}$ and the set
$$\{(x,y)\in R^{2}: y\leq\tfrac{1}{x^{2}}\}$$ is convex in $R^{2}$.
This can be generalized. If the function $k_{1}x^{2}+k_{2}x+k_{3}$ has two different real roots, it is not quasi convex; if it has two identical real roots it is quasi convex; if it has no real roots it is also quasi convex,
but this seems to be right only for a special selection of $k_{1}, k_{2},k_{3}$ :-(...
If $k_{1}=0$ and $k_{2}\neq 0$ the function is not quasi convex... if I made no mistake.
So an update: We have summarized the following situation:
$$ f:[100,3000]\times[0,1]\rightarrow R,\quad (x,y)\mapsto y(k_{1}x^2+k_{2}x+k_{3})$$
where $k_{1},k_{2},k_{3}$ parameters in $R$ with restriction
$$
k_{1}\in (0,0.001),\quad k_{2}\in [-0.1,0.1], \quad k_{3}\in [-400,0].$$
So for given $\alpha\in R$ we have to research if the set
$$\{(x,y)\in[100,3000]\times[0,1]: f(x,y)\leq \alpha\}$$
is convex $\forall \alpha\in R$. This can be transformed into
$$y\leq \frac{\alpha}{k_{1}x^{2}+k_{2}x+k_{3}}.$$
Seeing this as the graph of a one dimensional function we have a rational function. Computing the asymptotes we obtain
$$x_{1,2}=\frac{-k_{2}\pm\sqrt{k_{2}^{2}-4k_{1}k_{3}}}{2k_{1}}.$$
So there are always two different roots for every selection of parameters, and we have
$$
y\leq\frac{\alpha}{k_{1}(x-x_{1})(x-x_{2})}.
$$
So if $x_{1}$ or $x_{2}\in[100,3000]$ the function will not be quasi convex. Otherwise it will be quasi convex.
The restriction on $y$ seems to be irrelevant, since it is always compensated by $\alpha$.
That should answer your question. Maybe some constants are not quite right, but in general this idea can be used rigorously...
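The segment criterion for quasiconvexity ($f(\lambda a + (1-\lambda)b) \le \max(f(a), f(b))$) also gives a cheap numerical test (a Python sketch; the coefficient values are my own picks inside the stated ranges):
import numpy as np

def f(x, y, k1, k2, k3):
    return y * (k1*x**2 + k2*x + k3)

rng = np.random.default_rng(1)
k1, k2, k3 = 5e-4, 0.05, -200.0   # hypothetical coefficients
ok = True
for _ in range(20000):
    p, q = rng.uniform([100, 0], [3000, 1], size=(2, 2))
    lam = rng.uniform()
    m = lam*p + (1 - lam)*q
    if f(m[0], m[1], k1, k2, k3) > max(f(*p, k1, k2, k3), f(*q, k1, k2, k3)) + 1e-9:
        ok = False
        break
print(ok)   # False for these values: one root of the quadratic lies in [100, 3000]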
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/331040",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Inverse Laplace Transform of $(s+1)/z^s$ I'm trying to compute this ILT
$$\mathcal{L}^{-1}\left\{\frac{s+1}{z^s}\right\},$$
where $|z|>1$. However, I'm not sure this is possible. Any help would be appreciated.
| This is an odd one. I worked the actual Bromwich integral directly because the residue theorem is no help here. You get something in terms of $c$, the offset from the imaginary axis of the integration path. So I appealed to something more basic. Consider
$$\hat{f}(s) = \int_0^{\infty} dt \: f(t) e^{-s t}$$
Just the plain Laplace transform of some function $f$. Let's throw away any conditions on continuity, etc. on $f$, and consider
$$f(t) = \delta(t-\log{z})$$
for some $z$. Then
$$\hat{f}(s) = z^{-s}$$
Interesting. Now consider $f(t) = \delta'(t-\log{z})$; then
$$\hat{f}(s) = s z^{-s} + \delta(-\log{z})$$
It follows that
$$\mathcal{L}\left\{\delta(t-\log{z})+ \delta'(t-\log{z})\right\} = \frac{s+1}{z^s} + \delta(-\log{z})$$
Therefore
$$\mathcal{L}^{-1}\left\{\frac{s+1}{z^s}\right\} = \delta(t-\log{z})+ \delta'(t-\log{z}) - \delta(-\log{z}) \delta(t)$$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/331150",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 1,
"answer_id": 0
} |