Exactly half of the elements of $\mathcal{P}(A)$ are odd-sized Let $A$ be a non-empty set and $n$ be the number of elements in $A$, i.e. $n:=|A|$.
I know that the number of elements of the power set of $A$ is $2^n$, i.e. $|\mathcal{P}(A)|=2^n$.
I came across the fact that exactly half of the elements of $\mathcal{P}(A)$ contain an odd number of elements, and half of them an even number of elements.
Can someone prove this? Or hint at a proof?
| Fix an element $a\in A$ (this is the point where $A\ne\emptyset$ is needed).
Then $$S\mapsto S\operatorname{\Delta}\{a\}$$ (symmetric difference) is a bijection from the set of odd subsets to the set of even subsets.
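A quick brute-force check of the statement in Python (an illustration only, not a substitute for the bijection argument):
```python
# Enumerate all subsets of an n-element set and count the odd-sized ones;
# exactly half of the 2**n subsets, i.e. 2**(n-1), should have odd size.
from itertools import combinations

for n in range(1, 9):
    subsets = [c for r in range(n + 1) for c in combinations(range(n), r)]
    odd = sum(1 for s in subsets if len(s) % 2 == 1)
    assert len(subsets) == 2**n and odd == 2**(n - 1)
print("verified for n = 1..8")
```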
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/248245",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "30",
"answer_count": 8,
"answer_id": 1
} |
How to find the number of roots using Rouche theorem? Find the number of roots $f(z)=z^{10}+10z+9$ in $D(0,1)$. I want to find $g(z)$ s.t. $|f(z)-g(z)|<|g(z)|$, but I cannot. Any hint is appreciated.
| First, we factor out $z+1$ to get $f(z)=(z+1)(z^9-z^8+z^7-\dots-z^2+z+9)$. Let $F(z):=z^9-z^8+z^7-\dots-z^2+z+9$ and $G(z)=9$. Then for $z$ of modulus strictly smaller than $1$, $|F(z)-G(z)|\leqslant 9|z| \lt |G(z)|$. Thus for each positive $\delta$, Rouché's theorem says that $F$ has as many zeros on $B(0,1-\delta)$ as the constant $G$, namely none; since the zero of $z+1$ sits at $z=-1$ on the boundary, $f$ has no roots in $D(0,1)$.
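A numerical cross-check (a sketch, assuming numpy is available): every root of $f$ other than the double root at $z=-1$, which lies on the boundary, has modulus greater than $1$.
```python
# Count the roots of z^10 + 10z + 9 lying strictly inside the unit disk.
import numpy as np

coeffs = [1] + [0] * 8 + [10, 9]               # z^10 + 10z + 9, highest degree first
roots = np.roots(coeffs)
print(sum(abs(r) < 1 - 1e-9 for r in roots))   # -> 0
print(sorted(abs(r) for r in roots)[:2])       # two moduli ~1: the double root z = -1
```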
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/248310",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 1,
"answer_id": 0
} |
A function whose value is either one or zero First I apologize in advance for I don't know math's English at all and I haven't done math in almost a decade.
I'm looking for a function whose "domain/ensemble of definition" would be ℝ (or maybe ℤ) and whose "ensemble/domain of variation" would be $\{0, 1\} \subset \mathbb{N}$
that would look something like this awesome ascii graph...
^f(x)
|
|
1________ | __________
0________\./__________>x
0|
|
|
|
f(x) is always 1, but 0 when x = 0
Actually I need this for programming. I could always find other ways to do what I wanted with booleans so far, but it would be much better in most cases to have a simple algebraic formula to represent my needs instead.
I just hope it's not too complicated a function to write/understand and that it exists of course. Thanks in advance guys.
| I think that the most compact way to write this is using the Iverson Bracket:
$f: \mathbb{R} \to \{0,1\}$
$$ f(x) = [x \neq 0]$$
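Since the question came from programming: in most languages this is just a boolean cast, and a branch-free "algebraic" variant exists too (a sketch; the helper names are mine):
```python
import math

def f(x):
    return int(x != 0)          # the Iverson bracket [x != 0]

def f_alg(x):
    # purely arithmetic alternative: |x|/(|x|+1) is 0 at x=0 and in (0,1) otherwise
    return math.ceil(abs(x) / (abs(x) + 1))

assert f(0) == f_alg(0) == 0
assert f(3.5) == f_alg(3.5) == 1 and f(-2) == f_alg(-2) == 1
```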
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/248363",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 5,
"answer_id": 2
} |
Find the Maclaurin series for the function $\tan^{-1}(2x^2)$ Find the Maclaurin series for the function $\tan^{-1}(2x^2)$
Express your answer in sigma notation, simplified as much as possible. What is the open interval of convergence of the series?
I have the correct answer, but I would like to use another method to solve this.
By differentiating the function, finding the sum (as a geometric series), and then taking the antiderivative. This does not yield the same answer for me.
| You probably know the series for $\tan^{-1} t$. Plug in $2x^2$ for $t$.
If you do not know the series for $\arctan t$, you undoubtedly know the series for $\dfrac{1}{1-u}$. Set $u=-t^2$, integrate term by term, and then plug in $t=2x^2$.
For the interval of convergence of the series for $\tan^{-1} (2x^2)$, you probably know when the series for $\tan^{-1}t$ converges. That knowledge can be readily translated to knowledge about $\tan^{-1}(2x^2)$.
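A quick check with sympy (illustration, assuming sympy is installed); the resulting series is $\sum_{k\ge0}(-1)^k\frac{2^{2k+1}}{2k+1}x^{4k+2}$, which converges when $|2x^2|\le 1$:
```python
import sympy as sp

x = sp.symbols('x')
print(sp.series(sp.atan(2 * x**2), x, 0, 15))
# -> 2*x**2 - 8*x**6/3 + 32*x**10/5 - 128*x**14/7 + O(x**15)
```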
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/248430",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Prove non-zero eigenvalues of skew-Hermitian operator are pure imaginary Just like the title:
Assume $T$ is a skew-Hermitian but not Hermitian operator on a finite dimensional complex inner product space $V$. Prove that the non-zero eigenvalues of $T$ are pure imaginary.
| We need the following properties of the inner product
i) $\langle au,v \rangle = a \langle u,v \rangle \quad a \in \mathbb{C}$,
ii) $ \langle u, a v \rangle = \overline{\langle a v, u \rangle} = \overline{a} \overline{\langle v, u \rangle} = \overline{a} \langle u, v \rangle \quad a \in \mathbb{C}.$
Since $T$ is skew-Hermitian, we have $T^{*}=-T$. Let $u$ be an eigenvector that corresponds to the eigenvalue $\lambda$ of $T$; then we have
$$ \langle Tu,u \rangle= \langle u,T^{*}u \rangle \Longleftrightarrow \langle Tu,u \rangle = \langle u,-Tu \rangle$$
$$ \langle \lambda u,u \rangle = \langle u,-\lambda u \rangle \Longleftrightarrow \lambda \langle u,u \rangle = -\bar{\lambda} \langle u,u\rangle $$
$$ \Longleftrightarrow \lambda = -\bar{\lambda} \Longleftrightarrow x+iy = -x+iy. $$
What can you conclude from the last equation?
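A numerical illustration of the conclusion (a sketch, assuming numpy): the eigenvalues of a skew-Hermitian matrix have zero real part.
```python
import numpy as np

rng = np.random.default_rng(0)
B = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
A = B - B.conj().T                      # A satisfies A* = -A (skew-Hermitian)
eigvals = np.linalg.eigvals(A)
print(np.max(np.abs(eigvals.real)))     # ~ 1e-15: purely imaginary eigenvalues
```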
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/248486",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 0
} |
Two questions with mathematical induction First hello all, we have a lecture. It has 10 questions but I'm stuck with these two about 3 hours and I can't solve them.
Any help would be appreciated.
Question 1
Given that $T(1)=1$, and $T(n)=2T(\frac{n}{2})+1$, for $n$ a power of $2$, and
greater than $1$. Using mathematical induction, prove that
$T(n) = 2^k\cdot T(\frac{n}{2^k}) + 2^k - 1$
for $k=0, 1, 2, \dots, \log_2 n$.
Question 2
Definition: $H(j) = \frac{1}{1} + \frac{1}{2} + \frac{1}{3} + \dots + \frac{1}{j}$. Facts: $H(16) > 3.38$,
$\frac{1}{3} + \frac{1}{4} = \frac{7}{12}, \frac{1}{4} + \frac{1}{5} + \frac{1}{7} = \frac{319}{420} $
a) Using induction on $n$,
prove that $H(2^n) > 1 + 7n/12$, for $n\geq 4$.
| 1)
Prove by induction on $k$:
If $0\le k\le m$, then $T(2^m)=2^kT(2^{m-k})+2^k-1$.
The case $k=0$ is trivial.
If we already know that $T(2^m)=2^{k-1}T(2^{m-(k-1)})+2^{k-1}-1$ for all $m\ge k-1$, then for $m\ge k$ we have
$$\begin{align}T(2^m)&=2^{k-1}T(2^{m-(k-1)})+2^{k-1}-1\\
&=2^{k-1}\left(2\cdot T\left(\tfrac{2^{m-(k-1)}}2\right)+1\right)+2^{k-1}-1\\
&=2^k T(2^{m-k})+2^k-1\end{align}$$
2)
Note that $$H(2(m+1))-H(2m)=\frac 1{2m+1}+\frac1{2(m+1)}>\frac2{2m+2}=H(m+1)-H(m)$$
and therefore (by induction on $d$) for $d>0$
$$H(2(m+d))-H(2m)>H(m+d)-H(m)$$
hence with $m=d=2^{n-1}$
$$H(2^{n+1})-H(2^n)>H(2^n)-H(2^{n-1})$$
thus by induction on $n$
$$H(2^n)-H(2^{n-1})\ge H(4)-H(2)=\frac7{12}\text{ if }n\ge2$$
and finally by induction on $n$
$$H(2^n)\ge H(16)+\frac7{12}(n-4)>1+\frac{7n}{12}\text{ for }n\ge 4,$$ where the last inequality uses $H(16)>3.38>1+\frac{28}{12}$.
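A quick numerical check of the final inequality (an illustration, not a proof):
```python
def H(j):
    return sum(1.0 / k for k in range(1, j + 1))

for n in range(4, 15):
    lhs, rhs = H(2**n), 1 + 7 * n / 12
    print(n, round(lhs, 4), ">", round(rhs, 4), lhs > rhs)
```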
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/248552",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Convergence of Bisection method I know how to prove the bound on the error after $k$ steps of the Bisection method.
I.e.
$$|\tau - x_{k}| \leq \left(\frac{1}{2}\right)^{k-1}|b-a|$$
where $a$ and $b$ are the starting points.
But does this imply something about the order of convergence of the Bisection method? I know that it converges with order at least 1, is that implied in the error bound?
Edit
I've had a go at showing it, is what I am doing here correct when I want to demonstrate the order of convergence of the Bisection method?
$$\lim_{k \to \infty}\frac{|\tau - x_k|}{|\tau - x_{k-1}|} = \frac{(\frac{1}{2})^{k-1}|b-a|}{(\frac{1}{2})^{k-2}|b-a|}$$
$$=\frac{(\frac{1}{2})^{k-1}}{(\frac{1}{2})^{k-2}}$$
$$=\frac{1}{2}$$
So this shows linear convergence with $\frac{1}{2}$ being the rate of convergence. Is this correct?
| For the bisection you simply have that $\epsilon_{i+1}/\epsilon_i = 1/2$, so, by definition, the order of convergence is 1 (linear convergence).
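A small empirical illustration (a sketch, not part of the original answer): the midpoint error after $k$ steps stays below $|b-a|/2^k$, halving at each step.
```python
def bisect_midpoints(f, a, b, steps):
    mids = []
    for _ in range(steps):
        m = (a + b) / 2
        mids.append(m)
        if f(a) * f(m) <= 0:   # root lies in [a, m]
            b = m
        else:                  # root lies in [m, b]
            a = m
    return mids

root = 2 ** 0.5                # solve x^2 - 2 = 0 on [1, 2]
for k, m in enumerate(bisect_midpoints(lambda x: x*x - 2, 1.0, 2.0, 10), 1):
    print(k, abs(m - root), 2.0 ** -k)   # error vs. the bound |b-a|/2^k
```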
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/248616",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 0
} |
Functional analysis summary Anyone knows a good summary containing the most important definitions and theorems about functional analysis.
| Georgi E. Shilov's Elementary Functional Analysis, 2nd Ed. (Dover books, 1996) would be a great start, and cheap, as far as textbooks go!
For a very brief (17 page) "summary" pdf document, written and posted by Dileep Menon and which might be of interest: An introduction to functional analysis. It contains both definitions and theorems, as well as a list of references.
See also the lists of definitions and theorems covered in Gilliam's course on functional analysis: here, and here.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/248698",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Partial fractions integration Re-express $\dfrac{6x^5 + x^2 + x + 2}{(x^2 + 2x + 1)(2x^2 - x + 4)(x+1)}$ in terms of partial fractions and compute the indefinite integral $\dfrac{1}{5}\int f(x)\,dx$ using the result from the first part of the question.
| Hint
Use $$\dfrac{6x^5 + x^2 + x + 2}{(x^2 + 2x + 1)(2x^2 - x + 4)(x+1)}=\frac{A}{(x+1)^3}+\frac{B}{(x+1)^2}+\frac{C}{x+1}+\frac{Dx+E}{2x^2-x+4}+F$$
and solve for $A,B,C,D,E$ and $F$.
Explanation
Note that this partial fraction decomposition is a case where the degree of the numerator and denominator are the same. (Just notice that the degrees of the factors in the denominator sum to $2+2+1=5$.) This means that the typical use of partial fraction decomposition does not apply.
Furthermore, notice that $(x^2+2x+1)$ factorizes as $(x+1)^2$, which means the denominator is really $(x+1)^3(2x^2-x+4)$. This means the most basic decomposition would involve denominators of $(x+1)^3$ and $(2x^2-x+4)$. However, we can escape that by using a more complicated decomposition involving the denominators $(x+1)^3$, $(x+1)^2$, $(x+1)$, and $(2x^2-x+4)$. The $F$ term is necessary for the $6x^5$ term to arise in the multiplication, intuitively speaking.
More thoroughly stated, the $F$ term is needed because of the following equivalency between $6x^5+x^2+x+2$ and the following expression:
$$
\begin{align}
&A(2x^2-x+4)\\
+&B(2x^2-x+4)(x+1)\\
+&C(2x^2-x+4)(x+1)^2\\
+&[Dx+E](x+1)^3\\
+&F(2x^2-x+4)(x+1)^3.
\end{align}
$$
Notice how the term $F(2x^2-x+4)(x+1)^3$ is the only possible term that can give rise to $6x^5$? That is precisely why it is there.
Hint 2
Part 1
In the integration of $\frac{1}{5}\int f(x)dx$, first separate the integral, $\int f(x)dx$ into many small integrals with the constants removed from the integrals. That is, $\int f(x) dx$ is
$$A\int \frac{1}{(x+1)^3}dx+B\int \frac{1}{(x+1)^2}dx+C\int \frac{1}{x+1}dx+D\int \frac{x}{2x^2-x+4}dx\quad+E\int \frac{1}{2x^2-x+4}dx+F\int 1dx.$$
Part 2
Next, use the substitution $u=x+1$ with $du=dx$ on the first three small integrals:
$$A\int \frac{1}{u^3}du+B\int \frac{1}{u^2}du+C\int \frac{1}{u}du+D\int \frac{x}{2x^2-x+4}dx+E\int \frac{1}{2x^2-x+4}dx\quad +F\int 1dx.$$
Part 3
To deal with the integral $\int \frac{1}{2x^2-x+4}dx$, you must complete the square. This is done as follows:
$$
\begin{align}
\int \frac{1}{2x^2-x+4}dx&=\int \frac{\frac{1}{2}}{x^2-\frac{1}{2}x+2}dx\\
&=\frac{1}{2} \int \frac{1}{\left(x-\frac{1}{4}\right)^2+\frac{31}{16}}dx.
\end{align}
$$
To conclude, you make the trig substitution $x-\frac{1}{4}=\frac{\sqrt{31}}{4}\tan \theta$ with $dx=\left(\frac{\sqrt{31}}{4}\tan \theta+\frac{1}{4}\right)'d\theta=\frac{\sqrt{31}}{4}\sec^2\theta d\theta$. This gives you:
$$\frac{1}{2} \int \frac{1}{\left(x-\frac{1}{4}\right)^2+\frac{31}{16}}dx=\frac{1}{2} \int \frac{\frac{\sqrt{31}}{4}\sec^2 \theta}{\frac{31}{16}\tan^2\theta+\frac{31}{16}}d\theta.$$
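If you want to cross-check the hand computation, sympy can do both steps (an illustration, assuming sympy is available):
```python
import sympy as sp

x = sp.symbols('x')
f = (6*x**5 + x**2 + x + 2) / ((x**2 + 2*x + 1) * (2*x**2 - x + 4) * (x + 1))
print(sp.apart(f))          # the decomposition, including the constant term F
print(sp.integrate(f, x))   # antiderivative of f; divide by 5 for the answer
```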
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/248766",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
How many graphs with vertex degrees (1, 1, 1, 1, 2, 4, 5, 6, 6) are there? How many graphs with vertex degrees (1, 1, 1, 1, 2, 4, 5, 6, 6) are there? Assuming that all vertices and edges are labelled. I know there's a long way to do it by drawing all of them and count. Is there a quicker, combinatoric way?
| There are none. By the handshaking lemma we know that the number of vertices of odd degree must be even.
There are 5 vertices of odd degree in your graph; these are the ones with degrees:
1,1,1,1,5
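The check in two lines of Python (illustration only):
```python
degrees = [1, 1, 1, 1, 2, 4, 5, 6, 6]
print(sum(degrees), sum(d % 2 for d in degrees))  # -> 27 5: odd sum, 5 odd degrees
```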
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/248839",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Verify these logical equivalences by writing an equivalence proof? I have two parts to this question - I need to verify each of the following by writing an equivalence proof:
*
*$p \to (q \land r) \equiv (p \to q) \land (p \to r)$
*$(p \to q) \land (p \lor q) \equiv q$
Thank you if you can help! It's greatly appreciated.
| We make extensive use of the identity $(a \to b) \equiv (\lnot a \lor b)$, and leave you to fill in the reasons for some of the intermediate steps in (2).
(1) $\quad p \to (q \wedge r) \equiv \lnot p \lor (q \land r) \equiv (\lnot p \lor q) \land (\lnot p \lor r) \equiv (p \to q) \wedge (p \to r)$.
(2) $\quad(p \to q) \land (p \lor q) \equiv q\quad?$
$$(p \to q) \land (p \lor q) \equiv (\lnot p \lor q) \land (p \lor q)\tag{1}$$
$$\equiv \;[(\lnot p \lor q) \land p] \lor [(\lnot p \lor q) \land q]\tag{2}$$
$$\equiv \;[(\lnot p \land p) \lor (q \land p)] \lor [(\lnot p \land q) \lor (q \land q)]\tag{3}$$
$$\equiv \;F \lor (p \land q) \lor [(\lnot p \land q) \lor (q \land q)]\tag{4}$$
$$\equiv \;(p\land q) \lor [(\lnot p \land q) \lor (q \land q)]\tag{5}$$
$$\equiv \;(p\land q) \lor (\lnot p \land q) \lor q\tag{6}$$
$$\equiv \;[(p \lor \lnot p) \land q] \lor q\tag{7}$$
$$\equiv \;(T \land q) \lor q\tag{8}$$
$$\equiv \;q\lor q\tag{9}$$
$$\equiv \;q\tag{10}$$
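Both equivalences can also be confirmed by a brute-force truth table (an easy complement to the equational proof):
```python
from itertools import product

implies = lambda a, b: (not a) or b

for p, q, r in product([False, True], repeat=3):
    assert implies(p, q and r) == (implies(p, q) and implies(p, r))
for p, q in product([False, True], repeat=2):
    assert (implies(p, q) and (p or q)) == q
print("both equivalences verified")
```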
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/248941",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Showing $\sqrt{2}\sqrt{3} $ is greater or less than $ \sqrt{2} + \sqrt{3} $ algebraically How can we establish algebraically if $\sqrt{2}\sqrt{3}$ is greater than or less than $\sqrt{2} + \sqrt{3}$?
I know I can plug the values into any calculator and compare the digits, but that is not very satisfying. I've tried to solve $$\sqrt{2}+\sqrt{3}+x=\sqrt{2}\sqrt{3} $$ to see if $x$ is positive or negative. But I'm just getting sums of square roots whose positive or negative values are not obvious.
Can it be done without the decimal expansion?
| Method 1: $\sqrt{2}+\sqrt{3}>\sqrt{2}+\sqrt{2}=2\sqrt{2}>\sqrt{3}\sqrt{2}$.
Method 2: $(\sqrt{2}\sqrt{3})^2=6<5+2<5+2\sqrt{6}=2+3+2\sqrt{2}\sqrt{3}=(\sqrt{2}+\sqrt{3})^2$, so $\sqrt{2}\sqrt{3}<\sqrt{2}+\sqrt{3}$.
Method 3: $\frac{196}{100}<2<\frac{225}{100}$ and $\frac{289}{100}<3<\frac{324}{100}$, so $\sqrt{2}\sqrt{3}<\frac{15}{10}\frac{18}{10}=\frac{270}{100}<\frac{14}{10}+\frac{17}{10}<\sqrt{2}+\sqrt{3}$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/249016",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "15",
"answer_count": 3,
"answer_id": 1
} |
Vector Space or Not? I have a question. Suppose that $V$ is the set of all real valued functions that attain a relative maximum or relative minimum at $x=0$. Is $V$ a vector space under the usual operations of addition and scalar multiplication? My guess is it is not a vector space, but I'm not able to give a counterexample.
| As I hinted in my comment above, the terms local maximum and local minimum only really make sense when talking about differentiable functions. So here I show that the set of functions with a critical point (not necessarily a local max/min) at 0 (really at any arbitrary point $a \in \mathbb{R}$) is a vector subspace.
If you broaden this slightly to say that $V \subset C^1(-\infty,\infty)$ is the set of differentiable functions on the reals that have a critical point at 0 (i.e. $f'(0) = 0$ for all $f \in V$), then it's simple to show that this is a vector space.
If $f,g \in V$ (so $f'(0)=g'(0)=0$) and $r \in \mathbb{R}$, then:
*
*$(f+g)'(0) = f'(0) + g'(0) = 0 + 0 = 0$.
*The derivative of the zero function is zero, and hence evaluates to 0 at 0.
*$(rf)'(0) = r(f'(0))=r0 = 0$.
And together these imply that $V$ is a vector subspace of $C^1(-\infty,\infty)$.
Robert Israel's answer above is a nice example of why we must define our vector space to have a critical point, not just a max/min at 0.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/249076",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 2
} |
Two norms on $C_b([0,\infty))$ $C_b([0,\infty))$ is the space of all bounded, continuous functions.
Let $||f||_a=(\int_{0}^{\infty}e^{-ax}|f(x)|^2\,dx)^{\frac{1}{2}}$
First I want to prove that it is a norm on $C_b([0,\infty))$. The only thing I have problems with is the triangle inequality; I do not know how to simplify
$||f+g||_a=(\int_{0}^{\infty}e^{-ax}|f(x)+g(x)|^2\,dx)^{\frac{1}{2}}$
The second thing I am interested in is whether there are constants $C_1,C_2$ such that $||f||_a\le C_1||f||_b$ and $||f||_b\le C_2||f||_a$, or whether for $a>b>0$ the norms $||.||_a$ and $||.||_b$ fail to be equivalent.
| For the first one: Use
$$e^{-a \cdot x} \cdot |f(x)+g(x)|^2 = \left|e^{-\frac{a}{2} \cdot x} \cdot f(x)+ e^{-\frac{a}{2} \cdot x} \cdot g(x) \right|^2$$
and apply the triangle inequality in $L^2$.
Concerning the second one: For $a>b>0$ you have
$$e^{-a \cdot x} \cdot |f(x)|^2 \leq e^{-b \cdot x} \cdot |f(x)|^2 $$
i.e. $\|f\|_a \leq \|f\|_b$. The inequality $\|f\|_b \leq C \cdot \|f\|_a$ does not hold in general. To see this let $c \in \left(\frac{b}{2},\frac{a}{2}\right)$ and
$$f_n(x) := \min\{n,e^{c \cdot x}\} (\in C_b)$$
Then $\|f_n\|_a < \infty$ for every $n$ and $\|f_n\|_a \to (a-2c)^{-1/2}<\infty$, but $\|f_n\|_b \to \infty$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/249164",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
} |
How can I show some rings are local. I want to prove $k[x]/(x^2)$ is local. I know it by a rather direct way: $(a+bx)(a-bx)/a^2=1$ whenever $a\neq0$. But for the general case, such as $k[x]/(x^n)$, how can I prove it?
Also for 2 variables, for example $k[x,y]/(x^2,y^2)$ (or more higher orders?), how can I prove they are local rings?
| You can use the following
Claim: A commutative unitary ring is local iff the set of non-unit elements is an ideal, and in this case this is the unique maximal ideal.
Now, in $\,k[x]/(x^n):=\{f(x)+(x^n)\;\;;\;\;f(x)\in k[x]\,\,,\,\deg(f)<n\}$ , an element is a non-unit iff $\,f(0)=a_0= 0\,$ , with $\,a_0=$ the free coefficient of $\,f(x)\,$, of course.
Thus, we can characterize the non-units in $\,k[x]/(x^n)\,$ as those represented by polynomials of degree less than $\,n\,$ and with free coefficient zero, i.e. the set of elements $$\,M:=\{f(x)+(x^n)\in k[x]/(x^n)\;\;;\;\;f(x)=xg(x)\,\,,\,\,g(x)\in k[x]\,\,,\deg (g)<n-1\}$$
Well, now check the above set fulfills the claim's conditions.
Note: I'm assuming $\,k\,$ above is a field, but if it is a general commutative unitary ring the corrections to the characterization of unit elements are minor, though important. About the claim being true in this general case: I'm not quite sure right now.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/249251",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 4,
"answer_id": 3
} |
Countable unions of countable sets It seems the axiom of choice is needed to prove, over ZF set theory, that a countable union of countable sets is countable.
Suppose we don't assume any form of choice, and stick to ZF. What are the possible cardinalities of a countable union of countable sets?
Could a countable union of countable sets have the same cardinality as that of the real numbers?
| Yes. It is consistent that the real numbers are a countable union of countable sets. For example in the Feferman-Levy model this is true. In such a model $\omega_1$ is also the countable union of countable sets (and there are models in which $\omega_1$ is the countable union of countable sets, but the real numbers are not).
It is consistent [relative to large cardinals] that $\omega_1$ is the countable union of countable sets, and $\omega_2$ is the countable union of countably many sets of size $\aleph_1$, that is $\omega_2$ is the countable union of countable unions of countable sets.
Indeed it is consistent [relative to some large cardinal axioms] that every set can be generated by reiterating countable unions of countable sets.
For non-aleph cardinals the following result holds:
Let $M$ be a transitive model of ZFC; then there exists an extension $N\supseteq M$ with the same cardinals as $M$ [read: initial ordinals] such that the following statement is true in $N$:
For every $\alpha$ there exists a set $X$ such that $X$ is a countable union of countable sets, and $\mathcal P(X)$ can be partitioned into $\aleph_\alpha$ nonempty sets.
D. B. Morris, A model of ZF which cannot be extended to a model of ZFC without adding ordinals, Notices Am. Math. Soc. 17, 577.
Note that $\mathcal P(X)$ can only be mapped onto set many ordinals, so this ensures us that there is indeed a proper class of cardinalities which can be written as countable unions of countable sets.
Some positive results are:
*
*If $X$ can be expressed as the countable union of countable sets, and $X$ can be well-ordered then $|X|\leq\aleph_1$.
*If $\langle A_i,f_i\rangle$ is given such that $A_i$ is countable and $f_i\colon A_i\to\omega$ is an injection, then $\bigcup A_i$ is countable.
*The collection of finite subsets of a countable set, as well as its finite sequences, are both countable, without the use of the axiom of choice (so there is still only a countable collection of algebraic numbers in $\mathbb R$).
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/249323",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11",
"answer_count": 2,
"answer_id": 0
} |
Criteria for metric on a set Let $X$ be a set and $d: X \times X \to \mathbb{R}$ be a function such that $d(a,b)=0$ if and only if $a=b$.
Suppose further that $d(a,b) ≤ d(z,a)+d(z,b)$ for all $a,b,z \in X$.
Show that $d$ is a metric on $X$.
| Let $X$ be a set and $d: X \times X \to \mathbb{R}$ be a function such that $$d(a,b)=0\text{ if and only if}\;\; a=b,\text{ and}\tag{1}$$ $$d(a,b) ≤ d(z,a)+d(z,b)\;\forall a,b,z \in X.\tag{2}$$
There are additional criteria that need to be met for a function $d$ to be a metric on $X$:
*
*You must have that $d(a, b) = d(b,a)$ for all $a, b \in X$ (symmetry). You can use the two properties you have been given to prove this: $d(a,b)\leq d(b,a)+d(b,b)= d(b, a) + 0 = d(b,a)$ and vice versa, hence we get equality.
*Having proven symmetry, you will then have that $d(a,b) \leq d(z,a) + d(z, b) \iff d(a, b) \leq d(a, z) + d(z, b)$.
*Finally, using the property immediately above, along with $(1)$, you can establish that for all $a, b\in X$ such that $a\neq b$, we must have $d(a, b) > 0$.
Then you are done.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/249403",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 0
} |
What does this mathematical notation mean? Please excuse this simple question, but I cannot seem to find an answer. I'm not very experienced with math, but I keep seeing a notation that I would like explained. The notation I am referring to generally is one variable $m$ floating over another variable $n$, enclosed in parentheses. You can see an example in the first equation here.
What does this mean? Thanks in advance for the help.
| This is called the binomial coefficient, often read "$n$ choose $m$", since it provides a way of computing the number of ways to choose $m$ items from a collection of $n$ items, provided the order or arrangement of those items doesn't matter.
To compute the binomial coefficient: $\displaystyle \binom{n}{m}$, you can use the factorial formula: $$\binom{n}{m} = \binom{n}{n-m}=\frac{n!}{m!(n-m)!}$$
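In case it helps to experiment, Python (3.8+) has this built in, alongside the factorial formula above:
```python
import math

print(math.comb(5, 2))   # -> 10, i.e. "5 choose 2"
print(math.factorial(5) // (math.factorial(2) * math.factorial(3)))  # -> 10
```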
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/249498",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 0
} |
Proof that interpolation converges; Reference request I am interested in the mathematical justification for methods of approximating functions.
For $x \in (C[a, b], ||\cdot||_{\infty})$ we know that we can get an arbitrarily good approximation using polynomials of high enough order (Weierstrass Theorem).
Suppose that $x \in (C[a, b], ||\cdot||_{\infty})$. Let $y_n$ be defined by linearly interpolating $x$ on a uniform partition of $[a, b]$ (equidistant nodes). Is it true that
\begin{equation}
\lim_{n \to \infty} ||y_n - x||_{\infty} = 0?
\end{equation}
Do we need to impose stronger conditions? For example
\begin{equation}
x(t) =
\begin{cases}
t \sin\left(\frac{1}{t}\right), & t \in (0, 1] \\
0, & t = 0
\end{cases}
\end{equation}
is in $C[0, 1]$, however it seems to me that we cannot get a good approximation near $t = 0$.
More generally, can anyone recommend a reference containing the theory of linear interpolation and splines? It would have to include conditions under which these approximation methods converge (in some metric) to the true function.
| Given an arbitrary function $x \in C[a, b]$ and defining $y_n$ to be the linear interpolant on the uniform partition of $[a, b]$ with $n + 1$ nodes, we have
\begin{equation}
\lim_{n \to \infty} ||y_n - x||_{\infty} = 0.
\end{equation}
Proof. As $x$ is continuous on the compact set $[a, b]$ it is uniformly continuous. Fix $\varepsilon > 0$. By uniform continuity there exists $\delta > 0$ such that for all $r, s \in [a, b]$ we have
\begin{equation}
|r - s| < \delta \quad \Rightarrow \quad |x(r) - x(s)| < \varepsilon.
\end{equation}
Every $n \in \mathbb{N}$ defines a unique uniform partition of $[a, b]$ into $a = t_0 < \ldots < t_n = b$ where $\Delta t_n = t_{l+1} - t_l = t_{k+1} - t_k$ for all $l, k \in \{0, \ldots, n-1\}$. Choose $N \in \mathbb{N}$ so that $\Delta t_N < \delta$. Let $I_k = [t_k, t_{k+1}]$, $\,k \in \{0, \ldots, N-1\}$. Then for all $t \in I_k$ we have
\begin{equation}
|y_N(t) - x(t)| \leq |y_N(t_k) - x(t)| + |y_N(t_{k+1}) - x(t)| < 2 \varepsilon,
\end{equation}
where the first inequality is due to the fact that, since $y_N$ is linear on $I_k$, we know that $y_N(t) \in [\min(y_N(t_k), y_N(t_{k+1})), \max(y_N(t_k), y_N(t_{k+1}))]$.
Q.E.D.
If anyone knows a reference for a proof along these lines, then I would be grateful to know it.
Also, the function $x$ in the OP can certainly be well approximated near zero. Here is a picture of the function; the dashed lines are $y = t$ and $y = -t$.
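A numerical illustration of the theorem on exactly this function (a sketch, assuming numpy): the sup-error of the piecewise-linear interpolant does tend to $0$, just slowly near the oscillation.
```python
import numpy as np

def x_fun(t):
    safe = np.where(t == 0, 1.0, t)            # avoid division by zero at t = 0
    return np.where(t == 0, 0.0, t * np.sin(1.0 / safe))

tt = np.linspace(0, 1, 200001)
for n in [10, 100, 1000, 10000]:
    nodes = np.linspace(0, 1, n + 1)
    y_n = np.interp(tt, nodes, x_fun(nodes))    # piecewise-linear interpolant
    print(n, np.max(np.abs(y_n - x_fun(tt))))   # sup-error, decreasing with n
```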
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/249567",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Is $ \lim_{n \to \infty} \sum_{k=1}^{n-1}\binom{n}{k}2^{-k(n-k)} = 0$? Is it true that:
$$
\lim_{n \to \infty} \sum_{k=1}^{n-1}\binom{n}{k}2^{-k(n-k)} = 0
\;?$$
It seems true numerically, but how can this limit be shown?
| Note that $(n-k)$ is at least $n/2$ for $k$ between 1 and $n/2$. So, looking at the sum up to $n/2$ and doubling (by symmetry), your sum is bounded above by something like:
$$\sum_{k=1}^{n-1}\binom{n}{k}2^{-kn/2}=\left(1+2^{-n/2}\right)^n-2^{-n^2/2}-1$$
which bounds your sum above and goes to zero.
Alternatively, use the bound
$$\binom{n}{k}\leq \frac{n^k}{k!}\;.$$
Since the sum is symmetric around $k=n/2$, work with the sum up to $n/2$. Then $n^k2^{-k(n-k)}=2^{k(\log_2 n-n+k)}$. For $k$ between 1 and $n/2$ and for large $n$ this scales something like $2^{-kn/2}$, which when summed from 1 to $n/2$ in $k$ will tend to 0 as $n\rightarrow\infty$.
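A numerical sanity check of the limit (illustration only):
```python
from math import comb

for n in [5, 10, 20, 40, 80]:
    s = sum(comb(n, k) * 2.0 ** (-k * (n - k)) for k in range(1, n))
    print(n, s)   # decays rapidly toward 0 (dominated by the k=1 term, n*2**(1-n))
```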
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/249651",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 1,
"answer_id": 0
} |
Generating Pythagorean triples for $a^2+b^2=5c^2$? Just trying to figure out a way to generate triples for $a^2+b^2=5c^2$. The wiki article shows how it is done for $a^2+b^2=c^2$ but I am not sure how to extrapolate.
| Consider the circle $$x^2+y^2=5$$ Find a rational point on it (that shouldn't be too hard). Then imagine a line with slope $t$ through that point. It hits the circle at another rational point. So you get a family of rational points, parametrized by $t$. Rational points on the circle are integer points on $a^2+b^2=5c^2$.
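Carrying the recipe out explicitly (a sketch): the rational point $(1,2)$ lies on the circle, and the line through it with slope $t=p/q$ meets the circle again at $x=\frac{p^2-4pq-q^2}{p^2+q^2}$, $y=\frac{2q^2-2pq-2p^2}{p^2+q^2}$; clearing denominators gives integer triples.
```python
def triple(p, q):
    # integer triple from the second intersection of the slope-p/q line
    a = p*p - 4*p*q - q*q
    b = 2*q*q - 2*p*q - 2*p*p
    c = p*p + q*q
    assert a*a + b*b == 5*c*c
    return a, b, c

for p, q in [(0, 1), (1, 1), (1, 2), (2, 3)]:
    print(triple(p, q))
# e.g. (1, 2) gives (-11, 2, 5): 121 + 4 = 125 = 5 * 25
```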
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/249735",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 0
} |
How do I get the conditional CDF of $U_{(n-1)}$? Let $U_1, U_2, \ldots, U_n$ be independent, identically distributed random variables, each Uniform(0, 1). How can I find the cumulative distribution function of the conditional distribution of $U_{(n-1)}$ given $U_{(n)} = c$? Here, $U_{(n-1)}$ refers to the second largest of the aforementioned uniform random variables.
I know that I can find the unconditional distribution of $U_{(n-1)}$: it's just Beta($n-1$, $n-(n-1)+1$), i.e. Beta($n-1$, $2$), because the $i$th order statistic of uniforms is distributed Beta($i$, $n - i + 1$). However, how do I find the conditional CDF of $U_{(n-1)}$ given that $U_{(n)} = c$?
| To condition on $A=[U_{(n)}=c]$ is to condition on the event that one value in the random sample $(U_k)_{1\leqslant k\leqslant n}$ is $c$ and the $n-1$ others are in $(0,c)$. Thus, conditionally on $A$, the rest of the sample is i.i.d. uniform on $(0,c)$. In particular $U_{(n-1)}\lt x$ means that the $n-1$ values are in $(0,x)$, which happens with probability $(x/c)^{n-1}$. The conditional density of $U_{(n-1)}$ is
$$
f_{U_{(n-1)}\mid A}(x)=(n-1)c^{-(n-1)}x^{n-2}\mathbf 1_{0\lt x\lt c}.
$$
Thus, conditionally on $U_{(n)}$, $U_{(n-1)}$ is distributed as $\bar U_{(n-1)}\cdot U_{(n)}$, where $\bar U_{(n-1)}$ is independent of $U_{(n)}$ and distributed as $U_{(n-1)}$.
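A Monte Carlo confirmation (illustration, assuming numpy): conditioning on $U_{(n)}$ being near $c$, the empirical CDF of $U_{(n-1)}$ at $x$ matches $(x/c)^{n-1}$.
```python
import numpy as np

rng = np.random.default_rng(1)
n, c, x, eps = 5, 0.8, 0.5, 0.005
U = np.sort(rng.uniform(size=(1_000_000, n)), axis=1)
sel = np.abs(U[:, -1] - c) < eps     # samples with U_(n) within eps of c
print((U[sel, -2] < x).mean())       # empirical conditional CDF, ~0.1526
print((x / c) ** (n - 1))            # theoretical value (0.625)**4
```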
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/249808",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Is this Matrix Invertible? Suppose $X$ is a real $n\times n$ matrix. Suppose $m>0$ and let $\operatorname{tr}(X)$ denote the trace of $X$. If $\operatorname{tr}(X^{\top}X)=m$, can I conclude that $X$ is invertible?
Thanks
| The fact that $\operatorname{Tr}(X^TX)$ is positive just means that the matrix is non-zero. So any non-zero matrix which is not invertible will do the job; for instance, $X=\operatorname{diag}(\sqrt m,0,\dots,0)$ has $\operatorname{tr}(X^{\top}X)=m$ but is singular when $n\ge 2$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/249890",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 1
} |
Analog of Beta-function What is the multi-dimensional analogue of the Beta-function called? The Beta-function being $$B(x,y) = \int_0^1 t^x (1-t)^y dt$$
I have a function
$$F(x_1, x_2,\ldots, x_n) = \int_0^1\cdots\int_0^1t_1^{x_1}t_2^{x_2}\cdots(1 - t_1 - \cdots-t_{n-1})^{x_n}dt_1\ldots dt_{n-1}$$
and I don't know what it is called or how to integrate it.
I have an idea that according to the Beta-function: $$F(x_1, \ldots,x_n) = \dfrac{\Gamma(x_1)\cdots\Gamma(x_n)}{\Gamma(x_1 + \cdots + x_n)}$$
Is there any analogue for this integral such as Gamma-function form for Beta-function?
| What you can look at is the Selberg integral. It is a generalization of the Beta function and is defined by
\begin{eqnarray}
S_n(\alpha,\beta,\gamma) &=& \int_0^1\cdots\int_0^1\prod_{i=1}^n t_i^{\alpha-1}(1-t_i)^{\beta-1}\prod_{1\leq i<j\leq n}|t_i-t_j|^{2\gamma}dt_1\cdots dt_n \\
&=& \prod_{j=0}^{n-1}\frac{\Gamma(\alpha+j\gamma)\Gamma(\beta+j\gamma)\Gamma(1+(j+1)\gamma)}{\Gamma(\alpha+\beta+(n+j-1)\gamma)\Gamma(1+\gamma)}
\end{eqnarray}
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/249956",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 2,
"answer_id": 0
} |
Proof $A\sim\mathbb{N}^{\mathbb{N}}$ Let $A=\{f\in\{0,1\}^{\mathbb{N}}\,|\, f(0)=0,f(1)=0\}$, i.e. all the infinite binary vectors that start with $0,0$.
I need to prove that $A\sim\mathbb{N}^{\mathbb{N}}$.
Any ideas or hint?
| Hint:
Show that $2^{\mathbb N}\sim \mathbb N^{\mathbb N}$ first, then show that $A\sim 2^\mathbb N$.
The first part is the more difficult part, but recall that $\mathbb N^{\mathbb N}\subseteq\mathcal P(\mathbb{N\times N})$ and that $\mathbb{N\times N\sim N}$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/250015",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
} |
How do I prove the middle-$\alpha$ cantor set is perfect? Let $\alpha\in (0,1)$ and $\beta=\frac{1-\alpha}{2}$.
Define $T_0(x) = \beta x$ and $T_1(x) = (1-\beta) + \beta x$ , $\forall x\in [0,1]$.
Recursively define $I_0 =[0,1]$ and $I_{n+1}= T_0(I_n) \cup T_1(I_n)$.
The Middle-$\alpha$ Cantor Set is defined as $\bigcap_{n\in \omega} I_n$.
I have proved that $I_n$ is a disjoint union of $2^n$ intervals, each of length $\beta^n$. That is, $I_n=\bigcup_{i=1}^{2^n} [a_i,b_i]$
My question is: how do I prove that every endpoint $a_i,b_i$ in $I_n$ is in $\bigcap_{n\in\omega} I_n$?
It seems trivial, but I don't know how to prove this.
| Define $I_n^* = I_0 \setminus I_n$ for all $n\in \omega$.
Note that (i):
$$\begin{align}I_{n+1}^*&=I_0 \setminus I_{n+1} \\&=I_0 \setminus (T_0(I_n)\cup T_1(I_n)) \\&=(I_0\setminus (T_0(I_0)\setminus T_0(I_n^*)))\cap (I_0\setminus (T_1(I_0)\setminus T_1(I_n^*))) \\&=T_0(I_n^*)\cup I_1^* \cup T_1(I_n^*).\end{align}$$
Also (ii), it can be found that, $\forall x\in I_n$, $\beta x\in I_n$ and $(1-\beta)+\beta x \in I_n$.
Let $E_n$ be a set of endpoints of $I_n$.
Let $G=\{n\in \omega \mid \forall m\,(n<m \Rightarrow E_n\subset E_m)\}$.
Then, it can be shown that $n<m\Rightarrow E_n \subset E_m$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/250065",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Conceptual question about equivalence of eigenvectors Suppose for a matrix the eigenvalue is 1 and the eigenvector is (2,-3). Then does that mean (-4,6) and (4,-6) are equivalent eigenvectors as the ratios are the same?
| Let $T$ be a transformation, and let $\lambda$ be an eigenvalue with eigenvector $v$, i.e. $T(v)=\lambda v$. Then if $c$ is any scalar, $cv$ is also an eigenvector with eigenvalue $\lambda$, since $T(cv)=cT(v)=c\lambda v=\lambda(cv)$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/250121",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
} |
Trying understand a move in Cohen's proof of the independence of the continuum hypothesis I've read a few different presentations of Cohen's proof. All of them (that I've seen) eventually make a move where a Cartesian product (call it CP) between the (M-form of) $\aleph_2$ and $\aleph_0$ into {1, 0} is imagined. From what I gather, whether $CP \in M$ is what determines whether $\neg$CH holds in M or not such that if $CP \in M$ then $\neg$CH.
Anyway, my question is: Why does this product, CP, play this role? How does it show us that $\aleph_2 \in M$ (the 'relativized' form of $\aleph_2$, not the $\aleph_2 \in V$)? Could not some other set-theoretical object play the same role?
| In order to prove the continuum hypothesis is independent from the axioms of $ZFC$, what Cohen did was to start with $ZFC+V=L$ (in which the generalized continuum hypothesis holds), and create a new model of $ZFC$ in which the continuum hypothesis fails.
First we need to understand how to add one real number to the universe, then we can add $\aleph_2$ of them at once. If we are lucky enough then $\aleph_1$ of our original model did not become countable after this addition, and then we have that there are $\aleph_2$ new real numbers, and therefore CH fails.
To add one real number Cohen invented forcing. In this process we "approximate" a new set of natural numbers by finite parts. Some mysterious creature known as a "generic filter" then creates a new subset, so if we adjoin the generic filter to the model we can show that there is a new subset of the natural numbers, which is the same thing as saying we add a real number.
We can now use the partial order which adds $\aleph_2$ real numbers at once. This partial order has some good properties which ensure that the ordinals which were initial ordinals (i.e. cardinals) are preserved, and so we have that CH is false in this extension.
(I am really trying to avoid a technical answer here, and if you wish to get the details you will have to sit through some book and learn about forcing. I wrote more about the details in A question regarding the Continuum Hypothesis (Revised))
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/250183",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 2,
"answer_id": 0
} |
Coin tossing questions I had an exam today and I would like to know for sure that I got these answers correct.
A fair coin will be tossed repeatedly.
*
*What is the probability that in 5 flips we will obtain exactly 4 heads.
*Let $X =$ # flips until we have obtained the first head. Find the conditional probability of $P(X=4|X\geq2)$.
*What is the probability that the fifth heads will occur on the sixth flip?
Here are my answers:
*
*$\frac{5}{32}$
*$\frac{1}{8}$
*$\frac{5}{64}$
| It seems that the general consensus is that you are right.
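For the record, a quick Monte Carlo confirms all three (illustration only):
```python
import random
random.seed(0)
N = 200_000

# 1. exactly 4 heads in 5 flips (expect 5/32 = 0.15625)
p1 = sum(sum(random.random() < 0.5 for _ in range(5)) == 4 for _ in range(N)) / N

# 2. X = flips until first head; P(X = 4 | X >= 2) (expect 1/8 = 0.125)
def first_head():
    k = 1
    while random.random() >= 0.5:
        k += 1
    return k
xs = [first_head() for _ in range(N)]
cond = [v for v in xs if v >= 2]
p2 = sum(v == 4 for v in cond) / len(cond)

# 3. fifth head on sixth flip: 4 heads in first 5, then a head (expect 5/64)
p3 = sum(sum(random.random() < 0.5 for _ in range(5)) == 4
         and random.random() < 0.5 for _ in range(N)) / N

print(p1, p2, p3)   # ~ 0.15625, 0.125, 0.078125
```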
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/250259",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
In regard to a retraction $r: \mathbb{R}^3 \rightarrow K$ Let $K$ be the "knotted" $x$-axis. I have been able to show that $K$ is a retract of $\mathbb{R}^3 $ using the fact that $K$ and the real line $\mathbb{R}$ are homeomorphic, $\mathbb{R}^3$ is a normal space, and then applying the Tietze Extension Theorem. But then what would be an explicit retraction $r: \mathbb{R}^3 \rightarrow K$? Any ideas?
| Let $f : K \to \Bbb R$ and pick a point $x \in K$.
Pick an infinite sheet of paper on the left side of the knot and imagine pinching and pushing it inside the knot all the way right to $x$, and use this to define $g$ on the space spanned by the sheet of paper into $(-\infty, f(x))$, such that if $y<f(x)$, $g^{-1}(\{y\})$ is homeomorphic to the sheet of paper and intersects $K$ only at $f^{-1}(y)$.
Do the same thing with another infinite sheet of paper from the right side of the knot moving all the way left to $x$.
Finally define $g(y) = f(x)$ for all $y$ in the remaining space between the two sheets of paper.
Then $g : \Bbb R^3 \to \Bbb R$ is continuous, extends $f$, and except at $f(x)$, where the fiber is big, $g^{-1}(\{y\})$ is homeomorphic to $\Bbb R^2$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/250343",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
A necessary and sufficient condition for a measure to be continuous. If $(X,\mathcal{M})$ is a measurable space such that $\{x\}\in\mathcal{M}$ for all $x\in$$X$, a finite measure $\mu$ is called continuous if $\mu(\{x\})=0$ for all $x\in$$X$.
Now let $X=[0,\infty]$, $\mathcal{M}$ be the collection of the Lebesgue measurable subsets of $X$. Show that $\mu$ is continuous if and only if the function $x\to\mu([0,x])$ is continuous.
One direction is easy: if the function is continuous, I can get that $\mu$ is continuous. But the other direction confuses me. I want to show the function is continuous, so I need to show for any $\epsilon>0$, there is a $\delta>0$ such that $|\mu([x,y])|<\epsilon$ whenever $|x-y|<\delta$.But I can't figure out how to apply the condition that $\mu$ is continuous to get this conclusion.
| It seems like the contrapositive is a good way to go. Suppose that $x\mapsto\mu([0,x])$ is not continuous, say at the point $x_0$. Then there exists an $\epsilon>0$ such that for all $\delta>0$ there is a $y$ such that $\vert x_0-y\vert<\delta$ but $\vert\mu([x_0,y])\vert\geq\epsilon$. Thus we can construct a sequence $(y_n)$ which converges to $x_0$, but such that $\vert\mu([x_0,y_n])\vert\geq\epsilon$ for all $n$. Hence we can conclude that $\mu(\{x_0\})\geq\epsilon>0$. Hopefully you can fill in the details from there!
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/250397",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Integral and Area of a section bounded by a function. I'm having a really hard time grasping the concept of an integral/area of a region bounded by a function.
Let's use $x^3$ as our sample function.
I understand the concept is to create an infinite number of infinitely small rectangles, calculate and sum their area. Using the formula
$$\text{Area}=\lim_{n\to\infty}\sum_{i=1}^n f(C_i)\Delta x=\lim_{n\to\infty}\sum_{i=1}^n\left(\frac{i}n\right)^3\left(\frac1n\right)$$
I understand that $n$ is to represent the number of rectangles, $\Delta x$ is the change in the $x$ values, and that we are summing the series, but I still don't understand what $i$ and $f(C_i)$ do. Is $f(C_i)$ just the value of the function at that point, giving us area?
Sorry to bother you with a homework question. I know how annoying that can be.
P.S. Is there a correct way to enter formulas?
| So, $f(C_i)$ is the value of $f$ at $C_i$, but more importantly it is the height of the specific rectangle being used in the approximation. Then $i$ just indexes the subinterval which forms the base of that rectangle. As $|C_{i+1}-C_i|\rightarrow 0$, this sum becomes the area under the curve.
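A concrete version of that formula (illustration): right-endpoint Riemann sums for $f(x)=x^3$ on $[0,1]$ approach the exact area $\frac14$.
```python
def riemann_sum(f, n):
    dx = 1.0 / n
    return sum(f(i * dx) * dx for i in range(1, n + 1))  # C_i = i/n

for n in [10, 100, 1000, 10000]:
    print(n, riemann_sum(lambda x: x**3, n))  # 0.3025, 0.255025, ... -> 0.25
```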
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/250455",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
} |
Isomorphism between 2 quotient spaces Let $M,N$ be linear subspaces of $L$; then how can we prove that the following map
$$(M+N)/N\to M/M\cap N$$ defined by $$m+n+N\mapsto m+M\cap N$$ is surjective?
Originally, I need to prove that this map is a bijection. I have already proven that this map is injective and well defined, but I am having a hard time proving surjectivity. Please help.
| Define $T: M \to (M+N)/N$ by
$m \mapsto m+N$.
Show that it is linear and onto.
Check that $\ker T = M\cap N$; then
by the first isomorphism theorem $f:M/(M\cap N) \to (M+N)/N$ is an isomorphism.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/250499",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
The notion of complex numbers How does one know the notion of real numbers is compatible with the axioms defined for complex numbers, i.e. how does one know that by defining an operator '$i$' with the property that $i^2=-1$, we will not in some way contradict some statement that is an outcome of the real numbers?
For example, if I defined an operator $x$ with the property that $x^{2n}=-1$ and $x^{2n+1}=1$ for all integers $n$, this operator would not be consistent when used compatibly with the properties of the real numbers, since I would have $x^2=-1$, $x^3=1$, thus $x^5=x^2x^3=-1$, but I defined $x^5$ to be equal to $1$.
How do I know I won't encounter such a contradiction based upon the axioms of the complex numbers?
| A field is a generalization of the real number system. For a structure to be a field, it should fulfill the field axioms (http://en.wikipedia.org/wiki/Field_%28mathematics%29).
It is rather easy to see that the complex numbers are, indeed, a field. Proving that there isn't a paradox hiding in the complex-number theory is harder. What can be proved is this:
If number theory (natural numbers, that is) is consistent, then so is the complex number system. The main problem is that you can't prove the consistency of a theory without using a stronger theory. And then you have the problem of proving that the stronger theory is consistent, ad infinitum.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/250558",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 6,
"answer_id": 4
} |
Constructing a strictly increasing function with zero derivatives I'm trying to construct a function described as follows:
$f:[0,1]\rightarrow \mathbb R$ such that $f'(x)=0$ almost everywhere; $f$ has to be continuous and strictly increasing.
(I'd also conclude that this function is not absolutely continuous.)
The part in brackets is easy to prove.
I'm having trouble constructing the function:
I thought about writing the function as an infinite sum of a sequence of increasing and continuous functions, so I considered the Cantor-Vitali function, which is defined on $[0,1]$ and is continuous and increasing (not strictly).
So $f(x)=\sum_{k=0}^{\infty} 2^{-k}\phi(3^{-k}x)$, where $2^{-k}$ is there to make the sum converge and $\phi$ is the Cantor-Vitali function.
The sum converges, the function is continuous (as a uniform sum of continuous functions) and is defined as asked. But I'm in trouble while proving that it is strictly increasing. Honestly it seems to be, but I don't know how to prove it.
I know that for $0\leq x < y\leq 1$ there always exists a $k$ such that $\phi(3^{-k}x)< \phi(3^{-k}y)$, but then I got stuck.
I'm looking for help in proving this part.
| By $\phi$ we denote Cantor-Vitali function. Let $\{(a_n,b_n):n\in\mathbb{N}\}$ be the set of all intervals in $[0,1]$ with rational endpoints. Define
$$
f_n(x)=2^{-n}\phi\left(\frac{x-a_n}{b_n-a_n}\right)\qquad\qquad
f(x)=\sum\limits_{n=1}^{\infty}f_n(x)
$$
I think you can show that it is continuous and has zero derivative almost everywhere. As for strict monotonicity, consider $0\leq x_1<x_2\leq 1$ and find an interval $(a_n,b_n)$ such that $(a_n,b_n)\subset(x_1,x_2)$; then
$$
f(x_2)-f(x_1)\geq f(b_n)-f(a_n)\geq f_n(b_n)-f_n(a_n)=2^{-n}>0
$$
So $f$ is strictly monotone.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/250628",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11",
"answer_count": 1,
"answer_id": 0
} |
$V$ is a vector space over $\mathbb Q$ of dimension $3$ $V$ is a vector space over $\mathbb Q$ of dimension $3$, and $T: V \to V$ is linear with $Tx = y$, $Ty = z$, $Tz=(x+y)$ where $x$ is non-zero. Show that $x, y, z$ are linearly independent.
| Let $A = \mathbb{Q}[X]$ be the polynomial ring.
Let $I = \{f(X) \in A|\ f(T)x = 0\}$.
Clearly $I$ is an ideal of $A$.
Let $g(X) = X^3 - X - 1$.
Then $g(X) \in I$.
Suppose $g(X)$ is not irreducible in $A$.
Then $g(X)$ has a linear factor of the form $X - a$, where $a = 1$ or $-1$.
But this is impossible.
Hence $g(X)$ is irreducible in $A$.
Since $x \neq 0$, $I \neq A$. Hence, since $(g(X))\subseteq I$ and $(g(X))$ is a maximal ideal, $I = (g(X))$.
Suppose there exist $a, b, c \in \mathbb{Q}$ such that $ax + bTx + cT^2x = 0$.
Then $a + bX + cX^2 \in I$.
Hence $a + bX + cX^2$ is divisible by $g(X)$.
Hence $a = b = c = 0$ as desired.
A more elementary version of the above proof
Suppose $x$, $y = Tx$, $z= T^2x$ are not linearly independent over $\mathbb{Q}$.
Let $h(X)\in \mathbb{Q}[X]$ be the monic polynomial of the least degree such that $h(T)x = 0$.
Since $x \neq 0$, deg $h(X) = 1$, or $2$.
Let $g(X) = X^3 - X - 1$.
Then $g(X) = h(X)q(X) + r(X)$, where $q(X), r(X) \in \mathbb{Q}[X]$ and deg $r(X) <$ deg $h(X)$.
Then $g(T)x = q(T)h(T)x + r(T)x$.
Since $g(T)x = 0$ and $h(T)x = 0$, $r(T)x = 0$.
Hence $r(X) = 0$.
Hence $g(X)$ is divisible by $h(X)$.
But this is impossible because $g(X)$ is irreducible as shown above.
Hence $x, y, z$ must be linearly independent over $\mathbb{Q}$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/250706",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 2,
"answer_id": 1
} |
Is $\mathbb R^2$ a field? I'm new to this very interesting world of mathematics, and I'm trying to learn some linear algebra from Khan academy.
In the world of vector spaces and fields, I keep coming across the definition of $\mathbb R^2$ as a vector space over the field $\mathbb R$.
This makes me think: why can't $\mathbb R^2$ be a field of its own?
Would that make $\mathbb R^2$ a field and a vector space?
Thanks
| Adding to the above answer: with the usual componentwise multiplication you cannot make a field out of $\mathbb{R}^{2}$, though there may exist other products, such as the one in the other answers, which do make a field out of $\mathbb{R}\times\mathbb{R}$.
According to one of the theorems of field theory, every field is an integral domain. So, considering $\mathbb{R}\times\mathbb{R}$ with the following natural (componentwise) product:
$(A,B)*(C,D)=(AC,BD)$
we see that
$(1,0)*(0,1)=(0,0),$
which means that $\mathbb{R}^{2}$ is not an integral domain under this product and hence not a field.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/250768",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "12",
"answer_count": 4,
"answer_id": 3
} |
Stochastic Processes...what is it? My university is offering stochastic processes next semester, here is the course description:
Review of distribution theory. Introduction to stochastic processes, Markov chains and Markov processes, counting, and Poisson and Gaussian processes. Applications to queuing theory.
I'm torn between this course and analysis 2. I'm not sure what stochastic processes deals with, and the description doesn't help. Can anyone explain what you do in such a course? Is it proofs-based? Also, would you recommend this over Analysis 2?
Thanks.
| "Stochastic" refers to topics involving probability -- often the treatment of processes that are inherently random in nature by virtue of being some sot of random function about a random or deterministic variable, or a process parameterized by a random quantity.
For example, Brownian motion is a stochastic process; similarly, the behavior of a LTI system containing a random parameter is a stochastic process (say, the vibration of a spring-mass system, where the spring constant is actually a random variable).
Stochastic calculus requires a strong background in analytical probability theory, which itself requires some notion of measure theory, algebra, and multivariable analysis. Many undergraduate courses will avoid some of the rigorous elements of stochastic analysis, but to really understand it, you will need a decent background in undergraduate analysis through multivariable analysis, Lebesgue theory, and measure theory.
In short, take Analysis II, take Probability, and then consider Stochastic processes if you are interested in it.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/250847",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 1
} |
Where is the mistake in the calculation of $y'$ if $ y = \Bigl( \dfrac{x^2+1}{x^2-1} \Bigr)^{1/4} $? Please take a look here.
If $ y = \Bigl( \dfrac{x^2+1}{x^2-1} \Bigr)^{1/4} $
\begin{eqnarray}
y'&=& \dfrac{1}{4} \Bigl( \dfrac{x^2+1}{x^2-1} \Bigr)^{-3/4} \left \{ \dfrac{2x(x^2-1) - 2x(x^2+1) }{(x^2-1)^2} \right \}\\
&=& \Bigl( \dfrac{x^2+1}{x^2-1} \Bigr)^{-3/4} \dfrac{-x}{(x^2-1)^2}.
\end{eqnarray}
By the other hand, we have
\begin{equation}
\log y = \dfrac{1}{4} \left \{ \log (x^2+1) - \log (x^2-1) \right \}
\end{equation}
Then,
\begin{eqnarray}
\dfrac{dy}{dx} &=& y \dfrac{1}{4} \left \{ \dfrac{2x}{(x^2+1)} -\dfrac{ 2x}{(x^2-1)} \right \} \\
&=& \dfrac{1}{4} \dfrac{x^2+1}{x^2-1} \cdot 2x \dfrac{(x^2-1) - (x^2+1)}{(x^2+1)(x^2-1)} \\
&=& \dfrac{x^2+1}{x^2-1} \dfrac{-x}{(x^2+1)(x^2-1)} \\
&=& \dfrac{-x}{(x^2-1)^2}.
\end{eqnarray}
But this implies,
\begin{equation}
\dfrac{-x}{(x^2-1)^2} = \Bigl( \dfrac{x^2+1}{x^2-1} \Bigr)^{-3/4} \dfrac{-x}{(x^2-1)^2}.
\end{equation}
Where is the mistake?
| I believe you forgot a power 1/4 when substituting for $y$ (in the calculation using logarithms).
Edited to explain further:
In your calculation, you write
\begin{align} \frac{dy}{dx} &= y\frac14 \left\{ \frac{2x}{(x^2+1)} - \frac{2x}{(x^2-1)} \right\} \\
&= \frac14 \frac{x^2+1}{x^2-1} \cdot 2x\frac{(x^2-1)-(x^2+1)}{(x^2+1)(x^2-1)}.
\end{align}
However, this should be
\begin{align} \frac{dy}{dx} &= y\frac14 \left\{ \frac{2x}{(x^2+1)} - \frac{2x}{(x^2-1)} \right\} \\
&= \frac14 \color{red}{\left(\color{black}{\frac{x^2+1}{x^2-1}}\right)^{\frac14}} \cdot 2x\frac{(x^2-1)-(x^2+1)}{(x^2+1)(x^2-1)}.
\end{align}
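A sympy cross-check of the corrected computation (a sketch, assuming sympy): with the $\frac14$ power restored, logarithmic differentiation agrees with the direct chain-rule derivative.
```python
import sympy as sp

x = sp.symbols('x', positive=True)
y = ((x**2 + 1) / (x**2 - 1)) ** sp.Rational(1, 4)

direct = sp.diff(y, x)                                      # chain rule
via_log = y * sp.Rational(1, 4) * (2*x/(x**2 + 1) - 2*x/(x**2 - 1))
print(sp.simplify(direct - via_log))                        # -> 0
```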
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/250903",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Why is $E(Z | Y) = 0$? Let $Z$ be a random variable distributed $\mathcal{N}(0, 1)$. Let $Y = Z^2$.
Apparently, $E(Z \mid Y) = E(Z \mid Z^2) = 0$ due to "symmetry." Why is that?
| For completion, note that for all measurable $f$ such that $E(|f(Z)|) < \infty$, $$E(f(Z)\mid Y) = \frac{f(Z) + f(-Z)}{2},$$ which makes sense because the right-hand side is an even function of $Z$, hence a function of $Y=Z^2$.
Here $f\colon x\mapsto x$ is odd, hence $E(Z\mid Y) = \frac{Z-Z}{2}=0$.
Another example of interest: if you take $f(x)=e^{i\theta x}$, you get $E(e^{i\theta Z} \mid Y) = \cos(\theta Z)$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/250969",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
} |
When generating set is not a basis If a generating set of a vector space made up of linearly independent vectors constitutes a basis, then when such a set is not a basis, does it mean that its vectors are linearly dependent?
| Yes. Let $S$ be the generating set and let $B\subset S $ be a basis. If $B,S$ are not equal, then there exists $u\in S-B$ since $B$ is a basis, it follows that $u=c_1v_1+c_2v_2+...+c_nv_n$ for some $v_1,v_2,...,v_n\in B$. From this it follows that {$u,v_1,...,v_n$} are linearly dependent. Hence, $S$ is not linearly independent.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/251059",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
How to simplify polynomials I can't figure out how to simplify this polynomial $$5x^2+3x^4-7x^3+5x+8+2x^2-4x+9-6x^2+7x$$
I tried combining like terms
$$5x^2+3x^4-7x^3+5x+8+2x^2-4x+9-6x^2+7x$$
$$(5x^2+5x)+3x^4-(7x^3+7x)+2x^2-4x-6x^2+(8+9)$$
$$5x^3+3x^4-7x^4+2x^2-4x-6x^2+17$$
It says the answer is $$3x^4-7x^3+x^2+8x+17$$ but how did they get it?
| You cannot combine terms like that, you have to split your terms by powers of $x$.
So for example $$5x^2+5x+2x^2 = (5+2)x^2+5x = 7x^2+5x$$ and not $5x^3+2x^2$. Using this, you should end up with your answer.
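A sympy one-liner to check the combination of like terms (illustration only):
```python
import sympy as sp

x = sp.symbols('x')
expr = 5*x**2 + 3*x**4 - 7*x**3 + 5*x + 8 + 2*x**2 - 4*x + 9 - 6*x**2 + 7*x
print(sp.expand(expr))   # -> 3*x**4 - 7*x**3 + x**2 + 8*x + 17
```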
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/251172",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 3,
"answer_id": 1
} |
Find conditions on $a$ and $b$ such that the splitting field of $x^3 +ax+b $ has degree of extension 3 Find conditions on $a$ and $b$ such that the splitting field of $x^3 +ax+b \in \mathbb Q[x]$ has degree of extension 3 over $\mathbb Q$.
I'm trying to solve this question; it seems very difficult to me, maybe because I don't know Galois Theory yet (the content of later chapters). I need help to prove this without Galois Theory.
Thanks
| Partial answer: Let $f(x)=x^3+ax+b$, let $K$ be its splitting field, and $\alpha$, $\beta$ and $\gamma$ be the roots of $f$ in $K$.
First of all, $f$ has to be irreducible, which is the same as saying it doesn't have a rational root: if it's not, and say $\alpha$ is rational, then $f(x)$ factors as $(x-\alpha)g(x)$ with $g$ a quadratic polynomial; then $K$ is also the splitting field of $g$, so it has degree $\leq2$.
So let's assume that $f$ is irreducible. The field $\mathbf Q(\alpha)$ then has degree $\deg(f)=3$ over $\mathbf Q$, so if we want $[K:\mathbf Q]=3$, we need $K=\mathbf Q(\alpha)$. In other words, we want the two other roots $\beta,\gamma$ to be in $\mathbf Q(\alpha)$. Let's look at the relation between roots and coefficients for $f$:
$$
\alpha+\beta+\gamma=0\\
\alpha\beta+\beta\gamma+\gamma\alpha=a\\
\alpha\beta\gamma=-b
$$
From the first and third equations, you see that $\beta+\gamma=-\alpha$ and $\beta\gamma=-b/\alpha=\alpha^2+a$, so $\beta$ and $\gamma$ are the roots of the second degree polynomial $g(y)=y^2+\alpha y +\alpha^2+a\in\mathbf Q(\alpha)[y]$. Those roots are in $\mathbf Q(\alpha)$ if, and only if, the discriminant $\Delta=-3\alpha^2-4a$ of $g$ is a square in $\mathbf Q(\alpha)$.
Now, the problem is that I'm not sure how to determine whether $\Delta$ is a square or not...
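For orientation, the standard criterion (a classical fact, stated here beyond what this partial answer proves) is that for irreducible $f$ the degree is $3$ exactly when the discriminant $-4a^3-27b^2$ of $f$ is a square in $\mathbf Q$. Here is a small numerical illustration; the polynomial $x^3-3x+1$ (discriminant $81=9^2$) and the relation $\beta=\alpha^2-2$ are my own illustrative choices:

```python
import numpy as np

# For x^3 - 3x + 1, each root alpha generates another root as alpha^2 - 2,
# so all three roots live in Q(alpha).
coeffs = [1, 0, -3, 1]
for alpha in np.roots(coeffs).real:
    print(np.polyval(coeffs, alpha**2 - 2))  # all ~ 0
```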
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/251238",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 2,
"answer_id": 0
} |
If $n$ is a natural number $\ge 2$ how do I prove that any graph with $n$ vertices has at least two vertices of the same degree? Any help would be appreciated.
If $n$ is a natural number $\ge 2$ how do I prove that any graph with $n$ vertices has at least two vertices of the same degree?
| HINT: The possible degrees of a simple graph with $n$ vertices are the $n$ integers $0,1,\dots,n-1$. However, a simple graph on $n$ vertices cannot have both a vertex of degree $0$ and a vertex of degree $n-1$; why?
That means that either the degrees of the $n$ vertices are all in the set $\{0,1,\dots,n-2\}$, or they're all in the set $\{1,2,\dots,n-1\}$. How many numbers are in each of those sets? (In case that's not enough of a hint, here is a further hint:)
Pigeonhole principle.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/251310",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9",
"answer_count": 2,
"answer_id": 1
} |
Proving every montonic function on an interval is integrable I am trying to understand the proof of every monotonic function that is on an interval is integrable.
This is what I have $U(f, P) - L(f, P) = \sum\limits_{k=1}^n(f(t_k) - f(t_{k-1}))\cdot (t_k - t_{k-1})$
Now my book says that this is equal to:
$= (f(b) - f(a))\cdot(t_k - t_{k-1})$
How does one deduce that $\sum\limits_{k=1}^n(f(t_k) - f(t_{k-1})) = f(b) - f(a)$?
| Note that
\begin{equation*}
\sum_{k=1}^{n}(f(t_{k})-f(t_{k-1}))=(f(t_{1})-f(t_{0}))+(f(t_{2})-f(t_{1}))+(f(t_{3})-f(t_{2}))+...+(f(t_{n})-f(t_{n-1})),
\end{equation*}
so the $-f(t_{i-1})$ appearing in the $i$-th term always cancels the $+f(t_{i-1})$ appearing in the $(i-1)$-th term. Hence you are left with only the endpoint values, i.e. $f(b)-f(a)$.
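A tiny numeric illustration of the telescoping (my own sketch; the monotone function and partition below are arbitrary choices):

```python
import numpy as np

f = lambda x: x**2
t = np.linspace(0.0, 2.0, 11)  # uniform partition of [0, 2]
print(np.sum(f(t[1:]) - f(t[:-1])), f(2.0) - f(0.0))  # both 4.0
```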
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/251384",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Proving that $\|u_1\|^2+\|w_1\|^2=\|u_2\|^2+\|w_2\|^2$ If $u_1+w_1=u_2+w_2$ and $\langle u_1,w_1\rangle=0=\langle u_2,w_2\rangle$, how can we prove that $$\|u_1\|^2+\|w_1\|^2=\|u_2\|^2+\|w_2\|^2$$
I know I can expand this to $$\langle u_1,u_1\rangle+\langle w_1,w_1\rangle=\langle u_2,u_2\rangle+\langle w_2,w_2\rangle$$ but from here what can I do with that?
| From $u_1+w_1=u_2+w_2$ and $\langle u_1,w_1\rangle=0=\langle u_2,w_2\rangle$, we have
$\|u_1\|^2+\|w_1\|^2$
$=\langle u_1,u_1\rangle+\langle w_1,w_1\rangle+2\langle u_1,w_1\rangle$
$=\langle u_1+w_1,u_1+w_1\rangle$
$=\langle u_2+w_2,u_2+w_2\rangle$
$=\langle u_2,u_2\rangle+\langle w_2,w_2\rangle+2\langle u_2,w_2\rangle$
$=\|u_2\|^2+\|w_2\|^2$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/251452",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Conditions for the mean value theorem The mean value theorem which most of us know starts with the conditions that $f$ is continuous on the closed interval $[a,b]$ and differentiable on the open interval $(a,b)$; then there exists a $c \in (a,b)$ where
$\frac{f(b)-f(a)}{b-a} = f'(c)$.
I'm guessing we're then able to define $\forall a', b' \in [a,b]$ where $c \in (a', b')$ and the mean value theorem is correspondingly valid.
However, are we then able to start with $\forall c \in (a,b)$ and then claim that there exist $a',b' \in [a,b]$, where the mean value theorem is still valid?
| The answer is No. Consider $y=f(x)=x^3$ and $c=0$. $f'(c)=0$, but no secant line has a zero slope, since for $r\neq s$, ${{f(r)-f(s)}\over{r-s}}=r^2+rs+s^2>0$.
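A numeric sketch of this counterexample (the sampling grid is an arbitrary choice of mine):

```python
import numpy as np

# For f(x) = x^3 the secant slopes (r^3 - s^3)/(r - s) = r^2 + rs + s^2
# stay strictly positive, so none of them matches f'(0) = 0.
r, s = np.meshgrid(np.linspace(-2, 2, 201), np.linspace(-2, 2, 201))
mask = np.abs(r - s) > 1e-12
print(((r[mask]**3 - s[mask]**3) / (r[mask] - s[mask])).min())  # > 0
```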
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/251513",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 2,
"answer_id": 0
} |
Is $Y_s=sB_{1\over s},s>0$ a Brownian motion Suppose $\{B_s,s>0\}$ is a standard Brownian motion process. Is $Y_s=sB_{1\over s},\ s>0$ a (standard) Brownian motion? I have found that $Y_0=0$ (extending by continuity) and $Y_s\sim N(0,s)$, since $B_{1/s}\sim N(0,1/s)$, so it remains to show that it has stationary and independent increments. But I am not sure how to do it.
| Have you heard of Gaussian processes ? If you have, you only have to check that $(Y_s)$ has the same covariance function as the Brownian motion.
If you haven't, don't worry, it's very simple here: you are interested in the law of the couple $(sB_{1/s},tB_{1/t}-sB_{1/s})$ when $0 < s <t$. This is a 2 dimensional centered Gaussian vector, so its law is entirely determined by its covariance matrix. In the end, you have to compute $E(sB_{1/s} tB_{1/t})$.
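A Monte Carlo sketch of that computation (my own illustration; the values of $s$, $t$ and the sample size are arbitrary):

```python
import numpy as np

# For 0 < s < t: E(s*B_{1/s} * t*B_{1/t}) = s*t*min(1/s, 1/t) = min(s, t).
rng = np.random.default_rng(1)
s, t, n = 0.5, 2.0, 200_000
b_early = rng.standard_normal(n) * np.sqrt(1 / t)                   # B_{1/t}
b_late = b_early + rng.standard_normal(n) * np.sqrt(1 / s - 1 / t)  # B_{1/s}
print(np.mean(s * b_late * t * b_early))  # ~ 0.5 = min(s, t)
```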
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/251584",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Representation of linear functionals I have always seen the linear functionals on $R^n$ expressed as $\ell(x) = \sum_{i=0}^n a_ix_i$, and on a countable sequence space as $\ell(x) = \sum_{i=0}^{\infty} a_ix_i$. I guess that this follows directly from http://en.wikipedia.org/wiki/Riesz_representation_theorem, for Hilbert spaces.
But what if we are not in an Hilbert space or if the space is uncountable.
If $X$ were a 1-dimensional space I would get $f(x) = f(1)x$ by continuity and linearity (by differentiation and integration), and by partial differentiation it would look like $f(x) = \sum_{i=1}^n f(e_i)x_i$
for the $n$-dimensional case
| I can think of a nice representation theorem that holds in a non-Hilbert space. It goes by the name Riesz-Kakutani-Markov:
Let $X$ be a compact Hausdorff space and $(C(X),\|\cdot\|_\infty)$ the space of continuous real valued functions on $X$ endowed with the maximum norm. Then, every bounded linear functional $F$ on $C(X)$ can be written as an integral against a signed, finite Borel measure $\mu$ on $X$:
$$
F(f)=\int_X fd\mu
$$
with norm
$$
\|F\|=\int_X\vert d\mu\vert
$$ where $\vert d\mu\vert$ denotes the total variation of $\mu$.
A good resource for this theorem is Lax: Functional Analysis. Granted, this is more sophisticated than the Riesz representation theorem on Hilbert spaces, but that's to be expected.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/251628",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 1
} |
Why does Lenstra ECM work? I came across Lenstra ECM algorithm and I wonder why it works.
Please refer for simplicity to Wikipedia section Why does the algorithm work
I'm NOT a math expert but I understood the first part well enough (I suppose); what I am missing is
When this is the case, $kP$ does not exist on the original curve, and in the computations we found some $v$ with either $\text{gcd}(v,p)=p$ or $\text{gcd}(v,q)=q$, but not both. That is, $\text{gcd}(v,n)$ gave a non-trivial factor of $n$.
As far as I know this has to do with the fact that $E(\mathbb Z/n\mathbb Z)$ is not a group if $n$ is not prime, so some element (i.e. $x_1-x_2$) is not invertible, but what is the link between non-invertible elements and the factors of $n$?
Thanks to everyone
| Read this paper; it is just one piece of the whole story. The key point for your question: computing $kP$ requires inverting elements $v$ modulo $n$, and if some $v$ is not invertible then $\gcd(v,n)>1$. If that gcd is also $<n$ (which happens when $v\equiv 0 \pmod p$ but $v\not\equiv 0 \pmod q$), it is a nontrivial factor of $n$.
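An illustrative sketch of that mechanism (the modulus $n=91$ and the element $v$ are my own toy choices, far smaller than anything ECM would face):

```python
from math import gcd

# Inverting v mod n fails exactly when gcd(v, n) > 1,
# and that gcd is then a factor of n.
n, v = 91, 35        # n = 7 * 13; v is divisible by 7 but not by 13
g = gcd(v, n)
print(g, n // g)     # 7 13: a nontrivial factorization of n
```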
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/251682",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
} |
Simple and intuitive example for Zorn's Lemma Do you know any example that demonstrates Zorn's Lemma in a simple and intuitive way? I know some applications of it, but in these proofs I find the application of Zorn's Lemma not very intuitive.
| Zorn's lemma is not intuitive. It only becomes intuitive when you get comfortable with it and take it for granted. The problem is that Zorn's lemma is not counterintuitive either. It just is.
The idea is that if every chain has an upper bound, then there is a maximal element. I think that the most intuitive usage is the proof that every chain can be extended to a maximal chain.
Let $(P,\leq)$ be a partially ordered set and $C$ a chain in $P$. Let $T=\{D\subseteq P\mid D\text{ is a chain and }C\subseteq D\}$. Then $(T,\subseteq)$ has the Zorn property, because a chain in $T$ is an increasing collection of chains, and the $\subseteq$-increasing union of chains is a chain as well, so there is an upper bound. By Zorn there is a maximal element, and it is a maximal chain by definition.
If you search on this site "Zorn's lemma" you can find more than a handful examples explaining slightly more in details several discussions and other applications of Zorn's lemma. Here is a quick list from what I found:
* Is there any motivation for Zorn's Lemma?
* Every Hilbert space has an orthonormal basis - using Zorn's Lemma
* How does this statement on posets follow from Zorn's lemma?
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/251726",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9",
"answer_count": 2,
"answer_id": 0
} |
Periodic parametric curve on cylinder Given a cylinder surface $S=\{(x,y,z):x^2+2y^2=C\}$. Let $\gamma(t)=(x(t),y(t),z(t))$ satisfy $\gamma'(t)=(2y(t)(z(t)-1),-x(t)(z(t)-1),x(t)y(t))$. Can we guarantee that $\gamma$ always stays on $S$ and is periodic if $\gamma(0)$ is on $S$?
| We can reparameterize $S=\{(\sqrt{C}\cos u,\frac{\sqrt{C}}{\sqrt{2}}\sin u, v): u,v\in \mathbb{R}\}$ since $(\sqrt{C}\cos u)^2+2\left(\frac{\sqrt{C}}{\sqrt{2}}\sin u\right)^2=C$. Let $r(t)= (x(t),y(t),z(t))$ and $r(0)=(x_0,y_0,z_0)$. Define $V(x,y,z)=x^2+2y^2$. Since $V(x,y,z)=C$, then $\frac{dV}{dt}=0$. But, by the chain rule we get $0=\frac{dV}{dt}=\nabla{V}\cdot(x',y',z')$, so the tangent vector of the parametrized curve that intersects $S$ in a point is always perpendicular to $\nabla{V}$. Since $r(0)$ is in $S$ and $\nabla{V}$ is perpendicular to the tangent plane of $S$ at $r(0)$, then $r'(0)$ lies in the tangent plane of $S$ at $r(0)$. By this argument, we can conclude that $r(t)$ must stay on $S$. Since $S=\{(\sqrt{C}\cos u,\frac{\sqrt{C}}{\sqrt{2}}\sin u, v): u,v\in \mathbb{R}\}$, then $x(t)=\sqrt{C}\cos (t-t_0)$ and $y(t)=\frac{\sqrt{C}}{\sqrt{2}}\sin (t-t_0)$ with $t_0$ satisfying $x_0=\sqrt{C}\cos t_0$ and $y_0=-\frac{\sqrt{C}}{\sqrt{2}}\sin t_0$. Since $z'=xy$, then $z'(t)=\frac{C}{2\sqrt{2}}\sin(2(t-t_0))$, hence $z(t)=-\frac{C}{4\sqrt{2}}\cos(2(t-t_0))$. Since $r(2\pi)=(\sqrt{C}\cos (2\pi-t_0),\frac{\sqrt{C}}{\sqrt{2}}\sin (2\pi-t_0),-\frac{C}{4\sqrt{2}}\cos(2(2\pi-t_0)))=(x_0,y_0,z_0)=r(0)$, then $r(t)$ is periodic.
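A numerical sanity check of the invariance claim (a sketch; the initial point on $S$ and the integrator tolerances are arbitrary choices of mine, and scipy is assumed available):

```python
import numpy as np
from scipy.integrate import solve_ivp

# The quantity V = x^2 + 2y^2 should be conserved along the flow.
def rhs(t, p):
    x, y, z = p
    return [2 * y * (z - 1), -x * (z - 1), x * y]

sol = solve_ivp(rhs, (0, 50), [1.0, 1.0, 0.0], rtol=1e-10, atol=1e-12)
inv = sol.y[0]**2 + 2 * sol.y[1]**2
print(inv.min(), inv.max())  # both ~ 3 = C: the curve stays on S
```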
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/251808",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
} |
Advanced integration, how to integrate 1/polynomial ? Thanks I have been trying to integrate a function with a polynomial as the denominator.
i.e, how would I go about integrating
$$\frac{1}{ax^2+bx+c}.$$
Any help at all with this would be much appreciated, thanks a lot :)
ps The polynomial has NO real roots
| If $a=0$, then
$$I = \int \dfrac{dx}{bx+c} = \dfrac1b \log (\vert bx+c \vert) + \text{constant}$$
If $a \neq 0$, complete the square: $$I = \int \dfrac{dx}{ax^2 + bx + c} = \dfrac1a \int\dfrac{dx}{\left(x + \dfrac{b}{2a}\right)^2 + \left(\dfrac{c}a- \dfrac{b^2}{4a^2} \right)}$$
If $b^2 < 4ac$, then recall that
$$\int \dfrac{dt}{t^2 + a^2} = \dfrac1a\arctan \left(\dfrac{t}a \right) + \text{constant}$$
Hence, if $b^2 < 4ac$, then
$$I = \dfrac2{\sqrt{4ac-b^2}} \arctan \left( \dfrac{2ax + b}{\sqrt{4ac-b^2}} \right) + \text{constant}$$
If $b^2 = 4ac$, then
$$I =\dfrac1a \dfrac{-1}{\left(x + \dfrac{b}{2a}\right)} + \text{constant} = - \dfrac2{2ax+b} + \text{constant}$$
If $b^2 > 4ac$, then
$$I = \dfrac1a \int\dfrac{dx}{\left(x + \dfrac{b}{2a}\right)^2 - \left(\sqrt{ \dfrac{b^2}{4a^2} -\dfrac{c}a}\right)^2}$$
Now $$\int \dfrac{dt}{t^2 - k^2} = \dfrac1{2k} \left(\int \dfrac{dt}{t-k} - \int \dfrac{dt}{t+k} \right) = \dfrac1{2k} \log \left(\left \vert \dfrac{t-k}{t+k} \right \vert \right) + \text{constant}$$
Hence, if $b^2 > 4ac$, then
$$I = \dfrac1{\sqrt{b^2-4ac}} \log \left(\left \vert \dfrac{2ax + b - \sqrt{b^2-4ac}}{2ax + b + \sqrt{b^2-4ac}} \right \vert \right) + \text{constant}$$
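A quick symbolic check of the $b^2 < 4ac$ case (the coefficients below are my own illustrative choices):

```python
import sympy as sp

x = sp.symbols('x')
a, b, c = 1, 2, 5                       # b**2 - 4*a*c = -16 < 0: no real roots
I = sp.integrate(1 / (a * x**2 + b * x + c), x)
print(I)                                # atan(x/2 + 1/2)/2, matching the formula
print(sp.simplify(sp.diff(I, x) - 1 / (a * x**2 + b * x + c)))  # 0
```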
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/251858",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 0
} |
Empirical distribution vs. the true one: How fast $KL( \hat{P}_n || Q)$ converges to $KL( P || Q)$? Let $X_1,X_2,\dots$ be i.i.d. samples drawn from a discrete space $\mathcal{X}$ according to probability distribution $P$, and denote the resulting empirical distribution based on n samples by $\hat{P}_n$. Also let $Q$ be an arbitrary distribution. It is clear that (KL-divergence)
\begin{equation}
KL( \hat{P}_n || Q) \stackrel{n\rightarrow \infty}{\longrightarrow} KL(P || Q)
\end{equation}
but I am wondering if there exist any known quantitative rate of convergence for it. I mean if it can be shown that
\begin{equation}
\Pr\Big[ | KL( \hat{P}_n || Q) - KL(P || Q) | \geq \delta\Big] \leq f(\delta, n, |\mathcal{X}|)
\end{equation}
and what is the best expression for the RHS if there is any.
Thanks a lot!
| In addition to the last answer, the most popular concentration inequality for the KL divergence is for finite alphabets. You can look at Thm. 11.2.1 of "Elements of Information Theory" by Thomas Cover and Joy Thomas:
$$\mathbf{P}\left(D(\hat{P}_n\|P)\geq\epsilon\right)\leq e^{-n\left(\epsilon-|\mathcal{X}|\frac{\log(n+1)}{n}\right)}$$
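A Monte Carlo sketch of this concentration (my own illustration; the distribution $P$, the sample size and the number of trials are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
p = np.array([0.4, 0.3, 0.2, 0.1])
n = 10_000
p_hat = rng.multinomial(n, p, size=1000) / n
safe = np.where(p_hat > 0, p_hat, 1.0)             # avoid log(0)
kl = np.where(p_hat > 0, p_hat * np.log(safe / p), 0.0).sum(axis=1)
print(kl.mean(), np.quantile(kl, 0.99))            # both tiny, shrinking with n
```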
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/251990",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10",
"answer_count": 3,
"answer_id": 1
} |
Proving the uncountability of $[a,b]$ and $(a,b)$ I am trying to prove that $[a,b]$ and $(a,b)$ are uncountable for $a,b\in \mathbb{R}$. I looked up Rudin and I am not too inclined to read the chapter on topology, for his proof involves perfect sets.
Can anyone please point me to a proof of the above facts without point-set topology?
I am thinking along these lines:
$\mathbb{R}$ is uncountable. If we can show that there exists a bijection between $(a,b)$ and $\mathbb{R}$, we can prove $(a,b)$ is uncountable. But I am not sure how to construct such a bijection.
| $$\tan \left( \frac{\pi}{(b-a)} (x-\frac{a+b}{2})\right)$$
Basically $f(x)=\frac{\pi}{(b-a)} (x-\frac{a+b}{2})$ is the linear function such that $f(a)=-\frac{\pi}{2}$ and $f(b)=\frac{\pi}{2}$, so $x\mapsto\tan(f(x))$ is a bijection from $(a,b)$ onto $\mathbb{R}$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/252042",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 7,
"answer_id": 4
} |
Maximize $x_1x_2+x_2x_3+\cdots+x_nx_1$ Let $x_1,x_2,\ldots,x_n$ be $n$ non-negative numbers ($n>2$) with a fixed sum $S$.
What is the maximum of $x_1x_2+x_2x_3+\cdots+x_nx_1$?
| I have to solve this in 3 parts. First for $n=3$, then for $n=4$ and finally for $n>4$.
For $n=3$ we can apply a transformation $x'_1=x'_2=(x_1+x_2)/2$ and $x'_3=x_3$. $\sum x_i$ remains fixed while $\sum{x'_i*x'_{i+1}}-\sum{x_i*x_{i+1}} = (x_1+x_2)^2/4-x_1*x_2 = (x_1^2+2x_1x_2+x_2^2)/4-x_1*x_2 = (x_1^2-2x_1x_2+x_2^2)/4 = (x_1-x_2)^2/4$
which is $>0$ if $x_1$ differs from $x_2$.
So an optimal solution must have the first two variables equal (otherwise we can apply this transformation and obtain a higher value), and since you can cycle the variables, they must all be equal to $S/3$, for a total sum of $S^2/3$.
For $n=4$ the transformation $x'_1=x_1+x_3$, $x'_3=0$ doesn't change the result, so we take an optimal solution, sum the odd and even terms, and the problem becomes finding the maximum of $(\sum x_{odd})*(\sum x_{even})$, that is maximized if both terms are equal to $S/2$, for a total of $S^2/4$.
For $n>4$, I have to prove this lemma first:
For $n>4$, there is at least one optimal configuration that has at least
one index $i$ such that $x_i=0$
Take a configuration that is optimal and such that every $x_i>0$ and $x_1 = max(x_i)$.
Now use the following transformation: $x'_2=x_2+x_4$, $x'_4=0$. $\sum x_i$ remains the same but $\sum{x'_i*x'_{i+1}}-\sum{x_i*x_{i+1}}=x_1*(x_2+x_4)+(x_2+x_4)*x_3+\sum_{i>4}{x_i*x_{i+1}}-\sum{x_i*x_{i+1}} = x_1*x_4-x_4*x_5 = x_4*(x_1-x_5) = x_4*(max(x_i)-x_5) \geq x_4*(x_5-x_5) = 0$
So we have another optimal solution with a $0$.
Given that at least one optimal solution contains a $0$ for every $n>4$, the maximum value of $\sum{x_i*x_{i+1}}$ must be non-increasing in $n$ (otherwise we could take a solution for $n$ with a $0$ inside, remove that $0$, and obtain a higher solution for $n-1$).
Now the value of the sum must be $\leq S^2/4$, but taking $x_1=x_2=S/2$ gives that sum, so that configuration is an optimal one, for a sum of $S^2/4$.
This proves that the maximum is $S^2/3$ if $n=3$ and $S^2/4$ otherwise.
I am not satisfied with this answer, because it breaks down to a lot of case analysis. I am still curious to see a simpler proof (or one that requires less space..).
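A brute-force sanity check of the two formulas (my own sketch; restricting to nonnegative integer tuples is enough to hit the maxima for these small values of $S$):

```python
import itertools

def best(n, S):
    return max(sum(xs[i] * xs[(i + 1) % n] for i in range(n))
               for xs in itertools.product(range(S + 1), repeat=n)
               if sum(xs) == S)

print(best(3, 6), best(4, 6), best(5, 6))  # 12 9 9, i.e. S^2/3 then S^2/4
```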
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/252103",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "17",
"answer_count": 5,
"answer_id": 1
} |
Proving that $x^a=x^{a\,\bmod\,{\phi(m)}} \pmod m$ I want to prove $x^a \equiv x^{a\,\bmod\,8} \pmod{15}$.....(1)
my logic:
Here, since $\mathrm{gcd}(x,15)=1$ and $15$ has prime factors $3$ and $5$ (given), we can apply Euler's theorem.
we know that $a= rem + 8q$, where $8= \phi(15)$,
$x^a \equiv x^{rem}. (x^8)^q \pmod{15}$......(2)
applying Euler's theorem we get:
$x^a \equiv x^{rem} \pmod{15}$......(3)
Is this proof correct, or should I end up getting $x^a \equiv x^a \pmod {15}$...(4)?
| If $b = a\bmod m$ (the least nonnegative residue), then $b$ is equal to $a\iff 0\le a<m$
For example, $m-2\equiv m-2\pmod m, 13\equiv 13\pmod {15}$
but, $m+2\equiv 2\pmod m, 17\equiv 2\pmod {15}$
If $b\equiv c\pmod{\phi(m)} ,$i.e., if $b=c+d\phi(m)$
$y^b=y^{c+d\phi(m)}=y^c\cdot(y^{\phi(m)})^d\equiv y^c \pmod m$ for $(y,m)=1$
Here $\phi(15)=\phi(3)\phi(5)=2\cdot4=8$
Observe that this condition is no way necessary as proved below.
If $y^b\equiv y^d\pmod m$ where $b\ge d$
$y^{b-d}\equiv1\pmod m\iff ord_my\mid (b-d)$ does not need to divide $\phi(m)$ unless $ord_my=\phi(m)$ where $y$ is a primitive root of $m$.
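A direct computational check of the original claim (1) (my own sketch; the bound on $a$ is arbitrary):

```python
from math import gcd

# x^a == x^(a mod 8) (mod 15) for all x coprime to 15, since phi(15) = 8.
for x in range(1, 15):
    if gcd(x, 15) == 1:
        assert all(pow(x, a, 15) == pow(x, a % 8, 15) for a in range(100))
print("verified for all x coprime to 15 and all a < 100")
```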
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/252167",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Element Argument Proofs - Set theory This is an exercise on my study guide for my discrete applications class.
Prove by element argument: A × (B ∩ C) = (A × B) ∩ (A × C)
Now I know that this is the distributive law, but I'm not sure if this proof would work in the exact same way as a union problem would, because I know how to solve that one. Here is my thinking thus far:
Proof: Suppose A, B, and C are sets.
* A × (B ∩ C) = (A × B) ∩ (A × C)
* Case 1 (a is a member of A): if a belongs to A, then by the definition of the cartesian product, a is also a member of A x B and A x C. By definition of intersection, a belongs to (A × B) ∩ (A × C).
* Case 2 (a is a member of B ∩ C): a is a member of both B and C by intersection. a is a member of (A × B) ∩ (A × C) by the definition of intersection.
* By definition of a subset, (A × B) ∩ (A × C) is a subset of A × (B ∩ C).
* Therefore A × (B ∩ C) = (A × B) ∩ (A × C).
Is that at least a little right?
Thanks.
| No, you're not doing it completely right: the Cartesian product produces elements that are pairs, with one coordinate from each set.
The definition of the cartesian product.
Def. $X\times Y = \{ (x,y) : x \in X\text{ and }y \in Y \}$.
PROOF.
$Z = A \times (B \cap C) = \{ (a,y) : a \in A\text{ and }y \in B \cap C \}$
$W = (A \times B) \cap (A \times C) = \{ (a,b) : a \in A\text{ and }b \in B \} \cap \{ (a,c) : a \in A\text{ and }c \in C \}$
For all $a \in A$ and $b \in B$:
Case 1. $b \in C$.
If $b \in C$ then $(a,b) \in Z$. Also $(a,b) \in W$.
Case 2. $b \notin C$.
If $b \notin C$, then $b$ is not in $B \cap C$, so $(a,b)$ is not in $Z$. Also $(a,b)$ is not in $A \times C$, so it's not in $W$.
The rest follows by the symmetry of intersection. $C \cap B$ is equivalent to $B \cap C$. Relabel $B$ as $C$, and vice versa. Apply case 1 and case 2.
QED.
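A concrete sanity check with small sets (the example sets are my own arbitrary choices):

```python
A, B, C = {1, 2}, {2, 3}, {3, 4}
lhs = {(a, y) for a in A for y in B & C}
rhs = {(a, b) for a in A for b in B} & {(a, c) for a in A for c in C}
print(lhs == rhs)  # True
```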
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/252258",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Definition of a tangent I've been involved in a discussion on the definition of a tangent and would appreciate a bit of help.
At my high school and my college, I was taught that a definition of a tangent is 'a line that intersects a given curve at two infinitesimally close points.' Aside from the possibility that the tangent may intersect the curve elsewhere, to me it was both intuitive and concise; but apparently I'd have more chance of locating a set of hen's teeth than finding a similar definition online...
Has anybody else encountered such definition, or may have an objection to it (or an opinion on it, for that matter)? Thanks in advance.
| Given a curve $y = f(x)$ in an $xy$-coordinate system, a tangent to the curve at the point $(a,f(a))$ is the straight line through $(a,f(a))$ with slope $m = f'(a)$, i.e. $y = f(a) + f'(a)(x-a)$.
I have never heard about the definition that you talk about. There are ways to "think" about what a tangent is. If you consider the definition of a derivative then it involves limits. And limits is where one would talk about stuff like things being "infinitesimally" close.
Note that in math there isn't much room for opinions. Either the definition is correct or it is not. However, we often invent ways to think about certain definitions that make it intuitive for us. However, one always has to be careful not to make the picture that you have in your head the definition.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/252327",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 2
} |
Proof: in $\mathbb{R}$, $((0,1),|\cdot|)$ is not compact. Let $(M,d)$ be a metric space, and $A\subset M$. By definition, $A$ is said to be compact if every open cover of $A$ contains a finite subcover.
What is wrong with saying that, in $\mathbb{R}$, if $I=(0,1)$, we can choose $G=\{(0,\frac{3}{4}), (\frac{1}{4}, 1)\}$, which satisfies $I \subset \bigcup_{U\in G} U$, but we can't extract a finite subcover, so $I$ is not compact? Is $G$ a finite subcover of $G$, so it is not a valid cover for proving this? I would take $\{(\frac{1}{n},1)\}_{n\in\mathbb{N}}$ in order to prove this; can we conclude that every open cover is necessarily an infinite union of open sets $\neq \emptyset$?
| Your counterexample (the open cover $\{(1/n,1)\}_{n \in \mathbb N}$) actually works: it has no finite subcover, therefore $(0,1)$ is not compact. Not every cover is infinite, though; again, your $G$ is a counterexample to that (it is a finite cover).
Note: you can take $G$ as a finite subcover of $G$. So, $G$ does not show that $(0,1)$ is not compact.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/252389",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Optimal Coding Scheme for given Weights I'm having trouble with this homework problem. Do I create the tree by beginning with each weight being a leaf? Then combining the lowest weighted leaves, and their parent becomes the sum of their weight?
I got 85 as my answer for (b) but I'm not sure if this is the correct process
Consider the weights: 10, 12, 13, 16, 17, 17.
(a) Construct an optimal coding scheme for the weights given by using a tree.
(b) What is the total weight of the tree you found in part (a)?
| Yes, you first combine $10+12=22$, then $13+16=29$, then $17+17=34$, then $22+29=51$, finally $51+34=85$ (thus your answer for b).
If we always represent the first choice with 0 and the second with 1, the respective code words are
$$000,001, 010, 011, 10, 11.$$
I'm not sure if part b isn't rather referring to the weighted code word length, that is $\frac{3\cdot 10+3\cdot 12+3\cdot 13+3\cdot 16+2\cdot 17+2\cdot 17}{10+12+13+16+17+17}$.
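A small sketch of the greedy combining, which reproduces both readings of part (b) at once (the use of Python's heapq is my own choice):

```python
import heapq

w = [10, 12, 13, 16, 17, 17]
root = sum(w)                                # 85, the root weight
heapq.heapify(w)
merge_cost = 0
while len(w) > 1:
    a, b = heapq.heappop(w), heapq.heappop(w)
    merge_cost += a + b                      # cost of each combination
    heapq.heappush(w, a + b)
print(root, merge_cost)  # 85 221; 221/85 is the weighted length above
```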
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/252460",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Showing that $W_1\subseteq W_1+W_2$ I found this question and answer on UCLA's website:
Let $W_1$ and $W_2$ be subspaces of a vector space $V$ .
Prove that $W_1 +W_2$ is a subspace of $V$ that contains both $W_1$ and $W_2$.
The answer given:
First, we want to show that $W_1 \subseteq W_1 +W_2$. Choose $x \in W_1$. Since
$W_2$ is a subspace, $0 \in W_2$ where $0$ is the zero vector of $V$. But $x = x + 0$ and $x \in W_1$. Thus, $x \in W_1 + W_2$ by definition. Ergo, $W_1 \subseteq W_1 + W_2$. We also must show that $W_2 \subseteq W_1 + W_2$, but
this result is completely analogous (see if you can formalize it).
My question:
Why is it enough to show that $x + 0 \in W_1 + W_2$, $0$ is just one element in $W_2$, why don't we have to show, for example, $x + y \in W_1 + W_2$?
| $W_1+W_2=\{w_1+w_2: w_1\in W_1,w_2\in W_2\}$. To show that an element belongs to this set, we just need to show that it can be written in the form $w_1+w_2$ for some $w_1\in W_1$ and some $w_2\in W_2$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/252537",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 1
} |
Want to show Quantifier elimination and completeness of this set of axioms...
Let $\Sigma_\infty$ be a set of axioms in the language $\{\sim\}$ (where $\sim$ is a binary relation
symbol) that states:
(i) $\sim$ is an equivalence relation;
(ii) every equivalence class is infinite;
(iii) there are infinitely many equivalence classes.
Show that $\Sigma_{\infty}$ admits QE and is complete. (It is given that it is also possible to use Vaught's test
to prove completeness.)
I think I have shown that $\Sigma_\infty$ admits QE, but am not sure
how to show completeness. There is a theorem, however, that states that if a set of sentences $\Sigma$ has a model and admits QE, and there exists an $L$-structure that can be embedded in every model of $\Sigma$, then $\Sigma$ is complete.
Thanks.
| According to the last sentence in your question, all you need is an $L$-structure that can be embedded into every model of $\Sigma_\infty$. In fact, $\Sigma_\infty$ has a "smallest" model, one that embeds into all other models of $\Sigma_\infty$. I think this should be enough of a hint to enable you to find the model in question --- just make it as small as the axioms of $\Sigma_\infty$ permit.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/252587",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 3,
"answer_id": 0
} |
Good software for linear/integer programming I never did any linear/integer programming so I am wondering the following two things
*
*What are some efficient free linear programming solvers?
*What are some efficient commercial linear programming solvers?
It would be nice to supply a dummy usage example with each proposed answer.
Also what if wish to solve a integer programming problem? What are the answers to the above two questions in this case?
I know that integer LP is a hard problem but there are some relaxing methods that are sometimes employed in order to obtain a solution to an integer programming problem. Are there any software packages implementing this kind of stuff?
| The Konrad-Zuse Institute in Berlin (ZIB), Germany provides a nice suite to solve all kinds of LP / ILP tasks. It includes:
* zimpl: a language to model mathematical programs
* SCIP: a mixed integer programming solver and constraint programming framework
* SoPlex: a linear programming solver
* and more
Best of all, it is free! And all implementations are reasonably fast.
The state of the art in the commercial sector is probably IBM's CPLEX Studio. This is an expensive piece of software, but IBM has an academic program where you get free licenses. However, it is a bit of a pain to apply. I used to work with the CPLEX package because it includes the nice modelling language AMPL. However, when the equivalent free zimpl came out, I switched to the more available ZIB package.
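Since the question asks for a dummy usage example, here is one; note that scipy's linprog stands in here as a free, widely available solver, and is not part of the ZIB or CPLEX suites named above:

```python
from scipy.optimize import linprog

# maximize x + 2y  subject to  x + y <= 4,  x <= 3,  x, y >= 0
res = linprog(c=[-1, -2], A_ub=[[1, 1], [1, 0]], b_ub=[4, 3],
              bounds=[(0, None), (0, None)])
print(res.x, -res.fun)  # [0. 4.] 8.0
```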
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/252654",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "18",
"answer_count": 5,
"answer_id": 0
} |
Noncontinuity and an induced equivalence relation Can someone give me an example of a map which is not continuous such that $f(a) = f(b)$ induces an equivalence relation $a \sim b$?
| Let $f:\{a,b,c\}\to\{0,1\}$. Let $f(a)=f(b)=0$ and $f(c)=1$. Let $x\sim y$ precisely if $f(x)=f(y)$. Then we have
\begin{align}
a & \sim a \\
a & \sim b \\
a & \not\sim c \\ \\
b & \sim a \\
b & \sim b \\
b & \not\sim c \\ \\
c & \not\sim a \\
c & \not\sim b \\
c & \sim c
\end{align}
This is an equivalence relation on the set $\{a,b,c\}$ with two equivalence classes: $\{a,b\}$ and $\{c\}$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/252719",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Quotient of Gamma functions I am trying to find a clever way to compute the quotient of two gamma functions whose inputs differ by some integer. In other words, for some real value $x$ and an integer $n < x$, I want to find a way to compute
$$ \frac{\Gamma(x)}{\Gamma(x-n)} $$
For $n=1$, the quotient is simply $(x-1)$, since by definition
$$ \Gamma(x) = (x - 1)\Gamma(x-1) $$
For $n=2$, it is also simple:
$$ \frac{\Gamma(x)}{\Gamma(x-2)} =
\frac{(x-1)\Gamma(x-1)}{\Gamma(x-2)} =
(x-1)(x-2)$$
If we continue this pattern out to some arbitrary $n$, we get
$$ \frac{\Gamma(x)}{\Gamma(x-n)} = \prod_i^n (x-i)$$
Obviously I am floundering a bit here. Can anyone help me find an efficient way to compute this quotient besides directly computing the two gamma functions and dividing?
I am also okay if an efficient computation can be found in log space. Currently I am using a simple approximation of the log gamma function and taking the difference. This was necessary because the gamma function gets too big to store in any primitive data type for even smallish values of $x$.
| I think you mean
$$ \frac{\Gamma(x)}{\Gamma(x-n)} = \prod_{i=1}^{n} (x - i) $$
Of course this might not be very nice if $n$ is very large, in which case you might want to first compute the $\Gamma$ values and divide; but then (unless $x$ is very close to one of the integers $i$) the result will also be enormous. If you're looking for a numerical approximation rather than an exact value, you can use Stirling's approximation
or its variants.
EDIT: Note also that if your $x$ or $x-n$ might be negative, you may find the reflection formula useful:
$$ \Gamma(z) \Gamma(1-z) = \frac{\pi}{\sin(\pi z)} $$
For $x > n$,
$$ \eqalign{\ln\left(\frac{\Gamma(x)}{\Gamma(x-n)}\right) &= \sum_{i=1}^n \ln(x-i) = n \ln x + \sum_{i=1}^n \ln\left(1 - \frac{i}{x}\right)\cr
&= n \ln x + \sum_{i=1}^n \left( - \frac{i}{x} - \frac{i^2}{2x^2} - \frac{i^3}{3x^3} - \frac{i^4}{4x^4} - \frac{i^5}{5x^5} - \frac{i^6}{6 x^6}\ldots \right)\cr
&= n \ln x - \frac{n(n+1)}{2x} - \frac{n(n+1)(2n+1)}{12 x^2} - \frac{n^2 (n+1)^2}{12 x^3} \cr
& -{\frac {n \left( n+
1 \right) \left( 2\,n+1 \right) \left( 3\,{n}^{2}+3\,n-1 \right) }{120 {
x}^{4}}}-\frac { \left( n+1 \right) ^{2}{n}^{2}
\left( 2\,{n}^{2}+2\,n-1 \right) }{60 {x}^{5}}
\ldots} $$
This provides excellent approximations as $x \to \infty$ for fixed $n$ (or $n$ growing much more slowly than $x$).
On the other hand, for $n = (1-t)x$ with $0 < t < 1$ fixed, as $x \to \infty$ we have
$$ \eqalign{\ln\left(\frac{\Gamma(x)}{\Gamma(x-n)}\right) &=(1-t) x \ln(x) - (t \ln(t) - t + 1) x +\frac{\ln(t)}{2}\cr &+{
\frac {t-1}{12tx}}+{\frac {1-{t}^{3}}{360{t}^{3}{x}^{3}}
}+{\frac {{t}^{5}-1}{1260{t}^{5}{x}^{5}}}+\frac{1-t^7}{1680 t^7 x^7} \ldots
}$$
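Since the question mentions working in log space, here is a quick numerical check of the basic identity (my own sketch; the values of $x$ and $n$ are arbitrary, and scipy is assumed available):

```python
import numpy as np
from scipy.special import gammaln

# log Gamma(x) - log Gamma(x - n) = sum_{i=1}^{n} log(x - i), with no overflow.
x, n = 50.3, 7
lhs = gammaln(x) - gammaln(x - n)
rhs = np.sum(np.log(x - np.arange(1, n + 1)))
print(lhs, rhs)  # agree to machine precision
```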
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/252786",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 0
} |
Why is the Frobenius norm of a matrix greater than or equal to the spectral norm?
How can one prove that $ \|A\|_2 \le \|A\|_F $ without using $ \|A\|_2^2 := \lambda_{\max}(A^TA) $?
It makes sense that the $2$-norm would be less than or equal to the Frobenius norm but I don't know how to prove it. I do know:
$$\|A\|_2 = \max_{\|x\|_2 = 1} {\|Ax\|_2}$$
and I know I can define the Frobenius norm to be:
$$\|A\|_F^2 = \sum_{j=1}^n {\|Ae_j\|_2^2}$$
but I don't see how this could help. I don't know how else to compare the two norms though.
| In fact, the proof from $\left\| \mathbf{A}\right\|_2 =\max_{\left\| \mathbf{x}\right\|_2=1} \left\| \mathbf{Ax} \right\|_2$ to $\left\| \mathbf{A}\right\|_2 = \sqrt{\lambda_{\max}(\mathbf{A}^H \mathbf{A})}$ is straightforward. We can first simply prove that when $\mathbf{P}$ is Hermitian
$$
\lambda_{\max} = \max_{\| \mathbf{x} \|_2=1} \mathbf{x}^H \mathbf{Px}.
$$
That's because when $\mathbf{P}$ is Hermitian, there exists a unitary matrix $\mathbf{U}$ that diagonalizes $\mathbf{P}$ as $\mathbf{U}^H \mathbf{PU}=\mathbf{D}$ (so $\mathbf{P}=\mathbf{UDU}^H$), where $\mathbf{D}$ is a diagonal matrix with the eigenvalues of $\mathbf{P}$ on the diagonal, and the columns of $\mathbf{U}$ are the corresponding eigenvectors. Let $\mathbf{y}=\mathbf{U}^H \mathbf{x}$ and substitute $\mathbf{x} = \mathbf{Uy}$ into the optimization problem; we obtain
$$
\max_{\| \mathbf{x} \|_2=1} \mathbf{x}^H \mathbf{Px} = \max_{\| \mathbf{y} \|_2=1} \mathbf{y}^H \mathbf{Dy} = \max_{\| \mathbf{y} \|_2=1} \sum_{i=1}^n \lambda_i |y_i|^2 \le \lambda_{\max} \max_{\| \mathbf{y} \|_2=1} \sum_{i=1}^n |y_i|^2 = \lambda_{\max}
$$
Thus, just by choosing $\mathbf{x}$ as the corresponding eigenvector to the eigenvalue $\lambda_{\max}$, $\max_{\| \mathbf{x} \|_2=1} \mathbf{x}^H \mathbf{Px} = \lambda_{\max}$. This proves $\left\| \mathbf{A}\right\|_2 = \sqrt{\lambda_{\max}(\mathbf{A}^H \mathbf{A})}$.
And then, because the $n\times n$ matrix $\mathbf{A}^H \mathbf{A}$ is positive semidefinite, all of its eigenvalues are nonnegative. Assume $\text{rank}~\mathbf{A}^H \mathbf{A}=r$; we can put the eigenvalues into decreasing order:
$$
\lambda_1 \geq \lambda_2 \geq \cdots \geq \lambda_r > \lambda_{r+1} = \cdots = \lambda_n = 0.
$$
Because for all $\mathbf{X}\in \mathbb{C}^{n\times n}$,
$$
\text{trace}~\mathbf{X} = \sum\limits_{i=1}^{n} \lambda_i,
$$
where $\lambda_i$, $i=1,2,\ldots,n$ are eigenvalues of $\mathbf{X}$; and besides, it's easy to verify
$$
\left\| \mathbf{A}\right\|_F = \sqrt{\text{trace}~ \mathbf{A}^H \mathbf{A}}.
$$
Thus, through
$$
\sqrt{\lambda_1} \leq \sqrt{\sum_{i=1}^{n} \lambda_i} \leq \sqrt{r \cdot \lambda_1}
$$
we have
$$
\left\| \mathbf{A}\right\|_2 \leq \left\| \mathbf{A}\right\|_F \leq \sqrt{r} \left\| \mathbf{A}\right\|_2
$$
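A quick numerical sanity check of the two inequalities (my own sketch on a random matrix):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 7))
two, fro = np.linalg.norm(A, 2), np.linalg.norm(A, 'fro')
r = np.linalg.matrix_rank(A)
print(two <= fro <= np.sqrt(r) * two)  # True
```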
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/252819",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "29",
"answer_count": 8,
"answer_id": 2
} |
Can floor functions have inverses? R to R
$f(x) = \lfloor \frac{x-2}{2} \rfloor $
If $T = \{2\}$, find $f^{-1}(T)$
Is $f^{-1}(T)$ the inverse or the "image", and how do you know that we're talking about the image and not the inverse?
There shouldn't be any inverse since the function is not one-to-one, nor is it onto since it's $\mathbb{R}\to\mathbb{R}$ and not $\mathbb{R}\to\mathbb{Z}$.
| Note the following calculation:
$[\frac{x-2}{2}]=2\Longrightarrow 2\leq\frac{x-2}{2}<3\Longrightarrow 4\leq x-2<6\Longrightarrow 6\leq x<8$
so the set $f^{-1}(\{2\})$ is equal to $[6,8)$.
Now,
$\forall y\in\mathbb{Z}\; :\; f^{-1}(\{y\})=\{x\in\mathbb{R}\mid[\frac{x-2}{2}]=y\}=\{x\in\mathbb{R}\mid x\in[2y+2,2y+4)\}=[2y+2,2y+4)$
Finally,
$\forall T\subset\mathbb{R}\; :\; f^{-1}(T)=\bigcup_{y\in T\cap\mathbb{Z}}[2y+2,2y+4)$
and if $T\cap\mathbb{Z}=\emptyset$ then $f^{-1}(T)=\emptyset$ too.
As for the existence of inverse functions: if a function is one-to-one it has a left inverse, and if it is onto it has a right inverse; for both to exist it should be bijective. But we can always define the preimage map, which sends any subset of the range back to the set of elements whose value under $f$ lies in it, like what we have done above.
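A quick check of the computed preimage (my own sketch; the sample points are arbitrary):

```python
import math

f = lambda x: math.floor((x - 2) / 2)
print([f(x) for x in (5.999, 6, 7, 7.999, 8)])  # [1, 2, 2, 2, 3]: f^{-1}({2}) = [6, 8)
```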
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/252875",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 0
} |
Number of ordered triplets $(x,y,z)$, $−10\leq x,y,z\leq 10$, $x^3+y^3+z^3=3xyz$ Let $x, y$ and $z$ be integers such that $−10\leq x,y,z\leq 10$. How many ordered triplets $(x,y,z)$ satisfy $x^3+y^3+z^3=3xyz$?
x,y,z are allowed to be equal.
When I tried, I got that any one of $x,y,z$ could be $0$. I am not sure this is correct, and I got $21$ as the answer, which I am sure is not.
| $\textbf{Hint}$: Note that $$x^3+y^3+z^3-3xyz=(x+y+z)(x^2+y^2+z^2-xy-yz-zx)=\frac{1}{2}(x+y+z)[(x-y)^2+(y-z)^2+(z-x)^2]=0$$ if and only if either $x+y+z=0$ or $x=y,y=z$ and $z=x$. Now count the number of ordered triples for the first case using generating functions.
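If you want to verify the final count by brute force before doing the casework, here is a sketch (the breakdown in the comment is my own computation and worth double-checking):

```python
count = sum(1 for x in range(-10, 11) for y in range(-10, 11)
            for z in range(-10, 11) if x**3 + y**3 + z**3 == 3 * x * y * z)
print(count)  # 351 = 331 (x + y + z = 0) + 21 (x = y = z) - 1 (overlap at 0)
```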
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/252976",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
} |
Subspace Preserved Under Addition of Elements? I'm trying to understand how to complete this proof about subspaces. I understand the basics about the definition of a subspace (i.e. the zero vector must be in it, and addition and scalar multiplication must be closed within the subspace). But I'm confused as to how to show that the addition of two elements from completely different sets somehow is preserved under the same subspace.
I'm pretty sure the zero vector exists because the zero vector is within C and D, but I'm unsure about the other two conditions. The complete problem is listed below.
Problem: Let W be a vector space and C,D be two subspaces of W. Prove or disprove that { a + b | a $\in$ C, b $\in$ D} is also a subspace of W.
Any help would be appreciated.
| You have to show (among other things) that $Z=\{{\,a+b:a\in C,b\in D\,\}}$ is closed under addition. So, let $x$ and $y$ be in $Z$; you have to show $x+y$ is in $Z$. So, what is $x$? Well, it's in $Z$, so $x=a+b$ for some $a$ in $C$ and some $b$ in $D$. What's $y$? Well, it's also in $Z$, so $y=r+s$ for some $r$ in $C$ and some $s$ in $D$. Now what's $x+y$? Can you take it from here?
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/253026",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
How do you find the smallest integer? $$\begin{align}
(x-1) \;\text{mod}\; 11 &= 3x\; \text{mod}\; 11\\
11&\lvert(3x-(x-1)) \\
11&\lvert2x+1\\
x &= 5?\\
\end{align}
$$
$$
\begin{align}
(a-b)\; \text{mod}\; 5 &= (a+b)\;\text{mod}\;5\\
5&\lvert a+b-a+b\\
5&\lvert2b\\
b &\equiv 0 \pmod 5\\
a &= \text{any integer}
\end{align}
$$
I don't know how to solve this type of problem. Can you tell me what I have to do generally step-by-step?
| There are several ways; for example, look at linear Diophantine equations, but now I will write one out:
$ax\equiv b\;(mod\;m)$ is solvable $\Longleftrightarrow (a,m)=d\mid b$, and its solutions are all the numbers in the congruence classes modulo $m$ of the form $(\frac{a}{d})^{*}(\frac{b}{d})+k\frac{m}{d}$, where $0\leq k\leq d-1$ and $(\frac{a}{d})^{*}$ is the multiplicative inverse of $\frac{a}{d}$ mod $\frac{m}{d}$.
When $(a,m)=d\mid b$, we can say $(\frac{a}{d},\frac{m}{d})=1$, and from Euler's theorem $(\frac{a}{d})^{\phi(\frac{m}{d})}\equiv 1\;(mod\;\frac{m}{d})\Longrightarrow (\frac{a}{d})^{\phi(\frac{m}{d})-1}\cdot\frac{a}{d}\equiv 1\;(mod\;\frac{m}{d})$, so $(\frac{a}{d})^{\phi(\frac{m}{d})-1}$ is a multiplicative inverse of $\frac{a}{d}$ mod $\frac{m}{d}$. By finding this inverse and substituting into $(\frac{a}{d})^{*}(\frac{b}{d})+k\frac{m}{d}$, the numbers in the congruence classes modulo $m$ that are found are the answers you need.
For example let calculate your first question;
$x-1\equiv 3x\;(mod\;11)\Longleftrightarrow 2x\equiv -1\;(mod\;11)$
because $(2,11)=1|-1$ so it has answer. Here $d$ is $1$.
First calculate the multiplicative inverse of $2$ mod $11$ (note that $d=1$!):
$2^{*}=2^{\phi(11)-1}=2^{10-1}=2^{9}=512\equiv 6\;(mod\;11)$
so $2^{*}=6$. Now, because $d-1=0$, we only put $k=0$; then the answer set is
$\{x\in\mathbb{Z}|[x]_{11}=[6(-1)+(0)11]_{11}\}=$
$\{x\in\mathbb{Z}|[x]_{11}=[-6]_{11}\}=$
$\{x\in\mathbb{Z}|[x]_{11}=[5]_{11}\}=$
$\{11n+5|n\in\mathbb{Z}\}$ .
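The same computation done mechanically (a sketch; pow with exponent $-1$ needs Python 3.8+):

```python
inv2 = pow(2, -1, 11)     # multiplicative inverse of 2 mod 11
x = (inv2 * -1) % 11      # solve 2x = -1 (mod 11)
print(inv2, x)            # 6 5
```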
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/253100",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
} |
Basic definition of Inverse Limit in sheaf theory / schemes I read the book "Algebraic Geometry" by U. Görtz and whenever limits are involved I struggle for an understanding. The application of limits is mostly very basic, though; but I'm new to the concept of limits.
My example (page 60 in the book): Let $A$ be an integral domain. The structure sheaf $O_X$ on $X = \text{Spec}A$ is given by $O_X(D(f)) = A_f$ ($f\in A$) and for any $U\subseteq X$ by
\begin{align}
O_X(U) &= \varprojlim_{D(f)\subseteq U} O_X(D(f)) \\
&:= \{ (s_{D(f)})_{D(f)\subseteq U} \in \prod_{D(f)\subseteq U} O_X(D(f)) \mid \text{for all } D(g) \subseteq D(f) \subseteq U: s_{D(f)}\big|_{D(g)} = s_{D(g)}\} \\
&= \bigcap_{D(f)\subseteq U} A_f.
\end{align}
I simply don't understand the last equality: in my naive understanding the elements of the last set are "fractions", while the elements of the inverse limit are "families of fractions".
Any hint is appreciated.
| My personal advice is to study a bit of category theory: it will let you understand all this stuff much more clearly. In fact you can easily see that the first equality is not a definition, but a way to express a limit of an arbitrary presheaf, while the second is an isomorphism, not exactly an equality, given by the universal property defining limits. Concretely, since $A$ is a domain, every localization $A_f$ sits inside the fraction field $\operatorname{Frac}(A)$ and all the restriction maps are inclusions there, so a compatible family is determined by the single element of $\operatorname{Frac}(A)$ it defines, and that element must lie in every $A_f$ with $D(f)\subseteq U$; hence the intersection. I started with Hartshorne, but without category theory as a background it's just like wandering in the dark without even a candle with you.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/253158",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 2,
"answer_id": 0
} |
A Fourier transform using contour integral I try to evaluate $$\int_{-\infty}^\infty \frac{\sin^2 x}{x^2}e^{itx}\,dx$$ ($t$ real) using contour integrals, but encounter some difficulty. Perhaps someone can provide a hint. (I do not want to use convolution.)
| An idea, defining
$$f(z):=\frac{e^{itz}\sin^2z}{z^2}\,\,,\,\,C_R:=[-R,-\epsilon]\cup(-\gamma_\epsilon)\cup[\epsilon,R]\cup\gamma_R$$
with
$$\gamma_k:=\{z\in\Bbb C\;;\;|z|=k\,,\,\arg z\geq 0\}=\{z\in\Bbb C\;;\;z=ke^{i\theta}\,\,,\,0\leq\theta\leq\pi\}$$
in the positive direction (check the minus sign in $\,\gamma_\epsilon\,$ above!).
This works assuming $\,0<t\in\Bbb R\,$, together with Jordan's lemma and its corollary.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/253230",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Applying the Thue-Siegel Theorem Let $p(n)$ be the greatest prime divisor of $n$. Chowla proved here that $p(n^2+1) > C \ln \ln n $ for some $C$ and all $n > 1$.
At the beginning of the paper, he mentions briefly that the weaker result $\lim_{n \to \infty} p(n^2+1) = \infty$ can be proved by means of the Thue-Siegel theorem (note: it was written before Roth considerably improved the theorem).
*
*Can someone elaborate on this? I was able to reduce the problem to showing that the negative Pell equation $x^2-Dy^2 = -1$ (where $D$ is positive and squarefree) has a finite number of solutions such that $p(y)$ is bounded by some $C$.
*Can someone provide examples of applications of Thue-Siegel in Diophantine equations? In the Wikipedia article it is mentioned that "Thue realised that an exponent less than d would have applications to the solution of Diophantine equations", but I am not sure I am realising it...
| Suppose that the prime factors of $n^2+1$ are all bounded by $N$ for infinitely many $n$. Then infinitely many integers $n^2 + 1$ can be written in the form $D y^3$ for one of finitely many $D$. Explicitly, the set of $D$ can be taken to be the finitely many integers whose prime divisors are all less than $N$, and whose exponents are at most $2$. For example, if $N=3$, then $D \in \{1,2,4,3,6,12,9,18,36\}$. Letting $x = n$, it follows that for at least one of those $D$, there are infinitely many solutions to the equation
$$x^2 - D y^3 = -1.$$
From your original post, I'm guessing you actually know this argument, except you converted $n^2 + 1$ to $D y^2$ rather than $D y^3$ (of course, one can also use $D y^k$ for any fixed $k$, at the cost of increasing the number of possible $D$).
It turns out, however, that the equation $x^2 - D y^3 = -1$ only has finitely many solutions, and that this is a well known consequence of Siegel's theorem (1929), which says that any curve of genus at least one has only finitely many integral points. Siegel's proof does indeed use the Thue-Siegel method, although the proof is quite complicated. It is quite possible that this is the argument that Chowla had in mind; it is certainly consistent, since Chowla's paper is from 1934 > 1929.
There are some more direct applications of Thue-Siegel to diophantine equations, in particular, to the so called Thue equations, which look like $F(x,y) = k$ for some irreducible homogeneous polynomial $F$ of degree at least three. A typical example would be:
$$x^n - D y^n = -1,$$
for $n \ge 3$. Here the point is that the rational approximations $x/y$ to $\sqrt[n]{D}$ are of the order $1/y^n$, which contradict Thue's bounds as long as $n \ge 3$. Equations of this kind are what are being referred to in the wikipedia article.
Edit Glancing at that paper of Chowla in your comment, one can see the more elementary approach. Let $K$ denote the field $\mathbf{Q}(i)$, and let $\mathcal{O} = \mathbf{Z}[i]$ denote the ring of integers of $K$. Assume, as above, that there exists an infinite set $\Sigma$ of integers such that $n^2+1$ has prime factors less than $N$. For $n \in \Sigma$, write $A = n+i$ and $B = n-i$; they have small factors in $\mathcal{O}$ (which is a PID, although the argument can be made to work more generally using the class number). As above, one may write $A =(a + bi)(x + i y)^3$ where $a + bi$ comes from a finite list of elements of $\mathcal{O}$ (explicitly, the elements whose prime factorization in $\mathcal{O}$ only contains primes dividing $N$ with exponent at most $2$). Since $\Sigma$ is infinite, there are thus infinitely many solutions for some fixed $a + b i$, or equivalently, infinitely many solutions to the equations (taking the conjugate):
$$n + i = (a + b i)(x + i y)^3, \qquad n - i = (a - b i)(x - i y)^3,$$
Take the difference of these equations and divide by $2i$. We end up with infinitely many solutions to the equation:
$$b x^3 + 3 a x^2 y - 3 b x y^2 - a y^3 = 1.$$
This is now homogeneous, so one can apply Thue's theorem, rather than Siegel's Theorem. (Explicitly, the fractions $x/y$ are producing rational approximations to the root of $b t^3 + 3 a t^2 - 3 b t - a = 0$ which contradict the Thue bounds.)
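A symbolic check of that displayed cubic form (my own sketch using sympy):

```python
import sympy as sp

# ((a + bi)(x + iy)^3 - (a - bi)(x - iy)^3) / (2i) expands to
# b*x**3 + 3*a*x**2*y - 3*b*x*y**2 - a*y**3.
a, b, x, y = sp.symbols('a b x y', real=True)
lhs = ((a + sp.I*b)*(x + sp.I*y)**3 - (a - sp.I*b)*(x - sp.I*y)**3) / (2*sp.I)
print(sp.expand(lhs))
```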
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/253289",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9",
"answer_count": 2,
"answer_id": 0
} |
Why is the set of natural numbers not considered a non-Abelian group? I don't understand why the set of natural numbers constitutes a commutative monoid with addition, but is not considered an Abelian group.
| Addition on the natural numbers IS commutative ...
...but the natural numbers under addition do not form a group.
Why not a group?
* If you define $\mathbb{N} = \{ n\in \mathbb{Z} \mid n\ge 1\}$, as we typically do, then it fails to be a group because it does not contain $0$, the additive identity. Indeed, $\mathbb{N} = \{ n\in \mathbb{Z} \mid n\ge 1\}$ fails to be a monoid if the additive identity $0\notin \mathbb{N}$. So I will assume you are defining $\mathbb{N} = \mathbb{Z}^{+} = \{x \in \mathbb{Z} \mid x \ge 0\}$, which is indeed a commutative monoid: addition is both associative and commutative on $\mathbb{N}$, and $0 \in \mathbb{N}$ if $\mathbb{N} = \mathbb{Z}^{+} = \{x \in \mathbb{Z} \mid x \ge 0\}$.
* There is no additive inverse for any $n\in \mathbb{N}, n \ge 1$. For example, there is no element $x \in \mathbb{N}$ such that $3 + x = 0.$ Hence the natural numbers cannot be a GROUP.
A monoid (associative, with identity) is a group if it satisfies the added requirement that the inverse of each of its elements also lies in the monoid.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/253338",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 4,
"answer_id": 3
} |
If $P \leq G$, $Q\leq G$, are $P\cap Q$ and $P\cup Q$ subgroups of $G$?
$P$ and $Q$ are subgroups of a group $G$. How can we prove that $P\cap Q$ is a subgroup of $G$? Is $P \cup Q$ a subgroup of $G$?
Reference: Fraleigh p. 59 Question 5.54 in A First Course in Abstract Algebra.
|
$P$ and $Q$ are subgroups of a group $G$. Prove that $P \cap Q$ is a subgroup.
Hint 1: You know that $P$ and $Q$ are subgroups of $G$. That means they each contain the identity element, say $e$, of $G$. So what can you conclude about $P\cap Q$ if $e \in P$ and $e \in Q$? (Just unpack what that means for their intersection.)
Hint 2: You know that $P, Q$ are subgroups of $G$. So they are both closed under the group operation of $G$. If $a, b \in P\cap Q$, then $a, b \in P$ and $a, b \in Q$. So what can you conclude about $ab$ with respect to $P\cap Q$? This is about proving closure under the group operation of $G$.
Hint 3: You can use similar arguments to show that for any element $c \in P\cap Q$, $c^{-1} \in P\cap Q$. That will establish that $P\cap Q$ is closed under inverses.
Once you've completed each step above, what can you conclude about $P\cap Q$ in $G$?
$P$ and $Q$ are subgroups of a group $G$. Is $P\cup Q$ a subgroup of $G$?
Here, you need to provide only one counterexample to show that it is not necessarily the case that $P\cup Q$ is a subgroup of $G$.
* Suppose, for example, that your group is $G = \mathbb{Z}$ under addition. Then we know that $P = 2\mathbb{Z} \le \mathbb{Z}$ under addition (all even integers), and $Q = 5\mathbb{Z} \le \mathbb{Z}$ under addition (all integer multiples of $5$). So $P \cup Q$ contains $2\in P$ and $5 \in Q$. But: is $2 + 5 = 7 \in P\cup Q$?
So what does this tell you regarding whether or not $P \cup Q$ is a subgroup of $\mathbb{Z}$?
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/253394",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 5,
"answer_id": 3
} |
What's a proof that a set of disjoint cycles is a bijection? Consider a function $f : D \to D$ (where $D$ is a finite set) so that for every $d \in D$, there is an integer $n$ so that $f^n(d) = d$, where $f^n$ denotes $f$ composed with itself $n$ times.
* Prove that $f$ is a bijection.
* Prove that if $f : D \to D$ (where $D$ is a finite set) is a bijection, then for every $d \in D$, there is an integer $n$ so that $f^n(d) = d$.
I know this proposition is true (because a bijection over a finite set is just a permutation which is always representable using cycles), I'm just having a hard time formulating a proof for it that a first-year algebra student could understand.
| For (1), it's easier to prove that if $f$ is not injective then neither is $f^{n}$, and if $f$ is not surjective then neither is $f^n$, for any $n \ge 1$. Taking $n$ to be the least common multiple of the exponents attached to the (finitely many) elements of $D$, you know that $f^n$ is the identity, hence bijective, and therefore so must $f$ be.
For (2), note that the bijections on a set form a group under composition, and that if $D$ is finite then this group is finite, so all its elements have finite order.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/253441",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 5,
"answer_id": 4
} |
Show that $\alpha_1u+\alpha_2v+\alpha_3w=0\Rightarrow\alpha_1=\alpha_2=\alpha_3=0$ Let $u, v, w$ be three points in $\mathbb R^3$ not lying in any plane containing the origin. Would you help me to prove or disprove: $\alpha_1u+\alpha_2v+\alpha_3w=0\Rightarrow\alpha_1=\alpha_2=\alpha_3=0$? I think this is wrong, since otherwise the rank of the coefficient matrix would have to be 3. But for $u_1=(1,1,0)$, $u_2=(1,2,0)$, $u_3=(1,3,0)$, (rank of the corresponding coefficient matrix) $\neq 3$.
Am I right?
| With $u_1=(1,1,0)$, $u_2=(1,2,0)$, $u_3=(1,3,0)$
Let $A = \begin{pmatrix}
u_1 \\
u_2 \\
u_3
\end{pmatrix} = \begin{pmatrix}1 & 1 & 0 \\ 1 & 2 & 0 \\ 1 & 3 & 0 \end{pmatrix}$
we have $\det A = 0$ $\implies$ $u_1, u_2, u_3$ are linearly dependent, i.e. they lie in the plane $z=0$ through the origin, so they do not satisfy the hypothesis $\implies$ you're wrong!
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/253494",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 3,
"answer_id": 2
} |
Convolution Laplace transform
Find the inverse Laplace transform of the giveb function by using the
convolution theorem.
$$F(s) = \frac{s}{(s+1)(s^2+4)}$$
If I use partial fractions I get: $$\frac{s+4}{5(s^2+4)} - \frac{1}{5(s+1)}$$
which gives me the inverse Laplace transforms:
$$\frac{1}{5}\cos2t + \frac{2}{5}\sin2t -\frac{1}{5} e^{-t}$$
But the answer is:
$$f(t) = \int^t_0 e^{-(t -\tau)}\cos(2\tau) d\tau$$
How did they get that?
| Using the fact about the Laplace transform $L$ that
$$ L(f*g)=L(f)L(g)=F(s)G(s)\implies (f*g)(t)=L^{-1}(F(s)G(s)) .$$
In our case, given $ H(s)=\frac{1}{(s+1)}\frac{s}{(s^2+4)}$
$$F(s)=\frac{1}{s+1}\implies f(t)=e^{-t},\quad G(s)=\frac{s}{s^2+4}\implies g(t)=\cos(2t).$$
Now, you use the convolution as
$$ h(t) = \int_{0}^{t} e^{-(t-\tau)}\cos(2\tau) d\tau . $$
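If it helps, the convolution integral can be evaluated symbolically, and it reproduces the corrected partial fraction result (a sketch using sympy, which is my own choice of tool):

```python
import sympy as sp

t, tau = sp.symbols('t tau', positive=True)
conv = sp.integrate(sp.exp(-(t - tau)) * sp.cos(2 * tau), (tau, 0, t))
print(sp.simplify(conv))  # cos(2t)/5 + 2*sin(2t)/5 - exp(-t)/5
```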
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/253613",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
How to verify this limit? Kindly asking for any hints about showing:
$$\lim_{n\to\infty}\int_0^1\frac{dx}{(1+x/n)^n}=1-\exp(-1)$$
Thank you very much, indeed!
| HINT: Just evaluate the integral. For $n>1$ you have
$$\int_0^1\frac{dx}{(1+x/n)^n}=\int_0^1\left(1+\frac{x}n\right)^{-n}dx=\left[\frac{n}{1-n}\left(1+\frac{x}n\right)^{-n+1}\right]_0^1\;;$$
evaluating that leaves you with a limit that involves pieces that ought to be pretty familiar.
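A numerical sketch of the convergence (the values of $n$ are arbitrary choices; scipy is assumed available):

```python
import numpy as np
from scipy.integrate import quad

for n in (10, 100, 1000):
    print(n, quad(lambda x: (1 + x / n)**(-n), 0, 1)[0])
print(1 - np.exp(-1))  # ~ 0.6321, the limit
```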
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/253671",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 1
} |
Is one structure elementarily equivalent to its elementary extension? Let $\mathfrak A,\mathfrak A^*$ be $\mathcal L$-structures and $\mathfrak A \preceq \mathfrak A^*$. That implies that for every $n$-ary formula $\varphi(\bar{v})$ in $\mathcal L$ and every $\bar{a} \in \mathfrak A^n$,
$$\models_{\mathfrak A}\varphi[\bar{a}] \iff \models_{\mathfrak A^*}\varphi[\bar{a}]$$
Therefore, for every $\mathcal L$-sentence $\phi$,
$$\models_{\mathfrak A}\phi \iff \models_{\mathfrak A^*}\phi$$
, which implies $\mathfrak A \equiv \mathfrak A^*$.
But I haven't found this result in textbook, so I'm not sure.
| (I realise that it was answered in the comments, but I'm posting the answer so as to keep the question from staying in the unanswered pool.)
This is, of course, true: an $\mathcal L$-sentence without parameters is an $\mathcal L$-sentence with parameters that happens not to use any parameters, so elementary extension is a stronger condition. :)
To put it differently, $M\preceq N$ is equivalent to saying that $M$ is a substructure of $N$ and that $(M,m)_{m\in M}\equiv (N,m)_{m\in M}$, which is certainly stronger than mere $M\equiv N$ (to see that the converse does not hold, consider, for instance, $M=(2{\bf Z},+)$, $N=({\bf Z},+)$ -- $M$ is a substructure of $N$ and is e.e. (even isomorphic!) to it, but is still not an elementary substructure).
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/253735",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
The pebble sequence Suppose we have $n\cdot(n+1)/2$ stones grouped into piles. At each step we pick up 1 stone from each pile and form a new pile from the stones we picked up. Show that after finitely many steps we arrive at the following piles: $1, 2, \ldots, n$ stones.
Example: $n = 3$
Suppose we have 2 piles of 3 stones each.
$$\{3,3\} \to \{2,2,2\} \to \{1,1,1,3\} \to \{2,4\} \to \{1,3,2\}$$
| This was originally proved by Jørgen Brandt in Cycles of Partitions, Proceedings of the American Mathematical Society, Vol. 85, No. 3 (Jul., 1982), pp. 483-486, which is freely available here. The proof of this result covers the first page and a half and is pretty terse.
First note that there are only finitely many possible sets of pile sizes, so at some point a position must repeat a previous position. Say that $P_{m+p}=P_m$ for some $p>0$, where $P_n$ is the $n$-th position (set of pile sizes) in the game. It’s not hard to see that the game will then cycle through the positions $P_{m+k}$, $k=0,\dots,p-1$, repeatedly. The proof consists in showing that when there are $\frac12n(n+1)$ pebbles, the only possible cycle is the one of length one from $\{1,2,\dots,n\}$ to $\{1,2,\dots,n\}$.
Brian Hopkins, 30 Years of Bulgarian Solitaire, The College Mathematics Journal, Vol. 43, No. 2, March 2012, pp. 135-140, has references to other published proofs; one appears to be to a less accessible version of the paper Karatsuba Solitaire (Therese A. Hart, Gabriel Khan, Mizan R. Khan) that MJD found at arXiv.org.
Added: And having now read the Hart, Khan, & Khan paper, I agree with MJD: the argument is quite simple, and it’s presented very well.
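The dynamics are easy to experiment with. Here is a minimal Python sketch (an added illustration, not from any of the cited papers; the function names are mine) that simulates the move and counts the steps until the staircase partition $\{1,2,\dots,n\}$ appears:

```python
from itertools import count

def step(piles):
    """One Bulgarian solitaire move: remove 1 pebble from each pile,
    then stack the removed pebbles as a single new pile."""
    new = [p - 1 for p in piles if p > 1]  # shrunken piles; empty piles vanish
    new.append(len(piles))                 # new pile: one pebble per old pile
    return sorted(new)

def steps_to_staircase(piles, n):
    """Iterate the move until the partition {1, 2, ..., n} is reached."""
    target = list(range(1, n + 1))
    for steps in count():
        if piles == target:
            return steps
        piles = step(piles)

# The example from the question: n = 3, starting from two piles of 3.
print(steps_to_staircase([3, 3], 3))  # prints 4, matching the hand computation
```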
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/253860",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9",
"answer_count": 2,
"answer_id": 0
} |
Counterexample to Fubini? I am trying to come up with a measurable function on $[0,1]^2$ which is not integrable, but such that the iterated integrals are defined and unequal.
Any help would be appreciated.
| $$
\int_0^1\int_0^1 \frac{x^2-y^2}{(x^2+y^2)^2} \,dy\,dx \ne \int_0^1\int_0^1 \frac{x^2-y^2}{(x^2+y^2)^2} \,dx\,dy
$$
Obviously either of these is $-1$ times the other, and if this function were absolutely integrable, then they would be equal, so their value would be $0$. But one is $\pi/4$ and the other is $-\pi/4$, as may be checked by freshman calculus methods.
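Spelling out the freshman-calculus check: since $\frac{\partial}{\partial y}\left(\frac{y}{x^2+y^2}\right)=\frac{x^2-y^2}{(x^2+y^2)^2}$,
$$\int_0^1\int_0^1 \frac{x^2-y^2}{(x^2+y^2)^2}\,dy\,dx=\int_0^1\left[\frac{y}{x^2+y^2}\right]_{y=0}^{y=1}dx=\int_0^1\frac{dx}{x^2+1}=\frac{\pi}{4},$$
and swapping the roles of $x$ and $y$ gives $-\pi/4$ for the other order of integration.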
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/253921",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 2,
"answer_id": 1
} |
Find integral from 1 to infinity of $1/(1+x^2)$ I am practicing for an exam and am having trouble with this problem. Find the integral from 1 to infinity of $\frac{1}{1+x^2}$. I believe the integrand's antiderivative is $\arctan(x)$, which would make the answer $\arctan(\infty)-\arctan(1)$, but from here I'm lost. I did find out that this comes out to $\pi/4$, but I don't know why.
| Essentially, your question appears to be "Why does $\lim_{x\to\infty}\arctan{x} = \frac{\pi}{2}$?"
Remember that
$$\theta = \arctan\left(\frac{y}{x}\right)$$
When will $\frac{y}{x}\to\infty$? That's when $x \to 0$.
Think of a right triangle with height $y$ and base $x$. $\theta$ is the angle between $x$ and the hypotenuse. As $x$ gets smaller and smaller, what does $\theta$ get close to? Well, $\theta$ approaches 90 degrees or $\frac{\pi}{2}$.
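Putting this together for the original integral:
$$\int_1^\infty\frac{dx}{1+x^2}=\lim_{b\to\infty}\bigl(\arctan b-\arctan 1\bigr)=\frac{\pi}{2}-\frac{\pi}{4}=\frac{\pi}{4}.$$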
EDIT:
[Figure omitted: a picture illustrating this idea, with the angle (in degrees) displayed inside the tangent function alongside the fraction $\frac{y}{x}$ as $x$ shrinks toward $0$.]
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/253979",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 4,
"answer_id": 1
} |
Find the Matrix of a Linear Transformation. It's been a few weeks since the subject was covered in my Linear Algebra class, and unfortunately linear transformations are my weak spot, so could anyone explain the steps to solve this problem?
Find the matrix $A$ of the linear transformation $T(f(t)) = 3f'(t)+7f(t)$ from $P_2$ to $P_2$ with respect to the standard basis for $P_2$, $\{1,t,t^2\}$.
The resulting answer should be a $3 \times 3$ matrix, but I'm unsure of where to start when it comes to solving this problem.
| NOTE Given a finite dimensional vector space $\Bbb V$ and a basis $B=\{v_1,\dots,v_n\}$ of $\Bbb V$, the coordinates of $v$ in base $B$ are the unique $n$ scalars $\{a_1,\dots,a_n\}$ such that $v=\sum_{k=1}^n a_kv_k$, and we note this by writing $(v)_B=(a_1,\dots,a_n)$.
All you need is to find what $T$ maps the basis elements to. Why? Because any vector in $P_2$ can be written as a linear combination of $1$, $t$ and $t^2$, whence if you know what $T(1)$, $T(t)$ and $T(t^2)$ are, you will find what any $T(a_0 +a_1 t +a_2 t^2)=a_0T(1)+a_1 T(t)+a_2 T(t^2)$ is. So, let us call $B=\{1,t,t^2\}$. Then
$$T(1)=3\cdot 0 +7\cdot 1=7=(7,0,0)$$
$$T(t)=3\cdot 1 +7\cdot t=(3,7,0)$$
$$T(t^2)=6\cdot t +7\cdot t^2=(0,6,7)$$
(Here I'm abusing the notation a bit. Formally, we should enclose the two first terms of the equations with $(-)_B$ )
Now note that our transformation matrix simply takes a vector in coordinates of base $B$, and maps it to another vector in coordinates of base $B$. Thus, if $|T|_{B,B}$ is our matrix from base $B$ to base $B$, we must have
$$|T|_{B,B} (P)_B=(T(P))_B$$
where we wrote $P=P(t)$ to avoid too much parenthesis.
But let's observe that if $(P)_B=(a_0,a_1,a_2)$ then $a_0T(1)+a_1 T(t)+a_2 T(t^2)=a_0(7,0,0)+a_1 (3,7,0)+a_2(0,6,7)$ is the matrix product
$$\left(\begin{matrix}7&3&0\\0&7&6\\0&0&7\end{matrix}\right)\left(\begin{matrix}a_0 \\a_1 \\a_2 \end{matrix}\right)$$
And $|T|_{B,B}=\left(\begin{matrix}7&3&0\\0&7&6\\0&0&7\end{matrix}\right)$ is precisely the matrix we're after. It has the property that for each vector of $P_2$
$$|T|_{B, B}(P)_B=(T(P))_B$$
(well, actually
$$(|T|_{B,B} (P)_B^t)^t=(T(P))_B$$
but that looks just clumsy, doesn't it?)
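As a quick sanity check (an added example, not part of the original answer): take $P(t)=1+2t+3t^2$, so $(P)_B=(1,2,3)$. Directly, $T(P)=3(2+6t)+7(1+2t+3t^2)=13+32t+21t^2$, and indeed
$$\left(\begin{matrix}7&3&0\\0&7&6\\0&0&7\end{matrix}\right)\left(\begin{matrix}1\\2\\3\end{matrix}\right)=\left(\begin{matrix}13\\32\\21\end{matrix}\right).$$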
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/254048",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
If $(a,b,c)$ is a primitive Pythagorean triplet, explain why... If $(a,b,c)$ is a primitive Pythagorean triplet, explain why only one of $a$, $b$ and $c$ can be even, and why $c$ cannot be the one that is even.
What I Know:
A Primitive Pythagorean Triple is a Pythagorean triple $a$, $b$, $c$ with the constraint that $\gcd(a,b)=1$, which implies $\gcd(a,c)=1$ and $\gcd(b,c)=1$. Example: $a=3$, $b=4$, $c=5$, where $9+16=25$.
At least one leg of a primitive Pythagorean triple is odd, since if $a$, $b$ are both even then $\gcd(a,b)>1$.
| Clearly they cannot all be even as a smaller similar triple could be obtained by dividing all the sides by $2$ (your final point).
Nor can exactly two of them be even, since $a^2+b^2=c^2$: you would have either an even number plus an odd number (or the other way round) adding to make an even number, or an even number plus an even number adding to make an odd number; both are impossible on parity grounds.
Nor can they all be odd since $a^2+b^2=c^2$ and you would have an odd number plus an odd number adding to make an odd number.
Added: Nor can the two shorter sides both be odd, say $2n+1$ and $2m+1$ for some integers $n$ and $m$. The longer side would then be even, say $2k$ for some integer $k$, since its square is the sum of two odd numbers. But you would then have $4n^2+4n+1 + 4m^2+4m+1 = 4k^2$, which would lead to $k^2 = n^2+n+m^2+m + \frac{1}{2}$, preventing $k$ from being an integer.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/254145",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
} |
Uniformly convergent sequence proof. Prove that if $(f_k)$ is a uniformly convergent sequence of continuous real-valued functions on a compact domain $D\subseteq \mathbb{R}$, then there is some $M\geq 0$ such that $\left|f_k(x)\right|\leq M$ for every $x\in D$ and for every $k\in \mathbb{N}$.
My response: Basically, I am trying to show that uniform convergence on a compact domain implies uniform boundedness. Let $f(x)$ be the limiting function. Then I know that
$\lim_{k\to\infty} \sup_{x \in D} | f_k (x) - f(x) | = 0$. Also, I know that $f$ is continuous, therefore it attains an absolute maximum on $D$. How can I apply these two things to prove it?
Since $D$ is compact and each $f_k$ is continuous, each image $f_k(D)$ is compact, hence bounded; but that alone does not give a single bound $M$, because the bounds could grow with $k$. Uniform convergence is what saves us. The limit $f$ is continuous on the compact set $D$, so $|f(x)|\leq C$ for some $C\geq 0$ and all $x\in D$. By uniform convergence choose $N$ such that $\sup_{x\in D}|f_k(x)-f(x)|\leq 1$ for all $k\geq N$; then $|f_k(x)|\leq C+1$ for all $k\geq N$ and $x\in D$. Each of the finitely many functions $f_1,\dots,f_{N-1}$ is bounded on $D$, say $|f_j(x)|\leq M_j$, so $M=\max\{M_1,\dots,M_{N-1},C+1\}$ works.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/254225",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
About Central limit theorem We prove the central limit theorem using characteristic functions. If we know the $X_i$ are independent but not identically distributed, is there any weaker condition which still yields the convergence to a normal distribution?
| For example, suppose $X_i$ are independent with $\mathbb E(X_i) = 0$, $\text{Var}(X_i) = \sigma_i^2$, and $$\lim_{n \to \infty} \frac{1}{\sigma(n)^3} \sum_{i=1}^n \mathbb E[|X_i|^3] = 0$$
where $$\sigma(n)^2 = \text{Var}\left(\sum_{i=1}^n X_i\right) = \sum_{i=1}^n \sigma_i^2$$ Then $\displaystyle \frac{1}{\sigma(n)} \sum_{i=1}^n X_i$ converges in distribution to a standard normal random variable. This is Lyapunov's condition (here with third moments); the still weaker Lindeberg condition is also sufficient.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/254304",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
} |
Closed form for $\sum_{k=0}^{n} \binom{n}{k}\frac{(-1)^k}{(k+1)^2}$ How can I calculate the following sum involving binomial terms:
$$\sum_{k=0}^{n} \binom{n}{k}\frac{(-1)^k}{(k+1)^2}$$
Where the value of n can get very big (thus calculating the binomial coefficients is not feasible).
Is there a closed form for this summation?
| Apparently I'm a little late to the party, but my answer has a punchline!
We have
$$
\frac{1}{z} \int_0^z \sum_{k=0}^{n} \binom{n}{k} s^k\,ds = \sum_{k=0}^{n} \binom{n}{k} \frac{z^k}{k+1},
$$
so that
$$
- \int_0^z \frac{1}{t} \int_0^t \sum_{k=0}^{n} \binom{n}{k} s^k\,ds\,dt = - \sum_{k=0}^{n} \binom{n}{k} \frac{z^{k+1}}{(k+1)^2}.
$$
Setting $z = -1$ gives an expression for your sum,
$$
\sum_{k=0}^{n} \binom{n}{k} \frac{(-1)^k}{(k+1)^2} = \int_{-1}^{0} \frac{1}{t} \int_0^t \sum_{k=0}^{n} \binom{n}{k} s^k\,ds\,dt.
$$
Now, $\sum_{k=0}^{n} \binom{n}{k} s^k = (1+s)^n$, so
$$
\begin{align*}
\sum_{k=0}^{n} \binom{n}{k} \frac{(-1)^k}{(k+1)^2} &= \int_{-1}^{0} \frac{1}{t} \int_0^t (1+s)^n \,ds\,dt \\
&= \frac{1}{n+1}\int_{-1}^{0} \frac{1}{t} \left[(1+t)^{n+1} - 1\right]\,dt \\
&= \frac{1}{n+1}\int_{0}^{1} \frac{u^{n+1}-1}{u-1}\,du \\
&= \frac{1}{n+1}\int_0^1 \sum_{k=0}^{n} u^k \,du \\
&= \frac{1}{n+1}\sum_{k=1}^{n+1} \frac{1}{k} \\
&= \frac{H_{n+1}}{n+1},
\end{align*}
$$
where $H_n$ is the $n^{\text{th}}$ harmonic number.
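Since the question came from a programming setting, here is a small Python check of the identity (an added illustration; the function names are mine), comparing the direct sum against the closed form $H_{n+1}/(n+1)$ with exact rational arithmetic:

```python
from fractions import Fraction
from math import comb

def direct(n):
    """Exact value of sum_{k=0}^{n} C(n,k) * (-1)^k / (k+1)^2."""
    return sum(Fraction((-1) ** k * comb(n, k), (k + 1) ** 2)
               for k in range(n + 1))

def closed_form(n):
    """H_{n+1} / (n+1), where H_m is the m-th harmonic number."""
    harmonic = sum(Fraction(1, k) for k in range(1, n + 2))
    return harmonic / (n + 1)

for n in (0, 1, 5, 20, 100):
    assert direct(n) == closed_form(n)
print("identity verified")  # for very big n, use closed_form: no binomials needed
```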
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/254416",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "16",
"answer_count": 9,
"answer_id": 2
} |
A simple limit of a sequence I feel almost ashamed for putting this up here, but oh well..
I'm attempting to prove: $$\lim_{n\to \infty}\sqrt[n]{2^n+n^5}=2$$
My approach was to use the following inequality (which holds for all sufficiently large $n$ and is quite easy to prove) and the squeeze theorem:
$$\lim_{n\to \infty}\sqrt[n]{1}\leq \lim_{n\to \infty}\sqrt[n]{1+\frac{n^5}{2^n}}\leq \lim_{n\to \infty}\sqrt[n]{1+\frac{1}{n}}$$
I encountered a problem with the last limit though. While for functions the $e^{\ln x}$ "trick" would work, this is a sequences-only problem, so there has to be a solution using only the knowledge of a first-semester calculus student who has just covered sequences.
Or maybe this approach to the limit is overly complicated to begin with? Anyway, I'll be glad to receive any hints and answers, whether to the original problem or to $\lim_{n\to \infty}\sqrt[n]{1+\frac{1}{n}}$, thanks!
How about using $\sqrt[n]{1+(1/n)}\le\sqrt{1+(1/n)}$ for $n\ge2$?
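To finish from here: $1\leq\sqrt[n]{1+\frac1n}\leq\sqrt{1+\frac1n}\to1$, so by the squeeze theorem $\sqrt[n]{1+\frac{n^5}{2^n}}\to1$, and therefore $\sqrt[n]{2^n+n^5}=2\sqrt[n]{1+\frac{n^5}{2^n}}\to2$.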
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/254475",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
} |
An easy way to remember PEMDAS I'm having trouble remembering PEMDAS (with regard to precedence in mathematical expressions), i.e.:
* Parentheses
* Exponentiation
* Multiplication & Division
* Addition & Subtraction
I understand what all of the above mean, but I am having trouble keeping this in my head. Can you recommend any tricks or tips you use to remember this?
Thanks
I am a step-by-step person (remembering always to work left to right):
1. Parentheses
2. Exponents
3. Multiply/divide
4. Add/subtract
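A quick worked example (added for illustration) of the order in action:
$$6 + 4 \times (1+2)^2 = 6 + 4 \times 3^2 = 6 + 4 \times 9 = 6 + 36 = 42,$$
parentheses first, then the exponent, then the multiplication, then the addition.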
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/254513",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 13,
"answer_id": 9
} |
Pattern continued Consider the following pattern:
$$\frac{3^{2/401}}{3^{2/401} +3}+\frac{3^{4/401 }}{3^{4/401} +3}+\frac{3^{6/401}}{3^{6/401} +3}+\frac{3^{8/401}}{3^{8/401} +3}$$
What will the result be if the pattern is continued for $\;300\;$ terms?
For the question as posed, you are computing the sum of the first 300 terms:
$$\sum_{k=1}^{300}\left(\large\frac{3^{\frac{2k}{401}}}{3^{\frac{2k}{401}}+3}\right)$$
Dividing the numerator and denominator of each term by $3$ gives
$$\frac{3^{\frac{2k}{401}}}{3^{\frac{2k}{401}}+3}=\frac{3^{\frac{2k-401}{401}}}{3^{\frac{2k-401}{401}}+1},$$
and the function $g(a)=\frac{3^a}{3^a+1}$ satisfies $g(a)+g(-a)=1$. Since the exponents for the terms $k$ and $401-k$ are negatives of each other, each such pair of terms sums to exactly $1$; this pairing is the key to evaluating sums of this shape.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/254589",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Probability question with combinations of different types of an item Suppose a bakery has 18 varieties of bread, one of which is blueberry bread. If a half dozen loaves of bread are selected at random (with repetitions allowed), then what is the probability that at least one loaf of blueberry bread will be included in the selection?
I started off by determining that if we picked up at least one blueberry bread from the start, then we could find the total ways by finding:
$C_{R}(n, r)$
Plugging in for $n$ and $r$ I calculated that the number of ways would be:
$C_{R}(18, 5) = C(18+5-1,5) = C(22,5) = 26,334$ ways.
Am I on the right track, and how would I go about finding the probability in this case?
I don't think that's a very good direction. It could require much work. I always find it useful to think of a simplified version:
Suppose we pick 1 loaf. What are the chances of it being blueberry?
Suppose we pick 2 loaves. What are the chances of not having a blueberry?
3 loaves?
4 loaves?
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/254641",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
} |
How to find its closed-form? Here is a sequence defined by the recursion formula below:
$$a_n=2a_{n-1}+a_{n-2}$$
where $n \in \mathbb{N}$ and $a_0=1$, $a_1=2$. How can one find its closed form?
| If we write $E^ra_n=a_{n+r},$
the characteristic/auxiliary equation becomes $E^2-2E-1=0$, so $E=1\pm\sqrt2$.
So, the complementary function is $a_n=A(1+\sqrt2)^n+B(1-\sqrt2)^n$, where $A,B$ are indeterminate constants to be determined from the initial conditions.
$a_0=A+B$; but $a_0=1$, so $A+B=1$,
and $a_1=A(1+\sqrt2)+B(1-\sqrt2)=2$
Now, solve for $A,B$
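Carrying the solve to the end: substituting $A+B=1$ into the second equation gives $\sqrt2(A-B)=1$, so $A-B=\frac{\sqrt2}{2}$; together these yield $A=\frac{2+\sqrt2}{4}$ and $B=\frac{2-\sqrt2}{4}$, hence
$$a_n=\frac{2+\sqrt2}{4}(1+\sqrt2)^n+\frac{2-\sqrt2}{4}(1-\sqrt2)^n.$$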
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/254697",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
On some inequalities in $L_p$ spaces Let $f$ be a function such that $\|fg\|_1<\infty$ whenever $\|g\|_2<\infty$. I would like to show that $\|f\|_2<\infty$. It seems that I should use some kind of Hölder inequality, since we have $\|fg\|_1\leq \|f\|_2\|g\|_2$, but I don't know how. Any help would be appreciated. Thanks!
| You have to assume that
$$M := \sup \{ \|f \cdot g\|_1; \|g\|_2 \leq 1\}<\infty$$
... otherwise it won't work. (Assume $M=\infty$. Then for all $n \in \mathbb{N}$ there exists $g_n \in L^2$, $\|g_n\|_2 \leq 1$, such that $\|f \cdot g_n\|_1 \geq n$. And this means that there cannot exist a constant $c$ such that $\|f \cdot g\|_1 \leq c \cdot \|g\|_2$, in particular $f \notin L^2$ (by the Hölder inequality).) In fact, this assumption comes for free: if $\|f\cdot g\|_1<\infty$ for every $g\in L^2$, then $M<\infty$ by the closed graph theorem (or by a uniform boundedness argument applied to the truncations of $f$), and testing against normalized truncations of $f$ then gives $\|f\|_2\leq M$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/254812",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Simple binomial theorem proof: $\sum_{j=0}^{k} \binom{a+j}j = \binom{a+k+1}k$ I am trying to prove this binomial statement:
For $a \in \mathbb{C}$ and $k \in \mathbb{N_0}$, $\sum_{j=0}^{k} {a+j \choose j} = {a+k+1 \choose k}.$
I am stuck on where and how to start. My steps are these:
${a+j \choose j} = \frac{(a+j)!}{j!(a+j-j)!} = \frac{(a+j)!}{j!a!}$ but now I don't know how to go further to show the equality.
Or I said: $\sum_{j=0}^{k} {a+j \choose j} = {a+k \choose k} +\sum_{j=0}^{k-1} {a+j \choose j} = [help!] = {a+k+1 \choose k}$
Thanks for help!
| One way to prove that
$$\sum_{j=0}^k\binom{a+j}j=\binom{a+k+1}k\tag{1}$$
is by induction on $k$. You can easily check the $k=0$ case. Now assume $(1)$, and try to show that
$$\sum_{j=0}^{k+1}\binom{a+j}j=\binom{a+(k+1)+1}{k+1}=\binom{a+k+2}{k+1}\;.$$
To get you started, clearly
$$\begin{align*}
\sum_{j=0}^{k+1}\binom{a+j}j&=\binom{a+k+1}{k+1}+\sum_{j=0}^k\binom{a+j}j\\
&=\binom{a+k+1}{k+1}+\binom{a+k+1}k
\end{align*}$$
by the induction hypothesis, so all that remains is to show that
$$\binom{a+k+1}{k+1}+\binom{a+k+1}k=\binom{a+k+2}{k+1}\;,$$
which is just Pascal's rule.
It’s also possible to give a combinatorial proof. Note that $\binom{a+j}j=\binom{a+j}a$ and $\binom{a+k+1}k=\binom{a+k+1}{a+1}$. Thus, the righthand side of $(1)$ is the number of ways to choose $a+1$ numbers from the set $\{1,\dots,a+k+1\}$. We can divide these choices into $k+1$ categories according to the largest number chosen. Suppose that the largest number chosen is $\ell$; then the remaining $a$ numbers must be chosen from $\{1,\dots,\ell-1\}$, something that can be done in $\binom{\ell-1}a$ ways. The largest of the $a+1$ numbers can be any of the numbers $a+1,\dots,a+k+1$, so $\ell-1$ ranges from $a$ through $a+k$. Letting $a+j=\ell-1$, we see that the number of ways to choose the $a+1$ numbers is given by the lefthand side of $(1)$: the term $\binom{a+j}j=\binom{a+j}a$ is the number of ways to make the choice if $a+j+1$ is the largest of the $a+1$ numbers.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/254865",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Why does $\ln(x) = \epsilon x$ have 2 solutions? I was working on a problem involving perturbation methods and it asked me to sketch the graph of $\ln(x) = \epsilon x$ and explain why it must have 2 solutions. Clearly there is a solution near $x=1$ which depends on the value of $\epsilon$, but I fail to see why there must be a second solution for large $x$. It was my understanding that $\ln x$ has no horizontal asymptote and continues to grow indefinitely, whereas for really small values of $\epsilon$, $\epsilon x$ should grow incredibly slowly. How can I 'see' that there are two solutions?
Thanks!
For all $\varepsilon>0$, using L'Hospital's rule,
$$\lim\limits_{x \to +\infty} {\dfrac{\varepsilon x}{\ln{x}}}=\varepsilon \lim\limits_{x \to +\infty} {\dfrac{x}{\ln{x}}}=\varepsilon \lim\limits_{x \to +\infty} {\dfrac{1}{\frac{1}{x}}}=+\infty.$$
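To see the two crossings concretely (for $0<\varepsilon<1/e$): at $x=1$ we have $\varepsilon x=\varepsilon>0=\ln x$, at $x=e$ we have $\varepsilon e<1=\ln x$, and by the limit above $\varepsilon x>\ln x$ again for all large $x$. The intermediate value theorem then gives one solution in $(1,e)$ and a second one beyond $e$: a line of tiny slope crosses the logarithm once from above and, much later, once from below.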
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/254926",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 4,
"answer_id": 1
} |
On an identity about integrals Suppose you have two finite Borel measures $\mu$ and $\nu$ on $(0,\infty)$. I would like to show that there exists a finite Borel measure $\omega$ such that
$$\int_0^{\infty} f(z) d\omega(z) = \int_0^{\infty}\int_0^{\infty} f(st) d\mu(s)d\nu(t).$$
I could try to use a change of variables formula, but the two integration domains are not diffeomorphic. So I really don't know how to start. Any help would be appreciated! This is not homework; I am currently practising for an exam.
Thanks!
| When we have no idea about the problem, the question we have to ask ourselves is: "if a measure $\omega$ works, what should it have to satisfy?".
We know that for a Borel measure it is important to know its values on intervals of the form $(0,a]$, $a>0$ (because we can then deduce its value on $(a,b]$ for $a<b$, and on finite unions of these intervals). So we are tempted to define
$$\omega((0,a]):=\int_{(0,+\infty)^2}\chi_{(0,a]}(st)d\mu(s)d\nu(t)=\int_{(0,+\infty)}\mu((0,at^{-1}])d\nu(t).$$
Note that if the collection $\{(a_i,b_i]\}_{i=1}^N$ consists of pairwise disjoint elements, then for each $t$ so are the intervals $(a_it^{-1},b_it^{-1}]$, which allows us to define $\omega$ over the ring which consists of finite disjoint unions of elements of the form $(a,b]$, $0<a<b<+\infty$. Then we extend it to Borel sets by Caratheodory's extension theorem.
As the involved measures are finite, $\omega$ is actually uniquely determined. (The measure $\omega$ constructed this way is just the pushforward of the product measure $\mu\times\nu$ under the multiplication map $(s,t)\mapsto st$, and the desired identity is then the change-of-variables formula for pushforward measures.)
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/254988",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 0
} |
Making a $1,0,-1$ linear combination of primes a multiple of $1000$ Prove that for any given 10 primes $p_1,p_2,\ldots,p_{10}$, there always exist 10 numbers $a_1,\ldots,a_{10}$, not all equal to $0$, each taking one of the three values $-1$, $0$, $1$, such that
$\sum\limits_{i=1}^{10}a_ip_i$ is a multiple of 1000.
The pigeonhole principle takes care of this. There are $2^{10}-1=1023$ non-empty subsets of $\{p_1,\dots,p_{10}\}$, hence $1023$ non-empty subset sums, but only $1000$ residues modulo $1000$; so two distinct subsets $S\neq T$ have congruent sums. Setting $a_i=1$ for $i\in S\setminus T$, $a_i=-1$ for $i\in T\setminus S$, and $a_i=0$ otherwise gives coefficients that are not all zero, and $\sum_i a_ip_i$ is the difference of two congruent sums, hence a multiple of $1000$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/255110",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Why is $S = X^2 + Y^2$ distributed $Exponential(\frac{1}{2})$? Let $X$ and $Y$ be independent, standard normally distributed random variables ($\sim Normal(0, 1)$). Why is $S = X^2 + Y^2$ distributed $Exponential(\frac{1}{2})$?
I understand that an exponential random variable describes the time until the next event, given a rate at which events occur. However, I do not understand what this has to do with $S = X^2 + Y^2$.
The sum of the squares of two independent standard normals is chi-squared with 2 degrees of freedom.
A chi-squared distribution with 2 degrees of freedom is the same as an exponential distribution with rate parameter $1/2$: both have density $\frac12 e^{-x/2}$ for $x>0$.
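A direct way to see this (added for illustration) is to compute the survival function of $S$ in polar coordinates:
$$P(S>s)=\frac{1}{2\pi}\iint_{x^2+y^2>s}e^{-(x^2+y^2)/2}\,dx\,dy=\int_{\sqrt{s}}^{\infty}re^{-r^2/2}\,dr=e^{-s/2},$$
which is exactly the tail of an $Exponential(\frac{1}{2})$ distribution; the rate $\frac12$ comes from the coefficient in the Gaussian exponent.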
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/255163",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |