Probability of winning a game $A$ and $B$ play a game. In each round, the probability of $A$ winning is $0.55$, the probability of $B$ winning is $0.35$, and the probability of a tie is $0.10$. The winner of the game is the person who first wins two rounds. What is the probability that $A$ wins? The answer is $0.66$, but I don't see how it comes out to $0.66$. Please help. EDIT: The winning combinations, according to me, are {null,T,TT,TTT,...}A{null,T,TT,TTT,...}A, {null,T,TT,TTT,...}A{null,T,TT,TTT,...}B{null,T,TT,TTT,...}A, and {null,T,TT,TTT,...}B{null,T,TT,TTT,...}A{null,T,TT,TTT,...}A.
Ties don't count; don't record them. So in effect we are playing a game in which A has probability $p=\frac{0.55}{0.90}$ of winning a round, and B has probability $1-p$. Now there are several ways to finish. The one requiring the least thought is that A wins with the pattern AA, ABA, or BAA.
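Carrying that hint to a number (a short check appended here, not part of the original answer): with $p = \frac{0.55}{0.90} = \frac{11}{18}$, the three patterns give
$$P(A \text{ wins}) = p^2 + p^2(1-p) + p^2(1-p) = p^2(3-2p) = \frac{121}{324}\cdot\frac{16}{9} = \frac{1936}{2916} \approx 0.664,$$
which matches the stated answer of $0.66$.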
{ "language": "en", "url": "https://math.stackexchange.com/questions/377301", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Finding the remainder when $2^{100}+3^{100}+4^{100}+5^{100}$ is divided by $7$ Find the remainder when $2^{100}+3^{100}+4^{100}+5^{100}$ is divided by $7$. Please explain briefly the concept used to solve such problems. Thanks.
Using the Euler-Fermat theorem, $\phi(7)=6$, so: $2^{6} \equiv 1 \pmod 7 \implies 2^{100}=2^4\cdot 2^{96} \equiv 2 \pmod 7$; $3^{6} \equiv 1 \pmod 7 \implies 3^{100}=3^4\cdot 3^{96} \equiv 4 \pmod 7$; $4^{6} \equiv 1 \pmod 7 \implies 4^{100}=4^4\cdot 4^{96} \equiv 4 \pmod 7$; $5^{6} \equiv 1 \pmod 7 \implies 5^{100}=5^4\cdot 5^{96} \equiv 2 \pmod 7$. Hence $2^{100}+3^{100}+4^{100}+5^{100} \equiv 2+4+4+2 \equiv 5 \pmod 7$.
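A one-line sanity check using Python's built-in three-argument `pow` (modular exponentiation):

```python
# 2^100 + 3^100 + 4^100 + 5^100 modulo 7
print(sum(pow(b, 100, 7) for b in (2, 3, 4, 5)) % 7)  # prints 5
```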
{ "language": "en", "url": "https://math.stackexchange.com/questions/377378", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 7, "answer_id": 1 }
Differential Equation - $y'=|y|+1, y(0)=0$ The equation is $y'=|y|+1, y(0)=0$. Suppose $y$ is a solution on an interval $I$. Let $x\in I$. If $y(x)\ge 0$ then $$y'(x)=|y(x)|+1\iff y'(x)=y(x)+1\iff \frac{y'(x)}{y(x)+1}=1\\ \iff \ln (y(x)+1)=x+C\iff y(x)+1=e^{x+C}\\ \iff y(x)=e^{x+C}-1$$ Then $y(0)=0\implies C=0$. So $y(x)=e^x-1$ if $y(x)\ge 0$. If $y(x)\leq 0$ then $y(x)=1-e^{-x}$. Now I want to say $y(x)=\begin{cases} e^x-1, \text{if } x\ge 0\\1-e^{-x}, \text{if } x\leq 0\end{cases}$ Is this correct? Is there only one solution?
I think my solution above is correct. There are a few details missing: it is necessary to show that $y(x)\leq 0\iff x\leq0$ and $y(x)\ge 0\iff x\ge 0$, which allows me to define $y$ the way I do. Also, it is necessary to check that $y$ is differentiable at $x=0$, and it is, because: $$\lim _{x\to 0^+}\frac{e^x-1}{x-0}=1=\lim _{x\to 0^-}\frac{1-e^{-x}}{x-0}$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/377443", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Determine number of squares in progressively decreasing size that can be carved out of a rectangle How many squares of progressively decreasing size can be created from a rectangle of dimension $a \times b$? For example, consider a rectangle of dimension $3 \times 8$. As you can see, the biggest squares that you can carve out of it are of dimension $3 \times 3$, and they are ABFE and EFGH. The next biggest square is of dimension $2 \times 2$, which is GJIC, followed by two other squares of dimension $2 \times 2$, which are JHLK and KLDI. So the answer is 5. Is there any mathematical approach to solving it for a rectangle of arbitrary dimension?
Following the algorithm you seem to be using, cutting the largest possible square off the rectangle, it is a simple recursive algorithm. If you start with an $n \times m$ rectangle with $n \ge m$, you will cut off $\lfloor \frac nm \rfloor$ squares of size $m \times m$ and be left with an $(n-\lfloor \frac nm \rfloor m) \times m$ rectangle. Then you remove as many squares of side $n-\lfloor \frac nm \rfloor m$ as you can and continue. The smallest square will have side equal to the greatest common divisor of $n$ and $m$. A code sketch is below.
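A minimal sketch of that recursion in Python (it is exactly the Euclidean algorithm, with the quotients counting the squares):

```python
def count_squares(n, m):
    """Number of squares obtained by repeatedly cutting the largest
    possible square off an n x m rectangle."""
    count = 0
    while m:
        count += n // m      # squares of side m cut off in this pass
        n, m = m, n % m      # leftover strip becomes the new rectangle
    return count

print(count_squares(8, 3))   # 5, matching the 3 x 8 example
```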
{ "language": "en", "url": "https://math.stackexchange.com/questions/377538", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Length of period of decimal expansion of a fraction Each rational number (fraction) can be written as a periodic decimal. Is there a method or hint to derive the length of the period of an arbitrary fraction? For example $1/3=0.3333...=0.(3)$ has a period of length 1. For example: how to determine the length of the period of $119/13$?
Assuming there are no factors of $2,5$ in the denominator, one way is just to raise $10$ to powers modulo the denominator. If you find $-1$ you are halfway done. Taking your example: $10^2\equiv 9, 10^3\equiv -1, 10^6 \equiv 1 \pmod {13}$, so the period of $\frac 1{13}$ is $6$ digits long. The period length will always be a divisor of Euler's totient function of the denominator; for prime $p$, that is $p-1$.
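A small sketch of this order-finding procedure in Python (the helper name `period_length` is ours, not from the answer):

```python
from math import gcd

def period_length(num, den):
    """Length of the repeating block in the decimal expansion of num/den."""
    den //= gcd(num, den)        # the period depends only on the reduced denominator
    for p in (2, 5):             # strip the factors that only delay the period
        while den % p == 0:
            den //= p
    if den == 1:
        return 0                 # terminating decimal
    k, r = 1, 10 % den           # find the multiplicative order of 10 mod den
    while r != 1:
        r = (r * 10) % den
        k += 1
    return k

print(period_length(119, 13))    # 6
```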
{ "language": "en", "url": "https://math.stackexchange.com/questions/377683", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "33", "answer_count": 3, "answer_id": 1 }
$11$ divides $10a + b$ $\Leftrightarrow$ $11$ divides $a − b$ Problem So, I am to show that $11$ divides $10a + b$ $\Leftrightarrow$ $11$ divides $a − b$. Attempt This is a useful proposition given by the book: Proposition 12. $11$ divides $a$ $\Leftrightarrow$ $11$ divides the alternating sum of the digits of $a$. Proof. Since $10 ≡ −1 \pmod{11}$, $10^e ≡ (−1)^e \pmod{11}$ for all $e$. Then \begin{eqnarray} a&=&a_n10^n +a_{n−1}10^{n−1} +...+a_210^2 +a_110+a_0\\ &≡&a_n(−1)^n +a_{n−1}(−1)^{n−1} +...+a_2(−1)^2 +a_1(−1)+a_0 \pmod{11}. \end{eqnarray} So, I let "$a$" be $10a+b$, which means that $10a+b≡0\pmod{11}$, i.e. that $\frac{10a+b-11k}{11}=n$, for some $n\in \mathbb{Z}$. Next, I write $10$ as $11-1$, which gives $\frac{11a-a+b-11k}{11}=\frac{11(a-k)+b-a}{11}=\frac{11(a-k)-(a-b)}{11}$. This being so we must then have that $(a-b)=11m$, for some $m\in \mathbb{Z}$ (zero works), but this would mean that $a≡b\pmod{11}$, namely that $11~|~a-b$. It is therefore quite plain that $11$ divides $10a+b$ $\Leftrightarrow$ $11$ divides $a-b$. Discussion What I'd like to know is how I'm to use this to show that $11$ divides $232595$, another part of the same problem.
Since $11$ divides $10a+b$, then $$ 10a+b=11k $$ or $$ b = 11k-10a $$ so $$ a-b=a-11k+10a=11(a-k) $$ which means that $11$ divides $a-b$ as well, since $a,b,k$ are integers. Update To prove in opposite direction you can do the same $$ a-b=11k\\ b=a-11k\\ 10a+b=10a+a-11k=11(a-k) $$ or in other words, if $11$ divides $a-b$ it also divides $10a+b$. So both directions are proved $$ 10a+b\equiv 0(\text{mod } 11) \Leftrightarrow a-b\equiv 0(\text{mod } 11) $$
{ "language": "en", "url": "https://math.stackexchange.com/questions/377729", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 6, "answer_id": 0 }
Approximating measurable functions on $[0,1]$ by smooth functions. Let $f$ be a measurable function on $[0,1]$. Is there a sequence of infinitely differentiable functions $f_n$ such that at least one of (1) $f_n\rightarrow f$ pointwise, (2) $f_n\rightarrow f$ uniformly, (3) $\int_0^1|f_n-f|\rightarrow 0$, is true?
Uniform convergence is surely too much to ask for. As Wikipedia suggests, the uniform convergence theorem assures that the uniform limit of continuous functions is again continuous. Hence, as soon as $f$ is discontinuous, all hope of finding smooth $f_n$ uniformly convergent to $f$ is gone. The statement involving the integral is true (if we additionally assume $\int |f| < \infty$, at least), and follows from the more general fact that the continuous functions are dense in $L^1([0,1])$ (integrable functions with norm given by $||f|| = \int |f|$). A possible way to check this is the following. First, measurable functions can be arbitrarily well approximated by simple functions (the ones of the form $\sum a_i \chi_{A_i}$, with $A_i$ measurable sets). Thus, if we are able to approximate each function $\chi_A$ arbitrarily well by continuous functions, then we are done. For this, notice that $A$ can be approximated by an open set $U$: for any $\varepsilon > 0$, there is an open $U$ with $\lambda(A \triangle U) < \varepsilon$. Now, $U$ is open, so you can express it as a union of intervals: $U = \bigcup I_n$, with each $I_n$ an open interval, disjoint from the $I_m$, $m\neq n$. Finally, $\chi_I$ can be approximated by smooth functions using classical bump functions. A lot of details would have to be filled in, but it should be clear that a measurable function can indeed be arbitrarily well approximated by smooth ones, in $L^1$. For pointwise convergence, I think you can use reasoning like that just offered for $L^1$. I also believe you can use mollifiers. Since you only asked whether one of the statements can be made true, I shall not go into more detail. Also, I am not quite sure what background to assume, and I am more than sure there are a lot of other users who have a much better understanding of these issues.
{ "language": "en", "url": "https://math.stackexchange.com/questions/377808", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
$\frac{1}{ab}=\frac{s}{a}+\frac{r}{b} \overset{?}{\iff}\gcd(a,b)=1$ $$\frac{1}{ab}=\frac{s}{a}+\frac{r}{b} \overset{?}{\iff} \gcd(a,b)=1$$ This seems almost painfully obvious because it is just $ar+bs=1$ in another form. This second form is the definition of coprimality, so what else is my professor looking for?
If $\gcd(a,b)=1$, then, since the greatest common divisor is the smallest positive integer that can be represented as a linear combination of $a$ and $b$, there are integers $r$ and $s$ such that $1=ra+sb$. Dividing by $ab$ gives $\frac{1}{ab}=\frac{s}{a}+\frac{r}{b}$. Conversely, if we suppose that $\frac{1}{ab}=\frac{s}{a}+\frac{r}{b}$, then multiplying by $ab$ gives $1=ra+sb$. Since $\gcd(a,b)$ divides both $a$ and $b$, it divides $1$, and since the greatest common divisor is non-negative, $\gcd(a,b)=1$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/377896", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Definition of $b|a \implies 0|0$? The definition I'm using for $b|a$ (taken from Elementary Number Theory by Jones & Jones): If $a,b \in \mathbb{Z}$ then $b$ divides $a$ if $a = qb$ for some $q \in \mathbb{Z}$. However, I have $0 = q\cdot0$ for any $q$ I choose. So this seems to imply that $0$ divides $0$, which I know is always taken to be undefined. Should the definition be for "unique $q$" rather than for "some $q$"? Thank-you.
The statement "$0$ divides $0$" and the "quantity" $0/0$ are different things. The first is exactly the statement that there exists some $a$ such that $0\cdot a=0$, which is true under the definition, while the second is not a number.
{ "language": "en", "url": "https://math.stackexchange.com/questions/377996", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Prove that $X\times Y$, with the product topology is connected I was given this proof but I don't clearly understand it. Would someone be able to dumb it down for me so I can maybe process it better? A topological space is connected if and only if every continuous function from it to $\lbrace 0,1\rbrace$ is constant. Let $F:X\times Y\rightarrow\lbrace 0,1\rbrace$ be a continuous function. For $x \in X$, we get a function $f:Y\rightarrow\lbrace 0,1\rbrace$ defined by $y\mapsto F(x,y)$. This function is continuous, and constant because $Y$ is connected, so $F$ is constant on every set $\lbrace x\rbrace\times Y$. In the same way $F$ is constant on every set of the form $X\times \lbrace y\rbrace$. This implies $F$ is constant on $X\times Y$: for any $(x,y)$ and $(a,b)$ in $X\times Y$ we get $F(x,y)=F(a,b)$, proving $X\times Y$ is connected. The main things I don't understand are why $F$ is constant on every set $\{x\}\times Y$, and how $Y$ being connected makes the function continuous and constant.
The basic fact we use is the one you start out with (which I won't prove, as I assume it's already known; it's not hard anyway): (1) $X$ is connected iff every continuous function $f: X \rightarrow \{0,1\}$ (the latter space in the discrete topology) is constant. So, given connected spaces $X$ and $Y$, we start with an arbitrary continuous function $F: X \times Y \rightarrow \{0,1\}$ and we want to show it is constant. This will show that $X \times Y$ is connected by (1). First, for a fixed $x \in X$, we can define $F_x: Y \rightarrow \{0,1\}$ by $F_x(y) = F(x,y)$. This is the composition of the maps that sends $y$ to $(x,y)$ (for this fixed $x$) and $F$, and as both these maps are continuous, so is $F_x$, for every $x$. So, as this is a continuous function from $Y$ to $\{0,1\}$ and $Y$ is connected, every $F_x$ is constant, say all its values are $c_x \in \{0,1\}$. Of course we haven't used the connectedness of $X$ yet, so we do the exact same thing fixing $y$: for every $y \in Y$, we define $F^y: X \rightarrow \{0,1\}$ by $F^y(x) = F(x,y)$. Again, this is a composition of the map sending $x$ to $(x,y)$ (for this fixed $y$) and $F$, so every $F^y$ is continuous from the connected $X$ to $\{0,1\}$ and so $F^y$ is constant with value $c'_y \in \{0,1\}$, say. The claim now is that $F$ is constant: let $(a,b)$ and ($c,d)$ be any two points in $X \times Y$. Then we also consider the point $(a,d)$ and note that: $$F(a,b) = F_a(b) = c_a = F_a(d) = F(a,d) = F^d(a) = c'_d = F^d(c) = F(c,d)\mbox{,}$$ first using that $F_a$ is constant and then that $F^d$ is constant. We basically connect any 2 points via a third using a horizontal and a vertical "line" (here via $(a,d)$) and on every "line" $F$ remains constant, so $F$ is a constant function. This concludes the proof. As a last note: the fact that a function $x \rightarrow (x,y)$, for fixed $y$, is continuous, is easy from the definitions: a basic neighbourhood of $(x,y)$ is of the form $U \times V$, $U$ open in $X$ containing $x$, $V$ open in $Y$ containing $y$, and its inverse image under this function is just $U$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/378043", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Big-O notation in division Let $r(x)=\frac{p(x)}{q(x)}$. Expanding $p$ and $q$ around $0$ gives $$ \frac{p_0+p'_0x+\mathcal{O}(x^2)}{q_0+q'_0x+\mathcal{O}(x^2)}. $$ Now the claim is that the above expression is equal to $$ \frac{p_0+p'_0x}{q_0+q'_0x}+\mathcal{O}(x^2). $$
Try evaluating the difference: $\displaystyle \frac{p_0+p'_0x+\mathcal{O}(x^2)}{q_0+q'_0x+\mathcal{O}(x^2)}-\frac{p_0+p'_0x}{q_0+q'_0x}$ and recall that $c\,\mathcal{O}(x^2)=\mathcal{O}(x^2)$ for any constant $c$ and $x\,\mathcal{O}(x^2)=\mathcal{O}(x^2)$ (indeed $\mathcal{O}(x^3)$).
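One way to carry the hint through (a sketch, assuming $q_0 \neq 0$ so both denominators are bounded away from $0$ near $x=0$): putting the difference over a common denominator,
$$\frac{p_0+p'_0x+\mathcal{O}(x^2)}{q_0+q'_0x+\mathcal{O}(x^2)}-\frac{p_0+p'_0x}{q_0+q'_0x} = \frac{(q_0+q'_0x)\,\mathcal{O}(x^2)-(p_0+p'_0x)\,\mathcal{O}(x^2)}{\left(q_0+q'_0x+\mathcal{O}(x^2)\right)\left(q_0+q'_0x\right)}=\mathcal{O}(x^2),$$
since the numerator is $\mathcal{O}(x^2)$ and the denominator tends to $q_0^2 \neq 0$.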
{ "language": "en", "url": "https://math.stackexchange.com/questions/378115", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Solve $\dfrac{1}{1+\frac{1}{1+\ddots}}$ I'm currently a high school junior enrolled in AP Calculus. I found this website that's full of "math geeks" and I hope you can give me some clues on how to solve this problem. I'm pretty desperate for this since I'm only about $0.4\%$ from an A- and I can't really afford a B now... The problem is to simplify: $$\cfrac{1}{1+\cfrac{1}{1+\cfrac{1}{1+\cfrac{1}{\ddots}}}}$$ What I did was use the basic "limits" taught in class, and I figured out that the denominator would just keep going like this and approach $1$, so this whole thing equals $1$, but I think it's not that easy...
This is related to the golden ratio (also known as $\varphi$) expressed as a continued fraction. The infinite continued fraction $1+\cfrac{1}{1+\cfrac{1}{1+\ddots}}$ is the solution of the quadratic equation $x^2-x-1=0$: that equation can be rewritten as $x=1+\frac{1}{x}$, and iterating this form produces the continued fraction. Your expression is the reciprocal of that, so it equals $1/\varphi = \varphi-1 \approx 0.618$. See Wikipedia for "Golden Ratio", or read the book The Golden Ratio written by Mario Livio, for example. There you will find more answers.
{ "language": "en", "url": "https://math.stackexchange.com/questions/378162", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12", "answer_count": 4, "answer_id": 3 }
whether or not there exists a non-constant entire function $f(z)$ satisfying the following conditions In each of the cases below, determine whether or not there exists a non-constant entire function $f(z)$ satisfying the following conditions. ($1$) $f(0)=e^{i\alpha}$ and $|f(z)|=1/2$ for all $z \in \partial\Delta$ (the boundary of the unit disc $\Delta$). ($2$) $f(e^{i\alpha})=3$ and $|f(z)|=1$ for all $z$ with $|z|=3$. ($3$) $f(0)=1$, $f(i)=0$, and $|f(z)| \le 10$ for all $z \in \mathbb{C}$. ($4$) $f(0)=1$, $f(i)=0$, and $|f(z)| \le 5$ for all $z \in \Delta$. ($5$) $f(z)=0$ for all $z=n\pi$, $n\in \mathbb{Z}$. My thoughts: ($1$) No: by the maximum modulus principle $|f(z)|$ attains its maximum value on the boundary, so $1/2$ would be the maximum value, but $|f(0)|=1$, which is a contradiction. ($2$) No, by the same argument as above. ($3$) No: $f(z)$ is bounded, hence by Liouville's theorem it must be constant. ($4$) I think such an $f$ exists, but I am not sure. ($5$) True: $\sin z$ is an example. Please, somebody verify my answers.
Looks good to me. Surely you can find an example for (4)? A first degree polynomial should do the job.
{ "language": "en", "url": "https://math.stackexchange.com/questions/378253", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
construction of the set of natural numbers (logic) I identify the natural number $0$ with the empty set $\emptyset$, $1$ with $S(0)$, $2$ with $S(1)$, etc. The axiom of infinity says $\exists x (\emptyset\in x\wedge \forall z\in x\space z\cup\{z\}\in x)$ and the axiom schema of specification says $\forall y_0,...,y_n\exists x\forall z (z\in x\leftrightarrow (z\in y_0\wedge \phi(z,y_1,...,y_n)))$. My question now: why is there a smallest such set $x$, which can be identified with the natural numbers?
Let $y$ be an inductive set whose existence follows from the axiom of infinity, then consider $\{x\subseteq y\mid x\text{ is inductive}\}$. This is a definable collection of members of the power set of $y$, so it is a set, and $y$ is there so it's not an empty set. Now take the intersection of all those sets. This is an inductive set as well (you have to prove this, of course). Call this inductive set $N$. Now prove that if $x$ is any inductive set then $N\subseteq x$, by considering $M=N\cap x$. Show that $M$ is inductive as well, and $M\subseteq y$, now use the property which defined $N$ to conclude $N=M$ and therefore $N\subseteq x$. And now we're done.
{ "language": "en", "url": "https://math.stackexchange.com/questions/378325", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Prove that if $A - A^2 = I$ then $A$ has no real eigenvalues Given: $$ A \in M_{n\times n}(\mathbb R) \; , \; A - A^2 = I $$ Then we have to prove that $A$ does not have real eigenvalues. How do we prove such a thing?
Using index notation, $A-A^2=I$ can be written as $A_{ij}-A_{ik}A_{kj}=\delta_{ij}$. By definition, an eigenvector $n$ with eigenvalue $\lambda$ satisfies $A_{ij}n_i=\lambda n_j$. So $A_{ij}n_i-A_{ik}A_{kj}n_i=\delta_{ij}n_i$, hence $\lambda n_j -\lambda n_k A_{kj}=n_j$, whence $\lambda n_j -\lambda^2 n_j=n_j$, or $(\lambda^2-\lambda+1)n_j=0$, with $n_j\neq 0$ an eigenvector. But $\lambda^2-\lambda+1=0$ has no real roots.
{ "language": "en", "url": "https://math.stackexchange.com/questions/378386", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 6, "answer_id": 5 }
If $S_n = 1+2+3+\cdots+n$, then prove that the last digit of $S_n$ is not 2, 4, 7, or 9. If $S_n = 1 + 2 + 3 + \cdots + n,$ then prove that the last digit of $S_n$ cannot be 2, 4, 7, or 9 for any whole number $n$. What I have done: (1) I have determined that it is supposed to be done with mathematical induction. (2) The formula for the finite sum is $\frac{1}{2}n(n+1)$. (3) Since $n(n+1)$ has a factor of two, $S_n$ ending in 2, 4, 7, or 9 would force $n(n+1)$ to end in 4 or 8. (4) Knowing this, we want to show that $n(n+1)\bmod 10 \neq 4$ and $n(n+1)\bmod 10 \neq 8$.
First show that the last digit of $n^2$ is always from the set $M=\{0,1,4,5,6,9\}$. Then consider all cases for the last digit of $n$ (if the last digit is a $1$, I get a $2$ as the last digit of $n(n+1)= n^2 + n$, and so on). If you do all of this $\bmod 5$, you only have 5 easy cases to check, and you show that $n^2 + n \not\equiv 3, 4 \pmod 5$, which rules out the last digits 2, 4, 7, 9 for $S_n = \frac{n^2+n}{2}$.
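A brute-force check of the claim (an empirical sanity check, not a proof):

```python
# last digits of the triangular numbers S_n = n(n+1)/2
digits = {n * (n + 1) // 2 % 10 for n in range(1, 1000)}
print(sorted(digits))  # [0, 1, 3, 5, 6, 8] -- 2, 4, 7, 9 never occur
```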
{ "language": "en", "url": "https://math.stackexchange.com/questions/378478", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 4, "answer_id": 2 }
Is the linear dependence test also valid for matrices? I have the set of matrices $ \begin{pmatrix} 1 & 0 \\ 0 & 0 \\ \end{pmatrix} $ $ \begin{pmatrix} 0 & 1 \\ 0 & 0 \\ \end{pmatrix} $ $ \begin{pmatrix} 0 & 0 \\ 1 & 0 \\ \end{pmatrix} $ $ \begin{pmatrix} 0 & 0 \\ 0 & 1 \\ \end{pmatrix} $ and I'm asked to check if said set is a basis of $ \begin{pmatrix} a & b \\ c & d \\ \end{pmatrix} $, that is, any 2x2 matrix. I know that in this case it is really simple to do it like $$\pmatrix{a&b\cr c&d\cr}=\alpha\pmatrix{1&0\cr0&0\cr}+\beta\pmatrix{0&1\cr0&0\cr}+\gamma\pmatrix{0&0\cr1&0\cr}+\delta\pmatrix{0&0\cr0&1\cr}$$ But not all cases are that simple, so I was wondering if I could do it another way: When I'm asked to check if a set of vectors is a basis of a vector space $R^n$ I just see if said vectors are linearly independent, given that if I have n linearly independent vectors $\{(v1,...,vn),...,(u1,...un)\}$ they'll be a basis for $R^n$. Can I use the same test and say that if $$\pmatrix{0&0\cr 0&0\cr}=\alpha\pmatrix{1&0\cr0&0\cr}+\beta\pmatrix{0&1\cr0&0\cr}+\gamma\pmatrix{0&0\cr1&0\cr}+\delta\pmatrix{0&0\cr0&1\cr}$$ Has only one solution $\alpha=\beta=\gamma=\delta=0$ then they are all linearly independent, and since it's all 2x2 it'll be a basis for all 2x2 matrices? Does that make any sense?
$\mathbb{R}^{N\times N}$ as a linear space (with addition between elements and multiplication by scalars) is no different from $\mathbb{R}^{N^2}$ endowed with these same operations. There is a bijective mapping between elements and operations in these two spaces, so any tricks you know about vectors in $\mathbb{R}^4$ are applicable to $\mathbb{R}^{2\times 2}$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/378541", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Stieltjes Integral meaning. Can anybody give a geometrical interpretation of the Stieltjes integral: $$\int_a^bf(\xi)\,d\alpha(\xi)$$ How would we calculate? $$\int_a^b \xi^3\,d\alpha(\xi)$$ for example.
$$\int_a^b \xi^3\,d\alpha(\xi)=\int_a^b\xi^3\alpha'(\xi)\,d\mu(\xi) $$ in case $\alpha$ is differentiable. Here $\mu$ denotes the Lebesgue measure. To justify this formula, you have to use Radon-Nikodym derivatives (http://en.wikipedia.org/wiki/Radon%E2%80%93Nikodym_theorem). Generally the function $\alpha$ gives you a way to measure elementary sets, such as intervals. This function produces the corresponding Stieltjes measure. In the same way, measuring intervals (or cubes) in the natural way ($\mu([0,1])=1$) produces Lebesgue measure.
{ "language": "en", "url": "https://math.stackexchange.com/questions/378621", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11", "answer_count": 3, "answer_id": 0 }
Existence of a certain functor $F:\mathrm{Grpd}\rightarrow\mathrm{Grp}$ Let $\mathrm{Grpd}$ denote the category of all groupoids, and let $\mathrm{Grp}$ denote the category of all groups. Are there functors $F\colon\mathrm{Grpd}\rightarrow \mathrm{Grp}$, $G\colon\mathrm{Grp}\rightarrow \mathrm{Grpd}$ such that $GF=1_{\mathrm{Grpd}}$? Dear all, I know the question is not easy (at least for me). I don't expect you to solve a problem that is possibly uninteresting for you and waste your time on it. I was just asking to see if anyone had seen something similar, so that s/he could give me a reference to it. Thank you
Such a pair of functors does not exist. Reason 1 (if you accept the empty groupoid): In the category of groups, every pair of objects has a morphism between them, while in the category of groupoids there is no morphism from the terminal object to the initial object. It follows that the initial object can't be in the image of $G$. Reason 2 (if you don't accept the empty groupoid): Let $A$ and $B$ be the discrete groupoids on the sets $\{0\}$ and $\{0,1\}$ respectively, and let $f,g:A\to B$ be functors defined by $f(0)=0$ and $g(0)=1$. Now suppose such a pair of functors exists, and let $z: F(A)\to F(A)$, $z':F(A)\to F(B)$ be the group homomorphisms sending everything to the identity element. Since $A$ is the terminal object in the category of groupoids, it follows that $G(z) = 1_A$. We have $f = f 1_A= GF(f) G(z)= G(F(f)z)=G(z')$ and similarly $g= G(z')$. This leads to $f=g$, which is a contradiction.
{ "language": "en", "url": "https://math.stackexchange.com/questions/378757", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "23", "answer_count": 1, "answer_id": 0 }
Meaning and types of geometry I heard that there are several kinds of geometries, for instance projective geometry and non-Euclidean geometry, besides Euclidean geometry. So the question is: what do you mean by a geometry? Do we truly need many geometries, and if yes, what kind of results can we find in one geometry and not in the others? Thanks a lot.
Different geometries denote different sets of axioms, which in turn result in different sets of conclusions. I'll concentrate on the planar cases. * *Projective geometry is pure incidence geometry. The basic relation expresses whether or not a point lies on a line or not. One of its axioms requires that two different lines will always have a point of intersection, which in the case of parallel lines is usually interpreted as being infinitely far away in the direction of those parallel lines. Projective geometry does not usually come with any metric to measure lengths or angles, but using concepts by Cayley and Klein, many different geometries can be embedded into the projective plane by distinguishing a specific conic as the fundamental object of that geometry. This includes Euclidean and hyperbolic geometry as well as pseudo-Euclidean geometry and relativistic space-time geometry, among others. *Non-Euclidean geometries would in the literal sense be any geometry which doesn't exactly follow Euclid's set of axioms. More specifically, though, it is usually used for geometries which satisfy all of his postulates except for the parallel postulate. This will always include hyperbolic geometry and, depending on how you interpret the other axioms, usually includes elliptic geometry as well. One important difference between these and Euclidean geometry is the way lengths and angles are measured. It turns out that hyperbolic geometry describes the geometry on an infinite surface of constant negative curvature, whereas elliptic geometry is the geometry on a positively curved surface and therefore closely related to spherical geometry.
{ "language": "en", "url": "https://math.stackexchange.com/questions/378840", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Graph of $\quad\frac{x^3-8}{x^2-4}$. I was using google graphs to find the graph of $$\frac{x^3-8}{x^2-4}$$ and it gave me: Why is $x=2$ defined as $3$? I know that it is supposed to tend to 3. But where is the asymptote???
Because there is a removable singularity at $x = 2$, there will be no asymptote. You're correct that the function is not defined at $x = 2$. Consider the point $(2, 3)$ to be a hole in the graph. Note that in the numerator, $$(x-2)(x^2 + 2x + 4) = x^3 - 8,$$ and in the denominator $$(x-2)(x+ 2) = x^2 - 4$$ When we simplify by canceling (while recognizing $x\neq 2$), we end with the rational function $$\frac{x^2 + 2x + 4}{x+2}$$ We can confirm that the "hole" at $x = 2$ is a removable singularity by confirming that its limit exists: $$\lim_{x \to 2} \frac{x^2 + 2x + 4}{x+2} = 3$$ In contrast, however, we do see, that there is an asymptote at $x = -2$. We can know this without graphing by evaluating the limit of the function as $x$ approaches $-2$ from the left and from the right: $$\lim_{x \to -2^-} \frac{x^2 + 2x + 4}{x+2} \to -\infty$$ $$\lim_{x \to -2^+} \frac{x^2 + 2x + 4}{x+2} \to +\infty$$ Hence, there exists a vertical asymptote at $x = -2$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/378914", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 3, "answer_id": 0 }
Testing for convergence in Infinite series with factorial in numerator I have the following infinite series that I need to test for convergence/divergence: $$\sum_{n=1}^{\infty} \frac{n!}{1 \times 3 \times 5 \times \cdots \times (2n-1)}$$ I can see that the denominator will eventually blow up and surpass the numerator, and so it would seem that the series would converge, but I am not sure how to test this algebraically given the factorial in the numerator and the sequence in denominator. The recursive function for factorial $n! = n \times (n-1)!$ doesn't seem to simplify things in this case, as I cannot eliminate the $(2n-1)$ in the denominator. Is there a way to find a general equation for the denominator such that I could perform convergence tests (e.g. by taking the integral, limit comparison, etc.)
We have $$a_n = \dfrac{n!}{(2n-1)!!} = \dfrac{n!}{(2n)!} \times 2^n n! = \dfrac{2^n}{\dbinom{2n}n}$$ Use ratio test now to get that $$\dfrac{a_{n+1}}{a_n} = \dfrac{2^{n+1}}{\dbinom{2n+2}{n+1}} \cdot \dfrac{\dbinom{2n}n}{2^n} = \dfrac{2(n+1)(n+1)}{(2n+2)(2n+1)} = \dfrac{n+1}{2n+1}$$ We can also use Stirling. From Stirling, we have $$\dbinom{2n}n \sim \dfrac{4^n}{\sqrt{\pi n}}$$ Use this to conclude, about the convergence/divergence of the series. EDIT $$1 \times 3 \times 5 \times \cdots \times(2n-1) = \dfrac{\left( 1 \times 3 \times 5 \times \cdots \times(2n-1) \right) \times \left(2 \times 4 \times \cdots \times (2n)\right)}{ \left(2 \times 4 \times \cdots \times (2n)\right)}$$ Now note that $$\left( 1 \times 3 \times 5 \times \cdots \times(2n-1) \right) \times \left(2 \times 4 \times \cdots \times (2n)\right) = (2n)!$$ and $$\left(2 \times 4 \times \cdots \times (2n)\right) = 2^n \left(1 \times 2 \times \cdots \times n\right) = 2^n n!$$ Hence, $$1 \times 3 \times 5 \times \cdots \times(2n-1) = \dfrac{(2n)!}{2^n \cdot n!}$$
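Since $\frac{a_{n+1}}{a_n} = \frac{n+1}{2n+1} \to \frac12 < 1$, the ratio test gives convergence. A quick numerical look at the partial sums (a sanity check added here, not part of the original answer):

```python
from math import comb

# partial sums of sum_{n>=1} 2^n / C(2n, n); terms shrink roughly like (1/2)^n
s = sum(2**n / comb(2 * n, n) for n in range(1, 60))
print(s)  # about 2.5708 (numerically this equals 1 + pi/2)
```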
{ "language": "en", "url": "https://math.stackexchange.com/questions/378991", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Find all matrices $A$ of order $2 \times 2$ that satisfy the equation $A^2-5A+6I = O$ Find all matrices $A$ of order $2 \times 2$ that satisfy the equation $$ A^2-5A+6I = O $$ My Attempt: We can separate the $A$ term of the given equality: $$ \begin{align} A^2-5A+6I &= O\\ A^2-3A-2A+6I^2 &= O \end{align} $$ This implies that $A\in\{3I,2I\} = \left\{\begin{pmatrix} 3 & 0\\ 0 & 3 \end{pmatrix}, \begin{pmatrix} 2 & 0\\ 0 & 2 \end{pmatrix}\right\}$. Are these the only two possible values for $A$, or are there other solutions?If there are other solutions, how can I find them?
The Cayley-Hamilton theorem states that every matrix $A$ satisfies its own characteristic polynomial; that is the polynomial for which the roots are the eigenvalues of the matrix: $p(\lambda)=\det[A-\lambda\mathbb{I}]$. If you view the polynomial: $a^2-5a+6=0$, as a characteristic polynomial with roots $a=2,3$, then any matrix with eigenvalues that are any combination of 2 or 3 will satisfy the matrix polynomial: $A^2-5A+6\mathbb{I}=0$, that is any matrix similar to: $\begin{pmatrix}3 & 0\\ 0 & 3\end{pmatrix}$,$\begin{pmatrix}2 & 0\\ 0 & 2\end{pmatrix}$,$\begin{pmatrix}2 & 0\\ 0 & 3\end{pmatrix}$. Note:$\begin{pmatrix}3 & 0\\ 0 & 2\end{pmatrix}$ is similar to $\begin{pmatrix}2 & 0\\ 0 & 3\end{pmatrix}$. To see why this is true, imagine $A$ is diagonalized by some matrix $S$ to give a diagonal matrix $D$ containing the eigenvalues $D_{i,i}=e_i$, $i=1..n$, that is: $A=SDS^{-1}$, $SS^{-1}=\mathbb{I}$. This implies: $A^2-5A+6\mathbb{I}=0$, $SDS^{-1}SDS^{-1}-5SDS^{-1}+6\mathbb{I}=0$, $S^{-1}\left(SD^2S^{-1}-5SDS^{-1}+6\mathbb{I}\right)S=0$, $D^2-5D+6\mathbb{I}=0$, and because $D$ is diagonal, for this to hold each diagonal entry of $D$ must satisfy this polynomial: $D_{i,i}^2-5D_{i,i}+6=0$, but the diagonal entries are the eigenvalues of $A$ and thus it follows that the polynomial is satisfied by $A$ iff the polynomial is satisfied by the eigenvalues of $A$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/379076", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 5, "answer_id": 0 }
Summing series with factorials in How do you sum this series? $$\sum _{y=1}^m \frac{y}{(m-y)!(m+y)!}$$ My attempt: $$\frac{y}{(m-y)!(m+y)!}=\frac{y}{(2m)!}{2m\choose m+y}$$ My thoughts were, sum this from zero, get a trivial answer, take away the first term. But actually I don't think this will work very well. This question was originally under probability, but the problem is that I can't sum a series and really has nothing to do with probability (reason for the first comment)
For example, one can write \begin{align} \sum_{y=0}^m\frac{y}{(m-y)!(m+y)!} &= \sum_{k=0}^m\frac{m-k}{k!(2m-k)!} \\ &=\frac{m}{(2m)!}\sum_{k=0}^m{2m \choose k}-\frac{1}{(2m-1)!}\sum_{k=1}^{m}{2m-1\choose k-1} \\ &= \frac{m}{2(2m)!}\left[{2m\choose m}+\sum_{k=0}^{2m}{2m \choose k}\right]-\frac{1}{(2m-1)!}\sum_{k=0}^{m-1}{2m-1\choose k} \\ &= \frac{m}{2(2m)!}\left[{2m\choose m}+\left(1+1\right)^{2m}\right]-\frac{1}{2(2m-1)!}\sum_{k=0}^{2m-1}{2m-1\choose k} \\ &= \frac{m}{2(2m)!}{2m\choose m}+\frac{m\cdot 2^{2m}}{2(2m)!}-\frac{2^{2m-1}}{2(2m-1)!} \\ &= \frac{m}{2(2m)!}{2m\choose m}\;. \end{align} All we have used in the way is that $\displaystyle{n\choose k} ={n\choose n-k}$ and that $\displaystyle(1+1)^n=\sum_{k=0}^n{n\choose k}$.
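A quick numerical verification of the closed form (note that $\frac{m}{2(2m)!}\binom{2m}{m}$ simplifies to $\frac{m}{2(m!)^2}$):

```python
from math import factorial

def lhs(m):
    return sum(y / (factorial(m - y) * factorial(m + y)) for y in range(1, m + 1))

def rhs(m):
    return m / (2 * factorial(m) ** 2)   # = m/(2(2m)!) * C(2m, m)

for m in range(1, 8):
    print(m, lhs(m), rhs(m))             # the two columns agree
```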
{ "language": "en", "url": "https://math.stackexchange.com/questions/379149", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 2, "answer_id": 0 }
closest pair in N-Dimensional I have to find the closest pair in $n$ dimensions, and I have a problem with the combine step. I use divide and conquer: I first choose the median $x$, split the points into left and right parts, and then find the smallest distance in the left and right parts respectively, $d_r$ and $d_l$. Then $d_m=\min(d_r,d_l)$. Next I have to consider pairs across the hyperplane through the median $x$: the closest pair could lie in the $2d_m$-thick slab around it, and I don't understand what to do next. (How do you reduce the dimension?) Here are the slides that I am following; please explain the combine step, I have read it for a day and still cannot figure out what it is doing (from p. 9): https://docs.google.com/file/d/0ByMlz1Uisc9OWmxBRUk1LW9oMlk/edit?usp=sharing Thx in advance.
The closest pair was either already found, or is in the $2d_m$-thick slab, which can only contain a small number of points. There is no need to reduce the dimension: just apply the algorithm recursively left, right, and on the slab (cycling the direction the separating hyperplane is perpendicular to); optimality is implicit. A sketch is below. Here are other slides.
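A minimal Python sketch of this recursion (simplified: the slab is handled by brute force rather than by a further recursive pass, so it is correct but not worst-case optimal):

```python
import math
from itertools import combinations

def dist(p, q):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def brute(points):
    return min(dist(p, q) for p, q in combinations(points, 2))

def closest_pair(points, axis=0):
    """Smallest pairwise distance among d-dimensional points."""
    n = len(points)
    if n <= 3:
        return brute(points)
    d = len(points[0])
    pts = sorted(points, key=lambda p: p[axis])
    mid = n // 2
    median = pts[mid][axis]
    nxt = (axis + 1) % d                       # cycle the splitting direction
    best = min(closest_pair(pts[:mid], nxt),
               closest_pair(pts[mid:], nxt))
    # points within `best` of the separating hyperplane form the slab
    slab = [p for p in pts if abs(p[axis] - median) < best]
    if len(slab) >= 2:
        best = min(best, brute(slab))
    return best

print(closest_pair([(0, 0, 0), (1, 2, 2), (4, 4, 4), (1, 2, 2.5)]))  # 0.5
```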
{ "language": "en", "url": "https://math.stackexchange.com/questions/379243", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
$\frac{d}{dt} \int_{-\infty}^{\infty} e^{-x^2} \cos(2tx) dx$ Prove that: $\frac{d}{dt} \int_{-\infty}^{\infty} e^{-x^2} \cos(2tx) dx=\int_{-\infty}^{\infty} -2x e^{-x^2} \sin(2tx) dx$ This is my proof, for all $t \in \mathbb{R}$ (the improper integral converges absolutely for all $t \in \mathbb{R}$). I consider $g(t)=\int_{-\infty}^{\infty} e^{-x^2} \cos(2tx) dx$. Let $h \ne 0$. $\left| \frac{g(t+h)-g(t)}{h}-\int_{-\infty}^{\infty} -2x e^{-x^2} \sin(2tx) dx \right|\le\int_{-\infty}^{\infty} \left|\frac{\cos(2(t+h)x)-\cos(2tx)}{h}-(-2x)\sin(2tx)\right| e^{-x^2}dx$ By the mean value theorem, and since $\int_{-\infty}^{\infty} e^{-x^2} dx=\sqrt{\pi}$, $\int_{-\infty}^{\infty} \left|\frac{\cos(2(t+h)x)-\cos(2tx)}{h}-(-2x)\sin(2tx)\right| e^{-x^2}dx=\left|\frac{\cos(2(t+h)\bar{x})-\cos(2t\bar{x})}{h}-(-2\bar{x})\sin(2t\bar{x})\right| \sqrt{\pi}$ $\cos(2tx)$ is differentiable at $\bar{x}$, so for a fixed $\epsilon>0$ there is $\delta$ such that if $0<|h|<\delta$: $\left|\frac{\cos(2(t+h)\bar{x})-\cos(2t\bar{x})}{h}-(-2\bar{x})\sin(2t\bar{x})\right| \sqrt{\pi}<\sqrt{\pi} \ \epsilon$ Is this correct? Are there other ways? UPDATE: probably the proof is incorrect, because when I use the mean value theorem, $\bar{x}$ also depends on $h$, and hence the continuity of $\bar{x}(h)$ is not obvious; then I can't guarantee the differentiability of $\cos(2 t \bar{x}(h))$ in $\bar{x}$ as $h \rightarrow 0$. Am I right?
Your way looks good. Here's an alternate way: evaluate both integrals, and see that the derivative of one equals the other. For example, $$\begin{align}\int_{-\infty}^{\infty} dx \, e^{-x^2} \, \cos{2 t x} &= \Re{\left [\int_{-\infty}^{\infty} dx \, e^{-x^2} e^{i 2 t x} \right ]}\\ &= e^{-t^2}\Re{\left [\int_{-\infty}^{\infty} dx \, e^{-(x-i t)^2} \right ]}\\ &= \sqrt{\pi}\, e^{-t^2}\end{align}$$ The derivative of this with respect to $t$ is $$\frac{d}{dt} \int_{-\infty}^{\infty} dx \, e^{-x^2} \, \cos{2 t x} = -2 \sqrt{\pi} t e^{-t^2}$$ Now try taking the derivative inside the integral: $$\begin{align}-2 \int_{-\infty}^{\infty} dx \,x \, e^{-x^2} \,\sin{2 t x} &= -2 \Im{\left [\int_{-\infty}^{\infty} dx\,x \, e^{-x^2} e^{i 2 t x} \right ]}\\ &= -2 e^{-t^2}\Im{\left [\int_{-\infty}^{\infty} dx\,x \, e^{-(x-i t)^2} \right ]}\\ &= -2 e^{-t^2}\Im{\left [\int_{-\infty}^{\infty} dx\,(x+i t) e^{-x^2} \right ]} \\ &= -2 t \sqrt{\pi} e^{-t^2} \end{align}$$ QED
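A numerical cross-check of both closed forms (using SciPy; the value $t=0.7$ is an arbitrary test point):

```python
import numpy as np
from scipy.integrate import quad

t = 0.7
val, _ = quad(lambda x: np.exp(-x**2) * np.cos(2 * t * x), -np.inf, np.inf)
print(val, np.sqrt(np.pi) * np.exp(-t**2))            # integral = sqrt(pi) e^{-t^2}

der, _ = quad(lambda x: -2 * x * np.exp(-x**2) * np.sin(2 * t * x), -np.inf, np.inf)
print(der, -2 * np.sqrt(np.pi) * t * np.exp(-t**2))   # matches d/dt of the above
```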
{ "language": "en", "url": "https://math.stackexchange.com/questions/379282", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Square and reverse reading of an integer For all $n=\overline{a_k a_{k-1}\ldots a_1 a_0} := \sum_{i=0}^k a_i 10^i\in \mathbb{N}$, where $a_i \in \{0,...,9\}$ and $a_k \neq 0$, we define $f(n)=\overline{a_0 a_1 \ldots a_{k-1} a_k}= \sum_{i=0}^k a_{k-i}10^i$. Is it true that, for all $m=\overline{a_k a_{k-1}\ldots a_1 a_0} \in \mathbb{N}$, we have $f(m\times m)=f(m)\times f(m) \implies$$\forall i \in \{0, \ldots, k\}, a_i \in \{0,1,2,3\}$ ? Example: $f(201)\times f(201)=102 \times 102=10404=f(40401)=f(201\times 201)$. It's true for $m \leq 10^8$.
If $m=...4$, then $m^2=...6$, but $f(m)=4...$ and $f(m)^2=1...$ or $2...$ (because $4^2=16$ and $5^2=25$). The same can be calculated explicitly for $m$ ending in $5, \ldots, 8$, and only a little bit different for $9$. If $m=...9$, then $m^2=...1$, but $f(m)=9...$ and $f(m)^2=8...$ or $9...$ not $1...$ (as $9^2=81$, $10^2=100$ and the inequality is strict).
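A brute-force search for counterexamples (an empirical check only, consistent with the claim already verified up to $10^8$ in the question):

```python
def rev(n):
    return int(str(n)[::-1])

# m satisfying f(m*m) = f(m)*f(m) but having a digit larger than 3
bad = [m for m in range(1, 10**5)
       if rev(m * m) == rev(m) ** 2 and any(d > '3' for d in str(m))]
print(bad)  # [] -- every such m has digits in {0,1,2,3}
```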
{ "language": "en", "url": "https://math.stackexchange.com/questions/379369", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 1, "answer_id": 0 }
How to enumerate the solutions of a quadratic equation When we solve a quadratic equation, and let's assume that the solutions are $x=2$, $x=3$, should I say * *$x=2$ and $x=3$ *$x=2$ or $x=3$. What is the correct way to say it?
You should say $$x=2 \color{red}{\textbf{ or }}x=3.$$ $x=2$ and $x=3$ is wrong since $x$ cannot be equal to $2$ and $3$ simultaneously, since $2 \neq 3$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/379437", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 2 }
Solve for $x$: question on logarithms. The question: $$\log_3 x \cdot \log_4 x \cdot \log_5 x = \log_3 x \cdot \log_4 x \cdot \log_5 x \cdot \log_5 x \cdot \log_4 x \cdot \log_3 x$$ My mother who's a math teacher was asked this by one of her students, and she can't quite figure it out. Anyone got any ideas?
Following up on Jaeyong Chung's answer, and working it out: $$ 1 =\log_3x\log_4x\log_5x$$ $$1=\frac{(\ln x)^3}{\ln3\ln4\ln5}$$ $$(\ln x)^3 = \ln3\ln4\ln5$$ $$(\ln x) = \sqrt[3]{\ln3\ln4\ln5}$$ $$x = \exp\left(\sqrt[3]{\ln3\ln4\ln5}\right) \approx 3.85093$$ EDIT: And, of course, the obvious answer that everyone will overlook: $x=1$ makes both sides of the equation zero. :D
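A numeric check of the nontrivial root (a sketch using Python's `math` module):

```python
from math import log, exp

x = exp((log(3) * log(4) * log(5)) ** (1 / 3))
print(x)                                   # 3.85093...
print(log(x, 3) * log(x, 4) * log(x, 5))   # 1.0, as required
```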
{ "language": "en", "url": "https://math.stackexchange.com/questions/379486", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 5, "answer_id": 1 }
Uncountability of the equivalence classes of $\mathbb{R}/\mathbb{Q}$ Let $a,b\in[0,1]$ and define the equivalence relation $\sim$ by $a\sim b\iff a-b\in\mathbb{Q}$. This relation partitions $[0,1]$ into equivalence classes, where every class consists of a set of numbers which are equivalent under $\sim$. My textbook states (without proof): The set $[0,1]/\sim$ consists of uncountably many of these classes, where each class consists of countably many members. How can I formally prove this statement?
If you know $\Bbb Q$ is countable, that covers the second half. Then use the fact that a countable union of countable sets is again countable to show that there must be uncountably many classes.
{ "language": "en", "url": "https://math.stackexchange.com/questions/379591", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 2, "answer_id": 0 }
Learning trigonometry on my own. I have been teaching myself math, beginning at a grade 10 level, for a while now, and I need to learn trigonometry from near scratch. I am seeking both books and perhaps lectures on trigonometry, and possibly geometry, as some overlap does exist. I am not looking for algebra/precalc textbooks, as my algebra knowledge is quite good. The primary goal is to be prepared for both calculus and linear algebra in the future. I have done some research, but haven't really found much that starts trigonometry from the beginning. Though I am aware of and have used Khan Academy, this is not what I seek. I find things missing from Khan Academy, and things are usually taught too generally. I prefer more traditional methods of learning: more doing, less watching. However, I am not opposed to other suggestions. -$Thanks$ $edit:$ Though I love Paul's online notes, it's unfortunate he doesn't have much in the way of trig notes. :(
The Indian mathematician Ramanujan learned his trigonometry from Sidney Luxton Loney's "Plane Trigonometry". Since it's a free Google book, what have you got to lose? http://books.google.com/books?id=Mtw2AAAAMAAJ&printsec=frontcover&dq=editions:ix4vRrrEehgC&hl=en&sa=X&ei=Qu2CUeznBaO-yQH2tYCwDA&ved=0CDQQ6AEwAQ#v=onepage&q&f=false
{ "language": "en", "url": "https://math.stackexchange.com/questions/379650", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 5, "answer_id": 0 }
Another Birthday Problem (Probability/Combinatorics) What is the smallest number of people in a room to assure that the probability that at least two were born on the same day of the week is at least 40%? I understand when approaching this type of problem, you simplify it so there's only 365 days. Also, I thought you go about the question by finding the probability that no one is born on the same day of the week. Then you subtract by 1 to get the solution: Therefore, if the first person can have a birthday on any of 365 days, and the second is (365-8) because 1 week has to be removed (since question asks at least two born on the same day). I thought the answer is: $1-\cfrac{365(365-8)...(365-r+1)}{365^{r}}\tag{1}$ The solution is 4 people but when I enter r=4, I get 5.6% which is obviously wrong. Any help is appreciated. Thank you.
Here are the first several results for the same day of the week: $$ \begin{align} 1-\frac77&=0&\text{$1$ person}&(0\%)\\ 1-\frac77\frac67&=\frac17&\text{$2$ people}&(14.29\%)\\ 1-\frac77\frac67\frac57&=\frac{19}{49}&\text{$3$ people}&(38.78\%)\\ 1-\frac77\frac67\frac57\frac47&=\frac{223}{343}&\text{$4$ people}&(65.01\%)\\ \end{align} $$
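The same computation in a few lines of Python, scanning for the smallest group size (a verification sketch; the helper name is ours):

```python
from math import prod

def p_shared_weekday(r):
    """P(at least two of r people share a day of the week)."""
    return 1 - prod((7 - i) / 7 for i in range(r))

r = 1
while p_shared_weekday(r) < 0.40:
    r += 1
print(r, round(p_shared_weekday(r), 4))  # 4 0.6501
```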
{ "language": "en", "url": "https://math.stackexchange.com/questions/379705", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Solve a written problem with matrix I have the following problem described here: The government gives an allocation to the children who benefit from child-care services. The children are split into 3 groups: preschool, first cycle and second cycle. The allocation is different for each group: 2$ for the first cycle, and the others are unknown; let's name them x and y. On the other hand, we analysed the following data from 3 different schools. Rainbow School: 43 preschool children, 160 first cycle children and 140 second cycle. Total allocation: 589$. Cumulus School: 50, 170, 160; total: k (unknown). Nimbus School: 100, 88, 80; total: 556$. Now, I must represent this problem with a matrix equation, and I have tried the following: $$ M = \begin{array}{cccc} 43x & 320 & 140y & 589 \\ 50x & 340 & 160y & k \\ 100x & 176 & 80y & 556 \end{array} $$ I'm not sure that it makes sense, and I'm not sure how to solve it either. What now?
There's an easier solution, I think. Take the Rainbow and Nimbus schools alone. This yields a system of equations: $$ 43x+320+140y=589\\ 100x+176+80y=556. $$ Two variables and two equations means you can find the solutions for $x$ and $y$. Following that, you can plug those values into your formula for Cumulus and find $k$. EDIT: if you know that $x$ and $y$ are integers, you can plug in some low values and probably find $x,y$ without solving the system properly.
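Carrying that out numerically (a sketch; the last line uses the Cumulus counts 50, 170, 160 together with the known 2$ first-cycle rate):

```python
import numpy as np

# Rainbow: 43x + 160*2 + 140y = 589;  Nimbus: 100x + 88*2 + 80y = 556
A = np.array([[43.0, 140.0],
              [100.0, 80.0]])
b = np.array([589.0 - 320.0, 556.0 - 176.0])
x, y = np.linalg.solve(A, b)
print(x, y)                          # 3.0 1.0
print(50 * x + 170 * 2 + 160 * y)    # k = 650.0
```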
{ "language": "en", "url": "https://math.stackexchange.com/questions/379747", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Is there a correct order to learning maths properly? I am a high school student but I would like to self-learn higher level maths so is there a correct order to do that? I have learnt pre-calculus, calculus, algebra, series and sequences, combinatorics, complex numbers, polynomials and geometry all at high school level. Where should I go from here? Some people recommended that I learn how to prove things properly, is that a good idea? What textbooks do you recommend?
Quite often the transition to higher, pure math is real analysis. Here proofs really become relevant. I would suggest this free set of down-loadable notes from a class given at Berkeley by Fields medal winner (math analog of Noble Prize) Vaughan Jones. https://sites.google.com/site/math104sp2011/lecture-notes They are virtually verbatim and complete as a text. They build gradually so you can get a good base. The material is Prof. Jones's own treatment and the proofs are quite accessible and beautiful. You might just give it a try and see if it works for you.
{ "language": "en", "url": "https://math.stackexchange.com/questions/379819", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 3, "answer_id": 2 }
Stokes' Theorem Let $C$ be the curve of intersection of the cylinder $x^2 + y^2 = 1$ and the given surface $z = f(x,y)$, oriented counterclockwise around the cylinder. Use Stokes' theorem to compute the line integral by first converting it to a surface integral. (a) $\int_C (y \, \mathrm{d}x + z \, \mathrm{d}y + x \, \mathrm{d}z),\quad z=x \cdot y$. I'm having a problem setting up the problem. I appreciate any assistance.
Just to get you started, here's the details for (a). Start by finding the curl of the vector field $\mathbf{F}=\langle y,z,x\rangle$. You get $$\nabla\times\mathbf{F}=\det\begin{pmatrix} \mathbf{i} &\mathbf{j}&\mathbf{k}\\\frac{\partial}{\partial x}&\frac{\partial}{\partial y}&\frac{\partial}{\partial z}\\y&z&x\end{pmatrix}=\langle0,-1,-1 \rangle$$ Stokes's Theorem says the line integral of $\mathbf{F}$ around $C$ is equal to the surface integral of $\nabla\times\mathbf{F}$ over any surface $S$ having $C$ as a boundary (written $C=\partial S$). The easiest such surface would just be the part of the surface $z=xy$ lying inside the cylinder $x^2+y^2=1$. Call $S$ this surface, and parametrize it in polar coordinates by $\mathbf{S}(r,\theta)=\langle r\cos\theta, r\sin\theta, r^2\cos\theta\sin\theta\rangle$ for $r\in[0,1]$ and $\theta\in[0,2\pi)$. The $z$ component here comes from the fact that $z=xy$, and in polar coordinates $x=r\cos\theta$ and $y=r\sin\theta$. Then we just need to compute $$\int\int_S\langle 0,-1,-1\rangle\cdot \mathbf{dS}=\int_0^{2\pi}\int_0^1\langle 0,-1,-1\rangle\cdot\left(\frac{\partial\mathbf{S}}{\partial r}\times\frac{\partial\mathbf{S}}{\partial\theta}\right)\,dr\,d\theta$$ But we have $$\frac{\partial\mathbf{S}}{\partial r}=\langle\cos\theta,\sin\theta,2r\cos\theta\sin\theta \rangle=\langle \cos\theta,\sin\theta,r\sin(2\theta)\rangle$$ and $$\frac{\partial\mathbf{S}}{\partial \theta}=\langle -r\sin\theta, r\cos\theta, r^2(\cos^2\theta-\sin^2\theta) \rangle=r\langle-\cos\theta,\sin\theta,r\cos(2\theta)\rangle$$ Then $$\frac{\partial\mathbf{S}}{\partial r}\times\frac{\partial\mathbf{S}}{\partial\theta}=\det\begin{pmatrix}\mathbf{i}&\mathbf{j}&\mathbf{k}\\\cos\theta&\sin\theta&r\sin(2\theta)\\-r\sin\theta&r\cos\theta&r^2\cos(2\theta)\end{pmatrix}$$ The first component doesn't matter, because we're dotting it with $\langle0,-1,-1\rangle$ anyway. The second component is $-r^2(\sin(2\theta)\sin\theta+\cos(2\theta)\cos\theta)$ and the third component is just $r$. You can tell this is the right orientation because the third component is positive and hence pointing up rather than down. Taking the dot product, our integral becomes $$\int_0^{2\pi}\int_0^1\left(r^2\sin(2\theta)\sin(\theta)+r^2\cos(2\theta)\cos\theta-r\right)\,dr\,d\theta$$ Doing the integration with respect to $r$ we get $$\int_0^{2\pi}\left(\frac{1}{3}(\sin(2\theta)\sin\theta+\cos(2\theta)\cos\theta)-\frac{1}{2}\right)\,d\theta$$ I'll leave it to you to finish -- with the hint that $\sin(2\theta)\sin\theta+\cos(2\theta)\cos\theta$ simplifies very nicely if you remember your sum/difference formulas.
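For anyone wanting to check the setup by machine, here is a SymPy evaluation of the double integral above (a verification sketch appended here; note it also reveals the final value):

```python
import sympy as sp

r, th = sp.symbols('r theta', positive=True)
integrand = r**2 * sp.sin(2*th) * sp.sin(th) + r**2 * sp.cos(2*th) * sp.cos(th) - r
print(sp.integrate(integrand, (r, 0, 1), (th, 0, 2*sp.pi)))  # prints -pi
```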
{ "language": "en", "url": "https://math.stackexchange.com/questions/379878", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 1, "answer_id": 0 }
How long will it take Marie to saw another board into 3 pieces? So this is supposed to be really simple, and it's taken from the following picture: Text-only: It took Marie $10$ minutes to saw a board into $2$ pieces. If she works just as fast, how long will it take for her to saw another board into $3$ pieces? I don't understand what's wrong with this question. I think the student answered the question wrong, yet my friend insists the student got the question right. I feel like I'm missing something critical here. What am I getting wrong here?
The student is absolutely correct (as Twiceler has correctly shown). The time taken to cut a board into $2$ pieces (that is, $1$ cut): $10$ minutes. Therefore, the time taken to cut a board into $3$ pieces (that is, $2$ cuts): $20$ minutes. The question may have other, weirder interpretations, as "I am happy" commented: Time taken to cut it into one piece = $0$ minutes, so time taken to cut it into $3$ pieces = $0 \times 3$ minutes = $0$ minutes. So $0$ can be an answer, but it is illogical, just like the teacher's answer. And as Keltari said: Another correct answer would be 10 minutes. One could infer, "If she works just as fast," that "work" is the complete amount of time to do the job. - Keltari This is logical, but you can be sure that this is not what the question meant; the student has chosen the most relevant interpretation. The teacher's interpretation is mathematically incorrect. The teacher may have put the question for the students to get an idea of arithmetic progressions, and may have thought that the students would just answer the question without thinking hard. In many schools, at low grades, children are taught that the real numbers consist of all the numbers; only later, in higher grades, do they learn that complex numbers also exist. (I learned just like that.) So the question was put as a question on A.P., thinking that the students might not be capable of answering it the correct way. Or, as Jared rightly commented: This is simultaneously wonderful and sad. Wonderful for the student who was level-headed enough to answer this question correctly, and sad that this teacher's mistake could be representative of the quality of elementary school math education. - Jared Whatever may be the reason, there is no doubt that the student answered the question properly and that the teacher's answer is illogical.
{ "language": "en", "url": "https://math.stackexchange.com/questions/379927", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1079", "answer_count": 32, "answer_id": 6 }
I roll 6-sided dice until the sum exceeds 50. What is the expected value of the final roll? I roll 6-sided dice until the sum exceeds 50. What is the expected value of the final roll? I am not sure how to set this one up. This one is not homework, by the way, but a question I am making up that is inspired by one. I'm hoping this will help me understand what's going on better.
The purpose of this answer is to convince readers that the distribution of the roll "that gets me over $50$" is not necessarily that of a standard 6-sided die roll. Recall that a stopping time is a positive-integer-valued random variable $\tau$ for which $\{\tau \leq n \} \in \mathcal{F}_n$, where $\mathcal{F}_n = \sigma(X_0, X_1, \cdots, X_n)$ is the canonical filtration with respect to the (time-homogeneous) Markov chain $X_n$. The Strong Markov Property asserts (in this case) that conditioned on the event $\tau < \infty$, the random variables $X_1$ and $X_{\tau + 1} - X_{\tau}$ are equidistributed. Letting $\tau = -1 + \min\{k \mid X_k > 50\}$ ought to prove the equidistribution with a regular die roll, right? Well, no. What happens is that $\tau + 1$ is a stopping time, but $\tau$ is not, because it looks into the future one timestep. This is just enough to throw off the SMP.
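A quick simulation makes the bias concrete (a sketch; intuitively, larger faces are more likely to be the roll that crosses the threshold):

```python
import random

def final_roll():
    s = 0
    while s <= 50:          # keep rolling until the sum exceeds 50
        r = random.randint(1, 6)
        s += r
    return r                # the roll that pushed the sum over

n = 200_000
print(sum(final_roll() for _ in range(n)) / n)  # about 4.3, well above 3.5
```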
{ "language": "en", "url": "https://math.stackexchange.com/questions/380004", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 3, "answer_id": 2 }
Bounded integrable function Let $f : \mathbb{R} \to \overline{\mathbb{R}}$ be an integrable function. Given $\varepsilon > 0$, show that there is a bounded integrable function $g$ such that $\int |f - g| < \varepsilon$. I was wondering if I could get a hint.
First, as $f$ is integrable, it takes infinite values only on a negligible set, so we can assume that $f$ takes its values in $\Bbb R$. Writing $f=\max\{f,0\}-\max\{-f,0\}$, we express $f$ as the difference of two non-negative measurable integrable functions. So we are reduced to the case where $f\geqslant 0$ is integrable and measurable. To this end, go back to the definition of the Lebesgue integral, and recall that a simple function is bounded.
{ "language": "en", "url": "https://math.stackexchange.com/questions/380117", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Inf and sup for Lebesgue integrable functions Let $D \subset \mathbb{R}$ be a measurable set of finite measure. Suppose that $f : D \to \mathbb{R}$ is a bounded function. Prove that $$\sup\left\{\int_D \varphi \mid \varphi \leq f \text{ and } \varphi \text{ simple}\right\} = \inf\left\{\int_D \psi \mid f \leq \psi \text{ and } \psi \text{ simple}\right\}$$ iff $f$ is measurable. I was wondering if I could get a hint.
Suppose the inf and the sup are equal. Then for any $\epsilon > 0$ there exist simple functions $\varphi \le f \le \psi$ with the property that $\displaystyle \int_D \psi < \int_D \varphi + \epsilon$. Use this fact to construct sequences $\varphi_n$ and $\psi_n$ of simple functions with the property that $\varphi_n \le f \le \psi_n$ and $\displaystyle \int_D \psi_n - \varphi_n < 4^{-n}$. Let $m^*$ denote Lebesgue outer measure and $m$ Lebesgue measure. Then $$ m^*(\{f - \varphi_n > 2^{-n}\}) \le m^*(\{\psi_n - \varphi_n > 2^{-n}\}) = m(\{\psi_n - \varphi_n > 2^{-n}\}) $$ because $f \le \psi_n$. Chebyshev's inequality gives you $$ m(\{\psi_n - \varphi_n > 2^{-n}\}) \le 2^n \int_D \psi_n - \varphi_n < 2^{-n}. $$ Write $E_n = \{f - \varphi_n > 2^{-n}\}$. Then $m^*(E_n) < 2^{-n}$, and by subadditivity $m^*(\cup_{n \ge N} E_n) \le 2^{1-N}$. The monotonicity of $m^*$ implies $$ m^* \left( \bigcap_{N \ge 1} \bigcup_{n \ge N} E_n \right) = 0. $$ On edit: can you use this fact to show $f$ is measurable?
{ "language": "en", "url": "https://math.stackexchange.com/questions/380189", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Is $f(x,y)$ continuous? I want to find out if this function is continuous: $$(x,y)\mapsto \begin{cases}\frac{y\sin(x)}{(x-\pi)^2+y^2}&\text{for $(x,y)\not = (\pi, 0)$}\\0&\text{for $(x,y)=(\pi,0)$}\end{cases}$$ My first idea is that $$\lim_{(x,y)\to(\pi,0)} |f(x,y)-f(\pi,0)|=\lim_{(x,y)\to(\pi,0)}\left|\frac{y\sin(x)}{(x-\pi)^2+y^2}\right|\\\le \lim_{(x,y)\to(\pi,0)}|y\sin(x)|\cdot \left|\frac{1}{(x-\pi)^2+y^2}\right| $$ where the first factor tends to $0$ (since $y\to 0$ and $\sin(\pi)=0$), but I'm not sure if this is leading me where I want. By the way, I assume the function is continuous.
Let $u=x-\pi$. Then you're wondering about $$\lim_{(u,y)\to(0,0)}\frac{y\sin(\pi+u)}{u^2+y^2}$$ $$\lim_{(u,y)\to(0,0)}\frac{-y\sin u}{u^2+y^2}=$$ $$\lim_{(u,y)\to(0,0)}\frac{-yu}{u^2+y^2}\frac{\sin u}{u}$$ Can you show it doesn't go to zero? Look at $y=u$ and $y=-u$, for example.
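To spell out the hint: since $\frac{\sin u}{u}\to 1$, along the path $y=u$ the remaining factor is $\frac{-u\cdot u}{u^2+u^2}=-\frac12$, while along $y=-u$ it is $\frac{u\cdot u}{u^2+u^2}=\frac12$. Two paths give two different limits, so the limit does not exist and $f$ is not continuous at $(\pi,0)$.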
{ "language": "en", "url": "https://math.stackexchange.com/questions/380289", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Are all vectors straight lines? Is there a math field that deals with quadratic, cubic etc. vectors? Or a non-linear equivalent of a vector? If so, why are they so much less common than linear vectors?
From your response to my comment in your OP, I'll talk about why vectors are always considered "rays" geometrically. The most important interpretation of a vector is as a list of numbers (we'll say from $\Bbb R^n$.) The list of numbers determines a point in $\Bbb R^n$, and if you imagine connecting this to the origin with a straight line and adding an arrowhead at the end with the point, you have a "ray" representing the vector. Actually you can slide this ray (without changing the direction) all over space and still have the same vector. Strictly speaking it is not a segment because it has a fixed direction. If you put the arrowhead on the other end, it is a different vector. Since the vector only carries the information about a single point, it does not determine anything more complex than a line. In order to "curve" the vector, you'd need to say more about the path it follows on its way from start to finish. Another reason is that the points lying on the line determined by a vector through the origin are a subspace of the big space. Subspaces "aren't curved" in some sense. This property causes $\alpha v,\beta v, (\alpha+\beta)v$ to all lie on the same straight line. Multiplying a vector (ray) in a real vector space by $\lambda \in \Bbb R$ just stretches or contracts the vector. If the vector were curved somehow, then scaling it would change the curvature of the shape. It does not seem desirable that a geometric property such as curvature is ruined merely by scaling the vector, so it doesn't seem very likely to be useful.
{ "language": "en", "url": "https://math.stackexchange.com/questions/380324", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 4, "answer_id": 1 }
a totally ordered set with small well ordered set has to be small? While doing something quite different, the following question came to me: 1) If you have a totally ordered set $A$ such that all of its well-ordered subsets are at most countable, is it true that $A$ has at most the cardinality of the continuum? 2) More generally, is it true that if a totally ordered set $A$ has well-ordered subsets of cardinality at most $|B|$, then $A$ has cardinality at most $2^{|B|}$?
If $\kappa>2^\omega$, then $\kappa^*$ is a counterexample: all of its well-ordered subsets are finite. (The star indicates the reverse order.) However, if you require all well-ordered and reverse well-ordered subsets to be countable, the answer is yes to the more general question. Let $\langle A,\le\rangle$ be a linear order with $|A|>2^\kappa$. Let $\preceq$ be any well-order on $A$. Then $[A]^2$, the set of $2$-element subsets of $A$, can be partitioned into sets $I_0$ and $I_1$, where $$I_0=\left\{\{a,b\}\in[A]^2:\le\text{ and }\preceq\text{ agree on }\{a,b\}\right\}$$ and $$I_1=\left\{\{a,b\}\in[A]^2:\le\text{ and }\preceq\text{ disagree on }\{a,b\}\right\}\;.$$ By the Erdős-Rado theorem there are an $H\subseteq A$ and an $i\in\{0,1\}$ such that $|H|>\kappa$ and $[H]^2\subseteq I_i$. If $i=0$, $H$ is well-ordered by $\le$, and if $i=1$, $H$ is inversely well-ordered by $\le$. (Nice question, by the way.)
{ "language": "en", "url": "https://math.stackexchange.com/questions/380417", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11", "answer_count": 1, "answer_id": 0 }
Understanding bicomplex numbers I found, by chance, the set of bicomplex numbers. These numbers particularly took my attention because of their similarity to my previous personal research and question. I should say that I can't really understand the fact that $j^2=+1$ (and most other abstract algebra) without using matrix interpretations. When I looked a bit at the bicomplex numbers, the thing that surprised me most was the fact that $ij=ji=k$ and $k^2=-1$. Because using matrix representations, we get: $$ij=\begin{pmatrix} 0 & -1\\ 1 & 0 \end{pmatrix} \begin{pmatrix} 0 & 1\\ 1 & 0 \end{pmatrix}=\begin{pmatrix} -1 & 0\\ 0 & 1 \end{pmatrix}$$ $$ji=\begin{pmatrix} 0 & 1\\ 1 & 0 \end{pmatrix} \begin{pmatrix} 0 & -1\\ 1 & 0 \end{pmatrix}=\begin{pmatrix} 1 & 0\\ 0 & -1 \end{pmatrix}$$ So, $ij$ should be different from $ji$. Then if we define $k=ij$ or $k=ji$, we get either way $$k^2= I$$ I looked at the Wikipedia article and did some internet searching but couldn't find a matrix representation for the bicomplex $k$. So I want to ask for some clarification about where the relations $ij=ji$ and $k^2=-1$ come from.
The algebra you described in the question is the tessarines. There are two matrix representations of tessarines. * *For a tessarine $z=w_0+w_1i+w_2j+w_3ij$ the matrix representation is as follows: $$\left( \begin{array}{cccc} {w_0} & -{w_1} & {w_2} & -{w_3} \\ {w_1} & {w_0} & {w_3} & {w_2} \\ {w_2} & -{w_3} & {w_0} & -{w_1} \\ {w_3} & {w_2} & {w_1} & {w_0} \\ \end{array} \right)$$ One has to use the 4x4 matrix. After performing the operations, the coefficients of the resulting tessarine can be extracted from the first column of the resulting matrix. *Alternatively, one can use a 2x2 matrix of the form $z=a+bj=\left( \begin{array}{cc} a & b \\ b & a \\ \end{array} \right)$ where both $a$ and $b$ are complex numbers. Since complex numbers are embedded in most computer algebra systems, I prefer the second method.
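As a sanity check, here is a small NumPy sketch of the second representation (my own illustration of the answer's $2\times 2$ model): with $i$ acting as the complex scalar $i$ on both components and $j$ as the swap matrix, the two units commute and $k=ij$ squares to $-1$.

```python
import numpy as np

# i is the complex scalar i on both components; j swaps the two components
i = 1j * np.eye(2)
j = np.array([[0, 1], [1, 0]], dtype=complex)
k = i @ j

print(np.allclose(i @ j, j @ i))        # True: ij = ji in this representation
print(np.allclose(k @ k, -np.eye(2)))   # True: k^2 = -1
```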
{ "language": "en", "url": "https://math.stackexchange.com/questions/380488", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 2 }
Bayesian learning Imagine we assume there are two different types of coins: * *Coin A: a fair coin, p(heads) = 0.5. *Coin B: biased to heads at p(heads)=0.7. We then want to learn from samples which coin we are flipping. Assume a naive prior over the two coins, so we have a Beta distribution, $\beta_0(1,1)$. You flip the coin and see heads. Since you know the probability that coin A would generate heads is 0.5 and you know the probability that coin B would generate heads is 0.7, we update our distribution as: $$ \beta_1 = (1+\frac{0.5}{1.2},1+\frac{0.7}{1.2}) \approx (1.4167, 1.5833) $$ Is this the correct way to update the distribution or will it improperly bias the distribution in some way?
I don't understand how a beta distribution enters into it. A beta distribution is usually used in the context of an unknown probability lying anywhere in $[0,1]$. We have no such parameter here; all we have is an unknown binary choice between coins $A$ and $B$. The most natural prior in this case is one that assigns probability $1/2$ to both coins. The a priori probability of flipping heads with coin $A$ was $\frac12\cdot0.5=0.25$, and the a priori probability of flipping heads with coin $B$ was $\frac12\cdot0.7=0.35$, so since heads was flipped the probability that the coin is coin $A$ is $0.25/(0.25+0.35)=5/12$ and the probability that the coin is coin $B$ is $0.35/(0.25+0.35)=7/12$.
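For completeness, a minimal sketch of this update (a hypothetical snippet of my own, not part of the original answer); the same step can be iterated over a whole sequence of flips.

```python
# Posterior over {A, B} after observing one heads, starting from a 50/50 prior.
prior = {"A": 0.5, "B": 0.5}
likelihood = {"A": 0.5, "B": 0.7}            # P(heads | coin)

unnormalized = {c: prior[c] * likelihood[c] for c in prior}
total = sum(unnormalized.values())
posterior = {c: p / total for c, p in unnormalized.items()}
print(posterior)   # {'A': 0.4166..., 'B': 0.5833...}, i.e. 5/12 and 7/12
```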
{ "language": "en", "url": "https://math.stackexchange.com/questions/380567", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Finding a continuous function $f: \mathbb{R} \to \mathbb{R}$ such that $f(\mathbb{R})$ is neither open nor closed Find a bounded, continuous function $f: \mathbb{R} \to \mathbb{R}$ such that $f(\mathbb{R})$ is neither open nor closed?
Take $f(x) = \arctan(x^2)$. Then $f(\mathbb{R}) = [0, \pi/2)$, which is neither open (it contains $0$ but no neighborhood of $0$) nor closed (the limit point $\pi/2$ is never attained).
{ "language": "en", "url": "https://math.stackexchange.com/questions/380628", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 4, "answer_id": 1 }
Is there something faulty about this statement? Show any prime of the form $3k+1$ is of the form $6k+1$. I came up with my own solution that made perfect sense to me, but when I read the text's solution, it argued that the primes of that particular form are $6k+1 = 3(2k)+1$. But doesn't that really say the primes of the form $3k+1$ are of the form $6m+1$? It seems to me as though there's some misuse of notation here -- allowing $k = 2k$. So should the exercise be phrased as $6m+1$ instead?
Another way to phrase it is: "If $k$ is a positive integer such that $3k+1$ is prime, then $k$ is even." The proof, of course, is easy: if $k$ is odd, then $k=2h+1$ for some integer $h$. But $3k+1 = 3(2h+1)+1 = 6h+4$ is even (and greater than $2$) and therefore not prime.
{ "language": "en", "url": "https://math.stackexchange.com/questions/380707", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 1 }
Probability of draws at random with replacement of five tickets $400$ draws are made at random with replacement from $5$ tickets that are marked $-2, -1, 0, 1,$ and $2$ respectively. Find the expected value of: the number of times positive numbers appear? Expected value of $X$ number times positive number appear $= E(X)= (1\cdot(1/5))+ (2\cdot(1/5))= (1/5) + (2/5)=3/5=0.6$? $400\cdot 3/5=240$ Expected Value of $X$ number times positive number appear $=E(X)=240/400=3/5=0.6$?
The idea is right, but note that $0$ is not positive. So the probability we get a positive on any draw is $\frac{2}{5}$. So if $X_i=1$ if we get a positive on the $i$-th draw, with $X_i=0$ otherwise, then $E(X_i)=\frac{2}{5}$. Now use the linearity of expectation to conclude that the expected number of positives in $400$ draws is $(400)\left(\frac{2}{5} \right)$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/380770", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
An NFA with $\Sigma = \{1\}$ with $x^2$ accepting runs on strings $1^x$ for all $x \geq 0$ - how to construct? One of my homework assignments requires us to construct an NFA over the alphabet $\{1\}$ which has exactly $x^2 + 3$ accepting runs over the input string $1^x$ for all $x \in \mathbb{N}$. Now, the +3 part is simple - I've got LaTeX code for a state diagram for this using tikz automata: (Now with a pretty diagram!) However, the $x^2$ part is proving really hard for me to figure out. I'm not sure how to do this with a finite number of states, and because this is an NFA, this is especially tricky. Any and all suggestions of how to think about this and what approach to take would be very helpful.
There's no such NFA. Assume there was an NFA over the alphabet $\{1\}$ which accepts all strings whose length is $n^2 + 3$ for some $n \in \mathbb{N}$. I.e., the NFA is supposed to accept strings of lengths $3,4,7,12,19,28,39,\ldots$. If there were such an NFA, the language $$ \Omega = \left\{\underline{1}^{n^2+3} \,:\, n \in \mathbb{N}\right\} $$ would be regular. According to the pumping lemma for regular languages, we'd then have an $N$ such that if $x \in \Omega$ and $|x| \geq N$, then there are $x_1,x_2,x_3$ with $|x_2| > 0$, $x_1x_2x_3=x$ and $x_1\underbrace{x_2\ldots x_2}_{k\text{ times}}x_3 = x_1x_2^kx_3 \in \Omega$ for all $k \geq 1$. For our language, that implies that there's an $N$ such that if for some $x > N$ there's an $n \in \mathbb{N}$ with $x = n^2 + 3$, then the same thing works for $x + m|x_2|$ ($|x_2| \neq 0$) for all $m \geq 1$. Set $z = x-3$, then the statement is $$ z \geq N, z \text{ is a square} \implies \exists{x_2\in\mathbb{N}^+}:\, \forall m \geq 1:\, z + mx_2 \text{ is a square}\text{.} $$ That's obviously impossible, since the distance between two consecutive squares gets larger and larger, and once it gets larger than $x_2$, the right-hand side cannot hold.
{ "language": "en", "url": "https://math.stackexchange.com/questions/380812", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Is G isomorphic to $\mathbb{Z} \oplus \mathbb{Z}$? If $ G=\{3^{m}6^{n}\mid m,n \in \mathbb{Z}\}$ under multiplication, then I want to prove that this $G$ is isomorphic to $\mathbb{Z} \oplus \mathbb{Z}$. Can anyone help me solve this example? Thanks in advance. Can I define $\phi:\mathbb{Z} \oplus \mathbb{Z} \to G$ as $\phi\big((m,n)\big)=3^m 6^n$?
Hint: $2^k=3^{-k}6^k$.
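Spelling the hint out a little, as a sketch: since $6^n=2^n3^n$, every element of $G$ is $3^m6^n=2^n3^{m+n}$, and conversely the hint shows $2^k\in G$, so $G=\{2^a3^b\mid a,b\in\Bbb Z\}$. The map $(a,b)\mapsto 2^a3^b$ is then a surjective homomorphism $\Bbb Z\oplus\Bbb Z\to G$, and it is injective by unique factorization, since $2^a3^b=1$ forces $a=b=0$.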
{ "language": "en", "url": "https://math.stackexchange.com/questions/380888", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Prove that $\log X < X$ for all $X > 0$ I'm working through Data Structures and Algorithm Analysis in C++, 2nd Ed, and problem 1.7 asks us to prove that $\log X < X$ for all $X > 0$. However, unless I'm missing something, this can't actually be proven. The spirit of the problem only holds true if you define several extra qualifiers, because it's relatively easy to provide counterexamples. First, it says that $\log_{a} X < X$ for all $X > 0$, in essence. But if $a = -1$, then $(-1)^{2} = 1$. Therefore $\log_{-1} 1 = 2$. Thus, we must assume $a$ is positive. If $a$ is $< 1$, then $a^2 < 1$. Therefore we must assume that $a \geq 1$. Now, the book says that unless stated otherwise, it's generally speaking about base 2 for logarithms, which are vital in computer science. However, even then - if $a$ is two and $X$ is $\frac{1}{16}$, then $\log_{a} X$ is $-4$. (Similarly for base 10, try taking the log of $\frac{1}{10}$ on your calculator: it's $-1$.) Thus we must assume that $X \geq 1$. ...Unless I'm horribly missing something here. The problem seems quite different if we have to prove it for $X \geq 1$. But even then, I need some help solving the problem. I've tried manipulating the equation in as many ways as I could think of but I'm not cracking it.
One way to approach this question is to consider the minimum of $x - \log_a x$ on the interval $(0,\infty)$. For this we can compute the derivative, which is $1 - \frac{1}{x\log_e a}$. Thus the derivative is zero at a single point, namely $x = 1/\log_e a,$ and is negative to the left of that point and positive to the right. Thus $x - \log_a x$ decreases as $x$ approaches $1/\log_e a$ from the left, and then increases as we move away from this point to the right. Thus the minimum value is achieved at $x = 1/\log_e a$. (Here I'm assuming that $a > 1$, so that $\log_e a > 0$; the analysis of the problem is a little different if $a < 1$, since then for $x < a < 1$, we have $\log_a x > 1 > x,$ and the statement is not true.) Now this value is equal to $1/\log_e a + (\log_e \log_e a)/\log_e a,$ and you want this to be $> 0 $. This will be true provided $a > e^{1/e}$ (as noted in the comments).
{ "language": "en", "url": "https://math.stackexchange.com/questions/380963", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12", "answer_count": 4, "answer_id": 0 }
Normal subgroup of a normal subgroup Let $F,G,H$ be groups such that $F\trianglelefteq G \trianglelefteq H$. I am asked whether we necessarily have $F\trianglelefteq H$. I think the answer is no but I cannot find any counterexample with usual groups. Is there a simple case where this property is not true?
Let $p$ be a prime, and let $G$ be a $p$-group of order $p^3$. Let $H \leq G$ be a non-normal subgroup of order $p$ (equivalently, $H$ is of order $p$ and not central). Then $H$ is contained in a subgroup $K \leq G$ of order $p^2$. In this case $H \trianglelefteq K \trianglelefteq G$, but $H$ is not normal in $G$. For example, $G$ could be the Heisenberg group, which is the set $$\left\{ \begin{pmatrix} 1 & a & b \\ 0 & 1 & c \\ 0 & 0 & 1 \end{pmatrix} : a, b, c \in \mathbb{Z}_p \right\}$$ of matrices under multiplication. In the case $p = 2$ this group is isomorphic to $D_8$, which is the example given in another answer.
{ "language": "en", "url": "https://math.stackexchange.com/questions/381035", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 4, "answer_id": 3 }
Complex roots of polynomial equations with real coefficients Consider the polynomial $x^5 +ax^4 +bx^3 +cx^2 +dx+4$ where $a, b, c, d$ are real numbers. If $(1 + 2i)$ and $(3 - 2i)$ are two roots of this polynomial then what is the value of $a$ ?
Adding to lab bhattacharjee's answer, Vieta's formulas basically tell you that, for this quintic, the negative of the constant term divided by the leading coefficient equals the product of the roots. Letting $r$ be the 5th root of the polynomial (since we know the other four), $$-\frac{4}{1} = (1-2i)(1+2i)(3+2i)(3-2i)r$$ We get $1-2i$ and $3+2i$ as two other roots because they're the conjugates of the roots you gave us: since the coefficients are real, the conjugates of complex roots are also roots of the polynomial. Vieta's formulas also state that the negative of the coefficient of the term immediately after the leading term, divided by the leading coefficient, equals the sum of the roots. So $$-\frac{a}{1} = (1-2i)+(1+2i)+(3+2i)+(3-2i)+r$$ Using these two equations, you can solve for $a$ and $r$.
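Carrying the arithmetic through, for reference: $(1-2i)(1+2i)=5$ and $(3+2i)(3-2i)=13$, so $65r=-4$ gives $r=-\frac{4}{65}$, and then $-a=8+r=8-\frac{4}{65}=\frac{516}{65}$, i.e. $a=-\frac{516}{65}$.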
{ "language": "en", "url": "https://math.stackexchange.com/questions/381114", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Simple/Concise proof of Muir's Identity I am not a Math student and I am having trouble finding a short proof of Muir's identity. Even a slightly lengthy but easy-to-understand proof would be helpful. Muir's Identity $$\det(A)= (\operatorname{pf}(A))^2;$$ the identity is given in the first paragraph of the following link http://en.wikipedia.org/wiki/Pfaffian I am expecting a proof which uses minimal advanced mathematics. Any reference to a textbook or link would do. I would be very grateful if any of you could point me in that direction. P.S. I have done all the googling required and I wasn't satisfied with the results, so please don't post results from Google's first page. Thanks in advance.
This answer does not show the explicit form of $\textrm{pf}(A)$ but it proves that such a form must exist as a polynomial in the entries of $A$. Let $A$ be a generic skew-symmetric $n \times n$ matrix with indeterminate entries $A_{i j}$ on row $i$ column $j$ for $1 \leq i < j \leq n$. I will prove by induction on $n$ that $\det(A)$ is the square of a polynomial in the indeterminates $A_{i j}$. If $n = 2$ then $$ \det \begin{pmatrix} 0& A_{1 2} \\ -A_{1 2}& 0 \end{pmatrix} = A_{1 2}^2 $$ is a square polynomial. Let $$ B = \begin{pmatrix}1 \\ & 1 \\ & -A_{1 3} & A_{1 2} \\ & -A_{1 4} & & A_{1 2} \\ & \vdots & & & \ddots \\ & -A_{1 n} & & & & A_{1 2} \end{pmatrix} $$ where all unlisted entries are equal to zero. Then the product $$ C = B\,A\,B^{T} $$ is skew-symmetric and takes the form $$ C = \begin{pmatrix} 0& A_{1 2} & 0 & \dotsc & 0\\ -A_{1 2} & 0 & \ast & \dotsc & \ast\\ 0 & \ast & \ddots & \ddots & \vdots\\ \vdots & \vdots & \ddots & & \ast \\ 0 & \ast & \dotsc & \ast & 0 \end{pmatrix} $$ where each asterisk denotes some polynomial in the indeterminates. Let $A'$ be the bottom right $(n-2) \times (n-2)$ skew-symmetric sub-matrix of $C$. By induction $\det(A')$ is the square of a polynomial. From the explicit form of $C$ it follows that $$ A_{1 2}^{2n-4} \det(A) = \det(B \, A \, B^T) = \det(C) = A_{1 2}^2 \det(A') $$ or $$\det(A) = A_{1 2}^{6-2n} \det(A').$$ Now the right hand side must be a polynomial (because $\det(A)$ is) and since it is also a square we are done.
{ "language": "en", "url": "https://math.stackexchange.com/questions/381290", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Find a polar representation for a curve. I have the following curve: $(x^2 + y^2)^2 - 4x(x^2 + y^2) = 4y^2$ and I have to find its polar representation. I don't know how. I'd like some help; thanks in advance.
Just as the Cartesian has two variables, we will have two variables in polar form: $$x = r\cos \theta,\;\;y = r \sin \theta$$ We can also use the fact that $x^2 + y^2 = (r\cos \theta)^2 + (r\sin\theta)^2 = r^2 \cos^2\theta + r^2\sin^2 \theta = r^2\underbrace{(\sin^2 \theta + \cos^2 \theta)}_{= 1} =r^2$ This gives us $$r^4 - 4r^3\cos \theta - 4r^2 \sin^2\theta = 0 \iff r^2 - 4r\cos\theta - 4\sin^2\theta = 0$$
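If one wants $r$ explicitly, the last display is a quadratic in $r$ with discriminant $16\cos^2\theta+16\sin^2\theta=16$, so $r=2\cos\theta\pm 2$; keeping the nonnegative branch gives $r=2(1+\cos\theta)$, a cardioid.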
{ "language": "en", "url": "https://math.stackexchange.com/questions/381353", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Explicit Functions on $\mathbb{C}$ The following question is from last year's Complex Analysis exam paper, and I'm a little stuck on it. (i) $f(z)=e^{z^2}$: find explicit formulas for $u(x,y)$ and $v(x,y)$ such that: $f(x+iy)=u(x,y)+iv(x,y)$ (ii) Find all functions $v: \mathbb{R}^2\rightarrow\mathbb{R}$ such that $f(x+iy)=u(x,y)+iv(x,y)$, where $u(x,y)=x^3-3xy^2$ for $(x,y) \in \mathbb{R}^2$, is differentiable on $\mathbb{C}$. My Working (i) $z^2=(x+iy)(x+iy) = x^2-y^2+2ixy$ $e^{z^2}=e^{x^2-y^2+2ixy}$ But I'm not really sure where to go from here to find $u(x,y)$ and $v(x,y)$. (ii) I don't have a clue what to do with this; any help in the right direction would be great. I'm thinking maybe it has something to do with the Cauchy-Riemann equations, as it's differentiable on $\mathbb{C}$.
From Euler's formula $e^{iz}=\cos z+i\sin z$, $e^{z^2}=e^{x^2-y^2+2ixy}=e^{x^2-y^2}\cos(2xy)+ie^{x^2-y^2}\sin(2xy)$ $f$ being $\mathbb{C}$-differentiable implies $u,v$ are $\mathbb{R}$-differentiable plus the Cauchy-Riemann equations $$u_x=v_y$$ $$u_y=-v_x$$ Then $$v_y(x,y)=3x^2-3y^2\longrightarrow v=3x^2y-y^3+H(x)$$ $$v_x(x,y)=6xy\longrightarrow v=3x^2y+K(y)$$ By comparison, you get $$v(x,y)=3x^2y-y^3+c$$ for an arbitrary real constant $c$; up to that constant, $v$ is the imaginary part of $f(z)=z^3$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/381450", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
$k(tx,ty)=tk(x,y)$ then $k(x,y)=Ax+By$ A friend asked me today the following question: Let $k(x,y)$ be differentiable in all $\mathbb{R}^{2}$ s.t for every $(x,y)$ and for every $t$ it holds that $$k(tx,ty)=tk(x,y)$$ Prove that there exist $A,B\in\mathbb{R}$ s.t $$k(x,y)=Ax+By$$ I want to use the chain rule somehow, but I am having difficulty using it (I am a bit rusty). I believe I can get $$\frac{\partial k}{\partial tx}\cdot\frac{\partial tx}{\partial t}+\frac{\partial k}{\partial ty}\cdot\frac{\partial ty}{\partial t}=k(x,y)$$ hence $$\frac{\partial k}{\partial tx}\cdot x+\frac{\partial k}{\partial ty}\cdot y=k(x,y)$$ but I don't see how this helps. Can someone please help me out ?
First, $k(0,0)=0$ (take $t=0$ in the functional equation). Then, since $k(tx,ty)/t=k(x,y)$ for every $t\neq 0$, $$k(x, y)=\lim_{t\to 0}\frac{k(tx, ty)}{t}=xk_x+yk_y,$$ where the limit is the derivative of $t\mapsto k(tx,ty)$ at $t=0$ (which exists because $k$ is differentiable at the origin), and $k_x=\partial_xk|_{(x,y)=(0,0)}, k_y=\partial_yk|_{(x,y)=(0,0)}$. So the claim holds with $A=k_x$ and $B=k_y$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/381512", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 4, "answer_id": 2 }
What is the Fourier transform of $f(x)=e^{-x^2}$? I remember there is a special rule for this kind of function, but I can't remember what it was. Does anyone know?
Caveat: I'm using the normalization $\hat f(\omega) = \int_{-\infty}^\infty f(t)e^{-it\omega}\,dt$. A cute way to derive the Fourier transform of $f(t) = e^{-t^2}$ is the following trick: Since $$f'(t) = -2te^{-t^2} = -2tf(t),$$ taking the Fourier transform of both sides will give us $$i\omega \hat f(\omega) = -2i\hat f'(\omega).$$ Solving this differential equation for $\hat f$ yields $$\hat f(\omega) = Ce^{-\omega^2/4}$$ and plugging in $\omega = 0$ finally gives $$ C = \hat f(0) = \int_{-\infty}^\infty e^{-t^2}\,dt = \sqrt{\pi}.$$ I.e. $$ \hat f(\omega) = \sqrt{\pi}e^{-\omega^2/4}.$$
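As a check, the computation can be reproduced symbolically; this is a small SymPy sketch of my own, assuming the same $\hat f(\omega)=\int f(t)e^{-it\omega}\,dt$ normalization.

```python
import sympy as sp

t, w = sp.symbols('t omega', real=True)
# Fourier transform of exp(-t^2) computed directly as an integral
f_hat = sp.integrate(sp.exp(-t**2) * sp.exp(-sp.I * w * t), (t, -sp.oo, sp.oo))
print(sp.simplify(f_hat))   # expected: sqrt(pi)*exp(-omega**2/4)
```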
{ "language": "en", "url": "https://math.stackexchange.com/questions/381597", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "15", "answer_count": 1, "answer_id": 0 }
Expected number of edges: does $\sum\limits_{k=1}^m k \binom{m}{k} p^k (1-p)^{m-k} = mp$ Find the expected number of edges in $G \in \mathcal G(n,p)$. Method $1$: Let $\binom{n}{2} = m$. The probability that any set of edges $|X| = k$ is the set of edges in $G$ is $p^k (1-p)^{m-k}$. So the probability that $G$ has $k$ edges is $$\binom{m}{k} p^k ( 1-p )^{m-k}$$ This implies that $$E(X) = \sum_{k=1}^m k \binom{m}{k} p^k (1-p)^{m-k}$$ Method $2$: Choose an indicator random variable $X_e : \mathcal G(n,p) \to \{ 0,1 \}$ as follows: $$X_e(G) = \begin{cases} 1 & e \in E(G) \\ 0 & e \notin E(G) \end{cases}$$ So $E(X) = \sum_{e \in K_n} E(X_e(G)) = m p$ since each event $e \in E(G)$ and $f \in E(G)$ are independent. How do you reconcile these answers? I'm looking for either a mistake in reasoning or a direct proof that: $$\sum_{k=1}^m k \binom{m}{k} p^k (1-p)^{m-k} = mp$$ for $0 < p < 1$.
Related problems: (I), (II). Consider the function $$ f(x)=( xp+(1-p) )^m = \sum_{k=0}^{m} {m\choose k} p^k(1-p)^{m-k}x^k $$ Differentiating the above equation with respect to $x$ yields $$ \implies mp( xp+(1-p) )^{m-1} = \sum_{k=1}^{m}{m\choose k} k p^k(1-p)^{m-k}x^{k-1}. $$ Substituting $x=1$ in the above equation gives the desired result.
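A quick numerical sanity check of the identity (a hypothetical snippet of my own, with arbitrary $m$ and $p$):

```python
from math import comb

m, p = 10, 0.3
lhs = sum(k * comb(m, k) * p**k * (1 - p)**(m - k) for k in range(m + 1))
print(lhs, m * p)   # both print 3.0 up to floating-point error
```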
{ "language": "en", "url": "https://math.stackexchange.com/questions/381716", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Need help solving - $ \int (\sin 101x) \cdot\sin^{99}x\,dx $ I have a complicated integral to solve. I tried to split $(101 x)$ and proceed, but I am getting a pretty nasty answer while evaluating using parts. Are there any simpler methods to evaluate this integral? $$ \int\!\sin (101x)\cdot\sin^{99}(x)\, dx $$
Let's use the identity $$\sin(101x)=\sin(x)\cos(100x)+\cos(x)\sin(100x)$$ Then the integral becomes $$\int\sin^{100}(x)\cos(100x)dx+\int\sin^{99}(x)\sin(100x)\cos(x)dx$$ Integrating the first term by parts gives $$\int\sin^{100}(x)\cos(100x)dx=\frac{1}{100}\sin^{100}(x)\sin(100x)-\int\sin^{99}(x)\sin(100x)\cos(x)dx$$ Plugging this in, we see the remaining integrals cancel (up to a constant) and we are left with $$\int\sin^{99}(x)\sin(101x)dx=\frac{1}{100}\sin^{100}(x)\sin(100x)+C$$
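One can confirm the antiderivative by differentiating it back; here is a small SymPy sketch of my own, evaluated numerically at a few points to avoid heavy symbolic simplification.

```python
import sympy as sp

x = sp.symbols('x')
F = sp.sin(x)**100 * sp.sin(100 * x) / 100          # proposed antiderivative
err = sp.diff(F, x) - sp.sin(x)**99 * sp.sin(101 * x)
print([sp.N(err.subs(x, v), 10) for v in (0.3, 1.1, 2.0)])   # ~0 at each point
```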
{ "language": "en", "url": "https://math.stackexchange.com/questions/381867", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10", "answer_count": 2, "answer_id": 0 }
A parameterized elliptical integral (Legendre Elliptical Integral) $$ K(a,\theta)=\int_{0}^{\infty}\frac{t^{-a}}{1+2t\cos(\theta)+t^{2}}dt $$ For $$ -1<a<1;$$ $$-\pi<\theta<\pi$$ I know this integral to be a known tabulated Legendre elliptic integral, however the very fact that the numerator is parameterized completely throws a curveball. Using: $$ K(a,\theta)=\int_{0}^{\infty}\frac{t^{-a}}{(1+t^{2}) + 2t\cos(\theta)}dt $$ letting $2\gamma = \theta$ $$ \rightarrow K(a,\theta)=\int_{0}^{\infty}\frac{t^{-a}}{(1+t^{2}) + 2t \cos(2\gamma)}dt $$ in which case the trig function can be manipulated using the double angle identity $\cos(2\gamma)=1-2\sin^2(\gamma)$, turning it into a sine function: $$ \rightarrow K(a,\theta)=\int_{0}^{\infty}\frac{t^{-a}}{(1+t^{2}) + 2t(1-2\sin^2(\gamma))}dt $$ So it does have a sine in the denominator, which is suggestive of a Legendre elliptic integral. The rest is just solving for $k$ and simplifying the expression. Which leaves me with the parameter $a$. I have no idea what to do there. Any help is certainly appreciated.
This is not an elliptic integral; it can be expressed in terms of elementary functions: \begin{align} K(a,\theta)=\int_0^{\infty}\frac{t^{-a}dt}{t^2+2\cos\theta\, t+1}=\frac{1}{2i\sin\theta}\int_0^{\infty}\left(\frac{t^{-a}}{t+e^{-i\theta}}-\frac{t^{-a}}{t+e^{i\theta}}\right)dt=\\ =\frac{1}{2i\sin\theta}\left(\frac{\pi e^{ia\theta}}{\sin\pi a}-\frac{\pi e^{-ia\theta}}{\sin\pi a}\right)=\frac{\pi\sin (a\theta)}{\sin\theta\sin\pi a}. \end{align}
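A numeric spot check of the closed form, as a sketch of my own (parameters chosen arbitrarily within $0<a<1$, $0<\theta<\pi$):

```python
import numpy as np
from scipy.integrate import quad

a, theta = 0.3, 1.0
f = lambda t: t**(-a) / (1 + 2 * t * np.cos(theta) + t**2)
# split at 1 so quad handles the integrable t^(-a) endpoint singularity cleanly
numeric = quad(f, 0, 1)[0] + quad(f, 1, np.inf)[0]
closed = np.pi * np.sin(a * theta) / (np.sin(theta) * np.sin(np.pi * a))
print(numeric, closed)   # the two values agree to several digits
```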
{ "language": "en", "url": "https://math.stackexchange.com/questions/381937", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Let $\ \varphi \, : \, V\rightarrow V\ $ be a linear transformation. Prove that $\ Im(\varphi \, \circ \varphi) \subseteq Im \,\varphi\ $ Let V be a vector space and $\ \varphi \, : \, V\rightarrow V\ $ be a linear transformation. Prove that: $$\ Im(\varphi \, \circ \varphi) \subseteq Im \,\varphi\ $$ I am struggling to see what conditions I have to verify, in order to prove it.
Recall that $$\mathrm{Im}(\varphi)=\{\varphi(x)\quad|\quad x\in V\}$$ Now take $y\in \mathrm{Im}(\varphi\circ \varphi)$ then there's $x\in V$ such that $$y=\varphi\circ \varphi(x)= \varphi( \underbrace{\varphi(x)}_{z\in V})=\varphi(z)\in \mathrm{Im}(\varphi)$$ so we have $$\ Im(\varphi \, \circ \varphi) \subseteq Im \,\varphi\ $$
{ "language": "en", "url": "https://math.stackexchange.com/questions/382013", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 5, "answer_id": 2 }
Integration by parts, Reduction I was able to complete part (a) easily by using integration by parts. I ended up getting: $$I(n) = -\frac{1}{n} \cos x\cdot \sin^{n-1}x + \frac{n-1}{n}\cdot I(n-2)$$ For question (b), when I integrated $1/\sin^4x$ and subbed in $n = -4$, I got the following expression: $$\frac{1}{4}\cdot\cos x\cdot\sin^{-5}x + \frac{5}{4} \int \sin^{-6}x\, dx$$ My question is, how do I integrate $\sin^{-6}x$? Because it's not the same as integrating $\sin^{6}x$, which will actually get you somewhere. It feels like I'm going in a loop when integrating $\sin^{-6}x$. I might have gone wrong somewhere; help would be very much appreciated :)
Putting $n=-2,$ in $$I_n=-\frac1n\cos x\sin^{n-1}x+\frac{n-1}nI_{n-2}$$ we get $$I_{-2}=-\frac1{(-2)}\cos x\sin^{-2-1}x+\frac{(-2-1)}{(-2)}I_{-2-2}$$ $$\implies \frac32I_{-4}=I_{-2}-\frac{\cos x}{2\sin^3x}$$ Now, $$I_{-2}=\int\sin^{-2}xdx=\int \csc^2xdx=-\cot x+C$$ Can you finish it from here?
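For reference, finishing from here: $$I_{-4}=\tfrac23 I_{-2}-\tfrac{\cos x}{3\sin^3x}=-\tfrac23\cot x-\tfrac{\cos x}{3\sin^{3}x}+C,$$ which one can confirm by differentiating back to $\frac{1}{\sin^4 x}$.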
{ "language": "en", "url": "https://math.stackexchange.com/questions/382106", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Ways of merging two incomparable sorted lists of elements keeping their relative ordering Suppose that, for a real application, I have ended up with a sorted list A = {$a_1, a_2, ..., a_{|A|}$} of elements of a certain kind (say, Type-A), and another sorted list B = {$b_1, b_2, ..., b_{|B|}$} of elements of a different kind (Type-B), such that Type-A elements are only comparable with Type-A elements, and likewise for Type-B. At this point I seek to count the following: in how many ways can I merge both lists together, in such a way that the relative ordering of Type-A and Type-B elements, respectively, is preserved? (i.e. that if $P_M(x)$ represents the position of an element of A or B in the merged list, then $P_M(a_i)<P_M(a_j)$ and $P_M(b_i)<P_M(b_j)$ for all $i<j$) I've tried to figure this out constructively by starting with an empty merged list and inserting elements of A or B one at a time, counting in how many ways each insertion can be done, but since this depends on the placement of previous elements of the same type, I've had little luck so far. I also tried explicitly counting all possibilities for different (small) lengths of A and B, but I've been unable to extract any potential general principle in this way.
For merging $N$ sorted lists, here is a good way to see that the solution is $$\frac{(|A_1|+\dots+|A_N|)!}{|A_1|!\dots |A_N|!}$$ All the $(|A_1|+\dots+|A_N|)$ elements can be permuted in $(|A_1|+\dots+|A_N|)!$ ways. Among these, any solution which has the ordering of the $A_1$ elements different from the given order has to be thrown out. If you keep the positions of everyone except $A_1$ fixed, you can generate $|A_1|!$ permutations by shuffling $A_1$. Only 1 among these $|A_1|!$ solutions is valid. Hence the number of solutions containing the right order of $A_1$ is $$\frac{(|A_1|+\dots+|A_N|)!}{|A_1|!}$$ Now repeat the argument for $A_2, A_3, \dots, A_N$.
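A brute-force check of the count for two small lists (my own sketch with made-up labels):

```python
from itertools import permutations
from math import comb

A, B = ['a1', 'a2'], ['b1', 'b2', 'b3']

def keeps_order(merge):
    """True if the merge preserves the relative order within A and within B."""
    return ([x for x in merge if x in A] == A and
            [x for x in merge if x in B] == B)

count = sum(keeps_order(m) for m in permutations(A + B))
print(count, comb(len(A) + len(B), len(A)))   # 10 10
```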
{ "language": "en", "url": "https://math.stackexchange.com/questions/382174", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 3, "answer_id": 2 }
Binary Decision Diagram of $(A\Rightarrow C)\wedge (B\Rightarrow C)$? I made a Binary Decision Diagram for $(A\vee B)\Rightarrow C$, which I think is correct. Now I want to make a Binary Decision Diagram for $(A\Rightarrow C) \wedge (B\Rightarrow C)$ but I can't. I can make 2 BDDs, one for $(A\Rightarrow C)$ and one for $(B\Rightarrow C)$. In the picture below is just the BDD for $(A\Rightarrow C)$, because the other is the same, just with $B$ instead of $A$. How can I make one BDD for $(A\Rightarrow C) \wedge (B\Rightarrow C)$?
$(A\Rightarrow C)\land(B\Rightarrow C) \equiv (\lnot A\lor C)\land(\lnot B\lor C)\equiv (\lnot A\land\lnot B)\lor C$. So, this is true iff $C$ is true, or both $A$ and $B$ are false -- which is exactly $(A\lor B)\Rightarrow C$, so the BDD is the same one you already built for that formula.
{ "language": "en", "url": "https://math.stackexchange.com/questions/382239", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Prove by mathematical induction that $1 + 1/4 +\ldots + 1/4^n \to 4/3$ Please help. I haven't found any text on how to prove by induction this sort of problem: $$ \lim_{n\to +\infty}1 + \frac{1}{4} + \frac{1}{4^2} + \cdots+ \frac{1}{4^n} = \frac{4}{3} $$ I can't quite get how one can prove such. I can prove basic divisible "inductions" but not this. Thanks.
If you want to use proof by induction, you have to prove the stronger statement that $$ 1 + \frac{1}{4} + \frac{1}{4^2} + \cdots+ \frac{1}{4^n} = \frac{4}{3} - \frac{1}{3}\frac{1}{4^n} $$
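The induction step is then a one-line computation: assuming the identity for $n$, $$\left(\frac43-\frac13\cdot\frac1{4^n}\right)+\frac1{4^{n+1}}=\frac43-\frac43\cdot\frac1{4^{n+1}}+\frac1{4^{n+1}}=\frac43-\frac13\cdot\frac1{4^{n+1}},$$ and since $\frac13\cdot 4^{-n}\to 0$, the partial sums converge to $\frac43$.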
{ "language": "en", "url": "https://math.stackexchange.com/questions/382295", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 10, "answer_id": 5 }
Spanned divisors and Base Points Let $X$ be a smooth algebraic variety. We say that a line bundle $\xi\in H^1(X,\mathcal{O}^\ast)$ is spanned is for each $x\in X$ there is a global section $s\in H^0(X,\mathcal{O}(\xi))$ with $s(x)\neq 0$. Let $\xi=[D]$ be the line bundle associated to a divisor $D$. If $\xi$ is spanned, why can we say that the linear system $|D|$ is base-point free? Basically for each $x$ I want to find $D'\in|D|$ such that $x\notin D'$.
Assume that the line bundle $\mathscr{O}(D)$ is spanned. So given $x \in X$, there exists a section $s \in H^0(X, \mathscr{O}(D))$ such that $s(x) \neq 0$. Let $D'$ denote the divisor of zeroes of the section $s$; then $x$ is not contained in $\mathrm{Supp}(D')$. The remaining issue is to show that $D'$ is linearly equivalent to $D$. To do this, we need to show there's a rational function $f$ on $X$ such that $\mathrm{div} f = D'-D$. But there's a correspondence (explained in detail e.g. in Shafarevich Volume 2, VI.1.4) between global sections of the bundle $\mathscr{O}(D)$ and rational functions $f$ on $X$ such that $\mathrm{div} f + D \geq 0$; moreover, if $s \in H^0(X,\mathscr{O}(D))$ has zero-set $D'$, and $s$ corresponds to a rational function $f$, then $D'=\mathrm{div} f + D$. So our chosen section $s$ from the first paragraph yields a rational function $f$ such that $D'-D=\mathrm{div} f$, which means the two divisors are linearly equivalent, as we wanted.
{ "language": "en", "url": "https://math.stackexchange.com/questions/382373", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Roots of cubic polynomial lying inside the circle Show that all roots of $a+bz+cz^2+z^3=0$ lie inside the circle $|z|=\max{\{1,|a|+|b|+|c| \}}$ Now this problem is given in Beardon's Algebra and Geometry, third chapter, on complex numbers. What might be relevant for this problem: * *the author previously discussed roots of unity; *a little (I mean about a page of informal discussion) about cubic and quartic equations; *then gave a proof of the fundamental theorem of algebra (the existence of a root was given as an informal proof and the rest using induction) and then a corollary of it (if $p(z) = q(z)$ at $n + 1$ distinct points then $p(z) = q(z)$ for all $z$, where both polynomials are of degree at most $n$); I was trying to see how I should approach it, with no success for quite some time. I looked on the net and found that this would be kind of easy with Rouche's theorem, but I was not given that. So is it possible to solve it in a simple way with what was given? Thanks!
I'd simply be looking at showing that the $z^3$ term is dominant, so there can be no roots beyond the bound. I don't think it is at all sophisticated.
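One way to make the dominance precise, as a sketch: write $R=\max\{1,|a|+|b|+|c|\}$ and suppose $|z|>R$. Since $|z|>1$, $$|a+bz+cz^2|\le |a|+|b||z|+|c||z|^2\le(|a|+|b|+|c|)|z|^2<|z|\cdot|z|^2=|z^3|,$$ so $z^3\ne -(a+bz+cz^2)$ and $z$ cannot be a root; hence every root satisfies $|z|\le R$.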
{ "language": "en", "url": "https://math.stackexchange.com/questions/382484", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
How to write this conic equation in standard form? $$x^2+y^2-16x-20y+100=0$$ Standard form? Circle or ellipse?
Recall that one of the usual standard forms is: $(x - a)^{2} + (y - b)^{2} = r^{2}$ where... * *(a,b) is the center of the circle *r is the radius of the circle Rearrange the terms to obtain: $x^{2} - 16x + y^{2} - 20y + 100 = 0$ Then, by completing the squares, we have: $(x^{2} - 16x + 64) + (y^{2} - 20y + 100) = 64$ $(x - 8)^{2} + (y - 10)^{2} = 8^{2}$ Thus, we have a circle with radius $r = 8$ and center $(a,b) = (8,10)$
{ "language": "en", "url": "https://math.stackexchange.com/questions/382554", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 2 }
Number of Invariant Subspaces of a Jordan Block I'm asking this question on behalf of a person I'm supposed to be tutoring who has this problem as part of eir homework. The problem is "How many invariant subspaces are there of a transformation $T$ that sends $v\mapsto J_{\lambda,n}v$" where $J_{\lambda,n}$ is a Jordan block. We are pretty sure the answer is $n+1$, where the spaces are the trivial space and the ones spanned by sets of columns of this form: $\Big\{$ $\pmatrix{1 \\ 0 \\ 0 \\ \vdots \\ 0 \\ 0 \\ \vdots \\ 0}$ , $~\pmatrix{0 \\ 1 \\ 0 \\ \vdots \\ 0 \\ 0 \\ \vdots \\ 0}$ , $~\pmatrix{0 \\ 0 \\ 1 \\ \vdots \\ 0 \\ 0 \\ \vdots \\ 0}$ , $~\cdots$ , $~\pmatrix{0 \\ 0 \\ 0 \\ \vdots \\ 1 \\ 0 \\ \vdots \\ 0}$ $\Big\}$ . But we are not sure how to explain that there aren't others. Can anyone help us, or at least put us on the right track?
You are right. Let $J_{\lambda,n}=\lambda\,I+N$ where $N$ is the nilpotent part, which maps $e_i\mapsto e_{i-1}$ if $i>1$ and $e_1\mapsto 0$. * *Observe that the invariant subspaces of $\lambda\,I+N$ coincide with those of $N$. *Assume that $v=(v_1,..,v_k,0,..,0)$ with $v_k\ne 0$, and consider its generated $N$-invariant subspace $V$. We have $V\ni N^{k-1}v=v_k\,e_1$, so $e_1\in V$. Similarly, by $N^{k-2}v,e_1\in V$ we can conclude $e_2\in V$, and so on, until $e_k\in V$. Thus, indeed $V={\rm span}(e_1,..,e_k)$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/382635", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Limit $\frac{\tan^{-1}x - \tan^{-1}\sqrt{3}}{x-\sqrt{3}}$ without L'Hopital's rule. Please solve this without L'Hopital's rule? $$\lim_{x\rightarrow\sqrt{3}} \frac{\tan^{-1} x - \frac{\pi}{3}}{x-\sqrt{3}}$$ All I figured out how to do is to rewrite this as $$\frac{\tan^{-1} x - \tan^{-1}\sqrt{3}}{x-\sqrt{3}}$$ Any help is appreciated!
We want $$L = \lim_{x \to \sqrt{3}} \dfrac{\arctan(x) - \pi/3}{x - \sqrt3}$$ Let $\arctan(x) = t$. We then have $$L = \lim_{t \to \pi/3} \dfrac{t-\pi/3}{\tan(t) - \sqrt{3}} = \lim_{t \to \pi/3} \dfrac{t-\pi/3}{\tan(t) - \tan(\pi/3)} = \dfrac1{\left.\dfrac{d \tan(t)}{dt} \right\vert_{t=\pi/3}} = \dfrac1{\sec^2(t) \vert_{t=\pi/3}} = \dfrac14$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/382684", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
Induction proof: $\dbinom{2n}{n}=\dfrac{(2n)!}{n!n!}$ is an integer. Prove using induction: $\dbinom{2n}{n}=\dfrac{(2n)!}{n!n!}$ is an integer. I tried but I can't do it.
This is one instance of a strange phenomenon: proving something seemingly more complicated makes things simpler. Show that $\binom{n}{k}$ is an integer for all $0\leq k\leq n$, and that will show what you want. To do so, show by induction on $n$ that $$\binom{n+1}{k}=\binom{n}{k}+\binom{n}{k-1}.$$
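The identity itself is direct algebra: $$\binom{n}{k}+\binom{n}{k-1}=\frac{n!}{k!(n-k)!}+\frac{n!}{(k-1)!(n-k+1)!}=\frac{n!\,\big[(n-k+1)+k\big]}{k!(n-k+1)!}=\binom{n+1}{k},$$ so each $\binom{n+1}{k}$ is a sum of two integers, and induction on $n$ closes the argument.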
{ "language": "en", "url": "https://math.stackexchange.com/questions/382787", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 0 }
compare of eigenvalues $\lambda_1(D_a)$ and $\lambda_1(D_c)$. Let $f(x)$ be a smooth function on $[-1,1]$ such that $f(x)>0$ for all $x\in(-1,1)$ and $f(-1)=f(1)=0$. Consider $\gamma\subset\Bbb{R}^2$, the graph of $f(x)$. Let $T_a$ be the symmetry with respect to the $x$-axis and $T_c$ the central symmetry with respect to the origin. Now consider two domains: $D_a$, bounded by the curves $\gamma$ and $T_a(\gamma)$, and $D_c$, bounded by the curves $\gamma$ and $T_c(\gamma)$. Let $\lambda_1(D_a)$ and $\lambda_1(D_c)$ be the first eigenvalues of the Dirichlet problem on $D_a$ and $D_c$, respectively. How do $\lambda_1(D_a)$ and $\lambda_1(D_c)$ compare? (Is $\lambda_1(D_c)\geq\lambda_1(D_a)$, or $\lambda_1(D_a)\geq \lambda_1(D_c)$?) There is a theorem in Strauss's book which states: if $\Omega_1\subset\Omega_2$ then $\lambda_1(\Omega_1)\geq\lambda_1(\Omega_2)$. Thanks.
Fact 1. In the special case $$f(-x)\le f(x),\qquad 0\le x\le 1\tag0$$ the inequality $\lambda_1(D_a)\le \lambda_1(D_c)$ holds. Proof. Let $u$ be the first eigenfunction for $D_c$. Extend it to $\mathbb R^2$ by zero outside of $D_c$. For $(x,y)\in\mathbb R^2$ define $$v(x,y)=\begin{cases} \max(u(x,y),u(-x,y))\quad & x\ge 0 \\ \min(u(x,y),u(-x,y))\quad & x\le 0 \end{cases}$$ (This is called the polarization of $u$ with respect to the $y$-axis.) Then the following hold: $$\int_{\mathbb R^2} v^2 = \int_{\mathbb R^2} u^2 \tag1$$ $$\int_{\mathbb R^2} |\nabla v|^2 = \int_{\mathbb R^2} |\nabla u|^2 \tag2$$ $$v=0\quad \text{ outside } D_a\tag3$$ Here (1) and (3) are relatively easy (you need assumption (0) to prove (3)). The equality (2) is not very straightforward unless you know something about Sobolev spaces. The paper An approach to symmetrization via polarization should give you an idea of what is going on here. Since $v$ is an eligible function in the variational definition of $\lambda_1(D_a)$, we have $$\lambda_1(D_a)\le \frac{\int |\nabla v|^2}{\int v^2}=\lambda_1(D_c) \tag4$$ as claimed. Fact 2. There exist functions $f$ for which $\lambda_1(D_a)<\lambda_1(D_c)$. Let $f=\chi_{[2/5,3/5]}$ (I know it's neither continuous nor positive). Then $D_a$ is a rectangle of dimensions $2\times (1/5)$ while $D_c$ is the union of two rectangles of dimensions $1\times (1/5)$. The fundamental frequency of the larger rectangle is smaller. Therefore, $\lambda_1(D_a)<\lambda_1(D_c)$ in this case. The function $f$ can be approximated by smooth positive functions. The first eigenvalue depends continuously on the domain in various senses (you'll have to dig in the literature). Conclusion: $\lambda_1(D_a)<\lambda_1(D_c)$ holds for some domains. An alternative proof of Fact 2 can be given by following the proof of Fact 1 and observing that (4) is a strict inequality in general. Indeed, $v$ is in general not differentiable, while eigenfunctions are; therefore $v$ is not an eigenfunction.
{ "language": "en", "url": "https://math.stackexchange.com/questions/382875", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Can every infinite set be divided into pairwise disjoint subsets of size $n\in\mathbb{N}$? Let $S$ be an infinite set and $n$ be a natural number. Does there exist partition of $S$ in which each subset has size $n$? * *This is pretty easy to do for countable sets. Is it true for uncountable sets? *If (1) is true, can it be proved without choice?
A geometric answer, for cardinality $c$. For $x, y \in S^1$ (the unit circle), let $x \sim y$ iff $x$ and $y$ are vertices of the same regular $n$-gon centered at the origin. Now the title of this question speaks of infinite sets generally, which is a different question. The approach by Hagen von Eitzen is probably about the best you'll find in the general case.
{ "language": "en", "url": "https://math.stackexchange.com/questions/382965", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11", "answer_count": 6, "answer_id": 4 }
Prove by mathematical induction for any prime number $p > 3$, $p^2 - 1$ is divisible by $3$? Prove by mathematical induction: for any prime number $p > 3$, $p^2 - 1$ is divisible by $3$. Actually the above expression is divisible by $3,4,6,8,12$ and $24$. I have proved the divisibility by $4$ like: $$ \begin{align} p^2 -1 &= (p+1)(p-1)\\ &=(2n +1 +1)(2n + 1 - 1)\;\;\;\text{as $p$ is an odd prime, it can be written as $2n + 1$}\\ &= (2n + 2)(2n)\\ &= 4(n)(n + 1) \end{align} $$ Hence $p^2 - 1$ is divisible by $4$. But I cannot prove the divisibility by $3$.
Hint: since $p>3$ is prime, $3\nmid p$, so $p \equiv 1$ or $-1 \pmod 3$, and hence $p^2 \equiv 1 \pmod 3$ for every prime $p>3$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/383018", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Solving systems of linear equations using matrices, 3 equations, 4 variables I understand how to solve systems of linear equations when they have the same number of variables as equations. But what about when there are only three equations and 4 variables? For example, when I was looking through an exam paper, I came across this question: $$w + x + y + z = 1$$ $$2w + x + 3y + z = 7$$ $$2w + 2x + y + 2z = 7$$ The question does not explicitly ask for us to solve using matrices, but it is in a question about matrices... Any help would be appreciated!
There are a couple of things you have to pay attention to when solving a system of equations. The first thing you want to pay attention to is the rank of the corresponding matrix, defined as the number of pivot rows in the Reduced Row Echelon form of your matrix (that you arrive at via Gaussian elimination). You can think of the rank as the number of independent equations. For example, if you have $a + b = 3$ and $2a + 2b = 6$, those equations are not independent. The second one does not tell you anything that the first one doesn't tell you already. So instead of characterizing a system as "m equations with n unknowns", treat it as "m independent equations with n unknowns". The next thing you have to know is how to identify the solution space. Linear algebra tells you that if you have a matrix of rank r and n columns (unknowns), you will have n - r free variables that can take any value. Linear algebra also tells you that the complete solution space consists of any particular solution plus the null space of the matrix. To find both a particular solution and a basis for the null space, you will want to use the Reduced Row Echelon form.
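Applying this to the system in the question, a quick SymPy sketch (my own illustration) shows rank 3 with one free variable:

```python
import sympy as sp

w, x, y, z = sp.symbols('w x y z')
eqs = [sp.Eq(w + x + y + z, 1),
       sp.Eq(2*w + x + 3*y + z, 7),
       sp.Eq(2*w + 2*x + y + 2*z, 7)]
print(sp.linsolve(eqs, (w, x, y, z)))
# {(16, -z - 10, -5, z)}: w = 16, y = -5, x = -10 - z, with z free
```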
{ "language": "en", "url": "https://math.stackexchange.com/questions/383100", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 2 }
How to solve equations of algebra? Let $a_i>0, b_i>0$ ($i=1,2,\ldots,N$). How to prove that there exist unique $x_i>0$ ($i=1,2,\ldots,N$) such that $$a_ix_i^{b_i}+x_1+x_2+\cdots+x_N=1,\;\;i=1,2,\ldots,N.$$ Thank you.
Replace $x_1+\ldots+x_N$ in the equation by $S$. A solution to the problem must have $0<S<1$, since $0<a_ix_i^{b_i}=1-S$. Then $$ x_i=(\frac{1-S}{a_i})^{1/b_i}=f_i(S). $$ $f_i(S)$ are continuous functions from $[0,1]$ to $\mathbb{R^{+}}$, so the same is true for the sum $F(S)=\sum_{i=1}^N f_i(S)$. $F(S)$ is strictly monotonically decreasing with $F(0)=\sum_{i=1}^N \left(\frac{1}{a_i}\right)^{1/b_i}>0$ and $F(1)=0$. Thus there is a unique point where $F(S)=S$. Now you should be able to conclude.
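A small numerical sketch of this fixed-point argument (sample values of $a_i,b_i$ chosen arbitrarily, my own illustration):

```python
# Solve F(S) = S by bisection, then recover the x_i and verify the equations.
a = [2.0, 3.0]
b = [1.0, 2.0]

def F(S):
    return sum(((1 - S) / ai) ** (1 / bi) for ai, bi in zip(a, b))

lo, hi = 0.0, 1.0
for _ in range(60):       # F is decreasing, the identity is increasing
    mid = (lo + hi) / 2
    lo, hi = (mid, hi) if F(mid) > mid else (lo, mid)

S = (lo + hi) / 2
xs = [((1 - S) / ai) ** (1 / bi) for ai, bi in zip(a, b)]
print([ai * xi**bi + sum(xs) for ai, bi, xi in zip(a, b, xs)])  # all ~1.0
```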
{ "language": "en", "url": "https://math.stackexchange.com/questions/383250", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Rotating x,y points 45 degrees I have a two dimensional data set that I would like to rotate 45 degrees such that a 45 degree line from the points (0,0) and (10,10) becomes the x-axis. For example, the x,y points (1,1), (2,2), and (3,3) would be transformed to the points (0,1), (0,2), and (0,3), respectively, such that they now lie on the x-axis. The points (1,0), (2,0), and (3,0) would be rotated to the points (1,1), (2,2), and (3,3). How can I calculate how to rotate a series of x,y points 45 degrees?
Here is some good background about this topic on Wikipedia: Rotation Matrix. In short, to rotate a point $(x,y)$ through an angle $\theta$ about the origin, multiply by $$R(\theta)=\begin{pmatrix}\cos\theta & -\sin\theta\\ \sin\theta & \cos\theta\end{pmatrix};$$ to bring the line $y=x$ onto the $x$-axis, take $\theta=-45^\circ$, which sends $(x,y)$ to $\left(\tfrac{x+y}{\sqrt2},\,\tfrac{y-x}{\sqrt2}\right)$.
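A small NumPy sketch of this rotation (my own illustration):

```python
import numpy as np

theta = -np.pi / 4    # rotate by -45 degrees: the line y = x maps to the x-axis
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

pts = np.array([[1, 1], [2, 2], [3, 3], [1, 0]], dtype=float)
print(pts @ R.T)      # (1,1) -> (sqrt(2), 0), (1,0) -> (0.707..., -0.707...)
```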
{ "language": "en", "url": "https://math.stackexchange.com/questions/383321", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "17", "answer_count": 4, "answer_id": 1 }
An intuitive idea about fundamental group of $\mathbb{RP}^2$ Can someone explain to me, with an example, the meaning of $\pi(\mathbb{RP}^2,x_0) \cong \mathbb{Z}_2$? We consider the real projective plane as a quotient of the disk. I didn't receive an exhaustive answer to this question from my teacher; in fact he said that the loop $2a$ with base point $P$ is homotopically equivalent to the "constant loop" with base point $P$, but this doesn't resolve my doubts. Obviously I can calculate it, so the problem is NOT how to calculate it using the Van Kampen theorem, but I need to get an idea of "why for every loop $a$, $[2a] = [1]$"
You can see another set of related pictures here, which gives the script for this video Pivoted Lines and the Mobius Band (1.47MB). The term "Pivoted Lines" is intended to be a non technical reference to the fact that we are discussing rotations, and their representations. The video shows the "identification" of the Projective Plane as a Mobius Band and a disk, the identification being shown by a point moving from one to the other. Then the point makes a loop twice round the Mobius Band, as in the above, and this loop moves off the Band onto the disk and so to a point. Thus we are representing motion of motions!
{ "language": "en", "url": "https://math.stackexchange.com/questions/383537", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "39", "answer_count": 4, "answer_id": 2 }
Singular asymptotics of Gaussian integrals with periodic perturbations At the bottom of page 5 of this paper by Giedrius Alkauskas it is claimed that, for a $1$-periodic continuous function $f$, $$ \int_{-\infty}^{\infty} f(x) e^{-Ax^2}\,dx = \sqrt{\frac{\pi}{A}} \int_0^1 f(x)\,dx + O(1) \tag{1} $$ as $A \to 0^+$. How can I prove $(1)$? I'm having a hard time doing it rigorously since I'm unfamiliar with Fourier series. If I ignore convergence and just work formally then I can get something that resembles statement $(1)$. Indeed, since $f$ is $1$-periodic we write $$ f(x) = \int_0^1 f(t)\,dt + \sum_{n=1}^{\infty} \Bigl[a_n \cos(2\pi nx) + b_n \sin(2\pi nx)\Bigr]. $$ Multiply this by $e^{-Ax^2}$ and integrate term by term, remember that $\sin(2\pi nx)$ is odd, and get $$ \begin{align} \int_{-\infty}^{\infty} f(x) e^{-Ax^2}\,dx &= \int_{-\infty}^{\infty} e^{-Ax^2}\,dx \int_0^1 f(x)\,dx + \sum_{n=1}^{\infty} a_n \int_{-\infty}^{\infty} \cos(2\pi nx) e^{-Ax^2}\,dx \\ &= \sqrt{\frac{\pi}{A}} \int_0^1 f(x)\,dx + \sqrt{\frac{\pi}{A}} \sum_{n=1}^{\infty} a_n e^{-\pi^2 n^2/A} \\ &= \sqrt{\frac{\pi}{A}} \int_0^1 f(x)\,dx + O\left(A^{-1/2} e^{-\pi^2/A}\right) \end{align} $$ as $A \to 0^+$. I suppose the discrepancy in the error stems from the fact that $f$ need not be smooth (I think in the paper it's actually nowhere differentiable). Obviously there are some issues, namely the convergence of the series and the interchange of summation and integration. Resources concerning the analytic properties of Fourier series which are relevant would be much appreciated. Answers which do not use Fourier series are also very welcome.
Your estimate of error term is correct. The following are just some supplementary details to make your argument more rigorous. Let $(f_n)_{n\ge 1}$ be the sequence of partial sums of the Fourier series of $f$, i.e. $$ f_n(x) = \int_0^1 f(t)\,dt + \sum_{k=1}^n \big(a_k \cos(2\pi kx) + b_k \sin(2\pi kx)\big). $$ Note that $f_n$ converges to $f$ in $L^2([0,1])$, so by Cauchy-Schwarz's inequality, as $n\to\infty$, $$\int_0^1|f(t)-f_n(t)| dt\le \big(\int_0^1|f(t)-f_n(t)|^2dt\big)^{\frac{1}{2}}\to 0.\tag{1}$$ Also note that, for every $n\ge 1$, $$|a_n|=2\cdot\big|\int_0^1 f(t)\cos(2\pi nt) dt\big|\le 2\int_0^1|f(t)|dt.\tag{2}$$ As you have shown, $$\int_{-\infty}^{+\infty}f_n(x)e^{-Ax^2}dx=\sqrt{\frac{\pi}{A}}(\int_0^1f(t)dt+\sum_{k=1}^na_ke^{\frac{-k^2\pi^2}{A}}).\tag{3}$$ From $(2)$ and $(3)$ we know, $$\big|\sqrt{\frac{A}{\pi}}\int_{-\infty}^{+\infty}f_n(x)e^{-Ax^2}dx-\int_0^1f(t)dt\big|\le 2\int_0^1|f(t)|dt\cdot\sum_{k=1}^\infty e^{\frac{-k^2\pi^2}{A}}\le M e^{\frac{-\pi^2}{A}},\tag{4}$$ where $M>0$ is independent of $n\ge 1$ and $0<A\le 1$. Since for every $m\in\mathbb{Z}$, $$\int_m^{m+1}|f_n(x)-f(x)|dx=\int_0^1|f_n(x)-f(x)|dx,$$ $$ \int_{-\infty}^{+\infty}|f_n(x)-f(x)|e^{-Ax^2}dx\le2 \int_0^1|f_n(x)-f(x)|dx\cdot\sum_{m=0}^\infty e^{-Am^2}<\infty.\tag{5} $$ Letting $n\to\infty$ in $(5)$, from $(1)$ we know that $$\lim_{n\to\infty}\int_{-\infty}^{+\infty}|f_n(x)-f(x)|e^{-Ax^2}dx=0.\tag{6}$$ Combining $(4)$ and $(6)$, it follows that $$|\int_{-\infty}^{+\infty}f(x)e^{-Ax^2}dx-\sqrt{\frac{\pi}{A}}\int_0^1f(t)dt|\le M\sqrt{\frac{\pi}{A}} e^{\frac{-\pi^2}{A}}.\tag{7}$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/383592", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10", "answer_count": 4, "answer_id": 2 }
Dirichlet series generating function I am stuck on how to do this question: Let $d(n)$ denote the number of divisors of $n$. Show that the Dirichlet series generating function of the sequence $\{(d(n))^2\}$ equals $C^4(s)/C(2s)$, where $C(s)$ represents the Riemann zeta function. I apologize, I am not very accustomed to LaTeX. Any help is highly appreciated; I am studying for an exam. Everywhere I have looked on the internet says it is obvious, but none of them seem to want to explain why or how to do it, so please help. Thank you.
There are many different ways to approach this, depending on what you are permitted to use. One simple way is to use Euler products. The Euler product for $$Q(s) = \sum_{n\ge 1} \frac{d(n)^2}{n^s}$$ is given by $$ Q(s) = \prod_p \left( 1 + \frac{2^2}{p^s} + \frac{3^2}{p^{2s}} + \frac{4^2}{p^{3s}} + \cdots \right).$$ This should follow by inspection considering that $d(n)$ is multiplicative and $d(p^v) = v+1$, with $p$ prime. Now note that $$\sum_{k\ge 0} (k+1)^2 z^k = \sum_{k\ge 0} (k+2)(k+1) z^k - \sum_{k\ge 0} (k+1) z^k \\= \left(\frac{1}{1-z}\right)'' - \left(\frac{1}{1-z}\right)' = \frac{1+z}{(1-z)^3}.$$ It follows that the Euler product for $Q(s)$ is equal to $$ Q(s) = \prod_p \frac{1+1/p^s}{(1-1/p^s)^3}.$$ On the other hand, we have $$ \zeta(s) = \prod_p \frac{1}{1-1/p^s}$$ so that $$ \frac{\zeta^4(s)}{\zeta(2s)} = \prod_p \frac{\left(\frac{1}{1-1/p^s}\right)^4}{\frac{1}{1-1/p^{2s}}} = \prod_p \frac{1-1/p^{2s}}{\left(1-1/p^s\right)^4} = \prod_p \frac{1+1/p^s}{\left(1-1/p^s\right)^3}.$$ The two Euler products are the same, QED.
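A brute-force check of the coefficient identity hiding behind this Euler-product computation — namely $d(n)^2=\sum_{d^2\mid n}\mu(d)\,d_4(n/d^2)$, where $d_4$ is the $4$-fold divisor function coming from $\zeta^4$ and the $\mu(d)$ weights come from $1/\zeta(2s)$. This snippet is a sketch of my own, not part of the original answer.

```python
from math import isqrt
from sympy import divisor_count, factorint

def mobius(n):
    f = factorint(n)
    return 0 if any(e > 1 for e in f.values()) else (-1) ** len(f)

N = 200
d4 = [0] * (N + 1)                 # d4(n) = #{(a,b,c,e) : a*b*c*e = n}
for a in range(1, N + 1):
    for b in range(1, N // a + 1):
        for c in range(1, N // (a * b) + 1):
            for e in range(1, N // (a * b * c) + 1):
                d4[a * b * c * e] += 1

for n in range(1, N + 1):
    rhs = sum(mobius(d) * d4[n // (d * d)]
              for d in range(1, isqrt(n) + 1) if n % (d * d) == 0)
    assert divisor_count(n) ** 2 == rhs
print("coefficient identity verified up to", N)
```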
{ "language": "en", "url": "https://math.stackexchange.com/questions/383733", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 1, "answer_id": 0 }
How big is the size of all infinities? "Not only infinite - it's "so big" that there is no infinite set so large as the collection of all types of infinity..." What exactly does that mean? How many infinities are there? I've heard there are more than infinite infinities? What does that mean? Is that true? Will anyone ever be able to know how many infinities there are? Does God know how many there are? Are there so many that God doesn't even know?
In the world of natural numbers it is known that $2 ^ a \neq 3 ^ b$ for any pair of positive integers $a$ and $b$. This is true for any pair of distinct primes. So if we believe there is only one infinitely large natural number ($\infty$), then from the above statement: $2 ^ \infty \neq 3 ^ \infty$ Let $2 ^ \infty $ be $\infty_{2}$ Let $3 ^ \infty $ be $\infty_{3}$ Then $\infty_{2}$ and $\infty_{3}$ are two distinct infinitely large natural numbers. Since there is an infinite number of primes we could have chosen in place of 2 and 3, the implication is that there is an infinite number of infinitely large natural numbers.
{ "language": "en", "url": "https://math.stackexchange.com/questions/383811", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Evaluating Complex Integral. I am trying to evaluate the following integrals: $$\int\limits_{-\infty}^\infty \frac{x^2}{1+x^2+x^4}dx $$ $$\int\limits_{0}^\pi \frac{d\theta}{a\cos\theta+ b} \text{ where }0<a<b$$ My very limited text has the following substitution: $$\int\limits_0^\infty \frac{\sin x}{x}dx = \frac{1}{2i}\int\limits_{\delta}^R \frac{e^{ix}-e^{-ix}}{x}dx \cdots $$ Is the same kind of substitution available for the polynomial? Thanks for any help. I apologize in advance for slow responses, I have a disability that limits me to an on-screen keyboard.
For the first one, write $\dfrac{x^2}{1+x^2+x^4}$ as $\dfrac{x}{2(1-x+x^2)} - \dfrac{x}{2(1+x+x^2)}$. Now $$\dfrac{x}{(1-x+x^2)} = \dfrac{x-1/2}{\left(x-\dfrac12\right)^2 + \left(\dfrac{\sqrt3}2 \right)^2} + \dfrac{1/2}{\left(x-\dfrac12\right)^2 + \left(\dfrac{\sqrt3}2 \right)^2}$$ and $$\dfrac{x}{(1+x+x^2)} = \dfrac{x+1/2}{\left(x+\dfrac12\right)^2 + \left(\dfrac{\sqrt3}2 \right)^2} - \dfrac{1/2}{\left(x+\dfrac12\right)^2 + \left(\dfrac{\sqrt3}2 \right)^2}$$ I trust you can take it from here. For the second one, from the Taylor series of $\dfrac1{(b+ax)}$, we have $$\dfrac1{(b+a \cos(t))} = \sum_{k=0}^{\infty} \dfrac{(-a)^k}{b^{k+1}} \cos^{k}(t)$$ Now $$\int_0^{\pi} \cos^k(t) dt = 0 \text{ if $k$ is odd}$$ We also have that $$\color{red}{\int_0^{\pi}\cos^{2k}(t) dt = \dfrac{(2k-1)!!}{(2k)!!} \times \pi = \pi \dfrac{\dbinom{2k}k}{4^k}}$$ Hence, $$I=\int_0^{\pi}\dfrac{dt}{(b+a \cos(t))} = \sum_{k=0}^{\infty} \dfrac{a^{2k}}{b^{2k+1}} \int_0^{\pi}\cos^{2k}(t) dt = \dfrac{\pi}{b} \sum_{k=0}^{\infty}\left(\dfrac{a}{2b}\right)^{2k} \dbinom{2k}k$$ Now from Taylor series, we have $$\color{blue}{\sum_{k=0}^{\infty} x^{2k} \dbinom{2k}k = (1-4x^2)^{-1/2}}$$ Hence, $$\color{green}{I = \dfrac{\pi}{b} \cdot \left(1-\left(\dfrac{a}b\right)^2 \right)^{-1/2} = \dfrac{\pi}{\sqrt{b^2-a^2}}}$$
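For reference, both closed forms are easy to confirm numerically; $a=1, b=2$ below is an arbitrary test pair with $0<a<b$, and the first integral comes out to $\pi/\sqrt3$ once the limits are taken:

```python
import numpy as np
from scipy.integrate import quad

val1, _ = quad(lambda x: x**2 / (1 + x**2 + x**4), -np.inf, np.inf)
print(val1, np.pi / np.sqrt(3))           # first integral: pi/sqrt(3)

a, b = 1.0, 2.0                           # arbitrary test pair, 0 < a < b
val2, _ = quad(lambda t: 1.0 / (a*np.cos(t) + b), 0, np.pi)
print(val2, np.pi / np.sqrt(b*b - a*a))   # second integral: pi/sqrt(b^2-a^2)
```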
{ "language": "en", "url": "https://math.stackexchange.com/questions/383874", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 4, "answer_id": 0 }
Practice Preliminary exam - evaluate the limit This is from a practice prelim exam and I know I should be able to get this one. $$ \lim_{n\to\infty} n^{1/2}\int_0^\infty \left( \frac{2x}{1+x^2} \right)^n dx $$ I have tried many different $u$-substitutions but to no avail. I have tried $$ u = \log(1+x^2) $$ $$ du = \frac{2x}{1+x^2}dx $$ but did not get anywhere
Let $x = \tan(t)$. We then get that \begin{align} I(n) & = \int_0^{\infty} \left(\dfrac{2x}{1+x^2} \right)^n dx = \int_0^{\pi/2} \sin^n(2t) \sec^2(t) dt = 2^n \int_0^{\pi/2} \sin^n(t) \cos^{n-2}(t) dt\\ & = 4\int_0^{\pi/2}\sin^2(t) \sin^{n-2}(2t)dt \tag{$\star$} \end{align} Replacing $t$ by $\pi/2-t$, we get $$I(n) = 4 \int_0^{\pi/2} \cos^2(t) \sin^{n-2}(2t) dt \tag{$\perp$}$$ Adding $(\star)$ and $(\perp)$, we get that $$2I(n) = 4 \int_0^{\pi/2} \sin^{n-2}(2t)dt \implies I(n) = 2 \int_0^{\pi/2} \sin^{n-2}(2t)dt$$ I trust you can take it from here, using this post, which evaluates $\displaystyle \int_0^{\pi} \sin^{k}(t) dt$ and using Stirling (or) Wallis formula.
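Pushing the Wallis asymptotics through gives $\sqrt{n}\,I(n)\to\sqrt{2\pi}$, and a direct numerical check of the original integral is consistent with that (a sketch; the range is truncated at $x=50$, where the integrand is negligible):

```python
import numpy as np
from scipy.integrate import quad

for n in (100, 400, 1600):
    # the integrand peaks sharply at x = 1, so give quad a hint
    I, _ = quad(lambda x: (2*x / (1 + x*x))**n, 0, 50, points=[1.0], limit=200)
    print(n, np.sqrt(n) * I)   # approaches sqrt(2*pi) ~ 2.50663
```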
{ "language": "en", "url": "https://math.stackexchange.com/questions/383953", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 4, "answer_id": 0 }
prove that : $ \sum_{n=0}^\infty |x_n|^2 = +\infty \Rightarrow \sum_{n=0}^\infty |x_n| = +\infty $ I initially wanted to prove that a function is well defined, and I concluded that it's enough to prove the following statement for a sequence $x_n$: $ \sum_{n=0}^\infty |x_n|^2 = +\infty \Rightarrow \sum_{n=0}^\infty |x_n| = +\infty $ or, equivalently (the contrapositive): $ \sum_{n=0}^\infty |x_n| < +\infty \Rightarrow \sum_{n=0}^\infty |x_n|^2 < +\infty $ Any help?
Hint: If $\sum a_n < \infty$, then $\displaystyle \lim_{n \to \infty} a_n = 0$. In particular for large $n$, we have $|a_n| \leq 1$. Then $$ |a_n|^2 = |a_n| |a_n| \leq \dots $$
{ "language": "en", "url": "https://math.stackexchange.com/questions/384015", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
Which probability law? It may be a basic probability law in another form, but I cannot figure it out. Why can we say the following: $P(A∩B|C) = $$P(A|B∩C)P(B|C)$ Thank you.
You should simply expand upon the definition: $P(A \wedge B | C) = \frac{P(A \wedge B \wedge C)}{P(C)}$. $P(A | B \wedge C) = \frac {P( A \wedge B \wedge C)}{P( B \wedge C )}$. $P(B|C)=\frac{P( B \wedge C)}{P(C)}$. Now, we solve for $P(A \wedge B \wedge C)$ in the first two above and equate them: $P(A \wedge B | C) \cdot P(C)=P(A | B \wedge C) \cdot P( B \wedge C )$. Solve for $P(B \wedge C)$ in the third line and plug it in to the expression above.
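If it helps, here is an empirical illustration (a sketch with an arbitrary choice of dependent events, estimating each conditional probability directly by filtering samples); the two sides agree exactly, since the identity is pure algebra on the counts:

```python
import random
random.seed(1)

# three dependent events built from two uniform draws (arbitrary choices)
samples = []
for _ in range(400_000):
    x, y = random.random(), random.random()
    samples.append((x < 0.5, x + y < 1.0, y < 0.7))   # (A, B, C)

def cond(pred, given):
    sub = [s for s in samples if given(s)]
    return sum(1 for s in sub if pred(s)) / len(sub)

lhs = cond(lambda s: s[0] and s[1], lambda s: s[2])        # P(A,B | C)
rhs = cond(lambda s: s[0], lambda s: s[1] and s[2]) * \
      cond(lambda s: s[1], lambda s: s[2])                 # P(A | B,C) P(B | C)
print(lhs, rhs)   # exactly equal: the identity is algebra on the counts
```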
{ "language": "en", "url": "https://math.stackexchange.com/questions/384088", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
Can't prove this elementary algebra problem $x^2 + 8x + 16 - y^2$ First proof: $(x^2 + 8x + 16) – y^2$ $(x + 4)^2 – y^2$ $[(x + 4) + y][(x + 4) – y]$ 2nd proof where I mess up: $(x^2 + 8x) + (16 - y^2)$ $x(x + 8) + (4 + y)(4 - y)$ $x + 1(x + 8)(4 + y)(4 - y)$ ???? I think I'm breaking one of algebra's golden rules, but I can't find it.
Let us fix your second approach. $$ \begin{align} &x^2+8x+16-y^2\\ =&x^2+8x+(4-y)(4+y)\\ =&x^2+x\big[(4-y)+(4+y)\big]+(4-y)(4+y)\\ =&(x+4-y)(x+4+y). \end{align} $$
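One can confirm the factorization mechanically, e.g. with sympy:

```python
from sympy import symbols, expand, factor

x, y = symbols('x y')
print(expand((x + 4 - y) * (x + 4 + y)))   # x**2 + 8*x - y**2 + 16
print(factor(x**2 + 8*x + 16 - y**2))      # (x - y + 4)*(x + y + 4)
```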
{ "language": "en", "url": "https://math.stackexchange.com/questions/384129", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Longest antichain of divisors I need to find a way to calculate the length of the longest antichain of divisors of a number N (example 720 - 6, or 1450 - 4), with divisibility as the order relation. Is there a universally applicable way to approach this problem for a given N?
If $N=\prod_p p^{e_p}$ is the prime factorization of $N$, then the longest chain of divisors has length $1+\sum_p e_p$ (if we count $N$ and $1$ as part of the chain, otherwise subtract $2$) and can be realized by dividing by a prime in each step. Thus with $N=720=2^4\cdot 3^2\cdot 5^1$ we find $720\stackrel{\color{red}2}, 360\stackrel{\color{red}2}, 180\stackrel{\color{red}2}, 90\stackrel{\color{red}2}, 45\stackrel{\color{red}3}, 15\stackrel{\color{red}3}, 5\stackrel{\color{red}5}, 1$ and with $N=1450=2^1\cdot 5^2\cdot 29^1$ we find $1450\stackrel{\color{red}2},725\stackrel{\color{red}5},145\stackrel{\color{red}5},29\stackrel{\color{red}{29}}, 1$ (in red is the prime I divide by in each step; I start with the smallest, but the order does not matter).
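In code, both the length formula and the divide-by-the-smallest-prime construction take only a few lines (using sympy's factorint for the factorization):

```python
from sympy import factorint

def chain(n):
    out = [n]
    while n > 1:
        n //= min(factorint(n))   # divide by the smallest prime factor
        out.append(n)
    return out

print(chain(720))    # [720, 360, 180, 90, 45, 15, 5, 1], length 8
print(chain(1450))   # [1450, 725, 145, 29, 1], length 5
print(1 + sum(factorint(720).values()))   # 8, matching len(chain(720))
```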
{ "language": "en", "url": "https://math.stackexchange.com/questions/384212", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
If $S\times\mathbb{R}$ is homeomorphic to $T\times\mathbb{R}$, and $S$ and $T$ are compact, can we conclude that $S$ and $T$ are homeomorphic? If $S \times \mathbb{R}$ is homeomorphic to $T \times \mathbb{R}$ and $S$ and $T$ are compact, connected manifolds (according to an earlier question if one of them is compact the other one needs to be compact) can we conclude that $S$ and $T$ are homeomorphic? I know this is not true for non compact manifolds. I am mainly interested in the case where $S, T$ are 3-manifolds.
For closed 3-manifolds, taking the product with $\mathbb{R}$ doesn't change the fundamental group, so if the two products are homeomorphic, the original spaces have the same fundamental group, and closed 3-manifolds are uniquely determined by their fundamental group, if they are irreducible and non-spherical.
{ "language": "en", "url": "https://math.stackexchange.com/questions/384288", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "21", "answer_count": 1, "answer_id": 0 }
Group of invertible elements of a ring has never order $5$ Let $R$ be a ring with unity. How can I prove that the group of invertible elements of $R$ is never of order $5$? My teacher told me and my colleagues that the problem is very hard to solve. I would be glad if someone can provide me even a small hint because, at this point, I have no clue how to attack the problem.
Here are a couple of ideas: * *$-1 \in R$ is always invertible. If $-1 \neq 1$, then it follows that $R^*$ should have even order, a contradiction. Therefore, $1=-1$ in ring $R$, so in fact $R$ contains a subfield isomorphic to $\mathbb{F}_2$. *Let $a$ be the generator of $R^*$. Consider the subring $N \subseteq R$ generated by $1$ and $a$. Then in fact $N \simeq \mathbb{F}_2[x] / (f(x))$, where $\mathbb{F}_2[x]$ is the polynomial ring over $\mathbb{F}_2$, and $f(x)$ is some polynomial from that ring. *Since $a^5=1$, it follows that $f(x)$ divides $x^5+1$. This leaves only finitely many options for $f$, and therefore for $N$. Then you can deal with each case separately and see that in each case $|N^*| \neq 5$.
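A brute-force sketch of the case analysis in the last step: encode polynomials over $\mathbb{F}_2$ as bit masks and count the units of $\mathbb{F}_2[x]/(f)$ for each divisor $f$ of $x^5+1$ of positive degree. The possible orders turn out to be $1$ and $15$, so a unit group of order $5$ is impossible:

```python
# Brute force over F2 for the final step (an illustrative check, not the
# full proof).  Polynomials are bit masks: bit i is the coefficient of x^i.

def mul_mod(a, b, f):
    r = 0
    while b:                      # schoolbook multiplication over F2
        if b & 1:
            r ^= a
        a <<= 1
        b >>= 1
    df = f.bit_length() - 1
    while r and r.bit_length() - 1 >= df:   # reduce r modulo f
        r ^= f << (r.bit_length() - 1 - df)
    return r

def unit_count(f):
    d = f.bit_length() - 1
    return sum(1 for u in range(2**d)
               if any(mul_mod(u, v, f) == 1 for v in range(2**d)))

# divisors of x^5+1 = (x+1)(x^4+x^3+x^2+x+1) over F2, of positive degree
for f in (0b11, 0b11111, 0b100001):
    print(bin(f), unit_count(f))  # 1, 15, 15 -- never 5
```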
{ "language": "en", "url": "https://math.stackexchange.com/questions/384362", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "21", "answer_count": 2, "answer_id": 0 }
Integrating a school homework question. Show that $$\int_0^1\frac{4x-5}{\sqrt{3+2x-x^2}}dx = \frac{a\sqrt{3}+b-\pi}{6},$$ where $a$ and $b$ are constants to be found. Answer is: $$\frac{24\sqrt3-48-\pi}{6}$$ Thank you in advance!
On solving, we will find that the antiderivative is $$-4\sqrt{3+2x-x^{2}}-\sin^{-1}\left(\frac{x-1}{2}\right).$$ Now if you put in the appropriate limits you'll get the answer. First of all, write $$4x-5 = \mu \frac{d(3+2x-x^{2})}{dx}+\tau$$ for constants $\mu,\tau$. Matching coefficients gives $\mu=-2$ and $\tau=-1$. Then $$\int\frac{4x-5}{\sqrt{3+2x-x^{2}}}\,dx=\mu\int\frac{d(3+2x-x^{2})}{\sqrt{3+2x-x^{2}}}+\tau\int\frac{dx}{\sqrt{3+2x-x^{2}}}$$$$= -2\left(2\sqrt{3+2x-x^{2}}\right)-\int\frac{d(x-1)}{\sqrt{2^{2}-(x-1)^{2}}}$$$$=-4\sqrt{3+2x-x^{2}}-\sin^{-1}\left(\frac{x-1}{2}\right)$$
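Numerically, the antiderivative evaluated between the limits gives $(24\sqrt3-48-\pi)/6\approx-1.5954$, which a direct quadrature confirms:

```python
import numpy as np
from scipy.integrate import quad

val, _ = quad(lambda x: (4*x - 5) / np.sqrt(3 + 2*x - x*x), 0, 1)
print(val, (24*np.sqrt(3) - 48 - np.pi) / 6)   # both ~ -1.59543
```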
{ "language": "en", "url": "https://math.stackexchange.com/questions/384530", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Finding Markov chain transition matrix using mathematical induction Let the transition matrix of a two-state Markov chain be $$P = \begin{bmatrix}p& 1-p\\ 1-p& p\end{bmatrix}$$ Questions: a. Use mathematical induction to find $P^n$. b. When n goes to infinity, what happens to $P^n$? Attempt: i'm able to find $$P^n = \begin{bmatrix}1/2 + 1/2(2p-1)^n& 1/2 - 1/2(2p-1)^n\\ 1/2 - 1/2(2p-1)^n & 1/2 + 1/2(2p-1)^n\end{bmatrix}$$ I don't know how to use induction to get there even though I know induction step. i.e n=1 true. suppose it's true for n = k, we need to prove it's also true for k+1...
Your initial $P^1$ matrix has first row $[p,1-p]$ and second row the reverse of that. Your goal matrix for $P^n$ also has its entries in the same form, with first row say $[a_n,b_n]$ and second row the reverse of that. So an approach would be to multiply the matrix for $P^n$ by the matrix $P$, and its top row will be $$[pa_n+(1-p)b_n,(1-p)a_n+pb_n],$$ with bottom row the reverse of that. Now you just have to check that when $$a_n=1/2+(1/2)(2p-1)^n, \\ b_n = 1/2-(1/2)(2p-1)^n,$$ and the above calculations are done, you obtain the formulas for $a_{n+1}$ and $b_{n+1}$ in the new version of row one of $P^{n+1}$. That is, the form where the above values of $a_n,b_n$ have their $n$ replaced by $n+1$. It seems this should be just simple algebra, though admittedly I didn't check, I'm just suggesting an approach. ADDED: Actually the algebra is very simple, if you do the $1/2$ part separately from the $\pm (2p-1)^n$ part: $p\cdot (1/2)+(1-p)\cdot(1/2)=1/2$, while $$p\cdot (1/2)(2p-1)^n +(1-p) \cdot (-1/2)(2p-1)^n = \\ p \cdot (1/2)(2p-1)^n +(p-1) \cdot(1/2)(2p-1)^n= \\ (2p-1)\cdot(1/2) (2p-1)^n,$$ which is of course $(1/2)(2p-1)^{n+1}.$
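A quick numerical comparison of $P^n$ with the closed form, together with its $n\to\infty$ limit for part b, can be done in a few lines (the values $p=0.3$ and $n=8$ are arbitrary):

```python
import numpy as np

p, n = 0.3, 8                                # arbitrary test values
P = np.array([[p, 1 - p], [1 - p, p]])
closed = 0.5*np.ones((2, 2)) + 0.5*(2*p - 1)**n * np.array([[1, -1], [-1, 1]])
print(np.allclose(np.linalg.matrix_power(P, n), closed))   # True

# part b: for 0 < p < 1, (2p-1)^n -> 0, so P^n -> the all-1/2 matrix
print(np.linalg.matrix_power(P, 100))
```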
{ "language": "en", "url": "https://math.stackexchange.com/questions/384592", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
If $\{w^k|w\in L\}$ regular implies L regular? If L is a language and the language $$\tilde{L}:=\{x^k,x\in L, k\in\mathbb{N}\}$$ is regular, does that imply that L is regular? ($|L|<\infty$ gives equivalence) We came across this question when trying to prove an explicit example, namely Prove that $L=\{a^{p^k}\mid p\text{ prime}, k\in\mathbb{N}\}$ is not regular. It was fairly easy to show that $L_1=\{\underbrace{a^{p^1}}_{=a^p}\mid p\text{ prime}\}$ is not regular. If the above statement were true, this would give a very easy proof for the assignment. I am asking out of interest, there are other ways to solve the question (I am NOT asking for answers on this example). I tried proving it via the pumping lemma, explicit construction of a FSA and the Myhill-Nerode theorem, however I was unable to find any notable results. I am unsure if this statement is true or if we can find a counterexample (I have been unable to do so)
No. Consider $L = \{a^{n^2} \colon n \ge 1\}$. As $a \in L$, your $\tilde{L} = \mathcal{L}(a^*)$, which is regular, but $L$ isn't.
{ "language": "en", "url": "https://math.stackexchange.com/questions/384644", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Find the value of $\int_{-\infty}^\infty \int_{-\infty}^\infty e^{-(x^2+xy+y^2)} \, dx\,dy$ Given that $\int_{-\infty}^\infty e^{-x^2} \, dx=\sqrt{\pi}$. Find the value of $$\int_{-\infty}^\infty\int_{-\infty}^\infty e^{-(x^2+xy+y^2)} \, dx\,dy$$ I don't understand how I find this double integral by using the given data. Please help.
For fun, I want to point out that much more can be said. The spectral theorem for real symmetric matrices tells us that real symmetric matrices are orthogonally diagonalizable. Thus, if $A$ is a symmetric positive-definite matrix, then there exists an orthogonal matrix $U$ and a diagonal matrix $D$ with positive diagonal entries $\lambda_1,\dots,\lambda_n$ such that $A$ can be written as $A=U^{-1} DU$. Hence $x^TAx=x^TU^TDUx=(Ux)^TD(Ux)$. The Jacobian det of the transformation $x\mapsto Ux$ is simply $1$, so we have a change of variables $u=Ux$: $$\begin{align} \int_{{\bf R}^n}\exp\left(-x^TAx\right)\,dV & =\int_{{\bf R}^n}\exp\left(-(\lambda_1u_1^2+\cdots+\lambda_nu_n^2)\right)\,dV \\[6pt] & =\prod_{i=1}^n\int_{-\infty}^{+\infty}\exp\left(-\lambda_i u_i^2\right)du_i \\[6pt] & =\prod_{i=1}^n\left[\frac{1}{\sqrt{\lambda_i}}\int_{-\infty}^{+\infty}e^{-u^2} \, du\right] \\[6pt] & =\sqrt{\frac{\pi^n}{\det A}}.\end{align}$$ Note that we don't even have to compute $U$ or $D$. The above calculation is the generalized version of the "completing the squares" approach when $n=2$ (which is invoked elsewhere in this thread). This formula is in fact the basis for the Feynman path integral formulation of the functional determinant from quantum field theory; since technically the integral diverges we need to compare them instead of looking at individual ones outright. Wikipedia has some more details.
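For the $2\times2$ case $A=\begin{bmatrix}1&1/2\\1/2&1\end{bmatrix}$, so that $x^TAx=x^2+xy+y^2$ and $\det A=3/4$, the formula predicts $2\pi/\sqrt3$; direct quadrature confirms this:

```python
import numpy as np
from scipy.integrate import dblquad

val, _ = dblquad(lambda y, x: np.exp(-(x*x + x*y + y*y)),
                 -np.inf, np.inf, lambda x: -np.inf, lambda x: np.inf)
A = np.array([[1.0, 0.5], [0.5, 1.0]])
print(val, np.sqrt(np.pi**2 / np.linalg.det(A)))   # both ~ 2*pi/sqrt(3)
```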
{ "language": "en", "url": "https://math.stackexchange.com/questions/384732", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 3, "answer_id": 0 }
Showing $(v - \hat{v})\,\bot\,v$ $\fbox{Setting}$ Let $V$ be an inner-product space with $v \in V$. Suppose that $\mathcal{O} = \{u_1, \ldots, u_n\}$ forms an orthonormal basis of $V$. Let $\hat{v} = \left\langle u_1, v\right\rangle u_1 + \ldots + \left\langle u_n,v\right\rangle u_n$ denote the Fourier Expansion of $v$ with respect to $\mathcal{O}$. $\fbox{Question}$ How do we show that $\left\langle v - \hat{v}, \hat{v} \right\rangle = \left\langle v, \hat{v} \right\rangle - \left\langle \hat{v}, \hat{v} \right\rangle$ is $0$?
$$ \begin{align} \langle v-\hat{v},\hat{v}\rangle &=\left\langle\color{#C00000}{v}-\color{#00A000}{\sum_{k=1}^n\langle v,u_k\rangle u_k},\color{#0000FF}{\sum_{k=1}^n\langle v,u_k\rangle u_k}\right\rangle\\ &=\left\langle\color{#C00000}{v},\color{#0000FF}{\sum_{k=1}^n\langle v,u_k\rangle u_k}\right\rangle-\left\langle\color{#00A000}{\sum_{j=1}^n\langle v,u_j\rangle u_j},\color{#0000FF}{\sum_{k=1}^n\langle v,u_k\rangle u_k}\right\rangle\\ &=\color{#0000FF}{\sum_{k=1}^n\langle v,u_k\rangle}\langle\color{#C00000}{v},\color{#0000FF}{u_k}\rangle-\color{#00A000}{\sum_{j=1}^n\langle v,u_j\rangle}\color{#0000FF}{\sum_{k=1}^n\langle v,u_k\rangle}\langle\color{#00A000}{u_j},\color{#0000FF}{u_k}\rangle\\ &=\sum_{k=1}^n\langle v,u_k\rangle^2-\color{#00A000}{\sum_{j=1}^n\langle v,u_j\rangle}\color{#0000FF}{\langle v,u_j\rangle}\tag{$\ast$}\\ &=\sum_{k=1}^n\langle v,u_k\rangle^2-\sum_{j=1}^n\langle v,u_j\rangle^2\\[6pt] &=0 \end{align} $$ $(\ast)$ since $\langle u_j,u_k\rangle=1$ when $k=j$ and $0$ otherwise.
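A numerical instance may make the cancellation concrete. The sketch below works in $\mathbb{R}^5$ with an orthonormal set produced by a QR factorization, and deliberately uses only $m=3$ of the vectors so that $v-\hat v\neq0$ (the same algebra applies to a partial orthonormal set):

```python
import numpy as np

rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.standard_normal((5, 5)))    # columns are orthonormal
v = rng.standard_normal(5)

m = 3                                               # partial orthonormal set
v_hat = sum((Q[:, k] @ v) * Q[:, k] for k in range(m))
print(np.dot(v - v_hat, v_hat))                     # ~ 0 (float rounding)
print(np.linalg.norm(v - v_hat))                    # > 0, so not trivial
```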
{ "language": "en", "url": "https://math.stackexchange.com/questions/384789", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
The set of numbers whose decimal expansions contain only 4 and 7 Let $S$ be the set of numbers in $X=[0,1]$ that, when expanded in decimal form, have digits that are 4 or 7 only. The following are the problems. a), Is S countable ? b), Is it dense in $X$ ? c), Is it compact ? d), Is it perfect ? For a), I want to say that it is intuitively, but I have no idea how to prove this. I tried to come up with a bijection between $S$ and $\Bbb Z$ but I couldn't find one. For b), my understanding of a set being "dense" means that every point in $X$ is either a limit point of $S$ or a point in $S$. Am I right? Even if I were, I am not sure how to show this. For c), my intuition tells me that it is, because it is bounded. So if I could show that it is closed I would be done, I think. But I am still iffy with the idea of limit points, and I am not sure what kind of limit points there are in $S$. For d), Because I can't show that it's closed I am completely stuck. I am teaching myself analysis, and I only know up to abstract algebra. Since I never took topology, please give me an explanation that helps without knowledge of advanced math.
Hints: For b, can you get close to $0.2?$ For c, you are correct that it is bounded, so you need to investigate closed. Let $y \in [0,1]$, but $y \not \in S$. Then there is some digit of $y$ that is not $4$ or $7$.... For d, you need to show that any point of $S$ is a limit of a sequence of other elements of $S$. Let $x \in S$. Can you find a list of other elements in $S$ that get closer and closer to $x$? For starters, can you describe another point within $\pm 0.1$ of $x$?
{ "language": "en", "url": "https://math.stackexchange.com/questions/384856", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 3, "answer_id": 0 }
Calculating time to 0? Quick question format: Let $a_n$ be the sequence given by the rule: $$a_0=k,a_{n+1}=\alpha a_n−\beta$$ Find a closed form for $a_n$. Long question format: If I have a starting value $x=100000$ then first multiply $x$ by $i=1.05$, then subtract $e=9000$. Let's say $y$ is how many times you do it. Anyone know of a formula to get $e$ given $y,x,i$ and given the total must be $0$ after $y$ "turns"? Example: $$y=1: xi-e=0$$ $$y=2: ((xi-e)i-e)=0$$ and so on...
Let $u_0 = x$, and $u_{n+1} = iu_n - e$. To solve this equation, let $l$ be the fixed point, $l = il-e$, and set $v_n = u_n - l$. Then $v_{n+1}= i v_n$ and $v_n = i^n v_0$, so $u_n = l + i^n v_0 = l + i^n (x - l)$. But $l = e/(i-1) = 180 000$. So $u_n = 180 000 - 80000\cdot 1.05^n$. So $u_n \leq 0$ is equivalent to $$180 000 - 80000\cdot 1.05^n \leq 0$$ $$1.05^n \geq 180000/80000 = 2.25$$ $$n \geq 17$$ (And $u_{17} \neq 0$!) General case: Let $l = e/(i-1)$. If $i > 1$ and $x < l$, $u_n \leq 0$ is equivalent to $$l + i^n(x-l) \leq 0$$ $$i^n \geq l/(l-x)$$ $$n \geq \frac{\log(l) - \log(l-x)}{\log(i)}$$ So $$y = E\left(\frac{\log(l) - \log(l-x)}{\log(i)}\right) + 1$$
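In code, both the closed form and a direct simulation of the recurrence give $y=17$:

```python
import math

x, i, e = 100_000, 1.05, 9_000
l = e / (i - 1)                                   # fixed point: 180000

y = math.floor((math.log(l) - math.log(l - x)) / math.log(i)) + 1
print(y)                                          # 17

u, n = x, 0                                       # direct simulation agrees
while u > 0:
    u, n = i*u - e, n + 1
print(n, u)                                       # 17, with u_17 < 0 (not exactly 0)
```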
{ "language": "en", "url": "https://math.stackexchange.com/questions/384867", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Is Vector Calculus useful for pure math? I have the option to take a vector calculus class at my uni but I have received conflicting opinions from various professors about this class's use in pure math (my major emphasis). I was wondering what others thought about the issue. I appreciate any advice.
The answer depends on your interests, and on the place you continue your education. In some areas in the world, PhD's are very specialized so any course that is not directly related to the subject matter is not necessary. One could complete a pure math PhD and not know vector or multivariate calculus. However, this is increasingly rare; more and more PhD programs are following the American model, expecting their graduates to have some breadth of knowledge in addition to the specialized skills required for the PhD thesis itself. Such programs would consider a lack of vector calculus a serious deficiency and may not even consider your application into any math PhD program, pure or applied. Certainly this is the case at most universities in the U.S.
{ "language": "en", "url": "https://math.stackexchange.com/questions/384940", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 1 }
Homogeneous measure on the positive real half-line Define a measure $\mu\not=0$ on the positive real numbers $\Bbb R_{>0}$ such that for any measurable set $E\subset\Bbb R_{>0}$ and $a\in \Bbb R_{>0} $, we have $\mu(aE)= \mu(E)$, where $aE=\{ax : x\in E\}$. I am totally blank about this problem. I have pondered it several times but didn't get any idea. This exercise illustrates how different such a measure must be from Lebesgue measure: if $E$ is an interval, Lebesgue measure scales under dilation instead of staying invariant.
I'll answer in greater generality. A common way to construct a measure is to take a nonnegative locally integrable function $w$ and define $\mu(E)=\int_E w(x)\,dx$. This does not give all measures (only those that are absolutely continuous with respect to $dx$) but for many examples that's enough. In terms of $w$, the desired condition translates to $$\int_E w(x)\,dx = \int_{aE} w(x)\,dx\tag1$$ A way to get a handle on (1) is to bring both integrals to the same domain of integration. So, change the variable $x=ay$ in the second one, so that it becomes $\int_{E} a\,w(ay)\,dy$. Which is the same as $\int_{E} a\,w(ax)\,dx$, because the name of the integration variable does not matter. So, (1) takes the form $$\int_E w(x)\,dx = \int_{E} a\, w(ax)\,dx \tag2$$ or, better yet, $$\int_E ( w(x)-a\, w(ax))\,dx = 0 \tag3$$ For (3) to hold for every measurable set, the integrand should be zero almost everywhere. So, we need a function $w$ such that $w(ax)=w(x)/a$ for all $x$. In particular, $w(a)=w(1)/a$ for all $a$, which tells us what the function is. (The value of $w(1)$ can be chosen to be any positive number). The above can be generalized to produce measures such that $$\mu(aE) =a^p \mu(E)$$ for all $a>0$, where $p$ can be any fixed real number.
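For the original question (the case $p=0$), this gives $w(x)=c/x$, i.e. $\mu(E)=\int_E dx/x$, and the dilation invariance is easy to see numerically:

```python
from scipy.integrate import quad
import math

w = lambda x: 1.0 / x                    # the p = 0 solution, with w(1) = 1
mu = lambda lo, hi: quad(w, lo, hi)[0]   # mu of an interval [lo, hi]

for a in (0.5, 2.0, 7.3):
    print(a, mu(2, 3), mu(2*a, 3*a))     # all equal log(3/2) for every a
print(math.log(3/2))                     # ~ 0.405465
```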
{ "language": "en", "url": "https://math.stackexchange.com/questions/385008", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 1, "answer_id": 0 }
Closed form for $\sum_{n=1}^\infty\frac{(-1)^n n^4 H_n}{2^n}$ Please help me to find a closed form for the sum $$\sum_{n=1}^\infty\frac{(-1)^n n^4 H_n}{2^n},$$ where $H_n$ are harmonic numbers: $$H_n=\sum_{k=1}^n\frac{1}{k}=\frac{\Gamma'(n+1)}{n!}+\gamma.$$
$$\sum_{n=1}^\infty\frac{(-1)^n n^4 H_n}{2^n}=\frac{28}{243}+\frac{10}{81} \log \left(\frac{2}{3}\right).$$ Hint: Change the order of summation: $$\sum_{n=1}^\infty\frac{(-1)^n n^4 H_n}{2^n}=\sum_{n=1}^\infty\sum_{k=1}^n\frac{(-1)^n n^4}{2^n k}=\sum_{k=1}^\infty\sum_{n=k}^\infty\frac{(-1)^n n^4}{2^n k}.$$
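The closed form checks out numerically; since the terms decay geometrically, a few hundred terms of the series suffice:

```python
from mpmath import mp, mpf, log

mp.dps = 30
H, S = mpf(0), mpf(0)
for n in range(1, 301):
    H += mpf(1) / n                       # harmonic number H_n
    S += (-1)**n * mpf(n)**4 * H / mpf(2)**n
print(S)                                  # ~ 0.0651689...
print(mpf(28)/243 + mpf(10)/81 * log(mpf(2)/3))   # matches
```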
{ "language": "en", "url": "https://math.stackexchange.com/questions/385067", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "26", "answer_count": 3, "answer_id": 0 }
Why isn't there a continuously differentiable surjection from $I \to I \times I$? I was asked this question recently in an interview. Why can't there be a a continuously differentiable map $f \colon I \to I \times I$, which is also surjective? In contrast to just continuous, where we have examples of space filling curves. I have one proof which is direct via Sard's theorem. But the statement seems simple enough to have a proof using at most implicit (and inverse) function theorem.
The image of any continuously differentiable function $f$ from $I$ to $I\times I$ has measure zero; in particular, it cannot be the whole square. To see this, note that $\|f'\|$ has a maximum value $M$ on $I$. This implies that the image of any subinterval of $I$ of length $\epsilon$ lies inside a disk of diameter $M\epsilon$, by the mean value theorem. The union of these $\approx 1/\epsilon$ disks, which contains the image of $f$, has measure $\approx \pi M^2\epsilon$. Now let $\epsilon$ tend to $0$.
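To see the covering estimate in action on a concrete curve, here is a sketch with the arbitrary choice $f(t)=(t,\sin(\pi t)/2)$: the $N$ disks have total area $N\cdot\pi(M\epsilon/2)^2=\pi M^2\epsilon/4$ with $\epsilon=1/N$, which vanishes as $N\to\infty$:

```python
import numpy as np

# |f'(t)| for the sample curve f(t) = (t, sin(pi*t)/2) on [0, 1]
speed = lambda t: np.hypot(1.0, np.pi * np.cos(np.pi * t) / 2)
M = max(speed(t) for t in np.linspace(0, 1, 1001))   # sup of |f'|

for N in (10, 100, 1000):
    eps = 1.0 / N
    print(N, N * np.pi * (M * eps / 2)**2)   # total disk area -> 0
```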
{ "language": "en", "url": "https://math.stackexchange.com/questions/385165", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }