Limits calculus very short question? Can you help me to solve this limit? $\frac{\cos x}{(1-\sin x)^{2/3}}$... as $x \rightarrow \pi/2$, how can I transform this?
Hint: let $y = \pi/2 - x$ and take the limit as $y \rightarrow 0$. In this case, the limit becomes $$\lim_{y \rightarrow 0} \frac{\sin{y}}{(1-\cos{y})^{2/3}}$$ That this limit diverges to $\infty$ may be shown several ways. One way is to recognize that, in this limit, $\sin{y} \sim y$ and $1-\cos{y} \sim y^2/2$, and the limit becomes $$\lim_{y \rightarrow 0} \frac{2^{2/3} y}{y^{4/3}} = \lim_{y \rightarrow 0} 2^{2/3} y^{-1/3} $$ which diverges.
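As a quick numerical sanity check of the hint (a sketch, not part of the original answer), one can evaluate the function at points approaching $\pi/2$ from the left and watch it grow like $2^{2/3}y^{-1/3}$:

```python
import math

# Evaluate cos(x) / (1 - sin(x))^(2/3) at x = pi/2 - y for shrinking y > 0.
# The asymptotics above predict growth like 2^(2/3) * y^(-1/3).
for k in range(1, 6):
    y = 10.0 ** (-k)
    x = math.pi / 2 - y
    value = math.cos(x) / (1 - math.sin(x)) ** (2 / 3)
    predicted = 2 ** (2 / 3) * y ** (-1 / 3)
    print(f"y = {y:.0e}:  f = {value:9.3f},  2^(2/3) y^(-1/3) = {predicted:9.3f}")
```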
{ "language": "en", "url": "https://math.stackexchange.com/questions/293560", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 1 }
Find $\lim_{(x,y) \to (0,0)}\frac{x^3\sin(x+y)}{x^2+y^2}$ Find the limit $$\lim_{(x,y) \to (0,0)}\frac{x^3\sin(x+y)}{x^2+y^2}.$$ How exactly can I do this? Thanks.
Using $|\sin z|\leq 1$ we find that the absolute value of your function is not greater than $$ \frac{|x^3|}{x^2+y^2}\leq \frac{|x^3|}{x^2}=|x|. $$ This bound holds when $x\neq 0$. Then observe that $|f(x,y)|\leq |x|$ also holds when $x=0$. Can you conclude?
{ "language": "en", "url": "https://math.stackexchange.com/questions/293641", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Homogeneous Linear Equation General Solution I'm having some difficulty understanding the solution to the following differential equation problem. Find a general solution to the given differential equation $4y'' - 4y' + y = 0$ The steps I've taken in solving this problem were to first find the auxiliary equation and then factor to find the roots. I listed the steps below: $4r^2 - 4r + 1 = 0$ $(2r - 1) \cdot (2r-1) = 0$ $\therefore r = \frac{1}{2} \text{ is the root}$ Given this information, I supposed that the general solution to the differential equation would be as follows: $y(t) = c_{1} \cdot e^{\frac{1}{2} t}$ But when I look at the back of my textbook, the correct answer is supposed to be $y(t) = c_{1} \cdot e^{\frac{1}{2} t} + c_{2} \cdot te^{\frac{1}{2} t}$ Now I know that understanding the correct solution has something to do with linear independence, but I'm having a hard time getting a deep understanding of what's going on. Any help would be appreciated in understanding the solution.
The story behind what is going on here is exactly the same one we always see when searching for a basis of a vector space $V$ over a field $K$: there, we look for a set of linearly independent vectors which generate the whole space. In the space of all solutions of an ODE, we do the same. For any homogeneous linear ODE with constant coefficients there is a routine way to find the solutions, and you did it right for this one. When the auxiliary equation has a single root of multiplicity two, one solution is, as you noted, $y_1(t)=\exp(0.5t)$, but we don't yet have another solution. So we should find another solution which is independent of the first one; the number of solutions in a fundamental set equals the order of the ODE, which is two here. It means that we need one solution $y_2$ such that the set $$\{\exp(0.5t),y_2(t)\}$$ is a fundamental set of solutions. For doing this you can use the method of reduction of order to find $$y_2(t)=t\exp(0.5t)$$
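For concreteness, here is a short sketch of that reduction-of-order computation. Substituting $y(t)=v(t)e^{t/2}$ into the equation, one finds $y'=(v'+\tfrac12 v)e^{t/2}$ and $y''=(v''+v'+\tfrac14 v)e^{t/2}$, so $$4y''-4y'+y=\left[4v''+4v'+v-4v'-2v+v\right]e^{t/2}=4v''e^{t/2}=0.$$ Hence $v''=0$, i.e. $v(t)=c_1+c_2t$, which recovers exactly the second solution $y_2(t)=t\exp(0.5t)$.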
{ "language": "en", "url": "https://math.stackexchange.com/questions/293725", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
When log is written without a base, is the equation normally referring to log base 10 or natural log? For example, this question presents the equation $$\omega(n) < \frac{\log n}{\log \log n} + 1.4573 \frac{\log n}{(\log \log n)^{2}},$$ but I'm not entirely sure if this is referring to log base $10$ or the natural logarithm.
In many programming languages, log is the natural logarithm. There are often variants for log2 and log10. Checked in C, C++, Java, JavaScript, R, Python
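For example, a quick check in Python's standard library (a small illustration of the convention, not part of the original answer):

```python
import math

# math.log is the natural logarithm (base e); the other bases have
# explicitly named variants.
print(math.log(math.e))   # 1.0 -> natural log
print(math.log2(8))       # 3.0 -> base-2 log
print(math.log10(1000))   # 3.0 -> base-10 log
print(math.log(100, 10))  # 2.0 -> optional second argument sets the base
```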
{ "language": "en", "url": "https://math.stackexchange.com/questions/293783", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "49", "answer_count": 6, "answer_id": 5 }
Need examples about injection (1-1) and surjection (onto) of composite functions The task is that I have to come up with examples for the following 2 statements: 1/ If the composite $g \circ f$ is injective (one-to-one), then $f$ is one-to-one, but $g$ doesn't have to be. 2/ If the composite $g \circ f$ is surjective (onto), then $g$ is onto, but $f$ doesn't have to be. I have some difficulties when I try to think of examples, especially for the "one-to-one" statement. ** For #1, I tried letting the function $f$ be something in the nature of $f(x) = kx$, which is clearly 1-1, and $g$ be something in the nature of $g(x) = x^2$, which is not 1-1. But then when I try to make $g \circ f$ out of these two by all of $+, -, \times$, and division, it turns out I can't find a 1-1 function $g \circ f$. ** For #2, I first thought about letting $f$ be $e^x$, which only covers the positive values of $y$, and then $g$ be something like $k - e^x$. I thought $g \circ f$ would cover the whole real line, but when I tried that in MATLAB, it only covers the negative values of $y$. So I'm totally wrong >_< Little note: I finished the proofs showing $f$ must be 1-1 (for #1) and $g$ must be onto (for #2). I just need examples for the "doesn't have to be" parts. Would someone please give me some suggestions? I appreciate any help. Thank you ^_^
Consider maps $\{0\}\xrightarrow{f}\{0,1\}\xrightarrow{g}\{0\}$, or $\{0\}\xrightarrow{f}A\xrightarrow{g}\{0\}$ where $A$ is any set with more than one element.
{ "language": "en", "url": "https://math.stackexchange.com/questions/293851", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
kernel and cokernel of a morphism Let $\phi: A=\mathbb{C}[x,y]/(y^2-x^3) \to \mathbb{C}[t]=B $ be the ring morphism defined by $x\mapsto t^2, y \mapsto t^3.$ Let $f:Y=\text{Spec} B \to X=\text{Spec} A$ be the associated morphism of affine schemes. Since $\phi$ is injective, then $f^{\sharp} : \mathcal{O}_X \to f_{\star}\mathcal{O}_Y$ is injective too. 1.What is the cokernel of $f^{\sharp}?$ I know that it will be isomorphic to the quotient sheaf $f_{\star}\mathcal{O}_Y/\mathcal{O}_X$ but I'm unable to find it explicitly. Here I computed the cotangent sheaf $\Omega_X$ of $X.$ 2.How can I find the kernel and the cokernel of $f^{\star} \Omega_X \to \Omega_Y?$ What I've tried out: The sheaf morphism corresponds to a $B$-module morphism $\Omega_A \otimes_A B \to \Omega_B$ or $Adx \oplus Ady/(2ydy-3x^2dx) \otimes B \to Bdt$ which is in fact, the map $(t)dt \oplus (t^2)dt \to Bdt$ where $(tg(t)dt,t^2h(t)dt)\mapsto (tg(t)+t^2h(t))dt$ for $g,h \in B.$ The kernel is isomorphic to $B$ since $g(t)=-th(t)$ and the cokernel is $B/(t).$ Is this correct?
Hopefully you know that coherent sheaves over $\mathrm{Spec}\, A$ are equivalent to finitely generated $A$-modules. The structure sheaf $\mathcal O_X$ corresponds to $A$ as a module over itself, and $f_\ast\mathcal O_Y$ corresponds to $\mathbb C[t]$ with the $A$-module structure given by restricting through $\phi$. So $f^\#$ corresponds to $\phi$ as a map of $A$-modules; the image of $\phi$ consists of all polynomials with no degree-$1$ term, so the cokernel is $(t)/(t^2)$. The $A$-module action on this is that $x$ and $y$ act as $t^2$ and $t^3$ respectively, hence $x$ and $y$ act as $0$. So the cokernel is a trivial $1$-dimensional $A$-module. Edit: I forgot about $1$, the cokernel should be $1$-dimensional, not $2$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/293923", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
$\int_0^\infty\frac{\log x dx}{x^2-1}$ with a hint. I have to calculate $$\int_0^\infty\frac{\log x dx}{x^2-1},$$ and the hint is to integrate $\frac{\log z}{z^2-1}$ over the boundary of the domain $$\{z\,:\,r<|z|<R,\,\Re (z)>0,\,\Im (z)>0\}.$$ I don't understand. The boundary of this domain has a pole of the integrand in it, doesn't it? Doesn't it make this method useless?
Note $$I(a)=\int_0^\infty\frac{\ln x}{(x+1)(x+a)}dx\overset{x\to\frac a x} = \frac{1}{2}\int_0^\infty\frac{\ln a}{(x+1)(x+a)}dx= \frac{\ln^2a}{2(a-1)} $$ Then $$\int_0^\infty\frac{\ln x}{x^2-1}dx=I(-1)=-\frac14 [\ln(e^{i\pi})]^2=\frac{\pi^2}4 $$
{ "language": "en", "url": "https://math.stackexchange.com/questions/293990", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "14", "answer_count": 5, "answer_id": 4 }
Normal Distribution Identity I have the following problem. I am reading a paper which uses this identity in a proof, but I can't see why it is true or how to prove it. Can you help me? \begin{align} \int_{x_{0}}^{\infty} e^{tx} n(x;\mu,\nu^2)dx &= e^{\mu t+\nu^2 t^2 /2} N\left(\frac{\mu - x_0 }{\nu} +\nu t \right) \end{align} where $n(\cdot)$ is the normal pdf with mean $\mu$ and variance $\nu^2$, and $N(\cdot)$ refers to the standard normal cdf. \begin{align} \int_{x_{0}}^{\infty} e^{tx} n(x;\mu,\nu^2)dx &= \int_{x_{0}}^{\infty} e^{tx} \frac{1}{\nu \sqrt{2\pi}} e^{-\frac{(x-\mu)^2}{2 \nu^2} } dx \\ &= \int_{x_{0}}^{\infty} \frac{1}{\nu \sqrt{2\pi}} e^{-\frac{(x-\mu)^2}{2 \nu^2} + tx } dx \\ &= \int_{x_{0}}^{\infty} \frac{1}{\nu \sqrt{2\pi}} e^{-\frac{x^2 -2x\mu +\mu^2 -2\nu^2tx}{2 \nu^2}} dx \end{align} and I'm stuck here. Thank you.
1. Substitute the expression for the normal pdf.
2. Gather together the powers of $e$.
3. Complete the square in the exponent of $e$ to get the square of something plus a constant (see the sketch below).
4. Take the constant powers of $e$ out of the integral.
5. Change variables to turn the integral into an integral of the standard normal pdf from $-\infty$ to some number $a$.
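Carrying out step 3 explicitly (a sketch of the key algebra, not part of the original answer): completing the square gives $$-\frac{(x-\mu)^2}{2\nu^2}+tx=-\frac{\left(x-(\mu+\nu^2 t)\right)^2}{2\nu^2}+\mu t+\frac{\nu^2 t^2}{2},$$ so the constant factor $e^{\mu t+\nu^2 t^2/2}$ comes out of the integral, and the substitution $u=\frac{x-\mu-\nu^2 t}{\nu}$ turns what remains into a standard normal integral equal to $N\!\left(\frac{\mu-x_0}{\nu}+\nu t\right)$.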
{ "language": "en", "url": "https://math.stackexchange.com/questions/294068", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Permutation and Equivalence Let $X$ be a nonempty set and define the two-place relation $\sim$ on permutations of $X$ as: $\sigma\sim\tau$ if and only if $\rho^{-1}\circ\sigma\circ\rho=\tau$ for some permutation $\rho$. For reflexivity this is what I have: Let $x\in X$ such that $(\rho^{-1}\circ\sigma\circ\rho)(x)=\sigma(x)$. Then $\rho^{-1}(\sigma(\rho(x)))=\sigma(x)$, so $\sigma(\rho(x))=\rho(\sigma(x))$. Therefore $\sigma(x)=\rho^{-1}(\rho(\sigma(x)))$, so $\sigma(x)=\rho^{-1}(\sigma(\rho(x)))$. Finally $\sigma(x)=(\rho^{-1}\circ\sigma\circ\rho)(x)$. Does that show that the relation is reflexive?
You assumed, I believe, what you were to prove. Look at your expression following "Let" and look at your expression following "Finally"...they say the same thing! How about letting $\rho = \sigma$: all that matters is to show, for each permutation $\sigma$ of $X$, that $\sigma \sim \sigma$, i.e. that there exists some permutation $\rho$ satisfying the relation. This is satisfied for any $\sigma$ by choosing $\rho$ to be $\sigma$ itself (and for $\tau$, let $\rho = \tau$, etc.). Then $$(\sigma^{-1} \circ \sigma \circ \sigma)(x) = \sigma^{-1}(\sigma(\sigma(x))) = \sigma(x),$$ that is, $\sigma^{-1}\circ\sigma\circ\sigma = \sigma$, so $\sigma\sim\sigma$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/294129", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Can normal chain rule be used for total derivative? Chain rule states that $$\frac{df}{dx} \frac{dx}{dt} = \frac{df}{dt}.$$ Suppose that $f$ is a function $f(x,y)$. In this case, would the normal chain rule still work?
The multivariable chain rule goes like this: $$ \frac{df}{dt} = \frac{\partial f}{\partial x}\cdot \frac{dx}{dt}+ \frac{\partial f}{\partial y}\frac{dy}{dt} $$ If you can isolate $\dfrac{dy}{dx}$, then you can always just do implicit differentiation. Let's do an example: $$ f=f(x,y) = x^2 - y $$ where $$ x(t) = t, \; \; y(t) = t $$ Here $y=x$, so $\frac{dy}{dx}=1$ and implicit differentiation gives $$ \frac{df}{dt} = \left(2x -\frac{dy}{dx}\right)\frac{dx}{dt}= 2t - 1 $$ Also $$ \frac{df}{dt} = \frac{\partial f}{\partial x}\cdot \frac{dx}{dt}+ \frac{\partial f}{\partial y}\frac{dy}{dt} = 2x - 1 = 2t - 1 $$ Now let's do the same example, except this time: $$ f=f(x,y)= x^2 - y\\ x(t)=t\ln(t), \;\; y(t) = t\sin(t) \\ \Rightarrow \frac{df}{dt} = 2t\ln(t)(\ln(t) + 1) -(\sin(t) +t\cos(t)) $$ Would implicit differentiation work here? Yes. But it requires more steps. Implicit differentiation would be the inefficient route, as $$ e^{x/t} = t \Rightarrow y= e^{x/t}\sin(t) \Rightarrow \frac{dy}{dt} = \\ \left(\frac{1}{t}\frac{dx}{dt}-\frac{x}{t^2}\right)e^{\frac{x}{t}}\sin(t)+e^{\frac{x}{t}}\cos(t) = \\ e^{\frac{x}{t}}\left(\left(\frac{1}{t}\frac{dx}{dt}-\frac{x}{t^2}\right)\sin(t)+\cos(t)\right) =\\ t\left(\frac{\ln(t)+1}{t} - \frac{t\ln(t)}{t^2}\right)\sin(t) + t\cos(t) = \\ \left(\ln(t) +1 - \ln(t)\right)\sin(t) + t\cos(t) = \\ \sin(t)+t \cos(t) $$ But it still works out. You can check out this site: Link for examples.
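As a quick machine check of the second example (a sketch using sympy, not part of the original answer):

```python
import sympy as sp

t = sp.symbols('t', positive=True)
x = t * sp.log(t)          # x(t) = t ln t
y = t * sp.sin(t)          # y(t) = t sin t
f = x**2 - y               # f(x, y) = x^2 - y, with x and y substituted

# Direct differentiation should match the chain-rule result above:
# 2 t ln(t) (ln(t) + 1) - (sin(t) + t cos(t))
direct = sp.diff(f, t)
claimed = 2*t*sp.log(t)*(sp.log(t) + 1) - (sp.sin(t) + t*sp.cos(t))
print(sp.simplify(direct - claimed))   # prints 0
```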
{ "language": "en", "url": "https://math.stackexchange.com/questions/294257", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Factoring Cubic Equations I've been trying to figure out how to factor cubic equations by studying a few worksheets online such as the one here, and was wondering: is there any generalized way of factoring these types of equations, or do we just need to remember a bunch of different cases? For example, how would you factor the following: $x^3 - 7x^2 +7x + 15$ I'm having trouble factoring equations such as these without obvious factors.
The general method involves three important rules for polynomials with integer coefficients:

1. If $a+\sqrt b$ is a root, so is $a-\sqrt b$.
2. If $a+ib$ is a root, so is $a-ib$.
3. If a rational root is of the form $p/q$ (in lowest terms), then $p$ is an integer factor of the constant term, and $q$ is an integer factor of the leading coefficient.

Since this is a cubic, it has at least one real root (by rule 2, non-real roots come in pairs). Let's look for a rational one using rule 3: $$q = 1,\ p \in \{\pm1,\pm3,\pm5,\pm15\}$$ Brute force to find that $-1$ is a solution. So we can now write the polynomial as: $$(x+1)(x-a)(x-b)=x^3 - 7x^2 + 7x +15$$ Now use: $$a+b=8$$ $$ab-a-b=7$$ Solve to get: $$a(8-a)-a-(8-a)=7 \to a^2-8a+15 = 0$$ And get the two remaining solutions.
{ "language": "en", "url": "https://math.stackexchange.com/questions/294331", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 2 }
Three random variables, two are independent Suppose we have 3 random variables $X,Y,Z$ such that $Y\perp Z$ and let $W = X+Y$. How can we infer from this that $$\int f_{WXZ}(x+y,x,z)\mathrm{d}x = \int f_{WX}(x+y,x)f_{Z}(z)\mathrm{d}x$$ Any good reference where I could learn about independence relevant to this question is also welcome.
After integrating out $x$, the left-hand side is the joint density for $Y$ and $Z$ at $(y,z)$, and the right-hand side is the density for $Y$ at $y$ multiplied by the density for $Z$ at $z$. These are the same because $Y$ and $Z$ were assumed to be independent.
{ "language": "en", "url": "https://math.stackexchange.com/questions/294394", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
How to compute conditional probability for a homogeneous Poisson process? Let $N$ be a homogeneous Poisson process with intensity $\lambda$. How do I compute the following probability: $$P[N(5)=2 \, | \, N(2)=1,N(10)=3]?$$
You know that exactly $2$ events occurred between $t=2$ and $t=10$. These are independently uniformly distributed over the interval $[2,10]$. The probability that one of them occurred before $t=5$ and the other thereafter is therefore $2\cdot\frac38\cdot\frac58=\frac{15}{32}$. The intensity $\lambda$ doesn't enter into it, since you know the numbers of events.
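One can check the arithmetic with a small simulation (a sketch; it encodes the same conditional-uniformity fact, so it tests the computation rather than the theory):

```python
import random

# Given N(2)=1 and N(10)=3, the two events in (2,10] are i.i.d. uniform
# on that interval; we want exactly one of them to land before t = 5.
trials, hits = 10**6, 0
for _ in range(trials):
    a = random.uniform(2, 10)
    b = random.uniform(2, 10)
    if (a < 5) != (b < 5):   # exactly one falls in (2,5)
        hits += 1
print(hits / trials, 15 / 32)   # both close to 0.46875
```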
{ "language": "en", "url": "https://math.stackexchange.com/questions/294456", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Quantified Statements To English The problem I am working on is: Translate these statements into English, where C(x) is "x is a comedian" and F(x) is "x is funny" and the domain consists of all people. a) $∀x(C(x)→F(x))$ b)$∀x(C(x)∧F(x))$ c) $∃x(C(x)→F(x))$ d)$∃x(C(x)∧F(x))$ ----------------------------------------------------------------------------------------- Here are my answers: For a): For every person, if they are a comedian, then they are funny. For b): For every person, they are both a comedian and funny. For c): There exists a person who, if he is a comedian, is funny. For d): There exists a person who is funny and is a comedian. Here are the book's answers: a) Every comedian is funny. b) Every person is a funny comedian. c) There exists a person such that if she or he is a comedian, then she or he is funny. d) Some comedians are funny. Does the meaning of my answers seem to be in harmony with the meaning of the answers given in the solution manual? The reason I ask is because part a), for instance, is an implication, and "Every comedian is funny" does not appear to be an implication.
This is because mathematical language is more precise than ordinary language. But your answers are right, and they say the same thing as the book's.
{ "language": "en", "url": "https://math.stackexchange.com/questions/294519", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 4, "answer_id": 2 }
Show that a monotone sequence is decreasing and bounded below by zero Let $a_n=\displaystyle\sum\limits_{k=1}^n {1\over{k}} - \log n$ for $n\ge1$. Euler's constant is defined as $\gamma=\lim_{n\to\infty} a_n$. Show that $(a_n)^\infty_{n=1}$ is decreasing and bounded below by zero, and so this limit exists. My thought: When I was trying the first few terms for $a_1, a_2, a_3$, I get: $$a_1 = 1-0$$ $$a_2={1\over1}+{1\over2}-\log2$$ $$a_3={1\over1}+{1\over2}+{1\over3}-\log3$$ It is NOT decreasing!!! what did I do wrong?? Professor also gave us a hint: Prove that $\dfrac{1}{n+1}\le \log(n+1)-\log n\le \dfrac1n$ we need to use squeeze theorem???
Certainly it's decreasing. I get $1-0=1$, and $1 + \frac12 -\log2=0.8068528\ldots$, and $1+\frac12+\frac13-\log3=0.73472\ldots$. $$ \frac{1}{n+1} = \int_n^{n+1}\frac{dx}{n+1} \le \int_n^{n+1}\frac{dx}{x} = \log(n+1)-\log n. $$ $$ \frac1n = \int_n^{n+1} \frac{dx}{n} \ge \int_n^{n+1} \frac{dx}{x} = \log(n+1)-\log n. $$ So $$ \Big(1+\frac12+\frac13+\cdots+\frac1n-\log n\Big) +\Big(\log n\Big) + \Big(\frac{1}{n+1}-\log(n+1)\Big) $$ $$ \le1+\frac12+\frac13+\cdots+\frac1n-\log n, $$ where the left-hand side is exactly $a_{n+1}$ and the right-hand side is $a_n$; the inequality holds because the first display gives $\log n+\frac{1}{n+1}-\log(n+1)\le 0$. So $a_{n+1}\le a_n$. For the lower bound, summing the second display over $k=1,\dots,n$ gives $1+\frac12+\cdots+\frac1n \ge \log(n+1) > \log n$, hence $a_n>0$.
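Numerically (a quick sanity check consistent with the values above):

```python
import math

# a_n = H_n - log(n) should decrease toward the Euler-Mascheroni constant.
H = 0.0
for n in range(1, 11):
    H += 1.0 / n
    print(n, H - math.log(n))
```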
{ "language": "en", "url": "https://math.stackexchange.com/questions/294629", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 0 }
Distinction between "measure differential equations" and "differential equations in distributions"? Is there a universally recognized term for ODEs considered in the sense of distributions used to describe impulsive/discontinuous processes? I noticed that some authors call such ODEs "measure differential equations" while others use the term "differential equations in distributions". But I don't see a major difference between them. Can anyone please make the point clear? Thank you.
As long as the distributions involved in the equation are (signed) measures, there is no difference and both terms can be used interchangeably. This is the case for impulsive source equations like $y''+y=\delta_{t_0}$. Conceivably, an ODE could also involve distributions that are not measures, such as the derivative of $\delta_{t_0}$. In that case only "differential equation in distributions" would be correct. But I can't think of a natural example of such an ODE at this time.
{ "language": "en", "url": "https://math.stackexchange.com/questions/294696", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Field extension of composite degree has a non-trivial sub-extension Let $E/F$ be an extension of fields with $[E:F]$ composite (not prime). Must there be a field $L$ contained between $E$ and $F$ which is not equal to either $E$ or $F$? To prove this is true, it suffices to produce an element $x\in E$ such that $F(x) \not= E$, but I cannot find a way to produce such an element. Any ideas?
Let $L/\mathbb Q$ be a Galois extension with Galois group $A_4$; this is certainly possible, although off the top of my head I can't remember a polynomial that gives you this extension. Now $A_4$ has a subgroup $H$ of index $4$, namely a subgroup generated by a $3$-cycle, but this subgroup is not properly contained in another proper subgroup of $A_4$. In particular, let $K$ be the fixed field of $H$; then $K$ has no non-trivial proper subfields (between $\mathbb Q$ and $K$), because such a subfield would imply that $H$ was properly contained in a proper subgroup. Of course we also have $[K:\mathbb Q]=4$. In general, to find such an example one can find a group $G$ which has a maximal subgroup of the desired index. Then one can realize $G$ as a Galois group over a field of rational functions and look at the fixed field of the subgroup to get a general counterexample.
{ "language": "en", "url": "https://math.stackexchange.com/questions/294780", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 5, "answer_id": 2 }
Counting primes of the form $S_1(a_n)$ vs primes of the form $S_2(b_n)$ Let $n$ be an integer $>1$. Let $S_1(a_n)$ be a symmetric irreducible integer polynomial in the variables $a_1,a_2,\dots,a_n$. Let $S_2(b_n)$ be a symmetric irreducible integer polynomial in the variables $b_1,b_2,\dots,b_n$ of the same degree as $S_1(a_n)$. Let $m$ be an integer $>1$ and let $S^*_1(m)$ be the number of integers of the form $S_1(a_n)$ $<m$ (where the $a_n$ are integers $>-1$). Likewise let $S^*_2(m)$ be the number of integers of the form $S_2(b_n)$ $<m$ (where the $b_n$ are integers $>-1$). By analogy let $S^-_1(m)$ be the number of primes of the form $S_1(a_n)$ $<m$ (where the $a_n$ are integers $>-1$). And let $S^-_2(m)$ be the number of primes of the form $S_2(b_n)$ $<m$ (where the $b_n$ are integers $>-1$). Now I believe it is always true that $\lim_{m\to\infty}\dfrac{S^-_1(m)}{S^-_2(m)}= \lim_{m\to\infty}\dfrac{S^*_1(m)}{S^*_2(m)}=\dfrac{x}{y}$ for some integers $x,y$. How to show this? Even with related conjectures assumed to be true I do not know how to prove it, nor whether I need to assume some related conjectures to be true. EDIT: I forgot to mention both $S_1(a_n)$ and $S_2(b_n)$ (must) have the Bunyakovsky property.
I think the polynomial $2x^4+x^2y+xy^2+2y^4$ is symmetric and irreducible, but its values are all even, hence, it takes on at most $1$ prime value. If your other polynomial takes on infinitely many prime values --- $x^4+y^4$ probably does this, though proving it is out of reach --- then the ratio of primes represented by the two polynomials will go to zero, I doubt the ratio of numbers represented would do that.
{ "language": "en", "url": "https://math.stackexchange.com/questions/294849", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
what is the precise definition of matrix? Let $F$ be a field. To me, saying 'Matrix is a rectangular array of elements of $F$' seems extremely terse. What kind of set is a rectangular array of elements of $F$? Precisely, $(F^m)^n \neq (F^n)^m \neq F^{n\times m}$. I wonder which is the precise definition for $M_{n\times m}(F)$?
In an engineering class where we don't do things very rigorously at all, we defined an $n\times m$ matrix as a linear function from $\mathbb{R}^m$ to $\mathbb{R}^n$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/294899", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 5, "answer_id": 4 }
half-angle trig identity clarification I am working on the following trig half angle problem. I seem to be on the the right track except that my book answer shows -1/2 and I didn't get that in my answer. Where did I go wrong? $$\sin{15^{\circ}} = $$ $$\sin \frac { 30^{\circ} }{ 2 } = \pm \sqrt { \frac { 1 - \cos{30^{\circ}} }{ 2 } } $$ $$\sin \frac { 30^{\circ} }{ 2 } = \pm \sqrt { \frac { 1 - \frac {\sqrt 3 }{ 2 } }{ 2 } } $$ $$\sin \frac { 30^{\circ} }{ 2 } = \pm \sqrt { \frac { 1 - \frac {\sqrt 3 }{ 2 } }{ 2 } (\frac {2} {2}) } $$ $$\sin \frac { 30^{\circ} }{ 2 } = \pm \sqrt { 2 - \sqrt {3} } $$ Book Answer $$\sin \frac { 30^{\circ} }{ 2 } = -\frac {1} {2} \sqrt { 2 - \sqrt {3} } $$
$$\sqrt { \dfrac { 1 - \dfrac {\sqrt 3 }{ 2 } }{ 2 } \times \dfrac22} = \sqrt{\dfrac{2-\sqrt3}{\color{red}4}} = \dfrac{\sqrt{2-\sqrt3}}{\color{red}2}$$
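A one-line numerical check (a quick sketch) confirms the corrected value:

```python
import math

# sin(15 degrees) should equal sqrt(2 - sqrt(3)) / 2, approximately 0.258819
print(math.sin(math.radians(15)))
print(math.sqrt(2 - math.sqrt(3)) / 2)
```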
{ "language": "en", "url": "https://math.stackexchange.com/questions/295038", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Convert $ x^2 - y^2 -2x = 0$ to polar? So far I got $$r^2(\cos^2{\phi} - \sin^2{\phi}) -2 r\cos{\phi} = 0$$ $$r^2 \cos{(2\phi)} -2 r \cos{\phi} = 0$$
You are on the right track. Now divide through by $r \ne 0$ and get $$r \cos{2 \phi} - 2 \cos{\phi} = 0$$ or $$r = 2 \frac{ \cos{\phi}}{\cos{2 \phi}}$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/295162", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Is it true that $A,B\subset X$ are completely separated iff their closures are? If $A,B\subset X$ and $\overline{A}, \overline{B}$ are completely separated, then so are $A,B$: since $A\subset \overline{A}$ and $B\subset \overline{B}$, we have $f(A)\subset f(\overline{A})=\{0\}$ and $f(B)\subset f(\overline{B})=\{1\}$ for some continuous function $f:X\to [0,1]$. But is the converse true?
HINT: Suppose that $A$ and $B$ are completely separated, and let $f:X\to[0,1]$ be a continuous function such that $f(x)=0$ for all $x\in A$ and $f(x)=1$ for all $x\in B$. Since $f$ is continuous, $f^{-1}[\{0\}]$ is closed, and certainly $A\subseteq f^{-1}[\{0\}]$, so ... ?
{ "language": "en", "url": "https://math.stackexchange.com/questions/295220", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Is there a way to solve $x^2 + 12y - 12x = 0$ for $x$? I'm doing some statistical analysis (random variate generation for simulation models) and I just ran the inverse transform of a CDF: $$ F(x) = \begin{cases} (x-4)/4 & \text{for } x \in [2,3] \\ x - (x^2/12) & \text{for } x \in (3,6] \\ 0 & \text{otherwise} \end{cases} $$ That yields a couple of equations: $$ R=(x-4)/4 ~ \text{ for } ~ 2 \leq x \leq 3$$ $$ R=x(1-x/12) ~ \text{ for } 3 < x \leq 6 $$ Now, the first one is easy: $$ 4(R+1)=x ~ \text{ for } -1/2 \leq R \leq -1/4 $$ But the second one is implicit: $$ (1-(12R/x))1/2=x \text{ for } -2 < x <=-17.5 $$ ...backtracking, I rearrange the equation: \begin{eqnarray*} R & = & x - (x^2/12) \\ 12R & = & 12(x - (x^2/12)) \\ R & = & 12(x - (x^2/12))/12 \\ R & = & (12x - (12x^2/12))/12 \\ R & = &(12x - (x^2))/12 \\ 12R & = &(12x - (x^2)) \\ 12R & = &12x - (x^2) \\ 12R & = &12x - x^2 \\ \end{eqnarray*} Changing $R$ to $y$... $$ x^2 + 12y - 12x= 0 $$ Now, that looks awfully familiar, but I confess I've hit a wall and do not remember what to do from here. How can I get an explicit function solving for $x$?
Try the quadratic equation formula: $$x^2-12x+12y=0\Longrightarrow \Delta:=12^2-4\cdot 1\cdot 12y=144-48y=48(3-y)\Longrightarrow$$ $$x_{1,2}=\frac{12\pm \sqrt{48(3-y)}}{2}=6\pm 2\sqrt{3(3-y)}$$ If you're interested in real roots then it must be that $\,y\le 3\,$ ...
{ "language": "en", "url": "https://math.stackexchange.com/questions/295369", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Proving algebraic sets i) Let $Z$ be an algebraic set in $\mathbb{A}^n$. Fix $c\in \mathbb{C}$. Show that $$Y=\{b=(b_1,\dots,b_{n-1})\in \mathbb{A}^{n-1}|(b_1,\dots,b_{n-1},c)\in Z\}$$ is an algebraic set in $\mathbb{A}^{n-1}$. ii) Deduce that if $Z$ is an algebraic set in $\mathbb{A}^2$ and $c\in \mathbb{C}$ then $Y=\{a\in \mathbb{C}|(a,c)\in Z\}$ is either finite or all of $\mathbb{A}^1$. Deduce that $\{(z,w)\in \mathbb{A}^2 :|z|^2 +|w|^2 =1\}$ is not an algebraic set in $\mathbb{A}^2$.
This answer was merged from another question so it only covers part ii). $Z$ is algebraic, and hence the simultaneous solution set of a set of polynomials in two variables. If we swap one variable in all the polynomials with the number $c$, we get a set of polynomials in one variable, with zero set being your $Y$. $Y$ is therefore an algebraic set, and closed in $\Bbb A^1$, therefore either finite or the whole affine line. Assume for contradiction that $Z = \{ ( z,w) \in \mathbb{A}^2 : |z|^2 + |w|^2 = 1 \}$ is algebraic. Set $w = 0$ (this is our $c$). The $Y$ we get from this is the unit circle in the complex plane. That is an infinite set, but certainly not all of $\Bbb C$. Thus $Y$ is neither finite nor all of $\Bbb A^1 = \Bbb C$, and therefore $Z$ cannot be algebraic to begin with.
{ "language": "en", "url": "https://math.stackexchange.com/questions/295445", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 0 }
Why aren't parametrizations equivalent to their "equation form"? Consider the parametrization $(\lambda,t)\mapsto (\lambda t,\lambda t^2,\lambda t^3)$. This is a union of lines (not sure how to visualize it precisely. I think it's a double cone). It doesn't appear that the $x$ or $z$ axis are in this parametrization (if $y$ and $z$ are both zero, so must $x$ be). However, when you solve the parametrization in terms of $x,y,z$ you obtain the relationship $y^2=xz$. In which the $x$ and $z$ axes are solutions! Why is this? Is this the closure? If so, how can we relate the closure to the original set?
When you say that you "solve" and obtain the relationship $y^2 = xz$, I imagine you just observed that $(\lambda t^2)^2 = (\lambda t)(\lambda t^3)$, correct? In this case what you have shown is that the set for which you have a parametrization is a subset of the set of solutions to $y^2 = xz$. What you have not shown is that every solution of $y^2 = xz$ is covered by the parametrization. Indeed, as you pointed out, this is not true.
{ "language": "en", "url": "https://math.stackexchange.com/questions/295573", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
With how many zeros does it end? I have to calculate how many zeros 1000! ends with. This is what I did, but I am not sure whether it's good: I think I have to calculate how many times the factor 10 occurs in 1000!. I found that the factor 10 occurs 249 times in 1000!, using the fact that $s_p(n!)=\sum_{r=1}^{\infty}\lfloor \frac{n}{p^r}\rfloor$. So I think the answer should be that it ends with 249 zeros. Is this correct/are there other ways to do this? If not, how should I do it then?
First, let's note that $10$ can be factored into $2 \times 5$ so the key is to compute the minimum of the number of $5$'s and $2$'s that appear in $1000!$ as factors of the numbers involved. As an example, consider $5! = 120$, which has one $0$ because there is a single $5$ factor and a trio of $2$ factors in the product, one from $2$ and a pair from $4$. Thus, the key is to compute the number of $5$'s, $25$'s, $125$'s, and $625$'s in that product as each is contributing a different number of $5$'s to the overall product as these are the powers of $5$ for you to consider as the $2$'s will be much higher and thus not worth computing. So, while there are 200 times that $5$ will be a factor, there are 40 times for $25$ being a factor, eight for $125$ and one for $625$, which does give the same result as you had of 249, though this is a better explanation.
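The counting argument translates directly into code (a sketch; `math.factorial` provides an independent brute-force check):

```python
import math

def factorial_trailing_zeros(n: int) -> int:
    """Count factors of 5 in n! via Legendre's formula; 2s are plentiful."""
    count, power = 0, 5
    while power <= n:
        count += n // power   # multiples of 5, 25, 125, 625, ...
        power *= 5
    return count

print(factorial_trailing_zeros(1000))        # 249
digits = str(math.factorial(1000))
print(len(digits) - len(digits.rstrip('0'))) # 249, by direct inspection
```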
{ "language": "en", "url": "https://math.stackexchange.com/questions/295643", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 1 }
For which natural $n\ge2$ is $\phi(n)=n/2$? For which natural numbers $n\ge2$ does $\phi(n)=n/2$ hold?
Hint: $n$ is even, or $n/2$ wouldn't be an integer. Hence $n=2^km$ with $m$ odd and $k\ge1$. You have $\phi(2^km)=2^{k-1}\phi(m)$ which must equal $n/2$.
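A quick brute-force scan (a sketch using sympy's totient) shows where the hint leads, namely the powers of $2$:

```python
from sympy import totient

# n with phi(n) = n/2; the hint forces phi(m) = m, i.e. m = 1, so n = 2^k.
print([n for n in range(2, 300) if 2 * totient(n) == n])
# [2, 4, 8, 16, 32, 64, 128, 256]
```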
{ "language": "en", "url": "https://math.stackexchange.com/questions/295732", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 4, "answer_id": 1 }
Induction proof of $\sum_{k=1}^{n} \binom n k = 2^n -1 $ Prove by induction: $$\sum_{k=1}^{n} \binom n k = 2^n -1 $$ for all $n\in \mathbb{N}$. Today I wrote calculus exam, I had this problem given. I have the feeling that I will get $0$ points for my solution, because I did this: Base Case: $n=1$ $$\sum_{k=1}^{1} \binom 1 1 = 1 = 2^1 -1 .$$ Induction Hypothesis: for all $n \in \mathbb{N}$: $$\sum_{k=1}^{n} \binom n k = 2^n -1 $$ Induction Step: $n \rightarrow n+1$ $$\sum_{k=1}^{n+1} \binom {n+1} {k} = \sum_{k=1}^{n} \binom {n+1} {k} + \binom{n+1}{n+1} = 2^{n+1} -1$$ Please show me my mistake because next time is my last chance in this class.
suppose that $$\sum_{k=1}^{n} \binom n k = 2^n -1 $$ then $$\sum_{k=1}^{n+1} \binom {n+1}{k} =\sum_{k=1}^{n+1}\Bigg( \binom {n}{ k} +\binom{n}{k-1}\Bigg)=$$ $$=\sum_{k=1}^{n+1} \binom {n}{ k} +\sum_{k=1}^{n+1}\binom{n}{k-1}=$$ $$=\sum_{k=1}^{n} \binom {n}{ k}+\binom{n}{n+1}+\sum_{k=0}^{n} \binom {n}{k}=$$ $$=\sum_{k=1}^{n} \binom {n}{ k}+\binom{n}{n+1}+\binom{n}{0}+\sum_{k=1}^{n} \binom {n}{k}=$$ $$=2\sum_{k=1}^{n} \binom {n}{ k}+\binom{n}{n+1}+\binom{n}{0}=$$ since $\binom{n}{n+1}=0, \binom{n}{0}=1$ $$=2\sum_{k=1}^{n} \binom {n}{ k}+1=2(2^n-1)+1=2^{n+1}-2+1=2^{n+1}-1$$
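A quick numerical confirmation of the identity (a sketch, not part of the proof):

```python
from math import comb

for n in range(1, 8):
    total = sum(comb(n, k) for k in range(1, n + 1))
    print(n, total, 2**n - 1)   # the last two columns agree
```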
{ "language": "en", "url": "https://math.stackexchange.com/questions/295802", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Graphs with $|G| \ge 2$ have at least two vertices which are not cut-vertices Show that every graph $G$ with $|G| \ge 2$ has at least two vertices which are not its cut-vertices.
Let $P$ be a maximal path in $G$. I claim that the end points of $P$ are not cut vertices. Suppose that an end point $v$ of $P$ were a cut vertex, and let $G-v$ be separated into components $G_1,\ G_2,\ \cdots,\ G_k$. Any path from one component to another must pass through $v$, and in particular such a path does not end at $v$ and therefore cannot be $P$. Therefore $P$ is contained entirely within some $G_i\cup\{v\}$. But this contradicts the fact that $P$ is maximal, for there exists at least one vertex in some $G_j$ with $j\neq i$ which is adjacent to $v$ and extends $P$. Therefore $v$ must not be a cut vertex.
{ "language": "en", "url": "https://math.stackexchange.com/questions/295866", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Proving some basic facts about integrals Let $f$, $g$ be Riemann integrable functions on the interval $[a,b]$, that is $f,g \in \mathscr{R}([a,b])$. (i) $\int_{a}^{b} (cf+g)^2\geq 0$ for all $c \in \mathbb{R}$. (ii) $2|\int_{a}^{b}fg|\leq c \int_{a}^{b} f^2+\frac{1}{c}\int_{a}^{b} g^2$ for all $c \in \mathbb{R}^+$ I don't have a complete answer for either of these, but I have some ideas. For (i) $\int_{a}^{b} (cf+g)^2=\int_{a}^{b}c^2f^2+2cfg+g^2=c^2\int_{a}^{b}f^2+c\int_{a}^{b}2fg+\int_{a}^{b}g^2$. The first third is positive because if $c <0$ then $c^2>0$. I'm not sure about the middle third. The final third is positive. I did notice that if I can figure out (i)...(ii) follows from some rearranging.
I will assume that the functions are real-valued. For the first point, I will use that if an R-integrable function $h$ is nonnegative on $[a,b]$, then $\int_a^bh(x)dx\geq 0$. For the second point, I will use that if $h(x)\leq k(x)$ on $[a,b]$, then $\int_a^bh(x)dx\leq \int_a^bk(x)dx$. Note that the latter follows readily from the former by linearity of the integral. 1) We have $(cf(x)+g(x))^2\geq 0$ for all $x\in [a,b]$, so $\int_a^b(cf(x)+g(x))^2dx\geq 0$. 2) Recall that $2|ab|\leq a^2+b^2$ for every $a,b\in\mathbb{R}$. With $a=\sqrt{c}f(x)$ and $b=g(x)/\sqrt{c}$, this yields $$ 2|f(x)g(x)|\leq cf(x)^2+\frac{1}{c}g(x)^2 $$ on $[a,b]$. Hence $$ 2\int_a^b|f(x)g(x)|dx\leq c\int_a^bf(x)^2dx+\frac{1}{c}\int_a^bg(x)^2dx. $$ Finally, we have $|\int_a^bf(x)g(x)dx|\leq \int_a^b|f(x)g(x)|dx$, hence the second inequality. Note: As you said, you can also deduce 2) from 1) directly by expanding $(cf+g)^2$ and then dividing by $c$. But $2|ab|\leq a^2+b^2$ is so useful I could not resist mentioning it.
{ "language": "en", "url": "https://math.stackexchange.com/questions/295931", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
When is $\| f \|_\infty$ a norm of the vector space of all continuous functions on subset S? Let S be any subset of $\mathbb{R^n}$. Let $C_b(S)$ denote the vector space of all bounded continuous functions on S. For $f \in C(S)$, define $\| f \|_\infty = \sup_{x \in S} |f(x)|$ When is this a norm of the vector space of all continuous functions on S?
For $X$ a locally compact Hausdorff space (such as an open or closed subset, or a non-empty intersection of an open and a closed subset, of $\mathbb{R}^n$), the space of bounded continuous functions on $X$ with the sup norm (which here agrees with the infinity norm, i.e. the essential sup norm) is complete. The sup (or the essential sup) is always a norm on this space of all bounded continuous functions. The completion of the space of all continuous functions on $X$ with compact support, with respect to the sup norm, is the space of all continuous functions on $X$ which vanish at infinity.
{ "language": "en", "url": "https://math.stackexchange.com/questions/295999", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Evaluate $\lim_{x\to1^-}\left(\sum_{n=0}^{\infty}\left(x^{(2^n)}\right)-\log_2\frac{1}{1-x}\right)$ Evaluate$$\lim_{x\to1^-}\left(\sum_{n=0}^{\infty}\left(x^{(2^n)}\right)-\log_2\frac{1}{1-x}\right)$$ Difficult problem. Been thinking about it for a few hours now. Pretty sure it's beyond my ability. Very frustrating to show that the limit even exists. Help, please. Either I'm not smart enough to solve this, or I haven't learned enough to solve this. And I want to know which!
This is NOT a solution, but I think that others can benefit from my failed attempt. Recall that $\log_2 a=\frac{\log a}{\log 2}$, and that $\log(1-x)=-\sum_{n=1}^\infty\frac{x^n}n$ for $-1\leq x<1$, so your limit becomes $$\lim_{x\to1^-}\left(x+\sum_{n=1}^\infty\biggl[x^{2^n}-\frac1{\log2}\frac{x^n}n\biggr]\right)\,.$$ The series above can be rewritten as $\frac1{\log2}\sum_{k=1}^\infty a_kx^k$, where $$a_k=\begin{cases} -\frac1k,\ &\text{if}\ k\ \text{is not a power of}\ 2;\\\log2-\frac1k,\ &\text{if}\ k=2^m.\end{cases}$$ We can try to use Abel's theorem, so we consider $\sum_{k=1}^\infty a_k$. Luckily, if this series converges, say to $L$, then the desired limit is equal to $1+\frac L{\log2}\,$. Given $r\geq2$, we have $2^m\leq r<2^{m+1}$, with $m\geq1$. Then the $r$-th partial sum of this series is equal to $$\sum_{k=1}^ra_k=\biggl(\sum_{k=1}^r-\frac1k\biggr)+m\log2=m\log2-H_r\,,$$ where $H_r$ stands for the $r$-th harmonic number. It is well-known that $$\lim_{r\to\infty}(H_r-\log r)=\gamma\quad\text{(Euler-Mascheroni constant)}\,,$$ so $$\sum_{k=1}^ra_k=\log(2^m)-\log r-(H_r-\log r)=\log\Bigl(\frac{2^m}r\Bigr)-(H_r-\log r)\,.$$ Now the bad news: the second term clearly tends to $-\gamma$ when $r\to\infty$, but unfortunately the first term oscillates between $\log 1=0$ (when $r=2^m$) and $\bigl(\log\frac12\bigr)^+$ (when $r=2^{m+1}-1$), so $\sum_{k=1}^\infty a_k$ diverges.
{ "language": "en", "url": "https://math.stackexchange.com/questions/296066", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "16", "answer_count": 4, "answer_id": 3 }
Two problems: find $\max, \min$; number theory: find $x, y$

1. Find $x, y \in \mathbb{N}$ such that $$\left.\frac{x^2+y^2}{x-y}~\right|~ 2010$$
2. Find the max and min of $\sqrt{x+1}+\sqrt{5-4x}$ (I know $\max = \frac{3\sqrt{5}}2,\, \min = \frac 3 2$)
Problem 1: I have read a similar problem with a good solution in this forum
{ "language": "en", "url": "https://math.stackexchange.com/questions/296130", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Exponentiation when the exponent is irrational I am just curious about what inference we can draw when we calculate something like $$\text{base}^\text{exponent}$$ where base = rational or irrational number and exponent = irrational number
$2^2$ is rational while $2^{1/2}$ is irrational. Similarly, $\sqrt 2^2$ is rational while $\sqrt 2^{\sqrt 2}$ is irrational (though it is not so easily proved), so that pretty much settles all cases. Much more can be said when the base is $e$. The Lindemann-Weierstrass Theorem asserts that $e^a$ where $a$ is a non-zero algebraic number is a transcendental number.
{ "language": "en", "url": "https://math.stackexchange.com/questions/296193", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
$p$ is an odd prime; show why there are no primitive roots mod $3p$ If $p$ is an odd prime, prove there are no primitive roots mod $3p$. Where I'm at: $a^{2(p-1)}=1 \pmod{3p}$ where $a$ is a primitive root of $3p$ (by contradiction). $(a/3p)=(a/3)(a/p)$ are the Legendre symbols, and I'm stuck here... tried a couple of things, but got nowhere; could use a helping hand :]
Note that when $p=3$ the theorem does not hold! $2$ is a primitive root mod $9$. So suppose $p$ is not $3$... Since $p$ is odd, let $p = 2^r k+1$ with $k$ odd and $r\ge1$. The group of units mod $3p$ is $$(\mathbb Z/(3p))^\times \simeq \mathbb Z/(2) \times \mathbb Z/(2^r) \times \mathbb Z/(k)$$ by Sun Zi's theorem together with the cyclicity of $(\mathbb Z/(p))^\times$. There can be no primitive root for this because of the two $\mathbb Z/(2^i)$ parts.
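A small computation (a sketch) confirms both the exception at $p=3$ and the claim for other odd primes:

```python
from math import gcd

def has_primitive_root(n: int) -> bool:
    """Check whether some unit mod n has multiplicative order phi(n)."""
    units = [a for a in range(1, n) if gcd(a, n) == 1]
    phi = len(units)
    for a in units:
        x, order = a % n, 1
        while x != 1:
            x = (x * a) % n
            order += 1
        if order == phi:
            return True
    return False

print(has_primitive_root(9))                                 # True: 2 works
print([has_primitive_root(3 * p) for p in (5, 7, 11, 13)])   # all False
```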
{ "language": "en", "url": "https://math.stackexchange.com/questions/296255", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 3 }
topology question about cartesian product I have a question about a proof I'm reading. It says: Suppose $A$ is an open set in the topology of the Cartesian product $X\times Y$, then you can write $A$ as the $\bigcup (U_\alpha\times V_\alpha)$ for $U_\alpha$ open in $X$ and for $V_\alpha$ open in $Y$. Why is this? (I get that the basis of the product topology of $X \times Y$ is the collection of all sets of the form $U \times V$, where $U$ is an open subset of $X$ and $V$ is an open subset of $Y$, I just don't know why we can take this random union and say that it equals $A$.)
It isn't a random union, it is the union for all open $U,V$ such that $U\times V$ is contained in $A$. As a result we immediately have half of the containment, that the union is a subset of $A$. To see why $A$ is contained in the union, consider a point $(x,y)$ in $A$. Since $A$ is open, there must be a basic open set of the product topology that contains the point and is a subset of $A$. But this is precisely a product $U\times V$ of open sets $U,V$ from $X,Y$ respectively. QED
{ "language": "en", "url": "https://math.stackexchange.com/questions/296323", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 5, "answer_id": 2 }
Given a point $x$ and a closed subspace $Y$ of a normed space, must the distance from $x$ to $Y$ be achieved by some $y\in Y$? I think not, and I am looking for examples. I would like a sequence $y_n$ in $Y$ such that $\|y_n-x\|\rightarrow d(x,Y)$ while the $y_n$ do not converge. Can anyone give a proof or a counterexample to this question?
This is a slight adaptation of a fairly standard example. Let $\phi: C[0,1]\to \mathbb{R}$ be given by $\phi(f)=\int_0^{\frac{1}{2}} f(t)dt - \int_{\frac{1}{2}}^1 f(t)dt$. Let $Y_\alpha = \phi^{-1}\{\alpha\}$. Since $\phi$ is continuous, $Y_\alpha$ is closed for any $\alpha$. Now let $\hat{f}(t) = 4t$ and notice that $\phi(\hat{f}) = -1$ (in fact, any $\hat{f}$ such that $\phi(\hat{f}) = -1$ will do). Then $$\inf_{f \in Y_0} \|\hat{f}-f\| = \inf \{ \|g\|\, | \, g+\hat{f} \in Y_0 \} = \inf \{ \|g\|\, | \, \phi(g) =1 \} = \inf_{g \in Y_1} \|g\|$$ It is clear that $g_n$ is an infimizing sequence for the latter problem iff $g_n+\hat{f}$ is an infimizing sequence for the initial problem. It is well known that $Y_1$ has no element of minimum norm; consequently there is no $f \in Y_0$ that minimizes $\|f-\hat{f}\|$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/296354", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12", "answer_count": 2, "answer_id": 0 }
What are relative open sets? I came across the following: Definition 15. Let $X$ be a subset of $\mathbb{R}$. A subset $O \subset X$ is said to be open in $X$ (or relatively open in $X$) if for each $x \in O$, there exists $\epsilon = \epsilon(x) > 0$ such that $N_\epsilon (x) \cap X \subset O$. What is $\epsilon$ and $N_\epsilon (x) $? Or more general, what are relatively open sets?
Forget your definition above. The general notion is: let $X$ be a topological space, $A\subset X$ any subset. A set $U_A$ is relatively open in $A$ if there is an open set $U$ in $X$ such that $U_A=U\cap A$. I think that in your definition $N_\epsilon(x)$ is meant to denote an open neighborhood of radius $\epsilon$ of $x$, i.e. $(x-\epsilon,\ x+\epsilon)$. As you can see, this agrees with the definition I gave you above.
{ "language": "en", "url": "https://math.stackexchange.com/questions/296432", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12", "answer_count": 2, "answer_id": 1 }
Example of a normal operator which has no eigenvalues Is there a normal operator which has no eigenvalues? If your answer is yes, give an example. Thanks.
Example 1 'I think "shift operator or translation operator" is one of them.' – Ali Qurbani Indeed, the bilateral shift operator on $\ell^2$, the Hilbert space of square-summable two-sided sequences, is normal but has no eigenvalues. Let $L:\ell^2 \to \ell^2$ be the left shift operator, $R:\ell^2 \to \ell^2$ the right shift operator and $\langle\cdot,\cdot\rangle$ denote the inner product. Take $x=(x_n)_{n\in \mathbb{Z}}$ and $y=(y_n)_{n\in \mathbb{Z}}$ two sequences in $\ell^2$: $$\langle Lx, y\rangle = \sum_{n \in \mathbb{Z}} x_{n+1}\cdot y_n = \sum_{n \in \mathbb{Z}} x_n\cdot y_{n-1} = \langle x, Ry\rangle,$$ hence $L^*=R=L^{-1}$, i.e. $L$ is unitary. Now let $\lambda$ be a scalar and $x\in\ell^2$ such that $Lx = \lambda x$; then $x_n = \lambda^n x_0$ holds for $n \in \mathbb{Z}$ and we have $$\|x\|^2=\sum_{n \in \mathbb{Z}} x_n^2 = x_0^2\left( \sum_{n=1}^\infty \lambda^{2n} + \sum_{n=0}^{-\infty} \lambda^{2n} \right).$$ The first sum diverges for $|\lambda|\geq 1$ and the second sum diverges for $|\lambda|\leq 1$, so the only $x\in\ell^2$ solving the equation must be the zero sequence, which cannot be an eigenvector; hence $L$ has no eigenvalues. Example 2 As Christopher A. Wong pointed out, you can construct another example with a multiplication operator. Let $L^2$ be the Hilbert space of Lebesgue-square-integrable functions on $[0,1]$ and $M:L^2\to L^2,\ f \mapsto f \cdot h$ where $h(x)=x$. For $f,g\in L^2$ we have $$\langle Mf,g\rangle = \int_0^1 f\cdot h\cdot g \ dx = \langle f,Mg\rangle,$$ i.e. $M$ is self-adjoint. Now let $\lambda$ be a scalar and $f\in L^2$ such that $(M-\lambda)f = 0$, i.e. $(x-\lambda)f(x)=0$ almost everywhere. Since $x-\lambda\neq 0$ for almost every $x\in[0,1]$, this forces $f=0$, hence $M$ has no eigenvalues. (Multiplying by an indicator function would not work here: that operator has eigenvalues $0$ and $1$.)
{ "language": "en", "url": "https://math.stackexchange.com/questions/296567", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
Prove the relation of $\cosh(\pi/2)$ and $e$ Prove that: $$\cosh\left(\frac{\pi}{2}\right)=\frac{1}{2}e^{-{\pi/2}}(1+e^\pi)$$ What I have tried: $$\cosh\left(\frac{\pi}{2}\right)=\cos\left(i\frac{\pi}{2}\right)$$ $$=Re\{e^{i.i\frac{\pi}{2}}\}$$ $$=Re\{e^{-\frac{\pi}{2}}\}$$ Why is $e^{-\frac{\pi}{2}}$ not the answer, and why is $$\frac{e^{-\frac{\pi}{2}}+e^{\frac{\pi}{2}}}{2}$$ the correct solution? Did I miss something somewhere?
$\cosh(x)$ is usually defined as $\frac{e^{x} + e^{-x}}{2}$. If you don't have some different definition, then it is quite straightforward: $$\cosh\left(\frac{\pi}{2}\right)=\frac{e^{\frac{\pi}{2}} + e^{-\frac{\pi}{2}}}{2} = \frac{1}{2}e^{-\frac{\pi}{2}}(1 + e^{\pi})$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/296623", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Field of characteristic 0 such that every finite extension is cyclic I am trying to construct a field $F$ of characteristic 0 such that every finite extension of $F$ is cyclic. I think that I have an idea as to what $F$ should be, but I am not sure how to complete the proof that it has this property. Namely, let $a\in \mathbb Z$ be an element which is not a perfect square and let $F$ be the a maximal subfield of $\bar{\mathbb Q}$ which does not contain $\sqrt{a}$ (such a field exists by Zorn's lemma). Intuitively, a finite extension of $F$ should be generated by $a^{1/2^n}$ for some $n$, in which case its Galois group will be cyclic since $F$ contains the $2^n$th roots of unity. However, I cannot find a nice way to prove this. Any suggestions?
This is a fairly common question in algebra texts. Here's a series of hints taken from a prelim exam. Let $F$ be a maximal subfield of $\bar{\mathbb Q}$ with respect to not containing $\sqrt{a}$. Let $F \subset E$ be a Galois extension. Show that $F(\sqrt{a})$ is the unique subfield of $E$ of degree $2$. Deduce that $\mathrm{Gal}(E/F)$ contains a maximal normal subgroup of index $2$. Conclude that $\mathrm{Gal}(E/F)$ is cyclic.
{ "language": "en", "url": "https://math.stackexchange.com/questions/296701", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Are PA and ZFC examples of logical systems? Wikipedia says A logical system or, for short, logic, is a formal system together with a form of semantics, usually in the form of model-theoretic interpretation, which assigns truth values to sentences of the formal language. When we talk about PA or ZFC, are these logical systems, or are they merely formal systems?
When I talk about a logical system in the sense that "A logical system or, for short, logic, is a formal system together with a form of semantics, usually in the form of model-theoretic interpretation", I understand that a logic $\mathcal{L}$ is a pair $(L,\models)$, where $L$ is a function whose domain is the class of all signatures. But the signature of $ZFC$ has just one member, $\in$. So it does not make sense to speak of $ZFC$ as a logic.
{ "language": "en", "url": "https://math.stackexchange.com/questions/296756", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 3, "answer_id": 2 }
Is the set of all bounded sequences complete? Let $X$ be the set of all bounded sequences $x=(x_n)$ of real numbers and let $$d(x,y)=\sup{|x_n-y_n|}.$$ I need to show that $X$ is a complete metric space. I need to show that all Cauchy sequences are convergent. I appreciate your help.
HINT: Let $\langle x^n:n\in\Bbb N\rangle$ be a Cauchy sequence in $X$. The superscripts are just that, labels, not exponents: $x^n=\langle x^n_k:k\in\Bbb N\rangle\in X$. Fix $k\in\Bbb N$, and consider the sequence $$\langle x^n_k:n\in\Bbb N\rangle=\langle x^0_k,x^1_k,x^2_k,\dots\rangle\tag{1}$$ of $k$-th coordinates of the sequences $x^n$. Show that for any $m,n\in\Bbb N$, $|x^m_k-x^n_k|\le d(x^m,x^n)$ and use this to conclude that the sequence $(1)$ is a Cauchy sequence in $\Bbb R$. $\Bbb R$ is complete, so $(1)$ converges to some $y_k\in\Bbb R$. Let $y=\langle y_k:k\in\Bbb N\rangle$; show that $y\in X$ and that $\langle x^n:n\in\Bbb N\rangle$ converges to $y$ in $X$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/296805", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 2, "answer_id": 0 }
Can I really factor a constant into the $\min$ function? Say I have $\min(5x_1,x_2)$ and I multiply the whole function by $10$, i.e. $10\min(5x_1,x_2)$. Does that simplify to $\min(50x_1,10x_2)$? In one of my classes I think my professor did this but I'm not sure (he makes very hard-to-read and seemingly bad notes), and I'm just trying to put these notes together. Thanks!
Yes, that is legal as long as the constant is not negative. E.g., $10 \cdot \max(3, 5) = 10 \cdot 5 = 50$ is the same as $\max(10 \cdot 3, 10 \cdot 5) = 50$, but try multiplying by $-10$...
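A tiny numeric illustration (sketch) of both the rule and how it fails for negative constants:

```python
print(10 * min(5 * 2, 7), min(10 * 5 * 2, 10 * 7))  # 70 70: positive factor distributes
print(-10 * min(3, 5), min(-10 * 3, -10 * 5))       # -30 -50: negative factor breaks it...
print(-10 * min(3, 5), max(-10 * 3, -10 * 5))       # -30 -30: ...it turns min into max
```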
{ "language": "en", "url": "https://math.stackexchange.com/questions/296867", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 5, "answer_id": 3 }
Zero and Cozero-sets of $\mathbb{R}$ A subset $U$ of a space $X$ is said to be a zero-set if there exists a continuous real-valued function $f$ on $X$ such that $U=\{x\in X: f(x)=0\}$, and said to be a cozero-set if there exists a continuous real-valued function $g$ on $X$ such that $U=\{x\in X: g(x)\not=0\}$. Is it true that every closed set in $\mathbb{R}$ is a cozero-set? I guess since $\mathbb{R}$ is completely regular this implies that every closed set is a cozero-set, but by the same argument, using the complete regularity of $\mathbb{R}$, every closed subset of $\mathbb{R}$ is a zero-set. Is this argument correct? How can we discuss the relation between open & closed subsets of $\mathbb{R}$ and zero and cozero-sets? Thanks.
I just want to know why $\emptyset$ and $\mathbb{R}$ would not be zero-sets: if I take $f(x) = 0$ for all $x$ and $g(x) = e^{x} + 1$ for all $x$, both are continuous, and then $\mathbb{R}$ is the zero-set of $f$ and $\emptyset$ is the zero-set of $g$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/296942", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 2 }
How to show that $ n^{2} = 4^{{\log_{2}}(n)} $? I ran across this simple identity yesterday, but can’t seem to find a way to get from one side to the other: $$ n^{2} = 4^{{\log_{2}}(n)}. $$ Wolfram Alpha tells me that it is true, but other than that, I’m stuck.
Take $\log_{2}$ of both sides of $$n^{2} = 4^{{\log_{2}} n}$$ and get $$\log_2(n^2) = \log_2\left(4^{\log_2 n}\right),\qquad\text{i.e.}\qquad 2\log_{2}n = (\log_2 n)(\log_2 4) = 2\log_{2}n.$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/297001", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 2 }
$P$ projector. prove that $\langle Px,x\rangle=\|Px\|^2.$ Let $X$ be a Hilbert space and $P \in B(X)$ a projector. Then for any $x\in X$: $$\langle Px,x\rangle=\|Px\|^2.$$ My proof: $$\|Px\|^{2}=\langle Px,Px\rangle=\langle P^{*}Px,x\rangle=\langle P^2x,x\rangle=\langle Px,x\rangle.$$ Is ok ? Thanks :)
Yes, that is all.
{ "language": "en", "url": "https://math.stackexchange.com/questions/297098", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Obvious group and subgroup questions My awesome math prof posted a practice midterm but didn't post any solutions to it :s Here is the question. Let $G$ be a group and let $H$ be a subgroup of $G$.

(a) TRUE or FALSE: If $G$ is abelian, then so is $H$.
(b) TRUE or FALSE: If $H$ is abelian, then so is $G$.

Part (a) is clearly true but I am having a bit of difficulty proving it; after fulfilling the conditions of being a subgroup, the commutativity of $G$ should imply that $ab=ba$ somehow. Part (b): I am fairly certain this is false, and I know my tiny brain should be able to find an example somewhere, but it is 4 am here :) I want to use some non-abelian group $G$, then find a generator to make a cyclic subgroup of $G$ that is abelian. Any help would be appreciated; I have looked in my book but I can't seem to find for certain what I am looking for with what we have covered thus far.
(b) Take the group $G$ of invertible $(n \times n)$-matrices with $\mathbb{R}$-coefficients ($n\ge 2$) under usual matrix multiplication, and let $H$ be the subgroup of invertible diagonal matrices. $H$ is abelian, but $G$ is not abelian.
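A quick sanity check that $G$ is indeed non-abelian for $n=2$ (any similar pair of matrices works):
$$\begin{pmatrix}1&1\\0&1\end{pmatrix}\begin{pmatrix}1&0\\1&1\end{pmatrix}=\begin{pmatrix}2&1\\1&1\end{pmatrix}\ne\begin{pmatrix}1&1\\1&2\end{pmatrix}=\begin{pmatrix}1&0\\1&1\end{pmatrix}\begin{pmatrix}1&1\\0&1\end{pmatrix}.$$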
{ "language": "en", "url": "https://math.stackexchange.com/questions/297150", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 7, "answer_id": 3 }
when $f(x)$ has degree $n$, why is it useful to think of $\sqrt{f(x)}$ as having degree $n/2$? I have come across this question when doing problems from "Schaum's 3000 Solved Calculus Problems". I was trying to solve $$\lim_{x\rightarrow+\infty}\frac{4x-1}{\sqrt{x^2+2}}$$ and I couldn't, so I looked at the book's solution, which uses the idea in the title (the worked solution appears as an image in the original post). Can someone please explain to me why this is and exactly how it works? Also, the next question is as such $$\lim_{x\rightarrow-\infty}\frac{4x-1}{\sqrt{x^2+2}}$$ and there the author has suggested that $x= -\sqrt{x^2}$. Why is that? Thanks. EDIT: Can someone use the above technique and solve it, to show how it works? I understand the exponent rules; what I don't understand is why you would want to do that.
Note that $$|x|\leq\sqrt{x^2+2}\leq|x|+\sqrt{2}.$$ From this inequality, you can conclude that the rate of growth of the function $\sqrt{x^2+2}$ is in some sense linear, or, in the language of the author, the degree of $\sqrt{f(x)}$ is something like $\frac{n}{2}$. Therefore the functions $4x-1$ and $\sqrt{x^2+2}$ grow at proportional rates; in particular, the limit $$\lim_{x\rightarrow\infty}\frac{4x-1}{\sqrt{x^2+2}}$$ exists.
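To finish the computation (this is the step the book's technique is pointing at), divide numerator and denominator by $x$. For $x\to+\infty$ we have $x=\sqrt{x^2}$, so
$$\frac{4x-1}{\sqrt{x^2+2}}=\frac{4-\frac1x}{\sqrt{1+\frac{2}{x^2}}}\xrightarrow[x\to+\infty]{}4,$$
while for $x\to-\infty$ one has $x=-\sqrt{x^2}$ (since $x<0$), so the same division produces an extra minus sign and the limit is $-4$. This is exactly why the author writes $x=-\sqrt{x^2}$ in the second problem.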
{ "language": "en", "url": "https://math.stackexchange.com/questions/297232", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 5, "answer_id": 0 }
Is $A - B = \emptyset$? $A = \{1,2,3,4,5\}, B = \{1,2,3,4,5,6,7,8\}$ $A - B =$ ? Does that just leave me with $\emptyset$? Or do I do something with the leftover $6,7,8$?
$A - B = \emptyset$, because by definition, $A - B$ is everything that is in $A$ but not in $B$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/297325", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
What is this math symbol called? My professors use a symbol $$x_0$$ and they pronounce it as x not or x nod, I am not sure what the exact name is because they have thick accents. I have tried looking this up on the Internet but I could not find an answer. Does anyone know what this is called?
They actually call it x-naught. I believe it comes from British English. Kind of like how the Canadians call the letter z "zed". All it means is "x sub zero", just another way of saying the same thing. It does flow better though, I think; "sub zero" just takes so much more work to say. I do think "naught" and "not" have similar meaning though - the absence of something, some value or quality. I'm sure there is a linguistic connection.
{ "language": "en", "url": "https://math.stackexchange.com/questions/297378", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "16", "answer_count": 2, "answer_id": 0 }
$T:\Bbb R^2 \to \Bbb R^2$, linear, diagonal with respect to any basis. Is there a linear transformation from $\Bbb R^2$ to $\Bbb R^2$ which is represented by a diagonal matrix when written with respect to any fixed basis? If such a linear transformation $T$ exists, then its representing matrix should be diagonal for every fixed basis $\beta$ of $\Bbb R^2$. I don't see whether this is possible or not.
If the transformation $T$ is represented by the matrix $A$ in basis $\mathcal{A}$, then it is represented by the matrix $PAP^{-1}$ in basis $\mathcal{B}$, where $P$ is the invertible change-of-basis matrix. Suppose that $T$ is represented by a diagonal matrix in any basis. Let $P$ be an arbitrary invertible matrix and $A$ any diagonal matrix: $$P = \left[\begin{array}{cc} p_{1,1} & p_{1,2} \\ p_{2,1} & p_{2,2} \end{array}\right] \text{ and } A = \left[\begin{array}{cc} d_1 & 0 \\ 0 & d_2 \end{array}\right].$$ Now, calculate $PAP^{-1} = \dfrac{1}{\det P} \left[\begin{array}{cc} b_{1,1} & b_{1,2} \\ b_{2,1} & b_{2,2} \end{array}\right]$, where the entries $b_{i,j}$ are polynomials in the $p_{i,j}$ and $d_i$ variables. For this new conjugated matrix to be diagonal, we have the following two equations. (Check!) $$\begin{align*} 0 = b_{1,2} &= (d_2 - d_1)p_{1,1}p_{1,2} \\ 0 = b_{2,1} &= (d_1 - d_2)p_{2,1}p_{2,2} \end{align*}$$ Since $P$ is arbitrary, the only way for these equations to always be satisfied is for $d_1 = d_2$. In other words, the original matrix $A$ was a scalar multiple of the identity. $$A = d \cdot \operatorname{Id}_2 = \left[\begin{array}{cc} d & 0 \\ 0 & d \end{array}\right].$$
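For completeness, the full conjugation, using $P^{-1}=\frac{1}{\det P}\left[\begin{smallmatrix}p_{2,2}&-p_{1,2}\\-p_{2,1}&p_{1,1}\end{smallmatrix}\right]$, works out to
$$PAP^{-1}=\frac{1}{\det P}\left[\begin{array}{cc}d_1p_{1,1}p_{2,2}-d_2p_{1,2}p_{2,1} & (d_2-d_1)p_{1,1}p_{1,2}\\ (d_1-d_2)p_{2,1}p_{2,2} & d_2p_{1,1}p_{2,2}-d_1p_{1,2}p_{2,1}\end{array}\right],$$
from which the two off-diagonal equations above can be read off directly.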
{ "language": "en", "url": "https://math.stackexchange.com/questions/297471", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 5, "answer_id": 2 }
Construction of a triangle, given: side, sum of the other sides and angle between them. Given: $\overline{AB}$, $\overline{AC}+\overline{BC}$ and $\angle C$. Construct the triangle $\triangle ABC$ using rule and compass.
Draw a segment $AP$ whose length equals the given sum $\overline{AC}+\overline{BC}$. At $P$, construct an angle equal to half of the given angle $\angle C$, and let $B$ be the intersection of the resulting ray with the circle centred at $A$ of radius equal to the given side $\overline{AB}$. Connect $P$ and $B$, draw the perpendicular bisector of $PB$, and let it cut $AP$ at $C$. Now you have the triangle: $C$ lies on the perpendicular bisector, so $CB=CP$, hence $\overline{AC}+\overline{CB}=\overline{AC}+\overline{CP}=\overline{AP}$ is the given sum, and $\angle ACB=\angle CPB+\angle CBP=2\cdot\tfrac{\angle C}{2}=\angle C$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/297528", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 1 }
Probability with Chi-Square distribution What is the difference, when calculating probabilities of Chi-Square distributions, between $<$ and $\leq$ or $>$ and $\geq$. For example, say you are asked to find P$(\chi_{5}^{2} \leq 1.145)$. I know that this is $=0.05$ from the table of Chi-Square distributions, but what if you were asked to find P$(\chi_{5}^{2} < 1.145)$? How would this be different?
The $\chi^2$ distributions are continuous distributions. If $X$ has continuous distribution, then $$\Pr(X\lt a)=\Pr(X\le a).$$ If $a$ is any point, then $\Pr(X=a)=0$. So in your case, the probabilities would be exactly the same. Many other useful distributions, such as the normal, and the exponential, are continuous.
{ "language": "en", "url": "https://math.stackexchange.com/questions/297605", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Determining Ambiguity in Context Free Grammars What are some common ways to determine if a grammar is ambiguous or not? What are some common attributes that ambiguous grammars have? For example, consider the following Grammar G: $S \rightarrow S(E)|E$ $E \rightarrow (S)E|0|1|\epsilon$ My guess is that this grammar is not ambiguous, because of the parentheses I could not make an equivalent strings with distinct parse trees. I could have easily made a mistake since I am new to this. What are some common enumeration techniques for attempting to construct the same string with different parse trees? * *How can I know that I am right or wrong? *What are common attributes of ambiguous grammars? *How could I prove this to myself intuitively? *How could I prove this with formal mathematics?
To determine if a context free grammar is ambiguous is undecidable (there is no algorithm which will correctly say "yes" or "no" in a finite time for all grammars). This doesn't mean there aren't classes of grammars where an answer is possible. To prove a grammar ambiguous, you do as you outline: Find a string with two parses. To prove it unambiguous is harder: You have to prove the above isn't possible. It is known that the $LL(k)$ and $LR(k)$ grammars are unambiguous, and for $k = 1$ the conditions are relatively easy to check.
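To illustrate the "find a string with two parses" strategy on a textbook example (not the grammar in the question): the grammar $E \rightarrow E+E \mid E*E \mid a$ is ambiguous, because the string $a+a*a$ has two distinct leftmost derivations,
$$E \Rightarrow E+E \Rightarrow a+E \Rightarrow a+E*E \Rightarrow a+a*E \Rightarrow a+a*a$$
and
$$E \Rightarrow E*E \Rightarrow E+E*E \Rightarrow a+E*E \Rightarrow a+a*E \Rightarrow a+a*a,$$
corresponding to the two groupings $a+(a*a)$ and $(a+a)*a$.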
{ "language": "en", "url": "https://math.stackexchange.com/questions/297721", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "23", "answer_count": 4, "answer_id": 1 }
What is the origin of the phrase "as desired" in mathematics? This is a sort of strange question that popped into my head when I was reading a paper. In writing mathematics, many authors use the phrase "as desired" to conclude a proof, usually written to indicate that one has reached the result originally stated. I know that this is perfectly good English, but the phrase is so widespread, despite the fact that there are many other similar alternatives. Does anybody know whether the phrase has any specific origins?
From Wikipedia http://en.wikipedia.org/wiki/Q.E.D.: Q.E.D. is an initialism of the Latin phrase quod erat demonstrandum, originating from the Greek analogous hóper édei deîxai (ὅπερ ἔδει δεῖξαι), meaning "which had to be demonstrated". The phrase is traditionally placed in its abbreviated form at the end of a mathematical proof ... ...however, translating the Greek phrase ὅπερ ἔδει δεῖξαι produces a slightly different meaning. Since the verb "δείκνυμι" also means to show or to prove, a better translation from the Greek would read, "what was required to be proved." The phrase was used by many early Greek mathematicians, including Euclid and Archimedes. But I don't know how close this translation of Q.E.D. "what was required" is to the phrase "as desired", as desired by the OP.
{ "language": "en", "url": "https://math.stackexchange.com/questions/297798", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 1, "answer_id": 0 }
Trigonometric Identity $\frac{1}{1-\cos t} + \frac{1}{1+\cos t}$ I am just learning about trig identities, and after doing a few, I am stuck on this one: $$ \frac{1}{1-\cos t} + \frac{1}{1+\cos t}. $$ The only way to start, that I can think of is this: $$ \frac{1}{1-(1/\sec t)} + \frac{1}{1+(1/\sec t)}. $$ And from there it just gets messed up. Can someone point me in the right direction?
Hint: Use that $$ \frac{1}{a}+\frac{1}{b}=\frac{a+b}{ab} $$ along with the identity $$ \sin^2t+\cos^2t=1. $$
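Carrying the hint through (just combining the fractions and applying the Pythagorean identity):
$$\frac{1}{1-\cos t}+\frac{1}{1+\cos t}=\frac{(1+\cos t)+(1-\cos t)}{(1-\cos t)(1+\cos t)}=\frac{2}{1-\cos^2 t}=\frac{2}{\sin^2 t}=2\csc^2 t.$$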
{ "language": "en", "url": "https://math.stackexchange.com/questions/297864", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Existence of Matrix inverses depending on the existence of the inverse of the others.. Let $A_{m\times n}$ and $B_{n\times m}$ be two matrices with real entries. Prove that $I-AB$ is invertible iff $I-BA$ is invertible.
Hint: let $X=(I-BA)^{-1}$ and expand it formally as a geometric series. We get $$X=I+BA+ (BA)(BA)+(BA)(BA)(BA)+\dots$$ $$AXB=AB+(AB)^2+(AB)^3+(AB)^4+\dots$$ $$I+AXB=I+(AB)+(AB)^2+\dots+(AB)^n+\dots=(I-AB)^{-1}$$ Check yourself (the series is only a heuristic; the resulting identity can be verified directly): $(I+AXB)(I-AB)=I$, $(I-AB)(I+AXB)=I$.
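For a series-free verification of the "check yourself" step (a small supplement): write $X=(I-BA)^{-1}$, so that $(I-BA)X=X(I-BA)=I$, i.e. $X-BAX=I$ and $X-XBA=I$. Then
$$(I-AB)(I+AXB)=I-AB+AXB-ABAXB=I-AB+A\,(X-BAX)\,B=I-AB+AB=I,$$
and the computation for $(I+AXB)(I-AB)$ is the same, using $X-XBA=I$. Hence $(I-AB)^{-1}=I+A(I-BA)^{-1}B$.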
{ "language": "en", "url": "https://math.stackexchange.com/questions/297935", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Intuitive meaning of immersion and submersion What are immersion and submersion at the intuitive level? What can be visualized in each case?
First of all, note that if $f : M \to N$ is a submersion, then $\dim M \geq \dim N$, and if $f$ is an immersion, $\dim M \leq \dim N$. The Rank Theorem may provide some insight into these concepts. The following statement of the theorem is taken from Lee's Introduction to Smooth Manifolds (second edition); see Theorem $4.12$. Suppose $M$ and $N$ are smooth manifolds of dimensions $m$ and $n$, respectively, and $F : M \to N$ is a smooth map with constant rank $r$. For each $p \in M$ there exist smooth charts $(U, \varphi)$ for $M$ centered at $p$ and $(V, \psi)$ for $N$ centered at $F(p)$ such that $F(U) \subseteq V$, in which $F$ has a coordinate representation of the form $$\hat{F}(x^1, \dots, x^r, x^{r+1}, \dots, x^m) = (x^1, \dots, x^r, 0, \dots, 0).$$ In particular, if $F$ is a smooth submersion, this becomes $$\hat{F}(x^1, \dots, x^n, x^{n+1}, \dots, x^m) = (x^1, \dots, x^n),$$ and if $F$ is a smooth immersion, it is $$\hat{F}(x^1, \dots, x^m) = (x^1, \dots, x^m, 0, \dots, 0).$$ So a submersion locally looks like a projection $\mathbb{R}^n\times\mathbb{R}^{m-n} \to \mathbb{R}^n$, while an immersion locally looks like an inclusion $\mathbb{R}^m \to \mathbb{R}^m\times\mathbb{R}^{n-m}$.
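Two standard examples, added here for intuition (they are not part of Lee's statement): the map $\gamma:\mathbb{R}\to\mathbb{R}^2$, $\gamma(t)=(\cos t,\sin t)$, is an immersion, since $\gamma'(t)=(-\sin t,\cos t)$ never vanishes, even though $\gamma$ itself is not injective; and the projection $\pi:\mathbb{R}^2\to\mathbb{R}$, $\pi(x,y)=x$, is a submersion. Locally the first looks like the inclusion $x\mapsto(x,0)$ and the second like the projection $(x,y)\mapsto x$, exactly as the Rank Theorem predicts.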
{ "language": "en", "url": "https://math.stackexchange.com/questions/297988", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "48", "answer_count": 1, "answer_id": 0 }
Groups with transitive automorphisms Let $G$ be a finite group such that for each $a,b \in G \setminus \{e\}$ there is an automorphism $\phi:G \rightarrow G$ with $\phi(a)=b$. Prove that $G$ is isomorphic to $\Bbb Z_p^n$ for some prime $p$ and natural number $n$.
Hint 1: If $a, b \in G \setminus \{e\}$, then $a$ and $b$ have the same order. Hint 2: Using the previous hint, show that $G$ has order $p^n$ for some prime $p$ and that every nonidentity element has order $p$. Hint 3: In a $p$-group, the center is a nontrivial characteristic subgroup.
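For completeness, here is one way the hints can be assembled (a sketch): by Hint 3, $Z(G)$ is a nontrivial characteristic subgroup, so every automorphism maps $Z(G)$ into itself. Pick $a\in Z(G)\setminus\{e\}$; for any $b\ne e$ there is an automorphism $\phi$ with $\phi(a)=b$, and then $b\in\phi(Z(G))=Z(G)$. Hence $Z(G)=G$ and $G$ is abelian. By Hint 2 every nonidentity element has order $p$, so $G$ is a vector space over $\Bbb F_p$, and choosing a basis gives $G\cong\Bbb Z_p^n$.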
{ "language": "en", "url": "https://math.stackexchange.com/questions/298050", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Delta in continuity Let $f: [a,b]\to\mathbb{R}$ be continuous; prove that it is uniformly continuous. I know that using compactness it is almost a one-liner, but I want to prove it without using compactness. However, I can use the theorem that every continuous function achieves its max and min on a closed bounded interval. I propose proving that a suitable choice of $\delta$, viewed as a function of $x$, is continuous on $[a,b]$; for example (but not restricted to this): for an arbitrary $\epsilon>0$, for each $x\in[a,b]$ set $\Delta_x=\{0<\delta<b-a \;|\;|x-y|<\delta\Longrightarrow |f(x)-f(y)| <\epsilon\}$, and denote $\delta_x = \sup \Delta_x$. Basically $\delta_x$ is the radius of the largest neighborhood of $x$ that will be mapped into a subset of the neighborhood of radius epsilon of $f(x)$. I'm trying to show that $\delta_x$ is continuous on $[a,b]$ with fixed $\epsilon$. My progress is that I can show $\delta_y$ is bounded below if $y$ is close enough to $x$, but I failed to find an upper bound related to its distance from $x$. Maybe you could either help me with this $\delta_x$ proof, or give another cleaner proof without compactness (but max and min allowed). Thanks so much.
Let an $\epsilon>0$ be given and put $$\rho(x):=\sup\bigl\{\delta\in\ ]0,1]\ \bigm|\ y, \>y'\in U_\delta(x)\ \Rightarrow\ |f(y')-f(y)|<\epsilon\bigr\}\ .$$ By continuity of $f$ the function $x\to\rho(x)$ is strictly positive and $\leq1$ on $[a,b]$. Lemma. The function $\rho$ is $1$-Lipschitz continuous, i.e., $$|\rho(x')-\rho(x)|\leq |x'-x|\qquad \bigl(x,\ x'\in[a,b]\bigr)\ .$$ Proof. Assume the claim is wrong. Then there are two points $x_1$, $\>x_2\in[a,b]$ with $$\rho(x_2)-\rho(x_1)>|x_2-x_1|\ .$$ It follows that there is a $\delta$ with $\rho(x_1)<\delta<\rho(x_2)-|x_2-x_1|$. By definition of $\rho(x_1)$ we can find two points $y$, $\> y'\in U_\delta(x_1)$ such that $|f(y')-f(y)|\geq\epsilon$. Now $$|y-x_2|\leq |y -x_1|+|x_1-x_2|<\delta +|x_2-x_1| =:\delta'<\rho(x_2)\ .$$ Similarly $|y'-x_2|<\delta'$. It follows that $y$, $\>y'$ would contradict the definition of $\rho(x_2)$.$\qquad\quad\square$ The function $\rho$ therefore takes a positive minimum value $\rho_*$ on $[a,b]$. The number $\delta_*:={\rho_*\over2}>0$ is a universal $\delta$ for $f$ and the given $\epsilon$ on $[a,b]$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/298136", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 2, "answer_id": 1 }
Product of numbers Pair of numbers whose product is $+7$ and whose sum is $-8$. Factorise $x^{2} - 8x + 7$. I can factorise, but I just can't find any pair of numbers whose product is $+7$ and whose sum is $-8$. Any ideas? Thanks, guys!
I suggest you use Vieta's formulae: you need $xy=7$ and $x+y=-8$. In general, for a quadratic $x^2+ax+b$, the roots $x_1, x_2$ have the property $x_1+x_2=-a$ and $x_1\cdot x_2=b$. So the pair you want are the roots of the quadratic equation $x^2+8x+7=(x+1)(x+7)$, namely $-1$ and $-7$: indeed $(-1)\cdot(-7)=+7$ and $(-1)+(-7)=-8$, and these are exactly the constants appearing in the factorisation $x^{2}-8x+7=(x-1)(x-7)$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/298200", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 5, "answer_id": 3 }
How to find all polynomials with rational coefficients s.t $\forall r\notin\mathbb Q :f(r)\notin\mathbb Q$ How to find all polynomials with rational coefficients$f(x)=a_nx^n+\cdots+a_1x+a_0$, $a_i\in \mathbb Q$, such that $$\forall r\in\mathbb R\setminus\mathbb Q,\quad f(r)\in\mathbb R\setminus\mathbb Q.$$ thanks in advance
The only candidates are those polynomials $f(x)\in\mathbb Q[x]$ that are factored over $\mathbb Q$ as product of first degree polynomials (this is because if $\deg f>1$ and $f$ is irreducible then all of its roots are irrationals.) The first degree polynomials have this property. Can you see that these are all? (Hint: The polynomial $f(x)+q$, for suitable $q\in\mathbb Q$, is not a product of first degree polynomials)
{ "language": "en", "url": "https://math.stackexchange.com/questions/298276", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Why does zeta have infinitely many zeros in the critical strip? I want a simple proof that $\zeta$ has infinitely many zeros in the critical strip. The function $$\xi(s) = \frac{1}{2} s (s-1) \pi^{\tfrac{s}{2}} \Gamma(\tfrac{s}{2})\zeta(s)$$ has exactly the non-trivial zeros of $\zeta$ as its zeros ($\Gamma$ cancels all the trivial ones out). It also satisfies the functional equation $\xi(s) = \xi(1-s)$. If we assume it has finitely many zeros, what analysis could get a contradiction? I found an outline for a way to do it here but I can't do the details myself: https://mathoverflow.net/questions/13647/why-does-the-riemann-zeta-function-have-non-trivial-zeros/13762#13762
Hardy proved in 1914 that an infinity of zeros were on the critical line ("Sur les zéros de la fonction $\zeta(s)$ de Riemann", Comptes rendus hebdomadaires des séances de l'Académie des sciences, 1914). Of course other zeros could exist elsewhere in the critical strip. Let's exhibit the main idea, starting with the Xi function defined by: $$\Xi(t):=\xi\left(\frac 12+it\right)=-\frac 12\left(t^2+\frac 14\right)\,\pi^{-\frac 14-\frac{it}2}\,\Gamma\left(\frac 14+\frac{it}2\right)\,\zeta\left(\frac 12+it\right)$$ $\Xi(t)$ is an even integral function of $t$, real for real $t$ because of the functional equation (applied to $s=\frac 12+it$): $$\xi(s)=\frac 12s(s-1)\pi^{-\frac s2}\,\Gamma\left(\frac s2\right)\,\zeta(s)=\frac 12s(s-1)\pi^{\frac {s-1}2}\,\Gamma\left(\frac {1-s}2\right)\,\zeta(1-s)=\xi(1-s)$$ We observe that a zero of $\zeta$ on the critical line will give a real zero of $\,\Xi(t)$. Now it can be proved (using Ramanujan's formula $(2.16.2)$, from the pages of Titchmarsh cited at the end) that: $$\int_0^\infty\frac{\Xi(t)}{t^2+\frac 14}\cos(x t)\,dt=\frac{\pi}2\left(e^{\frac x2}-2e^{-\frac x2}\psi\left(e^{-2x}\right)\right)$$ where $\,\displaystyle \psi(s):=\sum_{n=1}^\infty e^{-n^2\pi s}\ $ is the theta function used by Riemann. Setting $\ x:=-i\alpha\ $ and differentiating $2n$ times with respect to $\alpha$, we get (see Titchmarsh's first proof $10.2$; alternative proofs follow in the book): $$\lim_{\alpha\to\frac{\pi}4}\,\int_0^\infty\frac{\Xi(t)}{t^2+\frac 14}t^{2n}\cosh(\alpha t)\,dt=\frac{(-1)^n\,\pi\,\cos\bigl(\frac{\pi}8\bigr)}{4^n}$$ Let's suppose that $\Xi(t)$ doesn't change sign for $\,t\ge T$; then the integral will be uniformly convergent with respect to $\alpha$ for $0\le\alpha\le\frac{\pi}4$, so that, for every $n$, we will have (at the limit): $$\int_0^\infty\frac{\Xi(t)}{t^2+\frac 14}t^{2n}\cosh\left(\frac {\pi t}4\right)\,dt=\frac{(-1)^n\,\pi\,\cos\bigl(\frac{\pi}8\bigr)}{4^n}$$ But this is not possible since, from our hypothesis, the left-hand side has the same sign for sufficiently large values of $n$ (cf. Titchmarsh) while the right-hand side has alternating signs. This proves that $\Xi(t)$ must change sign infinitely often and that $\zeta\left(\frac 12+it\right)$ has infinitely many real zeros $t$. Probably not as simple as you hoped, but a stronger result! $$-$$ The relevant excerpts are from Titchmarsh's book "The Theory of the Riemann Zeta-function", p. $35$-$36$ and $255$-$258$ (the scanned pages included in the original answer are omitted here).
{ "language": "en", "url": "https://math.stackexchange.com/questions/298331", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "16", "answer_count": 5, "answer_id": 3 }
Is this function differentiable at 0? I would like to know if this function is differentiable at the origin: $$f(x) = \left\{ \begin{array}{cl} x+x^2 & \mbox{if } x \in \mathbb{Q}; \\ x & \mbox{if } x \not\in \mathbb{Q}. \end{array} \right.$$ Intuitively, I know it is, but I don't know how to prove it. Any ideas? Thanks a lot.
Using the sequential criterion for continuity at an arbitrary point $c\in\mathbb{R}$ (first consider a rational sequence converging to $c$, then an irrational sequence converging to $c$, and equate the limits), you need $c^2+c=c$, so $c^2=0$, so $c=0$; hence the function is continuous only at $c=0$. Now consider the limit $\lim_{x\rightarrow 0}\frac{f(x)-f(0)}{x}$: take a rational sequence $x_n\rightarrow 0$ and see what the limit is, then take an irrational sequence $x_n\rightarrow 0$ and see the limit. Are they equal? You need to read these two topics first to understand the solution: the sequential criterion for limits and the sequential criterion for continuity. You can also look up the sequential criterion for derivatives.
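For the record, here is what the two sequences give for the difference quotient at $0$ (so the exercise has a definite answer): for a rational sequence $x_n\to0$,
$$\frac{f(x_n)-f(0)}{x_n}=\frac{x_n+x_n^2}{x_n}=1+x_n\to1,$$
and for an irrational sequence $x_n\to0$,
$$\frac{f(x_n)-f(0)}{x_n}=\frac{x_n}{x_n}=1.$$
Both limits agree, so $f'(0)=1$ and $f$ is differentiable at the origin.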
{ "language": "en", "url": "https://math.stackexchange.com/questions/298482", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 4, "answer_id": 1 }
A proof question about continuity and norm Let $E⊂\mathbb{R}^{n}$ be a closed, non-empty set and $N : \mathbb{R}^{n}\to\mathbb{R}$ be a norm. Prove that the function $f(x)=\inf\left \{ N(x-a)\mid a\in E \right \}$, $f : \mathbb{R}^{n}→\mathbb{R}$ is continuous and $f^{-1}(0) = E$. There are some hints: $f^{-1}(0) = E$ will be implied by $E$ closed, and $f : \mathbb{R}^{n}\to\mathbb{R}$ is continuous will be implied by triangle inequality. I still can't get the proof by the hint. So...thank you for your help!
The other answers so far are good, but here is an alternative hint for the first part. Because $E$ is closed, its complement $E^c$ is open. A set in $\mathbb{R}^n$ is open if and only if the set contains an open ball around any point in the set. Thus, for any $x\in E^c$, there is some $r>0$ such that the open ball $B(x,r)\subset E^c$. What does that tell you about the minimum distance from $x$ to $E$?
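For the continuity part, the triangle-inequality argument alluded to in the question runs as follows: for any $x,y\in\mathbb{R}^n$ and $a\in E$,
$$f(x)\le N(x-a)\le N(x-y)+N(y-a),$$
and taking the infimum over $a\in E$ gives $f(x)\le N(x-y)+f(y)$. Swapping $x$ and $y$ yields $|f(x)-f(y)|\le N(x-y)$, so $f$ is Lipschitz, hence continuous.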
{ "language": "en", "url": "https://math.stackexchange.com/questions/298555", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 2 }
What is inverse of $I+A$? Assume $A$ is a square invertible matrix and we have $A^{-1}$. If we know that $I+A$ is also invertible, do we have a closed form for $(I+A)^{-1}$ in terms of $A^{-1}$ and $A$? Does it make it any easier if we know that the sums of all rows are equal?
Check this question. The first answer presents a recursive formula to retrieve the inverse of a generic sum of matrices. So yours should be a special case.
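One elementary identity, added here as a complement (it assumes, as in the question, that both $A$ and $I+A$ are invertible): from $I+A=A(I+A^{-1})$ one gets
$$(I+A)^{-1}=\bigl(I+A^{-1}\bigr)^{-1}A^{-1},$$
where $I+A^{-1}=A^{-1}(I+A)$ is automatically invertible. And when the spectral radius of $A$ is less than $1$ there is also the Neumann series $(I+A)^{-1}=\sum_{k\ge 0}(-A)^k$.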
{ "language": "en", "url": "https://math.stackexchange.com/questions/298616", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "17", "answer_count": 3, "answer_id": 2 }
Total divisor in a Principal Ideal Domain. Let $R$ be a right and left principal ideal domain. An element $a\in R$ is said to be a right divisor of $b\in R$ if there exists $x \in R$ such that $xa=b$, and similarly one defines a left divisor. $a$ is said to be a total divisor of $b$ if $RbR = \langle a\rangle_R \cap {}_R\langle a\rangle$. How do I prove the following theorem: if $RbR \subseteq aR$, then $a$ is already a total divisor of $b$? Thanks in advance. I am finding it pretty difficult to understand things in the noncommutative case.
I'm going to assume $R$ contains $1$. Also, my proof will only show that $RbR \subseteq aR \cap Ra$. Most definitions of total divisors I've seen go like the following: An element $a$ in a ring $R$ is a total divisor of $b$ when $RbR \subseteq aR \cap Ra$. Whether this implies $RbR = aR \cap Ra$ in a ring that is both a left and right principal ideal domain with unity, I'm unsure of. Maybe you could provide information on where you found this problem? The proof: Since $RbR$ is an ideal, it is also a right ideal, and because $R$ is a right principal ideal domain, we get $RbR = Rx$ for some $x \in R$. Again, because $R$ is a right PID, and sums of right ideals are right ideals, we have $Ra + Rx = Rd$ for some $d \in R$. Thus, $d = r_1a +r_2x$, with $r_1, r_2 \in R$. Further, $x = ar$ for some $r \in R$, because, $RbR \subseteq aR$ and $1 \in R$. Now, $dr = r_1ar +r_2xr = r_1ar + r_2r'x$ (for some $r' \in R$, because $xR = Rx$) $= r_1ar + r_2r'ar = (r_1a + r_2r'a)r$ $\implies d = r_1a + r_2r'a$ when $r \neq 0$, that is when $a \neq 0$ (should we have $a = 0$ the result follows trivially, so let's assume $a \neq 0$). That is, $d \in Ra$, so $Rd \subseteq Ra$, and $Rd = Ra$. From our previous equation $Ra + Rx = Rd$, we see that $Rx \subseteq Ra$. But $RbR = Rx = xR$, so $RbR \subseteq Ra$. We have, adding the hypothesis, that $RbR \subseteq aR$ and $RbR \subseteq Ra$, thus - $RbR \subseteq aR \cap Ra$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/298754", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
How to check the continuity of this function defined as follows The function $f:\Bbb R\to\Bbb R$ defined by $f(x)=\min(3x^3+x,|x|)$ is (A) continuous on $\Bbb R$, but not differentiable at $x=0$. (B) differentiable on $\Bbb R$, but $f\,'$ is discontinuous at $x=0$. (C) differentiable on $\Bbb R$, and $f\,'$ is continuous on $\Bbb R$. (D) differentiable to any order on $\Bbb R$. My attempt: here, $f(x)=|x|$ for $x>0$; $f(x)=3x^3+x$ for $x<0$; and $f(x)=0$ at $x=0$. Also, $Lf'(0)=Rf'(0)=1$, so $f$ is differentiable at $x=0$. But I am having trouble checking whether $f'$ is continuous at $x=0$ or not. Can someone point me in the right direction? Thanks in advance for your time.
HINT: If $x<0$, $f\,'(x)=9x^2+1$, so $$\lim_{x\to 0^-}f\,'(x)=\lim_{x\to 0^-}\left(9x^2+1\right)=1\;.$$ Is this the same as $\lim_{x\to 0^+}f\,'(x)$? Are both one-sided limits equal to $f\,'(0)$?
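Filling in the remaining checks (a sketch): for $x>0$ we have $3x^3+x>x=|x|$, so $f(x)=x$ and $f\,'(x)=1$; thus $\lim_{x\to 0^+}f\,'(x)=1=f\,'(0)$, matching the left-hand limit above, so $f\,'$ is continuous on $\Bbb R$ and option (C) holds. Option (D) fails: $f\,''(x)=18x$ for $x<0$ and $f\,''(x)=0$ for $x>0$, so $f\,''(0)=0$ exists, but $f\,'''(0)$ does not, since the one-sided limits of $f\,''(x)/x$ at $0$ are $18$ and $0$.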
{ "language": "en", "url": "https://math.stackexchange.com/questions/298822", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Showing diffeomorphism between $S^1 \subset \mathbb{R}^2$ and $\mathbb{RP}^1$ I am trying to construct a diffeomorphism between $S^1 = \{x^2 + y^2 = 1; x,y \in \mathbb{R}\}$ with subspace topology and $\mathbb{R P}^1 = \{[x,y]: x,y \in \mathbb{R}; x \vee y \not = 0 \}$ with quotient topology and I am a little stuck. I have shown that both are smooth manifolds, and I used stereographic projection for $S^1$, but now I am runing into trouble when I give the homeomorphism between $S^1$ and $\mathbb{RP}^1$ as the map that takes a line in $\mathbb{RP}^1$ to the point in $S^1$ that you get when letting the parallel line go through the respective pole used in the stereographic projection. If I use the south and north poles I get a potential homeomorphism, but I cannot capture the horizontal line in my image, but when I pick say north and east then my map is not well defined as I get different lines for the same point in $S^1$. Can somebody give me a hint how to make this construction work, or is it better to move to a different representation of $S^1$ ?
The easiest explicit map I know is: $$(\cos(\theta), \sin(\theta))\mapsto [\cos(\theta/2):\sin(\theta/2)]$$ Note that although $\cos(\theta/2)$ and $\sin(\theta/2)$ depend on $\theta$ and not just $\sin(\theta)$ and $\cos(\theta)$, the map is well-defined so long as we use the same value of $\theta$ when computing coordinates in $S^1$.That is, $$\cos(\frac{\theta+2\pi}{2})=-\cos(\theta/2) \text{ and } \sin(\frac{\theta+2\pi}{2})=-\sin(\theta/2)$$ So the choice of $\theta$ modulo $2\pi$ does not affect $[\cos(\theta/2):\sin(\theta/2)]$, since $[x,y]=[-x,-y]$.
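Spelling out the inverse map (a routine check, using the double-angle formulas):
$$[x:y]\mapsto\left(\frac{x^2-y^2}{x^2+y^2},\;\frac{2xy}{x^2+y^2}\right),$$
which is well defined on $\mathbb{RP}^1$ (it is unchanged under $(x,y)\mapsto(\lambda x,\lambda y)$ for $\lambda\ne0$) and smooth; composing with the map above and using $\cos\theta=\cos^2(\theta/2)-\sin^2(\theta/2)$ and $\sin\theta=2\cos(\theta/2)\sin(\theta/2)$ shows the two maps are mutually inverse.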
{ "language": "en", "url": "https://math.stackexchange.com/questions/298879", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 3, "answer_id": 2 }
Proving that if $f: \mathbb{C} \to \mathbb{C} $ is a continuous function with $f^2, f^3$ analytic, then $f$ is also analytic Let $f: \mathbb{C} \to \mathbb{C}$ be a continuous function such that $f^2$ and $f^3$ are both analytic. Prove that $f$ is also analytic. Some ideas: At $z_0$ where $f^2$ is not $0$ , then $f^3$ and $f^2$ are analytic so $f = \frac{f^3}{f^2}$ is analytic at $z_0$ but at $z_0$ where $f^2$ is $0$, I'm not able to show that $f$ is analytic.
First rule out the case $f^2(z)\equiv 0$ or $f^3(z)\equiv 0$, as both imply $f(z)\equiv 0$ and we are done. Write $f^2(z)=(z-z_0)^ng(z)$, $f^3(z)=(z-z_0)^mh(z)$ with $n,m\in\mathbb N_0$ and $g,h$ analytic and nonzero at $z_0$. Then $$ (z-z_0)^{3n}g^3(z)=f^6(z)=(z-z_0)^{2m}h^2(z)$$ implies $3n=2m$ (and $g^3=h^2$), hence if we let $k=m-n\in\mathbb Z$ we have $n=3n-2n=2m-2n=2k$ and $m=3m-2m=3m-3n=3k$. Especially, we see that $k\ge 0$ and hence $$ f(z)=\frac{f^3(z)}{f^2(z)}=(z-z_0)^k\frac{h(z)}{g(z)}$$ is analytic at $z_0$. Remark: We did not need that $f$ itself is continuous.
{ "language": "en", "url": "https://math.stackexchange.com/questions/298951", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12", "answer_count": 2, "answer_id": 1 }
For each $n \ge 1$ compute $Z(S_n)$ Can someone please help me on how to compute $Z(S_n)$ for each $n \ge 1$? Does this basically mean compute $Z(1), Z(2), \ldots$? Please hint me on how to compute this. Thanks in advance.
Hint: $S_n$ denotes the symmetric group over a set of $n$ elements. It's the group of all possible permutations, so you have to find $Z(S_1),Z(S_2),...$; that is, you have to find the permutations that commute with every other permutation. That is the definition of $Z(G)$: $$Z(G)=\lbrace g\in G,ga=ag\;\;\forall a\in G\rbrace$$ So it's all the elements of the group that commute with ALL members of the group. It's a special case of the centralizer of a subgroup: if you have $H\subset G$, with $G$ a group and $H$ a subgroup, then the centralizer of $H$ in $G$ is: $$C_G(H)=\lbrace g\in G, gh=hg\;\;\forall h\in H\rbrace$$ So the center of a group, $Z(G)$, is the centralizer of $G$ in $G$: $C_G(G)$. There are some trivial cases for small values of $n$; for example, for $n=2$ the group is abelian, so $Z(S_2)=S_2$. Remember the order of $S_n$: $|S_n|=n!$
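For the record, the answers these hints lead to are $Z(S_1)=S_1$ and $Z(S_2)=S_2$ (both groups are abelian), while $Z(S_n)=\{e\}$ for $n\ge3$. A quick argument for the last claim: if $\sigma\ne e$, pick $a$ with $\sigma(a)=b\ne a$ and some $c\notin\{a,b\}$ (possible since $n\ge3$); then the transposition $\tau=(b\;c)$ satisfies
$$\tau\sigma(a)=c\ne b=\sigma\tau(a),$$
so $\sigma\notin Z(S_n)$.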
{ "language": "en", "url": "https://math.stackexchange.com/questions/299087", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Binomial-Like Distribution with Changing Probability The Question Assume we have $n$ multiple choice questions, the $k$-th question having $k+1$ answer choices. What is the probability that, guessing randomly, we get at least $r$ questions right? If no general case is available, I am OK with the special case $r = \left\lfloor\frac{n}{2}\right\rfloor + 1$. Example Assume we have four different multiple choice questions. * *Question 1 * *Choice A *Choice B *Question 2 * *Choice A *Choice B *Choice C *Question 3 * *Choice A *Choice B *Choice C *Choice D *Question 4 * *Choice A *Choice B *Choice C *Choice D *Choice E If we choose the answer to each question at random, what is the probability we get at least three right? (By constructing a probability tree, I get the answer as $11/120$.)
Let $U_k$ be an indicator random variable, equal to $1$ if the $k$-th question has been guessed correctly. Clearly $(U_1, U_2,\ldots,U_n)$ are independent Bernoulli random variables with $\mathbb{E}\left(U_k\right) = \frac{1}{k+1}$. The total number of correct guesses equals: $$ X = \sum_{k=1}^n U_k $$ The probability generating function of $X$ is easy to find: $$ \mathcal{P}_X\left(z\right) = \mathbb{E}\left(z^X\right) = \prod_{k=1}^n \mathbb{E}\left(z^{U_k}\right) = \prod_{k=1}^n \frac{k+z}{k+1} = \frac{1}{z} \frac{(z)_{n+1}}{(n+1)!} = \frac{1}{(n+1)!} \sum_{k=0}^n z^k s(n+1,k+1) $$ where $s(n,m)$ denotes the unsigned Stirling numbers of the first kind. Thus: $$ \mathbb{P}\left(X=m\right) = \frac{s(n+1,m+1)}{(n+1)!} [ 1 \leqslant m \leqslant n ] $$ The probability of getting at least $r$ equals: $$ \mathbb{P}\left(X \geqslant r\right) = \sum_{m=r}^{n} \frac{s(n+1,m+1)}{(n+1)!} $$ This reproduces your result for $n=4$ and $r=3$. In[229]:= With[{n = 4, r = 3}, Sum[Abs[StirlingS1[n + 1, m + 1]]/(n + 1)!, {m, r, n}]] Out[229]= 11/120 A table of these probabilities for $1 \leqslant n \leqslant 11$ appeared in the original answer but is omitted here.
{ "language": "en", "url": "https://math.stackexchange.com/questions/299157", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Recursion for Finding Expectation (Somewhat Lengthy) Preface: Ever since I read the brilliant answer by Mike Spivey I have been on a mission for re-solving all my probability questions with it when possible. I tried solving the Coupon Collector problem using Recursion which the community assisted on another question of mine. Now, I think I have come close to completely understanding the way of using recursion. But..... Question: This is from Stochastic Processes by Sheldon Ross (Page 49, Question 1.14). The question is: A fair die is continually rolled until an even number has appeared on 10 distinct rolls. Let $X_i$ denote the number of rolls that land on side $i$. Determine : * *$E[X_1]$ *$E[X_2]$ *PMF of $X_1$ *PMF of $X_2$ My Attempt: Building on my previous question, I begin: Let $N$ denote the total number of throws (Random Variable) and let $Z_{i}$ denote the result of the $i^{th}$ throw. Then: \begin{eqnarray*} E(X_{1}) & = & E\left(\sum_{i=1}^{N}1_{Z_{i}=1}\right)\\ & = & E\left[E\left(\sum_{i=1}^{N}1_{Z_{i}=1}|N\right)\right]\\ E(X_{1}|N) & = & E(1_{Z_{1}=1}+1_{Z_{2}=1}+\cdots+1_{z_{N}=1})\\ & = & \frac{N-10}{3}\\ E(X_{1}) & = & \frac{E(N)-10}{3} \end{eqnarray*} To Find : $E(N)$ Let $W_{i}$ be the waiting time for the $i^{th}$ distinct roll of an even number. Then: $$E(N)=\sum_{i=1}^{10}E(W_{i})$$ Now, \begin{eqnarray*} E(W_{i}) & = & \frac{1}{2}(1)+\frac{1}{2}(1+E(W_{i}))\\ E(W_{i}) & = & 1+\frac{E(W_{i})}{2}\\ \implies E(W_{i}) & = & 2\\ \therefore E(N) & = & \sum_{i=1}^{10}2\\ & = & 20\\ \therefore E(X_{1}) & = & \frac{10}{3}\\ & & \blacksquare \end{eqnarray*} The exact same procedure can be followed for $E(X_2)$ with the same answer. The answer matches the one given in the book. I am confused how to go from here to get the PMFs. Note : If possible, please provide me an extension to this answer for finding the PMFs rather than a completely different method. The book has the answer at the back using a different method. I am not interested in an answer as much as I am interested in knowing how to continue this attempt to get the PMFs.
The main idea is to use probability generating functions. (If you don't know what that means, this will be explained later on in the solution.) We solve the problem in general, so replace $10$ by any non-negative integer $a$. Let $p_{k, a}(i)$ be the probability of getting $k$ rolls with face $i$ when a fair die is continually rolled until an even number has appeared on $a$ distinct rolls. In relation to your problem, when $a=10$, we have $p_{k, 10}(i)=P(X_i=k)$. To start off, note that $p_{-1, a}(i)=0$ (you can't have $-1$ rolls), $$p_{k, 0}(i)=\begin{cases} 1 & \text{if} \, k=0 \\ 0 & \text{if} \, k \geq 1 \end{cases}$$ (If you continually roll a fair die until an even number has appeared on $0$ distinct rolls, then you must have $0$ rolls for all faces since you don't roll at all.) Now we have 2 recurrence relations: $p_{k, a}(1)=\frac{1}{6}p_{k-1, a}(1)+\frac{1}{3}p_{k, a}(1)+\frac{1}{2}p_{k, a-1}(1)$ and $p_{k, a}(2)=\frac{1}{6}p_{k-1, a-1}(2)+\frac{1}{3}p_{k, a-1}(2)+\frac{1}{2}p_{k, a}(2)$. Simplifying, we get $p_{k, a}(1)=\frac{1}{4}p_{k-1, a}(1)+\frac{3}{4}p_{k, a-1}(1)$ and $p_{k, a}(2)=\frac{1}{3}p_{k-1, a-1}(2)+\frac{2}{3}p_{k, a-1}(2)$. Time to bring in the probability generating functions. Define $f_a(x)=\sum\limits_{k=0}^{\infty}{p_{k, a}(1)x^k}$, $g_a(x)=\sum\limits_{k=0}^{\infty}{p_{k, a}(2)x^k}$. Basically, the coefficient of $x^k$ in $f_a(x)$ is the probability that you have $k$ rolls of $1$. You can think of it (using your notation) as $f_{10}(x)=E(x^{X_1})$ (and similarly for $g_a(x)$). We easily see that $f_0(x)=g_0(x)=1$. Multiplying the first recurrence relation by $x^k$ and summing from $k=0$ to $\infty$ gives $$\sum\limits_{k=0}^{\infty}{p_{k, a}(1)x^k}=\frac{1}{4}\sum\limits_{k=0}^{\infty}{p_{k-1, a}(1)x^k}+\frac{3}{4}\sum\limits_{k=0}^{\infty}{p_{k, a-1}(1)x^k}$$ $$f_a(x)=\frac{1}{4}xf_a(x)+\frac{3}{4}f_{a-1}(x)$$ $$f_a(x)=\frac{3}{4-x}f_{a-1}(x)$$ $$f_a(x)=\left(\frac{3}{4-x}\right)^af_0(x)=\left(\frac{3}{4-x}\right)^a$$ The coefficient of $x^k$ in the expansion of $f_a(x)$ is just $\left(\frac{3}{4}\right)^a\frac{1}{4^k}\binom{k+a-1}{k}$. In particular, when $a=10$, the PMF $F_1$ of $X_1$ is $$F_1(k)=P(X_1=k)=\frac{3^{10}}{4^{k+10}}\binom{k+9}{k}$$ Doing the same to the second recurrence gives $$g_a(x)=\left(\frac{1}{3}x+\frac{2}{3}\right)g_{a-1}(x)$$ $$g_a(x)=\left(\frac{1}{3}x+\frac{2}{3}\right)^ag_0(x)=\left(\frac{1}{3}x+\frac{2}{3}\right)^a$$ The coefficient of $x^k$ in the expansion of $g_a(x)$ is just $\frac{1}{3^a}2^{a-k}\binom{a}{k}$. In particular, when $a=10$, the PMF $F_2$ of $X_2$ is $$F_2(k)=P(X_2=k)=\frac{2^{10-k}}{3^{10}}\binom{10}{k}$$ P.S. It is now a trivial matter to calculate the expectations, by differentiating the probability generating function and then evaluating at $x=1$: $$E(X_1)=f_{10}'(1)=\frac{10}{3}, E(X_2)=g_{10}'(1)=\frac{10}{3}$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/299210", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 2, "answer_id": 1 }
Combinatorial meaning of an identity involving factorials While solving (successfully!) problem 24 in Project Euler I was doodling around and discovered the following identity: $$1+2\times2!+3\times3!+\dots N\times N!=\sum_{k=1}^{k=N} k\times k!=(N+1)!-1$$ While this is very easy to prove, I couldn't find a nice and simple combinatorial way to interpret this identity*. Any ideas? *That is, I do have a combinatorial interpretation - that's how I got to this identity - but it's not as simple as I'd like.
The number of ways you can take a set of consecutive numbers starting from $1$, none of which is larger than $N$, sort it, and then paint one of its elements blue is $(N+1)!-1$.
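One standard way to make this precise, added for completeness: classify the non-identity permutations of $\{1,\dots,N+1\}$ by their largest non-fixed point $k+1$. The permutations whose largest non-fixed point is exactly $k+1$ are the permutations of $\{1,\dots,k+1\}$ that move $k+1$, and there are $(k+1)!-k!=k\cdot k!$ of them. Summing over $k=1,\dots,N$ counts every non-identity permutation exactly once:
$$\sum_{k=1}^{N}k\cdot k!=(N+1)!-1.$$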
{ "language": "en", "url": "https://math.stackexchange.com/questions/299289", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Is the diagonal set a measurable rectangle? Let $\Sigma$ denote the Borel $\sigma$-algebra of $\mathbb{R}$ and $\Delta=\{(x,y)\in\mathbb{R}^2: x=y\}$. I am trying to clarify the definitions of $\Sigma\times\Sigma$ (the collection of all measurable rectangles) and $\Sigma\otimes\Sigma$ (the $\sigma$-algebra generated by the collection of all measurable rectangles). My question is: (1) does $\Delta$ belong to $\Sigma\times\Sigma$? (2) does $\Delta$ belong to $\Sigma\otimes\Sigma$? I am thinking that (1) would be no (since the sides of a measurable rectangle can be arbitrary measurable sets, which are not required to be intervals?) and (2) would be yes (can we write $\Delta$ using countable unions and intersections of such rectangles? I cannot find a way at the moment).
Let $x\neq y$. If there is a measurable rectangle containing $(x,x)$ and $(y,y)$, there must be a set $A$ containing both $x$ and $y$ and a set $B$ doing the same such that $(x,x)$ and $(y,y)$ are in $A\times B$. But then $(x,y)\in A\times B$.
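For question (2) the answer is yes; here is one explicit construction (a routine check):
$$\Delta=\bigcap_{n=1}^{\infty}\;\bigcup_{k\in\mathbb{Z}}\left[\tfrac{k}{n},\tfrac{k+1}{n}\right)\times\left[\tfrac{k}{n},\tfrac{k+1}{n}\right),$$
a countable intersection of countable unions of measurable rectangles: if $(x,y)$ lies in the $n$-th union then $|x-y|<\tfrac1n$, so the intersection is exactly $\{x=y\}$. Hence $\Delta\in\Sigma\otimes\Sigma$ even though, by the argument above, $\Delta\notin\Sigma\times\Sigma$.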
{ "language": "en", "url": "https://math.stackexchange.com/questions/299423", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 2 }
Show that the ideal of all polynomials of degree at least 5 in $\mathbb Q[x]$ is not prime Let $I$ be the subset of $\mathbb{Q}[x]$ that consists of all the polynomials whose first five terms are 0. I've proven that $I$ is an ideal (any polynomial multiplied by a polynomial in $I$ must have degree at least 5), but I'm unsure how to prove that it is not a prime ideal. My intuition says that it's not, because we can't use $(1)$ or $(x)$ as generators. I know that $I$ is a prime ideal $\iff$ $R/I$ is an integral domain. Again, I'm a little confused on how to represent $\mathbb{Q}[x]/I$.
Hint $\ $ For any prime ideal $\rm\,P\!:\,\ x^n\in P\:\Rightarrow\:x\in P.\:$ Thus $\rm\ x^5 \in I\,$ but $\rm\ x\not\in I\ \Rightarrow\ I\,$ is not prime. Equivalently, $\rm\, R\ mod\ P\,$ has a nilpotent ($\Rightarrow$ zero-divisor): $\rm\, x^5\equiv 0,\ x\not\equiv 0,\,$ so it is not a domain. Remark $\ $ Generally prime ideals can be generated by irreducible elements (in any domain where nonunits factor into irreducibles), since one can replace any reducible generator by some factor, then iterate till all generators are irreducible. In particular, in UFDs, where irreducibles are prime, prime ideals can be generated by prime elements. This property characterizes UFDs. Well-known is Kaplansky's case: a domain is a UFD iff every prime ideal $\!\ne\! 0$ contains a prime $\!\ne\! 0.$
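Concretely, since $I=x^5\,\mathbb{Q}[x]$, the failure of primality can also be seen in one line: $x^2\cdot x^3=x^5\in I$, yet $x^2\notin I$ and $x^3\notin I$, which violates the defining property of a prime ideal directly.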
{ "language": "en", "url": "https://math.stackexchange.com/questions/299509", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 4, "answer_id": 2 }
Calculating $\lim_{x\to\frac{\pi}{2}}(\sin x)^{\tan x}$ Please help me calculate: $\lim_{x\to\frac{\pi}{2}}(\sin x)^{\tan x}$
Take $y=(\sin x)^{\tan x}$. Taking log on both sides we have $\log y=\tan x\log(\sin x)=\frac{\log(\sin x)}{\cot x}$. Now as $x\to \pi/2$, $\log(\sin x)\to 0$ and $\cot x\to 0$, so you can use L'Hospital's Rule: $$\lim_{x\to \pi/2}\frac{\log(\sin x)}{\cot x}=\lim_{x\to \pi/2}\frac{\cot x}{-\csc^2 x}=\lim_{x\to \pi/2}\left(-\cos x\sin x\right)=0$$ $$\Rightarrow \log y\to 0, \text{ as } x\to \pi/2$$ $$\Rightarrow y\to e^0=1, \text{ as } x\to \pi/2$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/299584", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Let the matrix $A=[a_{ij}]_{n×n}$ be defined by $a_{ij}=\gcd(i,j )$. How to prove that $A$ is invertible, and compute $\det(A)$? Let $A=[a_{ij}]_{n×n}$ be the matrix defined by letting $a_{ij}$ be the rational number such that $$a_{ij}=\gcd(i,j ).$$ How to prove that $A$ is invertible, and compute $\det(A)$? Thanks in advance.
There is a general trick that applies to this case. Assume a matrix $A=(a_{i,j})$ is such that there exists a function $\psi$ such that $$ a_{i,j}=\sum_{k|i,k|j}\psi(k) $$ for all $i,j$. Then $$ \det A=\psi(1)\psi(2)\cdots\psi(n). $$ To see this, consider the matrix $B=(b_{i,j})$ such that $b_{i,j}=1$ if $i|j$ and $b_{i,j}=0$ otherwise. Note that $B$ is upper-triangular with ones on the diagonal, so its determinant is $1$. Now let $C$ be the diagonal matrix whose diagonal is $(\psi(1),\ldots,\psi(n))$. A matrix product computation shows that $$ A=B^tCB\quad\mbox{hence}\quad \det A=(\det B)^2\det C=\psi(1)\cdots\psi(n). $$ Now going back to your question. Consider Euler's totient function $\phi$. It is well-known that $$ m=\sum_{k|m}\phi(k) $$ so $$ a_{i,j}=gcd(i,j)=\sum_{k|gcd(i,j)}\phi(k)=\sum_{k|i,k|j}\phi(k). $$ Applying the general result above, we find: $$ \det A=\phi(1)\phi(2)\cdots\phi(n). $$
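As a small sanity check of the formula (just expanding a $3\times3$ determinant), take $n=3$:
$$A=\begin{pmatrix}1&1&1\\1&2&1\\1&1&3\end{pmatrix},\qquad \det A=1\cdot(6-1)-1\cdot(3-1)+1\cdot(1-2)=2=\phi(1)\,\phi(2)\,\phi(3).$$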
{ "language": "en", "url": "https://math.stackexchange.com/questions/299652", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "16", "answer_count": 3, "answer_id": 1 }
eigenvalues of a matrix with zero $k^{th}$ power For a matrix $A$ with $A^k=0$ for some $k\ge1$, we need to prove that $\operatorname{trace}(A)=0$, i.e. that the sum of the eigenvalues is zero. How do you approach this problem?
I assume your matrix is an $n\times n$ matrix with, say, complex coefficients. Since $A^k=0$, the spectrum of $A$ is $\{0\}$ (or the characteristic polynomial of $A$ is $X^n$). Next we can find an invertible matrix $P$ such that $PAP^{-1}$ is upper-triangular with $0$'s on the diagonal. So $$ \mbox{trace}A=\mbox{trace}(PAP^{-1})=0 $$ where we use the fact that $\mbox{trace} (AB)=\mbox{trace}(BA)$ in general.
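The statement about the spectrum has a one-line justification worth recording here: if $Av=\lambda v$ with $v\ne 0$, then
$$0=A^k v=\lambda^k v\quad\Longrightarrow\quad \lambda^k=0\quad\Longrightarrow\quad \lambda=0,$$
so every eigenvalue of $A$ vanishes.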
{ "language": "en", "url": "https://math.stackexchange.com/questions/299716", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
Variance for a product-normal distribution I have two normally distributed random variables (zero mean), and I am interested in the distribution of their product; a normal product distribution. It's a strange distribution involving a delta function. What is the variance of this distribution - and is it finite? I know that $Var(XY)=Var(X)Var(Y)+Var(X)E(Y)^2+Var(Y)E(X)^2$ However I'm running a few simulations and noticing that the sample average of variables following this distribution is not converging to normality - making me guess that its variance is not actually finite.
Hint: We need to know something about the joint distribution. The simplest assumption is that $X$ and $Y$ are independent. Let $W=XY$. We want $E(W^2)-(E(W))^2$. To calculate $E((XY)^2)$, use independence.
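Carrying the hint out in the setting of the question (independent, zero-mean $X$ and $Y$):
$$\operatorname{Var}(XY)=E\bigl(X^2Y^2\bigr)-\bigl(E(XY)\bigr)^2=E\bigl(X^2\bigr)E\bigl(Y^2\bigr)-\bigl(E(X)E(Y)\bigr)^2=\sigma_X^2\,\sigma_Y^2,$$
which is finite. So the variance is not the culprit; the slow-looking convergence in simulations must have another explanation (for instance, the product density is sharply peaked at $0$, so convergence to normality can simply look slow at moderate sample sizes).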
{ "language": "en", "url": "https://math.stackexchange.com/questions/299829", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
convolution square root of uniform distribution I need to find a probability distribution function $f(x)$ such that the convolution $f * f$ is the uniform distribution (between $x=0$ and $x=1$). I would like to generate pairs of numbers with independent identical distributions, so that their sum is uniformly distributed between $0$ and $1$. This can't be something new, and I can search on google for convolution square root but I can't seem to find the right information on probability distributions. Can someone out there point me at the right information?
Assume that $X$ is a random variable with density $f$ and that $f\ast f=\mathbf 1_{[0,1]}$. Note that the function $t\mapsto\mathbb E(\mathrm e^{\mathrm itX})$ is smooth since $X$ is bounded (and in fact, $X$ is in $[0,\frac12]$ almost surely). Then, for every real number $t$, $$ \mathbb E(\mathrm e^{\mathrm itX})^2=\frac{\mathrm e^{\mathrm it}-1}{\mathrm it}. $$ Differentiating this with respect to $t$ yields a formula for $\mathbb E(X\mathrm e^{\mathrm itX})\mathbb E(\mathrm e^{\mathrm itX})$. Squaring this product and replacing $\mathbb E(\mathrm e^{\mathrm itX})^2$ by its value yields $$ \mathbb E(X\mathrm e^{\mathrm itX})^2=\frac{\mathrm i(1-\mathrm e^{\mathrm it}+\mathrm it\mathrm e^{\mathrm it})}{4t^3(\mathrm e^{\mathrm it}-1)}. $$ The RHS diverges when $t=2\pi$, hence such a random variable $X$ cannot exist.
{ "language": "en", "url": "https://math.stackexchange.com/questions/299915", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
A question about Elementary Row Operation: Add a Multiple of a Row to Another Row The task is that I have to prove the following statement, using Linear Algebra arguments: Given a matrix A, then: To perform an ERO (Elementary Row Operation) type 3 : (c * R_i) + R_k --> R_k (i.e. replace a row k by adding a c-multiple of row i to row k) is the same as replacing a row k by subtracting a multiple of some row from another row I just don't know how to formally prove this statement, i.e. what the argument should look like. By some inspection, I'm pretty sure that doing (c * R_i) + R_k --> R_k is the same as doing: R_k - (d * R_i) --> R_k where d can be positive or negative, but it must have the opposite sign to c. I use an example as follows: A = (2 1 3, 4 3 1) Then if I want to add row 2 to row 1, say, instead of doing (1 * 4) + 2 --> 6 and so on, I do 4 - [ (-1) * 2 ] --> 6 instead. Thus c = 1 and d = -1 in this case. That's why I conclude that the coefficient d should always have the opposite sign to the coefficient c. Would someone help me on how to construct a formal proof of the statement? I know how to go about the examples, but I understand examples are never proofs >_< Thank you very much ^_^
Note: If $c$ is some non-zero scalar, then * *adding $cR_i$ to $R_k$ and replacing the original $R_k$ by $(R_k + cR_i)$ is the same as * *subtracting $-cR_i$ from $R_k$ and replacing the old $R_k$ by the result $R_k - (-cR_i)$, since $$R_k + cR_i = R_k - (-cR_i).$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/299985", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Using sufficiency to prove and disprove completeness of a distribution Let $X_1, \dots ,X_n$ be a random sample of size $n$ from the continuous distribution with pdf $f_X(x\mid\theta) = \dfrac{2\theta^2}{x^3} I_{(\theta,\infty)}(x)$ where $\theta \in \Theta = (0, \infty)$. (1) Show that $X_{(1)}$ is sufficient for $\theta$. (2) Show directly that the pdf for $X_{(1)}$ is $f_{X_{(1)}}(x\mid\theta) = \dfrac{2n\theta^{2n}}{x^{2n+1}} I_{(\theta,\infty)}(x)$. (3) When $\Theta = (0, \infty)$, the probability distribution of $X_{(1)}$ is complete. In this case, find the best unbiased estimator for $\theta$. (4) Suppose that $\Theta = (0, 1]$. Show that the probability distribution of $X_{(1)}$ is not complete in this setting by considering the function $g(X_{(1)}) = \Bigl[ X_{(1)} - \frac{2n}{2n-1} \Bigr] I_{(1,\infty)}(X_{(1)})$. For (1), this was pretty easy to show using the Factorization Theorem. For (2), I think I am integrating my pdf wrong because I can't seem to arrive at the answer. For (3), I am trying to use a theorem that states "If $T$ is a complete and sufficient statistic for $\theta$ and $\phi(T)$ is any estimator based only on $T$, then $\phi(T)$ is the unique best unbiased estimator of its expected value", but I can't seem to simplify the expected value to get $\theta$. For (4), I am getting stuck trying to show $P(g(X_{(1)})=0) = 1$ using the given function. Any assistance is greatly appreciated.
For Part (1), great! For Part (2), I'm unsure about that one. For Part (3), note that the original distribution $f_{X}(x\mid\theta) = \frac{2\theta^{2}}{x^{3}}\,I_{(\theta,\infty)}(x)$ resembles a famous distribution, but this famous distribution has two parameters, $\alpha$ and $\beta$, where here the value of $\beta = 2$ and your $\alpha = \theta$. Side note: may I ask what you got for Part (3) when you integrated (if you did integrate)? It's the Pareto distribution, and once you know the right distribution, finding the expected value for the BUE is easier than having to integrate. For Part (4), all you have to do is show that the expected value of that function is $0$ for every $\theta\in(0,1]$ even though the function is not $0$ with probability one; therefore the family is not complete.
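For concreteness, here are the computations the hints point to, added as a sketch (everything follows from the density in Part (2)). For Part (2), the order-statistic identity $f_{X_{(1)}}(x\mid\theta)=n\bigl(1-F_X(x\mid\theta)\bigr)^{n-1}f_X(x\mid\theta)$ together with $1-F_X(x\mid\theta)=\theta^2/x^2$ on $(\theta,\infty)$ gives the stated density. For Part (3),
$$E_\theta\bigl[X_{(1)}\bigr]=\int_\theta^\infty x\,\frac{2n\theta^{2n}}{x^{2n+1}}\,dx=\frac{2n}{2n-1}\,\theta,$$
so $\frac{2n-1}{2n}X_{(1)}$ is unbiased, and by completeness and sufficiency it is the best unbiased estimator of $\theta$. For Part (4), when $\theta\le 1$,
$$E_\theta\bigl[g(X_{(1)})\bigr]=\int_1^\infty\left(x-\frac{2n}{2n-1}\right)\frac{2n\theta^{2n}}{x^{2n+1}}\,dx=2n\theta^{2n}\left(\frac{1}{2n-1}-\frac{2n}{2n-1}\cdot\frac{1}{2n}\right)=0$$
for every $\theta\in(0,1]$, even though $P_\theta\bigl(g(X_{(1)})\ne 0\bigr)>0$; this is exactly the failure of completeness.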
{ "language": "en", "url": "https://math.stackexchange.com/questions/300052", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Helix's arc length I'm reading this. The relevant definitions are that of a parametrized curve, which is at the beginning of page 1, and the definition of arclength of a curve, which is in the first half of page 6. Also the author mentions the helix at the bottom of page 3. In exercise $1.1.2.$ (page 8) I'm asked to find the arc length of the helix $\alpha (t)=(a\cos (t), a\sin (t), bt)$, but the author doesn't say what the domain of $\alpha$ is. How am I supposed to go about this? Usually when the domain isn't specified, isn't the reader supposed to assume the domain is a maximal set? In that case the domain would be $\Bbb R$ and the arc length wouldn't be defined, as the integral wouldn't be finite.
There are a number of ways of approaching this problem. And yes, you are correct, without the domain specified there is a dilemma here. You can give an answer for one complete cycle of $2\pi$. Depending on the context you may find it more convenient to measure arc length as a function of $z$-axis distance along the helix... a sort of ratio: units of length along the arc per units of length of elevation. Thirdly, you can also write the arc length not as a numeric answer but as a function of $a$ and $b$ marking the endpoints of any arbitrary domain. Personally, I recommend doing the third and last. Expressing the answer as a function is the best you can do without making assumptions about the domain in question, and it leaves a solution that can be applied and reused whenever endpoints are given.
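For reference, here is a sketch of the computation that works for any choice of endpoints:
$$\alpha'(t)=(-a\sin t,\;a\cos t,\;b),\qquad \lVert\alpha'(t)\rVert=\sqrt{a^2+b^2},$$
so the arc length from $t_0$ to $t_1$ is $\int_{t_0}^{t_1}\lVert\alpha'(t)\rVert\,dt=\sqrt{a^2+b^2}\,(t_1-t_0)$; over one full turn this gives $2\pi\sqrt{a^2+b^2}$.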
{ "language": "en", "url": "https://math.stackexchange.com/questions/300096", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
example of morphism of affine schemes Let $X={\rm Spec}~k[x,y,t]/\langle yt-x^2\rangle$ and let $Y={\rm Spec}~ k[t]$. Let $f:X \rightarrow Y$ be the morphism determined by $k[t] \rightarrow k[x,y,t]/\langle yt-x^2\rangle$. Is $f$ surjective? If $f$ is surjective, why?
I'm assuming your map of rings comes from the natural inclusion $i:k[t]\rightarrow k[x,y,t]\rightarrow k[x,y,t]/\langle yt-x^2\rangle=A$. Since $yt-x^2$ is irreducible, $A$ is a domain, so the zero ideal of $A$ is prime and contracts to the zero ideal of $k[t]$. A nonzero prime of $k[t]$ is of the form $(F(t))$ where $F(t)$ is an irreducible polynomial over $k$. Show that $I=F(t)A$ is not the whole ring $A$ (this amounts to showing that $yt-x^2$ and $F(t)$ don't generate $k[x,y,t]$). In fact it is even prime, but we won't need that; we just need the fact that $I$ is contained in a prime ideal $P\in \operatorname{Spec}(A)$. Then $i^{-1}(P)$ is a prime of $k[t]$ that contains $F(t)$ and is hence equal to $(F(t))$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/300154", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Does the series $\sum\limits_{n=1}^\infty \frac{1}{n\sqrt[n]{n}}$ converge? Does the following series converge? $$\sum_{n=1}^\infty \frac{1}{n\sqrt[n]{n}}$$ As $$\frac{1}{n\sqrt[n]{n}}=\frac{1}{n^{1+\frac{1}{n}}},$$ I was thinking that you may consider this as a p-series with $p>1$. But I'm not sure if this is correct, as with p-series, $p$ is a fixed number, right? On the other hand, $1+\frac{1}{n}>1$ for all $n$. Any hints?
Limit comparison test: $$\frac{\frac{1}{n\sqrt[n]n}}{\frac{1}{n}}=\frac{1}{\sqrt[n]n}\xrightarrow[n\to\infty]{}1$$ So that both $$\sum_{n=1}^\infty\frac{1}{n\sqrt[n] n}\,\,\,\text{and}\,\,\,\sum_{n=1}^\infty\frac{1}{n}$$ converge or both diverge...
{ "language": "en", "url": "https://math.stackexchange.com/questions/300243", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10", "answer_count": 4, "answer_id": 0 }
Let $\lim a_n=0$ and $s_N=\sum_{n=1}^{N}a_n$. Show that $\sum a_n$ converges when $\lim_{N\to\infty}s_Ns_{N+1}=p$ for a given $p>0$. Let $\lim a_n=0$ and $s_N=\sum_{n=1}^{N}a_n$. Show that $\sum a_n$ converges when $\lim_{N\to\infty}s_Ns_{N+1}=p$ for a given $p>0$. I've no idea how to even start. Should I try to prove that $s_N$ is bounded?
Put $s_n:=\epsilon_n|s_n|$ with $\epsilon_n\in\{-1,1\}$. Then from $$\epsilon_n\epsilon_{n+1}|s_n|\>|s_{n+1}|=s_n\>s_{n+1}=:p_n\to p>0\qquad(n\to\infty)$$ it follows that $\epsilon_n=\epsilon_{n+1}$ for $n>n_0$. Assume $\epsilon_n=1$ for all $n> n_0$, the case $\epsilon_n=-1$ being similar. The equation $$s_n(s_n+a_{n+1})=s_ns_{n+1}=p_n$$ implies that for all $n$ the quantities $s_n$, $a_{n+1}$, and $p_n$ are related by $$s_n={1\over2}\left(-a_{n+1}\pm\sqrt{a_{n+1}^2 +4p_n}\right)\ .$$ Since $s_n\geq0$ $\ (n>n_0)$, $\ a_{n+1}\to 0$, $\ p_n\to p>0$ it follows that necessarily $$s_n={1\over2}\left(-a_{n+1}+\sqrt{a_{n+1}^2 +4p_n}\right)\qquad(n>n_1)\ ,$$ and this implies $\lim_{n\to\infty} s_n=\sqrt{p}$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/300309", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Prove the determinant of this matrix We have an $n\times n$ square matrix $\left(a_{i,j}\right)_{1\leq i\leq n, \ 1\leq j\leq n}$ such that all elements on main diagonal are zero, whereas the other elements are defined as follows: $$a_{i,j}=\begin{cases} 1,&\text{if } i+j \text{ belongs to the Fibonacci numbers,}\\ 0,&\text{if } i+j \text{ does not belong to the Fibonacci numbers}.\\ \end{cases}$$ We know that when $n$ is odd, the determinant of this matrix is zero. Now prove that when $n$ is even, the determinant of this matrix is $0$ or $1$ or $-1$. (Use induction or other methods.) Also posted on MO.
This is just a partial answer, too long to fit in a comment, written in order to start collecting ideas. We have: $$\det A=\sum_{\sigma\in S_n}\operatorname{sign}(\sigma)\prod_{i=1}^n a_{i,\sigma(i)},$$ hence the contribution of every permutation in $S_n$ belongs to $\{-1,0,1\}$. In particular, the contribution of a permutation differs from zero iff $i+\sigma(i)$ belongs to the Fibonacci sequence for every $i\in[1,n]$. If we consider the cycle decomposition of such a $\sigma$: $$\sigma = (n_1,\ldots, n_j)\cdot\ldots\cdot(m_1,\ldots, m_k)$$ this condition gives that $n_1+n_2,\ldots,n_j+n_1,\ldots,m_1+m_2,\ldots,m_k+m_1$ belong to the Fibonacci sequence, so, if the contribution of $\sigma$ differs from zero, the contribution of $\sigma^{-1}$ is the same. However, not many permutations fulfill the condition. For instance, the only possible fixed points of the contributing permutations are half of the even Fibonacci numbers, hence numbers of the form $\frac{1}{2}F_{3k}$: $1,4,17,72,\ldots$. Moreover, the elements of $[1,n]$ can be arranged in a graph $\Gamma$ in which the neighbourhood of a node with label $m$ is made of the integers in $[1,n]$ whose sum with $m$ is a Fibonacci number, i.e. all the possible images $\sigma(m)$ for a contributing permutation. For instance, for $n=5$ we get (apart from the self-loops in $1$ and $4$) an acyclic graph. The only contributing permutation is $\sigma=(1 2)(3 5)(4)$, hence $|\det A|=1$. When $n=6$ or $n=7$ the neighbourhood of $5$ is still made of only one element. When $n=7$ the contributing permutations are of the form $\sigma=(4)(3 5)\tau$, where $\tau\in\{(1,2,6,7),(1,7,6,2),(1,2)(6,7),(1 7)(6 2)\}$, hence $\det A=0$. In general, the neighbourhood of the greatest Fibonacci number $F_k\leq n$ is made of $F_{k-1}$ only, hence $F_k$ is always paired with $F_{k-1}$ in a transposition of every contributing permutation. Now I believe that the conjecture $\det A\in\{-1,0,1\}$ is heavily correlated with the structure of the cycles in $\Gamma_\mathbb{N}$, a graph over $\mathbb{N}$ where two integers are connected when their sum is a Fibonacci number. There are many trivial cycles in $\Gamma_\mathbb{N}$: $$(k,F_m-k),\quad (k,F_m-k,F_{m+1}+k,F_{m+2}-k),\quad (k,F_m-k,F_{m+1}+k,F_{m+2}-k,F_{m+3}+k,F_{m+4}-k),\ldots$$ and my claim is that all the cycles have even length, and all the cycles are of the given type. Given that $F$ is the set of all the Fibonacci numbers, it is straightforward to prove that the only elements of $F-F$ represented twice are the Fibonacci numbers, hence there are no cycles of length $3$ in $\Gamma_\mathbb{N}$.
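A quick experiment (an editorial addition, not part of the original answer) supports the claim in the question. The snippet below uses the question's definition, with zero diagonal and Fibonacci numbers $1,2,3,5,8,\ldots$:

```python
import numpy as np

# Editorial experiment using the definition in the question: zero diagonal,
# a_ij = 1 iff i + j is a Fibonacci number (1, 2, 3, 5, 8, ...).
def fib_set(limit):
    fibs, a, b = set(), 1, 2
    while a <= limit:
        fibs.add(a)
        a, b = b, a + b
    return fibs

for n in range(2, 13, 2):            # small even orders
    F = fib_set(2 * n)
    A = np.array([[1 if (i != j and i + j in F) else 0
                   for j in range(1, n + 1)]
                  for i in range(1, n + 1)])
    print(n, round(np.linalg.det(A)))  # lands in {-1, 0, 1}
```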
{ "language": "en", "url": "https://math.stackexchange.com/questions/300379", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "29", "answer_count": 1, "answer_id": 0 }
Proving that for any odd integer:$\left\lfloor \frac{n^2}{4} \right\rfloor = \frac{(n-1)(n+1)}{4}$ I'm trying to figure out how to prove that for any odd integer $n$: $$\left\lfloor \frac{n^2}{4} \right\rfloor = \frac{(n-1)(n+1)}{4}$$ Any help is appreciated to construct this proof! Thanks guys.
Take $n=2k+1$. Then $\lfloor n^2/4\rfloor=\lfloor k^2+k+1/4\rfloor=k^2+k$, while $\frac{(n-1)(n+1)}{4}=\frac{n^2-1}{4}=k^2+k$. Hence $\frac{(n-1)(n+1)}{4}=\lfloor n^2/4\rfloor$.
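For reassurance (an editorial addition), a one-loop Python check of the identity over the first odd integers:

```python
# Editorial one-loop check of the identity over the first odd integers.
# For odd n, n^2 - 1 = (n-1)(n+1) is divisible by 4, so integer division is exact.
for n in range(1, 200, 2):
    assert n * n // 4 == (n - 1) * (n + 1) // 4
print("identity holds for all tested odd n")
```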
{ "language": "en", "url": "https://math.stackexchange.com/questions/300493", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
How to find the cosine between two vectors? I have a task in linear algebra. Condition: we have a triangle with vertices A(-4,2), B(-1,6), C(8,-3). How to find the cosine of the angle between the vectors BA and BC? Please help. What is the solution for this task?
The dot product gets you just what you want. The dot product of two vectors is $\vec u \cdot \vec v = |\vec u||\vec v|\cos \theta$ where $\theta$ is the angle between the vectors. So $\cos \theta =\frac{\vec u \cdot \vec v}{|\vec u||\vec v|}$. The dot product is calculated by summing the products of the components. Here $\vec {BA}=A-B=(-3,-4)$ and $\vec {BC}=C-B=(9,-9)$, so $\vec {BA} \cdot \vec {BC} = (-3)(9)+(-4)(-9)=-27+36=9$. With $|\vec{BA}|=5$ and $|\vec{BC}|=9\sqrt2$, this gives $\cos\theta=\frac{9}{5\cdot 9\sqrt2}=\frac{1}{5\sqrt2}=\frac{\sqrt2}{10}$.
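A short numpy sketch (an editorial addition) confirming the computation above:

```python
import numpy as np

# Editorial check of the computation above.
A = np.array([-4.0, 2.0])
B = np.array([-1.0, 6.0])
C = np.array([8.0, -3.0])

BA = A - B                            # (-3, -4)
BC = C - B                            # ( 9, -9)
cos_theta = BA @ BC / (np.linalg.norm(BA) * np.linalg.norm(BC))
print(cos_theta, np.sqrt(2) / 10)     # both ~0.14142
```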
{ "language": "en", "url": "https://math.stackexchange.com/questions/300551", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Construction of Hadamard Matrices of Order $n!$ I'm trying to get a handle on Hadamard matrices of order $n!$, with $n>3$. Paley's construction says that there is a Hadamard matrix of order $q+1$, with $q$ being a prime power. Since $$ n!-1 \bmod 4 = 3 $$ construction 1 has to be chosen: If $q$ is congruent to $3 \pmod 4$ [and $Q$ is the corresponding Jacobsthal matrix] then $$ H=I+\begin{bmatrix} 0 & j^T\\ -j & Q\end{bmatrix} $$ is a Hadamard matrix of size $q + 1$. Here $j$ is the all-1 column vector of length $q$ and $I$ is the $(q+1)×(q+1)$ identity matrix. The matrix $H$ is a skew Hadamard matrix, which means it satisfies $H+H^T = 2I$. The problem is that only some of the numbers $n!-1$ are prime (see A002982). I checked the values of $n!-1$ given by Wolfram|Alpha for being prime powers, without success, so Paley's construction won't work for all $n$. Is there a general way to get the matrices, or is it case by case different? I haven't yet looked into Williamson's construction nor Turyn-type constructions. Would it be worth a closer look (sure it would, but) concerning my problem? Where can I find their constructions? PS for the interested reader: I've found a nice compilation of Hadamard matrices here: http://neilsloane.com/hadamard/
I don't think a general construction for Hadamard matrices of order $n!$ is known. The knowledge about general construction methods for Hadamard matrices is quite sparse; the basic ones (see also the Wikipedia article) are: 1) If $n$ is a multiple of $4$ such that $n-1$ is a prime power or $n/2 - 1$ is a prime power $\equiv 1\pmod{4}$, then there exists a Hadamard matrix of order $n$ (Paley). 2) If $n$ is a multiple of $4$ such that there exists a Hadamard matrix of order $n/2$, then there exists a Hadamard matrix of order $n$ (Sylvester). The Hadamard conjecture states that for all multiples $n$ of $4$ there is a Hadamard matrix of order $n$. The above constructions do not cover all these $n$; the smallest case not covered is $n = 92$. There are more specialized constructions and a few computer constructions, such that the smallest open case is $n = 668$ nowadays. EDIT: I have just checked that for $n\in\{13,26,44,52,63,67,70,77,85\}$ a Hadamard matrix of order $n!$ cannot be constructed only by a combination of the Paley/Sylvester constructions above. So in these cases, one would have to check more specialized constructions like Williamson's.
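To make construction 2) concrete, here is a minimal sketch (an editorial addition) of Sylvester's doubling, starting from the trivial order-1 matrix. It only reaches powers of two, but it shows the defining property $HH^T=nI$ being verified:

```python
import numpy as np

# Editorial sketch of Sylvester's doubling: from a Hadamard matrix H of order m,
# the block matrix [[H, H], [H, -H]] is a Hadamard matrix of order 2m.
def sylvester(k):
    H = np.array([[1]])
    for _ in range(k):
        H = np.block([[H, H], [H, -H]])
    return H

H = sylvester(4)                      # order 16
n = H.shape[0]
assert (H @ H.T == n * np.eye(n)).all()   # defining property H H^T = n I
print("verified Hadamard matrix of order", n)
```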
{ "language": "en", "url": "https://math.stackexchange.com/questions/300635", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
If a function is uniformly continuous in $(a,b)$ can I say that its image is bounded? If a function is uniformly continuous in $(a,b)$ can I say that its image is bounded? ($a$ and $b$ being finite numbers). I tried proving and disproving it. Couldn't find an example for a non-bounded image. Is there any basic proof or counter example for any of the cases? Thanks a million!
Hint: Prove first that a uniformly continuous function on an open interval can be extended to a continuous function on the closure of the interval.
{ "language": "en", "url": "https://math.stackexchange.com/questions/300745", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 4, "answer_id": 1 }
Show that open segment $(a,b)$, closed segment $[a,b]$ have the same cardinality as $\mathbb{R}$ a) Show that any open segment $(a,b)$ with $a<b$ has the same cardinality as $\mathbb{R}$. b) Show that any closed segment $[a,b]$ with $a<b$ has the same cardinality as $\mathbb{R}$. Thoughts: Since $a<b$, $a$ and $b$ are two distinct real numbers, so we need to exhibit bijections between $(a,b)$ and $\mathbb{R}$, and between $[a,b]$ and $\mathbb{R}$. But we know $\mathbb{R}$ is uncountable, so do we show the same for $(a,b)$ and $[a,b]$? And how can I make use of the Cantor-Schroder-Bernstein Theorem? The one with $|A|\le|B|$ and $|B|\le|A|$, then $|A|=|B|$? Thanks!!
Consider the function $f:(0,1)\to \mathbb{R}$ defined as $$f(x)=\frac{1}{x}-\frac{1}{1-x}$$ Prove that $f$ is a bijective function (it is continuous, strictly decreasing, and tends to $+\infty$ as $x\to 0^+$ and to $-\infty$ as $x\to 1^-$). Now, by previous posts, $(0,1)$ and $[0,1]$ have the same cardinality. Consider the function $g:[0,1]\to[a,b]$, defined as $$g(x)=({b-a})x+a$$ Prove that $g$ is a bijective function to conclude that $[0,1]$ and $[a,b]$ have the same cardinality as $\mathbb{R}$.
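A numeric illustration (an editorial addition) showing that $f(x)=1/x-1/(1-x)$ is strictly decreasing and unbounded in both directions on $(0,1)$:

```python
# Editorial check: f(x) = 1/x - 1/(1-x) is strictly decreasing on (0,1)
# and takes arbitrarily large positive and negative values.
def f(x):
    return 1 / x - 1 / (1 - x)

xs = [0.001, 0.1, 0.5, 0.9, 0.999]
ys = [f(x) for x in xs]
print(ys)                                      # ~[999, 8.9, 0, -8.9, -999]
assert all(a > b for a, b in zip(ys, ys[1:]))  # decreasing along the sample
```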
{ "language": "en", "url": "https://math.stackexchange.com/questions/300815", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 5, "answer_id": 2 }
Intuition for scale of the largest eigenvalue of symmetric Gaussian matrix Let $X$ be $n \times n$ matrix whose matrix elements are independent identically distributed normal variables with zero mean and variance of $\frac{1}{2}$. Then $$ A = \frac{1}{2} \left(X + X^\top\right) $$ is a random matrix from GOE ensemble with weight $\exp(-\operatorname{Tr}(A^2))$. Let $\lambda_\max(n)$ denote its largest eigenvalue. The soft edge limit asserts convergence of $\left(\lambda_\max(n)-\sqrt{n}\right) n^{1/6}$ in distribution as $n$ increases. Q: I am seeking to get an intuition (or better yet, a simple argument) for why the largest eigenvalue scales like $\sqrt{n}$.
The scaling follows from the Wigner semicircle law: with this normalization, the empirical distribution of the eigenvalues of $A$ converges to a semicircle supported on $[-\sqrt n,\sqrt n]$, so the largest eigenvalue sits at the right edge, near $\sqrt n$. A proof of the Wigner semicircle law is outlined in section 2.5 of the review "Orthogonal polynomials ensembles in probability theory" by W. König, Probability Surveys, vol. 2 (2005), pp. 385-447.
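A simulation (an editorial addition) with the question's normalization bears this out; the ratio $\lambda_\max/\sqrt n$ approaches $1$:

```python
import numpy as np

# Editorial simulation with the normalization of the question: X has iid
# N(0, 1/2) entries, A = (X + X^T)/2, and lambda_max / sqrt(n) should be near 1.
rng = np.random.default_rng(0)
for n in [100, 400, 1600]:
    X = rng.normal(0.0, np.sqrt(0.5), size=(n, n))
    A = (X + X.T) / 2
    lam_max = np.linalg.eigvalsh(A)[-1]
    print(n, lam_max / np.sqrt(n))
```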
{ "language": "en", "url": "https://math.stackexchange.com/questions/300894", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 2, "answer_id": 1 }
Solving Recurrence $T(n) = T(n − 3) + 1/2$; I have to solve the following recurrence. $$\begin{gather} T(n) = T(n − 3) + 1/2\\ T(0) = T(1) = T(2) = 1. \end{gather}$$ I tried solving it using forward iteration. $$\begin{align} T(3) &= 1 + 1/2\\ T(4) &= 1 + 1/2\\ T(5) &= 1 + 1/2\\ T(6) &= 1 + 1/2 + 1/2 = 2\\ T(7) &= 1 + 1/2 + 1/2 = 2\\ T(8) &= 1 + 1/2 + 1/2 = 2\\ T(9) &= 2 + 1/2 \end{align}$$ I couldn't find any pattern here. Can anyone help?
The generating function is $$g(x)=\sum_{n\ge 0}T(n)x^n = \frac{2-x^3}{2(1+x+x^2)(1-x)^2},$$ which has the partial fraction representation $$g = \frac{2}{3(1-x)} + \frac{1}{6(1-x)^2}+\frac{x+1}{6(1+x+x^2)}.$$ The first term contributes $$\frac{2}{3}(1+x+x^2+x^3+\ldots),$$ i.e. a constant $2/3$ to $T(n)$; the second term contributes $$\frac{1}{6}(1+2x+3x^2+4x^3+\ldots),$$ i.e. $(n+1)/6$; and the third term contributes $$\frac{1}{6}(1-x^2+x^3-x^5+x^6-\ldots),$$ i.e. $1/6$, $0$, or $-1/6$ depending on $n\bmod 3$ being $0$, $1$, or $2$. Altogether, $$T(n) = \frac{2}{3}+\frac{n+1}{6}+\left\{\begin{array}{ll} 1/6,& n \bmod 3=0\\ 0,& n \bmod 3=1 \\ -1/6,&n \bmod 3 =2\end{array}\right.$$
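As a cross-check (an editorial addition), the closed form can be compared against direct iteration of the recurrence using exact rational arithmetic:

```python
from fractions import Fraction

# Editorial cross-check: closed form vs. direct iteration, in exact arithmetic.
def T_rec(n):
    vals = [Fraction(1)] * 3          # T(0) = T(1) = T(2) = 1
    for k in range(3, n + 1):
        vals.append(vals[k - 3] + Fraction(1, 2))
    return vals[n]

def T_closed(n):
    corr = (Fraction(1, 6), Fraction(0), Fraction(-1, 6))[n % 3]
    return Fraction(2, 3) + Fraction(n + 1, 6) + corr

assert all(T_rec(n) == T_closed(n) for n in range(60))
print("closed form matches the recurrence for n = 0..59")
```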
{ "language": "en", "url": "https://math.stackexchange.com/questions/300934", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 3 }
Another trigonometric equation Show that : $$31+8\sqrt{15}=16(1+\cos 6^{\circ})(1+\cos 42^{\circ})(1+\cos 66^{\circ})(1-\cos 78^{\circ})$$
I don't think this is how the problem came into being, but I think this is a legitimate way. $$(1+\cos 6^{\circ})(1+\cos 42^{\circ})(1+\cos 66^{\circ})(1-\cos 78^{\circ})$$ $$=(1+\cos 6^{\circ})(1+\cos 66^{\circ})(1-\cos 78^{\circ})(1+\cos 42^{\circ})$$ $$=(1+\cos 6^{\circ}+\cos 66^{\circ}+\cos 6^{\circ}\cos 66^{\circ})(1+\cos 42^{\circ}-\cos 78^{\circ}-\cos 42^{\circ}\cos 78^{\circ})$$ $$=\{1+2\cos 30^{\circ}\cos 36^{\circ}+\frac12(\cos60^\circ+\cos72^\circ)\} \{1+2\sin 18^{\circ}\sin60^{\circ}-\frac12(\cos36^\circ+\cos120^\circ)\}$$ (applying $2\cos A\cos B=\cos(A-B)+\cos(A+B)$, $\cos2C+\cos2D=2\cos(C-D)\cos(C+D)$ and $\cos2C-\cos2D=-2\sin(C-D)\sin(C+D)$). Now, $\sin60^{\circ}=\cos 30^{\circ}=\frac{\sqrt3}2$, $\cos120^\circ=\cos(180-60)^\circ=-\cos60^\circ=-\frac12$, and by the standard pentagon values $\cos72^\circ=\sin 18^\circ=\frac{\sqrt5-1}4$ and $\cos36^\circ=\frac{\sqrt5+1}4$. Plugging these in, the first factor becomes $\frac{9+\sqrt5+2\sqrt{15}+2\sqrt3}{8}$ and the second $\frac{9-\sqrt5+2\sqrt{15}-2\sqrt3}{8}$, so their product is $$\frac{(9+2\sqrt{15})^2-(\sqrt5+2\sqrt3)^2}{64}=\frac{124+32\sqrt{15}}{64}=\frac{31+8\sqrt{15}}{16},$$ and multiplying by $16$ gives the claimed identity.
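A numeric verification (an editorial addition) of the identity, to double-check the algebra:

```python
import math

# Editorial numeric double-check of the identity.
deg = math.pi / 180
rhs = 16 * ((1 + math.cos(6 * deg)) * (1 + math.cos(42 * deg))
            * (1 + math.cos(66 * deg)) * (1 - math.cos(78 * deg)))
lhs = 31 + 8 * math.sqrt(15)
print(lhs, rhs)                       # both ~61.9838...
assert abs(lhs - rhs) < 1e-9
```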
{ "language": "en", "url": "https://math.stackexchange.com/questions/301012", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
How can a set be bounded and countably infinite at the same time? There is a theorem in my textbook that states, Let $E$ be a bounded measurable set of real numbers. Suppose there is a bounded countably infinite set of real numbers $\Lambda$ for which the collection of translates of $E$, $\{\lambda + E\}_{\lambda \in \Lambda}$, is disjoint. Then $m(E) = 0$. I'm a little confused about this theorem, because it's saying that a set is bounded and countably infinite at the same time. But if a set is bounded, isn't it supposed to be finite? Thanks in advance
Hint: Consider $\Bbb Q$ intersected with any bounded set, finite or infinite. Since $\Bbb Q$ is countable, the new set is at most countable, and clearly can be made infinite; for example, $[0,1]\cap\Bbb Q$ is bounded and countable.
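To make the hint concrete (an editorial addition), here is an explicit enumeration of ever more distinct rationals inside the bounded set $[0,1]$:

```python
from fractions import Fraction

# Editorial illustration: enumerate more and more distinct rationals, all of
# which lie in the bounded set [0, 1].
seen = set()
for q in range(1, 8):                 # denominators
    for p in range(q + 1):            # numerators 0..q
        seen.add(Fraction(p, q))
print(len(seen), "distinct rationals in [0,1] so far")
# Raising the bound on q yields ever more, so the rationals in [0,1] form a
# countably infinite yet bounded set.
```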
{ "language": "en", "url": "https://math.stackexchange.com/questions/301080", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Continuous function with zero integral Let $f$ be a continuous function on $[a,b]$ ($a<b$), such that $\int_{a}^{b}{f(t)dt}=0$. Show that $\exists c\in[a,b], f(c)=0$.
Let $m=\min\{f(x)\mid x\in[a,b]\}$ and $M=\max\{f(x)\mid x\in[a,b]\}$ (the minimum and maximum exist because $f$ is continuous on a closed interval). If $m,M$ have the same sign, the integral cannot be zero (for example, if both are positive, then the integral is positive). If $m,M$ have different signs, apply the intermediate value theorem.
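A worked instance (an editorial addition, using an assumed example function): $f(x)=x-\tfrac12$ on $[0,1]$ has zero integral, and bisection locates the zero guaranteed by the intermediate value theorem:

```python
# Editorial worked instance with an assumed example: f(x) = x - 1/2 is
# continuous on [0, 1] and integrates to 0; bisection finds the promised zero.
def f(x):
    return x - 0.5

lo, hi = 0.0, 1.0                     # f(lo) < 0 < f(hi)
for _ in range(50):
    mid = (lo + hi) / 2
    if f(mid) < 0:
        lo = mid
    else:
        hi = mid
print("zero near", (lo + hi) / 2)     # ~0.5
```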
{ "language": "en", "url": "https://math.stackexchange.com/questions/301143", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 0 }
Proof of the simple approximation lemma a) For the proof of the simple approximation lemma, our textbook says, Let $(c,d)$ be an open, bounded interval that contains the image of $E$, $f(E)$, and $c=y_0 < y_1 < ... < y_n = d$ be a partition of the closed bounded interval $[c,d]$ such that $y_k - y_{k-1} < \epsilon$ for $1 \leq k \leq n$. Define $I_k = [y_{k-1}, y_k)$ and $E_k = f^{-1}(I_k)$ for $ 1 \leq k \leq n$. Since each $I_k$ is an interval and the function $f$ is measurable, each set $E_k$ is measurable. I was a bit confused about this last sentence. I'm not sure what theorem they are using to say that $E_k$ is measurable because $I_k$ is an interval and $f$ is measurable...
A measurable function $f$ has the property that the preimage of every interval is measurable. Indeed, measurability means that $f^{-1}((c,\infty))$ (equivalently $f^{-1}([c,\infty))$) is measurable for every $c\in\mathbb{R}$, and $f^{-1}([y_{k-1},y_k))=f^{-1}([y_{k-1},\infty))\setminus f^{-1}([y_k,\infty))$ is a difference of measurable sets. So each $E_k$ is measurable essentially by definition.
{ "language": "en", "url": "https://math.stackexchange.com/questions/301278", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }