How can complex polynomials be represented? I know that real polynomials (polynomials with real coefficients) are sometimes graphed on a 3D complex space ($x=a, y=b, z=f(a+bi)$), but how are polynomials like $(1+2i)x^2+(3+4i)x+7$ represented?
Plotting $P(z)$ for $P:\mathbb{C}\rightarrow\mathbb{C}$ in a cartesian coordinate system gives a 4D image, because the input can be represented by the coordinates $(a,b)$ where $z=a+bi$, and so can the output. So the points (coordinates) of the graph are of the form $$(a,b,\operatorname{Re}(P(a+bi)),\operatorname{Im}(P(a+bi)))$$ and they belong to the set $\mathbb{R}^4$ (the set of 4-tuples of reals). No human can view a 4D plot on paper, but if you make a 3D plot with an animation, the timeline serves as your 4th dimension, and you can watch how the 3D plot changes as the time (the 4th variable) changes. I hope this is correct; at least, this is how I picture 4D objects in my mind.
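As a concrete illustration of the 4-tuple representation above, here is a minimal Python sketch (the function names are my own, not from the answer) that evaluates the question's polynomial $(1+2i)x^2+(3+4i)x+7$ at a complex point and returns one point of its 4D graph:

```python
def P(z):
    # The polynomial from the question, evaluated at a complex argument.
    return (1 + 2j) * z ** 2 + (3 + 4j) * z + 7

def graph_point(a, b):
    # One point of the 4D graph: (a, b, Re P(a+bi), Im P(a+bi)).
    w = P(complex(a, b))
    return (a, b, w.real, w.imag)

print(graph_point(1, 0))  # (1, 0, 11.0, 6.0), since P(1) = 11 + 6i
```

An animation would then sweep one of the four coordinates (or the time parameter of a curve of inputs) while plotting the other three.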
{ "language": "en", "url": "https://math.stackexchange.com/questions/338409", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 2 }
If $2x = y^{\frac{1}{m}} + y^{-\frac{1}{m}}$ $(x≥1)$, then prove that $(x^2-1)y''+xy' = m^{2}y$ How do I prove the following? If $2x = y^{\frac{1}{m}} + y^{-\frac{1}{m}}$, $(x≥1)$, then prove that $(x^2-1)y''+xy' = m^{2}y$
Let $y=e^{mu}$. Then $2x=e^u+e^{-u}$, so $x=\cosh u$. Note that $$y'=mu'e^{mu}=mu'y.$$ But $x=\cosh u$, so $1=u'\sinh u$, and therefore $$u'=\frac{1}{\sinh u}=\frac{1}{\sqrt{\cosh^2 u-1}}=\frac{1}{\sqrt{x^2-1}}.$$ It follows that $$y'=mu'y=\frac{my}{\sqrt{x^2-1}},\quad\text{and therefore}\quad \sqrt{x^2-1}\,y'=my.$$ Differentiate again. We get $$y''\sqrt{x^2-1}+\frac{xy'}{\sqrt{x^2-1}}=my'.$$ Multiply by $\sqrt{x^2-1}$. We get $$(x^2-1)y'' +xy' =my'\sqrt{x^2-1}.$$ But the right-hand side is just $m^2y$.
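A quick numerical sanity check of this identity (my own addition), using the explicit branch $y=(x+\sqrt{x^2-1})^m$ that the substitution produces, since $e^u=x+\sqrt{x^2-1}$ when $x=\cosh u$:

```python
import math

def y(x, m):
    # Explicit solution branch: y^(1/m) = e^u with e^u = x + sqrt(x^2 - 1).
    return (x + math.sqrt(x * x - 1)) ** m

def relative_residual(x, m, h=1e-5):
    # Central differences for y' and y''; returns |(x^2-1)y'' + xy' - m^2 y| / (m^2 y).
    y0 = y(x, m)
    d1 = (y(x + h, m) - y(x - h, m)) / (2 * h)
    d2 = (y(x + h, m) - 2 * y0 + y(x - h, m)) / (h * h)
    return abs((x * x - 1) * d2 + x * d1 - m * m * y0) / (m * m * y0)

print(relative_residual(2.0, 3))  # tiny: zero up to finite-difference error
```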
{ "language": "en", "url": "https://math.stackexchange.com/questions/338492", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
Complex numbers and trig identities: $\cos(3\theta) + i \sin(3\theta)$ Using the equality rule $a + bi = c + di$ (equating real and imaginary parts) and trigonometric identities, how do I show $$\cos^3(\theta) - 3\sin^2(\theta)\ \cos(\theta) + 3i\ \sin(\theta)\ \cos^2(\theta) - i\ \sin^3(\theta)= \cos(3\theta) + i\ \sin(3\theta)?$$ Apparently it's easy, but I can't see which trig identities to substitute. Please help!
Note that $$(\cos(t)+i\sin(t))^n=(\cos(nt)+i\sin(nt)),~~n\in\mathbb Z$$ and $(a+b)^3=a^3+3a^2b+3ab^2+b^3,~~~(a-b)^3=a^3-3a^2b+3ab^2-b^3$.
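A throwaway numerical check of both ingredients (de Moivre with $n=3$, and the resulting real-part identity); this is my addition, not part of the original hint:

```python
import math

t, n = 0.7, 3
z = complex(math.cos(t), math.sin(t))
demoivre = complex(math.cos(n * t), math.sin(n * t))
# (cos t + i sin t)^3 = cos 3t + i sin 3t
assert abs(z ** n - demoivre) < 1e-12

# Matching real parts of the cubic expansion reproduces the question's identity.
real_part = math.cos(t) ** 3 - 3 * math.sin(t) ** 2 * math.cos(t)
assert abs(real_part - math.cos(3 * t)) < 1e-12
print("identities hold at t =", t)
```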
{ "language": "en", "url": "https://math.stackexchange.com/questions/338536", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 4, "answer_id": 0 }
trigonometric function integration I have this integral: $$ \int \dfrac{\sin(2x)}{1+\sin^2(x)}\,dx$$ and I need a hint for solving it. I tried using trigonometric identities and letting $$u=\sin(x),$$ but I got $$\int ... =\int \dfrac{2u}{1+u^2}\, du,$$ which I don't know how to solve. I also tried letting $$t=\tan\left(\frac{x}{2}\right),$$ but that leads to $$\int ...=\int \frac{8t(1-t^2)}{(1+t^2)(t^4+6t^2+1)}\, dt,$$ which again I can't solve. I'll be glad for help.
Hint: $\displaystyle (\log u(x))'=\frac{u'(x)}{u(x)}$ (with the implied assumption that $u(x)>0$ for all $x$ in the domain of $u$). You should note, however, that $\displaystyle (\log |u(x)|)'=\frac{u'(x)}{u(x)}$ for all $x\in \operatorname{dom}(u)$ such that $u(x)\neq 0$. Also $1+u^2>0$.
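Following the hint to its conclusion, the substitution gives $\int\frac{2u}{1+u^2}\,du=\log(1+u^2)+C$, so a candidate antiderivative is $\log(1+\sin^2 x)$. A small Python check of my own that its derivative matches the integrand:

```python
import math

def F(x):
    # Candidate antiderivative: log(1 + sin^2 x); 1 + u^2 > 0, so no |.| is needed.
    return math.log(1 + math.sin(x) ** 2)

def integrand(x):
    return math.sin(2 * x) / (1 + math.sin(x) ** 2)

x, h = 0.9, 1e-6
numeric_derivative = (F(x + h) - F(x - h)) / (2 * h)
print(abs(numeric_derivative - integrand(x)))  # ≈ 0
```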
{ "language": "en", "url": "https://math.stackexchange.com/questions/338703", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
General topological space $2$. 1. Let $A\subset X$ be a closed set of a topological space $X$, and let $B \subset A$ be a subset of $A$. Prove that $B$ is closed as a subset of $A$ if and only if $B$ is closed as a subset of $X$. What I have done: if $B$ is closed in $A$, then $B$ should be of the form $A\cap C$ where $C$ is closed in $X$... (I don't know whether this is true or not.) Anyway, then $A$ and $C$ are both closed in $X$, hence $B$ should also be closed in $X$. I think this is just a stupid way, and I don't know how to solve this problem. 2. If we omit the assumption that $A$ is closed, show that the previous exercise is false. What I have done: if I let $X=\mathbb{R}$ and $A=(0,1)\cup(2,3)$, then although the interval $B=(0,\frac{1}{2}]$ is a subset of $A$ and closed in $A$, it is not closed in $X$. And I don't know the opposite direction. Please help me.
The first proof is correct, minus the remarks about stupidity. It could use a slight rewording, but the idea is correct. The second example is also correct, you are supposed to find a non-closed $A$ and $B\subseteq A$ which is closed in $A$ but not closed in $X$. You can do with a simpler example, though.
{ "language": "en", "url": "https://math.stackexchange.com/questions/338778", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Find a closed form of the series $\sum_{n=0}^{\infty} n^2x^n$ The question I've been given is this: Using both sides of this equation: $$\frac{1}{1-x} = \sum_{n=0}^{\infty}x^n$$ Find an expression for $$\sum_{n=0}^{\infty} n^2x^n$$ Then use that to find an expression for $$\sum_{n=0}^{\infty}\frac{n^2}{2^n}$$ This is as close as I've gotten: \begin{align*} \frac{1}{1-x} & = \sum_{n=0}^{\infty} x^n \\ \frac{-2}{(x-1)^3} & = \frac{d^2}{dx^2} \sum_{n=0}^{\infty} x^n \\ \frac{-2}{(x-1)^3} & = \sum_{n=2}^{\infty} n(n-1)x^{n-2} \\ \frac{-2x(x+1)}{(x-1)^3} & = \sum_{n=0}^{\infty} n(n-1)\frac{x^n}{x}(x+1) \\ \frac{-2x(x+1)}{(x-1)^3} & = \sum_{n=0}^{\infty} (n^2x + n^2 - nx - n)\frac{x^n}{x} \\ \frac{-2x(x+1)}{(x-1)^3} & = \sum_{n=0}^{\infty} n^2x^n + n^2\frac{x^n}{x} - nx^n - n\frac{x^n}{x} \\ \end{align*} Any help is appreciated, thanks :)
You've got $\sum_{n\geq 0} n(n-1)x^n$, modulo multiplication by $x^2$. Differentiate just once your initial power series and you'll be able to find $\sum_{n\geq 0} nx^n$. Then take the sum of $\sum_{n\geq 0} n(n-1)x^n$ and $\sum_{n\geq 0} nx^n$. What are the coefficients?
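Carrying the hint through (the details the answer leaves to the reader) yields $\sum_{n\ge0}nx^n=\frac{x}{(1-x)^2}$, $\sum_{n\ge0}n(n-1)x^n=\frac{2x^2}{(1-x)^3}$, and so $\sum_{n\ge0}n^2x^n=\frac{x(1+x)}{(1-x)^3}$; at $x=\frac12$ the requested sum is $6$. A quick Python confirmation of my own:

```python
x = 0.5
partial = sum(n * n * x ** n for n in range(200))   # truncated series
closed = x * (1 + x) / (1 - x) ** 3                 # claimed closed form
print(partial, closed)  # both ≈ 6.0
```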
{ "language": "en", "url": "https://math.stackexchange.com/questions/338852", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10", "answer_count": 5, "answer_id": 4 }
Is this convergent or diverges to infinity? Solve or give some hints. $\lim_{n\to\infty}\dfrac {C_n^{F_n}}{F_n^{C_n}}$, where $C_n=\dfrac {(2n)!}{n!(n+1)!}$ is the n-th Catalan number and $F_n=2^{2^n}+1$ is the n-th Fermat number.
Approximate $(n+1)!\simeq n!$ and use Stirling's approximation: $$C_n\simeq\frac{\sqrt{4\pi n}\left(\frac{2n}{e}\right)^{2n}}{\left(\sqrt{2\pi n}\left(\frac{n}{e}\right)^n\right)^2 n}\simeq\frac{4^n}{\sqrt{\pi}\,n^{3/2}}.$$ Taking $F_n\simeq2^{2^n}$ and similar approximations, $$\frac{C_n}{F_n}\simeq e^{\,n\ln 4-\frac{3}{2}\ln n-\frac{1}{2}\ln\pi-2^n\ln 2}\to 0,$$ since the $-2^n\ln 2$ term dominates the exponent; so $C_n$ grows much slower than $F_n$. Now, for the actual limit: one can show that for sequences $a_n,b_n\to\infty$ with $a_n/b_n\to 0$, the ratio $a_n^{b_n}/b_n^{a_n}$ goes to infinity. To see this, proceed as above: $$L'=\lim_{n\rightarrow\infty} e^{\,b_n\ln(a_n)-a_n\ln(b_n)},$$ and since $\ln$ grows slower than the identity, the term $b_n\ln a_n$ dominates $a_n\ln b_n$ here, so the limit is infinity. With $a_n=C_n$ and $b_n=F_n$, the quotient $C_n^{F_n}/F_n^{C_n}$ therefore diverges to infinity.
{ "language": "en", "url": "https://math.stackexchange.com/questions/338925", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Can a countable set of parabolas cover the unit square in the plane? Can a countable set of parabolas cover the unit square in the plane? My intuition tells me the answer is no, since it can't be covered by countably many horizontal lines (by the uncountability of $[0, 1]$). Help would be appreciated.
An approach: Let the parabolas be $P_1,P_2,\dots$. Then by thickening each parabola slightly, we can make the area covered by the thickened parabola $P_i^\ast$ less than $\frac{\epsilon}{2^i}$, where $\epsilon$ is any preassigned positive number, say $\epsilon=1/2$. Then the total area covered by the thickened parabolas is $\lt 1$. Remark: There are various ways to say that a set is "small." Cardinality is one such way. Measure is another.
{ "language": "en", "url": "https://math.stackexchange.com/questions/338990", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
Prove that every continuous function from the sphere to the real numbers has a pair of antipodal points $x$, $-x$ such that $f(x)=f(-x)$ I have not taken a topology course yet; this is just a question that my undergrad research professor left us to think about. She hinted that I could use a theorem from calculus. So I reviewed all the theorems in calculus, and I found that the Intermediate Value Theorem might be helpful, since it has generalizations in topology. But I still don't know how to get started. If you could give me some hints or similar examples, that would be really helpful. Thanks!
You may search for the Borsuk–Ulam theorem to get the details.
{ "language": "en", "url": "https://math.stackexchange.com/questions/339116", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 2, "answer_id": 1 }
Is the set of non-finitely-describable real numbers closed under addition and squaring? Is the set of non-finitely-describable real numbers closed under addition and squaring? If so, can someone give a proof? Thanks.
Hmm... if by non-finitely-describable you mean "one can't construct a finite description (as e.g. a Turing machine)", then they aren't closed with respect to addition: take such an $a$ and set $b = 2 - a$, so that $a + b = 2$. If $a$ is one of yours, $b$ is too (if it weren't, $a = 2 - b$ could be described). But $2$ clearly isn't. About squares I have no clue.
{ "language": "en", "url": "https://math.stackexchange.com/questions/339157", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 4, "answer_id": 3 }
Minimum ceiling height to move a closet to upright position I bought a closet today. It has dimensions $a\times b\times c$, where $c$ is the height and $a\leq b \leq c$. To assemble it, I have to lay it out on the ground, then move it to an upright position. I realized that if I just move it in the way shown in this picture, it would require the ceiling height to be $\sqrt{c^2+a^2}$. Is this height required if we consider all possible ways to move the closet? Or is there a smart way to use less height? This is an image from IKEA's instructions that come with the closet.
I have two solutions to this problem. Intuitive solution Intuitively, it seems to me that the greatest distance across the box is the diagonal, which can be calculated by the Pythagorean theorem: $$h = \sqrt {l^2 + w^2}$$ If you'd like a more rigorous solution, read on. Calculus solution Treat this as an optimization problem. For a box of width $w$ and length $l$ (depth doesn't really matter in this problem), the height when the box is upright, assuming $l > w$, is $$h=l$$ If the box is rotated at an angle $\theta$, then the height has two components: the height $h_1$ contributed by the short side $w$, and the height $h_2$ contributed by the long side $l$. Resolving along the vertical, $$h_1 = w \sin \theta\\ h_2 = l \cos \theta$$ Thus, the total height is $$h = h_1 + h_2 = w \sin\theta + l \cos \theta$$ This intuitively makes sense: for small $\theta$ (close to upright), $w \sin \theta \approx 0$ and $l \cos \theta \approx l$, so $h \approx l$. The maximum height required means we need to maximize $h$. Take the derivative: $$\frac {dh}{d\theta} = \frac d{d\theta} (w \sin \theta + l \cos \theta) = w \cos \theta - l \sin \theta$$ When the derivative is zero, we may be at an extremum or a point of inflection. We need to find all of these on the interval $(0, \frac \pi 2)$ (because we don't care about anything outside of a standard 90° rotation). So we have $$0 = \frac {dh}{d\theta} = w \cos \theta - l \sin \theta\\ w \cos \theta = l \sin \theta$$ And, on the interval $(0, \frac \pi 2)$, we can guarantee that $\cos \theta \neq 0$, so $$\frac w l = \frac {\sin \theta} {\cos \theta} = \tan \theta\\ \theta = \arctan \left (\frac w l \right )$$ So the maximum is at $\theta = \arctan \left (\frac w l \right)$.
We can plug that back into the height equation to get $$h = w \sin \left ( \arctan \left (\frac w l \right ) \right ) + l \cos \left ( \arctan \left (\frac w l \right ) \right ) = \frac{w^2}{\sqrt{l^2+w^2}} + \frac{l^2}{\sqrt{l^2+w^2}} = \sqrt{l^2+w^2},$$ which matches the intuitive answer. That's my 2¢.
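A numerical cross-check of the two solutions (my own sketch; the parameter values are arbitrary): the maximum of $w\sin\theta+l\cos\theta$ over a fine grid should agree with $\sqrt{l^2+w^2}$.

```python
import math

def max_height(l, w, samples=100000):
    # Grid-maximize h(theta) = w sin(theta) + l cos(theta) on [0, pi/2].
    step = (math.pi / 2) / samples
    return max(w * math.sin(k * step) + l * math.cos(k * step)
               for k in range(samples + 1))

l, w = 2.0, 0.5
print(max_height(l, w), math.hypot(l, w))  # both ≈ 2.0616
```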
{ "language": "en", "url": "https://math.stackexchange.com/questions/339221", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 3, "answer_id": 1 }
Prove that such an inverse is unique Given $z$ is a non zero complex number, we define a new complex number $z^{-1}$ , called $z$ inverse to have the property that $z\cdot z^{-1} = 1$ $z^{-1}$ is also often written as $1/z$
Whenever you need to prove the uniqueness of an element that satisfies some property, you can begin your proof by assuming the existence of at least two elements with this property, say $x$ and $y$, and showing that under this assumption it necessarily turns out that $x = y$. In this case, the property we'll check is "being an inverse of $z$". We'll use the definition of an inverse of $z$: it is any element $z'$ such that $$z' z = zz' = 1\tag{1}$$ (We won't denote any such element by $z^{-1}$ yet, because we first have to rule out the possibility that it fails to be unique.) So suppose $z \in \mathbb C$, $z\neq 0$, has two inverses $x, y$ with $x\neq y$. We won't call either of them $z^{-1}$ at this point, because we are assuming they are distinct and that both satisfy $(1)$. Then the definition $(1)$ must hold for both $x$ and $y$: since $y$ is an inverse of $z$, we must have $\color{blue}{\bf yz} = zy = \color{blue}{\bf 1}$; and since $x$ is an inverse of $z$, we must have $xz = \color{blue}{\bf zx = 1}$. This means that $${\bf{x}} = \color{blue}{\bf 1} \cdot x = \color{blue}{\bf(yz)}x = y\color{blue}{\bf (zx)} = y \cdot \color{blue}{\bf 1} = {\bf {y}}$$ Hence $$x \;=\; y \;= \;z^{-1},$$ and thus there really is only one multiplicative inverse of $z$; that is, the inverse of a given complex $z$ is unique, and we denote it by $z^{-1}$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/339296", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 3, "answer_id": 1 }
$x$ is rational, $\frac{x}{2}$ is rational, and $3x-1$ is rational are equivalent How do we prove that the three statements below about the real number $x$ are equivalent? (i) $\displaystyle x$ is rational (ii) $\displaystyle \frac{x}{2}$ is rational (iii) $\displaystyle 3x-1$ is rational
It is enough to prove that $$(i) \implies (ii) \implies (iii) \implies (i)$$ $1$. $(i) \implies (ii)$. Let $x = \dfrac{p}q$, where $p,q \in\mathbb{Z}$, $q\neq0$. We then have $\dfrac{x}2 = \dfrac{p}{2q}$, with $p,2q \in \mathbb{Z}$. Hence, $$(i) \implies (ii)$$ $2$. $(ii) \implies (iii)$. Let $\dfrac{x}2 = \dfrac{p}q$, where $p,q \in \mathbb{Z}$, $q\neq0$. This gives $x = \dfrac{2p}q$, which in turn gives $$3x-1 = \dfrac{6p}q - 1 = \dfrac{6p-q}q$$ Since $p,q \in \mathbb{Z}$, we have $q,6p-q \in \mathbb{Z}$. Hence, $$(ii) \implies (iii)$$ $3$. $(iii) \implies (i)$. Let $3x-1 = \dfrac{p}q$, where $p,q \in \mathbb{Z}$, $q\neq0$. This gives $$3x = \dfrac{p}q + 1 \implies x = \dfrac{p+q}{3q}$$ Since $p,q \in \mathbb{Z}$, we have $3q,p+q \in \mathbb{Z}$. Hence, $$(iii) \implies (i)$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/339372", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 4, "answer_id": 1 }
If $U_0 = 0$ and $U_n=\sqrt{U_{n-1}+(1/2)^{n-1}}$, then $U_n < U_{n-1}+(1/2)^n$ for $n > 2$ Letting $$U_n=\sqrt{U_{n-1}+(1/2)^{n-1}}$$ where $U_0=0$, prove that: $$U_n < U_{n-1}+(1/2)^n$$ where $n>2$
Here is a useful factoid: For every $x\geqslant1$ and $y\gt0$, $\sqrt{x+2y}\lt x+y$. Now, apply this to your setting. First note that $U_n\geqslant1$ implies $U_{n+1}\geqslant1$. Since $U_1=1$, this proves that $U_n\geqslant1$ for every $n\geqslant1$. Then, choosing $n\geqslant2$, $x=U_{n-1}$ and $y=1/2^n$, the factoid yields $U_n\lt U_{n-1}+1/2^n$, as desired. Finally the result holds for every $n\geqslant2$ (and not only $n\gt2$). Can you prove the factoid above?
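A quick numerical walk of the recursion (my addition), confirming both the factoid and the inequality for small $n$:

```python
import math

# The factoid: sqrt(x + 2y) < x + y whenever x >= 1 and y > 0.
for a in (1.0, 1.5, 3.0):
    for b in (1e-6, 0.3, 2.0):
        assert math.sqrt(a + 2 * b) < a + b

# The recursion U_n = sqrt(U_{n-1} + (1/2)^(n-1)), U_0 = 0,
# together with the claimed bound for n >= 2.
u = 0.0
for n in range(1, 25):
    prev, u = u, math.sqrt(u + 0.5 ** (n - 1))
    if n >= 2:
        assert u < prev + 0.5 ** n, n
print(u)  # U_24, already very close to the limit 1
```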
{ "language": "en", "url": "https://math.stackexchange.com/questions/339444", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Mathematical way of determining whether a number is an integer I'm developing a computer program, and I've run into a mathematical problem. This isn't specific to any programming language, so it isn't really appropriate to ask on stackoverflow. Is there any way to determine whether a number is an integer using a mathematical function, from which a boolean response is given? For example: let x equal 159 and let y equal 12.5; then f(x) returns 1 and f(y) returns 0. Please get back to me if you can. If it isn't possible, is there a similar way to determine whether a number is odd or even? EDIT: I found a solution to the problem thanks to Karolis Juodelė: I'll use a floor function to round the number down and then subtract the output from the original number. If the result is zero, the number is an integer, so the function returns 1. I just need to make sure that floor is a purely mathematical function. Does anyone know? Thanks
Since no one has answered with this debatable solution, I will post it. $$f(x) := \begin{cases}1 \qquad x \in \mathbb{Z}\\ 0 \qquad x \in \mathbb{R} \setminus \mathbb{Z}\end{cases}$$ is a perfectly fine function. Even shorter would be $\chi_{\mathbb{Z}}$ defined on $\mathbb{R}$.
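For the programming side of the question, the floor-based test from the question's edit can be written directly (a sketch of my own; floor is available in essentially any language's math library):

```python
import math

def f(x):
    # 1 if x - floor(x) == 0 (i.e. x is an integer), else 0.
    return 1 if x - math.floor(x) == 0 else 0

print(f(159), f(12.5))  # 1 0
```

This is exactly the characteristic function $\chi_{\mathbb{Z}}$ of the answer, restricted to machine numbers.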
{ "language": "en", "url": "https://math.stackexchange.com/questions/339510", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 6, "answer_id": 2 }
Stock behaviour probability I found this question in a financial mathematics course exam, could anyone please help with a solution and some explanation? Thanks in advance :) A stock has beta of $2.0$ and stock specific daily volatility of $0.02$. Suppose that yesterday’s closing price was $100$ and today the market goes up by $1$%. What’s the probability of today’s closing price being at least $103$?
Assuming normality, volatility specified as a standard deviation, and a risk-free rate $r_f$ of zero, the following reasoning could be applied: 1) From the CAPM, the expected return of the stock is $E(r)=r_f+\beta\,(r_m-r_f)$; here $E(r)=0+2.0\times 0.01=0.02$. 2) From a casual definition of beta, it relates stock-specific volatility to total volatility, $\sigma_{\text{total}}=\beta\,\sigma_{\text{specific}}$; here $\sigma_{\text{total}}=2\times 0.02=0.04$. 3) Since we have data on returns only, we transfer the value of interest ($103$) into return space: $r^\ast=(103-100)/100=0.03$. 4) We are looking for the probability that a return of at least $r^\ast$ occurs, i.e. $P(x\ge 0.03)$ where $x\sim N(0.02,\,0.04)$ (mean, standard deviation). 5) In R you could write 1-pnorm(.03, mean=.02, sd=.04), where the 1-pnorm(...) is necessary because pnorm() returns $P(X\le x)$ of the cumulative distribution function (CDF) and you are interested in $P(x\ge 0.03)$.
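The R one-liner in step 5 can be reproduced with nothing but the complementary error function; under the answer's parameters ($\mu=0.02$, $\sigma=0.04$) the tail probability comes out near $0.40$ (my own check):

```python
import math

def normal_tail(x, mean, sd):
    # P(X >= x) for X ~ N(mean, sd^2), via the complementary error function.
    return 0.5 * math.erfc((x - mean) / (sd * math.sqrt(2)))

p = normal_tail(0.03, mean=0.02, sd=0.04)
print(p)  # ≈ 0.4013
```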
{ "language": "en", "url": "https://math.stackexchange.com/questions/339572", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
Prove $ax^2+bx+c=0$ has no rational roots if $a,b,c$ are odd If $a,b,c$ are odd, how can we prove that $ax^2+bx+c=0$ has no rational roots? I was unable to proceed beyond this: Roots are $x=\frac{-b\pm\sqrt{b^2-4ac}}{2a}$ and rational numbers are of the form $\frac pq$.
Hint $\ $ By the Rational Root Test, any rational root is integral, hence it follows by Theorem Parity Root Test $\ $ A polynomial $\rm\:f(x)\:$ with integer coefficients has no integer roots if its constant coefficient and coefficient sum are both odd. Proof $\ $ The test verifies that $\rm\ f(0) \equiv 1\equiv f(1)\ \ (mod\ 2)\:,\ $ i.e. that $\rm\:f(x)\:$ has no roots modulo $2$, hence it has no integer roots. $\ $ QED This test extends to many other rings which have a "sense of parity", i.e. an image $\cong \Bbb Z/2,\:$ for example, various algebraic number rings such as the Gaussian integers.
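A brute-force corollary check of my own: if the discriminant $b^2-4ac$ were ever a perfect square for odd $a,b,c$, the quadratic formula would yield rational roots, so by the theorem it never is. Exhaustively over small odd coefficients:

```python
import math

for a in range(-9, 10, 2):          # odd values -9, -7, ..., 9
    for b in range(-9, 10, 2):
        for c in range(-9, 10, 2):
            d = b * b - 4 * a * c
            if d >= 0:
                r = math.isqrt(d)
                # A perfect-square discriminant would give rational roots.
                assert r * r != d, (a, b, c)
print("no perfect-square discriminants found")
```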
{ "language": "en", "url": "https://math.stackexchange.com/questions/339605", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 11, "answer_id": 4 }
Uniform convergence of $f_n(x)=x^n(1-x)$ I need to show that $f_n(x)=x^n(1-x)\rightarrow0$ uniformly on $[0,1]$ (i.e. $\forall\epsilon>0\,\exists N\in\mathbb{N}\,\forall n>N: \|f_n-f\|<\epsilon$). I tried to find the maximum of $f_n$, because $$\|f_n-f\|=\sup_{[0,1]}|f_n(x)-f(x)|=\max_{[0,1]} f_n(x).$$ Investigating the maximum value of $f_n(x)$, we get $$f_n'(x)=0\Rightarrow x_\max=\dfrac{n}{n+1}.$$ Therefore $\|f_n\|=f_n\left(\frac{n}{n+1}\right)=\frac{n^n}{(n+1)^{n+1}}$. And here I get stuck. How can I show $\|f_n\|<\epsilon$?
The sequence $(f_n)$ is decreasing, $[0, 1]$ is compact, the constant limit function is continuous, so the result follows immediately from Dini's theorem. Alternatively, to finish your own computation: $\|f_n\|=\frac{n^n}{(n+1)^{n+1}}=\frac{1}{n+1}\left(\frac{n}{n+1}\right)^n\le\frac{1}{n+1}\to0$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/339658", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 2 }
Biased coin hypothesis Let's assume, we threw a coin $110$ times and in $85$ tosses it was head. What is the probability that the coin is biased towards head? We can use chi squared test to test, whether the coin is biased, but using this test we only find out, that the coin is biased towards heads or tails and there seems to be no one tailed chi squared test. The same problem seems to appear when using z-test approach. What is the correct way to solve this problem?
Let's assume that you have a fair coin, $p=.5$. You can approximate a binomial distribution with a normal distribution. In this case we'd use a normal distribution with mean $110p=55$ and standard deviation $\sqrt{110p(1-p)}\approx5.244$. So getting 85 heads is a $(85-55)/5.244\approx5.72$ standard deviation event. Looking this value up on a table (if your table goes out that far, lol), you can see that the probability of getting 85 heads or more is about $5.3\times10^{-9}$: an extremely unlikely event. That is approximately the probability of seeing a result at least this extreme if the coin were fair (a p-value), which is overwhelming evidence that the coin is biased towards heads.
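The normal approximation can be compared against the exact binomial tail with a few lines of Python (my check; `math.comb` needs Python ≥ 3.8):

```python
import math

n, k = 110, 85
# Exact P(at least 85 heads in 110 tosses of a fair coin).
tail = sum(math.comb(n, i) for i in range(k, n + 1)) / 2 ** n
print(tail)  # on the order of 1e-8: astronomically unlikely under a fair coin
```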
{ "language": "en", "url": "https://math.stackexchange.com/questions/339716", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Integration theory for Banach-valued functions I am currently studying integration theory for vector-valued functions in a general Banach space, defining the integral with Riemann sums. Everything seems to work exactly as in the finite-dimensional case: Let $X$ be a Banach space, $f,g \colon I = [a,b] \to X$, $\alpha$, $\beta \in \mathbb{R}$; then $\int_I (\alpha f + \beta g) = \alpha \int_I f + \beta \int_I g$, $\|\int_I f\| \le \int_I \|f\|$, etc. The fundamental theorem of calculus holds. If the $f_n$ are continuous and uniformly convergent to $f$, it is also true that $\lim_n \int_I f_n = \int_I f$. My question is: is there any property that holds only in the finite-dimensional case? Is it possible to generalize the construction of the integral as Lebesgue did? If so, does it make sense? Thank you for your help and suggestions.
You might want to have a look at the Bochner–Lebesgue spaces. They are an appropriate generalization to the Banach-space-valued case. Many properties translate directly from the scalar case (Lebesgue's dominated convergence theorem, Lebesgue's differentiation theorem). Introductions can be found in the rather old book by Yosida (Functional Analysis) or in Diestel & Uhl (Vector Measures). The latter also considers different (weaker) definitions of the integral.
{ "language": "en", "url": "https://math.stackexchange.com/questions/339787", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 4, "answer_id": 1 }
Solving a partial differential equation using the Laplace transform, with time and space variation I have an equation like this: $\dfrac{\partial y}{\partial t} = -A\dfrac{\partial y}{\partial x}+ B \dfrac{\partial^2y}{\partial x^2}$ with the initial condition $y(x,0)=0$ and boundary conditions $y(0,t)=1$ and $y(\infty , t)=0$. I tried to solve the problem as follows. Taking the Laplace transform of both sides, $\mathcal{L}(\dfrac{\partial y}{\partial t}) = - A \mathcal{L}(\dfrac{\partial y}{\partial x})+B \mathcal{L}(\dfrac{\partial^2 y}{\partial x^2})$. On the L.H.S we have $sY-y(x,0)=sY$. For the R.H.S, $\mathcal{L}(\dfrac{\partial^2 y}{\partial x^2}) =\displaystyle \int e^{-st} \dfrac{\partial^2 y}{\partial x^2} dt$; exchanging the order of integration and differentiation, $\displaystyle\mathcal{L}(\frac{\partial^2 y}{\partial x^2}) =\frac{\partial^2}{\partial x^2} \int e^{-st} y(x,t) dt = \frac{\partial^2}{\partial x^2}\mathcal{L}(y) = \frac{\partial^2Y}{\partial x^2}$, and similarly $\displaystyle\mathcal{L}(\frac{\partial y}{\partial x}) = \frac{\partial Y}{\partial x}$. Combining the L.H.S and R.H.S, we have $\displaystyle sY = - A \frac{\partial Y}{\partial x} + B \frac{\partial^2Y}{\partial x^2}$. For a constant-coefficient ODE $aY'' + bY' + cY = 0$ there are three cases: If $b^2 - 4ac > 0$, let $r_1=\frac{-b-\sqrt{b^2-4ac}}{2a}$ and $r_2 = \frac{-b+\sqrt{b^2-4ac}}{2a}$; the general solution is $\displaystyle Y(x) = C_1e^{r_1x}+C_2 e^{r_2x}$. If $b^2 - 4ac = 0$, the general solution is $Y(x)=C_1e^{-\frac{bx}{2a}}+C_2xe^{-\frac{bx}{2a}}$. If $b^2 - 4ac <0$, the general solution is $Y(x) = C_1e^{\frac{-bx}{2a}}\cos(wx) + C_2 e^{\frac{-bx}{2a}}\sin(wx)$. Here $a=B$, $b=-A$, $c=-s$, and since $A$ and $B$ are always positive in my problem, the first case applies. From this point I am stuck and couldn't properly use the boundary conditions. If anyone could offer any help, that would be great.
"Solution added" The solution of the problem is $$y(x,t)= \dfrac {y_0}{2} \left[\exp\left(\dfrac {Ax}{B}\right)\operatorname{erfc}\left(\dfrac{x+At}{2\sqrt{Bt}}\right) + \operatorname{erfc}\left(\dfrac{x-At}{2\sqrt{Bt}}\right)\right],$$ with $y_0=1$ here, from the boundary condition $y(0,t)=1$.
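The bracketed expression is, as far as I can tell, the classical Ogata–Banks solution of the advection–diffusion equation with a step boundary condition. A finite-difference spot check of my own (arbitrary $A$, $B$, and $y_0=1$ as forced by $y(0,t)=1$):

```python
import math

A, B = 1.0, 0.5

def y(x, t):
    # With y0 = 1: y(0, t) = 1 automatically, because erfc(-z) + erfc(z) = 2.
    s = 2 * math.sqrt(B * t)
    return 0.5 * (math.erfc((x - A * t) / s)
                  + math.exp(A * x / B) * math.erfc((x + A * t) / s))

# Boundary condition.
assert abs(y(0.0, 0.7) - 1.0) < 1e-12

# PDE residual y_t + A y_x - B y_xx at an interior point, by central differences.
x0, t0, h = 0.4, 0.3, 1e-4
yt = (y(x0, t0 + h) - y(x0, t0 - h)) / (2 * h)
yx = (y(x0 + h, t0) - y(x0 - h, t0)) / (2 * h)
yxx = (y(x0 + h, t0) - 2 * y(x0, t0) + y(x0 - h, t0)) / (h * h)
print(abs(yt + A * yx - B * yxx))  # ≈ 0 up to discretization error
```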
Here is a simpler procedure that avoids the Laplace transform. Note that this PDE is separable. Let $y(x,t)=X(x)T(t)$. Then $X(x)T'(t)=-AX'(x)T(t)+BX''(x)T(t)$ $X(x)T'(t)=(BX''(x)-AX'(x))T(t)$ $\dfrac{T'(t)}{T(t)}=\dfrac{BX''(x)-AX'(x)}{X(x)}=\dfrac{4B^2s^2-A^2}{4B}$ $\begin{cases}\dfrac{T'(t)}{T(t)}=\dfrac{4B^2s^2-A^2}{4B}\\BX''(x)-AX'(x)-\dfrac{4B^2s^2-A^2}{4B}X(x)=0\end{cases}$ $\begin{cases}T(t)=c_3(s)e^\frac{t(4B^2s^2-A^2)}{4B}\\X(x)=\begin{cases}c_1(s)e^\frac{Ax}{2B}\sinh xs+c_2(s)e^\frac{Ax}{2B}\cosh xs&\text{when}~s\neq0\\c_1xe^\frac{Ax}{2B}+c_2e^\frac{Ax}{2B}&\text{when}~s=0\end{cases}\end{cases}$ $\therefore y(x,t)=\int_{-\infty}^\infty C_1(s)e^\frac{2Ax+t(4B^2s^2-A^2)}{4B}\sinh xs~ds+\int_{-\infty}^\infty C_2(s)e^\frac{2Ax+t(4B^2s^2-A^2)}{4B}\cosh xs~ds$ $y(0,t)=1$ : $\int_{-\infty}^\infty C_2(s)e^\frac{t(4B^2s^2-A^2)}{4B}~ds=1$ $C_2(s)=\dfrac{1}{2}\left(\delta\left(s-\dfrac{A}{2B}\right)+\delta\left(s+\dfrac{A}{2B}\right)\right)$ $\therefore y(x,t)=\int_{-\infty}^\infty C_1(s)e^\frac{2Ax+t(4B^2s^2-A^2)}{4B}\sinh xs~ds+\int_{-\infty}^\infty\dfrac{1}{2}\left(\delta\left(s-\dfrac{A}{2B}\right)+\delta\left(s+\dfrac{A}{2B}\right)\right)e^\frac{2Ax+t(4B^2s^2-A^2)}{4B}\cosh xs~ds=\int_{-\infty}^\infty C_1(s)e^\frac{2Ax+t(4B^2s^2-A^2)}{4B}\sinh xs~ds+e^\frac{Ax}{2B}\cosh\dfrac{Ax}{2B}$
{ "language": "en", "url": "https://math.stackexchange.com/questions/339849", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Summation/Sigma notation There are lots of variants in the notation for summation. For example, $$\sum_{k=1}^{n} f(k), \qquad \sum_{p \text{ prime}} \frac{1}{p}, \qquad \sum_{\sigma \in S_n} (\operatorname{sgn} \sigma) a_{1 , \sigma(1)} \ldots a_{n , \sigma(n)}, \qquad \sum_{d \mid n} \mu(d).$$ What exactly is a summation? How do we define it? Is there a notation that generalizes all of the above, so that each of the above summations is a variant of the general notation? Are there any books that discuss this matter? It seems that summation is a pretty self-evident concept, and I have yet to find a discussion of it in a textbook.
Except for the case of the upper and lower limit, all the other summations are really just sums of the form $$\sum_{P(i)} f(i)$$ where $P$ is a unary predicate in the "language of mathematics", and $f(i)$ is some function which returns a value that we can sum. In the case of the sum of prime reciprocals, $P(i)$ states that $i$ is a prime number and $f(i)=\frac1i$. In the sum over permutations, $P(\sigma)$ is $\sigma\in S_n$, and $f(\sigma)$ is the signed product in the summand. And so on.
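The general form $\sum_{P(i)}f(i)$ translates directly into code when the index set is finite (or truncated to a finite search range); a small sketch of my own:

```python
def predicate_sum(P, f, candidates):
    # Sum of f(i) over all i in candidates satisfying the predicate P.
    return sum(f(i) for i in candidates if P(i))

def is_prime(n):
    return n >= 2 and all(n % d for d in range(2, int(n ** 0.5) + 1))

# The bounded sum, with P(i) given by 1 <= i <= n:
assert predicate_sum(lambda i: 1 <= i <= 4, lambda i: i, range(10)) == 10

# A truncation of the sum of prime reciprocals: 1/2 + 1/3 + 1/5 + 1/7.
s = predicate_sum(is_prime, lambda p: 1 / p, range(11))
print(s)
```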
{ "language": "en", "url": "https://math.stackexchange.com/questions/339977", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 3, "answer_id": 0 }
Evaluate the integral $\int_0^{\infty} \left(\frac{\log x \arctan x}{x}\right)^2 \ dx$ Some rumours point out that the integral you see might be evaluated in a straightforward way. But rumours are sometimes just rumours. Could you confirm/refute it? $$ \int_0^{\infty}\left[\frac{\log\left(x\right)\arctan\left(x\right)}{x}\right]^{2} \,{\rm d}x $$ EDIT: W|A tells me the integral evaluates to $0$, but this is not true. How do I compute it exactly?
Related problems: (I), (II), (III). Denote our integral by $J$ and recall the Mellin transform: $$ F(s)=\int_{0}^{\infty}x^{s-1} f(x)\,dx \implies F''(s)=\int_{0}^{\infty}x^{s-1} \ln(x)^2\,f(x)\,dx.$$ Taking $f(x)=\arctan(x)^2$, the Mellin transform of $f(x)$ is $$ \frac{1}{2}\,{\frac {\pi \, \left( \gamma+2\,\ln\left( 2 \right) +\psi \left( \frac{1}{2}+\frac{s}{2} \right)\right) }{s\sin \left( \frac{\pi \,s}{2} \right)}}-\frac{1}{2}\,{\frac {{\pi }^{2}}{s\cos\left( \frac{\pi \,s}{2} \right) }},$$ where $\psi(x)=\frac{d}{dx}\ln \Gamma(x)$ is the digamma function. Thus $J$ can be calculated directly as $$ J= \lim_{s\to -1} F''(s) = \frac{1}{12}\,\pi \, \left( 3\,{\pi }^{2}\ln \left( 2 \right) -{\pi }^{2}+24 \,\ln \left( 2 \right) -3\,\zeta \left( 3 \right) \right)\approx 6.200200824 .$$
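Both sides can be checked numerically in plain Python (my own check); the substitution $x=\tan\theta$ turns the half-line into $(0,\pi/2)$, so a midpoint rule suffices, with accuracy limited only by the mild log-singularities at the endpoints:

```python
import math

# Midpoint rule on (0, pi/2) after x = tan(theta), so arctan(x) = theta.
N = 200000
h = (math.pi / 2) / N
total = 0.0
for k in range(N):
    th = (k + 0.5) * h
    T = math.tan(th)
    # (log x * arctan x / x)^2 dx = (log T * th / T)^2 * (1 + T^2) d(theta)
    total += (math.log(T) * th / T) ** 2 * (1 + T * T)
total *= h

zeta3 = 1.2020569031595943  # zeta(3), Apery's constant
closed = math.pi / 12 * (3 * math.pi ** 2 * math.log(2) - math.pi ** 2
                         + 24 * math.log(2) - 3 * zeta3)
print(total, closed)  # both ≈ 6.2002
```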
{ "language": "en", "url": "https://math.stackexchange.com/questions/340033", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "21", "answer_count": 2, "answer_id": 0 }
Definition of principal ideal This is a pretty basic question about principal ideals - on page 197 of Katznelson's A (Terse) Introduction to Linear Algebra, it says: Assume that $\mathcal{R}$ has an identity element. For $g\in \mathcal{R}$, the set $I_g = \{ag:a\in\mathcal{R}\}$ is a left ideal in $\mathcal{R}$, and is clearly the smallest (left) ideal that contains $g$. Ideals of the form $I_g$ are called principal left ideals... (Note $\mathcal{R}$ is a ring). Why is the assumption that $\mathcal{R}$ has an identity element important?
Because if $\mathcal{R}$ has an identity, then $I_{g}$ is the smallest left ideal containing $g$. Without an identity, it might be that $g \notin I_{g}$. For instance if $\mathcal{R} = 2 \mathbf{Z}$, then $I_{2} =\{a \cdot 2:a\in 2 \mathbf{Z} \} = 4 \mathbf{Z}$ does not contain $2$. (Thanks Cocopuffs for pointing out an earlier mistake.)
{ "language": "en", "url": "https://math.stackexchange.com/questions/340095", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 0 }
Adding a surjection $\omega \to \omega$ by Levy forcing I'm trying to understand the Levy collapse, working through Kanamori's 'The Higher Infinite'. He introduces the Levy forcing $\text{Col}(\lambda, S)$ for $S \subseteq \text{On}$ to be the set of all partial functions $p: \lambda \times S \to S$ such that $|p| < \lambda$ and $(\forall \langle \alpha, \xi \rangle \in \text{dom}(p))(p(\alpha,\xi) = 0 \vee p(\alpha, \xi) \in \alpha$). In the generic extension, we introduce surjections $\lambda \to \alpha$ for all $\alpha \in S$. However, the example Kanamori gives is $\text{Col}(\omega,\{\omega\})$, which he says is equivalent to adding a Cohen real. I can see how in the generic extension, we add a new surjection $\omega \to \omega$, but I don't see how this gives a new subset of $\omega$. Thanks for your help.
A new surjection is a subset of $\omega\times\omega$. We have a very nice way to encode $\omega\times\omega$ into $\omega$, so nice it is in the ground model. If the function is a generic subset, so must be its encoded result, otherwise by applying a function in the ground model, on a set in the ground model, we end up with a function... not in the ground model! The fact this forcing is equivalent to a Cohen one is easy to see by a cardinality argument. It is a nontrivial countable forcing. It has the same Boolean completion as Cohen, and therefore the two are equivalent.
{ "language": "en", "url": "https://math.stackexchange.com/questions/340159", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Number of spanning trees in random graph Let $G$ be a graph in $G(n, p)$ (Erdős–Rényi model). What is the (expected) number of different spanning trees of $G$?
There are $n^{n-2}$ trees on $n$ labelled vertices. The probability that all $n-1$ edges in a given tree are in the graph is $p^{n-1}$. So the expected number of spanning trees is $p^{n-1} n^{n-2}$.
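For small $n$ this expectation can be checked exactly by brute force (my own sketch): enumerate all $2^{\binom{n}{2}}$ graphs on $n=4$ labelled vertices, count each graph's spanning trees with Kirchhoff's matrix-tree theorem, and weight by the $G(n,p)$ probability of that graph.

```python
from itertools import combinations
from fractions import Fraction

n, p = 4, Fraction(1, 2)
edges = list(combinations(range(n), 2))  # the 6 possible edges on 4 vertices

def det3(m):
    # determinant of a 3x3 matrix by cofactor expansion
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
            - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
            + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

def spanning_trees(edge_set):
    # Kirchhoff's matrix-tree theorem: any cofactor of the Laplacian
    lap = [[0] * n for _ in range(n)]
    for u, v in edge_set:
        lap[u][u] += 1; lap[v][v] += 1
        lap[u][v] -= 1; lap[v][u] -= 1
    minor = [row[1:] for row in lap[1:]]  # delete row/column 0
    return det3(minor)

expected = Fraction(0)
for k in range(len(edges) + 1):
    for subset in combinations(edges, k):
        prob = p**k * (1 - p)**(len(edges) - k)
        expected += prob * spanning_trees(subset)

formula = p**(n - 1) * n**(n - 2)  # the answer's p^(n-1) * n^(n-2)
```

With $n=4$, $p=\tfrac12$ the exact enumeration and the formula both give $2$.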
{ "language": "en", "url": "https://math.stackexchange.com/questions/340275", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Factoring 3 Dimensional Polynomials? How do you factor a system of polynomials into their roots the way one can factor a single-variable polynomial into its roots? Example $$x^2 + y^2 = 14$$ $$xy = 1$$ We note that we can find the 4 solutions via the quadratic formula and substitution, and that the solutions can be separated into $2$ groups of $2$ such that each group lies on either $x + y + 4 = 0$ or $x + y - 4 = 0$. Note that: $$(x + y + 4)(x + y - 4) = 0$$ $$xy = 1$$ is also an equivalent system. How do I factor the bottom half? Ideally, if $g$ is a linear expression, then my system should be $$g_1 * g_2 = 0$$ $$g_3 * g_4 = 0$$ such that the solutions to any of the subsystems of this system are solutions to the system itself (note there are $4$ viable subsystems). Help?
For the "example" you list, here are some suggestions: Given $$x^2 + y^2 = 14\tag{1}$$ $$xy = 1 \iff y = \frac 1x\tag{2}$$ * *Substitute $y = \dfrac{1}{x}$ into equation $(1)$. Then solve for the roots of the resulting equation in one variable. $$x^2 + y^2 = 14\tag{a}$$ $$\iff x^2 + \left(\frac 1x\right)^2 = 14\tag{b}$$ $$\implies x^4 - 14x^2 + 1 = 0\tag{c}$$ Try letting $z = x^2$ and solve the resulting quadratic: $$z^2 - 14z + 1 = 0\tag{d}$$ Then for each root $z_1, z_2$ of $(d)$, there are two roots $x = \pm\sqrt{z}$ solving $(c)$: $2 \times 2 = 4$ roots in all. Notice that by incorporating equation $(2)$ at step $(b)$, we effectively end up finding all solutions that satisfy both equations $(1), (2)$: solutions for which the two given equations coincide. See Lubin's post for some additional "generalizations": ways to approach problems such as this.
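A quick numerical check of the recipe above (variable names are mine): solve $(d)$ with the quadratic formula, recover $x=\pm\sqrt{z}$ and $y=1/x$, and confirm the four solutions satisfy both original equations and split along $x+y=\pm 4$.

```python
import math

# z^2 - 14 z + 1 = 0  (equation (d)), via the quadratic formula
disc = math.sqrt(14**2 - 4)
z_roots = [(14 + disc) / 2, (14 - disc) / 2]  # both positive (sum 14, product 1)

solutions = []
for z in z_roots:
    for x in (math.sqrt(z), -math.sqrt(z)):  # each z gives x = +/- sqrt(z)
        y = 1 / x                            # from xy = 1
        solutions.append((x, y))
```

Each solution satisfies $x^2+y^2=14$, $xy=1$, and $|x+y|=4$, matching the factorization $(x+y+4)(x+y-4)=0$.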
{ "language": "en", "url": "https://math.stackexchange.com/questions/340337", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Generating random numbers with skewed distribution I want to generate random numbers with skewed distribution. But I have only following information about distribution from the paper : skewed distribution where the value is 1 with probability 0.9 and 46 with probability 0.1. the distribution has mean (5.5) I don't know how to generate random numbers with this information and I don't know what is the distribution function. I'm using jdistlib for random number generation. If anyone has experience using this library can help me with this library for skewed distribution random number generation?
What copperhat is hinting at is the following algorithm: Generate u, uniformly distributed in [0, 1] If u < 0.9 then return 1 else return 46 (sorry, would be a mess as a comment). In general, if you have a continuous distribution with cumulative distribution $c(x)$, to generate the respective numbers, get $u$ as above; $c^{-1}(u)$ will have the required distribution. You can do the same with discrete distributions, essentially searching for where $u$ lies in the cumulative distribution. An extensive treatment of generating random numbers (including non-uniform ones) is in Knuth's "Seminumerical Algorithms" (volume 2 of "The Art of Computer Programming"). Warning, heavy math involved.
{ "language": "en", "url": "https://math.stackexchange.com/questions/340414", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Why is a full circle 360° degrees? What's the reason we agreed to setting the number of degrees of a full circle to 360? Does that make any more sense than 100, 1000 or any other number? Is there any logic involved in that particular number?
As it has been replied here - on Wonder Quest (webarchive link): The Sumerians watched the Sun, Moon, and the five visible planets (Mercury, Venus, Mars, Jupiter, and Saturn), primarily for omens. They did not try to understand the motions physically. They did, however, notice the circular track of the Sun's annual path across the sky and knew that it took about 360 days to complete one year's circuit. Consequently, they divided the circular path into 360 degrees to track each day's passage of the Sun's whole journey. This probably happened about 2400 BC. That's how we got a 360 degree circle. Around 1500 BC, Egyptians divided the day into 24 hours, though the hours varied with the seasons originally. Greek astronomers made the hours equal. About 300 to 100 BC, the Babylonians subdivided the hour into base-60 fractions: 60 minutes in an hour and 60 seconds in a minute. The base 60 of their number system lives on in our time and angle divisions. An 100-degree circle makes sense for base 10 people like ourselves. But the base-60 Babylonians came up with 360 degrees and we cling to their ways-4,400 years later. Then, there's also this discussion on Math Forum: In 1936, a tablet was excavated some 200 miles from Babylon. Here one should make the interjection that the Sumerians were first to make one of man's greatest inventions, namely, writing; through written communication, knowledge could be passed from one person to others, and from one generation to the next and future ones. They impressed their cuneiform (wedge-shaped) script on soft clay tablets with a stylus, and the tablets were then hardened in the sun. 
The mentioned tablet, whose translation was partially published only in 1950, is devoted to various geometrical figures, and states that the ratio of the perimeter of a regular hexagon to the circumference of the circumscribed circle equals a number which in modern notation is given by $ \frac{57}{60} + \frac{36}{60^2} $ (the Babylonians used the sexagesimal system, i.e., their base was 60 rather than 10). The Babylonians knew, of course, that the perimeter of a hexagon is exactly equal to six times the radius of the circumscribed circle, in fact that was evidently the reason why they chose to divide the circle into 360 degrees (and we are still burdened with that figure to this day). The tablet, therefore, gives ... $\pi = \frac{25}{8} = 3.125$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/340467", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "34", "answer_count": 1, "answer_id": 0 }
Describe a PDA that accepts all strings over $\{a, b\}$ that have as many $a$’s as $b$’s. I'm having my exam in few days and I would like help with this Describe a PDA that accepts all strings over $\{ a, b \}$ that have as many $a$’s as $b$’s.
Hint: Use the stack as an indication of how many more of one symbol have been so far read from the string than the other. (Also ensure that the stack never contains both $\mathtt{a}$s and $\mathtt{b}$s at the same time.)
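A sketch of the hinted PDA as a direct simulation (my own code): the stack only ever holds copies of whichever letter is currently in surplus; a mismatching input letter pops, a matching one pushes, and we accept exactly on an empty stack.

```python
def pda_accepts(word):
    # Invariant: the stack holds copies of a single letter, the one in
    # surplus so far. Accept iff the stack is empty at the end.
    stack = []
    for ch in word:
        assert ch in "ab"
        if stack and stack[-1] != ch:
            stack.pop()        # ch cancels one surplus letter
        else:
            stack.append(ch)   # ch adds to the surplus
    return not stack

words = ["", "ab", "ba", "aabb", "abba", "aab", "bbba", "ababab"]
results = {w: pda_accepts(w) for w in words}
```

Since the stack height always equals $|\#a - \#b|$, acceptance coincides with the string having as many $a$'s as $b$'s.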
{ "language": "en", "url": "https://math.stackexchange.com/questions/340555", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Are there countably or uncountably many infinite subsets of the positive even integers? Let $S$ be the set of all infinite subsets of $\mathbb N$ such that $S$ consists only of even numbers. Is $S$ countable or uncountable? I know that set $F$ of all finite subsets of $\mathbb N$ is countable but from that I am not able to deduce that $S$ is uncountable since it looks hard to find a bijection between $S$ and $P(\mathbb N)\setminus F$. Also I am not finding the way at the moment to find any bijection between $S$ and $[0,1]$ to show that $S$ is uncountable nor I can find any bijection between $S$ and $\mathbb N$ or $S$ and $\mathbb Q$ to show that it is countable. So I am thinking is there some clever way to show what is the cardinality of $S$ by avoiding bijectivity arguments? So can you help me?
Notice that by dividing by two, you get all infinite subsets of $\mathbb{N}$. Now to make a bijection from $]0,1]$ to this set, write real numbers in base two, and for each real, get the set of positions of $1$ in the binary expansion. You have to write numbers of the form $\frac{n}{2^p}$ with infinitely many $1$ digits (they have two binary expansions, one finite, one infinite). Otherwise, the image of such a real under this map would not fall into the set of infinite sequences of integers (it would have only finitely many $1$).
{ "language": "en", "url": "https://math.stackexchange.com/questions/340631", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 0 }
Numer Matrix and Probability Say you're playing a game with a friend. Let's call it 1 in 8. You're seeing who can predict the next three quarter flips in a row. One player flips the quarter three times and HTT comes up. He now has to stick with that as his sequence to win. The other player gets to pick his sequence of any three, like HHH. Now you only get seven more sequences and the game is over. Does anyone have the advantage, or is it still 50/50?
If each set of three is compared with each player's goal, the game is fair. Each player has $\frac 18$ chance to win each round and there is $\frac 34$ chance the round will be a draw. The chance of seven draws in a row is $(\frac 34)^7\approx 0.1335$, so each player wins with probability about $0.4333$. If I pick TTT and would win if the next flip were T (because of the two T's that have already come) I have the advantage.
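A sketch of the computation plus a seeded simulation (my own code; the last tolerances are loose Monte Carlo bounds):

```python
import random

# Per round of three flips the two targets HTT and HHH are disjoint,
# so each player wins a round with probability 1/8, and 3/4 is a draw.
p_draw = 3 / 4
p_all_draws = p_draw ** 7              # all seven remaining rounds drawn
p_each_wins = (1 - p_all_draws) / 2    # the rest splits evenly by symmetry

rng = random.Random(42)
trials = 100_000
wins = {"A": 0, "B": 0, "draw": 0}
for _ in range(trials):
    outcome = "draw"
    for _round in range(7):
        triple = tuple(rng.choice("HT") for _ in range(3))
        if triple == ("H", "T", "T"):   # player A's fixed sequence
            outcome = "A"
            break
        if triple == ("H", "H", "H"):   # player B's chosen sequence
            outcome = "B"
            break
    wins[outcome] += 1
```

The simulation lands near the computed $0.4333$ for each player, with about $0.1335$ probability of no winner.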
{ "language": "en", "url": "https://math.stackexchange.com/questions/340723", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
approximation of law sines from spherical case to planar case we know that for a plane triangle the cosine rule is $\cos C=\frac{a^2+b^2-c^2}{2ab}$ and on a spherical triangle it is $ \cos C=\frac{\cos c - \cos a \cos b} {\sin a\sin b}$ suppose $a,b,c<\epsilon$, which are sides of a spherical triangle, and $$\left|\frac{a^2 +b^2-c^2}{2ab}- \frac{\cos c - \cos a \cos b} {\sin a\sin b}\right|<K\epsilon^m$$ could anyone tell me what $m$ and $K$ will be?
Note that for $x$ close to $0$, $$1-\frac{x^2}{2!} \le \cos x\le 1-\frac{x^2}{2!}+\frac{x^4}{4!}$$ and $$x-\frac{x^3}{3!} \le \sin x\le x.$$ (We used the Maclaurin series expansion of $\cos x$ and $\sin x$.) Using these facts on the small angles $a$, $b$, and $c$, we can estimate your difference.
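A numerical sketch of where these bounds lead (my own addition, not part of the answer): halving all three sides should quarter the discrepancy if it decays like the square of the size bound on the sides, i.e. if $m = 2$.

```python
import math

def diff(a, b, c):
    planar = (a * a + b * b - c * c) / (2 * a * b)
    spherical = (math.cos(c) - math.cos(a) * math.cos(b)) / (math.sin(a) * math.sin(b))
    return abs(planar - spherical)

# a fixed triangle shape (chosen arbitrarily), scaled down by eps
a0, b0, c0 = 1.0, 1.2, 1.5
d1 = diff(0.01 * a0, 0.01 * b0, 0.01 * c0)     # eps = 0.01
d2 = diff(0.005 * a0, 0.005 * b0, 0.005 * c0)  # eps = 0.005
ratio = d1 / d2   # close to 4 if the difference scales like eps^2
```

The ratio sits very close to $4$, consistent with a bound of the form $K\epsilon^2$.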
{ "language": "en", "url": "https://math.stackexchange.com/questions/340786", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Glue Together smooth functions Let's say that $f(x)$ is a $C^{1}$ function defined on a closed interval $I\subset \mathbb{R^{+}}$ and $g(x)\equiv c$ ($c$=constant) on an open interval $J\subset \mathbb{R^{+}}$ where $\overline{J}∩I\neq \emptyset$. Is there a way to "glue" together those two functions in such a way that they connect smoothly?
If $\overline J \cap I \neq \emptyset$, clearly not. If there is an open interval between $I$ and $J$ then yes, you can interpolate with a polynomial of degree $3$.
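A concrete sketch of the degree-$3$ interpolation (the specific functions and interval are my own example): glue $f(x)=x^2$, which has $f(1)=1$, $f'(1)=2$, to the constant $g\equiv 3$ across the gap $[1,2]$ by a cubic matching values and first derivatives at both ends.

```python
from fractions import Fraction

def solve(A, b):
    # Gauss-Jordan elimination with exact rational arithmetic
    n = len(A)
    M = [[Fraction(x) for x in row] + [Fraction(y)] for row, y in zip(A, b)]
    for i in range(n):
        piv = next(r for r in range(i, n) if M[r][i] != 0)
        M[i], M[piv] = M[piv], M[i]
        for r in range(n):
            if r != i:
                t = M[r][i] / M[i][i]
                M[r] = [x - t * y for x, y in zip(M[r], M[i])]
    return [M[i][n] / M[i][i] for i in range(n)]

# cubic p(x) = A x^3 + B x^2 + C x + D with
# p(1) = 1, p'(1) = 2  (matching f = x^2)  and  p(2) = 3, p'(2) = 0  (matching g = 3)
rows = [
    [1, 1, 1, 1],    # p(1)
    [3, 2, 1, 0],    # p'(1)
    [8, 4, 2, 1],    # p(2)
    [12, 4, 1, 0],   # p'(2)
]
A3, B2, C1, D0 = solve(rows, [1, 2, 3, 0])

def p(x):  return A3 * x**3 + B2 * x**2 + C1 * x + D0
def dp(x): return 3 * A3 * x**2 + 2 * B2 * x + C1
```

The four Hermite conditions pin down the cubic uniquely, so the glued function is $C^1$ across both junctions.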
{ "language": "en", "url": "https://math.stackexchange.com/questions/340838", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Why cannot the permutation $f^{-1}(1,2,3,5)f$ be even Please help me prove that if $f\in S_6$ is an arbitrary permutation, then the permutation $f^{-1}(1,2,3,5)f$ cannot be an even permutation. I am sure there is a small thing I am missing. Thank you.
I think, you can do the problem, if you know that: $f$ is even so is $f^{-1}$ and $f$ is odd so is $f^{-1}$.
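The point behind the hint: conjugation preserves cycle type, $(1,2,3,5)$ is a $4$-cycle, and a $4$-cycle is odd (three transpositions). A brute-force check over all $720$ elements of $S_6$, written $0$-indexed (my own sketch):

```python
from itertools import permutations

def compose(f, g):
    # (f o g)(i) = f(g(i)); permutations stored as tuples of images
    return tuple(f[g[i]] for i in range(len(g)))

def inverse(f):
    inv = [0] * len(f)
    for i, fi in enumerate(f):
        inv[fi] = i
    return tuple(inv)

def is_even(f):
    # parity = (n - number of cycles) mod 2
    n, seen, cycles = len(f), [False] * len(f), 0
    for i in range(n):
        if not seen[i]:
            cycles += 1
            j = i
            while not seen[j]:
                seen[j] = True
                j = f[j]
    return (n - cycles) % 2 == 0

# the 4-cycle (1 2 3 5), written 0-indexed as the cycle (0 1 2 4)
p = (1, 2, 4, 3, 0, 5)
all_odd = all(
    not is_even(compose(inverse(f), compose(p, f)))
    for f in permutations(range(6))
)
```

Every conjugate $f^{-1}pf$ comes out odd, exactly as the cycle-type argument predicts.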
{ "language": "en", "url": "https://math.stackexchange.com/questions/340902", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Change of basis matrix to convert standard basis to another basis Consider the basis $B=\left\{\begin{pmatrix} -1 \\ 1 \\0 \end{pmatrix}\begin{pmatrix} -1 \\ 0 \\1 \end{pmatrix}\begin{pmatrix} 1 \\ 1 \\1 \end{pmatrix} \right\}$ for $\mathbb{R}^3$. A) Find the change of basis matrix for converting from the standard basis to the basis B. I have never done anything like this and the only examples I can find online basically tell me how to do the change of basis for "change-of-coordinates matrix from B to C". B) Write the vector $\begin{pmatrix} 1 \\ 0 \\0 \end{pmatrix}$ in B-coordinates. Obviously I can't do this if I can't complete part A. Can someone either give me a hint, or preferably guide me towards an example of this type of problem? The absolute only thing I can think to do is take an augmented matrix $[B E]$ (note - E in this case is the standard basis, because I don't know the correct notation) and row reduce until B is now the standard matrix. This is basically finding the inverse, so I doubt this is correct.
By definition, the change of basis matrix contains the coordinates of the new basis with respect to the old basis as its columns. So by definition $B$ is the change of basis matrix. Key to the solution is the equation $v = Bv'$, where $v$ has coordinates in the old basis and $v'$ has coordinates in the new basis (the new basis is given by the columns of $B$). Suppose we know that in the old basis $v$ has coordinates $(1,0,0)$ (as a column), which is by the way just an old basis vector, and we want to know $v'$ (the old basis vector's coordinates in terms of the new basis). Then from the above equation we get $$B^{-1}v = B^{-1}Bv' \Rightarrow B^{-1}v = v'$$ As a side-note, sometimes we want to ask how the change of basis matrix $B$ acts if we look at it as a linear transformation, that is, given a vector $v=(v_1,...,v_n)$ in the old basis, what is the vector $Bv$? In general it is a vector whose $i$-th coordinate is $b_{i1}v_1+\dots+b_{in}v_n$ (the dot product of the $i$-th row of $B$ with $v$). But in particular, if we consider $v$ to be an old basis vector having coordinates $(0,\dots,1,\dots,0)$ (coordinates with respect to the old basis) where the $1$ is in the $j$-th position, then we get $Bv = (b_{1j},...,b_{nj})$, which is the $j$-th column of $B$, which is the $j$-th basis vector of the new basis. Thus we may say that $B$, viewed as a linear transformation, takes the old basis to the new basis.
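Part B worked out numerically (my own sketch): solving $Bv'=v$ for $v=(1,0,0)^T$ with exact rational arithmetic gives $v'=(-1/3,\,-1/3,\,1/3)^T$.

```python
from fractions import Fraction

# columns of B are the given basis vectors
B = [[-1, -1, 1],
     [ 1,  0, 1],
     [ 0,  1, 1]]

def solve3(A, b):
    # tiny Gauss-Jordan solver with exact rationals, enough for this 3x3 case
    M = [[Fraction(x) for x in row] + [Fraction(y)] for row, y in zip(A, b)]
    for i in range(3):
        piv = next(r for r in range(i, 3) if M[r][i] != 0)
        M[i], M[piv] = M[piv], M[i]
        for r in range(3):
            if r != i:
                t = M[r][i] / M[i][i]
                M[r] = [x - t * y for x, y in zip(M[r], M[i])]
    return [M[i][3] / M[i][i] for i in range(3)]

# B-coordinates v' of the standard vector v = (1, 0, 0): solve B v' = v
v_prime = solve3(B, [1, 0, 0])
```

Multiplying back, $Bv' = (1,0,0)^T$, so these really are the $B$-coordinates.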
{ "language": "en", "url": "https://math.stackexchange.com/questions/340978", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "35", "answer_count": 3, "answer_id": 1 }
Showing the sum over primes is equal to an integral First, note that $$\vartheta(x) = \sum_{p \leq x} \log p$$I am trying to show $$\vartheta(x) = \pi(x)\log(x)-\int_2^x\frac{\pi(u)}{u}du$$ I am trying to show this by summation of parts. The theorem of partial summation is Let $f$ be a continuous and differentiable function. Suppose $\{a_n\}_{n\geq 1} \subseteq \mathbb{R}$. Set $A(t) =\sum_{n\leq t}a_n$. Then $$\sum_{n\leq x} a_n f(n) = A(x)f(x) - \int_1^x A(t)f'(t) dt$$ My proof is as follows (it seems simple enough). Let $f(n) = \log(p)$ if n is a prime. Clearly $f(n)$ is continuous and differentiable. Set $$a_n = \begin{cases} 1 & \text{if } n = prime \\ 0 & \text{otherwise}\end{cases}$$ Then by summation of parts we have $$\sum_{n\leq x}\log(p)\cdot a_n = A(x)\log(x) - \int_1^x\frac{A(u)}{u}du$$ where $A(t) = \sum_{p \leq t} 1 = \pi(t)$ Is this sufficient enough?
It looks reasonably good, but one thing you need to be more clear about is the definition of $f(n)$. Currently "$f(n) = \log(p)$ when $n$ is a prime" is not an adequate definition, since it fails to define, for instance $f(2.5)$. It sounds like you might be defining $f(n) = 0$ when $n$ is not a prime; this would not be very usable since it results in a discontinuous function. On the other hand, $f(x) = \log x$ (already hinted at when you implicitly determined $f'(u) = 1/u$) works just fine for the argument. Another minor nitpick: you need to justify the switch between $\int_1^x$ in one equation and $\int_2^x$ in the other. This isn't hard because if you think about the definition, $A(t)$ happens to be $0$ for all $t < 2$.
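The identity itself can be sanity-checked numerically (my own sketch): since $\pi(u)$ is a step function, the integral reduces to a finite sum over the intervals between consecutive primes.

```python
import math

def primes_up_to(n):
    sieve = [True] * (n + 1)
    sieve[0] = sieve[1] = False
    for i in range(2, int(n**0.5) + 1):
        if sieve[i]:
            for j in range(i * i, n + 1, i):
                sieve[j] = False
    return [i for i, is_p in enumerate(sieve) if is_p]

x = 100.0
ps = primes_up_to(int(x))
theta = sum(math.log(p) for p in ps)   # Chebyshev's theta(x)
pi_x = len(ps)

# pi(u) = i is constant on [p_i, p_{i+1}), so the integral is a finite sum
integral = 0.0
for i in range(1, len(ps)):
    integral += i * (math.log(ps[i]) - math.log(ps[i - 1]))
integral += pi_x * (math.log(x) - math.log(ps[-1]))

rhs = pi_x * math.log(x) - integral
```

At $x=100$ both sides agree to floating-point accuracy, exactly as the telescoping in the partial-summation argument predicts.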
{ "language": "en", "url": "https://math.stackexchange.com/questions/341059", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Finite automaton that recognizes the empty language $\emptyset$ Since the language $L = \emptyset$ is regular, there must be a finite automaton that recognizes it. However, I'm not exactly sure how one would be constructed. I feel like the answer is trivial. Can someone help me out?
You have only one state $s$ that is initial, but not accepting with loops $s \overset{\alpha}{\rightarrow} s$ for any letter $\alpha \in \Sigma$ (with non-deterministic automaton you can even skip the loops, i.e. the transition relation would be empty). I hope this helps ;-)
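The one-state DFA with an empty accepting set, written out as a quick simulation (my own sketch):

```python
# One state, initial, NOT accepting; every input letter loops back to it.
start = "s"
accepting = set()                     # empty: no string is ever accepted
delta = {("s", a): "s" for a in "ab"} # self-loops on every letter

def accepts(word):
    q = start
    for ch in word:
        q = delta[(q, ch)]
    return q in accepting

rejected_all = all(not accepts(w) for w in ["", "a", "b", "ab", "ba", "aabba"])
```

Since the accepting set is empty, the automaton recognizes exactly $\emptyset$.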
{ "language": "en", "url": "https://math.stackexchange.com/questions/341124", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "15", "answer_count": 4, "answer_id": 0 }
Subgroups of the group of all roots of unity. Let $G=\mathbb{C}^*$ and let $\mu$ be the subgroup of roots of unity in $\mathbb{C}^*$. Show that any finitely generated subgroup of $\mu$ is cyclic. Show that $\mu$ is not finitely generated and find a non-trivial subgroup of $\mu$ which is not finitely generated. I can see that a subgroup of $\mu$ is cyclic due to the nature of complex numbers and De-Moivre's theorem. The second part of this question confuses me though since, by the same logic, should follow a similar procedure. Perhaps it has something to do with the "finite" nature of the subgroup in comparison to $\mu$. Any assistance would be helpful ! Thank you in advanced.
The first part should probably include "Show that any f.g. subgroup of $\,\mu\,$ is finite cyclic...". From here it follows at once that $\,\mu\,$ cannot be f.g. as it isn't finite. For a non-trivial non f.g. subgroup, think of the roots of unity of order $\,p^n\,\,,\,\,p\,$ a prime, with running exponent $\,n\in\Bbb N\,$ ...
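One way to see the finite-cyclic part concretely (my own sketch): identify $e^{2\pi i k/n}$ with $k/n \in \Bbb Q/\Bbb Z$. The subgroup generated by roots of unity of orders $4$ and $6$ then closes up into exactly the $\operatorname{lcm}(4,6)=12$-th roots of unity, a single finite cyclic group.

```python
from fractions import Fraction
from math import lcm

orders = [4, 6]
gens = [Fraction(1, n) for n in orders]   # e^{2 pi i / n}  <->  1/n in Q/Z

# close {0} under addition of the generators, modulo 1
group = {Fraction(0)}
changed = True
while changed:
    changed = False
    for g in list(group):
        for h in gens:
            s = (g + h) % 1
            if s not in group:
                group.add(s)
                changed = True

m = lcm(*orders)
```

The closure has exactly $12$ elements, namely $\{k/12 : 0 \le k < 12\}$, so it is generated by the single element $1/12$.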
{ "language": "en", "url": "https://math.stackexchange.com/questions/341198", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 2, "answer_id": 1 }
Continuous exponential growth and misleading rate terminology I'm learning about continuous growth and looking at examples of Continuously Compounded Interest in finance and Uninhibited Growth in biology. While I've gotten a handle on the math, I'm finding some of the terminology counterintuitive. The best way to explain would be through an example. A culture of cells is grown in a laboratory. The initial population is 12,000 cells. The number of cells, $N$, in thousands, after $t$ days is, $N(t)=12e^{0.86t}$, which we can interpret as an $86\%$ daily growth rate for the cells. I understand the mechanism by which $0.86$ affects the growth rate, but it seems a misnomer to say there's an "$86\%$ daily growth rate" for the cells, as that makes it sound like the population will grow by $86\%$ in a day, when it actually grows by about $136\%$ since the growth is occurring continuously. Is it just that we have to sacrifice accuracy for succinctness?
The instantaneous growth rate is $0.86$ per day in that $N(t)$ is the solution to $\frac {dN}{dt}=0.86N$. You are correct that the compounding makes the increase in one day $1.36$
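In code (my own check): the one-day multiplication factor of $N(t)=12e^{0.86t}$ is $e^{0.86}\approx 2.363$, i.e. roughly $136\%$ growth over a day, while the instantaneous rate stays $0.86$ per day.

```python
import math

r = 0.86
N = lambda t: 12 * math.exp(r * t)   # thousands of cells after t days
one_day_factor = N(1) / N(0)         # equals e^r
one_day_growth = one_day_factor - 1  # ~1.363, i.e. about 136% per day
```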
{ "language": "en", "url": "https://math.stackexchange.com/questions/341216", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Möbius Transformation help Hey guys I need help on these 2 questions that I am having trouble on. 1) Show that the Möbius transformation $z \rightarrow \frac{2}{1-z}$ sends the unit circle and the line $x = 1$ to the lines $x = 1$ and $x = 0$, respectively. 2) Now deduce from this that the non-Euclidean distance between the unit circle and the line $x = 1$ tends to zero as these non-Euclidean lines approach the x-axis. I know that the non-euclidean distance between the lines $x=0$ and $x=1$ goes to zero as $y$ approachs $\infty$. But I dont know how to deduce that the unit circle would go to $x=1$ and $x=1$ to $x=0$ and how would I deduce from this. Its from my old past tests and I am trying to practice but I got stuck on this question. Please help out thank you
These are exercises 8.6.5, 8.6.6 from "The Four Pillars of Geometry." I'll use the terminology of the book. #1: Call the transform $\varphi(z) = \dfrac{2}{1-z}$. We know that Möbius transformations send non-Euclidean lines (circles and lines) to non-Euclidean lines. If we calculate the image of three points from the unit circle, we can uniquely determine the image of the unit circle as a whole. Pick three points from the unit circle, say $\{-1, 1, i\}$. We have: $$ \varphi(-1) = 1,\quad \varphi(1) = \infty,\quad \varphi(i) = \frac{2}{1-i} = 1 + i $$ All points lie on the line $x = 1$. Hence, the image of the unit circle is the line $x = 1$. Similarly for the line $x = 1$, pick the points $\{1, \infty, i + 1\}$: $$ \varphi(1) = \infty,\quad \varphi(\infty) = 0,\quad \varphi(1+i) = 2i $$ All points lie on the line $x = 0$. Hence, the image of the line $x = 1$ is the line $x = 0$. #2: We will show that the non-Euclidean distance between the unit circle and line $x = 1$ approaches zero as both non-Euclidean lines approach the point $1$ on the real axis. Consider a point $p_1$ on the unit circle approaching the point $1$, and another point $p_2$ on the line $x = 1$ approaching the point $1$. Their images under $\varphi$ both approach $\infty$ on the lines $x = 1$ and $x = 0$ respectively by #1. Since we know that the non-Euclidean distance is invariant under all Möbius transformations, and we also know that the distance between the lines $x = 0$ and $x = 1$ approaches $0$ as $y$ approaches $\infty$, it follows that the non-Euclidean distance between $p_1$ and $p_2$ approaches $0$ as they approach the $x$ axis.
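These point computations are easy to confirm with complex arithmetic (my own sketch):

```python
import cmath

def phi(z):
    return 2 / (1 - z)

# chosen points on the unit circle (phi(1) = infinity, omitted)
img_circle = [phi(-1), phi(1j)]
# points on the line x = 1 (phi(1) = infinity and phi(infinity) = 0, omitted)
img_line = [phi(1 + 1j), phi(1 + 2j)]
# a few more generic points e^{i t} on the unit circle
more = [phi(cmath.exp(1j * k)) for k in range(1, 8)]
```

All unit-circle images have real part $1$ and all images of the line $x=1$ have real part $0$, matching #1.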
{ "language": "en", "url": "https://math.stackexchange.com/questions/341274", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
$\int_{0}^{\pi/2} (\sin x)^{1+\sqrt2} dx$ and $\int_{0}^{\pi/2} (\sin x)^{\sqrt2\space-1} dx $ How do I evaluate $$\int_{0}^{\pi/2} (\sin x)^{1+\sqrt2} dx\quad \text{ and }\quad \int_{0}^{\pi/2} (\sin x)^{\sqrt2\space-1} dx \quad ?$$
$$\beta(x,y) = 2 \int_0^{\pi/2} \sin^{2x-1}(a) \cos^{2y-1}(a) da \implies \int_0^{\pi/2} \sin^{m}(a) da = \dfrac{\beta((m+1)/2,1/2)}2$$ Hence, $$\int_0^{\pi/2} \sin^{1+\sqrt2}(a) da = \dfrac{\beta(1+1/\sqrt2,1/2)}2$$ $$\int_0^{\pi/2} \sin^{\sqrt2-1}(a) da = \dfrac{\beta(1/\sqrt2,1/2)}2$$
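Numerically confirming both evaluations (my own sketch): Simpson's rule for the integrals, plus the identity $\beta(x,y)=\Gamma(x)\Gamma(y)/\Gamma(x+y)$ for the closed forms.

```python
import math

def beta(x, y):
    return math.gamma(x) * math.gamma(y) / math.gamma(x + y)

def simpson_sin_power(a, N=2000):
    # composite Simpson's rule for the integral of sin(t)^a over [0, pi/2]
    h = (math.pi / 2) / N
    f = lambda t: math.sin(t) ** a
    return (f(0.0) + f(math.pi / 2)
            + 4 * sum(f(k * h) for k in range(1, N, 2))
            + 2 * sum(f(k * h) for k in range(2, N, 2))) * h / 3

a1 = 1 + math.sqrt(2)
a2 = math.sqrt(2) - 1
closed1 = beta((a1 + 1) / 2, 0.5) / 2   # = beta(1 + 1/sqrt(2), 1/2) / 2
closed2 = beta((a2 + 1) / 2, 0.5) / 2   # = beta(1/sqrt(2), 1/2) / 2
num1 = simpson_sin_power(a1)
num2 = simpson_sin_power(a2)
```

The second integrand behaves like $t^{\sqrt2-1}$ near $0$, so its Simpson estimate converges a bit more slowly; the looser tolerance reflects that.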
{ "language": "en", "url": "https://math.stackexchange.com/questions/341402", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Nonzero derivative implies function is strictly increasing or decreasing on some interval Let $f$ be a differentiable function on open interval $(a,b)$. Suppose $f'(x)$ is not identically zero. Show that there exists an subinterval $(c,d)$ such that $f(x)$ is strictly increasing or strictly decreasing on $(c,d)$. How to prove this? I think this statement is wrong...
The statement is indeed wrong. You can construct for example a function $f:\mathbb{R} \rightarrow\mathbb{R}$ which is differentiable everywhere such that both $\{x \in \mathbb{R} : f'(x) > 0\}$ and $\{x \in \mathbb{R} : f'(x) < 0\}$ are dense in $\mathbb R$ and thus $f$ is monotone on no interval. You can find such a construction on page 80 of A Second Course on Real Functions by van Rooij and Schikhof. See also here.
{ "language": "en", "url": "https://math.stackexchange.com/questions/341485", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 3, "answer_id": 1 }
Decomposable elements of $\Lambda^k(V)$ I have a conjecture. I have a problem proving or disproving it. Let $w \in \Lambda^k(V)$ be a $k$-vector. Then $W_w=\{v\in V: v\wedge w = 0 \}$ is a $k$-dimensional vector space if and only if $w$ is decomposable. For example, for $u=e_1\wedge e_2 + e_3 \wedge e_4$ we have $W_u = 0$.
Roughly another 2 years later let me try giving a different proof. Suppose first $w$ is a decomposable $k$-vector. Then there exist $k$ linearly independent vectors $\{e_i\}$, $i=1,\ldots k$ such that $w=e_1\wedge\cdots\wedge e_k$. Complete $\{e_i\}$ to a basis of $V$, it is then clear that $e_i \wedge w=0$ for $i=1,\ldots k$, $e_i\wedge w\neq 0 $ for $i=k+1, \ldots n$, hence $\mathrm{dim}(W_w)= k$. Suppose now $\mathrm{dim}(W_w)= k$, and let $\{e_i\}$, $i=1,\ldots k$ be a basis of $W_w$. By using the following fact: Let $w$ be a $k$-vector, $u$ a non-vanishing vector. Then $w \wedge u =0 $ if and only if $w= z\wedge u $ for some $(k-1)$-vector $z$. one establishes that $w$ is of the form $w=c e_1 \wedge\cdots\wedge e_k$ for some constant $c$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/341540", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "28", "answer_count": 3, "answer_id": 0 }
Probability - how come $76$ has better chances than $77$ on a "Luck Machine" round? A luck machine shows $3$ digits ($0$-$9$) on a screen. Define $X_1$ to be the event that the result contains $77$, and $X_2$ to be the event that the result contains $76$. $p(X_1)=2\cdot 1/10\cdot 1/10\cdot 9/10+1/10\cdot 1/10\cdot 1/10$ $p(X_2)=2\cdot 1/10\cdot 1/10$ If I wasn't mistaken in my calculation, we get $p(X_2)>p(X_1)$. Can anyone explain this shocking outcome?
If I understand your question correctly, you have a machine that generates 3 digits (10³ possible outcomes) and you want to know the probability that you have event $X_1$: either "7 7 x" or "x 7 7", and compare this with event $X_2$: either "7 6 x" or "x 7 6". Notice that the 2 sub-events "7 7 x" and "x 7 7" are not disjoint (they have "7 7 7" as common outcome) while the 2 sub-events "7 6 x" and "x 7 6" are disjoint. $P(X_1) = P(\text{"7 7 x"} \cup \text{"x 7 7"}) = P(\text{"7 7 x"}) + P(\text{"x 7 7"}) - P(\text{"7 7 7"})$ $P(X_2) = P(\text{"7 6 x"} \cup \text{"x 7 6"}) = P(\text{"7 6 x"}) + P(\text{"x 7 6"})$ This explains why $P(X_1) < P(X_2)$.
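Brute-force enumeration of all $1000$ outcomes confirms the counts (my own check): $19$ results contain "77" (the overlap "777" is counted only once) versus $20$ containing "76".

```python
from itertools import product

outcomes = ["".join(d) for d in product("0123456789", repeat=3)]  # all 1000 results
n77 = sum("77" in s for s in outcomes)   # 10 + 10 - 1 (for "777")
n76 = sum("76" in s for s in outcomes)   # 10 + 10, disjoint positions
```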
{ "language": "en", "url": "https://math.stackexchange.com/questions/341618", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Let $ P $ be a non-constant polynomial in z. Show that $ P(z) \rightarrow \infty $ as $ z \rightarrow \infty $ This is a homework problem I have and I am having some trouble on it. I had thought I solved it, but I found out an algebraic mistake made my proof incorrect. Here is what I have so far. Let $ P(z) = \sum\limits_{k=0}^n a_k z^k $ $\def\abs#1{\left|#1\right|}$ Then, $ \abs{a_nz^n} = \abs{\sum\limits_{k=0}^n a_k z^k - \sum\limits_{k=0}^{n-1} a_kz^k} \leq \abs{\sum\limits_{k=0}^n a_k z^k} + \abs{\sum\limits_{k=0}^{n-1} a_kz^k} $ by the triangle inequality. Let $ M \gt 0$; then $ \exists R \in \mathbb{R} \text{ s.t. } \abs{z} \gt R \implies M \lt \abs{a_n z^n} $, since it is known that $ \abs{a_n z^n} $ converges to infinity. So by choosing such an $R$ as above and a sufficiently large $z$, you get: $ M \lt \abs{a_n z^n} \leq \abs{\sum\limits_{k=0}^n a_k z^k} + \abs{\sum\limits_{k=0}^{n-1} a_kz^k} \implies M - \abs{\sum\limits_{k=0}^{n-1} a_kz^k} \lt \abs{\sum\limits_{k=0}^n a_k z^k} $ However, this is not sufficient to prove the hypothesis and I am at a loss as to what to do. Help would be great.
Hints: $$(1)\;\;\;\;\;|P(z)|=\left|\;\sum_{k=0}^na_kz^k\;\right|=|z|^n\left|\;\sum_{k=0}^na_kz^{k-n}\;\right|$$ $$(2)\;\;\;\;\forall\,w\in\Bbb C\;\;\wedge\;\forall\;n\in\Bbb N\;,\;\;\left|\frac{w}{z^n}\right|\xrightarrow[z\to\infty]{}0$$
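A numerical illustration of hint $(1)$ for a sample polynomial of my own choosing: the factor $\left|\sum_{k} a_k z^{k-n}\right|$ approaches $|a_n|$, so $\min_{|z|=R}|P(z)|$ is forced up like $R^n$.

```python
import cmath, math

def P(z):
    return z**3 - 5 * z + 1   # an arbitrary sample polynomial

def min_on_circle(R, samples=720):
    # minimum of |P| over sampled points of the circle |z| = R
    return min(abs(P(R * cmath.exp(2j * math.pi * k / samples)))
               for k in range(samples))

m10, m100 = min_on_circle(10), min_on_circle(100)
```

On $|z|=10$ the triangle inequality already forces $|P(z)|\ge 10^3-50-1=949$, and on $|z|=100$ it forces $|P(z)|\ge 999499$; the sampled minima respect both bounds.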
{ "language": "en", "url": "https://math.stackexchange.com/questions/341669", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Prove that for any real numbers $a,b$ we have $\lvert \arctan a−\arctan b\rvert\leq \lvert a−b\rvert$. Prove that for any real numbers $a,b$ we have $\lvert \arctan a−\arctan b\rvert\leq \lvert a−b\rvert$. This should have an application of the mean value theorem.
Let $f(x)=\arctan x$. Then by the mean value theorem there exists $c$ between $a$ and $b$ such that $$|f(a)-f(b)|=|f^{\prime}(c)|\, |a-b|$$ $$f^{\prime}(x)=\frac{1}{1+x^2}\leq1$$ So $|f^{\prime}(c)|\leq1$ and $|\arctan a-\arctan b|\leq |a-b|$.
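A grid check of the inequality (my own sketch; the tiny slack only absorbs floating-point rounding):

```python
import math

# sample many pairs (a, b) in [-5, 5] and verify the Lipschitz bound
pairs = [(a / 10, b / 10) for a in range(-50, 51, 3) for b in range(-50, 51, 7)]
ok = all(abs(math.atan(a) - math.atan(b)) <= abs(a - b) + 1e-15 for a, b in pairs)
```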
{ "language": "en", "url": "https://math.stackexchange.com/questions/341741", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 2 }
differentiate f(x) using L'hopital and other problem * *Evaluate: $\ \ \ \ \lim_{x\to1}(2-x)^{\tan(\frac{\pi}{2}x)}$ *Show that the inequality holds: $\ \ \ x^\alpha\leq \alpha x + (1-\alpha)\ \ \ (x\geq0, \,0<\alpha <1)$ Please help me with these. Either a hint or a full proof will do. Thanks.
For the first problem: another useful method is to log the function to get $\frac{\log(2-x)}{\frac{1}{\tan(\frac{\pi x}{2})}}$; now you can apply L'Hospital's rule, take the limit and exponentiate it. I keep getting $e^{\frac{2}{\pi}}$, but you need to check the algebra more thoroughly
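Taking up the "check the algebra" caveat with a numerical evaluation near $x=1$ (my own check): the values appear to approach $e^{2/\pi}\approx 1.8900$. The same script spot-checks the inequality in problem 2 on a grid.

```python
import math

def f(x):
    return (2 - x) ** math.tan(math.pi * x / 2)

candidate = math.exp(2 / math.pi)        # limit suggested by the numerics
vals = [f(1 + 1e-4), f(1 - 1e-4)]        # approach x = 1 from both sides

# problem 2: x^alpha <= alpha x + (1 - alpha) for x >= 0, 0 < alpha < 1
ineq_ok = all(
    (x / 10) ** alpha <= alpha * (x / 10) + (1 - alpha) + 1e-12
    for x in range(0, 101)
    for alpha in (0.1, 0.3, 0.5, 0.7, 0.9)
)
```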
{ "language": "en", "url": "https://math.stackexchange.com/questions/341795", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 2 }
Combinatorics riddle: keys and a safe There are 8 crew members. The leading member wants only a crew of 5 people or more to be able to open a safe he bought. He gave each member an equal number of keys and locked the safe with several locks. What's the minimal number of locks he should use so that: * *at least 5 crew members are required to open the safe? *any team of 5 crew members can open the safe, but no team of 4 can. Conditions 1 and 2 stand individually, of course.
This can easily be generalized, replacing 8 and 4 with a variable each. Let the set of keys (and locks) be denoted by $K$. Let the set of keys that crew member $i$ doesn't receive be $K_i$. For distinct indices, Condition 1 states that $K_{ijkl} = K_i \cap K_j \cap K_k \cap K_l \neq \emptyset$ (no 4 crew can open the safe), and condition 2 states that $K_i \cap K_j \cap K_k \cap K_l \cap K_m = \emptyset$ (any 5 crew can open the safe). Claim: $ K_{ijkl} \cap K_{i'j'k'l'} = \emptyset$ for distinct quadruple sets. Proof: Suppose not. Then $\{i, j, k, l, i', j', k', l'\}$ is a set with at least 5 distinct elements, contradicting condition 2. $_\square$ Hence, there must be at least $8 \choose 4$ locks. Claim. $8 \choose 4$ is sufficient. Proof: Label each of the locks with a distinct quadruple from 1 to 8. To each crew member, give him the key to the lock, if it doesn't have his number on it. Then, any 4 crew members $i, j, k, l$ will not be able to open lock $\{i, j, k, l\}$. Any 5 crew members will be able to open all locks.
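The construction in the second claim is easy to verify exhaustively (my own sketch): one lock per $4$-subset of the crew, each member holding the keys to every lock not labelled with their number.

```python
from itertools import combinations

crew = range(8)
locks = list(combinations(crew, 4))        # one lock per 4-subset: C(8,4) = 70

def can_open(team):
    # a team opens lock L iff some member holds its key, i.e. lies outside L
    return all(any(m not in lock for m in team) for lock in locks)

keys_per_member = [sum(m not in lock for lock in locks) for m in crew]
four_all_fail = all(not can_open(t) for t in combinations(crew, 4))
five_all_open = all(can_open(t) for t in combinations(crew, 5))
```

Every member receives $\binom{7}{4}=35$ keys (equal amounts, as required), every $4$-person team is blocked by its own lock, and every $5$-person team opens everything.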
{ "language": "en", "url": "https://math.stackexchange.com/questions/342001", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Find the value of $\lim_{n \rightarrow \infty} \sqrt{1+\left(\frac1{2n}\right)^n}$ Find the limit of the sequence as $n$ approaches $\infty$ $$\sqrt{1+\left(\frac1{2n}\right)^n}$$ I made a table of the values of the sequence and the values approach 1, so why is the limit $e^{1/4}$? I know that if the answer is $e^{1/4}$ I must have to take the $\ln$ of the sequence but how and where do I do that with the square root? I did some work getting the sequence into an indeterminate form and trying to use L'Hospitals but I'm not sure if it's right and then where to go from there. Here is the work I've done $$\sqrt{1+\left(\frac1{2n}\right)^n} = \frac1{2n} \ln \left(1+\frac1{2n}\right) = \lim_{x\to\infty} \frac1{2n} \ln \left(1+\frac1{2n}\right) \\ = \frac 1{1+\frac1{2n}}\cdot-\frac1{2n^2}\div-\frac1{2n^2}$$ Thank you
We have $\lim_{n\to\infty} (1+\frac{x}{n})^n = e^x$. Hence $\lim_{n \to \infty} (1+\frac{x}{2n})^{2n} = e^x$, then taking square roots (noting that the square root is continuous on $[0,\infty)$), we have $\lim_{n \to \infty} \sqrt{(1+\frac{x}{2n})^{2n}} = \lim_n (1+\frac{x}{2n})^{n} = \sqrt{e^x} = e^\frac{x}{2}$, and finally $\lim_{n\to\infty} \sqrt{(1+\frac{x}{2n})^{n}} = e^\frac{x}{4}$. Setting $x=1$ gives the desired result.
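A quick numerical check of the conclusion for the sequence $\sqrt{\left(1+\frac1{2n}\right)^n}$ at $x=1$ (a sketch):

```python
import math

n = 10**6
seq = math.sqrt((1 + 1 / (2 * n)) ** n)   # sqrt((1 + 1/(2n))^n) for large n
print(seq, math.exp(0.25))
```

For $n = 10^6$ the two printed numbers agree to several decimal places.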
{ "language": "en", "url": "https://math.stackexchange.com/questions/342066", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 3, "answer_id": 0 }
Prove that if $\gcd(a, n) = d$ then $\langle[a]\rangle = \langle[d]\rangle$ in $\mathbb Z_n$? I am not sure how to start this problem and hope someone can help me out.
Well, $d$ divides $a$, so in any $\Bbb Z_k$ it should be clear that $[a]\in\langle[d]\rangle$, whence $\langle[a]\rangle\subseteq\langle[d]\rangle$. The reverse inclusion doesn't generally hold, but since $d=\gcd(a,n)$, then there exist $x,y\in\Bbb Z$ such that $d=ax+ny$, so in $\Bbb Z_n$ we have $[d]=[a][x]$, which lets us demonstrate the reverse inclusion in a similar fashion.
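A brute-force check of the equality of subgroups (a sketch, not part of the proof):

```python
from math import gcd

def generated(a, n):
    # the cyclic subgroup of Z_n generated by [a]
    return {(a * k) % n for k in range(n)}

for n in range(2, 30):
    for a in range(1, n):
        assert generated(a, n) == generated(gcd(a, n), n)
print("checked all a, n with n < 30")
```

For example, in $\mathbb Z_{12}$ both $[8]$ and $[4]=[\gcd(8,12)]$ generate $\{0,4,8\}$.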
{ "language": "en", "url": "https://math.stackexchange.com/questions/342132", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Proving $n+3 \mid 3n^3-11n+48$ I'm really stuck while I'm trying to prove this statement: $\forall n \in \mathbb{N},\quad (n+3) \mid (3n^3-11n+48)$. I couldn't even how to start.
The identity $$(3n^2 - 9n + 16)(n+3) = 3n^3 - 11n + 48$$ proves the statement exactly as written: $n+3$ divides $3n^3-11n+48$ for every $n$. (Had the divisibility been reversed, i.e. $(3n^3-11n+48) \mid (n+3)$, then $n=0$ would give a counterexample, since $48$ doesn't divide $3$.)
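A quick computational check of both the identity and the divisibility; a sketch:

```python
def p(n):
    return 3 * n**3 - 11 * n + 48

# (n + 3) divides p(n) for every natural n, via p(n) = (3n^2 - 9n + 16)(n + 3)
for n in range(0, 200):
    assert (3 * n**2 - 9 * n + 16) * (n + 3) == p(n)
    assert p(n) % (n + 3) == 0
print("divisibility holds for n < 200")
```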
{ "language": "en", "url": "https://math.stackexchange.com/questions/342193", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 5, "answer_id": 3 }
Notatebility of Uncountable Sets I have noticed a pattern. The set of integers is infinite. Therefore one at first would think it impossible to come up with a notation allowing the representation of all integers. This though becomes easy actually to get around. Simply allow larger integers to take up larger notation. Now look at the rational numbers. They are dense, so there are an infinite number of rational numbers between 0 and 1. Seemingly impossible, but not. By dividing two integers, you can come up with any rational numbers. Algebraic, and even all the computable numbers can be notated. Yet when you get to the real numbers, the great uncountable set, there is no finite notation to represent all the real numbers. Why is it that countable sets can be notated, but it seems that uncountable sets can not? Has this been studied before?
Other people have answered why uncountable sets cannot be notated. I'll just add that some countable sets cannot be "usefully" notated. So it depends on your definition of notation. For example, the set of all Turing machines is countable. The set of Turing machines that do not halt is a subset, so is also countable. But there is no practical notation - that is, no programmatic way - which allows for the enumeration of the set of non-halting Turing machines. A notation system is only as useful as it is computable, and there is no computable enumeration of this set.
{ "language": "en", "url": "https://math.stackexchange.com/questions/342254", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 3 }
Are all large cardinal axioms expressible in terms of elementary embeddings? An elementary embedding is an injection $f:M\rightarrow N$ between two models $M,N$ of a theory $T$ such that for any formula $\phi$ of the theory, we have $M\vDash \phi(a) \ \iff N\vDash \phi(f(a))$ where $a$ is a list of elements of $M$. A critical point of such an embedding is the least ordinal $\alpha$ such that $f(\alpha)\neq\alpha$. A large cardinal is a cardinal number that cannot be proven to exist within ZFC. They often appear to be critical points of an elementary embedding of models of ZFC where $M$ is the von Neumann hierarchy, and $N$ is some transitive model. Is this in fact true for all large cardinal axioms?
There is an online PDF of lecture slides by Woodin on the "Omega Conjecture" in which he axiomatizes the type of formulas that are large cardinals. I do not know how exhaustive his formulation is. See the references under http://en.wikipedia.org/wiki/Omega_conjecture
{ "language": "en", "url": "https://math.stackexchange.com/questions/342306", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 4, "answer_id": 3 }
Chopping arithmetic on terms such as $\pi^2$, $\pi^3$ or $e^3$ I have a problem where I have to use 3-digit chopping with numbers such as $\pi^2$, $\pi^3$, $e^3$, etc. If I wanted to 3-digit chop $\pi^2$, do I square the true value of $\pi$ and then chop, or do I chop $\pi$ first to 3.14 then square it?
If you chop $\pi$ then square then chop, what you are really chopping is $3.14^2$ not $\pi^2$. So you must take $\pi^2$ and then chop it.
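The distinction is easy to see numerically; a sketch (the helper `chop` is my own hypothetical 3-significant-digit truncation, not notation from the text):

```python
import math

def chop(x, digits=3):
    # truncate (not round) x to the given number of significant digits
    if x == 0:
        return 0.0
    e = math.floor(math.log10(abs(x)))
    scale = 10 ** (digits - 1 - e)
    return math.trunc(x * scale) / scale

print(chop(math.pi ** 2))      # chop the true value of pi^2: 9.86
print(chop(math.pi) ** 2)      # square the chopped pi first: 3.14^2 = 9.8596
```

Chopping first and then squaring yields $9.8596$, which is not the 3-digit chop of $\pi^2 = 9.8696\ldots$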
{ "language": "en", "url": "https://math.stackexchange.com/questions/342377", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
What does 1 modulo p mean? For example, from Gallian's text: Sylow Test for Nonsimplicity Let $n$ be a positive integer that is not prime, and let $p$ be a prime divisor of $n$. If 1 is the only divisor of $n$ that is equal to 1 modulo p, then there does not exist a simple group of order $n$. I of course understand what some a mod b is (ex. 19 mod 10 = 9); but wouldn't 1 mod anything positive be 1 [edit: changed from 0]?
To say that a number $a$ is $1$ modulo $p$ means that $p$ divides $a - 1$. So, in particular, the numbers $1, p + 1, 2p + 1, \ldots$ are all equal to $1$ modulo $p$. As you're studying group theory, another way to put it is that $a = b$ modulo $p$ if and only if $\pi(a) = \pi(b)$ where $\pi\colon\mathbb Z \to \mathbb Z/p$ is the factor homomorphism.
{ "language": "en", "url": "https://math.stackexchange.com/questions/342463", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Calculate an infinite continued fraction Is there a way to algebraically determine the closed form of any infinite continued fraction with a particular pattern? For example, how would you determine the value of $$b+\cfrac1{m+b+\cfrac1{2m+b+\cfrac1{3m+b+\cdots}}}$$? Edit (2013-03-31): When $m=0$, simple algebraic manipulation leads to $\dfrac{b+\sqrt{b^2+4}}{2}$. The case where $m=2$ and $b=1$ is $\dfrac{e^2+1}{e^2-1}$, and I've found out through WolframAlpha that the case where $b=0$ is an expression related to the Bessel function: $\dfrac{I_1(\tfrac2m)}{I_0(\tfrac2m)}$. I'm not sure why this happens.
Reference: D. H. Lehmer, "Continued fractions containing arithmetic progressions", Scripta Mathematica vol. 29, pp. 17-24 Theorem 1: $$ b+\frac{1}{a+b\quad+}\quad\frac{1}{2a+b\quad+}\quad\frac{1}{3a+b\quad+}\quad\dots = \frac{I_{b/a-1}(2/a)}{I_{b/a}(2/a)} $$ Theorem 2 $$ b-\frac{1}{a+b\quad-}\quad\frac{1}{2a+b\quad-}\quad\frac{1}{3a+b\quad-}\quad\dots = \frac{J_{b/a-1}(2/a)}{J_{b/a}(2/a)} $$ and other results...
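Theorem 1 can be spot-checked with $a=2$, $b=1$, where it predicts $I_{-1/2}(1)/I_{1/2}(1) = \coth(1) = \frac{e^2+1}{e^2-1}$, the value quoted in the question. A sketch using only the standard library (the truncation depth is an arbitrary choice of mine):

```python
import math

def cf(a, b, depth=30):
    # evaluate b + 1/(a+b + 1/(2a+b + 1/(3a+b + ...))) from the bottom up
    val = depth * a + b
    for k in range(depth - 1, 0, -1):
        val = k * a + b + 1 / val
    return b + 1 / val

predicted = (math.e**2 + 1) / (math.e**2 - 1)   # coth(1)
print(cf(2, 1), predicted)
```

The two values agree to machine precision, since this continued fraction converges very quickly.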
{ "language": "en", "url": "https://math.stackexchange.com/questions/342518", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
Factor $(a^2+2a)^2-2(a^2+2a)-3$ completely I have this question that asks to factor this expression completely: $$(a^2+2a)^2-2(a^2+2a)-3$$ My working out: $$a^4+4a^3+4a^2-2a^2-4a-3$$ $$=a^4+4a^3+2a^2-4a-3$$ $$=a^2(a^2+4a-2)-4a-3$$ I am stuck here. I don't how to proceed correctly.
Or if you missed Jasper Loy's trick, you can guess and check a value of $a$ for which $$f(a) = a^4 + 4a^3 + 2a^2 - 4a - 3 = 0.$$ E.g. f(1) = 0 so $(a-1)$ is a factor and you can use long division to factorise it out.
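Either route ends at the complete factorization $(a+3)(a-1)(a+1)^2$ (my computation via the substitution $u=a^2+2a$, where $u^2-2u-3=(u-3)(u+1)$), which is easy to check numerically; a sketch:

```python
def f(a):
    return (a**2 + 2*a)**2 - 2*(a**2 + 2*a) - 3

assert f(1) == 0                       # so (a - 1) is a factor
# a degree-4 identity verified at 21 points must hold everywhere
for a in range(-10, 11):
    assert f(a) == (a + 3) * (a - 1) * (a + 1)**2
print("factorization verified")
```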
{ "language": "en", "url": "https://math.stackexchange.com/questions/342581", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10", "answer_count": 5, "answer_id": 2 }
nCr question choosing 1 - 9 from 9 I've been trying to rack my brain for my high school maths to find the right calculation for this but I've come up blank. I would like to know how many combinations there are of choosing 1-9 items from a set of 9 items. i.e. There are 9 ways of selecting 1 item. There is 1 way of selecting 9 items. For 2 items you can choose... 1,2 1,3 1,4 1,5 1,6 1,7 1,8 1,9 2,3 2,4 and so on. How many ways in total are there of selecting any number of items from a set of 9 (without duplicates, i.e. you can't have 2,2). Also, order is not important. So 1,2 is the same as 2,1.
Line up the items in front of you, in order. To any of them you can say YES or NO. There are $2^9$ ways to do this. This is the same as the number of bit strings of length $9$. But you didn't want to allow the all NO's possibility (the empty subset). Thus there are $2^9-1$ ways to choose $1$ to $9$ of the objects. Remark: There are $\dbinom{9}{k}$ ways of choosing exactly $k$ objects. Here $\dbinom{n}{k}$ is a Binomial Coefficient, and is equal to $\dfrac{n!}{k!(n-k)!}$. This binomial coefficient is called by various other names, such as $C(n,k)$, or ${}_nC_k$, or $C^n_k$. So an alternate, and much longer way of doing the count is to find the sum $$\binom{9}{1}+\binom{9}{2}+\binom{9}{3}+\cdots +\binom{9}{9}.$$
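Both counts are easy to confirm computationally; a sketch:

```python
from itertools import combinations
from math import comb

items = range(1, 10)
# enumerate all nonempty subsets by size, and compare with the binomial sum
total = sum(len(list(combinations(items, k))) for k in range(1, 10))
print(total, 2**9 - 1, sum(comb(9, k) for k in range(1, 10)))   # 511 511 511
```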
{ "language": "en", "url": "https://math.stackexchange.com/questions/342655", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Definition of limit in category theory - is $X$ a single object of $J$ or a subset of $J$? Let $F : J → C$ be a diagram of type $J$ in a category $C$. A cone to $F$ is an object $N$ of $C$ together with a family $ψ_X : N → F(X)$ of morphisms indexed by the objects $X$ of $J$, such that for every morphism $f : X → Y$ in $J$, we have $F(f) \centerdot ψ_X = ψ_Y$. http://en.wikipedia.org/wiki/Limit_(category_theory) So, is $X$ here referring to a single object of $J$ or several objects of $J$ grouped together? (In terms of set, is $X$ an element of $J$ or a subset of $J$?)
The cone is indexed by all objects of $J$. Before dealing with limits in general, you should understand products and their universal property. Of course all factors are involved.
{ "language": "en", "url": "https://math.stackexchange.com/questions/342727", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Proving that an injective function is bijective I am having a lot of trouble starting this proof. I would greatly appreciate any help I can get here. Thanks. Let $n\in \mathbb{N}$. Prove that any injective function from $\{1,2,\ldots,n\}$ to $\{1,2,\ldots,n\}$ is bijective.
Another hint: Prove it by induction. It’s clear for $n=1$. Otherwise if the statement holds for some $n$, take an injective map $σ \colon \{1, …, n+1\} → \{1, …, n+1\}$. Assume $σ(n+1) = n+1$ – why can you do this? What follows?
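The claim can also be confirmed by brute force for a small $n$; a sketch that enumerates all $4^4$ functions on a 4-element set:

```python
from itertools import product

n = 4
dom = set(range(1, n + 1))
checked = 0
for f in product(sorted(dom), repeat=n):   # f[i-1] is the image of i
    if len(set(f)) == n:                   # injective
        assert set(f) == dom               # ... hence surjective
    checked += 1
print("checked", checked, "functions")     # 256 functions on a 4-element set
```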
{ "language": "en", "url": "https://math.stackexchange.com/questions/343830", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 6, "answer_id": 1 }
Eigenvalues of a matrix with only one non-zero row and non-zero column. Here is the full question. * *Only the last row and the last column can contain non-zero entries. *The matrix entries can take values only from $\{0,1\}$. It is a kind of binary matrix. I am interested in the eigenvalues of this matrix. What can we say about them? In particular, when are all of them positive?
At first I thought I understood your question, but after reading those comments and answers here, it seems that people here have very different interpretations. So I'm not sure if I understand it correctly now. To my understanding, you want to find the eigenvalues of $$ A=\begin{pmatrix}0_{(n-1)\times(n-1)}&u\\ v^T&a\end{pmatrix}, $$ where $a$ is a scalar, $u,v$ are vectors and all entries in $a,u,v$ are either $0$ or $1$. The rank of this matrix is at most $2$. So, when $n\ge3$, $A$ must have some zero eigenvalues. In general, the eigenvalues of $A$ include (at least) $(n-2)$ zeros and $$\frac{a \pm \sqrt{a^2 + 4v^Tu}}{2}.$$ Since $u,v$ are $0-1$ vectors, $A$ has exactly one positive eigenvalue and one negative eigenvalue if $v^Tu>0$, and the eigenvalues of $A$ are $\{a,0,0,\ldots,0\}$ if $v^Tu=0$.
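The nonzero-eigenvalue formula can be double-checked without any linear-algebra library: any root $\lambda$ of $\lambda^2 - a\lambda - v^Tu = 0$ has $(u, \lambda)$ as an eigenvector. A sketch with concrete $0$-$1$ data (the specific $u$, $v$, $a$ are just an example of mine):

```python
import math

u = [1, 0, 1]; v = [1, 1, 0]; a = 1          # sample last column, last row, corner entry
n = len(u) + 1
# the matrix: zero block, last column u, last row v, corner a
A = [[0] * (n - 1) + [u[i]] for i in range(n - 1)] + [v + [a]]

vtu = sum(x * y for x, y in zip(v, u))
lam = (a + math.sqrt(a * a + 4 * vtu)) / 2   # root of t^2 - a*t - v^T u = 0

w = u + [lam]                                # claimed eigenvector (u, lam)
Aw = [sum(A[i][j] * w[j] for j in range(n)) for i in range(n)]
assert all(abs(x - lam * y) < 1e-9 for x, y in zip(Aw, w))
print("eigenvalue", lam, "verified")         # lam = (1 + sqrt(5))/2 here
```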
{ "language": "en", "url": "https://math.stackexchange.com/questions/343880", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 0 }
Show that $f '(x_0) =g'(x_0)$. Assume that $f$ and $g$ are differentiable on interval $(a,b)$ and $f(x) \le g(x)$ for all $x \in (a,b)$. There exists a point $x_0\in (a,b)$ such that $f(x_0) =g(x_0)$. Show that $f '(x_0) =g'(x_0)$. I am guessing we create a function $h(x) = f(x)-g(x)$ and try to come up with the conclusion using the definition of differentiability at a point. I am not sure how. Some useful facts: A function $f:(a,b)\to \mathbb R$ is differentiable at a point $x_0$ in $(a,b)$ if $$\lim_{h\to 0}\frac{f(x_0 + h)-f(x_0)}{h}=\lim_{x\to x_0}\frac{f(x)-f(x_0)}{x-x_0}.$$
$h(x)=g(x)-f(x)$ is differentiable, and $h(x_0)=0$ and $h(x)\geq 0$ hence $h$ has a minimum at $x_0$ and hence $h'(x_0)=0$ And as $$h'(x_0)=0 \implies f'(x_0)=g'(x_0)$$ we are done.
{ "language": "en", "url": "https://math.stackexchange.com/questions/343940", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 0 }
dimension of a coordinate ring Let $I$ be an ideal of $\mathbb{C}[x,y]$ such that its zero set in $\mathbb{C}^2$ has cardinality $n$. Is it true that $\mathbb{C}[x,y]/I$ is an $n$-dimensional $\mathbb{C}$-vector space (and why)?
The answer is no, but your very interesting question leads to about the most elementary motivation for the introduction of scheme theory in elementary algebraic geometry. You see, if the common zero set $X_{\mathrm{classical}}=V_{\mathrm{classical}}(I)$ consists set-theoretically (I would even say physically) in $n$ points, then $N:=\dim_\mathbb C \mathbb{C}[x,y]/I\geq n$. If $N\gt n$, this is an indication that some interesting geometry is present: $X_{\mathrm{classical}}$ is described by equations that are not transversal enough, so that morally they describe a variety bigger than the naked physical set. The most elementary example is given by $I=\langle y,y-x^2\rangle$: we have $V_{\mathrm{classical}}(I)=\{(0,0)\}=\{O\}$ Everybody feels that it is a bad idea do describe the origin as $V(I)$, i.e. as the intersection of a parabola and one of its tangents: a better description would be to describe it by the ideal $J=\langle x,y\rangle,$ in other words as the intersection of two transversal lines. However the ideal $I$ describes an interesting structure, richer than a naked point, and this structure is called a scheme. This is all reflected in the strict inequality $$\dim_\mathbb C \mathbb{C}[x,y]/I=2=N\gt \dim_\mathbb C \mathbb{C}[x,y]/J=1=n=\text { number of physical points}.$$ Scheme theory in its most elementary incarnation disambiguates these two cases by adding the relevant algebra in the algebro-geometric structure, now defined as pairs consisting of a physical set plus an algebra: $$V_{\mathrm{scheme}}(J)=(\{O\},\mathbb{C}[x,y]/J )\subsetneq V_{\mathrm{scheme}}(I)= (\{O\},\mathbb{C}[x,y]/I ).$$ Bibliography Perrin's Algebraic Geometry is the most elementary introduction to this down-to-earth vision of schemes (cf. beginning of Chapter VI).
{ "language": "en", "url": "https://math.stackexchange.com/questions/343993", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10", "answer_count": 4, "answer_id": 1 }
$\tan B\cdot \frac{BM}{MA}+\tan C\cdot \frac{CN}{NA}=\tan A. $ Let $\triangle ABC$ be a triangle and $H$ be the orthocenter of the triangle. If $M\in AB$ and $N \in AC$ such that $M,N,H$ are collinear prove that : $$\tan B\cdot \frac{BM}{MA}+\tan C\cdot \frac{CN}{NA}=\tan A. $$ Thanks :)
Let $D$ be the foot of the altitude from $A$ (so $H$ lies on $AD$), and let $K$ be the point where line $MNH$ meets line $BC$. With Menelaus' theorem: in $\triangle ABD$, $\dfrac{BM}{MA}\cdot\dfrac{AH}{HD}\cdot\dfrac{DK}{KB}=1$, i.e. $\dfrac{BM}{MA}=\dfrac{BK\cdot HD}{AH\cdot DK}$; in $\triangle ACD$, $\dfrac{CN}{NA}\cdot\dfrac{AH}{HD}\cdot\dfrac{DK}{KC}=1$, i.e. $\dfrac{CN}{NA}=\dfrac{CK\cdot HD}{AH\cdot DK}$. Also $\tan B=\dfrac{AD}{BD}$, $\tan C=\dfrac{AD}{DC}$, and $\tan A=\dfrac{BC}{AH}$ (proved below). Then LHS $=\dfrac{AD}{BD}\cdot\dfrac{BK\cdot HD}{AH\cdot DK}+\dfrac{AD}{DC}\cdot\dfrac{CK\cdot HD}{AH\cdot DK}=\dfrac{AD\cdot HD}{AH\cdot DK}\left(\dfrac{BK}{BD}+\dfrac{CK}{DC}\right)$. Now $\dfrac{BK}{BD}+\dfrac{CK}{DC}=\dfrac{BK}{BD}+\dfrac{KB+BD+DC}{DC}=\dfrac{BK}{BD}+\dfrac{BK}{DC}+\dfrac{BD}{DC}+\dfrac{DC}{DC}=\dfrac{BK\cdot BC}{BD\cdot DC}+\dfrac{BC}{DC}=\dfrac{BC}{DC}\cdot\dfrac{BK+BD}{BD}=\dfrac{BC\cdot DK}{DC\cdot BD}$, so LHS $=\dfrac{AD\cdot HD}{AH\cdot DK}\cdot\dfrac{BC\cdot DK}{DC\cdot BD}=\dfrac{BC\cdot AD\cdot HD}{AH\cdot DC\cdot BD}$. Clearly $\triangle BHD \sim \triangle ACD$, that is $\dfrac{BD}{AD}=\dfrac{HD}{CD}$, i.e. $\dfrac{AD\cdot HD}{DC\cdot BD}=1$. Hence LHS $=\dfrac{BC}{AH}=\tan A$. For $\tan A=\dfrac{BC}{AH}$, we can prove it as follows: $A=\angle BAD+\angle DAC=\angle HCD+\angle HBD$, so $\tan A=\tan(\angle HBD+\angle HCD)=\dfrac{\tan\angle HBD+\tan\angle HCD}{1-\tan\angle HBD\cdot\tan\angle HCD}=\dfrac{\frac{HD}{BD}+\frac{HD}{DC}}{1-\frac{HD\cdot HD}{BD\cdot DC}}=\dfrac{HD\cdot BC}{BD\cdot DC-HD^2}=\dfrac{HD\cdot BC}{AD\cdot HD-HD^2}=\dfrac{HD\cdot BC}{HD(AD-HD)}=\dfrac{BC}{AH}$
{ "language": "en", "url": "https://math.stackexchange.com/questions/344189", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
A subspace of a vector space A subspace of a vector space $V$ is a subset $H$ of $V$ that has three properties: a) The zero vector of $V$ is in $H$. b) $H$ is closed under vector addition. That is for each $u$ and $v$ in $H$, the sum $u+v$ is in $H$. c) $H$ is closed under multiplication by scalars. That is, for each $u$ in $H$ and each scalar $c$, the vector $cu$ is in $H$. It would be great if someone could "dumb" this down. It already seems extremely simply, but i'm having a very difficult time applying these.
If your original vector space was $V=\mathbb R^3$, then the possible subspaces are: * *The whole space *Any plane that passes through $0$ *Any line through $0$ *The singleton set, $\{0\}$ One reading for the definition is that $H$ is a subspace of $V$ if it is a sub-set of $V$ and it is also a vector space under the same operations as in $V$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/344259", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Showing $n\Bbb{Z}$ is a group under addition Let $n$ be a positive integer and let $n\Bbb{Z}=\{nm\mid m \in\Bbb{Z}\}$. I need to show that $\left< n\Bbb{Z},+ \right>$ is a group. And I need to show that $\left< n\Bbb{Z},+ \right>\cong\left< \Bbb{Z},+ \right>$. Added: If $n\mathbb Z$ is a subgroup of $\mathbb Z$ then it must be closed under "+". The identity element 0 is an element of the subgroup. And for any $n$ in $n\mathbb Z$, it's inverse $-n$ must be in $n\mathbb Z$...
Since every element of $n\mathbb Z$ is an element of $\mathbb Z$, we can do an easier proof that it is a group by showing that it is a subgroup of $\mathbb Z$. It happens to be true that if $H\subset G$, where $G$ is a group, then $H$ is group if it satisfies the single condition that if $x,y\in H$, then $x\ast y^{-1}\in H$, where $\ast$ is the group operation of $G$ and $y^{-1}$ is the inverse of $y$ in $G$. In order to find an isomorphism $\varphi:\mathbb Z\to n\mathbb Z$, ask yourself what $\varphi(1)$ should be. Can you go from here?
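A small computational sanity check that $\varphi(m) = nm$ (one natural answer to the hint, shown here with $n = 5$) respects addition and is injective; a sketch:

```python
n = 5

def phi(m):
    # candidate isomorphism Z -> nZ
    return n * m

for x in range(-20, 21):
    for y in range(-20, 21):
        assert phi(x + y) == phi(x) + phi(y)        # homomorphism property
assert len({phi(m) for m in range(-20, 21)}) == 41  # injective on this sample
print("phi(m) = 5m respects addition and is injective")
```

Surjectivity onto $n\mathbb Z$ is immediate, since every element of $n\mathbb Z$ has the form $nm$.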
{ "language": "en", "url": "https://math.stackexchange.com/questions/344319", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 2 }
Maximum number of points a minimum distance apart in a semicircle of certain radius You have a circle of certain radius $r$. I want to put a number of points in either of the semicircles. However, no two point can be closer than $r$. The points can be put anywhere inside the semicircle, on the straight line, inside area, or on the circumference. There is no relation among the points of the two semicircles. But as you can see, eventually they will be the same. How do I find the maximum number of points that can be put inside the semicircle?
The answer is five points. Five points can be achieved by placing one at the center of the large circle and four others equally spaced around the circumference of one semicircle (the red points in the picture below). To show that six points is impossible, consider disks of radius $s$ about each of those five points, where $r/\sqrt3 < s < r$. These five smaller disks completely cover the large half-disk; so for any six points in the large half-disk, at least two of them must lie in the same smaller disk. But then those two points are closer than $r$ to each other.
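The distances in the five-point configuration are easy to verify; a sketch (center of the circle plus four arc points spaced $60°$ apart):

```python
import math

r = 1.0
pts = [(0.0, 0.0)] + [(r * math.cos(math.radians(t)), r * math.sin(math.radians(t)))
                      for t in (0, 60, 120, 180)]

# smallest pairwise distance; adjacent arc points are exactly 2 r sin(30 deg) = r apart
dmin = min(math.dist(p, q) for i, p in enumerate(pts) for q in pts[i + 1:])
print(dmin)
```

The minimum pairwise distance comes out to exactly $r$, so all five points are mutually at distance at least $r$.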
{ "language": "en", "url": "https://math.stackexchange.com/questions/344417", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Radius of convergence of power series $\sum c_n x^{2n}$ and $\sum c_n x^{n^2}$ I've got a start on the question I've written below. I'm hoping for some help to finish it off. Suppose that the power series $\sum_{n=0}^{\infty}c_n x^n$ has a radius of convergence $R \in (0, \infty)$. Find the radii of convergence of the power series $\sum_{n=0}^{\infty}c_n x^{2n}$ and $\sum_{n=0}^{\infty}c_n x^{n^2}$. From Hadamard's Theorem I know that the radius of convergence for $\sum_{n=0}^{\infty}c_n x^n$ is $R=\frac{1}{\alpha}$, where $$\alpha = \limsup_{n \to \infty} |a_n|^{\frac{1}{n}}.$$ Now, applying the Root Test to $\sum_{n=0}^{\infty}c_n x^{2n}$ gives $$\limsup |a_nx^{2n}|^{\frac{1}{n}}=x^2 \cdot \limsup |a_n|^{\frac{1}{n}}=x^2 \alpha$$ which gives a radius of convergence $R_1 = \frac{1}{\sqrt{\alpha}}$. Now for the second power series. My first thought was to take $$\limsup |a_nx^{n^2}|^{\frac{1}{n^2}}=|x| \cdot \limsup |a_n|^{\frac{1}{n^2}}$$ but then I'm stuck. I was trying to write the radius of convergence once again in terms of $\alpha$. Any input appreciated and thanks a bunch.
$$ \limsup_{n\rightarrow\infty} |c_n|^{\frac{1}{n}}=\alpha <\infty $$ gives that there exists $N\geq 1$ such that if $n>N$ then $|c_n|^{\frac{1}{n}}< \alpha+1$. Then $|c_n|^{\frac{1}{n^2}}< (\alpha+ 1)^{\frac{1}{n}}$ for all $n>N$. It follows that $\limsup_{n\rightarrow\infty} |c_n|^{\frac{1}{n^2}}\leq 1$. Also, there is a subsequence $\{n_k\}$ such that $|c_{n_k}|^{\frac{1}{n_k}}$ converges to $\alpha$. For that subsequence, we have $$ |c_{n_k}|^{\frac{1}{n_k^2}} \rightarrow 1 $$ as $k\rightarrow\infty$. This implies that $\limsup_{n\rightarrow\infty} |c_n|^{\frac{1}{n^2}}=1$. Therefore, the radius of convergence of $\sum c_n x^{n^2}$ is 1.
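A numeric illustration of the conclusion (a sketch, with the example $c_n = 2^n$ chosen by me: then $\alpha = 2$ and $R = 1/2$, yet $\sum c_n x^{n^2}$ has radius $1$):

```python
c = lambda n: 2.0 ** n       # c_n = 2^n: |c_n|^(1/n) = 2, so R = 1/2 for sum c_n x^n
vals = [c(n) ** (1.0 / n**2) for n in (10, 100, 1000)]
print(vals)                  # 2^(1/n) decreases toward 1, so sum c_n x^(n^2) has radius 1
```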
{ "language": "en", "url": "https://math.stackexchange.com/questions/344483", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 5, "answer_id": 1 }
A question about a basis for a topology If a subset $B$ of a powerset $P(X)$ has the property that finite intersections of elements of $B$ are empty or again elements of $B$, does the collection of all unions of sets from $B$ form a topology on $X$ then? My book A Taste of Topology says this is indeed the case, but I wonder how you can guarantee that $X$ will be open. For example, the empty set has the required properties, so that would mean that the empty set is a topology for any set $X$, which is impossible.
It is usually taken that the empty intersection of subsets of a set is the entire set, similar to how the empty product is often taken to be the multiplicative identity or the empty sum is often taken to be the additive identity.
{ "language": "en", "url": "https://math.stackexchange.com/questions/344561", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
To prove a property of greatest common divisor Suppose integer $d$ is the greatest common divisor of integer $a$ and $b$, how to prove, there exist whole number $r$ and $s$, so that $$d = r \cdot a + s \cdot b $$ ? i know a proof in abstract algebra, hope to find a number theory proof? for abstract algebra proof, it's in Michael Artin's book "Algebra".
An approach through elementary number-theory: It suffices to prove this for relatively prime $a$ and $b$, so suppose this is so. Denote the set of integers $0\le k\le b$ which are relatively prime to $b$ by $\mathfrak B$. Then $a$ lies in the residue class of one of elements in $\mathfrak B$. Define a map $\pi$ from $\mathfrak B$ into itself by sending $k\in \mathfrak B$ to the residue class of $ka$. If $k_1a\equiv k_2a\pmod b$, then, as $\gcd (a,b)=1$, $b\mid (k_1-k_2)$, so that $k_1=k_2$ (Here $k_1$ and $k_2$ are positive integers less than $b$.). Hence this map is injective. Since the set $\mathfrak B$ is finite, it follows that $\pi$ is also surjective. So there is some $k$ such that $ka\equiv 1\pmod b$. This means that there is some $l$ with $ka-1=lb$, i.e. $ka-lb=1$. Barring mistakes. Thanks and regards then. P.S. The reduction step is: Given $a, b$ with $\gcd(a,b)=d$, we know that $\gcd(\frac{a}{d},\frac{b}{d})=1$. So, if the relatively prime case has been settled, then there are $m$ and $n$ such that $m\frac{a}{d}+n\frac{b}{d}=1$, and hence $ma+nb=d$.
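The pigeonhole argument is constructive enough to run: search for $k$ with $ka\equiv 1 \pmod b$, then read off $l$. A sketch (the helper name is mine):

```python
from math import gcd

def bezout_coprime(a, b):
    # find k with k*a = 1 (mod b) by exhaustive search, then l with k*a - l*b = 1
    assert gcd(a, b) == 1
    k = next(k for k in range(b) if (k * a) % b == 1 % b)
    l = (k * a - 1) // b
    return k, l

k, l = bezout_coprime(7, 30)
assert k * 7 - l * 30 == 1
print(k, l)   # 13 3, since 13*7 - 3*30 = 91 - 90 = 1
```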
{ "language": "en", "url": "https://math.stackexchange.com/questions/344628", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 5, "answer_id": 3 }
Non-commutative or commutative ring or subring with $x^2 = 0$ Does there exist a non-commutative or commutative ring or subring $R$ with $x \cdot x = 0$ where $0$ is the zero element of $R$, $\cdot$ is multiplication secondary binary operation, and $x$ is not zero element, and excluding the case where addition (abelian group operation) and multiplication of two numbers always become zero? Edit: most seem to be focused on non-commutative case. What about commutative case? Edit 2: It is fine to relax the restriction to the following: there exists countably (and possibly uncountable) infinite number of $x$'s in $R$ that satisfy $x \cdot x = 0$ (so this means that there may be elements of ring that do not satisfy $x \cdot x =0$) excluding the case where addition and multiplication of two numbers always become zero.
Example for the non-commutative case: $$ \pmatrix{0 & 1 \\ 0 & 0} \pmatrix{0 & 1 \\ 0 & 0} = \pmatrix{0 & 0 \\ 0 & 0}. $$ Example for the commutative case: Consider the ring $\mathbb{Z} / 4\mathbb{Z}$. What is $2^2$ in this ring?
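Both examples can be checked mechanically; a sketch:

```python
def matmul2(A, B):
    # plain 2x2 matrix product
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

N = [[0, 1], [0, 0]]
assert matmul2(N, N) == [[0, 0], [0, 0]]   # non-commutative example: N^2 = 0 with N != 0

assert (2 * 2) % 4 == 0                    # commutative example: 2^2 = 0 in Z/4Z
print("both examples verified")
```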
{ "language": "en", "url": "https://math.stackexchange.com/questions/344714", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 0 }
Determine the points on the parabola $y=x^2 - 25$ that are closest to $(0,3)$ Determine the points on the parabola $y=x^2 - 25$ that are closest to $(0,3)$ I would like to know how to go about solving this. I have some idea of solving it. I believe you have to use implicit differentiation and the distance formula but I don't know how to set it up. Hints would be appreciated.
Just set up a distance squared function: $$d(x) = (x-0)^2 + (x^2-25-3)^2 = x^2 + (x^2-28)^2$$ Minimize this with respect to $x$. It is easier to work with the square of the distance rather than the distance itself because you avoid the square roots which, in the end, do not matter when taking a derivative and setting it to zero.
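Carrying the minimization through (my computation, not spelled out in the answer): $d'(x) = 2x + 4x(x^2-28) = 2x(2x^2-55)$, so the critical points are $x=0$ and $x=\pm\sqrt{55/2}$, and the closest points are $\left(\pm\sqrt{55/2},\, \tfrac52\right)$. A sketch:

```python
import math

d = lambda x: x**2 + (x**2 - 28)**2     # squared distance from (x, x^2 - 25) to (0, 3)

# critical points of d, from d'(x) = 2x(2x^2 - 55) = 0
crit = [0.0, math.sqrt(55 / 2), -math.sqrt(55 / 2)]
best = min(crit, key=d)
print(best, best**2 - 25)               # a closest point: (sqrt(27.5), 2.5)
```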
{ "language": "en", "url": "https://math.stackexchange.com/questions/344760", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 0 }
How do the floor and ceiling functions work on negative numbers? It's clear to me how these functions work on positive real numbers: you round up or down accordingly. But if you have to round a negative real number: to take $\,-0.8\,$ to $\,-1,\,$ then do you take the floor of $\,-0.8,\,$ or the ceiling? That is, which of the following are true? $$\lfloor-0.8\rfloor=-1$$ $$\text{or}$$ $$\lceil-0.8\rceil=-1$$
The first is correct: you round "down" (i.e. to the greatest integer LESS THAN OR EQUAL TO $-0.8$). In contrast, the ceiling function rounds "up" to the least integer GREATER THAN OR EQUAL TO $-0.8$, which is $0$. $$ \begin{align} \lfloor{-0.8}\rfloor & = -1\quad & \text{since}\;\; \color{blue}{\bf -1} \le -0.8 \le 0 \\ \\ \lceil {-0.8} \rceil & = 0\quad &\text{since} \;\; -1 \le -0.8 \le \color{blue}{\bf 0} \end{align}$$ In general, we must have that $$\lfloor x \rfloor \leq x\leq \lceil x \rceil\quad \forall x \in \mathbb R$$ And so it follows that $$-1 = \lfloor -0.8 \rfloor \leq -0.8 \leq \lceil -0.8 \rceil = 0$$ K.Stm's suggestion is a nice, intuitive way to recall the relation between the floor and the ceiling of a real number $x$, especially when $x\lt 0$: using the "number line" idea and plotting $-0.8$ with the two closest integers that "sandwich" it gives a line with $-1$, $-0.8$, and $0$ marked in order. We see that the floor of $x= -0.8$ is the first integer immediately to the left of $-0.8$, and the ceiling of $x= -0.8$ is the first integer immediately to the right of $-0.8$, and this strategy can be used, whatever the value of the real number $x$.
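For what it's worth, the floor and ceiling functions of most programming languages follow this same convention; a sketch in Python:

```python
import math

assert math.floor(-0.8) == -1    # greatest integer <= -0.8
assert math.ceil(-0.8) == 0      # least integer >= -0.8
assert math.trunc(-0.8) == 0     # truncation toward zero agrees with ceil on negatives
print("floor(-0.8) =", math.floor(-0.8), " ceil(-0.8) =", math.ceil(-0.8))
```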
{ "language": "en", "url": "https://math.stackexchange.com/questions/344815", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "36", "answer_count": 5, "answer_id": 1 }
How does linear algebra help with computer science? I'm a Computer Science student. I've just completed a linear algebra course. I got 75 points out of 100 points on the final exam. I know linear algebra well. As a programmer, I'm having a difficult time understanding how linear algebra helps with computer science? Can someone please clear me up on this topic?
The page Coding The Matrix: Linear Algebra Through Computer Science Applications (see also this page) might be useful here. In the second page you read among others In this class, you will learn the concepts and methods of linear algebra, and how to use them to think about problems arising in computer science. I guess you have been given a standard course in linear algebra, with no reference to applications in your field of interest. Although this is standard practice, I think that an approach in which the theory is mixed with applications is to be preferred. This is surely what I did when I had to teach Mathematics 101 to Economics majors, a few years ago.
{ "language": "en", "url": "https://math.stackexchange.com/questions/344879", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "22", "answer_count": 5, "answer_id": 2 }
Describe all matrices similar to a certain matrix. Math people: I assigned this problem as homework to my students (from Strang's "Linear Algebra and its Applications", 4th edition): Describe in words all matrices that are similar to $$\begin{bmatrix}1& 0\\ 0& -1\end{bmatrix}$$ and find two of them. Square matrices $A$ and $B$ are defined to be "similar" if there exists square invertible $M$ with $A = M^{-1}BM$ (or vice versa, since this is an equivalence relation). The answer to the problem is not in the text, and I am embarrassed to admit I am having trouble solving it. The problem looked easy when I first saw it. The given matrix induces a reflection in the $x_2$-coordinate, but I don't see how the geometry helps. A similar matrix has to have the same eigenvalues, trace, and determinant, so its trace is $0$ and its determinant is $-1$. I spent a fair amount of time on it, with little progress, and I can spend my time more productively. This problem is #2 in the problem set, which suggests that maybe there is an easy solution. I would settle for a hint that leads me to a solution. EDIT: Thanks to Thomas (?) for rendering my matrix in $\LaTeX$. Stefan (STack Exchange FAN)
Make a picture: your matrix mirrors the $e_2$ vector and doesn't change the $e_1$ vector at all. The matrix is in the orthogonal group but not in the special orthogonal group. Show that every matrix $$\begin{pmatrix} \cos(\alpha) & \sin(\alpha) \\ \sin(\alpha) & -\cos(\alpha)\\ \end{pmatrix} $$ does the same. Those are the nicest matrices which can happen to you, but there are some more (those matrices appear when $M$ itself is in the orthogonal group). When $M$ is not in the orthogonal group, it still won't change the eigenvalues (in case you have not met eigenvalues yet: $\lambda$ is an eigenvalue to a vector $v\neq 0$ if $$ A \cdot v=\lambda v,$$ which means the vector is only stretched or shrunk by the matrix, not rotated or anything like that). As $A$ has the eigenvalues $1$ and $-1$, you will always find vectors $v_1,v_2$ such that $$ B \cdot v_1= v_1$$ and $$ B\cdot v_2= -v_2.$$ So those matrices leave one vector unchanged, and the other one is "turned around". The eigenvectors of the matrix $$ \begin{pmatrix} a & b \\c & d \\ \end{pmatrix}^{-1}\cdot \begin{pmatrix} 1 & 0 \\ 0 & -1\\ \end{pmatrix} \cdot \begin{pmatrix} a & b \\c & d \\ \end{pmatrix}$$ are $$\begin{pmatrix} d \\ -c \end{pmatrix} \qquad \text{and} \qquad \begin{pmatrix} b \\ -a \end{pmatrix} $$ for the eigenvalues $1$ and $-1$ respectively (up to scaling), since $M$ maps the first to $(\det M)\,e_1$ and the second to $-(\det M)\,e_2$.
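As a numerical sanity check, here is a numpy sketch with one arbitrarily chosen invertible $M$; note that, up to scaling, $(d,-c)^T$ and $(b,-a)^T$ are eigenvectors of $M^{-1}AM$ for the eigenvalues $1$ and $-1$:

```python
import numpy as np

A = np.diag([1.0, -1.0])
M = np.array([[1.0, 2.0],
              [3.0, 4.0]])              # any invertible matrix works
B = np.linalg.inv(M) @ A @ M            # a matrix similar to A

# similar matrices share eigenvalues, trace and determinant
assert np.allclose(sorted(np.linalg.eigvals(B)), [-1.0, 1.0])
assert np.isclose(np.trace(B), 0.0) and np.isclose(np.linalg.det(B), -1.0)

# B fixes one direction and flips another, just like A does
(a, b), (c, d) = M
assert np.allclose(B @ np.array([d, -c]), np.array([d, -c]))
assert np.allclose(B @ np.array([b, -a]), -np.array([b, -a]))
```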
{ "language": "en", "url": "https://math.stackexchange.com/questions/344930", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 3, "answer_id": 1 }
Showing that $\ln(b)-\ln(a)=\frac 1x \cdot (b-a)$ has one solution $x \in (\sqrt{ab}, {a+b\over2})$ for $0 < a < b$ For $0<a<b$, show that $\ln(b)-\ln(a)=\frac 1x \cdot (b-a)$ has one solution $x \in (\sqrt{ab}, {a+b\over2})$. I guess that this is an application of the Lagrange theorem, but I'm unsure how to deal with $a+b\over2$ and $\sqrt{ab}$ since Lagrange's theorem offers a solution $\in (a,b)$.
Hint: Use the mean value theorem.
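For what it's worth, the solution can be written down explicitly: $x=\dfrac{b-a}{\ln b-\ln a}$, the logarithmic mean of $a$ and $b$. A quick random spot-check (my own) confirms it always lands strictly between the geometric and arithmetic means:

```python
import math
import random

random.seed(0)
for _ in range(1000):
    a = random.uniform(0.1, 10.0)
    b = a + random.uniform(0.1, 10.0)              # 0 < a < b
    x = (b - a) / (math.log(b) - math.log(a))      # solves ln(b) - ln(a) = (b - a)/x
    assert math.sqrt(a * b) < x < (a + b) / 2
```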
{ "language": "en", "url": "https://math.stackexchange.com/questions/344991", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
$\cos(\arcsin(x)) = \sqrt{1 - x^2}$. How? How does that bit work? How is $$\cos(\arcsin(x)) = \sin(\arccos(x)) = \sqrt{1 - x^2}$$
You know that "$\textrm{cosine} = \frac{\textrm{adjacent}}{\textrm{hypotenuse}}$" (so the cosine of an angle is the adjacent side over the hypotenuse), so now you have to imagine that your angle is $y = \arcsin x$. Since "$\textrm{sine} = \frac{\textrm{opposite}}{\textrm{hypotenuse}}$", and we have $\sin y = x/1$, draw a right triangle with a hypotenuse of length $1$ whose side opposite the angle $y$ has length $x$. Then, use the Pythagorean theorem to see that the adjacent side must have length $\sqrt{1 - x^2}$. Once you have that, use the picture to deduce $$ \cos y = \cos\left(\arcsin x\right) = \sqrt{1 -x^2}. $$
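Both identities are easy to spot-check numerically:

```python
import math
import random

random.seed(1)
for _ in range(1000):
    x = random.uniform(-0.99, 0.99)
    assert math.isclose(math.cos(math.asin(x)), math.sqrt(1 - x * x))
    assert math.isclose(math.sin(math.acos(x)), math.sqrt(1 - x * x))
```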
{ "language": "en", "url": "https://math.stackexchange.com/questions/345077", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 7, "answer_id": 3 }
Non-Deterministic Turing Machine Algorithm I'm having trouble with this question: Write a simple program/algorithm for a nondeterministic Turing machine that accepts the language: $$ L = \left\{\left. xw w^R y \right| x,y,w \in \{a,b\}^+, |x| \geq |y|\right\} $$
Outline: First nondeterministically choose where the cut-off between $w$ and $w^{\text{R}}$ is. Then compare and cross-out symbols to the left and right of this cut off until you find the first $i$ such that the symbol $i$ cells to the left of the cut-off is different than the symbol $i$ cells to the right of the cut off. Now check that there are at least as many non-crossed symbols to the left as there are to the right.
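Since the only nondeterministic step is guessing where $ww^{\text{R}}$ sits, a deterministic program can simulate the machine by trying every midpoint. Here is a Python sketch of the acceptance test (my own illustration of the outline, not a transition table):

```python
def accepts(s: str) -> bool:
    """Is s = x + w + reverse(w) + y with x, y, w nonempty and len(x) >= len(y)?"""
    n = len(s)
    for cut in range(1, n):                      # guess the w w^R midpoint
        # find the longest palindrome block w w^R centered at the cut
        i = 0
        while cut - 1 - i >= 0 and cut + i < n and s[cut - 1 - i] == s[cut + i]:
            i += 1
        # any w of length 1..i works; check the |x| >= |y| >= 1 condition
        for wlen in range(1, i + 1):
            if cut - wlen >= 1 and n - cut - wlen >= 1 and cut - wlen >= n - cut - wlen:
                return True
    return False

assert accepts("abba")            # x = a, w = b, y = a
assert accepts("aabbab")          # x = aa, w = b, y = ab, among other splits
assert not accepts("ab")          # too short
assert not accepts("abab")        # contains no w w^R block at all
```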
{ "language": "en", "url": "https://math.stackexchange.com/questions/345131", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 2 }
Which CSL rules hold in Łukasiewicz's 3-valued logic? CSL is classical logic. So I'm talking about the basic introduction and elimination rules (conditional, biconditional, disjunction, conjunction and negation). I'm not talking about his infinite-valued logical theory, but the 3-valued one where any atomic sentence can be given T,F or I. (See the Wikipedia article here: http://en.wikipedia.org/wiki/Three-valued_logic
1) I believe that Conditional introduction works: my experience is that the problem lies in getting a valid derivation from a set of premises, given the other rules that don't work.

2) Conditional elimination does not. This is the chief thing that cripples Lukasiewicz logic as a logic. Modus Ponens and the transitivity of the conditional both fail for the Lukasiewicz conditional, as they properly should, given that it allows conditionals with a value of I. If your conditionals are doubtful, you should not expect inference using them to be valid. (It is not generally known that this can be remedied: define Spq as NCCpqNCpq and use that for the conditional.)

3) Biconditional introduction works.

4) Biconditional elimination works.

5) Conjunction introduction works.

6) Conjunction elimination works.

7) Disjunction introduction works.

8) Disjunction elimination does not work.

9) Negation introduction does not work.

10) Negation elimination does not work.

Classically, 9 and 10 depend on the Law of the Excluded Middle, which doesn't hold in Lukasiewicz logic.
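These claims are mechanical to check from the Łukasiewicz truth tables ($\neg p = 1-p$, $p\to q=\min(1,\,1-p+q)$, with F, I, T as $0,\tfrac12,1$). Here is a small Python sketch; note that it treats I as a "designated" value, which is the reading under which Modus Ponens fails:

```python
F, I, T = 0.0, 0.5, 1.0          # Lukasiewicz 3-valued truth values

def impl(p, q):                  # the Lukasiewicz conditional
    return min(1.0, 1.0 - p + q)

def neg(p):
    return 1.0 - p

# Modus ponens counterexample when I counts as designated:
p, q = I, F
assert impl(p, q) == I           # the premise p -> q has value I
# so both p and p -> q have value I, yet the conclusion q is outright F

# Excluded middle p v ~p is not a tautology, which blocks the negation rules:
assert max(I, neg(I)) == I       # value I, not T
```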
{ "language": "en", "url": "https://math.stackexchange.com/questions/345186", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 4, "answer_id": 3 }
Evaluating $\lim_{x \to 0} \frac{\sqrt{x+9}-3}{x}$ The question is this. In $h(x) = \dfrac{\sqrt{x+9}-3}{x}$, show that $\displaystyle \lim_{x \to 0} \ h(x) = \frac{1}{6}$, but that $h(0)$ is undefined. In my opinion, if I use the expression with the $-3$ inside the square root, $\displaystyle \lim_{x \to 0} \dfrac{\sqrt{x+9-3}}{x}$, I get an undefined expression, but if I put the $-3$ outside the square root and calculate $\displaystyle \lim_{x \to 0} \dfrac{\sqrt{x+9}-3}{x}$, I will get $\frac{1}{6}$. Here is a screenshot of the original question. If needed I can post the PDF of the homework.
You cannot pull the negative 3 out of the square root. For example: $$\sqrt{1-3} = \sqrt{2}i \ne \sqrt{1} - 3 = -2$$
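Instead, multiply by the conjugate: $\frac{\sqrt{x+9}-3}{x}\cdot\frac{\sqrt{x+9}+3}{\sqrt{x+9}+3}=\frac{1}{\sqrt{x+9}+3}\to\frac16$. A quick numeric check:

```python
import math

def h(x):
    return (math.sqrt(x + 9) - 3) / x

# h(0) is undefined (0/0), but h(x) -> 1/6 as x -> 0 from either side:
for x in [1e-3, 1e-6, -1e-6]:
    assert math.isclose(h(x), 1 / 6, rel_tol=1e-3)
```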
{ "language": "en", "url": "https://math.stackexchange.com/questions/345251", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 5, "answer_id": 2 }
Show that $\displaystyle\lim_{x\rightarrow 0}\frac{5^x-4^x}{x}=\log_e\left({\frac{5}{4}}\right)$ * *Show that $\displaystyle\lim_{x\rightarrow 0}\frac{5^x-4^x}{x}=\log_e\left({\frac{5}{4}}\right)$ *If $0<\theta < \frac{\pi}{2} $ and $\sin 2\theta=\cos 3\theta~~$ then find the value of $\sin\theta$
* *$\displaystyle\lim_{x\rightarrow 0}\frac{5^x-4^x}{x}$ $=\displaystyle\lim_{x\rightarrow 0}\frac{5^x-1-(4^x-1)}{x}$ $=\displaystyle\lim_{x\rightarrow 0}\frac{5^x-1}{x}-\displaystyle\lim_{x\rightarrow 0}\frac{4^x-1}{x}$ $=\log_e5-\log_e4 ~~~~~~$ $[\because\displaystyle\lim_{x\rightarrow 0}\frac{a^x-1}{x}=\log_e a~~(a>0)]$ $=\log_e(\frac{5}{4})$
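A numeric sanity check of the limit:

```python
import math

def f(x):
    return (5 ** x - 4 ** x) / x

for x in [1e-4, 1e-7, -1e-7]:
    assert math.isclose(f(x), math.log(5 / 4), rel_tol=1e-3)
```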
{ "language": "en", "url": "https://math.stackexchange.com/questions/345319", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 0 }
The minimum value of $a^2+b^2+c^2+\frac1{a^2}+\frac1{b^2}+\frac1{c^2}?$ I came across the following problem : Let $a,b,c$ are non-zero real numbers .Then the minimum value of $a^2+b^2+c^2+\dfrac{1}{a^2}+\dfrac{1}{b^2}+\dfrac{1}{c^2}?$ This is a multiple choice question and the options are $0,6,3^2,6^2.$ I do not know how to progress with the problem. Can someone point me in the right direction? Thanks in advance for your time.
Might as well take advantage of the fact that it's a multiple choice question. First, is it possible that the quantity is ever zero? Next, can you find $a, b, c$ such that $$a^2+b^2+c^2+\dfrac{1}{a^2}+\dfrac{1}{b^2}+\dfrac{1}{c^2} = 6?$$
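(Spoiler for the second question: since $t^2+1/t^2\ge 2$ for every nonzero real $t$, each of the three pairs contributes at least $2$. A quick random search supports the value $6$, attained at $a=b=c=1$:)

```python
import random

def f(a, b, c):
    return a*a + b*b + c*c + 1/(a*a) + 1/(b*b) + 1/(c*c)

assert f(1, 1, 1) == 6                     # the candidate minimum
random.seed(2)
for _ in range(10000):
    a, b, c = (random.uniform(0.1, 3.0) for _ in range(3))
    assert f(a, b, c) >= 6 - 1e-12         # never beaten (signs don't matter: only squares appear)
```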
{ "language": "en", "url": "https://math.stackexchange.com/questions/345379", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 4, "answer_id": 0 }
Can twice a perfect square be divisible by $q^{\frac{q+1}{2}} + 1$, where $q$ is a prime with $q \equiv 1 \pmod 4$? Can twice a perfect square be divisible by $$q^{\frac{q+1}{2}} + 1,$$ where $q$ is a prime with $q \equiv 1 \pmod 4$?
Try proving something "harder":

Theorem: Let $n$ be a positive integer. There exists a positive integer $k$ such that $n \mid 2k^2$.

Proof: take $k=n$; then $2k^2=2n^2$ is divisible by $n$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/345470", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
Case when there are more leaves than non leaves in the tree Prove that a tree with no vertices of degree 2 has more leaves than non-leaves. Ideas: If the tree has no vertices of degree 2, this means that the vertices of the tree have degree 1 or $\geq 3$. Vertices with degree 1 are leaves and vertices with degree $\geq 3$ are non-leaves. In particular, suppose the root has degree 3 (therefore 3 vertices on level 1), level 2 has 6 vertices, and level $i$ has $3\cdot 2^{i-1}$ vertices. Let's assume there are $n$ levels; therefore, according to the assumption, $1+\sum_{i=1}^{n-1} 3\cdot 2^{i-1} < 3\cdot 2^{n-1}$. There are a few problems: currently I don't have an idea how to show that the above inequality is true. In addition, do I need to consider particular cases when not all leaves are on the same level of the tree, or when not all non-leaves have the same degree? Intuitively, all these cases just increase the number of leaves. I will appreciate any idea or hint how to show that the assumption is right.
If $D_k$ is the number of vertices of degree $k$ then $\sum k \cdot D_k=2E$ where $E$ is the number of edges. In a tree, $E=V-1$ with $V$ the number of vertices. So if $D_2=0$ you have $$1D_1+3D_3+4D_4+...=2E=2V-2\\ =2(D_1+D_3+D_4+...)-2.$$ From this, $$2D_1-2-D_1=(3D_3+4D_4+...)-(2D_3+2D_4+...),$$ $$D_1-2=1D_3+2D_4+3D_5+...,$$ and then $$D_1>D_3+2D_4+3D_5+... \ge D_3+D_4+D_5+...$$ The sum $D_3+D_4+D_5+...$ on the right here is the number of non leaves, since $D_2=0$, while $D_1$ is the number of leaves, showing more leaves than non leaves.
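The inequality is easy to stress-test by generating random trees in which every internal vertex has degree at least $3$ (the root gets three children, and each expanded leaf gets two or three), so that $D_2=0$ by construction:

```python
import random

random.seed(3)
for _ in range(200):
    deg = {0: 3}                          # root with 3 children
    leaves = [1, 2, 3]
    for v in leaves:
        deg[v] = 1
    nxt = 4
    for _ in range(random.randrange(10)): # expand a few random leaves
        v = leaves.pop(random.randrange(len(leaves)))
        k = random.choice([2, 3])
        deg[v] = k + 1                    # leaf becomes internal: parent edge + k children
        for _ in range(k):
            deg[nxt] = 1
            leaves.append(nxt)
            nxt += 1
    n_leaves = sum(d == 1 for d in deg.values())
    n_internal = sum(d >= 3 for d in deg.values())
    assert 2 not in deg.values()
    assert n_leaves > n_internal
```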
{ "language": "en", "url": "https://math.stackexchange.com/questions/345534", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Being ready to study calculus Some background: I have a degree in computer science, but the math was limited and this was 10 years ago. High school was way before that. A year ago I relearnt algebra (factoring, solving linear equations, etc). However, I have probably forgotten some of that. I never really studied trigonometry properly. I want to self study calculus and other advanced math, but I feel there are some holes that I should fill before starting. I planned on using MIT OCW for calculus but they don't have a revision course. Is there a video course or book that covers all of this up to calculus? (No textbooks with endless exercises please.) I would like to complete this in a few weeks. Given my background, I think this is possible.
The lecture notes by William Chen cover the requested material nicely. The Trillia Group distributes good texts too.
{ "language": "en", "url": "https://math.stackexchange.com/questions/345600", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10", "answer_count": 3, "answer_id": 1 }
Minimize $\sum a_i^2 \sigma^2$ subject to $\sum a_i = 1$ $$\min_{a_i} \sum_{i=1}^{n} {a_i}^2 \sigma^2\text{ such that }\sum_{i=1}^{n}a_i=1$$ and $\sigma^2$ is a scalar. The answer is $a_i=\frac{1}{n}$. I tried Lagrangian method. How can I get that answer?
$\displaystyle \sum_{i=1}^{n}(x-a_i)^2\ge0,\forall x\in \mathbb{R}$ $\displaystyle \Rightarrow \sum_{i=1}^{n}(x^2+a_i^2-2xa_i)\ge0$ $\displaystyle \Rightarrow nx^2+\sum_{i=1}^{n}a_i^2-2x\sum_{i=1}^{n}a_i\ge0$ Now we have a quadratic in $x$ which is always greater than or equal to zero, which implies that the quadratic either has two equal real roots or two complex roots. This implies that the discriminant is less than or equal to zero. Discriminant $=\displaystyle D=4\left(\sum_{i=1}^{n}a_i\right)^2-4n\sum_{i=1}^{n}a_i^2\le0$ $\displaystyle \Rightarrow\left(\sum_{i=1}^{n}a_i\right)^2-n\sum_{i=1}^{n}a_i^2\le0$ $\displaystyle \Rightarrow 1-n\sum_{i=1}^{n}a_i^2\le 0$ $\displaystyle \Rightarrow \frac{1}{n}\le \sum_{i=1}^{n}a_i^2$ Equality holds if the quadratic has a repeated real root. But then $\displaystyle \sum_{i=1}^{n}(x-a_i)^2=0$ for some $x\in \mathbb{R}$ $\Rightarrow x=a_i,\forall 1\le i\le n$ $\Rightarrow \displaystyle nx=\sum _{i=1}^{n}a_i=1$ $\Rightarrow x=a_i=\frac{1}{n},\forall 1\le i\le n$ Now as $\sigma^2\ge 0$, the minimum of $\sum_{i=1}^{n}a_i^2\sigma^2$ is attained when $\sum_{i=1}^{n}a_i^2$ is also minimum. I think this is a much better and more elementary solution than using Lagrange multipliers.
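Since the question mentions the Lagrangian method: stationarity of $\sigma^2\sum a_i^2-\lambda\left(\sum a_i-1\right)$ gives $2a_i\sigma^2=\lambda$ for every $i$, so all $a_i$ are equal and the constraint forces $a_i=1/n$. A sympy sketch for $n=4$:

```python
import sympy as sp

n = 4
a = sp.symbols(f'a1:{n + 1}', real=True)
lam, sigma2 = sp.symbols('lam sigma2', positive=True)

L = sigma2 * sum(x**2 for x in a) - lam * (sum(a) - 1)
eqs = [sp.diff(L, x) for x in a] + [sum(a) - 1]   # stationarity + constraint
sol = sp.solve(eqs, list(a) + [lam], dict=True)[0]

assert all(sol[x] == sp.Rational(1, n) for x in a)
assert sol[lam] == 2 * sigma2 / n
```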
{ "language": "en", "url": "https://math.stackexchange.com/questions/345645", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 2 }
Moduli Spaces of Higher Dimensional Complex Tori I know that the space of all complex 1-tori (elliptic curves) is modeled by $SL(2, \mathbb{R})$ acting on the upper half plane. There are many explicit formulas for this action. Similarly, I have been told that in the higher dimensional cases, the symplectic group $Sp(2n, \mathbb{R})$ acts on some such space to give the moduli space of complex structures on higher dimensional complex tori. Is there a reference that covers this case in detail and gives explicit formulas for the action? In the 1-dimensional case, all complex tori can be realized as algebraic varieties, but this is not the case for higher dimensional complex tori. Does the action preserve complex structures that come from abelian varieties?
You could check Complex Tori by Christina Birkenhake and Herbert Lange, published by Birkhäuser. In Chapter 7 (Moduli spaces), Section 4 (Moduli spaces of nondegenerate complex tori), Theorems 4.1 and 4.2 should answer your doubts. Hope it helps.
{ "language": "en", "url": "https://math.stackexchange.com/questions/345713", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
How to calculate $ \lim_{s \to \infty} \frac{ab + (ab)^2 + ... (ab)^s}{1 +ab + (ab)^2 + ... (ab)^s} $ I'm trying to calculate this limit expression: $$ \lim_{s \to \infty} \frac{ab + (ab)^2 + ... (ab)^s}{1 +ab + (ab)^2 + ... (ab)^s} $$ Both the numerator and denominator should converge, since $0 \leq a, b \leq 1$, but I don't know if that helps. My guess would be to use L'Hopital's rule and take the derivative with respect to $s$, which gives me: $$ \lim_{s \to \infty} \frac{s (ab)^{s-1}}{s (ab)^{s-1}} $$ but this still gives me the non-expression $\frac{\infty}{\infty}$ as the solution, and applying L'Hopital's rule repeatedly doesn't change that. My second guess would be to divide by some multiple of $ab$ and therefore simplify the expression, but I'm not sure how that would help, if at all. Furthermore, the solution in the tutorial I'm working through is listed as $ab$, but if I evaluate the expression that results from L'Hopital's rule, I get $1$ (obviously).
If $ab=1,$ $$ \lim_{s \to \infty} \frac{ab + (ab)^2 + ... (ab)^s}{1 +ab + (ab)^2 + ... (ab)^s}= \lim_{s \to \infty} \frac{s}{s+1}=\lim_{s \to \infty} \frac1{1+\frac1s}=1$$ If $ab\ne1, $ $$\lim_{s \to \infty} \frac{ab + (ab)^2 + ... (ab)^s}{1 +ab + (ab)^2 + ... (ab)^s}$$ $$=\lim_{s \to \infty} \frac{(ab)^{s+1}-ab}{(ab)^{s+1}-1}$$ If $|ab|<1, \lim_{s \to \infty}(ab)^s=0$ then $$\lim_{s \to \infty} \frac{(ab)^{s+1}-ab}{(ab)^{s+1}-1}=ab$$ Similarly if $|ab|>1,\lim_{s \to \infty}\frac1{(ab)^s}=0$ then $$\lim_{s \to \infty} \frac{(ab)^{s+1}-ab}{(ab)^{s+1}-1}=\lim_{s \to \infty} \frac{1-\frac1{(ab)^s}}{1-\frac1{(ab)^{s+1}}}=1$$
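All three regimes are easy to confirm with partial sums:

```python
def ratio(ab, s):
    num = sum(ab ** k for k in range(1, s + 1))
    den = sum(ab ** k for k in range(0, s + 1))
    return num / den

assert abs(ratio(0.3, 200) - 0.3) < 1e-12          # |ab| < 1: limit is ab
assert ratio(1.0, 10**5) == 10**5 / (10**5 + 1)    # ab = 1: s/(s+1) -> 1
assert abs(ratio(1.7, 200) - 1.0) < 1e-12          # |ab| > 1: limit is 1
```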
{ "language": "en", "url": "https://math.stackexchange.com/questions/345766", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 1 }
Finitely additive probabilities and Integrals My question is the following: if $(\Omega, \mathcal{A}, P)$ is a probabilistic space with $P$ simply additive (not necessarily $\sigma$-additive) and $f,g$ two real valued, positive, bounded, $\mathcal{A}$-measurable function, then $\int f+g\,dP=\int f\,dP+\int g\,dP$? The definition of $P$ simply additive is the following (I took it from Example for finitely additive but not countably additive probability measure): * *For each $E \subset \Omega$, $0 \le P(E) \le 1$ *$P(\Omega) = 1$ *If $E_1$ and $E_2$ are disjoint subsets $P(E_1 \cup E_2) = P(E_1) + P(E_2)$.
You would need to define the meaning of the integral for a finitely additive measure. I am not sure if there is a standard and well-established definition in that context, and I think the notion will generally not be very well behaved. For instance, suppose you allow not all sets to have a measure, but allow measures on $\mathbb{N}$ to be defined only on finite and cofinite sets (as with normal measures, you only define them on $\sigma$-algebras). Then, you have a nice measure $\mu$ that is $0$ on finite sets and $1$ on cofinite sets. Let $f(n) := n$, or $g(n) = n \pmod{2}$. What values would you assign to $\int f $ and $\int g$ ? That being said, linearity is a "must" for all notions of integrals I have ever seen. I am far from being an expert, but (variously generalised) integrals are things that take a map, send it to a number, and always (1) preserve positivity and (2) are linear. Edit: If you are referring to the example of ultrafilter measures mentioned under the link, they do lead to well defined "integrals". That is, you can define $\int f = c$ if and only if the set $$\{ n \ : \ f(n) \in (c-\epsilon, c+\epsilon) \}$$ has measure $1$, or equivalently belongs to the ultrafilter. This gives you a well defined integral, that has all the usual nice properties (except Fubini's theorem, perhaps).
{ "language": "en", "url": "https://math.stackexchange.com/questions/345852", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Finding equilibrium with two dependent variables I was thinking while in the shower. What if I wanted to set two hands of the clock so that the long hand is at a golden angle to the short hand. I thought, set the short hand (hour hand) at 12, and then set the long hand (minute hand) at what ever is a golden angle to 12. I realized tho, that the short hand moves at 1/12 the speed of the long hand, so I would need to move the long hand 1/12 whatever the long's angle from 12 is. However, doing this will effect the angle between the two hands, so I would need to more the long hand further. But again, I would need to move the short hand further because the long hand was moved further, and this creates a vicious circle until some kind of equilibrium is found. Rather than doing this through trial and error, there must be a way to solve this problem mathematically. Maybe with an equation? My math isn't very good, but I'm very interested in math, so this is why I'm posting this question. Could someone give me the solution to this problem while explaining it very throughly? Thanks for reading. I'm interested in seeing what answers pop up on this question. :) PS: Please help me with the tags for this question, as I don't know which tags would be appropriate.
Start at 0:00 where the two hands are on the 12. You know that the long hand advances 12 times faster than the short one, so if the long hand is at an angle $x$, the angle between the two hands is $x - x/12 = 11x/12$. So you just need to place the long hand at an angle $12 \alpha/11$ so that the angle between the two is $\alpha$.
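Worked example with the golden angle that motivated the question, $\alpha = 360°(1-1/\varphi)\approx 137.5°$: the minute hand lands almost exactly on the 25-minute mark.

```python
import math

phi = (1 + math.sqrt(5)) / 2
alpha = 360 * (1 - 1 / phi)          # golden angle, ~137.5 degrees

minute_angle = 12 * alpha / 11       # place the long hand here
hour_angle = minute_angle / 12       # the short hand moves at 1/12 the speed

assert math.isclose(minute_angle - hour_angle, alpha)
assert math.isclose(minute_angle / 6, 25, abs_tol=0.01)   # ~25 min past 12:00
```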
{ "language": "en", "url": "https://math.stackexchange.com/questions/345895", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Prove that if $\mathcal P(A) \cup \mathcal P(B)= \mathcal P(A\cup B)$ then either $A \subseteq B$ or $B \subseteq A$. Prove that for any sets $A$ or $B$, if $\mathcal P(A) \cup \mathcal P(B)= \mathcal P(A\cup B)$ then either $A \subseteq B$ or $B \subseteq A$. ($\mathcal P$ is the power set.) I'm having trouble making any progress with this proof at all. I've assumed that $\mathcal P(A) \cup \mathcal P(B)= \mathcal P(A\cup B)$, and am trying to figure out some cases I can use to help me prove that either $A \subseteq B$ or $B \subseteq A$. The statement that $\mathcal P(A) \cup \mathcal P(B)= \mathcal P(A\cup B)$ seems to be somewhat useless though. I can't seem to make any inferences with it that yield any new information about any of the sets it applies to or elements therein. The only "progress" I seem to be able to make is that I can conclude that $A \subseteq A \cup B$, or that $B \subseteq A \cup B$, but I don't think this gives me anything I don't already know. I've tried going down the contradiction path as well but I haven't been able to find anything there either. I feel like I am missing something obvious here though...
Hint: Try instead to prove the contrapositive: If $A \nsubseteq B$ and $B \nsubseteq A$, then $\mathcal{P} ( A ) \cup \mathcal{P} ( B ) \neq \mathcal{P} ( A \cup B )$. Remember that $E \nsubseteq F$ means that there is an element of $E$ which is not an element of $F$.
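The contrapositive can be confirmed by brute force on small sets (using Python's `itertools` powerset recipe); the witness is any subset mixing an element of $A\setminus B$ with one of $B\setminus A$:

```python
from itertools import chain, combinations

def powerset(s):
    s = list(s)
    return {frozenset(c)
            for c in chain.from_iterable(combinations(s, r)
                                         for r in range(len(s) + 1))}

A, B = {1, 2}, {2, 3}                     # A is not a subset of B, nor B of A
lhs = powerset(A) | powerset(B)
rhs = powerset(A | B)

assert lhs != rhs
assert frozenset({1, 3}) in rhs - lhs     # mixes A \ B with B \ A
```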
{ "language": "en", "url": "https://math.stackexchange.com/questions/345978", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 3, "answer_id": 0 }
History of Conic Sections Recently, I came to know that ancient Greeks had already studied conic sections. I find myself wondering if they knew about things like directrix or eccentricity. (I mean familiar with these concepts in the spirit not in terminology). This is just the appetizer. What I really want to understand is what will make someone even think of these (let me use the word) contrived constructions for conic sections. I mean let us pretend for a while that we are living in the $200$ BC period. What will motivate the mathematicians of our time (which is $200$ BC) study the properties of figures that are obtained on cutting a cone at different angles to its vertical? Also, what will lead them to deduce that if the angle of cut is acute then the figure obtained has the curious property that for any point on that figure the sum of its distances from some $2$ fixed points a constant. And in the grand scheme of things how do our friend mathematicians deduce the concepts of directrix and eccentricity (I am not sure if this was an ancient discovery, but in all, yes I will find it really gratifying to understand the origin of conic sections). Please shed some light on this whenever convenient. I will really find it helpful. Thanks
There are several ideas about where it might have come from. One such idea is the construction of burning mirrors, for which a parabola is the best shape, because it concentrates the light in a single point, and the distance between the mirror and the point can be calculated by the use of geometry (see Diocles' "On Burning Mirrors"; I can really recommend this book for its introduction alone). Conics were also useful in the construction of different sun dials. I have researched the topic quite a bit, but sadly I am yet to understand how they "merged" all these seemingly unrelated topics into cutting a cone. Most likely the lost work "On Solid Loci" by Euclid would provide some more insight.
{ "language": "en", "url": "https://math.stackexchange.com/questions/346046", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 4, "answer_id": 3 }
Is $\mathrm{GL}_n(K)$ divisible for an algebraically closed field $K?$ This is a follow-up question to this one. To reiterate the definition, a group $G$ (possibly non-abelian) is divisible when for all $k\in \Bbb N$ and $g\in G$ there exists $h\in G$ such that $g=h^k.$ Let $K$ be an algebraically closed field. For which $n$ is $\mathrm{GL}_n(K)$ divisible? (It is clearly true for $n=0$ and $n=1$.) The linked question is about the case of $K=\Bbb C$ and the answer is "for all $n$" there.
${\rm GL}(n,K)$ is not divisible when $K$ has finite characteristic $p$ and $n >1.$ The maximum order of an element of $p$-power order in ${\rm GL}(n,K)$ is $p^{e+1},$ where $p^{e} < n \leq p^{e+1}$ ($e$ a non-negative integer). There is an element of that order, and it is not the $p$-th power of any element of ${\rm GL}(n,K).$
{ "language": "en", "url": "https://math.stackexchange.com/questions/346109", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 1, "answer_id": 0 }
Darboux integral too sophisticated for Calculus 1 students? I strongly prefer Darboux's method to the one commonly found in introductory level calculus texts such as Stewart, but I'm worried that it might be a bit overwhelming for my freshman level calculus class. My aim is to develop the theory, proving all of the results we need such as FTC, substitution rule, etc. If I can get everyone to buy into the concepts of lub and glb, this should be a fairly neat process. But that's potentially a big "if". Even worse, maybe they just won't care about the theory since they know I will not ask them to prove anything in an assignment. It seems to me that there is very little middle ground here. You have to present integration the right way, paying attention to all the details, or accept a decent amount of sloppiness. In either case the class will likely lose interest. Questions: Would you attempt the Darboux method? I would be using Spivak's Calculus / Rudin's Real Analysis as guides. I suppose there's no way of dumbing this down. Otherwise, could you recommend a good source for the standard Riemann integral? Stewart just doesn't do it for me. Thanks
Part of the point of college is to learn the craft, not just the subject. But of all the things you could give the students by a single deviation from the textbook, why Darboux integrals? Not infinite series, recognizing measures in an integral, or linear maps? Each of those would greatly clarify foundations, allowing students to remake calculus after their intuitions.
{ "language": "en", "url": "https://math.stackexchange.com/questions/346177", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 2 }
Finding a subspace whose intersections with other subpaces are trivial. On p.24 of the John M. Lee's Introduction to Smooth Manifolds (2nd ed.), he constructs the smooth structure of the Grassmannian. And when he tries to show Hausdorff condition, he says that for any 2 $k$-dimensional subspaces $P_1$, $P_2$ of $\mathbb{R}^n$, it is always possible to find a $(n-k)$-dimensional subspace whose intersection with both $P_1$ and $P_2$ are trivial. My question: I think it is also intuitively obvious that we can always find $(n-k)$-dimensional subspace whose intersection with m subspaces $P_1,\ldots,P_m$ are trivial, which is a more generalized situation. But I can't prove it rigorously. Could you help me for this generalized case? Thank you.
For $j=1,2,\ldots,m$, let $B_j$ be a basis of $P_j$. It suffices to find a set of vectors $S=\{v_1,v_2,\ldots,v_{n-k}\}$ such that $S\cup B_j$ is a linearly independent set of vectors for each $j$. We will begin with $S=\phi$ and put vectors into $S$ one by one. Suppose $S$ already contains $i<n-k$ vectors. Now consider a vector $v_{i+1}=v_{i+1}(x)$ of the form $(1,x,x^2,\ldots,x^{n-1})^T$. For each $j$, there are at most $n-1$ different values of $x$ such that $v_{i+1}(x)\in\operatorname{span}\left(\{v_1,v_2,\ldots,v_i\}\cup B_j\right)$, otherwise there would exist a non-invertible Vandermonde matrix that corresponds to distinct interpolation nodes. Therefore, there exist some $x$ such that $v_{i+1}(x)\notin\operatorname{span}\left(\{v_1,v_2,\ldots,v_i\}\cup B_j\right)$ for all $j$. Put this $v_{i+1}$ into $S$, $S\cup B_j$ is a linearly independent set. Continue in this manner until $S$ contains $n-k$ vectors. The resulting $\operatorname{span}(S)$ is the subspace we desire.
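The moment-curve argument translates directly into a greedy numpy routine (floating-point rank checks, so only a sketch; `complement_avoiding` is my own name for it):

```python
import numpy as np

def complement_avoiding(bases, n):
    """Build n - k vectors (1, x, ..., x^(n-1)) whose span meets each
    span(B_j) trivially, following the greedy proof above."""
    k = bases[0].shape[1]
    S = np.empty((n, 0))
    x = 0.0
    while S.shape[1] < n - k:
        x += 1.0                                        # try nodes x = 1, 2, 3, ...
        v = np.vander([x], n, increasing=True).T        # column (1, x, ..., x^(n-1))
        if all(np.linalg.matrix_rank(np.hstack([S, B, v])) == S.shape[1] + k + 1
               for B in bases):
            S = np.hstack([S, v])
    return S

n, k = 5, 2
rng = np.random.default_rng(0)
bases = [rng.standard_normal((n, k)) for _ in range(3)]  # three random 2-dim P_j
S = complement_avoiding(bases, n)
for B in bases:
    # span(S) intersects span(B_j) trivially: their union is independent
    assert np.linalg.matrix_rank(np.hstack([S, B])) == n
```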
{ "language": "en", "url": "https://math.stackexchange.com/questions/346252", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 2, "answer_id": 1 }
Isomorphism of measurable spaces Hi, can you help me to understand this proposition and its proof? Definition 24 is: A measurable space $(T, \mathcal{T})$ is said to be separable if there exists a sequence $(A_n)$ in $\mathcal{T}$ which generates $\mathcal{T}$ and such that the functions $\chi_{A_n}$ separate the points of $T$. Please. Thank you
Clearly, $h:T\to h(T)$ is a surjection. The fact that $h$ is an injection follows from the fact that $\{1_{A_n}\}_{n\in \Bbb N}$ separate points of $T$. That is, if $t\neq s$ then $h(t)\neq h(s)$ since for some $n$ it holds that there exists a separation indicator function, that is $1_{A_n}(t)\neq 1_{A_n}(s)$. I didn't get your second confusion, could you elaborate?
{ "language": "en", "url": "https://math.stackexchange.com/questions/346311", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Solving a tricky equation involving logs How can you solve $$a(1 - 1/c)^{a - 1} = a - b$$ for $a$? I get $(a-1)\ln(1-1/c) = \ln(1-b/a)$ and then I am stuck. All the variables are real and $c>a>b>1$.
You have to solve this numerically, since no standard function will do it. To get an initial value, in $a(1 - 1/c)^{a - 1} = a - b$, use the first two terms of the binomial theorem to get $(1 - 1/c)^{a - 1} \approx 1-(a-1)/c$. This gives $a(1-(a-1)/c) \approx a-b$ or $a(c-a+1)\approx ac-bc$ or $ac-a^2+a \approx ac-bc$ or $a^2-a \approx bc$. Completing the square, $a^2-a+1/4 \approx bc$, $(a-1/2)^2 \approx bc$, $a \approx \sqrt{bc}+1/2$.
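Numerically this is straightforward with bisection (a sketch; for the sample values $b=2$, $c=10$ the left side minus the right changes sign on $(b,c)$, which gives a bracket):

```python
import math

def solve_a(b, c, tol=1e-12):
    f = lambda a: a * (1 - 1 / c) ** (a - 1) - (a - b)
    lo, hi = b, c                      # f(b) > 0 and f(c) < 0 for these values
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if f(mid) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

b, c = 2.0, 10.0
a = solve_a(b, c)
assert b < a < c
assert math.isclose(a * (1 - 1 / c) ** (a - 1), a - b, rel_tol=1e-9)
```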
{ "language": "en", "url": "https://math.stackexchange.com/questions/346392", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Easy way to simplify this expression? I'm teaching myself algebra 2 and I'm at this expression (I'm trying to find the roots): $$ x=\frac{-1-2\sqrt{5}\pm\sqrt{21-4\sqrt{5}}}{4} $$ My calculator gives $ -\sqrt{5} $ and $ -\frac12 $ and I'm wondering how I would go about simplifying this down without a calculator. Is there a relatively painless way to do this that I'm missing? Or is it best to just leave these to the calculator? Thanks!
You'll want to try to write $21-4\sqrt 5$ as the square of some number $a+b\sqrt 5$. In particular, $$(a+b\sqrt 5)^2=a^2+2ab\sqrt 5+5b^2,$$ so we'll need $ab=-2$ and $21=a^2+5b^2$. Some quick trial and error shows us that $a=1,b=-2$ does the job.
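sympy's `sqrtdenest` automates exactly this trial-and-error, and confirms the two roots:

```python
import sympy as sp

inner = sp.sqrt(21 - 4 * sp.sqrt(5))
d = sp.sqrtdenest(inner)                 # since (1 - 2*sqrt(5))**2 = 21 - 4*sqrt(5)
assert d == 2 * sp.sqrt(5) - 1

roots = {sp.simplify((-1 - 2 * sp.sqrt(5) + s * d) / 4) for s in (1, -1)}
assert roots == {sp.Rational(-1, 2), -sp.sqrt(5)}
```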
{ "language": "en", "url": "https://math.stackexchange.com/questions/346475", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Stiff differential equation where Runge-Kutta $4$th order method can be broken Is there a stiff differential equation that cannot be solved by the Runge-Kutta 4th order method, but which has an analytical solution for testing?
Cleve Moler, in this note, gives an innocuous-looking DE he attributes to Larry Shampine that models flame propagation. The differential equation is $$y^\prime=y^2-y^3$$ with initial condition $y(0)=\frac1{h}$, and integrated over the interval $[0,2h]$. The exact solution of this differential equation is $$y(t)=\frac1{1+W((h-1)\exp(h-t-1))}$$ where $W(t)$ is the Lambert function, which is the inverse of the function $t\exp\,t$. (Whether the Lambert function is considered an analytical solution might well be up for debate, but I'm in the camp that considers it a closed form.) For instance, in Mathematica, the following code throws a warning about possible stiffness: With[{h = 40}, y /. First @ NDSolve[{y'[t] == y[t]^2 - y[t]^3, y[0] == 1/h}, y, {t, 0, 2 h}, Method -> {"ExplicitRungeKutta", "DifferenceOrder" -> 4}]]
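The breakdown can also be reproduced with a bare-bones classical RK4 in Python (my own sketch, no libraries). On the plateau the Jacobian is $f'(1)=-1$, so a fixed step must satisfy roughly $\Delta t \lesssim 2.8$ for RK4 stability, no matter how flat (and easy to approximate) the solution is; that step-size restriction is exactly what "stiff" means here:

```python
def flame(y):
    return y * y - y * y * y        # y' = y^2 - y^3

def rk4(f, y, t_end, dt):
    """Classical 4th-order Runge-Kutta with a fixed step."""
    t = 0.0
    while t < t_end:
        k1 = f(y)
        k2 = f(y + dt * k1 / 2)
        k3 = f(y + dt * k2 / 2)
        k4 = f(y + dt * k3)
        y += dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6
        t += dt
    return y

y0, T = 0.01, 200.0                 # delta = 0.01: ignition near t = 100, plateau y = 1
fine = rk4(flame, y0, T, 1.0)       # |f'(1)| * dt = 1, inside RK4's stability region
coarse = rk4(flame, y0, T, 4.0)     # outside it: the flat plateau destabilizes the method

assert abs(fine - 1.0) < 1e-6       # correct plateau value
assert not abs(coarse - 1.0) < 0.5  # blows up (typically to nan) despite a flat solution
```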
{ "language": "en", "url": "https://math.stackexchange.com/questions/346582", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }