How can I find two independent solutions for this ODE? Please help me find two independent solutions for $$3x(x+1)y''+2(x+1)y'+4y=0$$ Thanks from a beginner in ODEs.
Note that both $x=0$ and $x=-1$ are regular singular points (check them!). As @Edgar suggested, let $y=\sum_{n=0}^{\infty}a_nx^{n+s}$, and by writing the equation as $$x^2y''+\frac{2}{3}xy'+\frac{4x}{3(x+1)}y=0$$ we get: $$p(x)=\frac{2}{3},\; q(x)=\frac{4x}{3(x+1)}$$ and then $p(0)=\frac{2}{3},\; q(0)=0$. Now I suggest you set up the indicial equation (see http://www.math.ualberta.ca/~xinweiyu/334.1.10f/DE_series_sol.pdf) $$s(s-1)+sp(0)+q(0)=0$$ You get $s_1=0,\ s_2=\frac{1}{3}$. Can you do the rest from here?
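For readers who want to check the indicial roots symbolically, here is a minimal sketch (assuming sympy is available; the variable names are mine):

```python
# Solve the indicial equation s(s-1) + p(0)s + q(0) = 0 for the ODE above.
import sympy as sp

s, x = sp.symbols('s x')
p0 = sp.Rational(2, 3)               # p(x) = 2/3 is constant
q = 4*x / (3*(x + 1))                # q(x) = 4x / (3(x+1)), so q(0) = 0
indicial = s*(s - 1) + p0*s + q.subs(x, 0)
print(sp.solve(indicial, s))         # -> [0, 1/3]
```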
{ "language": "en", "url": "https://math.stackexchange.com/questions/269808", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Is $X_n = 1+\frac12+\frac{1}{2^2}+\cdots+\frac{1}{2^n}; \forall n \ge 0$ bounded? Is $X_n = 1+\frac12+\frac{1}{2^2}+\cdots+\frac{1}{2^n}; \forall n \ge 0$ bounded? I have to find an upper bound for $X_n$ and I can't figure it out; a lower bound can be $0$ or $1$, but does it have an upper bound?
$$1+\frac12+\frac{1}{2^2}+\cdots+\frac{1}{2^n}=1+\frac{1}{2} \cdot \frac{\left(\frac{1}{2}\right)^{n}-1}{\frac{1}{2}-1}.$$ The right-hand side equals $2-\left(\frac{1}{2}\right)^{n}<2$, so it is now obvious that the sequence is bounded.
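A quick numerical sanity check of that bound, in plain Python (the helper name is mine):

```python
# Partial sums X_n = 1 + 1/2 + ... + 1/2^n equal 2 - (1/2)^n, hence X_n < 2.
def X(n):
    return sum(1 / 2**k for k in range(n + 1))

for n in (0, 1, 5, 20):
    print(n, X(n), 2 - (1/2)**n)     # the two columns agree, both below 2
```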
{ "language": "en", "url": "https://math.stackexchange.com/questions/269851", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 8, "answer_id": 5 }
Reference for an integral formula Good morning, I'm reading a paper of W. Stoll in which the author uses some implicit facts (i.e. he states them without proofs or references) in measure theory. So I would like to ask the following question: Let $G$ be a bounded domain in $\mathbb{R}^n$ and $S^{n-1}$ the unit sphere in $\mathbb{R}^n.$ For each $a\in S^{n-1},$ define $L(a) = \{xa~:~ x\in \mathbb{R}\}.$ Denote by $L^n$ the $n$-dimensional Lebesgue measure. Is the following formula true? $$\int_{a\in S^{n-1}}L^1(G\cap L(a))\,da = L^n(G) = \mathrm{vol}(G).$$ Could anyone please show me a reference where there is a proof for this? If this formula is not true, how can we correct it? Thanks in advance, Duc Anh
This formula seems to be false. Consider the case of the unit disk in $\mathbb{R}^2$, $D^2$. This is obviously bounded. $L^1(D^2 \cap L(a)) = 2$ for any $a$ in $S^1$, as the radius of $D^2$ is 1, and the intersection of the line through the origin that goes through $a$ and $D^2$ has length 2. The integral on the left is therefore equal to $4\pi$, and $L^2(D^2)$ was known by the Greeks to be $\pi$. So your formula is like computing an integral in polar coordinates without multiplying the integrand by the determinant of the Jacobian.
{ "language": "en", "url": "https://math.stackexchange.com/questions/269906", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Abstract Algebra - Monoids I'm trying to find the identity of a monoid but all the examples I can find are not as complex as this one. (The defining image for the operation $\otimes$ on $\mathbb{R}^3$, originally hosted on gyazo.com, is no longer available.)
It's straightforward to show that $\otimes$ is an associative binary operation, and as others have pointed out, the identity of the monoid is $(1, 0, 1)$. However, $(\mathbb{R}^3, \otimes)$ is not a group, since for example $(0, 0, 0)$ has no inverse element.
{ "language": "en", "url": "https://math.stackexchange.com/questions/270019", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
Cantor's intersection theorem and Baire Category Theorem From an old post on Math StackExchange, I read a comment which goes as follows: "I like to think of the Baire Category Theorem as a spiced up version of Cantor's Intersection Theorem." My question: is it possible to derive the latter one using the former?
Do you have a copy of Rudin's Principles of Mathematical Analysis? If you do, then problems 3.21 and 3.22 outline how this is done. Quoting here: 3.21: Prove: If $(E_n)$ is a sequence of closed and bounded sets in a complete metric space $X$, if $E_n \supset E_{n+1}$, and if $\lim_{n\to\infty}\text{diam}~E_n=0$, then $\cap_1^\infty E_n$ consists of exactly one point. 3.22: Suppose $X$ is a complete metric space, and $(G_n)$ is a sequence of dense open subsets of $X$. Prove Baire's theorem, namely, that $\cap_1^\infty G_n$ is not empty. (In fact, it is dense in $X$.) Hint: Find a shrinking sequence of neighborhoods $E_n$ such that $\overline{E_n}\subset G_n$, and apply Exercise 21. Proving density isn't actually much harder than proving nonemptiness.
{ "language": "en", "url": "https://math.stackexchange.com/questions/270084", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
How to find normal vector of line given point normal passes through Given a line L in three-dimensional space and a point P, how can we find the normal vector of L under the constraint that the normal passes through P?
Let the line and point have position vectors $\vec r=\vec a+\lambda \vec b$ ($\lambda$ is real) and $\vec p$ respectively. Set $(\vec r-\vec p).\vec b=0$ and solve for $\lambda$ to obtain $\lambda_0$. The normal vector is simply $\vec a+\lambda_0 \vec b-\vec p$.
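Here is a minimal numpy sketch of that recipe (the function and variable names are mine, not standard):

```python
# Solve (a + lam*b - p).b = 0 for lam, then return a + lam0*b - p,
# the vector from p to the foot of the perpendicular on the line.
import numpy as np

def normal_through_point(a, b, p):
    a, b, p = map(np.asarray, (a, b, p))
    lam0 = np.dot(p - a, b) / np.dot(b, b)
    return a + lam0 * b - p

# Line through the origin with direction (1,0,0); point (1,2,3).
print(normal_through_point([0, 0, 0], [1, 0, 0], [1, 2, 3]))  # [ 0. -2. -3.]
```

The result is orthogonal to the direction vector $(1,0,0)$, as required.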
{ "language": "en", "url": "https://math.stackexchange.com/questions/270151", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
$ \displaystyle\lim_{n\to\infty}\frac{1}{\sqrt{n^3+1}}+\frac{2}{\sqrt{n^3+2}}+\cdots+\frac{n}{\sqrt{n^3+n}}$ $$ \ X_n=\frac{1}{\sqrt{n^3+1}}+\frac{2}{\sqrt{n^3+2}}+\cdots+\frac{n}{\sqrt{n^3+n}}$$ Find $\displaystyle\lim_{n\to\infty} X_n$ using the squeeze theorem. I tried this approach: $$ \frac{1}{\sqrt{n^3+1}}\le\frac{1}{\sqrt{n^3+1}}<\frac{n}{\sqrt{n^3+1}} $$ $$ \frac{1}{\sqrt{n^3+1}}<\frac{2}{\sqrt{n^3+2}}<\frac{n}{\sqrt{n^3+1}}$$ $$\vdots$$ $$\frac{1}{\sqrt{n^3+1}}<\frac{n}{\sqrt{n^3+n}}<\frac{n}{\sqrt{n^3+1}}$$ Adding these inequalities: $$\frac{n}{\sqrt{n^3+1}}\leq X_n<\frac{n^2}{\sqrt{n^3+1}}$$ And this doesn't help me much. How should I proceed?
Hint: use $\frac{i}{\sqrt{n^3+n}} \le \frac{i}{\sqrt{n^3+i}} \le \frac{i}{\sqrt{n^3+1}}$. I even think the Squeeze theorem can be avoided.
{ "language": "en", "url": "https://math.stackexchange.com/questions/270216", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 2 }
Condition on $\epsilon$ to make $f$ injective From the condition, $g$ is uniformly continuous; $x\mapsto x$ is also uniformly continuous and one-one, but we don't know whether $g$ is one-one or not, so will $\epsilon=0$ work? Maybe I am being vague. Thank you.
We have $$ f'(x)=1+\varepsilon g'(x) \quad \forall\ x \in \mathbb{R}. $$ For $\varepsilon \ge 0$ we have $$ 1-\varepsilon M \le f'(x)\le 1+\varepsilon M \quad \forall\ x \in \mathbb{R}. $$ If we choose $$ \varepsilon \in [0,1/M), $$ then $f'>0$, i.e. $f$ is strictly increasing and therefore one-to-one. For $\varepsilon < 0$ we have $$ 1+\varepsilon M \le f'(x)\le 1-\varepsilon M \quad \forall\ x \in \mathbb{R}. $$ If we choose $$ \varepsilon \in (-1/M,0), $$ then $f'<0$, i.e. $f$ is strictly decreasing and therefore one-to-one. Hence, if $|\varepsilon|<1/M$ then $f$ is injective.
{ "language": "en", "url": "https://math.stackexchange.com/questions/270275", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
If both roots of the quadratic equation are equal then prove that If both roots of the equation $(a-b)x^2+(b-c)x+(c-a)=0$ are equal, prove that $2a=b+c$. Things that should be known: * The roots of a quadratic equation are given by $$\frac{-b \pm \sqrt{d}}{2a},$$ where $$d=b^2-4ac.$$ * When the equation has equal roots, $d=b^2-4ac=0$. * That means $d=(b-c)^2-4(a-b)(c-a)=0$
As the two roots are equal, the discriminant must be equal to $0$. $$(b-c)^2-4(a-b)(c-a)=(a-b+c-a)^2-4(a-b)(c-a)=\{a-b-(c-a)\}^2=(2a-b-c)^2=0 \iff 2a-b-c=0$$ Alternatively, solving for $x,$ we get $$x=\frac{-(b-c)\pm\sqrt{(b-c)^2-4(a-b)(c-a)}}{2(a-b)}=\frac{c-b\pm(2a-b-c)}{2(a-b)}=\frac{c-a}{a-b}, 1$$ where $a-b\ne 0$, since $a=b$ would make the equation linear. So, $$\frac{c-a}{a-b}=1\implies c-a=a-b\implies 2a=b+c$$
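A one-line symbolic confirmation of the key factorization (assuming sympy):

```python
# The discriminant of (a-b)x^2 + (b-c)x + (c-a) is a perfect square.
import sympy as sp

a, b, c = sp.symbols('a b c')
print(sp.factor((b - c)**2 - 4*(a - b)*(c - a)))   # -> (2*a - b - c)**2
```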
{ "language": "en", "url": "https://math.stackexchange.com/questions/270344", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
Topology - Open and closed sets in two different metric spaces I am currently working through some topology problems, and I would like to confirm my results just for some peace of mind! Statement: Let $(X, d_x)$ be a metric space, and $Y\subset X$ a non-empty subset. Define a metric $d_y$ on $Y$ by restriction. Then, each subset $A\subset Y $ can be viewed as a subset of $X$. Q 1: If $A$ is open in $Y$, is $A$ open in $X$?
This I found FALSE; example: Let $X =\mathbb{R}, Y = [0,1]$ and $A = [0, 1/2)$. $A$ is open in $Y$; however, $A$ is not open in $X$. Q 3: If $A$ is closed in $X$, then $A$ is closed in $Y$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/270424", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 1 }
Tensor Components In Barrett O'Neill's Semi-Riemannian Geometry there is a definition of tensor components: Let $\xi=(x^1,\dots ,x^n)$ be a coordinate system on $\upsilon\subset M$. If $A \in \mathscr I^r_s (M)$, the components of $A$ relative to $\xi$ are the real-valued functions $$A^{i_1\dots i_r}_{j_1\dots j_s}=A(dx^{i_1},\dots,dx^{i_r},\,\partial_{j_1},\dots,\partial_{j_s})$$ on $\upsilon$, where all indices run from $1$ to $n=\dim M$. By the definition above the $i$th component of $X$ relative to $\xi$ is $X(dx^i)$, which is interpreted as $dx^i(X)=X(x^i)$. I don't understand the last sentence. Because one-forms are $(0,1)$ tensors we could interpret them like $V(\theta)=\theta(V)$. So we can do the same thing here: $X(dx^i)=dx^i(X)$. But how did we write $dx^i(X)=X(x^i)$? Did I make a mistake?
There is a very nice intuitive explanation of this in Penrose's Road to Reality, ch. 14. As a quick summary, any vector field can be thought of as a directional derivative operator on scalar-valued functions, i.e. for every scalar-valued smooth function $f$ and vector field $X$, define the scalar field $$X(f) \triangleq p \mapsto \lim_{\epsilon \rightarrow 0} \frac{f(p+\epsilon X_p) - f(p)}\epsilon $$ at every point $p$. This action on all $f$ uniquely determines $X$. Similarly the 1-form $df$ acts on a vector field $X$ to yield the linear change in $f$ along $X$, i.e.: $$df(X) = X(f)$$ Substituting $x^i$ for $f$ then gives $dx^i(X) = X(x^i)$. In more detail, given a coordinate system $x^i$, exterior calculus says $$df = \sum_i \partial_{x^i}f\,dx^i$$ Combining the above $$X(f) = df(X) = \sum_i \partial_{x^i}f\,dx^i(X) = \sum_i \partial_{x^i}f \, X(x^i) = \sum_i X(x^i) \, \partial_{x^i} f \\ \therefore X = \sum_i X^i \partial_{x^i} \text{ where } X^i \triangleq X(x^i)$$ So $\partial_{x^i}$ form a basis for the space of directional derivative operators and thus also vector fields. The $dx^i$ then are the dual basis of $\partial_{x^i}$, since $$dx^i(\partial_{x^j}) = \partial_{x^j} x^i = \delta^i_j \\ dx^i(X) = X(x^i) = X^i$$ as required. To formalize this for the general Riemannian setting, you need to express it in terms of manifolds, charts and tangent spaces/bundles, with appropriate smoothness conditions on everything, but Penrose's explanation gives you a nice mental picture to start with. The book also has excellent diagrams. It is worth getting just for the differential geometry chapter.
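A small numerical illustration of the identity $df(X)=X(f)$, with an example of my own choosing (not from the book): take $f(x,y)=xy^2$ and $X=3\,\partial_x-\partial_y$ at the point $(1,2)$.

```python
# Compare df(X) = sum_i (df/dx^i) X^i with the finite-difference
# directional derivative (f(p + eps*X) - f(p))/eps.
import numpy as np

f = lambda p: p[0] * p[1]**2
p = np.array([1.0, 2.0])
X = np.array([3.0, -1.0])

grad = np.array([p[1]**2, 2*p[0]*p[1]])    # (df/dx, df/dy) at p
df_of_X = grad @ X                         # df(X)
eps = 1e-6
X_of_f = (f(p + eps*X) - f(p)) / eps       # X(f), approximately
print(df_of_X, X_of_f)                     # both approximately 8.0
```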
{ "language": "en", "url": "https://math.stackexchange.com/questions/270481", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 1 }
Integration of $x^3 \tan^{-1}(x)$ by parts I'm having a problem with this question. How would one integrate $$\int x^3\tan^{-1}x\,dx\text{ ?}$$ After trying too much I got stuck at this point. How would one integrate $$\int \frac{x^4}{1+x^2}\,dx\text{ ?}$$
You've done the hardest part. Now, the problem isn't so much about "calculus"; you simply need to recall what you've learned in algebra: $(1)$ Divide the numerator of the integrand, $\,{x^4}\,$, by its denominator, $\,{1+x^2}\,$, using polynomial long division (linked to serve as a reference). This will give you: $$\int \frac{x^4}{1+x^2}\,dx = \int (x^2 + \frac{1}{1+x^2} -1)\,dx=\;\;?$$ Alternatively: Notice also that $$\int \frac{x^4}{x^2 + 1}\,dx= \int \frac{[(x^4 - 1) + 1]}{x^2 + 1}\,dx$$ $$= \int \frac{(x^2 + 1)(x^2 - 1) + 1}{x^2 + 1} \,dx = \int x^2 - 1 + \frac{1}{x^2 + 1}\,dx$$ I trust you can take it from here?
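Both integrals can be checked with a CAS; a short sympy sketch (assuming it is installed):

```python
import sympy as sp

x = sp.symbols('x')
print(sp.integrate(x**4 / (1 + x**2), x))   # -> x**3/3 - x + atan(x)
print(sp.integrate(x**3 * sp.atan(x), x))   # the full original integral
```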
{ "language": "en", "url": "https://math.stackexchange.com/questions/270550", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
An ideal of the ring of regular functions of an affine variety. We assume that $\Bbb k$ is an algebraically closed field. Let $X \subset \Bbb A^n$ be an affine $\Bbb k$-variety; let's consider $ \mathfrak A_X \subset \Bbb k[t_1,...t_n]$ as the ideal of polynomials that vanish on $X$. Given a closed subset $Y\subset X$ we associate the ideal $ a_Y \subset k[X]$ defined by $ a_Y = \left\{ {f \in k[X];f = 0\,on\,Y} \right\} $. I'm reading Shafarevich's "Basic Algebraic Geometry" and it says that "It follows from the Nullstellensatz that $Y$ is the empty set if and only if $a_Y = k[X]$." But I don't know how to prove this. Maybe it's trivial, but I need help anyway. After knowing if the result is true here, I want to know if it's true in the case of quasiprojective varieties, but first the affine case =) It remains to prove that if $ Y\subset X $ then $$ Y = \emptyset \Rightarrow a_Y = k\left[ X \right] $$ For that side we need Hilbert's Nullstellensatz, but I don't know how to use it.
The Nullstellensatz is unnecessary here. If every function vanishes on a set $Y$ (including all constant functions) then $Y$ must obviously be empty; otherwise the function $f(x)=1$ would be non-vanishing at some point $x$. Conversely, if $Y$ is empty then every function vacuously vanishes at every point of $Y$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/270592", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Well-ordering the set of all finite sequences. Let $(A,<)$ be a well-ordered set. Using $<$, how can one define a well-order on the set of finite sequences from $A$? (I thought of using the lexicographic order.) Thank you!
The lexicographic order is fine, but you need to handle the case where one sequence extends another -- there the definition of the lexicographic order may break down. In this case you may want to require that the shorter sequence comes before the longer sequence. Generally speaking, if $\alpha$ and $\beta$ are two well-ordered sets, then we know how to well-order $\alpha\times\beta$ (lexicographically, or otherwise). We can define by recursion a well-order of $\alpha^n$ for all $n$, and then define the well-order on $\alpha^{<\omega}$ as comparison of length, and if the two sequences have the same length, $n$, then we can compare them by the well-order of $\alpha^n$.
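In code, this order is just "compare length first, then lexicographically"; a tiny Python sketch with tuples standing in for finite sequences:

```python
# Sort finite sequences over a well-ordered alphabet: length, then lex.
seqs = [(2,), (0, 1), (1,), (0,), (0, 0), ()]
print(sorted(seqs, key=lambda s: (len(s), s)))
# -> [(), (0,), (1,), (2,), (0, 0), (0, 1)]
```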
{ "language": "en", "url": "https://math.stackexchange.com/questions/270640", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
What's the difference between expected values in binomial distributions and hypergeometric distributions? The formula for the expected value in a binomial distribution is: $$E(X) = nP(s)$$ where $n$ is the number of trials and $P(s)$ is the probability of success. The formula for the expected value in a hypergeometric distribution is: $$E(X) = \frac{ns}{N}$$ where $N$ is the population size, $s$ is the number of successes available in the population and $n$ is the number of trials. $$E(x) = \left( \frac{s}{N} \right)n $$ $$P(s) = \frac{s}{N}$$ $$\implies E(x) = nP(s)$$ Why do both the distributions have the same expected value? Why doesn't the independence of the events have any effect on expected value?
For either one, let $X_i=1$ if there is a success on the $i$-th trial, and $X_i=0$ otherwise. Then $$X=X_1+X_2+\cdots+X_n,$$ and therefore by the linearity of expectation $$E(X)=E(X_1)+E(X_2)+\cdots +E(X_n)=nE(X_1). \tag{1}$$ Note that linearity of expectation does not require independence. In the hypergeometric case, $\Pr(X_i=1)=\frac{s}{N}$, where $s$ is the number of "good" objects among the $N$. This is because any object is just as likely to be the $i$-th one chosen as any other. So $E(X_i)=1\cdot\frac{s}{N}+0\cdot \frac{N-s}{N}=\frac{s}{N}$. It follows that $E(X)=n\frac{s}{N}$. Essentially the same proof works for the binomial distribution: both expectations follow from Formula $(1)$.
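A quick Monte Carlo check that the two means coincide (assuming numpy; the parameters are arbitrary):

```python
# Binomial(n, s/N) and Hypergeometric(s, N-s, n) have the same mean n*s/N.
import numpy as np

rng = np.random.default_rng(0)
N, s, n = 50, 20, 10
binom = rng.binomial(n, s/N, size=200_000)
hyper = rng.hypergeometric(s, N - s, n, size=200_000)
print(binom.mean(), hyper.mean(), n*s/N)   # all approximately 4.0
```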
{ "language": "en", "url": "https://math.stackexchange.com/questions/270712", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Find intersection of two 3D lines I have two lines $(5,5,4) (10,10,6)$ and $(5,5,5) (10,10,3)$ with the same $x$, $y$ and a difference in $z$ values. Could somebody please tell me how I can find the intersection of these lines. EDIT: By using the answer given by coffemath I was able to find the intersection point for the points given above. But I'm getting a problem for $(6,8,4) (12,15,4)$ and $(6,8,2) (12,15,6)$. I'm unable to calculate the common point for these points as it is resulting in zero. Any ideas to resolve this? Thanks, Kumar.
The direction numbers $(a,b,c)$ for a line in space may be obtained from two points on the line by subtracting corresponding coordinates. Note that $(a,b,c)$ may be rescaled by multiplying through by any nonzero constant. The first line has direction numbers $(5,5,2)$ while the second line has direction numbers $(5,5,-2).$ Once one has direction numbers $(a,b,c)$, one can use either given point of the line to obtain the symmetric form of its equation as $$\frac{x-x_0}{a}=\frac{y-y_0}{b}=\frac{z-z_0}{c}.$$ Note that if one or two of $a,b,c$ are $0$ the equation for that variable is obtained by setting the top to zero. That doesn't happen in your case. Using the given point $(5,5,4)$ of the first line gives its symmetric equation as $$\frac{x-5}{5}=\frac{y-5}{5}=\frac{z-4}{2}.$$ And using the given point $(5,5,5)$ of the second line gives its symmetric form $$\frac{x-5}{5}=\frac{y-5}{5}=\frac{z-5}{-2}.$$ Now if the point $(x,y,z)$ is on both lines, the equation $$\frac{z-4}{2}=\frac{z-5}{-2}$$ gives $z=9/2$, so that the common value for the fractions is $(9/2-4)/2=1/4$. This value is then used to find $x$ and $y$. In this example the equations are both of the same form $(t-5)/5=1/4$ with solution $t=25/4$. So we may conclude the intersection point is $$(25/4,\ 25/4,\ 9/2).$$ ADDED CASE: The OP has asked about another case, which illustrates what happens when one of the direction numbers of one of the two lines is $0$. Line 1: points $(6,8,4),\ (12,15,4);$ directions $(6,7,0)$, "equation" $$\frac{x-6}{6}=\frac{y-8}{7}=\frac{z-4}{0},$$ where I put equation in quotes because of the division by zero, and as noted the zero denominator of the last fraction means $z=4$ (so $z$ is constant on line 1). Line 2: points $(6,8,2),\ (12,15,6);$ directions $(6,7,4)$, equation $$\frac{x-6}{6}=\frac{y-8}{7}=\frac{z-2}{4}.$$ Now since we know $z=4$ from the line 1 equation, we can use $z=4$ in $(z-2)/4$ of the line 2 equation, to get the common fraction value of $(4-2)/4=1/2$. Then from either line, $(x-6)/6=1/2$ so $x=9$, and $(y-8)/7=1/2$ so $y=23/2.$ So for these lines the intersection point is $(9,\ 23/2,\ 4).$ It should be pointed out that two lines in space generally do not intersect; they can be parallel or "skew". This would come out as some contradictory values in the above mechanical procedure.
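The same computation can be automated; here is a hedged numpy sketch of my own formulation, solving $p_1+t\,d_1=p_2+u\,d_2$ by least squares:

```python
import numpy as np

def intersect(p1, q1, p2, q2):
    """Each line is given by two points; returns the common point or None."""
    p1, q1, p2, q2 = map(np.asarray, (p1, q1, p2, q2))
    d1, d2 = q1 - p1, q2 - p2
    A = np.column_stack([d1.astype(float), -d2.astype(float)])
    (t, u), *_ = np.linalg.lstsq(A, (p2 - p1).astype(float), rcond=None)
    x1, x2 = p1 + t*d1, p2 + u*d2
    return x1 if np.allclose(x1, x2) else None   # None: parallel or skew

print(intersect((5,5,4), (10,10,6), (5,5,5), (10,10,3)))  # [6.25 6.25 4.5 ]
print(intersect((6,8,4), (12,15,4), (6,8,2), (12,15,6)))  # [ 9.  11.5  4. ]
```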
{ "language": "en", "url": "https://math.stackexchange.com/questions/270767", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "20", "answer_count": 3, "answer_id": 2 }
Involutions of a torus $T^n$. Let $T^n$ be a complex torus of dimension $n$ and $x \in T^n$. We have a canonical involution $-id_{(T^n,x)}$ on the torus $T^n$. I want to know for which $y \in T^n$, we have $-id_{(T^n,x)}=-id_{(T^n,y)}$ as involutions of $T^n$. My guess is, such $y$ must be a 2-torsion point of $(T^n,x)$ and there are $2^{2n}$ choices of such $y$. Am I right?
Yes, you are right: here is a proof (I have taken the liberty of slightly modifying your notations). Let $X=\mathbb C^n/\Lambda$ be the complex torus obtained by dividing out $\mathbb C^n$ by the lattice $\Lambda\subset \mathbb C^n$ ($\Lambda \cong \mathbb Z^{2n}$). This torus is an abelian Lie group, and this gives it much more structure than a plain complex manifold. Such a torus admits of the involution $-id=\iota _0: X\to X:x\mapsto -x$, a holomorphic automorphism of the complex manifold $X$. But for every $a\in X$ it also admits of the involution $\iota _a: X\to X:x\mapsto 2a-x$, which fixes $a$. Your question amounts to asking for which $a\in X$ we have $\iota_ a=\iota_0=-id$. This means $2a-x=-x$ for all $x\in X$ or equivalently $2a=0$. So, exactly as you conjectured, the required points $a\in X$ are the $2^{2n}$ two-torsion points of $X$, namely the images of $\Lambda/2$, the half-lattice points, under the projection morphism $\mathbb C^n\to X=\mathbb C^n/\Lambda$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/270840", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Intermediate Value Theorem and Continuity of derivative. Suppose that a function $f(x)$ is differentiable $\forall x \in [a,b]$. Prove that $f'(x)$ takes on every value between $f'(a)$ and $f'(b)$. If the above question is a misprint and wants to say "prove that $f(x)$ takes on every value between $f(a)$ and $f(b)$", then I have no problem using the intermediate value theorem here. If, on the other hand, it is not a misprint, then it seems to me that I can't use the Intermediate value theorem, as I can't see how I am authorised to assume that $f'(x)$ is continuous on $[a,b]$. Or perhaps there is another way to look at the problem?
This is not a misprint. You can indeed prove that $f'$ takes every value between $f'(a)$ and $f'(b)$. You cannot, however, assume that $f'$ is continuous. A standard example is $f(x) = x^2 \sin(1/x)$ when $x \ne 0$, and $0$ otherwise. This function is differentiable at $0$ but the derivative isn't continuous at it. To prove the problem you have, consider the function $g(x) = f(x) - \lambda x$ for any $\lambda \in (f'(a), f'(b))$. What do you know about $g'(a)$, $g'(b)$? What do you conclude about $g$ in the interval $[a, b]$?
{ "language": "en", "url": "https://math.stackexchange.com/questions/270919", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 0 }
Solve equation $\tfrac 1x (e^x-1) = \alpha$ I have the equation $\tfrac 1x (e^x-1) = \alpha$ for a positive $\alpha \in \mathbb{R}^+$ which I want to solve for $x\in \mathbb R$ (most of all I am interested in the solution $x > 0$ for $\alpha > 1$). How can I do this? My attempt I defined $\phi(x) = \tfrac 1x (e^x-1)$ which can be continuously extended to $x=0$ with $\phi(0)=1$ ($\phi$ is the difference quotient $\frac{e^x-e^0}{x-0}$ of the exponential function). Therefore it is an entire function. Its Taylor series is $$\phi(x) = \frac 1x (e^x-1) = \frac 1x (1+x+\frac{x^2}{2!}+\frac{x^3}{3!} + \ldots -1) = \sum_{n=0}^\infty \frac{x^n}{(n+1)!}$$ Now I can calculate the power series of the inverse function $\phi^{-1}$ with the methods of the Lagrange inversion theorem or Faà di Bruno's formula. Is there a better approach? Diagram of $\phi(x)=\begin{cases} \tfrac 1x (e^x-1) & ;x\ne 0 \\ 1 & ;x=0\end{cases}$:
I just want to complete Hans Engler's answer. He already showed $$x = -\frac 1\alpha -W\left(-\frac 1\alpha e^{-\tfrac 1\alpha}\right)$$ $\alpha > 0$ implies $-\tfrac 1\alpha \in \mathbb{R}^{-}$ and thus $-\tfrac 1\alpha e^{-\tfrac 1\alpha} \in \left[-\tfrac 1e,0\right)$ (the function $z\mapsto ze^z$ maps $\mathbb{R}^-$ to $\left[-\tfrac 1e,0\right)$). The Lambert $W$ function has two branches $W_0$ and $W_{-1}$ on the interval $\left[-\tfrac 1e,0\right)$, so we have the two solutions $$x_1 = -\frac 1\alpha -W_0\left(-\frac 1\alpha e^{-\tfrac 1\alpha}\right)$$ $$x_2 = -\frac 1\alpha -W_{-1}\left(-\frac 1\alpha e^{-\tfrac 1\alpha}\right)$$ One of $W_0\left(-\frac 1\alpha e^{-\tfrac 1\alpha}\right)$ and $W_{-1}\left(-\frac 1\alpha e^{-\tfrac 1\alpha}\right)$ will always be $-\tfrac 1\alpha$, as $W$ is the inverse of the function $z \mapsto ze^z$. This solution of $W$ would give $x=0$, which must be discarded for $\alpha \ne 1$, since $\phi(x)=1$ only for $x=0$. Case $\alpha=1$: For $\alpha=1$ we have $-\tfrac 1\alpha e^{-\tfrac 1\alpha}=-\tfrac 1e$ and thus $W_0\left(-\frac 1\alpha e^{-\tfrac 1\alpha}\right)=W_{-1}\left(-\frac 1\alpha e^{-\tfrac 1\alpha}\right)=-1$. This gives $\phi^{-1}(1)=0$ as expected. Case $\alpha > 1$: $\alpha > 1 \Rightarrow 0 < \tfrac 1 \alpha < 1 \Rightarrow -1 < -\tfrac 1 \alpha < 0$. Because $W_0(y) \ge -1$, it must be that $W_0\left(-\frac 1\alpha e^{-\tfrac 1\alpha}\right)=-\tfrac 1\alpha$, and so $$\phi^{-1}(\alpha) = -\frac 1\alpha -W_{-1}\left(-\frac 1\alpha e^{-\tfrac 1\alpha}\right)\text{ for } \alpha > 1$$ Case $\alpha < 1$: $0 < \alpha < 1 \Rightarrow \tfrac 1 \alpha > 1 \Rightarrow -\tfrac 1\alpha < -1$. Because $W_{-1}(y) \le -1$, we have $W_{-1}\left(-\frac 1\alpha e^{-\tfrac 1\alpha}\right)=-\tfrac 1\alpha$, and thus $$\phi^{-1}(\alpha) = -\frac 1\alpha -W_{0}\left(-\frac 1\alpha e^{-\tfrac 1\alpha}\right)\text{ for } \alpha < 1$$ Solution The solution is $$\phi^{-1}(\alpha) = \begin{cases} -\frac 1\alpha -W_{-1}\left(-\frac 1\alpha e^{-\tfrac 1\alpha}\right) & ; \alpha > 1 \\ 0 & ; \alpha = 1 \\-\frac 1\alpha -W_{0}\left(-\frac 1\alpha e^{-\tfrac 1\alpha}\right) & ; \alpha < 1 \end{cases}$$
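A numerical check with scipy's lambertw (assuming scipy is available; branch $k=-1$ for $\alpha>1$, $k=0$ for $\alpha<1$):

```python
import numpy as np
from scipy.special import lambertw

def phi_inverse(alpha):
    if alpha == 1:
        return 0.0
    k = -1 if alpha > 1 else 0
    w = lambertw(-np.exp(-1/alpha)/alpha, k=k)
    return float(np.real(-1/alpha - w))

for alpha in (0.5, 2.0, 10.0):
    x = phi_inverse(alpha)
    print(alpha, x, (np.exp(x) - 1)/x)   # last column reproduces alpha
```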
{ "language": "en", "url": "https://math.stackexchange.com/questions/270961", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
Finding the value of $a$ which minimizes the absolute maximum of $f(x)$ I know that this is an elementary problem in calculus and so it has a routine way of proof. I faced it and brought it up here just because it is one of R. A. Silverman's interesting problems. Let me learn your approaches as a student. Thanks. What value of $a$ minimizes the absolute maximum of the function $$f(x)=|x^2+a|$$ in the interval $[-1,1]$?
In my opinion, a calculus student would likely start by looking at the graph of this function for a few values of $a$. Then, they would notice that the absolute maxima occur at the endpoints or at $x=0$. So, looking at $f(-1)=f(1)=|1+a|$ and $f(0)=|a|$, I think it would be fairly easy for a calculus student to see that $a=-\frac{1}{2}$ minimizes the absolute maximum.
{ "language": "en", "url": "https://math.stackexchange.com/questions/271016", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 5, "answer_id": 1 }
Number of solutions for $x[1] + x[2] + \ldots + x[n] =k$ Omg this is driving me crazy, seriously; it's a subproblem for a bigger problem, and I'm stuck on it. Anyways, I need the number of ways to pick $x[1]$ amount of objects of type $1$, $x[2]$ amount of objects of type $2$, $x[3]$ amount of objects of type $3$, etc., such that $$x[1] + x[2] + \ldots x[n] = k\;.$$ Order of the objects of course doesn't matter. I know there is a simple formula for this, something like $\binom{k + n}n$, I just can't find it anywhere on Google.
This is a so-called stars-and-bars problem; the number that you want is $$\binom{n+k-1}{n-1}=\binom{n+k-1}k\;.$$ The linked article has a reasonably good explanation of the reasoning behind the formula.
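A brute-force confirmation of the count for small parameters, using only the standard library:

```python
# Number of nonnegative integer solutions of x1 + ... + xn = k
# equals C(n+k-1, k), the stars-and-bars formula.
from math import comb
from itertools import product

def count_solutions(n, k):
    return sum(1 for xs in product(range(k + 1), repeat=n) if sum(xs) == k)

n, k = 4, 6
print(count_solutions(n, k), comb(n + k - 1, k))   # both 84
```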
{ "language": "en", "url": "https://math.stackexchange.com/questions/271090", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 0 }
Check If a point on a circle is left or right of a point What is the best way to determine if a point on a circle is to the left or to the right of another point on that same circle?
If you mean in which direction you have to travel the shortest distance from $a$ to $b$ and assuming that the circle is centered at the origin then this is given by the sign of the determinant $\det(a\, b)$ where $a$ and $b$ are columns in a $2\times 2$ matrix. If this determinant is positive you travel in the counter clockwise direction. If it is negative you travel in the clockwise direction. If it is zero then both directions result in the same travel distance (either $a=b$ or $a$ and $b$ are antipodes).
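A minimal sketch of the determinant test in pure Python, for a circle centred at the origin (the function name is mine):

```python
def direction(a, b):
    det = a[0]*b[1] - a[1]*b[0]      # det of the 2x2 matrix with columns a, b
    if det > 0: return "counter-clockwise"
    if det < 0: return "clockwise"
    return "same point or antipodes"

print(direction((1, 0), (0, 1)))     # counter-clockwise
print(direction((1, 0), (0, -1)))    # clockwise
```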
{ "language": "en", "url": "https://math.stackexchange.com/questions/271149", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Find all the continuous functions such that $\int_{-1}^{1}f(x)x^ndx=0$. Find all the continuous functions on $[-1,1]$ such that $\int_{-1}^{1}f(x)x^ndx=0$ for all even integers $n$. Clearly, if $f$ is an odd function, then it satisfies this condition. What else?
Rewrite the integral as $$\int_0^1 [f(x)+f(-x)]x^n dx = 0,$$ which holds for all even $n$, and do a change of variables $y=x^2$ so that for all $m$, even or odd, we have $$ \int_0^1 \left[\frac{f(\sqrt{y})+f(-\sqrt{y})}{2 \sqrt{y}}\right] y^{m} dy = 0. $$ By the Stone Weierstrass Theorem, polynomials are uniformly dense in $C([0,1])$, and therefore, if the above is satisfied, we must have $$\frac{f(\sqrt{y})+f(-\sqrt{y})}{2 \sqrt{y}}=0$$ provided it is in $C([0,1])$. [EDIT: I just realized the singularity can be a problem for continuity--so perhaps jump to the density of polynomials in $L^1([0,1])$.] Substituting $x=\sqrt{y}$ and doing algebra, it follows $f$ is an odd function.
{ "language": "en", "url": "https://math.stackexchange.com/questions/271199", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 3, "answer_id": 2 }
Scaling of random variables Note: I will use $\mathbb{P}\{X \in dx\}$ to denote $f(x)dx$ where $f(x)$ is the pdf of $X$. While doing some homework, I came across a fault in my intuition. I was scaling a standard normally distributed random variable $Z$. Edit: I was missing the infinitesimals $dx$ and $dx/c$, so everything works out in the end. Thank you Jokiri! $$\mathbb{P}\{cX \in dx\} = \frac{e^{-x^2 / 2c^2}}{\sqrt{2\pi c^2}}$$ while $$\mathbb{P} \left\{X \in \frac{dx}{c}\right\} = \frac{e^{-(x/c)^2/2}}{\sqrt{2\pi}}$$ Could anyone help me understand why the following equality doesn't hold? Edit: it does, see edit below $$\mathbb{P}\{cX \in dx\} \ne \mathbb{P} \left\{X \in \frac{dx}{c}\right\}$$ I have been looking around, and it seems that equality of the cdf holds, though: $$\mathbb{P}\{cX < x \} = \mathbb{P}\left\{X < \frac xc\right\}.$$ Thank you in advance! This question came out of a silly mistake on my part. Let me attempt to set things straight. Let $Y = cX$. Let $X$ have pdf $f_X$ and $Y$ have pdf $f_Y$. $$\mathbb{E}[Y] = \mathbb{E}[cX] = \int_{-\infty}^\infty cx\,f_X(x)\mathop{dx} =\int_{-\infty}^\infty y\, f_X(y/c) \frac{dy}{c}$$ So, $f_Y(y) = \frac 1c f_X(y/c)$. Thank you for the help, and sorry for my mistake.
Setting aside rigour and following your intuition about infinitesimal probabilities of finding a random variable in an infinitesimal interval, I note that the left-hand sides of your first two equations are infinitesimal whereas the right-hand sides are finite. So these are clearly wrong, even loosely interpreted. They make sense if you multiply the right-hand sides by the infinitesimal interval lengths, $\mathrm dx$ and $\mathrm dx/c$, respectively, and then everything comes out right.
{ "language": "en", "url": "https://math.stackexchange.com/questions/271256", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 0 }
Can this function be expressed in terms of other well-known functions? Consider the function $$f(a) = \int^1_0 \frac {t-1}{t^a-1}dt$$ Can this function be expressed in terms of 'well-known' functions for integer values of $a$? I know that it can be relatively simply evaluated for specific values of $a$ as long as they are integers or simple fractions, but I am looking for a more general formula. I have found that it is quite easy to evaluate for some fractions--from plugging values into Wolfram I am fairly sure that $$f(1/n)=\sum^n_{k=1}\frac{n}{2n-k}$$ for all positive integer $n$. However, $f(a)$ gets quite complicated fairly quickly as you plug in increasing integer values of $a$, and I am stumped. Do any of you guys know if this function has a relation to any other special (or not-so-special) functions?
Mathematica says that $$f(a)=\frac{1}{a}\Big(\psi\left(\tfrac{2}{a}\right)-\psi\left(\tfrac{1}{a}\right)\Big)$$ where $$\psi(z)=\frac{\Gamma'(z)}{\Gamma(z)}$$ is the digamma function.
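The closed form is easy to test numerically (assuming scipy):

```python
# Compare the integral against (digamma(2/a) - digamma(1/a)) / a.
from scipy.integrate import quad
from scipy.special import digamma

def f(a):
    return quad(lambda t: (t - 1)/(t**a - 1), 0, 1)[0]

for a in (2, 3, 0.5):
    print(f(a), (digamma(2/a) - digamma(1/a))/a)   # columns agree
```

For $a=2$ both sides equal $\ln 2$, since the integrand simplifies to $\frac{1}{t+1}$.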
{ "language": "en", "url": "https://math.stackexchange.com/questions/271323", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 1 }
Mean time until adsorption for a well-mixed bounded random walk that suddenly allows for adsorption I have a random walk on some interval $[0, N]$ with probability $p$ of taking a $+1$ step, probability $(1-p)$ of taking a $-1$ step, and where we have that $p>(1-p)$. Initially the boundaries are reflecting, i.e. if the walker lands at $0$ or $N$, then it will remain in place until it takes a $+1$ or $-1$ step with probability $p$ and $(1-p)$, respectively. Let $\pi$ be the stationary distribution of this walk ( Finding the exact stationary distribution for a biased random walk on a bounded interval). Now, imagine that we allow the walk to proceed for an arbitrary number of time steps until the walker's probability distribution on the interval approaches the stationary distribution. We then "flip a switch" and specify that the boundary $0$ will be fully adsorbing and that the walker will have a probability of $f(x)$ of being moved to $0$ at any step. More specifically, we flip a coin to decide $p$ and $q$ as we normally would, then flip another coin to decide whether to adsorb with probability $f(x)$. Here, $f(x)$ is some function that depends on the position, $x$, of the walker. What is our mean time until adsorption? Is there a technical term for what $\pi$ is after we "flip the switch" (some kind of quasi-stationary distribution)? For example, is it correct to assume that: (1) Ignoring $f(x)$, that the probability of adsorbing at $(x=0)$ is $\approx \pi(0)$? (2) That the probability the walker will be moved to $0$ at any given step can be approximated as: $P[k \to 0]=\sum_{k=1}^{N} k(f(k)\pi(k))$ And thus that we can approximate the adsorption time as: $\mu(ads)=[(1-P[k \to 0])(1-\pi(0))]^{-1}$ I'm pretty sure, however, that the above approximation is wrong. Update - I would be happy for a solution that only looks at absorbance at $0$ without consideration of $f(x)$. Hopefully there can an analytic solution with this simplification?
This is not (yet) an answer: I just try to formalise the question asked by the OP. The stationary distribution before the adsorption process sets in can be obtained from the balance equations $p \pi(k) = (1-p)\pi(k+1)$. It is given by $$\pi(k) = \alpha^k \pi(0)= \frac{(1-\alpha) \alpha^k}{1-\alpha^{1+N}} ,\qquad \alpha = \frac{p}{1-p}.$$ This distribution serves as the start for the new random walk. The new random walk is given by the rate equations $$\begin{align}P_{i+1}(0) &= (1-f_1)(1-p) P_i (1) + \sum_{k=1}^N f_k P_i(k)\\ P_{i+1}(1)&= (1-f_2)(1-p) P_i(2)\\ P_{i+1} (2\leq j \leq N-1)&= (1-f_{j+1})(1-p) P_i (j+1) + (1-f_{j-1})p P_i(j-1) \\ P_{i+1} (N) &= (1-f_{N-1})p P_{i}(N-1) +(1-f_{N})p P_i(N) \end{align}$$ where $P_i(k)$ denotes the probability to be at step $i$ at position $k$. The initial condition is $P_0(k)=\pi(k)$. You are asking for the mean absorption time given by $$\mu=\sum_{i=0}^\infty i P_{i}(0) .$$ We define the transition matrix $M$ via $\mathbf{P}_{i+1} = M \mathbf{P}_i$ where we collect the probabilities into a vector $\mathbf{P} =(P_0, \ldots, P_N)$. We have also the initial vector of probabilities $\boldsymbol{\pi}$. With this notation, we can write $$\mu= \sum_{j=0}^\infty j\, e_0 M^j \boldsymbol{\pi} = e_0\, M (I-M)^{-2} \boldsymbol{\pi}.$$
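Pending an analytic treatment, the mean absorption time is easy to estimate by simulation. The sketch below encodes one plausible reading of the model (move first, then adsorb at $0$ with probability $f(x)$; hitting $0$ by a $-1$ step also absorbs); the parameters and the choice of $f$ are purely illustrative:

```python
import random

def absorption_time(N=20, p=0.6, f=lambda k: 0.01*k/20, rng=random):
    alpha = p/(1 - p)
    # start from the stationary distribution of the reflecting walk
    weights = [alpha**k for k in range(N + 1)]
    x = rng.choices(range(N + 1), weights=weights)[0]
    t = 0
    while True:
        t += 1
        step = 1 if rng.random() < p else -1
        x = min(max(x + step, 0), N)          # reflect at N
        if x == 0 or rng.random() < f(x):     # absorb at 0 or via f
            return t

print(sum(absorption_time() for _ in range(20_000)) / 20_000)
```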
{ "language": "en", "url": "https://math.stackexchange.com/questions/271378", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Calculate value of expression $(\sin^6 x+\cos^6 x)/(\sin^4 x+\cos^4 x)$ Calculate the value of the expression: $$ E(x)=\frac{\sin^6 x+\cos^6 x}{\sin^4 x+\cos^4 x} $$ for $\tan(x) = 2$. Here is the solution, but I don't know why $\sin^6 x + \cos^6 x = \cos^6 x(\tan^6 x + 1)$. Can you explain to me why they solved it like that?
You can factor the term $\cos^6(x)$ from $\sin^6(x)+\cos^6(x)$ in the numerator to find: $$\cos^6(x)\left(\frac{\sin^6(x)}{\cos^6(x)}+1\right)=\cos^6(x)\left(\tan^6(x)+1\right)$$ and factor $\cos^4(x)$ from the denominator to find: $$\cos^4(x)\left(\frac{\sin^4(x)}{\cos^4(x)}+1\right)=\cos^4(x)\left(\tan^4(x)+1\right)$$ and so $$E(x)=\frac{\cos^6(x)\left(\tan^6(x)+1\right)}{\cos^4(x)\left(\tan^4(x)+1\right)}$$ So if we assume $x\neq (2k+1)\pi/2$ then we can cancel the term $\cos^4(x)$ and find: $$E(x)=\frac{\cos^2(x)\left(\tan^6(x)+1\right)}{\left(\tan^4(x)+1\right)}$$ You know that $\cos^2(x)=\frac{1}{1+\tan^2(x)}$ as a trigonometric identity. So substitute it into $E(x)$ and find the value.
{ "language": "en", "url": "https://math.stackexchange.com/questions/271458", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 0 }
Ring of formal power series finitely generated as algebra? I'm asked if the ring of formal power series is finitely generated as a $K$-algebra. Intuition says no, but I don't know where to start. Any hint or suggestion?
Let $A$ be a non-trivial commutative ring. Then $A[[x]]$ is not finitely generated as a $A$-algebra. Indeed, observe that $A$ must have a maximal ideal $\mathfrak{m}$, so we have a field $k = A / \mathfrak{m}$, and if $k[[x]]$ is not finitely-generated as a $k$-algebra, then $A[[x]]$ cannot be finitely-generated as an $A$-algebra. So it suffices to prove that $k[[x]]$ is not finitely generated. Now, it is a straightforward matter to show that the polynomial ring $k[x_1, \ldots, x_n]$ has a countably infinite basis as a $k$-vector space, so any finitely-generated $k$-algebra must have an at most countable basis as a $k$-vector space. However, $k[[x]]$ has an uncountable basis as a $k$-vector space. Observe that $k[[x]]$ is obviously isomorphic to $k^\mathbb{N}$, the space of all $\mathbb{N}$-indexed sequences of elements of $k$, as $k$-vector spaces. But it is well-known that $k^\mathbb{N}$ is of uncountable dimension: see here, for example.
{ "language": "en", "url": "https://math.stackexchange.com/questions/271536", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 2, "answer_id": 1 }
How to read $A=[0,1]\times[a,5]$ I have this problem: consider the two sets $A$ and $B$ $$A=[0,1]\times [a,5]$$ and $$B=\{(x,y):x^2+y^2<1\}$$ What are the values of $a$ that guarantee the existence of a hyperplane that separates $A$ from $B$. Given a chosen value of $a$, find one of those hyperplanes. My main problem is axiomatics: how do I read: $A=[0,1]\times[a,5]$, what's with the $\times$? Thank you
The $\times$ stands for cartesian product, i.e. $X\times Y=\{(x,y)\mid x\in X, y\in Y\}$. Whether ordered pairs $(x,y)$ are considered a basic notion or are themselves defined (e.g. as Kuratowski pairs) usually does not matter. See also here.
{ "language": "en", "url": "https://math.stackexchange.com/questions/271670", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
$10$ distinct integers with sum of any $9$ a perfect square Do there exist $10$ distinct integers such that the sum of any $9$ of them is a perfect square?
I think the answer is yes. Here is a simple idea: Consider the system of equations $$S-x_i= y_i^2, \quad 1 \leq i \leq n\,,$$ where $S=x_1+\cdots+x_n$. Let $A$ be the coefficient matrix of this system. Then all the entries of $I+A$ are $1$, thus $\operatorname{rank}(I+A)=1$. This shows that $\lambda=0$ is an eigenvalue of $I+A$ of multiplicity $n-1$, and hence the remaining eigenvalue is $\lambda=\operatorname{tr}(I+A)=n.$ Hence the eigenvalues of $A$ are $\lambda_1=...=\lambda_{n-1}=-1$ and $\lambda_n=n-1$. This shows that $\det(A)=(-1)^{n-1}(n-1)$. Now pick distinct positive integers $y_1,\ldots,y_n$, each divisible by $n-1$. Then, by Cramer's rule, all the solutions to the system $$S-x_i= y_i^2, \quad 1 \leq i \leq n\,,$$ are integers (since when you calculate the determinant of $A_i$, you can pull an $(n-1)^2$ out of the $i$-th column, and you are left with a matrix with integer entries). The only thing left to do is proving that the $x_i$ are pairwise distinct. Let $i \neq j$. Then $$S-x_i =y_i^2 \,;\, S-x_j=y_j^2 \Rightarrow x_i-x_j=y_j^2-y_i^2 \neq 0 \,.$$ Remark You can easily prove that $\det(A)=(-1)^{n-1}(n-1)$ by row reduction: add all the other rows to the last one, pull the common factor $(n-1)$ from that row, and then subtract the last row from each of the remaining ones.
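The construction is completely explicit and can be carried out in a few lines (standard library only); with $n=10$ take $y_k=9k$, so $S=\sum y_k^2/(n-1)$ and $x_i=S-y_i^2$:

```python
# Every 9-term sum of the x_i equals S - x_i = y_i^2, a perfect square.
from math import isqrt
from itertools import combinations

n = 10
ys = [9*k for k in range(1, n + 1)]        # 9, 18, ..., 90
S = sum(y*y for y in ys) // (n - 1)        # integral by the divisibility choice
xs = [S - y*y for y in ys]                 # ten distinct integers

assert all(isqrt(sum(c))**2 == sum(c) for c in combinations(xs, 9))
print(xs)   # [3384, 3141, 2736, 2169, 1440, 549, -504, -1719, -3096, -4635]
```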
{ "language": "en", "url": "https://math.stackexchange.com/questions/271727", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
Proof of Irrationality of e using Diophantine Equations I was trying to prove that $e$ is irrational without using the typical series expansion. So, starting off, suppose $e = a/b$. Take the natural log, so $1 = \ln(a/b)$. Then $1 = \ln(a)-\ln(b)$. So unless I did something horribly wrong, showing the irrationality of $e$ is the same as showing that the equation $c = \ln(a)-\ln(b)$ or $1 = \ln(a) - \ln(b)$ (whichever one is easiest) has no solutions amongst the natural numbers. I feel like this would probably be easiest with infinite descent, but I'm in high school so my understanding of infinite descent is pretty hazy. If any of you can provide a proof of that, that would be awesome. EDIT: What I mean by "typical series expansion" is Fourier's proof http://en.wikipedia.org/wiki/Proof_that_e_is_irrational#Proof
One situation in which the existence of a solution to a Diophantine equation implies an irrationality result is this: If, for a positive integer $n$, there are positive integers $x$ and $y$ satisfying $x^2 - n y^2 = 1$, then $\sqrt n$ is irrational. I find this amusing, since this proof is more complicated than any of the standard proofs that $\sqrt n$ is irrational and also, since it does not assume the existence of solutions to $x^2 - n y^2 = 1$, requires the "user" to supply a solution. Solutions are readily supplied for 2 and 5, but are harder to find for 61.
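The famously large fundamental solution for $n=61$ is easy to verify once supplied (a plain Python check):

```python
# Bhaskara's classical solution of x^2 - 61*y^2 = 1.
x, y = 1766319049, 226153980
assert x*x - 61*y*y == 1
print("Pell solution found, so sqrt(61) is irrational:", (x, y))
```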
{ "language": "en", "url": "https://math.stackexchange.com/questions/271792", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "14", "answer_count": 3, "answer_id": 1 }
A problem on self-adjoint matrices and their eigenvalues Let $S = \{\lambda_1, \ldots , \lambda_n\}$ be an ordered set of $n$ real numbers, not all equal, but not all necessarily distinct. Pick out the true statements: a. There exists an $n × n$ matrix with complex entries, which is not self-adjoint, whose set of eigenvalues is given by $S$. b. There exists an $n × n$ self-adjoint, non-diagonal matrix with complex entries whose set of eigenvalues is given by $S$. c. There exists an $n × n$ symmetric, non-diagonal matrix with real entries whose set of eigenvalues is given by $S$. How can I solve this? Thanks for your help.
The general idea is to start with a diagonal matrix $[\Lambda]_{kj} = \begin{cases} 0, & j \neq k \\ \lambda_j, & j=k\end{cases}$ and then modify this to satisfy the conditions required. 1) Just set the triangular part of $\Lambda$ below the diagonal to $i$. Choose $[A]_{kj} = \begin{cases} 0, & j>k \\ \lambda_j, & j=k \\ i, & j<k\end{cases}$. 2) & 3) Suppose $\lambda_{j_0} \neq \lambda_{j_1}$. Then rotate the '$j_0$-$j_1$' part of $\Lambda$ so it is no longer diagonal. Let $[U]_{kj} = \begin{cases} \frac{1}{\sqrt{2}}, & (k,j) \in \{(j_0,j_0), (j_0,j_1),(j_1,j_1)\} \\ -\frac{1}{\sqrt{2}}, & (k,j) \in \{(j_1,j_0)\} \\ \delta_{kj}, & \text{otherwise} \end{cases}$. $U$ is real and $U^TU=I$. Let $A=U \Lambda U^T$. It is straightforward to check that $A$ is real, symmetric (hence self-adjoint) and $[A]_{j_0 j_1}=\tfrac{1}{2}(\lambda_{j_1}-\lambda_{j_0})\neq 0$, hence it is not diagonal.
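A numpy sketch of construction (2)/(3), with an illustrative choice $S=\{1,1,2,5\}$ and the rotation acting on coordinates $0$ and $3$, where the eigenvalues differ:

```python
import numpy as np

lam = np.array([1.0, 1.0, 2.0, 5.0])
j0, j1, r = 0, 3, 1/np.sqrt(2)
U = np.eye(4)
U[j0, j0], U[j0, j1], U[j1, j0], U[j1, j1] = r, r, -r, r   # U is orthogonal

A = U @ np.diag(lam) @ U.T            # real, symmetric, non-diagonal
print(np.linalg.eigvalsh(A))          # -> [1. 1. 2. 5.]
print(A[j0, j1])                      # -> 2.0 = (lam[j1] - lam[j0]) / 2
```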
{ "language": "en", "url": "https://math.stackexchange.com/questions/271985", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Inequality for dense subset implies inequality for whole set? (PDE) Suppose I have an inequality that holds for all $f \in C^\infty(\Omega)$. Then since $C^\infty(\Omega)$ is dense in, say, $H^1(\Omega)$ under the latter norm, does the inequality hold for all $f \in H^1(\Omega)$ too? (Suppose the inequality involves norms in $L^2$ space)
Let $X$ be a topological space. Let $F,G$ be continuous maps from $X$ to $\mathbb{R}$. Let $Y\subset X$ be a dense subspace. Then $$ F|_Y \leq G|_Y \iff F \leq G $$ The key is continuity. (Actually, semi-continuity of the appropriate direction is enough.) Continuity guarantees for $x\in X\setminus Y$ and $x_\alpha \in Y$ such that $x_\alpha \to x$ you have $$ F(x) \leq \lim F(x_\alpha) \leq \lim G(x_\alpha) \leq G(x) $$ So the question you need to ask yourself is: are the two functionals on the two sides of your inequality continuous functionals on $H^1(\Omega)$? Just knowing that they involve norms in $L^2$ space is not (generally) enough (imagine $f\mapsto \|\triangle f\|_{L^2(\Omega)}$). For a slightly silly example: Let $G:H^1(\Omega)\to\mathbb{R}$ be identically 1, and let $F:C^\infty(\Omega)\to \mathbb{R}$ be identically zero, but let $F:H^1(\Omega) \setminus C^\infty(\Omega) \to \mathbb{R}$ be equal to $\|f\|_{L^2(\Omega)}$. Then clearly $F|_{C^\infty} \leq G|_{C^\infty}$ but the extension to $H^1$ is false in general.
{ "language": "en", "url": "https://math.stackexchange.com/questions/272063", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Prove that $x = 2$ is the unique solution to $3^x + 4^x = 5^x$ where $x \in \mathbb{R}$ Yesterday, my uncle asked me this question: Prove that $x = 2$ is the unique solution to $3^x + 4^x = 5^x$ where $x \in \mathbb{R}$. How can we do this? Note that this is not a diophantine equation since $x \in \mathbb{R}$ if you are thinking about Fermat's Last Theorem.
$$f(x) = \left(\dfrac{3}{5}\right)^x + \left(\dfrac{4}{5}\right)^x -1$$ $$f^ \prime(x) < 0\;\forall x \in \mathbb R\tag{1}$$ $f(2) =0$. If there are two zeros of $f(x)$, then by Rolle's theorem $f^\prime(x)$ will have a zero which is a contradiction to $(1)$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/272114", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12", "answer_count": 4, "answer_id": 2 }
Distance is independent of coordinates I am asked to show $d(x,y) = ((x_2 - x_1)^2 + (y_2 -y_1)^2)^{1/2}$ does not depend on the choice of coordinates. My try is: $V$ has bases $B = \{b_1 , b_2\}$ and $B' = \{b_1' , b_2'\}$, and $T = \begin{pmatrix} a & c \\ b & d \end{pmatrix}$ is the coordinate transformation matrix, $Tv_{B'} = v_B$, and $x_{B'} = x_1 b'_1 + x_2 b'_2$ and $y_{B'} = y_1b_1' + y_2b_2'$ are the vectors, and the distance in the coordinates of $B'$ is $d(x_{B'},y_{B'}) = ((x_2 - x_1)^2 + (y_2 -y_1)^2)^{1/2}$. The coordinates in $B$ are $x_B = (x_1 a + x_2 c)b_1 + (x_1 b + x_2 d) b_2$ and similarly for $y$. I compute the first term in the distance $((x_1 b + x_2 d) - (x_1 a + x_2 c))^2$. I may assume these are Cartesian coordinates so that $a^2 + b^2 = c^2 + d^2 = 1$ and $ac + bd = 0$. With this I have $((x_1 b + x_2 d) - (x_1 a + x_2 c))^2 = x_1^2 + x_2^2 - 2(x_1^2 ab + x_1 x_2 bc + x_1 x_2 ad + x_2^2 cd)$. My problem is that $x_1^2 ab + x_1 x_2 bc + x_1 x_2 ad + x_2^2 cd \neq x_1 x_2$. How to solve this? How to show that $x_1^2 ab + x_2^2 cd = 0$ and that $bc + ad = 1$? Thank you.
I would try a little bit more abstract approach. Sometimes a little bit of abstraction helps. First, distance can be computed in terms of the dot product. So, if you have points with Cartesian coordinates $X,Y$, the distance between them is $$ d(X,Y) = \sqrt{(X-Y)^t(X-Y)} \ . $$ Now, if you make an orthogonal change of coordinates of matrix $S$, the new coordinates $X'$ and the old ones $X$ are related through the relation $$ X = SX' $$ where $S$ is an orthogonal matrix. This is exactly your condition that the new coordinates are "Cartesian". That is, if $$ S = \begin{pmatrix} a & c \\ b & d \end{pmatrix} $$ the fact that $S$ is orthogonal means that $S^tS = I$, that is $$ \begin{pmatrix} a & b \\ c & d \end{pmatrix} \begin{pmatrix} a & c \\ b & d \end{pmatrix} = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} \qquad \Longleftrightarrow \qquad a^2 + b^2 = c^2 + d^2 =1 \quad \text{and} \quad ac + bd = 0 \ . $$ So, let's now compute: $$ d(X,Y) = \sqrt{(SX' - SY')^t(SX' - SY')} = \sqrt{(X'-Y')^tS^tS(X'-Y')} = \sqrt{(X'-Y')^t(X'-Y')} \ . $$ Indeed, distance does not depend on the Cartesian coordinates.
{ "language": "en", "url": "https://math.stackexchange.com/questions/272246", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
A measure of non-differentiability Consider $f(x) = x^2$ and $g(x) = |x|$. Both graphs open upward, but the graph of $g(x) = |x|$ is "sharper" at the origin. Is there a way to measure this sharpness?
This may be somewhat above your pay grade, but a "measure" of a discontinuity of a function at a point may be seen in a Fourier transform of that function. For example consider the function $$f(x) = \exp{(-|x|)} $$ which is proportional to a Lorentzian function: $$\hat{f}(w) = \frac{1}{1+w^2} $$ (I am ignoring constants, etc., which are not important to this discussion.) Note that $\hat{f}(w) \approx 1/w^2 (w \rightarrow \infty)$. The algebraic, rather than exponential, behavior at $\infty$ is characteristic of a type of discontinuity. In this case, there is a discontinuity in the derivative. For a straight discontinuity, there is a $1/w$ behavior at $\infty$. (Note the step function and its transform which is proportional to $1/w$ at $\infty$.) For a discontinuity in the 2nd derivative, there is a $1/w^3$ behavior at $\infty$. And a discontinuity in the $k$th derivative of $f(x)$ translates into a $1/w^{k+1}$ behavior of the Fourier transform at $\infty$. No, I do not have a proof of this, so I am talking off the cuff from my experiences some moons ago. But I am sure this is correct for the functions we see in physics. Also note that I define the Fourier Transform here as $$\hat{f}(w) = \int_{-\infty}^{\infty} dx \: f(x) \exp{(-i w x)} $$
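The decay claim for $e^{-|x|}$ can be tested directly (assuming scipy; by symmetry the transform reduces to a cosine integral):

```python
# FT of exp(-|x|) is 2*Integral_0^inf exp(-x) cos(w x) dx = 2/(1 + w^2).
import numpy as np
from scipy.integrate import quad

def ft(w):
    return 2*quad(lambda x: np.exp(-x), 0, np.inf, weight='cos', wvar=w)[0]

for w in (1.0, 10.0, 100.0):
    print(w, ft(w), 2/(1 + w**2))    # the two columns agree: 1/w^2 decay
```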
{ "language": "en", "url": "https://math.stackexchange.com/questions/272291", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
the existence of duality of closure and interior? Let the closure and interior of set $A$ be $\bar A$ and $A^o$ respectively. In some cases, the dual is relatively easy to find, e.g. the dual of equation $\overline{A \cup B} = \bar A \cup \bar B$ is $(A \cap B)^o= A^o\cap B^o$. However, I can't find the dual of $f(\bar A) \subseteq \overline{f(A)}$, the definition of continuity of $f$. Is there some principle to translate the language of closure into that of interior in the same way as the duality principle of Boolean Algebra?
In the duality examples that you described as "relatively easy", the key was that you get the dual of an operation by applying "complement" to the inputs and outputs. For example, writing $\sim$ for complement, we have $A^o=\sim(\overline{\sim A})$, i.e., we get the interior of $A$ by taking the complement of $A$, then applying closure, and finally applying complement again. Similarly, $A\cap B=\sim((\sim A)\cup(\sim B))$. To do something similar for the notion of the image of a set $A$ under a function, we need the analogous thing (which unfortunately has no universally accepted name --- I'll call it $\hat f$): $$ \hat f(A)=\sim(f(\sim A)). $$ Equivalently, if $f:X\to Y$ and $A\subseteq X$, then $\hat f(A)$ consists of those points $y\in Y$ such that all points of $X$ that map via $f$ to $y$ are in $A$. [Notice that, if I replaced "all" by "some", then I'd have a definition of $f(A)$. Notice also that, if $y$ isn't in the image of $f$, then it automatically (vacuously) belongs to $\hat f(A)$.] Now we can dualize the formula $f(\bar A)\subseteq\overline{f(A)}$ to get $\hat f(A^o)\supseteq(\hat f(A))^o$. And this (asserted for all subsets $A$ of the domain of $f$) is indeed an equivalent characterization of continuity. [Digression for any category-minded readers: If $f:X\to Y$ then the operation $f^{-1}$ sending subsets of $Y$ to subsets of $X$ is a monotone function between the power sets, $f^{-1}:\mathcal P(Y)\to\mathcal P(X)$. The power sets, being partially ordered by $\subseteq$, can be viewed as categories, and then $f^{-1}$ is a functor between them. This functor has adjoints on both sides. The left adjoint $\mathcal P(X)\to\mathcal P(Y)$ sends $A$ to $f(A)$. The right adjoint sends $A$ to what I called $\hat f(A)$. These adjointness relations imply the elementary facts that $f^{-1}$ preserves both unions and intersections (as these are colimits and limits, respectively) while the left adjoint $A\mapsto f(A)$ preserves unions but not (in general) intersections. The right adjoint $\hat f$ preserves intersections but not (in general) unions.]
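A finite toy model makes the two image operators concrete (plain Python; the helper names are mine):

```python
# image(A): "some preimage lies in A".  hat(A): "all preimages lie in A".
X, Y = {1, 2, 3, 4}, {'a', 'b'}
f = {1: 'a', 2: 'a', 3: 'b', 4: 'b'}

def image(A): return {f[x] for x in A}
def hat(A):   return {y for y in Y if all(f[x] != y or x in A for x in X)}

A = {1, 3, 4}
print(image(A))          # {'a', 'b'}
print(hat(A))            # {'b'}: both preimages of 'b' lie in A
print(Y - image(X - A))  # {'b'}: hat(A) = complement of f(complement of A)
```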
{ "language": "en", "url": "https://math.stackexchange.com/questions/272359", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
How do I prove by induction that, for $n≥1, \sum_{r=1}^n \frac{1}{r(r+1)}=\frac{n}{n+1}$? Hi, can you help me solve this? I have proved that $p(1)$ is true and am now assuming that $p(k)$ is true. I just don't know how to show $p(k+1)$ holds for both sides.
$$\sum_{r=1}^{n}\frac{1}{r(r+1)}=\frac{n}{n+1}$$ for $n=1$ we have $\frac{1}{1(1+1)}=\frac{1}{1+1}$ suppose that $$\sum_{r=1}^{k}\frac{1}{r(r+1)}=\frac{k}{k+1}$$ then $$\sum_{r=1}^{k+1}\frac{1}{r(r+1)}=\sum_{r=1}^{k}\frac{1}{r(r+1)}+\frac{1}{(k+1)(k+2)}=$$ $$=\frac{k}{k+1}+\frac{1}{(k+1)(k+2)}=\frac{k(k+2)+1}{(k+1)(k+2)}=$$ $$=\frac{k^2+2k+1}{(k+1)(k+2)}=\frac{(k+1)^2}{(k+1)(k+2)}=\frac{k+1}{k+2}=\frac{(k+1)}{(k+1)+1}$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/272429", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 4, "answer_id": 2 }
How do I calculate the derivative of $x|x|$? I know that $f(x)=x\cdot|x|$ has no derivative at $x=0$, but how do I calculate its derivative for the rest of the points? When I calculate for $x>0$ I get that $f'(x) = 2x$, but for $x < 0$ I can't seem to find a way to solve the limit. As this is homework please don't put the answer straight forward.
When $x<0$ replace $|x|$ by $-x$ (since that is what it is equal to) in the formula for the function and proceed. Please note as well that the function $f(x)=x\cdot |x|$ does have a derivative at $0$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/272488", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 6, "answer_id": 2 }
Multiples of an irrational number forming a dense subset Say you picked your favorite irrational number $q$ and, looking at $S = \{nq: n\in \mathbb{Z} \}$ in $\mathbb{R}$, you chopped off everything but the fractional part of $nq$, leaving you with a number in $[0,1]$. Is this new set dense in $[0,1]$? If so, why? (Basically looking at the $\mathbb{Z}$-orbit of a fixed irrational number in $\mathbb{R}/\mathbb{Z}$, where we mean the quotient by the group action of $\mathbb{Z}$.) Thanks!
A bit of a late comer to this question, but here's another proof: Lemma: The set of points $\{x\}$, where $x\in S$ (here $\{\cdot\}$ denotes the fractional part function), has $0$ as a limit point. Proof: Given $x\in S$, select $n$ so that $\frac{1}{n+1}\lt\{x\}\lt\frac{1}{n}$. We'll show that by selecting an appropriate $m$, we'll get: $\{mx\}\lt\frac{1}{n+1}$, and that would conclude the lemma's proof. Select $k$ so that $\frac{1}{n}-\{x\}\gt\frac{1}{n(n+1)^k}$. Then: $$ \begin{array}{ccccc} \frac{1}{n+1} &\lt& \{x\} &\lt& \frac{1}{n} - \frac{1}{n(n+1)^k} \\ 1 &\lt& (n+1)\{x\} &\lt& 1+\frac{1}{n} - \frac{1}{n(n+1)^{k-1}} \\ & & \{(n+1)x\} &\lt&\frac{1}{n} - \frac{1}{n(n+1)^{k-1}} \end{array} $$ If $\{(n+1)x\}\lt\frac{1}{n+1}$, we are done. Otherwise, we repeat the above procedure, replacing $x$ and $k$ with $(n+1)x$ and $k-1$ respectively. The procedure would be repeated at most $k-1$ times, at which point we'll get: $$ \{(n+1)^{k-1}x\}\lt\frac{1}{n} - \frac{1}{n(n+1)}=\frac{1}{n+1}. $$ Proposition: The set described in the lemma is dense in $[0,1]$. Proof: Let $y\in[0,1]$, and let $\epsilon\gt0$. Then by selecting $x\in S$ such that $\{x\}\lt\epsilon$, and $N$ such that $N\cdot\{x\}\le y\lt (N+1)\cdot\{x\}$, we get: $\left|\,y-\{Nx\}\,\right|\lt\epsilon$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/272545", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "44", "answer_count": 3, "answer_id": 2 }
Modules over local ring and completion I'm stuck again at a commutative algebra question. Would love some help with this completion business... We have a local ring $R$ and $M$ is a $R$-module with unique assassin/associated prime the maximal ideal $m$ of $R$. i) prove that $M$ is also naturally a module over the $m$-adic completion $\hat{R}$, and $M$ has the same $R$ and $\hat{R}$-submodules. ii) if $M$ and $N$ are two modules as above, show that $Hom_{R}(M,N) = Hom_{\hat{R}}(M,N)$. Best wishes and a happy new year!
Proposition. Let $R$ be a noetherian ring and $M$ an $R$-module, $M\neq 0$. Then $\operatorname{Ass}(M)=\{\mathfrak m\}$ iff for every $x\in M$ there exists a positive integer $k$ such that $\mathfrak m^kx=0$. Proof. "$\Rightarrow$" Let $x\in M$, $x\neq 0$. Then $\operatorname{Ann}(x)$ is an ideal of $R$. Let $\mathfrak p$ be a minimal prime ideal over $\operatorname{Ann}(x)$. Then $\mathfrak p\in\operatorname{Ass}(M)$, so $\mathfrak p=\mathfrak m$. This shows that $\operatorname{Ann}(x)$ is an $\mathfrak m$-primary ideal, hence there exists a positive integer $k$ such that $\mathfrak m^k\subseteq\operatorname{Ann}(x)$. "$\Leftarrow$" Let $\mathfrak p\in\operatorname{Ass}(M)$. Then there is $x\in M$, $x\neq 0$, such that $\mathfrak p=\operatorname{Ann}(x)$. On the other hand, one knows that there exists $k\ge 1$ such that $\mathfrak m^kx=0$. It follows that $\mathfrak m^k\subseteq \mathfrak p$, hence $\mathfrak p=\mathfrak m$. Now take $x\in M$ and $a\in\hat R$. One knows that $\mathfrak m^kx=0$ for some $k\ge 1$. Since $R/\mathfrak m^k\simeq \hat R/\hat{\mathfrak m^k}$ and $\hat{\mathfrak m^k}=\mathfrak m^k\hat R$, there exists $\alpha\in R$ such that $a-\alpha\in \mathfrak m^k\hat R$. (Maybe Ted's answer is more illuminating at this point.) Now define $ax=\alpha x$. (This is well defined since $\mathfrak m^kx=0$.) Now both properties are clear.
{ "language": "en", "url": "https://math.stackexchange.com/questions/272621", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 0 }
Inequality involving closure and intersection Let the closure of a set $A$ be $\bar A$. On Page 62 of Introduction to Boolean Algebras, Steven Givant, Paul Halmos (2000), an exercise goes like this: Show that $P \cap \bar Q \subseteq \overline{(P \cap Q)}$, whenever $P$ is open. I feel muddled when faced with this sort of exercise. Is there some way to deal with these problems and be sure about the result?
Let $x$ be in $P\cap\bar Q$. Since $x$ is in $\bar Q$, there exists a sequence $(x_n)_n$ in $Q$ such that $x_n\to x$. Since $x$ is in $P$ and $P$ is open, $x_n$ is in $P$ for every $n$ large enough, say, $n\geqslant n_0$. Hence, for every $n\geqslant n_0$, $x_n$ is in $P\cap Q$. Thus, $x$ is in $\overline{P\cap Q}$ as limit of $(x_n)_{n\geqslant n_0}$. (This argument uses sequences and so works in a metric space. In a general topological space, argue with neighbourhoods instead: if $U$ is an open neighbourhood of $x$, then $U\cap P$ is an open neighbourhood of $x\in\bar Q$, hence meets $Q$; thus $U$ meets $P\cap Q$, and $x\in\overline{P\cap Q}$.)
{ "language": "en", "url": "https://math.stackexchange.com/questions/272697", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
Minimizing a multivariable function given a constraint I want to minimize the following function: $$J(x, y, z) = x^a + y^b + z^c$$ I know I can easily determine the minimum value of $J$ using partial derivatives. But I also have the following condition: $$ x + y + z = D$$ How can I approach it now?
This is an easy example of using Lagrange multiplier. If you reformulate your constraint as $C(x,y,z) = x+y+z-D=0$, you can define $L(x,y,z,\lambda) := J(x,y,z)-\lambda \cdot C(x,y,z)$ If you now take the condition $\nabla L=0$ as necessary for your minimum you will fulfill $$\frac{\partial L}{\partial x}=0 \\ \frac{\partial L}{\partial y}=0 \\ \frac{\partial L}{\partial z}=0 \\ $$ Which are required for your minimum and $$ \frac{\partial L}{\partial \lambda}=C(x,y,z)=0 \\ $$ as your constraint.
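To spell out what $\nabla L=0$ gives for this particular $J$ (added for concreteness): $$\frac{\partial L}{\partial x}=a x^{a-1}-\lambda=0,\qquad \frac{\partial L}{\partial y}=b y^{b-1}-\lambda=0,\qquad \frac{\partial L}{\partial z}=c z^{c-1}-\lambda=0,$$ so $a x^{a-1}=b y^{b-1}=c z^{c-1}$, to be solved together with $x+y+z=D$. In general this system has no closed form; for instance, when $a=b=c=2$ it immediately gives $x=y=z=D/3$.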
{ "language": "en", "url": "https://math.stackexchange.com/questions/272769", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Solving a system of differential equations I would like to get some help with solving the following problem: $$ p'_1= \frac 1 x p_1 - p_2 + x$$ $$ p'_2=\frac 1{x^2}p_1+\frac 2 x p_2 - x^2 $$ with initial conditions $$p_1(1)=p_2(1)=0, x \gt0 $$ EDIT: If I use WolframAlpha, I get a solution (screenshot omitted) where $u$ and $v$ are obviously $p_1$ and $p_2$. Can anybody explain what's going on?
One approach is to express $p_2=-p_1'+\frac1xp_1+x$ from the first equation and substitute into the second to get: $$-p_1''+\frac1xp_1'-\frac1{x^2}p_1+1=p_2'=\frac1{x^2}p_1+\frac2x\left(-p_1'+\frac1xp_1+x\right)-x^2$$ $$p_1''-\frac3xp_1'+\frac4{x^2}p_1=x^2-1$$ Multiplying by $x^2$ we get $x^2p_1''-3xp_1'+4p_1=x^4-x^2$, which is a Cauchy–Euler equation. Solve the homogeneous equation $x^2p_1''-3xp_1'+4p_1=0$ first: $p_1=x^r$, hence $$x^r(r(r-1)-3r+4)=0\hspace{5pt}\Rightarrow\hspace{5pt} r^2-4r+4=(r-2)^2=0 \hspace{5pt}\Rightarrow\hspace{5pt} r=2$$ So the solution to the homogeneous equation is $C_1x^2\ln x+C_2x^2$. Now we can use the Green's function of the equation to find $p_1$: ($y_1(x)=x^2\ln x,\hspace{3pt} y_2(x)=x^2$) $$\begin{align*}k(x,t)&=\frac{y_1(t)y_2(x)-y_1(x)y_2(t)}{y_1(t)y_2'(t)-y_2(t)y_1'(t)}= \frac{t^2\ln t\cdot x^2-x^2\ln x\cdot t^2}{t^2\ln t\cdot 2t-t^2\cdot(2t\ln t+t)}=\frac{t^2\ln t\cdot x^2-x^2\ln x\cdot t^2}{-t^3}\\ &=\frac{x^2\ln x-x^2\ln t}{t}\end{align*}$$ Then ($b(x)$ is the inhomogeneous part of the equation in standard form, i.e. with leading coefficient $1$, so $b(x)=x^2-1$) $$\begin{align*}p_1(x)&=\int k(x,t)b(t)dt=\int \frac{x^2\ln x-x^2\ln t}{t}(t^2-1)dt\\ &=x^2\ln x\int \left(t-\frac1t\right)dt-x^2\int \left(t-\frac1t\right)\ln t\,dt\end{align*}$$ Compute the integrals, find $p_1$ using your initial values, and then substitute back to find $p_2$.
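For reference, a sketch of where this leads (worth re-deriving before trusting it): the $x^4$ part of the right-hand side of the Cauchy–Euler equation admits the particular solution $\frac{x^4}{4}$ (trying $Ax^4$ gives $4A=1$), while the $-x^2$ part resonates with the homogeneous solution $x^2$, so trying $Cx^2(\ln x)^2$ gives $2C=-1$. Hence $$p_1(x)=C_1x^2\ln x+C_2x^2+\frac{x^4}{4}-\frac{x^2(\ln x)^2}{2},$$ after which $p_2=-p_1'+\frac1xp_1+x$, and the conditions $p_1(1)=p_2(1)=0$ determine $C_1$ and $C_2$.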
{ "language": "en", "url": "https://math.stackexchange.com/questions/272820", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
How to do this interesting integration? $$\lim_{\Delta x\rightarrow0}\sum_{k=1}^{n-1}\int_{k+\Delta x}^{k+1-\Delta x}x^m dx$$ How to integrate the above integral? Edit1: $$\lim_{\Delta x\rightarrow0}\int_{2-\Delta x}^{2+\Delta x}x^m dx$$ Does this intergral give $\space\space\space\space$ $2^m\space\space$ as the output? Edit2: Are my following steps correct? $\lim_{\Delta x\rightarrow0}\sum_{k=1}^{n-1}\int_{k+\Delta x}^{k+1-\Delta x}x^m dx$ = $\lim_{\Delta x\rightarrow0}\sum_{k=1}^{n-1}\int_{k+\Delta x}^{k+1-\Delta x}x^m dx$ $+$ $\lim_{\Delta x\rightarrow0}\sum_{k=1}^{n-1}\int_{k+1-\Delta x}^{k+1+\Delta x}x^m dx$ $-$ $\lim_{\Delta x\rightarrow0}\sum_{k=1}^{n-1}\int_{k+1-\Delta x}^{k+1 +\Delta x}x^m dx$ = $\lim_{\Delta x\rightarrow0}\int_{1+\Delta x}^{n+\Delta x}x^m dx$ $-$ $\lim_{\Delta x\rightarrow0}\sum_{k=1}^{n-1}\int_{k+1-\Delta x}^{k+1 +\Delta x}x^m dx$ = $\lim_{\Delta x\rightarrow0}\int_{1}^{n}x^m dx$ $+$$\lim_{\Delta x\rightarrow0}\int_{n}^{n+\Delta x}x^m dx$ $-$ $\lim_{\Delta x\rightarrow0}\int_{1}^{1+\Delta x}x^m dx$ $-$ $\lim_{\Delta x\rightarrow0}\sum_{k=1}^{n-1}\int_{k+1-\Delta x}^{k+1 +\Delta x}x^m dx$ = $\lim_{\Delta x\rightarrow0}\int_{1}^{n}x^m dx$ + 0 - 0 - 0 = $\lim_{\Delta x\rightarrow0}\int_{1}^{n}x^m dx$ = $\int_{1}^{n}x^m dx$
For the first question: $$ \begin{align} \lim_{\Delta x\to0}\,\left|\,\int_k^{k+1}x^m\,\mathrm{d}x-\int_{k+\Delta x}^{k+1-\Delta x}x^m\,\mathrm{d}x\,\right| &=\lim_{\Delta x\to0}\,\left|\,\int_k^{k+\Delta x}x^m\,\mathrm{d}x+\int_{k+1-\Delta x}^{k+1}x^m\,\mathrm{d}x\,\right|\\ &\le\lim_{\Delta x\to0}2\Delta x(k+1)^m\\ &=0 \end{align} $$ we get $$ \begin{align} \lim_{\Delta x\to0}\sum_{k=1}^{n-1}\int_{k+\Delta x}^{k+1-\Delta x}x^m\,\mathrm{d}x &=\sum_{k=1}^{n-1}\int_k^{k+1}x^m\,\mathrm{d}x\\ &=\int_1^nx^m\,\mathrm{d}x \end{align} $$ For the Edit: For $\Delta x<1$, $$ \left|\,\int_{2-\Delta x}^{2+\Delta x}x^m\,\mathrm{d}x\,\right|\le2\cdot3^m\Delta x $$ Therefore, $$ \lim_{\Delta x\to0}\,\int_{2-\Delta x}^{2+\Delta x}x^m\,\mathrm{d}x=0 $$ For your steps: If you would give the justification for each step, it would help us in commenting on what is correct and what might be wrong and help you in seeing what is right.
{ "language": "en", "url": "https://math.stackexchange.com/questions/272930", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 1 }
Fixed: Is this set empty? $ S = \{ x \in \mathbb{Z} \mid \sqrt{x} \in \mathbb{Q}, \sqrt{x} \notin \mathbb{Z}, x \notin \mathbb{P}\} $ This question has been "fixed" to reflect the question that I intended to ask. Is this set empty? $ S = \{ x \in \mathbb{Z} \mid \sqrt{x} \in \mathbb{Q}, \sqrt{x} \notin \mathbb{Z}, x \notin \mathbb{P}\} $ Is there an integer $x$ that is not prime and whose square root is rational but not an integer?
To answer according to the last edit: Yes. Let $a\in\mathbb{Z}$ and consider the polynomial $x^{2}-a$. By the rational root theorm if there is a rational root $\frac{r}{s}$ then $s|1$ hence $s=\pm1$ and the root is an integer. So $\sqrt{a}\in\mathbb{Q}\iff\sqrt{a}\in\mathbb{Z}$ . Since you assumed that the root is not an integer but is a rational number this can not be.
{ "language": "en", "url": "https://math.stackexchange.com/questions/272988", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
How to randomly construct a square full-rank matrix with low determinant? How to randomly construct a square ($1000\times1000$) full-rank matrix with low determinant? I have tried the following method, but it failed. In MATLAB, I just use: n=100; A=randi([0 1], n, n); while rank(A)~=n A=randi([0 1], n, n); end The above code generates a random binary matrix, with the hope that the corresponding determinant will be small. However, the determinant is usually about 10^49, a huge number. Not to mention that when n>200, the determinant usually overflows in MATLAB. Could anyone comment on how I can generate a matrix (possibly non-binary) with very low determinant (e.g. <10^3)?
The determinant of $e^B$ is $e^{\textrm{tr}(B)}$ (wiki) and $e^B$ is always invertible, since $e^B e^{-B}=\textrm{Id}$. So, if you have a matrix $B$ with negative trace then $\det e^B$ is positive and smaller than $1$. Using this idea I wrote the following matlab script which generates matrices with "small" determinant: n=100; for i=1:10 B=randn(n); A=expm(B); det(A) end I generate the coefficients using a normal distribution with mean $0$, so that the expected trace of $B$ is $0$, and therefore $\det A=e^{\textrm{tr}(B)}$ is typically of order $1$ (its median is $1$, though its expectation is larger).
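For what it's worth, here is a rough Python analogue of the same idea (a sketch, assuming NumPy/SciPy are available; `slogdet` works with the log-determinant, which sidesteps the overflow mentioned in the question):

```python
import numpy as np
from scipy.linalg import expm

n = 100
B = np.random.randn(n, n)            # i.i.d. N(0,1) entries, so E[trace(B)] = 0
A = expm(B)                          # invertible, and det(A) = exp(trace(B))
sign, logdet = np.linalg.slogdet(A)  # log-determinant: no overflow even for large n
print(sign, logdet, np.trace(B))     # logdet agrees with trace(B) up to rounding
```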
{ "language": "en", "url": "https://math.stackexchange.com/questions/273061", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 0 }
Plot of x^(1/3) has range of 0-inf in Mathematica and R Just doing a quick plot of the cuberoot of x, but both Mathematica 9 and R 2.15.32 are not plotting it in the negative space. However they both plot x cubed just fine: Plot[{x^(1/3), x^3}, {x, -2, 2}, PlotRange -> {-2, 2}, AspectRatio -> Automatic] http://www.wolframalpha.com/input/?i=x%5E%281%2F3%29%2Cx%5E3 plot(function(x){x^(1/3)} , xlim=c(-2,2), ylim=c(-2,2)) Is this a bug in both software packages, or is there something about the cubed root that I don't understand? In[19]:= {1^3, 1^(1/3), -1^3, -1^(1/3), 42^3, -42^3, 42^(1/3) // N, -42^(1/3) // N} Out[19]= {1, 1, -1, -1, 74088, -74088, 3.47603, -3.47603} Interestingly when passing -42 into the R function I get NaN, but when I multiply it directly I get -3.476027. > f = function(x){x^(1/3)} > f(c(42, -42)) [1] 3.476027 NaN > -42^(1/3) [1] -3.476027
Really funny you'd mention that... my Calc professor talked about that last semester. ;) Many software packages plot the principal root, rather than the real root. http://mathworld.wolfram.com/PrincipalRootofUnity.html For example, $\sqrt[3]{3}$ has three values: W|A Mathematica uses the principal root, which for a negative real number lies in the upper half-plane, when plotting the cube root. Thus, it thinks it's complex, and therefore doesn't graph it.
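If what you want is the real-valued cube root for plotting, one common workaround is to restore the sign by hand, e.g. `sign(x)*abs(x)^(1/3)` in R; in Mathematica, `Surd[x, 3]` should give the real root (assuming your version has it). A small Python/NumPy illustration of the same point (a sketch):

```python
import numpy as np

x = np.linspace(-2, 2, 401)
naive = x ** (1 / 3)     # NaN (with a warning) for x < 0, same symptom as in R
real_root = np.cbrt(x)   # real-valued cube root, defined for all real x
print(np.cbrt(-42.0))    # -3.4760..., matching -42^(1/3) computed "by hand"
```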
{ "language": "en", "url": "https://math.stackexchange.com/questions/273149", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Generating function for the divisor function Earlier today on MathWorld (see eq. 17) I ran across the following expression, which gives a generating function for the divisor function $\sigma_k(n)$: $$\sum_{n=1}^{\infty} \sigma_k (n) x^n = \sum_{n=1}^{\infty} \frac{n^k x^n}{1-x^n}. \tag{1}$$ (The divisor function $\sigma_k(n)$ is defined by $\sigma_k(n) = \sum_{d|n} d^k$.) How would one go about proving Equation (1)? (For reference, I ran across this while thinking about this question asked earlier today. The sum in that question is a special case of the sum on the right side of Equation (1) above.)
Switching the order of summation, we have that $$\sum_{n=1}^{\infty}\sigma_{k}(n)x^{n}=\sum_{n=1}^{\infty}x^{n}\sum_{d|n}d^{k}=\sum_{d=1}^{\infty}d^{k}\sum_{n:\ d|n}^{\infty}x^{n}.$$ From here, applying the formula for the geometric series, we find that the above equals $$\sum_{d=1}^{\infty}d^{k}\sum_{n=1}^{\infty}x^{nd}=\sum_{d=1}^{\infty}d^{k}\frac{x^{d}}{1-x^{d}}.$$ Such a generating series is known as a Lambert Series. The same argument above proves that for a multiplicative function $f$ with $f=1*g$ where $*$ represents Dirichlet convolution, we have $$\sum_{n=1}^\infty f(n)x^n =\sum_{k=1}^\infty \frac{g(k)x^k}{1-x^k}.$$
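A quick numerical sanity check of the identity, added purely as an illustration (a small Python sketch comparing coefficients on both sides):

```python
N, k = 12, 1  # compare coefficients of x^1, ..., x^N for sigma_k with k = 1

# left-hand side: sigma_k(n) = sum of d^k over the divisors d of n
lhs = [sum(d**k for d in range(1, n + 1) if n % d == 0) for n in range(1, N + 1)]

# right-hand side: d^k x^d / (1 - x^d) = d^k (x^d + x^{2d} + ...), collect coefficients
rhs = [0] * (N + 1)
for d in range(1, N + 1):
    for m in range(d, N + 1, d):   # each multiple m of d receives d^k
        rhs[m] += d**k

print(lhs == rhs[1:])  # True
```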
{ "language": "en", "url": "https://math.stackexchange.com/questions/273275", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10", "answer_count": 1, "answer_id": 0 }
Linearization of Gross-Pitaevskii-Equation Consider a PDE of the form $\partial_t \phi = A(\partial_\xi) \phi + c\partial_\xi \phi +N(\phi)$ where $N$ is some non-linearity defined via pointwise evaluation of $\phi$. If you want to check for stability of travelling wave solutions of PDEs you linearize the PDE at some travelling wave solution $Q$: $\partial_t \phi = A(\partial_\xi) \phi + c\partial_\xi \phi + \partial_\phi N(Q) \phi$ My problem is: do exactly this for the Gross-Pitaevskii-Equation. The Gross-Pitaevskii Equation for (in appropriate coordinates) has the form $\partial_t \phi = -i \triangle \phi + c \partial_\xi \phi-i\phi(1-\vert \phi \vert^2)$ so that $N(\phi)=-i\phi (1 -\vert \phi \vert^2).$ Can anyone help me to linearize that at some travelling wave solution $Q$? I'm not even sure how to start...
Well.... first you need to specify a traveling wave solution $Q$. Then you just take, since $N(\phi) = -i \phi(1 - |\phi|^2)$, $$ (\partial_\phi N)(\phi) = -i(1-|\phi|^2) + 2i\phi \bar{\phi} $$ by the product rule of differential calculus. Here note $|\phi|^2 = \phi \bar\phi$. So simplifying and evaluating it at $Q$ we have $$ (\partial_\phi N)(Q) = -i + 3i |Q|^2 $$ which you can plug into the general form you quoted to get the linearised equation.
{ "language": "en", "url": "https://math.stackexchange.com/questions/273365", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
For which $p$ does the series converge? $$\sum_{n=0}^{\infty}\left(\frac{1}{n!}\right)^{p}$$ Please verify my answer below.
Ratio test: $$\lim_{n\rightarrow\infty}\left(\frac{n!}{(n+1)!}\right)^{p}=\lim_{n\rightarrow\infty}\frac{1}{(n+1)^{p}}=\begin{cases} 0 & \Leftrightarrow p>0\\ 1 & \Leftrightarrow p=0\\ \infty & \Leftrightarrow p<0 \end{cases}$$ By the ratio test, the series therefore: * *converges for every $p>0$ *diverges for $p\leq0$, since then the terms $\left(\frac{1}{n!}\right)^{p}\geq1$ do not tend to $0$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/273497", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 2, "answer_id": 0 }
How to prove that $\lVert \Delta u \rVert_{L^2} + \lVert u \rVert_{L^2}$ and $\lVert u \rVert_{H^2}$ are equivalent norms? How to prove that $\lVert \Delta u \rVert_{L^2} + \lVert u \rVert_{L^2}$ and $\lVert u \rVert_{H^2}$ are equivalent norms on a bounded domain? I hear there is a way to do it by RRT but any other way is fine. Thanks.
As user53153 wrote this is true for bounded smooth domains and can be directly obtained by the boundary regularity theory exposed in Evans. BUT: consider the domain $\Omega=\{ (r \cos\phi,r \sin\phi); 0<r<1, 0<\phi<\omega\}$ for some $\pi<\omega<2\pi$. Then the function $u(r,\phi)=r^{\pi/\omega}\sin(\phi\pi/\omega)$ (in polar coordinates) satisfies $\Delta u=0$ in $\Omega$ and it is clearly bounded. On the other hand the second derivatives blow up as $r\rightarrow 0$, more specifically $u\not\in H^2$!
{ "language": "en", "url": "https://math.stackexchange.com/questions/273563", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 2, "answer_id": 1 }
Term for a group where every element is its own inverse? Several groups have the property that every element is its own inverse. For example, the numbers $0$ and $1$ and the XOR operator form a group of this sort, and more generally the set of all bitstrings of length $n$ and XOR form a group with this property. These groups have the interesting property that they have to be commutative. Is there a special name associated with groups with this property? Or are they just "abelian groups where every element has order two?" Thanks!
Another term is "group of exponent $2$".
{ "language": "en", "url": "https://math.stackexchange.com/questions/274604", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 4, "answer_id": 3 }
Set of points reachable by the tip of a swinging sticks kinetic energy structure This is an interesting problem that I thought of myself but I'm racking my brain on it. I recently saw this kinetic energy knick knack in a scene in Iron Man 2: http://www.youtube.com/watch?v=uBxUoxn46A0 And it got me thinking, it looks like either tip of the shorter rod can reach all points within the maximum radius of device. The axis of either rod is off center, so for simplicity I decided to first simplify the problem by modeling it as having both rods centered. So the radius of the device is $r_1 + r_2$. So I decided to first model the space of points reachable by the tip of the shorter rod as a function of a vector in $\mathbb{R}^2$, consisting of $\theta_1$, the angle of the longer rod, and $\theta_2$, the angle of the shorter rod. Where I'm getting lost is how to transform this angle vector to its position in coordinate space. How would you describe this mapping? And how would you express the space of all points reachable by the tip of the shorter rod, as the domain encompasses all of $\mathbb{R}^2$?
One way to see the reach is to notice that if the configuration $\theta = (\theta_1,\theta_2)$ reaches some spot $x \in \mathbb{R}^2$, and $y$ is a spot obtained by rotating $x$ by $\alpha \in \mathbb{R}$, then the configuration $\theta+(\alpha, \alpha)$ will reach $y$. So, we only need to see what the minimum and maximum radius can be. For a particular configuration, the radius squared is \begin{eqnarray} (r_1 \cos \theta_1 + r_2 \cos \theta_2)^2 + (r_1 \sin \theta_1 + r_2 \sin \theta_2)^2 &=& r_1^2+r_2^2+ 2r_1r_2 ( \cos \theta_1 \cos \theta_2 + \sin \theta_1 \sin \theta_2)\\ & = & r_1^2+r_2^2+ 2r_1r_2 \cos(\theta_1-\theta_2) \end{eqnarray} Hence the radius lies in $[|r_1-r_2|,|r_1+r_2|]$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/274663", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Dual norm and distance Let $Z$ be a subspace of a normed linear space $X$ and let $x\in X$ have distance $d=\inf\{||z-x||:z\in Z\}$ to $Z$. I would like to find a function $f\in X^*$ that satisfies $||f||\le1$, $f(x)=d$ and $f(z)=0$ for all $z\in Z$. Is it correct that $||f||:=\sup\{|f(x)| :x\in X, ||x||\le 1\}$? I ask because I cannot conclude from this definition that $||f||\le1$. Maybe you could help me with that; thank you very much.
I'll list the ingredients and leave the cooking to you: * *The function $d:X\to [0,\infty)$ is sublinear in the sense used in the Hahn-Banach theorem *There is a linear functional $\phi$ on the one-dimensional space $V=\{t x:t\in\mathbb R\}$ such that $\phi(x)=d(x)$ and $|\phi(y)|\le d(y)$ for all $y\in V$. (If you get stuck here: take $\phi(tx)=t\,d(x)$.) *From 1, 2, and Hahn-Banach you will get $f$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/274792", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Ambiguous Curve: can you follow the bicycle? Let $\alpha:[0,1]\to \mathbb R^2$ be a smooth closed curve parameterized by arc length. We will think of $\alpha$ as the track of the back wheel of a bicycle. If we suppose that the distance between the two wheels is $1$, then we can describe the front track by $$\tau(t)=\alpha(t)+\alpha'(t)\;.$$ Suppose we know the two (back and front) tracks of a bicycle. Can you determine the orientation of the curves? For example, if $\alpha$ is a circle the answer is no. More precisely the question is: Is there a smooth closed curve $\alpha$, parameterized by arc length, such that $$\tau([0,1])=\gamma([0,1])$$ where $\gamma(t)=\alpha(1-t)-\alpha'(1-t)$? If the trace of $\alpha$ is a circle we have $\tau([0,1])=\gamma([0,1])$. Is there another?
After the which way did bicycle go book, there has been some systematic development of theory related to the bicycle problem. Much of that is either done or cited in papers by Tabachnikov and his coauthors, available online: http://arxiv.org/find/all/1/all:+AND+bicycle+tracks/0/1/0/all/0/1 http://arxiv.org/abs/math/0405445
{ "language": "en", "url": "https://math.stackexchange.com/questions/274849", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "32", "answer_count": 5, "answer_id": 2 }
$e^{i\theta_n}\to e^{i\theta}\implies \theta_n\to\theta$ How to show that $e^{i\theta_n}\to e^{i\theta}\implies \theta_n\to\theta$ for $-\pi<\theta_n,\theta<\pi$? I'm completely stuck on it. Please help.
Suppose $(\theta_n)$ does not converge to $\theta$, then there is an $\epsilon > 0$ and a subsequence $( \theta_{n_k} )$ such that $| \theta_{n_k} - \theta | \geq \epsilon $ for all $k$. $(\theta_{n_k})$ is bounded so it has a further subsequence $(\theta_{m_k})$ which converges to $\theta_0 \in [-\pi,\pi]$ (say) with $| \theta - \theta_0 | \geq \epsilon $, and hence $ \theta_0 \neq \theta $. Next $( \exp i\theta_{m_k} )$ being a subsequence of $( \exp i\theta_n ) $ must converge to $\exp i\theta $, hence $ \exp i\theta = \exp i\theta_0 $. So $ \theta_0 = 2n \pi + \theta $ for some integer $n$, however $ | \theta_0 - \theta | < 2\pi $ as $ \theta \in ( -\pi , \pi )$ and $ \theta_0 \in [-\pi,\pi]$, this implies $ \theta = \theta_0$ and contradicts $ \theta \neq \theta_0 $ .
{ "language": "en", "url": "https://math.stackexchange.com/questions/274907", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 2 }
It's in my hands to have a surjective function Let $f$ be any function $A \to B$. By definition $f$ is a surjective function if $\space \forall y \in B \space \exists \space x \in A \space( \space f(x)=y \space)$. So, for any function I only have to ensure that there doesn't "remain" any element "alone" in the set $B$. In other words, the range of the function has to be equal to the codomain. The range depends on the function, but the codomain can be chosen by me. So if I choose a codomain equal to the range, I get a surjective function, regardless of the function that is given. Am I right?
There are intrinsic properties and extrinsic properties. Being surjective is an extrinsic property. If you are not given a particular codomain you cannot conclude whether or not a function is surjective. Being injective, on the other hand, is an intrinsic property. It depends only on the function as a set of ordered pairs.
{ "language": "en", "url": "https://math.stackexchange.com/questions/274967", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 3, "answer_id": 1 }
In every interval there is a rational and an irrational number. When the interval is between two rational numbers it is easy. But things get complicated when the interval is between two irrational numbers. I couldn't prove that.
Supposing you mean an interval $(x,y)$ of length $y-x=l>0$ (it doesn't matter whether $l$ is rational or irrational), you can simply choose any integer $n>\frac1l$, and then the interval will contain a rational number of the form $\frac an$ with $a\in\mathbf Z$. Indeed if $a'$ is the largest integer such that $\frac{a'}n\leq x$ (which is well defined) then $a=a'+1$ will do. By choosing $n>\frac2l$ you even get two rationals of this form, and an irrational number between those two by an argument you claim to already have.
{ "language": "en", "url": "https://math.stackexchange.com/questions/275032", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Check convergence $\sum\limits_{n=1}^\infty\left(\sqrt{1+\frac{7}{n^{2}}}-\sqrt[3]{1-\frac{8}{n^{2}}+\frac{1}{n^{3}}}\right)$ Please help me to check convergence of $$\sum_{n=1}^{\infty}\left(\sqrt{1+\frac{7}{n^{2}}}-\sqrt[3]{1-\frac{8}{n^{2}}+\frac{1}{n^{3}}}\right)$$
(Presumably with tools from Calc I...) Using the conjugate identities $$ \sqrt{a}-1=\frac{a-1}{1+\sqrt{a}},\qquad1-\sqrt[3]{b}=\frac{1-b}{1+\sqrt[3]{b}+\sqrt[3]{b^2}}, $$ one gets $$ \sqrt{1+\frac{7}{n^{2}}}-\sqrt[3]{1-\frac{8}{n^{2}}+\frac{1}{n^{3}}}=x_n+y_n, $$ with $$ x_n=\sqrt{1+a_n}-1=\frac{a_n}{1+\sqrt{1+a_n}},\qquad a_n=\frac{7}{n^2}, $$ and $$ y_n=1-\sqrt[3]{1-b_n}=\frac{b_n}{1+\sqrt[3]{1-b_n}+(\sqrt[3]{1-b_n})^2},\qquad b_n=\frac{8}{n^{2}}-\frac{1}{n^{3}}. $$ The rest is easy: for every $n\geqslant1$, the denominator of $x_n$ is at least $2$ hence $x_n\leqslant\frac12a_n$, likewise the denominator of $y_n$ is at least $1$ hence $y_n\leqslant b_n$. Thus, $$ 0\leqslant x_n+y_n\leqslant\tfrac12a_n+b_n\leqslant\frac{12}{n^2}, $$ since $a_n=\frac7{n^2}$ and $b_n\leqslant\frac8{n^2}$. The series $\sum\limits_n\frac1{n^2}$ converges, hence all this proves that the series $\sum\limits_n(x_n+y_n)$, i.e. the given series, converges (absolutely).
{ "language": "en", "url": "https://math.stackexchange.com/questions/275090", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
If x,y,z are positive reals, then the minimum value of $x^2+8y^2+27z^2$ where $\frac{1}{x}+\frac{1}{y}+\frac{1}{z}=1$ is what If $x,y, z$ are positive reals, then the minimum value of $x^2+8y^2+27z^2$ where $\frac{1}{x}+\frac{1}{y}+\frac{1}{z}=1$ is what? $108$ , $216$ , $405$ , $1048$
As $x,y,z$ are +ve real, we can set $\frac 1x=\sin^2A,\frac1y+\frac1z=\cos^2A$ again, $\frac1{y\cos^2A}+\frac1{z\cos^2A}=1$ we can put $\frac1{y\cos^2A}=\cos^2B, \frac1{z\cos^2A}=\sin^2B\implies y=\cos^{-2}A\cos^{-2}B,z=\cos^{-2}A\sin^{-2}B$ So, $$x^2+8y^2+27z^2=\sin^{-4}A+\cos^{-4}A(8\cos^{-4}B+27\sin^{-4}B)$$ We need $8\cos^{-4}B+27\sin^{-4}B$ to be minimum for the minimum value of $x^2+8y^2+27z^2$ Let $F(B)=p^3\cos^{-4}B+q^3\sin^{-4}B$ where $p,q$ are positive real numbers. then $F'(B)=p^3(-4)\cos^{-5}B(-\sin B)+q^3(-4)\sin^{-5}B(\cos B)$ for the extreme values of $F(B),F'(B)=0\implies (p\sin^2B)^3=(q\cos^2B)^3\implies \frac{\sin^2B}q=\frac{\cos^2B}p=\frac1{p+q}$ Observe that $F(B)$ can not have any finite maximum value. So, $F(B)_{min}=\frac{p^3}{\left(\frac p{p+q}\right)^2}+\frac{q^3}{\left(\frac q{p+q}\right)^2}=(p+q)^3$ So, the minimum value of $8\cos^{-4}B+27\sin^{-4}B$ is $(2+3)^3=125$ (Putting $p=2,q=3$) So, $$x^2+8y^2+27z^2=\sin^{-4}A+\cos^{-4}A(8\cos^{-4}B+27\sin^{-4}B)\ge \sin^{-4}A+125\cos^{-4}A\ge (1+5)^3=216$$ (Putting $p=1,q=5$)
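As a cross-check (an addition, not part of the original argument): Lagrange multipliers give $2x^3=16y^3=54z^3$, i.e. $x=2y=3z$, so the constraint forces $(x,y,z)=(6,3,2)$, and indeed $36+72+108=216$. A numerical sketch, assuming SciPy is available:

```python
from scipy.optimize import minimize

f = lambda v: v[0]**2 + 8*v[1]**2 + 27*v[2]**2
con = {"type": "eq", "fun": lambda v: 1/v[0] + 1/v[1] + 1/v[2] - 1}
res = minimize(f, x0=[5.0, 4.0, 3.0], constraints=[con],
               bounds=[(0.1, None)] * 3)   # keep x, y, z positive
print(res.x, res.fun)                      # approx [6, 3, 2] and 216
```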
{ "language": "en", "url": "https://math.stackexchange.com/questions/275153", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 6, "answer_id": 1 }
How can I solve $ \cos x = 2x$? I would like to get an approximate solution to the equation $ \cos x = 2x$; I don't need an exact solution, just an approximation. And I need a solution using elementary mathematics (without derivatives etc.).
Take a pocket calculator, start with $0$ and repeatedly type [cos], [$\div$], [2], [=]. This will more or less quickly converge to a value $x$ such that $\frac{\cos x}2=x$, just what you want.
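The same keystrokes, written as a loop (a minimal Python sketch):

```python
import math

x = 0.0
for _ in range(20):
    x = math.cos(x) / 2   # the keystrokes [cos], [/], [2], [=]
print(x, math.cos(x))     # x ~ 0.45018, and cos(x) ~ 2x as required
```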
{ "language": "en", "url": "https://math.stackexchange.com/questions/275198", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 3, "answer_id": 0 }
Spivak problem on orientations. (A comprehensive introduction to differential geometry) I have a problems doing exercise 16 of chapter 3 (p.98 in my edition) of Spivak's book. The problem is very simple. Let $M$ be a manifold with boundary, and choose a point $p\in\delta(M)$. Now consider an element $v\in T_p M$ which is not spanned by the vectors on $T_p\delta(M)$, that is, it's last coordinate is non-zero (after good identifications). We say that $v$ is inward pointing if there is a chart $\phi: U\rightarrow \mathbb{H}^n$ ($p\in U$) such that $d_p\phi(v)=(v_1,\dots,v_n)$ where $v_n>0$. It is asked to show that this is independent on the choice of coordinates (on the chart). I think that Spivak's idea is to realize first that the subespace of vectors in $T_p\delta M$ is independent on the chart, which can be seen noticing that if $i:\delta (M)\rightarrow M$ then $d_pi(v_1,\dots,v_{n-1})=(v_1,\dots,v_{n-1},0)\in T_p \mathbb{H}^n$
A change of coordinates between charts for the manifold with boundary $M$ has the form $x=(x_1, \cdots,x_n) \mapsto (\phi_1(x), \cdots,\phi_n(x))$, with $x_n, \phi_n(x)\geq0$ since $x_n,\phi_n(x)\in \mathbb H_n$. The last row of the Jacobian $Jac_a(\phi)$ at a point $a\in \partial \mathbb H_n$ has the form $(0,\cdots , 0,\frac { \partial \phi_n}{\partial x_n}(a))$: Indeed, for $1\leq i\leq n-1$ we have $\frac { \partial \phi_n}{\partial x_i}(a)=0$ by the definition of partial derivatives, since $\phi(\partial \mathbb H_n)\subset \partial \mathbb H_n$ and thus $\:\frac {\phi_n(a+he_i)-\phi_n(a)}{h}=\frac {0-0}{h}=0$. Similarly $\frac { \partial \phi_n}{\partial x_n}(a)\geq 0$ because $\:\frac {\phi_n(a+he_n)-\phi_n(a)}{h}=\frac {\phi_n(a+he_n)-0}{h}\gt 0$ for $h>0$. Actually, we must have $\frac { \partial \phi_n}{\partial x_n}(a)\gt 0$ because the Jacobian is invertible. The above proves that, given a tangent vector $v\in T_a(\mathbb H_n)$, its image $w=Jac_a(\phi)(v)$ satisfies $w_n=\frac { \partial \phi_n}{\partial x_n}(a)\cdot v_n$ with $\frac { \partial \phi_n}{\partial x_n}(a)\gt 0$, which shows that outward pointing vectors are preserved by the Jacobian of a change of coordinates for $M$, and thus that the notion of outward pointing vector is well-defined at a boundary point of a manifold with boundary. (The same computation shows that the notion of inward pointing vector, the one in the statement of the exercise, is well-defined as well.)
{ "language": "en", "url": "https://math.stackexchange.com/questions/275259", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Calculate $\lim_{x\to 0}\frac{\ln(\cos(2x))}{x\sin x}$ Problems with calculating $$\lim_{x\rightarrow0}\frac{\ln(\cos(2x))}{x\sin x}$$ $$\lim_{x\rightarrow0}\frac{\ln(\cos(2x))}{x\sin x}=\lim_{x\rightarrow0}\frac{\ln(2\cos^{2}(x)-1)}{(2\cos^{2}(x)-1)}\cdot \left(\frac{\sin x}{x}\right)^{-1}\cdot\frac{(2\cos^{2}(x)-1)}{x^{2}}=0$$ Correct answer is -2. Please show where this time I've error. Thanks in advance!
The known limits you might wanna use are, for $x\to 0$ $$\frac{\log(1+x)}x\to 1$$ $$\frac{\sin x }x\to 1$$ With them, you get $$\begin{align}\lim\limits_{x\to 0}\frac{\log(\cos 2x)}{x\sin x}&=\lim\limits_{x\to 0}\frac{\log(1-2\sin ^2 x)}{-2\sin ^2 x}\frac{-2\sin ^2 x}{x\sin x}\\&=-2\lim\limits_{x\to 0}\frac{\log(1-2\sin ^2 x)}{-2\sin ^2 x}\lim\limits_{x\to 0}\frac{\sin x}{x}\\&=-2\lim\limits_{u\to 0}\frac{\log(1+u)}{u}\lim\limits_{x\to 0}\frac{\sin x}{x}\\&=-2\cdot 1 \cdot 1 \\&=-2\end{align}$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/275308", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 6, "answer_id": 2 }
Ultrafilters and measurability Consider a compact metric space $X$, the sigma-algebra of the boreleans of $X$, a sequence of measurable maps $f_n: X \to\Bbb R$ and an ultrafilter $U$. Take, for each $x \in X$, the $U$-limit, say $f^*(x)$, of the sequence $(f_n(x))_{n \in\Bbb N}$. (Under what conditions on $U$) Is $f^*$ measurable?
Let me first get rid of a silly case that you probably didn't intend to include. If $U$ is a principal ultrafilter, generated by $\{k\}$, then $f^*$ is just $f_k$, so it's measurable. Now for the non-silly cases, where $U$ isn't principal. Here's an example of a sequence of measurable (in fact low-level Borel) functions whose $U$-limit isn't measurable. I'll use a very nice $X$, the unit interval with the usual topology and with Lebesgue measure. My functions $f_n$ will take only the values 0 and 1, and they're defined by $f_n(x)=$ the $n$-th bit in the binary expansion of $x$. (The binary expansion is ambiguous when $x$ is a rational number whose denominator is a power of 2, but that's only countably many $x$'s so they won't affect measurability; resolve the ambiguity any arbitrary way you want.) Then the $U$-limit $f^*$ of these functions sends $x$ to 1 iff the set $\{n:\text{the }n\text{-th bit in the binary expansion of }x\text{ is }1\}$ in in $U$. In other words, if we identify $x$ via its binary expansion with a sequence of 0's and 1's and if we then regard that sequence as the characteristic function of a subset of $\mathbb N$, then $f^*$, now viewed as mapping subsets of $\mathbb N$ to $\{0,1\}$, is just the characteristic function of $U$. An old theorem of Sierpiński says that this is never Lebesgue measurable when $U$ is a non-principal ultrafilter.
{ "language": "en", "url": "https://math.stackexchange.com/questions/275365", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
$\int_{\mathbb{R}^n} dx_1 \dots dx_n \exp(−\frac{1}{2}\sum_{i,j=1}^{n}x_iA_{ij}x_j)$? Let $A$ be a symmetric positive-definite $n\times n$ matrix and $b_i$ be some real numbers How can one evaluate the following integrals? * *$\int_{\mathbb{R}^n} dx_1 \dots dx_n \exp(−\frac{1}{2}\sum_{i,j=1}^{n}x_iA_{ij}x_j)$ *$\int_{\mathbb{R}^n} dx_1 \dots dx_n \exp(−\frac{1}{2}\sum_{i,j=1}^{n}x_iA_{ij}x_{j}-b_i x_i)$
Let $x=(x_1,\ldots,x_n)\in\mathbb{R}^n$. Since $A$ is symmetric positive-definite we can write $A=U^{T}U$ for an invertible matrix $U$ (e.g. via the Cholesky decomposition), so that $\det U=\sqrt{\det A}$. With the change of variables $y=Ux$, $dy=\det U\, dx$, we have \begin{align} \int_{\mathbb{R}^n}e^{-\frac{1}{2}\langle x,x \rangle_{A}} d x = & \int_{\mathbb{R}^n}e^{-\frac{1}{2}\langle x,Ax \rangle} d x \\ = & \int_{\mathbb{R}^n}e^{-\frac{1}{2}\langle Ux,Ux \rangle} d x \\ = & \frac{1}{\det U}\int_{\mathbb{R}^n}e^{-\frac{1}{2}\langle y,y \rangle} d y \\ = & \frac{1}{\sqrt{\det A}}\cdot \left(\int_{\mathbb R} e^{-\frac{1}{2}t^2} d t \right)^n\\ = & \frac{(2\pi)^{n/2}}{\sqrt{\det A}} \end{align} using $\int_{\mathbb R}e^{-t^2/2}\,dt=\sqrt{2\pi}$. For the second integral, complete the square: with the shift $x=y+s$ where $s=-A^{-1}b$ (so that $As+b=0$), \begin{align} \tfrac12(y+s)^TA(y+s) +b^T(y+s)= & \tfrac12 y^TAy+(As+b)^Ty+\tfrac12 s^TAs+b^Ts \\ = & \tfrac12 y^TAy-\tfrac12 b^TA^{-1}b. \end{align} Then $$ \int_{\mathbb{R}^n}e^{-\frac{1}{2}\langle x,x \rangle_{A}-b^Tx} d x = e^{\frac12 b^TA^{-1}b}\int_{\mathbb{R}^n} e^{-\frac{1}{2}\langle y,y\rangle_A} d y = e^{\frac12 b^TA^{-1}b}\cdot \frac{(2\pi)^{n/2}}{\sqrt{\det A}} $$
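A numerical spot-check of both formulas in dimension $n=2$ (an added sketch, assuming NumPy/SciPy):

```python
import numpy as np
from scipy.integrate import dblquad

A = np.array([[2.0, 0.5], [0.5, 1.0]])   # symmetric positive-definite
b = np.array([0.3, -0.7])

def f(y, x):                              # dblquad expects f(y, x)
    v = np.array([x, y])
    return np.exp(-0.5 * v @ A @ v - b @ v)

num, _ = dblquad(f, -np.inf, np.inf, -np.inf, np.inf)
exact = 2 * np.pi / np.sqrt(np.linalg.det(A)) * np.exp(0.5 * b @ np.linalg.inv(A) @ b)
print(num, exact)                         # agree to quadrature accuracy
```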
{ "language": "en", "url": "https://math.stackexchange.com/questions/275428", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Is there a quick way to solve $3^8 \equiv x \mod 17$? Is there a quick way to solve $3^8 \equiv x \mod 17$? Like the above says really, is there a quick way to solve for $x$? Right now, what I started doing was $3^8 = 6561$, and then I was going to keep subtracting $17$ until I got my answer.
When dealing with powers, squaring is a good trick to reduce computations (your computer does this too!) What this means is: $ \begin{array}{l l l l l} 3 & &\equiv 3 &\pmod{17}\\ 3^2 &\equiv 3^2 & \equiv 9 &\pmod{17}\\ 3^4 & \equiv 9^2 & \equiv 81 \equiv 13 & \pmod{17}\\ 3^8 & \equiv 13^2 & \equiv 169 \equiv 16 & \pmod{17}\\ \end{array} $ Slightly irrelevant note: By Euler's theorem, we know that $3^{16} \equiv 1 \pmod{17}$. Thus this implies that $3^{8} \equiv \pm 1 \pmod{17}$. If you know more about quadratic reciprocity, read Thomas Andrew's comment below.
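The same square-and-multiply idea in code, for reference (a Python sketch; the built-in three-argument `pow` does exactly this):

```python
def pow_mod(base, exp, mod):
    result = 1
    base %= mod
    while exp:
        if exp & 1:                   # current bit of the exponent is set
            result = result * base % mod
        base = base * base % mod      # repeated squaring
        exp >>= 1
    return result

print(pow_mod(3, 8, 17), pow(3, 8, 17))   # 16 16
```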
{ "language": "en", "url": "https://math.stackexchange.com/questions/275501", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 5, "answer_id": 3 }
Order of nontrivial elements is 2 implies Abelian group If the order of all nontrivial elements in a group is 2, then the group is Abelian. I know of a proof that is just from calculations (see below). I'm wondering if there is any theory or motivation behind this fact. Perhaps to do with commutators? Proof: $a \cdot b = (b \cdot b) \cdot (a \cdot b) \cdot (a \cdot a) = b \cdot (b \cdot a) \cdot (b\cdot a) \cdot a = b \cdot a$.
The idea of this approach is to work with a class of very small, finite, subgroups $H$ of $G$ in which we can prove commutativity. The reason for this is to be able to use the results like Cauchy's theorem and Lagrange's theorem. Consider the subgroup $H$ generated by two distinct, nonidentity elements $a,b$ in the given group. The group $H$ consists of strings of instances of $a$ and $b$. By induction on the length of a string, one can show that any string of length 4 or longer is equal to a string of length 3 or shorter. Using this fact we can list the seven possible elements of $H$: $$1,a,b,ab,ba,aba,bab.$$ By (the contrapositive of) Cauchy's Theorem, the only prime divisor of $|H|$ is 2. This implies the order of $H$ is either $1$, $2$, or $4$. If $|H|=1$ or $2$, then either $a$ or $b$ is the identity, a contradiction. Hence $|H|$ has four elements. The subgroup generated by $a$ has order 2; its index in $H$ is 2, so it is a normal subgroup. Thus, the left coset $\{b,ba\}$ is the same as the right coset$\{b,ab\}$, and as a result $ab=ba$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/275544", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "16", "answer_count": 4, "answer_id": 3 }
Definition of $f+g$ and $f \cdot g$ in the context of polynomial rings? I've been asked to prove that given $f$ and $g$ are polynomials in $R[x]$, where $R$ is a commutative ring with identity, $(f+g)(k) = f(k) + g(k)$, and $(f \cdot g)(k) = f(k) \cdot g(k)$. However, I always took these things as definition. What exactly is there to prove? How exactly are $f+g$ and $f \cdot g$ defined here?
One construction (I do not like using constructions as definitions) of $R[x]$ is as the set of formal sums $\sum r_i x^i$ of the symbols $1, x, x^2, x^3, ...$ with coefficients in $R$. Addition here is defined pointwise and multiplication is defined using the Cauchy product rule. For every $k \in R$ there is an evaluation map $\text{eval}_k : R[x] \to R$ which sends $\sum r_i x^i$ to $\sum r_i k^i$, and the question is asking you to show that this map is a ring homomorphism. To emphasize that this doesn't follow immediately from the construction, note that this statement is false if $R$ is noncommutative. The reason I don't want to use the word "definition" above is that "polynomial ring" should more or less mean "the ring such that the above thing is true, and if it isn't, we messed up and need a different construction."
{ "language": "en", "url": "https://math.stackexchange.com/questions/275601", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Limit with parameter, $(e^{-1/x^2})/x^a$ How do I explore for which values of the parameter $a$ $$ \lim_{x\to 0} \frac{1}{x^a} e^{-1/x^2} = 0? $$ For $a=0$ it is true, but I don't know what other values.
It's true for all $a's$ (note that for $a\leqslant 0$ it's trivial) and you can prove it easily using L'Hopital's rule or Taylor's series for $\exp(-x^2)$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/275653", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 1 }
Need help in describing the set of complex numbers Let set $$C=\{z\in \mathbb{C}:\sqrt{2}|z|=(i-1)z\}$$ I think C is empty , because you could put it in this way$$|z|=-\frac{(1-i)z}{\sqrt{2}}$$ but would like a second opinion.
$$C=\{z\in \mathbb{C}:\sqrt{2}|z|=(i-1)z\}$$ Let $z=a+bi$, $a,b\in\mathbb R$. Then $$\sqrt{2}|a+bi|=(i-1)(a+bi)$$ $$\sqrt{2}|a+bi|=ai+bi^2-a-bi$$ $$\sqrt{2}|a+bi|=ai-b-a-bi$$ $$\sqrt{2}|a+bi|=-b-a+(a-b)i$$ It follows that $a-b=0$ and $\sqrt{2}|a+bi|=-b-a$, i.e. $a=b$ and $\sqrt{2}|a+ai|=-2a\geq 0$, so $a=b\leq 0$. Hence $z=a+ai=a(1+i)$ and finally $$C=\{z=a(1+i),a\leq 0\}\neq\emptyset$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/275728", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 1 }
A sequence in $C([-1,1])$ and $C^1([-1,1])$ with star-weak convergence w.r.t. to one space, but not the other The functionals $$ \phi_n(x) = \int_{\frac{1}{n} \le |t| \le 1} \frac{x(t)}{t} \mathrm{d} t $$ define a sequence of functionls in $C([-1,1])$ and $C^1([-1,1])$. a) Show that $(\phi_n)$ converges *-weakly in $C^1([-1,1])'$. b) Does $(\phi_n)$ converges *-weakly in $C([-1,1])'$? For me the limit functional $$ \int_{0 \le |t| \le 1} \frac{x(t)}{t} \mathrm{d} t $$ is not well defined so i have trouble evaluating the condition of convergence? Do you have any hints?
@Davide Giraudo Regarding your derivation, I came to another result: \begin{align*} \int_{1/n \le |t| \le 1} \frac{x(t)}{t} \mathrm{d} t & = \int_{-1}^{-1/n} \frac{x(t)}{t} \mathrm{d} t + \int_{1/n}^1 \frac{x(t)}{t} \mathrm{d} t \\ & = \int_{1}^{1/n} \frac{x(-t)}{t} \mathrm{d} t + \int_{1/n}^1 \frac{x(t)}{t} \mathrm{d} t \\ & = -\int_{1/n}^{1} \frac{x(-t)}{t} \mathrm{d} t + \int_{1/n}^1 \frac{x(t)}{t} \mathrm{d} t \\ & = \int_{1/n}^{1} \frac{x(t)-x(-t)}{t} \, \mathrm{d} t \end{align*} Note the sign in the second line: substituting $t\mapsto-t$ in $\int_{-1}^{-1/n}\frac{x(t)}{t}\,\mathrm{d}t$ gives $\int_{1}^{1/n}\frac{x(-t)}{-t}\,(-\mathrm{d}t)=\int_{1}^{1/n}\frac{x(-t)}{t}\,\mathrm{d}t$, so the symmetric part cancels and only the odd part $x(t)-x(-t)$ survives. For $x\in C^1$ one has $\frac{x(t)-x(-t)}{t}\to 2x'(0)$ as $t\to0$, which is exactly what makes the limit functional well defined on $C^1([-1,1])$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/275789", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Computing $999,999\cdot 222,222 + 333,333\cdot 333,334$ by hand. I got this question from a last year's olympiad paper. Compute $999,999\cdot 222,222 + 333,333\cdot 333,334$. Is there an approach to this by using pen-and-paper? EDIT Working through on paper made me figure out the answer. Posted below. I'd now like to see other methods. Thank you.
$$999,999\cdot 222,222 + 333,333\cdot 333,334=333,333\cdot 666,666 + 333,333\cdot 333,334$$ $$=333,333 \cdot 1,000,000$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/275853", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "51", "answer_count": 6, "answer_id": 2 }
How to solve $|x-5|=|2x+6|-1$? $|x-5|=|2x+6|-1$. The answer is $0$ or $-12$, but how would I solve it by algebraically solving it as opposed to sketching a graph? $|x-5|=|2x+6|-1\\ (|x-5|)^2=(|2x+6|-1)^2\\ ...\\ 9x^4+204x^3+1188x^2+720x=0?$
Consider different cases: Case 1: $x>5$ In this case, both $x-5$ and $2x+6$ are positive, and you can resolve the absolute values positively. hence $$ x-5=2x+6-1 \Rightarrow x = -10, $$ which is not compatible with the assumption that $x>5$, hence no solution so far. Case 2: $-3<x\leq5$ In this case, $x-5$ is negative, while $2x+6$ is still positive, so you get $$ -(x-5)=2x+6-1\Rightarrow x=0; $$ Since $0\in[-3,5]$, this is our first solution. Case 3: $x\leq-3$ In this final case, the arguments of both absolute values are negative and the equation simplifies to $$ -(x-5) = -(2x+6)-1 \Rightarrow x = -12, $$ in agreement with your solution by inspection of the graph.
{ "language": "en", "url": "https://math.stackexchange.com/questions/275928", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 5, "answer_id": 0 }
Calculate $\underset{x\rightarrow7}{\lim}\frac{\sqrt{x+2}-\sqrt[3]{x+20}}{\sqrt[4]{x+9}-2}$ Please help me calculate this: $$\underset{x\rightarrow7}{\lim}\frac{\sqrt{x+2}-\sqrt[3]{x+20}}{\sqrt[4]{x+9}-2}$$ Here I've tried multiplying by $\sqrt[4]{x+9}+2$ and few other method. Thanks in advance for solution / hints using simple methods. Edit Please don't use l'Hosplital rule. We are before derivatives, don't know how to use it correctly yet. Thanks!
$\frac{112}{27}$ which is roughly 4.14815 The derivative of the top at $x=7$ is $\frac{7}{54}$ The derivative of the bottom at $x=7$ is $\frac{1}{32}$ $\frac{(\frac{7}{54})}{(\frac{1}{32})}$ is $\frac{112}{27}$
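A computer-algebra check of the value, added for reference (a sketch using SymPy):

```python
from sympy import symbols, sqrt, cbrt, root, limit

x = symbols('x')
expr = (sqrt(x + 2) - cbrt(x + 20)) / (root(x + 9, 4) - 2)
print(limit(expr, x, 7))   # 112/27, roughly 4.14815
```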
{ "language": "en", "url": "https://math.stackexchange.com/questions/275990", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12", "answer_count": 5, "answer_id": 4 }
Behavior of differential equation as argument goes to zero I'm trying to solve a coupled set of ODEs, but before attempting the full numerical solution, I would like to get an idea of what the solution looks like around the origin. The equation at hand is: $$ y''_l - (f'+g')y'_l + \biggr[ \frac{2-l^2-l}{x^2}e^{2f} - \frac{2}{x}(f'+g') - \frac{2}{x^2} \biggr]y_l = \frac{4}{x}(f'+g')z_l$$ $y,f,g,z$ are all functions of $x$, which has the domain ($0, X_0$). If I specifically take the $l=2$ case I of course have $$ y''_2 - (f'+g')y'_2 + \biggr[ \frac{-4}{x^2}e^{2f} - \frac{2}{x}(f'+g') - \frac{2}{x^2} \biggr]y_2 = \frac{4}{x}(f'+g')z_2$$ To avoid issues with singularities, I multiply both sides of the equation by $x^2$, to get, lets call it EQ1. $$ x^2 y''_2 - x^2(f'+g')y'_2 + \biggr[ -4e^{2f} - x(f'+g') - 2 \biggr]y_2 = 4 x(f'+g')z_2$$ Now if by some magic I know by that as $x \rightarrow 0$, $y_l = x^{l+1}$ so that $y_2 = x^3$. How would I determine what the functions $z_2$ is around the origin? Actually I already know the answer: $z_2$ should also go as $x^3$, but I have not been able to show it. Any help is much appreciated. My attempt at a solution: I have tried to keep things general so I expand $f[x] = 1 + f_1 x + f_2 x^2 +f_3 x^3$, and similarly $g[x] = 1 + g_1 x + g_2 x^2 +g_3 x^3$ where $f_1,f_2,f_3, g_1,g_2,g_3$ are constants. I have kept up to third order because I want to substitute $y_2 = x^3$, so I figured I should take the other functions to that order as well. As for the $e^{2f}$ term, I use a truncated Taylor series for the exponential, $1+\frac{x^2}{2!} + \frac{x^3}{3!}$, into which I substitue my expansion of $f[x]$. After expanding everything out and eliminating terms of higher order than $x^3$ from the right hand side of EQ1, I get something like $constant*x^3$, while the left hand side is third order polynomial times $z_2$. I really just don't know how to proceed. Should I have left higher order terms in the RHS, so that I could divide by a third order polynomial and still end up with $z_2 \propto x^3$. I don't know how this would work because on the RHS I had terms has high as $x^{12}$.
Your equation EQ1 forces $z_2$ to have a quadratic term. Indeed, the left-hand side of EQ1 is $$4(1-e^2)x^3-4(f_1+2e^2f_1+g_1)x^4+O(x^5) \tag{1}$$ This is equated to $4x(f'+g')z_2$. Clearly, $z_2$ must include the term $\frac{1-e^2}{f_1+g_1}x^2$. I used Maple to check this, including many more terms than was necessary. n:=7; F:=1+sum(f[k]*x^k,k=1..n); G:=1+sum(g[k]*x^k,k=1..n); y:=x^3; z:=(x^2*diff(y,x$2)-x^2*(diff(F,x)+diff(G,x))*diff(y,x)-(4*exp(2*F)+x*diff(F+G,x)+2)*y)/(4*x*diff(F+G,x)); series(z,x=0,4); is the series expansion for $z_2$. By the way, you can number displayed equations with commands like \tag{1}, as I did above.
{ "language": "en", "url": "https://math.stackexchange.com/questions/276064", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
if $f$ is entire then show that $f(z)f(1/z)$ is also entire This is again for an old exam. Let $f$ be an entire function, show that f(z)f(1/z) is entire. How do I go about showing the above. Do I use the definition of analyticity?., Call g: f(z)f(1/z) and show that it is complex differentiable everywhere? Edit: Well the original question was. Let $f$ be entire and suppose $f(z)f(1/z)$ is bounded on $\mathbb C$, then $f(z)=az^n$ for some $a\in \mathbb C$. I was trying to show that $f(z)f(1/z)$ is entire and then use Louiville's theorem. :). I hope this makes sense.
As Pavel already mentioned, this is not true. In fact, the only entire functions that satisfy the stated conclusion are $f(z) = cz^n$, where $c\neq 0$. First of all, $f$ must be a polynomial, otherwise $f(1/z)$ has an essential singularity at $z=0$. If $\deg f = n$, then $f(1/z)$ has a pole of order $n$ at the origin, so to cancel this, $f$ itself must have a zero of order $n$ at $z=0$. Edit: incidentally, the above should help you with the edited question too.
{ "language": "en", "url": "https://math.stackexchange.com/questions/276120", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Inequality problem algebra? How would I solve the following inequality problem. $s+1<2s+1<4$ My book answer says $s\in (0, \frac32)$ as the final answer but I cannot seem to get that answer.
We have $$s+1<2s+1<4.$$ This means $2s+1<4$, and in particular, $2s<3$. Dividing by the $2$ gives $s<3/2$. Now, observing on the other hand that we have $s+1<2s+1$, we subtract $s+1$ from both sides and have $0<s$. This gives us a bound on both sides of $s$, i.e., $$0<s<\frac{3}{2}$$ as desired.
{ "language": "en", "url": "https://math.stackexchange.com/questions/276179", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
How to solve simple systems of differential equations Say we are given a system of differential equations $$ \left[ \begin{array}{c} x' \\ y' \end{array} \right] = A\begin{bmatrix} x \\ y \end{bmatrix} $$ Where $A$ is a $2\times 2$ matrix. How can I in general solve the system, and secondly sketch a solution $\left(x(t), y(t) \right)$, in the $(x,y)$-plane? For example, let's say $$\left[ \begin{array}{c} x' \\ y' \end{array} \right] = \begin{bmatrix} 2 & -4 \\ -1 & 2 \end{bmatrix} \left[ \begin{array}{c} x \\ y \end{array} \right]$$ Secondly I would like to know how you can draw a phane plane? I can imagine something like setting $c_1 = 0$ or $c_2=0$, but I'm not sure how to proceed.
If you don't want to change variables, there is a simple way to calculate $e^A$ (in all cases). Let me explain. Let $A$ be a matrix and $p(\lambda)=\lambda^2-\operatorname{trace}(A)\lambda+\det(A)$ its characteristic polynomial. We have 2 cases: $1$) $p$ has two distinct roots $2$) $p$ has one root with multiplicity 2 Case 2 is simpler: in this case we have $p(\lambda)=(\lambda-a)^2$. By Cayley-Hamilton it follows that $p(A)=(A-aI)^2=0$. Now develop $e^x$ in a Taylor series around $a$: $$e^x=e^a+e^a(x-a)+e^a\frac{(x-a)^2}{2!}+...$$ Therefore $$e^A=e^aI+e^a(A-aI)$$ Note that $(A-aI)^2=0$ $\implies$ $(A-aI)^n=0$ for all $n\ge 2$ Case $1$: Let $A$ be your example. The eigenvalues are $0$ and $4$. Now we choose a polynomial $f$ of degree $\le1$ such that $e^0=f(0)$ and $e^4=f(4)$ (there is exactly one). In other words, what we want is a function $f(x)=cx+d$ such that $$1=d$$ $$e^4=4c+d$$ Solving this system we have $c=\dfrac{e^4-1}{4}$ and $d=1$. I claim that $$e^A=f(A)=cA+dI=\dfrac{e^4-1}{4}A+I$$ In general, if $\lambda_1$ and $\lambda_2$ are the distinct eigenvalues, and $f(x)=cx+d$ satisfies $f(\lambda_1)=e^{\lambda_1}$ and $f(\lambda_2)=e^{\lambda_2}$, then $$e^A=f(A)$$ If you are interested I can explain more (it is not hard to see why this is true). Now I will solve your equation using the above. What we need is $e^{tA}$. The eigenvalues of $tA$ are $0$ and $4t$. Then $e^{tA}=\dfrac{e^{4t}-1}{4t}A+I$ for $t\neq 0$.
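A quick numerical check of the resulting formula for this particular $A$ (an added sketch using SciPy's `expm`):

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[2.0, -4.0], [-1.0, 2.0]])
formula = (np.exp(4) - 1) / 4 * A + np.eye(2)
print(np.allclose(expm(A), formula))   # True
```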
{ "language": "en", "url": "https://math.stackexchange.com/questions/276264", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 2 }
Real tree and hyperbolicity I seek a proof of the following result due to Tits: Theorem: A path-connected $0$-hyperbolic metric space is a real tree. Do you know any proof or reference?
I finally found the result as Théorème 4.1 in Coornaert, Delzant and Papadopoulos' book Géométrie et théorie des groupes, les groupes hyperboliques de Gromov, where path-connected is replaced with geodesic; and in a document written by Steven N. Evans: Probability and Real Trees (theorem 3.40), where path-connected is replaced with connected.
{ "language": "en", "url": "https://math.stackexchange.com/questions/276328", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 0 }
Common tangent to two circles with Ruler and Compass Given two circles (centers are given) -- one is not contained within the other, the two do not intersect -- how to construct a line that is tangent to both of them? There are four such lines.
[I will assume you know how to do basic constructions, and not explain (say) how to draw perpendicular from a point to a line.] If you're not given the center of the circles, draw 2 chords and take their perpendicular bisector to find the centers $O_1, O_2$. Draw the line connecting the centers. Through each center, draw the perpendicular to $O_1O_2$, giving you the diameters of the circles that are parallel to each other. Connect up the endpoints of these diameters, and find the point of intersection. There are 2 ways to connect them up, 1 gives you the exterior center of expansion (homothety), the other gives you the interior center of expansion (homothety). Each tangent must pass through one of these centers of expansion. From any center of expansion, draw two tangents to any circle. Extend this line, and it will be tangential to the other circle.
{ "language": "en", "url": "https://math.stackexchange.com/questions/276381", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Rayleigh-Ritz method for an extremum problem I am trying to use the Rayleigh-Ritz method to calculate an approximate solution to the extremum problem with the following functional: $$ L[y]=\int\int_D (u_x^2+u_y^2+u^2-2xyu)\,dx\,dy, $$ $D$ is the unit square i.e. $0 \leq x \leq 1, 0 \leq y \leq 1.$ Also $u=0$ on the boundary of $D$. I have chosen to use the trial function: $$ \phi(x,y)=cxy(1-x)(1-y) $$ Where $c$ is a constant that I need to find. I am familiar with using the Rayleigh-Ritz method most of the time, however this question I am not sure of. Is it possible to convert the problem to a Sturm-Liouville ration type? Thanks for your help.
Your integral is in the form of $$L(x,y,u)=\int\int_D (u_x^2+u_y^2+u^2-2xyu)\,dx\,dy$$ $$0 \leq x \leq 1, 0 \leq y \leq 1$$ Due to homogenous boundary conditions it is possible to use your approximation function $$u(x,y)=cxy(1-x)(1-y)$$ When substituted into integral equation $$L(x,y,u)=\int_0^1\int_0^1 (u_x^2+u_y^2+u^2-2xyu)\,dx\,dy=\frac{7}{300}c^2-\frac{1}{72}c$$ and taking first derivative condition and solving for c $$\frac{d\,L}{d\,c}=\frac{7}{150}c-\frac{1}{72}=0\Rightarrow c=\frac{25}{84}$$ and $$u(x,y)=\frac{25}{84}xy(1-x)(1-y)$$ Since the second derivative is positive it is a minimum.
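The computation is easy to reproduce symbolically (an added sketch using SymPy):

```python
from sympy import symbols, integrate, diff, solve

x, y, c = symbols('x y c')
u = c * x * y * (1 - x) * (1 - y)   # the trial function
L = integrate(diff(u, x)**2 + diff(u, y)**2 + u**2 - 2*x*y*u,
              (x, 0, 1), (y, 0, 1))
print(L)                     # 7*c**2/300 - c/72
print(solve(diff(L, c), c))  # [25/84]
```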
{ "language": "en", "url": "https://math.stackexchange.com/questions/276446", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
$x^4 + y^4 = z^2$ $x, y, z \in \mathbb{N}$, $\gcd(x, y) = 1$ prove that $x^4 + y^4 = z^2$ has no solutions. It is true even without $\gcd(x, y) = 1$, but it is easy to see that $\gcd(x, y)$ must be $1$
This has been completely revised to match the intended question. The proof is by showing that there is no minimal positive solution, i.e., by infinite descent. It’s from some old notes; I’ve no idea where I cribbed it from in the first place. Suppose that $x^4+y^4=z^2$, where $z$ is the smallest positive integer for which there is a solution in positive integers. Then $(x^2,y^2,z)$ is a primitive Pythagorean triple, so there are relatively prime integers $m,n$ with $m>n$ such that $x^2=m^2-n^2$, $y^2=2mn$, and $z=m^2+n^2$. Since $2mn=y^2$, one of $m$ and $n$ is an odd square, and the other is twice a square. In particular, one is odd, and one is even. Now $x^2+n^2=m^2$, and $\gcd(x,n)=1$ (since $m$ and $n$ are relatively prime), so $(x,n,m)$ is a primitive Pythagorean triple, and it must be $n$ that is even: there must be integers $a$ and $b$ such that $a>b$, $a$ and $b$ are relatively prime, $x=a^2-b^2$, $n=2ab$, and $m=a^2+b^2$. It must be $m$ that is the odd square, so there are integers $r$ and $s$ such that $m=r^2$ and $n=2s^2$. Now $2s^2=n=2ab$, so $s^2=ab$, and we must have $a=c^2$ and $b=d^2$ for some integers $c$ and $d$, since $\gcd(a,b)=1$. The equation $m=a^2+b^2$ can then be written $r^2=c^4+d^4$. But $r\le r^2=m\le m^2<m^2+n^2=z$, so $(c,d,r)$ is a positive solution with $r<z$, contradicting the minimality of $z$. Hence no solution exists.
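This is a proof, so no computation is needed, but a brute-force search over small coprime pairs is a cheap sanity check that no counterexample hides at small heights (it is of course not a substitute for the descent argument):

```python
from math import gcd, isqrt

# Search small coprime pairs for x^4 + y^4 being a perfect square.
found = []
N = 200
for x in range(1, N):
    for y in range(x, N):
        if gcd(x, y) != 1:
            continue
        s = x**4 + y**4
        r = isqrt(s)
        if r * r == s:
            found.append((x, y, r))
print(found)  # expected: []
```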
{ "language": "en", "url": "https://math.stackexchange.com/questions/276515", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 1, "answer_id": 0 }
Dirac Delta Function of a Function I'm trying to show that $$\delta\big(f(x)\big) = \sum_{i}\frac{\delta(x-a_{i})}{\left|{\frac{df}{dx}(a_{i})}\right|}$$ Where $a_{i}$ are the roots of the function $f(x)$. I've tried to proceed by using a dummy function $g(x)$ and carrying out: $$\int_{-\infty}^{\infty}dx\,\delta\big(f(x)\big)g(x)$$ Then making the coordinate substitution $u$ = $f(x)$ and integrating over $u$. This seems to be on the right track, but I'm unsure where the absolute value comes in in the denominator, and also why it becomes a sum. $$\int_{-\infty}^{\infty}\frac{du}{\frac{df}{dx}}\delta(u)g\big(f^{-1}(u)\big) = \frac{g\big(f^{-1}(0)\big)}{\frac{df}{dx}\big(f^{-1}(0)\big)}$$ Can any one shed some light? Wikipedia just states the formula and doesn't actually show where it comes from.
Split the integral into regions around $a_i$, the zeros of $f$ (since integration of a delta function only gives nonzero results in regions where its argument is zero): $$ \int_{-\infty}^{\infty}\delta\big(f(x)\big)g(x)\,\mathrm{d}x = \sum_{i}\int_{a_i-\epsilon}^{a_i+\epsilon}\delta(f(x))g(x)\,\mathrm{d}x $$ Write out the Taylor expansion of $f$ for $x$ near some $a_i$ (i.e. a different expansion for each term in the summation): $$ f(a_i+x) =f(a_i) + f'(a_i)x + \mathcal{O}(x^2) = f'(a_i)x + \mathcal{O}(x^2) $$ Now, for each term, you can show that the following hold: $$ \int_{-\infty}^\infty\delta(kx)g(x)\,\mathrm{d}x = \frac{1}{|k|}g(0) = \int_{-\infty}^\infty\frac{1}{|k|}\delta(x)g(x)\,\mathrm{d}x $$ (making the substitution $y=kx$ and looking at $k<0$, $k>0$ separately; note that the trick is in the limits of integration, which flip when $k<0$, and this is where the absolute value comes from) and $$ \int_{-\infty}^\infty\delta(x+\mathcal{O}(x^2))g(x)\,\mathrm{d}x = g(0) = \int_{-\infty}^\infty\delta(x)g(x)\,\mathrm{d}x $$ (making use of the fact that we can take an interval around 0 as small as we like). Combine these with a shift to each of the desired roots, and you obtain the equality you're looking for.
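A numeric illustration may help: approximate $\delta$ by a narrow Gaussian and compare both sides of the identity. The choices of $f$, $g$ and the width below are hypothetical; $f(x)=x^2-1$ has roots $\pm1$ with $|f'(\pm1)|=2$, so the right-hand side is $\big(g(-1)+g(1)\big)/2$.

```python
import numpy as np

# Approximate delta by a narrow Gaussian of width eps and compare the two
# sides of the identity; f, g and eps below are hypothetical choices.
eps = 1e-3
delta = lambda t: np.exp(-t**2 / (2 * eps**2)) / (eps * np.sqrt(2 * np.pi))

f = lambda x: x**2 - 1   # roots a_i = -1, 1, with |f'(a_i)| = 2
g = np.cos

x = np.linspace(-3, 3, 2_000_001)
dx = x[1] - x[0]
lhs = np.sum(delta(f(x)) * g(x)) * dx    # int delta(f(x)) g(x) dx
rhs = (g(-1) + g(1)) / 2                 # sum_i g(a_i) / |f'(a_i)|

print(lhs, rhs)  # the two numbers should agree to several decimal places
```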
{ "language": "en", "url": "https://math.stackexchange.com/questions/276583", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "49", "answer_count": 7, "answer_id": 4 }
Is my understanding of product sigma algebra (or topology) correct? Let $(E_i, \mathcal{B}_i)$ be measurable (or topological) spaces, where $i \in I$ is an index set, possibly infinite. Their product sigma algebra (or product topology) $\mathcal{B}$ on $E= \prod_{i \in I} E_i$ is defined to be the coarsest one that can make the projections $\pi_i: E \to E_i$ measurable (or continuous). Many sources said the following is an equivalent definition: $$\mathcal{B}=\sigma \text{ or }\tau\left(\left\{\text{$\prod_{i \in I}B_i$, where $B_i \in \mathcal{B}_i, B_i=E_i$ for all but a finite number of $i \in I$}\right\}\right),$$ where $\sigma \text{ and }\tau$ mean taking the smallest sigma algebra and taking the smallest topology. Honestly I don't quite understand why this is the coarsest sigma algebra (or topology) that make the projections measurable (or continuous). Following is what I think is the coarsest one that can make the projections measurable $$\mathcal{B}=\sigma \text{ or }\tau\left(\left\{\text{$\prod_{i \in I}B_i$, where $B_i \in \mathcal{B}_i, B_i=E_i$ at least for all but one $i \in I$}\right\}\right),$$ because $\pi^{-1}_k (E_k) = \text{$\prod_{i \in I}B_i$, where $B_i=E_i$ for all $i \neq k$}$. So I was wondering if the two equations for $\mathcal{B}$ are the same? Thanks and regards!
For the comments: I retract my error for the definition of measurability. Sorry. For the two things generating the same sigma algebra (or topology, which is similar): We use $\langle - \rangle$ to denote the smallest sigma algebra containing the thing in the middle. We want to show that $$(1) \hspace{5mm}\langle \prod_{i} B_i \rangle$$ where $B_i \in \mathcal{B}_i$, and $B_i = E_i$ for all but finitely many $i$s, is the same as $$(2) \hspace{5mm}\langle \prod_{i} B_i \rangle$$ where $B_i \in \mathcal{B}_i$, and $B_i = E_i$ for all but one $i$. It is clear that $(2) \subset (1)$, since the generating collection in (2) is a subset of that of (1). On the other hand, $(2)$ contains $\prod_{i} B_i$ where $B_i \in \mathcal{B}_i$, and $B_i = E_i$ for all but finitely many $i$s, since it is the (finite) intersection of the generators. For example, $$B_1 \times B_2 = (B_1 \times E_2) \cap (E_1 \times B_2)$$ So $(2) \supset (1)$. Therefore $(2) = (1)$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/276632", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Does a graph with $0$ vertices count as simple? Does a graph with $0$ vertices count as a simple graph? Or does a simple graph need to have a non-empty vertex set? Thanks!
It is typical to refer to a graph with no vertices as the null graph. Since it has no loops and no parallel edges (indeed, it has no edges at all), it is simple. That said, if your present work finds you writing "Such and such is true for all simple graphs except the null graph", then it could be a good idea to announce at the beginning of your document that you will not consider the null graph to be simple.
{ "language": "en", "url": "https://math.stackexchange.com/questions/276685", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Minimal polynominal: geometric meaning I am currently studying Chapter 6 of Hoffman & Kunze's Linear Algebra which deals with characteristic values and triangulation and diagonalization theorems. The chapter makes heavy use of the concept of the minimal polynomial which it defines as the monic polynomial of the smallest degree that annihilates a linear transformation. I am currently finding the proofs in the book that use this definition of the minimal polynomial to be very opaque: just chains of calculations with polynomials that turn out to give the right answer in the end. I was therefore wondering if there is a more geometric view of the minimal polynomial? What information about the linear transformation is it carrying and why? I guess the answer would be simpler in the case of an algebraically closed field but is there an answer which also works for a general field?
Consider the following matrices: $$ A = \left(\begin{array}{cc}2&0\\0&2\end{array}\right) \ \ \text{ and } \ \ B = \left(\begin{array}{cc}2&1\\0&2\end{array}\right). $$ The first matrix has minimal polynomial $X - 2$ and the second has minimal polynomial $(X-2)^2$. If we subtract $2I_2$ from these matrices then we get $$ \left(\begin{array}{cc}0&0\\0&0\end{array}\right) \ \ \text{ and } \ \ \left(\begin{array}{cc}0&1\\0&0\end{array}\right), $$ where the first has minimal polynomial $X$ and the second has minimal polynomial $X^2$. The different exponents here reflect the fact that a matrix can have a power that is $O$ without being $O$ itself (this doesn't happen with ordinary numbers, like in ${\mathbf R}$ or ${\mathbf C}$). A matrix has a power equal to $O$ precisely when its minimal polynomial is some power of $X$, and the exponent you need on $X$ to achieve that can vary. As another example, compare $$ M = \left(\begin{array}{ccc} 0 & 1 & 1 \\ 0 & 0 & 1 \\ 0 & 0 & 0 \end{array} \right) \ \ \text{ and } \ \ N = \left(\begin{array}{ccc} 0 & 0 & 1 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{array} \right). $$ These are not the zero matrix, but $N^2 = O$ while $M^3 = O$ and $M^2 \not= O$. So $M$ has minimal polynomial $X^3$ and $N$ has minimal polynomial $X^2$. To describe a (monic) polynomial we could provide its roots and the multiplicities for those roots. For a square matrix, the roots of its minimal polynomial are easy to connect with the matrix: they are the eigenvalues of the matrix. The subtle part is their multiplicities, which are more algebraic than geometric. It might be natural to hope that the multiplicity of an eigenvalue $\lambda$ as a root of the minimal polynomial is the dimension of the $\lambda$-eigenspace, but this is false in general, as we can see with the matrices $A$ and $B$ above (e.g., $B$ has minimal polynomial $(X-2)^2$ but its 2-eigenspace is 1-dimensional). In the case when the matrix has all distinct eigenvalues, the minimal polynomial is the characteristic polynomial, so you could think of the distinction between the minimal and characteristic polynomials as reflecting the presence of repeated eigenvalues.
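The nilpotency claims above are easy to confirm numerically, for instance with a short NumPy sketch:

```python
import numpy as np

# Check the claims about the example matrices.
B = np.array([[2, 1],
              [0, 2]])
I2 = np.eye(2)
print(np.allclose((B - 2*I2) @ (B - 2*I2), 0))  # (B-2I)^2 = 0, but B-2I != 0

M = np.array([[0, 1, 1],
              [0, 0, 1],
              [0, 0, 0]])
N = np.array([[0, 0, 1],
              [0, 0, 0],
              [0, 0, 0]])
print(np.any(M @ M), np.allclose(M @ M @ M, 0))  # M^2 != 0 but M^3 = 0
print(np.allclose(N @ N, 0))                     # N^2 = 0
```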
{ "language": "en", "url": "https://math.stackexchange.com/questions/276745", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11", "answer_count": 3, "answer_id": 2 }
Integrate $\int_{C} \frac{1}{r-\bar{z}}dz$ - conflicting answers In an homework exercise, we're asked to integrate $\int_{C} \frac{1}{k-\bar{z}}dz$ where C is some circle that doesn't pass through $k$. I tried solving this question through two different approaches, but have arrived at different answers. Idea: use the fact that in C, $z\bar{z}=r^2$, where $r$ is the radius of C, to get $$\int_{C} \frac{1}{k-\frac{r^2}{z}}dz$$ We then apply Cauchy to get that the answer is $2πi r^2/k^2$ when C contains $r^2/k$, and 0 otherwise. Another idea: Intuitively, since C is a circle we get ${\int_{C} \frac{1}{k-\bar{z}}dz} = \int_{C} \frac{1}{k-z}dz$ (since $\bar{z}$ belongs to C iff $z$ belongs to C) and we can then use Cauchy's theorem and Cauchy's formula (depending on what C contains) to arrive at a different answer. Which answer (if any) is correct?
The original function is not holomorphic, since $\frac{d}{d\overline{z}}\frac{1}{k-\overline{z}}=\frac{1}{(k-\overline{z})^{2}}\neq0$. So you cannot apply Cauchy's integral formula. Let $C$ be centered at $c$ with radius $r$ (take $k$ and $c$ real for simplicity); then $z=re^{i\theta}+c=c+r\cos(\theta)+ri\sin(\theta)$, and its conjugate becomes $c+r\cos(\theta)-ri\sin(\theta)=c+re^{-i\theta}$. Therefore we have $k-\overline{z}=k-c-re^{-i\theta}$. To normalize it we multiply by $(k-c-r\cos(\theta))-ri\sin(\theta)=k-c-re^{i\theta}$. The result is $(k-c-r\cos(\theta))^{2}+r^{2}\sin^{2}\theta$. Rearranging gives $$k^{2}+c^{2}+r^{2}-2kc-2(k-c)r\cos(\theta)=A-2Br\cos(\theta),\quad A=k^{2}+c^{2}+r^{2}-2kc,\; B=k-c$$ So we have $$\int_{C}\frac{dz}{k-\overline{z}}=\int^{2\pi}_{0}\frac{k-c-re^{i\theta}}{(k-c-re^{-i\theta})(k-c-re^{i\theta})}\,d(c+re^{i\theta})=ri\int^{2\pi}_{0}\frac{Be^{i\theta}-re^{2i\theta}}{A-2Br\cos(\theta)}d\theta$$ So it suffices to integrate $$\frac{e^{i\theta}}{1-C\cos(\theta)}d\theta,\quad C=\text{constant}$$ since the above integral is a linear combination of integrals of this type. But this should be possible in general, as we can make trigonometric substitutions.
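A numeric check also supports the first approach for a circle centered at the origin (where $z\bar z=r^2$ does hold on $C$): parametrize $z=re^{i\theta}$ and integrate. The values of $k$ and $r$ below are hypothetical, chosen so that $r^2/k$ lies inside $C$.

```python
import numpy as np

# Circle of radius r centered at the origin, so z*conj(z) = r^2 on C.
k, r = 2.0, 1.0          # hypothetical; r^2/k = 0.5 lies inside C
n = 200_000
theta = np.linspace(0, 2 * np.pi, n, endpoint=False)
z = r * np.exp(1j * theta)
dtheta = 2 * np.pi / n

# dz = i z dtheta along the circle.
integral = np.sum(1.0 / (k - np.conj(z)) * 1j * z) * dtheta
print(integral)                  # numeric value of the contour integral
print(2j * np.pi * r**2 / k**2)  # prediction 2*pi*i*r^2/k^2 from approach 1
```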
{ "language": "en", "url": "https://math.stackexchange.com/questions/276803", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 2 }
Sum of the form $r+r^2+r^4+\dots+r^{2^k} = \sum_{i=1}^k r^{2^i}$ I am wondering if there exists any formula for the following power series : $$S = r + r^2 + r^4 + r^8 + r^{16} + r^{32} + ...... + r^{2^k}$$ Is there any way to calculate the sum of above series (if $k$ is given) ?
I haven’t been able to obtain a closed form expression for the sum, but maybe you or someone else can do something with what follows. In Blackburn's paper (reference below) there are some manipulations involving the geometric series $$1 \; + \; r^{2^n} \; + \; r^{2 \cdot 2^n} \; + \; r^{3 \cdot 2^n} \; + \; \ldots \; + \; r^{(m-1) \cdot 2^n}$$ that might give some useful ideas to someone. However, thus far I haven't found the identities or the manipulations in Blackburn's paper to be of any help. Charles Blackburn, Analytical theorems relating to geometrical series, Philosophical Magazine (3) 6 #33 (March 1835), 196-201. As for your series, I tried exploiting the factorization of $r^m – 1$ as the product of $r-1$ and $1 + r + r^2 + \ldots + r^{m-1}:$ First, replace each of the terms $r^m$ with $\left(r^{m} - 1 \right) + 1.$ $$S \;\; = \;\; \left(r – 1 \right) + 1 + \left(r^2 – 1 \right) + 1 + \left(r^4 – 1 \right) + 1 + \left(r^8 – 1 \right) + 1 + \ldots + \left(r^{2^k} – 1 \right) + 1$$ Next, replace the $(k+1)$-many additions of $1$ with a single addition of $k+1.$ $$S \;\; = \;\; (k+1) + \left(r – 1 \right) + \left(r^2 – 1 \right) + \left(r^4 – 1 \right) + \left(r^8 – 1 \right) + \ldots + \left(r^{2^k} – 1 \right)$$ Now use the fact that for each $m$ we have $r^m - 1 \; = \; \left(r-1\right) \left(1 + r + r^2 + \ldots + r^{m-1}\right).$ $$S \;\; = \;\; (k+1) + \left(r – 1 \right)\left[1 + \left(1 + r \right) + \left(1 + r + r^2 + r^3 \right) + \ldots + \left(1 + r + \ldots + r^{2^{k} - 1} \right) \right]$$ At this point, let's focus on the expression in square brackets. This expression is equal to $$\left(k+1\right) \cdot 1 + kr + \left(k-1\right)r^2 + \left(k-1\right)r^3 + \left(k-2\right)r^4 + \ldots + \left(k-2\right)r^7 + \left(k-3\right)r^8 + \ldots + \left(k-3\right)r^{15} + \ldots + \left(k-n\right)r^{2^n} + \dots + \left(k-n\right)r^{2^{n+1}-1} + \ldots + \left(1\right) r^{2^{k-1}} + \ldots + \left(1\right) r^{2^{k} - 1}$$ I'm now at a loss. We can slightly compress this by factoring out common factors for groups of terms such as $(k-2)r^4 + \ldots + (k-2)r^7$ to get $(k-2)r^4\left(1 + r + r^2 + r^3\right).$ Doing this gives the following for the expression in square brackets. $$\left(k+1\right) + \left(k\right)r + \left(k-1\right)r^2 \left(1+r\right) + \left(k-2\right)r^4\left(1+r+r^2+r^3\right) + \ldots + \;\; \left(1\right)r^{2^{k-1}} \left(1 + r + \ldots + r^{2^{k-1} -1} \right)$$
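The rearrangement above is easy to verify numerically; here is a short Python sketch using exact rational arithmetic (the sample values of $r$ and $k$ are arbitrary):

```python
from fractions import Fraction

# S = (k+1) + (r-1) * [bracket], where the bracket is
# sum_{n=0}^{k} (1 + r + ... + r^(2^n - 1)).
def S_direct(r, k):
    return sum(r**(2**i) for i in range(k + 1))

def S_rearranged(r, k):
    bracket = sum(sum(r**j for j in range(2**n)) for n in range(k + 1))
    return (k + 1) + (r - 1) * bracket

r, k = Fraction(3, 2), 5
print(S_direct(r, k) == S_rearranged(r, k))  # expected: True
```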
{ "language": "en", "url": "https://math.stackexchange.com/questions/276892", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "27", "answer_count": 5, "answer_id": 1 }
Matrices with columns which are eigenvectors In this question, the OP asks about finding the matrix exponential of the matrix $$M=\begin{bmatrix} 1 & 1 & 1\\ 1 & 1 & 1\\ 1 & 1 & 1 \end{bmatrix}.$$ It works out quite nicely because $M^2 = 3M$ so $M^n = 3^{n-1}M$. The reason this occurs is that the vector $$v = \begin{bmatrix} 1\\ 1\\ 1 \end{bmatrix}$$ is an eigenvector for $M$ (with eigenvalue $3$). The same is true of the $n\times n$ matrix consisting of all ones and the corresponding vector. With this in mind, I ask the following: * *Can we find a standard form for an $n\times n$ matrix with the property that each of its columns are eigenvectors? (Added later: This may be easier to answer if we allow columns to be zero vectors. Thanks Adam W for the comment.) *What about the case when we require all the eigenvectors to correspond to the same eigenvalue? The matrices in the second question are precisely the ones for which the calculation of the matrix exponential would be analogous to that for $M$ as above. Added later: In Hurkyl's answer, he/she shows that an invertible matrix satisfying 1 is diagonal. For the case $n=2$, it is fairly easy to see that any non-invertible matrix satisfies 1 (which is generalised by the situation in Seirios's answer). However, for $n > 2$, not every non-invertible matrix satisfies this property (as one may expect). For example, $$M = \begin{bmatrix} 1 & 0 & 0\\ 0 & 0 & 0\\ 1 & 0 & 1 \end{bmatrix}.$$
Another partial answer: One can notice that matrices of rank 1 are such examples. Indeed, if $M$ is of rank 1, its columns are of the form $a_1C,a_2C,...,a_nC$ for a given vector $C \in \mathbb{R}^n$ ; if $L=(a_1 \ a_2 \ ... \ a_n)$, then $M=CL$. So $MC=pC$ with the inner product $p=LC$. Also, $M^n=p^{n-1}M$.
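Here is a quick NumPy confirmation of both claims, with hypothetical sample vectors $C$ and $L$:

```python
import numpy as np

# Rank-1 example M = C L: every column of M is a multiple of C, hence an
# eigenvector (for eigenvalue p = L C), and M^n = p^(n-1) M.
C = np.array([[1.0], [2.0], [3.0]])   # column vector
L = np.array([[4.0, 5.0, 6.0]])       # row vector
M = C @ L
p = float(L @ C)

print(np.allclose(M @ C, p * C))                 # C is an eigenvector
print(np.allclose(np.linalg.matrix_power(M, 3),  # M^3 = p^2 * M
                  p**2 * M))
```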
{ "language": "en", "url": "https://math.stackexchange.com/questions/276962", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 3, "answer_id": 2 }
Free group n contains subgroup of index 2 My problem is to show that any free group $F_{n}$ has a normal subgroup of index 2. I know that any subgroup of index 2 is normal. But how do I find a subgroup of index 2? The subgroup needs to have 2 cosets. My first guess is to construct a subgroup $H<G$ as $H = <x_{1}^{2}, x_{2}^{2}, ... , x_{n}^{2} >$ but this wouldn't be correct because $x_{1}H \ne x_{2}H \ne H$. What is a way to construct such a subgroup?
For any subgroup $H$ of $G$ and elements $a$ and $b$ of $G$ the following statements hold. * *If $a \in H$ and $b \in H$, then $ab \in H$ *If $a \in H$ and $b \not\in H$, then $ab \not\in H$ *If $a \not\in H$ and $b \in H$, then $ab \not\in H$ Hence it is natural to ask when $a \not\in H$ and $b \not\in H$ implies $ab \in H$, i.e. when a subgroup is a "parity subgroup", as in $G = \mathbb{Z}$ and $H = 2\mathbb{Z}$ or in $G = S_n$ and $H = A_n$. Suppose that $H$ is a proper subgroup since the case $H = G$ is not interesting. Then the following statements are equivalent: * *$[G:H] = 2$ *For all elements $a \not\in H$ and $b \not\in H$ of $G$, we have $ab \in H$. *There exists a homomorphism $\phi: G \rightarrow \{1, -1\}$ with $\operatorname{Ker}(\phi) = H$. So these parity subgroups are precisely all the subgroups of index $2$. I think you could use (2) in PatrickR's answer and (3) in DonAntonio's answer. Of course, they are both good and complete answers on their own; this is just one way I like to think about subgroups of index $2$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/277007", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 3, "answer_id": 2 }
how to show $f(1/n)$ is convergent? Let $f:(0,\infty)\rightarrow \mathbb{R}$ be differentiable, $\lvert f'(x)\rvert<1 \forall x$. We need to show that $a_n=f(1/n)$ is convergent. Well, it just converges to $f(0)$ as $\lim_{n\rightarrow \infty}f(1/n)=f(0)$ am I right? But $f$ is not defined at $0$ and I am not able to apply the fact $\lvert f'\rvert < 1$. Please give me hints.
The condition $|f'(x)|<1$ implies that $f$ is Lipschitz: $$|f(x)-f(y)|\le |x-y| $$ Then $$|f(\tfrac{1}{n})-f(\tfrac{1}{m})|\le\left|\frac{1}{n}-\frac{1}{m}\right|$$ Since $x_n=\frac{1}{n}$ is Cauchy, $f(\frac{1}{n})$ is also Cauchy, and hence convergent.
{ "language": "en", "url": "https://math.stackexchange.com/questions/277070", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 5, "answer_id": 2 }
Nilpotent Lie Group that is not simply connect nor product of Lie Groups? I have been trying to find for days a non-abelian nilpotent Lie Group that is not simply connected nor product of Lie Groups, but haven't been able to succeed. Is there an example of this, or hints to this group, or is it fundamentally impossible? Cheers and thanks.
The typical answer is a sort of Heisenberg group, presented as a quotient (by a discrete central, hence normal, subgroup) $$ H \;=\; \{\pmatrix{1 & a & b \cr 0 & 1 & c\cr 0 & 0& 1}:a,b,c\in \mathbb R\} \;\bigg/\; \{\pmatrix{1 & 0 & b \cr 0 & 1 & 0\cr 0 & 0& 1}:b\in \mathbb Z\} $$ Edit: To certify the non-simple-connectedness, note that the group of upper-triangular unipotent matrices is simply connected, and that the indicated subgroup is discrete, so the unipotent group is the universal covering group of this Heisenberg group, and $\pi_1$ of the quotient (the Heisenberg group) is isomorphic to that discrete subgroup, which is isomorphic to $\mathbb Z$ and in particular nontrivial.
{ "language": "en", "url": "https://math.stackexchange.com/questions/277118", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Arithmetic Progressions in Complex Variables From Stein and Shakarchi's Complex Analysis book, Chapter 1 Exercise 22 asks the following: Let $\Bbb N=\{1,2,\ldots\}$ denote the set of positive integers. A subset $S\subseteq \Bbb N$ is said to be in arithmetic progression if $$S=\{a,a+d,a+2d,\ldots\}$$ where $a,d\in\Bbb N$. Here $d$ is called the step of $S$. We are asked to show that $\Bbb N$ cannot be partitioned into a finite number of subsets that are in arithmetic progression with distinct steps (except for the case $a=d=1$). He gives a hint to write $$\sum_{n\in\Bbb N}z^n$$ as a sum of terms of the type $$\frac{z^a}{1-z^d}.$$ How do I apply the hint? I know that $$\sum_{n\in\Bbb N}z^n=\frac{z}{1-z}$$ but that doesn't have anything to do with the $a$ or $d$. Thanks for any help!
Suppose $\mathbf{N}=S_1\cup\cdots\cup S_k$ is a partition of $\bf N$ and $S_r=\{a_r+d_rm:m\ge0\}$. Then $$\begin{array}{cl}\frac{z}{1-z} & =\sum_{n\in\bf N}z^n \\ & =\sum_{r=1}^k\left(\sum_{n\in S_r}z^n\right) \\ & = \sum_{r=1}^k\left(\sum_{m\ge0}z^{a_r+d_rm}\right) \\ & = \sum_{r=1}^k\frac{z^{a_r}}{1-z^{d_r}}.\end{array}\tag{$\circ$}$$ Suppose the $d_r$ are distinct. Do both sides of $(\circ)$ have the same poles in the complex plane?
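As a sanity check of the identity $(\circ)$, here is a SymPy sketch for the concrete partition of $\bf N$ into odds ($a=1$, $d=2$) and evens ($a=2$, $d=2$); the steps here are equal, which is exactly what the exercise says must happen:

```python
import sympy as sp

# Partition of N: odds (a=1, d=2) and evens (a=2, d=2).
z = sp.symbols('z')
lhs = z / (1 - z)
rhs = z / (1 - z**2) + z**2 / (1 - z**2)

print(sp.simplify(lhs - rhs))  # expected: 0
```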
{ "language": "en", "url": "https://math.stackexchange.com/questions/277183", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
If $X$ is normal and $A$ is a $F_{\sigma}$-set in $X$, then $A$ is normal. How could I prove this theorem? A topological space $X$ is a normal space if, given any disjoint closed sets $E$ and $F$, there are open neighbourhoods $U$ of $E$ and $V$ of $F$ that are also disjoint. (Or more intuitively, this condition says that $E$ and $F$ can be separated by neighbourhoods.) And an $F_{\sigma}$-set is a countable union of closed sets. So I should be able to show that the $F_{\sigma}$-set has the necessary conditions for a $T_4$ space? But how could I for instance select two disjoint closed sets from $F_{\sigma}$?
Let us begin with a lemma (see Engelking's "General Topology", Lemma 1.5.15): If $X$ is a $T_1$ space and for every closed $F$ and every open $W$ that contains $F$ there exists a sequence $W_1$, $W_2$, ... of open subsets of $X$ such that $F\subset \cup_{i}W_i$ and $\operatorname{cl}(W_i)\subset W$ for $i=1, 2, \ldots$, then the space $X$ is normal. Suppose $X$ is normal and $A = \cup_n F_n \subset X$ is an $F_\sigma$ in $X$, where all the $F_n$ are closed subsets of $X$. Then $A$ is normal (in the subspace topology). To apply the lemma, let $F$ be closed in $A$ and $W$ be an open superset of it (open in $A$). Let $O$ be open in $X$ such that $O \cap A = W$, and note that each $F \cap F_n$ is closed in $X$ (since $F$ is the trace on $A$ of a closed subset of $X$, and $F_n\subset A$). By normality of $X$ there are open subsets $O_n$ of $X$, for $n \in \mathbb{N}$, such that $$ F \cap F_n \subset O_n \subset \overline{O_n} \subset O $$ and define $W_n = O_n \cap A$, which are open in $A$ and satisfy that the $W_n$ cover $F$ (as each $W_n$ covers $F \cap F_n$ and $F = F \cap A = \cup_n (F \cap F_n)$) and the closure of $W_n$ in $A$ equals $$\overline{W_n} \cap A = \overline{O_n \cap A} \cap A \subset \overline{O_n} \cap A \subset O \cap A = W$$ which is what is needed for the lemma.
{ "language": "en", "url": "https://math.stackexchange.com/questions/277251", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 2, "answer_id": 1 }
Is high school contest math useful after high school? I've been prepping for a lot of high school math competitions this year. Will all the math I learn would actually mean something in college? There is a chance that all of it will be for naught, and I just wanted to know if any of you people found the math useful after high school. I do like what I'm learning, so it's not like I'm only prepping by forcing myself to. EDIT: I'm not really sure what the appropriate procedure is to thank everyone for the answers. So... thanks! All of your answers were really helpful, and they motivated me a little more. However, I would like to make it clear that I wasn't looking for a reason to continue but rather just asking a question out of curiosity.
High school math competitions require you to learn how to solve problems, especially when there is no "method" you can look up telling you how to solve these problems. Problem solving is a very desirable skill for many jobs you might someday wish to have.
{ "language": "en", "url": "https://math.stackexchange.com/questions/277310", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "22", "answer_count": 5, "answer_id": 3 }
How to show that $\frac{x^2}{x-1}$ simplifies to $x + \frac{1}{x-1} +1$ How does $\frac{x^2}{(x-1)}$ simplify to $x + \frac{1}{x-1} +1$? The second expression would be much easier to work with, but I cant figure out how to get there. Thanks
Very clever trick: If you have to show that two expressions are equivalent, you work backwards. $$\begin{align}=& x +\frac{1}{x-1} + 1 \\ \\ \\ =& \frac{x^2 - x}{x-1} +\frac{1}{x - 1} + \frac{x-1}{x-1} \\ \\ \\ =& \frac{x^2 - x + 1 + x - 1}{x - 1} \\ \\ \\ =&\frac{x^2 }{x - 1}\end{align}$$Now, write the steps backwards (if you're going to your grandmommy's place, you turn backwards and then you again turn backwards, you're on the right way!) and act like a know-it-all. $$\begin{align}=&\frac{x^2 }{x - 1} \\ \\ \\ =& \frac{x^2 - x}{x-1} +\frac{1}{x - 1} + \frac{x-1}{x-1} \\ \\ \\ =& x +\frac{1}{x-1} + 1 \end{align}$$ Q.E.Doodly dee! This trick works and you can impress your friends with such elegant proofs produced by this trick.
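If you want a machine to check the equivalence as well, a one-line SymPy computation does it (valid away from $x=1$):

```python
import sympy as sp

# Symbolic confirmation that the two expressions agree.
x = sp.symbols('x')
print(sp.simplify(x**2/(x - 1) - (x + 1/(x - 1) + 1)))  # expected: 0
```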
{ "language": "en", "url": "https://math.stackexchange.com/questions/278481", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 6, "answer_id": 3 }
Prove that if for every $x \in \mathbb{R}^N$ $Ax=Bx$ then $A=B$ How can I quickly prove that if for every $x \in \mathbb{R}^N$ $$Ax=Bx$$ then $A=B$ ? Where $A,B\in \mathbb{R}^{N\times N}$. Normally I would multiply both sides by inverse of $x$, however vectors have no inverse, so I am not sure how to prove it.
If you want to invert a matrix but all you have are vectors, put the vectors into a matrix! For example, $$A \left( e_1 \mid e_2 \mid \cdots \mid e_n \right) = \left( A e_1 \mid A e_2 \mid \cdots \mid A e_n \right) $$ where $e_i$ is the $i$-th standard basis (column) vector. I could have chosen any vectors, but the standard basis vectors are the simplest, and I chose a linearly independent set so as to get the most information out of doing this. Also, the matrix wouldn't be invertible if I had chosen a linearly dependent set. It turns out we don't even need to bother with inverting in this case, since we've pooled the vectors into an identity matrix: $$ \begin{align} A &= A(e_1 \mid e_2 \mid \cdots \mid e_n) \\&= (Ae_1 \mid Ae_2 \mid \cdots \mid Ae_n) \\&= (Be_1 \mid Be_2 \mid \cdots \mid Be_n) \\ &= B(e_1 \mid e_2 \mid \cdots \mid e_n) \\ &= B\end{align} $$
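A small NumPy demonstration of the idea, with a random matrix standing in for $A$:

```python
import numpy as np

# Applying A to the standard basis vectors and stacking the results as
# columns reassembles A itself, so Ax = Bx on all basis vectors forces A = B.
rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
E = np.eye(3)  # columns are e_1, e_2, e_3

cols = np.column_stack([A @ E[:, i] for i in range(3)])
print(np.allclose(cols, A))  # expected: True
```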
{ "language": "en", "url": "https://math.stackexchange.com/questions/278555", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 5, "answer_id": 1 }
Trying to find $\sum\limits_{k=0}^n k \binom{n}{k}$ Possible Duplicate: How to prove this binomial identity $\sum_{r=0}^n {r {n \choose r}} = n2^{n-1}$? $$\begin{align} &\sum_{k=0}^n k \binom{n}{k} =\\ &\sum_{k=0}^n k \frac{n!}{k!(n-k)!} =\\ &\sum_{k=0}^n k \frac{n(n-1)!}{(k-1)!((n-1)-(k-1))!} = \\ &n\sum_{k=0}^n \binom{n-1}{k-1} =\\ &n\sum_{k=0}^{n-1} \binom{n}{k} + n \binom{n-1}{-1} =\\ &n2^{n-1} + n \binom{n-1}{-1} \end{align}$$ * *Do I have any mistake? *How can I handle the last term? (Presumptive) Source: Theoretical Exercise 1.12(a), P18, A First Course in Pr, 8th Ed, by S Ross
By convention $\binom{n}k=0$ if $k$ is a negative integer, so your last line is simply $$n\sum_{k=0}^n\binom{n-1}{k-1}=n\sum_{k=0}^{n-1}\binom{n}k=n2^{n-1}\;.$$ Everything else is fine. By the way, there is also a combinatorial way to see that $k\binom{n}k=n\binom{n-1}{k-1}$: the lefthand side counts the ways to choose a $k$-person committee from a group of $n$ people and then choose one of the $k$ to be chairman; the righthand side counts the number of ways to select a chairman ($n$) and then the other $k-1$ members of the committee.
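A quick brute-force check of the identity for small $n$ (Python's `math.comb` gives the binomial coefficients):

```python
from math import comb

# Check sum_{k=0}^{n} k*C(n,k) = n*2^(n-1) for small n.
for n in range(1, 11):
    lhs = sum(k * comb(n, k) for k in range(n + 1))
    assert lhs == n * 2**(n - 1)
print("identity holds for n = 1..10")
```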
{ "language": "en", "url": "https://math.stackexchange.com/questions/278615", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 0 }
A continuous function $f : \mathbb{R} → \mathbb{R}$ is uniformly continuous if it maps Cauchy sequences into Cauchy sequences. A continuous function $f : \mathbb{R} → \mathbb{R}$ is uniformly continuous if it maps Cauchy sequences into Cauchy sequences. is the above statement is true? I guess it is not true but can't find any counterexample.
The answer is no, as explained by Jonas Meyer: in fact every continuous function $f:\mathbb R\longrightarrow \mathbb R$ has this property. If $(x_n)_{n\in\mathbb N}$ is a Cauchy sequence then it is bounded, say $m\leq x_n \leq M, \ \ \forall \ n\in\mathbb N$ for some $m<M$. Since $f$ is uniformly continuous on the compact interval $[m,M]$, the result follows. So $f(x)=x^2$ (or any continuous but not uniformly continuous function $\mathbb R\to \mathbb R$) is a counterexample.
{ "language": "en", "url": "https://math.stackexchange.com/questions/278678", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }