Using the Casorati-Weierstrass theorem. Show that there is a complex number $z$ such that:$$\left|\cos{\left(\frac{1}{2z^4+3z^2+1}\right)}+100\tan^2{z}+e^{-z^2}\right|<1$$ It's easy to see that $z=i$ is a simple pole of $\frac{1}{2z^4+3z^2+1}$, but I want to know how to conclude that $z=i$ is an essential singularity of $\cos{\left(\frac{1}{2z^4+3z^2+1}\right)}$ so that I can use the Casorati-Weierstrass theorem.
Denote $$f(z) = \cos\left(\frac{1}{2z^4+3z^2+1}\right)$$ To prove $z=i$ is an essential singularity of $f(z)$, just find two sequences $\{z_n\}$ and $\{w_n\}$ such that $z_n, w_n\rightarrow i$ but $f(z_n) = 1$, $f(w_n)=-1$. This means both $\lim_{z\to i} f(z)$ and $\lim_{z\to i} 1/f(z)$ do not exist, so the singularity is neither removable nor a pole. By definition, $z=i$ is an essential singularity.
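For concreteness, here is one way such sequences might be produced (my sketch, filling in the construction): write $$g(z)=\frac{1}{2z^4+3z^2+1}=\frac{1}{(2z^2+1)(z^2+1)},$$ which has a simple pole at $z=i$. Near the pole, $1/g$ is holomorphic with a zero at $i$ and, being a nonconstant open map, attains every sufficiently small value; hence $g$ attains every sufficiently large real value near $i$. In particular, for large $n$ the equations $$g(z_n)=2n\pi,\qquad g(w_n)=(2n+1)\pi$$ have solutions with $z_n,w_n\to i$, giving $f(z_n)=\cos(2n\pi)=1$ and $f(w_n)=\cos\big((2n+1)\pi\big)=-1$.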
{ "language": "en", "url": "https://math.stackexchange.com/questions/439465", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Diameter of finite set of points is equal to diameter of its convex hull Let $M\subset \mathbb{R}^2$ be a finite set of points, $\operatorname{C}(M)$ the convex hull of M and $$\operatorname{diam}(M) = \sup_{x,y\in M}\|x-y\|_2$$ be the diameter of $M$. What I want to show now is that it holds $$\operatorname{diam}(M) = \operatorname{diam}(\operatorname{C}(M))$$ Because $$M\subseteq\operatorname{C}(M)$$ we obtain $$\operatorname{diam}(M) \le\operatorname{diam}(\operatorname{C}(M))$$ but how to prove that $$\operatorname{diam}(M) \ge \operatorname{diam}(\operatorname{C}(M))$$ I suppose it should be possible to construct a contradiction assuming $\operatorname{diam}(M) <\operatorname{diam}(\operatorname{C}(M))$ but I do not see how at this moment.
Hint: Prove this for a triangle and then use the fact that for every point of $C(M)$ there is a triangle that contains it, there are many ways to go from there. I hope this helps ;-)
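To expand the hint slightly (my sketch, one of the "many ways"): for a fixed point $p$, the map $x\mapsto\|x-p\|_2$ is convex, so over a triangle it attains its maximum at a vertex. Given $x,y\in \operatorname{C}(M)$, enclose $x$ in a triangle with vertices in $M$; replacing $x$ by the maximizing vertex does not decrease $\|x-y\|_2$, and doing the same for $y$ yields two points of $M$ at distance at least $\|x-y\|_2$, whence $\operatorname{diam}(\operatorname{C}(M))\le\operatorname{diam}(M)$.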
{ "language": "en", "url": "https://math.stackexchange.com/questions/439571", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
are elementary symmetric polynomials concave on probability distributions? Let $S_{n,k}=\sum_{S\subset[n],|S|=k}\prod_{i\in S} x_i$ be the elementary symmetric polynomial of degree $k$ on $n$ variables. Consider this polynomial as a function, in particular a function on probability distributions on $n$ items. It is not hard to see that this function is maximized at the uniform distribution. I am wondering if there is a "convexity"-based approach to show this. Specifically, is $S_{n,k}$ concave on probability distributions on $n$ items?
(I know this question is ancient, but I happened to run into it while looking for something else.) While I am not sure if $S_{n,k}$ is concave on the probability simplex, you can prove the result you want and many other similar useful things using Schur concavity. A sketch follows. A vector $y\in \mathbb{R}_+^n$ majorizes $x \in \mathbb{R}_+^n$ if the following inequalities are satisfied: $$ \sum_{j=1}^i{x_{(j)}} \leq \sum_{j=1}^i{y_{(j)}} $$ for all $i$, and $\sum_{i=1}^n x_i = \sum_{i=1}^n y_i$. Here $x_{(j)}$ is the $j$-th largest coordinate of $x$ and similarly for $y$. Let's write this $x \prec y$. For intuition it's useful to know that $x \prec y$ if and only if $x$ is in the convex hull of vectors you get by permuting the coordinates of $y$. A function is Schur-concave if $x \prec y \implies f(x) \geq f(y)$. A simple sufficient condition for Schur concavity is that $\partial f(x)/\partial x_i \ge \partial f(x)/\partial x_j$ whenever $x_i \le x_j$. It is easy to verify that $S_{n,k}$ satisfies this condition for any $n$,$k$. Notice that $x=(1/n, \ldots, 1/n)$ is majorized by every vector $y$ in the probability simplex. You can see this for example by noticing that the expected sum of $i$ uniformly random coordinates of $y$ is $i/n$, so surely the sum of the $i$ largest coordinates is at least as much. Equivalently, $x$ is the average of all permutations of $y$. This observation, and the Schur concavity of $S_{n,k}$ imply $S_{n,k}(x) \ge S_{n,k}(y)$. In fact, $S_{n,k}^{1/k}$ is concave on the positive orthant, and this implies what you want. This is itself a special case of much more powerful results about the concavity of mixed volumes. But the Schur concavity approach is elementary and pretty widely applicable.
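The sufficient condition can be checked explicitly (a quick verification, with the conventions $S_{m,0}=1$ and $S_{m,j}=0$ for $j<0$): isolating the variables $x_i$ and $x_j$ gives $$\frac{\partial S_{n,k}}{\partial x_i}-\frac{\partial S_{n,k}}{\partial x_j}=(x_j-x_i)\,S_{n-2,k-2}(x_{-ij}),$$ where $x_{-ij}$ denotes the remaining $n-2$ variables. Since $S_{n-2,k-2}\geq 0$ on the nonnegative orthant, the difference of partials is nonnegative exactly when $x_i\leq x_j$.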
{ "language": "en", "url": "https://math.stackexchange.com/questions/439649", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11", "answer_count": 1, "answer_id": 0 }
For $f,g~(f<g),~t\in\mathcal C[0,1],~c>0$ let $$\{h\in\mathcal C[0,1]:t-c<h<t+c\}=\{h\in\mathcal C[0,1]:f<h<g\}.$$ I want to show that $t-c=f,~t+c=g.$ $$t-c<t<t+c\text{ and } \\f<\dfrac{f+g}{2}<g.\\\text{Then }t-c<\dfrac{f+g}{2}<t+c\text{ and } f<t<g.$$ I don't know how to contradict the following cases: for some $y\in[0,1]$,
* Let $t(y)-c>f(y)$
* Let $t(y)-c<f(y)$
* Let $t(y)+c>g(y)$
* Let $t(y)+c<g(y)$
This problem can more clearly be written as: For $f_1,g_1,f_2,g_2\in\mathcal C[0,1]$ $$\{h\in\mathcal C[0,1]:f_1<h<g_1\}=\{h\in\mathcal C[0,1]:f_2<h<g_2\}\implies f_1=f_2,g_1=g_2.$$
Let $A=\{h\in\mathcal C[0,1]|\;t-c<h<t+c\}$ and $B=\{h\in\mathcal C[0,1]|\;f<h<g\}$. For every $\epsilon\in(0,c)$ we have $t-\epsilon,t+\epsilon\in A$ by definition of $A$. Since $A=B$, this means that $t-\epsilon,t+\epsilon\in B$ for all $\epsilon\in(0,c)$, which by definition of $B$ means that $$f<t-\epsilon<t+\epsilon<g.$$ This implies that $$f\leq t-c<t+c\leq g.$$ To complete the proof, we have to show that $f\geq t-c$ and $g\leq t+c$. To do this, note that (since $f<g$) for every $\epsilon\in(0,1)$ we have $$f<(1-\epsilon)f+\epsilon g<g,$$ therefore $(1-\epsilon)f+\epsilon g\in B$. But this means that $(1-\epsilon)f+\epsilon g\in A$ for all $\epsilon\in(0,1)$, which means that $$t-c<(1-\epsilon)f+\epsilon g<t+c.$$ Now, let $\epsilon\to 0$. This gives us $$t-c\leq f\leq t+c.$$ Similarly, letting $\epsilon\to 1$ gives us $$t-c\leq g\leq t+c.$$ This completes the proof.
{ "language": "en", "url": "https://math.stackexchange.com/questions/439710", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Techniques for determining how "random" a sample is? What techniques exist to determine the "randomness" of a sample? For instance, say I have data from a series of $1200$ six-sided dice rolls. If the results were 1, 2, 3, 4, 5, 6, 1, 2, 3, 4, 5, 6, ... Or: 1, 1, 1, ..., 2, 2, 2, ..., 3, 3, 3, ... The confidence of randomness would be quite low. Is there a formula where I can input the sequence of outcomes and get back a number that corresponds to the likelihood of randomness? Thanks UPDATE: awkward's answer was the most helpful. Some Googling turned up these two helpful resources:
* Statistical analysis of Random.org - an overview of the statistical analyses used to evaluate the random numbers generated by the website, www.random.org
* Random Number Generation Software - a NIST-funded project that provides a discussion on tests that can be used against random number generators, as well as a free software package for running said tests.
What I would do is to first take the samples one by one, and check whether the distribution is uniform (you assign some value depending on how far the distribution is from uniform; the way you calculate this value depends on your application). I would then take the samples two by two and do the same thing as above, and then three by three and so on. With proper weighting, your second sequence will e.g. be flagged as "not random" with this technique. This is also what Neil's professor (in one of the answers to this question) is doing to see whether his/her students' sequences are really random or human-generated.
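As a concrete illustration of the first two steps (my sketch, not part of the original answer; it assumes NumPy and SciPy are available and uses synthetic stand-in data), a $\chi^2$ goodness-of-fit test compares the observed face counts, and then the counts of non-overlapping consecutive pairs, against the uniform expectation:

```python
import numpy as np
from scipy.stats import chisquare

rolls = np.random.randint(1, 7, size=1200)  # stand-in for the observed dice data

# Step 1: test single outcomes against a uniform distribution over {1,...,6}.
counts = np.bincount(rolls, minlength=7)[1:]
stat1, p_single = chisquare(counts)  # expected frequencies default to uniform

# Step 2: test non-overlapping pairs against uniformity over the 36 pair outcomes.
r = rolls.reshape(-1, 2)
pairs = (r[:, 0] - 1) * 6 + (r[:, 1] - 1)
pair_counts = np.bincount(pairs, minlength=36)
stat2, p_pairs = chisquare(pair_counts)

print(p_single, p_pairs)  # small p-values flag the data as suspiciously non-uniform
```

A repeating sequence like 1, 2, 3, 4, 5, 6, 1, 2, ... passes the single-outcome test but fails the pair test, which is exactly the point of taking the samples two by two.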
{ "language": "en", "url": "https://math.stackexchange.com/questions/439776", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 6, "answer_id": 4 }
Evaluate the integral $\int^{\frac{\pi}{2}}_0 \frac{\sin^3x}{\sin^3x+\cos^3x}\,\mathrm dx$. Evaluate the integral $$\int^{\frac{\pi}{2}}_0 \frac{\sin^3x}{\sin^3x+\cos^3x}\, \mathrm dx.$$ How can I evaluate this one? I didn't find any clever substitution, and integration by parts doesn't lead anywhere (I think). Any guidelines please?
Symmetry! This is the same as the integral with $\cos^3 x$ on top. If that is not obvious from the geometry, make the change of variable $u=\pi/2-x$. Add them, you get the integral of $1$. So our integral is $\pi/4$.
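Spelled out (the step the answer compresses): with $u=\pi/2-x$, $$I=\int_0^{\pi/2}\frac{\sin^3 x}{\sin^3 x+\cos^3 x}\,dx=\int_0^{\pi/2}\frac{\cos^3 u}{\cos^3 u+\sin^3 u}\,du,$$ so adding the two forms gives $$2I=\int_0^{\pi/2}\frac{\sin^3 x+\cos^3 x}{\sin^3 x+\cos^3 x}\,dx=\frac{\pi}{2},\qquad\text{hence } I=\frac{\pi}{4}.$$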
{ "language": "en", "url": "https://math.stackexchange.com/questions/439851", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "21", "answer_count": 3, "answer_id": 2 }
Can someone explain the intuition behind this moment generating function identity? If $X_i \sim N(\mu, \sigma^2) $, we know that: $\bar{X} \sim N(\mu, \sigma^2 /n)$. But why does: $$\exp\left({\sigma^{2}\over 2}\sum_{i=1}^{n}(t_{i}-\bar{t})^{2}\right)= M_{X_{1}-\bar{X},X_{2}-\bar{X},...,X_{n}-\bar{X}}(t_1,t_2,...,t_n)$$ Where $M$ is the moment generating function? I have three pages of scratch work but it would be incredibly tedious to post that here, and I already know it's true... Thanks!
This identity relies on the fact that $$\sum_{i=1}^nt_iX_i-\sum_{i=1}^nt_i\bar X=\sum_{j=1}^n(t_j-\bar t)X_j.$$
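Carrying the computation through (a sketch of the remaining steps): by independence and the normal MGF $\mathbb E[e^{sX_i}]=e^{\mu s+\sigma^2 s^2/2}$, $$M_{X_1-\bar X,\dots,X_n-\bar X}(t_1,\dots,t_n)=\mathbb E\left[e^{\sum_i (t_i-\bar t)X_i}\right]=\prod_{i=1}^n e^{\mu(t_i-\bar t)+\frac{\sigma^2}{2}(t_i-\bar t)^2}=\exp\left(\frac{\sigma^{2}}{2}\sum_{i=1}^{n}(t_{i}-\bar{t})^{2}\right),$$ where the mean term vanishes because $\sum_{i=1}^n(t_i-\bar t)=0$.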
{ "language": "en", "url": "https://math.stackexchange.com/questions/439939", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
covering space of $2$-genus surface I'm trying to build a $2:1$ covering space of the genus-$2$ surface by the genus-$3$ surface. I can see that if I take a cut of the genus-$3$ surface in the middle (along the middle hole) I get $2$ surfaces, each of which looks like a genus-$2$ surface open on one side, so it's clear how to make the projection from these $2$ copies to the genus-$2$ surface. My question is how I can see this process in the polygonal representation of the genus-$3$ surface ($12$ edges: $a_{1}b_{1}a_{1}^{-1}b_{1}^{-1}\dots a_{3}b_{3}a_{3}^{-1}b_{3}^{-1}$). I can't visualize the cut I make in the polygon. Thanks.
Take the dodecagon at the origin with one pair of edges intersecting the $y$-axis (call them the top and bottom faces) and one pair intersecting the $x$-axis. Cut the polygon along the $x$ axis, and un-identify the left and right faces. This gives two octagons, each with an opposing pair of unmatched edges. Identify these new edges, and you should have what you're looking for.
{ "language": "en", "url": "https://math.stackexchange.com/questions/439989", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "13", "answer_count": 1, "answer_id": 0 }
Linear dependence of multivariable functions It is well known that the Wronskian is a great tool for checking the linear dependence between a set of functions of one variable. Is there a similar way of checking linear dependance between two functions of two variables (e.g. $P(x,y),Q(x,y)$)? Thanks.
For checking linear dependence between two functions of two variables we can use the following theorem, given in Green, G. M., Trans. Amer. Math. Soc. 17 (1916), 483-516. Theorem: Let $y_{1}$ and $y_{2}$ be functions of two independent variables $x_{1}$ and $x_{2}$, i.e., $y_{1} = y_{1}(x_{1},x_{2})$ and $y_{2} = y_{2}(x_{1},x_{2})$, for which all partial derivatives of first order, $\frac{\partial y_{1}}{\partial x_{k}}$, $\frac{\partial y_{2}}{\partial x_{k}}$, $(k = 1,2)$, exist throughout the region $A$. Suppose, further, that one of the functions, say $y_{1}$, vanishes at no point of $A$. Then if all the two-rowed determinants of the matrix \begin{pmatrix} y_{1} & y_{2} \\ \frac{\partial y_{1}}{\partial x_{1}} & \frac{\partial y_{2}}{\partial x_{1}} \\ \frac{\partial y_{1}}{\partial x_{2}} & \frac{\partial y_{2}}{\partial x_{2}} \end{pmatrix} vanish identically in $A$, then $y_{1}$ and $y_{2}$ are linearly dependent in $A$, and in fact $y_{2}=c y_{1}$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/440054", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Analysis of Differentiable Functions Suppose that $f : \Bbb{R} \to \Bbb{R}$ is a function such that $|f(x)-f(y)| \le |x-y|^2$ for all $x$ and $y$. Show that $f(x) = C$ for some constant $C$. Hint: Show that $f$ is differentiable at all points and compute the derivative. I am confused as to what to use as the function in order to show that $f$ is differentiable at all points.
Hint: Let $y=x+h$; then you have $$\left|\frac{f(x+h)-f(x)}{h}\right|\le |h|,$$ so that as $h \to 0$ you get that $f$ is differentiable. Maybe now you can use differentiability of $f$ to finish. Actually, once you know it's differentiable, the same inequality above shows the derivative is $0$, so there is not really more work to do, just explain it in your write-up.
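(To make the last step explicit: if $f'\equiv 0$ on $\Bbb R$, the Mean Value Theorem gives $f(x)-f(y)=f'(\xi)(x-y)=0$ for all $x,y$, so $f$ is indeed a constant $C$.)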
{ "language": "en", "url": "https://math.stackexchange.com/questions/440142", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
"IFF" (if and only if) vs. "TFAE" (the following are equivalent) If $P$ and $Q$ are statements, $P \iff Q$ and The following are equivalent: $(\text{i}) \ P$ $(\text{ii}) \ Q$ Is there a difference between the two? I ask because formulations of certain theorems (such as Heine-Borel) use the latter, while others use the former. Is it simply out of convention or "etiquette" that one formulation is preferred? Or is there something deeper? Thanks!
"TFAE" is appropriate when one is listing optional replacements for some theory. For example, you could list dozen replacements for the statements, such as replacements for the fifth postulate in euclidean geometry. "IFF" is one of the implications of "TFAE", although it as $P \rightarrow Q \rightarrow R \rightarrow P $, which equates to an iff relation.
{ "language": "en", "url": "https://math.stackexchange.com/questions/440211", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "21", "answer_count": 3, "answer_id": 2 }
What does $a\equiv b\pmod n$ mean? What does the $\equiv$ and $b\pmod n$ mean? For example, what does the following equation mean: $5x \equiv 7\pmod {24}$? Tomorrow I have a final exam so I really have to know what it means.
Let $a=qn+r_{1}$ and $b=pn+r_{2}$, where $0\leq r_{1},r_{2}<n$; here $r_{1}$ and $r_{2}$ are the remainders when $a$ and $b$ are divided by $n$. Then $a\equiv b\pmod n$ means precisely that $$r_{1}=r_{2}.$$
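Applied to the example in the question (my worked illustration): $5x \equiv 7 \pmod{24}$ asks for those $x$ for which $5x$ and $7$ leave the same remainder on division by $24$. Since $5\cdot 5=25\equiv 1\pmod{24}$, multiplying both sides by $5$ gives $x\equiv 35\equiv 11\pmod{24}$; indeed $5\cdot 11=55=2\cdot 24+7$.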
{ "language": "en", "url": "https://math.stackexchange.com/questions/440299", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 7, "answer_id": 2 }
Constant growth rate? Say the population of a city is increasing at a constant rate of 11.5% per year. If the population is currently 2000, estimate how long it will take for the population to reach 3000. Using the formula given, so far I've figured out how many years it will take (see working below) but how can I narrow it down to the nearest month?
Let $a=1.115^{1/12}=\sqrt[12]{1.115}$, the twelfth root of $1.115$. Then $$1.115^x=(a^{12})^x=a^{12x}\;,$$ and $12x$ is the number of months that have gone by. Thus, if you can solve $a^y=1.5$, $y$ will be the desired number of months. Without logarithms the best that you’ll be able to do is find the smallest integer $y$ such that $a^y\ge 1.5$. By my calculation $a\approx1.009112468437$. You could start with $a^{36}$ and work up until you find the desired $y$.
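A short script along these lines (my sketch; the threshold $1.5$ is $3000/2000$) finds the smallest whole number of months:

```python
a = 1.115 ** (1 / 12)  # monthly growth factor, the twelfth root of 1.115

months = 0
population = 2000.0
while population < 3000.0:
    population *= a
    months += 1

print(months)  # 45 months, i.e. 3 years and 9 months
```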
{ "language": "en", "url": "https://math.stackexchange.com/questions/440387", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
How to determine whether an isomorphism $\varphi: {U_{12}} \to U_5$ exists? I have 2 groups $U_5$ and $U_{12}$: $U_5 = \{1,2,3,4\}$, $U_{12} = \{1,5,7,11\}$. I have to determine whether an isomorphism $\varphi: {U_{12}} \to U_5$ exists. I started with the "$yes$" case: there is an isomorphism. So I searched for an isomorphism $\varphi$, but I didn't find one. So I guess there is no isomorphism $\varphi$. How can I prove it, or at least explain it? Please help.
Note that $x^2\equiv 1\pmod {12}$ for all elements of $U_{12}$ whereas the corresponding property does not hold in $U_5$.
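Spelled out (a verification of the hint): in $U_{12}$ we have $1^2=1$, $5^2=25\equiv 1$, $7^2=49\equiv 1$ and $11^2=121\equiv 1 \pmod{12}$, so every element squares to the identity; but in $U_5$, $2^2=4\not\equiv 1\pmod 5$ (in fact $2$ has order $4$, so $U_5$ is cyclic while $U_{12}$ is not). Since an isomorphism preserves the property $x^2=1$, none exists.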
{ "language": "en", "url": "https://math.stackexchange.com/questions/440464", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 1 }
For what integers $n$ does $\phi(2n) = \phi(n)$? For what integers $n$ does $\phi(2n) = \phi(n)$? Could anyone help me start this problem off? I'm new to elementary number theory and such, and I can't really get a grasp of the totient function. I know that $$\phi(n) = n\left(1-\frac1{p_1}\right)\left(1-\frac1{p_2}\right)\cdots\left(1-\dfrac1{p_k}\right)$$ but I don't know how to apply this to the problem. I also know that $$\phi(n) = (p_1^{a_1} - p_1^{a_1-1})(p_2^{a_2} - p_2^{a_2 - 1})\cdots$$ Help
Hint: You may also prove in general that $$\varphi(mn)=\frac{d\varphi(m)\varphi(n)}{\varphi(d)}$$ where $d=\gcd(m,n).$
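Carrying the hint through with $m=2$ (my completion): if $n$ is odd then $d=\gcd(2,n)=1$, so $\varphi(2n)=\varphi(2)\varphi(n)=\varphi(n)$ since $\varphi(2)=1$; if $n$ is even then $d=2$ and $\varphi(2n)=\frac{2\varphi(2)\varphi(n)}{\varphi(2)}=2\varphi(n)\neq\varphi(n)$. Hence $\varphi(2n)=\varphi(n)$ exactly for odd $n$.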
{ "language": "en", "url": "https://math.stackexchange.com/questions/440557", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 1 }
Higher Moments of Sums of Independent Random Variables Let $X_1, \dots, X_n$ be independent random variables taking values $\{-1,1\}$ with equal probability 1/2. Let $S_n = \sum X_i$. Is there a closed form expression for $E[(S_n)^{2j}]$? If not a closed form expression, then can we hope to get a nice tight upper bound? I am leaving "tight" unspecified here because I do not know myself how tight I want the bound to be, so please tell me about any non-trivial bounds.
The random variable $S_n^{2j}$ takes the value $(2k-n)^{2j}$ with probability $\binom nk\frac 1{2^n}$, hence $$\mathbb E\left[S_n^{2j}\right]=\sum_{k=0}^n\binom nk(2k-n)^{2j}.$$ It involves computations of terms of the form $\sum_{k=0}^n\binom nk k^p$, $p\in\Bbb N$.
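As for non-trivial bounds (a standard Gaussian comparison, not part of the original answer): expanding $\mathbb E[S_n^{2j}]$ multinomially, every mixed moment $\mathbb E[X_1^{a_1}\cdots X_n^{a_n}]$ is $0$ or $1$ and is dominated by the corresponding moment of i.i.d. standard normals, so $$\mathbb E\left[S_n^{2j}\right]\le \mathbb E\left[G^{2j}\right]=(2j-1)!!\;n^{j},\qquad G\sim N(0,n),$$ which is tight up to a constant depending only on $j$.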
{ "language": "en", "url": "https://math.stackexchange.com/questions/440763", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Asking for a good starting tutorial on differential geometry for engineering background student. I just jumped into a project related to an estimation algorithm. It needs to build measures between two distributions. I found a lot of papers in this field require a general idea of differential geometry, which is like a whole new area for me as a linear algebra guy. I did try to study by following wiki links and looking up the terms, and I am starting to understand some of them, but I found this way of studying is not really good for me, because it is hard to connect these concepts. For example, I know the meaning of concepts like manifold, tangent space, exponential map, etc., but I lack the understanding of why they are defined in this way and how they are connected. I am willing to put in as much effort as needed, but my project is due soon, so I would like to have the set of minimum concepts I need to learn in order to have some feeling for this field. So in short, I really want to know if there is any good reference for a beginner like me with an engineering background. Also, since my background is mainly in linear algebra and statistics, do I have to go through all the material in geometry and topology? I really appreciate your help. Follow-up: I checked the books suggested by the answerers below; they are all very helpful. In particular I found Introduction to Topological Manifolds, suggested by kjetil, very good for me. Also I found http://www.youtube.com/user/ThoughtSpaceZero is a good complementary (easier) resource that can be helpful for checking the basic meanings.
I suggest you read Lee's introduction to topological manifolds followed by his introduction to smooth manifolds.
{ "language": "en", "url": "https://math.stackexchange.com/questions/440824", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 2 }
What justifies assuming that a level surface contains a differentiable curve? My textbook's proof that the Lagrange multiplier method is valid begins: Let $X(t)$ be a differentiable curve on the surface $S$ passing through $P$ Where $S$ is the level surface defining the constraint, and $P$ is an extremum of the function that we're seeking to optimize. But how do we know that such a curve exists? $S$ is specifically defined as the set of points in the (open) domain of the continuously differentiable function $g$ with $g(X) = 0$ but $\operatorname{grad}g(X)\ne0$. The function $f$ that we're seeking to optimize is assumed to be continuously differentiable and defined on the same open domain as $g$, and $P$ is an extremum of $f$ on $S$.
By the Implicit Function Theorem, near $P$ you can represent your level surface as a graph, say $z=\phi(x,y)$, where $\phi$ is continuously differentiable (after relabeling coordinates so that $\partial g/\partial z\neq 0$ at $P$, which is possible since $\operatorname{grad}g(P)\neq 0$). If $P=(a,b,\phi(a,b))$, take any line through $(a,b)$ and you get a nice curve.
{ "language": "en", "url": "https://math.stackexchange.com/questions/440899", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
What does it exactly mean for a subspace to be dense? My understanding of rationals being dense in the real numbers: I know when we say the rationals are dense in the reals, it is because between any two real numbers we can find a rational number. In other words, we can approximate irrational numbers using rationals. I think a more precise definition would be that any open ball around an irrational number will contain a rational number. If what I said is correct, I am trying to think about what it means for $C[a,b]$ (the continuous complex-valued functions on $[a,b]$) to be a dense subspace of $L^2[a,b]$. From what I said above, I want to say that all functions in $L^2[a,b]$ can be approximated by functions from $C[a,b]$. Is the intuition correct here, and what would the precise definition be in this case?
In general topological spaces, a dense set is one whose intersection with any nonempty open set is nonempty. For metric spaces, since we have a topological base of open balls, this is equivalent to every point in the space being arbitrarily close, with respect to the metric, to a point in the dense set. Note that $L^2$ is a metric space, where $d(f,g) = \|f-g\|_2$, with $\|\cdot\|_2$ being the $L^2$ norm.
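Unpacked for the case in the question (just a restatement, for concreteness): "$C[a,b]$ is dense in $L^2[a,b]$" means that for every $f\in L^2[a,b]$ and every $\varepsilon>0$ there is a continuous $g$ with $$\|f-g\|_2=\left(\int_a^b |f-g|^2\right)^{1/2}<\varepsilon,$$ i.e., every square-integrable function can be approximated in the $L^2$ distance by continuous functions, exactly as the asker's intuition suggests.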
{ "language": "en", "url": "https://math.stackexchange.com/questions/440971", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 3, "answer_id": 1 }
Must certain rings be isomorphic to $\mathbb{Z}[\sqrt{a}]$ for some $a$ Consider the group $(\mathbb{Z}\times\mathbb{Z},+)$, where $(a,b)+(c,d)=(a+c,b+d)$. Let $\times$ be any binary operation on $\mathbb{Z}\times\mathbb{Z}$ such that $(\mathbb{Z}\times\mathbb{Z},+,\times)$ is a ring. Must there exist a non-square integer "$a$" such that $$(\mathbb{Z}\times\mathbb{Z},+,\times)\cong\mathbb{Z}[\sqrt{a}]?$$ Thank you. Edit: Chris Eagle noted that setting $x\times y=0$ for all $x,y\in\mathbb{Z}\times\mathbb{Z}$ would provide a counterexample. I would like to see other counterexamples though.
Probably the most natural counterexample is the following: If the operation $\times$ is defined such that the resulting ring is simply product of two copies of the usual ring $(\mathbb{Z},+,\times)$ (that is, if we set $(a,b)\times(c,d)=(ac,bd)$), then, again, no isomorphism exists, since the resulting ring $\mathbb{Z}\times \mathbb{Z}$ is not an integral domain and $\mathbb{Z}[\sqrt{a}]$ is.
{ "language": "en", "url": "https://math.stackexchange.com/questions/441110", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
How to check if three coordinates form a line Assume I have three coordinates from a world map in Longitude + Latitude. Is there a way to determine these three coordinates form a straight line? What if I was using a system with bounds that defines the 2 corners (northeast - southwest) in Long/Lat? The long & lat are expressed in decimal degrees.
I'll assume that by "line" you mean "great circle" -- that is, if you want to go from A to C via the shortest possible route, then keep going straight until you circle the globe and get back to A, you'll pass B on the way. The best coordinates for the question to be in are cartesian -- the 3D vector from the center of the earth to the point on the map. (Lat/long are two thirds of a set of "spherical coordinates", and can be converted to cartesian coordinates given the radius of the earth as $r$, though for this question $r$ doesn't matter). Once you have those points (each one being a 3D vector), find the plane containing them, and check that it includes the center of the earth. Or, as a shortcut, just mark the points on a Gnomonic projection map, and use a ruler to see if they form a straight line there. Unfortunately, a given gnomonic projection map can't include the whole world.
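A sketch of the coplanarity check described above (my illustration; the tolerance and function names are arbitrary choices of mine): convert each lat/long pair to a unit vector from the earth's center, then test whether the scalar triple product of the three vectors is near zero, i.e. whether the plane through them passes through the origin.

```python
import numpy as np

def to_unit_vector(lat_deg: float, lon_deg: float) -> np.ndarray:
    """Unit vector from the earth's center to a lat/long point (r cancels out)."""
    lat, lon = np.radians(lat_deg), np.radians(lon_deg)
    return np.array([np.cos(lat) * np.cos(lon),
                     np.cos(lat) * np.sin(lon),
                     np.sin(lat)])

def on_great_circle(p1, p2, p3, tol=1e-9) -> bool:
    """True if three (lat, long) points lie on one great circle."""
    u, v, w = (to_unit_vector(*p) for p in (p1, p2, p3))
    # Triple product = volume of the parallelepiped; zero iff coplanar with origin.
    return abs(np.dot(u, np.cross(v, w))) < tol

# Example: three points on the equator, which is a great circle.
print(on_great_circle((0, 0), (0, 45), (0, 120)))  # True
```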
{ "language": "en", "url": "https://math.stackexchange.com/questions/441182", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
When factoring polynomials does not result in repeated factors I found the following statement in the book introduction to finite fields and their applications: Let $x^n-1 = f_1(x)f_2(x)\dots f_m(x)$ be the decomposition of $x^n-1$ into monic irreducible factors over $\mathbb{F}_q$. If $\text{GCD}(n,q)=1$, then there are no repeated factors; i.e., polynomials $f_1, f_2, \ldots, f_m$ are all distinct. Firstly, please indicate why this statement holds. Secondly, are there similar theorems for polynomials other than $x^n-1$?
If $f(x)=g(x)^2h(x)$ then by the product rule of polynomial derivatives: $$f'(x)=2g(x)g'(x)h(x)+g(x)^2h'(x) =g(x)\left(2g'(x)h(x)+g(x)h'(x)\right)$$ So when $f(x)$ has a repeated factor, $f'(x)$ has a common factor with $f(x)$.
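Applying this to $f(x)=x^n-1$ (completing the argument sketched above): here $f'(x)=nx^{n-1}$, and when $\gcd(n,q)=1$ the coefficient $n$ is nonzero in $\mathbb{F}_q$, so $\gcd(x^n-1,\,nx^{n-1})=1$ because $x$ does not divide $x^n-1$. Hence $x^n-1$ shares no factor with its derivative and has no repeated factors. The same criterion works for arbitrary polynomials: $f$ is squarefree over $\mathbb{F}_q$ whenever $\gcd(f,f')=1$, which answers the second question.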
{ "language": "en", "url": "https://math.stackexchange.com/questions/441249", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
How to prove uniform distribution of $m\oplus k$ if $k$ is uniformly distributed? All values $m, k, c$ are $n$-bit strings. $\oplus$ stands for the bitwise modulo-2 addition. How to prove uniform distribution of $c=m\oplus k$ if $k$ is uniformly distributed? $m$ may be of any distribution and statistically independent of $k$. For example, suppose $m$ is an $n$-bit string that is with probability $p=1$ always '1111...111'. Adding it bitwise to a random $n$-bit string $k$ which is uniformly distributed makes the result also uniformly distributed. Why?
This is not true without the independence assumption. For example, if $m = k$ (so that $m$ and $k$ are as dependent as possible), then $c=m\oplus k$ is the all-zero string with probability $1$, hence not uniformly distributed.
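Under the stated independence assumption the claim does hold (a sketch of the standard argument): for fixed $v$ and $m$, the map $k\mapsto m\oplus k$ is a bijection of $\{0,1\}^n$, so $$\Pr(c=v)=\sum_{m}\Pr(M=m)\Pr(K=m\oplus v)=2^{-n}\sum_{m}\Pr(M=m)=2^{-n}.$$ This is exactly the perfect-secrecy property of the one-time pad.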
{ "language": "en", "url": "https://math.stackexchange.com/questions/441329", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 3, "answer_id": 1 }
Why does $\lim_{n \to \infty} \sqrt[n]{(-1)^n \cdot n^2 + 1} = 1$? As the title suggests, I want to know why the following sequence converges to 1 as $n \to \infty$: $$ \lim_{n \to \infty} \sqrt[n]{(-1)^n \cdot n^2 + 1} = 1 $$ For even $n$'s only $n^2+1$ has to be shown, which I did in the following way: $$\sqrt[n]{n^2} \le \sqrt[n]{n^2 + 1} \le \sqrt[n]{n^3}$$ Assuming we have already proven that $\lim_{n \to \infty}\sqrt[n]{n^k} = 1$, we can conclude by squeezing that $$\lim_{n \to \infty} \sqrt[n]{n^2+1} = 1.$$ For odd $n$'s I can't find the solution. I tried going the same route as for even $n$'s: $$\sqrt[n]{-n^2} \le \sqrt[n]{-n^2 + 1} \le \sqrt[n]{-n^3}$$ And it seems that it comes down to $$\lim_{n \to \infty} \sqrt[n]{-n^k}$$ I checked the limit using both Wolfram Alpha and a CAS and it converges to 1. Why is that?
It's common for CAS's like Wolfram Alpha to take $n$th roots that are complex numbers with the smallest angle measured counterclockwise from the positive real axis. So the $n$th root of negative real numbers winds up being in the first quadrant of the complex plane. As $n\to\infty$, this $n$th root would get closer to the real axis and explain why WA says the limit is 1. CAS's do this for continuity reasons; so that $\sqrt[n]{-2}$ will be close to $\sqrt[n]{-2+\varepsilon\,i}$. Instead of $\sqrt[n]{x}$, you can get around the issue with $\operatorname{sg}(x)\cdot\sqrt[n]{|x|}$ where $\operatorname{sg}(x)$ is the signum function: $1$ for positive $x$ and $-1$ for negative $x$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/441395", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 3, "answer_id": 1 }
Explaining the physical meaning of an eigenvalue in a real world problem Contextual Problem A PhD student in Applied Mathematics is defending his dissertation and needs to make a 10-gallon keg consisting of vodka and beer to placate his thesis committee. Suppose that all committee members, being stubborn people, refuse to sign his dissertation paperwork until the next day. Since all committee members will be driving home immediately after his defense, he wants to make sure that they all drive home safely. To do so, he must ensure that his mixture doesn't contain too much alcohol in it! Therefore, his goal is to make a 10-gallon mixture of vodka and beer such that the total alcohol content of the mixture is only $12$ percent. Suppose that beer has $8\%$ alcohol while vodka has $40\%$. If $x$ is the volume of beer and $y$ is the volume of vodka needed, then clearly the system of equations is \begin{equation} x+y=10 \\ 0.08 x +0.4 y = 0.12\times 10 \end{equation} My Question The eigenvalues and eigenvectors of the corresponding matrix \begin{equation} \left[ \begin{array}{cc} 1 & 1\\ 0.08 & 0.4 \end{array} \right] \end{equation} are \begin{align} \lambda_1\approx 1.1123 \\ \lambda_2\approx 0.2877 \\ v_1\approx\left[\begin{array}{c} 0.9938 \\ 0.1116 \end{array} \right] \\ v_2\approx\left[\begin{array}{c} -0.8145 \\ 0.5802 \end{array} \right] \end{align} How do I interpret their physical meaning in the context of this particular problem?
An interpretation of eigenvalues and eigenvectors of this matrix makes little sense because it is not in a natural fashion an endomorphism of a vector space: on the "input" side you have (liters of vodka, liters of beer) and on the "output" side (liters of liquid, liters of alcohol). For example, nothing speaks against switching the order of beer and vodka (or of liquid and alcohol), which would result in totally different eigenvalues.
{ "language": "en", "url": "https://math.stackexchange.com/questions/441448", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11", "answer_count": 5, "answer_id": 0 }
Expression for the Maurer-Cartan form of a matrix group I understand the definition of the Maurer-Cartan form on a general Lie group $G$, defined as $\theta_g = (L_{g^{-1}})_*:T_gG \rightarrow T_eG=\mathfrak{g}$. What I don't understand is the expression $\theta_g=g^{-1}dg$ when $G$ is a matrix group. In particular, I'm not sure how I'm supposed to interpret $dg$. It seemed to me that, in this concrete case, I should take a matrix $A\in T_gG$ and a curve $\sigma$ such that $\dot{\sigma}(0)=A$, and compute $\theta_g(A)=(\frac{d}{dt}g^{-1}\sigma(t))\big|_{t=0}=g^{-1}A$ since $g$ is constant. So it looks like $\theta_g$ is just plain old left matrix multiplication by $g^{-1}$. Is this correct? If so, how does it connect to the expression above?
This notation is akin to writing $d\vec x$ on $\mathbb R^n$. Think of $\vec x\colon\mathbb R^n\to\mathbb R^n$ as the identity map and so $d\vec x = \sum\limits_{j=1}^n \theta^j e_j$ is an expression for the identity map as a tensor of type $(1,1)$ [here $\theta^j$ are the dual basis to the basis $e_j$]. In the Lie group setting, one is thinking of $g\colon G\to G$ as the identity map, and $dg_a\colon T_aG\to T_aG$ is of course the identity. Since $(L_g)_* = L_g$ on matrices (as you observed), for $A\in T_aG$, $(g^{-1}dg)_a(A) = a^{-1}A = L_{a^{-1}*}dg_a(A)\in\frak g$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/441496", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "27", "answer_count": 2, "answer_id": 0 }
Mean value theorem application for multivariable functions Define the function $f\colon \Bbb R^3\to \Bbb R$ by $$f(x,y,z)=xyz+x^2+y^2$$ The Mean Value Theorem implies that there is a number $\theta$ with $0<\theta <1$ for which $$f(1,1,1)-f(0,0,0)=\frac{\partial f}{\partial x}(\theta, \theta, \theta)+\frac{\partial f}{\partial y}(\theta, \theta, \theta)+\frac{\partial f}{\partial z}(\theta, \theta, \theta)$$ This is the last question on my set, and I don't have any ideas; sorry for not showing any work. How can we apply the MVT to this question?
Hint: Consider $g(t):=f(t,t,t)$. What is $g'(t)$?
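Unwinding the hint (a sketch): $g(t)=f(t,t,t)$ is differentiable, and by the chain rule $$g'(t)=\frac{\partial f}{\partial x}(t,t,t)+\frac{\partial f}{\partial y}(t,t,t)+\frac{\partial f}{\partial z}(t,t,t).$$ The one-variable Mean Value Theorem applied to $g$ on $[0,1]$ gives a $\theta\in(0,1)$ with $g(1)-g(0)=g'(\theta)$, which is exactly the displayed identity since $g(1)=f(1,1,1)$ and $g(0)=f(0,0,0)$.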
{ "language": "en", "url": "https://math.stackexchange.com/questions/441564", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 1 }
Orthogonal Subspaces I am reading about orthogonality of subspaces and ran into confusion reading this part: Suppose S is a six-dimensional subspace of nine-dimensional space $\mathbb R^9$. a) What are the possible dimensions of a subspace orthogonal to $S$? Answer: Subspaces orthogonal to S can have dimensions $0,1,2,3.$ b) What are the possible dimensions of the orthogonal complement $S^{\perp}$ of $S$? Answer: The complement $S^{\perp}$ is the largest orthogonal subspace, with dim $3$. Where I am having trouble is understanding how these answers follow from the question. In other words, for a), how can the dimensions be $0,1,2,3$? But maybe I am not understanding the question. Any assistance with helping me understand the answer would be appreciated.
Take a basis of $S$ and put the vectors in a matrix (as rows). Since $S$ has dim=6, there are 6 linearly independent vectors in $S$, so the matrix will have size 6x9. Now, the rank of this matrix is $6$, and the orthogonal complement $S^{\perp}$ is exactly the null space of the matrix. So, by the rank-nullity theorem, the dimension of the orthogonal complement is $9-6=3$. Now, subspaces orthogonal to $S$ consist of vectors that belong to the null space of the above matrix, i.e., they are subspaces of that null space (any subspace of it will do). These subspaces can only have dimension $0,1,2$ or $3$. Hope this helps.
{ "language": "en", "url": "https://math.stackexchange.com/questions/441640", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 5, "answer_id": 1 }
Recursion Question - Trying to understand the concept Just trying to grasp this concept and was hoping someone could help me a bit. I am taking a discrete math class. Can someone please explain this equation to me a bit? $f(0) = 3$ $f(n+1) = 2f(n) + 3$ $f(1) = 2f(0) + 3 = 2 \cdot 3 + 3 = 9$ $f(2) = 2f(1) + 3 = 2 \cdot 9 + 3 = 21$ $f(3) = 2f(2) + 3 = 2 \cdot 21 + 3 = 45$ $f(4) = 2f(3) + 3 = 2 \cdot 45 + 3 = 93$ I do not see how they get the numbers to the right of the equals sign. Please someone show me how $f(2) = 2f(1) + 3 = 2 \cdot 9 + 3$. I see they get "$2\cdot$" because of $2f$ but how and where does the $9$ come from? I also see why the $+3$ at the end of each equation but how and where does that number in the middle come from?
Perhaps by considering a different sequence this may become clearer: $$f(0)=0$$ $$f(n+1)=f(n)+1$$ therefore $$\begin{align} f(1)&=f(0)+1=0+1=1\\ f(2)&=f(1)+1=(f(0)+1)+1=(0+1)+1=2\\ f(3)&=f(2)+1=(f(1)+1)+1=((f(0)+1)+1)+1=((0+1)+1)+1=3\\ \end{align}$$ So this will generate all the natural numbers. The same unwinding applies to your sequence: to evaluate $f(2)=2f(1)+3$, you substitute the value $f(1)=9$ computed on the previous line, giving $2\cdot 9+3=21$.
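If it helps to experiment, the recurrence from the question translates directly into code (my illustration):

```python
def f(n: int) -> int:
    """f(0) = 3, f(n+1) = 2*f(n) + 3, computed by unwinding the recursion."""
    if n == 0:
        return 3
    return 2 * f(n - 1) + 3

print([f(n) for n in range(5)])  # [3, 9, 21, 45, 93]
```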
{ "language": "en", "url": "https://math.stackexchange.com/questions/441718", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
Prove that every irreducible cubic monic polynomial over $\mathbb F_{5}$ has the form $P_{t}(x)=(x-t_{1})(x-t_{2})(x-t_{3})+t_{0}(x-t_{4})(x-t_{5})$? For a parameter $t=(t_{0},t_{1},t_{2},t_{3},t_{4},t_{5})\in\mathbb F_{5}^{6}$ with $t_{0}\ne 0$ and $t_{1},\dots,t_{5}$ an ordering of the elements of $\mathbb F_{5}$ (that is, $t_1,\dots,t_5$ is a permutation of $0,\dots,4$, at least as I understand it), define a polynomial $$P_{t}(x)=(x-t_{1})(x-t_{2})(x-t_{3})+t_{0}(x-t_{4})(x-t_{5}).$$
1. Show that $P_{t}(x)$ is irreducible in $\mathbb F_{5}[x]$.
2. Prove that two parameters $t,t'$ give the same polynomial over $\mathbb F_5$ if and only if $t_{0}=t_{0}'$ and $\{t_{4},t_{5}\}=\{t_{4}',t_{5}'\}$.
3. Show that every irreducible cubic monic polynomial over $\mathbb F_{5}$ is obtained in this way.
After trying $x,x-1,x-2,x-3,x-4$ the first question can be solved. But I have no idea where to start with the remaining two. Expanding the factors seems to fail for proving that two such polynomials are equal to each other.
Hints:
1. A cubic is reducible only if it has a linear factor. But then it should have a zero in $\mathbb{F}_5$, so it suffices to check that none of $t_1,t_2,t_3,t_4,t_5$ is a zero of $P_t(x)$.
2. This part is tricky. I would go about it as follows. Let $t$ and $t'$ be two vectors of parameters. Consider the difference $$ Q_{t,t'}(x)=P_t(x)-(x-t'_1)(x-t'_2)(x-t'_3). $$ It is a quadratic. Show that if $\{t_1,t_2,t_3\}=\{t'_1,t'_2,t'_3\}$ then $Q_{t,t'}$ has two zeros in $\mathbb{F}_5$, but otherwise it has one or none. This allows you to make progress.
3. Count them! The irreducible cubics are exactly the minimal polynomials of those elements of the finite field $L=\mathbb{F}_{125}$ that don't belong to the prime field. The number of such elements is $125-5=120$. Each cubic has three zeros in $L$ (it's Galois over the prime field), so there are a total of 40 irreducible cubic polynomials over $\mathbb{F}_5$. How many distinct polynomials $P_t(x)$ are there?
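(Carrying out the count suggested in the last hint: by part 2, $P_t$ depends only on $t_0$ and the unordered pair $\{t_4,t_5\}$, giving $4\cdot\binom{5}{2}=4\cdot 10=40$ distinct polynomials $P_t(x)$, exactly the number of irreducible monic cubics; so once the $P_t$ are known to be irreducible and pairwise distinct, every irreducible monic cubic arises this way.)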
{ "language": "en", "url": "https://math.stackexchange.com/questions/441878", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Homeomorphism from the interior of a unit disk to the punctured unit sphere I need help constructing a homeomorphism from the interior of the unit disk, $\{(x,y)\mid x^2+y^2<1\}$, to the punctured unit sphere, $\{(x,y,z)\mid x^2+y^2+z^2 = 1\} - \{(0,0,1)\}$. I was thinking you could take a line passing through $(0,0,1)$ and a point in the disk and send that point to the part of the sphere the line passes through, but this function wouldn't cover the top half of the sphere.
Stereographic projection, as mentioned in the other answers, usually comes up in this context. Indeed, it is a beautiful way of identifying the punctured sphere with $\mathbb{R}^2$, in a conformal manner (preserving angles). However, if you only care about finding a homeomorphism to the disk (and there is no way to make this conformal anyway), then perhaps it is easier to just define the homeomorphism from the punctured sphere to the unit disk by $$ \phi(x,y,t) := \frac{t+1}{2} e^{i\theta}, $$ where $\theta$ is the angle of the point in the $x-y$-plane; i.e. $\theta=\arg(x+iy)$. If you prefer real coordinates, this means $$ \phi(x,y,t) = \frac{t+1}{2\sqrt{x^2+y^2}} \cdot (x,y).$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/441965", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 1 }
Evaluating the integral of $f(x, y)=yx$ Evaluate the integral $I = \int_C f(x,y)\, ds$ where $f(x,y)=yx$ and the curve $C$ is given by $x=\sin(t)$ and $y=\cos(t)$ for $0\leq t\leq \frac{\pi}{2}$. I got the answer $\frac{\sqrt{2}}{2}$; is that right?
Evaluate the parameterized integral $$ \int_0^{\pi/2} \cos t \sin t \sqrt{\cos^2 t + \sin^2 t} \, dt = \int_0^{\pi/2} \cos t \sin t \, dt. $$ I don't see a square root of 2 appearing anywhere.
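For the record (finishing the computation): $$\int_0^{\pi/2}\cos t\,\sin t\,dt=\left[\frac{\sin^2 t}{2}\right]_0^{\pi/2}=\frac12,$$ so the value of the line integral is $\frac12$, not $\frac{\sqrt{2}}{2}$.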
{ "language": "en", "url": "https://math.stackexchange.com/questions/442121", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Weakly compact implies bounded in norm The weak topology on a normed vector space $X$ is the weakest topology making every bounded linear functional $x^*\in X^*$ continuous. If a subset $C$ of $X$ is compact for the weak topology, then $C$ is bounded in norm. How does one prove this fact?
The first key point is that an element of $x$ can be identified with a linear functional of norm $\|x\|$ on the dual $X^*$. Indeed, it follows from Hahn-Banach that there exists $x^*\in X^*$ such that $x^*(x)=\|x\|$ with $\|x^*\|=1$. Therefore, denoting by $e_x$ the linear functional (called point evaluation) $e_x:x^*\longmapsto x^*(x)$ on $X^*$, we have $$ \|e_x\|=\sup_{\|x^*\|\leq 1}|e_x(x^*)|=\sup_{\|x^*\|\leq 1}|x^*(x)|=\max_{\|x^*\|\leq 1}|x^*(x)|=\|x\|. $$ This yields the canonical isometric embedding of $X$ into the double dual $X^{**}$. The second key point is that $X^*$ is always a Banach space. Therefore we can use the uniform boundedness principle (Banach-Steinhaus) for bounded linear functionals on $X^*$, that is in $X^{**}$. If $C$ is weakly compact in $X$, then the image of $C$ under every $x^*\in X^*$ is compact, hence bounded, in the base field. Just because $x^*$ is continuous for the weak topology by definition of the latter, and because the continuous image of a compact space is compact. It follows that $$ \sup_{x \in C}|x^*(x)|=\sup_{x \in C}|e_x(x^*)|<\infty \qquad \forall x^*\in X^*. $$ By the uniform boundedness principle applied to $\{e_x\,;\,x\in C\}$, this implies that $$ \sup_{x\in C}\|x\|=\sup_{x\in C}\|e_x\|<\infty $$ which says precisely that $C$ is bounded in norm. Note: since the weak topology is Hausdorff, we get that weakly compact implies weakly closed + norm bounded. The converse is false. For instance, the closed unit ball of $c_0$ is weakly closed (the weak closure of a convex set is the same as its norm closure) and norm bounded, but not weakly compact (the closed unit ball of a normed vector space $X$ - actually automatically a Banach space in either case - is weakly compact if and only if $X$ is reflexive). Things are better with the weak*-topology on the dual of a Banach space $X$: weak*-compact is equivalent to weak*-closed + norm bounded.
{ "language": "en", "url": "https://math.stackexchange.com/questions/442197", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "15", "answer_count": 1, "answer_id": 0 }
Proof that $ \lim_{x \to \infty} x \cdot \log(\frac{x+1}{x+10})$ is $-9$ Given this limit: $$ \lim_{x \to \infty} x \cdot \log\left(\frac{x+1}{x+10}\right) $$ I may use this trick: $$ \frac{x+1}{x+10} = \frac{x+1}{x} \cdot \frac{x}{x+10} $$ So I will have: $$ \lim_{x \to \infty} x \cdot \left(\log\left(\frac{x+1}{x}\right) + \log\left(\frac{x}{x+10}\right)\right) = $$ $$ = 1 + \lim_{x \to \infty} x \cdot \log\left(\frac{x}{x+10}\right) $$ But from here I am lost; I still can't make it look like a fundamental limit. How can I solve it?
I'll use the famous limit $$\left(1+\frac{a}{x+1}\right)^x\approx\left(1+\frac{a}{x}\right)^x\to e^a$$ We have $$x \ln \frac{x+1}{x+10}=x \ln \frac{x+1}{x+1+9}=-x\ln\left( 1+\frac{9}{x+1} \right)=-\ln\left( 1+\frac{9}{x+1} \right)^x\to-9$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/442254", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 2 }
Poisson Estimators Consider a simple random sample of size $n$ from a Poisson distribution with mean $\mu$. Let $\theta=P(X=0)$. Let $T=\sum X_{i}$. Show that $\tilde{\theta}=[(n-1)/n]^{T}$ is an unbiased estimator of $\theta$.
We have $\Pr(X_1=0)=e^{-\mu}=\theta$. Therefore $$ \theta=\mathbb E(\Pr(X_1=0\mid X_1+\cdots+X_n)). $$ So what is $$ \Pr(X_1=0\mid X_1+\cdots+X_n=x)\text{ ?} $$ It is $$ \begin{align} & {}\qquad \frac{\Pr(X_1=0\text{ and } X_1+\cdots+X_n=x)}{\Pr(X_1+\cdots+X_n=x)} = \frac{\Pr(X_1=0)\cdot\Pr(X_2+\cdots+X_n=x)}{e^{-n\mu}(n\mu)^x/(x!)} \\[10pt] & = \frac{\left(e^{-\mu}\right)\cdot\left(e^{-(n-1)\mu}((n-1)\mu)^x/(x!)\right)}{e^{-n\mu}(n\mu)^x/(x!)} = \left(\frac{n-1}{n}\right)^x \\[10pt] \end{align} $$ Therefore $$ \mathbb E\left( \left(\frac{n-1}{n}\right)^{X_1+\cdots+X_n} \right) = \theta. $$
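Alternatively (a quicker cross-check via the probability generating function): $T=\sum X_i\sim\text{Poisson}(n\mu)$, and for a Poisson$(\lambda)$ variable $\mathbb E[s^T]=e^{\lambda(s-1)}$; taking $s=\frac{n-1}{n}$ gives $$\mathbb E\left[\left(\tfrac{n-1}{n}\right)^{T}\right]=e^{n\mu\left(\frac{n-1}{n}-1\right)}=e^{-\mu}=\theta.$$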
{ "language": "en", "url": "https://math.stackexchange.com/questions/442344", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Calculation for absolute value pattern I have a weird pattern I have to calculate and I don't quite know how to describe it, so my apologies if this is a duplicate somewhere.. I want to solve this pattern mathematically. When I have an array of numbers, I need to calculate a secondary sequence (position) based on the index and the size, i.e.: 1 element: index: 0 position: 1 2 elements: index: 0 1 position: 2 1 3 elements: index: 0 1 2 position: 2 1 3 4 elements: index: 0 1 2 3 position: 4 2 1 3 5 elements: index: 0 1 2 3 4 position: 4 2 1 3 5 6 elements: index: 0 1 2 3 4 5 position: 6 4 2 1 3 5 etc.... The array can be 1-based as well if that would make it easier. I wrote this out to 9 elements to try and find a pattern but the best I could make out was that it was some sort of absolute value function with a variable offset...
For an array with $n$ elements, the function is: $$f(n,k)=1+2(k-[n/2])$$ when $k\geq [n/2]$ and $$f(n,k)=2+2([n/2]-k-1)$$ when $k<[n/2]$, where $[\cdot]$ is the floor function. Example: with $n=5$, $[n/2]=2$, so $$f(5,4)=1+2(4-2)=5,\qquad f(5,1)=2+2(2-1-1)=2.$$
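A direct transcription of the formula (my sketch), which reproduces the tables in the question:

```python
def position(n: int, k: int) -> int:
    """Position value for index k in an array of n elements."""
    mid = n // 2  # the floor [n/2] from the formula
    if k >= mid:
        return 1 + 2 * (k - mid)
    return 2 + 2 * (mid - k - 1)

for n in range(1, 7):
    print(n, [position(n, k) for k in range(n)])
```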
{ "language": "en", "url": "https://math.stackexchange.com/questions/442403", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
For the Fibonacci numbers, show for all $n$: $F_1^2+F_2^2+\dots+F_n^2=F_nF_{n+1}$ The definition of a Fibonacci number is as follows: $$F_0=0\\F_1=1\\F_n=F_{n-1}+F_{n-2}\text{ for } n\geq 2$$ Prove the given property of the Fibonacci numbers for all n greater than or equal to 1. $$F_1^2+F_2^2+\dots+F_n^2=F_nF_{n+1}$$ I am pretty sure I should use weak induction to solve this. My professor got me used to solving it in the following format, which I would like to use because it help me map everything out... This is what I have so far: Base Case: Solve for $F_0$ and $F_1$ for the following function: $F_nF_{n+1}$. Inductive Hypothesis: What I need to show: I need to show $F_{n+1}F_{n+1+1}$ will satisfy the given property. Proof Proper: (didn't get to it yet) Any intro. tips and pointers?
This identity is clear from the following diagram: (imagine here a generalized picture with $F_i$ notation) The area of the rectangle is obviously $$F_n(F_{n}+F_{n-1})=F_nF_{n+1}$$ On the other hand, since the area of a square with side $F_i$ is $F_i^2$, the same area is obviously: $$F_1^2+F_2^2+\dots+F_n^2$$ Therefore: $$F_1^2+F_2^2+\dots+F_n^2=F_nF_{n+1}$$ You can even convert this graphical proof to an inductive proof - your inductive step would consist of adding an $F_{n+1} \times F_{n+1}$ square.
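For the write-up format the asker describes (a sketch of the inductive step): assume $F_1^2+\dots+F_n^2=F_nF_{n+1}$. Then $$F_1^2+\dots+F_n^2+F_{n+1}^2=F_nF_{n+1}+F_{n+1}^2=F_{n+1}(F_n+F_{n+1})=F_{n+1}F_{n+2},$$ using the recurrence $F_{n+2}=F_{n+1}+F_n$. The base case is $F_1^2=1=F_1F_2$.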
{ "language": "en", "url": "https://math.stackexchange.com/questions/442459", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "19", "answer_count": 5, "answer_id": 2 }
Is [0,1] closed? I thought it was closed, under the usual topology on $\mathbb{R}$, since its complement $(-\infty, 0) \cup (1,\infty)$ is open. However, then the intersection number would not agree mod 2, since it can arbitrarily intersect a compact manifold an even or odd number of times. P.S. The corollary. $X$ and $Z$ are closed submanifolds inside $Y$ with complementary dimension, and at least one of them is compact. If $g_0, g_1: X \to Y$ are arbitrary homotopic maps, then we have $I_2(g_0, Z) = I_2(g_1, Z).$ The contradiction (my question): Let $[0,1]$ be the closed manifold $Z$; then it can intersect an arbitrary compact manifold any number of times, contradicting the corollary. Aneesh Karthik C's comment answered my question, so just to clarify: I was thinking $g_0$ is one wiggle of $[0,1]$ such that it intersects a compact manifold once, and $g_1$ is some other sort that makes $[0,1]$ intersect twice. Then it contradicts the corollary. But apparently it doesn't, because $[0,1]$ does not satisfy the corollary as a closed manifold. By definition, a closed manifold is a type of topological space, namely a compact manifold without boundary. Since $[0,1]$ is not a closed manifold, it can intersect a compact manifold as much as it wants, without contradicting the theorem. I didn't realize that $[0,1]$ is not a closed manifold. So I thought it contradicted, and that's why I asked the question.
A closed manifold is a compact boundaryless manifold. So the last line "Let $[0,1]$ be the closed manifold $Z$" is wrong, for $\partial[0,1]=\{0,1\}\ne\emptyset$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/442580", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Isn't it a correct observation that no norm on $B[0,1]$ can be found to make $C[0,1]$ open in it? There's a problem in my text which reads as: Show that $C[0,1]$ is not an open subset of $(B[0,1],\|.\|_\infty).$ I've already shown in a previous example that for any open subspace $Y$ of a normed linear space $(X,\|.\|),~Y=X.$ Even though using this result this problem turns out to be immediate, the sup-norm is becoming immaterial. And I can't believe what I'm left with: No norm on $B[0,1]$ can be found to make $C[0,1]$ open in it. Is this a correct observation?
This statement is actually true under more general settings. It seems convenient to talk about topological vector spaces, of which normed spaces are a very special kind. So let $X$ be a topological vector space and $Y$ be an open subspace. So we know $Y$ contains some open set. Since the topology on topological vector spaces is translation invariant (that is, $V$ is open if and only if $V+x$ is open for all $x\in X$; you can check this in normed spaces), we know $Y$ contains some open neighborhood of the origin, say $0\in V\subset Y$. Another interesting fact about topological vector spaces is that for any open neighborhood $W$ of the origin, one has \begin{equation} X=\cup_{n=1}^{\infty}nW. \end{equation} Again you might check this for normed spaces. Apply this to our $V$, and note that $Y$ is closed under scalar multiplication; we have \begin{equation} X=\cup nV\subset \cup nY=Y. \end{equation} So we have just proved: The only open subspace is the entire space. Note: If $S$ is a subset of a vector space, for a point $x$ and a scalar $\alpha$ we define \begin{equation} x+S:=\{x+s|s\in S\} \end{equation} and \begin{equation} \alpha S:=\{\alpha s|s\in S\}. \end{equation}
{ "language": "en", "url": "https://math.stackexchange.com/questions/442726", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Finding perpendicular bisector of the line segment joining $(-1,4)\;\text{and}\;(3,-2)$ Find the perpendicular bisector of the line segment joining the points $(-1,4)\;\text{and}\;(3,-2).\;$ I know this is a very easy question, and the answer is an equation, so any hints would be very nice. Thanks
Hint: The line must be orthogonal to the difference vector $(3-(-1),-2-4)$ and pass through the midpoint $(\frac{-1+3}2,\frac{4-2}2)$.
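Carrying the hint through (for reference): the midpoint is $(1,1)$ and the segment has slope $\frac{-2-4}{3-(-1)}=-\frac32$, so the bisector has slope $\frac23$ and equation $$y-1=\frac{2}{3}(x-1),\qquad\text{i.e. } 2x-3y+1=0.$$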
{ "language": "en", "url": "https://math.stackexchange.com/questions/442781", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 5, "answer_id": 3 }
Can we use the Second Mean Value Theorem over infinite intervals? Let $[a,b]$ be any closed interval and let $f,g$ be continuous on $[a,b]$ with $g(x)\geq 0$ for all $x\in[a,b]$. Then the Second Mean Value Theorem says that $$\int_a^bf(t)g(t)\text{d}t = f(c)\int_a^b g(t)\text{d}t,$$ for some $c\in(a,b)$. Does this theorem work on the interval $[0,\infty)$? EDIT: Assuming the integrals involved converge.
No. Consider $g(t)=\frac{1}{t}$, $f=g$ on $[1,\infty)$. Then $\int_1^\infty f(t)g(t)\,dt=\int_1^\infty\frac{dt}{t^2}=1$ converges, while $\int_1^\infty g(t)\,dt$ diverges, so no choice of $c$ can make $f(c)\int_1^\infty g(t)\,dt$ equal to $1$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/442851", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Floor Inequalities Proving the integrality of fractions of factorials can be done through De Polignac's formula for the exponents in factorials, reducing the question to a floored inequality. Some of those inequalities turn out to be very hard to prove, if they are true at all. The first is, given $x_i \in \mathbb{R}$ and $\{x_i\} = x_i - \lfloor x_i \rfloor$: $$\sum_{i=1}^{n}\left \lfloor n \{x_i\} \right \rfloor \geq \left \lfloor \sum_{i=1}^{n}\{x_i\} \right \rfloor$$ I was able to prove this one by arguing that if $\left \lfloor \sum_{i=1}^{n}\{x_i\} \right \rfloor = L$ then there is some $x_k$ with $\{x_k\} \geq \frac{L}{n}$, so the left side is at least $L$. But I was unable to apply the same idea to the following inequality: $$\sum_{i=1}^{n}\left \lfloor q_i \{x_i\} \right \rfloor \geq \left \lfloor \sum_{i=1}^{n}\{x_i\} \right \rfloor$$ Where $q_i \in \mathbb{N}$ and $\frac{1}{q_1} + \dotsm + \frac{1}{q_n} \leq 1$. Also, this generalization was proposed: $$\sum_{i=1}^{n}\left \lfloor q_i \{x_i\} \right \rfloor \geq \left \lfloor \sum_{i=1}^{n}k_i\{x_i\} \right \rfloor$$ Where $q_i, k_i \in \mathbb{N}$ and $\frac{k_1}{q_1} + \dotsm + \frac{k_n}{q_n} \leq 1$. I don't know whether the last two inequalities are correct, nor how to prove them if true or find a counterexample if not. Could someone help?
Let $\theta_i=\{x_i\}$, so that the second inequality reads $$\sum_{i=1}^{n}\left \lfloor q_i \theta_i \right \rfloor \geq \left \lfloor \sum_{i=1}^{n}\theta_i \right \rfloor. \tag{1}$$ Also let $L$ denote the right side, as in the proof of the OP. Now if for each $i$ we had $\theta_i<L/q_i$ then we would have $$\sum_{i=1}^n \theta_i<L \sum_{i=1}^n \frac{1}{q_i} \le L,$$ the last inequality from the assumption that the reciprocals of the $q_i$ sum to at most $1$. From this, similar in spirit to the OP's proof, we get that for at least one index $i$ we have $\theta_i \ge L/q_i$, in other words $q_i \theta_i \ge L$, implying that the term $\left \lfloor q_i \theta_i \right \rfloor \ge L$ and establishing (1). Perhaps a similar idea would work for the final inequality. ADDED: Yes, the third inequality has a similar proof. With the notation above it reads $$\sum_{i=1}^{n}\left \lfloor q_i \theta_i \right \rfloor \geq \left \lfloor \sum_{i=1}^{n}k_i\theta_i \right \rfloor. \tag{2}$$ Again let $L$ denote the right side, and assume for each $i$ we had $\theta_i<L/q_i$. then we would have $$\sum_{i=1}^n k_i\theta_i<L \sum_{i=1}^n \frac{k_i}{q_i}\le L,$$ using $\sum (k_i/q_i) \le 1.$ This as before implies there is an index $i$ for which $q_i \theta_i \ge L$ to finish.
{ "language": "en", "url": "https://math.stackexchange.com/questions/442914", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Indefinite integral $\int{\frac{dx}{x^2+2}}$ I cannot manage to solve this integral: $$\int{\frac{dx}{x^2+2}}$$ The problem is the $2$ in the denominator. I am trying to decompose it into something like $\int{\frac{dt}{t^2+1}}$: $$t^2+1 = x^2 +2$$ $$\int{\frac{dt}{2 \cdot \sqrt{t^2-1} \cdot (t^2+1)}}$$ But it's even harder than the original one. I also cannot try partial fraction decomposition because the polynomial has no real roots. How to go on?
Hint: $$x^2+2 = 2\left(\frac{x^2}{\sqrt{2}^2}+1\right)$$
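Completing the hint (a standard computation): with the substitution $x=\sqrt2\,u$, $dx=\sqrt2\,du$, $$\int\frac{dx}{x^2+2}=\frac{1}{2}\int\frac{\sqrt2\,du}{u^2+1}=\frac{1}{\sqrt2}\arctan u+C=\frac{1}{\sqrt{2}}\arctan\left(\frac{x}{\sqrt{2}}\right)+C.$$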
{ "language": "en", "url": "https://math.stackexchange.com/questions/442991", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 5, "answer_id": 0 }
Is a Whole Number A Rational Number Is a whole number a rational number, or just a whole number??
The real answer, as usual, is "it depends". As the other answers have indicated, it is possible to identify whole numbers with certain rational numbers. On the other hand, it's also possible to identify rational numbers with certain ordered pairs of integers. So it really depends on your perspective/purpose. If you're doing something like number theory, you'll be thinking in terms of a whole number being a rational number. If you're thinking in terms of "mathematical foundations", you'll most likely be looking at it from the other direction.
{ "language": "en", "url": "https://math.stackexchange.com/questions/443152", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 2 }
Steady State Solution Non-Linear ODE I'm working through some problems studying for a numerical methods course, but I'm stuck on how to answer the following question analytically. It says to find the steady state solution of the following equation: $\frac{dy}{dx} = -ay + e^{-y}$ where $y(0)=0$. It says that the steady state solution is when $\frac{dy}{dx} = 0$. It's also claimed that the steady state solution is a fixed point of the system. I'm stuck on where to start with this. If I immediately try to solve $0 = -ay + e^{-y}$, I can't isolate y. Then I tried to start with the homogenous equation, but that devolved also. What am I missing?
Firstly, I will expand on my comment. If $a\neq 0$, then $0=-ay+e^{-y}$, which can be rearranged to $ye^y=\dfrac{1}{a}$. This has the "analytic" solution using the LambertW function $y=LambertW(1/a)$. If $a=0$, then the original ODE is $y'=e^{-y}$, which has solution $y(x)=\log(x+c)$. This does not have a steady state. Now, given that you are in a numerical analysis class, you can also solve this equation using numerical methods. Hence, a solution could be obtained using Newton's method. Given $f(y)=e^{-y}-ay$, the iteration would be $$ y_{n+1}=y_n-\frac{e^{-y_n}-ay_n}{-e^{-y_n}-a}=y_n+\dfrac{1-ay_ne^{y_n}}{1+ae^{y_n}}. $$ This will break down when $1+ae^y=0$. Hence, if $a>0$, the iteration will not break down and is guaranteed to find a solution. If $a<0$, it is possible that the expression is satisfied. Now, appealing to the geometric interpretation of the problem, we see that there is a solution provided that the graphs of $l_1=ay$ and $l_2=e^{-y}$ intersect. The least magnitude negative value for $a$ must then be the value where these two graphs are tangent. This leads to $a=-e^{-p}$ where $y=p$ is the point of tangency. But we also require the line $l_1=ay$ to pass through $(p,l_2(p))$. Hence, $e^{-p}=-e^{-p}p\implies p=-1$. Thus, the least magnitude negative value for $a$ is $a=-e$. The method will then also not break down if $a\leq-e$. This result is confirmed as the $LambertW(x)$ function is defined only when $x\geq-1/e$, which corresponds to $a\leq-e$ when $a<0$. Conclusion: A solution will exist provided $a>0$ or $a\leq-e$. This is confirmed by this implicit plot of the solution.
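If it helps to see the iteration in practice, here is a minimal Python sketch (my own illustration; the function name and tolerances are arbitrary choices):

```python
# Newton's method for the steady state: f(y) = e^{-y} - a*y, f'(y) = -e^{-y} - a.
import math

def steady_state(a, y0=0.0, tol=1e-12, max_iter=100):
    y = y0
    for _ in range(max_iter):
        f = math.exp(-y) - a * y
        fp = -math.exp(-y) - a       # vanishes exactly when 1 + a*e^y = 0
        step = f / fp
        y -= step
        if abs(step) < tol:
            return y
    raise RuntimeError("Newton iteration did not converge")

print(steady_state(1.0))   # solves y*e^y = 1, i.e. y = LambertW(1) ~ 0.567143
```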
{ "language": "en", "url": "https://math.stackexchange.com/questions/443202", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Intuition for orthogonal vectors in $\Bbb R^n$ Two vectors in $\Bbb R^n$ are orthogonal iff their dot product is $0$. I'm aware that the dot product can be defined in other spaces, but to keep things simple let's restrict ourselves to $\Bbb R^n$. Given that the idea of orthogonality is roughly to identify when two vectors have no "overlap", then apart from the fact that in $\Bbb R^2$ and $\Bbb R^3$ this corresponds to the geometrical notion of orthogonality, why is this chosen as the definition of orthogonality? Ideally give examples of concrete mathematical problems where this definition arises naturally.
Firstly, let's consider the geometry, to expand on Raskolnikov's comments: Consider two independent vectors in $\mathbb R^n$. Then there's a unique plane through them (and the origin). Pick a linear transformation from $\mathbb R^n\to\mathbb R^2$ that sends that plane onto all of $\mathbb R^2$ and preserves lengths. Then the vectors are dot-product-orthogonal in $\mathbb R^n$ if and only if their images are geometrically-orthogonal in $\mathbb R^2$ (use the converse of Pythagorean theorem or whichever other criterion you like). This says that the geometry in $\mathbb R^n$ really acts the same way as in $\mathbb R^2$. However, you seemed to be saying that you were not interested in the geometry side of things, and wanted an independent reason why orthogonality of vectors is important. Pretty much everything in your linear algebra book about orthogonality is an example, with one of the biggest ones being the spectral theorem, which says that every nice linear map (given symmetric matrices in the $\mathbb R^n$ case) is just a linear combination of orthogonal projections. Another important use of orthogonality is in the singular value decomposition, which has many applications, including image compression and topography.
{ "language": "en", "url": "https://math.stackexchange.com/questions/443262", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
A strange trigonometric identity in a proof of Niven's theorem I can't understand the inductive step on Lemma A in this proof of Niven's theorem. It asserts, where $n$ is an integer: $$2\cos ((n-1)t)\cos (t) = \cos (nt) + \cos ((n-2)t)$$ I tried applying the angle subtraction formula to both sides, but all that does is introduce a bunch of sines, which I can't see how to eliminate.
As $\cos(A-B)+\cos(A+B)=\cos A\cos B+\sin A\sin B+\cos A\cos B-\sin A\sin B=2\cos A\cos B$ Put $A+B=nt,A-B=(n-2)t$ Alternatively use $\cos C+\cos D=2\cos\frac{C+D}2\cos\frac{C-D}2$
{ "language": "en", "url": "https://math.stackexchange.com/questions/443345", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Visual proof of the addition formula for $\sin^2(a+b)$? Is there a visual proof of the addition formula for $\sin^2(a+b)$? The visual proof of the addition formula for $\sin(a+b)$ is here: Also, is it easy to generalize (in any way: algebra, picture, etc.) to an addition formula for $\sin^n(a+b)$ where $n$ is a given positive integer? EDIT, 4 COMMENTS: $1)$ I prefer the addition formulas to have as few sums as possible. I assume this is equivalent to allowing and preferring large powers of $\sin$ and $\cos$; e.g. $\sin^4(a+b)=$ an expression involving $\sin^2$, $\sin^4$ and $\cos^4$ and no other powers of $\sin$ or $\cos$. In one of the answers, the poster just used the binomial theorem. That works nicely, but I'm not sure I like the output of that answer. We get $n$ sums for $\sin^n(a+b)$, and I assume we can do better if we allow powers of $\sin$ and $\cos$. I could be wrong of course. $2)$ I bet against the existence of visual proofs for $\sin^n(a+b)$ for $n>1$. But I could be wrong. $3)$ Not trying to insult the answers and comments, but I am skeptical about the use of complex numbers (and $\exp$) to solve this issue EFFICIENTLY. I know Euler's formula for $\exp(i x)$, but still. I could be wrong about this too, of course. $4)$ My main interest is in the $\sin^2$ case. I assume it has many forms. Can the addition formula for $\sin^2$ be expressed in terms of $\sin^2$ only? I think so. (One of the reasons I think so is that $\cos^2$ can be rewritten.)
It's not entirely clear what you mean by "the addition formula for $\sin^2(\alpha+\beta)$", but if it's this ... $$\sin^2(\alpha+\beta) = \cos^2 \alpha + \cos^2\beta - 2 \cos\alpha\cos\beta \cos\left(\alpha+\beta\right)$$ ... then here's a picture-proof that relies on the Law of Cosines (which itself has a nice picture proof). We simply inscribe $\alpha$ and $\beta$ to either side of a unit-length diameter of a circle, and apply the Law to the green-red-blue cos-cos-sin triangle. (The dashed right(!) triangle (re-)confirms why the blue segment has length $\sin(\alpha+\beta)$.) Note: The figure also illustrates Ptolemy's Theorem ---The product of the diagonals of an inscribed quadrilateral is equal to the sum of the products of opposite sides--- since the unmarked green and red edges have lengths $\sin\alpha$ and $\sin\beta$, respectively, so that $$1 \cdot \sin(\alpha+\beta) = \sin\alpha \cos\beta + \sin\beta \cos\alpha$$ Note 2: The figure also gives this version of the addition formula ... $$\sin^2\left(\alpha+\beta\right) = \sin^2\alpha + \sin^2\beta + 2 \sin\alpha\sin\beta\cos\left( \alpha+\beta \right)$$ once we interpret the right-hand side as $\sin^2\alpha + \sin^2\beta - 2 \sin\alpha\sin\beta\cos\left( \left(\frac{\pi}{2}-\alpha\right)+\left(\frac{\pi}{2}-\beta\right) \right)$ and apply the Law of Cosines to the green-red-blue sin-sin-sin triangle at the top of the picture. That's less pretty, though.
{ "language": "en", "url": "https://math.stackexchange.com/questions/443411", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 1 }
Confused by the proof that an Abelian group whose order is divisible by a prime has an element of order that prime. I have a problem understanding the assumptions in the proof of this theorem Theorem: If $A$ is abelian with order $a$ divisible by prime $p$, then $A$ has an element of order $p$. The proof goes as follows: Obviously true if $|A|=p$. We may therefore employ induction and shall henceforth assume that $a$ is composite divisible by $p$. By Chapter II, Theorem 5, p. 40, $A$ possesses proper subgroups. Let us select a proper subgroup $H$ of maximum order $h$ ($h<a$, say). We have to distinguish two cases: (i) $p \mid h$. By induction, $H$ contains an element of order $p$. (ii) $(h, p) =1$. Since $H$ is proper, there exists an element $x$ of order $t$ which does not belong to $H$. Let $T = \langle x\rangle$ and consider the product $HT$. Since $A$ is Abelian, $HT = TH$ so $HT$ is a subgroup more comprehensive than $H$, so it must be $A$ since $H$ was maximal. By the product theorem $ad = ht$ where $d$ is the order of the intersection of $H$ and $T$, which must be $\{e\}$, and therefore $d =1$ and so $a = ht$. Since $p \mid a$ and $(h, p) = 1$, then $p \mid t$ so $t = sp$ for some $s$. Then $x^s$ is of order $p$. QED. I paraphrased only slightly but the proof is what was presented. I have a problem with the induction hypothesis and the assumption about composite order. I assume that the induction hypothesis is that “all groups whose order is divisible by $p$ and smaller than the order of $A$ have an element of order $p$”, but I am not quite sure. The second problem I have is the assumption that we can restrict ourselves to composite order. Doesn’t that eliminate all groups of prime power order? I must be missing something.
Your first assumption is correct. We are indeed using (strong) induction on the number $\frac{|A|}{p}$ for fixed, but arbitrary $p$. Any non-identity element $a$ in any group $A$ of order a prime $p$ is itself of order $p$. That is to say, for each prime $p$ there is exactly one group (even without assuming abelian) of order $p$ and that group is cyclic. Therefore the pure prime ordered groups are dealt with in just a side comment of the proof.
{ "language": "en", "url": "https://math.stackexchange.com/questions/443464", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
What is the derivative of $x\sin x$? Ok so I know the answer of $\frac{d}{dx}x\sin(x) = \sin(x)+ x\cos(x)$...but how exactly do you get there? I know $\frac{d}{dx} \sin{x} = \cos{x}$. But where does the additional $\sin(x)$ (in the answer) come in?
A bit of intuition about the product rule: Suppose that you have a rectangle whose height at time $t$ is $h(t)$ and whose width at time $t$ is $w(t)$. Then the area at time $t$ is $A(t)=h(t)w(t)$. Now, as the time changes, how does the area change? (Please, forgive my use of paint here.) Say the white rectangle was from time $t$, and the larger rectangle at time $t+\Delta t$. We gain three new regions of area: the green one, the blue one, and the gray one. The green area is $\Delta h\cdot w(t)$, where $\Delta h$ is the change in height from time $t$ to time $t+\Delta t$; the blue area is, similarly, $\Delta w\cdot h(t)$, and the gray area is $\Delta h\cdot\Delta w$. So, we have $$ \Delta A=\Delta h\cdot w(t)+\Delta w\cdot h(t)+\Delta h\cdot\Delta w. $$ Now, when $\Delta t$ is really small, we expect $\Delta h$ and $\Delta w$ to be really small as well; so, their product is tiny. Hence $$ \Delta A\approx \Delta h\cdot w(t)+\Delta w\cdot h(t), $$ or $$ \frac{\Delta A}{\Delta t}\approx\frac{\Delta h}{\Delta t}\cdot w(t)+\frac{\Delta w}{\Delta t}\cdot h(t). $$ Does this look at all like the product rule? Letting $\Delta t\rightarrow0$, this approximation (properly formalized, of course) leads us to the formula $$ \frac{d}{dt}\left[w(t)\cdot h(t)\right]=\frac{dA}{dt}=\frac{dh}{dt}\cdot w(t)+\frac{dw}{dt}\cdot h(t) $$
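To connect this back to the original question, here is the product rule applied directly (a one-line summary of my own):

$$\frac{d}{dx}\bigl[x\sin x\bigr]=\underbrace{\frac{d}{dx}[x]}_{=1}\cdot\sin x+x\cdot\underbrace{\frac{d}{dx}[\sin x]}_{=\cos x}=\sin x+x\cos x.$$

The "additional" $\sin x$ is the first term: the derivative of the factor $x$ times the untouched $\sin x$.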
{ "language": "en", "url": "https://math.stackexchange.com/questions/443509", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 5, "answer_id": 2 }
Weighted uniform convergence of Taylor series of exponential function Is the limit $$ e^{-x}\sum_{n=0}^N \frac{(-1)^n}{n!}x^n\to e^{-2x} \quad \text{as } \ N\to\infty \tag1 $$ uniform on $[0,+\infty)$? Numerically this appears to be true: see the difference of the two sides in (1) for $N=10$ and $N=100$ plotted below. But the convergence is very slow (logarithmic error $\approx N^{-1/2}$ as shown by Antonio Vargas in his answer). In particular, putting $e^{-0.9x}$ and $e^{-1.9x}$ in (1) clearly makes convergence non-uniform. One difficulty here is that the Taylor remainder formula is effective only up to $x\approx N/e$, and the maximum of the difference is at $x\approx N$. The question is inspired by an attempt to find an alternative proof of the statement that for every $\epsilon>0$ there is a polynomial $p$ such that $|f(x)-e^{-x}p(x)|<\epsilon$ for all $x\in[0,\infty)$.
Thanks, this was a fun problem. From the integral representation $$ \sum_{k=0}^{n} \frac{x^k}{k!} = \frac{1}{n!} \int_0^\infty (x+t)^n e^{-t} \,dt \tag1 $$ we can derive the expression $$ e^{-x} \sum_{k=0}^{n} \frac{(-x)^k}{k!} = e^{-2x} - \frac{e^{-2x} (-x)^{n+1}}{n!} \int_0^1 t^n e^{xt}\,dt. \tag2 $$ Now $$ \int_0^1 t^n e^{xt}\,dt \leq e^x \int_0^1 t^n\,dt = \frac{e^x}{n+1}, \tag3 $$ so that $$ \begin{align} \left|\frac{e^{-2x} (-x)^{n+1}}{n!} \int_0^1 t^n e^{xt}\,dt\right| &\leq \frac{e^{-x} x^{n+1}}{(n+1)!} \\ &\leq \frac{e^{-n-1} (n+1)^{n+1}}{(n+1)!} \\ &\sim \frac{1}{\sqrt{2\pi n}} \end{align} \tag4 $$ by Stirling's formula. Added, for those interested in the derivation of (2) from (1): $$ \sum_{k=0}^{n} \frac{(-x)^k}{k!} = \frac{1}{n!} \int_0^x (t-x)^n e^{-t} \,dt + \frac{1}{n!} \int_x^\infty (t-x)^n e^{-t} \,dt \tag{A} $$ Substitute $u=t-x$ in the second integral on the right of (A): $$\frac{1}{n!}\int_x^\infty (t-x)^n e^{-t} \,dt =\frac{1}{n!}\int_0^\infty u^n e^{-u-x} \,du = e^{-x} \tag{B}$$ Substitute $u=1-t/x$ in the first integral on the right of (A), noting that $(t-x)^n=(-x)^n u^n$, that $dt=(-x)du$, and that the limits of integration reverse: $$\frac{1}{n!} \int_0^x (t-x)^n e^{-t} \,dt = -\frac{(-x)^{n+1}}{n!} \int_0^1 u^n e^{xu-x} \,du = -\frac{e^{-x}(-x)^{n+1}}{n!} \int_0^1 u^n e^{xu} \,du \tag{C} $$ Adding (B) and (C) and multiplying through by $e^{-x}$, identity (2) follows.
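For readers who like numerical evidence, here is a rough mpmath sketch (my own addition, not part of the argument) comparing the supremum of the error with the $1/\sqrt{2\pi n}$ bound; high precision is needed because the alternating partial sums cancel catastrophically near $x\approx n$:

```python
from mpmath import mp, mpf, exp, sqrt, pi, factorial

mp.dps = 60  # plenty of guard digits for the cancellation near x ~ n

def error(x, n):
    """e^{-x} * sum_{k=0}^{n} (-x)^k / k!  minus  e^{-2x}."""
    s = sum((-x) ** k / factorial(k) for k in range(n + 1))
    return exp(-x) * s - exp(-2 * x)

for n in (10, 50, 100):
    # the maximum sits near x ~ n, so a coarse scan of (0, 2n] suffices here
    worst = max(abs(error(mpf(t) / 5, n)) for t in range(1, 10 * n))
    print(n, float(worst), float(1 / sqrt(2 * pi * n)))
```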
{ "language": "en", "url": "https://math.stackexchange.com/questions/443578", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 3, "answer_id": 1 }
Find the coefficient of $x^{20}$ in $(x^{1}+\cdots+x^{6})^{10}$ I'm trying to find the coefficient of $x^{20}$ in $$(x^{1}+\cdots+x^{6} )^{10}$$ So I did this: $$\frac {1-x^{m+1}} {1-x} = 1+x+x^2+\cdots+x^{m}$$ $$(x^1+\cdots+x^6 )=x(1+x+\cdots+x^5 ) = \frac {x(1-x^6 )} {1-x} = \frac {x-x^7} {1-x}$$ $$(x^1+\cdots+x^6 )^{10} =\left(\dfrac {x-x^7} {1-x}\right)^{10}$$ But what do I do from here? Any hints? Thanks
Since $(x+x^2+\cdots+x^6)^{10}=x^{10}(1+x+\cdots+x^5)^{10}$ and $1+x+\cdots+x^5=\frac{1-x^6}{1-x},$ we need to find the coefficient of $x^{10}$ in $(\frac{1-x^6}{1-x})^{10}=(1-x^6)^{10}(1-x)^{-10}.$ Since $(1-x^6)^{10}(1-x)^{-10} = (1-10x^6+45x^{12}+\cdots) \sum_{m=0}^{\infty}\binom{m+9}{9}x^{m},$ the coefficient of $x^{10}$ will be $\binom{19}{9}-10\binom{13}{9}. $
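As a sanity check (my own addition), sympy confirms the coefficient both by brute-force expansion and via the closed form from the answer:

```python
from sympy import symbols, expand, binomial

x = symbols('x')
poly = expand(sum(x**k for k in range(1, 7)) ** 10)   # (x + ... + x^6)^10
print(poly.coeff(x, 20))                              # brute-force coefficient
print(binomial(19, 9) - 10 * binomial(13, 9))         # closed form: same number
```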
{ "language": "en", "url": "https://math.stackexchange.com/questions/443641", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10", "answer_count": 4, "answer_id": 0 }
Define the linear transformation T: P2 -> R2 by T(p) = [p(0) p(0)] Find a basis for the kernel of T. Pretty lost on how to answer this question. Define the linear transformation $T:P_2 \rightarrow \Bbb{R}^2$ by $$ T(p) =\left[\begin{array}{c}p(0)\\p(0)\end{array}\right] $$ Find a basis for the kernel of $T$. So a $P_2$ polynomial has the form $a + bx + cx^2$. So $T(p)$ will always have the form $[a\; a]^\intercal$. That would mean the kernel of $T$ is $[a\; a]^\intercal$ where $a = 0$, correct? It was my understanding the kernel of a transformation is all $u$ such that $T(u)= 0$. If this is correct, which I'm sure it probably isn't, how do I find the basis? A basis has to be linearly independent and would have to span the kernel, so would it be a polynomial of the form $a + bx + cx^2$?
Recall the definition of kernel. Let $V,W$ be vector spaces over the same field of scalars and let $T:V\to W$ be a linear map. The kernel of $T$ denoted by $\ker T$ is the set of all $v \in V$ such that $T(v) = 0$. So, let $T:P_2(\Bbb R)\to \Bbb R^2$ be the map you defined, i.e.: $T(p) = (p(0),p(0))$. One arbitrary element of $P_2(\Bbb R)$ is of the form $p(x)=ax^2+bx+c$. So we have $p(0)=c$. In that case, we have the following: $$T(ax^2+bx+c)=(c,c).$$ So, what must $ax^2+bx+c$ be to give us $T(ax^2+bx+c)=(0,0)$? It must of course be such that $c=0$. So all elements of the kernel are of the form $ax^2+bx$. And so we found: $$\ker T=\{p \in P_2(\Bbb R) : p(x)=ax^2+bx, \quad a,b\in \Bbb R\}$$ In particular, the polynomials $x$ and $x^2$ span $\ker T$ and are linearly independent, so $\{x, x^2\}$ is a basis for the kernel.
{ "language": "en", "url": "https://math.stackexchange.com/questions/443763", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 2 }
Suggestions on how to prove the following equality. $a^{m+n}=a^m a^n$ Let $a$ be a nonzero number and $m$ and $n$ be integers. Prove the following equality: $a^{m+n}=a^{m}a^{n}$ I'm not really sure what direction to go in. I'm not sure if I need to treat $n$ positive and negative separately, or whether there is an easier way. Can you use induction on integers? My attempt: 1) Base case $m=0$. Prove that $a^{m+n}=a^m a^n$. This is true. 2) Assume the result holds for $m$. So I want to prove it holds for $m+1$. So I know that $a^{m+n}=a^ma^n$. So does this imply that: $a^{m+n+1}= a^{m+1+n}= a^{m+1}a^n$? Am I going in the right direction? I'm not sure what to do next...
As I mention in the comment, you need some definition of $a^n$ in order to get started. It turns out that $a^n: \mathbb{R}\setminus\{0\} \times \mathbb{Z}\to \mathbb{R}$ is uniquely determined by specifying the relationships * *$a^1 = a$; *$a^n = a\cdot a^{n-1}$. (Can you see why both of these are necessary?) As a first fact, $a^1 = a\cdot a^0$, which shows that for $a\neq 0$, $a^0 = 1$. Now we can prove your theorem, for $n\geq 0$, by induction on $n$. Base case: $n=0$. Then $a^{m+n} = a^{m+0} = a^m = a^m \cdot 1 = a^m a^0 = a^m a^n.$ Inductive case: suppose $a^{m+n} = a^ma^n.$ Then $$a^{m+(n+1)} = a^{(m+n)+1} = a\cdot a^{m+n} = a\cdot a^ma^n = a^m(a\cdot a^n) = a^m(a^{n+1}).$$ As mentioned in the comments, you now need to show the relation also holds for $n<0$. I'll let you try that one on your own for now. EDIT: Notice that I lumped together many "obvious" steps involving commutativity and associativity of addition and multiplication, more than one should for a proof at this level. It is worthwhile going over the proof in very careful steps and noting where and how all of the axioms of arithmetic are used; this then gives you better understanding of when and how the theorem breaks down for e.g. matrix exponentiation.
{ "language": "en", "url": "https://math.stackexchange.com/questions/443840", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 0 }
combination related question Suppose that a cook has $500$ mint, $500$ orange and $500$ strawberry candies, and he wishes to make packets containing $10$ mint, $5$ orange and $5$ strawberry. The question is: what is the maximum number of packets he can make this way? As I see it, it is a combination-related problem, which would mean counting the ways to choose $10$ mints from the $500$ mints, and so on, i.e. $500!/(10!\cdot490!)+500!/(5!\cdot495!)+500!/(5!\cdot495!)$; but no calculator can compute the factorial of $500$. How can I solve it more easily?
After making 50 packets, you've used up $500=50\times 10$ mint, $250=50\times 5$ orange, and $250=50\times 5$ strawberry. There is no mint left, so you can't make any more.
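In code, the bottleneck computation is a one-liner (my own illustration):

```python
# Each packet consumes 10 mint, 5 orange and 5 strawberry candies.
print(min(500 // 10, 500 // 5, 500 // 5))   # -> 50, limited by the mints
```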
{ "language": "en", "url": "https://math.stackexchange.com/questions/443953", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Infinite Coins Tossed Infinitely Often If an infinite number of coins are tossed infinitely often, is it true that there will be infinite subsets of those coins that repeat any finite sequence of heads/tails infinitely often? I.e., infinitely many coins will always produce heads, infinitely many always produce tails, infinitely many produce HTHTHT..., THTHTH..., HHTHHTHHT..., TTHTTHTTH..., etc. And on each toss of all the coins, would some infinite subset of coins begin reproducing a finite sequence infinitely often? I.e., infinitely many coins that had previously produced irregular sequences of H's and T's would begin producing HTHTHT..., THTHTH..., etc.?
The answer to the question in your first sentence is yes, with probability $1$. However, the assertion of your second sentence, beginning with "I.e.", is false, also with probability $1$. The same applies to the question and assertion of your second paragraph. For any coin, the probability that a given finite sequence will occur infinitely often is $1$, while the probability that the finite sequence will occur sequentially infinitely often is $0$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/444035", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Interaction of two values I am a mathematically-challenged guy struggling with (or, to put it better, having no clue at all about) a problem. I have two values (let's call them value ONE and TWO). The first can go from 55 to 190. The second can be anything I want. By making these two values interact I get different results as value one changes (ranging from 6 to 24). What I'd like to do is to have the same result from this interaction, regardless of the numeric value of value one. Unfortunately I cannot act on or change the results directly. However, I can change the numeric value of value two. I reckon there should be a formula that, substituted for value two, would give the desired result (which - in this case - should always be 6) whatever the first value is. Please don't bash me, I know I'm terrible :x

VALUE ONE | VALUE TWO | OBTAINED RESULT | DESIRED RESULT
55 | 0 | 16 | 6
85 | 0 | 19 | 6
115 | 0 | 24 | 6
145 | 0 | 28 | 6
175 | 0 | 32 | 6
190 | 0 | 34 | 6

VALUE ONE | VALUE TWO | OBTAINED RESULT | DESIRED RESULT
55 | 10 | 6 | 6
85 | 10 | 9 | 6
115 | 10 | 14 | 6
145 | 10 | 18 | 6
175 | 10 | 22 | 6
190 | 10 | 24 | 6

VALUE ONE | VALUE TWO | OBTAINED RESULT | DESIRED RESULT
55 | 20 | -6 | 6
85 | 20 | -1 | 6
115 | 20 | 4 | 6
145 | 20 | 8 | 6
175 | 20 | 12 | 6
190 | 20 | 14 | 6
I have to warn you that this is not a generally valid procedure. It just worked because the data seemed to exhibit a linear relationship. And in that case, it is not too difficult to find out what that relationship is. First, look at VALUE TWO and the RESULT. You'll notice that in the second group of data, when VALUE TWO = 10, the results are shifted down by 10. When VALUE TWO = 20, the results are shifted down by 20 w.r.t. the original group. Focussing on the original group, you'll notice that as you rise by 30 points in VALUE ONE, the result rises by 4 which means that $$\text{RES} = \frac{4}{30}(\text{VAL1} - 55) + 16 - \text{VAL2}$$ for the complete formula. Now, applying your requirement that the end result should always be 6, you can work out that $$\text{VAL2} = \frac{4}{30}(\text{VAL1} - 55) + 10$$ by simple algebra.
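A quick script (my own sketch) shows how closely the fitted formula tracks the tables; since the data is only roughly linear, expect small deviations rather than exact matches:

```python
# (value_one, value_two, observed result) taken from the question's tables.
rows = [(55, 0, 16), (85, 0, 19), (115, 0, 24), (145, 0, 28), (175, 0, 32),
        (190, 0, 34), (55, 10, 6), (85, 10, 9), (55, 20, -6), (115, 20, 4)]

def predicted_result(value_one, value_two):
    return 4 / 30 * (value_one - 55) + 16 - value_two

def value_two_for_result_6(value_one):
    return 4 / 30 * (value_one - 55) + 10

for v1, v2, observed in rows:
    print(v1, v2, observed, round(predicted_result(v1, v2), 1))
print([round(value_two_for_result_6(v), 1) for v in (55, 85, 115, 145, 175, 190)])
```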
{ "language": "en", "url": "https://math.stackexchange.com/questions/444109", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
If $A$ is a symmetric matrix in $\mathbb{R}$, why is $PAP^t$ diagonal? In linear algebra, why is the following correct: given a symmetric matrix $A$ over the field of real numbers, why is it true that there exists a unitary matrix $P$ such that $PAP^t$ is a diagonal matrix? I know from the Spectral Theorem that there exists a unitary matrix $P$ such that $PAP^{-1}$ is diagonal. However, how can I conclude that $PAP^t$ is diagonal as well? Thank you
This is a straightforward consequence of the spectral theorem. Let $u_1=p_1+iq_1,\ \ldots,\ u_n=p_n+iq_n$ be an orthonormal eigenbasis of $A$, where $u_1,\ldots,u_{k_1}$ correspond to the eigenvalue $\lambda_1$. Since $\lambda_1$ is real, we have $Ap_\ell=\lambda_1p_\ell$ and $Aq_\ell=\lambda_1q_\ell$ for $\ell=1,2,\ldots,k_1$. Hence every vector in $\mathcal{B}=\{p_1,\ldots,p_{k_1},q_1,\ldots,q_{k_1}\}$ is either zero or an eigenvector of $A$ corresponding to the eigenvalue $\lambda_1$. Hence the span of $\mathcal{B}$ is precisely the eigenspace of $A$ corresponding to the eigenvalue $\lambda_1$. However, as all vectors in the set $\mathcal{B}$ are real, we can choose from its real span $k_1$ orthonormal eigenvectors. The same holds for the other eigenvalues. Putting these orthonormal eigenvectors together, we obtain a real orthogonal matrix $S$ such that $AS=SD$, i.e. $S^TAS=D$.
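Numerically, this is exactly what numpy.linalg.eigh delivers for a real symmetric matrix (my own illustration of the theorem, not part of the proof):

```python
import numpy as np

rng = np.random.default_rng(0)
B = rng.standard_normal((4, 4))
A = B + B.T                                 # a real symmetric matrix

eigenvalues, S = np.linalg.eigh(A)          # columns of S: orthonormal eigenvectors
print(np.allclose(S.T @ S, np.eye(4)))      # S is real orthogonal -> True
print(np.allclose(S.T @ A @ S, np.diag(eigenvalues)))  # S^T A S is diagonal -> True
```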
{ "language": "en", "url": "https://math.stackexchange.com/questions/444179", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
Proving statement $(A \cup C)\setminus B=(A\setminus B)\cup C \iff B\cap C= \varnothing$ I want to prove the following statment: $$(A \cup C)\setminus B=(A\setminus B)\cup C \iff B\cap C= \varnothing$$ Do I need to prove each side? Or is one side enough? I mean, if I get from the left side to the right is it enough? How do the following statements help me? * *$(A \cup C) \setminus B = (A \cup C)\cap B'$ *$(A \setminus B) \cup C = (A \cap B')\cup C$ I would like to get some hint to prove this statement. Thanks!
Let me try to explain how I'd think about this problem, since I'd work directly from definitions and thus use fewer formulas than other people seem to like. For the left-to-right implication, notice that $(A\setminus B)\cup C$ contains (by definition) all the elements of $C$. But it equals $(A\cup C)\setminus B$ and therefore contains none of the elements of $B$. So no elements of $C$ can be in $B$; that is, $B\cap C=\varnothing$. For the right-to-left implication, look at the expressions $(A\cup C)\setminus B$ and $(A\setminus B)\cup C$ on the left side. Each involves starting with $A$, adding in the members of $C$ and removing the members of $B$; they differ only in the order in which "adding" and "removing" are done. So for points that are in only one of $B$ and $C$ or in neither of them, the two expressions work exactly the same way. A difference arises only for points in $B\cap C$ as these would be first removed and then added in $(A\setminus B)\cup C$, so they'd be present in the final result, but first added and then removed in $(A\cup C)\setminus B$, so they'd be absent from the final result. Conclusion: The two expressions on the left differ only in regard to elements of $B\cap C$. In particular, if $B\cap C=\varnothing$ then the two expressions agree and the equation on the left side therefore holds.
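Since the statement is finitary, one can also confirm it exhaustively on a small universe (my own sketch, independent of the proof above):

```python
from itertools import product

universe = (0, 1, 2)
subsets = [frozenset(x for x, keep in zip(universe, bits) if keep)
           for bits in product((0, 1), repeat=len(universe))]

for A, B, C in product(subsets, repeat=3):    # all 8^3 = 512 triples
    equality = (A | C) - B == (A - B) | C
    disjoint = not (B & C)
    assert equality == disjoint
print("equivalence holds for every triple")
```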
{ "language": "en", "url": "https://math.stackexchange.com/questions/444248", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 5, "answer_id": 3 }
Proving the length of a circle's arc is proportional to the size of the angle How can I prove that: The length of the arc is proportional to the size of the angle. Every book uses this fact in explaining radians and the fundamental arc length equation $s = r\theta$. However, no book proves this fact. Is this fact some axiom, some natural law like $\pi$ and the triangle side proportions? Can I prove the above? Or is it something that you should just accept?
Let $\theta$ be the angle subtended by the arc. Your equation can be written as $s/r=\theta$. We have two radii of the circle subtending that angle $\theta$ and cutting off the arc $s$. In the space between them we can fit more such radii; how many fit is determined by $\theta$, and they all end on the arc $s$. Now divide the arc $s$ by the radius to get $\theta$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/444326", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "18", "answer_count": 7, "answer_id": 5 }
Show $\cos(x+y)\cos(x-y) - \sin(x+y)\sin(x-y) = \cos^2x - \sin^2x$ Show $\cos(x+y)\cos(x-y) - \sin(x+y)\sin(x-y) = \cos^2x - \sin^2x$ I have got as far as showing that: $\cos(x+y)\cos(x-y) = \cos^2x\cos^2y -\sin^2x\sin^2y$ and $\sin(x+y)\sin(x-y) = \sin^2x\cos^2y - \cos^2x\sin^2y$ I get stuck at showing: $\cos^2x\cos^2y -\sin^2x\sin^2y - \sin^2x\cos^2y + \cos^2x\sin^2y = \cos^2x - \sin^2x$ I know that $\sin^2x + \cos^2x = 1$ and I have tried rearranging this identity in various ways, but this has not helped me so far.
Hint: $$\cos(a+b)=\cos a \cos b-\sin a \sin b$$$$\cos(2a)=\cos^2 a -\sin ^2a $$ $$\begin{align}\cos(x+y)\cos(x-y) - \sin(x+y)\sin(x-y) &= \cos((x+y)+(x-y))\\&=\cos2x\\&=\cos^2x - \sin^2x\end{align}$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/444407", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
How many positive integers $n$ satisfy $n = P(n) + S(n)$ Let $P(n)$ denote the product of digits of $n$ and let $S(n)$ denote the sum of digits of $n$. Then how many positive integers $n$ satisfy $$ n = P(n) + S(n) $$ I think I solved it, but I need your input. I first assumed that $n$ is a two-digit number. Then $n=10a+b$ and according to the requirement $$ 10a + b = ab + a + b\\ \Rightarrow 9a = ab \\ \Rightarrow b = 9 $$ (since $a$ is not zero). Now we have $\{19,29,39,49,59,69,79,89,99\}$, a set of $9$ numbers that satisfy the requirement. However, if we assume a three-digit number $n=100a+10b+c$ then $$ 100a+10b+c = abc+a+b+c\\ \Rightarrow 99a+9b=abc\\ \Rightarrow 9(11a+b) = abc\\ $$ Either two of the digits are $1$ and $9$ or $3$ and $3$, and in any such case the left-hand side is much larger than the right-hand side. If there is no three-digit solution, there cannot be solutions with more digits either. Is this good reasoning?
Hint: consider the inequality $10^x\leq 9^x+9x$
{ "language": "en", "url": "https://math.stackexchange.com/questions/444463", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
About the addition formula $f(x+y) = f(x)g(y)+f(y)g(x)$ Consider the functional equation $$f(x+y) = f(x)g(y)+f(y)g(x)$$ valid for all complex $x,y$. The only solutions I know for this equation are $f(x)=0$, $f(x)=Cx$, $f(x)=C\sin(x)$ and $f(x)=C\sinh(x)$. Question $1)$ Are there any other solutions ? If we set $x=y$ we can conclude that if there exists a $g$ for a given $f$ then $g$ must be $g(x)=\frac{f(2x)}{2f(x)}$. By using this result, I tried setting $y=2x$ yielding $$f(x+2x)=f(x)g(2x)+g(x)f(2x)=\dfrac{f(x)f(4x)}{2f(2x)}+\dfrac{f^2(2x)}{2f(x)}$$ Question $2)$ Are $f(x)=0$,$f(x)=Cx$,$f(x)=C\sin(x)$ and $f(x)=C\sinh(x)$ the only solutions to $f(3x)=\dfrac{f(x)f(4x)}{2f(2x)}+\dfrac{f^2(2x)}{2f(x)}$? If not, what are the other solutions?
As leshik pointed out, this equation has plenty of discontinuous solutions (e.g. for $g=1$ it becomes Cauchy's functional equation), so let's just consider continuous solutions. $f=0$ is the trivial solution; from now on we will assume that $f$ is not identically zero. We consider two cases: * *If $f(x)$ and $g(x)$ are linearly dependent, then there is some $\lambda \neq 0$ such that $g(x)=\lambda f(x)$. Since we can replace $f(x)$ with $c f(x)$ and the original equation is still satisfied, we may assume without loss of generality that $g(x)=\frac{1}{2}f(x)$, so the equation becomes $f(x+y)=f(x)f(y)$. Now since $f(x+0)=f(x)f(0)$ and $f$ is not identically zero, we must have $f(0) \neq 0$. Then since $f(0)=f(0+0)=f(0)^2$, $f(0)=1$. By continuity $F(t) = \ln f(t)$ is defined in a neighbourhood of $0$ and satisfies $F(x+y)=F(x)+F(y)$. This is Cauchy's functional equation, so since $F$ is continuous $F(x)=rx$, and therefore $f(x)=e^{rx}$ and $g(x)=\frac{1}{2}e^{rx}$. Therefore the general solution in this case is: * *$f(x)=ke^{rx}$, $g(x)=\frac{1}{2}e^{rx}$. *Now suppose that $f(x)$ and $g(x)$ are linearly independent. Since $f(0)=f(0+0)=2f(0)g(0)$, we either have $f(0)=0$, or else $f(0) \neq 0$ and $g(0)=\frac{1}{2}$. In the latter case we would have $f(x)=f(x+0)=\frac{1}{2}f(x)+f(0)g(x)$, so $f(x)$ and $g(x)$ would be linearly dependent. Therefore $f(0)=0$. Since $f$ is not identically zero, there is no $x$ for which both $f(x)$ and $g(x)$ are zero, since otherwise $f(x+y)=f(x)g(y)+f(y)g(x)$ would be identically zero. Then by continuity there must be some $p$ such that $f(p) \neq 0$ and $g(p) \neq 0$. It follows that $g(x)=\frac{1}{f(p)}f(x+p)-\frac{g(p)}{f(p)}f(x)$, so $g$ is a linear combination of $f$ and its translate $f_p$ (where $f_p(x)=f(x+p)$). Since $f$ and $g$ are linearly independent, it follows that $f$ and $f_p$ are linearly independent. Then since $f(x+y)=f(x)g(y)+f(y)g(x)$ for all $x,y$, every translate $f_y$ of $f$ is a unique linear combination of $f$ and $g$, and therefore every translate $f_y$ of $f$ is a unique linear combination of $f$ and $f_p$. Since all translates of $f$ are linear combinations of the linearly independent functions $f$ and $f_p$, this implies that $f$ is differentiable as described in the answer to this question. Then by the answers to this question, $f$ is a solution to the ODE $\frac{d^{2}f}{dx^{2}}+B\frac{df}{dx}+Cf=0$ for some $B,C \in \mathbb{R}$, other than the solution $ce^{rx}$. With the boundary conditions $f(0)=0$ it follows that there are only three families of solutions: * *$f(x) = kxe^{rx}$, $g(x)=e^{rx}$ *$f(x) = ke^{rx}\sin(sx)$, $g(x)=e^{rx}\cos(sx)$ *$f(x)= ke^{rx}\sinh(sx)$, $g(x)=e^{rx}\cosh(sx)$ Of course, these are valid solutions because of the following addition formulae $$(x+y)=x\cdot 1 + y \cdot 1$$ $$\sin(x+y)=\sin(x)\cos(y)+\cos(x)\sin(y)$$ $$\sinh(x+y)=\sinh(x)\cosh(y)+\sinh(y)\cosh(x)$$ We have found all the continuous solutions of the given functional equation. QED
{ "language": "en", "url": "https://math.stackexchange.com/questions/444517", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "20", "answer_count": 3, "answer_id": 1 }
Construct a compact set of real numbers whose limit points form a countable set. I searched and found out that the below is a compact set of real numbers whose limit points form a countable set. I know the set in real number is compact if and only if it is bounded and closed. It's obvious it is bounded since $\,d(1/4, q) < 1\,$ for all $\,q \in E.$ However, I'm not sure how this is closed. Is there any simpler set that satisfies the above condition? Thank you! $$E = \left\{\frac 1{2^m}\left(1 - \frac 1n\right) \mid m,n \in \mathbb N\right\}.$$
The limit points are $\{\frac{1}{2^m}\mid m\in \mathbb{N}\}$. These are contained in the set (to get $\frac{1}{2^k}$ (for $k>1$), take $m=k-1$, $n=2$). We can tell there are no other limit points, since the closest points to $\frac{1}{2^k}(1-\frac{1}{l})$ (for $l>2$) are $\frac{1}{2^k}(1-\frac{1}{l+1})$ and $\frac{1}{2^k}(1-\frac{1}{l-1})$, so we can isolate them in a neighborhood of radius $\frac{1}{2^k}(1 - \frac{1}{2(l+1)})$. Edit: As Andre has pointed out, $1/2$ is not in the set, so the problem does not work as stated.
{ "language": "en", "url": "https://math.stackexchange.com/questions/444591", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 5, "answer_id": 2 }
Impossible identity? $ \tan{\frac{x}{2}}$ $$\text{Let}\;\;t = \tan\left(\frac{x}{2}\right). \;\;\text{Show that}\;\dfrac{dx}{dt} = \dfrac{2}{1 + t^2}$$ I am saying that this is false because that identity is equal to $2\sec^2 x$, and that can't be equal. Also, if I take the derivative of an integral I get the function back, so if I take the integral of a derivative I should also get the function back; the integral of that is $x + \sin x$, which evaluated at $0$ is not equal to $\tan(x/2)$.
You're wrong, the identity is correct. Note that $t = \tan(x/2)$ implies $x = 2 \arctan(t) + 2 n \pi$.
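Spelling that out (my own elaboration of the answer): differentiating the inverse relation gives

$$x = 2\arctan t + 2n\pi \quad\Longrightarrow\quad \frac{dx}{dt}=\frac{2}{1+t^2};$$

equivalently, $\dfrac{dt}{dx}=\dfrac12\sec^2\dfrac{x}{2}=\dfrac{1+t^2}{2}$ (using $\sec^2\frac{x}{2}=1+\tan^2\frac{x}{2}$), and $\dfrac{dx}{dt}$ is its reciprocal.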
{ "language": "en", "url": "https://math.stackexchange.com/questions/444642", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 5, "answer_id": 3 }
Saturation of the Cauchy-Schwarz Inequality Consider a vector space ${\cal S}$ with inner product $(\cdot, \cdot)$. The Cauchy-Schwarz Inequality reads $$ (y_1, y_1) (y_2, y_2) \geq \left| (y_1, y_2) \right|^2~~\forall y_1, y_2 \in {\cal S} $$ This inequality is saturated when $y_1 = \lambda y_2$. In particular, this implies $$ \boxed{ (y_1, y_1) (y_2, y_2) \geq \left| \text{Im} (y_1, y_2) \right|^2 } $$ My question is the following. Given a fixed $y_1 \in {\cal S}$, is it always possible to find $y_2 \in {\cal S}$ such that the boxed inequality is saturated? If not in general, under what conditions is this possible? PS - I am a physicist, and not that well-versed with math jargon.
You have $|y_1|^2|y_2|^2\geq|(y_1,y_2)|^2\geq|\text{Im}(y_1,y_2)|^2$. If the right-hand side is equal to the left-most side then, in particular, you have 'saturation' of Cauchy's inequality. Then $y_2=\lambda y_1$, because the saturation of Cauchy's inequality happens if and only if the vectors are proportional. You then only need to have $|\lambda|^2|y_1|^4=|(y_1,\lambda y_1)|^2=|\text{Im}(\overline{\lambda}(y_1,y_1))|^2=|\text{Im}(\overline{\lambda})|y_1|^2|^2=|\text{Im}(\lambda)|^2|y_1|^4$. This is ok for $\lambda$ purely imaginary.
{ "language": "en", "url": "https://math.stackexchange.com/questions/444710", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
A problem with the proof of a proposition I have a problem with the proof of Proposition 5.1 of the article of Itô (Noboru Itô. On finite groups with given conjugate types. I. Nagoya Math. J., 6:17–28, 1953). I don't know what "e" and "e-1" are in the proof. I'd be really grateful if someone could help me. You can find the PDF file at this link.
It would appear that $e$ is a group. Whenever the author writes $e-1$ they actually have "some group":$e-1$, so they just mean the index of $e$ in the group, minus one. As to what $e$ is I am not sure, but I would guess it is related to $E$. I am not even sure whether $E$ is a group or an element, but most of the time it appears to be an element, and I would guess that $e$ is the group generated by $E$. Edit: As Derek Holt has commented below, $e$ is probably the trivial group, and hence $[G:e]$ is just the order of $G$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/444777", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
How to prove that càdlàg (RCLL) functions on $[0,1]$ are bounded? While studying the space $\mathbb{D}[0,1]$ of right continuous functions with left hand limits (i.e. càdlàg functions) on $[0,1]$, I came across the following theorem: Theorem. If $f$ is càdlàg on $[0,1]$, it is bounded. My proof attempt: I am aware that if a function has both left and right hand limits on $[0,1]$, then the set of discontinuities is at most countable. Hence I tackled this in two parts, one where the discontinuities are finite, the other infinite. I got the finite discontinuity one. But I am stuck at the infinite discontinuity part. My guess is that there is something special about the countable discontinuities (e.g. they cannot be dense in $[0,1]$) and somewhere, I'll have to use the sequential compactness property to get a contradiction, but I am unable to collect my ideas. I request any starting hints on this. A sketch of the proof would also be appreciated.
Billingsley gives an excellent one-line proof. I am repeating it here. First note that this Lemma is true: for every $\epsilon >0$, there exists a partition $ 0=t_0<t_1<\ldots<t_k=1$ such that for the set $S_i = [t_i,t_{i+1})$, we have $\sup_{s,t \in S_i} |f(s)-f(t)| <\epsilon$ for all $i$. This Lemma is easily proved. Once the Lemma is seen to be true, choose an $\epsilon >0$. Then the bound on $f$ is simply $(\sum_{l=1}^{k} J_{l} ) + \epsilon \times k$ (the first term is the total size of the jumps $J_l$ at the $k$ partition points obtained from the Lemma, while the second term bounds the increment of $f$ within these intervals).
{ "language": "en", "url": "https://math.stackexchange.com/questions/444853", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10", "answer_count": 3, "answer_id": 1 }
Solving the trigonometric equation $A\cos x + B\sin x = C$ I have a simple equation which I cannot solve for $x$: $$A\cos x + B\sin x = C$$ Could anyone show me how to solve this? Is this a quadratic equation?
$A\cos x+B\sin x=C$ so if $(A,B)\neq (0,0)$ then $$\frac{A}{\sqrt{A^2+B^2}}\cos x+\frac{B}{\sqrt{A^2+B^2}}\sin x=\frac{C}{\sqrt{A^2+B^2}}$$ in which $$\left|\frac{A}{\sqrt{A^2+B^2}}\right|\le1,~~\left|\frac{B}{\sqrt{A^2+B^2}}\right|\le1,$$ while a solution can exist only when $\left|\frac{C}{\sqrt{A^2+B^2}}\right|\le1$. This means you can suppose there is a $\xi$ such that $\cos(\xi)=\frac{A}{\sqrt{A^2+B^2}},~\sin(\xi)=\frac{B}{\sqrt{A^2+B^2}}$ and so...
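If it helps, here is how that suggestion concludes (my own continuation): with such a $\xi$ the left side collapses by the angle-subtraction formula,

$$\cos\xi\cos x+\sin\xi\sin x=\cos(x-\xi)=\frac{C}{\sqrt{A^2+B^2}},$$

so $x=\xi\pm\arccos\dfrac{C}{\sqrt{A^2+B^2}}+2k\pi$ for $k\in\mathbb{Z}$, which is solvable precisely when $\left|\dfrac{C}{\sqrt{A^2+B^2}}\right|\le1$, i.e. $C^2\le A^2+B^2$.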
{ "language": "en", "url": "https://math.stackexchange.com/questions/444887", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Proving existence of $T$-invariant subspace Let $T:\mathbb{R}^{3}\rightarrow \mathbb{R}^{3}$ be a linear transformation. I'm trying to prove that there exists a T-invariant subspace $W\subset \mathbb{R}^3$ so that $\dim W=2$. How can I prove it? Any advice?
If you've not learned about minimal polynomials or Jordan normal form, here's a simple proof involving the Cayley-Hamilton Theorem. Let $T : \mathbb{R}^{n} \rightarrow \mathbb{R}^{n}$ be a linear map and $A = [T]_{\text{std}}$ be the matrix representation of $T$ in the standard basis. Take the set $\{v,Tv\}$. If this is linearly dependent for all $v \in \mathbb{R}^{n}$, then we can choose any $\{v_{1},v_{2}\} \subseteq \mathbb{R}^{n}$ such that $v_{1}$ and $v_{2}$ are linearly independent. Then, $\operatorname{span}\{v_{1},v_{2}\}$ is the required $2$-dimensional subspace. If $\{v,Tv\}$ is linearly independent for some $v \in \mathbb{R}^{n}$, choose that $v$. According to the Cayley-Hamilton Theorem, the characteristic polynomial $p(\lambda)$ of $A$ satisfies $p(A)=0$, so $p(A)v=0$ for all $v \in \mathbb{R}^{n}$. $p(\lambda)$ can be factored into irreducible polynomials of degree $1$ or $2$ by the Fundamental Theorem of Algebra. Thus, there is some linear or quadratic polynomial $q(\lambda)$ such that $q(\lambda)$ is a factor of $p(\lambda)$ and $q(A)v = 0$. If $q$ were linear, then $Tv$ would be a scalar multiple of $v$, contradicting the linear independence of $\{v,Tv\}$; so $q$ is quadratic, and there are some $s,r \in \mathbb{R}$ such that \begin{equation} \begin{split} & A^{2}v + sAv + rv = 0\\ &\quad\implies A^{2}v = -sAv-rv \\ &\quad\implies T^{2}v = -sTv-rv \end{split} \end{equation} Now, for any $v' \in \operatorname{span}\{v,Tv\}$, $v' = av + bTv$ for some $a,b \in \mathbb{R}$. So, \begin{equation} \begin{split} T(v') & = T(av + bTv) \\ & = aTv + bT^{2}v \\ & = aTv + b(-sTv-rv) \\ & = (a-bs)Tv - rbv \in \operatorname{span}\{v,Tv\} \\ \end{split} \end{equation} Thus $\operatorname{span}\{v, Tv\}$ is the required 2-dimensional invariant subspace.
{ "language": "en", "url": "https://math.stackexchange.com/questions/444936", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 4, "answer_id": 2 }
Evaluating $\int\cos\theta~e^{-ia\cos\theta}~\mathrm{d}\theta$ Is anybody able to solve this indefinite integral: $$ \int\cos\theta~e^{\large -ia\cos\theta}~\mathrm{d}\theta $$ The letter $i$ denotes the imaginary unit; $a$ is a constant; Mathematica doesn't give any result. Thanks for any help you would like to provide me.
$\int \cos\theta\,(-ia\cos\theta)\, d\theta = -ia \int \cos^2\theta\, d\theta$ Hint: $\cos^2\theta=\frac {1+\cos(2\theta)} 2$ I do not think that you have to do anything special because of the imaginary part. That just means your result is imaginary.
{ "language": "en", "url": "https://math.stackexchange.com/questions/445011", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Evaluate $\int_{1}^{\infty}e^{-x}\ln^{2}\left(x\right)dx$ Evaluate: $$\int_{1}^{\infty}e^{-x}\ln^{2}\left(x\right)\mathrm{d}x$$ I've tried to solve this with some elegant substitutions like $t=e^x$ or $t=\ln\left(x\right)$. I've also tried to integrate by parts without any success. Any help would be good.
Let's define (for $\Re(a)>-1$) the function $$ f(a):=\int_{1}^{\infty}x^a\,e^{-x}\,\mathrm{d}x$$ then this is the incomplete gamma function $\,f(a)=\Gamma(a+1,1)$. But since $\,\displaystyle x^a=e^{a\ln(x)}\,$ we have: $$f''(a)=\int_{1}^{\infty}x^a\;\ln^{2}(x)\;e^{-x}\,\mathrm{d}x$$ So your integral is the second derivative of $\,\Gamma(a+1,1)\,$ at $\,a=0$ (so that $x^a$ disappears), or equivalently the second derivative with respect to $a$ of $\,\Gamma(a,1)\,$ at $\,a=1$: $$f''(0)=\left.\frac{d^2}{da^2}\,\Gamma(a,1)\right|_{a=1}$$ (this answer was also given by Mr.G in the comments) Let's add that the derivatives are not much simpler, so that it is probably better to keep it this way... Of course, as noted by Eric Naslund (+1), the answer is simpler when the lower bound is $0$ because in this case we get simply $\;\left.\frac{d^2}{da^2}\,\Gamma(a+1,0)\right|_{a=0}=\Gamma''(1)$.
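A quick numerical confirmation with mpmath (my own sketch): the integral should agree with the second derivative of $\Gamma(a,1)$ with respect to $a$ at $a=1$.

```python
from mpmath import mp, quad, diff, gammainc, exp, log, inf

mp.dps = 30
integral = quad(lambda x: exp(-x) * log(x) ** 2, [1, inf])
second_derivative = diff(lambda a: gammainc(a, 1), 1, 2)  # d^2/da^2 Gamma(a,1) at a=1
print(integral)
print(second_derivative)
```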
{ "language": "en", "url": "https://math.stackexchange.com/questions/445082", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11", "answer_count": 2, "answer_id": 1 }
$z_0$ non-removable singularity of $f\Rightarrow z_0$ essential singularity of $\exp(f)$ Let $z_0$ be a non-removable isolated singularity of $f$. Show that $z_0$ is then an essential singularity of $\exp(f)$. Hello, unfortunately I do not know how to proof that. To my opinion one has to consider two cases: * *$z_0$ is a pole of order $k$ of $f$. *$z_0$ is an essential singularity of $f$.
We can also look at it from the other side. If $z_0$ is a removable singularity of $e^f$, then $\lvert e^{f(z)}\rvert < K$ in some punctured neighbourhood of $z_0$. Since $\lvert e^w\rvert = e^{\operatorname{Re} w}$, that means $\operatorname{Re} f(z) < K'\; (= \log K)$ in a punctured neighbourhood $\dot{D}_\varepsilon(z_0)$ of $z_0$, and that implies that $z_0$ is a removable singularity of $f$. (Were it a pole, $f(\dot{D}_\varepsilon(z_0))$ would contain the complement of some disk $D_r(0)$; were it an essential singularity, each $f(\dot{D}_\varepsilon(z_0))$ would be dense in $\mathbb{C}$ by Casorati-Weierstraß; in both cases $\operatorname{Re} f(z)$ is unbounded on $\dot{D}_\varepsilon(z_0)$.) If $z_0$ were a pole of $e^f$, it would be a removable singularity of $e^{-f}$, hence $z_0$ would be a removable singularity of $-f$ by the above, hence $z_0$ would be a removable singularity of $f$, and therefore a removable singularity of $e^f$ - contradiction.
{ "language": "en", "url": "https://math.stackexchange.com/questions/445160", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12", "answer_count": 3, "answer_id": 0 }
$20$ hats problem I've seen this tricky problem, where $20$ prisoners are told that the next day they will be lined up, and a red or black hat will be placed on each person's head. The prisoners will have to guess the hat color they are wearing; if they get it right, they go free. The person in the back can see every hat in front of him, and guesses first, followed by the person in front of him, etc. The prisoners have the night to think of the optimal method for escape. This method ends up allowing $19$ prisoners to always escape. The person in the back counts the number of red hats, and if it's even, says red, if it's odd, says black. This allows the people in front to notice if it's changed, and to determine their hat color, allowing the person in front to keep track as well. What I'm wondering is what the equivalent solution is for $3$ or more hat colors, and how many people will go free. If possible, a general solution would be nice.
To solve the problem with $n$ prisoners and $k$ colours, do as follows: Wlog. the colours are the elements of $\mathbb Z/k\mathbb Z$. If $c_i$ is the colour of the hat of the $i$th prisoner, then the $i$th prisoner can easily compute $s_i:=\sum_{j<i}c_j$. Let the $n$th prisoner announce $s_n$. Then the $(n-1)$st prisoner can compute $c_{n-1}=s_n-s_{n-1}$ and correctly announce it. All subsequent prisoners $n-2, \ldots , 1$ can do the bookkeeping and announce their own colour accordingly.
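A short simulation (my own sketch) confirms the strategy: every prisoner except the first announcer is always correct, and the announcer is right with probability $1/k$.

```python
import random

def simulate(n, k, trials=2000):
    saved = 0
    for _ in range(trials):
        hats = [random.randrange(k) for _ in range(n)]  # hats[0] is at the front
        s = sum(hats[:n - 1]) % k                       # back prisoner announces this
        saved += (s == hats[n - 1])                     # he is only right by luck
        heard = 0                                       # later correct guesses, mod k
        for i in range(n - 2, -1, -1):
            guess = (s - sum(hats[:i]) - heard) % k     # prisoner i sees hats[:i]
            assert guess == hats[i]                     # guaranteed correct
            saved += 1
            heard = (heard + guess) % k
    return saved / trials

print(simulate(20, 2))   # about 19.5 saved on average: 19 certain + a fair coin
print(simulate(20, 5))   # about 19.2: 19 certain + a 1/5 chance for the announcer
```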
{ "language": "en", "url": "https://math.stackexchange.com/questions/445321", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 2, "answer_id": 1 }
What is the definition of "formal identity"? In Ahlfors' Complex Analysis he remarks that harmonic $u(x,y)$ can be expressed as $$ u(x,y) = \frac{1}{2}[f(x + i y) + \overline{f}(x - i y)] $$ when $x$ and $y$ are real. He then writes "It is reasonable to expect that this is a formal identity, and then it holds even when x and y are complex". What does he mean in this context by "formal identity"? Edit: This entire page (p.27 of my edition) comes with what is a caveat, as far as I can tell: We present this procedure with an explicit warning that it is purely formal and does not possess any power of proof. In the same page he uses the phrases "formal procedure", "formal reasoning", "formal arguments", and "formal identity". Is he more or less saying that he's embarking on something that could be considered suspect, at least at this point in the book? Thank you very much!
The word "formal" as it's being used here doesn't have an entirely rigorous meaning. The archetypal example of a formal argument is manipulating a power series without worrying about convergence, which gives rise to the notion of formal power series. In general, a formal argument is one based on the "form" of the mathematical objects involved without thinking about their "substance" (e.g. a power series is a form, a function it's a Taylor series of is a substance). In this case I agree with RGB that a possible interpretation is that the identity might hold on the level of power series, in which case it should hold for even complex $x, y$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/445394", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 1, "answer_id": 0 }
Closed forms for $\lim_{x\rightarrow \infty} \ln(x) \prod_{x>(p-a)>0}(1-(p-a)^{-1})$ I'm looking for closed forms for $\lim_{x \rightarrow \infty} \ln(x) \prod_{x>(p-a)>0}(1-(p-a)^{-1})$ where $x$ is a positive real, $a$ is a given real, and $p$ ranges over the primes for which the inequality is valid. Let's call this limit $L(a)$. Mertens gave a closed form for $L(0)$. Are other closed forms possible? Also, the inverse function $L^{-1}(b)=a$ interests me. What is the value of $L^{-1}(0)$?
I always take the log when I run into a product, so, pressing on regardless, let's look at $\begin{align} f(x, a) &=\sum_{x>(p-a)>0}\ln(1-(p-a)^{-1})\\ &=\sum_{a < p < x+a}\ln(1-(p-a)^{-1})\\ &=\sum_{a < p < x+a}-(\frac1{p-a}+\frac1{2(p-a)^2}+...)\\ &=-\sum_{a < p < x+a}\frac1{p-a}+C\\ &=C-\sum_{a < p < x+a}\frac1{p(1-a/p)}\\ &=C-\sum_{a < p < x+a}\frac1{p}(1+a/p+(a/p)^2+...)\\ &=C-\sum_{a < p < x+a}\frac1{p}-\sum_{a < p < x+a}\frac{a}{p^2} -\sum_{a < p < x+a}\frac{a^2}{p^3} -...\\ &=C_1-\sum_{a < p < x+a}\frac1{p} \end{align} $ where I have blithely absorbed the various convergent sums ($ \sum_{a < p < x+a}1/(p-a)^k$ and $\sum_{a < p < x+a}1/p^k$ for $k \ge 2$) into constants $C$ and $C_1$ which will depend on $a$. Using the known estimate $\sum_{p < x} \frac1{p} \approx \ln \ln x$, $f(x, a) \approx C_1-\ln \ln(x+a)+\ln\ln a \approx C_2-\ln \ln(x+a) $. Since $\ln(x+a) = \ln x + \ln(1+a/x) \approx \ln x+a/x $, $\begin{align} \ln\ln(x+a) &\approx \ln(\ln x+a/x)\\ &= \ln(\ln x\,(1+a/(x \ln x)))\\ &= \ln\ln x+ \ln(1+a/(x \ln x))\\ &\approx \ln\ln x+ a/(x \ln x)\\ \end{align} $ so $f(x, a) \approx C_2-\ln \ln(x+a) \approx C_2-\ln\ln x- a/(x \ln x) $. So, $\ln x \, e^{f(x, a)} \approx e^{\ln \ln x + C_2-\ln\ln x - a/(x \ln x)} \to e^{C_2} = C_3 $. All this shows is that the limit approaches a constant, without giving much help in evaluating the constant.
{ "language": "en", "url": "https://math.stackexchange.com/questions/445452", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Deriving the series representation of the digamma function from the functional equation By repeatedly using the functional equation $ \displaystyle\psi(z+1) = \frac{1}{z} + \psi(z)$, I get that $$ \psi(z) = \psi(z+n) - \frac{1}{z+n-1} - \ldots - \frac{1}{z+1} - \frac{1}{z}$$ or $$\psi(z+1) = \psi(z+n+1) - \frac{1}{z+n} - \ldots - \frac{1}{z+2} - \frac{1}{z+1} . $$ Is it possible to derive the series representation $ \displaystyle \psi(z+1) = - \gamma - \sum_{n=1}^{\infty} \Big( \frac{1}{z+n} - \frac{1}{n} \Big)$ from that?
The functional equation tells us: $$\frac{1}{z+n} -\frac{1}{n}=\left( \Psi \left( z+n+1 \right) -\Psi \left( z+n \right)\right) -\left( \Psi \left( n+1 \right) -\Psi \left( n \right)\right)$$ and so we can form the partial sum: $$-\sum _{n=1}^{N}\left( \frac{1}{z+n} -\frac{1}{n}\right)=-\sum _{n=1}^{N}\left( \Psi \left( z+n+1 \right) -\Psi \left( z+n \right)\right) +\sum _{n=1}^{N}\left( \Psi \left( n+1 \right) -\Psi \left( n \right)\right)$$ and by noting that: $$\sum _{n=1}^{N}\Psi \left( z+n+1 \right) -\Psi \left( z+n \right) = \sum _{n=2}^{N+1}\Psi \left( z+n \right) -\sum _{n=1}^{N}\Psi \left( z +n \right) =\Psi \left( z+N+1 \right) -\Psi \left( z+1 \right)$$ the partial sum becomes: $$-\sum _{n=1}^{N}\left( \frac{1}{z+n} -\frac{1}{n}\right)=-\Psi \left( z+N+1 \right) +\Psi \left( N+1 \right) +\Psi \left( z+1 \right) -\Psi(1)$$ Noting that the starting value of the recursion is $\Psi(1)=-\gamma$ and taking $N\rightarrow \infty$ we then have: $$\Psi \left( z+1 \right)=-\lim_{N\to \infty}\left(-\Psi \left( z+N+1 \right) +\Psi \left( N+1 \right)\right)-\gamma-\lim_{N\to\infty}\sum _{n=1}^{N}\left( \frac{1}{z+n} -\frac{1}{n}\right)$$ So, ultimately we find the functional equation alone would not quite do the job, as we also have to prove the limit: $$\lim_{N\to \infty}\left(-\Psi \left( z+N+1 \right) +\Psi \left( N+1 \right)\right)=0$$ Proving the limit is finite proves the convergence of the sum, while proving it vanishes proves the desired result. To prove the limit we could consider the integral representation of the Digamma function for $\Re (x)>0$: $$\Psi(x)=\int_{0}^{\infty}\left(\frac{e^{-t}}{t}-\frac{e^{-xt}}{1-e^{-t}}\right)dt$$
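To double-check the signs (my own addition), mpmath reproduces the series numerically; the tail of the partial sum decays like $z/N$, so only a few digits of agreement should be expected here:

```python
from mpmath import mp, mpf, digamma, euler

mp.dps = 25
z = mpf('0.37')
N = 20000
partial = sum(1 / (z + n) - mpf(1) / n for n in range(1, N + 1))
print(-euler - partial)   # series value, off by roughly z/N
print(digamma(z + 1))     # mpmath's psi(z + 1) for comparison
```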
{ "language": "en", "url": "https://math.stackexchange.com/questions/445531", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 0 }
finitely generated & finitely related = finitely presented module? Let $R$ be a ring $M$ an $R$-module. How can I prove that if * *$M\cong R^n/N$ for some $n\!\in\!\mathbb{N}$ and some submodule $N\leq R^n$ and if * *$M\cong R^{(I)}/\langle u_1,\ldots,u_m\rangle$ for some set $I$ and some vectors $u_1,\ldots,u_m\in R^{(I)}$, then * *$M\cong R^k/\langle v_1,\ldots,v_l\rangle$ for some $k\!\in\!\mathbb{N}$ and some vectors $v_1,\ldots,v_l\in R^k$ ?
Every generating system $E$ of a finitely generated module $M$ contains a finite generating system (namely, look at those generators of $E$ which are needed to generate a finite generating system of $M$). Now assume that there are only finitely many relations between the generators of $E$. But these only use finitely many generators of $E$. It follows that every presentation of $M$ can be adjusted to a finite presentation.
{ "language": "en", "url": "https://math.stackexchange.com/questions/445587", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Double integral $\iint_D |x^3 y^3|\, \mathrm{d}x \mathrm{d}y$ Solve the following double integral \begin{equation} \iint_D |x^3 y^3|\, \mathrm{d}x \mathrm{d}y \end{equation} where $D: \{(x,y)\mid x^2+y^2\leq y \}$. Some help please? Thank you very much.
This may be done easily in polar coordinates; the equation of the circle is $r=\sin{\theta}$, $\theta \in [0,\pi]$. The integrand is $r^6 |\cos^3{\theta} \sin^3{\theta}|$, and is symmetric about $\theta = \pi/2$. The integral is then $$2 \int_0^{\pi/2} d\theta \, \cos^3{\theta} \, \sin^3{\theta}\, \int_0^{\sin{\theta}} dr \, r^7 = \frac14 \int_0^{\pi/2} d\theta\, \cos^3{\theta} \, \sin^{11}{\theta} $$ which may be evaluated simply as $$ \frac14 \int_0^{\pi/2} d(\sin{\theta}) (1-\sin^2{\theta}) \sin^{11}{\theta} = \frac14 \left (\frac{1}{12} - \frac{1}{14} \right ) = \frac{1}{336} $$
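For the skeptical, sympy reproduces the value (my own check of the computation above):

```python
from sympy import symbols, integrate, sin, cos, pi

r, t = symbols('r t')
inner = integrate(r**7, (r, 0, sin(t)))                        # sin(t)**8 / 8
result = 2 * integrate(cos(t)**3 * sin(t)**3 * inner, (t, 0, pi / 2))
print(result)   # 1/336
```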
{ "language": "en", "url": "https://math.stackexchange.com/questions/445688", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 5, "answer_id": 2 }
Homogeneous differential equations of second order I can not find other solutions of the equation $$ y'' + 4y = 0 $$ addition of the solutions $y = 0$ and $y = \sin 2x$. There are positive solutions? Or solutions in terms of exponential function? Thanks.
So for these equations we take a guess that the solution is of exponential form ($e^{rx}$). $$y''+4y=0$$ $$\frac {d^2} {dx^2} e^{rx}+4e^{rx}=0$$ $$r^2e^{rx}+4e^{rx}=0$$ $$e^{rx}(r^2+4)=0$$ $$r^2=-4$$ $$r=\pm2i$$ So the solutions to your equation are of the form $Ae^{\pm2ix}$. If you covered this in class, you know that you can expand the exponential with Euler's formula: $$e^{i\alpha}=\cos(\alpha)+i\sin(\alpha)$$ So your general solution is thus $$y(x)=A\cos(2x)+B\sin(2x)$$ The 2 is because of the exponent, and there is no $i$ in front of the sine term because it gets absorbed into the constant $B$. You can then check that this works: $$y''+4y=0$$ $$-4A\cos(2x)-4B\sin(2x)+4A\cos(2x)+4B\sin(2x)=0$$
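If you have SymPy handy, you can confirm the general solution mechanically (a sketch, not part of the original argument):

```python
import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')
print(sp.dsolve(y(x).diff(x, 2) + 4*y(x), y(x)))
# Eq(y(x), C1*sin(2*x) + C2*cos(2*x))
```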
{ "language": "en", "url": "https://math.stackexchange.com/questions/445792", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
If $f(t) = 1+ \frac{1}{2} +\frac{1}{3}+\cdots+\frac{1}{t}$, find $\sum^n_{r=1} (2r+1)f(r)$ in terms of $f(n)$ If $f(t) = 1+ \frac{1}{2} +\frac{1}{3}+\cdots+\frac{1}{t}$, find $x$ and $y$ such that $\sum^n_{r=1} (2r+1)f(r) =xf(n) -y$
Since $(n+1)^2 - n^2 = 2n+1$, we'd expect $\sum_{r=1}^n (2r+1)f(r)$ to be something like $n^2 f(n)$. A little experimentation shows that it's actually $(n+1)^2 f(n) - n(n+1)/2$, i.e. $x=(n+1)^2$ and $y=n(n+1)/2$. $\textbf{Proof}$: At $n=1$, this is $4 - 1 = 3 = (2+1)f(1)$. The forward difference of $(n+1)^2 f(n) - n(n+1)/2$ is $$(n+2)^2 f(n+1) - \frac{(n+1)(n+2)}{2} - \left[(n+1)^2 f(n) - \frac{n(n+1)}{2}\right]$$ $$= (n+2)^2 f(n+1) - (n+1)^2 \left(f(n+1)-\frac{1}{n+1}\right) - (n+1)$$ $$= (2n+3)f(n+1) = (2(n+1)+1)f(n+1),$$ as required.
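A quick exact-arithmetic check of the closed form (a sketch in Python; the range of $n$ is arbitrary):

```python
from fractions import Fraction

def f(t):
    return sum(Fraction(1, i) for i in range(1, t + 1))

for n in range(1, 10):
    lhs = sum((2*r + 1) * f(r) for r in range(1, n + 1))
    rhs = (n + 1)**2 * f(n) - Fraction(n * (n + 1), 2)
    assert lhs == rhs
print("closed form verified for n = 1..9")
```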
{ "language": "en", "url": "https://math.stackexchange.com/questions/445849", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
black and white balls in the box A box contains $731$ black balls and $2000$ white balls. The following process is to be repeated as long as possible. (1) arbitrarily select two balls from the box. If they are of the same color, throw them out and put a black ball into the box. (We have sufficient black balls for this). (2) if they are of different colors, place the white ball back into the box and throw the black ball away. What will happen at last? Will the process stop with a single black ball in the box or a single white ball in the box or with an empty box? I am unable to decide how to start and in which direction? should we apply probability or what?
First, the number of balls in the box decreases by one at each step. Suppose $B(t)$ and $W(t)$ are the number of black and white balls present after $t$ steps. Since we start with $B(0)+W(0)=2731$ and $B(t+1)+W(t+1)=B(t)+W(t)-1$, we have that $B(2730)+W(2730)=1$. At this point we can no longer continue the process. So at the end, there is either a white ball or a black ball left. Notice that the number of white balls present at any time is even. For instance, suppose we have just done step $t$. If we choose two black balls, then $B(t+1)=B(t)-1$ and $W(t+1)=W(t)$. If we choose a white ball and a black ball, then $B(t+1)=B(t)-1$ and $W(t+1)=W(t)$. If we choose two white balls, then $B(t+1)=B(t)+1$ and $W(t+1)=W(t)-2$. Necessarily, since $W(0)=2000$ is even, it stays even throughout the process. Hence $W(2730)=0$ and $B(2730)=1$. Even though what happens throughout the process is random, by parity, the process must end with only one black ball left.
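The invariant argument can also be watched in action with a small simulation (a sketch; the drawing mechanics follow the rules stated in the question):

```python
import random

def run(black=731, white=2000):
    while black + white > 1:
        draw = random.sample(['b'] * black + ['w'] * white, 2)
        if draw == ['w', 'w']:
            white -= 2; black += 1   # two whites out, one black in
        elif draw == ['b', 'b']:
            black -= 1               # two blacks out, one black in
        else:
            black -= 1               # mixed: black discarded, white returned
    return black, white

print(run())   # always (1, 0): a single black ball survives
```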
{ "language": "en", "url": "https://math.stackexchange.com/questions/445919", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
solve congruence $x^{59} \equiv 604 \pmod{2013}$ This is an exercise from my previous exam; how should I approach this? Solve congruence $\;x^{59} \equiv 604 \pmod{2013}$ Thanks in advance :)
Hint We have that $3 \cdot 11\cdot 61=2013$. Break up your congruence into three. By $x^2\equiv 1\mod 3$, the first one turns into $x\equiv 1\mod 3$, for example, since we can deduce $3\not\mid x$. Glue back using CRT. ADD Just in case you want the solution. First we may write $x^{59}\equiv 604\mod 3$ as $x^{2\cdot 29}x\equiv 1\mod 3$. The last equation reveals $3\not\mid x$, so $x^2\equiv 1\mod 3$ and $x\equiv 1\mod 3$. The second one can be reduced to $x^{59}\equiv 10\mod 11$, which again reveals $11\not\mid x$. Thus $x^{10}\equiv 1\mod 11$, so $x^{59}=x^{60}x^{-1}\equiv x^{-1}\mod 11$, and then $x^{-1}\equiv 10\mod 11$, which gives $x\equiv 10\mod 11$ (since $10\equiv -1$ is its own inverse). Finally we have $x^{59}\equiv 55\mod 61$. Again $61\not\mid x$, so $x^{60}\equiv 1\mod 61$ and we get $x^{-1}\equiv 55\mod 61$. Using the Euclidean algorithm, we find $55\cdot 10-61\cdot 9=1$, so $x\equiv 10\mod 61$. Thus we have that $$\begin{cases}x\equiv 1\mod 3\\x\equiv 10\mod 11\\x\equiv 10\mod 61\end{cases}$$ One may apply now the Chinese Remainder Theorem, or note that $x=10$ is a solution of the above.
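A one-line check of the answer (and of its uniqueness modulo 2013), in Python:

```python
print(pow(10, 59, 2013))                                      # 604
print([x for x in range(2013) if pow(x, 59, 2013) == 604])    # [10]
```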
{ "language": "en", "url": "https://math.stackexchange.com/questions/445993", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Getting equation from differential equations I have: $\dfrac {dx} {dt}$=$-x+y$ $\dfrac {dy}{dt}$=$-x-y$ and I am trying to find $x(t)$ and $y(t)$ given that $x(0)=0$ and $y(0)=1$ I know to do this I need to decouple the equations so that I only have to deal with one variable but the decoupling is what I am having trouble with Do I set them equal to each other and then just move like terms to separate sides getting two different equations and then integrate?
If I understand your question correctly, you are looking for a possibility to separate the equations, as is common with some systems, by shifting the system parallel to the existing $x$ and $y$ coordinates. In simple words, you want to find a way using only addition, subtraction, multiplication or division. A fair answer is that this is not possible with this system; you cannot just scale it. You need to rotate it. You would see this from the solution of the system, $$x(t) = e^{-t} \sin t$$ $$y(t) = e^{-t} \cos t$$ where the trigonometric terms $\sin$ and $\cos$ show that $y$ and $x$ are shifted against each other by a rotation of $\pi/2$. But a priori you don't have the solution, of course, and the only way to know about the demand for a rotation is to notice intuitively that the system resists separation: the mixed positive and negative signs make a separation by pure scaling impossible, following your words. There is no other way to solve the system by separation except via transformation into the eigenspace, accepting a rotation. The calculation of the eigenvalues and the solution of the system is standard and straightforward, and it takes care of all the stretching/scaling and rotation required; you can find it in many books and lecture notes. Hope this helps out of all confusion.
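For completeness, the eigenvalues can be checked numerically (a sketch, assuming NumPy); the complex-conjugate pair is exactly the rotation mentioned above:

```python
import numpy as np

A = np.array([[-1.0,  1.0],
              [-1.0, -1.0]])
print(np.linalg.eigvals(A))   # [-1.+1.j, -1.-1.j]
```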
{ "language": "en", "url": "https://math.stackexchange.com/questions/446061", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
Do I have enough iMac boxes to make a full circle? My work has a bunch of iMac boxes and because of their slightly wedged shape we are curious how many it would take to make a complete circle. We already did some calculations and also laid enough out to make 1/4 of a circle so we know how many it would take, but I'm curious to see how others would approach this problem mathematically and see if you came up with the same answer. Our stock consists of 12 of the 27" iMac boxes and 16 of the 21.5" iMac boxes. The 21.5" box has the following dimensions top: 5" height: 21.25" bottom: 8.75" and the 27" box is top: 5.75" height: 23.75" bottom: 9.5" 21.5" iMac box picture for reference
Trying to avoid trigonometric functions: the outer circle must be longer than the inner circle by $2\pi$ times the height, so compute $$\frac{2\pi\cdot\text{height}}{\text{bottom}-\text{top}}$$ as a good approximation.
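Plugging the two box sizes from the question into this approximation (a quick sketch):

```python
import math

for name, top, height, bottom in [("21.5-inch", 5.0, 21.25, 8.75),
                                  ("27-inch",   5.75, 23.75, 9.5)]:
    n = 2 * math.pi * height / (bottom - top)
    print(name, round(n, 1))   # about 35.6 and 39.8 boxes
```

So, on this estimate, neither the 16 smaller boxes nor the 12 larger ones would be enough on their own for a full circle.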
{ "language": "en", "url": "https://math.stackexchange.com/questions/446110", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 0 }
How find that $\left(\frac{x}{1-x^2}+\frac{3x^3}{1-x^6}+\frac{5x^5}{1-x^{10}}+\frac{7x^7}{1-x^{14}}+\cdots\right)^2=\sum_{i=0}^{\infty}a_{i}x^i$ let $$\left(\dfrac{x}{1-x^2}+\dfrac{3x^3}{1-x^6}+\dfrac{5x^5}{1-x^{10}}+\dfrac{7x^7}{1-x^{14}}+\cdots\right)^2=\sum_{i=0}^{\infty}a_{i}x^i$$ How find the $a_{2^n}=?$ my idea: let $$\dfrac{nx^n}{1-x^{2n}}=nx^n(1+x^{2n}+x^{4n}+\cdots+x^{2kn}+\cdots)=n\sum_{k=0}^{\infty}x^{(2k+1)n}$$ Thank you everyone
The square of the sum is $$\sum_{u\geq0}\left[\sum_{\substack{n,m,k,l\geq0\\(2n+1)(2k+1)+(2m+1)(2l+1)=u}}(2n+1)(2m+1)\right]x^u.$$ It is easy to use this formula to compute the first coefficients, and we get (starting from $a_1$) $$0, 1, 0, 8, 0, 28, 0, 64, 0, 126, 0, 224, 0, 344, 0, 512, 0, 757, 0, 1008, 0, 1332, 0, 1792, 0, 2198, 0, 2752, 0, 3528, \dots$$ We see that the odd indexed ones are zero. We look up the even ones in the OEIS and we see that $a_{2n}$ is the sum of the cubes of the divisors $d$ of $n$ such that $n/d$ is odd. In particular, $a_{2^n}=2^{3(n-1)}$. Notice that the inner sum giving the even-indexed coefficient $a_{2u}$, $$\sum_{\substack{n,m,k,l\geq0\\(2n+1)(2k+1)+(2m+1)(2l+1)=2u}}(2n+1)(2m+1),$$ can be written, if we group the terms according to the values of the products $x=(2n+1)(2k+1)$ and $y=(2m+1)(2l+1)$, in the form $$\sum_{\substack{x+y=2u\\\text{$x$ and $y$ odd}}}\left(\sum_{a\mid x}a\right)\left(\sum_{b\mid y}b\right)=\sum_{\substack{x+y=2u\\\text{$x$ and $y$ odd}}}\sigma(x)\sigma(y),$$ where as usual $\sigma(x)$ denotes the sum of the divisors of $x$. This last sum is in fact equal to $$\sum_{x+y=2u}\sigma(x)\sigma(y)-\sum_{x+y=u}\sigma(2x)\sigma(2y).$$ The first sum is $\tfrac{1}{12}(5\sigma_3(n)-(6n-1)\sigma(n))$ with $n=2u$, as observed by Ethan (references are given in the Wikipedia article)
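The coefficient pattern is easy to confirm by brute force (a Python sketch; `N = 70` is an arbitrary cutoff):

```python
N = 70

# c[e] = sum of (2k+1) over representations e = (2k+1)(2j+1); this is sigma(e) for odd e
c = [0] * (N + 1)
for k in range(N):
    m = 2*k + 1
    for j in range(N):
        e = m * (2*j + 1)
        if e > N:
            break
        c[e] += m

a = [0] * (N + 1)                  # coefficients of the square, by convolution
for i in range(N + 1):
    for j in range(N + 1 - i):
        a[i + j] += c[i] * c[j]

print(a[2], a[4], a[8], a[16], a[32], a[64])   # 1, 8, 64, 512, 4096, 32768 = 8^(n-1)
```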
{ "language": "en", "url": "https://math.stackexchange.com/questions/446272", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 2, "answer_id": 0 }
Equation of motion Pendulum using $w=e^{ix}$ I'm working with the equation of motion for a pendulum as follows: $$x''+ \frac{g}{l} \sin (x)=0$$ Where $x$ is the angle between the pendulum and the vertical rest position. I am required to use the complex variable $w=e^{ix}$ to rewrite the equation of motion in the form $(w')^2= Q (w)$, where $Q$ is a cubic polynomial. So in the form $(u')^2=u^3 + au + b$, with $a$, $b$ constants. I'm not sure where to start with the question, can anybody help me get going? Homework help
Multiply the equation through by $x'$ and integrate once to get $$x'^2-\frac{2 g}{\ell} \cos{x} = C$$ where $C$ is a constant of integration. Now, if $w=e^{i x}$, then $\cos{x}=(w+w^{-1})/2$ and $$w' = i x' e^{i x} \implies x'=-i w'/w$$ Then the equation is equivalent to $$-\frac{w'^2}{w^2} - \frac{g}{\ell} \left (w+\frac{1}{w}\right)=C$$ Then, multiplying through by $-w^2$, we get $$w'^2+\frac{g}{\ell} w^3 + C w^2+\frac{g}{\ell} w=0$$ which is not quite the form specified, but is an equation of the form $w'^2+Q(w)=0$, where $Q$ is a cubic in $w$.
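One can let SymPy confirm that the cubic relation holds identically, given the first integral (a sketch; $v$ stands for $x'$, and $2\cos x = w + 1/w$ is used to express $C$):

```python
import sympy as sp

x, v, g, l = sp.symbols('x v g l', positive=True)
w = sp.exp(sp.I * x)
wp = sp.I * v * w                  # chain rule: w' = i x' e^{ix}
C = v**2 - g/l * (w + 1/w)         # first integral, using 2*cos(x) = w + 1/w
expr = wp**2 + g/l * w**3 + C * w**2 + g/l * w
print(sp.simplify(sp.expand(expr)))   # 0
```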
{ "language": "en", "url": "https://math.stackexchange.com/questions/446457", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Finding the value to which a sequence converges The question is $f_1=\sqrt2 \ \ \ , \ \ f_{n+1}=\sqrt{2f_n}$, I have to show that it converges to 2. The book proceeds like this: let $\lim f_n=l$. We have, $f_{n+1}=\sqrt{2f_n} \implies (f_{n+1})^2=2f_n$. Also, $\lim f_n=l \implies \lim f_{n+1}=l$. [HOW ?] Thus, $l^2=2l \implies l\in [0,2]$. [???] Can someone please explain these two steps.
See this. In short, you may express the limit as $$2^{1/2 + 1/4 + 1/8 + \cdots} = 2^1 = 2$$ Indeed, induction gives the closed form $f_n = 2^{1/2+1/4+\cdots+1/2^n} = 2^{1-2^{-n}}$, which increases to $2$. There are very interesting generalizations.
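Numerically (a sketch), both the recursion and the closed form $f_n = 2^{1-2^{-n}}$ are easy to check:

```python
f = 2 ** 0.5
for n in range(1, 11):
    assert abs(f - 2 ** (1 - 2.0 ** (-n))) < 1e-12
    f = (2 * f) ** 0.5
print(f)   # about 1.99932, increasing toward 2
```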
{ "language": "en", "url": "https://math.stackexchange.com/questions/446536", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 2 }
If $A+B+C+D+E = 540^\circ$ what is $\min (\cos A+\cos B+\cos C+\cos D+\cos E)$? Let each of $A, B, C, D, E$ be an angle that is less than $180^\circ$ and is greater than $0^\circ$. Note that each angle can be neither $0^\circ$ nor $180^\circ$. If $A+B+C+D+E = 540^\circ,$ what is the minimum of the following function? $$\cos A+\cos B+\cos C+\cos D+\cos E$$ I suspect the minimum is achieved when $A=B=C=D=E$, but I can't prove it. I need your help.
What you're looking at is a constrained optimization problem. Put another way, you are being asked to minimize $\cos A+\cos B+\cos C+\cos D+\cos E$ given the constraint $A+B+C+D+E=540$. Using Lagrange multipliers, we can rewrite this as the minimization of the function $L(A,B,C,D,E,\lambda)=\cos A+\cos B+\cos C+\cos D+\cos E+\lambda(540-A-B-C-D-E)$ To find the interior critical points, you want to set each partial derivative of $L$ ($\frac{\partial L}{\partial A},\frac{\partial L}{\partial B},\ldots,\frac{\partial L}{\partial \lambda}$) to zero. This should get you the value for $\lambda$ as well as values for each angle. One caution: since each angle ranges over the open interval $(0^\circ,180^\circ)$, you should also compare the critical value against the limiting values as some angles approach the boundary, because the infimum need not occur at an interior critical point. With that I think you can take the last step.
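To back up that caution, here is a quick numerical experiment (a sketch; the random search and the claimed limit point are my own check, not part of the hint). The symmetric critical point gives $5\cos 108^\circ \approx -1.545$, but configurations near the boundary do better, apparently approaching $1-2\sqrt{2} \approx -1.828$ as one angle tends to $0^\circ$ and the other four to $135^\circ$:

```python
import numpy as np

rng = np.random.default_rng(0)
best = np.inf
for _ in range(200_000):
    a = rng.uniform(0.0, 180.0, size=4)
    e = 540.0 - a.sum()
    if 0.0 < e < 180.0:
        best = min(best, np.cos(np.deg2rad(np.append(a, e))).sum())

print(5 * np.cos(np.deg2rad(108)))   # equal angles: about -1.545
print(best)                          # below that, drifting toward 1 - 2*sqrt(2)
print(1 - 2 * np.sqrt(2))            # about -1.828
```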
{ "language": "en", "url": "https://math.stackexchange.com/questions/446629", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "17", "answer_count": 5, "answer_id": 1 }
Strategies for Factoring Expressions with Four Terms I'm trying to come up with a general strategy for factoring expressions with four terms on the basis of the symmetries of the expressions. One thought I had was the following: count up the number of terms in which each variable appears, and compare. For example, if I want to factor the expression $x^{2}-y^{2}-4x+4$, I could say that $x$ appears in two terms, and $y$ appears in only one term, and there is only one constant term. Since the two variables do not appear in the same number of terms, I would group by 1 and 3. So I would have $$x^{2}-4x+4-y^{2}=(x-2)^{2}-y^{2}=(x-2-y)(x-2+y).$$ On the other hand, suppose I had to factor the expression $2y-6-3x+xy$, I could note that $x$ and $y$ each appear in exactly two terms, so I should group by 2 and 2. Indeed, $$2y-6-3x+xy=2(y-3)+x(y-3)=(x+2)(y-3).$$ However, I find that this strategy does not always work. For example, suppose I have something like $x^{2}-2xy+y^{2}-9$. Even though $x$ and $y$ appear in the same number of terms, the correct thing to do here is $$x^{2}-2xy+y^{2}-9=(x-y)^{2}-3^{2}=(x-y-3)(x-y+3).$$ Can my strategy be salvaged? Is there a more general method than simply trial-and-error? Important note: I am interested in basic techniques teachable in High School Algebra I.
Factorization in $\mathbb{C}[x_1,x_2,\ldots,x_n]$ is polynomial-time, given a suitable representation of the algebraic number coefficients. Paper I'm quoting: http://www.aimath.org/pastworkshops/polyfactorrep.pdf Paper that is cited: http://scholar.google.co.uk/scholar?cluster=697805526622901646&hl=en&as_sdt=0,5&sciodt=0,5
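In practice, a computer algebra system will happily factor all three examples from the question (a sketch, assuming SymPy is available):

```python
import sympy as sp

x, y = sp.symbols('x y')
print(sp.factor(x**2 - y**2 - 4*x + 4))      # (x - y - 2)*(x + y - 2)
print(sp.factor(2*y - 6 - 3*x + x*y))        # (x + 2)*(y - 3)
print(sp.factor(x**2 - 2*x*y + y**2 - 9))    # (x - y - 3)*(x - y + 3)
```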
{ "language": "en", "url": "https://math.stackexchange.com/questions/446694", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
NURBS, parametrized curves and manifolds Let's start with the definitions: A parametrized curve is a map $γ : (α,β) → R^n$ , for some $α,β$ with $−∞ ≤ α < β ≤ ∞$. A NURBS curve is defined by $C(u)=\sum_{i=1}^n R_{i,p}(u)\mathbf{P_i}$ as a rational function from the domain $\Omega=[0,1]$ to $R^n$. A parametrized manifold in $R^n$ is a smooth map $σ:U → R^n$ , where $U ⊂ R^m$ is a non-empty open set. It is called regular at $x ∈ U$ if the $n × m$ Jacobi matrix $Dσ(x)$ has rank $m$ (that is, it has linearly independent columns), and it is called regular if this is the case at all $x ∈ U$. Now, I might be misunderstanding some things but I have a couple of questions: i) We can say that a NURBS curve is defined is parametric form. However it doesn't fit in the definition of a parametrized curve because the interval $[0,1]$ is not open. Why is that and is this important? ii) We can use NURBS to exactly represent conics. For example the circle can be exactly represented. If we consider the circle as a manifold, can we also consider the NURBS mapping that defines the geometry of the circle as a chart, or is the open set (interval) condition again a problem? I guess there are two notions here that have me confused. One the use of open intervals vs closed intervals and second, what's the relationship between parametrizations, manifolds and charts. Are all parametric curves (or surfaces) manifolds and vice versa?
To answer your question in brief: all regular curves and surfaces are manifolds, parametrized or not. Manifolds are abstractions of surfaces or curves. Just look up the definition of a chart and atlas on Wikipedia. In a way, a parametrization is a kind of chart (more precisely, the inverse of a regular parametrization, where defined, serves as a chart). A precise answer might be possible if you add a bit more to your question or your background. As for the other question, open intervals and closed intervals are both used for defining curves. Take a look at this related question: open interval in definition of curve Hope this helped.
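To make the "NURBS can represent conics exactly" point concrete, here is a small sketch (NumPy assumed) evaluating a quarter circle as a rational quadratic Bezier curve, which is a special case of a NURBS curve:

```python
import numpy as np

P = np.array([[1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])   # control points
w = np.array([1.0, np.sqrt(2) / 2, 1.0])             # weights

def C(u):
    B = np.array([(1 - u)**2, 2*u*(1 - u), u**2])    # Bernstein basis
    return (w * B) @ P / (w * B).sum()

pts = np.array([C(t) for t in np.linspace(0, 1, 7)])
print(np.linalg.norm(pts, axis=1))                   # all 1.0: the points lie on the unit circle
```

Note the parameter domain here is the closed interval $[0,1]$, exactly the issue raised in the question.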
{ "language": "en", "url": "https://math.stackexchange.com/questions/446747", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Different meanings of math terms in different countries Does anyone know of a list of math terms that have (slightly) different meanings in different countries? For example, "positive" could mean $\geq 0 $ in some places, and "strictly positive" means $>0$ - See Dutch wikipedia page on Positive numbers, which states "In Belgium, it is a number that is greater than or equal to 0". Another common example is Domain and range, which is even ambiguous at the author level. I'd also be interested in distinct math terms that different countries use. E.g. Divisors and factors in American vs British school systems, but this will easily get very long. Since this is now CW, please add an answer for each term that you are aware of.
There is a declining but still existent tendency in French not to assume fields are commutative: English (field, division ring) = French (corps commutatif, corps).
{ "language": "en", "url": "https://math.stackexchange.com/questions/446811", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 4, "answer_id": 2 }
One Point Derivations on locally Lipschitz functions Let $A$ be the algebra of $\mathbb{R}\to\mathbb{R}$ locally Lipschitz functions. What is the vector space of derivations at $0$? The proof that for continuous functions there aren't really any doesn't seem to work in this case. I was thinking about trying to define the tangent spaces for Lipschitz manifolds analogously to differentiable manifolds, but I couldn't work out even this simplest example. Can tangent spaces of Lipschitz, even topological manifolds be defined at all and how?
As Etienne pointed out in a comment, the book Lipschitz Algebras by Weaver is very much relevant. Section 4.7 introduces and describes derivations on the algebra of Lipschitz functions on a compact metric space. Weaver's constructions can be viewed as a way to introduce differentiable structures on metric spaces. See this paper by Gong. I don't know of any concept of a tangent space to a topological manifold. But a Lipschitz manifold is a complete doubling metric space, and therefore one can define its tangent cones as (pointed) Gromov-Hausdorff limits of rescaled spaces $(X,\delta^{-1}d)$. Different sequences of scales $\delta_n$ can produce different limits, which means tangent cone is not unique in general. See section 8.7 of Nonsmooth Calculus by Heinonen.
{ "language": "en", "url": "https://math.stackexchange.com/questions/446882", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
is this the right truth table? When I filled out the table I tried my best to figure it out, but if I made any mistakes please help me correct them. Thanks! (Sorry, the 5th one should be false.)
EDIT: UPDATE Now your table is mostly correct. * *Check your truth value assignment columns; we need to cover all possible $2^3$ truth-value assignments, and you've missed, for example, $P = F, Q = T, R = T$, but double counted another. *The only truth value combination that is false is when we have $P = F, Q = T, R = T$. Then and only then is it the case that $P \lor \lnot Q \lor \lnot R = F \lor F \lor F = F$. For all other truth value combinations, the compound disjunction is true. *The truth values listed in your columns below $P$ and $Q$ are slightly off. Compare the columns below with your columns.
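If it helps, the whole table can be generated mechanically (a sketch; I am assuming the expression in question is $P \lor \lnot Q \lor \lnot R$):

```python
from itertools import product

for P, Q, R in product([True, False], repeat=3):
    value = P or (not Q) or (not R)
    print(P, Q, R, value)   # False only on the row P=F, Q=T, R=T
```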
{ "language": "en", "url": "https://math.stackexchange.com/questions/446953", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
How many times is the print statement executed? Hello, I've gotten far on this exercise, with the following insight: Here is a matrix of examples (vertical-axis is n=1,2,3,4,5,6,7,8; horizontal-axis is k=1,2,3,4) 1: 1 1 1 1 2: 2 3 4 5 3: 3 6 10 15 4: 4 10 20 35 5: 5 15 35 70 6: 6 21 56 126 7: 7 28 84 210 8: 8 36 120 330 Now, there is an obvious pattern among the numbers, being triangular numbers, sum of triangular numbers, sum of sum of triangular numbers, and so on. My question is: Can you help me find closed form expressions for the sums of the entries: * *Down the columns *Across the rows *And for general n,k? Thanks!
This is a standard "stars and bars" problem. For given $n\geq1$ and $k\geq1$ we have to count the number of $k$-tuples $(i_1,\ldots,i_k)$ such that $$1\leq i_k\leq i_{k-1}\leq\ldots\leq i_2\leq i_1\leq n\ .$$ Each such $k$-tuple can be encoded as a $0$-$1$-sequence of length $n+k-1$ as follows: Begin by writing $n-1$ ones, or bars $|$, leaving enough space between them. These bars create $n$ spaces between them and at the ends. The spaces represent the numbers $1$, $2$, $\ldots$, $n$. For each $i_j$ we write a $0$ into the space corresponding to its value, making a total of $k$ zeros. Conversely: Given a $0$-$1$-sequence with exactly $n-1$ ones and $k$ zeros we can immediately read off the sequence $(i_1,\ldots,i_k)$ so encoded. There are $${n+k-1\choose k}$$ such sequences, and this is also the number of times the print command is executed in the quoted program.
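The count is easy to confirm against a literal simulation of the nested loops (a Python sketch; I am assuming the program in the exercise has this triangular-loop structure):

```python
from math import comb

def count_prints(n, k):
    # count k-tuples with 1 <= i_k <= ... <= i_1 <= n
    def rec(upper, depth):
        if depth == 0:
            return 1
        return sum(rec(i, depth - 1) for i in range(1, upper + 1))
    return rec(n, k)

for n in range(1, 9):
    print([count_prints(n, k) for k in range(1, 5)],
          [comb(n + k - 1, k) for k in range(1, 5)])   # identical rows, matching the table
```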
{ "language": "en", "url": "https://math.stackexchange.com/questions/447007", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
Inverse of $(A + B)$ and $(A + BCD)$? Consider $A$ as an arbitrary matrix and $B$ as a symmetric matrix. Since $B$ is symmetric, therefore, it can be written as a $\Gamma \Delta \Gamma'$, where $\Delta$ is a diagonal matrix with eigen-values on the the main diagonal and $\Gamma$ is a matrix of corresponding eigenvectors. Is there any formula for $(A + B)^{-1}$? Now, consider $A$, $B$, $C$, and $D$ as arbitrary matrices. Is there any formula for: $(A + BCD)^{-1}$ Application: I'm fitting a model that involves matrix inversion. My matrix can be decomposed into two parts. At each step, only one part of the matrix gets updated, if I can find a formula for the inverse of the matrices above, then I don't need to invert the whole matrix and instead, I can only invert the part that has been updated. Thanks for your help,
Have you considered the Woodbury matrix identity, a.k.a. the matrix inversion lemma? It states $$(A+UCV)^{-1}=A^{-1}-A^{-1}U\left(C^{-1}+VA^{-1}U\right)^{-1}VA^{-1},$$ which directly covers your $(A+BCD)^{-1}$ case (take $U=B$, $V=D$), and also $(A+B)^{-1}$ via the factorization $B=\Gamma\Delta\Gamma'$ (take $U=\Gamma$, $C=\Delta$, $V=\Gamma'$, assuming $\Delta$ is invertible). https://en.wikipedia.org/wiki/Woodbury_matrix_identity
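A quick numerical check of the identity (a sketch, assuming NumPy; the shift by a multiple of the identity just keeps the random matrices well-conditioned):

```python
import numpy as np

rng = np.random.default_rng(1)
n, k = 6, 2
A = rng.normal(size=(n, n)) + n * np.eye(n)
B = rng.normal(size=(n, k))
C = rng.normal(size=(k, k)) + k * np.eye(k)
D = rng.normal(size=(k, n))

Ainv, Cinv = np.linalg.inv(A), np.linalg.inv(C)
woodbury = Ainv - Ainv @ B @ np.linalg.inv(Cinv + D @ Ainv @ B) @ D @ Ainv
print(np.allclose(woodbury, np.linalg.inv(A + B @ C @ D)))   # True
```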
{ "language": "en", "url": "https://math.stackexchange.com/questions/447081", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
decoupling and integrating differential equations I am having trouble with the process of decoupling. If I have $$\frac{dx}{dt}=-x+y$$ $$\frac{dy}{dt}=-x-y$$ I am trying to figure out how to solve for $x(t)$ and $y(t)$ by decoupling the system so that I only have one variable but I can't seem to get anywhere
Compute $d^2 x/dt^2$, giving $$\frac{d^2 x}{dt^2} = - \frac{dx}{dt} + \frac{dy}{dt}$$ Use the second equation to replace $\frac{dy}{dt}$ with an expression in x and y, and use the first equation to replace $y$ with an expression in $dx/dt$ and $x$. The result is a second order equation in x. Edit: For clarity, this gives $$x'' = -x' + y' = -x' + (-x - y) = -x' -x - y = -x' - x - (x' + x)$$ $$x'' = -2x' - 2x$$
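The decoupled equation can then be handed to a solver (a SymPy sketch; the initial slope comes from $x'(0) = -x(0) + y(0) = 1$ for the initial data in the question):

```python
import sympy as sp

t = sp.symbols('t')
x = sp.Function('x')
sol = sp.dsolve(x(t).diff(t, 2) + 2*x(t).diff(t) + 2*x(t), x(t),
                ics={x(0): 0, x(t).diff(t).subs(t, 0): 1})
print(sol)   # Eq(x(t), exp(-t)*sin(t)); then y = x' + x = exp(-t)*cos(t)
```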
{ "language": "en", "url": "https://math.stackexchange.com/questions/447162", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Implication of an inequality relation Suppose I have linear function $f$ on $\mathbb{R}^n$ and another function $p$, which is positive homogeneous, i.e. $p(\lambda x)=\lambda p(x)$ for all $\lambda\ge 0$. We have the following implication $$f(x)>-1\Rightarrow p(x)>-1$$ Since we can multiply by positive scalars, we get $$f(x)>-y\Rightarrow p(x)>-y$$ for every $y\ge 0$. I thought this should imply that $p(x)\ge f(x)$, but my lecture note says the opposite: $p(x)\le f(x)$. The problem is, with the latter the whole rest of the proof does not work anymore! So where is my error in reasoning?
"Since we can multiply by positive scalars": what does it mean? Is this something like: for $x = λy$ with $λ>0$ and y in $ℝ^n$: $f(x)>−1 ⇒ p(x)>−1$ $f(λy)>−1 ⇒ p(λy)>−1$ $λf(y)>−1 ⇒ λp(y)>−1$ $f(y)>−1/λ ⇒ p(y)>−1/λ$ ?
{ "language": "en", "url": "https://math.stackexchange.com/questions/447212", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Say I have a bag of 100 unique marbles. With replacement, I pick 10 marbles at a time, at random. Say I have a bag of 100 unique marbles. With replacement, I pick 10 marbles at a time, at random. How many times will I have to pick the marbles (10 marbles a pick) in order to have a 95% chance of having seen every unique marble at least once.
Simulations show that you need 72 or 73 picks (probably 72: a longer run with $10^7$ trials gives $p = 0.950612$), and I don't know any efficient method to calculate that exactly (of course, that doesn't mean there isn't one). irb(main):021:0> average(1000000) do g(100,10,73)? 1:0 end => 0.955118 irb(main):022:0> average(1000000) do g(100,10,72)? 1:0 end => 0.950215 irb(main):023:0> average(1000000) do g(100,10,71)? 1:0 end => 0.945348 If you want some formulas, then a very rough estimate using Markov's inequality gives you $\frac{49.9}{1-0.95} \simeq 998$ picks. With such a difference, you could just as well assume that you replace each marble after picking it (that is, you pick a marble and then replace it, and again a marble, and again replace it), and then this is called the coupon collector's problem. The expected number of draws to see all unique marbles is $n\mathcal{H}_n$, where $n$ is the number of marbles and $\mathcal{H}_n = \sum_{k=1}^{n} \frac{1}{k}$ is the $n$-th harmonic number. Using this estimate you can use Markov's inequality $$P(X \geq t) \leq \frac{\mathbb{E}X}{t}$$ to bound the necessary number of draws. With $n = 100$ we require $$P(X \geq t) \leq \frac{100\mathcal{H}_{100}}{t}\leq5\%$$ that is, $0.05t \geq 100 \mathcal{H}_{100} \approx 518.737$, so $t \geq 10375$. Picking 10 at a time, you will have to pick 1038 times. I hope this helps ;-)
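For reference, here is a self-contained Python version of the same simulation (a sketch; 20000 trials per value keeps the runtime modest at the cost of some noise):

```python
import random

def covered(picks, n=100, k=10):
    seen = set()
    for _ in range(picks):
        seen.update(random.sample(range(n), k))   # draw 10 distinct marbles, then replace
    return len(seen) == n

trials = 20_000
for picks in (71, 72, 73):
    p = sum(covered(picks) for _ in range(trials)) / trials
    print(picks, p)   # roughly 0.945, 0.950, 0.955, matching the figures above
```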
{ "language": "en", "url": "https://math.stackexchange.com/questions/447301", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 1, "answer_id": 0 }
norms and sparsity Could anyone please elaborate on why $L^2$ norm moves toward the outliers compared to $L^1$ norm. I mean, what property/quantity in the mathematical expression of the norms makes it perform such way. One more thing is, how $L^1$ norm introduces more sparsity in the solution? Thank you. Praveen
I think the overall context you're referring to is the problem of $L^0$ minimization, i.e. compressed sensing. So, the goal of your question is to find the relationship between the following problems: $(P_0)\qquad \min \|x\|_0 \;s.t.\;Ax=y\\ (P_1)\qquad \min \|x\|_1 \;s.t.\;Ax=y\\ (P_2)\qquad \min \|x\|_2 \;s.t.\;Ax=y$ Since our goal is ultimately to solve $P_0$, the problem with $L^2$ minimization is as follows: picture the affine set of solutions of $Ax=y$ together with the smallest $L^2$ ball that touches it. Because the $L^2$ ball is round, the touching point is generally unlikely to lie on any of the axes (i.e. the $P_2$ solution is generally not sparse). However, because the set of equal $L^1$ norm is a "diamond" whose corners sit on the axes, the $L^1$ solution is more likely to be sparse. As for your second question, here's an exercise from my own coursework that might help you understand this better: Consider $P_1$ as described above, where $x\in\mathbb R^N, A\in \mathbb R^{m\times N},y\in \mathbb R^m$, with $m\ll N.$ Then $(P_1)$ has a solution with at most $m$ non-zero entries. Accordingly, the solutions to the $P_1$ problem promote sparsity. Actually proving this is an interesting exercise, and I can give you a hint there if you want it. The point is, we can guarantee a relatively sparse solution under $L^1$ minimization, which we can't generally do for $L^2$ minimization. In fact, if $A$ has the null-space property, we find that there is a unique solution like this. Here's the hint that came with the problem: Hint: suppose $\overline{x}\in\mathbb R^N$ is a solution to $(P_1)$ and $\|\overline{x}\|_0=k$ where $m<k\leq N$. It follows that the $k$ columns of $A$ corresponding to the support of $\overline{x}$ are linearly dependent. As a result, there exists a nonzero vector $h$ in $\mathbb R^N$, supported where $\overline{x}$ is, such that $Ah=0$. Define $\widetilde{x}=\overline{x}+\epsilon h$ where $\epsilon\in\mathbb R$, then $A\widetilde{x}=y$, i.e., $\widetilde{x}$ is also a solution to $y=Ax$. Since $\overline{x}$ is a solution to $(P_1)$, we have $$\|\widetilde{x}\|_1=\|\overline{x}+\epsilon h\|_1 \geq \|\overline{x}\|_1$$ Therefore, we can choose $\epsilon$ such that $$\|\overline{x}+\epsilon h\|_1=\|\overline{x}\|_1 \quad \text{ and } \quad \|\overline{x}+\epsilon h\|_0<\|\overline{x}\|_0$$
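A small end-to-end demonstration of the sparsity contrast (a sketch, assuming SciPy is available; the dimensions and the 3-sparse signal are arbitrary choices):

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
m, N = 20, 60
A = rng.normal(size=(m, N))
x_true = np.zeros(N)
x_true[[3, 17, 41]] = [1.5, -2.0, 0.7]    # 3-sparse signal
y = A @ x_true

x_l2 = np.linalg.pinv(A) @ y              # min ||x||_2 s.t. Ax = y (least-norm solution)

# min ||x||_1 s.t. Ax = y as a linear program, writing x = u - v with u, v >= 0
res = linprog(np.ones(2 * N), A_eq=np.hstack([A, -A]), b_eq=y,
              bounds=[(0, None)] * (2 * N))
x_l1 = res.x[:N] - res.x[N:]

print(np.sum(np.abs(x_l2) > 1e-6))   # about 60: the L2 solution is dense
print(np.sum(np.abs(x_l1) > 1e-6))   # typically 3: the L1 solution is sparse
```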
{ "language": "en", "url": "https://math.stackexchange.com/questions/447370", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 0 }