Good places to start for Calculus? I am a student, and due to my school's decision not to teach Calculus in high school (they said we'd learn it in college, but that's a year and a half away for me), I have to learn it myself. I am trying to get a summer internship as a Bioinformatics intern, and I would like to have prior knowledge (I was told I'd need an understanding of calculus to get the internship). I have heard of MIT's OpenCourseWare, which I had taken a look at, but I was wondering if there were any other resources, preferably free of charge, as I don't have much money to spend. Is there any good online (preferably free) way to learn Calculus?
MIT OpenCourseWare is useful. Also, check out Paul's Online Notes: http://tutorial.math.lamar.edu/Classes/CalcI/CalcI.aspx
{ "language": "en", "url": "https://math.stackexchange.com/questions/408018", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 5, "answer_id": 2 }
Finding the upper and lower limit of the following sequence. $\{s_n\}$ is defined by $$s_1 = 0; s_{2m}=\frac{s_{2m-1}}{2}; s_{2m+1}= {1\over 2} + s_{2m}$$ The following is what I tried to do. The sequence is $$\{0,0,\frac{1}{2},\frac{1}{4},\frac{3}{4},\frac{3}{8},\frac{7}{8},\frac{7}{16},\cdots \}$$ So the even terms $\{E_i\} = 1 - 2^{-i}$ and the odd terms $\{O_k\} = \frac{1}{2} - 2^{-k}$ and each of them has a limit of $1$ and $\frac{1}{2}$, respectively. So, the upper limit is $1$ and the lower limit is $1\over 2$, am I right? Does this also mean that $\{s_n\}$ has no limit? Is my notation $$\lim_{n \to \infty} \sup(s_n)=1 ,\lim_{n \to \infty} \inf(s_n)={1 \over 2} $$ correct?
Shouldn't it be $E_i = \frac{1}{2} - 2^{-i}$ and $O_i = 1 - 2^{1-i}$? That way $E_i = 0, \frac{1}{4}, \frac{3}{8},\ldots$ and $O_i = 0, \frac{1}{2}, \frac{3}{4},\ldots$, which seems to be what you want. Your conclusion looks fine, but you might want to derive the even and odd terms more rigorously. For example, the even terms $E_i$ are defined recursively by $E_{i+1} = s_{2i+2} = \frac{s_{2i+1}}{2} = \frac{E_i + \frac{1}{2}}{2}$, and $\frac{1}{2} - 2^{-i}$ also satisfies this recursion relation. $E_1 = 0$, and $\frac{1}{2} - 2^{-1} = 0$, hence they have the same first term. By induction the two sequences are the same. If we partition a sequence into a finite number of subsequences then the upper and lower limit of the sequence are equal to the maximum upper limit and minimum lower limit of the subsequences; in this case you're partitioning into even and odd terms. $\{s_n\}$ has a limit iff the upper and lower limits are the same (this is proved in most analysis books), so in this case $\{s_n\}$ has no limit.
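(Added for illustration, not part of the original answer.) A few lines of Python make the even/odd dichotomy visible by just iterating the recursion:

```python
# Iterate s_1 = 0, s_{2m} = s_{2m-1}/2, s_{2m+1} = 1/2 + s_{2m};
# list index i holds s_{i+1}.
s = [0.0]
for n in range(2, 41):
    if n % 2 == 0:
        s.append(s[-1] / 2)       # s_{2m} = s_{2m-1} / 2
    else:
        s.append(0.5 + s[-1])     # s_{2m+1} = 1/2 + s_{2m}

print(s[:8])         # [0.0, 0.0, 0.5, 0.25, 0.75, 0.375, 0.875, 0.4375]
print(s[38], s[39])  # s_39 is close to 1, s_40 is close to 1/2
```

The odd-indexed tail approaches the upper limit $1$ and the even-indexed tail approaches the lower limit $\frac12$, matching the computation above.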
{ "language": "en", "url": "https://math.stackexchange.com/questions/408095", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Without using L'Hospital's rule, I want to find a limit of the following. Given a series with $a_n = \sqrt{n+1}-\sqrt n$ , determine whether it converges or diverges. The ratio test was inconclusive because the limit equaled 1. So I tried to use the root test. So the problem was reduced to finding $$\lim_{n \to \infty} \sqrt[n]{\sqrt{n+1}-\sqrt n}$$ Since the book hasn't covered derivatives yet, I am trying to solve this without using L'Hospital's rule. So the following is the strategy that I am trying to proceed with. I cannot come up with a way to simplify the expression, so I am trying to compare $a_n$ to a sequence that converges to 1 or maybe something less. In fact, I want to compare it to 1 for that reason. The problem with this is that adding constants makes the series divergent, and I suspect that the series converges, so I don't think that works at all. Can someone help?
Hint: what is the $n^\text{th}$ partial sum of your series?
{ "language": "en", "url": "https://math.stackexchange.com/questions/408166", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
Find $Y=f(X)$ such that $Y \sim \text{Uniform}(-1,1)$. If $X_1,X_2\sim \text{Normal} (0,1)$, then find $Y=f(X)$ such that $Y \sim \text{Uniform}(-1,1)$. I can solve problems where the transformation is given and I need to find the distribution, but here I need to find the transformation. I have no idea how to proceed. Please help.
Why not use the probability integral transform? Note that if $$ F(x) = \int_{-\infty}^x \frac{1}{\sqrt{2\pi}} e^{-u^2/2} \, du $$ then $F(X_i) \sim U(0,1)$. So you could take $f(x) = 2F(x) - 1$.
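A quick simulation supports this (an illustrative sketch I added; it assumes SciPy is available):

```python
import numpy as np
from scipy.stats import norm, kstest

x = np.random.default_rng(1).standard_normal(100_000)  # X ~ N(0, 1)
y = 2 * norm.cdf(x) - 1                                # Y = 2 F(X) - 1
print(y.min(), y.max())                # essentially -1 and 1
print(kstest((y + 1) / 2, 'uniform'))  # large p-value: consistent with U(0, 1)
```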
{ "language": "en", "url": "https://math.stackexchange.com/questions/408217", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 1 }
Integral of $\int \frac{1+\sin(2x)}{\operatorname{tg}(2x)}dx$ I'm trying to find the antiderivative $F(x)$ of this function but I can't see how to do it; I need some hints about the solution. I know that $\sin(2x) = 2\sin(x)\cos(x)$; does that help me? Is it a good idea to set $2x$ as $t$? $$\int \frac{1+\sin(2x)}{\operatorname{tg}(2x)}dx$$ EDIT: Is it right to do it like this? $$\int \frac{1+2\sin(x)\cos(x)}{\frac{2\sin(x)\cos(x)}{\cos^2(x)-\sin^2(x)}}dx$$ $$\int \left(\frac{\cos^2(x)-\sin^2(x)}{2\sin(x)\cos(x)}+\cos^2(x)-\sin^2(x)\right)dx = \int \left(\operatorname{ctg}(2x)+\frac{1+\cos(2x)}{2}-\frac{1-\cos(2x)}{2}\right)dx$$ and the integral of $\operatorname{ctg}(2x)$ is $\displaystyle \frac{\ln|\sin(2x)|}{2}+C$, while the other two terms together integrate to $\displaystyle \frac{\sin(2x)}{2}+C$. Thanks
Effective hint: Consider $\int R(\sin x,\cos x)\,dx$, where $R$ is a rational function with respect to $\sin x$ and $\cos x$. If $$R(-\sin x, -\cos x)\equiv R(\sin x, \cos x) $$ then $t=\tan x$ (or $t=\cot x$) is a good substitution.
{ "language": "en", "url": "https://math.stackexchange.com/questions/408274", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
On Polar Sets with respect to Continuous Seminorms In the following, $X$ is a Hausdorff locally convex topological vector space and $X'$ is the topological dual of $X$. If $p$ is a continuous seminorm on $X$ then we shall designate by $U_p$ the "$p$-unit ball", i.e, $$U_p=\{x\in X: p(x)\le 1\}.$$ The polar set of $U_p$ is given by $$U_p^o=\{f\in X':|f(x)|\le 1\quad \forall x\in U_p\}.$$ How do we prove that for each $x\in X$, we have $$p(x)=\sup\{|f(x)|:f\in U_p^o\}.$$ I need some help...Thanks in advance.
Assume that $p(x)=0$. Then for all $\lambda>0$, $\lambda x\in U_p$, hence $|f(\lambda x)|\leqslant 1$ and $f(x)=0$ whenever $f\in U_p^0$. If $p(x)\neq 0$, then considering $\frac 1{p(x)}x$, we get the $\geqslant$ direction. For the other one, take $f(a\cdot x):=a\cdot p(x)$ for $a\in\Bbb R$; then $|f(v)|\leqslant p(v)$ for any $v\in\Bbb R\cdot x$. We extend $f$ by the Hahn-Banach theorem to the whole space: then $|f(w)|\leqslant p(w)$ for any $w\in X$, giving what we want.
{ "language": "en", "url": "https://math.stackexchange.com/questions/408327", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Applications of computation on very large groups I have been studying computational group theory and I am reading and trying to implement these algorithms. But what is actually bothering me is: what is the practical advantage of computing all properties of extremely large groups, especially since it is a hard problem? It might give birth to new algorithms, but does it solve any problem specific to group theory or other branches affected by it?
The existence of several of the large finite simple sporadic groups, such as the Lyons group and the Baby Monster was originally proved using big computer calculations (although I think they all now have computer-free existence proofs). Many of the properties of individual simple groups, such as their maximal subgroups and their (modular) character tables, which are essential for a deeper understanding of the groups, have been calculated by computer. Some significant theorems in group theory have proofs that are partly dependent on computer calculations, usually for small or medium sized special cases that are not covered by the general arguments. A recent example of this is the proof of Ore's conjecture that every element in every finite simple group is a commutator.
{ "language": "en", "url": "https://math.stackexchange.com/questions/408403", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 1, "answer_id": 0 }
Endomorphism Ring of Abelian Groups In the paper "Über die Abelschen Gruppen mit nullteilerfreiem Endomorphismenring." Szele considers the problem of describing all abelian groups whose endomorphism ring contains no zero-divisors. He proved that there is no such group among the mixed groups, while $C(p)$ and $C(p^\infty)$ are the only torsion groups with this property. I do not have access to this paper. Moreover, I do not speak German. Can someone give me a reference, in English or French, for this result, or sketch the proof?
Kulikov proved that an indecomposable abelian group is either torsion-free or $C(p^k)$ for some $k=0,1,\dots,\infty$. A direct summand creates a zero divisor in the endomorphism ring: Let $G = A \oplus B$ and define $e(a,b) = (a,0)$. Then $e^2=e$ and $e(1-e) = 0$. However $1-e$ is the endomorphism that takes $(a,b)$ to $(0,b)$ so it is not zero either.
{ "language": "en", "url": "https://math.stackexchange.com/questions/408464", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Where can I learn about the lattice of partitions? A set $P \subseteq \mathcal{P}(X)$ is a partition of $X$ if and only if all of the following conditions hold: * *$\emptyset \notin P$ *For all $x,y \in P$, if $x \neq y$ then $x \cap y = \emptyset$. *$\bigcup P = X$ I have read many times that the partitions of a set form a lattice, but never really considered the idea in great detail. Where can I learn the major results about such lattices? An article recommendation would be nice. I'm also interested in the generalization where condition 3 is disregarded.
George Grätzer's book General Lattice Theory has a section IV.4 on partition lattices; see page 250 of this result of a Google Books search. A more recent version of the book is called Lattice Theory: Foundation.
{ "language": "en", "url": "https://math.stackexchange.com/questions/408539", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 3, "answer_id": 1 }
Derivative of $\left(x^x\right)^x$ I am asked to find the derivative of $\left(x^x\right)^x$. So I said let $$y=(x^x)^x \Rightarrow \ln y=x\ln x^x \Rightarrow \ln y = x^2 \ln x.$$Differentiating both sides, $$\frac{dy}{dx}=y(2x\ln x+x)=x^{x^2+1}(2\ln x+1).$$ Now I checked this answer with Wolfram Alpha and I get that this is only correct when $x\in\mathbb{R},~x>0$. I see that if $x<0$ then $(x^x)^x\neq x^{x^2}$ but if $x$ is negative $\ln x $ is meaningless anyway (in real analysis). Would my answer above be acceptable in a first year calculus course? So, how do I get the correct general answer, that $$\frac{dy}{dx}=(x^x)^x (x+x \ln(x)+\ln(x^x)).$$ Thanks in advance.
If $y=(x^x)^x$ then $\ln y = x\ln(x^x) = x^2\ln x$. Then apply the product rule: $$ \frac{1}{y} \frac{dy}{dx} = 2x\ln x + \frac{x^2}{x} = 2x\ln x + x$$ Hence $y' = y(2x\ln x + x) = (x^x)^x(2x\ln x + x).$ This looks a little different to your expression, but note that $\ln(x^x) \equiv x\ln x$.
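As a sanity check (my own addition, assuming $x>0$ as in the answer), sympy confirms the formula symbolically:

```python
from sympy import symbols, diff, log, simplify

x = symbols('x', positive=True)         # the formula only makes sense for x > 0
y = (x**x)**x
expected = y * (2*x*log(x) + x)         # claimed derivative
print(simplify(diff(y, x) - expected))  # should print 0
```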
{ "language": "en", "url": "https://math.stackexchange.com/questions/408601", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
If $|\lambda_i|=1$ and $\mu_i^2=\lambda_i$, then $|\mu_i|=1$? If $|\lambda_i|=1$ and $\mu_i^2=\lambda_i$, then $|\mu_i|=1$? $|\mu_i|=|\sqrt{\lambda_i}|=\sqrt{|\lambda_i|}=1$. Is that possible?
Yes, that is correct. Alternatively, you could write $1=|\lambda_i|=|{\mu_i}^2|=|\mu_i|^2$, and use $|\mu_i|\ge 0$ to arrive at the unique solution $|\mu_i|=1$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/408649", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Prove that $U$ is a self-adjoint unitary operator Let $W$ be a finite dimensional subspace of an inner product space $V$ and $V=W\oplus W^\perp $. Define $U:V \rightarrow V$ by $U(v_1+v_2)=v_1-v_2$ where $v_1\in W$ and $v_2 \in W^\perp$. Prove that $U$ is a self-adjoint unitary operator. I know I have to show that $\parallel U(x) \parallel=\parallel x \parallel $ but can't proceed from this stage.
$\langle U(x),U(x)\rangle = \langle U(v_1+v_2) , U(v_1+v_2)\rangle = \langle v_1 - v_2, v_1 - v_2\rangle = \langle v_1,v_1\rangle + \langle v_2,v_2\rangle = \langle x,x\rangle$, where the last two equalities come from the fact that $\langle v_1,v_2\rangle = 0$. For self-adjointness, write $y = w_1 + w_2$ with $w_1\in W$, $w_2\in W^\perp$ and check that $\langle U(x),y\rangle = \langle v_1,w_1\rangle - \langle v_2,w_2\rangle = \langle x,U(y)\rangle$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/408681", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Which one is bigger: $\;35{,}043 × 25{,}430\,$ or $\,35{,}430 × 25{,}043\;$? Which of the two quantities is greater? Quantity A: $\;\;35{,}043 × 25{,}430$ Quantity B: $\;\;35{,}430 × 25{,}043$ What is the best and quickest way to get the answer without doing the full calculation, i.e., from a bird's-eye view?
Hint: Compare $a\times b$ with $$(a+x)\times (b-x)=ab-ax+bx-x^2=ab-x(a-b)-x^2$$ keeping in mind that in your question, $a> b$ and $x>0$.
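Plugging in the actual numbers (a quick check I added; here $a=35{,}043$, $b=25{,}430$, $x=387$) shows Quantity A is greater:

```python
a, b, x = 35_043, 25_430, 387      # a + x = 35_430 and b - x = 25_043
A = a * b                          # Quantity A
B = (a + x) * (b - x)              # Quantity B
print(A - B, x * (a - b) + x**2)   # 3870000 3870000: A - B > 0
```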
{ "language": "en", "url": "https://math.stackexchange.com/questions/408759", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "35", "answer_count": 15, "answer_id": 0 }
Solving $f_n=\exp(f_{n-1})$: Where is my mistake? I was trying to solve the recurrence $f_n=\exp(f_{n-1})$. My logic was this: $f_n -f_{n-1}=\exp(f_{n-1})-f_{n-1}$. The associated differential equation would then be $\dfrac{dg}{dn}=e^g-g$. If $f(m)=g(m)>0$ for some real $m$, then for $n>m$ we would have $g(n)>f(n)$. Solving the differential equation by separating the variables gives the solution $g(n)= \mathrm{inv}(\int_c^{n} \dfrac{dt}{e^t-t}+c_1)+c_2$ for some $c,c_1,c_2$. That solution seems correct, since $e^t-t=0$ has no real solution for $t$, so there are no issues with singularities near the real line. However, $\mathrm{inv}(\int_c^{n} \dfrac{dt}{e^t-t}+c_1)+c_2$ does NOT seem near $f_n$, let alone larger than it! So where did I make a big mistake in my logic? And can that mistake be fixed?
The problem is that the primitive $\displaystyle\int_\cdot^x\frac{\mathrm dt}{\mathrm e^t-t}$ does not converge to infinity when $x\to+\infty$. The comparison between $(f_n)$ and $g$ reads $$ \int_{f_1}^{f_n}\frac{\mathrm dt}{\mathrm e^t-t}\leqslant n-1, $$ for every $n\geqslant1$. When $n\to\infty$, the LHS converges to a finite limit hence one can be sure that the LHS and the RHS are quite different when $n\to\infty$ and that this upper bound becomes trivial for every $n$ large enough. Take-home message: to compare the sequence $(f_n)$ solving a recursion $f_{n+1}=f_n+h(f_n)$ and the function $g$ solving the differential equation $g'(t)=h(g(t))$ can be fruitful only when the integral $\displaystyle\int_\cdot^{+\infty}\frac{\mathrm dt}{h(t)}$ diverges.
{ "language": "en", "url": "https://math.stackexchange.com/questions/408822", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Exercise 3.15 [Atiyah/Macdonald] I have a question regarding a claim in Atiyah, Macdonald. A is a commutative ring with $1$, $F$ is the free $A$-module $A^n$. Assume that $A$ is local with residue field $k = A/\mathfrak m$, and assume we are given a surjective map $\phi: F\to F$ with kernel $N$. Then why is the following true? Since $F$ is a flat $A$-module, the exact sequence $0\to N \to F\overset\phi\to F\to 0$ gives an exact sequence $0\to k\otimes N \to k\otimes F \overset{1\otimes \phi}\to k\otimes F \to 0$. I can see that $F$ is a free $A$-module, and that the first sequence is exact. But how does flatness of $F$ tell me something about the second sequence? Thanks!
A general principle in homological algebra is the following: every short exact sequence (ses) of chain complexes gives rise to a long exact sequence (LES) in homology. One can apply this principle to many situations; in our case it can be used to show that every ses of $A$-modules gives rise to a LES in Tor. The LES in your situation is exactly $$\ldots \to \text{Tor}_1^A(k, N) \to \text{Tor}_1^A(k, F) \to \text{Tor}_1^A(k, F) \to k \otimes_A N \to k \otimes_A F \to k\otimes_A F \to 0.$$ Now we claim that $\text{Tor}_1^A (k,F) = 0$. Indeed, because $F$ is free (hence projective) we can always take the tautological projective resolution $$ \ldots \to 0 \to 0 \to F \to F \to 0 $$ and remove the first $F$, tensor with $k$, to get the chain complex $$ \ldots \to 0 \to 0 \to k \otimes_A F \to 0$$ from which it is clear that the first homology group of this complex is zero, i.e. $\text{Tor}_1^A(k,F) = 0$. Since the Tor term mapping onto $k\otimes_A N$ vanishes, the tail of the LES is exactly the desired short exact sequence.
{ "language": "en", "url": "https://math.stackexchange.com/questions/408881", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "13", "answer_count": 3, "answer_id": 0 }
Strictly convex sets If $S\subseteq \mathbb{R} ^2$ is closed and convex, we say $S$ is strictly convex if for any $x,y\in Bd(S)$ we have that the segment $\overline{xy} \not\subseteq Bd(S)$. Show that if $S$ is compact, convex and of constant width then $S$ is strictly convex. Any hint? Thank you.
The idea of celtschk works just fine. Suppose that the line $L$ meets $\partial S$ along a line segment. Let $a\in S$ be a point that maximizes the distance from $L$ among all points in $S$. This distance, say $w$, is the width of $S$. Let $b$ be any point of $L\cap \partial S$ which is not the orthogonal projection of $a$ onto $L$. Then the distance from $a$ to $b$ is greater than $w$, a contradiction. (The projection onto the line through $a$ and $b$ will have length $>w$.)
{ "language": "en", "url": "https://math.stackexchange.com/questions/408947", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 1, "answer_id": 0 }
Find all positive integers $x$ such that $13 \mid (x^2 + 1)$ I was able to solve this by hand to get $x = 5$ and $x = 8$. I didn't know if there were more solutions, so I just verified it with WolframAlpha. I set up the congruence relation $x^2 \equiv -1 \pmod{13}$ and just literally multiplied out. This led me to two questions: * *I was wondering how I would do this if the $x$'s were really large? It doesn't seem like multiplying out by hand could be the only possible method. *Further, what if there were 15 or 100 of these $x$'s? How do I know when to stop?
Starting with $2,$ the minimum natural number $>1$ co-prime with $13$: $2^1=2,2^2=4,2^3=8,2^4=16\equiv3,2^5=32\equiv6,2^6=64\equiv-1\pmod{13}$. As $2^6=(2^3)^2,$ $2^3=8$ is a solution of $x^2\equiv-1\pmod{13}$. Now, observe that $x^2\equiv a\pmod m\iff (-x)^2\equiv a$. So, $8^2\equiv-1\pmod {13}\iff(-8)^2\equiv-1$, and $-8\equiv5\pmod{13}$. If we need $x^2\equiv-1\pmod m$ where the integer $m=\prod p_i^{r_i}$, the $p_i$ being distinct primes with $p_i\equiv1\pmod 4$ for each $i$ (Proof), then $x^2\equiv-1\pmod {p_i^{r_i}}$. Applying the discrete logarithm with respect to any primitive root $g\pmod {p_i^{r_i}}$: $2\,\mathrm{ind}_gx\equiv \frac{\phi(p_i^{r_i})}2 \pmod {\phi(p_i^{r_i})}$. Indeed, if $y\equiv-1\pmod {p_i^{r_i}}$ then $y^2\equiv1$, so $2\,\mathrm{ind}_gy\equiv0 \pmod {\phi(p_i^{r_i})}$, which gives $\mathrm{ind}_gy\equiv \frac{\phi(p_i^{r_i})}2 \pmod {\phi(p_i^{r_i})}$ since $y\not\equiv1\pmod {p_i^{r_i}}$. Now apply CRT for the relatively prime moduli $p_i^{r_i}$. For example, if $m=13$, then $\phi(13)=12$ and $2$ is a primitive root of $13$. So $2\,\mathrm{ind}_2x\equiv 6\pmod {12}\implies \mathrm{ind}_2x\equiv3\pmod 6$, hence $x=2^3\equiv8\pmod{13}$ or $x=2^9=2^6\cdot2^3\equiv(-1)8\equiv-8\equiv5\pmod{13}$.
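For the OP's practical worries, note that solutions only have to be searched modulo $13$, after which they repeat with period $13$. A brute-force sketch (my addition):

```python
# All residues x (mod 13) with 13 dividing x^2 + 1:
print([x for x in range(13) if (x * x + 1) % 13 == 0])   # [5, 8]

# Every positive solution is of the form 13k + 5 or 13k + 8:
print(all((x * x + 1) % 13 == 0
          for k in range(100) for x in (13 * k + 5, 13 * k + 8)))  # True
```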
{ "language": "en", "url": "https://math.stackexchange.com/questions/409005", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 3, "answer_id": 1 }
If $f(a)$ is divisible by either $101$ or $107$ for each $a\in\Bbb{Z}$, then $f(a)$ is divisible by at least one of them for all $a$ I've been struggling with this problem for a while; I really don't know where to start: Let $f(x) \in \mathbb{Z}[X]$ be a polynomial such that for every value of $a \in \mathbb{Z}$, $f(a)$ is always a multiple of $101$ or $107$. Prove that $f(a)$ is divisible by $101$ for all values of $a$, or that $f(a)$ is divisible by $107$ for all values of $a$.
If neither of the statements "$f(x)$ is always divisible by $101$" or "$f(x)$ is always divisible by $107$" is true, then there exist $a,b\in{\bf Z}$ so that $107\nmid f(a)$ and $101\nmid f(b)$. It follows from hypotheses that $$\begin{cases} f(a)\equiv 0\bmod 101 \\ f(a)\not\equiv0\bmod 107\end{cases}\qquad \begin{cases}f(b)\not\equiv 0\bmod 101 \\ f(b)\equiv 0\bmod 107\end{cases}$$ Let $c\in{\bf Z}$ be $\equiv a\bmod 107$ and $\equiv b\bmod 101$. Is $f(c)$ divisible by $101$ or $107$?
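The existence of such a $c$ is exactly the Chinese Remainder Theorem, since $101$ and $107$ are coprime. A tiny illustration with sympy (the witnesses $a,b$ below are placeholder values, not derived from any particular $f$):

```python
from sympy.ntheory.modular import crt

a, b = 3, 5                        # stand-ins for the witnesses in the proof
c, _ = crt([107, 101], [a, b])     # c ≡ a (mod 107) and c ≡ b (mod 101)
print(c % 107 == a, c % 101 == b)  # True True
```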
{ "language": "en", "url": "https://math.stackexchange.com/questions/409081", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 0 }
How could we see that $\{n_k\}_k$ converges to $\infty$? Let $x \in \Bbb R\setminus \Bbb Q$ and suppose the sequence $\{\frac {m_k} {n_k}\}_k$ converges to $x$. The question is from this comment by Ilya: How could we see that $\{n_k\}_k$ converges to $\infty$? Thanks for your help.
Let $M$ be fixed. Show that there exists a $k_0$ such that $$n_k>M, \quad k\geq k_0.$$ Assuming the contrary, we get a subsequence $\{n_{k_j}\}_j$ such that $n_{k_j} \leq M$ for all $j\geq 1$. Note that $$\frac{m_{k_j}}{n_{k_j}}\to x.$$ Since such fractions can be written as $$\frac{m_{k_j}}{n_{k_j}}=\frac{A_{k_j}}{M!},$$ where the $A_{k_j}$ are integers, $\frac{m_{k_j}}{n_{k_j}}$ cannot tend to an irrational number (numbers of the form $A/M!$ form a discrete set, so a convergent sequence of them is eventually constant and its limit is rational). So there is no such subsequence $\{n_{k_j}\}_j$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/409166", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 0 }
Find the greatest common divisor of the polynomials: a) $X^m-1$ and $X^n-1$ $\in \Bbb Q[X]$; b) $X^m+a^m$ and $X^n+a^n$ $\in \Bbb Q[X]$, where $a \in \Bbb Q$ and $m,n \in \Bbb N^*$. I will appreciate any explanations! Thanks
Let $n=mq+r$ with $0\leq r<m $ then $$x^n-1= (x^m)^q x^r-1=\left((x^m)^q-1\right)x^r+(x^r-1)=(x^m-1)\left(\sum_{k=0}^{q-1}x^{mk}\right)x^r+(x^r-1)$$ and $$\deg(x^r-1)<\deg(x^m-1)$$ hence by doing the Euclidean algorithm in parallel for the integers and the polynomials, we find $$(x^n-1)\wedge(x^m-1)=x^{n\wedge m}-1$$
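The resulting identity $\gcd(x^n-1,x^m-1)=x^{\gcd(n,m)}-1$ is easy to spot-check with sympy (a sketch I added):

```python
from math import gcd
from sympy import symbols, gcd as poly_gcd

x = symbols('x')
m, n = 12, 8
print(poly_gcd(x**m - 1, x**n - 1))  # x**4 - 1
print(gcd(m, n))                     # 4
```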
{ "language": "en", "url": "https://math.stackexchange.com/questions/409201", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 2 }
If $T\colon V\to V$ is linear then $\text{Im}(T) = \ker(T)$ implies $T^2 = 0$ I'm trying to show that if $V$ is finite dimensional and $T\colon V\to V$ is linear then $\text{Im}(T) = \ker(T)$ implies $T^2 = 0$. I've tried taking a $v$ in the kernel; then since it's in the kernel we know it's in the image, so there is a $w$ such that $T(w) = v$, so then $TT(w) = 0$. But that's only for a specific $w$? Thanks
Hint: Note that $T^2$ should be read as $T\circ T$. You wish to show that $(\forall v\in V)((T\circ T)(v)=0)$. I think you can do this.
{ "language": "en", "url": "https://math.stackexchange.com/questions/409259", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 0 }
Proving $(A \land B) \to C$ and $A \to (B \to C)$ are equivalent Prove that $(A \land B) \rightarrow C$ is equivalent to $A \rightarrow (B \rightarrow C)$ in two ways: by semantics and by syntax. Can somebody give hints or an answer?
Semantically you can just consider two cases. 1) Suppose A is true, and 2) Suppose A is false. Since all atomic propositions in classical logic are either true or false, but not both, this method will work. Syntactically, we'll need to know the proof system (the axioms and the rules of inference for your system) to know how to solve this.
{ "language": "en", "url": "https://math.stackexchange.com/questions/409330", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 2 }
Insertion sort proof I am reading the Algorithm Design Manual by Skiena. It gives a proof of insertion sort by induction, which I reproduce below. Consider the correctness of insertion sort, which we introduced at the beginning of this chapter. The reason it is correct can be shown inductively: * *The basis case consists of a single element, and by definition a one-element array is completely sorted. *In general, we can assume that the first $n − 1$ elements of array $A$ are completely sorted after $n − 1$ iterations of insertion sort. *To insert one last element $x$ to $A$, we find where it goes, namely the unique spot between the biggest element less than or equal to $x$ and the smallest element greater than $x$. This is done by moving all the greater elements back by one position, creating room for $x$ in the desired location. I cannot understand the last paragraph (i.e., 3). Could someone please explain it to me with an example?
The algorithm will have the property that at each iteration, the array will consist of two subarrays: the left subarray will always be sorted, so at each iteration our array will look like $$ \langle\; \text{(a sorted array)}, \fbox{current element},\text{(the other elements)}\;\rangle $$ We work from left to right, inserting each current element where it belongs in the sorted subarray. To do that, we find where the current element belongs, shift the larger elements one position to the right, and place the current element where it belongs. Consider, for example, the initial array $\langle\; 7, 2, 6, 11, 4, 8, 5\;\rangle$. We start with $$ \langle\; \fbox{7}, 2, 6, 11, 4, 8, 5\;\rangle $$ The sorted part is initially empty, so inserting the 7 into an empty array will just give the array $\langle\;7\;\rangle$. In the second iteration we have $$ \langle\; 7, \fbox{2}, 6, 11, 4, 8, 5\;\rangle $$ and now your part (3) comes into play: we find that the element 2 belongs at the front of the sorted list, so we shift the 7 one position to the right and insert the 2 in its proper location, giving us $$ \langle\; 2, 7, \fbox{6}, 11, 4, 8, 5\;\rangle $$ Inserting the 6 in its proper place (after shifting the 7 to make room) in the sorted subarray yields $$ \langle\; 2, 6, 7, \fbox{11}, 4, 8, 5\;\rangle $$ Continuing this process, we'll have $$ \langle\; 2, 6, 7, 11, \fbox{4}, 8, 5\;\rangle $$ (since the 11 is already where it should be, so no shifting was necessary). Then we have $$ \langle\; 2, 4, 6, 7, 11, \fbox{8}, 5\;\rangle $$ (shifting the 7 and 11). Then $$ \langle\; 2, 4, 6, 7, 8, 11, \fbox{5}\;\rangle $$ and, finally, we use paragraph (3) one final time to get $$ \langle\; 2, 4, 5, 6, 7, 8, 11\;\rangle $$
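For completeness, here is the same walkthrough as runnable code (a minimal sketch of insertion sort, printing each stage of the example array):

```python
def insertion_sort(a):
    for i in range(1, len(a)):      # invariant: a[:i] is already sorted
        x = a[i]                    # the boxed "current element"
        j = i - 1
        while j >= 0 and a[j] > x:  # shift larger elements one step right
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = x                # drop x into its proper place
        print(a)                    # show the array after each insertion
    return a

insertion_sort([7, 2, 6, 11, 4, 8, 5])  # ends with [2, 4, 5, 6, 7, 8, 11]
```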
{ "language": "en", "url": "https://math.stackexchange.com/questions/409408", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Every principal ideal domain satisfies ACCP. Every principal ideal domain $D$ satisfies ACCP (the ascending chain condition on principal ideals). Proof. Let $(a_1) \subseteq (a_2) \subseteq (a_3) \subseteq \cdots$ be a chain of principal ideals in $D$. It can be easily verified that $I = \bigcup_{i\in\mathbb N} (a_i)$ is an ideal of $D$. Since $D$ is a PID, there exists an element $a \in D$ such that $ I = (a)$. Hence, $a \in (a_n)$ for some positive integer $n$. Then $I \subseteq (a_n) \subseteq I$. Therefore, $I = (a_n)$. For $t \geq n$, $(a_t) \subseteq I = (a_n) \subseteq (a_t)$. Thus, $(a_n) = (a_t)$ for all $t \geq n$. I have proved $I$ is an ideal in the following way: Let $x,y\in I$. Then there exist $i,j \in \mathbb{N}$ s.t. $x \in (a_i)$ and $y \in (a_j)$. Let $k \in \mathbb{N}$ s.t. $k>i,j$. Then $x \in (a_k)$ and $y \in (a_k)$. As $(a_k)$ is an ideal, $x-y \in (a_k)\subseteq I$ and $rx,xr \in (a_k)\subseteq I$. So $I$ is an ideal. Is it correct?
Your proof is right, but you can let $t = \max(i,j)$ and take any $k > t$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/409466", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 2, "answer_id": 0 }
How many resulting regions if we partition $\mathbb{R}^m$ with $n$ hyperplanes? This is a generalization of this question. So in $\mathbb{R}^2$, the problem is illustrated like so: Here, $n = 3$ lines divides $\mathbb{R}^2$ into $N_2=7$ regions. For general $n$ in the case of $\mathbb{R}^2$, the number of regions $N_2$ is $\binom{n+1}{2}+1$. But what about if we consider the case of $\mathbb{R}^m$, partitioned using $n$ hyperplanes? Is the answer $N_m$ still $\binom{n+1}{2}+1$, or will it be a function of $m$?
Denote this number as $A(m, n)$. We will prove $A(m, n) = A(m, n-1) + A(m-1, n-1)$. Consider removing one of the hyperplanes; the maximum number of regions with the remaining hyperplanes is $A(m, n-1)$. Then we add the hyperplane back. The number of regions the other hyperplanes cut it into is the same as the number of newly-added regions, and since this hyperplane is $m-1$ dimensional, that maximum number is $A(m-1, n-1)$. So this number satisfies $A(m, n) = A(m, n-1) + A(m-1, n-1)$, and the closed form $A(m,n)=\sum_{i=0}^m \binom{n}{i}$ can be simply derived by induction.
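The recurrence and the closed form agree, as a short check confirms (an illustrative sketch; the base cases encode one region when there are no hyperplanes, and one region when $m=0$):

```python
from math import comb
from functools import lru_cache

@lru_cache(maxsize=None)
def A(m, n):
    if n == 0 or m == 0:
        return 1
    return A(m, n - 1) + A(m - 1, n - 1)  # remove one hyperplane, add it back

print(A(2, 3))   # 7, the plane example from the question
print(all(A(m, n) == sum(comb(n, i) for i in range(m + 1))
          for m in range(1, 6) for n in range(8)))  # True
```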
{ "language": "en", "url": "https://math.stackexchange.com/questions/409518", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "16", "answer_count": 3, "answer_id": 1 }
Recurrence relations: How many numbers between 1 and 10,000,000 don't have the string 12 or 21 So the question is (to be solved with recurrence relations): How many numbers between 1 and 10,000,000 don't have the string 12 or 21? So my solution: $a_n=10a_{n-1}-2a_{n-2}$. The $10a_{n-1}$ represents the number of strings of length $n$ with digits from 0 to 9, and the $2a_{n-2}$ represents the strings of length $n$ with the string 12 or 21 included. Just wanted to know if my recursion is correct; if so, I'll be able to solve the rest. Thanks in advance!
We look at a slightly different problem, from which your question can be answered. Call a digit string good if it does not have $12$ or $21$ in it. Let $a_n$ be the number of good strings of length $n$. Let $b_n$ be the number of good strings of length $n$ that end with a $1$ or a $2$, Then $a_n-b_n$ is the number of good strings of length $n$ that don't end with $1$ or $2$. We have $$a_{n+1}=10(a_n-b_n) +9b_n.$$ For a good string of length $n+1$ is obtained by appending any digit to a good string that doesn't end with $1$ or $2$, or by appending any digit except the forbidden one to a good string that ends in $1$ or $2$. We also have $$b_{n+1}=2(a_n-b_n) + b_n.$$ For we obtain a good string of length $n+1$ that ends in $1$ or $2$ by appending $1$ or $2$ to a string that doesn't end with either, or by taking a string that ends with $1$ (respectively, $2$) and adding a $1$ (respectively, $2$). The two recurrences simplify to $$a_{n+1}=10a_n-b_n\qquad\text{ and}\qquad b_{n+1}=2a_n-b_n.$$ For calculational purposes, these are good enough. We do not really need a recurrence for the $a_i$ alone. However, your question perhaps asks about the $a_i$, so we eliminate the $b$'s. One standard way to do this is to increment $n$ in the first recurrence, and obtain $$a_{n+2}=10a_{n+1}-b_{n+1}.$$ But $b_{n+1}=2a_n-b_n$, so $$a_{n+2}=10a_{n+1}-2a_n+b_n.$$ But $b_n=10a_n-a_{n+1}$, and therefore $$a_{n+2}=9a_{n+1}+8a_n.$$ Remark: It would have been better to have $b_n$ as above, and $c_n$ the number of strings that do not end in $1$ or $2$, and to forget about $a_n$ entirely for a while.
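A brute-force count over all short digit strings confirms both the pair of recurrences and the single recurrence $a_{n+2}=9a_{n+1}+8a_n$ (my own sketch):

```python
from itertools import product

def brute(n):
    """Count length-n digit strings avoiding '12' and '21'."""
    return sum(1 for t in product('0123456789', repeat=n)
               if '12' not in ''.join(t) and '21' not in ''.join(t))

a = [brute(n) for n in range(1, 5)]
print(a)                                         # [10, 98, 962, 9442]
print(9 * a[1] + 8 * a[0], 9 * a[2] + 8 * a[1])  # 962 9442
```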
{ "language": "en", "url": "https://math.stackexchange.com/questions/409588", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Can every real number be represented by a (possibly infinite) decimal? Does every real number have a representation within our decimal system? The reason I ask is because, from beginning a mathematics undergraduate degree a lot of 'mathematical facts' I had previously assumed have been consistently modified, or altogether stripped away. I'm wondering if my subconscious assumption that every real number can be represented in such a way is in fact incorrect? If so, is there a proof? If not, why not? (Also I'm not quite sure how to tag this question?)
Irrational numbers were known to the ancient Greeks, as I expect you know. But it took humankind another 2000 years to come up with a satisfactory definition of them. This was mainly because nobody realised that a satisfactory definition was lacking. Once humankind realised this, various suggestions were proposed. One suggestion (Dedekind's) defined a real number as two infinite sets of rational numbers, which 'sandwiched' the real number; another suggestion (Cauchy's) defined a real number as an equivalence class of sequences obeying a certain convergence criterion. The details are available in many places. But the important point is that all of the reasonable definitions turned out to be equivalent -- the set of real numbers according to Dedekind's definition was 'the same' as the set of real numbers according to Cauchy's definition, although the definitions look completely different. Now, to your question: another reasonable definition of a real number is a non-terminating decimal expansion (we say non-terminating just to clear up the ambiguity that arises between e.g. 123.4599999... and 123.46 -- only the first is allowed). It turns out that this definition is equivalent to all the others. So your intuition is correct. But strictly speaking, your question is flawed: instead of asking whether every real number can be represented in this way, you should ask whether this representation of the real numbers is a valid one. And it is.
{ "language": "en", "url": "https://math.stackexchange.com/questions/409658", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "21", "answer_count": 5, "answer_id": 1 }
Existence and uniqueness of God Over lunch, my math professor teasingly gave this argument: God by definition is perfect. Non-existence would be an imperfection, therefore God exists. Non-uniqueness would be an imperfection, therefore God is unique. I have thought about it; please critique it from a mathematical/logical point of view: * *Why does/doesn't this argument fall through? Does it violate any logical deduction rules? *Can this statement be altered in a way that it belongs to ZF + something? What about any axiomatic system? *Is it possible to make mathematically precise the notion of "perfect"?
Existence is not a predicate. You may want to read Gödel's ontological proof, which you can find on Wikipedia. Equally good is the claim that uniqueness is an imperfection, since something which is perfect cannot be scarce and unique. Therefore God is inconsistent...?
{ "language": "en", "url": "https://math.stackexchange.com/questions/409735", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 3, "answer_id": 2 }
How to prove " $¬\forall x P(x)$ I have a step but can't figure out the rest. I have been trying to understand for hours and the slides don't help. I know that since I have "not P" that there is a case where not All(x) has P... but how do I show this logically? 1. $\forall x (P(x) → Q(x))$ Given 2. $¬Q(x)$ Given 3. $¬P(x)$ Modus Tollens using (1) and (2) 4. 5. 6.
First, you want to instantiate your quantified statement with a witness, say $x$: So from $(1)$ we get $$\;P(x) \rightarrow Q(x) \tag{$1\dagger$}$$ Then from $(1\dagger)$ with $(2)$ $\lnot Q(x)$, by modus tollens, you can correctly infer $(3)$: $\lnot P(x)$. So, from $(3)$ you can affirm the existence of an $x$ such that $\lnot P(x)$ holds: $\quad\exists x \lnot P(x)$ Then recall that, by DeMorgan's for quantifiers,$$\underbrace{\exists x \lnot P(x) \quad \equiv \quad \lnot \forall x P(x)}_{\text{these statements are equivalent}}$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/409821", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Linear Algebra dependent Eigenvectors Proof Problem statement: Let $n \ge 2 $ be an integer. Suppose that A is an $n \times n$ matrix and that $\lambda_1$, $\lambda_2$ are eigenvalues of A with corresponding eigenvectors $v_1$, $v_2$ respectively. Prove that if $v_1$, $v_2$ are linearly dependent then $\lambda_1 = \lambda_2$. I have an intuition as to why this is true, but am having difficulty formalizing a proof. What I have doesn't seem tight enough. If $v_1$ and $v_2$ are linearly dependent then $v_1$ lies in the span of $v_2$. If two eigenvectors lie in the span of one another then only one of them is required in order to form a basis of the eigenspace. All eigenvalues correspond to a single $n\times 1$ eigenvector or a set of $n\times 1$ linearly independent vectors. Since $v_1$ and $v_2$ are linearly dependent, we know that there can only be one eigenvalue that corresponds to the single eigenvector. Thus $\lambda_1$ must equal $\lambda_2$. Any thoughts or criticism are welcome. Thanks
You know that $$Av_1=\lambda_1v_1\\Av_2=\lambda_2v_2$$ If $v_1,v_2$ are linearly dependent, then $v_1=\mu v_2$ for some scalar $\mu$. Putting this in the first equation, $$A(\mu v_2) = \lambda_1(\mu v_2) \implies Av_2 = \lambda_1 v_2$$ This gives $\lambda_1=\lambda_2$ as desired. I think your idea is on the right track, but putting it in the above way gives more clarity.
{ "language": "en", "url": "https://math.stackexchange.com/questions/409897", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Prove inequality: $74 - 37\sqrt 2 \le a+b+6(c+d) \le 74 +37\sqrt 2$ Let $a,b,c,d \in \mathbb R$ such that $a^2 + b^2 + 1 = 2(a+b)$ and $c^2 + d^2 + 6^2 = 12(c+d)$. Prove the inequality without calculus (or Lagrange multipliers): $$74 - 37\sqrt 2 \le a+b+6(c+d) \le 74 +37\sqrt 2$$ The original problem is to find the max and min of $a+b+6(c+d)$ subject to the constraints above. Using some calculus, I found them, but could you solve it without calculus?
Hint: You can split this problem to find max and min of $a+b$ and $c+d$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/409979", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 1 }
supremum of a family of convex functions If $\{J_n\}$ is a family of convex functions on a convex set $U$ and $G(u)=\sup_n J_n(u)$, $u\in U$, how does one show that $G(u)$ is convex too? I've done this, but I am not sure about the properties of a supremum. Since $U$ is convex, $\alpha x +(1-\alpha) y\in U$ for all $x,y\in U$. If $G$ is convex, then it would satisfy $G(\alpha x +(1-\alpha) y)\leq \alpha G(x)+(1-\alpha) G(y)$, i.e. $\sup_n J_n(\alpha x +(1-\alpha) y)\leq \alpha \sup_n J_n(x)+(1-\alpha) \sup_n J_n(y)$. So, I've done this: $G(\alpha x +(1-\alpha) y)=\sup_n J_n(\alpha x +(1-\alpha) y)\leq \sup_n \{\alpha J_n(x)+(1-\alpha)J_n(y)\}\leq \alpha \sup_n J_n(x)+(1-\alpha) \sup_n J_n(y)=\alpha G(x)+(1-\alpha) G(y)$. Is this ok?
It seems that you assume that your $J_n$ are convex real-valued functions. One can prove that their pointwise supremum is convex without assuming that the common domain $U$ is convex, or even that the set of indices $n$ is finite. A function $J_n$ is convex iff its epigraph is a convex set. The epigraph of the supremum $G=\sup_n J_n$ is precisely the intersection of the epigraphs of the $J_n$'s. But an intersection of (an arbitrary number of) convex sets is convex, from which it follows that $G$ is convex.
{ "language": "en", "url": "https://math.stackexchange.com/questions/410040", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Maximum value of a product How to write the number $60$ as $\displaystyle\sum^{6}_{i=1} x_i$ such that $\displaystyle\prod^{6}_{i=1} x_i$ has maximum value? Thanks to everyone :) Is there a way to solve this using Lagrange multipliers?
(of course the $x_i$ must be positive, otherwise the product may be as great as you want) Hint: if you have $x_i \ne x_j$, substitute both with their arithmetic mean.
{ "language": "en", "url": "https://math.stackexchange.com/questions/410119", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 2 }
CDF of the distance of two random points on (0,1) Let $Y_1 \sim U(0,1)$ and $Y_2 \sim U(0,1)$. Let $X = |Y_1 - Y_2|$. Now the solution for the CDF in my book looks like this: $P(X < t) = P(|Y_1 - Y_2| < t) = P(Y_2 - t < Y_1 < Y_2 + t) = 1-(1-t)^2$ They give this result without explanation. How do they come up with the $1-(1-t)^2$ part? Can you help me find the explanation?
I want to change notation. Call the random variable called $Y_1$ in the problem by the name $X$. Call the random variable called $Y_2$ in the problem by the name $Y$. And finally, call the random variable called $X$ in the problem by the name $T$. Trust me, these name changes are a good idea! We need to assume that $X$ and $Y$ are independent. Fix $t$ between $0$ and $1$. In the usual coordinate plane, draw the square with corners $(0,0)$, $(1,0)$, $(1,1)$, and $(0,1)$. By independence, the joint distribution of $(X,Y)$ is uniform in our square. Now draw the two lines $y=x+t$ and $y=x-t$. You know well what these look like. Remember that $0\le t \le 1$ when drawing the lines. For a nice picture, you could for example pick $t$ around $\frac{1}{3}$. (Without drawing a picture, you are unlikely to understand what is really going on.) Note that $T\le t$ if and only if $|X-Y|\le t$ if and only if the pair $(X,Y)$ lands between our two lines. The probability that this happens is the area of the region between the two lines, divided by the area of the whole square, which is $1$. So we need to find the area of the region between the two lines. Now we find that area. The part of the square which is outside our region consists of two isosceles right-angled triangles. Each of these triangles has legs $1-t$, so together they make up a $(1-t)\times (1-t)$ square, with area $(1-t)^2$. Thus the area of the region between the two lines is $1-(1-t)^2$.
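The geometric answer is easy to confirm by simulation (a sketch I added):

```python
import numpy as np

rng = np.random.default_rng(0)
x, y = rng.random(1_000_000), rng.random(1_000_000)  # X, Y ~ U(0, 1)
for t in (0.25, 0.5, 0.75):
    print(np.mean(np.abs(x - y) <= t), 1 - (1 - t) ** 2)  # empirical vs exact
```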
{ "language": "en", "url": "https://math.stackexchange.com/questions/410200", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
probability of sum of two integers less than an integer Two integers [not necessarily distinct] are chosen from the set {1,2,3,...,n}. What is the probability that their sum is <=k? My approach is as follows. Let a and b be two integers. First we calculate the probability of the sum of a+b being equal to x [1<=x<=n]. WLOG let a be chosen first. For b= x-a to be positive, we must have 1<=a < x. This gives (x-1) possible values for a out of total n possible values. Probability of valid selection of a= (x-1)/n. For each valid selection of a, we have one and only one possible value of b. Only 1 value of b is then valid out of total n possible values. Thus probability of valid selection of b= 1/n. Thus probability of (a+b= x) = (x-1)/n(n-1). Now probability of (a+b<=k) = Probability of (a+b= 2) + probability of (a+b= 3) + ... + probability of (a+b= k) = {1+2+3+4+5+...+(k-1)}/n(n-1) = k(k-1)/n(n-1). Can anybody please check if my approach is correct here?
Notice if $k\le 1$ the probability is $0$, and if $k\ge 2n$ the probability is $1$, so let's assume $2\le k\le 2n-1$. For some $i$ satisfying $2\le i\le 2n-1$, how many ways can we choose $2$ numbers to add up to $i$? If $i\le n+1$, there are $i-1$ ways. If $i\ge n+2$, there are $2n-i+1$ ways. Now, suppose $k\le n+1$, so by summing we find: $$\sum_{i=2}^{k}i-1=\frac{k(k-1)}{2}$$ If $k\ge n+2$, if we sum from $i=2$ to $i=n+1$ we get $\frac{(n+1)n}{2}$, and then from $n+1$ to $k$ we get: $$\sum_{i=n+2}^k2n-i+1=\frac{1}{2}(3n-k)(k-n-1)$$ Adding the amount for $i\le n+1$ we get: $$2kn-\frac{k^2}{2}+\frac{k}{2}-n^2-n$$ Since there are $n^2$ choices altogether, we arrive at the following probabilities: $$\begin{cases}\frac{k(k-1)}{2n^2}&1\le k\le n+1\\\frac{4kn-k^2+k-2n^2-2n}{2n^2}&n+2\le k\le 2n\end{cases}$$
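Brute-force enumeration over all ordered pairs agrees with the piecewise formula (an illustrative check I added, assuming the two choices are ordered and independent, so there are $n^2$ equally likely pairs):

```python
def exact(n, k):
    # enumerate all ordered pairs (a, b) drawn from {1, ..., n}
    return sum(a + b <= k for a in range(1, n + 1)
               for b in range(1, n + 1)) / n**2

def formula(n, k):
    if k <= n + 1:
        return k * (k - 1) / (2 * n**2)
    return (4*k*n - k**2 + k - 2*n**2 - 2*n) / (2 * n**2)

print(all(abs(exact(n, k) - formula(n, k)) < 1e-12
          for n in range(2, 8) for k in range(2, 2 * n + 1)))  # True
```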
{ "language": "en", "url": "https://math.stackexchange.com/questions/410320", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
Test for convergence of improper integrals $\int_{0}^{1}\frac{\sqrt{x}}{(1+x)\ln^3(1+x)}dx$ and $\int_{1}^{\infty}\frac{\sqrt{x}}{(1+x)\ln^3(1+x)}dx$ I need to test whether the integrals below converge or diverge: 1) $\displaystyle\int_{0}^{1}\frac{\sqrt{x}}{(1+x)\ln^3(1+x)}dx$ 2) $\displaystyle\int_{1}^{\infty}\frac{\sqrt{x}}{(1+x)\ln^3(1+x)}dx$ I tried comparing with $\displaystyle\int_{0}^{1}\frac{1}{(1+x)\ln^3(1+x)}dx$ and $\displaystyle\int_{0}^{1}\frac{\sqrt{x}}{(1+x)}dx$ but ended up with nothing. Do you have any suggestions? Thanks!
1) The integral diverges, since as $x\to 0$ $$\frac{\sqrt{x}}{(1+x)\ln^3(1+x)}\sim \frac{\sqrt{x}}{(1)(x^3) }= \frac{1}{x^{5/2}},$$ and $\displaystyle\int_0^1 x^{-5/2}\,dx$ diverges. Note: $$ \ln(1+x) = x - \frac{x^2}{2} + \dots. $$ 2) For the second integral, as $x\to\infty$ the integrand behaves as $$ \frac{\sqrt{x}}{(1+x)\ln^3(1+x)}\sim \frac{\sqrt{x}}{x\ln^3 x} = \frac{1}{\sqrt{x}\,\ln^3 x},$$ and $\displaystyle\int_2^\infty\frac{dx}{\sqrt{x}\,\ln^3 x}$ diverges (substitute $x=e^t$ to get $\displaystyle\int \frac{e^{t/2}}{t^3}\,dt$, whose integrand tends to $\infty$), so this integral diverges as well.
{ "language": "en", "url": "https://math.stackexchange.com/questions/410378", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 1 }
Under What Conditions and Why Can One Move an Operator under the Integral? Given a function space $V$ of some subset of real-valued functions on the real line, a linear operator $L: V \rightarrow V$, and $f,g \in V$, define $$ h(t) = \int_{\mathbb{R}}f(u)g(u-t)du $$ Further, assume $h \in V$. Is the following true? $$L(h(t)) = \int_{\mathbb{R}}f(u)L(g(u-t))du $$ If not, under what assumptions is it true? If yes, why?
An arbitrary operator cannot be moved into the convolution. For example, if $Lh=\psi h$ for some nonconstant function $\psi$, then $$\psi(t) \int_{\mathbb R} f(u) g(u-t) \,du \ne \int_{\mathbb R} f(u) g(u-t) \psi(u-t) \,du $$ for general $f,g$. However, the identity is true for translation-invariant operators, i.e., those for which $L(g(t-c))=L(g)(t-c)$ for every $c\in\mathbb R$. Indeed, for such operators $$f*(Lg)= \int_{\mathbb{R}}f(u)L(g)(u-t)\,du =\int_{\mathbb{R}}f(u)L(g(u-t))\,du = L(f*g)$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/410429", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Theorems' names that don't credit the right people The point of this question is to compile a list of theorems that don't give credit to right people in the sense that the name(s) of the mathematician(s) who first proved the theorem doesn't (do not) appear in the theorem name. For instance the Cantor Schröder Bernstein theorem was first proved by Dedekind. I'd also like to include situations in which someone conjectured something, didn't prove it, then someone else conjectured the same thing later, also without proving it, and was credited with having first conjectured it. Similar unfair things which I didn't remember to include might also be considered. Some kind of reference is appreciated.
Nobody's mentioned the Pythagorean theorem yet?
{ "language": "en", "url": "https://math.stackexchange.com/questions/410457", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "71", "answer_count": 24, "answer_id": 17 }
Inner Product Spaces: $N(T^{\star}\circ T) = N(T)$ (A PROOF) Let $T$ be a linear operator on an inner product space. I really just want a hint as to how to prove that $N(T^{\dagger}\circ T) = N(T)$, where "$^\dagger$" stands for the conjugate transpose. Just as an aside, how should I read to myself the following symbolism:
Hint: Let $V$ denote your inner product space. Clearly $N(T)\subseteq N(T^* T)$, so you really want to show that $N(T^* T)\subseteq N(T)$. Suppose $x\in N(T^* T)$. Then $T^* Tx = 0$, so we have $\langle T^* Tx, y\rangle = 0$ for all $y\in V$. Can you see where to go from here?
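Numerically the claim is easy to observe for a random rank-deficient matrix, where $T^*T$ becomes `A.conj().T @ A` (a sketch I added; it assumes NumPy and SciPy):

```python
import numpy as np
from scipy.linalg import null_space

rng = np.random.default_rng(0)
# a deliberately rank-deficient 4x4 complex matrix (rank <= 2)
A = (rng.standard_normal((4, 2)) + 1j * rng.standard_normal((4, 2))) \
    @ (rng.standard_normal((2, 4)) + 1j * rng.standard_normal((2, 4)))
AhA = A.conj().T @ A
print(np.linalg.matrix_rank(A), np.linalg.matrix_rank(AhA))  # equal ranks
N = null_space(AhA)           # basis of N(T* T)
print(np.allclose(A @ N, 0))  # True: N(T* T) sits inside N(T)
```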
{ "language": "en", "url": "https://math.stackexchange.com/questions/410536", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Concept about series tests I have five kinds of tests here: 1. Divergence test 2. Ratio test 3. Integral test 4. Comparison test 5. Alternating series test And a few questions here. 1. Are tests 1, 2, 3, 4 only available for positive series, and is the alternating series test only for alternating series? 2. To show $\sum_{n=1}^{\infty}(-1)^n$ diverges I can't use the alternating series test, right? It just tells me the series doesn't converge. So I tried to use the divergence test, but it seems like the divergence test is not applicable to alternating series.
I assume that for (1) you mean the theorem that says that if the $n^\text{th}$ term does not approach 0 as $n \to \infty$ then the series diverges. This test does not require the terms to be positive, so you can apply it to show that the series $\sum_{n=1}^\infty (-1)^n$ diverges. The ratio test does not require the terms to be positive. You end up taking the absolute value in this test, so signs do not matter. The usual formulations of the integral test and comparison test only apply to series with positive terms. The alternating series test is only for alternating series, as the name suggests. It has a couple of other requirements also. The alternating series test never tells you that a series diverges. If the hypotheses are met, then the conclusion is that the series converges.
{ "language": "en", "url": "https://math.stackexchange.com/questions/410669", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Online Model Theory Classes Since "model theory" is a rather generic name, I have encountered lots of irrelevant results (like mathematical modelling etc.) when I searched for videos on model theory, the branch of mathematical logic. So, do you know of/have you ever seen any online lecture videos on model theory? Any relevant answer will be appreciated...
If you are fluent in French, here are the lecture notes of Tuna Altinel's course: http://math.univ-lyon1.fr/~altinel/Master/m11415.html
{ "language": "en", "url": "https://math.stackexchange.com/questions/410713", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 2, "answer_id": 0 }
Is this kind of space metrizable? There is a nice result from Tkachuk V V. Spaces that are projective with respect to classes of mappings[J]. Trans. Moscow Math. Soc, 1988, 50: 139-156: If the closure of every discrete subset of a space is compact then the whole space is compact. The proof can be seen here, by Brian M. Scott. Then these questions arise naturally: Question 1: If the closure of every discrete subset of a space is countably compact, then is the whole space countably compact? Question 2: If the closure of every discrete subset of a space is metrizable, then is the whole space metrizable?
$\newcommand{\cl}{\operatorname{cl}}$The first conjecture is true at least for $T_1$ spaces. If $X$ is $T_1$ and not countably compact, then $X$ has an infinite closed discrete subspace, which is obviously not countably compact. Thus, if every discrete subspace of a $T_1$ space $X$ is countably compact, so is $X$. It’s at least consistent that the second conjecture is false. It is consistent that there be a compact Suslin line, i.e., a complete dense linear order $\langle X,\preceq\rangle$ with endpoints such that the order topology on $X$ is ccc but not separable. (E.g., the existence of a Suslin line follows from the combinatorial principle $\diamondsuit$, which holds in $\mathsf{V=L}$.) Suppose that $F\subseteq X$ is closed, $x\in F$, and $F\cap[x,\to)$ is open in $F$. If $x=\min F$, then $x$ is not a left pseudogap of $F$. (For a definition of left and right pseudogaps see this answer.) Otherwise, $F\cap(\leftarrow,x)$ is a non-empty closed subset of $F$ and therefore of $X$. It follows that $F\cap(\leftarrow,x)$ is compact and has a maximum element $y$. But then $F\cap[x,\to)=\{z\in F:y\prec z\}$ is open in the order topology on $F$, and $x$ is not a left pseudogap of $F$. A similar argument shows that $F$ has no right pseudogaps and hence that the subspace topology on $F$ is identical to the order topology, so that $F$ with its subspace topology is a LOTS. Let $D\subseteq X$ be discrete. The spread of any LOTS is equal to its cellularity, so $D$ is countable. Let $Y=\cl_XD$; then $Y$ is a separable, compact LOTS. Let $$J=\{x\in Y:x\text{ has an immediate successor in }Y\}\;;$$ since $Y$ is a LOTS, $w(Y)=c(Y)+|J|=\omega+|J|$. For $x\in J$ let $x^+$ be the immediate successor of $x$ in $Y$; then $\{(x,x^+):x\in J\}$ is a pairwise disjoint family of non-empty open intervals in $X$, so $|J|\le\omega$, and $w(Y)=\omega$. It now follows from the Uryson metrization theorem that $Y$ is metrizable and hence that every discrete subset of $X$ has metrizable closure.
{ "language": "en", "url": "https://math.stackexchange.com/questions/410796", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10", "answer_count": 3, "answer_id": 0 }
Prove $x^2+y^2+z^2 \ge 14$ with constraints Let $0<x\le y \le z,\ z\ge 3,\ y+z \ge 5,\ x+y+z = 6.$ Prove the inequalities without calculus (or Lagrange multipliers): $I)\ x^2 + y^2 + z^2 \ge 14$ $II)\ \sqrt x + \sqrt y + \sqrt z \le 1 + \sqrt 2 + \sqrt 3$ My teacher said the method that can solve problem I can be used to solve problem II. But I don't know what method my teacher was talking about, so the hint is useless to me. Please help. Thanks
Hint: $$x^2+y^2+z^2 \ge 14 = 1^2+2^2+3^2\iff (x-1)(x+1)+(y-2)(y+2)+(z-3)(z+3) \ge 0$$ $$\iff (z-3)[(z+3)-(y+2)] + (y+z-5)[(y+2)-(x+1)] + (x+y+z-6)(x+1) \ge 0$$ (always true, since under the constraints every factor in each product is nonnegative)
{ "language": "en", "url": "https://math.stackexchange.com/questions/410845", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Arctangent integral How come this is correct: $$\int \dfrac{3}{(3x)^2 + 1} dx = \arctan (3x) + C$$ I learned that $$\int \dfrac{1}{x^2+1} dx = \arctan(x) + C$$ But I don't see how you can get the first one from the other. The $1$ in the denominator especially confuses me.
We can say even more in the general case: if a function $\;f\;$ is differentiable, then $$\int \frac{f'(x)}{1+f(x)^2}dx=\arctan(f(x)) + K \quad (K\text{ a constant})$$ which you can quickly verify by differentiating and applying the chain rule. In your particular case we simply have $\;f(x)=3x\;$ ...
{ "language": "en", "url": "https://math.stackexchange.com/questions/410932", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Need help with $\int \dfrac{2x}{4x^2+1}\,dx$ We want $$\int \dfrac{2x}{4x^2+1}\,dx$$ I only know that $\ln(4x^2 + 1)$ would have to be in the mix, but what am I supposed to do with the $2x$ in the numerator?
Again, as in your past question, there's a general case here: if $\,f\,$ is differentiable then $$\int\frac{f'(x)}{f(x)}dx=\log|f(x)|+K$$ Here, we have $$\frac{2x}{4x^2+1}=\frac14\frac{(4x^2+1)'}{4x^2+1}\ldots$$
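sympy confirms the resulting antiderivative $\frac14\ln(4x^2+1)+K$ (a verification sketch I added):

```python
from sympy import symbols, integrate, diff, simplify

x = symbols('x')
f = 2*x / (4*x**2 + 1)
F = integrate(f, x)
print(F)                         # log(4*x**2 + 1)/4
print(simplify(diff(F, x) - f))  # 0
```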
{ "language": "en", "url": "https://math.stackexchange.com/questions/411000", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
Integral defined as a limit using regular partitions Definition. Given a function $f$ defined on $[a,b]$, let $$\xi_k \in [x_{k-1},x_k],\quad k=1,\ldots,n$$ where $$ x_k=a+k\frac{b-a}n, \quad k=0,\ldots,n \; .$$ One says that $f$ is integrable on $[a,b]$ if the limit $$\lim_{n\to\infty}\frac{b-a}n\sum_{k=1}^n f(\xi_k)$$ exists and is independent of the $\xi_k$. I seek a proof of the following: Theorem. If $a<c<b$ and $f$ is integrable on $[a,c]$ and $[c,b]$, then $f$ is integrable on $[a,b].$
HINT: Consider two cases: 1. When $c$ is a tag of a sub-interval $[x_{k},x_{k+1}]$ of $\dot{P}$, where $\dot{P}$ is your tagged partition $\{(I_{i},t_{i})\}_{i=1}^{n}$ with $I_{i}=[x_{i},x_{i+1}]$; 2. When $c$ is an end-point of a sub-interval of $\dot{P}$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/411079", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Name of this "cut 'n slide" fractal? Can you identify this fractal--if in fact is has a name--based either upon its look or on the method of its generation? It's created in this short video. It looks similar to a dragon fractal, but I don't think they are the same. Help, please?
That is the twindragon. It is a two-dimensional self-similar set; that is, it is composed of two smaller copies of itself scaled by the factor $\sqrt{2}$ (the figures originally accompanying this answer are omitted here). Using this self-similarity, one can construct a tiling of the plane with fractal boundary. Analysis of the fractal dimension of the boundary is also possible, but it's a bit harder. It's all truly great fun!
{ "language": "en", "url": "https://math.stackexchange.com/questions/411166", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
How to evaluate double integrals over a region? Evaluate the double integral $\iint_D(1/x)dA$, where D is the region bounded by the circles $x^2+y^2=1$ and $x^2+y^2=2x$ Alright so first I converted to polar coordinates: $$ x^2 + y^2 = 1 \ \Rightarrow \ r = 1 \ \ , \ \ x^2 + y^2 = 2x \ \Rightarrow \ r^2 = 2r \cos θ \ \Rightarrow \ r = 2 \cos θ \ . $$ Points of intersection: $ 2 \cos θ = 1 \ \Rightarrow \ θ = ±π/3 \ , $ $ 2 \cos θ > 1 $ for θ in (-π/3, π/3). So, $$ \int \int_D \ (1/x) \ \ dA \ \ = \ \ \int_{-π/3}^{π/3} \ \int_1^{2 \cos θ} \ \frac{1}{r \cos θ} \ \ r dr \ dθ $$ $$ = \ \ \int_{-π/3}^{π/3} \ \int_1^{2 \cos θ} \ \sec θ \ \ dr \ dθ \ \ = \ \ \int_{-π/3}^{π/3} \ (2 \cos θ - 1) \sec θ \ \ dθ $$ $$ = \ \ 2 \ \int_0^{π/3} \ (2 - \sec θ) \ \ dθ \ \ , $$ (since the integrand is even) $$ = \ \ 2 \ (2 θ \ - \ \ln |\sec θ + \tan θ| \ ) \vert_0^{π/3} \ \ = \ \ \frac{4π}{3} \ - \ 2 \ln(2 + √3) \ \ . $$ I'm not sure this is right. Could someone look over it?
Of course, you can do the problem by using polar coordinates. If I've understood correctly, you want to check the limits for the double integral. Plotting the region (figure omitted), the part of the disk $r\le 2\cos\theta$ lying outside $r=1$ is our $D$, which confirms your limits: $$1\le r\le 2\cos\theta,\qquad -\frac{\pi}{3}\le\theta\le\frac{\pi}{3}.$$
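As a quick sanity check of the computed value $\frac{4\pi}{3}-2\ln(2+\sqrt3)$, here is a small numerical sketch in Python using SciPy's `dblquad` (in polar coordinates the integrand $(1/x)\,r$ reduces to $\sec\theta$):

    import numpy as np
    from scipy.integrate import dblquad

    # integrand (1/x) * r = sec(theta) in polar coordinates; inner variable is r
    val, err = dblquad(lambda r, th: 1.0 / np.cos(th),
                       -np.pi / 3, np.pi / 3,           # theta limits
                       lambda th: 1.0,                  # inner lower limit r = 1
                       lambda th: 2.0 * np.cos(th))     # inner upper limit r = 2 cos(theta)
    exact = 4 * np.pi / 3 - 2 * np.log(2 + np.sqrt(3))
    print(val, exact)  # both should be about 1.5549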
{ "language": "en", "url": "https://math.stackexchange.com/questions/411224", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 4, "answer_id": 0 }
putting a complex structure on a graph I am studying Riemann Surfaces, and an example that comes up in two of my references, as a preamble to smooth affine plane curves, is the following: Let $D$ be a domain in the complex plane, and let $g$ be holomorphic on $D$; giving the graph the subspace topology, and letting the charts be open subsets of the graph with maps given by projection, we get an atlas and hence the graph admits a complex structure. Clearly on any overlap the transition function will be identity so that is all good; my confusion is, why did $g$ have to be holomorphic to begin with? Couldn't we have done the exact same thing with a continuous function? For that matter, couldn't we take any function at all, and let the atlas of the graph consist of one chart, namely the whole set, with the projection map? I think the issue, at least for the more extreme second example, would be that the structure would not be compatible with the subspace topology, but I don't see what the issue is with $g$ just being a continuous function. Thank you for any insight.
Suppose $g$ is a continuous complex-valued function on $D$. Then the set $\Omega=\{(z,g(z))\in\mathbb C^2: z\in D\}$, which gets the subspace topology from $\mathbb C^2$, is homeomorphic to $D$ via $z\mapsto (z,g(z))$. By declaring this homeomorphism to be an isomorphism of complex structures, we can make $\Omega$ a complex manifold. No problem at all. By construction, $\Omega$ is an embedded submanifold of $\mathbb C^2$ in the sense of topological manifolds. But in general it is not a complex submanifold of $\mathbb C^2$, because the inclusion map $\Omega\to \mathbb C^2$ is not holomorphic. Indeed, due to our definition of the complex structure on $\Omega$, the inclusion map is holomorphic if and only if the map $D\to\mathbb C^2$ defined by $z\mapsto (z,g(z))$ is holomorphic. The latter happens precisely when $g$ is a holomorphic function.
{ "language": "en", "url": "https://math.stackexchange.com/questions/411279", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Ideal of smooth function on a manifold vanishing at a point I'm trying to prove the following lemma: let $M$ be a smooth manifold and consider the algebra $C^{\infty}(M)$ of smooth functions $f\colon M \to \mathbb{R}$. Given $x_0 \in M$, consider the ideals $$\mathfrak{m}_{x_0} := \{f\in C^{\infty}(M) : f(x_0)=0\},$$ $$\mathfrak{I}_{x_0} := \{f\in C^{\infty}(M) : f(x_0)=0, df(x_0)=0\}.$$ Then $\mathfrak{I}_{x_0} = \mathfrak{m}^2_{x_0}$, i.e. any function $f$ vanishing at $x_0$ together with its derivatives can be written as $$f=\sum_kg_kh_k, \quad g_k, h_k \in \mathfrak{m}_{x_0}.$$ I have no idea about how to prove the inclusion $\mathfrak{I}_{x_0} \subseteq \mathfrak{m}^2_{x_0}$.
$\forall i \in \{1,2,\dots,n\}$, let $g_i(x_1, \dots, x_n)=\int_0^1\frac{\partial f}{\partial x_i}(tx_1, \dots, tx_n)dt$, it is easy to verify that $f=\sum_{i=1}^nx_ig_i$
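Unpacking this hint (a sketch: work in a chart centered at $x_0$, so that $x_0=0$ and $f(0)=0$; a bump function then transfers the local statement back to $M$): by the fundamental theorem of calculus, $$f(x)=f(x)-f(0)=\int_0^1\frac{d}{dt}f(tx_1,\dots,tx_n)\,dt=\sum_{i=1}^n x_i\int_0^1\frac{\partial f}{\partial x_i}(tx)\,dt=\sum_{i=1}^n x_i\,g_i(x),$$ and since $g_i(0)=\frac{\partial f}{\partial x_i}(0)=0$ when $df(x_0)=0$, both factors $x_i$ and $g_i$ lie in $\mathfrak m_{x_0}$, giving $f\in\mathfrak m_{x_0}^2$.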
{ "language": "en", "url": "https://math.stackexchange.com/questions/411339", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 2, "answer_id": 1 }
I found out that $p^n$ only has the factors $\{p^{n-1}, p^{n-2}, \ldots, p^0=1\}$, is there a reason why? So I've known this for a while, and only finally thought to ask about it.. so, any prime number ($p$) to a power $n$ has the factors $\{p^{n-1},\ p^{n-2},\ ...\ p^1,\ p^0 = 1\}$ So, e.g., $5^4 = 625$, its factors are: $$ \{625 = 5^4,\ 125 = 5^3,\ 25 = 5^2,\ 5 = 5^1,\ 1 = 5^0\} $$ Now, my best guess is that it's related to its prime factorisation, $5*5*5*5$, but other than that, I have no idea. So my question is, why does a prime number raised to a power $n$ have only the factors of $p^{n-1}$ and so on like above? No, I'm not talking about prime factorisation, I'm talking about normal factors (like 12's factors are 1, 2, 3, 4, 6, 12) and why $p^n$ doesn't have any other (normal) factors other than $p^{n-1} \ldots$
Let $ab=p^n$. Consider the prime factorization of the two terms on the left hand side. If any prime other than $p$ appears on the left, say $q$, then it appears as an overall factor and so we construct a prime factorization of $ab$ that contains a $q$. But then the right hand side has only $p$ as prime factors. Since the two prime factorizations are the same, and factorizations are unique, this is impossible. Hence $a,b$ are formed entirely of $p$s in their prime decomposition.
{ "language": "en", "url": "https://math.stackexchange.com/questions/411461", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 5, "answer_id": 1 }
property of equality The property of equality says: "The equals sign in an equation is like a scale: both sides, left and right, must be the same in order for the scale to stay in balance and the equation to be true." So for example in the following equation, I want to isolate the x variable. So I cross-multiply both sides by 3/5: 5/3x = 55 x = 3/5*55 What I did to one side, I had to do to the other. However, take a look at the following problem: y - 10/3 = -5/6(x + 2) y = -5/6x - 10/6 + 10/3 So far we just used the distributive property of multiplication to distribute -5/6 over the quantity x + 2. Then since we isolate y, we add 10/3 to both sides. However, now in order to add the two fractions, we find the least common denominator is 6, so we multiply 10/3 by 2: y = -5/6x - 10/6 + 20/6. There is my question. Why can we multiply a term on the one side by 2, without having to do it on the other side? After writing this question here, I'm now thinking that because 10/3 is equal to 20/6 we really didn't actually add anything new to the one side, and that's why we didn't have to add it to the other side.
You did not multiply it by two; you multiplied it by $\frac{2}{2}=1$ instead.
{ "language": "en", "url": "https://math.stackexchange.com/questions/411543", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
What am I missing here? That's a silly question, but I'm missing something here. If $x'= Ax$ and $A$ is a linear operator on $\mathbb{R}^n$, then $x'_i = \sum_j a_{ij} x_j$ with $[A]_{ij} =a_{ij} = \frac{\partial x'_i}{\partial x_j}$, therefore $\frac{\partial}{\partial x_i'} = \sum_j \frac{\partial x_j}{\partial x'_i} \frac{\partial}{\partial x_j}$. However $\frac{\partial}{\partial x_i'} = \frac{\partial}{\partial \sum_j a_{ij} x_j} = \sum_j a_{ij} \frac{\partial}{\partial x_j} = \sum_j \frac{\partial x'_i}{\partial x_j} \frac{\partial}{\partial x_j}$ !!! What's wrong here? Thanks in advance.
When you wrote $\frac{\partial}{\partial x_i'} = \frac{\partial}{\partial \sum_j a_{ij} x_j} = \sum_j a_{ij} \frac{\partial}{\partial x_j} $ the $a_{ij}$ somehow climbed from the denominator to the numerator
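Making the correct bookkeeping explicit: since $x=A^{-1}x'$, each $x_j=\sum_m (A^{-1})_{jm}x'_m$, so the chain rule gives $$\frac{\partial}{\partial x_i'}=\sum_j\frac{\partial x_j}{\partial x_i'}\frac{\partial}{\partial x_j}=\sum_j (A^{-1})_{ji}\,\frac{\partial}{\partial x_j};$$ the coefficients are entries of $A^{-1}$, not of $A$.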
{ "language": "en", "url": "https://math.stackexchange.com/questions/411602", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Solving elementary row operations So I am faced with the following: $$ \begin{align} x_1 + 4x_2 - 2x_3 +8x_4 &=12\\ x_2 - 7x_3 +2x_4 &=-4\\ 5x_3 -x_4 &=7\\ x_3 +3x_4 &=-5 \end{align}$$ How should I approach this problem? In other words, what is the next elementary row operation that I should attempt in order to solve it? I know how to do with 3 equations by using the augmented method but this got me a little confused.
HINT: Use elimination/substitution or cross multiplication to solve for $x_3,x_4$ from the last two simultaneous equations. Putting the values of $x_3,x_4$ in the second equation, you will get $x_2$. Putting the values of $x_2,x_3,x_4$ in the first equation, you will get $x_1$.
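If you want to check the back-substitution, here is a small sketch in Python that solves the system directly (the matrix rows follow the four equations above):

    import numpy as np

    # coefficient matrix and right-hand side of the four equations
    A = np.array([[1, 4, -2, 8],
                  [0, 1, -7, 2],
                  [0, 0, 5, -1],
                  [0, 0, 1, 3]], dtype=float)
    b = np.array([12, -4, 7, -5], dtype=float)
    print(np.linalg.solve(A, b))  # -> [ 2.  7.  1. -2.]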
{ "language": "en", "url": "https://math.stackexchange.com/questions/411667", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 1 }
Show that Total Orders does not have the finite model property I am not sure whether my answer to this problem is correct. I would be grateful if anyone could correct my mistakes or help me to find the correct solutions. The problem: Show that Total Orders does not have the finite model property by finding a sentence A which is refuted only in models with an infinite domain. Just for reference purpose only: A theory T has the finite model property if and only if whenever $T\nvdash A$ there is a model $\mathcal{M}$ with a finite domain, such that $\mathcal{M}$ satisfies the theory $T$ but $A$ does not hold in $\mathcal{M}$. The theory of Total Orders (in the language with quantifiers, propositional connectives, identity and one new binary relation symbol '<') defined as the set of consequences of the following three formulas: 1. $(\forall x)\neg(x<x)$ 2. $(\forall x)(\forall y)(\forall z)((x<y\wedge y<z)\supset x<z)$ 3. $(\forall x)(\forall y)(x<y\vee x=y\vee y<x)$ My answer is quite simple, but it is too simple that I doubt whether it is correct. I am trying to say that the statement "there is a least element" is refuted only in models with an infinite domain, for example the integers. My sentence A is $\neg((\exists y)(\forall x)(y<x))$. I am not sure whether I am correct. Please correct me if I am wrong and please say so if there is any better answer. Many thanks in advance!
Looks like a good candidate to me. As you say, this clearly holds in every finite model of our theory, but infinite counterexamples exist, like $\mathbb{Z}$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/411715", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 4, "answer_id": 3 }
Limit of $\lim\limits_{x\to 0}\left(\frac{1+x\sin(2x)-\cos(x)}{\sin^2(x)}\right)$ I want to evaluate this limit and I faced one issue. For this post I write $\overset{\text{L'H}}{=}$ for an application of L'Hôpital's rule. $$\lim\limits_{x\to 0}\left(\frac{1+x\sin(2x)-\cos(x)}{\sin^2(x)}\right)$$ Solution One: $$\lim\limits_{x\to 0}\left(\frac{1+x\sin(2x)-\cos(x)}{\sin^2(x)}\right)=\frac{0}{0}\overset{\text{L'H}}{=}\frac{\sin(2x)+2x\cos(2x)+\sin(x)}{\sin(2x)}$$ at this step I decided to evaluate each fraction, so I get: $$\lim\limits_{x\to 0}\frac{\sin(2x)}{\sin(2x)}+\frac{2x\cos(2x)}{\sin(2x)}+\frac{\sin(x)}{\sin(2x)} = \frac{3}{2}$$ Solution Two: $$\lim\limits_{x\to 0}\left(\frac{1+x\sin(2x)-\cos(x)}{\sin^2(x)}\right)=\frac{0}{0}\overset{\text{L'H}}{=}\frac{\sin(2x)+2x\cos(2x)+\sin(x)}{\sin(2x)}=\frac{0}{0}\overset{\text{L'H}}{=}$$ $$\frac{2\cos(2x)+2\cos(2x)-4x\sin(2x)+\cos(x)}{2\cos(2x)}=\frac{5}{2}$$ I would like to get some idea where I went wrong. Thanks.
As mentioned, your first solution is incorrect. The reason is that $$\lim_{x\to0}\frac{2x\cos(2x)}{\sin(2x)}\neq 0.$$ You can apply L'Hôpital's rule again: $$\lim_{x\to0}\frac{2x\cos(2x)}{\sin(2x)}=\lim_{x\to0}\frac{2\cos(2x)-4x\sin(2x)}{2\cos(2x)}=\lim_{x\to0}\left(1-2x\tan(2x)\right)=1-0=1.$$ So, using this limit in the first solution, after $$\lim\limits_{x\to 0}\left(\frac{1+x\sin(2x)-\cos(x)}{\sin^2(x)}\right)=\lim\limits_{x\to 0}\frac{\sin(2x)}{\sin(2x)}+\frac{2x\cos(2x)}{\sin(2x)}+\frac{\sin(x)}{\sin(2x)}$$ the limit equals $$1+1+\frac 1 2=\frac 5 2,$$ and that's the correct answer.
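If you want to double-check the value $\tfrac52$ symbolically, a one-line SymPy sketch:

    import sympy as sp

    x = sp.symbols('x')
    expr = (1 + x * sp.sin(2 * x) - sp.cos(x)) / sp.sin(x) ** 2
    print(sp.limit(expr, x, 0))  # -> 5/2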
{ "language": "en", "url": "https://math.stackexchange.com/questions/411850", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Proof for Lemma about convex hull I have to prove a Lemma: "For the set B of all convex combinations of an arbitrary finite number of points from set A, $co (A)=B$" I started by showing $B\subset co(A)$ first. $B$ contains all convex combinations of arbitrary finite numbers of points from A. Let $x=\alpha_1 x_1 +...+\alpha_n x_n$ be a convex combination of $x_1,...,x_n\in A$, so $x\in B$. Let $C$ be any convex set that contains A. Now I know that $x_1,...,x_n\in C$ and, since $C$ is convex, it contains all convex combinations of arbitrary finite numbers of its points (and points from A), so $x\in C$. Thus, $B\subset C$. I also know that $C=co(C)$ because $C$ is convex. Also, $A\subset C \rightarrow co(A)\subset co(C) \rightarrow co(A)\subset C$. So I have $B\subset C$ and $co(A)\subset C$. How can I conclude from this that $B\subset co(A)$? What about the other direction, $co(A)\subset B$? Is it enough to prove that $B$ is convex?
The conclusion of your question is not correct. You are right in whatever you have done. The set $B$ may not even be convex, so how can it be equal to co($A$)?
{ "language": "en", "url": "https://math.stackexchange.com/questions/411914", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Problem related to GCD I was solving a question on GCD. The question was to calculate the value of $$\gcd(n,m)$$ where $$n = a+b$$$$m = (a+b)^2 - 2^k(ab)$$ $$\gcd(a,b)=1$$ Till now I have shown that when $n$ is odd, $\gcd(n,m)=1$. I would like to get a hint or direction on how to proceed for the case when $n$ is even.
Key idea: $ $ employ $\bigg\lbrace\begin{eqnarray}\rm Euclidean\ Algorithm\ \color{#f0f}{(EA)}\!: &&\rm\ (a\!+\!b,\,x) = (a\!+\!b,\ x \bmod (a\!+\!b))\\ \rm and\ \ Euclid's\ Lemma\ \color{blue}{(EL)}\!: &&\rm\ (a,\,b\,x)\ =\ (a,\,x)\ \ \,if\,\ \ (a,b)=1\end{eqnarray}$ So, for $\rm f\in\Bbb Z[x,y]$, $\rm\ (a\!+\!b,\, f(\color{#c00}a,b))\stackrel{\color{#f0f}{(EA)}}= (a\!+\!b,\,f(\color{#c00}{-b},b))$, by $\rm\ \color{#c00}{a\equiv -b}\pmod{a\!+\!b}$. Hence $\rm\ (a\!+\!b,\ (\color{#0a0}{a\!+\!b})^2 \color{#c00}{- a}bc) = (a\!+\!b,\ {\color{#0a0}0}^2+\color{#c00}b\,bc)\stackrel{\color{blue}{(EL)}}= (a\!+\!b,\ c)$, since $\rm\,(a\!+\!b,\,b)=(a,b)=1$.
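Here $c$ plays the role of $2^k$ from the question, so the conclusion reads $\gcd(n,m)=\gcd(a+b,2^k)$. A quick randomized sanity check of that identity in Python (a sketch, not a proof):

    import math
    import random

    for _ in range(10000):
        a, b = random.randint(1, 10**6), random.randint(1, 10**6)
        if math.gcd(a, b) != 1:
            continue                      # the identity assumes gcd(a, b) = 1
        k = random.randint(0, 20)
        n, m = a + b, (a + b) ** 2 - 2 ** k * a * b
        assert math.gcd(n, abs(m)) == math.gcd(n, 2 ** k)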
{ "language": "en", "url": "https://math.stackexchange.com/questions/412000", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
Sum of exponential of normal random variables Suppose $X_i \sim N(0,1)$ (independent, identical normal distributions). Then by the law of large numbers, $$ \sqrt{1-\delta} \frac{1}{n}\sum_{i=1}^n e^{\frac{\delta}{2}X_i^2} \rightarrow \sqrt{1-\delta} \int e^{\frac{\delta}{2}x^2}\frac{1}{\sqrt{2\pi}}e^{-\frac{1}{2}x^2}dx =1 $$ However, according to simulations, this approximation doesn't seem to work when $\delta$ is close to one. Is that true, or do I just need to run larger samples? Thanks! Update (6/6): As sos440 mentioned, there was a typo which is now fixed.
Note that $$ \Bbb{E} \exp \left\{ \tfrac{1}{2}\delta X_{i}^{2} \right\} = \frac{1}{\sqrt{1-\delta}} \quad\text{for } \delta<1, \qquad \text{while}\qquad \Bbb{E} \exp \left\{ \delta X_{i}^{2} \right\} = \frac{1}{\sqrt{1-2\delta}} $$ is finite only for $\delta < \tfrac12$, so the variance of each summand is infinite for $\tfrac12\le\delta<1$; that explains the slow convergence you see in simulations for $\delta$ close to $1$. Since the mean is finite for every $\delta<1$, the right form of the (strong) law of large numbers is still $$ \frac{\sqrt{1-\delta}}{n} \sum_{i=1}^{n} e^{\frac{1}{2}\delta X_{i}^{2}} \xrightarrow{n\to\infty} 1 \quad \text{a.s.} $$
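A small NumPy simulation sketch illustrating the heavy-tail effect (note how the scaled sample mean sits near $1$ for small $\delta$ but fluctuates badly as $\delta\to1$):

    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.standard_normal(10**6)
    for delta in (0.3, 0.6, 0.9):
        scaled = np.sqrt(1 - delta) * np.exp(0.5 * delta * x**2).mean()
        print(delta, scaled)  # near 1; convergence degrades as delta -> 1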
{ "language": "en", "url": "https://math.stackexchange.com/questions/412051", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
How does the Hahn-Banach theorem imply the existence of a weak solution? I came across the following question when I read chapter 17 of Hörmander's book "The Analysis of Linear Partial Differential Operators", and the theorem is Let $a_{jk}(x)$ be Lipschitz continuous in an open set $X\subset\mathbb{R}^n$, $a_{ij}=a_{ji}$, and assume that $(\Re a_{ij}(x))$ is positive definite. Then $$ \sum_{ij} D_j(a_{jk}D_ku)=f $$ has a solution $u\in H_{(2)}^{loc}(X)$ for every $f\in L_{loc}^2(X)$ The author then says if we can show that $$ |(f,\phi)|\leq \|M \cdot\sum_{ij} D_j(\bar{a_{jk}}D_k\phi) \|_{L^2}, \quad \phi\in C_c^{\infty}(X) $$ for some positive continuous function $M$, then by the Hahn-Banach theorem there exists some $g\in L^2$ with $$ (f,\phi)=\left(g,M\cdot\sum_{ij} D_j(\bar{a_{jk}}D_k\phi)\right) $$ which implies that the weak solution is $u=Mg$. What confuses me is how the Hahn-Banach theorem is used here to show the existence of $g$. Thanks for your help
Define the functional: $$k(M Lw)=\int_{X} fw$$ where $L$ is the differential operator. This is a bounded functional thanks to the estimate you are assuming; note that the estimate is also what makes $k$ well defined. Then, thanks to the Hahn-Banach theorem, you can extend the domain of this functional to the whole of $L^{2}$, as nullUser mentioned. Finally, use the Riesz representation theorem to obtain the solution.
{ "language": "en", "url": "https://math.stackexchange.com/questions/412092", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 1 }
Proving that the $n$th derivative satisfies $(x^n\ln x)^{(n)} = n!(\ln x+1+\frac12+\cdots+\frac1n)$ Question: Prove that $(x^n\ln x)^{(n)} = n!(\ln x+1+\frac 12 + ... + \frac 1n)$ What I tried: Using Leibniz's theorem, with $f=x^n$ and $g=\ln x$. So $$f^{(j)}=n\cdots(n-j+1)x^{n-j} , \quad g^{(j)}=(-1)^{j-1} \dfrac{(j-1)!}{x^{j}}$$ But somehow I get stuck on the way...
Hint: Try using induction. Suppose $(x^n\ln x)^{(n)} = n!\left(\ln x+\frac{1}{1}+\cdots\frac{1}{n}\right)$, then $$\begin{align}{} (x^{n+1}\ln x)^{(n+1)} & = \left(\frac{\mathrm{d}}{\mathrm{d}x}\left[x^{n+1} \ln x\right]\right)^{(n)} \\ &= \left((n+1)x^n\ln x + x^n\right)^{(n)} \\ &= (n+1)(x^n\ln x)^{(n)} + (x^n)^{(n)} \\ &= \ldots \end{align}$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/412169", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Barbalat's lemma for Stability Analysis Good day, We have: Lyapunov-Like Lemma: If a scalar function $V(t,x)$ satisfies the following conditions: (1) $V(t,x)$ is lower bounded; (2) $\dot{V}(t,x)$ is negative semi-definite; (3) $\dot{V}(t,x)$ is uniformly continuous in time; then $\dot{V}(t,x) \to 0$ as $t \to \infty $. Now if we have the following system: $\dot{e} = -e + \theta w(t) \\ \dot{\theta} = -e w(t)$ and assume that $w(t)$ is a bounded function, then we can select the following Lyapunov function: $V(x,t) = e^2 + \theta^2$ Taking the time derivative: $\dot{V}(x,t) = -2e^2 \leq 0$ Taking the time derivative again: $\ddot{V}(x,t) = -4e(-e+\theta w)$ Now $\ddot{V}(x,t)$ is bounded (so that condition (3) holds) when $e$ and $\theta$ are bounded, but how can I be sure that these two variables are indeed bounded? Should I perform some other test, or...?
Since $\dot{V} = -2e^2 \leq 0$, from the Lyapunov stability theory, one concludes that the system states $(e,\theta)$ are bounded. Observe above that $\dot{V}$ is a function of only one state ($e$); if it were a function of the two states $(e,\theta)$ and $\dot{V} < 0$ (except in $e=\theta = 0$, in that case $V=0$), one would conclude asymptotic stability.
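Spelling out the boundedness step: since $\dot V\le 0$, $V$ is non-increasing along trajectories, so $$e(t)^2+\theta(t)^2=V(t)\le V(0)=e(0)^2+\theta(0)^2\quad\text{for all }t\ge 0,$$ hence $e$ and $\theta$ are bounded. With $w$ bounded as well, $\ddot V=-4e(-e+\theta w)$ is then bounded, which makes $\dot V$ uniformly continuous, and condition (3) of the lemma holds.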
{ "language": "en", "url": "https://math.stackexchange.com/questions/412231", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 2 }
Why is this true? $(\exists x)(P(x) \Rightarrow (\forall y) P(y))$ Why is this true? $(\exists x)(P(x) \Rightarrow (\forall y) P(y))$
In classical logic the following equivalence is logically valid: $$ \exists x (\varphi\Rightarrow\psi)\Longleftrightarrow(\forall x\varphi\Rightarrow\psi) $$ providing that $x$ is a variable not free in $\psi$. So the formula in question is logically equivalent to $\forall xP(x)\Rightarrow\forall yP(y)$. Looking at the problem from a slightly different perspective. Either (i) all objects in the domain of discourse have property $P$, i.e. $\forall y P(y)$ is true or (ii) there is $a$ in the domain for which $P$ fails, i.e. $\neg P(a)$ is true. In (i) $P(x)\Rightarrow\forall y P(y)$ must be true, so $\exists x(P(x)\Rightarrow\forall y P(y))$ is true. In (ii) $P(a)\Rightarrow\forall y P(y)$ must be true, therefore the sentence in question must be true as well.
{ "language": "en", "url": "https://math.stackexchange.com/questions/412387", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "46", "answer_count": 14, "answer_id": 2 }
Finding the root of a degree $5$ polynomial $\textbf{Question}$: which of the following $\textbf{cannot}$ be a root of a polynomial in $x$ of the form $9x^5+ax^3+b$, where $a$ and $b$ are integers? A) $-9$ B) $-5$ C) $\dfrac{1}{4}$ D) $\dfrac{1}{3}$ E) $9$ I have thought about this question for a bit now; can anyone provide a hint? I have no clue how to begin eliminating the choices. Thank you very much in advance.
Use the rational root theorem, and note that the denominator of one of the options given does not divide $9$...
{ "language": "en", "url": "https://math.stackexchange.com/questions/412490", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 2 }
At least one vertex of a tetrahedron projects to the interior of the opposite triangle How can I give a fast proof of the following fact: Given four points in $\mathbb{R}^3$ not contained in a plane, we can choose one such that its projection onto the plane passing through the others lies in the triangle generated by the three other points. Thanks in advance.
Here is a graphical supplement (that I cannot place into a comment) to the excellent answer by @achille hui . I have taken the case $\eta=0.4.$ with normals in red. The (complicated) name of this polyhedron is "tetragonal disphenoid" (https://en.wikipedia.org/wiki/Disphenoid).
{ "language": "en", "url": "https://math.stackexchange.com/questions/412553", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 3, "answer_id": 2 }
X,Y are independent exponentially distributed then what is the distribution of X/(X+Y) I've been crushing my head with this exercise. I know how to get the distribution of a ratio of exponential variables and of the sum of them, but I can't piece everything together. The exercise goes like this: If X,Y are independent exponentially distributed with beta = 1 (parameter of the exponential distribution = 1) then what is the distribution of X/(X+Y)? Any ideas? Thanks a lot.
In other words, for each $a > 0$, you want to compute $P\left(\frac{X}{X+Y} < a \right)$. Outline: Find the joint density of $(X,Y)$, and integrate it over the subset of the plane $\left\{ (x,y) : \frac{x}{x+y} < a \right\}$.
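Carrying the computation through shows $X/(X+Y)$ is uniform on $(0,1)$ when $X,Y$ are i.i.d. $\mathrm{Exp}(1)$. A quick Monte Carlo sketch to check this:

    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.exponential(1.0, 10**6)
    y = rng.exponential(1.0, 10**6)
    r = x / (x + y)
    for a in (0.1, 0.25, 0.5, 0.9):
        print(a, (r < a).mean())  # empirical P(X/(X+Y) < a) should be close to a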
{ "language": "en", "url": "https://math.stackexchange.com/questions/412615", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 3, "answer_id": 2 }
Let $R$ be a ring with $1$. A nonzero proper ideal $I$ of $R$ is a maximal ideal iff $R/I$ is a simple ring. Let $R$ be a ring with $1$. Prove that a nonzero proper ideal $I$ of $R$ is a maximal ideal if and only if the quotient ring $R/I$ is a simple ring. My attempt: $I$ is maximal $\iff$ $R/I$ is a field $\iff$ $R/I$ has no non-trivial ideals $\iff$ $R/I$ is simple. Is it correct?
You won't necessarily get a field in the quotient without commutativity, but you have a decent notion, nonetheless. The rightmost equivalence is just the definition of a simple ring. If $I$ isn't maximal, then there is a proper ideal $J$ of $R$ with $I\subsetneq J$. Show that $J/I$ is a non-trivial ideal of $R/I$. Thus, simplicity of $R/I$ implies maximality of $I$. On the other hand, suppose that $R/I$ isn't simple, so that there is a non-trivial ideal $\overline J$ of $R/I$. Let $J$ be the preimage of $\overline J$ under the quotient map $R\to R/I$, and show that $J$ is a proper ideal of $R$ containing $I$, so that $I$ is not maximal.
{ "language": "en", "url": "https://math.stackexchange.com/questions/412669", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
When is a group algebra (semigroup algebra) an Artinian algebra? When is a group algebra (semigroup algebra) an Artinian algebra? We know that an Artinian algebra is an algebra that satisfies the descending chain condition on ideals. I think that a group algebra (semigroup algebra) is an Artinian algebra if the group algebra (semigroup algebra) satisfies the descending chain condition on ideals. Are there other equivalent conditions that determine when a group algebra (semigroup algebra) is an Artinian algebra? Thank you very much.
A result of E. Zelmanov (Zel'manov), Semigroup algebras with identities, (Russian) Sib. Mat. Zh. 18, 787-798 (1977): Assume that $kS$ is right artinian. Then $S$ is a finite semigroup. The converse holds if $S$ is a monoid. See this assertion in Jan Okniński, Semigroup algebras. Pure and Applied Mathematics, 138 (1990), p. 172, Th. 23.
{ "language": "en", "url": "https://math.stackexchange.com/questions/412750", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
Why the natural density of the set $\{ n \mid n \equiv n_{0}\mod{m}\}$ is $\frac{1}{m}$ The natural density of a set $S$ is defined by $\displaystyle\lim_{x \to{+}\infty}{\frac{\left | \{ n\le x \mid n\in S \} \right |}{x}}$. This is maybe a silly question, but I got confused by this definition. And I really need to understand why the natural density of the set $\{ n \mid n \equiv n_{0}\mod{m}\}$ is $\dfrac{1}{m}$. Thanks!
I think you should add the condition $n\geq 0$ here. Let $n_{0}=pm+k$, where $0\leq k< m$. We know that $x\to\infty$; let $x=rm+t$, where $0\leq t< m$. Then $$\frac{\left|\{\,n\le x : n\equiv n_{0} \!\!\pmod m\,\}\right|}{x}=\frac{r+1}{rm+t}\ \text{ if } t\geq k,\qquad\text{and}\qquad \frac{r}{rm+t}\ \text{ if } t<k.$$ In either case, as $x\to \infty$, the fractions tend to $\frac {1}{m}$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/412912", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
2 questions regarding solutions for $\sqrt{a+b} - (a-b)^2 = 0$ Here are two questions derived from the following question: $\quad\begin{matrix} \text{Is there more than one solution to the following statement?} \\ \!\sqrt{a+b} - (a-b)^2 = 0 \end{matrix}$ $\color{Blue}{(1)\!\!:\;}$How would one (dis)prove this? I.e. in what ways could one effectively determine whether an equation has more than one solution; more specifically, this one? $\color{Blue}{(2)\!\!:\;}$Is it possible to determine this with(out) a valid solution as a sort of reference? Cheers!
If you are looking for real solutions, then note that $a+b$ and $a-b$ are just arbitrary numbers, with $a + b \ge 0$. This is because the system $$ \begin{cases} u = a + b\\ v = a - b \end{cases} $$ has a unique solution \begin{cases} a = \frac{u+v}{2}\\ \\ b = \frac{u-v}{2}. \end{cases} In the variables $u,v$ the general solution is $u = v^{4}$, which translates to $$ a = \frac{v^{4}+v}{2}, \qquad b = \frac{v^{4}-v}{2}, $$ as noted by Peter Košinár. Note that $a+b = u = v^{4}$ is always non-negative, as requested. If it's integer solutions you're looking for, you get the same solutions (for $v$ an integer), as $v$ and $v^{4}$ have the same parity.
{ "language": "en", "url": "https://math.stackexchange.com/questions/412999", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
For $0<q_n<1$ with $q_n\to q<1$, show that $n^k q_n^n\to 0$ I have problems with computing the following limit: Given a sequence $0<q_n<1$ such that $\lim\limits_{n\to\infty} q_n =q < 1$, prove that for a fixed $k \in \mathbb N$, $\lim\limits_{n\to\infty} n^k q_n^n= 0$. I know how to prove this, but I can't do it without using L'Hôpital's Rule. Does someone have an elementary proof?
Note that $$\frac{(n+1)^k}{n^k}=1+kn^{-1}+{k\choose2}n^{-2}+\ldots+n^{-k}\to1$$ as $n\to\infty$, hence for any $s$ with $1<s<\frac1q$ (possible because $q<1$) we can find $a$ with $n^k<a\cdot s^n$. Select $r$ with $q<r<\frac1s$ (possible because $s<\frac1q$). Then for almost all $n$, we have $q_n<r$, hence $$n^kq_n^n<n^kr^n<a(rs)^n.$$ Since $0<rs<1$, the claim follows.
{ "language": "en", "url": "https://math.stackexchange.com/questions/413085", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
A group that has a finite number of subgroups is finite I have to show that a group that has a finite number of subgroups is finite. For starters, I'm not sure why this is true. I was thinking: what if I have 2 subgroups, one that is infinite and the other one that might or might not be finite? That would mean the group isn't finite; or is my consideration wrong?
Consider only the cyclic subgroups. None of them can be infinite, because an infinite cyclic group has infinitely many subgroups. So every cyclic subgroup is finite, and the group is the finite set-theoretic union of these finite cyclic subgroups.
{ "language": "en", "url": "https://math.stackexchange.com/questions/413154", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Estimation of the number of prime numbers in a $b^x$ to $b^{x + 1}$ interval This is a question I put to myself a long time ago, although only now am I posting it. The thing is, though there is an infinity of prime numbers, they become more and more scarce the further you go. So back then, I decided to make an inefficient program (the inefficiency was not intended, I just wanted to do it quickly; it took more than 10 minutes to get the numbers below, which I now copied from a sheet of paper, not the program) to count primes between consecutive powers of different bases. These are the counts I got; the column headed $n$ gives the number of primes in the interval $[b^n, b^{n+1})$:

    n        0  1   2    3     4     5   6    7    8     9     10   11   12
    base 2:  0  2   2    2     5     7   13   23   43    75    137  255  464
    base 3:  1  3   13   13    31    76  198  520  1380  3741
    base 10: 4  21  143  1061  8363

I made three histograms from this data (one for each base, with the respective logarithmic scales both on the $x$ and $y$ axes) and drew a line over them that seemed like a linear function (you can try it yourselves, or if you prefer, insert these into some program like Excel, Geogebra, etc.). My question is: are these lines really tending (as the base and/or as $x$ grows) to linear, or even to any kind of function describable by a closed form expression?
The prime number theorem is what you need. A rough statement is that if $\pi(x)$ is the number of primes $p \leq x$, then $$ \pi(x) \sim \frac{x}{\ln(x)} $$ Here "$\sim$" denotes "is asymptotically equal to". A corollary of the prime number theorem is that, for $1\ll y\ll x$, $\pi(x)-\pi(x-y) \sim y/\ln(x)$. So yes, the primes start to thin out for larger $x$; in fact, their density drops logarithmically. To address your specific question, the PNT implies: \begin{align} \pi(b^{x+1}) - \pi(b^x) &\sim \frac{b^{x+1}}{\ln( b^{x+1} )} - \frac{b^x}{\ln( b^x )}, \end{align} where \begin{align} \frac{b^{x+1}}{\ln( b^{x+1} )} - \frac{b^x}{\ln( b^x )} &= \frac{b^{x+1}}{(x+1)\ln(b)} - \frac{b^x}{x\ln(b)}\\ &=\frac{b^x}{\ln(b)}\left( \frac{b}{x+1} - \frac{1}{x} \right)\\ &=\frac{b^x}{\ln(b)}\left( \frac{ bx-(x+1) }{x(x+1)} \right)\\ &=\frac{b^x}{\ln(b)}\left( \frac{ x(b-1)-1 }{x(x+1)} \right) \end{align} For $x\gg 1$, we can neglect the '$-1$' next to $x(b-1)$ in the numerator and the '$+1$' next to $x$ in the denominator, so that: \begin{align} \frac{b^{x+1}}{\ln( b^{x+1} )} - \frac{b^x}{\ln( b^x )} &\approx \frac{b^x}{\ln(b)}\left( \frac{ x(b-1) }{x^2} \right)\\ &=\frac{b^x(b-1)}{x\ln(b)}, \end{align} so that $$ \pi(b^{x+1}) - \pi(b^x) \sim \frac{b^x(b-1)}{x\ln(b)}. $$ As I'm writing this, I see that this is exactly the same as the answer that @Charles gave.
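A small Python sketch comparing this asymptotic to the base-2 counts from the question (using SymPy's `primepi`; the agreement is only rough at such small $x$, since the PNT error terms are still sizable):

    from math import log
    from sympy import primepi

    b = 2
    for x in range(6, 13):
        actual = primepi(b ** (x + 1)) - primepi(b ** x)
        approx = b ** x * (b - 1) / (x * log(b))
        print(x, actual, round(approx, 1))  # e.g. x=12: actual 464, approx ~492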
{ "language": "en", "url": "https://math.stackexchange.com/questions/413205", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Graph with no even cycles has each edge in at most one cycle? As the title says, I am trying to show that if $G$ is a finite simple graph with no even cycles, then each edge is in at most one cycle. I'm trying to do this by contradiction: let $e$ be an edge of $G$, and for contradiction suppose that $e$ was in two distinct cycles $C_1$ and $C_2$ of $G$. By assumption, $C_1$ and $C_2$ must have odd length. Now I would like to somehow patch $C_1$ and $C_2$ together to obtain a cycle of even length, but I'm not sure how to do so. If $C_1$ and $C_2$ only overlap in $e$, or perhaps in a single path containing $e$, then this can be done, but I can't see how to make this patching work when $C_1$ and $C_2$ overlap in disjoint paths. Any help is appreciated!
Here is the rough idea: Suppose $C_1$ and $C_2$ overlap in at least two disjoint paths. If we follow $C_1$ along the end of one path to the beginning of the next path, and then follows $C_2$ back to the end of the first path, we obtain a cycle $C_3$. Since this cycle must have odd length, the parity of the two parts must be different. This means I can change the parity of the length of $C_1$ by following the $C_2$ part of $C_3$ instead of the $C_1$ part. This is also a contradiction, as $C_1$ has odd length. Explicitly, let $a$ be the last vertex of one path contained in both $C_1$ and $C_2$, and let $b$ be the first vertex of the next path contained in both $C_1$ and $C_2$. Let $C_3$ be the cycle obtained by following $C_1$ from $a$ to $b$, and then following $C_2$ back to $a$. Since $C_3$ has odd length, the parity of the length along $C_1$ from $a$ to $b$ must be different from the parity of the length of $C_2$ from $a$ to $b$. It is this difference in parity that allows us to modify $C_1$. That is, let $C_1'$ be the cycle that agrees with $C_1$ except for the path from $a$ to $b$, where it agrees with $C_2$. Then $C_1'$ will have even length.
{ "language": "en", "url": "https://math.stackexchange.com/questions/413288", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Given basis spanning the vector space I am learning linear algebra nowadays. I had a small doubt and I know it's an easy one, but still I am not able to get it. Recently I came across a statement saying "((1,2),(3,5)) is a basis of $ F^2 $". According to the statement a linear combination of the vectors in the list, i.e., $a(1,2)+b(3,5)$ (where a and b belong to F of course) must span the vector space $F^2$. I wanted to know how we can get all the vectors in the vector space using linear combinations of these two given vectors (if the two vectors were ((1,0),(0,1)), which is the standard basis, it would have been fine). But how can we get all the points in the vector space $F^2$ using ((1,2),(3,5))??? Suppose I want to get the vector (1,1): I can't think of how to get it using a linear combination of the given two vectors. Kindly help me with this. I know it's an easy one but I still could not help asking you guys. Thanks.
Note that if you can get to $(1,0)$ and $(0,1)$ you can get anywhere. So, can you find linear combinations $a(1,2) + b(3,5)$ that get you to these two vectors? Note that you can expand the equation, say, $a(1,2) + b(3,5) = (1,0)$ into two equations with two unknowns by looking at each coordinate separately.
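For the specific vector $(1,1)$ from the question, the two coordinate equations are $$a+3b=1,\qquad 2a+5b=1,$$ with solution $b=1$, $a=-2$; indeed $-2(1,2)+1(3,5)=(1,1)$.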
{ "language": "en", "url": "https://math.stackexchange.com/questions/413361", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 2 }
Why does this summation of ones give this answer? I saw this in a book and I don't understand it. Suppose we have nonnegative integers $0 = k_0<k_1<...<k_m$ - why is it that $$\sum\limits_{j=k_i+1}^{k_{i+1}}1=k_{i+1}-k_i?$$
Because $1+1+\cdots+1=n$ if there are $n$ ones. So $j$ going from $k_i+1$ to $k_{i+1}$ is the same as going from $1$ to $k_{i+1}-k_i$ since the summand doesn't depend on $j$. There are $k_{i+1}-k_i$ ones in the list.
{ "language": "en", "url": "https://math.stackexchange.com/questions/413444", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
CINEMA : Mathematicians I know that a similar question has been asked about mathematics documentaries in general, but I would like some recommendations on films specifically about various mathematicians (male and/or female). What would be nice is if you'd recommend something about not just the famous ones but also the not so famous ones. Note$_1$: If you happen to find a film on a transgender mathematician, well that's just great too. I'm a very progressive person with a very open mind. Note$_2$: Throughout my life I've seen most of the mainstream films, but it'd be nice to hear from the world what things I may have missed. Note$_3$: If you have the ability to search Nets outside of the limitations of Google's Nets, then maybe you'll find some foreign films or something, as, for example, in Switzerland.
The real man portrayed in A Beautiful Mind was not only an economist but a mathematician who published original discoveries in math. There is also a recent film bio of Alan Turing. Sorry, I forget the title.
{ "language": "en", "url": "https://math.stackexchange.com/questions/413500", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 5, "answer_id": 4 }
the following inequality is true, but I can't prove it The inequality $$\sum_{k=1}^{2d}\left(1-\frac{1}{2d+2-k}\right)\frac{d^k}{k!}>e^d\left(1-\frac{1}{d}\right)$$ holds for all integers $d\geq 1$. I used a computer to verify it for $d\leq 50$ and found it is true, but I can't prove it. Thanks for your answer.
Sorry I didn't check this sooner: the problem was cross-posted at mathoverflow and I eventually was able to give a version of David Speyer's analysis that makes it feasible to compute many more coefficient of the asymptotic expansion (I reached the $d^{-13}$ term, and could go further), and later to give error bounds good enough to prove the inequality for $d \geq 14$, at which point the previous numerical computations complete the proof. See https://mathoverflow.net/questions/133028/the-following-inequality-is-truebut-i-cant-prove-it/133123#133123
{ "language": "en", "url": "https://math.stackexchange.com/questions/413558", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 3, "answer_id": 2 }
limit of evaluated automorphisms in a Banach algebra Let $\mathcal{A}=\operatorname{M}_k(\mathbb{R})$ be the Banach algebra of $k\times k$ real matrices and let $(U_n)_{n\in\mathbb{N}}\subset\operatorname{GL}_k(\mathbb{R})$ be a sequence of invertible elements such that $U_n\to 0$ as $n\to\infty$. Define $\sigma_n\in\operatorname{Aut}(\mathcal{A})$ via $X\mapsto U_nXU_n^{-1}$. Suppose I have a sequence $(W_n)_{n\in\mathbb{N}}\subset\mathcal{A}$ such that $W_n\to W\in\mathcal{A}$ as $n\to\infty$. I would like to determine $\lim_{n\to\infty}\sigma_n(W_n)$. My question is how can I approach such a problem? It looks like something that should have a general answer (for $\mathcal{A}$ not necessarily finite-dimensional Banach algebra over the reals) in the theory of operator algebras, but I have a rather poor background there. If it is something relatively easy, I would rather appreciate a hint or reference than a full answer, so I can work it out further on my own (I am just trying to get back on the math track after some time of troubles). Thanks in advance for any help!
This limit need not exist. For example, let's work in $M_2(\mathbb R)$. If $$ U_n= \left( \begin{array}{cc} \frac{1}{n} & 0 \\ 0 & \frac{1}{n^2} \end{array} \right), $$ then $\Vert U_n \Vert \to 0$ as $n \to \infty$, and $$ U_n^{-1}= \left( \begin{array}{cc} {n} & 0 \\ 0 & {n^2} \end{array} \right). $$ If we now let $$ W_n= \left( \begin{array}{cc} 0 & \frac{1}{\sqrt{n}} \\ 0 & 0 \end{array} \right), $$ then $W_n \to 0$ as $n \to \infty$, but $$ \sigma_n (W_n) = U_n W_n U_n^{-1}= \left( \begin{array}{cc} 0 & \sqrt{n} \\ 0 & 0 \end{array} \right), $$ which does not converge as $n \to \infty$.
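A short numerical illustration of this counterexample (a sketch):

    import numpy as np

    for n in (10, 100, 1000):
        U = np.diag([1.0 / n, 1.0 / n**2])
        W = np.array([[0.0, 1.0 / np.sqrt(n)], [0.0, 0.0]])
        M = U @ W @ np.linalg.inv(U)
        # spectral norm grows like sqrt(n), even though ||W|| -> 0
        print(n, np.linalg.norm(M, 2))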
{ "language": "en", "url": "https://math.stackexchange.com/questions/413649", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Is this notation for Stokes' theorem? I'm trying to figure out what $\iint_R \nabla\times\vec{F}\cdot d\textbf{S}$ means. I have a feeling that it has something to do with the classical Stokes' theorem. The Stokes' theorem that I have says $$ \int\limits_C W_{\vec{F}} = \iint\limits_S \Phi_{\nabla\times\vec{F}} $$ where $\vec{F}$ is a vector field, $W_{\vec{F}}$ is the work form of $\vec{F}$, and $\Phi_{\nabla\times\vec{F}}$ is the flux form of the curl of $\vec{F}$. Is the notation in question the same as the RHS of the above equation?
It seems to me that the integrals $$\int\limits_C W_{\vec{F}}~~~~\text{and}~~~~\oint_{\mathfrak{C}}\vec{F}\cdot d\textbf{r}$$ have the same meaning. I don't know the notation $ \Phi_{\nabla\times\vec{F}}$, but if it stands for $$\textbf{curl F}\cdot \hat{\textbf{N}} ~dS$$ then your answer is yes.
{ "language": "en", "url": "https://math.stackexchange.com/questions/413704", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
How to prove $(a-b)^3 + (b-c)^3 + (c-a)^3 -3(a-b)(b-c)(c-a) = 0$ without calculations I read somewhere that I can prove the identity below with abstract algebra in a simpler and faster way, without any calculations. Is that true, or am I wrong? $$(a-b)^3 + (b-c)^3 + (c-a)^3 -3(a-b)(b-c)(c-a) = 0$$ Thanks
To Prove: $$(a-b)^3 + (b-c)^3 + (c-a)^3 =3(a-b)(b-c)(c-a)$$ we know, $x^3 + y^3 = (x + y)(x^2 - xy + y^2)$ so, $$(a-b)^3 + (b-c)^3 = (a -c)((a-b)^2 - (a-b)(b-c) + (b-c)^2)$$ now, $$(a-b)^3 + (b-c)^3 + (c-a)^3 = (a -c)((a-b)^2 - (a-b)(b-c) + (b-c)^2) + (c-a)^3 = (c-a)(-(a-b)^2 + (a-b)(b-c)- (b-c)^2 +(c-a)^2)$$ now, $(c-a)^2 - (a-b)^2 = (c-a+a-b)(c-a-a+b) = (c-b)(c-2a+b)$ the expression becomes, $$(c-a)((c-b)(c-2a+b) + (b-c)(a-2b+c)) = (c-a)(b-c)(-c+2a-b+a-2b+c)=3(c-a)(b-c)(a-b)$$ Hence proved
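For the "without calculations" route the question asks about, one can invoke the classical identity $$x^3+y^3+z^3-3xyz=(x+y+z)(x^2+y^2+z^2-xy-yz-zx)$$ with $x=a-b$, $y=b-c$, $z=c-a$: since $x+y+z=0$, the right-hand side vanishes and the claim follows immediately.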
{ "language": "en", "url": "https://math.stackexchange.com/questions/413738", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "16", "answer_count": 12, "answer_id": 11 }
Is there a dimension which extends to negative or even irrational numbers? Just elaborating on the question: We are all used to natural numbers as dimensions: 1 stands for a length, 2 for area, 3 for volume and so on. Hausdorff–Besicovitch extends dimensions to any positive real number. So my question: is there any notion of dimension which extends this further (to negative numbers or even irrational numbers)? If so, what are the examples, and how can it be used?
The infinite lattice is a fractal of negative dimension: if you scale the infinite lattice on a line by 2x, it becomes 2x less dense, thus 2 scaled lattices compose one non-scaled one. If you take a lattice on a plane, scaling by 2x makes it 4x less dense, so that 4 scaled lattices compose one non-scaled, etc.
{ "language": "en", "url": "https://math.stackexchange.com/questions/413783", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 2, "answer_id": 1 }
Period of derivative is the period of the original function Let $f:I\to\mathbb R$ be a differentiable and periodic function with prime/minimum period $T$ (it is $T$-periodic) that is, $f(x+T) = f(x)$ for all $x\in I$. It is clear that $$ f'(x) = \lim_{h\to 0}\frac{f(x+h)-f(x)}{h} = \lim_{h\to 0} \frac{f(x+T+h) - f(x+T)}{h} = f'(x+T), $$ but how to prove that $f'$ has the same prime/minimum period $T$? I suppose that there exist $\tilde T < T$ such that $f'(x+\tilde T) = f'(x)$ for all $x\in I$ but can't find the way to get a contradiction.
One solution is to note that $f(x)$ has an associated Fourier series, and since the derivative of a sinusoid of any frequency is another sinusoid of the same frequency, we deduce that the Fourier series of the derivative will have all the same sinusoidal terms as the original. Thus, the derivative must have the same frequency as the original function.
{ "language": "en", "url": "https://math.stackexchange.com/questions/413830", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "14", "answer_count": 2, "answer_id": 1 }
Help me trace the proof of the following proposition In a paper, an author proved the following proposition; please help me trace its proof. Proposition: let $f$ be a homeomorphism of a connected topological manifold $M$ with fixed point set $F$. Then either $(1)$ $f$ is invariant on each component of $M-F$, or $(2)$ there are exactly two components and $f$ interchanges them. And after that he says: In the case of $(2)$ the above argument shows that $F$ cannot contain an open set and hence $\dim F\leq (\dim M) -1$, and since $F$ separates $M$ we have $\dim F = (\dim M) -1$. G. Bredon has shown that if $M$ is also orientable then any involution with an odd codimensional fixed point set must reverse the orientation; hence we obtain: Let $f$ be an orientation-preserving homeomorphism of an orientable manifold $M$; then $f$ is invariant on each component of $M-F$. Can you tell me what $\dim F$ means here? Is $F$ always a submanifold under the above conditions? And how can we deduce that $\dim F = n-1$?
For the first question, here is a sketch that $F$ is a submanifold under the assumption that the group $G$ acting on $M$ is finite: Consider the map $M \to \prod_{g \in G} M, m \mapsto (gm)_{g \in G}$. This is smooth and should be a local homeomorphism, hence it is regular. The diagonal $\{(m,\ldots,m) | m \in M\}$ of the product is a submanifold, hence its preimage is a submanifold of $M$, and the preimage is exactly the fixed point set. For the second question, we have that $M$ is connected but $M - F$, with $F$ being a submanifold, is not connected. Intuitively it is clear that a submanifold dividing a manifold into connected components must have codimension 1, but I cannot think of a proof right now. Maybe one could work with path-connectedness?
{ "language": "en", "url": "https://math.stackexchange.com/questions/413889", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Find Gross from Net and Percentage I would like to know if a simple calculation exists that would allow me to determine how much money I need to gross to receive a certain net amount. For example, if my tax rate was 30%, and my goal was to take home 700, I would need to have a gross salary of 1000.
Suppose your tax rate is $r$, written in percent. If you want your net to be $N$, then we want a gross of: $$G=\frac{100N}{100-r}$$
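As a tiny Python sketch of the same formula (the function name is just illustrative):

    def gross(net, rate_percent):
        """Gross amount needed to net `net` after a `rate_percent` % tax."""
        return 100 * net / (100 - rate_percent)

    print(gross(700, 30))  # -> 1000.0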
{ "language": "en", "url": "https://math.stackexchange.com/questions/413962", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Ratio between trigonometric sums: $\sum_{n=1}^{44} \cos n^\circ/\sum_{n=1}^{44} \sin n^\circ$ What is the value of this trigonometric sum ratio: $$\frac{\displaystyle\sum_{n=1}^{44} \cos n^\circ}{\displaystyle \sum_{n=1}^{44} \sin n^\circ} = \quad ?$$ The answer is given as $$\frac{\displaystyle\sum_{n=1}^{44} \cos n^\circ}{\displaystyle \sum_{n=1}^{44} \sin n^\circ} \approx \displaystyle \frac{\displaystyle\int_{0}^{45}\cos n^\circ dn}{\displaystyle\int_{0}^{45}\sin n^\circ dn} = \sqrt{2}+1$$ Using the fact $$\displaystyle \sum_{n = 1}^{44}\cos\left(\frac{\pi}{180}\cdot n\right)\approx\int_0^{45}\cos\left(\frac{\pi}{180}\cdot x\right)\, dx $$ My question is that I did not understand the last line of this solution. Please explain to me in detail. Thanks.
The last line in the argument you give could say $$ \sum_{n=1}^{44} \cos\left(\frac{\pi}{180}n\right)\,\Delta n \approx \int_1^{44} \cos n^\circ\, dn. $$ Thus the Riemann sum approximates the integral. The value of $\Delta n$ in this case is $1$, and if it were anything but $1$, it would still cancel from the numerator and the denominator. Maybe what you didn't follow is that $n^\circ = n\cdot\dfrac{\pi}{180}\text{ radians}$? The identity is ultimately reducible to the known tangent half-angle formula $$ \frac{\sin\alpha+\sin\beta}{\cos\alpha+\cos\beta}=\tan\frac{\alpha+\beta}{2} $$ and the rule of algebra that says that if $$ \frac a b=\frac c d, $$ then this common value is equal to $$ \frac{a+c}{b+d}. $$ Just iterate that a bunch of times, until you're done. Thus $$ \frac{\sin1^\circ+\sin44^\circ}{\cos1^\circ+\cos44^\circ} = \tan 22.5^\circ $$ and $$ \frac{\sin2^\circ+\sin43^\circ}{\cos2^\circ+\cos43^\circ} = \tan 22.5^\circ $$ so $$ \frac{\sin1^\circ+\sin2^\circ+\sin43^\circ+\sin44^\circ}{\cos1^\circ+\cos2^\circ+\cos43^\circ+\cos44^\circ} = \tan 22.5^\circ $$ and so on. Now let's look at $\tan 22.5^\circ$. If $\alpha=0$ then the tangent half-angle formula given above becomes $$ \frac{\sin\beta}{1+\cos\beta}=\tan\frac\beta2. $$ So $$ \tan\frac{45^\circ}{2} = \frac{\sin45^\circ}{1+\cos45^\circ} = \frac{\sqrt{2}/2}{1+(\sqrt{2}/2)} = \frac{\sqrt{2}}{2+\sqrt{2}} = \frac{1}{\sqrt{2}+1}. $$ In the last step we divided the top and bottom by $\sqrt{2}$. What you have is the reciprocal of this. Postscript four years later: In my answer I explained why the answer that was "given" was right, but I forgot to mention that $$ \frac{\displaystyle\sum_{n=1}^{44} \cos n^\circ}{\displaystyle \sum_{n=1}^{44} \sin n^\circ} = \sqrt{2}+1 \text{ exactly, not just approximately.} $$ The reason why the equality is exact is in my answer, but the explicit statement that it is exact is not.
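A two-line numerical confirmation that the ratio equals $\sqrt2+1$ (up to floating-point error):

    from math import cos, sin, radians, sqrt

    ratio = sum(cos(radians(n)) for n in range(1, 45)) / \
            sum(sin(radians(n)) for n in range(1, 45))
    print(ratio, sqrt(2) + 1)  # both ~2.41421356...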
{ "language": "en", "url": "https://math.stackexchange.com/questions/414107", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 2, "answer_id": 1 }
Algebra simplification in mathematical induction. I was proving some mathematical induction problems and came across an algebra expression as follows: $$\frac{k(k+1)(2k+1)}{6} + (k + 1)^2$$ The final answer is supposed to be: $$\frac{(k+1)(k+2)(2k+3)}{6}$$ I walked through every possible expansion; I combined like terms, simplified, factored, but never arrived at the answer. Could someone explain the steps?
First, let's write the expression as a sum of fractions with a common denominator. $$\dfrac{k(k+1)(2k+1)}{6} + (k + 1)^2 = \dfrac{k(k+1)(2k+1)}{6} + \dfrac{6(k+1)^2}{6}\tag{1}$$ Now expand $6(k+1)^2 = 6k^2 + 12k + 6\tag{2}$ and expand $k(k+1)(2k+1) = k(2k^2 + 3k + 1) = 2k^3 + 3k^2 + k\tag{3}$ So now, $(1)$ becomes $$\dfrac{k(k+1)(2k+1)}{6} + \dfrac{6(k+1)^2}{6} = \dfrac{(2k^3 + 3k^2 + k) + (6k^2 +12 k + 6)}{6} $$ $$= \dfrac{\color{blue}{\bf 2k^3 + 9k^2 +13k + 6}}{6}\tag{4}$$ We can factor the numerator in $(4)$, or we can expand the numerator of our "goal"... $$\frac{(k+1)(k+2)(2k+3)}{6} = \dfrac{\color{blue}{\bf 2k^3 + 9k^2 + 13k + 6}}{6}\tag{goal}$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/414184", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 4, "answer_id": 0 }
What leads us to believe that 2+2 is equal to 4? My professor in the discipline Epistemological Basis of Modern Science was questioning what we consider knowledge and what makes us believe or not in its reliability. To test us, he asked us to write down our justifications for why we accept as true that 2 plus 2 is equal to 4. Everybody, including me, answered that we believe in it because we can prove it; for example, I can take 2 beans plus 2 more beans and in the end I will have 4 beans. But the professor replied: "And if all the beans in the universe disappear?" And of course he can extend this to any object we choose for the demonstration. What he was trying to show us is that the logical-mathematical universe is independent of our universe. I was pretty delighted with this question and I want to go deeper. I already searched about Peano axioms and Zermelo-Fraenkel axioms, although I think the answer that I am looking for can't be explained by an axiom. It is a complicated question for me, very confusing, but try to understand: what I want is the background process, the gears of addition. For example, you can say that a+0=a and then say a+1 = a+S(0) = S(a+0) = S(a). But it doesn't show what the addition operation itself is. Can addition be represented graphically? Like rows that rotate, or lines that join? Summarizing, I think my question is: How can I understand addition, not only learn how to do it, not just reproduce what teachers have taught me like a machine? How can I make a mental construct of this mathematical operation?
I've always liked this approach, that a naming precedes a counting. (The two accompanying figures are omitted here.)
{ "language": "en", "url": "https://math.stackexchange.com/questions/414269", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Probability puzzle - the 3 cannons (Apologies if this is the wrong venue to ask such a question, but I don't understand how to arrive at a solution to this math puzzle). Three cannons are fighting each other. Cannon A hits 1/2 of the time. Cannon B hits 1/3 the time. Cannon C hits 1/6 of the time. Each cannon fires at the current "best" cannon. So B and C will start shooting at A, while A will shoot at B, the next best. Cannons die when they get hit. Which cannon has the highest probability of survival? Why? Clarification: B and C will start shooting at A.
A has the greatest chance of survival. Consider the three possible scenarios for the first round: On the first trial, define $a$ as the probability that A gets knocked out, $b$ as the probability that B gets knocked out, and $c$ as the probability that C gets knocked out. Since both B and C are firing at A, the probability of A getting knocked out is: $$a=\frac{1}{3}+\frac{1}{6}=\frac{1}{2}$$ Only A is firing at B, so the probability of B getting knocked out is: $$b=\frac{1}{2}$$ No one is firing at C, so there is no chance of C being knocked out in the first round: $$c=0$$ The probability of A or B being knocked out first is therefore even. Now on the second round, there are one of two possibilities: A and C are left to duel, or B and C are left to duel. Between A and C, the probability of A defeating C is $\frac{\frac{1}{2}}{\frac{1}{2}+\frac{1}{6}}=0.75$, and the probability of C defeating A is $0.25$. Between B and C, the probability of B defeating C is $\frac{\frac{1}{3}}{\frac{1}{3}+\frac{1}{6}}=\frac{2}{3}$, and the probability of C defeating B is $\frac{1}{3}$. Finally, assess the total probability of victory for each cannon, using the rule of product: Probability of A winning: $0.75*0.5=0.375$ Probability of B winning: $\frac{2}{3}*0.5=\frac{1}{3}$ Probability of C winning: $0.25*0.5+\frac{1}{3}*0.5\approx0.2917$ So it's close, but A has the greatest chance of survival.
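Verifying the arithmetic of this model with exact fractions (a sketch; it checks the numbers above, not the modeling assumptions):

    from fractions import Fraction as F

    p_a = F(1, 2) * F(3, 4)                      # A survives round 1, then beats C
    p_b = F(1, 2) * F(2, 3)                      # B survives round 1, then beats C
    p_c = F(1, 2) * F(1, 4) + F(1, 2) * F(1, 3)  # C beats whoever remains
    print(p_a, p_b, p_c, p_a + p_b + p_c)        # 3/8, 1/3, 7/24, total 1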
{ "language": "en", "url": "https://math.stackexchange.com/questions/414334", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 2, "answer_id": 1 }
Stirling Binomial Polynomial Let $\{\cdot\}$ denote Stirling Numbers of the second kind. Let $(\cdot)$ denote the usual binomial coefficients. It is known that $$\sum_{j=k}^n {n\choose j} \left\{\begin{matrix} j \\ k \end{matrix}\right\} = \left\{\begin{matrix} n+1 \\ k+1 \end{matrix}\right\}.$$ Note: The indexes for $j$ aren't really needed since the terms are zero when $j>n$ or $j<k$. How do I calculate $$\sum_{j=k}^n 4^j{n\choose j} \left\{\begin{matrix} j \\ k \end{matrix}\right\} = ?$$ I have been trying to think of this sum as some special polynomial (maybe a Bell polynomial of some kind) that has been evaluated at 4. I have little knowledge of Stirling Numbers in the context of polynomials. Any help would be appreciated; even a reference to a comprehensive book on Stirling Numbers and polynomials.
It appears we can give another derivation of the closed form by @vadim123 for the sum $$q_n = \sum_{j=k}^n m^j {n\choose j} {j \brace k}$$ using the bivariate generating function of the Stirling numbers of the second kind. This computation illustrates generating function techniques as presented in Wilf's generatingfunctionology as well as the technique of annihilating coefficient extractors. Recall the species for set partitions which is $$\mathfrak{P}(\mathcal{U} \mathfrak{P}_{\ge 1}(\mathcal{Z}))$$ which gives the generating function $$G(z, u) = \exp(u(\exp(z)-1)).$$ Introduce the generating function $$Q(z) = \sum_{n\ge k} q_n \frac{z^n}{n!}.$$ We thus have $$Q(z) = \sum_{n\ge k} \frac{z^n}{n!} \sum_{j=k}^n m^j {n\choose j} {j \brace k}.$$ Substitute $G(z, u)$ into the sum to get $$Q(z) = \sum_{n\ge k} \frac{z^n}{n!} \sum_{j=k}^n m^j {n\choose j} j! [z^j] \frac{(\exp(z)-1)^k}{k!} \\ = \sum_{j\ge k} m^j \left([z^j] \frac{(\exp(z)-1)^k}{k!}\right) \sum_{n\ge j} j! \frac{z^n}{n!} {n\choose j} \\ = \sum_{j\ge k} m^j \left([z^j] \frac{(\exp(z)-1)^k}{k!}\right) \sum_{n\ge j} \frac{z^n}{(n-j)!} \\= \sum_{j\ge k} m^j \left([z^j] \frac{(\exp(z)-1)^k}{k!}\right) z^j \sum_{n\ge j} \frac{z^{n-j}}{(n-j)!} \\ = \exp(z) \sum_{j\ge k} m^j z^j \left([z^j] \frac{(\exp(z)-1)^k}{k!}\right).$$ Observe that the sum annihilates the coefficient extractor, producing $$Q(z) = \exp(z)\frac{(\exp(mz)-1)^k}{k!}.$$ Extracting coefficients from $Q(z)$ we get $$q_n = \frac{n!}{k!} [z^n] \exp(z) \sum_{q=0}^k {k\choose q} (-1)^{k-q} \exp(mqz) \\ = \frac{n!}{k!} [z^n] \sum_{q=0}^k {k\choose q} (-1)^{k-q} \exp((mq+1)z) = \frac{n!}{k!} \sum_{q=0}^k {k\choose q} (-1)^{k-q} \frac{(mq+1)^n}{n!} \\ = \frac{1}{k!} \sum_{q=0}^k {k\choose q} (-1)^{k-q} (mq+1)^n.$$ Note that when $m=1$, $Q(z)$ becomes $$\exp(z)\frac{(\exp(z)-1)^k}{k!} = \frac{(\exp(z)-1)^{k+1}}{k!} + \frac{(\exp(z)-1)^k}{k!}$$ so that $$n!\,[z^n] Q(z) = (k+1){n\brace k+1} + {n\brace k} = {n+1\brace k+1},$$ which can also be derived using a very simple combinatorial argument. Addendum. Here is another derivation of the formula for $Q(z).$ Observe that when we multiply two exponential generating functions of the sequences $\{a_n\}$ and $\{b_n\}$ we get that $$ A(z) B(z) = \sum_{n\ge 0} a_n \frac{z^n}{n!} \sum_{n\ge 0} b_n \frac{z^n}{n!} = \sum_{n\ge 0} \sum_{k=0}^n \frac{1}{k!}\frac{1}{(n-k)!} a_k b_{n-k} z^n\\ = \sum_{n\ge 0} \sum_{k=0}^n \frac{n!}{k!(n-k)!} a_k b_{n-k} \frac{z^n}{n!} = \sum_{n\ge 0} \left(\sum_{k=0}^n {n\choose k} a_k b_{n-k}\right)\frac{z^n}{n!}$$ i.e. the product of the two generating functions is the generating function of $$\sum_{k=0}^n {n\choose k} a_k b_{n-k}.$$ (I have included this derivation in several of my posts.) Now in the present case we have $$A(z) = \sum_{j\ge k} {j\brace k} m^j \frac{z^j}{j!} \quad\text{and}\quad B(z) = \sum_{j\ge 0} \frac{z^j}{j!} = \exp(z).$$ Evidently $A(z)$ is just the exponential generating function for set partitions into $k$ sets evaluated at $mz,$ so we get $$A(z) = \frac{(\exp(mz)-1)^k}{k!}$$ and with $Q(z) = A(z) B(z)$ the formula for $Q(z)$ follows.
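The closed form is easy to test numerically. Below is a quick check (hypothetical code, not from the original answer); it builds the Stirling numbers from the standard triangle recurrence rather than relying on any particular library function:

```python
from math import comb, factorial
from functools import lru_cache

@lru_cache(maxsize=None)
def stirling2(n, k):
    # Stirling number of the second kind via S(n,k) = k*S(n-1,k) + S(n-1,k-1)
    if n == k:
        return 1
    if k == 0 or k > n:
        return 0
    return k * stirling2(n - 1, k) + stirling2(n - 1, k - 1)

def lhs(n, k, m):
    return sum(m**j * comb(n, j) * stirling2(j, k) for j in range(k, n + 1))

def rhs(n, k, m):
    total = sum(comb(k, q) * (-1)**(k - q) * (m*q + 1)**n for q in range(k + 1))
    return total // factorial(k)   # the sum is exactly divisible by k!

for n in range(1, 9):
    for k in range(1, n + 1):
        assert lhs(n, k, 4) == rhs(n, k, 4), (n, k)
print("closed form confirmed for all n <= 8 with m = 4")
```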
{ "language": "en", "url": "https://math.stackexchange.com/questions/414397", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 0 }
Initial and terminal objects in $\textbf{FinVect}_R$ I am teaching myself category theory and I am having difficulty identifying the initial and terminal objects of the category $\textbf{FinVect}_R$ of finite-dimensional real vector spaces. I was thinking that, because the spaces are finite-dimensional, the initial and terminal objects should be the same object (since everything is finite, we can keep operating on the vectors until we reach the last one). Any help will be greatly appreciated.
I believe the terminal and initial objects are both the zero-dimensional vector space $0$. There is exactly one map from $0$ to any other vector space $V$, since we must send $0$ to $0_V$. This follows from the definition of a linear map, and linear maps are the morphisms in this category. So $0$ is the initial object. Similarly, there is exactly one morphism from any vector space $V$ to $0$: the map sending all elements to $0$. So $0$ satisfies the definition of terminal object. Note that this argument is independent of the base field, so you could consider the category of vector spaces over $\mathbb C$, for example, and the answer would be the same.
{ "language": "en", "url": "https://math.stackexchange.com/questions/414465", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Which are functions of bounded variation? Let $f, g : [0, 1] \to \mathbb{R}$ be defined as follows: $f(x) = x^2 \sin (1/x)$ if $x \neq 0$, with $f(0)=0$; $g(x) = \sqrt{x} \sin (1/x)$ if $x \neq 0$, with $g(0) = 0$. Which of these are functions of bounded variation? Is every polynomial on a compact interval of bounded variation? Could anyone tell me the main criterion for checking whether a function is of bounded variation? Is a bounded derivative enough?
Yes, a bounded derivative implies BV. I explained this in your older question "which condition says that $f$ is necessarily bounded variation". Since $f$ has bounded derivative, it is in BV. A function with unbounded derivative can also be in BV; for example $\sqrt{x}$ on $[0,1]$ is BV because it is monotone. More generally, a function with finitely many maxima and minima on an interval is BV. But $g$ has unbounded derivative and infinitely many maxima and minima on the interval. In such a situation you should look at the peaks and troughs of its graph and try to estimate the sum of differences $\sum |\Delta f_i|$ between them. It is not necessary to precisely locate the maxima and minima. The fact that $$g((\pi/2+2\pi n)^{-1})=(\pi/2+2\pi n)^{-1/2},\quad g((3\pi/2+2\pi n)^{-1})=-(3\pi/2+2\pi n)^{-1/2}$$ gives you enough information about $g$ to conclude it is not BV: the variation is at least $\sum_n \left((\pi/2+2\pi n)^{-1/2}+(3\pi/2+2\pi n)^{-1/2}\right)$, which diverges by comparison with $\sum_n n^{-1/2}$.
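To see the divergence concretely, here is a small numerical check (hypothetical code, not part of the original answer): it sums the jumps of $g$ over the first $N$ peak/trough points and shows the partial variation growing without bound, roughly like $\sqrt{N}$.

```python
import math

def peak_points(N):
    # x-values where sin(1/x) alternates between +1 and -1: 1/x = pi/2 + pi*n
    return [1.0 / (math.pi / 2 + math.pi * n) for n in range(N)]

def partial_variation(N):
    xs = peak_points(N)
    g = [math.sqrt(x) * math.sin(1.0 / x) for x in xs]
    return sum(abs(g[i] - g[i + 1]) for i in range(len(g) - 1))

for N in (10, 100, 1000, 10000):
    print(N, round(partial_variation(N), 3))
# The printed values keep growing (roughly proportionally to sqrt(N)),
# so g has unbounded variation on [0, 1].
```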
{ "language": "en", "url": "https://math.stackexchange.com/questions/414578", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Evaluate $\int_0^\infty \frac{\log(1+x^3)}{(1+x^2)^2}dx$ and $\int_0^\infty \frac{\log(1+x^4)}{(1+x^2)^2}dx$ Background: Evaluation of $\int_0^\infty \frac{\log(1+x^2)}{(1+x^2)^2}dx$ We can prove using the Beta-Function identity that $$\int_0^\infty \frac{1}{(1+x^2)^\lambda}dx=\frac{\sqrt{\pi}}{2}\frac{\Gamma \left(\lambda-\frac{1}{2} \right)}{\Gamma(\lambda)} \quad \lambda>\frac{1}{2}$$ Differentiating the above equation with respect to $\lambda$, we obtain an expression involving the Digamma Function $\psi_0(z)$. $$\int_0^\infty \frac{\log(1+x^2)}{(1+x^2)^\lambda}dx = \frac{\sqrt{\pi}}{2}\frac{\Gamma \left(\lambda-\frac{1}{2} \right)}{\Gamma(\lambda)} \left(\psi_0(\lambda)-\psi_0 \left( \lambda-\frac{1}{2}\right) \right)$$ Putting $\lambda=2$, we get $$\int_0^\infty \frac{\log(1+x^2)}{(1+x^2)^2}dx = -\frac{\pi}{4}+\frac{\pi}{2}\log(2)$$ Question: But, does anybody know how to evaluate $\displaystyle \int_0^\infty \frac{\log(1+x^3)}{(1+x^2)^2}dx$ and $\displaystyle \int_0^\infty \frac{\log(1+x^4)}{(1+x^2)^2}dx$? Mathematica gives the values * *$\displaystyle \int_0^\infty \frac{\log(1+x^3)}{(1+x^2)^2}dx = -\frac{G}{6}+\pi \left(-\frac{3}{8}+\frac{1}{8}\log(2)+\frac{1}{3}\log \left(2+\sqrt{3} \right) \right)$ *$\displaystyle \int_0^\infty \frac{\log(1+x^4)}{(1+x^2)^2}dx = -\frac{\pi}{2}+\frac{\pi \log \left( 6+4\sqrt{2}\right)}{4}$ Here, $G$ denotes the Catalan's Constant. Initially, my approach was to find closed forms for $$\int_0^\infty \frac{1}{(1+x^2)^2(1+x^3)^\lambda}dx \ \ , \int_0^\infty \frac{1}{(1+x^2)^2(1+x^4)^\lambda}dx$$ and then differentiate them with respect to $\lambda$, but it didn't prove to be of any help. Please help me prove these two results.
I hope it is not too late. Define \begin{eqnarray} I(a)=\int_0^\infty\frac{\log(1+ax^4)}{(1+x^2)^2}dx. \end{eqnarray} Then \begin{eqnarray} I'(a)&=&\int_0^\infty \frac{x^4}{(1+ax^4)(1+x^2)^2}dx\\ &=&\frac{1}{(1+a)^2}\int_0^\infty\left(-\frac{2}{1+x^2}+\frac{1+a}{(1+x^2)^2}+\frac{1-a+2ax^2}{1+a x^4}\right)dx\\ &=&\frac{1}{(1+a)^2}\left(-\pi+\frac{1}{4}(1+a)\pi+\frac{(1-a)\pi}{2\sqrt2a^{1/4}}+\frac{\pi a^{1/4}}{\sqrt2}\right)\\ &=&\frac{\pi}{4(1+a)^2}\left(a-3+\frac{\sqrt2(1-a)}{a^{1/4}}+2\sqrt2 a^{1/4}\right). \end{eqnarray} and hence \begin{eqnarray} I(1)&=&\int_0^1\frac{\pi}{4(1+a)^2}\left(a-3+\frac{\sqrt2(1-a)}{a^{1/4}}+2\sqrt2 a^{1/4}\right)da\\ &=&-\frac{\pi}{2}+\frac{\pi}{4}\log(6+4\sqrt2). \end{eqnarray} For the other integral, we can do the same thing and define $$ J(a)=\int_0^\infty\frac{\log(1+ax^3)}{(1+x^2)^2}dx. $$ The calculation is similar but more complicated, and I omit the details here.
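Both stated closed forms are easy to confirm numerically. Here is a quick check with mpmath (hypothetical code, not part of the original answer; it uses mpmath's quadrature routine and its built-in Catalan constant):

```python
from mpmath import mp, mpf, quad, log, sqrt, pi, catalan

mp.dps = 30  # work with 30 significant digits

I3 = quad(lambda x: log(1 + x**3) / (1 + x**2)**2, [0, mp.inf])
I4 = quad(lambda x: log(1 + x**4) / (1 + x**2)**2, [0, mp.inf])

closed3 = -catalan/6 + pi*(-mpf(3)/8 + log(2)/8 + log(2 + sqrt(3))/3)
closed4 = -pi/2 + pi*log(6 + 4*sqrt(2))/4

print(I3, closed3)   # the two printed values should agree
print(I4, closed4)
```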
{ "language": "en", "url": "https://math.stackexchange.com/questions/414642", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "31", "answer_count": 6, "answer_id": 0 }
The principal ideal $(x(x^2+1))$ equals its radical. Let $\mathbb R$ be the reals and $\mathbb R[x]$ the polynomial ring in one variable with real coefficients. Let $I$ be the principal ideal $(x(x^2+1))$. I want to prove that the ideal of the variety of $I$ is not the same as its radical, that is, $I(V(I))\not=\text {rad}(I)$. I've reduced this to proving that $I=\text{rad}(I)$. How can I go about that?
The expression $x(x^2 + 1)$ is the factorisation of that polynomial into irreducibles (over $\mathbb{R}$). Once you have such a factorisation, the radical is generated by the product of the distinct irreducible factors, i.e. the factorisation with no powers. Since $x(x^2+1)$ already has no repeated factors, the ideal it generates is radical. EDIT: I am referencing the following fact: Let $f \in k[x]$ be a polynomial, and suppose that $f = f_1^{\alpha_1} \cdots f_n^{\alpha_n}$ is the factorisation of $f$ into irreducibles. Then $\sqrt{(f)} = (f_1 \cdots f_n)$. One inclusion should be clear, and the other can be seen by appealing to the uniqueness of factorisation in $k[x]$.
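Equivalently, $(f)$ is a radical ideal exactly when $f$ is squarefree, and squarefreeness can be tested without factoring via $\gcd(f, f')$. A quick sanity check with sympy (hypothetical code, not part of the original answer):

```python
from sympy import symbols, gcd, diff

x = symbols('x')
f = x * (x**2 + 1)

# f is squarefree  <=>  gcd(f, f') = 1  <=>  (f) is radical in R[x]
print(gcd(f, diff(f, x)))            # 1, so (x(x^2+1)) is radical

g = x**2 * (x**2 + 1)                # a non-squarefree contrast case
print(gcd(g, diff(g, x)))            # x, so (g) is strictly inside its radical
```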
{ "language": "en", "url": "https://math.stackexchange.com/questions/414670", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 0 }
If a graph has $n$ vertices and $n$ edges, must it contain a cycle? How does one prove the following: a graph with $n$ vertices and $n$ edges must contain a cycle?
Here is an approach which does not use induction: Let $G$ be a graph with $n$ vertices and $n$ edges. Keep removing vertices of degree $1$ from $G$ until no such removal is possible, and let $G'$ denote the resulting graph. Each removal deletes exactly $1$ vertex and $1$ edge, so $G'$ still has equally many vertices and edges. In particular $G'$ cannot be empty: otherwise, just before the last removal we would have had a graph with $1$ vertex and $1$ edge, which is impossible in a simple graph. Now no vertex of $G'$ has degree $1$. Discard any isolated vertices (this removes no edges); what remains is a nonempty graph of minimum degree at least $2$, and such a graph contains a cycle: start walking from any vertex, never immediately backtracking, and some vertex must eventually repeat.
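The proof above is effectively an algorithm. Here is a sketch of it in Python (hypothetical code, added for illustration): prune degree-1 vertices, discard isolated ones, then walk without backtracking to read off a cycle.

```python
from collections import defaultdict

def find_cycle(edges):
    """Given a graph as a list of (u, v) edges, return a cycle if one exists."""
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)

    # Repeatedly strip degree-1 vertices (each strip removes 1 vertex, 1 edge).
    leaves = [v for v in adj if len(adj[v]) == 1]
    while leaves:
        v = leaves.pop()
        if len(adj[v]) != 1:        # degree changed since v was queued
            continue
        (u,) = adj[v]
        adj[u].discard(v)
        del adj[v]
        if len(adj[u]) == 1:
            leaves.append(u)

    core = {v for v in adj if adj[v]}      # drop isolated leftovers
    if not core:
        return None                        # the graph is acyclic

    # Every core vertex has degree >= 2: walk without backtracking until repeat.
    path, prev, cur = [], None, next(iter(core))
    seen = {}
    while cur not in seen:
        seen[cur] = len(path)
        path.append(cur)
        nxt = next(w for w in adj[cur] if w != prev)
        prev, cur = cur, nxt
    return path[seen[cur]:]

print(find_cycle([(1, 2), (2, 3), (3, 1), (3, 4)]))   # e.g. [1, 2, 3]
```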
{ "language": "en", "url": "https://math.stackexchange.com/questions/414733", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "15", "answer_count": 3, "answer_id": 1 }
How many samples are required to estimate the frequency of occurrence of an output (out of 8 different outputs)? I have $N$ marbles, and to each of them corresponds a number 1, 2, 3, ..., or 8 (i.e., there are 8 different kinds of marbles). How many samples are required to estimate the frequency of occurrence of each kind (within a given confidence interval and at a given confidence level)? (If the answer is lengthy, a hint or a link to an online reference suffices.)
This is quite straightforward in socio-economic statistics. An example in health care is given here. There is also a general calculator. Hope this helps.
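For a concrete starting point: estimating the frequency of one kind is estimating a proportion, and the usual normal-approximation sample size is $n = z^2\,p(1-p)/e^2$, maximized at $p=1/2$. The sketch below is hypothetical code illustrating that textbook formula, with the standard finite-population correction:

```python
from math import ceil
from statistics import NormalDist

def sample_size(margin, confidence=0.95, population=None, p=0.5):
    # z-score for the two-sided confidence level
    z = NormalDist().inv_cdf(1 - (1 - confidence) / 2)
    n = z**2 * p * (1 - p) / margin**2          # infinite-population size
    if population is not None:                  # finite-population correction
        n = n / (1 + (n - 1) / population)
    return ceil(n)

# e.g. estimate each marble kind's frequency to within +/-3 percentage
# points at 95% confidence, drawing from N = 10000 marbles:
print(sample_size(0.03))                    # about 1068 draws
print(sample_size(0.03, population=10000))  # about 965 with the correction
```

Note this sizes the estimate for each kind separately; a simultaneous guarantee for all 8 frequencies at once would need a multiple-comparison adjustment (for example, a Bonferroni-style split of the error budget).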
{ "language": "en", "url": "https://math.stackexchange.com/questions/414780", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Find the area: $(\frac xa+\frac yb)^2 = \frac xa-\frac yb , y=0 , b>a$ Find the area bounded by $$\left(\frac xa+\frac yb\right)^2 = \frac xa-\frac yb$$ and $y=0$, where $b>a$. I tried generalized polar coordinates, $x=a\cdot r\cdot \cos(\phi)$, $y=b\cdot r\cdot \sin(\phi)$, but then I get an equation I don't know what to do with, because $a$ and $b$ disappear. What are the conditions $y=0$, $b>a$ for? How do I define the limits of integration?
Your shape for $a=1$, $b=2$ is a small petal through the origin in the first quadrant (figure omitted). It is much easier to use the line parametrization $y=mx$ than polar coordinates. Let $y=m\,x$; then $$\bigg(\frac xa+\frac{mx}b\bigg)^2=\frac xa-\frac {mx}b\Rightarrow x(m)=\frac{1/a-m/b}{\big(1/a+m/b\big)^2}$$ and $$y(m)=m\,x(m)=m\frac{1/a-m/b}{\big(1/a+m/b\big)^2}.$$ To find the limits, determine where $y(m)$ becomes zero, because of the boundary $y=0$: $$y(m)=m\frac{1/a-m/b}{\big(1/a+m/b\big)^2}=0\Rightarrow m=0\text{ and }m=b/a.$$ Since $x(m)$ is zero at $m=b/a$, the integration runs from $b/a$ to $0$. Therefore $$A=\int_{b/a}^0y\,dx=\int_{b/a}^0y(m)\frac{dx}{dm}dm=\int_{b/a}^0m\frac{1/a-m/b}{\big(1/a+m/b\big)^2}\frac{a^2b(a\,m-3b)}{(am+b)^3}dm=\frac{ab}{12}$$
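The computation can also be checked symbolically. A sketch with sympy (hypothetical code, not part of the original answer):

```python
import sympy as sp

a, b, m = sp.symbols('a b m', positive=True)

x = (1/a - m/b) / (1/a + m/b)**2
y = m * x

# A = integral of y dx as m runs from b/a down to 0
A = sp.integrate(y * sp.diff(x, m), (m, b/a, 0))
print(sp.simplify(A))   # should simplify to a*b/12
```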
{ "language": "en", "url": "https://math.stackexchange.com/questions/414864", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
When is the Fourier transform a function into $\mathbb C$? The Fourier transform of a function $f\in\mathscr L^1(\mathbb R)$ is $$\widehat f\colon\mathbb R\rightarrow\mathbb C, x\mapsto\int_{-\infty}^\infty f(t)\exp(-ixt)\,\textrm{d}t$$ When is this genuinely a complex-valued function? In most calculations one gets real-valued transforms. When is the transform complex? Added: I know there are identities like $\frac{e^{iat}-e^{-iat}}{2i}=\sin(at)$ multiplied by 'anything', but I am asking for a transform which you cannot rewrite as a real-valued function.
There is a basic fact about the Fourier transform on the Schwartz space: for $f\in \mathcal{S}(\mathbb{R})$ we have $\widehat{f'}(t) = it\widehat{f}(t)$. Thus, if $\widehat{f}$ is real-valued (and not identically zero), then $\widehat{f'}$ is genuinely complex-valued. More generally, $\widehat{f}$ is real-valued exactly when $f(-t)=\overline{f(t)}$; for real $f$ this means $f$ is even, so the transform of any real function that is not even takes non-real values.
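A concrete instance: the indicator function of $[0,1]$ is real but not even, and its transform $\widehat f(x) = (1-e^{-ix})/(ix)$ is genuinely complex. A numerical check (hypothetical code, not part of the original answer):

```python
import numpy as np
from scipy.integrate import quad

def f_hat(x):
    # Fourier transform of the indicator of [0,1], split into real/imag parts
    re, _ = quad(lambda t: np.cos(x * t), 0.0, 1.0)
    im, _ = quad(lambda t: -np.sin(x * t), 0.0, 1.0)
    return complex(re, im)

for x in (1.0, 2.0, 5.0):
    exact = (1 - np.exp(-1j * x)) / (1j * x)
    print(x, f_hat(x), exact)   # numeric and closed form agree; Im != 0
```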
{ "language": "en", "url": "https://math.stackexchange.com/questions/414992", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 2 }
distributing z different objects among k people almost evenly We have $z$ objects (all different), and we want to distribute them among $k$ people ($k \le z$) so that the distribution is almost even, i.e. the difference between the number of articles given to the person with the most articles and the one with the fewest is at most 1. We need to find the total number of ways in which this can be done. For example, if there are 5 objects and 3 people, the number of such ways should be 90. I am not sure how we get this value.
Each person will get either $\lfloor \frac{z}{k}\rfloor$ or $\lceil \frac{z}{k}\rceil$ objects (these are the floor and ceiling functions). Writing $r = z \bmod k$, exactly $r$ people receive $\lceil z/k\rceil$ objects and the other $k-r$ receive $\lfloor z/k\rfloor$. Choose which $r$ people get the larger share, then distribute the distinct objects with a multinomial coefficient: $$\binom{k}{r}\cdot\frac{z!}{\left(\lceil z/k\rceil !\right)^{r}\left(\lfloor z/k\rfloor !\right)^{k-r}}.$$ For $z=5$, $k=3$ this gives $\binom{3}{2}\cdot\frac{5!}{2!\,2!\,1!}=3\cdot 30=90$, as expected.
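A brute-force check of this count (hypothetical code, not part of the original answer): enumerate every assignment of the $z$ distinct objects to $k$ people and keep the almost-even ones.

```python
from itertools import product
from math import comb, factorial

def brute_force(z, k):
    count = 0
    for assignment in product(range(k), repeat=z):   # object i -> person assignment[i]
        sizes = [assignment.count(p) for p in range(k)]
        if max(sizes) - min(sizes) <= 1:
            count += 1
    return count

def formula(z, k):
    lo, hi, r = z // k, -(-z // k), z % k            # floor, ceiling, remainder
    return comb(k, r) * factorial(z) // (factorial(hi)**r * factorial(lo)**(k - r))

print(brute_force(5, 3), formula(5, 3))   # 90 90
print(brute_force(7, 4), formula(7, 4))   # 2520 2520
```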
{ "language": "en", "url": "https://math.stackexchange.com/questions/415076", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 1 }
Is it possible to derive all the other boolean functions by taking other primitives different from $NAND$? I was reading the TECS book (The Elements of Computing Systems). In the book we start to build the other logical gates from a single primitive logical gate, the $NAND$ gate. With it, we could easily make the $NOT$ gate, then the $AND$ gate and then the $OR$ gate. With the $NOT$, $AND$ and $OR$ gates, we can express any truth table through its canonical representation. The book gives a table of all the two-input boolean functions (not reproduced here). I was wondering if we could do the same by taking any other boolean function as primitive. I'm quite sure it's not possible with either the constant $0$ or the constant $1$. What about the others?
As you noted, it's impossible for the constants to generate all of the boolean functions; the gates which are functions of a single input also can't do it, for equally obvious reasons (it's impossible to generate the boolean function $f(x,y)=y$ from either $f_1(x,y)=x$ or $f_2(x,y)=\bar{x}$). OR and AND can't do it either, for a slightly more complicated reason: it's impossible to generate nothing from something! (More specifically, neither of them is able to represent negation, and in particular neither can represent a constant function, because any function composed of only ORs and ANDs returns 0 when its inputs are all 0s and returns 1 when its inputs are all 1s.) XOR is a bit more complicated: it can generate the constant 0 (as $x\oplus x$), but every composition of XORs computes a parity (affine) function of its inputs, so the constant 1, NOT, and $x\wedge y$ are all out of reach; even granting the constant 1 (which would give NOT) would not help, because $x\wedge y$ is not an affine function. In fact, it can be shown that NOR and NAND are the only two-input boolean functions which suffice in and of themselves; for more details, have a look at the Wikipedia page on functional completeness.
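These claims can be verified mechanically: represent each two-input function by its four-row truth table and close a gate set under composition, starting from the two projections. A basis reaches all 16 tables exactly when it is complete for two-input functions. The following is a hypothetical sketch, not from the original answer:

```python
# A 2-input boolean function is a 4-tuple of outputs on
# (x, y) = (0,0), (0,1), (1,0), (1,1).
X = (0, 0, 1, 1)   # the projection f(x, y) = x
Y = (0, 1, 0, 1)   # the projection f(x, y) = y

GATES = {
    "NAND": lambda a, b: tuple(1 - (p & q) for p, q in zip(a, b)),
    "NOR":  lambda a, b: tuple(1 - (p | q) for p, q in zip(a, b)),
    "AND":  lambda a, b: tuple(p & q for p, q in zip(a, b)),
    "OR":   lambda a, b: tuple(p | q for p, q in zip(a, b)),
    "XOR":  lambda a, b: tuple(p ^ q for p, q in zip(a, b)),
}

def closure(gate):
    reachable = {X, Y}                       # start from the two projections
    while True:
        new = {gate(a, b) for a in reachable for b in reachable} - reachable
        if not new:
            return reachable
        reachable |= new

for name, gate in GATES.items():
    print(f"{name}: reaches {len(closure(gate))} of the 16 functions")
# NAND and NOR reach all 16; AND and OR each reach only 3 (x, y, and the
# gate itself); XOR reaches only 4 (0, x, y, x XOR y).
```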
{ "language": "en", "url": "https://math.stackexchange.com/questions/415152", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }