Convergence of series given a monotonic sequence Let $(x_n)_{n \in \Bbb N}$ be a decreasing sequence such that its series converges; I want to show that $\displaystyle \lim_{n \to \infty} n x_n = 0$.
Ok I don't even know where to start.
I need a direction please!
Thank you!
| Just another approach. Since $\{x_n\}_{n\in\mathbb{N}}$ is decreasing and its associated series converges, we have $x_n\geq 0$ for every $n\in\mathbb{N}$ (otherwise, $\lim_{n\to +\infty}x_n < 0$ and the series cannot converge).
Assume now the existence of a positive real number $\alpha$ such that
$$ n\, x_n \geq \alpha $$
for an infinite number of positive natural numbers $n$; let $A=\{n_1,n_2,\ldots\}$ be the set of such natural numbers. Let now $a_0=0,a_1=n_1$, $a_2$ be the minimum element of $A$ greater than $2a_1$, $a_3$ be the minimum element of $A$ greater than $2a_2$ and so on. We have:
$$\sum_{n=1}^{+\infty}x_n \geq \sum_{k=1}^{+\infty}(a_k-a_{k-1})x_{a_k} \geq\sum_{k=1}^{+\infty}\frac{\alpha}{2}=+\infty,$$
that is clearly a contradiction, so
$$\lim_{n\to+\infty} (n\,x_n) = 0$$
must hold.
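As a quick sanity check of the statement (my own addition, not part of the proof), here is a small Python sketch with the convergent example $x_n = 1/n^2$, for which $n\,x_n = 1/n$ visibly tends to $0$:

```python
# Numeric illustration: x_n = 1/n^2 is decreasing with a convergent
# series, and the products n * x_n = 1/n shrink toward 0.
x = lambda n: 1.0 / n**2
samples = [n * x(n) for n in (10, 100, 1000, 10000)]
assert samples == sorted(samples, reverse=True)  # strictly shrinking
assert samples[-1] < 1e-3                        # heading to 0
```

Of course this only checks one example; the argument above is what proves the general statement.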
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/223452",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
$a^2-b^2 = x$ where $a,b,x$ are natural numbers Suppose that $a^2-b^2 =x$ where $a,b,x$ are natural numbers.
Suppose $x$ is fixed. If there is one $(a,b)$ found, can there be another $(a,b)$?
Also, would there be a way to know how many such $(a,b)$ exists?
| You want $x = a^2 - b^2 = (a-b)(a+b)$. Let $m = a-b$ and $n = a+b$, then note that $a = (m+n)/2$ and $b = (n-m)/2$. For these to be natural numbers, you want both $m$ and $n$ to be of the same parity (i.e., both odd or both even), and $m \le n$. For any factorization $x = mn$ satisfying these properties, $a = (m+n)/2$ and $b = (n-m)/2$ will be a solution.
The answer to your question of how many such $(a,b)$ exist is therefore the same as how many ways there are of writing $x = mn$ with both factors of the same parity and $m \le n$. Let $d(x)$ denote the number of divisors of $x$. (For instance, $d(12) = 6$ as $12$ has $6$ factors $1, 2, 3, 4, 6, 12$.)
* If $x$ is odd, note that for any divisor $m$ of $x$, the factorization $x = mn$ (where $n = x/m$) has both factors odd. Also, out of any two factorizations $(m,n)$ and $(n,m)$, one of them will have $m < n$ and the other will have $m > n$, so the number of "good" factorizations of $x$ is $d(x)/2$. In the case where $x$ is a perfect square, this means either $\lceil d(x)/2 \rceil$ or $\lfloor d(x)/2 \rfloor$ depending on whether or not you want to allow the solution $a = \sqrt{x}$, $b = 0$.
* If $x$ is even, then $m$ and $n$ can't both be odd, so they must both be even, so $x$ must be divisible by $4$. Say $x = 2^k \cdot l$, where $k \ge 2$ and $l$ is odd. How many factorizations $x = mn$ are there with both $m$ and $n$ even? Well, there are $d(x)$ factorizations of $x$ as $x = mn$. Of these, the factorizations in which all the powers of $2$ go on one side can be obtained by taking any of the $d(l)$ factorizations of $l$, and then putting the powers of two entirely on one of the $2$ sides. So the number of representations $x = mn$ with $m$ and $n$ both even is $d(x) - 2d(l)$. Again, the number with $m \le n$ is half that, namely $(d(x) - 2d(l))/2$, where in the case $x$ is a perfect square, you mean either $\lceil (d(x) - 2d(l))/2 \rceil$ or $\lfloor (d(x) - 2d(l))/2 \rfloor$ depending on whether you want to allow the $b = 0$ solution or not.
Here is a program using Sage that can print all solutions $(a,b)$ for any given $x$.
#!/path/to/sage
print "Hello"
def mn(x):
    '''Returns all (m,n) such that x = mn, m <= n, and both odd/even'''
    for m in divisors(x):
        n = x/m
        if m <= n and (m % 2 == n % 2): yield (m,n)
        elif m > n: break

def ab(x):
    '''Returns all (a,b) such that a^2 - b^2 = x'''
    return [((m+n)/2, (n-m)/2) for (m,n) in mn(x)]

def num_ab(x):
    '''The number of (a,b) such that a^2 - b^2 = x'''
    dx = number_of_divisors(x)
    if x % 2: return ceil(dx / 2)
    l = odd_part(x)
    dl = number_of_divisors(l)
    return ceil((dx - 2*dl) / 2)
# Do it in two ways to check that we have things right
for x in range(1,1000): assert num_ab(x) == len(ab(x))
# Some examples
print ab(12)
print ab(100)
print ab(23)
print ab(42)
print ab(999)
print ab(100000001)
It prints:
Hello
[(4, 2)]
[(26, 24), (10, 0)]
[(12, 11)]
[]
[(500, 499), (168, 165), (60, 51), (32, 5)]
[(50000001, 50000000), (2941185, 2941168)]
and you can verify that, for instance, $168^2 -165^2 = 999$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/223521",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 4,
"answer_id": 0
} |
sines and cosines law Can we apply the sines and cosines law to the external angles of a triangle?
| This answer assumes a triangle with angles $A, B, C$ with sides $a,b,c$.
Law of sines states that $$\frac{\sin A}{a} = \frac{\sin B}{b} = \frac{\sin C}{c}$$
Since the external angle is $\pi - \gamma$ when the internal angle is $\gamma$, note that $\sin(\pi-\gamma) = \sin \gamma$: in the unit circle, you are merely reflecting the point on the circle over the $y$-axis, and since the sine is the $y$-coordinate of the point, it remains the same.
(Also, $\sin(\pi-\gamma) = \sin\pi \cos \gamma - \cos \pi \sin \gamma = \sin \gamma$), so $$\frac{\sin A}{a} = \frac{\sin B}{b} = \frac{\sin C}{c} = \frac{\sin (\pi -A)}{a} = \frac{\sin (\pi -B)}{b} = \frac{\sin(\pi - C)}{c}$$
The law of cosines states that $$c^2=a^2+b^2-2ab\cos\gamma \Longrightarrow \cos \gamma = \frac{a^2 + b^2 - c^2}{2ab}$$
But $\cos(\pi-\gamma) = -\cos\gamma$ for pretty much the same reasons I used for the sine above. So if you are using external angles, the law of cosines will not work. However, if $\pi - \gamma$ is the external angle, then
$$c^2=a^2+b^2+2ab\cos(\pi-\gamma) \Longrightarrow \cos(\pi- \gamma) = -\frac{a^2 + b^2 - c^2}{2ab}$$
So in short, the answer is yes for law of sines, and no for law of cosines (unless you make the slight modification I made).
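As a numeric sanity check (a sketch of my own, not part of the answer), both claims can be verified on a concrete $3$-$4$-$5$ right triangle:

```python
import math

# 3-4-5 right triangle: A, B opposite the legs, C the right angle.
a, b, c = 3.0, 4.0, 5.0
A = math.asin(a / c)   # angle opposite side a
B = math.asin(b / c)   # angle opposite side b
C = math.pi / 2        # angle opposite side c

# Law of sines survives replacing each angle by its external angle.
for ang, side in ((A, a), (B, b), (C, c)):
    assert math.isclose(math.sin(ang) / side, math.sin(math.pi - ang) / side)

# Law of cosines: minus sign internally, plus sign with the external angle.
assert math.isclose(c**2, a**2 + b**2 - 2*a*b*math.cos(C))
assert math.isclose(c**2, a**2 + b**2 + 2*a*b*math.cos(math.pi - C))
```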
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/223583",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
What type of singularity is this? $z\cdot e^{1/z}\cdot e^{-1/z^2}$ at $z=0$.
My answer is removable singularity.
$$
\lim_{z\to0}\left|z\cdot e^{1/z}\cdot e^{-1/z^2}\right|=\lim_{z\to0}\left|z\cdot e^{\frac{z-1}{z^2}}\right|=\lim_{z\to0}\left|z\cdot e^{\frac{-1}{z^2}}\right|=0.
$$
But someone says it is an essential singularity. I don't know why.
| $$ze^{1/z}e^{-1/z^2}=z\left(1+\frac{1}{z}+\frac{1}{2!z^2}+...\right)\left(1-\frac{1}{z^2}+\frac{1}{2!z^4}-...\right)$$
So this looks like an essential singularity, uh?
I really don't understand how you made the following step:
$$\lim_{z\to 0}\left|z\cdot e^{\frac{z-1}{z^2}}\right|=\lim_{z\to0}\left|z\cdot e^{\frac{-1}{z^2}}\right|$$
What happened to that $\,z\,$ in the exponential's power?
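A numeric way to see what goes wrong (my own illustration): along the real axis the modulus does go to $0$, but along the imaginary axis $z = iy$ we have $|e^{-1/z^2}| = e^{1/y^2} \to \infty$, so the modulus blows up, which is impossible at a removable singularity. A short Python check:

```python
import cmath

# f(z) = z * e^(1/z) * e^(-1/z^2); compare |f| along two paths to 0.
f = lambda z: z * cmath.exp(1 / z) * cmath.exp(-1 / z**2)

real_path = [abs(f(complex(t, 0))) for t in (0.5, 0.3, 0.2)]  # z = t
imag_path = [abs(f(complex(0, t))) for t in (0.5, 0.3, 0.2)]  # z = it

assert real_path[0] > real_path[-1] and real_path[-1] < 1e-3  # shrinks to 0
assert imag_path[0] < imag_path[-1] and imag_path[-1] > 1e6   # blows up
```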
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/223642",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
} |
Math induction ($n^2 \leq n!$) help please I'm having trouble with a math induction problem. I've been doing other proofs (summations of the integers etc) but I just can't seem to get my head around this.
Q. Prove using induction that $n^2 \leq n!$
So, assume that $P(k)$ is true: $k^2 \leq k!$
Prove that $P(k+1)$ is true: $(k+1)^2 \leq (k+1)!$
I know that $(k+1)! = (k+1)k!$ so: $(k+1)^2 \leq (k+1)k!$ but where can I go from here?
Any help would be much appreciated.
| The statement you want to prove is that for all $n\geq 4$ it holds that $n^2\leq n!$ (you called this $P(n)$); note the inequality fails for $n=2$ and $n=3$. So let's first prove $P(4)$, i.e. $4^2\leq 4!$; since $16\leq 24$ this is clear. So let's assume $P(n)$ and prove $P(n+1)$.
First note that for $n\geq 2$ it holds that
$$ 0\leq (n-1)^2+(n-2)=n^2-2n+1+n-2=n^2-n-1 $$
which is equivalent to $n+1\leq n^2$ which gives
$$ (n+1)^2=(n+1)(n+1)\leq (n+1)n^2 $$
by the induction hypothesis (i.e. $P(n)$), the term $n^2$ in the last expression is less than or equal to $n!$, so we can continue:
$$ (n+1)n^2\leq (n+1)n! = (n+1)! $$
which is the statement we wanted to prove.
My answer is very extensive and explicit. But maybe, you now get a better understanding of what you have to do in general, when you want to prove something by induction.
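A quick brute-force check of the statement (my addition; note the claim only holds from the base case $n=4$ onward, since it fails for $n=2,3$):

```python
import math

# The inequality n^2 <= n! holds for all n >= 4 ...
assert all(n**2 <= math.factorial(n) for n in range(4, 30))
# ... but fails for n = 2 and n = 3, which is why the base case is 4.
assert 2**2 > math.factorial(2) and 3**2 > math.factorial(3)
```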
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/223718",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 0
} |
Are Trace of product of matrices- distributive/associative? Is $\operatorname{Tr}(X^TAX)-\operatorname{Tr}(X^TBX)$ equal to $\operatorname{Tr}(X^TCX)$, where $C=A-B$ and $A$, $B$, $X$ have real entries and also $A$ and $B$ are p.s.d.
| Yes, as
$$X^t(A-B)X=X^t(AX-BX)=X^tAX-X^tBX,$$
using associativity and distributivity of product with respect to the addition. The fact that the matrices $A$ and $B$ are p.s.d. is not needed here.
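A numeric spot-check of the identity (a sketch with random matrices; as noted, the p.s.d. hypothesis plays no role), using plain Python lists to stay dependency-free:

```python
import random

def matmul(P, Q):
    return [[sum(P[i][k] * Q[k][j] for k in range(len(Q)))
             for j in range(len(Q[0]))] for i in range(len(P))]

def transpose(P):
    return [list(col) for col in zip(*P)]

def trace(P):
    return sum(P[i][i] for i in range(len(P)))

random.seed(0)
rand = lambda: [[random.uniform(-1, 1) for _ in range(3)] for _ in range(3)]
A, B, X = rand(), rand(), rand()
C = [[A[i][j] - B[i][j] for j in range(3)] for i in range(3)]  # C = A - B

# Tr(X^T A X) - Tr(X^T B X) == Tr(X^T (A - B) X), up to rounding
lhs = (trace(matmul(transpose(X), matmul(A, X)))
       - trace(matmul(transpose(X), matmul(B, X))))
rhs = trace(matmul(transpose(X), matmul(C, X)))
assert abs(lhs - rhs) < 1e-12
```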
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/223791",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
} |
Does a tower of Galois extensions in $\mathbb{C}$ give an overall Galois extension? If $L/K$ and $F/L$ are Galois extensions inside $\mathbb{C}$, must $F/K$ be a Galois extension?
| Consider the extension $\mathbb Q\subset\mathbb Q(\sqrt[4]{2})\subset \mathbb Q(\sqrt[4]{2},i) $. You have that $\mathbb Q(\sqrt[4]{2})/\mathbb Q$ is not Galois since it is not normal. You have to enlarge $\mathbb Q(\sqrt[4]{2})$ over $\mathbb Q$ in order to get a Galois extension.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/223866",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
generalized MRRW bound on the asymptotic rate of q-ary codes Among the many upper bounds for families of codes in $\mathbb F _2 ^n$, the best known bound is the one by McEliece, Rodemich, Rumsey and Welch (MRRW) which states that the rate $R(\delta)$ corresponding to a relative distance of $\delta$ is such that:
\begin{equation*}R(\delta) \leq H_2(\frac{1}{2}-\sqrt{\delta(1-\delta)}) \end{equation*}
where H is the (binary) entropy function.
(A slight improvement of the above exists in the binary case, but within the same framework)
In the case of q-ary codes, i.e. codes over $\mathbb F _q ^n$, the above bound is generalized to:
\begin{equation*}R(\delta) \leq H_q(\frac{1}{q}(q-1-(q-2)\delta-2\sqrt{(q-1)\delta(1-\delta)})) \end{equation*}
My question is as follows:
For larger alphabet size q, the above bound seems to weaken significantly. In fact, observing the growth of the above bound as $q \rightarrow \infty$ (using simple approximations of entropy), we see that:
\begin{equation*} R(\delta) \leq 1-\delta+\mathcal{O}(\frac{1}{\log{q}}) \end{equation*}
Thus, it seems to get worse than even the Singleton bound $R(\delta) \leq 1-\delta$.
So which is the best bound for large alphabet size $q$? Or am I wrong in the above conclusion (most sources claim the MRRW bound stated above is the best known bound, but its not clear if that holds for larger q as well).
Also, could someone direct me to references for comparisons of different bounds for larger $q$? I am able to find reliable comparisons only for $q=2$.
| The source is formula (3) on page 86 in the article
Aaltonen, Matti J.: Linear programming bounds for tree codes. IEEE Transactions on Information Theory 25.1 (1979), 85–90,
doi: 10.1109/tit.1979.1056004.
According to the article, there is the additional requirement $0 < \delta < 1 - \frac{1}{q}$.
Instead of comparing to the asymptotic Singleton bound, one can (more ambitiously) compare to an improvement, the asymptotic Plotkin bound
$$R(\delta) \leq 1 - \frac{q}{q-1}\,\delta$$
for $0 \leq \delta \leq 1 - \frac{1}{q}$.
In Aaltonen's paper, the formula is followed by some discussion into this direction. There I find "However, for large $q$ the Plotkin bound remains the best upper bound."
So apparently this "generalized MRRW bound" is not strong for large $q$, as you say.
As a side note, for $q\to \infty$, the function $R(\delta)$ converges to the asymptotic Singleton bound $1 - \delta$.
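To see the comparison concretely, here is a small Python sketch (my own illustration, not taken from Aaltonen's paper) evaluating the three asymptotic bounds at $q=64$, $\delta=1/2$; the generalized MRRW value exceeds both the Singleton and Plotkin values there, so it is indeed the weakest of the three at that point:

```python
import math

def H_q(x, q):
    # q-ary entropy function, valid for 0 < x < 1
    return (x * math.log(q - 1, q)
            - x * math.log(x, q)
            - (1 - x) * math.log(1 - x, q))

def mrrw_q(delta, q):
    # generalized MRRW bound as stated in the question
    arg = (q - 1 - (q - 2) * delta
           - 2 * math.sqrt((q - 1) * delta * (1 - delta))) / q
    return H_q(arg, q)

plotkin = lambda delta, q: 1 - q * delta / (q - 1)   # asymptotic Plotkin
singleton = lambda delta: 1 - delta                   # asymptotic Singleton

q, delta = 64, 0.5
assert mrrw_q(delta, q) > singleton(delta) > plotkin(delta, q)
```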
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/224933",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 1,
"answer_id": 0
} |
Interior Set of Rationals. Confused! Can someone explain to me why the interior of rationals is empty? That is $\text{int}(\mathbb{Q}) = \emptyset$?
The definition of an interior point is "A point $q$ is an interior point of $E$ if there exists a ball at $q$ such that the ball is contained in $E$" and the interior set is the collection of all interior points.
So if I were to take $q = \frac{1}{2}$, then clearly $q$ is an interior point of $\mathbb{Q}$, since I can draw a ball of radius $1$ and it would still be contained in $\mathbb{Q}$.
And why can't I just take all the rationals to be the interior?
So why can't I have $\text{int}\mathbb{(Q)} = \mathbb{Q}$?
| It is easy to show that there are irrational numbers between any two rational numbers. Let $q_1 < q_2$ be rational numbers and choose a positive integer $m$ such that $m(q_2-q_1)>2$. Then the irrational number $m q_1+\sqrt{2}$ belongs to the interval $(mq_1, mq_2)$, and so the irrational number $q_1 + \frac{\sqrt{2}}{m}$ belongs to $(q_1,q_2)$.
With this in mind, there are irrational numbers in any neighbourhood of a rational number, which implies that $\textrm{int}(\mathbb{Q}) = \emptyset$.
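The construction in the first paragraph can be checked numerically (a sketch of mine; the choice $m = \lceil 2/(q_2-q_1)\rceil + 1$ is one concrete way to guarantee $m(q_2-q_1) > 2$):

```python
import math
from fractions import Fraction

def irrational_between(q1, q2):
    # pick m with m*(q2 - q1) > 2; then sqrt(2)/m < q2 - q1,
    # so q1 + sqrt(2)/m lies strictly between q1 and q2
    m = math.ceil(2 / (q2 - q1)) + 1
    return float(q1) + math.sqrt(2) / m

q1, q2 = Fraction(1, 3), Fraction(1, 2)
r = irrational_between(q1, q2)
assert q1 < r < q2
```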
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/224980",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "13",
"answer_count": 6,
"answer_id": 5
} |
Is the difference of the natural logarithms of two integers always irrational or 0? If I have two integers $a,b > 1$. Is
$\ln(a) - \ln(b)$
always either irrational or $0$. I know both $\ln(a)$ and $\ln(b)$ are irrational.
| If $\log(a)-\log(b)$ is rational, then $\log(a)-\log(b)=p/q$ for some integers $p$ and $q$, hence $\mathrm e^p=r$ where $r=(a/b)^q$ is rational. If $p\ne0$, then $\mathrm e=r^{1/p}$ is algebraic since $\mathrm e$ solves $x^p-r=0$. This is absurd hence $p=0$, and $a=b$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/225039",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11",
"answer_count": 1,
"answer_id": 0
} |
Homology of pair (A,A) Why is the homology of the pair (A,A) zero?
$$H_n(A,A)=0, n\geq0$$
To me it looks like the homology of a point so at least for $n=0$ it should not be zero.
How do we see this?
| Let us consider the identity map $i : A \to A$. This a homeomorphism and so induces an isomorphism on homology. Now consider the long exact sequence of the pair $(A,A)$: We get
$$\ldots \longrightarrow H_n(A)\stackrel{\cong}{\longrightarrow} H_n(A) \stackrel{f}{\longrightarrow} H_n(A,A) \stackrel{g}{\longrightarrow} H_{n-1}(A) \stackrel{\cong}{\longrightarrow} H_{n-1}(A)
\longrightarrow \ldots $$
Now this tells you that $f$ must be the zero map because its kernel is the whole of the homology group. So the image of $f$ is zero. However this would mean that the kernel of $g$ is zero. But then because of the isomorphism on the right we have that the image of $g$ is zero. The only way for the kernel of $g$ to be zero at the same time as the image being zero is if
$$H_n(A,A) = 0.$$
Another way is to go straight from the definition: by definition the relative singular chain groups are $C_n(A)/C_n(A) = 0$. Now when you take the homology of this chain complex it is obvious that you get zero because you are taking the homology of the chain complex
$$\rightarrow C_n(A)/C_n(A) \rightarrow C_{n-1}(A)/C_{n-1}(A) \rightarrow \ldots$$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/225125",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
How many ways to reach $1$ from $n$ by doing $/13$ or $-7$? How many ways to reach $1$ from $n$ by doing $/13$ or $-7$ ?
(i.e., where $n$ is the starting value (positive integer) and $/13$ means division by $13$ and $-7$ means subtracting 7)?
Let the number of ways be $f(n)$.
Example: $n = 20$, then $f(n) = 1$: since $13$ is not a divisor of $20$, we start with $20-7=13$, then $13/13 = 1$.
(edit) : Let $g(n)$ be the number of steps needed.(edit)
We can show easily that $f(13n) = f(n) + f(13n-7).$ Or if $n$ is not a multiple of $13$ then $f(n) = f(n-7).$
(edit): I believe that $g(13n) = g(n) + g(13n-7) + 2.$ Or if $n$ is not a multiple of $13$ then $g(n) = g(n-7) + 1.$ Although this might require negative values.
( with thanks to Ross for pointing out the error )(edit)
Those equations look simple and familiar, somewhat like functional equations for logarithms, partition functions, the Fibonacci sequence, and even Collatz-like problems, or a functional equation I posted here before.
I considered modular arithmetic such as mod $13^2$ and the like, but with no success so far.
How to solve this ?
Does it have a nice generating function ?
Is there a name for the generalization of this problem ? Because it seems very typical number theory. Is this related to q-analogs ?
| Some more thoughts to help:
As $13 \equiv -1 \pmod 7$, you can only get to $1$ from numbers that are $\equiv \pm 1 \pmod 7$. You can handle $1$ and $13$, so you can handle all natural numbers $k \equiv \pm 1 \pmod 7$ except $6$.
Also because $13 \equiv -1 \pmod 7$, for $k \equiv -1 \pmod 7$ you have to divide an odd number of times. For $k \equiv 1 \pmod 7$ you have to divide an even number of times.
From your starting number, you have to subtract $7$'s until you get to a multiple of $13$. Then if you subtract $7$, you have to do $13$ of them, subtracting $91$. In a sense, the operations commute, as you can subtract $91$ and then divide by $13$, or divide by $13$ and then subtract $7$, to get to the same place. Numbers of the form $13+91k$ have $k+1$ routes to $1$ (as long as they are less than $13^3$).
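The recursion from the question is easy to implement with memoization, and it confirms both the worked example $f(20)=1$ and the count for numbers of the form $13+91k$ (a sketch; I take $f(1)=1$ as the base case):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def f(n):
    """Number of ways to reach 1 from n using /13 (when possible) or -7."""
    if n < 1:
        return 0
    if n == 1:
        return 1
    total = f(n - 7)            # f(n) = f(n - 7) ...
    if n % 13 == 0:
        total += f(n // 13)     # ... plus f(n/13) when 13 | n
    return total

assert f(20) == 1  # the worked example in the question
# numbers 13 + 91k have k + 1 routes to 1 (these are all < 13^3 = 2197)
assert all(f(13 + 91 * k) == k + 1 for k in range(5))
```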
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/225190",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
} |
Constructing a triangle given three concurrent cevians? Well, I've been taught how to construct triangles given the $3$ sides, the $3$ angles and etc. This question came up and the first thing I wondered was if the three altitudes (medians, concurrent$^\text {any}$ cevians in general) of a triangle are unique for a particular triangle.
I took a wild guess and I assumed Yes! Now assuming my guess is correct, I have the following questions
How can I construct a triangle given all three altitudes, medians or/and any three concurrent cevians, if it is possible?
N.B: If one looks closely at the order in which I wrote the question(the cevians coming last), the altitudes and the medians are special cases of concurrent cevians with properties
* The altitudes form an angle of $90°$ with the sides each of them touches.
* The medians bisect the side each of them touches.
With these properties it will be easier to construct the equivalent triangles (which I still don't know how) but with just any concurrent cevians, what other unique property can be added to make the construction possible? For example the angle they make with the sides they touch ($90°$ in the case of altitudes) or the ratio in which they divide the sides they touch ($1:1$ in the case of medians) or any other property for that matter.
EDIT
What André has shown below is a perfect example of three concurrent cevians forming two different triangles, thus given the lengths of three concurrent cevians, these cevians don't necessarily define a unique triangle. But also note that the altitudes of the equilateral triangle he defined are perpendicular to opposite sides while for the isosceles triangle, the altitude is, obviously also perpendicular to the opposite side with the remaining two cevians form approximately an angle of $50°$ with each opposite sides. Also note that the altitudes of the equilateral triangle bisects the opposite sides and the altitude of the isosceles triangle bisects its opposite sides while the remaining two cevians divides the opposite sides, each in the ratio $1:8$.
Now, given these additional properties (like the ratio of "bisection" or the angle formed with opposite sides) of these cevians, do they form a unique triangle (I'm assuming yes on this one) and if yes, how can one construct that unique triangle with a pair of compasses, a ruler and a protractor?
| It is clear that the lengths of concurrent cevians cannot always determine the triangle. Indeed, they probably never can. But if it is clear, we must be able to give an explicit example.
Cevians $1$: Draw an equilateral triangle with height $1$. Pick as your cevians the altitudes.
Cevians $2$: Draw an isosceles triangle $ABC$ such that $AB=AC$, and $BC=\dfrac{10}{12}$, and the height of the triangle with respect to $A$ is equal to $1$. Then $AB=AC=\sqrt{1+(5/12)^2}=\dfrac{13}{12}$.
There are (unique) points $X$ and $Y$ on $AB$ and $AC$ respectively such that $BY=CX=1$. This is because as a point $T$ travels from $B$ to $A$ along $BA$, the length of $CT$ increases steadily from $\dfrac{10}{12}$ to $\dfrac{13}{12}$, so must be equal to $1$ somewhere between $B$ and $A$. Let $X$ be this value of $T$.
Let one of our cevians be the altitude from $A$, and let $BY$ and $CX$ be the other two cevians. These three cevians are concurrent because of the symmetry about the altitude from $A$, and they all have length $1$.
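One can verify this example numerically (my own check, with the coordinates $B=(-5/12,0)$, $C=(5/12,0)$, $A=(0,1)$, an assumed placement consistent with the lengths above); a bisection locates the point $X$ on $AB$ with $CX=1$, using the monotonicity argued above:

```python
import math

B, C, A = (-5/12, 0.0), (5/12, 0.0), (0.0, 1.0)
dist = lambda P, Q: math.hypot(P[0] - Q[0], P[1] - Q[1])

# side lengths match the answer: BC = 10/12, AB = AC = 13/12
assert math.isclose(dist(B, C), 10/12) and math.isclose(dist(A, B), 13/12)

# find X on AB with CX = 1 by bisection; it exists since CB < 1 < CA
lo, hi = 0.0, 1.0  # parameter t along the segment: X = B + t*(A - B)
for _ in range(60):
    t = (lo + hi) / 2
    X = (B[0] + t * (A[0] - B[0]), B[1] + t * (A[1] - B[1]))
    lo, hi = (t, hi) if dist(C, X) < 1 else (lo, t)
X = (B[0] + lo * (A[0] - B[0]), B[1] + lo * (A[1] - B[1]))
assert math.isclose(dist(C, X), 1.0)
# BY is the mirror image of CX, so by symmetry all three cevians
# (CX, BY, and the altitude from A) are concurrent and have length 1
```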
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/225255",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 5,
"answer_id": 0
} |
Logic for getting number of pages If a page can have 27 items printed on it, and the number of items can be any positive number, how can I find the number of pages given the number of items? I tried modulus and division, but it didn't help.
FYI, I am using C# as programming platform.
| If I understand the question correctly, isn't the answer just the number of total items divided by $27$ and then rounded up?
If you had $54$ total items, $54/27=2$ pages, which doesn't need to round.
If you had $100$ total items, $100/27\approx3.7$, which rounds up to $4$ pages.
If you had $115$ total items, $115/27\approx4.26$, which rounds up to $5$ pages.
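In code this is just ceiling division, which avoids floating point entirely; here is a Python sketch (the same `(items + 26) / 27` trick carries over to C#'s integer division):

```python
def pages(items, per_page=27):
    # integer ceiling division: equivalent to ceil(items / per_page)
    return (items + per_page - 1) // per_page

assert [pages(n) for n in (54, 100, 115)] == [2, 4, 5]
assert pages(27) == 1 and pages(28) == 2  # boundary behaviour
```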
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/225310",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Squeeze Theorem Problem I'm busy studying for my Calculus A exam tomorrow and I've come across quite a tough question. I know I shouldn't post such localized questions, so if you don't want to answer, you can just push me in the right direction.
I had to use the squeeze theorem to determine:
$$\lim_{x\to\infty} \dfrac{\sin(x^2)}{x^3}$$
This was easy enough and I got the limit to equal 0. Now the second part of that question was to use that to determine:
$$\lim_{x\to\infty} \dfrac{2x^3 + \sin(x^2)}{1 + x^3}$$
Obviously I can see that I'm going to have to substitute the answer I got from the first limit into this equation, but I can't seem to figure out how to do it.
Any help would really be appreciated! Thanks in advance!
| I assume you meant $$\lim_{x \to \infty} \dfrac{2x^3 + \sin(x^2)}{1+x^3}$$ Note that $-1 \leq \sin(\theta) \leq 1$. Hence, we have that $$\dfrac{2x^3 - 1}{1+x^3} \leq \dfrac{2x^3 + \sin(x^2)}{1+x^3} \leq \dfrac{2x^3 + 1}{1+x^3}$$
Note that
$$\dfrac{2x^3 - 1}{1+x^3} = \dfrac{2x^3 +2 -3}{1+x^3} = 2 - \dfrac3{1+x^3}$$
$$\dfrac{2x^3 + 1}{1+x^3} = \dfrac{2x^3 + 2 - 1}{1+x^3} = 2 - \dfrac1{1+x^3}$$
Hence,
$$2 - \dfrac3{1+x^3} \leq \dfrac{2x^3 + \sin(x^2)}{1+x^3} \leq 2 - \dfrac1{1+x^3}$$
Can you now find the limit?
EDIT
If you want to make use of the fact that $\lim_{x \to \infty} \dfrac{\sin(x^2)}{x^3} = 0$, divide the numerator and denominator of $\dfrac{2x^3 + \sin(x^2)}{1+x^3}$ by $x^3$ to get
$$\dfrac{2x^3 + \sin(x^2)}{1+x^3} = \dfrac{2 + \dfrac{\sin(x^2)}{x^3}}{1 + \dfrac1{x^3}}$$ Now make use of the fact that $\lim_{x \to \infty} \dfrac{\sin(x^2)}{x^3} = 0$ and $\lim_{x \to \infty} \dfrac1{x^3} = 0$ to get your answer.
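A quick numeric check (my addition) that the squeeze bounds hold and that the quotient indeed approaches $2$:

```python
import math

g = lambda x: (2 * x**3 + math.sin(x**2)) / (1 + x**3)
lower = lambda x: 2 - 3 / (1 + x**3)   # from sin >= -1
upper = lambda x: 2 - 1 / (1 + x**3)   # from sin <= +1

for x in (10.0, 100.0, 1000.0):
    assert lower(x) <= g(x) <= upper(x)   # the squeeze holds
assert abs(g(1000.0) - 2) < 1e-8          # and the limit is 2
```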
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/225374",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 3,
"answer_id": 0
} |
Derivative of a split function We have the function:
$$f(x) = \frac{x^2\sqrt[4]{x^3}}{x^3+2}.$$
I rewrote it as
$$f(x) = \frac{x^2{x^{3/4}}}{x^3+2}.$$
After a while of differentiating I get the final answer:
$$f(x)= \frac{- {\sqrt[4]{\left(\frac{1}{4}\right)^{19}} + \sqrt[4]{5.5^7}}}{(x^3+2)^2}$$(The minus isn't behind the four)
But my answer sheet gives a different answer, and it also shows a wrong calculation, so I don't know what the right answer is. Can you help me with this?
| Let $y=\frac{x^2\cdot x^{3/4}}{x^3+2}$, so $y=\frac{x^{11/4}}{x^3+2}$ and therefore $y=x^{11/4}\times(x^3+2)^{-1}$. Now use the product rule for two functions: $$(f\cdot g)'=f'\cdot g+f\cdot g'$$ Here $f(x)=x^{11/4}$ and $g(x)=(x^3+2)^{-1}$, so $f'(x)=\frac{11}{4}x^{7/4}$ and $g'(x)=(-1)(3x^2)(x^3+2)^{-2}$. As for the answer in the body of your question, I cannot see where it came from.
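A numeric spot-check of the resulting derivative (my own verification, comparing against a central difference at the arbitrarily chosen point $x=2$):

```python
import math

# y = x^{11/4} / (x^3 + 2) and its product-rule derivative
f = lambda x: x**(11/4) / (x**3 + 2)
fprime = lambda x: ((11/4) * x**(7/4) / (x**3 + 2)
                   - 3 * x**2 * x**(11/4) / (x**3 + 2)**2)

x, h = 2.0, 1e-6
central_diff = (f(x + h) - f(x - h)) / (2 * h)  # numeric derivative
assert math.isclose(fprime(x), central_diff, rel_tol=1e-6)
```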
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/225438",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 2,
"answer_id": 1
} |
Normal subgroups of the Special Linear Group What are some normal subgroups of SL$(2, \mathbb{R})$?
I tried to check SO$(2, \mathbb{R})$, UT$(2, \mathbb{R})$, linear algebraic group and some scalar and diagonal matrices, but still couldn't come up with any. So can anyone give me an idea to continue on, please?
| ${\rm{SL}}_2(\mathbb{R})$ is a simple Lie group, so it has no proper nontrivial connected normal subgroups.
Its only proper nontrivial normal subgroup is the center $\{I,-I\}$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/225510",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 1,
"answer_id": 0
} |
Almost A Vector Bundle I'm trying to get some intuition for vector bundles. Does anyone have good examples of constructions which are not vector bundles for some nontrivial reason. Ideally I want to test myself by seeing some difficult/pathological spaces where my naive intuition fails me!
Apologies if this isn't a particularly well-defined question - hopefully it's clear enough to solicit some useful responses!
| Fix $ B = (-1,1) $ to be the base space, and to each point $ b $ of $ B $, attach the vector-space fiber $ \mathcal{F}_{b} \stackrel{\text{def}}{=} \{ b \} \times \mathbb{R} $. We thus obtain a trivial $ 1 $-dimensional vector bundle over $ B $, namely $ B \times \mathbb{R} $. Next, define a fiber-preserving vector-bundle map $ \phi: B \times \mathbb{R} \rightarrow B \times \mathbb{R} $ as follows:
$$
\forall (b,r) \in B \times \mathbb{R}: \quad \phi(b,r) \stackrel{\text{def}}{=} (b,br).
$$
We now consider the kernel $ \ker(\phi) $ of $ \phi $. For each $ b \in B $, let $ \phi_{b}: \mathcal{F}_{b} \rightarrow \mathcal{F}_{b} $ denote the restriction of $ \phi $ to the fiber $ \mathcal{F}_{b} $. Then $ \ker(\phi_{b}) $ is $ 0 $-dimensional for all $ b \in (-1,1) \setminus \{ 0 \} $ but is $ 1 $-dimensional for $ b = 0 $. Hence, $ \ker(\phi) $ does not have a local trivialization at $ b = 0 $, which means that it is not a vector bundle.
In general, if $ f: \xi \rightarrow \eta $ is a map between vector bundles $ \xi $ and $ \eta $, then $ \ker(f) $ is a sub-bundle of $ \xi $ if and only if the dimensions of the fibers of $ \ker(f) $ are locally constant. It is also true that $ \text{im}(f) $ is a sub-bundle of $ \eta $ if and only if the dimensions of the fibers of $ \text{im}(f) $ are locally constant.
The moral of the story is that although something may look like a vector bundle by virtue of having a vector space attached to each point of the base space, it may fail to be a vector bundle in the end because the local trivialization property is not satisfied at some point. You want the dimensions of the fibers to stay locally constant; you do not want them to jump.
Richard G. Swan has a beautiful paper entitled Vector Bundles and Projective Modules (Transactions of the A.M.S., Vol. 105, No. 2, Nov. 1962) that contains results that might be of interest to you.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/225551",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11",
"answer_count": 2,
"answer_id": 0
} |
for $\nu$ a probability measure on $(\mathbb{R}, \mathcal{B}(\mathbb{R}))$ the set $\{x\in \mathbb{R} : \nu(\{x\}) > 0\}$ is at most countable Given a probability measure $\nu$ on $(\mathbb{R}, \mathcal{B}(\mathbb{R}))$, how do I show that the set (call it $S$) of all $x\in \mathbb{R}$ where $\nu(\{x\})>0$ holds is at most countable?
I thought about utilizing countable additivity of measures and the fact that we have $\nu(A) \le 1$ for all countable subsets $A\subset S$. How do I conclude rigorously?
| Given $n\in\mathbb N$, consider the set
$$A_n=\{x\in\mathbb R:\nu(\{x\})\geq\tfrac{1}{n}\}$$
It must be finite; otherwise, since $\nu$ is additive, $A_n$ would have measure at least $\sum \frac1n = \infty$, contradicting $\nu(\mathbb{R})=1$. Thus, $A=\cup_{n\in\mathbb N}A_n$ is countable as a countable union of finite sets, but it is clear that
$$A=\{x\in\mathbb R:\nu(\{x\})>0\}$$
so you are done.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/225602",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
} |
Are these two predicate statements equivalent or not? $\exists x \forall y P(x,y) \equiv \forall y \exists x P(x,y)$
I was told they were not, but I don't see how it can be true.
| Take the more concrete statements: $\forall {n \in \mathbb{N}}\ \exists {m \in \mathbb{N}}: m > n$ versus $\exists {m \in \mathbb{N}}\ \forall {n \in \mathbb{N}}: m > n$. The first statement reads: for every natural number there is a bigger one (true). The second statement reads: some natural number is bigger than every natural number (false). Since one is true and the other false, they cannot be equivalent.
Quantifiers of different type don't commute. However, quantifiers of the same type do: $\exists x \ \exists y \equiv \exists y \ \exists x$ and $\forall x \ \forall y \equiv \forall y \ \forall x$.
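On a finite domain one can test both statements exhaustively; here is a tiny Python sketch (my own example) with $P(x,y) := (x = y)$ on $\{0,1\}$, where the two forms disagree:

```python
# For every y there is an x equal to it (take x = y): True.
# There is a single x equal to every y: False on a 2-element domain.
D = range(2)
P = lambda x, y: x == y

forall_exists = all(any(P(x, y) for x in D) for y in D)  # ∀y ∃x P(x,y)
exists_forall = any(all(P(x, y) for y in D) for x in D)  # ∃x ∀y P(x,y)

assert forall_exists is True
assert exists_forall is False
```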
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/225654",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 3,
"answer_id": 1
} |
Hatcher: Barycentric Subdivision At the bottom of page 122 of Hatcher, he defines a map $S:C_{n}(X)\rightarrow C_{n}(X)$ by $S\sigma=\sigma_{\#}S\Delta^{n}$. What is the $S$ on the right hand side and how does it act on the simplex $\Delta$? I'm having trouble deconstructing the notation here.
| It's the $S$ he defines in (2) of his proof: barycentric subdivision of linear chains.
I think you will benefit from reading the following supplementary notes.
update: here's the idea: first he deals with linear chains so he knows how to define the map $S: LC_n(\Delta^n) \to LC_n(\Delta^n)$, i.e. he knows how to find the barycentric subdivision of the standard $n$-simplex (thought of as the identity map onto itself). Then he uses the map $\sigma_\#$ composed with the subdivided standard $n$-simplex to define a barycentric subdivision of singular $n$-chains.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/225738",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
The dual of a finitely generated module over a noetherian integral domain is reflexive. As posted by user26857 in this question the dual of a finitely generated module over a noetherian integral domain is reflexive. Could you tell me how to prove it?
| I think I have a proof.
I will use this theorem:
Let $A$ be a Noetherian ring and $M$ a finitely generated $A$-module. Then
$M^*$ is reflexive if and only if $M^*_P$ is reflexive for every $P$ such that $\mathrm{depth}\;A_P=0$; in short, a dual is reflexive if and only if it is reflexive in depth $0$.
If the ring is a domain then the only prime of depth $0$ is the zero ideal. If $M$ is finitely generated, then $M^*_{(0)}$ is a finite dimensional vector space over the ring of fractions and so it is reflexive.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/225790",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 1,
"answer_id": 0
} |
Find the smallest positive integer satisfying 3 simultaneous congruences $x \equiv 2 \pmod 3$
$2x \equiv 4 \pmod 7$
$x \equiv 9 \pmod {11}$
What is troubling me is the $2x$.
I know of an algorithm (not sure what it's called) that would work if all three equations were $x \equiv$ by using euclidean algorithm back substitution three times but since this one has a $2x$ it won't work.
Would it be possible to convert it to $x\equiv$ somehow? Could you divide through by $2$ to get
$x \equiv 2 \pmod{\frac{7}{2}}$
Though even if you could I suspect this wouldn't really help..
What would be the best way to do this?
| To supplement the other answers, if your equation was
$$ 2x \equiv 4 \pmod 8 $$
then the idea you guessed is actually right: this equation is equivalent to
$$ x \equiv 2 \pmod 4 $$
More generally, the equations
$$ a \equiv b \pmod c $$
and
$$ ad \equiv bd \pmod {cd}$$
are equivalent. Thus, if both sides of the equation and the modulus share a common factor, you can cancel it out without losing any solutions or introducing spurious ones. However, this only works with a common factor.
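For a concrete check (a sketch, not part of the original answer): since $\gcd(2,7)=1$, multiplying $2x \equiv 4 \pmod 7$ by $4$, the inverse of $2$ modulo $7$, gives the equivalent congruence $x \equiv 2 \pmod 7$; a brute-force search over one full period then finds the smallest solution:

```python
from functools import reduce

# System from the question: x = 2 (mod 3), 2x = 4 (mod 7), x = 9 (mod 11),
# with the middle congruence rewritten as x = 2 (mod 7).
congruences = [(2, 3), (2, 7), (9, 11)]

def smallest_solution(congruences):
    # brute force over one full period; fine for small moduli
    period = reduce(lambda a, b: a * b, (m for _, m in congruences))
    return next(x for x in range(1, period + 1)
                if all(x % m == r % m for r, m in congruences))

x = smallest_solution(congruences)
print(x)  # 86
```

Note that $86$ also satisfies the middle congruence in its original form: $2 \cdot 86 = 172 \equiv 4 \pmod 7$.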
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/225864",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 5,
"answer_id": 4
} |
An example for a calculation where imaginary numbers are used but don't occur in the question or the solution. In a presentation I will have to give an account of Hilbert's concept of real and ideal mathematics. Hilbert wrote in his treatise "Über das Unendliche" (page 14, second paragraph. Here is an English version - look for the paragraph starting with "Let us remember that we are mathematicians") that this concept can be compared with (some of) the use(s) of imaginary numbers.
He thought probably of a calculation where the setting and the final solution has nothing to do with imaginary numbers but that there is an easy proof using imaginary numbers.
I remember once seeing such an example but cannot find one, so:
Does anyone know about a good an easily explicable example of this phenomenon?
("Easily" means that engineers and biologists can also understand it well.)
| The canonical example seems to be Cardano's solution of the cubic equation, which requires non-real numbers in some cases even when all the roots are real. The mathematics is not as hard as you might think; and as an added benefit, there is a juicy tale to go with it – as the solution was really due to Scipione del Ferro and Tartaglia.
Here is a writeup, based on some notes I made a year and a half ago:
First, the general cubic equation $x^3+ax^2+bx+c=0$
can be transformed into the form
$$
x^3-3px+2q=0
$$
by a simple substitution of $x-a/3$ for $x$.
We may as well assume $pq\ne0$, since otherwise the equation is
trivial to solve.
So we substitute in $$x=u+v$$ and get the equation into the form
$$
u^3+v^3+3(uv-p)(u+v)+q=0.
$$
Now we add the extra equation
$$
uv=p
$$
so that $u^3+v^3+q=0$. Substituting $v=p/u$ in this equation, then
multiplying by $u^3$, we arrive at
$$
u^6+2qu^3+p^3=0,
$$
which is a quadratic equation in $u^3$.
Noticing that interchanging the two roots of this equation corresponds
to interchanging $u$ and $v$,
which does not change $x$,
we pick one of the two solutions, and get:
$$
u^3=-q+\sqrt{q^2-p^3},
$$
with the resulting solution
$$
x=u+p/u.
$$
The three different cube roots $u$ will of course yield the three
solutions $x$ of the original equation.
Real coefficients
In the case when $u^3$ is not real, that is when $q^2<p^3$,
we could write instead
$$
u^3=-q+i\sqrt{p^3-q^2},
$$
and we note that in this case $\lvert u\rvert=\sqrt{p}$,
so that in fact $x=u+\bar u=2\operatorname{Re} u$.
In other words, all the roots are real.
In fact the two extrema of $x^3-3px+2q$ are at $x=\pm\sqrt{p}$,
and the values of the polynomial at these two points are
$2(q\mp p^{3/2})$.
The product of these two values is $4(q^2-p^3)<0$,
which is another way to see that there are indeed three real zeros.
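The need for complex numbers can be observed numerically (a sketch; the example polynomial $x^3-7x+6=(x-1)(x-2)(x+3)$ is my own choice, not from the text above). Its three roots are real, yet $q^2<p^3$, so $u^3$ is genuinely non-real:

```python
import cmath
import math

# x^3 - 3px + 2q = x^3 - 7x + 6: p = 7/3, q = 3, with q^2 = 9 < p^3 = 343/27.
p, q = 7 / 3, 3.0
u3 = complex(-q, math.sqrt(p ** 3 - q ** 2))   # u^3 = -q + i*sqrt(p^3 - q^2)
u = u3 ** (1 / 3)                              # one (principal) cube root
omega = cmath.exp(2j * cmath.pi / 3)           # primitive cube root of unity

# |u| = sqrt(p), so x = u + p/u = u + conj(u) = 2 Re(u); the other cube
# roots u * omega^k give the remaining solutions.
roots = sorted(2 * (u * omega ** k).real for k in range(3))
print(roots)  # approximately [-3.0, 1.0, 2.0]
```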
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/225922",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "20",
"answer_count": 16,
"answer_id": 9
} |
Adding sine waves of different phase, sin(π/2) + sin(3π/2)? Adding sine waves of different phase, what is $\sin(\pi/2) + \sin(3\pi/2)$?
Please could someone explain this.
Thanks.
| Here's the plot for $\sin(L)$ where $L$ goes from $(0, \pi/2)$
Here's the plot for $\sin(L) + \sin(3L)$ where $L$ goes from $(0, \pi/2)$
I hope this distinction is useful to you.
This was done in Mathematica.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/225973",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Showing that $1/x$ is NOT Lebesgue Integrable on $(0,1]$ I aim to show that $\int_{(0,1]} 1/x = \infty$. My original idea was to find a sequence of simple functions $\{ \phi_n \}$ s.t $\lim\limits_{n \rightarrow \infty}\int \phi_n = \infty$. Here is a failed attempt at finding such a sequence of $\phi_n$:
(1) Let $A_k = \{x \in (0,1] : 1/x \ge k \}$ for $k \in \mathbb{N}$.
(2) Let $\phi_n = n \cdot \chi_{A_n}$
(3) $\int \phi_n = n \cdot m(A_n) = n \cdot 1/n = 1$
Any advice from here on this approach or another?
| I think this may be the same as what Davide Giraudo wrote, but this way of saying it seems simpler. Let $\lfloor w\rfloor$ be the greatest integer less than or equal to $w$. Then the function
$$x\mapsto \begin{cases} \lfloor 1/x\rfloor & \text{if } \lfloor 1/x\rfloor\le n \\[8pt]
n & \text{otherwise} \end{cases}$$ is simple. It is $\le 1/x$ and its integral over $(0,1]$ approaches $\infty$ as $n\to\infty$.
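To make the divergence concrete (a numerical sketch of this construction): the integral of the simple function $x \mapsto \min(\lfloor 1/x \rfloor, n)$ over $(0,1]$ can be computed exactly, and it equals the harmonic number $H_n$:

```python
from fractions import Fraction

def simple_integral(n):
    # floor(1/x) equals k exactly on (1/(k+1), 1/k], and the function is
    # capped at n on (0, 1/n], which contributes n * (1/n) = 1.
    total = Fraction(1)
    for k in range(1, n):
        total += k * (Fraction(1, k) - Fraction(1, k + 1))
    return total

for n in (10, 100, 1000):
    print(n, float(simple_integral(n)))
# The values grow like H_n ~ log n, so the integrals of these simple
# functions lying below 1/x are unbounded.
```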
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/226114",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "13",
"answer_count": 2,
"answer_id": 0
} |
Using induction to prove $3$ divides $\left \lfloor\left(\frac {7+\sqrt {37}}{2}\right)^n \right\rfloor$ How can I use induction to prove that $$\left \lfloor\left(\cfrac {7+\sqrt {37}}{2}\right)^n \right\rfloor$$ is divisible by $3$ for every natural number $n$?
| Consider the recurrence given by
$$x_{n+1} = 7x_n - 3 x_{n-1}$$ where $x_0 = 2$, $x_1 = 7$.
Note that $$x_{n+1} \equiv (7x_n - 3 x_{n-1}) \pmod{3} \equiv (x_n + 3(2x_n - x_{n-1})) \pmod{3} \equiv x_n \pmod{3}$$
Since $x_1 \equiv 1 \pmod{3}$, we have that $x_n \equiv 1 \pmod{3}$. The solution to this recurrence is given by
$$x_n = \left( \dfrac{7+\sqrt{37}}{2} \right)^n + \left( \dfrac{7-\sqrt{37}}{2}\right)^n \equiv 1 \pmod{3}$$
Further, $0 < \dfrac{7-\sqrt{37}}{2} < 1$. This means $$3M < \left( \dfrac{7+\sqrt{37}}{2} \right)^n < 3M+1$$ where $M \in \mathbb{Z}$. Hence, we have that
$$3 \text{ divides }\left \lfloor \left (\dfrac{7+\sqrt{37}}{2}\right)^n \right \rfloor$$
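A quick computational check of this argument (a sketch: the recurrence is evaluated in exact integer arithmetic, the floor via high-precision decimals):

```python
import decimal

# x_{n+1} = 7 x_n - 3 x_{n-1} with x_0 = 2, x_1 = 7, and
# floor(((7 + sqrt(37))/2)^n) = x_n - 1, which should be divisible by 3.
decimal.getcontext().prec = 80
alpha = (7 + decimal.Decimal(37).sqrt()) / 2

x_prev, x_cur = 2, 7
for n in range(1, 30):
    floor_power = int(alpha ** n)      # int() truncates, i.e. floor for positives
    assert floor_power == x_cur - 1    # since alpha^n = x_n - beta^n, 0 < beta^n < 1
    assert floor_power % 3 == 0
    x_prev, x_cur = x_cur, 7 * x_cur - 3 * x_prev
print("floor(alpha^n) is divisible by 3 for n = 1..29")
```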
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/226177",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 4,
"answer_id": 1
} |
Integration with infinity and exponential How is
$$\lim_{T\to\infty}\frac{1}T\int_{-T/2}^{T/2}e^{-2at}dt=\infty\;?$$
however my answer comes zero because putting limit in the expression, we get:
$$\frac1\infty\left(-\frac1{2a}\right) [e^{-\infty} - e^\infty]$$ which results in zero?
I think I am doing wrong. So how can I get the answer equal to $\infty$
Regards
| $I=\int_{-T/2}^{T/2}e^{-2at}dt=\left.\frac{-1}{2a}e^{-2at}\right|_{-T/2}^{T/2}=\frac{e^{aT}-e^{-aT}}{2a}$
Whether $a$ is positive or negative, one of the exponents tends to zero at our limit while the other blows up, and the whole expression is positive (it is $\sinh(aT)/a$), so we can rewrite it as:
$\lim_{T\rightarrow\infty}\frac{I}{T}=\lim_{T\rightarrow\infty}\frac{e^{aT}-e^{-aT}}{2aT}=\lim_{T\rightarrow\infty}\frac{e^{\left|a\right|T}}{2\left|a\right|T}$
But because exponential functions grow much faster than $T$, this limit is always infinity; thus finally we have, for every $a\neq 0$:
$\lim_{T\rightarrow\infty}\frac{I}{T}=+\infty$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/226300",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 1
} |
The positive integer solutions for $2^a+3^b=5^c$ What are the positive integer solutions to the equation
$$2^a + 3^b = 5^c$$
Of course $(1,\space 1, \space 1)$ is a solution.
| There are three solutions which can all be found by elementary means.
If $b$ is odd
$$2^a+3\equiv 1 \bmod 4$$
Therefore $a=1$ and $b$ is odd.
If $b>1$, then $2\equiv 5^c \bmod 9$ and $c\equiv 5 \bmod 6$
Therefore $2+3^b\equiv 5^c\equiv3 \bmod 7$ and $b\equiv 0 \bmod 6$, a contradiction.
The only solution is $(a,b,c)=(1,1,1)$.
If $b$ is even, $c$ is odd
$$2^a+1\equiv 5 \bmod 8$$
Therefore $a=2$.
If $b\ge 2$, then $4\equiv 5^c \bmod 9$ and $c\equiv 4 \bmod 6$, a contradiction.
The only solution is $(a,b,c)=(2,0,1)$.
If $b$ and $c$ are even
Let $b=2B$ and $c=2C$. Then
$$2^a=5^{2C}-3^{2B}=(5^C-3^B)(5^C+3^B)$$
Therefore $5^C-3^B$ is also a (smaller) power of 2.
A check of $(B,C)=(0,1)$ and $(1,1)$ yields the third solution $(a,b,c)=(4,2,2)$.
$(B,C)=(2,2)$ does not yield a further solution and we are finished.
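As a computational sanity check on the case analysis (restricted to positive exponents, as in the question; the search bounds are arbitrary):

```python
# Exhaustive search in a small box; a sanity check, not a proof.
solutions = [(a, b, c)
             for a in range(1, 30)
             for b in range(1, 20)
             for c in range(1, 14)
             if 2 ** a + 3 ** b == 5 ** c]
print(solutions)  # [(1, 1, 1), (4, 2, 2)]
```

(The solution $(2,0,1)$ from the second case has $b=0$ and so is excluded when all exponents must be positive.)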
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/226415",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 5,
"answer_id": 4
} |
Why $0!$ is equal to $1$? Many counting formulas involving factorials can make sense for the case $n=0$ if we define $0!=1$; e.g., Catalan numbers and the number of trees with a given number of vertices. Now here is my question:
If $A$ is an associative and commutative ring, then we can define an
unary operation on the set of all the finite subsets of our ring,
denoted by $+ \left(A\right) $ and $\times \left(A\right)$. While it
is intuitive to define $+ \left( \emptyset \right) =0$, why should
the product of zero number of elements be $1$? Does the fact that $0! =1$ have anything to do with 1 being the multiplication unity of integers?
| As pointed out in one of the answers to this math.SX question, you can get the Gamma function as an extension of factorials, and then this falls out from it (though this isn't a very combinatorial answer).
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/226449",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "14",
"answer_count": 6,
"answer_id": 1
} |
Signs of rates of change when filling a spherical tank. Water is flowing into a large spherical tank at a constant rate. Let $V\left(t\right)$ be the volume of water in the tank at time $t$, and $h\left(t\right)$ of the water level at time $t$.
Is $\frac{dV}{dt}$ positive, negative, or zero when the tank is one quarter full?
Is $\frac{dh}{dt}$ positive, negative, or zero when the tank is one quarter full?
My answer
I believe that the rate of change of the volume is increasing till 1/2 full, then decreasing.
I believe that the rate of change of the height is decreasing till 1/2 full, then increasing.
This is because the widest part of the sphere is when h = r (the radius of the sphere), and if you think of it as many slices of circles upon each other, then the area change would follow the pattern of each slice.
Am I correct?
| Uncookedfalcon is correct. You're adding water, which means that both the volume and water depth are increasing (that is, $\frac{dV}{dt}$ and $\frac{dh}{dt}$ are positive) until it's full. In fact, $\frac{dV}{dt}$ is constant until it's full.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/226550",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
is a non-falling rank of smooth maps an open condition? If $f \colon M \to N$ is a smooth map of smooth manifolds, for any point $p \in M$, is there an open neighbourhood $V$ of $p$ such that $\forall q \in V \colon \mathrm{rnk}_q (f) \geq \mathrm{rnk}_p (f)$?
| Yes. Note that if $f$ has rank $k$ at $p$, then $Df(p)$ has rank $k$. Therefore in coordinates, there is a non-vanishing $k \times k$-minor of $Df(p)$. As having a non-vanishing determinant is an open condition, the same minor will not vanish in a whole neighbourhood $V$ of $p$, giving $\operatorname{rank}_q f \ge k$ for all $q \in V$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/226611",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Describing the intersection of two planes Consider the plane with general equation 3x+y+z=1 and the plane with vector equation (x, y, z)=s(1, -1, -2) + t(1, -2 -1) where s and t are real numbers. Describe the intersection of these two planes.
I started by substituting the parametric equations into the general equation and got 0=9. Does that imply the planes are parallel and hence do not intersect?
| As a test to see whether the planes are parallel, you can compare the normal vectors $n_1$ and $n_2$ of the planes.
If $\left|\dfrac{n_1\cdot n_2}{\left|n_1\right|\left|n_2\right|}\right|=1$, the planes are parallel.
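Applied to the specific planes in the question (a small sketch; the cross product of the two direction vectors gives the second plane's normal, after which this is the same parallel-normal test):

```python
def cross(u, v):
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

n1 = (3, 1, 1)                        # normal of 3x + y + z = 1
n2 = cross((1, -1, -2), (1, -2, -1))  # normal of the parametrized plane
print(n2)  # (-3, -1, -1): an exact scalar multiple of n1, so the planes are parallel
# The parametrized plane passes through the origin, but 3*0 + 0 + 0 = 0 != 1,
# so the planes are parallel and distinct: empty intersection.
```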
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/226700",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Example where $\int |f(x)| dx$ is infinite and $\int |f(x)|^2 dx$ is finite I read in a book that the condition $\int |f(x)|^2 dx <\infty$ is less restrictive than $\int |f(x)| dx <\infty$. That means whenever $\int |f(x)| dx$ is finite, $\int |f(x)|^2 dx$ is also finite, right?
My understanding is that $|f(x)|$ may have a thick tail to make the integral blow up, but $|f(x)|^2$ may decay quickly enough to have a finite integral. Can someone give me an example that $\int |f(x)| dx=\infty$ but $\int |f(x)|^2 dx <\infty$. Suppose $f(x)$ is an absolutely continous function and bounded on $(-\infty, \infty)$.
| The most noticeable one I think is the sinc function $$\mathrm{sinc}(x)=\frac{\sin(\pi x)}{\pi x}$$
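A numerical illustration of the two tail behaviours (a rough sketch using midpoint sums; the truncation points $10$ and $100$ are arbitrary):

```python
import math

def sinc(x):
    # sin(pi x)/(pi x), with the removable singularity filled in at 0
    return 1.0 if x == 0 else math.sin(math.pi * x) / (math.pi * x)

def midpoint_integral(f, a, b, steps):
    dx = (b - a) / steps
    return sum(f(a + (i + 0.5) * dx) for i in range(steps)) * dx

l1_10  = midpoint_integral(lambda x: abs(sinc(x)), 0, 10, 20_000)
l1_100 = midpoint_integral(lambda x: abs(sinc(x)), 0, 100, 200_000)
l2_10  = midpoint_integral(lambda x: sinc(x) ** 2, 0, 10, 20_000)
l2_100 = midpoint_integral(lambda x: sinc(x) ** 2, 0, 100, 200_000)

print(l1_100 - l1_10)  # keeps growing, roughly like (2/pi^2) log(b)
print(l2_100 - l2_10)  # tiny: the square-integral converges
```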
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/226768",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 7,
"answer_id": 5
} |
Which letter is not homeomorphic to the letter $C$? Which letter is not homeomorphic to the letter $C$?
I think the letters $O$ and $o$ are not homeomorphic to the letter $C$. Is that correct?
Is there any other letter?
| There are many others, $E$ or $Q$ for example. The most basic method I know is to assume a homeomorphism exists; it then restricts to a homeomorphism of the subspaces obtained by taking out one (or more) points. The number of connected components of this subspace is an invariant.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/226831",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 4,
"answer_id": 1
} |
How do I prove that $S^n$ is homeomorphic to $S^m \Rightarrow m=n$? This is what I have so far:
Assume $S^n$ is homeomorphic to $S^m$. Also, assume $m≠n$. So, let $m>n$.
From here I am not sure what is implied. Of course in this problem $S^k$ is defined as:
$S^k=\lbrace (x_0,x_1,⋯,x_{k+1}):x_0^2+x_1^2+⋯+x_{k+1}^2=1 \rbrace$ with subspace topology.
| Hint: look at the topic Invariance of Domain
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/226878",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 2,
"answer_id": 0
} |
Trace of $A$ if $A =A^{-1}$ Suppose $I\neq A\neq -I$, where $I$ is the identity matrix and $A$ is a real $2\times2$ matrix. If $A=A^{-1}$ then what is the trace of $A$? I was thinking of writing $A$ as $\{a,b;c; d\}$ then using $A=A^{-1}$ to equate the positions but the equations I get suggest there is an easier way.
| From $A=A^{-1}$, i.e. $A^2=I$, the minimal polynomial of $A$ divides $x^2-1=(x-1)(x+1)$, so $A$ is diagonalizable and all its eigenvalues are $\pm 1$; hence the trace can only be $0$ or $\pm 2$. But trace $2$ forces both eigenvalues to equal $1$, which by diagonalizability gives $A=I$, and likewise trace $-2$ gives $A=-I$. Since $A\neq\pm I$ by assumption, the trace must be $0$; this value is realized, e.g. by $A=\begin{pmatrix}0&1\\1&0\end{pmatrix}$.
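A brute-force check over small integer matrices (a sketch of mine; entries range over $[-3,3]$): among $2\times2$ involutions other than $\pm I$, only trace $0$ occurs:

```python
from itertools import product

def matmul(A, B):
    return tuple(tuple(sum(A[i][k] * B[k][j] for k in range(2))
                       for j in range(2)) for i in range(2))

I = ((1, 0), (0, 1))
traces = set()
for a, b, c, d in product(range(-3, 4), repeat=4):
    A = ((a, b), (c, d))
    if matmul(A, A) == I and A != I and A != ((-1, 0), (0, -1)):
        traces.add(a + d)
print(traces)  # {0}
```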
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/226962",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 6,
"answer_id": 3
} |
Newton-Raphson method and solvability by radicals If a polynomial is not solvable by radicals, then does the Newton-Raphson method work slower or faster? I don't know how to approach this.
| The speed of Newton-Raphson has [EDIT: almost] nothing to do with solvability by radicals. What it does have to do with is $f''(r)/f'(r)$ where $r$ is the root: i.e. if $r$ is a root of $f$ such that $f'(r) \ne 0$ and
Newton-Raphson starting at $x_0$ converges to $r$, then
$$\lim_{n \to \infty} \dfrac{x_{n+1} - r}{(x_n - r)^2} = \frac{f''(r)}{2 f'(r)}$$
If, say, $f(x) = x^n + \sum_{j=0}^{n-1} c_j x^j$ is a polynomial of degree $n\ge 5$, $f''(r)/f'(r)$ is a continuous function of the coefficients $(c_0, \ldots, c_n)$ in a region that avoids $f'(r) = 0$. But there is a dense set of $(c_0,\ldots,c_n)$ for which $f$ is solvable by radicals (e.g. where the roots are all of the form $a+bi$ with $a$ and $b$ rational),
and a dense set where it is not (e.g. where $c_0,\ldots,c_n$ are algebraically independent).
EDIT: On the other hand, if the convergence of Newton-Raphson is slow (linear rather than quadratic) and $f$ is a polynomial of degree $5$, then $f$ is solvable by radicals (over the field generated by the coefficients). For in this case $f'(r) = 0$, and so $r$ is a root of the gcd of $f$ and $f'$, which has degree $<5$.
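The quadratic-convergence constant is easy to observe numerically (a sketch with an example of my own, $f(x)=x^2-2$, root $r=\sqrt2$; with the ratio written as $(x_{n+1}-r)/(x_n-r)^2$ its limit works out to $f''(r)/(2f'(r)) = 1/(2\sqrt2)$ here):

```python
import math

f = lambda x: x * x - 2
df = lambda x: 2 * x
r = math.sqrt(2)

x = 2.0
errors = [x - r]
for _ in range(4):
    x -= f(x) / df(x)       # Newton step
    errors.append(x - r)

ratios = [errors[n + 1] / errors[n] ** 2 for n in range(4)]
print(ratios[-1], 1 / (2 * math.sqrt(2)))  # both approximately 0.35355
```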
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/227013",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
If $b$ is a root of $x^n -a$, what's the minimal polynomial of $b^m$? Let $x^n -a \in F[x]$ be an irreducible polynomial over $F$, and let $b \in K$ be its root, where $K$ is an extension field of F. If $m$ is a positive integer such that $m|n$, find the degree of the minimal polynomial of $b^m$ over $F$.
My solution:
$[F(b^m):F]=[F(b^m):F(b)][F(b):F] \Rightarrow n\le [F(b^m):F]$
and
$F(b^m)\subset F(b) \Rightarrow [F(b^m):F]\le [F(b):F]=n$
Then
$[F(b^m):F]=n$
Comments
I didn't use the fact that $m|n$, where am I wrong? I need help how to solve this problem.
Thanks
| Hint Let $km = n$ then if $b^{mk} - a = 0$ then $(b^m)^k - a = 0$ so maybe $x^k-a$ is the minimal polynomial?
Hint' Show that if $x^k-a$ has a factor then so does $x^{mk} - a$.
Given a field extension $K/L$ then $K$ is a vector space with coefficients in $L$ of dimension $\left[K:L\right]$ which is called the degree of the field extension.
The vector space $F(b^m)$ is spanned by $F$-linear combinations of the basis vectors $\left\{1,b^m,b^{2m},\ldots,b^{(k-1)m}\right\}$ so $\left[F(b^m):F\right] = k$.
Furthermore $\left[F(b):F\right] = n$ and $\left[F(b):F(b^m)\right] = m$ (prove these; for the second one use that $b$ is a root of $z^m - b^m \in F(b^m)[z]$ [why can we not just use $z-b$?]), so by $mk = n$ we have the identity $\left[F(b):F(b^m)\right]\left[F(b^m):F\right] = \left[F(b):F\right]$.
Why is $F(b^m)$ spanned by $F$-linear combinations of $\{1,b^m,b^{2m},…,b^{(k−1)m}\}$?
$F(b^m)$ is the field generated by all well-defined sums, differences, products and fractions of the elements of $F \cup \{b^m\}$. That means it includes $b^m, (b^m)^2, (b^m)^3, \ldots$, but since $b^m$ satisfies a polynomial, every power of $b^m$ higher than or equal to $k$ can be reduced by it to a linear combination of lower powers. Similarly $(b^m)^{-1} = a^{-1} (b^m)^{k-1}$, since $(b^m)^k = a$. Of course the sum of linear combinations is again a linear combination, so we have seen that $F$-linear combinations of $\{1,b^m,b^{2m},\ldots,b^{(k-1)m}\}$ span $F(b^m)$. The fact that it is an independent basis (i.e. cannot be made smaller) comes from the polynomial being minimal.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/227092",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Describe a group homomorphism from $U_8$ to $S_4$ I'm in an intro course to abstract algebra and we have been focusing completely on rings/the Chinese remainder theorem, and this question came up in the review and totally stumped me (we only have basic definitions of groups and subgroups and homomorphisms).
I think that $U_8$ is the group of units modulo 8, and $S_4$ is the permutation group of 4 letters. I've figured out what $S_4$ looks like by examining certain sets of permutations but don't understand homomorphisms enough to be able to name the one in question. I do know that I'm looking for something of the form $f(ab) = f(a)f(b)$, but that's about it.
I was told a hint: that the units mod 8 are the cosets which are relatively prime to 8, which I think would be $[1],[3],[5],[7]$ in mod 8, though I'm not really sure why this is the case. What I do notice is that each of these elements has order 2, which I think somehow should relate to the order of my permutations in $S_4$, but again, I'm not certain.
Any help is much appreciated, thanks.
| A unit mod $8$ is a congruence class mod $8$ which is invertible, i.e., a class $[a]$ such that there exists $[b]$ with $[a][b] = [1]$, or equivalently $ab +8k = 1$ for some integer $k$. Now any number dividing both $a$ and $8$ would also divide $ab+8k=1$, so this implies that $[a]$ being a unit implies $(a,8)=1$ (where the parentheses indicate the greatest common divisor.) On the other hand, one corollary of the Euclidean algorithm is that $(a,8)$ can always be written as a linear combination of $a$ and $8$, so in the case of relatively prime $a$ and $8$ there always exist such $b$ and $k$, and so $[a]$ is a unit.
If $f:U_8 \to S_4$ is a homomorphism, then the order of $f([a])$ always divides the order of $[a]$, so the image of $[1]$ has to be $()$ (the identity permutation), and the images of $[3]$, $[5]$, and $[7]$ have to have order $1$ or $2$. Obviously you also need that $f([3]) f([5]) = f([7])$ etc.
Now the question is what exactly you are trying to find, just one homomorphism (which is easy, there is always the trivial one mapping everything to the identity), or all of them (which is not quite as easy but doable with the information here and some trial and error.)
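A small computational sketch of the structure being used (the particular permutations at the end are one illustrative choice of mine, not the only one):

```python
from math import gcd

U8 = [a for a in range(8) if gcd(a, 8) == 1]
print(U8)  # [1, 3, 5, 7]
assert all(a * a % 8 == 1 for a in U8)  # every class is its own inverse
assert (3 * 5) % 8 == 7 and (3 * 7) % 8 == 5 and (5 * 7) % 8 == 3
# So U_8 is the Klein four-group: a homomorphism into S_4 must send 3, 5, 7
# to commuting elements of order dividing 2 with matching pairwise products,
# e.g. 3 -> (1 2), 5 -> (3 4), 7 -> (1 2)(3 4) is one nontrivial choice.
```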
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/227146",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Expectations with self-adjoint random matrix So, we have a square matrix $A=(a_{ij})_{1 \leq i,j \leq n}$ where the entries are independent random variables with the same distribution. Suppose $A = A^{*}$, where $A^{*}$ is the classical adjoint. Moreover, suppose that $E(a_{ij}) = 0$, $E(a_{ij}^{2}) < \infty$. How can I evaluate $E(\operatorname{Tr} A^{2})$?
Clearly, we have $E(Tr A) = 0$ and we can use linearity to get something about $E(Tr A^{2})$ in terms of the entries using simply the formula for $A^{2}$, but for instance I don't see where $A=A^{*}$ comes in... I suppose there's a clever way of handling it...
| Clearly
$$
\operatorname{Tr}(A^2) = \sum_{i,j} a_{i,j} a_{j,i} \stackrel{\rm symmetry}{=}\sum_{i,j} a_{i,j}^2
$$
Thus
$$
\mathbb{E}\left(\operatorname{Tr}(A^2) \right) = \sum_{i,j} \mathbb{E}(a_{i,j}^2) = n^2 \mathbb{Var}(a_{1,1})
$$
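A Monte-Carlo sketch is consistent with this (my own check: symmetric matrices with standard normal entries, so $\mathbb{E}(a_{1,1}^2)=1$ and the prediction is $n^2$):

```python
import random

random.seed(0)
n, samples = 3, 20_000

def trace_A_squared():
    # draw the upper triangle independently, mirror it to make A symmetric
    A = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i, n):
            A[i][j] = A[j][i] = random.gauss(0, 1)
    # Tr(A^2) = sum_{i,j} a_ij a_ji = sum_{i,j} a_ij^2 by symmetry
    return sum(A[i][j] ** 2 for i in range(n) for j in range(n))

estimate = sum(trace_A_squared() for _ in range(samples)) / samples
print(estimate)  # close to n^2 = 9
```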
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/227279",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Is there a Math symbol that means "associated" I am looking for a Math symbol that means "associated" and I don't mean "associated" as something as complicated as isomorphism or anything super fancy.
I am looking for a symbol that means something like "$\triangle ABC$ [insert symbol] $A_{1}$" (as in triangle $ABC$ "associated" with area $A_{1}$) Or want to say something like "The eigenvector associated with the eigenvalue"
You get the idea.
| In general, you can use a little chain-link symbol, since the meaning behind "associated" is "connection" where you are not specifying the type of connection or how they are connected. That will reduce your horizontal space and make sense to people. $\sim$ is the NOT symbol in logic, so never use that! Don't use the squiggle "if, and only if" symbol either, because that insinuates that there is some kind of bijection, and that is a specific type of connection. You only care about whether there is "some kind of connection/association" between two different sets/elements/statements/primitive statements/etc. You should treat it as if it were a logical connective, so again, don't use NOT because that would confuse logicians and pure math people most definitely. The squiggle double arrow would be even more confusing, like saying a "loose bijection", which is quite the fancy abstraction and not what you're aiming for... just a simple link between two "things" should be sufficient for what you want.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/227339",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 2
} |
Coercion in MAGMA In MAGMA, if you are dealing with an element $x\in H$ for some group $H$, and you know that $H<G$ for some group $G$, is there an easy way to coerce $x$ into $G$ (e.g. if $H=\text{Alt}(n)$ and $G=\text{Alt}(n+k)$ for some $k\geq 1$)? The natural coercion method $G!x$ does not seem to work.
| G!CycleDecomposition(g);
will work
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/227424",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Do an axis-aligned rectangle and a circle overlap? Given a circle of radius $r$ located at $(x_c, y_c)$ and a rectangle defined by the points $(x_l, y_l), (x_l+w, y_l+h)$, is there a way to determine whether the two overlap? The square's edges are parallel to the $x$ and $y$ axes.
I am thinking that overlap will occur if one of the rectangle's corners is contained in the circle or one of the circle's circumference points at ($n\frac{\pi}{2}, n=\{0,1,2,3\}$ radians) is contained in the rectangle. Is this true?
EDIT: One answer has pointed out a case not covered by the above which is resolved by also checking whether the center of the circle is contained.
Is there a method which doesn't involve checking all points on the circle's circumference?
| No. Imagine a square and enlarge its incircle a bit. They will overlap, but satisfy neither of your requirements.
Unfortunately, you have to check all points of the circle. Or, rather, solve the arising inequalities (I assume you are talking about filled shapes):
$$\begin{align} (x-x_c)^2+(y-y_c)^2 & \le r^2 \\
x\in [x_l,x_l+w]\ &\quad y\in [y_l,y_l+h]
\end{align}$$
Or.. Perhaps it is enough to add to your requirements that the center of the circle is contained in the rectangle.
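For completeness, there is a cheaper standard test that avoids checking boundary points one by one (a sketch, not from the answer above): clamp the circle's center to the rectangle, which produces the rectangle point nearest the center, and compare that distance with $r$:

```python
def overlap(xc, yc, r, xl, yl, w, h):
    # nearest point of the (filled) rectangle to the circle's center
    nearest_x = min(max(xc, xl), xl + w)
    nearest_y = min(max(yc, yl), yl + h)
    return (xc - nearest_x) ** 2 + (yc - nearest_y) ** 2 <= r ** 2

print(overlap(2, 1, 0.5, 0, 0, 4, 3))    # True: center lies inside the rectangle
print(overlap(6, 1.5, 2, 0, 0, 4, 3))    # True: circle just touches the edge x = 4
print(overlap(6, 1.5, 1.9, 0, 0, 4, 3))  # False: too far away
```

This also handles the enlarged-incircle case, since the clamped point is the center itself whenever the center is inside the rectangle.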
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/227494",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
} |
How to solve second order PDE with first order terms. I know we can transform a second order PDE into three standard forms. But how to deal with the remaining first order terms?
Particularly, how to solve the following PDE:
$$
u_{xy}+au_x+bu_y+cu+dx+ey+f=0
$$
update:
$a,b,c,d,e,f$ are all constant.
| Case $1$: $a=b=c=0$
Then $u_{xy}+dx+ey+f=0$
$u_{xy}=-dx-ey-f$
$u_x=\int(-dx-ey-f)~dy$
$u_x=C(x)-dxy-\dfrac{ey^2}{2}-fy$
$u=\int\left(C(x)-dxy-\dfrac{ey^2}{2}-fy\right)dx$
$u=C_1(x)+C_2(y)-\dfrac{dx^2y}{2}-\dfrac{exy^2}{2}-fxy$
Case $2$: $a\neq0$ and $b=c=0$
Then $u_{xy}+au_x+dx+ey+f=0$
Let $u_x=v$ ,
Then $u_{xy}=v_y$
$\therefore v_y+av+dx+ey+f=0$
$v_y+av=-dx-ey-f$
$(\exp(ay)v)_y=-dx\exp(ay)-ey\exp(ay)-f\exp(ay)$
$\exp(ay)v=\int(-dx\exp(ay)-ey\exp(ay)-f\exp(ay))~dy$
$\exp(ay)u_x=C(x)-\dfrac{dx\exp(ay)}{a}-\dfrac{ey\exp(ay)}{a}+\dfrac{e\exp(ay)}{a^2}-\dfrac{f\exp(ay)}{a}$
$u_x=C(x)\exp(-ay)-\dfrac{dx}{a}-\dfrac{ey}{a}+\dfrac{e}{a^2}-\dfrac{f}{a}$
$u=\int\left(C(x)\exp(-ay)-\dfrac{dx}{a}-\dfrac{ey}{a}+\dfrac{e}{a^2}-\dfrac{f}{a}\right)dx$
$u=C_1(x)\exp(-ay)+C_2(y)-\dfrac{dx^2}{2a}-\dfrac{exy}{a}+\dfrac{ex}{a^2}-\dfrac{fx}{a}$
Case $3$: $b\neq0$ and $a=c=0$
Then $u_{xy}+bu_y+dx+ey+f=0$
Let $u_y=v$ ,
Then $u_{xy}=v_x$
$\therefore v_x+bv+dx+ey+f=0$
$v_x+bv=-dx-ey-f$
$(\exp(bx)v)_x=-dx\exp(bx)-ey\exp(bx)-f\exp(bx)$
$\exp(bx)v=\int(-dx\exp(bx)-ey\exp(bx)-f\exp(bx))~dx$
$\exp(bx)u_y=C(y)-\dfrac{dx\exp(bx)}{b}+\dfrac{d\exp(bx)}{b^2}-\dfrac{ey\exp(bx)}{b}-\dfrac{f\exp(bx)}{b}$
$u_y=C(y)\exp(-bx)-\dfrac{dx}{b}+\dfrac{d}{b^2}-\dfrac{ey}{b}-\dfrac{f}{b}$
$u=\int\left(C(y)\exp(-bx)-\dfrac{dx}{b}+\dfrac{d}{b^2}-\dfrac{ey}{b}-\dfrac{f}{b}\right)dy$
$u=C_1(x)+C_2(y)\exp(-bx)-\dfrac{dxy}{b}+\dfrac{dy}{b^2}-\dfrac{ey^2}{2b}-\dfrac{fy}{b}$
Case $4$: $a,b,c\neq0$
Then $u_{xy}+au_x+bu_y+cu+dx+ey+f=0$
Try let $u=p(x)q(y)v$ ,
Then $u_x=p(x)q(y)v_x+p_x(x)q(y)v$
$u_y=p(x)q(y)v_y+p(x)q_y(y)v$
$u_{xy}=p(x)q(y)v_{xy}+p(x)q_y(y)v_x+p_x(x)q(y)v_y+p_x(x)q_y(y)v$
$\therefore p(x)q(y)v_{xy}+p(x)q_y(y)v_x+p_x(x)q(y)v_y+p_x(x)q_y(y)v+a(p(x)q(y)v_x+p_x(x)q(y)v)+b(p(x)q(y)v_y+p(x)q_y(y)v)+cp(x)q(y)v+dx+ey+f=0$
$p(x)q(y)v_{xy}+p(x)(q_y(y)+aq(y))v_x+(p_x(x)+bp(x))q(y)v_y+(p_x(x)q_y(y)+ap_x(x)q(y)+bp(x)q_y(y)+cp(x)q(y))v=-dx-ey-f$
Take $q_y(y)+aq(y)=0\Rightarrow q(y)=\exp(-ay)$ and $p_x(x)+bp(x)=0\Rightarrow p(x)=\exp(-bx)$ , the PDE becomes
$\exp(-bx-ay)v_{xy}+(c-ab)\exp(-bx-ay)v=-dx-ey-f$
$v_{xy}+(c-ab)v=-dx\exp(bx+ay)-ey\exp(bx+ay)-f\exp(bx+ay)$
Case $4a$: $c=ab$
Then $v_{xy}=-dx\exp(bx+ay)-ey\exp(bx+ay)-f\exp(bx+ay)$
$v_x=\int(-dx\exp(bx+ay)-ey\exp(bx+ay)-f\exp(bx+ay))~dy$
$v_x=C(x)-\dfrac{dx\exp(bx+ay)}{a}-\dfrac{ey\exp(bx+ay)}{a}+\dfrac{e\exp(bx+ay)}{a^2}-\dfrac{f\exp(bx+ay)}{a}$
$v=\int\left(C(x)-\dfrac{dx\exp(bx+ay)}{a}-\dfrac{ey\exp(bx+ay)}{a}+\dfrac{e\exp(bx+ay)}{a^2}-\dfrac{f\exp(bx+ay)}{a}\right)dx$
$\exp(bx+ay)u=C_1(x)+C_2(y)-\dfrac{dx\exp(bx+ay)}{ab}+\dfrac{d\exp(bx+ay)}{ab^2}-\dfrac{ey\exp(bx+ay)}{ab}+\dfrac{e\exp(bx+ay)}{a^2b}-\dfrac{f\exp(bx+ay)}{ab}$
$\exp(bx+ay)u=C_1(x)+C_2(y)-\dfrac{(dx+ey+f)\exp(bx+ay)}{ab}+\dfrac{d\exp(bx+ay)}{ab^2}+\dfrac{e\exp(bx+ay)}{a^2b}$
$u=C_1(x)\exp(-bx-ay)+C_2(y)\exp(-bx-ay)-\dfrac{dx+ey+f}{ab}+\dfrac{d}{ab^2}+\dfrac{e}{a^2b}$
$u=C_1(x)\exp(-ay)+C_2(y)\exp(-bx)-\dfrac{dx+ey+f}{ab}+\dfrac{d}{ab^2}+\dfrac{e}{a^2b}$
Hence the really difficult case is that when $c\neq ab$ . By letting $u=\exp(-bx-ay)v$ the PDE will reduce to $v_{xy}+(c-ab)v=-dx\exp(bx+ay)-ey\exp(bx+ay)-f\exp(bx+ay)$ , which is as headache as https://math.stackexchange.com/questions/218425 for finding its most general solution.
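A quick numerical spot check of the Case 1 particular solution (a sketch; $C_1$ and $C_2$ are dropped since their mixed partial vanishes, and the constants and sample point are arbitrary):

```python
# u = -d x^2 y / 2 - e x y^2 / 2 - f x y should satisfy u_xy + dx + ey + f = 0.
d, e, f = 2.0, 3.0, 5.0

def u(x, y):
    return -d * x ** 2 * y / 2 - e * x * y ** 2 / 2 - f * x * y

def u_xy(x, y, h=1e-2):
    # central-difference stencil for the mixed partial; exact (up to rounding)
    # for this cubic polynomial
    return (u(x + h, y + h) - u(x + h, y - h)
            - u(x - h, y + h) + u(x - h, y - h)) / (4 * h ** 2)

x, y = 1.2, 0.7
residual = u_xy(x, y) + d * x + e * y + f
print(residual)  # ~0
```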
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/227564",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
How many numbers between $1$ and $6042$ (inclusive) are relatively prime to $3780$? How many numbers between $1$ and $6042$ (inclusive) are relatively prime to $3780$?
Hint: $53$ is a factor.
Here the problem is not the solution of the question, because I would simply remove all the multiples of prime factors of $3780$.
But I wonder what is the trick associated with the hint and using factor $53$.
| $3780=2^2\cdot3^3\cdot5\cdot7$
Any number that is not coprime with $3780$ must be divisible by at least one of $2,3,5,7$
Let us denote by $t(n)$ the number of numbers $\le 6042$ divisible by $n$
$t(2)=\left\lfloor\frac{6042}2\right\rfloor=3021$
$t(3)=\left\lfloor\frac{6042}3\right\rfloor=2014$
$t(5)=\left\lfloor\frac{6042}5\right\rfloor=1208$
$t(7)=\left\lfloor\frac{6042}7\right\rfloor=863$
$t(6)=\left\lfloor\frac{6042}6\right\rfloor=1007$
Similarly, $t(30)=\left\lfloor\frac{6042}{30}\right\rfloor=201$
and $t(2\cdot 3\cdot 5\cdot 7)=\left\lfloor\frac{6042}{210}\right\rfloor=28$
The number of numbers not coprime to $3780$ is
$N=\sum t(i)-\sum t(i\cdot j)+\sum t(i\cdot j \cdot k)-t(i\cdot j\cdot k \cdot l)$ where $i,j,k,l \in \{2,3,5,7\}$ and no two are equal.
The number of numbers coprime to $3780$ is then $6042-N$
Reference: Venn Diagram for 4 Sets
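The inclusion-exclusion count sketched above can be cross-checked against a direct gcd scan; a small sketch (the variable names are mine):

```python
from math import gcd
from itertools import combinations

LIMIT, M = 6042, 3780
primes = [2, 3, 5, 7]          # the prime factors of 3780

# Inclusion-exclusion: count numbers <= LIMIT divisible by some prime factor
N = 0
for r in range(1, len(primes) + 1):
    for combo in combinations(primes, r):
        prod = 1
        for p in combo:
            prod *= p
        N += (-1) ** (r + 1) * (LIMIT // prod)

coprime_count = LIMIT - N

# Direct check with gcd
brute = sum(1 for n in range(1, LIMIT + 1) if gcd(n, M) == 1)
```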
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/227623",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 0
} |
$n$ Distinct Eigenvectors for an $ n\times n$ Hermitian matrix? Much like the title says, I wish to know how it is possible that we can know that there are $n$ distinct eigenvectors for an $n\times n$ Hermitian matrix, even though we have multiple eigenvalues. My professor hinted at using the concept of unitary transform and Gram-Schmidt orthogonalization process, but to be honest I'm a bit in the dark. Could anyone help me?
| You can show that any matrix is unitarily similar to an upper triangular matrix over the complex numbers. This is the Schur decomposition which Ed Gorcenski linked to. Given this transformation, let $A$ be a Hermitian matrix. Then there exists unitary matrix $U$ and upper-triangular matrix $T$ such that
$$A = UTU^{\dagger}$$
We can show that any such decomposition leads to $T$ being diagonal so that $U$ not only triangularizes $A$ but in fact diagonalizes it.
Since $A$ is Hermitian, we have
$$A= UT^{\dagger}U^{\dagger} = UTU^{\dagger} = A^{\dagger}$$
This immediately implies $T^{\dagger} = T$. Since $T$ is upper-triangular and $T^{\dagger}$ is lower-triangular, both must be diagonal matrices (this further shows that the eigenvalues are real). This shows that any Hermitian matrix is diagonalizable, i.e. any $n\times n$ Hermitian matrix has $n$ linearly independent eigenvectors.
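Numerically, this can be illustrated with numpy's `eigh` routine, which returns real eigenvalues and an orthonormal basis of eigenvectors even when eigenvalues repeat; a minimal sketch:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6
B = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
A = (B + B.conj().T) / 2            # Hermitian by construction

w, U = np.linalg.eigh(A)            # w: eigenvalues, U: columns are eigenvectors

orthonormal = np.allclose(U.conj().T @ U, np.eye(n))    # n independent eigenvectors
real_eigs = np.isrealobj(w)                             # eigenvalues are real
diagonalized = np.allclose(U @ np.diag(w) @ U.conj().T, A)
```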
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/227695",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Is there a name for this ring-like object? Let $S$ be an abelian group under an operation denoted by $+$. Suppose further that $S$ is closed under a commutative, associative law of multiplication denoted by $\cdot$. Say that $\cdot$ distributes over $+$ in the usual way. Finally, for every $s\in S$, suppose there exists some element $t$, not necessarily unique, such that $s\cdot t=s$.
Essentially, $S$ is one step removed from being a ring; the only problem is that the multiplicative identity is not unique. Here is an example.
Let $S=\{\text{Continuous functions} f: \mathbb{R}\rightarrow \mathbb{R} \ \text{with compact support}\}$ with addition and multiplication defined pointwise. It is clear that this is an abelian group with the necessary law of multiplication. Now, let $f\in S$ be supported on $[a,b]$. Let $S'\subset S$ be the set of continuous functions compactly supported on intervals containing $[a,b]$ that are identically 1 on $[a,b]$. Clearly, if $g\in S'$, then $f\cdot g=f$ for all $x$. Also, there is no unique multiplicative identity in this collection since the constant function 1 is not compactly supported.
I've observed that this example is an increasing union of rings, but I don't know if this holds for every set with the property I've defined.
| This is a pseudo-ring, or rng, or ring-without-unit. The article linked in fact actually mentions the example of functions with compact support. The fact that you have a per-element neutral element is probably not sufficiently useful to give a special name to pseudo-rings with this additional property.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/227778",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "13",
"answer_count": 3,
"answer_id": 2
} |
Simple Characterizations of Mathematical Structures By no means trivial, a simple characterization of a mathematical structure is a simply-stated one-liner in the following sense:
Some general structure is (surprisingly and substantially) more structured if and only if the former satisfies some (surprisingly and superficially weak) extra assumption.
For example, here are four simple characterizations in algebra:
*
*A quasigroup is a group if and only if it is associative.
*A ring is an integral domain if and only if its spectrum is reduced and irreducible.
*A ring is a field if and only if its only ideals are $(0)$ and itself.
*A domain is a finite field if and only if it is finite.
I'm convinced that there are many beautiful simple characterizations in virtually all areas of mathematics, and I'm quite puzzled why they aren't utilized more frequently. What are some simple characterizations that you've learned in your mathematical studies?
| A natural number $p$ is prime if and only if it divides $(p-1)! + 1$ (and is greater than 1).
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/227849",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 3,
"answer_id": 0
} |
Are there values $k$ and $\ell$ such that $n = kd + \ell$? Prove. Suppose that $n \in \mathbb Z$ and $d$ is an odd natural number, where $0 \notin\mathbb N$. Prove that there exist $k$ and $\ell$ such that $n = kd +\ell$ and $-\frac {d}2 < \ell < \frac d2$.
I know that this is related to the Euclidean algorithm and that $k$ and $\ell$ are unique. I do not understand where to start proving this (as with most problems like these), but I also have a few other questions.
Why is it that $d$ is divided by $2$ when it is an odd number? I'm not even sure how $\ell$ being greater than and less than these fractions has anything to do with the rest of the proof. Couldn't $\ell$ be any value greater than or less than $0$?
Since $d$ can never equal $0$, $kd$ could never equal $0$, so doesn't that leave $n$ as the only value that could possibly equal $0$?
I would appreciate anyone pushing me in the correct direction.
| We give a quite formal, and unpleasantly lengthy, argument. Then in a remark we say what's really going on. Let $n$ be an integer. First note that there are integers $x$ such that $n-xd\ge 0$. This is obvious if $n\ge 0$. And if $n \lt 0$, we can for example use $x=-n+1$.
Let $S$ be the set of all non-negative integers of the shape $n-xd$. Then $S$ is, as we observed, non-empty. So there is a smallest non-negative integer in $S$. Call this number $r$. (The fact that any non-empty set of non-negative integers has a smallest element is a hugely important fact equivalent to the principle of mathematical induction. It is often called the Least Number Principle.)
Since $r\in S$, we have $r\ge 0$. Moreover, by the definition of $S$ there is an integer $y$ such that $r=n-yd$, or equivalently $n=yd+r$.
Note that $r\lt d$. For suppose to the contrary that $r \ge d$. Then $r-d\ge 0$. But $r-d=n-(y+1)d$, and therefore $r-d$ is an element of $S$, contradicting the fact that $r$ is the smallest element of $S$.
To sum up, we have shown that there is an $r$ such that $0\le r\lt d$ and such that there exists a $y$ such that $r=n-yd$, or equivalently $n=yd+r$.
Case (i): Suppose that $r\lt \dfrac{d}{2}$. Then let $k=y$ and $\ell=r$. We have then $n=kd+\ell$ and $0\le \ell\lt \dfrac{d}{2}$.
Case (ii): Suppose that $r \ge \frac{d}{2}$. Since $d$ is odd, we have $r\gt \dfrac{d}{2}$. We have
$$\frac{d}{2}\lt r \lt d.$$
Subtract $d$ from both sides of these inequalities. We obtain
$$-\dfrac{d}{2}\lt r-d\lt 0,$$
which shows that
$$-\frac{d}{2}\lt n-yd-d\lt 0.$$
Finally, in this case let $k=y+1$ and $\ell=n-kd$. Then $n=kd+\ell$ and
$$-\dfrac{d}{2}\lt \ell\lt 0.$$
Remark: There is surprisingly little going on here. We first found the remainder $r$ when $n$ is divided by $d$. But the problem asks for a "remainder" which is not necessarily, like the usual remainder, between $0$ and $d-1$. We want to allow negative "remainders"
that are as small in absolute value as possible. The idea is that if the ordinary remainder is between $0$ and $d/2$, we are happy with it, but if the ordinary remainder is between $d/2$ and $d-1$, we increase the "quotient" by $1$, thereby decreasing the remainder by $d$, and putting it in the right range. So for example if $n=68$ and $d=13$, we use $k=5$, and $\ell=3$. If $n=74$ and $d=13$, we have the usual $74=(5)(13)+9$. Increase the quotient to $6$. We get $74=(6)(13)+(-4)$, and use $k=6$, and $\ell=-4$.
We gave a proof in the traditional style, but the argument can be rewritten as an ordinary induction argument on $|n|$. It is a good idea to work separately with non-negative and negative integers $n$. We sketch the argument for non-negative $n$. The result is obvious for $n=0$, with $k_0=\ell_0=0$. Suppose that for a given non-negative $n$ we have $n=k_nd+\ell_n$, where $\ell_n$ obeys the inequalities of the problem, that is, $-d/2\lt \ell_n\lt d/2$. If $\ell_n\le (d-3)/2$, then $n+1=k_{n+1}d +\ell_{n+1}$, where $k_{n+1}=k_n$ and $\ell_{n+1}=\ell_n+1$. If $\ell_n=(d-1)/2$, let $k_{n+1}=k_n+1$ and $\ell_{n+1}=-(d-1)/2$. It is not hard to verify that these values of $\ell_{n+1}$ are in the right range.
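The quotient-adjustment idea in the remark translates directly into a short routine; a sketch (the function name `centered_divmod` is my own):

```python
def centered_divmod(n, d):
    """Return (k, l) with n == k*d + l and -d/2 < l < d/2, for odd d >= 1."""
    l = n % d            # Python guarantees 0 <= l < d, even for negative n
    if l > d // 2:       # l >= (d+1)/2: bump the quotient, shift l by -d
        l -= d
    return (n - l) // d, l
```

On the answer's examples this gives `centered_divmod(68, 13) == (5, 3)` and `centered_divmod(74, 13) == (6, -4)`.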
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/227914",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Minimal surfaces and gaussian and normal curvaturess If $M$ is the surface $$x(u^1,u^2) = (u^2\cos(u^1),u^2\sin(u^1), p\,u^1)$$ then I am trying to show that $M$ is minimal. $M$ is referred to as a helicoid.
Also I am confused on how $p$ affects the problem
| There is a good reason that the value of $p$ does not matter, as long as $p \neq 0.$
If you begin with a sphere of radius $R$ and blow it up to a sphere of radius $SR,$ the result is to multiply the mean curvature by $\frac{1}{S}.$ This is a general phenomenon. A map, which is also linear, given by moving every point $(x,y,z)$ to $(\lambda x, \lambda y, \lambda z)$ for a positive constant $\lambda,$ is called a homothety. A homothety takes any surface and multiplies the mean curvature (at matching points, of course) by $\frac{1}{\lambda}.$ This can be done in any $\mathbb R^n,$ I guess we are sticking with $\mathbb R^3.$
So, what I need to do is show you that your helicoid with parameter $p,$ expanded or shrunk by a homothety, is the helicoid with a different parameter, call it $q.$ I'm going to use $u = u^1, v = u^2.$ And that is just
$$ \frac{q}{p} x(u, \frac{pv}{q}) = \frac{q}{p} \left(\frac{pv}{q} \cos u , \frac{pv}{q} \sin u, p u \right) = (v \cos u , v \sin u, q u). $$
Well, the mean curvature of the original helicoid is $0$ everywhere. So the new helicoid is still minimal.
There is a bit of work showing that a homothety changes the mean curvature in the way I described, no easier than your original problem. True, though.
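The claim $H=0$ can also be checked symbolically from the first and second fundamental forms; a sketch using sympy (note that the parameter $p$ drops out of the conclusion, matching the homothety argument):

```python
import sympy as sp

u, v, p = sp.symbols('u v p', real=True)
X = sp.Matrix([v*sp.cos(u), v*sp.sin(u), p*u])   # the helicoid

Xu, Xv = X.diff(u), X.diff(v)
E, F, G = Xu.dot(Xu), Xu.dot(Xv), Xv.dot(Xv)     # first fundamental form

n = Xu.cross(Xv)
n = n / sp.sqrt(n.dot(n))                        # unit normal

L = X.diff(u, 2).dot(n)                          # second fundamental form
M = X.diff(u).diff(v).dot(n)
N = X.diff(v, 2).dot(n)

# Mean curvature
H = sp.simplify((E*N - 2*F*M + G*L) / (2*(E*G - F**2)))
```

Here $F=L=N=0$, so the numerator vanishes and $H=0$ for every $p$.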
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/227989",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Proj construction and fibered products How to show, that
$Proj \, A[x_0,...,x_n] = Proj \, \mathbb{Z}[x_0,...,x_n] \times_\mathbb{Z} Spec \, A$?
It is used in Hartshorne, Algebraic geometry, section 2.7.
| Show that you have an isomorphism on suitable open subsets, and that the isomorphisms glue. The standard ones on $\mathbb{P}^n_A$ should suffice. Use that $$\mathbb{Z}[x_0, \ldots, x_n] \otimes_\mathbb{Z} A \cong A[x_0, \ldots, x_n].$$ Maybe you could prove the isomorphism by using the universal property of projective spaces too, but that might be overkill / not clean at all.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/228063",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Show that $\left(\frac{1}{2}\left(x+\frac{2}{x}\right)\right)^2 > 2$ if $x^2 > 2$ Okay, I'm really sick and tired of this problem. Have been at it for an hour now and we all know the drill: if you don't get to the solution of a simple problem, you won't, so ...
I'm working on a proof for the convergence of the Babylonian method for computing square roots. As a warming up I'm first using the sequence $(x_n)$ defined by:
$$
x_1 = 2\\
x_{n+1} = \frac{1}{2} (x_n + \frac{2}{x_n})
$$
Now for the proof, I want to show that: $\forall n \in \mathbb{N}: x^2_n > 2$. I want to prove this using induction, so this eventually comes down to:
$$
x_n^2 > 2 \implies x_{n+1}^2 = \frac{1}{4}x_n^2 + 1 + \frac{1}{x_n^2} > 2
$$
And I can't seem to get to the solution. Note that I don't want to make use of showing that $x=2$ is a minimum for this function using derivatives. I purely want to work with the inequalities provided. I'm probably seeing through something very obvious, so I would like to ask if anyone here sees what's the catch.
Sincerely,
Eric
| First, swap $x_n^2$ for $2y$, just to make it simpler to write. The hypothesis is then $y > 1$, and what we want to show is
$$
\frac{2}{4}y + \frac{1}{2y} > 1
$$
$$
y + \frac{1}{y} > 2
$$
Multiply by $y$ (since $y$ is positive, no problems arise)
$$
y^2 -2y + 1 > 0
$$
$$
(y-1)^2 > 0
$$
which is obvious, since $y > 1$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/228154",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 3,
"answer_id": 0
} |
Convergence of Lebesgue integrable functions in an arbitrary measure. I'm a bit stuck on this problem, and I was hoping someone could point me in the right direction.
Suppose $f, f_1, f_2,\ldots \in L^{1}(\Omega,A,\mu)$ , and further suppose that $\lim_{n \to \infty} \int_{\Omega} |f-f_n| \, d\mu = 0$. Show that $f_n \rightarrow f$ in measure $\mu$.
In case you aren't sure, $L^1(\Omega,A,\mu)$ is the complex Lebesgue integrable functions on $\Omega$ with measure $\mu$.
I believe I have to use the Dominated convergence theorem to get this result, and they usually do some trick like taking a new function $g$ that relates to $f$ and $f_n$ in some way, but I'm not seeing it. Any advice?
| A bit late to answer, but here it is anyways.
We wish to show that for any $\epsilon > 0,$ there is some $N$ such that for all $n \geq N, \mu(\{x : |f_n(x) - f(x)| > \epsilon\}) < \epsilon.$ (This is one of several equivalent formulations of convergence in measure.)
If this were not the case, then there'd be some $\epsilon > 0$ so that for every $N$ there is an $n \geq N$ that doesn't satisfy the above condition. So, pick $N$ large enough so that for all $n \geq N, \int_\Omega |f-f_n| \ d\mu < \epsilon^2.$ Then, for this $N$ we have, by our assumption, some $n_0$ with $\mu(L_{n_0}) \geq \epsilon$ where $L_{n_0} = \{x: |f_{n_0}(x) - f(x)| > \epsilon\}.$
But then, we'd have that $$\epsilon^2 > \int_\Omega |f_{n_0} - f| \ d\mu \geq \int_{L_{n_0}} |f_{n_0} - f| \ d\mu > \epsilon\mu(L_{n_0}) \geq \epsilon^2,$$ which is a contradiction. Hence, we must have convergence in measure.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/228203",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Is this the category of metric spaces and continuous functions?
Suppose the objects of the category are metric spaces and, for $\left(A,d_A\right)$ and $\left(B,d_B\right)$ metric spaces over sets $A$ and $B$, a morphism of two metric spaces is given by a function between the underlying sets such that $f$ preserves the metric structure: $\forall x,y,z \in A$ we have:
*
*$$ d_B\left(f\left(x\right),f\left(y\right)\right)= 0 \Leftrightarrow
f\left(x\right)=f\left(y \right)$$
*$$d_B\left(f\left(x\right),f\left(y\right)\right)=d_B\left(f\left(y\right),f\left(x\right)\right)$$
*$$d_B\left(f\left(x\right),f\left(y\right)\right) \le d_B\left(f\left(x\right),f\left(z\right)\right) + d_B\left(f\left(z\right),f\left(y\right)\right) $$
and furthermore : $\forall \epsilon > 0$, $\exists \delta >0 $ which satisfy:
*$$d_A\left(x,y\right)<\delta \Rightarrow d_B \left(f\left(x\right),f\left(y\right)\right)< \epsilon$$
Is this the category of metric spaces and continuous functions? What if we drop the last requirement?
| I don't think there's really one the category of metric spaces. The fourth axiom here gives you a category of metric spaces and (uniformly) continuous functions. The other axioms are implied by the assumptions. Allowing $\delta$ to depend on $x$ gives you the category of metric spaces and (all) continuous functions.
One way to preserve metric structure would be to demand that $d_B(f(x),f(y))=d_A(x,y)$. This would restrict the functions to isometric ones, which are all homeomorphisms, so you could relax the restriction to $d_B(f(x),f(y))\le d_A(x,y)$. That way you get the category of metric spaces and contractions.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/228268",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Counting permutations of students standing in line Say I have k students, four of them are Jack, Liz, Jenna and Tracy. I want to count the number of permutations in which Liz is standing left to Jack and Jenna is standing right to Tracy. I define $A = ${Liz is left to Jack} so $|A| = \frac{k!}{2}$. The same goes for $B$ with Jenna and Tracy.
I know that $$|A \cap B| = |A| + |B| - |A \cup B|$$
But how do I find the union? I'm guessing it involves inclusion-exclusion, but I can't remember how exactly.
Any ideas? Thanks!
| The order relationship between Liz and Jack is independent of that between Jenna and Tracy. You already know that there are $k!/2$ permutations in which Liz stands to the left of Jack. In each of those Jenna can be on the right of Tracy or on her left without affecting the order of Liz and Jack, so exactly half of these $k!/2$ permutations have Jenna to the right of Tracy. The answer is therefore $k!/4$.
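For a small $k$, the count $k!/4$ can be confirmed by brute force; a sketch (the student labels are arbitrary):

```python
from itertools import permutations
from math import factorial

k = 5
LIZ, JACK, TRACY, JENNA = 0, 1, 2, 3    # four of the k students

count = sum(
    1
    for line in permutations(range(k))
    # Liz left of Jack, Jenna right of Tracy
    if line.index(LIZ) < line.index(JACK) and line.index(TRACY) < line.index(JENNA)
)

assert count == factorial(k) // 4        # independence of the two constraints
```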
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/228326",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 1
} |
Does a natural transformation on sites induce a natural transformation on presheaves? Suppose $C$ and $D$ are sites and $F$, $G:C\to D$ two functors connected by a natural transformation $\eta_c:F(c)\to G(c)$.
Suppose further that two functors $\hat F$, $\hat G:\hat C\to\hat D$ on the respective categories of presheaves are given by $\hat F(c)=F(c)$ and $\hat G(c)=G(c)$ where I abuse the notation for the Yoneda embedding.
Is there always a natural transformation $\hat\eta_X:\hat F(X)\to \hat G(X)$?
The problem is, that in the diagram
$$
\begin{array}{rcccccl}
\hat F(X)&=&\operatorname{colim} F(X_j)&\to& \operatorname{colim} G(X_j)&=&\hat G(X)\\
&&\downarrow &&\downarrow\\
\hat F(Y)&=&\operatorname{colim} F(Y_k)&\to& \operatorname{colim} G(Y_k)&=&\hat G(Y)
\end{array}
$$
for a presheaf morphism $X\to Y$ the diagrams for the colimits may be different, or am I wrong?
| Recall: given a functor $F : \mathbb{C} \to \mathbb{D}$ between small categories, there is an induced functor $F^\dagger : [\mathbb{D}^\textrm{op}, \textbf{Set}] \to [\mathbb{C}^\textrm{op}, \textbf{Set}]$, and this functor has both a left adjoint $\textrm{Lan}_F$ and a right adjoint $\textrm{Ran}_F$. Now, given a natural transformation $\alpha : F \Rightarrow G$, there is an induced natural transformation $\alpha^\dagger : G^\dagger \Rightarrow F^\dagger$ (note the direction!), given by $(\alpha^\dagger_Q)_C = Q(\alpha_C) : Q(G C) \to Q(F C)$. Consequently, if $\eta^G_P : P \to (\textrm{Lan}_G P) F$ is the component of the unit of the adjunction $\textrm{Lan}_G \dashv G^\dagger$, we can compose with $\alpha^\dagger_{\textrm{Lan}_G P}$ to get a presheaf morphism $\alpha^\dagger_{\textrm{Lan}_G P} \circ \eta^G_P : P \to (\textrm{Lan}_G P) F$, and by adjunction this corresponds to a presheaf morphism $\textrm{Lan}_F P \to \textrm{Lan}_G P$. This is all natural in $P$, so we have the desired natural transformation $\textrm{Lan}_\alpha : \textrm{Lan}_F \Rightarrow \textrm{Lan}_G$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/228395",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
} |
How to solve/transform/simplify an equation by a simple algorithm? MathePower provides a form. There you can input a formula (1st input field) and a variable to solve for (2nd input field), and it will output a simplified version of that formula.
I want to write a script which needs to do something similar.
So my question is:
*
*Do you know about any simple algorithm which can do something like the script on MathePower? (I just want to simplify formulas based on the four basic arithmetical operations.)
*Are there any example implementations in a programming language?
Thanks for your answer. (And please excuse my bad English.)
| This is generally known as "computer algebra," and there are entire books and courses on the subject. There's no single magic bullet. Generally it relies on things like specifying canonical forms for certain types of expressions and massaging them. Playing with the form, it seems to know how to simplify a rational expression, but not for instance that $\sin^2 x + \cos^2 x = 1$.
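For a concrete taste of the two behaviors mentioned (canonical forms for rational expressions versus rewriting rules for identities), here is a hedged sketch using sympy, a different system from the linked form:

```python
import sympy as sp

x = sp.symbols('x')

# Rational simplification: cancel to a canonical form
assert sp.cancel((x**2 - 1) / (x - 1)) == x + 1

# Trig identity: needs rewriting rules, not just rational canonical forms
assert sp.simplify(sp.sin(x)**2 + sp.cos(x)**2) == 1
```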
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/228453",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Maximum based recursive function definition Does a function other than 0 that satisfies the following definition exist?
$$
f(x) = \max_{0<\xi<x}\left\{ \xi\;f(x-\xi) \right\}
$$
If so can it be expressed using elementary functions?
| Since we cannot be sure if the $\max$ exists, let us consider $f\colon I\to\mathbb R$ with
$$\tag1f(x)=\sup_{0<\xi<x}\xi f(x-\xi)$$
instead, where $I$ is an interval of the form $I=(0,a)$ or $I=(0,a]$ with $a>0$.
If $x_0>0$ then $f(x)\ge (x-x_0)f(x_0)$ for $x>x_0$ and $f(x)\le\frac{f(x_0)}{x_0-x}$ for $x<x_0$.
We can conclude $f(x)\ge0$ for all $x>0$: Select $x_0\in(0,x)$. Note that $f(x_0)$ may be negative. Let $\epsilon>0$. For $0<h<x-x_0$ we have $f(x_0+h)\ge h f(x_0)$ and $f(x)\ge (x-x_0-h)f(x_0+h)\ge h(x-x_0-h)f(x_0)$. If $h<\frac{\epsilon}{(x-x_0)|f(x_0)|}$, this shows $f(x)\ge-\epsilon$. Since $\epsilon$ was arbitrary, we conclude $f(x)\ge0$.
Assume $f(x_0)>0$. Then for any $0<\epsilon<1$ there is $x_1<x_0$ with $(x_0-x_1)f(x_1)>(1-\epsilon)f(x_0)$ and especially $f(x_1)>0$. In fact, for a sequence $(\epsilon_n)_n$ with $0<\epsilon_n<1$ and
$$\prod_n (1-\epsilon_n)=:c>0$$
(which is readily constructed) we find a sequence $x_0>x_1>x_2>\ldots$ such that $(x_n-x_{n+1})f(x_{n+1})>(1-\epsilon_n)f(x_n)$, hence
$$\prod_{k=1}^{n} (x_{k}-x_{k+1})\cdot f(x_{n+1})>\prod_{k=1}^{n-1}(1-\epsilon_k)\cdot f(x_1)>c f(x_1). $$
By the arithmetic-geometric inequality, $${\prod_{k=1}^n (x_{k}-x_{k+1})}\le \left(\frac {x_1-x_n}n\right)^n<\left(\frac {x_1}n\right)^n$$
and
$$f(x_{n+1})>c f(x_1)\cdot \left(\frac n{x_1}\right)^n$$
The last factor is unbounded.
Therefore,
$f(x_0)\ge (x_0-x_n)f(x_{n+1})\ge (x_0-x_1) f(x_{n+1})$ gives us a contradiction.
Therefore $f$ is identically zero.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/228608",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 2
} |
Alice and Bob Game Alice and Bob just invented a new game.
The rule of the new game is quite simple. At the beginning of the game, they write down N
random positive integers, then they take turns (Alice first) to either:
*
*Decrease a number by one.
*Erase any two numbers and write down their sum.
Whenever a number is decreased to 0, it will be erased automatically. The game ends when all numbers are finally erased, and the one who cannot play in his(her) turn loses the game.
Here's the problem: Who will win the game if both use the best strategy?
| The complete solution to this game is harder than it looks, due to complications when there are several numbers $1$ present; I claim the following is a complete list of the "Bob" games, those that can be won by the second player to move. To justify, I will indicate for each case a strategy for Bob, countering any move by Alice by another move leading to a simpler "Bob" game.
I will write game position as partitions, weakly decreasing sequences of nonnegative integers (order clearly does not matter for the game). Entries present a number of times are indicated by exponents in parentheses, so $(3,1^{(4)})$ designates $(3,1,1,1,1)$. Moves are of type "decrease" (type 1 in the question) or "merge" (type 2); a decrease from $1$ to $0$ will be called a removal.
Bob-games are:
*
*$(0)$ and $(2)$
*$(a_1,\ldots,a_n,1^{(2k)})$ where $k\geq0$, $n\geq1$, $a_n\geq2$, $(a_1,\ldots,a_n)\neq(2)$, and $a_1+\cdots+a_n+n-1$ is even. Strategy: counter a removal of one of the numbers $1$ by another such removal; a merge of a $1$ and an $a_i$ by another merge of a $1$ into $a_i$; a merge of two entries $1$ by a merge of the resulting $2$ into one of the $a_i$; a decrease of an $a_i$ from $2$ to $1$ by a merge of the resulting $1$ into another $a_j$; any other decrease of an $a_i$ or a merge of an $a_i$ and $a_j$ by the merge of two entries $1$ if possible ($k\geq1$) or else merge an $a_i$ and $a_j$ if possible ($n\geq2$), or else decrease the unique remaining number making it even.
*(to be continued...)
Note that the minimal possibilities for $(a_1,\ldots,a_n)$ here are $(4)$, $(3,2)$, and $(2,2,2)$. Anything that can be moved into a Bob-game is an Alice-game; this applies to any $(a_1,\ldots,a_n,1^{(2k+1)})$ where $k\geq0$, $n\geq1$, $a_n\geq2$, $(a_1,\ldots,a_n)\neq(2)$ (either remove or merge a $1$ so as to leave $a_1+\cdots+a_n+n-1$ even), and to any $(a_1,\ldots,a_n,1^{(2k)})$ where $k\geq0$, $n\geq1$, $a_n\geq2$, and $a_1+\cdots+a_n+n-1$ odd (either merge two of the $a_i$ or two entries $1$, or if there was just an odd singleton, decrease it). All cases $(3,1^{(l)})$ and $(2,2,1^{(l)})$ are covered by this, in a manner depending on the parity of $l$. It remains to classify the configurations $(2,1^{(l)})$ and $(1^{(l)})$. Moving outside this remaining collection always gives some Alice-game $(3,1^{(l)})$ or $(2,2,1^{(l)})$, which are losing moves that can be ignored. Then we complete our list of Bob-games with:
*
*$(1^{(3k)})$ and $(2,1^{(3k)})$ with $k>0$. Bob wins game $(1,1,1)$ by moving to $(2)$ in all cases. Similarly he wins other games $(1^{(3k)})$ by moving to $(2,1^{(3k-3)})$ in all cases. Finally Bob wins $(2,1^{(3k)})$ by moving to $(1^{(3k)})$ (unless Alice merges, but this also loses as we already saw).
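The claimed Bob-games can be checked for small positions with a memoized brute-force solver; a sketch (the state encoding and function name are mine — a position is a sorted tuple of positive integers, and the player to move loses when no move remains):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def mover_wins(state):
    """state: sorted tuple of positive ints; True iff the player to move wins."""
    moves = set()
    for i, a in enumerate(state):
        rest = state[:i] + state[i + 1:]
        # decrease a by one (a 1 is erased entirely)
        moves.add(tuple(sorted(rest + (a - 1,))) if a > 1 else rest)
        # merge a with another entry
        for j in range(i + 1, len(state)):
            merged = rest[:j - 1] + rest[j:] + (a + state[j],)  # rest[j-1] is state[j]
            moves.add(tuple(sorted(merged)))
    # winning iff some move leads to a position losing for the opponent
    return any(not mover_wins(m) for m in moves)
```

For instance `mover_wins((2,))` and `mover_wins((1, 1, 1))` come out `False` (Bob-games), while `mover_wins((1,))` is `True`, matching the analysis above.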
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/228696",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 0
} |
How to undo linear combinations of a vector If $v$ is a row vector and $A$ a matrix, the product $w = v A$ can be seen as a vector whose entries are linear combinations of the entries of $v$, with coefficients given by the columns of $A$. For instance, if
$$
v = \begin{bmatrix}1, 2\end{bmatrix}, \quad
A = \begin{bmatrix}0 & 0 & 0 \\ 1 & 1 & 1\end{bmatrix}, \quad
w = vA = \begin{bmatrix}2, 2, 2\end{bmatrix}
$$
read by columns, the matrix $A$ is saying: make 3 combinations of the entries of vector $v$, each of which consists of taking $0$ times the first entry and $1$ time the second entry.
Now, the goal is to reconstruct, to the extent possible, vector $v$ from $A$ and $w$; in other words, to find a vector $v'$ such that $$v'A = w .$$
Two things to consider:
*
*The matrix $A$ can have any number of columns and may or may not be square or invertible.
*There are times when elements of the original vector can't be known, because $w$ contains no information about them. In the previous example, this would be the case of $v_1$. In this case, we would accept any value of $v'_1$ as correct.
How would you approach this problem? Can $v'$ be found doing simple operations with $w$ and $A$ or do I have to invent an algorithm specifically for the purpose?
| Clearly
$$ \begin{bmatrix}0 & 2\end{bmatrix}
\begin{bmatrix}0 & 0 & 0 \\ 1 & 1 & 1\end{bmatrix}
= \begin{bmatrix}2 & 2 & 2\end{bmatrix}$$
So $v'=[0 \; 2]$ is a solution.
So we can suppose than any other solution can look like
$v'' = v' + [x \; y]$.
\begin{align}
(v' + [x \; y])A &= [2 \; 2 \; 2] \\
v'A + [x \; y]A &= [2 \; 2 \; 2] \\
[2 \; 2 \; 2] + [x \; y]A &= [2 \; 2 \; 2] \\
[x \; y]A &= [0 \; 0 \; 0] \\
[y \; y \; y] &= [0 \; 0 \; 0] \\
y &= 0
\end{align}
So the most general solution is
$v'' = v'+ \begin{bmatrix}x & 0\end{bmatrix} = \begin{bmatrix}x & 2\end{bmatrix}$
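Numerically, one such solution $v'$ can be obtained by least squares; a sketch using numpy on the example above (`lstsq` returns the minimum-norm solution, and the free directions lie in the null space of $A^{\mathsf T}$):

```python
import numpy as np

A = np.array([[0, 0, 0],
              [1, 1, 1]], dtype=float)
w = np.array([2.0, 2.0, 2.0])

# Solve v' A = w, i.e. A^T v'^T = w^T, in the least-squares sense
v_prime, *_ = np.linalg.lstsq(A.T, w, rcond=None)

assert np.allclose(v_prime @ A, w)    # an exact solution exists here
```

Any $v'' = v' + [x \; 0]$ then reproduces $w$, as derived above.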
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/228809",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
*Recursive* vs. *inductive* definition I once had an argument with a professor of mine, if the following definition was a recursive or inductive definition:
Suppose you have sequence of real numbers. Define $a_0:=2$ and $a_{i+1}:=\frac{a_i a_{i-1}}{5}$. (Of course this is just an example and as such has only illustrative character - I could have as well taken as an example a definition of some family of sets)
I claimed that this definition was recursive since we have an $a_{i+1}$ and define it going "downwards", and use the word "inductive" only as an adjective for the word "proof", but my professor insisted that we distinguish between these types of definition and that this was an inductive definition, since we start with $a_0$ and work "upwards".
Now, is there even someone who can be right ? Since to me it seems that mathematically every recursive definition is also inductive (whatever these two expressions may finally mean), since the mathematical methods used to define them (namely equations) are the same. (Wikipedia also seems to think they are the same - but I trust a sound mathematical community, i.e. you guys, more than Wikipedia)
And if there is a difference, who is right and what is, if the above is a recursive definition, an inductive definition (and vice-versa) ?
(Please, don't ask me to ask my professor again - or anything similar, since I often get this answer here, after mentioning that this question resulted from a discussion with some faculty member - since our discussion ended with him saying that "it definitely is inductive, but I just can't explain it")
| Here is my inductive definition of the cardinality of a finite set (since, in my mind, finite sets are built by adding elements starting with the empty set):
$|\emptyset|
= 0
$.
$|A \cup {x}|
=
\begin{cases}
x \in A
&\implies |A|\\
x \not\in A
&\implies |A|+1\\
\end{cases}
$
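For what it's worth, that inductive definition transcribes directly into a recursive function (a sketch I'm adding; `card` is a hypothetical name, and Python's `frozenset` stands in for a finite set):

```python
def card(s: frozenset) -> int:
    """Cardinality via the inductive definition: |∅| = 0, and
    |A ∪ {x}| is |A| if x ∈ A, else |A| + 1."""
    if not s:
        return 0                   # base case: |∅| = 0
    x = next(iter(s))              # peel off one element
    a = s - {x}                    # now s == a ∪ {x} with x ∉ a
    return card(a) + 1             # x ∉ a, so the second case applies

print(card(frozenset({1, 2, 3})))  # 3
```

The first branch of the case distinction never fires here because we always remove the peeled-off element first, which mirrors how the inductive clause only ever adds genuinely new elements.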
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/228863",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "34",
"answer_count": 5,
"answer_id": 4
} |
Prove that $mn|a$ implies $m|a$ and $n|a$ I am trying to prove this statement about divisibility: $mn|a$ implies $m|a$ and $n|a$.
I cannot start the proof. I need to prove either the right or left side. I don't know how to use divisibility theorems here. Generally, I have problems in proving mathematical statements.
This is my attempt: $m$ divides $a$ implies that $mn$ also divides $a$. How do I show that $n$ also divides $a$?
| If $mn|a$ then $a=kmn$ for some integer $k$. Then $a=(km)n$ where $km$ is an integer so that $n|a$. Similarly, $a=(kn)m$ where $kn$ is an integer so that $m|a$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/228934",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
} |
Evaluating a double integral: $\iint \exp(\sqrt{x^2+y^2})\:dx\:dy$? How to evaluate the following integral? $$\iint \exp\left(\sqrt{x^2+y^2} \right)\:dx\:dy$$
I'm trying to integrate this using substitution and integration by parts but I keep getting stuck.
| If you switch to polar coordinates, you end up integrating $re^r \,dr \,d\theta$, which you should be able to integrate over your domain by doing the $r$ integral first (via integration by parts).
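For a concrete instance (my addition): over the unit disk, integration by parts gives $\int r e^r\,dr = (r-1)e^r$, so the double integral is $2\pi\big[(r-1)e^r\big]_0^1 = 2\pi$. A quick numerical sanity check of the parts step:

```python
import math

# Integration by parts: ∫ r e^r dr = (r - 1) e^r + C
F = lambda r: (r - 1) * math.exp(r)

# Midpoint-rule approximation of ∫_0^1 r e^r dr
n = 100_000
approx = sum((k + 0.5) / n * math.exp((k + 0.5) / n) for k in range(n)) / n

exact = F(1) - F(0)                # = 0 - (-1) = 1
print(exact, abs(approx - exact))  # 1.0 and a tiny error

# Over the unit disk, the double integral is then 2π · 1
disk_integral = 2 * math.pi * exact
```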
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/228995",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Non-closed compact subspace of a non-hausdorff space I have a topology question which is:
Give an example of a topological (non-Hausdorff) space X and a a non-closed compact subspace.
I've been thinking about it for a while but I'm not really getting anywhere. I've also realised that apart from metric spaces I don't really have a large pool of topological spaces to think about (and a metric space won't do here, because then it would be Hausdorff and any compact set of a metric space is closed).
Are there certain topological spaces that I should know about (i.e. some standard and non-standard examples)?
Thanks very much for any help
| Here are some examples that work nicely.
*
*The indiscrete topology on any set with more than one point: every non-empty, proper subset is compact but not closed. (The indiscrete topology isn’t good for much, but as Qiaochu said in the comments, it’s a nice, simple example when it actually works.)
*In the line with two origins, the set $[-1,0)\cup\{a\}\cup(0,1]$ is compact but not closed: $b$ is in its closure.
*The set $\{1\}$ in the Sierpiński space is compact but not closed.
*For each $n\in\Bbb N$ let $V_n=\{k\in\Bbb N:k<n\}$; then $\{V_n:n\in\Bbb N\}\cup\{\Bbb N\}$ is a topology on $\Bbb N$, in which every non-empty finite set is compact but not closed.
*Let $\tau$ be the cofinite topology on an infinite set $X$. Then every subset of $X$ is compact, but the only closed subsets are $X$ and the finite subsets of $X$.
In terms of the common separation axioms: (1) is not even $T_0$; (2) and (5) are $T_1$; and (3) and (4) are $T_0$ but not $T_1$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/229064",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 5,
"answer_id": 1
} |
Maps of maximal ideals Prove that $\mu:k^n\rightarrow \text{maximal ideal}\in k[x_1,\ldots,x_n]$ by $$(a_1,\ldots,a_n)\rightarrow (x_1-a_1,\ldots,x_n-a_n)$$ is an injection, and given an example of a field $k$ for which $\mu$ is not a surjection.
The first part is clear, but the second part needs a field $k$ such that not all maximal ideals of the polynomial ring are of the form $(x_1-a_1,\ldots,x_n-a_n)$. I am not sure how to find one, as I would obviously need a non-obvious ring epimorphism $k[x_1,\ldots,x_n]\rightarrow k$ such that the kernel is the maximal ideal. This question is quite elementary and I feel embarrassed to ask.
| At Julian's request I'm developing my old comment into an answer. Here is the result:
Given any non-algebraically-closed field $k$, the canonical map $$k\to \operatorname {Specmax}(k[x]):a\mapsto (x-a)$$ is not surjective.
Indeed, by hypothesis there exists an irreducible polynomial $p(x)\in k[x]$ of degree $\gt 1$.
This polynomial generates a maximal ideal $\mathfrak m=(p(x))$ which is not of the form $(x-a)$; in other words, it is not in the image of our displayed canonical map.
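A concrete instance of this (my addition, with the hypothetical choice $k=\mathbb F_3$): $p(x)=x^2+1$ has no root mod $3$, and a degree-$2$ polynomial over a field with no root is irreducible, so $(x^2+1)$ is a maximal ideal of $\mathbb F_3[x]$ not of the form $(x-a)$:

```python
p = 3
# x^2 + 1 evaluated at every element of F_3: a root would make it reducible
values = [(a * a + 1) % p for a in range(p)]
print(values)  # [1, 2, 2] — no zeros, hence no root in F_3
```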
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/229115",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Number of squares in a rectangle Given a rectangle of length a and width b (as shown in the figure), how many different squares of edge greater than 1 can be formed using the cells inside.
For example, if a = 2, b = 2, then the number of such squares is just 1.
| In an $n\times p$ rectangle, the number of rectangles that can be formed is $\frac{n(n+1)\,p(p+1)}{4}$ and the number of squares that can be formed is $\sum_{r=1}^{\min(n,p)} (n+1-r)(p+1-r)$; for squares of edge greater than $1$, start the sum at $r=2$.
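A brute-force cross-check (my addition; the helper names are mine): enumerate every admissible square placement and compare with the closed form $\sum_r (n+1-r)(p+1-r)$, and likewise check the rectangle count $\binom{n+1}{2}\binom{p+1}{2}$:

```python
def squares(n, p, min_edge=1):
    """Closed form: sum over edge lengths r of the (n+1-r)(p+1-r) placements."""
    return sum((n + 1 - r) * (p + 1 - r)
               for r in range(min_edge, min(n, p) + 1))

def squares_brute(n, p, min_edge=1):
    """Enumerate every top-left corner of every admissible square."""
    return sum(1
               for r in range(min_edge, min(n, p) + 1)
               for x in range(n + 1 - r)
               for y in range(p + 1 - r))

def rectangles(n, p):
    """Choose 2 of the n+1 horizontal and 2 of the p+1 vertical grid lines."""
    return n * (n + 1) * p * (p + 1) // 4

print(squares(2, 2), squares(2, 2, min_edge=2))  # 5 1
```

For the question's $2\times 2$ example this gives $5$ squares in total and exactly $1$ with edge greater than $1$, matching the stated answer.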
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/229182",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 2
} |
For which values of $\alpha \in \mathbb R$ is the following system of linear equations solvable? The problem I was given:
Calculate the value of the following determinant:
$\left|
\begin{array}{ccc}
\alpha & 1 & \alpha^2 & -\alpha\\
1 & \alpha & 1 & 1\\
1 & \alpha^2 & 2\alpha & 2\alpha\\
1 & 1 & \alpha^2 & -\alpha
\end{array}
\right|$
For which values of $\alpha \in \mathbb R$ is the following system of linear equations solvable?
$\begin{array}{lcl}
\alpha x_1 & + & x_2 & + & \alpha^2 x_3 & = & -\alpha\\
x_1 & + & \alpha x_2 & + & x_3 & = & 1\\
x_1 & + & \alpha^2 x_2 & + & 2\alpha x_3 & = & 2\alpha\\
x_1 & + & x_2 & + & \alpha^2 x_3 & = & -\alpha\\
\end{array}$
I got as far as finding the determinant, and then I got stuck.
So I solved the determinant like this:
$\left|
\begin{array}{ccc}
\alpha & 1 & \alpha^2 & -\alpha\\
1 & \alpha & 1 & 1\\
1 & \alpha^2 & 2\alpha & 2\alpha\\
1 & 1 & \alpha^2 & -\alpha
\end{array}
\right|$ =
$\left|
\begin{array}{ccc}
\alpha - 1 & 0 & 0 & 0\\
1 & \alpha & 1 & 1\\
1 & \alpha^2 & 2\alpha & 2\alpha\\
1 & 1 & \alpha^2 & -\alpha
\end{array}
\right|$ =
$(\alpha - 1)\left|
\begin{array}{ccc}
\alpha & 1 & 1\\
\alpha^2 & 2\alpha & 2\alpha \\
1 & \alpha^2 & -\alpha
\end{array}
\right|$ =
$(\alpha - 1)\left|
\begin{array}{ccc}
\alpha & 1 & 0\\
\alpha^2 & 2\alpha & 0 \\
1 & \alpha^2 & -\alpha - \alpha^2
\end{array}
\right|$ = $-\alpha^3(\alpha - 1) (1 + \alpha)$
However, now I haven't got a clue on solving the system of linear equations... It's got to do with the fact that the equations look like the determinant I calculated before, but I don't know how to connect those two.
Thanks in advance for any help. (:
| Let me first illustrate an alternate approach. You're looking at $$\left[\begin{array}{ccc}
\alpha & 1 & \alpha^2\\
1 & \alpha & 1\\
1 & \alpha^2 & 2\alpha\\
1 & 1 & \alpha^2
\end{array}\right]\left[\begin{array}{c} x_1\\ x_2\\ x_3\end{array}\right]=\left[\begin{array}{c} -\alpha\\ 1\\ 2\alpha\\ -\alpha\end{array}\right].$$ We can use row reduction on the augmented matrix $$\left[\begin{array}{ccc|c}
\alpha & 1 & \alpha^2 & -\alpha\\
1 & \alpha & 1 & 1\\
1 & \alpha^2 & 2\alpha & 2\alpha\\
1 & 1 & \alpha^2 & -\alpha
\end{array}\right].$$ In particular, for the system to be solvable, it is necessary and sufficient that none of the rows in the reduced matrix is all $0$'s except for in the last column. Subtract the bottom row from the other rows, yielding $$\left[\begin{array}{ccc|c}
\alpha-1 & 0 & 0 & 0\\
0 & \alpha-1 & 1-\alpha^2 & 1+\alpha\\
0 & \alpha^2-1 & 2\alpha-\alpha^2 & 3\alpha\\
1 & 1 & \alpha^2 & -\alpha
\end{array}\right].$$
It's clear then that if $\alpha=1$, the second row has all $0$s except in the last column, so $\alpha=1$ doesn't give us a solvable system. Suppose that $\alpha\neq 1$, multiply the top row by $\frac1{\alpha-1}$, and subtract the new top row from the bottom row, giving us $$\left[\begin{array}{ccc|c}
1 & 0 & 0 & 0\\
0 & \alpha-1 & 1-\alpha^2 & 1+\alpha\\
0 & \alpha^2-1 & 2\alpha-\alpha^2 & 3\alpha\\
0 & 1 & \alpha^2 & -\alpha
\end{array}\right].$$
Swap the second and fourth rows and add the new second row to the last two rows, giving us $$\left[\begin{array}{ccc|c}
1 & 0 & 0 & 0\\
0 & 1 & \alpha^2 & -\alpha\\
0 & \alpha^2 & 2\alpha & 2\alpha\\
0 & \alpha & 1 & 1
\end{array}\right],$$ whence subtracting $\alpha$ times the fourth row from the third row gives us $$\left[\begin{array}{ccc|c}
1 & 0 & 0 & 0\\
0 & 1 & \alpha^2 & -\alpha\\
0 & 0 & \alpha & \alpha\\
0 & \alpha & 1 & 1
\end{array}\right].$$
Note that $\alpha=0$ readily gives us the solution $x_1=x_2=0$, $x_3=1$. Assume that $\alpha\neq 0,$ multiply the third row by $\frac1\alpha$, subtract the new third row from the fourth row, and subtract $\alpha^2$ times the new third row from the second row, yielding $$\left[\begin{array}{ccc|c}
1 & 0 & 0 & 0\\
0 & 1 & 0 & -\alpha^2-\alpha\\
0 & 0 & 1 & 1\\
0 & \alpha & 0 & 0
\end{array}\right],$$ whence subtracting $\alpha$ times the second row from the fourth row yields $$\left[\begin{array}{ccc|c}
1 & 0 & 0 & 0\\
0 & 1 & 0 & -\alpha^2-\alpha\\
0 & 0 & 1 & 1\\
0 & 0 & 0 & \alpha^3+\alpha^2
\end{array}\right].$$ The bottom right entry has to be $0$, so since $\alpha\neq 0$ by assumption, we need $\alpha=-1$, giving us $$\left[\begin{array}{ccc|c}
1 & 0 & 0 & 0\\
0 & 1 & 0 & 0\\
0 & 0 & 1 & 1\\
0 & 0 & 0 & 0
\end{array}\right].$$
Hence, the two values of $\alpha$ that give the system a solution are $\alpha=0$ and $\alpha=-1$, and in both cases, the system has solution $x_1=x_2=0$, $x_3=1$. (I think all my calculations are correct, but I'd recommend double-checking them.)
The major upside of the determinant approach is that it saves you time and effort, since you've already calculated it. If we assume that $\alpha$ is a constant that gives us a solution, then since we're dealing with $4$ equations in only $3$ variables, we have to have at least one of the rows in the reduced echelon form of the augmented matrix be all $0$s--we simply don't have enough degrees of freedom otherwise. The determinant of the reduced matrix will then be $0$, and since we obtain it by invertible row operations on the original matrix, then the determinant of the original matrix must also be $0$.
By your previous work, then, $-\alpha^3(\alpha-1)(1+\alpha)=0$, so the only possible values of $\alpha$ that can give us a solvable system are $\alpha=0$, $\alpha=-1$, and $\alpha=1$. We simply check the system in each case to see if it actually is solvable. If $\alpha=0$, we readily get $x_1=x_2=0$, $x_3=1$ as the unique solution; similarly for $\alpha=-1$. However, if we put $\alpha=1$, then the second equation becomes $$x_1+x_2+x_3=1,$$ but the fourth equation becomes $$x_1+x_2+x_3=-1,$$ so $\alpha=1$ does not give us a solvable system.
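As a numerical cross-check of both claims (my addition; `det`, `A4` and `residuals` are names I made up): the determinant matches $-\alpha^3(\alpha-1)(1+\alpha)$ for a range of integer $\alpha$, and $(x_1,x_2,x_3)=(0,0,1)$ satisfies all four equations exactly when $\alpha=0$ or $\alpha=-1$:

```python
def det(M):
    """Determinant by Laplace expansion along the first row."""
    if len(M) == 1:
        return M[0][0]
    total = 0
    for j, entry in enumerate(M[0]):
        minor = [row[:j] + row[j + 1:] for row in M[1:]]
        total += (-1) ** j * entry * det(minor)
    return total

def A4(a):
    """The 4×4 matrix from the determinant part of the problem."""
    return [[a, 1, a * a, -a],
            [1, a, 1, 1],
            [1, a * a, 2 * a, 2 * a],
            [1, 1, a * a, -a]]

# det should equal -a^3 (a - 1)(1 + a) for every a (exact integer arithmetic)
checks = [det(A4(a)) == -a**3 * (a - 1) * (1 + a) for a in range(-5, 6)]

def residuals(a, x1, x2, x3):
    """Left side minus right side of the four equations of the system."""
    return [a * x1 + x2 + a * a * x3 + a,
            x1 + a * x2 + x3 - 1,
            x1 + a * a * x2 + 2 * a * x3 - 2 * a,
            x1 + x2 + a * a * x3 + a]

# both solvable cases share the solution (0, 0, 1)
print(all(checks), residuals(0, 0, 0, 1), residuals(-1, 0, 0, 1))
```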
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/229254",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 1,
"answer_id": 0
} |
Combinatorics: When To Use Different Counting Techniques I am studying combinatorics, and at the moment I am having trouble with the logic behind more complicated counting problems. Given the following list of counting techniques, in which cases should they be used (ideally with a simple, related example):
*
*Repeated multiplication (such as $10 \times 9\times 8\times 7$, but not down to $1$)
*Addition
*Exponents
*Combination of the above ($2^6 + 2^5 + 2^4 + 2^3 + 2^2 + 2^1 + 2^0$)
*Factorials
*Permutations
*Combinations
*A case like this: $2^{10} \times \left({6 \choose 2} + {6 \choose 1} + {6 \choose 0}\right)$
*A case like this: $13 \times {4 \choose 3} \times {4 \choose 2} \times 12$
*A case like this: $13 \times {4 \choose 3} \times {4 \choose 2} \times {4 \choose 1}$
Sorry for the crash list of questions, but I am not clear on these issues, especially not good when I have a test in a few days!
Thank you for your time!
| Let me address some of the more general techniques on your list, since the specific ones just appear to be combinations of the general ones.
Repeated Multiplication: Also called "falling factorial", use this technique when you are choosing items from a list where order matters. For example, if you have ten flowers and you want to plant three of them in a row (where you count different orderings of the flowers), you can do this in $10 \cdot 9 \cdot 8$ ways.
Addition: Use this to combine the results of disjoint cases. For example, if you can have three different cakes or four different kinds of ice cream (but not both), then there you have $3 + 4$ choices of dessert.
Exponents: As with multiplication, but the number of choices does not decrease. For example, if you had ample supply of each of your ten kinds of flowers, you could plant $10 \cdot 10 \cdot 10$ different ways (because you can reuse the same kind of flower).
Factorials/Permutations: As with the first example, except you use all ten flowers rather than just three.
Combinations: Use this when you want to select a group of items from a larger group, but order does not matter. For example, if you have five different marbles and want to grab three of them to put in your pocket (so the order in which you choose them does not matter), this can be done in $\binom{5}{3}$ ways.
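For the examples above, these techniques correspond directly to standard library helpers (my addition; requires Python 3.8+ for `math.perm`/`math.comb`):

```python
import math

print(math.perm(10, 3))    # repeated multiplication 10·9·8 = 720 (order matters)
print(10 ** 3)             # exponent: 1000 plantings with repetition allowed
print(math.factorial(10))  # permutations of all ten flowers = 3628800
print(math.comb(5, 3))     # combinations: 10 ways to pocket 3 of 5 marbles
print(3 + 4)               # addition for disjoint cases: 7 dessert choices
```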
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/229316",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Solve $2a + 5b = 20$ Is this equation solvable? It seems like you should be able to get a right number!
If this is solvable can you tell me step by step on how you solved it.
$$\begin{align}
{2a + 5b} & = {20}
\end{align}$$
My thinking process:
$$\begin{align}
{2a + 5b} & = {20} & {2a + 5b} & = {20} \\
{0a + 5b} & = {20} & {a + 0b} & = {20} \\
{0a + b} & = {4} & {a + 0b} & = {10} \\
{0a + b} & = {4/2} & {a + 0b} & = {10/2} \\
{0a + b} & = {2} & {a + 0b} & = {5} \\
\end{align}$$
The problem comes out to equal:
$$\begin{align}
{2(5) + 5(2)} & = {20} \\
{10 + 10} & = {20} \\
{20} & = {20}
\end{align}$$
Since there are two different variables, could it be that it cannot be solved with the right answer, but only with "an answer"?
What do you guys think?
| Generally one can use the Extended Euclidean algorithm, but that's overkill here. First note that since $\rm\,2a+5b = 20\:$ we see $\rm\,b\,$ is even, say $\rm\:b = 2n,\:$ hence dividing by $\,2\,$ yields $\rm\:a = 10-5n.$
Remark $\ $ The solution $\rm\:(a,b) = (10-5n,2n) = (10,0) + (-5,2)\,n\:$ is the (obvious) particular solution $(10,0)\,$ summed with the general solution $\rm\,(-5,2)\,n\,$ of the associated homogeneous equation $\rm\,2a+5b = 0,\:$ i.e. the general form of a solution of a nonhomogeneous linear equation.
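The parametrization in the remark can be checked against brute force over the natural numbers (a small sketch I'm adding):

```python
# all natural-number solutions of 2a + 5b = 20
brute = {(a, b) for a in range(11) for b in range(5) if 2 * a + 5 * b == 20}

# the parametrization (a, b) = (10 - 5n, 2n), kept non-negative (n = 0, 1, 2)
param = {(10 - 5 * n, 2 * n) for n in range(3)}

print(sorted(brute))  # [(0, 4), (5, 2), (10, 0)]
```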
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/229370",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 3
} |
Strict Inequality for Fatou Given $f_n(x)=(n+1)x^n; x\in [0,1]$
I want to show $\int_{[0,1]}f<\liminf\int_{[0,1]}f_n$, where $f_n$ converges pointwise to $f$ almost everywhere on $[0,1]$.
I have found that $\liminf\int f_n = \int f +\liminf\int|f-f_n|$, but I'm not sure how to use this, and I don't even know what $f_n$ converges to here. Can someone hint me in the right direction?
| HINT
Consider $a \in [0,1)$. The main crux is to compute $$\lim_{n \to \infty} (n+1)a^n$$
To compute the limit note that $a = \dfrac1{1+b}$ for some $b > 0$.
Hence, $$a^n = \dfrac1{(1+b)^n} < \dfrac1{\dfrac{n(n-1)}2 b^2}\,\,\,\,\,\,\,\,\, \text{(Why? Hint: Binomial theorem)}$$ Can you now compute $\lim_{n \to \infty} (n+1)a^n$?
HINT: $$\lim_{n \to \infty} (n+1)a^n < \lim_{n \to \infty} \dfrac{2n+2}{n(n-1) b^2} = ?$$
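To see why Fatou's inequality is strict here (my addition): $\int_0^1(n+1)x^n\,dx=1$ for every $n$, while the hint shows $(n+1)a^n\to 0$ for each fixed $a\in[0,1)$, so $f=0$ a.e. and $\int f = 0 < 1 = \liminf\int f_n$. A quick numeric check of both facts:

```python
def integral_fn(n, steps=50_000):
    """Midpoint-rule approximation of ∫_0^1 (n+1) x^n dx (exactly 1 by calculus)."""
    return sum((n + 1) * ((k + 0.5) / steps) ** n for k in range(steps)) / steps

vals = [integral_fn(n) for n in range(1, 8)]
tail = 1001 * 0.9 ** 1000   # (n+1) a^n at n = 1000, a = 0.9: essentially 0
print(vals[0], tail)        # first value near 1, tail near 0
```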
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/229464",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
$G$ finite group, $H \trianglelefteq G$, $\vert H \vert = p$ prime, show $G = HC_G(a)$ $a \in H$ Let $G$ be a finite group. $H \trianglelefteq G$ with $\vert H \vert = p$ the smallest prime dividing $\vert G \vert$. Show $G = HC_G(a)$ with $e \neq a \in H$. $C_G(a)$ is the Centralizer of $a$ in $G$.
To start it off, I know $HC_G(a)\leq G$ by normality of $H$ and subgroup property of $C_G(a)$. So I made the observation that
$\begin{align*} \vert HC_G(a) \vert &= \frac{\vert H \vert \vert C_G(a) \vert}{\vert H \cap C_G(a) \vert}\\&=\frac{\vert H \vert \vert C_G(a) \vert}{\vert C_H(a)\vert}\end{align*}$
But, from here on I never reach the result I'm looking for. Any help would be greatly appreciated!
Note: I posted a similar question earlier, except that one had the index of $H$ being prime, this has the order of $H$ being prime: Link
| Since $N_G(H)/C_G(H)$ injects in $Aut(H)\cong C_{p-1}$ and $p$ is the smallest prime dividing $|G|$, it follows that $N_G(H)=C_G(H)$. But $H$ is normal, so $G=N_G(H)$ and we conclude that $H \subseteq Z(G)$. In particular $G=C_G(a)$ for every $a \in H$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/229543",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Prove that $A=\left(\begin{array}{ccc}1&2&3\\4&5&6\\7&8&9\end{array}\right)$ is not invertible $$A=\left(\begin{array}{ccc}1&2&3\\4&5&6\\7&8&9\end{array}\right)$$
I don't know how to start. Will be grateful for a clue.
Edit: Matrix ranks and Det have not yet been presented in the material.
| Note that $L_3-L_2=L_2-L_1$. What does that imply about the rank of $A$?
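One way to make the hint concrete (my addition): the relation $L_1-2L_2+L_3=0$ also holds for the columns of this particular matrix, so $A$ sends the nonzero vector $(1,-2,1)^T$ to $0$, and an invertible matrix can never do that:

```python
A = [[1, 2, 3],
     [4, 5, 6],
     [7, 8, 9]]
v = [1, -2, 1]

# matrix–vector product A·v, computed row by row
Av = [sum(row[j] * v[j] for j in range(3)) for row in A]
print(Av)  # [0, 0, 0] — A maps a nonzero vector to 0, so A is not invertible
```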
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/229610",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 6,
"answer_id": 3
} |
Finding common terms of two arithmetic sequences using Extended Euclidean algorithm I have a problem which could be simplified as: there are two arithmetic sequences, a and b. Those can be written as
a=a1+m*d1
b=b1+n*d2
I need to find the lowest term, appearing in both sequences. It is possible to do by brute force, but that approach is not good enough. I was given a hint - extended Euclidean algorithm can be used to solve this. However, after several frustrating hours I cannot figure it out.
For example:
a1 = 2
d1 = 15
b1 = 67
d2 = 80
That gives these sequences
2 17 32 47 62 77 ... 227 ...
67 147 227
^
Needed term
Could you somehow point me to how to use the algorithm for this problem? It's essentially finding the lowest common multiple, only with an "offset"
Thank you
| Your equations:
$$a(m) = a_1 + m d_1$$
$$b(n) = b_1 + n d_2 $$
You want $a(m) = b(n)$ or $a(m)-b(n)=0$, so it may be written as
$$(a_1-b_1) + m(d_1) +n(-d_2) = 0$$ or $$ m(d_1) +n(-d_2) = (b_1-a_1) $$
You want $m$ and $n$ minimal, and that solves the problem. This is of course very similar to the EGCD, except that the desired value is $b_1 - a_1$ instead of $1$. EGCD solves it for the value $1$ (or the gcd of $d_1$ and $d_2$).
This is actually a lattice reduction problem since you are interested in the minimum integer values for $m$ and $n$. That is an involved problem, but this is the lowest dimension and thus is relatively easy.
The method I have written in the past used a matrix form to solve it. I started with
the matrix $$\pmatrix{1 & 0 & d_1 \\ 0 & 1 & -d_2}$$
which represents the equations
\begin{align}
(1)d_1 + (0)(-d_2) = & \phantom{-} d_1 \\
(0)d_1 + (1)(-d_2) = & -d_2 \\
\end{align}
Each row of the matrix gives valid numbers for your equation, the first element in the row is the number for $m$, the second is for $n$ and the third is the value of your equation for those $m$ and $n$. Now if you combine the rows (such as row one equals row one minus row two) then you still get valid numbers. The goal then is to find a combination resulting in the desired value of $b_1 - a_1$ in the final column.
If you use EGCD you can start with this matrix:
$$\pmatrix{d_2 \over g & d_1 \over g & 0 \\ u & -v & g}$$
where $u$, $v$, and $g$ are the outputs of the EGCD (with $g$ the gcd) since EGCD gives you $ud_1 + vd_2 = g$
In your example you would have:
$$\pmatrix{16 & 3 & 0 \\ -5 & -1 & 5}$$
From there you can scale the last row to get $kg = (b_1 - a_1)$ for some integer $k$, then to find the minimal values use the first row to reduce, since the zero in the first row will not affect the result.
Again, for your example, $k=13$ which gives
$$\pmatrix{16 & 3 & 0 \\ -65 & -13 & 65}$$
Adding $5$ times the first row gives
$$\pmatrix{15 & 2 & 65}$$
Which represents the $16$th and $3$rd terms (count starts at $0$) respectively.
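Putting the pieces together (my addition — `egcd` and `first_common_term` are hypothetical helper names, and the sketch assumes $d_1, d_2 > 0$); it reproduces the worked example's $227$:

```python
from math import ceil

def egcd(a, b):
    """Return (g, u, v) with u*a + v*b = g = gcd(a, b)."""
    if b == 0:
        return a, 1, 0
    g, x, y = egcd(b, a % b)
    return g, y, x - (a // b) * y

def first_common_term(a1, d1, b1, d2):
    """Smallest value in both {a1 + m*d1 : m >= 0} and {b1 + n*d2 : n >= 0}."""
    g, u, v = egcd(d1, d2)                       # u*d1 + v*d2 = g
    diff = b1 - a1
    if diff % g:
        return None                              # the sequences never meet
    m0, n0 = u * (diff // g), -v * (diff // g)   # m0*d1 - n0*d2 = diff
    sm, sn = d2 // g, d1 // g                    # steps of the general solution
    k = max(ceil(-m0 / sm), ceil(-n0 / sn))      # smallest k with m, n >= 0
    return a1 + (m0 + k * sm) * d1

print(first_common_term(2, 15, 67, 80))  # 227
```

Shifting the particular solution by multiples of $(d_2/g,\,d_1/g)$ is exactly the "add the first matrix row" step in the answer above.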
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/229700",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
} |
If $d^2|p^{11}$ where $p$ is a prime, explain why $p|\frac{p^{11}}{d^2}$. If $d^2|p^{11}$ where $p$ is a prime, explain why $p|\frac{p^{11}}{d^2}$.
I'm not sure how to prove this by way other than examples. I only tried a few examples, and from what I could tell $d=p^2$. Is that always the case?
Say $p=3$ and $d=9$. So, $9^2|3^{11}$ because $\frac{3^{11}}{9^2}=2187$. Therefore, $3|\frac{3^{11}}{9^2}$ because $\frac{2187}{3}=729$. Is proof by example satisfactory?
I know now that "proof by example" is only satisfactory if it "knocks out" every possibility.
The proof I am trying to form (thanks to the answers below):
Any factor of $p^{11}$ must be of the form $p^{k}$ for some $k$.
If the factor has to be a square, then it must then be of the form $p^{2k}$, because it must be an even power.
Now, we can show that $\rm\:p^{11}\! = c\,d^2\Rightarrow\:p\:|\:c\ (= \frac{p^{11}}{d^2})\:$ for some integer $c$.
I obviously see how it was achieved that $c=\frac{p^{11}}{d^2}$, but I don't see how what has been said shows that $p|\frac{p^{11}}{d^2}$.
| Hint $\ $ It suffices to show $\rm\:p^{11}\! = c\,d^2\Rightarrow\:p\:|\:c\ (= p^{11}\!/d^2).\:$ We do so by comparing the parity of the exponents of $\rm\:p\:$ on both sides of the first equation. Let $\rm\:\nu(n) = $ the exponent of $\rm\,p\,$ in the unique prime factorization of $\rm\,n.\:$ By uniqueness $\rm\:\color{#C00}{\nu(m\,n) = \nu(m)+\nu(n)}\:$ for all integers $\rm\:m,n\ne 0.\:$ Thus
$$\rm \color{#C00}{applying\,\ \nu}\ \ to\ \ p^{11}\! =\, c\,d^2\ \Rightarrow\ 11 = \nu(c) + 2\, \nu(d)$$
Therefore $\rm\:\nu(c)\:$ is odd, hence $\rm\:\nu(c) \ge 1,\:$ i.e. $\rm\:p\mid c.\ \ $ QED
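The parity argument can be checked exhaustively for, say, $p=3$ (my addition): every $d$ with $d^2\mid 3^{11}$ is a power $3^k$ with $2k\le 11$, and the cofactor $3^{11}/d^2$ is then always divisible by $3$:

```python
from math import isqrt

p, N = 3, 3 ** 11
# every d whose square divides N; d must itself be a power of 3
ds = [d for d in range(1, isqrt(N) + 1) if N % (d * d) == 0]
print(ds)                                        # [1, 3, 9, 27, 81, 243]
print(all((N // (d * d)) % p == 0 for d in ds))  # True
```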
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/229771",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 6,
"answer_id": 2
} |
Net Present Worth Calculation (Economic Equivalence) I'm currently doing some work involving net present worth analyses, and I'm really struggling with calculations that involve interest and inflation, such as the question below. I feel that if anyone can set me on the right track, and once I've worked through the full method for doing one of these calculations, I should be able to do them all. Is there any chance that anyone may be able to guide me through the process of doing the question below, or give me any pointers?
Thanks very much in advance!
You win the lottery. The prize can either be awarded as USD1,000,000 paid out in full today, or yearly instalments paid out at the end of each of the next 10 years. The yearly instalments are USD100,000 at the end of the first year, increasing each subsequent year by USD5,000; in other words you get USD100,000 at the end of the first year, USD105,000 at the end of the second year, USD110,000 at the end of the third year, and so on.
After some economic research you determine that inflation is expected to be 5% for the next 5 years and 4% for the subsequent 5 years. You also discover that real interest rates are expected to be constant at 2.5% for the next 10 years.
Using net present worth analysis, which prize do you choose?
Further, given the inflation figures above, what will the real value of the prize of USD1,000,000 be at the end of 10 years?
| Using the discount rate (interest rate), calculate the present value of each payment. Let A = 100,000 (the first payment), a = 5,000 (the annual increase), and r = 2.5% (the interest rate) to simplify the formulae.
$$
PV_1 = A/(1+r) \\
PV_2 = (A+a)/(1+r)^2 \\
... \\
PV_{10} = (A+9a)/(1+r)^{10}
$$
The total present value (PV) is just the sum of individual payment present values. To compute the expected real value (V) apply the expected inflation rates ($i_1 \dots i_{10}$) on this present value, i.e. in year one
$V_1 = PV/(1+i_1), V_2=V_1/(1+i_2),$ etc. You can do the simplifications since $i_1 = \dots = i_5$ and write one equation. However, my suggestion is use a spreadsheet and do the computations individually to better comprehend the subject.
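Carrying this method through numerically (my addition, following the answer's prescription of discounting at the $2.5\%$ rate; variable names are mine):

```python
payments = [100_000 + 5_000 * (k - 1) for k in range(1, 11)]   # 100k .. 145k
r = 0.025
pv = sum(p / (1 + r) ** k for k, p in enumerate(payments, start=1))

# real value of the USD1,000,000 lump sum after 5 years at 5% and 5 at 4%
real_lump = 1_000_000 / (1.05 ** 5 * 1.04 ** 5)

choice = "instalments" if pv > 1_000_000 else "lump sum"
print(round(pv), round(real_lump), choice)  # PV a bit above 1 million
```

Under these assumptions the present value of the instalments comes out somewhat above USD1,000,000, so the instalments are preferred, and the lump sum is worth roughly USD640,000–650,000 in real terms after ten years.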
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/229888",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Noetherian rings and modules A ring is left-Noetherian if every left ideal of R is finitely generated. Suppose R is a left-Noetherian ring and M is a finitely generated R-module. Show that M is a Noetherian R-module.
I'm thinking we want to proceed by contradiction and try to produce an infinitely generated ideal, but I'm having trouble coming up with what such an ideal will look like.
| If $\{x_i\mid 1\leq i\leq n\}$ is a set of generators for $M$, then the obvious map $\phi$ from $R^n$ to $M$ is a surjection. Since $R^n$ is a Noetherian left module, so is $R^n/\ker(\phi)\cong M$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/231058",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Does $f(x)$ is continuous and $f = 0$ a.e. imply $f=0$ everywhere? I wanna prove that
"if $f: \mathbb{R}^n \to \mathbb{R}$ is continuous and satisfies $f=0$ almost everywhere (in the sense of Lebesgue measure), then, $f=0$ everywhere."
I am confident that the statement is true, but stuck with the proof. Also, is the statement true if the domain $\mathbb{R}^n$ is restricted to $\Omega \subseteq \mathbb{R}^n$ that contains a neighborhood of the origin "$0$"?
| A set of measure zero has dense complement. So if a continuous function is zero on a set of full measure, it is identically zero.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/231103",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "18",
"answer_count": 4,
"answer_id": 3
} |
Find a basis of $\ker T$ and $\dim (\mathrm{im}(T))$ of a linear map from polynomials to $\mathbb{R}^2$ $T: P_{2} \rightarrow \mathbb{R}^2: T(a + bx + cx^2) = (a-b,b-c)$
Find basis for $\ker T$ and $\dim(\mathrm{im}(T))$.
This is a problem in my textbook, and it looks strange to me because it goes from polynomials to $\mathbb{R}^2$. Before this, I only knew maps $\mathbb{R}^m \rightarrow \mathbb{R}^n$.
Thanks :)
| You can treat the polynomial space $P_n$ as the space $R^{n+1}$.
That is, in your case the polynomial space is $P_2$ and it can be converted to $R^3$.
The logic is simple: each coefficient of the polynomial becomes a coordinate in $R^3$.
At the end of this conversion you'll get isomorphic spaces/subspaces.
In your case :
The polynomial $a + bx + cx^2$ can be converted to the triple $(c, b, a)$.
I chose the coefficients from the highest degree to the lowest. That matters only to set some ground rules, so you will know how to convert your triple back into a polynomial if you want to do so. *You can choose any other order.
Now, because these are isomorphic subspaces, you'll get the same kernel and the same image.
I hope it helps.
Guy
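A concrete check (my addition): in coordinates $(a,b,c)$ the kernel condition $a-b=0$, $b-c=0$ forces $a=b=c$, so $\ker T$ is spanned by the polynomial $1+x+x^2$, and by rank–nullity $\dim(\mathrm{im}(T))=3-1=2$:

```python
def T(a, b, c):
    """T(a + b x + c x^2) = (a - b, b - c)."""
    return (a - b, b - c)

# kernel candidate: a = b = c, i.e. the polynomial 1 + x + x^2
print(T(1, 1, 1))                # (0, 0)

# the image is all of R^2: these two outputs already span it
print(T(1, 0, 0), T(0, 0, -1))   # (1, 0) and (0, 1)
```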
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/231163",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
"8 Dice arranged as a Cube" Face-Sum Problem I found this here:
Sum Problem
Given eight dice. Build a $2\times 2\times2$ cube, so that the sum of the points on each side is the same.
$\hskip2.7in$
Here is one of 20 736 solutions with the sum 14.
You find more at the German magazine "Bild der Wissenschaft 3-1980".
Now I have three (Question 1 moved here) questions:
*
 * Is $14$ the only possible face sum? At least, in the example given, it seems to be related to the fact that on every face two dice-pairs show up, having $n$ and $7-n$ pips. Is this necessary? Sufficient it is...
*How do they get $20736$? This is the dimension of the related group and factors to $2^8\times 3^4$, the number of group elements, right?
i. I can get $2^3$, by the following: In the example given, you can split along the $xy$ ($yz,zx$) plane and then interchange the $2$ blocks of $4$ dice. Wlog, mirroring at $xy$ commutes with $yz$ (both just invert the $z$ resp. $x$ coordinate, right), so we get $2^3$ group elements. $$ $$
ii. The factor $3$ looks related to rotations throught the diagonals. But without my role playing set at hand, I can't work that out. $$ $$
iii. Would rolling the overall die around an axis also count, since back and front always shows a "rotated" pattern? This would give six $90^\circ$-rotations and three $180^\circ$-rotations, $9=3^2$ in total.
$$ \\ $$
Where do the missing $2^5\times 3^2$ come from?
*Is the reference given, online available?
EDIT
And to not make tehshrike sad again, here's the special question for $D4$:
What face sum is possible, so that the sum of the points on each side
is the same, when you pile up 4 D4's to a pyramid (plus the octahedron mentioned by Henning) and how many representations, would such a pyramid
have?
Thanks
| Regarding your reference request:
The site of the magazine offers many of their articles online starting from 1997, so you cannot obtain the 1980 edition online (although you can likely buy a used print version).
Most good libraries in German-speaking countries do have this magazine, so, depending on your country, you could go directly to the library, try to get an inter-library loan, or contact friends in German-speaking countries to scan the appropriate pages for you.
Of course, the article will be in German.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/231229",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 1,
"answer_id": 0
} |
Show $W^{1,q}_0(-1,1)\subset C([-1,1])$ I need to show that the space $W^{1,q}_0(-1,1)$ is a subset of the space $C([-1,1])$. How can I do this?
| If $q=+\infty$, and $\varphi_n$ are test functions such that $\lVert \varphi_n-u\rVert_{\infty}\to 0$, then we can find a set of measure $0$ such that $\sup_{x\in [-1,1]\setminus N}|\varphi_n(x)-u(x)|\to 0$, so $u$ is almost everywhere equal to a continuous function, and can be represented by it.
We assume $1\leqslant q<\infty$.
As $W_0^{1,q}(-1,1)$ consists of equivalence classes of functions and $C[-1,1]$ consists of functions, what we have to show is that each element of $W_0^{1,q}(-1,1)$ can be represented by a continuous function. First, for $\varphi$ a test function, we have
$$|\varphi(x)-\varphi(y)|\leqslant \left|\int_x^y|\varphi'(t)|\,dt\right|\leqslant |x-y|^{1-1/q}\lVert\varphi'\rVert_{L^q}.$$
Now, let $u\in W_0^{1,q}(-1,1)$. By definition, we can find a sequence $\{\varphi_k\}\subset D(-1,1)$ such that $\lim_{k\to+\infty}\lVert u-\varphi_k\rVert_q+\lVert u'-\varphi'_k\rVert_q=0.$
Up to a subsequence, we can assume that $\lim_{k\to+\infty}\varphi_k(x)=u(x)$ for almost every $x$. As for $k$ large enough,
$$|\varphi_k(x)-\varphi_k(y)|\leqslant |x-y|^{1-1/q}(\lVert u'\rVert_{L^q}+1),$$
we have for almost every $x, y\in (-1,1)$,
$$|u(x)-u(y)|\leqslant |x-y|^{1-1/q}(\lVert u'\rVert_{L^q}+1),$$
what we wanted (and even more, as $u$ is represented by a $\left(1-\frac 1q\right)$-Hölder continuous function, as noted by robjohn).
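As a concrete illustration (not part of the proof), the Hölder bound above can be spot-checked for a smooth function in $W^{1,2}_0(-1,1)$, say $u(x)=1-x^2$, whose choice here is arbitrary:

```python
import math
import random

# Spot-check |u(x) - u(y)| <= |x - y|^{1 - 1/q} * ||u'||_{L^q} for q = 2
# and u(x) = 1 - x^2, which vanishes at the endpoints of (-1, 1).
random.seed(0)
u = lambda x: 1 - x * x
q = 2
norm_du = math.sqrt(8 / 3)  # ||u'||_{L^2}^2 = int_{-1}^{1} (2x)^2 dx = 8/3

for _ in range(10_000):
    x, y = random.uniform(-1, 1), random.uniform(-1, 1)
    assert abs(u(x) - u(y)) <= abs(x - y) ** (1 - 1 / q) * norm_du + 1e-12
```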
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/231308",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
The degree of a polynomial which also has negative exponents. In theory, we define the degree of a polynomial as the highest exponent it holds.
However, when both negative and positive exponents are present in the expression, I want to know on what basis we define the degree. Is the degree defined by the exponent of highest magnitude?
For example in $x^{-4} + x^{3}$, is the degree $4$ or $3$?
| For the sake of completeness, I would like to add that this generalization of polynomials is called a Laurent polynomial. This set is denoted $R[x,x^{-1}]$.
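One common convention takes the degree of a Laurent polynomial to be its largest exponent (so $3$ in the example above, not $4$). A tiny dictionary-of-exponents sketch of that convention:

```python
# Represent a Laurent polynomial as {exponent: coefficient}.
f = {-4: 1, 3: 1}   # x^{-4} + x^{3}

deg = max(f)        # degree under the usual convention: largest exponent
low = min(f)        # the lowest exponent ("order" at 0)
assert (deg, low) == (3, -4)
```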
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/231357",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 3,
"answer_id": 1
} |
Leslie matrix stationary distribution Given a particular normalized Perron vector representing a discrete probability distribution, is it possible to derive some constraints on, or particular examples of, Leslie matrices having the given vector as their Perron vector?
There is a related question on math overflow.
| I have very little knowledge about demography. Yet, if the Leslie matrices you talk about are the ones described in this Wikipedia page, it seems that for any given $v=(v_0,v_1,\ldots,v_{\omega - 1})^T$, a corresponding Leslie matrix exists if $v_0>0$ and $v_0\ge v_1\ge\ldots\ge v_{\omega - 1}\ge0$.
For such a vector $v$, let $v_j$ be the smallest nonzero entry (i.e. $j$ is the largest index such that $v_j>0$). Define
$$
s_i = \begin{cases}\frac{v_{i+1}}{v_i}&\ \textrm{ if } v_i>0,\\
\textrm{any number } \in[0,1]&\ \textrm{ otherwise}.\end{cases}
$$
and let $f=(f_0,f_1,\ldots,f_{\omega-1})^T$ be any entrywise nonnegative vector such that
$$
f_0 + \sum_{i=1}^{\omega-1}s_0s_1\ldots s_{i-1}f_i = 1.
$$
Then the Euler–Lotka equation
$$
f_0 + \sum_{i=1}^{\omega-1}\frac{s_0s_1\ldots s_{i-1}f_i}{\lambda^i} = \lambda$$
is satisfied and hence $v$, up to a normalizing factor, is the stable age distribution or Perron eigenvector of the Leslie matrix
$$
L = \begin{bmatrix}
f_0 & f_1 & f_2 & f_3 & \ldots &f_{\omega - 1} \\
s_0 & 0 & 0 & 0 & \ldots & 0\\
0 & s_1 & 0 & 0 & \ldots & 0\\
0 & 0 & s_2 & 0 & \ldots & 0\\
0 & 0 & 0 & \ddots & \ldots & 0\\
0 & 0 & 0 & \ldots & s_{\omega - 2} & 0
\end{bmatrix}.
$$
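As a quick sanity check of this construction (a sketch with made-up numbers, not demographic data; NumPy is assumed available), the following builds $L$ from a chosen non-increasing positive vector $v$ and verifies $Lv=v$:

```python
import numpy as np

# Hypothetical stable age distribution: positive and non-increasing.
v = np.array([1.0, 0.8, 0.5, 0.2])
s = v[1:] / v[:-1]                 # survival rates s_0, ..., s_{omega-2}
cum = np.cumprod(s)                # s_0, s_0 s_1, s_0 s_1 s_2, ...

# Pick fertilities satisfying f_0 + sum_i (s_0...s_{i-1}) f_i = 1,
# e.g. put all reproduction in the first and last age classes.
f = np.zeros(len(v))
f[0] = 0.5
f[-1] = 0.5 / cum[-1]

L = np.zeros((len(v), len(v)))
L[0, :] = f                        # fertility row
for i, si in enumerate(s):
    L[i + 1, i] = si               # subdiagonal survival rates

# v is then a Perron eigenvector of L with eigenvalue lambda = 1.
assert np.allclose(L @ v, v)
```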
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/231428",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Infinitely many primes p that are not congruent to 1 mod 5 Argue that there are infinitely many primes p that are not congruent to 1 modulo 5.
I find this confusing. Is this saying $p_n \not\equiv 1 \pmod{5}$?
To start off I tried some examples.
$3 \not\equiv 1 \pmod{5}$
$5 \not\equiv 1 \pmod{5}$
$7 \not\equiv 1 \pmod{5}$
$11 \equiv 1 \pmod{5}$
$13 \not\equiv 1 \pmod{5}$
$17 \not\equiv 1 \pmod{5}$...
If this is what the question is asking I've come to the conclusion that this is true. Either way, I've got no clue how to write this as a proof.
| You can follow the Euclid proof that there are an infinite number of primes. Assume there are a finite number of primes not congruent to $1 \pmod 5$. Multiply them all except $2$ together to get $N \equiv 0 \pmod 5$. Consider the factors of $N+2$, which is odd and $\equiv 2 \pmod 5$. It cannot be divisible by any prime on the list, as it has remainder $2$ when divided by them. If it is prime, we have exhibited a prime $\not \equiv 1 \pmod 5$ that is not on the list. If it is not prime, it must have a factor that is $\not \equiv 1 \pmod 5$ because the product of primes $\equiv 1 \pmod 5$ is still $\equiv 1 \pmod 5$.
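The construction can be illustrated concretely (the list `ps` below is an arbitrary example, not a claim about the full list of such primes):

```python
# One step of the argument, with a sample list of odd primes that are
# not congruent to 1 mod 5, including 5 itself.
ps = [3, 5, 7, 13]
N = 1
for p in ps:
    N *= p                      # N ≡ 0 (mod 5) since 5 is a factor
M = N + 2                       # odd and ≡ 2 (mod 5)

assert M % 2 == 1 and M % 5 == 2
assert all(M % p == 2 for p in ps)   # no prime on the list divides M

def prime_factors(n):
    fs, d = [], 2
    while d * d <= n:
        while n % d == 0:
            fs.append(d)
            n //= d
        d += 1
    if n > 1:
        fs.append(n)
    return fs

# Some prime factor of M must be not ≡ 1 (mod 5), since a product of
# primes ≡ 1 (mod 5) would itself be ≡ 1 (mod 5), but M ≡ 2.
assert any(q % 5 != 1 for q in prime_factors(M))
```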
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/231534",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 1
} |
Showing $f:\mathbb{R^2} \to \mathbb{R}$, $f(x, y) = x$ is continuous Let $(x_n)$ be a sequence in $\mathbb{R^2}$ and $c \in \mathbb{R^2}$.
To show $f$ is continuous we want to show that if $(x_n) \to c$, then $f(x_n) \to f(c)$.
As $(x_n) \to c$ we can take $B_\epsilon(c)$, $\epsilon > 0$ such that when $n \geq$ some $N$, $x_n \in B_\epsilon(c)$.
As $x_n \in B_\epsilon(c)$ this implies that $f(x_n) \in f(B_\epsilon(c))$.
This holds for all $\epsilon$, so as $\epsilon \to 0$ and $B_\epsilon(c)$ becomes infinitely small, we can always find $n \geq$ some $N$ such that $x_n \in B_\epsilon(c)$ and $f(x_n) \in f(B_\epsilon(c))$.
Hence as $\epsilon \to 0$, $(x_n)$ clearly converges to $c$ and $f(x_n)$ clearly converges to $f(c)$.
Does that look ok?
| There's a bit of repetition when you say $x_n \in B_\epsilon(c) \implies f(x_n) \in f(B_\epsilon(c))$. While this is true as you define it, repeating it doesn't add to the proof. What you need to show is that the image of $B_\epsilon(c)$ is itself an open neighborhood of $f(c)$.
Another look, which uses uniform continuity...
For any open neighborhood $N_\delta(\mathbf{x})$ of $\mathbf{x} = (x_1,y_1) \in \Bbb R^2$ and $\mathbf{y} = (x_2,y_2) \in N_\delta(\mathbf{x})$ with the usual metric we have
$$\delta > d(\mathbf{x},\mathbf{y}) = \sqrt{(x_1-x_2)^2+(y_1-y_2)^2} \ge \sqrt{(x_1-x_2)^2} = |x_1-x_2| = d(f(\mathbf{x}),f(\mathbf{y})).$$
Thus, we can set $\delta = \varepsilon$ and obtain uniform continuity on any open subset of $\Bbb R^2$.
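The displayed bound says the projection is $1$-Lipschitz; a quick random spot-check of that fact (illustrative only):

```python
import math
import random

random.seed(0)
f = lambda p: p[0]                       # projection (x, y) -> x

for _ in range(10_000):
    p = (random.uniform(-5, 5), random.uniform(-5, 5))
    q = (random.uniform(-5, 5), random.uniform(-5, 5))
    # |f(p) - f(q)| <= d(p, q): the projection is 1-Lipschitz,
    # so delta = epsilon witnesses uniform continuity.
    assert abs(f(p) - f(q)) <= math.dist(p, q) + 1e-12
```

(`math.dist` requires Python 3.8+.)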
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/231593",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 2
} |
What does $ ( \nabla u) \circ \tau \cdot D \tau $ and $ \nabla u \cdot (D \tau_\gamma)^{-1} $ mean? To understand the question here
$\def\abs#1{\left|#1\right|}$
\begin{align*}
F(u_\gamma) &= F(u \circ \tau_\gamma^{-1})\\
&= \int_\Omega \abs{\nabla(u \circ \tau_\gamma^{-1})}^2\\
&= \int_\Omega \abs{(\nabla u) \circ \tau_\gamma^{-1} \cdot D\tau_\gamma^{-1}}^2\\
&= \int_{\tau_\gamma^{-1}\Omega} \abs{(\nabla u) \circ \tau_\gamma^{-1}\circ \tau_\gamma\cdot D\tau_\gamma^{-1}\circ \tau_\gamma}^2\abs{\det(D\tau_\gamma)}\\
&= \int_\Omega \abs{\nabla u\cdot (D\tau_\gamma)^{-1}}^2\abs{\det(D\tau_\gamma)}
\end{align*}
I know that by chain rule $ \cdots $ componentwise we have
$$ \partial_i ( u \circ \tau) = \sum_{j} (\partial_j u) \circ \tau \cdot \partial_i \tau_j. $$
Thus, $ \nabla ( u \circ\tau )= ( \nabla u) \circ \tau \cdot D \tau $. I'd like to understand this equality or this notation. I know that
\begin{equation}
\nabla u = (\partial_1 u, \partial_2 u, \cdots , \partial_n u)
\end{equation}
and I guess that
$$ D \tau = \left[
\begin{array}{cccc}
\partial_1 \tau_1 & \partial_2 \tau_1 & \cdots & \partial_n \tau_1\\
\partial_1 \tau_2 & \partial_2 \tau_2 & \cdots & \partial_n \tau_2\\
\vdots & \vdots & \ddots & \vdots\\
\partial_1 \tau_n & \partial_2 \tau_n & \cdots & \partial_n \tau_n\\
\end{array}
\right] $$
Then, what does $ ( \nabla u) \circ \tau \cdot D \tau$ mean? And what does $ \nabla u \cdot (D \tau_\gamma)^{-1} $ mean?
|
$\nabla ( u \circ\tau )= ( \nabla u) \circ \tau \cdot D \tau$. I'd like to understand this equality or this notation.
Think of the chain rule: derivative of composition is the product of derivatives. On the left, $u\circ \tau $ is composition (not Hadamard product, as suggested in the other answer). On the right, we have a product of $( \nabla u) \circ \tau$ (which is a vector) and $D \tau$ (which is a matrix); this is the usual application of matrix to a vector, except that the vector, being written as a row, appears to the left of the matrix. It is not necessary to use the dot here: $ (( \nabla u) \circ \tau ) D \tau$ would be better.
In the chain of computations in your question, the chain rule is applied to the composition of $u$ with $\tau_\gamma^{-1}$, which is why $\tau_\gamma^{-1}$ appears instead of $\tau$.
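To see the identity $\nabla(u\circ\tau) = ((\nabla u)\circ\tau)\, D\tau$ in action, here is an illustrative finite-difference check; the maps $u$ and $\tau$ below are arbitrary smooth choices, not from the question:

```python
import numpy as np

# Check grad(u ∘ tau)(x) = (grad u)(tau(x)) · Dtau(x) numerically
# for made-up smooth maps u: R^2 -> R and tau: R^2 -> R^2.
u = lambda p: p[0] ** 2 + 3 * p[0] * p[1]
grad_u = lambda p: np.array([2 * p[0] + 3 * p[1], 3 * p[0]])
tau = lambda p: np.array([p[0] + p[1] ** 2, np.sin(p[0])])
Dtau = lambda p: np.array([[1.0, 2 * p[1]],           # row j, column i = d_i tau_j
                           [np.cos(p[0]), 0.0]])

x = np.array([0.3, -1.2])
lhs = grad_u(tau(x)) @ Dtau(x)       # row vector applied to the Jacobian

# Central finite differences for the gradient of the composition.
h = 1e-6
comp = lambda p: u(tau(p))
fd = np.array([(comp(x + h * e) - comp(x - h * e)) / (2 * h) for e in np.eye(2)])
assert np.allclose(lhs, fd, atol=1e-5)
```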
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/231671",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
continued fraction expression for $\sqrt{2}$ in $\mathbb{Q_7}$ Hensel's lemma implies that $\sqrt{2}\in\mathbb{Q_7}$. Find a continued
fraction expression for $\sqrt{2}$ in $\mathbb{Q_7}$
| There's a bit of a problem with defining continued fractions in the $p$-adics. The idea for finding continued fractions in $\Bbb Z$ is that we subtract an integer $m$ from $\sqrt{n}$ such that $\left|\sqrt{n} - m\right| < 1$. We can find such an $m$ because $\Bbb R$ has a generalized version of the division algorithm: for any $r\in\Bbb R$, we can find $k\in\Bbb Z$ such that $r = k + s$, where $\left|s\right| < 1$. This means that we can write
$$
\sqrt{n} = m + (\sqrt{n} - m) = m + \frac{1}{1/(\sqrt{n} - m)},
$$
and since $\left|\frac{1}{\sqrt{n}-m}\right| > 1$, we can repeat the process with this number, and so on, until we have written
$$
\sqrt{n} = m_1 + \cfrac{1}{m_2 + \cfrac{1}{m_3 + \dots}},
$$
where each $m_i\in\Bbb Z$. However, in $\Bbb Q_p$ there is no such division algorithm because of the ultrametric property: the norm $\left|\,\cdot\,\right|_p$ in $\Bbb Q_p$ satisfies the property that $\left|a - b\right|_p\leq\max\{\left|a\right|_p,\left|b\right|_p\}$, with equality if $\left|a\right|_p\neq\left|b\right|_p$. If $\sigma\in\Bbb Z_p$, then $\left|\sqrt{\sigma}\right|_p\leq 1$, as $\sqrt{\sigma}$ is a $p$-adic algebraic integer. Then we may find an integer $\rho$ such that $\left|\sqrt{\sigma} - \rho\right|_p < 1$, but there's a problem: if we write $\sqrt{\sigma} = \rho + \frac{1}{1/(\sqrt{\sigma} - \rho)}$, we have $\left|\frac{1}{\sqrt{\sigma} - \rho}\right|_p > 1$. As any element of $\Bbb Z_p$ has norm less than or equal to $1$, the ultrametric property of $\left|\,\cdot\,\right|_p$ tells us that we can never find a $p$-adic integer to subtract from $\frac{1}{\sqrt{\sigma} - \rho}$ that will give the difference norm less than $1$. So we cannot write
$$
\sqrt{2} = a_1 + \cfrac{1}{a_2 + \cfrac{1}{a_3 + \dots}}
$$
with $a_i\in\Bbb Z_7$, as we can in $\Bbb R$. If you wished, you could remedy this by taking your $a_i\in\Bbb Q_7$, but since $\sqrt{2}\in\Bbb Q_7$ already, this is silly.
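Although the continued-fraction question is moot, $\sqrt2\in\Bbb Q_7$ can still be computed to any precision by Hensel/Newton lifting, starting from $3^2\equiv 2\pmod 7$. A minimal sketch (assumes Python 3.8+ for the three-argument `pow` modular inverse):

```python
def sqrt2_mod_7k(k, x0=3):
    """Lift a square root of 2 from mod 7 up to mod 7^k by Newton's method."""
    x, mod = x0, 7
    for _ in range(k - 1):
        mod *= 7
        # Newton/Hensel step: x <- x - (x^2 - 2) / (2x) modulo the new modulus
        x = (x - (x * x - 2) * pow(2 * x, -1, mod)) % mod
    return x

x = sqrt2_mod_7k(10)
assert (x * x - 2) % 7 ** 10 == 0
```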
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/231754",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Proof of $\sum_{k=0}^n k \text{Pr}(X=k) = \sum^{n-1}_{k=0} \text{Pr}(X>k) -n \text{Pr}(X>n)$ $X$ is a random variable defined in $\mathbb N$. How can I prove that for all $n\in \mathbb N$?
*
*$\sum_{k=0}^n k \text{Pr}(X=k) = \sum^{n-1}_{k=0} \text{Pr}(X>k) -n \text{Pr}(X>n)$
*$\text E(X) =\sum_{k\ge 0} k \text{Pr}(X=k)=\sum_{k\ge 0} \text{Pr}(X>k) $
| For part $a)$, use Thomas' hint. You get
$$
\sum_{k=0}^{n}k(P(X>k-1)-P(X>k)).
$$
This telescopes: $P(X>0)-P(X>1)+2P(X>1)-2P(X>2)+3P(X>2)-3P(X>3)+\cdots+nP(X>n-1)-nP(X>n)=\sum_{k=0}^{n-1}P(X>k)-nP(X>n)$
for part $b)$:
In general, you have
$\mathbb{E}(X)=\sum\limits_{i=1}^\infty P(X\geq i).$
You can show this as follow:
$$
\sum\limits_{i=1}^\infty P(X\geq i) = \sum\limits_{i=1}^\infty \sum\limits_{j=i}^\infty P(X = j)
$$
Switch the order of summation gives
\begin{align}
\sum\limits_{i=1}^\infty P(X\geq i)&=\sum\limits_{j=1}^\infty \sum\limits_{i=1}^j P(X = j)\\
&=\sum\limits_{j=1}^\infty j\, P(X = j)\\
&=\mathbb{E}(X)
\end{align}
$$\sum\limits_{i=0}^{\infty}iP(X=i)=\sum\limits_{i=1}^\infty P(X\geq i)=\sum\limits_{i=0}^{\infty} P(X> i)$$
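Both identities can be checked exactly on a small made-up distribution, say uniform on $\{0,\dots,5\}$ (exact rational arithmetic avoids rounding issues):

```python
from fractions import Fraction as F

p = [F(1, 6)] * 6                      # X uniform on {0, ..., 5}

def Pr_gt(k):                          # P(X > k)
    return sum(p[k + 1:], F(0))

# Part (a): partial-sum identity, here with n = 4.
n = 4
lhs = sum(k * p[k] for k in range(n + 1))
rhs = sum(Pr_gt(k) for k in range(n)) - n * Pr_gt(n)
assert lhs == rhs

# Part (b): E(X) = sum_{k >= 0} P(X > k).
E = sum(k * p[k] for k in range(len(p)))
assert E == sum(Pr_gt(k) for k in range(len(p)))
```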
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/231832",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Expectation and Distribution Function? Consider $X$ as a random variable with distribution function $F(x)$. Also assume that $|E(X)| < \infty$. The goal is to show that for any constant $c$, we have:
$$\int_{-\infty}^{\infty} x (F(x + c) - F(x)) dx = cE(X) - c^2/2$$
Does anyone have any hint on how to approach this?
Thanks
| Based on @DilipSarwate's suggestion, we can write the integral as a double integral, because
$F(x+c)-F(x) = \int_x^{x+c} f(y)\,dy$. So we can write:
$$\int_{-\infty}^{\infty} x (F(x + c) - F(x))\, dx = \int_{-\infty}^{\infty}\int_{x}^{x + c} x f(y)\,dy\, dx.$$
By Fubini's theorem (applicable since $f\geq 0$ and we know that $\int |f|\,dP < \infty$; why?), we can change the order of integration. Assuming we can show the integral is equal to $\frac{1}{2}\left(E(X^2)- E((X - c)^2)\right)$, we get
$$\frac{1}{2}\left(E(X^2) - E(X^2 + c^2 - 2Xc)\right) = \frac{1}{2}\left(-c^2 + 2cE(X)\right) = cE(X) - \frac{c^2}{2}.$$
The missing part here is to show that the integral $\int_{-\infty}^{\infty} \int_{x}^{x + c} xf(y)\,dy\, dx$ is equal to $\frac{1}{2}\left(E(X^2) - E((X-c)^2)\right)$.
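As a numerical sanity check of the target identity (illustrative only), take $X\sim N(0,1)$, so $E(X)=0$ and the integral should equal $-c^2/2$; a simple trapezoidal approximation over a wide truncated range:

```python
import math

def Phi(x):                      # standard normal CDF
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

c = 0.7
a, b, n = -12.0, 12.0, 200_000   # wide truncation; the integrand decays fast
h = (b - a) / n
total = 0.0
for i in range(n + 1):
    x = a + i * h
    w = 0.5 if i in (0, n) else 1.0
    total += w * x * (Phi(x + c) - Phi(x))
total *= h

# For X ~ N(0,1): E(X) = 0, so the claimed value is c*E(X) - c^2/2 = -c^2/2.
assert abs(total - (-c * c / 2)) < 1e-6
```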
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/231907",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
} |
Proof of sigma-additivity for measures I understand the proof for the subadditivity property of the outer measure (using the epsilon/2^n method), but I am not quite clear on the proof for the sigma-additivity property of measures. Most sources I have read either leave it an exercise or just state it outright.
From what I gather, they essentially try to show that a measure is also *super*additive (the reverse of subadditive) which means it must be sigma-additive. However, I'm a bit confused as to how they do this.
Would anyone be kind enough to give a simple proof about how this could be done?
| As far as I'm aware, that's the standard approach. The method I was taught is here (Theorem A.9), and involves showing countable subadditivity, defining a new sigma algebra $\mathcal{M}_{0}$ on which countable additivity holds when the outer measure is restricted to $\mathcal{M}_{0}$ (by showing superadditivity), and then showing that $\mathcal{M}_{0}$ is just $\mathcal{M}$, the sigma algebra of measurable sets (the sigma algebra generated by null sets together with Borel sets).
The notes I linked to are based on the book by Franks, which might cover it in a bit more detail/give a slightly different approach if you aren't happy with the notes.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/231995",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
convergence tests for series $p_n=\frac{1\cdot 3\cdot 5\cdots(2n-1)}{2\cdot 4\cdot 6\cdots(2n)}$ If the sequence
$p_n=\frac{1\cdot 3\cdot 5\cdots(2n-1)}{2\cdot 4\cdot 6\cdots(2n)},$
prove that the sequence
$\left((n+\tfrac{1}{2})p_n^2\right)_{n=1}^{\infty}$ is decreasing,
and that the series $\left(np_n^2\right)_{n=1}^{\infty}$ is convergent.
Any hints/answers would be great.
I'm unsure where to begin.
| Hint 1:
Show that $n+\tfrac{1}{2}\ \ge\ \left(n+\tfrac{3}{2}\right)\left(\tfrac{2n+1}{2n+2}\right)^2$ for all positive integers $n$; since $p_{n+1}=\tfrac{2n+1}{2n+2}\,p_n$, this shows that the first sequence is decreasing.
Hint 2:
Show that $\tfrac{1}{2n}\le p_n$, thus $\tfrac{1}{4n}\le np_n^2$; therefore the second series diverges by comparison with the harmonic series.
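Both hints can be spot-checked with exact rational arithmetic (a small sketch; the bound $\tfrac{1}{4n}\le np_n^2$ is the comparison used for divergence):

```python
from fractions import Fraction as F

def p(n):                                  # p_n = (1·3···(2n-1)) / (2·4···(2n))
    r = F(1)
    for k in range(1, n + 1):
        r *= F(2 * k - 1, 2 * k)
    return r

seq = [F(2 * n + 1, 2) * p(n) ** 2 for n in range(1, 40)]
assert all(s > t for s, t in zip(seq, seq[1:]))                  # (n + 1/2) p_n^2 is decreasing
assert all(n * p(n) ** 2 >= F(1, 4 * n) for n in range(1, 40))   # term-wise lower bound
```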
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/232092",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Prove $\frac{1}{a^3} + \frac{1}{b^3} +\frac{1}{c^3} ≥ 3$ Prove inequality $$\frac{1}{a^3} + \frac{1}{b^3} +\frac{1}{c^3} ≥ 3$$ where $a+b+c=3abc$ and $a,b,c>0$
| If $a, b, c >0$ then $a+b+c=3abc \ \Rightarrow \ \cfrac 1{ab} + \cfrac 1{bc}+ \cfrac 1{ca} = 3$
See that $2\left(\cfrac 1{a^3} +\cfrac 1{b^3}+ \cfrac 1{c^3}\right) +3 =\left(\cfrac 1{a^3} +\cfrac 1{b^3}+ 1\right)+\left(\cfrac 1{b^3} +\cfrac 1{c^3}+ 1\right)+\left(\cfrac 1{c^3} +\cfrac 1{a^3}+ 1\right) $
Use $AM-GM$ inequality on each of them and you've got your proof.
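As a numerical sanity check (illustrative only): sample pairs $(a,b)$, solve the constraint for $c=(a+b)/(3ab-1)$ (valid when $3ab>1$, which holds on the sampled range), and verify the inequality:

```python
import random

random.seed(0)
checked = 0
for _ in range(5_000):
    a = random.uniform(0.7, 3.0)
    b = random.uniform(0.7, 3.0)
    # a + b + c = 3abc  <=>  c = (a + b) / (3ab - 1), with 3ab > 1 here
    c = (a + b) / (3 * a * b - 1)
    assert abs(a + b + c - 3 * a * b * c) < 1e-9
    assert a ** -3 + b ** -3 + c ** -3 >= 3 - 1e-12
    checked += 1
assert checked == 5_000
```

Note that equality holds at $a=b=c=1$, which also satisfies the constraint.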
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/232171",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Show that the unit sphere is strictly convex I can prove with the triangle inequality that the closed unit ball in $\mathbb{R}^n$ is convex, but how do I show that it is strictly convex?
| To show that the closed unit ball $B$ is strictly convex we need to show that for any two points $x$ and $y$ in the boundary of $B$, the chord joining $x$ to $y$ meets the boundary only at the points $x$ and $y$.
Let $x,y \in \partial B$, then $||x|| = ||y|| = 1.$ Now consider the chord joining $x$ to $y$. We can parametrise this by $c(t) := (1-t)x + ty$. Notice that $c(0) = x$ and $c(1) = y$. We need to show that $c(t)$ only meets the boundary when $t=0$ or $t=1$. Well:
$$||c(t)||^2 = \langle c(t), c(t) \rangle = (1-t)^2\langle x, x \rangle + 2(1-t)t \, \langle x,y \rangle + t^2 \langle y,y \rangle$$
$$||c(t)||^2 = (1-t)^2||x||^2 + 2t(1-t)\langle x,y \rangle + t^2||y||^2$$
Since $x,y \in \partial B$ it follows that $||x|| = ||y|| = 1$ and so:
$$||c(t)||^2 = (1-t)^2 + 2t(1-t)\langle x,y \rangle + t^2 \, . $$
If $c(t)$ meets the boundary then $||c(t) || = 1$, so let's find the values of $t$ for which $||c(t)|| = 1$:
$$(1-t)^2 + 2t(1-t)\langle x,y \rangle + t^2 = 1 \iff 2t(1-t)(1-\langle x, y \rangle) = 0 \, .$$
Clearly $t=0$ and $t=1$ are solution since $c(0)$ and $c(1)$ lie on the boundary. Recall that $\langle x, y \rangle = \cos\theta$, where $\theta$ is the angle between vectors $\vec{0x}$ and $\vec{0y}$, because $||x|| = ||y|| = 1.$ Thus, provided $x \neq y$ we have $\langle x, y \rangle \neq 1$ and so the chord only meets the boundary at $c(0)$ and $c(1).$
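A quick numerical illustration (not a proof): midpoints of chords between distinct boundary points land strictly inside the ball, since $\|c(1/2)\|^2 = (1+\langle x,y\rangle)/2 < 1$ when $x\neq y$:

```python
import math
import random

random.seed(0)

def unit(n):
    """A random point on the unit sphere in R^n."""
    v = [random.gauss(0, 1) for _ in range(n)]
    s = math.sqrt(sum(c * c for c in v))
    return [c / s for c in v]

for _ in range(1_000):
    x, y = unit(4), unit(4)
    m = [(a + b) / 2 for a, b in zip(x, y)]      # chord midpoint c(1/2)
    nm = math.sqrt(sum(c * c for c in m))
    assert nm < 1.0     # x != y almost surely, so the midpoint is strictly inside
```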
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/232276",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 2
} |
Does this change in this monotonic function affect ranking? I need to make sure I can take out the one in $(1-e^{-x})e^{-y}$ without affecting a sort order based on this function. I other words, I need to prove the following:
$$
(1-e^{-x})e^{-y} \ \geq \ -e^{-x}e^{-y}\quad\forall\ \ x,y> 0
$$
If that is true, then I can take the logarithm of the right hand side above: $\log(-e^{-x}e^{-y}) = x + y$ and my life is soooo much easier...
| Looking at the current version of your post, we have
$$(1-e^{-x})e^{-y}=e^{-y}-e^{-x}e^{-y}>-e^{-x}e^{-y},$$ since $e^t$ is positive for all real $t$. However, we can't take the logarithm of the right-hand side. It's negative.
Update:
The old version was $$(1-e^{-x_1})(1-e^{-x_2})e^{-x_3}=e^{-x_3}-e^{-x_1-x_3}-e^{-x_2-x_3}+e^{-x_1-x_2-x_3},$$ and you wanted to know if that was greater than or equal to $$(-e^{-x_1})(-e^{-x_2})e^{-x_3}=e^{-x_1-x_2-x_3}$$ for all positive $x_1,x_2,x_3$. Note, then, that the following are equivalent (bearing in mind the positivity of $e^t$):
$$(1-e^{-x_1})(1-e^{-x_2})e^{-x_3}\geq e^{-x_1-x_2-x_3}$$
$$e^{-x_3}-e^{-x_1-x_3}-e^{-x_2-x_3}\geq 0$$
$$e^{-x_3}(1-e^{-x_1}-e^{-x_2})\geq 0$$
$$1-e^{-x_1}-e^{-x_2}\geq 0$$
This need not hold. In fact, for any $x_2>0$, there is some $x_1>0$ such that the inequality fails to hold. (Let me know if you're interested in a proof of that fact.)
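A concrete counterexample to the old inequality (arbitrary sample values):

```python
import math

# With x1 small, 1 - e^{-x1} - e^{-x2} < 0, so the old ">=" fails.
x1, x2, x3 = 0.01, 1.0, 1.0
lhs = (1 - math.exp(-x1)) * (1 - math.exp(-x2)) * math.exp(-x3)
rhs = math.exp(-x1) * math.exp(-x2) * math.exp(-x3)   # = e^{-x1-x2-x3}
assert lhs < rhs                                      # the claimed ">=" does not hold
assert 1 - math.exp(-x1) - math.exp(-x2) < 0
```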
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/232360",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
definition of morphism of ringed spaces I've recently started reading about sheaves and ringed spaces (at the moment, primarily on Wikipedia). Assuming I'm correctly understanding the definitions of the direct image functor and of morphisms of ringed spaces, a morphism from a ringed space $(X, O_X)$ to a ringed space $(Y, O_Y)$ is a continuous map $f\colon X\to Y$ along with a natural transformation $\varphi$ from $O_Y$ to $f_*O_X$.
Why does the definition require $\varphi$ to go from $O_Y$ to $f_*O_X$ as opposed to from $f_*O_X$ to $O_Y$?
| Think about what it means to give a morphism from $\mathcal O_Y$ to $f_* \mathcal O_X$: it means that for every open set $V \subset Y$, there is a map
$$\mathcal O_Y(V) \to \mathcal O_X\bigl( f^{-1}(V) \bigr).$$
If you imagine that $\mathcal O_X$ and $\mathcal O_Y$ are supposed to be some sorts of "sheaves of functions" on $X$ and $Y$, then this accords perfectly with the intuition that a morphism of ringed spaces should allow us to "pull back" functions.
Indeed, in concrete examples (such as smooth manifolds equipped with the structure sheaf of smooth functions), the map $\mathcal O_Y \to f_* \mathcal O_X$ is just the pull-back map on functions.
A morphism in the opposite direction doesn't have any analogous intuitive interpretation, and doesn't accord with what happens in the key motivating examples.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/232431",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 4,
"answer_id": 2
} |
Is $G/pG$ a $p$-group? Jack is trying to prove:
Let $G$ be an abelian group, and $n\in\Bbb Z$. Denote $nG = \{ng \mid g\in G\}$.
(1) Show that $nG$ is a subgroup in $G$.
(2) Show that if $G$ is a finitely generated abelian group, and $p$ is prime,
then $G/pG$ is a $p$-group (a group whose order is a power of $p$).
I think $G/pG$ is a $p$-group because it is a direct sum of cyclic groups of order $p$.
But I cannot give a detailed proof.
| $G/pG$ is a direct sum of a finite number of cyclic groups by the fundamental theorem of finitely generated abelian groups. Since every non-zero element of $G/pG$ has order $p$,
it is a direct sum of a finite number of cyclic groups of order $p$, and hence a $p$-group.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/232526",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 4,
"answer_id": 2
} |
Bernoulli Polynomials I am having a problem with this question. Can someone help me please.
We are defining a sequence of polynomials such that:
$P_0(x)=1,\quad P_n'(x)=nP_{n-1}(x), \quad\text{and}\quad \int_{0}^1P_n(x)\,dx=0.$
I need to prove, by induction, that $P_n(x)$ is a polynomial in $x$ of degree $n$, the term of highest degree being $x^n$.
Thank you in advance
| Recall that $\displaystyle \int x^n dx = \dfrac{x^{n+1}}{n+1}$. Hence, if $P_n(x)$ is a polynomial of degree $n$, then it is of the form $$P_n(x) = a_n x^n + a_{n-1} x^{n-1} + \cdots + a_1 x + a_0$$ Since $P_{n+1}'(x) = (n+1) P_n(x)$, we have that $$P_{n+1}(x) = \int_{0}^x (n+1) P_n(y) dy + c$$
Hence, $$P_{n+1}(x) = \int_{0}^x (n+1) \left(a_n y^n + a_{n-1} y^{n-1} + \cdots + a_1 y + a_0\right) dy + c\\ = a_n x^{n+1} + a_{n-1} \left(\dfrac{n+1}n\right) x^n + \cdots + a_{1} \left(\dfrac{n+1}2\right) x^2 + a_{0} \left(\dfrac{n+1}1\right) x + c$$
Now finish it off with induction.
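The recurrence can also be run exactly with rational arithmetic; this sketch (my own implementation, not from the question) confirms that each $P_n$ is monic of degree $n$:

```python
from fractions import Fraction as F

def next_P(P, n):
    """Given coefficients of P_{n-1} (lowest degree first), return P_n."""
    # antiderivative of n * P_{n-1}, constant term temporarily 0
    Q = [F(0)] + [F(n) * c / (i + 1) for i, c in enumerate(P)]
    # choose the constant so that the integral of P_n over [0, 1] is 0
    Q[0] = -sum(c / (i + 1) for i, c in enumerate(Q))
    return Q

P = [F(1)]                                  # P_0 = 1
for n in range(1, 8):
    P = next_P(P, n)
    assert len(P) == n + 1 and P[-1] == 1   # degree n, leading term x^n
```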
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/232594",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
f, g continuous and equal at all rationals: does it follow they are equal at all reals?
Possible Duplicate:
Can there be two distinct, continuous functions that are equal at all rationals?
Let $f, g:\Bbb{R}\to\Bbb{R}$ to be continuous functions such that $f(x)=g(x)\text{ for all rational numbers}\,x\in\Bbb{Q}$. Does it follow that $f(x)=g(x)$ for all real numbers $x$?
Here is what I think:
f continuous when $\lim\limits_{x\to x_0}f(x)=f(x_0)$ and $\lim\limits_{x\to x_0}g(x)=g(x_0)$
So it does not necessarily mean that $f(x)=g(x)$ when $x$ is irrational. So I can pick a function $f$ so that
$f(x) =
\begin{cases}
g(x) & \text{if $x\in\Bbb{Q}$} \\
x & \text{if $x\in\Bbb{R}\setminus \Bbb{Q}$} \\
\end{cases}
$
| Hint: prove that if $\,h\,$ is a real continuous function s.t. $\,h(q)=0\,\,,\,\,\forall\,q\in\Bbb Q\,$ , then $\,h(x)=0\,\,,\,\,\forall\,x\in\Bbb R\,$
Further hint: For any $\,x\in\Bbb R\,$ , let $\,\{q_n\}\subset\Bbb Q\,$ be s.t. $\,q_n\xrightarrow [n\to\infty]{} x\,$ . What happens with
$$\lim_{n\to\infty}f(q_n)\,\,\,?$$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/232735",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 2
} |
67 67 67 : use 3, 67's use any way how to get 11222 I need to get 11222 using three 67's (sixty-seven).
We can use any operation in any manner:
67 67 67
Use three 67's in any way to get 11222.
| I'd guess this is a trick question around "using three, sixty-sevens" to get $11222$.
In particular, $67 + 67 = 134$, which is $11222$ in ternary (base $3$).
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/232800",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 1,
"answer_id": 0
} |
Which player is most likely to win when drawing cards? Two players each draw a single card, in turn, from a standard deck of 52 cards, without returning it to the deck. The winner is the player with the highest value on their card. If the value on both cards is equal then all cards are returned to the deck, the deck is shuffled and both players draw again with the same rules.
Given that the second player is drawing from a deck that has been modified by the first player removing their card, I'm wondering if either player is more likely to win than the other?
Does this change as the number of players increases?
| If the second player were drawing from a full deck, he would draw each of the $13$ ranks with equal probability. The only change when he draws from the $51$-card deck that remains after the first player’s draw is that the rank of the first player’s card becomes less probable; the other $12$ ranks remain equally likely. Thus, given that the game is decided in this round, the second player’s probability of winning is the same for both decks, namely, $\frac12$. The only effect of not replacing the first player’s card is to decrease the expected number of tied rounds before the game is won or lost.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/232963",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 4,
"answer_id": 1
} |