Q | A | meta
---|---|---|
Cyclic groups and generators For each of the groups $\mathbb Z_4$,$\mathbb Z_4^*$ indicate which are cyclic. For those that are cyclic list all the generators.
Solution
$\mathbb Z_4=\{0,1,2,3\}$
$\mathbb Z_4$ is cyclic, and the set of all generators of $\mathbb Z_4$ is $\{1,3\}$.
Now if we consider $\mathbb Z_4^*$:
$\mathbb Z_4^*=\{1,3\}$
How do I know that $\mathbb Z_4^*$ is cyclic?
In our lecture notes it says that $\mathbb Z_4^*$ is cyclic and that the only generator of $\mathbb Z_4^*$ is $3$.
Can anyone help me on the steps to follow in order to prove the above?
| Consulting here, you can easy see that in modulo $4$ there are two relatively prime congruence classes, $1$ and $3$, so $(\mathbb{Z}/4\mathbb{Z})^\times \cong \mathrm{C}_2$, the cyclic group with two elements.
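Not part of the original answer, but both claims (the unit group mod $4$ and its generators) can be spot-checked by brute force; the helper names below are my own:

```python
from math import gcd

def units_mod(n):
    # the multiplicative group Z_n^* = {a : gcd(a, n) = 1}
    return [a for a in range(1, n) if gcd(a, n) == 1]

def generated(a, n):
    # the cyclic subgroup <a> inside Z_n^*
    seen, x = set(), 1
    while x not in seen:
        seen.add(x)
        x = (x * a) % n
    return seen

n = 4
units = units_mod(n)
generators = [a for a in units if generated(a, n) == set(units)]
print(units, generators)   # [1, 3] [3]
```

Since $|\mathbb Z_4^*| = 2$, any group of that order is cyclic, and the check confirms $3$ is its only generator.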
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/309809",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Show $\sum_{n=1}^{\infty}\frac{\sinh\pi}{\cosh(2n\pi)-\cosh\pi}=\frac1{\text{e}^{\pi}-1}$ and another Show that :
$$\sum_{n=1}^{\infty}\frac{\cosh(2nx)}{\cosh(4nx)-\cosh(2x)}=\frac1{4\sinh^2(x)}$$
$$\sum_{n=1}^{\infty}\frac{\sinh\pi}{\cosh(2n\pi)-\cosh\pi}=\frac1{\text{e}^{\pi}-1}$$
| OK, I have figured out the second sum using a completely different method. I begin with the following result (+):
$$\sum_{k=1}^{\infty} e^{-k t} \sin{k x} = \frac{1}{2} \frac{\sin{x}}{\cosh{t}-\cos{x}}$$
I will prove this result below; it is a simple geometrical sum. In any case, let $x=i \pi$ and $t=2 n \pi$; then
$$\begin{align}\frac{\sinh{\pi}}{\cosh{2 n \pi}-\cosh{\pi}} &= 2 \sum_{k=1}^{\infty} e^{-2 n \pi k} \sinh{k \pi}\end{align}$$
Now we can sum:
$$\begin{align}\sum_{n=1}^{\infty} \frac{\sinh{\pi}}{\cosh{2 n \pi}-\cosh{\pi}} &= 2 \sum_{n=1}^{\infty} \sum_{k=1}^{\infty} e^{-2 n \pi k} \sinh{k \pi}\\ &= 2 \sum_{k=1}^{\infty} \sinh{k \pi} \sum_{n=1}^{\infty}e^{-2 n \pi k}\\ &= 2 \sum_{k=1}^{\infty} \frac{\sinh{k \pi}}{e^{2 \pi k}-1} \\ &= \sum_{k=1}^{\infty} \frac{e^{\pi k} - e^{-\pi k}}{e^{2 \pi k}-1} \\ &= \sum_{k=1}^{\infty} e^{-\pi k} \\ \therefore \sum_{n=1}^{\infty} \frac{\sinh{\pi}}{\cosh{2 n \pi}-\cosh{\pi}} &= \frac{1}{e^{\pi}-1} \end{align}$$
To prove (+), write as the imaginary part of a geometrical sum.
$$\begin{align} \sum_{k=1}^{\infty} e^{-k t} \sin{k x} &= \Im{\sum_{k=1}^{\infty} e^{-k (t-i x)}} \\ &= \Im{\left [ \frac{1}{1-e^{-(t-i x)}} \right ]} \\ &= \Im{\left [ \frac{1}{1-e^{-t} \cos{x} - i e^{-t} \sin{x}} \right ]}\\ &= \frac{e^{-t} \sin{x}}{(1-e^{-t} \cos{x})^2 + e^{-2 t} \sin^2{x}}\\ &= \frac{\sin{x}}{e^{t}-2 \cos{x} + e^{-t}} \\ \therefore \sum_{k=1}^{\infty} e^{-k t} \sin{k x} &= \frac{1}{2} \frac{\sin{x}}{\cosh{t}-\cos{x}}\end{align}$$
QED
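As a numerical sanity check (my addition, not part of the proof), both identities can be verified by truncating the rapidly converging sums; the choice $x=1$ in the first identity is arbitrary:

```python
import math

# second sum: sum of sinh(pi) / (cosh(2 n pi) - cosh(pi)) = 1/(e^pi - 1)
lhs = sum(math.sinh(math.pi) / (math.cosh(2 * n * math.pi) - math.cosh(math.pi))
          for n in range(1, 50))
rhs = 1 / (math.exp(math.pi) - 1)

# first sum at x = 1: sum of cosh(2n) / (cosh(4n) - cosh(2)) = 1/(4 sinh^2(1))
lhs2 = sum(math.cosh(2 * n) / (math.cosh(4 * n) - math.cosh(2))
           for n in range(1, 100))
rhs2 = 1 / (4 * math.sinh(1) ** 2)

print(lhs, rhs)
print(lhs2, rhs2)
```

The terms decay like $e^{-2\pi n}$ and $e^{-2n}$ respectively, so a few dozen terms already agree with the closed forms to machine precision.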
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/309875",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "12",
"answer_count": 3,
"answer_id": 0
} |
Continuity of the Lebesgue integral How does one show that the function $g(t) = \int \chi_{A+t} f $ is continuous, given that $A$ is measurable, $f$ is integrable, and $A+t = \{x+t: x \in A\}$?
Any help would be appreciated, thanks
| Notice that
$$
|g(t+h)-g(t)| \le \int_{(A+t)\Delta A} |f|
$$
so it is enough to prove that
$$
|(A+t)\Delta A| \to 0 \qquad \text{as }t \to 0
$$
where $\Delta$ is the symmetric difference, since
$$
\int_{A_k} f \to 0
$$
if $|A_k|\to 0$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/309935",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
} |
Sum of dihedral angles in Tetrahedron I'd like to ask if someone can help me out with this problem. I have to determine lower and upper bounds for the sum of the dihedral angles of an arbitrary tetrahedron (the largest and smallest sums attainable) and prove them. I'm OK with a hint for the proof, but I'd be grateful for the bounds themselves and the reason behind them.
Thanks
| Lemma: The sum of the 4 internal solid angles of a tetrahedron is bounded above by $2\pi$.
Start with a non-degenerate tetrahedron $\langle p_1p_2p_3p_4 \rangle$. Let $p = p_i$ be one of its vertices and $\vec{n} \in S^2$ be any unit vector. Aside from a set of choices of $\vec{n}$ of measure zero, the projections
of $p_j, j = 1\ldots4$, onto a plane orthogonal to $\vec{n}$ are in general position (i.e. no 3 points are collinear). When the images of the vertices are in general position, a necessary condition for either $\vec{n}$ or $-\vec{n}$ to belong to the inner solid angle at $p$ is that $p$'s image lie in the interior of the triangle formed by the images of the other 3 vertices. So aside from a set of exceptions of measure zero, the unit vectors in the 4 inner solid angles are "disjoint". When one views the tetrahedron $\langle p_1p_2p_3p_4 \rangle$ as the convex hull of its vertices, the vertices are extremal points. This in turn implies that for any unit vector, $\vec{n}$ and $-\vec{n}$ cannot belong to the inner solid angle at $p$ at the same time.
From this we can conclude (up to a set of exceptions of measure zero) that at most half of the unit vectors belong to the 4 inner solid angles of a tetrahedron. The almost-disjointness of the inner solid angles then forces their sum to be at most $2\pi$.
Back to original problem
Let $\Omega_p$ be the internal solid angle and $\phi_{p,i}, i = 1\ldots 3$ be the three dihedral angles at vertex $p$. The wiki
page mentioned by @joriki tells us:
$$\Omega_p = \sum_{i=1}^3 \phi_{p,i} - \pi$$
Notice each $\Omega_p \ge 0$ and we have shown $\sum_{p}\Omega_{p} \le 2\pi$. We get:
$$\begin{align}
& 0 \le \sum_p \sum_{i=1}^3 \phi_{p,i} - 4\pi \le 2\pi\\
\implies & 2\pi \le \frac12 \sum_p \sum_{i=1}^3 \phi_{p,i} \le 3\pi
\end{align}$$
When we sum the dihedral angles over $p$ and $i$, every dihedral angle will be counted twice. This means the expression $\frac12 \sum_p \sum_{i=1}^3 \phi_{p,i}$ above is nothing
but the sum of the 6 dihedral angles of a tetrahedron.
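As a quick numerical illustration of the bounds $(2\pi, 3\pi]$ (my addition), the regular tetrahedron has all six dihedral angles equal to $\arccos(1/3)$, and their sum falls strictly inside the interval:

```python
import math

phi = math.acos(1 / 3)   # dihedral angle of the regular tetrahedron
total = 6 * phi          # sum of its six dihedral angles, about 7.39 rad
print(total, 2 * math.pi, 3 * math.pi)
```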
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/310005",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
} |
application of Lowenheim-Skolem theorem So if a minimal model of ZF exists, it is said to be a countable set by Löwenheim-Skolem. Is Löwenheim-Skolem saying that for any countable theory with an infinite model there exists a standard model, respecting the normal element relation, that is countably infinite?
| No. The existence of standard models is strictly stronger.
It is consistent that there are no standard models. To see this, note that the standard models are well-founded, in the sense that there is no infinite decreasing chain of standard models such that $M_{n+1}\in M_n$, simply because $\in$ itself is well-founded and standard models use the real $\in$ for their membership relation.
So there is a minimal standard model. But this model has the standard $\omega$ for its integers, so it cannot possibly satisfy $\lnot\text{Con}(\mathsf{ZFC})$; it must therefore have a model of $\sf ZFC$ inside, but this model cannot be standard.
See also: Transitive ${\sf ZFC}$ model on Cantor's Attic.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/310088",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Baire one extension of continuous functions I struggle with the following comment in Sierpinski's Hypothèse du continu, p. 49.
For every continuous function $f(x):X \rightarrow \mathbb{R}$, where $X \subseteq \mathbb{R}$, there exists a Baire one function $g(x): \mathbb{R} \rightarrow \mathbb{R}$ such that $g(x) = f(x)$ for all $x \in X$.
What if both $X$ and $\mathbb{R}\backslash X$ are dense and uncountable ?
| I think it can be proved by the following:
For arbitrary $X\subset\mathbb{R}$, continuous $f:X\longrightarrow\mathbb{R}$ can be extended to a function $F:\mathbb{R}\longrightarrow\mathbb{R}$ such that $F^{-1}(A)$ is a $G_\delta$ set in $\mathbb{R}$ for every closed $A\subset\mathbb{R}$ (a Lebesgue-one function such that $F|_X=f$).
Then the Lebesgue-Hausdorff theorem implies that the Lebesgue-one function $F$ is also Baire-one.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/310231",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
How to get as much pie as possible! Alice and Bob are sharing a triangular pie. Alice will cut the pie with one straight cut and pick the bigger piece, but Bob first specifies one point through which the cut must pass. What point should Bob specify to get as much pie as possible? And in that case how much pie can Alice get?
The approach I took was to put an arbitrary point in a triangle, and draw lines from it to each corner of the triangle. Now the cut will only go through two of the three triangles we now have, so the aim is to get as close to 50% of the remaining two triangles as possible?
| You can consider the triangle to be equilateral, as you can make any triangle equilateral with a linear transformation. That transformation will preserve the ratio of areas. The centroid is the obvious point to pick. If Alice cuts parallel to a side, she leaves Bob $\frac 49$ because the centroid is $\frac 23$ of the way along the altitude.
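A coordinate check of that $\frac49$ (my own sketch, using the shoelace formula on an equilateral triangle): a cut through the centroid parallel to a side splits off a piece of exactly $4/9$ of the area.

```python
import math

def area(pts):
    # shoelace formula for the area of a polygon
    s = 0.0
    for (x1, y1), (x2, y2) in zip(pts, pts[1:] + pts[:1]):
        s += x1 * y2 - x2 * y1
    return abs(s) / 2

A, B, C = (0.0, 0.0), (1.0, 0.0), (0.5, math.sqrt(3) / 2)
gy = (A[1] + B[1] + C[1]) / 3              # height of the centroid
t = gy / C[1]                              # fraction of the way up sides AC, BC
P = (A[0] + t * (C[0] - A[0]), gy)         # cut point on AC
Q = (B[0] + t * (C[0] - B[0]), gy)         # cut point on BC
frac = area([P, Q, C]) / area([A, B, C])   # piece above the cut
print(frac)   # 0.444... = 4/9
```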
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/310300",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Let $k$ and $n$ be any integers such that $k \ge 3$ and $k$ divides $n$. Prove that $D_n$ contains exactly one cyclic subgroup of order $k$ a) Find a cyclic subgroup $H$ of order $10$ in $D_{30}$. List all generators of $H$.
b) Let $k$ and $n$ be any integers such that $k \ge 3$ and $k$ divides $n$. Prove that $D_n$ contains exactly one cyclic subgroup of order $k$.
My attempt at a), the elements of $D_{30}$ of order 10 are $r^{3n},sr^{3n}, 0\le n\le 4$ so any cyclic groups of the corresponding elements would work. The generators of the elements would be of the form $\langle a^j \rangle$ where $\gcd(10, j) = 1$ so $j =1,3,7,9,11,13$.
Any ideas as to how I should attempt b)?
| Let $$D_{2n} = \langle r, s \mid r^n = 1, s^2 = 1, s r = r^{-1}s \rangle$$ be the dihedral group of order $2n$ generated by rotations ($r$) and reflections ($s$) of the regular $n$-gon.
From the presentation it is clear that every element can be put into the form $s^i r^j$ where $i$ is $0$ or $1$. So the cyclic subgroups of $D_{2n}$ are the cyclic subgroups generated by elements of the form $r^j$ and $s r^j$.
*
*Since $r$ generates $C_n$ we uniquely have $C_d \le C_n \le D_{2n}$ for every $d|n$ by the lemma.
*Since $s r^i s r^i = 1$ the second form only generates $C_2$ subgroups.
This shows that there may be many different $C_2$ subgroups of $D_{2n}$, but the $C_d$ subgroups are all unique.
Lemma For $d|n$, there is a unique subgroup of $C_n$ isomorphic to $C_d$.
proof: Let $m=ab$; every cyclic group $C_m$ has exactly $a$ elements $g$ such that $g^a=1$ (in fact, in additive notation for $\mathbb Z_m$, these elements are $b$, $2b$, $\ldots$, $ab$). So $C_n$ has exactly $d$ elements such that $g^d=1$, and if $C'$ is a subgroup of $C_n$ isomorphic to $C_d$ then it too has exactly $d$ elements like this: they must be exactly the same elements then!
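The counting argument can be checked by brute force. The sketch below is my own, with rotations represented as $x \mapsto x+j$ and reflections as $x \mapsto j-x$ on $\mathbb{Z}_n$; it enumerates the cyclic subgroups of $D_{30}$ (dihedral group of the regular $30$-gon), confirming uniqueness for each order $k \ge 3$ dividing $30$ alongside the many $C_2$ subgroups:

```python
def dihedral(n):
    # elements of the dihedral group as maps on Z_n:
    # ('r', j): x -> x + j   (rotation),  ('s', j): x -> j - x  (reflection)
    return [(t, j) for t in 'rs' for j in range(n)]

def mul(a, b, n):
    # composition a . b (apply b first) of the maps above
    (ta, ka), (tb, jb) = a, b
    if tb == 'r':
        return ('r', (ka + jb) % n) if ta == 'r' else ('s', (ka - jb) % n)
    return ('s', (ka + jb) % n) if ta == 'r' else ('r', (ka - jb) % n)

def cyclic(g, n):
    # the cyclic subgroup generated by g
    e, out, x = ('r', 0), [], ('r', 0)
    while True:
        x = mul(x, g, n)
        out.append(x)
        if x == e:
            return frozenset(out)

n = 30
subs = {}
for g in dihedral(n):
    H = cyclic(g, n)
    subs.setdefault(len(H), set()).add(H)

for k in (3, 5, 6, 10, 15, 30):   # k >= 3 dividing 30: exactly one C_k
    assert len(subs[k]) == 1
print(len(subs[2]))   # 31 subgroups of order 2 (30 reflections plus <r^15>)
```

The unique subgroup of order $10$ found this way is $\langle r^3\rangle$, matching part a).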
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/310346",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Real Analysis Question! Consider the equation $\sin(x^2 + y) − 2x= 0$ for $x ∈ \mathbb{R}$ with $y ∈ \mathbb{R}$
as a parameter.
Prove the existence of neighborhoods $V$ and $U$ of $0$ in $\mathbb{R}$ such that for every
$y ∈ V$ there exists a unique solution $x = ψ(y) ∈ U$. Prove that $ψ$ is a $C^\infty$
mapping on $V$, and that $ψ'(0) = \frac{1}{2}.$
I know that the solution has to do with the inverse and implicit function theorems but I just can't figure it out! Any help would be much appreciated!
| Implicit differentiation of
$$
\sin(x^2+y)-2x=0\tag{1}
$$
yields
$$
y'=2\sec(x^2+y)-2x\tag{2}
$$
$(1)$ implies
$$
|x|\le\frac12\tag{3}
$$
$(2)$ and $(3)$ imply
$$
\begin{align}
|y'|
&\ge2|\sec(x^2+y)|-2|x|\\
&\ge2-1\\
&=1\tag{4}
\end{align}
$$
By the Inverse Function Theorem, $(4)$ says that for all $y$,
$$
|\psi'(y)|\le1\tag{5}
$$
and $\psi\in C^\infty$. Furthermore, $x=0$ is the only $x\in\left[-\frac12,\frac12\right]$ so that $\sin(x^2)=2x$. $(2)$ says that $y'=2$ at $(0,0)$. Therefore,
$$
\psi'(0)=\frac12\tag{6}
$$
Since $\psi'$ is continuous, there is a neighborhood of $y=0$ on which $\psi'(y)>0$. Thus, $\psi$ is unique in that neighborhood of $y=0$.
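A numerical cross-check of the conclusion (my addition): since $F(x)=\sin(x^2+y)-2x$ is strictly decreasing on $[-0.6,0.6]$ (its derivative $2x\cos(x^2+y)-2$ is at most $1.2-2<0$ there), bisection finds the unique root $\psi(y)$, and a central difference recovers $\psi'(0)\approx\frac12$:

```python
import math

def psi(y):
    # unique root of F(x) = sin(x^2 + y) - 2x on [-0.6, 0.6];
    # F is strictly decreasing there, so bisection applies
    F = lambda x: math.sin(x * x + y) - 2 * x
    lo, hi = -0.6, 0.6
    for _ in range(100):
        mid = (lo + hi) / 2
        if F(lo) * F(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

h = 1e-6
deriv = (psi(h) - psi(-h)) / (2 * h)   # central difference for psi'(0)
print(psi(0.0), deriv)
```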
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/310412",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Probability of choosing two equal bits from three random bits Given three random bits, pick two (without replacement). What is the probability that the two you pick are equal?
I would like to know if the following analysis is correct and/or if there is a better way to think about it.
$$\Pr[\text{choose two equal bits}] = \Pr[\text{2nd bit} = 0 \mid \text{1st bit} = 0] + \Pr[\text{2nd bit} = 1 \mid \text{1st bit} = 1]$$
Given three random bits, once you remove the first bit the other two bits can be: 00, 01, 11, each of which occurring with probability $\frac{1}{2}\cdot\frac{1}{2}=\frac{1}{4}$. Thus,
$$\Pr[\text{2nd bit} = 0] = 1\cdot\frac{1}{4} + \frac{1}{2}\cdot\frac{1}{4} + 0\cdot\frac{1}{4} = \frac{3}{8}$$
And $\Pr[\text{2nd bit} = 1] = \Pr[\text{2nd bit} = 0]$ by the same analysis.
Therefore,
$$\Pr[\text{2nd bit}=0 \mid \text{1st bit} = 0] = \frac{\Pr[\text{1st and 2nd bits are 0}]}{\Pr[\text{1st bit}=0]} = \frac{1/2\cdot3/8}{1/2} = \frac{3}{8}$$
and by the same analysis, $\Pr[\text{2nd bit} = 1 \mid \text{1st bit} = 1] = \frac{3}{8}$.
Thus, $$\Pr[\text{choose two equal bits}] = 2\cdot\frac{3}{8} = \frac{3}{4}$$
| Whatever the first bit picked, the probability the second bit matches it is $1/2$.
Remark: We are assuming what was not explicitly stated, that $0$'s and $1$'s are equally likely. One can very well have "random" bits where the probability of $0$ is not the same as the probability of $1$.
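Under the uniform model of the remark, the $1/2$ can be confirmed by exhaustive enumeration over all $8$ equally likely bit triples and all ordered picks of two positions (my sketch):

```python
from itertools import product, permutations
from fractions import Fraction

match = total = 0
for bits in product((0, 1), repeat=3):       # 8 equally likely triples
    for i, j in permutations(range(3), 2):   # ordered pick of two positions
        total += 1
        match += (bits[i] == bits[j])
p = Fraction(match, total)
print(p)   # 1/2
```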
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/310508",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 2
} |
How to solve the following differential equation We have the following DE: $$ \dfrac{dy}{dx} = \dfrac{x^2 + 3y^2}{2xy}$$
I don't know how to solve this. I know we need to write it as $y/x$ but I don't know how to in this case.
| $y=vx$ so $y'=xv'+v$. Your equation is $xv'+v={{1 \over{2v}}+{{3v} \over {2}}}$.
Now clean up to get $xv'={{v^2+1}\over{2v}}$. Now separate ${2v dv \over {v^2+1}} = {dx \over x}$.
Edit
${{x^2+3y^2} \over {2xy}} = {{{x^2}\over{2xy}}+{{3y^2}\over{2xy}}}={ x \over {2y}}+{{3y}\over{2x}}$
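Carrying the separation through gives $\ln(v^2+1)=\ln|x|+c$, i.e. $v^2+1=Cx$, hence $y^2=Cx^3-x^2$. A finite-difference check of that solution (my addition, with an arbitrary $C=2$ at $x_0=1$):

```python
import math

C = 2.0                                       # arbitrary constant of integration
y = lambda x: math.sqrt(C * x**3 - x**2)      # from v^2 + 1 = C x with v = y/x
x0, h = 1.0, 1e-6
lhs = (y(x0 + h) - y(x0 - h)) / (2 * h)       # y'(x0) by central difference
rhs = (x0**2 + 3 * y(x0)**2) / (2 * x0 * y(x0))
print(lhs, rhs)
```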
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/310570",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
} |
Fractional Derivative Implications/Meaning? I've recently been studying the concept of taking fractional derivatives and antiderivatives, and this question has come to mind: If a first derivative, in Cartesian coordinates, is representative of the function's slope, and the second derivative is representative of its concavity, is there any qualitative relationship between a 1/2 derivative and its original function? Or a 3/2 derivative with its respective function?
| There are several approaches to fractional derivatives. I use the Grünwald-Letnikov derivative, its generalizations to the complex plane, and the two-sided derivatives. However, most papers use what I call "walking dead" derivatives: the Riemann-Liouville and Caputo. If you want to start, don't lose time with them.
There are some attempts to give interpretations to the FD: Prof. Tenreiro Machado and also Prof. Podlubny. The best interpretation, in my opinion, is the system interpretation: there is a linear system called the differintegrator with transfer function $H(s)=s^{\alpha}$, for $\operatorname{Re}(s)>0$ (forward, causal case) or $\operatorname{Re}(s)<0$ (backward, anti-causal case). The impulse response of the causal system is $\frac{t^{-\alpha-1}}{\Gamma(-\alpha)}u(t)$, where $u(t)$ is the Heaviside function.
Send me a mail and I'll send you some papers:
[email protected]
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/310713",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "21",
"answer_count": 3,
"answer_id": 1
} |
3D - derivative of a point's function, is it the tangent? If I have (for instance) this formula which associates a $(x,y,z)$ point $p$ to each $u,v$ couple (on a 2D surface in 3D):
$p=f(u,v)=(u^2+v^2+4,2uv,u^2−v^2) $
and I calculate the $\frac{\partial p}{\partial u}$, what do I get? The answer should be "a vector tangent to the point $p$" but I can't understand why. Shouldn't I obtain another point?
| Take a fixed location where $(u,v) = (u_0,v_0)$. Think about the mapping $u \mapsto f(u,v_0)$. This is a curve lying on your surface, which is formed by allowing $u$ to vary while $v$ is held fixed. In fact, in my business, we would say that this is an "isoparametric curve" on the surface. By definition, $\frac{\partial f}{\partial u}(u_0)$ is the first derivative vector of this curve at $u= u_0$. In other words, it's the "tangent" vector of this curve.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/310770",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 2
} |
Fubini's theorem for Riemann integrals? The integrals in Fubini's theorem are all Lebesgue integrals. I was wondering if there is a theorem with conclusions similar to Fubini's but only involving Riemann integrals? Thanks and regards!
| To see the difficulties of Fubini with Riemann integrals, study two functions $f$ and $g$ on the rectangle $[0,1]\times[0,1]$ defined by:
(1) $\forall$integer $i\ge0$, $\forall$odd integer $j\in[0,2^i]$,
$\forall$integer $k\ge0$, $\forall$odd integer $\ell\in[0,2^k]$,
define $f(j/2^i,\ell/2^k)=\delta_{ik}$ (here, $\delta_{ik}$ is the Kronecker delta, equal to one if $i=k$ and $0$ if not) and $g(j/2^i,\ell/2^k)=1/2^i$;
and
(2) $\forall x,y\in[0,1]$, if either $x$ or $y$ is not a dyadic rational, define $f(x,y)=0$ and $g(x,y)=0$.
Then both iterated Riemann integrals of $f$ are zero, i.e., $\int_0^1\int_0^1 f(x,y)\,dx\,dy=\int_0^1\int_0^1 f(x,y)\,dy\,dx=0$. However, the Riemann integral, over $[0,1]\times[0,1]$, of $f$ does not exist.
Also, the Riemann integral, over $[0,1]\times[0,1]$, of $g$ is zero. However, $\forall$dyadic rational $x\in[0,1]$, the Riemann integral $\int_0^1 g(x,y)\,dy$ does not exist. Consequently, $\int_0^1\int_0^1 g(x,y)\,dy\,dx$ does not exist, in the Riemann sense.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/310837",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 2,
"answer_id": 1
} |
Sub-lattices and lattices. I have read in a textbook that $ \mathcal{P}(X) $, the power-set of $ X $ under the relation ‘contained in’ is a lattice. They also said that $ S := \{ \varnothing,\{ 1,2 \},\{ 2,3 \},\{ 1,2,3 \} \} $ is a lattice but not a sub-lattice. Why is it so?
| The point of confusion is that a lattice can be described in two different ways. One way is to say that it is a poset such that finite meets and joins exist. Another way is to say that it is a set upon which two binary operations (called meet and join) are given that satisfy a short list of axioms. The two definitions are equivalent in the sense that using the first definition's finite meets and joins gives us the two binary operations, and the structure imposed by the second definition allows one to recover a poset structure, and these processes are inverse to each other.
So now, if $L$ is a lattice and $S\subseteq L$ then $S$ is automatically a poset, indeed a subposet of $L$. But, even if with that poset structure it is a lattice, it does not mean that it is a sublattice of $L$. To be a sublattice it must be that for all $x,y\in S$, the join $x\vee y$ computed in $S$ is the same as that computed in $L$, and similarly for the meet $x\wedge y$. This much stronger condition does not have to hold. Indeed, as noted by Gerry in the comment, the meet $\{1,2\}\wedge \{2,3\}$ computed in $\mathcal P(\{1,2,3\})$ is $\{2\}$, while computed in the given subset it is $\emptyset$. Nonetheless, it can immediately be verified that the given subset is a lattice, since under the inclusion poset all finite meets and joins exist.
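The example can be made concrete in a few lines of Python (my own sketch, computing greatest lower bounds inside each family ordered by inclusion):

```python
def meet_in(family, a, b):
    # greatest lower bound of a and b in the family, ordered by inclusion
    lowers = [s for s in family if s <= a and s <= b]
    top = max(lowers, key=len)
    assert all(s <= top for s in lowers)   # it really is the greatest lower bound
    return top

P = [frozenset(s) for s in [(), (1,), (2,), (3,), (1, 2), (1, 3), (2, 3), (1, 2, 3)]]
S = [frozenset(s) for s in [(), (1, 2), (2, 3), (1, 2, 3)]]

a, b = frozenset({1, 2}), frozenset({2, 3})
print(meet_in(P, a, b))   # frozenset({2}): the meet taken in P({1,2,3})
print(meet_in(S, a, b))   # frozenset():    the meet taken in S differs
```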
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/310926",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 1,
"answer_id": 0
} |
Good book recommendations on trigonometry I need to find a good book on trigonometry, I was using trigonometry demystified but I got sad when I read this line:
Now that you know how the circular functions are defined, you might wonder how the values are calculated. The answer: with an electronic calculator!
I know a book which seems to be really good: Loney's Plane Trigonometry; I'm just not sure whether the book is up to date.
| Not much has changed in basic trigonometry in a century. Another book of the same genre as Loney's:
Henry Sinclair Hall and Samuel Ratcliffe Knight, Macmillan and Company, 1893 (plane trigonometry, 404 pages)
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/310980",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "27",
"answer_count": 11,
"answer_id": 8
} |
Solve $x'(t)=(x(t))^2-t^2+1 $ How can we solve $$x'(t)=(x(t))^2-t^2+1\,?$$ I have tried to check whether it is exact, separable, homogeneous, or Bernoulli; it doesn't resemble any of them. Can anyone help? Thank you. The source of the question is the CEU entrance examination.
| The non-linear ODE $$x'=P(t)+Q(t)x+R(t)x^2$$ is called Riccati's equation. If $x_1$ is a known particular solution of it, then we can obtain a family of solutions of the form $x(t)=x_1+u$, where $u$ is a solution of $$u'=Ru^2+(Q+2x_1R)u$$ or, equivalently, of the linear ODE $$w'+(Q+2x_1R)w=-R,~~~w=u^{-1}.$$ Here, as @Ishan noted correctly, one particular solution of your ODE is $x_1=t$, with $$R=1,~~Q=0,~~P=1-t^2,$$ and so we first solve $$w'+(0+2t\times1)w=-1,~~w=u^{-1},$$ i.e. $w'+2tw=-1$. The solution of the latter ODE is $$w=e^{-t^2}\left(C-\int_{t_0}^te^{k^2}dk\right)$$ and so far we get $$x(t)=t+\frac{e^{t^2}}{C-\int_{t_0}^te^{k^2}dk}$$
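As a numerical cross-check (my addition), one can verify that $x(t)=t+e^{t^2}\big/\big(C-\int_{0}^{t}e^{k^2}\,dk\big)$ satisfies $x'=x^2-t^2+1$; the non-elementary integral is approximated by the midpoint rule, with an arbitrary constant $C=5$:

```python
import math

C = 5.0   # arbitrary integration constant (large enough that C - I(t) > 0 here)

def I(t, steps=20000):
    # midpoint rule for the non-elementary integral of e^{k^2} over [0, t]
    h = t / steps
    return h * sum(math.exp(((i + 0.5) * h) ** 2) for i in range(steps))

x = lambda t: t + math.exp(t * t) / (C - I(t))
t0, h = 0.5, 1e-5
lhs = (x(t0 + h) - x(t0 - h)) / (2 * h)   # x'(t0) by central difference
rhs = x(t0) ** 2 - t0 ** 2 + 1
print(lhs, rhs)
```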
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/311050",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
DNA sequence in MATLAB I want to count how many times synonymous and non-synonymous mutations appear in a sequence of DNA, given the number of synonymous and non-synonymous mutations in each 3-letter codon. I.e., given that AAA has 7 synonymous and 1 non-synonymous mutation, and CCC has 6 and 3 respectively, the sequence AAACCC would have 13 synonymous and 4 non-synonymous mutations. However, these sequences could have 10k+ letters, with a total of 64 different 3-letter combinations...
How could I set up an M file, using for / else if statements to count the mutations?
Thanks
| Assuming you have filtered out the data errors and each time you nicely have three letters, here is one approach:
1) Make your data look like this:
AAA
CCC
ACA
CAC
...
2) Count how many times each of the 64 options occurs.
3) Multiply each count by the corresponding synonymous and non-synonymous mutation numbers, and total them.
That should be it!
Note that steps 2 and 3 can easily be achieved with Excel as well. If you are not fluent in MATLAB it will probably even be quicker.
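For illustration only, here is the same lookup-and-tally approach in Python rather than MATLAB; the codon table is hypothetical except for the two counts given in the question:

```python
# codon -> (synonymous, non-synonymous); only AAA and CCC are given in the
# question, and the full 64-entry table would be filled in the same way
counts = {"AAA": (7, 1), "CCC": (6, 3)}

def tally(seq):
    syn = non = 0
    for i in range(0, len(seq) - len(seq) % 3, 3):   # walk codon by codon
        s, n = counts[seq[i:i + 3]]
        syn += s
        non += n
    return syn, non

print(tally("AAACCC"))   # (13, 4)
```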
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/311141",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Ideals in Dedekind domain If $I$ is a non-zero ideal in a Dedekind domain such that $I^m$ and $I^n$ are principal, equal to $(a)$ and $(b)$ respectively, how does one show that $I^{(m,n)}$ is principal?
Try: $(m,n) = rm + sn$, so $I^{(m,n)} = (a)^r(b)^s$, where $r$ and $s$ can be positive or negative. The case where both are positive is fine, but how do I handle the other cases?
| Even in the other cases the argument works, you just have fractional ideals instead. Viewing $I$ as an element $\overline{I}$ of the ideal class group of $A$ your question can be stated as: Suppose $\overline{I}^m=\overline{I}^n=0$ in the ideal class group, then $\overline{I}^{(m,n)}=0$. This statement is true in any group.
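The group-theoretic fact being used, that $g^m=g^n=e$ forces $g^{\gcd(m,n)}=e$ (because $\gcd(m,n)=rm+sn$ for some integers $r,s$), is easy to spot-check by brute force; my sketch below uses the additive group $\mathbb{Z}_{12}$:

```python
from math import gcd

N = 12
violations = 0
for g in range(N):
    # exponents m with g^m = e, written additively: m*g = 0 (mod N)
    exps = [m for m in range(1, N + 1) if (m * g) % N == 0]
    for m in exps:
        for n in exps:
            if (gcd(m, n) * g) % N != 0:
                violations += 1
print(violations)   # 0
```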
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/311226",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 2,
"answer_id": 1
} |
How can I calculate the limit of this? What is the limit? $$\lim_{n\rightarrow\infty}\dfrac{3}{(4^n+5^n)^{\frac{1}{n}}}$$
I don't get this limit. Really, I don't know whether it even has a limit.
| Denote the function
$$
f(n) = \frac{3}{(4^n +5^n)^{\frac{1}{n}}}
$$
Recall that the logarithm is a continuous function; hence denote
$$
L(f(n)) = \log 3-\frac{\log(4^n +5^n)}{n}\\
\lim_{n \to \infty} L(f(n)) = \log 3 - \lim_{n \to \infty}\frac{\log(4^n +5^n)}{n}=\log 3 - \lim_{n \to \infty} \frac{4^n \log 4 + 5^n \log 5}{4^n + 5^n} \\
=\log 3 - \log 5=\log \bigg(\frac{3}{5} \bigg)
$$
Here I used L'Hospital's rule and then divided the fraction through by $5^n$.
Hence $\lim_{n \to \infty}f(n)=\frac{3}{5}$
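A quick numerical check of the limit $3/5$ (my addition; taking the logarithm of the exact integer $4^n+5^n$ avoids float overflow for large $n$):

```python
import math

f = lambda n: 3 / math.exp(math.log(4**n + 5**n) / n)
for n in (10, 100, 1000):
    print(n, f(n))   # increases toward 0.6
```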
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/311295",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 0
} |
If $M(r)$ is defined for $a \leq r \leq b$ by $M(r)=\max\{\frac {r}{a}-1,1-\frac {r}{b}\}$, then what is $\min \{M(r):a \leq r \leq b\}$? I faced the following problem, which says:
Let $0<a<b$. Define a function $M(r)$ for $a \leq r \leq b$ by $M(r)=\max\{\frac {r}{a}-1,1-\frac {r}{b}\}$.Then $\min \{M(r):a \leq r \leq b\}$ is which of the following:
1. $0$
2. $\frac {2ab}{a+b}$
3. $\frac {b-a}{b+a}$
4. $\frac {b+a}{b-a}$
I do not know how to progress with it. Can someone point me in the right direction? Thanks in advance for your time.
| If $a\leq r \leq \dfrac{2ab}{a+b}$ show that $M(r)=1-\dfrac{r}{b}$ and for $\dfrac{2ab}{a+b}\leq r \leq b$ that $M(r)=\dfrac{r}{a}-1$.
As Did suggested a picture (of $\dfrac{r}{a}-1, 1-\dfrac{r}{b}$) will help. What is $\dfrac{2ab}{a+b}$?
To find $\min \{M(r):a\leq r\leq b\}$ note that $\dfrac{r}{a}-1$ is increasing and $1-\dfrac{r}{b}$ decreasing.
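A brute-force check of the minimum (my addition, with arbitrary $a=2$, $b=5$, for which $\frac{b-a}{b+a}=\frac37$ and the minimizer is $r=\frac{2ab}{a+b}$, the harmonic mean):

```python
a, b = 2.0, 5.0
M = lambda r: max(r / a - 1, 1 - r / b)
GRID = 10**5
best = min(M(a + i * (b - a) / GRID) for i in range(GRID + 1))
print(best, (b - a) / (b + a), 2 * a * b / (a + b))
```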
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/311352",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
getting the inner corner angle I have four points that make a concave quad (figure omitted):
Now I want to get the inner angle at the corner $b$ in degrees.
Note: the inner angle is greater than 180 degrees.
| Draw $ac$ and use the law of cosines at $\angle b$, then subtract from $360$
$226=68+50-2\sqrt{50\cdot 68} \cos \theta \\ \cos \theta\approx -0.926 \\ \theta \approx 157.83 \\ \text{Your angle } \approx 202.17$
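The arithmetic of the answer can be reproduced directly (my sketch, taking the squared side lengths $68$, $50$, $226$ from the answer as given):

```python
import math

# squared side lengths read off the answer: |ab|^2 = 68, |bc|^2 = 50, |ac|^2 = 226
cos_b = (68 + 50 - 226) / (2 * math.sqrt(50 * 68))
theta = math.degrees(math.acos(cos_b))   # the non-reflex angle at b
inner = 360 - theta                      # the reflex inner angle of the concave quad
print(round(theta, 2), round(inner, 2))
```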
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/311417",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 1
} |
Sum of $\prod 1/n_i$ where $n_1,\ldots,n_k$ are divisions of $m$ into $k$ parts. Fix $m$ and $k$ natural numbers. Let $A_{m,k}$ be the set of all divisions of $m$ into $k$ parts. That is:
$$A_{m,k} = \left\{ (n_1,\ldots,n_k) : n_i >0, \sum_{i=1}^k n_i = m \right\} $$
We are interested in the following sum $s_{m,k}$:
$$s_{m,k} = \sum_{ (n_1,\ldots,n_k) \in A_{m,k} } \prod_{i=1}^k \frac{1}{n_i} $$
Can you find $s_{m,k}$ explicitly, or perhaps its generating function or exponential generating function?
EDIT: Since order matters in the $(n_1,\ldots,n_k)$ this is not exactly a partition.
| I am not sure I understand the problem correctly, but
$$ g(z) =
\left( \frac{z}{1} + \frac{z^2}{2} +\frac{z^3}{3} + \ldots + \frac{z^q}{q} + \ldots \right)^k =
\left( \log \frac{1}{1-z} \right)^k $$
looks like a good candidate to me, so that
$$ s_{m,k} = [z^m] \left( \log \frac{1}{1-z} \right)^k.$$
This is the exponential generating function for a sequence of $k$ cycles containing a total of $m$ nodes,
$$\mathfrak{C}(\mathcal{Z}) \times \mathfrak{C(\mathcal{Z})} \times \mathfrak{C(\mathcal{Z})} \times \cdots \times \mathfrak{C(\mathcal{Z})} =
\mathfrak{C}^k(\mathcal{Z}) ,$$
so that $m! [z^m] g(z)$ gives the number of such sequences.
Since the components are at most $m$ we could truncate the inner logarithmic term at $z^m/m$, but I suspect the logarithmic form is more useful for asymptotics.
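The coefficient extraction can be spot-checked against brute force over compositions (my sketch, truncating $\log\frac{1}{1-z}$ at degree $m$, with $m=8$, $k=3$, and exact rational arithmetic):

```python
from fractions import Fraction
from itertools import product

m, k = 8, 3

def polymul(p, q, deg):
    # multiply polynomials (coefficient lists), truncated at degree deg
    r = [Fraction(0)] * (deg + 1)
    for i, pi in enumerate(p):
        for j, qj in enumerate(q):
            if pi and qj and i + j <= deg:
                r[i + j] += pi * qj
    return r

# truncated log(1/(1-z)) = z + z^2/2 + ... + z^m/m
L = [Fraction(0)] + [Fraction(1, q) for q in range(1, m + 1)]
ser = [Fraction(1)] + [Fraction(0)] * m
for _ in range(k):
    ser = polymul(ser, L, m)

# brute force: sum of 1/(n1 n2 n3) over compositions of m into k = 3 positive parts
brute = sum(Fraction(1, n1 * n2 * n3)
            for n1, n2, n3 in product(range(1, m + 1), repeat=k)
            if n1 + n2 + n3 == m)
print(ser[m], brute)
```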
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/311474",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Proving the statement using Resolution? I'm trying to solve this problem for my logical programming class:
Every child loves Santa. Everyone who loves Santa loves any reindeer.
Rudolph is a reindeer, and Rudolph has a red nose. Anything which has
a red nose is weird or is a clown. No reindeer is a clown. John
does not love anything which is weird. (Conclusion) John is not a
child.
Here is my theory:
1. K(x) => L(x,s) % Every child loves Santa
2. L(x,s) => ( D(y) => L(x,y) ) % Everyone who loves Santa loves any reindeer
3. D(r) & R(r) % Rudolph is a reindeer, and Rudolph has a red nose.
4. R(x) => ( W(x) v C(x) ) % Anything which has a red nose is weird or is clown
5. ~( D(x) & C(x) ) % No reindeer is a clown.
6. W(x) => ~L(j,x) % John does not love anything which is weird.
7. ?=> ~K(j) % John is not a child?
Here are the clauses in CNF:
1. ~K(x) v L(x,s)
2. ~L(x,s) v ~D(y) v L(x,y)
3. D(r)
4. R(r)
5. ~R(x) v W(x) v C(x)
6. ~D(x) v ~C(x)
7. ~W(x) v ~L(j,x)
8. K(j)
I cannot seem to get the empty clause by resolution. Is there a mistake in my theory, or does the conclusion indeed not follow?
Edit: Resolution
[3,6] 9. ~C(r)
[4,5] 10. W(r) v C(r)
[9,10] 11. W(r)
[8,1] 12. L(j,s)
[12,2] 13. ~D(y) v L(j,y)
[11,7] 14. ~L(j,r)
[13,14] 15. ~D(r)
Thank you for your help!
| The conclusion is correct.
I will let you tidy this up and fill in the gaps, but you might want to consider the following
W(r) v C(r)
W(r)
~L(j,r)
~L(j,s)
~K(j)
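The refutation can be double-checked mechanically: since the relevant ground instances use only the constants $j$, $s$, $r$, a brute-force search over truth assignments shows that the ground instances of clauses 1 through 8 are jointly unsatisfiable, so the empty clause is derivable (my sketch):

```python
from itertools import product

# ground atoms over the constants j (John), s (Santa), r (Rudolph)
atoms = ("K_j", "L_js", "L_jr", "D_r", "R_r", "W_r", "C_r")
clauses = [
    [("K_j", False), ("L_js", True)],                    # 1
    [("L_js", False), ("D_r", False), ("L_jr", True)],   # 2
    [("D_r", True)],                                     # 3
    [("R_r", True)],                                     # 4
    [("R_r", False), ("W_r", True), ("C_r", True)],      # 5
    [("D_r", False), ("C_r", False)],                    # 6
    [("W_r", False), ("L_jr", False)],                   # 7
    [("K_j", True)],                                     # 8 (negated conclusion)
]

satisfiable = any(
    all(any(v[a] == pos for a, pos in cl) for cl in clauses)
    for bits in product((False, True), repeat=len(atoms))
    for v in [dict(zip(atoms, bits))]
)
print(satisfiable)   # False
```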
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/311531",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Exposition On An Integral Of An Absolute Value Function At the moment, I am trying to work on a simple integral, involving an absolute value function. However, I am not just trying to merely solve it; I am undertaking to write, in detail, of everything I am doing.
So, the function is $f(x) = |x^2 + 3x - 4|$. I know that this isn't an algebraic-like function, so we can't evaluate it as one; but, by using the definition of absolute value, we can rewrite it as one.
The function $f(x)$, without the absolute value signs, can take on both positive and negative values; so, in order to retain the strictly positive output that the absolute value function demands, we have to put a negative sign in front of the algebraic definition of our absolute value function on the interval where the values are negative, so we'll get a double negative -(-), resulting in the positive we want.
This is how far I've gotten so far. From what i've been taught, in order to find the intervals where the function is positive and where it is negative, you have to find the values that make the function zero, and create test intervals from those values. For instance, the zeros of the function above are $x = -4$ and $x = 1$; our test intervals are then $(- \infty, -4)$, $(-4, 1)$, and $(1, \infty)$ My question is, why does finding the zeros of the function guarantee that we will find those precise test intervals?
| Since polynomials are continuous, and the absolute value of a continuous function is continuous, the zeros are the only points where a sign change is possible: by the intermediate value theorem, if the function changed sign inside one of the test intervals, it would have to have another zero there, contradicting the fact that you found them all. So the sign is constant on each interval between consecutive zeros.
Then you can write your function piecewise and integrate.
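To illustrate the piecewise approach concretely, here is a quick sketch comparing the exact piecewise integral of $|x^2+3x-4|$ with a brute-force numerical integral of the absolute value directly. The interval $[-6,3]$ is an arbitrary choice of mine containing both zeros.

```python
def f(x):
    return x*x + 3*x - 4            # zeros at x = -4 and x = 1

def F(x):
    return x**3/3 + 3*x**2/2 - 4*x  # an antiderivative of f

# sign of f: + on [-6,-4], - on [-4,1], + on [1,3]
piecewise = (F(-4) - F(-6)) - (F(1) - F(-4)) + (F(3) - F(1))

# brute-force midpoint-rule integral of |f| on the same interval
n, a, b = 100000, -6.0, 3.0
h = (b - a) / n
numeric = sum(abs(f(a + (i + 0.5) * h)) for i in range(n)) * h

print(abs(piecewise - numeric) < 1e-3)  # True
```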
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/311597",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 1
} |
Define a domain filter of a function Let $\mathbb{B}, \mathbb{V}$ two sets. I have defined a function $f: \mathbb{B} \rightarrow \mathbb{V}$.
$\mathcal{P}(\mathbb{B})$ means the power set of $\mathbb{B}$,
I am looking for a function $g: (\mathbb{B} \rightarrow \mathbb{V}) \times \mathcal{P}(\mathbb{B}) \rightarrow (\mathcal{P}(\mathbb{B}) \rightarrow \mathbb{V})$ which can filter the domain of $f$ by a subset of $\mathbb{B}$, that means $g: (f, \mathbb{S}) \mapsto h$ such that the domain of $h$ is $\mathbb{S}$ and $\forall x \in \mathbb{S}, h(x) = f(x)$.
I am wondering if this kind of function exists already. If not, is there a better way to define it?
Could anyone help?
| What you denote $g(f,\mathbb S)$ is usually called the restriction of $f$ to $\mathbb S$, and denoted $f|\mathbb S$ or $f\upharpoonright \mathbb S$. Sometimes this notation is used even if $\mathbb S$ is not contained in the domain of $f$, in which case it is understood to be $f\upharpoonright A$, where $A=\mathbb S\cap{\rm dom}(f)$.
(And, agreeing with Trevor's comment, the range of $g$ should be $\bigcup_{\mathbb S\in\mathcal P(\mathbb B)}(\mathbb S\to \mathbb V)$ rather than ${\mathcal P}(\mathbb B)\to\mathbb V$. Anyway, I much prefer the notation $A^B$ or ${}^B A$ instead of $B\to A$.)
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/311641",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Does every sequence of rationals, whose sum is irrational, have a subsequence whose sum is rational Assume we have a sequence of rational numbers $a=(a_n)$. Assume we have a summation function $S: \mathscr {L}^1 \mapsto \mathbb R, \ \ S(a)=\sum a_n$ ($\mathscr {L}^1$ is the sequence space whose sums of absolute values converges). Assume also that $S(a) \in \mathbb R \setminus \mathbb Q$.
I would like to know if every such sequence $a$ has a subsequence $b$ (infinitely long) such that $S(b) \in \mathbb Q$.
Take as an example $a_n = 1/n^2$. Then $S(a)=\pi^2/6$. But $a$ has a subsequence $b=(b_n)=(1/(2^n)^2)$ (ie. all squares of powers of $2$). Then $S(b)=4/3$. Is this case with every such sequence?
| No; for example, if $(n_i)$ is a strictly increasing sequence of positive integers, then we can imitate the proof of the irrationality of $e$ to see that
$$\sum_{i=1}^\infty \frac{1}{n_1 \dots n_i} \notin \mathbf Q.$$
But every sub-series of this series has the same property (it just amounts to grouping some of the $n_i$ together).
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/311695",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "23",
"answer_count": 3,
"answer_id": 1
} |
Prove $||a| - |b|| \leq |a - b|$ I'm trying to prove that $||a| - |b|| \leq |a - b|$. So far, by using the triangle inequality, I've got:
$$|a| = |\left(a - b\right) + b| \leq |a - b| + |b|$$
Subtracting $|b|$ from both sides yields,
$$|a| - |b| \leq |a - b|$$
The book I'm working from claims you can achieve this proof by considering just two cases: $|a| - |b| \geq 0$ and $|a| - |b| < 0$. The first case is pretty straightforward:
$$|a| - |b| \geq 0 \implies ||a| - |b|| = |a| - |b| \leq |a - b|$$
But I'm stuck on the case where $|a| - |b| < 0$
Cool, I think I got it (thanks for the hints!). So,
$$|b| - |a| \leq |b - a| = |a - b|$$
And when $|a| - |b| < 0$,
$$||a| - |b|| = -\left(|a| - |b|\right) = |b| - |a| \leq |a - b|$$
| Hint: If $|a|-|b|<0$, rename $a$ to $b'$ and $b$ to $a'$.
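As a purely numerical sanity check (not a proof, of course), one might also test the reverse triangle inequality on many random inputs:

```python
import random

random.seed(42)
for _ in range(100000):
    a = random.uniform(-1e3, 1e3)
    b = random.uniform(-1e3, 1e3)
    # small slack for floating-point rounding
    assert abs(abs(a) - abs(b)) <= abs(a - b) + 1e-9
print("no counterexample found")
```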
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/311756",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 2,
"answer_id": 1
} |
How to prove that two non-zero linear functionals defined on the same vector space and having the same null-space are proportional? Let $f$ and $g$ be two non-zero linear functionals defined on a vector space $X$ such that the null-space of $f$ is equal to that of $g$. How to prove that $f$ and $g$ are proportional (i.e. one is a scalar multiple of the other)?
| Let $H$ be the null space and take a vector $v$ outside $H$. The point is that $H+\langle v\rangle$ is the whole vector space; this I assume you know (i.e. $H$ has codimension 1). Then $f(v)$ and $g(v)$ uniquely determine the functions $f$ and $g$, and every $x\in X$ can be written as $x=h+tv$ with $h\in H$, so:
$$
f(x) / g(x) = f(tv)/g(tv) = f(v)/g(v).
$$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/311820",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9",
"answer_count": 2,
"answer_id": 0
} |
Integrate $\iint_D\exp\{\min(x^2, y^2)\}\mathrm{d}x\mathrm{d}y$ Compute the integral:
\begin{equation}
\iint\limits_{\substack{\displaystyle 0 \leqslant x \leqslant 1\\\displaystyle 0 \leqslant y \leqslant 1}}\exp\left\{\min(x^2, y^2)\right\}\mathrm{d}x\mathrm{d}y
\end{equation}
$D$ is the rectangle with vertices $(0,0)$, $(0,1)$, $(1,0)$ and $(1,1)$ and $\min(x^2,y^2)$ is the minimum of the numbers $x^2$ and $y^2$.
I dont have a clue about how I can integrate this function. I thought of using Taylor series but I am not sure if this would work either. Can anyone help me out?
| Note that the derivative of $x\mapsto e^{x^2}$ is $x\mapsto 2xe^{x^2}$, hence by symmetry along the line $x=y$
$$\begin{align} \int_0^1\int_0^1e^{\min\{x^2,y^2\}}\,\mathrm dy\,\mathrm dx &= 2\int_0^1\int_x^1e^{x^2}\,\mathrm dy\,\mathrm dx\\
&=2\int_0^1(1-x)e^{x^2}\,\mathrm dx\\
&=2\int_0^1e^{x^2}\,\mathrm dx-\int_0^12xe^{x^2}\,\mathrm dx\\
&=2\int_0^1e^{x^2}\,\mathrm dx-e+1.
\end{align}$$
Unfortunately, the remaining integral is non-elementary.
A similar integral with max instead of min is much easier:
$$\begin{align} \int_0^1\int_0^1e^{\max\{x^2,y^2\}}\,\mathrm dy\,\mathrm dx &= 2\int_0^1\int_0^xe^{x^2}\,\mathrm dy\,\mathrm dx\\
&=2\int_0^1 x e^{x^2}\,\mathrm dx\\
&=e-1.
\end{align}$$
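Both results can be confirmed numerically; here is a rough midpoint-rule sketch, where the grid size $n=400$ is an arbitrary choice of mine and $J$ approximates the non-elementary integral $\int_0^1 e^{x^2}\,dx$:

```python
import math

n = 400
h = 1.0 / n
pts = [(i + 0.5) * h for i in range(n)]   # midpoint grid on [0, 1]

I_min = sum(math.exp(min(x*x, y*y)) for x in pts for y in pts) * h * h
I_max = sum(math.exp(max(x*x, y*y)) for x in pts for y in pts) * h * h
J     = sum(math.exp(x*x) for x in pts) * h   # the non-elementary 1-D integral

print(abs(I_max - (math.e - 1)) < 1e-3)        # True
print(abs(I_min - (2*J - math.e + 1)) < 1e-3)  # True
```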
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/311884",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Algebraic Number Theory - Lemma for Fermat's Equation with $n=3$ I have to prove the following, in my notes it is lemma before Fermat's Equation, case $n=3$. I was able to prove everything up to the last two points:
Let $\zeta=e^{(\frac{2\pi i}{3})}$. Consider $A:=\mathbb{Z}[\zeta]=\{a+\zeta b \quad|\quad a,b\in \mathbb{Z}\}$. Then
*
*$\zeta$ is a root of the irreducible poly. $X^2+X+1$.
*The field of fractions of $A$ is $\mathbb{Q}(\sqrt{-3})$
*The norm map $N:\mathbb{Q}(\sqrt{-3})\rightarrow \mathbb{Q},$ given by $a+\sqrt{-3}b \mapsto a^2+3b^2$ is multiplicative and sends every element in $A$ to an element in $\mathbb{Z}$. In particular, $u\in A$ is a unit iff $N(u)\in\{-1,1\}$. Moreover, if $N(a)=\pm$ prime number, then $a$ is irreducible.
*The unit group $A^x$ is cyclic of order $6$. ($A^x=\{\pm 1, \pm\zeta, \pm\zeta^2\}$)
*The ring $A$ is Euclidean with respect to the norm $N$ and hence a unique factorisation domain.
*The element $\lambda=1-\zeta$ is a prime element in $A$ and $3=-\zeta^2\lambda^2$.
*The quotient $A$ / $(\lambda)$ is isomorphic to $\mathbb{F}_3$.
*The image of the set $A^3=\{a^3|a\in A\}$ under $\pi: A \rightarrow A / (\lambda^4)=A / (9)$ is equal to $\{0+(\lambda^4),\pm 1+(\lambda^4),\pm \lambda^3+(\lambda^4)\}$
I was not able to prove 7 and 8. For 7 I do not even know which isomorphism, I guess it should be an isomorphism of rings?
I hope anybody knows what to do or has at least some hints,
Thanks in advance for your help!
| For 7), note that 6) tells you that $3 \in (\lambda)$, and since by 6) $\lambda$ is prime, $A \ne (\lambda)$. Moreover $a + \zeta b = a + (1-\lambda) b \equiv a + b \pmod{\lambda}$. So if you want an explicit isomorphism, it is $a + \zeta b + (\lambda) \mapsto a+ b \pmod{3}$.
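The arithmetic in points 1 and 6 can be spot-checked numerically; a quick sketch using floating-point complex numbers:

```python
import cmath

zeta = cmath.exp(2j * cmath.pi / 3)
lam = 1 - zeta

print(abs(zeta**2 + zeta + 1) < 1e-12)     # zeta is a root of X^2 + X + 1
print(abs(-zeta**2 * lam**2 - 3) < 1e-12)  # 3 = -zeta^2 * lambda^2
print(abs(abs(lam)**2 - 3) < 1e-12)        # N(lambda) = 3, so lambda is irreducible
```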
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/311935",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 4,
"answer_id": 1
} |
Properties of Equivalence Relation Compared with Equality I'm reading about congruences in number theory and my textbook states the following:
The congruence relation on $\mathbb{Z}$ enjoys many (but not all!) of the properties satisfied by the usual relation of equality on $\mathbb{Z}$.
The text then does not go into detail as to what properties they are describing. So what are the properties they are talking about? I've already showed that congruences are reflexive, symmetric, and transitive, so why in general is this not the same as equality? Is there some property that all equivalence relations will never share with the equality relation? I appreciate all responses.
| An equivalence relation is the equality relation if and only if its congruence classes are all singletons. Most equivalence relations do not have this characteristic. The equivalence classes of (most) congruence relations on $\Bbb Z$, for example, are infinite.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/312044",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 2,
"answer_id": 1
} |
Two trivial questions in general topology I'd appreciate some guidance regarding the following 2 questions. Number 1 should be clear, and number 2 is more of a discussion:
*
*Let $X$ be a topological space. Let $E$ be a dense subset. Can $E$ be finite without $X$ being finite? Or countable?
*Let $X$ be a topological space. I came across the following definition of "isolated point": a point $x$ which is not a limit point of $X\setminus\{x\}$. Or, for a metric space: a point $x$ such that there is an open ball centered on $x$ not containing any other point of $X$. These definitions make no sense to me: how can a ball of a topological space $X$ contain points not from $X$? It sounds like there is some kind of bigger topological space containing $X$ which is implicitly referred to. In that context, what meaning can one give to "an isolated point of $X$"?
| *
*Let $X = \mathbb{R}$ and define the topology $\mathcal{T} = \{\mathbb{R}, \emptyset, \{0\}\}$. The set $\{0\}$ is dense in $(\mathbb{R}, \mathcal{T})$ (its closure is clearly all of $\mathbb{R}$, since the only closed set containing $0$ is $\mathbb{R}$ itself), and it's a finite set.
*Consider $X = [0,1] \cup \{2\}$ with the usual Euclidean metric. The ball of radius $1$ about the point $2$ (in $X$) is just the set $\{2\}$. You don't need to have a larger space in mind when you talk about the isolated points of $X$. The ambient space $\mathbb{R}$ doesn't play any role here.
Alternatively, you could think about the metric $d(x,y) = 1$ if $x \neq y$ and $0$ otherwise on $\mathbb{R}$. Then the ball of radius $1$ about any point in $\mathbb{R}$ is the singleton $\{x\}$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/312127",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 6,
"answer_id": 1
} |
Understanding Primitive Polynomials in GF(2)? This is an entire field over my head right now, but my research into LFSRs has brought me here.
It's my understanding that a primitive polynomial in $GF(2)$ of degree $n$ indicates which taps will create an LFSR. Such as $x^4+x^3+1$ is primitive in $GF(2)$ and has degree $4$, so a 4 bit LFSR will have taps on bits 4 and 3.
Let's say I didn't have tables telling me what taps work for what sizes of LFSRs. What process can I go through to determine that $x^4+x^3+1$ is primitive and also show that $x^4+x^2+x+1$ is not (I made that equation up off the top of my head from what I understand about LFSRs, I think it's not primitive)?
Several pages online say that you should divide $x^e+1$ (where e is $2^n-1$ and $n$ is the degree of the polynomial) by the polynomial, e.g. for $x^4+x^3+1$, you do $(x^{15}+1)/(x^4+x^3+1)$. I can divide polynomials but I don't know what the result of that division will tell me? Am I looking for something that divides evenly? Does that mean it's not primitive?
| For a polynomial $p(x)$ of degree $n$ with coefficients in $GF(2)$ to be primitive, it must satisfy the condition that $2^n-1$ is the smallest positive integer $e$ with the property that
$$
x^e\equiv 1\pmod{p(x)}.
$$
You got that right.
For a polynomial to be primitive, it is necessary (but not sufficient) for it to be irreducible. Your "random" polynomial
$$
x^4+x^2+x+1=(x+1)(x^3+x^2+1)
$$
fails this litmus test, so we don't need to check anything more.
Whenever I am implementing a finite field of characteristic two in a computer program, at the beginning I generate a table of discrete logarithms. While doing that I also automatically verify that $2^n-1$ is, indeed, the smallest power of $x$ that yields remainder $=1$.
Doing it by hand becomes tedious after while. There are several shortcuts available, if you know that $p(x)$ is irreducible and if you know the prime factorization of $2^n-1$. These depend on the fact the multiplicative group of $GF(2^n)$ is always cyclic. Your example of $p(x)=x^4+x^3+1$ is a case in point. It is relatively easy to decide that it is
irreducible. Then the theory of cyclic groups tells that the smallest $e$ will be factor of $15$. So to prove that it is actually equal to fifteen it suffices to check that none of
$e=1,3,5$ work. This is easy. The only non-trivial check is that
$$
x^5\equiv x^5+x(x^4+x^3+1)=x^4+x\equiv x^4+x+(x^4+x^3+1)=x^3+x+1\not\equiv1\pmod{x^4+x^3+1},
$$
and this gives us the proof.
Even with the shortcuts, finding a primitive polynomial of a given degree is a task I would rather avoid. So I use look up tables. My favorite on-line source is at
http://web.eecs.utk.edu/~plank/plank/papers/CS-07-593/primitive-polynomial-table.txt
After you have one primitive polynomial, you often want to find other closely related ones. For example, when calculating generating polynomials of a BCH-code or an LFSR of a Gold sequence (or other sequence with known structure) you encounter the following task. The given primitive polynomial is the so called minimal polynomial of any one of its roots, say $\alpha$. Those constructions require you to find the minimal polynomial of $\alpha^d$ for some $d$. For example $d=3$ or $d=5$ are very common. The minimal polynomial of $\alpha^d$ will be primitive, iff $\gcd(d,2^n-1)=1$, and this often holds.
Then relatively basic algebra of field extensions gives you an algorithm for finding the desired minimal polynomial. Space won't permit me to get into that here, though.
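The order computation described above is easy to automate. Here is a minimal brute-force sketch; the encoding of GF(2) polynomials as Python bit masks (bit $i$ is the coefficient of $x^i$) and the helper names are my own choices:

```python
def gf2_mod(a, m):
    """a mod m for GF(2)[x] polynomials encoded as ints (bit i = coeff of x^i)."""
    dm = m.bit_length() - 1
    while a.bit_length() - 1 >= dm:
        a ^= m << (a.bit_length() - 1 - dm)
    return a

def order_of_x(p):
    """Smallest k >= 1 with x^k = 1 (mod p), searched up to 2^deg(p) - 1; 0 if none."""
    n = p.bit_length() - 1
    a = 1
    for k in range(1, 1 << n):
        a = gf2_mod(a << 1, p)   # a = x^k mod p
        if a == 1:
            return k
    return 0

def is_primitive(p):
    return order_of_x(p) == (1 << (p.bit_length() - 1)) - 1

print(is_primitive(0b11001))  # x^4+x^3+1   -> True
print(is_primitive(0b10111))  # x^4+x^2+x+1 -> False (it factors)
print(order_of_x(0b10111))    # 7
```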
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/312186",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "13",
"answer_count": 2,
"answer_id": 1
} |
Friends Problem (Basic Combinatorics) Let $k$ and $n$ be fixed integers. In a group of $k$ people, any group of $n$ people all have a friend in common.
*
*If $k=2 n + 1$ prove that there exists a person who is friends with everyone else.
*If $k=2n+2$, give an example of a group of $k$ people satisfying the given condition, but with no person being friends with everyone else.
Thanks :)
| The first part of this is just an expansion of Harald Hanche-Olsen’s answer.
For the second part number the $2n+2$ people $P_1,\dots,P_{n+1},Q_1,\dots,Q_{n+1}$. Divide them into pairs: $\{P_1,Q_1\},\{P_2,Q_2\},\dots,\{P_{n+1},Q_{n+1}\}$. The two people in each pair are not friends; i.e., $P_k$ is not friends with $Q_k$ for $k=1,\dots,n+1$. However, every other possible pair of people in the group are friends. In particular,
*
*$P_k$ is friends with $P_i$ whenever $1\le i,k\le n+1$ and $i\ne k$,
*$Q_k$ is friends with $Q_i$ whenever $1\le i,k\le n+1$ and $i\ne k$, and
*$P_k$ is friends with $Q_i$ whenever $1\le i,k\le n+1$ and $i\ne k$.
In short, two people in the group are friends if and only if they have different subscripts. Clearly no person in the group is friends with everyone else in the group. Suppose, though, that $\mathscr{A}$ is a group of $n$ of these people. The people in $\mathscr{A}$ have altogether at most $n$ different subscripts, so there is at least one subscript that isn’t used by anyone in $\mathscr{A}$; let $k$ be such a subscript. Then everyone in $\mathscr{A}$ has a different subscript from $P_k$ and is therefore friends with $P_k$ (and for that matter with $Q_k$).
I hope to get to the first part a bit later.
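The construction above can be verified exhaustively for small $n$. A quick sketch, indexing $P_1,\dots,P_{n+1},Q_1,\dots,Q_{n+1}$ as $0,\dots,2n+1$ with "subscript" $i \bmod (n+1)$:

```python
from itertools import combinations

def check(n):
    """Verify the k = 2n+2 construction: people 0..2n+1, 'subscript' i % (n+1);
    two people are friends iff their subscripts differ."""
    k = 2 * n + 2
    people = range(k)

    def friends(i, j):
        return i != j and i % (n + 1) != j % (n + 1)

    nobody_universal = not any(
        all(friends(i, j) for j in people if j != i) for i in people
    )
    every_n_group_has_common_friend = all(
        any(all(friends(c, m) for m in group) for c in people)
        for group in combinations(people, n)
    )
    return nobody_universal and every_n_group_has_common_friend

print(all(check(n) for n in (1, 2, 3, 4)))  # True
```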
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/312246",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 2
} |
I need to ask a question about vectors and cross product? When you take the determinant on 3 vectors, you calculate and get the volume of that specific shape, correct?
When you take the cross-product of 2 vectors, you calculate and get the area of that shape and you also get the vector perpendicular to the plane, correct?
| Kind Of.
When you take the determinant of a set of vectors, you get the volume bounded by the vectors.
For instance, the determinant of the identity matrix (which can be considered as a set of vectors) gives the volume of the solid box in $n$ dimensions. A $3\times3$ identity matrix gives the area of a cube.
However, when you calculate cross products, the first row of the matrix whose determinant you take consists of the unit vectors in the $n$ dimensions.
For instance
\begin{align}
\det\begin{pmatrix}
\hat{i}&\hat{j}&\hat{k}\\1&0&0\\0&1&0
\end{pmatrix}=1 \hat{k}
\end{align}
It does NOT return a scalar value.
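A small illustration with hand-rolled 3-vector helpers (the function names are mine): the cross product returns a vector, the determinant a scalar, and the scalar triple product ties the two together as a signed volume.

```python
def cross(u, v):
    return (u[1]*v[2] - u[2]*v[1],
            u[2]*v[0] - u[0]*v[2],
            u[0]*v[1] - u[1]*v[0])

def det3(a, b, c):
    return (a[0]*(b[1]*c[2] - b[2]*c[1])
          - a[1]*(b[0]*c[2] - b[2]*c[0])
          + a[2]*(b[0]*c[1] - b[1]*c[0]))

u, v, w = (1, 0, 0), (0, 1, 0), (0, 0, 1)
print(cross(u, v))    # (0, 0, 1): a vector, not a scalar
print(det3(u, v, w))  # 1: the volume of the unit cube
c = cross(u, v)
print(sum(ci * wi for ci, wi in zip(c, w)))  # 1: signed volume of the parallelepiped
```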
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/312319",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
} |
Proving $\prod((k^2-1)/k^2)=(n+1)/(2n)$ by induction $$P_n = \prod^n_{k=2} \left(\frac{k^2 - 1}{k^2}\right)$$ Someone already helped me see that $$P_n = \frac{1}{2}.\frac{n + 1}{n} $$ Now I have to prove, by induction, that the formula for $P_n$ is correct. The basis step: $n = 2$ is true, $$P_2 = \frac{3}{4} = \frac{1}{2}.\frac{2+1}{2} $$ Inductive step: Assuming $P_k = \frac12.\frac{k+1}k$ is true for $k \ge2$, I need to prove $$P_{k+1} = \frac12.\frac{k+2}{k+1}$$ So I am stuck here, I have been playin around with $P_k$ and $P_{k+1}$ but I can't figure out how to connect the hypothesis with what I am trying to prove.
| We do the same thing as in the solution of Brian M. Scott, but slightly backwards. We are interested in the question
$$\frac{n+2}{2(n+1)}\overset{?}{=}\frac{n+1}{2n}\frac{(n+1)^2-1}{(n+1)^2}.$$
The difference-of-squares factorization $(n+1)^2-1=(n)(n+2)$ settles things.
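The closed form $P_n=\frac{n+1}{2n}$ is also easy to confirm with exact rational arithmetic; a quick sketch:

```python
from fractions import Fraction

def P(n):
    prod = Fraction(1)
    for k in range(2, n + 1):
        prod *= Fraction(k * k - 1, k * k)
    return prod

print(all(P(n) == Fraction(n + 1, 2 * n) for n in range(2, 50)))  # True
```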
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/312381",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 2
} |
Can we test if a number is a lucky number in polynomial time? I know primality tests exist in polynomial time. But can we test if a number is a lucky number in polynomial time ?
| I doubt you're going to get a satisfactory answer to this question. I'm fairly sure the answer to your question is that there's no known polynomial time algorithm to determine if a number is lucky. The fact that Primes is in P was not shown until 2004, even though people had been studying prime numbers and primality tests for quite a long time. I certainly can't comment on whether or not there is a polynomial time algorithm for lucky numbers; probably it's in $\mathrm{NP}$, so proving that no polynomial time algorithm exists would in particular settle $\mathrm{P}\ne\mathrm{NP}$.
Looking at the lucky numbers they seem to have much less exploitable structure than the primes. Possibly one could develop a large theory of lucky numbers and then use this to help make progress forward. But examining the sieve algorithm I highly doubt there's going to be a way to make it polynomial in terms of the input.
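For reference, the sieve itself is easy to implement. Note that running it up to $m$ to test membership takes time polynomial in $m$, hence exponential in the number of digits of $m$, which is exactly why this does not answer the question. A sketch (helper names are mine):

```python
def lucky_numbers(limit):
    """Generate the lucky numbers up to `limit` by the standard sieve."""
    nums = list(range(1, limit + 1, 2))    # first pass removes every 2nd number
    i = 1
    while i < len(nums) and nums[i] <= len(nums):
        step = nums[i]
        del nums[step - 1 :: step]         # delete every step-th survivor
        i += 1
    return nums

def is_lucky(m):
    """Membership test by running the sieve up to m: time polynomial in m,
    hence exponential in the bit length of m."""
    return m in lucky_numbers(m)

print(lucky_numbers(50))           # [1, 3, 7, 9, 13, 15, 21, 25, 31, 33, 37, 43, 49]
print(is_lucky(49), is_lucky(50))  # True False
```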
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/312430",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 1,
"answer_id": 0
} |
Sketch complex curve $z(t) = e^{-1t+it}$, $0 \le t \le b$ for some $b>0$
I tried plotting this using mathematica, but I get two curves.
Also, how do I find its length, is it just the integral?
This equation doesn't converge right?
Edit: I forgot the $t$ in front of the $1$ so it's not a circle of radius $e$
| (You have received answers for the rest, so let me focus on the length.)
The length of the curve is given by
$$\int_0^b |z'(t)|\,dt = \int_0^b |(-1+i)e^{(-1+i)t}|\,dt = \int_0^b \sqrt 2 e^{-t}\,dt.$$
I think you can work out what happens when $b\to\infty$ now.
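The resulting length $\sqrt 2\,(1-e^{-b})$ can be spot-checked against a polygonal approximation of the curve; a quick numerical sketch:

```python
import cmath, math

def length(b, n=20000):
    """Polygonal approximation to the length of z(t) = exp((-1+i)t) on [0, b]."""
    zs = [cmath.exp((-1 + 1j) * (b * i / n)) for i in range(n + 1)]
    return sum(abs(z2 - z1) for z1, z2 in zip(zs, zs[1:]))

for b in (1.0, 3.0, 10.0):
    exact = math.sqrt(2) * (1 - math.exp(-b))
    print(abs(length(b) - exact) < 1e-4)  # True each time; the limit is sqrt(2)
```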
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/312488",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 1
} |
How do we plot $u(4−t)$, where $u(t)$ is a step function? How do we plot $u(4−t)$?
$u(t)$ is a step function:
$$u(t)=\begin{cases} 1&\text{ for }t \ge 0,\\
0 & \text{ for }t \lt 0.\end{cases}$$
| $u(4-t) = 1$ for $4-t\ge0$ so for $t\le4$
$u(4-t) = 0$ for $4-t\lt0$ so for $t\gt4$
Here is the plot: a step that stays at $1$ for $t\le 4$ and drops to $0$ for $t>4$.
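A quick sketch evaluating $u(4-t)$ at a few points confirms the picture:

```python
def u(t):
    return 1 if t >= 0 else 0

print([(t, u(4 - t)) for t in (-2, 0, 3.9, 4, 4.1, 10)])
# [(-2, 1), (0, 1), (3.9, 1), (4, 1), (4.1, 0), (10, 0)]
```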
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/312521",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
} |
Cauchy Problem for Heat Equation with Holder Continuous Data This exercise comes from a past PDE qual problem. Assume $u(x,t)$ solves
$$
\left\{\begin{array}{rl}
u_{t}-\Delta u=0&\text{in}\mathbb{R}^{n}\times(0,\infty)\\
u(x,0)=g(x)&\text{on}\mathbb{R}^{n}\times\{t=0\}\end{array}\right.
$$
and $g$ is Holder continuous with continuity mode $0<\delta\leq1,$
that is
$$|g(x)-g(y)|\leq|x-y|^{\delta}$$
for every $(x,y)\in\mathbb{R}^{n}$. Prove the estimate
$$|u_{t}|+|u_{x_{i}x_{j}}|\leq C_{n}t^{\frac{\delta}{2}-1}.$$
I have quite a few pages of scratch work in trying to prove this estimate, but I have not been able to arrive at a situation where it is even obvious how to exploit the Holder continuity of $g$. Because of translation invariance in space, we can just prove it for the case $x=0$, so that at least simplifies some things. But again, there is a key observation that has apparently eluded me, and a hint would be appreciated!
| This calls for a scaling argument.
As you noticed, it suffices to consider $x=0$. Replace $g$ with $g-g(0)$; this does not change the derivatives. Now we know that $$|g(x)|\le |x|^\delta\tag1$$
Prove an estimate of the form
$$|u_{t}(0,1)| + |u_{x_ix_j}(0,1)| \le C_n\tag2$$
This requires writing the derivatives
as convolutions of $g$ with the derivatives of $\Phi$, and a rough estimate such as $|g(x)|\le 1+|x|$.
For every scaling factor $\lambda$ the function $u_\lambda=\lambda^{-\delta} u(\lambda x,\lambda^2 t)$ solves the heat equation with the initial data $g_\lambda(x)=\lambda^{-\delta} g(\lambda x)$. Notice that $g_\lambda$ also satisfies (1).
Therefore, $u_\lambda$ satisfies (2). All of a sudden, we're done.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/312595",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 0
} |
Filling the gap in knowledge of algebra Recently, I realize that my inability to solve problems, sometimes, is because I have gaps in my knowldge of algebra. For example, I recently posted a question that asked why $\sqrt{(9x^2)}$ was not $3x$ which to me was fairly embarrassing because the answer was fairly logical and something that I had missed. Furthermore, I realize that when I am solving questions, I tend to get stuck on some intermediate step because that is where most of the algebra is needed. Therefore, my question is: How can I improve algebra? What steps are needed? What books should I be practicing from? What are a few things everyone should know?
| As skullpatrol commented, the Khan Academy has covered a wide range of high school algebra. As you go up the scale, there are many more resources such as the Art of Problem Solving for contest mathematics. Art of Problem Solving has books on high school algebra as well as practice problems: I personally like their structure. You can use your mathematics textbooks for algebra too. Another website I want to add is Brilliant.
P.S.: Never forget the site you are already on! Set the algebra-precalculus tag as your favorite and start exploring.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/312661",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 2,
"answer_id": 0
} |
Graph theory and computer chip design reference Wikipedia says graph theory is used in computer chip design.
... travel, biology, computer chip design, and many other fields. ...
Is there a good reference for that? I can imagine optimal way to draw cpu to chip is to draw shortest hamiltonian cycle in it.
| I don't know about a reference. However, the intuition is that an electrical circuit in a computer chip design is etched into a flat surface. This implies that the graph model of this circuit must be a planar graph. So the theory behind planar graphs is very important in designing such circuits.
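A minimal illustration of how planarity constrains layouts: Euler's formula yields the bound $e \le 3v - 6$, which is necessary (but not sufficient) for a simple planar graph with $v \ge 3$ vertices. A sketch checking only this bound:

```python
def satisfies_planarity_bound(v, e):
    """Euler's bound e <= 3v - 6: necessary (not sufficient) for a simple
    planar graph with v >= 3 vertices."""
    return e <= 3 * v - 6

# the complete graph K_n has n(n-1)/2 edges
print(satisfies_planarity_bound(4, 6))   # True  (K4 is planar)
print(satisfies_planarity_bound(5, 10))  # False (K5 cannot be planar)
```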
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/312712",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 4,
"answer_id": 3
} |
Applying substitutions in lambda calculus For computing $2+3$, the lambda calculus goes the following: $(\lambda sz.s(sz))(\lambda wyx.y(wyx))(\lambda uv.u(u(uv)))$
I am having a hard time substituing and reaching the final form of $(\lambda wyx.y((wy)x))((\lambda wyx.y((wy)x))(\lambda uv.u(u(uv))))$. Can anyone provide step-by-step procedure?
| Recall that application is left-associative e.g. $w y x = \color{\red}{(}w y\color{red}{)} x$. Then the steps just follow by standard $\beta$-reduction
and $\zeta_1$ (which reduces the left-hand side of an application, i.e. $e_1 \rightarrow e_1' \Rightarrow e_1 e_2 \rightarrow e_1' e_2$).
In the following I have underlined the term which is about to be substituted:
$\color{red}{(}(\lambda sz.s(sz))\underline{(\lambda wyx.y(wyx))}\color{red}{)}(\lambda uv.u(u(uv)))$
$\xrightarrow{\zeta_1/\beta} (\lambda z.(\lambda wyx.y(wyx))((\lambda wyx.y(wyx))z))\underline{(\lambda uv.u(u(uv)))}$
$\xrightarrow{\beta} (\lambda wyx.y(wyx))((\lambda wyx.y(wyx))(\lambda uv.u(u(uv))))$
This is equal to your final form, which has some additional parentheses to make the associativity of $wyx$ explicit (i.e. $(\lambda wyx.y((wy)x))((\lambda wyx.y((wy)x))(\lambda uv.u(u(uv))))$).
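The same computation can be mimicked with Python closures, treating Church numerals as higher-order functions. A sketch: here the middle term $\lambda wyx.y(wyx)$ acts as a successor, and `to_int` is my own helper for reading off the numeral.

```python
two   = lambda s: lambda z: s(s(z))               # λsz. s(sz)
succ  = lambda w: lambda y: lambda x: y(w(y)(x))  # λwyx. y(wyx)
three = lambda u: lambda v: u(u(u(v)))            # λuv. u(u(uv))

result = two(succ)(three)   # the whole term: 2 applied to succ, applied to 3
to_int = lambda n: n(lambda k: k + 1)(0)
print(to_int(result))  # 5
```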
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/312844",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Exercise 6.1 in Serre's Representations of Finite Groups I am trying to show that if $p$ divides the order of $G$ then the group algebra $K[G]$ for $K$ a field of characteristic $p$ is not semisimple. Now Serre suggests us to consider the ideal
$$U = \left\{ \sum_{s \in G} a_s e_s \hspace{1mm} \Bigg| \hspace{1mm} \sum_{s\in G} a_s = 0\right\}$$
of $K[G]$ and show that there is no submodule $K[G]$ - submodule $V$ such that $U \oplus V = K[G]$. Now I sort of have a proof in the case that $G = S_3$ and $K = \Bbb{F}_2$ but can't seem to generalise.
Suppose there were such a $V$ in this case. Then $V$ is one - dimensional spanned by some vector $v \in K[G]$ such that the sum of the coefficients of $u$ is not zero. If that $v$ is say $e_{(1)}$, then by multiplying with all other other basis elements of $\Bbb{F}_2[S_3]$ and taking their sum we will get something in $U$, contradiction.
We now see that the general plan is this: If $v \notin U$ is the sum of a certain number of basis elements not in $U$, we can somehow multiply $v$ by elements in $K[G]$ then take the sum of all these to get something in $U$, contradiction.
How can this be generalised to an aribtrary field of characteristic $p$ and finite group $G$?
Thanks.
| Here's my shot and the problem. I thought of this solution just before going to bed last night.
Suppose that $p$ divides the order of $G$. Then Cauchy's Theorem says that there is an element $x$ of order $p$. Consider $e_x$. If we can find a submodule $V$ such that $K[G] = U \oplus V$ then we can write
$$e_x = u+v$$
for some $u\in U$ and $v \in V$. We note that $v \neq 0$ because $e_x \notin U$. Then consider the elements
$$\begin{eqnarray*} e_x &=& u + v \\
e_{x^2} &=& e_xu + e_x v \\
&\vdots & \\
e_{x^p} &=& e_{x^{p-1}}u + e_{x^{p-1}}v\end{eqnarray*}$$
and take their sum. We then have
$$\left(\sum_{i=1}^p e_{x^i}\right)\left(e_{1} - u\right) = \sum_{i=1}^{p} e_{x^i}v. $$
However the guys on the left are in $U$ while the sum on the right is in $V$, contradicting $U \cap V = \{0\}$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/312899",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 3,
"answer_id": 1
} |
Normalizer of $S_n$ in $GL_n(K)$ In the exercises on direct product of groups of Dummit & Foote, I proved that the symmetric group $S_n$ is isomorphic to a subgroup of $GL_n(K)$, called the permutation matrices with one 1 in each row and each column.
My question is how can I find the normalizer of this subgroup in $GL_n(K)$?
| Edit: I've revised the answer to make it more elementary, and to fix the error YACP pointed out (thank you).
Suppose $X\in N_{GL_n(K)}(S_n)$. Then for every permutation matrix $P\in S_n$ we have $XPX^{-1}\in S_n$, so conjugation by $X$ is an automorphism of $S_n$. If $n\ne 2, 6$, then as YACP noted it must be an inner automorphism, i.e. we have some $P'\in S_n$ such that for every $P\in S_n$, $XPX^{-1}=P'P{P'}^{-1}$. Thus $(X^{-1}P')P(X^{-1}P')^{-1}=P$, so $X^{-1}P'\in C_{GL_n(K)}(S_n)$ (the centralizer of $S_n$). Thus $X\in C_{GL_n(K)}(S_n)\cdot S_n$, so all we have to do is find $C_{GL_n(K)}(S_n)$, as $C_{GL_n(K)}(S_n)\cdot S_n\subseteq N_{GL_n(K)}(S_n)$ holds trivially.
Let $\mathcal C$ denote the set of all matrices (including non-invertible ones) $X$ such that $PXP^{-1}=X$ for all $P\in S_n$. Note that conjugation is linear, i.e. $A(X+Y)A^{-1}=AXA^{-1}+AYA^{-1}$ for any $A,X,Y\in M_{n\times n}(K)$, so $\mathcal C$ is closed under addition. Conjugation also respects scalar multiplication, i.e. $AcXA^{-1}=cAXA^{-1}$, so $\mathcal C$ is closed under scalar multiplication. Recall that $M_{n\times n}(K)$ is a vector space over $K$, so this makes $\mathcal C$ a subspace of $M_{n\times n}$. The use of $\mathcal C$ is that $C_{GL_n(K)}(S_n)=\mathcal C\cap GL_n(K)$, yet unlike $C_{GL_n(K)}(S_n)$ it is a vector subspace, and vector subspaces are nice to work with.
It is easy to see that $\mathcal C$ contains diagonal matrices $D$ with constant diagonal, as well as all matrices $M$ such that the entries $m_{ij}$ are the same for all $i,j$. Since $\mathcal C$ is a vector subspace, this means it contains all sums of these matrices as well. We want to show that every matrix in $\mathcal C$ can be written as $D+M$ where $D$ and $M$ are as above. If $X\in \mathcal C$ then we can subtract a diagonal matrix $D$ and a matrix $M$ of the second kind to get the upper left and right entries to be $0$:
$$X-D-M=\begin{pmatrix} 0 & x_{12} & \cdots & x_{1n-1} & 0\\
x_{21} & x_{22} & \cdots & x_{2n-1} & x_{2n}\\
\vdots & \vdots & \ddots & \vdots & \vdots\\
x_{n1} & x_{n2} & \cdots & x_{nn-1} & x_{nn}\\
\end{pmatrix}$$
Call this matrix $X'$; we wish to show $X'=0$. Exchanging the second and last column must be the same as exchanging the second and last row, and since the first action switches $x_{12}$ and $x_{1n}$ while the second leaves the first row unchanged we have $x_{12}=x_{1n}=0$. Continuing in this manner we see that the whole first row is $0$. Exchanging the first and second row is the same as exchanging the first and second column, so the whole second row must be $0$ as well. Continuing in this manner we get that $X'=0$ as desired. Thus $\mathcal C$ is the set of matrices of the form $D+M$, i.e. with $a$ on the diagonal and $b$ off the diagonal.
$C_{GL_n(K)}(S_n)$ is the set of such matrices with nonzero determinant. Let $X\in \mathcal C$ have entries $a$ on the diagonal and $b$ off it. Clearly if $a=b$ then the determinant is $0$, so suppose $a\ne b$. Then we can write $X=(a-b)(I_n+cr)$ where $c$ is a column consisting entirely of $1$'s and $r$ is a row consisting entirely of entries $\frac{b}{a-b}$. By Sylvester's Determinant Theorem the determinant of this is $(a-b)^n(1+rc)$, and $rc=\frac{nb}{a-b}$, which gives us $\det(X)=(a-b)^{n-1}(a-b+nb)$. Thus for any $X\in \mathcal C$, $\det(X)=0$ iff either $a=b$ or $a=(1-n)b$.
Putting this all together, we get that
$$N_{GL_n(K)}(S_n)=\left\{\begin{pmatrix} a & b & \cdots & b \\
b & a & \ddots &\vdots \\
\vdots &\ddots & \ddots & b\\
b & \cdots & b & a\\
\end{pmatrix}P: a\neq b, a\neq (1-n)b, P\in S_n \right\}$$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/312967",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 1,
"answer_id": 0
} |
Prove the limit problems I got two problems asking for the proof of the limit:
Prove the following limit: $$\sup_{x\ge 0}\ x e^{x^2}\int_x^\infty e^{-t^2} \, dt={1\over 2}.$$
and,
Prove the following limit: $$\sup_{x\gt 0}\ x\int_0^\infty {e^{-px}\over {p+1}} \, dp=1.$$
I feel that these two problems are of the same kind. Would anyone please help me with one of them, and I may figure out the other one? Many thanks!
| Let
$$ f(x)= x e^{x^2}\int_x^\infty e^{-t^2}\,dt \implies f(x)= x e^{x^2}g(x), \qquad g(x):=\int_x^\infty e^{-t^2}\,dt.$$
We can see that $ f(0)=0 $ and $f(x)>0,\,\, \forall x>0$. Taking the limit as $x$ goes to infinity and using L'Hôpital's rule and the Leibniz integral rule yields
$$ \lim_{ x\to \infty } xe^{x^2}g(x) = \lim _{x\to \infty} \frac{g(x)}{\frac{1}{xe^{x^2}}}=\lim_{x \to \infty} \frac{g'(x)}{\left(\frac{1}{xe^{x^2}}\right)'}=\lim_{x \to \infty} \frac{-e^{-x^2}}{{-{\frac {{{\rm e}^{-{x}^{2}}} \left( 2\,{x}^{2}+1 \right) }{{x}^{2}}}}} =\lim_{x \to \infty}\frac{x^2}{2x^2+1}=\frac{1}{2}. $$
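Not a substitute for the proof, but the limit is easy to check numerically, since $\int_x^\infty e^{-t^2}\,dt=\frac{\sqrt{\pi}}{2}\operatorname{erfc}(x)$:

```python
import math

def f(x):
    # ∫_x^∞ e^{-t^2} dt = (√π / 2) · erfc(x)
    return x * math.exp(x * x) * math.sqrt(math.pi) / 2 * math.erfc(x)

for x in (1, 2, 5, 12):
    print(x, f(x))   # the values increase toward 1/2
```

The asymptotic expansion $f(x)=\tfrac12-\tfrac1{4x^2}+\cdots$ is visible in the printed values.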
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/313025",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 5,
"answer_id": 0
} |
Help with writing the following as a partial fraction $\frac{4x+5}{x^3+1}$. I need help with writing the following as a partial fraction:
$$\frac{4x+5}{x^3+1}$$
My attempts so far are to factor $x^3+1$ into $(x+1)$ and $(x^2-x+1)$
This gives me: $A(x^2-x+1)+B(x+1)$.
But I have problems with solving the equation system that this gives:
$A = 0$ (since there are no $x^2$ terms in $4x+5$)
$-A+B =4$ (since there are $4$ $x$ terms in $4x+5$)
$A+B = 5$ (since the constant is $5$ in $4x+5$)
this gives me $A=0.5$ and $B=4.5$ and $\frac{1/2}{x+1}, \frac{9/2}{x^2-x+1}$
This is apparently wrong. Where is my reasoning faulty?
Thank you!
After your factorization, each numerator should be a general polynomial of degree one less than its denominator factor; in particular, the quadratic factor $x^2-x+1$ needs a linear numerator $Ax+B$, not a constant.
This leads to:
$$\frac{Ax+B}{x^2-x+1} + \frac{C}{x+1} = \frac{4x+5}{x^3+1}$$
This gives us:
$$Ax^2 + Ax + Bx + B + Cx^2 - Cx + C = 4x + 5$$
This leads to:
$A + C = 0$
$A + B - C = 4$
$B + C = 5$
yielding:
$$A = -\frac{1}{3}, B = \frac{14}{3}, C = \frac{1}{3}$$
Writing the expansion out yields:
$$\frac{4x+5}{x^3+1} = \frac{14 - x}{3(x^2-x+1)} + \frac{1}{3(x+1)}$$
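A numerical spot-check of the final decomposition (agreement at a handful of points is strong evidence the algebra was done right):

```python
def lhs(x):
    return (4 * x + 5) / (x ** 3 + 1)

def rhs(x):
    return (14 - x) / (3 * (x ** 2 - x + 1)) + 1 / (3 * (x + 1))

# both sides should agree wherever the denominators are nonzero
for x in (0.5, 2.0, -3.0, 10.0):
    assert abs(lhs(x) - rhs(x)) < 1e-12
```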
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/313151",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Integration theory Any help with this problem is appreciated.
Given the $f$ is measurable and finite a.e. on $[0,1]$. Then prove the following statements
$$ \int_E f = 0 \text{ for all measurable $E \subset [0,1]$ with $\mu(E) = 1/2$ }\Rightarrow f = 0 \text{ a.e. on } [0,1]$$
$$ f > 0 \text{ a.e. } \Rightarrow \inf ~ \left\{\int_E f : \mu(E) \geq 1/2\right\} > 0 $$
| For $(1)$, we can define the sets $ P:= \{x:f(x)\ge 0\}$ and $ N:=\{x:f(x)\le 0\}$. Then either $\mu(P)\ge \frac{1}{2}$ or $\mu(N)\ge \frac{1}{2}$. Suppose $\mu(P)\ge \frac{1}{2}$, define $SP:= \{x:f(x)> 0\},$ then $\ SP\subset P$. If $ \mu(SP)<\frac{1}{2}$, we can choose a set $E$ such that $$SP \subset E,\ f(x)\ge 0\ on \ E,\ and\ \mu(E)=\frac{1}{2}$$ According to the hypothesis, $\int_Ef=0$ which implies $\mu(SP)=0$($f$ is non-negative on $E\ \Rightarrow \ f=0\ a.e.\ on\ E$). I think you are able to show the case when $\mu(SP)>\frac{1}{2}$. Similarly, if we define $ SN:=\{x: f(x)<0\} $, we can show $\mu(SN)=0$.
For $(2)$, first define $A_n:=\{x:f(x)>\frac{1}{n}\}$; then $A_n$ is increasing and $\lim_{n\to \infty}\mu(A_n)=1$ since $f>0\ a.e.$. Fix $\epsilon_0<\frac{1}{2}$ and a sufficiently large $n_0$ so that $\mu(A_{n_0})>1-\epsilon_0$; then for any $E$ with $\mu(E)\ge \frac{1}{2}$, we have
$$ \int_Ef\,\mathrm{d}\mu=\int_{E\cap A_{n_0}^c} f\,\mathrm{d}\mu+\int_{E\cap A_{n_0}} f\,\mathrm{d}\mu\ge \int_{E\cap A_{n_0}} f\,\mathrm{d}\mu\ge \frac{1}{n_0}\cdot \mu(E\cap A_{n_0})$$
Note that $\mu(E\cap A_{n_0})\ge \mu (E)+\mu (A_{n_0})-1> \frac{1}{2}+(1-\epsilon_0)-1=\frac{1}{2}-\epsilon_0$, hence $\int_E f\,\mathrm{d}\mu>\frac{1}{n_0}\cdot (\frac{1}{2}-\epsilon_0)>0$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/313227",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 1,
"answer_id": 0
} |
How can I prove that a sequence has a given limit? For example, let's say that I have some sequence $$\left\{c_n\right\} = \left\{\frac{n^2 + 10}{2n^2}\right\}$$ How can I prove that $\{c_n\}$ approaches $\frac{1}{2}$ as $n\rightarrow\infty$?
I'm using the Buchanan textbook, but I'm not understanding their proofs at all.
| Well we want to show that for any $\epsilon>0$, there is some $N\in\mathbb N$ such that for all $n>N$ we have $|c_n-1/2|<\epsilon$ (this is the definition of a limit). In this case we are looking for a natural number $N$ such that if $n>N$ then
$$\left|\frac{n^2+10}{2n^2}-\frac{1}{2}\right|=\frac{5}{n^2}<\epsilon$$
We can make use of what's called the Archimedean property, which is that for any real number $x$ there is a natural number larger than it. To do so, note that the above equation is equivalent to $\frac{n^2}{5}>\frac{1}{\epsilon}$, or $n^2>\frac{5}{\epsilon}$. If we choose $N$ to be a natural number greater than $\frac{5}{\epsilon}$, then if $n>N$ we have $n^2>N>\frac{5}{\epsilon}$ as desired. Thus $\lim\limits_{n\to\infty} c_n = \frac12$.
To relate this to the definition of limit in Buchanan: You want to show that for any neighborhood of $1/2$, there is some $N\in\mathbb N$ such that if $n>N$ then $c_n$ is in the neighborhood. Now note that any neighborhood contains an open interval around $1/2$, which takes the form $(1/2-\epsilon,1/2+\epsilon)$. Saying that $c_n$ is in this open interval is the same as saying that $|c_n-1/2|<\epsilon$.
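The proof's choice of $N$ can be sanity-checked numerically: for a given $\epsilon$, take any integer $N$ with $N^2>5/\epsilon$ and verify that the tail of the sequence stays within $\epsilon$ of $\tfrac12$:

```python
def c(n):
    return (n * n + 10) / (2 * n * n)

eps = 1e-3
N = int((5 / eps) ** 0.5) + 1   # any integer with N^2 > 5/eps works
for n in range(N + 1, N + 200):
    assert abs(c(n) - 0.5) < eps   # |c_n - 1/2| = 5/n^2 < eps
```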
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/313298",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 1
} |
Prove that either $m$ divides $n$ or $n$ divides $m$ given that $\operatorname{lcm}(m,n) + \operatorname{gcd}(m,n) = m + n$? We are given that $m$ and $n$ are positive integers such that $\operatorname{lcm}(m,n) + \operatorname{gcd}(m,n) = m + n$.
We are looking to prove that one of numbers (either $m$ or $n$) must be divisible by the other.
| We may suppose without loss of generality that $m \le n$. If $\text{lcm}(m,n) > n$, then $\text{lcm}(m,n) \ge 2n$, since $\text{lcm}(m,n)$ is a multiple of $n$. But then we have
$\text{lcm}(m,n) < \text{lcm}(m,n)+\gcd(m,n) = m + n \le 2n \le \text{lcm}(m,n)$,
a contradiction. So $\text{lcm}(m,n) = n$; since $m$ always divides $\text{lcm}(m,n)$, this means $m$ divides $n$.
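In fact the identity is an equivalence (if $m\mid n$ then $\text{lcm}=n$ and $\gcd=m$, so the sum is $m+n$), which is easy to confirm by brute force for small $m,n$:

```python
from math import gcd

def lcm(m, n):
    return m * n // gcd(m, n)

# lcm + gcd == m + n holds exactly when one of the numbers divides the other
for m in range(1, 60):
    for n in range(1, 60):
        identity = lcm(m, n) + gcd(m, n) == m + n
        divides = n % m == 0 or m % n == 0
        assert identity == divides
```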
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/313355",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 6,
"answer_id": 1
} |
Integral Sign with indicator function and random variable I have the following problem. I need to consider all the conditions in which the following integral may be equal to zero:
$$\int_\Omega [p\phi-\lambda(\omega)]f(\omega)\iota(\omega)d\omega$$
Where $p>0$ is a constant. $f$ is a probability density function and $\iota$ is an indicator function (i.e. I am truncating the density). $\lambda(\cdot)$ is an increasing function in $\omega$ and $\phi$ is a random variable, so I am solving this integral for any possible realization of $\phi$. Of course there is a trivial solution when $p\phi=\lambda(\omega^*)$ for some $\omega^*$. But does there exist any other case in which this is zero that I am not considering?
| *
*The expectation $\mathbb{E}\left[\iota(p\phi-\lambda)\right]$ could be zero depending on the value of $p\phi-\lambda$ on all $\omega$.
*If there is no $\omega$ for which $\iota(\omega)=1$, again you end up with a zero.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/313420",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
How many triangles are formed by $n$ chords of a circle? This is a homework problem I have to solve, and I think I might be misunderstanding it. I'm translating it from Polish word for word.
$n$ points are placed on a circle, and all the chords whose endpoints they are are drawn. We assume that no three chords intersect at one point.
a) How many parts do the chords dissect the disk?
b) How many triangles are formed whose sides are the chords or their fragments?
I think the answer to a) is $2^n$. But I couldn't find a way to approach b), so I calculated the values for small $n$ and asked OEIS about them. I got A006600. And it appears that there is no known formula for all $n$. This page says that
$\text{triangles}(n) = P(n) - T(n)$
where $P(n)$ is the number of triangles for a convex n-gon in general
position. This means there are no three diagonals through one point
(except on the boundary). (There are no degenerate corners.)
This number is easy to calculate as:
$$P(n) = {n\choose 3} + 4{n\choose 4} + 5{n\choose5} + {n\choose6}
= {n(n-1)(n-2)(n^3 + 18 n^2 - 43 n + 60)\over720}$$
The four terms count the triangles in the following manner [CoE76]:
$n\choose3$: Number of triangles with 3 corners at the border.
$4{n\choose4}$: Number of triangles with 2 corners at the border.
$5{n\choose5}$: Number of triangles with 1 corner at the border.
$n\choose6$: Number of triangles with 0 corners at the border.
$T(n)$ is the number of triple-crossings (this is the number of
triples of diagonals which are concurrent) of the regular $n$-gon.
It turns out that such concurrences cannot occur for n odd, and,
except for obvious cases, can only occur for $n$ divisible by $6$.
Among other interesting results, Bol [Bol36] finds values of n
for which $4$, $5$, $6$, and $7$ diagonals are concurrent and shows that
these are the only possibilities (with the center for exception).
The function $T(n)$ for $n$ not divisible by $6$ is:
$$T(n) = {1\over8}n(n-2)(n-7)[2|n] + {3\over4}n[4|n].$$
where $[k|n]$ is defined as $1$ if $k$ is a divisor of $n$ and otherwise $0$.
The intersection points need not lie on any of the lines of symmetry of
the $2m$-gon, e. g. for $n=16$ the triple intersection of $(0,7),(1,12),(2,14)$.
If I understand the text correctly, it doesn't give a general formula for $T(n)$. Also I've found a statement somewhere else that some mathematician wasn't able to give a general formula solving this problem. I haven't found a statement that it is still an open problem, but it looks like it to me.
So am I just misunderstanding the problem, or misunderstanding what I've found on the web, or maybe it is indeed a very hard problem? It's the beginning of the semester, our first homework, and it really scares me.
| The answer to a) isn't $2^n$. It isn't even $2^{n-1}$, which is probably what you meant. Draw the case $n=6$, carefully, and count the regions.
As for $T(n)$, the number of triple-crossings, the problem statement specifically says there are no triple-crossings.
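For reference after you have counted: part a) is the classical Moser circle problem, whose closed form $1+\binom n2+\binom n4$ is easy to tabulate and compare against a careful drawing:

```python
from math import comb

def regions(n):
    # regions of the disk, assuming no three chords meet at one interior point
    return 1 + comb(n, 2) + comb(n, 4)

print([regions(n) for n in range(1, 8)])  # [1, 2, 4, 8, 16, 31, 57]
```

Note how the sequence matches $2^{n-1}$ up to $n=5$ and then breaks at $n=6$, which is exactly the trap in the problem.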
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/313489",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
} |
Clarification Regarding the Tor Functor involved in a Finite Exact Sequence Let $\cdots\rightarrow F_1 \rightarrow F_0 \rightarrow M \rightarrow 0$ be a free resolution of the $A$-module $M$. Let $N$ be an $A$-module. I saw in some notes that we have an exact sequence $0 \rightarrow \operatorname{Tor}(M,N) \rightarrow F_1 \otimes N \rightarrow F_0 \otimes N \rightarrow M \otimes N \rightarrow 0$. Why is that true?
Edited:
Let me explain the source of my confusion. In my study, $\operatorname{Tor}_n(M,N)$ was defined as the homology of dimension $n$ of the double complex $K_{p,q}=F_p \otimes Q_q$, where $F_{\cdot}, Q_{\cdot}$ are projective resolutions of $M,N$ respectively. It was shown that $\operatorname{Tor}_n(M,N) = \operatorname{Tor}_n(F_{\cdot} \otimes N)=H_n(F_{\cdot} \otimes N) = \frac{Ker(F_n \otimes N \rightarrow F_{n-1} \otimes N)}{Im(F_{n+1} \otimes N \rightarrow F_{n} \otimes N)}$. Now if the sequence
$0 \rightarrow \operatorname{Tor}(M,N) \rightarrow F_1 \otimes N \rightarrow F_0 \otimes N \rightarrow M \otimes N \rightarrow 0$ is exact, then $\operatorname{Tor}_1(M,N) = \operatorname{Ker}(F_1 \otimes N \rightarrow F_{0} \otimes N)$, which means that $\operatorname{Im}(F_{2} \otimes N \rightarrow F_{1} \otimes N)=0$, which does not make sense. What am I missing?
PS: The reference that i am using is Matsumura's Commutative Ring Theory Appendix B. That's as far as i have gone so far with homological algebra.
| The tensor functor is right-exact. We have an exact sequence
$F_1\to F_0\to M\to 0$,
implying that the complex
$F_2\otimes N\to F_1\otimes N\to F_0\otimes N\to M\otimes N\to 0$,
is exact except possibly at $F_1\otimes N$. Now how can you calculate $\mathrm{Tor}_1(M,N)$? You should be able to figure it out from here.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/313532",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 0
} |
Prove that $ \left(1+\frac a b \right) \left(1+\frac b c \right)\left(1+\frac c a \right) \geq 2\left(1+ \frac{a+b+c}{\sqrt[3]{abc}}\right)$.
Given $a,b,c>0$, prove that $\displaystyle \left(1+\frac a b \right) \left(1+\frac b c \right)\left(1+\frac c a \right) \geq 2\left(1+ \frac{a+b+c}{\sqrt[3]{abc}}\right)$.
I expanded the LHS, and realized I have to prove $\displaystyle\frac a b +\frac a c +\frac b c +\frac b a +\frac c a +\frac c b \geq \frac{2(a+b+c)}{\sqrt[3]{abc}}$, but I don't know how. Please help. Thank you.
| We can begin by clearing denominators as follows
$$a^2c+a^2b+b^2a+b^2c+c^2a+c^2b\geq 2a^{5/3}b^{2/3}c^{2/3}+2a^{2/3}b^{5/3}c^{2/3}+2a^{2/3}b^{2/3}c^{5/3}$$
Now by the Arithmetic Mean - Geometric Mean Inequality,
$$\frac{2a^2c+2a^2b+b^2a+c^2a}{6} \geq a^{5/3}b^{2/3}c^{2/3}$$
That is,
$$\frac{2}{3}a^2c+\frac{2}{3}a^2b+\frac{1}{3}b^2a+\frac{1}{3}c^2a \geq 2a^{5/3}b^{2/3}c^{2/3}$$
Similarly, we have
$$\frac{2}{3}b^2a+\frac{2}{3}b^2c+\frac{1}{3}c^2b+\frac{1}{3}a^2b \geq 2a^{2/3}b^{5/3}c^{2/3}$$
$$\frac{2}{3}c^2b+\frac{2}{3}c^2a+\frac{1}{3}a^2c+\frac{1}{3}b^2c \geq 2a^{2/3}b^{2/3}c^{5/3}$$
Summing these three inequalities together, we obtain the desired result.
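A quick randomized check of the original inequality (equality holds at $a=b=c$, where both sides equal $8$):

```python
import random

random.seed(0)
for _ in range(1000):
    a, b, c = (random.uniform(0.01, 10.0) for _ in range(3))
    lhs = (1 + a / b) * (1 + b / c) * (1 + c / a)
    rhs = 2 * (1 + (a + b + c) / (a * b * c) ** (1 / 3))
    assert lhs >= rhs - 1e-9   # small slack for floating-point roundoff
```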
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/313605",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10",
"answer_count": 4,
"answer_id": 2
} |
Derivative of the off-diagonal $L_1$ matrix norm We define the off-diagonal $L_1$ norm of a matrix as follows: for any $A\in \mathcal{M}_{n,n}$, $$\|A\|_1^{\text{off}} = \sum_{i\ne j}|a_{ij}|.$$
So what is $$\frac{\partial \|A\|_1^{\text{off}}}{\partial A}\;?$$
| $
\def\p{\partial}
\def\L{\left}\def\R{\right}\def\LR#1{\L(#1\R)}
\def\t#1{\operatorname{Tr}\LR{#1}}
\def\s#1{\operatorname{sign}\LR{#1}}
\def\g#1#2{\frac{\p #1}{\p #2}}
$Use the element-wise sign() function to define the matrix
$\,S = \s{X}.\;$
The gradient and differential of the Manhattan norm can be written as
$$\eqalign{
\g{\|X\|_1}{X} &= S \quad\iff\quad d\|X\|_1 = S:dX \\
}$$
Suppose that $X$ itself is defined in terms of another matrix $A$
$$\eqalign{
F &= (J-I), \qquad
X &= F\odot A, \qquad
S &= \s{F\odot A} \\
}$$
where $J$ is the all-ones matrix, $I$ is the identity, and $(\odot)$ is the Hadamard product.
$\big($ Note that $X$ is composed of the off-diagonal elements of $A\,\big)$
Substituting into the known differential yields the desired gradient
$$\eqalign{
&\quad d\|X\|_1 = S:dX = S:(F\odot dA) = (F\odot S):dA \\
&\boxed{\;\g{\|X\|_1}{A} = F\odot S\;}
\\
}$$
where $(:)$ denotes the Frobenius product, which is a convenient notation for the trace
$$\eqalign{
A:B &= \sum_{i=1}^m\sum_{j=1}^n A_{ij}B_{ij} \;=\; \t{A^TB} \\
A:A &= \big\|A\big\|^2_F \\
}$$
and which commutes with the Hadamard product
$$\eqalign{
A:(B\odot C) &= \sum_{i=1}^m\sum_{j=1}^n A_{ij}B_{ij}C_{ij} \\
&= (A\odot B):C \\\\
}$$
NB: The gradient above is not defined for any element of $X$ equal to zero, because the sign() function itself is discontinuous at zero.
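A quick finite-difference check of the boxed gradient (entries $\operatorname{sign}(a_{ij})$ off the diagonal, $0$ on it), in plain Python on a small matrix with no zero off-diagonal entries:

```python
def off_norm(A):
    n = len(A)
    return sum(abs(A[i][j]) for i in range(n) for j in range(n) if i != j)

A = [[1.0, -2.0, 3.0],
     [0.5, -1.0, -4.0],
     [2.0, 1.5, 0.25]]

h = 1e-6
for i in range(3):
    for j in range(3):
        Ah = [row[:] for row in A]
        Ah[i][j] += h
        fd = (off_norm(Ah) - off_norm(A)) / h          # finite-difference slope
        expected = 0.0 if i == j else (1.0 if A[i][j] > 0 else -1.0)
        assert abs(fd - expected) < 1e-6
```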
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/313704",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 1
} |
$f$ is integrable but has no indefinite integral Let $$f(x)=\cases{0,& $x\ne0$\cr 1, &$x=0.$}$$
Then $f$ is clearly integrable, yet has no antiderivative, on any interval containing $0,$ since any such antiderivative would have a constant value on each side of $0$ and have slope $1$ at $0$—an impossibility.
So does this mean that $f$ has no indefinite integral?
EDIT
My understanding is that the indefinite integral of $f$ is the family of all the antiderivatives of $f,$ and conceptually requires some antiderivative to be defined on the entire domain. Is this correct?
| Just to supplement Emanuele’s answer. If $ I $ is an open subset of $ \mathbb{R} $, then some mathematicians define the indefinite integral of a function $ f: I \to \mathbb{R} $ as follows:
$$
\int f ~ d{x} \stackrel{\text{def}}{=} \{ g \in {D^{1}}(I) ~|~ f = g' \}.
$$
Hence, taking the indefinite integral of $ f $ yields a family of antiderivatives of $ f $. If $ f $ has no antiderivative, then according to the definition above, we have
$$
\int f ~ d{x} = \varnothing.
$$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/313775",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9",
"answer_count": 3,
"answer_id": 1
} |
Prove $1+\frac{1}{\sqrt{2}}+\frac{1}{\sqrt{3}}+\dots+\frac{1}{\sqrt{n}} > 2\:(\sqrt{n+1} − 1)$ Basically, I'm trying to prove (by induction) that:
$$1+\frac{1}{\sqrt{2}}+\frac{1}{\sqrt{3}}+\dots+\frac{1}{\sqrt{n}} > 2\:(\sqrt{n+1} − 1)$$
I know to begin, we should use a base case. In this case, I'll use $1$. So we have:
$$1 > 2\:(\sqrt{1+1} - 1) = 2\:(\sqrt{2}-1) \approx 0.83$$
Which works out.
My problem is the next step. What comes after this? Thanks!
| Mean Value Theorem can also be used,
Let $\displaystyle f(x)=\sqrt{x}$
$\displaystyle f'(x)=\frac{1}{2}\frac{1}{\sqrt{x}}$
Using mean value theorem we have:
$\displaystyle \frac{f(n+1)-f(n)}{(n+1)-n}=f'(c)$ for some $c\in(n,n+1)$
$\displaystyle \Rightarrow \sqrt{n+1}-\sqrt{n}=\frac{1}{2}\frac{1}{\sqrt{c}} \qquad (1)$
$\displaystyle \frac{1}{\sqrt{n+1}}<\frac{1}{\sqrt{c}}<\frac{1}{\sqrt{n}}$
Using the above ineq. in $(1)$ we have,
$\displaystyle \frac{1}{2\sqrt{n+1}}<\sqrt{n+1}-\sqrt{n}<\frac{1}{2\sqrt{n}}$
Summing the left part of the inequality, we have $\displaystyle\sum_{k=2}^{n}\frac{1}{2\sqrt{k}}<\sum_{k=2}^{n}(\sqrt{k}-\sqrt{k-1})=\sqrt{n}-1$
$\Rightarrow \displaystyle\sum_{k=2}^{n}\frac{1}{\sqrt{k}}<2\sum_{k=2}^{n}(\sqrt{k}-\sqrt{k-1})=2(\sqrt{n}-1)$
$\Rightarrow \displaystyle1+\sum_{k=2}^{n}\frac{1}{\sqrt{k}}<1+2\sum_{k=2}^{n}(\sqrt{k}-\sqrt{k-1})=2\sqrt{n}-2+1=2\sqrt{n}-1$
$\Rightarrow \displaystyle\sum_{k=1}^{n}\frac{1}{\sqrt{k}}<2\sqrt{n}-1$
Similarly adding the right side of the inequality we have,
$\displaystyle\sum_{k=1}^{n}\frac{1}{2\sqrt{k}}>\sum_{k=1}^{n}(\sqrt{k+1}-\sqrt{k})=\sqrt{n+1}-1$
$\Rightarrow \displaystyle\sum_{k=1}^{n}\frac{1}{\sqrt{k}}>2(\sqrt{n+1}-1)$
This completes the proof.
$\displaystyle 2\sqrt{n+1}-2<\sum_{k=1}^{n}{\frac{1}{\sqrt{k}}}<2\sqrt{n}-1.$
This is a much better proof than proving by induction (of course, provided one knows elementary calculus).
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/313834",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11",
"answer_count": 4,
"answer_id": 2
} |
Elementary books by good mathematicians I'm interested in elementary books written by good mathematicians.
For example:
*
*Gelfand (Algebra, Trigonometry, Sequences)
*Lang (A first course in calculus, Geometry)
I'm sure there are many other ones. Can you help me to complete this short list?
As for the level, I'm thinking of pupils (can be advanced ones) whose age is less than 18.
But books a bit more advanced could interest me. For example Roger Godement books: Analysis I & II are full of nice little results that could be of interest at an elementary level.
| Solving Mathematical Problems: A Personal Perspective by Terence Tao
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/313980",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "29",
"answer_count": 21,
"answer_id": 6
} |
Ramsey Number Inequality: $R(\underbrace{3,3,...,3,3}_{k+1}) \le (k+1)(R(\underbrace{3,3,...3}_k)-1)+2$ I want to prove that:
$$R(\underbrace{3,3,...,3,3}_{k+1}) \le (k+1)(R(\underbrace{3,3,...3}_k)-1)+2$$
where R is a Ramsey number. In the LHS, there are $k+1$ $3$'s, and in the RHS, there are $k$ $3$'s. I really have no clue how to start this proof. Any help is appreciated!
| The general strategy will probably look something like this. (Let $R = R(\underbrace{3,\ldots,3}_k)$ in what follows.)
*
*Take $k+1$ copies of $K_{R-1}$, the complete graph on $R-1$ vertices, and assume that the edges of each one are colored with $k+1$ colors so as to avoid monochromatic triangles.
*Then suppose that no copy contains a triangle in the $k+1$'th color.
*Then show that if you add 2 more vertices $u,v$ there must be some monochromatic triangle unless (something about the edges between $u$ and the $K_{R-1}$'s, and between $v$ and the $K_{R-1}$'s).
*Then show that the edge from $u$ to $v$ must complete a monochromatic triangle with some vertex in one of the $K_{R-1}$'s.
A more careful analysis would consider the many edges between one $K_{R-1}$ and another, but with any luck you won't have to do that.
On review, I think Alex's suggestion of induction on $k$ is probably an excellent one.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/314013",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
If $f(x_0+x)=P(x)+O(x^n)$, is $f$ $m$ times differentiable at $x_0$? Let $f : \mathbb{R} \to \mathbb{R}$ be a real function and $x_0 \in \mathbb{R}$ be a real number. Suppose that there exists a polynomial $P \in \mathbb{R}[X]$ such that $f(x_0+x)=P(x)+ \underset{x \to 0}{O} (x^n)$ with $n> \text{deg}(P)$.
Is it true that for $m<n$, $f^{(m)}(x_0)$ exists? (Of course, it is obvious for $m=1$.) If so, we can notice that $f^{(m)}(x_0)=P^{(m)}(0)$.
| No, these are not true, and classical counterexamples are based on $f(x)=|x|^a\sin(1/|x|^b)$ for $x\ne0$ and $f(0)=0$, considered at $x_0=0$, for well chosen positive $a$ and $b$.
Basically, the idea is that the limited expansion of $f$ at $0$ is $f(x)=O(|x|^n)$ (no polynomial term) with $n$ large if $a$ is large but that $f''(0)$ need not exist when $b$ is suitably larger than $a$.
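To see the failure concretely, take $a=3$, $b=1$: $f(x)=x^3\sin(1/x)$, $f(0)=0$. Then $f(x)=O(x^3)$ at $0$ (so $P=0$, $n=3$) and $f'(0)=0$, but $f''(0)$ would have to be $\lim_{x\to0} f'(x)/x$, and that ratio oscillates:

```python
import math

# f'(x) = 3x^2 sin(1/x) - x cos(1/x), so f'(x)/x = 3 sin(t)/t - cos(t) with t = 1/x.
def ratio(t):
    return 3 * math.sin(t) / t - math.cos(t)

k = 10 ** 6
print(ratio(2 * math.pi * k))        # near -1  (here cos t = 1)
print(ratio((2 * k + 1) * math.pi))  # near +1  (here cos t = -1)
```

Since the ratio takes values near $-1$ and $+1$ on points arbitrarily close to $0$, the limit, and hence $f''(0)$, does not exist.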
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/314102",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
How is this derived? In my textbook I find the following derivation:
$$ \displaystyle \lim _{n \to \infty} \dfrac{1}{n} \displaystyle \sum ^n _{k=1} \dfrac{1}{1 + k/n} = \displaystyle \int^1_0 \dfrac{dx}{1+x}$$
I understand that it's $\displaystyle \int^1_0$ but I don't understand the $\dfrac{dx}{1+x}$ part.
| The sum is a Riemann Sum for the given integral. As $n\to\infty$,
$$
\sum_{k=1}^n\frac1{1+k/n}\frac1n
$$
tends to the sum of rectangles $\frac1{1+k/n}$ high and $\frac1n$ wide. This approximates the integral
$$
\int_0^1\frac1{1+x}\mathrm{d}x
$$
where $x$ is represented by $k/n$ and $\mathrm{d}x$ by $\frac1n$.
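A numerical check: the right-endpoint Riemann sum with $n=10^5$ subintervals is already very close to $\int_0^1\frac{dx}{1+x}=\ln 2$:

```python
import math

n = 10 ** 5
riemann = sum(1 / (1 + k / n) for k in range(1, n + 1)) / n   # right-endpoint sum
print(riemann, math.log(2))   # both approximately 0.6931
```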
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/314171",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 4,
"answer_id": 0
} |
Limit $a_{n+1}=\frac{(a_n)^2}{6}(n+5)\int_{0}^{3/n}{e^{-2x^2}} \mathrm{d}x$ I need to find the limit when $n$ goes to $\infty$ of
$$a_{n+1}=\frac{(a_n)^2}{6}(n+5)\int_{0}^{3/n}{e^{-2x^2}} \mathrm{d}x, \quad a_{1}=\frac{1}{4}$$
Thanks in advance!
| Obviously $a_n > 0$ for all $n \in \mathbb{N}$. Since
$$
\int_0^{3/n} \exp(-2 x^2) \mathrm{d}x < \int_0^{3/n} \mathrm{d}x = \frac{3}{n}
$$
We have
$$
a_{n+1} < \frac{1}{2} a_n^2 \left(1 + \frac{5}{n} \right) \leqslant 3 a_n^2
$$
Consider sequence $b_n$, such that $b_1 = a_1$ and $b_{n+1} = 3 b_n^2$, then $a_n \leqslant b_n$ by induction on $n$. But $b_n$ admits a closed form solution:
$$
b_n = \frac{1}{3} \left(\frac{3}{4} \right)^{2^{n-1}}
$$
and $\lim_{n \to \infty }b_n = 0$. Thus, since $0 < a_n \leqslant b_n$, $\lim_{n \to \infty} a_n = 0$ by squeeze theorem.
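A numerical iteration agrees: the integral has the closed form $\int_0^{3/n}e^{-2x^2}\,dx=\sqrt{\pi/8}\,\operatorname{erf}(3\sqrt2/n)$, and the iterates collapse to $0$ almost immediately:

```python
import math

def step(a, n):
    # closed form: ∫_0^{3/n} e^{-2x^2} dx = sqrt(pi/8) * erf(3*sqrt(2)/n)
    integral = math.sqrt(math.pi / 8) * math.erf(3 * math.sqrt(2) / n)
    return a * a / 6 * (n + 5) * integral

a = 0.25                 # a_1 = 1/4
for n in range(1, 15):   # a_{n+1} = step(a_n, n)
    a = step(a, n)
print(a)                 # essentially 0 already
```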
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/314233",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 1
} |
Sequence Proof with Binomial Coefficient Suppose $\lim_{n\to \infty }z_n=z$.
Let $w_n=\sum_{k=0}^n {2^{-n}{n \choose k}z_k}$
Prove $\lim_{n\to \infty }w_n=z$.
I'm pretty sure I need to use $\sum_{k=0}^n{n \choose k}$ = $2^{n}$ in the proof. Help? Thoughts?
| $$w_n-z=\sum_{k=0}^n2^{-n}\binom{n}k(z_k-z)$$
Fix $\epsilon>0$. There is an $m\in\Bbb Z^+$ such that $|z_k-z|<\frac{\epsilon}2$ whenever $k\ge m$. Clearly
$$\lim_{n\to\infty}\sum_{k=0}^m2^{-n}\binom{n}k=0\;,$$
so there is an $r\ge m$ such that $$\sum_{k=0}^m2^{-n}\binom{n}k|z_k-z|<\frac{\epsilon}2$$ whenever $n\ge r$.
Then
$$\begin{align*}
|w_n-z|&\le\sum_{k=0}^m2^{-n}\binom{n}k|z_k-z|+\sum_{k=m+1}^n2^{-n}\binom{n}k|z_k-z|\\
&\le\sum_{k=0}^m2^{-n}\binom{n}k|z_k-z|+\sum_{k=0}^n2^{-n}\binom{n}k|z_k-z|\\
&\le\frac{\epsilon}2+\frac{\epsilon}2\sum_{k=0}^n2^{-n}\binom{n}k\\
&=\epsilon
\end{align*}$$
for $n\ge r$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/314289",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Show that $\sigma(n) = \sum_{d|n} \phi(d) d(\frac{n}{d})$ This is a homework question and I am to show that $$\sigma(n) = \sum_{d|n} \phi(d) d\left(\frac{n}{d}\right)$$ where $\sigma(n) = \sum_{d|n}d$, $d(n) = \sum_{d|n} 1 $ and $\phi$ is the Euler Phi function.
What I have. Well I know $$\sum_{d|n}\phi(d) = n$$ I also know that every $n\in \mathbb{Z}^+$ has a prime factorization $n = p_1^{a_1} \ldots p_k^{a_k}$, so since $\sigma$ is a multiplicative function, we have $\sigma(n) = \sigma(p_1^{a_1})\sigma(p_2^{a_2})\cdots\sigma(p_k^{a_k})$
I also know the theorem of Möbius Inversion Formula and the fact that if $f$ and $g$ are artihmetic functions, then $$f(n) = \sum_{d|n}g(d)$$ iff $$g(n) = \sum_{d|n}f(d)\mu\left(\frac{n}{d}\right)$$
Please post no solution, only hints. I will post the solution myself for others when I have figured it out.
First hint: verify the formula when $n$ is a power of a prime. Then, prove that the function $n \mapsto \sum_{d \mid n} \phi(d) d(n/d)$ is also multiplicative, so it must coincide with $\sigma$. In fact, prove more generally that if $a_n$ and $b_n$ are multiplicative, then $n\mapsto \sum_{d \mid n}a_db_{n/d}$ is multiplicative.
Second hint (more advanced): If $A(s) = \sum_{n\geq 1}\frac{a_n}{n^{s}}$ and $B(s) = \sum_{n\geq 1}\frac{b_n}{n^{s}}$ are Dirichlet series, their product is $$A(s)B(s)=\sum_{n\geq 1}\frac{(a * b)_n}{n^{s}},$$
where $(a*b)_n = \sum_{d\mid n} a_d b_{n/d}$. Express the Dirichlet series $\sum_{n\geq 1}\frac{\phi(n)}{n^{s}}$, $\sum_{n\geq 1}\frac{d(n)}{n^{s}}$ and $\sum_{n\geq 1}\frac{\sigma(n)}{n^{s}}$ in terms of the Riemann zeta function, and reduce your identity to an identity between them.
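A brute-force check of the identity $\sigma(n)=\sum_{d\mid n}\phi(d)\,d(n/d)$, where the summand evaluates $\phi$ at the divisor; the implementations below are deliberately naive, just for verification:

```python
from math import gcd

def divisors(n):
    return [d for d in range(1, n + 1) if n % d == 0]

def phi(n):
    return sum(1 for k in range(1, n + 1) if gcd(n, k) == 1)

def num_divisors(n):     # the function written d(n) in the problem
    return len(divisors(n))

def sigma(n):
    return sum(divisors(n))

for n in range(1, 200):
    assert sigma(n) == sum(phi(e) * num_divisors(n // e) for e in divisors(n))
```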
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/314340",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 1
} |
Coordinates for vertices of the "silver" rhombohedron. The "silver" rhombohedron (a.k.a the trigonal trapezohedron) is a three-dimensional object with six faces composed of congruent rhombi. You can see it visualised here.
I am interested in replicating the visualisation linked to above in MATLAB, but to do that I need to know the coordinates of the rhombohedron's eight vertices (I am using the patch function).
How do I calculate the coordinates of the vertices of the silver rhombohedron?
Use vectors $e_1=(1,0,0)$, $e_2=(\cos{\alpha},\sin{\alpha},0)$ and $e_3=(x,y,z)$ with $x=\cos\alpha$, $y=\dfrac{\cos\alpha(1-\cos\alpha)}{\sin\alpha}$, $z=\sqrt{1-x^2-y^2}$ as a basis. This choice makes every pair of basis vectors meet at the same angle $\alpha$, which is what forces all six rhombic faces to be congruent. (The simpler-looking $e_3=(\cos{\alpha},0,\sin{\alpha})$ gives $e_2\cdot e_3=\cos^2\alpha$, so two of the faces would have a different angle.)
Then vertices are the set of all points with each coordinate $0$ or $1$ in this basis: $(0,0,0)$, $(0,0,1)$, ..., $(1,1,1)$.
Or $0$, $e_1$, $e_2$, $e_3$, $e_1+e_2$, ..., $e_1+e_2+e_3$.
Multiply coordinates by constant if needed.
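For the MATLAB `patch` call, the vertex list can be generated from such a basis. Here is a Python sketch; the third vector is chosen so that all three pairwise angles equal $\alpha$ (this is what makes the six rhombic faces congruent), and the value of `alpha` below is just an illustrative placeholder, since the "silver" shape corresponds to one specific angle:

```python
import math

alpha = math.radians(60)   # placeholder angle (assumption); any angle in (0°, 120°) works
ca, sa = math.cos(alpha), math.sin(alpha)
e1 = (1.0, 0.0, 0.0)
e2 = (ca, sa, 0.0)
# third basis vector chosen so all three pairwise angles equal alpha
y = ca * (1 - ca) / sa
e3 = (ca, y, math.sqrt(1 - ca * ca - y * y))

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

assert abs(dot(e1, e2) - ca) < 1e-12
assert abs(dot(e1, e3) - ca) < 1e-12
assert abs(dot(e2, e3) - ca) < 1e-12

# the 8 vertices: all 0/1 coefficient combinations of the basis vectors
vertices = [tuple(i * a + j * b + k * c for a, b, c in zip(e1, e2, e3))
            for i in (0, 1) for j in (0, 1) for k in (0, 1)]
```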
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/314424",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Conditional probabilities in the OR-gate $T=A\cdot B$ with zero-probabilities in $A$ and $B$? My earlier question became too long, so succinctly:
What are $P(T|A)=P(T\cap A)/P(A)$ and $P(T|B)=P(T\cap B)/P(B)$ if $P(A)=0$ and $P(B)=0$?
I think they are undefined because of the division by zero. How can I specify the conditional probabilities now? Please, note that the basic events $A$ and $B$ depend on $T$ because $T$ consists of them, namely $T=A \cup B$.
| Yes, it is undefined in general. It is generally pointless to ask for the conditional probability of $T$ when $A$ occurs when it is known that $A$ almost surely never happens.
But a meaningful specification in your particular case that $T = A\cup B$ is by some intuitive notion of continuity. For $P(A) \neq 0$, if $C\supseteq A$ we must have $P(C|A) = 1$. Hence one can argue from a subjective interpretation of probability that $P(T|A) = 1$ since $T\supseteq A$. And similarly for $P(T|B)$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/314483",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Help evaluating a gamma function I need to do a calculus review; I never felt fully confident with it and it keeps cropping up as I delve into statistics. Currently, I'm working through a some proof theory and basic analysis as a sort of precursor to the calc review, and I just hit a problem that requires integration. Derivatives I'm ok with, but I really don't remember how to take integrals correctly. Here's the problem:
$$\Gamma (x) = \int_0^\infty t^{x-1} \mathrm{e}^{-t}\,\mathrm{d}t$$
If $x \in \mathbb{N}$, then $ \Gamma(x)=(x-1)!$
Check that this is true for x=1,2,3,4
I did a bit of reading on integration, but it left my head spinning. If anyone wants to lend a hand, I'd appreciate it. I should probably just push this one to the side for now, but part of me wants to plow through it.
Update: Thanks for the responses. I suspect this will all make more sense once I've reviewed integration. I'll have to revisit it then.
| In general:
$$u=t^{x-1}\;\;,\;\;u'=(x-1)t^{x-2}\\v'=e^{-t}\;\;,\;\;v=-e^{-t}$$
so
$$\Gamma(x):=\int\limits_0^\infty t^{x-1}e^{-t}\,dt=\overbrace{\left.-t^{x-1}e^{-t}\right|_0^\infty}^\text{This is zero}+(x-1)\int\limits_0^\infty t^{x-2}e^{-t}=$$
$$=:(x-1)\Gamma(x-1)$$
So you only need to know $\,\Gamma(1)=1\,$ and this is almost immediate...
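As a sanity check (not part of the exercise), one can compare a crude numerical approximation of the integral against $(x-1)!$ for $x=1,2,3,4$. The Python sketch below is purely illustrative; the cutoff and step count are ad hoc choices.

```python
import math

def gamma_integral(x, upper=60.0, steps=50_000):
    # Midpoint Riemann sum of the integral from 0 to `upper` of
    # t^(x-1) * e^(-t) dt; the tail beyond `upper` is negligible here.
    h = upper / steps
    return sum(((k + 0.5) * h) ** (x - 1) * math.exp(-(k + 0.5) * h) * h
               for k in range(steps))

for x in (1, 2, 3, 4):
    print(x, gamma_integral(x), math.factorial(x - 1))
```

The printed approximations land on $0! = 1$, $1! = 1$, $2! = 2$, $3! = 6$ up to the quadrature error.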
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/314527",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 2,
"answer_id": 0
} |
Find the common ratio of the geometric series with the sum and the first term Given: Geometric Series Sum ($S_n$) = 39
First Term ($a_1$) = 3
number of terms ($n$) = 3
Find the common ratio $r$.
*I have been made aware that the common ratio is 3, but for anyone trying to solve this, don't plug it in as an answer to prove that it's true. Try to find it without using it.
| The general sum formula for geometric sequences: $$S_n=\frac{a_1\left(r^n-1\right)}{r-1}$$ In your case, $S_n=39$, $a_1=3$, $n=3$. Now you just need to find $r$:
$$39=\frac{3\left(r^3-1\right)}{r-1}$$ $$\ 13=\frac{r^3-1}{r-1}$$ $$\ 13\left(r-1\right)=r^3-1$$ $$\ 13r-13=r^3-1$$ $$\ r^3-13r+12=0$$
$$\ r_1=-4,\ r_2=3,\ r_3=1$$
Since you said it's a geometric series, $r \neq 1$; but even if $r=1$ were allowed, plugging it back in would give you $3 + 3 + 3 = 9 \neq 39$. So, the common ratio is $$r = 3, -4$$ Plug $r=3$ back into the formula and see that it works. As for $r=-4$, you can check it directly: $3 + (-12) + 48 = 39$, which is indeed true.
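If you want to double-check both roots numerically, a few lines of Python (illustrative only) that sum the three terms directly will do:

```python
def geometric_sum(a1, r, n):
    # Add up the first n terms a1, a1*r, a1*r^2, ... directly.
    total, term = 0, a1
    for _ in range(n):
        total += term
        term *= r
    return total

print(geometric_sum(3, 3, 3))   # 39 = 3 + 9 + 27
print(geometric_sum(3, -4, 3))  # 39 = 3 - 12 + 48
print(geometric_sum(3, 1, 3))   # 9, which is why r = 1 is rejected
```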
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/314587",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 6,
"answer_id": 4
} |
Artinian rings and PID Let $R$ be a commutative ring with unity. Suppose that $R$ is a principal ideal domain, and $0\ne t\in R$. I'm trying to show that $R/Rt$ is an artinian $R$-module, and so is an artinian ring if $t$ is not a unit in $R$. I'm not sure how to begin. Please help.
| Hint: Show that the ideals of $R/Rt$ are all principal, and in fact, are in bijection with the divisors of $t$ in $R$ (considered up to multiplication by a unit). Then use that $R$ is a UFD (since it's a PID). Finally, note that the ideals of $R/Rt$ are also precisely its $R$-submodules.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/314644",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 2,
"answer_id": 0
} |
Suppose that $(x_n)$ is a sequence satisfying $|x_{n+1}-x_n| \leq q|x_n-x_{n-1}|$ for all $n \in \mathbb{N^+}$. Prove that $(x_n)$ converges Let $q$ be a real number satisfying $0<q<1$. Suppose that $(x_n)$ is a sequence satisfying $$|x_{n+1}-x_n| \leq q|x_n-x_{n-1}|$$ for all $n \in \mathbb{N^+}$. Prove that $(x_n)$ is a convergent sequence. I try to show that the sequence is Cauchy but I am stuck at finding the $M$. Can anyone help me?
| Hint: note that for any $n\ge 1$ we have $|x_{n+1}-x_n|\leq q^{n-1}|x_2-x_1|$; this can be proven by induction on $n$. If we have $n>m$, then how can we bound $|x_n-x_m|$?
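To see the hint in action numerically, here is an illustrative Python sketch of a sequence whose consecutive differences shrink by exactly the factor $q$ (the alternating signs are an arbitrary choice):

```python
q = 0.8
x = [0.0]
d = 1.0                     # d = x_2 - x_1
for _ in range(200):
    x.append(x[-1] + d)
    d *= -q                 # |x_{n+2} - x_{n+1}| = q |x_{n+1} - x_n|

limit = 1 / (1 + q)         # the differences form a geometric series
print(abs(x[-1] - limit))   # essentially zero

# Tail bound behind the Cauchy argument: for n > m,
# |x_n - x_m| <= q^(m-1) |x_2 - x_1| / (1 - q)
m = 20
print(abs(x[-1] - x[m - 1]), q ** (m - 1) / (1 - q))
```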
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/314791",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 1
} |
Example of a group where $o(a)$ and $o(b)$ are finite but $o(ab)$ is infinite Let G be a group and $a,b \in G$. It is given that $o(a)$ and $o(b)$ are finite. Can you give an example of a group where $o(ab)$ is infinite?
| The standard example is the infinite dihedral group.
Consider the group of maps on $\mathbf{Z}$
$$
D_{\infty} = \{ x \mapsto \pm x + b : b \in \mathbf{Z} \}.
$$
Consider the maps
$$
\sigma: x \mapsto -x,
\qquad
\tau: x \mapsto -x + 1,
$$
both of order $2$. Their composition
$$
\tau \circ \sigma (x) = \tau(\sigma(x)) : x \mapsto x + 1
$$
has infinite order.
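A quick computational illustration (Python, purely for intuition): each of the two maps squares to the identity, yet iterating their composition just keeps translating.

```python
def power(f, n, x):
    # Apply f to x, n times in a row.
    for _ in range(n):
        x = f(x)
    return x

sigma = lambda x: -x          # x -> -x
tau = lambda x: -x + 1        # x -> -x + 1
ts = lambda x: tau(sigma(x))  # the composition x -> x + 1

assert all(sigma(sigma(x)) == x for x in range(-5, 6))  # order 2
assert all(tau(tau(x)) == x for x in range(-5, 6))      # order 2
print([power(ts, n, 0) for n in range(1, 8)])  # [1, 2, 3, 4, 5, 6, 7]
```

Since $(\tau\circ\sigma)^n$ sends $0$ to $n$, no positive power is the identity.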
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/314850",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 6,
"answer_id": 4
} |
Writing direct proofs Let $x$, $y$ be elements of $\mathbb{Z}$. Prove if $17\mid(2x+3y)$ then $17\mid(9x+5y)$. Can someone give advice as to what method of proof I should use for this implication? Or simply what steps to take?
| I would write the equations in $\mathbb{Z}_{17}$, which is a field, because $17$ is prime, so linear algebra applies:
$$ 2x+3y=0 $$
is a linear equation of two variables, and you seek to prove that it implies
$$ 9x+5y=0 $$
which means they're linearly dependent. Two equations are linearly dependent if and only if one is a multiple of the other - and this should be easy to prove.
Edit: Since you asked about proof strategy, I'd like to emphasize that this is not some random trick; the condition $p|x$ is not very nice to work with algebraically, but because $\mathbb{Z}_p$ is a field, the equivalent statement $x\equiv 0\mod p$ (I omitted the $\mod 17$ and the $\equiv$ above to make it look more like familiar algebra) is much simpler and better, because you can multiply, add and create linear spaces over $\mathbb{Z}_p$ that behave (in many ways) like real numbers.
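One can also confirm the dependence by brute force over all residues mod $17$; as it happens, the multiplier turning one form into the other is $13$, since $13\cdot 2\equiv 9$ and $13\cdot 3\equiv 5 \pmod{17}$. An illustrative Python check (not part of the proof):

```python
p = 17
for x in range(p):
    for y in range(p):
        if (2 * x + 3 * y) % p == 0:
            assert (9 * x + 5 * y) % p == 0

# The second form is 13 times the first, modulo 17:
assert (13 * 2) % p == 9 and (13 * 3) % p == 5
print("17 | (2x+3y) implies 17 | (9x+5y) for all residues mod 17")
```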
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/314916",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 4,
"answer_id": 1
} |
Klein-bottle and Möbius-strip together with a homeomorphism Consider the Klein bottle (this can be done by making a quotient space). I want to give a proof of the following statement:
The Klein Bottle is homeomorphic to the union of two copies of a Möbius strip joined by a homeomorphism along their boundaries.
I know what such a Möbius band looks like and how we can obtain this also by a quotient map. I also know how to see the Klein bottle, but I don't see why the given statement is correct. How do you construct such a homeomorphism explicitly?
| Present a Klein bottle as a square with vertical edges identified in an orientation-reversing manner and horizontal edges identified in an orientation-preserving manner. Now make two horizontal cuts at one-third of the way up and two-thirds of the way up.
The middle third is one Möbius strip. Take the upper and lower thirds and glue them by the original horizontal identification. This is the other Möbius strip.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/314977",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "12",
"answer_count": 2,
"answer_id": 1
} |
Using Pigeonhole Principle to prove two numbers in a subset of $[2n]$ divide each other Let $n$ be greater than or equal to $1$, and let $S$ be an $(n+1)$-subset of $[2n]$. Prove that there exist two numbers in $S$ such that one divides the other.
Any help is appreciated!
| HINT: Create a pigeonhole for each odd positive integer $2k+1<2n$, and put into it all numbers in $[2n]$ of the form $(2k+1)2^r$ for some $r\ge 0$.
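The hint can be checked exhaustively for small $n$ in Python (an illustrative check, not a proof): the odd part of a number is its pigeonhole, there are only $n$ of them, and every $(n+1)$-subset of $[2n]$ indeed contains a dividing pair.

```python
from itertools import combinations

def odd_part(m):
    # Write m = (2k+1) * 2^r and return the odd factor 2k+1.
    while m % 2 == 0:
        m //= 2
    return m

n = 5
universe = range(1, 2 * n + 1)
assert len({odd_part(m) for m in universe}) == n   # only n pigeonholes

for S in combinations(universe, n + 1):            # all 210 six-element subsets
    assert any(b % a == 0 for a, b in combinations(S, 2))
print("every 6-subset of [10] contains a dividing pair")
```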
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/315050",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "28",
"answer_count": 4,
"answer_id": 0
} |
Exercise on representations I am stuck on an exercise in Serre, Abelian $\ell$-adic representations (first exercise of chapter 1).
Let $V$ be a vector space of dimension $2$, and $H$ a subgroup of $GL(V)$ such that $\det(1-h)=0$ for all $h \in H$.
Show that in some basis $H$ is a subgroup of either $\begin{pmatrix} 1 & * \\ 0 &* \\ \end{pmatrix}$ or $\begin{pmatrix} 1 & 0 \\ * &* \\ \end{pmatrix}$.
I know that this means that there is a subspace or a quotient of $V$ on which $H$ acts trivially, and I know it is enough to show that $V$ is not irreducible as representation of $H$, but I don't know how to do it.
| By hypothesis, for all $h \in H$ and basis $(e,f)$ one has
$$\det(h(e)-e,h(f)-f) =0.$$
Let $g \in H$ different from the identity.
$\bullet$ Suppose that $1$ is the only eigen-value of $g$. Then in some basis $(e,f)$ the matrix of $g$ is $Mat(g) = \left( \begin{smallmatrix} 1&1\\0&1 \end{smallmatrix}\right)$. Let $h \in H$, then
$$(1) \quad \det(h(e)-e,h(f)-f)=0,$$
$$(2) \quad \det(hg(e)-e,hg(f)-f)=\det(h(e)-e,h(e)+h(f)-f) = 0.$$
If $h(e)-e=0$, then $h(e)$ is colinear to $e$. If $h(f)-f=0$, then (2) shows that $h(e)$ is colinear to $e$. If $h(e) \neq e$ and $h(f)\neq f$, then $h(e)+h(f)-f$ is colinear to $h(e)-e$ (by (2)). The latter is colinear to $h(f)-f$ (by (1)). This implies that $h(e)$ is colinear to $e$. Hence $ke$ is fixed by $H$ and we are done.
$\bullet$ Suppose that $g$ has two distinct eigenvalues $1$ and $a$. Then is some basis $(e,f)$ : $Mat(g) = \left( \begin{smallmatrix} 1&0\\0&a \end{smallmatrix}\right)$.
Lemma: if $h \in H$ then
(i) $h(e)=e$ or (ii) $h(f)$ is colinear to $f$.
Proof: We have
$$\det(h(e)-e,h(f)-f) = 0,$$
$$\det(hg(e)-e,hg(f)-f) = \det(h(e)-e,a h(f)-f) = 0.$$
If $h(e) \neq e$, then $h(f)-f$ and $a h(f)-f$ are colinear to each other (because they are both colinear to $h(e)-e$). Since $a \neq 1$, this implies that $h(f)$ is colinear to $f$. QED
To conclude we have to show that either every $h \in H$ satisfies (i) or every $h \in H$ satisfies (ii). If not, then let $h \in H$ not satisfying (i) and $k \in H$ not satisfying (ii). The matrices of $h$ and $k$ have the forms $Mat(h) = \left( \begin{smallmatrix} 1&0\\\alpha&* \end{smallmatrix}\right)$ and $Mat(k) = \left( \begin{smallmatrix} 1&\beta\\0&* \end{smallmatrix}\right)$ with $\alpha \neq 0$ and $\beta \neq 0$. We check that $\det(hk-{\rm{Id}}) = -\alpha \beta$ which is a contradiction.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/315129",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11",
"answer_count": 1,
"answer_id": 0
} |
Dimension of a space I'm reading a book about Hilbert spaces, and in chapter 1 (which is supposed to be a revision of linear algebra), there's a problem I can't solve. I read the solution, which is in the book, and I don't understand it either.
Problem: Prove that the space of continuous functions in the interval (0,1): $C[0,1]$, has dimension $c=\dim(\mathbb{R})$.
Solution: The solution in the book goes by proving that the size $|B|$ of a minimal basis $B$ of the space satisfies both $|B|\leqslant c$ and $|B|\geqslant c$, and so $|B|=c$. The proof that it is greater or equal is simple and I understand it; the problem is the other inequality. The author does this:
A continuous function is determined by the values it takes at rational numbers, so $|B|\leqslant c^{\aleph_0}=c$
I don't get that.
| Note that if $f,g$ are continuous and for every rational number $q$ it holds that $f(q)=g(q)$ then $f=g$ everywhere.
This means that $|B|\leq|\mathbb{R^Q}|=|\mathbb{R^N}|=|\mathbb R|$.
Also, $|\mathbb R|$ is not necessarily $\aleph_1$. This assumption is known as the continuum hypothesis.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/315193",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
} |
functional relation I need to find functions $f : \mathbb{R}_{+} \rightarrow \mathbb{R}$ which satisfy
$$ f(0) =1 $$
$$ f(\max(a,b))=f(a)f(b)$$
For each $a,b \geq 0$.
I have found two functions which satisfy my criteria.
$$ f_1(x)=1$$
$$f_2(x) = \begin{cases}
0, & \text{if } x>0 \\
1, & \text{if } x=0
\end{cases}
$$
Is there another function which satisfies my criteria?
| Since $f(a)f(b)=f(b)$ for all $b\ge a$, either $f(b)=0$ or $f(a)=1$. If $f(a)=0$, then $f(b)=0$ for all $b\ge a$. If $f(b)=1$, then $f(a)=1$ for all $a\le b$.
Thus, it appears that for any $a\ge0$, the functions
$$
f_a^+(x)=\left\{\begin{array}{ll}
1&\text{for }x\le a\\
0&\text{for }x\gt a
\end{array}\right.
$$
and for any $a\gt0$, the functions
$$
f_a^-(x)=\left\{\begin{array}{ll}
1&\text{for }x\lt a\\
0&\text{for }x\ge a
\end{array}\right.
$$
and the function
$$
f_\infty(x)=1\quad\text{for all }x
$$
all satisfy the conditions, and these should be all.
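These candidate solutions can be spot-checked on a grid; the Python sketch below is illustrative only (a finite grid of course does not replace the argument above), and the cutoff $a=2$ is an arbitrary choice.

```python
def f_plus(a):
    # f_a^+ : equals 1 on [0, a], 0 on (a, infinity)
    return lambda x: 1 if x <= a else 0

def f_minus(a):
    # f_a^- : equals 1 on [0, a), 0 on [a, infinity)
    return lambda x: 1 if x < a else 0

grid = [i / 4 for i in range(21)]          # sample points in [0, 5]
for f in (f_plus(2.0), f_minus(2.0), lambda x: 1):
    assert f(0) == 1
    assert all(f(max(a, b)) == f(a) * f(b) for a in grid for b in grid)
print("all three candidates satisfy f(max(a,b)) = f(a) f(b) on the grid")
```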
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/315256",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 0
} |
Are sets always represented by upper case letters? -- and understanding a bit about equivalence relations I'm trying to solve a question which states:
Let $\leq$ be a preorder relation on a set $X$, and $E=${$(x,y)\in X:x\leq y$ and $y\leq x$} the corresponding equivalence relation on $X$. Describe $E$ if $X$ is the set of finite subsets of a fixed infinite set $U$, and $\leq$ is the inclusion relation (i.e $x \leq y$ iff $x\subset y$)
I naively said that $E=${$(x,y) \in X$:$x\subset y$ and $y\subset x$}$=${$(x,y) \in X:x=y$}
I have a few concerns, however. The question says $x \leq y$ iff $x\subset y$; can we just assume that $x$ and $y$ are sets, even though they are represented by lower case letters, and thus be able to use the $\subset$ relation on them (it was not specified that elements of $X$ are sets)?
Secondly, what is the significance of stating that X is the set of finite subsets of a fixed infinite set U?
Thanks
| In answer to your second question: this touches on axiomatic set theory. If $X$ is the set of finite subsets of a fixed infinite set $U$, then the existence of such a set is related to the axiom of the power set, which states ∀x∃y∀u(u∈y↔u⊆x). There are many subsequent results based on this axiom. http://mathworld.wolfram.com/AxiomofthePowerSet.html
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/315321",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 2
} |
Showing if something is continuous in Topology If $f : X \to \mathbb{R}$ is continuous,
I want to show that $(cf)(x) = cf(x)$ is continuous, where $c$ is a constant.
Attempt: If $f$ is continuous, then we want to show that the inverse image of every open set in $\mathbb{R}$ is an open set of $X$. Choose an open interval in $\mathbb{R}$.
That's as far as I got. :(
| An alternative answer, that does not require you to prove the composition of continuous functions is continuous:
Let $U\subset\mathbb{R}$ be open. In fact we may assume $U=(a,b)$ by taking the open balls as a base for $\mathbb{R}$.
Now,
$$
(cf)^{-1}(U) = \{ x\in X: \exists y\in(a,b): (cf)(x) = y\}
= \left\{x\in X:\exists y\in(a,b): f(x)=\frac{y}{c}\right\}
$$
$$
=\left\{x\in X: \exists z\in \left(\frac{a}{c},\frac{b}{c}\right): f(x)=z\right\}
$$
and this last set is precisely $$f^{-1}\left(\left(\frac{a}{c},\frac{b}{c}\right)\right),$$ which is open since $f$ is continuous. (Here I assumed $c>0$; if $c<0$ the interval endpoints swap, and if $c=0$ then $cf$ is constant, hence continuous.)
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/315379",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 2,
"answer_id": 1
} |
The Lebesgue Measure of the Cantor Set Why is it that m(C+C) = 2, where C is the Cantor set, but C is itself measure zero? What is happening that makes this change occur? Is it true generally that $m(A+B) \neq m(A) + m(B)$? Are there cases in which this necessarily holds? I should note that this is all within the usual topology on R. Thanks!
| Adding two sets $A$ and $B$ is in general much more than taking the union of a copy of $A$ and a copy of $B$.
Rather, $A+B$ can be seen as made up of many slices that look like $A$ (one for each $b\in B$), and the Cantor set has the same size as the reals, so here we have in fact as many slices as real numbers. Therein lies the problem: Even $m(A+\{1,2\})$ may be larger than $m(A)+m(\{1,2\})$, and it can be as large as $2m(A)$ (think, for example, of $A=[0,1]$). -- Of course, the slices do not need to be disjoint in general, but this is another issue.
Ittay's answer indicates that if $A,B$ are countable we have $m(A+B)=m(A)+m(B)$, more or less for trivial reasons. There does not seem to be a natural way of generalizing this to the uncountable, in part because Lebesgue measure can never be $\mathfrak c$-additive, meaning that even if we have that $A_r$ is a measurable set for each $r\in\mathbb R$, and it happens that $\sum_r m(A_r)$ converges, and it happens that the $A_r$ are pairwise disjoint, in general $m(\bigcup_r A_r)$ and $\sum_r m(A_r)$ are different. For example, consider $A_r=\{r\}$.
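One can make the slicing picture concrete at a finite level: with $C_n$ the $n$-th stage of the Cantor construction ($2^n$ intervals of length $3^{-n}$), the sumset $C_n + C_n$ already tiles $[0,2]$. An illustrative Python check (the tolerance is only for floating point):

```python
from itertools import product

n = 6
# Left endpoints of the 2^n intervals of C_n: ternary digits in {0, 2}.
ends = sorted(sum(d * 3.0 ** -(k + 1) for k, d in enumerate(ds))
              for ds in product((0, 2), repeat=n))

# C_n + C_n is the union of intervals [a+b, a+b + 2*3^-n]; check they tile.
sums = sorted(a + b for a in ends for b in ends)
width = 2 * 3.0 ** -n
max_gap = max(t - s for s, t in zip(sums, sums[1:]))
print(max_gap <= width + 1e-12)   # consecutive intervals meet end to end
print(sums[0], sums[-1] + width)  # coverage runs from 0 to 2
```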
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/315452",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 0
} |
How to solve a linear equation by substitution? I've been having a tough time figuring this out. This is how I wrote out the question with substitution.
$$\begin{align}
& I_2 = I_1 + aI_1 (T_2 - T_1) \\
& I_1=100\\
&T_2=35\\
&T_1=-5\\
&a=0.000011
\end{align}$$
My try was $I_2 = 100(1) + 0.000011(100) (35(2)-5(1))$
The answer is $100.044$m but I can't figure out the mechanics of the question.
Thanks in advance for any help
| The lower temperature is $-5$, so you dropped a sign; also, you should not be multiplying $T_2$ by $2$ or $T_1$ by $1$. The correct calculation is $I_2=100+0.000011\cdot 100 \cdot (35-(-5))=100.044$
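For reference, here is the corrected computation in a few lines of Python (purely illustrative):

```python
a = 0.000011            # expansion coefficient
I1 = 100.0              # original length
T1, T2 = -5.0, 35.0     # note the sign of T1: T2 - T1 = 40, not 30
I2 = I1 + a * I1 * (T2 - T1)
print(I2)               # approximately 100.044
```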
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/315507",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Finding a bijection and inverse to show there is a homeomorphism I need to find a bijection and inverse of the following:
$X = \{ (x,y) \in \mathbb{R}^2 : 1 \leq x^2 + y^2 \leq 4 \}$ with its subspace topology in $\mathbb{R}^2$
$Y = \{ (x,y,z) \in \mathbb{R}^3 : x^2 + y^2 = 1$ and $ 0 \leq z \leq 1 \}$ with its subspace topology in $\mathbb{R}^3$
Then show they are homeomorphic. Not sure where to start.
| Hint: $X$ is an annulus, and $Y$ is a cylinder. Imagine "flattening" a cylinder by forcing its bottom edge inwards and its top edge outwards; it will eventually become an annulus. This is the idea of the map; you should work out the actual expressions describing it on your own.
Here is a gif animation I made with Mathematica to illustrate the idea:
F[R_][t_, x_, y_] := {(R + x) Cos[t], (R + x) Sin[t], y}
BentCylinder[R_, r_, s_, t_, z_] := F[R][t, r + s*Sin[z], s*Cos[z]]
BendingCylinder[R_, r_, z_] :=
ParametricPlot3D[
BentCylinder[R, r, s, t, z], {s, -r, r}, {t, 0, 2 Pi}, Mesh -> None,
Boxed -> False, Axes -> None, PlotStyle -> Red,
PlotRange -> {{-10, 10}, {-10, 10}, {-5, 5}}, PlotPoints -> 50]
Export["animation.gif",
Table[BendingCylinder[6, 2, z], {z, 0, Pi/2, 0.02*Pi}],
"DisplayDurations" -> {0.25}]
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/315636",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
(partial) Derivative of norm of vector with respect to norm of vector I'm doing a weird derivative as part of a physics class that deals with quantum mechanics, and as part of that I got this derivative:
$$\frac{\partial}{\partial r_1} r_{12}$$
where $r_1 = |\vec r_1|$ and $r_{12} = |\vec r_1 - \vec r_2|$. Is there any way to solve this? My first guess was to set it equal to $1$ or $0$, since $r_{12}$ is just a scalar, but then I realized it really depends on $r_1$ after all.
The expression appears when I try to solve
$$\frac{\nabla_1^2}{2} \left( \frac{r_{12}}{2(1+\beta r_{12})} \right)$$
($\beta$ is constant)
| Since these are vectors, one can consider the following approach:
Let
${\bf x}_1 := \overrightarrow{r}_1$, and ${\bf x}_2 := \overrightarrow{r}_2$; then $r_1 = \left({\bf x}_1^T {\bf x}_1\right)^{\frac{1}{2}} = \|{\bf x}_1\|$, and $r_{12} = \left[({\bf x}_1 - {\bf x}_2)^T({\bf x}_1 - {\bf x}_2)\right]^{\frac{1}{2}} = \|{\bf x}_1 - {\bf x}_2\|$.
Define the following functions:
$g({\bf x}_1) = \|{\bf x}_1 - {\bf x_2}\| = \left[({\bf x}_1 - {\bf x_2})^T ({\bf x}_1 - {\bf x_2}) \right]^{\frac{1}{2}}$
$f({\bf x}_1) = \|{\bf x}_1\| = \left[ {\bf x}_1^T {\bf x}_1\right]^{\frac{1}{2}}$
Note that these functions are both scalar functions of vectors. Also, the following identifications should be noted:
$r_{12} = g({\bf x}_1)$, and $r_1 = f({\bf x}_1)$.
$\dfrac{\partial}{\partial {r_1}} r_{12} ~=~ \dfrac{\partial}{\partial f({\bf x}_1)} g({\bf x}_1) ~=~ \dfrac{\partial}{\partial {\bf x}_1} g({\bf x}_1) ~~ \dfrac{1}{\dfrac{\partial}{\partial {\bf x}_1} f({\bf x}_1)}$ ...... chain rule
Applying Matrix Calculus and simplifying
$\dfrac{\partial}{\partial f({\bf x}_1)} g({\bf x}_1) ~=~
\dfrac{({\bf x}_1 - {\bf x_2})^T}{g({\bf x}_1)}
\dfrac{f({\bf x}_1)}{{\bf x}_1^T} ~=~
\dfrac{({\bf x}_1 - {\bf x_2})}{g({\bf x}_1)}
\dfrac{f({\bf x}_1)}{{\bf x}_1}
$ ... since the transpose of a scalar is a scalar
changing to the original variables,
$\dfrac{\partial}{\partial {r_1}} r_{12} ~=~\dfrac{\overrightarrow{r}_1 - \overrightarrow{r}_2}{r_{12}} \dfrac{r_1}{\overrightarrow{r}_1}$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/315709",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Limit involving probability Let $\mu$ be any probability measure on the interval $]0,\infty[$. I think the following limit holds, but I don't manage to prove it:
$$\frac{1}{\alpha}\log\biggl(\int_0^\infty\! x^\alpha d\mu(x)\biggr) \ \xrightarrow[\alpha\to 0+]{}\ \int_0^\infty\! \log x\ d\mu(x)$$
In probabilistic terms it can be rewritten as:
$$\frac{1}{\alpha}\log\mathbb{E}[X^\alpha] \ \xrightarrow[\alpha\to 0+]{}\ \mathbb{E}[\log X]$$
for any positive random variable $X$.
Can you help me to prove it?
| We assume that there is $\alpha_0>0$ such that $\int_0^{+\infty}x^{\alpha_0} d\mu(x)$ is finite. Let $I(\alpha):=\frac 1{\alpha}\log\left(\int_0^{+\infty}x^\alpha d\mu(x)\right)$ and $I:=\int_0^{+\infty}\log xd\mu(x)$.
Since the function $t\mapsto \log t$ is concave, we have $I(\alpha)\geqslant I$ for all $\alpha$.
Now, use the inequality $\log(1+t)\leqslant t$ and the dominated convergence theorem to show that $\lim_{\alpha\to 0^+}\int_0^{+\infty}\frac{x^\alpha-1}\alpha d\mu(x)=I$. Call $J(\alpha):=\int_0^{+\infty}\frac{x^\alpha-1}\alpha d\mu(x)$. Then
$$I\leqslant I(\alpha)\leqslant J(\alpha).$$
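For intuition, the limit is easy to observe numerically for a discrete measure; the atoms below are an arbitrary illustrative choice (Python):

```python
import math

atoms = [(0.5, 0.2), (1.0, 0.3), (3.0, 0.5)]   # pairs (x, mu({x}))

def I_alpha(a):
    # (1/a) * log( integral of x^a d mu )
    return math.log(sum(w * x ** a for x, w in atoms)) / a

E_log = sum(w * math.log(x) for x, w in atoms)  # integral of log x d mu

for a in (1.0, 0.1, 0.01, 0.001):
    print(a, I_alpha(a))
print("E[log X] =", E_log)
```

Consistent with the concavity argument, $I(\alpha)$ approaches $I$ from above as $\alpha\to 0^+$.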
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/315766",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Continuous differentiability of a "glued" function I have the following surface for $x,t>0$: $$z(x,t)=\begin{cases}\sin(x-2t)&x\geq 2t\\
(t-\frac{x}{2})^{2}&x<2t \end{cases}$$ How to show that the surface is not continuously differentiable along the curve $x=2t$? To be honest, I have no idea how to start with this example. Thank you. Andrew
| When $x=2t$, what is the value of $\frac x2-t$? of $\cos(x-2t)$? Are these equal?
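Numerically (an illustrative Python check, with an arbitrary point $t=1$ on the curve), the two one-sided slopes in $x$ disagree:

```python
import math

t = 1.0
x0 = 2 * t          # a point on the curve x = 2t
h = 1e-6

z_right = lambda x: math.sin(x - 2 * t)   # branch for x >= 2t
z_left = lambda x: (t - x / 2) ** 2       # branch for x <  2t

dz_right = (z_right(x0 + h) - z_right(x0)) / h   # one-sided slope from the right
dz_left = (z_left(x0) - z_left(x0 - h)) / h      # one-sided slope from the left
print(dz_right, dz_left)   # about 1 and about 0: z_x jumps across the curve
```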
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/315837",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Armijo's rule line search I have read a paper (http://www.seas.upenn.edu/~taskar/pubs/aistats09.pdf) which describes a way to solve an optimization problem involving Armijo's rule, cf. p363 eq 13.
The variable is $\beta$ which is a square matrix. If $f$ is the objective function, the paper states that Armijo's rule is the following:
$f(\beta^{new})-f(\beta^{old}) \leq \eta(\beta^{new}-\beta^{old})^T \nabla_{\beta} f$
where $\nabla_{\beta} f$ is the vectorization of the gradient of $f$.
I am having problems with this, as the expression on the right of the inequality above does not make sense due to a dimension mismatch. I am unable to find an analogue of the above rule elsewhere. Can anyone help me figure out what the right expression means? The left expression is a scalar, while the right expression suffers from a dimension mismatch...
| In general (i.e. for scalar-valued $x$), Armijo's rule states
$$f(x^{new}) - f(x^{old}) \le \eta \, (x^{new}-x^{old})^\top \nabla f(x^{old}).$$
Hence, you need the vectorization of $\beta^{new}-\beta^{old}$ on the right hand side.
(Alternatively, you could replace $\nabla_\beta f$ by the gradient w.r.t. the Frobenius inner product and write $(\beta^{new}-\beta^{old}) : \nabla_\beta f$, where $:$ denotes the Frobenius inner product).
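To make the vectorization concrete, here is a minimal stdlib-only Python sketch of backtracking line search with the Armijo condition, working on a flattened variable that plays the role of $\mathrm{vec}(\beta)$. The test function, constants, and starting point are arbitrary illustrative choices, not the paper's setup.

```python
def armijo_step(f, grad, x, eta=1e-4, shrink=0.5, t0=1.0):
    # One steepest-descent step with backtracking: shrink the step t
    # until  f(x_new) - f(x) <= eta * (x_new - x) . grad_f(x)  holds.
    g = grad(x)
    d = [-gi for gi in g]              # descent direction
    t = t0
    while True:
        x_new = [xi + t * di for xi, di in zip(x, d)]
        rhs = eta * sum((a - b) * gi for a, b, gi in zip(x_new, x, g))
        if f(x_new) - f(x) <= rhs:
            return x_new
        t *= shrink

# f(x) = sum of squares; x is the flattened (vectorized) variable.
f = lambda x: sum(xi * xi for xi in x)
grad = lambda x: [2.0 * xi for xi in x]

x = [3.0, -4.0, 1.0]
for _ in range(20):
    x = armijo_step(f, grad, x)
print(f(x))   # 0.0: the very first half-step already lands on the minimizer
```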
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/315962",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Solve the equation $y''+y'=\sec(x)$ Solve the equation $$y''+y'=\sec(x).$$ By solving the associated homogeneous equation, I get that the complementary solution is $y_c(x)=A+Be^{-x}$. Then by using the method of variation of parameters, I let $y_p(x)=u_1+u_2e^{-x}$ where $u_1,u_2$ satisfy $$u_1'+u_2' e^{-x}=0, \quad -u_2' e^{-x}=\sec(x).$$ Then I got stuck at finding $u_2$, as $u_2^{\prime}=-e^x \sec(x)$ and I have no idea on how to integrate this. Can anyone guide me?
| $y''+y'=\sec(x)$
$(y'e^x)'=\dfrac{d}{dx}\int \sec(x)e^x\,dx$
$y'=e^{-x}\int \sec(x)e^x\,dx+Ce^{-x}$
$y=\int e^{-x}\left(\int \sec(x)e^x\,dx\right) dx+ C_1e^{-x}+C_2$
I do not know of any elementary form for the integrals. Wolfram gives an answer.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/316020",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Do subspaces obey the axioms for a vector space? If I have a subspace $W$, then would elements in $W$ obey the 8 axioms for a vector space $V$ such as:
$u + (-u) = 0$, where $u \in V$
| Yes, indeed. A subspace of a vector space is also a vector space, restricted to the operations of the vector space of which it is a subspace. And as such, the axioms for a vector space are all satisfied by a subspace of a vector space.
As stated in my comment below: In fact, a subset of a vector space is a subspace if and only if it satisfies the axioms of a vector space.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/316098",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 0
} |
Find all solutions of the equation $x! + y! = z!$ Not sure where to start with this one. Do we look at two cases where $x<y$ and where $x>y$ and then show that the smaller number will have the same values of the greater? What do you think?
| Let $x \le y < z$:
We get $x! = 1\cdot 2 \cdots x$, $y! = 1\cdot 2 \cdots y$, and $z! = 1\cdot 2 \cdots z$.
Now we can divide by $x!$
$$1 + (x+1)\cdot (x+2)\cdots y = (x+1)\cdot (x+2) \cdots z$$
You can easily show by induction that the right side is bigger than the left for all $z>2$. The only cases that remain are $0! + 0! = 2!$, $0! + 1! = 2!$, $1! + 0! = 2!$, and $1! + 1! = 2!$.
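A brute-force search over small values (illustrative Python, not part of the proof) confirms these are the only solutions in that range:

```python
from math import factorial
from itertools import product

solutions = [(x, y, z) for x, y, z in product(range(10), repeat=3)
             if factorial(x) + factorial(y) == factorial(z)]
print(solutions)  # [(0, 0, 2), (0, 1, 2), (1, 0, 2), (1, 1, 2)]
```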
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/316173",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 5,
"answer_id": 2
} |
Limit point of set $\{\sqrt{m}-\sqrt{n}:m,n\in \mathbb N\} $ How can I calculate the limit points of set $\{\sqrt{m}-\sqrt{n}\mid m,n\in \mathbb N\} $?
| The answer is $\mathbb{R}$. To see this, fix $x\in (0,\infty)$ and $\epsilon >0$. There are $n_0 , N \in \mathbb{N}$ such that $\sqrt{n_0 +1}-\sqrt{n_0} <1/N<\epsilon /2$. Now we can divide $(0,\infty)$ into pieces of length $1/N$, so there is $k\in \mathbb{N}$ such that $k(\sqrt{n_0 +1}-\sqrt{n_0})\in N_{\epsilon} (x)$. The proof for $(-\infty , 0)$ is the same.
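The idea can also be turned into an illustrative Python search: given a target $x>0$, take $m$ as the nearest integer to $(x+\sqrt{n})^2$ and increase $n$ until the error, which is $O(1/\sqrt{n})$, drops below the tolerance. The target $\pi$ and the tolerance are arbitrary choices.

```python
import math

def approximate(x, eps=1e-3):
    # Find m, n with |sqrt(m) - sqrt(n) - x| < eps, assuming x > 0.
    n = 0
    while True:
        m = round((x + math.sqrt(n)) ** 2)
        if abs(math.sqrt(m) - math.sqrt(n) - x) < eps:
            return m, n
        n += 1

m, n = approximate(math.pi)
print(m, n, math.sqrt(m) - math.sqrt(n))   # within 1e-3 of pi
```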
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/316234",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 2
} |
Number of elements of order $5$ in $S_7$: clarification In finding the number of elements of order $5$ in $S_7$, the symmetric group on $7$ objects, we want to find products of disjoint cycles that partition $\{1,2,3,4,5,6,7\}$ such that the least common multiple of their lengths is $5$. Since $5$ is prime, this allows only products of the form $(a)(b)(cdefg)$, which is just equal to $(cdefg)$. Therefore, we want to find all distinct $5$-cycles in $S_7$.
Choosing our first number we have seven options, followed by six, and so on, and we obtain $(7\cdot 6 \cdot 5 \cdot 4 \cdot 3)=\dfrac{7!}{2!}$. Then, we note that for any $5$-cycles there are $5$ $5$-cycles equivalent to it (including itself), since $(abcde)$ is equivalent to $(bcdea)$, etc. Thus, we divide by $5$, yielding $\dfrac{7!}{2!\cdot 5}=504$ elements of order $5$.
I understand this...it makes perfect sense. What's bothering me is:
Why isn't the number of elements of order five $\binom{7}{5}/5$? It's probably simple, I'm just not seeing why it doesn't yield what we want.
| This can be checked directly with the following GAP code:
g:=SymmetricGroup(7);
h:=Filtered(Elements(g),x->Order(x)=5);
Size(h);
The size of $h$ is the same as other theoretical approaches suggested. It is $504$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/316307",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 2
} |
Method of Undetermined Coefficients I am trying to solve a problem using method of undetermined coefficients to derive a second order scheme for ux using three points, c1, c2, c3 in the following way:
ux = c1*u(x) + c2*u(x - h) + c3*u(x - 2h)
Now second order scheme just means to solve the equation for the second order derivative, am I right?
I understand how this problem works for actual numerical functions, but I am unsure how to go about it when everything is theoretical and just variables.
Thanks for some help
| Having a second order scheme means that it's accurate for polynomials up to and including second degree. The scheme should calculate the first order derivative $u_x$, as the formula says.
It suffices to make sure that the scheme is accurate for $1$, $x$, and $x^2$; then it will work for all second-degree polynomials by linearity.
To make it work for the function $u(x)=1$, we need
$$ 0= c_1+c_2+c_3
\tag1$$
To make it work for the linear function $u(x)=x$, we need
$$ 1 = c_1 x +c_2(x-h) +c_3(x-2h)
\tag2$$
which in view of (1) simplifies to
$$ 1 = c_2(-h) +c_3(-2h)
\tag{2'}$$
And to make it work for the quadratic function $u(x)=x^2$, we need
$$ 2x = c_1 x^2 +c_2(x-h)^2 +c_3(x-2h)^2
\tag3$$
which in view of (1) and (2') simplifies to
$$ 0 = c_2h^2 +c_3(4h^2)
\tag{3'}$$
Now you can solve the linear system (1), (2') and (3') for the unknowns $c_1,c_2,c_3$.
This may not be the quickest solution, but it's the most concrete one that I could think of.
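For the record, solving the system gives $c_1 = \frac{3}{2h}$, $c_2 = -\frac{2}{h}$, $c_3 = \frac{1}{2h}$. An illustrative Python check of both the algebra and the order of accuracy (the test functions are arbitrary):

```python
import math

def coeffs(h):
    # From (3'): c2 = -4 c3; from (2'): 1 = -h(c2 + 2 c3) = 2 h c3; then (1).
    c3 = 1.0 / (2.0 * h)
    c2 = -4.0 * c3
    c1 = -c2 - c3
    return c1, c2, c3       # equals 3/(2h), -2/h, 1/(2h)

def ux(u, x, h):
    c1, c2, c3 = coeffs(h)
    return c1 * u(x) + c2 * u(x - h) + c3 * u(x - 2 * h)

# Exact on quadratics, which is what the three conditions enforce:
print(ux(lambda s: 5 + 2 * s + 7 * s * s, 1.0, 0.1))   # 16.0 up to roundoff

# Second order on a generic smooth function: halving h cuts the error ~4x.
err = lambda h: abs(ux(math.sin, 1.0, h) - math.cos(1.0))
print(err(0.02) / err(0.01))   # close to 4
```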
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/316375",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Banach-algebra homeomorphism. Let $ A $ be a commutative unital Banach algebra that is generated by a set $ Y \subseteq A $. I want to show that $ \Phi(A) $ is homeomorphic to a closed subset of the Cartesian product $ \displaystyle \prod_{y \in Y} \sigma(y) $. Moreover, if $ Y = \{ a \} $ for some $ a \in A $, I want to show that the map is onto.
Notation: $ \Phi(A) $ is the set of characters on $ A $ and $ \sigma(y) $ is the spectrum of $ y $.
I tried to do this with the map
$$
f: \Phi(A) \longrightarrow \prod_{y \in Y} \sigma(y)
$$
defined by
$$
f(\phi) \stackrel{\text{def}}{=} (\phi(y))_{y \in Y}.
$$
I don’t know if $ f $ makes sense, and I can’t show that it is open or continuous. Need your help. Thank you!
| Note that $\Phi (A)$ is compact in the w$^*$-topology. Also, $\prod \sigma(y)$ is compact Hausdorff in the product topology. For the map $f$ you defined, note that $f$ is injective: two characters that agree on $Y$ agree on all of $A$, since $Y$ generates $A$.
To prove continuity, take a net $\{\phi_\alpha\}_{\alpha \in I}$ in $\Phi(A)$, such that $\phi_\alpha \rightarrow \phi$ in w$^*$-topology. Then, $\phi_\alpha (y) \rightarrow \phi(y)$ in norm topology, for any $y \in Y$. ------------- (*)
Now consider a basic open set $V$ around the point $\prod_{y \in Y} \phi(y)$. Then, there exists $y_1, y_2, \ldots, y_k \in Y$ such that $V = \prod_{y \in Y} V_y $, where $V_y = \sigma (y)$ for any $y \in Y \setminus \{y_1, y_2, \ldots, y_k\}$ and $ V_{y_i} = V_i $ is an open ball in $\sigma (y_i)$ containing $\phi(y_i)$, for $i = 1, 2, \ldots, k$.
Using (*) we get that for each $i$, $\exists$ $\alpha_i$ such that $\phi_{\beta} (y_i) \in V_i$ for any $\beta \geq \alpha_i$. Since the index set $I$ is directed, $\exists$ $\alpha_0 \in I$ such that $\alpha_0 \geq \alpha_i$ for all $i$. Thus for any $\beta \geq \alpha_0$, we have that $\phi_{\beta} (y_i) \in V_i$ for each $i$, and hence $\prod_{y \in Y} \phi_{\beta} (y) \in V$.
Thus, it follows that $\prod \phi_\alpha (y) \rightarrow \prod \phi(y)$ in $\prod \sigma(y)$, i.e., $f$ is continuous. Since $\Phi(A)$ is compact and $\prod_{y \in Y} \sigma(y)$ is Hausdorff, the continuous injection $f$ is a homeomorphism onto its image, and $f(\Phi(A))$, being compact, is closed in $\prod_{y \in Y} \sigma(y)$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/316500",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
} |
Evaluating the similarity of watermarks I am working on an assignment where we have to use the NEC algorithm for inserting and extracting a watermark from an image file. Using the techniques described in this article. I am at the point where I want to apply the similarity function in Matlab:
$$\begin{align*}
X &= (x_1,x_2,\dotsc,x_n)\quad\text{original watermark}\\
X^* &= (x^*_1,x^*_2,\dotsc,x^*_n)\quad\text{extracted watermark}\\\\
sim(X, X^*) &= \frac{X^* * X}{\sqrt{X^* * X^*}}\\
\end{align*}$$
After that, the purpose is to compare the result to a threshold value to take a decision.
EDIT Question: what is the best way to implement the sim() function? The values of the watermark vectors are doubles.
| Well, if you're talking from an efficiency/conciseness point of view, then MATLAB is vectorized. Let x1 be the vector of the original watermark and let x2 be the vector of the extracted watermark; then I would just do something like
dot(x1,x2)/sqrt(dot(x2,x2))
or even
dot(x1,x2)/norm(x2,2). They are identical.
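For anyone without MATLAB, the one-liner translates directly to plain Python. The Gaussian test watermarks, the seed, and the detection threshold of 6 (the value commonly quoted from the Cox et al. paper the question links to) are my illustrative assumptions, not part of the original answer:

```python
import math
import random

def sim(x, x_star):
    """sim(X, X*) = (X* . X) / sqrt(X* . X*), as in the question."""
    dot = sum(a * b for a, b in zip(x_star, x))
    return dot / math.sqrt(sum(a * a for a in x_star))

random.seed(0)
w = [random.gauss(0, 1) for _ in range(1000)]            # original watermark X
extracted = [a + 0.1 * random.gauss(0, 1) for a in w]    # X*: corrupted copy of X
unrelated = [random.gauss(0, 1) for _ in range(1000)]    # X*: a different watermark

print(sim(w, extracted))   # large, roughly sqrt(1000) ~ 31
print(sim(w, unrelated))   # small, typically within a few units of 0
```

A matching watermark scores far above the threshold while an unrelated one stays near zero, which is exactly what makes the threshold comparison work.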
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/316572",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Linear independence of polynomials from Friedberg In Page 38 Example 4, in Friedberg's Linear Algebra:
For $k = 0, 1, \ldots, n$ let $$p_k = x^k + x^{k+1} +\cdots + x^n.$$
The set $\{p_0(x), p_1(x), \ldots , p_n(x)\}$ is linearly independent in $P_n(F)$. For if $$a_0 p_0(x) + \cdots + a_n p_n(x) = 0$$ for some scalars $a_0, a_1, \ldots, a_n$ then $$a_0 + (a_0 + a_1)x +(a_0+a_1+a_2)x^2+\cdots+ (a_0+\cdots+a_n)x^n = 0.$$
My question is how he arrives at $$a_0 + (a_0 + a_1)x +(a_0+a_1+a_2)x^2+\cdots+ (a_0+\cdots+a_n)x^n = 0.$$ Why is it $(a_0+\cdots+a_n)x^n$ instead of $(a_n)x^n$ since it was $a_n p_n(x)$? It should be $(a_0+\cdots+a_n)x^0$ since $x^0$ is common to each polynomial and then $(a_1+\cdots+ a_n) x$, no?
| You are taking $n$ as variable, but the variable index is $k$. If you write it out, varying $k$, only $p_0$ has the term $1$ (so only $a_0$ is multiplied by $1$), but all $p_i$ have the term $x^n$. I think if you look at your equations carefully again, you'll see it yourself (visually: in your argument, you make the polynomial shorter from the right; but it is made shorter from the left, when counting from $0$ to $n$).
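One way to see this concretely (a sketch I'm adding, not from Friedberg): write each $p_k$ in the standard basis $\{1, x, \ldots, x^n\}$. Since $p_k = x^k + \cdots + x^n$, the coefficient matrix is triangular with $1$s on the diagonal, hence has full rank. A small exact-arithmetic check for $n = 5$:

```python
from fractions import Fraction

n = 5
# Row k holds the coefficients of p_k = x^k + x^{k+1} + ... + x^n
# in the basis {1, x, ..., x^n}: entry (k, j) is 1 iff j >= k.
rows = [[Fraction(1 if j >= k else 0) for j in range(n + 1)]
        for k in range(n + 1)]

def rank(mat):
    """Rank via Gaussian elimination with exact fractions."""
    mat = [row[:] for row in mat]
    r = 0
    for col in range(len(mat[0])):
        pivot = next((i for i in range(r, len(mat)) if mat[i][col] != 0), None)
        if pivot is None:
            continue
        mat[r], mat[pivot] = mat[pivot], mat[r]
        for i in range(len(mat)):
            if i != r and mat[i][col] != 0:
                f = mat[i][col] / mat[r][col]
                mat[i] = [a - f * b for a, b in zip(mat[i], mat[r])]
        r += 1
    return r

print(rank(rows))  # 6, i.e. n + 1: the polynomials are linearly independent
```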
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/316636",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Difference of two sets and distributivity If $A,B,C$ are sets, then we all know that $A\setminus (B\cap C)= (A\setminus B)\cup (A\setminus C)$. So by induction
$$A\setminus\bigcap_{i=1}^nB_i=\bigcup_{i=1}^n (A\setminus B_i)$$
for all $n\in\mathbb N$.
Now if $I$ is an uncountable set and $\{B_i\}_{i\in I}$ is a family of sets, is it true that:
$$A\setminus\bigcap_{i\in I}B_i=\bigcup_{i\in I} (A\setminus B_i)\,\,\,?$$
If the answer to the above question will be "NO", what can we say if $I$ is countable?
| De Morgan's laws are most fundamental and hold for all indexed families, no matter the cardinalities involved. So, $$A-\bigcap _{i\in I}A_i=\bigcup _{i\in I}(A-A_i)$$ and dually $$A-\bigcup_{i\in I}A_i=\bigcap _{i\in I}(A-A_i).$$ The proof is a very good exercise.
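The proof is indeed the real exercise, but both identities are easy to sanity-check on a finite toy family (the particular sets below are an arbitrary choice of mine):

```python
A = set(range(30))
# B_i = multiples of i within 0..29, for i = 2..6 (arbitrary toy family)
family = [set(range(0, 30, i)) for i in range(2, 7)]

inter = set.intersection(*family)
union = set.union(*family)

# A - (intersection of B_i)  ==  union of (A - B_i)
print(A - inter == set.union(*(A - B for B in family)))          # True
# A - (union of B_i)  ==  intersection of (A - B_i)
print(A - union == set.intersection(*(A - B for B in family)))   # True
```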
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/316699",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
} |
How to evaluate $\int_0^1\frac{\log^2(1+x)}x\mathrm dx$? The definite integral
$$\int_0^1\frac{\log^2(1+x)}x\mathrm dx=\frac{\zeta(3)}4$$
arose in my answer to this question. I couldn't find it treated anywhere online. I eventually found two ways to evaluate the integral, and I'm posting them as answers, but they both seem like a complicated detour for a simple result, so I'm posting this question not only to record my answers but also to ask whether there's a more elegant derivation of the result.
Note that either using the method described in this blog post or substituting the power series for $\log(1+x)$ and using
$$\frac1k\frac1{s-k}=\frac1s\left(\frac1k+\frac1{s-k}\right)\;$$
yields
$$
\int_0^1\frac{\log^2(1+x)}x\mathrm dx=2\sum_{n=1}^\infty\frac{(-1)^{n+1}H_n}{(n+1)^2}\;.
$$
However, since the corresponding identity without the alternating sign is used to obtain the sum by evaluating the integral and not vice versa, I'm not sure that this constitutes progress.
| I wrote this to answer another question, which was deleted (before I posted) because the answers to this question already answered it.
$$
\begin{align}
\int_0^1\frac{\log(1+x)^2}x\,\mathrm{d}x
&=-2\int_0^1\frac{\log(1+x)\log(x)}{1+x}\,\mathrm{d}x\tag1\\
&=-2\sum_{k=0}^\infty(-1)^{k-1}H_k\int_0^1x^k\log(x)\,\mathrm{d}x\tag2\\
&=-2\sum_{k=0}^\infty(-1)^k\frac{H_k}{(k+1)^2}\tag3\\
&=-2\sum_{k=0}^\infty(-1)^k\left(\frac{H_{k+1}}{(k+1)^2}-\frac1{(k+1)^3}\right)\tag4\\[3pt]
&=-2\left(\frac58\zeta(3)-\frac34\zeta(3)\right)\tag5\\[6pt]
&=\frac14\zeta(3)\tag6
\end{align}
$$
Explanation:
$(1)$: integration by parts
$(2)$: $\frac{\log(1+x)}{1+x}=\sum\limits_{k=0}^\infty(-1)^{k-1}H_kx^k$
$(3)$: $\int_0^1x^k\log(x)\,\mathrm{d}x=-\frac1{(k+1)^2}$
$(4)$: $H_{k+1}=H_k+\frac1{k+1}$
$(5)$: $(7)$ from this answer
$(6)$: simplify
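As a numerical sanity check of the result (my addition, not part of the derivation): the integrand extends smoothly to $x=0$ with limit $0$, so composite Simpson converges fast; the value of $\zeta(3)$ is hardcoded below.

```python
import math

def f(x):
    # log(1+x)^2 / x, extended by its limit 0 at x = 0
    return math.log1p(x) ** 2 / x if x > 0 else 0.0

def simpson(g, a, b, n):
    """Composite Simpson's rule with n subintervals (n even)."""
    h = (b - a) / n
    s = g(a) + g(b)
    s += 4 * sum(g(a + (2 * i - 1) * h) for i in range(1, n // 2 + 1))
    s += 2 * sum(g(a + 2 * i * h) for i in range(1, n // 2))
    return s * h / 3

zeta3 = 1.2020569031595942854
val = simpson(f, 0.0, 1.0, 2000)
print(val, zeta3 / 4)   # both ~ 0.300514
```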
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/316745",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "47",
"answer_count": 5,
"answer_id": 1
} |
Most efficient method for converting flat rate interest to APR. A while ago, a rather sneaky car salesman tried to sell me a car financing deal, advertising an 'incredibly low' annual interest rate of 1.5%. What he later revealed that this was the 'flat rate' (meaning the interest is charged on the original balance, and doesn't decrease with the balance over time).
The standard for advertising interest is APR (annual percentage rate), where the interest charged decreases in proportion to the balance. Hence the sneaky!
I was able to calculate what the interest for the flat rate would be (merely 1.5% of the loan, fixed over the number of months), but I was unable to take that total figure of interest charged and then convert it to the appropriate APR for comparison.
I'm good with numbers but not a mathematician. To the best of my knowledge I would need to use some kind of trial and error of various percentages (a function that oscillates perhaps?) to find an APR which most closely matched the final interest figure.
What would be the most appropriate mathematical method for achieving this?
Please feel free to edit this question to add appropriate tags - I don't know enough terminology to appropriately tag the question.
| My rule of thumb to convert a flat rate to APR (or vice versa) is as follows:
APR ≈ Flat rate x 2 x n / (n + 1), where n is the number of payments.
Example: 4% x 2 x 12 / (12 + 1) = 96 / 13 ≈ 7.38%
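For an exact conversion rather than the rule of thumb, one can solve the annuity equation for the periodic rate numerically — the "trial and error" the question anticipates, done as bisection. This is my construction (principal normalised to 1, monthly instalments assumed), not part of the answer above:

```python
def flat_to_apr(flat_annual, years, payments_per_year=12):
    """Find the periodic rate at which the equal instalments discount
    back to the principal (here 1), then annualise.  Bisection works
    because the present value pv(r) is decreasing in r."""
    n = years * payments_per_year
    payment = (1 + flat_annual * years) / n   # equal instalments on principal 1

    def pv(r):   # present value of all instalments at periodic rate r
        if r == 0:
            return payment * n
        return payment * (1 - (1 + r) ** -n) / r

    lo, hi = 0.0, 1.0
    for _ in range(100):
        mid = (lo + hi) / 2
        if pv(mid) > 1:    # rate too low: instalments worth more than the loan
            lo = mid
        else:
            hi = mid
    return lo * payments_per_year   # nominal annual rate (APR)

exact = flat_to_apr(0.015, 1) * 100
rule_of_thumb = 1.5 * 2 * 12 / (12 + 1)
print(exact, rule_of_thumb)   # exact APR sits just below the ~2.77% rule of thumb
```

For the 1.5% flat rate from the question, the exact APR comes out close to the rule-of-thumb value of about 2.77% — roughly double the advertised flat rate, which is exactly the salesman's trick.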
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/316788",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 7,
"answer_id": 3
} |
Is an infinite linearly independent subset of $\Bbb R$ dense? Suppose $(a_n)$ is a real sequence and $A:=\{a_n \mid n\in \Bbb N \}$ has an infinite linearly independent subset (with respect to field $\Bbb Q$). Is $A$ dense in $\Bbb R?$
| If $A$ is a linearly independent subset of $\mathbb R$, for each $a\in A$ there is a positive integer $n(a)$ such that $n(a)>|a|$. The set $\left\{\dfrac{a}{n(a)}:a\in A\right\}$ is a linearly independent set with the same cardinality and span as $A$, but it is a subset of $(-1,1)$. So the answer is no: a set with an infinite linearly independent subset need not be dense in $\mathbb R$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/316866",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 1
} |