Representing the statement using Quantifiers I want to represent the statement "Some numbers are not real" using quantifiers. I have been told by my teacher that the correct way to represent this is:
num(x) : x is a number
real(x) : x is real
∃x (num(x) ∧ ¬real(x))
This made sense; I can translate this statement into "There exists some x such that x is a number and x is not real."
But the answer I came up with by myself is this one:
∃x (num(x) ⇒ ¬real(x))
In translation: there exists some x such that if x is a number, then x is not real.
I just can't get around why my answer is wrong: for some x, if x is a number then x is not real. Doesn't that sound very similar to the statement "Some numbers are not real"?
In one of the video lectures I saw this example, which made me even more confused.
"No dog is intelligent"
dog(x) : x is a dog
intel(x) : x is intelligent
The representation was
∀x (dog(x) ⇒ ¬intel(x))
If this representation is true, how is my representation of "Some numbers are not real" wrong?
P.S.: I am just a beginner at discrete math finding my way; please pardon me if the question doesn't meet the quality standards of the community.
| Your version is wrong because "A and B" is not the same as "if A, then B".
For instance, there exists a horse H such that if H is 50 feet tall, then I win the lottery.
It is sadly not true that there exists a horse H such that H is 50 feet tall and I win the lottery.
More pointedly, (A implies B) is true when A is false. Therefore, "there exists X such that if X is a number, then X is not real" is true when X, for instance, is a banana.
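To see the difference concretely, here is a minimal Python sketch (the toy domain and predicates are ours, purely for illustration): over a domain where every number is real, the ∧-version is false but the ⇒-version is true, witnessed by the banana.

```python
# Toy domain: 2 and 3 are (real) numbers; "banana" is not a number at all.
domain = [2, 3, "banana"]
num  = lambda t: t in (2, 3)
real = lambda t: t in (2, 3)   # every number in this domain is real

exists_and     = any(num(t) and not real(t) for t in domain)
exists_implies = any((not num(t)) or (not real(t)) for t in domain)  # A => B is (not A) or B

print(exists_and)      # False: no number here fails to be real
print(exists_implies)  # True: the banana satisfies the implication vacuously
```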
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/369546",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 0
} |
Does every non-trivial element of $\mathbb{Z}_p$ generate the group? This just popped up in my head and I just wanted to make sure if I'm right.
Every element (except the identity element $0$) of the group $\mathbb{Z}_p$ (under addition and $p$ is prime) is a generator for the group. For example, $\mathbb{Z}_5 = \langle 1 \rangle = \langle 2 \rangle = \langle 3 \rangle = \langle 4 \rangle$.
Thanks!
| You can show that $\mathbb{Z}_p$ is a field, so if $H= \langle h \rangle$ with $h \neq 0$, then $1 \in H$ since $h$ is invertible; you deduce that $H= \mathbb{Z}_p$, i.e., $h$ is a generator.
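If you want to convince yourself numerically before writing the proof, here is a small throwaway check in Python:

```python
# Verify that every nonzero element of Z_p (under addition) generates the group.
def generates(h, p):
    return {(k * h) % p for k in range(p)} == set(range(p))

for p in (2, 3, 5, 7, 11, 13):
    assert all(generates(h, p) for h in range(1, p))
print("every nonzero element generated Z_p for all primes tested")
```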
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/369628",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 2
} |
What are zigzag theories, and why are they called that? I've encountered the term zigzag theory while randomly clicking my way through the internet. It is given here. I haven't been able to find a clear explanation of what constitutes a zigzag theory. Here, it is said that they have to do with non-Cantorian sets, which, as I understand, are sets that fail to satisfy Cantor's theorem. The article also says that New Foundations is a zigzag theory, but I don't see that it says why exactly that is so. I have gone through the Wikipedia article on New Foundations, and there's nothing about zigzags in it.
So what makes a zigzag theory? And what's zigzagging about it?
| As a footnote to Arthur Fischer, here's an additional quote from Michael Potter's Set Theory and its Philosophy:
In 1906, Russell canvassed three forms a solution to the paradoxes might take:
the no-class theory, limitation of size, and the zigzag theory. It is striking that a century later all of the theories that have been studied in any detail are recognizably descendants of one or other of these. Russell’s no-class theory became the theory of types, and the idea that the iterative conception is interpretable as a cumulative version of the theory of types was explained with great clarity by Gödel in a lecture he gave in 1933, ... although the view that it is an independently motivated notion rather than a device to make the theory more susceptible to metamathematical investigation is hard to find in print before Gödel 1947. The doctrine of limitation of size ... has received rather less philosophical attention, but the cumulatively detailed analysis in Hallett 1984 can be recommended. The principal modern descendants of Russell’s zigzag theory -- the idea that a property is collectivizing provided that its syntactic expression is not too complex -- are Quine’s two theories NF and ML. Research into their properties has always been a minority sport: for the current state of knowledge consult Forster 1995. What remains elusive is a proof of the consistency of NF relative to ZF or any of its common strengthenings.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/369689",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 1
} |
Is a symmetric matrix with main diagonal zero classified as a separate type of its own? And does it have a particular name? I have a symmetric matrix as shown below
$$\begin{pmatrix} 0&2&1&4&3 \\ 2&0&1&2&1 \\ 1&1&0&3&2 \\4&2&3&0&1 \\ 3&1&2&1&0\end{pmatrix}$$
Does this matrix belong to a particular type?
I am a CS student and not familiar with types of matrices. I am researching to find out the particular matrix type, since I have a huge collection of matrices similar to this one. By knowing the type of matrix, maybe I can go through its properties and work out easier ways to process data efficiently. I am working on a research project in Data Mining. Please help.
P.S.: Only the diagonal elements are zero. Non-diagonal elements are positive.
| This is a hollow matrix. Since its trace is zero, you can say that the sum of its eigenvalues equals zero.
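As a quick numerical sanity check (a sketch using NumPy; the matrix is the one from the question):

```python
import numpy as np

# The symmetric, zero-diagonal ("hollow") matrix from the question.
A = np.array([[0, 2, 1, 4, 3],
              [2, 0, 1, 2, 1],
              [1, 1, 0, 3, 2],
              [4, 2, 3, 0, 1],
              [3, 1, 2, 1, 0]], dtype=float)

eig = np.linalg.eigvalsh(A)   # real eigenvalues, since A is symmetric
print(eig.sum())              # ~0, because the eigenvalue sum equals the trace
```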
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/369735",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
What is the probability that a random $n\times n$ bipartite graph has an isolated vertex? By a random $n\times n$ bipartite graph, I mean a random bipartite graph on two vertex classes of size $n$, with the edges added independently, each with probability $p$.
I want to find the probability that such a graph contains an isolated vertex.
Let $X$ and $Y$ be the vertex classes. I can calculate the probability that $X$ contains an isolated vertex by considering one vertex first and using the fact that vertices in $X$ are independent.
But I don't know how to calculate the probability that $X\cup Y$ contains an isolated vertex. Can someone help? Thanks!
| This can be done using inclusion/exclusion. We have $n+n$ conditions for the individual vertices being isolated. There are $\binom nk\binom nl$ combinations of these conditions that require $k$ particular vertices in $X$ and $l$ particular vertices in $Y$ to be isolated, and the probability for this is $q^{kn+ln-kl}$, with $q=1-p$. Thus by inclusion/exclusion the desired probability that at least one vertex is isolated is
\begin{align}
&1-\sum_{k=0}^n\sum_{l=0}^n(-1)^{k+l}\binom nk\binom nlq^{kn+ln-kl}\\
={}&1-\sum_{k=0}^n(-1)^k\binom nkq^{kn}\sum_{l=0}^n(-1)^l\binom nlq^{ln-kl}\\
={}&1-\sum_{k=0}^n(-1)^k\binom nkq^{kn}\left(1-q^{n-k}\right)^n\\
={}&1-\sum_{k=0}^n(-1)^k\binom nk\left(q^k-q^n\right)^n\;.
\end{align}
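The closed form can be checked against a brute-force simulation; here is a sketch in Python (function names and parameters are ours):

```python
import math, random

def isolated_prob(n, p):
    # closed form derived above: P(at least one isolated vertex)
    q = 1 - p
    return 1 - sum((-1) ** k * math.comb(n, k) * (q ** k - q ** n) ** n
                   for k in range(n + 1))

def simulate(n, p, trials=100_000):
    hits = 0
    for _ in range(trials):
        adj = [[random.random() < p for _ in range(n)] for _ in range(n)]
        iso_X = any(not any(row) for row in adj)                              # isolated vertex in X
        iso_Y = any(not any(adj[i][j] for i in range(n)) for j in range(n))   # isolated vertex in Y
        hits += iso_X or iso_Y
    return hits / trials

n, p = 4, 0.3
print(isolated_prob(n, p), simulate(n, p))   # the two numbers should agree closely
```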
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/369830",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11",
"answer_count": 1,
"answer_id": 0
} |
Prove By Mathematical Induction (factorial-to-the-fourth vs power of two) Prove $(n!)^{4}\le2^{n(n+1)}$ for $n = 0, 1, 2, 3,...$
Base Step: $(0!)^{4} = 1 \le 2^{0(0+1)} = 1$
IH: Assume that $(k!)^{4} \le 2^{k(k+1)}$ for some $k\in\mathbb N$.
Induction Step: Show $((k+1)!)^{4} \le 2^{(k+1)((k+1)+1)}$
Proof: $((k+1)!)^{4} = (k+1)^{4}\cdot(k!)^{4}$ by the definition of factorial.
$$\begin{align*}
(k+1)^{4}\cdot(k!)^{4} &\le (k+1)^{4}\cdot 2^{k(k+1)}\\
&\le (k+1)^{4}\cdot 2^{(k+1)((k+1)+1)}
\end{align*}$$
by the IH.
That is as far as I have been able to get at this point...Please Help! Any suggestions or comments are greatly appreciated.
| You are doing well up to $(k+1)^4\cdot(k!)^4 \le (k+1)^4\cdot 2^{k(k+1)}$. That is the proper use of the induction hypothesis. Now you need to argue $(k+1)^4 \le \frac {2^{(k+1)(k+2)}}{2^{k(k+1)}}=2^{(k+1)(k+2)-k(k+1)}=2^{2(k+1)}$. (One caveat: this last inequality fails at $k=2$, since $3^4=81>64=2^6$, so check the cases $n\le 3$ directly and run the induction for $k\ge 3$.)
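A brute-force check of both the statement and the step bound, as a throwaway Python sketch:

```python
from math import factorial

# The claim itself holds for every n tested...
for n in range(20):
    assert factorial(n) ** 4 <= 2 ** (n * (n + 1))

# ...but the step bound (k+1)^4 <= 2^(2(k+1)) fails exactly at k = 2.
print([k for k in range(20) if (k + 1) ** 4 > 2 ** (2 * (k + 1))])   # [2]
```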
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/369882",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 1
} |
Relation between Galois representation and rational $p$-torsion Let $E$ be an elliptic curve over $\mathbb{Q}$. Does the image of $\operatorname{Gal}(\overline{\mathbb{Q}}/\mathbb{Q})$ under the mod $p$ Galois representation tell us whether or not $E$ has rational $p$-torsion?
| Yes, it does, and in a rather straightforward way. The $p$-torsion in $E(\mathbb{Q})$ is precisely the fixed vectors under the Galois action. In particular, $E$ has full rational $p$-torsion if and only if the mod $p$ representation is trivial.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/369955",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
} |
Lagrangian subspaces Let $\Lambda_{n}$ be the set of all Lagrangian subspaces of $\mathbb{C}^{n}$, and $P\in \Lambda_{n}$. Put $U_{P} = \{Q\in \Lambda_{n} : Q\cap (iP)=0\}$. There is an assertion that the set $U_{P}$ is homeomorphic to the real vector space of all symmetric endomorphisms of $P$. And then in the proof there is the fact that the subspaces $Q$ that intersect $iP$ only at $0$ are the graphs of linear maps $\phi : P\to iP$. This is what I don't understand; any explanation or reference where I can find it would be helpful.
| Remember these are Lagrangians and thus half-dimensional. It's easiest to see what is going on if you take $P = \mathbb{R}^n$. This simplifies notation and also is somewhat easier to understand, imo.
We are given a Lagrangian subspace $Q$, transverse to $i \mathbb{R}^n$. Then, consider the linear map $Q \to \mathbb{R}^n$ by taking a $z \in Q$ and mapping to the real part (i.e. projecting to $P$ along $iP$). This map is injective (by the transversality assumption) and is thus an isomorphism. The inverse of this map takes a point $x \in \mathbb{R}^n$ and constructs a $y(x) \in \mathbb{R}^n$ so $x + i y(x) \in Q$. The map $y \colon \mathbb{R}^n \to i \mathbb{R}^n$ is what you are looking for.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/370012",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Why isn't $\lim \limits_{x\to\infty}\left(1+\frac{1}{x}\right)^{x}$ equal to $1$? Given $\lim \limits_{x\to\infty}(1+\frac{1}{x})^{x}$, why can't you reduce it to $\lim \limits_{x\to\infty}(1+0)^{x}$, making the result "$1$"? Obviously, it's wrong, as the true value is $e$. Is it because the $\frac{1}{x}$ is still something even though it's really small? Then why is $$\lim_{x\to\infty}\left(\frac{1}{x}\right) = 0\text{?}$$
What is the proper way of calculating the limit in this case?
| In the expression
$$\left(1+\frac{1}{x}\right)^x,$$
the $1+1/x$ is always bigger than one. Furthermore, the exponent is going to $\infty$ and (I suppose) that any number larger than one raised to infinity should be infinity. Thus, you could just as easily ask, why isn't the limit infinity?
Of course, the reality is that the $1/x$ going to $0$ in the base pushes the limit down towards $1$ (as you observe) and that the $x$ going to $\infty$ in the exponent pulls the limit up to $\infty$. There's a balance between the two and the limit lands in the middle somewhere, namely at the number $e$, a fact explained many times in this forum.
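You can watch the two competing effects balance out numerically (a throwaway Python check):

```python
import math

for x in (10, 100, 10_000, 1_000_000):
    print(x, (1 + 1 / x) ** x)   # 2.5937..., 2.7048..., 2.7182..., ...
print("e =", math.e)             # the limit is e: neither 1 nor infinity
```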
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/370125",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "20",
"answer_count": 10,
"answer_id": 1
} |
Minimal value of a polynomial I do not know the following statement is true or not:
Given $1<x_0<2$, there exists $\delta>0$ such that for any $n$, defining $A=\{ f(x)=\sum\limits_{i=0}^{n}a_ix^i : a_i\in\{0,1\}\}$, for any $f,g \in A$ whose degrees are the same we have $\delta\leq|f(x_0)-g(x_0)|$ or $f(x_0)=g(x_0)$.
| Let $x_0$ be a real root of the polynomial $1-X^2-X^3+X^4$ in the interval $(1,2)$; it exists by the intermediate value theorem because this polynomial has value $0$ and derivative $-1$ at $X=1$, and value $5$ at $X=2$. Then with $f(x)=1+x^4$ and $g(x)=x^2+x^3$ one has $f(x_0)=g(x_0)$, so no positive $\delta\leq|f(x_0)-g(x_0)|=0$ can exist.
I might add that the reason the answers look at cases where $f(x_0)-g(x_0)=0$ is that this is obviously the only thing that can prevent $\delta$ from existing, so the whole formulation with $\delta$ seems a bit pointless.
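For the skeptical, here is a numerical confirmation of the counterexample (a Python sketch; the bisection bracket is ours):

```python
# h = f - g = 1 - x^2 - x^3 + x^4 has a root x0 in (1, 2); locate it by bisection.
h = lambda x: 1 - x**2 - x**3 + x**4
lo, hi = 1.1, 2.0            # h(1.1) < 0 < h(2); we avoid the root at x = 1 itself
for _ in range(60):
    mid = (lo + hi) / 2
    lo, hi = (mid, hi) if h(mid) < 0 else (lo, mid)
x0 = (lo + hi) / 2
print(x0)                          # ~1.3247
print(1 + x0**4, x0**2 + x0**3)    # f(x0) and g(x0) agree
```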
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/370253",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Show $\det \left[T\right]_\beta=-1$, for any basis $\beta$ when $Tx=x-2(x,u)u$, $u$ unit vector Let $u$ be a unit vector in an $n$ dimensional inner product space $V$.
Define the orthogonal operator
$$
Tx= x - 2 (x,u)u,
$$
where $x \in V$. Show
$$
\det A = -1,
$$
whenever $A$ is a matrix representation of $T$.
I can show that $\det A=\pm1$
$$
1=\det(I)=\det(A^tA)=(\det A)^2.
$$
Then I guess I should extend $u$ to a basis $\beta$ for $V$
$$
\beta=\{u,v_1,v_2,\ldots,v_{n-1}\}.
$$
I don't see what to do after this.
| To complement rschwieb's answer, he is giving you a way to determine a specific basis in which to perform your calculation easily. Then recall that if $A$ and $A'$ are two matrices that represent the same linear transformation in two different bases, then $A' = P^{-1} A P$ where $P$ is a change of basis matrix. In particular, this implies that $\det(A') = \det(A)$ (why?).
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/370367",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
In how many ways can five letters be posted in 4 boxes? Question : In how many ways can 5 letters be posted in 4 boxes?
Answer 1: We take a letter. It can be posted in any of the 4 boxes. Similarly, next letter can be posted in 4 ways, and so on. Total number = $4^5$.
Answer 2: Among the total number of positions of 5 letters and 3 dividers (between 4 boxes), positions of 3 dividers can be selected in $\binom{8}{3}$ ways.
Which one is correct (I think the first one)? Why is the other one wrong and how does it differ from the right answer in logical/combinatorial perspective?
| The way to understand this problem (and decide which answer is correct) is to ask what you are counting exactly. If all the letters are distinct then the first answer sounds better because it counts arrangements that only differ by where each particular letter goes.
The second answer appears to make the letters and the dividers distinct, as well as counting as distinct the cases where two letters appear in a box in a different order. These seem to be counting different things.
If you modify the second answer to place 3 identical dividers in a row of 5 identical letters, (choose 3 from 6 ‘gaps’), then it works fine. But those ‘identical’s are not correct (are they?).
If we tried to ‘correct’ the second solution by multiplying by rearrangements (to take into account all possible rearrangements of letters, say) we would discover that some rearrangements would be introduced that weren’t supposed to be distinct. No, the second approach is wrong-headed: it counts the wrong things and it gets very complicated when you try to correct it.
So the answer is “the first solution is correct”. This applies when the letters are distinct and the boxes are distinct.
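A brute-force enumeration makes the contrast explicit (a small Python sketch):

```python
from itertools import product
from math import comb

# Each of 5 distinct letters independently goes to one of 4 distinct boxes.
assignments = list(product(range(4), repeat=5))
print(len(assignments))   # 1024 = 4**5, matching the first answer

# The stars-and-bars count treats the letters as identical: a different question.
print(comb(8, 3))         # 56
```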
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/370439",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 2,
"answer_id": 1
} |
Finding the Laplace Transform of sin(t)/t I'm in a Differential Equations class, and I'm having trouble solving a Laplace Transformation problem.
This is the problem:
Consider the function
$$f(t) = \begin{cases}\frac{\sin(t)}{t} & t \neq 0\\ 1 & t = 0\end{cases}$$
a) Using the power series (Maclaurin) for $\sin(t)$, find the power series representation for $f(t)$ for $t > 0$.
b) Because $f(t)$ is continuous on $[0, \infty)$ and clearly of exponential order, it has a Laplace transform. Using the result from part a) (assuming that linearity applies to an infinite sum) find $\mathfrak{L}\{f(t)\}$. (Note: It can be shown that the series is good for $s > 1$)
There's a few more sub-problems, but I'd really like to focus on b).
I've been able to find the answer to a):
$$ 1 - \frac{t^2}{3!} + \frac{t^4}{5!} - \frac{t^6}{7!} + O(t^8)$$
The problem is that I'm awful at anything involving power series. I have no idea how I'm supposed to continue here. I've tried using the definition of the Laplace Transform and solving the integral
$$\int_0^\infty e^{-st}\,\frac{\sin(t)}{t}\, dt$$
However, I just end up with an unsolvable integral.
Any ideas/advice?
| Just a small hint:
Theorem: If $\mathcal{L}\{f(t)\}=F(s)$ and $\frac{f(t)}{t}$ has a Laplace transform, then $$\mathcal{L}\left(\frac{f(t)}{t}\right)=\int_s^{\infty}F(u)du$$
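Here $F(s)=\frac{1}{s^2+1}$, so the theorem gives $\int_s^{\infty}\frac{du}{u^2+1}=\arctan(1/s)$. A quick numerical sanity check, as a sketch assuming SciPy is available:

```python
import math
from scipy.integrate import quad

s = 2.0
lhs, _ = quad(lambda t: math.exp(-s * t) * math.sin(t) / t, 1e-12, math.inf)
rhs, _ = quad(lambda u: 1 / (1 + u ** 2), s, math.inf)
print(lhs, rhs, math.atan(1 / s))   # all three agree: ~0.4636
```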
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/370491",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 1
} |
Characteristic Polynomial of a Linear Map I am hoping for some help with this question from a practice exam I am doing before a linear algebra final.
Let $T_1, T_2$ be the linear maps from $C^{\infty}(\mathbb{R})$ to $C^{\infty}(\mathbb{R})$ given by
$$T_1(f)=f'' - 3f' + 2f$$
$$T_2(f)=f''-f'-2f$$
(a) Write out the characteristic polynomials for $T_1$ and $T_2$.
(b) Compute the composition
(c) Find a vector in $\ker(T)$ which is not a linear combination of vectors in $\ker(T_1)$ and $\ker(T_2)$.
I know that the characteristic polynomial of an $n \times n$ matrix is the expansion of $$\det(A - I \lambda ).$$
Where I'm stuck here is finding the matrices for $T_1$ and $T_2$. I know it's silly, but I am used to applying these ideas to transformations of the form $T_A: \mathbb{R}^n \to \mathbb{R}^m$, where $A$ is an $m \times n$ matrix and $T_A(\mathbf{x})= A \mathbf{x}$. Then $A$ is given by
$$A=[T(\mathbf{e}_1) \vdots T(\mathbf{e}_2) \vdots \cdots \vdots T(\mathbf{e}_n)].$$
Then I would find the characteristic polynomial of $A$. Now, I know that these ideas apply to abstract vector spaces like $C^{\infty}(\mathbb{R})$, but for some reason I cannot bridge the gap intuitively in the case of transformations. My textbook covers finding the matrix of a transformation for vectors in $\mathbb{R}^n$ but not for abstract vector spaces.
I am having the same problem with another question from the same practice exam:
Let $P_2$ be the vector space of polynomials of degree $\leq 2$, and define a linear transformation $T: P_2 \to P_2$ by $T(f)=(x+3)f' + 2f$. Write the matrix for $T$ with respect to the basis $\beta = (1,x,x^2)$.
Once again I do not know how to write the matrix for $T$. I would really appreciate any help understanding these concepts. Thanks very much.
| Here's how to solve the second boxed problem.
First, for every $v\in\beta$, write $T(v)$ in the basis $\beta$:
$$
\begin{align}
T(1) &= 2\cdot 1+0\cdot x+0\cdot x^2 \\
T(x) &= 3\cdot 1+3\cdot x+0\cdot x^2 \\
T(x^2) &= 0\cdot 1+6\cdot x+4\cdot x^2
\end{align}
$$
Now, the scalars appearing in these equations become the columns of $[T]_\beta$:
$$
[T]_\beta=
\begin{bmatrix}
2 & 3 & 0 \\
0 & 3 & 6 \\
0 & 0 & 4
\end{bmatrix}
$$
Of course, this matrix encodes a lot of information about the linear transformation $T$. For example, we can read off the eigenvalues as $2$, $3$, and $4$.
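One can verify the matrix and its eigenvalues symbolically; here is a sketch assuming SymPy is available:

```python
import sympy as sp

x = sp.symbols('x')
T = lambda f: sp.expand((x + 3) * sp.diff(f, x) + 2 * f)

basis = [sp.Integer(1), x, x**2]
# column j holds the coordinates of T(basis[j]) in the basis (1, x, x^2)
cols = [[sp.Poly(T(b), x).coeff_monomial(m) for m in basis] for b in basis]
M = sp.Matrix(3, 3, lambda i, j: cols[j][i])
print(M)               # Matrix([[2, 3, 0], [0, 3, 6], [0, 0, 4]])
print(M.eigenvals())   # {2: 1, 3: 1, 4: 1}
```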
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/370573",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Integration function spherical coordinates, convolution How can I calculate the following integral explicitly:
$$\int_{R^3}\frac{f(x)}{|x-y|}dx$$
where $f$ is a function with spherical symmetry, that is, $f(x)=f(|x|)$?
I tried to use polar coordinates at $x=0$ but it didn't help. Any idea on how to do this? Do you think it is doable somehow?
| This is a singular integral and so you can expect some weird behavior but if $f$ has spherical symmetry, then I would change to spherical coordinates. Then you'll have $f(x) = f(r)$. $dx$ will become $r^2\sin(\theta)drd\theta d\phi$. The tricky part is then what becomes of $|x-y|$. Recall that $|x-y| = \sqrt{(x-y)\cdot(x-y)} = \sqrt{|x|^2-2x\cdot y+|y|^2}$. In our case, $|x|^2 = r^2$ and $x = (r\sin(\theta)\cos(\phi), r\sin(\theta)\sin(\phi), r\cos(\theta))$. From here, I'm not sure how much simplification there can be. What more are you looking for?
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/370648",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
introductory reference for Hopf Fibrations I am looking for a good introductory treatment of Hopf Fibrations and I am wondering whether there is a popular, well-regarded, accessible book. (I should probably say that I am just starting to learn about vector bundles.)
If anyone with more experience could point me in the right direction this would be really helpful.
| These notes might also add some motivation to the topic coming in from Physics:
http://www.itp.uni-hannover.de/~giulini/papers/DiffGeom/Urbantke_HopfFib_JGP46_2003.pdf
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/370718",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 4,
"answer_id": 2
} |
Free modules have no infinitely divisible elements Let $F$ be a free $\mathbb Z$-module. How can we show that $F$ has no non-zero infinitely divisible element? (An element $v$ in $F$ is called infinitely divisible if the equation $nx = v$ has solutions $x$ in $F$ for infinitely many integers $n$.)
| By definition, $F$ has a basis $(b_i)_{i \in I}$. Suppose
$$
v = a_1 b_1 + \dots + a_k b_k,
$$
for $a_i \in \Bbb{Z}$, is divisible by infinitely many $n$. Choose $n$ positive, larger than all the $\lvert a_i \rvert$, so that $v$ is divisible by $n$. If $v = n x$ for
$$
x = x_1 b_1 + \dots + x_k b_k
$$
then $n x_i = a_i$ for all $i$, as the $b_i$ are a basis, which implies all $a_i = 0$, as $n > \lvert a_i \rvert$. Thus $v = 0$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/370790",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Prove that a sequence diverges Let $b > 1$. Prove that the sequence $\frac{b^n}{n}$ diverges to $\infty$.
I know that I need to show that $\dfrac{b^n}{n} \geq M $, possibly by solving for $n$, but I am not sure how.
If I multiply both sides by $n$, you get $b^n \geq Mn$, but I don't know if that is helpful.
| You could use L'Hospital's rule (treating $n$ as a continuous real variable; note $\log b > 0$ since $b > 1$):
$$
\lim_{n\rightarrow\infty}\frac{b^n}{n}=\lim_{n\rightarrow\infty}\frac{b^n\log{b}}{1}=\infty
$$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/370876",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Finding the limit of function - irrational function How can I find the following limit:
$$ \lim_{x \rightarrow -1 }\left(\frac{1+\sqrt[5]{x}}{1+\sqrt[7]{x}}\right)$$
| Let $x=t^{35}$. As $x \to -1$, we have $t \to-1$. Hence,
$$\lim_{x \to -1} \dfrac{1+\sqrt[5]{x}}{1+\sqrt[7]{x}} = \lim_{t \to -1} \dfrac{1+t^7}{1+t^5} = \lim_{t \to -1} \dfrac{(1+t)(1-t+t^2-t^3+t^4-t^5+t^6)}{(1+t)(1-t+t^2-t^3+t^4)}$$
I am sure you can take it from here.
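If you want to double-check the value you obtain, here is a throwaway numeric sketch in Python (the real odd-root helper is ours):

```python
def root(x, n):                     # real n-th root for odd n
    return (abs(x) ** (1.0 / n)) * (1 if x >= 0 else -1)

for eps in (1e-2, 1e-4, 1e-6):
    x = -1 + eps
    print((1 + root(x, 5)) / (1 + root(x, 7)))   # the values approach the limit
```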
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/371106",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 0
} |
Is it always true that $\lim_{x\to\infty} [f(x)+c_1]/[g(x)+c_2]= \lim_{x\to\infty}f(x)/g(x)$? Is it true that $$\lim\limits_{x\to\infty} \frac{f(x)+c_1}{g(x)+c_2}= \lim\limits_{x\to\infty} \frac{f(x)}{g(x)}?$$ If so, can you prove it? Thanks!
| Think of it this way: the equality is true only when $f(x), g(x)$ completely 'wash out' the additive constants at infinity. To be more precise, suppose $f(x), g(x) \rightarrow \infty$. Then
$$
\frac{f(x) + c_1}{g(x) + c_2} = \frac{f(x)}{g(x)} \frac{1 + c_1/f(x)}{1 + c_2 / g(x)}
$$
In the limit as $x \rightarrow \infty$, the right-hand factor goes to 1, and so the left-hand quantity approaches the same limit as $f(x) / g(x)$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/371171",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 1
} |
elementary ring proof with composition of functions and addition of functions as operations
Consider the set $\mathcal{F}=\lbrace f\mid f:\Bbb{R}\to\Bbb{R}\rbrace$, in which an additive group is defined by addition of functions and a second operation defined as composition of functions. The question asks to verify that the resulting structure does not satisfy the ring properties.
This is the question 24.10 from Modern Algebra, by Durbin, 4th edition.
So what I have so far is that all of the properties for the additive group (with addition of functions as the operation here) hold, associativity of the composition of functions hold, and that the failure must come from the distributive laws.
My proof of my claim, so far;
$$1\,\,\,\,\,[(f\circ g)+h](x)=\cdots=f(g(x))+f(h(x)).$$
...but,
$$2\,\,[(f\circ g)+h](x)=$$
$$3\,\,(f\circ g)(x)+h(x)=$$
$$4\,\,\,f(g(x))+h(x).$$
...which brings me to my contradiction of the properties of rings that;
$$5\,\,\,[f\circ(g+h)](x)\neq[(f\circ g)+h](x).$$
So my question is: is this the correct path to showing the distributive laws are not obeyed when composition is taken as an operation on $\mathcal{F}$? Particularly my proceeding from 2 to 3 to 4 is what I feel uncomfortable about, like I may be missing a step in there.
Thanks
| Taking on the suggestion of Sammy Black, consider $g = h = \mathbf{1} = \text{the identity function}$, and $f$ any function.
Suppose $f \circ (\mathbf{1} + \mathbf{1}) = f \circ \mathbf{1} + f \circ \mathbf{1} = f + f = 2 f$.
So for all $x \in \Bbb{R}$ you should have $f( 2 x) = 2 f(x)$. Now think of a function $f$ which does not satisfy this.
(Variation. Take $g(x) = a$ and $h(x) = b$ for two arbitrary constants $a, b \in \mathbf{R}$. If $f \circ (g + h) = f \circ g + f \circ h$ holds, then $f(a + b) = f(a) + f(b)$ for all $a, b \in \Bbb{R}$. Now take a non-additive $f$.)
So the distributive property $f \circ (g + h) = f \circ g + f \circ h$ does not generally hold.
On the other hand, the other distributive property $(g + h) \circ f = g \circ f + h \circ f$ always holds. In fact for all $x$ one has
$$
\begin{align}
((g + h) \circ f) (x) &= (g + h) ( f(x)) \\&= g(f(x)) + h(f(x)) \\&= (g \circ f) (x) + (h \circ f) (x) \\&= (g \circ f + h \circ f) (x).
\end{align}
$$
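Here is a concrete numeric illustration of the failing law (and the one that holds), as a quick Python sketch; the choice $f(x)=x^2$ follows the hint above:

```python
# f(x) = x^2 is not additive, so the left "distributive law" fails.
f = lambda t: t ** 2
g = lambda t: t          # g = h = the identity function
h = lambda t: t

t = 3.0
print(f(g(t) + h(t)))      # (f o (g+h))(t) = (2t)^2 = 36.0
print(f(g(t)) + f(h(t)))   # (f o g + f o h)(t) = 2 t^2 = 18.0

# The other law holds pointwise by the definition of addition of functions:
print(g(f(t)) + h(f(t)))   # ((g+h) o f)(t) = (g o f + h o f)(t) = 18.0
```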
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/371223",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 2,
"answer_id": 1
} |
An infinite union of closed sets is a closed set? Question: $\{B_n\}_{n=1}^\infty$ is a family of closed sets in $\Bbb R$.
Prove that $\cup _{n=1}^\infty B_n$ is not necessarily a closed set.
What I thought: Using a counterexample: if I say that each $B_i$ is the set of all numbers in the range $[i,i+1]$, then I can pick a sequence $a_n \in \cup _{n=1}^\infty B_n$ s.t. $a_n \to \infty$ (because the union includes all reals $\geq 1$), and since $\infty \notin \Bbb R$, $\cup _{n=1}^\infty B_n$ is not a closed set.
Is this proof correct?
Thanks
| But since $\infty\notin\Bbb R$ we cannot use it as a counterexample. To see that is indeed the case note that your union is the set $[1,\infty)$, which is closed. Why is it closed? Recall that $A\subseteq\Bbb R$ is closed if and only if every convergent sequence $a_n$ whose elements are from $A$, has a limit inside $A$. So we need sequences whose limits are real numbers to begin with, and so sequences converging to $\infty$ are of no use to us. On the other hand, if $a_n\geq 1$ then their limit is $\geq 1$, so $[1,\infty)$ is closed.
You need to try this with bounded intervals whose union is bounded. Show that in such a case the result can be an open interval.
Another option is to note that $\{a\}$ is closed, but $\Bbb Q$ is the union of countably many closed sets. Is $\Bbb Q$ closed?
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/371305",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10",
"answer_count": 6,
"answer_id": 0
} |
General solution of $yy′′+2y'''=0$ How do you derive the general solution of $yy''+2y'''= 0$?
Please help me to derive the solution. Thanks a lot.
| NB: This is a general way to reduce the order of the equation. This doesn't solve your question as is, but rather gives you a starting point.
In this equation, the variable $t$ does not appear, hence one can substitute: $$y'=p(y), \hspace{7pt}y''=p'(y)\cdot y'=p'p,\hspace{7pt}y'''=p''\cdot y'\cdot p+p'\cdot p'\cdot y'=p''p^2+(p')^2p$$
(where $y'=\frac{dy}{dt}$, while $p'=\frac{dp}{dy}$).
Then your equation turns into
$$yp'p+2p''p^2+2(p')^2p=0 \hspace{7pt}\Rightarrow\hspace{7pt} yp'+2p''p+2(p')^2=0$$
Where the derivatives are w.r.t $y$ - this gives you an equation of the second order.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/371369",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
General solution of a differential equation $x''+{a^2}x+b^2x^2=0$ How do you derive the general solution of this equation: $$x''+{a^2}x+b^2x^2=0$$ where $a$ and $b$ are constants.
Please help me to derive the solution. Thanks a lot.
| First make the substitution:
$x=-\,{\frac {6y}{{b}^{2}}}-\,{\frac {{a}^{2}}{{2b}^{2}}}$
This will give you the differential equation:
$y^{''} =6y^{2}-\frac{a^{4}}{24}$ which is to be compared with the second order differential equation for the Weierstrass elliptic function ${\wp}(t-\tau_{0},g_2,g_3)$:
${\wp}^{''} =6{\wp}^{2}-\frac{g_{2}}{2}$
Where $g_{2}$, $g_3$ are known as elliptic invariants. It then follows that the solution is given by:
$y={\wp}(t-\tau_{0},\frac{a^{4}}{12},g_3)$
$x=\,-{\frac {6}{{b}^{2}}}{\wp}(t-\tau_{0},\frac{a^{4}}{12},g_3)-\,{\frac {{a}^{2}}{{2b}^{2}}}$
Where $\tau_0$ and $g_3$ are constants determined by the initial conditions and $t$ is the function variable.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/371452",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Prove that a tensor field is of type (1,2) Let $J\in\operatorname{end}(TM)=\Gamma(TM\otimes T^*M)$ with $J^2=-\operatorname{id}$ and for $X,Y\in TM$, let
$$N(X,Y):=[JX,JY]-J\big([JX,Y]-[X,JY]\big)-[X,Y].$$
Prove that $N$ is a tensor field of type (1,2).
Since I heard $N$ is the Nijenhuis tensor and saw its component formula, I have tried proving it by starting with the component formula for $N^k_{ij}$, showing that for $X=X^i\frac{\partial}{\partial x^i}$ and $Y=Y^j\frac{\partial}{\partial x^j}$
$$ N(X,Y)=N^k_{ij}\frac{\partial}{\partial x^k}\otimes\mathrm{d}x^i\otimes\mathrm{d}x^j\left(X,Y\right).$$
But I am only able to show
\begin{align}
N(X,Y)
&=\underbrace{\big(J^m_iX^i\frac{\partial}{\partial x^i}(J^k_jY^j)-J^m_jY^j\frac{\partial}{\partial x^m}(J^k_iX^i)\big)\frac{\partial}{\partial x^k}}_{=[JX,JY]}-\underbrace{J^k_m\left(\frac{\partial}{\partial x^i}J^m_j-\frac{\partial}{\partial x^j}J^m_i\right)X^iY^j\frac{\partial}{\partial x^k}}_{=?}.
\end{align}
Anyway, I assume this way to be pretty dumb (although I would like to know if it is correct so far and how to go on from there), so I would like to know if there is a more elegant (and for general (k,l)-tensor fields better) way to prove the type of $N$ than writing it locally.
| Being a tensor field means that you have to show $C^{\infty}(M)$-linearity. A (1,2)-tensor just means that it eats two vector fields and spits out another vector field, which is obvious from the definition.
So look at $N(X,fY)$ for $f \in C^{\infty}(M)$ and use the properties of the Lie bracket to show $N(X,fY)=fN(X,Y)$.
Note: $C^{\infty}$-linearity implies that $N(X,Y)|_p$ depends only on $X_p$ and $Y_p$, in contrast to $\nabla_YX|_p$ in the upper entry.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/371522",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
If f is integrable on $[a,b]$, prove $f^{q}$ is integrable on $[a,b]$ Let $q$ be a rational number. Suppose that $a < b$, $0 < c < d$, and that $f : [a,b] \to [c,d]$. If $f$ is integrable on $[a,b]$, then prove that $f^{q}$ is integrable on $[a,b]$.
I think that the proof involves the binomial theorem. My book has a proof showing that $f^2$ is integrable if $f$ is integrable, which I assume can be easily extended to $f^n$ for any integer $n$. I'm not sure exactly how to go about it from there though.
| Hints:
*If $f$ is Riemann integrable and $g$ is continuous, then the composition $g \circ f$ -- when this makes sense -- is Riemann integrable (Theorem 8.18 of these notes).
*If $f: [a,b] \rightarrow [c,d]$ is continuous and monotone, then the inverse function $f^{-1}$ exists and is continuous (Theorem 5.39 of loc. cit.).
Alternately, as with so many of these kinds of results, this is an immediate corollary of Lebesgue's Criterion for Riemann Integrability (Theorem 8.28 of loc. cit.). In fact this latter route takes care of irrational exponents as well.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/371654",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Confusion about Banach Matchbox problem While trying to solve the Banach matchbox problem, I am getting a wrong answer. I don't understand what mistake I made. Please help me understand.
The problem statement is presented below (Source:Here)
Suppose a mathematician carries two matchboxes at all times: one in his left pocket and one in his right. Each time he needs a match, he is equally likely to take it from either pocket. Suppose he reaches into his pocket and discovers that the box picked is empty. If it is assumed that each of the matchboxes originally contained $N$ matches, what is the probability that there are exactly $k$ matches in the other box?
My solution goes like this. Let's say pocket $1$ becomes empty. Now, we want to find the probability that pocket $2$ contains $k$ matches (or that $n-k$ matches have been removed from it). I also note that the Wikipedia solution does not consider the $1^{st}$ equality -- maybe that's where I am wrong?
Let
$p = P[k\ \text{matches left in pocket}\ 2\ |\ \text{pocket 1 found empty}]$
= $\frac{P[k\ \text{matches left in pocket}\ 2\ \text{and pocket 1 found empty}]}{\sum_{i=0}^{n}P[i\ \text{matches left in pocket}\ 2\ \text{and pocket 1 found empty}]}$
= $\frac{\binom{2n-k}{n} \cdot \frac{1}{2^{2n-k}}}{\sum_{i=0}^{n}\binom{2n-i}{n} \cdot \frac{1}{2^{2n-i}}}$
In my $2^{nd}$ equality, I have written the probability of removing all matches from pocket $1$ and $n-k$ from pocket $2$ using Bernoulli trials with probability $\frac{1}{2}$. The denominator is a running sum over a similar quantity.
Now, my answer to the original problem is $2p$ (the role of pockets could be switched). I am unable to see what's wrong with my approach. Please explain.
Thanks
| Apart from doubling $p$ at the end, your answer is correct: your denominator is actually equal to $1$. It can be rewritten as
$$\frac1{2^{2n}}\sum_{i=0}^n\binom{2n-i}n2^i=\frac1{2^{2n}}\sum_{m=n}^{2n}\binom{m}n2^{2n-m}=\frac1{2^{2n}}\sum_{i=0}^n\binom{n+i}n2^{n-i}\;,$$
and
$$\begin{align*}
\sum_{i=0}^n\binom{n+i}n2^{n-i}&=\sum_{i=0}^n\binom{n+i}n\sum_{k=0}^{n-i}\binom{n-i}k\\\\
&=\sum_{i=0}^n\sum_{k=0}^{n-i}\binom{n+i}i\binom{n-i}k\\\\
&=\sum_{k=0}^n\sum_{i=0}^{n-k}\binom{n+i}n\binom{n-i}k\\\\
&\overset{*}=\sum_{k=0}^n\binom{2n+1}{n+k+1}\\\\
&=\sum_{k=n+1}^{2n+1}\binom{2n+1}k\\\\
&=\frac12\sum_{k=0}^{2n+1}\binom{2n+1}k\\\\
&=2^{2n}\;,
\end{align*}$$
where the starred step invokes identity $(5.26)$ of Graham, Knuth, & Patashnik, Concrete Mathematics. Thus, your result can be simplified to
$$p=\binom{2n-k}n\left(\frac12\right)^{2n-k}\;.$$
And you don’t want to multiply this by $2$: no matter which pocket empties first, this is the probability that the other pocket still contains $k$ matches.
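Both facts (the denominator summing to $1$ and the simplified formula) are easy to check numerically; here is a throwaway Python sketch of the formula against a direct simulation:

```python
import math, random

n = 6
# the denominator from the question sums to 1
print(sum(math.comb(2*n - i, n) / 2 ** (2*n - i) for i in range(n + 1)))  # 1.0

def p_formula(n, k):
    return math.comb(2*n - k, n) / 2 ** (2*n - k)

def p_sim(n, k, trials=200_000):
    hits = 0
    for _ in range(trials):
        left, right = n, n
        while True:
            if random.random() < 0.5:
                if left == 0:                  # reached into an empty pocket
                    hits += (right == k); break
                left -= 1
            else:
                if right == 0:
                    hits += (left == k); break
                right -= 1
    return hits / trials

print(p_formula(n, 2), p_sim(n, 2))   # close agreement, with no factor of 2
```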
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/371728",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 2,
"answer_id": 0
} |
Does this $3\times 3$ matrix exist? Does a real $3\times 3$ matrix $A$ that satisfies the conditions $\operatorname{tr}(A)=0$ and $A^2+A^T=I$ ($I$ is the identity matrix) exist?
Thank you for your help.
| [Many thanks to user1551 for this contribution.] First, let us show that the eigenvalues of $A$ must be real. The equation $A^2+A^T=I$ implies that
$$A^T=I-A^2 \quad\text{and}\quad (A^T)^2+A=I.$$ Substituting the first equation into the second yields
$$I-2A^2+A^4+A=I \quad\Longrightarrow\quad A^4-2A^2+A=A(A-I)(A^2+A-I)=0.$$
The eigenvalues must therefore satisfy $\lambda(\lambda-1)(\lambda^2+\lambda-1)=0$, which has the roots $\{0,1,(-1+\sqrt{5})/2,(-1-\sqrt{5})/2\}$, all real. In fact, we will show that only the last two are possible.
Let $A=QUQ^T$ be the Schur decomposition of $A$, where $Q$ is unitary and $U$ is upper triangular. Because the eigenvalues of $A$ are real, both $Q$ and $U$ are real as well. Then
$$A^2+A^T=I\quad\Longrightarrow QUQ^TQUQ^T+QU^TQ^T=I\quad\Longrightarrow\quad U^2+U^T=I.$$
$U^2$ is upper triangular, and $U^T$ is lower triangular. The only way it is possible to satisfy the equation is if $U$ is diagonal.
(Alternatively, $A$ commutes with $A^T$ because $A^T=I-A^2$. Therefore $A$ is a normal matrix. Since we have shown that all eigenvalues of $A$ are real, it follows that $A$ is orthogonally diagonalisable as $QUQ^T$ for some real orthogonal matrix $Q$ and real diagonal matrix $U$.)
If $U$ is diagonal, then $A$ was symmetric; and the diagonal elements of $U$ are the eigenvalues of $A$. Each must separately satisfy $\lambda^2+\lambda-1=0$. So $\lambda=\frac{-1\pm\sqrt{5}}{2}$. But there are three eigenvalues, so one of them is repeated. There is no way for the resulting sum to be zero.
Since the sum of the eigenvalues is equal to $\mathop{\textrm{Tr}}(A)$, this is a contradiction.
EDIT: I'm going to add that it does not matter that the matrix is $3\times 3$. There is no square matrix of any size that satisfies the conditions put forth. There is no way to choose $n>0$ values from the set $\{(-1+\sqrt{5})/2,(-1-\sqrt{5})/2\}$ that have a zero sum.
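A tiny numeric check that no three choices from the two roots can sum to zero (a throwaway Python sketch):

```python
from math import sqrt

a, b = (-1 + sqrt(5)) / 2, (-1 - sqrt(5)) / 2
print([k * a + (3 - k) * b for k in range(4)])  # no entry is 0
```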
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/371788",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "12",
"answer_count": 2,
"answer_id": 1
} |
Optimizing the area of a rectangle A rectangular field is bounded on one side by a river and on the other three sides by a fence. Additional fencing is used to divide the field into three smaller rectangles, each of equal area. 1080 feet of fencing is required. I want to find the dimensions of the large rectangle that will maximize the area.
I had the following equations and I was wondering if they are correct:
I let $Y$ denote the length of the rectangle. Then $Y=3y$. If $x$ represents the width of the three smaller rectangles, then I get the following:
$$ 4x+3y = 1080,~~~\text{Area} = 3xy.$$
| @Gorg, you are on the right track. You can solve this problem using either small $y$ or large $Y$. Your equations are set up correctly with small $y$, and the answer I get, if you want to compare with what you get, is $x=135$ and $y=180$. Good job :)
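As a quick numerical check (a throwaway sketch): eliminating $y$ via $y=(1080-4x)/3$ gives area $3xy = x(1080-4x)$, maximized at $x=135$.

```python
# Maximize A(x) = x(1080 - 4x) over a fine grid of feasible widths.
xs = [i * 0.5 for i in range(541)]            # 0, 0.5, ..., 270
best = max(xs, key=lambda x: x * (1080 - 4 * x))
print(best, (1080 - 4 * best) / 3)            # 135.0 180.0
```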
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/371852",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
What should be the intuition when working with compactness? I have a question that may be regarded by many as duplicate since there's a similar one at MathOverflow.
*In $\mathbb{R}^n$ the compact sets are those that are closed and bounded; however, the guy who answered this question and had his answer accepted says that compactness is some analogue of finiteness. In my intuitive view of finiteness, boundedness alone would suffice to say that a certain subset of $\mathbb{R}^n$ is in some sense "finite". On the other hand, there's the other definition of compactness (in terms of covers), which is the one I really need to work with, and I cannot see how that definition implies this intuition on finiteness.
*To prove a set is compact I know one must show that for every open cover there's a finite subcover; the problem is that I can't see intuitively how one could show this for every cover. Also, when trying to disprove compactness, the books I've read start presenting strange covers that I would have never thought about.
I think my real problem is that I didn't yet get the intuition on compactness. So, what intuition should we have about compact sets in general and how should we really put this definition to use?
Can someone provide some reference that shows how to understand the process of proving (and disproving) compactness?
| You may read various descriptions and consequences of compactness here. But be aware that compactness is a very subtle finiteness concept. The definitive codification of this concept is a fundamental achievement of $20^{\,\rm th}$ century mathematics.
On the intuitive level, a space is a large set $X$ where some notion of nearness or neighborhood is established. A space $X$ is compact, if you cannot slip away within $X$ without being caught. To be a little more precise: Assume that for each point $x\in X$ a guard placed at $x$ could survey a certain, maybe small, neighborhood of $x$. If $X$ is compact then you can do with finitely many (suitably chosen) guards.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/371928",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "133",
"answer_count": 15,
"answer_id": 0
} |
Show inequality of integrals (cauchy-schwarz??) $f:[0,1]\to\mathbb{C}$ continuous and differentiable and $f(0)=f(1)=0$.
Show that
$$
\left |\int_{0}^{1}f(x)dx \right |^2\leq\frac{1}{12}\int_{0}^{1} \left |f'(x)\right|^2dx
$$
Well I know that
$$
\left |\int f(x)\cdot g(x)\ dx \right|^2\leq \int \left |f(x) \right|^2dx\ \cdot \int \left |g(x) \right |^2dx
$$
and I think I should use it, so I guess the term $g(x)$ has to produce the $\frac{1}{12}$ somehow, but I have no idea how to choose $g(x)$ so that I can show the inequality.
Any advice?
| Integrate by parts to get
$$
\int_0^1 f(x)\,dx = -\int_0^1 (x-1/2)f'(x)\,dx,
$$
and then use Cauchy-Schwarz:
$$
\begin{align*}
\left|\int_0^1 (x-1/2)f'(x)\,dx\right|^2 & \leq \int_0^1(x-1/2)^2\,dx\int_0^1|f'(x)|^2\,dx \\
& = \frac{1}{12}\int_0^1|f'(x)|^2\,dx
\end{align*}
$$
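Incidentally, $f(x)=x(1-x)$ attains equality, since $f'(x)=1-2x$ is proportional to $x-\tfrac12$ and $f(0)=f(1)=0$; a quick numeric check, assuming SciPy is available:

```python
from scipy.integrate import quad

f  = lambda x: x * (1 - x)
df = lambda x: 1 - 2 * x

lhs = quad(f, 0, 1)[0] ** 2                      # |int f|^2 = (1/6)^2
rhs = quad(lambda x: df(x) ** 2, 0, 1)[0] / 12   # (1/12) int |f'|^2 = (1/12)(1/3)
print(lhs, rhs)                                  # both 0.02777...
```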
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/371965",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9",
"answer_count": 1,
"answer_id": 0
} |
boundary map in the (M-V) sequence Let $K\subset S^3$ be a knot, $N(K)$ be a tubular neighborhood of $K$ in $S^3$, $M_K$ to be the exterior of $K$ in $S^3$, i.e., $M_K=S^3-\text{interior of }{N(K)}$.
Now, it is clear that $\partial M_K=\partial N(K)=T^2$, the two dimensional torus, and when applying the (M-V) sequence to the triad $(S^3, N(K)\cup M_K, \partial M_K)$ to calculate the homology groups $H_i(M_K, \mathbb{Z})$,
$$H_3(\partial M_K)\to H_3(M_K)\oplus H_3(N(K))\to H_3(S^3)\overset{\partial_3}{\to}H_2(\partial M_K)\to H_2(M_K)\oplus H_2(N(K))\to H_2(S^3)\overset{\partial_2}{\to}\cdots$$
I have to figure out what the boundary map $\mathbb{Z}=H_3(S^3)\overset{\partial_3}{\to}H_2(\partial M_K)=\mathbb{Z}$ looks like.
I think it should be the $\times 1$ map, but I do not know how to deduce it geometrically, so my question is:
Q1. Is it true that in the above (M-V) sequence $\mathbb{Z}=H_3(S^3)\overset{\partial_3}{\to}H_2(\partial M_K)=\mathbb{Z}$ is the $\times 1$ map? Why?
Q2. If the above $K$ is a link with two components, then $\partial M_K=T^2\sqcup T^2$; doing the same as above, what does
$\mathbb{Z}=H_3(S^3)\overset{\partial_3}{\to}H_2(\partial M_K)=\mathbb{Z}\oplus \mathbb{Z}$ look like now?
| The generator of $H_3(S^3)$ can be given by taking the closures of $M_K$ and $N(K)$ and triangulating them so the triangulations agree on the boundary, then taking the union of all the simplices as your cycle. The boundary map takes this cycle and sends it to the common boundary of its two chunks, which is exactly the torus, so the map is indeed $\times 1$. The map in the second question should be the diagonal map $(\times 1,\times 1)$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/372030",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
} |
disconnected/connected graphs Determine whether the statements below are true or false. If the statement is true, then
prove it; and if it is false, give a counterexample.
(a) Every disconnected graph has a vertex of degree 0.
(b) A graph is connected if and only if some vertex is connected to all other vertices.
Please correct me if I'm wrong. (a) is false, as we could have 2 triangles not connected with each other. The graph would be disconnected and all vertices would have degree 2.
(b) confuses me a bit. Since this is double implication, for the statement to hold, it must be:
A graph is connected if some vertex is connected to all other vertices. (true)
AND
Some vertex is connected to all other vertices if the graph is connected.
We could have a square. In this case the graph is connected but no vertex is connected to every other vertex. Therefore this part is false. Since this part is false - the whole statement must also be false.
Is this correct?
| So that this doesn't remain unanswered: Yes, all of your reasoning is correct.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/372085",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Application of quadratic functions to measurement and graphing thanks for any help!
Q1. Find the equation of the surface area function of a cylindrical grain silo. The input variable is the radius (r). (the equation is to be graphed using a graphics calculator in the following question)
Height (h) = 5 meters
Radius (r) - unknown
Surface Area (S)- unknown
Pi (p) = 3.142
So far I have:
S = 2pr^2 + 2prh (surface area formula)
S = 2p(r^2+5r)
S = 2pr(r+5)
S= 6.284r(r+5)
I am not sure if this is an equation I can use to answer Q2: "Use the graphics calculator emulator to draw the equation obtained in Q1."
I have also come up with:
2pr^2 + 2prh + 0 (in the quadratic expression ax^2 + bx + c=0)
When I substitute values for r I get the same surface area for both equations but am not sure if I am on the right track!
Thank you for any help!
| (SA = Surface Area)
*SA (silo) = SA (cylinder) + $\frac{1}{2}$ SA (sphere)
*SA (cylinder) = $2\pi r h $
*SA (sphere) = $4\pi r^2$
So we have,
SA (silo) = SA (cylinder) + $\frac{1}{2}$ SA (sphere) = $2\pi r h + \frac{1}{2}4\pi r^2 = 2\pi r h + 2 \pi r^2 = 2 ~\pi~ r(h + r) = 2 ~\pi~ r(5 + r)$
Plot: [graph of $S = 2\pi r(5+r)$ omitted]
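For Q2, any plotting tool will do; here is one possible Matplotlib sketch (our own stand-in for the graphics calculator):

```python
import numpy as np
import matplotlib.pyplot as plt

r = np.linspace(0, 10, 200)
S = 2 * np.pi * r * (r + 5)        # surface area with h = 5

plt.plot(r, S)
plt.xlabel("radius r (m)")
plt.ylabel("surface area S (m^2)")
plt.title("S = 2 pi r (r + 5)")
plt.show()
```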
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/372149",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Why does the theta function decay exponentially as $x \rightarrow \infty$? I'm trying to understand the proof of the functional equation for the L-series of primitive, even Dirichlet characters.
For even, primitive characters we have $$\theta_\chi(x):=\sum_{n\in \mathbb{Z}} \chi(n)\exp\left(\frac{-\pi}{q}n^2x^2\right).$$
In my lecture notes it says
The theta function decays exponentially as $x \rightarrow \infty$ [$\theta_\chi(x)=O(e^{-\pi/qx^2})$]
which I assume means $\theta_\chi(x)=O\left(\exp\left(\frac{-\pi}{q}x^2\right)\right)$ since I want to deduce that the integral
$$\int_1^\infty \theta_\chi(x)x^s\frac{dx}{x} $$
converges.
I "feel" like this is vaguely because when $n$ gets big, the $\exp\left(\frac{-\pi}{q}n^2x^2\right)$ is so small that is contributes basically nothing to the sum. But I don't really think this is enough, since
$$\int_{-\infty}^\infty \exp\left(\frac{-\pi}{q}x^2\xi^2\right)d\xi \leq \sum_{n \in \mathbb{Z}}\exp\left(\frac{-\pi}{q}n^2x^2\right) \leq 2+ \int_{-\infty}^\infty \exp\left(\frac{-\pi}{q}x^2\xi^2\right)d\xi$$
so
$$\frac{\sqrt{q}}{|x|} \leq \sum_{n \in \mathbb{Z}}\exp\left(\frac{-\pi}{q}n^2x^2\right) \leq 2+\frac{\sqrt{q}}{|x|}$$
which is not strong enough. So how do I prove that
The theta function decays exponentially as $x \rightarrow \infty$ [$\theta_\chi(x)=O(e^{-\pi/qx^2})$]
| So, using Sanchez's hint, I think I have
$$\begin{array}{rll}\theta_\chi(x) & \leq & 2\exp\left(\frac{-\pi}{q}x^2\right)+\sum_{|n|\geq 2}\exp\left( \frac{-\pi}{q}n^2x^2\right) \\
& \leq & 2\exp\left(\frac{-\pi}{q}x^2\right)+\sum_{|n|\geq 4}\exp\left( \frac{-\pi}{q}|n| x^2\right) \\
\text{(expanding the sum as a geometric series)}
& \leq &2 \exp\left(\frac{-\pi}{q}x^2\right)+ 2\exp \left(\frac{-\pi}{q}4x^2\right)\left( 1-\exp\left(\frac{-\pi}{q} x^2\right)\right)^{-1}\end{array}$$
which does it.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/372279",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
inequality involving complex exponential Is it true that
$$|e^{ix}-e^{iy}|\leq |x-y|$$ for $x,y\in\mathbb{R}$? I can't figure it out. I tried looking at the series for the exponential but it did not help.
Could someone offer a hint?
| One way is to use
$$
|e^{ix} - e^{iy}| = \left|\int_x^ye^{it}\,dt\right|\leq \int_x^y\,dt = y-x,
$$
assuming $y > x$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/372337",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "21",
"answer_count": 6,
"answer_id": 1
} |
Integrate $e^{f(x)}$ Just wondering how I can integrate $\displaystyle xe^{ \large {-x^2/(2\sigma^2)}}$
I tried using substitution with $U(x) = x^2$, but I kept getting an $x^2$ in the denominator, which is incorrect.
I understand that $\displaystyle \int e^{f(x)}\,dx = \frac{e^{f(x)}}{f'(x)}$ if $f(x)$ is linear; however, how do we handle this situation when $f(x)$ is not linear?
Step by step answer will be awesome!!!! thanks :D
M
| A close relative of your substitution, namely $u=-\dfrac{x^2}{2\sigma^2}$, works.
In "differential" notation, we get $du=-\frac{1}{\sigma^2} x\,dx$, so $x\,dx=-\sigma^2\,du$.
Remark: Or more informally, let's guess that the answer is $e^{-x^2/(2\sigma^2)}$. Differentiate, using the Chain Rule. We get $-\frac{x}{\sigma^2}e^{-x^2/(2\sigma^2)}$, so wrong guess. Too bad.
But we got close, and there is an easy fix by multiplying by a suitable constant to get rid of the $-\frac{1}{\sigma^2}$ in front of the derivative of our wrong guess. The indefinite integral is $-\sigma^2e^{-x^2/(2\sigma^2)}+C$.
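A symbolic confirmation of the antiderivative, as a sketch assuming SymPy is available:

```python
import sympy as sp

x, sigma = sp.symbols('x sigma', positive=True)
F = -sigma**2 * sp.exp(-x**2 / (2 * sigma**2))   # the claimed antiderivative
integrand = x * sp.exp(-x**2 / (2 * sigma**2))
print(sp.simplify(sp.diff(F, x) - integrand))    # 0, so F' matches the integrand
```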
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/372405",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
What will be the units digit of $7777^{8888}$? What will be the units digit of $7777$ raised to the power of $8888$?
Can someone do the math, explaining how to find the units digit of $7777$ raised to the power of $8888$?
| The units digit of a number is the same as the number mod 10,
so we just need to compute $7777^{8888} \pmod {10}$.
First, $$7777^{8888} \equiv 7^{8888} \pmod {10}$$ and secondly $$7^{8888} \equiv 7^{0} \equiv 1 \pmod {10}$$ by Euler's totient theorem (since $\varphi(10)=4$ and $4 \mid 8888$),
so there's a quick way to see that the last digit is a 1.
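Python's three-argument pow confirms this instantly:

```python
print(pow(7777, 8888, 10))   # 1
print(pow(7, 8888, 10))      # 1, since 7777 = 7 (mod 10) and 7^4 = 1 (mod 10)
```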
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/372518",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 4,
"answer_id": 2
} |
Mean Value Property of Harmonic Function on a Square A friend of mine presented me the following problem a couple days ago:
Let $S$ in $\mathbb{R}^2$ be a square and $u$ a continuous harmonic function on the closure of $S$. Show that the average of $u$ over the perimeter of $S$ is equal to the average of $u$ over the union of two diagonals.
I recall that the 'standard' mean value property of harmonic functions is proven over a sphere using greens identities. I've given this one some thought but I haven't come up with any ideas of how to proceed. It's driving me crazy! Maybe it has something to do with the triangles resulting from a diagonal? Any ideas?
| Consider the isosceles right triangles formed from two sides of the square and a diagonal.
Let's consider the first such triangle. Call the sides $L_1, L_2$ and $H_1$ for legs and hypotenuse; for sake of convenience, our square is the unit square, so we give the triangle $T_1$, legs $L_1$ from $(0,0)$ to $(1,0)$ and $L_2$ from $(1,0)$ to $(1,1)$; the hypotenuse $H_1$ clearly runs from $(0,0)$ to $(1,1)$.
Now consider the function $\phi(x,y) = |x-y|$, and we're going to use integration by parts.
$$ \int_{\partial T_1} u \phi_{\nu} = \int_{\partial T_1} \phi u_{\nu} $$
since $u$ and $\phi$ are both harmonic in the interior of the triangle $T$. Now,
$$ \int_{\partial T_1} u \phi_{\nu} = - \sqrt{2} \int_{H_1} u + \int_{L_1} u + \int_{L_2} u = \int_{L_1} x u_\nu + \int_{L_2} (1-y) u_\nu = \int_{\partial T_1} \phi u_\nu $$
Perform the same construction on the other triangle $T_2$ with hypotenuse $H_1$, but with legs $L_3$ from (0,0) to (0,1) and $L_4$ from (0,1) to (1,1). We get
$$ \int_{\partial T_2} u \phi_{\nu} = - \sqrt{2} \int_{H_1} u + \int_{L_3} u + \int_{L_4} u = \int_{L_3} y u_\nu + \int_{L_4} (1-x) u_\nu = \int_{\partial T_2} \phi u_\nu $$
Now consider the function $\psi(x,y) = |x+y-1|$ on the triangle $T_3$ formed by $L_1$ as above, $L_3$ from $(0,0)$ to $(0,1)$, and $H_2$ from $(1,0)$ to $(0,1)$.
$$ \int_{\partial T_3} u \psi_{\nu} = - \sqrt{2} \int_{H_2} u + \int_{L_3} u + \int_{L_1} u = \int_{L_3} (1-y) u_\nu + \int_{L_1} (1-x) u_\nu = \int_{\partial T_3} \psi u_\nu $$
Finally, on the triangle $T_4$ formed by $L_4$, $L_2$, and $H_2$ we have
$$ \int_{\partial T_4} u \psi_{\nu} = - \sqrt{2} \int_{H_2} u + \int_{L_2} u + \int_{L_4} u = \int_{L_2} y u_\nu + \int_{L_4} x u_\nu = \int_{\partial T_4} \psi u_\nu $$
Summing all these terms together, we get
$$ -2 \sqrt{2} \int_{H_1 \cup H_2} u + 2 \int_{\partial S} u = \int_{\partial S} u_\nu$$
Since $u$ is harmonic, this must equal 0, which tells us that the average over the diagonals is the average over the perimeter.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/372595",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10",
"answer_count": 2,
"answer_id": 1
} |
Continuity of one partial derivative implies differentiability Let $f: \mathbb{R}^2 \rightarrow \mathbb{R}$ be a function such that the partial derivatives with respect to $x$ and $y$ exist and one of them is continuous. Prove that $f$ is differentiable.
| In short: the problem reduces to the easy case when $f$ depends solely on one variable. See the greyish box below for the formula that does the reduction.
It suffices to show that $f$ is differentiable at $(0,0)$ with the additional assumption that $\frac{\partial f}{\partial x}(0,0)=\frac{\partial f}{\partial y}(0,0)=0$. First pass from $(x_0,y_0)$ to $(0,0)$ by considering the function $g(x,y)=f(x+x_0,y+y_0)$. Then work on $h(x,y)=g(x,y)-x\frac{\partial f}{\partial x}(0,0)-y\frac{\partial f}{\partial y}(0,0)$.
So let us assume that $\frac{\partial f}{\partial x}$ exists and is continuous on $\mathbb{R}^2$ (only continuity in an open neighborhood of $(0,0)$ is really needed for the local argument), that $\frac{\partial f}{\partial y}$ exists at $(0,0)$, and that $\frac{\partial f}{\partial x}(0,0)=\frac{\partial f}{\partial y}(0,0)=0$. We need to show that $f$ is differentiable at $(0,0)$. Note that the derivative must be $0$ given our assumptions.
Now observe that for every $x,y$, we have, by the fundamental theorem of calculus:
$$
f(x,y)=f(0,y)+\int_0^x \frac{\partial f}{\partial x}(s,y)ds.
$$
I leave it to you to check that $(x,y)\longmapsto f(0,y)$ is differentiable at $(0,0)$ with zero derivative, using only $\frac{\partial f}{\partial y}(0,0)=0$. For the other term, just note that it is $0$ at $(0,0)$ and that for every $0<\sqrt{x^2+y^2}\leq r$
$$
\frac{1}{\sqrt{x^2+y^2}}\Big|\int_0^x \frac{\partial f}{\partial x}(s,y)ds\Big|\leq \frac{|x|}{\sqrt{x^2+y^2}}\sup_{\sqrt{s^2+t^2}\leq r}\Big| \frac{\partial f}{\partial x}(s,t)\Big|\leq \sup_{\sqrt{s^2+t^2}\leq r}\Big| \frac{\partial f}{\partial x}(s,t)\Big|.
$$
By continuity of $\frac{\partial f}{\partial x}$ at $(0,0)$, the right-hand side tends to $0$ as $r\to 0$, hence (taking $r=\sqrt{x^2+y^2}$) as $(x,y)$ tends to $(0,0)$. This proves that the function $(x,y)\longmapsto \int_0^x \frac{\partial f}{\partial x}(s,y)ds$ is differentiable at $(0,0)$ with zero derivative. And this concludes the proof.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/372650",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9",
"answer_count": 2,
"answer_id": 0
} |
Modular Exponentiation Give numbers $x,y,z$ such that $y \equiv z \pmod{5}$ but $x^y \not\equiv x^z \pmod{5}$
I'm just learning modular arithmetic and this question has me puzzled. Any help with an explanation would be great!
| Take $x=2$, $y=3$, $z=8$. Then $x^y \bmod 5 = 3$ but $x^z \bmod 5 = 1$.
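One can confirm this at the keyboard (a check I'm adding, not part of the original answer):

```python
x, y, z = 2, 3, 8
assert y % 5 == z % 5              # the exponents agree mod 5 ...
print(pow(x, y, 5), pow(x, z, 5))  # ... yet the powers are 3 and 1 mod 5
```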
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/372743",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
} |
If $x,y$ are positive, then $\frac1x+\frac1y\ge \frac4{x+y}$ For $x$, $y$ $\in R^+$, prove that $$\frac{1}{x}+\frac{1}{y}\ge\frac{4}{x+y}$$
Could someone please help me with this inequality problem? I have tried to use the AM-GM inequality but I must be doing something wrong. I think it can be solved with the AM-GM but I can’t solve it. Thanks in advance for your help.
| Here is a solution with AM-GM:
$$\frac{1}{x}+\frac{1}{y} \geq \frac{2}{\sqrt{xy}}$$
$$x+y \geq 2 \sqrt{xy} \Rightarrow \frac{1}{\sqrt{xy}} \geq \frac{2}{x+y}\Rightarrow \frac{2}{\sqrt{xy}} \geq \frac{4}{x+y}$$
Also you can note that
$$(x+y)(\frac{1}{x}+\frac{1}{y}) \geq 4$$
is just Cauchy-Schwarz.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/372853",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 2
} |
Find a lower bound Let $M$ be an $N\times N$ symmetric real matrix, and let $J$ be a permutation of the integers from 1 to $N$, with the following properties:
*
*$J:\{1,...,N\}\rightarrow\{1,...,N\}$ is one-to-one.
*$J$ is its own inverse: $J(J(i))=i$.
*$J$ has at most one fixed point, that is, there's at most one value of $i$ such that $J(i)=i$. Explicitly, if $N$ is odd, there is exactly one fixed point, and if $N$ is even, there are none.
A permutation with these properties establishes a pairing between the integers from 1 to $N$, where $i$ is paired with $J(i)$ (except if $N$ is odd, in which case the fixed point is not paired). Therefore we will call $J$ a pairing. (*)
Given a matrix $M$, we go through all possible pairings $J$ to find the maximum of
$$\frac{\sum_{ij}M_{ij}M_{J(i)J(j)}}{\sum_{ij}M_{ij}^2}.$$
This way we define a function:
$$F(M)=\max_J\frac{\sum_{ij}M_{ij}M_{J(i)J(j)}}{\sum_{ij}M_{ij}^2}.$$
The question is: Is there a lower bound to $F(M)$, over all symmetric real $N\times N$ matrices $M$, with the constraint $\sum_{ij}M_{ij}^2 > 0$ to avoid singularities?
I suspect that $F(M)\geq 0$ (**), but I don't have a proof. Perhaps there is a tighter lower bound.
(*) I am asking here if there is a standard name for this type of permutation.
(**) For $N=2$ this is false, see the answer by @O.L. What happens for $N>2$?
| Not really an answer, rather a reformulation and an observation.
UPD: The answer is in the addendum
Any such permutation can be represented by an $N\times N$ matrix $S$ with one $1$ and $N-1$ zeros in each row and each column. Note that $S^{-1}=S^T$, and the number of fixed points coincides with $\mathrm{Tr}\,S$. You are also asking for the permutation $J$ to be an involution ($J^{2}=1$), which means that $J=J^T$.
Given a real symmetric matrix $M$ and an involutive permutation $J$, the expression
$$ E_J(M)=\frac{\sum_{i,j}M_{ij}M_{J(i)J(j)}}{\sum_{i,j}M_{ij}^2}$$
can be rewritten as
$$ E_J(M)=\frac{\mathrm{Tr}\,MJMJ}{\mathrm{Tr}\,M^2}=\frac{\mathrm{Tr}\left(MJ\right)^2}{\mathrm{Tr}\,M^2}.$$
Let us now consider the case $N=2$ where there is only one permutation $J=\left(\begin{array}{cc} 0 & 1 \\ 1 & 0\end{array}\right)$ with necessary properties. Now taking for example $M=\left(\begin{array}{cc} 1 & 0 \\ 0 & -1\end{array}\right)$ we find $\max_J E_J(M)=-1$. This seems to contradict your hypothesis $F(M)\geq 0$. Similar examples can be constructed for any $N$.
Important addendum. I think the lower bound for $F(M)$ is precisely $-1$. Let us prove this for even $N=2n$.
Since $J^2=1$, the eigenvalues of $J$ can only be $\pm1$. Further, since the corresponding permutation has no fixed points, $J$ can be brought to the form $J=O^T\left(\begin{array}{cc} \mathbf{1}_n & 0 \\ 0 & -\mathbf{1}_n\end{array}\right) O$ by a real orthogonal transformation. Let us now compute the quantity
\begin{align} \mathrm{Tr}\,MJMJ=\mathrm{Tr}\left\{MO^T\left(\begin{array}{cc} \mathbf{1}_n & 0 \\ 0 & -\mathbf{1}_n\end{array}\right)OMO^T\left(\begin{array}{cc} \mathbf{1}_n & 0 \\ 0 & -\mathbf{1}_n\end{array}\right)O\right\}=\\=
\mathrm{Tr}\left\{\left(\begin{array}{cc} A & B \\ B^T & D\end{array}\right)\left(\begin{array}{cc} \mathbf{1}_n & 0 \\ 0 & -\mathbf{1}_n\end{array}\right)\left(\begin{array}{cc} A & B \\ B^T & D\end{array}\right)\left(\begin{array}{cc} \mathbf{1}_n & 0 \\ 0 & -\mathbf{1}_n\end{array}\right)\right\},\tag{1}\end{align}
where $\left(\begin{array}{cc} A & B \\ B^T & D\end{array}\right)$ denotes real symmetric matrix $OMO^T$ written in block form. Real matrices $A$, $B$, $D$ can be made arbitrary by the appropriate choice of $M$ (of course, under obvious constraints $A=A^T$, $D=D^T$).
Now let us continue the computation in (1):
$$\mathrm{Tr}\,MJMJ=\mathrm{Tr}\left(A^2+D^2-BB^T-B^TB\right).$$
Also using that
$$\mathrm{Tr}\,M^2=\mathrm{Tr}\left(OMO^T\right)^2=\mathrm{Tr}\left(A^2+D^2+BB^T+B^TB\right),$$
we find that
$$\frac{\mathrm{Tr}\,MJMJ}{\mathrm{Tr}\,M^2}+1=\frac{2\mathrm{Tr}\left(A^2+D^2\right)}{\mathrm{Tr}\left(A^2+D^2+BB^T+B^TB\right)}\geq 0.$$
The equality is certainly attained if $A=D=\mathbf{0}_n$. $\blacksquare$
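A small numerical experiment supporting both the $N=2$ example and the lower bound (my own check; it enumerates the fixed-point-free involutions by brute force, which is only feasible for small $N$):

```python
import itertools
import numpy as np

def fpf_involutions(n):
    # All n x n permutation matrices J with J^2 = I and no fixed points.
    mats = []
    for perm in itertools.permutations(range(n)):
        if all(perm[perm[i]] == i and perm[i] != i for i in range(n)):
            J = np.zeros((n, n))
            J[range(n), perm] = 1
            mats.append(J)
    return mats

def F(M):
    return max(np.trace(M @ J @ M @ J) / np.trace(M @ M)
               for J in fpf_involutions(M.shape[0]))

print(F(np.diag([1.0, -1.0])))  # -1.0, the N = 2 example above

rng = np.random.default_rng(0)
for _ in range(500):            # random symmetric 4 x 4 matrices
    A = rng.standard_normal((4, 4))
    assert F(A + A.T) >= -1 - 1e-9
print("F(M) >= -1 held on all samples")
```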
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/372940",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
} |
Photograph of Marjorie Rice I'm giving a presentation this weekend about Marjorie Rice's work on tilings. The only photograph I have of her (from her website) is small and pixelated, and I haven't been able to make contact with her to ask her for a better one. I'd be most grateful if you could point me to a better photo of her, either on the web or in print or from your personal archives.
Thanks!
| See Marjorie Rice, page 2 of a newsletter published by Key Curriculum Press, on tessellations. There's a photo of Marjorie Rice (on the left), at the lower left of page 2 of the newsletter. The link will take you to the pdf.
I don't know if this is an improvement over your current picture, but I thought I'd post this, in case it helps. Here's an image of page 2 of the pdf:
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/373015",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 0
} |
Is mathematical induction necessary in this situation? I was reading "Number Theory" by George E. Andrews.
On P.17, where he proves that for each pair of positive integers a,b, gcd(a,b) uniquely exists, I came up with a question.
The approach he used is probably the most common one, that is, to make use of Euclidean Algorithm.
There exist integers $q_o, r_o $ ,$0 \leq r_0 <b$. such that
$a=q_0 \times b +r_0$.
If $r_0 \neq 0$, we can find $q_1,r_1$, $0 \leq r_1 < r_0$ such that
$b=q_1 \times r_0 + r_1$.
Since $b>r_0>r_1>....\geq 0$, there exists $k$ such that $r_k=0$.
Then we can prove that $d=r_{k-1}$ divides $r_{k-2}$.
Moreover, we can divide every $r_t$ by $d$.
I believe this is proved as follows:
Suppose that $d$ divides both $r_t, r_{t-1}$.
Since $r_{t-2}=q_t \times r_{t-1} + r_t$ and the right side is clearly divisible by $d$, so is the left side.
And suppose that $d$ divides both $r_{t-1},r_{t-2}$. Continuing in this way, we eventually conclude that $d$ divides both $a$ and $b$.
And the author says that this procedure requires the Principle of Mathematical Induction.
This looks like mathematical induction, but it is not proving a statement for infinitely many numbers, so I think it does not need the Principle of Mathematical Induction, because $k$ is a finite number.
To my understanding, we need to use the Principle of Mathematical Induction only when we want to prove that a statement is true for infinitely many integers, because we cannot write infinitely long proofs. However, in this situation, we could write out the proof in $k$ steps; it would just be tedious. That is why I think it does not need mathematical induction.
Could you help me figure out why we need to use Principle of Mathematical Induction in this situation?
| Here are the spots where induction is required:
"Since $b>r_0>r_1>....≥0$, there exists $k$ such that $r_k=0$." Not true for real numbers, right?
"I think this does not need Principle of Mathematical Induction because k is a finite number." But how do we know it's finite? You could descend forever in some rings.
Personally though, I'd use the well-ordering principle; it's much cleaner than induction in most cases.
Let $S$ be the set of those $r_i$. It's fine if it's infinite, sets can do that. Now, since we know all of them are $\ge 0$, there is a minimum element. Call this element $d$. [continue as you did in your post].
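To see the descent concretely, here is the remainder sequence from the Euclidean algorithm for a sample pair (an illustration I'm adding, not part of the book's proof):

```python
def euclid_remainders(a, b):
    # Returns gcd(a, b) and the strictly decreasing remainders b > r0 > r1 > ... >= 0.
    rs = []
    while b != 0:
        a, b = b, a % b
        rs.append(b)
    return a, rs

print(euclid_remainders(8888, 1234))  # (2, [250, 234, 16, 10, 6, 4, 2, 0])
```

Because each remainder is a nonnegative integer strictly smaller than the previous one, the sequence must hit $0$ after finitely many steps — that is exactly where well-ordering (or induction) enters.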
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/373074",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
} |
How to find solutions for linear recurrences using eigenvalues Use eigenvalues to solve the system of linear recurrences
$$y_{n+1} = 2y_n + 10z_n\\
z_{n+1} = 2y_n + 3z_n$$
where $y_0 = 0$ and $z_0 = 1$.
I have absolutely no idea where to begin. I understand linear recurrences, but I'm struggling with eigenvalues.
| Set $x_n=[y_n z_n]^T$, and your system becomes $x_{n+1}=\left[\begin{smallmatrix}2&10\\2&3\end{smallmatrix}\right]x_n$. Iteration becomes matrix exponentiation. If your eigenvalues are less than 1 in absolute value, the matrix approaches 0. If an eigenvalue is bigger than 1 in absolute value, you get divergence. It's a rich subject, read about it on Wikipedia.
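For this particular system, a short numerical sketch of the recipe (my own worked example; numpy's `eig` finds the eigenvalues $7$ and $-2$):

```python
import numpy as np

A = np.array([[2, 10],
              [2,  3]])
vals, vecs = np.linalg.eig(A)      # eigenvalues 7 and -2 (in some order)

x0 = np.array([0.0, 1.0])          # (y_0, z_0)
c = np.linalg.solve(vecs, x0)      # coordinates of x0 in the eigenbasis

def x(n):
    # Closed form: x_n = V diag(vals^n) c
    return vecs @ (c * vals ** n)

xn = x0.copy()
for _ in range(10):                # compare with direct iteration
    xn = A @ xn
print(x(10), xn)                   # the two agree
```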
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/373147",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
} |
Show that the equation $\cos(x) = \ln(x)$ has at least one solution on the real numbers I have a question:
Q
Show that the equation $\cos (x) = \ln (x)$ has at least one real solution.
To solve this question using the intermediate value theorem,
we let $f(x)=\cos (x)-\ln (x)$.
We want to find $a$ and $b$
with
$f(a)f(b)<0$,
that is,
$f(a)>0$ and
$f(b)<0$.
But what values should I try? Thanks
| Hint: $\cos$ is bounded whereas $\ln$ is increasing with $\lim\limits_{x\to 0^+} \ln(x) =- \infty$ and $\lim\limits_{x \to + \infty} \ln(x)=+ \infty$.
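To make the hint concrete (my own numeric check, not part of the hint): $f(1)=\cos 1>0$ while $f(\pi)=-1-\ln\pi<0$, so the intermediate value theorem gives a root in $(1,\pi)$.

```python
import numpy as np
from scipy.optimize import brentq

f = lambda x: np.cos(x) - np.log(x)
print(f(1.0), f(np.pi))        # positive, then negative
print(brentq(f, 1.0, np.pi))   # the root, approximately 1.303
```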
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/373228",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 0
} |
4 dimensional numbers I've thought of using split-complex and complex numbers together to build a 3-dimensional space (related to my previous question). I then found out that using both together, we run into trouble with the product $ij$. So by adding another dimension, I've defined $$k=\begin{pmatrix}
1 & 0\\
0 & -1
\end{pmatrix}$$
with the property $k^2=1$. So numbers of the form $a+bi+cj+dk$, where $a,b,c,d \in \Bbb R$, $i$ is the imaginary unit, $j$ is the elementary unit of the split-complex numbers and $k$ is the number defined above, could be represented in a 4-dimensional space. I know that these numbers look like the quaternions. They are not! So far, I came up with the multiplication table below:
$$\begin{array}{|l |l l l|}\hline
& i&j&k \\ \hline
i&-1&k&j \\
j& -k&1&i \\
k& -j&-i&1 \\ \hline
\end{array}$$
We can note that commutativity no longer holds for these numbers, just as for the quaternions. When I showed this work to my math teacher he said basically this:
*
*It's not coherent to use numbers with different properties as basic elements, since $i^2=-1$ whereas $j^2=k^2=1$
*$2\times 2$ matrices don't represent anything in a 4-dimensional space
Can somebody explain these two points to me? What's incoherent here?
| You have discovered split-quaternions. You can compare the multiplication table there and in your question.
This algebra is not commutative and has zero divisors. So, it combines the "negative" traits of both the quaternions and the tessarines. On the other hand it is notable for being isomorphic to the algebra of real $2\times2$ matrices. Due to this isomorphism, people usually speak about matrices rather than split-quaternions.
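To compare the two tables concretely, one can multiply out a standard $2\times2$ representation (the matrices chosen for $i$ and $j$ below are my own; $k$ is the matrix from the question). Note that in any associative algebra $ik=i(ij)=(i^2)j=-j$, so the printout necessarily differs from the question's table in some signs:

```python
import numpy as np

i = np.array([[0, 1], [-1, 0]])   # i^2 = -1
j = np.array([[0, 1], [ 1, 0]])   # j^2 = +1
k = np.array([[1, 0], [0, -1]])   # k^2 = +1; indeed k = i @ j

for a_name, a in (("i", i), ("j", j), ("k", k)):
    for b_name, b in (("i", i), ("j", j), ("k", k)):
        print(a_name + b_name, "=", (a @ b).tolist())
```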
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/373357",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 5,
"answer_id": 3
} |
Domain of the function $f(z) = \sqrt{z^2 -1}$ What will be the domain of the function $f(z) = \sqrt{z^2 -1}$?
My answers are: $(-\infty, -1] \cup [1, \infty)$ OR $\mathbb{R} - \lbrace1>x\rbrace$ OR $\mathbb {R}$, such that $z \nless 1$.
| The first part of your answer (before the "or") is correct:
The domain of your function, in $\mathbb R$ is indeed $(-\infty, -1]\cup [1, \infty).$ That is, the function is defined for all real numbers $z$ such that $z \leq -1$ or $z \geq 1$.
Did you have any particular reason for including the alternatives after "OR" in your answer? Did you have doubts about the above, that is, were you questioning whether the domain is not $(-\infty, -1] \cup [1, \infty)$?
Why is the domain $\;\;(-\infty, -1] \cup [1, \infty) \subset \mathbb R\;$?
Note that the numbers strictly contained in $(-1, 1)$, when squared, are less than $1$, making $\color{blue}{\bf z^2 - 1 < 0}$, in which case we would be trying to take the square root of a negative number - which has no definition in the real numbers. So we exclude those numbers, the $z \in (-1, 1)$ from the domain, giving us what remains. And so we have that our function, and its domain, is given by:
$$f(z) = \sqrt{z^2 - 1},\quad z \in (-\infty, -1] \cup [1, \infty) \subset \mathbb R$$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/373432",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Generalization of metric that can induce ordering I was wondering if there is some generalization of the concept of a metric that can take positive, negative and zero values, such that it can induce an order on the metric space? If there already exists such a concept, what is its name?
For example, for $x,y \in \mathbb R$, we can use the difference $x-y$ as such a generalization of a metric.
Thanks and regards!
| You will have to lose some of the other axioms of a metric space as well since the requirement that $d(x,y)\ge 0$ in a metric space is actually a consequence of the other axioms: $0=d(x,x)\le d(x,y)+d(y,x)=2\cdot d(x,y)$, thus $d(x,y)\ge 0$. This proof uses the requirements that $d(x,x)=0$, the triangle inequality, and symmetry.
There are notions of generalizations of metric spaces that weaken these axioms. I think the one closest to what you might be thinking about is partial metric spaces (where $d(x,x)=0$ is dropped).
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/373510",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 0
} |
Conformally Map Region between tangent circles to Disk Suppose we are given two circles, one inside the other, that are tangent at a point $z_0$. I'm trying to map the region between these circles to the unit disc, and my thought process is the following:
I feel like we can map $z_0$ to $\infty$, but I'm not really sure about this. If it works, then I get a strip in the complex plane, and I know how to handle strips via rotation, translation, logarithm, power, and then I'm in the upper half-plane (if I've thought this through somewhat properly). My problem really lies in what point $z_0$ can go to, because I thought the point symmetric to $z_0$ (which is $z_0$ in this case) had to be mapped to $0$. Is this the right idea, and if so, what other details should I make sure I have? Thanks!
| I just recently solved this myself.
Let $z_1$ be the center of the inner tangent circle, so that $|z_1 - z_0| =r$, and let $z_2$ be the center of the larger circle, with $|z_2 - z_0| = R$ and $R>r$. Rotate and translate your circles so that $z_0$ lies on the real axis (as does $z_1$) and $z_2$ is $0$. You can map this to a vertical strip by sending $z_0$ to $\infty$, $0$ to $0$, and the point of the inner circle antipodal to $z_0$ to $1$.
To get from the upper half plane to the vertical strip, you'll need a logarithm and a reflection.
E.g., consider $B(0,2) \setminus \overline{B(3/2,1/2)}$ as the domain. The map as above (sending the circles to the walls of the vertical strip) is $\Phi(z) = \frac{-z}{z-2}$ (a Möbius transformation). You'll need to invert this to go from the strip to the circles.
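A quick numerical check that this $\Phi$ really straightens both circles into the walls of a vertical strip (my own verification):

```python
import numpy as np

phi = lambda z: -z / (z - 2)             # sends 2 -> inf, 0 -> 0, 1 -> 1

t = np.linspace(0, 2 * np.pi, 9)[1:-1]   # sample angles, avoiding z = 2
outer = 2.0 * np.exp(1j * t)             # |z| = 2
inner = 1.5 + 0.5 * np.exp(1j * t)       # |z - 3/2| = 1/2

print(np.round(phi(outer).real, 10))     # all -0.5: one wall of the strip
print(np.round(phi(inner).real, 10))     # all  1.0: the other wall
```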
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/373591",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Does one always use augmented matrices to solve systems of linear equations? The homework tag is to express that I am a student with no working knowledge of math.
I know how to use elimination to solve systems of linear equations. I set up the matrix, perform row operations until I can get the resulting matrix into row echelon form or reduced row echelon form, and then I back substitute and get the values for each of my variables.
Just some random equations: $a + 3b + c = 5 \\ a + 0 + 5c = -2$
My question is, don't you always want to be using an augmented matrix to solve systems of linear equations? By augmenting the matrix, you're performing row operations to both the left and right side of those equations. By not augmenting the matrix, aren't you missing out on performing row operations on the right side of the equations (the $5$ and $-2$)?
The sort of problems I'm talking about are homework/test-level questions (as opposed to real world harder data and more complex solving methods?) where they give you $Ax = b$ and ask you to solve it using matrix elimination.
Here is what I mean mathematically:
$[A] =
\begin{bmatrix}
1 & 3 & 1 \\
1 & 0 & 5 \\
\end{bmatrix}
$
$ [A|b] =
\left[\begin{array}{ccc|c}
1 & 3 & 1 & 5\\
1 & 0 & 5 & -2\\
\end{array}\right]
$
So, to properly solve the systems of equations above, you want to be using $[A|b]$ to perform Elementary Row Operations and you do not want to only use $[A]$ right?
The answer is: yes, if you use this method of matrices to solve systems of linear equations, you must use the augmented form $[A|b]$ in order to perform EROs to both $A$ and $b$.
| I certainly wouldn't use an augmented matrix to solve the following:
$x = 3$
$y - x = 6$
When you solve a system of equations, if doing so correctly, one does indeed perform "elementary row operations", and in any case, when working with any equations, to solve for $y$ above, for example, I would add $x$ to each side of the second equation to get $y = x + 6 = 3 + 6 = 9$
Note: we do treat systems of equations like "row operations" (in fact, row operations are simply modeled after precisely those operations which are legitimate to perform on systems of equations)
$$2x + 2y = 4 \iff x + y = 2\tag{multiply each side by 1/2}$$
$$\qquad\qquad x + y = 1$$
$$\qquad\qquad x - y = 1$$
$$\iff 2x = 2\tag{add equation 1 and 2}$$
$$x+ y + z = 1$$
$$3x + 2y + z = 2$$
$$-x - y + z = 3$$
We can certainly switch "equation 2 and equation 3" to make "adding equation 3 to equation 1" more obvious.
I do believe that using an augmented coefficient matrix is very worthwhile, very often, to "get variables out of the way" temporarily, and for dealing with many equations in many unknowns: and even for $3\times 3$ systems when the associated augmented coefficient matrix has mostly non-zero entries. It just makes more explicit (and easier to tackle) the process one can use with the corresponding system of equations it represents.
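As an illustration of the main point — row operations must act on the $b$ column too — here is the reduced row echelon form of the augmented matrix from the question (the sympy usage is my own addition):

```python
from sympy import Matrix

Ab = Matrix([[1, 3, 1,  5],
             [1, 0, 5, -2]])
R, pivots = Ab.rref()
print(R)   # [[1, 0, 5, -2], [0, 1, -4/3, 7/3]]
# Read off: x1 = -2 - 5*x3 and x2 = 7/3 + (4/3)*x3, with x3 free.
```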
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/373667",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
} |
Finding intersection of 2 planes without cartesian equations?
The planes $\pi_1$ and $\pi_2$ have vector equations:
$$\pi_1: r=\lambda_1(i+j-k)+\mu_1(2i-j+k)$$
$$\pi_2: r=\lambda_2(i+2j+k)+\mu_2(3i+j-k)$$
$i.$ The line $l$ passes through the point with position vector $4i+5j+6k$ and is parallel to both $\pi_1$ and $\pi_2$. Find a vector equation for $l$.
This is what I know:
$$l \parallel \pi_1, \pi_2 \implies l \perp n_1, n_2 \implies d = n_1\times n_2;\quad l:r=(4i+5j+6k)+\lambda d$$
However, that method involves 3 cross-products, which according to the examination report is an 'expeditious' solution that is also prone to 'lack of accuracy'. Rightfully so — I also often make small errors with sign changes, leading me to the wrong answer, so if there is a more efficient way of working I'd like to know what that is.
Q1 How can I find the equation for line $l$ without converting the plane equations to cartesian form?
$ii.$ Find also the shortest distance between $l$ and the line of intersection of $\pi_1$ and $\pi_2$.
Three methods are described to solve the second part. However, I didn't understand this one:
The determination of the shortest distance was perceived by most to be the shortest distance of the given point $4i+5j+6k$ to the line of intersection of the planes. Thus they continued by evaluating the vector product $(4i+5j+6k)\times (3i+j-k)$ to obtain $-11i+22j-11k$. The required minimum distance is then given immediately by $p = \frac{|-11i+22j-11k|}{\sqrt{11}} = \sqrt{66}$
Q2 Why does the cross-product of the given point and the direction vector of the line of intersection of the planes get to the correct answer?
| 1) One method you can use to find line $L_1$ is to make equations in $x, y, z$ for each plane and solve them, instead of finding the cross product:
$$x=\lambda_1+2\mu_1, y=\lambda_1-\mu_1, z=-\lambda_1+\mu_1$$
$$x=\lambda_2+3\mu_2, y=2\lambda_2+\mu_2, z=\lambda_2-\mu_2$$
What we'll do is find the line of intersection; this line is clearly parallel to both planes so we can use its direction for the line through $(4,5,6)$ that we want.
Since the line of intersection has to lie on both planes, it will satisfy both sets of equations. So equating them,
$$\lambda_1+2\mu_1 = \lambda_2+3\mu_2$$
$$\lambda_1-\mu_1 = 2\lambda_2+\mu_2$$
$$-\lambda_1+\mu_1 = \lambda_2-\mu_2$$
Adding the last two equations, we get,
$$\lambda_2 = 0$$
So,
$$x=3\mu_2, y=\mu_2, z=-\mu_2$$
This is a parametrisation of the solution which is the line. So obviously $L_1$ must be:
$$L_1: (4,5,6) + \mu_2(3,1,-1)$$
2) Notice that $(0,0,0)$ lies on the line of intersection $L_2$. So the difference vector $v$ between the point $(4,5,6)$ on the first line and $(0,0,0)$ on the second is $v=(4,5,6)$. The distance is really the length of the projection of $v$ on the common perpendicular of the two lines. This is $|v|\sin \theta$ where $\theta$ is the angle between $v$ and the direction of $L_2$.
Expressing $|v|\sin\theta$ as $$|v|\sin\theta=\frac{|v||w|\sin\theta}{|w|} = \frac{|v\times w|}{|w|}$$ (where $w$ is the direction vector of $L_2$) gives you the answer.
Here taking the cross product of the position vector of the given point with the direction vector of $L_2$ depended on $(0,0,0)$ lying on $L_2$; otherwise you would need to take the difference vector of the given point and any point on $L_2$.
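A numerical cross-check of both parts (my own verification of the computations above):

```python
import numpy as np

n1 = np.cross([1, 1, -1], [2, -1, 1])   # normal to pi_1
n2 = np.cross([1, 2, 1], [3, 1, -1])    # normal to pi_2
print(np.cross(n1, n2))                 # (27, 9, -9), i.e. parallel to (3, 1, -1)

v = np.array([4, 5, 6])                 # point on l minus (0,0,0) on L_2
w = np.array([3, 1, -1])                # direction of L_2
print(np.linalg.norm(np.cross(v, w)) / np.linalg.norm(w), np.sqrt(66))  # equal
```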
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/373735",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Finding the closed form for a sequence My teacher isn't great with explaining his work and the book we have doesn't cover anything like this. He wants us to find a closed form for the sequence defined by:
$P_{0} = 0$
$P_{1} = 1$
$\vdots$
$P_{n} = -2P_{n-1} + 15P_{n-2}$
I'm not asking for a straight up solution, I just have no idea where to start with it. The notes he gave us say:
We will consider a linear difference equation that gives the
Fibonacci sequence.
$y(k) + A_1y(k -1) + A_2y(k -2) = \beta$
That's the general form for a difference equation in which each term is formed from the
two preceding terms. We specialize this for our Fibonacci sequence by setting $A_1 = -1$,
$A_2 = -1$, and $\beta = 0$. With some rearrangement, we get
$y(k) = y(k - 1) + y(k - 2)$
which looks more like the general form for the Fibonacci sequence.
To solve the difference equation, we try the solution $y(k) = Br^k$. Plugging that in, we
obtain
$Br^{k-2} (r^2 - r - 1) = 0$
I have no idea where the $Br^k$ is coming from nor what it means, and he won't explain it in any sort of terms we can understand.
If someone could help me with the basic principle behind finding a closed form with the given information I would be eternally grateful.
EDIT: Using the information given (thank you guys so much) I came up with
$y(k) = \frac{1}{8}(3)^k - \frac{1}{8}(-5)^k$
If anyone has ran through let me know what you found, but I'm in no way asking you guys to do that. It's a lot of work to help some random college student.
| A related problem. Here is a start: assume a solution of the form $P_n=r^n$ and plug it back into the equation to find $r$:
$$ P_{n} = -2P_{n-1} + 15P_{n-2} \implies r^n+2r^{n-1}-15r^{n-2}=0 $$
$$ \implies r^{n-2}(r^2+2r-15)=0 \implies r^2+2r-15=0 $$
Find the roots of the above polynomial $r_1, r_2$ and construct the general solution
$$ P(n)=c_1 r_1^n + c_2 r_2^n \longrightarrow (*) $$
To find $c_1$ and $c_2$, just use $P_0=0$ and $P_1=1$ in $(*)$ to get two equations in $c_1$ and $c_2$. Once you find $c_1$ and $c_2$ plug them back in $(*)$ and this will be the required solution.
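For this problem the roots are $3$ and $-5$, giving $P_n=\frac{3^n-(-5)^n}{8}$ (the OP's edit), which a short script confirms against the recurrence (my own check):

```python
def P(n, memo={0: 0, 1: 1}):
    if n not in memo:
        memo[n] = -2 * P(n - 1) + 15 * P(n - 2)
    return memo[n]

closed = lambda n: (3 ** n - (-5) ** n) // 8
print(all(P(n) == closed(n) for n in range(30)))  # True
```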
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/373815",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 2,
"answer_id": 1
} |
Derivatives using the Limit Definition How do I find the derivative of $\sqrt{x^2+3}$? I plugged everything into the formula but now I'm having trouble simplifying.
$$\frac{\sqrt{(x+h)^2+3}-\sqrt{x^2+3}}{h}$$
| Keaton's comment is very useful. If you multiply the top and bottom of your expression by $\sqrt{(x+h)^2+3}+\sqrt{x^2+3}$, the numerator should simplify to $2xh+h^2$. See if you can finish the problem after that.
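For reference, the limit works out to $\frac{x}{\sqrt{x^2+3}}$; here is a symbolic check of the whole computation (my addition, using sympy):

```python
from sympy import symbols, sqrt, limit, simplify, diff

x, h = symbols('x h')
f = sqrt(x**2 + 3)
dq = (f.subs(x, x + h) - f) / h    # the difference quotient above
print(simplify(limit(dq, h, 0)))   # x/sqrt(x**2 + 3)
print(diff(f, x))                  # matches the direct derivative
```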
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/373891",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Clarification on expected number of coin tosses for specific outcomes. As seen in this question, André Nicolas provides a solution for 5 heads in a row.
Basically, for any sort of problem that relies on determining this sort of probability, if the chance of each event is 50/50, then no matter what the composition of values, the linear equation would be the same?
For example, in the case of flipping 5 coins, and wanting to find out how many flips are needed for 4 consecutive tails followed by a head, is the same form as trying to find how many flips for 5 heads?
specifically:
$$x=\frac{1}{2}(x+1)+\frac{1}{4}(x+2)+\frac{1}{8}(x+3)+\frac{1}{16}(x+4)+\frac{1}{32}(x+5)+\frac{1}{32}(5).$$
where $\frac{1}{32}(x+5)$ denotes the last flip's chance of landing tails after 4 heads in a row, i.e. $.5\cdot.5\cdot.5\cdot.5\cdot P(\text{tails})$.
If I was using an unfair coin in the same example as above (HHHHT) with a 60% chance to land on heads, would the equation be:
$$x=\frac{1}{2}(x+1)+\frac{1}{4}(x+2)+\frac{1}{8}(x+3)+\frac{1}{16}(x+4)+\frac{1}{40}(x+5)+\frac{1}{40}(5).$$
| No, it is not the same. For this pattern, the argument holds well (as long as every failed flip sends you back to the start) until four tosses. But on the fifth toss, if you flip heads you are done, but if you flip tails you are not back to the start — you have potentially flipped the first of your winning series. You will need to consider states where you have flipped some number of tails so far.
For the unfair coin problem, aside from that objection, the $\frac 12$ would become $\frac 25$ for the first term, because you have a $40\%$ chance to land tails and be back at the start. Again, the first four flips work fine (the restart term for flip $n$ now carries probability $0.6^{n-1}\cdot\frac25$), but you need to worry about states with one or more heads in the bank.
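A Monte Carlo estimate backs this up (my own simulation): the two patterns have genuinely different expected waiting times with a fair coin, so the same linear equation cannot serve for both.

```python
import random

def expected_flips(pattern, trials=20_000, rng=random.Random(0)):
    total = 0
    for _ in range(trials):
        window, n = "", 0
        while window != pattern:
            window = (window + "HT"[rng.random() < 0.5])[-len(pattern):]
            n += 1
        total += n
    return total / trials

print(expected_flips("HHHHH"))  # about 62, the linked answer's value
print(expected_flips("TTTTH"))  # about 32: four tails then a head
```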
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/373953",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Simplex on Linear Program with equations My linear program contains one equation in addition to the inequalities. I do not understand how to handle this; in every tutorial I found, the procedure is to add slack variables to convert the inequalities to equations. My LP is the following:
Minimize x4
Subject to:
3x1+7x2+8x3<=x4
9x1+5x2+7x3<=x4
5x1+6x2+7x3<=x4
x1+x2+x3=1
I tried to add slack variables $w_1,w_2,w_3$ to convert the inequalities to equations, but then I do not understand how to find an initial feasible solution. I am aware of the two-phase simplex method, but I do not understand how to use it here.
Am I allowed to add a slack variable $w_4$ to cope with the last equation? As far as I understand, if I do that I will change the LP. How should I start to cope with this LP? Can I use as an initial feasible solution the vector $(0,0,1,0)$, for example?
This is not a homework question (it is a preparatory question for exams)! I do not ask for a complete solution, just for a hint to get unstuck on the equation problem.
Edit: I am not able to solve this, and I am not able to prove that it is infeasible. The fact that I have so many zeros in the $b_i$ creates problems for me!
| I have to be honest, my simplex is rusty. But perhaps you could split the equation into two inequalities:
$$x_1+x_2+x_3\leq 1$$
$$-x_1-x_2-x_3\leq -1$$
This is exactly what some solvers do that can't handle mixtures of inequalities and equations.
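For what it's worth, modern LP solvers accept the equality directly; here is a sketch with scipy (my own addition — I assume the usual nonnegativity bounds $x_i\ge 0$, which the question does not state explicitly):

```python
from scipy.optimize import linprog

c    = [0, 0, 0, 1]                 # minimize x4
A_ub = [[3, 7, 8, -1],              # 3x1 + 7x2 + 8x3 - x4 <= 0, etc.
        [9, 5, 7, -1],
        [5, 6, 7, -1]]
b_ub = [0, 0, 0]
A_eq = [[1, 1, 1, 0]]               # x1 + x2 + x3 = 1, kept as an equality
b_eq = [1]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, None)] * 4)
print(res.x, res.fun)               # optimum x4 = 6 at (x1, x2, x3) = (1/4, 3/4, 0)
```

Splitting the equality into the two inequalities above would give the same optimum.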
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/374031",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Analytically continue a function with Euler product I would like to estimate the main term of the integral
$$\frac{1}{2\pi i} \int_{(c)} L(s) \frac{x^s}{s} ds$$
where $c > 0$, $\displaystyle L(s) = \prod_p \left(1 + \frac{2}{p(p^s-1)}\right)$.
Question: How can one estimate the integral? In other words, is there any way to analytically continue this function?
The function as stated converges for $\Re s > 0$, but I'm not sure how to extend it past the $y$-axis. Thanks!
| Let $\rho(d)$ count the number of solutions $x$ in $\mathbb{Z}/d\mathbb{Z}$ to $x^2\equiv -1 \pmod{d}$; then we have
$$\sum_{n\leq x}d(n^2+1)=2x\sum_{n\leq x}\frac{\rho(n)}{n}+O(\sum_{n\leq x}\rho(n))$$
By multiplicative properties of $\rho(n)$ we have,
$$\rho(n)=(\chi*|\mu|)(n),$$
where $\chi$ is the non-principal character modulo $4$ and $*$ denotes Dirichlet convolution.
Which allows us to estimate,
$$\sum_{n\leq x}\frac{\rho(n)}{n}=\sum_{n\leq x}\frac{(\chi*|\mu|)(n)}{n}=\sum_{n\leq x}\frac{|\mu(n)|}{n}\sum_{k\leq \frac{x}{n}}\frac{\chi(k)}{k}=\sum_{n\leq x}\frac{|\mu(n)|}{n}\Big(L(1,\chi)+O\Big(\frac{n}{x}\Big)\Big)$$
$$=L(1,\chi)\sum_{n\leq x}\frac{|\mu(n)|}{n}+O(1)=\frac{\pi}{4}\sum_{n\leq x}\frac{|\mu(n)|}{n}+O(1)=\frac{\pi}{4}\Big(\frac{6}{\pi^2}\ln(x)+O(1)\Big)$$
So that, $$\sum_{n\leq x}\frac{\rho(n)}{n}=\frac{3}{2\pi}\ln(x)+O(1)$$
Also note that, $$\sum_{n\leq x}\rho(n)=\sum_{n\leq x}(\chi*|\mu|)(n)=\sum_{n\leq x}|\mu(n)|\sum_{k\leq \frac{x}{n}}\chi(k)\leq\sum_{n\leq x}|\mu(n)|\leq x$$
So we have, $$\sum_{n\leq x}\rho(n)=O(x)$$
Which gives,
$$\sum_{n\leq x}d(n^2+1)=\frac{3}{\pi}x\ln(x)+O(x)$$
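A crude numerical check of the leading term (my own addition; the convergence is slow because the error term is $O(x)$):

```python
from math import pi, log, isqrt

def d(m):
    # Divisor count by trial division up to sqrt(m).
    cnt, r = 0, isqrt(m)
    for k in range(1, r + 1):
        if m % k == 0:
            cnt += 2
    return cnt - (1 if r * r == m else 0)

x = 5000
S = sum(d(n * n + 1) for n in range(1, x + 1))
print(S, 3 / pi * x * log(x))   # same order of magnitude; ratio drifts toward 1
```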
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/374088",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 1,
"answer_id": 0
} |
Is the ideal $(X^2-3)$ proper in $\mathbb{F}[[X]]$?
Let $\mathbb{F}$ be a field and $R=\mathbb{F}[[X]]$ be the ring of formal power series over $\mathbb{F}$. Is the ideal $(X^2-3)$ proper in $R$? Does the answer depend upon $\mathbb{F}$?
Clearly $X^2-3=(X+\sqrt3)(X-\sqrt3)$ and hence $X^2-3$ is not zero.
I have no idea whether the ideal is proper or not.
So far, I haven't learned any theorem for proving an ideal is proper. Perhaps I should start with the definition of a proper ideal, and find one element of $R$ that is not in the ideal?
| The element $\sum _{i \geq 0} a_i X^i \in R$ is invertible in $R$ if and only if $a_0\neq 0$.
The key to the proof of that relatively easy result is the identity $(1-X)^{-1}=\sum_{i \geq 0} X^i \in R$
In your question $a_0=-3$, so the element $X^2-3$ is invertible in $R=F[[X]]$ (which is equivalent to the ideal $(X^2-3)\subset R$ being all of $R$, i.e. not proper) if and only if the characteristic of the field $F$ is not $3$: $$ (X^2-3)\subset R \;\text {proper} \iff \operatorname{char} F = 3.$$
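When the inverse exists, the geometric-series identity above produces it explicitly; for instance over $\mathbb{Q}$ (a sympy illustration I'm adding):

```python
from sympy import symbols, series

X = symbols('X')
print(series(1 / (X**2 - 3), X, 0, 8))
# -1/3 - X**2/9 - X**4/27 - X**6/81 + O(X**8),
# i.e. -(1/3) * sum_{i>=0} (X^2/3)^i -- exactly the (1 - X)^{-1} trick.
```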
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/374147",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
} |
Addition table for a 4-element field Why is this addition table valid,
\begin{matrix}
\boldsymbol{\textbf{}+} & \mathbf{0} & \boldsymbol{\textbf{}1} & \textbf{a} &\textbf{ b}\\
\boldsymbol{\textbf{}0} & 0 & 1 & a & b\\
\boldsymbol{\textbf{}1} & 1 & 0 & b & a\\
\boldsymbol{\textbf{} a} & a & b & 0 & 1\\
\boldsymbol{\textbf{} b} &b & a & 1 & 0
\end{matrix}
and this one isn't, what makes it not work?
\begin{matrix}
\boldsymbol{\textbf{}+} & \mathbf{0} & \boldsymbol{\textbf{}1} & \textbf{a} &\textbf{ b}\\
\boldsymbol{\textbf{}0} & 0 & 1 & a & b\\
\boldsymbol{\textbf{}1} & 1 & a & b & 0\\
\boldsymbol{\textbf{} a} & a & b & 0 & 1\\
\boldsymbol{\textbf{} b} &b & 0 & 1 & a
\end{matrix}
It's clear that $0$ and $a$ change places in the second table, but I can't find an example that refutes any of the addition axioms.
| If you have only one operation, it is difficult to speak about a field. But, it is well known that:
1) there exist exactly two groups (up to isomorphism) with 4 elements: one is ${\mathbb Z}/2{\mathbb Z}\times{\mathbb Z}/2{\mathbb Z}$ (the first table) and the other one is ${\mathbb Z}/4{\mathbb Z}$ (the second table);
2) there exists exactly one field (up to isomorphism) with 4 elements, and its additive group is isomorphic to ${\mathbb Z}/2{\mathbb Z}\times{\mathbb Z}/2{\mathbb Z}$
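For completeness, here is the field with $4$ elements built as $\mathbb F_2[x]/(x^2+x+1)$; printing its addition table reproduces the first table in the question, with $a=x$ and $b=x+1$ (this construction is my own addition):

```python
# Elements are pairs (c0, c1) representing c0 + c1*x with coefficients mod 2.
elems = [(0, 0), (1, 0), (0, 1), (1, 1)]
names = {(0, 0): '0', (1, 0): '1', (0, 1): 'a', (1, 1): 'b'}
for p in elems:
    row = [names[((p[0] + q[0]) % 2, (p[1] + q[1]) % 2)] for q in elems]
    print(names[p], row)   # matches the first addition table
```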
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/374233",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
understanding $\mathbb{R}$/$\mathbb{Z}$ I am having trouble understanding the factor group $\mathbb{R}/\mathbb{Z}$ — or maybe I'm not. Here's what I am thinking.
Okay, so I have a group $G=(\mathbb{R},+)$, and I have a subgroup $N=(\mathbb{Z},+)$. Then I form $G/N$. So this thing identifies any real number $x$ with the numbers that are an integer number of unit steps away. So if $x=\frac{3}{4}$, then $[x]=({...,\frac{-5}{4},\frac{-1}{4},\frac{3}{4},\frac{7}{4},...})$, and I can do this for any real number. So therefore, my cosets are unit intervals $[0,1)+k$, for integers $k$. Herstein calls this thing a circle and I was not sure why, but here's my intuition. The unit interval is essentially closed, and since every real number plus an integer identifies with itself, these "circles" keep piling up on top of each other as if it's one closed interval. Since it's closed, it is a circle. Does that make sense?
Now how do I extend this intuition to this?
$G'=[(a,b)|a,b\in{\mathbb{R}}], N'=[(a,b)|a,b\in{\mathbb{Z}}].$ What is $G'/N'$? How is this a torus? I can't get an intuitive picture in my head...
EDIT: Actually, are the cosets just simply $[x]=\{x+k \mid k\in{\mathbb{Z}}\}$?
| You can also use the following nice facts. I hope you are inspired by them.
$$\mathbb R/\mathbb Z\cong T\cong\prod_p\mathbb Z(p^{\infty})\cong\mathbb R\oplus(\mathbb Q/\mathbb Z)\cong\mathbb C^{\times}$$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/374380",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 3,
"answer_id": 1
} |
Formulate optimization problem My research area has "nothing to do with mathematics" but I still find it full of optimization problems. Therefore, I would like to learn to formulate and solve such problems, even though I am not encouraged to do it (at least at the moment; maybe the situation will change after I have proved my point :-).
Currently, I have tried to get familiar with gradient methods (gradient descent), and I think I understand some of the basic ideas now. Still, I find it difficult to put my problems into mathematical formulas, let alone solve them.
The ingredients I have for my optimization problem are:
1) My data; two vectors $x = (x_{0}, ..., x_{N})$ and $y = (y_{0}, ..., y_{N})$ having both $N$ samples.
2) Function $f(a, b)$ which tells me something about the relation of vectors $a$ and $b$.
What I want to do is:
Find a square matrix $P$ (of size 2 x 2) such that the value of $f(z_{1}, z_{2})$, where $z = P [x, y]^{T}$, becomes minimal.
To clarify (sorry, I'm not sure if my notation is completely correct) I mean that $z$ is computed as:
$z_{1} = p_{11}x + p_{12}y\\
z_{2} = p_{21}x + p_{22}y$.
How would one squeeze up all this into a problem to be solved using an optimization method like the gradient descent? All help is appreciated. Please note that my mathematical background is not too solid, I know only some very basic calculus and linear algebra.
| The notation in the question looks fine. So, you have a function $F$ of four real variables $p_{11},\dots,p_{22}$, defined by
$$F(p_{11},p_{12},p_{21},p_{22}) = f(p_{11}x+p_{12}y,\ p_{21}x+p_{22}y) \tag2$$
If $f$ is differentiable, then so is $F$. Therefore, the gradient descent can be used; how successful it will be depends on $f$. From the question it's not clear what kind of function $f$ is. Some natural functions like $f(z_1,z_2)=\|z_1-z_2\|^2$ would make the problem easy, but also uninteresting because the minimum is attained, e.g., at $p_{11}=p_{21}=1$, $p_{12}=p_{22}=0$, because these values make $z_1=z_2$.
Using the chain rule, one can express the gradient of $F$ in terms of the gradient of $f$ and the vectors $x,y$. Let's write $f_{ik}$ for the partial derivative of $f(z_1,z_2)$ with respect to the $k$th component of $z_i$. Here the index $i$ takes values $1,2$ only, while $k$ ranges from $0$ to $N$. With this notation,
$$\begin{split} \frac{\partial F}{\partial p_{11}}&=\sum_{k=0}^N x_k f_{1k}(p_{11}x+p_{12}y,\ p_{21}x+p_{22}y) \\
\frac{\partial F}{\partial p_{12}}&=\sum_{k=0}^N y_k f_{1k}(p_{11}x+p_{12}y,\ p_{21}x+p_{22}y) \\
\frac{\partial F}{\partial p_{21}}&=\sum_{k=0}^N x_k f_{2k}(p_{11}x+p_{12}y,\ p_{21}x+p_{22}y) \\
\frac{\partial F}{\partial p_{22}}&=\sum_{k=0}^N y_k f_{2k}(p_{11}x+p_{12}y,\ p_{21}x+p_{22}y) \\
\end{split} \tag1$$
The formulas (1) would be more compact if instead of $x,y$ the data vectors were called $x^{(1)}$ and $x^{(2)}$. Then (1) becomes
$$ \frac{\partial F}{\partial p_{ij}}=\sum_{k=0}^N x^{(j)}_k f_{ik}(p_{11}x^{(1)}+p_{12}x^{(2)},\ p_{21}x^{(1)}+p_{22}x^{(2)}) \tag{1*}$$
For more concrete advice, it would help to know what kind of function $f$ you have in mind, and whether the matrix $P$ needs to be of any special kind (orthogonal, unit norm, etc).
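A minimal gradient-descent sketch built on formulas (1), assuming for concreteness the easy criterion $f(z_1,z_2)=\|z_1-z_2\|^2$ mentioned above (the data, step size and iteration count are all placeholder choices of mine — substitute your own $f$ and its partials):

```python
import numpy as np

rng = np.random.default_rng(0)
x1, x2 = rng.standard_normal(50), rng.standard_normal(50)   # the data vectors

def f(z1, z2):                      # placeholder criterion
    return np.sum((z1 - z2) ** 2)

def grad_f(z1, z2):                 # its partials f_{1k}, f_{2k}
    g = 2 * (z1 - z2)
    return g, -g

P, lr = rng.standard_normal((2, 2)), 1e-3
for _ in range(2000):
    z1 = P[0, 0] * x1 + P[0, 1] * x2
    z2 = P[1, 0] * x1 + P[1, 1] * x2
    g1, g2 = grad_f(z1, z2)
    grad_P = np.array([[g1 @ x1, g1 @ x2],    # formulas (1): f_{ik} pairs with
                       [g2 @ x1, g2 @ x2]])   # the data vector x^{(j)}
    P -= lr * grad_P
print(f(z1, z2))   # near 0 for this particular f
```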
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/374439",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Moment of inertia of a circle A wire has the shape of the circle $x^2+y^2=a^2$. Determine the moment of inertia about a diameter if the density at $(x,y)$ is $|x|+|y|$
Thank you
| Consider a small segment of the wire, going from $\theta$ to $\theta +d\theta$. The length of the small segment is $a \,d\theta$. The density varies, but is approximately $a|\cos\theta|+a|\sin \theta|$.
Take a particular diameter, say with rectangular equation $y=(\tan\phi) x$, or better, $x\sin \phi -y\cos\phi=0$. The perpendicular distance from $(a\cos\theta,a\sin\theta)$ to this diameter is $|a\cos\theta\sin\phi -a\sin\theta\cos\phi|$.
So for the moment of inertia, we need to find
$$\int_0^{2\pi} \left(a|\cos\theta|+a|\sin \theta|\right)\left(a\cos\theta\sin\phi -a\sin\theta\cos\phi\right)^2a\,d\theta.$$
The integration is doable, but not easy. Special cases such as $\phi=0$ or $\phi=\pi/4$ will not be too hard.
Remark: The perpendicular distance from a point $(p,q)$ to the line with equation $ax+by+c=0$ is
$$\frac{|ap+bq+c|}{\sqrt{a^2+b^2}}.$$
There is a reasonably good discussion of the formula in Wikipedia.
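For the special case $\phi=0$ the integral can be done symbolically; by symmetry the absolute values reduce to the first quadrant (this evaluation is my own addition):

```python
from sympy import symbols, cos, sin, integrate, pi

theta, a = symbols('theta a', positive=True)
# phi = 0 (diameter y = 0): the distance is a*|sin(theta)|; restrict to the
# first quadrant, where the absolute values drop, and multiply by 4.
density = a * cos(theta) + a * sin(theta)
I = 4 * integrate(density * (a * sin(theta)) ** 2 * a, (theta, 0, pi / 2))
print(I)   # 4*a**4
```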
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/374527",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Weather station brain teaser I am living in a world where tomorrow it will either rain or not rain. There are two independent weather stations (A, B), each of which predicts tomorrow's weather correctly with probability 3/5. They both say it will rain; what is the probability of it actually raining tomorrow?
My intuition is to look at the complementary, i.e. $$1-P(\text{not rain | A = rain and B = rain}) = \frac{21}{25}$$
However, using the same methodology, the chance of it not raining tomorrow is: $$1-P(\text{rain | A = rain and B = rain}) = \frac{16}{25}$$
Clearly they do not add up to 1!
Edit: corrected the probability to 3/5
Edit 2:
There seems to be a problem with this method. Say the probability of getting a right prediction is 1/2, so the stations have no predictive power at all. Then, using the original argument, the probability of raining is $1-\frac{1}{2}\cdot\frac{1}{2} = \frac{3}{4}$, which is also nonsensical.
| After a couple of months of thinking, a friend of mine has pointed out that the question lacks one piece of information: the unconditional probability distribution of rain, $P(\text{rain})$. The logic is that if one lives in an area where it is certain to rain every day, the probability of raining is always 1. Then Kaz's analysis will give the wrong answer (9/13). In fact, the probability of rain given that both stations predict rain should be:
$$\frac{P(rain)(3/5)^2}{P(rain)(3/5)^2+(1-P(rain))(2/5)^2}$$
Kaz's answer is correct if the probability distribution of raining is uniform. Cheers.
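The corrected formula is easy to explore numerically (this snippet is mine):

```python
def p_rain(prior, acc=3/5):
    # P(rain | both stations say rain), stations independent given the weather.
    num = prior * acc ** 2
    return num / (num + (1 - prior) * (1 - acc) ** 2)

print(p_rain(0.5))   # 9/13 = 0.6923..., Kaz's answer under a uniform prior
print(p_rain(1.0))   # 1.0: if it always rains, the forecasts change nothing
```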
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/374585",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
} |
which texts on number theory do you recommend? My close friend intends to study number theory,
and he asked me if I know a good text on it, so I thought that you could help me to help him!
He is looking for a text for beginners, suitable for a first course,
and he will study it on his own.
So what texts do you recommend?
Are there any lectures or videos available online that could help him?
Also, what do you advise him?
| My two pennyworth: for introductory books try ...
*
*John Stillwell, Elements of Number Theory (Springer 2002). This is by a masterly expositor, and is particularly approachable.
*G.H. Hardy and E.M. Wright, An Introduction to the Theory of Numbers (OUP 1938, and still going strong with a 6th edition in 2008). Also aimed at beginning undergraduate mathematicians and pleasingly accessible.
*Alan Baker, A Comprehensive Course in Number Theory (CUP 2012) is a nice recent textbook (shorter than its title would suggest, too).
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/374662",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
} |
can a ring homomorphism map an integral domain to a non integral domain? I understand that if two rings are isomorphic, and one ring is an integral domain, then so must the other be.
However, consider two rings, both commutative rings with unity. Is it possible that one ring contains zero divisors and the other does not, while there exists a ring homomorphism between the two?
There could not be an isomorphism between the two rings, because there would be no one-to-one and onto operation-preserving mapping between them. But could there be an operation-preserving mapping between an integral domain and a commutative ring with unity and with zero divisors?
Clearly, if such a mapping did exist it would not seem to be one-to-one or onto, but does this rule out the potential of a homomorphism existing between the two?
| Given any unital ring $R$ (with multiplicative identity $1_R$, say), there is a unique ring homomorphism $\Bbb Z\to R$ (take $1\mapsto 1_R$ and "fill in the blanks" from there).
This may be an injective map, even if $R$ has zero divisors. For example, take $$R=\Bbb Z[\epsilon]:=\Bbb Z[x]/\langle x^2\rangle.$$ Surjective examples are easy to come by. You are of course correct that such maps cannot be bijective if $R$ has zero divisors.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/374729",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 5,
"answer_id": 0
} |
Does $\frac{x}{x}=1$ when $x=\infty$? This may be a dumb question:
Does $\frac{x}{x}=1$ when $x=\infty$?
I understand why $\frac{x}{x}$ is undefined when $x=0$: This can cause errors if an equation is divided by $x$ without restrictions.
Also, $\frac{\infty}{\infty}$ is undefined. So when I use $\frac{x}{x}=1$ to simplify an equation, can it also lead to errors because $x$ can equal infinity?
Or is $x=\infty$ meaningless?
| You cannot really say $x = \infty$, because $\infty \not\in \mathbb{R}$.
What you do instead is take the limit. A limit does not mean that $x=a$, but that $x$ gets closer and closer to $a$. For example:
$$\lim_{x\to 0^+}\frac{1}{x}=\infty$$ because the divisor gets smaller and smaller:
$$\frac{1}{2}=0.5 \\\frac{1}{1}=1 \\\frac{1}{0.5}=2\\..$$
So this keeps growing and growing, but $\frac{1}{0}$ itself is mathematically nonsense.
So it's quite simple:
$$\lim_{x\to \infty}\frac{x}{x}=\lim_{x\to\infty}\frac{1}{1}=1$$
In that case you simply cancel the $x$. That way the limit and the $\infty$ vanish.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/374786",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "22",
"answer_count": 9,
"answer_id": 4
} |
A Criterion for Surjectivity of Morphisms of Sheaves? Suppose that $f: \mathcal{F} \rightarrow \mathcal{G}$ is a morphism of sheaves on a topological space $X$. Consider the following statements.
1) $f$ is surjective, i.e. $\text{Im } f = \mathcal{G}$.
2) $f_{p}: \mathcal{F}_p \rightarrow \mathcal{G}_p$ is surjective for every $p\in X$.
$(1) \Rightarrow (2)$ is always true. I was wondering if $(2) \Rightarrow (1)$ is also true, and I found Germ and sheaves problem of injectivity and surjectivity, which claims it positively. I double-checked all the details of the arguments made in that thread and found no mistake. I just want a confirmation that $(1) \Leftrightarrow (2)$ is right, so that we have a criterion for surjectivity of morphisms of sheaves.
In case you ask why this fact, which is already established in a previous thread, is repeated here, the following are my reasons:
a) I'm always skeptical, even with myself;
b) I have not seen this statement in popular texts. Maybe it's in EGA, but I can't read French (if it is there, it would be nice if someone pointed it out, please!);
c) In the proof of the fact that $f$ is an isomorphism iff $f_p$ is an isomorphism for all $p \in X$ (Prop. 1.1, p. 63, Hartshorne), to prove that $\mathcal{F}(U) \rightarrow \mathcal{G}(U)$ is surjective for all open $U$, the proof requires injectivity of $f_p$ for all $p \in X$. This is not directly related to our situation, but it provides a caution that surjectivity of $f$ is a little bit subtle.
Thanks!
| This can be found in every complete introduction to sheaves or algebraic geometry and comes down to the fact that the functor $F \mapsto (F_x)_{x \in X}$ is faithful and exact.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/374854",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Use L'Hopital's rule to evaluate $\lim_{x \to 0} \frac{9x(\cos4x-1)}{\sin8x-8x}$ $$\lim_{x \to 0} \frac{9x(\cos4x-1)}{\sin8x-8x}$$
I have done this problem a couple of times and could not get the correct answer. Here is the work I have done so far: http://imgur.com/GDZjX26 . The correct answer was $\frac{27}{32}$; did I differentiate incorrectly somewhere?
| You have to use L'Hôpital's rule three times; we have $$\begin{align} \lim_{x\to 0}\frac{9x(\text{cos}(4x)-1)}{\text{sin}(8x)-8x}&=\lim_{x\to 0}\frac{(9 (\text{cos}(4 x)-1)-36 x \text{sin}(4 x))}{(8 \text{cos}(8 x)-8)}\\
&=\lim_{x\to 0}\frac{-1}{64}\frac{(-72 \text{sin}(4 x)-144 x \text{cos}(4 x))}{\text{sin}(8x)}\\&=\lim_{x\to 0}\frac{-1}{512}\frac{(576 x \text{sin}(4 x)-432 \text{cos}(4 x))}{\text{cos}(8x)}\\&=\frac{432}{512}\\&=\frac{27}{32}.\end{align}$$
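A one-line symbolic confirmation (my addition):

```python
from sympy import symbols, cos, sin, limit

x = symbols('x')
print(limit(9*x*(cos(4*x) - 1) / (sin(8*x) - 8*x), x, 0))  # 27/32
```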
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/374907",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 2,
"answer_id": 0
} |
Calculating a complex derivative of a polynomial What are the rules for derivatives with respect to $z$ and $\bar{z}$ in polynomials?
For instance, is it justified to calculate the partial derivatives of $f(z,\bar{z})=z^3-2z+\bar{z}-(\overline{z-3i})^4$ as if $z$ and $\bar{z}$ were independent? i.e. $f_z=3z^2-2$ and $f_\bar{z}=1-4(\overline{z-3i})^3$
| I would first write $$ f(z,\bar z)=z^3−2z+\bar z−(\bar z+3i)^4 $$
and then treat $z$ and $\bar z$ as independent parameters.
Then I have
$$f_z=3z^2−2$$ $$f_{\bar z}=1−4(\bar z+3i)^3$$
Am I right?
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/374969",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 1
} |
number of terms of a sum required to get a given accuracy How do I find the number of terms of a sum required to achieve a given accuracy? For example, a text says that to get the sum $\zeta(2)=\sum_{n=1}^{\infty}{\frac{1}{n^2}}$ to 6 d.p. of accuracy, I need to add 1000 terms. How do I find this for a general series?
| If you have a sum $S=\sum_{n=1}^{\infty} a(n)$ that you want to estimate with a partial sum, denote by $R$ the residual error
$$
R(N) = S-\sum_{n=1}^N a(n) = \sum_{n=N+1}^\infty a(n)
$$
If all $a(n)$ are nonnegative then $R(N)\ge a(N+1)$, so to estimate within a given accuracy $\epsilon$ you need $N$ at least large enough that $a(N+1)<\epsilon$.
So you can tell that to get $\sum_{n=1}^{\infty}\frac{1}{n^2}$ to six decimal places of accuracy, i.e. within $\frac{1}{1000000}$, you will need at least 1000 terms, since for $n\le 1000$ each new term is at least that size.
Unfortunately this is not sufficient. If $a(n)$ shrinks slowly $R(N)$ may be much bigger than $a(N+1)$. For your example 1000 terms is only accurate to about $1/1000$:
$$
\zeta(2)=1.64493\cdots ~~,~~ \sum_{n=1}^{1000}\frac{1}{n^2}=1.64393\cdots
$$
If you can find a decreasing function $b$ on $\mathbb R$ that is an upper bound $b(n)\ge a(n)$ at the integers, then you can bound $R(N)$ by observing that
$$
\int_{N}^{N+1} b(x)dx > \min_{N\le x\le N+1} b(x) = b(N+1)\ge a(N+1) \\
\int_{N+1}^{N+2} b(x)dx > b(N+2)\ge a(N+2) \\
\cdots \\
\int_{N}^\infty b(x)dx > R(N)
$$
For $\zeta(2)$ choose $b(x)=x^{-2}$, then $R(N)<N^{-1}$, so for 6 decimal places $10^6$ terms would suffice.
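Both claims are easy to confirm numerically (this check is mine):

```python
from math import pi

N = 1000
S = sum(1.0 / n ** 2 for n in range(1, N + 1))
print(pi ** 2 / 6 - S)   # about 1e-3: R(N) behaves like 1/N, not like a(N+1)
```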
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/375029",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Proof of an Integral inequality Let $f\in C^0(\mathbb R_+,\mathbb R)$ and $a\in\mathbb R_+$, $f^*(x)=\dfrac1x\displaystyle\int_0^xf(t)\,dt$ when $x>0$, and $f^*(0)=f(0)$.
Show that
$$
\int_0^a(f^*)^2(t)\,dt\le4\int_0^af^2(t)\,dt$$
I tried integration by parts without success, and Cauchy-Schwarz is not helping here.
Thanks for your help.
| Assume without loss of generality that $f\geqslant0$. Writing $(f^*)^2(t)$ as
$$
(f^*)^2(t)=\frac1{t^2}\int_0^tf(y)\left(\int_0^tf(x)\mathrm dx\right)\mathrm dy=\frac2{t^2}\int_0^tf(y)\left(\int_0^yf(x)\mathrm dx\right)\mathrm dy,
$$
and using Fubini, one sees that
$$
A=\int_0^a(f^*)^2(t)\mathrm dt=2\int_0^af(y)\int_0^yf(x)\mathrm dx\int_y^a\frac{\mathrm dt}{t^2}\mathrm dy,
$$
hence
$$
A\leqslant2\int_0^af(y)\frac1y\int_0^yf(x)\mathrm dx\,\mathrm dy=2\int_0^af(y)f^*(y)\mathrm dy.
$$
Cauchy-Schwarz applied to the RHS yields
$$
A^2\leqslant4\left(\int_0^af(y)f^*(y)\mathrm dy\right)^2\leqslant4\int_0^af^2(y)\mathrm dy\cdot\int_0^a(f^*)^2(y)\mathrm dy=4A\int_0^af^2(y)\mathrm dy,
$$
and the result follows.
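A numerical spot-check of the inequality for one concrete $f$ (the test function and interval are arbitrary choices of mine):

```python
import numpy as np
from scipy.integrate import quad

f = lambda t: np.exp(-t) * np.cos(5 * t)
fstar = lambda x: quad(f, 0, x)[0] / x if x > 0 else f(0)

a = 3.0
lhs = quad(lambda t: fstar(t) ** 2, 0, a)[0]
rhs = 4 * quad(lambda t: f(t) ** 2, 0, a)[0]
print(lhs, rhs, lhs <= rhs)   # the inequality holds
```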
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/375117",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
} |
Automorphism of Graph $G^n$ I am trying to determine the automorphism group of $G^n$, where $G$ is a graph and $G^n = G \Box \ldots \Box G$ ($n$ factors, where $\Box$ is the Cartesian graph product).
I think that $\text{Aut}(G^n)$ is $\text{Aut}(G) \wr S_n$, where $S_n$ is the symmetric group on $\{1,\ldots,n\}$, but I have no idea how to prove it because I am a beginner in group theory. Can you help me or suggest a reference on this subject? Thanks a lot.
| You are right, provided you assume that $G$ is prime relative to the Cartesian product, the automorphism group of the $n$-th Cartesian power of $G$ is the wreath product as you stated.
The standard reference for this is Hammack, Richard; Imrich, Wilfried; Klavžar, Sandi: Handbook of product graphs. (There is an older version of this, written by Imrich and Klavžar alone, which would serve just as well.) Unfortunately there does not seem to be much on the internet on this subject.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/375196",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 1,
"answer_id": 0
} |
Evaluation of a limit with integral Is this limit
$$\lim_{\varepsilon\to 0}\,\,\varepsilon\int_{\mathbb{R}^3}\frac{e^{-\varepsilon|x|}}{|x|^2(1+|x|^2)^s}\,dx$$
with $s>\frac{1}{2}$, zero?
The limit of a product is the product of the limits, so I evaluate
$$\lim_{\varepsilon\to 0}\,\,\int_{\mathbb{R}^3}\frac{e^{-\varepsilon|x|}}{|x|^2(1+|x|^2)^s}\,dx.$$
By the dominated convergence theorem, the last limit equals
$$\int_{\mathbb{R}^3}\frac{dx}{|x|^2(1+|x|^2)^s}=4\pi\int_{0}^{+\infty}\frac{dr}{(1+r^2)^s}=C<\infty$$
(I have used the fact that $s>\frac{1}{2}$.)
Using the product rule for limits, I have the result.
Have I made some mistake?
| What you did is correct. The only thing you have to be careful about is that, in general, the dominated convergence theorem is stated for sequences. Here there is no problem, since the convergence as $\varepsilon\to 0$ is monotone.
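The claim is also easy to check numerically after the radial reduction used in the question (a sketch in Python; $s=0.75>\frac12$ is an arbitrary choice):

```python
# eps * int_{R^3} e^{-eps|x|} / (|x|^2 (1+|x|^2)^s) dx
#   = 4*pi*eps * int_0^inf e^{-eps*r} (1+r^2)^{-s} dr  ->  0  as eps -> 0.
import math
from scipy.integrate import quad

s = 0.75
for eps in (1.0, 0.1, 0.01, 0.001):
    radial = quad(lambda r: math.exp(-eps * r) * (1 + r**2) ** (-s), 0, math.inf)[0]
    print(eps, 4 * math.pi * eps * radial)   # shrinks roughly linearly in eps
```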
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/375259",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
In what sense is the derivative the "best" linear approximation? I am familiar with the definition of the Fréchet derivative and its uniqueness if it exists. I would however like to know how the derivative is the "best" linear approximation. What does this mean formally? The "best" on the entire domain is surely wrong, so it must mean the "best" on a small neighborhood of the point we are differentiating at, where this neighborhood becomes arbitrarily small? Why does the definition of the derivative formalize precisely this? Thank you in advance.
| Say the graph of $L$ is a straight line and at one point $a$ we have $L(a)=f(a)$. And suppose $L$ is the tangent line to the graph of $f$ at $a$. Let $L_1$ be another function passing through $(a,f(a))$ whose graph is a straight line. Then there is some open interval $(a-\varepsilon,a+\varepsilon)$ such that for every $x$ in that interval, the value of $L(x)$ is closer to the value of $f(x)$ than is the value of $L_1(x)$.
Now one might then have another line $L_2$ through that point whose slope is closer to that of the tangent line than is that of $L_1$, such that $L_2(x)$ actually comes closer to $f(x)$ than does $L(x)$, for some $x$ in that interval. But now there is a still smaller interval $(a-\varepsilon_2,a+\varepsilon_2)$, within which $L$ beats $L_2$.
For every line except the tangent line, one can make the interval small enough so that the tangent line beats the other line within that interval. In general there's no one interval that works no matter how close the rival line gets. Rather, one must make the interval small enough in each case separately.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/375317",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "22",
"answer_count": 2,
"answer_id": 0
} |
how to prove: $A=B$ iff $A\bigtriangleup B \subseteq C$ I am given this: $A=B$ iff $A\bigtriangleup B \subseteq C$. And $A\bigtriangleup B :=(A\setminus B)\cup(B\setminus A)$.
I don't know how to prove this and I don't know where to start.
Please give me some guidance.
| Hint: For an arbitrary set $C$, what is the one and only set that is the subset of every set?
So given $\,A\triangle\,B \subseteq C$, where $C$ is any arbitrary set, what does this tell you about the set $A\triangle B$?
And what does that tell you about the relationship between $A$ and $B$?
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/375391",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
How can I show that $(S^{\perp})^{\perp}$ is a finite-dimensional vector space? Let $H$ be a Hilbert space and $S\subseteq H$ a finite subset. How can I show that $(S^{\perp})^{\perp}$
is a finite-dimensional vector space?
| What you want to prove is that, for any $S\subset H$,
$$
S^{\perp\perp}=\overline{\mbox{span}\,S}
$$
One inclusion is easy if you notice that $S^{\perp\perp}$ is a closed subspace that contains $S$. The other inclusion follows from
$$
H=\overline{\mbox{span}\,S}\oplus S^\perp
$$
and the uniqueness of the orthogonal complement. Finally, since $S$ is finite, $\mbox{span}\,S$ is finite-dimensional and hence closed, so $S^{\perp\perp}=\overline{\mbox{span}\,S}=\mbox{span}\,S$ is finite-dimensional.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/375497",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Contour Integral of $\int \frac{a^z}{z^2}\,dz$. My task is to show $$\int_{c-i\infty}^{c+i\infty}\frac{a^z}{z^2}\,dz=\begin{cases}\log a &:a\geq1\\ 0 &: 0<a<1\end{cases},\qquad c>0.$$So, I formed the contour consisting of a semi-circle of radius $R$ and center $c$ with a vertical line passing through $c$. I am having two problems. I can show that along this outer arc, the integral will go to zero if and only if $\log a\geq0$, or equivalently, $a\geq1$; the problem is that the integral of this contour should be $2\pi i\cdot \text{Res}(f;0)$, so for $a\geq1$, I find $$\int f(z)=2\pi i\log a,\qquad a\geq1.$$My second problem occurs when $0<a<1$, I can no longer get the integral along the arc to go to zero as before.
Am I making a mistake in my first calculation, or is the problem asking to show something that is wrong? For the second case, how do I calculate this integral?
| For $a>1$, consider the contour $$(c-iT \to c+iT) \cup (c+iT \to -R +iT) \cup (-R+iT \to -R - iT) \cup (-R-iT \to c-iT),$$where $R>0$.
For $a<1$, consider the contour $$(c-iT \to c+iT) \cup (c+iT \to R +iT) \cup (R+iT \to R - iT) \cup (R-iT \to c-iT),$$where $R>0$.
Then let $R \to \infty$ and then $T \to \infty$.
The main reason for choice of these contours is that
*
*$a^{-R} \to 0$ for $a > 1$, as $R \to \infty$.
*$a^{R} \to 0$ for $a < 1$, as $R \to \infty$.
For $a>1$, the contour encloses a pole of the integrand at $z=0$ and hence this contribution will be reflected in the integral $\left( \text{recall that }a^z = 1 + z \color{red}{\log(a)} + \dfrac{z^2 \log^2(a)}{2!} + \cdots \right)$, whereas for $a<1$, the integrand is analytic in the region enclosed by the contour.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/375578",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Primes in $\mathbb{Z}[i]$ I need a bit of help with this problem
Let $x \in \mathbb{Z}[i]$ and suppose $x$ is prime; that is, $x$ is not a unit and cannot be written as a product of elements of smaller norm. Prove that $N(x)$ is either prime in $\mathbb{Z}$ or else $N(x) = p^2$ for some prime $p \in \mathbb{Z}$.
thanks.
| Hint $\ $ Prime $\rm\:w\mid ww' = p_1^{k_1}\!\cdots p_n^{k_n}\:\Rightarrow\:w\mid p_i\:\Rightarrow\:w'\mid p_i' = p_i\:\Rightarrow\:N(w) = ww'\mid p_i^2$
Here $'$ denotes complex conjugation.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/375698",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Characterization of short exact sequences The following is the first part of Proposition 2.9 in "Introduction to Commutative Algebra" by Atiyah & Macdonald.
Let $A$ be a commutative ring with $1$. Let $$M'
\overset{u}{\longrightarrow}M\overset{v}{\longrightarrow}M''\longrightarrow
0\tag{1} $$ be sequence of $A$-modules and homomorphisms. Then the sequence (1) is exact if and only if for all $A$-modules $N$, the
sequence $$0\longrightarrow \operatorname{Hom}(M'', N)
\overset{\overline{v}}{\longrightarrow}\operatorname{Hom}(M,
N)\overset{\overline{u}}{\longrightarrow}\operatorname{Hom}(M', N)
\tag{2} $$ is exact.
Here, $\overline{v}$ is the map defined by $\overline{v}(f)=f\circ v$ for every $f\in\operatorname{Hom}(M'', N)$ and $\overline{u}$ is defined likewise.
The proof of one direction, namely $(2)\Rightarrow (1)$, is given in the book, and I am having some trouble understanding it. So, assuming (2) is an exact sequence, the authors remark that "since $\overline{v}$ is injective for all $N$, it follows that $v$ is surjective". Could someone explain why this follows?
Given that $\overline{v}$ is injective, we know that whenever $f(v(x))=g(v(x))$ for all $x\in M$, we have $f=g$. I am not sure how we conclude from this surjectivity of $v$.
Thanks!
| I think I have found the solution using Zach L's hint.
Let $N=\operatorname{coker}(v)=M''/\operatorname{Im}(v)$, and let $p\in\operatorname{Hom}(M'', N)$ be the canonical map $p: M''\to M''/\operatorname{Im(v)}=N$. We observe for every $x\in M$, we have
$$p(v(x))=v(x)+\operatorname{Im}(v)=0+\operatorname{Im}(v)=0_{M''/\operatorname{Im(v)}}$$
So $p\circ v=0$. But we know that $\overline{v}$ is injective, that is, $\ker{\overline{v}}=\{0\}$. So $\overline{v}(p)=p\circ v=0$ implies $p=0$ (that is, the identically zero map), from which we get $M''/\operatorname{Im}(v)=0$, that is, $\operatorname{Im}(v)=M''$, proving that $v$ is surjective.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/375763",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 2,
"answer_id": 0
} |
Analytical Solution to a simple l1 norm problem Can we solve this simple optimization problem analytically?
$ \min_{w}\dfrac{1}{2}\left(w-c\right)^{2}+\lambda\left|w\right| $
where c is a scalar and w is the scalar optimization variable.
| Set $f(w)=\frac{1}{2}(w-c)^2+\lambda |w|$, which equals $\frac{1}{2}(w-c)^2\pm\lambda w$ according to the sign of $w$. On the branch $w>0$ we find $f'(w)=w-c+\lambda$, giving the critical value $w=c-\lambda$ (valid only when $c-\lambda>0$); on the branch $w<0$ we find $f'(w)=w-c-\lambda$, giving $w=c+\lambda$ (valid only when $c+\lambda<0$). As $|w|$ gets large, $f(w)$ grows without bound, so the minimum is attained at one of these critical values or at $w=0$. At the critical values we have $f(c+\lambda)=\frac{\lambda^2}{2}+\lambda|c+\lambda|$ and $f(c-\lambda)=\frac{\lambda^2}{2}+\lambda|c-\lambda|$. Which candidate is minimal depends on the signs and relative sizes of $c$ and $\lambda$.
We also need to compare with $f(0)=\frac{c^2}{2}$, since $f$ is nondifferentiable at $w=0$.
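For $\lambda\ge 0$ these cases assemble into the well-known soft-thresholding formula $w^\star=\operatorname{sign}(c)\,\max(|c|-\lambda,0)$. A minimal sketch checking it against a brute-force grid search (the grid and the test pairs are arbitrary choices):

```python
# Soft-thresholding closed form vs. brute-force minimization of 0.5*(w-c)^2 + lam*|w|.
import numpy as np

def soft_threshold(c, lam):
    return np.sign(c) * max(abs(c) - lam, 0.0)

def brute_force(c, lam):
    grid = np.linspace(-10, 10, 2_000_001)
    vals = 0.5 * (grid - c) ** 2 + lam * np.abs(grid)
    return grid[np.argmin(vals)]

for c, lam in [(3.0, 1.0), (-0.5, 1.0), (2.0, 0.3), (-4.0, 0.5)]:
    print(c, lam, soft_threshold(c, lam), brute_force(c, lam))
```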
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/375838",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
} |
Let $X$ and $Y$ be 2 disjoint connected subsets of $\mathbb{R}^{2}$, can $X \cup Y =\mathbb{R}^{2}$? Let $X$ and $Y$ be 2 disjoint connected subsets of $\mathbb{R}^{2}$. Then can $$X \cup Y =\mathbb{R}^{2}$$
I think this cannot be true, but I don't know of a formal proof. Any help would be nice.
| Consider $X:=\{(x,0) : x>0\}$ and $Y:=\mathbb{R}^2\setminus X$. Both are connected and disjoint, and $X\cup Y=\mathbb{R}^2$, so the answer is yes.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/375920",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Rank of the difference of matrices Let $A$ and $B$ be to $n \times n$ matrices. My question is:
Is $\operatorname{rank}(A-B) \geq \operatorname{rank}(A) - \operatorname{rank}(B)$ true in general? Or maybe under certain assumptions?
| Set $X=A-B$ and $Y=B$. You are asking whether $\operatorname{rank}(X) + \operatorname{rank}(Y) \ge \operatorname{rank}(X+Y)$. This is true in general. Let $W=\operatorname{Im}(X)\cap\operatorname{Im}(Y)$. Let $U$ be a complementary subspace of $W$ in $\operatorname{Im}(X)$ and $V$ be a complementary subspace of $W$ in $\operatorname{Im}(Y)$. Then we have $\operatorname{Im}(X)=U+W$ and $\operatorname{Im}(Y)=V+W$ by definition and also $\operatorname{Im}(X+Y)\subseteq U+V+W$. Therefore
$$\operatorname{rank}(X) + \operatorname{rank}(Y) = \dim U + \dim V + 2\dim W\ge \dim U+\dim V+\dim W \ge\operatorname{rank}(X+Y).$$
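An empirical check on random low-rank integer matrices (a sketch in Python; the sizes and ranks are arbitrary):

```python
# Empirical check of rank(A - B) >= rank(A) - rank(B).
import numpy as np

rng = np.random.default_rng(0)
for _ in range(1000):
    A = rng.integers(-2, 3, (6, 2)) @ rng.integers(-2, 3, (2, 6))   # rank(A) <= 2
    B = rng.integers(-2, 3, (6, 3)) @ rng.integers(-2, 3, (3, 6))   # rank(B) <= 3
    rA, rB = np.linalg.matrix_rank(A), np.linalg.matrix_rank(B)
    assert np.linalg.matrix_rank(A - B) >= rA - rB
print("no counterexample found")
```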
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/375982",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11",
"answer_count": 2,
"answer_id": 1
} |
Error estimate, asymptotic error and the Peano kernel error formula Find the error estimate by approximating $f(x)$ and derive a numerical integration formula for $\int_0^l f(x) \,dx$ based on approximating $f(x)$ by the straight line joining $(x_0, f(x_0))$ and $(x_1, f(x_1))$, where the two points $x_0$ and $x_1 = l - x_0$ are chosen so that $x_0, x_1 \in (0, l)$, $x_0 < x_1$ and $\int_0^l (x - x_0)(x - x_1)\, dx = 0$.
Derive the error estimate, asymptotic error and the Peano kernel error formula for the composite rule for $\int_a^b f(x) \,dx$.
Use the asymptotic error estimate to improve the integration formula. Find the values of $x_0$, $x_1$.
I know the Peano kernel formula will take the form $E_n(f)=\frac{1}{2}\int_a^b K(t)\, f''(t)\,dt$ with $K(t)$ being the Peano kernel, but I am having a tough time getting started on the question. Any help will be greatly appreciated. Thanks a lot!
| For this I think you can use the trapezoidal rule. You can approximate $f(x)$ by the straight line joining $(a,f(a))$ and $(b,f(b))$. Then by integrating the formula for this straight line, we get the approximation $$I_1(f)=\frac{(b-a)}{2}[f(a)+f(b)].$$ To get the error formula we use $$f(x)-\frac{(b-x)f(a)+(x-a)f(b)}{b-a}=(x-a)(x-b)f[a,b,x]$$
I am not sure if this is absolutely correct, maybe someone can verify my answer?
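For the composite trapezoidal rule itself, the standard error term is $-\frac{(b-a)h^2}{12}f''(\xi)$, so the error should shrink like $h^2$. A minimal sketch illustrating this (the integrand $\sin x$ on $[0,1]$ is an arbitrary test case):

```python
# Composite trapezoidal rule; the error should roughly quarter each time n doubles.
import math

def trapezoid(f, a, b, n):
    h = (b - a) / n
    return h * (0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n)))

exact = 1.0 - math.cos(1.0)                  # int_0^1 sin(x) dx
for n in (4, 8, 16, 32, 64):
    print(n, trapezoid(math.sin, 0.0, 1.0, n) - exact)
```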
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/376046",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Showing that if $\lim\limits_{x \to a} f'(x)=A$, then $f'(a)$ exists and equals $A$
Let $f : [a, b] \to \mathbb{R}$ be continuous on $[a, b]$ and differentiable in $(a, b)$. Show that if $\lim\limits_{x \to a} f'(x)=A$, then $f'(a)$ exists and equals $A$.
I am completely stuck on it. Can somebody help me please? Thanks for your time.
| Let $\epsilon>0$. We want to find a $\delta>0$ such that if $0\lt x-a\lt\delta$ then $\left|\dfrac{f(x)-f(a)}{x-a}-A\right|\lt\epsilon$. If $x\gt a$, then the MVT tells us that $\dfrac{f(x)-f(a)}{x-a}=f'(c)$ for some $c\in(a,x)$.
Now use that $\displaystyle\lim_{c\to a^+}f'(c)=A$ to find that $\delta$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/376130",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
} |
How to find a power series representation for a divergent product? Euler used the identity $$ \frac{ \sin(x) }{x} = \prod_{n=1}^{\infty} \left(1 - \frac{x^2}{n^2 \pi^2 } \right) = \sum_{n=0}^{\infty} \frac{ (-1)^n }{(2n + 1)! } x^{2n} $$ to solve the Basel problem. The product is obtained by noting that the sine function is 'just' an infinite polynomial, which can be rewritten as the product of its zeroes. The sum is found by writing down the taylor series expansion of the sine function and dividing by $x$.
Now, I am interested in finding the sum representation of the following product: $$ \prod_{n=1}^{\infty} \left(1 - \frac{x}{n \pi} \right) ,$$ which is divergent (see
this article).
The infinite sum representation of this product is not as easily found (at least not by me) because it does not have an obvious formal representation like $\frac{\sin(x)}{x}$ above.
Questions: what is the infinite sum representation of the second product I mentioned? How does one obtain this sum? And is there any 'formal' represenation for these formulae (like $\frac{\sin(x)}{x}$ above).
| I never gave the full answer :)
$$\prod_{n=1}^{\infty} (1-x/n)$$
When analysing a product it's often easiest to consider the form $\prod_{n=1}^{\infty} (1+f(n))$ together with the power sums $\sum_{n=1}^{\infty}f(n)^m=G(m)$. Here $f(n)=-x/n$, so $G(m)=(-x)^m\zeta(m)$.
$$\prod_{n=1}^{\infty} (1-x/n)=e^{\sum_{m=1}^{\infty}\frac{(-1)^{m+1}G(m)}{m}}$$
Because the $\zeta(1)$ term is the only part that sends the product to $0$ (as $e^{-\infty}$), the regularized product tends to $0$ exactly as $\zeta(1)$ goes to infinity. However this is easily fixable:
$$\prod_{n=1}^{\infty} (1-x/n)e^{x/n}=e^{\sum_{m=2}^{\infty}\frac{(-1)^{m+1}G(m)}{m}}=e^{\sum_{m=1}^{\infty}\frac{- x^{m+1}\zeta(m+1)}{m+1}},$$ which does converge. To derive the better-known representation,
$$\sum_{n=1}^{\infty}-\frac{x^{n+1}\zeta(n+1)}{n+1}=\int_{0}^{-x}\sum_{n=1}^{\infty}(-1)^nz^n\zeta(n+1)dz=\int_{0}^{-x}-H_zdz=-\ln((-x)!)+x\gamma$$
$$\prod_{n=1}^{\infty} (1-x/n)e^{x/n}=\frac{e^{x\gamma}}{(-x)!}$$
And now, combining this with the answer I posted six years ago and using the refined Stirling numbers:
$$\prod_{n=1}^{\infty} (1-x/n)e^{x/n}=\frac{e^{x\gamma}}{(-x)!}=\bigg(1-x^2\zeta(2)/2-x^3\, 2\zeta(3)/6 +x^4 \big(3\zeta(2)^2-6\zeta(4)\big)/24+\cdots \bigg)$$
So we can also say, as $p$ goes to infinity:
$$p^x\prod_{n=1}^{p} (1-x/n)=\frac{1}{(-x)!}$$
We can rewrite our $G(m)$ with $G(1)=-x\gamma$ and use this in the formula given above to write a polynomial representation, which is easier than multiplying our previous polynomial by $e^{-x\gamma}$.
$$p^x\prod_{n=1}^{p} (1-x/n)= \bigg(1-x\gamma+x^2 \frac{\gamma^2-\zeta(2)}{2}+x^3 \frac{-\gamma^3+3\gamma\zeta(2)-2\zeta(3)}{3!}+\cdots\bigg)=\frac{1}{(-x)!}$$
I find it hard to explain these refined Stirling numbers clearly, but if you are willing to put in a bit of time, consider the following ways to represent the product representation found above. It's going to be vague, but I can clarify it if you want.
The "nice" form lies in knowing $G(m)$ for all $m$. There are a lot of ways to represent the refined Stirling numbers, especially within this context; a lot of it is related to partitions and ways to "write" a number. E.g. for the $x^4$ term, $4=1+1+1+1$ gives $G(1)^4\cdot 4!/(4!\cdot 1^4)$, and the next term, from $4=1+1+2$, has $G(1)^2 G(2)\cdot 4!/(1^2\cdot 2^1\cdot 2)$ as coefficient. So we divide by $a!\,s^a$ whenever a factor $G(s)^a$ appears. Another, more intuitive, way is to see it as the unique combinations of all outcomes. Yet another way is to write out these unique combinations using sums of sums (particularly cool, and very easy to picture). You can also obtain it algebraically by writing out the exponentials, but that's a real hassle. And if you want one more way to represent these refined Stirling numbers, you can construct them from previously found Stirling numbers and binomials, which is the most efficient.
I always wondered why there was no wikipedia page about them.
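A numeric check of the product formula (a sketch in Python; the point $x=0.3$ and the cutoff $N$ are arbitrary choices):

```python
# Check  prod_{n<=N} (1 - x/n) e^{x/n}  ~  e^{gamma*x} / Gamma(1 - x)  =  e^{gamma*x} / (-x)!.
import math
import numpy as np

x, N = 0.3, 200_000
prod = 1.0
for n in range(1, N + 1):
    prod *= (1.0 - x / n) * math.exp(x / n)

target = math.exp(np.euler_gamma * x) / math.gamma(1.0 - x)
print(prod, target)   # the two should agree to several decimal places
```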
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/376245",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 3,
"answer_id": 2
} |
Definition of minimal and characteristic polynomials I have defined the characteristic and minimal polynomials as follows, but have been told this is not strictly correct, since $\det(-I)$ is not necessarily $1$ and so my formulae don't match, e.g. for $A=0$. How can I correct this?
Given an $n$-by-$n$ matrix $A$, the characteristic polynomial is defined to be the determinant $\text{det}(A-Ix)=|A-Ix|$, where $I$ is the identity matrix. The characteristic polynomial will be denoted by
\begin{equation}
\text{char}(x)=(x-x_1)^{M_1}(x-x_2)^{M_2}...(x-x_s)^{M_s}.\nonumber
\end{equation}
Also, we will denote the minimal polynomial, the polynomial of least degree such that $\psi(A)=\textbf{0}$, by
\begin{equation}
\psi(x)=(x-x_1)^{m_1}(x-x_2)^{m_2}...(x-x_s)^{m_s}\nonumber
\end{equation}
where $m_{1}\le M_{1},m_{2}\le M_{2},...,m_{s}\le M_{s}$ and $\textbf{0}$ is the zero matrix.
| There are two (nearly identical) ways to define the characteristic polynomial of a square $n\times n$ matrix $A$. One can use either
*
*$\det(A-I x)$ or
*$\det(Ix-A)$
The two are equal when $n$ is even, and differ by a sign when $n$ is odd, so in all cases, they have the same roots. The roots are the most important attribute of the characteristic polynomial, so it's not that important which definition you choose. The first definition has the advantage that its constant term is always $\det(A)$, while the second is always monic (leading coefficient $1$).
With the minimal polynomial however, it is conventional to define it as the monic polynomial of smallest degree which is satisfied by $A$.
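To see the two conventions side by side (a sketch in Python; it assumes sympy, whose charpoly computes the monic $\det(xI-A)$, and the $3\times 3$ matrix is an arbitrary example):

```python
# det(x*I - A) vs det(A - x*I): they differ by the factor (-1)^n.
import sympy as sp

x = sp.symbols('x')
A = sp.Matrix([[2, 1, 0], [0, 3, 0], [1, 0, 1]])    # arbitrary 3x3 matrix, n = 3
n = A.rows

p_monic = A.charpoly(x).as_expr()                    # det(x*I - A), monic
p_other = sp.expand((A - x * sp.eye(n)).det())       # det(A - x*I)
print(sp.expand(p_monic - (-1) ** n * p_other))      # 0: same polynomial up to sign
```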
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/376305",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Does taking a limit mean replacing $x$ with a number? I don't understand limits very well. For example, when I see $\lim_{x \to -3}$, I always just put $-3$ everywhere I see $x$. I feel like I'm doing something wrong, but it seems to give the correct answer all the time.
| Substitution "works" many times; it works but not always: $$\lim_{x\to a} f(x) = f(a)\quad \text{${\bf only}$ when $f(x)$ is defined and continuous at $a$}$$
and this is why understanding the limit of a function as the limiting value (or lack of one) when $x$ is approaching a particular value: getting very very near that value, is crucial. That is, $$\lim_{x \to a} f(x) \not\equiv f(a) \qquad\qquad\tag{"$\not \equiv$"$\;$ here meaning "not identically"}$$
E.g., your "method" won't work for $\;\;\lim_{x\to -3} \dfrac{x^2 - 9}{x + 3}\;\;$ straight off.
Immediate substitution $f(-3)$ evaluates to $\dfrac 00$ which is indeterminate: More work is required. Other examples are given in the comments.
When we seek to find the limit of a function $f(x)$ as $x \to a$, we are seeking the "limiting value" of $f(x)$ as the distance between $x$ and $a$ grows increasingly small. That value is not necessarily the value $f(a)$.
And understanding the "limit" as the "limiting value" or lack there of, of a function is crucial to understanding, e.g. that $\lim_{x \to +\infty} f(x)$ requires examining the behavior of $f(x)$ as $x$ gets arbitrarily (increasingly) large, where evaluating $f(\infty)$ to find the limit makes no sense and has no meaning.
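In sympy the difference between substituting and taking the limit is easy to see (a minimal sketch):

```python
# Substitution breaks down at the indeterminate form 0/0; the limit does not.
import sympy as sp

x = sp.symbols('x')
f = (x**2 - 9) / (x + 3)
print(f.subs(x, -3))        # nan: plain substitution gives 0/0
print(sp.limit(f, x, -3))   # -6: the limiting value as x -> -3
```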
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/376383",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 4,
"answer_id": 1
} |
Why is $PGL_2(5)\cong S_5$? Why is $PGL_2(5)\cong S_5$? And is there a set of 5 elements on which $PGL_2(5)$ acts?
| As David Speyer explains,
there are 15 involutions of $P^1(\mathbb F_5)$ without fixed points (one might call them «synthemes»). Of these 15 involutions 10 («skew crosses») lie in $PGL_2(\mathbb F_5)$ and 5 («true crosses») don't. The action of $PGL_2(\mathbb F_5)$ on the latter ones gives the isomorphism $PGL_2(\mathbb F_5)\to S_5$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/376464",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 1,
"answer_id": 0
} |
Monotonic Lattice Paths and Catalan numbers Can someone give me a cleaner and better-explained proof than Wikipedia's that the number of monotonic paths in an $n\times n$ lattice that do not cross the diagonal is given by ${2n\choose n} - {2n\choose n+1}$?
I do not understand how they get ${2n\choose n+1}$, and I do not see how this is the number of monotonic paths that cross the diagonal. Please be explicit about the $n+1$ term; I think this is the hardest part for me to understand.
I understand the bijections between proper parenthesizations and so forth...
Thanks!
| There are $\binom{2n}{n+1}$ monotonic paths from $\langle 0,0\rangle$ to $\langle n-1,n+1\rangle$: such a path must contain exactly $(n-1)+(n+1)=2n$ steps, any $n+1$ of those steps can be vertical, and the path is completely determined once you know which $n+1$ of the $2n$ steps are vertical. Every monotonic path from $\langle 0,0\rangle$ to $\langle n-1,n+1\rangle$ necessarily rises above the diagonal, since it starts on the diagonal and finishes above it. At some point, therefore, it must go from a point $\langle m,m\rangle$ on the diagonal to the point $\langle m,m+1\rangle$ just above the diagonal. After the point $\langle m,m+1\rangle$ the path must still take $(n+1)-(m+1)=n-m$ vertical steps and $(n-1)-m=n-m-1$ horizontal steps.
If you flip that part of the path across the axis $y=x+1$, each vertical step turns into a horizontal step and vice versa, so you’re now taking $n-m$ horizontal and $n-m-1$ vertical steps. You’re starting at $\langle m,m+1\rangle$, so you end up at $\langle m+(n-m),(m+1)+(n-m-1)\rangle=\langle n,n\rangle$. Thus, each monotonic path from $\langle 0,0\rangle$ to $\langle n-1,n+1\rangle$ can be converted by this flipping procedure into a monotonic path from $\langle 0,0\rangle$ to $\langle n,n\rangle$ that has vertical step from some $\langle m,m\rangle$ on the diagonal to the point $\langle m,m+1\rangle$ immediately above it.
Conversely, every monotonic path from $\langle 0,0\rangle$ to $\langle n,n\rangle$ that rises above the diagonal must have such a vertical step in it, and reversing the flip produces a monotonic path from $\langle 0,0\rangle$ to $\langle n-1,n+1\rangle$. Thus, this flipping procedure yields a bijection between all monotonic paths from $\langle 0,0\rangle$ to $\langle n-1,n+1\rangle$, on the one hand, and all monotonic paths from $\langle 0,0\rangle$ to $\langle n,n\rangle$ that rise above the diagonal, on the other. As we saw in the first paragraph, there are $\binom{2n}{n+1}$ of the former, so there are also $\binom{2n}{n+1}$ of the latter. The total number of monotonic paths from $\langle 0,0\rangle$ to $\langle n,n\rangle$, on the other hand, is $\binom{2n}n$: each path has $2n$ steps, any $n$ of them can be vertical, and the path is completely determined once we know which $n$ are vertical.
The difference $\binom{2n}n-\binom{2n}{n+1}$ is therefore simply the total number of monotonic paths from $\langle 0,0\rangle$ to $\langle n,n\rangle$ minus the number that rise above the diagonal, i.e., the number that do not rise above the diagonal — which is precisely what we wanted.
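A brute-force count confirms the formula for small $n$ (a sketch in Python):

```python
# Count monotonic paths from (0,0) to (n,n) that never rise above the diagonal,
# and compare with C(2n, n) - C(2n, n+1).
from math import comb
from itertools import combinations

def count_good_paths(n):
    total = 0
    for ups in combinations(range(2 * n), n):    # which of the 2n steps are vertical
        up_set, x, y, ok = set(ups), 0, 0, True
        for i in range(2 * n):
            if i in up_set:
                y += 1
            else:
                x += 1
            if y > x:                            # the path rose above the diagonal
                ok = False
                break
        total += ok
    return total

for n in range(1, 6):
    print(n, count_good_paths(n), comb(2 * n, n) - comb(2 * n, n + 1))
```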
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/376522",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
How to find the order of these groups? I don't know why but I just cannot see how to find the orders of these groups:
$YXY^{-1}=X^2$
$YXY^{-1}=X^4$
$YXY^{-1}=X^3$
With the property that $X^5 = 1$ and $Y^4 =1$
How would I go about finding the orders? The question asks me to determine which of these groups are isomorphic.
Thanks.
| Hint: You should treat those relations as a rule on how to commute $Y$ past $X$, for example the first can be written:
$$YX = X^2Y$$
Then you know that every element can be written in the form $X^nY^m$ for some $n$ and $m$. Use the orders of $X$ and $Y$ to figure out how many elements there are of this form.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/376593",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Path independence of an integral? I'm studying for a test (that's why I've been asking so much today), and one of the questions asks whether an integral is path-independent and then to evaluate it.
I was reading online about path independence and it's all about vector fields, and I'm very, very lost.
This is the integral
$$\int_{0}^{i} \frac{dz}{1-z^2}$$
So should I find another equation that gives the same result with those boundaries? I honestly just don't know how to approach the problem, any links or topics to read on would be appreciated as well.
Thank you!
| Seeing the other answers, the following perhaps doesn't capture the OP's intention, but here it is anyway.
Putting $\,z=x+iy\implies\,z^2=x^2-y^2+2xyi\,$ , so along the $\,y$-axis from zero to $\,i\,$ we get:
$$x=0\;,\;\;0\le y\le 1\implies \frac1{1-z^2}=\frac1{1+y^2}\;,\;\;dx=0 \;,\;\;dz=i\,dy\;,\;\ \;\text{so}$$
$$\int\limits_0^i\frac{dz}{1-z^2}=\left.\int\limits_0^1\frac{i\,dy}{1+y^2}= i\arctan y\right|_0^1=\frac\pi 4i$$
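The same value comes out of a direct numeric integration along the segment (a sketch in Python):

```python
# Integrate dz/(1 - z^2) along z = i*t, t in [0, 1]; expect i*pi/4.
import math
from scipy.integrate import quad

def g(t):
    z = 1j * t
    return 1j / (1.0 - z**2)        # integrand times dz/dt = i

re = quad(lambda t: g(t).real, 0.0, 1.0)[0]
im = quad(lambda t: g(t).imag, 0.0, 1.0)[0]
print(complex(re, im), complex(0.0, math.pi / 4))
```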
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/376654",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 1
} |
Surface area of a sphere above a cone I need to find the surface area of the sphere $x^2+y^2+z^2=4$ above the cone $z = \sqrt{x^2+y^2}$, but I'm not sure how. I know that the surface area of a surface can be calculated with the equation $A=\iint_D{\sqrt{f_x^2+f_y^2+1}}\,dA$, but I'm not sure how to take into account the constraint that it must lie above the cone. How is this done?
| Hint: Use spherical coordinates. On the sphere of radius $2$, $dA = 4\sin\theta\, d\theta\, d\phi$. The cone $z=\sqrt{x^2+y^2}$ is the surface $\theta=\pi/4$, so the part of the sphere above the cone corresponds to $0\le\theta\le\pi/4$ and $0\le\phi\le 2\pi$. The surface area becomes $\iint_D dA$.
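Carrying the hint out numerically gives the closed form $8\pi\left(1-\frac{\sqrt2}{2}\right)$ (a sketch in Python):

```python
# Area of the cap: integrate 4*sin(theta) over theta in [0, pi/4], phi in [0, 2*pi].
import math
from scipy.integrate import quad

area = quad(lambda theta: 2.0 * math.pi * 4.0 * math.sin(theta), 0.0, math.pi / 4.0)[0]
print(area, 8.0 * math.pi * (1.0 - math.sqrt(2.0) / 2.0))   # both ~ 7.3594
```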
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/376740",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 1
} |
Find the derivative of $y$ with respect to $x$, $t$, or $\theta$, as appropriate $$y=\int_{\sqrt{x}}^{\sqrt{4x}}\ln(t^2)\,dt$$
I'm having trouble getting started with this, thanks for any help.
| First Step
First, we need to recognize to which variables you are supposed to differentiate with respect. The important thing to realize here is that if you perform a definite integration with respect to one variable, that variable "goes away" after the computation. Symbolically:
$$\frac{d}{dt}\int_a^b f(t)\,dt = 0$$
Why? Because the result of a definite integral is a constant, and the derivative of a constant is zero! :)
So, it isn't appropriate here to differentiate with respect to $t$. With respect to $\theta$ doesn't make much sense, either--that's not even in the problem! So, we are looking at differentiating with respect to $x$.
Second Step
We now use a very fun theorem: the fundamental theorem of calculus! (bad pun, sorry)
The relevant part states that:
$$\frac{d}{dx}\int_a^x f(t)\,dt = f(x)$$
We now make your integral look like this:
$$\begin{align}
y &= \int_{\sqrt{x}}^{\sqrt{4x}}\ln(t^2)\,dt\\
& = \int_{\sqrt{x}}^a\ln(t^2)\,dt + \int_{a}^{\sqrt{4x}}\ln(t^2)\,dt\\
& = -\int_{a}^{\sqrt{x}}\ln(t^2)\,dt + \int_{a}^{\sqrt{4x}}\ln(t^2)\,dt\\
\end{align}$$
Can you now find $\frac{dy}{dx}$? (Hint: Don't forget the chain rule!)
If you still want some more guidance, just leave a comment.
EDIT:
Note that $y$ is a sum of two integral functions, so you can differentiate both independently. I'll do one, and leave the other for you:
$$\begin{align}
\frac{d}{dx}\left[\int_{a}^{\sqrt{4x}}\ln(t^2)\,dt\right] &= \left[\ln\left(\sqrt{4x}^2\right)\right]\cdot\frac{d}{dx}\left(\,\sqrt{4x}\right)\\
&=\left[\ln\left(4|x|\right)\right]\left(2\cdot\tfrac12\, x^{-1/2}\right)\\
&=x^{-1/2}\ln\left(4|x|\right)
\end{align}$$
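A symbolic check of the full derivative (a sketch in Python with sympy):

```python
# dy/dx for y = int_{sqrt(x)}^{sqrt(4x)} ln(t^2) dt, combining both pieces.
import sympy as sp

x, t = sp.symbols('x t', positive=True)
y = sp.integrate(sp.log(t**2), (t, sp.sqrt(x), sp.sqrt(4 * x)))
print(sp.simplify(sp.diff(y, x)))    # equivalent to log(16*x) / (2*sqrt(x))
```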
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/376810",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
How do I show that $6(4^n-1)$ is a multiple of $9$ for all $n\in \mathbb{N}$? How do I show that $6(4^n-1)$ is a multiple of $9$ for all $n\in \mathbb{N}$? I'm not so keen on divisibility tricks. Any help is appreciated.
| You want it to be a multiple of $9$, so it suffices to extract a pair of $3$'s from the product. The $6$ contributes one factor of $3$, and $4^n-1\equiv 1^n-1\equiv 0\pmod 3$ contributes the other, so you're done.
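A quick machine check for small $n$ (a one-line sketch in Python):

```python
# Verify 9 | 6*(4**n - 1) for n = 1..999.
print(all(6 * (4**n - 1) % 9 == 0 for n in range(1, 1000)))   # True
```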
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/376861",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 1
} |
Probability of Multiple Choice first attempt and second attempt A multiple choice question has 5 available options, only 1 of which is correct. Students are allowed 2 attempts at the answer. A student who does not know the answer decides to guess at random, as follows:
On the first attempt, he guesses at random among the 5 options. If his guess is right, he stops. If his guess is wrong, then on the second attempt he guesses at random from among the 4 remaining options. Find the chance that the student gets the right answer at his first attempt. Then, find the chance that the student has to make two attempts and gets the right answer the second time. Finally, find the chance that the student gets the right answer.
$P(k)=\binom{n}{k}\times p^k\times(1-p)^{n-k}$.
$P(\text{First attempt to get right answer})=\binom{5}{1}\times \frac{1}{5} \times \left(\frac{2}{4}\right)^4=?$
$P(\text{Second attempt to get right answer})=\binom{5}{2}\times \frac{1}{5}\times \left(\frac{2}{4}\right)^3=?$
$P(\text{The student gets it right})=\binom{5}{1}\times \frac{1}{5} \times \left(\frac{2}{4}\right)^4=?$
| First try to find the sample space $S$ for the question. There are five equally likely choices, so $S=\{c_1,\cdots, c_5\}$ and the event $E \subset S$ is choosing the correct answer, and there is only one correct answer i.e. $|E|=1.$ Therefore the probability is $\frac{|E|}{|S|}=\frac 15.$
Do the same to determine the sample space for the second question and find the relevant event. What is the answer, then?
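The process is also easy to simulate; the relative frequencies should come out near $\frac15$, $\frac15$, and $\frac25$ (a Monte Carlo sketch in Python, with option $0$ standing in for the correct answer):

```python
# Simulate the two-attempt guessing scheme described in the question.
import random

trials, first, second = 100_000, 0, 0
for _ in range(trials):
    options = list(range(5))              # option 0 plays the correct answer
    g1 = random.choice(options)
    if g1 == 0:
        first += 1
    else:
        options.remove(g1)
        if random.choice(options) == 0:
            second += 1
print(first / trials, second / trials, (first + second) / trials)
```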
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/376934",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
What does it mean for a set to exist? Is there a precise meaning of the word 'exist'? What does it mean for a set to exist?
And what does it mean for a set to 'not exist'?
And what is a set? What is the precise definition of a set?
| In mathematics, you do not simply say, for example, that set $S$ exists. You would add some qualifier, e.g. there exists a set $S$ with some property $P$ common to all its elements.
Likewise, for the non-existence of a set. You wouldn't simply say that set $S$ does not exist. You would also add a qualifier here, e.g. there does not exist a set $S$ with some property $P$ common to all its elements.
How do you establish that a set exists? It depends on your set theory. In ZFC, for example, you are given only the empty set to start, and rules to construct other sets from it, sets that are also presumed to exist.
In other set theories, you are not even given the empty set. Then the existence of every set is provisional on the existence of other set(s). You cannot then actually prove the existence of any set.
To prove the non-existence of a set $S$ with property $P$ common to all its elements, you would first postulate its existence, then derive a contradiction, concluding that no such set can exist.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/376989",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "29",
"answer_count": 5,
"answer_id": 1
} |
Identity Law - Set Theory I'm trying to wrap my head around the Identity Law, but I'm having some trouble.
My lecture slides say:
$$
A \cup \varnothing = A
$$
I can understand this one. $A$ union nothing is still $A$. In the same way that $1 + 0$ is still $1$.
However, it goes on to say:
$$
A \cup U = U
$$
I don't see how this is possible. How can $A\cup U = U$?
http://upload.wikimedia.org/wikipedia/commons/thumb/3/30/Venn0111.svg/150px-Venn0111.svg.png
If this image represents the result of $A\cup U$, where $A$ is the left circle, $U$ is the right circle, how can the UNION of both sets EQUAL the RIGHT set? I don't see how that is possible?
Can someone please explain to me how this is possible? Thanks.
$$
A \cap\varnothing=\varnothing,\\
A \cap U = A
$$
Some further examples from the slides. I really don't understand these, either. It must be something simple, but I'm just not seeing it.
| Well $U$ is "the universe of discourse" -- it contains everything we'd like to talk about. In particular, all elements of $A$ are also in $U$.
In the "circles" representation, you can think of $U$ as the paper on which we draw circles to indicate sets like $A$.
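The laws are easy to play with concretely (a sketch in Python, using a small finite set as an arbitrary stand-in for the universe $U$):

```python
# Identity laws with finite sets; here A is a subset of the toy universe U.
U = set(range(10))
A = {1, 3, 5}
print(A | set() == A)        # A ∪ ∅ = A
print(A | U == U)            # A ∪ U = U : A contributes nothing new to U
print(A & set() == set())    # A ∩ ∅ = ∅
print(A & U == A)            # A ∩ U = A
```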
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/377056",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 2
} |
Dissecting a proof of the $\Delta$-system lemma (part II) This is part II of this question I asked yesterday. In the link you can find a proof of the $\Delta$-system lemma. In case 1 it uses the axiom of choice (correct me if I'm wrong). Now one can also prove the $\Delta$-system lemma differently, for example as follows:
I have two questions about it:
1) It seems to me that by using the ordinals to index the family of sets we have eliminated the axiom of choice from the proof. Have we or did we just use it a bit earlier in the proof where we index the family $B$?
2) But, more importantly, why is it ok to assume that $b \in B$ are subsets of $\omega_1$? In the theorem there is no such restriction. Can one just "wlog" this? The answer is probably yes since otherwise the proof would be wrong but I don't see how.
Thanks for your help!
| You need some AC to prove the statement just for a family of pairs of $\omega_1$.
If $\omega_1$ is the union of a countable family $\{B_n:n \in \omega \}$ of countable sets (which is consistent with ZF!), then the family $\{\{n, \beta\}: n<\omega, \beta \in B_n-\omega \}$ does not contain an uncountable $\Delta$-system.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/377113",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
} |
Pascal's triangle and combinatorial proofs This recent question got me thinking: if a textbook (or an exam) tells a student to give a combinatorial proof of something involving (sums of) binomial coefficients, would it be enough to show that by Pascal's triangle these things do add up, or would you fail an answer like that? What if we didn't call it Pascal's triangle but "the number of paths that stop at some point at step $i$ during a one-dimensional random walk"?
| I would argue that a combinatorial proof is something more substantial than pointing out a pattern in a picture! If we are at the level of "combinatorics" then we are also at the level of proofs and as such, the phrase "combinatorial proof" asks for a proof but in the combinatorial (or counting) sense.
A proof by example, i.e. "this pattern holds in the small portion of Pascal's Triangle that I have drawn", is not a proof period, combinatorially or otherwise. The general case of such a property could be verified combinatorially, but simply observing it would not constitute a combinatorial proof in itself. At least that's the way I see it.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/377190",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
} |