Q | A | meta |
---|---|---|
What is $dx$ in integration? When I was at school and learning integration in maths class at A Level my teacher wrote things like this on the board.
$$\int f(x)\, dx$$
When he came to explain the meaning of the $dx$, he told us "think of it as a full stop". For whatever reason I did not raise my hand and question him about it. But I have always shaken my head at such a poor explanation for putting a $dx$ at the end of integration equations such as these. To this day I do not know the purpose of the $dx$. Can someone explain this to me without resorting to grammatical metaphors?
| I once went at some length illustrating the point that for the purpose of evaluating integrals it is useful to look at $d$ as a linear operator.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/200393",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "183",
"answer_count": 12,
"answer_id": 2
} |
Extensions of Bertrand's Postulate Two questions came to mind when I was reading the proof for Bertrand's Postulate (there's always a prime between $n$ and $2n$):
(1) Can we change the proof somehow to show that: $\forall x > x_{0}$, there exists a prime $p$ $\in [x, ax]$, for some $a \in (1, 2)$?
(2) Suppose the (1) is true, what is the smallest value of $x_{0}$?
I'm not sure how to prove either of them, any input would be greatly appreciated! And correct me if any of the above statement is wrong. Thank you!
| I think you would enjoy the page PRIME GAPS.
My own version of the conjecture of Shanks, actually both a little stronger and a little weaker, is $$ p_{n+1} < p_n + 3 \,\left( \log p_n \right)^2, $$
for all primes $p_n \geq 2.$ This is true as high as has been checked.
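The claim "true as high as has been checked" is easy to probe on a small range. Here is a quick sieve-based check (my own, not from the sources cited here) of the displayed bound for all primes below $10^5$:

```python
import math

# Sieve of Eratosthenes up to N, then verify the conjectured bound
# p_{n+1} < p_n + 3 (log p_n)^2 for every consecutive prime pair found.
N = 100_000
is_prime = bytearray([1]) * N
is_prime[0] = is_prime[1] = 0
for i in range(2, int(N**0.5) + 1):
    if is_prime[i]:
        is_prime[i*i::i] = bytearray(len(is_prime[i*i::i]))
primes = [i for i in range(N) if is_prime[i]]

ok = all(q < p + 3 * math.log(p)**2 for p, q in zip(primes, primes[1:]))
print(ok)  # True
```

The largest gap below $10^5$ is 72 (after $p = 31397$), comfortably under $3(\log 31397)^2 \approx 322$.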
Shanks conjectured that $$ \limsup \frac{p_{n+1} - p_n}{\left( \log p_n \right)^2} = 1, $$ while Granville later corrected the number $1$ on the right hand side to $2 e^{- \gamma} \approx 1.1229,$ see CRAMER GRANVILLE. There is no hope of proving this, but I enjoy knowing what seems likely as well as what little can be proved.
Here is a table from the third edition (2004) of Unsolved Problems in Number Theory by Richard K. Guy, in which $p = p_n$ is a prime but $n$ is not calculated, then $g = p_{n+1} - p_n,$ and $p = p(g),$ so $p_{n+1} = p + g.$
[Table of record prime gaps $g$ and the primes $p = p(g)$ at which they first occur; not reproduced here.]
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/200436",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 2,
"answer_id": 1
} |
Determine the equations needed to solve a problem I am trying to come up with the set of equations that will help solve the following problem, but am stuck without a starting point - I can't classify the question to look up more info.
The problem:
Divide a set of products among a set of categories such that no product belongs to more than one category and the total number of products within each category meets a required minimum.
Example:
I have 6 products that can belong to 3 categories, with the required minimums for each category in the final row. For each row, the allowed categories for that product are marked with an X; e.g., Product A can only be categorized in CatX, and Product B can only be categorized in CatX or CatY.
$$
\begin{matrix}
Product & CatX & CatY & CatZ \\
A & X & & \\
B & X & X & \\
C & X & & \\
D & X & X & X \\
E & & & X\\
F & & X & \\
Min Required& 3 & 1 & 2\\
\end{matrix}
$$
The solution - where * marks how the product was categorized:
$$
\begin{matrix}
Product & CatX & CatY & CatZ \\
A & * & & \\
B & * & & \\
C & * & & \\
D & & & * \\
E & & & *\\
F & & * & \\
Total & 3 & 1 & 2\\
\end{matrix}
$$
| Let $x_{ij} = 1$ if you put product $i$ in category $j$, $0$ otherwise. You need
$\sum_i x_{ij} \ge m_j$ for each $j$, where $m_j$ is the minimum for category $j$,
and $\sum_j x_{ij} = 1$ for each $i$, and each $x_{ij} \in \{0,1\}$. The last requirement takes it out of the realm of linear algebra. However, look up "Transportation problem".
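On an instance this small, the 0/1 model above can be checked by brute force. Here is a sketch (my own, with the example's table hard-coded) that enumerates every assignment satisfying $\sum_j x_{ij}=1$ and keeps those meeting the minimums:

```python
from itertools import product

# Allowed categories per product (the X marks from the table above)
allowed = {'A': 'X', 'B': 'XY', 'C': 'X', 'D': 'XYZ', 'E': 'Z', 'F': 'Y'}
minimum = {'X': 3, 'Y': 1, 'Z': 2}

# Each choice assigns exactly one allowed category to each product,
# which is the constraint sum_j x_ij = 1 in the answer's notation.
solutions = []
for choice in product(*allowed.values()):
    counts = {c: choice.count(c) for c in 'XYZ'}
    if all(counts[c] >= minimum[c] for c in 'XYZ'):  # sum_i x_ij >= m_j
        solutions.append(dict(zip(allowed, choice)))

print(solutions)
# [{'A': 'X', 'B': 'X', 'C': 'X', 'D': 'Z', 'E': 'Z', 'F': 'Y'}]
```

The single feasible assignment found matches the solution table in the question.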
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/200503",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
} |
proving an inequality about sup
Possible Duplicate:
How can I prove $\sup(A+B)=\sup A+\sup B$ if $A+B=\{a+b\mid a\in A, b\in B\}$
I want to prove that $\sup\{a+b\}\le\sup{a}+\sup{b}$. My approach is to claim that $\sup a+ \sup b= \sup\{\sup a + \sup b\}$, and since $\sup a +\sup b \ge a+b$, the inequality is proved. Is my approach correct?
| Perhaps this is what you are looking for. Consider
$$
\sup_{x\in X}(a(x)+b(x))=\color{#C00000}{\sup_{{x\in X\atop y\in X}\atop x=y}(a(x)+b(y))\le\sup_{x\in X\atop y\in X}(a(x)+b(y))}=\sup_{x\in X}a(x)+\sup_{x\in X}b(x)
$$
The red inequality is true because the $\sup$ on the left is taken over a smaller set than the $\sup$ on the right. The equalities are essentially definitions.
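A finite analogue (my own illustration, with max playing the role of sup) makes the "smaller set" point concrete: the max over the diagonal $x=y$ never exceeds the max over the full square, and the max over the full square splits into the two separate maxima.

```python
import itertools
import random

random.seed(0)
a = [random.uniform(-5, 5) for _ in range(20)]
b = [random.uniform(-5, 5) for _ in range(20)]

diag = max(ai + bi for ai, bi in zip(a, b))                # max over x = y
full = max(ai + bj for ai, bj in itertools.product(a, b))  # max over the square
print(diag <= full, full == max(a) + max(b))  # True True
```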
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/200558",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
} |
How to solve an nth degree polynomial equation The typical approach of solving a quadratic equation is to solve for the roots
$$x=\frac{-b\pm\sqrt{b^{2}-4ac}}{2a}$$
Here the degree of the polynomial in $x$ is $2$.
However, I was wondering how to solve an equation when the degree is some general $n$.
For example, consider this equation:
$$a_0 x^{n} + a_1 x^{n-1} + \dots + a_n = 0$$
| If all of the equation's roots are real and negative, then one of the roots lies in the range between $\displaystyle -\frac{k}{z}$ and $\displaystyle -n \frac{k}{z}$, where $k$ is the constant term, $z$ is the coefficient of $x$, and $n$ is the highest power of $x$. The coefficient of $x^n$ must be $1$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/200617",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "59",
"answer_count": 10,
"answer_id": 8
} |
Question: Find all values of real number a such that $ \lim_{x\to1}\frac{ax^2+a^2x-2}{x^3-3x+2} $ exists. Thanks in advance for looking at my question.
I was tackling this limit problem, but I can't seem to find any error in my work.
Question:
Find all values of real number a such that
$$
\lim_{x\to1}\frac{ax^2+a^2x-2}{x^3-3x+2}
$$
exists.
My Solution: Suppose $\lim_{x\to1}\frac{ax^2+a^2x-2}{x^3-3x+2}$ exists and is equals to $L$.
We have $$\lim_{x\to1}(ax^2+a^2x-2)=\lim_{x\to1}\frac{ax^2+a^2x-2}{x^3-3x+2}\cdot\lim_{x\to1}(x^3-3x+2)=L\cdot0=0$$
Therefore, $\lim_{x\to1}{ax^2+a^2x-2}=0$
implying $a(1)^2+a^2(1)-2=0$.
Solving for $a$, we get $a=-2$ or $a=1$.
Apparently, the answer is only $a=-2$. I understand where they are coming from, but I can't see anything wrong with my solution either.
| Since the denominator's limit is 0, the numerator cannot have a nonzero limit if the limit of the quotient is to be defined. The only hope is that the numerator's limit is also 0, and that after analyzing the indeterminate form, it does have a limit.
So, it must be the case that $\lim_{x\to1} ax^2+a^2x-2=0$, and consequently $a^2+a-2=0$.
The solutions to that are $a=-2$ and $a=1$, and if you substitute them into the expression, you will find that the numerator now factors into $-2(x-1)^2$ in the first case, and $(x+2)(x-1)$ in the second case.
In either one, the $(x-1)$ can be cancelled with the $(x-1)$ factor in the denominator, so that the singularity (a pole of order 2) might disappear.
$\lim_{x\to1}\frac{x^2+x-2}{x^3-3x+2}=\lim_{x\to1}\frac{x^2+x-2}{(x-1)(x^2+x-2)}=\lim_{x\to 1}\frac{1}{x-1}$ does not exist, so the $a=1 $ case is a false positive.
In the other case:
$\lim_{x\to1}\frac{-2(x-1)^2}{(x-1)^2(x+2)}=\lim_{x\to1}\frac{-2}{(x+2)}=\frac{-2}{3}$
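A quick numerical check (my addition) agrees with both conclusions: for $a=-2$ the quotient settles near $-2/3$, while for $a=1$ it blows up near $x=1$.

```python
def q(x, a):
    # the quotient (a x^2 + a^2 x - 2) / (x^3 - 3x + 2) from the question
    return (a*x**2 + a**2*x - 2) / (x**3 - 3*x + 2)

for h in (1e-3, 1e-4):
    print(q(1 + h, -2), q(1 - h, -2))  # both near -2/3
print(q(1 + 1e-4, 1))                  # huge: the a = 1 limit fails
```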
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/200678",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Numerical optimization with nonlinear equality constraints A problem that often comes up is minimizing a function $f(x_1,\ldots,x_n)$ under a constraint $g(x_1,\ldots,x_n)=0$. In general this problem is very hard. When $f$ is convex and $g$ is affine, there are well-known algorithms to solve it. In many cases, however, $g$ is not affine. For general $g$ this problem is hopelessly hard to solve, but what if the constraint is easy to solve on its own? In particular, suppose that if we are given $x_1,\ldots,x_{n-1}$, then Newton's method on the constraint $g(x_1,\ldots,x_n)=0$ can easily find $x_n$. Are there effective algorithms to solve the constrained optimization problem in that case?
To solve these kinds of problems I have tried to use Lagrange multipliers and to apply Newton's method directly to the resulting (nonlinear) equations, but this does not converge. Something that does work is to add a penalty term for violating the constraint to the objective function, similar to how the barrier method handles inequalities. Unfortunately (but as expected) this is not very fast at getting an accurate answer.
| If you have a single equality constraint you might try to rewrite your constraint $g(x_1,...,x_n)$ as:
$x_i = h(x_1,...x_{i-1},x_{i+1},...x_n)$
and then substitute for the $i$th variable in your objective function and solve the problem as an unconstrained optimization problem.
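Here is a minimal sketch of that substitution idea on a made-up problem (my own): minimize $f(x,y)=x^2+y^2$ subject to $g(x,y)=x^2+y-1=0$, which is easy to solve for $y$. The remaining one-variable problem is handled by golden-section search.

```python
import math

def objective(x):
    y = 1 - x**2            # eliminate y using the constraint g = 0
    return x**2 + y**2

def golden_min(f, a, b, tol=1e-8):
    """Golden-section search for the minimum of a unimodal f on [a, b]."""
    phi = (math.sqrt(5) - 1) / 2
    while b - a > tol:
        c, d = b - phi * (b - a), a + phi * (b - a)
        if f(c) < f(d):
            b = d
        else:
            a = c
    return (a + b) / 2

x = golden_min(objective, 0.0, 2.0)
print(round(x, 4), round(objective(x), 4))  # 0.7071 0.75
```

The minimizer is $x=1/\sqrt2$, $y=1/2$, which one can confirm by hand from $\frac{d}{dx}\bigl(x^2+(1-x^2)^2\bigr)=4x^3-2x=0$.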
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/200786",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 1
} |
Statements in Euclidean geometry that appear to be true but aren't I'm teaching a geometry course this semester, involving mainly Euclidean geometry and introducing non-Euclidean geometry. In discussing the importance of deductive proof, I'd like to present some examples of statements that may appear to be true (perhaps based on a common student misconception or over-generalisation), but are not. The aim would be to reinforce one of the reasons given for studying deductive proof: to properly determine the validity of statements claimed to be true.
Can anyone offer interesting examples of such statements?
An example would be that the circumcentre of a triangle lies inside the triangle. This is true for triangles without any obtuse angles - which seems to be the standard student (mis)conception of a triangle. However, I don't think that this is a particularly good example because the misconception is fairly easily revealed, as would be the case for statements that hold only for isosceles or right-angled triangles. I'd really like to have some statements whose lack of general validity is quite hard to tease out, or has some subtlety behind it.
Of course the phrase 'may appear to be true' is subjective. The example quoted should be understood as indicating the level of thinking of the relevant students.
Thanks.
| Here is one example that is quite similar in nature to the statement in the question about the center of the circumcircle lying inside a triangle, but the dubious part ("lie inside") is somewhat better disguised. I report it only because I just found it in Wikipedia, with a literature reference.
The incenter (that is, the center for the inscribed circle) of the orthic triangle is the orthocenter of the original triangle.
It is easily verified that for a triangle with an obtuse angle the orthocenter lies outside the orthic triangle, so it cannot be the incenter in this case; it is one of the excenters instead.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/200834",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11",
"answer_count": 3,
"answer_id": 1
} |
Inducing a well-defined function on a set What does it mean to say that $f$ induces a well-defined function on the set $X$?
I'm confused about what the term induce means here, and what role the set
$X$ has.
| It means that the function lets us define a(nother) well-defined function on some set $\,X\,$ that depends, in some definite way, on the original function.
For example: if $\,f:G\to H\,$ is a group homomorphism and there's some group $\,N\leq \ker f\,$ , with $\,N\triangleleft G\,$ , then $\,f\,$ induces a well-defined group homomorphism
$$\overline f:X:=G/N\to H\,\,,\,\,\text{defined by}\,\,\overline f(gN):=f(g)$$
Please do note that the original function's domain is $\,G\,$ whereas the induced function's domain is $\,G/N\,$ . These two domains are both different sets and different groups.
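A toy computational version of this construction (my own, with $G=\mathbb{Z}$, $H=\mathbb{Z}/2$, and $N=4\mathbb{Z}\leq\ker f$) shows what "well-defined" buys you: the induced map's value does not depend on the chosen coset representative.

```python
def f(g):
    return g % 2            # homomorphism Z -> Z/2, with kernel 2Z

def f_bar(rep):
    return f(rep)           # induced map on Z/4Z, fed any representative

# Well-definedness: all representatives of the coset g + 4Z give the
# same value, because 4Z is contained in ker f = 2Z.
for g in range(4):
    values = {f_bar(g + 4*k) for k in range(-3, 4)}
    assert values == {f(g)}

print([f_bar(g) for g in range(4)])  # [0, 1, 0, 1]
```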
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/200884",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 1
} |
Is it possible to take the absolute value of both sides of an equation? I have a problem that says: Suppose $3x^2+bx+7 > 0$ for every number $x$; show that $|b|<2\sqrt{21}$.
Since the quadratic is greater than 0, I assume that there are no real solutions since
$y = 3x^2+bx+7$, and $3x^2+bx+7 > 0$, $y > 0$
since $y>0$ there are no x-intercepts. I would use the discriminant $b^2-4ac<0$.
I now have $b^2-4(3)(7)<0$
$b^2-84<0$
$b^2<84$
$b<\pm\sqrt{84}$
Now how do I change $b$ to $|b|$? Can I take the absolute value of both sides of the equation or is there a proper way to do this?
| What you've written is an inequality, not an equation. If you have an equation, say $a=b$, you can conclude that $|a|=|b|$.
But notice that $3>-5$, although $|3|\not>|-5|$.
If $3x^2+bx+7>0$ for every value of $x$, then the quadratic equation $3x^2+bx+7=0$ has no solutions that are real numbers. That implies that the discriminant $b^2-4ac=b^2-4\cdot3\cdot7$ is negative. If $b^2-84<0$ then $b^2<84$, so $|b|<\sqrt{84}$.
Now observe that $\sqrt{84}=\sqrt{4}\sqrt{21}=2\sqrt{21}$.
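A small numeric experiment (my addition) matches the bound $|b|<\sqrt{84}\approx 9.165$:

```python
import math

def always_positive(b, xs):
    # check 3x^2 + bx + 7 > 0 on a grid of sample points
    return all(3*x*x + b*x + 7 > 0 for x in xs)

xs = [i / 100 for i in range(-500, 501)]  # grid on [-5, 5]
bound = 2 * math.sqrt(21)                 # ≈ 9.165
print(always_positive(9.0, xs))   # True:  |b| < bound
print(always_positive(9.5, xs))   # False: |b| > bound
```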
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/200946",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 3
} |
Error in proof of self-adjointness of 1D Laplacian I have successfully checked the self-adjointness of a simple and classic differential operator, the 1D Laplacian
$$D = \frac {d^2}{dx^2}: L_2(0,\infty) \rightarrow L_2(0,\infty)$$
defined on
$$\{f(x) | f'' \in L_2(0,\infty), f(0) = 0\},$$
Then I opened an article and saw in its first example that this operator is not self-adjoint but strictly symmetric (Hermitian).
Can anybody point out error in reasoning below?
Find the adjoint operator, i.e., its domain.
$$
(Df,g) = \int_0^\infty f''\overline g dx = ... = \left. (f'\overline g - f \overline g') \right|_0^{\infty} + (f, D^*g).
$$
To satisfy adjointness, we should zero out the boundary term for fixed $g$ and for all $f \in D_D$, the domain of $D$:
$$
\left. (f'\overline g - f \overline g') \right|_0^{\infty} = 0.
$$
The second term vanishes because $f(0) = 0$, so
$$
\left. (f'\overline g ) \right|_0^{\infty} = 0.
$$
Because $f$ is arbitrary in the domain of $D$ and hence can have a nonzero first derivative, this equality holds for fixed $g$ if and only if
$$g(0) = 0.$$
Thus the domains of the operator and its adjoint are the same, which means that it is self-adjoint.
What I see in the article:
Boris Pavlov wrote:
Example. Symplectic extension procedure for the differential operator Consider the second order differential operator
$$
L_0u = - \frac {d^2u}{dx^2}
$$
defined on all square integrable functions, $u \in L_2(0, \infty)$, with square-integrable derivatives of the first and second order and vanishing near the origin. This operator is symmetric and its adjoint $L^+_0$ is defined by the same differential expression on all square integrable functions with square integrable derivatives of the first and second order and no additional boundary condition at the origin.
Where is the error? There certainly is one, because the operator $D$ with domain given by $f'(0)=\alpha f(0)$ is symmetric, and its domain is a superset of the domain considered in my message; hence my operator is not maximal symmetric and therefore cannot be self-adjoint.
| The operator in Pavlov's article is not the same as yours. His has a domain of functions "vanishing near the origin", i.e. on a neighborhood of 0. For your operator, functions in the domain need only vanish at the origin. So there is no error; your operator is self-adjoint and his is not.
Regarding your last paragraph, the operator $D$ with domain $f'(0) = \alpha f(0)$ (Robin boundary conditions) is not an extension of yours. Consider for instance $f(x) = e^{-x} \sin x$; it has $f(0)= 0$ but $f'(0) \ne 0$, so it is in the domain of your original operator but not the latter.
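The counterexample is easy to confirm numerically (my addition): $f(x)=e^{-x}\sin x$ has $f(0)=0$ but $f'(0)=1\ne 0$.

```python
import math

f = lambda x: math.exp(-x) * math.sin(x)
h = 1e-6
deriv0 = (f(h) - f(-h)) / (2 * h)  # central difference estimate of f'(0)
print(f(0.0), deriv0)              # 0.0 and approximately 1.0
```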
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/201010",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Example about hyperbolicity. $\def\abs#1{\left|#1\right|}$I would like to understand this example:
*
*Why is the following set a hyperbolic manifold?
$X=\{[1:z:w]\in \mathbb{CP}_2\mid0<\abs z< 1, \abs w < \abs{\exp(1/z)}\}$
It's an example given in the book Hyperbolic Manifolds and Holomorphic Mappings: An Introduction by Kobayashi, in order to give a counterexample to an optimistic generalization of the Big Picard Theorem. They claim that it is biholomorphic to $\mathbb{D}\times\mathbb{D}^*$. I don't understand why.
| $\mathbb{CP}^2$ is a complex manifold in a natural way, with charts given by the maps:
$\begin{array}{lclc} \varphi_i: & U_i:=\{[z_0:z_1:z_2]\in \mathbb{CP}^2 \ | \ z_i\neq 0\} & \longrightarrow & \mathbb{C}^2 \\ & {[z_0:z_1:z_2]} & \longmapsto & (\dfrac{z_j}{z_i},\dfrac{z_k}{z_i}) \end{array}$
where $j,k\neq i$.
Now, $U_1$ can be written as $\{[1:z:w]\in \mathbb{CP}^2\}$, and by the definition of a biholomorphic map between two complex manifolds, $X$ is biholomorphic to $\varphi_1(X)$.
The map $(z,w)\in \varphi_1(X)\mapsto (z,we^{-\frac{1}{z}})\in \mathbb D^\star\times \mathbb D$ is clearly a biholomorphism.
So $X$ is biholomorphic to $\mathbb D^\star\times \mathbb D$.
Since $\mathbb D$ and $\mathbb D^\star$ are hyperbolic manifolds then so is $\varphi_1(X)$ and consequently $X$ is a hyperbolic manifold.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/201089",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Prove that $\zeta (4)\le 1.1$ Prove the following inequality
$$\zeta (4)\le 1.1$$
I saw on the site some proofs for $\zeta(4)$ that use Fourier or Euler's way for computing its precise value, and that's fine and I can use it. Still, I wonder if there is a simpler way around for proving
this inequality. Thanks!
| $$\zeta(4) < \sum_{n=1}^{6} \frac{1}{n^{4}} + \int_{6}^{\infty} \frac{dx}{x^4} < 1.1$$
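For the record, a quick evaluation (mine) of the two pieces of this bound:

```python
# partial sum of 1/n^4 up to n = 6, plus the integral tail bound
partial = sum(1 / n**4 for n in range(1, 7))  # ≈ 1.081124
tail = 1 / (3 * 6**3)                         # ∫_6^∞ x^{-4} dx = 1/648
print(partial + tail)                         # ≈ 1.08267, safely below 1.1
```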
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/201151",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 3,
"answer_id": 0
} |
Limit of a complex function How to find the limit of such a complex function?
$$
\lim_{z\rightarrow \infty} \frac{z \left| z \right| - 3 \Im z + i}{z \left| z \right|^2 +2z - 3i}.
$$
|
Consider moduli and use the triangle inequality.
The modulus of the numerator is at most $|z|^2+3|z|+1$ because $|\Im z|\leqslant|z|$ and $|\mathrm i|=1$. The modulus of the denominator is at least $|z|^3-2|z|-3$ because $|\mathrm i|=1$. Hence the limit of the ratio is $0$ when $|z|\to\infty$.
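A numerical sketch (my addition) of the resulting decay: the modulus of the ratio behaves like $1/|z|$.

```python
import cmath

def g(z):
    # the ratio from the question
    return (z*abs(z) - 3*z.imag + 1j) / (z*abs(z)**2 + 2*z - 3j)

for r in (1e2, 1e3, 1e4):
    z = r * cmath.exp(1j * 0.7)  # an arbitrary fixed direction to infinity
    print(abs(g(z)))             # shrinks roughly like 1/r
```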
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/201204",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Affine Subspace Confusion I'm having some trouble deciphering the wording of a problem.
I'm given $V$ a vector space over a field $\mathbb{F}$. Letting $v_1$ and $v_2$ be distinct elements of $V$, define the set $L\subseteq V$: $L=\{rv_1+sv_2 | r,s\in \mathbb{F}, r+s=1\}$.
It's the next part where I can't figure out what they mean.
"Let $X$ be a non-empty subset of $V$ which contains all lines through two distinct elements of $X$."
No idea what this set $X$ is. Once I figure that out, I'm supposed to show that it's a coset of some subspace of $V$. I'm hoping this part will become clearer once I know what $X$ is...
| By definition the set $L$ in your question consists of all the points on a line. So you may think of $L$ as a line (or the line that passes through the two points $v_1$ and $v_2$).
Hence if you are considering the two points ($v_1$, $v_2$) giving you the line $L$, then a subset $X$ containing all lines (the one line) through the two points, is a subset $X$ containing $L$: $L \subseteq X$.
Note: There might be a bit of confusion here, since by saying "$X$ contains all lines..." you might be understood as saying that the elements of $X$ are lines. But that would mean that $X$ is not a subset of $V$, so I assumed that by "$X$ contains all lines..." you mean that $X$ contains all the points on the lines (all the points that make up the lines).
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/201259",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Recommended book on modeling/differential equations I am soon attending an undergrad course named differential equations and modeling. I have dealt with differential equations before, but in that course I just learned a bunch of methods for solving them. Are there any good books with more of a 'modeling' view of this subject? That is, given a problem A, you have to derive the equations for solving it, then solve it. This is often the hard part of math problems, in my view.
| Note: this list is different if you meant partial differential equations.
*
*A First Course in Differential Equations, Modeling, and Simulation Carlos A. Smith, Scott W. Campbell
*Differential Equations: A Modeling Approach, Frank R. Giordano, Maurice Weir
*Differential Equations And Boundary Value Problems: Computing and Modeling by Charles Henry Edwards, David E. Penney, David Calvis
*Modeling and Simulation of Dynamic Systems by Robert L. Woods
*Simulation and Inference for Stochastic Differential Equations: With R Examples by Stefano M. Iacus
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/201353",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 2,
"answer_id": 1
} |
Open Measurable Sets Containing All Rational Numbers So I am trying to figure out a proof for the following statement, but I'm not really sure how to go about it. The statement is: "Show that for every $\epsilon>0$, there exists an open set G in $\mathbb{R}$ which contains all of the rational numbers but $m(G)<\epsilon$." How can it be true that the open set G contains all of the rational numbers but has an arbitrarily small measure?
| Hint: if you order the rationals, you can put an interval around each successive one and take the union for your set. If the intervals decrease in length quickly enough....
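The hint carried out (my own wording of the standard construction):

```latex
\text{Enumerate } \mathbb{Q} = \{q_1, q_2, q_3, \dots\} \text{ and set }
G \;=\; \bigcup_{n=1}^{\infty} \Bigl( q_n - \frac{\epsilon}{2^{n+2}},\;
q_n + \frac{\epsilon}{2^{n+2}} \Bigr).
\text{ Then } G \text{ is open and contains every rational, while by
countable subadditivity}
m(G) \;\le\; \sum_{n=1}^{\infty} \frac{\epsilon}{2^{n+1}}
\;=\; \frac{\epsilon}{2} \;<\; \epsilon .
```

The set $G$ is small because the intervals overlap heavily: near any irrational point, all but finitely many of the tiny intervals miss it.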
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/201410",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 3,
"answer_id": 2
} |
Find the following limit $\lim_{x\to 0}\frac{\sqrt[3]{1+x}-1}{x}$ and $\lim_{x\to 0}\frac{\cos 3x-\cos x}{x^2}$
Find the following limits
$$\lim_{x\to 0}\frac{\sqrt[3]{1+x}-1}{x}$$
Any hints/solutions on how to approach this? I tried many ways (rationalization, factoring out $x$, etc.), but I still can't rid myself of the singularity. Thanks in advance.
Also another question.
Find the limit of
$$\lim_{x\to 0}\frac{\cos 3x-\cos x}{x^2}$$
I worked up till here, after which I got stuck. I think I need to apply the squeeze theorem, but I am not sure how to.
$$\lim_{x\to 0}\frac{\cos 3x-\cos x}{x^2} = \lim_{x\to 0}\frac{-2\sin\frac{1}{2}(3x+x)\sin\frac{1}{2}(3x-x)}{x^2}=\lim_{x\to 0}\frac{-2\sin2x\sin x}{x^2}=\lim_{x\to 0}\frac{-2(2\sin x\cos x)\sin x}{x^2}=\lim_{x\to 0}\frac{-4\sin^2 x\cos x}{x^2}$$
Solutions or hints will be appreciated. Thanks in advance! L'Hospital's rule is not allowed.
| As N.S. said, viewing this limit as a derivative is one way to solve it.
You could also substitute $u=x+1$ to simplify your expression and consider $f(u)=u^{1/3}$.
$$u=x+1\rightarrow \lim_{u\rightarrow 1} \frac{u^{1/3}-1}{u-1}=\lim_{u\rightarrow 1} \frac{f(u)-f(1)}{u-1}=f'(1)$$
But $f'(u) = \frac{1}{3}u^{-2/3}$, then $f'(1) = \frac{1}{3}\cdot 1^{-2/3}=\frac{1}{3}$.
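Both limits can be sanity-checked numerically (my addition). The first is $1/3$, and completing the question's own computation, $-4\sin^2 x\cos x/x^2 \to -4$ since $(\sin x/x)^2 \to 1$ and $\cos x \to 1$:

```python
import math

g1 = lambda x: ((1 + x)**(1/3) - 1) / x
g2 = lambda x: (math.cos(3*x) - math.cos(x)) / x**2

for h in (1e-3, 1e-5):
    print(g1(h), g2(h))  # approach 1/3 and -4 respectively
```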
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/201470",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 4,
"answer_id": 2
} |
Uniform convergence of $f_n\rightarrow f$ and limit of zeroes to $f_n$ I'm having some doubts on a homework question:
Let $f_n\rightarrow f$ uniformly on compact subsets of an open connected set $\Omega \subset \mathbb{C}$, where $f_n$ is analytic, and $f$ is not identically equal to zero.
(a) Show that if $f(w)=0$ then we can write $w=\lim z_n$, where $f_n(z_n)=0$ for all $n$ sufficiently large.
(b) Does this result hold if we only assume $\Omega$ to be open?
I'm not too sure how to do (a)-- I think I might be able to do it just by using the definition of uniform convergence and the fact that $f_n$ has a zero at $z_n$, but this doesn't use the assumption that $f_n$ is analytic or that $\Omega$ is connected. I'm also guessing that the result doesn't hold if we only assume $\Omega$ to be open and not connected for obvious topological reasons, but not knowing exactly how to do (a), I'm not sure if I know how to prove this. Could anyone give me some pointers? Thanks in advance.
| Take a small circle around $w$.
Then by Rouché's theorem $f_n$ has a zero $z_n$ inside the circle for $n$ large enough (and maybe several if $w$ is a multiple zero of $f$).
Now shrink the circle and repeat: you will obtain the convergent sequence $(z_n)$.
By the way, this sketch of proof shows why we must assume that $f$ is not identically zero near $w$: if $f\equiv 0$ just take $f_n=\frac {1}{n}$ (which has no zero at all!) to get a counter-example.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/201526",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Pathologies in module theory Linear algebra is a very well-behaved part of mathematics. Soon after you have mastered the basics you got a good feeling for what kind of statements should be true -- even if you are not familiar with all major results and counterexamples.
If one replaces the underlying field by a ring, and therefore looks at modules, things become more tricky. Many pathologies occur that one maybe would not expect coming from linear algebra.
I am looking for a list of such pathologies where modules behave differently than vector spaces. This list should not only be a list of statements but all phenomena should be illustrated by an example.
To start the list I will post an answer below with all pathologies that I know from the top of my head. This should also explain better what kind of list I have in mind.
| I am surprised that it has not been mentioned here:
An example of a free module $M$ that has bases of different cardinalities.
Let $V$ be a vector space of countably infinite dimension over a division ring $D$. Let $R=End_D(V)$. We know that $R$ is free over $R$ with basis $\{1\}$. We claim that given a positive integer $n$, there is an $R$-basis $B_n=\{f_1,f_2, \dots f_n\}$ for $R$ having $n$ elements.
Let $B=\{e_k\}_{k=1}^{\infty}$ be a basis of $V$ over $D$. Define $\{f_1, \dots , f_n\}\in R$ by specifying their values on $B$ as in the following table-
\begin{array}{|c| ccccc|}
\hline
& f_1 & f_2 & f_3 & \dots & f_n \\ \hline
e_1& e_1&0 &0 & \dots & 0\\
e_2& 0 & e_1 &0 & \dots & 0\\
\vdots& & & & \ddots\\
e_n & 0 &0 &0 & \dots & e_1 \\ \hline
e_{n+1} & e_2 &0 &0 & \dots & 0 \\
e_{n+2} & 0 &e_2 &0 &\dots &0 \\
\vdots & & & &\ddots & \\
e_{2n} & 0 &0 &0 & \dots & e_2 \\ \hline
\vdots & \vdots & \vdots & \vdots& \vdots & \vdots \\ \hline
e_{kn+1} & e_{k+1} & 0 & 0 & \dots & 0 \\
e_{kn+2} & 0 & e_{k+1}&0& \dots &0 \\
\vdots & &&&\ddots & \\
e_{(k+1)n} &0&0&0& \dots& e_{k+1} \\ \hline
\vdots & \vdots & \vdots & \vdots& \vdots & \vdots \\
\end{array}
Now we check that $B_n$ is an $R$-basis of $R$.
*
*Linearly independent-
If $\sum_{i=1}^{n} \alpha_i f_i=0$ with $\alpha_i \in R,$ then evaluating on the successive blocks of $n$ vectors, namely , $e_{kn+1}, \dots , e_{(k+1)n}, k=0,1,\dots ,$ we get $\alpha_i(e_{k+1})=0\ \forall\ k$ and $1 \le i \le n ;$ i.e. $\alpha_i \equiv 0\ \forall\ i$ showing that $B_n$ is linearly independent over $R$.
*
*$B_n$ spans $R$-
Let $f\in R$ then $f= \sum_{i=1}^{n} \alpha_i f_i,$ where $\alpha_i \in R$ are defined by their values on $B$ as in the following table-
\begin{array}{|c| ccccc|}
\hline
& \alpha_1 & \alpha_2 & \alpha_3 & \dots & \alpha_n \\ \hline
e_1& f(e_1)&f(e_2) &f(e_3) & \dots & f(e_n)\\
e_2& f(e_{n+1}) & f(e_{n+2}) &f(e_{n+3}) & \dots & f(e_{2n})\\
\vdots& & & & \ddots\\
e_n & f(e_{(n-1)n+1}) & f(e_{(n-1)n+2}) &f(e_{(n-1)n+3}) & \dots & f(e_{n^2}) \\ \hline
e_{n+1} & f(e_{n^2+1}) &f(e_{n^2+2}) &f(e_{n^2+3}) & \dots & f(e_{n^2+n}) \\
e_{n+2} & . & . &. &\dots &f(e_{n^2+2n}) \\
\vdots & & & &\ddots & \\
e_{2n} & f(e_{2n^2-n+1}) &f(e_{2n^2-n+2}) &f(e_{2n^2-n+3}) & \dots & f(e_{2n^2}) \\ \hline
\vdots & \vdots & \vdots & \vdots& \vdots & \vdots \\ \hline
e_{kn+1} & f(e_{kn^2+1}) & f(e_{kn^2+2}) & f(e_{kn^2+3}) & \dots & f(e_{kn^2+n}) \\
e_{kn+2} & . & .&.& \dots &f(e_{kn^2+2n}) \\
\vdots & &&&\ddots & \\
e_{(k+1)n} &.&.&.& \dots& f(e_{(k+1)n^2}) \\ \hline
\vdots & \vdots & \vdots & \vdots& \vdots & \vdots \\
\end{array}
This shows that $B_n$ spans $R$.
So for each $n > 0$, $B_n= \{f_1, \dots , f_n\}$ is a basis of cardinality $n$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/201603",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "49",
"answer_count": 4,
"answer_id": 0
} |
How to show a quasi-compact, Hausdorff space is totally disconnected? This is from Atiyah-Macdonald. I was asked to show that if every prime ideal of $A$ is maximal, then $A/R$ is absolutely flat, Spec($A$) is a $T_{1}$ space, and further that Spec($A$) is Hausdorff. The author then asked me to show that Spec($A$) is totally disconnected. I am wondering why, because it is not automatic that a compact Hausdorff space is totally disconnected (consider $\mathbb{S}^{1}$ as the one-point compactification of $\mathbb{R}$, for example). Why should Spec($A$) be totally disconnected when all we know is that each singleton $\{p\}$ is closed?
| Okay, so first, there is a distinction made between quasicompactness and compactness. A topological space $X$ is quasicompact if every open cover of $X$ has a finite subcover. The topological space $X$ is said to be compact if it is quasicompact and Hausdorff.
We know the $Spec(A)$ is quasicompact for any ring $A$. If you have managed to show for the above problem that $Spec(A)$ is Hausdorff, then this means that $Spec(A)$ is compact.
Claim: For a non-unit $f$, the distinguished open set $D(f) = \{p \in Spec(A): f \notin p \}$ is closed.
Proof of claim: Since $f$ is not a unit, it follows that $D(f) \subsetneq Spec(A)$. Our goal will be to show that $Spec(A) - D(f)$ is open. Let $p \in Spec(A) - D(f)$. Then, $f \in p$. Note that $f$ is nilpotent in $A_p$, since the only prime of $A_p$ is $p(A_p)$. Thus, there exists $s_p \in A - p$ such that $s_pf^n = 0$ for some $n \in \mathbb{N}$. Then $p \in D(s_p)$, and $D(s_p) \cap D(f) = \emptyset$. Thus, $D(s_p) \subset Spec(A) - D(f)$. Since $p$ was an arbitrary point of $Spec(A) - D(f)$, this shows that $Spec(A) - D(f) = \bigcup_{p \in Spec(A) - D(f)} D(s_p)$ is open, hence $D(f)$ is closed.
Thus, for all $f \in A$, $D(f)$ is a clopen set (simultaneously closed and open).
Now let $C$ be a connected component of $Spec(A)$ with more than one element, say $p_1, p_2$. Since $p_1, p_2$ are maximal and distinct, there exists $f \in p_1$ such that $f \notin p_2$. Then $D(f)$ is a clopen set that contains $p_2$ but not $p_1$. This shows that $C \cap D(f)$ is a proper clopen set of $C$, which contradicts the connectedness of $C$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/201661",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 1,
"answer_id": 0
} |
What is it called when a function is not continuous but still can have a derivative? Consider the following function (I think it has a name, but I don't remember it):
$$
f(x) = \cases{-1 & $x < 0$ \\
0 & $x = 0$ \\
1 & $x > 0$}
$$
$f'(x)$ is zero everywhere except at $x=0$, where $f$ is not continuous. But suppose we ignore the right half of the real line and define $f(0)$ to be $-1$. Then $f$ has a left derivative at $x=0$, and it is zero. We can do the same thing from the right, so in a way it could make a little bit of sense to say that $f'(0) =0$.
Of course, I understand that going by the definition $f$ isn't differentiable at $x=0$. But one could imagine an alternative definition of derivative for discontinuous functions, in which one calculates lateral derivatives by redefining the function to be continuous, and then we see if the lateral derivatives match. This doesn't always work; for example it's hard to meaningfully assign a derivative to $x \mapsto |x|$ at $x=0$.
Are there other functions with this property? Does it have a name?
| Interestingly, you can assign a derivative of the function $\operatorname{abs}$ at $0$ by using the following definition:
$$\frac{\mathrm df(x)}{\mathrm dx}=\lim_{h\to0}\frac{f(x+h)-f(x-h)}{2h}.$$
Thus, taking the limit,
$$\operatorname{abs}'(x)=\frac{\mathrm d}{\mathrm dx}|x|=\operatorname{sgn}(x)=\cases{-1, & $x < 0;$ \\ \phantom{-}0, & $x = 0;$ \\\phantom{-}1, & $x > 0.$}$$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/201732",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
} |
Probability problem: cars on the road I heard this problem, so I might be missing pieces. Imagine there are two cities separated by a very long road. The road has only one lane, so cars cannot overtake each other. $N$ cars are released from one of the cities, the cars travel at constant speeds $V$ chosen at random and independently from a probability distribution $P(V)$. What is the expected number of groups of cars arriving simultaneously at the other city?
P.S.: Supposedly, this was a Princeton physics qualifier problem, if that makes a difference.
| There are already two answers that show that under a certain interpretation of the question the answer is the $N$-th harmonic number. This can be seen more directly by noting that the $k$-th car is the "leader" of a group iff it is the slowest of the first $k$ cars, which occurs with probability $1/k$. Thus the expected number of leaders of groups is the $N$-th harmonic number, and of course there are as many groups as there are leaders of groups.
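For what it's worth, here is a quick sketch of mine (not part of the original argument, and the function names are made up) checking the harmonic-number answer both exactly and by Monte Carlo:

```python
import random

def expected_groups(n):
    # E[#groups] = H_n, since car k leads a group iff it is the slowest of the first k
    return sum(1.0 / k for k in range(1, n + 1))

def simulated_groups(n, trials=20000, seed=1):
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        slowest = float("inf")
        for _ in range(n):
            v = rng.random()      # speed drawn from a continuous distribution
            if v < slowest:       # a new "record minimum" starts a new group
                total += 1
                slowest = v
    return total / trials

print(expected_groups(5))         # H_5 = 137/60, about 2.2833
```

The simulation counts record minima in departure order, which is exactly the group-leader criterion described above.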
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/201807",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "21",
"answer_count": 6,
"answer_id": 0
} |
What is the probability of the box? Your box of cereal may be a contest winner! It's rattling, which 100% of winning boxes do. Of course 1% of all boxes rattle and only one box in a million is a winner. What is the probability that your box is a winner?
| The correct solution would be $0.0001$ ($1/10000$), wouldn't it? It's late, but it seems to me that Drew Christianson miscalculated and dedocu mixed $p(A)$ and $p(B)$ - correct me please, if I'm wrong.
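In code, the Bayes' rule computation behind this answer looks as follows (my own sketch; the parameter names are mine):

```python
def p_winner_given_rattle(p_winner=1e-6, p_rattle=0.01, p_rattle_given_winner=1.0):
    # Bayes' rule: P(winner | rattle) = P(rattle | winner) * P(winner) / P(rattle)
    return p_rattle_given_winner * p_winner / p_rattle

print(p_winner_given_rattle())   # about 0.0001, i.e. one box in ten thousand
```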
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/201850",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 6,
"answer_id": 5
} |
Proof that a perfect set is uncountable There is something I don't understand about the proof that perfect sets are uncountable. The same proof is present in Rudin's Principles of Mathematical Analysis.
Do we assume that our construction of $U_n$ must contain all points of $S$? What if we are only collecting evenly-indexed points of $S$ ($x_{2n}$)? We would still get an infinitely countable subset of $S$, and the rest of $S$ can be used to provide points for $V$. What am I missing?
| There is an alternative proof, using what is a consequence of Baire's Theorem:
THM Let $(M,d)$ be a complete metric space with no isolated points. Then $(M,d)$ is uncountable.
PROOF Assume $M$ is countable, and let $\{x_1,x_2,x_3,\dots\}$ be an enumeration of $M$. Since each singleton is closed, each $X_i=M\smallsetminus \{x_i\}$ is open. Moreover, each of them is dense, since each point is an accumulation point of $M$. By Baire's Theorem, $\displaystyle\bigcap_{i\in\Bbb N} X_i$ must be dense, hence nonempty, but it is readily seen it is empty, which is absurd. $\blacktriangle$.
COROLLARY Let $(M,d)$ be complete, $P$ a perfect subset of $M$. Then $P$ is uncountable.
PROOF $(P,d\mid_P)$ is a complete metric space with no isolated points.
ADD It might be interesting to note that one can prove Baire's Theorem using a construction completely analogous to the proof suggested in the post.
THM Let $(X,d)$ be complete, and let $\langle G_n\rangle$ be a sequence of open dense sets in $X$. Then $G=\displaystyle \bigcap_{n\in\Bbb N}G_n$ is dense.
PROOF We can construct a sequence $\langle F_n\rangle$ of closed sets as follows. Let $x\in X$, and take $\epsilon >0$, set $B=B(x,\epsilon)$. Since $G_1$ is dense, there exists $x_1\in B\cap G_1$. Since both $B$ and $G_1$ are open, there exists a ball $B_1=B(x_1,r_1)$ such that $$\overline{B_1}\subseteq B\cap G_1$$
Since $G_2$ is open and dense, there is $x_2\in B_1\cap G_2$ and again an open ball $B_2=B(x_2,r_2)$ such that $\overline{B_2}\subseteq B_1\cap G_2$, but we ask now that $r_2\leq r_1/2$. We then successively take $r_{n+1}<\frac{r_n}2$. Inductively, we see we can construct a sequence of closed bounded sets $F_n=\overline{B_n}$ such that $$F_{n+1}\subseteq F_n\\ \operatorname{diam}F_n\to 0$$
Since $X$ is complete, there exists $\alpha\in \displaystyle\bigcap_{n\in\Bbb N}F_n$. But, by construction, we see that $\displaystyle\alpha\in \bigcap_{n\in\Bbb N}G_n\cap B(x,\epsilon)$
Thus $G$ is dense in $X$.$\blacktriangle.$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/201922",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "36",
"answer_count": 3,
"answer_id": 1
} |
Please explain how this ratio is being calculated A,B and C are partners of a company. A receives $\frac{x}{y}$ of profit. B and C share the remaining profit equally among them.
A's income increases by $I_a$ if the overall profit increases from P% to Q%. How much had A invested in their company?
I know the answer: $\frac{I_a\cdot100}{P-Q}$.
This may be a very simple question, but I don't understand how it comes.
| Let $A$ be the amount that Alicia has invested in the company. Let $\frac{x}{y}$ be the fraction of the company that she owns. So if $V$ is the total value of the company, then $A=\frac{x}{y}V$.
The old percentage profit was $P$. So the old profit was $\frac{P}{100}V$.
Alicia got the fraction $\frac{x}{y}$ of this, so Alicia's old profit was
$$\frac{x}{y}\frac{P}{100}V=\frac{P}{100}\frac{x}{y}V=\frac{P}{100}A.$$
Similarly, Alicia's new profit is
$$\frac{Q}{100}A,$$
so the change in profit is
$$\frac{Q}{100}A-\frac{P}{100}A.$$
This is equal to $I_a$. So
$$I_a=\frac{Q-P}{100}A,$$
and therefore
$$A=\frac{100 I_a}{Q-P}.$$
Note that the fraction $\frac{x}{y}$ turned out to be irrelevant, as of course did the fact that there are other shareholders.
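As a sanity check, here is the final formula in code (a sketch of mine; the sample numbers are made up):

```python
def investment(I_a, P, Q):
    # A = 100 * I_a / (Q - P): the income change is (Q - P) percent of A
    return 100.0 * I_a / (Q - P)

# e.g. if the profit rate rises from P = 10% to Q = 15% and A's income rises by 500,
# then A must have invested 10000, and indeed 5% of 10000 is 500:
print(investment(500, 10, 15))   # 10000.0
```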
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/201993",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
} |
Taylor's Tangent Approximation This is my question,
A function of 2 variable is given by,
$f(x,y) = e^{2x-3y}$
How to find tangent approximation to $f(0.244, 1.273)$ near $(0,0)?$
I need some guidance for this question.
Am I supposed to do the linear approximation or the quadratic approximation?
Need some explanation for the formula. Thanks
| A more precise approximation is obtained if we represent $f(x,\,y)$ as $$f(x,\,y)=e^{2x-3y}=e^{-3}e^{2x-3y+3}=e^{-3}e^{2x-3(y-1)}.$$ Then apply the formula for the tangent approximation to the function $g(x,\,y)=e^{2x-3(y-1)}$, scaled by $e^{-3}$,
with $a=0; \,b=1.$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/202046",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
The drying water melon puzzle I couldn't find an explanation to this problem that I could understand.
A watermelon consists of 99% water and that water measures 2 litres. After a day in the sun the watermelon dries up and it now consists of 98% water. How much water is left in the watermelon?
I know the answer is ~1 litre, but why is that? I've read a couple of answers but I guess I'm a bit slow because I don't understand why.
EDIT
I'd like you to assume that I know no maths. Explain it like you would explain it to a 10 year old.
| At the beginning the solid material is $1\%$ of the total which is a trifle (to be neglected) more than $1\%$ of $99\%$ of the total, or $1\%$ of $2000\ {\rm cm}^3$. Therefore the solid material has volume $\sim20\ {\rm cm}^3$.
After one day in the sun these $20\ {\rm cm}^3$ solid material are still the same, but now they make up $2\%$ of the total. Therefore the total now will be $1000\ {\rm cm}^3$ or $1$ litre. $98\%$ of this volume, or almost all of it, will be water.
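The same bookkeeping in code, taking the 2 litres to be the water itself (a sketch of mine; if you instead take 2000 cm³ as the whole melon, you get the ~20 cm³ and ~1 litre figures above):

```python
def water_after_drying(water_before=2.0, frac_before=0.99, frac_after=0.98):
    total_before = water_before / frac_before   # whole melon, about 2.02 litres
    solid = total_before * (1 - frac_before)    # about 0.02 litres, unchanged by drying
    total_after = solid / (1 - frac_after)      # the solid is now 2% of the new total
    return total_after * frac_after             # water remaining, about 0.99 litres

print(water_after_drying())
```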
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/202095",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Direct construction of Lebesgue measure I have seen two books for measure theory, viz, Rudin's, and Lieb and Loss, "Analysis".
Both use some kind of Riesz representation theorem machinery to construct Lebesgue measure.
Is there a more "direct" construction, and if so, what is a source?
| The most popular way is constructing it using the Caratheodory extension theorem, from Lebesgue outer measure. This approach is not very intuitive, but is a very powerful and general way for constructing measures.
An even more direct construction and essentially the one developed by Lebesgue himself defines Lebesgue measurable sets to be the ones that can be well approximated (in terms of outer measure) by open sets from the outside and by closed sets from the inside and shows that Lebesgue outer measure applied to these sets is an actual measure. You find this approach in A Radical Approach to Lebesgue's Theory of Integration by Bressoud and Real Analysis: Measure Theory, Integration, and Hilbert Spaces by Stein and Shakarchi.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/202145",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 1
} |
Zero polynomial
Possible Duplicate:
Polynomial of degree $-\infty$?
Today in Abstract Algebra my instructor briefly mentioned that sometimes the zero polynomial is defined to have degree $-\infty$. What contexts have caused this to become convention?
| Persistence.
You want formulas to make sense also when abusively applying them to cases involving the zero polynomial.
For example, we have $\deg(f\cdot g)=\deg f +\deg g$ and $\deg (f+g)\le \max\{\deg f, \deg g\}$. Therefore we assign a symbolic value - and be it only for mnemonic purposes - of $-\infty$ as the degree of $0$, because that makes $-\infty =\deg(0\cdot g)=-\infty+\deg g$ and $\deg g = \deg (0+g)=\max\{-\infty,\deg g\}$ work.
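This convention is also what one would naturally implement; a minimal sketch of mine, with IEEE $-\infty$ playing the role of $\deg 0$:

```python
import math

def deg(coeffs):
    """Degree of a polynomial given as coeffs[i] = coefficient of x**i; deg(0) = -inf."""
    d = -math.inf
    for i, c in enumerate(coeffs):
        if c != 0:
            d = i
    return d

# the degree formulas keep working when one operand is the zero polynomial:
assert deg([0]) + deg([0, 0, 3]) == -math.inf      # deg(0*g) = -inf + deg g
assert max(deg([0]), deg([7, 1])) == deg([7, 1])   # deg(0+g) = max(-inf, deg g)
```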
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/202216",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 4,
"answer_id": 2
} |
Finding error bounds for hermite interpolation I am unsure how to find the error bounds for Hermite interpolation. I have some kind of idea but I have a feeling that I am going wrong somewhere.
$f(x)=3xe^x-e^{2x}$ with my x-values being 1 and 1.05
My hermite interpolating polynomial is:
$H(x)=.7657893864+1.5313578773(x-1)-2.770468386(x-1)^2-4.83859508(x-1)^2(x-1.05)$
Error Bound:
$\large{f^{n+1}(\xi)\over (n+1)!}*(x-x_0)(x-x_1)\cdots(x-x_n)$
$$\large{f^3 (\xi) \over 3!}(x-1)^2(x-1.5)$$
$(x-1)^2(x-1.5)=x^3-3.05x^2+3.1x-1.05$
We must find the maximum point of this cubic function which is at $(1.0333,1.8518463*10^{-5})$
$$\large{f^3 (\xi) \over 3!}*1.8518463*10^{-5}$$
Am I on the correct path and How would I continue from here?
| I think that should be $(x-1)(x-1.05)$ instead of $(x-1)^2(x-1.5)$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/202285",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Why are Darboux integrals called Riemann integrals? As far as I have seen, the majority of modern introductory real analysis texts introduce Darboux integrals, not Riemann integrals. Indeed, many do not even mention Riemann integrals as they are actually defined (with Riemann sums as opposed to Darboux sums). However, they call the Darboux integrals Riemann integrals. Does anyone know the history behind this? I can understand why they use Darboux - I find it much more natural and the convergence is simpler in some sense (and of course the two are equivalent). But why do they call them Riemann integrals? Is this another mathematical misappropriation of credit or was Riemann perhaps more involved with Darboux integrals (which themselves may be misnamed)?
| There are other examples, such as "An introduction to Real Analysis" by Wade. I don't know the history of these definitions at all. Once the dust settles over partitions, we have just one concept of integral left. The term "Riemann integral" is entrenched in so much of the literature that not using it isn't an option. One could use the term "Darboux integral" alongside "Riemann integral", but most students taking Intro to Real Analysis are sufficiently confused already. Mentioning names for the sake of mentioning names isn't what a math textbook should be doing. That job is best left to books on history of mathematics.
If you feel bad for Darboux, be sure to give him credit for the theorem about intermediate values of the derivative. (Rudin proves the theorem, but attaches no name to it.)
On a similar note: If I had my way, there'd be no mention of Maclaurin in calculus textbooks. Input: the total time spent by calculus instructors explaining the Maclaurin-Taylor nomenclatorial conundrum to sleep-deprived engineering freshmen. Output:
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/202342",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "21",
"answer_count": 2,
"answer_id": 1
} |
Measuring orderedness I've found this a frustrating topic to Google, and might have an entire field dedicated to it that I'm unaware of.
Given a permutation of consecutive integers, I would like a "score" (a real number in $[0,1]$) that evaluates how in-order it is.
Clearly I could count the number of misplaced integers wrt the ordered array, or I could do a "merge sort" count of the number of swaps required to achieve order and normalise to the length of the array. Has this problem been considered before (I assume it has), and is there a summary of the advantages of various methods?
I also assume there is no "true" answer, but am interested in the possibilities.
| One book which treats metrics on permutations (that is, metrics on the symmetric group) is Persi Diaconis:"Group representations in probability and statistics"
which it is possible to download from here:
Link
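As a concrete starting point, here is the normalized inversion count (the Kendall tau distance to the identity, one of the metrics treated in that book) as a sketch of mine; here 0 means perfectly sorted and 1 means reversed, so use 1 minus the score if you prefer 1 to mean ordered:

```python
def disorder_score(perm):
    """Normalized inversion count: 0.0 for a sorted sequence, 1.0 for a reversed one."""
    n = len(perm)
    if n < 2:
        return 0.0
    inversions = sum(1 for i in range(n) for j in range(i + 1, n) if perm[i] > perm[j])
    return inversions / (n * (n - 1) // 2)   # divide by the maximum possible count

print(disorder_score([1, 2, 3, 4, 5]))   # 0.0
print(disorder_score([5, 4, 3, 2, 1]))   # 1.0
```

This is the same quantity as the normalized merge-sort swap count mentioned in the question, just computed naively in $O(n^2)$.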
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/202390",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 2
} |
Retraction of the Möbius strip to its boundary Prove that there is no retraction (i.e. a continuous map that restricts to the identity on the boundary) $r: M \rightarrow S^1 = \partial M$ where $M$ is the Möbius strip.
I've tried to find a contradiction using $r_*$ homomorphism between the fundamental groups, but they are both $\mathbb{Z}$ and nothing seems to go wrong...
| If $\alpha\in\pi_1(\partial M)$ is a generator, its image $i_*(\alpha)\in\pi_1(M)$ under the inclusion $i:\partial M\to M$ is the square of an element of $\pi_1(M)$, so that if $r:M\to\partial M$ is a retraction, $\alpha=r_*i_*(\alpha)$ is also the square of an element of $\pi_1(\partial M)$. This is not so.
(For all this to work, one has to pick a basepoint $x_0\in\partial M$ and use it to compute both $\pi_1(M)$ and $\pi_1(\partial M)$)
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/202447",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "16",
"answer_count": 5,
"answer_id": 2
} |
How to find sum of quadratic I got this quadratic function from physics that I need to find the sum of each term, up to whatever point. Written thusly:
$$ \sum_{n=1}^{t}4.945n^2$$
And is there someway to quickly figure this out? Or links to tutorials
| There is the standard formula
$$\sum_{k=1}^n k^2=\frac{n(n+1)(2n+1)}{6}.$$
It can be proved by a pretty routine induction.
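So for the constant $4.945$ the sum is $4.945\cdot\frac{t(t+1)(2t+1)}{6}$; here is a quick sketch of mine checking the closed form against a direct loop:

```python
def sum_closed_form(t, c=4.945):
    # sum_{n=1}^{t} c*n^2 = c * t*(t+1)*(2t+1)/6
    return c * t * (t + 1) * (2 * t + 1) / 6

def sum_direct(t, c=4.945):
    return sum(c * n * n for n in range(1, t + 1))

print(sum_closed_form(10, c=1))   # 385.0, the bare sum of the first ten squares
```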
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/202519",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Is there a largest "nested" prime number? There are some prime numbers which I will call "nested primes" that have a curious property: if the $n$ digit prime number $p$ is written out in base 10 notation $p=d_1d_2...d_n$, then the nested sequence formed by deleting the last digit one at a time consists entirely of prime numbers. The definition is best illustrated by an example, for which I will choose the number $3733799$: not only is $3733799$ prime, but so are $\{3,37,373,3733,37337,373379\}$. See here and here if you want to check.
Question: Does there exist a largest nested prime number, and if so, what is it?
| From the comments in OEIS A024770
Primes in which repeatedly deleting the least significant digit gives a prime at every step until a single digit prime remains. The sequence ends at $a(83) = 73939133.$
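This is easy to verify with a few lines; a sketch of mine for these "right-truncatable" primes (OEIS A024770):

```python
def is_prime(n):
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def is_nested_prime(n):
    # every number obtained by repeatedly deleting the last digit must be prime
    while n > 0:
        if not is_prime(n):
            return False
        n //= 10
    return True

print(is_nested_prime(3733799))    # True, the example from the question
print(is_nested_prime(73939133))   # True, and it is the largest such prime
```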
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/202558",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
} |
Showing a vertical tangent exists at a given function. I want to apologise in advance for not having this in latex or some sort of neat code, I would be more than happy to learn how though.
Anyway, for the function $y=4(x-1)^{2/5}$ I see there appears to be a vertical tangent at $x=1$, but how can I know for certain the vertical tangent exists at $x=1$? Would I just solve for $f'(x)$, letting $x=1$? But what would that tell me?
Thanks.
| Yes, you would check whether $f'(x)$ tends to $+\infty$ or $-\infty$ as $x \to 1$
$$
\frac{d}{dx}4(x-1)^{2/5}=\frac{8}{5}(x-1)^{-3/5}
$$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/202637",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 2
} |
What are Aleph numbers intuitively? I cannot get my head around the concept of the `types' of Aleph infinity. What is an easy intuitive way to see that, once you are given $\aleph_0$ (the cardinality of the integers), $\aleph_1$ will follow?
| The cardinals are the following ones:
$$0,1,2,3,4,5,6,\dots,\aleph_0,\aleph_1,\aleph_2, \aleph_3,\dots,\aleph_\omega,\aleph_{\omega+1},\dots,\aleph_{\omega2},\aleph_{\omega2+1},\dots $$
Where $\aleph_0$ is the first infinite cardinal (the cardinality of each infinite countable set), so $\aleph_0\notin\mathbb N$, is not said to be an integer in the ordinary way. Then $\aleph_1$ is the next cardinal, and so on...
(and this "and so on..." also includes some knowledge about the ordinals).
By Cantor's theorem ($|P(A)| > |A|$ for all sets $A$) we have that for every cardinal there is a bigger cardinal. By the well foundedness and the axiom of choice in ZFC, we have that every cardinal is a cardinal of a well-ordered set (which is in bijection to some ordinal), and it follows that there is always a next cardinal..
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/202727",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "14",
"answer_count": 3,
"answer_id": 0
} |
Solve a simultaneous equation. How do we solve $|b-y|=b+y-2$ and $|b+y|=b+2$? I have tried to square them and factorize them but got confused by the resulting and/or case conditions.
| $2\min (b,y)=b+y-|b-y|=2$ so that $\min (b,y)=1$. This implies that $b$ and $y$ are both positive so that $b+y=b+2$. Hence $y=2$ and $b=1$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/202791",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
} |
Locus perpendicular to a plane in $\mathcal{R}^4$ I have solved an exercise but I'm not sure to have solved it perfectly. Could you check it? It's very important for me..
In $\mathcal{R}^4$ I have a plane $\pi$ and a point P. I have to find the locus of Q points such that line PQ is perpendicular to $\pi$.
$$\pi:\begin{cases} 3x+y-z-q+1=0\\ -x-y+z+2q=0 \end{cases}$$
$P=(0,1,1,0)$
$P$ doesn't belong to $\pi$ because it doesn't satisfy first equation.
If $PQ$ line has to be $\perp$ to $\pi$, the vector of direction of $PQ$ must be perpendicular to spanners of the plane.
Spanners of plane are:
$v_1=(1, -5, 0, -2)$ and $v_2=(0, 1, 1, 0)$
Then, $PQ \cdot v_1=0$ and $PQ\cdot v_2=0$
But I can write $PQ=OQ-OP$
and so I obtain $\begin{cases} (OQ-OP)\cdot v_1=0 \\(OQ-OP)\cdot v_2=0\end{cases}$
But inner product is a bilinear form and so I can also write:
$\begin{cases} OQ\cdot v_1 - OP\cdot v_1=0 \\OQ\cdot v_2-OP\cdot v_2=0\end{cases}$
I can calculate $OP\cdot v_1 $ and $OP\cdot v_2$ and, if $OQ=(x, y, z, q)$, I obtain:
$\begin{cases} (x, y, z, q) \cdot (1, -5, 0, -2)=(0,1,1,0) \cdot (1,-5,0,-2) \\ (x,y,z,q) \cdot (0,1,1,0)=(0,1,1,0,) \cdot (0,1,1,0) \end{cases}$
$\begin{cases}x-5y-2q=-5 \\
y+z=2 \end{cases}$
I have solved this equations set and I have obtained that $OQ= t(1, 1/5, -1/5, 0) + s(0,-2/5, 2/5, 1) + (0, 1, 1, 0)$ (with $t, s$ belonging to $\mathbb {R}$).
And so I can say that my locus is given by $Q$ points having coordinates given by $t(1, 1/5, -1/5, 0) + s(0,-2/5, 2/5, 1) + (0, 1, 1, 0)$.
My locus is given by vectors perpendicular to $\pi$ and so I can say that my locus is the orthogonal complement of $\pi$, so it is a plane.
Is it all correct? Please, signal me all things that you think are wrong. Thank you!
EDIT:
I can develope the exercise in another way saying that if $PQ$ has to be perpendicular to $\pi$, $PQ$ has to be perpendicular to any vectors of $\pi$ and so $PQ$ belongs to orthogonal complement of $\pi$, so PQ is given by:
$PQ=t(w_1)+s(w_2)$ where $w_1$ and $w_2$ are spanners of othogonal complement of $\pi$. Now I can obtain that $Q=P+t(w_1)+s(w_2)$. Is it correct? Thanks again
| It all seems good.
But, you jumped one step: how did you find your $v_1$ and $v_2$ spanner vectors of $\pi$?
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/202866",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Need a hint: show that a subset $E \subset \mathbb{R}$ with no limit points is at most countable. I'm stuck on the following real-analysis problem and could use a hint:
Consider $\mathbb{R}$ with the standard metric. Let $E \subset \mathbb{R}$ be a subset which has no limit points. Show that $E$ is at most countable.
I'm primarily confused about how to go about showing that this set $E$ is at most countable (i.e. finite or countable).
What I can show: since $E$ has no limit points, I can show that for every $x \in E$, there is a neighborhood $N_{r_x}(x)$ where $r_x > 0$ that does not contain any other point $y \in E$ where $y \neq x$. This suffices to show that every point within x is an isolated point.
| Proof by contradiction.
Suppose $E$ is uncountable, and consider the sets $E \cap [n,n+1]$ for integers $n$.
Then there exists at least one $n$ for which such a set is uncountable. That set is clearly bounded in the real numbers, so by the Bolzano-Weierstrass theorem it has a limit point in $\mathbb{R}$,
a contradiction to the hypothesis.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/202943",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 8,
"answer_id": 2
} |
What is the number of combinations of the solutions to $a+b+c=7$ in $\mathbb{N}$? My professor gave me this problem:
Find the number of combinations of the integer solutions to the equation $a+b+c=7$ using combinatorics.
Thank you.
UPDATE
Positive solutions
| Is this a hoax?
Perhaps I should put that differently. What institution are you studying at?
Was that the whole question? Or was there a part two asking the same thing but =702 or some other rather bigger number?
I ask these questions because the question as posed is absolutely trivial. The only tricky point is deciding what "integers" means. If he/she really wrote just "integers" then your professor is either dumb or careless, because the answer is obviously infinite. If he didn't mean that, then it is ambiguous, as Henry pointed out.
Second, assuming he meant natural numbers, he is a bad setter of questions. Who wants to "use combinatorics" when they can just list the solutions, as Dotanooblet did! Well, if you have it at your fingertips, combinatorics is slightly faster, but in an exam I would list (if "use combinatorics" was not in the question), because it requires less thought and allows one to mull over the other questions at the same time.
Third, the approach set out by Henry above is absolutely standard bookwork. There is nothing remotely tricky or hard about it.
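For completeness, the standard stars-and-bars counts, checked by brute force (my own sketch, not the professor's intended write-up):

```python
from math import comb

def count_solutions(total=7, parts=3, positive=True):
    lo = 1 if positive else 0
    return sum(1 for a in range(lo, total + 1)
                 for b in range(lo, total + 1)
                 if total - a - b >= lo)

# stars and bars: C(n-1, k-1) positive solutions, C(n+k-1, k-1) nonnegative ones
assert count_solutions(positive=True) == comb(6, 2)    # 15
assert count_solutions(positive=False) == comb(9, 2)   # 36
print(count_solutions())                               # 15
```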
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/203014",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 2
} |
Determinant with unknown parameter. I'm given 4 vectors: $u_1, u_2, u_3$ and $u_4$. I'm going to type them in as points, because it will be easier to read, but think as them as column vectors.
$$u_1 =( 5, λ, λ, λ), \hspace{10pt} u_2 =( λ, 5, λ, λ), \hspace{10pt} u_3 =( λ, λ, 5, λ), \hspace{10pt}u_4 =( λ, λ, λ, 5)$$
The task is to calculate the values of λ for which the vectors are linearly dependent, as well as those for which they are linearly independent.
I managed to figure out that I could put them in a matrix, let's call it $A$, and set $det(A) = 0$ if the vectors should be linearly dependent, and $det(A) \neq 0$ if the vectors should be linearly independent.
Some help to put me in the right direction would be great!
| If $\lambda=0$, the vectors are clearly linearly independent.
If $\lambda\ne0$, we can divide through by $\lambda$ without affecting whether the determinant vanishes; this yields
$$
\pmatrix{\frac5\lambda&1&1&1\\1&\frac5\lambda&1&1\\1&1&\frac5\lambda&1\\1&1&1&\frac5\lambda}\;.
$$
Thus the values of $\lambda$ for which the determinant vanishes are those for which
$$
\frac5\lambda=1-\mu_i\;,
$$
where $\mu_i$ is an eigenvalue of
$$
\pmatrix{1&1&1&1\\1&1&1&1\\1&1&1&1\\1&1&1&1}\;.
$$
This matrix annihilates all vectors whose components sum to $0$, so it has an eigenspace of dimension $4-1=3$ corresponding to the eigenvalue $0$, and thus a triple eigenvalue $0$. Since it is symmetric, its four eigenvectors can be chosen to form an orthonormal system, so the fourth eigenvector is a vector orthogonal to that eigenspace, e.g. $(1,1,1,1)$, which corresponds to eigenvalue $4$.
Thus we have the two possibilities $5/\lambda=1-0$, corresponding to $\lambda=5$, and $5/\lambda=1-4$, corresponding to $\lambda=-5/3$. All other values of $\lambda$ lead to linearly independent vectors.
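A quick numerical confirmation (my own sketch, using a plain recursive determinant so that no libraries are needed):

```python
def det(m):
    # Laplace expansion along the first row, fine for a 4x4 matrix
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j] * det([row[:j] + row[j + 1:] for row in m[1:]])
               for j in range(len(m)))

def A(lam):
    return [[5 if i == j else lam for j in range(4)] for i in range(4)]

print(det(A(5)))       # 0: dependent
print(det(A(-5 / 3)))  # ~0 up to floating-point rounding: dependent
print(det(A(2)))       # 297: independent (in general det = (5-lam)^3 (5+3*lam))
```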
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/203066",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 5,
"answer_id": 2
} |
Curves that intersect all exponential functions precisely once? Let $f(x)$ be a function on the positive real line. Suppose that for all nonnegative reals $A$, $Ae^{x}$ intersects $f(x)$ exactly once. Is there a simple description of the set of functions $f$ which satisfy this property?
| I presume that by a function on the positive real line you mean a function from the positive real line to the positive real line.
The condition is equivalent to $f(x)\mathrm e^{-x}$ taking every value $A$ exactly once. If $f$ is continuous, this is equivalent to $f(x)\mathrm e^{-x}$ either increasing monotonically with $\lim_{x\to0}f(x)\mathrm e^{-x}=0$ and $\lim_{x\to\infty}f(x)\mathrm e^{-x}=\infty$, or decreasing monotonically with $\lim_{x\to0}f(x)\mathrm e^{-x}=\infty$ and $\lim_{x\to\infty}f(x)\mathrm e^{-x}=0$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/203151",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Question regarding what appears to be an identity This is an MCQ we were posed in school recently (I hope you don't mind elementary stuff):
What is $(x-a)(x-b)(x-c)...(x-z)$ ?
Options:
$0$
$1$
$2$
$(x^n)-(abcdef...z)$
| Hint $\ $ What is $\rm\ (24-1)(24-2)(24-3)\cdots (24-26)\ $ ?
And what is $\rm\,(x_{24}\!-x_1)(x_{24}\!-x_2)(x_{24}\!-x_3)\cdots (x_{24}\!-x_{26})\ $ ?
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/203208",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
} |
Finding the maximum height a ball can be dropped from and still collide with a ball thrown below it I don't necessarily need a specific answer, but I could use a hint, direction, or maybe some reading material.
The question states:
A rubber ball is shot straight up from the ground with speed $V(0)$. Simultaneously, a second rubber ball at height $h$ directly above the first ball is dropped from rest.
a) At what height above the ground do both balls collide. Your answer will be an algebraic expression in terms of $h$, $V(0)$, and $g$.
b) What is the maximum value of $h$ for which a collision occurs before the first ball falls back to the ground?
c) For what value of $h$ does the collision occur at the instant when the first ball is at its highest point?
I have solved A and found the answer to be:
$$d = h - \frac{gh}{2V(0)}$$
EDIT: After redoing the problem, the answer comes out to be:
$$d=h-\frac{gh^2}{2V(0)^2}$$
I believe this is correct, but I am completely stumped on questions b and c.
EDIT: I'm also not sure whether or not $h$ is meant to represent the height the second ball starts at or the distance between the balls at any given moment. It seems I haven't solved the first part correctly, so I'll post my work on it:
I plugged the variables I received into the function:
$$\text{Distance} = d_0 + V_0t + \frac{1}{2}at^2$$
(where $d_0$ is the starting distance, $V_0$ is the starting velocity, $a$ is acceleration, $t$ is time), to describe the position functions for the two balls as:
$$d_0 = V(0)t - \frac{1}{2}gt^2$$
$$d_1 = h - \frac{1}{2}gt^2$$
I set $d_0 = d_1$, but I'm not sure what I should be solving for. I can solve it as:
$$V(0)t = h$$
But I can't find any kind of use for this.
| First, doublecheck your answer. Dimensionally, it doesn't make sense. $gh/v_0$ doesn't have dimensions of length.
Next, you find that they collide at some height $d$, and the problem requires that $d\ge 0$. This inequality is solved for certain values of $h$.
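To make the hint concrete, here is a quick numerical sanity check in Python (the values of $g$, $v_0$, $h$ below are arbitrary sample numbers, not part of the problem):

```python
g, v0, h = 9.8, 10.0, 5.0          # arbitrary sample values

# The two position functions share the -g*t^2/2 term, so the balls meet
# exactly when v0*t = h, i.e. at t = h/v0.
t = h / v0
y_up = v0 * t - 0.5 * g * t**2     # ball shot upward from the ground
y_down = h - 0.5 * g * t**2        # ball dropped from height h
assert y_up == y_down

# Part (a): collision height d = h - g*h^2 / (2*v0^2)
d = h - g * h**2 / (2 * v0**2)
assert abs(d - y_up) < 1e-9

# Part (b): requiring d >= 0 gives h <= 2*v0^2/g; at that h_max the
# collision happens exactly at the ground (d = 0).
h_max = 2 * v0**2 / g
assert abs(h_max - g * h_max**2 / (2 * v0**2)) < 1e-9

# Part (c): the first ball's apex is at t = v0/g; with t = h/v0 this
# means the collision is at the apex when h = v0^2/g.
```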
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/203280",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Where is the difference between the union and sum of sets? My book writes:
Definition. A vector space $V$ is called the direct sum of $W_1$ and $W_2$ if $W_1$ and $W_2$ are subspaces of $V$ such that $W_1 \cap W_2=\{0\}$ and $W_1 + W_2 = V$. We denote that $V$ is the direct sum of $W_1$ and $W_2$ by writing $V=W_1\oplus W_2$.
I'm not sure what I should imagine $W_1 + W_2 = V$ as.
Thank you for any help !
| Take this example to clarify the difference: $$V=\mathbb{R}^{2}$$ $$W_{1}=sp_{\mathbb{R}}\{(1,0)\}=\{(a,0)|a\in\mathbb{R}\}$$ $$W_{2}=sp_{\mathbb{R}}\{(0,1)\}=\{(0,b)|b\in\mathbb{R}\}$$
Then,
$$W_{1}+W_{2}=\{w_{1}+w_{2}|w_{i}\in W_{i}\}=\{(a,0)+(0,b)|a,b\in\mathbb{R}\}=\{(a,b)|a,b\in\mathbb{R}\}$$
but,
$$W_{1}\cup W_{2}=\{v_{1}|v_{1}\in W_{1}\}\cup\{v_{2}|v_{2}\in W_{2}\}$$
and this set consists of all elements of the form $(a,0)$ and
$(0,b)$ (where $a,b\in\mathbb{R})$ but, for example, $(1,1)\not\in W_{1}\cup W_{2}$.
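A tiny computational illustration of the same point, restricting $a,b$ to a small integer range so the subspaces become finite sets we can compare directly:

```python
rng = range(-3, 4)                     # finite stand-in for the real coefficients
W1 = {(a, 0) for a in rng}
W2 = {(0, b) for b in rng}

W_sum = {(a + c, b + d) for (a, b) in W1 for (c, d) in W2}
W_union = W1 | W2

assert (1, 1) in W_sum                 # one vector from each subspace, added
assert (1, 1) not in W_union           # the union contains only axis points
```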
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/203382",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 2,
"answer_id": 1
} |
Help me prove $\sqrt{1+i\sqrt 3}+\sqrt{1-i\sqrt 3}=\sqrt 6$ Please help me prove this Leibniz equation: $\sqrt{1+i\sqrt 3}+\sqrt{1-i\sqrt 3}=\sqrt 6$. Thanks!
| Use $\sqrt{1\pm i\sqrt 3}=\sqrt{2}e^{\pm i\pi/6}$ (EDIT we are picking the principal branch here) to get
$$
\sqrt{2}\left( e^{i\pi/6}+e^{-i\pi/6}\right)=2\sqrt{2}\cos(\pi /6)=2\sqrt{2}\frac{\sqrt{3}}2=\sqrt{6}
$$
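A quick numerical check with Python's `cmath`, whose `sqrt` uses the same principal branch chosen in the answer:

```python
import cmath
import math

lhs = cmath.sqrt(1 + 1j * math.sqrt(3)) + cmath.sqrt(1 - 1j * math.sqrt(3))
assert abs(lhs.real - math.sqrt(6)) < 1e-12   # real part is sqrt(6)
assert abs(lhs.imag) < 1e-12                  # imaginary parts cancel
```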
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/203462",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "15",
"answer_count": 6,
"answer_id": 1
} |
Is this function Lipschitz? Let $f:X \rightarrow \mathbb R$ be a Lipschitz function on a metric space $X$ and $K<M$ be some constants.
Is the following function $g:X\rightarrow \mathbb R$ Lipschitz:
$$
g(x)=f(x) \textrm{ if } \ K \leq f(x) \leq M,
$$
$$
g(x)=K \textrm { if } \ f(x)<K,
$$
$$
g(x)=M \textrm{ if } \ f(x)>M.
$$
Thanks
| You can write your function as $\min(\max(f(x), K), M)$. Note that the composition of Lipschitz functions is Lipschitz. So you just have to show that the functions $\min(x,K)$ and $\max(x,M)$ are Lipschitz on ${\mathbb R}$, which is not hard.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/203513",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 1
} |
Writing "$\nabla f$" or "$\operatorname{grad} f$" When hand-writing the gradient of $f$ as "$\nabla f$" or "grad $f$", is it necessary to indicate that it is a vector using the usual vector markings (cap, arrow, wavy line, etc.)?
| It should be considered obligatory to write, for example $\vec{a}$ or $\mathbf{a}$, when you're writing in a context in which vectors and vector-valued functions are generally written that way. But that is not always done. The style should be consistent throughout the document.
However, notice that the $f$ in $\nabla f$ is scalar-valued. The expression $\nabla f$ is vector-valued, and that is indicated by the meanings of the symbols.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/203578",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 1,
"answer_id": 0
} |
Stuck with solving a polynomial I am doing a problem for homework that says:
Suppose $s(x)=3x^3-2$. Write the expression $\frac{s(2+x)-s(2)}{x}$ as a sum of terms, each of which is a constant times power of $x$.
I was able to do the following work for this problem:
$\frac{3(2+x)^3-3(2)^3-2}{x}$
$\frac{3(x^3+6x^2+12x+8)-24-2}{x}$
$\frac {3x^3+18x^2+36x+24-24-2}{x}$
$\frac {3x^3+18x^2+36x-2}{x}$
This is where I got stuck. I am not sure what I am supposed to do next. The multiple choice answers are:
a) $2x^2-36x+18$
b) $3x^2+18x+36$
c) $18x^2+18x+36$
d) $x^3+18x^2+36x$
e) $-3x^2-18x-36$
The closest answer to the answer I got was d, does anyone know how I would solve this?
| You went astray at the first step, when you got $$\frac{3(2+x)^3-3(2)^3-2}{x}\;;$$ in fact
$$s(2+x)-s(2)=\Big(3(2+x)^3-2\Big)-\Big(3\cdot2^3-2\Big)=3(2+x)^3-3\cdot2^3\;.$$
Can you straighten the rest out from there? When you do it correctly, there will be no constant term in the numerator.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/203655",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Proving that $x^3 +1=15x$ has at most three solutions in the interval $[-4,4]$. I need someone to check my work. Thanks! This is a 2-mark homework question, by the way. I am not sure why I am using such a long way to prove it. Is there a way to shorten it, or a more intuitive method?
Proving that $x^3 +1=15x$ has at most three solutions in the interval $[-4,4]$.
Let $f(x)=x^3+1-15x$
Suppose, for a contradiction, that this equation has at least 4 solutions $a<b<c<d$, so that $f(a)=0,f(b)=0,f(c)=0,f(d)=0$. Since $f$ is continuous and differentiable on $\mathbb{R}$, by Rolle's Theorem there exist $c_1 \in (a,b)$, $c_2 \in (b,c)$, $c_3 \in (c,d)$ such that $f^\prime(c_1)=0,f^\prime(c_2)=0,f^\prime(c_3)=0$
$f^\prime(x)=3x^2-15$
Moreover, if $f^\prime(x)$ has 3 solutions, by the Rolle's Theorem, Since f is continuous and differentiable on $x\in\mathbb{R}$, there exist a $d_1 \in (c_1,c_2) , d_2 \in (c_2,c_3)$, such that $f^{\prime\prime}(d_1)=0,f^{\prime\prime}(d_2)=0$
$f^{\prime\prime}(x)=6x$
Moreover, if $f^{\prime\prime}(x)$ has 2 solutions, by the Rolle's Theorem, Since f is continuous and differentiable on $x\in\mathbb{R}$, there exist a $e_1 \in (d_1,d_2)$, such that $f^{\prime\prime\prime}(e_1)=0$
$f^{\prime\prime\prime}(x)=6$
This implies that $f^{\prime\prime\prime}(e_1)=0$ and $f^{\prime\prime\prime}(e_1)=6$, a contradiction. The same steps apply to any 4 of the solutions if $f(x)=x^3+1-15x$ has 5 or more, so we still reach a contradiction. Therefore the assumption is false, i.e. $f(x)=x^3+1-15x$ has at most 3 solutions.
| Degree 3 polynomials have exactly 3 roots counted with multiplicity, some of which could be complex. If the equation had more than 3 solutions in your interval, you would get a contradiction with the fundamental theorem of algebra.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/203711",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 6,
"answer_id": 1
} |
How many bits are in a factorial? I am interested in good integer approximations from below and from above for the binary $\log(N!)$. Related questions provide only a general idea but not exact values.
In other words I need integers A and B so that A <= Log(N!) <= B
| Expanding on joriki's answer, taking more terms from the approximation,
$$ \log (n!) = n(\log n - \log e) + \frac{1}{2}\log n + \log \sqrt{2\pi} + \frac{\log e}{C_n}, \quad 12n < C_n < 12n+1. $$
The number of binary digits is $\lfloor \log n! \rfloor + 1$ (which equals $\lceil \log n! \rceil$ except when $n!$ is a power of $2$), and for most $n$, I expect that the slight uncertainty in $C_n$ won't affect the result.
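A small Python sketch of the resulting integer bounds, using Robbins' two-sided form of Stirling's approximation quoted above (the bound names `A`, `B` match the question's notation):

```python
import math

def bit_bounds(n):
    """Integer bounds A <= floor(log2(n!)) <= B from Robbins' form of Stirling."""
    base = n * math.log(n) - n + 0.5 * math.log(2 * math.pi * n)
    lo = (base + 1 / (12 * n + 1)) / math.log(2)   # lower bound on log2(n!)
    hi = (base + 1 / (12 * n)) / math.log(2)       # upper bound on log2(n!)
    return math.floor(lo), math.ceil(hi)

for n in range(2, 300):
    A, B = bit_bounds(n)
    assert A <= math.factorial(n).bit_length() - 1 <= B   # exact floor(log2(n!))
```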
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/203767",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
} |
Solving for x with exponents (algebra) So I am trying to help a friend do her homework and I am a bit stuck.
$$8x+3 = 3x^2$$
I can look at this and see that the answer is $3$, but I am having a hard time remembering how to solve for $x$ in this situation.
Could someone be so kind as to break down the steps in solving for $x$.
Thanks in advance for replies.
| This is a quadratic equation: the highest power of the unknown is $2$. Rearrange it to bring everything to one side of the equation:
$$3x^2-8x-3=0\;.$$
If you can easily factor the resulting expression, you can take a shortcut, but otherwise you either complete the square or use the quadratic formula.
Completing the square relies on the fact that $(x+a)^2=x^2+2ax+a^2$. First factor out the coefficient of $x^2$:
$$3x^2-8x-3=3\left(x^2-\frac83x-1\right)\;.\tag{1}$$
Now notice that if you set $a=-\dfrac{8/3}2=-\dfrac43$, you’ll have $$(x+a)^2=\left(x-\frac43\right)^2=x^2-\frac83x+\frac{16}9\;,$$ which agrees in all but the constant term with the expression in parentheses in $(1)$. Thus,
$$x^2-\frac83x-1=\left(x-\frac43\right)^2-\frac{16}9-1=\left(x-\frac43\right)^2-\frac{25}9\;,$$
and on substituting back into $(1)$ we have
$$0=3x^2-8x-3=3\left(x-\frac43\right)^2-\frac{25}3\;.$$ Rearranging this gives us
$$3\left(x-\frac43\right)^2=\frac{25}3\;,$$ or $$\left(x-\frac43\right)^2=\frac{25}9\;.$$ Finally, taking the square root on both sides and remembering that there are two square roots, one positive and one negative, we get
$$x-\frac43=\pm\frac53$$ and therefore $$x=\frac53+\frac43=\frac93=3\quad\text{or}\quad x=-\frac53+\frac43=-\frac13\;.$$
The quadratic formula can be derived by applying the method of completing the square to the general quadratic equation $ax^2+bx+c=0$. The result is that
$$x=\frac{-b\pm\sqrt{b^2-4ac}}{2a}\;.$$
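For completeness, the same roots computed mechanically with the quadratic formula (a quick Python check):

```python
import math

a, b, c = 3, -8, -3                     # 3x^2 - 8x - 3 = 0
disc = b * b - 4 * a * c                # 100
r1 = (-b + math.sqrt(disc)) / (2 * a)   # 3.0
r2 = (-b - math.sqrt(disc)) / (2 * a)   # -1/3
assert abs(r1 - 3) < 1e-12 and abs(r2 + 1 / 3) < 1e-12
```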
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/203841",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 5,
"answer_id": 1
} |
Why does this equation have different number of answers? I have a simple equation:
$$\frac{x}{x-3} - \frac{2}{x-1} = \frac{4}{x^2-4x+3}$$
By looking at it, one can easily see that $x \not= 1$ because that would cause $\frac{2}{x-1} $ to become $\frac{2}{0}$, which is illegal.
However, if you do some magic with it. First I factorized the last denominator to be able to simplify this:
$$\frac{-b\pm\sqrt{b^2-4ac}}{2a}$$
$$\frac{-(-4)\pm\sqrt{(-4)^2-4\times1\times3}}{2 \times 1}$$
$$x=1 \vee x=3$$
Then we can multiply everything with the common factor, which is $(x-1)(x-3)$ and get:
$$x(x-1) - 2(x-3) - 4 = 0$$
If we multiply out these brackets, we get:
$$x^2-x-2x+6-4=0$$
$$x^2-3x+2=0$$
The quadratic formula gives $x = 1 \vee x=2$. We already know that $x$ CANNOT equal to 1, but we still get it as an answer. Have I done anything wrong here, because as I see it, this is the same as saying that:
$$\frac{x}{x-3} - \frac{2}{x-1} = \frac{4}{x^2-4x+3}$$
$$=$$
$$x(x-1) - 2(x-3) - 4 = 0$$
which cannot be true, because the two doesn't have the same answers. What am I missing here?
| If $\dfrac AB = 0$ then $A=0\cdot B$. But you can't say that if $A=0\cdot B$ then $\dfrac AB=0$ unless you know that $B\ne 0$. So if $A$ and $B$ are complicated expressions that can be solved for $x$, there may be values of $x$ that make $B$ equal to $0$, and if they also make $A$ equal to $0$, then they are solutions of the equation $A=0\cdot B$, but not of the equation $\dfrac AB=0$.
"If P then Q" is not the same as "If Q then P".
Another way of putting it is that this explains why "clearing fractions" is one of the operations that can introduce "extraneous roots". Perhaps more well known is that squaring both sides of an equation can do that.
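You can see the extraneous root concretely by substituting the candidates back into the original equation (Python):

```python
def residual(x):
    # left side minus right side of the original equation
    return x / (x - 3) - 2 / (x - 1) - 4 / (x**2 - 4 * x + 3)

assert residual(2) == 0        # x = 2 genuinely solves the equation

try:
    residual(1)                # x = 1 is extraneous: the equation isn't even defined there
    raise AssertionError("expected a division by zero at x = 1")
except ZeroDivisionError:
    pass
```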
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/203963",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 1,
"answer_id": 0
} |
Cycling Digits puzzle I'm trying to answer the following:
"I have in mind a number which, when you remove the units digit and place it at the front, gives the same result as multiplying the original number by $2$. Am I telling the truth?"
I think the answer to that is no. It's easy to prove that it's false for numbers with two digits: Let $N = d_0 + 10 \cdot d_1$. Then $2N = 2 d_0 + 20 d_1$ and the "swapped" number is $N^\prime = d_1 + 10 d_0$. We would like to have $2d_0 + 20 d_1 = d_1 + 10d_0$, which amounts to $8d_0 = 19d_1$. The smallest values for which this equality is fulfilled are $d_0 = 19, d_1 = 8$, but $19$ is not $\leq 9$, that is, it is not a digit, hence there is no solution.
Using the same argument I can show that the claim is false for $3$-digit numbers.
I conjecture that it's false for all numbers. How can I show that? Is there a more general argument than mine, for all numbers? Thanks for helps.
| If you follow your argument but let $N=a+10b$ where $a$ is a single digit but let $b$ have $n$ digits, then $2N=10^na+b$ and you get $b=\frac {10^n-2}{19}a$ If $n=17$, this is integral. Then $a$ has to be at least $2$ to make $b$ have $17$ digits. The smallest solution is $105263157894736842$
Another way to get there is to just start multiplying. If you guess that the ones digit is $2$, you double it the ones digit of the product will be $4$, which will be the tens digit of the first number, and so on. Stop when the product starts with a $2$ and doesn't carry. You get $$105263157894736842 \\ \underline {\times\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ 2} \\ 210526315789473684$$ If you had started with a $1$, you would miss the leading zero.
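A short Python check of both the formula $b=\frac{10^{17}-2}{19}a$ and the rotation property:

```python
n, a = 17, 2                          # 17 is the smallest n with 19 | 10**n - 2
assert (10**n - 2) % 19 == 0
b = (10**n - 2) * a // 19             # the other 17 digits
N = a + 10 * b
assert N == 105263157894736842

s = str(N)
assert int(s[-1] + s[:-1]) == 2 * N   # moving the units digit to the front doubles N
```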
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/204090",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 2,
"answer_id": 0
} |
Find arc length of a circle using a hyperbolic metric
Given the hyperbolic metric $ds^2=\frac{dx^2+dy^2}{x^2}$ on the half plane $x > 0$, find the length of the arc of the circle $x^2+y^2=1$ from $(\cos\alpha,\sin\alpha)$ to $(\cos \beta, \sin\beta)$
I found that $ds^2=\displaystyle\frac{d\theta^2}{\cos^2\theta}$ but when I try to plug in $\pi/3, -\pi/3$, which should give me the arc length of $2\pi/3$,
I get $4\pi/3=\sqrt{\displaystyle\frac{(\pi/3-(-\pi/3))^2}{\cos^2(\pi/3)}}$
I feel like I'm making a simple mistake but I cant place it
| The circle $x^2 + y^2 = 1$ can be parametrised by $(\cos \theta, \sin \theta)$. If $x(\theta) = \cos \theta$ and $y(\theta) = \sin \theta$ then
$$ds^2 = \frac{dx^2+dy^2}{x^2} = \frac{(\sin^2\theta+\cos^2\theta) \, d\theta^2}{\cos^2\theta} = \sec^2\theta \, d\theta^2.$$
The arc-length that you are interested in is given by:
$$s = \int \sqrt{ds^2} = \int_{\alpha}^{\beta} |\sec \theta| \, d\theta \, . $$
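As a numerical cross-check, Simpson's rule applied to $\int|\sec\theta|\,d\theta$ reproduces the closed form coming from the antiderivative $\ln|\sec\theta+\tan\theta|$ (here for the sample endpoints $\alpha=-\pi/3$, $\beta=\pi/3$):

```python
import math

alpha, beta = -math.pi / 3, math.pi / 3
m = 2000                                    # number of subintervals (even)
h = (beta - alpha) / m
total = 0.0
for i in range(m + 1):
    theta = alpha + i * h
    w = 1 if i in (0, m) else (4 if i % 2 == 1 else 2)
    total += w * abs(1 / math.cos(theta))
arc_len = total * h / 3

closed_form = 2 * math.log(2 + math.sqrt(3))    # ln|sec+tan| at the endpoints
assert abs(arc_len - closed_form) < 1e-8
```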
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/204136",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
sets and functions proof help I started to get excited about mathematical analysis. So I bought a mathematical analysis book and started to study. But because of the reason that, book do not have solutions I do not have an idea where to start and how to prove the following:
Let $A_t$, $t \in T$, be a family of sets, and let $X$ be a set. Prove the identities:
$$X \setminus \bigcup A_t = \bigcap (X\setminus A_t)$$
$$X \setminus \bigcap A_t = \bigcup(X\setminus A_t) $$
Could you please help me? Also I need a book recommendation about mathematical analysis which goes like a theorem and its proof, a theorem and its proof.. Do you have any suggestions?
Regards
| Let's look at the case there $|T| = 1$, i.e. that the family of sets has only one element.
Then $X\setminus \bigcup A_t = X\setminus A_1$, which is trivially equal to $\bigcap (X\setminus A_t) = X\setminus A_1$.
What about when $|T| = 2$?
Then $X \setminus (A_1 \cup A_2)$ is the set of $x \in X$ such that $x \not \in A_1, A_2$.
$X \setminus A_1$ is the set of $x \in X$ such that $x\not \in A_1$, and $X \setminus A_2$ is the set of $x \in X$ such that $x \not \in A_2$. This means that $\bigcap (X \setminus A_t)$ is the set of $x \in X$ such that $x \not \in A_1$ and such that $x \not \in A_2$. Equivalently, this is the set of $x \in X$ such that $x \not \in A_1, A_2$. My gosh, this is the same as the set above!
Now you might look back at this and say, where did we use our ability to enumerate the sets?
Do you see how to finish from here?
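If it helps to build confidence first, these De Morgan laws are easy to spot-check on random finite families in Python (this is evidence, not a proof):

```python
import random

random.seed(0)
X = set(range(30))
family = [set(random.sample(range(30), 10)) for _ in range(5)]

# X \ union(A_t) == intersection(X \ A_t)
assert X - set().union(*family) == set.intersection(*(X - A for A in family))
# X \ intersection(A_t) == union(X \ A_t)
assert X - set.intersection(*family) == set().union(*(X - A for A in family))
```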
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/204211",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Polynomials and factoring in $\mathbb{Z}[x]$
Show that any polynomial $p(x) \in \mathbb{Q}[x]$ can be written as $p(x) = tq(x)$ where $t \in \mathbb{Q}$ and $q(x) \in \mathbb{Z}[x]$ is primitive.
I started my proof by defining $p(x)$ as $(\frac{q}{r})_n x^n + \dots + (\frac{q}{r})_0$. Then I defined $t \in \mathbb{Q}$ as the product of greatest common factor of the coefficients of $p(x)$. I don't think this will work. How can I guarantee that after dividing $p(x)$ by $t$ I will get a primitive polynomial?
| You presumably brought the coefficients of $p(x)$ to some common denominator $r$, where $r$ is an integer. This can certainly be done.
So now the coefficients in the numerator are integers, say $b_n$ down to $b_0$. Let $d$ be the gcd of all of these, and let $c_i=b_i/d$. Then $p(x)$ is $\frac{d}{r}$ times the primitive polynomial $c_nx^n+\cdots +c_0$.
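The two-step recipe above (clear denominators, then divide out the content) is easy to mechanize. A small Python sketch using `fractions.Fraction`; the function name is mine, and it assumes the coefficient list is nonempty with at least one nonzero entry:

```python
from fractions import Fraction
from math import gcd, lcm

def primitive_form(coeffs):
    """Split a nonzero p(x) with Fraction coefficients into t * (primitive q)."""
    r = lcm(*(c.denominator for c in coeffs))          # common denominator
    ints = [c.numerator * (r // c.denominator) for c in coeffs]
    d = gcd(*ints)                                     # content of the numerators
    return Fraction(d, r), [b // d for b in ints]

t, q = primitive_form([Fraction(3, 4), Fraction(5, 6), Fraction(1, 2)])
assert t == Fraction(1, 12) and q == [9, 10, 6]
assert [t * c for c in q] == [Fraction(3, 4), Fraction(5, 6), Fraction(1, 2)]
```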
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/204287",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
How does this prove AM-GM? Here is an extract from A less than B. The author claims to imply the AM-GM inequality with this reasoning, but I can't see how. So far the author has covered AM-GM, convexity, the smoothing principle and Jensen's inequality.
"Theorem 4: Let $f$ be a twice-differentiable function on an open interval $I$. Then $f$ is convex on $I$ if and only if $f''(x)\ge 0$ for all $ x \in I$.
For example, the AM-GM inequality can be proved by noting that $f(x)=\log(x)$ is concave; its first derivative is $1/x$ and its second $-1/x^2$. In fact, one immediately deduces a weighted AM-GM inequality..."
I understand the theorem, but not at all how it applies to AM-GM. Any enlightenment would be much appreciated!
| Notice that $\ln{x}$ is defined only for $x>0$; on this domain its second-order derivative $-1/x^{2}$ is always negative, which means the function is concave.
Let $f(x)=-\ln{x}$. We know that $f$ is convex. Thus, writing the Jensen's inequality for $f$ with the weights $t_{i}=\frac{1}{n}$ ($i=1,2,...,n$), we get:
$$-\ln\left(\frac{x_{1}+x_{2}+...+x_{n}}{n}\right)\leq\frac{1}{n}\sum^{n}_{i=1}(-\ln{x_{i}})=-\ln\sqrt[n]{x_{1}x_{2}\cdot...\cdot x_{n}}$$
now, since we know that $\ln$ is monotonic, we get:
$$\frac{x_{1}+x_{2}+...+x_{n}}{n}\geq\sqrt[n]{x_{1}x_{2}\cdot...\cdot x_{n}}.$$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/204352",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 2
} |
Holomorphic functions as sums Are there any holomorphic functions on a connected domain in $\mathbb C$ that can not be written as a sum of two univalent (holomorphic and injective) functions? What about as a sum of finitely many univalent functions? Or even infinitely many?
| There is a growth obstruction for finite sum representation. Indeed, a theorem of Prawitz (1927) says that every univalent function on the unit disk belongs to the Hardy space $H^p$ for all $p<1/2$. Consequently, $f(z)=(1-z)^{-q}$ is not a finite sum of univalent functions when $q>2$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/204428",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 1,
"answer_id": 0
} |
Generating Series for the set of all compositions which have an even number of parts.
I'm having trouble finding the generating series for all compositions which have an even number of parts, with each part congruent to $1 \bmod 5$.
I'm given that this generating series is equal to:$$\frac{1-2x^5+x^{10}}{1-x^2-2x^5+x^{10}}$$
If you could help me out that would be great!
| Here's another method. First, you should convince yourself that the generating function for compositions with $k$ parts is given by
$$(x + x^2 + x^3 + \ldots)^k.$$ This is because choosing a composition $k_1 + k_2 + \ldots + k_m$ corresponds to choosing $x^{k_1}$ in the first factor, $x^{k_2}$ in the second, and so on. This is the same way you would multiply out the power series - choosing every possible $k$-tuple of terms, one from each factor, and multiplying them, and then summing the result.
Now simplify this expression and sum this over all even $k$ - it will become a nice rational generating function.
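A brute-force cross-check of the stated rational function, under the reading that parts are congruent to $1 \bmod 5$ (so parts come from $\{1,6,11,\dots\}$) and the number of parts is even; the coefficients of the rational function are generated from its denominator recurrence:

```python
N = 40
parts = [p for p in range(1, N + 1) if p % 5 == 1]

# f[n][par]: number of compositions of n whose number of parts is == par (mod 2)
f = [[0, 0] for _ in range(N + 1)]
f[0][0] = 1                       # the empty composition has 0 (even) parts
for n in range(1, N + 1):
    for p in parts:
        if p > n:
            break
        f[n][0] += f[n - p][1]    # appending a part flips the parity
        f[n][1] += f[n - p][0]

# Coefficients a_n of (1 - 2x^5 + x^10) / (1 - x^2 - 2x^5 + x^10), from
# a_n - a_{n-2} - 2*a_{n-5} + a_{n-10} = rhs_n.
rhs = {0: 1, 5: -2, 10: 1}
a = []
for n in range(N + 1):
    v = rhs.get(n, 0)
    if n >= 2:
        v += a[n - 2]
    if n >= 5:
        v += 2 * a[n - 5]
    if n >= 10:
        v -= a[n - 10]
    a.append(v)

assert all(a[n] == f[n][0] for n in range(N + 1))
```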
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/204478",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Chain Rule of Partial Derivatives If $f$ is a differentiable function defined $f: \mathbb{R}^2 \to \mathbb{R}$ and $f(2,1) = 3$ and $\nabla f(2,1) = (4,3)$, find $\nabla G(1,1)$ with $G(x,y) := x^2yf(x^2 +y^2, xy)$.
I wrote the $G_x$ as $2xyf(x^2 +y^2, xy) + x^2yf'(x^2 + y^2, xy)f_x(x^2 + y^2, xy)$ but I don't know what the value of $f'(x^2 + y^2, xy)$ is.
Thanks in advance
| Denote
$$g(x,y)=x^{2}y$$
Then by product rule
$$\nabla G=f(u,v)\,\nabla g+g\,\nabla\big(f(u,v)\big),\qquad u=x^{2}+y^{2},\quad v=xy$$
$$\nabla g=2xy\boldsymbol{i}+x^{2}\boldsymbol{j}$$
By the chain rule,
$$\nabla\big(f(u,v)\big)=f_{u}\,\nabla u+f_{v}\,\nabla v=f_{u}\left(2x\,\boldsymbol{i}+2y\,\boldsymbol{j}\right)+f_{v}\left(y\,\boldsymbol{i}+x\,\boldsymbol{j}\right)$$
At the point $(x,y)=(1,1)$ we have $(u,v)=(2,1)$, which is exactly where the data is given. (Conversely, solving $x^{2}+y^{2}=2$ and $xy=1$ gives $x=y=\pm1$.) So
$$f(2,1)=3,\qquad (f_{u},f_{v})=\nabla f(2,1)=(4,3),\qquad g(1,1)=1,\qquad \nabla g(1,1)=2\boldsymbol{i}+\boldsymbol{j}$$
Hence
$$\nabla\big(f(u,v)\big)\Big|_{(1,1)}=4\left(2\boldsymbol{i}+2\boldsymbol{j}\right)+3\left(\boldsymbol{i}+\boldsymbol{j}\right)=11\boldsymbol{i}+11\boldsymbol{j}$$
Finally
$$\nabla G\left(1,1\right)=3\left(2\boldsymbol{i}+\boldsymbol{j}\right)+1\cdot\left(11\boldsymbol{i}+11\boldsymbol{j}\right)=17\boldsymbol{i}+14\boldsymbol{j}$$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/204603",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 2
} |
Complex analysis integration question Let $f(z) = A_0 + A_1z + A_2z^2 + \ldots + A_nz^n$ be a complex polynomial of degree $n > 0$.
Show that $\frac{1}{2\pi i} \int\limits_{|z|=R} \! z^{n-1} |f(z)|^2 dz = A_0 \bar{A_n}R^{2n}$.
| Let $\Gamma = \{z: |z| = R\}$. Recall that
$$ \int_{\Gamma} z^k \, dz = \begin{cases} 0 & k \neq -1 \\ 2\pi i & k = -1 \end{cases}$$
Now, when we multiply out $|f(z)|^2$ in terms of $z$ and $\overline{z}$, we are ultimately evaluating an integral of the following form:
$$ \frac{1}{2\pi i} \int_{\Gamma} \sum_j B_j z^{p_j} \overline{z}^{k_j} \, dz = \frac{1}{2\pi i} \int_{\Gamma} \sum_j B_j z^{p_j-k_j} R^{2k_j} \, dz$$
for some powers $p_j, k_j$. Since $z^k$ integrates to $0$ unless $k = -1$, and the whole integrand is multiplied by $z^{n-1}$ originally, we need $(n-1) + (p_j - k_j) = -1$, i.e. $k_j - p_j = n$. As $0 \le p_j, k_j \le n$, this forces $p_j = 0, k_j = n$, so the only term that does not vanish is the one formed by multiplying the $A_0$ term with the $\overline{A}_n\overline{z}^n$ term. Therefore, in summary,
$$ \frac{1}{2\pi i} \int_{\Gamma} z^{n-1} |f(z)|^2 \, dz = \frac{1}{2\pi i} \int_{\Gamma} A_0\overline{A}_n R^{2n} z^{-1} \, dz$$
which evaluates to your desired result.
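Since the only surviving term is a single trigonometric monomial on the circle, the trapezoid rule reproduces the identity to machine precision; a Python sketch with an arbitrary test polynomial (the coefficients, radius, and sample count are mine):

```python
import cmath
import math

A = [1 + 2j, 0.5 - 1j, 3j]              # arbitrary test coefficients A_0..A_n
n, R, m = len(A) - 1, 1.3, 4096         # degree, radius, number of sample points

def f(z):
    return sum(c * z**k for k, c in enumerate(A))

total = 0j
for idx in range(m):
    z = R * cmath.exp(2j * math.pi * idx / m)
    total += z**(n - 1) * abs(f(z))**2 * (1j * z)    # integrand times dz/dtheta
integral = total * (2 * math.pi / m) / (2j * math.pi)

expected = A[0] * A[n].conjugate() * R**(2 * n)
assert abs(integral - expected) < 1e-9
```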
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/204680",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
a geometry problem Let $DA$ be the normal to the plane of the triangle $ABC$ at $A$, and let $E \in (DA)$. Denote by $M,N,P,Q$ the projections of the point $A$ onto the lines $BD$, $CD$, $BE$ and $CE$, respectively.
Prove that:
1) $$MN\cap BC \cap PQ \neq \emptyset;$$
2) the quadrilateral $MNPQ$ is inscriptible.
| For 2), consider the circumscribed circle of the triangle $ABC$, and expand it to a sphere in 3D with the same center and radius. If we cut it by a plane orthogonal to the $ABC$ plane and containing $AB$, we get a circle with $AB$ as its diameter, so, by Thales' theorem, the points $M$ and $P$ will be on that circle, hence on the sphere. Similarly for $N,Q$. It still remains to show that these 4 points lie in the same plane.
So, for the rest, I would use vectors (or coordinate geometry), setting up the coordinate system in a preferable way: say $A$ is the origin, the $ABC$ plane is the $x,y$-plane, and we can also set $AB=(1,0,0)$ if it helps.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/204762",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
How to approximate $y=\frac{W(e^{cx+d})}{W(e^{ax+b})}$? How to approximate
$$y=\frac{W(e^{cx+d})}{W(e^{ax+b})}$$
with (a) simple function(s)?
given $a=-1/\lambda_0$, $b=(\mu_0+\lambda_0)/\lambda_0$, $c=1/\lambda_1$, $d=(\mu_1+\lambda_1-1)/\lambda_1$ for positive $\mu_0,\lambda_0,\mu_1,\lambda_1$
where $W$ is a Lambert $W$ function, i.e., if $y=xe^x$ then $x=W(y)$
My problem is that I can not invert the function and get $x=f(y)$ alone and decided to go for some nice approximations.
Thanks alot for any help.
| You could try to estimate $W(x)$ by using Newton-Raphson iteration, because $W(c)$ is the root of $x\exp(x)-c$:
$$x_{n+1}= x_n-\frac{f(x_n)}{f'(x_n)} = x_n-\frac{x_n \exp (x_n)-c}{\exp(x_n)(x_n+1)} = \frac{c\exp(-x_n)+x_n^2}{x_n+1}$$
and as $n$ is sufficiently large, we can get approximations for $W$ using only elementary functions.
Using this method, we can find $W(1)=\Omega = 0.567143\cdots$, starting with $x_0=1$ and keeping 6 places precision:
Iteration Value Error
1 0.683939 0.116795
2 0.577454 0.010310
3 0.56723 0.000086
4 0.567143 0.000000
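The iteration is easy to code; a minimal Python version (the function name, the choice $x_0=1$, and the fixed iteration count are mine, and convergence from $x_0=1$ is only claimed here for moderate $c>0$):

```python
import math

def lambert_w(c, x0=1.0, iters=50):
    """Newton step from the answer: x <- (c*exp(-x) + x^2) / (x + 1)."""
    x = x0
    for _ in range(iters):
        x = (c * math.exp(-x) + x * x) / (x + 1)
    return x

w1 = lambert_w(1.0)
assert abs(w1 - 0.567143) < 1e-6             # the omega constant W(1)
assert abs(w1 * math.exp(w1) - 1.0) < 1e-12  # defining equation W e^W = c
```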
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/204841",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Estimate for the product of primes less than n In this paper Erdős shows a shorter proof for one of his old results stating that $$ s(n) = \prod_{p < n} p < 4^n$$ where the product is taken over all primes less than $n$. He also remarks that using the prime number theorem one can show $$ s(n)^{\frac1n} \stackrel{n\to\infty}{\longrightarrow} e.$$
Can someone here prove this result? It does not seem straightforward to me.
One (crude) attempt I tried was to consider the product $$\prod_{i=2}^n \frac{i}{\log{i}} = n!\prod_{i=2}^n \frac{1}{\log{i}}$$ which I do not know how to estimate, not to mention that I would then have to argue that it is an asymptotic estimate for $s(n).$
Is there a simple way to show the result about $s(n)$ using the prime number theorem?
| The reason the sum
$$ \sum_{k = 2}^{n} \frac{k}{\log k} $$
works as an estimate of the sum of all primes up to $n$ is because, roughly speaking, one on $\log N$ numbers of size around $N$ are prime. You are estimating
The sum of all primes of a given size
with the approximation
The sum of all numbers of that size, multiplied by (an estimate of) the proportion of them that are primes
(note that this method relies on the fact that the average of the primes of size around $N$ is roughly the same as the average of all numbers of size around $N$... specifically, that average is around $N$)
The analogous method for products is not dividing out by $\log i$: it is taking the $\log i$-th root: you meant to consider
$$ \prod_{k=2}^{n} k^{1 / \log k} $$
Of course, this isn't necessarily any easier to deal with. The thing to do is the one that is usually useful for products: take the logarithm. Consider
$$ \log \prod_{\substack{p=2 \\ p \text{ prime}}}^N p$$
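Numerically, $s(n)^{1/n}=\exp(\vartheta(n)/n)$ creeps toward $e$ rather slowly; a quick sieve-based Python check:

```python
import math

def s_root(n):
    """Return s(n)^(1/n) = exp(theta(n)/n), theta(n) = sum of log p over primes p < n."""
    is_prime = bytearray([1]) * n
    is_prime[0:2] = b"\x00\x00"
    for i in range(2, int(n**0.5) + 1):
        if is_prime[i]:
            is_prime[i*i::i] = bytearray(len(is_prime[i*i::i]))
    theta = sum(math.log(p) for p in range(2, n) if is_prime[p])
    return math.exp(theta / n)

r = s_root(10**5)
print(r)   # roughly 2.71, while e = 2.71828...
```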
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/204902",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 2,
"answer_id": 0
} |
Trying to prove that $p$ prime divides $\binom{p-1}{k} + \binom{p-2}{k-1} + \cdots +\binom{p-k}{1} + 1$ So I'm trying to prove that for any natural number $1\leq k<p$, that $p$ prime divides:
$$\binom{p-1}{k} + \binom{p-2}{k-1} + \cdots +\binom{p-k}{1} + 1$$
Writing these choice functions in factorial form, I obtain:
$$\frac{(p-1)!}{k!(p-(k+1))!} + \frac{(p-2)!}{(k-1)!(p-(k+1))!} + \cdots + \frac{(p-k)!}{1!(p-(k+1))!} + 1$$
Thus as you can see each term except the last has a $(p-(k+1))!$ factor in its denominator. I've tried some stuff with integer partitions, tried to do some factoring and simplification, etc. But I can't see how to prove that $p$ divides this expression. I'll probably have to use the fact that $p$ is prime somehow, but I'm not sure how. Can anyone help me? Thanks.
Edit: Proof by induction is also a possibility I suppose, but that approach seems awfully complex since changing k changes every term but the last.
| Letting $k=p-1$, we find that the expression equals $p$ summands of value $1$ each.
Maybe the problem was not meant to read "for some natural" but rather "for all natural" numbers $1\le k<p$.
The statement remains true once you observe that the sum is simply $p\choose k$.
To see this combinatorially, note that you can choose $k$ out of $p$ objects by selecting $r$ with $0\le r \le k$, then taking the first $r$ objects, skipping the next object, and choosing $k-r$ out of the remaining $p-1-r$ objects.
Finally, $p\choose k$ with $0<k<p$ is a multiple of $p$, for example because $p$ divides $p!$ but divides neither $k!$ nor $(p-k)!$. (This is where we need that $p$ is prime.)
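A quick Python spot-check of both the identity and the divisibility for small primes:

```python
from math import comb

for p in (3, 5, 7, 11, 13):
    for k in range(1, p):
        total = sum(comb(p - 1 - r, k - r) for r in range(k + 1))
        assert total == comb(p, k)   # the sum telescopes to C(p, k)
        assert total % p == 0        # which p divides when 0 < k < p
```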
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/205031",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 2,
"answer_id": 1
} |
Finding the range of rational functions I have a problem that I cannot figure out how to do. The problem is:
Suppose $s(x)=\frac{x+2}{x^2+5}$. What is the range of $s$?
I know that the range is equivalent to the domain of $s^{-1}(x)$ but that is only true for one-to-one functions. I have tried to find the inverse of function s but I got stuck trying to isolate y. Here is what I have done so far:
$y=\frac{x+2}{x^2+5}$
$x=\frac{y+2}{y^2+5}$
$x(y^2+5)=y+2$
$xy^2+5x=y+2$
$xy^2-y=2-5x$
$y(xy-1)=2-5x$
This is the step I got stuck on, usually I would just divide by the parenthesis to isolate y but since y is squared, I cannot do that. Is this the right approach to finding the range of the function? If not how would I approach this problem?
| To find the range, we want to find all $y$ for which there exists an $x$ such that
$$ y = \frac{x+2}{x^2+5}.$$
We can solve this equation for $x$:
$$ y x^2 + 5y = x+2$$
$$ 0 = y x^2 -x + 5y-2$$
If $y \neq 0$, this is a quadratic equation in $x$, so we can solve it with the quadratic formula:
$$
x = \frac{ 1 \pm \sqrt{ 1 - 4y(5y-2)}}{2y}.$$
So, for a given $y$, $y$ is in the range if this expression yields a real number. That is, if
$$ 1 - 4y(5y-2) = -20y^2 +8y +1 \ge 0$$
If you study this quadratic, you will find that it has roots at $y=1/2$ and $y=-1/10$, and between these roots it is positive, while outside these roots it is negative. Hence, there exists an $x$ such that $s(x)=y$ only if
$$
-\frac{1}{10} \le y \le \frac{1}{2}.
$$
Thus, this is the range of $s$.
(Note we excluded $y=0$ earlier, but we know $y=0$ is in our range since $s(-2)=0$.)
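A numerical sanity check of this range (the endpoints are attained at $x=1$ and $x=-5$, found by plugging $y=1/2$ and $y=-1/10$ into the formula for $x$):

```python
s = lambda x: (x + 2) / (x**2 + 5)

assert abs(s(1) - 0.5) < 1e-12    # maximum 1/2 at x = 1
assert abs(s(-5) + 0.1) < 1e-12   # minimum -1/10 at x = -5

# Sampled values never leave [-1/10, 1/2].
assert all(-0.1 - 1e-9 <= s(i / 100) <= 0.5 + 1e-9 for i in range(-100000, 100001))
```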
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/205080",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 0
} |
Bounded sequences and lim inf Let $a_n$ and $b_n$ be bounded sequences. Prove that lim inf $a_n$ + lim inf $b_n \leq$ lim inf$(a_n + b_n)$
I have no idea where to begin.
| Start by showing that for all $n$, $$\inf_{k\geq n}a_k+\inf_{k\geq n}b_k\leq\inf_{k\geq n}(a_k+b_k),$$ then take the limit as $n\to\infty$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/205217",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
area of a convex quadrilateral I have a quadrilateral with sides as follows: $30, 20, 30, 15.$
I do not have any other information about the quadrilateral apart from this.
Is it possible to calculate its area?
| A quadrilateral with sides $30,20,30,15?$ Two sides are equal, right? Why don't you try to draw it? Divide it into two triangles. If the two equal sides share a common vertex, one of the triangles is isosceles, i.e. has two equal base angles. Can you find the rest of the angles and the area?
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/205290",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
} |
A function which is in $L^1$ but does not belong to $L^\infty$ Can someone give me an example of an $L ^1$ function which does not belong to $L^\infty$. In fact we look at $L^1(\Omega,\mathcal{F},P)$, where $(\Omega,\mathcal{F},P)$ denotes a probability space. Of course the function should be unbounded but the integral should exist. Clearly, we can embed $L^\infty$ into $L^1$ in this case. Thank you.
| An example for the space $X= (0,1]$ is $\dfrac{1}{\sqrt{x}},$ which has infinite $L^{\infty}$ norm because as $x\to 0^+$ the function $1/\sqrt{x}\to \infty,$ but has $L^1$ norm of $\displaystyle \int^1_0 \frac{1}{\sqrt{x}} dx = 2.$
If you want an example for $X=\mathbb{R}$ then $1_{(0,1]} \dfrac{1}{\sqrt{x}}$ works similarly.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/205367",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 2
} |
elementary negation I couldn't be sure about the negation of the statement $\exists x \in N , x < 3$
is it
$$
\lnot(\exists x \in N , x < 3) \equiv (\forall x \in N , x \geq 3)
$$
or
$$
\lnot (\exists x \in N , x < 3) \equiv (\forall x \notin N , x \geq 3)
$$
can someone help me (with an explanation)?
| Just to be sure: to me your statement reads as "There exists an $x$, element of the positive integers (doesn't matter with or without $0$), such that $x$ is less than $3$". If this is correct, then the negation is: there is no $x$, element of the positive integers, such that $x$ is less than $3$. This means that all $x$ in the positive integers are greater than or equal to $3$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/205425",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 2
} |
Calculate the following expectation There are $K$ items indexed $X_1, X_2, \ldots, X_K$ in the pool. Person A first randomly takes $K_A$ of these $K$ items and puts them back in the pool. Person B then randomly takes $K_B$ of these $K$ items. What is the expected number of items that were picked by B but not taken by A before?
Assuming $K_A \geq K_B$, the formula I get is,
\begin{equation}
E = \sum_{i=1}^{K_B} i \frac{{{K}\choose{K_A}}{{K_A}\choose{K_B - i}}{{K - K_A}\choose{i}}}{{{K}\choose{K_A}}{{K}\choose{K_B}}}
\end{equation}
When $K_B > K_A$, I can derive similar formulas. I am wondering if there is a way to simplify this formula? Thanks.
| André's solution is the best one, of course.
But for the sheer fun of it, let's calculate the sum
\begin{equation}
E = \sum_{i=1}^{K_B} i \frac{{{K}\choose{K_A}}{{K_A}\choose{K_B - i}}{{K - K_A}\choose{i}}}{{{K}\choose{K_A}}{{K}\choose{K_B}}}
\end{equation}
First, cancel the common factor
$$E = \sum_{i} i \frac{{K_A\choose K_B - i}{{K - K_A}\choose{i}}}{{{K}\choose{K_B}}}.$$
The absorption identity (4) lets us get rid of the factor "$i$"
$$E = \sum_{i} \frac{{K_A\choose K_B - i}(K-K_A) {{K - K_A-1}\choose{i-1}}}{{{K}\choose{K_B}}},$$
so that
$$E = {(K-K_A)\over {K\choose K_B}} \sum_{i} {K_A\choose K_B - i} {K - K_A-1 \choose i-1}.$$
Using Vandermonde's convolution (7a) we get
$$E = {(K-K_A)\over {K\choose K_B}} {K-1 \choose K_B - 1},$$
and using the absorption identity once more we arrive at
$$E = (K-K_A)\,{K_B\over K}.$$
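For the skeptical, the simplification can be machine-checked with exact arithmetic (stdlib Python; the convention $\binom{n}{k}=0$ for $k<0$ or $k>n$ is made explicit, since `math.comb` rejects negative arguments):

```python
from fractions import Fraction
from math import comb

def C(n, k):
    # binomial coefficient with the usual out-of-range convention
    return comb(n, k) if 0 <= k <= n else 0

def expectation_sum(K, KA, KB):
    """The original (unsimplified) sum for E."""
    denom = C(K, KB)
    return sum(Fraction(i * C(KA, KB - i) * C(K - KA, i), denom)
               for i in range(1, KB + 1))

for K, KA, KB in [(10, 4, 3), (8, 5, 5), (12, 3, 7)]:
    assert expectation_sum(K, KA, KB) == Fraction((K - KA) * KB, K)
print("sum matches (K - K_A) * K_B / K")
```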
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/205479",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Is this category essentially small? Let $\mathcal C$ be the category of finite dimensional $\mathbb C$-vector spaces $(V, \phi_V)$ where $\phi_V \colon V \to V$ is a linear map. A morphism $f \colon (V , \phi_V) \to (W , \phi_W)$ in this category is a linear map such that $\phi_W f = f \phi_V$. Note this category is the same as the category of $\mathbb C [t]$-modules whose underlying space is finite dimensional as a $\mathbb C$-vector space.
I am having some trouble working out how many isomorphism classes there are. The problem is that even if $V \cong W$ as vector spaces, the isomorphism might not respect the structure morphisms in $\mathcal C$. So potentially there are a LOT of isomorphism classes.
| Jordan normal form tells you what the isomorphism classes look like, but you don't need to know this: it suffices to show that the collection of isomorphism classes with a fixed value of $\dim V$ forms a set, and this is straightforward as specifying the corresponding $\phi_V$ requires at most $(\dim V)^2$ parameters.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/205531",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Heat equation with initial values $U(0,t)=U_1$, $U(L,t)=U_2$,$\forall t$. My problem is given as
Arbitrary temperatures at ends . If the ends $x=0$
and $x=L$ of the bar in the text are kept at constant
temperatures $U_1$ and $U_2$ respectively, what is the temperature
$u_1(x)$ in the bar after a long time (theoretically,
as $t \to \infty$)? First guess, then calculate.
My guess is that the temperature after a very long time is given by the median temperature, e.g.
$$u(x,t) \approx (U_2-U_1)/L, \quad \text{as} \quad t\to \infty$$
Now if one assumes that the temperature reaches a limit, which is not
unlikely, then the limiting solution will satisfy Laplace's equation $\nabla^2u=0$.
Which leads to the heat equation in one variable
$$ \frac{\mathrm{d}u}{\mathrm{d}t} = c^2 \frac{\mathrm{d}^2u}{\mathrm{d}x^2} $$
The standard way of assuming the solution is of the form $u(x,t)=X(x)T(t)$ fails for me. Begin by assuming that the separated ratio is equal to some arbitrary constant $\lambda$ that depends neither on $x$ nor on $t$. Then I end up with the set of equations
$$\begin{array}{lcr}
T' & = & \lambda c^2 T\\
\ddot{X} & = & \lambda X
\end{array}$$
If we assume for a minute that $\lambda=0$, we end up with
$$X(x) = Ax + B, \qquad T(t)=C$$
Which does not satisfy the initial values. So, what do I do to solve this bugger?
| Let $u(x,t)=X(x)T(t)$ ,
Then $X(x)T'(t)=c^2X''(x)T(t)$
$\dfrac{T'(t)}{c^2T(t)}=\dfrac{X''(x)}{X(x)}=-\dfrac{\pi^2s^2}{L^2}$
$\begin{cases}\dfrac{T'(t)}{T(t)}=-\dfrac{\pi^2c^2s^2}{L^2}\\X''(x)+\dfrac{\pi^2s^2}{L^2}X(x)=0\end{cases}$
$\begin{cases}T(t)=c_3(s)e^{-\frac{\pi^2c^2ts^2}{L^2}}\\X(x)=\begin{cases}c_1(s)\sin\dfrac{\pi xs}{L}+c_2(s)\cos\dfrac{\pi xs}{L}&\text{when}~s\neq0\\c_1x+c_2&\text{when}~s=0\end{cases}\end{cases}$
$\therefore u(x,t)=C_1x+C_2+\sum\limits_{s=0}^\infty C_3(s)e^{-\frac{\pi^2c^2ts^2}{L^2}}\sin\dfrac{\pi xs}{L}+\sum\limits_{s=0}^\infty C_4(s)e^{-\frac{\pi^2c^2ts^2}{L^2}}\cos\dfrac{\pi xs}{L}$
$u(0,t)=U_1$ :
$C_2+\sum\limits_{s=0}^\infty C_4(s)e^{-\frac{\pi^2c^2ts^2}{L^2}}=U_1$
$\sum\limits_{s=0}^\infty C_4(s)e^{-\frac{\pi^2c^2ts^2}{L^2}}=U_1-C_2$
$C_4(s)=\begin{cases}U_1-C_2&\text{when}~s=0\\0&\text{when}~s\neq0\end{cases}$
$\therefore u(x,t)=C_1x+C_2+\sum\limits_{s=0}^\infty C_3(s)e^{-\frac{\pi^2c^2ts^2}{L^2}}\sin\dfrac{\pi xs}{L}+U_1-C_2=C_1x+U_1+\sum\limits_{s=1}^\infty C_3(s)e^{-\frac{\pi^2c^2ts^2}{L^2}}\sin\dfrac{\pi xs}{L}$
$u(L,t)=U_2$ :
$C_1L+U_1=U_2$
$C_1=\dfrac{U_2-U_1}{L}$
$\therefore u(x,t)=\dfrac{(U_2-U_1)x}{L}+U_1+\sum\limits_{s=1}^\infty C_3(s)e^{-\frac{\pi^2c^2ts^2}{L^2}}\sin\dfrac{\pi xs}{L}$
Hence $u(x,\infty)=\dfrac{(U_2-U_1)x}{L}+U_1$
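A numerical illustration of this steady state (my own sketch, stdlib Python, taking $c=1$ and an explicit finite-difference scheme): whatever the interior starts at, the profile relaxes to the straight line $U_1+(U_2-U_1)x/L$.

```python
U1, U2, L, nx = 3.0, 7.0, 1.0, 21
dx = L / (nx - 1)
dt = 0.4 * dx * dx                      # stable for u_t = u_xx: need dt <= dx^2/2
u = [U1] + [0.0] * (nx - 2) + [U2]      # arbitrary initial interior data
for _ in range(20000):
    u = [U1] + [u[i] + dt / dx ** 2 * (u[i + 1] - 2 * u[i] + u[i - 1])
                for i in range(1, nx - 1)] + [U2]
steady = [U1 + (U2 - U1) * i / (nx - 1) for i in range(nx)]
print(max(abs(p - q) for p, q in zip(u, steady)))  # essentially 0
```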
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/205614",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Proof of Cauchy Riemann Equations in Polar Coordinates How would one go about showing the polar version of the Cauchy Riemann Equations are sufficient to get differentiability of a complex valued function which has continuous partial derivatives?
I haven't found any proof of this online.
One of my ideas was writing out $r$ and $\theta$ in terms of $x$ and $y$, then taking the partial derivatives with respect to $x$ and $y$ and showing the Cauchy Riemann equations in the Cartesian coordinate system are satisfied. A problem with this approach is that derivatives get messy.
What are some other ways to do it?
| We can derive using purely polar coordinates. Start with
\begin{align}
z(r,\theta) &=r\,\mathrm{e}^{\mathrm{i}\theta} \\
f(z) &= u(r,\theta) + \mathrm{i} v(r,\theta)
\end{align}
We define $f'(z)$ using the limit
$$ f'(z) = \lim_{\Delta z\to 0} \frac{\Delta f}{\Delta z} $$
where
\begin{align}
\Delta f &= \Delta u + \mathrm{i} \Delta v \\
\Delta z &= z(r+\Delta r,\theta + \Delta\theta) - z(r,\theta) \\
&= (r+\Delta r)\,\mathrm{e}^{\mathrm{i}(\theta+\Delta\theta)} - r \,\mathrm{e}^{\mathrm{i}\theta}
\end{align}
Next, we try to first approach from $\Delta\theta\to 0$,
$$ \Delta z = (r+\Delta r)\,\mathrm{e}^{\mathrm{i}\theta} - r \,\mathrm{e}^{\mathrm{i}\theta} = \Delta r\,\mathrm{e}^{\mathrm{i}\theta} $$
Therefore when we take the limit,
$$ f'(z) = \lim_{\Delta r\to 0} \frac{\Delta u + \mathrm{i} \Delta v}{\Delta r \,\mathrm{e}^{i\theta}} = \frac{1}{\mathrm{e}^{\mathrm{i}\theta}}(u_r + \mathrm{i} v_r) \label{a}\tag{1}$$
On the other hand, approaching from $\Delta r\to 0$ first yields
$$\Delta z = r\,\mathrm{e}^{\mathrm{i}(\theta+\Delta\theta)} - r \,\mathrm{e}^{\mathrm{i}\theta}$$
Using the derivative
$$\mathrm{e}^{\mathrm{i}(\theta+\Delta\theta)}-\mathrm{e}^{\mathrm{i}\theta} = \frac{\mathrm{d}\mathrm{e}^{\mathrm{i}\theta}}{\mathrm{d}\theta}\Delta\theta = \mathrm{i}\mathrm{e}^{\mathrm{i}\theta}\Delta\theta$$
we get
$$\Delta z = \mathrm{i}r\,\mathrm{e}^{\mathrm{i}\theta}\Delta\theta$$
Therefore
$$f'(z) = \lim_{\Delta\theta\to 0} \frac{\Delta u + \mathrm{i} \Delta v}{\mathrm{i}r\,\mathrm{e}^{\mathrm{i}\theta}\Delta\theta} = \frac{1}{\mathrm{i}r\,\mathrm{e}^{\mathrm{i}\theta}}(u_\theta + \mathrm{i}v_\theta) \label{b}\tag{2}$$
Finally, comparing the real and imaginary parts of ($\ref{a}$) and ($\ref{b}$) gives us what we want:
$$u_r=\frac{1}{r}v_\theta \quad,\quad v_r = -\frac{1}{r}u_\theta$$
Reference: Kwong-tin Tang, Mathematical Methods for Engineers and Scientists 1 (Springer, 2007)
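A quick finite-difference check of these equations for a concrete holomorphic function, say $f(z)=z^2$ (illustrative stdlib Python, not from the book):

```python
import cmath

f = lambda z: z * z                       # any holomorphic f will do
u = lambda r, t: f(cmath.rect(r, t)).real  # cmath.rect(r, t) = r e^{it}
v = lambda r, t: f(cmath.rect(r, t)).imag

r, t, h = 1.3, 0.7, 1e-6
u_r = (u(r + h, t) - u(r - h, t)) / (2 * h)
u_t = (u(r, t + h) - u(r, t - h)) / (2 * h)
v_r = (v(r + h, t) - v(r - h, t)) / (2 * h)
v_t = (v(r, t + h) - v(r, t - h)) / (2 * h)
print(u_r - v_t / r, v_r + u_t / r)       # both ≈ 0
```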
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/205671",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "12",
"answer_count": 6,
"answer_id": 4
} |
Real Analysis (Riemann Integral) Let $f(x)=c$ for all $x$ in $[a,b]$ and some real number $c$. Show by definition below that $f$ is Riemann integrable on $[a,b]$, and $\int f(x) dx = c(b-a)$.
Definition: A function $f$ is Riemann integrable on $[a,b]$ if there is a real number $R$ such that for any $\varepsilon > 0$, there exists $\delta > 0$ such that for any partition $P$ of $[a,b]$ satisfying $\|P\|< \delta$, and for any Riemann sum $R(f,P)$ of $f$ relative to $P$, we have
$$|R(f,P)-R|< \varepsilon$$
| Let $R = c (b-a)$ and let $\varepsilon > 0$.
You want to show that there is $\delta > 0$ such that for all tagged partitions $P$ with $\|P\| < \delta$ you have
$$ \left | \sum_{i=1}^n f(x_i) (t_{i+1}-t_i) - R \right | < \varepsilon$$
where $x_i \in [t_i , t_{i+1}] \subset [a,b]$ form a tagged partition of $[a,b]$. We have $f(x_i) = c$ hence
$$ \left | \sum_{i=1}^n f(x_i) (t_{i+1}-t_i) - R \right | = \left |c \sum_{i=1}^n (t_{i+1}-t_i) - R \right | = \left | c (b-a) - R \right | = 0 < \varepsilon$$
hence $f$ is Riemann integrable with $$ \int_a^b f(x) dx = c (b - a)$$
Hope this helps.
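The telescoping step can be watched numerically: every random tagged partition gives the same Riemann sum, up to rounding (illustrative stdlib Python, my own sketch):

```python
import random

def riemann_sum(f, a, b, n, rng):
    # random partition of [a, b] with n interior cut points, random tags
    cuts = sorted([a, b] + [rng.uniform(a, b) for _ in range(n)])
    tags = [rng.uniform(lo, hi) for lo, hi in zip(cuts, cuts[1:])]
    return sum(f(t) * (hi - lo) for t, (lo, hi) in zip(tags, zip(cuts, cuts[1:])))

rng = random.Random(0)
c, a, b = 2.5, 1.0, 4.0
total = riemann_sum(lambda x: c, a, b, 50, rng)
print(total)  # 7.5 = c*(b - a), up to float rounding
```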
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/205750",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
escape velocity using limits I have the formula for a rocket's escape velocity from earth, $V$ being velocity, $v$ being initial velocity, and $r$ being the distance between the rocket and the center of the earth.
$$V = \sqrt{\frac{192000}{r}+v^2-48}$$
I am trying to find the value of $v$ for which an infinite limit for $r$ is obtained as $V$ approaches zero, this value of $v$ being the escape velocity for earth.
I have solved for $v$ (with $V$ being $0$), as $v = \sqrt{48-\frac{192000}{r}}$, but do not know how to continue solving the problem. I thought of setting it up as the limit of $\sqrt{48-\frac{192000}{r}}$ as $r$ approaches infinity (to give $v$), but that doesn't seem right.
| Let $r\to\infty$. Note that $\dfrac{192000}{r}\,$ approaches $0$. Thus since
$$V=\sqrt{\frac{192000}{r}+v^2-48},$$
$V$ approaches $\sqrt{v^2-48}$. If we want $V$ to approach $0$, we want $v^2-48=0$.
(Presumably we are measuring velocity in miles per second.)
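Numerically, $v=\sqrt{48-192000/r}$ is the initial speed needed to just reach radius $r$ (set $V=0$), and it climbs toward $\sqrt{48}\approx 6.93$ as $r$ grows; note $192000/4000=48$, so at $r=4000$ (the Earth's radius in this model, in miles) the needed speed is $0$ (a small illustrative sketch, stdlib Python):

```python
import math

v_escape = math.sqrt(48)                    # = 4*sqrt(3) ≈ 6.93 miles per second
for r in [4000, 40000, 400000, 4000000]:
    print(r, math.sqrt(48 - 192000 / r))    # climbs toward 6.93 as r grows
```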
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/205833",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
How to find the Circular permutation with Repetition
Possible Duplicate:
In how many ways we can put $r$ distinct objects into $n$ baskets?
Need some guidance with the following problem :
There are 'n' different types of objects which needs to be placed in a circle of length 'r' , such that no two adjacent items are of the same type. Repetition is allowed.
eg. n = 4 {a,b,c,d} and r = 3 , the circular permutations are
a b c
a b d
a c b
a c d
a d b
a d c
b c d
b d c
We do not include a permutation like 'b d a' , since that is the same as 'a b d'. Nor do we include a permutation like 'a a d' or 'a d a' since they do not satisfy the adjacency condition.
Similarly for n = 4 {a,b,c,d} and r = 4,
'a b a b' is valid, but 'a b b c' is not.
Is there a general solution or method that I can follow to solve this problem?
| Yes, it is.
There is a good article about combinations and variations in codeproject.
You need the section on "Combinations (i.e., without Repetition)" there.
Also, if you are familiar with C#, you can use a simple and short solution from stackoverflow.
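For small cases a brute-force count is also possible (my own stdlib-Python sketch, not from the linked articles): enumerate all sequences, drop those with equal cyclic neighbours, and identify rotations via a canonical (lexicographically smallest) rotation. It reproduces the $8$ arrangements listed in the question for $n=4$, $r=3$, and it handles repetition patterns like "a b a b" for $r=4$.

```python
from itertools import product

def circular_count(n, r):
    """Length-r cyclic sequences over n symbols, no two cyclically adjacent
    symbols equal, counted up to rotation (reflections kept distinct)."""
    seen = set()
    for seq in product(range(n), repeat=r):
        if any(seq[i] == seq[(i + 1) % r] for i in range(r)):
            continue  # violates the adjacency condition (including the wrap-around)
        seen.add(min(seq[i:] + seq[:i] for i in range(r)))  # canonical rotation
    return len(seen)

print(circular_count(4, 3))  # 8, matching the listing in the question
print(circular_count(4, 4))  # includes patterns such as a b a b
```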
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/205890",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
How do you find the center of a circle with a pencil and a book? Given a circle on a paper, and a pencil and a book. Can you find the center of the circle with the pencil and the book?
| The only book I have is usually under my pillow. It has become skewed over the years - no 90 degrees joy. As a consequence I failed to have success applying the (elegant) description of Patrick Li. Moreover, my book is too small anyway to connect diametrically opposing points on the circle.
Therefore I had to revert to a more tedious approach.
Pick two points on the circle, close enough for my book's reach, connect them.
Align one of the book's edges with that line, the bookcorner at one of the marked points, draw a line along the adjacent edge of the book - which is non-perpendicular to the first line. Flip the book and draw "the other" non-perpendicular line through the same point. Repeat at the other point. Connect the two intersections that the four lines make, and extend this line - by shifting one bookedge along - to complete the diameter. If the intersections are too wide apart for my book, retry with the original points closer together. Repeat all of this with two other points on the circle, to get two diameters crossing at the midpoint.
If, by some miracle, my book has straightened out again (but still is tiny compared to the circle), I will quickly find out: the two "non-perpendicular" lines at one of the initial mark-points overlap. Then, I continue the line(s) until I cross the circle at the other side; likewise at the other initial point, and I wind up with a long and narrow rectangle. Repeating that from some other location on the circle, roughly at 90 degrees along the circle from the first setup, I get two rectangles and their intersection is a small parallellogram somewhere in the middle. It's diagonals cross at the center of the circle.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/205953",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "101",
"answer_count": 17,
"answer_id": 1
} |
The Hungarian Algorithm In reading the proof of the Hungarian algorithm for the assignment problem in a weighted bigraph, I could not understand why the algorithm terminates. In the algorithm we choose a cover (namely labels for the vertices of the bigraph: $u_1\cdots u_n$, $v_1,\cdots v_n$ with $u_i+v_j\ge $weight of an edge $x_iy_j\forall i,j$) and keep adjusting it until the equality subgraph (the spanning subgraph of $K_{n,n}$ whose edges are pairs $x_iy_j$ such that $u_i+v_j=$weight of $x_iy_j$) has a perfect matching. My question is after each adjustment, why does the cost of the cover $\sum(u_i+v_i)$ reduce?
(We assume the weights are all integers).
Thanks.
| Let $\mathrm{M}$ be a maximum matching in the equality subgraph. According to König's theorem, we can find a vertex cover $C$ such that $\mathrm{|M|}=|C|$. We call $R$ the set $C\cap X$, and $T$ the set $C\cap Y$.
Now, either $\mathrm{M}$ is a perfect matching, which means the algorithm should stop and return our current labelling as optimal, or $X$ is not saturated by $\mathrm{M}$. In that case, we adjust the weighted cover: we define $\epsilon = \min\{u_i+v_j-w(x_iy_j):x_i\notin R, y_j \notin T\}$, we subtract $\epsilon$ from $u_i$ for each $x_i \notin R$, and we add $\epsilon$ to $v_j$ for each $y_j \in T$.
The cost of the cover changes by $(|T| - |X\setminus R|)\epsilon$. This change is strictly negative because $|R|+|T|=|\mathrm{M}|\lt |X| \Rightarrow |T| \lt |X|-|R| \Rightarrow |T| \lt |X \setminus R|$. Since we assumed the integrality of the labels, it ensures that the algorithm terminates in finitely many steps.
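The cover is useful because of weak duality: any feasible cover's cost bounds the maximum matching weight from above, which is why driving the cover cost down makes sense. A small brute-force illustration (my own sketch, stdlib Python, with a made-up $3\times 3$ weight matrix; the maximum matching is found by enumerating permutations, and the cover shown is feasible but not optimal):

```python
from itertools import permutations

w = [[7, 5, 11],
     [5, 4, 1],
     [9, 3, 2]]
n = len(w)

# maximum-weight perfect matching by brute force
best = max(sum(w[i][p[i]] for i in range(n)) for p in permutations(range(n)))

# a (non-optimal) feasible cover: u_i = max_j w[i][j], v_j = 0
u = [max(row) for row in w]
v = [0] * n
assert all(u[i] + v[j] >= w[i][j] for i in range(n) for j in range(n))
print(best, sum(u) + sum(v))   # cover cost bounds the matching weight from above
```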
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/205991",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 1
} |
Visualizing Exterior Derivative How do you visualize the exterior derivative of differential forms?
I imagine differential forms to be some sort of (oriented) line segments, areas, volumes etc. That is if I imagine a two-form, I imagine two vectors, constituting a parallelogram.
So I think provided I can imagine a field of oriented line segments, with exterior derivative I should imagine an appropriate field of oriented areas.
| I usually think of differential forms as things "dual" to lines, surfaces, etc.
Here I picture forms in a 3-dimensional space. The generalization is obvious, with little care.
A foliation of the space (think of the sedimentary rocks) is always a 1-form. Not all 1-forms are foliations, but they can always be written as sums of foliations.
A line integral of a 1-form is simply "how many layers the line crosses". With signs.
A stream of "flux lines", that cross a surface, is a 2-form in R³. Not all 2-forms are streams of lines, but they can be sums of streams of lines.
The surface integral is, again, the number of "intersections".
A 3-form in R³ is simply a "cloud of points". Volume integration is "how many points are within a certain region".
The exterior derivative is the boundary of those objects.
Think of the foliation/1-form. If a layer breaks, its boundary is a line. Many layers that break form a stream of lines, that is a 2-form.
You see that if you take a closed loop, the integral of the 1-form along such loop is precisely the number of layers that the loop crossed without crossing back. If the loop is a keyring, the integral is the number of keys.
The value of this integral is the number of layers that were "born" or "dead" within the loop. Or, the number of stream-lines, of the exterior derivative, that crossed an area enclosed by the loop!
This is exactly Stokes' theorem: the integral of a 1-form around a closed loop is equal to the integral of its derivative on an area enclosed by such loop.
Ultimately, Stokes' theorem is a "conservation of intersections".
This works for any order, and any dimension.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/206074",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "18",
"answer_count": 2,
"answer_id": 1
} |
understanding the language of a theorem The theorem is stated as follows in the book:
Let $\phi:G\rightarrow G'$ be a group homomorphism, and let
$H=Ker(\phi)$. Let $a\in G$. Then the set
$\phi^{-1}[\{\phi(a)\}] = \{x\in G | \phi(x)=\phi(a)\}$
is the left coset $aH$ of $H$, and is also the right coset $Ha$ of
$H$. Consequently, the two partitions of $G$ into left cosets and into
right cosets of $H$ are the same.
I'm trying to parse this statement and it's not clear to me what claim the author is trying to make at the very end when he says "Consequently, the two partitions of $G$ into left cosets and into right cosets of $H$ are the same." I'm under the impression that, in general, the left and right cosets are not always the same. Under what condition are they the same? Under the condition that you have a homomorphism?
Let me mention that at this point, we're not supposed to know what a normal subgroup is. The author introduces the idea of a normal subgroup 2 pages later.
| The condition is that $H$ is the kernel of a group homomorphism, not just any random subgroup.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/206138",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
} |
Can all Hermitian matrices $H$ be written as $H=A^* A$? All the matrices below are square, complex matrices.
1) Is it true that, for every Hermitian matrix $H$, there exists $A$, that $A^*A=H$?
2) For any $A$, does $A^*A$ always have a square root? If it's not, is there any simple presumption of $A$ that makes $A^*A$ always have a square root?
| 1) This property is only true for matrices $H$ such that $x^*Hx\geq 0$ for all $x$ (non-negative definite). To see that, note that if $H=A^*A$, then $x^*Hx=x^*A^*Ax=\lVert Ax\rVert^2\geq 0$. Conversely, if $x^*Hx\geq 0$ for all $x$, then we can diagonalize $H$, and each eigenvalue is non-negative (hence admits a square root).

2) Yes, as $A^*A$ is non-negative definite and Hermitian, we can find $P$ unitary and $D$ diagonal such that $A^*A=P^*DP$. As each diagonal entry of $D$ is non-negative, take $D'$ diagonal such that $D'^2=D$. Then take $R:=P^*D'P$.
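A concrete instance of this construction on a $2\times 2$ real example (my own illustrative sketch, plain Python; real symmetric so conjugation is trivial):

```python
import math

# H = [[2,1],[1,2]] has eigenvalues 3 and 1 with orthonormal eigenvectors
# (1,1)/sqrt(2) and (1,-1)/sqrt(2).  Its positive square root B = V diag(sqrt3, 1) V^T is:
a = (math.sqrt(3) + 1) / 2
b = (math.sqrt(3) - 1) / 2
B = [[a, b], [b, a]]

# B*B = B^2 since B is Hermitian (real symmetric here)
BB = [[sum(B[i][k] * B[k][j] for k in range(2)) for j in range(2)] for i in range(2)]
print(BB)  # recovers [[2, 1], [1, 2]]
```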
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/206180",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 1
} |
Embeddings of bundles in projective space. Consider the projective variety $X = \mathbb{P}^2$, and the line bundle $\mathcal{O}_X(dH)$ where $H$ is a plane and $d \in \mathbb{N}$.
Let $L$ be the total space of $\mathcal{O}_X(dH)$. I know how to form this in the following way: Let $U_i$ be the standard affine opens of $X$, then
$$
L = \coprod (U_i \times \mathbb{A}^1)/\sim
$$
where the equivalence is given by the gluing matrices $A_{ij}= (x_j/x_i)^d$.
I am pretty sure the total space is a quasi-projective variety. Can anybody give me an explicit embedding?
For my purposes i am happy with the cased $d \in \{1,2\}$, but a general treatment would be nice.
Thanks.
Edit: Projective changed to quasi projective.
| If $E$ is a vector bundle of rank $r$ on a projective variety $X$, then the total space of $E$ is a quasi-projective variety : this is a vast generalization of what you are asking.
And yet the proof is easy: the vector bundle $E$ has a projective completion $\bar E=\mathbb P(E\oplus \theta)$ where $\theta=X\times \mathbb A_k^1$ denotes the trivial vector bundle of rank one on $X$.
This bundle is a locally trivial bundle $p:\bar E\to X$ with fiber $\mathbb P^r$ and is a projective variety.
And finally the original vector bundle $E$ admits of an open embedding $$E\stackrel {\cong}{\to}\mathbb P(E\oplus 1) \stackrel {\text {open}}\subset \mathbb P(E\oplus \theta)=\bar E$$ into the projective variety $\bar E$, which shows that $E$ is indeed quasi-projective.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/206243",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 2,
"answer_id": 1
} |
Time & Work Problem 12 men do a work in 36 days.In how many days can 18 men do the same work?
Sol: This can be solved as follows.
1 man can do the work in 12*36 days.
So, 1 man's 1 day work is 1/(12*36).
Hence 18 men's 1 day work is 18/(12*36).
So the number of days taken by 18 men is (12*36)/18 = 24 days.
Similarly Here is another question where i applied the same concept but i'm not getting the answer.
Q)4 men or 8 women can do a work in 24 days.How many days will 12 men and 8 women work to do same work.
My Soln:
4 men or 8 women do a work in 24 days,
so 1 man can do the work in (24*4) days.
So 1 man's 1 day work is 1/(24*4).
Since 1 man is equivalent to 2 women, 12 men and 8 women are equivalent to 16 men, whose 1 day work is 16/(24*4).
So the number of days taken is (24*4)/16 days.
But the actual answer is 3/2 days.
| 4 men need 24 days, so one man needs 96 days [4*24].
8 women need 24 days, so one woman needs 192 days [8*24].
So 12 men and 8 women together do 12/96 + 8/192 = 1/8 + 1/24 = 1/6 of the work per day,
and hence need 6 days.
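With exact fractions the computation is three lines (stdlib Python), confirming $6$ days rather than $3/2$:

```python
from fractions import Fraction

man_rate = Fraction(1, 4 * 24)      # one man's work per day (4 men, 24 days)
woman_rate = Fraction(1, 8 * 24)    # one woman's work per day (8 women, 24 days)
days = 1 / (12 * man_rate + 8 * woman_rate)
print(days)  # 6
```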
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/206399",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Are inversion and multiplicaton open? If $G$ is a topological group, are inversion $G \to G$ and multiplication $G\times G \to G$ open mappings? More concretely, I try to show that division of complex numbers
$$\{(z,w) \in \mathbb{C}^2;\; w \neq 0\} \to \mathbb{C},\; (z,w) \mapsto \tfrac{z}{w}$$
is an open mapping. I want to use this to construct charts on $\mathbb{CP}^1 = (\mathbb{C}^2\setminus\{0\})/\mathbb{C}^{\times}$.
I don't know where to begin.
| Yes, both are open maps. The inversion $g\mapsto g^{-1}$ is in fact a homeomorphism: it is continuous by definition, and it is its own inverse.
For the product, if $U,V\subseteq G$ are open subsets, then the image of $U\times V$ under multiplication is the set product
$$U\cdot V =\{u\cdot v\mid u\in U,v\in V\} =\bigcup_{u\in U}(u\cdot V)$$
is a union of open sets.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/206489",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 0
} |
how to find arrange the following functions in increasing or decreasing order? I have the following three functions
$f_1(x) = \frac{1}{4} (8-3x + \sqrt{(x-2) (5x-14)}) (1-x)$
$f_2(x) = \frac{1}{8} (12-4x + \sqrt{2} \sqrt{(5x-14)(x-3)} + \sqrt{2} \sqrt{(x-2)(x-3)} )(1-x)$
$f_3(x) = (x-1)(x-2)$
How can it be shown that
$f_1(x) > (or <) f_2(x) < (or >) f_3(x)$
| If you are interested as $x \to \infty$, you can just look for the highest power of $x$. For $f_1(x)$, the square root goes as $\sqrt 5\, x$, so the whole thing goes as $\frac{3-\sqrt 5}{4}x^2$. You can do similarly with the second, and the third is $x^2$. This says that as $x \to \infty$, $f_3(x) \gt f_1(x)$.
If you are interested in all $x$, note that they all are $0$ at $x=1$, so none is strictly greater than any other.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/206566",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Prove $\ell_1$ is first category in $\ell_2$ Prove that $\ell_1$ is first category in $\ell_2$.
I tried to solve this, but had no idea about the approach. Any suggestions are helpful.
Thanks in advance.
| Write $F_n:=\{x\in \ell^2,\sum_{j=1}^{+\infty}|x_j|\leq n\}$. Then $\ell^1=\bigcup_{n\geq 1}F_n$. $F_n$ is closed in $\ell^2$, as if $\{x^{(k)}\}$ is a sequence which lies in $F_n$ and converges to $x$ in $\ell^2$; we have for an integer $N$ that
$$\sum_{j=1}^N|x_j|\leq\lim_{k\to\infty}\sum_{j=1}^N|x_j^{(k)}|\leq n,$$
which gives $x\in F_n$.
$F_n$ has an empty interior in $\ell^2$. Otherwise, if $B_{\ell^2}(x,r)\subset F_n$, then for each $y\in \ell^2$, we would have $\frac{r}{2\lVert y\rVert_2}y+x\in F_n$, hence $\frac{r}{2\lVert y\rVert_2}y\in F_{2n}$. This gives that $\lVert y\rVert_1\leq C\lVert y\rVert_2$ for a universal constant $C$, which is not possible.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/206641",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 1,
"answer_id": 0
} |
ZFC set theory,first order theory
Possible Duplicate:
What is the difference between Gödel's Completeness and Incompleteness Theorems?
what is the relationship between ZFC and first-order logic?
I am a bit confused by a few things that I have read recently.
I have read that ZFC is a first order theory and that any part of mathematics can be expressed in ZFC. Now I know that first order logic is complete; however, this would seem to contradict the incompleteness theorems (which I have a basic understanding of). I was wondering where I have gone wrong?
Thanks very much for any help (sorry for the silly question)
| As in the comments is said, the word 'complete' has 2 different meanings.
That the first order logic is complete, is meant that it is complete w.r.t the corresponding first order models, that is: a formula is valid in all models iff it has a proof (a deduction consisting of finitely many formulas, using some specific deduction rules, like modus ponens..)
That ZFC is incomplete, is meant it is incomplete as an axiom system: there is a formula $\phi$ such that neither $\phi$ nor $\lnot\phi$ is provable from ZFC. (And, in fact, it will still be incomplete after adding any recursively axiomatizable consistent set of further axioms.)
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/206693",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
} |
Prove $\{0,1\}^* -\{0^i 1^i\mid i \ge 0\}$ is context free? Is the only way to prove that this language is context-free to construct a Context-Free Grammar that accepts it?
If so, any hints on how to get started?
| What do you think about the following? Does it work?
$$S\to M1X$$
$$S\to X0M$$
$$M\to 0M1$$
$$M\to \Lambda$$
$$X\to 1X$$
$$X\to 0X$$
$$X\to \Lambda$$
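Since the answer is posed as a question, brute force can settle it. The following check (my own stdlib-Python sketch, not part of the original post) enumerates every string the grammar derives up to a length bound and compares against the true complement of $\{0^i 1^i\}$; it shows the grammar misses strings such as $0101$, so an extra case (e.g. for strings containing a $10$ between the blocks) is still needed.

```python
from itertools import product

# The proposed grammar: S -> M1X | X0M ; M -> 0M1 | eps ; X -> 1X | 0X | eps
RULES = {'S': ['M1X', 'X0M'], 'M': ['0M1', ''], 'X': ['1X', '0X', '']}

def generated(max_len):
    """All terminal strings of length <= max_len derivable from S (brute force).
    Terminals are never erased, so pruning forms with > max_len terminals is safe."""
    seen, stack, words = {'S'}, ['S'], set()
    while stack:
        form = stack.pop()
        i = next((k for k, c in enumerate(form) if c in RULES), None)
        if i is None:
            words.add(form)
            continue
        for rhs in RULES[form[i]]:
            new = form[:i] + rhs + form[i + 1:]
            if sum(c not in RULES for c in new) <= max_len and new not in seen:
                seen.add(new)
                stack.append(new)
    return words

def in_complement(w):  # w is not of the form 0^i 1^i
    k = len(w) // 2
    return not (len(w) % 2 == 0 and w == '0' * k + '1' * k)

target = {''.join(p) for n in range(5) for p in product('01', repeat=n)
          if in_complement(''.join(p))}
print(sorted(target - generated(4)))  # ['0101']
```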
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/206763",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
What's the probability that Abe will win the dice game? Abe and Bill are playing a game. A die is rolled each turn.
If the die lands 1 or 2, then Abe wins.
If the die lands 3, 4, or 5, then Bill wins.
If the die lands 6, another turn occurs.
What's the probability that Abe will win the game?
I think that the probability is $\frac{2}{5}$ just by counting the number of ways for Abe to win. I'm not sure how to formalize this though in terms of a geometric distribution.
| You are right. You can just ignore rolls of $6$ as they leave you back in the same situation. To formalize this, the chance Abe wins on turn $n$ is $\frac 13 \left(\frac 16 \right)^{n-1}$ and the chance that Bill wins on turn $n$ is $\frac 12 \left(\frac 16 \right)^{n-1}$. You can sum these if you want.
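The geometric series can be summed exactly (a small stdlib-Python check, not part of the original answer):

```python
from fractions import Fraction

p_abe_round = Fraction(1, 3)    # die lands 1 or 2
p_again = Fraction(1, 6)        # die lands 6
# P(Abe) = sum_{n>=1} (1/3)(1/6)^(n-1) = (1/3) / (1 - 1/6)
print(p_abe_round / (1 - p_again))  # 2/5
```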
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/206829",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10",
"answer_count": 6,
"answer_id": 2
} |
$A = B^2$ for which matrix $A$? Is it true that for any $A\in M(n,\mathbb{C})$there exist a $B\in M(n,\mathbb{C})$ such that $A = B^2$? I think this is not true (but I don't know nay example), and then is it possible to characterize such $A$?
| The Cholesky decomposition is loosely related to the concept of taking the square root of a matrix. If $A$ is a positive-definite Hermitian matrix, then $$ A = B B^{*}, $$ where $B$ is lower triangular with positive diagonal entries and $B^{*}$ is the conjugate transpose of $B$.
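The decomposition can be sketched in a few lines of pure Python for the real symmetric positive-definite case (the Hermitian case uses conjugates in the inner sums). Note it yields $A = LL^{T}$, not $A = L^2$, which is why it is only loosely related to a matrix square root:

```python
import math

def cholesky(A):
    """Lower-triangular L with L @ L^T = A, for a real symmetric
    positive-definite matrix A given as a list of rows."""
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                L[i][j] = math.sqrt(A[i][i] - s)   # diagonal entry, positive
            else:
                L[i][j] = (A[i][j] - s) / L[j][j]  # below-diagonal entry
    return L

A = [[4.0, 2.0], [2.0, 3.0]]
L = cholesky(A)
# Reconstruct A as L L^T to check the factorization.
R = [[sum(L[i][k] * L[j][k] for k in range(2)) for j in range(2)] for i in range(2)]
print(L)  # lower triangular: [[2.0, 0.0], [1.0, sqrt(2)]]
print(R)  # recovers A up to rounding
```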
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/207029",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11",
"answer_count": 2,
"answer_id": 1
} |
Do Lipschitz-continuous functions have weak derivatives on bounded open sets? Let $\Omega\subseteq\mathbb{R}^n$ be open and bounded. I'm wondering if a function
$f\in C^{0,1}(\Omega)$ (a Lipschitz-continuous one) is also an element of $W^{1,2}(\Omega)$ (that is, the space of weakly differentiable functions whose first weak derivatives are $L^2$-functions).
One can easily show that $\|f\|_{L^2}$ is bounded. What I did not yet manage to show is that the weak derivatives $\partial_{x_i}f$ exist for $i=1,\dots,n$.
Do they even exist? And if so, is there a constant $C$ such that $\|f\|_{C^{0,1}} \le C\|f\|_{W^{1,2}}$ or $\|f\|_{W^{1,2}}\le C\|f\|_{C^{0,1}}$.
I'd be glad for any help or hints to literature on this.
Thank you very much!
| Take a look at page 279 of this book: "Evans - Partial Differential Equations".
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/207078",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Asymptotic equivalence of exponents An earlier question links to a paper of Erdos in which he says that it is "well-known" that the Prime Number Theorem is equivalent to
$(\prod_{p\leq n}p)^{1/n} \to e$ as $n\to \infty.$ **
Here is my confusion.
If $~\prod_{p\leq n}p \sim e^n$ or $e^{\log \prod p}= e^{\sum \log p} \sim e^n,$
(the last relation appears in the linked question, but I take responsibility for it) doesn't this imply that
(*) $\lim_{n \to \infty} (\sum_{p\leq n} \log p - n) = 0?$
Of course it's true that $\lim_{n \to \infty} \frac{\sum \log p}{n} =1 $ and I do not think that (*) is true. But I think we do have in general that
$$ e^{f(x)}\sim e^{g(x)} \implies \lim (f(x) - g(x)) = 0,$$ since $\lim \frac{e^f}{e^g}= e^{f-g} = 1 \implies \lim (f-g) = 0.$
Can someone tell me where I have goofed? Thanks!
**If someone could point me to a proof of this I would appreciate it (I don't see it in Apostol or Hardy & Wright).
| $(\Pi_{ p \leq n} p)^{1/n} \rightarrow e$, as $n \rightarrow \infty$, doesn't imply $\Pi_{p \leq n} p \sim e^n$.
Example: $(2^n n)^{1/n} \rightarrow 2$, as $n\rightarrow \infty$, but $2^n n$ is not equivalent to $2^n$.
Indeed, $n^{1/n}=e^{\frac{\log n}{n}} \rightarrow 1$ because $\frac{\log n}{n} \rightarrow 0$ as $n \rightarrow \infty$
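The counterexample is easy to watch numerically: since $(2^n n)^{1/n} = 2\,n^{1/n}$, one can tabulate the $n$-th roots directly (a quick sketch; computing via $2\,n^{1/n}$ avoids overflowing on the huge integer $2^n n$):

```python
# (2**n * n)**(1/n) equals 2 * n**(1/n), and n**(1/n) = exp(log(n)/n) -> 1,
# so the n-th root tends to 2 even though the ratio (2**n * n) / 2**n = n diverges.
for n in (10, 100, 10_000, 1_000_000):
    print(n, 2 * n ** (1 / n))  # approaches 2 as n grows
```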
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/207175",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
$L^2$-Oscillation Let $f:[0,1]\to \mathbb{R}$ be a smooth function such that the following property is satisfied.
$$\int\limits_{[0,1]}\int\limits_{[0,1]}|f(x)-f(y)|^2dxdy\leq \varepsilon.$$
What is the most I can say about $\max\limits_{[0,1]}f-\min\limits_{[0,1]}f$?
| If you choose $f_n(x) = x^n$, with $n$ a positive integer, a quick computation shows that $\int_{[0,1]} \int_{[0,1]} |f_n(x)-f_n(y)|^2 \, dx dy = \frac{2n^2}{(n+1)^2(2n+1)}$. Furthermore, $\max_{x\in [0,1]} f_n(x) - \min_{x\in [0,1]} f_n(x) = 1$ for all $n$.
Hence the integral can be made arbitrarily small, yet the range is 1. So, roughly speaking, not much can be said about the range given integral bound information.
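The quick computation in the answer can be checked exactly with rational arithmetic: expand $|x^n-y^n|^2 = x^{2n} - 2x^ny^n + y^{2n}$ and integrate term by term over the unit square, giving $\frac{2}{2n+1} - \frac{2}{(n+1)^2}$, which should match the stated closed form (a sketch):

```python
from fractions import Fraction as F

# Term-by-term integration of (x^n - y^n)^2 over [0,1]^2 versus the
# closed form 2n^2 / ((n+1)^2 (2n+1)) quoted in the answer.
for n in range(1, 8):
    integral = F(2, 2 * n + 1) - F(2, (n + 1) ** 2)
    closed_form = F(2 * n * n, (n + 1) ** 2 * (2 * n + 1))
    print(n, integral, integral == closed_form)
```

Since $\max f_n - \min f_n = 1$ for every $n$ while the integral shrinks like $1/n$, the bound indeed says nothing about the range.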
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/207246",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
For integers $a$ and $b \gt 0$, and $n^2$ a sum of two square integers, does this strategy find the largest integer $x | x^2 \lt n^2(a^2 + b^2)$? Here is some background information on the problem I am trying to solve. I start with the following equation:
$n^2(a^2 + b^2) = x^2 + y^2$, where $n, a, b, x, y \in \mathbb Z$, and $a \ge b \gt 0$, $n \gt 0$, and $x \ge y \gt 0$.
For given values of $a$ and $b$ and some $n$, I need to find $x$ and $y$ such that $x$ is as large as possible (and $y$ as small as possible). The value of $n$ is up to me as long as allowable values are linear ($n = kn_0$). A naive approach is to use the distributive property to set $x = na$ and $y = nb$, but this causes $x$ and $y$ to grow at the same rate and doesn't guarantee $x$ is as large as possible.
I realized that if $n^2$ is the sum of two square integers (the length of the hypotenuse of a Pythagorean triangle), then I can use the Brahmagupta–Fibonacci identity to find values for $x$ and $y$ that are as good or better than using the distributive property:
$(a^2 + b^2)(c^2 + d^2) = (ac + bd)^2 + (ad - bc)^2$
$n^2 = c^2 + d^2$, where $c \gt d$
So for example, if $a = 2$, $b = 1$, and $n = 5$, then instead of
$5^2(2^2 + 1^2) = 10^2 + 5^2 = 125$, so that $x = 10$ and $y = 5$ we get
$(4^2 + 3^2)(2^2 + 1^2) = (4\cdot2 +3\cdot1)^2 + (4\cdot1 - 3\cdot2)^2 = 11^2 + (-2)^2 = 125$, so $x = 11$ and $y=2$
My question is, does this strategy always find $x$ to be the largest possible integer whose square is less than $n^2(a^2 + b^2)$, for $n$ a sum of two square integers?
| Let $N = n^2(a^2+b^2)$ and consider its prime factorization.
Let $p$ be a prime divisor of $N$.
If $p\equiv 3\pmod4$, then necessarily $p^2|N$, $p|x$ and $p|y$ (and the factor comes from $p|n$).
If $p\equiv 1\pmod 4$, find $u,v$ such that $u^2+v^2=p$.
Let $k$ be the exponent of $p$ in $N$, i.e. $p^k||N$.
Then $x+iy$ must be a multiple of $(u+iv)^r(u-iv)^{k-r}$ with $0\le r\le k$. This gives you $k+1$ choices for each such $p$, and in fact the optimal choice may be influenced by the choices for other primes.
(The case $p=2$ is left as an exercise). The number of candidates to test is then the product of the $k+1$, which may become quite large, but of course much, much smaller than $N$.
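As a concrete companion to the Gaussian-integer analysis, for small $N$ one can simply enumerate every representation as a sum of two squares — a brute-force sketch (fine for small $N$, hopeless for large $N$, which is where the factorization approach pays off; the helper name is my own):

```python
import math

def representations(N):
    """All (x, y) with x >= y > 0 and x^2 + y^2 = N, by brute force."""
    reps = []
    for y in range(1, math.isqrt(N // 2) + 1):  # x >= y forces y <= sqrt(N/2)
        x2 = N - y * y
        x = math.isqrt(x2)
        if x * x == x2:
            reps.append((x, y))
    return reps

# The worked example from the question: n=5, a=2, b=1, so N = 5^2 * (2^2 + 1^2) = 125.
print(representations(125))  # [(11, 2), (10, 5)]

# The Brahmagupta-Fibonacci construction with 5^2 = 4^2 + 3^2 gives
# x = 4*2 + 3*1 = 11, y = |4*1 - 3*2| = 2 -- here that is the largest possible x.
print(max(representations(125)))  # (11, 2)
```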
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/207307",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Limit of a continuous function Suppose that $f$ is a continuous and real function on $[0,\infty)$. How can we show that if $\lim_{n\rightarrow\infty}(f(na))=0$ for all $a>0$ then $\lim_{x\rightarrow+\infty} f(x)=0$?
| $\newcommand{\orb}{\operatorname{orb}}$If $f(x)\not\to 0$ as $x\to\infty$, then there is an $\epsilon>0$ such that for every $m\in\Bbb N$ there is an $x_m\ge m$ such that $|f(x_m)|\ge\epsilon$. Since $f$ is continuous, for each $m\in\Bbb N$ there is a $\delta_m>0$ such that $|f(x)|>\frac{\epsilon}2$ for all $x\in(x_m-\delta_m,x_m+\delta_m)$. For $n\in\Bbb N$ let $$U_n=\bigcup_{k\ge n}(x_k-\delta_k,x_k+\delta_k)\;.$$
For $a\in(0,1)$ let $\orb(a)=\{na:n\in\Bbb Z^+\}$, and for $n\in\Bbb N$ let $$G_n=\{a\in(0,1):\orb(a)\cap U_n\ne\varnothing\}\;.$$ Suppose that $0<b<c<1$, and let $$V(b,c)=\bigcup_{n\in\Bbb Z^+}(nb,nc)=\bigcup_{x\in(b,c)}\orb(x)\;.$$ Let $m=\left\lfloor\frac{b}{c-b}\right\rfloor+1$; $(n+1)b<nc$ for each $n\ge m$, so $V(b,c)\supseteq(mb,\to)$. It follows that $V(b,c)\cap U_n\ne\varnothing$ for each $n\in\Bbb N$ and hence that $(b,c)\cap G_n\ne\varnothing$ for each $n\in\Bbb N$. Thus, each $G_n$ is a dense open subset of $(0,1)$, so by the Baire category theorem $G=\bigcap_{n\in\Bbb N}G_n$ is dense in $(0,1)$ and in particular, $G\ne\varnothing$.
Fix $a\in G$. Then $a\in G_n$ for each $n\in\Bbb N$, so $\orb(a)\cap U_n\ne\varnothing$ for each $n\in\Bbb N$. This clearly implies that $\left\{n\in\Bbb Z^+:|f(na)|>\frac{\epsilon}2\right\}$ is infinite, contradicting the hypothesis that $\lim_{n\to\infty}f(na)=0$, and we conclude that $\lim_{x\to\infty}f(x)=0$.
Added: Since you’re having trouble with the notion of proof by contradiction, let me note that I need not have phrased it that way: with a small change in wording this becomes a proof of the contrapositive of the desired statement. Since a statement and its contrapositive are logically equivalent, it proves the desired statement as well.
The desired statement has the form $A\land B\Rightarrow C$, where $A$ is the hypothesis that $f$ is continuous, $B$ is the hypothesis that $\lim_{n\to\infty}f(na)=0$ for each $a>0$, and $C$ is the desired conclusion, that $\lim_{x\to\infty}f(x)=0$. As I phrased my argument, it has the following form:
Assume $A,B$, and $\lnot C$, and infer $\lnot B$, thereby showing that $A\land B\land\lnot C\Rightarrow B\land\lnot B$. Since $B\land\lnot B$ is a contradiction, the hypothesis $A\land B\land\lnot C$ is false. But we’re given that $A$ and $B$ are true, so it must be $\lnot C$ that’s false, and therefore, given that $A$ and $B$ are true, $C$ must be true.
I could, however, have cast the argument in the following form with very minor changes in wording:
Assume $A$. Then $\lnot C\Rightarrow\lnot B$, which is logically equivalent to $B\Rightarrow C$, so $A\land B\Rightarrow C$.
Specifically, I could have written this for the last paragraph:
Fix $a\in G$. Then $a\in G_n$ for each $n\in\Bbb N$, so $\orb(a)\cap U_n\ne\varnothing$ for each $n\in\Bbb N$. This clearly implies that $\left\{n\in\Bbb Z^+:|f(na)|>\frac{\epsilon}2\right\}$ is infinite, and hence that $\lim_{n\to\infty}f(na)\ne 0$. That is, we’ve shown that if $f(x)\not\to 0$ as $x\to\infty$, then there is at least one $a>0$ such that $\lim_{n\to\infty}f(na)\ne 0$. This is logically equivalent to the assertion that if $\lim_{n\to\infty}f(na)=0$ for every $a>0$, then $f(x)\to 0$ as $x\to\infty$, which is what we wanted to prove.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/207395",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
} |
Calculating CRC code I think I may be under a misconception. When calculating the CRC code, how many bits do you append to the original message? Is it the degree of the generator polynomial (e.g. x^3+1 you append three 0s) or is it the number of digits used to represent the generator polynomial (e.g. x^3+1 gives 1001 which gives four 0s)?
For example if you had the generator G(x)=x^4+x+2 and the message 10 000 101 would the numerator be 100 001 010 000 or 1 000 010 100 000?
| Calculating the CRC does not itself append any bits to the message. It gives you some output bits that you can afterwards choose to do with as you want.
(Sometimes what you want to do with the output is to use it to compute some bits to append to the message such that the CRC of the augmented message happens to be all zeroes. Or equivalently, you can append sufficiently many zero bits to the message before calculating the CRC, and then subtract the remainder from the padded message, such that the result of that correction will have all-zero CRC).
The number of bits in the CRC is the degree of the generator polynomial. That is, for a generator polynomial of $x^3+1$ you get a 3-bit CRC. This is because the remainder in polynomail division always has lower degree than the divisor, so you only need to represent those terms with lower exponent than the leading term in the generator.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/207467",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |