Polynomial with infinitely many zeros. Can a polynomial in $ \mathbb{C}[x,y] $ have infinitely many zeros? This is clearly not true in the one-variable case, but what happens in two or more variables?
| Any nonconstant polynomial $p(x,y)\in\mathbb{C}[x,y]$ will always have infinitely many zeros.
If the polynomial is only a function of $x$, we may pick any value for $y$ and find a solution (since $\mathbb{C}$ is algebraically closed).
If the polynomial uses both variables, let $d$ be the greatest power of $x$ appearing in the polynomial. Writing $p(x,y)=q_d(y)x^d+q_{d-1}(y)x^{d-1}+\cdots+q_0(y)$, let $\hat y$ be any complex number other than the finitely many roots of $q_d$. Then $p(x,\hat y)$ is a polynomial in $\mathbb{C}[x]$ of degree $d\ge 1$, which has a root.
EDIT: I neglected to mention that this argument generalizes to any number of variables.
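A quick numerical illustration of the argument (the polynomial here is my own toy example, not from the question): take $p(x,y)=yx^2+x+(y+2)$, so $q_2(y)=y$ vanishes only at $y=0$, and every other choice of $\hat y$ produces a zero of $p$.

```python
import cmath

# Hypothetical example: p(x, y) = y*x^2 + x + (y + 2), so q_2(y) = y
# has its only root at y = 0; any other value of y-hat works.
def p(x, y):
    return y * x**2 + x + (y + 2)

def root_in_x(y):
    # Solve y*x^2 + x + (y + 2) = 0 for x with the quadratic formula (y != 0).
    a, b, c = y, 1, y + 2
    return (-b + cmath.sqrt(b * b - 4 * a * c)) / (2 * a)

# Each admissible y-hat yields a distinct zero (x, y-hat) of p.
zeros = [(root_in_x(y), y) for y in range(1, 50)]
assert all(abs(p(x, y)) < 1e-8 for x, y in zeros)
```

Distinct values of $\hat y$ give distinct points, so this produces as many zeros as we care to compute.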
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/286352",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "19",
"answer_count": 5,
"answer_id": 3
} |
Example of Matrix in Reduced Row Echelon Form I'm struggling with this question and can't seem to come up with an example:
Give an example of a linear system (augmented matrix form) that has:
* reduced row echelon form
* consistent
* 3 equations
* 1 pivot variable
* 1 free variable
The constraints that I'm struggling with is: If the system has 3 equations, that means the matrix must have at least 3 non-zero rows. And given everything else, how can I have only 1 pivot?
| Hint: It's gotta have only three columns, one for each of the variables (1 pivot, 1 free) and one column for the constants in the equations.
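One concrete system matching the hint (my own choice, not the only one): $x + 2y = 3$ padded with two zero rows. It can be checked mechanically:

```python
# A hypothetical augmented matrix satisfying all the constraints:
# 3 equations in variables x and y, already in RREF, with the single
# pivot in column 1, y free, and the system consistent.
aug = [
    [1, 2, 3],
    [0, 0, 0],
    [0, 0, 0],
]

# Every choice of the free variable y = t gives a solution x = 3 - 2t.
def solution(t):
    return (3 - 2 * t, t)

for t in (-2, 0, 1, 5):
    x, y = solution(t)
    assert all(row[0] * x + row[1] * y == row[2] for row in aug)

# Exactly one pivot row (nonzero coefficient part):
assert sum(1 for row in aug if any(row[:2])) == 1
```

The zero rows are what reconcile "3 equations" with "1 pivot": they carry no information but keep the row count at three.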
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/286412",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
How to integrate $\int_{}^{}{\frac{\sin ^{3}\theta }{\cos ^{6}\theta }d\theta }$? How to integrate $\int_{}^{}{\frac{\sin ^{3}\theta }{\cos ^{6}\theta }d\theta }$?
This is kind of homework,and I have no idea where to start.
One way is to avoid cumbersome calculations by using s for sine and c for cosine. Split the s^3 in the numerator into s*s^2, use s^2 = 1 - c^2, and putting everything in place you have
s*(1-c^2)/c^6. Since the lone s will serve as the negative differential of c, the integrand reduces nicely to (1-c^2)*(-dc)/c^6. Divide c^6 into the numerator and (after distributing the minus sign) you have
(c^(-4) - c^(-6)) dc, which integrates nicely using the elementary power rule, the very first integration rule you learned! Of course, don't forget to backtrack, replacing the c's with cos θ, and you are done. No messy secants and tangents and certainly no trig substitutions.
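Carrying the power rule through gives the antiderivative $\frac15\sec^5\theta-\frac13\sec^3\theta+C$ (my own write-out of the final step); a numeric sanity check against direct quadrature:

```python
import math

def integrand(t):
    return math.sin(t) ** 3 / math.cos(t) ** 6

def F(t):
    # Antiderivative from the power rule: (1/5) cos^-5 t - (1/3) cos^-3 t
    c = math.cos(t)
    return 1 / (5 * c**5) - 1 / (3 * c**3)

def simpson(f, a, b, n=2000):
    # composite Simpson's rule (n must be even)
    h = (b - a) / n
    s = f(a) + f(b) + sum((4 if i % 2 else 2) * f(a + i * h) for i in range(1, n))
    return s * h / 3

a, b = 0.1, 0.8
assert abs(simpson(integrand, a, b) - (F(b) - F(a))) < 1e-10
```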
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/286579",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 2
} |
$x\otimes 1\neq 1\otimes x$ In Bourbaki, Algèbre 5, section 5, one has $A$ and $B$ two $K$-algebras in an extension $\Omega$ of $K$. It is said that if the morphism $A\otimes_K B\to \Omega$ is injective then $A\cap B=K$. I see the reason: if not there would exist $x\in A\cap B\setminus K$ so that $x\otimes 1=1\otimes x$ which is false.
But why $1\otimes x\neq x\otimes 1$ for $x\notin K$?
| I hope you know that if $\{v_i\}$ is a basis of $V$ and $\{w_j\}$ is a basis of $W$, then $\{v_i\otimes w_j\}$ is a basis of $V\otimes W$. Now since $x\notin K$, we can extend $\{1,x\}$ to a basis of $A$ and $B$, respectively. Now as a corollary of the above claim you have in particular that $1\otimes x$ and $x\otimes 1$ are linearly independent. In particular they are not equal.
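In coordinates this independence is visible with a Kronecker product; a small sketch (the coordinate choices are mine, truncating to the span of $\{1,x\}$ in each factor):

```python
def kron(u, v):
    # Kronecker (tensor) product of coordinate vectors
    return [a * b for a in u for b in v]

# Coordinates with respect to bases of A and B extending {1, x}:
# 1 -> (1, 0), x -> (0, 1), truncated to the span of {1, x} in each factor.
one, x = [1, 0], [0, 1]
x_tensor_1 = kron(x, one)    # coordinates of x ⊗ 1
one_tensor_x = kron(one, x)  # coordinates of 1 ⊗ x

assert x_tensor_1 == [0, 0, 1, 0]
assert one_tensor_x == [0, 1, 0, 0]
# Not proportional, hence linearly independent (and in particular unequal):
assert not all(x_tensor_1[i] * one_tensor_x[j] == x_tensor_1[j] * one_tensor_x[i]
               for i in range(4) for j in range(4))
```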
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/286642",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 2,
"answer_id": 1
} |
Converting to regular expressions I am really not sure about the following problem, I tried to answer it according to conversion rules the best I can. I was wondering if someone can give me some hints as to whether or not I am on the right track.
Many thanks in advance.
Convert to Regular Expressions
L1 = {w | w begins with a 1 and ends with a 0.} = 1E*0
L2 = {w | w contains the substring 0101} = E*0101E*
L3 = {w | the length of w does not exceed 5} = EEEEE
L4 = {w | every odd position of w is a 1} = (E*=1 intersection E*E*)
L5 = { w | w contains an odd number of 1s, or exactly 2 0s. } = (E)* U (E=0)*2
| As for $L_1$ you are right, if you would like a more generic approach, it would be $1E^* \cap E^* 0$ which indeed equals $1E^*0$. The thing to remember is that conjunction "and" can be often thought of as intersection of languages.
Regarding $L_2$, it is also ok.
Your answer to $L_3$ is wrong: $EEEEE$ means exactly 5 symbols, whereas "does not exceed $5$" allows for much more, e.g. the empty string. One way to achieve this is $\{\varepsilon\} \cup E \cup EE \cup EEE \cup E^4 \cup E^5$ or shorter $\bigcup_{k=0}^{5}E^k$.
In $L_4$ I do not understand your notation $E^*=1$. Also $E^*E^*$ equals $E^*$, that is, if you join two arbitrary strings of any length you will get just an arbitrary string of some length; on the other hand $(EE)^*$ would be different: in fact the length of any string in $(EE)^*$ has to be even. Observe:
\begin{align}
\{\varepsilon\} = E^0 = (E^2)^0 \\
EE = E^2 = (E^2)^1 \\
EEEE = E^4 = (E^2)^2 \\
EEEEEE = E^6 = (E^2)^3 \\
EEEEEEEE = E^8 = (E^2)^4
\end{align}
$$\{\varepsilon\} \cup EE \cup EEEE \cup \ldots = (E^2)^0 \cup (E^2)^1 \cup (E^2)^2 \cup \ldots = (E^2)^*$$
Taking this into account, strings which contain $1$ on their odd positions are built from blocks $1\alpha$ for $\alpha \in E$ (I assume that the first position is numbered $1$) or in short form $1E$. To give you a hint, $(1E)^*$ would be a language of strings of even length with $1$ on odd positions. Try to work out the expression for any length.
$L_5$ could be better. Two $0$s should be $1^*01^*01^*$ or shorter $(1^*0)^21^*$. Four zeros would be $(1^*0)^41^*$, and using the example from $L_4$ try to find the correct expression for odd number of 1s. Some more hints: as "and" often involves intersection $\cap$, "or" usually ends up being some union $\cup$.
Hope this helps ;-)
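Since these languages are over the finite alphabet $E=\{0,1\}$, the corrected expressions can be brute-force tested with ordinary regexes. The Python transliterations below are my own (including `(1[01])*1?` as the worked-out answer for $L_4$ with strings of any length):

```python
import re
from itertools import product

def strings(max_len):
    # all binary strings of length 0..max_len
    for n in range(max_len + 1):
        for t in product("01", repeat=n):
            yield "".join(t)

# L3: length does not exceed 5  <->  the union E^0 ∪ ... ∪ E^5.
L3 = re.compile(r"^[01]{0,5}$")
assert all(bool(L3.match(w)) == (len(w) <= 5) for w in strings(7))

# L4: every odd position holds a 1 (positions numbered from 1):
# blocks 1E repeated, plus an optional trailing 1 for odd lengths.
L4 = re.compile(r"^(1[01])*1?$")
assert all(bool(L4.match(w)) == all(w[i] == "1" for i in range(0, len(w), 2))
           for w in strings(7))
```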
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/286700",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
How would one find other real numbers that aren't in the rational field of numbers? For example, $\sqrt2$ isn't a rational number, since there is no rational number whose square equals two. And I see this example of a real number all the time and I'm just curious about how you can find or determine other numbers like so. Or better yet, how was it found that if $\mathbb Q$ is the set of all rational numbers, $\sqrt2\notin\mathbb Q$?
I apologize if the number theory tag isn't appropriate; I'm not really sure what category this question falls under.
| I'm not sure if you're asking about finding a real number, or determining whether a given real number is rational or not. In any case, both problems are (in general) very hard.
Finding a real number
There are lots and lots of real numbers. How many? Well the set of all real numbers which have a finite description as a string in any given countable alphabet is countably infinite, but the set of all reals is uncountably infinite $-$ if we listed all the reals that we could possibly list then we wouldn't have even scratched the surface!
Determining whether or not a given real number is rational
No-one knows with certainty whether $e+\pi$ or $e\pi$ are rational, though we do know that if one is rational then the other isn't. In general, finding out if a real number is rational is very hard. There are quite a few methods that work in special cases, some more sophisticated than others.
An example of a ridiculous method that can be used to show that $\sqrt[n]{2}$ is irrational for $n>2$ is as follows. Suppose $\sqrt[n]{2} = \dfrac{p}{q}$. Then rearranging gives $2q^n=p^n$, i.e.
$$q^n + q^n = p^n$$
but since $n>2$ this contradicts Fermat's last theorem. [See here.]
The standard proof of the irrationality of $\sqrt{2}$ is as follows. Suppose $\sqrt{2} = \frac{p}{q}$ with $p,q$ integers and at most one of $p$ and $q$ even. (This can be done if it's rational: just keep cancelling $2$s until one of them is odd.) Then $2q^2=p^2$, and so $2$ divides $p^2$ (and hence $p$); but then $2^2$ divides $2q^2$, and so another $2$ must divide $q^2$, so $2$ divides $q$ too. But this contradicts the assumption that one of $p$ and $q$ is odd.
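A finite sanity check of the two ingredients of the descent (obviously no computation can prove irrationality; this just corroborates the argument on a bounded range):

```python
# No fraction p/q with denominator up to 2000 satisfies p^2 = 2 q^2;
# only the two integers nearest sqrt(2)*q need checking for each q.
hits = [(p, q)
        for q in range(1, 2001)
        for p in (int(2 ** 0.5 * q), int(2 ** 0.5 * q) + 1)
        if p * p == 2 * q * q]
assert hits == []

# The key parity step "2 divides p^2 implies 2 divides p", for small p:
assert all(p % 2 == 0 for p in range(1000) if (p * p) % 2 == 0)
```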
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/286763",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 5,
"answer_id": 0
} |
What is a point? In geometry, what is a point?
I have seen Euclid's definition and definitions in some text books. Nowhere have I found a complete notion. And then I made a definition out from everything that I know regarding maths. Now, I need to know what I know is correct or not.
One book said, if we make a dot on a paper, it is a model for a point.
Another said it has no size. Another said, everybody knows what it is.Another said, if placed one after another makes a straight line.Another said, dimensionless.Another said, can not be seen by any means.
We can't always define everything or prove all facts. When we define something, we describe it in terms of other well-known objects, so if we don't accept some notions as primitive, we cannot define anything at all! The same goes for proving facts: if we don't accept some statements as axioms, like the "ZFC" axioms, then we can't speak about proving other facts.
About your question: in terms of which objects do you want to define "point"? If you don't take it as primitive, you should find other objects you know that can describe a "point"!
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/286877",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "18",
"answer_count": 4,
"answer_id": 1
} |
Prove that between every rational number and every irrational number there is an irrational number. I have gotten this far, but I'm not sure how to make it apply to all rational and irrational numbers....
http://i.imgur.com/6KeniwJ.png
BTW, I'm quite newbish so please explain your reasoning to me like I'm 5. Thanks!
UPDATE:
| Let $p/q$ be a rational number and $r$ be an irrational number.
Consider the number $w = \dfrac{p/q+r}2$ and prove the following statements.
$1$. If $p/q < r$, then $w \in ]p/q,r[$. (Why?)
$2$. Similarly, if $r < p/q$, then $w \in ]r,p/q[$. (Why?)
$3$. $w$ is irrational. (Why?)
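A numeric illustration of statements $1$ and $3$ for one sample pair (floats stand in for the reals here; irrationality itself is of course not something a numeric check can establish):

```python
from fractions import Fraction
import math

pq = Fraction(1, 3)            # a sample rational p/q
r = math.sqrt(2)               # a sample irrational (float stand-in)
w = (float(pq) + r) / 2        # the midpoint from the hint

# Statement 1: if p/q < r then w lies strictly between them.
assert float(pq) < w < r

# The algebra behind statement 3: if w were rational, then
# r = 2w - p/q would be rational too -- a contradiction.
assert abs(2 * w - float(pq) - r) < 1e-15
```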
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/286941",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Differential of the exponential map on the sphere I have a problem understanding how to compute the differential of the exponential map. Concretely I'm struggling with the following concrete case:
Let $M$ be the unit sphere and $p=(0,0,1)$ the north pole. Then let $\exp_p : T_pM \cong \mathbb{R}^2 \times \{0\} \to M $ be the exponential map at $p$.
How do I now compute:
1) $\mathrm{D}\exp_p|_{(0,0,0)}(1,0,0)$
2) $\mathrm{D}\exp_p|_{(\frac{\pi}{2},0,0)}(0,1,0)$
3) $\mathrm{D}\exp_p|_{(\pi,0,0)}(1,0,0)$
4) $\mathrm{D}\exp_p|_{(2\pi,0,0)}(1,0,0)$
where $\mathrm{D}\exp_p|_vw$ is viewed as a directional derivative.
I really have no clue how to do this. Can anyone show me the technique how to handle that calculation?
I'll assume we are talking about the exponential map obtained from the Levi-Civita connection on the sphere with the round metric pulled back from $\mathbb R^3$. If so, the exponential here can be understood as mapping lines through the origin of $\mathbb R^2$ to the great circles through the north pole. Its derivative then transports the tangent space at the north pole to the corresponding downward-tilted tangent spaces.
For example, in $(3)$ we map to the tangent space at the south pole (we have traveled a distance of $\pi$). But since this tangent space has been transported along the great circle in the $(x,z)$ plane, the orientation of its $x$-axis is reversed with respect to the north pole. So the result here is $(-1,0,0)$. Similarly, in $(2)$ we travel $\pi/2$ along the same circle and end up in a tangent space parallel to the $(y,z)$ plane. The vector $(0,1,0)$ points to the same direction all the time.
Can you work out the answer for $(1)$ and $(4)$ yourself now?
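These values can be confirmed by finite differences, using the explicit formula $\exp_p(v)=\cos\lVert v\rVert\,p+\sin\lVert v\rVert\,\frac{v}{\lVert v\rVert}$ for the round unit sphere (a standard formula; I only check case $(3)$ and the identity behavior at the origin, leaving $(1)$'s and $(4)$'s exact values to the reader as intended):

```python
import math

P = (0.0, 0.0, 1.0)  # north pole

def exp_p(v):
    # exponential map at the north pole; v lives in R^2 x {0}
    r = math.hypot(v[0], v[1])
    if r == 0:
        return P
    return (math.sin(r) * v[0] / r,
            math.sin(r) * v[1] / r,
            math.cos(r))

def D_exp(v, w, h=1e-6):
    # central finite difference of t -> exp_p(v + t*w) at t = 0
    a = exp_p((v[0] + h * w[0], v[1] + h * w[1]))
    b = exp_p((v[0] - h * w[0], v[1] - h * w[1]))
    return tuple((x - y) / (2 * h) for x, y in zip(a, b))

# At the origin the differential is the identity.
d1 = D_exp((0.0, 0.0), (1.0, 0.0))
assert max(abs(c - e) for c, e in zip(d1, (1, 0, 0))) < 1e-5

# Case (3): at distance pi we land at the south pole with the x-axis reversed.
d3 = D_exp((math.pi, 0.0), (1.0, 0.0))
assert max(abs(c - e) for c, e in zip(d3, (-1, 0, 0))) < 1e-5
```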
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/287024",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 1,
"answer_id": 0
} |
How did Newton invent calculus before the construction of the real numbers? As far as I know, the reals were not rigorously constructed during his time (i.e via equivalence classes of Cauchy sequences, or Dedekind cuts), so how did Newton even define differentiation or integration of real-valued functions?
Way earlier, the Greeks invented a surprising amount of mathematics. Archimedes knew a fair amount of calculus, and the Greeks proved by Euclidean geometry that $\sqrt{2}$ and other surds were irrational (thus annoying Pythagoras greatly).
And I can do a moderate amount of (often valid) math without knowing why the co-finite subtopology of the reals has a hetero-cocomorphic normal centralizer (if that actually means something, I apologize).
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/287075",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 5,
"answer_id": 4
} |
Functor between categories with weak equivalance. A homotopical category is category with a distinguished class of morphism called weak equivalence.
A class $W$ of morphisms in $\mathcal{C}$ is a class of weak equivalences if:
* All identities are in $W$.
* For every $r,s,t$ for which the compositions $rs$ and $st$ exist and are in $W$, so are $r$, $s$, $t$, and $rst$.
Given two homotopical categories $\mathcal{C}$ and $\mathcal{D}$, there is the usual notion of functor category $\mathcal{C}^{\mathcal{D}}$. But there is also the category of homotopical functors from $\mathcal{C}$ to $\mathcal{D}$, which I am confused about. Note that $F:\mathcal{C} \rightarrow \mathcal{D}$ is a homotopical functor if it preserves the weak equivalences. I am quoting from page 78 of a work by Dwyer, Hirschhorn, Kan and Smith, called Homotopy Limit Functors on Model Categories and ...
Homotopical functor category $\left(\mathcal{C}^{\mathcal{D}}\right)_W$ is the full subcategory of $\mathcal{C}^\mathcal{D}$ spanned by the homotopical functors. (What does it mean?)
Also, in the definition of homotopical equivalence of homotopical categories, we read that $f:\mathcal{C}\rightarrow \mathcal{D}$ and $g:\mathcal{D}\rightarrow \mathcal{C}$ form a homotopical equivalence if their compositions $fg$ and $gf$ are naturally weakly equivalent to the identities, i.e. can be connected to them by a zigzag of natural weak equivalences.
I do not know what the authors mean by a zigzag of natural weak equivalences. Please help me with these concepts and tell me where I can learn more about them.
Thank you.
Maybe you're beginning your journey through model, localized, and homotopy categories along a steep path. I would try this short paper first: W. G. Dwyer and J. Spalinski, Homotopy Theories and Model Categories.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/287151",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Degree of continuous mapping via integral Let $f \in C(S^{n},S^{n})$. If $n=1$ then the degree of $f$ coincides with index of curve $f(S^1)$ with respect to zero (winding number) and may be computed via integral
$$
\deg f = \frac{1}{2\pi i} \int\limits_{f(S^1)} \frac{dz}{z}
$$
Is it possible to compute the degree of continuous mapping $f$ in the case $n>1$ via integral of some differential form?
| You could find some useful information (try page 6) here and here.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/287222",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Prove that if $x$ is a non-zero rational number, then $\tan(x)$ is not a rational number and use this to prove that $\pi$ is not a rational number.
Prove that if $x$ is a non-zero rational number, then $\tan(x)$ is not a rational number and use this to prove that $\pi$ is not a rational number.
I heard that this was proved two hundred years ago. I need this proof because I want to know the proof of why $\pi$ is not rational.
I need the simplest proof!
thanx !
| The proof from a few hundred years ago was done by Lambert and Miklós Laczkovich provided a simplified version later on. The Wikipedia page for "Proof that $\pi$ is irrational" provides this proof (in addition to some other discussion).
http://en.wikipedia.org/wiki/Proof_that_%CF%80_is_irrational#Laczkovich.27s_proof
Edit: Proving the more general statement here hinges upon Claim 3 in Laczkovich's proof.
Defining the functions $f_k(x)$ by
\begin{equation}
f_k(x) = 1 - \frac{x^2}{k} + \frac{x^4}{2!k(k+1)} - \frac{x^6}{3!k(k+1)(k+2)} + \cdots
\end{equation}
it can be seen (using Taylor series) that
\begin{equation}
f_{1/2}(x/2) = \cos(x)
\end{equation}
and
\begin{equation}
f_{3/2}(x/2) = \frac{\sin(x)}{x}
\end{equation}
so that
\begin{equation}
\tan x = x\frac{f_{3/2}(x/2)}{f_{1/2}(x/2)}
\end{equation}
Taking any $x \in \mathbb{Q} \backslash \{0\}$ we know that $x/2 \in \mathbb{Q} \backslash \{0\}$ and also that $x^2/4 \in \mathbb{Q} \backslash \{0\}$ as well. Then $x/2$ satisfies the hypotheses required by Claim 3.
Using Claim 3 and taking $k = 1/2$, we have
\begin{equation}
\frac{f_{k+1}(x/2)}{f_k(x/2)} = \frac{f_{3/2}(x/2)}{f_{1/2}(x/2)} \notin \mathbb{Q}
\end{equation}
which then also implies that
\begin{equation}
\frac{x}{2}\frac{f_{3/2}(x/2)}{f_{1/2}(x/2)} \notin \mathbb{Q}
\end{equation}
Multiplying by 2 then gives $\tan x \notin \mathbb{Q}$.
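The two series identities used above are easy to confirm numerically; a sketch, generating successive terms of $f_k$ by their ratio:

```python
import math

def f(k, x, terms=40):
    # f_k(x) = 1 - x^2/k + x^4/(2! k(k+1)) - x^6/(3! k(k+1)(k+2)) + ...
    # consecutive terms differ by the factor -x^2 / ((n+1)(k+n)).
    total, term = 0.0, 1.0
    for n in range(terms):
        total += term
        term *= -x * x / ((n + 1) * (k + n))
    return total

x = 0.7
assert abs(f(0.5, x / 2) - math.cos(x)) < 1e-12          # f_{1/2}(x/2) = cos x
assert abs(f(1.5, x / 2) - math.sin(x) / x) < 1e-12      # f_{3/2}(x/2) = sin(x)/x
assert abs(x * f(1.5, x / 2) / f(0.5, x / 2) - math.tan(x)) < 1e-12
```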
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/287282",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9",
"answer_count": 2,
"answer_id": 1
} |
Evaluate $\int\sin(\sin x)~dx$ I was skimming the virtual pages here and noticed a limit that made me wonder the following
question: is there any nice way to evaluate the indefinite integral below?
$$\int\sin(\sin x)~dx$$
Perhaps one way might use Taylor expansion. Thanks for any hint, suggestion.
From the Maclaurin series of $\sin x$: $\sin x=\sum\limits_{n=0}^\infty\dfrac{(-1)^nx^{2n+1}}{(2n+1)!}$
$\therefore\int\sin(\sin x)~dx=\int\sum\limits_{n=0}^\infty\dfrac{(-1)^n\sin^{2n+1}x}{(2n+1)!}dx$
Now for $\int\sin^{2n+1}x~dx$ , where $n$ is any non-negative integer,
$\int\sin^{2n+1}x~dx$
$=-\int\sin^{2n}x~d(\cos x)$
$=-\int(1-\cos^2x)^n~d(\cos x)$
$=-\int\sum\limits_{k=0}^nC_k^n(-1)^k\cos^{2k}x~d(\cos x)$
$=\sum\limits_{k=0}^n\dfrac{(-1)^{k+1}n!\cos^{2k+1}x}{k!(n-k)!(2k+1)}+C$
$\therefore\int\sum\limits_{n=0}^\infty\dfrac{(-1)^n\sin^{2n+1}x}{(2n+1)!}dx=\sum\limits_{n=0}^\infty\sum\limits_{k=0}^n\dfrac{(-1)^{n+k+1}n!\cos^{2k+1}x}{k!(n-k)!(2n+1)!(2k+1)}+C$
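The double sum can be sanity-checked numerically against a direct quadrature of $\sin(\sin x)$ (Simpson's rule; the truncation order $N=12$ is my choice, more than enough given the $(2n+1)!$ in the denominator):

```python
import math

def F(x, N=12):
    # truncated antiderivative from the double sum above (constant C = 0)
    c = math.cos(x)
    total = 0.0
    for n in range(N + 1):
        for k in range(n + 1):
            total += ((-1) ** (n + k + 1) * math.factorial(n) * c ** (2 * k + 1)
                      / (math.factorial(k) * math.factorial(n - k)
                         * math.factorial(2 * n + 1) * (2 * k + 1)))
    return total

def simpson(f, a, b, n=2000):
    # composite Simpson's rule (n must be even)
    h = (b - a) / n
    s = f(a) + f(b) + sum((4 if i % 2 else 2) * f(a + i * h) for i in range(1, n))
    return s * h / 3

numeric = simpson(lambda t: math.sin(math.sin(t)), 0.0, 1.0)
assert abs((F(1.0) - F(0.0)) - numeric) < 1e-9
```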
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/287395",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "13",
"answer_count": 2,
"answer_id": 1
} |
Finding the Galois group of $\mathbb Q (\sqrt 5 +\sqrt 7) \big/ \mathbb Q$ I know that this extension has degree $4$. Thus, the Galois group is embedded in $S_4$. I know that the groups of order $4$ are $\mathbb Z_4$ and $V_4$, but both can be embedded in $S_4$. So, since I know that one is cyclic meanwhile the other is not, I've tried to determine if the Galois group is cyclic but I couldn't make it. Is there any other way?
| You should first prove that $\mathbf{Q}(\sqrt{5}+\sqrt{7})/\mathbf{Q}$ is a Galois extension. For this it may be useful to verify that $\mathbf{Q}(\sqrt{5}+\sqrt{7}) = \mathbf{Q}(\sqrt{5},\sqrt{7})$. Then you might consider the Galois groups of $\mathbf{Q}(\sqrt{5})/\mathbf{Q}$ and $\mathbf{Q}(\sqrt{7})/\mathbf{Q}$.
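Numerically one can corroborate both hints: $\alpha=\sqrt5+\sqrt7$ satisfies the degree-$4$ polynomial $x^4-24x^2+4$ (this follows from $\alpha^2=12+2\sqrt{35}$), and $\sqrt5$, $\sqrt7$ are themselves polynomial expressions in $\alpha$, which gives $\mathbf{Q}(\alpha)=\mathbf{Q}(\sqrt5,\sqrt7)$. A float check (identities worked out by hand, so treat the coefficients as my own derivation):

```python
import math

a = math.sqrt(5) + math.sqrt(7)

# alpha satisfies x^4 - 24 x^2 + 4 = 0, since alpha^2 = 12 + 2*sqrt(35):
assert abs(a ** 4 - 24 * a ** 2 + 4) < 1e-6

# sqrt(5), sqrt(7) lie in Q(alpha): alpha^3 - 22*alpha = 4*sqrt(5).
assert abs((a ** 3 - 22 * a) / 4 - math.sqrt(5)) < 1e-9
assert abs(a - (a ** 3 - 22 * a) / 4 - math.sqrt(7)) < 1e-9
```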
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/287464",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Evaluate integral with quadratic expression without root in the denominator $$\int \frac{1}{x(x^2+1)}dx = ? $$
How to solve it? Expanding to $\frac {A}{x}+ \frac{Bx +C}{x^2+1}$ would be wearisome.
| You can consider
$\displaystyle \frac{1}{x(x^2 + 1)} = \frac{1 + x^2 - x^2}{x(x^2 + 1)}$.
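Carrying the hint to the end gives $\frac1x-\frac{x}{x^2+1}$, whose antiderivative is $\ln|x|-\frac12\ln(1+x^2)+C$ (my own completion of the hint); a numeric check:

```python
import math

def lhs(x):
    return 1 / (x * (x**2 + 1))

def rhs(x):
    # (1 + x^2 - x^2) / (x (x^2 + 1)) = 1/x - x/(x^2 + 1)
    return 1 / x - x / (x**2 + 1)

def antiderivative(x):
    # integrating term by term: ln|x| - (1/2) ln(1 + x^2)
    return math.log(abs(x)) - 0.5 * math.log(1 + x**2)

for x in (0.3, 1.0, 2.5, -1.7):
    assert abs(lhs(x) - rhs(x)) < 1e-12
    h = 1e-6  # central difference of the antiderivative recovers the integrand
    deriv = (antiderivative(x + h) - antiderivative(x - h)) / (2 * h)
    assert abs(deriv - lhs(x)) < 1e-6
```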
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/287524",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 5,
"answer_id": 2
} |
Number of $n$-digit palindromes
How can one count the number of all $n$-digit palindromes? Is there any recurrence for that?
I'm not sure if my reasoning is right, but I thought that:
For $n=1$, we have $10$ such numbers (including $0$).
For $n=2$, we obviously have $9$ possibilities.
For $n=3$, we can choose 'extreme digits' in $9$ ways. Then there are $10$ possibilities for digits in the middle.
For n=4, again we choose extreme digits in $9$ ways and middle digits in $10$ ways
and, so on.
It seems that for even lengths of numbers we have $9 \cdot 10^{\frac{n}{2}-1}$ palindromes and for odd lengths $9 \cdot 10^{n-2}$. But this is certainly not even close to a proper solution of this problem.
How do I proceed?
| Details depend on whether for example $0110$ counts as a $4$-digit palindrome. We will suppose it doesn't. This makes things a little harder.
If $n$ is even, say $n=2m$, the first digit can be any of $9$, then the next $m-1$ can be any of $10$, and then the rest are determined. So there are $9\cdot 10^{m-1}$ palindromes with $2m$ digits.
If $n$ is odd, say $n=2m+1$, then the same sort of reasoning yields the answer $9\cdot 10^{m}$.
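Both formulas can be checked by brute force for small $n$ (counting palindromes with a nonzero leading digit, matching the convention chosen above):

```python
def count_palindromes(n):
    # brute force over all n-digit numbers (no leading zeros)
    return sum(1 for m in range(10 ** (n - 1), 10 ** n)
               if (s := str(m)) == s[::-1])

def formula(n):
    # 9 * 10^(m-1) for n = 2m, and 9 * 10^m for n = 2m + 1
    m, odd = divmod(n, 2)
    return 9 * 10 ** m if odd else 9 * 10 ** (m - 1)

for n in range(2, 6):
    assert count_palindromes(n) == formula(n)
```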
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/287582",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9",
"answer_count": 2,
"answer_id": 1
} |
Prove this matrix is neither unipotent nor nilpotent. The question asks me to prove that the matrix,
$$A=\begin{bmatrix}1 & 1\\0 & 1\end{bmatrix}$$
is neither unipotent nor nilpotent. However, can't I simply row reduce this to the identity matrix:
$$A=\begin{bmatrix}1 & 0\\0 & 1\end{bmatrix}$$
which shows that it clearly is unipotent since $A^k=I$ for all $k \in \Bbb Z^+$?
Is there something wrong with the question or should I treat A as the first matrix rather than reducing it (I don't see how it would then not be unipotent considering the two matrices here are equivalent)?
| HINT: As Calvin Lin pointed out in the comments, $(A-I)^2=0$, so $A$ is unipotent. To show that $A$ is not nilpotent, show by induction on $n$ that
$$A^n=\begin{bmatrix}1 & n\\0 & 1\end{bmatrix}\;.$$
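Both facts are quick to verify by direct computation:

```python
def matmul(A, B):
    # 2x2 integer matrix product
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[1, 1], [0, 1]]
AmI = [[0, 1], [0, 0]]  # A - I

# (A - I)^2 = 0, so A is unipotent.
assert matmul(AmI, AmI) == [[0, 0], [0, 0]]

# A^n = [[1, n], [0, 1]] never vanishes, so A is not nilpotent.
P = [[1, 0], [0, 1]]
for n in range(1, 20):
    P = matmul(P, A)
    assert P == [[1, n], [0, 1]]
```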
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/287656",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Find the area of the parallelogram with vertices $K(1, 3, 1), L(1, 6, 3), M(6, 12, 3), N(6, 9, 1)$.
Find the area of the parallelogram with vertices $K(1, 3, 1), L(1, 6, 3), M(6, 12, 3), N(6, 9, 1)$.
I know that what I need is an expression of the form (a vector) × (a second vector)
But, how do I decide what the two vectors will be from the points provided (since you cannot really draw it out accurately)?
I know you need to pick one point as the origin of the vector and then find the distance to each point, but which point would be the origin?
| Given a parallelogram with vertices $A$, $B$, $C$, and $D$, with $A$ diagonally opposite $C$, the vectors you want are $A-B$ and $A-D$.
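For the given points, $K$ and $M$ turn out to be the diagonal pair (since $K+M=L+N$), so the area is $\lVert\vec{KL}\times\vec{KN}\rVert=\sqrt{469}$ (this final value is my own computation from the hint):

```python
import math

K, L, M, N = (1, 3, 1), (1, 6, 3), (6, 12, 3), (6, 9, 1)

def sub(p, q):
    return tuple(a - b for a, b in zip(p, q))

def cross(u, v):
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

# K and M are diagonally opposite: the diagonals share a midpoint.
assert tuple(k + m for k, m in zip(K, M)) == tuple(l + n for l, n in zip(L, N))

# Area = norm of the cross product of the two edge vectors at K.
area = math.sqrt(sum(c * c for c in cross(sub(L, K), sub(N, K))))
assert abs(area - math.sqrt(469)) < 1e-12
```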
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/287719",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Convergence in distribution and convergence in the vague topology From Terrence Tao's blog
Exercise 23 (Implications and equivalences) Let ${X_n, X}$ be random variables taking values in a ${\sigma}$-compact metric space ${R}$.
(ii) Show that if ${X_n}$ converges in distribution to ${X}$, then ${X_n}$ has a tight sequence of distributions.
(iv) Show that ${X_n}$ converges in distribution to ${X}$ if and only if ${\mu_{X_n}}$ converges to ${\mu_X}$ in the vague topology (i.e. ${\int f\ d\mu_{X_n} \rightarrow \int f\ d\mu_X}$ for all continuous functions ${f: R \rightarrow {\bf R}}$ of compact support).
(v) Conversely, if ${X_n}$ has a tight sequence of distributions, and ${\mu_{X_n}}$ is convergent in the vague topology, show that ${X_n}$ is convergent in distribution to another random variable (possibly after extending the sample space). What happens if the tightness hypothesis is dropped?
Isn't (v) already part of (iv)? What do I miss to see? Thanks!
I believe the discussion of this old math.stackexchange post clarifies what is meant (I don't find the way the exercise was written particularly clear): Definition of convergence in distribution
See in particular the comment by Chris Janjigian. In (iv), the limiting distribution is assumed to be that of a R.V.; in (v), all that is given is that we have vague convergence (which need not be to a R.V., per the discussion of the link). However, if we add tightness, it will be - and this is exercise (v).
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/287800",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Prove $\int_0^\infty \frac{\ln \tan^2 (ax)}{1+x^2}\,dx = \pi\ln \tanh(a)$ $$
\mbox{How would I prove}\quad
\int_{0}^{\infty}
{\ln\left(\,\tan^{2}\left(\, ax\,\right)\,\right) \over 1 + x^{2}}\,{\rm d}x
=\pi
\ln\left(\,\tanh\left(\,\left\vert\, a\,\right\vert\,\right)\,\right)\,{\Large ?}.
\qquad a \in \mathbb{R}\setminus\left\{\,0\,\right\}
$$
| Another approach:
$$ I(a) = \int_{0}^{+\infty}\frac{\log\tan^2(ax)}{1+x^2}\,dx$$
first use the Leibniz integral rule (differentiation under the integral sign with respect to $a$),
and then do a change of variable to calculate the resulting integral.
Hope it helps.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/287852",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "30",
"answer_count": 6,
"answer_id": 5
} |
Which of these values for $f(12)$ are possible? If $f(10)=30, f'(10)=-2$ and $f''(x)<0$ for $x \geq 10$, which of the following are possible values for $f(12)$ ? There may be more than one correct answer.
$24, 25, 26, 27, 28$
So since $f''(x)<0$ indicates that the graph of $f(x)$ is concave down, and after using the slope formula I found an answer of $26$, would that make $f(12)$ less than or equal to $26$? Thank you!
| Hint: The second derivative condition tells you that the first derivative is decreasing past $10$, and so is $\lt -2$ past $10$.
By the Mean Value Theorem, $\dfrac{f(12)-f(10)}{12-10}=f'(c)$ for suitable $c$ strictly between $10$ and $12$. Now test the various suggested values.
For example, $\dfrac{27-30}{12-10}=-1.5$, impossible.
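Since $f'$ is strictly decreasing past $10$, the MVT quotient is strictly less than $-2$, so the test amounts to filtering the candidates by $(v-30)/2<-2$ (spelling out the hint; note the inequality is strict, so $26$ itself is excluded):

```python
f10, slope_bound = 30, -2  # f(10) = 30 and f'(c) < -2 for c in (10, 12)

# MVT forces (f(12) - 30)/2 = f'(c) < -2, i.e. f(12) < 26 strictly.
candidates = [24, 25, 26, 27, 28]
possible = [v for v in candidates if (v - f10) / 2 < slope_bound]
assert possible == [24, 25]
```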
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/287925",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
If $f(x) < g(x)$, is $A \leq B$? I have this question:
Let $f(x)→A$ and $g(x)→B$ as $x→x_0$. Prove that if $f(x) < g(x)$ for all $x∈(x_0−η, x_0+η)$ (for some $η > 0$) then $A\leq B$. In this case is it always true that $A < B$?
I've tried playing around with the definition for limits but I'm not getting anywhere. Can someone give me a hint on where to start?
| To show it is not always the case that $A<B$, you can come up with an example to show it is possible for $A = B$. So if we let $f(x) = (\frac{1}{x})^2$ and $g(x) = (\frac{1}{x})^4$, we know $f(x) < g(x)$ $\forall x \in (-1,1)$. And we know $\lim_{x \to 0} f(x) = \lim_{x \to 0} g(x) = \infty$ so it is possible for $A=B$.
Note the interval I gave you above is of the form $(x_0-η,x_0+η)$, which is centered about $x_0$ (distinguishing this from other answers).
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/287986",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 5,
"answer_id": 2
} |
How to integrate $\int_{0}^{\infty }{\frac{\sin x}{\cosh x+\cos x}\cdot \frac{{{x}^{n}}}{n!}\ \text{d}x} $? I have done one with $\displaystyle\int_0^{\infty}\frac{x-\sin x}{x^3}\ \text{d}x$, but I have no idea about these:
$$\begin{align*}
I&=\int_{0}^{\infty }{\frac{\sin x}{\cosh x+\cos x}\cdot \frac{{{x}^{n}}}{n!}\ \text{d}x}\tag1 \\
J&= \int_{0}^{\infty }{\frac{x-\sin x}{\left( {{\pi }^{2}}+{{x}^{2}} \right){{x}^{3}}}\ \text{d}x}\tag2 \\
\end{align*}$$
| I can address the second integral:
$$\int_{0}^{\infty }{dx \: \frac{x-\sin x}{\left( {{\pi }^{2}}+{{x}^{2}} \right){{x}^{3}}}}$$
Hint: We can use Parseval's Theorem
$$\int_{-\infty}^{\infty} dx \: f(x) \bar{g}(x) = \frac{1}{2 \pi} \int_{-\infty}^{\infty} dk \: \hat{f}(k) \bar{\hat{g}}(k) $$
where $f$ and $\hat{f}$ are Fourier transform pairs, and likewise for $g$ and $\hat{g}$. The FT of $1/(x^2+\pi^2)$ is easy, so we need the FT of the rest of the integrand, which turns out to be possible.
Define
$$\hat{f}(k) = \int_{-\infty}^{\infty} dx \: f(x) e^{i k x} $$
It is straightforward to show using the Residue Theorem that, when $f(x) = (x^2+a^2)^{-1}$, then
$$\hat{f}(k) = \frac{\pi}{a} e^{-a |k|} $$
Thus we need to compute, when $g(x) = (x-\sin{x})/x^3$,
$$\begin{align} \hat{g}(k) &= \int_{-\infty}^{\infty} dx \: \frac{x-\sin{x}}{x^3} e^{i k x} \\ &= \frac{\pi}{2}(k^2-2 |k|+1) \mathrm{rect}(k/2) \\ \end{align}$$
where
$$\mathrm{rect}(k) = \begin{cases} 1 & |k|<\frac{1}{2} \\ 0 & |k|>\frac{1}{2} \end{cases} $$
Then we can write, using the Parseval theorem,
$$\begin{align} \int_{0}^{\infty }{dx \: \frac{x-\sin x}{\left( {{\pi }^{2}}+{{x}^{2}} \right){{x}^{3}}}} &= \frac{1}{8} \int_{-1}^1 dk \: (k^2-2 |k|+1) e^{-\pi |k|} \\ &= \frac{2-2 \pi +\pi ^2}{4 \pi ^3}-\frac{ e^{-\pi }}{2 \pi ^3} \\ \end{align}$$
NOTE
Deriving $\hat{g}(k)$ from scratch is challenging; nevertheless, it is straightforward (albeit, a bit messy) to prove that the expression is correct by performing the inverse transform on $\hat{g}(k)$ to obtain $g(x)$. I did this out and proved it to myself; I can provide the details to those that want to see them.
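The closed form can also be confirmed by direct numerical quadrature (a plain Simpson's rule with a Taylor-series guard near $x=0$ to dodge the $0/0$; truncating the domain at $200$ only costs about $10^{-8}$ given the $x^{-4}$ decay):

```python
import math

PI = math.pi

def integrand(x):
    # series guard near 0: (x - sin x)/x^3 = 1/6 - x^2/120 + x^4/5040 - ...
    if x < 1e-3:
        core = 1 / 6 - x * x / 120 + x**4 / 5040
    else:
        core = (x - math.sin(x)) / x**3
    return core / (PI * PI + x * x)

def simpson(f, a, b, n):
    # composite Simpson's rule (n must be even)
    h = (b - a) / n
    s = f(a) + f(b) + sum((4 if i % 2 else 2) * f(a + i * h) for i in range(1, n))
    return s * h / 3

numeric = simpson(integrand, 0.0, 200.0, 200000)
closed = (2 - 2 * PI + PI**2) / (4 * PI**3) - math.exp(-PI) / (2 * PI**3)
assert abs(numeric - closed) < 1e-5
```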
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/288049",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "18",
"answer_count": 4,
"answer_id": 0
} |
Existence of irreducible polynomial of arbitrary degree over finite field without use of primitive element theorem? Suppose $F_{p^k}$ is a finite field. If $F_{p^{nk}}$ is some extension field, then the primitive element theorem tells us that $F_{p^{nk}}=F_{p^k}(\alpha)$ for some $\alpha$, whose minimal polynomial is thus an irreducible polynomial of degree $n$ over $F_{p^k}$.
Is there an alternative to showing that irreducible polynomials of arbitrary degree $n$ exist over $F_{p^k}$, without resorting to the primitive element theorem?
A very simple counting estimation will show that such polynomials have to exist. Let $q=p^k$ and $F=\Bbb F_q$; then it is known that $X^{q^n}-X$ is the product of all irreducible monic polynomials over $F$ of some degree $d$ dividing $n$. The product $P$ of all irreducible monic polynomials over $F$ of degree strictly dividing $n$ then certainly divides the product over all strict divisors $d$ of $n$ of $X^{q^d}-X$ (all irreducible factors of $P$ are present in the latter product at least once), so that one can estimate
$$
\deg(P)\leq\sum_{d\mid n, d\neq n}\deg(X^{q^d}-X)\leq\sum_{i<n}q^i=\frac{q^n-1}{q-1}<q^n=\deg(X^{q^n}-X),
$$
so that $P\neq X^{q^n}-X$, and $X^{q^n}-X$ has some irreducible factors of degree$~n$.
I should add that by starting with all $q^n$ monic polynomials of degree $n$ and using the inclusion-exclusion principle to account recursively for the reducible ones among them, one can find the exact number of irreducible polynomials over $F$ of degree $n$ to be
$$
\frac1n\sum_{d\mid n}\mu(n/d)q^d,
$$
which is a positive number by essentially the above argument (since all values of the Möbius function $\mu$ lie in $\{-1,0,1\}$ and $\mu(1)=1$). A quick search on this site did turn up this formula here and here, but I did not stumble upon an elementary and general proof not using anything about finite fields, although I gave one here for the particular case $n=2$. I might well have overlooked such a proof though.
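As a quick numerical sanity check on the counting formula (a sketch, not part of the proof): over $\Bbb F_2$ a monic polynomial of degree $n$ can be encoded as an integer bit mask, and a brute-force count of irreducibles can be compared with the Möbius sum.

```python
def mobius(n):
    # Moebius function by trial division
    result, d = 1, 2
    while d * d <= n:
        if n % d == 0:
            n //= d
            if n % d == 0:
                return 0  # square factor
            result = -result
        d += 1
    if n > 1:
        result = -result
    return result

def count_by_formula(q, n):
    # (1/n) * sum over d | n of mu(n/d) * q^d
    return sum(mobius(n // d) * q**d for d in range(1, n + 1) if n % d == 0) // n

def gf2_mod(a, b):
    # remainder of polynomial a modulo polynomial b over F_2 (ints as bit masks)
    db = b.bit_length() - 1
    while a and a.bit_length() - 1 >= db:
        a ^= b << (a.bit_length() - 1 - db)
    return a

def count_brute_force(n):
    # a monic polynomial of degree n over F_2 is an int in [2^n, 2^(n+1));
    # it is irreducible iff no polynomial of degree 1..n//2 divides it
    def irreducible(p):
        return all(gf2_mod(p, d) != 0 for d in range(2, 1 << (n // 2 + 1)))
    return sum(irreducible(p) for p in range(1 << n, 1 << (n + 1)))

counts = {n: (count_by_formula(2, n), count_brute_force(n)) for n in range(2, 7)}
```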
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/288120",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10",
"answer_count": 1,
"answer_id": 0
} |
removable singularity $f(z)$ is analytic on the punctured disc $D(0,1) \setminus \{0\}$ and the real part of $f$ is positive. Prove that $f$ has a removable singularity at $0$.
| Instead of looking at $e^{-f(z)}$, I think it's easier to do the following.
First, assume that $f$ is non-constant (otherwise the problem is trivial).
Let $\phi$ be a conformal mapping (you can write down an explict formula for $\phi$ if you want) from the right half-space onto the unit disc, and let $g(z) = \phi(f(z))$. Then $g$ maps the punctured disc into the unit disc, so in particular $g$ is bounded near $0$, which implies that $g$ must have a removable singularity at $z = 0$. Also, by the open mapping theorem $|g(0)| < 1$.
On the other hand,
$$f(z) = \phi^{-1}(g(z))$$
and since $|g(0)| < 1$ and $\phi^{-1}$ is continuous on the open unit disc, the limit $\lim_{z\to 0} f(z) = \phi^{-1}(g(0))$ also exists, which means that $0$ is a removable singularity for $f$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/288182",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 1,
"answer_id": 0
} |
Convergence of series $\sum_{n=1}^\infty \ln\left(\frac{2n+7}{2n+1}\right)$? I have the series
$$\sum\limits_{n=1}^\infty \ln\left(\frac{2n+7}{2n+1}\right)$$
I'm trying to determine whether the series converges and, if so, to find its sum.
I have done the ratio and root test but It seems it is inconclusive.
How can I find if the series converges?
| Hint: Note that $$\lim_{n\to+\infty}\frac{\ln\left(\frac{2n+7}{2n+1}\right)}{n^{-1}}=3\neq0,$$ so by the limit comparison test with the divergent harmonic series $\sum \frac1n$, the series diverges.
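As a numerical aside (the closed form below is not part of the hint; it follows from writing each term as $\ln(2n+7)-\ln(2n+1)$ and cancelling): the partial sums telescope, which makes the divergence explicit.

```python
import math

def partial_sum(N):
    return sum(math.log((2 * n + 7) / (2 * n + 1)) for n in range(1, N + 1))

def telescoped(N):
    # writing each term as ln(2n+7) - ln(2n+1), all factors cancel except
    # three at each end, leaving ln((2N+3)(2N+5)(2N+7) / (3*5*7))
    return math.log((2 * N + 3) * (2 * N + 5) * (2 * N + 7) / 105)

# partial sums grow like 3*ln(2N), so the series diverges
```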
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/288261",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 5,
"answer_id": 3
} |
Eigenvalues of a $4\times4$ matrix I want to find the eigenvalues of the matrix
$$
\left[
\begin{array}{cccc}
0 & 0 & 0 & 0 \\
0 & a & a & 0 \\
0 & a & a & 0 \\
0 & 0 & 0 & b
\end{array}
\right]
$$
Can somebody explain me the theory behind getting the eigenvalues of this $4\times4$ matrix? The way I see it is to take the $4$ $a$'s as a matrix itself and see the big matrix as a diagonal one. The problem is, I can't really justify this strategy.
Here are the eigenvalues from Wolfram Alpha.
Thanks
| The eigenvalues of $A$ are the roots of the characteristic polynomial $p(\lambda)=\det (\lambda I -A)$.
In this case, the matrix $\lambda I-A$ is made of three blocks along the diagonal.
Namely $(\lambda)$, $\left(\begin{matrix} \lambda-a & -a \\ -a & \lambda -a \end{matrix}\right)$, and $(\lambda -b)$.
The determinant is therefore equal to the product of the determinants of these three matrices.
So you find:
$$
p(\lambda)=\lambda\cdot \lambda(\lambda-2a)\cdot (\lambda -b).
$$
Now you see that your eigenvalues are $0$ (with multiplicity $2$), $2a$, and $b$.
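A quick numeric sketch of the block reasoning, with arbitrary sample values $a=3$, $b=5$: each claimed eigenvalue comes with an eigenvector supported on its block.

```python
def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

a, b = 3, 5  # arbitrary sample values
A = [[0, 0, 0, 0],
     [0, a, a, 0],
     [0, a, a, 0],
     [0, 0, 0, b]]

# (eigenvalue, eigenvector) pairs predicted by the block computation
pairs = [
    (0,     [1, 0, 0, 0]),   # first 1x1 block
    (2 * a, [0, 1, 1, 0]),   # middle 2x2 block, symmetric vector
    (0,     [0, 1, -1, 0]),  # middle 2x2 block, antisymmetric vector
    (b,     [0, 0, 0, 1]),   # last 1x1 block
]
checks = [matvec(A, v) == [lam * x for x in v] for lam, v in pairs]
```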
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/288312",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Equation in the real world Does a quadratic equation like $x^2 - ax + y = 0$ describe anything in the real world? (I want to know if there is something, in the same way that $x^2$ describes a square.)
| Though not exactly the same, depending upon the value of $a$, the following situations count as relevant.
For a deeper explanation, see Wikipedia.
*
*Bernoulli's principle. This relates the velocity of a fluid ($u$), the pressure ($P$), the density ($\rho$), the gravitational acceleration ($g$) and the height ($h$): $$\frac{u^2}{2g}+\frac{P}{\rho g}+h=\text{constant}$$
*The Mandelbrot set has the recursive equation $$z_{n+1}=z_n^2+c, \quad z_n, c\in\mathbb{C}$$ which is interesting and creates fractals of the kind that appear in nature.
*The discrete logistic equation is a quadratic recurrence which surprisingly generates chaos. This is one way population growth (be it of bacteria or humans) is modelled:
$$x_{n+1}=\mu x_n(1-x_n)$$
*Schrödinger's equation
*Motion of a projectile, as already mentioned
ad infinitum
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/288388",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 1
} |
Questions related to nilpotent and idempotent matrices I have several questions on an assignment that I just can't seem to figure out.
1) Let $A$ be $2\times 2$ matrix. $A$ is nilpotent if $A^2=0$. Find all symmetric $2\times 2$ nilpotent matrices.
It is symmetric, meaning the matrix $A$ should look like $A=\begin{bmatrix} a & b \\ b & c\end{bmatrix}$. Thus, by working out $A^2$ I find that
$a^2 + b^2 = 0$ and
$ab + bc = 0$.
This tells me that $a^2 = - b^2$ and $a = -c$.
I'm not sure how to progress from here.
2)Suppose $A$ is a nilpotent $2\times 2$ matrix and let $B = 2A$ - I. Express $B^2$ in terms of $B$ and $I$. Show that $B$ is invertible and find $B$ inverse.
To find $B^2$ can I simply do $(2A -I)(2A - I)$ and expand as I would regular numbers?
This should give $4A^2 - 4A + I^2$.
Using the fact that $A^2$ is zero and $I^2$ returns $I$, the result is $I - 4A$. From here do I simply use the original expression to form an equation for $A$ in terms of $B$ and $I$ and substitute it in? Unless I am mistaken $4A$ cannot be treated as $2A^2$ and simplified to a zero matrix.
3) We say that a matrix $A$ is an idempotent matrix if $A^2 = A$. Prove that an idempotent matrix $A$ is invertible if and only if $A = I$.
I have no idea how to begin on this one.
4) Suppose that $A$ and $B$ are idempotent matrices such that $A+B$ is idempotent, prove that $AB = BA = 0$.
Again, I don't really have any idea how to begin on this one.
| For #1, you should also have $b^2 + c^2 = 0$. If you're working over the real numbers, note that the square of a real number is always $\ge 0$, and is $0$ only if the number is $0$.
If complex numbers are allowed, you could have $a = -c = \pm i b$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/288456",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 1
} |
Combinatorial interpretation of the identity: ${n \choose k} = {n \choose n-k}$ What is the combinatorial interpretation of the identity: ${n \choose k} = {n \choose n-k}$?
Proving this algebraically is trivial, but what exactly is the "symmetry" here. Could someone give me some sort of example to help my understanding?
EDIT: Can someone present a combinatorial proof?
| $n \choose k$ denotes the number of ways of picking $k$ objects out of $n$ objects, and specifying the $k$ objects that are picked is equivalent to specifying the $n-k$ objects that are not picked.
To put it differently, suppose you have $n$ objects, and you want to partition them into two sets: a set $A$ of size $k$, and a set $B$ of size $n-k$. If you pick which objects go into set $A$, the number of ways of doing so is denoted $n \choose k$, and if you (equivalently!) pick which objects go into set $B$, the number of ways is denoted $n \choose n-k$.
The point here is that the binomial coefficient $n \choose k$ denotes the number of ways partitioning $n$ objects into two sets one of size $k$ and one of size $n-k$, and is thus a special case of the multinomial coefficient $${n \choose k_1, k_2, \dots k_m} \quad \text{where $k_1 + k_2 + \dots k_m = n$}$$
which denotes the number of ways of partitioning $n$ objects into $m$ sets, one of size $k_1$, one of size $k_2$, etc.
Thus ${n \choose k}$ can also be written as ${n \choose k,n-k}$, and when written in this style, the symmetry is apparent in the notation itself:
$${n \choose k} = {n \choose k, n-k} = {n \choose n-k, k} = {n \choose n-k}$$
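The bijection behind this answer (complementation) can be sketched concretely for, say, $n=7$, $k=3$:

```python
from itertools import combinations
from math import comb

n, k = 7, 3
universe = set(range(n))

k_subsets = [set(c) for c in combinations(universe, k)]
nk_subsets = [set(c) for c in combinations(universe, n - k)]
# taking complements is the bijection behind the symmetry
complements = [universe - s for s in k_subsets]
```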
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/288546",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 1
} |
Question on limit: $\lim_{x\to 0}\large \frac{\sin^2{x^{2}}}{x^{2}}$ How would I evaluate the following trig limit?
$$\lim_{x\to 0}\frac{\sin^2{x^{2}}}{x^{2}}$$
I am thinking the limit would be zero but I am not sure.
| We use $$\sin^2 x =\frac{1-\cos 2x}{2}$$
$$\lim_{x\to 0}\frac{\sin^2 x^2}{x^2}=\lim_{x\to 0} \frac {1-\cos 2x^2}{2x^2}=0,$$
since $1-\cos u\sim \frac{u^2}{2}$ as $u\to 0$, so the numerator is of order $x^4$ while the denominator is only of order $x^2$.
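A quick numeric sketch (sample points chosen arbitrarily): since $\sin u\sim u$ near $0$, the quotient behaves like $x^2$ and shrinks to $0$.

```python
import math

def f(x):
    return math.sin(x**2) ** 2 / x**2

# sin(u) ~ u near u = 0, so f(x) ~ x**2, which tends to 0
samples = [f(10.0**-k) for k in range(1, 6)]
```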
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/288601",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 5,
"answer_id": 4
} |
Find infinitely many pairs of integers $a$ and $b$ with $1 < a < b$, so that $ab$ exactly divides $a^2 +b^2 −1$. So I came up with $b= a+1$ $\Rightarrow$ $ab=a(a+1) = a^2 + a$
So that:
$a^2+b^2 -1$ = $a^2 + (a+1)^2 -1$ = $2a^2 + 2a$ = $2(a^2 + a)$ $\Rightarrow$
$(a,b) = (a,a+1)$ are solutions.
My motivation is for this follow up question:
(b) With $a$ and $b$ as above, what are the possible values of:
$$
\frac{a^2 +b^2 −1}{ab}
$$
Update
With Will Jagy's computations, it seems that now I must show that the ratio can be any natural number $m\ge 2$, by the proof technique of vieta jumping.
Update
Via Coffeemath's answer, the proof is rather elementary and does not require such technique.
| $(3,8)$ is a possible solution.
This gives us 24 divides 72, and a value of 3 for (b).
Have you considered that if $ab$ divides $a^2+b^2-1$, then $ab$ also divides $a^2 + b^2 -1 + 2ab$?
This gives us $ab$ divides $(a+b+1)(a+b-1)$.
Subsequently, the question might become easier to work with.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/288672",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
} |
Finding the derivative of an integral $$g(x) = \int_{2x}^{6x} \frac{u+2}{u-4}du $$
For finding $g'(x)$, would I first need to find the derivative of $\frac{u+2}{u-4}$,
then replace $u$ with $6x$ and $2x$ and add the two results?
(The $2x$ limit would have to flip, so that whole term is negative.)
If the previous statement is true, would the final answer be the following:
$$ \frac{6}{(2x-4)^2} - \frac{6}{(6x-4)^2}$$
| Let $f(u)=\frac{u+2}{u-4}$, and let $F(u)$ be the antiderivative of $f(u)$. Then
$$
g'(x)=\frac{d}{dx}\int_{2x}^{6x}f(u)du=\frac{d}{dx}\left(F(u)\bigg\vert_{2x}^{6x}\right)=\frac{d}{dx}[F(6x)-F(2x)]=6F'(6x)-2F'(2x)
$$
But $F'(u)=f(u)$. So the above evaluates to
$$
6f(6x)-2f(2x)=6\frac{6x+2}{6x-4}-2\frac{2x+2}{2x-4}=\cdots.
$$
In general,
$$
\frac{d}{dx}\int_{a(x)}^{b(x)}f(u)du=f(b(x))\cdot b'(x)-f(a(x))\cdot a'(x).
$$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/288727",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
How many edges? We have a graph with $n>100$ vertices. For any two adjacent vertices, it is known that the degree of at least one of them is at most $10$ $(\leq10)$. What is the maximum number of edges in this graph?
| Let $A$ be the set of vertices with degree at most 10, and $B$ be the set of vertices with degree at least 11. By assumption, vertices of $B$ are not adjacent to each other. Hence the total number of edges $|E|$ in the graph is equal to the sum of degrees of all vertices in $A$ minus the number of edges connecting two vertices in $A$. Hence $|E|$ is maximized when
*
*the size of the set $A$ is maximized,
*the degree of each vertex in $A$ is maximized, and
*the number of edges connecting two vertices in $A$ is minimized.
This means $|E|$ is maximized when the graph is a fully connected bipartite graph, where $|A|=n-10$ and $|B|=10$. The total number of edges of this graph is $10(n-10)$.
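A concrete sketch for $n=101$: build the complete bipartite graph $K_{10,\,n-10}$ and check both the degree condition and the edge count.

```python
n = 101
A = range(n - 10)          # low-degree side: n - 10 vertices of degree 10
B = range(n - 10, n)       # high-degree side: 10 vertices of degree n - 10
edges = [(u, v) for u in A for v in B]   # complete bipartite K_{n-10, 10}

degree = {w: 0 for w in range(n)}
for u, v in edges:
    degree[u] += 1
    degree[v] += 1

# every edge touches a vertex of degree <= 10 (its endpoint in A)
condition_ok = all(degree[u] <= 10 or degree[v] <= 10 for u, v in edges)
```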
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/288790",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Continuity of the (real) $\Gamma$ function. Consider the real valued function
$$\Gamma(x)=\int_0^{\infty}t^{x-1}e^{-t}dt$$
where the above integral means the Lebesgue integral with the Lebesgue measure in $\mathbb R$. The domain of the function is $\{x\in\mathbb R\,:\, x>0\}$, and now I'm trying to study the continuity. The function $$t^{x-1}e^{-t}$$
is positive and bounded if $x\in[a,b]$, for $0<a<b$, so using the dominated convergence theorem in $[a,b]$, I have:
$$\lim_{x\to x_0}\Gamma(x)=\lim_{x\to x_0}\int_0^{\infty}t^{x-1}e^{-t}dt=\int_0^{\infty}\lim_{x\to x_0}t^{x-1}e^{-t}dt=\Gamma(x_0)$$
Reassuming $\Gamma$ is continuous in every interval $[a,b]$; so can I conclude that $\Gamma$ is continuous on all its domain?
| You could also try the basic approach by definition.
For any $\,b>0\,\,\,,\,\,\epsilon>0\,$ choose $\,\delta>0\,$ so that $\,|x-x_0|<\delta\Longrightarrow \left|t^{x-1}-t^{x_0-1}\right|<\epsilon\,$ for $t\in[0,b]\,$:
$$\left|\Gamma(x)-\Gamma(x_0)\right|=\left|\lim_{b\to\infty}\int\limits_0^b \left(t^{x-1}-t^{x_0-1}\right)e^{-t}\,dt\right|\leq$$
$$\leq\lim_{b\to\infty}\int\limits_0^b\left|t^{x-1}-t^{x_0-1}\right|e^{-t}\,dt<\epsilon\lim_{b\to\infty}\int\limits_0^b e^{-t}\,dt=\epsilon$$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/288853",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 3,
"answer_id": 0
} |
Sudoku puzzles and propositional logic I am currently reading about how to solve Sudoku puzzles using propositional logic. More specifically, they use the compound statement
$$\bigwedge_{i=1}^{9} \bigwedge_{n=1}^{9} \bigvee_{j=1}^{9}~p(i,j,n)$$
where $p(i,j,n)$ is the proposition that is true when the number
$n$ is in the cell in the $ith$ row and $jth$ column, to denote that every row contains every number. I know that this is what the entire compound statement implies, but I am trying to read each individual statement together. Taking one single case, does
$$\bigwedge_{i=1}^{9} \bigwedge_{n=1}^{9} \bigvee_{j=1}^{9}~p(i,j,n)$$
say that in the first row, the number one will be found in the first column, or second column, or third column, etc?
| Although expressible as propositional logic, for practical solutions, it is computationally more effective to view Sudoku as a Constraint Satisfaction Problem. See Chapter 6 of Russell and Norvig: Artificial Intelligence - A Modern Approach, for example.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/288920",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 2,
"answer_id": 1
} |
Show that $(a+b+c)^3 = a^3 + b^3 + c^3+ (a+b+c)(ab+ac+bc)$ As stated in the title, I'm supposed to show that $(a+b+c)^3 = a^3 + b^3 + c^3 + (a+b+c)(ab+ac+bc)$.
My reasoning:
$$(a + b + c)^3 = [(a + b) + c]^3 = (a + b)^3 + 3(a + b)^2c + 3(a + b)c^2 + c^3$$
$$(a + b + c)^3 = (a^3 + 3a^2b + 3ab^2 + b^3) + 3(a^2 + 2ab + b^2)c + 3(a + b)c^2+ c^3$$
$$(a + b + c)^3 = a^3 + b^3 + c^3 + 3a^2b + 3a^2c + 3ab^2 + 3b^2c + 3ac^2 + 3bc^2 + 6abc$$
$$(a + b + c)^3 = (a^3 + b^3 + c^3) + (3a^2b + 3a^2c + 3abc) + (3ab^2 + 3b^2c + 3abc) + (3ac^2 + 3bc^2 + 3abc) - 3abc$$
$$(a + b + c)^3 = (a^3 + b^3 + c^3) + 3a(ab + ac + bc) + 3b(ab + bc + ac) + 3c(ac + bc + ab) - 3abc$$
$$(a + b + c)^3 = (a^3 + b^3 + c^3) + 3(a + b + c)(ab + ac + bc) - 3abc$$
$$(a + b + c)^3 = (a^3 + b^3 + c^3) + 3[(a + b + c)(ab + ac + bc) - abc]$$
It doesn't look like I made careless mistakes, so I'm wondering if the statement asked is correct at all.
| In general, $$a^n+b^n+c^n = \sum_{i+2j+3k=n} \frac{n}{i+j+k}\binom {i+j+k}{i,j,k} s_1^i(-s_2)^js_3^k$$
where $s_1=a+b+c$, $s_2=ab+ac+bc$ and $s_3=abc$ are the elementary symmetric polynomials.
In the case that $n=3$, the triples possible are $(i,j,k)=(3,0,0),(1,1,0),$ and $(0,0,1)$ yielding the formula:
$$a^3+b^3+c^3 = s_1^3 - 3s_2s_1 + 3s_3$$
which is the result you got.
In general, any symmetric homogeneous polynomial $p(a,b,c)$ of degree $n$ can be written in the form:
$$p(a,b,c)=\sum_{i+2j+3k=n} a_{i,j,k} s_1^i s_2^j s_3^k$$
for some constants $a_{i,j,k}$.
If you don’t know the first statement, you can deduce the values $a_{i,j,k}$ by solving linear equations.
This is because the triples $(i,j,k)$ are limited to $(3,0,0), (1,1,0), (0,0,1)$. So if:
$$a^3+b^3+c^3=a_{3,0,0}(a+b+c)^3+a_{1,1,0}(ab+ac+bc)(a+b+c)+a_{0,0,1}abc$$
Then try it for specific values of $(a,b,c).$ For example, when $(a,b,c)=(1,0,0),$ you get:
$$1=a_{3,0,0}\cdot 1+a_{1,1,0}\cdot 0+a_{0,0,1}\cdot 0.$$
Try $(a,b,c)=(1,1,0)$ and $(1,1,1).$
I've often thought Fermat's Last Theorem was most interesting when stated as a question about these polynomials. One statement of Fermat can be written as:
If $p$ is an odd prime, then $a^p+b^p+c^p=0$ if and only if $a+b+c=0$ and $abc=0$.
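A quick brute-force check of the $n=3$ identity $a^3+b^3+c^3=s_1^3-3s_2s_1+3s_3$ over a grid of integer triples:

```python
from itertools import product

def power_sum_identity_holds(a, b, c):
    s1 = a + b + c
    s2 = a*b + a*c + b*c
    s3 = a*b*c
    return a**3 + b**3 + c**3 == s1**3 - 3*s2*s1 + 3*s3

checks = [power_sum_identity_holds(a, b, c)
          for a, b, c in product(range(-4, 5), repeat=3)]
```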
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/288965",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "17",
"answer_count": 3,
"answer_id": 1
} |
Prove about a right triangle How to prove (using vector methods) that the midpoint of the hypotenuse of a right triangle is equidistant from the three vertices.
Defining the right triangle as the one formed by $\vec{v}$ and $\vec{w}$ with hypotenuse $\vec{v} - \vec{w}$, this implies we must prove that $||\frac{1}{2}(\vec{v}+\vec{w})|| = ||\frac{1}{2}(\vec{v}-\vec{w})||$.
The only thing that came to my mind is to expand like this:
\begin{align}
& {}\qquad \left\|\frac{1}{2}(\vec{v}+\vec{w})\right\| = \left\|\frac{1}{2}(\vec{v}-\vec{w})\right\| \\[10pt]
& =\left|\frac{1}{2}\right|\left\|(\vec{v}+\vec{w})\right\| = \left|\frac{1}{2}\right|\left\|(\vec{v}-\vec{w})\right\| \\[10pt]
& =\left\|(\vec{v}+\vec{w})\right\| = \left\|(\vec{v}-\vec{w})\right\| \\[10pt]
& =\left\|(v_1+w_1, v_2+w_2,\ldots,v_n+w_n)\right\| = \left\|(v_1-w_1, v_2-w_2,\ldots,v_n-w_n)\right\| \\[10pt]
& =\sqrt{(v_1+w_1)^2+ (v_2+w_2)^2+\cdots+(v_n+w_n)^2} \\[10pt]
& = \sqrt{(v_1-w_1)^2+ (v_2-w_2)^2+\cdots+(v_n-w_n)^2}
\end{align}
And now I get stuck. Why? Because expanding it more would result in
$$v_1^2+2v_1w_1+w_1^2 \cdots = v_1^2-2v_1w_1+w_1^2 \cdots$$
And there's obviously a minus sign there.
I know I'm making a mistake somewhere but I can't get further. Any help?
| If $\vec u$ and $\vec v$ are orthogonal, then $\vec u\cdot \vec v=0$, so the cross terms $\pm 2\,\vec u\cdot \vec v$ vanish and
$||\vec u+\vec v||^2 = ||\vec u-\vec v||^2 = ||\vec u||^2+||\vec v||^2$
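A tiny numeric sketch with an arbitrarily chosen orthogonal pair:

```python
def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

u = (3.0, 1.0, 2.0)
v = (-2.0, 4.0, 1.0)   # chosen so that u . v = -6 + 4 + 2 = 0

plus = [x + y for x, y in zip(u, v)]
minus = [x - y for x, y in zip(u, v)]

norm_sq_plus = dot(plus, plus)
norm_sq_minus = dot(minus, minus)
pythagoras = dot(u, u) + dot(v, v)
```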
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/289021",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
} |
Leibniz Alternating Series Test Can someone help me find a Leibniz Series (alternating sum) that converges to $5$ ?
Does such a series even exist?
Thanks in advance!!!
I've tried looking at a series of the form $ \sum _ 1 ^\infty (-1)^{n} q^n $ which is a geometric series ... But I get $q>1 $ , which is impossible... Does someone have an idea?
| Take any Leibniz (alternating) sequence $x_n$ whose series converges to some nonzero value, say to $c$, and then consider the sequence $\left(\frac5c\cdot x_n\right)_n$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/289093",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 1
} |
Calculate $20^{1234567} \mod 251$ I need to calculate the following
$$20^{1234567} \mod 251$$
I am struggling with that because $251$ is a prime number, so I can't simplify anything and I don't have a clue how to go on. Moreover how do I figure out the period of $[20]_{251}$? Any suggestions, remarks, nudges in the right direction are very appreciated.
| If you do not know the little theorem, a painful but -I think- still plausible method is to observe that $2^{10} = 1024 \equiv 20$, and $10^3 = 1000 \equiv -4$. Then, we may proceed like this:
$20^{1234567} = 2^{1234567}10^{1234567} = 2^{123456\times 10 + 7}10^{411522\times 3 + 1} = 1280\times 1024^{123456}1000^{411522} \equiv 1280\times 20^{123456}4^{411522}$, where $(-4)^{411522}=4^{411522}$ because the exponent is even.
Observe that after one pass, we are still left with the powers of $2$ and the powers of $20$ that we can handle with the same equivalences above.
We still have to make some calculations obviously (divisions etc), but it is at least not as hopeless as before.
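For comparison with the hand computation (a sketch, not part of the method above): since $251$ is prime and does not divide $20$, Fermat's little theorem gives $20^{250}\equiv1\pmod{251}$, so the exponent may be reduced modulo $250$; Python's three-argument `pow` confirms this.

```python
p, base, e = 251, 20, 1234567

full = pow(base, e, p)                # fast built-in modular exponentiation
reduced = pow(base, e % (p - 1), p)   # Fermat: base^(p-1) = 1 (mod p) for prime p not dividing base
```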
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/289137",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
} |
Geometric Distribution $P(X\ge Y)$ I need to show that if $X$ and $Y$ are iid and geometrically distributed then $P(X\ge Y)$ is $\frac{1}{2-p}$. The joint pmf is $f_{XY}(x,y)=p^2(1-p)^{x+y}$, and I think the only way to do this is to use a double sum: $\sum_{y=0}^{n}\sum_{x=y}^m p^2(1-p)^{x+y}$, which leads to me getting quite stuck. Any suggestions?
| It is easier to use symmetry:
$$
1 = \mathbb{P}\left(X<Y\right) +\mathbb{P}\left(X=Y\right) + \mathbb{P}\left(X>Y\right)
$$
The first and the last probability are the same, due to the symmetry, since $X$ and $Y$ are iid. Thus:
$$
\mathbb{P}\left(X<Y\right) = \frac{1}{2} \left(1 - \mathbb{P}\left(X=Y\right) \right)
$$
Thus:
$$
\mathbb{P}\left(X\geqslant Y\right) = \mathbb{P}\left(X>Y\right) + \mathbb{P}\left(X=Y\right) = \frac{1}{2} \left(1 + \mathbb{P}\left(X=Y\right) \right)
$$
The probability of $X=Y$ is easy:
$$
\mathbb{P}\left(X=Y\right) = \sum_{n=0}^\infty \mathbb{P}\left(X=n\right) \mathbb{P}\left(Y=n\right) = \sum_{n=0}^\infty p^2 (1-p)^{2n} = \frac{p^2}{1-(1-p)^2} = \frac{p}{2-p}
$$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/289212",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 0
} |
Proof of equivalence theorem about left invertible matrices I am taking a course in Matrix Theory and we have a theorem that states (among other things) that:
The following conditions on the matrix $A$ of size $m \times n$ are equivalent:
(1) A has left inverse
(2) The system $Ax=b$ has at most one solution for any column vector $b$.
...
The proof that (1) $\implies$ (2) goes like this:
If $Ax=b$ and $V$ is a left inverse, then $VAx=Vb \implies x=Vb$, so we have at most one solution (if any).
The thing is, left inverses are not unique right? Take
$A =
\left(
\begin{matrix}
1 \\ 0
\end{matrix}
\right)$
That has left inverses
$V_1=
\left(
\begin{matrix}
1 & 0
\end{matrix}
\right)
$
and
$ V_2 =
\left(
\begin{matrix}
1 & 1
\end{matrix}
\right)$
Does this mean that the proof is wrong or am I missing something?
| Existence of a left inverse means $A$ is 1-1: $Ax_1 = Ax_2$ implies $VAx_1 = VAx_2$, i.e., $x_1=x_2$. So a solution, if it exists, must be unique. Note that the left inverse indeed need not be unique, but the argument only requires some left inverse: whichever $V$ you pick, any solution of $Ax=b$ must equal $Vb$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/289266",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 1
} |
Product of pairwise coprime integers divides $b$ if each integer divides $b$ Let $a_1....a_n$ be pairwise coprime. That is $gcd(a_i, a_k) = 1$ for distinct $i,k$, I would like to show that if each $a_i$ divides $b$ then so does the product.
I can understand intuitively why it's true - just not sure how to formulate the proof exactly.
I want to say if we consider the prime factorizations of each $a_i$, then no two prime factorizations share any prime numbers. So the product of $a_1...a_n$ must appear in the prime factorization of $b$. Is this correct? Or at least if if the idea is correct, any way to formulate it more clearly?
| Use unique prime factorization to write each $a_i$ as $$a_i = \prod_{j=1}^k p_j^{\alpha_{ij}},$$ where $p_1,\dots,p_k$ number all prime factors appearing across the $a_i$, with $\alpha_{ij} = 0$ when $p_j$ is not a factor of $a_i$. In other words, $k$ is the same number for each $a_i$. Since the $a_i$ are pairwise coprime, for each $j$ at most one of the exponents $\alpha_{1j},\dots,\alpha_{nj}$ is nonzero; call that one $\beta_j$. Each prime power $p_j^{\beta_j}$ divides some $a_i$ and hence divides $b$, so by unique factorization $b$ is of the form $$b = \prod_{j=1}^k p_j^{\beta_j} \cdot d = \prod_{i=1}^n a_i \cdot d,$$
for some integer $d$. This was to be shown.
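A brute-force sketch of the statement, which also shows why pairwise coprimality matters (the non-coprime pair $(4,6)$ both divide $12$, but their product $24$ does not):

```python
from math import gcd, prod
from itertools import combinations

def pairwise_coprime(nums):
    return all(gcd(x, y) == 1 for x, y in combinations(nums, 2))

def claim_holds(nums, limit):
    # if every a_i divides b, so does the product
    P = prod(nums)
    return all(b % P == 0
               for b in range(1, limit)
               if all(b % a == 0 for a in nums))

coprime_ok = pairwise_coprime((3, 4, 5)) and claim_holds((3, 4, 5), 2000)
# counterexample without coprimality: 4 and 6 both divide 12, but 24 does not
non_coprime_fails = not claim_holds((4, 6), 100)
```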
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/289337",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9",
"answer_count": 3,
"answer_id": 2
} |
Another two hard integrals Evaluate:
$$\begin{align}
& \int_{0}^{\frac{\pi }{2}}{\frac{{{\ln }^{2}}\left( 2\cos x \right)}{{{\ln }^{2}}\left( 2\cos x \right)+{{x}^{2}}}}\text{d}x \\
& \int_{0}^{1}{\frac{\arctan \left( {{x}^{3+\sqrt{8}}} \right)}{1+{{x}^{2}}}}\text{d}x \\
\end{align}$$
| For the second integral, consider the more general form
$$\int_0^1 dx \: \frac{\arctan{x^{\alpha}}}{1+x^2}$$
(I do not understand what is special about $3+\sqrt{8}$.)
Taylor expand the denominator and get
$$\begin{align} &=\int_0^1 dx \: \arctan{x^{\alpha}} \sum_{k=0}^{\infty} (-1)^k x^{2 k} \\ &= \sum_{k=0}^{\infty} (-1)^k \int_0^1 dx \: x^{2 k} \arctan{x^{\alpha}} \end{align}$$
Now we can simply evaluate these integrals in terms of polygamma functions:
$$\int_0^1 dx \: x^{2 k} \arctan{x^{\alpha}} = \frac{\psi\left(\frac{\alpha+2 k+1}{4 \alpha}\right)-\psi\left(\frac{3 \alpha+2 k+1}{4 \alpha}\right)+\pi }{8 k+4}$$
where
$$\psi(z) = \frac{d}{dz} \log{\Gamma{(z)}}$$
and we get that
$$\int_0^1 dx \: \frac{ \arctan{x^{\alpha}}}{1+x^2} = \frac{\pi^2}{16} - \frac{1}{4} \sum_{k=0}^{\infty} (-1)^k \frac{\psi\left(\frac{3 \alpha+2 k+1}{4 \alpha}\right)-\psi\left(\frac{\alpha+2 k+1}{4 \alpha}\right) }{2 k+1} $$
This is about as close as I can get. The sum agrees with the numerical integration out to 6 sig figs at about $10,000$ terms for $\alpha = 3+\sqrt{8}$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/289421",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Show that $f = 0$ if $\int_a^b f(x)e^{kx}dx=0$ for all $k$ The problem is to show that $f=0$ whenever $f\in C[a,b]$ and
$$\int_a^bf(x)e^{kx}dx =0, \hspace{1cm}\forall k\in\mathbb{N}.$$
Can someone help me?
Thank you!
| First, letting $u=e^x$ we note that $\int_{e^a}^{e^b}f(\ln u)u^{k-1}du=0$ for all $k\in\mathbb N$.
Next, see the following old questions:
Nonzero $f \in C([0, 1])$ for which $\int_0^1 f(x)x^n dx = 0$ for all $n$
If $f$ is continuous on $[a , b]$ and $\int_a^b f(x) p(x)dx = 0$ then $f = 0$
problem on definite integral
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/289480",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Proof of a comparison inequality I'm working on a problem that's been giving me some difficulty. I will list it below and show what work I've done so far:
If a, b, c, and d are all greater than zero and $\frac{a}{b} < \frac{c}{d}$, prove that $\frac{a}{b} < \frac{a + c}{b + d} < \frac{c}{d}$.
Alright, so far I think the first step is to notice that $\frac{a}{b+d} + \frac{c}{b+d} < \frac{a}{b} + \frac{c}{d}$ but after that I'm not sure how to continue. Any assistance would be a great help!
| As $$\frac ab<\frac cd\implies ad<bc\text { as } a,b,c,d>0$$
$$\frac{a+c}{b+d}-\frac ab=\frac{b(a+c)-a(b+d)}{b(b+d)}=\frac{bc-ad}{b(b+d)}>0 \text{ as } ad<bc \text{ and } a,b,c,d>0 $$
So, $$\frac{a+c}{b+d}>\frac ab$$
Similarly, $$\frac{a+c}{b+d}-\frac cd=\frac{ad-bc}{d(b+d)}<0\implies \frac{a+c}{b+d}<\frac cd$$
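A quick check of the mediant inequality over a grid of small positive integers, using exact rational arithmetic:

```python
from fractions import Fraction
from itertools import product

def mediant_between(a, b, c, d):
    lo, mid, hi = Fraction(a, b), Fraction(a + c, b + d), Fraction(c, d)
    return lo < mid < hi

checks = [mediant_between(a, b, c, d)
          for a, b, c, d in product(range(1, 8), repeat=4)
          if Fraction(a, b) < Fraction(c, d)]
```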
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/289549",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Average of function, function of average I'm trying to find all functions $f : \mathbb{R} \to \mathbb{R}$ such that, for all $n > 1$ and all $x_1, x_2, \cdots, x_n \in \mathbb{R}$:
$$\frac{1}{n} \sum_{t = 1}^n f(x_t) = f \left ( \frac{1}{n} \sum_{t = 1}^n x_t \right )$$
My intuition is that this is only true if $f$ is a linear function. I started from the relation:
$$\sum_{t = 1}^n f(x_t) = f \left ( \sum_{t = 1}^n x_t \right )$$
That is, $f$ is additive. Then, by multiplying by $\frac{1}{n}$ each side, we obtain:
$$\frac{1}{n} \sum_{t = 1}^n f(x_t) = \frac{1}{n} f \left ( \sum_{t = 1}^n x_t \right )$$
And hence, any $f$ which has the property that $f(ax) = a f(x)$ (a linear function) will work. And since all linear functions are trivially additive, any linear function is a solution to my relation.
But all I have done is prove that linear functions are solutions, how should I go about showing that only linear equations are solutions? Is it valid to just "go backwards" in my argument, proving that if $f$ is a solution, then it must be linear? I feel it is not sufficient, since I only have implication and not equivalence. How do I proceed?
I think the $\frac{1}{n}$ term was added to confuse me, since without it, this would be straightforward.
| What about a function $f$ such that
$f(x+y)=f(x)+f(y)$ $\ \ \ \ \ \ x,y\in \mathbb{R}$
$f(a x)=af(x)$ $ \ \ \ \ \ a\in \mathbb{Q}, x\in \mathbb{R}$?
This kind of function does not have to be continuous.
(It can be so wild that it does not even have to be measurable. And I guess (not sure at all) that if it is measurable, then it is a linear function (in the usual sense) almost everywhere.)
According to Wikipedia, http://en.wikipedia.org/wiki/Harmonic_function#The_mean_value_property
all locally integrable continuous functions that satisfy the mean value property (in 1D this means $\frac{1}{2}(f(x)+f(y)) = f(\frac{x+y}{2})$) are infinitely differentiable and harmonic.
So I guess this answers your question.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/289627",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
} |
how to solve this gamma function I know that $\Gamma (\frac {1}{2})=\sqrt \pi$, but I do not understand how to evaluate these expressions:
$$\Gamma (m+\frac {1}{2})$$
$$\Gamma (-m+\frac {1}{2})$$
Is there a general relation for evaluating them?
For example:
$\Gamma (1+\frac {1}{2})$
$\Gamma (-2+\frac {1}{2})$
| By the functional equation $$\Gamma(z+1)=z \, \Gamma(z)$$ (which is easily proved for the integral definition of $\Gamma$ by integration by parts, and may be used to analytically continue the function to negative non-integer arguments) we find $$\Gamma(1 + \frac{1}{2}) = \frac{1}{2} \Gamma(\frac{1}{2})$$ and, reading the equation as $\Gamma(z)=\frac{\Gamma(z+1)}{z}$, $$\Gamma (-2+\frac {1}{2}) = \frac{\Gamma(-1+\frac{1}{2})}{-2+\frac{1}{2}} = \frac{\Gamma(\frac{1}{2})}{\left(-\frac{3}{2}\right)\left(-\frac{1}{2}\right)} = \frac{4}{3}\sqrt{\pi}$$
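These values can be checked numerically with `math.gamma` (note $\Gamma(-\tfrac32)=\Gamma(\tfrac12)\big/\big((-\tfrac32)(-\tfrac12)\big)=\tfrac43\sqrt\pi$):

```python
import math

sqrt_pi = math.sqrt(math.pi)

gamma_3_2 = math.gamma(1.5)     # Gamma(1 + 1/2) = (1/2) Gamma(1/2)
gamma_m3_2 = math.gamma(-1.5)   # Gamma(-2 + 1/2), via Gamma(z) = Gamma(z+1)/z applied twice
```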
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/289721",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 0
} |
Ring theorem and isomorphic I have a problem, as follows:
Let $S = \left\{\begin{bmatrix}
a & 0 \\
0 & a \\ \end{bmatrix} | a \in R\right\}$, where $R$ is the set of real numbers. Then $S$ is a ring under matrix addition and multiplication. Prove that $R$ is isomorphic to $S$.
What is the key to proving it? The definition of a ring? But I have no idea how to connect the structure of a ring to the real numbers.
| Hint: Identify $a$ with $\begin{bmatrix} a & 0 \\ 0 & a \end{bmatrix}$ for each $a$ in $R$; that is, check that the map $a \mapsto aI_2$ is a bijective ring homomorphism.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/289785",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Cofactor theorem Shilov on page 12 says determinant $D=a_{1j}A_{1j}+a_{2j}A_{2j}+...+a_{nj}A_{nj}...(I)$ is an identity in the quantities $a_{1j}, a_{2j},..., a_{nj}$. Therefore it remains valid if we replace $a_{ij} (i = 1, 2,. . . , n)$ by any other quantities. The quantities $A_{1j}, A_{2j},..., A_{nj}$ remain unchanged when such a replacement is made, since they do not depend on the elements $a_{ij}$.
Quantity $A_{ij}$ is called the cofactor of the element $a_{ij}$ of the determinant $D$.
My question: from equation (I) we see that each $\Large A$ is multiplied by a specific $\Large a$, not just any $\Large a$; how, then, did he reach the above conclusion?
| The identity (I) uses the elements $a_{1j}, a_{2j} \ldots$, which of course for the matrix have specific values. What the statement means is that if those values, and only those values, are changed in the matrix, the new determinant is valid for the new matrix.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/289845",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 1
} |
socles of semiperfect rings For readers' benefit, a few definitions for a ring $R$.
The left (right) socle of $R$ is the sum of all minimal left (right) ideals of $R$. It may happen that it is zero if no minimals exist.
A ring is semiperfect if all finitely generated modules have projective covers.
Is there a semiperfect ring with zero left socle and nonzero right socle?
Someone asked me this recently, and nothing sprang to mind either way. In any case, I'm interested in a construction method that is amenable for creating imbalanced socles like this.
If you happen to know the answer when semiperfect is strengthened to be 'some side perfect' or 'semiprimary' or 'some side Artinian', then please include it as a comment. (Of course, a ring will have a nonzero socle on a side on which it is Artinian.)
| Here is an example due to Bass of a left perfect ring that is not right perfect. See Lam's First Course, Example 23.22 for an exposition (which is opposite to the one I use below).
Let $k$ be a field, and let $S$ be the ring of (say) column-finite $\mathbb{N} \times \mathbb{N}$-matrices over $k$. Let $E_{ij}$ denote the usual "matrix units." Consider $J = \operatorname{Span}\{E_{ij} : i > j\}$, the set of lower-triangular matrices in $S$ that have only finitely many nonzero entries, all below the diagonal.
Let $R = k \cdot 1 + J \subseteq S$. Then $R$ is a local ring with radical $J$, and it's shown to be left perfect in the reference above. As I mentioned in my comment above, this means that $R$ is semiperfect and has nonzero right socle.
I claim that $R$ has zero left socle. Certainly any minimal left ideal will lie inside the unique maximal left ideal $J$. It suffices to show that every $a \in J \setminus \{0\}$ has left annihilator $ann_l(a) \neq J$. For then $Ra \cong R/ann_l(a) \not \cong R/J = k$. Indeed, if $a \in J$ then $a = \sum c_{ij} E_{ij}$ for some scalars $c_{ij}$ that are almost all zero. Let $r$ be maximal such that there exists $c_{rj} \neq 0$. Then there exist finitely many $j_1 < \cdots < j_s$ such that $c_{r j_p} \neq 0$. It follows that $E_{r+1,r} \in J$ with $$E_{r+1,r}a = E_{r+1,r} \sum c_{ij} E_{ij} = \sum c_{ij} E_{r+1,r} E_{ij} = \sum_p c_{r j_p} E_{r+1,j_p} \neq 0.$$ In particular, $E_{r+1,r} \in J \setminus ann_l(a)$ as desired.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/289909",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
} |
Not following what's happening with the exponents in this proof by mathematical induction. I'm not understanding what's happening in this proof. I understand induction, but not why $2^{k+1}=2*2^{k}$, and how that then equals $k^{2}+k^{2}$. Actually, I really don't follow any of the induction step - what's going on with that algebra? Thanks!
Let $P(n)$ be the statement
$2^n > n^2$, if $n$ is an integer greater than 4.
Basis step:
$~~~~~P(5)$ is true because $2^5 = 32 > 5^2 = 25$
Inductive step:
$~~~~~~$Assume that $P(k)$ is true, i.e. $2^k > k^2$.
We have to prove that $P(k+1)$ is true. Now
$$\begin{aligned}
2^{k+1} & = 2 \cdot 2^k\\
& > 2 \cdot k^2\\
& = k^2 + k^2\\
& > k^2 + 4k\\
& \geq k^2 +2k+1\\
& = (k+1)^2
\end{aligned}$$
$~~~~~$ because $k>4$
$~~~~~$ Therefore $P(k+1)$ is true
Hence from the principle of mathematical induction the given statement is true.
| I have copied the induction step table format from your question and will reformat it as I had to do in my high school geometry class. I find it very helpful to get away from the table as you gave it.
Set $P(n)$ equal to the statement that $2^n>n^2$ for all $n>4$. Note the basis step $2^5>5^2$ is true.
$~~~~~~$Assume that $P(k)$ is true, i.e. $2^k > k^2$.
We have to prove that $P(k+1)$ is true. Now
\begin{array}{ll}
\text{STATEMENT} & \text{REASON} \\
2^{k+1} = 2 \cdot 2^k & \text{Induction step of the definition of } 2^n \\
2^k > k^2 & \text{This is the induction hypothesis.} \\
2 \cdot 2^k > 2 \cdot k^2 = k^2 + k^2 & \text{Multiplying by a positive number preserves inequalities,} \\
 & \text{and the distributive law.} \\
k^2 + k^2 > k^2 + 4k & k>4 \text{ by hypothesis, so } k^2 > 4k, \text{ and adding } k^2 \\
 & \text{to both sides preserves the inequality.} \\
k^2 + 4k \geq k^2 + 2k + 1 & 4k \geq 2k+1 \text{ since } 2k-1>0 \\
k^2 + 2k + 1 = (k+1)^2 & \text{Multiplication distributes over addition.} \\
\end{array}
$~~~~~~$
This shows $P(k)\implies P(k+1)$, and the principle of finite induction completes the proof that $P(n)$ is true for all $n>4$ .
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/289969",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 1
} |
Find all polynomials $\sum_{k=0}^na_kx^k$, where $a_k=\pm2$ or $a_k=\pm1$, and $0\leq k\leq n,1\leq n<\infty$, such that they have only real zeroes
Find all polynomials $\sum_{k=0}^na_kx^k$, where $a_k=\pm2$ or $a_k=\pm1$, and $0\leq k\leq n,1\leq n<\infty$, such that they have only real zeroes.
I've been thinking about this question, but I've come to the conclusion that I don't have the requisite math knowledge to actually answer it.
An additional, less-important question. I'm not sure where this problem is from. Can someone find a source?
edit
I'm sorry, I have one more request. If this can be evaluated computationally, can you show me a pen and paper way to do it?
| Let $\alpha_1,\alpha_2,\ldots,\alpha_n$ be the real roots. We know:
$$\sum \alpha_i^2=\Big( \sum \alpha_i \Big)^2-2\sum_{i<j} \alpha_i\alpha_j= \left(\frac{a_{n-1}}{a_n}\right)^2-2\left(\frac{a_{n-2}}{a_n}\right)\le 2^2+2\cdot 2=8,$$
since $|a_{n-1}/a_n|\le 2$ and $|a_{n-2}/a_n|\le 2$.
On the other hand, by AM-GM inequality:
$$\sum \alpha_i^2\ge n \sqrt[n]{|\prod\alpha_i|^2}=n\sqrt[n]{\left|\frac{a_0}{a_n}\right|^2}\ge n\sqrt[n]{\frac{1}{4}}$$
So $8\ge n \sqrt[n]{\frac{1}{4}} \Rightarrow n\le9$. The rest is finite enough.
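The final bound $8\ge n\sqrt[n]{1/4}$ is easy to check numerically (a quick sketch, not part of the argument):

```python
# The argument above shows every admissible degree n must satisfy
# n * (1/4)**(1/n) <= 8; check which n pass.
admissible = [n for n in range(1, 30) if n * 0.25 ** (1.0 / n) <= 8]
print(admissible)  # n = 1, ..., 9 pass; n >= 10 fail
```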
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/290028",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "18",
"answer_count": 1,
"answer_id": 0
} |
Why doesn't this argument show that every locally compact Hausdorff space is normal? In proving that a locally compact Hausdorff space $X$ is regular, we can consider the one-point compactification $X_\infty$ (this is not necessary, see the answer here, but bear with me). Since $X$ is locally compact Hausdorff, $X_\infty$ is compact Hausdorff. As a result, $X_\infty$ is normal.
Imitating the idea in my proof in the link above and taking into consideration the correction pointed out in the answer, let $A,B\subseteq X$ be two disjoint, closed sets in $X$, and consider $X_\infty$. Since $X_\infty$ is normal, there are disjoint open sets $U,V \subseteq X_\infty$ such that $A\subseteq U$ and $B \subseteq V$. Then considering $X$ as a subset of $X_\infty$, we can take the open sets $U \cap X$ and $V \cap X$ as disjoint, open subsets of $X$ that contain $A$ and $B$, respectively...or so I thought. I see from the answers to this question that this does not succeed in proving that a locally compact Hausdorff space is normal, since this is not true.
So my question is simply: why does the above proof fail?
Thanks.
| $A$ and $B$ are closed in $X$. They need not be closed in the compactification $X_\infty$. You could try to fix this by replacing them with their closures in $X_\infty$, but then these need not be disjoint.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/290095",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 1,
"answer_id": 0
} |
Factorization problem Find $m + n$ if $m$ and $n$ are natural numbers such that: $$\frac {m+n} {m^2+mn+n^2} = \frac {4} {49}\;.$$
My reasoning:
Say: $$m+n = 4k$$ $$m^2+mn+n^2 = 49k$$
It follows:$$(m+n)^2 = (4k)^2 = 16k^2 \Rightarrow m^2+mn+n^2 + mn = 16k^2 \Rightarrow mn = 16k^2 - 49k$$
Since: $$mn\gt0 \Rightarrow 16k^2 - 49k\gt0 \Rightarrow k\gt3$$
Then no more progress.
| Observe that $k$ must be a non-zero integer.
We know that $m, n$ are the roots of the quadratic equation
$$X^2 - 4kX + (16k^2 - 49k) = 0$$
The roots, from the quadratic equation, are
$$ \frac { 4k \pm \sqrt{(4k)^2 - 4(16k^2 - 49k) }} {2} = 2k \pm \sqrt{ 49k - 12k^2}$$
The expression in the square root must be a perfect square.
Try $k = 1$, $49 k - 12k^2 = 37$ is not a perfect square.
Try $k = 2$, $49k - 12k^2 = 50$ is not a perfect square.
Try $k=3$, $49k-12k^2 = 39$ is not a perfect square.
Try $k=4$, $49k-12k^2 = 4$ is a perfect square. This leads to roots 6, 10, which have sum 16.
For $k\geq 5$, $49k - 12k^2 < 0$ has no solution.
For $k \leq -1$, $49k - 12k^2 < 0$ has no solution.
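A brute-force search (a sketch, not part of the original answer) confirms the unique solution pair and the sum $m+n=16$:

```python
# Search natural numbers m <= n with (m + n) / (m^2 + mn + n^2) = 4/49,
# i.e. 49(m + n) = 4(m^2 + mn + n^2).
solutions = [(m, n) for m in range(1, 200) for n in range(m, 200)
             if 49 * (m + n) == 4 * (m * m + m * n + n * n)]
print(solutions)          # [(6, 10)]
print(sum(solutions[0]))  # 16
```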
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/290166",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 5,
"answer_id": 1
} |
Classifying mathematical "coincidences" Doing homework a few years ago, I noticed that the sum of the squares of $88$ and $33$ is $8833$. What would this kind of mathematical "curiosity" be called? Does this or any other similar coincidence have any deeper meaning or structure?
| Alf van der Poorten wrote a paper, The Hermite-Serret algorithm and $12^2+33^2$. He notes a number of similar curiosities, such as $$25840^2+43776^2=2584043776$$ and $$1675455088^2+3734621953^2=16754550883734621953$$ and develops the theory behind these things, including a discussion of the way to go from a solution of $z^2\equiv-1\pmod n$ to a solution of $x^2+y^2=n$.
The paper also appears in Cryptography and Computational Number Theory (Singapore, 1999), 129–136, Progr. Comput. Sci. Appl. Logic, 20, Birkhäuser, Basel, 2001. The Math Reviews data would be MR1944724 (2003i:11047)
See also http://oeis.org/A064942
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/290221",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 2,
"answer_id": 1
} |
$9^n \equiv 1 \mod 8$ I would like someone to check this inductive proof (sketch)
The base case is clear. For the inductive step, it follows that $8 \mid 9^{n+1} - 9 = 9(9^n - 1)$ by the inductive hypothesis. So $9^{n+1} \equiv 9 \equiv 1 \mod 8$.
Feedback would be appreciated.
| I'm assuming you mean what you say when you state your work as a proof "sketch".
The base case is clear. For the inductive step, it follows that $8 \mid 9^{n+1} - 9 = 9(9^n - 1)$ by the inductive hypothesis. So $9^{n+1} \equiv 9 \equiv 1 \mod 8$.
In your final write up, I'd suggest you "fill in" a bit of detail: e.g., to "walk through" the base case, at least stating that the congruence holds for $n=1$, or perhaps
"for $n = 1$, clearly, $9\equiv 1 \pmod 8$".
Then I suggest you make your inductive hypothesis explicit:
"Assume that it is true that $9^n \equiv 1 \pmod 8$,"
and then finish with, "for the inductive step....[what you wrote]"
If your task was to prove the congruence holds using proof by induction on $n$, then you've done a fine job of sketching such a proof.
If you can use other strategies, then bonsoon's suggestion is worth considering:
"Or note that since $9 \equiv 1 \pmod 8$, we have $9^n\equiv 1^n = 1 \pmod 8.$"
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/290287",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Can every diagonalizable matrix be diagonalized into the identity matrix? I'm a chemistry major and I haven't taken much math, but this came up in a discussion of quantum chemistry and my professor said (not very confidently) that if a matrix is diagonalizable, then you should be able to diagonalize it to the identity matrix. I suspect this is true for symmetrical matrices, but not all matrices. Is that correct?
| Take the $0$ $n\times n$ matrix. It's already diagonal (and symmetrical) but certainly can't be diagonalized to the identity matrix.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/290340",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 4,
"answer_id": 1
} |
Multinomial Coefficient Power on a number using the binomial coefficient $_nC_r=\binom nr$, find the coefficient of $(wxyz)^2$ in the expansion of $(2w-x+3y+z-2)^n$. The answer key says it is $n=12$, $r= 2\times2\times2\times2\times4$ in one of the equations for $_nC_r$. Why is there a $4$ there? Is it because there are $4$ terms?
| The coefficient of $a^2b^2c^2d^2e^{n-8}$ in $(a+b+c+d+e)^n$ is the multinomial coefficient $\binom n{2,2,2,2,n-8}$. The $n-8$ is needed because the exponenents need to add up to $n$, anything else would make the multinomial coefficient undefined. For $n=12$ you get $n-8=4$, so I suppose that is where your $4$ comes from. Now put $a=2w,b=-x,c=3y,d=z,e=-2$ to get the real answer to your question.
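For the $n=12$ case the coefficient can be computed directly (a sketch; the helper `multinomial` is my own, not a standard-library function):

```python
from math import factorial

def multinomial(n, *ks):
    """n! / (k1! * k2! * ...), assuming the ks sum to n."""
    assert sum(ks) == n
    out = factorial(n)
    for k in ks:
        out //= factorial(k)
    return out

# Coefficient of (wxyz)^2 in (2w - x + 3y + z - 2)^12:
# pick exponents (2, 2, 2, 2, 4) for the terms (2w, -x, 3y, z, -2).
coeff = multinomial(12, 2, 2, 2, 2, 4) * 2**2 * (-1)**2 * 3**2 * 1**2 * (-2)**4
print(coeff)
```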
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/290401",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Books for self-learning math from the ground up I am a CSE graduate currently working as a .NET and Android developer. I feel my poor basics in mathematics are hampering my improvement as a programmer.
I want to achieve a sound understanding of the basics of mathematics so that I can pick up a book on 3D graphics programming/algorithms and not be intimidated by all the math (linear algebra, discrete mathematics, etc.) used in it.
So, what path/resource/book one should follow to create a good foundation on mathematics?
Thank you
P.S. I have read this but i wanted to know other/better options.
| Let me propose a different tack since you have a clear goal. Pick up a book on 3D graphics programming or algorithms, and if you come across something that intimidates you too much to get by on your own, ask about it here. We will be able to direct you to exact references to better understand the material in this way. Conceivably, there might be a little bit of recursion, and that's okay.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/290439",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
} |
$n\times n$ board, non-challenging rooks Consider an $n \times n$ board in which the squares are colored black and white in
the usual chequered fashion and with at least one white corner square.
(i) In how many ways can $n$ non-challenging rooks be placed on the white
squares?
(ii) In how many ways can $n$ non-challenging rooks be placed on the black
squares?
I've tried some combinations: shading a certain square and deleting the row and the column in which it was, and then using recurrence, but it doesn't work.
| We are looking at all permutations of $n$ that are (white-squares case) parity-preserving, or (black-squares case) parity-reversing. If $n$ is even the black-squares case is equivalent to the white-squares case (by a vertical reflection for instance), and if $n$ is odd the black-squares case has no solutions (the odd numbers $1,3,\ldots,n$ would have to map bijectively to the smaller set of even numbers $2,4,\ldots,n-1$).
So it remains to do the white square case. But now we must permute the odd and the even numbers independently among themselves, for $\lfloor\frac n2\rfloor!\times\lceil\frac n2\rceil!$ solutions.
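The count can be checked against brute force for small boards (a sketch, not part of the original answer; rows and columns are 0-indexed, so the white corner is $(0,0)$ and white squares have even coordinate sum):

```python
from itertools import permutations
from math import factorial

def count(n, parity):
    """Rook placements on squares (i, j) with (i + j) % 2 == parity,
    i.e. permutations s with (i + s[i]) % 2 == parity for every row i."""
    return sum(all((i + s[i]) % 2 == parity for i in range(n))
               for s in permutations(range(n)))

for n in range(1, 7):
    white = factorial(n // 2) * factorial((n + 1) // 2)
    black = white if n % 2 == 0 else 0
    assert count(n, 0) == white and count(n, 1) == black
print("counts match the formula for n = 1..6")
```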
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/290547",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 5,
"answer_id": 1
} |
Solve the recurrence $y_{n+1} = 2y_n + n$ for $n\ge 0$ So I have been assigned this problem for my discrete math class and am getting nowhere. The book for the class doesn't really have anything on recurrences and the examples given in class are not helpful at all. I seem to be going in circles with the math. Any help with this problem would be GREATLY appreciated.
Solve the recurrence
$$y_{n+1} = 2y_n + n$$ for non-negative integer $n$
and initial condition $y_0 = 1\;$ for
a) Ordinary generating series
b) Exponential generating series
c) Telescoping
d) Lucky guess + mathematical induction
e) Any other method of your choice
Thanks in advance!
| Using ordinary generating functions
$$y_{n+1}=2y_n+n$$
gets transformed into
$$\sum_{n=0}^\infty y_{n+1}x^n=2\sum_{n=0}^\infty y_nx^n+\sum_{n=0}^\infty nx^n$$
$$\sum_{n=0}^\infty y_{n+1}x^n=2 y(x)+x\sum_{n=1}^\infty nx^{n-1}$$
$$\sum_{n=0}^\infty y_{n+1}x^n=2 y(x)+x\frac{1}{(1-x)^2}$$
$$\sum_{n=0}^\infty y_{n+1}x^{n+1}=2x y(x)+x^2\frac{1}{(1-x)^2}$$
$$\sum_{n=1}^\infty y_{n}x^{n}=2x y(x)+x^2\frac{1}{(1-x)^2}$$
$$\sum_{n=0}^\infty y_{n}x^{n}=2x y(x)+x^2\frac{1}{(1-x)^2}+y_0$$
$$y(x)=2x y(x)+x^2\frac{1}{(1-x)^2}+y_0$$
$$(1-2x) y(x)=x^2\frac{1}{(1-x)^2}+y_0$$
$$ y(x)=x^2\frac{1}{(1-2x)(1-x)^2}+\frac{y_0}{1-2x}$$
Can you match the coefficients now?
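Partial fractions of $y(x)=\frac{x^2}{(1-2x)(1-x)^2}+\frac{y_0}{1-2x}$ with $y_0=1$ give $y(x)=\frac{2}{1-2x}-\frac{1}{(1-x)^2}$, hence $y_n=2^{n+1}-n-1$ (my derivation, offered as a sketch, which can be verified against the recurrence):

```python
def y_closed(n):
    # Coefficient of x^n in 2/(1-2x) - 1/(1-x)^2: 2 * 2^n - (n + 1).
    return 2 ** (n + 1) - n - 1

y = 1  # y_0 = 1
for n in range(50):
    assert y == y_closed(n)
    y = 2 * y + n  # the recurrence y_{n+1} = 2 y_n + n
print("closed form matches the recurrence for n = 0..49")
```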
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/290616",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 0
} |
Show that $\frac{z-1}{\mathrm{Log(z)}}$ is holomorphic off $(-\infty,0]$ Let $f(z)=\frac{z-1}{Log(z)}$ for $z\neq 1$ and $f(1)=1$. Show that $f$ is holomorphic on $\mathbb{C}\setminus(-\infty,0]$.
I know it looks like an easy problem, but I got stuck and need some clarification. The way I see it, I need to show that $f$ is complex differentiable at every point in $\Omega=\mathbb{C}\setminus(-\infty,0]$. So, if we take any $z_{0}$ different from $1$, our function is the quotient of two complex differentiable functions on $\Omega$, so is complex differentiable ($Log(z_{0})\neq0$). Now, if we take $z_{0}$ to be $1$, then if $f$ is continuous at that point, we could use the Cauchy-Riemann equations to check whether $u_{x}(1,0)=v_{y}(1,0)$ and $u_{y}(1,0)=-v_{x}(1,0)$ with $f=u+iv$. My question is: Is this the fastest way to show complex differentiability at $1$? I mean, how do I get to differentiating $u$ and $v$ if I don't know them explicitly? I would also appreciate it if someone could give me some hints on how to compute the limit of $f$ at $1$.
| If $g$ is holomorphic on an open set $G$, $a\in G$, and $g(a)=0$, then there is a positive integer $n$ and a holomorphic function $h$ on $G$ such that $g(z)=(z-a)^nh(z)$ for all $z\in G$, and $h(a)\neq 0$. (This $n$ is the multiplicity of the zero of $g$ at $a$.)
Since $\mathrm{Log}(1)=0$, and $\mathrm{Log}'(1)=1\neq 0$, there exists a holomorphic function $h$ on $\mathbb C\setminus(-\infty,0]$ such that $\mathrm{Log}(z)=(z-1)h(z)$ for all $z$, and $h(1)\neq 0$. Since $\mathrm{Log}$ has no zeros except at $1$, $h$ has no zeros. Therefore $f=\dfrac{1}{h}$ is holomorphic on $\mathbb C\setminus(-\infty,0]$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/290671",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Is this set closed under addition or multiplication or both and why? $\{-1,0,1\}$
Please give an explanation, and also tell me what "closed under addition and multiplication" means.
Different definitions are given everywhere.
| A set $X$ is closed under addition if $x+y\in X$ for any $x,y\in X$. It is closed under multiplication if $x\times y\in X$ for any $x,y\in X$. Note that $x$ and $y$ may or may not be equal.
The set $\{-1,0,1\}$ is closed under multiplication but not addition (if we take usual addition and multiplication between real numbers). Simply verify the definitions by taking elements from the set two at a time, possibly the same.
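For a finite set, the definitions can be checked exhaustively; a small sketch:

```python
S = {-1, 0, 1}

# Closure means: the result of combining any two elements stays in S.
add_closed = all(x + y in S for x in S for y in S)
mul_closed = all(x * y in S for x in S for y in S)
print(add_closed, mul_closed)  # False True  (e.g. 1 + 1 = 2 is not in S)
```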
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/290750",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 4,
"answer_id": 1
} |
Does $a!b!$ always divide $(a+b)!$ Hello the question is as stated above and is given to us in the context of group theory, specifically under the heading of isomorphism and products. I would write down what I have tried so far but I have made very little progress in trying to solve this over the last few hours!
| The number of ways to choose $a$ objects out of $a+b$ objects if order matters in the selection is $(a+b)\times(a+b-1)\times\cdots\times((a+b)-(a-1))=\frac{(a+b)!}{b!}$, since there are $a+b$ ways to choose the first object, $a+b-1$ ways to choose the second object from the remaining ones, and so on.
However, $a!$ permutations actually correspond to a single combination (where order is immaterial), since $a$ objects can be arranged in $a!$ ways. This means that $\frac{(a+b)!}{b!}=ka!$ for some integer $k$, so that $(a+b)!=ka!b!$.
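A quick exhaustive check of the divisibility for small $a, b$ (a sketch, not part of the proof):

```python
from math import factorial

# (a+b)! / (a! b!) is the binomial coefficient C(a+b, a), which the
# counting argument above shows is an integer.
for a in range(12):
    for b in range(12):
        assert factorial(a + b) % (factorial(a) * factorial(b)) == 0
print("a! b! divides (a+b)! for all 0 <= a, b < 12")
```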
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/290801",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11",
"answer_count": 4,
"answer_id": 2
} |
Inverse of a diagonal matrix plus a Kronecker product? Given two matrices $X$ and $Y$, it's easy to take the inverse of their Kronecker product:
$(X\otimes Y)^{-1} = X^{-1}\otimes Y^{-1}$
Now, suppose we have some diagonal matrix $\Lambda$ (or more generally an easily inverted matrix, or one for which we already know the inverse). Is there a closed-form expression or efficient algorithm for computing $(\Lambda + (X\otimes Y))^{-1}$?
| Let $C := \left\{ c_{i,j} \right\}_{i,j=1}^N$ and $A := \left\{ a_{i,j} \right\}_{i,j=1}^n$ be symmetric matrices. The spectral decompositions of the two matrices read $A = O^T D_A O$ and $C = U^T D_C U$ where $D_A := Diag(\lambda_k)_{k=1}^n$ and $D_C := Diag(\mu_k)_{k=1}^N$ and $O \cdot O^T = O^T \cdot O = 1_n$ and $U \cdot U^T = U^T \cdot U = 1_N$. We use equation 5 from the cited paper in order to compute the resolvent $G_{A \otimes C}(z) := \left(z 1_{ n N} - A \otimes C\right)^{-1}$. We have:
\begin{eqnarray}
G_{A \otimes C}(z) &=& \left( O^T \otimes U^T\right) \cdot \left( z 1_{ n N} - D_A \otimes D_C \right)^{-1} \cdot \left(O \otimes U \right) \\
&=& \left\{ \sum\limits_{k=1}^n O_{k,i} O_{k,j} U^T \cdot \left( z 1_{N} - \lambda_k D_C\right)^{-1} U \right\}_{i,j=1}^n \\
&=& \sum\limits_{p=0}^\infty \frac{1}{z^{1+p}} A^p \otimes C^p \\
&=& \frac{\sum\limits_{p=0}^{d-1} z^{d-1-p} \sum\limits_{l=d-p}^d (-1)^{d-l} {\mathfrak a}_{d-l} \left(A \otimes C\right)^{p-d+l}}{\sum\limits_{p=0}^d z^{d-p} (-1)^p {\mathfrak a}_p}
\end{eqnarray}
where
$\det( z 1_{n N} - A \otimes C) := \sum\limits_{l=0}^{n N} (-1)^l {\mathfrak a}_l z^{n N-l}$.
The first two lines from the top are straightforward. In the third line we expanded the inverse matrix in a series and finally in the fourth line we summed up the infinite series using Cayley-Hamilton's theorem.
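Returning to the original question: in the special case $\Lambda = \lambda I$, the same diagonalization idea gives an efficient inverse, since $\lambda I + X\otimes Y = (U\otimes V)(\lambda I + D_X\otimes D_Y)(U\otimes V)^{-1}$. A hedged NumPy sketch (assuming symmetric $X$, $Y$ so a real eigendecomposition exists; a general diagonal $\Lambda$ does not split this way):

```python
import numpy as np

rng = np.random.default_rng(0)

def spd(n):
    a = rng.standard_normal((n, n))
    return a.T @ a  # symmetric positive semi-definite, so eigh applies

X, Y, lam = spd(3), spd(4), 2.5

# X = U D U^T and Y = V E V^T give
#   lam*I + kron(X, Y) = kron(U, V) @ (lam*I + kron(D, E)) @ kron(U, V).T,
# so inverting reduces to a diagonal solve in the shared eigenbasis.
dX, U = np.linalg.eigh(X)
dY, V = np.linalg.eigh(Y)
W = np.kron(U, V)
inv = W @ np.diag(1.0 / (lam + np.kron(dX, dY))) @ W.T

assert np.allclose(inv, np.linalg.inv(lam * np.eye(12) + np.kron(X, Y)))
print("eigenbasis inverse matches the direct inverse")
```

With SPD factors and $\lambda>0$, the diagonal entries $\lambda + d_i e_j$ are bounded away from zero, so the diagonal solve is well-conditioned.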
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/290853",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 3,
"answer_id": 1
} |
Comparing values of two definite integrals ($x^n\ln x$) I need to compare these two integrals:
$$
(1) \int_{a}^{b}x^n \ln x \, dx\qquad
(2) \int_{a}^{b}x^{n+1} \ln x \, dx
$$
for the following values of $[a, b]$: (A) $[1, 2]$ for both integrals, (B) $[0.5, 1]$ for both integrals and (C) $[0.5, 1]$ for (1) and $[0.3, 1]$ for (2).
What would be the most efficient way to solve this problem? Should I compare $x^n\ln x$ and $x^{n+1}\ln x$ (and the slope of both functions for a given range) or solve the integrals?
| You can give an elegant answer without touching the integrals, using integral properties.
It is known that if $f(x)$ and $g(x)$ are Riemann integrable functions over the closed interval $[a,b]$ such that $f(x) \geq g(x)$, then $\int_{a}^{b} f(x)\, dx \geq \int_{a}^{b} g(x)\, dx$. The same goes for $>$ and the opposite inequalities.
So you just need to know where your $f(x)$, namely, $x^{n}ln(x)$ is bigger or smaller than $x^{n+1}ln(x)$. Once you know these regions where some condition of order occurs ($f$ is bigger/smaller/equal than $g$), you'll know the relation between both integrals, directly from the properties.
For the C) point of the question, I'd just check if the functions are monotone in the given intervals, if some function is always greater than the other in its corresponding interval, and I would play around with some tricky values for $n$.
Only if I couldn't get any insight from properties or theorems, I would try to compute the integrals.
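A numeric sketch of cases (A) and (B) for one sample $n$ (this illustrates, but does not replace, the pointwise comparison argument):

```python
import math

def midpoint(func, a, b, steps=20000):
    """Simple midpoint-rule quadrature."""
    h = (b - a) / steps
    return h * sum(func(a + (i + 0.5) * h) for i in range(steps))

n = 3
f = lambda x: x ** n * math.log(x)        # integrand of (1)
g = lambda x: x ** (n + 1) * math.log(x)  # integrand of (2)

# (A) on [1, 2]: ln x >= 0 and x^(n+1) >= x^n there, so (2) >= (1).
assert midpoint(g, 1, 2) > midpoint(f, 1, 2)
# (B) on [0.5, 1]: ln x <= 0 and x^(n+1) <= x^n, so again (2) >= (1).
assert midpoint(g, 0.5, 1) > midpoint(f, 0.5, 1)
print("ordering confirmed numerically for n = 3")
```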
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/290914",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
If the absolute value of a function is Riemann integrable, then is the function itself integrable? I am trying to check the converses of a few theorems.
I know that if $g$ is integrable then $|g|$ is integrable. However, if $|g|$ is Riemann integrable, then is $g$ Riemann integrable?
I know that if $g$ is integrable then $g^2$ is integrable. However, is the converse true?
I have a hunch that they aren't true, but am failing to devise the counterexamples.
| Let $f(x)=1$ when $x$ is rational, $-1$ when $x$ is irrational. Interval say $[0,1]$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/291021",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
} |
Plotting for solution for $y=x^2$ and $x^2 + y^2 = a $ Consider the system $$y=x^2$$ and $$x^2 + y^2 = a $$for $x>0$, $y>0$, $a>0$.
Solving for equations give me $y+y^2 = a$, and ultimately $$y = \frac {-1 + \sqrt {4a+1}} {2} $$ (rejected $\frac {-1 - \sqrt {4a+1}} {2} $ since $y>0$).
The next part is to plot on the $x-y$ plane for different values of $a$. Is plotting the graph of $y = x^2$ insufficient?
| Yes, it is insufficient.
You should notice that this equation is "special:"
$$x^2 + y^2 = a$$
This is the graph of a circle, radius $\sqrt{a}$.
So, your graph should contain both the parabola and the part of the circle in the region in question.
Here's a link to a graph from Wolfram Alpha which may help give some intuition. The darkest shaded region that is there is the region of interest.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/291095",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Given an $m$, how to find $k$ such that $km + 1$ is a perfect square Is there any way other than trying all values of $k$ linearly?
| If $km+1=n^2$ for some $n$, write $n=am+b$ with $a,b\in\mathbb{Z}$, $0\le b<m$. Then
$$k=\frac{n^2-1}{m}=a^2m+2ab+\frac{b^2-1}{m}$$
So $k$ is an integer if and only if $b^2\equiv 1 \pmod{m}$.
For example, if $b\equiv\pm 1\pmod m$ we get $k=a^2m\pm 2a$ and $km+1=(am\pm1)^2$
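The criterion $b^2\equiv 1\pmod m$ is easy to use in code; a sketch (my illustration, with $m=21$ as an arbitrary example):

```python
import math

def units_mod(m):
    """All b in [0, m) with b*b congruent to 1 mod m."""
    return [b for b in range(m) if (b * b) % m == 1]

m = 21
print(units_mod(m))  # [1, 8, 13, 20]

# With n = a*m + b and b^2 = 1 (mod m), k = (n^2 - 1) / m is an integer,
# and k*m + 1 = n^2 is automatically a perfect square.
ks = []
for a in range(5):
    for b in units_mod(m):
        n = a * m + b
        k, rem = divmod(n * n - 1, m)
        assert rem == 0
        ks.append(k)
for k in ks:
    r = math.isqrt(k * m + 1)
    assert r * r == k * m + 1
print("every generated k makes k*m + 1 a perfect square")
```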
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/291158",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 1
} |
intersection of sylow subgroup I am in need of
*
*Example of a group $G$, a subgroup $A$ which is not normal in $G$, and a Sylow $p$-subgroup $B$ of $G$ such that $A \cap B$ is a Sylow $p$-subgroup of $A$.
A detailed solution will be helpful.
| If you want to construct an example which is non-trivial under several points of view, that is $A$ and $B$ and $G$ all distinct, and $A$ is not a $p$-group, you may find it as follows.
Find a finite group $G$ which has a non-normal subgroup $A$ which is not a $p$-group, but whose order is divisible by $p$. Let $S$ be a Sylow $p$-subgroup of $A$. Choose $B$ to be a Sylow $p$-subgroup of $G$ containing $S$. (Such a $B$ exists by Sylow's theorems.)
For instance, take $p = 2$, $G = S_{4}$, $A = \langle (123), (13) \rangle$ of order 6, $S = \langle (13) \rangle$, $B = \langle (1234), (13) \rangle$.
PS A simpler, but somewhat trivial example would be $G = B = \langle (1234), (13) \rangle$, $A = \langle (13) \rangle$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/291224",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
For which $a$ does the equation $f(z) = f(az)$ have a non-constant solution $f$ For which $a \in \mathbb{C} \setminus \{0,1\}$ does the equation $f(z) = f(az)$ have a non-constant solution $f$, with $f$ analytic in a neighborhood of $0$?
My attempt:
First, we can see that any such solution must satisfy:
$f(z)=f(a^kz)$ for all $k \in \mathbb{N} $.
If $|a|<1$:
The sequence $z_{k} = a^k$ converges to $0$, which is an accumulation point, and $f(z_i)=f(z_j)$ for all $i, j\in \mathbb{N}$. Thus $f$ must be constant.
If $|a|=1$:
For all $a \neq 1$ , $f$ must be constant on any circle around $0$, so again $f$ must be constant.
My quesions are:
Am I correct with my consclusions?
Also, I'm stuck in the case where $|a|>1$. Any ideas?
Thanks
| Thank you all very much. For completeness, I will write here a sketch of the solution:
For $|a|=1$: if $a$ is a root of unity, say $a^k=1$, we can take $f(z)=z^k$ (when $2\pi/\operatorname{Arg}(a)$ is an integer, this is that $k$). If $a$ is not a root of unity, the orbit $\{a^kz\}_{k\in\mathbb{N}}$ is dense in the circle $|w|=|z|$, so $f$ is constant on every circle around $0$ and hence constant.
For $|a|>1$, we can notice that actually $f(z)=f(a^kz)$ for all $k \in \mathbb{Z}$, so applying the identity with $a^{-1}$ in place of $a$ reduces this to the case $|a|<1$, and again only constant solutions exist.
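A quick numerical sanity check of the root-of-unity case (a sketch, not part of the original answer):

```python
import cmath

# If a is a primitive k-th root of unity, f(z) = z**k satisfies f(a*z) = f(z),
# since (a*z)**k = a**k * z**k = z**k.
k = 5
a = cmath.exp(2j * cmath.pi / k)
f = lambda z: z ** k

for z in [0.3 + 0.1j, 1 - 2j, 0.01j]:
    assert abs(f(a * z) - f(z)) < 1e-9
print("z**5 is invariant under multiplication by a 5th root of unity")
```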
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/291287",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 2,
"answer_id": 0
} |
Looking for a simple proof that groups of order $2p$ are up to isomorphism $\Bbb{Z}_{2p}$ and $D_p$ for prime $p>2$. I'm looking for a simple proof that up to isomorphism every group of order $2p$ ($p$ prime greater than two) is either $\mathbb{Z}_{2p}$ or $D_{p}$ (the Dihedral group of order $2p$).
I should note that by simple I mean short and elegant and not necessarily elementary. So feel free to use tools like Sylow Theorems, Cauchy Theorem and similar stuff.
Thanks a lot!
| Use Cauchy Theorem
Cauchy's theorem — Let $G$ be a finite group and $p$ be a prime. If $p$ divides the order of $G$, then $G$ has an element of order $p$.
then you have an element $x\in G$ of order $2$ and an element $y\in G$ of order $p$. Since $\langle y\rangle$ has index $2$ in $G$, it is normal, so $xyx^{-1}=y^k$ for some $k$; applying this relation twice gives $y = x^2yx^{-2} = y^{k^2}$, so $k^2\equiv 1 \pmod p$ and hence $k\equiv \pm 1 \pmod p$.
If $k\equiv -1$, then $xyx^{-1}=y^{-1}$, and $G=\langle x, y\rangle$ satisfies the defining relations of the dihedral group $D_p$.
If $k\equiv 1$, then $x$ and $y$ commute, and we show that $xy$ has order $2p$:
$(xy)^2 = y^2 \neq e$, and hence $ord(xy) \not| 2$
$(xy)^p = x^py^p = x \neq e$ (as $p$ is odd), therefore $ord(xy) \not| p$
and $(xy)^{2p} = x^{2p}y^{2p} = e$, so $ord(xy) | 2p$
hence $ord(xy) = 2p$, because it divides $2p$ but equals no proper divisor of $2p$, and so $G = \langle xy\rangle \cong \mathbb{Z}_{2p}$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/291349",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "12",
"answer_count": 5,
"answer_id": 4
} |
Can there be a well-defined set with no membership criteria? Prime numbers are a well-defined set with specific membership criteria. Can the same be said about "numbers"? Aren't numbers (that is, all numbers) a well defined set but without membership criteria? Anybody can say, given a particular object, whether that belongs in the set of numbers or not. But it may not be possible to give any criteria for this inclusion.
We may want to say that the set of all numbers has a criterion and that is that only numbers shall get into the set and all non-numbers shall stay out of it. But then, this is merely a definition and not a criterion for inclusion.
Therefore my question: Can there be a well-defined set with no membership criteria?
| A set must have membership criteria. Cantor defined sets by (taken from wikipedia)
A set is a gathering together into a whole of definite, distinct objects of our perception [Anschauung] and of our thought – which are called elements of the set.
As the elements in a set are therefore definite, we can describe them in some way, which is a membership criterion. Funny things can still happen: take a look at the Axiom of Choice and Russell's Paradox, the latter showing why Cantor's definition isn't enough and ZFC is needed.
In your example: The set of all numbers is the set containing all objects that we characterize as numbers. This quickly gets a bit meta-mathematical ("What is a number?", "two" and "2" are just the representatives / "shadows" of the thing we call two), you should take a look at how the natural numbers are defined.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/291417",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
} |
Show that there are no nonzero solutions to the equation $x^2=3y^2+3z^2$ I am asked to show that there are no non zero integer solutions to the following equation
$x^2=3y^2+3z^2$
I think that maybe infinite descents is the key.
So I started by taking the right hand side modulo 3, which gives me zero. Meaning that $x^2$ must be $0$ modulo 3 as well, and so I can write $x^2=3k$ for some integer $k$ with $(k,3)=1$.
I then divided by 3 and I am now left with $k=y^2+z^2$. Now I know that an integer can be written as a sum of 2 squares if and only if each prime of the form $4k+3$ in its prime factorization appears with an even power. But yet I am stuck.
If anyone can help would be appreciated.
| Assume $\,x,y,z\,$ have no common factor. Now let us work modulo $\,4\,$ : every square is either $\,0\,$ or $\,1\,$ , and since
$$3y^2+3z^2=-(y^2+z^2)=\begin{cases}0\,\;\;,\,y,z=0,2\\{}\\-1=3\,\;\;,\,y=1\,,\,z=0\,\,or\,\,y=0\,,\,z=1\\{}\\-2=2\;\;\,,\,y=z=1\end{cases}$$
so the only possibility is the first one, and thus also $\,x^2=0\,$ , but then all of $\,x,y,z\,$ are even...
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/291478",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 5,
"answer_id": 1
} |
Holomorphic functions on unit disc Let $f,g$ be holomorphic on $\mathbb{D}:=\lbrace z\in\mathbb{C}:|z|<1\rbrace$, $f\neq0,g\neq0$, such that $$\frac{f^{\prime}}{f}(\frac{1}{n})=\frac{g^{\prime}}{g}(\frac{1}{n}) $$ for all natural $n\geq1$. Does it imply that $f=Cg$, where $C$ is some constant?
Let $A:=\lbrace\frac{1}{n}:n\geq1\rbrace$ and $h:=\frac{f^{\prime}}{f}-\frac{g^{\prime}}{g}$. Now, $h$ is holomorphic on $\mathbb{D}$ and vanishes on a subset of $\mathbb{D}$ which has a limit point. Thus $h=0$, so $\frac{f^{\prime}}{f}=\frac{g^{\prime}}{g}$ on $\mathbb{D}$.
Could someone help with the next steps? Or maybe $f$ doesn't have to be in the form described above?
| Notice that your last statement is equivalent to $f'g-g'f=0$, since $f,g \neq 0$. Now there is no harm in dividing that expression by $g^2$. You get $\frac{f'g-g'f}{g^2}=0$. So, $(f/g)'=0$ and your result follows.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/291569",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 0
} |
Is there a general formula for the antiderivative of rational functions? Some antiderivatives of rational functions involve inverse trigonometric functions, and some involve logarithms. But inverse trig functions can be expressed in terms of complex logarithms. So is there a general formula for the antiderivative of any rational function that uses complex logarithms to unite the two concepts?
| Write the rational function as $$f(z) = \dfrac{p(z)}{q(z)} = \dfrac{p(z)}{\prod_{j=1}^n (z - r_j)}$$
where $r_j$ are the roots of the denominator, and $p(z)$ is a polynomial.
I'll assume $p$ has degree less than $n$ and the roots $r_j$ are all distinct.
Then the partial fraction decomposition of $f(z)$ is
$$ f(z) = \sum_{j=1}^n \frac{p(r_j)}{q'(r_j)(z - r_j)}$$
where $p(r_j)/q'(r_j)$ is the residue of $f(z)$ at $r_j$. An antiderivative
is
$$ \int f(z)\ dz = \sum_{j=1}^n \frac{p(r_j)}{q'(r_j)} \log(z -r_j)$$
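A quick numeric sanity check of this formula (Python; the cubic $q(z)=(z-1)(z-2)(z-3)$ and $p(z)=z+1$ are just example choices):

```python
import math

# f(z) = p(z)/q(z) with q(z) = (z-1)(z-2)(z-3), p(z) = z + 1 (example choices)
roots = [1.0, 2.0, 3.0]
p = lambda z: z + 1
dq = lambda z: 3 * z**2 - 12 * z + 11        # q'(z)
res = [p(r) / dq(r) for r in roots]          # residues p(r_j)/q'(r_j)

F = lambda x: sum(c * math.log(x - r) for c, r in zip(res, roots))  # antiderivative
f = lambda x: p(x) / math.prod(x - r for r in roots)

x, h = 5.0, 1e-6
deriv = (F(x + h) - F(x - h)) / (2 * h)      # numerical F'(x)
print(res, deriv, f(x))                      # residues [1.0, -3.0, 2.0]; F'(5) ≈ f(5) = 0.25
```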
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/291626",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 2
} |
Square of two positive definite matrices are equal then they are equal I have read that if $P, Q$ are two positive definite matrices such that $P^2=Q^2$, then $P=Q$.
I don't know why. Some one can help me? Thanks for any indication.
| It all boils down to this:
Proposition. Suppose $A$ is an $n\times n$ positive definite matrix. Then $A^2$ has an eigenbasis. Furthermore, given any eigenbasis $\{v_1, \ldots, v_n\}$ of $A^2$ such that for each $i$, $A^2v_i=\lambda_iv_i$ for some $\lambda_i>0$, we must have $Av_i=\sqrt{\lambda_i}v_i$.
I will leave the proof of this proposition to you. Now, suppose $\{v_1,\ldots,v_n\}$ is an eigenbasis for $P^2=Q^2$ with $P^2v_i=Q^2v_i=\lambda_iv_i$. By the above proposition, we must also have $Pv_i=Qv_i=\sqrt{\lambda_i}v_i$. Since the mappings $x\mapsto Px$ and $x\mapsto Qx$ agree on a basis of the underlying vector space, $P$ must be equal to $Q$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/291687",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 0
} |
$a \equiv 1 \pmod m$ and $a \equiv 1 \pmod n$ implies $a \equiv 1 \pmod{mn}$ when $\gcd(m,n)=1$ I have $m$ and $n$ which are relatively prime to one another, and $a$ is relatively prime to $mn$,
and after a lot of tinkering with my problem I came to this:
$a \equiv 1 \pmod m$ and $a \equiv 1 \pmod n$.
Why is it safe to say that $a \equiv 1 \pmod {mn}$?
| It looks as if you are asking the following. Suppose that $m$ and $n$ are relatively prime. Show that if $a\equiv 1\pmod{m}$ and $a\equiv 1\pmod{n}$, then $a\equiv 1\pmod{mn}$.
So we know that $m$ divides $a-1$, and that $n$ divides $a-1$. We want to show that $mn$ divides $a-1$.
Let $a-1=mk$. Since $n$ divides $a-1$, it follows that $n$ divides $mk$. But $m$ and $n$ are relatively prime, and therefore $n$ divides $k$. So $k=nl$ for some $l$, and therefore $a-1=mnl$.
Remark: $1.$ There are many ways to show that if $m$ and $n$ are relatively prime, and $n$ divides $mk$, then $n$ divides $k$.
One of them is to use Bezout's Theorem: If $m$ and $n$ are relatively prime, there exist integers $x$ and $y$ such that $mx+ny=1$.
Multiply through by $k$. We get $mkx+nky=k$. By assumption, $n$ divides $mk$, so $n$ divides $mkx$. Clearly, $n$ divides $nky$. So $n$ divides $mkx+nky$, that is, $n$ divides $k$.
$2.$ Note that there was nothing special about $1$. Let $m$ and $n$ be relatively prime. If $a\equiv c\pmod{m}$ and $a\equiv c\pmod{n}$, then $a\equiv c\pmod{mn}$.
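A brute-force check of the statement for one example pair of coprime moduli (Python; $m=8$, $n=15$ are arbitrary):

```python
m, n = 8, 15                         # coprime moduli (example values)
for a in range(500):
    if a % m == 1 and a % n == 1:
        assert a % (m * n) == 1      # a ≡ 1 (mod mn)
hits = [a for a in range(500) if a % m == 1 and a % n == 1]
print(hits)                          # exactly the numbers ≡ 1 (mod 120)
```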
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/291768",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 0
} |
Changing $(1-\cos(x))/x$ to avoid cancellation error for $0$ and $π$? I have to change this formula: $$\frac{1-\cos(x)}{x}$$ so that I can avoid the cancellation error. I can do this for 0 but not for $π$. So I get: $$\frac{\sin^2(x)}{x(1+\cos(x))}$$ which for $x$ close to $0$ gets rid of the cancellation error. But I don't know how to fix the error for $x$ close to $π$? I just want to know if I should be using trigonometric identities again? I've tried to use trig identities but nothing works. Any suggestions or hints?
Edit: So for $\pi$ I meant that $\sin(\pi)$ would be $0$, so it wouldn't give me the correct value, as $(1-\cos\pi)/\pi=2/\pi$. The second expression would overall give me $0$. That's the error I meant for $\pi$. Sorry for the confusion there.
| Another possiblity that avoids cancellation error at both places is
$$ \frac{2 \sin^2(x/2)}{x} $$
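A small numeric experiment (Python) illustrating why the rewritten form helps: $1-\cos x = 2\sin^2(x/2)$ exactly, and the right-hand side involves no subtraction of nearly equal quantities, at $0$ or at $\pi$:

```python
import math

naive  = lambda x: (1 - math.cos(x)) / x           # cancels catastrophically near 0
stable = lambda x: 2 * math.sin(x / 2) ** 2 / x    # uses 1 - cos x = 2 sin^2(x/2)

x = 1e-9
print(naive(x), stable(x))      # 0.0 vs the accurate value ≈ x/2 = 5e-10
print(stable(math.pi))          # ≈ 2/pi, as expected at x = pi
```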
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/291823",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 1
} |
Why $f(x + x^3, y + y^3) \in L^1(\mathbb{R}^2)$, when $f(x, y) \in L^2(\mathbb{R}^2)$? How to show that $f(x + x^3, y + y^3) \in L^1(\mathbb{R}^2)$ when $f(x, y) \in L^2(\mathbb{R}^2)$?
Can someone help me?
Thank you!
| The statement that $f(x, y) \in L^2(\mathbb{R}^2)$ is the same as the statement that $(f(x + x^3, y + y^3))^2(1 + 3x^2)(1 + 3y^2)$ is in $L^1(\mathbb{R}^2)$, which can be seen after a change of variables to $(u,v) = (x + x^3, y + y^3)$ for the latter. Inspired thus, we write
$$\int_{\mathbb{R}^2}|f(x + x^3, y + y^3)| = $$
$$\int_{\mathbb{R}^2}\bigg(\big|f(x + x^3, y + y^3)\big|\sqrt{(1 + 3x^2)(1 + 3y^2)}\bigg){1 \over \sqrt{(1 + 3x^2)(1 + 3y^2)} }\,dx\,dy$$
By Cauchy-Schwarz this is at most the square root of
$$\int_{\mathbb{R}^2}(f(x + x^3, y + y^3))^2(1 + 3x^2)(1 + 3y^2)\,dx\,dy\int_{\mathbb{R}^2}{1 \over (1 + 3x^2)(1 + 3y^2) }\,dx\,dy$$
The first integral is finite as described above, and the second one can directly be computed to something finite. So the original integral is finite.
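For concreteness, the second factor can be computed in closed form, since $\int_{\mathbb{R}}\frac{dx}{1+3x^2}=\frac{1}{\sqrt 3}\arctan(\sqrt 3\, x)\Big|_{-\infty}^{\infty}=\frac{\pi}{\sqrt3}$:

$$\int_{\mathbb{R}^2}{1 \over (1 + 3x^2)(1 + 3y^2)}\,dx\,dy=\left(\int_{\mathbb{R}}\frac{dx}{1+3x^2}\right)^{2}=\frac{\pi^{2}}{3}<\infty.$$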
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/291884",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
If $f(x) = x-5$ and $g(x) = x^2 -5$, what is $u(x)$ if $(u \circ f)(x) = g(x)$?
Let $f(x) = x-5$, $g(x) = x^2 -5$. Find $u(x)$ if $(u \circ f)(x) = g(x)$.
I know how to do it when we have $(f \circ u)(x)$, but only because $f(x)$ was defined. But here $u(x)$ is not defined. Is there any way I can reverse it to get $u(x)$ alone?
| I think I figured out what my professor did now . . .
$(u \circ f)(x) = g(x)$
$(u \circ f)(f^{-1} (x)) = g( f^{-1}(x)) $
$\big((u \circ f) \circ f^{-1}\big)(x) = (g \circ f^{-1})(x) $
$\big(u \circ (f \circ f^{-1})\big)(x) = (g \circ f^{-1})(x) $
$u(x) = g(f^{-1}(x))$
$u(x) = g(x+5)$
I think this is right. Please correct me if I'm wrong.
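A quick check of the derived $u$ on a few points (Python; the test values are arbitrary):

```python
f = lambda x: x - 5
g = lambda x: x ** 2 - 5
u = lambda x: g(x + 5)          # u(x) = (x+5)^2 - 5

for x in [-3, 0, 1.5, 7]:
    assert u(f(x)) == g(x)
print("u(f(x)) = g(x) on all test points")
```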
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/291950",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 2
} |
definition of left (right) Exact Functors Let $P,Q$ be abelian categories and $F:P\to Q$ be an additive functor. Wikipedia states two definitions on left exact functors (right dually):
*
*$F$ is left exact if $0\to A\to B\to C\to 0$ is exact implies $0\to F(A)\to F(B)\to F(C)$ is exact.
*$F$ is left exact if $0\to A\to B\to C$ is exact implies $0\to F(A)\to F(B)\to F(C)$ is exact.
Moreover, it states that these two are equivalent definitions. I'm quite new at this topic so I'm not sure if this is immediately clear or not. Surely, 2. $\implies$ 1., being the more general case. But I don't see how to even approach the other direction; is this merely tautological?
| Assume 1. holds. First observe that $F$ preserves monomorphisms: If $i : A \to B$ is a monomorphism, then $0 \to A \xrightarrow{i} B \to \mathrm{coker}(i) \to 0$ is exact, hence also $0 \to F(A) \to F(B) \to F(\mathrm{coker}(i))$ is exact. In particular $F(i)$ is a monomorphism.
Now if $0 \to A \xrightarrow{i} B \xrightarrow{f} C$ is exact, then $0 \to A \xrightarrow{i} B \xrightarrow{f} \mathrm{im}(f) \to 0$ is exact, hence by assumption $0 \to F(A) \to F(B) \to F(\mathrm{im}(f))$ is exact. Since $F(\mathrm{im}(f)) \to F(C)$ is a monomorphism, it follows that also $0 \to F(A) \to F(B) \to F(C)$ is exact.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/292037",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 2,
"answer_id": 0
} |
Finding $n$ such that $\frac{3^n}{n!} \leq 10^{-6}$ This question actually came out of another question: in some other post I saw a reference and, going through it, found this problem ($n>0$).
Solve for n explicitly without calculator:
$$\frac{3^n}{n!}\le10^{-6}$$
And I appreciate hint rather than explicit solution.
Thank You.
I would use Stirling's approximation $n!\approx \frac {n^n}{e^n}\sqrt{2 \pi n}$ to get $\left( \frac {3e}n\right)^n \frac{1}{\sqrt{2 \pi n}} \lt 10^{-6}$. Then for a first cut, ignore the square root factor and set $3e \approx 8$, so we have $\left( \frac 8n \right)^n \lt 10^{-6}$. Now take the base $10$ log and get $n(\log 8 - \log n) \lt -6$. Knowing that $\log 2 \approx 0.3$, it looks like $16$ will not quite work, as this becomes $16(-0.3) = -4.8 \not\lt -6$. Each increment of $n$ lowers the quantity by a factor of about $5$ around here, or $0.7$ in the log. We need a couple of those, so I would look for $18$.
Added: the square root factor I ignored is worth about a factor of $10$, which is what makes $17$ good enough.
Alpha shows that $17$ is good enough.
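A direct search confirms this (Python):

```python
import math

n = 1
while 3 ** n / math.factorial(n) > 1e-6:
    n += 1
print(n)  # 17: the smallest n with 3^n/n! <= 10^{-6}
```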
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/292122",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 4,
"answer_id": 1
} |
Combinatorics Statistics Question The problem I am working on is:
An academic department with five faculty members—Anderson, Box, Cox, Cramer, and Fisher—must select two of its members to serve on a personnel review committee.
Because the work will be time-consuming, no one is anxious to serve, so it is decided that the representative will be selected by putting the names on identical pieces of paper
and then randomly selecting two.
a.What is the probability that both Anderson and Box will
be selected? [Hint:List the equally likely outcomes.]
b.What is the probability that at least one of the two members whose name begins with C is selected?
c. If the five faculty members have taught for 3, 6, 7, 10,
and 14 years, respectively, at the university, what is the
probability that the two chosen representatives have a
total of at least 15 years’ teaching experience there?
For a), I figured that since probability of Anderson being chosen is $1/5$ and Box being chosen is $1/5$ the answer would simply be $2/5$. It isn't, though. It is $0.1$ How did they get that answer? I might need help with parts b) and c) as well.
| Most simple way to understand this problem
(I just did this problem just now) lol
A, B, Co, Cr, and F exist.
pick 1 of 5 at random. (1/5)
pick another at random, but this time around there are only 4 choices, so (1/4).
multiply the two values. (1/20)
BUT! that's considering that A is picked at first try then B on the second.
you also have to consider the possibility that B would be picked first, etc.
so you add (1/20) + (1/20) = (2/20) = (1/10) = 0.1
pce
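All three parts can be checked by enumerating the $\binom{5}{2}=10$ equally likely pairs (Python):

```python
from itertools import combinations

faculty = ["Anderson", "Box", "Cox", "Cramer", "Fisher"]
years = dict(zip(faculty, [3, 6, 7, 10, 14]))

pairs = list(combinations(faculty, 2))                  # the 10 equally likely outcomes
p_a = sum(set(pr) == {"Anderson", "Box"} for pr in pairs) / len(pairs)
p_b = sum(any(name[0] == "C" for name in pr) for pr in pairs) / len(pairs)
p_c = sum(years[u] + years[v] >= 15 for u, v in pairs) / len(pairs)
print(p_a, p_b, p_c)  # 0.1 0.7 0.6
```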
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/292187",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 3
} |
Limit of $s_n = \int\limits_0^1 \frac{nx^{n-1}}{1+x} dx$ as $n \to \infty$ Let $s_n$ be a sequence defined as given below for $n \geq 1$. Then find out $\lim\limits_{n \to
\infty} s_n$.
\begin{align}
s_n = \int\limits_0^1 \frac{nx^{n-1}}{1+x} dx
\end{align}
I have written a solution of my own, but I would like to know whether it is completely correct, and I would also like people to post alternative solutions.
| Notice
(1) $\frac{s_n}{n} + \frac{s_{n+1}}{n+1} = \int_0^1 x^{n-1} dx = \frac{1}{n} \implies s_n + s_{n+1} = 1 + \frac{s_{n+1}}{n+1}$.
(2) $s_n = n\int_0^1 \frac{x^{n-1}}{1+x} dx < n\int_0^1 x^{n-1} dx = 1$
(3) $s_{n+1} - s_n = \int_0^1 \frac{d (x^{n+1}-x^n)}{1+x} = \int_0^1 \frac{x^{n+1}-x^n}{(1+x)^2}\, dx = -\int_0^1 x^n \frac{1-x}{(1+x)^2}\, dx < 0$ (integrating by parts; the boundary term vanishes)
(3) together with $s_n > 0$ $\implies s = \lim_{n\to\infty} s_n$ exists, and (1+2) $\implies s+s = 1 + 0 \implies s = \frac{1}{2}$.
In any event, $s_n$ can be evaluated exactly as $n (\psi(n) - \psi(\frac{n}{2}) - \ln{2})$ where $\psi(x)$ is the digamma function. Since $\psi(x) \approx \ln(x) - \frac{1}{2x} - \frac{1}{12x^2} + \frac{1}{120x^4} + \cdots$ as $x \to \infty$, we know:
$$s_n \approx \frac{1}{2} + \frac{1}{4 n} - \frac{1}{8 n^3} + ...$$
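Relation (1) also gives a cheap way to check the limit numerically: it rearranges to $s_{n+1}=\frac{n+1}{n}(1-s_n)$ with $s_1=\int_0^1\frac{dx}{1+x}=\ln 2$ (Python):

```python
import math

s, n = math.log(2.0), 1          # s_1 = ln 2
while n < 200:
    s = (n + 1) / n * (1 - s)    # s_{n+1} = (n+1)/n * (1 - s_n), from (1)
    n += 1
print(s)                         # ≈ 1/2 + 1/(4·200) ≈ 0.50125
```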
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/292251",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11",
"answer_count": 5,
"answer_id": 0
} |
Fermat's Little Theorem Transformation I am reading a document which states:
By Fermat's Little Theorem, $a^{p-1}\bmod p = 1$. Therefore, $a^{b^c}\bmod p = a^{b^c\bmod (p - 1)} \bmod p$
For the life of me, I cannot figure out the logic of that conclusion. Would someone mind explaining it? I will be forever in your debt.
Thank you!
| The key point is that if $\rm\ a^n = 1\ $ then exponents on $\rm\ a\ $ may be reduced mod $\rm\,n,\,$ viz.
Hint $\rm\quad a^n = 1\ \,\Rightarrow\,\ a^i = a^j\ \ { if} \ \ i\equiv j\,\ (mod\ n)\:$
Proof $\rm\ \ i = j\!+\!nk\:$ $\Rightarrow$ $\rm\:a^i = a^{j+nk} = a^j (a^n)^k = a^j 1^k = a^j\ \ $ QED
Yours is the special case $\rm\:0\ne a\in \Bbb Z/p,\:$ so $\rm\:a^{p-1}\! = 1,\:$ so exponents may be reduced mod $\rm\:p\!-\!1.$
Remark $\ $ You should check that the proof works ok if $\rm\,k < 0\:$ (hint: $\rm\: a^n = 1\:\Rightarrow\: a\,$ is invertible, so negative powers of $\rm\,a\,$ are well-defined). The innate structure will become clearer if you study university algebra, where you will learn about cyclic groups, orders of elements, order ideals, and modules.
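As a quick sanity check of the exponent reduction (Python; $p=101$, $a=7$, $b=12$, $c=34$ are arbitrary example values with $\gcd(a,p)=1$):

```python
p = 101                                # a prime, with gcd(a, p) = 1
a, b, c = 7, 12, 34
lhs = pow(a, b ** c, p)                # exponent b^c computed in full
rhs = pow(a, pow(b, c, p - 1), p)      # exponent first reduced mod p-1
print(lhs, rhs)                        # the two agree
```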
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/292365",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Proving the sum of the first $n$ natural numbers by induction I am currently studying proving by induction but I am faced with a problem.
I need to solve by induction the following question.
$$1+2+3+\ldots+n=\frac{1}{2}n(n+1)$$
for all $n \geq 1$.
Any help on how to solve this would be appreciated.
This is what I have done so far.
Show truth for $N = 1$
Left Hand Side = 1
Right Hand Side = $\frac{1}{2} (1) (1+1) = 1$
Suppose truth for $N = k$
$$1 + 2 + 3 + ... + k = \frac{1}{2} k(k+1)$$
Proof that the equation is true for $N = k + 1$
$$1 + 2 + 3 + ... + k + (k + 1)$$
Which is Equal To
$$\frac{1}{2} k (k + 1) + (k + 1)$$
This is where I'm stuck, I don't know what else to do. The answer should be:
$$\frac{1}{2} (k+1) (k+1+1)$$
Which is equal to:
$$\frac{1}{2} (k+1) (k+2)$$
Right?
By the way sorry about the formatting, I'm still new.
| Think of pairing up the numbers in the series. The 1st and last (1 + n) the 2nd and the next to last (2 + (n - 1)) and think about what happens in the cases where n is odd and n is even.
If it's even you end up with n/2 pairs whose sum is (n + 1) (or 1/2 * n * (n +1) total)
If it's odd you end up with (n-1)/2 pairs whose sum is (n + 1) and one odd element equal to (n-1)/2 + 1 ( or 1/2 * (n - 1) * (n + 1) + (n - 1)/2 + 1 which comes out the same with a little algebra).
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/292423",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 4,
"answer_id": 3
} |
Proving the equality in weak maximum principle of elliptic problems This one is probably simple, but I just can't prove the result.
Suppose that $\mathop {\max }\limits_{x \in \overline \Omega } u\left( x \right) \leqslant \mathop {\max }\limits_{x \in \partial \Omega } {u^ + }\left( x \right)$ and $\mathop {\min }\limits_{x \in \overline \Omega } u\left( x \right) \geqslant \mathop {\min }\limits_{x \in \partial \Omega } {u^ - }\left( x \right)$, where $\overline \Omega $ is closure of $\Omega $ and ${\partial \Omega }$ is boundary of $\Omega $, ${u^ + } = \max \left\{ {u,0} \right\}$ and ${u^ - } = \min \left\{ {u,0} \right\}$ (notice that $\left| u \right| = {u^ + } - {u^ - }$).
How to show that $\mathop {\max }\limits_{x \in \overline \Omega } \left| {u\left( x \right)} \right| = \mathop {\max }\limits_{x \in \partial \Omega } \left| {u\left( x \right)} \right|$?
So far, I've got
$\mathop {\max }\limits_{x \in \overline \Omega } \left| {u\left( x \right)} \right| = \mathop {\max }\limits_{x \in \overline \Omega } \left( {{u^ + }\left( x \right) - {u^ - }\left( x \right)} \right) \leqslant \mathop {\max }\limits_{x \in \overline \Omega } {u^ + }\left( x \right) + \mathop {\max }\limits_{x \in \overline \Omega } \left( { - {u^ - }\left( x \right)} \right) \leqslant \mathop {\max }\limits_{x \in \overline \Omega } {u^ + }\left( x \right) - \mathop {\min }\limits_{x \in \overline \Omega } {u^ - }\left( x \right)$
and
$\mathop {\max }\limits_{x \in \partial \Omega } \left| {u\left( x \right)} \right| = \mathop {\max }\limits_{x \in \partial \Omega } \left( {{u^ + }\left( x \right) - {u^ - }\left( x \right)} \right) \leqslant \mathop {\max }\limits_{x \in \partial \Omega } {u^ + }\left( x \right) - \mathop {\min }\limits_{x \in \partial \Omega } {u^ - }\left( x \right)$
Edit: $u \in {C^2}\left( \Omega \right) \cap C\left( {\overline \Omega } \right)$, although I don't see how that helps. $Lu=0$, which gives us $\mathop {\max }\limits_{x \in \overline \Omega } u\left( x \right) \leqslant \mathop {\max }\limits_{x \in \partial \Omega } {u^ + }\left( x \right)$ and $\mathop {\min }\limits_{x \in \overline \Omega } u\left( x \right) \geqslant \mathop {\min }\limits_{x \in \partial \Omega } {u^ - }\left( x \right)$.
(Renardy, Rogers, An introduction to partial differential equations, p 103)
Edit 2: Come on, this should be super easy, the author didn't even comment on how the equality follows from those two.
| $\left. \begin{gathered}
\mathop {\max }\limits_{x \in \overline \Omega } u\left( x \right) \leqslant \mathop {\max }\limits_{x \in \partial \Omega } {u^ + }\left( x \right)\mathop \leqslant \limits^{{u^ + } \leqslant \left| u \right|} \mathop {\max }\limits_{x \in \partial \Omega } \left| {u\left( x \right)} \right| \\
\mathop {\max }\limits_{x \in \overline \Omega } \left( { - u\left( x \right)} \right) \leqslant \mathop {\max }\limits_{x \in \partial \Omega } - {u^ - }\left( x \right)\mathop \leqslant \limits^{ - {u^ - } \leqslant \left| u \right|} \mathop {\max }\limits_{x \in \partial \Omega } \left| {u\left( x \right)} \right| \\
\end{gathered} \right\} \Rightarrow \mathop {\max }\limits_{x \in \overline \Omega } \left| {u\left( x \right)} \right| \leqslant \mathop {\max }\limits_{x \in \partial \Omega } \left| {u\left( x \right)} \right|$.
On the other hand, $\partial \Omega \subseteq \overline \Omega \Rightarrow \mathop {\max }\limits_{x \in \partial \Omega } \left| {u\left( x \right)} \right| \leqslant \mathop {\max }\limits_{x \in \overline \Omega } \left| {u\left( x \right)} \right|$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/292510",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Is there a good repository for mathematical folklore knowledge? Among mathematicians there is lot of folklore knowledge for which it is not obvious how to find original sources. This knowledge circulates orally.
An example: Among math competition folks, a common conversation is the search for a function over the reals that is infinitely differentiable, with it and all derivatives vanishing only at 0. I think $$f(x) := e^{-\frac{1}{x^2}}\ \text{for}\ x \neq 0, \qquad f(0) := 0$$ is an answer to this one, and it is not hard to prove.
Is there any collection of such mathematical folklore, with proofs?
See also my follow-up question here.
| One place that contains a lot of mathematics is the nLab. It is largely centered on higher category theory/homotopy theory but also contains a lot of general stuff. It is certainly not aiming to only contain folklore knowledge but it does contain a lot of it.
Wikipedia will also certainly contain folklore knowledge embedded somewhere in the millions of pages of information but perhaps its accuracy is more questionable than what the nLab offers.
Various maths dedicated blogs will also contain folklore and anecdotes.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/292565",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 2
} |
$Q=\Sigma q_i$ and its differentiation by one of its variables Suppose that $Q = q_1 + ... +q_n$.
why is
$$\frac{dQ}{dq_i} = \Sigma_{j=1}^{n}\frac{\partial q_j}{\partial q_i}$$?
Is it related to each $q_i$ being independent to other $q_i$'s?
| No, it is because differentiation is linear, ie, if $h=f+g$, then $\frac{d h}{d x} = \frac{d f}{d x} + \frac{d g}{d x}$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/292626",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Prove that $f:[0,\infty)\to\mathbb{R}$ where $f(x) := {1\over x}\cos({1\over x})$, $x>0$: does $f$ have the intermediate value property on $[0,\infty)$? Prove that $f:[0,\infty)\to\mathbb{R}$ where $f(x) := {1\over x}\cos({1\over x})$, $x>0$: does $f$ have the intermediate value property on $[0,\infty)$?
Attempts: In $\mathbb{R}$, if $f$ is continuous then it has the intermediate value property, and as we know ${1\over x}\cos({1\over x})$ is continuous for $x>0$, so it has the intermediate value property there; but at $0$ it doesn't seem to be that straightforward.
| The function $f\colon [0,+\infty)\to \mathbb R$
$$
f(x) = \begin{cases}
\frac {1}{x} \cos \frac 1 x & \text{if $x>0$}\\\\
0 & \text {if $x=0$}
\end{cases}
$$
is not continuous but has the intermediate value property. In fact given two points $a,b \in [0,\infty)$ with $a<b$ you have two possibilities:
*
*$a>0$. In this case notice that the function is continuous on $[a,b]$ hence it has the property
*$a=0$. In this case you should notice that it is possible to find $\epsilon<b$ such that $f(\epsilon)=0$ (see where $\cos(1/x)=0$). Since $f$ is continuous on $[\epsilon,b]$ you can apply the intermediate value theorem there.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/292686",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
What's the difference between $|\nabla f|$ and $\nabla f$? what's the difference between $|\nabla f|$ and $\nabla f$ for example in :
$$\nabla\cdot{\nabla f\over|\nabla f|}$$
| $|\nabla f|$ is the magnitude of the $\nabla f$ vector. The expression $\frac{\nabla f}{|\nabla f|}$ is thus a unit vector.
$\nabla$ (gradient) acts on a scalar to give a vector, and $\nabla\cdot$ (divergence) acts on a vector to give a scalar.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/292766",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Newton: convergence when calculating $x^2-2=0$ Find $x$ for which $x^2-2=0$ using Newton's method and $x_0 = 1.4$.
Then you get $x_{k+1} = x_k + \frac{1}{x_k} - \frac{x_k}{2}$.
How to show that you need 100 steps for 100 digits precision?
So I need to show for which $N$ it is $|x_N-\sqrt{2}| \leq 10^{-100}$ and therefore I am asked to show
$$|x_k - \sqrt{2}|\leq\delta<0.1 \implies |x_{k+1}-\sqrt{2}|\leq \frac{\delta^2}{2}$$
Then it obviously follows that you need 100 steps, but I don't manage to show this..
| Try to show that $|x_{k+1}-\sqrt2|\leqslant\frac12(x_k-\sqrt2)^2$ hence $|x_k-\sqrt2|\leqslant2\delta_k$ implies that $|x_{k+1}-\sqrt2|\leqslant2\delta_{k+1}$ with $\delta_{k+1}=\delta_k^2$.
Then note that $\delta_0\lt10^{-2}$ hence $\delta_k\leqslant10^{-2^{k+1}}$ for every $k$, in particular $2\delta_6\lt2\cdot10^{-128}$ ensures (slightly more than) $100$ accurate digits.
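A high-precision run (Python's decimal module) confirms that a handful of steps, not $100$, already gives $100$ digits:

```python
from decimal import Decimal, getcontext

getcontext().prec = 130             # work with ~130 significant digits
x = Decimal("1.4")
for _ in range(8):
    x = x + 1 / x - x / 2           # x_{k+1} = x_k + 1/x_k - x_k/2
print(abs(x * x - 2))               # far below 10^{-100} after only 8 steps
```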
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/292891",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Find $\lim_{x\to 0^+} \ln x\cdot \ln(1-x)$ Find $$\lim_{x\to 0^+} \ln x\cdot \ln(1-x)$$
I've been unable to use the arithmetic rules for infinite limits, since $\ln x$ approaches $-\infty$ as $x\to 0^+$, while $\ln(1-x)$ approaches $0$ as $x\to 0^+$, and the arithmetic rules for the multiplication of infinite limits only apply when one of the limits is finite and nonzero.
Can anyone point me in the right direction for finding this limit? I've been unable to continue..
(Spoiler: I've checked WolframAlpha and the limit is equal to $0$, but this information hasn't helped me to proceed)
| Hint: Another approach, which is similar to @Mhenni's:
As $x \to 0$, if $\alpha(x) \to 0$, then $$\ln(1+\alpha(x))\sim\alpha(x)$$
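Numerical evidence for the limit (Python; `math.log1p` is used for $\ln(1-x)$ to avoid a cancellation of its own):

```python
import math

# |ln(x) * ln(1-x)| at a sequence of x -> 0+
vals = [abs(math.log(x) * math.log1p(-x)) for x in (1e-2, 1e-4, 1e-8, 1e-16)]
print(vals)     # strictly decreasing toward 0
```

This is only evidence, of course, not a proof.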
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/293025",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9",
"answer_count": 5,
"answer_id": 3
} |
Euler graph - a question about the proof I have a question about the proof of this theorem.
A graph is Eulerian $\iff$ it is connected and all its vertices have even degrees.
My question concerns "$\Leftarrow$"
Let $T=(v_0, e_1, v_1, ..., e_m, v_m)$ be a trip in Eulerian graph G=(V, E) where vertices can repeat but edges cannot. Let's consider T of the largest possible length. We prove that
(i) $v_0 = v_m$, and
(ii) $\left\{ e_i : i = 1, 2, . . . , m\right\} = E$ (but I think I understand everything about this part)
Ad (i). If $v_0 \neq v_m$ then the vertex $v_0$ is incident to an odd number
of edges of the tour $T$. But since the degree $deg_G(v_0)$ is even, there
exists an edge $e \in E(G)$ not contained in T. Hence we could extend
$T$ by this edge — a contradiction.
What I don't understand here is why $v_0$ is incident to an odd number of edges.
You have a typo: it’s when $v_0\ne v_m$ that you can conclude that $v_0$ is incident to an odd number of edges of $T$. It is incident to the edge $e_1$, and the other edges of $T$ incident to it come in pairs, one arriving and one leaving at each later visit of the tour, so the total count is odd. But then, as you say, since $\deg_G(v_0)$ is even there is an unused edge at $v_0$, so $T$ would not be maximal. This is impossible, and we must have $v_0=v_m$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/293106",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Linear algebra eigenvalues and diagonalizable matrix Let $A$ be an $n\times n$ matrix over $\mathbb{C}$.
First, I don't understand why $AA^*$ is diagonalizable over $\mathbb{C}$.
And why can't $1+i$ be an eigenvalue of $AA^*$?
I hope the question is clear enough and that I haven't made spelling mistakes or used the wrong expressions.
| Any Hermitian matrix is diagonalizable. All eigenvalues of a Hermitian matrix are real. These two facts (that you probably learnt) solve the question: Show your matrix is Hermitian and note that $1+i$ is not real.
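A sketch of why those two facts apply to $AA^*$:

$$(AA^*)^* = (A^*)^* A^* = AA^*, \qquad x^* (AA^*) x = (A^* x)^* (A^* x) = \|A^* x\|^2 \ge 0,$$

so $AA^*$ is Hermitian (hence diagonalizable) with real, in fact nonnegative, eigenvalues; in particular the non-real number $1+i$ cannot be an eigenvalue.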
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/293172",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Compute $\sum_{k=1}^{n} \frac 1 {k(k + 1)} $ More specifically, I'm supposed to compute $\displaystyle\sum_{k=1}^{n} \frac 1 {k(k + 1)} $ by using the equality $\frac 1 {k(k + 1)} = \frac 1 k - \frac 1 {k + 1}$ and the problem before which just says that, $\displaystyle\sum_{j=1}^{n} a_j - a_{j - 1} = a_n - a_0$.
I can add up the sum for any $n$ but I'm not sure what they mean by "compute".
Thanks!
| It means: find how much it sums to. In fact, you have already said everything you need to solve the problem. You only have to put 1 and 1 together to obtain 2.
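The telescoping computation can be verified with exact rational arithmetic (Python; $n=20$ is an arbitrary example):

```python
from fractions import Fraction

n = 20
direct = sum(Fraction(1, k * (k + 1)) for k in range(1, n + 1))
print(direct)                         # 20/21, i.e. 1 - 1/(n+1)
```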
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/293244",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
} |
Dirac's theorem question Give an example to show that the condition $\deg(v) \geq n/2$ in the statement of Dirac's theorem cannot be replaced by $\deg(v) \geq (n-1)/2$
The textbook gives the solution:
The complete bipartite graph $K_{n/2 - 1, n/2 + 1}$ if $n$ is even, and $K_{(n-1)/2, (n+1)/2}$ if $n$ is odd.
If anyone can explain this more thoroughly it would be greatly appreciated, thanks!
| The complete bipartite graph $K_{(n-1)/2, ~ (n+1)/2}$ has $(n-1)/2 + (n+1)/2 = n$ vertices.
Each vertex has degree greater than or equal to $(n-1)/2$ but this graph does not contain any Hamiltonian cycles, so the conclusion of Dirac's theorem does not hold.
You may wish to consider, for example, $K_{1,2}$ or $K_{2,3}$.
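A brute-force check for the smallest odd case $n=5$, i.e. $K_{2,3}$ (Python; the vertex labels are arbitrary):

```python
from itertools import permutations

A = {0, 1}                                     # K_{2,3}: parts {0,1} and {2,3,4}
n = 5
adjacent = lambda u, v: (u in A) != (v in A)   # edges run only between the parts

def has_hamiltonian_cycle():
    for rest in permutations(range(1, n)):     # fix vertex 0 as the starting point
        cycle = (0,) + rest
        if all(adjacent(cycle[i], cycle[(i + 1) % n]) for i in range(n)):
            return True
    return False

print(has_hamiltonian_cycle())   # False, although every degree is >= (n-1)/2 = 2
```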
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/293396",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |