Given $\frac {a\cdot y}{b\cdot x} = \frac CD$, find $y$. That's a pretty easy one... I have the following equality : $\dfrac {a\cdot y}{b\cdot x} = \dfrac CD$ and I want to leave $y$ alone so I move "$b\cdot x$" to the other side $$a\cdot y= \dfrac {(C\cdot b\cdot x)}{D}$$ and then "$a$" $$y=\dfrac {\dfrac{(C\cdot b\cdot x)}D} a.$$ Where is my mistake? I should be getting $y= \dfrac {(b\cdot C\cdot x)}{(a\cdot D)}$. I know that the mistake I am making is something very stupid, but can't work it out. Any help? Cheers!
No mistake was made. Observe that: $$ y=\dfrac{\left(\dfrac{Cbx}{D}\right)}{a}=\dfrac{Cbx}{D} \div a = \dfrac{Cbx}{D} \times \dfrac{1}{a}=\dfrac{Cbx}{Da}=\dfrac{bCx}{aD} $$ as desired.
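If you want to double-check this kind of manipulation mechanically, here is a minimal sketch using sympy (the symbol names are arbitrary, and sympy is assumed to be available):

    from sympy import symbols, simplify

    a, b, C, D, x, y = symbols('a b C D x y', positive=True)

    # the two candidate expressions for y from the question
    nested = (C*b*x/D) / a          # y = ((C*b*x)/D)/a
    flat   = (b*C*x) / (a*D)        # y = (b*C*x)/(a*D)

    # simplify() of their difference returns 0, so they are identical
    print(simplify(nested - flat))  # prints: 0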
{ "language": "en", "url": "https://math.stackexchange.com/questions/415232", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Representation problem: I don't understand the setting of the question! (From Serre's book) Ex 2.8 of Serre's book "Linear Representations of Finite Groups" says: Let $\rho:G\to V$ be a representation ($G$ finite and $V$ is complex, finite dimensional) and $V=W_1\oplus \dotsb \oplus W_1 \oplus W_2 \oplus \dotsb \oplus W_2 \oplus \dotsb \oplus W_k$ be its explicit decomposition into irreducible subrepresentations. We know $W_i$ is only determined up to isomorphism, but $V_i := W_i\oplus \dotsb \oplus W_i$ is uniquely determined. The question asks: Let $H_i$ be the vector space of linear mappings $h:W_i\to V_i$ such that $\rho_s h = h\rho_s$ for all $s\in G$. Show that $\dim H_i = \dim V_i / \dim W_i$ etc... But I don't even understand what $\rho_s h = h\rho_s$ means in this case: to make sense of $h\rho_s$, don't we need to first fix some decomposition $W_i\oplus \dotsb \oplus W_i$, and consider $\rho$ restricted to one of these $W_i$? Is that what the question wants? But then how do I make sense of $\rho_s h$?
I'm not sure I understand what is worrying you, but each $W_{i}$ is a $G$-submodule, so for any $w \in W_{i}, w\rho_{s} \in W_{i}.$ Then we can apply the map $h$, so $(w\rho_{s})h$ is an element of $V_{i}.$ On the other hand, $wh \in V_{i}.$ We know that $V_{i}$ is also a $G$-submodule, so $(wh)\rho_{s}$ is an element of $V_{i}.$ The question asks you to consider those maps $h$ such that these two resulting elements of $V_{i}$ coincide for each $w \in W_{i}$ and each $s \in G.$ (I am assuming that $\rho_{s}$ means the linear transformation associated to $s \in G$ by the representation $\rho$.) Clarification: Suppose that $V_{i}$ is a direct sum of $n_{i}$ isomorphic irreducible submodules. Strictly speaking, these could be labelled $W_{i_{1}},\ldots ,W_{i_{n_{i}}}.$ The intention is to look at maps from $W_{i_{1}}$ to $V_{i},$ but the resulting dimensions would be the same if any $W_{i_{j}}$ was used.
{ "language": "en", "url": "https://math.stackexchange.com/questions/415297", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Projections Open but not closed I often read that projections are Open but generally not closed. Unfortunately I do not have a counterexample for not closed available. Does anybody of you guys have?
The example which is mentioned by David Mitra shows that the projections are not closed: The projection $p: \mathbb R^2 \rightarrow \mathbb R$ of the plane $ \mathbb R^2 $ onto the $x$-axis is not closed. Indeed, the set $\color{red}{F}=\{(x,y)\in \mathbb R^2 : xy=1\}$ is closed in $\mathbb R^2 $ and yet its image $\color{blue}{p(F)}= \mathbb R \setminus \{0\}$ is not closed in $\mathbb R$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/415369", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Logic Circuits And Equations Issue - Multiply Binary Number By 3 I am trying to build a logic circuit that multiplies any 4-digit binary number by 3. I know that multiplying/dividing a number by 2 shifts its digits left/right, but what do I do to multiply by 3? (1) How do I extract the equations that produce the output digits? (2) Do I need to use a full adder? I would like to get some advice. Thanks! EDIT: I would like to get comments on my circuit for multiplying a 4-digit binary number by 3.
The easiest way I see is to note that $3n=2n+n$, so copy $n$, shift it one to the left, and add it back to $n$.
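A minimal Python sketch of this shift-and-add approach (variable names are mine), with the addition done by a ripple-carry chain of one-bit full adders, as the question anticipates:

    def full_adder(a, b, cin):
        """One-bit full adder built from XOR/AND/OR gates."""
        s = a ^ b ^ cin
        cout = (a & b) | (cin & (a ^ b))
        return s, cout

    def times3(bits):
        """Multiply a 4-bit number by 3 using 3n = (n << 1) + n.

        `bits` is [b3, b2, b1, b0]; the result needs 6 output bits.
        """
        b3, b2, b1, b0 = bits
        n       = [0, 0, b3, b2, b1, b0]   # n, zero-extended to 6 bits
        shifted = [0, b3, b2, b1, b0, 0]   # n shifted one to the left
        out, carry = [], 0
        for a, b in zip(reversed(n), reversed(shifted)):  # LSB first
            s, carry = full_adder(a, b, carry)
            out.append(s)
        return list(reversed(out))

    # check against ordinary arithmetic for all 4-bit inputs
    for v in range(16):
        bits = [(v >> i) & 1 for i in (3, 2, 1, 0)]
        assert int(''.join(map(str, times3(bits))), 2) == 3 * v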
{ "language": "en", "url": "https://math.stackexchange.com/questions/415440", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Recursive Sequence Tree Problem (Original Research in the Field of Comp. Sci) This question appears also in https://cstheory.stackexchange.com/questions/17953/recursive-sequence-tree-problem-original-research-in-the-field-of-comp-sci. I was told that cross-posting in this particular situation could be approved, since the question can be viewed from many angles. I am a researcher in the field of computer science. In my research I have the following problem, which I have been thinking about for quite a while now. I think the problem is best explained through an example, so first assume this kind of a tree structure:

            1, 2, 3, 4, 5, 6, 7, 8
             /                  \
      6, 8, 10, 12         -4, -4, -4, -4
       /        \             /       \
    16, 20    -4, -4      -8, -8     0, 0
    /   \      /   \       /   \     /  \
   36   -4   -8     0    -16    0   0    0

The root of the tree is always some sequence $s = (s_0, ..., s_{N-1})$ where $N = 2^p$ for some $p \in \mathbb{N}, p>2$. Please note that I am looking for a general solution to this, not just for sequences of the form $1, 2, ..., 2^p$. As you can see, the tree is defined in a recursive manner: the left node is given by $left(k)=root(k)+root(\frac{N}{2}+k), \quad 0 \leq k < \frac{N}{2}$ and the right node by $right(k)=root(k)-root(\frac{N}{2}+k), \quad 0 \leq k < \frac{N}{2}$ So, for example, (6 = 1+5, 8 = 2+6, 10 = 3+7, 12 = 4+8) and (-4 = 1-5, -4 = 2-6, -4 = 3-7, -4 = 4-8) would give the second level of the tree. I am only interested in the lowest level of the tree, i.e., the sequence (36, -4, -8, 0, -16, 0, 0, 0). If I compute the tree recursively, the computational complexity will be $O(N \log N)$. That is a little slow for the purpose of the algorithm. Is it possible to calculate the last level in linear time? If a linear-time algorithm is possible, and you find it, I will add you as an author to the paper the algorithm will appear in. The problem constitutes about 1/10 of the idea/content in the paper. If a linear-time algorithm is not possible, I will probably need to reconsider other parts of the paper, and leave this out entirely. In such a case I can still acknowledge your efforts in the acknowledgements. (Or, if the solution is a contribution from many people, I could credit the whole math SE community.)
This is more of a comment, but it's too big for the comment block. An interesting note on Kaya's matrix $\mathbf{M}$: I believe that it can be defined recursively for any value of $p$. (I should note here that this is my belief. I have yet to prove it...) That is, let $\mathbf{M}_p$ be the matrix for the value of $p$ (here, let's remove the bound on $p\gt2$). Let $\mathbf{M}_1 = \begin{pmatrix}1 & 1 \\ 1 & -1\end{pmatrix}$. Then $\mathbf{M}_n = \begin{pmatrix} \mathbf{M}_{n-1} & \mathbf{M}_{n-1} \\ \mathbf{M}_{n-1} & -\mathbf{M}_{n-1}\end{pmatrix}$. Ah Ha! Thanks to some searches based off of Suresh Venkat's answer, I found that this matrix is called the Walsh Matrix. Multiplying this matrix by a column vector of your first sequence provides a column vector of the bottom sequence. As a side note, this makes an almost fractal-like pattern when colored. :) The above is for $p=4$. EDIT: I'm almost sure I've seen a graphic similar to the one above before. If someone recognizes something similar, that would be great...
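Assuming numpy is available, here is a small sketch of mine that builds $\mathbf{M}_p$ by exactly this block recursion and checks it against the $O(N\log N)$ tree recursion from the question:

    import numpy as np

    def walsh(p):
        """Walsh matrix M_p via the block recursion described above."""
        M = np.array([[1, 1], [1, -1]])
        for _ in range(p - 1):
            M = np.block([[M, M], [M, -M]])
        return M

    def bottom_level(s):
        """Bottom row of the tree, computed by the recursive definition."""
        if len(s) == 1:
            return s
        half = len(s) // 2
        left  = [s[k] + s[half + k] for k in range(half)]
        right = [s[k] - s[half + k] for k in range(half)]
        return bottom_level(left) + bottom_level(right)

    s = list(range(1, 9))                      # 1, 2, ..., 8  (p = 3)
    print(bottom_level(s))                     # [36, -4, -8, 0, -16, 0, 0, 0]
    print((walsh(3) @ np.array(s)).tolist())   # the same vector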
{ "language": "en", "url": "https://math.stackexchange.com/questions/415515", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 4, "answer_id": 3 }
Is the set $\{(x, y) : 3x^2 − 2y^2 + 3y = 1\}$ connected? Is the set $\{(x, y)\in\mathbb{R}^2 : 3x^2 − 2y^2 + 3y = 1\}$ connected? I have checked that it is a hyperbola, hence disconnected. Am I right?
Your set $S$ contains the points $(0,{1\over2})$ and $(0,1)$, but does not contain any points on the line $y={3\over4}$. Therefore $$S\cap\{(x,y)|y<{3\over4}\},\quad S\cap\{(x,y)|y>{3\over4}\}$$ is a partition of $S$ into two nonempty disjoint subsets which are (relatively) open in $S$. This shows that $S$ is not connected.
{ "language": "en", "url": "https://math.stackexchange.com/questions/415589", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 0 }
Finding a counterexample; quotient maps and subspaces Let $X$ and $Y$ be two topological spaces and $p: X\to Y$ be a quotient map. If $A$ is a subspace of $X$, then the map $q:A\to p(A)$ obtained by restricting $p$ need not be a quotient map. Could you give me an example when $q$ is not a quotient map?
M.B. has given an example that shows that the restriction of a quotient map to an open subspace need not be a quotient map. Here is an example showing that the restriction to a saturated set also need not be one. Restricting to an open saturated set, however, always works. Let $X=[0,2],\ \ B=(0,1],\ \ A=\{0\}\cup(1,2]$ with the Euclidean topology. Then the identification $q:X\to X/A$ is a quotient map, but $q:B\to q(B)$ is not. Here is why: As Ronald Brown wrote, $q:B\to q(B)$ is a quotient map iff each subset of $B$ which is saturated and open in $B$ is the intersection of $B$ with a saturated open set in $X$. Now $U=\left(\frac12,1\right]$ is open and saturated in $B=(0,1]$, but if it were the intersection of $B$ and an open saturated $V$, then this $V$ would intersect $A$ and, since it is saturated, it would contain $0$, and then again by openness it would have to contain $[0,\epsilon)$ for some $\epsilon>0$. So the intersection of $V$ and $B$ can never be $U$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/415650", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 3, "answer_id": 0 }
$\vec{u}+\vec{v}-\vec{w},\;\vec{u}-\vec{v}+\vec{w},\;-\vec{u}+\vec{v}+\vec{w} $ are linearly independent if and only if $\vec{u},\vec{v},\vec{w}$ are I'm confused: how can I prove that $$\vec{u} + \vec{v} - \vec{w} , \qquad \vec{u} - \vec{v} + \vec{w},\qquad - \vec{u} + \vec{v} + \vec{w} $$ are linearly independent vectors if, and only if $\vec{u}$, $\vec{v}$ and $\vec{w}$ are linearly independent? PS: sorry for my poor English!
Call the three vectors $a, b, c$ then you find $2u=a+b, 2v=a+c, 2w=b+c$ If there is a linear dependence in $a, b, c$ the equations translate into a (nontrivial) dependence in $u, v, w$ and vice versa - the two sets of vectors are related by an invertible linear transformation.
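In coordinates, $(a, b, c)$ is obtained from $(u, v, w)$ by the matrix below; here is a quick sketch (assuming sympy) confirming that this matrix is invertible, which is the heart of the argument:

    from sympy import Matrix

    # rows express a = u+v-w, b = u-v+w, c = -u+v+w
    T = Matrix([[1, 1, -1],
                [1, -1, 1],
                [-1, 1, 1]])
    print(T.det())   # -4, nonzero, so T is invertible
    print(T.inv())   # rows (1/2, 1/2, 0), ... i.e. 2u = a+b and so on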
{ "language": "en", "url": "https://math.stackexchange.com/questions/415729", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
describe the domain of a function $f(x, y)$ Describe the domain of the function $f(x,y) = \ln(7 - x - y)$. I have the answer narrowed down but I am not sure if it would be $\{(x,y) \mid y ≤ -x + 7\}$ or $\{(x,y) \mid y < -x + 7\}$ please help me.
Nice job on the inequality part of things; the only confusion you seem to have is with respect to whether to include the case $y = 7 - x$. But note that $$f(x, 7 - x) = \ln \left[7 - x -(7 - x)\right] = \ln 0$$ but $\;\ln (0)\;$ is not defined, hence $y = 7 - x$ cannot be included in the domain! So we want the domain to be the one with the strict inequality: $$\{(x,y) \mid y < -x + 7\}$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/415800", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 1 }
Prove: $D_{8n} \not\cong D_{4n} \times Z_2$. Prove $D_{8n} \not\cong D_{4n} \times Z_2$. My trial: I tried to show that $D_{16}$ is not isomorphic to $D_8 \times Z_2$ by deriving a contradiction as follows: Suppose $D_{4n}$ is isomorphic to $D_{2n} \times Z_2$, so $D_{8}$ is isomorphic to $D_{4} \times Z_2$. If $D_{16}$ is isomorphic to $D_{8} \times Z_2 $, then $D_{16}$ is isomorphic to $D_{4} \times Z_2 \times Z_2 $, but there is no dihedral group of order $4$ so $D_4$ is not a group and so $D_{16}\not\cong D_8\times Z_2$, which gives us a contradiction. Hence, $D_{16}$ is not isomorphic to $D_{8} \times Z_2$. I found a counterexample for the statement, so it's not true in general, or at least it's not true in this case. Does this proof make sense or is it mathematically wrong?
$D_{8n}$ has an element of order $4n$, but the maximal order of an element in $D_{4n} \times \mathbb{Z}_2$ is $2n$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/415879", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Conditional Expected Value and distribution question The distribution of loss due to fire damage to a warehouse is: $$ \begin{array}{r|l} \text{Amount of Loss (X)} & \text{Probability}\\ \hline 0 & 0.900 \\ 500 & 0.060 \\ 1,000 & 0.030\\ 10,000 & 0.008 \\ 50,000 & 0.001\\ 100,000 & 0.001 \\ \end{array} $$ Given that a loss is greater than zero, calculate the expected amount of the loss. My approach is to apply the definition of expected value: $$E[X \mid X>0]=\sum\limits_{x_i}x_i \cdot p(x_i)=500 \cdot 0.060 + 1,000 \cdot 0.030 + \cdots + 100,000 \cdot 0.001=290$$ I am off by a factor of 10--The answer is 2,900. I am following the definition of expected value, does anyone know why I am off by a factor of $1/10$? Should I be doing this instead??? $E[X \mid X>0] = \sum\limits_{x_i} (x_i \mid x_i > 0) \cdot \cfrac{\Pr[x_i \cap x_i>0]}{\Pr(x_i > 0)}$ Thanks.
You completely missed the word "given". That means you want a conditional probability given the event cited. In other words, your second option is right. For example the conditional probability that $X=500$ given that $X>0$ is $$ \Pr(X=500\mid X>0) = \frac{\Pr(X=500\ \&\ X>0)}{\Pr(X>0)} = \frac{\Pr(X=500)}{\Pr(X>0)} = \frac{0.060}{0.1} = 0.6. $$
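For what it's worth, a short Python sketch of this computation (the distribution is hard-coded from the table in the question):

    dist = {0: 0.900, 500: 0.060, 1_000: 0.030,
            10_000: 0.008, 50_000: 0.001, 100_000: 0.001}

    p_pos = sum(p for x, p in dist.items() if x > 0)           # P(X > 0) = 0.1
    e_cond = sum(x * p for x, p in dist.items() if x > 0) / p_pos

    print(p_pos)    # 0.1
    print(e_cond)   # 2900.0 -- the unconditional sum 290 divided by P(X > 0)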
{ "language": "en", "url": "https://math.stackexchange.com/questions/416030", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Help with a trig-substitution integral I'm in the chapter of trigonometric substitution for integrating different functions. I'm having a bit of trouble even starting this homework question: $$\int \frac{(x^2+3x+4)\,dx} {\sqrt{x^2-4x}}$$
In order to make a proper substitution in integral calculus, the function that you are substituting must have a unique inverse function. However, there is such a case where the derivative is present and you can make what I refer to as a "virtual substitution". This is not exactly the case here; we have to do other algebraic manipulations. Trigonometric functions such as sine, cosine and their variants have infinitely many inverse functions; inverse trigonometric functions (i.e. arcsine, arccosine, etc.) have a unique inverse function, thus are fine. For example, if I made the substitution $y = \sin x$ (where $-1\le y\le 1$), then $ x = (-1)^n \cdot \arcsin y + n\pi$ ($n \in \mathbb Z$): this does not work without restricting $x$ to a suitable range. If anyone disagrees with my statement, please prove that the substitution is proper. Also, in my opinion, turning a rational/algebraic function into a transcendental function is ridiculous. There are very elementary ways to approach this integral; a good book to read on many of these methods is Calculus Made Easy by Silvanus P. Thompson.
{ "language": "en", "url": "https://math.stackexchange.com/questions/416088", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 5, "answer_id": 3 }
A recursive formula for $a_n$ = $\int_0^{\pi/2} \sin^{2n}(x)dx$, namely $a_n = \frac{2n-1}{2n} a_{n-1}$ Where does the $\frac{2n-1}{2n}$ come from? I've tried using integration by parts and got $\int \sin^{2n}(x)dx = \frac {\cos^3 x}{3} +\cos x +C$, which doesn't have any connection with $\frac{2n-1}{2n}$. Here's my derivation of $\int \sin^{2n}(x)dx = \frac {\cos^3 x}{3} +\cos x +C$: $\int\sin^{2n+1}x\,dx=\int(1-\cos^2x)\sin xdx=\int -(1-u^2)du=\int(u^2-1)du=\frac{u^3}{3}+u+C=\frac{\cos^3x}{3}+\cos x +C$ where $u=\cos x$;$du=-\sin x dx$ Credits to Xiang, Z. for the question.
Given the identity $$\int \sin^n x dx = - \frac{\sin^{n-1} x \cos x}{n}+\frac{n-1}{n}\int \sin^{n-2} xdx$$ plugging in $2n$ yields $$\int \sin^{2n} x dx = - \frac{\sin^{2n-1} x \cos x}{2n}+\frac{2n-1}{2n}\int \sin^{2n-2} xdx$$ Since $$\int_0^{\pi/2} \sin^{2n} x dx = - \frac{\sin^{2n-1} x \cos x}{2n}|_0^{\pi/2}+\frac{2n-1}{2n}\int_0^{\pi/2} \sin^{2n-2} xdx$$ and $\frac{\sin^{2n-1} x \cos x}{2n}|_0^{\pi/2}=0$ for $n \ge 1$, we get $$\int_0^{\pi/2} \sin^{2n} x dx = \frac{2n-1}{2n}\int_0^{\pi/2} \sin^{2n-2} xdx$$ (We only care about $n \ge 1$ because in the original question, $a_0=\frac{\pi}{2}$ is given and only integer values of n with $n \ge 1$ need to satisfy $a_n=\frac{2n-1}{2n}a_{n-1}$.)
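A numerical sanity check of the resulting recursion $a_n=\frac{2n-1}{2n}a_{n-1}$, assuming scipy is available:

    from math import pi, sin
    from scipy.integrate import quad

    def a(n):
        val, _ = quad(lambda x: sin(x) ** (2 * n), 0, pi / 2)
        return val

    for n in range(1, 6):
        lhs = a(n)
        rhs = (2 * n - 1) / (2 * n) * a(n - 1)
        print(n, lhs, rhs)   # the two columns agree to quadrature accuracy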
{ "language": "en", "url": "https://math.stackexchange.com/questions/416162", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
Has $X\times X$ the following property? Let $X$ be a topological space that satisfies the following condition at each point $x$: For every open set $U$ containing $x$, there exists an open set $V$ with compact boundary such that $x\in V\subseteq U$. Does $X\times X$ also have that property?
Attempted proof: For every open set $W$ of $X \times X$, there exist open sets $U_1$ and $U_2$ of $X$ such that $U_1 \times U_2 \subset W.$ Then there exist open sets $V_1$ and $V_2$ with compact boundaries such that $V_1 \subseteq U_1$ and $V_2 \subseteq U_2$ respectively. Next we examine whether $V_1 \times V_2$ has compact boundary. In fact, $Fr (V_1 \times V_2)= \overline{V_1 \times V_2} \setminus (V_1 \times V_2)=\overline{V_1} \times \overline{V_2} \setminus (V_1 \times V_2)=((\overline{V_1}\setminus V_1)\times \overline{V_2}) \cup (\overline{V_1 }\times (\overline{V_2}\setminus V_2))=(Fr(V_1) \times \overline{V_2}) \cup (\overline{V_1 }\times Fr(V_2))$. We cannot be sure that $(\overline{V_1}\setminus V_1)\times \overline{V_2}$ or $\overline{V_1 }\times (\overline{V_2}\setminus V_2)$ is compact unless $\overline{V_1}$ and $\overline{V_2}$ are compact.
{ "language": "en", "url": "https://math.stackexchange.com/questions/416270", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
If $f^2$ is differentiable, how pathological can $f$ be? Apologies for what's probably a dumb question from the perspective of someone who paid better attention in real analysis class. Let $I \subseteq \mathbb{R}$ be an interval and $f : I \to \mathbb{R}$ be a continuous function such that $f^2$ is differentiable. It follows by elementary calculus that $f$ is differentiable wherever it is nonzero. However, considering for instance $f(x) = |x|$ shows that $f$ is not necessarily differentiable at its zeroes. Can the situation with $f$ be any worse than a countable set of isolated singularities looking like the one that $f(x) = |x|$ has at the origin?
To expand on TZakrevskiy's answer, we can use one of the intermediate lemmas from the proof of Whitney extension theorem. Theorem (Existence of regularised distance) Let $E$ be an arbitrary closed set in $\mathbb{R}^d$. There exists a function $f$, continuous on $\mathbb{R}^d$, and smooth on $\mathbb{R}^d\setminus E$, a large constant $C$, and a family of large constants $B_\alpha$ ($C$ and $B_\alpha$ being independent of the choice of the function $f$) such that * *$C^{-1} f(x) \leq \mathrm{dist}(x,E)\leq Cf(x)$ *$|\partial^\alpha f(x)| \leq B_\alpha~ \mathrm{dist}(x,E)^{1 - |\alpha|}$ for any multi-index $\alpha$. (See, for example, Chapter VI of Stein's Singular Integrals and Differentiability Properties of Functions.) Property 1 ensures that if $x\in \partial E$ the boundary, $f$ is not differentiable at $x$. On the other hand, it also ensures that $f^2$ is differentiable on $E$. Property 2, in particular, guarantees that $f^2$ is differentiable away from $E$. So we obtain Corollary Let $E\subset \mathbb{R}^d$ be an arbitrary closed set with empty interior, then there exists a function $f$ such that $f^2$ is differentiable on $\mathbb{R}^d$, $f$ vanishes precisely on $E$, and $f$ is not differentiable on $E$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/416343", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "14", "answer_count": 2, "answer_id": 0 }
How to show that if $A$ is compact, $d(x,A)= d(x,a)$ for some $a \in A$? I really think I have no talents in topology. This is a part of a problem from Topology by Munkres: Show that if $A$ is compact, $d(x,A)= d(x,a)$ for some $a \in A$. I always have the feeling that it is easy to understand the problem emotionally but hard to express it in math language. I am a student in Economics and I DO LOVE MATH. I really want to learn math well, could anyone give me some advice. Thanks so much!
Let $f \colon A \to \mathbb{R}$ be defined by $a \mapsto d(x, a)$, where $\mathbb{R}$ is the topological space induced by the $<$ relation, the order topology. For all open intervals $(b, c)$ in $\mathbb{R}$, $f^{-1}((b, c)) = \{a \in A \mid d(x, a) > b\} \cap \{a \in A \mid d(x, a) < c\}$, an open set. Therefore $f$ is continuous. (Munkres) Theorem 27.4: Let $f \colon X \to Y$ be continuous, where $Y$ is an ordered set in the order topology. If $X$ is compact, then there exist points $c$ and $d$ in $X$ such that $f(c) \leq f(x) \leq f(d)$ for every $x \in X$. By Theorem 27.4, there exists $r \in A$ with $d(x, r) = \inf\{d(x, a) \mid a \in A\}$. Therefore $d(x, A) = d(x, a)$ for some $a \in A$, namely $a = r$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/416514", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
How $\frac{1}{n}\sum_{i=1}^n X_i^2 - \bar X^2 = \frac{\sum_{i=1}^n (X_i - \bar X)^2}{n}$ How $\frac{1}{n}\sum_{i=1}^n X_i^2 - \bar X^2 = \frac{\sum_{i=1}^n (X_i - \bar X)^2}{n}$ I have tried to do that by the following procedure: $\frac{1}{n}\sum_{i=1}^n X_i^2 - \bar X^2$ =$\frac{1}{n}(\sum_{i=1}^n X_i^2 - n\bar X^2)$ =$\frac{1}{n}(\sum_{i=1}^n X_i^2 - \sum_{i=1}^n\bar X^2)$ =$\frac{1}{n} \sum_{i=1}^n (X_i^2 - \bar X^2)$ Then I stumbled.
I think it is cleaner to expand the right-hand side. We have $$(X_i-\bar{X})^2=X_i^2-2X_i\bar{X}+(\bar{X})^2.$$ Sum over all $i$, noting that $2\sum X_i\bar{X}=2n\bar{X}\bar{X}=2n(\bar{X})^2$ and $\sum (\bar{X})^2=n(\bar{X})^2$. There is some cancellation. Now divide by $n$ and we get the left-hand side.
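A quick numerical illustration of the identity, assuming numpy (the random data is chosen arbitrarily):

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=1000)

    lhs = np.mean(X**2) - np.mean(X)**2
    rhs = np.mean((X - np.mean(X))**2)

    print(np.isclose(lhs, rhs))   # True: the two expressions agree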
{ "language": "en", "url": "https://math.stackexchange.com/questions/416581", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 0 }
lim sup and sup inequality Is it true that for a sequence of functions $f_n$ $$\limsup_{n \rightarrow \infty }f_n \leq \sup_{n} f_n$$ I tried to search for this result, but I couldn't find it, so maybe my understanding is wrong and this does not hold.
The inequality $$\limsup_{n\to\infty}a_n\leq\sup_{n\in\mathbb{N}}a_n$$ holds for any real numbers $a_n$, because the definition of $\limsup$ is $$\limsup_{n\to\infty}a_n:=\lim_{m\to\infty}\left(\sup_{n\geq m}a_n\right)$$ and for any $m\in\mathbb{N}$, we have $$\left(\sup_{n\geq m}a_n\right)\leq\sup_{n\in\mathbb{N}}a_n$$ (if the numbers $a_1,\ldots,a_{m-1}$ are less than or equal to the supremum of the others, both sides are equal, and if not, then the right side is larger). Therefore $$\limsup_{n\to\infty}f_n(x)\leq \sup_{n\in\mathbb{N}}f_n(x)$$ holds for any real number $x$, which is precisely what is meant by the statement $$\limsup_{n\to\infty}f_n\leq \sup_{n\in\mathbb{N}}f_n.$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/416636", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 0 }
recurrence relations for proportional division I am looking for a solution to the following recurrence relation: $$ D(x,1) = x $$ $$ D(x,n) = \min_{k=1,\ldots,n-1}{D\left({{x(k-a)} \over n},k\right)} \ \ \ \ [n>1] $$ Where $a$ is a constant, $0 \leq a \leq 1$. Also, assume $x \geq 0$. This formula can be interpreted as describing a process of dividing a value of x to n people: if there is a single person ($n=1$), then he gets all of the value x, and if there are more people, they divide x in a proportional way, with some loss ($a$) in the division process. For $a=0$ (no loss), it is easy to prove by induction that: $$ D(x,n) = {x \over n}$$ For $a=1$, it is also easy to see that: $$ D(x,n) = 0$$ But I have no idea how to solve for general $a$. Additionally, it is interesting that small modifications to the formula lead to entirely different results. For example, starting the iteration from $k=2$ instead of $k=1$: $$ D(x,2) = x/2 $$ $$ D(x,n) = \min_{k=2,\ldots,n-1}{D\left({{x(k-a)} \over n},k\right)} \ \ \ \ [n>2] $$ For $a=0$ we get the same solution, but for $a=1$ the solution is: $$ D(x,n) = {x \over n(n-1)}$$ Again I have no idea how to solve for general $a$. I created a spreadsheet with some examples, but could not find the pattern. Is there a systematic way (without guessing) to arrive at a solution?
For $x>0$ the answer is $$D(x,n)=\frac{x(1-a)(2-a)\ldots (n-1-a)}{n!}$$ Proof: induction on $n$. Indeed, $$D(x,n)=\min_{k=1,\ldots,n-1}\frac{x(k-a)(1-a)\ldots (k-1-a)}{k!n}$$ $$=\frac{x}{n}\min_{k=1,\ldots,n-1}\frac{(1-a)\ldots (k-a)}{k!}$$ and it remains to prove $$\frac{(1-a)\ldots (k-a)}{k!}>\frac{(1-a)\ldots (k+1-a)}{(k+1)!}$$ But this is evident. For $x<0$ the argument is similar.
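A sketch verifying this closed form against the original recurrence with exact rational arithmetic (the values of $a$ and $x$ below are arbitrary choices of mine):

    from fractions import Fraction
    from functools import lru_cache

    a = Fraction(1, 3)          # any 0 < a <= 1; chosen arbitrarily here

    @lru_cache(maxsize=None)
    def D(x, n):
        # the recurrence exactly as stated in the question
        if n == 1:
            return x
        return min(D(x * (k - a) / n, k) for k in range(1, n))

    def closed_form(x, n):
        prod = Fraction(1)
        for j in range(1, n):       # (1-a)(2-a)...(n-1-a)
            prod *= (j - a)
        fact = 1
        for j in range(2, n + 1):   # n!
            fact *= j
        return x * prod / fact

    for n in range(1, 8):
        assert D(Fraction(5), n) == closed_form(Fraction(5), n)
    print("closed form matches the recursion")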
{ "language": "en", "url": "https://math.stackexchange.com/questions/416734", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
How to find out X in a trinomial How can I find out what X equals in this? $$x^2 - 2x - 3 = 117$$ How would I get started? I'm truly stuck.
Hint: 1. $$ax^2 +bx +c=0,\quad D=b^2-4ac\ge0 \;\Longrightarrow\; \begin{cases} \color{green}{x_1=\frac{-b+\sqrt{D}}{2a}} \\ \color{red}{x_2=\frac{-b-\sqrt{D}}{2a}} \end{cases}$$ 2. Completing the square: $$x^2 +bx +c=0 \iff \left(x+\frac{b}{2}\right)^2=\frac{b^2}{4}-c\ge0,\quad \text{so}\quad x=\pm\sqrt{\frac{b^2}{4}-c}-\frac{b}{2}$$ 3. Find $x_1$ and $x_2$ by solving the following system: $$\begin{cases} x_1+x_2=\frac{-b}{a} \\ x_1x_2=\frac{c}{a} \end{cases}$$
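For what it's worth, here is hint 1 applied to the actual equation (a worked example added by me, not part of the original hint): rewriting $x^2-2x-3=117$ as $x^2-2x-120=0$ gives $a=1$, $b=-2$, $c=-120$, so $D=(-2)^2-4\cdot 1\cdot(-120)=484=22^2$ and $$x=\frac{2\pm 22}{2},\qquad\text{i.e.}\quad x_1=12,\quad x_2=-10.$$ Both values can be checked in the original equation.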
{ "language": "en", "url": "https://math.stackexchange.com/questions/416818", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 1 }
Limit of two variables, proposed solution check: $\lim_{(x,y)\to(0,0)}\frac{xy}{\sqrt{x^2+y^2}}$ Does this solution make sense? The limit in question: $$ \lim_{(x,y)\to(0,0)}\frac{xy}{\sqrt{x^2+y^2}} $$ My solution is this: Suppose $$ \sqrt{x^2+y^2} < \delta $$ therefore $$xy<\delta^2$$ So by the Squeeze Theorem the limit exists since $$\frac{xy}{\sqrt{x^2+y^2}}<\frac{\delta^2}{\delta}=\delta$$ Is this sufficient?
Here's a more direct solution. We know $|x|,|y|\le\sqrt{x^2+y^2}$, so if $\sqrt{x^2+y^2}<\delta$, then $$\left|\frac{xy}{\sqrt{x^2+y^2}}\right|\le\frac{\big(\sqrt{x^2+y^2}\big)^2}{\sqrt{x^2+y^2}}=\sqrt{x^2+y^2}<\delta.$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/416925", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 1 }
Telephone Number Checksum Problem I am having difficulty solving this problem. Could someone please help me? Thanks "The telephone numbers in town run from 00000 to 99999; a common error in dialling on a standard keypad is to punch in a digit horizontally adjacent to the intended one. So on a standard dialling keypad, 4 could erroneously be entered as 5 (but not as 1, 2, 7, or 8). No other kinds of errors are made. It has been decided that a sixth digit will be added to each phone number $abcde$. There are three different proposals for the choice of $X$: Code 1: $a + b + c + d +e + X$ $\equiv 0\pmod{2}$ Code 2: $6a + 5b + 4c + 3d + 2e + X$ $\equiv 0\pmod{6}$ Code 3: $6a + 5b + 4c + 3d + 2e + X$ $\equiv 0\pmod{10}$ Out of the three codes given, choose one that can detect a horizontal error and one that cannot detect a horizontal error. "
Code 1 can detect horizontal errors. Let $c$ be the intended digit and let $c'$ be the misdialed digit (order doesn't matter because they're added). Any two horizontally adjacent digits consist of one odd and one even number. That is, $c-c'\equiv 1 \pmod 2$. It follows that the sum of the new digits will be $$ a+b+c'+d+e+X\equiv\\ a+b+c+d+e+X+(c'-c)\equiv\\ 0+(c'-c)\equiv\\ 1 \pmod 2 $$ Thus, any wrong number will fail the test. Code 2 cannot detect horizontal errors. Consider, for example, the phone number $12323$ with check digit $X=2$: it is valid, since $6\cdot1+5\cdot2+4\cdot3+3\cdot2+2\cdot3+2=42\equiv 0 \pmod 6$. The misdialed number $223232$ will also pass the test ($6\cdot2+5\cdot2+4\cdot3+3\cdot2+2\cdot3+2=48\equiv 0 \pmod 6$) even though a horizontal error was made with the first digit. In general, this code fails to detect errors in the first digit, because changing $a$ by $\pm1$ changes the weighted sum by $\pm6\equiv 0 \pmod 6$.
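Not part of the original answer, but a brute-force Python check of all three proposed codes against every horizontal misdial (keypad adjacency as described in the problem):

    import random

    # horizontally adjacent digit pairs on a standard keypad
    ADJACENT = [(1, 2), (2, 3), (4, 5), (5, 6), (7, 8), (8, 9)]

    def check(code, a, b, c, d, e, X):
        if code == 1:
            return (a + b + c + d + e + X) % 2 == 0
        m = 6 if code == 2 else 10
        return (6*a + 5*b + 4*c + 3*d + 2*e + X) % m == 0

    def detects_all_horizontal_errors(code, trials=2000):
        random.seed(0)
        for _ in range(trials):
            num = [random.randrange(10) for _ in range(5)]
            # pick a check digit X that makes the number valid
            X = next(x for x in range(10) if check(code, *num, x))
            for i in range(5):                    # misdial position i
                for u, v in ADJACENT:
                    for wrong, right in ((u, v), (v, u)):
                        if num[i] != right:
                            continue
                        bad = num[:i] + [wrong] + num[i+1:]
                        if check(code, *bad, X):  # undetected error
                            return False
        return True

    for code in (1, 2, 3):
        print(code, detects_all_horizontal_errors(code))
    # expected: 1 True / 2 False / 3 True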
{ "language": "en", "url": "https://math.stackexchange.com/questions/416974", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
Probability to open a door using all the keys I have 1 door and 10 keys, only one of which opens the door. What is the probability that I have to try all the keys before the door opens? I discard each key after an unsuccessful attempt.
Solution 1: Hint: How many permutations are there for the order of the 10 keys? Hint: How many permutations are there, where the last key is the correct key? Solution 2: Hint: The probability that the last key is the correct key, is the same as the probability that the nth key is the correct key. Hence ...
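A small simulation sketch of mine backing up the second hint, namely that the correct key is equally likely to land in any position:

    import random

    random.seed(1)
    trials = 100_000
    hits = 0
    for _ in range(trials):
        keys = list(range(10))      # key 0 is the one that opens the door
        random.shuffle(keys)
        if keys[-1] == 0:           # the working key came up last
            hits += 1
    print(hits / trials)            # approximately 0.1 = 1/10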
{ "language": "en", "url": "https://math.stackexchange.com/questions/417055", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
Why is $\log z$ continuous for $x\leq 0$ rather than $x\geq 0$? Explain why Log (principal branch) is only continuous on $$\mathbb{C} \setminus\{x + 0i: x\leq0\}$$ is the question. However, I can't see why this is. Shouldn't it be $x \geq 0$ instead? Thanks.
The important thing here is that the exponential function is periodic with period $2\pi i$, meaning that $e^{z+2\pi i}=e^z$ for all $z\in\Bbb C$. If you imagine any stripe $S_a:=\{x+yi\mid y\in [a,a+2\pi)\}$ for any $a\in\Bbb R$, then we get that $z\mapsto e^z$ is actually one-to-one, restricted to $S_a$, (and maps onto $\Bbb C\setminus\{0\}$). A branch of $\log$ then, is basically the inverse of this $\exp|_{S_a}$ (more precisely, the inverse of $\exp|_{{\rm int\,}S_a}$). Observe that if a sequence $z_n\to x+ai\,$ and another $\,w_n\to x+(a+2\pi)i$ within the stripe $S_a$, then $\lim e^{z_n} = \lim e^{w_n}=:Z$. So, which value of the logarithm should be assigned to $Z$? Is it $x+ai$ on one edge of the stripe, or is it $x+(a+2\pi)i$ on the other edge? Since we want the logarithm to be continuous, we have to take the interior of the stripe (removing both edges, ${\rm int\,} S_a=\{x+yi\mid y\in (a,a+2\pi)\}$), else by the above, on the border, by continuity we would have $$x+ai=\log(e^{x+ai})=\log(e^{x+(a+2\pi)i})=x+(a+2\pi)i\,.$$ (For the principal branch, $a=-\pi$ is taken.)
{ "language": "en", "url": "https://math.stackexchange.com/questions/417256", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Where's the error in this $2=1$ fake proof? I'm reading Spivak's Calculus: 2 What's wrong with the following "proof"? Let $x=y$. Then $$x^2=xy\tag{1}$$ $$x^2-y^2=xy-y^2\tag{2}$$ $$(x+y)(x-y)=y(x-y)\tag{3}$$ $$x+y=y\tag{4}$$ $$2y=y\tag{5}$$ $$2=1\tag{6}$$ I guess the problem is in $(3)$, it seems he tried to divide both sides by $(x-y)$. The operation would be acceptable in an example such as: $$12x=12\tag{1}$$ $$\frac{12x}{12}=\frac{12}{12}\tag{2}$$ $$x=1\tag{3}$$ I'm lost at what should be causing this, my naive exploration in the nature of both examples came to the following: In the case of $12x=12$, we have an imbalance: We have $x$ in only one side then operations and dividing both sides by $12$ make sense. Also, In $\color{red}{12}\color{green}{x}=12$ we have a $\color{red}{coefficient}$ and a $\color{green}{variable}$, the nature of those seems to differ from the nature of $$\color{green}{(x+y)}\color{red}{(x-y)}=y(x-y)$$ It's like: It's okay to do the thing in $12x=12$, but for doing it on $(x+y)(x-y)=y(x-y)$ we need first to simplify $(x+y)(x-y)$ to $x^2-y^2$.
We have $x = y$, so $x - y = 0$. EDIT: I think I should say more. I'll go through each step: $x = y \tag{0}$ This is our premise that $x$ and $y$ are equal. $$x^2=xy\tag{1}$$ Note that $x^2 = xx = xy$ by $(0)$. So completely valid. $$x^2-y^2=xy-y^2\tag{2}$$ Now we're adding $-y^2$ to both sides of $(1$) so completely valid and we can see that it's another way of expressing $0 = 0$ as $x=y$, but nothing wrong here yet. $$(x+y)(x-y)=y(x-y)\tag{3}$$ $$x+y=y\tag{4}$$ Step $(3)$ is just basic factoring, and it is around here where things begin to go wrong. For $(4)$ to be a valid consequence of $(3)$, I would need $x - y \neq 0$ as otherwise, we would be dividing by $0$. However, this is in fact what we've done as $x=y$ implies that $x - y =0$. So $(3)-(4)$ is where things go wrong. $$2y=y\tag{5}$$ $$2=1\tag{6}$$ As a consequence of not being careful, we end up with gibberish. Hope this clarifies more!
{ "language": "en", "url": "https://math.stackexchange.com/questions/417324", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "14", "answer_count": 2, "answer_id": 0 }
Is this Expectation finite? How do I prove that $\int_{0}^{+\infty}\text{exp}(-x)\cdot\text{log}(1+\frac{1}{x})dx$ is finite? (if it is) I tried through simulation and it seems finite for large intervals. But I don't know how to prove it analytically because I don't know the closed form integral of this product. I am actually taking expectation over exponential distribution. Thank you
If we write $$\int_{0}^{+\infty}=\int_0^1+\int_1^{\infty}$$ then, writing $f(x)=e^{-x}\log\left(1+\frac{1}{x}\right)$ for the integrand, we see that $$\lim_{x\to 0^+} x^{1/2}f(x)=0<\infty\Longrightarrow\int_0^1f(x)dx~~\text{is convergent}$$ and $$\lim_{x\to +\infty} x^{2}f(x)=0<\infty\Longrightarrow\int_1^{\infty}f(x)dx~~\text{is convergent}$$
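A quick numerical cross-check, assuming scipy is available; the split at $1$ mirrors the argument above:

    from math import exp, log, inf
    from scipy.integrate import quad

    f = lambda x: exp(-x) * log(1 + 1/x)

    # integrable log singularity at 0, exponential decay at infinity
    i1, e1 = quad(f, 0, 1)
    i2, e2 = quad(f, 1, inf)
    print(i1 + i2, e1 + e2)   # a finite value, with a tiny error estimate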
{ "language": "en", "url": "https://math.stackexchange.com/questions/417410", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 1 }
If $\gcd(a,b)= 1$ and $a$ divides $bc$ then $a$ divides $c\ $ [Euclid's Lemma] Well, I thought this is obvious: since $\gcd(a,b)=1$, we have that $a\not\mid b$ AND $a\mid bc$. This implies that $a$ divides $c$. But apparently this is wrong. Help explain why this way is wrong please. The question says: given two relatively prime numbers $a$ and $b$ such that $a$ divides the product $bc$, prove that $a$ divides $c$. How is this not obvious? Explain to me how my "proof" is not correct.
It seems you have in mind a proof that uses prime factorizations, i.e. the prime factors of $\,a\,$ cannot occur in $\,b\:$ so they must occur in $\,c.\,$ You should write out this argument very carefully, so that it is clear how it depends crucially on the existence and uniqueness of prime factorizations, i.e. FTA = Fundamental Theorem of Arithmetic, i.e. $\Bbb Z$ is a UFD = Unique Factorization Domain. Besides the proof by FTA/UFD one can give more general proofs, e.g. using gcd laws (most notably distributive). Below is one, contrasted with its Bezout special case. $$\begin{eqnarray} a\mid ac,bc\, &\Rightarrow&\, a\mid (ac,\ \ \ \ bc)\stackrel{\color{#c00}{\rm DL}} = \ (a,\ b)\ c\ = c\quad\text{by the gcd Distributive Law }\color{#c00}{\rm (DL)} \\ a\mid ac,bc\, &\Rightarrow&\, a\mid uac\!+\!vbc = (\color{#0a0}{ua\!+\!vb})c\stackrel{\rm\color{#c00}{B\,I}} = c\quad\text{by Bezout's Identity }\color{#c00}{\rm (B\,I)} \end{eqnarray}$$ since, by Bezout, $\,\exists\,u,v\in\Bbb Z\,$ such that $\,\color{#0a0}{ua+vb} = (a,b)\,\ (= 1\,$ by hypothesis). Notice that the Bezout proof is a special case of the proof using the distributive law. Essentially it replaces the gcd in the prior proof by its linear (Bezout) representation, which has the effect of trading off the distributive law for gcds with the distributive law for integers. However, this comes at a cost of loss of generality. The former proof works more generally in domains having gcds that are not necessarily of such linear (Bezout) form, e.g. $\,\Bbb Q[x,y].\,$ The first proof also works more generally in gcd domains where prime factorizations needn't exist, e.g. the ring of all algebraic integers. See this answer for a few proofs of the fundamental gcd distributive law, and see this answer, which shows how the above gcd/Bezout proof extends analogously to ideals. Remark $ $ This form of Euclid's Lemma can fail if unique factorization fails, e.g. let $\,R\subset \Bbb Q[x]\,$ be those polynomials whose coefficient of $\,x\,$ is $\,0.\,$ So $\,x\not\in R.\,$ One easily checks $\,R\,$ is closed under all ring operations, so $\,R\,$ is a subring of $\,\Bbb Q[x].\,$ Here $\,(x^2)^3 = (x^3)^2\,$ is a nonunique factorization into irreducibles $\,x^2,x^3,\,$ which yields a failure of the above form of Euclid's Lemma, namely $\ (x^2,\color{#C00}{x^3}) = 1,\ \ {x^2}\mid \color{#c00}{x^3}\color{#0a0}{ x^3},\ $ but $\ x^2\nmid \color{#0a0}{x^3},\,$ by $\,x^3/x^2 = x\not\in R,\, $ and $\,x^2\mid x^6\,$ by $\,x^6/x^2 = x^4\in R.\ $ It should prove enlightening to examine why your argument for integers breaks down in this polynomial ring. This example shows that the proof for integers must employ some special property of integers that is not enjoyed by all domains. Here that property is unique factorization, or an equivalent, e.g. that if a prime divides a product then it divides some factor.
{ "language": "en", "url": "https://math.stackexchange.com/questions/417479", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 8, "answer_id": 0 }
how to determine whether the following sets are countable or not? How to determine whether or not these two sets are countable? The set A of all functions $f$ from $\mathbb{Z}_{+}$ to $\mathbb{Z}_{+}$. The set B of all functions $f$ from $\mathbb{Z}_{+}$ to $\mathbb{Z}_{+}$ that are eventually 1. The first one is easier to determine, since the set of functions from $\mathbb{N}$ to $\{0,1\}$ is already uncountable. But how do I determine the second one? Thanks in advance!
Let $B_n$ be the set of functions $f\colon \mathbb Z_+\to\mathbb Z_+$ with $f(x)\le n$ for all $x$ and $f(x)=1$ for $x>n$. Then $$B=\bigcup_{n\in\mathbb N}B_n$$ and $|B_n|=n^n$, so $B$ is a countable union of finite sets and therefore countable.
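A tiny enumeration confirming $|B_n| = n^n$ for small $n$ (each $f \in B_n$ is encoded here by its values on $\{1,\dots,n\}$, which determine it):

    from itertools import product

    def size_Bn(n):
        """Count functions f with f(x) <= n for all x and f(x) = 1 for x > n."""
        return sum(1 for _ in product(range(1, n + 1), repeat=n))

    for n in range(1, 6):
        print(n, size_Bn(n), n ** n)   # the two counts agree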
{ "language": "en", "url": "https://math.stackexchange.com/questions/417587", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
The derivative of a linear transformation Suppose $m > 1$. Let $f: \mathbb{R}^n \rightarrow \mathbb{R}^m$ be a smooth map. Consider $f + Ax$ for $A \in \mathrm{Mat}_{m\times n}, x \in \mathbb{R}^n$. Define $F: \mathbb{R}^n \times \mathrm{Mat}_{m\times n} \rightarrow \mathrm{Mat}_{m\times n}$ by $F(x,A) = df_x + A$. So what is $dF_x$? (A) Is it $dF(x,A) = d(df_x + A) = d f_x + A$? And therefore, $$dF(x,A) =\left( \begin{array}{ccc} \frac{\partial f_1}{\partial x_1} & \cdots & \frac{\partial f_m}{\partial x_1} \\ \vdots & & \vdots \\ \frac{\partial f_1}{\partial x_n} & \cdots & \frac{\partial f_m}{\partial x_n}\end{array} \right) + \left( \begin{array}{ccc} a_{11} & \cdots & a_{1m} \\ \vdots & & \vdots \\ a_{n1} & \cdots & a_{nm}\end{array} \right)$$ (B) Or should it be $dF(x,A) = d(df_x + A) = d^2 f_x + I$? And therefore, $$dF(x,A) =\left( \begin{array}{ccc} \frac{\partial^2 f_1}{\partial x_1^2} & \cdots & \frac{\partial^2 f_m}{\partial x_1^2} \\ \vdots & & \vdots \\ \frac{\partial^2 f_1}{\partial x_n^2} & \cdots & \frac{\partial^2 f_m}{\partial x_n^2}\end{array} \right) + \left( \begin{array}{ccc} 1 & \cdots & 0 \\ \vdots & & \vdots \\ 0 & \cdots & 1\end{array} \right)$$ Does this look right? Thank you very much.
Since, by your definition, $F$ is a matrix-valued function, $DF$ would be a rank-3 tensor with elements $$ (DF)_{i,j,k} = \frac{\partial^2 f_i(x)}{\partial x_j \partial x_k} $$ Some authors also define matrix-by-vector and matrix-by-matrix derivatives differently be considering $m \times n$ matricies as vectors in $\mathbb{R}^{mn}$ and "stacking" the resulting partial derivatives. See this paper for more details.
{ "language": "en", "url": "https://math.stackexchange.com/questions/417643", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Does $((x-1)! \bmod x) - (x-1) \equiv 0\implies \text{isPrime}(x)$ Does $$((x-1)! \bmod x) - (x-1) = 0$$ imply that $x$ is prime?
I want to add that this is the easy direction of Wilson's theorem. I have often seen Wilson's Theorem stated without the "if and only if" because this second direction is so easy to prove: Proof: If $x > 1, x \ne 4$ were not prime, then the product $(x-1)!$ would contain two distinct factors whose product is divisible by $x$, and thus we would have $(x - 1)! \equiv 0 \pmod x$, contradicting the statement. On the other hand if $x = 4$, then $(x - 1)! = 6 \equiv 2 \not \equiv -1 \pmod 4$ so in any case $x$ must be prime. At one point $4$ was my favorite integer for precisely the reason that it was the only exception to $(x - 1)! \equiv 0 \text{ or } -1 \pmod x$.
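A short computational check of the full "if and only if" statement, assuming sympy is available for the primality test:

    from math import factorial
    from sympy import isprime

    for x in range(2, 200):
        # the question's predicate: ((x-1)! mod x) - (x-1) == 0
        wilson = (factorial(x - 1) % x) == (x - 1)
        assert wilson == isprime(x)
    print("predicate matches primality for 2 <= x < 200")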
{ "language": "en", "url": "https://math.stackexchange.com/questions/417692", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Linear equation System What are the solutions of the following system: $ 14x_1 + 35x_2 - 7x_3 - 63x_4 = 0 $ $ -10x_1 - 25x_2 + 5x_3 + 45x_4 = 0 $ $ 26x_1 + 65x_2 - 13x_3 - 117x_4 = 0 $ There are 4 unknowns ($n = 4$) and 3 equations, $ Ax=0: $ $ \begin{pmatrix} 14 & 35 & -7 & -63 & 0 \\ -10 & -25 & 5 & 45 & 0 \\ 26 & 65 & -13 & -117& 0 \end{pmatrix} $ Is there really more than one solution because $\operatorname{rank}(A) = \operatorname{rank}(A') = 1 < n $? What irritates me: http://www.wolframalpha.com/input/?i=LinearSolve%5B%7B%7B14%2C35%2C-7%2C-63%7D%2C%7B-10%2C-25%2C5%2C45%7D%2C%7B26%2C65%2C-13%2C-117%7D%7D%2C%7B0%2C0%2C0%7D%5D Row reduced form: $ \begin{pmatrix} 1 & 2.5 & -0.5 & -4.5 & 0 \\ 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 \end{pmatrix} $ How do I find the solution set?
Yes, there are infinitely many real solutions. Since there are more unknowns than equations, this system is called underdetermined. Underdetermined systems can have either no solutions or infinitely many solutions. Trivially the zero vector solves the equation:$$Ax=0$$ This is sufficient to give that the system must have infinitely many solutions. To find these solutions, it suffices to find the row reduced echelon form of the augmented matrix for the system. As you have already noted, the augmented matrix is: \begin{pmatrix} 14 & 35 & -7 & -63 & 0 \\ -10 & -25 & 5 & 45 & 0 \\ 26 & 65 & -13 & -117& 0 \end{pmatrix} Row reducing this we obtain: \begin{pmatrix} 1 & \frac{5}{2} & \frac{-1}{2} & \frac{-9}{2} & 0 \\ 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0& 0 \end{pmatrix} This corresponds to the equation $$x_1+\frac{5}{2}x_2-\frac{1}{2}x_3-\frac{9}{2}x_4=0$$ You can then express this solution set with $$x = \begin{pmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \end{pmatrix}=\begin{pmatrix} x_1 \\ x_2 \\ x_3 \\ \frac{2}{9}x_1 +\frac{5}{9}x_2 -\frac{1}{9}x_3 \end{pmatrix}$$ As you have already noted, the rank(A) = 1, giving you $n-1=3$ free parameters. That is, you can supply any real values for $x_1,x_2,x_3$ and $x_4$ will be given as above. The choice to let $x_1,x_2,x_3$ be free parameters here is completely arbitrary. One could also freely choose $x_1,x_2,x_4$ and have $x_3$ be determined by solving for $x_3$ in the above equation. The wikipedia article on underdetermined systems has some more details and explanation than what I've provided: http://en.wikipedia.org/wiki/Underdetermined_system Row reduced echelon forms can be computed using Wolfram Alpha by entering "row reduce" followed by the matrix. If you're interested, Gauss-Jordan elimination is a pretty good method for calculating reduced-row echelon forms by hand.
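If you have sympy available, the rank, the row reduced form, and the solution set can all be checked in a few lines (output shown in comments, up to formatting):

    from sympy import Matrix, symbols, linsolve

    x1, x2, x3, x4 = symbols('x1 x2 x3 x4')
    A = Matrix([[14, 35, -7, -63],
                [-10, -25, 5, 45],
                [26, 65, -13, -117]])

    print(A.rank())      # 1
    print(A.rref()[0])   # first row [1, 5/2, -1/2, -9/2], zero rows below
    print(linsolve((A, Matrix([0, 0, 0])), x1, x2, x3, x4))
    # {(-5*x2/2 + x3/2 + 9*x4/2, x2, x3, x4)} -- three free parameters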
{ "language": "en", "url": "https://math.stackexchange.com/questions/417790", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Showing that $\{x\in\mathbb R^n: \|x\|=\pi\}\cup\{0\}$ is not connected I do have problems with connected sets so I got the following exercise: $X:=\{x\in\mathbb{R}^n: \|x\|=\pi\}\cup\{0\}\subset\mathbb{R}^n$. Why is $X$ not connected? My attempt: I have to find disjoint open sets $U,V\ne\emptyset$ such that $U\cup V=X$. Let $U=\{x\in\mathbb{R}^n: \|x\|=\pi\}$ and $V=\{0\}$. Then $V$ is relatively open since $$V=\{x\in\mathbb{R}^n:\|x\|<1\}\cap X$$ and $\{x\in\mathbb{R}^n:\|x\|<1\}$ is open in $\mathbb{R}^n$. Is this right? And why is $U$ open?
A more fancy approach: it will suffice to say that the singleton $\{0\}$ is a clopen (=closed and open) set. It is closed, because one point sets are always closed. It is open, because it is $X \cap \{ x \in \mathbb{R}^n \ : \ || x || < 1 \}$
{ "language": "en", "url": "https://math.stackexchange.com/questions/417851", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "13", "answer_count": 5, "answer_id": 1 }
find $\frac{ax+b}{x+c}$ in partial fractions $$y=\frac{ax+b}{x+c}$$ Find $a$, $b$ and $c$ given that there are asymptotes at $x=-1$ and $y=-2$ and the curve passes through $(3,0)$. I know that $c=1$ but I don't know how to find $a$ and $b$. I thought you expressed $y$ in partial fractions so that you end up with something like $y=Ax+B+\frac{C}{x+D}$
The line $x =-1$ is an asymptote, in that case we must have: $$\lim_{x\to -1}\frac{ax+b}{x+c}=\pm\infty$$ I've written $\pm$ because any of them is valid. This is just abuse of notation to make this a little faster. For this to happen we need a zero in the denominator at $x=-1$ since this is just a rational function. In that case we want $c=1$. The line $y = -2$ is an asymptote, so that we have: $$\lim_{x\to +\infty}\frac{ax+b}{x+c}=-2$$ In that case, our strategy is to calculate the limit and set it equal to this value. To compute the limit we do as follows: $$\lim_{x\to +\infty}\frac{ax+b}{x+c}=\lim_{x\to\infty}\frac{a+\frac{b}{x}}{1+\frac{c}{x}}=a$$ This implies that $a = -2$. Finally, since the curve passes through $(3,0)$ we have (I'll already plug in $a$ and $c$): $$\frac{-2\cdot3+b}{3+1}=0$$ This implies that $b=6$. So the function you want is $f : \Bbb R \setminus \{-1\} \to \Bbb R$ given by: $$f(x)=\frac{-2x+6}{x+1}$$
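A quick sympy sketch confirming that this $f$ has the required asymptotes and passes through $(3,0)$:

    from sympy import symbols, limit, oo

    x = symbols('x')
    f = (-2*x + 6) / (x + 1)

    print(f.subs(x, 3))          # 0  -- passes through (3, 0)
    print(limit(f, x, oo))       # -2 -- horizontal asymptote y = -2
    print(limit(f, x, -1, '+'))  # oo -- vertical asymptote x = -1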
{ "language": "en", "url": "https://math.stackexchange.com/questions/417925", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
In linear logic sequent calculus, can $\Gamma \vdash \Delta$ and $\Sigma \vdash \Pi$ be combined to get $\Gamma, \Sigma \vdash \Delta, \Pi$? Linear logic is a certain variant of sequent calculus that does not generally allow contraction and weakening. Sequent calculus does admit the cut rule: given contexts $\Gamma$, $\Sigma$, $\Delta$, and $\Pi$, and a proposition $A$, we can make the inference $$\frac{\Gamma \vdash A, \Delta \qquad \Sigma, A \vdash \Pi}{\Gamma, \Sigma \vdash \Delta, \Pi}.$$ So what I'm wondering is if it's also possible to derive a "cut rule with no $A$": $$\frac{\Gamma \vdash \Delta \qquad \Sigma \vdash \Pi}{\Gamma, \Sigma \vdash \Delta, \Pi}.$$
The rule you suggest is called the "mix" rule, and it is not derivable from the standard rules of linear logic. Actually, what I know is that it's not derivable in multiplicative linear logic; I can't imagine that additive or exponential rules would matter, but I don't actually know that they don't. Hyland and Ong constructed a game semantics for multiplicative linear logic that violates the mix rule.
{ "language": "en", "url": "https://math.stackexchange.com/questions/417991", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 1, "answer_id": 0 }
What are the main relationships between exclusive OR / logical biconditional? Let $\mathbb{B} = \{0,1\}$ denote the Boolean domain. Its well known that both exclusive OR and logical biconditional make $\mathbb{B}$ into an Abelian group (in the former case the identity is $0$, in the latter the identity is $1$). Furthermore, I was playing around and noticed that these two operations 'associate' over each other, in the sense that $(x \leftrightarrow y) \oplus z$ is equivalent to $x \leftrightarrow (y \oplus z).$ This is easily seen via the following chain of equivalences. * *$(x \leftrightarrow y) \oplus z$ *$(x \leftrightarrow y) \leftrightarrow \neg z$ *$x \leftrightarrow (y \leftrightarrow \neg z)$ *$x \leftrightarrow (y \oplus z)$ Anyway, my question is, what are the major connections between the operations of negation, biconditional, and exclusive OR? Furthermore, does $(\mathbb{B},\leftrightarrow,\oplus,\neg)$ form any familiar structure? I know that the binary operations don't distribute over each other, so its not a ring.
You probably already know this, but the immediate connection between them is $(x\oplus y) \leftrightarrow \neg(x \leftrightarrow y)$. Then the exclusive OR reduces trivially to the biconditional, and vice versa.
{ "language": "en", "url": "https://math.stackexchange.com/questions/418064", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 1 }
Calculating $\int_{\pi/2}^{\pi}\frac{x\sin{x}}{5-4\cos{x}}\,\mathrm dx$ Calculate the following integral:$$\int_{\pi/2}^{\pi}\frac{x\sin{x}}{5-4\cos{x}}\,\mathrm dx$$ I can calculate the integral on $[0,\pi]$,but I want to know how to do it on $[\frac{\pi}{2},\pi]$.
$$\begin{align}\int_{\pi/2}^\pi\frac{x\sin x}{5-4\cos x}dx&=\pi\left(\frac{\ln3}2-\frac{\ln2}4-\frac{\ln5}8\right)-\frac12\operatorname{Ti}_2\left(\frac12\right)\\&=\pi\left(\frac{\ln3}2-\frac{\ln2}4-\frac{\ln5}8\right)-\frac12\Im\,\chi_2\left(\frac{\sqrt{-1}}2\right),\end{align}$$ where $\operatorname{Ti}_2(z)$ is the inverse tangent integral and $\Im\,\chi_\nu(z)$ is the imaginary part of the Legendre chi function. Hint: Use the following Fourier series and integrate termwise: $$\frac{\sin x}{5-4\cos x}=\sum_{n=1}^\infty\frac{\sin n x}{2^{n+1}}.$$
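Assuming mpmath is available, the closed form can be checked numerically against the integral; here $\operatorname{Ti}_2(1/2)$ is computed as $\Im\,\operatorname{Li}_2(i/2)$:

    from mpmath import mp, quad, sin, cos, log, pi, polylog, mpc

    mp.dps = 30
    integral = quad(lambda x: x*sin(x)/(5 - 4*cos(x)), [pi/2, pi])

    Ti2_half = polylog(2, mpc(0, 0.5)).imag   # Ti_2(1/2) = Im Li_2(i/2)
    closed = pi*(log(3)/2 - log(2)/4 - log(5)/8) - Ti2_half/2

    print(integral)   # both print the same number
    print(closed)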
{ "language": "en", "url": "https://math.stackexchange.com/questions/418134", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "29", "answer_count": 2, "answer_id": 0 }
Why differentiating a function is similar to differentiating its log function? $f(6;p)=\binom{25}{6}p^6(1-p)^6,\quad 0\le p\le1$ I have three questions: $(1)$ To find the relative maxima in $p$, the process is to set the derivative of the function with respect to $p$ equal to $0$ and solve the resulting equation for $p$. For easy computation we take the log of the function, set the derivative of the log function with respect to $p$ equal to $0$, and solve the resulting equation for $p$. $\bullet$ "My question is: why does setting $\frac{d}{dp}f(6;p)=0$ give the same result as setting $\frac{d}{dp}\log f(6;p)=0$?" $(2)$ After step (1) I found that $p=\frac{6}{25}$ is the only root. But the book I am reading says that $p=0,1,\frac{6}{25}$ are the roots. $\bullet$ "How can $p=0,1$ be roots? The denominator $p(1-p)$ becomes $0$ when I cross-multiply." $(3)$ To check which value of $p$ maximizes $f(6;p)$, we take the second derivative of the function with respect to $p$ and set $p=0,1,\frac{6}{25}$ respectively. If the second derivative is less than $0$, then that value of $p$ maximizes the function. For $p=\frac{6}{25}$ the second derivative is $-125.56<0$, so $p=\frac{6}{25}$ gives a maximum. I also got that for $p=0,1$ the second derivative is $-\infty$, so $p=0,1$ also give a maximum. But the book says that $p=0,1$ give a minimum. $\bullet$ "How do $p=0,1$ give a minimum?"
There is a typo in the post, presumably you mean $p^6(1-p)^{19}$. Since $\log$ is an increasing function, finding the maximum of $p^6(1-p)^{19}$ and finding the maximum of its logarithm are equivalent problems. There is a bit of a question mark about the endpoints $p=0$ and $p=1$, but the maximum is clearly not there, so we are examining the logarithm in the interval $(0,1)$, where there is no issue. The logarithm is $6\log p +19\log(1-p)$, which has derivative $\frac{6}{p}-\frac{19}{1-p}$. This derivative is not $0$ at $p=0$ or $p=1$, in fact it is undefined at these places. It is true that the derivative of $p^6(1-p)^{19}$ is $0$ at $p=0$ and $p=1$, in addition to $p=\frac{6}{25}$, for the derivative is $(p^6)(-19)(1-p)^{18}+(1-p)^{19}(6)(p^5)$. This clearly vanishes at $p=0$ and $p=1$. In fact, by taking out the common factor $p^5(1-p)^{18}$, we can quickly find where the derivative is $0$ without doing the detour through logarithms. The second derivative is not really suitable for testing what happens at the endpoints. I assume these are probabilities, so $p$ is restricted to the interval $[0,1]$. Our original expression is obviously $0$ at the endpoints, and positive if $0\lt p\lt 1$, so it is clear that we have an absolute minimum at $0$ and at $1$. Actually, I would not use the second derivative test at all. Our derivative simplifies to $\frac{6-25p}{p(1-p)}$. The denominator is positive for all $p$ in the interval $(0,1)$. Looking at the numerator, we can see it is positive up to $p=\frac{6}{25}$ and then negative. So the logarithm is increasing in the interval $(0,\frac{6}{25}]$, and then decreasing, so reaches a maximum at $p=\frac{6}{25}$.
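A numerical sketch of this conclusion, using the corrected exponent $19$ and assuming scipy is available (the optimizer should land on $p=6/25=0.24$):

    from math import comb
    from scipy.optimize import minimize_scalar

    L = lambda p: comb(25, 6) * p**6 * (1 - p)**19
    res = minimize_scalar(lambda p: -L(p), bounds=(0, 1), method='bounded')
    print(res.x, 6/25)   # both approximately 0.24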
{ "language": "en", "url": "https://math.stackexchange.com/questions/418223", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Basis for $L^2(0,T;H)$ Given a basis $b_i$ for the separable Hilbert space $H$, what is the basis for $L^2(0,T;H)$? Could it be $\{a_jb_i : j, i \in \mathbb{N}\}$ where $a_j$ is the basis for $L^2(0,T)$?
You are not far from the correct result. The desired basis is the family of functions $\{f_{i,j}:i,j\in\mathbb{N}\}$ defined by $$ f_{i,j}(t)=a_j(t)b_i $$ The deep reason for this is the following. Since we have an identification $$ L_2((0,T), H)\cong L_2(0,T)\otimes_2 H $$ it is enough to study bases of Hilbert tensor products of Hilbert spaces. It is known that for Hilbert spaces $K$, $H$ with orthonormal bases $\{e_i:i\in I\}$ and $\{f_j:j\in J\}$ respectively the family $$ \{e_i\otimes_2 f_j:i\in I,\; j\in J\} $$ is an orthonormal basis of $K\otimes_2 H$
{ "language": "en", "url": "https://math.stackexchange.com/questions/418287", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Prove that a function f is continuous (1) $f:\mathbb{R} \rightarrow \mathbb{R}$ such that $$f(x) = \begin{cases} x \sin(\ln(|x|))& \text{if $x\neq0$} \\ 0 & \text{if $x=0$} \\ \end{cases}$$ Is $f$ continuous on $\mathbb{R}$? I want to use the fact that the composition of 2 continuous functions is continuous: $$f:I \rightarrow J ( \subset \mathbb{R})$$ $$g:J \rightarrow \mathbb{R}$$ $$g \circ f:I \rightarrow \mathbb{R}, x \mapsto g(f(x)) $$ 1) For $f=\ln(|x|)$: "By the inverse of the Fundamental Theorem of Calculus, $\ln x$ is defined as an integral, it is differentiable and its derivative is the integrand $\frac{1}{x}$. As every differentiable function is continuous, therefore $\ln x$ is continuous." So $f=\ln(|x|)$ is continuous on $I = \,]0, \infty)$. 2) For $g= \sin(x)$: if $\epsilon > 0, \exists \delta>0:$ $$x \in J \wedge |x - x_0| < \delta , x_0 \in \mathbb{R} $$ $$\Rightarrow |f(x) - f(x_0)| < \epsilon \Leftrightarrow |\sin(x)-\sin(x_0)|< \epsilon$$ $$|\sin(x)| \leq |x|$$ $$\Leftrightarrow |\sin(x)-\sin(x_0)|<|x - x_0| < \delta = \epsilon$$ So $g$ is continuous on $\mathbb{R}$. 3) Because $x$ is also continuous on $\mathbb{R}$ $ \Rightarrow x \sin(\ln(|x|))$ is continuous. Is my proof correct? Are there shorter ways to get this result?
For all $x\in (-\infty;0)\cup(0;+\infty)$ the function is continuous since it is a composition of continuous functions (I think it is necessary to show this in the task; as mentioned in comments, the real problem is $x=0$). By definition, a function is continuous at $x_0$ if $$\lim_{x\rightarrow x_0}f(x)=f(x_0)$$ In this case: $$\lim_{x\rightarrow 0}x\sin(\ln(|x|))=0$$ because $\lim_{x\rightarrow0}x=0$ and $|\sin(\ln(|x|))|\leq1$ (sine is bounded).
{ "language": "en", "url": "https://math.stackexchange.com/questions/418384", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 1 }
Counter example of upper semicontinuity of fiber dimension in classical algebraic geometry We know that if $f : X\to Y$ is a morphism between two irreducible affine varieties over an algebraically closed field $k$, then the function that assigns to each point of $X$ the dimension of the fiber it belongs to is upper semicontinuous on $X$. Does anyone know of a simple counterexample when $X$ is not irreducible anymore (but remains an algebraic set over $k$, i.e a finitely generated $k$-algebra) ? Edit : to avoid ambiguity about the definition of upper semicontinuity, it means here that for all $n\geq 0$, the set of $x\in X$ such as $\dim(f^{-1}(f(x) ) ) \geq n$ is closed in $X$. It seems to me it is not so obvious to find a counterexample, since in fact the set of $x\in X$ such that the dimension of the irreducible component of $f^{-1}(f(x) )$ in $X$ that contains $x$ is $\geq n$ is always closed even when $X$ is not irreducible.
I got my answer on MO, here : mathoverflow.net/questions/133567/…
{ "language": "en", "url": "https://math.stackexchange.com/questions/418469", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Distinguishable telephone poles being painted Each of n (distinguishable) telephone poles is painted red, white, blue or yellow. An odd number are painted blue and an even number yellow. In how many ways can this be done? Can someone give me a hint on how to approach this problem?
Consider the generating function given by $( R + W + B + Y )^n$ Without restriction, the sum of all coefficients would give the number of ways to paint the distinguishable posts in any of the 4 colors. We substitute $R=W=B=Y = 1$ to find this sum, and it is (unsurprisingly) $4^n$. Since there are no restrictions on $R$ and $W$, we may replace them with $1$, and still consider the coefficients. If we have the restriction that we are only interested in cases where the degree of $B$ is odd (ignore $Y$ for now), then since $ \frac{1^k - (-1)^k}{2} = \begin{cases} 1 & k \equiv 1 \pmod{2} \\ 0 & k \equiv 0 \pmod{2} \\ \end{cases}$ the sum of the coefficients when the degree of $B$ is odd, is just the sum of the coefficients of $ \frac{ (R + W + 1 + Y) ^n - ( R + W + (-1) + Y) ^n} { 2} $. Substituting in $R=W=Y=1$, we get that the number of ways is $$ \frac{ (1 + 1 + 1 + 1)^n - (1 + 1 + (-1) +1)^n}{2} = \frac {4^n - 2^n} {2}$$ Now, how do we add in the restriction that the degree of $Y$ is even? Observe that since $ \frac{1^k + (-1)^k}{2} = \begin{cases} 1 & k \equiv 0 \pmod{2} \\ 0 & k \equiv 1 \pmod{2} \\ \end{cases}$ the sum of the coefficients when the degree of $B$ is odd and Y is even, is just the sum of the coefficients of $$ \frac{ \frac{ (R + W + 1 + 1) ^n - ( R + W + (-1) + 1) ^n} { 2} + \frac{ (R + W + 1 + (-1)) ^n - ( R + W + (-1) + (-1)) ^n} { 2} } { 2} $$ Now substituting in $R=W=1$, we get $\frac{ 4^n - 2 ^n + 2^n - 0^n } { 4} = 4^{n-1}$
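A brute-force count for small $n$ (an illustrative Python sketch) confirms the answer $4^{n-1}$:

from itertools import product

def count(n):
    return sum(1 for poles in product("RWBY", repeat=n)
               if poles.count("B") % 2 == 1 and poles.count("Y") % 2 == 0)

for n in range(1, 8):
    assert count(n) == 4 ** (n - 1)
print("matches 4^(n-1) for n = 1..7")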
{ "language": "en", "url": "https://math.stackexchange.com/questions/418520", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
Are there any integer solutions to $\gcd(\sigma(n), \sigma(n^2)) = 1$ other than for prime $n$? A good day to everyone! Are there any integer solutions to $\gcd(\sigma(n), \sigma(n^2)) = 1$ other than for prime $n$ (where $\sigma = \sigma_1$ is the sum-of-divisors function)? Note that, if $n = p$ for prime $p$ then $$\sigma(p) = p + 1$$ $$\sigma(p^2) = p^2 + p + 1 = p(p + 1) + 1.$$ These two equations can be put together into one as $$\sigma(p^2) = p\sigma(p) + 1,$$ from which it follows that $$\sigma(p^2) + (-p)\cdot\sigma(p) = 1.$$ The last equation implies that $\gcd(\sigma(p), \sigma(p^2)) = 1$. I now attempt to show that prime powers also satisfy the number-theoretic equation in this question. If $n = q^k$ for $q$ prime, then $$\sigma(q^{2k}) = \frac{q^{2k + 1} - 1}{q - 1} = \frac{q^{2k + 1} - q^{k + 1}}{q - 1} + \frac{q^{k + 1} - 1}{q - 1} = \frac{q^{k + 1}(q^k - 1)}{q - 1} + \sigma(q^k).$$ Re-writing the last equation, we get $$(q - 1)\left(\sigma(q^{2k}) - \sigma(q^k)\right) = q^{k + 1}(q^k - 1).$$ Since $\gcd(q - 1, q) = 1$, then we have $$q^{k + 1} \mid \left(\sigma(q^{2k}) - \sigma(q^k)\right).$$ But we also have $$\sigma(q^{2k}) - \sigma(q^k) = q^{k + 1} + q^{k + 2} + \ldots + q^{2k} \equiv 0 \pmod {q^{k + 1}}.$$ Alas, this is where I get stuck. (I know of no method that can help me express $1$ as a linear combination of $\sigma(q^{2k})$ and $\sigma(q^k)$, from everything that I've written so far.) Anybody else here have any ideas? Thank you!
Let $n=pq$ where $p,q$ are two distinct primes. Then $$\sigma(n)=(p+1)(q+1)$$ $$\sigma(n^2)=(1+p+p^2)(1+q+q^2)$$ For example, $$\sigma(6)=12$$ $$\sigma(6^2)=7 \cdot 13 \,.$$ Note that as long as $1+p+p^2$ and $1+q+q^2$ are primes which do not divide $q+1$ and $p+1$ respectively, then for $n=pq$ we have $\gcd(\sigma(n), \sigma(n^2))=1$. Actually you only need $$\gcd(p+1,q^2+q+1)=\gcd(q+1,p^2+p+1)=1 \,,$$ since $\gcd(p+1,p^2+p+1)=\gcd(q+1,q^2+q+1)=1$ automatically.
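A quick Python check (an illustrative sketch with a naive divisor sum) shows several such $n$, and also that $n=14=2\cdot7$ fails precisely because $\gcd(2+1,\,7^2+7+1)=3$:

from math import gcd

def sigma(n):
    return sum(d for d in range(1, n + 1) if n % d == 0)

for n in [6, 10, 15, 21, 14]:
    print(n, gcd(sigma(n), sigma(n * n)))
# prints gcd 1 for 6, 10, 15, 21, but gcd 3 for 14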
{ "language": "en", "url": "https://math.stackexchange.com/questions/418576", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Smooth Pac-Man Curve? Idle curiosity and a basic understanding of the last example here led me to this polar curve: $$r(\theta) = \exp\left(10\frac{|2\theta|-1-||2\theta|-1|}{|2\theta|}\right)\qquad\theta\in(-\pi,\pi]$$ which Wolfram Alpha shows to look like this: The curve is not defined at $\theta=0$, but we can augment with $r(0)=0$. If we do, then despite appearances, the curve is smooth at $\theta=0$. It is also smooth at the back where two arcs meet. However it is not differentiable at the mouth corners. Again out of idle curiosity, can someone propose a polar equation that produces a smooth-everywhere Pac-Man on $(-\pi,\pi]$? No piece-wise definitions please, but absolute value is OK.
Not a very good one: $r(\theta) = e^{-\dfrac{1}{20 \theta^2}}$
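Plotting it takes only a few lines (an illustrative matplotlib sketch; an even number of sample points keeps $\theta=0$ off the grid, avoiding the division by zero there):

import numpy as np
import matplotlib.pyplot as plt

theta = np.linspace(-np.pi, np.pi, 2000)   # even count: theta = 0 is skipped
r = np.exp(-1.0 / (20.0 * theta**2))

ax = plt.subplot(projection="polar")
ax.plot(theta, r)
plt.show()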
{ "language": "en", "url": "https://math.stackexchange.com/questions/418641", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10", "answer_count": 3, "answer_id": 2 }
Need explanation of passage about Lebesgue/Bochner space From a book: Let $V$ be Banach and $g \in L^2(0,T;V')$. For every $v \in V$, it holds that $$\langle g(t), v \rangle_{V',V} = 0\tag{1}$$ for almost every $t \in [0,T]$. What I don't understand is the following: This is equivalent to $$\langle g(t), v(t) \rangle_{V',V} = 0\tag{2}$$ for all $v \in L^2(0,T;V)$ and for almost every $t \in [0,T]$. OK, so if $v \in L^2(0,T;V)$, $v(t) \in V$, so (2) follows from (1). How about the reverse? Also is my reasoning really right? I am worried about the "for almost every $t$ part of these statements, it confuses me whether I am thinking correctly. Edit for the bounty: as Tomas' comment below, is the null set where (1) and (2) are non-zero the same for every $v$? If not, is this a problem? More details would be appreciated.
For $(2)$ implies $(1)$, consider the function $v\in L^2(0,T;V)$ defined by $$v(t)=w, \forall\ t\in [0,T]$$ where $w\in V$ is fixed. Hence, you have by $(2)$ that $$\langle g(t),v(t)\rangle=\langle g(t),w\rangle=0$$ for almost $t$. By varying $w$, you can conclude.
{ "language": "en", "url": "https://math.stackexchange.com/questions/418693", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 0 }
In diagonalization, can the eigenvector matrix be any scalar multiple? One can decompose a diagonalizable matrix (A) into A = C D C^−1, where C contains the eigenvectors and D is a diagonal matrix with the eigenvalues in the diagonal positions. So here's where I get confused. If I start with a random eigenvalue matrix D > D [,1] [,2] [1,] 7 0 [2,] 0 5 and a random eigenvector matrix C > C [,1] [,2] [1,] 4 2 [2,] 1 3 There should be some matrix A with those eigenvectors and eigenvalues. When I compute A by multiplying C D C^-1 I get > A<-C%*%D%*%solve(C) > A [,1] [,2] [1,] 7.4 -1.6 [2,] 0.6 4.6 My understanding is that if I then work backwards and diagonalize A I should get the same matrices C and D I started with to get A in the first place. But for reasons that escape me, I don't. > eigen(A) $values [1] 7 5 $vectors [,1] [,2] [1,] 0.9701425 0.5547002 [2,] 0.2425356 0.8320503 the first eigenvector is a multiple of column 1 of C and the second eigenvector is a multiple of column 2 of C. For some reason it feels strange that I have this relationship: xC D (xC)^-1 = C D C^-1 where x is a scalar. Did I screw up somewhere or is this true?
Well: $$(xA)^{-1}=\frac{1}{x}A^{-1}\qquad x\neq 0$$ so the result is actually the same. Eigenvectors are vectors of an eigenspace, and therefore, if a vector is an eigenvector, then any multiple of it is also an eigenvector. When you build a matrix of eigenvectors, you have infinite of them to choose from, that program is calculating two of them that it wants, and they don't have to be the ones you chose at the beginning.
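The same experiment in numpy (an illustrative sketch mirroring the R session in the question): eig returns unit-norm eigenvectors, and rescaling each column shows it is proportional to the corresponding column of C.

import numpy as np

C = np.array([[4.0, 2.0], [1.0, 3.0]])
D = np.diag([7.0, 5.0])
A = C @ D @ np.linalg.inv(C)

vals, vecs = np.linalg.eig(A)
print(vals)                    # 7 and 5 (possibly in a different order)
print(vecs / vecs[0])          # columns proportional to [4, 1] and [2, 3]
print(np.allclose(vecs @ np.diag(vals) @ np.linalg.inv(vecs), A))   # True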
{ "language": "en", "url": "https://math.stackexchange.com/questions/418756", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
minimum value of a trigonometric equation is given. the problem is when the minimum value attains Suppose the minimum value of $\cos^{2}(\theta_{1}-\theta_{2})+\cos^{2}(\theta_{2}-\theta_{3})+\cos^{2}(\theta_{3}-\theta_{1})$ is $\frac{3}{4}$. Also the following equations are given $$\cos^{2}(\theta_{1})+\cos^{2}(\theta_{2})+\cos^{2}(\theta_{3})=\frac{3}{2}$$ $$\sin^{2}(\theta_{1})+\sin^{2}(\theta_{2})+\sin^{2}(\theta_{3})=\frac{3}{2}$$ and $$\cos\theta_{1}\sin\theta_{1}+\cos\theta_{2}\sin\theta_{2}+\cos\theta_{3}\sin\theta_{3}=0$$ To my intuition it can be proved that the minimum value of the 1st expression attains only if $(\theta_{1}-\theta_{2})=(\theta_{2}-\theta_{3})=(\theta_{3}-\theta_{1})=\frac{\pi}{3}$. Provide some hints and techniques how to solve this.
As $\sin2x=2\sin x\cos x,$ $$\cos\theta_1\sin\theta_1+\cos\theta_2\sin\theta_2+\cos\theta_3\sin\theta_3=0$$ $$\implies \sin2\theta_1+\sin2\theta_2+\sin2\theta_3=0$$ $$\implies \sin2\theta_1+\sin2\theta_2=-\sin2\theta_3\ \ \ \ (1)$$ As $\cos2x=\cos^2x-\sin^2x,$ $$\cos^2\theta_1+\cos^2\theta_2+\cos^2\theta_3=\frac32=\sin^2\theta_1+\sin^2\theta_2+\sin^2\theta_3$$ $$\implies \cos2\theta_1+\cos2\theta_2+\cos2\theta_3=0$$ $$\implies \cos2\theta_1+\cos2\theta_2=-\cos2\theta_3\ \ \ \ (2)$$ Squaring & adding $(1),(2)$ $$\sin^22\theta_1+\sin^22\theta_2+2\sin2\theta_1\sin2\theta_2+(\cos^22\theta_1+\cos^22\theta_2+2\cos2\theta_1\cos2\theta_2)=\sin^22\theta_3+\cos^22\theta_3$$ $$\implies 2+2\cos2(\theta_1-\theta_2)=1$$ Using $\cos2x=2\cos^2x-1,$ $$2+ 2\left(2\cos^2(\theta_1-\theta_2)-1\right)=1\implies \cos^2(\theta_1-\theta_2)=\frac14$$ Similarly, $\theta_2-\theta_3,\theta_3-\theta_1$
{ "language": "en", "url": "https://math.stackexchange.com/questions/418812", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Two propositions about weak* convergence and (weak) convergence Let $E$ be a normed space. We have the usual definitions: 1) $f, f_n \in E^*$, $n \in \mathbb{N}$, then $$f_n \xrightarrow{w^*} f :<=> \forall x \in E: f_n(x) \rightarrow f(x)$$ and in this case we say that $(f_n)$ is $weak^*$-$convergent$ to $f$. 2)$x, x_n \in E$, $n \in \mathbb{N}$, then $$x_n \xrightarrow{w} x :<=> \forall f \in E^*: f(x_n) \rightarrow f(x)$$ and in this case we say that $(x_n)$ is $weakly\ convergent$ to $x$. Now for the two propositions I want to prove or disprove the following statements. Let $f, f_n \in E^*$, $n \in \mathbb{N}$, such that $f_n \xrightarrow{w^*} f$ and let $x, x_n \in E$. Consider: [edit: Thanks for pointing out my mistake!] a) $x_n \rightarrow x$ => $f_n(x_n) \rightarrow f(x)$, b) $x_n \xrightarrow{w} x$ => $f_n(x_n) \rightarrow f(x)$. So far, I think that even b) is true which would imply that a) is also true. My reasoning is that, by assumption, we have $f_m(x_n) \rightarrow f_m(x)$ for every fixed $m \in \mathbb{N}$ as well as $f_m(x) \rightarrow f(x)$ for every $x \in E$. Hence we have $$\lim_{m\to \infty} \lim_{n\to \infty} f_m(x_n) = \lim_{m\to \infty} f_m(x) = f(x)$$ which should be the same as $$\lim_{n\to \infty} f_n(x_n) = f(x).$$ However, I'm a little bit suspicious because the setting seems to imply that a) ist true, but b) is not. Is my argument too sloppy?
a) $$ \|f_n(x_n) - f(x) \| \leq \|f_n(x_n) - f_n(x)\| + \| f_n(x) - f(x) \| \leq \|f_n\| \|x_n - x\| + \| f_n(x) - f(x) \|. $$ By the principle of uniform boundedness, $\sup_n \|f_n\|$ is finite, so you're done. b) Not true in general. Let $\{x_n\}$ be an orthonormal basis in some Hilbert space. Then $x_n \rightarrow 0$ weakly and, by reflexivity, $\langle x_n, \cdot \rangle \rightarrow 0$ in the weak-* topology. Evidently, $\langle x_n, x_n \rangle$ does not go to $0$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/418882", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
solvable subalgebra I want to show that a set $B\subset L$ is a maximal solvable subalgebra. With $L = \mathscr{o}(8,F)$, $F$ and algebraically closed field, and $\operatorname{char}(F)=0$ and $$B= \left\{\begin{pmatrix}p&q\\0&s \end{pmatrix}\mid p \textrm{ upper triangular, }q,s\in\mathscr{gl}(4,F)\textrm{ and }p^t=-s, q^t=-q\right\}.$$ $B$ is a subalgebra by construction. So my problem now is how I can show that it is maximal solvable. I tried the approach by '$L$ solvable $\Leftrightarrow$ $[L,L]$ nilpotent'. I am not sure how good this idea is, but here what I have: $[B,B]= span\{[x,y]\mid x,y\in B\}$. So $x$ and $y$ are matrices of the form above. That means, that for the '$p$-part' we see already that it is nilpotent. Since with each multiplication there is one more zero-diagonal. But how can I show now, that we will also get rid of the $q$ and $s$ part? I tried to look at $[x,y] = xy-yx = \begin{pmatrix}\bigtriangledown&*\\0&ss' \end{pmatrix}-\begin{pmatrix}\bigtriangledown&*\\ 0&s's \end{pmatrix}$ (with $\bigtriangledown$ any upper triangle matrix minus one diagonal, $s$ the part in the $x$ matrix and $s'$ in the $y$ matrix). (Sorry for the chaotic notation. I don't realy know how to write it easier..) Is this a beginning where I should go on with? Or does someone have a hint how to approach this problem? I don't really know, how to go on from here on. So I'd be very happy for any hint :) Best, Luca
As far as I can tell, one should use Theorem C on page 84; however, I have no idea how to compute the decomposition $B = H + \bigcup_{\alpha > 0} L_{\alpha}$ of this triangular $B$, where the $L_{\alpha}$ are the root spaces for the positive roots $\alpha$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/418954", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
single valued function in complex plane Let $$f(z)=\int_{1}^{z} \left(\frac{1}{w} + \frac{a}{w^3}\right)\cos(w)\,\mathrm{d}w$$ Find $a$ such that $f$ is a single valued function in the complex plane.
$$ \begin{align} \left(\frac1w+\frac{a}{w^3}\right)\cos(w) &=\left(\frac1w+\frac{a}{w^3}\right)\left(1-\frac12w^2+O\left(w^4\right)\right)\\ &=\frac{a}{w^3}+\color{#C00000}{\left(1-\frac12a\right)}\frac1w+O(w) \end{align} $$ The residue at $0$ is $1-\frac12a$, so setting $a=2$ gives a residue of $0$.
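sympy confirms the residue computation (an illustrative sketch):

import sympy as sp

w, a = sp.symbols("w a")
res = sp.residue((1 / w + a / w**3) * sp.cos(w), w, 0)
print(res)                          # 1 - a/2
print(sp.solve(sp.Eq(res, 0), a))   # [2]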
{ "language": "en", "url": "https://math.stackexchange.com/questions/419006", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Does every infinite group have a maximal subgroup? $G$ is an infinite group. * *Is it necessary true that there exists a subgroup $H$ of $G$ and $H$ is maximal ? *Is it possible that there exists such series $H_1 < H_2 < H_3 <\cdots <G $ with the property that for every $H_i$ there exists $H_{i+1}$ such that $H_i < H_{i+1}$?
Rotman p. 324 problem 10.25: The following conditions on an abelian group are equivalent: * *$G$ is divisible. *Every nonzero quotient of $G$ is infinite; and *$G$ has no maximal subgroups. It is easy to see that the above points are equivalent. If you need the details, I can add them here.
{ "language": "en", "url": "https://math.stackexchange.com/questions/419091", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "13", "answer_count": 6, "answer_id": 5 }
Finding $a$ s.t. the cone $\sqrt{x^{2}+y^{2}}=za$ divides the upper half of the unit ball into two parts with the same volume My friend gave me the following question: For which value of the parameter $a$ does the cone $\sqrt{x^{2}+y^{2}}=za$ divide $$\{(x,y,z):\, x^{2}+y^{2}+z^{2}\leq1,z\geq0\}$$ into two parts with the same volume? I am having some difficulties with the question. What I did: First, a ball with radius $R$ has the volume $\frac{4\pi}{3}R^{3}$, hence the volume of the upper half of the unit ball is $\frac{2\pi}{3}$, so each part should have volume $\frac{\pi}{3}$. Secondly: I found where the cone intersects the boundary of the ball: $$ \sqrt{x^{2}+y^{2}}=za $$ hence $$ z=\sqrt{\frac{x^{2}+y^{2}}{a^{2}}} $$ and $$ x^{2}+y^{2}+z^{2}=1 $$ setting $z$ we get $$ x^{2}(\frac{a^{2}+1}{a^{2}})+y^{2}(\frac{a^{2}+1}{a^{2}})=1 $$ hence $$ x^{2}+y^{2}=\frac{a^{2}}{a^{2}+1} $$ using the equation for the cone we get $$ z=\frac{1}{\sqrt{a^{2}+1}} $$ I then did (and I am unsure about the boundaries): $0<z<\frac{1}{\sqrt{a^{2}+1}},0<r<az$ and using the coordinates $x=r\cos(\theta),y=r\sin(\theta),z=z$ I got that the volume that the cone encloses in the ball is $$ \int_{0}^{\frac{1}{\sqrt{a^{2}+1}}}dz\int_{0}^{az}dr\int_{0}^{2\pi} r d\theta $$ which evaluates to $$ \frac{\pi a^{2}}{3(a^{2}+1)^{\frac{3}{2}}} $$ I then required this to be equal to $\frac{\pi}{3}$, half the volume of the upper half of the unit ball: $$ \frac{\pi a^{2}}{3(a^{2}+1)^{\frac{3}{2}}}=\frac{\pi}{3} $$ and got $$ a^{2}=(a^{2}+1)^{\frac{3}{2}} $$ which has no real solution, according to WA. Can someone please help me understand where I am wrong and how to solve this question?
Check your bounds again. I believe they should be $$\begin{align}0<&z<\frac{1}{\sqrt{a^2+1}}\\az<&r<\sqrt{1-z^2}\end{align}$$ Finishing the integral with these bounds should yield $a=\sqrt3$.
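With these bounds the check is easy to automate (an illustrative sympy sketch): the cross-section at height $z$ is the annulus $az<r<\sqrt{1-z^2}$, and substituting $a=\sqrt3$ gives $\pi/3$, i.e. exactly half of the hemisphere's volume $2\pi/3$.

import sympy as sp

z, a = sp.symbols("z a", positive=True)
V_out = sp.integrate(sp.pi * ((1 - z**2) - a**2 * z**2),
                     (z, 0, 1 / sp.sqrt(a**2 + 1)))
print(sp.simplify(V_out.subs(a, sp.sqrt(3))))   # pi/3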
{ "language": "en", "url": "https://math.stackexchange.com/questions/419163", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
truncate, ceiling, floor, and...? Truncation rounds negative numbers upwards, and positive numbers downwards. Floor rounds all numbers downwards, and ceiling rounds all numbers upwards. Is there a term/notation/whatever for the fourth operation, which rounds negative numbers downwards, and positive numbers upwards? That is, one which maximizes magnitude as truncation minimizes magnitude?
The fourth operation is called "round towards infinity" or "round away from zero". It can be implemented by $$y=\text{sign} (x)\text{ceil}(|x|)$$
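In Python this is a one-liner (an illustrative sketch of the formula above):

import math

def round_away(x):
    return math.copysign(math.ceil(abs(x)), x)

print(round_away(2.3), round_away(-2.3))   # 3.0 -3.0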
{ "language": "en", "url": "https://math.stackexchange.com/questions/419250", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Usefulness of the concept of equivalent representations Definition: Let $G$ be a group, $\rho : G\rightarrow GL(V)$ and $\rho' : G\rightarrow GL(V')$ be two representations of G. We say that $\rho$ and $\rho'$ are $equivalent$ (or isomorphic) if $\exists \space T:V\rightarrow V'$ linear isomorphism such that $T{\rho_g}={\rho'_g}T\space \forall g\epsilon G$. But I don't understand why this concept is useful. If two groups $H,H'$ are isomorphic, then we can translate any algebraic property of $H$ into $H'$ via the isomorphism. But I don't see how a property of $\rho$ can be translated to similar property of $\rho'$. Nor I have seen any example in any textbook where this concept is used. Can someone explain its importance?
This is just the concept of isomorphism applied to representations, i.e. $T$ is providing an isomorphism between $V$ and $V'$ which interchanges the two group representations. So all your intuitions for the role of isomorphisms in group theory should carry over. Why are you having trouble seeing that properties of $\rho$ can be carried over to $\rho'$? As with any isomorphic situations, any property of $\rho$ that doesn't make specific reference to the names of the elements in $V$ (e.g. being irreducible or not, the number of irreducible constituents, the associated character) will carry over to $\rho'$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/419378", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 3, "answer_id": 0 }
Proof for Sum of Sigma Function How to prove: $$\sum_{k=1}^n\sigma(k) = n^2 - \sum_{k=1}^nn\mod k$$ where $\sigma(k)$ is sum of divisors of k.
$$\sum_{k=1}^n \sigma(k) = \sum_{k=1}^n\sum_{d|k} d = \sum_{d=1}^n\sum_{k=1,d|k}^{n}d = \sum_{d=1}^n d\left\lfloor \frac {n} {d}\right\rfloor$$ Now just prove that $$d\left\lfloor \frac n d\right\rfloor = n-(n\mod d)$$
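Both sides are easy to compare numerically (an illustrative Python sketch with a naive divisor sum):

def sigma(n):
    return sum(d for d in range(1, n + 1) if n % d == 0)

for n in range(1, 200):
    lhs = sum(sigma(k) for k in range(1, n + 1))
    rhs = n * n - sum(n % k for k in range(1, n + 1))
    assert lhs == rhs
print("identity verified for n = 1..199")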
{ "language": "en", "url": "https://math.stackexchange.com/questions/419459", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Why does the result of the Lagrangian depend on the formulation of the constraint? Consider the following maximization problem: $$ \max f(x) = 3 x^3 - 3 x^2, s.t. g(x) = (3-x)^3 \ge 0 $$ Now it's obvious that the maximum is obtained at $ x =3 $. In this point, however, the constraint qualification $$ Dg(x) = -3 (3-x)^2 = 0$$ fails, so it's not a solution of the Lagrangian. Re-formulating the constraint as $$ h(x) = 3 - x \ge 0 $$ allows obtaining the result, as the constraint qualification holds: $ Dh(x) = -1 $ Now, I'm well aware that the Lagrangian method can fail under certain circumstances. However, isn't it kind of odd that re-formulation of the constraint yiels a solution? Does this mean that whenever we're stuck with a constraint qualificiation issue, we should try to 'fix' the constraint?
Yes, reformulating the constraint might change the validity of the CQ. This is quite natural, since the Lagrangian expresses optimality via derivatives of the constraints (and the objective). Changing the constraint may render its derivative useless ($0$ in your case).
{ "language": "en", "url": "https://math.stackexchange.com/questions/419601", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Endomorphisms of a semisimple module Is there an easy way to see the following: Given a $k$-algebra $A$, with $k$ a field, and a finite dimensional semisimple $A$-module $M$. Look at the natural map $A \to \mathrm{End}_k(M)$ that sends an $a \in A$ to $$ M \to M: m \mapsto a \cdot m. $$ Then the image of $A$ is a finite-dimensional semisimple algebra.
Here's one way to look at it: Notice that the kernel of the map is exactly $ann(M)$, which necessarily contains the Jacobson radical $J(A)$ of $A$. Since $A/J(A)$ and all of its quotients are semiprimitive, it follows that $A/ann(M)$ is semiprimitive. Now view $M$ as a faithful $A/ann(M)$ module. Since the simple submodules of $M$ remain the same during this passage, $M$ is also still semisimple over this new ring. You can see in this question why a ring with a faithful module of finite length must be Artinian. Now we have that $A/ann(M)$ is Artinian and semiprimitive: so it is semisimple. I see I overlooked a simple way of concluding that the image is finite dimensional. Of course our image ring is a subalgebra of $End(M_k)$ which is finite dimensional... so the subring is finite dimensional as well. The argument I gave before essentially proves a more general case: "If $M$ is a semisimple $R$ module of finite length, the image of the natural map is a semisimple ring."
{ "language": "en", "url": "https://math.stackexchange.com/questions/419741", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 0 }
Question about order of elements of a subgroup Given a subgroup $H \subset \mathbb{Z}^4$, defined as the 4-tuples $(a,b,c,d)$ that satisfy $$ 8| (a-c); a+2b+3c+4d=0$$ The question is: give all orders of the elements of $\mathbb{Z}^4 /H$. I don't have any idea how to start with this problem. Can anybody give some hints, strategies etc to solve this one? thanks
As Amir has said above, consider the homomorphism $\phi:\oplus_{i=1}^{4}\mathbb{Z}\rightarrow \mathbb{Z}_{8}\oplus\mathbb{Z}$. You can check that, as Amir pointed out, $\operatorname{im}(\phi)\cong (\oplus_{i=1}^{4}\mathbb{Z})/H$, so what is $\operatorname{im}(\phi)$? If you wanted, you could work this out, but you are only asked for the possible orders of elements. If $w=(x,y)\in\mathbb{Z}_{8}\oplus\mathbb{Z}$ and $y\ne 0$, then $w$ has infinite order. Clearly, taking $w=\phi(0,0,0,1)$, we get $w=(0,4)$ and so $\operatorname{im}(\phi)$ has elements of infinite order. Now consider elements of finite order. These must arise from elements of the form $w=(x,0)\in\mathbb{Z}_{8}\oplus\mathbb{Z}$. Now the possible orders of elements of $\mathbb{Z}_{8}$ are $1,2,4$ and $8$; can we find elements of such orders in $\operatorname{im}(\phi)$?

Order 1: As $\operatorname{im}(\phi)$ is a subgroup, it contains the identity - an element of order $1$.

Order 8: Consider $(a,b,c,d)\in \oplus_{i=1}^{4}\mathbb{Z}$. We want $\phi(a,b,c,d)$ to have order $8$, so $a-c$ must be coprime to $8$ (i.e. odd) and $a+2b+3c+4d=0$. Does such an element exist? Well as $a-c$ is odd, then $a$ and $c$ have different parity, so assume that $a$ is odd and $c$ is even. But then $a+2b+3c+4d$ will be odd and hence not equal to $0$. Thus no elements of order $8$ exist.

Order 4: As above, considering $\phi(a,b,c,d)$ we must have $a-c\equiv 2\text{ or }6\pmod{8}$ and $a+2b+3c+4d=0$. If $a=2$ and $c=0$, then $a-c\equiv 2\pmod{8}$. Moreover, if we then take $b=-1$ and $d=0$ we have $w=\phi(2,-1,0,0)=(2,0)$, and so $w$ is an element of order $4$.

Order 2: Take $2w=(4,0)=\phi(4,-2,0,0)$; then $2w$ has order $2$.

Thus the possible orders of elements of $(\oplus_{i=1}^{4}\mathbb{Z})/H$ are $1,2,4$ and $\infty$.

Edit: I misread the question initially, and so thought it was asking for the possible orders of $H$ and not $\mathbb{Z}^{4}/H$. Here is an answer for finding orders of elements of $H$. To start off with, what are the possible orders of elements of the direct product $\oplus_{i=1}^{4}\mathbb{Z}$? Clearly the only element of finite order is the identity element of order $1$ (since this is the case in $\mathbb{Z}$). Thus as $H$ is a subgroup of $\oplus_{i=1}^{4}\mathbb{Z}$, the only possible finite order of elements of $H$ is $1$. Can this order be attained? Well clearly, the only element of $\oplus_{i=1}^{4}\mathbb{Z}$ of order $1$ is the identity element, namely $(0,0,0,0)$, and it is easy to see that this will be contained in $H$. Now let's check if there are elements of infinite order in $H$. Suppose $x=(a,b,c,d)$ is such an element. Well for simplicity (i.e. to get rid of your first condition), just take $a=c=0$ so that $8\vert a-c=0$. Then the other condition gives $0=a+2b+3c+4d=2b+4d$, meaning that $b+2d=0$. Thus taking $d=1$ and $b=-2$ we have that $x=(0,-2,0,1)\in H$, and $x$ has infinite order. We conclude that the possible orders of elements of $H$ are $1$ and $\infty$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/419864", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 2, "answer_id": 1 }
number of errors in a binary code To transmit the eight possible stages of three binary symbols, a code appends three further symbols: if the message is ABC, the code transmits ABC(A+B)(A+C)(B+C). How many errors can it detect and correct? My first step should be to find the minimum distance, but is there a systematic way to find this? Or do I just try everything?
The code is linear and there are seven non-zero words, so the brute force method of listing all of them is quite efficient in this case. You can further reduce the workload by observing that the roles of $A,B,C$ are totally symmetric. What this means is that you only need to check the weights of the words where you assign one, two or all three of them to have value $1$ (and set the rest of them equal to zero). Leaving that to you, but it sure looks like the minimum distance will be three, and hence the code can correct a single error. In general the Yes/No question: Does a linear code with a given generator matrix have non-zero words of weight below a given threshold? has been proven to belong to one of those nasty NP-complexity classes (NP-hard? NP-complete? IDK), so there is no systematic efficient way of doing this.
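Listing the eight codewords explicitly takes only a few lines of Python (an illustrative sketch):

from itertools import product

words = [(A, B, C, (A + B) % 2, (A + C) % 2, (B + C) % 2)
         for A, B, C in product((0, 1), repeat=3)]
print(min(sum(w) for w in words if any(w)))
# prints 3: the code detects 2 errors and corrects 1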
{ "language": "en", "url": "https://math.stackexchange.com/questions/419914", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
How can $\lim_{n\to \infty} (3^n + 4^n)^{1/n} = 4$? If $\lim_{n\to \infty} (3^n + 4^n)^{1/n} = 4$, then $\lim_{n\to \infty} 3^n + 4^n=\lim_{n\to \infty}4^n$ which implies that $\lim_{n\to \infty} 3^n=0$ which is clearly not correct. I tried to do the limit myself, but I got $3$. The way I did is that at the step $\lim_{n\to \infty} 3^n + 4^n=\lim_{n\to \infty}L^n$ I divided everything by $4^n$, and got $\lim_{n\to \infty} (\frac{3}{4})^n + 1=\lim_{n\to \infty} (\frac{L}{4})^n$. Informally speaking, the $1$ on the LHS is going to be very insignificant as $n \rightarrow \infty$, so $L$ would have to be $3$. Could someone explain to me why am I wrong and how can the limit possibly be equal to $4$? Thanks!
$\infty-\infty$ is not well-defined: you cannot pass from $\lim_{n\to \infty} (3^n + 4^n)^{1/n} = 4$ to $\lim_{n\to \infty} (3^n + 4^n)=\lim_{n\to \infty}4^n$ and then subtract, since both of these limits are infinite. Note also that in your own computation $\left(\frac{3}{4}\right)^n + 1 \to 1$, so you need $\left(\frac{L}{4}\right)^n \to 1$, which forces $L=4$, not $3$.
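A quick numerical look (an illustrative Python sketch) also shows the limit is $4$, not $3$:

for n in (1, 5, 10, 50, 100):
    print(n, (3.0**n + 4.0**n) ** (1.0 / n))
# the printed values decrease towards 4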
{ "language": "en", "url": "https://math.stackexchange.com/questions/419999", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Prove that: $\sup_{z \in \overline{D}} |f(z)|=\sup_{z \in \Gamma} |f(z)|$ Suppose $D=\Delta^n(a,r)=\Delta(a_1,r_1)\times \ldots \times \Delta(a_n,r_n) \subset \mathbb{C}^n$ and $\Gamma =\partial \circ \Delta^n(a,r)=\left \{ z=(z_1, \ldots , z_n)\in \mathbb{C}^n:|z_j-a_j|=r_j,~ j=\overline{1,n} \right \}$. Let $f \in \mathcal{H}(D) \cap \mathcal{C}(\overline{D})$. Prove that: $\sup_{z \in \overline{D}} |f(z)|=\sup_{z \in \Gamma} |f(z)|$ I need your help. Thanks.
Since it's homework, a few hints: First assume that $f$ is holomorphic on a neighbourhood of $\bar D$. Then use the maximum principle in each variable separately to conclude that it's true in this case. Finally, approximate your $f$ by functions that are holomorphic on a neighbourhood of $\bar D$. More details I'll do it for $n=2$ under the assumption that $f$ extends to a neighbourhood of $\bar D$, and leave the general case up to you. Let $a \in \partial \Delta$ and define $\phi_a(\zeta) = (a,\zeta)$. Then $f(\phi_a(\zeta))$ is holomorphic on a neighbourhood of $\bar\Delta$, so by the maximum modulus principle in one variable applied to $f\circ \phi_a$, $\sup_{\zeta\in\bar\Delta} f(a, \zeta) = \sup_{\zeta\in\partial\Delta} f(a, \zeta)$. Do the same for $\psi_a(\zeta) = (\zeta,a)$ and take the sup over all $a \in \partial\Delta$ to obtain $$\sup_{z\in \partial(\Delta \times \Delta)} f(z) = \sup_{z\in\partial\Delta\times\partial\Delta} f(z)$$ since the union of the images of $\phi_a$ and $\psi_a$ cover the entire boundary of $\Delta\times\Delta$. To finish off, use the maximum modulus principle in $\mathbb{C}^n$ to see that $$\sup_{z\in \bar\Delta \times \bar\Delta} f(z) = \sup_{z\in\partial(\Delta\times\Delta)} f(z)$$ or, if you prefer, do the same argument again with $a \in \Delta$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/420046", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
What is the difference between isomorphism and homeomorphism? I have some questions understanding isomorphism. Wikipedia said that isomorphism is a bijective homeomorphism. I know that $F$ is a homeomorphism if $F$ and $F^{-1}$ are continuous. So my question is: If $F$ and its inverse are continuous, can it not be bijective? Any example? I think if $F$ and its inverse are both continuous, they ought to be bijective, is that right?
Isomorphism and homeomorphism appear both in topology and abstract algebra. Here, I think you mean isomorphism and homeomorphism in topology. In one word, they are the same in topology.
{ "language": "en", "url": "https://math.stackexchange.com/questions/420110", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
Is there a terminological difference between "sequence" and "complex" in homology theory Suppose you are given something like this: $\dots \longrightarrow A^n \longrightarrow A^{n+1} \longrightarrow \dots$ People tend to talk about "chain complexes" but about "short exact sequences". Is there any terminological difference or any convention with regards to using these words (EDIT: I mean "complex" and "sequence" in a homological context) that a mathematical writer should comply to?
To say that the sequence is a chain complex is a less imposing condition: it simply says that if you compose any two of the maps in the sequence, you get $0$. But to say the sequence is exact says more: it says this is precisely (or, if you rather, exactly) the only way you get something mapping to zero. The first statement says that the image of the arrow to the left of $A^n$ is contained in the kernel of the arrow to its right, the second statement says that the reverse inclusion also holds.
{ "language": "en", "url": "https://math.stackexchange.com/questions/420196", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
Need help solving the ODE $ f''(r) + \frac{1}{r}f'(r) = 0 $ I am currently taking complex analysis, and this homework question has a part that requires the solution to a differential equation. I took ODE over 4 years ago, so my skills are very rusty. The equation I derived is this: $$ f''(r) + \frac{1}{r}f'(r) = 0 $$ I made this substitution: $ v(r) = f'(r) $ to get: $$ v'(r) + \frac{1}{r}v(r) = 0 \implies \frac{dv}{dr} = - \frac{v}{r} \implies \frac{dv}{v} = - \frac{dr}{r} $$ Unsure of what to do now. Edit: I forgot to add that the answer is $ a \log r + b $
You now have: $$\frac{dv}{v} = -\frac{dr}{r}$$ We can integrate this: $$\ln(v) = -\ln(r) + C_1$$ $$v = e^{-\ln(r) + C_1} = \frac{e^{C_1}}{r} = \frac{a}{r},$$ where $a = e^{C_1}$ (note that $e^{-\ln(r)}$ gives $\frac{1}{r}$ multiplied by the constant, not $\frac{1}{r}$ plus a constant). Thus we have, since $v = f'(r) = \frac{df}{dr}$: $$\frac{df}{dr} = \frac{a}{r}$$ $$df = \frac{a}{r}\,dr$$ $$f = a\ln(r) + b$$ And that's our answer — exactly the $a \log r + b$ you expected!
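One can double-check this with sympy's ODE solver (an illustrative sketch, not part of the original answer):

import sympy as sp

r = sp.symbols("r", positive=True)
f = sp.Function("f")
print(sp.dsolve(sp.Eq(f(r).diff(r, 2) + f(r).diff(r) / r, 0), f(r)))
# prints f(r) = C1 + C2*log(r)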
{ "language": "en", "url": "https://math.stackexchange.com/questions/420256", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Is the outer boundary of a connected compact subset of $\mathbb{R}^2$ an image of $S^{1}$? A connected compact subset $C$ of $\mathbb{R}^2$ is contained in some closed ball $B$. Denote by $E$ the unique connected component of $\mathbb{R}^2-C$ which contains $\mathbb{R}^2-B$. The outer boundary $\partial C$ of $C$ is defined to be the boundary of $E$. Is $\partial C$ always a (continuous) image of $S^{1}$?
No; the outer boundary of any handlebody is an $n$-holed torus for some $n$. But all sorts of other things can happen; if $C$ is the Cantor set, then the outer boundary is the Cantor set itself. So it can be all sorts of things.
{ "language": "en", "url": "https://math.stackexchange.com/questions/420347", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 2, "answer_id": 1 }
Sets question, without Zorn's lemma Is there any proof to $|P(A)|=|P(B)| \Longrightarrow |A|=|B|$ that doesn't rely on Zorn's lemma (which means, without using the fact that $|A|\neq|B| \Longrightarrow |A|<|B|$ or $|A|>|B|$ ) ? Thank you!
Even with Zorn's Lemma, one cannot (under the usual assumption that ZF is consistent) prove that if two power sets have the same cardinality, then the sets have the same cardinality.
{ "language": "en", "url": "https://math.stackexchange.com/questions/420400", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 1 }
If $G$ is a finite group, $H$ is a subgroup of $G$, and $H\cong Z(G)$, can we conclude that $H=Z(G) $? If $G$ is a finite group, $H$ is a subgroup of $G$, and $H\cong Z(G)$, can we conclude that $H=Z(G) $? I learnt that if two subgroups are isomorphic then it's not true that they act in the same way when this action is related to outside groups or elements. For example, if $H \cong K $ and both $H,K$ are normal of $G$ then it's not true that $G/H \cong G/K$. Also, there is sufficient condition (but not necessary, as I read after that) for there to be an automorphism that, when restricted to $H$, induces an isomorphism between $H$ and $K$. Now, is this true in this case? or is the statement about center and its isomorphic subgroups always true?
No. Let me explain why. You should think of $H\cong K$ as a statement about the internal structure of the subgroups $H$ and $K$. The isomorphism shows only that elements of $H$ interact with each other in the same way that elements of $K$ interact with each other. It doesn't say how any of these elements behave with the rest of the group - that is, the external behavior of elements of $H$ with the rest of $G$ may not be the same as the external behavior of elements of $K$ with the rest of $G$. Being the center of a group is a statement about external behavior. If we state that every element of $Z(G)$ commutes with every other element of $G$, then just because $H$ and $Z(G)$ have the same internal structure doesn't mean that every element of $H$ must then commute with every other element in $G$. For an easy counterexample, consider $G=S_3\times \mathbb{Z}_2$. Let $\alpha$ be any of the transpositions $(12)$, $(13)$, or $(23)$ in $S_3$, and let $\beta$ be the generator of the $\mathbb{Z}_2$. Here, by virtue of the direct product, $\beta$ commutes with every other element of $G$, and it is not difficult to see that no other nontrivial element of $G$ holds this property, so $\langle\beta\rangle =Z(G)$. On the other hand, we know that all groups of order $2$ are isomorphic, so $\langle \alpha \rangle \cong \langle \beta \rangle$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/420480", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 2, "answer_id": 0 }
What's the difference between Complex infinity and undefined? Can somebody please expand upon the specific meaning of these two similar mathematical ideas and provide usage examples of each one? Thank you!
I don't think they are similar. "Undefined" is something that one predicates of expressions. It means they don't refer to any mathematical object. "Complex infinity", on the other hand, is itself a mathematical object. It's a point in the space $\mathbb C\cup\{\infty\}$, and there is such a thing as an open neighborhood of that point, as with any other point. One can say of a rational function, for example $(2x-3)/(x+5)$, that its value at $\infty$ is $2$ and its value at $-5$ is $\infty$. To say that $f(z)$ approaches $\infty$ as $z$ approaches $a$, means that for any $R>0$, there exists $\delta>0$ such that $|f(z)|>R$ whenever $0<|z-a|<\delta$. That's one of the major ways in which $\infty$ comes up. An expression like $\lim\limits_{z\to\infty}\cos z$ is undefined; the limit doesn't exist.
{ "language": "en", "url": "https://math.stackexchange.com/questions/420557", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11", "answer_count": 1, "answer_id": 0 }
kaleidoscopic effect on a triangle Let $\triangle ABC$ be a triangle and $r$, $s$, and $t$ straight lines. Considering the set of all mirror images of that triangle across $r$, $s$, and $t$, together with the successive images of images across the same straight lines, how can we check whether $\triangle DEF$ is an element of that set? Given: * *Points: $A(1,1)$, $B(3,1)$, $C(1,2)$, $D(n,m)$, $E(n+1,m)$, $F(n,m+2)$, where $n$ and $m$ are integers. *Straight lines: $r: x=0$, $s: y=0$ and $t:x+y=5$. No idea how to begin.
Create an image of your coordinate system, your three lines of reflections, and your original triangle. You can draw in mirror triangles pretty easily, and with a few of these, you will probably find a pattern. (I created my illustration using Cinderella, which comes with a tool to define transformation groups.) As you can see, there are locations $m,n$ for which the triangle $DEF$ is included. Note that $E$ is the image of $C$ and $F$ is the image of $B$, though. I'll leave it to you as an excercise to find a possible combination of reflections which maps $\triangle ABC$ to $\triangle DEF$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/420635", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Evaluating $\lim\limits_{x\to0}\frac{1-\cos(x)}{x}$ $$\lim_{x\to0}\frac{1-\cos(x)}{x}$$ Could someone help me with this trigonometric limit? I am trying to evaluate it without l'Hôpital's rule and derivation.
There is also a fairly direct method based on trig identities and the limit $ \ \lim_{\theta \rightarrow 0} \ \tan \frac{\theta}{2} \ = \ 0 , $ which I discuss in the first half of my post here. [In brief, $$ \lim_{\theta \rightarrow 0} \ \tan \frac{\theta}{2} \ = \ 0 \ = \ \lim_{\theta \rightarrow 0} \ \frac{1 - \cos \theta}{\sin \theta} \ = \ \lim_{\theta \rightarrow 0} \ \frac{1 - \cos \theta}{\theta} \ \cdot \ \frac{\theta}{\sin \theta} $$ $$= \ \lim_{\theta \rightarrow 0} \ \frac{1 - \cos \theta}{\theta} \ \cdot \ \frac{1}{\lim_{\theta \rightarrow 0} \ \frac{\sin \theta}{\theta} } \ = \ \lim_{\theta \rightarrow 0} \ \frac{1 - \cos \theta}{\theta} \ \cdot \ \frac{1}{1} \ \Rightarrow \ \lim_{\theta \rightarrow 0} \ \frac{1 - \cos \theta}{\theta} \ = \ 0 \ . $$ ]
{ "language": "en", "url": "https://math.stackexchange.com/questions/420698", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11", "answer_count": 7, "answer_id": 6 }
Convergence of increasing measurable functions in measure? Let $\{f_{n}\}$ be an increasing sequence of measurable functions such that $f_{n} \rightarrow f$ in measure. Show that $f_{n}\uparrow f$ almost everywhere. My attempt: The sequence $\{f_{n}\}$ converges to $f$ in measure if for any $\epsilon >0$, $$ m (\{x: |f_n(x) - f(x)| > \varepsilon\}) \rightarrow 0\text{ as } n \rightarrow\infty. $$ I think that taking a small enough epsilon concludes the result, since the $f_n$ are increasing. Can you help me solve this exercise? Thanks for your help
If $f_n$ converges to $f$ in measure, we have a subsequence $f_{n_k}$ converging to $f$ almost everywhere. As $f_n$ is increasing, at every point the sequence either converges or increases to $\infty$ (and every subsequence does the same). But as the subsequence $f_{n_k}$ converges to $f$ at almost every point, we have that $f_n$ converges to $f$ almost everywhere.
{ "language": "en", "url": "https://math.stackexchange.com/questions/420773", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
A directional derivative of $f(x,y)=x^2-3y^3$ at the point $(2,1)$ in some direction might be: A directional derivative of $$ f(x,y)=x^2-3y^3 $$ at the point $P(2,1)$ in some direction might be: a) $-9$ b) $-10$ c) $6$ d) $11$ e) $0$ I'd say it's $-9$ for sure, but what about $0$ (the direction would be $<0,0>$)? Are there any other proper answers?
$$D_{\vec u}f(\vec x)=\nabla f|_{(2,1)}\cdot\frac{\vec u}{\|\vec u\|}=4u_1-9u_2\;\;\wedge\;\;u_1^2+u_2^2=1$$ so you get a non-linear system of equations $$\begin{align*}\text{I}&\;\;4u_1-9u_2=t\\\text{II}&\;\;\;\;u_1^2+\;u_2^2=\,1\end{align*}$$ and from here we get $$u_1^2+\left(\frac{4u_1-t}{9}\right)^2=1\implies 97u_1^2-8tu_1+(t^2-81)=0$$ The above quadratic's discriminant is $$\Delta=-324(t^2-97)\ge 0\iff |t|\le\sqrt{97}$$ Thus, the system has a solution for any $\;t\in\Bbb R\;\;,\;\;|t|\le\sqrt{97}\;$, so all of $\,(a), (c), (e)\;$ fulfill this condition.
{ "language": "en", "url": "https://math.stackexchange.com/questions/420864", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Whether $L=\{(a^m,a^n)\}^*$ is regular or not? I am considering the automatic structure for Baumslag-Solitar semigroups, and I have a question. For any $m,n \in Z$, is the set $L=\{(a^m,a^n)\}^*$ regular or not? Here, a set being regular means it can be recognized by a finite automaton. Since regular sets are closed under the operations union, intersection, complement, concatenation and Kleene star (see here), I have tried to represent $L$ as the result of applying these operations to some regular sets. If it is successfully represented, $L$ will be regular. But I failed. So I want to ask for some clues for this question. Thanks for your assistance.
As Boris has pointed out in the comments, so far your question doesn't make complete sense, because you haven't specified a finite alphabet $\Sigma$ such that $L\subseteq \Sigma^*$. What I think you probably want is to consider $L$ as a language over $\Sigma = (A\cup\{\epsilon\})\times(A\cup\{\epsilon\})$, where we are simplifying strings over $\Sigma$ and writing them as pairs of strings over $A$, for example $(a,a)(a,\epsilon) = (a^2,a)$. If this is what you mean, then for a given $m,n\in \mathbb{N}$, $L = \{(a^m,a^n)\}^*$ is indeed a regular language over $\Sigma$, which is obvious, since $(a^m,a^n)$ is just (the shorthand form of) a particular word over $\Sigma$, and so $(a^m,a^n)^*$ is a regular expression.
{ "language": "en", "url": "https://math.stackexchange.com/questions/420985", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Symmetric positive definite with respect to an inner product Let $A$ be a SPD(symmetric positive-definite) real $n\times n$ matrix. let $B=LL^T$ be also SPD. Let $(,)_B$ be an inner product given by $(x,y)_B=x \cdot By=y^T Bx$. Then $(B^{-1}Ax,y)_B=(x,B^{-1}Ay)_B$ for all $x,y$. Show that $B^{-1}A$ is SPD with respect to $(,)_B$ I don't understand what it means SPD with respect to an inner product. What does it mean?
It means that in the definition of positive definite matrix you replace a standard euclidean scalar product by an inner product generated by another matrix. A matrix $C$ is positive definite with respect to inner product $(,)_B$ defined by a positive definite matrix $B$ iff $$\forall v\ne 0\, (Cv,v)_B = (BCv,v)>0$$ or, in our case $$\forall v\ne 0\, (B^{-1}Av,v)_B = (Av,v)>0,$$ which is given.
{ "language": "en", "url": "https://math.stackexchange.com/questions/421080", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Prove $n\mid \phi(2^n-1)$ If $2^p-1$ is a prime, (thus $p$ is a prime, too) then $p\mid 2^p-2=\phi(2^p-1).$ But I find $n\mid \phi(2^n-1)$ is always hold, no matter what $n$ is. Such as $4\mid \phi(2^4-1)=8.$ If we denote $a_n=\dfrac{\phi(2^n-1)}{n}$, then $a_n$ is A011260, but how to prove it is always integer? Thanks in advance!
I will use the Lifting the Exponent Lemma (LTE). Let $v_p(n)$ denote the highest exponent of $p$ in $n$. Take some odd prime divisor of $n$, and call it $p$. Let $j$ be the order of $2$ modulo $p$. So, $v_p(2^n-1)=v_p(2^j-1)+v_p(n/j)>v_p(n)$ as $j\le p-1$. All the rest is easy. Indeed, let's write $n=2^jm$ where $m$ is odd. Then $\varphi\left(2^{2^jm}-1\right)=\varphi(2^m-1)\varphi(2^m+1)\varphi(2^{2m}+1)\cdots\varphi\left(2^{2^{j-1}m}+1\right)$. At least $j$ of the factors on the right side are even, so $2^j$ divides $\varphi(2^n-1)$.
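The statement itself is easy to test for small $n$ with sympy (an illustrative sketch):

import sympy as sp

for n in range(1, 26):
    assert sp.totient(2**n - 1) % n == 0
print("n divides phi(2^n - 1) for n = 1..25")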
{ "language": "en", "url": "https://math.stackexchange.com/questions/421151", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "44", "answer_count": 5, "answer_id": 3 }
Closed subset of $\;C([0,1])$ $$\text{The set}\; A=\left\{x: \forall t\in[0,1] |x(t)|\leq \frac{t^2}{2}+1\right\}\;\;\text{is closed in}\;\, C\left([0,1]\right).$$ My proof: Let $\epsilon >0$ and let $(x_n)\subset A$, $x_n\rightarrow x_0$. Then there is some $N$ such that for $n\geq N$, $\sup_{[0,1]}|x_n(t)-x_0(t)|\leq \epsilon$, and we have $|x_0(t)|\leq|x_0(t)-x_N(t)|+|x_N(t)|\leq\epsilon+\frac{t^2}{2}+1$. Since $\epsilon>0$ was arbitrary, $|x_0(t)|\leq \frac{t^2}{2}+1$ for all $t\in[0,1]$. Is the proof correct? Thank you.
Yes, it is correct. An "alternative" proof would consist in writing $A$ as the intersection of the closed sets $F_t:=\{x:|x(t)|\leqslant \frac{t^2}2+1\}$. It also works if we replace $\frac{t^2}2+1$ by any continuous function.
{ "language": "en", "url": "https://math.stackexchange.com/questions/421216", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Efficient way to compute $\sum_{i=1}^n \varphi(i) $ Given some upper bound $n$, is there an efficient way to calculate the following: $$\sum_{i=1}^n \varphi(i) $$ I am aware that: $$\sum_{i=1}^n \varphi(i) = \frac 12 \left( 1+\sum_{i=1}^n \mu(i) \left \lfloor \frac ni \right\rfloor ^2 \right) $$ where $\varphi(x) $ is Euler's totient and $\mu(x) $ is the Möbius function. I'm wondering if there is a way to reduce the problem to simpler computations, because my upper bound $n$ will be very large, i.e. $n \approx 10^{11} $. Neither $\varphi(x) $ nor $\mu(x) $ is efficient to compute up to a large bound $n$. Naive algorithms will take an unacceptably long time to compute (days) or I would need a prohibitively expensive amount of RAM to store look-up tables.
If you can efficiently compute $\sum_{i=1}^n \varphi(i)$, then you can efficiently compute two consecutive values, so you can efficiently compute $\varphi(n)$. Since you already know that $\varphi(n)$ isn't efficiently computable, it follows that $\sum_{i=1}^n \varphi(i)$ isn't either.
{ "language": "en", "url": "https://math.stackexchange.com/questions/421274", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10", "answer_count": 2, "answer_id": 1 }
How many of these four digit numbers are odd/even? For the following question: How many four-digit numbers can you form with the digits $1,2,3,4,5,6$ and $7$ if no digit is repeated? So, I did $P(7,4) = 840$ which is correct but then the question asks, how many of those numbers are odd and how many of them are even. The answer for odd is $480$ and even is $360$ but I have no clue as to how they arrived to that answer. Can someone please explain the process? Thanks!
We first count the number of ways to produce an even number. The last digit can be any of $2$, $4$, or $6$. So the last digit can be chosen in $3$ ways. For each such choice, the first digit can be chosen in $6$ ways. So there are $(3)(6)$ ways to choose the last digit, and then the first. For each of these $(3)(6)$ ways, there are $5$ ways to choose the second digit. So there are $(3)(6)(5)$ ways to choose the last, then the first, then the second. Finally, for each of these $(3)(6)(5)$ ways, there are $4$ ways to choose the third digit, for a total of $(3)(6)(5)(4)$. Similar reasoning shows that there are $(4)(6)(5)(4)$ odd numbers. Or else we can subtract the number of evens from $840$ to get the number of odds. Another way: (that I like less). There are $3$ ways to choose the last digit. Once we have chosen this, there are $6$ digits left. We must choose a $3$-digit number, with all digits distinct and chosen from these $6$, to put in front of the chosen last digit. This can be done in $P(6,3)$ ways, for a total of $(3)P(6,3)$.
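Direct enumeration in Python (an illustrative sketch) reproduces all three counts:

from itertools import permutations

nums = list(permutations(range(1, 8), 4))
odd = sum(1 for p in nums if p[-1] % 2 == 1)
print(len(nums), odd, len(nums) - odd)   # 840 480 360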
{ "language": "en", "url": "https://math.stackexchange.com/questions/421406", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Change of variables in $k$-algebras Suppose $k$ is an algebraically closed field, and let $I$ be a proper ideal of $k[x_1, \dots, x_n]$. Does there exist an ideal $J \subseteq (x_1, \dots, x_n)$ such that $k[x_1, \dots, x_n]/I \cong k[x_1, \dots, x_n]/J$ as $k$-algebras?
This is very clear from the corresponding geometric statement: every non-empty algebraic set $\subseteq \mathbb{A}^n$ can be moved to one that contains the origin. Actually a translation taking some point of it to the origin suffices.
{ "language": "en", "url": "https://math.stackexchange.com/questions/421476", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 1 }
Evaluating $\lim_{x\to0}\frac{x+\sin(x)}{x^2-\sin(x)}$ I did the math, and my calculations were: $$\lim_{x\to0}\frac{x+\sin(x)}{x^2-\sin(x)}= \lim_{x\to0}\frac{x}{x^2-\sin(x)}+\frac{\sin(x)}{x^2-\sin(x)}$$ But I can not get out of it. I would like do it without using derivation or L'Hôpital's rule .
$$\lim_{x\to0}\;\frac{\left(x+\sin(x)\right)}{\left(x^2-\sin(x)\right)}\cdot \frac{1/x}{1/x} \quad =\quad \lim_{x\to 0}\;\frac{1+\frac{\sin(x)}x}{x-\frac{\sin(x)}x}$$ Now we evaluate, using the fact that $$\lim_{x \to 0} \frac {\sin(x)}x = 1$$ we can see that: $$\lim_{x\to 0}\;\frac{1+\frac{\sin(x)}x}{x-\frac{\sin(x)}x} = \frac{1 + 1}{0 - 1} = -2$$ $$ $$
{ "language": "en", "url": "https://math.stackexchange.com/questions/421553", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Why is $\frac{1}{\frac{1}{X}}=X$? Can someone help me understand in basic terms why $$\frac{1}{\frac{1}{X}} = X$$ And my book says that "to simplify the reciprocal of a fraction, invert the fraction"...I don't get this because isn't reciprocal by definition the invert of the fraction?
Well, I think this is a matter of what multiplication is and what division is. First, we write $$\frac{1}{x}=y$$ which means $$xy=1\qquad(\mbox{assuming $x\ne0$, since in ordinary arithmetic there is no infinity $\infty$})$$ Now, $$\frac{1}{\frac{1}{x}}=\frac{1}{y}$$ by using the first equation. Here, by checking the second equation ($xy=1$), it is obvious that $$\frac{1}{y}=x$$ thus $$\frac{1}{\frac{1}{x}}=x$$ Q.E.D.
{ "language": "en", "url": "https://math.stackexchange.com/questions/421620", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "27", "answer_count": 12, "answer_id": 2 }
Neighborhoods in the product topology. In the last two lines of the linked thread, it is said that $N \times M \subseteq A\times B$ is a neighborhood of $0$ if and only if $N, M$ are neighborhoods of $0$. Here $A, B$ are topological abelian groups. How to prove this result? I searched on the Internet, but was not able to find a proof.
If $A,B$ are topological spaces, then we define the product topology by means of the topologies of $A$ and $B$ in such a way that $U\times V$ is open whenever $U\subseteq A$ and $V\subseteq B$ are open. More precisely, we take the smallest topology on $A\times B$ such that these $U\times V$ are all open. This is achieved by declaring open precisely the sets of the form $$ \bigcup_{i\in I}U_i\times V_i$$ with $I$ an arbitrary index set and $U_i\subseteq A$, $V_i\subseteq B$ open. If $C\subseteq A$, $D\subseteq B$ are such that $C\times D$ is open and nonempty(!), then we can conclude that $C,D$ are open. Why? Assume $$ C\times D = \bigcup_{i\in I}U_i\times V_i$$ as above. Since $C\times D$ is nonempty, let $(c,d)\in C\times D$. From the way $\{c\}\times D\subseteq C\times D$ is covered by the $U_i\times V_i$, we observe that $$ D=\bigcup_{i\in I\atop c\in U_i}V_i $$ hence $D$ is open. Of course $d\in D$ so that $D$ is an open neighbourhood of $d$. Similarly, $C$ is an open neighbourhood of $c$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/421753", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
How to convert a permutation group into linear transformation matrices? Is there any example of applying an isomorphism to a permutation group? How can one convert a linear transformation matrix to a permutation group, and convert back to a linear transformation matrix?
It seems that for each natural number $n$ there is an isomorphic embedding $i$ of the permutation group $S_n$ into the group of all non-degenerate matrices of order $n$ over $\mathbb R$, defined as $i(\sigma)=A_\sigma=\|a_{ij}\|$ for each $\sigma\in S_n$, where $a_{ij}=1$ provided $\sigma(i)=j$, and $a_{ij}=0$ in the opposite case. On the other hand, if $G$ is any group (and, in particular, a matrix group of linear transformations) then any element $g\in G$ induces a permutation $j(g)$ of the set $G$ such that $j(g)h=gh$ for each $h\in G$. Then the map $j:G\to S(G)$ should be an isomorphic embedding of the group $G$ into the group $S(G)$ of all permutations of the set $G$. You can read more details about such an embedding here.
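Here is an illustrative numpy sketch of the first embedding (0-based indices; the helper names are mine, and depending on whether one composes permutations on the left or the right, the matrix may need to be transposed):

import numpy as np

def perm_to_matrix(sigma):                # sigma[i] is the image of i
    n = len(sigma)
    A = np.zeros((n, n), dtype=int)
    for i, j in enumerate(sigma):
        A[i, j] = 1                       # a_{ij} = 1 exactly when sigma(i) = j
    return A

def matrix_to_perm(A):
    return [int(np.argmax(row)) for row in A]

sigma = [2, 0, 1]                         # the 3-cycle 0 -> 2 -> 1 -> 0
A = perm_to_matrix(sigma)
print(A)
print(matrix_to_perm(A) == sigma)         # True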
{ "language": "en", "url": "https://math.stackexchange.com/questions/421846", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
$\mathbb Q$-basis of $\mathbb Q(\sqrt[3] 7, \sqrt[5] 3)$. Can someone explain how I can find such a basis? I computed that $[\mathbb Q(\sqrt[3] 7, \sqrt[5] 3):\mathbb Q] = 15$. Does this help?
Try first to find the degree of the extension over $\mathbb Q$. You know that $\mathbb Q(\sqrt[3]{7})$ and $\mathbb Q(\sqrt[5]{3})$ are subfields with minimal polynomials $x^3 - 7$ and $x^5-3$ which are both Eisenstein. Therefore those subfields have degree $3$ and $5$ respectively and thus $3$ and $5$ divide $[\mathbb Q(\sqrt[3]7,\sqrt[5]3) : \mathbb Q]$, which means $15$ divides it. But you know that the set $\{ \sqrt[3]7^i \sqrt[5]3^j \, | \, 0 \le i \le 2, 0 \le j \le 4 \}$ spans $\mathbb Q(\sqrt[3]7, \sqrt[5]3)$ as a $\mathbb Q$ vector space. I am letting you fill in the blanks. Hope that helps,
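If you want to double-check the degree by machine, here is a hedged SymPy sketch: it computes the minimal polynomial of $\alpha=\sqrt[3]7+\sqrt[5]3$. If its degree comes out as $15$, then $\mathbb Q(\alpha)$ has degree $15$ inside the compositum, which has degree at most $15$, so the two fields coincide and $\alpha$ is a primitive element. (This assumes SymPy's `minimal_polynomial`; the call may take a moment.)

```python
from sympy import Rational, minimal_polynomial, symbols

x = symbols('x')
alpha = 7**Rational(1, 3) + 3**Rational(1, 5)
p = minimal_polynomial(alpha, x, polys=True)
print(p.degree())   # 15, consistent with [Q(7^(1/3), 3^(1/5)) : Q] = 15
```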
{ "language": "en", "url": "https://math.stackexchange.com/questions/421941", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
If $x+{1\over x} = r $ then what is $x^3+{1\over x^3}$? If $$x+{1\over x} = r $$ then what is $$x^3+{1\over x^3}$$ Options: $(a) 3,$ $(b) 3r,$ $(c)r,$ $(d) 0$
$\displaystyle r^3=\left(x+\frac{1}{x}\right)^3=x^3+\frac{1}{x^3}+3(x)\frac{1}{x}\left(x+\frac{1}{x}\right)=x^3+\frac{1}{x^3}+3r$ $\displaystyle \Rightarrow r^3-3r=x^3+\frac{1}{x^3}$ Your options are all incorrect. For a quick counterexample, take $x=1/2$: then $r=\frac{5}{2}$ and $x^3+\frac{1}{x^3}=\frac{65}{8}$, but none of the options evaluates to $65/8$.
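A quick symbolic check of the identity $x^3+1/x^3=r^3-3r$ (a minimal SymPy sketch):

```python
from sympy import Rational, simplify, symbols

x = symbols('x', nonzero=True)
r = x + 1/x
assert simplify(x**3 + 1/x**3 - (r**3 - 3*r)) == 0   # identity for all x != 0

# The counterexample above: x = 1/2 gives r = 5/2 and x^3 + 1/x^3 = 65/8.
print((x**3 + 1/x**3).subs(x, Rational(1, 2)))        # 65/8
```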
{ "language": "en", "url": "https://math.stackexchange.com/questions/421995", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 3 }
eigenvalue and independence Let $B$ be a $5\times 5$ real matrix and assume: * *$B$ has eigenvalues 2 and 3 with corresponding eigenvectors $p_1$ and $p_3$, respectively. *$B$ has generalized eigenvectors $p_2,p_4$ and $p_5$ satisfying $Bp_2=p_1+2p_2,Bp_4=p_3+3p_4,Bp_5=p_4+3p_5$. Prove that $\{p_1,p_2,p_3,p_4,p_5\}$ is linearly independent set.
One can do a direct proof. Here is the beginning: Suppose $0=\lambda_1p_1+\dots+\lambda_5p_5$. Then $$\begin{align*} 0&=(B-3I)^3(\lambda_1p_1+\dots+\lambda_5p_5)\\ &=(B-3I)^3(\lambda_1p_1+\lambda_2p_2)\\ &=(B-2I-I)^3(\lambda_1p_1+\lambda_2p_2)\text{ now one can apply the binomial theorem, as $B-2I$ and $-I$ commute}\\ &= (B-2I)^3(\lambda_1p_1+\lambda_2p_2)+3(B-2I)^2(-I)(\lambda_1p_1+\lambda_2p_2)+3(B-2I)(-I)^2(\lambda_1p_1+\lambda_2p_2)+(-I)^3(\lambda_1p_1+\lambda_2p_2)\\ &=(3\lambda_2-\lambda_1)p_1-\lambda_2p_2. \end{align*}$$ (The second equality holds because $(B-3I)^3$ annihilates $p_3$, $p_4$, and $p_5$.) Now applying $(B-2I)$ to the result yields $-\lambda_2p_1=0$, hence $\lambda_2=0$ since $p_1\neq0$, and then $\lambda_1=0$. Similarly, applying $(B-2I)^2$ to the original relation kills the $p_1,p_2$-terms, and applications of $(B-3I)^k$ for appropriate $k$ then yield the vanishing of the other coefficients. I'll leave the details to the reader.
{ "language": "en", "url": "https://math.stackexchange.com/questions/422070", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Fixed points for $k_{t+1}=\sqrt{k_t}-\frac{k_t}{2}$ For the difference equation $k_{t+1}=\sqrt{k_t}-\frac{k_t}{2}$ one has to find all "fixed points" and determine whether they are locally or globally asymptotically stable. Now I'm not quite sure what "fixed point" means in this context. Is it the same as "equilibrium point" (i.e., setting $\dot{k}=0$, and calculating $k_{t+1}=k+\dot{k}=k+0$ from there)? Or something different? I feel confident in solving such difference equations, just not sure what "fixed point" is supposed to mean here. Thanks for providing some directions!
(I deleted my first answer after rereading your question; I thought perhaps I gave more info than you wanted.) Yes, you can read that as "equilibrium points". To find them, just let $k_t = \sqrt{k_t} - \frac{1}{2}k_t$. Solving for $k_t$ will give you a seed $k_0$ such that $k_0 = k_1$. As you wrote, letting $k_0 = 0$ is one such value. However, there's another $k_0$ that will behave similarly. Furthermore, it's obvious what happens if $k_0 < 0$. Does the same thing happen for any other $k_0$? Now that you've found all the totally uninteresting seeds and the really weird ones, what about the others?
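Concretely, solving $k=\sqrt k-\frac k2$ gives $\frac32 k=\sqrt k$, i.e. $k=0$ or $k=4/9$. One can also check $f'(k)=\frac{1}{2\sqrt k}-\frac12$, so $f'(4/9)=\frac14$ and $4/9$ is locally attracting, while $f'(k)\to\infty$ as $k\to0^+$, so $0$ repels nearby positive seeds. A minimal numerical sketch (seeds chosen so the iterates stay nonnegative):

```python
from math import sqrt

def f(k):
    return sqrt(k) - k / 2

# k = 0 and k = 4/9 are the fixed points; iterate from a few seeds.
for k0 in (0.0, 0.01, 0.444, 2.0):
    k = k0
    for _ in range(60):
        k = f(k)
    print(k0, "->", k)   # every positive seed shown drifts to 4/9 = 0.444...
```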
{ "language": "en", "url": "https://math.stackexchange.com/questions/422140", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Testing convergence of $\sum_{n=0}^{\infty }(-1)^n\ \frac{4^{n}(n!)^{2}}{(2n)!}$ Can anyone help me determine whether this series converges or diverges: $$\sum_{n=0}^{\infty }(-1)^n\ \frac{4^{n}(n!)^{2}}{(2n)!}$$ I tried using the ratio test, but the limit of the ratio in this case equals $1$, which is inconclusive. Any hints, please!
By Stirling's approximation $n!\sim\sqrt{2\pi n}(n/e)^n$, so (using $(2n/e)^{2n}=4^n(n/e)^{2n}$) $$\frac{4^{n}(n!)^{2}}{(2n)!}\sim \frac{4^{n}\cdot 2\pi n\, (n/e)^{2n}}{\sqrt{4\pi n}\,(2n/e)^{2n}} =\frac{2\pi n}{2\sqrt{\pi n}}=\sqrt{\pi n}.$$ Since the terms grow like $\sqrt{\pi n}$, they do not tend to $0$, so the series diverges (despite alternating) by the term test.
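A quick numerical sanity check of the asymptotics, using the exact identity $4^n(n!)^2/(2n)! = 4^n/\binom{2n}{n}$:

```python
from math import comb, pi, sqrt

for n in (10, 100, 1000, 10000):
    term = 4**n / comb(2 * n, n)       # = 4^n (n!)^2 / (2n)!
    print(n, term / sqrt(pi * n))      # ratio tends to 1, so terms ~ sqrt(pi*n)
```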
{ "language": "en", "url": "https://math.stackexchange.com/questions/422208", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 5, "answer_id": 3 }
A line through the centroid G of $\triangle ABC$ intersects the sides at points X, Y, Z. I am looking at the following problem from the book Geometry Revisited, by Coxeter and Greitzer. Chapter 2, Section 1, problem 8: A line through the centroid G of $\triangle ABC$ intersects the sides of the triangle at points $X, Y, Z$. Using the concept of directed line segments, prove that $1/GX + 1/GY + 1/GZ = 0$. I am perplexed by the statement and proof given in the book for this problem, whose statement I have reproduced verbatim (proof from the book is below, after this paragraph), as a single line through the centroid can only intersect all three sides of a triangle at $X, Y, Z$ if two of these points are coincident at a vertex of the triangle, and the third is the midpoint of the opposite side: In other words, if the line is a median. In this case I see that it is true, but not very interesting, and the proof doesn't have to be very complex. The median is divided in a 2:1 ratio, so twice the reciprocal of $2/3$ plus the reciprocal, oppositely signed, of $1/3$, gives $3 + -3 = 0$. But here's the proof given in the book: Trisect $BC$ at U and V, so that BU = UV = VC. Since GU is parallel to AB, and GV to AC, $$GX(1/GX + 1/GY + 1/GZ) = 1 + VX/VC + UX/UB = 1 + (VX-UX)/VC =$$ $$1 + VU/UV = 0$$. I must be missing something. (If this is a typo in this great book, it's the first one I've found). In the unlikely event that the problem is misstated, I have been unable to figure out what was meant. Please help me! [Note]: Here's a diagram with an elaborated version of the book's solution above, that I was able to do after realizing my mistake thanks to the comment from Andres.
a single line through the centroid can only intersect all three sides of a triangle at $X,Y,Z$ if two of these points are coincident at a vertex of the triangle, and the third is the midpoint of the opposite side The sides of the triangle are lines of infinite length in this context, not just the line segments terminated by the corners of the triangle. OK, Andreas already pointed that out in a comment, but since originally this was the core of your question, I'll leave this here for reference. I recently posted an answer to this question in another post. Since @Michael Greinecker♦ asked me to duplicate the answer here, that's what I'm doing. I'd work on this in barycentric homogeneous coordinates. This means that your corners correspond to the unit vectors of $\mathbb R^3$, and that scalar multiples of a vector describe the same point. You can check for incidence between points and lines using the scalar product, and you can connect points and intersect lines using the cross product. This solution is influenced heavily by my background in projective geometry. In this world you have $$ A = \begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix} \qquad B = \begin{pmatrix} 0 \\ 1 \\ 0 \end{pmatrix} \qquad C = \begin{pmatrix} 0 \\ 0 \\ 1 \end{pmatrix} \\ G = \begin{pmatrix} 1 \\ 1 \\ 1 \end{pmatrix} \qquad l = \begin{pmatrix} 1 \\ a \\ -1-a \end{pmatrix} \qquad \left<G,l\right> = 0 \\ X = \begin{pmatrix} a \\ -1 \\ 0 \end{pmatrix} \qquad Y = \begin{pmatrix} 1+a \\ 0 \\ 1 \end{pmatrix} \qquad Z = \begin{pmatrix} 0 \\ 1+a \\ a \end{pmatrix} $$ The coordinates of $l$ were chosen such that the line $l$ already passes through $G$, as seen by the scalar product. The single parameter $a$ corresponds roughly to the slope of the line. The special case where the line passes through $A$ isn't handled, since in that case the first coordinate of $l$ would have to be $0$ (or $a$ would have to be $\infty$). But simply renaming the corners of your triangle would cover that case as well. In order to obtain lengths, I'd fix a projective scale on this line $l$. For this you need the point at infinity on $l$, which I'll call $F$. You can obtain it by intersecting $l$ with the line at infinity. $$ F = l\times\begin{pmatrix}1\\1\\1\end{pmatrix} = \begin{pmatrix} 2a+1 \\ -2-a \\ 1-a \end{pmatrix} $$ To complete your projective scale, you also need to fix an origin, i.e. a point with coordinate “zero”, and a unit length, i.e. a point with coordinate “one”. Since all distances in your formula are measured from $G$, it makes sense to use that as the zero point. And since you could multiply all your lengths by a common scale factor without affecting the formula you stated, the choice of scale is irrelevant. Therefore we might as well choose $X$ as one. Note that this choice also fixes the orientation of length measurements along your line: positive is from $G$ in the direction of $X$. We can then compute the two remaining coordinates, those of $Y$ and $Z$, using the cross ratio. In the following formula, square brackets denote determinants. The cross ratio of four collinear points in the plane can be computed as seen from some fifth point not on that line. I'll use $A$ for this purpose, both because it has simple coordinates and because, as stated above, the case of $l$ passing through $A$ has been omitted by the choice of coordinates for $l$.
$$ GY = \operatorname{cr}(F,G;X,Y)_A = \frac{[AFX][AGY]}{[AFY][AGX]} = \frac{\begin{vmatrix} 1 & 2a+1 & a \\ 0 & -2-a & -1 \\ 0 & 1-a & 0 \end{vmatrix}\cdot\begin{vmatrix} 1 & 1 & 1+a \\ 0 & 1 & 0 \\ 0 & 1 & 1 \end{vmatrix}}{\begin{vmatrix} 1 & 2a+1 & 1+a \\ 0 & -2-a & 0 \\ 0 & 1-a & 1 \end{vmatrix}\cdot\begin{vmatrix} 1 & 1 & a \\ 0 & 1 & -1 \\ 0 & 1 & 0 \end{vmatrix}} = \frac{a-1}{a+2} \\ GZ = \operatorname{cr}(F,G;X,Z)_A = \frac{[AFX][AGZ]}{[AFZ][AGX]} = \frac{\begin{vmatrix} 1 & 2a+1 & a \\ 0 & -2-a & -1 \\ 0 & 1-a & 0 \end{vmatrix}\cdot\begin{vmatrix} 1 & 1 & 0 \\ 0 & 1 & 1+a \\ 0 & 1 & a \end{vmatrix}}{\begin{vmatrix} 1 & 2a+1 & 0 \\ 0 & -2-a & 1+a \\ 0 & 1-a & a \end{vmatrix}\cdot\begin{vmatrix} 1 & 1 & a \\ 0 & 1 & -1 \\ 0 & 1 & 0 \end{vmatrix}} = \frac{1-a}{1+2a} $$ The third length, $GX$, is $1$ by the definition of the projective scale. So now you have the three lengths and plug them into your formula. $$ \frac1{GX} + \frac1{GY} + \frac1{GZ} = \frac{a-1}{a-1} + \frac{a+2}{a-1} - \frac{2a+1}{a-1} = 0 $$ As you will notice, the case of $a=1$ is problematic in this setup, since it would entail a division by zero. This corresponds to the situation where $l$ is parallel to $AB$. In that case, the point $X$ which we used as the “one” of the scale would coincide with the point $F$ at infinity, thus breaking the scale. A renaming of triangle corners will again take care of this special case. The cases $a=-2$ and $a=-\tfrac12$ correspond to the two other cases where $l$ is parallel to one of the edges. You will notice that the lengths would again entail divisions by zero, since the points of intersection are infinitely far away. But the reciprocal lengths are just fine and will be zero in those cases.
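Here is a hedged numerical cross-check of the statement in ordinary Cartesian coordinates (my own parametrization, not the answer's): write points on the line as $G+td$; then the signed lengths $GX,GY,GZ$ equal $t\cdot|d|$ for the three intersection parameters $t$, so $\frac1{GX}+\frac1{GY}+\frac1{GZ}=0$ exactly when the reciprocals of the $t$'s sum to zero.

```python
import numpy as np

rng = np.random.default_rng(0)
A, B, C = rng.random((3, 2))      # a random triangle in the plane
G = (A + B + C) / 3               # its centroid
d = rng.random(2)                 # direction of a random line through G

def hit(P, Q):
    """Parameter t with G + t*d on the (infinite) line through P and Q."""
    M = np.column_stack([d, P - Q])          # solve G + t*d = P + s*(Q - P)
    t, _ = np.linalg.solve(M, P - G)
    return t

ts = [hit(B, C), hit(C, A), hit(A, B)]
print(sum(1 / t for t in ts))     # ~0, up to floating-point roundoff
```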
{ "language": "en", "url": "https://math.stackexchange.com/questions/422265", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
How to prove that the set $\{\sin(x),\sin(2x),...,\sin(mx)\}$ is linearly independent? Could you help me to show that the functions $\sin(x),\sin(2x),...,\sin(mx)\in V$ are linearly independent, where $V$ is the space of real functions? Thanks.
If $\{ \sin x, \sin 2x, \ldots, \sin mx\}$ is linearly dependent, then for some $a_1,\ldots,a_m \in \mathbb{R}$, not all zero, we have: $$\sum_{k=1}^m a_k \sin kx = 0, \text{ for all } x \in \mathbb{R}$$ This in turn implies that for every $z \in S^1 = \{ \omega \in \mathbb{C} : |\omega| = 1\}$, if we write $z$ as $e^{ix}$, we have: $$0 = \sum_{k=1}^m a_k \sin kx = \sum_{k=1}^m a_k \frac{z^k - z^{-k}}{2i} = \frac{z^{-m}}{2i}\sum_{k=1}^m a_k\left(z^{m+k}-z^{m-k}\right)$$ This contradicts the fact that the rightmost side of the above expression is $\frac{z^{-m}}{2i}$ multiplied by a non-zero polynomial in $z$, which has at most finitely many roots on $S^1$.
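For a quick numerical certificate of independence (a sketch of my own, separate from the proof above): if some nontrivial combination $\sum a_k\sin kx$ vanished for all $x$, the sampled columns below would satisfy the same linear relation and the matrix would be rank-deficient; full column rank at finitely many sample points therefore rules that out.

```python
import numpy as np

m = 6
x = np.linspace(0.1, 3.0, 50)      # any 50 distinct sample points work here
V = np.column_stack([np.sin(k * x) for k in range(1, m + 1)])
print(np.linalg.matrix_rank(V))    # m, so no nontrivial combination vanishes
```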
{ "language": "en", "url": "https://math.stackexchange.com/questions/422347", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11", "answer_count": 5, "answer_id": 2 }
Why does the Tower of Hanoi problem take $2^n - 1$ transfers to solve? According to http://en.wikipedia.org/wiki/Tower_of_Hanoi, the Tower of Hanoi requires $2^n-1$ transfers, where $n$ is the number of disks in the original tower, to solve based on recurrence relations. Why is that? Intuitively, I almost feel that it takes a linear number of transfers since we must transfer all but the bottommost disc to the buffer before transferring the bottommost disc to the target tower.
Your intuition about the procedure is right, but the stack of $n-1$ upper disks must be moved TWICE: once onto the buffer, and once more onto the target after the bottom disk has moved. So the count for $n$ disks is one more than twice the count for $n-1$ disks, i.e. $T(n)=2T(n-1)+1$. If $T(n)=2^n-1$, then $$2(2^n-1)+1=2^{n+1}-1,$$ so by induction $T(n)=2^n-1$ for all $n$: exponential, not linear.
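A minimal sketch of the recurrence (the function name is my own):

```python
def moves(n):
    """Transfers used by the optimal strategy for n disks: T(n) = 2*T(n-1) + 1."""
    return 0 if n == 0 else 2 * moves(n - 1) + 1

for n in range(1, 8):
    print(n, moves(n), 2**n - 1)   # the last two columns always agree
```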
{ "language": "en", "url": "https://math.stackexchange.com/questions/422409", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Does $\sum_{k=1}^n (x-1+k)^n=(x+n)^n$ have integer solutions when $n\ge 4$? In a post on SE, someone notes that $3^2+4^2=5^2$ and $3^3+4^3+5^3=6^3$, which I find interesting, so I began to explore further. The general equation is $\sum_{k=1}^n (x-1+k)^n=(x+n)^n$. From $n \ge 4$ up to $n=41$ there is no integer solution for $x$; for $n>41$ I can't get a result, as Wolfram Alpha doesn't work for me there. I suspect there is no integer solution for $n \ge 4$; as $n$ gets bigger, the solution $x$ gets close to $\dfrac{n}{2}$. Can someone provide an answer? Thanks!
Your problem has been studied, and it is conjectured that only $3,4,5$ for squares and $3,4,5,6$ for cubes are such that all the numbers are consecutive and the $k$th power of the last is the sum of the $k$th powers of the others ($k>1$). I've spent a lot of time on this question and, no wonder, did not get anything final toward a proof, since apparently nobody else has succeeded in proving it. The website calls it "Cyprian's Last Theorem", arguing that it seems so likely true but nobody has shown it yet, just as with Fermat for so many years. The page reference I found: http://www.nugae.com/mathematics/cyprian.htm There may be other links there to get further ideas...
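For what it's worth, the finite search described in the question is easy to reproduce in exact integer arithmetic; note this only scans a window of candidate $x$ values and proves nothing beyond it:

```python
def integer_solutions(n, bound=200):
    """All integer x in [-bound, bound] with sum_{k=1}^{n} (x-1+k)^n == (x+n)^n."""
    return [x for x in range(-bound, bound + 1)
            if sum((x - 1 + k)**n for k in range(1, n + 1)) == (x + n)**n]

for n in range(2, 12):
    print(n, integer_solutions(n))
# n=2 -> [-1, 3] (x=3 is the 3,4,5 triple; x=-1 is degenerate),
# n=3 -> [3] (the 3,4,5,6 identity), and the list is empty afterwards
```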
{ "language": "en", "url": "https://math.stackexchange.com/questions/422474", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 0 }
Quadratic form $\mathbb{R}^n$ homogeneous polynomial degree $2$ Could you help me with the following problem? My definition of a quadratic form is: it is a mapping $h: \ V \rightarrow \mathbb{R}$ such that there exists a bilinear form $\varphi: \ V \times V \rightarrow \mathbb{R}$ such that $h(v)=\varphi(v,v)$. Could you tell me how, based on that definition, I can prove that a quadratic form on $V=\mathbb{R}^n$ is a homogeneous polynomial of degree $=2$?
Choose a basis for $V$, call it $\{v_1,\dots,v_n\}$. Then letting $v = \sum_{i=1}^n a_i v_i$, $$ h(v) = \varphi(v,v) = \sum_{i=1}^n \sum_{j=1}^n a_i a_j \varphi(v_i,v_j). $$ Therefore $h(v)$ is a polynomial whose terms are the products $a_i a_j$ (i.e. of degree $2$ in the coefficients of $v$), and the coefficient in front of $a_i a_j$ is $\varphi(v_i,v_j)$. This means $h(v)$ is a homogeneous polynomial of degree $2$ in the variables $\{a_1,\dots,a_n\}$. Hope that helps,
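A small SymPy illustration of the expansion (the particular matrix is an arbitrary choice of mine, standing in for the bilinear form $\varphi$):

```python
from sympy import Matrix, Poly, expand, symbols

a1, a2, a3 = symbols('a1 a2 a3')
v = Matrix([a1, a2, a3])
Phi = Matrix(3, 3, [1, 2, 0, 2, -1, 4, 0, 4, 3])   # matrix of some bilinear form
h = expand((v.T * Phi * v)[0])                      # h(v) = phi(v, v)
print(h)                                            # every monomial has degree 2
print(Poly(h, a1, a2, a3).is_homogeneous)           # True
```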
{ "language": "en", "url": "https://math.stackexchange.com/questions/422540", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Convert two points to line eq (Ax + By + C = 0) Say one has two points in the x,y plane. How would one convert those two points to a line? Of course I know you could use the point-slope formula and derive the line as follows: $$y - y_0 = \frac{y_1-y_0}{x_1-x_0}(x-x_0)$$ However, this formula obviously fails when $x_1-x_0 = 0$ (a vertical line). The more general form, however, should be capable of describing every line (a vertical line simply means $B = 0$): $$Ax+By +C = 0$$ But how does one deduce $A$, $B$, $C$ given two points?
Let $P_1:(x_1,y_1)$ and $P_2:(x_2,y_2)$. Then a point $P:(x,y)$ lies on the line connecting $P_1$ and $P_2$ if and only if the area of the parallelogram with sides $P_1P_2$ and $P_1P$ is zero. This can be expressed using the determinant as $$ \begin{vmatrix} x_2-x_1 & x-x_1 \\ y_2-y_1 & y-y_1 \end{vmatrix} = 0 \Longleftrightarrow (y_1-y_2)x+(x_2-x_1)y+x_1y_2-x_2y_1=0, $$ so you get (up to scale) $A=y_1-y_2$, $B=x_2-x_1$ and $C=x_1y_2-x_2y_1$.
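In code this is a one-liner; a minimal sketch (the function name is mine):

```python
def line_through(p1, p2):
    """Coefficients (A, B, C) with A*x + B*y + C = 0 through p1 and p2."""
    (x1, y1), (x2, y2) = p1, p2
    return y1 - y2, x2 - x1, x1 * y2 - x2 * y1

print(line_through((1, 2), (1, 5)))   # (-3, 0, 3): the vertical line x = 1
```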
{ "language": "en", "url": "https://math.stackexchange.com/questions/422602", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 1, "answer_id": 0 }
Question in do Carmo's book Riemannian Geometry, Section 7. I have a question; please help me. Assume that $M$ is complete and noncompact, and let $p$ belong to $M$. Show that $M$ contains a ray starting from $p$. Here $M$ is a Riemannian manifold; it is both geodesically complete and complete as a metric space (Cauchy sequences converge). A ray is a geodesic whose domain is $[0,\infty)$ and which minimizes the distance from its starting point to every other point of the curve.
Suppose, for contradiction, that every geodesic emanating from $p$ fails to be a segment after some distance $s$. Since the unit sphere in the tangent space, which parameterizes these geodesics, is compact, $s$ attains a maximum $s_{\max}$. This means that the farthest any point of the manifold can be from $p$ is $s_{\max}$, so the diameter of the manifold is bounded by $2s_{\max}$ by the triangle inequality. The manifold is therefore bounded and complete; by the Hopf–Rinow theorem it is then compact, a contradiction.
{ "language": "en", "url": "https://math.stackexchange.com/questions/422683", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 3, "answer_id": 1 }
How to prove that the following function has exactly two zeroes in a particular domain? I am practicing exam questions for my own exam in complex analysis. This was one I couldn't figure out. Let $U = \mathbb{C} \setminus \{ x \in \mathbb{R} : x \leq 0 \} $ and let $\log : U \to \mathbb{C} $ be the usual holomorphic branch of the logarithm on $U$ with $\log (1) = 0$. Consider the function given by $f(z) = \log(z) - 4(z-2)^2$. Q1: Show that $f$ has exactly two zeroes (including multiplicity) in the open disk $D(2,1) = \{ z \in \mathbb{C} : |z-2| < 1 \} $. Q2: Show that $f$ has exactly two different zeroes in $D(2,1)$. I strongly suspect we should use Rouché's Theorem for this. I tried to apply it by setting $h(z) = \log(z)$ and $g(z) = -4 (z-2)^2$. If $|z-2| = 1$, then $z = 2 + e^{it}$ with $t \in [0,2 \pi ) $. Then we have $|h(z)| = |\log(z)| \leq \log|z| = \log|2+e^{it}| \leq \log(|2| + |e^{it}|) = \log|2|\cdot \log|e^{it}| = \log(2) \cdot \log(1) = 0$. Furthermore, we have $|g(z)| = |-4 (z-2)^2| = |-4| |z-2|^2 = 4$. So $|h(z)| < |g(z)|$ when $|z-2| = 1$. According to Rouché's Theorem, $f$ and $g$ have the same number of zeroes within $\{ |z-2| < 1 \}$, so we have to count the zeros of $g$ to find the number of zeroes of $f$. However, I can only find one zero of $g$, which is at $z=2$. Can you tell what's going wrong with my approach?
Your estimate $|h(z)|\le 0$ (when $|z-2|=1$) cannot possibly be true: a nonconstant holomorphic function cannot be equal to zero on a circle. I marked the incorrect steps in red: $$|\log(z)| \color{red}{\leq} \log|z| = \log|2+e^{it}| \leq \log(|2| + |e^{it}|) \color{red}{=} \log|2|\cdot \log|e^{it}| = \log(2) \cdot \log(1) = 0$$ A correct estimate could look like $$|\log(z)| \le |\operatorname{Re} \log z| + |\operatorname{Im} \log z| = \log |z| + |\arg z| \le \log 3+ \pi/2 <3$$ which is not sharp but suffices for the application of Rouché's theorem, which settles Q1. The question Q2 is less standard. One way to answer it is to observe that the real function $f(x)=\log x-4(x-2)^2$ has two distinct zeros on the interval $(1,3)$, because $f(1)<0$, $f(2)>0$, and $f(3)=\log 3-4<0$. Since we already know there are two zeros in $D(2,1)$ with multiplicity, there are no others.
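To see the two zeros concretely, one can bisect on the sign changes used in the last paragraph (a minimal pure-Python sketch):

```python
from math import log

def f(x):
    return log(x) - 4 * (x - 2)**2

def bisect(a, b, tol=1e-12):
    """Root of f in (a, b), assuming f(a) and f(b) have opposite signs."""
    while b - a > tol:
        m = (a + b) / 2
        a, b = (m, b) if f(a) * f(m) > 0 else (a, m)
    return (a + b) / 2

print(f(1), f(2), f(3))             # -4, +0.693..., -2.901...: two sign changes
print(bisect(1, 2), bisect(2, 3))   # roughly 1.65 and 2.48, both inside D(2, 1)
```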
{ "language": "en", "url": "https://math.stackexchange.com/questions/422745", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Generating all coprime pairs within limits Say I want to generate all coprime pairs ($a,b$) where no $a$ exceeds $A$ and no $b$ exceeds $B$. Is there an efficient way to do this?
If $A$ and $B$ are comparable in value, the algorithm for generating Farey sequence might suit you well; it generates all pairs of coprime integers $(a,b)$ with $1\leq a<b\leq N$ with constant memory requirements and $O(1)$ operations per output pair. Running it with $N=\max(A,B)$ and filtering out pairs whose other component exceeds the other bound produces all the coprime pairs you seek. If the values of $A$ and $B$ differ too much, the time wasted in filtering the irrelevant pairs would be too high and a different approach (such as that suggested by Thomas Andrews) might be necessary.
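A minimal sketch of that Farey-neighbor recurrence (the function name is my own; each fraction $c/d$ in $F_N$ is automatically in lowest terms, so each one yields a coprime pair):

```python
def coprime_pairs(N):
    """Yield every coprime (a, b) with 1 <= a < b <= N, via the Farey sequence F_N."""
    a, b, c, d = 0, 1, 1, N          # the first two fractions of F_N: 0/1 and 1/N
    while d > 1:                     # stop before reaching 1/1
        yield c, d
        k = (N + b) // d             # standard next-term recurrence
        a, b, c, d = c, d, k * c - a, k * d - b

# For bounds A and B, run with N = max(A, B) and filter, as suggested above:
A, B = 4, 6
print([(a, b) for a, b in coprime_pairs(max(A, B)) if a <= A and b <= B])
```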
{ "language": "en", "url": "https://math.stackexchange.com/questions/422830", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "13", "answer_count": 3, "answer_id": 2 }
$l^2$ is not compact. Prove that the space $l^2$ (of real sequences $(a_n)$ such that $\sum_{i=1}^{\infty}a_i^2$ converges) is not compact. I want to use the open cover $\{\sum_{i=1}^{\infty}a_i^2<n\mid n\in\mathbb{Z}^+\}$ and show that it has no finite subcover. To do that, I must prove that for any $n$, the set $\{\sum_{i=1}^{\infty}a_i^2<n\}$ is open. So let $\sum_{i=1}^{\infty}a_i^2<n$; suppose it equals $n-\alpha$. I must find $\epsilon$ such that for any sequence $\{b_i\}\in l^2$ with $\sum_{i=1}^{\infty}(a_i-b_i)^2<\epsilon$, we have $\sum_{i=1}^{\infty}b_i^2<n$. But $(a_i-b_i)^2$ has the cross term $-2a_ib_i$. How should I deal with that?
There are two easy ways to prove that $\ell_2$ is not compact. The first is to note that its dimension is infinite, hence the closed unit ball is not compact, hence the space itself is not compact. Another way to see this is to find a bounded sequence which doesn't have a convergent subsequence; in fact, the sequence of standard basis vectors works: $\|e_i-e_j\|=\sqrt2$ whenever $i\ne j$, so no subsequence is Cauchy, and hence no subsequence converges in norm.
{ "language": "en", "url": "https://math.stackexchange.com/questions/422913", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 3, "answer_id": 1 }
Finding the definite integral $\int_0^1 \log x\,\mathrm dx$ $$\int_{0}^1 \log x \,\mathrm dx$$ How to solve this? I am having problems with the limits $0$ to $1$, because $\log 0$ is undefined.
$\int_0^1 \log x\, dx=\lim_{a\to 0^+}\int_a^1\log x\, dx=\lim_{a\to 0^+}\left(x\log x-x\Big|_a^1\right)=\lim_{a\to 0^+}(a-1-a\log a)=\lim_{a\to 0^+}(a-1) -\lim_{a\to 0^+}a\log a=-1-\lim_{a\to 0^+}a\log a$ Now $$\lim_{a\to 0^+}a\log a=\lim_{a\to 0^+}\frac{\log a}{1/a}$$ Using L'Hôpital's rule (which is applicable here, since the quotient has the form $\infty/\infty$ as $a\to 0^+$), $$\lim_{a\to 0^+}\frac{\log a}{1/a}=\lim_{a\to 0^+}\frac{1/a}{-1/a^2}=\lim_{a\to 0^+}(-a)=0$$ Therefore, $$\int_0^1 \log x\, dx= -1-\lim_{a\to 0^+}a\log a =-1$$
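A quick numeric sanity check with a midpoint Riemann sum, which avoids ever evaluating $\log 0$ (a minimal sketch):

```python
from math import log

n = 10**6
print(sum(log((i + 0.5) / n) for i in range(n)) / n)   # -0.99999..., close to -1
```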
{ "language": "en", "url": "https://math.stackexchange.com/questions/422970", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 9, "answer_id": 4 }
Two students clean 5 rooms in 4 hours. How long do 40 students need for 40 rooms? A class decides to do a community involvement project by cleaning classrooms in a school. If 2 students can clean 5 classrooms in 4 hours, how long would it take for 40 students to clean 40 classrooms?
A student-hour is a unit of work. It represents 1 student working for an hour, or 60 students working for one minute, or 3600 students working for 1 second, or ... You're told that cleaning 5 classrooms takes 2 students 4 hours, or $8$ student-hours. So one classroom takes $\frac{8}{5}$ or $1.6$ student-hours. So the 40 classrooms will take $40 \times 1.6$ or $64$ student-hours. The forty students will put out $64$ student-hours in $\frac{64}{40}$ or $1.6$ hours...
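The same bookkeeping in a few lines (a trivial sketch):

```python
rate = 5 / (2 * 4)         # rooms per student-hour: 5 rooms per 8 student-hours
hours = 40 / (40 * rate)   # 40 rooms, with 40 students working in parallel
print(hours)               # 1.6
```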
{ "language": "en", "url": "https://math.stackexchange.com/questions/423044", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Archimedean Proof? I've been struggling with a concept concerning the Archimedean property proof, that is, showing by contradiction that for all $x$ in the reals, there exists $n$ in the naturals such that $n>x$. Okay, so we assume that $\mathbb N$ is bounded above and show a contradiction. If $\mathbb N$ is bounded above, then it has a least upper bound (supremum), say $u$. Now consider $u-1$. Since $u=\sup(\mathbb N)$, $u-1$ is an element of $\mathbb N$. (Here is my first hiccup: I'm not entirely sure why we can say $u-1$ is in $\mathbb N$.) This implies (again, I'm not confident with this implication) that there exists an $m$ in $\mathbb N$ such that $m>u-1$. A little bit of algebra leads to $m+1>u$. $m+1$ is in $\mathbb N$ and $m+1>u=\sup(\mathbb N)$, thus we have a contradiction. Can anyone help clear up these implications that I'm not really comfortable with? Thanks!
I don't think you need the fact that $u-1\in\mathbb N$ (indeed, it need not be true). As for the second implication, it follows from the supremum property: since $u-1<u$ and $u$ is the least upper bound of $\mathbb N$, $u-1$ is not an upper bound, so there exists a natural number greater than it.
{ "language": "en", "url": "https://math.stackexchange.com/questions/423107", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 3, "answer_id": 1 }