Evaluating a limit with variable in the exponent For $$\lim_{x \to \infty} \left(1- \frac{2}{x}\right)^{\dfrac{x}{2}}$$ I have to use L'Hospital's rule, right? So I get: $$\lim_{x \to \infty}\frac{x}{2} \log\left(1- \frac{2}{x}\right)$$ And what now? I need to take the derivative of the log: is it $\dfrac{1}{1-\dfrac{2}{x}}$? But since there is an $x$ inside, I need to use the chain rule and multiply by the derivative of $\dfrac{x}{2}$?
Recall the limit: $$\lim_{y \to \infty} \left(1+\dfrac{a}y\right)^y = e^a$$ I trust you can finish it from here, by an appropriate choice of $a$ and $y$.
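With $y=x/2$ and $a=-1$ the hinted limit comes out to $e^{-1}$; a quick numeric sanity check, as a minimal Python sketch:

```python
import math

# (1 - 2/x)^(x/2) should approach 1/e = 0.36787... as x grows.
for x in (10, 100, 1000, 10000, 100000):
    value = (1 - 2 / x) ** (x / 2)
    print(x, value, abs(value - math.exp(-1)))
```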
{ "language": "en", "url": "https://math.stackexchange.com/questions/392722", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Ball-counting problem (Combinatorics) I would like some help on this problem, I just can't figure it out. In a box there are 5 identical white balls, 7 identical green balls and 10 red balls (the red balls are numbered from 1 to 10). A man picks 12 balls from the box. How many possibilities are there in which: a) exactly 5 red balls are drawn -- b) no red ball is drawn -- c) there is a white ball, a green ball and at least 6 red balls Thanks in advance.
Hints: (a) How many ways can we choose $5$ numbers from $1,2,...,9,10$? (This will tell you how many different collections of $5$ red balls he may draw.) How many distinguishable collections of $7$ balls can he draw so that each of the seven is either green or white? Note that the answers to those two questions do not depend on each other, so we'll multiply them together to get the solution to part (a). (b) Don't overthink it. How many ways can this happen? (c) You can split this into $5$ cases (depending on the number of red balls drawn) and proceed in a similar way to what we did in part (a) for each case (bearing in mind that we've already drawn one green ball and one white ball). Then, add up the numbers of ways each case can happen.
{ "language": "en", "url": "https://math.stackexchange.com/questions/392813", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
Evaluating the integral: $\lim_{R \to \infty} \int_0^R \frac{dx}{x^2+x+2}$ Please help me with this integral: $$\lim_{R \to \infty} \int_0^R \frac{dx}{x^2+x+2}$$ I've tried the usual approaches, but it seems tricky. Do you have an idea? Thanks in advance!
$$\dfrac1{x^2+x+2} = \dfrac1{\left(x+\dfrac12 \right)^2 + \left(\dfrac{\sqrt{7}}2 \right)^2}$$ Recall that $$\int_a^b \dfrac{dx}{(x+c)^2 + d^2} = \dfrac1d \left.\left(\arctan\left(\dfrac{x+c}d\right)\right)\right \vert_{a}^b$$ I trust you can finish it from here. You will also need to use the fact that $$\lim_{y \to \infty} \arctan(y) = \dfrac{\pi}2$$
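Following the hint through gives $\frac{2}{\sqrt{7}}\left(\frac{\pi}{2}-\arctan\frac{1}{\sqrt{7}}\right)\approx 0.9142$; a numeric cross-check, as a Python sketch (assuming SciPy is available for the quadrature):

```python
from math import atan, pi, sqrt
from scipy.integrate import quad   # assumption: SciPy is installed

closed_form = (2 / sqrt(7)) * (pi / 2 - atan(1 / sqrt(7)))
numeric, _ = quad(lambda x: 1.0 / (x * x + x + 2), 0, float("inf"))
print(closed_form, numeric)   # both ~0.9142
```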
{ "language": "en", "url": "https://math.stackexchange.com/questions/392869", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Proof of: If $x_0\in \mathbb R^n$ is a local minimum of $f$, then $\nabla f(x_0) = 0$. Let $f \colon \mathbb R^n\to\mathbb R$ be a differentiable function. If $x_0\in \mathbb R^n$ is a local minimum of $f$, then $\nabla f(x_0) = 0$. Where can I find a proof of this theorem? This is a theorem on maxima/minima in calculus of several variables. Here is my attempt: Let $x_0 = [x_1,x_2,\ldots, x_n]$ and let $g_i(x) = f(x_0+(x-x_i)e_i)$, where $e_i$ is the $i$-th standard basis vector of dimension $n$. Since $f$ has a local minimum at $x_0$, $g_i$ has a local minimum at $x_i$. So by Fermat's theorem, $g_i'(x_i)= 0$, which is equal to $f_{x_i}(x_0)$. Therefore $f_{x_i}(x_0) = 0$, which is what we wanted to show. Is this right?
Do you know the proof for $n=1$? Can you try to mimic it for more variables, say $n=2$? Since $\nabla f(t)$ is a vector, what you want to prove is that $\frac{\partial f}{\partial x_i}(t)=0$ for each $i$. That is why you mostly need to mimic the $n=1$ proof. Recall that for $n=1$, we prove that $$f'(t)\leq 0$$ and $$f'(t)\geq 0$$ by looking at $x\to t^{+}$ and $x\to t^{-}$. You should do the same in each $$\frac{\partial f}{\partial x_i}(t)=\lim_{h\to 0}\frac{f(t_1,\dots,t_i+h,\dots,t_n)-f(t_1,\dots,t_n)}h$$ Added: Suppose $f:\Bbb R\to \Bbb R$ is differentiable and $f$ has a local minimum at $t$. Then $f'(t)=0$. Proof. Since $f$ has a local minimum at $t$, for suitably small $h$, $$f(t+h)-f(t)\geq 0$$ If $h>0$ then this gives $$\frac{f(t+h)-f(t)}{h}\geq 0$$ while if $h<0$ we get $$\frac{f(t+h)-f(t)}{h}\leq 0$$ Since $f'$ exists, the side limits also exist and equal $f'(t)$. From the above we conclude $f'(t)\geq 0$ and $f'(t)\leq 0$, so that $f'(t)=0 \;\;\blacktriangle$. Now, just apply that coordinatewise, and you're done.
{ "language": "en", "url": "https://math.stackexchange.com/questions/392952", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
How to find the limit for the quotient of the least number $K_n$ such that the partial sum of the harmonic series $\geq n$ Let $$S_n=1+1/2+\cdots+1/n.$$ Denote by $K_n$ the least subscript $k$ such that $S_k\geq n$. Find the limit $$\lim_{n\to\infty}\frac{K_{n+1}}{K_n}\quad ?$$
We know that $H_n=\ln n + \gamma +\epsilon(n)$, where $\epsilon(n)\approx \frac{1}{2n}$ and in any case $\epsilon(n)\rightarrow 0$ as $n\rightarrow \infty$. If $m=H_n$ we may as a first approximation solve as $n=e^{m-\gamma}$. Hence the desired limit is $$\lim_{m\rightarrow \infty} \frac{e^{m+1-\gamma}}{e^{m-\gamma}}=e$$ For a second approximation, $m=\gamma + \ln n +\frac{1}{2n}=\gamma+\ln n+\ln e^{\frac{1}{2n}}=\gamma+\ln ne^{\frac{1}{2n}}$. This may be rearranged as $ne^{\frac{1}{2n}}=e^{m-\gamma}$. This has solution $$n=-\frac{1}{2W(-e^{\gamma-m}/2)}$$ where $W$ is the Lambert function. Hence the desired limit is now $$\lim_{m\rightarrow \infty}\frac{W(-e^{\gamma-m}/2)}{W(-e^{\gamma-m-1}/2)}=e$$ Although not a proof, this is compelling enough that I'm not going to think about the next error term.
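A brute-force check of this heuristic, as a Python sketch; it computes $K_n$ by direct summation, which is only practical for small $n$ since $K_n$ grows roughly like $e^{n-\gamma}$:

```python
def K(n):
    """Least k with H_k = 1 + 1/2 + ... + 1/k >= n."""
    s, k = 0.0, 0
    while s < n:
        k += 1
        s += 1.0 / k
    return k

prev = None
for n in range(1, 12):
    k = K(n)
    if prev is not None:
        print(n, k, k / prev)   # the ratio K_n / K_{n-1} drifts toward e = 2.71828...
    prev = k
```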
{ "language": "en", "url": "https://math.stackexchange.com/questions/392992", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Proof regarding unitary self-adjoint linear operators I'm stuck on how to do the following Linear Algebra proof: Let $T$ be a self-adjoint operator on a finite-dimensional inner product space $V$. Prove that for all $x \in V$, $||T(x)\pm ix||^2=||T(x)||^2+||x||^2.$ Deduce that $T-iI$ is invertible and that $[(T-iI)^{-1}]^{*}=(T+iI)^{-1}.$ Furthermore, show that $(T+iI)(T-iI)^{-1}$ is unitary. My attempt at a solution (to the first part): $||T(x)\pm ix||^2=\left< T(x)\pm ix, T(x)\pm ix\right>$ $=\left< T(x), T(x) \pm ix\right>\pm \left<ix, T(x)\pm ix \right>$ ... $=\left<T(x), T(x) \right>+ \left<x,x \right>$ $=||T(x)||^2+||x||^2$ The ... is the part I'm stuck on (I know, it's the bulk of the first part). I have yet to consider the next parts since I'm still stuck on this one. Any help would be appreciated! Thanks.
We have $$(Tx+ix,Tx+ix)=(Tx,Tx)+(ix, Tx)+(Tx, ix)+(ix,ix)=|Tx|^{2}+i(x,Tx)-i(x,Tx)+|x|^{2}$$ where I assume you define the inner product to be Hermitian. For $Tx-ix$ it is similar. The rest is left as an exercise for you; it is not that difficult. To solve the last one, notice that the adjoint of $(T-iI)^{-1}$ is $(T+iI)^{-1}$, and the adjoint of $(T+iI)$ is $T-iI$. The first one can be proved by expanding $(T-iI)^{-1}$ as $(I-iT^{-1})^{-1}T^{-1}$, then using a geometric series. The second one follows from $(Tx+ix,y)=(Tx,y)+(x,-iy)=(x,Ty)+(x,-iy)=(x,(T-iI)y)$. If we want $(T+iI)(T-iI)^{-1}$ to be unitary, then we want $$((T+iI)(T-iI)^{-1}x,(T+iI)(T-iI)^{-1}x)=(x,x)$$ for all $x$. Moving operators around, this is the same as $$((T^{2}+I)(T-iI)^{-1}x,(T-iI)^{-1}x)=(x,x)$$ but this is the same as $$((T+iI)x, (T-iI)^{-1}x)=(x,x)$$ and the result follows because we know $(T+iI)^{*}=T-iI$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/393133", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Positive integer values of $n$ for which $2005$ divides $n^2+n+1$ How can I find the positive integer values of $n$ for which $2005$ divides $n^2+n+1$? My try: $2005 = 5 \times 401$, which means $n^2+n+1$ must be a multiple of both $5$ and $401$. Now $n^2+n+1 = n(n+1)+1$, and $n(n+1)+1$ has last digit $1$, $3$ or $7$. $\bullet $ So the last digit of $n(n+1)+1$ is never $0$ or $5$, and it is not divisible by $5$. Now how can I proceed? Please explain it to me.
A number of the form $n^2+n+1$ has divisors of the form $3$ or $6k+1$ only, and it has a three-place period in base $n$. Since $5\equiv 5$ and $401\equiv 5\pmod 6$, neither prime factor of $2005$ has that form, so $2005$ never divides $n^2+n+1$. On the other hand, there are values where $2005$ divides some $n^2+n-1$, for which the divisors are of the form $5$, $10k+1$, or $10k+9$. This happens when $n$ is $512$ or $1492$ mod $2005$.
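A brute-force confirmation of both claims over one full period, as a Python sketch:

```python
# Check all residues n mod 2005: n^2+n+1 is never divisible by 2005,
# while n^2+n-1 is divisible exactly when n = 512 or 1492 (mod 2005).
hits_plus  = [n for n in range(2005) if (n * n + n + 1) % 2005 == 0]
hits_minus = [n for n in range(2005) if (n * n + n - 1) % 2005 == 0]
print(hits_plus)   # expected: []
print(hits_minus)  # expected: [512, 1492]
```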
{ "language": "en", "url": "https://math.stackexchange.com/questions/393299", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 1 }
Do these definitions of congruences on categories have the same result in this context? Let $\mathcal{D}$ be a small category and let $A=A\left(\mathcal{D}\right)$ be its set of arrows. Define $P$ on $A$ by: $fPg\Leftrightarrow\left[f\text{ and }g\text{ are parallel}\right]$ and let $R\subseteq P$. Now have a look at equivalence relations $C$ on $A$. Let's say that: * *$C\in\mathcal{C}_{s}$ iff $R\subseteq C\subseteq P$ and $fCg\Rightarrow\left(h\circ f\circ k\right)C\left(h\circ g\circ k\right)$ whenever these compositions are defined; *$C\in\mathcal{C}_{w}$ iff $R\subseteq C\subseteq P$ but now combined with $fCg\wedge f'Cg'\Rightarrow\left(f\circ f'\right)C\left(g\circ g'\right)$ whenever these compositions are defined. Then $P\in\mathcal{C}_{s}$ and $P\in\mathcal{C}_{w}$ so both are not empty. For $C_{s}:=\cap\mathcal{C}_{s}$ and $C_{w}:=\cap\mathcal{C}_{w}$ it is easy to verify that $C_{s}\in\mathcal{C}_{s}$ and $C_{w}\in\mathcal{C}_{w}$. My question is: Do we have $C_{w}=C_{s}$ here? It is in fact the question whether two different definitions of 'congruences' both result in the same smallest 'congruence' that contains relation $R\subseteq P$. I ask it here for small categories so that I can conveniently speak of 'relations' (small sets), but for large categories I have the same question. Mac Lane works in CWM with $C_{s}$, but is $C_{w}$ also an option?
They are identical. I will suppress the composition symbol for brevity and convenience. Suppose first that $C \in \mathcal C_w$, and that $f C g$. Since $h C h$ and $k C k$, we have $f C g$ implies $hf C hg$, which in turn implies $hfk C hgk$. Thus $C \in \mathcal C_s$. Suppose now that $C \in \mathcal C_s$, and that $f C g, f' C g'$. Then we have $ff' C gf'$ (take $h = \operatorname{id}, k = f'$) and $gf'Cgg'$ (take $h = g, k = \operatorname{id}$). By transitivity, $ff'Cgg'$. Thus $C \in \mathcal C_w$. Therefore, $\mathcal C_s = \mathcal C_w$, and we conclude $C_s = C_w$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/393388", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Power series of $\frac{\sqrt{1-\cos x}}{\sin x}$ When I'm trying to find the limit of $\frac{\sqrt{1-\cos x}}{\sin x}$ when x approaches 0, using power series with "epsilon function" notation, it goes : $\dfrac{\sqrt{1-\cos x}}{\sin x} = \dfrac{\sqrt{\frac{x^2}{2}+x^2\epsilon_1(x)}}{x+x\epsilon_2(x)} = \dfrac{\sqrt{x^2(1+2\epsilon_1(x))}}{\sqrt{2}x(1+\epsilon_2(x))} = \dfrac{|x|}{\sqrt{2}x}\dfrac{\sqrt{1+2\epsilon_1(x)}}{1+\epsilon_2(x)} $ But I can't seem to do it properly using Landau notation I wrote : $ \dfrac{\sqrt{\frac{x^2}{2}+o(x^2)}}{x+o(x)} $ and I'm stuck... I don't know how to carry these o(x) to the end Could anyone please show me what the step-by-step solution using Landau notation looks like when written properly ?
It is the same as in the "$\epsilon$" notation. For the numerator, we want $\sqrt{x^2\left(\frac{1}{2}+o(1)\right)}$, which is $|x|\sqrt{\frac{1}{2}+o(1)}$. In the denominator, we have $x(1+o(1))$. Remark: Note that the limit as $x\to 0$ does not exist, though the limit as $x$ approaches $0$ from the left does, and the limit as $x$ approaches $0$ from the right does.
{ "language": "en", "url": "https://math.stackexchange.com/questions/393536", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
How to prove the existence of infinitely many $n$ in $\mathbb{N}$ such that $(n^2+k)\mid n!$ Show that for every $k\in\mathbb N$ there exist infinitely many $n \in \mathbb{N}$ such that $(n^2+k)\mid n!$. I have a similar problem: Show that there are infinitely many $n \in \mathbb{N}$ such that $$(n^2+1)\mid n!$$ Solution: We consider the Pell equation $n^2+1=5y^2$. This Pell equation has the solution $(n,y)=(2,1)$, so it has infinitely many solutions $(n,y)$, and $2y=2\sqrt{\dfrac{n^2+1}{5}}<n$. So $5,y,2y\in \{1,2,3,\cdots,n\}$, hence $5y^2\mid n!$, and then $(n^2+1)\mid n!$. For general $k$ I have tried to find a Pell equation, but I failed. Thank you to everyone who can help.
Similar to your solution for $k=1$: consider the Pell equation $n^2 + k = (k^2+k) y^2$. This has the solution $(n,y) = (k,1)$, hence it has infinitely many solutions. Note that $k^2 + k = k(k+1) $ is never a square for $k\geq 2$, hence this is a Pell equation of the form $n^2 - (k^2+k) y^2 = -k$. Then, $2y = 2\sqrt{ \frac{ n^2+k} { k^2 +k } } \leq n$ (for $k \geq 2$, $n\geq 2$) always.
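A Python sketch of this (the helper name `solutions` is introduced here): it generates solutions of $n^2-(k^2+k)y^2=-k$ from the seed $(k,1)$ by composing with the fundamental solution $(2k+1,2)$ of $x^2-(k^2+k)y^2=1$, a standard Pell-equation step, then checks the divisibility directly:

```python
from math import factorial

def solutions(k, count=4):
    """Yield (n, y) with n^2 - (k^2 + k) y^2 = -k, starting from the seed (k, 1)."""
    D = k * k + k
    n, y = k, 1
    for _ in range(count):
        yield n, y
        # Compose with the fundamental solution (2k+1, 2) of x^2 - D y^2 = 1.
        n, y = (2 * k + 1) * n + 2 * D * y, 2 * n + (2 * k + 1) * y

k = 2
for n, y in list(solutions(k))[1:]:          # skip the seed; it is too small
    assert n * n + k == (k * k + k) * y * y  # really a solution
    print(n, factorial(n) % (n * n + k) == 0)  # expect True for each n
```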
{ "language": "en", "url": "https://math.stackexchange.com/questions/393617", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Prove that $3^n>n^4$ if $n\geq8$ Proving that $3^n>n^4$ if $n\geq8$. I tried mathematical induction starting from $n=8$ as the base case, but I'm stuck when I have to use the fact that the statement is true for $n=k$ to prove it for $n=k+1$. Any ideas? Thanks!
You want to show $3^n>n^4$. This is equivalent to showing $e^{n\ln3}>e^{4\ln n}$, i.e. that $n\ln 3>4\ln n$. It suffices to show $\frac{n}{\ln n }>\frac{4}{\ln 3}$. Since $\frac{8}{\ln 8}>\frac{4}{\ln 3}$ and since $f(x)=\frac{x}{\ln x}$ has a positive first derivative for $x\geq 8$, the result follows.
{ "language": "en", "url": "https://math.stackexchange.com/questions/393695", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 5, "answer_id": 2 }
Parent and children of a full d-node tree I have a full $d$-node tree (by that I mean a tree in which each node has exactly $d$ children). My question is: if I take the node at position $k$ of this tree, at which positions do I find its children and its parent? For example, in a full binary tree, the positions where I can find the parent, left child and right child of node $k$ are $\dfrac k2, 2k, 2k+1$ respectively. Thanks in advance.
It looks like you're starting numbering at 1 for the root, and numbering "left to right" on each level/depth. If the root has depth $0$, then there are $d^{t}$ nodes with depth $t$ from the root in a full $d$-ary tree. Also, the depth of node $k$ is $\ell_{k}=\lceil\log_{d}(k(d-1)+1)\rceil-1$. The number of nodes at depths below the depth of node $k$, then, is $$n_{k} = \sum_{i = 0}^{\ell_{k}-1}d^{i} = \frac{d^{\ell_{k}}-1}{d-1}.$$ So indexing on the row containing node $k$ starts at $n_{k}+1$. Children of node $k$: The position of $k$ in its row is just $p_{k}=k-n_{k}$. The children of node $k$ have depth one more than that of $k$, and in their respective row, the first child $c$ has position $d(p_{k}-1)+1=d(k-n_{k}-1)+1$, so the $j^{th}$ child of $k$ has position $$j + \sum_{i = 0}^{\ell_{k}}d^{i} + d\left(k-\sum_{i = 0}^{\ell_{k}-1}d^{i}-1\right) = j + \frac{d^{\ell_{k}+1}-1}{d-1} + d\left(k-\frac{d^{\ell_{k}}-1}{d-1}-1\right)$$ $$=j +dk-d +1$$ Parent of node $k$: Since this formula applies for the parent $p$ of $k$, for $1 \leq j \leq d$ $$1+dp-d +1\leq k \leq d+dp - d + 1 = dp+1$$ $$\Rightarrow dp - d \leq k-1 \leq dp$$ $$\Rightarrow p-1 \leq \frac{k-1}{d} \leq p$$ so $p = \lceil\frac{k-1}{d}\rceil$.
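A small Python sketch of the resulting closed forms (1-indexed, level-order positions), cross-checked by verifying that every child points back to its parent:

```python
from math import ceil

def children(k, d):
    """1-indexed level-order positions of the d children of node k."""
    return [d * k - d + 1 + j for j in range(1, d + 1)]

def parent(k, d):
    """Position of the parent of node k (k >= 2)."""
    return ceil((k - 1) / d)

# Consistency check: every child of k has parent k.
for d in range(2, 6):
    for k in range(1, 200):
        assert all(parent(c, d) == k for c in children(k, d))
print("ok")
```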
{ "language": "en", "url": "https://math.stackexchange.com/questions/393771", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 1 }
Construct a linear programming problem for which both the primal and the dual problem have no feasible solution Construct (that is, find its coefficients) a linear programming problem with at most two variables and two restrictions, for which both the primal and the dual problem have no feasible solution. For a linear programming problem to have no optimal solution it needs to be either unbounded or just not have a feasible region at all, I think. Therefore, I know how I should construct a problem if it only had to hold for the primal problem. However, could anyone tell me how I should find one for which both the primal and the dual problem have no feasible solution? Thank you in advance.
Consider an LP: $$ \begin{aligned} \min \; & x_{1}+2 x_{2} \\ \text { s.t. } & x_{1}+x_{2}=1 \\ & 2 x_{1}+2 x_{2}=3 \end{aligned} $$ and its dual: $$ \begin{aligned} \max\; & y_{1}+3 y_{2} \\ \text { s.t. } & y_{1}+2 y_{2}=1 \\ & y_{1}+2 y_{2}=2 \end{aligned} $$ They are both infeasible.
{ "language": "en", "url": "https://math.stackexchange.com/questions/393818", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11", "answer_count": 2, "answer_id": 0 }
Notation question (bilinear form) So I have to prove the following: for a given isomorphism $\phi : V\rightarrow V^*$, where $V^*$ is the dual space of $V$, show that $s_{\phi}(v,w)=\phi(v)(w)$ defines a non-degenerate bilinear form. My question: does $\phi(v)(w)$ denote the linear function that $v$ is mapped to, applied to $w$? (In this case I had serious trouble showing linearity in the second argument; really confusing.) Or maybe it just means $\phi(v)$ times $w$, where $w$ is the scalar value (we get $w$ by applying $v$ to the linear function it is mapped to)? I just started today with dual spaces and am trying my best with the notation, but I couldn't figure it out. Please, if you have any idea, help me with the notation; I will solve the problem on my own.
Note that $\phi$ is a map from $V$ to $V^\ast$. So for each $v \in V$, we get an element $\phi(v) \in V^\ast$. Now $V^\ast$ is the space of linear functionals on $V$, i.e. $$V^\ast = \{\alpha: V \longrightarrow \Bbb R \mid \alpha \text{ is linear}\}.$$ So each element of $V^\ast$ is a function from $V$ to $\Bbb R$. Then for $v, w \in V$, the notation $$\phi(v)(w)$$ means $$(\phi(v))(w),$$ i.e. the function $\phi(v): V \longrightarrow \Bbb R$ takes $w \in V$ as its argument and we get an element of $\Bbb R$. So $s_\phi$ is really a map of the form $$s_\phi: V \times V \longrightarrow \Bbb R,$$ $$(v, w) \mapsto (\phi(v))(w).$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/393895", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
Moment generating function of a stochastic integral Let $(B_t)_{t\geq 0}$ be a Brownian motion and $f(t)$ a square integrable deterministic function. Then: $$ \mathbb{E}\left[e^{\int_0^tf(s) \, dB_s}\right] = \mathbb{E}\left[e^{\frac{1}{2}\int_0^t f^2(s) \, ds}\right] $$ Now assume $(X_t)_{t\geq 0}$ is such that $\left(\int_0^tX_sdB_s\right)_{t\geq 0}$ is well defined. Does $$ \mathbb{E}\left[e^{\int_0^tX_s \, dB_s}\right] = \mathbb{E}\left[e^{\frac{1}{2}\int_0^tX_s^2 \, ds}\right] $$ still hold?
If $X$ and $B$ are independent, yes (use the first result to compute the expectation conditional on $X$, then take the expectation). Otherwise, no. For a counterexample, consider $X=B$ and use Itô's formula $\mathrm d (B^2)=2B\mathrm dB+\mathrm dt$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/393983", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Matrix $BA\neq I_{3}$ If $\text{A}$ is a $2\times3$ matrix and $\text{B}$ is a $3\times2$ matrix, prove that $\text{BA}=I_{3}$ is impossible. So I've been thinking about this, and so far I'm thinking that a homogeneous system is going to be involved in this proof. Maybe something about one of the later steps being that the last row of the matrix would be $0\neq \text{a}$, where a is any real number. I've also been thinking that for a $2\times3$ matrix, there is a (non-zero) vector $[x,y,z]$ such that $\text{A}[x,y,z]=[0,0]$ because the dot product could possibly yield $0$. I'm not sure if that's helpful at all though. Trouble is I'm not really too sure how to continue, or even begin. Any help would be appreciated.
Consider the possible dimension of the columnspace of the matrix $BA$. In particular, since $A$ has at most a two-dimensional columnspace, $BA$ has at most a two-dimensional columnspace. Stated more formally, if $A$ has rank $r_a$ and $B$ has rank $r_b$, then $BA$ has rank at most $\min\{ r_a, r_b \}$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/394059", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Are the graphs of these two functions equal to each other? The functions are: $y=\frac{x^2-4}{x+2}$ and $(x+2)y=x^2-4$. I've seen this problem some time ago, and the official answer was that they are not. My question is: Is that really true? The functions obviously misbehave when $x = -2$, but aren't both of them indeterminate forms at that point? Why are they different?
$(1)$ The first function is undefined at $x = -2$; $(2)$ the second equation is defined at $x = -2$: $$(x + 2) y = x^2 - 4 \iff xy + 2y = x^2 - 4\tag{2}$$ Its graph includes the entire line $x = -2$: at $x = -2$, equation $(2)$ holds for every value of $y$, so every point of the form $(-2, y)$ is included in the graph of equation $(2)$. Not so with the first equation, whose graph is the line $y = x - 2$ with the single point $(-2, -4)$ omitted. ADDED: Wolfram Alpha's plot of equation $(1)$ fails to show the omission at $x = -2$, while its plot of equation $(2)$ does include the vertical line $x = -2$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/394128", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 0 }
Working with subsets, as opposed to elements. Especially in algebraic contexts, we can often work with subsets, as opposed to elements. For instance, in a ring we can define $$A+B = \{a+b\mid a \in A, b \in B\},\quad -A = \{-a\mid a \in A\}$$ $$AB = \{ab\mid a \in A, b \in B\}$$ and under these definitions, singletons work exactly like elements. For instance, $\{a\}+\{b\} = \{c\}$ iff $a+b=c$. Now suppose we're working in an ordered ring. What should $A \leq B$ mean? I can think of at least two possible definitions. * *For all $a \in A$ and $b \in B$ it holds that $a \leq b$. *There exists $a \in A$ and $b \in B$ such that $a \leq b$. Also, a third definition was suggested in the comments: *For all $a \in A$ there exists $b \in B$ such that $a \leq b$. Note that according to all three definitions, we have $\{a\} \leq \{b\}$ iff $a \leq b$. That's because "for all $x \in X$" and "there exists $x \in X$" mean the same thing whenever $X$ is a singleton set. What's the natural thing to do here? (1), (2), or something else entirely? Note that our earlier definitions leveraged existence. For example: $$A+B = \{x\mid \exists a \in A, b \in B : a+b=x\}.$$
Since we're talking about ordered rings, maybe the ordering could be applied to each comparison, too; i.e., enumerate both sets and require $$ a_n \le b_n \quad\text{for all } n$$ with $a_n\in A$, $b_n \in B$. If you were to apply this to the sets of even integers (greater than 0) and odd integers it might look like $1<2$, $3<4$, etc. Of course, cardinality comes into play since you need an $n$-th element to exist in both sets. Maybe the caveat could be $n\le\min(|A|,|B|)$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/394182", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "13", "answer_count": 3, "answer_id": 2 }
Is there a name for this given type of matrix? Given a finite set of symbols, say $\Omega=\{1,\ldots,n\}$, is there a name for an $n\times m$ matrix $A$ such that every column of $A$ contains each elements of $\Omega$? (The motivation for this question comes from looking at $p\times p$ matrices such that every column contains the elements $1,\ldots, p$).
A sensible definition for this matrix would be a column-Latin rectangle, since the transpose is known as a row-Latin rectangle. Example: A. Drisko, Transversals in Row-Latin Rectangles, JCTA 81 (1998), 181-195. The $m=n$ case is referred to as a column-Latin square in the literature (this is in widespread use). I found one example of the use of column-Latin rectangle here (ref.; .ps file).
{ "language": "en", "url": "https://math.stackexchange.com/questions/394332", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 1, "answer_id": 0 }
How to prove these two ways give the same numbers? How to prove these two ways give the same numbers? Way 1: Step 1 : 73 + 1 = 74. Get the odd part of 74, which is 37 Step 2 : 73 + 37 = 110. Get the odd part of 110, which is 55 Step 3 : 73 + 55 = 128. Get the odd part of 128, which is 1 Continuing this operation (with 73 + 1) repeats the same steps as above, in a cycle. Way 2: Step 1: (2^x) * ( 1/73) > 1 (7 is the smallest number for x) (2^7) * ( 1/73) - 1 = 55/73 Step 2: (2^x) * (55/73) > 1 (1 is the smallest number for x) (2^1) * (55/73) - 1 = 37/73 Step 3: (2^x) * (37/73) > 1 (1 is the smallest number for x) (2^1) * (37/73) - 1 = 1/73 Repeating the steps with the fraction 1/73 goes back to step 1, and repeats them in a cycle. The two ways have the same numbers $\{1, 37, 55\}$ in the 3 steps. How can we prove that the two ways are equivalent and give the same number of steps?
Let $M=73$ (or any odd prime for that matter). To formalize your first "way": You start with an odd number $a_1$ with $1\le a_1<M$ (here specifically: $a_1=1$) and then recursively let $a_{n+1}=u$, where $u$ is the unique odd number such that $M+a_n=2^lu$ with $l\in\mathbb N_0$. By induction, one finds that $a_n$ is an odd integer and $1\le a_n<M$. To formalize your second "way": You start with $b_1=\frac c{M}$ where $1\le c<M$ is odd (here specifically: $c=1$) and then recursively let $b_{n+1}=2^kb_n-1$ where $k\in\mathbb N$ is chosen minimally with $2^kb_n>1$. Clearly, this implies by induction that $0< b_n\le 1$ and $Mb_n$ is an odd integer for all $n$. Then we have Proposition. If $a_{m+1}=M b_n$, then $a_m=M b_{n+1}$. Proof: Using $b_{n+1}=2^kb_n-1$, $M+a_m=2^la_{m+1}$, and $a_{m+1}=M b_n$, we find $$Mb_{n+1}=2^kMb_n-M = 2^ka_{m+1}-M=2^{k-l}(a_m+M)-M.$$ If $k>l$, we obtain that $Mb_{n+1}\ge 2a_m+M>M$, contradicting $b_{n+1}\le 1$. And if $k<l$, we obtain $Mb_{n+1}\le \frac12 a_m-\frac 12 M<0$, contradicting $b_{n+1}>0$. Therefore $k=l$ and $$ Mb_{n+1} = a_m$$ as was to be shown. $_\square$ Since there are only finitely many values available for $a_n$ (namely the odd naturals below $M$), the sequence $(a_n)_{n\in \mathbb N}$ must be eventually periodic, that is, there exist $p>0$ and $r\ge1$ such that $a_{n+p}=a_n$ for all $n\ge r$. Let $r$ be the smallest natural making this true. If we assume $r>1$, then by choosing $c=a_{r-1+p}$ in the definition of the sequence $(b_n)_{n\in\mathbb N}$ we can enforce $Mb_1=a_{r-1+2p}=a_{r-1+p}$ and with the proposition find $Mb_2=a_{r-1+p}=a_{r-1}$ contradicting minimality of $r$. We conclude that $r=1$, that is, the sequence $(a_n)_{n\in\mathbb N}$ is immediately periodic. Now the proposition implies that the sequence $(b_n)_{n\in\mathbb N}$ is also immediately periodic: Let $a_1=Mb_1$. Then by periodicity of $(a_n)$, we have $Mb_1=a_{1+p}$, by induction $Mb_k=a_{2+p-k}$ for $1\le k\le p+1$. Especially, $b_{p+1}=b_1$ and hence by induction $b_{n+p}=b_n$ for all $n$. Finally, we use the fact that $M$ is prime. Therefore the $Mb_n$ are precisely the numerators of the $b_n$. Our results above then show that these numerators are (if we start with $b_1=\frac{a_1}M$) precisely the same periodic sequence as $(a_n)$, but walking backwards. This is precisely what you observed. EDIT: As remarked by miket, $M$ need only be odd but not necessarily prime. To see that, one must observe that the $a_n$ are always relatively prime to $M$ if one starts with $a_1$ relatively prime to $M$. Consequently, the $Mb_n$ are still the numerators of the $b_n$ (i.e. their denominators are $M$ in shortest terms).
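A direct simulation of both procedures for $M=73$, as a Python sketch; it confirms that the numerators produced by way 2 run through the same cycle as way 1, backwards:

```python
def odd_part(m):
    while m % 2 == 0:
        m //= 2
    return m

M = 73

# Way 1: a_{n+1} = odd part of (M + a_n), starting from a_1 = 1.
a, cycle1 = 1, []
for _ in range(3):
    a = odd_part(M + a)
    cycle1.append(a)

# Way 2: numerators c of c/M with c -> 2^k c - M, k minimal with 2^k c > M.
c, cycle2 = 1, []
for _ in range(3):
    while c <= M:   # find the minimal k with 2^k * (c/M) > 1
        c *= 2
    c -= M
    cycle2.append(c)

print(cycle1)  # [37, 55, 1]
print(cycle2)  # [55, 37, 1]  (the same cycle, walked backwards)
```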
{ "language": "en", "url": "https://math.stackexchange.com/questions/394408", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Equality between the index of the field with $p^{n}$ elements over $\mathbb{F}_{p}$ and $n$ Can someone explain this? $ \left[\mathbb{F}_{p^{n}}:\mathbb{F}_{p}\right]=n $
$$|\Bbb F_p|=p\;,\;\;|\Bbb F_{p^n}|=p^n$$ and since any element in the latter is a unique linear combination of some (basis) elements of it with scalars from the former, counting shows that those basis elements must be exactly $\,n\,$ in number.
{ "language": "en", "url": "https://math.stackexchange.com/questions/394470", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
General Solution of Diophantine equation Having the equation: $$35x+91y = 21$$ I need to find its general solution. I know $\gcd(35,91) = 7$, so I can solve $35x+91y = 7$ to find $x = -5, y = 2$. Hence a solution to $35x+91y = 21$ is $x = -15, y = 6$. From here, however, how do I move on to finding the set of general solutions? Any help would be very much appreciated! Cheers
Hint: If $35x + 91y = 21$ and $35x^* + 91y^* = 21$ for some $(x,y)$ and $(x^*, y^*)$, we can subtract the two equalities and divide by $7$ to get $5(x-x^*) + 13(y-y^*) = 0$. What does this tell us about the relation between any two solutions? Now, since $5$ and $13$ share no common factor and we're dealing with integers, $13$ must divide $(x-x^*)$. In other words, $x = x^* + 13k$ for some integer $k$, and substituting it into the equality yields $y = y^* - 5k$. Thus, once you have one solution $(x^*,y^*)$, all of them can be expressed as $(x^*+13k, y^*-5k)$.
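A quick verification that the parametrized family really consists of solutions, as a Python sketch:

```python
# Particular solution of 35x + 91y = 21 (from the Euclidean step, times 3).
x0, y0 = -15, 6
for k in range(-3, 4):
    x, y = x0 + 13 * k, y0 - 5 * k
    assert 35 * x + 91 * y == 21   # 35*13 = 91*5 = 455, so the sum is unchanged
print("all checked")
```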
{ "language": "en", "url": "https://math.stackexchange.com/questions/394529", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 1 }
Comparing $\sqrt{1001}+\sqrt{999}\ , \ 2\sqrt{1000}$ Without the use of a calculator, how can we tell which of these are larger (higher in numerical value)? $$\sqrt{1001}+\sqrt{999}\ , \ 2\sqrt{1000}$$ Using the calculator I can see that the first one is 63.2455453 and the second one is 63.2455532, but can we tell without touching our calculators?
You can tell without calculation if you can visualize the graph of the square-root function; specifically, you need to know that the graph is concave (i.e., it opens downward). Imagine the part of the graph of $y=\sqrt x$ where $x$ ranges from $999$ to $1001$. $\sqrt{1000}$ is the $y$-coordinate of the point on the graph directly above the midpoint, $1000$, of that interval. $\frac12(\sqrt{999}+\sqrt{1001})$ is the average of the $y$-coordinates at the ends of this segment of the graph, so it's the $y$-coordinate of the point directly above $x=1000$ on the chord of the graph joining those two ends. The concavity of the graph shows that the chord lies below the graph. So $\frac12(\sqrt{999}+\sqrt{1001})<\sqrt{1000}$. Multiply by $2$ to get the numbers in your question.
{ "language": "en", "url": "https://math.stackexchange.com/questions/394648", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "25", "answer_count": 11, "answer_id": 9 }
Limit of $\lim_{x \to 0}\left (x\cdot \sin\left(\frac{1}{x}\right)\right)$ is $0$ or $1$? WolframAlpha says $\lim_{x \to 0} x\sin\left(\dfrac{1}{x}\right)=0$ but I've found it $1$ as below: $$ \lim_{x \to 0} \left(x\sin\left(\dfrac{1}{x}\right)\right) = \lim_{x \to 0} \left(\dfrac{1}{x}x\dfrac{\sin\left(\dfrac{1}{x}\right)}{\dfrac{1}{x}}\right)\\ = \lim_{x \to 0} \dfrac{x}{x} \lim_{x \to 0} \dfrac{\sin\left(\dfrac{1}{x}\right)}{\dfrac{1}{x}}\\ = \lim_{x \to 0} 1 \\ = 1? $$ I wonder where I'm wrong...
$$\lim_{x \to 0} \left(x\cdot \sin\left(\dfrac{1}{x}\right)\right) = \lim_{\large\color{blue}{\bf x\to 0}} \left(\frac{\sin\left(\dfrac{1}{x}\right)}{\frac 1x}\right) = \lim_{\large\color{blue}{\bf x \to \pm\infty}} \left(\frac{\sin x}{x}\right) = 0 \neq 1$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/394687", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 7, "answer_id": 1 }
Smooth maps on a manifold lie group $$ \operatorname{GL}_n(\mathbb R) = \{ A \in M_{n\times n} | \det A \ne 0 \} \\ \begin{align} &n = 1, \operatorname{GL}_n(\mathbb R) = \mathbb R - \{0\} \\ &n = 2, \operatorname{GL}_n(\mathbb R) = \left\{\begin{bmatrix}a&b\\c&d\end{bmatrix}\Bigg| ad-bc \ne 0\right\} \end{align} $$ $(\operatorname{GL}_n(\mathbb R),\cdot)$ is a group. * *$AB$ is invertible if $A$ and $B$ are invertible. *$A(BC)=(AB)C$ *$I=\begin{bmatrix}1&0\\0&1\end{bmatrix}$ *$A^{-1}$ is invertible if $A$ is invertible. $$(\operatorname{GL}_n(\mathbb R) = \det{}^{-1}(\mathbb R\setminus\{0\}))$$ $\det{}^{-1}(\mathbb R\setminus\{0\})$ is open in $M_{n\times n}(\mathbb R)$. $\det : M_{n\times n}(\mathbb R) \to \mathbb R$ is continuous, why? $\dim \operatorname{GL}_n(\mathbb R) = n^2$, why? $(\operatorname{GL}_n(\mathbb R),\cdot)$ is a Lie group if: $$ \mu : G\times G \to G \\ \mu(A,B) = A\cdot B \:\text{ is smooth} \\ I(A) = A^{-1} \:\text{ is smooth} $$ How can I show this? I want to show that the general linear group is a Lie group. I could not do the last step. How can I show that $AB$ and $A^{-1}$ are smooth? Please help me, I want to learn this. Thanks. Here are my handwritten notes: http://i.stack.imgur.com/tkoMy.jpg
Well, first you have to decide what exactly the "topology" on matrices is. Suppose we consider matrices just as vectors in $\mathbb{R}^{n^2}$, with the usual metric topology. Matrix multiplication by, say, $A$ takes a matrix $B$ to another $\mathbb{R}^{n^2}$ vector whose entries are polynomials in the components of $B$, and polynomial maps are smooth. The other bit is similar if you know what Cramer's rule is: it expresses the entries of $A^{-1}$ as polynomials in the entries of $A$ divided by $\det A$, which is smooth wherever $\det A \ne 0$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/394743", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Why is boundary information so significant? -- Stokes's theorem Why is it that there are so many instances in analysis, both real and complex, in which the values of a function on the interior of some domain are completely determined by the values which it takes on the boundary? I know that this has something to do with the general version of Stokes's theorem, but I'm not advanced enough to understand this yet -- does anyone have a (semi) intuitive explanation for this kind of phenomenon?
This is because many phenomena in nature can be described by well-defined fields. When we look at the boundaries of some surface enclosing the fields, it tells us everything we need to know. Take a conservative field, such as an electric or gravitational field. If we want to know how much energy needs to be added or removed to move something from one spot in the field to another, we do not have to look at the path. We just look at the initial and final location, and the potential energy equation for the field will give us the answer. The endpoints of a path are boundaries in a 1D space, but the idea extends to surfaces bounded by curves and so on. This is not so strange. For instance, to subtract two numbers, like 42 - 17, we do not have to look at all the numbers in between, like 37, right? 37 cannot "do" anything that will wreck the subtraction which is determined only by the values at the boundary of the interval.
{ "language": "en", "url": "https://math.stackexchange.com/questions/394797", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "21", "answer_count": 6, "answer_id": 5 }
Counting Problem - N unique balls in K unique buckets w/o duplication $\mid$ at least one bucket remains empty and all balls are used I am trying to figure out how many ways one can distribute $N$ unique balls in $K$ unique buckets without duplication such that all of the balls are used and at least one bucket remains empty in each distribution? Easy, I thought. I'll just hold a bucket in reserve, distribute the balls, and place the empty bucket. I get: $ K\cdot N! / (N-K-1)! $ Even were I sure this handles the no duplicates condition, what if $K \geq N$? Then I get a negative factorial in the denominator. Is the solution correct and/or is there a more general solution? Thanks!
A slightly different approach using the twelvefold way. If $K>N$ then it doesn't matter how you distribute the balls since at least one bucket will always be empty. In this case we are simply counting functions from an $N$-element set to a $K$-element set. Therefore the number of distributions is $K^N$. If $K=N$ then the only bad assignments are the ones in which every bucket contains precisely one ball. This happens in precisely $N!$ ways, so just subtract out these cases for a total of $N^N - N!$ distributions. If $K < N$, we first choose a number of buckets which cannot be filled and then we fill the remaining buckets. If we choose $m$ buckets to remain empty, then the remaining $K-m$ buckets must be filled surjectively. The number of surjections for each $m$ is $$(K-m)!{N\brace K-m}$$ where the braced term is a Stirling number of the second kind. Summing over $m$ gives the required result $$\sum_{m=1}^{K-1}(K-m)!{N\brace K-m}\binom{K}{m}$$ I am not sure if this simplifies or not. In summary, if we let $f(N,K)$ denote the number of distributions, then $$f(N,K) = \begin{cases}K^N & K > N\\ N^N - N! & K=N\\ \sum_{m=1}^{K-1}(K-m)!{N\brace K-m}\binom{K}{m} & K < N\end{cases}$$
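A quick cross-check of the case analysis against brute force, as a Python sketch (`stirling2`, `f`, and `brute` are helper names introduced here):

```python
from itertools import product
from math import comb, factorial

def stirling2(n, k):
    """Stirling numbers of the second kind, by the standard recurrence."""
    if n == k == 0:
        return 1
    if n == 0 or k == 0:
        return 0
    return k * stirling2(n - 1, k - 1) + stirling2(n - 1, k)

def f(N, K):
    if K > N:
        return K ** N
    if K == N:
        return N ** N - factorial(N)
    return sum(factorial(K - m) * stirling2(N, K - m) * comb(K, m)
               for m in range(1, K))

def brute(N, K):
    # Each of the N distinct balls picks one of the K distinct buckets;
    # count the assignments that leave at least one bucket empty.
    return sum(1 for assign in product(range(K), repeat=N)
               if len(set(assign)) < K)

for N in range(1, 6):
    for K in range(1, 6):
        assert f(N, K) == brute(N, K), (N, K)
print("ok")
```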
{ "language": "en", "url": "https://math.stackexchange.com/questions/394950", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Finding the area of a portion of a plane? I need help with a problem I got in class today; any help would be appreciated! Find the area of the portion of the plane $6x+4y+3z=12$ that passes through the first octant, where $x, y$, and $z$ are all positive. I graphed this plane and got all the vertices, but I am not sure how my teacher wants us to approach this problem. Do I calculate the line integral of each side of the triangle separately and add them together? Because we are on the section of line integrals, flux, Green's theorem, etc.
If you have the three vertices, you can calculate the lengths of the three sides and use Heron's formula.
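For this particular plane the three vertices are the intercepts $(2,0,0)$, $(0,3,0)$ and $(0,0,4)$ (set two variables to zero in $6x+4y+3z=12$). As a quick cross-check, equivalent to Heron's formula, the cross product gives the area directly: $$\vec u=(0,3,0)-(2,0,0)=(-2,3,0),\qquad \vec v=(0,0,4)-(2,0,0)=(-2,0,4),$$ $$\vec u\times\vec v=(12,8,6),\qquad \text{Area}=\tfrac12\,\|\vec u\times\vec v\|=\tfrac12\sqrt{144+64+36}=\tfrac12\sqrt{244}=\sqrt{61}\approx 7.81.$$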
{ "language": "en", "url": "https://math.stackexchange.com/questions/395013", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
Linear Algebra determinant and rank relation True or False? If the determinant of a $4 \times 4$ matrix $A$ is $4$ then its rank must be $4$. Is it false or true? My guess is true, because the matrix $A$ is invertible. But is there any counter-example? Please help me.
You're absolutely correct. The point of mathematical proof is that you don't need to go looking for counterexamples once you've found the proof. Beforehand that's very reasonable, but once you're done you're done. Determinant 4 is nonzero $\implies$ invertible $\implies$ full rank. Each of these is a standard proposition in linear algebra.
{ "language": "en", "url": "https://math.stackexchange.com/questions/395075", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Dimensions of vector subspaces in a direct sum are additive $V = U_1\oplus U_2~\oplus~...~ \oplus~ U_n~(\dim V < ∞)$ $\implies \dim V = \dim U_1 + \dim U_2 + ... + \dim U_n.$ [Using the result if $B_i$ is a basis of $U_i$ then $\cup_{i=1}^n B_i$ is a basis of $V$] Then it suffices to show $U_i\cap U_j-\{0\}=\emptyset$ for $i\ne j.$ If not, let $v\in U_i\cap U_j-\{0\}.$ Then \begin{align*} v=&0\,(\in U_1)+0\,(\in U_2)\,+\ldots+0\,(\in U_{i-1})+v\,(\in U_{i})+0\,(\in U_{i+1})+\ldots\\ & +\,0\,(\in U_j)+\ldots+0\,(\in U_{n})\\ =&0\,(\in U_1)+0\,(\in U_2)+\ldots+0\,(\in U_i)+\ldots+0\,(\in U_{j-1})+\,v(\in U_{j})\\ & +\,0\,(\in U_{j+1})+\ldots+0\,(\in U_{n}). \end{align*} Hence $v$ fails to have a unique linear sum of elements of $U_i's.$ Hence etc ... Am I right?
Yes, you're correct. Were you second guessing yourself? If so, no need to: your argument is "spot on". If you'd like to save yourself a little space, and work, you can write your sum as: $$ \dim V = \sum_{i = 1}^n \dim U_i$$ "...If not, let $v\in U_i\cap U_j-\{0\}.$ Then $$v= v(\in U_i) + \sum_{\large 1\leq \ell\leq n; \,\ell\neq i} 0(\in U_\ell)$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/395130", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 1, "answer_id": 0 }
About the induced vector measure of a Pettis integrable function (part 2) Notations: In what follows, $X$ stands for a Hausdorff LCTVS and $X'$ its topological dual. Let $(T,\mathcal{M},\mu)$ be a finite measure space, i.e., $T$ is a nonempty set, $\mathcal{M}$ a $\sigma$-algebra of subsets of $T$ and $\mu$ is a nonnegative finite measure on $\mathcal{M}$. Definition. A function $f:T\to X$ is said to be Pettis-integrable if * *for each $x'\in X'$, the composition map $$x'\circ f:T\to \mathbb{R}$$ is Lebesgue integrable and *for each $E\in \mathcal{M}$, there exists $x_E\in X$ such that $$x'(x_E)=\int_E(x'\circ f)d\mu$$ for all $x'\in X'$. In this case, $x_E$ is called the Pettis integral of $f$ over $E$ and is denoted by $$x_E=\int_E fd\mu.$$ Remark. Let $f:T\to X$ be Pettis-integrable. Define $$m_f:\mathcal{M}\to X$$ by $$m_f(E)=\int_E fd\mu$$ for any $E\in \mathcal{M}.$ The Hahn-Banach Theorem ensures that $x_E$ in the above definition is necessarily unique and so $m_f$ is a well-defined mapping. Moreover, the Orlicz-Pettis Theorem implies that the induced vector measure $m_f$ is countably additive, see for instance this. Question. With the above discussion, how do we show that $m_f$ is $\mu$-continuous? I would be thankful to someone who can help me...
Let $\mu(E)=0$. Take arbitrary $x'\in X'$; then $$ x'(m_f(E))=\int_E (x'\circ f)d\mu=0. $$ Since $x'\in X'$ is arbitrary, by a corollary of the Hahn-Banach theorem $m_f(E)=0$. Thus, $m_f\ll\mu$
{ "language": "en", "url": "https://math.stackexchange.com/questions/395164", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Finding the Taylor series of $f(z)=\frac{1}{z}$ around $z=z_{0}$ I was asked the following (homework) question: For each (different but constant) $z_{0}\in G:=\{z\in\mathbb{C}:\, z\neq0\}$ find a power series $\sum_{n=0}^{\infty}a_{n}(z-z_{0})^{n}$ whose sum is equal to $f(z)$ on some subset of $G$. Please specify exactly on which subset this last claim holds. Suggestion: Instead of calculating derivatives of $f$, try using geometric series in some suitable way. What I did: Denote $f(z)=\frac{1}{z}$ and note $G\subseteq\mathbb{C}$ is open. For any $z_{0}\in G$ the maximal $R>0$ s.t $f\in H(D(z_{0},R))$ is clearly $R=|z_{0}|$. By Taylor's theorem, $f$ has a power series in $E:=D(z_{0},R)$ (and this cannot be extended beyond this point, as that would imply that $f$ is holomorphic at $z=0$). I am able to find $f^{(n)}(z_{0})$ and to solve the exercise this way: I got that $$f^{(n)}(z)=\frac{(-1)^{n}n!}{z^{n+1}}$$ hence $$f(z)=\sum_{n=0}^{\infty}\frac{(-1)^{n}}{z_{0}^{n+1}}(z-z_{0})^{n};\, z\in E$$ but I am not able to follow the suggestion. I tried a small manipulation $$f(z)=\frac{1}{z-z_{0}+z_{0}}$$ and tried to work a bit with that, but I wasn't able to bring it to the form $$\frac{1}{1-\text{expression}}$$ or something similar that uses a known power series. Can someone please help me out with the solution that involves geometric series?
I think you were on the right track: $$\frac1z=\frac1{z_0+(z-z_0)}=\frac1{z_0}\cdot\frac1{1+\frac{z-z_0}{z_0}}=\frac1z_0\left(1-\frac{z-z_0}{z_0}+\frac{(z-z_0)^2}{z_{0}^2}-\ldots\right)=$$ $$=\frac1z_0-\frac{z-z_0}{z_0^2}+\frac{(z-z_0)^2}{z_0^3}+\ldots+\frac{(-1)^n(z-z_0)^n}{z_0^{n+1}}+\ldots$$ As you can see, this is just what you got but only using the expansion of a geometric series, without the derivatives explicitly kicking in...The above is true whenever $$\frac{|z-z_0|}{|z_0|}<1\iff |z-z_0|<|z_0|$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/395238", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
Prove that $\sum_{n=0}^{\infty} {\frac{n^2 2^n}{n^2 + 1}x^n} $ does not converge uniformly on its interval of convergence In calculus: Given $\displaystyle \sum_{n=0}^{\infty} {\frac{n^2 2^n}{n^2 + 1}x^n} $, prove that it converges for $-\frac{1}{2} < x < \frac{1}{2} $, and that it does not converge uniformly on the interval of convergence. So I said: the radius of convergence is $R \displaystyle= \lim_{n \to \infty} \left|\frac{C_n}{C_{n+1}} \right| = \frac{1}{2}$, so the series converges for all $x$ with $-\frac{1}{2} < x < \frac{1}{2}$. But how do I prove that it does not converge uniformly there? I read in a paper from the University of Kansas the statement: "Theorem: A power series converges uniformly to its limit in the interval of convergence." and they proved it. I'll be happy to get a direction.
According to the theorem Hagen von Eitzen cites, you know that the series converges uniformly (and even absolutely) on closed intervals contained in $]-1/2,1/2[$. So the problem must be at the endpoints. The definition of uniform convergence is: $ \forall \varepsilon>0,\exists N_ \varepsilon \in \mathbb N,\forall n \in \mathbb N,\quad [ n \ge N_ \varepsilon \Rightarrow \forall x \in A,\ d(f_n(x),f(x)) \le \varepsilon] $ So you just have to prove that there is an $\varepsilon$ ($1/3$ for instance, or any other) such that for any $N_ \varepsilon$ there will always be an $x$ (look for one close to $1/2$) with $d(f_n(x),f(x)) \ge \varepsilon $ for some $n \ge N_ \varepsilon$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/395358", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Sequence $(a_n)$ s.t $\sum\sqrt{a_na_{n+1}}<\infty$ but $\sum a_n=\infty$ I am looking for a positive sequence $(a_n)_{n=1}^{\infty}$ such that $\sum_{n=1}^{\infty}\sqrt{a_na_{n+1}}<\infty$ but $\sum_{n=1}^{\infty} a_n=\infty$. Thank you very much.
The simplest example I can think of is $\{1,0,1,0,...\}$. If you want your elements to be strictly positive, use some fast-converging sequence such as $n^{-4}$ in place of the zeroes.
{ "language": "en", "url": "https://math.stackexchange.com/questions/395438", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
A limit on binomial coefficients Let $$x_n=\frac{1}{n^2}\sum_{k=0}^n \ln\binom{n}{k}.$$ Find the limit of $x_n$. What I can do is just use Stolz formula. But I could not proceed.
$x_n=\frac{1}{n^2}\sum_{k=0}^{n}\ln{n\choose k}=\frac{1}{n^2}\ln\left(\prod_{k=0}^{n} {n\choose k}\right)=\frac{1}{n^2}\ln\left(\frac{n!^{\,n+1}}{(0!\,1!\cdots n!)^2}\right)$ since ${n\choose k}=\frac{n!}{k!(n-k)!}$. Taking the ratio of consecutive products, $e^{n^2x_n}=\left(\frac{n!^{\,n-1}}{(n-1)!^{\,n}}\right)e^{(n-1)^2x_{n-1}}=\left(\frac{n^{n}}{n!}\right)e^{(n-1)^2x_{n-1}}$. By Stirling's approximation, $n! \sim n^ne^{-n}\sqrt{2\pi n}$, so $e^{n^2x_n}\sim \left(\frac{e^n}{\sqrt{2\pi n}}\right)e^{(n-1)^2x_{n-1}}$, that is, $$n^2x_n-(n-1)^2x_{n-1} = \ln\frac{n^n}{n!} \sim n-\tfrac12\ln(2\pi n).$$ Summing this telescoping relation from $1$ to $n$ (heuristically, since the error terms do not affect the leading order) gives $$n^2x_n=\sum_{j=1}^{n}\ln\frac{j^{j}}{j!}\sim\sum_{j=1}^{n}\left(j-\tfrac12\ln(2\pi j)\right)=\frac{n(n+1)}{2}+O(n\ln n)\sim\frac{n^2}{2},$$ so $x_n\to\frac12$.
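A numeric check of this limit, as a Python sketch using log-Gamma to avoid huge integers:

```python
from math import lgamma

def x(n):
    # ln C(n,k) = lgamma(n+1) - lgamma(k+1) - lgamma(n-k+1)
    ln_binom = lambda k: lgamma(n + 1) - lgamma(k + 1) - lgamma(n - k + 1)
    return sum(ln_binom(k) for k in range(n + 1)) / n ** 2

for n in (10, 100, 1000, 5000):
    print(n, x(n))   # creeps up toward 0.5
```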
{ "language": "en", "url": "https://math.stackexchange.com/questions/395507", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11", "answer_count": 6, "answer_id": 3 }
Sum of monotonic increasing and monotonic decreasing functions I have a question regarding the sum of a monotonic increasing and a monotonic decreasing function. I would appreciate very much any help/direction: Consider an interval $x \in [x_0,x_1]$. Assume there are two functions $f(x)$ and $g(x)$ with $f'(x)\geq 0$ and $g'(x)\leq 0$. We know that $f(x_0)\leq 0$, $f(x_1)\geq 0$, but $g(x)\geq 0$ for all $x \in [x_0,x_1]$. I want to show that $q(x) \equiv f(x)+g(x)$ will cross zero only once. We know that $q(x_0)\leq 0$ and $q(x_1)\geq 0$. Is there a ready result that shows this, or how should I proceed to show it? Many thanks!!!
Alas, the answer is no. $$f(x)=\begin{cases}-4& x\in[0,2]\\ -2& x\in [2,4]\\0& x\in[4,6]\end{cases}$$ $$g(x)=\begin{cases}5 & x\in [0,1]\\3& x\in[1,3]\\1& x\in[3,5]\\ 0 & x\in[5,6]\end{cases}$$ $$q(x)=\begin{cases} 1 & x\in [0,1]\\ -1& x\in[1,2]\\ 1 & x\in[2,3]\\ -1 & x\in[3,4]\\ 1& x\in[4,5]\\ 0 & x\in[5,6]\end{cases}$$ This example could be made continuous and strictly monotone with some tweaking.
{ "language": "en", "url": "https://math.stackexchange.com/questions/395544", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
What is the Broader Name for the Fibonacci Sequence and the Sequence of Lucas Numbers? Fibonacci and Lucas sequences are very similar in their definition. However, I could just as easily make another series with a similar definition; an example would be: $$x_0 = 53$$ $$x_1 = 62$$ $$x_n = x_{n - 1} + x_{n - 2}$$ What I want to ask is, what is the general name for these types of sequences, where one term is the sum of the previous two terms?
Occasionally (as in the link posted by vadim123) you see "Fibonacci integer sequence". Lucas sequences (of which the Lucas sequence is but one example) are a slight generalization. Sometimes the term Horadam sequence is used instead. The general classification under which all of these fall is the linear recurrence relation. Most of the special properties of the Fibonacci sequence are inherited from either linear recurrence relations or divisibility sequences.
{ "language": "en", "url": "https://math.stackexchange.com/questions/395620", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Question on summation? Please, I need to know the proof that $$\left(\sum_{k=0}^{\infty }\frac{n^{k+1}}{k+1}\frac{x^k}{k!}\right)\left(\sum_{\ell=0}^{\infty }B_\ell\frac{x^\ell}{\ell!}\right)=\sum_{k=0}^{\infty }\left(\sum_{i=0}^{k}\frac{1}{k+1-i}\binom{k}{i}B_in^{k+1-i}\right)\frac{x^k}{k!}$$ where $B_\ell$, $B_i$ are Bernoulli numbers. (Maybe we should rename the index $k$ on one side to $j$ to keep the indices apart?) Anyway, I need to see how to get from the left side to the right. Thanks for all help.
$$\left(\sum_{k=0}^{\infty} \dfrac{n^{k+1}}{k+1} \dfrac{x^k}{k!} \right) \left(\sum_{l=0}^{\infty} B_l \dfrac{x^l}{l!}\right) = \sum_{k,l} \dfrac{n^{k+1}}{k+1} \dfrac{B_l}{k! l!} x^{k+l}$$ $$\sum_{k,l} \dfrac{n^{k+1}}{k+1} \dfrac{B_l}{k! l!} x^{k+l} = \sum_{m=0}^{\infty} \sum_{l=0}^{m} \dfrac{n^{m-l+1}}{m-l+1} \dfrac{B_l}{(m-l)! l!} x^{m}$$ This gives us $$\sum_{m=0}^{\infty} \sum_{l=0}^{m} \dfrac{B_l}{(m-l+1)! l!}n^{m-l+1} x^{m} = \sum_{m=0}^{\infty} \left(\sum_{l=0}^m \dfrac1{m-l+1} \dbinom{m}{l} B_l n^{m-l+1}\right)\dfrac{x^m}{m!}$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/395678", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Packing circles on a line On today's TopCoder Single-Round Match, the following question was posed (the post-contest write-up hasn't arrived yet, and their explanations often leave much to be desired anyway, so I thought I'd ask here): Given a maximum of 8 marbles and their radii, how would you put them next to each other on a line so that the distance between the lowest point on the leftmost marble and the lowest point on the rightmost marble is as small as possible? 8! is small enough number for brute forcing, so we can certainly try all permutations. However, could someone explain to me, preferably in diagrams, how to calculate that distance value given a configuration? Also, any kind of background information would be appreciated.
If I understand well, the centers of the marbles are on the line. In that case, we can fix a coordinate system such that the $x$-axis is the line, and the center $C_1$ of the first marble is the origin. Then, its lowest point is $P=(0,-r_1)$. Calculate the coordinates of the centers of the next circles: $$C_2=(r_1+r_2,0),\ C_3=(r_1+2r_2+r_3,0),\ \dots,\ \\C_n=(r_1+2r_2+\ldots+2r_{n-1}+r_n,\ 0)$$ The lowest point of the last circle is $Q=(r_1+2r_2+..+2r_{n-1}+r_n,-r_n)$. Now we can use the Pythagorean theorem to calculate the distance $PQ$: $$PQ^2=(r_1+2r_2+..+2r_{n-1}+r_n)^2+(r_1-r_n)^2= \\=\left(2\sum_{k=1}^n r_k-(r_1+r_n)\right)^2+(r_1-r_n)^2\,.$$ It follows that the distance is independent of the order of the intermediate marbles, only the first and the last one counts. Hopefully from here you can calculate the minimum of this expression.
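A small Python sketch that brute-forces all orders for random radii and confirms the observation that only the end marbles matter:

```python
from itertools import permutations
from math import hypot
import random

def span(order):
    """PQ distance when the marbles are laid out in this left-to-right order."""
    dx = 2 * sum(order) - order[0] - order[-1]  # x-distance between lowest points
    return hypot(dx, order[0] - order[-1])

radii = [random.uniform(1, 10) for _ in range(6)]
spans = {round(span(p), 9) for p in permutations(radii)}
print(len(spans))   # C(6,2) = 15: only the unordered end pair matters
print(min(spans))   # the minimum, found by brute force over the 15 end pairs
```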
{ "language": "en", "url": "https://math.stackexchange.com/questions/395756", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Help me prove this inequality : How would I go about proving this? $$ \displaystyle\sum_{r=1}^{n} \left( 1 + \dfrac{1}{2r} \right)^{2r} \leq n \displaystyle\sum_{r=0}^{n+1} \displaystyle\binom{n+1}{r} \left( \dfrac{1}{n+1} \right)^{r}$$ Thank you! I've tried so many things. I've tried finding a series I could compare one of the series to but nada, I tried to change the LHS to a geometric series but that didn't work out, please could someone give me a little hint? Thank you!
Here is the proof that $(1+1/x)^x$ is concave for $x\ge 1$. The second derivative of $(1+1/x)^x$ is $(1+1/x)^x$ times $$p(x)=\left(\ln\left(1+\frac{1}{x}\right)-\frac{1}{1+x}\right)^2-\frac{1}{x(1+x)^2}$$ Now for $x\ge 1$, we have $$\ln(1+1/x)-\frac{2}{1+x}\le \frac{1}{x}-\frac{2}{1+x}=\frac{1-x}{x(1+x)}\le 0$$ and $$\ln(1+1/x)\ge \frac{1}{x}-\frac{1}{2x^2}\ge 0,$$ so \begin{align*}p(x)&= \ln^2(1+1/x)-\frac{2\ln(1+1/x)}{1+x}+\frac{1}{(1+x)^2}-\frac{1}{x(1+x)^2}\\ &=\ln(1+1/x)(\ln(1+1/x)-2/(1+x))+\frac{x-1}{x(1+x)^2}\\ &\le \left(\frac{1}{x}-\frac{1}{2x^2}\right)\left(\frac{1}{x}-\frac{2}{1+x}\right)+\frac{x-1}{x(1+x)^2}\\ &=-\frac{(x-1)^2}{2x^3(1+x)^2}\le 0 \end{align*} proving that $(1+1/x)^x$ is concave for $x\ge 1$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/395829", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10", "answer_count": 3, "answer_id": 1 }
Baire's theorem from a point of view of measure theory According to Baire's theorem, for each countable collection of open dense subsets of $[0,1]$, their intersection $A$ is dense. Are we able to say something about the Lebesgue measure of $A$? Must it be positive? Of full measure? Thank you for help.
Let $q_1,q_2,\ldots$ be an enumeration of the rationals. Let $I_n^m$ be an open interval centered at $q_n$ with length at most $\frac{1}{m\,2^n}$. Then $\bigcup_n I_n^m$ is an open dense set with Lebesgue measure at most $\sum_n \frac{1}{m\,2^n}=\frac{1}{m}$. The intersection of these open dense sets has Lebesgue measure zero.
{ "language": "en", "url": "https://math.stackexchange.com/questions/395893", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 0 }
Calculating $\sqrt{28\cdot 29 \cdot 30\cdot 31+1}$ Is it possible to calculate $\sqrt{28 \cdot 29 \cdot 30 \cdot 31 +1}$ without any kind of electronic aid? I tried to factor it using equations like $(x+y)^2=x^2+2xy+y^2$ but it didn't work.
If you are willing to rely on the problem setter to make sure it is a natural, it has to be close to $29.5^2=841+29+.25=870.25$ The one's digit of the stuff under the square root sign is $1$, so it is either $869$ or $871$. You can either calculate and check, or note that two of the factors are below $30$ and only one above, which should convince you it is $869$.
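A one-line confirmation, as a Python sketch (using the exact integer square root):

```python
from math import isqrt

n = 28 * 29 * 30 * 31 + 1
print(isqrt(n), isqrt(n) ** 2 == n)   # 869 True
```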
{ "language": "en", "url": "https://math.stackexchange.com/questions/395962", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "16", "answer_count": 4, "answer_id": 0 }
Methods for determining the convergence of $\sum\frac{\cos n}{n}$ or $\sum\frac{\sin n}{n}$ As far as I know, the textbook approach to determining the convergence of series like $$\sum_{n=1}^\infty\frac{\cos n}{n}$$ and $$\sum_{n=1}^\infty\frac{\sin n}{n}$$ uses Dirichlet's test, which involves bounding the partial sums of the cosine or sine terms. I have two questions: * *Are there any other approaches to seeing that these series are convergent? I'm mostly just interested to see what other kinds of arguments might be made. *What's the best way to show that these two series are only conditionally convergent? I don't even know the textbook approach to that question.
Since you specifically said "Are there any other approaches to seeing that these series are convergent" I will take the bait and give an extremely sketchy argument that isn't meant to be a proof at all. The harmonic series $\sum_{n=1}^{\infty} \frac{1}{n}$ famously diverges, but very slowly. For large $n$ the numbers being added are so tiny that it takes exponential amounts of time for the millions of tiny terms to amount to much at all. But $\cos(n)$ will be negative roughly half the time, so there's no possibility of any slow steady accumulation. Thus our sum $\sum_{n=1}^{\infty} \frac{\cos n}{n}$ will remain bounded. And furthermore, the magnitude of the terms being added gets smaller and smaller, so in a manner that's roughly analogous to the logic behind the alternating series test, the sum will oscillate [with a period of roughly 6], but with a decreasing amplitude around a value that is an artifact of the first handful of terms.
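To see the heuristic in numbers, here is a tiny Python sketch of the partial sums; the comparison value $-\ln(2\sin\frac12)$ is the known closed form of the series, quoted here only as a reference point:

```python
import math

target = -math.log(2 * math.sin(0.5))   # known closed form, ~0.0420195
s = 0.0
for n in range(1, 1_000_001):
    s += math.cos(n) / n
    if n in (10, 1_000, 100_000, 1_000_000):
        print(f"n={n:>8}  partial={s:+.6f}  error={s - target:+.2e}")
```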
{ "language": "en", "url": "https://math.stackexchange.com/questions/396000", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "13", "answer_count": 3, "answer_id": 2 }
Does $\,x>0\,$ hint that $\,x\in\mathbb R\,$? Does $x>0$ suggest that $x\in\mathbb R$? For numbers not in $\,\mathbb R\,$ (e.g. in $\mathbb C\setminus \mathbb R$), their sizes can't be compared. So can I omit "$\,x\in\mathbb R\,$" and just write $\,x>0\,$? Thank you.
It really depends on context. But be safe; just say $x > 0, x\in \mathbb R$. Omitting the clarification can lead to misunderstanding. Including the clarification takes up less than a centimeter of space. The benefits of clarifying the domain greatly outweigh the consequences of omitting the clarification. Besides, one might want to know about rationals greater than $0$, or integers greater than $0$, and we would like to use $x \gt 0$ in those contexts, as well. ADDED NOTE: That doesn't mean that after having clarified the context, and/or defined the domain, you should still use the qualification "$x\in \mathbb R$" every time you subsequently write $x \gt 0$, in a proof, for example. But if there's any question in your mind about whether or not to include it, err on the side of inclusion.
{ "language": "en", "url": "https://math.stackexchange.com/questions/396049", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12", "answer_count": 7, "answer_id": 4 }
Simplifying this expression $(e^u-1)(e^u-e^l)$ Is it possible to write the following $$(e^u-1)(e^u-e^l)$$ as $$e^{f(u,l)}-1?$$
The product $(e^u-1)(e^l-1)$ evaluates to $e^{u+l}-e^u-e^l+1$, so if your question is whether there is some algebraic manipulation that brings this into the form of an exponential minus $1$ for general $l,u$ then the answer is no. In fact the range of the function $x\mapsto e^x-1$ in $\Bbb R$ is $(-1,\infty)$, and the product of two values in that range may be outside the range if one is negative and the other sufficiently large. For instance for $u=-1,l=10$ your product will be much less than $-1$, so it cannot be written as $e^x-1$ for any real number $x$. You might want to get around this by using complex numbers for $f(u,l)$, but then consider the example $u=-\ln2,l=\ln3$, for which you have $e^u-1=-\frac12$ and $e^l-1=2$, so the product $(e^u-1)(e^l-1)$ is $-\frac12\times2=-1$, which is not a value $e^x-1$ even for $x\in\Bbb C$, so this shows you just cannot define $f$ for general arguments $u,l$ at all.
{ "language": "en", "url": "https://math.stackexchange.com/questions/396102", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
Evaluating $\int_0^\infty \frac{\log (1+x)}{1+x^2}dx$ Can this integral be solved with contour integral or by some application of residue theorem? $$\int_0^\infty \frac{\log (1+x)}{1+x^2}dx = \frac{\pi}{4}\log 2 + \text{Catalan constant}$$ It has two poles at $\pm i$ and branch point of $-1$ while the integral is to be evaluated from $0\to \infty$. How to get $\text{Catalan Constant}$? Please give some hints.
Following the same approach in this answer, we have \begin{gather*} \int_0^\infty\frac{\ln^a(1+x)}{1+x^2}\mathrm{d}x=\int_0^\infty\frac{\ln^a\left(\frac{1+y}{y}\right)}{1+y^2}\mathrm{d}y\\ \overset{\frac{y}{1+y}=x}{=}(-1)^a\int_0^1\frac{\ln^a(x)}{x^2+(1-x)^2}\mathrm{d}x\\ \left\{\text{write $\frac{1}{x^2+(1-x)^2}=\Im\, \frac{1+i}{1-(1+i)x}$, where $\Im$ denotes the imaginary part}\right\}\\ =(-1)^a \Im \int_0^1\frac{(1+i)\ln^a(x)}{1-(1+i)x}\mathrm{d}x\\ =a!\ \Im\{\operatorname{Li}_{a+1}(1+i)\} \end{gather*} where the last step follows from using the integral form of the polylogarithm function: $$\operatorname{Li}_{a}(z)=\frac{(-1)^{a-1}}{(a-1)!}\int_0^1\frac{z\ln^{a-1}(t)}{1-zt}\mathrm{d}t.$$
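A quick mpmath cross-check of the $a=1$ case (verification only; all constants and functions used are standard mpmath ones):

```python
from mpmath import mp, quad, log, pi, catalan, im, polylog, mpc

mp.dps = 30
lhs = quad(lambda x: log(1 + x) / (1 + x**2), [0, mp.inf])
print(lhs)
print(pi / 4 * log(2) + catalan)   # the claimed closed form
print(im(polylog(2, mpc(1, 1))))   # the formula above with a = 1
```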
{ "language": "en", "url": "https://math.stackexchange.com/questions/396170", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "16", "answer_count": 9, "answer_id": 7 }
find out the value of $\dfrac {x^2}{9}+\dfrac {y^2}{25}+\dfrac {z^2}{16}$ If $(x-3)^2+(y-5)^2+(z-4)^2=0$, then find out the value of $$\dfrac {x^2}{9}+\dfrac {y^2}{25}+\dfrac {z^2}{16}$$ Just give a hint to start the solution.
Hint: What values does the function $x^2$ take (positive/negative)? What is the solution of the equation $x^2=0$? Can you find the solution of the equation $x^2+y^2=0$? Now, what can you say about the equation $(x-3)^2+(y-5)^2+(z-4)^2=0$? Can you find the values of $x,y,z$?
{ "language": "en", "url": "https://math.stackexchange.com/questions/396284", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Solving modular equations that gives GCD = 1 I have problems with understanding modular equations that gives GCD = 1. For example: $$3x \equiv 59 \mod 100$$ So I'm getting $GCD(3, 100) = 1$. Now: $1 = -33*3 + 100$ That's where the first question appears - I always guess those -33 and 1 (here) numbers...is there a way to solve them? And the second question - the answer to that equation, at least according to the book, is: {$-33*59 + 100k$} (k - integer) - and why is that so? Where did the 59 came from?
$$\begin{eqnarray} \text{Note} &&\ 1 &=&\ 3\, (-33)\ \ \ \, +\ \ \ \, 100\, (1) \\ \stackrel{\times\, 59\ }\Rightarrow && 59\, &=&\ 3\, (-33\cdot 59) + 100\, (59)\end{eqnarray}$$ Hence $\ 59 = 3x+100y \ $ has a particular solution $\rm\:(x,y) = (-33\cdot 59, 59).\, $ By linearity, the general solution is the sum of this and the solution of the associated homogeneous equation $\ 0 = 3x+100y,\,$ so $\, \frac{y}x = \frac{-3}{100},\ $ so $\ x = 100k,\ y = -3k .\,$ So the general solution is their sum $$(-33\cdot 59,\,59)\, +\, (100k, -3k)\, =\, (-33\cdot 59+100k,\, 59-3k)$$
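As for the "I always guess those numbers" worry: the coefficients $-33$ and $1$ come from the extended Euclidean algorithm, no guessing required. A minimal Python sketch:

```python
def extended_gcd(a, b):
    """Return (g, s, t) with g = gcd(a, b) and s*a + t*b == g."""
    if b == 0:
        return a, 1, 0
    g, s, t = extended_gcd(b, a % b)
    return g, t, s - (a // b) * t

g, s, t = extended_gcd(3, 100)
print(g, s, t)            # 1 -33 1, i.e. 1 = 3*(-33) + 100*1
x = (s * 59) % 100
print(x, (3 * x) % 100)   # x = 53 satisfies 3x = 59 (mod 100)
```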
{ "language": "en", "url": "https://math.stackexchange.com/questions/396358", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
HINT for summing digits of a large power I recently started working through the Project Euler challenges, but I've got stuck on #16 (http://projecteuler.net/problem=16) $2^{15} = 32768$ and the sum of its digits is $3 + 2 + 7 + 6 + 8 = 26$. What is the sum of the digits of the number $2^{1000}$? (since I'm a big fan of generality, my interpretation is to find a solution to the sum of digits of $a^b$ in base $c$, and obviously I'm trying to solve it without resorting to "cheats" like arbitrary-precision numbers). I guess this is simpler than I'm making it, but I've got no interest in being told the answer so I haven't been able to do a lot of internet searching (too many places just give these things away). So I'd appreciate a hint in the right direction. I know that $2^{1000} = 2^{2*2*2*5*5*5} = (((((2^2)^2)^2)^5)^5)^5$, and that the repeated sum of digits of powers of 2 follows the pattern $2, 4, 8, 7, 5, 1$, and that the last digit can be determined by an efficient pow-mod algorithm (which I already have from an earlier challenge), but I haven't been able to get further than that… (and I'm not even sure that those are relevant).
In this case, I'm afraid you just have to go ahead and calculate $2^{1000}$. There are various clever ways to do this, but for numbers this small a simple algorithm is fast enough. The very simplest is to work in base 10 from the start. Faster is to work in binary, and convert to base 10 at the end, but this conversion is not completely trivial. A compromise (on a 32-bit machine) would be to work in base 1000000000, so that you can double a number without overflow. Then at the end the conversion to base 10 is much simpler.
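Here is a minimal Python sketch of the base-$10^9$ compromise described above. (Python's built-in integers are already arbitrary precision, so the manual limb arithmetic is purely illustrative; it mirrors what you would write on a 32-bit machine without big-number support.)

```python
BASE = 10**9   # each list entry ("limb") holds nine decimal digits

def double(limbs):
    carry = 0
    for i, v in enumerate(limbs):
        carry, limbs[i] = divmod(2 * v + carry, BASE)
    if carry:
        limbs.append(carry)

limbs = [1]                # the number 1, stored little-endian
for _ in range(1000):      # compute 2**1000 by repeated doubling
    double(limbs)

# padding zeros inside limbs contribute nothing to the digit sum
digit_sum = sum(int(d) for limb in limbs for d in str(limb))
print(digit_sum)
```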
{ "language": "en", "url": "https://math.stackexchange.com/questions/396457", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 4, "answer_id": 0 }
Find the transform I have a paper with 3 points on it. I also have a photo of this paper. How can I determine where the paper is in the photo, if I know just the positions of these points? And are 3 points enough? It doesn't look like a linear transform: the paper turns into a trapezoid in the photo. But it should still be expressible mathematically. I was unsuccessfully looking for some function $\mathbb{R}^2\rightarrow\mathbb{R}^2$ which maps the coordinates of the points on the original paper to their coordinates in the photo, so I could just apply this function to the paper corners, but I have no idea what kind of transform it should be. Any help is appreciated :)
I believe a projective transformation is exactly right. But with a projective transformation you can take any four points in general position (no three collinear) to any other four. So you'd better add a fourth point to reconstruct the transformation. In terms of the linear algebra, you want to construct a $3\times 3$ matrix $A$, up to scalar multiples. You think of a point in $\mathbb R^2$ as a vector $X=\begin{bmatrix}1\\x\\y\end{bmatrix}\in\mathbb R^3$ and its image will be the vector $AX$ rescaled so that its first coordinate is likewise $1$. To see that you can determine $A$ from knowing where four points map, think of it this way. A linear map from $\mathbb R^3$ to $\mathbb R^3$ is determined by where it sends a basis. Given our three points in general position, they correspond to vectors $P$, $Q$, $R\in\mathbb R^3$ up to scaling. But now, given our fourth point in general position, we can rescale those vectors so that the fourth point is given by $P+Q+R$. Doing the corresponding construction with the four image points will now allow us to specify precisely where $A$ sends our basis vectors, up to one common scalar multiplication.
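For the computational side, here is a hedged NumPy sketch of the standard direct linear transform that recovers the matrix (called $H$ below, and using the $(x,y,1)$ coordinate ordering rather than the $(1,x,y)$ ordering above) from four correspondences; the coordinates are made-up examples:

```python
import numpy as np

def homography(src, dst):
    """3x3 matrix H (up to scale) with H @ (x, y, 1) ~ (u, v, 1) per pair."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([x, y, 1, 0, 0, 0, -u*x, -u*y, -u])
        rows.append([0, 0, 0, x, y, 1, -v*x, -v*y, -v])
    _, _, vt = np.linalg.svd(np.array(rows, float))
    return vt[-1].reshape(3, 3)   # null vector of the 8x9 system

src = [(0, 0), (1, 0), (1, 1), (0, 1)]          # paper corners (example)
dst = [(10, 12), (95, 18), (90, 80), (5, 75)]   # their pixels in the photo
H = homography(src, dst)
p = H @ np.array([0.5, 0.5, 1.0])               # map the paper's center
print(p[:2] / p[2])
```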
{ "language": "en", "url": "https://math.stackexchange.com/questions/396506", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
If we know the eigenvalues of a matrix $A$, and the minimal polynomial $m_A(\lambda)$, how do we find the Jordan form of $A$? We have just learned the Jordan Form of a matrix, and I have to admit that I did not understand the algorithm. Given $A = \begin{pmatrix} 1 & 1 & 1 & -1 \\ 0 & 2 & 1 & -1 \\ 1 & -1 & 2 & -1 \\ 1 & -1 & 0 & 1 \end{pmatrix} $, find the Jordan Form $J(A)$ of the matrix. So what I did so far: (I) Calculate the characteristic polynomial: $P_A(\lambda) = (\lambda - 1)^2(\lambda -2)^2$. (II) Calculate the minimal polynomial: $m_A(\lambda) = P_A(\lambda) =(\lambda - 1)^2(\lambda -2)^2 $ But I am stuck now: how exactly do we calculate the Jordan Form of $A$? And an extra question that has been confusing me. In this case, does $A$ have $4$ eigenvalues or $2$ eigenvalues?
Since the minimal polynomial has degree $4$, which is the same as the order of the matrix, you know that the Smith normal form of $\lambda I-A$ is $\begin{pmatrix} 1 & 0 & 0 & 0\\ 0 & 1 & 0 & 0\\ 0 & 0 & 1 & 0\\ 0 & 0 & 0 & m_A(\lambda )\end{pmatrix}$. Therefore the elementary divisors (I'm not sure this is the correct term in English) are $(\lambda -1)^2$ and $(\lambda -2)^2$. Theory tells you that one Jordan block is $\color{grey}{(\lambda -1)^2\to} \begin{pmatrix}1 & 1\\ 0 & 1\end{pmatrix}$ and the other is $\color{grey}{(\lambda -2)^2\to} \begin{pmatrix} 2 &1\\ 0 & 2\end{pmatrix}$. Therefore a possible JNF for $A$ is $\begin{pmatrix} 1 & 1 & 0 & 0\\ 0 & 1 & 0 & 0\\ 0 & 0 & 2 &1\\ 0 & 0 & 0 & 2\end{pmatrix}$. Regarding the extra question: $A$ has two distinct eigenvalues, $1$ and $2$, each with algebraic multiplicity $2$ (and, given the Jordan blocks above, geometric multiplicity $1$), not four.
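If you want to confirm the hand computation mechanically, a short SymPy check (verification only):

```python
import sympy as sp

A = sp.Matrix([[1, 1, 1, -1],
               [0, 2, 1, -1],
               [1, -1, 2, -1],
               [1, -1, 0, 1]])
print(sp.factor(A.charpoly().as_expr()))  # (lambda - 1)**2*(lambda - 2)**2
P, J = A.jordan_form()
sp.pprint(J)   # one 2x2 block for eigenvalue 1 and one for eigenvalue 2
```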
{ "language": "en", "url": "https://math.stackexchange.com/questions/396575", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
If $\lim\limits_{x \to \pm\infty}f(x)=0$, does it imply that $\lim\limits_{x \to \pm\infty}f'(x)=0$? Suppose $f:\mathbb{R} \rightarrow \mathbb{R}$ is everywhere differentiable and * *$\lim_{x \to \infty}f(x)=\lim_{x \to -\infty}f(x)=0$, *there exists $c \in \mathbb{R}$ such that $f(c) \gt 0$. Can we say anything about $\lim_{x \to \infty}f'(x)$ and $\lim_{x \to -\infty}f'(x)$? I am tempted to say that $\lim_{x \to \infty}f'(x)$ = $\lim_{x \to -\infty}f'(x)=0$. I started with the following, but I'm not sure this is the correct approach, $$\lim_{x \to \infty}f'(x)= \lim_{x \to \infty}\lim_{h \to 0}\frac{f(x+h)-f(x)}{h}.$$
To correct an incorrect attempt, let $f(x) = e^{-x^2} \cos(e^{x^4})$, so $\begin{align}f'(x) &= e^{-x^2}\, 4 x^3 e^{x^4}(-\sin(e^{x^4})) -2x e^{-x^2} \cos(e^{x^4})\\ &= -4 x^3 e^{x^4-x^2} \sin(e^{x^4}) -2x e^{-x^2} \cos(e^{x^4})\\ \end{align} $ The $e^{x^4-x^2}$ term makes $f'(x)$ oscillate violently and unboundedly as $x \to \pm \infty$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/396658", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 3, "answer_id": 2 }
Property of the trace of matrices Let $A(x,t),B(x,t)$ be matrix-valued functions that are independent of $\xi=x-t$ and satisfy $$A_t-B_x+AB-BA=0$$ where $X_q\equiv \frac{\partial X}{\partial q}$. Why does it then follow that $$\frac{d }{d \eta}\textrm{Trace}[(A-B)^n]=0$$ where $n\in \mathbb N$ and $\eta=x+t$? Is there a neat way to see that this is true?
(This may not be a neat way to prove the assertion, but it's a proof anyway.) Let $\eta=x+t$ and $\nu=x-t$. Then $x=(\eta+\nu)/2$ and $t=(\eta-\nu)/2$ are functions of $\eta$ and $\nu$, $A=A\left(\frac{\eta+\nu}2,\frac{\eta-\nu}2\right)$ and similarly for $B$. As both $A$ and $B$ are independent of $\nu$, by the total derivative formula, we get \begin{align*} 0 = \frac{dA}{d\nu} &= \frac12\left(\frac{\partial A}{\partial x} - \frac{\partial A}{\partial t}\right),\\ 0 = \frac{dB}{d\nu} &= \frac12\left(\frac{\partial B}{\partial x} - \frac{\partial B}{\partial t}\right) \end{align*} and hence \begin{align*} \frac{dA}{d\eta} &= \frac12\left(\frac{\partial A}{\partial x} + \frac{\partial A}{\partial t}\right) = \frac{\partial A}{\partial t},\\ \frac{dB}{d\eta} &= \frac12\left(\frac{\partial B}{\partial x} + \frac{\partial B}{\partial t}\right) = \frac{\partial B}{\partial x}. \end{align*} Therefore $$ \frac{d(A-B)}{d\eta} = \frac{\partial A}{\partial t} - \frac{\partial B}{\partial x} = BA - AB $$ and \begin{align*} \frac{d}{d\eta}\operatorname{trace}[(A-B)^n] &= n \operatorname{trace}\left[(A-B)^{n-1}\frac{d(A-B)}{d\eta}\right]\\ &= n \operatorname{trace}[(A-B)^{n-1} (BA-AB)].\tag{1} \end{align*} Let $\mathcal{P}_m$ denote the set of products of $m$ matrices from $\{A,B\}$ (e.g. $\mathcal{P}_2=\{AA,AB,BA,BB\}$). Then the function $f:\mathcal{P}_m\to \mathcal{P}_m$ defined by $$\begin{cases} f(B^m) = B^m,\\ f(pAB^k) = B^kAp &\text{ for all } 0\le k<m \text{ and } p\in\mathcal{P}_{m-k-1} \end{cases}$$ is a bijection. Since $\operatorname{trace}(AqB)=\operatorname{trace}(Bf(q)A)$ for all $q\in\mathcal{P}_{n-1}$ and the degree of $B$ is preserved by $f$, it follows that $\operatorname{trace}[A(A-B)^{n-1}B]=\operatorname{trace}[B(A-B)^{n-1}A]$. Consequently, $$\operatorname{trace}[(A-B)^{n-1} (BA-AB)]=0$$ for all square matrices $A,B$ and $(1)$ evaluates to zero.
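The purely algebraic identity used at the end, namely $\operatorname{trace}[(A-B)^{n-1}(BA-AB)]=0$ for arbitrary square matrices, is easy to spot-check numerically with random matrices; a small NumPy sketch:

```python
import numpy as np

rng = np.random.default_rng(0)
A, B = rng.normal(size=(4, 4)), rng.normal(size=(4, 4))
M = A - B
for n in range(1, 6):
    val = np.trace(np.linalg.matrix_power(M, n - 1) @ (B @ A - A @ B))
    print(n, f"{val:.2e}")   # zero up to floating-point noise
```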
{ "language": "en", "url": "https://math.stackexchange.com/questions/396729", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Infinite Series Problem Using Residues Show that $$\sum_{n=0}^{\infty}\frac{1}{n^2+a^2}=\frac{\pi}{2a}\coth\pi a+\frac{1}{2a^2}, a>0$$ I know I must use summation theorem and I calculated the residue which is: $$Res\left(\frac{1}{z^2+a^2}, \pm ai\right)=-\frac{\pi}{2a}\coth\pi a$$ Now my question is: how do I get the last term $+\frac{1}{2a^2}$ after using the summation theorem?
The method of residues applies to sums of the form $$\sum_{n=-\infty}^{\infty} f(n) = -\sum_k \text{res}_{z=z_k} \pi \cot{\pi z}\, f(z)$$ where $z_k$ are poles of $f$ that are not integers. So when $f$ is even in $n$, you may express the two-sided sum as $$2 \sum_{n=1}^{\infty} f(n) + f(0)$$ For this case, $f(z)=1/(z^2+a^2)$ and the poles are $z_{\pm}=\pm i a$; using the fact that $\sin{(i a)} = i \sinh{a}$, we get $$\sum_{n=-\infty}^{\infty} \frac{1}{n^2+a^2} = \frac{\pi}{a} \coth{\pi a}$$ The rest is just a little more algebra: $$\sum_{n=0}^{\infty} \frac{1}{n^2+a^2} = \frac12\left(\sum_{n=-\infty}^{\infty} \frac{1}{n^2+a^2} + f(0)\right) = \frac{\pi}{2a} \coth{\pi a} + \frac{1}{2a^2}$$ which is exactly the extra $\frac{1}{2a^2}$ term you were asking about.
{ "language": "en", "url": "https://math.stackexchange.com/questions/396800", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Let $f :\mathbb{R}→ \mathbb{R}$ be a function such that $f^2$ and $f^3$ are differentiable. Is $f$ differentiable? Let $f :\mathbb{R}→ \mathbb{R}$ be a function such that $f^2$ and $f^3$ are differentiable. Is $f$ differentiable? Similarly, let $f :\mathbb{C}→ \mathbb{C}$ be a function such that $f^2$ and $f^3$ are analytic. Is $f$ analytic?
If you do want $f^{p}$ to represent the $p^{\text{th}}$ iterate, then you can let $f$ denote the characteristic function of the irrational numbers. Then $f^2$ and $f^3$ are both identically zero, yet $f$ is nowhere continuous, let alone differentiable.
{ "language": "en", "url": "https://math.stackexchange.com/questions/396848", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11", "answer_count": 2, "answer_id": 0 }
How to evaluate $\sqrt[3]{a + ib} + \sqrt[3]{a - ib}$? The answer to a question I asked earlier today hinged on the fact that $$\sqrt[3]{35 + 18i\sqrt{3}} + \sqrt[3]{35 - 18i\sqrt{3}} = 7$$ How does one evaluate such expressions? And, is there a way to evaluate the general expression $$\sqrt[3]{a + ib} + \sqrt[3]{a - ib}$$
As I said in a previous answer, finding out that such a simplification occurs is exactly as hard as finding out that $35+18\sqrt{-3}$ has a cube root in $\Bbb Q(\sqrt{-3})$ (well actually, $\Bbb Z[(1+\sqrt{-3})/2]$ because $35+18\sqrt{-3}$ is an algebraic integer). Suppose $p,q,d$ are integers. Then $p+q\sqrt d$ is an algebraic integer, and so is its cube root, and we can look for cube roots of the form $(x+y\sqrt d)$ with $2x,2y \in \Bbb Z$. So here is what you can do: * *compute analytically the $9$ possible values of $(p+q\sqrt{d})^{1/3}+(p-q\sqrt{d})^{1/3}$, using the polar form to take cube roots, so this involves the use of transcendental functions like $\arctan, \log, \exp$ and $\cos$. Does one of them look like an integer $x$? If so, let $y = 3qx/(x^3+p)$ and check algebraically if $(x+y\sqrt d)^3 = 8(p+q\sqrt d)$. *try to find a semi-integer root of the degree $9$ polynomial $64x^9-48px^6+(27dq^2-15p^2)x^3-p^3 = 0$. It will necessarily be of the form $\pm z$ or $\pm z/2$ where $z$ is a divisor of $p$, so you will need to decompose $p$ into its prime factors. *see if the norm of $p+q\sqrt d$ is a cube (in $\Bbb Z$). If it's not, you can stop. If you find that $p^2-dq^2 = r^3$ for some integer $r$, then try to find a semi-integer root of the degree $3$ polynomial $4x^3-3rx-p = 0$. Again you only need to check divisors of $p$ and their halves. The check that the norm was a cube allows you to make a simpler computation. *study the factorisation of $p+q\sqrt d$ in the ring of integers of $\Bbb Q(\sqrt d)$. This involves again checking that $p^2-dq^2$ is a cube $r^3$ (in $\Bbb Z$), then finding the factorisation of the principal ideal $(p+q\sqrt d)$ into prime ideals. To do this you have to find square roots of $d$ modulo some prime factors of $r$ and find the relevant prime ideals. If $(p+q\sqrt d)$ is the cube of an ideal, you will need to check if that ideal is principal, say $(z)$ (so you may want to compute the ideal class group of $\Bbb Q(\sqrt d)$), and then finally compute the group of units to check if $(p+q\sqrt d)/z^3$ is the cube of a unit or not (especially when $d>0$, where that group is infinite and you need to do some continued fraction computation; otherwise it's finite and easy in comparison). I would say that methods $2$ and $4$ are strictly more complicated than method $3$, and that checking if a polynomial has some integer roots is not too much more evil than doing a prime factorisation to check if some integer is a cube or not. And if you can bear with it, go for method $1$, which is the best, and you don't need high precision. This is unavoidable: if you use Cardan's formula for the polynomial of method $3$ you will end up precisely with the expression you started with, since that polynomial is the one you were solving before considering simplifying that expression. Finally, I think what you want to hear is: there is no formula that allows you to solve a cubic real polynomial by only taking roots of real numbers.
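Method 1 applied to the motivating example, as a short mpmath sketch (the principal branches of the two cube roots are complex conjugates, so their sum is real):

```python
from mpmath import mp, mpc, sqrt

mp.dps = 30
z = mpc(35, 18 * sqrt(3))
third = mp.mpf(1) / 3
s = z**third + z.conjugate()**third   # principal branches are conjugates
print(s)   # 7.0 (up to rounding): an integer, so the radical simplifies
```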
{ "language": "en", "url": "https://math.stackexchange.com/questions/396915", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11", "answer_count": 7, "answer_id": 0 }
Compute value of $\pi$ up to 8 digits I am quite lost on how approximate the value of $\pi$ up to 8 digits with a confidence of 99% using Monte Carlo. I think this requires a large number of trials but how can I know how many trials? I know that a 99% confidence interval is 3 standard deviations away from the mean in a normal distribution. From the central limit theorem the standard deviation of the sample mean (or standard error) is proportional to the standard deviation of the population $\sigma_{\bar X} = \frac{\sigma}{\sqrt{n}}$ So I have something that relates the size of the sample (i.e. number of trials) with the standard deviation, but then I don't know how to proceed from here. How does the "8 digit precision" comes into play? UPDATE Ok I think I am close to understand it. From CLT we have $\displaystyle \sigma_{M} = \frac{\sigma}{\sqrt{N}}$ so in this case $\sigma = \sqrt{p(1-p)}$ therfore $\displaystyle \sigma_{M} = \frac{\sqrt{p(1-p)}}{\sqrt{N}}$ Then from the Bernoulli distribution, $\displaystyle \mu = p = \frac{\pi}{4}$ therefore $$\sigma_{M}=\frac{\sqrt{\pi(4-\pi)}}{\sqrt{N}}$$ but what would be the value of $\sigma_{M}$? and then I have $\pi$ in the formula but is the thing I am trying to approximate so how does this work? and still missing the role of the 8 digit precision.
The usual trick is to use the approximation $\pi\approx 355/113$, which agrees with $\pi$ to within about $2.7\times 10^{-7}$; this is the value usually implied when eight digits are asked for. Otherwise, the Monte Carlo error only shrinks like $1/\sqrt{n}$, so every additional digit of precision multiplies the required number of trials by $100$.
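To quantify that with the OP's own setup, here is a back-of-the-envelope Python sketch of the sample size a naive hit-or-miss estimator would need for eight correct digits at 99% confidence (the 2.576 is the two-sided normal quantile):

```python
import math

sigma = math.sqrt(math.pi * (4 - math.pi))  # sd of one sample of 4*Bernoulli(pi/4)
eps = 0.5e-8                                # half-width for 8 correct digits
z = 2.576                                   # 99% two-sided normal quantile
N = (z * sigma / eps) ** 2
print(f"{N:.2e}")                           # ~7e17 samples: hopeless in practice
```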
{ "language": "en", "url": "https://math.stackexchange.com/questions/397028", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 4, "answer_id": 2 }
How to find inverse of the function $f(x)=\sin(x)\ln(x)$ My friend asked me to solve it, but I can't. If $f(x)=\sin(x)\ln(x)$, what is $f^{-1}(x)$? I have no idea how to find the solution. I try to find $$\frac{dx}{dy}=\frac{1}{\frac{\sin(x)}{x}+\ln(x)\cos(x)}$$ and try to solve it for $x$ by some replacing and other things, but I failed. Can anyone help? Thanks to all.
The function fails the horizontal line test, for one, and very badly in fact. One-to-one means that for any $x$ and $y$ in the domain of the function, $f(x) = f(y) \Rightarrow x = y$; that is, each point in the range corresponds to a unique point in the domain. Many functions can be made one-to-one $(1-1)$ by restricting the interval over which values are taken, for example the inverse trig functions and the square root function (any even root). We typically take only the positive square root because otherwise the "function" would have two answers and each $x$ wouldn't have a unique $y$ value, violating the so-called vertical line test; this is why functions that fail the horizontal line test generally don't have inverse functions. If you graph this function it looks very much like a sinusoid of growing amplitude, and it cannot be restricted in a unique, natural way that makes an inverse definable.
{ "language": "en", "url": "https://math.stackexchange.com/questions/397109", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 4, "answer_id": 0 }
The negative square root of $-1$ as the value of $i$ I have a small point to be clarified.We all know $ i^2 = -1 $ when we define complex numbers, and we usually take the positive square root of $-1$ as the value of "$i$" , i.e, $i = (-1)^{1/2} $. I guess it's just a convention that has been accepted in maths and the value $i = -[(-1)^{1/2}] $ is neglected as I have never seen this value of "$i$" being used. What I wanted to know is, if we will use $i = -[(-1)^{1/2}] $ instead of $ (-1)^{1/2} $, would we be doing anything wrong? My guess is there is nothing wrong in it as far as the fundamentals of maths goes. Just wanted to clarify it with you guys. Thanks.
The square roots with the properties we know are defined only for positive real numbers. We say that $i$ is the square root of $-1$, but this is a convention. You cannot perform operations with the usual properties of radicals if you are dealing with complex numbers rather than positive reals. It is a fact that $-1$ has two complex square roots; we just define one of them to be $i$. You have to regard the expression $i=\sqrt {-1}$ just as a symbol, and not apply the usual rules for radicals to it. Counterexample: $$\dfrac{-1}{1}=\dfrac{1}{-1}\Rightarrow$$ $$\sqrt{\dfrac{-1}{1}}=\sqrt{\dfrac{1}{-1}}\Rightarrow $$ $$\dfrac{\sqrt{-1}}{\sqrt{1}}=\dfrac{\sqrt{1}}{\sqrt{-1}}\Rightarrow$$ $$\dfrac{i}{1}=\dfrac{1}{i}\Rightarrow$$ $$i^2=1\text{, a contradiction}$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/397175", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
Prove that ideal generated by.... Is a monomial ideal Similar questions have come up on the last few past exam papers and I don't know how to solve it. Any help would be greatly appreciated.. Prove that the ideal of $\mathbb{Q}[X,Y]$ generated by $X^2(1+Y^3), Y^3(1-X^2), X^4$ and $ Y^6$ is a monomial ideal.
Hint. $(X^2(1+Y^3), Y^3(1-X^2), X^4, Y^6)=(X^2,Y^3)$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/397326", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 0 }
Find the limit $ \lim_{n \to \infty}\left(\frac{a^{1/n}+b^{1/n}+c^{1/n}}{3}\right)^n$ For $a,b,c>0$, Find $$ \lim_{n \to \infty}\left(\frac{a^{1/n}+b^{1/n}+c^{1/n}}{3}\right)^n$$ how can I find the limit of sequence above? Provide me a hint or full solution. thanks ^^
$$\lim_{n \to \infty}\ln \left(\frac{a^{1/n}+b^{1/n}+c^{1/n}}{3}\right)^n=\lim_{n \to \infty}\frac{\ln \left(\frac{a^{1/n}+b^{1/n}+c^{1/n}}{3}\right)}{\frac{1}{n}}=\lim_{x\to 0^+}\frac{\ln \left(\frac{a^{x}+b^{x}+c^{x}}{3}\right)}{x}$$ L'Hospital or the definition of the derivative solves it. Actually, since the limit is just the definition of the derivative of $\ln \left(\frac{a^{x}+b^{x}+c^{x}}{3}\right)$ at $x=0$, it would be wrong (circular) to use L'H :) Evaluating that derivative at $x=0$ gives $\frac{\ln a+\ln b+\ln c}{3}=\ln\sqrt[3]{abc}$, so the original limit equals $\sqrt[3]{abc}$.
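A quick numerical sanity check of the value $\sqrt[3]{abc}$, with arbitrary sample values for $a,b,c$:

```python
a, b, c = 2.0, 5.0, 11.0
for n in (10, 1000, 100000):
    val = ((a**(1/n) + b**(1/n) + c**(1/n)) / 3) ** n
    print(n, val)
print("limit:", (a * b * c) ** (1/3))   # the sequence approaches (abc)^(1/3)
```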
{ "language": "en", "url": "https://math.stackexchange.com/questions/397376", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "13", "answer_count": 4, "answer_id": 0 }
Summation of a finite series Let $$f(n) = \frac{1}{1} + \frac{1}{2} + \frac{1}{3}+ \frac{1}{4}+\ldots+ \frac{1}{2^n-1} \quad \forall\, n \in \mathbb{N} $$ If it cannot be summed, are there any approximations to the series?
$f(n)=H_{2^n-1}$, the $(2^n-1)$-st harmonic number. There is no closed form, but the standard asymptotic expansion gives the excellent approximation $$H_n\approx\ln n+\gamma+\frac1{2n}-\sum_{k\ge 1}\frac{B_{2k}}{2kn^{2k}}=\ln n+\gamma+\frac1{2n}-\frac1{12n^2}+\frac1{120n^4}-\ldots\;,$$ where $\gamma\approx 0.5772156649$ is the Euler–Mascheroni constant, and the $B_{2k}$ are Bernoulli numbers.
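For instance, a short Python check of the approximation for $f(20)=H_{2^{20}-1}$, truncating the Bernoulli sum after the $1/(12n^2)$ term:

```python
import math

N = 2**20 - 1
exact = sum(1.0 / k for k in range(1, N + 1))
gamma = 0.5772156649015329
approx = math.log(N) + gamma + 1/(2*N) - 1/(12*N**2)
print(exact, approx, exact - approx)   # agree to roughly 1e-12
```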
{ "language": "en", "url": "https://math.stackexchange.com/questions/397430", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 2 }
How to find the norm of the operator $(Ax)_n = \frac{1}{n} \sum_{k=1}^n \frac{x_k}{\sqrt{k}}$? How to find the norm of the following operator $$ A:\ell_p\to\ell_p:(x_n)\mapsto\left(n^{-1}\sum\limits_{k=1}^n k^{-1/2} x_k\right) $$ Any help is welcome.
Consider the diagonal operator $$ S:\ell_p\to\ell_p:(x_n)\mapsto(n^{-1/2}x_n) $$ It is bounded and its norm is $\Vert S\Vert=\sup\{n^{-1/2}:n\in\mathbb{N}\} =1$. Consider the Cesàro operator $$ T:\ell_p\to\ell_p:(x_n)\mapsto\left(n^{-1}\sum\limits_{k=1}^nx_k\right) $$ As was proved earlier, its norm satisfies $\Vert T\Vert\leq p(p-1)^{-1}$. Since $A=T\circ S$, then $$ \Vert A\Vert\leq\Vert T\Vert\Vert S\Vert\leq p(p-1)^{-1} $$ But unfortunately I can't say this is the best constant.
{ "language": "en", "url": "https://math.stackexchange.com/questions/397501", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Trigonometric substitution integral Trying to work around this with trig substitution, but end up with messy powers on sines and cosines... It should be simple use of trigonometric properties, but I seem to be tripping somewhere. $$\int x^5\sqrt{x^2+4}dx $$ Thanks.
It comes out pretty well if you put $x=2\tan\theta$. Doing it carefully, remove a $\tan\theta\sec\theta$ and everything else can be expressed as a polynomial in $\sec\theta$, hence it's easily done by substitution. But a more "efficient" substitution is $u=\sqrt{x^2+4}$, or $u^2=x^2+4$. Then $2u\,du = 2x\,dx$ and the integral becomes $$\int (u^2-4)^2 u^2\,du=\int (u^6-8u^4+16u^2)\,du=\frac{u^7}{7}-\frac{8u^5}{5}+\frac{16u^3}{3}+C\,,$$ with $u=\sqrt{x^2+4}$ substituted back at the end.
{ "language": "en", "url": "https://math.stackexchange.com/questions/397565", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
How to show in a clean way that $z^4 + (x^2 + y^2 - 1)(2x^2 + 3y^2-1) = 0$ is a torus? How to show in a clean way that the zero-locus of $$z^4 + (x^2 + y^2 - 1)(2x^2 + 3y^2-1) = 0$$ is a torus?
Not an answer, but here's a picture. Cheers.
{ "language": "en", "url": "https://math.stackexchange.com/questions/397625", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "14", "answer_count": 4, "answer_id": 2 }
Help with modular arithmetic If$r_1,r_2,r_3,r_4,\ldots,r_{ϕ(a)}$ are the distinct positive integers less than $a$ and coprime to $a$, is there some way to easily calculate, $$\prod_{k=1}^{\phi(a)}ord_{a}(r_k)$$
The claim is true, with the stronger condition that there is some $i$ with $e_i=1$ and all other exponents are zero. The set of $r_i$'s is called a reduced residue system. The second (now deleted) claim is false. Let $a=7$. Then $2^13^1=6^1$, two different representations.
{ "language": "en", "url": "https://math.stackexchange.com/questions/397774", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Solving Recurrence Relation I have this relation $u_{n+1}=\frac{1}{3}u_{n} + 4$ and I need to express the general term $u_{n}$ in terms of $n$ and $u_{0}$. With partial sums I found this relation $u_{n}=\frac{1}{3^n}u_{0} + 4\sum_{k=1}^{n}\frac{1}{3^{k-1}}$ But I also need to prove by mathematical induction that my $u_{n}$ is ok, and I have no idea how to do this. Can anyone please help me? Thanks in advance
$$u_{ n }=\frac { 1 }{ { 3 }^{ n } } u_{ 0 }+4\sum _{ k=0 }^{ n-1 }{ \frac { 1 }{ { 3 }^{ k } } } =\frac { 1 }{ { 3 }^{ n } } u_{ 0 }+4\left( \frac { { \left( 1/3 \right) }^{ n }-1\quad }{ 1/3-1 } \right) =\frac { 1 }{ { 3 }^{ n } } u_{ 0 }-6\left( \frac { 1 }{ { 3 }^{ n } } -1 \right) $$ $$u_{ n }=\frac { 1 }{ { 3 }^{ n } } \left(u_{ 0 }-6\right)+6$$ See geometric series
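For the induction part of the question: the base case holds since $\frac{1}{3^0}(u_0-6)+6=u_0$, and if the formula holds for $n$, the recurrence gives $$u_{n+1}=\frac13 u_n+4=\frac13\left(\frac{1}{3^n}(u_0-6)+6\right)+4=\frac{1}{3^{n+1}}(u_0-6)+6\,,$$ which is the formula for $n+1$.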
{ "language": "en", "url": "https://math.stackexchange.com/questions/397894", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Searching for unbounded, non-negative function $f(x)$ with roots $x_{n}\rightarrow \infty$ as $n \rightarrow \infty$ If a function $y = f(x)$ is unbounded and non-negative for all real $x$, then is it possible that it can have roots $x_n$ such that $x_{n}\rightarrow \infty$ as $n \rightarrow \infty$.
The function $ y = |x \sin(x)|$ has infinitely many roots $x_n$ such that $x_{n}\rightarrow \infty$ as $n \rightarrow \infty$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/397955", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
How to prove $\int_{0}^{\infty}\sin{x}\sin{\sqrt{x}}\,dx=\frac{\sqrt{\pi}}{2}\sin{\left(\frac{3\pi-1}{4}\right)}$ Prove that $$\int_{0}^{\infty}\sin{x}\sin{\sqrt{x}}\,dx=\frac{\sqrt{\pi}}{2}\sin{\left(\frac{3\pi-1}{4}\right)}$$ I have a question: using this, I find that this integral is not convergent. Am I wrong? Thank you everyone
First make the substitution $x=u^2$ to get: $\displaystyle \int _{0}^{\infty }\!\sin \left( x \right) \sin \left( \sqrt {x} \right) {dx}=\int _{0}^{\infty }\!2\,\sin \left( {u}^{2} \right) \sin \left( u \right) u{du}$, $\displaystyle=-\int _{0}^{\infty }\!u\cos \left( u \left( u+1 \right) \right) {du}+ \int _{0}^{\infty }\!u\cos \left( u \left( u-1 \right) \right) {du}$, and changing variable again in the second integral on the R.H.S such that $u\rightarrow u+1$ this becomes: $=\displaystyle\int _{0}^{\infty }\!-u\cos \left( u \left( u+1 \right) \right) {du}+ \int _{-1}^{\infty }\!\left(u+1\right)\cos \left( u \left( u+1 \right) \right) {du} $, $\displaystyle=\int _{0}^{\infty }\!\cos \left( u \left( u+1 \right) \right) {du}+ \int _{-1}^{0}\! \left( u+1 \right) \cos \left( u \left( u+1 \right) \right) {du} $. Now we write $u=v-1/2$ and this becomes: $\displaystyle\int _{1/2}^{\infty }\!\cos \left( {v}^{2}-1/4 \right) {dv}+\int _{-1/ 2}^{1/2}\! \left( v+1/2 \right) \cos \left( {v}^{2}-1/4 \right) {dv}=$ $\displaystyle \left\{\int _{0}^{\infty }\!\cos \left( {v}^{2}-1/4 \right) {dv}\right\}$ $\displaystyle +\left\{\int _{-1/2} ^{1/2}\!v\cos \left( {v}^{2}-1/4 \right) {dv}+\int _{-1/2}^{0}\!1/2\, \cos \left( {v}^{2}-1/4 \right) {dv}-1/2\,\int _{0}^{1/2}\!\cos \left( {v}^{2}-1/4 \right) {dv}\right\},$ but the second curly bracket is zero by symmetry and so: $\displaystyle \int _{0}^{\infty }\!\sin \left( x \right) \sin \left( \sqrt {x} \right) {dx}=\displaystyle \int _{0}^{\infty }\!\cos \left( {v}^{2}-1/4 \right) {dv}$, $\displaystyle =\int _{0}^{ \infty }\!\cos \left( {v}^{2} \right) {dv}\cos \left( 1/4 \right) + \int _{0}^{\infty }\!\sin \left( {v}^{2} \right) {dv}\sin \left( 1/4 \right) $. We now quote the limit of Fresnel integrals: $\displaystyle\int _{0}^{\infty }\!\cos \left( {v}^{2} \right) {dv}=\int _{0}^{ \infty }\!\sin \left( {v}^{2} \right) {dv}=\dfrac{\sqrt{2\pi}}{4}$, to obtain: $\displaystyle \int _{0}^{\infty }\!\sin \left( x \right) \sin \left( \sqrt {x} \right) {dx}=\dfrac{\sqrt{2\pi}}{4}\left(\cos\left(\dfrac{1}{4}\right)+\sin\left(\dfrac{1}{4}\right)\right)=\dfrac{\sqrt{\pi}}{2}\sin{\left(\dfrac{3\pi-1}{4}\right)}$.
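And a one-line numerical check that the two closed forms quoted at the end agree:

```python
import math

lhs = math.sqrt(2*math.pi)/4 * (math.cos(0.25) + math.sin(0.25))
rhs = math.sqrt(math.pi)/2 * math.sin((3*math.pi - 1)/4)
print(lhs, rhs)   # both ~0.76222
```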
{ "language": "en", "url": "https://math.stackexchange.com/questions/397990", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 4, "answer_id": 2 }
Understanding the (Partial) Converse to Cauchy-Riemann We have that for a function $f$ defined on some open subset $U \subset \mathbb{C}$ then the following if true: Suppose $u=\mathrm{Re}(f), v=\mathrm{Im}(f)$ and that all partial derivatives $u_x,u_y,v_x,v_y$ exists and are continuous on $U$. Suppose further that they satisfy the Cauchy-Riemann equations. Then $f$ is holomorphic on $U$. The proof for this is readily available, though there is a subtlety that I can't understand. We essentially want to compute $\lim_{h \rightarrow 0} \dfrac{f(z+h)-f(z)}{h}$ where $h=p+qi \in \mathbb{C}$. We need a relationship like $u(x+p,y+q)-u(x,y)=pu_x(x,y)+qu_y(x,y)+o(|p|+|q|)$. Why is this relationship true?
Denote $h=\pmatrix{p\\q}.$ Since $u(x,\ y)$ is differentiable at the point $(x,\ y),$ increment of $u$ can be represented as $$u(x+p,\ y+q)-u(x,\ y)=Du\;h+o(\Vert h\Vert)=pu_x(x,\ y)+qu_y(x,\ y)+o(|p|+|q|),$$ where $Du=\left(\dfrac{\partial{u}}{\partial{x}}, \ \dfrac{\partial{u}}{\partial{y}} \right).$
{ "language": "en", "url": "https://math.stackexchange.com/questions/398065", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Finite ultraproduct I got stuck when trying to prove: If $A_\xi$ are domains of models of a first order language and $|A_\xi|\le n$ for some $n \in \omega$ and all $\xi$ in the index set $X$, and $\mathcal U$ is an ultrafilter on $X$, then $|\prod_{\xi \in X} A_\xi / \mathcal U| \le n$. My tries: If $X$ is a finite set then $\mathcal U$ is principal. Then some singleton $\{x\}\in \mathcal U$ and $|\prod_{\xi \in X} A_\xi / \mathcal U| = |A_x|$. If $\mathcal U$ is not principal then for $x \in X$ there is $S_x \in \mathcal U$ with $x \notin S_x$. Then for every $k \in \omega$ there exists an equivalence class corresponding to $S_{x_1} \cap \dots \cap S_{x_k}$ with size greater than $|A_1|\cdot \dots \cdot |A_k|$. Can anything be said about the structure of the ultrafilter if $X$ is infinite? And how to prove it?
The statement you are trying to prove is a consequence of Łoś's theorem - if every factor satisfies "there are no more than $n$ elements", then the set of factors that satisfy it is $X$, which is in $\mathcal{U}$, so by Łoś's theorem the ultraproduct will satisfy that sentence as well. Note that "there are no more than $n$ elements" is the sentence $$ (\exists x_1)\cdots(\exists x_n)(\forall y)[ y = x_1 \lor \cdots \lor y = x_n] $$ Thus one way to come up with a concrete proof of the statement you want is to examine the proof of Łoś's theorem and specialize it to the situation at hand. As a side note, if every factor is finite, but there is no bound on the sizes of the factors, then the ultraproduct will not be finite. The difference is that there is no longer a single sentence of interest that is true in all the factors, because finiteness is not definable in a first-order language. I assume that the OP figured out the hint, so let me spell out the answer for reference. Because $|A^\xi| = k$ for all $\xi \in X$, we can write $A^\xi = \{a^\xi_1, \ldots, a^\xi_k\}$ for each $\xi$. For $1 \leq i \leq k$ define $\alpha_i$ by $\alpha_i(\xi) = a^\xi_i$. Then every $\beta$ in the ultraproduct is equal to $\alpha_i$ for some $1 \leq i \leq k$. Proof: For $1\leq i \leq k$ let $B_i = \{\xi : \beta(\xi) = a^\xi_i\}$. Then $$X = B_1 \cup B_2 \cup\cdots\cup B_k.$$ Because $\mathcal{U}$ is an ultrafilter, one of the sets $B_i$ must be in $\mathcal{U}$, and if $B_i \in \mathcal{U}$ then $\beta = \alpha_i$ in the ultraproduct, QED. Thus we can explicitly name the $k$ elements of the ultraproduct: $\alpha_1, \alpha_2, \ldots, \alpha_k$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/398160", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Show that no number of the form 8k + 3 or 8k + 7 can be written in the form $a^2 +5b^2$ I'm studying for a number theory exam, and have got stuck on this question. Show that no number of the form $8k + 3$ or $8k + 7$ can be written in the form $a^2 +5b^2$ I know that there is a theorem which tells us that $p$ is expressible as a sum of $2$ squares if $p\equiv$ $1\pmod 4$. This is really all I have found to work with so far, and I'm not really sure how/if it relates. Many thanks!
$8k+3,8k+7$ can be merged into $4c+3$ where $k,c$ are integers Now, $a^2+5b^2=4c+3\implies a^2+b^2=4c+3-4b^2=4(c-b^2)+3\equiv3\pmod 4,$ But as $(2c)^2\equiv0\pmod 4,(2d+1)^2\equiv1\pmod 4,$ $a^2+b^2\equiv0,1,2\pmod 4\not\equiv3$ Clearly, $a^2+5b^2$ in the question can be generalized $(4m+1)x^2+(4n+1)y^2$ where $m,n$ are any integers
{ "language": "en", "url": "https://math.stackexchange.com/questions/398210", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 2 }
The graph of $x^x$ I have a question about the graph of $f(x) = x^x$. How come the graph doesn't extend into the negative domain? After all, it is not as if the function is undefined when $x=-5$. But according to the graph, that seems to be the case. Can someone please explain this? Thanks
A more direct answer: the reason your graphing calculator doesn't graph anything for $x<0$ is that there are infinitely many undefined "holes" and infinitely many defined points in the real plane there. Even when you restrict the domain to $[-2,-1]$ this is still the case. Note that for $x^x$ with $x<0$, if you calculate the output at certain $x$-values (using the TI-85) you will have $$x^x=\begin{cases} (-x)^x & x\in\left\{ {2n\over 2m+1}\ \middle|\ n, m \in \Bbb Z\right\}\ \left(\frac{\text{even integer}}{\text{odd integer}}\right)\\ -(-x)^{x} & x\in\left\{ {2n+1\over 2m+1}\ \middle|\ n, m \in \Bbb Z\right\}\ \left(\frac{\text{odd integer}}{\text{odd integer}}\right)\\ \text{undefined} & x\in\left\{ {2n+1\over 2m}\ \middle|\ n, m \in \Bbb Z\right\}\bigcup \left(\mathbb{R}\setminus{\mathbb{Q}}\right)\ \left(\frac{\text{odd integer}}{\text{even integer}},\text{ irrational numbers}\right) \end{cases}$$ (Just remember to reduce fractions to lowest terms first, e.g. $2/6\to1/3$.) This is because $x^a$ can only extend to the negative domain if $a$'s denominator is odd (ex: $x^{1/3},x^{2/3}$). Thus there are infinitely many undefined values in $[-2,-1]$ that are still (odd/even) when reduced. For example $-3/2$ and $-1/2$ are undefined, but so are $-19/10, -17/10, -15/10,\ldots,-11/10$ and $-199/100, -197/100, -195/100,\ldots,-101/100$. This also includes the irrational numbers. There are likewise infinitely many defined values: infinitely many with positive output and infinitely many with negative output. For example $-2$, $-4/3$; $-24/13,-22/13,\ldots,-14/13$; and $-52/27,-50/27,\ldots,-28/27$ all give positive values, while $-5/3$, $-3/3$; $-25/13,-23/13,\ldots,-13/13$; and $-53/27,-51/27,\ldots,-27/27$ give negative values. Because the function is so "disconnected", with undefined holes interlaced with defined points, the graphing calculator fails to register a graph of $x^x$ for $x<0$. Thus when you see $x^x$ with the three cases in the piecewise definition, note that I am hiding the infinitely many holes that exist for ${x}^{x}$. Now, since the outputs on the negative domain can be positive or negative, we have two "trajectories"; thus we must graph $\left(-x\right)^{x}$ and $-\left(-x\right)^{x}$ together with $x^x$. However, if you want a graph of $x^x$ that seems "more continuous" you can plot either $|x|^{x}$ or $\text{sgn}{\left(x\right)}|x|^{x}$.
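The case analysis above can be packaged into a tiny evaluator; a hedged Python sketch (the function name xpowx and the None-for-undefined convention are mine):

```python
from fractions import Fraction

def xpowx(x: Fraction):
    """Real value of x**x for rational x (Fraction stores lowest terms)."""
    if x >= 0:
        return float(x) ** float(x)
    if x.denominator % 2 == 0:        # odd/even case: undefined in the reals
        return None
    mag = float(-x) ** float(x)       # (-x)^x, the magnitude
    return mag if x.numerator % 2 == 0 else -mag

print(xpowx(Fraction(-2)))        # even/odd: (-2)^(-2) = 0.25
print(xpowx(Fraction(-1, 3)))     # odd/odd:  -(1/3)^(-1/3) ~ -1.442
print(xpowx(Fraction(-3, 2)))     # odd/even: None
```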
{ "language": "en", "url": "https://math.stackexchange.com/questions/398268", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
What can I do with this cos term to remove the divide by 0? I was asked to help someone with this problem, and I don't really know the answer. But I thought I'd still try. $$\lim_{t \to 10} \frac{t^2 - 100}{t+1} \cos\left( \frac{1}{10-t} \right)+ 100$$ The problem lies with the cos term. What can I do with the cos term to remove the divide by 0? I found the answer to be $100$ (Google), but I do not know what they did to the $\cos$ term. Is that even the answer? Thanks!
The cos term is irrelevant. It can only wiggle between $-1$ and $1$, and is therefore killed by the $t^2-100$ term, since that approaches $0$. For a less cluttered version of the same phenomenon, consider the function $f(x)=x\sin\left(\frac{1}{x}\right)$ (for $x\ne 0$). The absolute value of this is always $\le |x|$, so (by Squeezing) $f(x)\to 0$ as $x\to 0$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/398409", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Why is the universal quantifier $\forall x \in A : P(x)$ defined as $\forall x (x \in A \implies P(x))$ using an implication? And the same goes for the existential quantifier: $\exists x \in A : P(x) \; \Leftrightarrow \; \exists x (x \in A \wedge P(x))$. Why couldn’t it be: $\exists x \in A : P(x) \; \Leftrightarrow \; \exists x (x \in A \implies P(x))$ and $\forall x \in A : P(x) \; \Leftrightarrow \; \forall x (x \in A \wedge P(x))$?
I can't answer your first question. It's just the definition of the notation. For your second question, by definition of '$\rightarrow$', we have $\exists x (x\in A \rightarrow P(x)) \leftrightarrow \exists x\neg(x\in A \wedge \neg P(x))$ I think you will agree that this is quite different from $\exists x (x\in A \wedge P(x))$
{ "language": "en", "url": "https://math.stackexchange.com/questions/398492", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 3, "answer_id": 2 }
How many points can you find on $y=x^2$, for $x \geq 0$, such that each pair of points has rational distance? Open problem in Geometry/Number Theory. The real question here is: Is there an infinite family of points on $y=x^2$, for $x \geq 0$, such that the distance between each pair is rational? The question of "if not infinite, then how many?" follows if there exists no infinite family of points that satisfies the hypothesis. We have that there exists a (in fact, infinitely many) three point families that satisfy the hypothesis by the following lemma and proof. Lemma 1: There are infinitely many rational distance sets of three points on $y=x^2$. The following proof is by Nate Dean. Proof. Let $S$ be the set of points on the parabola $y = x^2$ and let $d_1$ and $d_2$ be two fixed rational values. For any point, $P_0(r)=(r, r^2) \in S$, let $C_1(r)$ be the circle of radius $d_1$ centered at $P_0(r)$ and let $C_2(r)$ be the circle of radius $d_2$ centered at $P_0(r)$. Each of these circles must intersect $S$ in at least one point. Let $P_1(r)$ be any point in $C_1(r) \cap S$ and likewise, let $P_2(r)$ be any point in $C_2(r) \cap S$. Now let $dist(r)$ equal the distance between $P_1(r)$ and $P_2(r)$. The function $dist(r)$ is a continuous function of $r$ and hence there are infinitely many values of $r$ such that $P_0(r)$, $P_1(r)$, and $P_2(r)$ are at rational distance. $ \blacksquare $ This basically shows that the collection of families of three points that have pairwise rational distance on the parabola is dense in $S$. Garikai Campbell has shown that there are infinitely many nonconcyclic rational distance sets of four points on $y = x^2$ in the following paper: http://www.ams.org/journals/mcom/2004-73-248/S0025-5718-03-01606-5/S0025-5718-03-01606-5.pdf However, to my knowledge, no one has come forward with 5 point solutions, nor has it been proven that 5 point solutions even exist. But I know that many people have not seen this problem! Does anyone have any ideas on how to approach a proof of either the infinite case or even just a 5 point solution case? Edit: The above Lemma as well as the paper by Garikai Campbell do not include the half-parabola ($x \geq 0$) restriction. However, I thought that the techniques that he employed could be analogous to techniques that we could use to make progress on the half-parabola version of the problem.
The answer to the "infinite family" question appears to be, no. Jozsef Solymosi and Frank de Zeeuw, On a question of Erdős and Ulam, Discrete Comput. Geom. 43 (2010), no. 2, 393–401, MR2579704 (2011e:52024), prove (according to the review by Liping Yuan) that no irreducible algebraic curve other than a line or a circle contains an infinite rational set.
{ "language": "en", "url": "https://math.stackexchange.com/questions/398574", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "21", "answer_count": 1, "answer_id": 0 }
What is the cardinality of $\Bbb{R}^L$? By $\Bbb{R}^L$, I mean the set that is interpreted as $\Bbb{R}$ in $L$, Godel's constructible universe. For concreteness, and to avoid definitional questions about $\Bbb{R}$, I'm looking at the set ${\cal P}(\omega)$ as a proxy. I would think it needs to be countable, since only definable subsets of $\omega$ are being considered, but I don't see how Cantor's theorem would fail here, since $L$ is a model of $\sf{ZFC}$. (Actually, that's a lie: this probably means that there is no injection $f:\omega\to{\cal P}(\omega)^L$ in $L$, even if there is one in $V$. I'm still a little hazy on all the details, though.) But the ordinals are in $L$, so $L$ is not itself countable, and there must exist genuinely uncountable elements of $L$. What does the powerset operation (in $L$) do to the cardinality of sets (as viewed in $V$), then?
It is impossible to give a complete answer to this question just in $\sf ZFC$. For one, it is possible that $V=L$ is true in the model you consider, so in fact $\Bbb R^L=\Bbb R$, so the cardinality is $\aleph_1$. On the other hand it is possible that the universe is $L[A]$ where $A$ is a set of $\aleph_2$ Cohen reals, in which case $\Bbb R^L$ is still of size $\aleph_1$. And it is also possible that the universe is $L[G]$ where $G$ is a function which collapses $\omega_1^L$, resulting in having $\Bbb R^L$ as a countable set. There are other axioms, such as large cardinal axioms (e.g. "$0^\#$ exists") which imply that $\Bbb R^L$ is countable, and one can arrange all sorts of crazy tricks, where the result is large or small compared to $\Bbb R$. One thing is for certain, $|\Bbb R^L|=|\omega_1^L|\in\{\aleph_0,\aleph_1\}$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/398643", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 1, "answer_id": 0 }
In any tree, what is the maximum distance between a vertex of high degree and a vertex of low degree? In any undirected tree $T$, what is the maximum distance from any vertex $v$ with $\text{deg}(v) \geq 3$ to the closest (in a shortest path sense) vertex $y$ with $\text{deg}(y) \leq 2$? That is, $y$ can be leaf. It seems to me that this distance can be at most $\dfrac{\text{diam}(T)}{2}$, and furthermore that the maximum distance will be attained from a graph center. Is this true? There's probably simple argument for it somewhere.
In your second question (shortest path to a vertex of degree $\le 2$), the bound $\operatorname{diam}(G) / 2$ holds, simply by noticing that the ends of the longest path in a tree are leaves, and the "worst" that a graph can do is have the vertex of degree $3$ or higher right in the center. But in fact, this holds for the shortest path from any vertex to a vertex of degree $1$. Can we do better? No, in fact. Take a look at a tournament bracket, for example (just delete the leaves on one side to create a unique center). Viewed as a tree, all the vertices are of degree $3$ except for the leaves, which are all at least distance $d/2$ from the center of this modified graph.
{ "language": "en", "url": "https://math.stackexchange.com/questions/398711", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Yitang Zhang: Prime Gaps Has anybody read Yitang Zhang's paper on prime gaps? Wired reports "$70$ million" at most, but I was wondering if the number was actually more specific. *EDIT*$^1$: Are there any experts here who can explain the proof? Is the outline in the annals the preprint or the full accepted paper?
70 million is exactly what is mentioned in the abstract. It is quite likely that this bound can be reduced; the author says so in the paper: This result is, of course, not optimal. The condition $k_0 \ge 3.5 \times 10^6$ is also crude and there are certain ways to relax it. To replace the right side of (1.5) by a value as small as possible is an open problem that will not be discussed in this paper. He seems to be holding his cards for the moment... You can download a copy of the full accepted paper on the Annals page if your institution subscribes to the Annals.
{ "language": "en", "url": "https://math.stackexchange.com/questions/398763", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "33", "answer_count": 3, "answer_id": 1 }
The integral of $\frac{1}{1+x^n}$ Motivated by this question: Integration of $\displaystyle \int\frac{1}{1+x^8}\,dx$ I got curious about finding a general expression for the integral $\int \frac{1}{1+x^n},\,n \geq 1$. By factoring $1+x^n$, we can get an answer for any given $n$ (in terms of logarithms, arctangents, etc), but I was wondering whether a general one-two-liner formula in terms of elementary functions is known/available (WolframAlpha trials for specific $n$ show some structure.).
I showed in this answer that a general antiderivative is given by $$\bbox[5px,border:2px solid #C0A000]{\int\frac{1}{x^n+1}dx=-\frac1n\sum_{k=1}^n\left(\frac12 x_{kr}\log(x^2-2x_{kr}x+1)-x_{ki}\arctan\left(\frac{x-x_{kr}}{x_{ki}}\right)\right)+C'} $$ where $x_{kr}$ and $x_{ki}$ are the real and imaginary parts of the root $x_k$ of $x^n+1$, respectively, and are given by $$x_{kr}=\text{Re}\left(x_k\right)=\cos \left(\frac{(2k-1)\pi}{n}\right)$$ $$x_{ki}=\text{Im}\left(x_k\right)=\sin \left(\frac{(2k-1)\pi}{n}\right)$$
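As a hedged numerical check of this antiderivative, here it is evaluated for $n=8$ and compared against direct quadrature of the question's integrand on $[0,1]$:

```python
import numpy as np
from scipy.integrate import quad

n = 8
k = np.arange(1, n + 1)
xr, xi = np.cos((2*k - 1)*np.pi/n), np.sin((2*k - 1)*np.pi/n)

def F(x):
    # the boxed antiderivative; all x_ki are nonzero for n = 8
    return -(1/n) * np.sum(0.5*xr*np.log(x**2 - 2*xr*x + 1)
                           - xi*np.arctan((x - xr)/xi))

val, _ = quad(lambda x: 1/(1 + x**8), 0, 1)
print(F(1) - F(0), val)   # the two numbers agree
```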
{ "language": "en", "url": "https://math.stackexchange.com/questions/398888", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Solve the equation $\sqrt{3x-2} +2-x=0$ Solve the equation: $$\sqrt{3x-2} +2-x=0$$ I squared both terms: $$(\sqrt{3x-2})^2 + (2-x)^2= 0$$ I got $$3x-2 + 4 -4x + x^2$$ I then combined like terms: $x^2 -x +2$ However, that can not be right since I get a negative radicand when I use the quadratic formula: $x = 1/2 \pm \sqrt{(-1/2)^2 -2}$ The answer is 6
$$\sqrt{3x-2} +2-x=0$$ Isolating the radical:$$\sqrt{3x-2} =-2+x$$ Squaring both sides:$$\bigg(\sqrt{3x-2}\bigg)^2 =\bigg(-2+x\bigg)^2$$ Expanding $(-2+x)^2$: $$3x-2=-2(-2+x)+x(-2+x)$$ $$3x-2=4-2x-2x+x^2$$ Combining like terms:$$3x-2=4-4x+x^2$$ Moving everything to one side:$$0=4+2-3x-4x+x^2$$ Factoring the quadratic and finding the candidate solutions:$$0=x^2-7x+6$$ $$0=(x-6)(x-1)$$ $$0=x-6\implies\boxed{6=x}$$ $$0=x-1\implies\boxed{1=x}$$ Checking 6 as a solution: $$\sqrt{3(6)-2} +2-(6)=0$$ $$\sqrt{16} +2-6=0$$ $$4+2-6=0$$ $$6-6=0$$ Checking 1 as a solution: $$\sqrt{3(1)-2} +2-(1)=0$$ $$\sqrt{1} +2-1=0$$ $$2\neq0$$ The candidate $x=1$ does not satisfy the equation and therefore is not a solution. The candidate $x=6$ does satisfy the equation and therefore is the only solution.
{ "language": "en", "url": "https://math.stackexchange.com/questions/398984", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 2 }
Riemann integral and Lebesgue integral $f:R\rightarrow [0,\infty)$ is a Lebesgue-integrable function. Show that $$ \int_R f \ d m=\int_0^\infty m(\{f\geq t\})\ dt $$ where $m$ is Lebesgue measure. I know the question may be a little dump.
We have, using Fubini and denoting by$\def\o{\mathbb 1}\def\R{\mathbb R}$ $\o_A$ the indicator function of a set $A \subseteq \R$ \begin{align*} \int_\R f(x)\, dx &= \int_\R \int_{[0,\infty)}\o_{[0,f(x)]}(t)\, dt\,dx\\ &= \int_{[0,\infty)} \int_\R \o_{[0,f(x)]}(t)\, dx\, dt\\ &= \int_{[0,\infty)} \int_\R \o_{\{f \ge t\}}(x)\, dx\, dt\\ &= \int_{[0,\infty)} m(\{f\ge t\})\, dt. \end{align*} For the third line note that \begin{align*} \o_{[0,f(x)]}(t) = 1 &\iff 0 \le t \le f(x)\\ &\iff f(x) \ge t\\ &\iff x \in \{f\ge t\}\\ &\iff \o_{\{f\ge t\}}(x) = 1 \end{align*} and hence $\o_{[0,f(x)]}(t) = \o_{\{f \ge t\}}(x)$ for all $(x,t) \in \R \times [0,\infty)$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/399058", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Improper integrals Question I was asked to define the next intergrals and I want to know if I did it right: $$1) \int^\infty_a f(x)dx = \lim_{b \to \infty}\int^b_af(x)dx$$ $$2) \int^b_{-\infty} f(x)dx = \lim_{a \to -\infty}\int^b_af(x)dx$$ $$3) \int^\infty_{-\infty} f(x)dx = \lim_{b \to \infty}\int^b_0f(x)dx + \lim_{a \to -\infty}\int^0_af(x)dx$$ Thanks.
The first two definitions you gave are the standard definitions, for $f$ say continuous everywhere. The third is more problematic: it is quite possible that the definition in your course is $$\lim_{a\to-\infty,\ b\to \infty} \int_a^b f(x)\,dx,$$ where $a\to-\infty$ and $b\to\infty$ independently. What you wrote down would then be a fact rather than a definition.
{ "language": "en", "url": "https://math.stackexchange.com/questions/399119", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Solve equations $\sqrt{t +9} - \sqrt{t} = 1$ Solve equation: $\sqrt{t +9} - \sqrt{t} = 1$ I moved - √t to the left side of the equation $\sqrt{t +9} = 1 -\sqrt{t}$ I squared both sides $(\sqrt{t+9})^2 = (1)^2 (\sqrt{t})^2$ Then I got $t + 9 = 1+ t$ Can't figure it out after that point. The answer is $16$
An often overlooked fact is that $$ \sqrt{t^2}=\left|t\right|$$ Call me paranoid, but here's how I would solve this: $$\sqrt{t +9} - \sqrt{t} = 1$$ $$\sqrt{t +9} = \sqrt{t}+1$$ $$\left|t +9\right| = \left(\sqrt{t}+1\right)\left(\sqrt{t}+1\right)$$ $$\left|t +9\right| = \left|t\right|+2\sqrt{t}+1$$ Since the original equation has $\sqrt{t}$, we know that $t\geq 0$ and we can safely remove the absolute value bars. So now we have $$t +9= t+2\sqrt{t}+1$$ $$9= 2\sqrt{t}+1$$ $$8= 2\sqrt{t}$$ $$4= \sqrt{t}$$ $$\left|t\right|=4^2$$ $$t=16$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/399199", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "13", "answer_count": 7, "answer_id": 5 }
What is the physical meaning of fractional calculus? What is the physical meaning of the fractional integral and fractional derivative? And many researchers deal with the fractional boundary value problems, and what is the physical background? What is the applications of the fractional boundary value problem?
This may not be what you're looking for, but... In my line of work I use the fractional Poisson process a lot. These processes arise from sets of fractional differential equations, and the physical meaning behind this is that the waiting times between events are no longer exponentially distributed but instead follow a Mittag-Leffler distribution. This results in waiting times between events that may be much longer than would normally occur if one were to assume exponential waiting times.
{ "language": "en", "url": "https://math.stackexchange.com/questions/399266", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 4, "answer_id": 0 }
Improper Integral $\int_{1/e}^1 \frac{dx}{x\sqrt{\ln{(x)}}} $ I need some advice on how to evaluate it. $$\int\limits_\frac{1}{e}^1 \frac{dx}{x\sqrt{\ln{(x)}}} $$ Thanks!
Here's a hint: $$ \int_{1/e}^1 \frac{1}{\sqrt{\ln x}} {\huge(}\frac{dx}{x}{\huge)}. $$ What that is hinting at is what you need to learn in order to understand substitutions. It's all about the chain rule. The part in the gigantic parentheses becomes $du$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/399423", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Convergence of $\sum \frac{a_n}{(a_1+\ldots+a_n)^2}$ Assume that $0 < a_n \leq 1$ and that $\sum a_n=\infty$. Is it true that $$ \sum_{n \geq 1} \frac{a_n}{(a_1+\ldots+a_n)^2} < \infty $$ ? I think it is but I can't prove it. Of course if $a_n \geq \varepsilon$ for some $\varepsilon > 0$ this is obvious. Any idea? Thanks.
Let $A_n=\sum\limits_{k=1}^na_n$. Then $$ \begin{align} \sum_{n=1}^\infty\frac{A_n-A_{n-1}}{A_n^2} &\le\frac1{a_1}+\sum_{n=2}^\infty\frac{A_n-A_{n-1}}{A_nA_{n-1}}\\ &=\frac1{a_1}+\sum_{n=2}^\infty\left(\frac1{A_{n-1}}-\frac1{A_n}\right)\\ &\le\frac2{a_1} \end{align} $$
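A numerical sanity check of the bound (my own sketch), with the hypothetical choice $a_n = 1/n$, for which $\sum a_n$ diverges; the partial sums of $\sum a_n/A_n^2$ indeed stay below $2/a_1 = 2$:

```python
# Hedged sketch: partial sums for a_n = 1/n never exceed the bound 2/a_1 = 2.
total, A = 0.0, 0.0
for n in range(1, 10**6 + 1):
    a = 1.0 / n
    A += a                    # A_n = a_1 + ... + a_n
    total += a / (A * A)
print(total)                  # prints a value below 2
```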
{ "language": "en", "url": "https://math.stackexchange.com/questions/399565", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 3, "answer_id": 0 }
Upper bound on the difference between two elements of an eigenvector Let $W$ be the non-negative, symmetric adjacency/affinity matrix for some connected graph. If $W_{ij}$ is large, then vertex $i$ and vertex $j$ have a heavily weighted edge between them. If $W_{ij} = 0$, then no edge connects vertex $i$ to vertex $j$. Now $L = \mathrm{diag}(W\mathbf{1})-W$ is the (unnormalized) graph Laplacian. Let $v$ be the Fiedler vector of $L$, that is, a unit eigenvector corresponding to the second smallest eigenvalue of $L$. As $W_{ij}$ increases, all else equal, $|v_i - v_j|$ tends to decrease---at least this is the idea behind spectral clustering. What is an upper bound on $|v_i - v_j|$, given quantities that don't require computing $v$, like $W_{ij}$ and $\|W\|$? Any suggestions or thoughts would be greatly appreciated.
I've seen people use Davis and Kahan (1970), "The Rotation of Eigenvectors by a Perturbation". It's sometimes a bit tough going, but incredibly useful for problems like this. More info would also be useful. Is $W$ stochastic? Are there underlying latent classes that control the distribution of $W_{ij}$, e.g., with $E(W_{ij})$ larger if $i$ and $j$ possess the same class? If so, then I suggest first considering $E(W_{ij})$ with the latent classes given. Reorder the rows and columns according to latent class. You'll end up with a block matrix with entries constant on each block. It's not that difficult to then compute eigenvalues and eigenvectors of this block matrix. You'll find the eigenvector entries for the same latent class are the same. Now perturb the matrix from $E(W)$ back to the original $W$ and use Davis and Kahan to bound differences in the eigenvector entries.
{ "language": "en", "url": "https://math.stackexchange.com/questions/399634", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Evaluate $\int^{441}_0\frac{\pi\sin \pi \sqrt x}{\sqrt x} dx$ Evaluate this definite integral: $$\int^{441}_0\frac{\pi\sin \pi \sqrt x}{\sqrt x} dx$$
This integral (even the indefinite one) can be easily solved by observing: $$\frac{\mathrm d}{\mathrm dx}\pi\sqrt x = \frac{\pi}{2\sqrt x}$$ which implies that: $$\frac{\mathrm d}{\mathrm dx}\cos\pi\sqrt x = -\frac{\pi \sin\pi\sqrt x}{2\sqrt x}$$ Finally, we obtain: $$\int\frac{\pi\sin\pi\sqrt x}{\sqrt x}\,\mathrm dx = -2\cos\pi\sqrt x$$ whence the definite integral with bounds $0, n^2$ evaluates to $2(1-(-1)^n)$.
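For the stated bounds, $441=21^2$, so the value is $2(1-(-1)^{21})=4$; here is a quick numerical confirmation (my own sketch):

```python
# Hedged sketch: numerical quadrature agrees with the closed-form value 4.
import numpy as np
from scipy.integrate import quad

value, err = quad(lambda x: np.pi * np.sin(np.pi * np.sqrt(x)) / np.sqrt(x),
                  0.0, 441.0, limit=200)
print(value)   # approximately 4.0
```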
{ "language": "en", "url": "https://math.stackexchange.com/questions/399840", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 0 }
Function generation by input $y$ and $x$ values I wonder if there are tools that can output function formulas matching input conditions. Let's say I make an input like this: $y=0, x=0$ $y=1, x=1$ $y=2, x=4$ and the tool should generate for me the function formula y=x^2. I am aware it is not possible to find the exact function, but it would be great to get some possibilities. I'm a game developer and I need to code some behaviours in game mechanics that aren't simply linear; sometimes I need, for example, the arctangent, when I want my value to increase more and more slowly for higher arguments. The problem is that I finished school a long time ago and I simply don't remember what many functions look like, so such a tool would be great for quickly finding what I need.
One of my favorite curve-fitting resources is zunzun. They have many, many possible types of curves that can fit the data you give it.
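For a do-it-yourself alternative (my own sketch, not a zunzun feature), a least-squares polynomial fit already handles cases like the one in the question; the sample points below are illustrative values taken from $y=x^2$:

```python
# Hedged sketch: recover y = x^2 from three sample points with a quadratic fit.
import numpy as np

xs = np.array([0.0, 1.0, 2.0])          # illustrative inputs
ys = np.array([0.0, 1.0, 4.0])          # corresponding outputs of y = x**2
coeffs = np.polyfit(xs, ys, deg=2)      # least-squares fit, highest power first
print(np.round(coeffs, 6))              # [1. 0. 0.]  ->  y = 1*x**2 + 0*x + 0
```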
{ "language": "en", "url": "https://math.stackexchange.com/questions/399899", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 0 }
Determining $\sin(15)$, $\sin(32)$, $\cos(49)$, etc. How do you find trigonometric function values in general? I know how to find them for 30, 45, and 60 using the 60-60-60 and 45-45-90 triangles, but I don't know how for, say, $\sin(15)$ or $\tan(75)$ or $\csc(50)$, etc. I tried looking for how to do it, but neither my textbook nor any other place has a tutorial for it. I want to know how to find the exact values of all the trigonometric functions like $\sin x$, $\csc x$, ..., as opposed to looking them up or using a calculator. According to my textbook, $\sin(15)=0.26$, $\tan(75)=3.73$, and $\csc(50)=1.31$, but it doesn't show where those numbers came from, as if they were dropped from Math heaven!
The value of $\sin{x}$ can be calculated with prescribed accuracy from its Taylor representation $$\sin{x}=\sum\limits_{n=0}^{\infty}{\dfrac{(-1)^n x^{2n+1}}{(2n+1)!}}$$ or from the infinite product $$\sin{x}=x\prod\limits_{n=1}^{\infty}{\left(1-\dfrac{x^2}{\pi^2 n^2} \right)}.$$ For some particular cases numerous trigonometric identities can be used; for example, the difference formula gives $\sin 15^\circ=\sin(45^\circ-30^\circ)=\frac{\sqrt6-\sqrt2}{4}$ exactly.
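As an illustration (my own sketch, not part of the original answer), here is the Taylor series evaluated in code; note the series expects $x$ in radians, so $\sin(15^\circ)$ means $x=15\pi/180$:

```python
# Hedged sketch: summing the Taylor series for sin(15 degrees).
import math

x = math.radians(15.0)
total, term, n = 0.0, x, 0
while abs(term) > 1e-12:
    total += term
    n += 1
    term *= -x * x / ((2 * n) * (2 * n + 1))   # ratio of consecutive series terms
print(round(total, 6), round(math.sin(x), 6))  # 0.258819 0.258819
```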
{ "language": "en", "url": "https://math.stackexchange.com/questions/399948", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 0 }
Help with combinations problem? Initially there are $m$ balls in one bag, and $n$ in the other, where $m,n>0$. Two different operations are allowed: a) Remove an equal number of balls from each bag; b) Double the number of balls in one bag. Is it always possible to empty both bags after a finite sequence of operations? Operation b) is now replaced with b') Triple the number of balls in one bag. Is it now always possible to empty both bags after a finite sequence of operations? This is question 4 on Round $1$ of the $2011/2012$ British Mathematical Olympiad. I suck at combinatorics and the like but need to practise to try and improve my competition mathematics. If anyone could give me a hint on where to start I'd be most grateful :D EDIT: Never mind guys, I just completely mis-read the question. I thought it said you had to double the numbers of balls in both bags. Thanks for the help!
Regarding the first part... Let $m>n$. Remove $n-1$ balls from each bag so that you have $m-n+1$ balls in one bag and $1$ ball in the other bag. Now repeat the algorithm of doubling the balls in the bag which has $1$ ball and then taking away $1$ from each bag till you have $1$ ball in each bag. Finally remove $1$ ball from each bag and you have emptied them in a finite number of steps. For the second part, again suppose $m>n$. Remove $n-1$ balls from each bag so that you have $m-n+1$ balls in one bag and $1$ ball in the other bag. Now repeat the algorithm of tripling the balls in the bag which has $1$ ball and then taking away $2$ from each bag. If $2|(m-n)$, then you'll end up with $1$ ball in each bag after finitely many steps. Otherwise you'll end up with $2$ balls in one bag and $1$ ball in the other, and it seems like no matter what you do from here, you'll always end up back at this arrangement. Hence, in the second part, it may be conjectured that the bags cannot be emptied in finitely many steps if $m-n\equiv 1\pmod2$.
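In fact the conjecture can be confirmed (my own observation, not part of the original answer): operation a) leaves the difference between the two bags unchanged, and tripling one bag changes the difference by an even amount, so the parity of the difference is invariant; since $(0,0)$ has even difference, bags with $m-n$ odd can never be emptied. The small search sketch below illustrates this from the stuck state $(2,1)$:

```python
# Hedged sketch: breadth-first search over states reachable from (2, 1) under
# "remove k from both bags" and "triple one bag", with a heuristic size cap
# to keep the search finite.  (0, 0) is never reached.
from collections import deque

CAP = 60                                  # hypothetical cap on bag size
start = (2, 1)
seen, queue = {start}, deque([start])
while queue:
    i, j = queue.popleft()
    moves = [(i - k, j - k) for k in range(1, min(i, j) + 1)]  # operation a)
    moves += [(3 * i, j), (i, 3 * j)]                          # operation b')
    for s in moves:
        if max(s) <= CAP and s not in seen:
            seen.add(s)
            queue.append(s)
print((0, 0) in seen)   # False
```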
{ "language": "en", "url": "https://math.stackexchange.com/questions/400026", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Books for Geometry processing Please suggest some basic books on geometry processing. I want to learn this subject in order to learn algorithms for 3D mesh generation and graphics. Please also suggest subjects or areas of mathematics I should learn in order to understand 3D mesh generation. I am studying on my own and am very new to this topic. Please point me to any videos or online resources, if available.
See these books: * *Polygon Mesh Processing by Botsch et al. *Geometry and Topology for Mesh Generation by Edelsbrunner and these courses: * *Mesh Generation and Geometry Processing in Graphics, Engineering, and Modeling *Geometry Processing Algorithms
{ "language": "en", "url": "https://math.stackexchange.com/questions/400093", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 0 }
How to show $x^4 - 1296 = (x^3-6x^2+36x-216)(x+6)$ How to get this result: $x^4-1296 = (x^3-6x^2+36x-216)(x+6)$? It is part of a question about finding limits at mooculus.
Hints: $1296=(-6)^4$ and $a^n-b^n=(a-b)(a^{n-1}+a^{n-2}b+\ldots+ab^{n-2}+b^{n-1})$.
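Taking $a=x$ and $b=-6$ in the second hint (this is why the hint writes $1296=(-6)^4$) yields exactly the stated factorization. A one-line symbolic check (my own sketch):

```python
# Hedged sketch: expand the right-hand side and compare.
from sympy import expand, symbols

x = symbols('x')
print(expand((x**3 - 6*x**2 + 36*x - 216) * (x + 6)))   # x**4 - 1296
```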
{ "language": "en", "url": "https://math.stackexchange.com/questions/400178", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 5, "answer_id": 0 }
Simple Linear Regression Question Let $Y_{i} = \beta_{0} + \beta_{1}X_{i} + \epsilon_{i}$ be a simple linear regression model with independent errors and iid normal distribution. If $X_{i}$ are fixed what is the distribution of $Y_{i}$ given $X_{i} = 10$? I am preparing for a test with questions like these but I am realizing I am less up to date on these things than I thought. Could anyone explain the thought process used to approach this kind of question?
Let $\epsilon_i \sim N(0,\sigma^2)$. Then, we have: $$Y_i \sim N(\beta_0 + \beta_1 X_i,\sigma^2)$$ Further clarification: The above uses the following facts: (a) Expectation is a linear operator, (b) Variance of a constant is $0$, (c) Covariance of a random variable with a constant is $0$ and finally, (d) A linear combination of normals is also a normal. Does that help?
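A simulation sketch (my own illustration, with hypothetical parameter values $\beta_0=1$, $\beta_1=2$, $\sigma=3$) matching the stated distribution at $X_i = 10$:

```python
# Hedged sketch: empirical mean/variance of Y at X = 10 match N(21, 9).
import numpy as np

rng = np.random.default_rng(0)
beta0, beta1, sigma = 1.0, 2.0, 3.0
y = beta0 + beta1 * 10.0 + rng.normal(0.0, sigma, size=100_000)
print(round(y.mean(), 2), round(y.var(), 2))   # approximately 21.0 and 9.0
```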
{ "language": "en", "url": "https://math.stackexchange.com/questions/400240", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Bertrand's postulate in another point of view I was just wondering why we can't use the prime number theorem to prove Bertrand's postulate. We know that if we show that for all natural numbers $n>2$, $\pi(2n)-\pi(n)>0$, we are done. Why can't it be proven by just showing (using the prime number theorem) that for every natural number $n>2$, $\frac{2n}{\ln(2n)}-\frac{n}{\ln(n)}>0$?
You need a precise estimate of the form $$c_1<\frac{\pi(n)\ln n}{n}<c_2.$$ With that you can derive $\pi(2n)>c_1\frac{2n}{\ln(2n)}=2c_1\frac{n}{\ln 2+\ln n}$. If you are lucky, you can continue with $2c_1\frac n{\ln2+\ln n}>c_2\frac n{\ln n}>\pi(n)$. However, for this you need $\frac{2c_1}{c_2} >1+\frac{\ln 2}{\ln n}$. So at least if $c_2<2c_1$, your idea works for sufficiently large $n$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/400308", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 5, "answer_id": 3 }
Prove that a cut edge is in every spanning tree of a graph Given a simple and connected graph $G = (V,E)$, and an edge $e \in E$. Prove: $e$ is a cut edge if and only if $e$ is in every spanning tree of $G$. I have been thinking about this question for a long time and have made no progress.
Hint ("only if"): Imagine you have a spanning tree in the graph which doesn't contain the cut-edge. What happens to the graph if you remove this cut edge? What happens to the spanning tree? Hint ("if"): What happens if you remove this "indispensable" edge (the one which is in every spanning tree)? Can the resulting graph have any spanning tree? What kinds of graphs don't have any spanning tree?
{ "language": "en", "url": "https://math.stackexchange.com/questions/400384", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 2 }
Finding example of sets that satisfy conditions give examples of sets such that: i)$A\in B$ and $A\subseteq B$ My answer : $B=\mathcal{P(A)}=\{\emptyset,\{1\},\{2\},\{1,2\}\}$ and $A=\{1,2\}$ then $A\in B$ and $A\subseteq B$ ii) $|(C\cup D)\setminus(C\cap D)|=1$ My answer is: $C=\{1,2,3\}$, $D=\{2,3\}$ then $C\cup D=\{1,2,3\}$ and $C\cap D=\{2,3\}$ so $(C\cup D)\setminus(C\cap D)=\{1\}$ and $|(C\cup D)\setminus(C\cap D)|=1$ Can we find sets A and B such that $A\in B$ and $B\subseteq A$? My answer is no. Are my answers correct?
Your first answer is incorrect, because $1,2\in A$ but $1,2\notin B$ (the elements of $B$ are sets, not numbers), so $A\nsubseteq B$. Your second answer is correct. For the last question, the answer is again correct (assuming $\sf ZF$), because $A\in B$ and $B\subseteq A$ would imply that we have $A\in A$, which is impossible due to the axiom of regularity. To correct the first answer, consider the empty set: with $A=\varnothing$ and $B=\{\varnothing\}$ we have both $A\in B$ and $A\subseteq B$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/400437", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Evaluating $48^{322} \pmod{25}$ How do I find $48^{322} \pmod{25}$?
Finding $\phi(n)$ for $25$ was easy, but what if $n$ is arbitrarily large? $$48 \equiv -2 \pmod{25}$$ Playing around with $-2 \pmod{25}$ so as to get $1$ or $-1 \pmod{25}$, we see that $1024 \equiv -1 \pmod{25}$, so $$((-2)^{10})^{32} \equiv 1 \pmod{25}$$ $$(-2)^{322} \equiv (-2)^{320}\cdot(-2)^{2} \equiv 4 \pmod{25}$$
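For arbitrarily large moduli and exponents, the same repeated-squaring idea is what fast modular exponentiation does; as a quick check (my own aside, using Python's built-in three-argument `pow`):

```python
# Hedged sketch: built-in modular exponentiation confirms the answer.
print(pow(48, 322, 25))   # 4
```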
{ "language": "en", "url": "https://math.stackexchange.com/questions/400522", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 2 }
Does the Euler totient function give exactly one value (answer) or the least calculated value (the answer is NOT below this value)? I was studying RSA when I came across the Euler totient function. The definition states that it gives the number of positive values less than $n$ which are relatively prime to $n$. I thought I had it, until I came across this property: the Euler totient function is a multiplicative function, that is, for relatively prime $m$ and $n$: $\varphi(mn) = \varphi(m)\varphi(n)$ Now, if $p$ is a prime number, $\varphi(p)=p-1$. Putting in the values $p=11$ and $p=13$ one by one, $$\varphi(11)=10$$ $$\varphi(13)=12$$ Applying the above property, $$\varphi(11\cdot 13)=\varphi(11)\varphi(13)$$ $$\varphi(143)=12 \cdot 10$$ $$\varphi(143)=120$$ Is this correct? Does that mean we have $23$ values between $1$ and $143$ which are not relatively prime to $143$? Sorry if it's something basic I'm missing. I'm no genius at maths and came across this during my study of the RSA algorithm. Thanks.
For some intuition about why $n$ and $m$ must be relatively prime, consider that, for $N=p_1^{a_1}p_2^{a_2}...p_n^{a_n}$, $$\varphi(N)=N \left (1-\frac{1}{p_1}\right) \left (1-\frac{1}{p_2}\right)...\left (1-\frac{1}{p_n}\right)$$ And for $M=p_m^b$, $$\varphi(M)=M \left (1-\frac{1}{p_m}\right)$$ If $p_m$ is not one of $p_1,p_2,...,p_n$, then $NM$ has one more distinct prime factor than $N$, and $$\varphi(NM)=NM \left (1-\frac{1}{p_1}\right) \left (1-\frac{1}{p_2}\right)...\left (1-\frac{1}{p_n}\right) \left(1-\frac{1}{p_m}\right)=\varphi(N)\varphi(M)$$ But if $p_m$ is one of $p_1,p_2,...,p_n$, then $NM$ has the same number of distinct prime factors as $N$, and $$\varphi(NM)=NM \left (1-\frac{1}{p_1}\right) \left (1-\frac{1}{p_2}\right)...\left (1-\frac{1}{p_n}\right) \ne\varphi(N)\varphi(M)$$
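To tie this back to the numbers in the question, a brute-force count (my own sketch) confirms $\varphi(143)=120$, so exactly $143-120=23$ of the integers $1,\dots,143$ share a factor with $143$:

```python
# Hedged sketch: count integers in 1..143 coprime to 143.
from math import gcd

phi = sum(1 for k in range(1, 144) if gcd(k, 143) == 1)
print(phi, 143 - phi)   # 120 23
```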
{ "language": "en", "url": "https://math.stackexchange.com/questions/400590", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 3 }