Exercise \(\PageIndex{1}\)
Find expressions for \(\displaystyle coshx+sinhx\) and \(\displaystyle coshx−sinhx.\) Use a calculator to graph these functions and ensure your expressions are correct.
Answer
\(\displaystyle e^x\) and \(\displaystyle e^{−x}\)
Exercise \(\PageIndex{2}\)
From the definitions of \(\displaystyle cosh(x)\) and \(\displaystyle sinh(x)\), find their antiderivatives.
Exercise \(\PageIndex{3}\)
Show that \(\displaystyle cosh(x)\) and \(\displaystyle sinh(x)\) satisfy \(\displaystyle y''=y\).
Answer
Answers may vary
Exercise \(\PageIndex{4}\)
Use the quotient rule to verify that \(\displaystyle \frac{d}{dx}tanh(x)=sech^2(x).\)
Exercise \(\PageIndex{5}\)
Derive \(\displaystyle cosh^2(x)+sinh^2(x)=cosh(2x)\) from the definition.
Answer
Answers may vary
Exercise \(\PageIndex{6}\)
Take the derivative of the previous expression to find an expression for \(\displaystyle sinh(2x)\).
Exercise \(\PageIndex{7}\)
Prove \(\displaystyle sinh(x+y)=sinh(x)cosh(y)+cosh(x)sinh(y)\) by changing the expression to exponentials.
Answer
Answers may vary
Exercise \(\PageIndex{8}\)
Take the derivative of the previous expression to find an expression for \(\displaystyle cosh(x+y).\)
For the following exercises, find the derivatives of the given functions and graph them along with the function to ensure your answer is correct.
Exercise \(\PageIndex{9}\)
\(\displaystyle cosh(3x+1)\)
Answer
\(\displaystyle 3sinh(3x+1)\)
Exercise \(\PageIndex{10}\)
\(\displaystyle sinh(x^2)\)
Exercise \(\PageIndex{11}\)
\(\displaystyle \frac{1}{cosh(x)}\)
Answer
\(\displaystyle −tanh(x)sech(x)\)
Exercise \(\PageIndex{12}\)
\(\displaystyle sinh(ln(x))\)
Exercise \(\PageIndex{13}\)
\(\displaystyle cosh^2(x)+sinh^2(x)\)
Answer
\(\displaystyle 4cosh(x)sinh(x)\)
Exercise \(\PageIndex{14}\)
\(\displaystyle cosh^2(x)−sinh^2(x)\)
Exercise \(\PageIndex{15}\)
\(\displaystyle tanh(\sqrt{x^2+1})\)
Answer
\(\displaystyle \frac{xsech^2(\sqrt{x^2+1})}{\sqrt{x^2+1}}\)
Exercise \(\PageIndex{16}\)
\(\displaystyle \frac{1+tanh(x)}{1−tanh(x)}\)
Exercise \(\PageIndex{17}\)
\(\displaystyle sinh^6(x)\)
Answer
\(\displaystyle 6sinh^5(x)cosh(x)\)
Exercise \(\PageIndex{18}\)
\(\displaystyle ln(sech(x)+tanh(x))\)
For the following exercises, find the antiderivatives for the given functions.
Exercise \(\PageIndex{19}\)
\(\displaystyle cosh(2x+1)\)
Answer
\(\displaystyle \frac{1}{2}sinh(2x+1)+C\)
Exercise \(\PageIndex{20}\)
\(\displaystyle tanh(3x+2)\)
Exercise \(\PageIndex{21}\)
\(\displaystyle xcosh(x^2)\)
Answer
\(\displaystyle \frac{1}{2}sinh(x^2)+C\)
Exercise \(\PageIndex{22}\)
\(\displaystyle 3x^3tanh(x^4)\)
Exercise \(\PageIndex{23}\)
\(\displaystyle cosh^2(x)sinh(x)\)
Answer
\(\displaystyle \frac{1}{3}cosh^3(x)+C\)
Exercise \(\PageIndex{24}\)
\(\displaystyle tanh^2(x)sech^2(x)\)
Exercise \(\PageIndex{25}\)
\(\displaystyle \frac{sinh(x)}{1+cosh(x)}\)
Answer
\(\displaystyle ln(1+cosh(x))+C\)
Exercise \(\PageIndex{26}\)
\(\displaystyle coth(x)\)
Exercise \(\PageIndex{27}\)
\(\displaystyle cosh(x)+sinh(x)\)
Answer
\(\displaystyle cosh(x)+sinh(x)+C\)
Exercise \(\PageIndex{28}\)
\(\displaystyle (cosh(x)+sinh(x))^n\)
For the following exercises, find the derivatives of the functions.
Exercise \(\PageIndex{29}\)
\(\displaystyle tanh^{−1}(4x)\)
Answer
\(\displaystyle \frac{4}{1−16x^2}\)
Exercise \(\PageIndex{30}\)
\(\displaystyle sinh^{−1}(x^2)\)
Exercise \(\PageIndex{31}\)
\(\displaystyle sinh^{−1}(cosh(x))\)
Answer
\(\displaystyle \frac{sinh(x)}{\sqrt{cosh^2(x)+1}}\)
Exercise \(\PageIndex{32}\)
\(\displaystyle cosh^{−1}(x^3)\)
Exercise \(\PageIndex{33}\)
\(\displaystyle tanh^{−1}(cos(x))\)
Answer
\(\displaystyle −csc(x)\)
Exercise \(\PageIndex{34}\)
\(\displaystyle e^{sinh^{−1}(x)}\)
Exercise \(\PageIndex{35}\)
\(\displaystyle ln(tanh^{−1}(x))\)
Answer
\(\displaystyle −\frac{1}{(x^2−1)tanh^{−1}(x)}\)
For the following exercises, find the antiderivatives of the functions.
Exercise \(\PageIndex{36}\)
\(\displaystyle ∫\frac{dx}{4−x^2}\)
Exercise \(\PageIndex{37}\)
\(\displaystyle ∫\frac{dx}{a^2−x^2}\)
Answer
\(\displaystyle \frac{1}{a}tanh^{−1}(\frac{x}{a})+C\)
Exercise \(\PageIndex{38}\)
\(\displaystyle ∫\frac{dx}{\sqrt{x^2+1}}\)
Exercise \(\PageIndex{39}\)
\(\displaystyle ∫\frac{xdx}{\sqrt{x^2+1}}\)
Answer
\(\displaystyle \sqrt{x^2+1}+C\)
Exercise \(\PageIndex{40}\)
\(\displaystyle ∫−\frac{dx}{x\sqrt{1−x^2}}\)
Exercise \(\PageIndex{41}\)
\(\displaystyle ∫\frac{e^x}{\sqrt{e^{2x}−1}}\,dx\)
Answer
\(\displaystyle cosh^{−1}(e^x)+C\)
Exercise \(\PageIndex{42}\)
\(\displaystyle ∫−\frac{2x}{x^4−1}\,dx\)
For the following exercises, use the fact that a falling body with friction equal to velocity squared obeys the equation \(\displaystyle dv/dt=g−v^2\).
Exercise \(\PageIndex{43}\)
Show that \(\displaystyle v(t)=\sqrt{g}tanh(\sqrt{g}t)\) satisfies this equation.
Answer
Answers may vary
Exercise \(\PageIndex{44}\)
Derive the previous expression for \(\displaystyle v(t)\) by integrating \(\displaystyle \frac{dv}{g−v^2}=dt\).
Exercise \(\PageIndex{45}\)
Estimate how far a body has fallen in \(\displaystyle 12\) seconds by finding the area underneath the curve of \(\displaystyle v(t)\).
Answer
\(\displaystyle 37.30\)
For the following exercises, use this scenario: A cable hanging under its own weight has a slope \(\displaystyle S=dy/dx\) that satisfies \(\displaystyle dS/dx=c\sqrt{1+S^2}\). The constant \(\displaystyle c\) is the ratio of cable density to tension.
Exercise \(\PageIndex{46}\)
Show that \(\displaystyle S=sinh(cx)\) satisfies this equation.
Exercise \(\PageIndex{47}\)
Integrate \(\displaystyle dy/dx=sinh(cx)\) to find the cable height \(\displaystyle y(x)\) if \(\displaystyle y(0)=1/c\).
Answer
\(\displaystyle y=\frac{1}{c}cosh(cx)\)
Exercise \(\PageIndex{48}\)
Sketch the cable and determine how far down it sags at \(\displaystyle x=0\).
For the following exercises, solve each problem.
Exercise \(\PageIndex{49}\)
A chain hangs from two posts \(\displaystyle 2\) m apart to form a catenary described by the equation \(\displaystyle y=2cosh(x/2)−1\). Find the slope of the catenary at the left fence post.
Answer
\(\displaystyle −0.521095\)
Exercise \(\PageIndex{50}\)
A chain hangs from two posts four meters apart to form a catenary described by the equation \(\displaystyle y=4cosh(x/4)−3.\) Find the total length of the catenary (arc length).
Exercise \(\PageIndex{51}\)
A high-voltage power line is a catenary described by \(\displaystyle y=10cosh(x/10)\). Find the ratio of the area under the catenary to its arc length. What do you notice?
Answer
\(\displaystyle 10\)
Exercise \(\PageIndex{52}\)
A telephone line is a catenary described by \(\displaystyle y=acosh(x/a).\) Find the ratio of the area under the catenary to its arc length. Does this confirm your answer for the previous question?
Exercise \(\PageIndex{53}\)
Prove the formula for the derivative of \(\displaystyle y=sinh^{−1}(x)\) by differentiating \(\displaystyle x=sinh(y).\)
(Hint: Use hyperbolic trigonometric identities.)
Exercise \(\PageIndex{54}\)
Prove the formula for the derivative of \(\displaystyle y=cosh^{−1}(x)\) by differentiating \(\displaystyle x=cosh(y).\)
(Hint: Use hyperbolic trigonometric identities.)
Exercise \(\PageIndex{55}\)
Prove the formula for the derivative of \(\displaystyle y=sech^{−1}(x)\) by differentiating \(\displaystyle x=sech(y).\)
(Hint: Use hyperbolic trigonometric identities.)
Exercise \(\PageIndex{56}\)
Prove that \(\displaystyle (cosh(x)+sinh(x))^n=cosh(nx)+sinh(nx).\)
Exercise \(\PageIndex{57}\)
Prove the expression for \(\displaystyle sinh^{−1}(x).\) Multiply \(\displaystyle x=sinh(y)=(1/2)(e^y−e^{−y})\) by \(\displaystyle 2e^y\) and solve for \(\displaystyle y\). Does your expression match the textbook?
Exercise \(\PageIndex{58}\)
Prove the expression for \(\displaystyle cosh^{−1}(x).\) Multiply \(\displaystyle x=cosh(y)=(1/2)(e^y+e^{−y})\) by \(\displaystyle 2e^y\) and solve for \(\displaystyle y\). Does your expression match the textbook?
I'm writing some integrals and I don't like the way the \int symbol is displayed when it is followed by a big delimiter. Here's the code:
\[\int_S\biggl(\nabla\times\bar B - \mu_0\bar J -\mu_0\varepsilon_0\frac{\partial\bar E}{\partial t}\biggr) \cdot\hat n\,ds = 0\]
What I'd like is an integral sign taller than the parentheses, as if a sort of \bigg command were applied right before it. Is there any way to do this, or is the \int symbol impossible to modify?
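One possible fix (my addition, not part of the original question, and assuming the `bigints` package is available) is to use its scaled integral signs, which come in several sizes (`\bigint`, `\bigints`, `\bigintss`, ...):

```latex
\documentclass{article}
\usepackage{amsmath}
\usepackage{bigints} % provides \bigint, \bigints, \bigintss, ... (assumed installed)
\begin{document}
\[\bigints_S\biggl(\nabla\times\bar B - \mu_0\bar J
    -\mu_0\varepsilon_0\frac{\partial\bar E}{\partial t}\biggr)
    \cdot\hat n\,ds = 0\]
\end{document}
```

The `relsize` package's `\mathlarger` command (applied as `\mathlarger{\int}`) is another commonly suggested workaround.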
Newton’s method (or Newton-Raphson method) is an iterative procedure used to find the roots of a function.
Suppose we need to solve the equation \(f\left( x \right) = 0\) and \(x=c\) is the actual root of \(f\left( x \right).\) We assume that the function \(f\left( x \right)\) is differentiable in an open interval that contains \(c.\)
To find an approximate value for \(c:\)
Start with an initial approximation \({x_0}\) close to \(c.\) Determine the next approximation by the formula \[{x_1} = {x_0} - \frac{{f\left( {{x_0}} \right)}}{{f^\prime\left( {{x_0}} \right)}}.\] Continue the iterative process using the formula \[{x_{n + 1}} = {x_n} - \frac{{f\left( {{x_n}} \right)}}{{f^\prime\left( {{x_n}} \right)}}\] until the root is found to the desired accuracy.
Let’s apply Newton’s method to approximate \(\sqrt 3.\) Suppose that we need to solve the equation
\[{f\left( x \right) = {x^2} - 3 = 0,}\]
where the root \(c \gt 0.\)
Take the derivative of the function:
\[{f^\prime\left( x \right) = \left( {{x^2} - 3} \right)^\prime = 2x.}\]
Let \({x_0} = 2.\) Calculate the next approximation \({x_1}:\)
\[{{{x_1} = {x_0} - \frac{{f\left( {{x_0}} \right)}}{{f^\prime\left( {{x_0}} \right)}} }={ 2 - \frac{{{2^2} - 3}}{{2 \cdot 2}} }={ 2 - \frac{1}{4} }={ 1.75}}\]
In the next step, we get
\[{{{x_2} = {x_1} - \frac{{f\left( {{x_1}} \right)}}{{f^\prime\left( {{x_1}} \right)}} }={ 1.75 - \frac{{{{1.75}^2} - 3}}{{2 \cdot 1.75}} }={ 1.732143}}\]
Similarly, we find the approximate value \({x_3}:\)
\[{{{x_3} = {x_2} - \frac{{f\left( {{x_2}} \right)}}{{f^\prime\left( {{x_2}} \right)}} }={ 1.732143 - \frac{{{{1.732143}^2} - 3}}{{2 \cdot 1.732143}} }={ 1.73205081}}\]
At this iteration, the approximation is accurate to \(8\) decimal places, like a smartphone calculator.
So we were able to compute the square root of \(3\) to \(8\) decimal places in just \(3\) steps!
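The iteration above is easy to script as a sanity check (a minimal sketch of mine, not part of the original tutorial):

```python
# Minimal sketch of the Newton iteration x_{n+1} = x_n - f(x_n)/f'(x_n).
def newton(f, fprime, x0, tol=1e-10, max_iter=50):
    x = x0
    for _ in range(max_iter):
        step = f(x) / fprime(x)
        x -= step
        if abs(step) < tol:
            break
    return x

# Approximate sqrt(3) by solving f(x) = x^2 - 3 = 0 with x0 = 2,
# as in the worked example above.
root = newton(lambda x: x * x - 3, lambda x: 2 * x, 2.0)
print(root)  # agrees with x_3 = 1.73205081 to 8 decimal places
```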
Solved Problems
Example 1. Approximate \(\sqrt[3]{2}\) to 6 decimal places.
Solution.
We apply Newton’s method to the function \(f\left( x \right) = {x^3} - 2\) assuming \(x \ge 0\) and perform several successive iterations using the formula
\[{x_{n + 1}} = {x_n} - \frac{{f\left( {{x_n}} \right)}}{{f^\prime\left( {{x_n}} \right)}}.\]
Let \({x_0} = 1.\) This yields the following results:
\[{{x_1} = {x_0} - \frac{{f\left( {{x_0}} \right)}}{{f^\prime\left( {{x_0}} \right)}} }={ 1 - \frac{{{1^3} - 2}}{{3 \cdot {1^2}}} }={ 1.3333333}\]
\[{{x_2} = {x_1} - \frac{{f\left( {{x_1}} \right)}}{{f^\prime\left( {{x_1}} \right)}} }={ 1.3333333 - \frac{{{{1.3333333}^3} - 2}}{{3 \cdot {{1.3333333}^2}}} }={ 1.2638889}\]
Similarly, we get
\[{x_3} = 1.2599335\]
\[{x_4} = 1.2599211\]
\[{x_5} = 1.2599211\]
We see that the \(4\)th iteration gives the approximation to \(6\) decimal places, so the answer is \({x_4} = 1.259921.\)
Example 2. Determine how many iterations it takes to compute \(\sqrt 5\) to 8 decimal places using Newton’s method with the initial value \({x_0} = 2.\)
Solution.
We apply Newton’s method to the function
\[f\left( x \right) = {x^2} - 5.\]
The iterations are given by the formula
\[{x_{n + 1}} = {x_n} - \frac{{f\left( {{x_n}} \right)}}{{f^\prime\left( {{x_n}} \right)}}.\]
The first approximation is equal to
\[{{x_1} = {x_0} - \frac{{f\left( {{x_0}} \right)}}{{f^\prime\left( {{x_0}} \right)}}} = {2 - \frac{{{2^2} - 5}}{{2 \cdot 2}} }={ 2.25}\]
Continue the process to get the following approximations:
\[{x_2} = 2.236111111\]
\[{x_3} = 2.236067978\]
\[{x_4} = 2.236067977\]
Hence, it takes \(3\) iterations to get the approximate value of \(\sqrt 5\) to \(8\) decimal places (not too bad!).
Example 3. Approximate the solution of the equation \({x^2} + x - 3 = 0\) to 7 decimal places with the initial guess \({x_0} = 2.\)
Solution.
We apply the iteration formula given by Newton’s method, with \(f\left( x \right) = {x^2} + x - 3\) and \(f^\prime\left( x \right) = 2x + 1:\)
\[{x_{n + 1}} = {x_n} - \frac{{f\left( {{x_n}} \right)}}{{f^\prime\left( {{x_n}} \right)}}.\]
In the first step we get
\[{{x_1} = {x_0} - \frac{{f\left( {{x_0}} \right)}}{{f^\prime\left( {{x_0}} \right)}}} ={ 2 - \frac{{{2^2} + 2 - 3}}{{2 \cdot 2 + 1}} }={ 1.4}\]
The next approximations are given by
\[{x_2} = 1.30526316\]
\[{x_3} = 1.30277735\]
\[{x_4} = 1.30277564\]
\[{x_5} = 1.30277564\]
Thus, we were able to get the approximate solution with an accuracy of \(7\) decimal places after \(4\) iterations. It is equal to
\[{x_4} = 1.30277564\]
Example 4. Approximate \(\ln 2\) to 5 decimal places.
Solution.
To find an approximate value of \(\ln 2,\) we solve the equation \(f\left( x \right) = {e^x} - 2 = 0\) using the recurrent formula
\[{x_{n + 1}} = {x_n} - \frac{{f\left( {{x_n}} \right)}}{{f^\prime\left( {{x_n}} \right)}}.\]
Starting from \({x_0} = 1,\) we obtain the following successive approximate values for \(\ln 2:\)
\[{{x_1} = {x_0} - \frac{{f\left( {{x_0}} \right)}}{{f^\prime\left( {{x_0}} \right)}}} ={ 1 - \frac{{{e^1} - 2}}{{{e^1}}} }={ 0.735759}\]
\[{{x_2} = {x_1} - \frac{{f\left( {{x_1}} \right)}}{{f^\prime\left( {{x_1}} \right)}}} ={ 0.735759 - \frac{{{e^{0.735759}} - 2}}{{{e^{0.735759}}}} }={ 0.694042}\]
The next calculations produce
\[{x_3} = 0.693148\]
\[{x_4} = 0.693147\]
One can see that we’ve got the approximation to \(5\) decimal places on the \(3\)rd step. So the answer is
\[\ln 2 \approx 0.69315\]
Example 5. Let \(f\left( x \right) = {x^3} - 7.\) Using Newton’s method, compute 3 iterations for this function with the initial guess \({x_0} = 2.\)
Solution.
The iterative formula for Newton’s method is given as
\[{x_{n + 1}} = {x_n} - \frac{{f\left( {{x_n}} \right)}}{{f^\prime\left( {{x_n}} \right)}}.\]
Find the derivative:
\[{f^\prime\left( x \right) = \left( {{x^3} - 7} \right)^\prime }={ 3{x^2}.}\]
The first iteration is equal to
\[{{x_1} = {x_0} - \frac{{f\left( {{x_0}} \right)}}{{f^\prime\left( {{x_0}} \right)}}} ={ 2 - \frac{{{2^3} - 7}}{{3 \cdot {2^2}}} }={ 1.916667}\]
Next, we perform two more iterations:
\[{{x_2} = {x_1} - \frac{{f\left( {{x_1}} \right)}}{{f^\prime\left( {{x_1}} \right)}}} ={ 1.916667 - \frac{{{{1.916667}^3} - 7}}{{3 \cdot {{1.916667}^2}}} }={ 1.912938}\]
\[{{x_3} = {x_2} - \frac{{f\left( {{x_2}} \right)}}{{f^\prime\left( {{x_2}} \right)}}} ={ 1.912938 - \frac{{{{1.912938}^3} - 7}}{{3 \cdot {{1.912938}^2}}} }={ 1.912933}\]
After \(3\) iterations we’ve got the approximate solution with an accuracy of \(5\) decimal places.
Answer: \({x_3} = 1.91293\)
Example 6. Approximate the solution of the equation \(x\ln x = 1\) with an accuracy of 4 decimal places. Use the initial guess \({x_0} = 2.\)
Solution.
Consider the function
\[f\left( x \right) = x\ln x - 1\]
and apply Newton’s method to find a zero of this function.
Find the derivative by the product rule:
\[{f^\prime\left( x \right) = \left( {x\ln x - 1} \right)^\prime }={ 1 \cdot \ln x + x \cdot \frac{1}{x} }={ \ln x + 1.}\]
Calculate the first approximation:
\[{{x_1} = {x_0} - \frac{{f\left( {{x_0}} \right)}}{{f^\prime\left( {{x_0}} \right)}}} ={ 2 - \frac{{2\ln 2 - 1}}{{\ln 2 + 1}} }={ 1.77184}\]
Continue the iterative process until we reach an accuracy of \(4\) decimal places.
\[{{x_2} = {x_1} - \frac{{f\left( {{x_1}} \right)}}{{f^\prime\left( {{x_1}} \right)}}} ={ 1.77184 - \frac{{1.77184 \cdot \ln \left( {1.77184} \right) - 1}}{{\ln \left( {1.77184} \right) + 1}} }={ 1.76323}\]
\[{{x_3} = {x_2} - \frac{{f\left( {{x_2}} \right)}}{{f^\prime\left( {{x_2}} \right)}}} ={ 1.76323 - \frac{{1.76323 \cdot \ln \left( {1.76323} \right) - 1}}{{\ln \left( {1.76323} \right) + 1}} }={ 1.76322}\]
As you can see, we have obtained the required accuracy after only \(2\) steps.
Answer: \({x_2} = 1.7632\)
Example 7. Using Newton’s method, find the solution of the equation \(x + {e^x} = 0\) with an accuracy of 3 decimal places.
Solution.
We take \(f\left( x \right) = x + {e^x},\) choose \({x_0} = - 1,\) and compute the first approximation:
\[{{x_1} = {x_0} - \frac{{f\left( {{x_0}} \right)}}{{f^\prime\left( {{x_0}} \right)}}} ={ - 1 - \frac{{ - 1 + {e^{ - 1}}}}{{1 + {e^{ - 1}}}} }={ - 0.5379}\]
Continue the process until we get the result with the required accuracy.
\[{{x_2} = {x_1} - \frac{{f\left( {{x_1}} \right)}}{{f^\prime\left( {{x_1}} \right)}} }={ - 0.5379 - \frac{{ - 0.5379 + {e^{ - 0.5379}}}}{{1 + {e^{ - 0.5379}}}} }={ - 0.5670}\]
\[{{x_3} = {x_2} - \frac{{f\left( {{x_2}} \right)}}{{f^\prime\left( {{x_2}} \right)}} }={ - 0.5670 - \frac{{ - 0.5670 + {e^{ - 0.5670}}}}{{1 + {e^{ - 0.5670}}}} }={ - 0.5671}\]
We see that the result is already stable to \(3\) decimal places, so the approximate solution is \(x \approx - 0.567.\)
Example 8. Find an approximate solution, accurate to 5 decimal places, to the equation \(\cos x = {x^2}\) that lies in the interval \(\left[ {0,\large{\frac{\pi }{2}}\normalsize} \right].\)
Solution.
First we rewrite this equation in the form
\[\cos x - {x^2} = 0.\]
Suppose that the initial value of the root is \({x_0} = 1.\) Let’s get the first approximation using Newton’s method:
\[{{x_1} = {x_0} - \frac{{f\left( {{x_0}} \right)}}{{f^\prime\left( {{x_0}} \right)}}} ={ {x_0} - \frac{{\cos {x_0} - x_0^2}}{{ - \sin {x_0} - 2{x_0}}} }={ 1 - \frac{{\cos 1 - {1^2}}}{{ - \sin 1 - 2 \cdot 1}} }={ 0.838218}\]
Here and below, we write approximate values to 6 decimal places to track the convergence of the result.
In the next step, we have
\[{{x_2} = {x_1} - \frac{{\cos {x_1} - x_1^2}}{{ - \sin {x_1} - 2{x_1}}} }={ 0.838218 }-{ \frac{{\cos \left( {0.838218} \right) - {{0.838218}^2}}}{{ - \sin \left( {0.838218} \right) - 2 \cdot 0.838218}} }={ 0.824242}\]
The third approximation gives the following value of the root:
\[{{x_3} = {x_2} - \frac{{\cos {x_2} - x_2^2}}{{ - \sin {x_2} - 2{x_2}}} }={ 0.824242 }-{ \frac{{\cos \left( {0.824242} \right) - {{0.824242}^2}}}{{ - \sin \left( {0.824242} \right) - 2 \cdot 0.824242}} }={ 0.824132}\]
Continue computations:
\[{{x_4} = {x_3} - \frac{{\cos {x_3} - x_3^2}}{{ - \sin {x_3} - 2{x_3}}} }={ 0.824132 }-{ \frac{{\cos \left( {0.824132} \right) - {{0.824132}^2}}}{{ - \sin \left( {0.824132} \right) - 2 \cdot 0.824132}} }={ 0.824132}\]
The \(4\)th iteration preserves the first \(6\) decimal places. This means that the required accuracy of \(5\) decimal places was reached at the \(3\)rd step, so the answer is
\[{x_3} = 0.82413\]
Example 9. Find an approximate solution, accurate to 4 decimal places, to the equation \(\sin \left( {{e^x}} \right) = 1\) on the interval \(\left[ {0,\pi } \right].\)
Solution.
We apply Newton’s method with the initial guess \({x_0} = 0.5.\)
Consider the function
\[f\left( x \right) = \sin \left( {{e^x}} \right) – 1.\]
The derivative is written as
\[f^\prime\left( x \right) = {e^x}\cos \left( {{e^x}} \right).\]
Then the first approximation is given by
\[{{x_1} = {x_0} - \frac{{f\left( {{x_0}} \right)}}{{f^\prime\left( {{x_0}} \right)}}} ={ 0.5 - \frac{{\sin \left( {{e^{0.5}}} \right) - 1}}{{{e^{0.5}}\cos \left( {{e^{0.5}}} \right)}} }={ 0.4764}\]
Continue the iteration process until we get an accuracy of \(4\) decimal places.
\[{{x_2} = {x_1} - \frac{{f\left( {{x_1}} \right)}}{{f^\prime\left( {{x_1}} \right)}}} ={ 0.4764 - \frac{{\sin \left( {{e^{0.4764}}} \right) - 1}}{{{e^{0.4764}}\cos \left( {{e^{0.4764}}} \right)}} }={ 0.4641}\]
\[{x_3} = 0.4579\]
\[{x_4} = 0.4548\]
\[{x_5} = 0.4532\]
\[{x_6} = 0.4524\]
\[{x_7} = 0.4520\]
\[{x_8} = 0.4518\]
\[{x_9} = 0.4517\]
\[{x_{10}} = 0.4516\]
\[{x_{11}} = 0.4516\]
As you can see, the process converges quite slowly, so it took \(10\) steps to obtain a result that is stable to \(4\) decimal places.
The answer is \(x \approx 0.4516\)
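A note on why the convergence is so slow (my addition, not in the original solution): at the root \(x^* = \ln(\pi/2)\) we have \(e^{x^*} = \pi/2\) and hence \(\cos(e^{x^*}) = 0\), so \(f'(x^*) = 0\) as well. The root is a double root, and Newton's method then converges only linearly, with the error roughly halving at each step. A short script illustrates this:

```python
import math

# f(x) = sin(e^x) - 1 and its derivative f'(x) = e^x cos(e^x); both vanish
# at the root x* = ln(pi/2), which makes it a double root.
f = lambda x: math.sin(math.exp(x)) - 1.0
fp = lambda x: math.exp(x) * math.cos(math.exp(x))

exact = math.log(math.pi / 2)  # ~ 0.451583

x = 0.5
errors = []
for _ in range(10):
    x -= f(x) / fp(x)
    errors.append(abs(x - exact))

# Successive error ratios hover near 1/2: linear convergence at a double root.
ratios = [e2 / e1 for e1, e2 in zip(errors, errors[1:])]
print(ratios)
```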
Example 10. Given the equation \({x^4} - x - 1 = 0.\) It is known that this equation has a root in the interval \(\left( {1,2} \right).\) Find an approximate value of the root with an accuracy of 4 decimal places.
Solution.
Take the derivative:
\[{f^\prime\left( x \right) = \left( {{x^4} - x - 1} \right)^\prime }={ 4{x^3} - 1.}\]
Notice that \(f\left( 1 \right) = - 1,\) \(f\left( 2 \right) = 13,\) so we choose \({x_0} = 1\) as the initial guess.
Using the recurrence relation
\[{x_{n + 1}} = {x_n} - \frac{{f\left( {{x_n}} \right)}}{{f^\prime\left( {{x_n}} \right)}},\]
we compute several successive approximations:
\[{{x_1} = {x_0} - \frac{{f\left( {{x_0}} \right)}}{{f^\prime\left( {{x_0}} \right)}} }={ 1 - \frac{{{1^4} - 1 - 1}}{{4 \cdot {1^3} - 1}} }={ 1.3333}\]
\[{{x_2} = {x_1} - \frac{{f\left( {{x_1}} \right)}}{{f^\prime\left( {{x_1}} \right)}} }={ 1.3333 - \frac{{{{1.3333}^4} - 1.3333 - 1}}{{4 \cdot {{1.3333}^3} - 1}} }={ 1.2358}\]
\[{{x_3} = {x_2} - \frac{{f\left( {{x_2}} \right)}}{{f^\prime\left( {{x_2}} \right)}} }={ 1.2358 - \frac{{{{1.2358}^4} - 1.2358 - 1}}{{4 \cdot {{1.2358}^3} - 1}} }={ 1.2211}\]
\[{{x_4} = {x_3} - \frac{{f\left( {{x_3}} \right)}}{{f^\prime\left( {{x_3}} \right)}} }={ 1.2211 - \frac{{{{1.2211}^4} - 1.2211 - 1}}{{4 \cdot {{1.2211}^3} - 1}} }={ 1.2207}\]
\[{{x_5} = {x_4} - \frac{{f\left( {{x_4}} \right)}}{{f^\prime\left( {{x_4}} \right)}} }={ 1.2207 - \frac{{{{1.2207}^4} - 1.2207 - 1}}{{4 \cdot {{1.2207}^3} - 1}} }={ 1.2207}\]
You can see that we’ve obtained the solution accurate to \(4\) decimal places at the \(4\)th step. Therefore, we can write the answer as \({x_4} = 1.2207.\)
There are a couple of loose ends that need tying up regarding the IT index. One of them is the derivation of the information equilibrium condition (see also the paper) with non-uniform probability distributions. This turns out to be relatively trivial and only involves a change in the IT index formula. The information equilibrium condition is
$$
\frac{dA}{dB} = k \; \frac{A}{B}
$$
And instead of:
$$
k = \frac{\log \sigma_{A}}{\log \sigma_{B}}
$$
with $\sigma_A$ and $\sigma_B$ being the number of symbols in the "alphabet" chosen uniformly, we have
$$
k = \frac{\sum_{i} p_{i}^{(A)} \log p_{i}^{(A)}}{\sum_{j} p_{j}^{(B)} \log p_{j}^{(B)}}
$$
where $p_{i}^{(A)}$ and $p_{j}^{(B)}$ represent the probabilities of the different outcomes. The generalization to continuous distributions is also trivial and is left as an exercise for the reader.
However, while it hasn't come up in any of the models yet, it should be noted that the above definitions imply that $k$ is positive. But it turns out that we can handle negative $k$ by simply using the transformation $B \rightarrow 1/C$ so that:
$$
\begin{align}
\frac{dA}{dB} = & - |k| \; \frac{A}{B}\\
-C^{2} \frac{dA}{dC} = & - |k| \; \frac{AC}{1}\\
\frac{dA}{dC} = & |k| \; \frac{A}{C}
\end{align}
$$
That is to say, an information equilibrium relationship $A \rightleftarrows B$ with a negative IT index is equivalent to the relationship $A \rightleftarrows 1/C$ with a positive index.
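As a worked check (my addition, not in the original post), solving the condition explicitly makes the sign flip transparent:

```latex
% Separating variables in dA/dB = k A/B and integrating gives a power law:
%   A = A_ref (B/B_ref)^k.
% For a negative index k = -|k|, substituting B = 1/C gives the same
% relationship with a positive index in the variable C:
\begin{align}
A &= A_{\mathrm{ref}} \left( \frac{B}{B_{\mathrm{ref}}} \right)^{-|k|}
   = A_{\mathrm{ref}} \left( \frac{C}{C_{\mathrm{ref}}} \right)^{|k|},
   \qquad C = 1/B
\end{align}
```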
The title is a bit of a joke, and for the controversy see here. Looking into the Solow model bumps you into the question of what "capital" (K) is, and that question met with the titular controversy a while back, where Cambridge, MA said you could add up different stuff in a sensible way while Cambridge, UK said you couldn't.
The information equilibrium (IE) model calls the argument for the UK (and Joan Robinson), but allows (at least) two possibilities for definitions of capital that are sensible. These sensible definitions weren't advocated by Solow/Samuelson at MIT, hence why I say that the UK won the debate: you can't just add stuff up and get a sensible answer.
First, two quick "proofs". I already showed IE is an equivalence relation (you can use it to define a set of things in IE with some economic aggregate), but I need a bit more: IE is a group under multiplication.
If $A \rightleftarrows K$ (with IT index $a$), then $A^{x} \rightleftarrows K$ (with IT index $a x$) because:
$$
\frac{d}{dK} A^{x} = x A^{x - 1} \frac{dA}{dK} = a x \frac{A^{x}}{K}
$$
(I show this because it applies for real exponents rather than just natural numbers and that might be important for some reason in the future; for natural numbers $x$ the following result would suffice.)
If $A \rightleftarrows K$ and $B \rightleftarrows K$ (with IT indices $a$ and $b$), then $A B \rightleftarrows K$ (with IT index $a + b$) because:
$$
\frac{d}{dK} AB = \frac{dA}{dK} B + A \frac{dB}{dK} = (a + b) \frac{AB}{K}
$$
So we have the set of all things that are in IE with $K$, and the product of any two of those things is another thing in the set — therefore it's a group. It is not, however, a ring — the set isn't closed under addition:
$$
\frac{d}{dK} (A + B) = \frac{dA}{dK} + \frac{dB}{dK} = \frac{a A + b B}{K}
$$
so $A + B$ is not in IE with $K$ unless $a = b$.
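A quick numerical sanity check of these two computations (my own sketch, not from the original post; it uses the power-law solutions $A = K^{a}$ and $B = K^{b}$ of the IE condition and compares central finite differences against the claimed right-hand sides):

```python
# Power-law solutions A = K^a, B = K^b of the IE condition dX/dK = x * X / K.
a, b = 1.5, 0.7
A = lambda K: K ** a
B = lambda K: K ** b

K0, h = 2.0, 1e-6

def ddK(f):
    """Central finite-difference derivative of f at K0."""
    return (f(K0 + h) - f(K0 - h)) / (2 * h)

# Product: d(AB)/dK = (a + b) AB / K, so AB is in IE with K (index a + b).
assert abs(ddK(lambda K: A(K) * B(K)) - (a + b) * A(K0) * B(K0) / K0) < 1e-4

# Sum: d(A+B)/dK = (a A + b B) / K ...
sum_deriv = ddK(lambda K: A(K) + B(K))
assert abs(sum_deriv - (a * A(K0) + b * B(K0)) / K0) < 1e-4
# ... which is NOT of the form k (A + B) / K for a constant k when a != b:
assert abs(sum_deriv - a * (A(K0) + B(K0)) / K0) > 1e-2
assert abs(sum_deriv - b * (A(K0) + B(K0)) / K0) > 1e-2
```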
This basically was Joan Robinson's point — unless $A$ and $B$ are the same thing, you're comparing apples and oranges. Money doesn't help us either and if you introduce it, the relative prices of the capital goods become important **.
Sensible definitions of capital
Of course, this points to the first sensible solution to the capital controversy: instead of adding up capital items, use the geometric mean. Using the two results above, you can show that if $A \rightleftarrows K$, $B \rightleftarrows K$, $C \rightleftarrows K$, etc, then
$$
(A B C \; ... \;)^{1/n} \rightleftarrows K
$$
The geometric mean is also the only sensible mean for capital goods measured either as indices or in terms of money.
The second sensible solution to the controversy is a partition function approach (as I've done here) where we simply define capital to be the expected value of the capital operator, which is just the sum of the individual capital goods operators:
$$
\langle K \rangle \equiv \langle A + B + C + \; ... \; \rangle
$$
In that sense, "capital" would be more like NGDP than, say, a stock index.
...
** Update 29 May 2015
I thought I'd add in the details of the sentence I marked with ** above. We assume two goods markets (information equilibrium conditions) $p_{a} : N \rightleftarrows A$ and $p_{b} : N \rightleftarrows B$ where $N$ is aggregate demand/nominal output measured in money and the $p_{i}$ are prices. That gives us:
$$
k_{a} p_{a} A = N \; \text{and} \; k_{b} p_{b} B = N
$$
Substituting into the formula above
$$
\frac{d}{dK} (A + B) = \frac{a A + b B}{K} = \left( \frac{a}{k_{a} p_{a}} + \frac{b}{k_{b} p_{b}} \right) \; \frac{N}{K}
$$
which basically shows that $K$ is in information equilibrium with aggregate demand. Note the appearance of the prices in the information transfer index.
...
Update 14 October 2018
Changed the old notation $A \rightarrow B$ to the better notation for an information equilibrium relationship: $A \rightleftarrows B$. Added "measured in money" to 29 May 2015 update.
The answer is positive, since $\psi:H_r \to \tilde H_{\binom {r}{k}}$ is proper, and every proper map is closed.
Here is a proof that $\psi$ is proper:
Let $K \subseteq \tilde H_{\binom {r}{k}}$ be compact, and let $A_n \in \psi^{-1}(K)$. We shall prove $A_n$ has a convergent subsequence in $\psi^{-1}(K)$. It suffices to prove $A_n$ converges in $\text{End}(V)$; indeed, if $A_n \to A$, then $\bigwedge^k A_n \to \bigwedge^k A$, and the limit $\bigwedge^k A$ must be in $K$. In particular, $\binom {r}{k}=\operatorname{rank}(\bigwedge^kA) = \binom {\operatorname{rank}(A)}{k}$, so $\operatorname{rank}(A)=r$, that is $A \in H_r$.
By using the SVD, we can assume $A_n=\text{diag}(\sigma_1^n,\dots,\sigma_r^n,0,\dots,0)$ is diagonal, where the first $r$ diagonal elements are non-zero and the last $d-r$ elements are zero. (Since the orthogonal group is compact, the orthogonal factors converge after passing to a subsequence.)
$\bigwedge^k A_n$ is diagonal, and its first $\binom {r}{k}$ elements are of the form $\Pi_{s=1}^k \sigma_{i_s}^n$, where all the $1 \le i_s \le r$ are distinct. So, every such product
converges when $n \to \infty$ to a positive number. Indeed, $\psi(A_n)=\bigwedge^k A_n \in K \subseteq \tilde H_{\binom {r}{k}}$, so it converges (after passing to a subsequence) to an element $D \in K$. Since $\text{rank}(D)=\binom {r}{k}$, it follows that the products $\Pi_{s=1}^k \sigma_{i_s}^n$ must converge to positive numbers. (If even one of them converges to zero instead, the rank of the limit $D$ would be too low, which is a contradiction).
Now, let $1\le i \neq j \le r$. Since $r \ge k+1$, we can choose some $1 \le i_1,\dots,i_{k-1} \le r$ all different from $i,j$. Since both products $$(\Pi_{s=1}^{k-1} \sigma_{i_s}^n)\sigma_{i}^n,(\Pi_{s=1}^{k-1} \sigma_{i_s}^n)\sigma_{j}^n$$
converge to positive numbers, so does their ratio $C_{ij}^n=\frac{\sigma_i^n}{\sigma_j^n}$.
We know that $$\prod_{s=1}^k \sigma_{s}^n=\prod_{s=1}^k \sigma_{1}^n\frac{\sigma_s^n}{\sigma_1^n}=\prod_{s=1}^k \sigma_{1}^nC_{s1}^n=(\sigma_{1}^n)^k \prod_{s=1}^k C_{s1}^n$$
converges to a positive number. Since all the $C_{s1}^n$ converge to positive numbers, we deduce that $\sigma_1^n$ converges; the same argument applies to every $\sigma_i^n$, so $A_n$ indeed converges (and, as shown above, the limit must have the right rank). |
While searching for more references for the new Wikipedia article (https://en.wikipedia.org/wiki/Sums_of_three_cubes) I found a horrible mistake related to this problem in Wolfram's _A New Kind of Science_, p. 789 (https://www.wolframscience.com/nks/p789--implications-for-mathematics-and-its-foundations/). Wolfram says that the smallest solution to \(x^3+y^3+z^3=2\) is the known sporadic one,
\[\begin{align}2&=1214928^3\\ &+ 3480205^3\\ &+ (-3528875)^3. \end{align}\] But the parametric solution has many that are smaller: \((1,1,0)\), \((7,-5,-6)\), \((49,-47,-24)\), etc.
@11011110 Also, $z$ is not negative in Wolfram's equation there. A typo, of course, but still.
|
The OP in this post asked the following:
If you take a regular $n$-sided polygon inscribed in the unit circle and form the product of the lengths of all the diagonals (together with the two sides) drawn from one vertex, you get exactly $n$:
$$A_1A_2\cdot A_1A_3\cdots A_1A_n = n.$$
user21820 used the following idea to solve the above question.
Let $z$ be a complex number such that $z^n=1$, where $n$ is the number of sides of the polygon, and let $z_0=1,z_1,\ldots,z_{n-1}$ denote the roots of the equation $z^n=1.$ It suffices to show that $$\prod_{k=1}^{n-1}|1-z_k| = n.$$
By the definition of roots, we have$$z^n-1=\prod_{k=0}^{n-1}(z-z_k) = (z-1)\prod_{k=1}^{n-1}(z-z_k).$$Also by factorization, we have $$z^n-1 = (z-1)\sum_{k=0}^{n-1}z^k$$By equating the two equations and
cancelling the factor $(z-1)$, we have $$\prod_{k=1}^{n-1}(z-z_k) = \sum_{k=0}^{n-1}z^k.$$ Let $z=1.$ So we have $$\prod_{k=1}^{n-1}(1-z_k) = \sum_{k=0}^{n-1}z^k = n.$$Therefore, $$\prod_{k=1}^{n-1}|1-z_k| = n.$$
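The identity is also easy to verify numerically; a quick sketch:

```python
import cmath
from math import prod, isclose

# z_1, ..., z_{n-1}: the nontrivial n-th roots of unity
for n in range(2, 25):
    roots = [cmath.exp(2j * cmath.pi * k / n) for k in range(1, n)]
    # product of distances from the vertex z_0 = 1 to every other vertex
    assert isclose(prod(abs(1 - z) for z in roots), n, rel_tol=1e-9)
```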
Question: After we cancel the factor $(z-1)$, isn't the later substitution $z=1$ invalid? My doubt arises from the fact that a cancellation such as $$0\cdot x = 0 \cdot y \Rightarrow x = y$$ is not valid. |
D-meson nuclear modification factor and elliptic flow measurements in Pb–Pb collisions at $\sqrt{s_{NN}} = 5.02$ TeV with ALICE at the LHC
(Elsevier, 2017-11)
ALICE measured the nuclear modification factor ($R_{AA}$) and elliptic flow ($v_{2}$) of D mesons ($D^{0}$, $D^{+}$, $D^{*+}$ and $D_{s}^{+}$) in semi-central Pb–Pb collisions at $\sqrt{s_{NN}} = 5.02$ TeV. The increased ...
ALICE measurement of the $J/\psi$ nuclear modification factor at mid-rapidity in Pb–Pb collisions at $\sqrt{s_{NN}} = 5.02$ TeV
(Elsevier, 2017-11)
ALICE at the LHC provides unique capabilities to study charmonium production at low transverse momenta ($p_T$). At central rapidity ($|y|<0.8$), ALICE can reconstruct $J/\psi$ via their decay into two electrons down to zero ...
Multiplicity dependence of light-flavor hadron production in pp collisions at $\sqrt{s} = 7$ TeV
(American Physical Society, 2019-02-08)
Comprehensive results on the production of unidentified charged particles, $\pi^{\pm}$, $K^{\pm}$, $K^{0}_{S}$, $K^{*}(892)^{0}$, $p$, $\bar{p}$, $\phi(1020)$, $\Lambda$, $\bar{\Lambda}$, $\Xi^{-}$, $\bar{\Xi}^{+}$, $\Omega^{-}$, and $\bar{\Omega}^{+}$ hadrons in proton-proton (pp) collisions at $\sqrt{s} = 7$ TeV at midrapidity ...
Measurement of dielectron production in central Pb-Pb collisions at $\sqrt{s_{NN}} = 2.76$ TeV
(American Physical Society, 2019-02-14)
The first measurement of dielectron ($e^{+}e^{-}$) production in central (0–10%) Pb–Pb collisions at $\sqrt{s_{NN}} = 2.76$ TeV at the LHC is presented. The dielectron invariant-mass spectrum is compared to the expected contributions from hadron ... |
@Mathphile I found no prime of the form $$n^{n+1}+(n+1)^{n+2}$$ for $n>392$ yet and neither a reason why the expression cannot be prime for odd n, although there are far more even cases without a known factor than odd cases.
@TheSimpliFire That's what I'm thinking about. I had some "vague feeling" that there must be some elementary proof, so I decided to find it, and then I found it. It is really "too elementary", but I like surprises, if they're good.
It is in fact difficult; I did not understand all the details either. But the ECM method is analogous to the $p-1$ method, which works well when there is a factor $p$ such that $p-1$ is smooth (has only small prime factors).
Brocard's problem is a problem in mathematics that asks to find integer values of $n$ and $m$ for which $$n!+1=m^2,$$ where $n!$ is the factorial. It was posed by Henri Brocard in a pair of articles in 1876 and 1885, and independently in 1913 by Srinivasa Ramanujan. == Brown numbers == Pairs of the numbers $(n, m)$ that solve Brocard's problem are called Brown numbers. There are only three known pairs of Brown numbers: $(4,5)$, $(5,11...$
$\textbf{Corollary.}$ No solutions to Brocard's problem (with $n>10$) occur when $n$ satisfies either \begin{equation}n!=[2\cdot 5^{2^k}-1\pmod{10^k}]^2-1\end{equation} or \begin{equation}n!=[2\cdot 16^{5^k}-1\pmod{10^k}]^2-1\end{equation} for a positive integer $k$. These are the OEIS sequences A224473 and A224474.
Proof: First, note that since $(10^k\pm1)^2-1\equiv((-1)^k\pm1)^2-1\equiv1\pm2(-1)^k\not\equiv0\pmod{11}$, $m\ne 10^k\pm1$ for $n>10$. If $k$ denotes the number of trailing zeros of $n!$, Legendre's formula implies that \begin{equation}k=\min\left\{\sum_{i=1}^\infty\left\lfloor\frac n{2^i}\right\rfloor,\sum_{i=1}^\infty\left\lfloor\frac n{5^i}\right\rfloor\right\}=\sum_{i=1}^\infty\left\lfloor\frac n{5^i}\right\rfloor\end{equation} where $\lfloor\cdot\rfloor$ denotes the floor function.
The upper limit can be replaced by $\lfloor\log_5n\rfloor$ since for $i>\lfloor\log_5n\rfloor$, $\left\lfloor\frac n{5^i}\right\rfloor=0$. An upper bound can be found using geometric series and the fact that $\lfloor x\rfloor\le x$: \begin{equation}k=\sum_{i=1}^{\lfloor\log_5n\rfloor}\left\lfloor\frac n{5^i}\right\rfloor\le\sum_{i=1}^{\lfloor\log_5n\rfloor}\frac n{5^i}=\frac n4\left(1-\frac1{5^{\lfloor\log_5n\rfloor}}\right)<\frac n4.\end{equation}
Thus $n!$ has $k$ trailing zeros only for $n\in(4k,\infty)$. Since $2\cdot5^{2^k}-1\pmod{10^k}$ and $2\cdot16^{5^k}-1\pmod{10^k}$ have at most $k$ digits, $m^2-1$ has at most $2k$ digits under the conditions in the Corollary. The Corollary therefore follows if $n!$ has more than $2k$ digits for $n>10$. From the bound above, $n!$ has at least the same number of digits as $(4k)!$. Stirling's formula implies that \begin{equation}(4k)!>\frac{\sqrt{2\pi}\left(4k\right)^{4k+\frac{1}{2}}}{e^{4k}}\end{equation}
Since the number of digits of an integer $t$ is $1+\lfloor\log t\rfloor$ where $\log$ denotes the logarithm in base $10$, the number of digits of $n!$ is at least \begin{equation}1+\left\lfloor\log\left(\frac{\sqrt{2\pi}\left(4k\right)^{4k+\frac{1}{2}}}{e^{4k}}\right)\right\rfloor\ge\log\left(\frac{\sqrt{2\pi}\left(4k\right)^{4k+\frac{1}{2}}}{e^{4k}}\right).\end{equation}
Therefore it suffices to show that for $k\ge2$ (since $n>10$ and $k<n/4$), \begin{equation}\log\left(\frac{\sqrt{2\pi}\left(4k\right)^{4k+\frac{1}{2}}}{e^{4k}}\right)>2k\iff8\pi k\left(\frac{4k}e\right)^{8k}>10^{4k}\end{equation} which holds if and only if \begin{equation}\left(\frac{10}{\left(\frac{4k}e\right)^2}\right)^{4k}<8\pi k\iff k^2(8\pi k)^{\frac1{4k}}>\frac58e^2.\end{equation}
Now consider the function $f(x)=x^2(8\pi x)^{\frac1{4x}}$ over the domain $\Bbb R^+$, which is clearly positive there. Then after some algebra it is found that \begin{align*}f'(x)&=2x(8\pi x)^{\frac1{4x}}+\frac14(8\pi x)^{\frac1{4x}}(1-\ln(8\pi x))\\\implies f'(x)&=\frac{2f(x)}{x^2}\left(x+\frac{1-\ln(8\pi x)}8\right)>0\end{align*} for $x>0$, as $\min\{x+\frac{1-\ln(8\pi x)}8\}>0$ in the domain.
Thus $f$ is monotonically increasing on $(0,\infty)$, and since $2^2(8\pi\cdot2)^{\frac18}>\frac58e^2$, the last inequality above holds for all $k\ge2$. This means that the number of digits of $n!$ exceeds $2k$, proving the Corollary. $\square$
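Both Legendre's formula for the trailing zeros of $n!$ and the scarcity of Brown numbers are easy to check computationally; a quick sketch:

```python
from math import isqrt

def trailing_zeros(n):
    """Number of trailing zeros of n!, by Legendre's formula:
    sum over i >= 1 of floor(n / 5^i)."""
    k, p = 0, 5
    while p <= n:
        k += n // p
        p *= 5
    return k

# the bound k < n/4 used in the proof above
assert all(trailing_zeros(n) < n / 4 for n in range(1, 10_000))

# Brown numbers: the only n <= 200 with n! + 1 a perfect square are 4, 5, 7
brown, f = [], 1
for n in range(1, 201):
    f *= n
    if isqrt(f + 1) ** 2 == f + 1:
        brown.append(n)
assert brown == [4, 5, 7]
```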
We get $n^n+3\equiv 0\pmod 4$ for odd $n$, so we can see from here that it is even (or, we could have used @TheSimpliFire's one-or-two-step method to derive this without any contradiction - which is better)
@TheSimpliFire Hey! with $4\pmod {10}$ and $0\pmod 4$ then this is the same as $10m_1+4$ and $4m_2$. If we set them equal to each other, we have that $5m_1=2(m_2-m_1)$ which means $m_1$ is even. We get $4\pmod {20}$ now :P
Yet again a conjecture! Motivated by Catalan's conjecture and a recent question of mine, I conjecture that: For distinct, positive integers $a,b$, the only solution to this equation $$a^b-b^a=a+b\tag1$$ is $(a,b)=(2,5).$ It is of anticipation that there will be much fewer solutions for incr... |
Math question on Newton's method and detecting actual zeros
02-07-2017, 05:04 PM
Post: #1
Math question on Newton's method and detecting actual zeros
(Admins: If this is in the wrong forum, please feel free to move it)
This came up during a debugging process in which Newton's method (using backtracking linesearch) gave me a solution to the system
\[ \frac{x\cdot y}{x+y} = 127\times 10^{-12}, \quad \left( \frac{x+y}{x} \right)^2 = 8.377 \]
(This problem was posed on the HP Prime subforum: http://hpmuseum.org/forum/thread-7677.html)
One solution I found was: \( x=1.94043067156\times 10^{-10}, \
y=3.67576704293\times 10^{-10} \) (hopefully no typos).
On the Prime, the errors for the equations are on the order of \(10^{-19} \) and \(10^{-11}\) for the first and second equations, respectively (again, assuming I made no typos copying). So my question is: should a numerical solver treat \(1.27\times 10^{-10}\) as "significant" or as 0 (especially when it comes time to check for convergence, when the tolerance for \( |f_i| \) might be set to, say, \( 10^{-10} \); here \( f_i \) is the i-th equation in the system, set equal to 0)?
Graph 3D | QPI | SolveSys
02-07-2017, 06:45 PM
Post: #2
RE: Math question on Newton's method and detecting actual zeros
.
Hi, Han:
(02-07-2017 05:04 PM)Han Wrote: (Admins: If this is in the wrong forum, please feel free to move it)
Your system is trivial to solve by hand, like this:
1) Parameterize:
y = t*x
2) Substitute y=t*x into the first equation (a = 127E-12):
x*t*x = a*(x+t*x) -> t*x^2 = a*(1+t)*x -> (assuming x is not 0, which would make the second equation meaningless) t*x = a*(1+t) -> x = a*(1+t)/t
3) Substitute y=t*x in the second equation (b=8.377)
(1+t)^2 = b -> 1+t = sqrt(b) -> t = sqrt(b)-1 or t = -sqrt(b)-1
4) let's consider the first case (the second is likewise):
t = sqrt(b)-1 = 1.8943047524405580466334231771918
5) substitute the value of t in the first equation above in (2):
x = a*(1+t)/t = 1.9404306676968291608003859882111e-10
6) now, y=t*x, so:
y = t*x = 3.6757670355995087192244474350336e-10
which gives your solution. Taking the negative sqrt would give another.
As for your question, the best way to check for convergence is not to rely on some tolerance for the purported zero value when evaluating both equations at the computed x, y approximations in every iteration, but rather to stop when consecutive approximations differ by less than a user-set tolerance expressed in ulps, i.e., units in the last place.
For instance, if you're making your computation with 10 digits and you set your tolerance to 2 ulps, you would stop iterating as soon as consecutive approximations for both x and y have 8 digits in common (mantissa digits, regardless of the exponents, which of course should be the same).
Once you stop the iterations you should then check the values of f(x,y) and g(x,y) to determine whether you've found a root, a pole, or an extremum (maximum, minimum) but as far as stopping the iterations is concerned, the tolerance in ulps is the one to use for best results as it is completely independent of the magnitude of the roots, they might be of the order of 1E38 or of 1E-69 and it wouldn't matter.
Regards.
V.
.
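The magnitude-independent stopping rule described in the post above can be sketched in Python (a relative tolerance on consecutive iterates stands in for a true ulp comparison; the function name and starting values are mine, and the Jacobian is computed analytically for this particular system):

```python
def solve_system(a=127e-12, b=8.377, x=1e-10, y=2e-10, rel_tol=1e-12):
    """Newton's method for x*y/(x+y) = a, ((x+y)/x)^2 = b.
    Stop when consecutive approximations agree to within a *relative*
    tolerance, independent of the magnitude of the roots."""
    for _ in range(100):
        f1 = x * y / (x + y) - a
        f2 = ((x + y) / x) ** 2 - b
        # analytic Jacobian entries
        j11 = (y / (x + y)) ** 2           # d f1 / d x
        j12 = (x / (x + y)) ** 2           # d f1 / d y
        j21 = -2 * (x + y) * y / x ** 3    # d f2 / d x
        j22 = 2 * (x + y) / x ** 2         # d f2 / d y
        det = j11 * j22 - j12 * j21
        dx = (-f1 * j22 + f2 * j12) / det  # Cramer's rule for J [dx dy]^T = -F
        dy = (-f2 * j11 + f1 * j21) / det
        x, y = x + dx, y + dy
        if abs(dx) <= rel_tol * abs(x) and abs(dy) <= rel_tol * abs(y):
            return x, y
    raise RuntimeError("did not converge")

x, y = solve_system()
# agrees with the closed-form solution x = a*(1+t)/t, y = t*x, t = sqrt(b)-1
```

Note that the residuals f1, f2 play no role in the stopping test itself; they are only worth inspecting after the iteration stops, exactly as suggested.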
02-07-2017, 08:03 PM
Post: #3
RE: Math question on Newton's method and detecting actual zeros
(02-07-2017 06:45 PM)Valentin Albillo Wrote: .
Thank you for the detailed solution; though in truth it was merely to present a case where a function might itself produce outputs that are extremely tiny. The math I understand quite well; it's the computer science part of implementing Newton's method that was giving me trouble. Your explanation above regarding ulps was precisely the answer I was looking for.
Graph 3D | QPI | SolveSys
|
The $I^2$ statistic was introduced by Higgins and Thompson in their seminal 2002 paper and has become a rather popular statistic to report in meta-analyses, as it facilitates the interpretation of the amount of heterogeneity present in a given dataset.
For a standard random-effects models, the $I^2$ statistic is computed with $$I^2 = 100\% \times \frac{\hat{\tau}^2}{\hat{\tau}^2 + \tilde{v}},$$ where $\hat{\tau}^2$ is the estimated value of $\tau^2$ and $$\tilde{v} = \frac{(k-1) \sum w_i}{(\sum w_i)^2 - \sum w_i^2},$$ where $w_i = 1/v_i$ is the inverse of the sampling variance of the $i^{th}$ study. The equation for $\tilde{v}$ is equation 9 in Higgins & Thompson (2002) and can be regarded as the 'typical' within-study (or sampling) variance of the observed effect sizes or outcomes.
1) Sidenote: As the equation above shows, $I^2$ estimates the amount of heterogeneity relative to the total amount of variance in the observed effects or outcomes (which is composed of the variance in the true effects, that is, $\hat{\tau}^2$, plus sampling variance, that is, $\tilde{v}$). Therefore, it is not an absolute measure of heterogeneity and should not be interpreted as such. For example, a practically/clinically irrelevant amount of heterogeneity (i.e., variance in the true effects) could lead to a large $I^2$ value if all of the studies are very large (in which case $\tilde{v}$ will be small). Conversely, when all of the studies are small (in which case $\tilde{v}$ will be large), $I^2$ may still be small, even if there are large differences in the size of the true effects. See also chapter 16 in Borenstein et al. (2009), which discusses this idea very nicely.
However, this caveat aside, $I^2$ is a very useful measure because it directly indicates to what extent heterogeneity contributes to the total variance. In addition, most people find $I^2$ easier to interpret than estimates of $\tau^2$.
Let's try out the computation for a standard random-effects model (see Berkey et al. (1995) and
help(dat.bcg) for more details on the dataset used). First, we use the
rma() function for this:
library(metafor)
dat <- escalc(measure="RR", ai=tpos, bi=tneg, ci=cpos, di=cneg, data=dat.bcg)
res <- rma(yi, vi, data=dat)
res$I2
[1] 92.22139
So, we estimate that roughly 92% of the total variance is due to heterogeneity (i.e., variance in the true effects), while the remaining 8% can be attributed to sampling variance.
Manually computing $I^2$ as described above yields the same result:
k <- res$k
wi <- 1/dat$vi
vt <- (k-1) * sum(wi) / (sum(wi)^2 - sum(wi^2))
100 * res$tau2 / (res$tau2 + vt)
[1] 92.22139
Before we continue with more complex models, it is useful to point out a more general equation for computing $I^2$, which also applies to models involving moderator variables (i.e., mixed-effects meta-regression models). This will also become important when dealing with models where sampling errors are no longer independent. So, let us define $$\mathbf{P} = \mathbf{W} - \mathbf{W} \mathbf{X} (\mathbf{X}' \mathbf{W} \mathbf{X})^{-1} \mathbf{X}' \mathbf{W},$$ where $\mathbf{W}$ is (for now) a diagonal matrix with the inverse sampling variances (i.e., $1/v_i$) along the diagonal and $\mathbf{X}$ is the model matrix. In the random-effects model, $\mathbf{X}$ is just a column vector with 1's, but in meta-regression models, it will contain additional columns with the values of the moderator variables. Then we define $$I^2 = 100\% \times \frac{\hat{\tau}^2}{\hat{\tau}^2 + \frac{k-p}{\mathrm{tr}[\mathbf{P}]}},$$ where $\mathrm{tr}[\mathbf{P}]$ denotes the trace of the $\mathbf{P}$ matrix (i.e., the sum of the diagonal elements) and $p$ denotes the number of columns in $\mathbf{X}$.
Let's try this out for the example above:
W <- diag(1/dat$vi)
X <- model.matrix(res)
P <- W - W %*% X %*% solve(t(X) %*% W %*% X) %*% t(X) %*% W
100 * res$tau2 / (res$tau2 + (res$k-res$p)/sum(diag(P)))
[1] 92.22139
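That the general formula agrees with equation 9 in the intercept-only case, i.e. that $(k-p)/\mathrm{tr}[\mathbf{P}] = \tilde{v}$ when $\mathbf{X}$ is a column of 1's, can also be confirmed outside of R; here is a sketch in Python with made-up sampling variances (not metafor code):

```python
import numpy as np

rng = np.random.default_rng(1)
k = 13
v = rng.uniform(0.01, 0.2, size=k)   # hypothetical sampling variances v_i
w = 1.0 / v                          # inverse-variance weights
W = np.diag(w)
X = np.ones((k, 1))                  # intercept-only model matrix (p = 1)
P = W - W @ X @ np.linalg.inv(X.T @ W @ X) @ X.T @ W

# 'typical' sampling variance from equation 9
v_typical = (k - 1) * w.sum() / (w.sum() ** 2 - (w ** 2).sum())
assert np.isclose((k - 1) / np.trace(P), v_typical)
```

The equality is exact here, not approximate: for $\mathbf{X}=\mathbf{1}$, $\mathrm{tr}[\mathbf{P}] = \sum w_i - \sum w_i^2 / \sum w_i$.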
For a model with moderators, this is also how
rma() computes $I^2$. For a meta-regression model fitted to these data (with res refit to include a moderator), res$I2 yields:
[1] 68.39313
and the manual computation again matches:
X <- model.matrix(res)
P <- W - W %*% X %*% solve(t(X) %*% W %*% X) %*% t(X) %*% W
100 * res$tau2 / (res$tau2 + (res$k-res$p)/sum(diag(P)))
[1] 68.39313
(although instead of using
solve(), which can be numerically unstable in some cases,
rma() uses the QR decomposition to obtain the $(\mathbf{X}' \mathbf{W} \mathbf{X})^{-1}$ part).
In models with moderators, the $I^2$ statistic indicates how much of the unaccounted variance in the observed effects or outcomes (which is composed of unaccounted variance in the true effects, that is, residual heterogeneity, plus sampling variance) can be attributed to residual heterogeneity. Here, we estimate that roughly 68% of the unaccounted variance is due to residual heterogeneity.
Multilevel structures arise when the estimates can be grouped together based on some higher-level clustering variable (e.g., paper, lab or research group, species). In that case, true effects belonging to the same group may be more similar to each other than true effects for different groups. Meta-analytic multilevel models can be used to account for the between- and within-cluster heterogeneity and hence the intracluster (or intraclass) correlation in the true effects. See Konstantopoulos (2011) for a detailed illustration of such a model.
In fact, let's use the same example here. First, we can fit the multilevel random-effects model with:
dat <- dat.konstantopoulos2011
res <- rma.mv(yi, vi, random = ~ 1 | district/school, data=dat)
res

Multivariate Meta-Analysis Model (k = 56; method: REML)

Variance Components:

            estim    sqrt  nlvls  fixed           factor
sigma^2.1  0.0651  0.2551     11     no         district
sigma^2.2  0.0327  0.1809     56     no  district/school

Test for Heterogeneity:
Q(df = 55) = 578.8640, p-val < .0001

Model Results:

estimate      se    zval    pval   ci.lb   ci.ub
  0.1847  0.0846  2.1845  0.0289  0.0190  0.3504  *

---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Note that the model contains two variance components ($\sigma^2_1$ and $\sigma^2_2$), for the between-cluster (district) heterogeneity and the within-cluster (school within district) heterogeneity.
Based on the discussion above, it is now very easy to generalize the concept of $I^2$ to such a model (see also Nakagawa & Santos, 2012). That is, we can first compute:
W <- diag(1/dat$vi)
X <- model.matrix(res)
P <- W - W %*% X %*% solve(t(X) %*% W %*% X) %*% t(X) %*% W
100 * sum(res$sigma2) / (sum(res$sigma2) + (res$k-res$p)/sum(diag(P)))
[1] 95.18731
Note that we have summed up the two variance components in the numerator and denominator. Therefore, this statistic can be thought of as the overall $I^2$ value that indicates how much of the total variance can be attributed to the total amount of heterogeneity (which is the sum of between- and within-cluster heterogeneity). In this case, the value is again very large, with approximately 95% of the total variance due to heterogeneity.
However, we can also break things down to estimate how much of the total variance can be attributed to between- and within-cluster heterogeneity separately:
100 * res$sigma2 / (sum(res$sigma2) + (res$k-res$p)/sum(diag(P)))
[1] 63.32484 31.86248
Therefore, about 63% of the total variance is estimated to be due to between-cluster heterogeneity and about 32% to within-cluster heterogeneity, with the remaining 5% being sampling variance.
Now we will consider the same type of generalization, but for a multivariate model with non-independent sampling errors. Therefore, not only do we need to account for heterogeneity and dependency in the underlying true effects, but we also now need to specify covariances between the sampling errors. For an illustration of such a model, see Berkey et al. (1998), which we can also use for illustration purposes here.
dat <- dat.berkey1998
V <- lapply(split(dat[,c("v1i", "v2i")], dat$trial), as.matrix)
V <- bldiag(V)
res <- rma.mv(yi, V, mods = ~ outcome - 1, random = ~ outcome | trial, struct="UN", data=dat)
res

Multivariate Meta-Analysis Model (k = 10; method: REML)

Variance Components:

outer factor: trial   (nlvls = 5)
inner factor: outcome (nlvls = 2)

         estim    sqrt  k.lvl  fixed  level
tau^2.1  0.0327  0.1807      5     no     AL
tau^2.2  0.0117  0.1083      5     no     PD

        rho.AL  rho.PD    AL  PD
AL           1  0.6088     -  no
PD      0.6088       1     5   -

Test for Residual Heterogeneity:
QE(df = 8) = 128.2267, p-val < .0001

Test of Moderators (coefficient(s) 1,2):
QM(df = 2) = 108.8616, p-val < .0001

Model Results:

           estimate      se     zval    pval    ci.lb    ci.ub
outcomeAL   -0.3392  0.0879  -3.8589  0.0001  -0.5115  -0.1669  ***
outcomePD    0.3534  0.0588   6.0057  <.0001   0.2381   0.4688  ***

---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Two things are worth noting here. First of all, we allow the amount of heterogeneity to differ for the two outcomes (AL = attachment level; PD = probing depth) by using an unstructured variance-covariance matrix for the true effects (i.e.,
struct="UN"). Second,
V is the variance-covariance matrix of the sampling errors, which is no longer diagonal. The $\mathbf{W}$ matrix described earlier is actually the inverse of the variance-covariance matrix of the sampling errors, so in general, we should write $\mathbf{W} = \mathbf{V}^{-1}$.
Therefore, a possible generalization of $I^2$ to this model is:
W <- solve(V)
X <- model.matrix(res)
P <- W - W %*% X %*% solve(t(X) %*% W %*% X) %*% t(X) %*% W
100 * res$tau2 / (res$tau2 + (res$k-res$p)/sum(diag(P)))
[1] 93.07407 82.84449
Hence, about 93% of the total (unaccounted for) variance is due to heterogeneity in the true effects for outcome AL and about 83% due to heterogeneity in the true effects for outcome PD.
The approach above computes the 'typical' sampling variance based on all studies for both $I^2$ values. However, we may want to compute two separate values of the 'typical' sampling variance, one for each outcome. Doing so leads to these two $I^2$ values:
c(100 * res$tau2[1] / (res$tau2[1] + (sum(dat$outcome == "AL")-1)/sum(diag(P)[dat$outcome == "AL"])),
  100 * res$tau2[2] / (res$tau2[2] + (sum(dat$outcome == "PD")-1)/sum(diag(P)[dat$outcome == "PD"])))
[1] 94.8571 75.1876
Not much of a difference, but if sampling variances had been very dissimilar for the two outcomes, then this could make more of a difference.
For multivariate models, Jackson et al. (2012) describe a different approach for computing $I^2$-type statistics that is based on the variance-covariance matrix of the fixed effects under the model with random effects and the model without. So, we fit these two models:
res.R <- rma.mv(yi, V, mods = ~ outcome - 1, random = ~ outcome | trial, struct="UN", data=dat)
res.F <- rma.mv(yi, V, mods = ~ outcome - 1, data=dat)
Then $I^2$-type statistics for the two outcomes can be computed with:
c(100 * (vcov(res.R)[1,1] - vcov(res.F)[1,1]) / vcov(res.R)[1,1],
  100 * (vcov(res.R)[2,2] - vcov(res.F)[2,2]) / vcov(res.R)[2,2])
[1] 95.49916 76.42214
These values are very similar to the ones obtained above when computing separate values for the 'typical' sampling variance for the two outcomes. For more details on this approach, see Jackson et al. (2012).
Borenstein, M., Hedges, L. V., Higgins, J. P. T., & Rothstein, H. R. (2009).
Introduction to meta-analysis. Chichester, UK: Wiley.
Higgins, J. P. T., & Thompson, S. G. (2002). Quantifying heterogeneity in a meta-analysis.
Statistics in Medicine, 21(11), 1539–1558.
Jackson, D., White, I. R., & Riley, R. D. (2012). Quantifying the impact of between-study heterogeneity in multivariate meta-analyses.
Statistics in Medicine, 31(29), 3805–3820.
Konstantopoulos, S. (2011). Fixed effects and variance components estimation in three-level meta-analysis.
Research Synthesis Methods, 2(1), 61–76.
Nakagawa, S., & Santos, E. S. A. (2012). Methodological issues and advances in biological meta-analysis.
Evolutionary Ecology, 26(5), 1253–1274.
Takkouche, B., Cadarso-Suárez, C., & Spiegelman, D. (1999). Evaluation of old and new tests of heterogeneity in epidemiologic meta-analysis.
American Journal of Epidemiology, 150(2), 206–215.
Takkouche, B., Khudyakov, P., Costa-Bouzas, J., & Spiegelman, D. (2013). Confidence intervals for heterogeneity measures in meta-analysis.
American Journal of Epidemiology, 178(6), 993–1004. |
1) Given \(\vecs r(t)=(3t^2−2)\,\hat{\mathbf{i}}+(2t−\sin t)\,\hat{\mathbf{j}}\),
a. find the velocity of a particle moving along this curve.
b. find the acceleration of a particle moving along this curve.
Answer: a. \(\vecs v(t)=6t\,\hat{\mathbf{i}}+(2−\cos t)\,\hat{\mathbf{j}}\) b. \(\vecs a(t)=6\,\hat{\mathbf{i}}+\sin t\,\hat{\mathbf{j}}\) In questions 2 - 5, given the position function, find the velocity, acceleration, and speed in terms of the parameter \(t\).
2) \(\vecs r(t)=e^{−t}\,\hat{\mathbf{i}}+t^2\,\hat{\mathbf{j}}+\tan t\,\hat{\mathbf{k}}\)
3) \(\vecs r(t)=⟨3\cos t,\,3\sin t,\,t^2⟩\)
Answer: \(\vecs v(t)=-3\sin t\,\hat{\mathbf{i}}+3\cos t\,\hat{\mathbf{j}}+2t\,\hat{\mathbf{k}}\) \(\vecs a(t)=-3\cos t\,\hat{\mathbf{i}}-3\sin t\,\hat{\mathbf{j}}+2\,\hat{\mathbf{k}}\) \(\text{Speed}(t) = \|\vecs v(t)\| = \sqrt{9 + 4t^2}\)
4) \(\vecs r(t)=t^5\,\hat{\mathbf{i}}+(3t^2+2t- 5)\,\hat{\mathbf{j}}+(3t-1)\,\hat{\mathbf{k}}\)
5) \(\vecs r(t)=2\cos t\,\hat{\mathbf{j}}+3\sin t\,\hat{\mathbf{k}}\). The graph is shown here:
Answer: \(\vecs v(t)=-2\sin t\,\hat{\mathbf{j}}+3\cos t\,\hat{\mathbf{k}}\) \(\vecs a(t)=-2\cos t\,\hat{\mathbf{j}}-3\sin t\,\hat{\mathbf{k}}\) \(\text{Speed}(t) = \|\vecs v(t)\| = \sqrt{4\sin^2 t+9\cos^2 t}=\sqrt{4+5\cos^2 t}\) In questions 6 - 8, find the velocity, acceleration, and speed of a particle with the given position function.
6) \(\vecs r(t)=⟨t^2−1,t⟩\)
7) \(\vecs r(t)=⟨e^t,e^{−t}⟩\)
Answer: \(\vecs v(t)=⟨e^t,−e^{−t}⟩\), \(\vecs a(t)=⟨e^t, e^{−t}⟩,\) \( \|\vecs v(t)\| = \sqrt{e^{2t}+e^{−2t}}\)
8) \(\vecs r(t)=⟨\sin t,t,\cos t⟩\). The graph is shown here:
9) The position function of an object is given by \(\vecs r(t)=⟨t^2,5t,t^2−16t⟩\). At what time is the speed a minimum?
Answer: \(t = 4\)
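The answer to exercise 9 follows because \(\|\vecs v(t)\|^2=(2t)^2+5^2+(2t-16)^2=8t^2-64t+281\), whose derivative \(16t-64\) vanishes at \(t=4\). A quick numerical sanity check:

```python
def speed_squared(t):
    # v(t) = <2t, 5, 2t - 16> for r(t) = <t^2, 5t, t^2 - 16t>
    return (2 * t) ** 2 + 5 ** 2 + (2 * t - 16) ** 2

# coarse grid search over t in [0, 10] for the minimizer of the squared speed
t_min = min((i / 1000 for i in range(10_001)), key=speed_squared)
assert abs(t_min - 4) < 1e-3
```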
10) Let \(\vecs r(t)=r\cosh(ωt)\,\hat{\mathbf{i}}+r\sinh(ωt)\,\hat{\mathbf{j}}\). Find the velocity and acceleration vectors and show that the acceleration is proportional to \(\vecs r(t)\).
11) Consider the motion of a point on the circumference of a rolling circle. As the circle rolls, it generates the cycloid \(\vecs r(t)=(ωt−\sin(ωt))\,\hat{\mathbf{i}}+(1−\cos(ωt))\,\hat{\mathbf{j}}\), where \(\omega\) is the angular velocity of the circle (here the radius of the circle is taken to be 1):
Find the equations for the velocity, acceleration, and speed of the particle at any time.
Answer: \(\vecs v(t)=(ω−ω\cos(ωt))\,\hat{\mathbf{i}}+(ω\sin(ωt))\,\hat{\mathbf{j}}\) \(\vecs a(t)=(ω^2\sin(ωt))\,\hat{\mathbf{i}}+(ω^2\cos(ωt))\,\hat{\mathbf{j}}\) \(\begin{align*} \text{speed}(t) &= \sqrt{(ω−ω\cos(ωt))^2 + (ω\sin(ωt))^2} \\ &= \sqrt{ω^2 - 2ω^2 \cos(ωt) + ω^2\cos^2(ωt) + ω^2\sin^2(ωt)} \\ &= \sqrt{2ω^2(1 - \cos(ωt))} \end{align*} \)
12) A person on a hang glider is spiraling upward as a result of the rapidly rising air on a path having position vector \(\vecs r(t)=(3\cos t)\,\hat{\mathbf{i}}+(3\sin t)\,\hat{\mathbf{j}}+t^2\,\hat{\mathbf{k}}\). The path is similar to that of a helix, although it is not a helix. The graph is shown here:
Find the following quantities:
a. The velocity and acceleration vectors
b. The glider’s speed at any time
Answer: \(∥\vecs v(t)∥=\sqrt{9+4t^2}\)
c. The times, if any, at which the glider’s acceleration is orthogonal to its velocity
13) Given that \(\vecs r(t)=⟨e^{−5t}\sin t,e^{−5t}\cos t,4e^{−5t}⟩\) is the position vector of a moving particle, find the following quantities:
a. The velocity of the particle
Answer: \(\vecs v(t)=⟨e^{−5t}(\cos t−5\sin t),−e^{−5t}(\sin t+5\cos t),−20e^{−5t}⟩\)
b. The speed of the particle
c. The acceleration of the particle
Answer: \(\vecs a(t)=⟨e^{−5t}(−\sin t−5\cos t)−5e^{−5t}(\cos t−5\sin t), −e^{−5t}(\cos t−5\sin t)+5e^{−5t}(\sin t+5\cos t),100e^{−5t}⟩\)
14) Find the maximum speed of a point on the circumference of an automobile tire of radius 1 ft when the automobile is traveling at 55 mph.
15) Find the position vector-valued function \(\vecs r(t)\), given that \(\vecs a(t)=\hat{\mathbf{i}}+e^t \,\hat{\mathbf{j}}, \quad \vecs v(0)=2\,\hat{\mathbf{j}}\), and \(\vecs r(0)=2\,\hat{\mathbf{i}}\).
16) Find \(\vecs r(t)\) given that \(\vecs a(t)=−32\,\hat{\mathbf{j}}, \vecs v(0)=600\sqrt{3} \,\hat{\mathbf{i}}+600\,\hat{\mathbf{j}}\), and \(\vecs r(0)=\vecs 0\).
17) The acceleration of an object is given by \(\vecs a(t)=t\,\hat{\mathbf{j}}+t\,\hat{\mathbf{k}}\). The velocity at \(t=1\) sec is \(\vecs v(1)=5\,\hat{\mathbf{j}}\) and the position of the object at \(t=1\) sec is \(\vecs r(1)=0\,\hat{\mathbf{i}}+0\,\hat{\mathbf{j}}+0\,\hat{\mathbf{k}}\). Find the object’s position at any time.
Answer: \(\vecs r(t)=0\,\hat{\mathbf{i}}+(\frac{1}{6}t^3+4.5t−\frac{14}{3})\,\hat{\mathbf{j}}+(\frac{t^3}{6}−\frac{1}{2}t+\frac{1}{3})\,\hat{\mathbf{k}}\) Projectile Motion
18) A projectile is shot in the air from ground level with an initial velocity of 500 m/sec at an angle of 60° with the horizontal. The graph is shown here:
a. At what time does the projectile reach maximum height?
Answer: \(44.185\) sec
b. What is the approximate maximum height of the projectile?
c. At what time is the maximum range of the projectile attained?
Answer: \(t=88.37\) sec
d. What is the maximum range?
e. What is the total flight time of the projectile?
Answer: \(t=88.37\) sec
19) A projectile is fired at a height of 1.5 m above the ground with an initial velocity of 100 m/sec and at an angle of 30° above the horizontal. Use this information to answer the following questions:
a. Determine the maximum height of the projectile.
b. Determine the range of the projectile.
Answer: The range is approximately 886.29 m.
20) A golf ball is hit in a horizontal direction off the top edge of a building that is 100 ft tall. How fast must the ball be launched to land 450 ft away?
21) A projectile is fired from ground level at an angle of 8° with the horizontal. The projectile is to have a range of 50 m. Find the minimum velocity (speed) necessary to achieve this range.
Answer: \(v=42.16\) m/sec
e. Prove that an object moving in a straight line at a constant speed has an acceleration of zero. |
The missing step is to show that $v_1,\ldots,v_n$ are linearly independent over $k(x)$. So suppose that there exist rational functions $r_1(x),\ldots,r_n(x)$, not all zero, such that
$r_1(x) v_1 + \ldots + r_n(x) v_n = 0$.
We may write $r_i(x) = n_i(x)/d(x)$, i.e., let $d(x)$ be a common denominator of the rational functions. Multiplying through by $d(x)$, we get a polynomial dependence relation
$n_1(x) v_1 + \ldots + n_n(x) v_n = 0$.
Now let $N(x)$ be the greatest common divisor of $n_1(x),\ldots,n_n(x)$. (Such things exist since the polynomial ring $k[x]$ has a division algorithm, hence is a Unique Factorization Domain.) Dividing through by $N(x)$, we get, say,
$N_1(x) v_1 + \ldots + N_n(x) v_n = 0$,
with $\operatorname{gcd}(N_1,\ldots,N_n) = 1$. In particular, not all of the $N_i$'s are divisible by $x$, so plugging in $x = 0$ gives a nontrivial linear dependence relation
$N_1(0) v_1 + \ldots + N_n(0) v_n = 0$
over $k$, a contradiction.
In more sophisticated language, we are showing that the extensions $K/k$ and $k(x)/k$ are
linearly disjoint. This can be rephrased in terms of tensor products, for instance...but in the end the above proof is the simplest I can think of.
Added: Robin Chapman and Steve D are right: I misread the question and thought that the OP had already worked out that a basis $v_1,\ldots,v_n$ of $K/k$ also spans $K(x)$ over $k(x)$, but in fact this is the hardest part of the argument. As the OP says, it is easy to see that the set of all $k(x)$-linear combinations of the $v_1,\ldots,v_n$ contains all elements of $K[x]$. As Robin says, it is enough to show that this span also contains all elements $\frac{1}{Q(x)}$ with $Q(x) \in K[x] \setminus \{0\}$, and a nice way to see this is to show that every $Q \in K[x]$ divides some nonzero polynomial $q \in k[x]$, for if $Q(x) g(x) = q(x)$, then $\frac{1}{Q} = \frac{1}{q} \cdot g(x)$.
Robin gives a nice argument for this: essentially he extends the norm map from a finite dimensional field extension to polynomials. I might as well complete my answer, and I might as well do it in a different way, so here goes:
It is enough to assume that $Q$ is irreducible in the UFD $K[x]$, i.e., that $\mathcal{P} = (Q)$ is a prime ideal. Put $\mathfrak{p} = \mathcal{P} \cap k[x]$. In great generality, the restriction of a prime ideal to a subring is again a prime ideal (this is even true for the preimage of a prime ideal under an arbitrary ring homomorphism, and is very easy to show). What we want to show is that $\mathfrak{p} \neq 0$. But $K[x]$ is a free $k[x]$-module of dimension $n = [K:k]$. In particular, the extension $K[x] / k[x]$ is finitely generated as a module, hence an
integral extension, and in such an extension any maximal ideal pulls back to a maximal ideal. So $\mathfrak{p}$ is maximal, hence nonzero.
These facts follow immediately from the definition of integral extensions: see e.g. Proposition 160 and Corollary 168 of
http://math.uga.edu/~pete/integral.pdf.
Note that a possible virtue of this argument is that one does not need to treat the separable and inseparable cases differently. |
For a Banach space $B$, the dual of $L^1(0,T;B)$ is naturally identified with $L^\infty(0,T;B^*)$
if and only if $B^*$ has the Radon–Nikodym property. The space $L^\infty(\Omega)$ does not have the RNP.
In general, the dual of $L^1(0,T;B)$ is the larger space $\Lambda^\infty(0,T;B^*)$ which consists of
weak*-measurable functions $f$ with $\|f\|_{B^*}\in L^\infty$. The weak* measurability means that for every $v\in B$, the scalar function $\langle v,f\rangle$ is measurable.
So, you can get a weak* convergent net to an element of $\Lambda^\infty(0,T;B^*)$. And you can extract a weak* convergent sequence if $L^1(0,T;B)$ is separable, which it is in your case.
As a freely available source, I recommend the lecture notes Martingales in Banach Spaces by Pisier. |
As written, it's not clear to me that it's actually fully coupled. That is, the solution to $B$ depends (via its BC2) on the solution to $a$, but assuming $\omega$ is simply some specified function of position and time, it looks like $a$ doesn't depend on the solution to $B$. If that's the case, then simply solve for $a$ on a 1D mesh first, then you have the (constant in time) BC for $B$ and you can solve it as though there were no coupling. This can be done using a number of techniques for applying BC's within the FD/FV method, but it didn't appear that this was what you were asking. If $\omega$ is a function of $B$, then the system would be fully coupled. I'll assume that they're fully coupled.
If you're confident that the diffusion in PDE2 is not going to ever have a component in the $x$ direction, and you think FD/FV is good for your problem (I see no reason why they wouldn't be), I would consider casting this as two separate, coupled 1D grids to avoid needlessly calculating zero-fluxes in the $x$ direction in the diffusing region. The first grid would be 1D in the $x$ direction, over which PDE1 can be defined. Then, you could repeat a 1D $y$ direction grid at each discretization point in $x$, and define PDE2 over each one of those. Then it's more like an equivalent 1+1D system. To solve that in a coupled system, as you suggested, you can use FD/FV to discretize in space, then step forward using the method of lines. Because $a$ doesn't appear with a time-derivative anywhere, the discretized problem will be a system of (index-1) differential algebraic equations (DAE's).
Once you've done the FD/FV, you would end up with an equation like this$$0 = f_1(\omega, \{a_i\})$$at each interior mesh point in the $x$ mesh, and$$\frac{\partial B}{\partial t} = f_2(\{B_i\})$$at each interior mesh point on the $y$ meshes where $f_1$ and $f_2$ are functions which depend on your discretization choice.
There are a number of ways to solve index-1 DAE's including (1) simple implicit Euler time stepping, (2) the idas module of the SUNDIALS suite, which has been wrapped for some other languages (Python, Julia...), or perhaps (3) Matlab's
ode15s, which also enables solution of index-1 DAE's by passing in a singular mass matrix.
Then, as you said, the coupling would come through the boundary conditions, which you can implement a number of ways. One common approach is to use "ghost points," or fictitious points added on each side of the mesh. So for example, for the right side of the $x$ mesh, say it has index $N_x$, you could add an additional equation for the fictitious point $N_{x}+1$,$$a_{N_{x}+1} = a_{N_x}$$and simply write the grid point equation for $a_{N_x}$ as you would for any interior point (typically as a function of $a_{N_{x}-1}, a_{N_x}, a_{N_{x}+1}$). The above is a case that's trivial enough that you could choose to substitute this algebraic equation into the discretization for point $N_x$ on the $x$ mesh, but when things get uglier, you can simply leave them as extra equations. For example, looking at a top point for the $B$ mesh(es) at $x$-index $x_i$, you could have the $y$-index point $N_{y}+1$ defined with the equation$$\left(B_{x_i,N_{y}+1} - B_{x_i,N_y}\right)/\Delta y = g(\omega,a_{x_i})$$where $\Delta y$ is the grid spacing in the $y$ direction. Then, again, if you can solve for $B_{x_i,N_y}$, you could substitute it into the $x_i, N_{y}$ equation, but if $\omega$ were some non-linear function of $B$, you could solve this algebraic equation along with the others in your DAE system. You can certainly change the discretization scheme to get higher order accuracy, but hopefully this demonstrates the idea. |
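A minimal sketch of the time-stepping idea above: implicit Euler applied to an index-1 DAE $M\,y' = F(y)$ with a singular diagonal mass matrix (zeros on the rows of algebraic variables). The two-variable system here is a made-up toy standing in for the discretized PDEs, not the OP's actual equations, and the function names are arbitrary.

```python
import numpy as np

def implicit_euler_dae(F, J, M, y0, dt, nsteps):
    """Implicit Euler for the index-1 DAE  M y' = F(y), with M a
    (possibly singular) diagonal mass matrix.  Each step solves
    M (y_new - y_old)/dt = F(y_new) by Newton iteration."""
    y = y0.copy()
    for _ in range(nsteps):
        y_old = y.copy()
        for _ in range(20):                    # Newton iterations
            G = M @ (y - y_old) / dt - F(y)    # residual
            dG = M / dt - J(y)                 # Jacobian of the residual
            dy = np.linalg.solve(dG, G)
            y -= dy
            if np.linalg.norm(dy) < 1e-12:
                break
    return y

# Toy coupled system standing in for the discretized PDEs:
#   B' = -B + a   (differential row),   0 = a - 2B   (algebraic row)
M = np.diag([1.0, 0.0])
F = lambda y: np.array([-y[0] + y[1], y[1] - 2.0 * y[0]])
J = lambda y: np.array([[-1.0, 1.0], [-2.0, 1.0]])
y = implicit_euler_dae(F, J, M, np.array([1.0, 2.0]), 1e-3, 5000)
# With a = 2B eliminated, the exact solution is B(t) = e^t.
```

The same structure carries over when $F$ comes from the FD/FV discretization: the ghost-point equations simply become extra algebraic rows (zero rows of $M$).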
Alright, I have this group $\langle x_i, i\in\mathbb{Z}\mid x_i^2=x_{i-1}x_{i+1}\rangle$ and I'm trying to determine whether $x_ix_j=x_jx_i$ or not. I'm unsure there is enough information to decide this, to be honest.
Nah, I have a pretty garbage question. Let me spell it out.
I have a fiber bundle $p : E \to M$ where $\dim M = m$ and $\dim E = m+k$. Usually a normal person defines $J^r E$ as follows: for any point $x \in M$ look at local sections of $p$ over $x$.
For two local sections $s_1, s_2$ defined on some nbhd of $x$ with $s_1(x) = s_2(x) = y$, say $J^r_p s_1 = J^r_p s_2$ if with respect to some choice of coordinates $(x_1, \cdots, x_m)$ near $x$ and $(x_1, \cdots, x_{m+k})$ near $y$ such that $p$ is projection to first $m$ variables in these coordinates, $D^I s_1(0) = D^I s_2(0)$ for all $|I| \leq r$.
This is a coordinate-independent (chain rule) equivalence relation on local sections of $p$ defined near $x$. So let the set of equivalence classes be $J^r_x E$ which inherits a natural topology after identifying it with $J^r_0(\Bbb R^m, \Bbb R^k)$ which is space of $r$-order Taylor expansions at $0$ of functions $\Bbb R^m \to \Bbb R^k$ preserving origin.
Then declare $J^r p : J^r E \to M$ is the bundle whose fiber over $x$ is $J^r_x E$, and you can set up the transition functions etc no problem so all topology is set. This becomes an affine bundle.
Define the $r$-jet sheaf $\mathscr{J}^r_E$ to be the sheaf which assigns to every open set $U \subset M$ an $(r+1)$-tuple $(s = s_0, s_1, s_2, \cdots, s_r)$ where $s$ is a section of $p : E \to M$ over $U$, $s_1$ is a section of $dp : TE \to TU$ over $U$, $\cdots$, $s_r$ is a section of $d^r p : T^r E \to T^r U$ where $T^k X$ is the iterated $k$-fold tangent bundle of $X$, and the tuple satisfies the following commutation relation for all $0 \leq k < r$
$$\require{AMScd}\begin{CD} T^{k+1} E @>>> T^k E\\ @AAA @AAA \\ T^{k+1} U @>>> T^k U \end{CD}$$
@user193319 It converges uniformly on $[0,r]$ for any $r\in(0,1)$, but not on $[0,1)$, cause deleting a measure zero set won't prevent you from getting arbitrarily close to $1$ (for a non-degenerate interval has positive measure).
The top and bottom maps are tangent bundle projections, and the left and right maps are $s_{k+1}$ and $s_k$.
@RyanUnger Well I am going to dispense with the bundle altogether and work with the sheaf, is the idea.
The presheaf is $U \mapsto \mathscr{J}^r_E(U)$ where $\mathscr{J}^r_E(U) \subset \prod_{k = 0}^r \Gamma_{T^k E}(T^k U)$ consists of all the $(r+1)$-tuples of the sort I described
It's easy to check that this is a sheaf, because basically sections of a bundle form a sheaf, and when you glue two of those $(r+1)$-tuples of the sort I describe, you still get an $(r+1)$-tuple that preserves the commutation relation
The stalk of $\mathscr{J}^r_E$ over a point $x \in M$ is clearly the same as $J^r_x E$, consisting of all possible $r$-order Taylor series expansions of sections of $E$ defined near $x$ possible.
Let $M \subset \mathbb{R}^d$ be a compact smooth $k$-dimensional manifold embedded in $\mathbb{R}^d$. Let $\mathcal{N}(\varepsilon)$ denote the minimal cardinal of an $\varepsilon$-cover $P$ of $M$; that is for every point $x \in M$ there exists a $p \in P$ such that $\| x - p\|_{2}<\varepsilon$....
The same result should be true for abstract Riemannian manifolds. Do you know how to prove it in that case?
I think there you really do need some kind of PDEs to construct good charts.
I might be way overcomplicating this.
If we define $\tilde{\mathcal H}^k_\delta$ to be the $\delta$-Hausdorff "measure" but instead of $diam(U_i)\le\delta$ we set $diam(U_i)=\delta$, does this converge to the usual Hausdorff measure as $\delta\searrow 0$?
I think so by the squeeze theorem or something.
this is a larger "measure" than $\mathcal H^k_\delta$ and that increases to $\mathcal H^k$
but then we can replace all of those $U_i$'s with balls, incurring some fixed error
In fractal geometry, the Minkowski–Bouligand dimension, also known as Minkowski dimension or box-counting dimension, is a way of determining the fractal dimension of a set S in a Euclidean space Rn, or more generally in a metric space (X, d). It is named after the German mathematician Hermann Minkowski and the French mathematician Georges Bouligand.To calculate this dimension for a fractal S, imagine this fractal lying on an evenly spaced grid, and count how many boxes are required to cover the set. The box-counting dimension is calculated by seeing how this number changes as we make the grid...
@BalarkaSen what is this
ok but this does confirm that what I'm trying to do is wrong haha
In mathematics, Hausdorff dimension (a.k.a. fractal dimension) is a measure of roughness and/or chaos that was first introduced in 1918 by mathematician Felix Hausdorff. Applying the mathematical formula, the Hausdorff dimension of a single point is zero, of a line segment is 1, of a square is 2, and of a cube is 3. That is, for sets of points that define a smooth shape or a shape that has a small number of corners—the shapes of traditional geometry and science—the Hausdorff dimension is an integer agreeing with the usual sense of dimension, also known as the topological dimension. However, formulas...
Let $a,b \in \Bbb{R}$ be fixed, and let $n \in \Bbb{Z}$. If $[\cdot]$ denotes the greatest integer function, is it possible to bound $|[abn] - [a[bn]]|$ by a constant that is independent of $n$? Are there any nice inequalities with the greatest integer function?
I am trying to show that $n \mapsto [abn]$ and $n \mapsto [a[bn]]$ are equivalent quasi-isometries of $\Bbb{Z}$...that's the motivation. |
I am attempting to solve an equation of the type:
$ \left( -\tfrac{\partial^2}{\partial x^2} - f\left(x\right) \right) \psi(x) = \lambda \psi(x) $
Where $f(x)$ has a simple pole at $0$, for the smallest $N$ eigenvalues and eigenvectors. The boundary conditions are: $\psi(0) = 0$ and $\psi(R)=0$, and I'm only looking at the function over $(0,R]$.
However, if I do a very simple, evenly spaced finite difference method, the smallest eigenvalue is very inaccurate (sometimes there is a "false" eigenvalue that is several orders of magnitude more negative than the one I know should be there; the real "first eigenvalue" becomes the second, but is still poor).
What affects the accuracy of such a finite difference scheme? I assume that the singularity is what is causing the problem, and that an unevenly spaced grid would improve things significantly. Are there any papers that can point me towards a good non-uniform finite difference method? Or perhaps a higher-order difference scheme would improve it more? How do you decide (or is it just "try both and see")?
note: my finite difference scheme is symmetric tridiagonal where the 3 diagonals are:
$\left( -\frac{1}{2 \Delta^2}, \; \frac{1}{\Delta^2} - f(x), \; -\frac{1}{2 \Delta^2} \right)$
Where $\Delta$ is the grid spacing. And I am solving the matrix using a direct symmetric solver (I am assuming that the accuracy is not affected drastically by the solver, am I wrong?) |
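For reference, here is a minimal uniform-grid version of such a scheme, using the standard (unscaled) 3-point Laplacian and an assumed test potential $f(x) = 2/x$ with a simple pole at $0$ — chosen because the ground state of $-\psi'' - (2/x)\psi = \lambda\psi$ on $(0,\infty)$ is known to be $\lambda = -1$, which makes it easy to check a scheme's accuracy before trying a non-uniform grid. For stronger singularities the spurious very-negative eigenvalues described in the question can indeed appear.

```python
import numpy as np

# Uniform grid on (0, R] with Dirichlet BCs psi(0) = psi(R) = 0.
R, N = 30.0, 800
dx = R / (N + 1)
x = dx * np.arange(1, N + 1)     # interior points only
f = 2.0 / x                      # assumed singular potential

# Standard 3-point stencil: (-1, 2, -1)/dx^2 plus the potential on the diagonal
H = (np.diag(2.0 / dx**2 - f)
     + np.diag(-np.ones(N - 1) / dx**2, 1)
     + np.diag(-np.ones(N - 1) / dx**2, -1))

evals = np.linalg.eigvalsh(H)    # symmetric tridiagonal -> real spectrum
ground = evals[0]                # should be near -1 for this f
```

Refining `N` (or grading the mesh toward $x=0$) and watching how `ground` converges is the usual way to decide between a finer uniform grid, a non-uniform grid, and a higher-order stencil.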
In the discussion of the RCK model on these two posts I realized the Euler equation could be written as a maximum entropy condition. It's actually a fairly trivial application of the entropy maximizing version of the asset pricing equation:
$$
p_{i} = \frac{\alpha_{i}}{\alpha_{j}} \frac{\partial U/\partial c_{j}}{\partial U/\partial c_{i}} p_{j}
$$
To get to the typical macroeconomic Euler equation, define $\alpha_{i}/\alpha_{j} \equiv \beta$ and re-arrange:
$$
\frac{\partial U}{\partial c_{i}} = \beta \; \frac{p_{j}}{p_{i}} \; \frac{\partial U}{\partial c_{j}}
$$
The price at time $t_{j}$ divided by the price at time $t_{i}$ is just (one plus) the interest rate $R$ (for the time $t_{j} - t_{i}$), so:
$$
\frac{\partial U}{\partial c_{i}} = \beta (1 + R) \; \frac{\partial U}{\partial c_{j}}
$$
And we're done.
The intuition behind the traditional economic Euler equation is (borrowed from these lecture notes [pdf])
The Euler equation essentially says that [an agent] must be indifferent between consuming one more unit today on the one hand and saving that unit and consuming in the future on the other [if utility is maximized].
The intuition for the maximum entropy version is different. It does involve the assumption of a large number of consumption periods (otherwise the intertemporal budget constraint wouldn't be saturated), but that isn't terribly important. The entropy maximum is actually given by (Eq. 4 at the link, re-arranged and using $p_{j}/p_{i} = 1 + R$):
$$
c_{j} = c_{i} (1 + R)
$$
The form of the utility function $U$ allows us to transform it into the equation above, but this is the more fundamental version from the information equilibrium standpoint. This equation says that since you could be anywhere along the blue line between $c_{j}$ maximized and $c_{i}$ maximized on this graph:
the typical location for an economic agent is in the middle of that blue line [1]. Agents themselves might not be indifferent to their location on the blue line (or even the interior of the triangle), but a maximum entropy ensemble of agents is. Another way to put it is that the maximum entropy ensemble doesn't break the underlying symmetry of the system -- the interest rate does. If the interest rate was zero, all consumption periods would be the same and consumption would be equal. A finite interest rate transforms both the coordinate system and the location of maximum entropy point. You'd imagine deforming the n-dimensional simplex so that each axis was scaled by $(1 + r)$ where $r$ is the interest rate between $t_{i}$ and $t_{i + 1}$.
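As a sanity check (not part of the original derivation), the step from the marginal-utility Euler equation to $c_j = c_i(1+R)$ can be verified symbolically for the special case of log utility, where $\beta = 1$ recovers the maximum entropy condition exactly:

```python
import sympy as sp

ci, cj, beta, R = sp.symbols('c_i c_j beta R', positive=True)

# Euler equation U'(c_i) = beta (1 + R) U'(c_j) with U = log
euler = sp.Eq(sp.diff(sp.log(ci), ci),
              beta * (1 + R) * sp.diff(sp.log(cj), cj))
sol = sp.solve(euler, cj)[0]     # c_j = beta (1 + R) c_i
```

Setting `beta = 1` in `sol` gives $c_j = c_i(1+R)$, the entropy-maximum relation above.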
Footnotes:
[1] The graph shown is actually for a large finite dimensional system (a large, but finite number of consumption periods); the true entropy maximum would fall just inside the blue line/intertemporal budget constraint. |
Please assume that this graph is a highly magnified section of the derivative of some function, say $F(x)$. Let's denote the derivative by $f(x)$.Let's denote the width of a sample by $h$ where $$h\rightarrow0$$Now, for finding the area under the curve between the bounds $a ~\& ~b $ we can a...
@Ultradark You can try doing a finite difference to get rid of the sum and then compare term by term. Otherwise I am terrible at anything to do with primes that I don't know the identities of $\pi (n)$ well
@Silent No, take for example the prime 3. 2 is not a residue mod 3, so there is no $x\in\mathbb{Z}$ such that $x^2-2\equiv 0$ mod $3$.
However, you have two cases to consider. The first where $\left(\frac{2}{p}\right)=-1$ and $\left(\frac{3}{p}\right)=-1$ (in which case what does $\left(\frac{6}{p}\right)$ equal?) and the case where one or the other of $\left(\frac{2}{p}\right)$ and $\left(\frac{3}{p}\right)$ equals 1.
Also, probably something useful for congruences, if you didn't already know: if $a_1\equiv b_1 \pmod p$ and $a_2\equiv b_2 \pmod p$, then $a_1a_2\equiv b_1b_2 \pmod p$
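A quick way to experiment with these residue symbols (a sketch — `legendre` is a hypothetical helper computing the Legendre symbol via Euler's criterion, valid for odd primes $p$):

```python
def legendre(a, p):
    """Legendre symbol (a|p) for an odd prime p, via Euler's criterion:
    a^((p-1)/2) mod p is 1 for residues, p-1 for non-residues, 0 if p | a."""
    r = pow(a, (p - 1) // 2, p)
    return -1 if r == p - 1 else r
```

Multiplicativity, `legendre(6, p) == legendre(2, p) * legendre(3, p)`, is exactly the case split discussed above.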
Is there any book or article that explains the motivations of the definitions of group, ring , field, ideal etc. of abstract algebra and/or gives a geometric or visual representation to Galois theory ?
Jacques Charles François Sturm ForMemRS (29 September 1803 – 15 December 1855) was a French mathematician.== Life and work ==Sturm was born in Geneva (then part of France) in 1803. The family of his father, Jean-Henri Sturm, had emigrated from Strasbourg around 1760 - about 50 years before Charles-François's birth. His mother's name was Jeanne-Louise-Henriette Gremay. In 1818, he started to follow the lectures of the academy of Geneva. In 1819, the death of his father forced Sturm to give lessons to children of the rich in order to support his own family. In 1823, he became tutor to the son...
I spent my career working with tensors. You have to be careful about defining multilinearity, domain, range, etc. Typically, tensors of type $(k,\ell)$ involve a fixed vector space, not so many letters varying.
UGA definitely grants a number of masters to people wanting only that (and sometimes admitted only for that). You people at fancy places think that every university is like Chicago, MIT, and Princeton.
hi there, I need to linearize a nonlinear system about a fixed point. I've computed the Jacobian matrix but one of the elements of this matrix is undefined at the fixed point. What is a better approach to solve this issue? The element is (24*x_2 + 5cos(x_1)*x_2)/abs(x_2). The fixed point is x_1=0, x_2=0
Consider the following integral: $\int \frac{1}{4}\cdot\frac{1}{1+(u/2)^2}\,dx$ Why does it matter if we put the constant 1/4 behind the integral versus keeping it inside? The solution is $\frac12\arctan{(u/2)}$. Or am I overlooking something?
*it should be du instead of dx in the integral
**and the solution is missing a constant C of course
Is there a standard way to divide radicals by polynomials? Stuff like $\frac{\sqrt a}{1 + b^2}$?
My expression happens to be in a form I can normalize to that, just the radicand happens to be a lot more complicated. In my case, I'm trying to figure out how to best simplify $\frac{x}{\sqrt{1 + x^2}}$, and so far, I've gotten to $\frac{x \sqrt{1+x^2}}{1+x^2}$, and it's pretty obvious you can move the $x$ inside the radical.
My hope is that I can somehow remove the polynomial from the bottom entirely, so I can then multiply the whole thing by a square root of another algebraic fraction.
Complicated, I know, but this is me trying to see if I can skip calculating Euclidean distance twice going from atan2 to something in terms of asin for a thing I'm working on.
"... and it's pretty obvious you can move the $x$ inside the radical" To clarify this in advance, I didn't mean literally move it verbatim, but via $x \sqrt{y} = \text{sgn}(x) \sqrt{x^2 y}$. (Hopefully, this was obvious, but I don't want to confuse people on what I meant.)
Ignore my question. I'm coming of the realization it's just not working how I would've hoped, so I'll just go with what I had before. |
In papers in physics and mathematics one often encounters longer mathematical expressions which have to be, for intuition and typesetting, expressed using symbols standing for recurring patterns in the expressions.
Consider for instance a set of equations where "$r^2 + a^2 \cos^2\! \vartheta$" and "$r^2 - 2M r + a^2$" appear at multiple points so we decide to define the symbols $\Sigma \equiv r^2 + a^2 \cos^2 \! \vartheta$ and $\Delta \equiv r^2 - 2M r + a^2$ as placeholders which shorten my expressions. (This example comes from the metric of a spinning black hole.)
Now, when I make computations, I obtain expressions of the sort $$(\Delta + 2Mr - a^2 \sin^2\! \vartheta)(\Sigma -2Mr + a^2 \sin^2\! \vartheta)$$ In Mathematica I write
Sig = r^2 + a^2 Cos[th]^2; Delt = r^2 - 2 M r + a^2; expression = (Delt + 2 M r - a^2 Sin[th]^2) (Sig - 2 M r + a^2 Sin[th]^2)
and obtain
(r^2 + a^2 Cos[th]^2)(r^2 - 2 M r + a^2) which is obviously just $\Delta \Sigma$. I would like Mathematica to return $\Delta \Sigma$ automatically, i.e. maximize the amount of the expression which can be "unsubstituted" by the original set of symbols.
A simple replacement rule of the sort
expression/.{r^2 - 2 M r + a^2->Delt, r^2 + a^2 Cos[th]^2->Sig} is not what I am looking for because it does not crack things such as
(r(r-2M)+a^2) or
(r^2 + a^2 - a^2 Sin[th]^2). One could build a set of replacement rules which somehow list these variations but I do not think it would be able to take care of e.g.
(r^2 - 2 M r + a^2)(r^2 + a^2) - a^2 Sin[th]^2 (r^2 + 2 M r + a^2)
(An example of real output from FullSimplify) to reduce to $\Delta \Sigma - 4 M r a^2 \sin^2 \! \vartheta$.
I think this should be somehow possible through the modification of the
ComplexityFunction and
TransformationFunctions for FullSimplify but it is not clear to me how. |
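One pragmatic direction (a sketch only — the weight `20` and the helper names `unsub`, `resub`, `cf` are arbitrary choices, not canonical): keep `Delt` and `Sig` as unassigned symbols (use rules rather than `Set`, otherwise they auto-expand), give `FullSimplify` a `ComplexityFunction` that rewards occurrences of the placeholder symbols, and supply `TransformationFunctions` that substitute the definitions in both directions so the simplifier can explore mixed forms:

```mathematica
(* keep Delt and Sig unassigned; substitute via rules in both directions *)
unsub[e_] := e /. {r^2 - 2 M r + a^2 -> Delt, r^2 + a^2 Cos[th]^2 -> Sig};
resub[e_] := e /. {Delt -> r^2 - 2 M r + a^2, Sig -> r^2 + a^2 Cos[th]^2};

(* reward every surviving Delt or Sig so FullSimplify prefers them *)
cf[e_] := LeafCount[e] - 20 Count[e, Delt | Sig, {0, Infinity}];

FullSimplify[expression,
  TransformationFunctions -> {Automatic, unsub, resub},
  ComplexityFunction -> cf]
```

This is not guaranteed to crack every rearranged form (that is essentially a hard matching problem), but letting the simplifier move back and forth between the substituted and expanded forms handles more cases than a one-shot replacement rule.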
Originally Posted by
kiwiheretic
This video:
Here are my concerns.
On the Youtube clock from:
9:25 Can he really simply draw a sphere around certain galaxies and say only those galaxies matter in the calculations? Why does he assume the other galaxies outside the sphere cancel out?
It's a consequence of how the inverse square law of gravitational attraction works. If the universe is homogeneous and isotropic, then the attraction from any one galaxy outside the sphere is precisely cancelled by attraction(s) to other galaxies on the opposite side of the sphere, as he describes at 10:10 in the video. As an example: if you dig a hole in the Earth and lower yourself down into it, say, 100 miles, then the gravitational force you would feel from the Earth would equal G times the mass of the portion of the Earth closer to the center than you are, times your own mass, divided by your distance from the center squared. In other words the mass of the Earth that is above your head has no effect (again, assuming the Earth is a perfect sphere with a homogeneous and isotropic distribution of mass). The same phenomenon applies to electric fields as well as gravitational fields - if you are inside a metal spherical shell that has a charged surface, the electric field at every point inside the sphere is precisely zero (this is how a Faraday cage works).
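The shell-theorem claim above is easy to check numerically for the idealized uniform-density Earth (an assumption made purely for illustration — the real Earth's density varies with depth):

```python
# Shell theorem for a uniform sphere: only mass closer to the center counts.
G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
M = 5.972e24           # Earth mass, kg
R = 6.371e6            # Earth radius, m

def g_inside(r):
    """Gravitational acceleration at radius r <= R of a uniform sphere."""
    M_enc = M * (r / R) ** 3    # enclosed mass scales with volume
    return G * M_enc / r ** 2   # = G M r / R^3, i.e. linear in r

surface = g_inside(R)           # ~9.8 m/s^2 at the surface
```

Inside the uniform sphere the acceleration falls off linearly, so at half the radius you feel exactly half the surface gravity — the shell above you contributes nothing.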
Originally Posted by
kiwiheretic
19:35 - Can the K term of $\displaystyle \frac{\dot{a}^2(t)}{a^2(t)} = \frac{8 \pi G}{3} \rho(t) - \frac{K}{a^2(t)}$ really cause $\displaystyle \frac{\dot{a}^2(t)}{a^2(t)}$ to become negative for certain values of K, given that it's a squared quantity and the density term must also be positive?
Good question. I think he misspoke - what he should have said (I think) is that if K > 0 you get a universe that eventually stops expanding and recollapses, if K < 0 you get an ever-expanding universe, and if K = 0 you get a flat universe. I see he uses this definition of open vs closed vs flat at 24:00 in the video. If you go back to the original equation at the beginning of his derivation and make the sign change for K like he does, then you are essentially starting with:
$\displaystyle \frac 1 2 m v^2 = \frac {GMm}D - K $
In other words KE = PE minus a value K, and K cannot exceed the value of PE, because if it did you would have negative kinetic energy, which makes no sense.
Originally Posted by
kiwiheretic
29:00 Can a photon really be modelled in an expanding cube arguing that the wavelength increases as it expands? Doesn't that assume the existence also of an expanding ether?
His argument for this as presented is not complete. He's using basic math to try to present concepts that really require much more complicated analysis in 4 dimensions (i.e. using General Relativity). So no - his argument as presented uses a lot of "hand waving" as opposed to mathematical rigor. That doesn't mean he's wrong - it's just that he's made a video using only high school math that really requires a much more rigorous and difficult treatment to be truly complete.
Originally Posted by
kiwiheretic
Otherwise why would an expanding universe cause the wavelength of a small wave packet of light to expand if we can't even detect this expansion from within our own solar system?
Not sure what you mean here. We can't detect expanding wavelength in real time, now that the universe is 14 billion years old or so. But we can detect the effect of expanding wavelengths on photons that have been traveling for billions of years - we see it in the red shift of light from distant galaxies, as well as in the 3-degree background radiation.
Originally Posted by
kiwiheretic
38:00 Now it claims that a matter dominated universe based on the equation
$\displaystyle a = c t^\frac{2}{3}$ is asymptotic. Really? I thought it was a value that was the square of a cube root which I never knew was asymptotic!!
Again his argument is not rigorous. What he should have said is that the velocity of expansion in a flat system approaches zero over an infinite amount of time. Now that seems to imply that there is some max size that the universe would reach as expansion velocity approaches zero. But it's not clear from his explanation that this value isn't infinity. Since all his math is based on classical mechanics (not GR), the analogy is what happens if you throw a stone upward from the Earth: throw it fast enough and it escapes Earth's gravity and recedes forever - that's analogous to an open universe. Throw it slower and the Earth's gravity eventually causes the stone to stop rising and then fall back to Earth - that's analogous to the closed universe. There's a middle value in which the stone slows but never quite reaches zero velocity - that's the flat universe analog. How high does such a stone rise? The answer using Newtonian mechanics is infinitely high, even though its velocity approaches zero after an infinite amount of time.
Originally Posted by
kiwiheretic
Is Dark Energy really the house of cards that a careful study of this video would cause us to believe? Is dark energy really just a flawed idea based upon faulty maths?
I stopped the video at 38:00 because that's as far as your questions went. The dark energy conjecture is based on observations of an expanding universe, to which math is applied to try and develop a model that "explains" why this is so. The conclusion I reach is that classical mechanics doesn't do a good job of explaining much of this, so if you're going to make a video explaining the expansion of the universe based on high-school-level math you're going to have to use some shortcuts. But that does not mean that this is all a "house of cards" or that the math behind it is faulty.
I was thinking about the relative speed of an observation reference frame and an object which has been accelerated to a speed close to the speed of light. I'm by no mean an expert and the last physics class I took was more than 20 years ago so my question could be silly... If we accelerate a particle, let's say an electron, to 99.99% of the speed of light and then we start moving in the opposite direction and reach about the 0.01% of the speed of light in the opposite direction, by the original point of observation, using a reasonable amount of energy, we should be dilating our time, so the time in the original frame of reference should pass faster then the time in our moving frame, that means that a relative velocity observed from the original frame should increase from ours. Doesn't that mean that we will observe the $e$ passing $c $?
There are at least two misunderstandings in your argument. The most fundamental one is that you tried to compare times in two different reference frames, but you can't do that in relativity. For example, if Bob zooms off at $\frac{1}{2}c$ to the right, and Alice stays where she is, then
both of them will see "time dilation" when they look at the other person. Suppose that they are both carrying clocks. Both of them will think that the other person's clock is running slower than their own.
This comes into your scenario because I think you are conflating time dilation observed from the particle moving at $0.01\% c$ and time dilation from the rest frame.
The second misunderstanding here is that observers will not only see time dilation, but also space dilation, so this also makes figuring out the relative velocities a little bit harder.
Doing this calculation properly leads to the velocity-addition formula. Suppose that you have $v_1=0.9999c$ and $v_2=0.0001c$. Then the velocities that these two particles will perceive each other to be moving at (if they could perceive things and take measurements!) would be $$v_\text{rel} = \frac{v_1+v_2}{1+\frac{v_1 v_2}{c^2}} = \frac{c}{1+0.9999 \cdot 0.0001} \approx 0.99990002 c $$ |
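The velocity-addition formula is a one-liner to check numerically (a sketch in natural units with $c = 1$; the function name is an arbitrary choice):

```python
def add_velocities(v1, v2, c=1.0):
    """Relativistic velocity addition: combined speed always stays below c."""
    return (v1 + v2) / (1.0 + v1 * v2 / c**2)

# The scenario from the question: 0.9999c and 0.0001c in opposite directions.
v_rel = add_velocities(0.9999, 0.0001)   # ~0.99990002, not c
```

No matter how the two sub-light speeds are chosen, the denominator keeps the result strictly below $c$.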
Anisotropic flow of inclusive and identified particles in Pb–Pb collisions at $\sqrt{{s}_{NN}}=$ 5.02 TeV with ALICE
(Elsevier, 2017-11)
Anisotropic flow measurements constrain the shear $(\eta/s)$ and bulk ($\zeta/s$) viscosity of the quark-gluon plasma created in heavy-ion collisions, as well as give insight into the initial state of such collisions and ... |
In this section, we shall study the concept of divisibility. Let \(a\) and \(b\) be two integers such that \(a \neq 0\). The following statements are equivalent:
\(a\) divides \(b\), \(a\) is a divisor of \(b\), \(a\) is a factor of \(b\), \(b\) is a multiple of \(a\), and \(b\) is divisible by \(a\).
They all mean
There exists an integer \(q\) such that \(b=aq\)
In terms of division, we say that \(a\) divides \(b\) if and only if the remainder is zero when \(b\) is divided by \(a\). We adopt the notation \[a \mid b \qquad \mbox{[pronounced as "\(a\) divides \(b\)'']}\] Do not use a forward slash \(/\) or a backward slash \(\backslash\) in the notation. To say that \(a\) does not divide \(b\), we add a slash across the vertical bar, as in
\[a \nmid b \qquad \mbox{[pronounced as "$a$ does not divide $b$'']}\] Do not confuse the notation \(a\mid b\) with \(\frac{a}{b}\). The notation \(\frac{a}{b}\) represents a fraction. It is also written as \(a/b\) with a (forward) slash. It uses floating-point (that is, real or decimal) division. For example, \(\frac{11}{4}=2.75\).
The definition of divisibility is very important. Many students fail to finish very simple proofs because they cannot recall the definition. So here we go again:
\(a\mid b\;\Leftrightarrow\;b=aq\) for some integer \(q\).
Both integers \(a\) and \(b\) can be positive or negative, and \(b\) could even be 0. The only restriction is \(a\neq0\). In addition, \(q\) must be an integer. For instance, \(3 = 2\cdot\frac{3}{2}\), but it is certainly absurd to say that 2 divides 3.
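The definition translates directly into integer arithmetic — \(a \mid b\) exactly when the remainder of \(b\) divided by \(a\) is zero. A small sketch (the function name is our own, not from the text):

```python
def divides(a, b):
    """a | b  <=>  b = a*q for some integer q (requires a != 0)."""
    if a == 0:
        raise ValueError("a must be nonzero")
    return b % a == 0   # integer remainder, not floating-point division
```

Note this uses the integer remainder, never the fraction \(b/a\): `3 / 2` is `1.5`, but `divides(2, 3)` is `False`, matching the remark that it is absurd to say 2 divides 3.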
Example \(\PageIndex{1}\label{eg:divides-01}\)
Since \(14=(-2)\cdot(-7)\), it is clear that \(-2\mid 14\).
hands-on exercise \(\PageIndex{1}\label{he:divides-01}\)
Verify that \[5 \mid 35, \quad 8\nmid 35, \quad 25\nmid 35, \quad 7 \mid 14, \quad 2 \mid -14, \quad\mbox{and}\quad 14\mid 14,\] by finding the quotient \(q\) and the remainder \(r\) such that \(b=aq+r\), and \(r=0\) if \(a\mid b\).
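This check can be carried out mechanically with Python's built-in `divmod`, which returns the quotient and remainder at once (a small sketch; the list of cases simply mirrors the exercise):

```python
# For each pair (a, b), compute q and r with b = a*q + r, and test r == 0.
cases = [(5, 35), (8, 35), (25, 35), (7, 14), (2, -14), (14, 14)]

for a, b in cases:
    q, r = divmod(b, a)   # quotient and remainder
    print(f"a={a:3d}, b={b:4d}: q={q:3d}, r={r:2d}, a|b is {r == 0}")
```

Note that Python's remainder takes the sign of the divisor, which differs from some other languages; for testing divisibility, though, only whether `r == 0` matters.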
Example \(\PageIndex{2}\label{eg:divides-02}\)
An integer is even if and only if it is divisible by 2, and it is odd if and only if it is not divisible by 2.
hands-on exercise \(\PageIndex{2}\label{he:divides-02}\)
What is the remainder when an odd integer is divided by 2? Complete the following sentences:
If \(n\) is even, then \(n=\bline{0.5in}\) for some integer .
If \(n\) is odd, then \(n=\bline{0.5in}\) for .
Memorize them well, as you will use them frequently in this course.
hands-on exercise \(\PageIndex{3}\label{he:divides-03}\)
Complete the following sentence:
If \(n\) is not divisible by 3, then \(n=\bline{0.5in}\,\), or \(n=\bline{0.5in}\,\), for some integer .
Compare this to the \(\bdiv\) and \(\bmod\) operations. What are the possible values of \(n\bmod3\)?
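One quick way to see the answer empirically (a sketch in Python, whose `%` operator returns a nonnegative remainder for a positive modulus):

```python
# Collect every value of n mod 3 over a range of integers, negative and positive.
residues = sorted({n % 3 for n in range(-30, 31)})
print(residues)  # [0, 1, 2]
```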
Example \(\PageIndex{3}\label{eg:divides-03}\)
Given any integer \(a\neq 0\), we always have \(a\mid 0\) because \(0 = a\cdot 0\). In particular, 0 is divisible by 2, hence, it is considered an even integer.
Example \(\PageIndex{4}\label{eg:divides-04}\)
Similarly, \(\pm1\) and \(\pm b\) divide \(b\) for any nonzero integer \(b\). They are called the trivial divisors of \(b\). A divisor of \(b\) that is not a trivial divisor is called a nontrivial divisor of \(b\).
For example, the integer 15 has eight divisors: \(\pm1, \pm3, \pm5, \pm15\). Its trivial divisors are \(\pm1\) and \(\pm15\), and the nontrivial divisors are \(\pm3\) and \(\pm5\).
Definition
A positive integer \(a\) is a proper divisor of \(b\) if \(a\mid b\) and \(a<|b|\). If \(a\) is a proper divisor of \(b\), we say that \(a\) divides \(b\) properly.
Remark
Some number theorists include negative numbers as proper divisors. In this convention, \(a\) is a proper divisor of \(b\) if \(a\mid b\), and \(|a|<|b|\). To add to the confusion, some number theorists exclude \(\pm1\) as proper divisors. Use caution when you encounter these terms.
Example \(\PageIndex{5}\label{eg:divides-05}\)
It is clear that 12 divides 132 properly, and 2 divides \(-14\) properly as well. The integer 11 has only one proper divisor, namely, 1.
hands-on exercise \(\PageIndex{4}\label{he:divides-04}\)
What are the proper divisors of 132?
Definition
An integer \(p>1\) is a prime if its only positive divisors are 1 and \(p\) itself. Any integer greater than 1 that is not a prime is called composite.
Remark
A positive integer \(n\) is composite if it has a divisor \(d\) that satisfies \(1<d<n\). Also, according to the definition, the integer 1 is neither prime nor composite.
Example \(\PageIndex{6}\label{eg:divides-06}\)
The integers \(2, 3, 5, 7, 11, 13, 17, 19, 23, \ldots\,\) are primes.
hands-on exercise \(\PageIndex{5}\label{he:divides-07}\)
What are the next five primes after 23?
Theorem \(\PageIndex{1}\)
There are infinitely many primes.
Proof
We postpone its proof to a later section, after we prove a fundamental result in number theory.
Theorem \(\PageIndex{2}\)
For all integers \(a\), \(b\), and \(c\) where \(a \neq 0\), we have
If \(a\mid b\), then \(a\mid xb\) for any integer \(x\).
If \(a\mid b\) and \(b\mid c\), then \(a\mid c\). (This is called the transitive property of divisibility.)
If \(a\mid b\) and \(a\mid c\), then \(a\mid (sb+tc)\) for any integers \(s\) and \(t\). (The expression \(sb+tc\) is called a linear combination of \(b\) and \(c\).)
If \(b\neq 0\) and \(a\mid b\) and \(b\mid a\), then \(a = \pm b\).
If \(a\mid b\) and \(a,b > 0\), then \(a \leq b\).
Proof
We shall only prove (1), (4), and (5), and leave the proofs of (2) and (3) as exercises.
Proof of (1)
Assume \(a\mid b\), then there exists an integer \(q\) such that \(b=aq\). For any integer \(x\), we have \[xb = x\cdot aq = a \cdot xq,\] where \(xq\) is an integer. Hence, \(a\mid xb\).
Proof of (4)
Assume \(a\mid b\), and \(b\mid a\). Then there exist integers \(q\) and \(q'\) such that \(b=aq\), and \(a=bq'\). It follows that \[a = bq' = aq\cdot q'.\] This implies that \(qq'=1\). Both \(q\) and \(q'\) are integers. Thus, each of them must be either 1 or \(-1\), which makes \(b=\pm a\).
Proof of (5)
Assume \(a\mid b\) and \(a,b>0\). Then \(b=aq\) for some integer \(q\). Since \(a,b>0\), we also have \(q>0\). Being an integer, we must have \(q\geq1\). Then \(b = aq \geq a\cdot 1 = a\).
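Each part of the theorem is easy to spot-check numerically before (or after) proving it. A small sketch, where `divides` is our own helper implementing the definition via the remainder:

```python
def divides(a, b):
    """True if a | b, i.e. b == a*q for some integer q (requires a != 0)."""
    return b % a == 0

# Property (1): a | b implies a | x*b for any integer x.
assert divides(3, 12) and all(divides(3, x * 12) for x in range(-5, 6))
# Property (2), transitivity: 3 | 12 and 12 | 48, hence 3 | 48.
assert divides(3, 12) and divides(12, 48) and divides(3, 48)
# Property (3), linear combinations: 3 | 12 and 3 | 21, so 3 | (s*12 + t*21).
assert all(divides(3, s * 12 + t * 21) for s in range(-4, 5) for t in range(-4, 5))
# Property (5): for positive a and b, a | b forces a <= b.
assert all(a <= b for b in range(1, 100) for a in range(1, 100) if divides(a, b))
print("all spot checks passed")
```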
Example \(\PageIndex{7}\label{eg:divides-07}\)
Use the definition of divisibility to show that given any integers \(a\), \(b\), and \(c\), where \(a\neq0\), if \(a\mid b\) and \(a\mid c\), then \(a\mid(sb^2+tc^2)\) for any integers \(s\) and \(t\).
Solution
We try to prove it from first principles, that is, using only the definition of divisibility. Here is the complete proof.
Assume \(a\mid b\) and \(a\mid c\). There exist integers \(x\) and \(y\)
such that \(b=ax\) and \(c=ay\). Then \[ sb^2+tc^2 = s(ax)^2+t(ay)^2 = a(sax^2+tay^2), \] where \(sax^2+tay^2\) is an integer. Hence \(a\mid(sb^2+tc^2)\).
The key step is substituting \(b=ax\) and \(c=ay\) into the expression \(sb^2+tc^2\). You may ask, how can we know this is the right thing to do?
Here is the reason. We want to show that \(a\mid(sb^2+tc^2)\). This means we need to find an integer which, when multiplied by \(a\), yields \(sb^2+tc^2\). This calls for writing \(sb^2+tc^2\) as a product of \(a\) and another integer that is yet to be determined. Since \(s\) and \(t\) bear no relationship to \(a\), our only hope lies in \(b\) and \(c\). We do know that \(b=ax\) and \(c=ay\), therefore, we should substitute them into \(sb^2+tc^2\).
hands-on exercise \(\PageIndex{6}\label{he:divides-06}\)
Let \(a\), \(b\), and \(c\) be integers such that \(a\neq 0\). Prove that if \(a\mid b\) or \(a\mid c\), then \(a\mid bc\).
Summary and Review
An integer \(b\) is divisible by a nonzero integer \(a\) if and only if there exists an integer \(q\) such that \(b=aq\).
An integer \(n>1\) is said to be prime if its only divisors are \(\pm1\) and \(\pm n\); otherwise, we say that \(n\) is composite.
If a positive integer \(n\) is composite, it has a proper divisor \(d\) that satisfies the inequality \(1<d<n\).
Exercise \(\PageIndex{1}\label{ex:divides-01}\)
Let \(a\), \(b\), and \(c\) be integers such that \(a\neq0\). Use only the definition of divisibility to prove that if \(a\mid b\) and \(c\mid (-a)\), then \((-c)\mid b\).
Exercise \(\PageIndex{2}\label{ex:divides-02}\)
Let \(a\), \(b\), \(c\), and \(d\) be integers with \(a,c\neq0\). Prove that
If \(a\mid b\) and \(c\mid d\), then \(ac\mid bd\). If \(ac \mid bc\), then \(a\mid b\).
Exercise \(\PageIndex{3}\label{ex:divides-03}\)
Let \(a\), \(b\), and \(c\) be integers such that \(a,b\neq0\). Prove that if \(a\mid b\) and \(b\mid c\), then \(a\mid c\).
Exercise \(\PageIndex{4}\label{ex:divides-04}\)
Let \(a\), \(b\), and \(c\) be integers such that \(a\neq0\). Prove that if \(a\mid b\) and \(a\mid c\), then \(a\mid (sb+tc)\) for any integers \(s\) and \(t\).
Exercise \(\PageIndex{5}\label{ex:divides-05}\)
Prove that if \(n\) is an odd integer, then \(n^2-1\) is divisible by 4.
Exercise \(\PageIndex{6}\label{ex:divides-06}\)
Use the result from Problem [ex:divides-05] to show that none of the numbers 11, 111, 1111, and 11111 is a perfect square. Generalize, and prove your conjecture.
Hint
Let \(x\) be one of these numbers. Suppose \(x\) is a perfect square, then \(x=n^2\) for some integer \(n\). How can you apply the result from Problem [ex:divides-05]?
Exercise \(\PageIndex{7}\label{ex:divides-07}\)
Prove that the square of any integer is of the form \(3k\) or \(3k+1\).
Exercise \(\PageIndex{8}\label{ex:divides-08}\)
Use Problem [ex:divides-07] to prove that \(3m^2-1\) is not a perfect square for any integer \(m\).
Exercise \(\PageIndex{9}\label{ex:divides-09}\)
Use induction to prove that \(3\mid (2^{2n}-1)\) for all integers \(n\geq1\).
Exercise \(\PageIndex{10}\label{ex:divides-10}\)
Use induction to prove that \(8\mid (5^{2n}+7)\) for all integers \(n\geq1\).
Exercise \(\PageIndex{11}\label{ex:divides-11}\)
Use induction to prove that \(5\mid (n^5-n)\) for all integers \(n\geq1\).
Exercise \(\PageIndex{12}\label{ex:divides-12}\)
Use induction to prove that \(5\mid (3^{3n+1}+2^{n+1})\) for all integers \(n\geq1\). |
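Before attempting the induction proofs in exercises 9 - 12, a quick empirical check of the claims for small \(n\) can build confidence (this is evidence, not a proof):

```python
# Check the divisibility claims of exercises 9-12 for n = 1..20.
for n in range(1, 21):
    assert (2**(2*n) - 1) % 3 == 0                 # Exercise 9
    assert (5**(2*n) + 7) % 8 == 0                 # Exercise 10
    assert (n**5 - n) % 5 == 0                     # Exercise 11
    assert (3**(3*n + 1) + 2**(n + 1)) % 5 == 0    # Exercise 12
print("all divisibility claims hold for n = 1..20")
```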
In the early 2000's, I wrote software code to process and recover telemetry data from archived raw telemetry files obtained from the Pioneer 10 and 11 spacecraft between 1972 and 2003. This led to my participation in the investigation of the Pioneer Anomaly, an as-yet unexplained small, anomalous acceleration of these two spacecraft that was measured as the spacecraft were leaving the Solar System. Over the years, I developed a precision orbit determination program (ODP) that can model spacecraft orbits and radio-metric tracking data with the necessary accuracy. I also developed some thermal modeling code, as part of our investigation of the possibility that much (all?) of the anomaly is due to the recoil force caused by heat radiated by the spacecraft in an anisotropic pattern.
Pioneer 10 and 11 were launched in 1972 and 1973, respectively. These were humanity's first two spacecraft to leave the inner solar system, cross the asteroid belt, and make close-up observations of the gas giant Jupiter. The missions were tremendously successful. Both spacecraft reached Jupiter safely, providing the first-ever close-up observations of the planet, its moons, and its immense radiation fields. Pioneer 11 used Jupiter's gravity for a maneuver that took it across the solar system, for an eventual encounter with Saturn several years later. The two spacecraft continued to operate well beyond their original design lifetime: Pioneer 11 was last contacted in 1995, whereas Pioneer 10's last transmission was received in 2003.
The Pioneer 10 and 11 spacecraft were spin-stabilized. Each spacecraft's axis of rotation coincided with its antenna axis, and was oriented in the direction of the Earth. Infrequent precession maneuvers were needed to ensure that the Earth remained within the antenna beamwidth. This meant that much of the time, the spacecraft flew completely undisturbed. As a result, Pioneer 10 and 11 remain the most precise large-scale gravitational experiment to date and for the foreseeable future.
In the 1990s, it became apparent, however, that in order to achieve maximum agreement between theory and data, the theory needed to be modified. This small correction was in the form of a constant sunward acceleration, with an approximate magnitude of $a_P=(8.74\pm 1.33)\times 10^{-10}~{\rm m}/{\rm s}^2$.
My first contribution to researching the Pioneer anomaly was a C++ software library that could process Master Data Records, a format used by NASA's DSN (Deep Space Network) to store raw Pioneer telemetry.
Subsequently, I developed a full-blown orbit determination program that could utilize DSN radio-metric Doppler measurements of the Pioneer radio signal to model the Pioneer spacecraft orbits with high precision. The program can be run from the command line, but I also developed a Windows front-end using Visual C++ that made it easier to investigate test cases and monitor the calculation.
The program solves the relativistic equations of motion
\[\frac{d^2\vec{r}}{dt^2}=\sum_i\frac{\mu_i}{|\vec{r}_i-\vec{r}|^3}\left(A_i(\vec{r}_i-\vec{r})+\vec{B}_i\right),\]
where $\vec{r}$ is the spacecraft's position, $\vec{r}_i$ is the position, $\mu_i$ is the mass of the $i$-th solar system body, $t$ is the time and the post-Newtonian correction terms $A_i$ and $\vec{B}_i$ are given by \begin{align}A_i&=1-\frac{1}{c^2}\left\{2(\beta+\gamma)\sum\limits_j\frac{\mu_j}{|\vec{r}_j-\vec{r}|}+\gamma v^2+(1+\gamma)v_i^2-2(1+\gamma)\vec{v}\cdot\vec{v}_i-\frac{3}{2}\left[\frac{(\vec{r}-\vec{r}_i)\cdot\vec{v}_i}{|\vec{r}_i-\vec{r}|}\right]^2\right\},\\ \vec{B}_i&=\frac{1}{c^2}\left\{(\vec{r}-\vec{r}_i)\cdot\left[(2+2\gamma)\vec{v}-(1+2\gamma)\vec{v}_i\right]\right\}(\vec{v}-\vec{v}_i).\end{align}
The parameters $\beta$ and $\gamma$ are the so-called Eddington-parameters; they are both 1 for general relativity, but may have different values for alternate theories of gravity.
The software also models the effects of the oblateness of Jupiter and Saturn (these become important when the spacecraft is near these giant planets) as well as the propagation of the radio signal through the solar system, affected by gravity, charged particles from the Sun, and the Earth's atmosphere. The positions of solar system bodies and the precise locations of DSN ground stations are obtained from NASA data sets. Nongravitational forces, notably solar pressure, are also accounted for by the code.
Uniquely, my program can also model the recoil force due to on-board generated heat that is radiated in an anisotropic pattern. Most of the heat on board is due to two sources: waste heat from the spacecraft's radioisotope thermoelectric generators (RTGs) and electrical heat. It turns out that the thermal recoil force $\vec{F}_\mathrm{recoil}$ is proportional to a linear combination of the powers $Q_\mathrm{rtg}$ and $Q_\mathrm{elec}$ of these two heat sources:
\[\vec{F}_\mathrm{recoil}\propto \eta_\mathrm{rtg}Q_\mathrm{rtg}+\eta_\mathrm{elec}Q_\mathrm{elec},\]
where the two coefficients $\eta_\mathrm{rtg}$ and $\eta_\mathrm{elec}$ are yet to be determined. To estimate the magnitude of these coefficients, and to verify that it is indeed legitimate to model the thermal recoil force this way, I also developed a finite element model of the spacecraft that, in combination with a raytracing algorithm, estimated $\eta_\mathrm{rtg}$ and $\eta_\mathrm{elec}$.
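The linear-combination model is simple enough to sketch in code. Every number below is an illustrative placeholder — the actual \(Q\) and \(\eta\) values came from telemetry and the thermal model, and are not reproduced here:

```python
# Sketch of the thermal recoil model. All inputs are hypothetical placeholders.
c = 299_792_458.0  # speed of light, m/s

def recoil_force(q_rtg, q_elec, eta_rtg, eta_elec):
    """Recoil force magnitude (N): a linear combination of the two heat
    sources, divided by c (momentum flux carried by the radiation)."""
    return (eta_rtg * q_rtg + eta_elec * q_elec) / c

# Hypothetical inputs: ~2 kW of RTG waste heat, ~100 W of electrical heat,
# and made-up anisotropy coefficients.
F = recoil_force(q_rtg=2000.0, q_elec=100.0, eta_rtg=0.01, eta_elec=0.4)
a = F / 250.0  # acceleration for an assumed ~250 kg spacecraft
print(f"F = {F:.3e} N, a = {a:.3e} m/s^2")
```

With these made-up inputs the force comes out around \(2\times10^{-7}\) N, and dividing by the assumed mass gives an acceleration of the same order of magnitude as \(a_P\) — which is the kind of consistency check this formulation enables.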
I first developed a finite element model of the spacecraft using Maxima. In turn, the Maxima code that I wrote generated C++ code which was then utilized by a generic raytracing algorithm that I developed. The final program ran from the command line as it iteratively estimated the thermal output of the spacecraft. One version of the program could also estimate the resulting change in angular momentum (due to the way heat is reflected off some asymmetric spacecraft components, thermal radiation could also affect the spacecraft's rotation, and indeed, an anomalous change in the rotation rate of both spacecraft was detected.) |
@egreg It does this "I just need to make use of the standard hyphenation function of LaTeX, except "behind the scenes", without actually typesetting anything." (if not typesetting includes typesetting in a hidden box) it doesn't address the use case that he said he wanted that for
@JosephWright ah yes, unlike the hyphenation near box question, I guess that makes sense, basically can't just rely on lccode anymore. I suppose you don't want the hyphenation code in my last answer by default?
@JosephWright anyway if we rip out all the auto-testing (since mac/windows/linux come out the same anyway) but leave in the .cfg possibility, there is no actual loss of functionality if someone is still using a vms tex or whatever
I want to change the tracking (space between the characters) for a sans serif font. I found that I can use the microtype package to change the tracking of the smallcaps font (\textsc{foo}), but I can't figure out how to make \textsc{} a sans serif font.
@DavidCarlisle -- if you write it as "4 May 2016" you don't need a comma (or, in the u.s., want a comma).
@egreg (even if you're not here at the moment) -- tomorrow is international archaeology day: twitter.com/ArchaeologyDay , so there must be someplace near you that you could visit to demonstrate your firsthand knowledge.
@barbarabeeton I prefer May 4, 2016, for some reason (don't know why actually)
@barbarabeeton but I have another question maybe better suited for you please: If a member of a conference scientific committee writes a preface for the special issue, can the signature say John Doe \\ for the scientific committee or is there a better wording?
@barbarabeeton overrightarrow answer will have to wait, need time to debug \ialign :-) (it's not the \smash that did it) on the other hand if we mention \ialign enough it may interest @egreg enough to debug it for us.
@DavidCarlisle -- okay. are you sure the \smash isn't involved? i thought it might also be the reason that the arrow is too close to the "M". (\smash[t] might have been more appropriate.) i haven't yet had a chance to try it out at "normal" size; after all, \Huge is magnified from a larger base for the alphabet, but always from 10pt for symbols, and that's bound to have an effect, not necessarily positive. (and yes, that is the sort of thing that seems to fascinate @egreg.)
@barbarabeeton yes I edited the arrow macros not to have relbar (ie just omit the extender entirely and just have a single arrowhead but it still overprinted when in the \ialign construct but I'd already spent too long on it at work so stopped, may try to look this weekend (but it's uktug tomorrow)
if the expression is put into an \fbox, it is clear all around. even with the \smash. so something else is going on. put it into a text block, with \newline after the preceding text, and directly following before another text line. i think the intention is to treat the "M" as a large operator (like \sum or \prod, but the submitter wasn't very specific about the intent.)
@egreg -- okay. i'll double check that with plain tex. but that doesn't explain why there's also an overlap of the arrow with the "M", at least in the output i got. personally, i think that that arrow is horrendously too large in that context, which is why i'd like to know what is intended.
@barbarabeeton the overlap below is much smaller, see the righthand box with the arrow in egreg's image, it just extends below and catches the serifs on the M, but the overlap above is pretty bad really
@DavidCarlisle -- i think other possible/probable contexts for the \over*arrows have to be looked at also. this example is way outside the contexts i would expect. and any change should work without adverse effect in the "normal" contexts.
@DavidCarlisle -- maybe better take a look at the latin modern math arrowheads ...
@DavidCarlisle I see no real way out. The CM arrows extend above the x-height, but the advertised height is 1ex (actually a bit less). If you add the strut, you end up with too big a space when using other fonts.
MagSafe is a series of proprietary magnetically attached power connectors, originally introduced by Apple Inc. on January 10, 2006, in conjunction with the MacBook Pro at the Macworld Expo in San Francisco, California. The connector is held in place magnetically so that if it is tugged — for example, by someone tripping over the cord — it will pull out of the socket without damaging the connector or the computer power socket, and without pulling the computer off the surface on which it is located. The concept of MagSafe is copied from the magnetic power connectors that are part of many deep fryers...
has anyone converted from LaTeX -> Word before? I have seen questions on the site but I'm wondering what the result is like... and whether the document is still completely editable etc after the conversion? I mean, if the doc is written in LaTeX, then converted to Word, is the word editable?
I'm not familiar with word, so I'm not sure if there are things there that would just get goofed up or something.
@baxx never use word (have a copy just because but I don't use it;-) but have helped enough people with things over the years, these days I'd probably convert to html with latexml or tex4ht then import the html into word and see what comes out
You should be able to cut and paste mathematics from your web browser to Word (or any of the Microsoft Office suite). Unfortunately at present you have to make a small edit but any text editor will do for that. Given x=\frac{-b\pm\sqrt{b^2-4ac}}{2a} Make a small html file that looks like <!...
@baxx all the convertors that I mention can deal with document \newcommand to a certain extent. if it is just \newcommand\z{\mathbb{Z}} that is no problem in any of them, if it's half a million lines of tex commands implementing tikz then it gets trickier.
@baxx yes but they are extremes but the thing is you just never know, you may see a simple article class document that uses no hard looking packages then get half way through and find \makeatletter several hundred lines of trick tex macros copied from this site that are over-writing latex format internals. |
In using the technique of integration by parts, you must carefully choose which expression is \(u\). For each of the following problems, use the guidelines in this section to choose \(u\). Do not evaluate the integrals.
1) \(\displaystyle ∫x^3e^{2x}\,dx\)
Answer: \( u=x^3\)
2) \(\displaystyle ∫x^3\ln(x)\,dx\)
3) \(\displaystyle ∫y^3\cos y\,dy\)
Answer: \(u=y^3\)
4) \(\displaystyle ∫x^2\arctan x\,dx\)
5) \(\displaystyle ∫e^{3x}\sin(2x)\,dx\)
Answer: \(u=\sin(2x)\)
In exercises 6 - 37, find the integral by using the simplest method. Not all problems require integration by parts.
6) \(\displaystyle ∫v\sin v\,dv\)
7) \(\displaystyle ∫\ln x\,dx\) (Hint: \(\displaystyle ∫\ln x\,dx\) is equivalent to \(\displaystyle ∫1⋅\ln(x)\,dx.)\)
Answer: \(\displaystyle ∫\ln x\,dx \quad = \quad−x+x\ln x+C\)
8) \(\displaystyle ∫x\cos x\,dx\)
9) \(\displaystyle ∫\tan^{−1}x\,dx\)
Answer: \(\displaystyle ∫\tan^{−1}x\,dx\quad = \quad x\tan^{−1}x−\tfrac{1}{2}\ln(1+x^2)+C\)
10) \(\displaystyle ∫x^2e^x\,dx\)
11) \(\displaystyle ∫x\sin(2x)\,dx\)
Answer: \(\displaystyle ∫x\sin(2x)\,dx \quad = \quad −\tfrac{1}{2}x\cos(2x)+\tfrac{1}{4}\sin(2x)+C\)
12) \(\displaystyle ∫xe^{4x}\,dx\)
13) \(\displaystyle ∫xe^{−x}\,dx\)
Answer: \(\displaystyle ∫xe^{−x}\,dx \quad = \quad e^{−x}(−1−x)+C\)
14) \(\displaystyle ∫x\cos 3x\,dx\)
15) \(\displaystyle ∫x^2\cos x\,dx\)
Answer: \(\displaystyle ∫x^2\cos x\,dx \quad = \quad 2x\cos x+(−2+x^2)\sin x+C\)
16) \(\displaystyle ∫x\ln x\,dx\)
17) \(\displaystyle ∫\ln(2x+1)\,dx\)
Answer: \(\displaystyle ∫\ln(2x+1)\,dx \quad = \quad \tfrac{1}{2}(1+2x)(−1+\ln(1+2x))+C\)
18) \(\displaystyle ∫x^2e^{4x}\,dx\)
19) \(\displaystyle ∫e^x\sin x\,dx\)
Answer: \(\displaystyle ∫e^x\sin x\,dx \quad = \quad \tfrac{1}{2}e^x(−\cos x+\sin x)+C\)
20) \(\displaystyle ∫e^x\cos x\,dx\)
21) \(\displaystyle ∫xe^{−x^2}\,dx\)
Answer: \(\displaystyle ∫xe^{−x^2}\,dx \quad = \quad −\frac{e^{−x^2}}{2}+C\)
22) \(\displaystyle ∫x^2e^{−x}\,dx\)
23) \(\displaystyle ∫\sin(\ln(2x))\,dx\)
Answer: \(\displaystyle ∫\sin(\ln(2x))\,dx \quad = \quad −\tfrac{1}{2}x\cos[\ln(2x)]+\tfrac{1}{2}x\sin[\ln(2x)]+C\)
24) \(\displaystyle ∫\cos(\ln x)\,dx\)
25) \(\displaystyle ∫(\ln x)^2\,dx\)
Answer: \(\displaystyle ∫(\ln x)^2\,dx \quad = \quad 2x−2x\ln x+x(\ln x)^2+C\)
26) \(\displaystyle ∫\ln(x^2)\,dx\)
27) \(\displaystyle ∫x^2\ln x\,dx\)
Answer: \(\displaystyle ∫x^2\ln x\,dx \quad = \quad −\frac{x^3}{9}+\tfrac{1}{3}x^3\ln x+C\)
28) \(\displaystyle ∫\sin^{−1}x\,dx\)
29) \(\displaystyle ∫\cos^{−1}(2x)\,dx\)
Answer: \(\displaystyle ∫\cos^{−1}(2x)\,dx \quad = \quad −\tfrac{1}{2}\sqrt{1−4x^2}+x\cos^{−1}(2x)+C\)
30) \(\displaystyle ∫x\arctan x\,dx\)
31) \(\displaystyle ∫x^2\sin x\,dx\)
Answer: \(\displaystyle ∫x^2\sin x\,dx \quad = \quad −(−2+x^2)\cos x+2x\sin x+C\)
32) \(\displaystyle ∫x^3\cos x\,dx\)
33) \(\displaystyle ∫x^3\sin x\,dx\)
Answer: \(\displaystyle ∫x^3\sin x\,dx \quad = \quad −x(−6+x^2)\cos x+3(−2+x^2)\sin x+C\)
34) \(\displaystyle ∫x^3e^x\,dx\)
35) \(\displaystyle ∫x\sec^{−1}x\,dx\)
Answer: \(\displaystyle ∫x\sec^{−1}x\,dx \quad = \quad \tfrac{1}{2}x\left(−\sqrt{1−\frac{1}{x^2}}+x⋅\sec^{−1}x\right)+C\)
36) \(\displaystyle ∫x\sec^2x\,dx\)
37) \(\displaystyle ∫x\cosh x\,dx\)
Answer: \(\displaystyle ∫x\cosh x\,dx \quad = \quad −\cosh x+x\sinh x+C\)
In exercises 38 - 46, compute the definite integrals. Use a graphing utility to confirm your answers.
38) \(\displaystyle ∫^1_{1/e}\ln x\,dx\)
39) \(\displaystyle ∫^1_0xe^{−2x}\,dx\) (Express the answer in exact form.)
Answer: \(\displaystyle ∫^1_0xe^{−2x}\,dx \quad = \quad \frac{1}{4}−\frac{3}{4e^2}\)
40) \(\displaystyle ∫^1_0e^{\sqrt{x}}\,dx \quad (\text{let}\, u=\sqrt{x})\)
41) \(\displaystyle ∫^e_1\ln(x^2)\,dx\)
Answer: \(\displaystyle ∫^e_1\ln(x^2)\,dx \quad = \quad 2\)
42) \(\displaystyle ∫^π_0x\cos x\,dx\)
43) \(\displaystyle ∫^π_{−π}x\sin x\,dx\) (Express the answer in exact form.)
Answer: \(\displaystyle ∫^π_{−π}x\sin x\,dx \quad = \quad 2\pi\)
44) \(\displaystyle ∫^3_0\ln(x^2+1)\,dx\) (Express the answer in exact form.)
45) \(\displaystyle ∫^{π/2}_0x^2\sin x\,dx\) (Express the answer in exact form.)
Answer: \(\displaystyle ∫^{π/2}_0x^2\sin x\,dx \quad = \quad −2+π\)
46) \(\displaystyle ∫^1_0x5^x\,dx\) (Express the answer using five significant digits.)
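Instead of a graphing utility, a short numerical check also confirms these definite integrals. A sketch using composite Simpson's rule (the `simpson` helper is our own), applied to the answer of exercise 39:

```python
import math

def simpson(f, a, b, n=1000):
    """Composite Simpson's rule with n (even) subintervals."""
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + (2*k - 1) * h) for k in range(1, n // 2 + 1))  # odd nodes
    s += 2 * sum(f(a + 2*k * h) for k in range(1, n // 2))            # even interior nodes
    return s * h / 3

# Exercise 39: the integral of x*e^(-2x) on [0, 1] should equal 1/4 - 3/(4e^2).
approx = simpson(lambda x: x * math.exp(-2 * x), 0.0, 1.0)
exact = 0.25 - 3 / (4 * math.e**2)
print(approx, exact)
assert abs(approx - exact) < 1e-10
```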
47) Evaluate \(\displaystyle ∫\cos x\ln(\sin x)\,dx\)
Answer: \(\displaystyle ∫\cos x\ln(\sin x)\,dx \quad = \quad −\sin(x)+\ln[\sin(x)]\sin x+C\)
In exercises 48 - 50, derive the following formulas using the technique of integration by parts. Assume that \(n\) is a positive integer. These formulas are called reduction formulas because the exponent in the \(x\) term has been reduced by one in each case. The second integral is simpler than the original integral.
48) \(\displaystyle ∫x^ne^x\,dx=x^ne^x−n∫x^{n−1}e^x\,dx\)
49) \(\displaystyle ∫x^n\cos x\,dx=x^n\sin x−n∫x^{n−1}\sin x\,dx\)
Answer: Answers vary
50) \(\displaystyle ∫x^n\sin x\,dx=\)______
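Reduction formulas translate directly into recursive code. As an illustrative sketch (our own helper, evaluated on \([0,1]\) for concreteness), formula 48 gives \(I_n=\int_0^1 x^ne^x\,dx = e - nI_{n-1}\), with \(I_0=e-1\):

```python
import math

def I(n):
    """Integral of x^n * e^x over [0, 1], via the reduction formula of
    exercise 48: I_n = [x^n e^x]_0^1 - n*I_{n-1} = e - n*I_{n-1}."""
    if n == 0:
        return math.e - 1
    return math.e - n * I(n - 1)

# Check against a case done by parts directly: I_1 = [x e^x - e^x]_0^1 = 1.
print(I(1))  # 1.0 up to rounding
```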
51) Integrate \(\displaystyle ∫2x\sqrt{2x−3}\,dx\) using two methods:
a. Using parts, letting \(dv=\sqrt{2x−3}\,dx\)
b. Substitution, letting \(u=2x−3\)
Answer: a. \(\displaystyle ∫2x\sqrt{2x−3}\,dx \quad = \quad \tfrac{2}{5}(1+x)(−3+2x)^{3/2}+C\) b. \(\displaystyle ∫2x\sqrt{2x−3}\,dx \quad = \quad \tfrac{2}{5}(1+x)(−3+2x)^{3/2}+C\)
In exercises 52 - 57, state whether you would use integration by parts to evaluate the integral. If so, identify \(u\) and \(dv\). If not, describe the technique used to perform the integration without actually doing the problem.
52) \(\displaystyle ∫x\ln x\,dx\)
53) \(\displaystyle ∫\frac{\ln^2x}{x}\,dx\)
Answer: Do not use integration by parts. Choose \(u\) to be \(\ln x\), and the integral is of the form \(\displaystyle ∫u^2\,du.\)
54) \(\displaystyle ∫xe^x\,dx\)
55) \(\displaystyle ∫xe^{x^2−3}\,dx\)
Answer: Do not use integration by parts. Let \(u=x^2−3\), and the integral can be put into the form \(∫e^u\,du\).
56) \(\displaystyle ∫x^2\sin x\,dx\)
57) \(\displaystyle ∫x^2\sin(3x^3+2)\,dx\)
Answer: Do not use integration by parts. Let \(u=3x^3+2\), and the integral can be put into the form \(\displaystyle ∫\sin(u)\,du.\)
In exercises 58-59, sketch the region bounded above by the curve, the \(x\)-axis, and \(x=1\), and find the area of the region. Provide the exact form or round answers to the number of places indicated.
58) \(y=2xe^{−x}\) (Approximate answer to four decimal places.)
59) \(y=e^{−x}\sin(πx)\) (Approximate answer to five decimal places.)
Answer: The area under graph is \(0.39535 \, \text{units}^2.\)
In exercises 60 - 61, find the volume generated by rotating the region bounded by the given curves about the specified line. Express the answers in exact form or approximate to the number of decimal places indicated.
60) \(y=\sin x,\,y=0,\,x=2π,\,x=3π;\) about the \(y\)-axis (Express the answer in exact form.)
61) \(y=e^{−x}, \,y=0,\,x=−1, \, x=0;\) about \(x=1\) (Express the answer in exact form.)
Answer: \(V = 2πe \, \text{units}^3\)
62) A particle moving along a straight line has a velocity of \(v(t)=t^2e^{−t}\) after \(t\) sec. How far does it travel in the first 2 sec? (Assume the units are in feet and express the answer in exact form.)
63) Find the area under the graph of \(y=\sec^3x\) from \(x=0\) to \(x=1\). (Round the answer to two significant digits.)
Answer: \(A= 2.05 \, \text{units}^2\)
64) Find the area between \(y=(x−2)e^x\) and the \(x\)-axis from \(x=2\) to \(x=5\). (Express the answer in exact form.)
65) Find the area of the region enclosed by the curve \(y=x\cos x\) and the \(x\)-axis for \(\frac{11π}{2}≤x≤\frac{13π}{2}.\) (Express the answer in exact form.)
Answer: \(A = 12π \, \text{units}^2\)
66) Find the volume of the solid generated by revolving the region bounded by the curve \(y=\ln x\), the \(x\)-axis, and the vertical line \(x=e^2\) about the \(x\)-axis. (Express the answer in exact form.)
67) Find the volume of the solid generated by revolving the region bounded by the curve \(y=4\cos x\) and the
Answer: \(V = 8π^2 \, \text{units}^3\)
68) Find the volume of the solid generated by revolving the region in the first quadrant bounded by \(y=e^x\) and the \(x\)-axis, from \(x=0\) to \(x=\ln(7)\), about the \(y\)-axis. (Express the answer in exact form.)
Contributors
Gilbert Strang (MIT) and Edwin “Jed” Herman (Harvey Mudd) with many contributing authors. This content by OpenStax is licensed with a CC-BY-SA-NC 4.0 license. Download for free at http://cnx.org. |
First, the interpretations of $\eta=\frac{\partial\log y}{\partial\log x}$ and $\eta'=\frac{\partial y}{\partial x}$ are different. $\eta$ is the ratio of percent changes and $\eta'$ is the ratio of absolute changes. But you already know that. The real question is not why we define “elasticity” as a ratio of percent changes rather than absolute changes in economics, because that's how we use the word “elasticity” in everyday life: suppose rubber band A is 10 inches long and can be stretched by 1 inch when force F is applied, and rubber band B is 1 inch long and can be stretched by 0.5 inch when the same force F is applied. We would say rubber band B is more “elastic” than A, because we care not about absolute change but about relative change when defining “elasticity”.
So I think your question is “why is elasticity $\eta$ more applicable/useful than $\eta'$?” My answer is that $\eta$ is not more useful/applicable/natural. Both $\eta$ and $\eta'$ are useful in different applications.
Take the price elasticity example. At unit price \$1, consumer A would buy 10 apples, and consumer B would buy 5 apples. At unit price \$2, consumer A would buy 6 apples, and consumer B would buy 1 apple. That is, when the price of apples increases by \$1, both consumers will reduce their purchase by 4 apples. For both consumers, $\eta'=\frac{\Delta\text{apples}}{\Delta\text{price}}=4$. This is a useful quantity if what we wanted to know is what happens to the sales of apples if price increased by \$1. But if we wanted to know which consumer would respond more dramatically to the price change, $\eta'$ is not a good measure. Because it seems consumer B is more “sensitive” to the price changes: she would cut her apple consumption by 80%, compared with only 40% reduction of consumer A. So $\eta$ is a better measure of this “sensitivity” or “elasticity” with respect to price changes.
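The two measures from this example can be computed side by side (a small sketch; the helper names are ours, and note the text above quotes the magnitude of $\eta'$, whereas the signed value is negative):

```python
def abs_slope(q0, q1, p0, p1):
    """eta': ratio of absolute changes, delta-q / delta-p."""
    return (q1 - q0) / (p1 - p0)

def elasticity(q0, q1, p0, p1):
    """eta: ratio of percent changes, measured from the base point."""
    return ((q1 - q0) / q0) / ((p1 - p0) / p0)

# Consumer A: 10 -> 6 apples; consumer B: 5 -> 1 apple, as price goes $1 -> $2.
print(abs_slope(10, 6, 1, 2), abs_slope(5, 1, 1, 2))    # -4.0 -4.0: identical
print(elasticity(10, 6, 1, 2), elasticity(5, 1, 1, 2))  # -0.4 -0.8: B more elastic
```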
In your demand curve extrapolation example, assuming constant elasticity $\eta$ is probably closer to the truth than assuming constant $\eta'$. If you assume constant $\eta'$, the demand curve is a straight line. This effectively means a price change from \$1 to \$2 will induce the same change in quantity demanded as a price change from \$100 to \$101. But this is not supported by either evidence or common sense. The human brain does not seem to work this way. In this sense, relative changes do seem to be relevant in more economic applications than absolute changes.
This is the final one of a series of posts about the manuscript “Finite Part of Operator K-theory for Groups Finitely Embeddable into Hilbert Space and the Degree of Non-rigidity of Manifolds” (ArXiv e-print 1308.4744. http://arxiv.org/abs/1308.4744) by Guoliang Yu and Shmuel Weinberger. In previous posts (most recently this one) I’ve described their main result about the assembly map, what I call the Finite Part Conjecture, and explained some of the methodology of the proof for the large class of groups that they call “finitely embeddable in Hilbert space”. Now I want to explain some of the consequences of the Finite Part Conjecture. Continue reading
This paper, http://arxiv.org/abs/1210.6100, has been accepted by the Proceedings of the Edinburgh Mathematical Society. I just sent off the copyright transfer form this evening, so everything is now set, I hope.
The paper is mostly paying an expository debt. In my CBMS lecture notes I said that if one has the Dirac operator on a complete spin manifold \(M\), and if there is some subset \(N\subseteq M\) such that \(M\) has uniformly positive scalar curvature outside \(N\), then the index of \(D\) belongs to the K-theory of the ideal \(I_N \triangleleft C^*(M) \) associated to the subset \(N\). A very special case of this is the observation of Gromov-Lawson that \(D\) is Fredholm if we have uniformly positive scalar curvature outside a compact set. There are of course analogous results using the positivity of the Weitzenbock curvature term for other generalized Dirac operators.
Until now, I had not written up the proof of this assertion, but I felt last year that it was (past) time to do so. This paper contains the proof and also that of the associated general form of the Gromov-Lawson relative index theorem which also appears in my CBMS notes. The latter proof uses some results from my paper with Paul Siegel on sheaf theory and Paschke duality.
The submission to PEMS is in honor of a very pleasant sabbatical spent in Edinburgh in fall 2004.
Sasha Dranishnikov gave a talk describing some of his results about Gromov’s conjecture relating positive scalar curvature and macroscopic dimension.

Definition (Gromov): Let \(X\) be a metric space. We say that \(X\) has macroscopic dimension \(\le n\) if there exists a continuous, uniformly cobounded \(f\colon X\to K\), where \(K\) is an \(n\)-dimensional simplicial complex. We recall that uniformly cobounded means that there is an upper bound on the diameters of inverse images of simplices.
This is a metric notion, but it is quite different from the familiar
asymptotic dimension. One way of defining the latter says that \(X\) has asymptotic dimension \(\le n\) if, for each \(\epsilon>0\), there is an \(\epsilon\)-Lipschitz uniformly cobounded map to an \(n\)-dimensional simplicial complex (here, we agree to metrize \(K\) as a subset of the standard simplex in infinite-dimensional Euclidean space). From this definition it is apparent that the macroscopic dimension is less than or equal to the asymptotic dimension. On the other hand, it is also clear that the macroscopic dimension is less than or equal to the ordinary topological dimension.
Gromov famously conjectured that the universal cover of a compact \(n\)-manifold that admits a metric of positive scalar curvature should have macroscopic dimension \(\le n-2\). The motivating example for this conjecture is a manifold \(M^n = N^{n-2}\times S^2 \) – this clearly admits positive scalar curvature, and its universal cover has macroscopic dimension at most \(n-2\). Gromov’s conjecture suggests that this geometric phenomenon is “responsible” for all positive scalar curvature metrics.
One often hears the words "string tension" in string theory. But what does it really mean? In ordinary physics, "tension" in an ordinary classical string arises from the elasticity of the string material, which is a consequence of molecular interactions (electromagnetic in nature). But string theory, being the most fundamental framework to ask questions about physics (as claimed by string theorists), cannot take such elasticity for granted from the start. So my question is: what does "tension" mean in the context of string theory? Perhaps this question is foolish, but please don't ignore it.
A good question. The string tension actually
is a tension, so you may measure it in Newtons (SI units). Recall that 1 Newton is 1 Joule per meter, and indeed, the string tension is the energy per unit length of the string.
Because the string tension is not far from the Planck tension - one Planck energy per one Planck length, or $10^{44}$ Newtons or so - it is enough to shrink the string almost immediately to the shortest possible distance whenever it is possible. Unlike piano strings, strings in string theory have a variable proper length.
This minimum distance, as allowed by the uncertainty principle, is comparable to the Planck length or 100 times the Planck length which is still tiny (although models where it is much longer exist).
For such huge energies and velocities comparable to the speed of light, one needs to appreciate special relativity, including the famous equation $E=mc^2$. This equation says that the string tension is also equal to the mass of a unit length of the string (times $c^2$). The string is amazingly heavy - something like $10^{27}$ kg per meter: I divided the previous figure $10^{44}$ by $10^{17}$, which is the squared speed of light in SI units.
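For reference, the Planck tension $E_P/\ell_P = c^4/G$ and the corresponding mass per unit length can be evaluated directly (Python; a back-of-the-envelope sketch of mine, not part of the original answer):

```python
import math

c = 2.998e8       # speed of light, m/s
G = 6.674e-11     # Newton's constant, m^3 kg^-1 s^-2
hbar = 1.055e-34  # reduced Planck constant, J s

E_planck = math.sqrt(hbar * c**5 / G)  # Planck energy, ~2e9 J
l_planck = math.sqrt(hbar * G / c**3)  # Planck length, ~1.6e-35 m

tension = E_planck / l_planck          # equals c^4 / G, ~1.2e44 N
mass_per_length = tension / c**2       # ~1.3e27 kg/m

print(f"tension ~ 1e{math.log10(tension):.0f} N")
print(f"mass/length ~ 1e{math.log10(mass_per_length):.0f} kg/m")
```

Dividing one Planck energy by one Planck length cancels $\hbar$ entirely, which is why the Planck tension is the purely classical combination $c^4/G$.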
Basic equations of perturbative string theory
More abstractly, the string tension is the coefficient in the Nambu-Goto action for the string. What is it? Well, classical physics may be defined as Nature's effort to minimize the action $S$. For a particle in special relativity, $$ S = -m\int d\tau_{proper} $$ i.e. the action is equal to (minus) the proper length of the world line in spacetime multiplied by the mass. Note that because Nature tries to minimize it, massive particles will move along geodesics (straightest lines) in general relativity. If you expand the action in the non-relativistic limit (in units with $c=1$), you get $-m\Delta t+\int dt\, mv^2/2$, where the second term is the usual kinetic part of the action in mechanics. That's because curved world lines in Minkowski space have shorter proper time than straight ones.
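The non-relativistic expansion quoted above can be checked numerically (Python, units with $c=1$; the mass value is an arbitrary stand-in):

```python
import math

m = 1.0  # arbitrary mass, in units with c = 1

def lagrangian_exact(v):
    """Relativistic Lagrangian of a free particle: L = -m * sqrt(1 - v^2)."""
    return -m * math.sqrt(1 - v**2)

def lagrangian_nonrel(v):
    """First two terms of the small-v expansion: -m + m v^2 / 2."""
    return -m + m * v**2 / 2

# The two agree up to the next term in the expansion, m v^4 / 8:
for v in (0.01, 0.05, 0.1):
    err = abs(lagrangian_exact(v) - lagrangian_nonrel(v))
    print(v, err)
```

The printed error shrinks roughly like $v^4/8$ as $v$ decreases, confirming that the first correction to $-m + mv^2/2$ is of fourth order in the velocity.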
String theory is analogously about the motion of 1-dimensional objects in the spacetime. They leave a history which looks like a 2-dimensional surface, the world sheet, which is analogous to the world line with an extra spatial dimension. The action is $$ S_{NG} = -T\int d\tau d\sigma_{proper} $$ where the integral is supposed to represent the proper area of the world sheet in spacetime. The coefficient $T$ is the string tension. Note that it is like the previous mass (from the point-like particle case) per unit distance. It may also be interpreted as the action per unit area of the world sheet - it's the same as energy per unit length because energy is action per unit time.
At this moment, when you understand the Nambu-Goto action above, you may start to study textbooks of string theory.
Piano strings are made out of metallic atoms, unlike fundamental strings in string theory. But I would say that the most important difference is that the strings in string theory are allowed - and love - to change their proper length. However, in all the other features, piano strings and strings in string theory are much more analogous than string theory beginners usually want to admit. In particular, the internal motion is described by equations that may be called the wave equation, at least in some proper coordinates.
Also, the strings in string theory are relativistic and on a large enough piece of world sheet, the internal SO(1,1) Lorentz symmetry is preserved. That's why a string carries not only an energy density $\rho$ but also a negative pressure $p=-\rho$ in the direction along the string.
1. Measurement of the top quark mass with lepton+jets final states using $\mathrm{pp}$ collisions at $\sqrt{s}=13\,\text{TeV}$
The European Physical Journal C, ISSN 1434-6044, 11/2018, Volume 78, Issue 11, pp. 1 - 27
The mass of the top quark is measured using a sample of $\mathrm{t\overline{t}}$ events collected by the CMS detector using proton-proton...
Nuclear Physics, Heavy Ions, Hadrons | Measurement Science and Instrumentation | Nuclear Energy | Quantum Field Theories, String Theory | Physics | Elementary Particles, Quantum Field Theory | Astronomy, Astrophysics and Cosmology
Journal Article
2. Measurement of prompt and nonprompt $\mathrm{J}/\psi$ production in $\mathrm{pp}$ and $\mathrm{pPb}$ collisions at $\sqrt{s_{\mathrm{NN}}}=5.02\,\text{TeV}$
The European Physical Journal C, ISSN 1434-6044, 04/2017, Volume 77, Issue 4, pp. 1 - 27
Abstract This paper reports the measurement of $\mathrm{J}/\psi$ meson production in proton–proton ($\mathrm{pp}$) and...
Journal Article
Physical Review Letters, ISSN 0031-9007, 06/2015, Volume 115, Issue 1, p. 012301
The second-order azimuthal anisotropy Fourier harmonics, $v_2$, are obtained in pPb and PbPb collisions over a wide pseudorapidity ($\eta$) range based on...
DISTRIBUTIONS | PLUS AU COLLISIONS | LEE-YANG ZEROS | ANISOTROPIC FLOW | PARTICLES | PHYSICS, MULTIDISCIPLINARY | ECCENTRICITIES | NUCLEAR COLLISIONS | PROTON-PROTON | Correlation | Large Hadron Collider | Anisotropy | Dynamics | Collisions | Luminosity | Charged particles | Dynamical systems
Journal Article
4. Study of the underlying event in top quark pair production in $\mathrm{pp}$ collisions at $13\,\text{TeV}$
The European Physical Journal C, ISSN 1434-6044, 02/2019, Volume 79, Issue 2
Journal Article
5. Observation of Charge-Dependent Azimuthal Correlations in p-Pb Collisions and Its Implication for the Search for the Chiral Magnetic Effect
Physical Review Letters, ISSN 0031-9007, 03/2017, Volume 118, Issue 12, pp. 122301 - 122301
Charge-dependent azimuthal particle correlations with respect to the second-order event plane in p-Pb and PbPb collisions at a nucleon-nucleon center-of-mass...
PARITY VIOLATION | SEPARATION | PHYSICS, MULTIDISCIPLINARY | FIELD | Hadrons | Correlation | Large Hadron Collider | Searching | Correlation analysis | Collisions | Solenoids | Atomic collisions | NUCLEAR PHYSICS AND RADIATION PHYSICS | PHYSICS OF ELEMENTARY PARTICLES AND FIELDS
Journal Article
6. Measurement of the top quark mass with lepton+jets final states using $\mathrm{pp}$ collisions at $\sqrt{s}=13\,\text{TeV}$
European Physical Journal. C, Particles and Fields, ISSN 1434-6044, 11/2018, Volume 78, Issue 11
The mass of the top quark is measured using a sample of $\mathrm{t\overline{t}}$ events containing one isolated muon or electron and at least four jets in the...
PHYSICS OF ELEMENTARY PARTICLES AND FIELDS
Journal Article
European Physical Journal C, ISSN 1434-6044, 11/2018, Volume 78, Issue 11
Journal Article
European Physical Journal C, ISSN 1434-6044, 04/2017, Volume 77, Issue 4
Journal Article
9. Observation of Correlated Azimuthal Anisotropy Fourier Harmonics in pp and p+Pb Collisions at the LHC
Physical Review Letters, ISSN 0031-9007, 02/2018, Volume 120, Issue 9, pp. 092301 - 092301
The azimuthal anisotropy Fourier coefficients ($v_n$) in 8.16 TeV p+Pb data are extracted via long-range two-particle correlations as a function of the event...
Anisotropy | Correlation analysis | Collisions | Rangefinding | NUCLEAR PHYSICS AND RADIATION PHYSICS | PHYSICS OF ELEMENTARY PARTICLES AND FIELDS
Journal Article
10. Measurement of the weak mixing angle using the forward–backward asymmetry of Drell–Yan events in $\mathrm{pp}$ collisions at $8\,\text{TeV}$
The European Physical Journal C, ISSN 1434-6044, 9/2018, Volume 78, Issue 9, pp. 1 - 30
A measurement is presented of the effective leptonic weak mixing angle ($\sin^2\theta^{\ell}_{\text{eff}}$) using the forward–backward...
Nuclear Physics, Heavy Ions, Hadrons | Measurement Science and Instrumentation | Nuclear Energy | Quantum Field Theories, String Theory | Physics | Elementary Particles, Quantum Field Theory | Astronomy, Astrophysics and Cosmology
Journal Article
11. Evidence for transverse-momentum- and pseudorapidity-dependent event-plane fluctuations in PbPb and p Pb collisions
Physical Review C - Nuclear Physics, ISSN 0556-2813, 09/2015, Volume 92, Issue 3
Journal Article
12. Study of high-$p_{\mathrm{T}}$ charged particle suppression in PbPb compared to $\mathrm{pp}$ collisions at $\sqrt{s_{\mathrm{NN}}}=2.76\,\text{TeV}$
EUROPEAN PHYSICAL JOURNAL C, ISSN 1434-6044, 03/2012, Volume 72, Issue 3
Journal Article |
Greek font encoding definition files
This work may be distributed and/or modified under the conditions of the LaTeX Project Public License, either version 1.3 of this license or any later version.
Abstract
LaTeX internal character representation (LICR) macros are a verbose but failsafe 7-bit ASCII encoding that works unaltered under both 8-bit TeX and XeTeX/LuaTeX. Use cases are macro definitions and generated text.
Note
The LICR macro names for Greek symbols are chosen pending endorsement by the TeX community and related packages.
Names for archaic characters, accents/diacritics, and punctuation may change in future versions.
0.9
2013-07-03
greek-fontenc.def “outsourced” from lgrxenc.def
experimental files xunicode-greek.sty and greek-euenc.def: LICRs for XeTeX/LuaTeX.
0.9.1
2013-07-18
Bugfix: wrong breathings psilioxia -> dasiaoxia.
0.9.2
2013-07-19
Bugfix: Disable composite defs starting with char macro,
fix “hiatus” handling.
0.9.3
2013-07-24
Fix “input” path in xunicode-greek and greek-euenc.def.
0.9.4
2013-09-10
greek-fontenc.sty: Greek text font encoding setup package,
remove xunicode-greek.sty.
0.10
2013-09-13
greek-fontenc.sty removed (obsoleted by textalpha.sty).
0.10.1
2013-10-01
0.11
2013-11-28
Compatibility with Xe/LuaTeX in 8-bit mode,
\greekscript TextCommand.
0.11.1
2013-12-01
Fix identification of greek-euenc.def.
0.11.2
2014-09-04
Documentation update, remove duplicate code.
0.12
2014-12-25
Fix auxiliary macro names in textalpha.
Conservative naming: move definition of \< and \> from greek-fontenc.def to textalpha.sty (Bugreport David Kastrup). Documentation update.
0.13
2015-09-04
Support for symbol variants,
keep-semicolon option in textalpha,
Do not convert \ypogegrammeni to \prosgegrammeni with \MakeUppercase.
0.13.1
2015-12-07
Fix rho with dasia bug in lgrenc.def (Linus Romer).
0.13.2
2016-02-05
Support for standard Unicode text font encoding “TU” (new in fontspec v2.5a).
0.13.3
2019-07-10
Drop error font declaration (cf. ltxbugs 4399).
0.13.4
2019-07-11
@uclclist entry for \prosgegrammeni.
Documentation update.
Greek symbols in text independent of font encoding and TeX engine.
Generic macros for Greek symbols in text and math.
The textalpha package.
The alphabeta package.
Test and usage example.
Example for use of the Greek LICR definitions with XeTeX or LuaTeX.
Greek script in PDF metadata.
The package hyperref defines the PU font encoding which also supports (monotonic) Greek.
These files are still in development and will eventually be moved to/merged with other packages or removed in future versions:
If possible, get this package from your distribution using its installation manager.
Otherwise, make sure LaTeX can find the package and definition files:
The arabi package provides the Babel arabic option which loads arabicfnt.sty for font setup. This package overwrites the LICR macros \omega and \textomega with font selecting commands. See the report for Debian bug 858987 for details and the arabi workaround below.
There are many alternatives to set up the support for a Greek font encoding provided by this package, e.g.:
Ensure support for Greek characters in text mode:
\usepackage{textalpha}
\usepackage[normalize-symbols]{textalpha}
\usepackage[normalize-symbols,keep-semicolon]{textalpha}
This sets up LICR macros for Greek text characters under both 8-bit TeX and Xe-/LuaTeX. For details see textalpha-doc.tex and textalpha-doc.pdf (8-bit TeX) as well as greek-euenc-doc.tex and greek-euenc-doc.pdf (XeTeX/LuaTeX).
To use the short macro names (\alpha … \Omega) known from math mode in both text and math mode, write
\usepackage{alphabeta}
Use the greek option with Babel:
\usepackage[greek]{babel}
This automatically loads lgrenc.def with 8-bit TeX and greek-euenc.def with XeTeX/LuaTeX and provides localized auto-strings, hyphenation and other localizations (see babel-greek).
Declare LGR via fontenc. For example, specify T1 (8-bit Latin) as default font encoding and LGR for Greek with
\usepackage[LGR,T1]{fontenc}

\usepackage[LGR]{fontenc}
\usepackage{fontspec}
\setmainfont{Linux Libertine O} % Latin Modern does not support Greek
\setsansfont{Linux Biolinum O}
\usepackage{textalpha}
To work around the conflict with arabi, it may suffice to ensure greek is loaded after arabic:
\usepackage[arabic,greek,english]{babel}
More secure is an explicit reverse-definition, e.g.
% save original \omega
\let\mathomega\omega
\usepackage[utf8]{inputenc}
\usepackage[LAE,LGR,T1]{fontenc}
\usepackage[arabic,greek,english]{babel}
% fix arabtex:
\DeclareTextSymbol{\textomega}{LGR}{119}
\renewcommand{\omega}{\mathomega}
The [encguide] reserves the name T7 for a Greek standard font encoding. However, up to now, there is no agreement on an implementation because the restrictions for general text encodings are too severe for typesetting polytonic Greek.
The LGR font encoding is the de-facto standard for typesetting Greek with (8-bit) LaTeX. greek-fontenc provides a comprehensive LGR font encoding definition file.
Fonts in this encoding include the CB fonts (matching CM), grtimes (Greek Times), Kerkis (matching URW Bookman), DejaVu, Libertine GC, and the GFS fonts. Setup of these fonts as Greek variant to matching Latin fonts is facilitated by the substitutefont package.
The LGR font encoding allows to access Greek characters via an ASCII transliteration. This enables simple input with a Latin keyboard. Characters with diacritics can be selected by ligature definitions in the font (see [greek-usage], [teubner-doc], [cbfonts]).
A major drawback of the transliteration is that you cannot access Latin letters while LGR is the active font encoding (e.g. in documents or parts of documents given the Babel language greek or polutonikogreek). This means that an explicit language switch is required for every Latin-written word or acronym. This problem can only be solved via a font encoding comprising both Latin and Greek, like the envisaged T7, or via Unicode (with XeTeX or LuaTeX).
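As an illustration of the transliteration and the required language switches (a sketch; it assumes Babel's greek option and an LGR-encoded font such as the CB fonts are installed):

```latex
\documentclass{article}
\usepackage[T1]{fontenc}
\usepackage[greek,english]{babel}
\begin{document}
% With LGR active, Latin input is transliterated to Greek:
% l'ogos -> lambda, omicron with oxia, gamma, omicron, sigma
% (the final-sigma form at word end is selected automatically).
\selectlanguage{greek}
l'ogos
% Every Latin-written word needs an explicit switch back:
\selectlanguage{english}
logos
\end{document}
```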
The font encoding file lgienc.def from ibycus-babel provides a basic setup (without any LICR macros or composite definitions).
Xe/LuaTeX works with any system-wide installed OpenType font. Suitable fonts supporting Greek include CM Unicode, Deja Vu, EB Garamond, the GFS fonts, Libertine OTF, Libertinus, Old Standard, Tempora, and UM Typewriter (all available on CTAN) but also many commercial fonts. Unfortunately, the fontspec default, Latin Modern, misses most Greek characters.
Legacy Unicode font encodings for XeTeX and LuaTeX respectively.
This package provides LaTeX internal character representations (LICR macros) for Greek letters and diacritics. Macro names were selected based on the following considerations:
The fntguide (section 6.4 Naming conventions) recommends:
Where possible, text symbols should be named as \text followed by the
Adobe glyph name: for example \textonequarter or \textsterling. Similarly, math symbols should be named as \math followed by the glyph name, for example \mathonequarter or \mathsterling.
If there exists a math-mode macro for a symbol, the corresponding text macro could be formed by prepending \text.
The glyph name for the GREEK SMALL LETTER FINAL SIGMA is sigma1, the corresponding math-macro is \varsigma. The text symbol is made available as \textvarsigma.
The math macros for the symbol variants \varepsilon and \varphi map to characters named “GREEK SMALL LETTER …”, while \vartheta, \varkappa, \varrho, and \varpi map to “GREEK … SYMBOL” Unicode characters. (See also section 5.5.3 of the unicode-math documentation.)
The Unicode names list provides standardized descriptive names for all Unicode characters that use only capital letters of the Latin alphabet. While not suited for direct use in LICR macros, they can be either
used as inspiration for new LICR macro names or
converted to LICR macro names via a defined set of transformation rules.
\textfinalsigma is a descriptive alias for GREEK SMALL LETTER FINAL SIGMA derived via the rules:
drop “LETTER” if the name remains unique,
drop “GREEK” if the name remains unique,
use capitalized name for capital letters, lowercase for “SMALL” letters and drop “SMALL”,
concatenate the remaining name parts.
Omit the “text” prefix for macros that do not have a math counterpart?
Simpler,
ease of use (less typing, better readability of source text),
many established text macro names without “text”,
the text prefix does not mark a macro as encoding-specific or as “inserting a glyph”. There are e.g. font-changing macros (\textbf, \textit) and encoding-changing macros (\textgreek, \textcyr).
There are examples of encoding-specific macros without the text-prefix, especially for letters, see encguide.
Less consistent,
possible name clashes
text prefix marks a macro as confined to text (as opposed to math) mode,
standard accent macros (\DeclareTextAccent definitions in latex/base/...) are one-symbol macros (\' \" ... \u \v ...) .
However, the Adobe Glyph List For New Fonts maps, e.g., “tonos” and “dieresistonos” to the spacing characters GREEK TONOS and GREEK DIALYTIKA TONOS, hence \texttonos and \textdieresistonos should be spacing characters.
textcomp (ts1enc.def) defines \capital... accents (i.e. without text prefix).
Currently, greek-fontenc uses for diacritics:
Greek names like in Unicode, and ucsencs.def, and
the prefix \acc to distinguish the macros as text accents and reduce the risk of name clashes (cf. \@tabacckludge).
Mathematical notation distinguishes variant shapes for beta (β|ϐ), theta (θ|ϑ), phi (φ|ϕ), pi (π|ϖ), kappa (κ|ϰ), rho (ρ|ϱ), Theta (Θ|ϴ), and epsilon (ε|ϵ). The variations have no syntactic meaning in Greek text and Greek text fonts use the shape variants indiscriminately.
Unicode defines separate code points for the symbol variants for use in mathematical context. However, they are sometimes also used in place of the corresponding letter characters in Unicode-encoded text.
The variant shapes are not given separate code-points in the LGR font encoding.
In mathematical mode, TeX supports the distinction between θ|ϑ, π|ϖ, φ|ϕ, ρ|ϱ, and ε|ϵ with \var<lettername> macros. However, the mapping of letter/symbol in Unicode to “normal”/variant in TeX is inconsistent and variant macros for ϴ ϐ, and ϰ are not available without additional packages (e.g. amssymb provides ϰ as \varkappa).
greek-fontenc provides \text<lettername>symbol LICR macros for these characters:
An alternative, more complete set of short mnemonic character names is the XML Entity Definitions for Characters W3C Recommendation from 01 April 2010.
For glyph names of the LGR encoding see, e.g., CB.enc by Apostolos Syropoulos and xl-lgr.enc from the libertine (legacy) package. lgr.cmap provides a mapping to Unicode characters.
A full set of \text* symbol macros is defined in ucsencs.def from the ucs package.
Aliases from puenc.def ensure that the hyperref package can convert Greek text in “LICR encoding” to a PDF-string (utf-8 encoded input is used as-is).
LaTeX3 Project Team, LaTeX2ε font selection, 2005. http://mirror.ctan.org/macros/latex/doc/fntguide.pdf
Frank Mittelbach, Robin Fairbairns, Werner Lemberg, LaTeX3 Project Team, LaTeX font encodings, 2006. http://mirror.ctan.org/macros/latex/doc/encguide.pdf
Apostolos Syropoulos, Writing Greek with the greek option of the babel package, 1997. http://mirrors.ctan.org/language/babel/contrib/greek/usage.pdf
Claudio Beccari, The CB Greek fonts, Εὔτυπον, τεῦχος № 21, 2008. http://www.eutypon.gr/eutypon/pdf/e2008-21/e21-a01.pdf
Claudio Beccari, teubner.sty An extension to the greek option of the babel package, 2011. http://mirror.ctan.org/macros/latex/contrib/teubner/teubner-doc.pdf
Werner Lemberg, Unicode support for the Greek LGR encoding Εὔτυπον, τεῦχος № 20, 2008. http://www.eutypon.gr/eutypon/pdf/e2008-20/e20-a03.pdf |
10:40 AM
I did not find any posts with \let in the title: data.stackexchange.com/math/query/972169/…\let
Only two where it was removed from the title at some point, in both cases a typo for \left: math.stackexchange.com/posts/399817/revisions math.stackexchange.com/posts/2966236/revisions data.stackexchange.com/math/revision/1066795/1318398/…
11:03 AM
This link should work: data.stackexchange.com/math/query/1071909/… (to find the posts with \let\ )
6 hours later…
5:17 PM
It seems that some people use
\sp for span. A minor problem with searching for such posts is that \sp might be a prefix of some other macros, so I might search for something like this: data.stackexchange.com/math/revision/1071496/1323755/…
Anyway, search like this returns mostly posts/comments with \span or \spec rather than \sp: data.stackexchange.com/math/revision/1071496/1323755/… data.stackexchange.com/math/revision/1071502/1323761/…
@chuyenvien94 Note that the set $\{x; \forall \delta>0: \nu(B(x,\delta))>0\}$ is closed, i.e. $$\text{spt} \nu = \overline{\{x; \forall \delta: \nu(B(x,\delta))>0\}}.$$ Concerning $\int f \, d\mu = 1$: This condition ensures that $\spt \nu \neq \emptyset$. In fact, $\spt \nu \neq \emptyset \Leftrightarrow \int f \, d\mu >0$. — saz Jun 15 '14 at 7:07
Two comments with macro \sp actually defined in a post and used in a comment: data.stackexchange.com/math/revision/1066862/1318474/…
Well, because $E$ was computed from $S(\{T(b_1),\ldots,T(b_n)\})$ so as to be basis for the subspace they span, which subspace is $S(\operatorname{Im}(T))$. From $\sp(E)=S(\operatorname{Im}(T))$ one gets $S^{-1}(\sp(E))=\operatorname{Im}(T)$, while also $S^{-1}(\sp(E))=\sp(S^{-1}(E))$. — Marc van Leeuwen Feb 21 '15 at 13:27
If $T$ and $K$ are subspaces and $\sp$ is span, then $\sp(T) = T$ and $\sp(K) = K$. The hypotheses $T \subseteq \sp(K)$ and $K \subseteq \sp(T)$ are therefore equivalent to $\sp(T) \subseteq \sp(K)$ and $\sp(K) \subseteq \sp(T)$; it follows immediately that $\sp(K)= \sp(T)$. A more interesting question is whether $T \subseteq \operatorname{K}$ and $K \subseteq \sp(T)$ imply $\sp(K) = \sp(T)$ in the case where $T$ and $K$ are just sets of vectors. — Michael Albanese Dec 20 '14 at 20:19
I fixed some posts containing macro \span without being defined: math.stackexchange.com/posts/2956613/revisions math.stackexchange.com/posts/1616182/revisions math.stackexchange.com/posts/1137508/revisions math.stackexchange.com/posts/226800/revisions
SEDE: data.stackexchange.com/math/revision/972953/1205843/… data.stackexchange.com/math/revision/1066861/1318473/…
We can also search for comments with \span: data.stackexchange.com/math/query/556789/… data.stackexchange.com/math/revision/1066862/1318474/…
data.stackexchange.com/math/revision/1066863/1318475/… data.stackexchange.com/math/revision/1066865/1318477/…
This one is unusual - a comment on a question, but it relies on a macro defined in one of the answers:
Consider a two-period, single-good, $2$-agent model. Time begins in period $0$ in a known state (state $0$), but in period $1$ the world may find itself in one of two states $s = 1,2$, each with probability $\pi_s = 1/2$. Both consumers agree on these probabilities.
Each consumer has a constant relative risk aversion utility function with utility index $u^{i}(c_{s}^{i}) = \frac{(c^{i}_{s})^{1-\gamma^i}-1}{1-\gamma^i}$, $i = A,B$. Specifically, we assume that $\gamma^{A} > 0$ and $\gamma^{B} = 0$.
In addition to the consumption good there are $J=2$ financial securities with period $1$ payoffs $D$ (a $J\times S$ matrix, securities in rows and states in columns) given by $$D = \begin{pmatrix} 1 & 1\\ -1 & 4\\ \end{pmatrix}$$ Thus, the first security pays off $1$ in both states and the second security pays off $-1$ in state $1$ and $4$ in state $2$. There is a spot market for these securities in period $0$ at prices $p_j > 0$. Each security is in zero net supply, so that if one consumer is a buyer then the other consumer must be the seller. There are no short-sale constraints on the securities.
Consumers have no period-zero endowment and do not consume in period $0$, so that $e_0^{i} = c_{0}^{i} = 0$, $i = A,B$.
The consumers are not allowed to short-sell the consumption good, i.e., we impose the restriction that $c_{s}^{i}\geq 0$ for all $i = A,B$ and $s = 1,2$. There is no production in this economy; each agent is exogenously endowed with period $1$ endowments $e^{A} = (6,2)$ and $e^{B} = (6,6)$ across states $s = 1,2$.
a.) Carefully write out the second consumer's maximization problem including the relevant budget constraints. Let $\lambda_{s}^{B}$ be the Lagrange multipliers on the budget constraints. Carefully write out the first-order conditions for this consumer including the inequality and complementary slackness conditions.
b.) Solve for the contract curve for this economy.
c.) Verify that the Arrow security prices for this economy are $q_1 = q_2$ and that the optimal allocation of goods is $c^{A} = (4,4)$ and $c^{B} = (8,4)$.
d.) Find the consumers' security portfolios $\theta^{A}$ and $\theta^{B}$.
e.) Normalize the Arrow security prices so that $q_1 = q_2$ and find the security prices $p_1$ and $p_2$.
Solution a.) The utility maximization problem for $B$ is \begin{align*} \max_{c^{B},\theta}\mathbb{E}\left[u(c^{B})\right] = \frac{1}{2}(c_1^{B} - 1) + \frac{1}{2}(c_2^{B} - 1) \ \ \text{s.t.} \ \ &p_1\theta_{1}^{B} + p_2\theta_{2}^{B} = 0, s = 0\\ &c_1^{B}\leq 6 + \theta_{1}^{B} - \theta_{2}^{B}, s = 1\\ &c_{2}^{B}\leq 6 + \theta_{1}^{B} + 4\theta_{2}^{B}, s = 2; \ c_1^{B}\geq 0, c_2^{B}\geq 0 \end{align*} We have the Lagrangian $$\mathcal{L}(c^{B},\theta^{B},\lambda^{B}) = \frac{1}{2}(c_1^{B} - 1) + \frac{1}{2}(c_2^{B} - 1) + \lambda_{0}^{B}(0 - p_1 \theta_1^{B} - p_2 \theta_2^{B}) + \lambda_{1}^{B}(6 + \theta_1^{B} - \theta_2^{B} - c_1^{B}) + \lambda_2^{B}(6 + \theta_{1}^{B} + 4\theta_2^{B} - c_2^{B})$$ The first-order conditions are: \begin{align*} &\lambda_{0}^{B}: p_1\theta_1^{B} + p_2 \theta_{2}^{B} = 0; \lambda_{0}^{B}\geq 0\\ &\lambda_1^{B}: c_1^{B}\leq 6 + \theta_1^{B} - \theta_2^{B};\lambda_1^{B}\geq 0, \lambda_1^{B}(6+\theta_1^{B} - \theta_2^{B} - c_1^{B}) = 0\\ &\lambda_2^{B}: c_2^{B} \leq 6+\theta_{1}^{B} + 4\theta_2^{B}; \lambda_2^{B}\geq 0, \lambda_2^{B}(6+\theta_{1}^{B} + 4\theta_2^{B} - c_2^{B}) = 0\\ &c_1^{B}: \frac{1}{2}-\lambda_1^{B}\leq 0; c_1^{B}\geq 0, c_1^{B}\left(\frac{1}{2} - \lambda_{1}^{B}\right) = 0\\ &c_2^{B}: \frac{1}{2}-\lambda_2^{B}\leq 0; c_2^{B}\geq 0, c_2^{B}\left(\frac{1}{2} - \lambda_{2}^{B}\right) = 0\\ &\theta_1^{B}: - \lambda_0^{B}p_1 + \lambda_1^{B} + \lambda_2^{B} = 0\\ &\theta_2^{B}: - \lambda_0^{B}p_2 - \lambda_1^{B} + 4\lambda_2^{B} = 0 \end{align*}
Solution b.) Market clearing requires $$c_1^{A} + c_1^{B} = 12 \ \ \text{and} \ \ c_2^{A} + c_2^{B} = 8$$ We want to find the set of Pareto-optimal points (the contract curve), i.e. the allocations where $$MRS^{A} = MRS^{B}$$ Note that $u^{i} = \pi_1 u(c_1^{i}) + \pi_2 u(c_2^{i})$. Thus, $$MRS^{i} = \frac{\pi_1 u^{i \prime}(c_1^{i})}{\pi_2 u^{i\prime}(c_2^{i})}$$ $$MRS^{A} = \left(\frac{c_1^{A}}{c_2^{A}}\right)^{-\gamma_A} = 1 = MRS^{B}$$ so $c_1^{A} = c_2^{A}$ is the equation of the contract curve.
Attempted solution c.) This is where I am stuck. I know that we have to solve the planner's problem $$\mathcal{L} = \sum_{i=A,B} \eta^{i}u^{i}(c_1^{i}) + \lambda_1\left(\sum_{i}e_1^{i} - \sum_{i}c_1^{i}\right) + \lambda_2\left(\sum_{i}e_2^{i} - \sum_{i}c_2^{i}\right)$$ but I am not sure how to proceed any further. Any suggestions would be greatly appreciated. This is my first time solving a planner's problem, and as a mathematics grad student I do not have much exposure to economics.
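Not the planner's problem itself, but a quick numerical consistency check of parts (c)-(e) is easy (a sketch only; the normalization $q_1 = q_2 = 1/2$ is an assumption made here for concreteness). Given the claimed allocation $c^A = (4,4)$, agent $A$'s portfolio must solve the state-by-state budget identities, a $2\times 2$ linear system:

```python
# State-by-state payoff matrix: row s, column j = payoff of security j in
# state s (security 1 pays 1 in both states; security 2 pays -1 and 4).
D = [[1.0, -1.0],
     [1.0,  4.0]]

e_A = (6.0, 2.0)     # agent A's state-1 and state-2 endowments
c_A = (4.0, 4.0)     # claimed optimal allocation for A (part c)

# Solve D @ theta = c_A - e_A by Cramer's rule.
b1, b2 = c_A[0] - e_A[0], c_A[1] - e_A[1]          # (-2, 2)
det = D[0][0] * D[1][1] - D[0][1] * D[1][0]        # 5
theta1 = (b1 * D[1][1] - D[0][1] * b2) / det       # -1.2
theta2 = (D[0][0] * b2 - b1 * D[1][0]) / det       #  0.8

# Security prices from the Arrow prices q_s (assumed q1 = q2 = 1/2):
# p_j = sum_s q_s * D[s][j].
q1 = q2 = 0.5
p1 = q1 * D[0][0] + q2 * D[1][0]                   # 1.0
p2 = q1 * D[0][1] + q2 * D[1][1]                   # 1.5

budget = p1 * theta1 + p2 * theta2                 # 0: zero net investment
print(theta1, theta2, p1, p2, budget)
```

By zero net supply, $\theta^B = -\theta^A$, and the period-$0$ budget $p_1\theta_1 + p_2\theta_2 = 0$ indeed closes, which is exactly the consistency the exercise asks you to verify.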
Combinatorial conditions for linear systems of projective hypersurfaces
Miguel Marco
Santiago de Compostela, October 8th 2016
Before starting
Ask questions!
Survey of past and current work
Partly joint with J.I. Cogolludo
Two points determine a line.
\(a_{0,0}z+a_{1,0} x + a_{0,1}y\)
One point determines a pencil of lines
\(a_{0,0}z+a_{1,0} x + a_{0,1}y\)
Five (generic) points determine a conic.
\(a_{0,0}z^2+a_{1,0} xz + a_{0,1} yz +a_{2,0} x^ 2+a_{1,1}x y +a_{0,2}y^2\)
Four points determine a pencil of conics
\(a_{0,0}z^2+a_{1,0} xz + a_{0,1} yz +a_{2,0} x^ 2+a_{1,1}x y +a_{0,2}y^2\)
Four points determine a pencil of conics
\(a_{0,0}z^2+a_{1,0} xz + a_{0,1} yz +a_{2,0} x^ 2+a_{1,1}x y +a_{0,2}y^2\)
Nine points determine a cubic
\(a_{0,0}z^3+a_{1,0} x z^2+ a_{0,1} yz^2 +a_{2,0} x^ 2z+a_{1,1}x yz +a_{0,2}y^2z+a_{3,0}x^3+a_{2,1}x^2y+a_{1,2}xy^2+a_{0,3}y^3\)
Eight points determine a pencil of cubics
\(a_{0,0}z^3+a_{1,0} x z^2+ a_{0,1} yz^2 +a_{2,0} x^ 2z+a_{1,1}x yz +a_{0,2}y^2z+a_{3,0}x^3+a_{2,1}x^2y+a_{1,2}xy^2+a_{0,3}y^3\)
Eight points determine a pencil of cubics?
\(a_{0,0}z^3+a_{1,0} x z^2+ a_{0,1} yz^2 +a_{2,0} x^ 2z+a_{1,1}x yz +a_{0,2}y^2z+a_{3,0}x^3+a_{2,1}x^2y+a_{1,2}xy^2+a_{0,3}y^3\)
Theorem (Cayley-Bacharach): If a cubic passes through eight intersection points of two other cubics, it also passes through the ninth.
Question:
When is there a pencil of curves?:
Two curves always generate a pencil
More than three are in a pencil iff every three of them are.
So the question really is:
What conditions should three curves satisfy to be in a pencil?
Not as trivial as it might seem:
First result
Noether fundamental theorem (\(af+bg\)):
Let \(f,g,h\) be three homogeneous polynomials in \(\mathbb{C}[x,y,z]\), with \(f\) and \(g\) coprime. Then \(h\in (f,g)\) if and only if \(h\in (f,g)_p\) \(\forall p\in \mathbb{P}^2\), where \((f,g)_p\) is the ideal generated by \(f\) and \(g\) in the localization of \(\mathbb{C}[x,y,z]\) at the maximal ideal corresponding to \(p\).
Remarks about the Fundamentalsatz
The condition is trivially satisfied for the non-intersection points.
It reduces the problem to a local study at the base points.
Definition: Given a line arrangement \(\mathcal{L}=\{l_0,\ldots, l_n\}\), a partition \(\mathcal{L} = \mathcal{L}_0 \coprod \mathcal{L}_1 \coprod \cdots \coprod\mathcal{L}_m\) and an exponent function \(d:\mathcal{L}\to \mathbb{Z}^+\) are said to form a combinatorial pencil if at each intersection point one of these conditions holds:
The point is "monochromatic" (all the lines lie in the same \(\mathcal{L}_i\))
All the components have the same multiplicity at the point (\(\sum_{l\in\mathcal{L}_i}d(l) = \sum_{l\in\mathcal{L}_j}d(l)\) for all \(i,j\))
Theorem (Falk-Yuzvinsky, M.): The previous method gives all (primitive) combinatorial pencils.
Theorem (Falk-Yuzvinsky): A line arrangement is a union of three or more curves in a pencil if and only if it admits a combinatorial pencil.
Example:
Example:
Generalization: arbitrary curves
Different types of singularities:
Algebraic invariants of a singularity
Definition: Given a local branch \(f=0\) at the origin, its multiplicity is defined as the maximum \(p\) for which \(f\in \mathfrak{m}^p\)
Definition: Given two local branches \(f=0,g=0\) at the origin, their intersection multiplicity is defined as \[\dim\frac{\mathcal{O}_p}{(f,g)}\]
Tool for understanding singularities: blowup
Locally:
Consider a point \(p\).
The set of lines through \(p\) form a \(\mathbb{P}^1\).
We have a map \(\pi:\mathbb{L}\times\mathbb{P}^1\mapsto \mathbb{A}^2\)
\(\pi\) is bijective outside of \(p\)
\(\pi^{-1}(p)=\mathbb{P}^1\)
Properties of the blowup
The blowup "smoothens" the singularity.
Theorem: Every plane curve can be resolved to a normal crossing divisor by a finite number of blowups at points.
Lemma: Let \(f,g\) be local branches at a point \(p\). Let \(\bar{f},\bar{g}\) be their strict transforms. Then \[ (\bar{f},\bar{g}) = (f,g) - m_p(f)\cdot m_p(g)\]
Generalization of the incidence lattice
After resolving to normal crossings, we have:
The strict transforms of the original components.
The exceptional divisors that appeared during the blowup process
An incidence between them given by the multiplicity
Generalization of the process
Choose base exceptional divisors
Construct the incidence matrix
Proceed as before
Example
\[(- x^{3} + y^{3} - y^{2}) \cdot x \cdot (y - 1) \cdot y\]
Definition: Given a plane curve with irreducible components \(\mathcal{C}=\{C_1,\ldots, C_m\}\), a combinatorial pencil is a partition \(\mathcal{C}=\mathcal{C}_1\coprod\mathcal{C}_2\coprod \cdots \coprod \mathcal{C}_n\) and an exponent function \(d:\mathcal{C}\to \mathbb{Z}^+\) such that at any singular point \(p\) of the curve, one of the following two options holds:
All components that go through \(p\) lie in the same \(\mathcal{C}_i\)
For each local branch \(b\) at \(p\), \(b\in \mathcal{C}_i\), and every \(j,k\neq i\), the following equality holds:
Theorem (Cogolludo, M.): A curve admits a combinatorial pencil if and only if it is the union of three or more fibers of a pencil of curves. Moreover, the fibers are given by the partition and exponents of the combinatorial pencil.
Higher dimensions
Question: Given \(n+1\) hypersurfaces in \(\mathbb{C}\mathbb{P}^n\), when do they belong to a linear system of dimension \(n-1\)?
Still open
Partial answer
Definition (Libgober): A hyperplane arrangement is said to be an isolated non-normal crossing (INNC) if the intersection of \(i\) hyperplanes has codimension \(i\) except maybe at isolated points.
Theorem (M.): An INNC arrangement in \(\mathbb{C}\mathbb{P}^n\) where at most \(n+1\) hyperplanes meet at a point is a union of \(n+1\) fibers of a linear system of dimension \(n\) if and only if it admits a partition such that every INNC point is the intersection of one hyperplane in each element of the partition.
When a change in price results in an infinitely large response in quantity demanded, demand is perfectly elastic. The perfectly elastic demand curve is horizontal. At price P, consumers will buy a quantity Q. If there is an increase in price, quantity demanded drops to zero due to the existence of perfect substitutes. However, when price drops, how will the PED remain infinity? Wouldn't consumers demand as much, if not more, of the product?
In the title you ask about perfectly inelastic demand; in the text it is about perfectly elastic demand. I guess you want to know about the latter. So you can skip one of the paragraphs.
Let us define PED as the absolute value $$\varepsilon_p = \left| \frac{d Q / Q}{dP / P} \right|,$$ (otherwise it is a non-positive value; this is just a convention).
In general, both perfectly elastic and perfectly inelastic demand are defined in the limit. As always, the math gets a little fishy when $\infty$ is involved (or when you want to divide by zero). To stay sane, it helps to think of an "infinite quantity" as it is used in economics as "very, very, very large, approaching infinity" or as "as much as possible". Similarly, think of a good with an "infinite price" as a good that "nobody can buy".
With perfectly inelastic demand, price changes do not affect the demanded quantity $q$. As an example, think of a life-saving drug. How must the price change so that you demand $q'<q$ instead of $q$? Well, you always demand $q$ - it doesn't happen. How can we express "it doesn't happen"? We can say that you would only consume less than $q$ if the price became some number larger than any real number. Now think about a limit approaching this benchmark case of a vertical demand curve, some sequence of very steeply declining lines. Then for any $dQ/Q<0$ it must be that $dP/P$ approaches infinity. Similarly, for a demanded quantity $q'>q$, i.e., $dQ/Q>0$, "the good has to be thrown at you". For a meaningful sequence approaching the perfectly inelastic demand, you have to ignore the restriction that prices are positive -- it must be that $dP/P \rightarrow - \infty$. Hence, $\varepsilon_p \rightarrow 0$.

Alternatively, don't think in limits, but ask the reverse question. Instead of "how must the price change so that my demanded quantity changes", ask "how do I change my demanded quantity if the price changes"? The answer is: not at all. $dQ/Q = 0$ for any $dP/P$, making the PED always zero.
With perfectly elastic demand, an arbitrarily large quantity can be sold at some market price $p$. As an example, think of a \$100 bill or a (hypothetical) perfectly competitive market. "Arbitrarily large" does not mean infinity, which is not a real number. Again ask: how does the price have to change so that my demanded quantity changes? By definition, any quantity is demanded at price $P$. That is, the price does not have to change at all. Hence, $dP/P=0$ for any $dQ/Q$. Then $\varepsilon_p$ is not well-defined (you divide by zero), but you can think of a sequence of very flat declining lines that approach the horizontal demand such that $dP/P\rightarrow 0$ and $\varepsilon_p \rightarrow \infty$.

You can again also ask the reverse question: how does demanded quantity change when the price changes? For this just consider the limit again (and ignore the restriction to positive quantities). If a \$100 bill is offered at \$99.99, demanded quantity goes to infinity. If a \$100 bill is offered at \$100.01, demanded quantity goes to minus infinity -- you would want to sell your bills. That is, $dQ/Q \rightarrow \mbox{sign}(dP/P)\,(-\infty)$ and $\varepsilon_p \rightarrow \infty$.
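To make the two limiting cases concrete, here is a small sketch (all numbers hypothetical): a family of linear demand curves pivoting through a fixed point $(Q_0, P_0)$, so that flattening the slope sends the point elasticity toward infinity and steepening it sends the elasticity toward zero.

```python
# A family of linear demand curves P = P0 + m*(Q - Q0), all passing through
# the same (hypothetical) point (Q0, P0); m < 0 is the slope.
P0, Q0 = 4.0, 8.0

def ped(m):
    """Point-price elasticity |dQ/dP * P/Q| at (Q0, P0) for slope m."""
    return abs((1.0 / m) * P0 / Q0)

for m in (-10.0, -1.0, -0.1, -0.01):
    print(f"slope {m:>7}: PED = {ped(m):g}")
# Flatter curves (m -> 0) give ever larger PED (perfectly elastic limit);
# steeper curves (m -> -infinity) drive PED toward 0 (perfectly inelastic).
```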
Cardinality of Infinite Sigma-Algebra is at Least Cardinality of Continuum Theorem $\operatorname{card}\left({\mathcal M }\right) \ge \mathfrak c$ Corollary Then $\mathcal M$ is uncountable. Proof
We first show that $X$ is infinite.
By the definition of a $\sigma$-algebra, $\mathcal M$ is a subset of $\mathcal P(X)$.
Were $X$ finite, then by Cardinality of Power Set of Finite Set, the cardinality of $\mathcal M$ would be at most $2^{\operatorname{card}(X)}$, which is finite. This contradicts the assumption that $\mathcal M$ is infinite, so $X$ must be infinite.
By the definition of $\sigma$-algebra, $X \in \mathcal M$.
Also, by Sigma-Algebra Contains Empty Set, $\varnothing \in \mathcal M$.
Construct a countable collection of pairwise disjoint, nonempty sets $\left\langle {F_1, F_2, F_3, \ldots } \right\rangle$ in $\mathcal M$ as follows.

For any $E \in \mathcal M$, the traces $\left\{ {M \cap E: M \in \mathcal M} \right\}$ and $\left\{ {M \cap E^c: M \in \mathcal M} \right\}$ are $\sigma$-algebras on $E$ and $E^c$ respectively, and every $M \in \mathcal M$ is recovered from its two traces as $M = \left({M \cap E}\right) \cup \left({M \cap E^c}\right)$.

Hence, as $\mathcal M$ is infinite, at least one of the two trace $\sigma$-algebras is infinite.

Choose any $E \in \mathcal M \setminus \left\{ {\varnothing, X} \right\}$ (such $E$ exists because $\mathcal M$ is infinite), and let $F_1$ be whichever of $E$, $E^c$ has an infinite trace on its complement.

Then $F_1$ is nonempty and the trace of $\mathcal M$ on $X \setminus F_1$ is infinite.

Every set in that trace is of the form $M \cap \left({X \setminus F_1}\right)$ with $M \in \mathcal M$, and so belongs to $\mathcal M$ itself.

Repeating the argument inside $X \setminus F_1$, and so on, the axiom of (dependent) choice yields sets $F_1, F_2, F_3, \ldots \in \mathcal M$ with each $F_{n+1} \subseteq X \setminus \left({F_1 \cup \cdots \cup F_n}\right)$.

Thus the sets in $\left\langle {F_i} \right\rangle$ are pairwise disjoint and nonempty.

(Merely choosing the $F_n$ distinct would not suffice: distinct sets need not be disjoint, and disjointness is what the injection below uses.)
By the definition of a $\sigma$-algebra, $\displaystyle \bigsqcup_{i \mathop \in \N} F_i$ is measurable.
Since the sets $F_i$ are pairwise disjoint and nonempty, choosing some $x_i \in F_i$ for each $i$ gives an injection $\iota: \N \hookrightarrow \bigsqcup_{i \mathop \in \N} F_i$, $\iota \left({i}\right) = x_i$.
Define:
$\iota^*: \mathcal P \left({\N}\right) \to \mathcal M$: $\iota^*\left({N}\right) = \bigsqcup_{i \mathop \in N} F_i$
That is, for every $N \subseteq \N$, $\iota^*\left({N}\right)$ corresponds to a way to select a countable union of the sets in $\langle F_i \rangle$.
Because distinct $F_i, F_j$ are disjoint and each $F_i$ is nonempty, two distinct index sets $N \ne N'$ yield different unions: any $i$ in the symmetric difference of $N$ and $N'$ contributes the points of $F_i$ to exactly one of $\iota^*\left({N}\right)$, $\iota^*\left({N'}\right)$. Thus $S \mapsto \bigsqcup_{i \mathop \in S} F_i$ is injective.
Thus $\iota^*$ is an injection into $\mathcal M$.
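As a finite sanity check of this injectivity step (a toy illustration only; the theorem itself concerns infinite families), one can verify that for pairwise disjoint nonempty sets, distinct index sets always produce distinct unions:

```python
from itertools import combinations

# Three pairwise disjoint, nonempty sets standing in for F_1, F_2, F_3.
F = [frozenset({1}), frozenset({2, 3}), frozenset({4})]

# iota*(N) = union over i in N of F_i, for every index set N.
unions = []
for r in range(len(F) + 1):
    for idx in combinations(range(len(F)), r):
        unions.append(frozenset().union(*(F[i] for i in idx)))

# Disjointness and nonemptiness make iota* injective: all 2^3 unions differ.
print(len(set(unions)))
```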
Then the cardinality of $\mathcal M$ is at least that of $\mathcal P \left({\N}\right)$.
From Continuum equals Cardinality of Power Set of Naturals, $\R \sim \mathcal P \left({\N}\right)$.
Thus $\mathcal M$ is uncountable and $\operatorname{card}\left({\mathcal M}\right) \ge \mathfrak c$
$\blacksquare$ Proof of Corollary
Follows from the main result, as the real numbers are uncountable.
$\blacksquare$
Axiom of Choice
This theorem depends on the Axiom of Choice.
Most mathematicians are convinced of its truth and insist that it should nowadays be generally accepted.
However, others consider its implications so counter-intuitive and nonsensical that they adopt the philosophical position that it cannot be true.
Sources 1984: Gerald B. Folland: Real Analysis: Modern Techniques and their Applications: Exercise $1.3$ |
This is part of an old qual problem at my school.
Assume $\{f_n\}$ is a sequence of nonnegative continuous functions on $[0,1]$ such that $\lim_{n\to\infty}\int_0^1 f_n(x)dx=0$. Is it necessarily true that there are points $x_0\in[0,1]$ such that $\lim_{n\to\infty}f_n(x_0)=0$?
I think that there should be some $x_0$. My intuition is that if the integrals converge to $0$, then the $f_n$ should start to be close to zero in most places in $[0,1]$. If $\lim_{n\to\infty}f_n(x_0)\neq 0$ for every $x_0$, then for each fixed $x_0$ the sequence $\{f_n(x_0)\}$ has to have terms bounded away from zero at arbitrarily large indices. Since there are only countably many functions, I don't think it's possible to do this while still having $\lim_{n\to\infty}\int_0^1 f_n(x)dx=0$.
Is there a proof or counterexample to the question? |
Production of Σ(1385)± and Ξ(1530)0 in proton–proton collisions at √s = 7 TeV
(Springer, 2015-01-10)
The production of the strange and double-strange baryon resonances (Σ(1385)±, Ξ(1530)0) has been measured at mid-rapidity (|y| < 0.5) in proton–proton collisions at √s = 7 TeV with the ALICE detector at the LHC. Transverse ...
Forward-backward multiplicity correlations in pp collisions at √s = 0.9, 2.76 and 7 TeV
(Springer, 2015-05-20)
The strength of forward-backward (FB) multiplicity correlations is measured by the ALICE detector in proton-proton (pp) collisions at √s = 0.9, 2.76 and 7 TeV. The measurement is performed in the central pseudorapidity ...
Inclusive photon production at forward rapidities in proton-proton collisions at $\sqrt{s}$ = 0.9, 2.76 and 7 TeV
(Springer Berlin Heidelberg, 2015-04-09)
The multiplicity and pseudorapidity distributions of inclusive photons have been measured at forward rapidities ($2.3 < \eta < 3.9$) in proton-proton collisions at three center-of-mass energies, $\sqrt{s}=0.9$, 2.76 and 7 ...
Rapidity and transverse-momentum dependence of the inclusive J/$\mathbf{\psi}$ nuclear modification factor in p-Pb collisions at $\mathbf{\sqrt{\textit{s}_{NN}}}=5.02$ TeV
(Springer, 2015-06)
We have studied the transverse-momentum ($p_{\rm T}$) dependence of the inclusive J/$\psi$ production in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV, in three center-of-mass rapidity ($y_{\rm cms}$) regions, down to ...
Measurement of pion, kaon and proton production in proton–proton collisions at √s = 7 TeV
(Springer, 2015-05-27)
The measurement of primary π±, K±, p and $\bar{p}$ production at mid-rapidity (|y| < 0.5) in proton–proton collisions at $\sqrt{s}$ = 7 TeV performed with A Large Ion Collider Experiment (ALICE) at the Large Hadron Collider (LHC) is reported. ...
Two-pion femtoscopy in p-Pb collisions at $\sqrt{s_{\rm NN}}=5.02$ TeV
(American Physical Society, 2015-03)
We report the results of the femtoscopic analysis of pairs of identical pions measured in p-Pb collisions at $\sqrt{s_{\mathrm{NN}}}=5.02$ TeV. Femtoscopic radii are determined as a function of event multiplicity and pair ...
Measurement of charm and beauty production at central rapidity versus charged-particle multiplicity in proton-proton collisions at $\sqrt{s}$ = 7 TeV
(Springer, 2015-09)
Prompt D meson and non-prompt J/$\psi$ yields are studied as a function of the multiplicity of charged particles produced in inelastic proton-proton collisions at a centre-of-mass energy of $\sqrt{s}=7$ TeV. The results ...
Charged jet cross sections and properties in proton-proton collisions at $\sqrt{s}=7$ TeV
(American Physical Society, 2015-06)
The differential charged jet cross sections, jet fragmentation distributions, and jet shapes are measured in minimum bias proton-proton collisions at centre-of-mass energy $\sqrt{s}=7$ TeV using the ALICE detector at the ...
Centrality dependence of high-$p_{\rm T}$ D meson suppression in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV
(Springer, 2015-11)
The nuclear modification factor, $R_{\rm AA}$, of the prompt charmed mesons ${\rm D^0}$, ${\rm D^+}$ and ${\rm D^{*+}}$, and their antiparticles, was measured with the ALICE detector in Pb-Pb collisions at a centre-of-mass ...
K*(892)$^0$ and $\Phi$(1020) production in Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV
(American Physical Society, 2015-02)
The yields of the K*(892)$^0$ and $\Phi$(1020) resonances are measured in Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV through their hadronic decays using the ALICE detector. The measurements are performed in multiple ... |
@Mathphile I found no prime of the form $$n^{n+1}+(n+1)^{n+2}$$ for $n>392$ yet, nor a reason why the expression cannot be prime for odd $n$, although there are far more even cases without a known factor than odd cases.
@TheSimpliFire That's what I'm thinking about, I had some "vague feeling" that there must be some elementary proof, so I decided to find it, and then I found it, it is really "too elementary", but I like surprises, if they're good.
It is in fact difficult, I did not understand all the details either. But the ECM method is analogous to the p-1 method, which works well when there is a factor p such that p-1 is smooth (has only small prime factors)
Brocard's problem is a problem in mathematics that asks to find integer values of n and m for which $n!+1=m^2$, where n! is the factorial. It was posed by Henri Brocard in a pair of articles in 1876 and 1885, and independently in 1913 by Srinivasa Ramanujan. Brown numbers: pairs of the numbers (n, m) that solve Brocard's problem are called Brown numbers. There are only three known pairs of Brown numbers: (4,5), (5,11...
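The known Brown pairs are easy to recover with a quick brute-force search (a sketch; `math.isqrt` keeps the square test exact even for big integers):

```python
import math

def brown_pairs(limit):
    """All (n, m) with n! + 1 = m^2 and 1 <= n <= limit."""
    pairs, fact = [], 1
    for n in range(1, limit + 1):
        fact *= n                       # running factorial, exact big int
        m = math.isqrt(fact + 1)
        if m * m == fact + 1:
            pairs.append((n, m))
    return pairs

print(brown_pairs(100))   # the three known Brown pairs
```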
$\textbf{Corollary.}$ No solutions to Brocard's problem (with $n>10$) occur when $n$ that satisfies either \begin{equation}n!=[2\cdot 5^{2^k}-1\pmod{10^k}]^2-1\end{equation} or \begin{equation}n!=[2\cdot 16^{5^k}-1\pmod{10^k}]^2-1\end{equation} for a positive integer $k$. These are the OEIS sequences A224473 and A224474.
Proof: First, note that since $(10^k\pm1)^2-1\equiv((-1)^k\pm1)^2-1\equiv1\pm2(-1)^k\not\equiv0\pmod{11}$, $m\ne 10^k\pm1$ for $n>10$. If $k$ denotes the number of trailing zeros of $n!$, Legendre's formula implies that \begin{equation}k=\min\left\{\sum_{i=1}^\infty\left\lfloor\frac n{2^i}\right\rfloor,\sum_{i=1}^\infty\left\lfloor\frac n{5^i}\right\rfloor\right\}=\sum_{i=1}^\infty\left\lfloor\frac n{5^i}\right\rfloor\end{equation} where $\lfloor\cdot\rfloor$ denotes the floor function.
The upper limit can be replaced by $\lfloor\log_5n\rfloor$ since for $i>\lfloor\log_5n\rfloor$, $\left\lfloor\frac n{5^i}\right\rfloor=0$. An upper bound can be found using geometric series and the fact that $\lfloor x\rfloor\le x$: \begin{equation}k=\sum_{i=1}^{\lfloor\log_5n\rfloor}\left\lfloor\frac n{5^i}\right\rfloor\le\sum_{i=1}^{\lfloor\log_5n\rfloor}\frac n{5^i}=\frac n4\left(1-\frac1{5^{\lfloor\log_5n\rfloor}}\right)<\frac n4.\end{equation}
Thus $n!$ has $k$ zeroes for some $n\in(4k,\infty)$. Since $m=2\cdot5^{2^k}-1\pmod{10^k}$ and $2\cdot16^{5^k}-1\pmod{10^k}$ have at most $k$ digits, $m^2-1$ has at most $2k$ digits under the conditions in the Corollary. The Corollary therefore follows if $n!$ has more than $2k$ digits for $n>10$. From equation $(4)$, $n!$ has at least the same number of digits as $(4k)!$. Stirling's formula implies that \begin{equation}(4k)!>\frac{\sqrt{2\pi}\left(4k\right)^{4k+\frac{1}{2}}}{e^{4k}}\end{equation}
Since the number of digits of an integer $t$ is $1+\lfloor\log t\rfloor$ where $\log$ denotes the logarithm in base $10$, the number of digits of $n!$ is at least \begin{equation}1+\left\lfloor\log\left(\frac{\sqrt{2\pi}\left(4k\right)^{4k+\frac{1}{2}}}{e^{4k}}\right)\right\rfloor\ge\log\left(\frac{\sqrt{2\pi}\left(4k\right)^{4k+\frac{1}{2}}}{e^{4k}}\right).\end{equation}
Therefore it suffices to show that for $k\ge2$ (since $n>10$ and $k<n/4$), \begin{equation}\log\left(\frac{\sqrt{2\pi}\left(4k\right)^{4k+\frac{1}{2}}}{e^{4k}}\right)>2k\iff8\pi k\left(\frac{4k}e\right)^{8k}>10^{4k}\end{equation} which holds if and only if \begin{equation}\left(\frac{10}{\left(\frac{4k}e\right)}\right)^{4k}<8\pi k\iff k^2(8\pi k)^{\frac1{4k}}>\frac58e^2.\end{equation}
Now consider the function $f(x)=x^2(8\pi x)^{\frac1{4x}}$ over the domain $\Bbb R^+$, which is clearly positive there. Then after considerable algebra it is found that \begin{align*}f'(x)&=2x(8\pi x)^{\frac1{4x}}+\frac14(8\pi x)^{\frac1{4x}}(1-\ln(8\pi x))\\\implies f'(x)&=\frac{2f(x)}{x^2}\left(x-\frac18\ln(8\pi x)\right)>0\end{align*} for $x>0$ as $\min\{x-\frac18\ln(8\pi x)\}>0$ in the domain.
Thus $f$ is monotonically increasing in $(0,\infty)$, and since $2^2(8\pi\cdot2)^{\frac18}>\frac58e^2$, the inequality in equation $(8)$ holds. This means that the number of digits of $n!$ exceeds $2k$, proving the Corollary. $\square$
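The inequality in equation $(8)$ is also easy to sanity-check numerically (a sketch; since $f$ was shown to be increasing, the smallest $k$ is the binding case):

```python
import math

# Check k^2 * (8*pi*k)**(1/(4k)) > (5/8) * e^2 for a range of k >= 2.
rhs = (5.0 / 8.0) * math.e ** 2

def f(k):
    return k * k * (8.0 * math.pi * k) ** (1.0 / (4.0 * k))

assert all(f(k) > rhs for k in range(2, 501))
print(f(2), rhs)   # the k = 2 case is the tightest one
```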
We get $n^n+3\equiv 0\pmod 4$ for odd $n$, so we can see from here that it is even (or, we could have used @TheSimpliFire's one-or-two-step method to derive this without any contradiction - which is better)
@TheSimpliFire Hey! with $4\pmod {10}$ and $0\pmod 4$ then this is the same as $10m_1+4$ and $4m_2$. If we set them equal to each other, we have that $5m_1=2(m_2-m_1)$ which means $m_1$ is even. We get $4\pmod {20}$ now :P
Yet again a conjecture! Motivated by Catalan's conjecture and a recent question of mine, I conjecture that for distinct, positive integers $a,b$, the only solution to this equation $$a^b-b^a=a+b\tag1$$ is $(a,b)=(2,5).$ It is of anticipation that there will be much fewer solutions for incr...
If $$f(x) = \int_x^2{\frac{dy}{\sqrt{1+y^3}}}$$
then find the value of $$\int_0^2{xf(x)}dx$$
I have no idea how to solve this question. Please help.
The integration domain can be equivalently written as
$$ \Omega = \{(x,y): x<y<2 ~~\mbox{and}~~~ 0 < x < 2 \} $$ or
$$ \Omega = \{(x,y): 0<x<y ~~\mbox{and}~~~ 0 < y < 2 \} $$
Such that
\begin{eqnarray} \int_0^2{\rm d}x\int_{x}^2{\rm d}y ~\frac{x}{\sqrt{1 + y^3}} &=& \int_0^2{\rm d}y\int_{0}^y{\rm d}x ~\frac{x}{\sqrt{1 + y^3}} \\ &=& \int_0^2{\rm d}y ~\frac{y^2}{2}\frac{1}{\sqrt{1 + y^3}} \\ &=& \frac{1}{2} \int_0^2{\rm d}y \frac{y^2}{\sqrt{1 + y^3}} \\ &=&\frac{1}{2}\times\frac{4}{3} = \frac{2}{3} \end{eqnarray}
Using integration by parts
$$\int_0^2xf(x)dx=\frac{1}{2}\int_{x=0}^{x=2}f(x)d(x^2)$$ $$=\frac{1}{2}x^2f(x)\bigg|_0^2-\frac{1}{2}\int_0^2x^2f'(x)dx$$ $$=2f(2)+\frac{1}{2}\int_0^2x^2\frac{1}{\sqrt{1+x^3}}dx$$ $$=0+\frac{1}{3}\sqrt{1+x^3}\bigg|_0^2$$ $$=\frac{1}{3}(3-1)$$ $$=\frac{2}{3}$$ |
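Both derivations reduce the problem to a single integral whose antiderivative is $\frac13\sqrt{1+y^3}$; as a numerical sanity check (a sketch using composite Simpson's rule), the value is indeed $2/3$:

```python
import math

# Both solutions reduce to (1/2) * Integral_0^2 y^2 / sqrt(1 + y^3) dy.
def g(y):
    return 0.5 * y * y / math.sqrt(1.0 + y ** 3)

def simpson(f, a, b, n=1000):
    """Composite Simpson's rule; n must be even."""
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4.0 * sum(f(a + i * h) for i in range(1, n, 2))
    s += 2.0 * sum(f(a + i * h) for i in range(2, n, 2))
    return s * h / 3.0

print(simpson(g, 0.0, 2.0))   # close to 2/3
```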
It seems all the answers so far approaching this from a theoretical perspective are approaching this in terms of exact answers, but we can say a lot about when good approximations are possible too. Of course, some answers have already provided silly ways to do this exactly, so approximations may seem unnecessary, but it provides a nice avenue for some basic transcendental number theory.
It is an unsolved problem, which virtually everyone believes to be true, that $\frac e \pi$ is irrational. Let's assume for the moment that this is true. Then it's a trivial corollary of a well-known theorem that if $\alpha$ is an irrational number, and $\beta$ is any real number, there exist arbitrarily good approximations $p + q \alpha \approx \beta$ with $p,q$ integers. That means, taking $\alpha = \frac e \pi$ and $\beta = \frac \phi \pi$, we can find integers $p,q$ such that $p e + q \pi$ approximates $\phi$ to any tolerance you desire.
One such approximation could be $357 \pi - 412 e = 1.61646... \approx 1.61803... = \phi$, which is accurate to one part in 1000. One can do better, but this at least demonstrates the principle. If the 357 and 412 bother you, you may imagine that I've written a sum with 729 terms on the left hand side instead, 357 of which are $\pi$ and 412 of which are $-e$.
So what if, against all bets, $\frac e \pi$ is rational? Then the opposite is true. There is a single best approximation to $\phi$ of the form $p e + q \pi$, which is not exact, and there are infinitely many choices of $p$ and $q$ which yield the same approximation. This is because, in that case, every number of the form $p e + q \pi$ is a rational multiple of $e$ with denominator dividing $d$, the denominator of $\frac e \pi$ when written as an integer fraction in lowest terms. Of course, none of these can be exact, since each is either 0 or transcendental while $\phi$ is algebraic, and since the set of all such numbers is discrete (being just $\frac{e}{d}\mathbb Z$ where $d$ is the denominator mentioned above), $\phi$ is not in its closure. That is to say, the irrationality of $\frac e \pi$ is equivalent to the existence of arbitrarily good approximations to $\phi$ of the form $p e + q \pi$ for integers $p$ and $q$. Of course, the current lower bounds on $d$ are likely to be extremely large, since we know plenty of digits of both $e$ and $\pi$ and haven't yet found any such rational number with value $\frac e \pi$, so there are going to be very good approximations for all practical purposes; but eventually there has to be a single best one, in exactly the same way that there's a single best integer approximation to $\phi$ (namely 2).
Luckily, even in this case we can still construct arbitrarily good approximations to $\phi$ based on $e$ and $\pi$; just not in the same way. Of course, for some $n$, it must be true that $\sqrt[n] \frac{e}{\pi}$ is irrational (this is true for any real number other than 0 and 1, and $\frac e \pi$ is clearly neither). We can play exactly the same game as we did before to get arbitrarily good approximations of the form $p \sqrt[n] e + q \sqrt[n] \pi$ to $\phi$ with $p$ and $q$ integers. If the appearance of this $n$ bothers you, we can even take $n$ to be a power of 2 so that $\sqrt[n] {}$ can be written as a repeated composition of $\sqrt {}$, i.e. $\sqrt[8]{x}=\sqrt {\sqrt {\sqrt{x}}}$.
Note that in all cases above, it's (as far as I know) unknown whether the forms given can exactly represent $\phi$, though all bets are to the negative. Certainly there are no known cases in which it does represent $\phi$ exactly, since that would give a proof that $e$ and $\pi$ are not algebraically independent (a major unsolved problem). In principle, there could be cases where it's definitely known that the form does not represent $\phi$ exactly, but really there's just about nothing about problems like this so it would surprise me if there are any cases known. |
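A brute-force search over integer coefficients (a sketch; the search bound of 400 is arbitrary) recovers approximations at least as good as $357\pi - 412e$:

```python
import math

phi = (1.0 + math.sqrt(5.0)) / 2.0

# For each count q of pi-terms, pick the integer count p of e-terms that
# brings p*e + q*pi closest to phi, and keep the best pair found.
best = (float("inf"), 0, 0)
for q in range(1, 400):                  # 400 is an arbitrary search bound
    p = round((phi - q * math.pi) / math.e)
    err = abs(p * math.e + q * math.pi - phi)
    if err < best[0]:
        best = (err, p, q)

err, p, q = best
print(p, q, err)   # at least as accurate as (p, q) = (-412, 357)
```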
When I evaluate
Solve[a==Sin[b*c], b] to rearrange the following for $ b $:
$$ a = \sin(bc) $$
I get the following result from Mathematica:
$$\begin{align*} \left\{\left\{b\to \text{ConditionalExpression}\left[\frac{-\sin ^{-1}(a)+2 \pi c_1+\pi }{c},c_1\in \mathbb{Z}\right]\right\},\right.\left.\left\{b\to \text{ConditionalExpression}\left[\frac{\sin ^{-1}(a)+2 \pi c_1}{c},c_1\in \mathbb{Z}\right]\right\}\right\} \end{align*}$$
It seems far too complicated. Unless I'm making a huge mistake, surely solving the equation for $ b $ would give:
$$ b = \frac{\sin ^{-1}(a)}{c} $$
Am I doing something wrong? |
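For what it's worth, the extra complexity is just periodicity: $\sin$ takes each value infinitely often, and the two ConditionalExpression families enumerate every solution; $b = \sin^{-1}(a)/c$ is the special case $c_1 = 0$ of the second family. A quick numerical check (a sketch; the sample values $a = 0.5$, $c = 2$ are arbitrary, any $|a|\le 1$ and $c\ne 0$ work):

```python
import math

# Check both solution families returned for a == sin(b*c):
#   b = (pi - asin(a) + 2*pi*k) / c   and   b = (asin(a) + 2*pi*k) / c
a, c = 0.5, 2.0

for k in range(-3, 4):
    b1 = (math.pi - math.asin(a) + 2.0 * math.pi * k) / c
    b2 = (math.asin(a) + 2.0 * math.pi * k) / c
    assert math.isclose(math.sin(b1 * c), a, abs_tol=1e-12)
    assert math.isclose(math.sin(b2 * c), a, abs_tol=1e-12)

print("both families satisfy a = sin(b*c)")
```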
Sometimes, the best way to do this kind of thing is rather simple. If the problem is computing this function a lot of times, then... don't compute it!
Basically, all you have to do is to write a table for a finite set of values $x_j$. If you are going to compute $f(x)$ with $x\in[a,b]$, then you compute $f(x_j)$, where $x_j=a+j\Delta x$, $\Delta x=(b-a)/N$. The larger $N$, the better representation of the function you get.
You compute this table once, at the beginning of the program. You could additionally write it to a file and simply recover it at the beginning.
Finally, if you need $f(x)$, with $x_j<x<x_{j+1}$, you return $f(x)\simeq(f(x_j)+f(x_{j+1}))/2$. Since your function is continuous, if $\Delta x$ is small enough, you are going to have a very good approximation for $f(x)$. And you avoid computing the function, since all the $f(x_j)$ are stored in memory.
This "trick" really saves a lot of computation (at the expense of having the results stored in memory) and in my opinion it is not used as much as it should be.
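A minimal sketch of the scheme described above (names hypothetical), tabulating $\sin$ once and answering later queries from the table:

```python
import math

class Tabulated:
    """Tabulate f once on [a, b]; later calls read the table instead of f."""
    def __init__(self, f, a, b, n):
        self.a = a
        self.dx = (b - a) / n
        self.table = [f(a + j * self.dx) for j in range(n + 1)]

    def __call__(self, x):
        j = int((x - self.a) / self.dx)        # index with x_j <= x < x_{j+1}
        j = min(max(j, 0), len(self.table) - 2)
        # average of the two neighbouring samples, as described above
        return 0.5 * (self.table[j] + self.table[j + 1])

sin_tab = Tabulated(math.sin, 0.0, 2.0 * math.pi, 100_000)
print(abs(sin_tab(1.234) - math.sin(1.234)))   # error of order dx
```

Replacing the plain average with linear interpolation, $f(x_j) + (x - x_j)\,(f(x_{j+1}) - f(x_j))/\Delta x$, costs one extra multiplication and improves the error from $O(\Delta x)$ to $O(\Delta x^2)$.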
I am looking for a proof, a hint or an idea to the following problem:
Is the unique solution $x\in (0,2\pi)$ of
$$ x\sin(x) + \cos(x) = 1 $$
which is equivalent to
$$ 2\arctan(x) = x$$
a rational multiple of $\pi$. I.e. is $\frac{x}{\pi} \in \mathbb{Q}$?
I believe that this is not true. This idea is based on the numeric solution, which does not look very rational:
One idea is to use Thomas Andrews answer:
$\arctan(x)$ is a rational multiple of $\pi$ if and only if the complex number $1+xi$ has the property that $(1+xi)^n$ is a real number for some positive integer $n$. This is not possible if $x$ is a rational, $|x|\neq 1$, because $(q+pi)^n$ cannot be real for any $n$ if $(q,p)=1$ and $|qp|>1$. So $\arctan(\frac{p}{q})$ cannot be a rational multiple of $\pi$. (His full answer and proof can be found here: ArcTan(2) a rational multiple of $\pi$?)
Now, one would need to show: If $x$ is a rational multiple of $\pi$, is there an $n$, such that $(q+p\pi i)^n$ is real? For $p,q,n \in \{1,\ldots,100\}$ Mathematica says no:
Do[Do[Do[If[(p + Pi*I*q)^n \[Element] Rationals, Print[n, p, q],], {n, 1, 100}], {p, 1, 100}], {q, 1, 100}]
Thanks in advance. |
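For reference, the numeric solution mentioned above is easy to reproduce with a short bisection sketch in Python (the bracket $[\pi/2, \pi]$ is chosen because $x\sin(x)+\cos(x)-1$ changes sign there):

```python
from math import sin, cos, atan, pi

def g(x):
    return x * sin(x) + cos(x) - 1.0

lo, hi = 0.5 * pi, pi        # g(pi/2) > 0 and g(pi) < 0, so the root is bracketed
for _ in range(80):          # plain bisection
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if g(mid) > 0 else (lo, mid)

x = 0.5 * (lo + hi)
print(x, x / pi)             # x ≈ 2.3311, x/pi ≈ 0.742
print(abs(2 * atan(x) - x))  # the equivalent form 2*arctan(x) = x holds too
```

The ratio $x/\pi \approx 0.742$ shows no obvious small-denominator pattern, consistent with the conjecture that it is irrational.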
If your interest is in data reduction, PCA, LASSO and ridge regression can all handle categorical predictors in principle. The default is typically to code the dummies as 0/1 numeric variables and standardize them like continuous numeric variables for scaling. Principal-components regression and ridge regression are fundamentally similar, with an all-or-none choice of components in the former and a graded combination produced by the latter.
Your idea to do PCA on the continuous variables and then combine with the dummy data, however, is also reasonable, as I said in a recent answer. As @amoeba noted in a comment on the present page, however, in either case it's true that: "Whether it's going to end up being useful, nobody can say in advance."
For example, for scaling of categorical variables coded 0/1, Frank Harrell notes in Regression Modeling Strategies, second edition, page 209 that "high prevalence cells [get] more shrinkage than low prevalence ones because the high prevalence cells will dominate the penalty function." That might or might not pose a problem for you.
And as he says on the following page:
For a categorical predictor having c levels, users of ridge regression often do not recognize that the amount of shrinkage and the predicted values from the fitted model depend on how the design matrix is coded. For example, one will get different predictions depending on which cell is chosen as the reference cell when constructing dummy variables.
So if you've pre-coded a multi-level categorical variable already as multiple binaries, you have to consider that issue. Penalizing the squared difference of all k regression coefficients for a k-level categorical variable from the mean of the coefficients (including a coefficient of 0 for the reference level) can help with that, as he points out.
An alternative you might consider is to use penalized maximum likelihood, where instead of penalizing the regression coefficients of pre-standardized predictor variables, you maximize$$ \log L - \frac{1}{2}\lambda\sum_{i=1}^{p}\left(s_i\beta_i\right)^2$$where each $s_i$ is chosen to make $s_i\beta_i$ unitless and $\lambda$ is the penalty. That allows work in the original scale of variables, and if there are some variables you don't want to penalize you can just set their scale factors to 0 in the penalized likelihood. Harrell's rms package in R provides for this.
A few final notes. I've tried to provide a generally useful answer for future reference here, but I don't have experience with datasets of the scale you are considering and I can't say how efficient these approaches may be. Second, if you are going to use cross-validation or bootstrapping to compare among approaches, as always be sure to do this validation on all the steps of the model-building process. |
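To make the reference-cell point concrete, here is a small numpy sketch (the data and penalty are invented for illustration; ridge is implemented in closed form with an unpenalized intercept). Ridge predictions change when a different level of a 3-level categorical is used as the reference cell, while ordinary least squares predictions do not:

```python
import numpy as np

rng = np.random.default_rng(0)
levels = rng.integers(0, 3, size=60)                    # one 3-level categorical predictor
y = np.array([0.0, 1.0, 3.0])[levels] + rng.normal(0, 0.1, 60)

def dummies(ref):
    # 0/1 coding with the chosen level as the reference cell
    return np.column_stack([(levels == l).astype(float) for l in range(3) if l != ref])

def fit_predict(X, y, alpha):
    # ridge with an unpenalized intercept, in closed form
    xbar, ybar = X.mean(axis=0), y.mean()
    Xc, yc = X - xbar, y - ybar
    beta = np.linalg.solve(Xc.T @ Xc + alpha * np.eye(X.shape[1]), Xc.T @ yc)
    return X @ beta + (ybar - xbar @ beta)

gap_ridge = np.abs(fit_predict(dummies(0), y, 5.0) - fit_predict(dummies(2), y, 5.0)).max()
gap_ols = np.abs(fit_predict(dummies(0), y, 0.0) - fit_predict(dummies(2), y, 0.0)).max()
print(gap_ridge, gap_ols)   # ridge gap is clearly nonzero; OLS gap is numerically zero
```

The OLS fits agree because the two codings span the same column space; the penalty is what breaks the invariance.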
I'm reading Lambda-Calculus and Combinators: An Introduction, and there's the following definition of $\lambda$-substitution: $FV(P)$ stands for the set containing all free variables of $P$.
Definition 1.12 (Substitution) For any $M, N, x$, define $[N/x]M$ to be the result of substituting $N$ for every free occurrence of $x$ in $M$, and changing bound variables to avoid clashes. The precise definition is by induction on $M$, as follows (after [CF58, p.94]).
(a) $[N/x]x \equiv N$
(b) $[N/x]a \equiv a$ for all atoms $a \not \equiv x$
(c) $[N/x](PQ) \equiv ([N/x]P)([N/x]Q)$
(d) $[N/x](\lambda x.P) \equiv (\lambda x.P)$
(e) $[N/x](\lambda y.P) \equiv \lambda y.P$ if $x \not \in FV(P)$.
(f) $[N/x](\lambda y.P) \equiv \lambda y. [N/x]P$ if $x \in FV(P)$ and $y \not \in FV(N)$.
(g) $[N/x](\lambda y.P) \equiv \lambda z. [N/x][z/y]P$ if $x \in FV(P)$ and $y \in FV(N)$.
I do understand that:
$(a), (b)$ are the base cases for this induction. $(d)$ exists per definition, as one is not allowed to substitute bound variables. $(g)$ prevents a bound variable from changing to a free one. It does this by first substituting a bound variable.
My question is:
If one deletes $(d)$ and allows bound variables to be substituted, is $(g)$ strong enough to handle it without messing up everything?
I'm asking this because $(d)$ seems to prevent the following $\alpha$-equivalent substitutions.
$$ [y/x] ~~ \lambda x. x \equiv \lambda y.y $$ |
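For concreteness, here is a direct Python transcription of clauses (a)-(g) as stated (the term representation and the fresh-variable scheme are my own choices, not the book's):

```python
def fv(t):
    # free variables; terms are ('var', x), ('app', M, N), ('lam', x, body)
    if t[0] == 'var': return {t[1]}
    if t[0] == 'app': return fv(t[1]) | fv(t[2])
    return fv(t[2]) - {t[1]}

def fresh(avoid):
    # pick a variable name not in `avoid`
    i = 0
    while f"z{i}" in avoid:
        i += 1
    return f"z{i}"

def subst(N, x, M):
    """[N/x]M, following clauses (a)-(g)."""
    if M[0] == 'var':
        return N if M[1] == x else M                          # (a), (b)
    if M[0] == 'app':
        return ('app', subst(N, x, M[1]), subst(N, x, M[2]))  # (c)
    y, P = M[1], M[2]
    if y == x:
        return M                                              # (d)
    if x not in fv(P):
        return M                                              # (e): the binder stays
    if y not in fv(N):
        return ('lam', y, subst(N, x, P))                     # (f)
    z = fresh(fv(N) | fv(P))                                  # (g): rename to avoid capture
    return ('lam', z, subst(N, x, subst(('var', z), y, P)))

# clause (d): [y/x](lam x. x) is unchanged, the alpha-equivalence issue raised above
print(subst(('var', 'y'), 'x', ('lam', 'x', ('var', 'x'))))
# clause (g): [y/x](lam y. x y) renames the binder before substituting
print(subst(('var', 'y'), 'x', ('lam', 'y', ('app', ('var', 'x'), ('var', 'y')))))
```

The second call returns $\lambda z_0.\,y\,z_0$, showing clause (g) renaming the binder; the first returns $\lambda x.x$ untouched, which is exactly the behavior of clause (d) the question asks about.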
The Annals of Statistics, Volume 24, Number 2 (1996), 862-878.

Efficient maximum likelihood estimation in semiparametric mixture models

Abstract
We consider maximum likelihood estimation in several examples of semiparametric mixture models, including the exponential frailty model and the errors-in-variables model. The observations consist of a sample of size $n$ from the mixture density $\int p_{\theta}(x|z)\, d\eta(z)$. The mixing distribution is completely unknown. We show that the first component $\hat{\theta}_n$ of the joint maximum likelihood estimator $(\hat{\theta}_n, \hat{\eta}_n)$ is asymptotically normal and asymptotically efficient in the semiparametric sense.

Article information

Source: Ann. Statist., Volume 24, Number 2 (1996), 862-878.
Dates: First available in Project Euclid: 24 September 2002
Permanent link to this document: https://projecteuclid.org/euclid.aos/1032894470
Digital Object Identifier: doi:10.1214/aos/1032894470
Mathematical Reviews number (MathSciNet): MR1394993
Zentralblatt MATH identifier: 0860.62029

Citation
Van der Vaart, Aad. Efficient maximum likelihood estimation in semiparametric mixture models. Ann. Statist. 24 (1996), no. 2, 862--878. doi:10.1214/aos/1032894470. https://projecteuclid.org/euclid.aos/1032894470 |
I found a relationship online between the conductivity of electrolyte and current.
$$i = F \sum_i z_i N_i = -F^2 \left( \sum_i z_i^2 u_i c_i \right)\nabla\phi = -\kappa\nabla\phi, \qquad \kappa = F^2 \sum_i z_i^2 u_i c_i$$
This equation is hard for me to follow. I get what most of the variables mean, and I am guessing that $\nabla\phi$ is the potential gradient. However, I thought that conductivity was also related to temperature, which I do not see in the equation.
It has been a struggle for me to find results on the internet that relate conductivity and current. Can someone help me and tell me if this equation is right?
Also, this equation is true for dilute solutions, what will the equation be for concentrated solutions (solutions near $\pu{1 M}$)?
Also, how would you calculate the potential gradient? |
Blog
We can define roundness in many ways. For example, as you may know, the circle is the shape that, given a fixed perimeter, maximizes the area. This definition has many problems. One of them is that countries generally have chaotic perimeters (also known as borders), so these tend to be much longer than they seem to be.
For that reason, we have to define roundness some other way. Given a country, I will represent it as a plane region, more precisely a compact set \(C \subset \mathbb{R}^2\), and I will define its roundness as
\[ \mathrm{roundness}(C) = \max_{x \in \mathbb{R}^2,\, r \in \mathbb{R}_{>0}} \frac{ \mathrm{area}(C \cap D(x,r)) }{ \max \{ \mathrm{area}(D(x,r)),\, \mathrm{area}(C) \} } \]
where \(D(x, r)\) is the disk of center \(x\) and radius \(r\).
A linear recurrence is a linear equation that recursively defines a sequence. An example is the Fibonacci sequence, that is defined as
\[F_0 = 0\] \[F_1 = 1\] \[F_n = F_{n-1} + F_{n-2}\]
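A recurrence like this can be evaluated directly by iterating the defining equation; a minimal Python sketch:

```python
def fib(n):
    a, b = 0, 1          # F_0, F_1
    for _ in range(n):
        a, b = b, a + b  # apply F_n = F_{n-1} + F_{n-2}
    return a

print([fib(n) for n in range(10)])  # [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]
```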
In this post we will talk about xor. Xor is a logical operator that outputs true when the two input values are different, and false otherwise. It is usually symbolized with \(\oplus\).
We all know we can write any number in base 2. For example, \(18_{10} = 10010_2\). So we can ask a question, are there other (nontrivial) sequences such that any natural number is the sum of a finite subset of it? The answer is yes.
In Type Theory, propositions as types is the idea that types can be interpreted as propositions and vice versa. It is also known as the Curry-Howard isomorphism and is closely related to the concept of proofs as programs; this is the reason we will use 3 languages during this post: the language of logic, of type theory, and Haskell. |
Where $f(x,y,z) = xyz$ and the constraint is $g(x,y,z) = x^2+2y^2+3z^2 = 6$
I have tried this problem three or four times without getting the solution; I even asked this question once and got a wrong answer from my one and only answerer. The correct answer is $(\pm\sqrt{2}, \pm 1, \pm\frac{\sqrt{2}}{\sqrt{3}})$. This problem just makes no sense to me from an algebraic standpoint; it is my 4th problem off the odd-numbered exercises, and I could do the previous 3 just fine.
My attempt:
Let the function $f$ be defined as $f(x,y,z) = xyz$
find the maximum and minimum values subject to the constraint: $g(x,y,z) = x^2+2y^2+3z^2$
$$F=x y z +\lambda \left(x^2+2 y^2+3 z^2-6\right)$$ Computing derivatives $$F'_x=y z+2 \lambda x=0\tag 1$$ $$F'_y=x z+4 \lambda y=0\tag 2$$ $$F'_z=x y+6 \lambda z=0\tag 3$$ $$F'_\lambda=x^2+2 y^2+3 z^2-6=0\tag 4$$ Now, I should consider equations $(1,2,3)$ and solve them for $x,y,z$ in terms of $\lambda$.
Multiplying equations 1,2,3 by $x,y,z$ we obtain that $2x^2=4y^2=6z^2$. From here I found the corresponding multiples that $x^2$ and $y^2$ are in terms of $z$ and plugged into equation 4 to solve for $z$. I found that $x$ was a multiple of $z$ by 3 and $y$ was a multiple of $z$ by $\frac{3}{2}$ I found this by setting $4y^2=6z^2$ and got $\frac{6}{4}$ = $\frac{3}{2}$ Now plugging these into equation 4 I obtained $3+\frac32+3z^2-6=0$ My algebra lead me to $z= \pm\frac{1}{\sqrt{2}}$ but if done right $z$ should equal $\pm\frac{\sqrt{2}}{\sqrt{3}}$
I got this by adding 6 over to the right then subtracted 3 leaving me: $\frac{3}{2}$+$3z^2$=$3$ then subtracting $\frac{3}{2}$ lead me to : $3z^2$= $\frac{3}{2}$ and dividing by 3 gives $\frac{3}{2} \div \frac{3}{1}$ which is equivalent to $\frac{3}{2} \times \frac{1}{3}$ = $\frac{3}{6}$ and you can see that really leaves me with $z^2$= $\frac{1}{3}$ which is equivalent to $z=$ $\pm$ $\frac{1}{\sqrt{3}}$
What did I do incorrectly, and how do I proceed from here once I have found $z$? Also, how is it that you can write the two functions added together, giving you $F$? |
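One way to sanity-check the book's answer before hunting for the algebra slip is to substitute it back into equations (1)-(4). A short Python check, recovering $\lambda$ from equation (1):

```python
from math import sqrt, isclose

x, y, z = sqrt(2.0), 1.0, sqrt(2.0) / sqrt(3.0)  # the book's critical point (positive branch)
lam = -y * z / (2.0 * x)                         # solve equation (1): yz + 2*lam*x = 0

print(isclose(x**2 + 2*y**2 + 3*z**2, 6.0))      # constraint (4) holds
print(abs(x*z + 4*lam*y) < 1e-12)                # equation (2) holds
print(abs(x*y + 6*lam*z) < 1e-12)                # equation (3) holds
print(isclose(2*x**2, 4*y**2), isclose(4*y**2, 6*z**2))  # the relation 2x^2 = 4y^2 = 6z^2
```

Note that at this point $x^2 = 3z^2$ and $2y^2 = 3z^2$, so the constraint reads $3z^2 + 3z^2 + 3z^2 = 6$, i.e. $z^2 = 2/3$; plugging in fixed numbers like $3$ and $\tfrac{3}{2}$ instead of multiples of $z^2$ is where the arithmetic above went off track.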
Let $A(x)$ be a differentiable matrix-valued function with $\det A(x)\ne 0\,\forall x$. I understand that $$\frac{d}{dx}\log A(x)$$ does not have a simple expression in terms of $A$ and $dA/dx$ unless these two things commute, in which case the expression is $$A(x)^{-1}\frac{dA}{dx}.$$ Let's say I don't want to assume that $A$ and $dA/dx$ commute but I only care about the trace. Is it true for any differentiable non-singular function $A(x)$ that $$\text{tr}\left(A(x)^{-1}\frac{dA}{dx}\right)=\frac{d}{dx}\text{tr}\,\log A(x)=\frac{d}{dx}\log\det A(x)$$
Yes, it's true.
Let $F(t)$ and $f(t)=\tfrac{dF}{dt}\,\,$ define a function and its first derivative wrt a scalar argument.
Now apply the function to a matrix argument and take the trace $$\eqalign{ \phi &= {\rm \,tr}(F(A)) \cr }$$ The differential of this function is given by $$\eqalign{ d\phi &= d{\rm \,tr}(F(A)) = f(A^T):dA\cr }$$ where colon represents the trace/Frobenius product, i.e. $\,\,A:B={\rm tr}(A^TB).$
The specific case $F(t)=\log(t)$ yields $$\eqalign{ d{\rm \,tr}(\log(A)) &= (A^T)^{-1}:dA \cr &= (A^T)^{-1}:\tfrac{dA}{dx}\,dx \cr \cr \frac{d{\rm \,tr}(\log(A))}{dx} &= (A^T)^{-1}:\tfrac{dA}{dx} \cr &= {\rm tr}(A^{-1}\tfrac{dA}{dx}) \cr\cr }$$ The formula $$\log(\det(e^L)) = {\rm tr}(L)$$ is due to Jacobi.
If $\{\lambda_k\}$ are the eigenvalues of $L$, then the eigenvalues of $e^L$ are $\{e^{\lambda_k}\}$ and Jacobi's formula simply states that $$\eqalign{ \log\Big(\prod_k \exp(\lambda_k)\Big) = \log\Big(\exp\big(\sum_k\lambda_k\big)\Big) = \sum_k \lambda_k }$$ Setting $L=\log(A)\,\,$ recovers the final equality in your question. |
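The identity is also easy to confirm numerically with a non-commuting example (the matrix family below is an arbitrary choice; the derivative of $\log\det A$ is taken by central finite differences):

```python
import numpy as np

rng = np.random.default_rng(1)
B, C = rng.normal(size=(3, 3)), rng.normal(size=(3, 3))
A  = lambda x: 4.0 * np.eye(3) + x * B + np.sin(x) * C   # nonsingular near x = 0.3; A and dA/dx need not commute
dA = lambda x: B + np.cos(x) * C

x, h = 0.3, 1e-6
lhs = np.trace(np.linalg.solve(A(x), dA(x)))             # tr(A^{-1} dA/dx)
rhs = (np.linalg.slogdet(A(x + h))[1] - np.linalg.slogdet(A(x - h))[1]) / (2 * h)  # d/dx log det A
print(abs(lhs - rhs))   # agreement to roughly finite-difference accuracy
```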
Number theory studies the properties of integers. Some basic results in number theory rely on the existence of a certain number. The next theorem can be used to show that such a number exists.
Theorem \(\PageIndex{1}\label{thm:PWO}\)
Every nonempty subset of \(\mathbb{N}\) has a smallest element.
Proof
The idea is rather simple. Start with the integer 1. If it belongs to \(S\), we are done. If not, consider the next integer 2, and then 3, and so on, until we find the first element in \(S\). However, like the principle of mathematical induction, it is unclear why “and so on” is possible. In fact, we cannot prove the principle of well-ordering with just the familiar properties that the natural numbers satisfy under addition and multiplication. Hence, we shall regard the principle of well-ordering as an axiom. Interestingly though, it turns out that the principle of mathematical induction and the principle of well-ordering are logically equivalent.
Theorem \(\PageIndex{2}\label{thm:PMI-PWO}\)
The principle of mathematical induction holds if and only if the principle of well-ordering holds.
Proof
(\(\Rightarrow\)) Suppose \(S\) is a nonempty set of natural numbers that has no smallest element. Let \[R = \{ x\in\mathbb{N} \mid x\leq s \mbox{ for every } s\in S\}.\] Since \(S\) does not have a smallest element, it is clear that \(R\cap S = \emptyset\). It is also obvious that \(1\in R\). Assume \(k\in R\). Then any natural number less than or equal to \(k\) must also be less than or equal to \(s\) for every \(s\in S\). Hence \(1,2,\ldots,k \in R\). Because \(R\cap S=\emptyset\), we find \(1,2,\ldots,k\notin S\). If \(k+1\in S\), then \(k+1\) would have been the smallest element of \(S\). This contradiction shows that \(k+1\in R\). Therefore, the principle of mathematical induction would have implied that \(R=\mathbb{N}\). That would make \(S\) an empty set, which contradicts the assumption that \(S\) is nonempty. Therefore, any nonempty set of natural numbers must have a smallest element.
(\(\Leftarrow\)) Let \(S\) be a set of natural numbers such that
\(1\in S\),
For any \(k\geq1\), if \(k\in S\), then \(k+1\in S\).
Suppose \(S\neq\mathbb{N}\). Then \(\overline{S}=\mathbb{N}-S\neq\emptyset\). The principle of well-ordering states that \(\overline{S}\) has a smallest element \(z\). Since \(1\in S\), we deduce that \(z\geq2\), which makes \(z-1\geq1\). The minimality of \(z\) implies that \(z-1\notin \overline{S}\). Hence, \(z-1\in S\). Condition (ii) implies that \(z\in S\), which is a contradiction. Therefore, \(S=\mathbb{N}\).
The principle of well-ordering is an existence theorem. It does not tell us which element is the smallest integer, nor does it tell us how to find the smallest element.
Example \(\PageIndex{1}\label{eg:PWO-01}\)
Consider the sets \[\begin{aligned} A &= \{ n\in\mathbb{N} \mid n \mbox{ is a multiple of 3} \}, \\ B &= \{ n\in\mathbb{N} \mid n = -11+7m \mbox{ for some } m\in\mathbb{Z} \}, \\ C &= \{ n\in\mathbb{N} \mid n = x^2-8x+12 \mbox{ for some } x\in\mathbb{Z} \}. \end{aligned}\] It is easy to check that all three sets are nonempty, and since they contain only positive integers, the principle of well-ordering guarantees that each of them has a smallest element.
These smallest elements may not be easy to find. It is obvious that the smallest element in \(A\) is 3. To find the smallest element in \(B\), we need \(-11+7m>0\), which means \(m>11/7\approx1.57\). Since \(m\) has to be an integer, we need \(m\geq2\). Since \(-11+7m\) is an increasing function in \(m\), its smallest value occurs when \(m=2\). The smallest element in \(B\) is \(-11+7\cdot2=3\).
To determine the smallest element in \(C\), we need to solve the inequality \(x^2-8x+12>0\). Factorization leads to \(x^2-8x+12 = (x-2)(x-6)>0\), so we need \(x<2\) or \(x>6\). Because \(x\in\mathbb{Z}\), we determine that the minimum value of \(x^2-8x+12\) occurs at \(x=1\) or \(x=7\). Since \[1^2-8\cdot1+12 = 7^2-8\cdot7+12 = 5,\] the smallest element in \(C\) is 5.
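These smallest elements can also be confirmed by brute force; a short Python sketch (the search bounds are assumptions chosen to comfortably contain the minima):

```python
# brute-force search for the smallest elements of A, B, C
smallest_A = min(n for n in range(1, 101) if n % 3 == 0)
smallest_B = min(n for n in range(1, 101) if (n + 11) % 7 == 0)  # n = -11 + 7m  <=>  7 | n + 11
smallest_C = min(v for v in (x*x - 8*x + 12 for x in range(-50, 51)) if v >= 1)
print(smallest_A, smallest_B, smallest_C)  # 3 3 5
```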
Example \(\PageIndex{2}\label{eg:PWO-02}\)
The principle of well-ordering may not be true over real numbers or negative integers. In general, not every set of integers or real numbers must have a smallest element. Here are two examples:
The set \(\mathbb{Z}\).
The open interval \((0,1)\).
The set \(\mathbb{Z}\) has no smallest element because given any integer \(x\), it is clear that \(x-1<x\), and this argument can be repeated indefinitely. Hence, \(\mathbb{Z}\) does not have a smallest element.
A similar problem occurs in the open interval \((0,1)\). If \(x\) lies between 0 and 1, then so is \(\frac{x}{2}\), and \(\frac{x}{2}\) lies between 0 and \(x\), such that \[0 < x < 1 \quad\Rightarrow\quad 0 < \frac{x}{2} < x < 1.\] This process can be repeated indefinitely, yielding \[0 < \cdots < \frac{x}{2^n} < \cdots < \frac{x}{2^3} < \frac{x}{2^2} < \frac{x}{2} < x < 1.\] We keep getting smaller and smaller numbers. All of them are positive and less than 1. There is no end in sight, hence the interval \((0,1)\) does not have a smallest element.
The idea behind the principle of well-ordering can be extended to cover numbers other than positive integers.
Definition
A set \(T\) of real numbers is said to be well-ordered if every nonempty subset of \(T\) has a smallest element.
Therefore, according to the principle of well-ordering, \(\mathbb{N}\) is well-ordered.
Example \(\PageIndex{3}\label{eg:PWO-03}\)
Show that \(\mathbb{Q}\) is not well-ordered.
Solution
Suppose \(x\) is the smallest element in \(\mathbb{Q}\). Then \(x-1\) is a rational number that is smaller than \(x\), which contradicts the minimality of \(x\). This shows that \(\mathbb{Q}\) does not have a smallest element. Therefore \(\mathbb{Q}\) is not well-ordered.
hands-on exercise \(\PageIndex{1}\label{he:PWO-01}\)
Show that the interval \([0,1]\) is not well-ordered by finding a subset that does not have a smallest element.
Summary and Review

A set of real numbers (which could be decimal numbers) is said to be well-ordered if every nonempty subset in it has a smallest element. A well-ordered set must be nonempty and have a smallest element. Having a smallest element does not guarantee that a set of real numbers is well-ordered. A well-ordered set can be finite or infinite, but a finite set is always well-ordered.
Exercise \(\PageIndex{1}\label{ex:PWO-01}\)
Find the smallest element in each of these subsets of \(\mathbb{N}\).
\(\{n\in\mathbb{N} \mid n=m^2-10m+28 \mbox{ for some integer } m\}\)
\(\{n\in\mathbb{N} \mid n=5q+3 \mbox{ for some integer } q\}\)
\(\{n\in\mathbb{N} \mid n=-150-17d \mbox{ for some integer } d\}\)
\(\{n\in\mathbb{N} \mid n=4s+9t \mbox{ for some integers } s \mbox{ and } t\}\)
Exercise \(\PageIndex{2}\label{ex:PWO-02}\)
Determine which of the following subsets of \(\mathbb{R}\) are well-ordered:
\(\{\;\}\)
\(\{-9,-7,-3,5,11\}\)
\(\{0\}\cup\mathbb{Q}^+\)
\(2\mathbb{Z}\)
\(5\mathbb{N}\)
\(\{-6,-5,-4,\ldots\,\}\)
Exercise \(\PageIndex{3}\label{ex:PWO-03}\)
Show that the interval \([3,5]\) is not well-ordered.
Hint
Find a subset of \([3,5]\) that does not have a smallest element.
Exercise \(\PageIndex{4}\label{ex:PWO-04}\)
Assume \(\emptyset \neq T_1 \subseteq T_2 \subseteq \mathbb{R}\). Show that if \(T_2\) is well-ordered, then \(T_1\) is also well-ordered.
Hint
Let \(S\) be a nonempty subset of \(T_1\). We want to show that \(S\) has a smallest element. To achieve this goal, note that \(T_1\subseteq T_2\).
Exercise \(\PageIndex{5}\label{ex:PWO-05}\)
Prove that \(2\mathbb{N}\) is well-ordered.
Hint
Use Exercise \(\PageIndex{4}\).
Exercise \(\PageIndex{6}\label{ex:PWO-06}\)
Assume \(\emptyset \neq T_1 \subseteq T_2 \subseteq \mathbb{R}\). Prove that if \(T_1\) does not have a smallest element, then \(T_2\) is not well-ordered. |
I recently attended an internal Convergent Science advanced training course on turbulence modeling. One of the audience members asked one of my favorite modeling questions, and I’m happy to share it here. It’s the sort of question I sometimes find myself asking tentatively, worried I might have missed something obvious. The question is this:
Reynolds-Averaged Navier-Stokes (RANS) turbulence models and Large-Eddy Simulation (LES) turbulence models have very different behavior. LES will become a direct numerical simulation (DNS) in the limit of infinitesimally fine grid, and it shows a wide range of turbulent length scales. RANS does not become a DNS, no matter how fine we make the grid. Rather, it shows grid-convergent behavior (i.e., the simulation results stop changing with finer and finer grids), and it removes small-scale turbulent content.
If I look at a RANS model or an LES turbulence model, the transport equations look very similar mathematically. How does the flow ‘know’ which is which?
There’s a clever, physically intuitive answer to this question, which motivates the development of additional hybrid models. But first we have to do a little bit of math.
Both RANS and LES take the approach of decomposing a turbulent flow into a component to be resolved and a component to be modeled. Let’s define the Reynolds decomposition of a flow variable ϕ as
$$\phi = \bar \phi \; + \;\phi',$$
where the overbar term represents a time/ensemble average and the prime term is the fluctuating term. This decomposition has the following properties:
$$\overline{\overline{\phi}} = \bar \phi \;\;{\rm{and}}\;\;\overline{\phi'} = 0.$$
LES uses a different approach, which is a spatial filter. The filtering decomposition of ϕ is defined as
$$\phi = \left\langle \phi \right\rangle + \;\phi'',$$
where the term in the angled brackets is the filtered term and the double-prime term is the sub-grid term. In practice, this is often calculated using a box filter, a spatial average of everything inside, say, a single CFD cell. The spatial filter has different properties than the Reynolds decomposition,
$$\left\langle {\left\langle \phi \right\rangle } \right\rangle \ne \left\langle \phi \right\rangle \;\;{\rm{and}}\;\;\left\langle {\phi''} \right\rangle \ne 0.$$
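These non-projection properties of the spatial filter are easy to see numerically with a discrete box filter (the signal and kernel width below are illustrative choices, not CFD data):

```python
import numpy as np

rng = np.random.default_rng(0)
u = rng.normal(size=256)            # a stand-in "velocity" signal
kernel = np.ones(5) / 5.0           # discrete box filter of width 5

f1 = np.convolve(u, kernel, mode='same')    # <u>
f2 = np.convolve(f1, kernel, mode='same')   # <<u>>
sub = u - f1                                # u'' = u - <u>
# both maxima are far from zero: <<u>> != <u> and <u''> != 0
print(np.max(np.abs(f2 - f1)), np.max(np.abs(np.convolve(sub, kernel, mode='same'))))
```

By contrast, a true time/ensemble average applied twice returns itself, which is exactly the distinction between the two decompositions above.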
To derive RANS and LES turbulence models, we apply these decompositions to the Navier-Stokes equations. For simplicity, let’s consider only the incompressible momentum equation. The Reynolds-averaged momentum equation is written as
$$\frac{\partial \overline{u_i}}{\partial t} + \frac{\partial \overline{u_i}\,\overline{u_j}}{\partial x_j} = -\frac{1}{\rho}\frac{\partial \overline{P}}{\partial x_i} + \frac{1}{\rho}\frac{\partial}{\partial x_j}\left[\mu\left(\frac{\partial \overline{u_i}}{\partial x_j} + \frac{\partial \overline{u_j}}{\partial x_i}\right) - \frac{2}{3}\mu\frac{\partial \overline{u_k}}{\partial x_k}\delta_{ij}\right] - \frac{1}{\rho}\frac{\partial}{\partial x_j}\left(\rho\,\color{Red}{\overline{u'_i u'_j}}\right).$$
This equation looks the same as the basic momentum transport equation, replacing each variable with the barred equivalent, with the exception of the term* in red. That’s where the RANS model will make a contribution.
The LES momentum equation, again neglecting Favre filtering, is written
$$\frac{\partial \left\langle u_i \right\rangle}{\partial t} + \frac{\partial \left\langle u_i \right\rangle \left\langle u_j \right\rangle}{\partial x_j} = -\frac{1}{\rho}\frac{\partial \left\langle P \right\rangle}{\partial x_i} + \frac{1}{\rho}\frac{\partial \left\langle \sigma_{ij} \right\rangle}{\partial x_j} - \frac{1}{\rho}\frac{\partial}{\partial x_j}\left(\rho\,\color{Red}{\left\langle u_i u_j \right\rangle} - \rho \left\langle u_i \right\rangle \left\langle u_j \right\rangle\right).$$
Once again, we have introduced a single unclosed term*, shown in red. As with RANS, this is where the LES model will exert its influence.
These terms are physically stress terms. In the RANS case, we call it the Reynolds stress.
$${\tau_{ij,RANS}} = -\rho\, \overline{u'_i u'_j}.$$
In the LES case, we define a sub-grid stress as follows:
$${\tau_{ij,LES}} = \rho \left( \left\langle u_i u_j \right\rangle - \left\langle u_i \right\rangle \left\langle u_j \right\rangle \right).$$
By convention, the same letter is used to denote these two subtly different terms. It’s common to apply one more assumption to both. Kolmogorov postulated that at sufficiently small scales, turbulence was statistically isotropic, with no preferential direction. He also postulated that turbulent motions were self-similar. The eddy viscosity approach invokes both concepts, treating
$${\tau _{ij,RANS}} = f\left( {{\mu _t},\overline V } \right)$$
and
$${\tau _{ij,LES}} = g\left( {{\mu _t},\overline V } \right),$$
where \(\overline V \) represents the vector of transported variables: mass, momentum, energy, and model-specific variables like turbulent kinetic energy. We have also introduced \({\mu _t}\), which we call the turbulent viscosity. Its effect is to dissipate kinetic energy in a similar fashion to molecular viscosity, hence the name.
If you skipped the math, here’s the takeaway. We have one unclosed term* each in the RANS and LES momentum equations, and in the eddy viscosity approach, we close it with what we call the turbulent viscosity \({\mu _t}\). Yet we know that RANS and LES have very different behavior. How does a CFD package like CONVERGE “know” whether that \({\mu _t}\) is supposed to behave like RANS or like LES? Of course the equations don’t “know”, and the solver doesn’t “know”. The behavior is constructed by the functional form of \({\mu _t}\).
How can the turbulent viscosity’s functional form construct its behavior? Dimensional analysis informs us what this term should look like. A dynamic viscosity has dimensions of density multiplied by length squared per time. If we’re looking to model the turbulent viscosity based on the flow physics, we should introduce dimensions of length and time. The key to the difference between RANS and LES behavior is in the way these dimensions are introduced.
Consider the standard k-ε model. It is a two-equation model, meaning it solves two additional transport equations. In this case, it transports turbulent kinetic energy (k) and the turbulent kinetic energy dissipation rate (ε). This model calculates the turbulent viscosity according to the local values of these two flow variables, along with density and a dimensionless model constant as
$${\mu _t} = {C_\mu }\rho \frac{{{k^2}}}{\varepsilon }.$$
Dimensionally, this makes sense. Turbulent kinetic energy is a specific energy with dimensions of length squared per time squared, and its dissipation rate has dimensions of length squared per time cubed. In a sufficiently well-resolved solution, all of these terms should limit to finite values, rather than limiting to zero or infinity. If so, the turbulent viscosity should limit to some finite value, and it does.
LES, in contrast, directly introduces units of length via the spatial filtering process. Consider the Smagorinsky model. This is a zero-equation model that calculates turbulent viscosity in a very different way. For the standard Smagorinsky model,
$${\mu _t} = \rho C_s^2{\Delta ^2}\sqrt {{S_{ij}}{S_{ij}}},$$
where \({C_s}\) is a dimensionless model constant, \({S_{ij}}\) is the filtered rate of strain tensor, and Δ is the grid spacing. Once again, the dimensions work out: density multiplied by length squared multiplied by inverse time. But what do the limits look like? The rate of strain is some physical quantity that will not limit to infinity. In the limit of infinitesimal grid size, the turbulent viscosity must limit to zero! The model becomes completely inactive, and the equations solved are the unfiltered Navier-Stokes equations. We are left with a direct numerical simulation.
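The contrast between the two limiting behaviors can be sketched in a few lines (all flow-state numbers below are invented placeholders, not CFD output):

```python
rho, C_mu, C_s = 1.2, 0.09, 0.17     # density and standard model constants
k, eps, S = 0.5, 2.0, 40.0           # assumed local k, epsilon, and strain-rate magnitude

mu_t_rans = C_mu * rho * k**2 / eps  # k-epsilon: no grid spacing appears
mu_t_les = [rho * C_s**2 * d**2 * S for d in (1e-2, 1e-3, 1e-4)]  # Smagorinsky under refinement

print(mu_t_rans)   # unchanged as the grid is refined
print(mu_t_les)    # falls by 100x per 10x refinement, vanishing in the DNS limit
```

The functional forms alone dictate the limits: the RANS viscosity depends only on transported flow quantities, while the Smagorinsky viscosity carries a factor of $\Delta^2$ that drives it to zero.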
When I was a first-year engineering student, discussion of dimensional analysis and limiting behaviors seemed pro forma and almost archaic. Real engineers in the real world just use computers to solve everything, don't they? Yes and no. Even those of us in the computational analysis world can derive real understanding, and real predictive power, from considering the functional form of the terms in the equations we're solving. It can even help us design models with behavior we can prescribe a priori.
Detached Eddy Simulation (DES) is a hybrid model, taking advantage of the similarity of functional forms of the turbulent viscosities in RANS and LES. DES adopts RANS-like behavior near the wall, where we know an LES can be very computationally expensive. DES adopts LES behavior far from the wall, where LES is more computationally tractable and unsteady turbulent motions are more often important.
The math behind this switching behavior is beyond the scope of a blog post. In effect, DES solves the Navier-Stokes equations with some effective \({\mu _{t,DES}}\) such that \({\mu _{t,DES}} \approx {\mu _{t,RANS}}\) near the wall and \({\mu _{t,DES}} \approx {\mu _{t,LES}}\) far from the wall, with \({\mu _{t,RANS}}\) and \({\mu _{t,LES}}\) selected and tuned so that they are compatible in the transition region. Our understanding of the derivation and characteristics of the RANS and LES turbulence models allows us to hybridize them into something new.
*This term is a symmetric second-order tensor, so it has six scalar components. In some approaches (e.g., Reynolds Stress models), we might transport these terms separately, but the eddy viscosity approach treats this unknown tensor as a scalar times a known tensor. |
Alright, I have this group $\langle x_i, i\in\mathbb{Z}\mid x_i^2=x_{i-1}x_{i+1}\rangle$ and I'm trying to determine whether $x_ix_j=x_jx_i$ or not. I'm unsure there is enough information to decide this, to be honest.
Nah, I have a pretty garbage question. Let me spell it out.
I have a fiber bundle $p : E \to M$ where $\dim M = m$ and $\dim E = m+k$. Usually a normal person defines $J^r E$ as follows: for any point $x \in M$ look at local sections of $p$ over $x$.
For two local sections $s_1, s_2$ defined on some nbhd of $x$ with $s_1(x) = s_2(x) = y$, say $J^r_p s_1 = J^r_p s_2$ if with respect to some choice of coordinates $(x_1, \cdots, x_m)$ near $x$ and $(x_1, \cdots, x_{m+k})$ near $y$ such that $p$ is projection to first $m$ variables in these coordinates, $D^I s_1(0) = D^I s_2(0)$ for all $|I| \leq r$.
This is a coordinate-independent (chain rule) equivalence relation on local sections of $p$ defined near $x$. So let the set of equivalence classes be $J^r_x E$, which inherits a natural topology after identifying it with $J^r_0(\Bbb R^m, \Bbb R^k)$, the space of $r$-th order Taylor expansions at $0$ of functions $\Bbb R^m \to \Bbb R^k$ preserving the origin.
Then declare $J^r p : J^r E \to M$ is the bundle whose fiber over $x$ is $J^r_x E$, and you can set up the transition functions etc no problem so all topology is set. This becomes an affine bundle.
Define the $r$-jet sheaf $\mathscr{J}^r_E$ to be the sheaf which assigns to every open set $U \subset M$ an $(r+1)$-tuple $(s = s_0, s_1, s_2, \cdots, s_r)$ where $s$ is a section of $p : E \to M$ over $U$, $s_1$ is a section of $dp : TE \to TU$ over $U$, $\cdots$, $s_r$ is a section of $d^r p : T^r E \to T^r U$ where $T^k X$ is the iterated $k$-fold tangent bundle of $X$, and the tuple satisfies the following commutation relation for all $0 \leq k < r$
$$\require{AMScd}\begin{CD} T^{k+1} E @>>> T^k E\\ @AAA @AAA \\ T^{k+1} U @>>> T^k U \end{CD}$$
@user193319 It converges uniformly on $[0,r]$ for any $r\in(0,1)$, but not on $[0,1)$, cause deleting a measure zero set won't prevent you from getting arbitrarily close to $1$ (for a non-degenerate interval has positive measure).
The top and bottom maps are tangent bundle projections, and the left and right maps are $s_{k+1}$ and $s_k$.
@RyanUnger Well I am going to dispense with the bundle altogether and work with the sheaf, is the idea.
The presheaf is $U \mapsto \mathscr{J}^r_E(U)$ where $\mathscr{J}^r_E(U) \subset \prod_{k = 0}^r \Gamma_{T^k E}(T^k U)$ consists of all the $(r+1)$-tuples of the sort I described
It's easy to check that this is a sheaf, because basically sections of a bundle form a sheaf, and when you glue two of those $(r+1)$-tuples of the sort I describe, you still get an $(r+1)$-tuple that preserves the commutation relation
The stalk of $\mathscr{J}^r_E$ over a point $x \in M$ is clearly the same as $J^r_x E$, consisting of all possible $r$-order Taylor series expansions of sections of $E$ defined near $x$.
Let $M \subset \mathbb{R}^d$ be a compact smooth $k$-dimensional manifold embedded in $\mathbb{R}^d$. Let $\mathcal{N}(\varepsilon)$ denote the minimal cardinality of an $\varepsilon$-cover $P$ of $M$; that is, for every point $x \in M$ there exists a $p \in P$ such that $\| x - p\|_{2}<\varepsilon$....
The same result should be true for abstract Riemannian manifolds. Do you know how to prove it in that case?
I think there you really do need some kind of PDEs to construct good charts.
I might be way overcomplicating this.
If we define $\tilde{\mathcal H}^k_\delta$ to be the $\delta$-Hausdorff "measure" but instead of $diam(U_i)\le\delta$ we set $diam(U_i)=\delta$, does this converge to the usual Hausdorff measure as $\delta\searrow 0$?
I think so by the squeeze theorem or something.
this is a larger "measure" than $\mathcal H^k_\delta$ and that increases to $\mathcal H^k$
but then we can replace all of those $U_i$'s with balls, incurring some fixed error
In fractal geometry, the Minkowski–Bouligand dimension, also known as Minkowski dimension or box-counting dimension, is a way of determining the fractal dimension of a set S in a Euclidean space Rn, or more generally in a metric space (X, d). It is named after the German mathematician Hermann Minkowski and the French mathematician Georges Bouligand.To calculate this dimension for a fractal S, imagine this fractal lying on an evenly spaced grid, and count how many boxes are required to cover the set. The box-counting dimension is calculated by seeing how this number changes as we make the grid...
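As a concrete instance of the box-counting recipe just described, here is a sketch for the middle-thirds Cantor set, using exact rational arithmetic so the counting grid lines up with the construction:

```python
from fractions import Fraction
import math

def cantor_intervals(level):
    """Closed intervals of the level-k middle-thirds construction."""
    intervals = [(Fraction(0), Fraction(1))]
    for _ in range(level):
        nxt = []
        for a, b in intervals:
            third = (b - a) / 3
            nxt.append((a, a + third))
            nxt.append((b - third, b))
        intervals = nxt
    return intervals

def box_count(intervals, eps):
    """Number of grid boxes whose interior (j*eps, (j+1)*eps) meets the intervals."""
    boxes = set()
    for a, b in intervals:
        j = math.floor(a / eps)
        while j * eps < b:
            boxes.add(j)
            j += 1
    return len(boxes)

# At scale eps = 3^-k the level-k set occupies 2^k boxes, so the log-log
# slope recovers dim = log 2 / log 3 ~ 0.6309.
k = 8
eps = Fraction(1, 3 ** k)
n = box_count(cantor_intervals(k), eps)
dim = math.log(n) / math.log(3 ** k)
```

Because the grid size is exactly $3^{-k}$, the count comes out exactly $2^k$ and the slope estimate is exact rather than asymptotic.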
@BalarkaSen what is this
ok but this does confirm that what I'm trying to do is wrong haha
In mathematics, Hausdorff dimension (a.k.a. fractal dimension) is a measure of roughness and/or chaos that was first introduced in 1918 by mathematician Felix Hausdorff. Applying the mathematical formula, the Hausdorff dimension of a single point is zero, of a line segment is 1, of a square is 2, and of a cube is 3. That is, for sets of points that define a smooth shape or a shape that has a small number of corners—the shapes of traditional geometry and science—the Hausdorff dimension is an integer agreeing with the usual sense of dimension, also known as the topological dimension. However, formulas...
Let $a,b \in \Bbb{R}$ be fixed, and let $n \in \Bbb{Z}$. If $[\cdot]$ denotes the greatest integer function, is it possible to bound $|[abn] - [a[bn]]|$ by a constant that is independent of $n$? Are there any nice inequalities with the greatest integer function?
I am trying to show that $n \mapsto [abn]$ and $n \mapsto [a[bn]]$ are equivalent quasi-isometries of $\Bbb{Z}$...that's the motivation. |
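For what it's worth, a quick numerical check supports a bound of roughly $|a|+1$: since $a[bn] = abn - a\{bn\}$ with $\{bn\}\in[0,1)$, the two floors differ by less than $a+1$ when $a>0$. The particular $a$ and $b$ below are arbitrary choices:

```python
import math

def gap(a, b, n):
    """|floor(a*b*n) - floor(a*floor(b*n))|."""
    return abs(math.floor(a * b * n) - math.floor(a * math.floor(b * n)))

# Arbitrary illustrative constants (the derivation assumes a > 0).
a, b = math.pi, math.sqrt(2)
worst = max(gap(a, b, n) for n in range(-10000, 10001))
# The argument above predicts worst < a + 1, i.e. at most 4 here.
```

Since the bound does not depend on $n$, the two maps stay within bounded distance of each other, which is exactly the quasi-isometry condition in the motivation.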
First, you don't place any restrictions on $k$. This will turn out to matter.
Second, "$f(x) = k x^{\frac{1}{2}}$" is not a function. A function presented this way is a domain and a rule/formula/expression for converting elements of the domain into elements of the image. When the domain is missing, it is
normally understood to be the largest subset of whatever set makes sense given the current subject of discourse. You seem to indicate that the real numbers is that set. In this case, the domain of $f(x)$ is $x \geq 0$. (Because there are no challenges to taking square roots of zero and positive numbers.) If $k=0$, we could conceivably argue that the domain is all of the reals, but that will fail to be interesting in the next step, so we set that aside.
Now that we have a fully specified function, we can try to find its inverse. If $k=0$, we're done because $f$ is not injective (equivalently, does not pass "the horizontal line test") so does not have an inverse. (If, for some reason, the domain of $f$ had been a single point, we would be able to continue -- the domain of the inverse will be the output of $f$ at that one point.) Otherwise, if $k>0$, the range of $f$ is $[0,\infty)$ and if $k < 0$, the range of $f$ is $(-\infty,0]$. Using the fact about functions and their inverses you mention, the domain of $f^{-1}$ is either $[0,\infty)$ or $(-\infty,0]$ as $k >0$ or $k < 0$, respectively.
We could go a bit further to check the above. We compute \begin{align*} f(x) &= y = k x^{\frac{1}{2}} \text{, so} \\ f^{-1}(y) &= x = \left( \frac{y}{k} \right)^2 \text{.}\end{align*}If $k>0$, we have $y \geq 0$, and we are looking at a point on the same half of the square function that becomes the upper half of the graph of the square root function under inversion. If $k < 0$, we have $y \leq 0$, and we are looking at the same half of the graph of the square function that becomes the
lower half of the square root function under inversion (as we must since $k<0$ means we are only considering that half). |
I am interested in how to do a rotation about the $x$-axis in QM for spin $s = 1$ system. In an answer to the post we have that for a general rotation in QM where spin $s = 1$ we have the equation:\begin{equation} \begin{aligned} \exp(i\alpha \mathbf{J}\cdot\hat{\mathbf{n}}) & = 1 + i\hat{\mathbf{n}}\cdot\mathbf{J}\sin\alpha + (\hat{\mathbf{n}}\cdot\mathbf{J})^2(\cos\alpha-1) \\& = 1 + \left[2i\hat{\mathbf{n}}\cdot\mathbf{J}\sin(\alpha/2)\right]\cos(\alpha/2) + \frac{1}{2}\left[2i\hat{\mathbf{n}}\cdot\mathbf{J}\sin(\alpha/2)\right]^2, \end{aligned}\end{equation}
Questions: Should the LHS not be $\exp(i\alpha \mathbf{J}\cdot\hat{\mathbf{n}}/2)$ as in the $s = 1/2$ case, where we have $$\exp(-i\frac{\alpha}{2}\vec{\sigma}\cdot\textbf{n}) = \cos\biggl(\frac{\alpha}{2}\biggr)-i\vec{\sigma}\cdot\textbf{n}\sin\biggl(\frac{\alpha}{2}\biggr)?$$ Also, would the idea then be to write $J_x = (J_{+}+J_{-})/2$ in terms of the raising and lowering operators, and then to express this as a matrix in the basis of $J_z$ eigenstates?
No. When writing $\exp(i\alpha \hat n\cdot \vec J)$ one must use matrices $\hat J_x,\hat J_y,\hat J_z$ with the standard commutation relations: $$ [\hat J_x,\hat J_y]=i\hbar \hat J_z\, , \hbox{etc} $$ For $s=1/2$, the matrices that satisfy the commutation relations are $\{\textstyle\frac{1}{2}\sigma_x,\textstyle\frac{1}{2}\sigma_y,\textstyle\frac{1}{2}\sigma_z\}$ rather than $\{\sigma_x,\sigma_y,\sigma_z\}$, hence the need for the $\textstyle\frac{1}{2}$ factor.
Yes in general one would obtain the matrices for $\hat J_x$ and $\hat J_y$, lump them with $\hat J_z$ to construct $\exp(i\alpha \hat n\cdot \vec J)$ and exponentiate. The result does not depend on the basis but the basis of eigenstates of $\hat J_z$ is convenient since the $\hat J_\pm$ in this basis are well known and easy to compute.
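As a concrete check of that recipe (a sketch assuming $\hbar = 1$ and the basis ordering $\vert 1,1\rangle, \vert 1,0\rangle, \vert 1,-1\rangle$), one can build $J_x$ from the ladder matrices and exponentiate using the finite spin-1 expansion quoted in the question:

```python
import numpy as np

# Spin-1 ladder operator J_+ in the basis |1,1>, |1,0>, |1,-1> (hbar = 1);
# J_x = (J_+ + J_-)/2 is then the familiar tridiagonal matrix.
Jp = np.array([[0.0, np.sqrt(2), 0.0],
               [0.0, 0.0, np.sqrt(2)],
               [0.0, 0.0, 0.0]])
Jx = (Jp + Jp.T) / 2

def Rx(beta):
    """exp(-i beta J_x) via the finite spin-1 expansion
    exp(i a n.J) = 1 + i (n.J) sin a + (n.J)^2 (cos a - 1), at a = -beta."""
    return np.eye(3) - 1j * Jx * np.sin(beta) + (Jx @ Jx) * (np.cos(beta) - 1)

beta = 0.7
R = Rx(beta)
# R is unitary and reproduces the standard spin-1 R_x matrix:
# cos^2(beta/2) in the corners, cos(beta) in the centre,
# -i sin(beta)/sqrt(2) on the off-diagonals.
assert np.allclose(R @ R.conj().T, np.eye(3))
assert np.isclose(R[1, 1], np.cos(beta))
assert np.isclose(R[0, 0], np.cos(beta / 2) ** 2)
```

The same construction with $\frac{1}{2}\sigma_x$ in place of $J_x$ (and the half-angle expansion) reproduces the $2\times 2$ matrix above.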
Edit: in answer to a comment, the rotation matrices are usually of the form $$ R_z(\alpha)R_y(\beta)R_z(\gamma)=e^{-i \alpha L_z}e^{-i\beta L_y} e^{-i\gamma L_z} $$ To get $R_x$ one should choose $\alpha=-\pi/2$ and $\gamma=\pi/2$.
In a basis of eigenstates of $\hat J_z$, the rotation $R_x(\beta)=e^{i \pi L_z/2} R_y(\beta) e^{-i\pi L_z/2}$ for $s=1/2$ is given by $$ R_x(\beta)=\left( \begin{array}{cc} \cos \left(\frac{\beta }{2}\right) & -i \sin \left(\frac{\beta }{2}\right) \\ -i \sin \left(\frac{\beta }{2}\right) & \cos \left(\frac{\beta }{2}\right) \\ \end{array} \right)=e^{-i\beta \sigma_x/2}\, , $$ with states ordered as $\vert 1/2,1/2\rangle,\vert 1/2,-1/2\rangle$.
For $\ell=1$ the corresponding result is $$ R_x(\beta)=\left( \begin{array}{ccc} \cos ^2\left(\frac{\beta }{2}\right) & -\frac{i \sin (\beta )}{\sqrt{2}} & -\sin ^2\left(\frac{\beta }{2}\right) \\ -\frac{i \sin (\beta )}{\sqrt{2}} & \cos (\beta ) & -\frac{i \sin (\beta )}{\sqrt{2}} \\ -\sin ^2\left(\frac{\beta }{2}\right) & -\frac{i \sin (\beta )}{\sqrt{2}} & \cos ^2\left(\frac{\beta }{2}\right) \\ \end{array} \right) $$ for the ordering $\vert 1,1\rangle, \vert 1,0\rangle, \vert 1,-1\rangle$ |
The economist John Hicks wrote out Keynes' prose as an economic model that came to be known as the IS-LM model. I already derived this model before in a way that followed the way it is introduced in macroeconomics classes (as an IS and LM market). This derivation will achieve the same result, but approaches it fundamentally as an information transfer market system. The basis for the IS-LM model is that there are two markets: the real economy (IS) and the money market (LM) that couple to each other through the interest rate. As an information transfer system, we'll take a more direct route. We will posit that aggregate investment (demand) is a source of information that sends signals into the market that are detected by interest rate changes. The aggregate investment supply (the money supply) receives this information. See this post for a more detailed description of how information moves around the economy.
Let's start with the market $r : I \rightarrow M$, with interest rate $r$, nominal investment $I$ and money supply $M$ so that
$$
\text{(1) } r = \frac{dI}{dM} = \frac{1}{\kappa} \; \frac{I}{M}
$$
from the basic information transfer model. Looking at constant information source $I = I_{0}$, we have
$$
r = \frac{1}{\kappa} \; \frac{I_{0}}{\langle M\rangle}
$$
where $\langle M \rangle$ is the expected level of the money supply. Solving the differential equation (1), we obtain
$$
\Delta I \equiv I-I_{ref} = \frac{I_{0}}{\kappa}\log \frac{\langle M\rangle}{M_{ref}}
$$
where $ref$ refers to reference values of the variables $I$ and $M$. We can combine the previous two equations into a single function that defines the IS curve:
$$
\text{(2) }\log r = \log \frac{I_{0}}{\kappa M_{ref}} - \kappa \frac{\Delta I}{I_{0}}
$$
We can also look at constant $M = M_{0}$ so that we have, solving the differential equation (1) again:
$$
r = \frac{1}{\kappa} \; \frac{\langle I\rangle}{M_{0}}
$$
$$
\Delta M \equiv M-M_{ref} = \kappa M_{0} \log \frac{\langle I\rangle}{I_{ref}}
$$
where we can eliminate $\langle I\rangle$ to produce (after some re-arranging)
$$
\text{(3) } \log r = - \log \frac{\kappa M_{0}}{I_{ref}} + \frac{\Delta M}{\kappa M_{0}}
$$
Equations (2) and (3) represent how the market adjusts the interest rate to changes in the money supply given fixed investment demand and to changes in the investment demand given fixed money supply, respectively.
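Equations (2) and (3) are straightforward to tabulate. The parameter values below are illustrative assumptions, not fitted values from the post:

```python
import numpy as np

# Illustrative parameter values (assumptions, not fitted to data).
kappa, I0, M0, Iref, Mref = 10.0, 100.0, 100.0, 100.0, 100.0

def log_r_IS(dI):
    """Equation (2): the IS curve, log r as a function of Delta I at fixed I0."""
    return np.log(I0 / (kappa * Mref)) - kappa * dI / I0

def log_r_LM(dM):
    """Equation (3): the LM curve, log r as a function of Delta M at fixed M0."""
    return -np.log(kappa * M0 / Iref) + dM / (kappa * M0)

dI = np.linspace(-20.0, 20.0, 101)
dM = np.linspace(-500.0, 500.0, 101)
is_curve, lm_curve = log_r_IS(dI), log_r_LM(dM)
# The IS curve slopes down in Delta I and the LM curve slopes up in Delta M,
# matching the signs in equations (2) and (3).
```

After the rescaling to common units of $Y$ described below, plotting both arrays against the same axis reproduces the familiar crossing-curves picture.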
One more piece -- we need to relate output $Y$ to investment $I$ and money $M$. I'll add a simple market $p : N \rightarrow I$ where $N$ is NGDP (aggregate demand, sending information to aggregate investment) so that
$$
\frac{dN}{dI} = \frac{1}{\eta} \; \frac{N}{I}
$$
which we solve without holding either constant, resulting in:
$$
N \sim I^{1/\eta}
$$
It turns out empirically (see below), $\eta \simeq 1$ so that $\log N$ is proportional to $\log I$. If we hold the price level constant (we are looking at short run effects [1]), then we can say that $Y = N/P$ is proportional to $I$. This is the first part. To get the relationship with $M$, we'll match first order shifts of the IS and LM curves. If we have a small change in $r = r_{0} + \delta r$, then we have for the IS curve:
$$
\log r_{0} + \frac{\delta r}{r_{0}} + \cdots = \log \frac{I_{0}}{\kappa M_{ref}} - \kappa \frac{\Delta I}{I_{0}}
$$
So that
$$
\frac{\delta r}{r_{0}} \simeq - \kappa\frac{\Delta I}{I_{0}}
$$
Similarly for the LM curve
$$
\frac{\delta r}{r_{0}} \simeq \frac{\Delta M}{\kappa M_{0}}
$$
So that we have (after taking the absolute value as a positive shift of the LM curve causes the interest rate to fall, but a positive shift in the IS curve causes the interest rate to rise)
$$
\Delta I \simeq \frac{I_{0}}{\kappa^{2} M_{0}} \Delta M
$$
Which means $\Delta Y \sim \Delta I \sim \Delta M$ are all proportional to each other and we can scale the IS and LM curves such that they become functions of $Y$. This allows us to plot them on the same graph, like the ISLM model. On the left, we have the relationship between $I$ and $Y$ and on the right, we have the IS-LM model graph (IS curve in blue, LM curve in red):
Empirically, it works best if we take $r \rightarrow r^{c}$ where $c \simeq 1/3$ [2] ($M$ is taken to be the monetary base):
And as promised, here is that plot of investment vs NGDP:
[1] This is effectively where economists' complaint about the IS-LM model not incorporating inflation (by not differentiating real and nominal interest rates) comes in.
[2] This is a fudge that I have yet to figure out. The empirical results for the 10-year and 3-month interest rates are a pretty good motivation. It fits the data. Additionally, it is basically a re-labeling of the interest rates. There is no a priori reason that the "information price" $p_{i}$ that goes into Equation (1) and the real world price represented by the interest rate $r$ need to be related by $r = p_{i}$; any bijective relationship is possible. We encounter this all the time e.g. decibels give us a more intuitive linear feeling of loudness than power. The market treats the cube root of the interest rate as a linear measure of information.
UPDATE 5/22/2015
Fixed the fudge factor. See here. Basically introduced another information equilibrium relationship between $r$ and the price of money $p$ so that the market at the top of the page becomes:
$$
(r \rightarrow p) : I \rightarrow M
$$
And therefore $r \sim p^{1/c}$ |
Here's an answer to the general question, which I wrote up a while ago. It's a common interview question.
The question goes like this: "Say you have X,Y,Z three random variables such that the correlation of X and Y is something and the correlation of Y and Z is something else, what are the possible correlations for X and Z in terms of the other two correlations?"
We'll give a complete answer to this question, using the Cauchy-Schwarz inequality and the fact that $\mathcal{L}^2$ is a Hilbert space.
The Cauchy-Schwarz inequality says that if x,y are two vectors in an inner product space, then
$$\lvert\langle x,y\rangle\rvert \leq \sqrt{\langle x,x\rangle\langle y,y\rangle}$$
This is used to justify the notion of an ''angle'' in abstract vector spaces, since it gives the constraint
$$-1 \leq \frac{\langle x,y\rangle}{\sqrt{\langle x,x\rangle\langle y,y\rangle}} \leq 1$$which means we can interpret it as the cosine of the angle between the vectors x and y.
A Hilbert space is a complete inner product space (possibly infinite dimensional). The important thing for this post is that the inner product allows us to do geometry with the vectors, which in this case are random variables. We'll take for granted that the space of mean 0, finite-variance random variables is a Hilbert space, with inner product $\mathbb{E}[XY]$. Note that, in particular,
$$\frac{\langle X,Y\rangle}{\sqrt{\langle X,X\rangle\langle Y,Y\rangle}} = \text{Cor}(X,Y)$$
This often leads people to say that ''correlations are cosines'', which is intuitively true, though they certainly aren't the cosines we naturally picture (this space is infinite dimensional). Still, all of the usual laws (the Pythagorean theorem, the law of cosines) hold if we define the cosine of the angle between two random variables this way, thinking of their lengths as their standard deviations in this vector space.
Because this space is a Hilbert space, we can do all of the geometry that we did in high school, such as projecting vectors onto one another, doing orthogonal decomposition, etc. To solve this question, we use orthogonal decomposition, which is often called the ''uncorrelation trick'' in statistics and consists of writing a random variable as a function of another random variable plus a random variable that is uncorrelated with the second random variable. This is especially useful in the case of multivariate normal random variables, when two components being uncorrelated implies independence.
Okay, let's suppose that we know that the correlation of X and Y is $p_{xy}$, the correlation of Y and Z is $p_{yz}$, and we want to know the correlation of X and Z, which we'll call $p_{xz}$. Note that we don't lose generality by assuming mean 0 and variance 1 as scaling and translating vectors doesn't affect their correlations. We can then write that:
$$X = \langle X,Y\rangle Y + O^X_Y$$
$$Z = \langle Z,Y\rangle Y + O^Z_Y$$
where $\langle \cdot,\cdot\rangle$ stands for the inner product on the space and the $O$ are uncorrelated with Y. Then, we take the inner product of $X,Z$ which is the correlation we're looking for, since everything has variance 1. We have that
$$\langle X,Z\rangle = p_{xz} = \langle p_{xy}Y+O^X_Y,p_{yz}Y+O^Z_Y\rangle = p_{xy}p_{yz}+\langle O^X_Y,O^Z_Y\rangle$$
since the variance of Y is 1 and the other terms of this bilinear expansion are orthogonal and hence have 0 covariance. We can now apply the Cauchy-Schwarz inequality to the last term above to get that
$$p_{xz} \leq p_{xy}p_{yz} + \sqrt{(1-p_{xy}^2)(1-p_{yz}^2)}$$
$$p_{xz} \geq p_{xy}p_{yz} - \sqrt{(1-p_{xy}^2)(1-p_{yz}^2)}$$
where the fact that
$$\langle O^X_Y,O^X_Y\rangle = 1-p_{xy}^2$$
comes from the equation setting the variance of X equal to 1 or
$$1 = \langle X,X\rangle = \langle p_{xy}Y + O^X_Y,p_{xy}Y+O^X_Y\rangle = p_{xy}^2 + \langle O^X_Y,O^X_Y\rangle$$
and the exact same thing can be done for $O^Z_Y$.
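The bound can also be checked by simulation; the mixing coefficients below are arbitrary choices, not part of the derivation:

```python
import numpy as np

rng = np.random.default_rng(0)

def corr(u, v):
    return float(np.corrcoef(u, v)[0, 1])

# Build X and Z from a shared Y plus independent noise (coefficients are
# arbitrary), then check the interval derived above for the realized
# sample correlations.
n = 200_000
Y = rng.standard_normal(n)
X = 0.6 * Y + 0.8 * rng.standard_normal(n)
Z = -0.3 * Y + np.sqrt(1 - 0.3 ** 2) * rng.standard_normal(n)

p_xy, p_yz, p_xz = corr(X, Y), corr(Y, Z), corr(X, Z)
half_width = np.sqrt((1 - p_xy ** 2) * (1 - p_yz ** 2))
# Any correlation matrix is positive semidefinite, so the bound holds
# exactly for sample correlations too, not just in expectation.
assert p_xy * p_yz - half_width <= p_xz <= p_xy * p_yz + half_width
```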
So we have our answer. Sorry this was so long. |
I am trying to solve a system of equations and have a question regarding the validity of my approach when implementing a fifth-order Cash-Karp Runge-Kutta (CKRK) embedded method with the method of lines. To give the questions some context, let me state the problem I am attempting to solve:
$$ \frac {\partial E}{\partial z} = - \frac {1}{c^2}\frac {\partial E}{\partial t} - \frac{1}{k} \frac {\partial ^2 E}{\partial z^2} - \frac{1}{kc^2} \frac {\partial^2 E}{\partial t^2} + iP \tag{1} $$ $$ \\ \frac {\partial P}{\partial t} = iNE^* \tag{2}\\ $$ $$ \frac {\partial N}{\partial t} = iPE \tag{3} $$
$$ E(z=0) = \frac{\partial E}{\partial z}(z=0) = E(t=0) = \frac{\partial E}{\partial t}(t=0) = 0,\\ P(t=0) = P_0e^{z/c}, N(t=0) = N_0e^{z/c} $$
where $c = 3 \times 10^8$, $k = 1000$, $P_0$ and $N_0$ are constants, $i=\sqrt{-1}$; $0 \leq t \leq 1000$, $0 \leq z \leq 1000$.
I am implementing CKRK on the above, and even though the first spatial derivative of $E$ depends on the second spatial derivative of $E$, the numerical method appears to work when solving (1)-(3) when I use the scheme of approximating the time and spatial derivatives of $E$ on the right hand side of (1) by a backward difference approximation (I am using an accuracy of 5).
To switch (1) above to a system of first order spatial derivatives in $z$, I could make the substitution:
$$ U = \frac {\partial E}{\partial z} $$
and solve the following equations for $E$ instead:
$$ U = \frac {\partial E}{\partial z} \tag{4}\\ $$ $$\frac {\partial U}{\partial z} = - \frac {k}{c^2}\frac {\partial E}{\partial t} - kU - \frac{1}{c^2}\frac {\partial^2 E}{\partial t^2} + kP \tag{5} $$
But when testing these same initial/boundary conditions using the same numerical method on the coupled equations (2) - (5), the code takes too long to finish (the step sizes required become extremely small). I believe it is due to the fact that the coefficients on the right hand side of (5) are very large and cause stability issues. I have tried to rescale the values for $z,t,P,N,$ and $E$, but doing so causes one of the other coupled equations to become unstable or has no effect (e.g. scaling $z$ does nothing to the value $ E = U\Delta z$ since both $U$ and $\Delta z$ would scale reciprocally and cancel any effect). It is due to similar reasons I am solving $E$ in the $z$-direction as opposed to doing the substitution $U = \frac {\partial E}{\partial t}$ and solving it in $t$ which is the standard method of lines approach (when I tried this method, the $\Delta t$ given by CKRK becomes very small).
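For reference, a single Cash-Karp step is compact enough to write out in full. The coefficients below are the standard tableau as tabulated e.g. in Numerical Recipes, and the $y'=y$ test problem is just an illustration of the stepper, not the PDE system above:

```python
import math

# Cash-Karp embedded Runge-Kutta coefficients (standard tableau).
A = [
    [],
    [1/5],
    [3/40, 9/40],
    [3/10, -9/10, 6/5],
    [-11/54, 5/2, -70/27, 35/27],
    [1631/55296, 175/512, 575/13824, 44275/110592, 253/4096],
]
C = [0, 1/5, 3/10, 3/5, 1, 7/8]
B5 = [37/378, 0, 250/621, 125/594, 0, 512/1771]                  # 5th order
B4 = [2825/27648, 0, 18575/48384, 13525/55296, 277/14336, 1/4]   # embedded 4th

def ck_step(f, x, y, h):
    """One Cash-Karp step: returns the 5th-order update and the embedded
    error estimate that would drive step-size control."""
    k = []
    for i in range(6):
        yi = y + h * sum(a * kj for a, kj in zip(A[i], k))
        k.append(f(x + C[i] * h, yi))
    y5 = y + h * sum(b * kj for b, kj in zip(B5, k))
    y4 = y + h * sum(b * kj for b, kj in zip(B4, k))
    return y5, abs(y5 - y4)

# Illustration on y' = y, y(0) = 1, integrated to x = 1 with fixed steps.
y, x, h = 1.0, 0.0, 0.01
for _ in range(100):
    y, err = ck_step(lambda x, y: y, x, y, h)
    x += h
```

The same stepper applies componentwise to a method-of-lines vector; the `err` output is what shrinks the step when the right-hand side is stiff, which is consistent with the tiny steps observed for equations (2)-(5).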
So ultimately, instead of using equations (2) - (5), I was wondering if applying CKRK to (1) - (3) is still a valid approach where I approximate the derivatives of $E$ on the right-side of (1) by backward finite differences? It seems very odd to apply CKRK to a first order spatial derivative that depends on an approximation of the second order spatial derivative, but is this wrong? (I would be using stored intermediate values of $E$ to ensure the backward finite difference approximations are also following the Runge-Kutta method.) |
Zeta-function method for regularization
Regularization and renormalization procedures are essential issues in contemporary physics — without which it would simply not exist, at least in the form known today (2000). They are also essential in supersymmetry calculations. Among the different methods, zeta-function regularization — which is obtained by analytic continuation in the complex plane of the zeta-function of the relevant physical operator in each case — might well be the most beautiful of all. Use of this method yields, for instance, the vacuum energy corresponding to a quantum physical system (with constraints of any kind, in principle). Assuming the corresponding Hamiltonian operator, $H$, has a spectral decomposition of the form (think, as simplest case, of a quantum harmonic oscillator) $H\varphi_n = \lambda_n \varphi_n$, with $n \in I$ some set of indices (which can be discrete, continuous, mixed, multiple, etc.), then the quantum vacuum energy is obtained as follows [a5], [a6]:
$$E_0 = \frac{1}{2}\sum_{n\in I}\lambda_n = \frac{1}{2}\zeta_H(-1),$$
where $\zeta_H(s) = \sum_{n\in I}\lambda_n^{-s}$ is the zeta-function corresponding to the operator $H$. The formal sum over the eigenvalues is usually ill-defined, and the last step involves analytic continuation, inherent to the definition of the zeta-function itself. These mathematically simple-looking relations involve very deep physical concepts (no wonder that understanding them took several decades in the recent history of quantum field theory, QFT). The zeta-function method is unchallenged at the one-loop level, where it is rigorously defined and where many calculations of QFT reduce basically (from a mathematical point of view) to the computation of determinants of elliptic pseudo-differential operators ($\Psi$DOs, cf. also Pseudo-differential operator) [a2]. It is thus no surprise that the preferred definition of determinant for such operators is obtained through the corresponding zeta-function.
When one comes to specific calculations, the zeta-function regularization method relies on the existence of simple formulas for obtaining the analytic continuation above. These consist of the reflection formula of the corresponding zeta-function in each case, together with some other fundamental expressions, as the Jacobi theta-function identity, Poisson's resummation formula and the famous Chowla–Selberg formula [a2]. However, some of these formulas are restricted to very specific zeta-functions, and it often turned out that for some physically important cases the corresponding formulas did not exist in the literature. This has required a painful process (it has taken over a decade already) of generalization of previous results and derivation of new expressions of this kind [a5], [a6], [a1].
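To illustrate the role played by the reflection formula, the functional equation of the Riemann zeta-function, $\zeta(s) = 2^s\pi^{s-1}\sin(\pi s/2)\,\Gamma(1-s)\,\zeta(1-s)$, continues the defining series to $s<0$; in particular it assigns the formal eigenvalue sum $1+2+3+\cdots$ the value $\zeta(-1)=-\frac{1}{12}$:

```python
import math

def zeta_series(s, terms=200000):
    """Dirichlet series for zeta(s); converges only for s > 1."""
    return sum(n ** -s for n in range(1, terms + 1))

def zeta_reflected(s):
    """Analytic continuation of zeta to s < 0 via the reflection formula
    zeta(s) = 2^s pi^(s-1) sin(pi s / 2) Gamma(1-s) zeta(1-s)."""
    return (2 ** s * math.pi ** (s - 1) * math.sin(math.pi * s / 2)
            * math.gamma(1 - s) * zeta_series(1 - s))

# The regularized value of the divergent eigenvalue sum 1 + 2 + 3 + ...
val = zeta_reflected(-1.0)   # approximately -1/12
```

The divergent left-hand side never appears: only the convergent series at $1-s$ and elementary factors are ever evaluated, which is the essence of the continuation step above.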
zeta regularization for integrals
The zeta function regularization may be extended in order to include divergent integrals \begin{equation} \int_{a}^{\infty}x^{m}dx \qquad m >0 \end{equation} by using the recurrence equation
\begin{equation} \begin{array}{l} \int\nolimits_{a}^{\infty }x^{m-s} dx =\frac{m-s}{2} \int\nolimits_{a}^{\infty }x^{m-1-s} dx +\zeta (s-m)-\sum\limits_{i=1}^{a}i^{m-s} +a^{m-s} \\ -\sum\limits_{r=1}^{\infty }\frac{B_{2r} \Gamma (m-s+1)}{(2r)!\Gamma (m-2r+2-s)} (m-2r+1-s)\int\nolimits_{a}^{\infty }x^{m-2r-s} dx \end{array} \end{equation}
This is the natural extension of the zeta regularization algorithm to integrals. The recurrence is finite, since for \begin{equation} m-2r < -1, \qquad \int_{a}^{\infty}x^{m-2r}\,dx= -\frac{a^{m-2r+1}}{m-2r+1} \end{equation} the integrals inside the recurrence equation are convergent.
References
[a1] A.A. Bytsenko, G. Cognola, L. Vanzo, S. Zerbini, "Quantum fields and extended objects in space-times with constant curvature spatial section", Phys. Rept., 266 (1996) pp. 1–126
[a2] E. Elizalde, "Multidimensional extension of the generalized Chowla–Selberg formula", Commun. Math. Phys., 198 (1998) pp. 83–95
[a3] S.W. Hawking, "Zeta function regularization of path integrals in curved space time", Commun. Math. Phys., 55 (1977) pp. 133–148
[a4] M. Nakahara, "Geometry, topology, and physics", Inst. Phys. (1995) pp. 7–8
[a5] E. Elizalde, S.D. Odintsov, A. Romeo, A.A. Bytsenko, S. Zerbini, "Zeta regularization techniques with applications", World Sci. (1994)
[a6] E. Elizalde, "Ten physical applications of spectral zeta functions", Springer (1995)
How to Cite This Entry:
Zeta-function method for regularization.
Encyclopedia of Mathematics.URL: http://www.encyclopediaofmath.org/index.php?title=Zeta-function_method_for_regularization&oldid=29548 |
I was playing around with a 3-D potential $V$ such that $V_{(r)} = 0$ for $r<a$, and $V_{(r)} = V_0>0$ otherwise. By using the Schrödinger Equation, I showed that: $$-\frac{\hbar^2}{2m}\frac{1}{r^2}\frac{d}{dr}\bigl( r^2\frac{d}{dr}\bigr)\psi = E\psi$$
I then used the substitution $\psi_{(r)}=f_{(r)}/r$ and $k=\sqrt{2mE}/\hbar$ to get: $$\frac{1}{r}\frac{d^2f_{(r)}}{dr^2}=-\frac{k^2}{r}f_{(r)} \tag{I}$$
which describes the wavefunction $\psi_{(r)}=f_{(r)}/r$ inside the sphere. Hence, the differential equation has the domain $0\leq r<a$, and I cannot multiply both sides by $r$. This is unfortunate, because there is a similar equation for the outside of the sphere: $$\frac{1}{r}\frac{d^2f_{(r)}}{dr^2}=\frac{k'^2}{r}f_{(r)}$$ As this is outside the sphere, I can multiply both sides by $r$ to get a familiar differential equation that can be solved easily: $$\frac{d^2f_{(r)}}{dr^2}=k'^2f_{(r)}$$
If I do the same thing to $(I)$, I obtain the equation for simple harmonic motion, but substituting the solution back into $(I)$ as a sanity check gives a division by zero when evaluating for $r=0$. After that, I tried a number of substitutions to make $(I)$ have a more recognisable form - to no avail. Then I had the idea of multiplying my trial solution by some other function of $r$ so that upon substitution into $(I)$, the evaluation of $r=0$ doesn't give an infinity... but I don't know quite how to do that...
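One candidate worth testing is $f(r)=\sin(kr)$, which satisfies $f''=-k^2 f$ and vanishes at the origin; numerically, $\psi = \sin(kr)/r$ then stays finite as $r\to 0$ (the value of $k$ below is arbitrary):

```python
import math

k = 2.0  # arbitrary illustrative wavenumber

def psi(r):
    """Candidate interior wavefunction psi(r) = sin(k r) / r, for r > 0."""
    return math.sin(k * r) / r

# sin(k r)/r -> k as r -> 0, so the candidate stays bounded at the origin
# even though the 1/r factor alone blows up.
values = [psi(10.0 ** -m) for m in range(1, 9)]
```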
Long story short..... my question is: what trick do I need to get a meaningful solution to $(I)$? |
In my master thesis I used an Algorithm called
Approximative Dynamic Programming [1] to solve equations of the form
$$ \max_{\pi}\mathbb{E}^{\pi}\left\{\sum_{t=0}^{T}\gamma^tC_t^{\pi}(S_t,A_t^{\pi}(S_t))\right\}. $$
It uses Monte-Carlo sampling and an approximation $\overline{V}_t$ of the value function to get around the curse of dimensionality. The decision function is given by a convex linear program
$$ \hat{v}_t^n=\max_{a_t\in A_t^n}\left(C_t(S_t^n,a_t)+\gamma \overline{V}_{t+1}^{n-1}(S^M(S_t^n,a_t))\right), $$
and the update is done by using a stepsize $\alpha$
$$ \overline{V}_t^n(S_t^n)=(1-\alpha_{n-1})\overline{V}_t^{n-1}(S_t^n)+\alpha_{n-1}\hat{v}_t^n. $$ I now wonder whether there exist similar algorithms, beyond the many flavours of this one, that deliver high-quality solutions in reasonable time, or whether the principle used here is the only one for solving problems of this kind?
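For concreteness, the smoothing update can be sketched in a few lines; the observed values $\hat v$ below are synthetic stand-ins, and the harmonic stepsize $\alpha_n = 1/n$ is one common choice:

```python
import random

random.seed(0)

# Smoothing update V^n = (1 - a_{n-1}) V^{n-1} + a_{n-1} * v_hat, with the
# harmonic stepsize a_n = 1/n; the observations v_hat are synthetic stand-ins
# for the sampled values of being in one fixed state.
V = 0.0
obs = []
for n in range(1, 2001):
    v_hat = 5.0 + random.gauss(0.0, 1.0)
    alpha = 1.0 / n
    V = (1 - alpha) * V + alpha * v_hat
    obs.append(v_hat)
# With a_n = 1/n this recursion reproduces the running sample mean exactly,
# so V converges to the true expected value (here 5) at the Monte-Carlo rate.
```

Constant stepsizes trade this consistency for faster tracking of nonstationary value estimates, which is the usual tuning dilemma in this class of algorithms.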
Cheers, Reza
[1] POWELL, W. : Approximate Dynamic Programming: Solving the curses of dimensionality. Bd. 703. Wiley-Blackwell, 2007 |
(Sorry was asleep at that time but forgot to log out, hence the apparent lack of response)
Yes you can (since $k=\frac{2\pi}{\lambda}$). To convert from path difference to phase difference, multiply by $k$, see this PSE for details http://physics.stackexchange.com/questions/75882/what-is-the-difference-between-phase-difference-and-path-difference
''Diamond Paradox'' by Diamond (1971)
This is a lesser-known paradox, usually posed as a counter to the famous Bertrand paradox. It is a starting point of the literature on informational frictions in consumer markets, and researchers in the field agree on its significance.
Its idea is diametrically opposite to that of Bertrand. Consider the following simple example. There are $2$ firms which produce homogeneous goods at zero marginal cost and compete in prices, $p$. They simultaneously set prices. Also there is a single consumer whose demand is given by $1-p$. Importantly, the consumer does not observe the prices set by the firms and, therefore, needs to search for them sequentially, where search is costly. Suppose that the cost of visiting a firm is given by $0 < c \leq \frac{1}{2}$. Then, the unique equilibrium of the market is that both firms charge the monopoly price $$p^M= \frac{1}{2}.$$
This is a diametrically opposite result to that of Bertrand.
The reasoning behind the result is as follows. Suppose both firms charge $p=0$. Then, the consumer randomly visits one of the firms, say firm $i$, and buys. However, firm $i$ could have charged $c$ and made positive profits, as the consumer would have bought anyway: she would have suffered cost $c$ had she left firm $i$ to buy from the rival firm. By the same argument, one can see that $p=c$ cannot be an equilibrium, as now firm $i$ can charge $2c$ and improve its profit. Continuing this way, it is easy to arrive at an equilibrium where both firms charge $p^M$. A firm does not want to charge $p^M+c$ simply because its profit is maximized at $p^M$.
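The upward-creep argument can be sketched as a best-response iteration (a toy illustration of the reasoning above, with an arbitrary $c$):

```python
# Sketch of the reverse-undercutting logic: starting from any common price
# below the monopoly price, a firm can profitably raise its price by up to
# the search cost c, until the monopoly price p^M = 1/2 is reached.
def best_response(p_other, c):
    """Best reply to a rival price, for demand 1 - p and zero marginal cost:
    raise the price by c (the consumer still buys rather than pay the search
    cost), but never past the unconstrained argmax of p * (1 - p)."""
    p_monopoly = 0.5
    return min(p_other + c, p_monopoly)

c = 0.05
p = 0.0
for _ in range(100):
    p = best_response(p, c)
# p has converged to the monopoly price and stays there.
```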
Formal Analysis of the Example
Timing: First, the firms simultaneously set prices. Second, the consumer, without knowing prices, engages in sequential search. The first search is free and the consumer visits each firm with equal probability. The consumer can come back to a previously searched firm for free. The consumer has to observe a firm's price in order to buy goods from that firm.
Beliefs: In equilibrium, the consumer has correct beliefs about the strategies of the firms. If, upon visiting a firm, she observes a price different from the equilibrium one, the consumer assumes that the rival firm has deviated to the same price too. Thus, the consumer has symmetric (out-of-equilibrium) beliefs. Note: the result of the game does not change if the consumer has passive beliefs.
Strategies: Strategies of the firms are prices. As mixing is allowed, let $F(p)$ represent the probability that a firm charges a price no greater than $p$. Strategy of the consumer is whether to search for the second price, upon observing the first one. This strategy is given by a reservation price $r$, such that upon observing a price lower than $r$ she buys outright, upon observing a price greater than $r$ she searches further, and upon observing a price equal to $r$ she is indifferent between buying immediately and searching further.
Equilibrium Notion: The concept of Perfect Bayesian Equilibrium (PBE) is employed. A PBE is characterized by a price distribution $F(p)$ for each firm and the consumer's reservation-price strategy $r$ such that (i) each firm chooses $F(p)$ to maximize its profit, given the equilibrium strategy of the other firm and the consumer's optimal search strategy, and (ii) the consumer searches according to the reservation-price rule $r$, given correct beliefs about the firms' equilibrium strategies.
Theorem: For any $c>0$, there exists a PBE characterized by triple $(p^M, p^M, r)$, where $p^M$'s are charged with probability $1$ and $$r=1.$$
Proof: First, I prove that $r=1$, i.e., that the consumer buys outright whenever she observes a price lower than $1$. Clearly, if she observes a price greater than $1$ she does not buy from that firm, as this would yield her a negative payoff. Now, suppose she observes price $p'<r$. Then she expects the rival firm to charge $p'$ too. Thus, if she buys outright her payoff is $\int_{p'}^{1}(1-p)dp$, and if she searches she expects a payoff of $\int_{p'}^{1}(1-p)dp - c$. As the former exceeds the latter, she is better off buying immediately. This proves that $r=1$.
Next, I prove that both firms charge $p^M$. Clearly, a firm never charges above $1$, as it would never sell. The expected profit of a firm is then $\frac{1}{2}(1-p)p$, because the consumer visits each firm half of the time. It is easy to see that this profit is maximized at $p^M$.
QED. |
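The final profit-maximisation step can be verified symbolically. A minimal sketch (assuming, as the linear demand $1-p$ suggests, that the monopoly price works out to $p^M = 1/2$):

```python
# Sanity check of the last step: with demand 1 - p and a fifty-fifty chance
# of being visited, a firm's expected profit is (1/2)(1-p)p.
import sympy as sp

p = sp.symbols('p')
profit = sp.Rational(1, 2) * (1 - p) * p

critical = sp.solve(sp.diff(profit, p), p)  # first-order condition: [1/2]
second = sp.diff(profit, p, 2)              # concavity check: -1 < 0
```

The unique critical point is $p = 1/2$ and the profit function is concave, confirming the interior maximum claimed in the proof.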
For context, I'm studying the paper Coulomb blockade in superconducting quantum point contacts by Averin from 1998. Specifically, I am trying to find how he obtains equation 11 from equation 10, which gives the Landau Zener probability of ending up in a specific branch of the Josephson potential of a superconducting QPC.
Equation 10, describing the Schrödinger equation of the problem in the specific limit under consideration, is given by a system of two coupled first-order ODEs: \begin{equation} 2\sqrt{\frac{E_C}{\Delta}} \frac{\partial\psi_s}{\partial x} = -s x \psi_s/2 + \sqrt{R} \psi_{-s} \end{equation} where $s=\pm1$.
If I (for convenience) now take $A = 2\sqrt{\frac{E_C}{\Delta}}$ and $B = \sqrt{R}$, then substituting one differential equation into the other, one finds a second-order ODE that is solved by parabolic cylinder functions. Specifically, \begin{equation} \psi_{-1} = c_1 D_{\frac{-B^2-A}{A}}\left(\frac{x}{\sqrt{A}}\right)+c_2 D_{\frac{B^2}{A}}\left(\frac{i x}{\sqrt{A}}\right) \end{equation}
and
\begin{equation} \psi_1 = -\frac{1}{B}\left(x c_2 D_{\frac{B^2}{A}}\left(\frac{i x}{\sqrt{A}}\right)+\sqrt{A} \left[c_1 D_{-\frac{B^2}{A}}\left(\frac{x}{\sqrt{A}}\right)+i c_2 D_{\frac{B^2+A}{A}}\left(\frac{i x}{\sqrt{A}}\right)\right]\right) \\ \end{equation}
Now, in his paper he then says that by evaluating the asymptotes of these functions, one can find the probability $w = |\psi_{-1}(\infty)|^2$ for the state $s=1$ starting at $x \rightarrow -\infty$ to end up in the state $s=-1$ at $x\rightarrow \infty$. This leads to equation 11, which he writes as \begin{equation} w = |\psi_{-1}(\infty)|^2 = \frac{1}{\Gamma(\lambda)}\sqrt{\frac{2\pi}{\lambda}}\left(\frac{\lambda}{e}\right)^\lambda \end{equation}
where $\lambda = \frac{R}{2\sqrt{E_C/\Delta}}$, and using our substitutions, we can identify $\lambda = B^2/A$, which we already saw occur in the solution for $\psi_{-1}$.
My question is how one obtains this. In terms of the mathematics, we need to choose boundary conditions (setting $c_1$ and $c_2$) such that $|\psi_1(-\infty)|^2 = 1$ and $|\psi_{-1}(-\infty)|^2 = 0$, and then evaluate $|\psi_{-1}(\infty)|^2$ to get $w$. But I can't seem to figure out how to properly find the values of $c_1$ and $c_2$. I imagined that taking the limit $x\rightarrow-\infty$ would see one of the parabolic cylinder functions go to zero while the other does not, so as to satisfy $|\psi_{-1}(-\infty)|^2 = 0$, and then I would normalize the remaining coefficient so that $|\psi_1(-\infty)|^2 = 1$ is satisfied. Then evaluating $1 - |\psi_{1}(\infty)|^2$ should give the solution, but evaluating these asymptotes does not seem to be working out.
One hint is perhaps already found in the form of the answer itself, which clearly looks like some form of a Stirling approximation/Gamma function in $\lambda$, but I can't make sense of it. Moreover, I should note that this problem is essentially a Landau Zener tunneling problem for those who are familiar with that.
Some code for evaluating the ODE's:
ff = f[x] /. Solve[A*g'[x] == x*g[x]/2 + B*f[x], f[x]];
dff = D[ff, x];
gg = g[x] /. DSolve[{A*f'[x] == -x*f[x]/2 + B*g[x] /. {f[x] -> ff, f'[x] -> dff}}, g[x], x];
dgg = D[gg, x];
ff = FullSimplify[ff /. {g[x] -> gg, g'[x] -> dgg}] |
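One quick cross-check (not from the paper; the values of $A$ and $B$ below are arbitrary) is that the first parabolic cylinder term really does solve the decoupled second-order ODE $\psi'' = \left(\frac{x^2}{4A^2} + \frac{1}{2A} + \frac{B^2}{A^2}\right)\psi$ obtained by eliminating $\psi_1$ from the coupled system:

```python
# Numeric check (arbitrary positive A, B) that D_{(-B^2-A)/A}(x/sqrt(A))
# satisfies psi'' = (x^2/(4A^2) + 1/(2A) + B^2/A^2) psi, the second-order
# ODE obtained by eliminating psi_1 from the coupled first-order system.
import mpmath as mp

A, B = mp.mpf('0.7'), mp.mpf('0.4')
nu = (-B**2 - A) / A

def psi(x):
    return mp.pcfd(nu, x / mp.sqrt(A))

x0 = mp.mpf('0.9')
lhs = mp.diff(psi, x0, 2)   # numerical second derivative
rhs = (x0**2 / (4*A**2) + 1/(2*A) + B**2/A**2) * psi(x0)
residual = abs(lhs - rhs)   # should be near machine precision
```

This at least confirms the indices of the $D_\nu$ solutions before chasing the asymptotics.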
And I think people said that reading first chapter of Do Carmo mostly fixed the problems in that regard. The only person I asked about the second pset said that his main difficulty was in solving the ODEs
Yeah here there's the double whammy in grad school that every grad student has to take the full year of algebra/analysis/topology, while a number of them already don't care much for some subset, and then they only have to pass the class rather than do well in it
I know 2 years ago apparently it mostly avoided commutative algebra, half because the professor himself doesn't seem to like it that much and half because he was like yeah the algebraists all place out so I'm assuming everyone here is an analyst and doesn't care about commutative algebra
Then the year after another guy taught and made it mostly commutative algebra + a bit of varieties + Cech cohomology at the end from nowhere and everyone was like uhhh. Then apparently this year was more of an experiment, in part from requests to make things more geometric
It's got 3 "underground" floors (quotation marks because the place is on a very tall hill so the first 3 floors are a good bit above the street), and then 9 floors above ground. The grad lounge is on the top floor and overlooks the city and lake, it's real nice
The basement floors have the library and all the classrooms (each of them has a lot more area than the higher ones), floor 1 is basically just the entrance, I'm not sure what's on the second floor, 3-8 is all offices, and 9 has the grad lounge mainly
And then there's one weird area called the math bunker that's trickier to access, you have to leave the building from the first floor, head outside (still walking on the roof of the basement floors), go to this other structure, and then get in. Some number of grad student cubicles are there (other grad students get offices in the main building)
It's hard to get a feel for which places are good at undergrad math. Highly ranked places are known for having good researchers but there's no "How well does this place teach?" ranking which is kinda more relevant if you're an undergrad
I think interest might have started the trend, though it is true that grad admissions now is starting to make it closer to an expectation (friends of mine say that for experimental physics, classes and all definitely don't cut it anymore)
In math I don't have a clear picture. It seems there are a lot of Mickey Mouse projects that don't seem to help people much, but more and more people seem to do more serious things and that seems to be becoming a bonus
One of my professors said it to describe a bunch of REUs, basically boils down to problems that some of these give their students which nobody really cares about but which undergrads could work on and get a paper out of
@TedShifrin i think universities have been ostensibly a game of credentialism for a long time, they just used to be gated off to a lot more people than they are now (see: ppl from backgrounds like mine) and now that budgets shrink to nothing (while administrative costs balloon) the problem gets harder and harder for students
In order to show that $x=0$ is asymptotically stable, one needs to show that $$\forall \varepsilon > 0, \; \exists\, T > 0 \; \mathrm{s.t.} \; t > T \implies || x ( t ) - 0 || < \varepsilon.$$The intuitive sketch of the proof is that one has to fit a sublevel set of continuous functions $...
"If $U$ is a domain in $\Bbb C$ and $K$ is a compact subset of $U$, then for all holomorphic functions on $U$, we have $\sup_{z \in K}|f(z)| \leq C_K \|f\|_{L^2(U)}$ with $C_K$ depending only on $K$ and $U$" this took me way longer than it should have
Well, $A$ has these two distinct eigenvalues, meaning that $A$ can be diagonalised to a diagonal matrix with these two values on its diagonal. What will that mean when it is multiplied with a given vector $(x,y)$, and how will the magnitude of that vector change?
Alternately, compute the operator norm of $A$ and see if it is larger or smaller than 2, 1/2
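To make the hint concrete with a made-up matrix (the $A$ under discussion is not given here), the operator norm is the largest singular value, and it bounds how much $A$ can stretch any vector:

```python
# Illustration with a hypothetical diagonalisable A (eigenvalues 3 and 1/3):
# the operator norm (largest singular value) bounds the stretching of vectors.
import numpy as np

A = np.diag([3.0, 1.0 / 3.0])
opnorm = np.linalg.norm(A, 2)          # matrix 2-norm = largest singular value
v = np.array([1.0, 0.0])               # eigenvector for eigenvalue 3
stretch = np.linalg.norm(A @ v) / np.linalg.norm(v)   # attains the norm
```

Vectors along the other eigenvector are shrunk by a factor of 3 instead, which is the dichotomy the hint is pointing at.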
Generally speaking, given $\alpha=a+b\sqrt{\delta}$, $\beta=c+d\sqrt{\delta}$, we have that multiplication (which I am writing as $\otimes$) is $\alpha\otimes\beta=(a\cdot c+b\cdot d\cdot\delta)+(b\cdot c+a\cdot d)\sqrt{\delta}$
Yep, the reason I am exploring alternative routes of showing associativity is because writing out three elements worth of variables is taking up more than a single line in Latex, and that is really bugging my desire to keep things straight.
hmm... I wonder if you can argue about the rationals forming a ring (hence using commutativity, associativity and distributivity). You cannot do that for the field you are calculating, but you might be able to take shortcuts by using the multiplication rule and then properties of the ring $\Bbb{Q}$
for example writing $x = ac+bd\delta$ and $y = bc+ad$ we then have $(\alpha \otimes \beta) \otimes \gamma = (xe +yf\delta) + (ye + xf)\sqrt{\delta}$ and then you can argue with the ring property of $\Bbb{Q}$ thus allowing you to deduce $\alpha \otimes (\beta \otimes \gamma)$
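Since the shortcut leans on the ring axioms of $\Bbb{Q}$, a quick exact-arithmetic spot check of associativity for the multiplication rule above (with an arbitrary $\delta$) is easy to script:

```python
# Spot-check associativity of (a + b*sqrt(delta)) multiplication, with
# elements represented as coefficient pairs (a, b), in exact rationals.
from fractions import Fraction

delta = Fraction(5)                    # arbitrary non-square choice

def mul(x, y):
    a, b = x
    c, d = y
    return (a * c + b * d * delta, b * c + a * d)

alpha = (Fraction(1), Fraction(2))
beta = (Fraction(3), Fraction(-1))
gamma = (Fraction(-2), Fraction(4))

left = mul(mul(alpha, beta), gamma)    # (alpha ⊗ beta) ⊗ gamma
right = mul(alpha, mul(beta, gamma))   # alpha ⊗ (beta ⊗ gamma)
```

A spot check is of course not a proof, but it catches sign slips in the hand computation before committing them to LaTeX.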
I feel like there's a vague consensus that an arithmetic statement is "provable" if and only if ZFC proves it. But I wonder what makes ZFC so great, that it's the standard working theory by which we judge everything.
I'm not sure if I'm making any sense. Let me know if I should either clarify what I mean or shut up. :D
Associativity proofs in general have no shortcuts for arbitrary algebraic systems; that is why non-associative algebras are more complicated and need things like Lie algebra machinery and morphisms to make sense of them
One aspect, which I will illustrate, of the "push-button" efficacy of Isabelle/HOL is its automation of the classic "diagonalization" argument by Cantor (recall that this states that there is no surjection from the naturals to its power set, or more generally any set to its power set).theorem ...
The axiom of triviality is also used extensively in computer verification languages... take Cantor's Diagnolization theorem. It is obvious.
(but seriously, the best tactic is over powered...)
Extensions is such a powerful idea. I wonder if there exists an algebraic structure such that any extension of it will produce a contradiction. O wait, there are maximal algebraic structures such that, given some ordering, it is the largest possible, e.g. the surreals are the largest possible field
It says on Wikipedia that any ordered field can be embedded in the Surreal number system. Is this true? How is it done, or if it is unknown (or unknowable) what is the proof that an embedding exists for any ordered field?
Here's a question for you: We know that no set of axioms will ever decide all statements, from Gödel's Incompleteness Theorems. However, do there exist statements that cannot be decided by any set of axioms except ones which contain one or more axioms dealing directly with that particular statement?
"Infinity exists" comes to mind as a potential candidate statement.
Well, take ZFC as an example, CH is independent of ZFC, meaning you cannot prove nor disprove CH using anything from ZFC. However, there are many equivalent axioms to CH or derives CH, thus if your set of axioms contain those, then you can decide the truth value of CH in that system
@Rithaniel That is really the crux of those rambles about infinity I made in this chat some weeks ago. I wonder whether one can show that is false by finding a finite sentence and procedure that can produce infinity
but so far failed
Put it in another way, an equivalent formulation of that (possibly open) problem is:
> Does there exists a computable proof verifier P such that the axiom of infinity becomes a theorem without assuming the existence of any infinite object?
If you were to show that you can attain infinity from finite things, you'd have a bombshell on your hands. It's widely accepted that you can't. In fact, I believe there are some proofs floating around that you can't attain infinity from the finite.
My philosophy of infinity however is not good enough as implicitly pointed out when many users who engaged with my rambles always managed to find counterexamples that escape every definition of an infinite object I proposed, which is why you don't see my rambles about infinity in recent days, until I finish reading that philosophy of infinity book
The knapsack problem or rucksack problem is a problem in combinatorial optimization: Given a set of items, each with a weight and a value, determine the number of each item to include in a collection so that the total weight is less than or equal to a given limit and the total value is as large as possible. It derives its name from the problem faced by someone who is constrained by a fixed-size knapsack and must fill it with the most valuable items.The problem often arises in resource allocation where there are financial constraints and is studied in fields such as combinatorics, computer science...
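For the 0/1 variant of the problem just described (at most one copy of each item), the standard dynamic program over capacities can be sketched as:

```python
# 0/1 knapsack DP: best[c] = best achievable value within capacity c.
def knapsack(weights, values, capacity):
    best = [0] * (capacity + 1)
    for w, v in zip(weights, values):
        # iterate capacities downward so each item is used at most once
        for c in range(capacity, w - 1, -1):
            best[c] = max(best[c], best[c - w] + v)
    return best[capacity]

result = knapsack([2, 3, 4], [3, 4, 5], 5)   # items of weight 2 and 3 fit
```

For the unbounded variant (any number of each item, as in the definition above), the inner loop runs over capacities upward instead.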
O great, given a transcendental $s$, computing $\min_P(|P(s)|)$ is a knapsack problem
hmm...
By the fundamental theorem of algebra, every complex polynomial $P$ can be expressed as:
$$P(x) = \prod_{k=0}^n (x - \lambda_k)$$
If the coefficients of $P$ are natural numbers, then all $\lambda_k$ are algebraic
Thus, given transcendental $s$, minimising $|P(s)|$ proceeds as follows:
The first thing I think of with that particular one is to replace the $(1+z^2)$ with $z^2$. Though, this is just at a cursory glance, so it would be worth checking to make sure that such a replacement doesn't have any ugly corner cases.
In number theory, a Liouville number is a real number $x$ with the property that, for every positive integer $n$, there exist integers $p$ and $q$ with $q > 1$ such that $$0<\left|x-\frac{p}{q}\right|<\frac{1}{q^n}.$$
Do these still exist if the axiom of infinity is blown up?
Hmmm...
Under a finitist framework where only potential infinity in the form of natural induction exists, define the partial sum:
$$\sum_{k=1}^M \frac{1}{b^{k!}}$$
The resulting partial sums for each $M$ form a monotonically increasing sequence, which converges by the ratio test
therefore by induction, there exists some number $L$ that is the limit of the above partial sums. The proof of transcendence can then proceed as usual, thus transcendental numbers can be constructed in a finitist framework
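For a concrete instance of those partial sums (taking $b = 10$, which gives Liouville's constant), exact rational arithmetic shows both the monotone growth and the extremely rapid convergence:

```python
# Partial sums of sum_{k=1}^{M} 1/b^{k!} for b = 10 (Liouville's constant),
# computed exactly; the sequence is strictly increasing, and the M = 4 sum
# already differs from the limit by less than 1/10^119.
import math
from fractions import Fraction

b = 10
sums, s = [], Fraction(0)
for k in range(1, 5):
    s += Fraction(1, b ** math.factorial(k))
    sums.append(s)

increasing = all(x < y for x, y in zip(sums, sums[1:]))
approx = float(sums[-1])   # ~0.110001, the familiar 1-at-factorial-positions pattern
```

Each new term is astronomically smaller than the last, which is exactly the gap the Liouville inequality exploits.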
There's this theorem in Spivak's book of Calculus:Theorem 7Suppose that $f$ is continuous at $a$, and that $f'(x)$ exists for all $x$ in some interval containing $a$, except perhaps for $x=a$. Suppose, moreover, that $\lim_{x \to a} f'(x)$ exists. Then $f'(a)$ also exists, and$$f'...
and neither Rolle nor mean value theorem need the axiom of choice
Thus under finitism, we can construct at least one transcendental number. If we throw away all transcendental functions, it means we can construct a number that cannot be reached from any algebraic procedure
Therefore, the conjecture is that actual infinity has a close relationship to transcendental numbers. For anything else, I need to finish that book before commenting
typo: neither Rolle nor mean value theorem need the axiom of choice nor an infinite set
> are there palindromes such that the explosion of palindromes is a palindrome |
CryptoDB — Ke Yang (Affiliation: Google Inc) — Publications by Year, Venue, Title
2005
EPRINT
Resource Fairness and Composability of Cryptographic Protocols
We introduce the notion of {\em resource-fair} protocols. Informally, this property states that if one party learns the output of the protocol, then so can all other parties, as long as they expend roughly the same amount of resources. As opposed to similar previously proposed definitions, our definition follows the standard simulation paradigm and enjoys strong composability properties. In particular, our definition is similar to the security definition in the universal composability (UC) framework, but works in a model that allows any party to request additional resources from the environment to deal with dishonest parties that may prematurely abort. In this model we specify the ideally fair functionality as allowing parties to ``invest resources'' in return for outputs, but in such an event offering all other parties a fair deal. (The formulation of fair dealings is kept independent of any particular functionality, by defining it using a ``wrapper.'') Thus, by relaxing the notion of fairness, we avoid a well-known impossibility result for fair multi-party computation with corrupted majority; in particular, our definition admits constructions that tolerate arbitrary number of corruptions. We also show that, as in the UC framework, protocols in our framework may be arbitrarily and concurrently composed. Turning to constructions, we define a ``commit-prove-fair-open'' functionality and design an efficient resource-fair protocol that securely realizes it, using a new variant of a cryptographic primitive known as ``time-lines.'' With (the fairly wrapped version of) this functionality we show that some of the existing secure multi-party computation protocols can be easily transformed into resource-fair protocols while preserving their security.
2004
EUROCRYPT
2004
TCC
2004
EPRINT
Efficient and Secure Multi-Party Computation with Faulty Majority and Complete Fairness
We study the problem of constructing secure multi-party computation (MPC) protocols that are {\em completely fair} --- meaning that either all the parties learn the output of the function, or nobody does --- even when a majority of the parties are corrupted. We first propose a framework for fair multi-party computation, within which we formulate a definition of secure and fair protocols. The definition follows the standard simulation paradigm, but is modified to allow the protocol to depend on the running time of the adversary. In this way, we avoid a well-known impossibility result for fair MPC with corrupted majority; in particular, our definition admits constructions that tolerate up to $(n-1)$ corruptions, where $n$ is the total number of parties. Next, we define a ``commit-prove-fair-open'' functionality and construct an efficient protocol that realizes it, using a new variant of a cryptographic primitive known as ``time-lines.'' With this functionality, we show that some of the existing secure MPC protocols can be easily transformed into fair protocols while preserving their security. Putting these results together, we construct efficient, secure MPC protocols that are completely fair even in the presence of corrupted majorities. Furthermore, these protocols remain secure when arbitrarily composed with any protocols, which means, in particular, that they are concurrently-composable and non-malleable. Finally, as an example of our results, we show a very efficient protocol that fairly and securely solves the socialist millionaires' problem.
2004
EPRINT
Efficient and Universally Composable Committed Oblivious Transfer and Applications
Committed Oblivious Transfer (COT) is a useful cryptographic primitive that combines the functionalities of bit commitment and oblivious transfer. In this paper, we introduce an extended version of COT (ECOT) which additionally allows proofs of relations among committed bits, and we construct an efficient protocol that securely realizes an ECOT functionality in the universal-composability (UC) framework in the common reference string (CRS) model. Our construction is more efficient than previous (non-UC) constructions of COT, involving only a constant number of exponentiations and communication rounds. Using the ECOT functionality as a building block, we construct efficient UC protocols for general two-party and multi-party functionalities (in the CRS model), each gate requiring a constant number of ECOT's.
2003
EUROCRYPT
2003
EPRINT
Strengthening Zero-Knowledge Protocols using Signatures
Recently there has been an interest in zero-knowledge protocols with stronger properties, such as concurrency, unbounded simulation soundness, non-malleability, and universal composability. In this paper, we show a novel technique to convert a large class of existing honest-verifier zero-knowledge protocols into ones with these stronger properties in the common reference string model. More precisely, our technique utilizes a signature scheme existentially unforgeable against adaptive chosen-message attacks, and transforms any $\Sigma$-protocol (which is honest-verifier zero-knowledge) into an unbounded simulation sound concurrent zero-knowledge protocol. We also introduce $\Omega$-protocols, a variant of $\Sigma$-protocols for which our technique further achieves the properties of non-malleability and/or universal composability. In addition to its conceptual simplicity, a main advantage of this new technique over previous ones is that it avoids the Cook-Levin theorem, which tends to be rather inefficient. Indeed, our technique allows for very efficient instantiation based on the security of some efficient signature schemes and standard number-theoretic assumptions. For instance, one instantiation of our technique yields a universally composable zero-knowledge protocol under the Strong RSA assumption, incurring an overhead of a small constant number of exponentiations, plus the generation of two signatures.
2003
EPRINT
On Simulation-Sound Trapdoor Commitments
We study the recently introduced notion of a simulation-sound trapdoor commitment (SSTC) scheme. In this paper, we present a new, simpler definition for an SSTC scheme that admits more efficient constructions and can be used in a larger set of applications. Specifically, we show how to construct SSTC schemes from any one-way functions, and how to construct very efficient SSTC schemes based on specific number-theoretic assumptions. We also show how to construct simulation-sound, non-malleable, and universally-composable zero-knowledge protocols using SSTC schemes, yielding, for instance, the most efficient universally-composable zero-knowledge protocols known. Finally, we explore the relation between SSTC schemes and non-malleable commitment schemes by presenting a sequence of implication and separation results, which in particular imply that SSTC schemes are non-malleable.
2001
CRYPTO
2001
EPRINT
On the (Im)possibility of Obfuscating Programs
Informally, an {\em obfuscator} $O$ is an (efficient, probabilistic) ``compiler'' that takes as input a program (or circuit) $P$ and produces a new program $O(P)$ that has the same functionality as $P$ yet is ``unintelligible'' in some sense. Obfuscators, if they exist, would have a wide variety of cryptographic and complexity-theoretic applications, ranging from software protection to homomorphic encryption to complexity-theoretic analogues of Rice's theorem. Most of these applications are based on an interpretation of the ``unintelligibility'' condition in obfuscation as meaning that $O(P)$ is a ``virtual black box,'' in the sense that anything one can efficiently compute given $O(P)$, one could also efficiently compute given oracle access to $P$. In this work, we initiate a theoretical investigation of obfuscation. Our main result is that, even under very weak formalizations of the above intuition, obfuscation is impossible. We prove this by constructing a family of functions $F$ that are {\em inherently unobfuscatable} in the following sense: there is a property $\pi : F \rightarrow \{0,1\}$ such that (a) given {\em any program} that computes a function $f\in F$, the value $\pi(f)$ can be efficiently computed, yet (b) given {\em oracle access} to a (randomly selected) function $f\in F$, no efficient algorithm can compute $\pi(f)$ much better than random guessing. We extend our impossibility result in a number of ways, including even obfuscators that (a) are not necessarily computable in polynomial time, (b) only {\em approximately} preserve the functionality, and (c) only need to work for very restricted models of computation ($TC^0$). We also rule out several potential applications of obfuscators, by constructing ``unobfuscatable'' signature schemes, encryption schemes, and pseudorandom function families.
We finished covering several tests that help determine the convergence or divergence of a series. I tried them all, but I couldn't make progress or produce an answer that I felt was coherent enough to follow.
there are two series up for consideration, the first;
$\sum_{k=1}^{\infty} \frac{\sqrt{k+1}-\sqrt{k}}{\sqrt{k}}$
and
$\sum_{k=1}^{\infty} \ln\left(1+\frac{1}{2^k}\right)$
For the first one I am not sure what test would work best, but on the second one, because there is a number raised to the power $k$, I am led to believe the root test would work well; however, I struggled to use algebra to work around the $\ln$
Any hints or tips for proceeding would be highly appreciated. |
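Not part of the question, but a quick numeric comparison against the candidate reference terms $\frac{1}{2k}$ and $\frac{1}{2^k}$ (suggested by $\sqrt{1+1/k}-1 \approx \frac{1}{2k}$ and $\ln(1+x)\approx x$) can guide which test to try:

```python
# Compare each general term with a simpler reference term; a ratio tending
# to 1 suggests the limit comparison test with that reference series.
import math

k = 10**6
term1 = (math.sqrt(k + 1) - math.sqrt(k)) / math.sqrt(k)
ratio1 = term1 / (1 / (2 * k))   # -> 1: behaves like the harmonic series

k = 30
term2 = math.log(1 + 1 / 2**k)
ratio2 = term2 / (1 / 2**k)      # -> 1: behaves like a geometric series
```

Both ratios sit very close to 1, pointing to limit comparison: the first series inherits divergence from $\sum \frac{1}{2k}$, the second inherits convergence from $\sum \frac{1}{2^k}$.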
Forward-backward multiplicity correlations in pp collisions at √s = 0.9, 2.76 and 7 TeV
(Springer, 2015-05-20)
The strength of forward-backward (FB) multiplicity correlations is measured by the ALICE detector in proton-proton (pp) collisions at √s = 0.9, 2.76 and 7 TeV. The measurement is performed in the central pseudorapidity ...
Rapidity and transverse-momentum dependence of the inclusive J/$\mathbf{\psi}$ nuclear modification factor in p-Pb collisions at $\mathbf{\sqrt{\textit{s}_{NN}}}=5.02$ TeV
(Springer, 2015-06)
We have studied the transverse-momentum ($p_{\rm T}$) dependence of the inclusive J/$\psi$ production in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV, in three center-of-mass rapidity ($y_{\rm cms}$) regions, down to ...
Multiplicity dependence of two-particle azimuthal correlations in pp collisions at the LHC
(Springer, 2013-09)
We present the measurements of particle pair yields per trigger particle obtained from di-hadron azimuthal correlations in pp collisions at $\sqrt{s}$=0.9, 2.76, and 7 TeV recorded with the ALICE detector. The yields are ...
Measurement of charm and beauty production at central rapidity versus charged-particle multiplicity in proton-proton collisions at $\sqrt{s}$ = 7 TeV
(Springer, 2015-09)
Prompt D meson and non-prompt J/$\psi$ yields are studied as a function of the multiplicity of charged particles produced in inelastic proton-proton collisions at a centre-of-mass energy of $\sqrt{s}=7$ TeV. The results ...
Coherent $\rho^0$ photoproduction in ultra-peripheral Pb-Pb collisions at $\mathbf{\sqrt{\textit{s}_{\rm NN}}} = 2.76$ TeV
(Springer, 2015-09)
We report the first measurement at the LHC of coherent photoproduction of $\rho^0$ mesons in ultra-peripheral Pb-Pb collisions. The invariant mass and transverse momentum distributions for $\rho^0$ production are studied ...
Inclusive, prompt and non-prompt J/ψ production at mid-rapidity in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV
(Springer, 2015-07-10)
The transverse momentum ($p_{\rm T}$) dependence of the nuclear modification factor $R_{\rm AA}$ and the centrality dependence of the average transverse momentum $\langle p_{\rm T} \rangle$ for inclusive J/ψ have been measured with ALICE for Pb-Pb collisions ...
I am trying to simulate a robot manipulator dynamics in SciLab.
Basically, I generated a step function that has constant acceleration for half of the time and then the same acceleration but negative for the other half, so I get a smooth transition between the manipulator positions.
This code generates the velocity and position from the step function I mentioned:
function [position,velocity,acceleration,time]=smoothTransition(initialPosition,finalPosition,resolution,timeSpan)
    a=(finalPosition-initialPosition)/((timeSpan/2)**2); // magnitude of acceleration and deceleration so I get to final position in timeSpan
    acceleration=[ones(1,resolution/2)*a -ones(1,resolution/2)*a];
    if modulo(resolution,2) ~= 0 then
        acceleration=[acceleration -a]; // case where time resolution is odd
    end
    time=linspace(0,timeSpan,resolution);
    function dx=f(t,x)
        dx(1)=x(2);
        dx(2)=linear_interpn(t,time,acceleration);
    endfunction
    x=ode([0;0],time(1),time,f);
    velocity=x(2,:);
    position=x(1,:);
endfunction
Basically, I integrate the step function twice.
The formula to get the torque required is:
$$ \tau=gm_1s_{1x}\cos(q)+\ddot{q}m_1s_{1x}^2 $$
(this is a simplified version with one link)
Where $g$ is the gravity magnitude, $s_{1x}$ is how far is the center of mass in the x-direction, $m_1$ is the mass of the link, and $q$ is the angle.
What I am trying to do is generate a torque input with this equation and then do the numeric integration to get $q$ and its derivative back (mostly for testing purposes).
So I am trying to solve this numerically: $$ \ddot{q}=\frac{\tau-gm_1s_{1x}\cos(q)}{m_1s_{1x}^2} $$
The problem is that I don't get the same behavior back when I integrate for more than 1 second.
The code to do this is as follows
m_1=1;
g=9.81;
s_1x=1;
[position,velocity,acceleration,time]=smoothTransition(0,%pi/2,100,10);
tau=g*m_1*s_1x*cos(position)+acceleration*m_1*s_1x**2;
function dx=f(t,x)
    torque=linear_interpn(t,time,tau);
    dx(1)=x(2);
    dx(2)=(-g*m_1*s_1x*cos(x(1))+torque)/(m_1*s_1x**2);
endfunction
q0=[0;0];
q=ode(q0,0,time,f);
plot(time,position,'r');
plot(time,q(1,:),'g');
In this code I plug the $\tau$ input and integrate two times, I suspect the problem is in this part.
With this code I get the following figure:
Where the red curve is the expected behavior and the green curve is the obtained one.
By the way, the problem persists if I increase the time resolution.
Edit:
I realize that when I crank up the resolution (let's say 10000) the resulting curve (green) does approximate the correct behavior (red).
Here the result with 10000 time resolution:
Is there a way to do a more exact integration without so much time resolution? |
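One way to get a more exact integration without a dense time grid is to evaluate the feedforward torque as an exact function of time and to split the integration at the switch instant, rather than linearly interpolating a sampled torque array. A sketch in Python/SciPy rather than SciLab (variable names are mine; the dynamics are the single-link model from the question):

```python
# Hypothetical re-creation: exact piecewise torque plus integration split at
# the acceleration switch, instead of interpolating a sampled torque array.
import numpy as np
from scipy.integrate import solve_ivp

m1, g, s1x, T = 1.0, 9.81, 1.0, 10.0
a = (np.pi / 2) / (T / 2) ** 2         # accel magnitude for a 0 -> pi/2 move

def qdd_ref(t):                        # exact bang-bang acceleration
    return a if t < T / 2 else -a

def q_ref(t):                          # its exact double integral
    if t < T / 2:
        return a * t * t / 2
    u = t - T / 2
    return a * (T / 2) ** 2 / 2 + a * (T / 2) * u - a * u * u / 2

def tau(t):                            # feedforward torque, no interpolation
    return g * m1 * s1x * np.cos(q_ref(t)) + qdd_ref(t) * m1 * s1x ** 2

def f(t, x):
    return [x[1], (tau(t) - g * m1 * s1x * np.cos(x[0])) / (m1 * s1x ** 2)]

# two legs, so the solver never steps across the torque discontinuity
mid = solve_ivp(f, (0, T / 2), [0.0, 0.0], rtol=1e-12, atol=1e-12).y[:, -1]
end = solve_ivp(f, (T / 2, T), mid, rtol=1e-12, atol=1e-12).y[:, -1]
err = abs(end[0] - np.pi / 2)
```

Note the replay is still open loop, and along this trajectory small angle errors are amplified by the gravity term ($\partial\ddot{q}/\partial q = g\sin(q)/(m_1 s_{1x}) > 0$ here), so integration errors grow over the 10 s horizon; exact torque evaluation plus splitting at $T/2$ keeps them small without a fine grid. The analogous move in SciLab would be to call `ode` separately on each half with the exact piecewise torque instead of `linear_interpn`, and a feedback term would be the robust fix for longer horizons.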
I got a solution for the equation
Simplify[Solve[ 1/(2 q r (-1 + c1 M s1)) (-a q + M q + b c1 q s0 + a c1 M q s1 - c1 M^2 q s1 - c2 M r s1 + \[Sqrt](4 q r (-1 + c1 M s1) (b (cp + c2 s0) + c2 (a - M) M s1) + (b c1 q s0 - c1 M^2 q s1 + a q (-1 + c1 M s1) + M (q - c2 r s1))^2)) == 0 , M ]]
which is $\left\{\left\{M\to \frac{1}{2} \left(a-\frac{\sqrt{a^2 \text{c2} \text{s1}+4 b (\text{c2} \text{s0}+\text{cp})}}{\sqrt{\text{c2}} \sqrt{\text{s1}}}\right)\right\},\left\{M\to \frac{1}{2} \left(\frac{\sqrt{a^2 \text{c2} \text{s1}+4 b (\text{c2} \text{s0}+\text{cp})}}{\sqrt{\text{c2}} \sqrt{\text{s1}}}+a\right)\right\}\right\}$
But then when I plug that expression into the original left-hand side of the equation, I do not get zero
Simplify[ 1/(2 q r (-1 + c1 M s1)) (-a q + M q + b c1 q s0 + a c1 M q s1 - c1 M^2 q s1 - c2 M r s1 + \[Sqrt](4 q r (-1 + c1 M s1) (b (cp + c2 s0) + c2 (a - M) M s1) + (b c1 q s0 - c1 M^2 q s1 + a q (-1 + c1 M s1) + M (q - c2 r s1))^2)) /. M -> 1/2 (a + Sqrt[4 b (cp + c2 s0) + a^2 c2 s1]/( Sqrt[c2] Sqrt[s1]))]
Not sure what is going on here. I've tried expanding the expression in the second piece of code before simplifying, but that unfortunately didn't help. |
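One thing worth checking, illustrated below with a SymPy analogue since the underlying issue is the same: without sign assumptions on the symbols, a CAS cannot collapse $\sqrt{X^2}$ to $X$, so the substituted radical need not cancel symbolically even when the root is a genuine solution on the intended branch.

```python
# SymPy analogue of the branch issue: sqrt(x**2) - x only simplifies to 0
# once the symbol is known to be nonnegative.
import sympy as sp

x = sp.symbols('x')                          # unconstrained (may be negative)
unconstrained = sp.simplify(sp.sqrt(x**2) - x)   # does not reduce to 0

y = sp.symbols('y', nonnegative=True)
constrained = sp.simplify(sp.sqrt(y**2) - y)     # reduces to 0
```

In Mathematica the analogous remedy is to supply assumptions, e.g. `Simplify[expr, Assumptions -> c2 > 0 && s1 > 0]` (extended to whatever sign conditions the model warrants), so the nested radical can be resolved on the intended branch.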
Help:Editing
Editing a Wiki page is very easy. Simply click on the "Edit" tab at the top (or the edit link on the right or bottom) of a Wiki page. This will bring you to a page with a text box containing the editable text of that page. If you want to experiment, please do so in our sandbox, not here. You could open the sandbox in a separate window or tab to be able to see both this text and your tests in the sandbox.
Type away, then write a short edit summary in the small field below the edit box. You may use shorthand to describe your changes, as described in the legend. When you've finished, press "Preview" to see how your changes will look, then press "Save". Depending on your system, pressing "Enter" while the edit box is not active (when there is no typing cursor in it) may have the same effect as pressing the "Save" button. Also, please do not vandalise the information on the Lazarus wiki.
You can also click on the "Discussion" tab (or the "Discuss this page" link) to see the corresponding talk page, which contains comments about the page from other Wikipedia users. Click on the "+" tab (or "Edit this page") to add a comment.
The wiki markup
In the left column of the table below, you can see what effects are possible. In the right column, you can see how those effects were achieved. In other words, to make text look like it looks in the left column, type it in the format you see in the right column.
You may want to keep this page open in a separate browser window for reference. If you want to try out things without danger of doing any harm, you can do so
in the sandbox. Sections, paragraphs, lists and lines
What it looks like What you type
Start your sections as follows:
== New section == === Subsection === ==== Sub-subsection ====
A single newline generally has no effect on the layout. These can be used to separate sentences within a paragraph. Some editors find that this aids editing and improves the function diff (used internally to compare different versions of a page).
But an empty line starts a new paragraph.
A single [[newline]] generally has no effect on the layout. These can be used to separate sentences within a paragraph. Some editors find that this aids editing and improves the function ''diff'' (used internally to compare different versions of a page). But an empty line starts a new paragraph.
You can break lines without starting a new paragraph.
You can break lines<br> without starting a new paragraph.
* Lists are easy to do: ** Start every line with a star. *** More stars means deeper levels. **** A newline in a list marks the end of a list item. * An empty line starts a new list. # Numbered lists are also good ## very organized ## easy to follow ### easier still * You can even do mixed lists *# and nest them *#* like this ; Definition list : list of definitions ; item : the item's definition ; another item : the other item's definition
: A colon indents a line or paragraph. A manual newline starts a new paragraph. IF a line starts with a space THEN it will be formatted exactly as typed; in a fixed-width font; lines won't wrap; ENDIF IF a line starts with a space THEN it will be formatted exactly as typed; in a fixed-width font; lines won't wrap; ENDIF <center>Centered text.</center>
A horizontal dividing line: this is above it
and this is below it.
A [[horizontal dividing line]]: this is above it ---- and this is below it. Links and URLs
What it looks like What you type
London has public transport.
London has [[public transport]].
San Francisco also has public transportation.
San Francisco also has [[public transport|public transportation]].
San Francisco also has public transportation.
San Francisco also has [[public transport]]ation. Examples include [[bus]]es, [[taxi]]s and [[streetcar]]s.
See the Wikipedia:Manual of Style.
See the [[Wikipedia:Manual of Style]].
Economics#See also is a link to a section within another page.
#Links and URLs is a link to a section on the current page.
#example is a link to an anchor that was created using
an id attribute
[[Economics#See also]] is a link to a section within another page. [[#Links and URLs]] is a link to a section on the current page. [[#example]] is a link to an anchor that was created using <div id="example">an id attribute</div>
Automatically hide stuff in parentheses: kingdom.
Automatically hide namespace: Village Pump.
Or both: Manual of Style
But not: [[Wikipedia:Manual of Style#Links|]]
Automatically hide stuff in parentheses: [[kingdom (biology)|]]. Automatically hide namespace: [[Wikipedia:Village Pump|]]. Or both: [[Wikipedia:Manual of Style (headings)|]] But not: [[Wikipedia:Manual of Style#Links|]]
The weather in London is a page that doesn't exist yet.
[[The weather in London]] is a page that doesn't exist yet.
Help:Editing is this page.
[[Help:Editing]] is this page.
When adding a comment to a Talk page, you should sign it by adding three tildes to add your user name:
or four to add user name plus date/time:
Five tildes gives the date/time alone:
When adding a comment to a Talk page, you should sign it by adding three tildes to add your user name: : ~~~ or four for user name plus date/time: : ~~~~ Five tildes gives the date/time alone: : ~~~~~ #REDIRECT [[United States]] [[fr:Wikipédia:Aide]] '''What links here''' and '''Related changes''' pages can be linked as: [[Special:Whatlinkshere/Wikipedia:How to edit a page]] and [[Special:Recentchangeslinked/Wikipedia:How to edit a page]] A user's '''Contributions''' page can be linked as: [[Special:Contributions/UserName]] or [[Special:Contributions/192.0.2.0]] [[Category:Character sets]] [[:Category:Character sets]]
Three ways to link to external (non-wiki) sources:
Three ways to link to external (non-wiki) sources: # Bare URL: http://www.nupedia.com/ # Unnamed link: [http://www.nupedia.com/] # Named link: [http://www.nupedia.com Nupedia]
Linking to other wikis:
Linking to another language's wiktionary:
Linking to other wikis: # [[Interwiki]] link: [[Wiktionary:Hello]] # Named interwiki link: [[Wiktionary:Hello|Hello]] # Interwiki link without prefix: [[Wiktionary:Hello|]] Linking to another language's wiktionary: # [[Wiktionary:fr:Bonjour]] # [[Wiktionary:fr:Bonjour|Bonjour]] # [[Wiktionary:fr:Bonjour|]]
ISBN 012345678X
ISBN 0-123-45678-X
ISBN 012345678X ISBN 0-123-45678-X
Date formats:
Date formats: # [[July 20]], [[1969]] # [[20 July]] [[1969]] # [[1969]]-[[07-20]]
Some uploaded sounds are listed at Wikipedia:Sound.
[[media:Sg_mrob.ogg|Sound]]
Canonical links to Lazarus and Free Pascal documentation:
Link to RTL documentation: [[doc:rtl/system/swapendian.html|SwapEndian]]. Link to FCL documentation: [[doc:fcl/uriparser/parseuri.html|ParseURI]]. Link to LCL documentation: [[doc:lcl/grids/tstringgrid.html|TStringGrid]]. Images
What it looks like What you type A picture: File:Wiki.png
or, with alternative text: jigsaw globe
or, floating to the right side of the page and with a caption:
or, floating to the right side of the page
A picture: [[Image:Wiki.png]] or, with alternative text: [[Image:Wiki.png|jigsaw globe]] or, floating to the right side of the page and with a caption: [[Image:Wiki.png|frame|Wikipedia Encyclopedia]] or, floating to the right side of the page ''without'' a caption: [[Image:Wiki.png|right|Wikipedia Encyclopedia]]
Clicking on an uploaded image displays a description page, which you can also link directly to: Image:Wiki.png
[[:Image:Wiki.png]]
To include links to images shown as links instead of drawn on the page, use a "media" link.
[[media:Tornado.jpg|Image of a Tornado]] Character formatting
What it looks like What you type ''Emphasize'', '''strongly''', '''''very strongly'''''.
[math]\sin x + \ln y[/math]
[math]\mathbf{x} = 0[/math]
Ordinary text should use wiki markup for emphasis, and should not use
<math>\sin x + \ln y</math> sin''x'' + ln''y'' <math>\mathbf{x} = 0</math> '''x''' = 0
A typewriter font for
A typewriter font for <tt>monospace text</tt> or for computer code: <code>int main()</code>
You can use small text for captions.
You can use <small>small text</small> for captions.
You can
You can also mark
You can <s>strike out deleted material</s> and <u>underline new material</u>. You can also mark <del>deleted material</del> and <ins>inserted material</ins> using logical markup rather than visual markup.
À Á Â Ã Ä Å
è é ê ë ì í À Á Â Ã Ä Å Æ Ç È É Ê Ë Ì Í Î Ï Ñ Ò Ó Ô Õ Ö Ø Ù Ú Û Ü ß à á â ã ä å æ ç è é ê ë ì í î ï ñ ò ó ô œ õ ö ø ù ú û ü ÿ ¿ ¡ § ¶ † ‡ • – — ‹ › « » ‘ ’ “ ” ™ © ® ¢ € ¥ £ ¤
ε
x<sub>1</sub> x<sub>2</sub> x<sub>3</sub> x<sup>1</sup> x<sup>2</sup> x<sup>3</sup> or x¹ x² x³ ε<sub>0</sub> = 8.85 × 10<sup>−12</sup> C² / J m. 1 [[hectare]] = [[1 E4 m²]] α β γ δ ε ζ η θ ι κ λ μ ν ξ ο π ρ σ ς τ υ φ χ ψ ω Γ Δ Θ Λ Ξ Π Σ Φ Ψ Ω ∫ ∑ ∏ √ − ± ∞ ≈ ∝ ≡ ≠ ≤ ≥ → × · ÷ ∂ ′ ″ ∇ ‰ ° ∴ ℵ ø ∈ ∉ ∩ ∪ ⊂ ⊃ ⊆ ⊇ ¬ ∧ ∨ ∃ ∀ ⇒ ⇔ → ↔ Obviously, ''x''² ≥ 0 is true. : <math>\sum_{n=0}^\infty \frac{x^n}{n!}</math> <nowiki>Link → (''to'') the [[Wikipedia FAQ]]</nowiki> <!-- comment here --> (see also: Chess symbols in Unicode) Tables Placement of the Table of Contents (TOC)
At the current status of the wiki markup language, having at least four headers on a page triggers the TOC to appear in front of the first header (or after introductory sections). Putting __TOC__ anywhere forces the TOC to appear at that point (instead of just before the first header). Putting __NOTOC__ anywhere forces the TOC to disappear. See also compact TOC for alphabet and year headings.
Keeping headings out of the Table of Contents
If you want some subheadings to not appear in the Table of Contents, then make the following replacements.
Replace == Header 2 == with <h2> Header 2 </h2>
Replace === Header 3 === with <h3> Header 3 </h3>
And so forth.
For example, the following header has the same font as the other subheaders in this section, but it does not appear in the Table of Contents for this page.
This header has the h4 font, but is NOT in the Table of Contents
This effect is obtained by the following line of code.
<h4> This header has the h4 font, but is NOT in the Table of Contents </h4>
Tables
There are two ways to build tables:
in special Wiki markup (see m:Help:Table), or with the usual HTML elements: <table>, <tr>, <td> or <th>.
For the latter, and a discussion on when tables are appropriate, see Wikipedia:Help:Table.
Variables (See also m:Help:Variable)
Code → Effect
{{CURRENTMONTH}} → 07
{{CURRENTMONTHNAME}} → July
{{CURRENTMONTHNAMEGEN}} → July
{{CURRENTDAY}} → 20
{{CURRENTDAYNAME}} → Saturday
{{CURRENTYEAR}} → 2019
{{CURRENTTIME}} → 16:17
{{NUMBEROFARTICLES}} → 5,042
{{PAGENAME}} → Editing
{{NAMESPACE}} → Help
{{localurl:pagename}} → /pagename
{{localurl:Wikipedia:Sandbox|action=edit}} → http://www.wikipedia.org/wiki/Sandbox?action=edit
{{SERVER}} → https://wiki.lazarus.freepascal.org
{{ns:1}} → Talk
{{ns:2}} → User
{{ns:3}} → User talk
{{ns:4}} → Lazarus wiki
{{ns:5}} → Lazarus wiki talk
{{ns:6}} → File
{{ns:7}} → File talk
{{ns:8}} → MediaWiki
{{ns:9}} → MediaWiki talk
{{ns:10}} → Template
{{ns:11}} → Template talk
{{ns:12}} → Help
{{ns:13}} → Help talk
{{ns:14}} → Category
{{ns:15}} → Category talk
{{SITENAME}} → Lazarus wiki
NUMBEROFARTICLES is the number of pages in the main namespace which contain a link and are not a redirect, i.e. the number of articles, stubs containing a link, and disambiguation pages.
The MediaWiki software used by Wikipedia has limited support for template inclusion. This means standardized text chunks (such as boilerplate text) can be inserted into articles. For example, typing {{stub}} will appear as "
This article is a stub. You can help Wikipedia by expanding it." when the page is saved. See Wikipedia:Template messages for the complete list. Other commonly used ones are: {{disambig}} for disambiguation pages, {{spoiler}} for spoiler warnings and {{sectstub}}, which is like an article stub but for a section. There are many subject-specific stubs, e.g.: {{Geo-stub}}, {{Hist-stub}} and {{Linux-stub}}. For a complete list of stubs see Wikipedia:Template messages/Stubs.
Insert __NOEDITSECTION__ into the document to suppress the edit links that appear next to every section header.
See also: m:Help:Formula, the MediaWiki user's guide to editing, Wikipedia:MediaWiki, HTML element, Wikipedia:Protection policy.
Using Mozilla Firefox
There is an extension available for Firefox which may help you edit a wiki page. This extension can be used with all wikis based on MediaWiki (e.g. Wikipedia). If you use Firefox 1.5, you will need the developer version of the extension. Update: the extension does not yet support Firefox 2.0.
If you have installed the extension a new toolbar will appear. This toolbar contains some icons to
format the text (e.g. bold) insert a structure item (e.g. headlines) insert a link (internal or external) insert a picture
The toolbar can be helpful, especially for newbies, because it is not necessary to know the (complete) wiki syntax.
If you want to change the settings for the toolbar, go to the Tools menu of Firefox and click on the settings button of the extension. On the View tab, for example, you can specify that the toolbar only appears if the URL contains the word 'wiki'. |
Your instructor seems to have a rather sloppy approach to the mathematics of microeconomics.
Let's begin with the easiest case, the bundle $(5,1)$. We have $\min\{3\cdot 5,1\}=1$. For $\delta<14$, we have $\min\{3\cdot 5, 1+\delta\}=1+\delta$ so the marginal utility of $y$ is $1$ at the bundle $(5,1)$. Moreover, for any $\delta>-14/3$, we have $\min\{3\cdot(5+\delta),1\}=1$, so the marginal utility of $x$ is $0$ at $(5,1)$.
A similar argument applies to the bundle $(2,8)$, but here the marginal utility of $y$ is $0$, and dividing by zero is not allowed. Maybe your instructor wants to hear that the MRS is infinite.
Now at the bundle $(3,9)$, increases in either commodity have zero effect, but decreases do have a (negative) effect. There is a kink and you cannot smoothly move along the curve. By any semi-reasonable standard, the MRS is undefined. But to answer the verbal question: the consumer is not willing to give up any amount of one commodity (negative effect) to receive more of the other good (zero effect).
Now, the idea that you get the optimal consumption bundle by setting the MRS equal to relative prices is complete nonsense here. If prices $p_x$ and $p_y$ and income $m$ are all positive, we can, however, still find the optimal bundle. First note that the optimal bundle $(x,y)$ will satisfy $3x=y$. If $3x>y$, one could reduce the amount $x$ without decreasing the utility (the minimum is unchanged) and use the freed money to increase $y$, which does have a positive effect. In an optimal bundle, this is not possible. By a similar argument, we can rule out $3x<y$. So we must have $3x=y$, which means that at any optimal solution, the MRS is undefined! To solve for an optimal bundle, all that remains is to plug the condition $3x=y$ into the budget constraint $p_x x+p_y y=m$. We can substitute $3x$ for $y$ to get$$p_x x+p_y 3x=m$$ and solve for $x$, which gives us$$x=\frac{m}{p_x+3p_y}.$$Similarly, we get $$y=\frac{m}{p_x/3+p_y}.$$ |
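The derived demands can be sanity-checked by brute force: sweep the budget line and maximize $\min\{3x,y\}$ directly. A short Python sketch with illustrative values $p_x=2$, $p_y=1$, $m=30$ (these numbers are not from the original problem):

```python
# Check the closed-form demands for u(x, y) = min(3x, y) against brute force.
# Prices and income below are illustrative assumptions.
p_x, p_y, m = 2.0, 1.0, 30.0

# Closed-form demands derived above
x_star = m / (p_x + 3*p_y)        # 30 / 5  = 6
y_star = m / (p_x/3 + p_y)        # 30 / (5/3) = 18

# Brute force: spend all income, sweep x over the budget line
best_u, best_x = -1.0, 0.0
for i in range(100001):
    x = i * (m / p_x) / 100000    # x in [0, m/p_x]
    y = (m - p_x * x) / p_y       # rest of the budget goes to y
    u = min(3*x, y)
    if u > best_u:
        best_u, best_x = u, x

print(x_star, y_star, best_x, best_u)   # 6.0 18.0 6.0 18.0
```

The brute-force optimum lands exactly on the kink $3x=y$, matching the closed-form demands.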
I am trying to solve an assignment on solving the Bogoliubov de Gennes equations self-consistently in Matlab. BdG equations in 1-Dimension are as follows:-
$$\left(\begin{array}{cc} -\frac{\hbar^{2}}{2m}\frac{\partial^{2}}{\partial z^{2}}-\mu+V\left(z\right) & \Delta(z)\\ \Delta(z) & \frac{\hbar^{2}}{2m}\frac{\partial^{2}}{\partial z^{2}}+\mu-V(z) \end{array}\right)\left(\begin{array}{c} u_{n}(z)\\ v_{n}(z) \end{array}\right)= \epsilon_{n}\left(\begin{array}{c} u_{n}(z)\\ v_{n}(z) \end{array}\right)$$ along with the equations for the gap function $\Delta(z)$ and number density $n(z)$: $$\Delta(z)=U\sum_{n}\left(1-2f_{n}\right)u_{n}(z)v_{n}^{\star}(z)$$ and $$n(z)=2\sum_{n}|{u_{n}(z)}|^{2}f_{n}+|{v_{n}(z)}|^{2}\left(1-f_{n}\right).$$
For the case of solving the BdG equations in Fourier space in Matlab for a periodic potential and (assumed) periodic gap function, we can take $$u_{n}(z)=\sum_{k}\exp\left[ikz\right]U_{n,k}, $$ $$\Delta(z)=\sum_{K}\exp(iKz)T_{K},$$ and $$ V(z)=\sum_{K}\exp(iKz)P_{K} $$ where the sum is over the reciprocal lattice vectors $K$, leading to the number equation $$N=2\sum_{n,k}\left[f_{n}|{U_{n,k}}|^{2}+\left(1-f_{n}\right)|{V_{n,k}}|^{2}\right]$$ with $ f_{n}$ the Fermi distribution function $f_{n}=\frac{1}{\exp(\beta(\epsilon_{n}-\mu))+1}$. Solving the set of equations self-consistently for a fixed $N$, I am trying to get a value of the chemical potential from the number equation each time after solving for the eigenvector components $U_{n,k}$ and $V_{n,k}$. However, due to the form of the exponentials in the number equation and the sum over a large number of them, I am unable to get a correct value of the chemical potential as the root of the equation using Matlab routines, to put back into the equations for the eigenvector components.
In most cases, I get random values of the chemical potential, since the equation is more or less insoluble numerically. How can I avoid this error? Is there a better way to numerically solve the BdG equations self-consistently? I also want to do this assignment in real space, avoiding finite-size effects, but I started with the Fourier-space case to avoid errors associated with discretizing the derivative. Please advise, and ask for any details you might need.
Following is my MATLAB code to solve the equations in real space, but the code does not work because fsolve does not find the mu value. |
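One standard fix for the number-equation step (sketched below in Python; the real $U_{n,k}$, $V_{n,k}$ would come from the diagonalization, so the spectra here are synthetic stand-ins): since $N(\mu)$ is a smooth 1-D function that is typically monotone over a bracketing interval, a bracketed root finder such as Brent's method is far more reliable than a general-purpose fsolve.

```python
# Sketch of solving the number equation N(mu) = N_target by bracketing.
# eps, U2, V2 are toy stand-ins for the BdG eigendata at one self-consistency
# iteration; in the real code they come from diagonalizing the BdG matrix.
import numpy as np
from scipy.optimize import brentq

beta = 10.0                                # inverse temperature
N_target = 30.0                            # fixed particle number
eps = np.linspace(0.0, 5.0, 40)            # toy quasiparticle energies
U2 = np.full_like(eps, 0.7)                # toy |U_n|^2
V2 = 1.0 - U2                              # toy |V_n|^2 (normalization)

def particle_number(mu):
    f = 1.0 / (np.exp(beta * (eps - mu)) + 1.0)     # Fermi factors
    return 2.0 * np.sum(f * U2 + (1.0 - f) * V2)

# Bracket widely: N runs from 2*sum(V2)=24 (mu -> -inf) to 2*sum(U2)=56
mu = brentq(lambda m: particle_number(m) - N_target, -10.0, 15.0)
print(mu, particle_number(mu))
```

With the sign change guaranteed by the bracket, brentq converges deterministically, and the resulting mu feeds back into the next diagonalization of the self-consistency loop.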
When we divide a positive integer (the dividend) by another positive integer (the divisor), we obtain a quotient. We multiply the quotient by the divisor, and subtract the product from the dividend to obtain the remainder. Such a division produces two results: a quotient and a remainder.
This is how we normally divide 23 by 4:
\[ \require{enclose}
\begin{array}{rll} 5 && \\[-3pt] 4 \enclose{longdiv}{23}\kern-.2ex \\[-3pt] \underline{\phantom{0}20} && \\[-3pt] \phantom{00}3 \end{array}\]
In general, the division \(b\div a\) takes the form
\[ \require{enclose}
\begin{array}{rll} q && \\[-3pt] a \enclose{longdiv}{\phantom{0}b}\kern-.2ex \\[-3pt] \underline{\phantom{0}aq} && \\[-3pt] \phantom{00}r \end{array}\]
so that \(r=b-aq\), or equivalently, \(b=aq+r\). Of course, both \(q\) and \(r\) are integers. Yet, the following “divisions”
\[{ \require{enclose}\begin{array}{rll}
4 && \\[-3pt] 4 \enclose{longdiv}{23}\kern-.2ex \\[-3pt] \underline{\phantom{0}16} && \\[-3pt] \phantom{00}7 \end{array}}{\require{enclose} \begin{array}{rll} 2 && \\[-3pt] 4 \enclose{longdiv}{23}\kern-.2ex \\[-3pt] \underline{\phantom{0}8} && \\[-3pt] \phantom{00}15 \end{array}}{\require{enclose} \begin{array}{rll} 6 && \\[-3pt] 4 \enclose{longdiv}{23}\kern-.2ex \\[-3pt] \underline{\phantom{0}24} && \\[-3pt] \phantom{00}-1 \end{array}}{\require{enclose} \begin{array}{rll} 7 && \\[-3pt] 4 \enclose{longdiv}{23}\kern-.2ex \\[-3pt] \underline{\phantom{0}28} && \\[-3pt] \phantom{00}-5 \end{array}}\]
also satisfy the requirement \(b=aq+r\), but that is not what we normally do. This means having \(b=aq+r\) alone is not enough to define what quotient and remainder are. We need a more rigid definition.
Theorem \(\PageIndex{1}\label{thm:divalgo}\)
Given any integers \(a\) and \(b\), where \(a>0\), there exist integers \(q\) and \(r\) such that \[b = aq + r,\] where \(0\leq r< a\). Furthermore, \(q\) and \(r\) are uniquely determined by \(a\) and \(b\).
The integers \(b\), \(a\), \(q\), and \(r\) are called the dividend, divisor, quotient, and remainder, respectively. Notice that \(b\) is a multiple of \(a\) if and only if \(r=0\).
Remark
The division algorithm describes what happens in long division. Strictly speaking, it is not an algorithm. An algorithm describes a procedure for solving a problem. The theorem does not tell us how to find the quotient and the remainder. Some mathematicians prefer to call it the division theorem. Here, we follow the tradition and call it the division algorithm.
This is the outline of the proof:
Describe how to find the integers \(q\) and \(r\) such that \(b=aq+r\). Show that our choice of \(r\) satisfies \(0\leq r< a\). Establish the uniqueness of \(q\) and \(r\).
Regarding the last part of the proof: to show that a certain number \(x\) is uniquely determined, a typical approach is to assume that \(x'\) is another choice that satisfies the given condition, and show that we must have \(x=x'\).
Proof
We first show the existence of \(q\) and \(r\). Let \[S = \{ b-ax \mid x\in\mathbb{Z} \mbox{ and } b-ax\geq 0 \}.\] Clearly, \(S\) is a set of nonnegative integers. To be able to apply the principle of well-ordering, we need to show that \(S\) is nonempty. Here is a constructive proof.
Case 1. If \(b\geq 0\), we can set \(x=0\). Then \(b-ax=b\geq0\).
Case 2. If \(b < 0\), set \(x=b\). Since \(a\geq1\), we have \(1-a\leq0\). Then \[b-ax = b-ab = b(1-a) \geq 0.\]
Since \(S\) is nonempty, it follows from the principle of well-ordering that \(S\) has a smallest element. Call it \(r\). From the definition of \(S\), there exists some integer \(q\) such that \(b-aq=r\).
Next, we show that \(0\leq r<a\). The definition of \(S\) tells us immediately that \(r\geq0\), so we only need to show that \(r<a\). Suppose, on the contrary, \(r\geq a\). Then \(r = a+t\) for some integer \(t\geq 0\). Now \(b-aq = r = a + t\) implies that \[0 \leq t = b-aq-a = b-a(q+1).\] So \(t\in S\). Now \(t = r-a < r\) suggests that we have found another element in \(S\) which is even smaller than \(r\). This contradicts the minimality of \(r\). Therefore \(r < a\).
Finally, we have to establish the uniqueness of both \(q\) and \(r\). Let \(q'\) and \(r'\) be integers such that \[b=aq'+r', \qquad 0\leq r'< a.\] From \(aq+r = b = aq'+r'\), we find \(a(q-q') = r'-r\). Hence \[a\,|q-q'| = |r'-r|.\] Since \(|r'-r|\) is an integer, if \(|r'-r|\neq0\), we would have \(a\leq |r'-r|\). From \(0\leq r,r'<a\), we deduce that \(|r'-r|<a\), which clearly contradicts our observation that \(a\leq|r'-r|\). Hence, \(|r'-r|=0\). Then \(r'=r\). It follows that \(q'=q\). So the quotient \(q\) and the remainder \(r\) are unique.
You should not have any problem dividing a positive integer by another positive integer. This is the kind of long division that we normally perform. It is more challenging to divide a negative integer by a positive integer. When \(b\) is negative, the quotient \(q\) will be negative as well, but the remainder \(r\) must be
nonnegative. In a way, \(r\) is the deciding factor: we choose \(q\) such that the remainder \(r\) satisfies the condition \(0\leq r<a\).
In general, for any integer \(b\), dividing \(b\) by \(a\) produces a decimal number. If the result is not an integer, round it
down to the next smaller integer (see Example [eg:fcnintro-03]). It is the quotient \(q\) that we want, and the remainder \(r\) is obtained from the subtraction \(r=b-aq\). For example, \[\frac{-22}{\;\;7} = -3.1428\ldots\,.\] Rounding it down produces the quotient \(q=-4\), and the remainder is \(r=-22-7(-4)=6\); and we do have \(-22=7\cdot(-4)+6\).
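The "round down, then subtract" recipe can be checked mechanically. In Python, for a positive divisor, floor division implements exactly this rule, so a short sketch suffices:

```python
# The rounding recipe from the text: round the exact quotient DOWN, then
# recover the remainder by subtraction. Here a > 0; the general a != 0
# case is handled by the corollary later in the section.
import math

def div_mod(b, a):
    q = math.floor(b / a)     # round down to the next smaller integer
    r = b - a * q             # r = b - aq
    return q, r

# The worked example: -22 = 7*(-4) + 6
assert div_mod(-22, 7) == (-4, 6)
# The opening example: 23 = 4*5 + 3
assert div_mod(23, 4) == (5, 3)

# Spot-check the defining property b = aq + r with 0 <= r < a
for b in range(-50, 50):
    q, r = div_mod(b, 7)
    assert b == 7*q + r and 0 <= r < 7
```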
hands-on Exercise \(\PageIndex{1}\label{he:divalgo-01}\)
Compute the quotients \(q\) and the remainders \(r\) when \(b\) is divided by \(a\):
(a) \(b= 128\), \(a=7\)  (b) \(b=-128\), \(a=7\)  (c) \(b=-389\), \(a=16\)
Be sure to verify that \(b=aq+r\).
The division algorithm can be generalized to any nonzero integer \(a\).
Corollary \(\PageIndex{2}\label{cor:divalgo}\)
Given any integers \(a\) and \(b\) with \(a\neq 0\), there exist uniquely determined integers \(q\) and \(r\) such that \(b = aq +r\), where \(0\leq r < |a|\).
Proof
We only have to consider the case of \(a<0\). Since \(-a>0\), the original division algorithm assures that there exist uniquely determined integers \(q'\) and \(r\) such that \[b = (-a)\cdot q' + r,\] where \(0\leq r<-a=|a|\). Therefore, we can set \(q=-q'\).
example \(\PageIndex{1}\label{eg:divalgo-01}\)
Not every calculator or computer program computes \(q\) and \(r\) the way we want them done in mathematics. The safest solution is to compute \(|b| \div |a|\) in the usual way, inspect the remainder to see if it fits the criterion \(0\leq r<|a|\). If necessary, adjust the value of \(q\) so that the remainder \(r\) satisfies the requirement \(0\leq r<|a|\). Here are some examples: \[\begin{array}{|r|r|r@{\;=\;}l|r|r|} \hline b & a & b & aq+r & q & r \\ \hline 14 & 4 & 14 & 4\cdot3+2 & 3 & 2 \\ -14 & 4 & -14 & 4\cdot(-4)+2 & -4 & 2 \\ -17 & -3 & -17 & (-3)\cdot6+1 & 6 & 1 \\ 17 & -3 & 17 & (-3)\cdot(-5)+2 & -5 & 2 \\ \hline \end{array}\]
The quotient \(q\) can be positive or negative, and the remainder \(r\) is always nonnegative.
Definition
Given integers \(a\) and \(b\), with \(a\neq 0\), let \(q\) and \(r\) denote the unique integers such that \(b=aq+r\), where \(0\leq r<|a|\). Define the binary operators \(\mathrm{ div }\) and \(\bmod\) as follows: \[b\mathrm{\ div\ }a = q, \qquad b \bmod a = r.\]
Therefore, \(b\mathrm{ div } a\) gives the quotient, and \(b\bmod a\) yields the remainder of the integer division \(b\div a\). Recall that \(b\mathrm{ div } a\) can be positive, negative, or even zero. But
\(b\bmod a\) is always a nonnegative integer less than \(|a|\).
example \(\PageIndex{2}\label{eg:divalgo-02}\)
From the last example, we have
\(14\mathrm{ div } 4 = 3\), and \(14\bmod 4 = 2\). \(-14\mathrm{ div } 4 =-4\), and \(-14\bmod 4 = 2\). \(-17\mathrm{ div }-3 = 6\), and \(-17\bmod-3 = 1\). \(17\mathrm{ div }-3 =-5\), and \(17\bmod-3 = 2\).
Do not forget to check the computations, and remember that \(a\) need not be positive.
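Note that Python's built-in divmod gives a remainder with the sign of the divisor (for example, divmod(17, -3) returns (-6, -1)), which is not the convention used here. A small wrapper shifts the remainder into \([0,|a|)\):

```python
# Adjust Python's divmod to the textbook convention: the remainder is
# always nonnegative, whatever the sign of the divisor.
def divmod_nonneg(b, a):
    q, r = divmod(b, a)
    if r < 0:                 # only happens when a < 0 in Python
        q, r = q + 1, r - a   # b = a(q+1) + (r-a), with 0 < r-a < |a|
    return q, r

# Reproduce the four rows of the worked example
assert divmod_nonneg( 14,  4) == ( 3, 2)
assert divmod_nonneg(-14,  4) == (-4, 2)
assert divmod_nonneg(-17, -3) == ( 6, 1)
assert divmod_nonneg( 17, -3) == (-5, 2)
```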
hands-on Exercise \(\PageIndex{2}\label{he:divalgo-02}\)
Complete the following table:
\[\begin{array}{|r|r|r|r|} \hline b\hfil & a\hfil & b\mathrm{ div } a \hfil & b\bmod a \hfil \\ \hline \noalign{\medskip} 334 & 15 & \qquad\qquad & \qquad\qquad \\ [6pt] 334 & -15 & & \\ [6pt] -334 & 15 & & \\ [6pt] -334 & -15 & & \\ [3pt] \hline \end{array}\] Do not forget: \(b\bmod a\) is always nonnegative.
example \(\PageIndex{3}\label{eg:divalgo-03}\)
Let \(n\) be an integer such that \[n\mathrm{ div }6 = q, \qquad\mbox{and}\qquad n\bmod6 = 4.\] Determine the values of \((2n+5)\mathrm{ div }6\), and \((2n+5)\bmod6\).
Solution
The given information implies that \(n=6q+4\). Then \[2n+5 = 2(6q+4)+5 = 12q+13 = 6(2q+2)+1.\] Therefore, \((2n+5)\mathrm{ div }6 = 2q+2\), and \((2n+5)\bmod6 = 1\).
hands-on Exercise \(\PageIndex{3}\label{he:divalgo-03}\)
Let \(n\) be an integer such that \[n\mathrm{ div }11 = q, \qquad\mbox{and}\qquad n\bmod11 = 5.\] Compute the values of \((6n-4)\mathrm{ div }11\) and \((6n-4)\bmod11\).
example \(\PageIndex{4}\label{eg:divalgo-04}\)
Suppose today is Wednesday. Which day of the week is it a year from now?
Solution
Denote Sunday, Monday, … , Saturday as Day 0, 1, … 6, respectively. Today is Day 3. A year (assuming 365 days in a year) from today will be Day 368. Since \[368 = 7\cdot52+4,\] it will be Day 4 of the week. Therefore, a year from today will be Thursday.
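The same bookkeeping in code is a single mod-7 computation, with Day 0 = Sunday as in the solution:

```python
# Day-of-week arithmetic: Day 0 = Sunday, ..., Day 6 = Saturday.
days = ["Sunday", "Monday", "Tuesday", "Wednesday",
        "Thursday", "Friday", "Saturday"]
today = 3                          # Wednesday
print(days[(today + 365) % 7])     # 368 mod 7 = 4, i.e. Thursday
```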
hands-on Exercise \(\PageIndex{4}\label{he:divalgo-04}\)
Suppose today is Friday. Which day of the week is it 1000 days from today?
Any integer divided by 7 will produce a remainder between 0 and 6, inclusive. Define \[A_i = \{ x\in\mathbb{Z} \mid x\bmod 7 = i \} \quad\mbox{ for } 0\leq i\leq 6,\] we find \[\mathbb{Z} = A_0\cup A_1\cup A_2\cup A_3\cup A_4\cup A_5\cup A_6,\] where the sets \(A_i\) are
pairwise disjoint. The collection of sets \[\{A_0,A_1,A_2,A_3,A_4,A_5,A_6\}\] is called a partition of \(\mathbb{Z}\), because every integer belongs to one and only one of these seven subsets. We also say that \(\mathbb{Z}\) is the disjoint union of \(A_0,A_1,\ldots,A_6\). The same argument also applies to the division by any integer \(n\geq2\).
In general, a collection or family of finite sets \(\{S_1,S_2,\ldots, S_n\}\) is called a partition of the set \(S\) if \(S\) is the disjoint union of \(S_1,S_2,\ldots,S_n\). Partition is a very important concept, because it divides the elements of \(S\) into \(n\) classes \(S_1,S_2,\ldots,S_n\) such that every element of \(S\) belongs to a unique class. We shall revisit partitions when we study relations in Chapter [ch:relations].
Summary and Review The division of integers can be extended to negative integers. Given any integer \(b\), and any nonzero integer \(a\), there exist uniquely determined integers \(q\) and \(r\) such that \(b=aq+r\), where \(0\leq r<|a|\). We call \(q\) the quotient, and \(r\) the remainder. The reason we have unique choices for \(q\) and \(r\) is the criterion we place on \(r\). It has to satisfy the requirement \(0\leq r<|a|\). In fact, the criterion \(0\leq r<|a|\) is the single most important deciding factor in our choice of \(q\) and \(r\). We define two binary operations on integers. The \(\mathrm{ div }\) operation yields the quotient, and the \(\bmod\) operation produces the remainder, of the integer division \(b\div a\). In other words, \(b\mathrm{ div } a=q\), and \(b\bmod a=r\).
exercise \(\PageIndex{1}\label{ex:divalgo-01}\)
Find \(b\mathrm{ div } a\) and \(b\bmod a\), where
\(a= 13\), \(b= 300\) \(a= 11\), \(b=-120\) \(a=-22\), \(b= 145\)
exercise \(\PageIndex{2}\label{ex:divalgo-02}\)
Find \(b\mathrm{ div } a\) and \(b\bmod a\), where
\(a= 19\), \(b= 79\) \(a= 59\), \(b= 18\) \(a= 16\), \(b=-823\) \(a=-16\), \(b= 172\) \(a=- 8\), \(b=- 67\) \(a=-12\), \(b=-134\)
exercise \(\PageIndex{3}\label{ex:divalgo-03}\)
Prove that \[b\bmod a \in\{0,1,2,\ldots,|a|-1\}\] for any integers \(a\) and \(b\), where \(a\neq0\).
exercise \(\PageIndex{4}\label{ex:divalgo-04}\)
Prove that among any three consecutive integers, one of them is a multiple of 3.
Hint
Let the three consecutive integers be \(n\), \(n+1\), and \(n+2\). What are the possible values of \(n\bmod3\)? What does this translate into, according to the division algorithm? In each case, what would \(n\), \(n+1\), and \(n+2\) look like?
exercise \(\PageIndex{5}\label{ex:divalgo-05}\)
Prove that \(n^3-n\) is always a multiple of 3 for any integer \(n\) by
A case-by-case analysis. Factoring \(n^3-n\).
exercise \(\PageIndex{6}\label{ex:divalgo-06}\)
Prove that the set \(\{n,n+4,n+8,n+12,n+16\}\) contains a multiple of 5 for any positive integer \(n\).
exercise \(\PageIndex{7}\label{ex:divalgo-07}\)
Let \(m\) and \(n\) be integers such that \[m\mathrm{ div }5 = s, \qquad m\bmod5=1, \qquad n\mathrm{ div }5 = t, \qquad n\bmod5=3.\] Determine
\((m+n)\mathrm{ div }5\) \((m+n)\bmod5\) \((mn)\mathrm{ div }5\) \((mn)\bmod5\)
exercise \(\PageIndex{8}\label{ex:divalgo-08}\)
Let \(m\) and \(n\) be integers such that \[m\mathrm{ div }8 = s, \qquad m\bmod8=3, \qquad n\mathrm{ div }8 = t, \qquad n\bmod8=6.\] Determine
\((m+2)\mathrm{ div }8\) & (b) \((m+2)\bmod8\) \((3mn)\mathrm{ div }8\) & (d) \((3mn)\bmod8\) \((5m+2n)\mathrm{ div }8\) & (f) \((5m+2n)\bmod8\) \((3m-2n)\mathrm{ div }8\) & (h) \((3m-2n)\bmod8\) |
Suppose that $[U] = \{0,\ldots,U-1\}$ is the universe from which all elements will be taken, and let $A$ be a hash table of size $m$.
A hash function $h:[U]\rightarrow[m]$ is truly random if
For any set of distinct elements $\{x_{1},\ldots,x_{k}\} \subseteq [U]$ and any values $u_{1},\ldots,u_{k} \in [m]$ we have $\Pr_{h}[h(x_{1}) = u_{1} \wedge \ldots \wedge h(x_{k}) = u_{k}] = \frac{1}{m^{k}}$. This of course implies that each $h(x_{i})$ is uniformly random and independent of $h(x_{1}),\ldots,h(x_{i-1}),h(x_{i+1}),\ldots,h(x_{k})$.
I was trying to understand why this is not possible to implement efficiently in practice, and found this paper where at some point they write in the abstract:
Hashing is fundamental to many algorithms and data structures widely used in practice. For theoretical analysis of hashing, there have been two main approaches. First, one can assume that the hash function is truly random, mapping each data item independently and uniformly to the range. This idealized model is unrealistic because a truly random hash function
requires an exponential number of bits to describe.
I do not see how using an exponential number of bits can help us come up with a truly random hash function when the universe is $[U]$ and the hash table can store at most $m$ elements.
How would you use an exponential number of bits to come up with a function that can guarantee the probabilities described above? |
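One way to see the claim concretely: a truly random $h$ is just an independent uniform draw for every key, fixed once. The only generic way to "describe" such a function is to store its entire table, $U$ entries of $\log_2 m$ bits each, and that total is exponential in the bit-length $\log_2 U$ of a single key, which is the natural input-size measure. A sketch with a deliberately tiny universe:

```python
# A truly random h: [U] -> [m] is a lookup table of independent uniform
# values, drawn once. Storing that table IS the description of h.
import math, random

U, m = 2**16, 64                           # tiny illustrative universe
rng = random.Random(0)
h = [rng.randrange(m) for _ in range(U)]   # the entire description of h

bits_to_describe_h = U * math.log2(m)      # 65536 keys * 6 bits each
bits_per_key = math.log2(U)                # a key itself is only 16 bits

print(bits_to_describe_h, bits_per_key)    # 393216.0 vs 16.0
```

So the table size $U\log_2 m = 2^{\log_2 U}\log_2 m$ bits is exponential in the key length, even though any single lookup only returns $\log_2 m$ bits; this is why the idealized model is considered unimplementable and why limited-independence families are used instead.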
I need some help with this proof. A subsequential limit is a limit of a subsequence.
Suppose $a_n$ is a sequence whose subsequential limits form the set $\{ L_1, L_2, \dots\}$. Suppose $L_n \to L$. Prove that $L$ is a subsequential limit of $a_n$.
Proof:
We know (1): $\forall \epsilon > 0 \; \exists N_0$ s.t. $\forall n>N_0$, $|L_n - L| < \epsilon$
(2) For each $i$: $\forall \epsilon_i > 0 \; \exists N_i$ s.t. $\forall n_i > N_i$, $|a_{n_i} - L_i| < \epsilon_i$
Can I just take my subsequence terms from the interval $|L_n - L| < \epsilon$ and then conclude by the definition of the limit that $L$ is a subsequential limit? Other than that, I am unsure of a general strategy for how to construct proofs with subsequences.
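Not quite — knowing every $L_n$ lies near $L$ is not enough; you must build a single subsequence of $a_n$ term by term. A standard diagonal construction (sketch, with $j_k$ and $n_k$ denoting the indices chosen at step $k$):

```latex
% At step k: choose j_k with |L_{j_k} - L| < 1/(2k)  (possible since L_n -> L).
% Since L_{j_k} is a subsequential limit of (a_n), infinitely many terms of
% (a_n) lie within 1/(2k) of L_{j_k}, so we may choose n_k > n_{k-1} with
% |a_{n_k} - L_{j_k}| < 1/(2k).  The triangle inequality then gives
\[
  |a_{n_k} - L| \le |a_{n_k} - L_{j_k}| + |L_{j_k} - L|
               < \tfrac{1}{2k} + \tfrac{1}{2k} = \tfrac{1}{k} \longrightarrow 0,
\]
% so (a_{n_k}) is a subsequence of (a_n) converging to L.
```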
ISSN:
1531-3492
eISSN:
1553-524X
Discrete & Continuous Dynamical Systems - B
November 2009 , Volume 12 , Issue 4
Abstract:
We study the existence of travelling-waves and local well-posedness in a subspace of $C_b^1(\R)$ for a nonlinear evolution equation recently proposed by Andrew C. Fowler to describe the dynamics of dunes. The travelling-waves we obtained, however, were more bore-like than solitary-wave-like.
Abstract:
Explosive instabilities in spatially discrete reaction-diffusion systems are studied. We identify classes of initial data developing singularities in finite time and obtain predictions of the blow-up times, whose accuracy is checked by comparison with numerical solutions. We present averaged and local blow-up estimates. Local blow-up results show that it is possible to have blow-up after blow-up. Conditions excluding or implying blow-up at space infinity are discussed.
Abstract:
We study the asymptotic behavior of the solution of the Laplace equation in a domain perforated along the boundary. Assuming that the boundary microstructure is random, we construct the limit problem and prove the homogenization theorem. Moreover we apply those results to some spectral problems.
Abstract:
Cancer is one of the greatest killers in the world, particularly in western countries. Much medical research effort is devoted to cancer, and mathematical modeling must be considered as an additional tool for physicians and biologists to understand cancer mechanisms and to determine adapted treatments. Metastases account for much of the seriousness of cancer. In 2000, Iwata et al. [9] proposed a model which describes the evolution of an untreated metastatic tumor population. We provide here a mathematical analysis of this model, which brings us to the determination of a Malthusian rate characterizing the exponential growth of the population. We also provide a numerical analysis of the PDE given by the model.
Abstract:
We construct an auto-validated algorithm that calculates a close to identity change of variables which brings a general saddle point into a normal form. The transformation is robust in the underlying vector field, and is analytic on a computable neighbourhood of the saddle point. The normal form is suitable for computations aimed at enclosing the flow close to the saddle, and the time it takes a trajectory to pass it. Several examples illustrate the usefulness of this method.
Abstract:
In this paper, we answer the question under which conditions the porous-medium equation with convection and with periodic boundary conditions possesses gradient-type Lyapunov functionals (first-order entropies). It is shown that the weighted sum of first-order and zeroth-order entropies are Lyapunov functionals if the weight for the zeroth-order entropy is sufficiently large, depending on the strength of the convection. This provides new a priori estimates for the convective porous-medium equation. The proof is based on an extension of the algorithmic entropy construction method which is based on systematic integration by parts, formulated as a polynomial decision problem.
Abstract:
Biological invasion theory is one of the important subjects in biological control, environmental preservation, and the propagation of infectious diseases. I propose a propagation speed of traveling waves induced by an invasion of alien species for two-prey, one-predator models in which the commensalism induced by a predator between two prey species is considered. I investigate the spreading phenomenon and the minimal propagation speed for two cases, in which the invader is one species or more than one species. By numerical simulations and mathematical analysis, I conclude that the minimal speed is contingent only on the mobility of the invasive species, and furthermore on that of one invader species even if two invader species invade at the same time. It is also shown that the commensalism via the predator species affects spreading phenomena and the propagation speed, which is contingent on the type and the number of invasive species.
Abstract:
We formulate and analyze a deterministic mathematical model which incorporates some basic epidemiological features of the co-dynamics of malaria and tuberculosis. Two sub-models, namely: malaria-only and TB-only sub-models are considered first of all. Sufficient conditions for the local stability of the steady states are presented. Global stability of the disease-free steady state does not hold because the two sub-models exhibit backward bifurcation. The dynamics of the dual malaria-TB only sub-model is also analyzed. It has different dynamics from those of the malaria-only and TB-only sub-models: the dual malaria-TB only model has no positive endemic equilibrium whenever the reproduction number for dual malaria-TB co-infection $R_{MT}^d<1$; its disease-free equilibrium is globally asymptotically stable whenever $R_{MT}^d<1$; and it does not exhibit the phenomenon of backward bifurcation. Graphical representations of this phenomenon are shown, while numerical simulations of the full model are carried out in order to determine whether the two diseases will co-exist whenever their partial reproductive numbers exceed unity. Finally, we perform sensitivity analysis on the key parameters that drive the disease dynamics in order to determine their relative importance to disease transmission.
Abstract:
We consider an S-I(-R) type infectious disease model where the susceptibles differ by their susceptibility to infection. This model presents several challenges. Even existence and uniqueness of solutions is non-trivial. Further it is difficult to linearize about the disease-free equilibrium in a rigorous way. This makes disease persistence a necessary alternative to linearized instability in the superthreshold case. Application of dynamical systems persistence theory faces the difficulty of finding a compact attracting set. One can work around this obstacle by using integral equations and limit equations making it the special case of a persistence theory where the state space is just a set.
Abstract:
We derive an age-structured population model for the growth of a single species on a 2-dimensional (2D) lattice strip with Neumann boundary conditions. We show that the dynamics of the mature population is governed by a lattice reaction-diffusion system with delayed global interaction. Using the theory of asymptotic speed of spread and monotone traveling waves for monotone semiflows, we obtain the asymptotic speed of spread $c^*$, the nonexistence of traveling wavefronts with wave speed $0 < c < c^*$, and the existence of a traveling wavefront connecting the two equilibria $w\equiv 0$ and $w\equiv w^+$ for $c\geq c^*$.
Abstract:
In this paper, we study the error estimate of the $\theta$-scheme for the backward stochastic differential equation $y_t=\varphi(W_T)+\int_t^Tf(s,y_s)ds-\int_t^Tz_sdW_s$. We show that this scheme is of first-order convergence in $y$ for general $\theta$. In particular, for the case of $\theta=\frac{1}{2}$ (i.e., the Crank-Nicolson scheme), we prove that this scheme is of second-order convergence in $y$ and first-order in $z$. Some numerical examples are also given to validate our theoretical results.
Let $A$ be a real orthogonal matrix. Then $A^{\text T} A = I.$ Let $\lambda \in \Bbb C$ be an eigenvalue of $A$ corresponding to the eigenvector $X \in \Bbb C^n.$ Then we have
$$\begin{align*} X^{\text T} A^{\text T} A X & = X^{\text T} X. \\ \implies (AX)^{\text T} AX & = X^{\text T} X. \\ \implies (\lambda X)^{\text T} \lambda X & = X^{\text T} X. \\ \implies {\lambda}^2 X^{\text T} X & = X^{\text T} X. \\ \implies ({\lambda}^2 - 1) X^{\text T} X & = 0. \end{align*}$$
Since $X$ is an eigenvector $X \neq 0.$ Therefore ${\|X\|_2}^2 = X^{\text T} X \neq 0.$ Hence we must have ${\lambda}^2 - 1 = 0$ i.e. ${\lambda}^2 = 1.$ So $\lambda = \pm 1.$
So according to my argument above it follows that eigenvalues of a real orthogonal matrix are $\pm 1.$ But I think that I am wrong as I know that the eigenvalues of an orthogonal matrix are unit modulus i.e. they lie on the unit circle.
What's going wrong in my argument above? Please help me in this regard.
Thank you very much for your valuable time.
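A quick numerical illustration of where such an argument breaks: for a real rotation matrix the eigenvectors are complex, and $X^{\text T}X$ (without conjugation) can be exactly zero, so the step that divides by $X^{\text T}X$ is invalid; the conjugate transpose is what gives $\|X\|^2 \neq 0$. A sketch using NumPy:

```python
import numpy as np

# A 90-degree rotation: real orthogonal, A^T A = I
A = np.array([[0.0, -1.0],
              [1.0,  0.0]])
eigvals, eigvecs = np.linalg.eig(A)   # eigenvalues are +i and -i: unit
                                      # modulus, but not +-1

X = eigvecs[:, 0]                     # a complex eigenvector
# Without conjugation the "norm" can vanish for complex X:
print(X.T @ X)                        # approximately 0
# The correct inner product uses the conjugate transpose:
print(np.conj(X).T @ X)               # approximately 1
```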
Please assume that this graph is a highly magnified section of the derivative of some function, say $F(x)$. Let's denote the derivative by $f(x)$.Let's denote the width of a sample by $h$ where $$h\rightarrow0$$Now, for finding the area under the curve between the bounds $a ~\& ~b $ we can a...
@Ultradark You can try doing a finite difference to get rid of the sum and then compare term by term. Otherwise I am terrible at anything to do with primes that I don't know the identities of $\pi (n)$ well
@Silent No, take for example the prime 3. 2 is not a residue mod 3, so there is no $x\in\mathbb{Z}$ such that $x^2-2\equiv 0$ mod $3$.
However, you have two cases to consider. The first where $\binom{2}{p}=-1$ and $\binom{3}{p}=-1$ (In which case what does $\binom{6}{p}$ equal?) and the case where one or the other of $\binom{2}{p}$ and $\binom{3}{p}$ equals 1.
Also, probably something useful for congruence, if you didn't already know: If $a_1\equiv b_1\text{mod}(p)$ and $a_2\equiv b_2\text{mod}(p)$, then $a_1a_2\equiv b_1b_2\text{mod}(p)$
Is there any book or article that explains the motivations of the definitions of group, ring , field, ideal etc. of abstract algebra and/or gives a geometric or visual representation to Galois theory ?
Jacques Charles François Sturm ForMemRS (29 September 1803 – 15 December 1855) was a French mathematician.== Life and work ==Sturm was born in Geneva (then part of France) in 1803. The family of his father, Jean-Henri Sturm, had emigrated from Strasbourg around 1760 - about 50 years before Charles-François's birth. His mother's name was Jeanne-Louise-Henriette Gremay. In 1818, he started to follow the lectures of the academy of Geneva. In 1819, the death of his father forced Sturm to give lessons to children of the rich in order to support his own family. In 1823, he became tutor to the son...
I spent my career working with tensors. You have to be careful about defining multilinearity, domain, range, etc. Typically, tensors of type $(k,\ell)$ involve a fixed vector space, not so many letters varying.
UGA definitely grants a number of masters to people wanting only that (and sometimes admitted only for that). You people at fancy places think that every university is like Chicago, MIT, and Princeton.
hi there, I need to linearize a nonlinear system about a fixed point. I've computed the Jacobian matrix, but one of the elements of this matrix is undefined at the fixed point. What is a better approach to solve this issue? The element is (24*x_2 + 5*cos(x_1)*x_2)/abs(x_2). The fixed point is x_1=0, x_2=0
Consider the following integral: $\int \frac{1}{4}\cdot\frac{1}{1+(u/2)^2}\,dx$ Why does it matter if we put the constant 1/4 behind the integral versus keeping it inside? The solution is $\frac{1}{2}\arctan{(u/2)}$. Or am I overlooking something?
*it should be du instead of dx in the integral
**and the solution is missing a constant C of course
Is there a standard way to divide radicals by polynomials? Stuff like $\frac{\sqrt a}{1 + b^2}$?
My expression happens to be in a form I can normalize to that, just the radicand happens to be a lot more complicated. In my case, I'm trying to figure out how to best simplify $\frac{x}{\sqrt{1 + x^2}}$, and so far, I've gotten to $\frac{x \sqrt{1+x^2}}{1+x^2}$, and it's pretty obvious you can move the $x$ inside the radical.
My hope is that I can somehow remove the polynomial from the bottom entirely, so I can then multiply the whole thing by a square root of another algebraic fraction.
Complicated, I know, but this is me trying to see if I can skip calculating Euclidean distance twice going from atan2 to something in terms of asin for a thing I'm working on.
"... and it's pretty obvious you can move the $x$ inside the radical" To clarify this in advance, I didn't mean literally move it verbatim, but via $x \sqrt{y} = \text{sgn}(x) \sqrt{x^2 y}$. (Hopefully, this was obvious, but I don't want to confuse people on what I meant.)
Ignore my question. I'm coming to the realization it's just not working how I would've hoped, so I'll just go with what I had before.
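For what it's worth, a computer-algebra check of the simplification discussed above; the identity $\sin(\arctan x) = x/\sqrt{1+x^2}$ is exactly what links the atan2-based and asin-based formulations:

```python
import sympy as sp

x = sp.symbols('x', real=True)
expr = x / sp.sqrt(1 + x**2)

# The rationalized form from the chat is the same expression:
rationalized = sp.simplify(expr - x * sp.sqrt(1 + x**2) / (1 + x**2))
print(rationalized)                                  # 0

# Link between the atan and asin formulations:
print(sp.simplify(sp.sin(sp.atan(x)) - expr))        # 0
```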
Consideration of the quantum mechanical description of the particle-in-a-box exposed two important properties of quantum mechanical systems. We saw that the eigenfunctions of the Hamiltonian operator are orthogonal, and we also saw that the position and momentum of the particle could not be determined exactly. We now examine the generality of these insights by stating and proving some fundamental theorems. These theorems use the Hermitian property of quantum mechanical operators, which is described first.
Hermitian Theorem
Since the eigenvalues of a quantum mechanical operator correspond to measurable quantities, the eigenvalues must be real, and consequently a quantum mechanical operator must be Hermitian.
Proof
We start with the premises that ψ and φ are functions, \(\int d\tau\) represents integration over all coordinates, and the operator  is Hermitian by definition if
\[ \int \psi ^* \hat {A} \psi d\tau = \int (\hat {A} ^* \psi ^* ) \psi d\tau \label {4-37}\]
This equation means that the complex conjugate of  can operate on ψ* to produce the same result after integration as  operating on ψ, followed by integration. To prove that a quantum mechanical operator  is Hermitian, consider the eigenvalue equation and its complex conjugate.
\[\hat {A} \psi = a \psi \label {4-38}\]
\[\hat {A}^* \psi ^* = a^* \psi ^* = a \psi ^* \label {4-39}\]
Note that a* = a because the eigenvalue is real. Multiply (4-38) and (4-39) from the left by ψ* and ψ, respectively, and integrate over all the coordinates. Note that ψ is normalized. The results are
\[ \int \psi ^* \hat {A} \psi d\tau = a \int \psi ^* \psi d\tau = a \label {4-40}\]
\[ \int \psi \hat {A}^* \psi ^* d \tau = a \int \psi \psi ^* d\tau = a \label {4-41}\]
Since both integrals equal a, they must be equivalent.
\[ \int \psi ^* \hat {A} \psi d\tau = \int \psi \hat {A}^* \psi ^* d\tau \label {4-42}\]
The operator \(\hat {A}^*\) acting on the function \(\psi ^*\) produces a new function. Since functions commute, Equation (4-42) can be rewritten as \[ \int \psi ^* \hat {A} \psi d\tau = \int (\hat {A}^*\psi ^*) \psi d\tau \label {4-43}\]
This equality means that  is Hermitian.
Orthogonality Theorem
Eigenfunctions of a Hermitian operator are orthogonal if they have different eigenvalues. Because of this theorem, we can identify orthogonal functions easily without having to integrate or conduct an analysis based on symmetry or other considerations.
Proof
ψ and φ are two eigenfunctions of the operator  with real eigenvalues \(a_1\) and \(a_2\), respectively. Since the eigenvalues are real, \(a_1^* = a_1\) and \(a_2^* = a_2\).
\[\hat {A} \psi = a_1 \psi \]
\[\hat {A}^* \varphi ^* = a_2 \varphi ^* \label {4-44}\]
Multiply the first equation by φ* and the second by ψ and integrate.
\[\int \varphi ^* \hat {A} \psi d\tau = a_1 \int \varphi ^* \psi d\tau \]
\[\int \psi \hat {A}^* \varphi ^* d\tau = a_2 \int \psi \varphi ^* d\tau \label {4-45}\]
Subtract the two equations in (4-45) to obtain
\[\int \varphi ^*\hat {A} \psi d\tau - \int \psi \hat {A} ^* \varphi ^* d\tau = (a_1 - a_2) \int \varphi ^* \psi d\tau \label {4-46}\]
The left-hand side of (4-46) is zero because  is Hermitian, yielding
\[ 0 = (a_1 - a_2 ) \int \varphi ^* \psi d\tau \label {4-47}\]
If \(a_1\) and \(a_2\) in (4-47) are not equal, then the integral must be zero. This result proves that nondegenerate eigenfunctions of the same operator are orthogonal.
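The theorem can be illustrated numerically with the particle-in-a-box eigenfunctions mentioned at the start of the section: they belong to a Hermitian Hamiltonian with nondegenerate eigenvalues, so their overlap integrals should vanish. A sketch (the box length and grid size are arbitrary choices):

```python
import numpy as np

# Particle-in-a-box eigenfunctions psi_n(x) = sqrt(2/L) * sin(n*pi*x/L).
# Different n means different (nondegenerate) eigenvalues, so the
# Orthogonality Theorem predicts a vanishing overlap integral.
L = 1.0            # box length (arbitrary choice)
N = 200000         # grid resolution (arbitrary choice)
x = np.linspace(0.0, L, N + 1)
dx = L / N

def psi(n):
    return np.sqrt(2.0 / L) * np.sin(n * np.pi * x / L)

overlap = np.sum(psi(2) * psi(3)) * dx   # n=2 vs n=3: should vanish
norm    = np.sum(psi(2) * psi(2)) * dx   # n=2 vs itself: should be 1
print(overlap, norm)
```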
Exercise 4.44
Schmidt Orthogonalization Theorem
If the eigenvalues of two eigenfunctions are the same, then the functions are said to be degenerate, and linear combinations of the degenerate functions can be formed that will be orthogonal to each other. Since the two eigenfunctions have the same eigenvalues, the linear combination also will be an eigenfunction with the same eigenvalue. Degenerate eigenfunctions are not automatically orthogonal but can be made so mathematically. The proof of this theorem shows us one way to produce orthogonal degenerate functions.
Proof
If ψ and φ are degenerate but not orthogonal, define Φ = φ − Sψ, where S is the overlap integral \(\int \psi ^* \varphi d\tau \); then ψ and Φ will be orthogonal.
\[\int \psi ^* \Phi d\tau = \int \psi ^* (\varphi - S\psi ) d\tau = \int \psi ^* \varphi d\tau - S \int \psi ^*\psi d\tau \label {4-48}\]
\[= S - S = 0\]
Exercise 4.45
Find N that normalizes Φ if \(Φ = N(φ − Sψ)\) where ψ and φ are normalized and S is their overlap integral.
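A numerical sketch of the Schmidt step, using real unit vectors and the dot product as stand-ins for normalized wavefunctions and the overlap integral (the random vectors are illustrative only):

```python
import numpy as np

rng = np.random.default_rng(0)

# Real unit vectors stand in for the normalized degenerate functions,
# and the dot product stands in for the overlap integral.
psi = rng.standard_normal(5)
psi /= np.linalg.norm(psi)
phi = rng.standard_normal(5)
phi /= np.linalg.norm(phi)

S = psi @ phi                  # overlap integral S
Phi = phi - S * psi            # Schmidt step: Phi = phi - S*psi
Phi /= np.linalg.norm(Phi)     # for real S this norm is sqrt(1 - S**2),
                               # i.e. N = 1/sqrt(1 - S**2) as in Exercise 4.45

print(psi @ Phi)               # essentially zero: psi and Phi are orthogonal
```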
Commuting Operator Theorem
If two operators commute, then they can have the same set of eigenfunctions. By definition, two operators \(\hat {A}\) and \(\hat {B}\) commute if the effect of applying \(\hat {A}\) then \(\hat {B}\) is the same as applying \(\hat {B}\) then \(\hat {A}\), i.e. \(\hat {A}\hat {B} = \hat {B} \hat {A}\). For example, the operations brushing-your-teeth and combing-your-hair commute, while the operations getting-dressed and taking-a-shower do not. This theorem is very important. If two operators commute and consequently have the same set of eigenfunctions, then the corresponding physical quantities can be evaluated or measured exactly simultaneously with no limit on the uncertainty. As mentioned previously, the eigenvalues of the operators correspond to the measured values.
Proof
If \(\hat {A}\) and \(\hat {B}\) commute and ψ is an eigenfunction of \(\hat {B}\) with eigenvalue b, then
\[\hat {B} \hat {A} \psi = \hat {A} \hat {B} \psi = \hat {A} b \psi = b \hat {A} \psi \label {4-49}\]
Equation (4-49) says that \(\hat {A} \psi \) is an eigenfunction of \(\hat {B}\) with eigenvalue b, which means that, provided b is nondegenerate, \(\hat {A}\) operating on ψ cannot change ψ except by a constant factor. At most, \(\hat {A}\) operating on ψ can produce a constant times ψ.
\[\hat {A} \psi = a \psi \label {4-50}\]
\[\hat {B} (\hat {A} \psi ) = \hat {B} (a \psi ) = a \hat {B} \psi = ab\psi = b (a \psi ) \label {4-51}\]
Equation \(\ref{4-51}\) shows that Equation \(\ref{4-50}\) is consistent with Equation \(\ref{4-49}\). Consequently ψ also is an eigenfunction of \(\hat {A}\) with eigenvalue a.
Exercise 4.46
Write definitions of the terms orthogonal and commutation.
Exercise 4.47
Show that the operators for momentum in the x-direction and momentum in the y-direction commute, but operators for momentum and position along the x-axis do not commute. Since differential operators are involved, you need to show whether
\[\hat {P} _x \hat {P} _y f (x,y) = \hat {P} _y \hat {P} _x f (x, y)\]
\[\hat {P} _x \hat {x} f(x) = \hat {x} \hat {P} _x f(x) \]
where f is an arbitrary function, or you could try a specific form for f, e.g. f = 6xy.
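A symbolic check of both commutators in this exercise, with SymPy standing in for the suggested hand calculation (the arbitrary function f is kept symbolic):

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)
hbar = sp.symbols('hbar', positive=True)
f = sp.Function('f')(x, y)

# Operators as functions acting on an arbitrary f(x, y):
Px = lambda g: -sp.I * hbar * sp.diff(g, x)   # momentum along x
Py = lambda g: -sp.I * hbar * sp.diff(g, y)   # momentum along y
X  = lambda g: x * g                          # position along x

# Momentum components commute (mixed partials are equal):
print(sp.simplify(Px(Py(f)) - Py(Px(f))))     # 0

# Position and momentum along x do not commute: [x, Px] f = i*hbar*f
print(sp.simplify(X(Px(f)) - Px(X(f))))       # i*hbar*f(x, y)
```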
General Heisenberg Uncertainty Principle
Although it will not be proven here, there is a general statement of the uncertainty principle in terms of the commutation property of operators. If two operators \(\hat {A}\) and \(\hat {B}\) do not commute, then the uncertainties (standard deviations σ) in the physical quantities associated with these operators must satisfy
\[\sigma _A \sigma _B \ge \frac {1}{2} \left| \int \psi ^* [ \hat {A} \hat {B} - \hat {B} \hat {A} ] \psi d\tau \right| \label {4-52}\]
where the quantity in square brackets is called the commutator, and \(| \; |\) signifies the modulus or absolute value. If \(\hat {A}\) and \(\hat {B}\) commute, then the right-hand side of Equation (4-52) is zero, so either or both \(\sigma _A\) and \(\sigma _B\) could be zero, and there is no restriction on the uncertainties in the measurements of the eigenvalues a and b. If \(\hat {A}\) and \(\hat {B}\) do not commute, then the right-hand side of Equation (4-52) will not be zero, and neither \(\sigma _A\) nor \(\sigma _B\) can be zero unless the other is infinite. Consequently, both a and b cannot be eigenvalues of the same wavefunctions and cannot be measured simultaneously to arbitrary precision.
Exercise 4.48
Show that the commutator for position and momentum in one dimension equals \(i \hbar\) and that the right-hand side of Equation (4-52) therefore equals \(\hbar /2\), giving \(\sigma _x \sigma _{px} \ge \frac {\hbar}{2}\)
Exercise 4.49
In a later chapter you will learn that the operators for the three components of angular momentum along the three directions in space (x, y, z) do not commute. What is the relevance of this mathematical property to measurements of angular momentum in atoms and molecules?
Exercise 4.50
Write the definition of a Hermitian operator and statements of the Orthogonality Theorem, the Schmidt Orthogonalization Theorem, and the Commuting Operator Theorem.
Exercise 4.51
Reconstruct proofs for the Orthogonality Theorem, the Schmidt Orthogonalization Theorem, and the Commuting Operator Theorem.
Exercise 4.52
Write a paragraph summarizing the connection between the commutation property of operators and the uncertainty principle.
Contributors: Adapted from "Quantum States of Atoms and Molecules" by David M. Hanson, Erica Harvey, Robert Sweeney, Theresa Julia Zielinski
2018-09-11 04:29
Properties of FBK UFSDs after neutron and proton irradiation up to $6\times10^{15}$ n$_{eq}$/cm$^2$ / Mazza, S.M. (UC, Santa Cruz, Inst. Part. Phys.) ; Estrada, E. (UC, Santa Cruz, Inst. Part. Phys.) ; Galloway, Z. (UC, Santa Cruz, Inst. Part. Phys.) ; Gee, C. (UC, Santa Cruz, Inst. Part. Phys.) ; Goto, A. (UC, Santa Cruz, Inst. Part. Phys.) ; Luce, Z. (UC, Santa Cruz, Inst. Part. Phys.) ; McKinney-Martinez, F. (UC, Santa Cruz, Inst. Part. Phys.) ; Rodriguez, R. (UC, Santa Cruz, Inst. Part. Phys.) ; Sadrozinski, H.F.-W. (UC, Santa Cruz, Inst. Part. Phys.) ; Seiden, A. (UC, Santa Cruz, Inst. Part. Phys.) et al. The properties of 60-$\mu$m thick Ultra-Fast Silicon Detectors (UFSD) manufactured by Fondazione Bruno Kessler (FBK), Trento (Italy) were tested before and after irradiation with minimum ionizing particles (MIPs) from a $^{90}$Sr β-source. [...] arXiv:1804.05449. - 13 p. Preprint - Full text
2018-08-25 06:58
Charge-collection efficiency of heavily irradiated silicon diodes operated with an increased free-carrier concentration and under forward bias / Mandić, I (Ljubljana U. ; Stefan Inst., Ljubljana) ; Cindro, V (Ljubljana U. ; Stefan Inst., Ljubljana) ; Kramberger, G (Ljubljana U. ; Stefan Inst., Ljubljana) ; Mikuž, M (Ljubljana U. ; Stefan Inst., Ljubljana) ; Zavrtanik, M (Ljubljana U. ; Stefan Inst., Ljubljana) The charge-collection efficiency of Si pad diodes irradiated with neutrons up to $8 \times 10^{15} \ \rm{n} \ cm^{-2}$ was measured using a $^{90}$Sr source at temperatures from -180 to -30°C. The measurements were made with diodes under forward and reverse bias. [...] 2004 - 12 p. - Published in : Nucl. Instrum. Methods Phys. Res., A 533 (2004) 442-453
2018-08-23 11:31
Effect of electron injection on defect reactions in irradiated silicon containing boron, carbon, and oxygen / Makarenko, L F (Belarus State U.) ; Lastovskii, S B (Minsk, Inst. Phys.) ; Yakushevich, H S (Minsk, Inst. Phys.) ; Moll, M (CERN) ; Pintilie, I (Bucharest, Nat. Inst. Mat. Sci.) Comparative studies employing Deep Level Transient Spectroscopy and C-V measurements have been performed on recombination-enhanced reactions between defects of interstitial type in boron doped silicon diodes irradiated with alpha-particles. It has been shown that self-interstitial related defects which are immobile even at room temperatures can be activated by very low forward currents at liquid nitrogen temperatures. [...] 2018 - 7 p. - Published in : J. Appl. Phys. 123 (2018) 161576
2018-08-23 11:31
Characterization of magnetic Czochralski silicon radiation detectors / Pellegrini, G (Barcelona, Inst. Microelectron.) ; Rafí, J M (Barcelona, Inst. Microelectron.) ; Ullán, M (Barcelona, Inst. Microelectron.) ; Lozano, M (Barcelona, Inst. Microelectron.) ; Fleta, C (Barcelona, Inst. Microelectron.) ; Campabadal, F (Barcelona, Inst. Microelectron.) Silicon wafers grown by the Magnetic Czochralski (MCZ) method have been processed in form of pad diodes at Instituto de Microelectrònica de Barcelona (IMB-CNM) facilities. The n-type MCZ wafers were manufactured by Okmetic OYJ and they have a nominal resistivity of $1 \rm{k} \Omega cm$. [...] 2005 - 9 p. - Published in : Nucl. Instrum. Methods Phys. Res., A 548 (2005) 355-363
2018-08-23 11:31
Silicon detectors: From radiation hard devices operating beyond LHC conditions to characterization of primary fourfold coordinated vacancy defects / Lazanu, I (Bucharest U.) ; Lazanu, S (Bucharest, Nat. Inst. Mat. Sci.) The physics potential at future hadron colliders as LHC and its upgrades in energy and luminosity Super-LHC and Very-LHC respectively, as well as the requirements for detectors in the conditions of possible scenarios for radiation environments are discussed in this contribution. Silicon detectors will be used extensively in experiments at these new facilities where they will be exposed to high fluences of fast hadrons. The principal obstacle to long-time operation arises from bulk displacement damage in silicon, which acts as an irreversible process in the material and leads to an increase of the leakage current of the detector, degrades the Signal/Noise ratio, and increases the effective carrier concentration. [...] 2005 - 9 p. - Published in : Rom. Rep. Phys.: 57 (2005) , no. 3, pp. 342-348 External link: RORPE
2018-08-22 06:27
Numerical simulation of radiation damage effects in p-type and n-type FZ silicon detectors / Petasecca, M (Perugia U. ; INFN, Perugia) ; Moscatelli, F (Perugia U. ; INFN, Perugia ; IMM, Bologna) ; Passeri, D (Perugia U. ; INFN, Perugia) ; Pignatel, G U (Perugia U. ; INFN, Perugia) In the framework of the CERN-RD50 Collaboration, the adoption of p-type substrates has been proposed as a suitable means to improve the radiation hardness of silicon detectors up to fluences of $1 \times 10^{16} \rm{n}/cm^2$. In this work two numerical simulation models will be presented for p-type and n-type silicon detectors, respectively. [...] 2006 - 6 p. - Published in : IEEE Trans. Nucl. Sci. 53 (2006) 2971-2976
2018-08-22 06:27
Technology development of p-type microstrip detectors with radiation hard p-spray isolation / Pellegrini, G (Barcelona, Inst. Microelectron.) ; Fleta, C (Barcelona, Inst. Microelectron.) ; Campabadal, F (Barcelona, Inst. Microelectron.) ; Díez, S (Barcelona, Inst. Microelectron.) ; Lozano, M (Barcelona, Inst. Microelectron.) ; Rafí, J M (Barcelona, Inst. Microelectron.) ; Ullán, M (Barcelona, Inst. Microelectron.) A technology for the fabrication of p-type microstrip silicon radiation detectors using p-spray implant isolation has been developed at CNM-IMB. The p-spray isolation has been optimized in order to withstand a gamma irradiation dose up to 50 Mrad (Si), which represents the ionization radiation dose expected in the middle region of the SCT-Atlas detector of the future Super-LHC during 10 years of operation. [...] 2006 - 6 p. - Published in : Nucl. Instrum. Methods Phys. Res., A 566 (2006) 360-365
2018-08-22 06:27
Defect characterization in silicon particle detectors irradiated with Li ions / Scaringella, M (INFN, Florence ; U. Florence (main)) ; Menichelli, D (INFN, Florence ; U. Florence (main)) ; Candelori, A (INFN, Padua ; Padua U.) ; Rando, R (INFN, Padua ; Padua U.) ; Bruzzi, M (INFN, Florence ; U. Florence (main)) High Energy Physics experiments at future very high luminosity colliders will require ultra radiation-hard silicon detectors that can withstand fast hadron fluences up to $10^{16}$ cm$^{-2}$. In order to test the detectors radiation hardness in this fluence range, long irradiation times are required at the currently available proton irradiation facilities. [...] 2006 - 6 p. - Published in : IEEE Trans. Nucl. Sci. 53 (2006) 589-594 |
The permanent electric dipole moments of polar molecules can couple to the electric field of electromagnetic radiation. This coupling induces transitions between the rotational states of the molecules. The energies that are associated with these transitions are detected in the far infrared and microwave regions of the spectrum. For example, the microwave spectrum for carbon monoxide shown at the beginning of the chapter in Figure 7.1.1 spans a frequency range of 100 to 1200 GHz, which corresponds to 3 - 40 \(cm^{-1}\).
The selection rules for the rotational transitions are derived from the transition moment integral by using the spherical harmonic functions and the appropriate dipole moment operator, \(\hat {\mu}\).
\[ \mu _T = \int Y_{J_f}^{m_f*} \hat {\mu} Y_{J_i}^{m_i} \sin \theta\, d \theta\, d \varphi \label {7-46} \]
Evaluating the transition moment integral involves a bit of mathematical effort. This evaluation reveals that the transition moment depends on the square of the dipole moment of the molecule, \(\mu ^2\) and the rotational quantum number, \(J\), of the initial state in the transition,
\[\mu _T = \mu ^2 \dfrac {J + 1}{2J + 1} \label {7-47}\]
and that the selection rules for rotational transitions are
\[\Delta J = \pm 1 \label {7-48}\]
\[\Delta m_J = 0, \pm 1 \label {7-49}\]
For \(\Delta J = +1\), a photon is absorbed; for \(\Delta J = -1\) a photon is emitted.
Exercise \(\PageIndex{1}\)
Explain why your microwave oven heats water but not air. Hint: draw and compare Lewis structures for components of air and for water.
The energies of the rotational levels are given by Equation \(\ref{7-28}\),
\[E = J(J + 1) \dfrac {\hbar ^2}{2I} \label {7-28}\]
and each energy level has a degeneracy of \(2J+1\) due to the different \(m_J\) values.
Exercise \(\PageIndex{2}\)
Use the rotational energy level diagram for J = 0, 1, and 2 that you produced in Exercise 7.9, and add arrows to show all the allowed transitions between states that cause electromagnetic radiation to be absorbed or emitted.
Transition Energies
The transition energies for absorption of radiation are given by
\[\Delta E_{states} = E_f - E_i = E_{photon} = h \nu = hc \bar {\nu} \label {7-50}\]
\[h \nu =hc \bar {\nu} = J_f (J_f +1) \dfrac {\hbar ^2}{2I} - J_i (J_i +1) \dfrac {\hbar ^2}{2I} \label {7-51}\]
Since microwave spectroscopists use frequency, and infrared spectroscopists use wavenumber units when describing rotational spectra and energy levels, both \(\nu\) and \(\bar {\nu}\) are included in Equation \(\ref{7-51}\), and \(J_i\) and \(J_f\) are the rotational quantum numbers of the initial (lower) and final (upper) levels involved in the absorption transition. When we add in the constraints imposed by the selection rules, \(J_f\) is replaced by \(J_i + 1\), because the selection rule requires \(J_f – J_i = 1\) for absorption. The equation for absorption transitions then can be written in terms of the quantum number \(J_i\) of the initial level alone.
\[h \nu = hc \bar {\nu} = 2 (J_i + 1) \dfrac {\hbar ^2}{2I} \label {7-52}\]
Divide Equation \(\ref{7-52}\) by \(h\) to obtain the frequency of the allowed transitions,
\[ \nu = 2B (J_i + 1) \label {7-53}\]
where \(B\), the rotational constant for the molecule, is defined as
\[B = \dfrac {\hbar ^2}{2hI} \label {7-54}\]
Exercise \(\PageIndex{3}\)
Complete the steps going from Equation \(\ref{7-51}\) to Equation \(\ref{7-54}\) and identify the units of \(B\) at the end.
Exercise \(\PageIndex{4}\)
Figure 7.1.1 shows the rotational spectrum of \(^{12}C^{16}O\) as a series of nearly equally spaced lines. The line positions \(\nu _J\), line spacings, and the maximum absorption coefficients (\(\gamma _{max}\), the absorption coefficient associated with each line position) for each line in this spectrum are given here in Table \(\PageIndex{1}\).
Table \(\PageIndex{1}\) columns: \(J\); \(\nu _J\) (MHz); spacing from previous line (MHz); \(\gamma _{max}\)
Let’s try to reproduce Figure 7.1.1 from the data in Table \(\PageIndex{1}\) by using the quantum theory that we have developed so far. Equation \(\ref{7-53}\) predicts a pattern of exactly equally spaced lines. The lowest energy transition is between \(J_i = 0\) and \(J_f = 1\) so the first line in the spectrum appears at a frequency of \(2B\). The next transition is from \(J_i = 1\) to \(J_f = 2\) so the second line appears at \(4B\). The spacing of these two lines is 2B. In fact the spacing of all the lines is \(2B\) according to this equation, which is consistent with the data in Table \(\PageIndex{1}\) showing that the lines are very nearly equally spaced. The difference between the first spacing and the last spacing is less than 0.2%.
Exercise \(\PageIndex{5}\)
Use Equation \(\ref{7-53}\) to prove that the spacing of any two lines in a rotational spectrum is \(2B\). That is, derive \(\nu _{J_i + 1} - \nu _{J_i} = 2B\).
Non-Rigid Rotor Effects

Centrifugal stretching of the bond as \(J\) increases causes the decrease in the spacing between the lines in an observed spectrum. This decrease shows that the molecule is not really a rigid rotor. As the rotational angular momentum increases with increasing \(J\), the bond stretches. This stretching increases the moment of inertia and decreases the rotational constant. Centrifugal stretching is exactly what you see if you swing a ball on a rubber band in a circle (Figure \(\PageIndex{1}\)).
Figure \(\PageIndex{1}\): In the absence of the spring, the particles would fly apart. However, the force exerted by the extended spring pulls the particles onto a periodic, oscillatory path. Image used with permission (CC BY-SA 3.0; Cleonis).
The effect of centrifugal stretching is smallest at low \(J\) values, so a good estimate for \(B\) can be obtained from the \(J = 0\) to \(J = 1\) transition. From \(B\), a value for the bond length of the molecule can be obtained since the moment of inertia that appears in the definition of B, Equation \(\ref{7-54}\), is the reduced mass times the bond length squared.
Exercise \(\PageIndex{6}\)
Use the frequency of the \(J = 0\) to \(J = 1\) transition observed for carbon monoxide to determine a bond length for carbon monoxide.
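Exercise 6 can be sketched numerically. The following Python snippet takes the commonly tabulated \(J = 0 \rightarrow 1\) frequency for \(^{12}C^{16}O\) (115271.2 MHz) as an input assumption, extracts \(B\) from \(\nu = 2B(J+1)\), and inverts \(I = \mu r^2\) for the bond length:

```python
import math

# Physical constants (SI units)
h = 6.62607015e-34          # Planck constant, J s
u = 1.66053907e-27          # atomic mass unit, kg

# J = 0 -> 1 transition frequency for 12C16O; 115271.2 MHz is the
# commonly tabulated value (an input assumption here, not derived).
nu_0_to_1 = 115271.2e6      # Hz

# nu = 2B(J+1) with J = 0, so B = nu/2; in frequency units B = h/(8 pi^2 I)
B = nu_0_to_1 / 2.0
I = h / (8.0 * math.pi**2 * B)          # moment of inertia, kg m^2

# Reduced mass of 12C16O (12.000 u and 15.9949 u)
mu = (12.000 * 15.9949) / (12.000 + 15.9949) * u

# I = mu * r^2  =>  r = sqrt(I / mu)
r = math.sqrt(I / mu)
print(f"B = {B/1e9:.3f} GHz, bond length r = {r*1e12:.1f} pm")
```

The result comes out near 113 pm, in line with the accepted CO bond length of roughly 113 pm.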
When the centrifugal stretching is taken into account quantitatively, the development of which is beyond the scope of the discussion here, a very accurate and precise value for B can be obtained from the observed transition frequencies because of their high precision. Rotational transition frequencies are routinely reported to 8 and 9 significant figures. Problem 4 provides the equation used to extract a precise experimental value of \(B\) using all the peaks in a rotational spectrum and taking into account the effect of centrifugal stretching.
As we have just seen, quantum theory successfully predicts the line spacing in a rotational spectrum. An additional feature of the spectrum is the line intensities. The lines in a rotational spectrum do not all have the same intensity, as can be seen in Figure 7.1.1 and Table \(\PageIndex{1}\). The maximum absorption coefficient for each line, \(\gamma _{max}\), is proportional to the magnitude of the transition moment, \(\mu _T\) which is given by Equation \(\ref{7-47}\), and to the population difference between the initial and final states, \(\Delta n\). Since \(\Delta n\) is the difference in the number of molecules present in the two states per unit volume, it is actually a difference in number density.
\[ \gamma _{max} = C_{\mu T} \cdot \Delta n \label {7-55}\]
where C includes constants obtained from a more complete derivation of the interaction of radiation with matter.
The dependence on the number of molecules in the initial state is easy to understand. For example, if no molecules were in the \(J = 7\), \(m_J = 0\) state, no radiation could be absorbed to produce a \(J = 7\), \(m_J = 0\) to \(J = 8\), \(m_J = 0\) transition. The dependence of the line intensity on the population of the final state is explained in the following paragraphs.
When molecules interact with an electromagnetic field (i.e., a photon), they can be driven from one state to another with the absorption or emission of energy. Usually there are more molecules in the lower energy state and the absorption of radiation is observed as molecules go from the lower state to the upper state. This situation is the one we have encountered up to now. In some situations, there are more molecules in the upper state and the emission of radiation is observed as molecules are driven from the upper state to the lower state by the electromagnetic field. This situation is called population inversion, and the process is called stimulated emission. Stimulated emission is the reason lasers are possible. Laser is an acronym for light amplification by stimulated emission of radiation. Even in the absence of an electromagnetic field, atoms and molecules can lose energy spontaneously and decay from an upper state to a lower energy state by emitting a photon. This process is called spontaneous emission. Stimulated emission therefore can be thought of as the inverse of absorption because both processes are driven by electromagnetic radiation, i.e. the presence of photons.
Figure \(\PageIndex{2}\): a) In absorption, an incident photon \(h \nu\) is absorbed by the system and drives the system from the ground state to an excited state. b) In spontaneous emission, a photon is produced when the system goes from an excited state to the ground state. c) In stimulated emission, an incident photon is not absorbed, but drives the system from an excited state to the ground state, accompanied by release of a second photon.
Whether absorption or stimulated emission is observed when electromagnetic radiation interacts with a sample depends upon the population difference, \(\Delta n\), of the two states involved in the transition. For a rotational transition,
\[ \Delta n = n_J - n_{J+1} \label {7-56}\]
where \(n_J\) represents the number of molecules in the lower state and \(n_{J+1}\) represents the number in the upper state per unit volume. If this difference is 0, there will be no net absorption or stimulated emission because they exactly balance. If this difference is positive, absorption will be observed; if it is negative, stimulated emission will be observed.
We can develop an expression for \(\Delta n\) that uses only the population of the initial state, \(n_J\), and the Boltzmann factor. The Boltzmann factor allows us to calculate the population of a higher state given the population of a lower state, the energy gap between the states and the temperature. Multiply the right-hand side of Equation \(\ref{7-56}\) by \(n_J/n_J\) to obtain
\[\Delta n = \left ( 1 - \dfrac {n_{J+1}}{n_J} \right ) n_J \label {7-57}\]
Next recognize that the ratio of populations of the states is given by the Boltzmann factor, which, when substituted into Equation \(\ref{7-57}\), yields
\[ \Delta n = \left ( 1 - e^{\dfrac {-h \nu _J}{kT}} \right ) n_J \label {7-58}\]
where \(h \nu _J\) is the energy difference between the two states. For the rigid rotor model
\[\nu _J = 2B (J + 1) \]
so Equation \(\ref{7-58}\) can be rewritten as
\[ \Delta n = \left ( 1 - e^{\dfrac {-2hB(J+1)}{kT}} \right ) n_J \label {7-59}\]
Equation \(\ref{7-59}\) expresses the population difference between the two states involved in a rotational transition in terms of the population of the initial state, the rotational constant for the molecule, \(B\), the temperature of the sample, and the quantum number of the initial state.
To get the number density of molecules present in the initial state involved in the transition, \(n_J\), we multiply the fraction of molecules in the initial state, \(F_J\), by the total number density of molecules in the sample, \(n_{total}\).
\[n_J = F_J \cdot n_{total} \label {7-60}\]
The fraction \(F_J\) is obtained from the rotational partition function.
\[F_J = (2J + 1) \left (\dfrac {hB}{kT} \right ) \left ( e^{\dfrac {-J(J+1)hB}{kT}} \right ) \label {7-61}\]
The exponential is the Boltzmann factor that accounts for the thermal population of the energy states. The factor \(2J+1\) in this equation results from the degeneracy of the energy level. The more states there are at a particular energy, the more molecules will be found with that energy. The (\(hB/kT\)) factor results from normalization to make the sum of \(F_J\) over all values of \(J\) equal to 1. At room temperature and below only the ground vibrational state is occupied; so all the molecules (\(n_{total}\)) are in the ground vibrational state. Thus the fraction of molecules in each rotational state in the ground vibrational state must add up to 1.
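As a quick numerical check of this fraction, the sketch below evaluates \(F_J\) with the Boltzmann exponent written in terms of the level energy \(E_J = J(J+1)hB\) (as in Exercise 7), taking \(B \approx 57.636\) GHz for CO as an assumed input. Because the \(hB/kT\) prefactor comes from the high-temperature approximation to the partition function, the sum is close to, but not exactly, 1:

```python
import math

h = 6.62607015e-34   # Planck constant, J s
k = 1.380649e-23     # Boltzmann constant, J/K
B = 57.636e9         # rotational constant of CO in Hz (assumed value)
T = 298.0            # K

x = h * B / (k * T)  # hB/kT, dimensionless

# F_J = (2J+1)(hB/kT) exp(-J(J+1)hB/kT); the exponent uses the level
# energy E_J = J(J+1)hB.
F = [(2*J + 1) * x * math.exp(-J*(J + 1)*x) for J in range(100)]

print(f"sum of F_J = {sum(F):.4f}")   # close to 1 when hB << kT
print(f"most populated level: J = {max(range(100), key=lambda J: F[J])}")
```

For CO at room temperature the most populated rotational level sits near \(J = 7\), illustrating how the \(2J+1\) degeneracy and the Boltzmann factor compete.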
Exercise \(\PageIndex{7}\)
Show that the numerator, \(J(J+1)hB\) in the exponential of Equation \ref{7-61} is the energy of level J.
Exercise \(\PageIndex{8}\)
Calculate the relative populations of the lowest (\(J = 0\)) and second (\(J = 1\)) rotational energy level in the HCl molecule at room temperature. Do the same for the lowest and second vibrational levels of HCl. Compare the results of these calculations. Are Boltzmann populations important to vibrational spectroscopy? Are Boltzmann populations important for rotational spectroscopy?
Now we put all these pieces together and develop a master equation for the maximum absorption coefficient for each line in the rotational spectrum, which is identified by the quantum number, \(J\), of the initial state. Start with Equation \(\ref{7-55}\) and replace \(\mu _T\) using Equation \(\ref{7-47}\).
\[ \gamma _{max} = C \left ( \mu ^2 \dfrac {J + 1}{2J + 1} \right ) \cdot \Delta n \label {7-62}\]
Then replace \(\Delta n\) using Equation \(\ref{7-59}\).
\[ \gamma _{max} = C \left ( \mu ^2 \dfrac {J + 1}{2J + 1} \right ) \left ( 1 - e^{\dfrac {-2hB(J+1)}{kT}} \right ) n_J \label {7-63}\]
Finally replace \(n_J\) using Equations \(\ref{7-60}\) and \(\ref{7-61}\) to produce
\[ \gamma _{max} = C \left[ \mu ^2 \dfrac {J + 1}{2J + 1}\right] \left[ 1 - e^{\dfrac {-2hB(J+1)}{kT}}\right] \left[ (2J + 1) \left (\dfrac {hB}{kT} \right ) \left ( e^{\dfrac {-J(J+1)hB}{kT}} \right )\right] n_{total} \label {7-64}\]
Equation \(\ref{7-64}\) enables us to calculate the relative maximum intensities of the peaks in the rotational spectrum shown in Figure 7.1.1, assuming all molecules are in the lowest energy vibrational state, and predict how this spectrum would change with temperature. The constant \(C\) includes the fundamental constants \(\epsilon_0\), \(c\) and \(h\), that follow from a more complete derivation of the interaction of radiation with matter. The complete theory also can account for the line shape and width and includes an additional radiation frequency factor.
\[ C = \dfrac {2 \pi}{3 \epsilon _0 ch } \label {7-65}\]
In the spectrum shown in Figure 7.1.1, the absorption coefficients for each peak first increase with increasing \(J\) because the difference in the populations of the states increases and the factor (\(J+1\)) increases. Notice that the denominator in the factor resulting from the transition moment cancels the degeneracy factor \(2J+1\). After the maximum the second Boltzmann factor, which is a decreasing exponential as \(J\) increases, dominates, and the intensity of the peaks drops to zero. Exploration of how well Equation \(\ref{7-64}\) corresponds to the data in Table \(\PageIndex{1}\) and discovering how a rotational spectrum changes with temperature are left to an end-of-the-chapter activity.
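This intensity pattern is easy to reproduce numerically. The sketch below multiplies the three \(J\)-dependent pieces of the intensity expression — the transition-moment factor \((J+1)/(2J+1)\), the population-difference factor \(1 - e^{-h\nu_J/kT}\) from Equation \(\ref{7-58}\), and the fractional population \(F_J\) — dropping constants common to every line. The CO rotational constant is again an assumed input:

```python
import math

h, k = 6.62607015e-34, 1.380649e-23   # Planck and Boltzmann constants
B, T = 57.636e9, 298.0                # CO rotational constant (assumed), temperature
x = h * B / (k * T)

def gamma_rel(J):
    """Relative max absorption coefficient for the J -> J+1 line,
    dropping constants common to every line."""
    transition_moment = (J + 1) / (2*J + 1)
    population_diff   = 1 - math.exp(-2*x*(J + 1))        # 1 - exp(-h nu_J / kT)
    fraction_in_J     = (2*J + 1) * x * math.exp(-J*(J + 1)*x)
    return transition_moment * population_diff * fraction_in_J

gammas = [gamma_rel(J) for J in range(40)]
peak = max(range(40), key=lambda J: gammas[J])
print(f"strongest line starts from J = {peak}")
```

The intensities first rise with \(J\), peak near \(J \approx 10\) for CO at room temperature, and then decay, just as the spectrum in Figure 7.1.1 shows.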
Exercise \(\PageIndex{9}\)
Why doesn’t the first Boltzmann factor in Equation \(\ref{7-64}\) cause the intensity to drop to zero as \(J\) increases?
Contributors

Adapted from "Quantum States of Atoms and Molecules" by David M. Hanson, Erica Harvey, Robert Sweeney, Theresa Julia Zielinski |
The orbital energy eigenvalues obtained by solving the hydrogen atom Schrödinger equation are given by
\[E_n = -\dfrac {\mu e^4}{8 \epsilon ^2_0 h^2 n^2} \label {8.3.1}\]
where \(\mu\) is the reduced mass of the proton and electron, \(n\) is the principal quantum number and e, \(\epsilon _0\) and h are the usual fundamental constants. The energy is negative and approaches zero as the quantum number n approaches infinity. Because the hydrogen atom is used as a foundation for multi-electron systems, it is useful to remember the total energy (binding energy) of the ground state hydrogen atom, \(E_H = -13.6\; eV\). The spacing between electronic energy levels for small values of n is very large while the spacing between higher energy levels gets smaller very rapidly. This energy level spacing is a result of the form of the Coulomb potential, and can be understood in terms of the particle in a box model. We saw that as the potential box gets wider, the energy level spacing gets smaller. Similarly in the hydrogen atom as the energy increases, the Coulomb well gets wider and the energy level spacing gets smaller.
Figure \(\PageIndex{1}\) : the emission line spectrum for iron. The discrete lines imply quantized energy states for the atoms that produce them.
The line spectra produced by hydrogen atoms are a consequence of the quantum mechanical energy level expression, Equation \ref{8.3.1}. In Chapter 1 we saw the excellent match between the experimental and calculated spectral lines for the hydrogen atom using the Bohr expression for the energy, which is identical to Equation \(\ref{8.3.1}\).
Exercise \(\PageIndex{1}\)
Using Equation \(\ref{8.3.1}\) and a spreadsheet program or other software of your choice, calculate the energies for the lowest 100 energy levels of the hydrogen atom. Also calculate the differences in energy between successive levels. Do the results from these calculations confirm that the energy levels rapidly get closer together as the principal quantum number n increases? What happens to the energy level spacing as the principal quantum number approaches infinity?
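A spreadsheet is not required; the calculation in this exercise can be sketched in a few lines of Python, taking the ground-state binding energy as 13.6057 eV so that \(E_n = -13.6057/n^2\) eV:

```python
# Energy levels E_n = -13.6057 / n^2 eV (hydrogen ground-state binding
# energy taken as 13.6057 eV).
E = {n: -13.6057 / n**2 for n in range(1, 101)}

# Spacing between successive levels shrinks rapidly with n
gaps = {n: E[n + 1] - E[n] for n in range(1, 100)}
print(f"E_1 = {E[1]:.2f} eV, E_2 - E_1 = {gaps[1]:.2f} eV, "
      f"E_100 - E_99 = {gaps[99]:.2e} eV")
```

The first gap is about 10.2 eV, while the gap between levels 99 and 100 is of order \(10^{-5}\) eV: the spacing collapses toward zero as \(n \rightarrow \infty\).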
The solution of the Schrödinger equation for the hydrogen atom predicts that energy levels with \(n > 1\) can have several orbitals with the same energy. In fact, as the energy and n increase, the degeneracy of the orbital energy level increases as well. The number of orbitals with a particular energy and value for \(n\) is given by \(n^2\). Thus, each orbital energy level is predicted to be \(n^2\)-degenerate. This high degree of orbital degeneracy is predicted only for one-electron systems. For multi-electron atoms, the electron-electron repulsion removes the \(l\) degeneracy, so only orbitals with the same \(n\) and \(l\) quantum numbers are degenerate.
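The \(n^2\) count follows from summing the \(2l+1\) values of \(m_l\) over \(l = 0, \ldots, n-1\), which a one-liner can verify:

```python
# Each level n contains orbitals with l = 0 .. n-1, and each l holds
# 2l+1 values of m_l; summing gives the n^2 degeneracy.
for n in range(1, 6):
    count = sum(2*l + 1 for l in range(n))
    print(n, count)        # count equals n**2
```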
Exercise \(\PageIndex{2}\)
Use Equation \(\ref{8.3.1}\) or the data you generated in Exercise \(\PageIndex{1}\) to draw an energy level diagram to scale for the hydrogen atom showing the first three energy levels and their degeneracy. Indicate on your diagram the transition leading to ionization of the hydrogen atom and the numerical value of the energy required for ionization, in eV, atomic units and kJ/mol.
To understand the hydrogen atom spectrum, we also need to determine which transitions are allowed and which transitions are forbidden. This issue is addressed next by using selection rules that are obtained from the transition moment integral. In previous chapters we determined selection rules for the particle in a box, the harmonic oscillator, and the rigid rotor. Now we will apply those same principles to the hydrogen atom case by starting with the transition moment integral.
The transition moment integral for a transition between an initial (i) state and a final (f) state of a hydrogen atom is given by
\[ \left \langle \mu _T \right \rangle = \int \psi ^* _{n_f, l_f, m_{l_f}} (r, \theta , \psi ) \hat {\mu} \psi _{n_i, l_i, m_{l_i}} (r, \theta , \psi ) d \tau \label {8.3.2a}\]
or in bra ket notation
\[ \left \langle \mu _T \right \rangle = \langle \psi ^*_{n_f, l_f, m_{l_f}} | \hat {\mu} | \psi _{n_i, l_i, m_{l_i}} \rangle \label{8.3.2b}\]
where the dipole moment operator is given by
\[ \hat {\mu} = - e \hat {r} \label {8.3.3}\]
The dipole moment operator expressed in spherical coordinates is
\[ \hat {\mu} = -er (\bar {x} \sin \theta \cos \psi + \bar {y} \sin \theta \sin \psi + \bar {z} \cos \theta) \label {8.3.4}\]
The sum of terms on the right hand side of Equation \(\ref{8.3.4}\) shows that there are three components of \(\left \langle \mu _T \right \rangle\) to evaluate in Equation \(\ref{8.3.2a}\), where each component consists of three integrals: an \(r\) integral, a \(\theta \) integral, and a \(\psi \) integral.
Evaluation reveals that the \(r\) integral always differs from zero, so
\[ \Delta n = n_f - n_i = \text {not restricted} \label {8.3.5}\]
There is no restriction on the change in the principal quantum number during a spectroscopic transition; \(\Delta n\) can be anything. For absorption, \(\Delta n > 0 \), for emission \(\Delta n < 0\), and \(\Delta n = 0 \) when the orbital degeneracy is removed by an external field or some other interaction.
The selection rules for \(\Delta l \) and \(\Delta m_l\) come from the transition moment integrals involving \(\theta \) and \(\varphi\) in Equation \(\ref{8.3.2a}\). These integrals are the same ones that were evaluated for the rotational selection rules, and the resulting selection rules are
\[ \Delta l = \pm 1\]
and
\[\Delta m_l = 0, \pm 1 \label {8.3.6}\]
Exercise \(\PageIndex{3}\)
Write the spectroscopic selection rules for the rigid rotor and for the hydrogen atom. Why are these selection rules the same?
Contributors
Adapted from "Quantum States of Atoms and Molecules" by David M. Hanson, Erica Harvey, Robert Sweeney, Theresa Julia Zielinski |
What, if anything, can we learn about customer switching costs by looking at price, revenue, profit, and quantity responses of producers to cost shocks?
For example, we can define the profit equation as:
$\Pi = \sum^\infty_{t=0} [ -\alpha S + \beta^t(P - C(q))] \cdot q(P,C,S)$
Where $q(P,C,S)$ is demand as a function of prices, costs, and switching costs respectively, $C(q)$ is total cost as a function of quantity produced, $\beta$ is the discount rate of the firm, and $\alpha$ is the fraction of the switching cost refunded to the purchaser. Are there published or just worked out examples of choices of $q(P,C,S)$ and $C(q)$ that allow identification of $S$ from $\partial\pi/\partial C$, $\partial P^*/\partial C$, and $\partial q^*/\partial C$?
Consider the cost shock as an exogenous shock, but I have in mind a setting that allows a structural identification. By switching costs, I mean that if a customer wants to first buy from firm A and later switch to firm B, he pays $P_B+S$ in the period of the switch, then $P_B, P_B, \ldots$ in subsequent periods instead of $P_A, P_A, \ldots$. The $-\alpha S$ term is a refund to only the purchasers in the period of initial purchase, while $S$ is paid in the final period of doing business with a firm. An example of $-\alpha S$ could be getting a special deal on your Verizon cellphone because you can't use that phone with any other network, or a free toaster when you open a bank account. I fixed it in the profit equation so that $S$ was multiplied by $q$ to indicate that $S$ is the switching cost per unit of $q$. I had in mind that customers purchase either $0$ or $1$ unit of the service and $q$ aggregates their individual decisions. But I'm not wedded to that refund; if dropping it gets me somewhere, I'm happy to assume that $\alpha = 0$
Yes, you still use the Nernst equation, but one of your implicit assumptions is not correct.
A point about the Nernst equation that often confuses people is that, at first glance, it doesn't predict any change in $E$ as temperature changes, as long as $Q$ remains equal to $1$, and $\ln Q = 0$.
$$E = E^\circ - \frac{RT}{\nu F}\ln Q \tag{1}$$
However, a slightly more careful analysis reveals that if you set $\ln Q = 0$, it only implies that $E = E^\circ$. The conclusion that $E$ doesn't change is based on an assumption that $E^\circ$ doesn't change, which is not true.
"Standard conditions" do not stipulate any temperature. So $E^\circ$ itself is a function of temperature. The standard cell potential $E^\circ$ at $\pu{298 K}$ is going to be different from the standard cell potential $E^\circ$ at $\pu{250 K}$, and hence $E$ will vary with temperature, even if $\ln Q = 0$.
So, how does one determine the temperature dependence of $E^\circ$? The best way is to measure it, but if that is not possible, then you have to somehow find an expression for it. The Nernst equation alone has no answer for this. Instead you would have to turn to thermodynamics. We know that
$$E^\circ = -\frac{\Delta_\mathrm rG^\circ}{\nu F} \tag{2}$$
Note that $\Delta_\mathrm rG^\circ$ is a function of temperature, and hence so is $E^\circ$. So, the problem becomes one of determining the variation of $\Delta_\mathrm rG^\circ$ with temperature. The most primitive way is probably to look up the data for $\Delta H_\mathrm{f}$ and $S_\mathrm{mol}$ of each of the compounds, calculate $\Delta_\mathrm{r} H^\circ$ and $\Delta_\mathrm{r} S^\circ$ for the reaction (using Hess's law), and then find
$$\Delta_\mathrm{r} G^\circ = \Delta_\mathrm{r} H^\circ - T\Delta_\mathrm{r} S^\circ \tag{3}$$
Of course, when you look up data, the chances are that the data you find are specified for $T = \pu{298 K}$. So, when you use these data to calculate $\Delta_\mathrm{r} H^\circ$ and $\Delta_\mathrm{r} S^\circ$, you are finding the values of these quantities at $T = \pu{298 K}$. If you plug these values into equation $(3)$, then you also assume that the values of $\Delta_\mathrm{r} H^\circ$ and $\Delta_\mathrm{r} S^\circ$ at your desired temperature are equal to their values at $\pu{298 K}$.
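The whole procedure fits in a few lines. The values below are illustrative stand-ins, roughly Daniell-cell-like in magnitude but not tabulated data, and the calculation assumes $\Delta_\mathrm{r} H^\circ$ and $\Delta_\mathrm{r} S^\circ$ are temperature-independent, exactly as described above:

```python
# Illustrative (made-up) reaction data, roughly Daniell-cell-like;
# these are NOT tabulated values.
dH = -218_000.0    # Delta_r H standard, J/mol
dS = -21.0         # Delta_r S standard, J/(mol K)
nu = 2             # electrons transferred
F  = 96485.0       # Faraday constant, C/mol

def E_standard(T):
    """E(T) = -(dH - T*dS)/(nu*F), assuming dH and dS do not vary with T."""
    dG = dH - T * dS               # Gibbs-Helmholtz combination, Eq. (3)
    return -dG / (nu * F)          # Eq. (2)

print(f"E(298 K) = {E_standard(298.0):.4f} V")
print(f"E(250 K) = {E_standard(250.0):.4f} V")
```

With a negative $\Delta_\mathrm{r} S^\circ$, cooling the cell raises $E^\circ$, which is the kind of shift the Nernst equation alone cannot predict.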
Depending on how much accuracy you need, this may or may not be tolerable, and there are certainly more thorough ways of calculating it. However, that discussion is best left for another question. |
Here's some empirical data for question 2, based on D.W.'s idea applied to bitonic sort. For $n$ variables, choose $j - i = 2^k$ with probability proportional to $\lg n - k$, then select $i$ uniformly at random to get a comparator $(i,j)$. This matches the distribution of comparators in bitonic sort if $n$ is a power of 2, and approximates it otherwise.
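A minimal Python sketch of this sampling scheme (my own reconstruction of the distribution described above, with $k$ restricted so comparators fit inside the array):

```python
import random

def sample_comparator(n, rng):
    """Sample (i, j) with j - i = 2^k, where k is chosen with probability
    proportional to lg(n) - k, then i uniform at random."""
    lg = n.bit_length() - 1                  # floor(lg n)
    weights = [lg - k for k in range(lg)]    # k = 0 .. lg-1, weights lg .. 1
    k = rng.choices(range(lg), weights=weights)[0]
    d = 1 << k
    i = rng.randrange(n - d)
    return i, i + d

def gates_until_sorted(bits, rng, cap=100_000):
    """Apply random comparators to one 0/1 sequence until it is sorted;
    returns the number of gates used (an estimate over one input, not a
    certificate of a full sorting network)."""
    n = len(bits)
    for g in range(1, cap + 1):
        i, j = sample_comparator(n, rng)
        if bits[i] > bits[j]:
            bits[i], bits[j] = bits[j], bits[i]
        if all(bits[t] <= bits[t + 1] for t in range(n - 1)):
            return g
    return cap

rng = random.Random(0)
bits = [rng.randrange(2) for _ in range(16)]
print(gates_until_sorted(bits, rng))
```

Running this over many gate sequences and many 0/1 inputs gives the estimates plotted below.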
For a given infinite sequence of gates pulled from this distribution, we can approximate the number of gates required to get a sorting network by sorting many random bit sequences. Here's that estimate for $n < 200$, taking the mean over $100$ gate sequences with $6400$ bit sequences used to approximate the count. It appears to match $\Theta(n \log^2 n)$, the same complexity as bitonic sort. If so, we don't eat an extra $\log n$ factor due to the coupon collector problem of coming across each gate.
To emphasize: I'm using only $6400$ bit sequences to approximate the expected number of gates, not $2^n$. The mean required gates does rise with that number: for $n = 199$ if I use $6400$, $64000$, and $640000$ sequences the estimates are $14270 \pm 1069$, $14353 \pm 1013$, and $14539 \pm 965$. Thus, it's possible getting the last few sequences increases the asymptotic complexity, though intuitively it feels unlikely.
Edit: Here's a similar plot up to $n = 80$, but using the exact number of gates (computed via a combination of sampling and Z3). I've switched from power of two $d = j-i$ to arbitrary $d \in [1,\frac{n}{2}]$ with probability proportional to $\frac{\log n - \log d}{d}$. $\Theta(n \log^2 n)$ still looks plausible. |
Let's consider an infinite dimensional space $X$ and a linear operator $T$. The resolvent operator of $T$ is $R_\lambda (T) = (T-\lambda I)^{-1}$. A regular value $\lambda$ of $T$ is a complex number such that:
(R1) $R_\lambda(T)$ exists
(R2) $R_\lambda(T)$ is bounded
(R3) $R_\lambda(T)$ is defined on a set which is dense in $X$
The resolvent set $\rho(T)$ consists of all regular values $\lambda$ of $T$. The complement $\sigma(T)=\mathbb{C}\setminus\rho(T)$ is the spectrum of $T$ and we may distinguish parts of the spectrum:
- point spectrum (eigenvalues) $\sigma_p(T)$: (R1) isn't satisfied
- continuous spectrum $\sigma_c(T)$: (R2) isn't satisfied, but (R1) and (R3) are satisfied
- residual spectrum $\sigma_r(T)$: (R3) isn't satisfied, (R1) is satisfied, (R2) doesn't matter
Please, help me to clarify a couple of points:
Question 1: The point spectrum consists of eigenvalues and exists in the finite dimensional case. So its meaning seems to be the same as in the finite dimensional case (scaling of eigenvectors that roughly represent the orientation of the distortion by $T$). What is the meaning of the continuous and the residual spectrum?

Question 2: Why do we care about denseness in the definitions? I have found a related question but didn't get the exact answer from it.
Let $G$ be a non-abelian group of order $17^4$. I have to find its center $Z(G)$ and $G/Z(G)$.
$|G|=17^4$.
Since $Z(G)$ is subgroup of $G$, the order of center divides the order of the group: $|Z(G)|=17^a$, where $a\leq4$.
$1^{\circ}$ $a=4$ $\Rightarrow |Z(G)|=|G|$. Since center of the group is Abelian, and the group is non-abelian, we have contradiction.
$2^{\circ}$ $a=3$ $\Rightarrow |G/Z(G)|=17$. This means that $G/Z(G)$ is cyclic, and then $G$ is Abelian. Contradiction.
What should I do with the cases $a=2$ and $a=1$?
And also, is this the right way to do this problem? Thank you. |
At this point we will take our information transfer process and apply it to the economic problem of supply and demand. In that case, we will identify the information process source as the demand $Q^d$, the information transfer process destination as the supply $Q^s$, and the process signal detector as the price $P$. The price detector relates the demand signal $\delta Q^d$ emitted from the demand $Q^d$ to a supply signal $\delta Q^s$ that is detected at the supply $Q^s$ and delivers a price $P$.
We translate Condition 1 in [1] for the applicability of our information theoretical description into the language of supply and demand:
Condition 1: The considered economic process can be sufficiently described by only two independent process variables (supply and demand: $Q^d, Q^s$) and is able to transfer information.
We are now going to look for functions $\langle Q^s \rangle = F(Q^d)$ or $\langle Q^d \rangle = F(Q^s)$ where the angle brackets denote an expected value. But first we assume ideal information transfer $I_{Q^s} = I_{Q^d}$ such that:$$(4) \space P= \frac{1}{\kappa} \frac{Q^d}{Q^s}$$
$$(5) \space \frac{dQ^d}{dQ^s}= \frac{1}{\kappa} \frac{Q^d}{Q^s}$$
Note that Eq. (4) represents movement of the supply and demand curves, where $Q^d$ is a "floating" information source (in the language of Ref. [1]), as opposed to movement along the supply and demand curves, where $Q^d = Q^d_0$ is a "constant information source".
If we do take $Q^d =Q^d_0$ to be a constant information source and integrate the differential equation Eq. (5)$$(6) \space \frac{\kappa }{Q_0^d}\int _{Q_{\text{ref}}^d}^{Q^d}d\left(Q^d\right)'=\int_{Q_{\text{ref}}^s}^{\left\langle Q^s\right\rangle } \frac{1}{Q^s} \, d\left(Q^s\right)$$
We find$$
(7) \space \Delta Q^d=Q^d-Q_{\text{ref}}^d=\frac{Q_0^d}{\kappa }\log \left(\frac{\left\langle Q^s\right\rangle }{Q_{\text{ref}}^s}\right)
$$
Equation (7) represents movement along the demand curve, and the equilibrium price $P$ moves according to Eq. (4) based on the expected value of the supply and our constant demand source:$$
\text{(8a) }P=
\frac{1}{\kappa }\frac{Q_0^d}{\left\langle Q^s\right\rangle }
$$
$$
\text{(8b) }
\Delta Q^d=\frac{Q_0^d}{\kappa }\log \left(\frac{\left\langle Q^s\right\rangle }{Q_{\text{ref}}^s}\right)
$$
Equations (8a,b) define a demand curve. A family of demand curves can be generated by taking different values for $Q_0^d$ assuming a constant information transfer index $\kappa$.
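As a quick numerical sanity check on the integration above, one can integrate Eq. (5) with a constant information source and compare against the closed form of Eq. (7). This is only a sketch with made-up parameter values, and it assumes scipy is available:

```python
import numpy as np
from scipy.integrate import quad

# Made-up parameter values, purely for illustration
kappa, Qd0, Qs_ref, Qs = 1.5, 2.0, 1.0, 3.0

# Numerically integrate Eq. (5) with Qd = Qd0 held constant,
# i.e. dQd = (Qd0/kappa) dQs/Qs, from Qs_ref up to <Qs>
numeric, _ = quad(lambda q: (Qd0 / kappa) / q, Qs_ref, Qs)

# The closed form of Eq. (7)
analytic = (Qd0 / kappa) * np.log(Qs / Qs_ref)

print(numeric, analytic)   # the two agree
```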
Analogously, we can define a supply curve by using a constant information destination $Q_0^s$ and follow the above procedure to find:
$$ \text{(9a) }P= \frac{1}{\kappa }\frac{\left\langle Q^d\right\rangle }{Q_0^s}$$$$ \text{(9b) }\Delta Q^s = \kappa Q_0^s \log \left(\frac{\left\langle Q^d\right\rangle}{Q_{\text{ref}}^d}\right)$$
Equations (9a,b) thus define a supply curve. Again, a family of supply curves can be generated by taking different values for $Q_0^s$.
Note that equations (8) and (9) linearize (Taylor series around $Q^x=Q_{\text{ref}}^x$) to$$
Q^d =Q_{\text{ref}}^d +\frac{Q_0^d}{\kappa }-Q_{\text{ref}}^s P
$$
$$
Q^s = Q_{\text{ref}}^s-\kappa Q_0^s+\frac{Q_0^s{}^2\kappa ^2}{Q_{\text{ref}}^d}P
$$
plus terms of order $(Q^x)^2$ such that$$
Q^d=\alpha -\beta P
$$
$$
Q^s=\gamma +\delta P
$$
where $\alpha = Q_{\text{ref}}^d + Q_0^d/\kappa$, $\beta = Q_{\text{ref}}^s$, $\gamma = Q_{\text{ref}}^s - \kappa Q_0^s$ and $\delta = \kappa^2 (Q_0^s)^2/Q_{\text{ref}}^d$. This recovers a simple linear model of supply and demand (where you can add a time dependence to the price, e.g. $\frac{dP}{dt} \propto Q^s - Q^d$).
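The Taylor expansion above can be checked numerically. This sketch uses the illustrative values $\kappa = Q_0^d = Q_{\text{ref}}^d = Q_{\text{ref}}^s = 1$ (my choice, not from the text) and compares the exact demand curve of Eqs. (8a,b) against the linear form $Q^d = \alpha - \beta P$ near the expansion point:

```python
import numpy as np

# Illustrative reference values (my choice)
kappa, Qd0, Qd_ref, Qs_ref = 1.0, 1.0, 1.0, 1.0

# Exact demand curve from Eqs. (8a,b): invert (8a) for <Qs>, substitute into (8b)
def demand_exact(P):
    Qs = Qd0 / (kappa * P)
    return Qd_ref + (Qd0 / kappa) * np.log(Qs / Qs_ref)

# Linear coefficients from the text: Qd ~ alpha - beta * P
alpha = Qd_ref + Qd0 / kappa
beta = Qs_ref

# The expansion point is <Qs> = Qs_ref, i.e. P_ref = Qd0 / (kappa * Qs_ref)
P_ref = Qd0 / (kappa * Qs_ref)
for dP in (0.1, 0.01, 0.001):
    P = P_ref + dP
    print(dP, abs(demand_exact(P) - (alpha - beta * P)))  # error shrinks like dP^2
```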
We can explicitly show the supply and demand curves using equations (8a,b) and (9a,b) and plotting price $P$ vs change in quantity $\Delta Q^x = \Delta Q^s$ or $\Delta Q^d$. Here we take $\kappa = 1$ and $Q_{\text{ref}}^x=1$ and show a few curves of $Q_0^x = 1 \pm 0.1$. For example, for $x = s$ and $+0.1$, we are shifting the supply curve to the right. In the figure we show a shift in the supply curve (red) to the right and to the left (top two graphs) and a shift in the demand curve (blue) to the right and to the left (bottom two graphs). The new equilibrium price is the intersection of the new colored (supply or demand) curve and the unchanged (demand or supply, respectively) curve.
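A minimal sketch that regenerates curves of this kind from Eqs. (8a,b) and (9a,b), with the stated values $\kappa = 1$, $Q_{\text{ref}}^x = 1$ and $Q_0^x = 1 \pm 0.1$ (the price grid, file name, and styling are my own choices):

```python
import numpy as np
import matplotlib
matplotlib.use('Agg')          # assumption: non-interactive rendering is fine
import matplotlib.pyplot as plt

kappa, Q_ref = 1.0, 1.0        # values stated in the text
P = np.linspace(0.6, 1.6, 200)

fig, ax = plt.subplots()
for Q0 in (0.9, 1.0, 1.1):
    # Demand curves, Eqs. (8a,b): <Qs> = Q0/(kappa*P)
    dQd = (Q0 / kappa) * np.log(Q0 / (kappa * P * Q_ref))
    ax.plot(dQd, P, 'b')
    # Supply curves, Eqs. (9a,b): <Qd> = kappa*Q0*P
    dQs = kappa * Q0 * np.log(kappa * Q0 * P / Q_ref)
    ax.plot(dQs, P, 'r')
ax.set_xlabel(r'$\Delta Q$')
ax.set_ylabel(r'$P$')
fig.savefig('supply-demand-shifts.png')
```

The demand curves slope down in $P$ and the supply curves slope up, as the linearized coefficients above require.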
References
[1] Information transfer model of natural processes: from the ideal gas law to the distance dependent redshift P. Fielitz, G. Borchardt http://arxiv.org/abs/0905.0610v2
[2] http://en.wikipedia.org/wiki/Gronwall's_inequality
[3] http://en.wikipedia.org/wiki/Noisy_channel_coding_theorem#Mathematical_statement
[4] http://en.wikipedia.org/wiki/Entropic_force
[5] http://en.wikipedia.org/wiki/Sticky_(economics) |
I have $n$-dimensional matrices $\mathrm{\hat{H}}(\vec{k})$ depending on vector parameter $\vec{k}$.
Now, eigenvalue routines return eigenvalues in no physically meaningful order (they are usually just sorted by value), but I want to trace eigenvalues $E_i$ as smooth functions of $\vec{k}$. Because of this, simply tracing $E_i$ for some particular index $i\in\{1,..,n\}$ returns a set of lines which are not smooth, as shown in the picture below
My idea to trace continuous lines was to use eigenvectors. Namely, for two close points $\vec{k}$ and $\vec{k}+d\vec{k}$ eigenvectors should be approximately orthonormal so that $v_i(\vec{k})\cdot v_j(\vec{k}+d\vec{k})\sim \delta_{p_i p_j}$ where $p_i, p_j\in \pi(\{1,...,n\})$, and $\pi$ is some permutation. Then I would use given permutation to reorder the eigenvalues and thus trace smooth lines.
In other words, I would trace the continuity of the eigenvectors.
However, I ran into some problems with the numerical routines. At a small subset of the points I use, a few eigenvectors at nearby points are not approximately orthonormal. My first suspicion was that those eigenvectors correspond to a degenerate eigenvalue, but that is not always the case.
This also holds true if I reduce $d\vec{k}$ to be really small.
Is such a thing allowed to happen? Or is it possible to guarantee that numerical routines return continuous eigenvectors? The routine I use is numpy.linalg.eigh, which is an interface to zheevd from LAPACK.
(Physicists amongst you will recognize that I am talking about the band structure) |
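A sketch of the overlap-based reordering described above, assuming numpy. The Hamiltonian here is a toy $2\times 2$ model (my own, for illustration) with a genuine band crossing, where plain sorting gives the kinked bands $\mp|k|$ but overlap tracing recovers the smooth bands $\pm k$. Note that in nearly degenerate regions the row-wise argmax can assign two old bands to the same new one; a linear-assignment solver (e.g. scipy.optimize.linear_sum_assignment on the negated overlap matrix) is more robust:

```python
import numpy as np

def trace_bands(H_of_k, ks):
    """Diagonalize H(k) along a path of k-points, permuting the eigenvalues at
    each step so that band i follows the eigenvector with maximal overlap with
    band i at the previous k-point."""
    bands, V_prev = [], None
    for k in ks:
        E, V = np.linalg.eigh(H_of_k(k))           # columns of V are eigenvectors
        if V_prev is not None:
            overlap = np.abs(V_prev.conj().T @ V)  # |v_i(k_prev) . v_j(k)|
            perm = np.argmax(overlap, axis=1)      # best-matching new band for each old one
            E, V = E[perm], V[:, perm]
        bands.append(E)
        V_prev = V
    return np.array(bands)

# Toy 2-band example with a true crossing at k = 0
H = lambda k: np.diag([k, -k]).astype(float)
ks = np.linspace(-1.0, 1.0, 200)   # grid chosen to avoid k = 0 exactly
bands = trace_bands(H, ks)
print(np.allclose(bands[:, 0], ks), np.allclose(bands[:, 1], -ks))
```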
If a rigid body with inertia matrix $I_B$ has the angular velocity $\overrightarrow{\omega_1}$, what torque is needed to rotate it around another axis, say $\overrightarrow{\omega_2}$, while keeping its original rotation ... sort of the torque needed to change the orientation of the rigid body while it rotates. I know that its angular momentum is $\overrightarrow{L_1} = I_B\overrightarrow{\omega_1}$. I don't know how to proceed ... There will be another angular momentum while rotating about $\overrightarrow{\omega_2}$, namely $\overrightarrow{L_2} = I_B\overrightarrow{\omega_2}$, so the body will feel $\overrightarrow{L} = \overrightarrow{L_1} + \overrightarrow{L_2}$?! The torque needed is $\overrightarrow{\tau} = \frac{d\overrightarrow{L}}{dt}$, so something like $\overrightarrow{\tau} \approx \frac{\overrightarrow{L_2}}{T}$ where $T$ is some time ...?? Or: the body is rotating around its first axis of rotation, hence having angular momentum $L = I\omega$ in its coordinate system. Now if the coordinate system is rotated around another axis $\omega_2$, what angular momentum will the body feel? Its inertia matrix in the rotating frame will be $I_r = R^{T}IR$, and from there the angular momentum will be $L_2 = I_r\omega_2$?
You can think about this in two different ways.
One way is to look at the initial and final angular momentum. If you go from $L\cdot(0,1,0)$ to $L\cdot(1,0,0)$ you need to remove the $Y$ component and add the $X$ component. If you just calculate the difference in the angular momentum, then you get
$$\Delta L = L\cdot (1,-1,0)$$
which would immediately imply that a torque about the $(1,-1,0)$ axis will provide the desired result.
Interestingly, the same thing can be achieved by applying a torque that is always perpendicular to the current direction of motion: this is what happens during the precession of a horizontally mounted gyroscope, for example. In that case, you might start with the axis of rotation pointing along the $Y$ direction; the torque generated by the force of gravity on the center of mass of the gyroscope will result in precession; after a certain time, this can result in the gyroscope pointing along the $X$ direction. Since the torque in this case keeps changing direction, you would have to integrate over all directions - and find once again that the average torque integrated over time is the same as you would have calculated before.
In both cases, the equation of motion is
$$\frac{d\vec{L}}{dt} = \vec{\Gamma}$$
Going with the first approach, the angular momentum $\vec{L}=I\vec{\omega}$, from which it follows that you need a torque $\Gamma$ for a time $t$ such that
$$\Delta L = \Gamma t$$
which in your case means
$$\vec{\Gamma} = \frac{I\omega\,(1,-1,0)}{t}$$
Can you figure it out from here?
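A numeric illustration of the $\Delta L = \Gamma t$ bookkeeping, with made-up numbers and the simplifying assumption of a spherically symmetric body (so the inertia tensor acts as a scalar $I$):

```python
import numpy as np

# Made-up values: I = 2.0 kg m^2, omega = 10 rad/s, re-orient over T = 5 s
I, omega, T = 2.0, 10.0, 5.0

L1 = I * omega * np.array([0.0, 1.0, 0.0])   # initial angular momentum (along y)
L2 = I * omega * np.array([1.0, 0.0, 0.0])   # final angular momentum (along x)

dL = L2 - L1            # = I*omega*(1, -1, 0)
Gamma = dL / T          # constant torque that does the job in time T
print(Gamma)            # direction (1, -1, 0), magnitude I*omega*sqrt(2)/T
```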
I shall present my conclusion. Given the following: the world coordinate system $(i_w, j_w, k_w)$, the body's inertia matrix in this coordinate system $I_w$, and the rotation axis $\omega$ in the world coordinate system, one can obtain the angular momentum $L$ as $L = I_w \cdot \omega$. If the rotation axis is precessed to $\omega_2 = R(t)^T\cdot \omega$, with $R(t)$ a rotation matrix, then the inertia matrix of the rigid body in the world coordinate system is now $I_{w2} = R^T\cdot I_w \cdot R$ and the new angular momentum is $L_2 = I_{w2}\cdot \omega_2 = R^T\cdot I_w \cdot R \cdot R^T \cdot \omega = R^T \cdot L$. Hence, $$ \tau = \lim_{t \to 0}\frac{L_2 - L}{t} = \lim_{t \to 0}\frac{(R(t)^T - I_3)}{t}\cdot L = \dot{R}^T\cdot L$$ where $I_3$ is the identity matrix.
I'm considering a pulse inside a dispersive medium, so its duration depends on the $z$ at which you look.
... then the concept of transform-limited pulse does not hold globally for your setup. Transform-limited pulses are a 1D (generally time-domain) phenomenon, so in your configuration the question "is the pulse transform-limited" would be asked and answered locally and independently at each different point. And, in the presence of dispersion, if the pulse is transform-limited at a given point $z_0$, then it will not be transform-limited at any other point in general.
Generically, given a locally-defined electric field$$E(t)=\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{+\infty}A(\omega)e^{+i\omega t}d\omega,$$with spectral amplitude $A(\omega) = |A(\omega)| e^{i\phi(\omega)}$, the pulse is said to be transform-limited if its duration is minimal over the set of pulses that have an identical power spectrum $|A(\omega)|^2$. The reason we use this definition is that the power spectrum $|A(\omega)|^2$ is both

- fixed by the gain profile of the laser gain medium and the details of the cavity, and
- easily measurable by using a conventional spectrometer,

whereas the time duration is

- extremely hard to measure,
- not determined by the laser source, since the introduction of any dispersive optics will affect the pulse duration without affecting the power spectrum, and
- accessible (given enough money, time, and dedication) to experimental modification via a number of pulse-shaping schemes.
For a given laser source, the power spectrum is basically fixed, and therefore so is the bandwidth $\Delta\omega$, and this puts a limit, via the Fourier bandwidth theorem, on the minimal pulse duration that's achievable with your laser source. However, unless you've done a lot of work, the pulse that comes out of your source will not be that short - instead, it will contain chirp and other types of dispersive features which make it longer than that minimal pulse duration. That problem can be fixed by using pulse shapers to introduce additional spectral phases (i.e. additional terms $e^{i\phi_\mathrm{shaper}(\omega)}$ multiplying the spectral amplitude) which cancel out the chirp and other dispersive behaviours to minimize the pulse duration.
The transform-limited pulse duration is the minimal pulse duration that's achievable using this procedure.
If you want to get truly technical, then this also depends on the choice of measure for the duration of the pulse (i.e. choosing the FWHM, as you've done with your $\tau$, or some other measure which e.g. takes into account some pre-defined sensitivity to pre- or post-pulses), but if you're arguing about that then you're well and truly into the weeds by that point.
The concept of a transform-limited pulse is of extreme relevance in on-the-ground experimental situations, where the spectrum of your pulse is some jagged beast instead of some nice smooth spectrum (say, take fig. 2(b) of this paper). To evaluate the transform-limited duration, you basically take a set of reasonably-smooth spectral phases $\phi(\omega)$ that's as expansive as you can, and you select the one that gives you the smallest pulse duration. (And yes, by duration you use the FWHM by default, but really you should use whatever is the best descriptor of the temporal resolution limits in your experiment, which will depend on the process you're using.) |
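As an illustration of the procedure, here is a minimal numerical sketch (arbitrary units, and a hypothetical Gaussian spectrum of my choosing, not a measured one): it computes the pulse intensity FWHM from the same power spectrum with a flat spectral phase and with a quadratic (chirped) phase, and the flat phase gives the shorter pulse:

```python
import numpy as np

def fwhm(t, y):
    """Full width at half maximum of a sampled, single-peaked curve."""
    above = t[y >= 0.5 * y.max()]
    return above[-1] - above[0]

# Hypothetical Gaussian spectral amplitude, bandwidth sigma = 2 (arbitrary units)
omega = np.linspace(-40.0, 40.0, 4096)
A_abs = np.exp(-omega**2 / (2 * 2.0**2))

def duration(phi):
    """Pulse intensity FWHM for spectral phase phi(omega)."""
    A = A_abs * np.exp(1j * phi)
    E = np.fft.fftshift(np.fft.ifft(np.fft.ifftshift(A)))
    t = np.fft.fftshift(np.fft.fftfreq(omega.size, d=omega[1] - omega[0])) * 2 * np.pi
    return fwhm(t, np.abs(E)**2)

flat = duration(np.zeros_like(omega))   # flat phase: the transform-limited case
chirped = duration(0.5 * omega**2)      # quadratic phase = linear chirp
print(flat, chirped)                    # the chirped pulse is the longer one
```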
Production of light nuclei and anti-nuclei in $pp$ and Pb-Pb collisions at energies available at the CERN Large Hadron Collider
(American Physical Society, 2016-02)
The production of (anti-)deuteron and (anti-)$^{3}$He nuclei in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV has been studied using the ALICE detector at the LHC. The spectra exhibit a significant hardening with ...
Forward-central two-particle correlations in p-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 TeV
(Elsevier, 2016-02)
Two-particle angular correlations between trigger particles in the forward pseudorapidity range ($2.5 < |\eta| < 4.0$) and associated particles in the central range ($|\eta| < 1.0$) are measured with the ALICE detector in ...
Measurement of D-meson production versus multiplicity in p-Pb collisions at $\sqrt{s_{\rm NN}}=5.02$ TeV
(Springer, 2016-08)
The measurement of prompt D-meson production as a function of multiplicity in p–Pb collisions at $\sqrt{s_{\rm NN}}=5.02$ TeV with the ALICE detector at the LHC is reported. D$^0$, D$^+$ and D$^{*+}$ mesons are reconstructed ...
Measurement of electrons from heavy-flavour hadron decays in p–Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV
(Elsevier, 2016-03)
The production of electrons from heavy-flavour hadron decays was measured as a function of transverse momentum ($p_{\rm T}$) in minimum-bias p–Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV with ALICE at the LHC for $0.5 ...
Direct photon production in Pb-Pb collisions at $\sqrt{s_{NN}}$=2.76 TeV
(Elsevier, 2016-03)
Direct photon production at mid-rapidity in Pb-Pb collisions at $\sqrt{s_{\rm NN}} = 2.76$ TeV was studied in the transverse momentum range $0.9 < p_{\rm T} < 14$ GeV/$c$. Photons were detected via conversions in the ALICE ...
Multi-strange baryon production in p-Pb collisions at $\sqrt{s_\mathbf{NN}}=5.02$ TeV
(Elsevier, 2016-07)
The multi-strange baryon yields in Pb--Pb collisions have been shown to exhibit an enhancement relative to pp reactions. In this work, $\Xi$ and $\Omega$ production rates have been measured with the ALICE experiment as a ...
$^{3}_{\Lambda}\mathrm H$ and $^{3}_{\bar{\Lambda}} \overline{\mathrm H}$ production in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV
(Elsevier, 2016-03)
The production of the hypertriton nuclei $^{3}_{\Lambda}\mathrm H$ and $^{3}_{\bar{\Lambda}} \overline{\mathrm H}$ has been measured for the first time in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV with the ALICE ...
Multiplicity dependence of charged pion, kaon, and (anti)proton production at large transverse momentum in p-Pb collisions at $\sqrt{s_{\rm NN}}$= 5.02 TeV
(Elsevier, 2016-09)
The production of charged pions, kaons and (anti)protons has been measured at mid-rapidity ($-0.5 < y < 0$) in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV using the ALICE detector at the LHC. Exploiting particle ...
Jet-like correlations with neutral pion triggers in pp and central Pb–Pb collisions at 2.76 TeV
(Elsevier, 2016-12)
We present measurements of two-particle correlations with neutral pion trigger particles of transverse momenta $8 < p_{\mathrm{T}}^{\rm trig} < 16 \mathrm{GeV}/c$ and associated charged particles of $0.5 < p_{\mathrm{T}}^{\rm ...
Centrality dependence of charged jet production in p-Pb collisions at $\sqrt{s_\mathrm{NN}}$ = 5.02 TeV
(Springer, 2016-05)
Measurements of charged jet production as a function of centrality are presented for p-Pb collisions recorded at $\sqrt{s_{\rm NN}} = 5.02$ TeV with the ALICE detector. Centrality classes are determined via the energy ... |
ddoc latex/formulas?
Andrei Alexandrescu via Digitalmars-d
digitalmars-d at puremagic.com
Thu Sep 15 07:43:56 PDT 2016

On 09/15/2016 10:37 AM, Johan Engelen wrote:
>
> Well, I'm pretty sure just typing \( \) and running `dmd -D` is not
> going to give me the output that I want. Indeed it doesn't.
>
> But, as you write, it's easy to make it happen. Full example:
> ```
> /**
> * Macros:
> * DDOC =
> * <!DOCTYPE html>
> * <html lang="en-US">
> * <head>
> * <script type="text/javascript" async
> src="https://cdn.mathjax.org/mathjax/latest/MathJax.js?config=TeX-MML-AM_CHTML">
>
> * </script>
> * <title>$(TITLE)</title>
> * </head>
> * <body><h1>$(TITLE)</h1>$(BODY)</body>
> * </html>
> */
>
> /**
> * \[
> * \mathbf{V}_1 \times \mathbf{V}_2 =
> * \begin{vmatrix}
> * \mathbf{i} & \mathbf{j} & \mathbf{k} \\
> * \frac{\partial X}{\partial u} & \frac{\partial Y}{\partial u} & 0 \\
> * \frac{\partial X}{\partial v} & \frac{\partial Y}{\partial v} & 0 \\
> * \end{vmatrix}
> * \]
> */
>
> module ddoctest;
> ```
>
> `dmd ddoctest.d -D -o-` produces an HTML file with nice looking math in it.
>
> Now, can we please add this question to stackoverflow, and add this as
> an answer?
> :)
Probably a wiki page would be an awesome idea. -- Andrei
More information about the Digitalmars-d mailing list
The production side is modeled as follows: there are $j=1,...,m$ identical firms (the number of firms is not necessarily equal to the number of workers, of course) that operate in a perfectly competitive environment. This means that firms are price takers, both in the goods market and in the market for production inputs, i.e. they take prices as given when they seek to attain their objective. It also means that markets "clear": in particular, prices adjust without frictions/delay so that all capital and all labor are employed. The firms solve a static (not intertemporal) problem: maximize profits period-by-period separately. Labor is supplied totally inelastically; there is no labor-leisure choice here on the part of workers. Moreover, each firm has a constant-returns-to-scale production function in capital and labor, i.e. the function is homogeneous of degree one.
Omitting the time subscript, the typical firm's production function is
$$F(K_j, L_j),\;\; j=1,...,m$$
and the objective of the firms is to maximize profits which are defined as the surplus of production over payments to labor $wL_j$, net payments to rented capital $rK_j$, and depreciation $\delta K_j$:
$$\max_{K_j, L_j} \pi = F(K_j, L_j)-wL_j-rK_j-\delta K_j$$
Note that these are "real" magnitudes in the economics sense of the word, i.e. we have divided throughout by the price of output (we usually do not show it, we just say "expressed in real terms").
Denote $\kappa_j \equiv K_j/L_j$, the capital-labor ratio
at firm level. Due to the homogeneity of degree one we can re-write the maximization problem of the firm as
$$\max_{\kappa_j} \pi = L_j\cdot \big[F(\kappa_j, 1)-w-r\kappa_j-\delta \kappa_j\big]$$
Note that $L_j$ has become a multiplicative factor, so we can maximize only the term in brackets, and so only with respect to the capital-labor ratio. We also set $F(\kappa_j, 1) \equiv f(\kappa_j)$ to arrive at
$$\max_{\kappa_j} \pi = \max_{\kappa_j} \big[f(\kappa_j)-w-r\kappa_j-\delta \kappa_j\big]$$
The first order condition for a maximum is for the first derivative to be set equal to zero so
$$f'(\kappa_j) - r - \delta = 0 \implies f'(\kappa_j) - \delta = r$$
This is not yet the funky equation $(6.32)$, although it looks a lot like it, because the latter is expressed in terms of "per capita" capital $k\equiv K/N_1$, i.e. at the level of individuals/consumers/workers, not at firm level.
How do we arrive at $(6.32)$? Well, since we have assumed that all firms are identical, that labor is supplied inelastically, and also that the markets for production inputs clear, we have that
$$mK_j = K \implies K_j = K/m,\;\;\; mL_j = N_1 \implies L_j = N_1/m $$
So
$$\kappa_j = \frac {K_j}{L_j} = \frac {K/m}{N_1/m} = K/N_1 \equiv k$$
and now we have obtained $(6.32)$.
Note how all the assumptions made have been used in order to arrive at this result. |
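A small worked example, under the extra assumption (mine, not the text's) of a Cobb-Douglas technology $F(K,L) = K^\alpha L^{1-\alpha}$, so that $f(\kappa) = \kappa^\alpha$, with made-up values for $r$ and $\delta$:

```python
# Cobb-Douglas illustration: f(kappa) = kappa^alpha, FOC f'(kappa) - delta = r
alpha, r, delta = 0.3, 0.04, 0.06   # made-up parameter values

# Solve alpha * kappa^(alpha-1) = r + delta in closed form
kappa_star = (alpha / (r + delta)) ** (1.0 / (1.0 - alpha))

f = lambda k: k ** alpha
fprime = lambda k: alpha * k ** (alpha - 1.0)
print(fprime(kappa_star) - delta)   # should equal r (up to rounding)

# Under CRS and zero profits, the wage is the residual output per worker
w = f(kappa_star) - (r + delta) * kappa_star
print(w > 0)
```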
The most feasible nuclear reaction for a "first-generation" fusion reaction is the one involving deuterium (D) and tritium (T): $$ \mathrm{D} + \mathrm{T} \rightarrow \alpha (3.5\;\mathrm{MeV}) + n (14.1\;\mathrm{MeV}) $$ Tritium is not a primary fuel and does not exist in significant quantities naturally since it decays with a half life of 12.3 years. It therefore has to be "bred" from a separate nuclear reaction. Most fusion reactor design concepts employ a lithium "blanket" surrounding the reaction vessel which absorbs the energetic fusion neutrons to produce tritium in such a reaction.
There are two stable isotopes of lithium, $\mathrm{^6Li}$ (7.59 % abundance) and $\mathrm{^7Li}$ (92.41 %). Both absorb neutrons to produce tritium:
$$\mathrm{^6Li} + n \rightarrow \mathrm{T} + \mathrm{^4He} + 4.8\;\mathrm{MeV}$$
$$\mathrm{^7Li} + n \rightarrow \mathrm{T} + \mathrm{^4He} + n' - 2.466\;\mathrm{MeV}$$
Unfortunately, only the reaction with the less-abundant isotope has a significant cross section for thermal neutrons, and even then a neutron multiplier is required because of unavoidable neutron losses and incomplete geometric coverage of the blanket (endothermic nuclear reactions involving $\mathrm{^9Be}$ or $\mathrm{Pb}$ have been suggested). Enrichment of lithium is currently a messy and expensive activity involving large quantities of mercury: a viable method will need to be developed before a nuclear fusion reactor can become a reality.
```python
import numpy as np
from matplotlib import rc
import matplotlib.pyplot as plt

rc('font', **{'family': 'serif', 'serif': ['Computer Modern'], 'size': 14})
rc('text', usetex=True)

def read_xsec(filename):
    """Read in the energy grid and cross section from filename."""
    E, xs = np.genfromtxt(filename, comments='#', unpack=True, usecols=(0, 1))
    return E, xs

# Read in the data files:
# 6Li + n -> T + 4He + 4.8 MeV
E_Li6, Li6_xs = read_xsec('Li-6(n,T)He-4.endf')
# 7Li + n -> T + 4He + n' - 2.466 MeV
E_Li7, Li7_xs = read_xsec('Li-7(n,n+T)He-4.endf')

fig, ax = plt.subplots()
ax.loglog(E_Li6, Li6_xs, lw=2, label='$\mathrm{^6Li-n}$')
ax.loglog(E_Li7, Li7_xs, lw=2, label='$\mathrm{^7Li-n}$')

# Prettify, set the axis limits and labels
ax.grid(True, which='both', ls='-')
ax.set_xlim(10, 1.e8)
ax.set_xlabel('E /eV')
ax.set_ylim(0.001, 100)
ax.set_ylabel('$\sigma\;/\mathrm{barn}$')
ax.legend()
plt.savefig('lithium-xsecs.png')
plt.show()
```