Okay, so I'm having real problems distinguishing between the steady-state concept and the balanced growth path in this model: $$ Y = K^\beta (AL)^{1-\beta} $$

I have been asked to derive the steady-state value of capital per effective worker: $$ k^*=\left(\frac{s}{n+g+ \delta }\right)^{\frac{1}{1-\beta }} $$ as well as the steady-state ratio of capital to output (K/Y): $$ \frac{K^{SS}}{Y^{SS}} = \frac{s}{n+g+\delta } $$

I found both of these fine, but I have also been asked to find the "steady-state value of the marginal product of capital, dY/dK". Here is what I did: $$ Y = K^\beta (AL)^{1-\beta} $$ $$ MPK = \frac{dY}{dK} = \beta K^{\beta -1}(AL)^{1-\beta } $$ Substituting in for K in the steady state (calculated when working out the steady-state K/Y ratio above): $$ K^{SS} = AL\left(\frac{s}{n+g+\delta }\right)^{\frac{1}{1-\beta }} $$ $$ MPK^{SS} = \beta (AL)^{1-\beta }\left[AL\left(\frac{s}{n+g+\delta }\right)^{\frac{1}{1-\beta }}\right]^{\beta -1} $$ $$ MPK^{SS} = \beta \left(\frac{s}{n+g+\delta }\right)^{\frac{\beta -1}{1-\beta }} = \beta\left(\frac{n+g+\delta}{s}\right) $$ (since the exponent $\frac{\beta-1}{1-\beta}$ is just $-1$).

Firstly, I need to know whether this calculation for the steady-state value of the MPK is correct. Secondly, I have been asked to sketch the time paths of the capital-output ratio and the marginal product of capital for an economy that converges to its balanced growth path "from below". I am having problems understanding exactly what the balanced growth path is, as opposed to the steady state, and how to use my calculations to figure out what these graphs should look like. Sorry for the mammoth post; any help is greatly appreciated! Thanks in advance.
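One way to get intuition for the "from below" sketches is to simulate the law of motion directly. The following is only a rough numerical sketch with made-up parameter values: starting below the balanced growth path, $k$ rises toward $k^*$, so $K/Y$ rises monotonically toward $s/(n+g+\delta)$ while the MPK falls monotonically toward its steady-state value.

```python
# Minimal sketch (illustrative parameter values assumed) of the Solow law of
# motion in effective-worker units: dk/dt = s*k^beta - (n+g+delta)*k.
s, n, g, delta, beta = 0.3, 0.01, 0.02, 0.05, 1.0 / 3.0
k_star = (s / (n + g + delta)) ** (1 / (1 - beta))

k = 0.1 * k_star                      # start "from below"
dt, steps = 0.1, 5000
path_ky, path_mpk = [], []
for _ in range(steps):
    y = k ** beta
    path_ky.append(k / y)                      # K/Y equals k/y (A and L cancel)
    path_mpk.append(beta * k ** (beta - 1))    # MPK = beta * k^(beta-1)
    k += dt * (s * y - (n + g + delta) * k)    # Euler step of the law of motion
```

Plotting `path_ky` and `path_mpk` against time gives the two requested sketches: one monotonically increasing to $s/(n+g+\delta)$, the other monotonically decreasing to $\beta(n+g+\delta)/s$.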
Define a Bayesian game as follows: $$G = \left\langle I, \left(A_i,T_i,(p_{t_i})_{t_i \in T_i}, u_i \right)_{i \in I} \right\rangle$$ $I$ is the set of players, $A_i$ is the action set for player $i$, $T_i$ is the set of possible types for player $i$, and $p_{t_i} \in \Delta(T_{-i})$ is player $i$'s beliefs regarding the types of the other players $(T_{-i}=\times_{j \ne i}T_j)$. $u_i : A \times T \rightarrow \mathbb{R}$ is player $i$'s utility function.

Then a Bayes-Nash equilibrium is defined as follows: a (pure) Bayes-Nash equilibrium is a profile of choice functions (or strategies) $(\sigma_i:T_i \rightarrow A_i)_{i \in I}$ such that, $\forall i \in I, \forall t_i \in T_i, \forall a_i \in A_i$, $$\sum_{t_{-i}}p_{t_i}(t_{-i}) \cdot u_i(\sigma_i(t_i), \sigma_{-i}(t_{-i}); t_i, t_{-i}) \geq \sum_{t_{-i}}p_{t_i}(t_{-i}) \cdot u_i(a_i, \sigma_{-i}(t_{-i}); t_i, t_{-i}) $$ where, for every $t_{-i}$, $\sigma_{-i}(t_{-i}) = (\sigma_j(t_j))_{j\ne i}$.

And now my question: am I correct that this implies that, in a BNE, every player (or rather, every type of every player) best responds given their beliefs about the types of other players, but that there is nothing in a BNE that pins down the beliefs a player (or type of player) has about the types of other players? That is to say, in a BNE, a player (or type of player) could have a degenerate belief (putting full probability on some $t_j^*$ for player $j$) in an equilibrium in which $t_j \ne t_j^*$? Put more simply, can a type of player's beliefs about the type of another player be wrong in a BNE?
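To see the displayed inequality in action, here is a toy sketch (the payoff function, actions, and type names are all invented for illustration): each type maximizes expected utility against $\sigma_{-i}$ weighted by its own belief $p_{t_i}$, whatever that belief happens to be, including a degenerate one.

```python
# Toy sketch of the BNE best-response condition (hypothetical payoffs and types).
def best_response(actions, belief, sigma_other, u):
    """Maximize sum over t_{-i} of p(t_{-i}) * u(a_i, sigma_{-i}(t_{-i}))."""
    return max(actions, key=lambda a: sum(p * u(a, sigma_other[t])
                                          for t, p in belief.items()))

u = lambda a, b: 1.0 if a == b else 0.0   # player 1 wants to match player 2's action
sigma2 = {"L": "x", "H": "y"}             # player 2 plays x as type L, y as type H

# A degenerate belief putting full mass on type L makes "x" the unique best
# response, regardless of how player 2's type is actually drawn.
br = best_response(["x", "y"], {"L": 1.0}, sigma2, u)
```

The definition only ever evaluates expected utility against `belief`; nothing in it constrains `belief` to match the true type distribution.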
This section addresses the graphing of the Tangent, Cosecant, Secant, and Cotangent curves.

Verbal

1) Explain how the graph of the sine function can be used to graph \(y=\csc x\). Answer Since \(y=\csc x\) is the reciprocal function of \(y=\sin x\), you can plot the reciprocals of the coordinates on the graph of \(y=\sin x\) to obtain the \(y\)-coordinates of \(y=\csc x\). The \(x\)-intercepts of the graph of \(y=\sin x\) are the vertical asymptotes for the graph of \(y=\csc x\).

2) How can the graph of \(y=\cos x\) be used to construct the graph of \(y=\sec x\)?

3) Explain why the period of \(y=\tan x\) is equal to \(\pi \). Answer Answers will vary. Using the unit circle, one can show that \(\tan (x+\pi )=\tan x\).

4) Why are there no intercepts on the graph of \(y=\csc x\)?

5) How does the period of \(y=\csc x\) compare with the period of \(y=\sin x\)? Answer The period is the same: \(2\pi \)

Algebraic

For exercises 6-9, match each trigonometric function with one of the following graphs.

Exercise \(\PageIndex{6}\) \(f(x)=\tan x\) Answer \(\mathrm{I}\)

Exercise \(\PageIndex{7}\) \(f(x)=\sec x\) Answer \(\mathrm{IV}\)

Exercise \(\PageIndex{8}\) \(f(x)=\csc x\) Answer \(\mathrm{II}\)

Exercise \(\PageIndex{9}\) \(f(x)=\cot x\) Answer \(\mathrm{III}\)

For exercises 10-16, find the period and horizontal shift of each of the functions.

10) \(f(x)=2\tan(4x-32)\)

11) \(h(x)=2\sec\left(\dfrac{\pi }{4}(x+1) \right)\) Answer period: \(8\); horizontal shift: \(1\) unit to the left

12) \(m(x)=6\csc\left(\dfrac{\pi }{3}x+\pi \right)\)

13) If \(\tan x=-1.5\), find \(\tan (-x)\). Answer \(1.5\)

14) If \(\sec x=2\), find \(\sec (-x)\).

15) If \(\csc x=-5\), find \(\csc (-x)\). Answer \(5\)

16) If \(x\sin x=2\), find \((-x)\sin (-x)\).

For exercises 17-18, rewrite each expression such that the argument \(x\) is positive.
Exercise \(\PageIndex{17}\) \(\cot(-x)\cos(-x)+\sin(-x)\) Answer \(-\cot x \cos x-\sin x\)

18) \(\cos(-x)+\tan(-x)\sin(-x)\)

Graphical

For the exercises 19-36, sketch two periods of the graph for each of the following functions. Identify the stretching factor, period, and asymptotes.

19) \(f(x)=2\tan(4x-32)\) Answer stretching factor: \(2\); period: \(\dfrac{\pi }{4}\); asymptotes: \(x=\dfrac{1}{4}\left(\dfrac{\pi }{2}+\pi k \right)+8\), where \(k\) is an integer

20) \(h(x)=2\sec\left(\dfrac{\pi }{4}(x+1) \right)\)

21) \(m(x)=6\csc\left(\dfrac{\pi }{3}x+\pi \right)\) Answer stretching factor: \(6\); period: \(6\); asymptotes: \(x=3k\), where \(k\) is an integer

22) \(j(x)=\tan \left ( \dfrac{\pi }{2}x \right )\)

Exercise \(\PageIndex{23}\) \(p(x)=\tan \left ( x-\dfrac{\pi }{2} \right )\) Answer stretching factor: \(1\); period: \(\pi \); asymptotes: \(x=\pi k\), where \(k\) is an integer

24) \(f(x)=4\tan (x)\)

25) \(f(x)=\tan \left ( x+\dfrac{\pi }{4} \right )\) Answer stretching factor: \(1\); period: \(\pi \); asymptotes: \(x=\dfrac{\pi}{4}+\pi k\), where \(k\) is an integer

26) \(f(x)=\pi \tan(\pi x- \pi)-\pi\)

27) \(f(x)=2\csc (x)\) Answer stretching factor: \(2\); period: \(2\pi \); asymptotes: \(x=\pi k\), where \(k\) is an integer

28) \(f(x)=-\dfrac{1}{4}\csc(x)\)

29) \(f(x)=4\sec(3x)\) Answer stretching factor: \(4\); period: \(\dfrac{2\pi }{3}\); asymptotes: \(x=\dfrac{\pi }{6}+\dfrac{\pi }{3}k\), where \(k\) is an integer

30) \(f(x)=-3\cot(2x)\)

31) \(f(x)=7\sec(5x)\) Answer stretching factor: \(7\); period: \(\dfrac{2\pi }{5}\); asymptotes: \(x=\dfrac{\pi }{10}+\dfrac{\pi }{5}k\), where \(k\) is an integer

32) \(f(x)=\dfrac{9}{10}\csc(\pi x)\)

33) \(f(x)=2\csc \left(x+\dfrac{\pi }{4} \right)-1\) Answer stretching factor: \(2\); period: \(2\pi \); asymptotes: \(x=-\dfrac{\pi}{4}+\pi k\), where \(k\) is an integer

34) \(f(x)=-\sec \left(x-\dfrac{\pi }{3} \right)-2\)

35) \(f(x)=\dfrac{7}{5}\csc \left(x-\dfrac{\pi }{4} \right)\) Answer stretching factor: \(\dfrac{7}{5}\); period: \(2\pi \); asymptotes: \(x=\dfrac{\pi}{4}+\pi k\), where \(k\) is an integer

36) \(f(x)=5\left (\cot \left(x+\dfrac{\pi }{2} \right) -3 \right )\)

For the exercises 37-38, find and graph two periods of the
periodic function with the given stretching factor, \(| A |\), period, and phase shift.

37) A tangent curve, \(A=1\), period of \(\dfrac{\pi }{3}\), and phase shift \((h, k)=\left ( \dfrac{\pi }{4},2 \right )\) Answer \(y=\tan\left(3\left(x-\dfrac{\pi}{4} \right) \right)+2\)

38) A tangent curve, \(A=-2\), period of \(\dfrac{\pi }{4}\), and phase shift \((h, k)=\left (- \dfrac{\pi }{4},-2 \right )\)

For the exercises 39-45, find an equation for the graph of each function.

39) Answer \(f(x)=\csc (2x)\)

40)

41) Answer \(f(x)=\csc (4x)\)

42)

43) Answer \(f(x)=2\csc x\)

44)

45) Answer \(f(x)=\dfrac{1}{2}\tan (100\pi x)\)

Technology

For the exercises 46-53, use a graphing calculator to graph two periods of the given function. Note: most graphing calculators do not have a cosecant button; therefore, you will need to input \(\csc x\) as \(\dfrac{1}{\sin x}\).

46) \(f(x)=| \csc (x) |\)

47) \(f(x)=| \cot (x) |\) Answer

48) \(f(x)=2^{\csc (x)}\)

49) \(f(x)=\frac{\csc (x)}{\sec (x)}\) Answer

50) Graph \(f(x)=1+\sec^2(x)-\tan^2(x)\). What is the function shown in the graph?

51) \(f(x)=\sec(0.001x)\) Answer

52) \(f(x)=\cot(100 \pi x)\)

53) \(f(x)=\sin^2x +\cos^2x\) Answer

Real-World Applications

54) The function \(f(x)=20\tan\left(\dfrac{\pi }{10}x\right)\) marks the distance in the movement of a light beam from a police car across a wall for time \(x\), in seconds, and distance \(f(x)\), in feet. Graph on the interval \([0,5]\). Find and interpret the stretching factor, period, and asymptote. Evaluate \(f(10)\) and \(f(2.5)\) and discuss the function’s values at those inputs.

55) Standing on the shore of a lake, a fisherman sights a boat far in the distance to his left. Let \(x\), measured in radians, be the angle formed by the line of sight to the ship and a line due north from his position. Assume due north is \(0\) and \(x\) is measured negative to the left and positive to the right. (See the figure below.)
The boat travels from due west to due east and, ignoring the curvature of the Earth, the distance \(d(x)\), in kilometers, from the fisherman to the boat is given by the function \(d(x)=1.5\sec(x)\). What is a reasonable domain for \(d(x)\)? Graph \(d(x)\) on this domain. Find and discuss the meaning of any vertical asymptotes on the graph of \(d(x)\). Calculate and interpret \(d\left ( -\dfrac{\pi }{3} \right )\). Round to the second decimal place. Calculate and interpret \(d\left ( \dfrac{\pi }{6} \right )\). Round to the second decimal place. What is the minimum distance between the fisherman and the boat? When does this occur?

Answer \(\left ( -\dfrac{\pi }{2},\dfrac{\pi }{2} \right )\); \(x=-\dfrac{\pi }{2}\) and \(x=\dfrac{\pi }{2}\); the distance grows without bound as \(| x |\) approaches \(\dfrac{\pi }{2}\); that is, at right angles to the line representing due north, the boat would be so far away that the fisherman could not see it; \(3\); when \(x=-\dfrac{\pi }{3}\), the boat is \(3\) km away; \(1.73\); when \(x=\dfrac{\pi }{6}\), the boat is about \(1.73\) km away; \(1.5\) km; when \(x=0\)

56) A laser rangefinder is locked on a comet approaching Earth. The distance \(g(x)\), in kilometers, of the comet after \(x\) days, for \(x\) in the interval \(0\) to \(30\) days, is given by \(g(x)=250,000\csc \left(\dfrac{\pi }{30}x \right)\). Graph \(g(x)\) on the interval \([0,35]\). Evaluate \(g(5)\) and interpret the information. What is the minimum distance between the comet and Earth? When does this occur? To which constant in the equation does this correspond? Find and discuss the meaning of any vertical asymptotes.

57) A video camera is focused on a rocket on a launching pad \(2\) miles from the camera. The angle of elevation from the ground to the rocket after \(x\) seconds is \(\dfrac{\pi }{120}x\). Write a function expressing the altitude \(h(x)\), in miles, of the rocket above the ground after \(x\) seconds. Ignore the curvature of the Earth.
Graph \(h(x)\) on the interval \((0,60)\). Evaluate and interpret the values \(h(0)\) and \(h(30)\). What happens to the values of \(h(x)\) as \(x\) approaches \(60\) seconds? Interpret the meaning of this in terms of the problem.

Answer \(h(x)=2\tan \left(\dfrac{\pi }{120}x \right)\); \(h(0)=0\): after \(0\) seconds, the rocket is \(0\) mi above the ground; \(h(30)=2\): after \(30\) seconds, the rocket is \(2\) mi high; as \(x\) approaches \(60\) seconds, the values of \(h(x)\) grow increasingly large. The distance to the rocket is growing so large that the camera can no longer track it.
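The quoted values can be double-checked numerically; this is a quick sketch using the derived \(h(x)\):

```python
import math

# Quick check of h(x) = 2 tan(pi * x / 120) at the quoted inputs
h = lambda x: 2 * math.tan(math.pi * x / 120)
h0, h30 = h(0), h(30)    # 0 mi on the pad, 2 mi after 30 seconds
h59 = h(59.9)            # near the vertical asymptote at x = 60 the altitude blows up
```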
So the quantity I would like to understand is: $$\sum_{x \in \{ 0,1,\dots,m \}^n : \sum_{i=1}^n x_i=m} \exp \left ( -\beta \sum_{i=1}^n |x_{i+1}-x_i| \right )$$ where $m$ is a positive integer, $\beta=\frac{1}{k_B T}>0$, and we define $x_{n+1}=x_1$ (periodic boundary conditions). This is the partition function of a certain lattice model under a total mass constraint. The case I'm interested in is a thermodynamic limit: $m,n \to \infty$, but in particular with $m \gg n$ (high density). If I drop the mass constraint, i.e. if I consider $$\sum_{x \in \{ 0,1,\dots,m \}^n} \exp \left ( -\beta \sum_{i=1}^n |x_{i+1}-x_i| \right )$$ then I can actually compute the leading order asymptotic for the partition function when $m \to \infty$. Similar to the 1D Ising model, this can be done through the transfer matrix method: the partition function in this case is the trace of $Y^n$ where $Y$ is a $(m+1) \times (m+1)$ matrix with $y_{ij}=\exp(-\beta |i-j|)$. There is theory out there to compute the asymptotics of the trace for a nice family of matrices like this. I have thought about computing the constrained partition function by introducing an "artificial field" with an intensity parameter $\mu$, and then sending $\mu \to \infty$. For example, I could look at: $$\sum_{x \in \{ 0,1,\dots,m \}^n} \exp \left ( -\beta \sum_{i=1}^n |x_{i+1}-x_i| -\mu \left ( m - \sum_{i=1}^n x_i \right )^2 \right ).$$ The problem is that I don't see how to write this in the form $\operatorname{Tr}(Y^n)$ for some $Y$, because now all of the $x_i$ interact with each other (as can be seen by expanding the square). If this is too hard, I would also be very interested to see useful approximations, for example a mean field approximation to my perturbed energy. (I apologize if this question is too mathematical in character; I have actually asked the same question in different language on MSE already.)
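For small $n$ and $m$ both partition functions can be brute-forced, which is useful for checking any transfer-matrix or asymptotic computation (a minimal sketch; the sizes here are only illustrative):

```python
import itertools
import math
import numpy as np

# Brute-force sanity check at small, illustrative sizes: the unconstrained
# partition function equals Tr(Y^n) for the transfer matrix y_ij = exp(-beta|i-j|),
# and the mass-constrained sum is a strict sub-sum of it.
beta, m, n = 0.7, 6, 4

def energy(x):
    # Periodic boundary conditions: x_{n+1} = x_1
    return sum(abs(x[(i + 1) % len(x)] - x[i]) for i in range(len(x)))

states = list(itertools.product(range(m + 1), repeat=n))
Z_free = sum(math.exp(-beta * energy(x)) for x in states)
Z_constrained = sum(math.exp(-beta * energy(x)) for x in states if sum(x) == m)

idx = np.arange(m + 1)
Y = np.exp(-beta * np.abs(np.subtract.outer(idx, idx)))
Z_transfer = np.trace(np.linalg.matrix_power(Y, n))
```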
I've had several Computer Science courses and, from what I recall, I've never been given a rigorous definition of a suitable encoding. Definitions always tend to use "effective method" or some synonym to define a suitable encoding, even though the reason we are defining suitable encodings is to formalize what an effective method is. I've recently discussed it with some of my classmates and we found a few necessary conditions but could not find sufficient ones.

Let $X$ be a set of things and $\Sigma$ an alphabet. An encoding is an injective function $e:X\to \Sigma^*$. The associated code is $e(X):=\{e(x) : x \in X\}$. Given a subset $P\subseteq X$, we say that it is $e$-decidable if $e(P)$ is decidable (as a subset of $e(X)$). Given two encodings $e$ and $e'$, we call the translation from $e$ to $e'$ the unique function $f_{e\to e'}:e(X)\to e'(X)$ such that for all $x\in X$, $f_{e\to e'}(e(x))=e'(x)$. If $f_{e\to e'}$ is computable, then any $e'$-decidable problem is also $e$-decidable. We will therefore say that $e'$ is a better encoding than $e$ and write $e\rightsquigarrow e'$. Intuitively, we want the encoding to contain as little information as possible (so as to minimize the set of decidable problems) but still have enough information to know which element we're talking about (hence the injectivity of $e$).

For example, take $X=\{(M,w): M \text{ a Turing machine and } w \text{ a word}\}$, call $e'$ one standard encoding, and let $e(M,w):=e'(M,w)b$ where $b$ is $1$ if $M$ halts on $w$ and $0$ otherwise. The halting problem is $e$-decidable but not $e'$-decidable. The function $f_{e\to e'}(ub):=u$ is computable, so $e'$ is a better encoding than $e$.

We say that two encodings are equivalent, and write $e\sim e'$, if $e\rightsquigarrow e'$ and $e'\rightsquigarrow e$. Given a suitable encoding $e$, the suitable encodings are exactly those equivalent to $e$. But I feel like there should be some way to define suitable encodings without needing to give a specific one.
Let $e'$ be an encoding such that for any encoding $e$, if $e\rightsquigarrow e'$, then $e'\rightsquigarrow e$ (and $e\sim e'$). We will call $e'$ a minimal encoding. A minimal encoding is an encoding better than any encoding it is comparable to. If $e\sim e'$, then $e'$ is minimal iff $e$ is. An encoding $e$ is called decidable if $e(X)$ is. If $e\sim e'$, then $e'$ is decidable iff $e$ is. A suitable encoding should probably be minimal and decidable, but that doesn't seem to be enough to characterize them. Intuitively, there seem to be several equivalence classes of minimal decidable encodings, but there is only one that really represents what we want to call suitable encodings.

A solution could be to add: "If an encoding $e$ is suitable, then for any $e'$, any non-$e'$-decidable problem is not $e$-decidable." But I'm pretty sure one can build a minimal decidable encoding for the integers such that being even is undecidable. Let $(M_n)_n$ be an enumeration of Turing machines. We can extract two subenumerations $(M_{\varphi(n)})_n$ (resp. $(M_{\psi(n)})_n$) of machines that halt (resp. do not halt) on all inputs. Then we define $e(2n)=e'(\varphi(n))$ and $e(2n+1)=e'(\psi(n))$ where $e'$ is some suitable encoding. Another idea is to say that the suitable encodings must somehow be more robust, so that their equivalence class must be bigger. But both will be countable...
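As a concrete toy instance of the definitions above, unary and binary encodings of the naturals are equivalent in this sense: both translation functions below are clearly computable, so each encoding is "better" than the other.

```python
# Two injective encodings of the naturals over small alphabets.
def e_unary(n):  return "1" * n                   # e(n) over the alphabet {1}
def e_binary(n): return bin(n)[2:] if n else "0"  # e'(n) over the alphabet {0,1}

# f_{e -> e'} and f_{e' -> e}: both computable, hence e ~ e'.
def unary_to_binary(w): return e_binary(len(w))
def binary_to_unary(w): return "1" * int(w, 2)
```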
Assume $X_t$ is a multivariate Ornstein-Uhlenbeck process, i.e. $$dX_t=\sigma dB_t-AX_tdt$$ and the spot interest rate evolves by the following equation: $$r_t=a+b\cdot X_t.$$ After solving for $X_t$ using $e^{tA}X_t$ and Ito, and looking at $\int_0^T{r_s\;ds}$, it turns out that $$\int_0^T{r_s\;ds} \sim \mathcal{N}\left(aT+b^{T}(I-e^{-TA})A^{-1}X_0,\; b^{T}V_Tb\right)$$ where $V_T$ is the covariance matrix of $\int_0^T(I-e^{-(T-u)A})A^{-1}\sigma dB_u$. This gives us the yield curve $$y(t)=a+\frac{b^{T}(I-e^{-tA})A^{-1}X_0}{t}-\frac{b^{T}V_tb}{2t}$$ (the variance term enters with a minus sign since $y(t)=-\frac{1}{t}\ln E[e^{-\int_0^t r_s ds}]$), and by plugging in $A= \begin{pmatrix} \lambda & 1 \\ 0 & \lambda \\ \end{pmatrix}$ we finally arrive at $$y(t)=a+\frac{1-e^{-\lambda t}}{\lambda t}C_0+e^{-\lambda t}C_1-\frac{b^{T}V_tb}{2t}.$$ The formula above without the term $\frac{b^{T}V_tb}{2t}$ is known as the Nelson-Siegel yield curve model. Could somebody clarify why neglecting $\frac{b^{T}V_tb}{2t}$ leads to arbitrage opportunities? So I am essentially asking the following question: why is the above model (with the term $\frac{b^{T}V_tb}{2t}$) arbitrage free?
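For reference, the deterministic part of the curve can be coded directly (a sketch with placeholder parameter values; `expm1` keeps the level factor numerically stable for small $t$). The limits $y(0^+)=a+C_0+C_1$ and $y(\infty)=a$ come out as expected:

```python
import numpy as np

# Sketch of the factor part of the curve:
#   y(t) = a + (1 - e^{-lam t})/(lam t) * C0 + e^{-lam t} * C1,
# with the convexity term b'V_t b/(2t) omitted. Parameter values are placeholders.
def nelson_siegel(t, a, c0, c1, lam):
    lt = lam * t
    level = -np.expm1(-lt) / lt    # (1 - e^{-lam t})/(lam t), stable for small lam*t
    return a + level * c0 + np.exp(-lt) * c1
```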
So far, we have learned how to differentiate a variety of functions, including trigonometric, inverse, and implicit functions. In this section, we explore derivatives of logarithmic functions. Logarithmic functions can help rescale large quantities and are particularly helpful for rewriting complicated expressions.

Derivative of the Logarithmic Function

Now that we have the derivative of the natural exponential function, we can use implicit differentiation to find the derivative of its inverse, the natural logarithmic function.

Theorem: The Derivative of the Natural Logarithmic Function

If \(x>0\) and \(y=\ln x\), then \(\frac{dy}{dx}=\frac{1}{x}\). If \(x ≠ 0\) and \(y=\ln |x|\), then \(\frac{dy}{dx}=\frac{1}{x}\). Suppose the argument of the natural log is not just \(x\), but instead is \(g(x)\), a differentiable function. Now, using the chain rule, we get a more general derivative: for all values of \(x\) for which \(g(x)>0\), the derivative of \(h(x)=\ln (g(x))\) is given by \(h′(x)=\frac{1}{g(x)}g′(x).\)

Proof

If \(x>0\) and \(y=\ln x\), then \(e^y=x.\) Differentiating both sides of this equation results in the equation \(e^y\frac{dy}{dx}=1.\) Solving for \(\frac{dy}{dx}\) yields \(\frac{dy}{dx}=\frac{1}{e^y}\). Finally, we substitute \(x=e^y\) to obtain \(\frac{dy}{dx}=\frac{1}{x}\). We may also derive this result by applying the inverse function theorem, as follows. Since \(y=g(x)=\ln x\) is the inverse of \(f(x)=e^x\), by applying the inverse function theorem we have \(\frac{dy}{dx}=\frac{1}{f′(g(x))}=\frac{1}{e^{\ln x}}=\frac{1}{x}\). Using this result and applying the chain rule to \(h(x)=\ln (g(x))\) yields \(h′(x)=\frac{1}{g(x)}g′(x)\). □

The graph of \(y=\ln x\) and its derivative \(\frac{dy}{dx}=\frac{1}{x}\) are shown in Figure \(\PageIndex{3}\).

Figure \(\PageIndex{3}\): The function \(y=\ln x\) is increasing on \((0,+∞)\).
Its derivative \(y'=\frac{1}{x}\) is greater than zero on \((0,+∞)\).

Example \(\PageIndex{1}\): Taking a Derivative of a Natural Logarithm

Find the derivative of \(f(x)=\ln (x^3+3x−4)\).

Solution

Use Equation directly. \(f′(x)=\frac{1}{x^3+3x−4}⋅(3x^2+3)\) Use \(g(x)=x^3+3x−4\) in \(h′(x)=\frac{1}{g(x)}g′(x)\). \(=\frac{3x^2+3}{x^3+3x−4}\) Rewrite.

Example \(\PageIndex{2}\): Using Properties of Logarithms in a Derivative

Find the derivative of \(f(x)=\ln \left(\frac{x^2\sin x}{2x+1}\right)\).

Solution

At first glance, taking this derivative appears rather complicated. However, by using the properties of logarithms prior to finding the derivative, we can make the problem much simpler. \(f(x)=\ln \left(\frac{x^2\sin x}{2x+1}\right)=2\ln x+\ln (\sin x)−\ln (2x+1)\) Apply properties of logarithms. \(f′(x)=\frac{2}{x}+\cot x−\frac{2}{2x+1}\) Apply the sum rule and \(h′(x)=\frac{1}{g(x)}g′(x)\).

Exercise \(\PageIndex{1}\)

Differentiate: \(f(x)=\ln (3x+2)^5\).

Hint Use a property of logarithms to simplify before taking the derivative.

Answer \(f′(x)=\frac{15}{3x+2}\)

Now that we can differentiate the natural logarithmic function, we can use this result to find the derivatives of \(y=\log_b x\) and \(y=b^x\) for \(b>0, b≠1\).

Derivatives of General Exponential and Logarithmic Functions

Let \(b>0, b≠1,\) and let \(g(x)\) be a differentiable function. i. If \(y=\log_b x\), then \(\frac{dy}{dx}=\frac{1}{x\ln b}\). More generally, if \(h(x)=\log_b(g(x))\), then for all values of \(x\) for which \(g(x)>0\), \(h′(x)=\frac{g′(x)}{g(x)\ln b}\). ii. If \(y=b^x,\) then \(\frac{dy}{dx}=b^x\ln b\). More generally, if \(h(x)=b^{g(x)},\) then \(h′(x)=b^{g(x)}g′(x)\ln b\).

Proof

If \(y=\log_b x,\) then \(b^y=x.\) It follows that \(\ln (b^y)=\ln x\). Thus \(y\ln b=\ln x\). Solving for \(y\), we have \(y=\frac{\ln x}{\ln b}\). Differentiating and keeping in mind that \(\ln b\) is a constant, we see that \(\frac{dy}{dx}=\frac{1}{x\ln b}\). The derivative in Equation now follows from the chain rule. If \(y=b^x\),
then \(\ln y=x\ln b.\) Using implicit differentiation, again keeping in mind that \(\ln b\) is constant, it follows that \(\frac{1}{y}\frac{dy}{dx}=\ln b\). Solving for \(\frac{dy}{dx}\) and substituting \(y=b^x\), we see that \(\frac{dy}{dx}=y\ln b=b^x\ln b\). The more general derivative (Equation) follows from the chain rule. □

Example \(\PageIndex{3}\): Applying Derivative Formulas

Find the derivative of \(h(x)=\frac{3^x}{3^x+2}\).

Solution

Use the quotient rule and Note. \(h′(x)=\frac{3^x\ln 3(3^x+2)−3^x\ln 3(3^x)}{(3^x+2)^2}\) Apply the quotient rule. \(=\frac{2⋅3^x\ln 3}{(3^x+2)^2}\) Simplify.

Example \(\PageIndex{4}\): Finding the Slope of a Tangent Line

Find the slope of the line tangent to the graph of \(y=\log_2(3x+1)\) at \(x=1\).

Solution

To find the slope, we must evaluate \(\frac{dy}{dx}\) at \(x=1\). Using Equation, we see that \(\frac{dy}{dx}=\frac{3}{(3x+1)\ln 2}\). By evaluating the derivative at \(x=1\), we see that the tangent line has slope \(\frac{dy}{dx}∣_{x=1}=\frac{3}{4\ln 2}=\frac{3}{\ln 16}\).

Exercise \(\PageIndex{2}\)

Find the slope of the line tangent to \(y=3^x\) at \(x=2.\)

Hint Evaluate the derivative at \(x=2.\)

Answer \(9\ln 3\)

Logarithmic Differentiation

At this point, we can take derivatives of functions of the form \(y=(g(x))^n\) for certain values of \(n\), as well as functions of the form \(y=b^{g(x)}\), where \(b>0\) and \(b≠1\). Unfortunately, we still do not know the derivatives of functions such as \(y=x^x\) or \(y=x^π\). These functions require a technique called logarithmic differentiation, which allows us to differentiate any function of the form \(h(x)=g(x)^{f(x)}\). It can also be used to convert a very complex differentiation problem into a simpler one, such as finding the derivative of \(y=\frac{x\sqrt{2x+1}}{e^x\sin ^3x}\). We outline this technique in the following problem-solving strategy.
Problem-Solving Strategy: Using Logarithmic Differentiation

To differentiate \(y=h(x)\) using logarithmic differentiation, take the natural logarithm of both sides of the equation to obtain \(\ln y=\ln (h(x)).\) Use properties of logarithms to expand \(\ln (h(x))\) as much as possible. Differentiate both sides of the equation. On the left we will have \(\frac{1}{y}\frac{dy}{dx}\). Multiply both sides of the equation by \(y\) to solve for \(\frac{dy}{dx}\). Replace \(y\) by \(h(x)\).

Example \(\PageIndex{5}\): Using Logarithmic Differentiation

Find the derivative of \(y=(2x^4+1)^{\tan x}\).

Solution

Use logarithmic differentiation to find this derivative. \(\ln y=\ln (2x^4+1)^{\tan x}\) Step 1. Take the natural logarithm of both sides. \(\ln y=\tan x\ln (2x^4+1)\) Step 2. Expand using properties of logarithms. \(\frac{1}{y}\frac{dy}{dx}=\sec ^2x\ln (2x^4+1)+\frac{8x^3}{2x^4+1}⋅\tan x\) Step 3. Differentiate both sides. Use the product rule on the right. \(\frac{dy}{dx}=y⋅\left(\sec ^2x\ln (2x^4+1)+\frac{8x^3}{2x^4+1}⋅\tan x\right)\) Step 4. Multiply by \(y\) on both sides. \(\frac{dy}{dx}=(2x^4+1)^{\tan x}\left(\sec ^2x\ln (2x^4+1)+\frac{8x^3}{2x^4+1}⋅\tan x\right)\) Step 5. Substitute \(y=(2x^4+1)^{\tan x}\).

Example \(\PageIndex{6}\): Extending the Power Rule

Find the derivative of \(y=\frac{x\sqrt{2x+1}}{e^x\sin ^3x}\).

Solution

This problem really makes use of the properties of logarithms and the differentiation rules given in this chapter. \(\ln y=\ln \frac{x\sqrt{2x+1}}{e^x\sin ^3x}\) Step 1. Take the natural logarithm of both sides. \(\ln y=\ln x+\frac{1}{2}\ln (2x+1)−x\ln e−3\ln \sin x\) Step 2. Expand using properties of logarithms. \(\frac{1}{y}\frac{dy}{dx}=\frac{1}{x}+\frac{1}{2x+1}−1−3\frac{\cos x}{\sin x}\) Step 3. Differentiate both sides. \(\frac{dy}{dx}=y\left(\frac{1}{x}+\frac{1}{2x+1}−1−3\cot x\right)\) Step 4. Multiply by \(y\) on both sides. \(\frac{dy}{dx}=\frac{x\sqrt{2x+1}}{e^x\sin ^3x}\left(\frac{1}{x}+\frac{1}{2x+1}−1−3\cot x\right)\) Step 5.
Substitute \(y=\frac{x\sqrt{2x+1}}{e^x\sin ^3x}.\)

Exercise \(\PageIndex{3}\)

Use logarithmic differentiation to find the derivative of \(y=x^x\).

Hint Follow the problem-solving strategy.

Answer \(\frac{dy}{dx}=x^x(1+\ln x)\)

Exercise \(\PageIndex{4}\)

Find the derivative of \(y=(\tan x)^π\).

Hint Use the result from Example.

Answer \(y′=π(\tan x)^{π−1}\sec ^2x\)

Key Concepts

On the basis of the assumption that the exponential function \(y=b^x, b>0\) is continuous everywhere and differentiable at \(0\), this function is differentiable everywhere and there is a formula for its derivative. We can use a formula to find the derivative of \(y=\ln x\), and the relationship \(\log_b x=\frac{\ln x}{\ln b}\) allows us to extend our differentiation formulas to include logarithms with arbitrary bases. Logarithmic differentiation allows us to differentiate functions of the form \(y=g(x)^{f(x)}\) or very complex functions by taking the natural logarithm of both sides and exploiting the properties of logarithms before differentiating.

Key Equations

Derivative of the natural exponential function \(\frac{d}{dx}(e^{g(x)})=e^{g(x)}g′(x)\)

Derivative of the natural logarithmic function \(\frac{d}{dx}(\ln g(x))=\frac{1}{g(x)}g′(x)\)

Derivative of the general exponential function \(\frac{d}{dx}(b^{g(x)})=b^{g(x)}g′(x)\ln b\)

Derivative of the general logarithmic function \(\frac{d}{dx}(\log_b g(x))=\frac{g′(x)}{g(x)\ln b}\)

Glossary

logarithmic differentiation: a technique that allows us to differentiate a function by first taking the natural logarithm of both sides of an equation, applying properties of logarithms to simplify the equation, and differentiating implicitly

Contributors

Gilbert Strang (MIT) and Edwin “Jed” Herman (Harvey Mudd) with many contributing authors. This content by OpenStax is licensed with a CC-BY-SA-NC 4.0 license. Download for free at http://cnx.org.
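Two of the results in this section lend themselves to a quick numerical spot-check; the following sketch compares the closed-form derivatives with central differences at arbitrarily chosen points:

```python
import math

h = 1e-6  # step for central differences

# Check d/dx x^x = x^x (1 + ln x) at x = 2
f1 = lambda x: x ** x
exact_xx = 2.0 ** 2.0 * (1 + math.log(2.0))
numeric_xx = (f1(2.0 + h) - f1(2.0 - h)) / (2 * h)

# Check d/dx log_2(3x+1) = 3 / ((3x+1) ln 2) at x = 1, i.e. slope 3 / (4 ln 2)
f2 = lambda x: math.log(3 * x + 1, 2)
exact_log = 3 / (4 * math.log(2))
numeric_log = (f2(1.0 + h) - f2(1.0 - h)) / (2 * h)
```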
I have successfully solved the multi-species diffusion-reaction equation \begin{equation} \frac{\partial c_i}{\partial t} = \nabla \cdot (d_i(x)\nabla c_i) + s_i(x,t), \quad \quad (1) \end{equation} with discontinuous source term \begin{equation} s(x,t) = \left\{ \begin{array}{rcl} s_1(x,t) & \text{for} & 0<x\le x' , \\ s_2(x,t) &\text{for} & x'<x\le 1. \end{array}\right. \end{equation} In fact, the variable diffusion coefficient also takes slightly different forms in these two regions. The equation is discretized with the conservative central scheme \begin{equation} c_j'(t) = \frac{1}{h^2}\biggl( d(x_{j+1/2})(c_{j+1}(t)-c_j(t))- d(x_{j-1/2})(c_{j}(t)-c_{j-1}(t))\biggr) + s(x_j,t). \end{equation} I used the SUNDIALS package for time integration.

Now I'm trying to solve the problem in a concentrated solution, \begin{equation} \frac{\partial c_i}{\partial t} = \nabla \cdot \sum_k D_{ik}(x)\nabla c_k + s(x,t), \end{equation} where $D_{ik}$ may be negative. In the dilute solution approximation (solvent concentration $c_0 \rightarrow \infty$), we have $D_{ik} \rightarrow d_{i}\delta_{ik}$. However, the solution exhibits spurious oscillations. The blue line is solved with $c_0=10^{20}$ or equation (1); the purple and green lines with $c_0=10^{10}$. The difference between the purple and green lines is the averaging scheme used for the diffusion coefficient at the cell interface: the former uses a simple average and the latter a harmonic average. As shown in the figures below, there are oscillations near the boundaries and the point $x'$. (I should mention that the very first oscillations began at the left boundary before they built up at the right boundary and the point $x'$.) It is odd that, even with the oscillations, the scheme with the harmonic average never fails to converge at each timestep and follows closely the result of the analytical dilution scheme (equation (1)), whereas the simple average scheme fails at about one tenth of the time domain.
At much later times the solutions look similar. Are there discretizations within the finite volume scheme that avoid such oscillations?
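For reference, here is a minimal sketch (invented names, uniform grid, boundary values held fixed) of the conservative scheme with harmonic interface averaging described above:

```python
import numpy as np

def harmonic_face(d):
    """Harmonic average of cell-centered diffusivities at interior faces."""
    return 2.0 * d[:-1] * d[1:] / (d[:-1] + d[1:])

def rhs(c, d, s, h):
    """Conservative central discretization of (d c_x)_x + s on a uniform grid.

    The first and last entries are left at zero (Dirichlet values held fixed)."""
    d_face = harmonic_face(d)
    flux = d_face * np.diff(c) / h          # d_{j+1/2} (c_{j+1} - c_j) / h
    out = np.zeros_like(c)
    out[1:-1] = (flux[1:] - flux[:-1]) / h + s[1:-1]
    return out
```

By construction the interior updates telescope, so the discrete scheme conserves total mass up to the boundary fluxes.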
I've been trying to find some resources that would help me figure out how to numerically solve a coupled system of ODEs which is also an eigenvalue problem. The system is something like: $ \tag{1} \kappa h_2(r) +\kappa r h'_2(r)+ (1+\kappa) W_3(r) + (1+ \kappa) r W'_3(r)+ (1 + \kappa) r^2 W''_3(r) = 0 $ $ \tag{2} W_3 +r W'_3+ (1+\kappa) h_2 + (1 +\kappa) r h'_2(r)+ (1+ \kappa)r^2 h''_2 = 0$ I'm not looking for a solution to these equations per se; these are just the equations I have, so I thought I'd put them up as a reference. I need to solve for the functions $h_2$ and $W_3$ on the domain $(0,R)$. I have the following boundary conditions: $\tag {3} W_3(r \rightarrow 0) = r^3, \quad W_3(r = R) = 0 $ $\tag{4} h_2(r \rightarrow 0) = r^3, \quad h_2 (r = R) = h_{2_{outside}}(r=R) $ where $h_{2_{outside}}$ is a known solution on the domain $(R,\infty)$. I also have the following condition: $ \tag {5} h_2 (r = R) h_{2_{outside}}'(r=R) - h_2'(r=R)h_{2_{outside}}(r=R) =0 $ I am also confused about this system of equations because it would appear that I have one too many boundary conditions, unless the conditions on $W_3$ and $h_2$ going to zero are redundant, i.e. enforcing one will enforce the other just to keep equations (1) and (2) self-consistent. However, this issue may be digressing from my main question. The main question is: given equations (1) and (2), what numerical method can I use to find $W_3$, $h_2$, and $\kappa$?

1ST EDIT. I should mention that I've already tried solving these equations with a spectral approach, but have so far been unsuccessful. So I'm mainly looking for new ways to attack the problem. My preference would be finite-difference methods since I know those, but I'm open to learning new methods as well.
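One finite-difference route worth noting: since $\kappa$ enters the equations linearly, one can discretize, collect the $\kappa$-free terms into a matrix $A$ and the $\kappa$-multiplied terms into a matrix $B$, and hand $A\,u = \kappa B\,u$ to a generalized eigensolver such as `scipy.linalg.eig`. As a minimal scalar analogue (not your equations), here is the standard FD eigenproblem for $-u'' = \kappa u$ on $(0,\pi)$ with homogeneous Dirichlet data, whose exact eigenvalues are $1, 4, 9, \dots$:

```python
import numpy as np

# Second-order finite-difference matrix for -u'' on (0, pi), u(0) = u(pi) = 0.
N = 200
h = np.pi / (N + 1)
A = (np.diag(2.0 * np.ones(N))
     + np.diag(-np.ones(N - 1), 1)
     + np.diag(-np.ones(N - 1), -1)) / h**2

# Discrete spectrum approximates the exact eigenvalues 1, 4, 9, ...
kappa = np.sort(np.linalg.eigvalsh(A))
```

For a coupled system the unknown vector stacks the grid values of both functions, and the extra interface condition (5) can be built into the last rows of $A$ and $B$.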
Condition for Cartesian Product Equivalent to Associated Cardinal Number

Theorem

Let $\left|{S}\right|$ denote the cardinal number of $S$. Then: $S \times T \sim \left|{S \times T}\right| \iff S \sim \left|{S}\right| \land T \sim \left|{T}\right|$ where $S \times T$ denotes the cartesian product of $S$ and $T$.

Proof

Necessary Condition

If $S \times T \sim \left|{S \times T}\right|$, then there is a mapping $f$ such that: $f : S \times T \to \left|{S \times T}\right|$ is a bijection. Since $f$ is a bijection, it follows that $S$ is equivalent to the image of $S \times \left\{{x}\right\}$ under $f$, where $x \in T$. By Condition for Set Equivalent to Cardinal Number, it follows that $S \sim \left|{S}\right|$. Similarly, $T \sim \left|{T}\right|$. $\Box$

Sufficient Condition

Suppose $S \sim \left|{S}\right|$ and $T \sim \left|{T}\right|$, and let $f: S \to \left|{S}\right|$ and $g: T \to \left|{T}\right|$ be bijections. Define the function $F$ to be: $\forall x \in S, y \in T: F \left({x, y}\right) = \left|{S}\right| \cdot g \left({y}\right) + f \left({x}\right)$ It follows that $F: S \times T \to \left|{S}\right| \cdot \left|{T}\right|$ is an injection. By Condition for Set Equivalent to Cardinal Number, it follows that $S \times T \sim \left|{S \times T}\right|$. $\blacksquare$
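A finite toy check of the construction (the sets are invented; `f` and `g` play the role of the bijections onto initial segments that the hypothesis provides): $F(x, y) = |S| \cdot g(y) + f(x)$ is a base-$|S|$ "digit" encoding of the pair, hence injective.

```python
# F(x, y) = |S| * g(y) + f(x) hits each value in {0, ..., |S|*|T| - 1} exactly once.
S = ["a", "b", "c"]
T = ["p", "q"]
f = {x: i for i, x in enumerate(S)}   # bijection S -> {0, 1, 2}
g = {y: i for i, y in enumerate(T)}   # bijection T -> {0, 1}
F = {(x, y): len(S) * g[y] + f[x] for x in S for y in T}
```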
Complete Linearly Ordered Space is Compact Theorem Let $\left({X, \preceq, \tau}\right)$ be a linearly ordered space. Let $\left({X, \preceq}\right)$ be a complete lattice. Then $\left({X, \tau}\right)$ is compact. Proof By Compactness from Basis, it is sufficient to prove that an open cover of $X$ consisting of open intervals and rays has a finite subcover. Let $\mathcal A$ be an open cover of $X$ consisting of open rays and open intervals. Let $m = \inf X$. This infimum exists because $\left({X, \preceq}\right)$ is complete. Let $C$ be the set of all $x \in X$ such that a finite subset of $\mathcal A$ covers $\left[{m \,.\,.\, x}\right]$. $C$ is non-empty because $m \in C$. Let $s = \sup C$. Since $\mathcal A$ covers $X$, there is a $U \in \mathcal A$ such that $s \in U$. Then we must have $U = \left({a \,.\,.\, b}\right)$, $U = {\dot\uparrow} a$, or $U = {\dot\downarrow} b$. Suppose that $U = \left({a \,.\,.\, b}\right)$. Let $V \in \mathcal A$ contain $b$. Since $a \prec s$, $a$ is not an upper bound of $C$, so by the definition of supremum there is an $x \succ a$ such that there is a finite $\mathcal F \subseteq \mathcal A$ that covers $\left[{m \,.\,.\, x}\right]$. Then $\mathcal F \cup \left\{{U, V}\right\}$ covers $\left[{m \,.\,.\, b}\right]$, so $b \in C$; since $b \succ s$, this contradicts the fact that $s$ is an upper bound of $C$. Suppose next that $U = \dot\downarrow b$. Then for some $V \in \mathcal A$, $b \in V$. Then $\left[{m \,.\,.\, b}\right]$ is covered by $\left\{{U, V}\right\}$, so $b \in C$; since $b \succ s$, this again contradicts the fact that $s$ is an upper bound of $C$. Thus $U = \dot\uparrow a$. By the definition of supremum, $a$ is not an upper bound of $C$. So there is an $x \succ a$ such that there is a finite subset $\mathcal F$ of $\mathcal A$ that covers $\left[{m \,.\,.\, x}\right]$. Since $x \succ a$, $\mathcal F \cup \left\{{U}\right\}$ covers $X$, and is thus a finite subcover drawn from $\mathcal A$. $\blacksquare$
This section explores the use of symmetry to determine selection rules. Here we derive an analytical expression for the transition dipole moment integral for the particle-in-a-box model. The result that the magnitude of this integral increases as the length of the box increases explains why the absorption coefficients of the longer cyanine dye molecules are larger. We use the transition moment integral and the trigonometric forms of the particle-in-a-box wavefunctions to get Equation \(\ref{4-27}\) for an electron making a transition from orbital \(i\) to orbital \(f\). \[\mu _T = \dfrac {-2e}{L} \int \limits _0^L x \sin \left (\dfrac {f \pi x}{L} \right ) \sin \left ( \dfrac {i \pi x }{L} \right ) dx \label {4-27}\] Exercise \(\PageIndex{1}\) Why is there a factor 2/L in Equation \(\ref{4-27}\)? What are the units associated with the dipole moment and the transition dipole moment? Simplify the integral by substituting \[\sin \psi \sin \theta = \dfrac {1}{2} [ \cos (\psi - \theta ) - \cos (\psi + \theta)] \label {4-28}\] and then use \[ \int \limits _0^L x \cos (ax) dx = \left[ \dfrac {1}{a^2} \cos (ax) + \dfrac {x}{a} \sin (ax) \right]^L_0 \label {4-29}\] where \(a\) is any nonzero constant. Then, when we define \[ \Delta n = f - i \text { and } n_T = f + i \label {4-30}\] we can integrate to produce \[\mu _T = \dfrac {-e}{L} {\left(\dfrac {L}{\pi}\right)}^2 \left[ \dfrac {1}{\Delta n^2} (\cos (\Delta n \pi) - 1) - \dfrac {1}{n^2_T} (\cos (n_T \pi) - 1) + \dfrac {1}{\Delta n} \sin (\Delta n \pi ) - \dfrac {1}{n_T} \sin (n_T \pi) \right] \label {4-31}\] Exercise \(\PageIndex{2}\) Show that if \(Δn\) is an even integer, then \(n_T\) must be an even integer and \(μ_T = 0\). Exercise \(\PageIndex{3}\) Show that if \(i\) and \(f\) are both even or both odd integers then \(Δn\) is an even integer and \(μ_T = 0\). 
Exercise \(\PageIndex{4}\) Show that if Δn is an odd integer, then \(n_T\) must be an odd integer and \(μ_T\) is given by Equation \(\ref{4-32}\). \[ \mu _T = \dfrac {-2eL}{\pi ^2} \left(\dfrac {1}{n^2_T} - \dfrac {1}{\Delta n^2} \right) = \dfrac {8eL}{\pi^2} \left( \dfrac {f\,i}{{(f^2 - i^2)}^2} \right) \label {4-32}\] Exercise \(\PageIndex{5}\) Show that the two expressions for the transition moment in Equation \(\ref{4-32}\) are in fact equivalent. Example \(\PageIndex{1}\) What is the value of the transition moment integral for transitions 1→3 and 2→4? SOLUTION For these two transitions, either \(i\) and \(f\) are both odd or they are both even integers. In either case, Δn and nT are even integers. The cosine of an even integer multiple of π is +1, so the cosine terms in Equation \(\ref{4-31}\) become (1-1) = 0. The sine terms are zero because the sine of an even integer multiple of π is zero. Therefore, μT = 0 for these transitions and they are forbidden. The same reasoning applies to any transitions that have both i and f as even or as odd integers. Exercise \(\PageIndex{1}\) What is the value of the transition moment for the n = 8 to f = 10 transition? Example \(\PageIndex{2}\) What is the value of the transition moment integral for transitions 1→2 and 2→3? SOLUTION For these two transitions Δn = 1 and nT = 3 and 5, respectively, all odd integers. The cosine of an odd-integer multiple of π is -1, so the cosine terms in Equation \(\ref{4-31}\) become (-1-1) = -2. The sine terms in Equation \(\ref{4-31}\) are zero because the sine of an odd integer multiple of π is zero. Therefore, \(μ_T\) has some finite value given by Equation \(\ref{4-32}\). The same reasoning is used to evaluate the transition moment integral for any transitions that have Δn and nT as odd integers, e.g. 2→7 and 3→8. In these cases Δn = 5 and nT = 9 and 11, respectively. Again the transition moment integral for each of these transitions is finite. 
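These selection-rule results are easy to spot-check numerically. The sketch below (a hand-rolled composite Simpson rule, in units where \(e = L = 1\), which are assumptions for illustration only) evaluates Equation \(\ref{4-27}\) directly: the even-\(Δn\) transitions come out as zero, and an allowed transition matches the closed form of Equation \(\ref{4-32}\).

```python
import math

def mu_T(i, f, n=2000):
    """Transition moment (-2e/L) * int_0^L x sin(f*pi*x/L) sin(i*pi*x/L) dx,
    in units where e = L = 1, evaluated with the composite Simpson rule."""
    L = 1.0
    g = lambda x: x * math.sin(f * math.pi * x / L) * math.sin(i * math.pi * x / L)
    h = L / n
    s = g(0) + g(L)
    for k in range(1, n):
        s += (4 if k % 2 else 2) * g(k * h)
    return -2.0 / L * (s * h / 3)

# Delta n even -> forbidden; Delta n odd -> allowed.
print(round(mu_T(1, 3), 6))   # ~0 (forbidden, Delta n = 2)
print(round(mu_T(2, 4), 6))   # ~0 (forbidden, Delta n = 2)

# Allowed 1 -> 2 transition: compare with |mu_T| = 8 e L f i / (pi^2 (f^2-i^2)^2).
analytic_12 = 8 * (1 * 2) / (math.pi**2 * (2**2 - 1**2)**2)
print(abs(abs(mu_T(1, 2)) - analytic_12) < 1e-6)  # True
```

Restoring units just multiplies every value by \(eL\), so the comparison between the numerical integral and Equation \(\ref{4-32}\) is unchanged.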
Exercise \(\PageIndex{2}\) Explain why one of the following transitions occurs with excitation by light and the other does not: i = 1 to f = 7 and i = 3 to f = 6. From Examples \(\PageIndex{1}\) and \(\PageIndex{2}\), we can formulate the selection rules for the particle-in-a-box model: Transitions are forbidden if Δn = f - i = an even integer. Transitions are allowed if Δn = f - i = an odd integer. In the next section we will see that these selection rules can be understood in terms of the symmetry of the wavefunctions. Through the evaluation of the transition moment integral, we can understand why the spectra of cyanine dyes are very simple. The spectrum for each dye consists only of a single peak because other transitions have very much smaller transition dipole moments. We also see that the longer molecules have the larger absorption coefficients because the transition dipole moment increases with the length of the molecule. Exercise \(\PageIndex{6}\) The lowest energy transition is from the HOMO to the LUMO, which were defined previously. Compute the value of the transition moment integral for the HOMO to LUMO transition \(E_3→E_4\) for a cyanine dye with 3 carbon atoms in the conjugated chain. What is the next lowest energy transition for a particle-in-a-box? Compute the value of the transition moment integral for the next lowest energy transition that is allowed for this dye. What are the quantum numbers for the energy levels associated with this transition? How does the probability of this transition compare in magnitude with that for 3→4? Contributors Adapted from "Quantum States of Atoms and Molecules" by David M. Hanson, Erica Harvey, Robert Sweeney, Theresa Julia Zielinski
1) Determine whether the function \(y=156(0.825)^t\) represents exponential growth, exponential decay, or neither. Explain. Answer exponential decay; the growth factor, \(0.825\), is between \(0\) and \(1\) 2) The population of a herd of deer is represented by the function \(A(t)=205(1.13)^t\) where \(t\) is given in years. To the nearest whole number, what will the herd population be after \(6\) years? 3) Find an exponential equation that passes through the points \((2,2.25)\) and \((5,60.75)\) Answer \(y=0.25(3)^x\) 4) Determine whether Table below could represent a function that is linear, exponential, or neither. If it appears to be exponential, find a function that passes through the points. x 1 2 3 4 f(x) 3 0.9 0.27 0.081 5) A retirement account is opened with an initial deposit of \(\$8,500\) and earns \(8.12\%\) interest compounded monthly. What will the account be worth in \(20\) years? Answer about \(\$42,888.18\) 6) Hsu-Mei wants to save \(\$5,000\) for a down payment on a car. To the nearest dollar, how much will she need to invest in an account now with \(7.5\%\) APR, compounded daily, in order to reach her goal in \(3\) years? 7) Does the equation \(y=2.294e^{-0.654t}\) represent continuous growth, continuous decay, or neither? Explain. Answer continuous decay; the growth rate is negative. 8) Suppose an investment account is opened with an initial deposit of \(\$10,500\) earning \(6.25\%\) interest, compounded continuously. How much will the account be worth after \(25\) years? 1) Graph the function \(f(x)=3.5(2)^x\). State the domain and range and give the \(y\)-intercept. Answer domain: all real numbers; range: all real numbers strictly greater than zero; \(y\)-intercept: \((0, 3.5)\); 2) Graph the function \(f(x)=4\left(\dfrac{1}{8}\right)^x\) and its reflection about the \(y\)-axis on the same axes, and give the \(y\)-intercept. 3) The graph of \(f(x)=6.5^x\) is reflected about the \(y\)-axis and stretched vertically by a factor of \(7\). What is the equation of the new function, \(g(x)\)? 
State its \(y\)-intercept, domain, and range. Answer \(g(x)=7(6.5)^{-x}\); \(y\)-intercept: \((0,7)\); domain: all real numbers; range: all real numbers strictly greater than zero 4) The graph below shows transformations of the graph of \(f(x)=2^x\). What is the equation for the transformation? 1) Rewrite \(\log_{17}(4913)=x\) as an equivalent exponential equation. Answer \(17^x=4913\) 2) Rewrite \(\ln(s)=t\) as an equivalent exponential equation. 3) Rewrite \(a^{-\tfrac{2}{5}}=b\) as an equivalent logarithmic equation. Answer \(\log_a b=-\dfrac{2}{5}\) 4) Rewrite \(e^{-3.5}=h\) as an equivalent logarithmic equation. 5) Solve for \(x\) if \(\log_{64}(x)=\dfrac{1}{3}\) by converting to exponential form. Answer \(x=64^{\tfrac{1}{3}}=4\) 6) Evaluate \(\log_5\left(\dfrac{1}{125}\right)\) without using a calculator. 7) Evaluate \(\log(0.000001)\) without using a calculator. Answer \(\log(0.000001)=-6\) 8) Evaluate \(\log(4.005)\) using a calculator. Round to the nearest thousandth. 9) Evaluate \(\ln\left(e^{-0.8648}\right)\) without using a calculator. Answer \(\ln\left(e^{-0.8648}\right)=-0.8648\) 10) Evaluate \(\ln \left ( \sqrt[3]{18} \right )\) using a calculator. Round to the nearest thousandth. 1) Graph the function \(g(x)=\log(7x+21)-4\) Answer 2) Graph the function \(h(x)=2\ln(9-3x)+1\) 3) State the domain, vertical asymptote, and end behavior of the function \(g(x)=\ln(4x+20)-17\) Answer Domain: \(x>-5\); Vertical asymptote: \(x=-5\); End behavior: as \(x\rightarrow -5^+,f(x)\rightarrow -\infty\) and as \(x\rightarrow \infty ,f(x)\rightarrow \infty\) 1) Rewrite \(\ln(7r\cdot 11st)\) in expanded form. 2) Rewrite \(\log_8(x)+\log_8(5)+\log_8(y)+\log_8(13)\) in compact form. Answer \(\log_8(65xy)\) 3) Rewrite \(\log_m\left(\dfrac{67}{83}\right)\) in expanded form. 4) Rewrite \(\ln(z)-\ln(x)-\ln(y)\) in compact form. Answer \(\ln \left(\dfrac{z}{xy}\right)\) 5) Rewrite \(\ln \left(\dfrac{1}{x^5}\right)\) as a product. 6) Rewrite \(-\log_y \left(\dfrac{1}{12}\right)\) as a single logarithm. 
Answer \(\log_y (12)\) 7) Use properties of logarithms to expand \(\log \left(\dfrac{r^2s^{11}}{t^{14}}\right)\) 8) Use properties of logarithms to expand \(\ln \left(2b\sqrt{\dfrac{b+1}{b-1}}\right)\) Answer \(\ln(2)+\ln(b)+\dfrac{\ln(b+1)-\ln(b-1)}{2}\) 9) Condense the expression \(5\ln(b)+\ln(c)+\dfrac{\ln(4-a)}{2}\) to a single logarithm. 10) Condense the expression \(3\log_7 v+6\log_7 w-\dfrac{\log_7 u}{3}\) to a single logarithm. Answer \(\log_7 \left (\dfrac{v^3 w^6}{\sqrt[3]{u}} \right )\) 11) Rewrite \(\log_3(12.75)\) to base 12) Rewrite \(5^{12x-17}=125\) as a logarithm. Then apply the change of base formula to solve for \(x\) using the common log. Round to the nearest thousandth. Answer \(x = \dfrac{\tfrac{\log (125)}{\log (5)}+17}{12}=\dfrac{5}{3}\) 1) Solve \(216^{3x}\cdot 216^x=36^{3x+2}\) by rewriting each side with a common base. 2) Solve \(\dfrac{125}{\left(\tfrac{1}{625}\right)^{-x-3}}=5^3\) by rewriting each side with a common base. Answer \(x=-3\) 3) Use logarithms to find the exact solution for \(7\cdot 17^{-9x}-7=49\). If there is no solution, write no solution. 4) Use logarithms to find the exact solution for \(3e^{6n-2}+1=-60\). If there is no solution, write no solution. Answer no solution 5) Find the exact solution for \(5e^{3x}-4=6\). If there is no solution, write no solution. 6) Find the exact solution for \(2e^{5x-2}-9=-56\). If there is no solution, write no solution. Answer no solution 7) Find the exact solution for \(5^{2x-3}=7^{x+1}\). If there is no solution, write no solution. 8) Find the exact solution for \(e^{2x}-e^x-110=0\). If there is no solution, write no solution. Answer \(x=\ln (11)\) 9) Use the definition of a logarithm to solve: \(-5\log_7(10n)=5\) 10) Use the definition of a logarithm to find the exact solution for \(9+6\ln(a+3)=33\) Answer \(a=e^4-3\) 11) Use the one-to-one property of logarithms to find an exact solution for \(\log_8(7)+\log_8(-4x)=\log_8(5)\). If there is no solution, write no solution. 
12) Use the one-to-one property of logarithms to find an exact solution for \(\ln(5)+\ln(5x^2-5)=\ln(56)\). If there is no solution, write no solution. Answer \(x=\pm \dfrac{9}{5}\) 13) The formula for measuring sound intensity in decibels \(D\) is defined by the equation \(D=10\log \left(\dfrac{I}{I_0}\right)\) where \(I\) is the intensity of the sound in watts per square meter and \(I_0=10^{-12}\) is the lowest level of sound that the average person can hear. How many decibels are emitted from a large orchestra with a sound intensity of \(6.3\cdot 10^{-3}\) watts per square meter? 14) The population of a city is modeled by the equation \(P(t)=256,114e^{0.25t}\) where \(t\) is measured in years. If the city continues to grow at this rate, how many years will it take for the population to reach one million? Answer about \(5.45\) years 15) Find the inverse function \(f^{-1}\) for the exponential function \(f(x)=2\cdot e^{x+1}-5\) 16) Find the inverse function \(f^{-1}\) for the logarithmic function \(f(x)=0.25\cdot \log_2(x^3+1)\) Answer \(f^{-1}(x)=\sqrt[3]{2^{4x}-1}\) For the exercises 1-2, use this scenario: A doctor prescribes \(300\) milligrams of a therapeutic drug that decays by about \(17\%\) each hour. 1) To the nearest minute, what is the half-life of the drug? 2) Write an exponential model representing the amount of the drug remaining in the patient’s system after \(t\) hours. Then use the formula to find the amount of the drug that would remain in the patient’s system after \(24\) hours. Round to the nearest hundredth of a milligram. Answer \(f(t)=300(0.83)^t;\; f(24)\approx 3.43 \text{ mg}\) For the exercises 3-4, use this scenario: A soup with an internal temperature of \(350^{\circ}\) F was taken off the stove to cool in a \(71^{\circ}\) F room. After fifteen minutes, the internal temperature of the soup was \(175^{\circ}\) F. 3) Use Newton’s Law of Cooling to write a formula that models this situation. 
4) How many minutes will it take the soup to cool to \(85^{\circ}\)? Answer about \(45\) minutes For the exercises 5-7, use this scenario: The equation \(N(t)=\dfrac{1200}{1+199e^{-0.625t}}\) models the number of people in a school who have heard a rumor after \(t\) days. 5) How many people started the rumor? 6) To the nearest tenth, how many days will it be before the rumor spreads to half the carrying capacity? Answer about \(8.5\) days 7) What is the carrying capacity? For the exercises 8-10, enter the data from each table into a graphing calculator and graph the resulting scatter plots. Determine whether the data from the table would likely represent a function that is linear, exponential, or logarithmic. 8) \(x\) \(f(x)\) 1 3.05 2 4.42 3 6.4 4 9.28 5 13.46 6 19.52 7 28.3 8 41.04 9 59.5 10 86.28 Answer exponential 9) \(x\) \(f(x)\) 0.5 18.05 1 17 3 15.33 5 14.55 7 14.04 10 13.5 12 13.22 13 13.1 15 12.88 17 12.69 20 12.45 10) Find a formula for an exponential equation that goes through the points \((-2,100)\) and \((0,4)\)Then express the formula as an equivalent equation with base \(e\). Answer \(y=4(0.2)^x; y=4e^{-1.609438x}\) 1) What is the carrying capacity for a population modeled by the logistic equation \(P(t)=\dfrac{250,000}{1+499e^{-0.45t}}\)? What is the initial population for the model? 2) The population of a culture of bacteria is modeled by the logistic equation \(P(t)=\dfrac{14,250}{1+29e^{-0.62t}}\) where \(t\) is in days. To the nearest tenth, how many days will it take the culture to reach \(75\%\) of its carrying capacity? Answer about \(7.2\) days For the exercises 3-5 use a graphing utility to create a scatter diagram of the data given in the table. Observe the shape of the scatter diagram to determine whether the data is best described by an exponential, logarithmic, or logistic model. Then use the appropriate regression feature to find an equation that models the data. When necessary, round values to five decimal places. 
3) \(x\) \(f(x)\) 1 409.4 2 260.7 3 170.4 4 110.6 5 74 6 44.7 7 32.4 8 19.5 9 12.7 10 8.1 4) \(x\) \(f(x)\) 0.15 36.21 0.25 28.88 0.5 24.39 0.75 18.28 1 16.5 1.5 12.99 2 9.91 2.25 8.57 2.75 7.23 3 5.99 3.5 4.81 Answer logarithmic; \(y=16.68718-9.71860\ln(x)\) 5) \(x\) \(f(x)\) 0 9 2 22.6 4 44.2 5 62.1 7 96.9 8 113.4 10 133.4 11 137.6 15 148.4 17 149.3 Practice Test 1) The population of a pod of bottlenose dolphins is modeled by the function \(A(t)=8(1.17)^t\) where \(t\) is given in years. To the nearest whole number, what will the pod population be after \(3\) years? Answer About \(13\) dolphins. 2) Find an exponential equation that passes through the points \((0,4)\) and \((2,9)\) 3) Drew wants to save \(\$2,500\) to go to the next World Cup. To the nearest dollar, how much will he need to invest in an account now with \(6.25\%\) APR, compounding daily, in order to reach his goal in \(4\) years? Answer \(\$1,947\) 4) An investment account was opened with an initial deposit of \(\$9,600\) and earns \(7.4\%\) interest, compounded continuously. How much will the account be worth after \(15\) years? 5) Graph the function \(f(x)=5(0.5)^{-x}\) and its reflection across the \(y\)-axis on the same axes, and give the \(y\)-intercept. Answer \(y\)-intercept: \((0,5)\) 6) The graph shows transformations of the graph of \(f(x)=\left(\dfrac{1}{2}\right)^x\). What is the equation for the transformation? 7) Rewrite \(\log_{8.5}(614.125)=a\) as an equivalent exponential equation. Answer \(8.5^a=614.125\) 8) Rewrite \(e^{\tfrac{1}{2}}=m\) as an equivalent logarithmic equation. 9) Solve for \(x\) by converting the logarithmic equation \(\log_{\tfrac{1}{7}}(x)=2\) to exponential form. Answer \(x=\left(\dfrac{1}{7}\right)^2=\dfrac{1}{49}\) 10) Evaluate \(\log(10,000,000)\) without using a calculator. 11) Evaluate \(\ln(0.716)\) using a calculator. Round to the nearest thousandth. 
Answer \(\ln(0.716)\approx -0.334\) 12) Graph the function 13) State the domain, vertical asymptote, and end behavior of the function Answer Domain: \(x<3\); Vertical asymptote: \(x=3\); End behavior: as \(x\rightarrow 3^-,f(x)\rightarrow -\infty\) and as \(x\rightarrow -\infty ,f(x)\rightarrow \infty\) 14) Rewrite \(\log(17a\cdot 2b)\) as a sum. 15) Rewrite \(\log_t(96)-\log_t(8)\) in compact form. Answer \(\log_t(12)\) 16) Rewrite \(\log_8 \left(a^{\tfrac{1}{b}}\right)\) as a product. 17) Use properties of logarithms to expand \(\ln(y^3z^2\cdot \sqrt[3]{x-4})\) Answer \(3\ln(y)+2\ln (z)+\dfrac{\ln(x-4)}{3}\) 18) Condense the expression \(4\ln(c)+\ln (d)+\dfrac{\ln(b+3)}{3}\) to a single logarithm. 19) Rewrite \(16^{3x-5}=1000\) as a logarithm. Then apply the change of base formula to solve for \(x\) using the natural log. Round to the nearest thousandth. Answer \(x = \dfrac{\tfrac{\ln(1000)}{\ln(16)}+5}{3}\approx 2.497\) 20) Solve \(\left ( \dfrac{1}{81} \right )^x\cdot \dfrac{1}{243}=\left ( \dfrac{1}{9} \right )^{-3x-1}\) by rewriting each side with a common base. 21) Use logarithms to find the exact solution for \(-9e^{10a-8}-5=-41\). If there is no solution, write no solution. Answer \(a=\dfrac{\ln(4)+8}{10}\) 22) Find the exact solution for \(10e^{4x+2}+5=56\). If there is no solution, write no solution. 23) Find the exact solution for \(-5e^{-4x-1}-4=64\). If there is no solution, write no solution. Answer no solution 24) Find the exact solution for \(2^{x-3}=6^{2x-1}\). If there is no solution, write no solution. 25) Find the exact solution for \(e^{2x}-e^x-72=0\). If there is no solution, write no solution. Answer \(x=\ln(9)\) 26) Use the definition of a logarithm to find the exact solution for \(4\log(2n)-7=-11\). 27) Use the one-to-one property of logarithms to find an exact solution for \(\log(4x^2-10)+\log(3)=\log(51)\). If there is no solution, write no solution. 
Answer \(x=\pm \dfrac{3\sqrt{3}}{2}\) 28) The formula for measuring sound intensity in decibels \(D\) is defined by the equation \(D=10\log\left ( \dfrac{I}{I_0} \right )\) where \(I\) is the intensity of the sound in watts per square meter and \(I_0=10^{-12}\) is the lowest level of sound that the average person can hear. How many decibels are emitted from a rock concert with a sound intensity of \(4.7\cdot 10^{-1}\) watts per square meter? 29) A radiation safety officer is working with \(112\) grams of a radioactive substance. After \(17\) days, the sample has decayed to \(80\) grams. Rounding to five significant digits, write an exponential equation representing this situation. To the nearest day, what is the half-life of this substance? Answer \(f(t)=112e^{-0.019792t}\); half-life: about \(35\) days 30) Write the formula found in the previous exercise as an equivalent equation with base \(e\). Express the exponent to five significant digits. 31) A bottle of soda with a temperature of \(71^{\circ}\) Fahrenheit was taken off a shelf and placed in a refrigerator with an internal temperature of \(35^{\circ}\) F. After ten minutes, the internal temperature of the soda was \(63^{\circ}\) F. Use Newton’s Law of Cooling to write a formula that models this situation. To the nearest degree, what will the temperature of the soda be after one hour? Answer \(T(t)=36e^{-0.025131t}+35\); \(T(60)\approx 43^{\circ}\) F 32) Enter the data from Table into a graphing calculator and graph the resulting scatter plot. Determine whether the data from the table would likely represent a function that is linear, exponential, or logarithmic. \(x\) \(f(x)\) 1 3 2 8.55 3 11.79 4 14.09 5 15.88 6 17.33 7 18.57 8 19.64 9 20.58 10 21.42 Answer logarithmic 33) The population of a lake of fish is modeled by the logistic equation \(P(t)=\dfrac{16,120}{1+25e^{-0.75t}}\) where \(t\) is time in years. To the nearest hundredth, how many years will it take the lake to reach \(80\%\) of its carrying capacity? 
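Logistic timing questions like the one above all invert the model the same way. This is a hedged sketch of that inversion (the helper name is mine); it is checked against the earlier bacteria-culture exercise, \(P(t)=\dfrac{14,250}{1+29e^{-0.62t}}\), whose stated answer is about \(7.2\) days.

```python
import math

def time_to_fraction(A, k, frac):
    """For a logistic model P(t) = C / (1 + A e^{-k t}), return the time at
    which P reaches frac * C. The carrying capacity C cancels out:
    1 + A e^{-k t} = 1/frac  =>  t = ln(A * frac / (1 - frac)) / k."""
    return math.log(A * frac / (1 - frac)) / k

# Bacteria culture: A = 29, k = 0.62; time to reach 75% of capacity.
t = time_to_fraction(A=29, k=0.62, frac=0.75)
print(round(t, 1))   # 7.2 days, matching the answer given earlier
```

The same call with `A=25, k=0.75, frac=0.8` answers the fish-population question.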
For the following exercises, use a graphing utility to create a scatter diagram of the data given in the table. Observe the shape of the scatter diagram to determine whether the data is best described by an exponential, logarithmic, or logistic model. Then use the appropriate regression feature to find an equation that models the data. When necessary, round values to five decimal places. 34) \(x\) \(f(x)\) 1 20 2 21.6 3 29.2 4 36.4 5 46.6 6 55.7 7 72.6 8 87.1 9 107.2 10 138.1 Answer exponential; \(y=15.10062(1.24621)^x\) 35) \(x\) \(f(x)\) 3 13.98 4 17.84 5 20.01 6 22.7 7 24.1 8 26.15 9 27.37 10 28.38 11 29.97 12 31.07 13 31.43 36) \(x\) \(f(x)\) 0 2.2 0.5 2.9 1 3.9 1.5 4.8 2 6.4 3 9.3 4 12.3 5 15 6 16.2 7 17.3 8 17.9 Answer logistic; \(y=\dfrac{18.41659}{1+7.54644e^{-0.68375x}}\)
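Newton's Law of Cooling answers like Practice Test problem 31 can be sanity-checked in a few lines. This sketch assumes the soda-bottle numbers stated there (initial \(71^{\circ}\) F, refrigerator \(35^{\circ}\) F, \(63^{\circ}\) F after ten minutes) and recovers both the decay constant and the one-hour temperature.

```python
import math

# Newton's Law of Cooling: T(t) = (T0 - Ts) * exp(-k t) + Ts,
# where Ts is the surrounding temperature and T0 the initial temperature.
Ts, T0 = 35.0, 71.0
# Solve 63 = 36 e^{-10k} + 35 for the decay constant k.
k = math.log((T0 - Ts) / (63.0 - Ts)) / 10.0

def T(t):
    return (T0 - Ts) * math.exp(-k * t) + Ts

print(round(k, 6))    # 0.025131, matching the stated model T(t) = 36e^{-0.025131t} + 35
print(round(T(60)))   # 43 (degrees F after one hour)
```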
Terms and Concepts 1. The strategy for establishing bounds for triple integrals is "from ________ to ________, then from ________ to ________ and then from ________ to ________." Answer: We integrate "from surface to surface, then from curve to curve, and then from point to point." 2. Give an informal interpretation of what \(\int\int\int_Q \,dV\) means. Answer: \(\int\int\int_Q \,dV\) = Volume of the solid region \(Q\) 3. Give two uses of triple integration. Answer: To compute the total mass or average density of a solid object given a density function, or to compute the average temperature in a solid region or object. 4. If an object has a constant density \(\delta\) and a volume \(V\), what is its mass? Answer: Its mass is \(\delta V\). Volume of Solid Regions In Exercises 5-8, two surfaces \(f_1(x,y)\) and \(f_2(x,y)\) and a region \(R\) in the \(xy\)-plane are given. Set up and evaluate the triple integral that represents the volume between these surfaces over \(R\). 5. \(f_1(x,y) = 8-x^2-y^2,\,f_2(x,y) =2x+y;\) \(R\) is the square with corners \((-1,-1)\) and \((1,1)\). Answer: V = \(\displaystyle \int_{-1}^{1}\int_{-1}^{1}\int_{2x+y}^{8-x^2-y^2} \,dz\,dy\,dx\quad\) \(=\quad\dfrac{88}{3}\,\text{units}^3\) 6. \(f_1(x,y) = x^2+y^2,\,f_2(x,y) =-x^2-y^2;\) \(R\) is the square with corners \((0,0)\) and \((2,3)\). 7. \(f_1(x,y) = \sin x \cos y,\,f_2(x,y) =\cos x \sin y +2;\) \(R\) is the triangle with corners \((0,0), \,(\pi , 0)\) and \((\pi,\pi)\). Answer: V = \(\displaystyle \int_{0}^{\pi}\int_{0}^{x}\int_{\sin x\cos y}^{\cos x\sin y + 2} \,dz\,dy\,dx\quad\) \(=\quad\left(\pi^2 - \pi\right)\,\text{units}^3\quad\) \(\approx 6.72801\,\text{units}^3\) 8. \(f_1(x,y) = 2x^2+2y^2+3,\,f_2(x,y) =6-x^2-y^2;\) \(R\) is the circle \(x^2+y^2=1\). In Exercises 9-16, a domain \(D\) is described by its bounding surfaces, along with a graph. 
Set up the triple integral that gives the volume of \(D\) in the indicated order(s) of integration, and evaluate the triple integral to find this volume. 9. \(D\) is bounded by the coordinate planes and \(z=2-\frac{2}{3}x-2y\). Evaluate the triple integral with order \(dz\,dy\,dx\). Answer: V = \(\displaystyle \int_{0}^{3}\int_{0}^{1-\frac{x}{3}}\int_{0}^{2 - \frac{2}{3}x-2y} \,dz\,dy\,dx\quad\) \(=\quad 1\,\text{unit}^3\) 10. \(D\) is bounded by the planes \(y=0,y=2,x=1,z=0\) and \(z=(2-x)/2\). Evaluate the triple integral with order \(dx\,dy\,dz\). 11. \(D\) is bounded by the planes \(x=0,x=2,z=-y\) and by \(z=y^2/2\). Evaluate the triple integral with orders \(dy\,dz\,dx\) and \(dz\,dy\,dx\) to verify that you obtain the same volume either way. Answer: V = \(\displaystyle \int_{0}^{2}\int_{0}^{2}\int_{-\sqrt{2z}}^{-z} \,dy\,dz\,dx\quad\) \(=\quad \dfrac{4}{3}\,\text{unit}^3\) V = \(\displaystyle \int_{0}^{2}\int_{-2}^{0}\int_{\frac{y^2}{2}}^{-y} \,dz\,dy\,dx\quad\) \(=\quad \dfrac{4}{3}\,\text{unit}^3\) 12. \(D\) is bounded by the planes \(z=0,y=9, x=0\) and by \(z=\sqrt{y^2-9x^2}\). Do not evaluate any triple integral. Just set this one up. 13. \(D\) is bounded by the planes \(x=2,y=1,z=0\) and \(z=2x+4y-4\). Evaluate the triple integral with orders \(dz\,dy\,dx\) and \(dx\,dy\,dz\) to verify that you obtain the same volume either way. Answer: V = \(\displaystyle \int_{0}^{2}\int_{1-\frac{x}{2}}^{1}\int_{0}^{2x+4y-4} \,dz\,dy\,dx\quad\) \(=\quad\dfrac{4}{3}\,\text{units}^3\) V = \(\displaystyle \int_{0}^{4}\int_{\frac{z}{4}}^{1}\int_{(z-4y+4)/2}^{2} \,dx\,dy\,dz\quad\) \(=\quad\dfrac{4}{3}\,\text{units}^3\) 14. \(D\) is bounded by the plane \(z=2y\) and by \(y=4-x^2\). Evaluate the triple integral with order \(dz\,dy\,dx\). 15. \(D\) is bounded by the coordinate planes and \(y=1-x^2\) and \(y=1-z^2\). Do not evaluate any triple integral. Which order would be easier to evaluate: \(dz\,dy\,dx\) or \(dy\,dz\,dx\)? Explain why. 
Answer: V = \(\displaystyle \int_{0}^{1}\int_{0}^{1-x^2}\int_{0}^{\sqrt{1-y}} \,dz\,dy\,dx\quad\) V = \(\displaystyle \int_{0}^{1}\int_{0}^{x}\int_{0}^{1-x^2} \,dy\,dz\,dx + \displaystyle \int_{0}^{1}\int_{x}^{1}\int_{0}^{1-z^2} \,dy\,dz\,dx\) The first one is easier since it only requires evaluation of a single integral, although both can be evaluated fairly easily. 16. \(D\) is bounded by the coordinate planes and by \(z=1-y/3\) and \(z=1-x\). Evaluate the triple integral with order \(dx\,dy\,dz\). Evaluating General Triple Integrals In exercises 17 - 20, evaluate the triple integrals over the rectangular solid box \(B\). 17. \(\displaystyle \iiint_B (2x + 3y^2 + 4z^3) \space dV,\) where \(B = \big\{(x,y,z) \,|\, 0 \leq x \leq 1, \space 0 \leq y \leq 2, \space 0 \leq z \leq 3\big\}\) Answer: \(192\) 18. \(\displaystyle \iiint_B (xy + yz + xz) \space dV,\) where \(B = \big\{(x,y,z) \,|\, 1 \leq x \leq 2, \space 0 \leq y \leq 2, \space 1 \leq z \leq 3\big\}\) 19. \(\displaystyle \iiint_B (x \cos y + z) \space dV,\) where \(B = \big\{(x,y,z) \,|\, 0 \leq x \leq 1, \space 0 \leq y \leq \pi, \space -1 \leq z \leq 1\big\}\) Answer: \(0\) 20. \(\displaystyle \iiint_B (z \sin x + y^2) \space dV,\) where \(B = \big\{(x,y,z) \,|\, 0 \leq x \leq \pi, \space 0 \leq y \leq 1, \space -1 \leq z \leq 2\big\}\) In Exercises 21 - 24, evaluate the triple integral. 21. \(\displaystyle \int_{-\pi/2}^{\pi/2}\int_{0}^{\pi}\int_{0}^{\pi} (\cos x \sin y \sin z )\,dz\,dy\,dx\) Answer: \(8\) 22. \(\displaystyle \int_{0}^{1}\int_{0}^{x}\int_{0}^{x+y} (x+y+z )\,dz\,dy\,dx\) 23. \(\displaystyle \int_{0}^{\pi}\int_{0}^{1}\int_{0}^{z} (\sin (yz))\,dx\,dy\,dz\) Answer: \(\pi\) 24. \(\displaystyle \int_{\pi}^{\pi^2}\int_{x}^{x^3}\int_{-y^2}^{y^2} (\cos x \sin y \sin z )\,dz\,dy\,dx\) Average Value of a Function 25. 
Find the average value of the function \(f(x,y,z) = x + y + z\) over the parallelepiped determined by \(x = 0, \space x = 1, \space y = 0, \space y = 3, \space z = 0\), and \(z = 5\). Answer: \(\frac{9}{2}\) 26. Find the average value of the function \(f(x,y,z) = xyz\) over the solid \(E = [0,1] \times [0,1] \times [0,1]\) situated in the first octant. Approximating Triple Integrals 27. The midpoint rule for the triple integral \(\displaystyle \iiint_B f(x,y,z) \,dV\) over the rectangular solid box \(B\) is a generalization of the midpoint rule for double integrals. The region \(B\) is divided into subboxes of equal sizes and the integral is approximated by the triple Riemann sum \[\sum_{i=1}^l \sum_{j=1}^m \sum_{k=1}^n f(\bar{x_i}, \bar{y_j}, \bar{z_k}) \Delta V,\nonumber\] where \((\bar{x_i}, \bar{y_j}, \bar{z_k})\) is the center of the box \(B_{ijk}\) and \(\Delta V\) is the volume of each subbox. Apply the midpoint rule to approximate \[\iiint_B x^2 \,dV\nonumber\] over the solid \(B = \big\{(x,y,z) \,|\, 0 \leq x \leq 1, \space 0 \leq y \leq 1, \space 0 \leq z \leq 1 \big\}\) by using a partition of eight cubes of equal size. Round your answer to three decimal places. Answer: \(\displaystyle \iiint_B f(x,y,z) \,dV\quad\) \(\approx\quad\frac{5}{16} \approx 0.313\) 28. [T] a. Apply the midpoint rule to approximate \[\iiint_B e^{-x^2} \,dV\nonumber\] over the solid \(B = \{(x,y,z) | 0 \leq x \leq 1, \space 0 \leq y \leq 1, \space 0 \leq z \leq 1 \}\) by using a partition of eight cubes of equal size. Round your answer to three decimal places. b. Use a CAS to improve the above integral approximation in the case of a partition of \(n^3\) cubes of equal size, where \(n = 3,4, ..., 10\). Applications 29. Suppose that the temperature in degrees Celsius at a point \((x,y,z)\) of a solid \(E\) bounded by the coordinate planes and the plane \(x + y + z = 5\) is given by: \[T (x,y,z) = xz + 5z + 10\nonumber\] Find the average temperature over the solid. 
Answer: \(17.5^{\circ}\) C 30. Suppose that the temperature in degrees Fahrenheit at a point \((x,y,z)\) of a solid \(E\) bounded by the coordinate planes and the plane \(x + y + z = 5\) is given by: \[T(x,y,z) = x + y + xy\nonumber\] Find the average temperature over the solid. 31. If the charge density at an arbitrary point \((x,y,z)\) of a solid \(E\) is given by the function \(\rho (x,y,z)\), then the total charge inside the solid is defined as the triple integral \(\displaystyle \iiint_E \rho (x,y,z) \,dV.\) Assume that the charge density of the solid \(E\) enclosed by the paraboloids \(x = 5 - y^2 - z^2\) and \(x = y^2 + z^2 - 5\) is equal to the distance from an arbitrary point of \(E\) to the origin. Set up the integral that gives the total charge inside the solid \(E\). Answer: Total Charge inside the Solid \(E \quad=\quad\) \(\displaystyle \int_{-\sqrt{5}}^{\sqrt{5}}\int_{-\sqrt{5-y^2}}^{\sqrt{5-y^2}}\int_{y^2+z^2-5}^{5 - y^2 - z^2} \sqrt{x^2+y^2+z^2}\,dx\,dz\,dy\) 32. Show that the volume of a regular right hexagonal pyramid of edge length \(a\) is \(\dfrac{a^3 \sqrt{3}}{2}\) by using triple integrals. Contributors Problems 17 - 20 and 25 - 32 are from Section 15.4, OpenStax Calculus 3 by Gilbert Strang (MIT) and Edwin “Jed” Herman (Harvey Mudd) with many contributing authors. This content by OpenStax is licensed with a CC-BY-SA-NC 4.0 license. Download for free at http://cnx.org. Problems 1 - 16 and 21 - 24 are from Apex Calculus, Section 13.6. Edited by Paul Seeburger (Monroe Community College)
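The three-dimensional midpoint rule of problem 27 is straightforward to prototype. Below is a minimal sketch (pure Python; the integrand, unit cube, and partition size are the ones from the problem, while the helper name is mine) that reproduces the \(\frac{5}{16}\) approximation and shows convergence toward the exact value \(\frac{1}{3}\).

```python
def midpoint_triple(f, n):
    """Approximate the triple integral of f over [0,1]^3 using n^3 equal
    subboxes, sampling f at each subbox center (the 3-D midpoint rule)."""
    h = 1.0 / n          # subbox edge length
    dV = h ** 3          # volume of each subbox
    total = 0.0
    for i in range(n):
        for j in range(n):
            for k in range(n):
                x = (i + 0.5) * h
                y = (j + 0.5) * h
                z = (k + 0.5) * h
                total += f(x, y, z) * dV
    return total

f = lambda x, y, z: x * x
print(midpoint_triple(f, 2))    # 0.3125 = 5/16, as in problem 27
print(midpoint_triple(f, 10))   # closer to the exact value 1/3
```

Swapping in `lambda x, y, z: math.exp(-x * x)` and looping `n` from 3 to 10 handles problem 28 the same way.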
I came across John Duffield on Quantum Computing SE via this hot question. I was curious to see an account with 1 reputation and a question with hundreds of upvotes. It turned out that the reason why he has so little reputation despite a massively popular question is that he was suspended. May I ... @Nelimee Do we need to merge? Currently, there's just one question with "phase-estimation" and another question with "quantum-phase-estimation". Might we as well use just one tag? (say just "phase-estimation") @Blue 'merging', if I'm getting the terms right, is a specific single action that does exactly that and is generally preferable to editing tags on questions. Having said that, if it's just one question, it doesn't really matter although performing a proper merge is still probably preferable Merging is taking all the questions with a specific tag and replacing that tag with a different one, on all those questions, on a tag level, without permanently changing anything about the underlying tags @Blue yeah, you could do that. It generally requires votes, so it's probably not worth bothering when only one question has that tag @glS "Every hermitian matrix satisfy this property: more specifically, all and only Hermitian matrices have this property" ha? I thought it was only a subset of the set of valid matrices ^^ Thanks for the precision :) @Nelimee if you think about it it's quite easy to see. Unitary matrices are the ones with phases as eigenvalues, while Hermitians have real eigenvalues. Therefore, if a matrix is not Hermitian (does not have real eigenvalues), then its exponential will not have eigenvalues of the form $e^{i\phi}$ with $\phi\in\mathbb R$.
Although I'm not sure whether there could be exceptions for non-diagonalizable matrices (if $A$ is not diagonalizable, then the above argument doesn't work) This is an elementary question, but a little subtle so I hope it is suitable for MO. Let $T$ be an $n \times n$ square matrix over $\mathbb{C}$. The characteristic polynomial $T - \lambda I$ splits into linear factors like $T - \lambda_iI$, and we have the Jordan canonical form:$$ J = \begin... @Nelimee no! unitarily diagonalizable matrices are all and only the normal ones (satisfying $AA^\dagger =A^\dagger A$). For general diagonalizability, if I'm not mistaken, one characterization is that the sum of the dimensions of the eigenspaces has to match the total dimension @Blue I actually agree with Nelimee here that it's not that easy. You get $UU^\dagger = e^{iA} e^{-iA^\dagger}$, but if $A$ and $A^\dagger$ do not commute it's not straightforward that this doesn't give you an identity I'm getting confused. I remember there being some theorem about one-to-one mappings between unitaries and hermitians provided by the exponential, but it was some time ago and I may be confusing things in my head @Nelimee if there is a $0$ there then it becomes the normality condition. Otherwise it means that the matrix is not normal, therefore not unitarily diagonalizable, but still the product of exponentials is relatively easy to write @Blue you are right indeed. If $U$ is unitary then for sure you can write it as the exponential of a Hermitian (times $i$). This is easily proven because $U$ is ensured to be unitarily diagonalizable, so you can simply compute its logarithm through the eigenvalues. However, logarithms are tricky and multivalued, and there may be logarithms which are not diagonalizable at all. I've actually recently asked some questions on math.SE on related topics @Mithrandir24601 indeed, that was also what @Nelimee showed with an example above. I believe my argument holds for unitarily diagonalizable matrices.
If a matrix is only generally diagonalizable (so it's not normal) then it's not true also probably even more generally without $i$ factors so, in conclusion, it does indeed seem that $e^{iA}$ unitary implies $A$ Hermitian. It therefore also seems that $e^{iA}$ unitary implies $A$ normal, so that also my argument passing through the spectra works (though one has to show that $A$ is ensured to be normal) Now what we need to look for is 1) The exact set of conditions for which the matrix exponential $e^A$ of a complex matrix $A$, is unitary 2) The exact set of conditions for which the matrix exponential $e^{iA}$ of a real matrix $A$ is unitary @Blue fair enough - as with @Semiclassical I was thinking about it with the t parameter, as that's what we care about in physics :P I can possibly come up with a number of non-Hermitian matrices that gives unitary evolution for a specific t Or rather, the exponential of which is unitary for $t+n\tau$, although I'd need to check If you're afraid of the density of diagonalizable matrices, simply triangularize $A$. You get $$A=P^{-1}UP,$$ with $U$ upper triangular and the eigenvalues $\{\lambda_j\}$ of $A$ on the diagonal.Then$$\mbox{det}\;e^A=\mbox{det}(P^{-1}e^UP)=\mbox{det}\;e^U.$$Now observe that $e^U$ is upper ... There's 15 hours left on a bountied question, but the person who offered the bounty is suspended and his suspension doesn't expire until about 2 days, meaning he may not be able to award the bounty himself?That's not fair: It's a 300 point bounty. The largest bounty ever offered on QCSE. Let h...
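The direction everyone agreed on — $A$ Hermitian implies $e^{iA}$ unitary — is easy to illustrate numerically. The following sketch is my own (not from the chat); it exponentiates a random Hermitian matrix via the spectral theorem and checks unitarity:

```python
import numpy as np

# Numerical check: if A is Hermitian, then exp(iA) is unitary.
# For Hermitian A the spectral theorem gives A = V diag(w) V*, with w real
# and V unitary, so exp(iA) = V diag(exp(i w)) V*.

rng = np.random.default_rng(0)
M = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
A = (M + M.conj().T) / 2                      # Hermitian by construction

w, V = np.linalg.eigh(A)                      # real eigenvalues, unitary V
U = V @ np.diag(np.exp(1j * w)) @ V.conj().T  # U = exp(iA)

err = np.linalg.norm(U @ U.conj().T - np.eye(4))
print(err)  # of order machine epsilon: U is unitary
```

Note this check relies on `eigh`, which assumes Hermiticity; it says nothing about the subtler converse direction (non-normal $A$ with unitary $e^{iA}$) discussed above.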
Let's look at your original three equations. $$x+2y+z=2$$$$3x+8y+z=12$$$$4y+z=2$$ Now let's multiply by $\frac{2}{5}$, $\frac{1}{5}$, and $\frac{-3}{5}$ respectively. We get $$\frac{2x}{5}+\frac{4y}{5}+\frac{2z}{5}=\frac{4}{5}$$$$\frac{3x}{5}+\frac{8y}{5}+\frac{z}{5}=\frac{12}{5}$$$$\frac{-12y}{5}+\frac{-3z}{5}=\frac{-6}{5}$$ Now add the three equations together. We get $$x + 0y + 0z = 2$$ or $$x=2$$ Now multiply the same three equations by $\frac{-3}{10}$, $\frac{1}{10}$, and $\frac{1}{5}$ respectively. We get $$\frac{-3x}{10}+\frac{-6y}{10}+\frac{-3z}{10}=\frac{-6}{10}$$$$\frac{3x}{10}+\frac{8y}{10}+\frac{z}{10}=\frac{12}{10}$$$$\frac{4y}{5}+\frac{z}{5}=\frac{2}{5}$$ Summing $$0x + y + 0z = \frac{10}{10}$$ or $$y = 1$$ Now multiply the same three equations by $\frac{6}{5}$, $\frac{-2}{5}$, and $\frac{1}{5}$ respectively. We get $$\frac{6x}{5}+\frac{12y}{5}+\frac{6z}{5}=\frac{12}{5}$$$$\frac{-6x}{5}+\frac{-16y}{5}+\frac{-2z}{5}=\frac{-24}{5}$$$$\frac{4y}{5}+\frac{z}{5}=\frac{2}{5}$$ Summing $$0x + 0y + z = \frac{-10}{5}$$ or $$z = -2$$ And if you look at the numbers by which we multiplied, they are from $$\begin{pmatrix}2/5 & 1/5 & -3/5\\ -3/10 & 1/10 & 1/5\\ 6/5 & -2/5 & 1/5\end{pmatrix} = A^{-1}$$ We essentially did the matrix multiplication of $A \cdot A^{-1}$ to get $I$ manually when we could have just done $$\begin{pmatrix}x\\ y\\ z\end{pmatrix} = \begin{pmatrix}2/5 & 1/5 & -3/5\\ -3/10 & 1/10 & 1/5\\ 6/5 & -2/5 & 1/5\end{pmatrix}\begin{pmatrix}2\\ 12\\ 2\end{pmatrix}=\begin{pmatrix}2\\1\\-2\end{pmatrix}$$ and gotten the same answer. $A^{-1}$ is essentially the numbers by which we multiply the equations so we can add them together and get the solutions. Solving for the inverse is determining those numbers. Of course, if you're working with the equations, it would be easier to substitute in than to come up with all nine numbers. The convenient thing here is that we don't have to multiply $A\cdot A^{-1}$, as we already know the result. 
We can just multiply $A^{-1}$ by the right-hand side.
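The same computation can be checked in a few lines of NumPy (a sketch, independent of the hand calculation above): build $A$ and $b$, invert, and confirm that $A^{-1}b$ reproduces $(2, 1, -2)$ and that the rows of $A^{-1}$ are exactly the multipliers used above.

```python
import numpy as np

# The system x+2y+z=2, 3x+8y+z=12, 4y+z=2 in matrix form A @ v = b.
A = np.array([[1.0, 2.0, 1.0],
              [3.0, 8.0, 1.0],
              [0.0, 4.0, 1.0]])
b = np.array([2.0, 12.0, 2.0])

A_inv = np.linalg.inv(A)
v = A_inv @ b
print(v)  # [ 2.  1. -2.] -- the solution found by hand above

# The first row of A_inv is the first set of multipliers, (2/5, 1/5, -3/5).
print(A_inv[0])
```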
I am trying to solve a bunch of equations for the zeros of the derivative of an analytic function, and I would like to know if there exist methods that exploit this structure to provide better performance than the standard algorithms. At the moment I am using Mathematica's FindRoot function, which I understand relies on Newton's method. (I iterate over quasirandom seeds as described here.) My problem, however, has additional structure, of the form$$f'(z)=0$$for an analytic $f$, so maybe there is a better way to do this. So: are there methods that exploit this structure to provide better performance? To be a bit more explicit and provide some context in case it's useful: I am writing a Mathematica package to solve the saddle-point equations for high-order harmonic generation (as explained e.g. here), which are of the form$$\left\{\begin{aligned}(p(t,u)-A(u))^2+\gamma^2 & =0 \\(p(t,u)-A(t))^2+\gamma^2 & = \omega,\end{aligned}\right.\tag{1}$$where $p(t,u)=\frac{1}{i\varepsilon+t-u}\int_u^tA(\tau)\mathrm d\tau$ for $\varepsilon$ a small, positive constant, $\gamma>0$ is fixed, and $\omega>0$ is a parameter. Here $A(t)$ is the vector potential of a laser field and can be a three-dimensional vector, but a simple example is $A(t)=A_0\sin(t)$. These equations can be seen as looking for the zeros of both partial derivatives of $$S(t,u)=\gamma^2(t-u)-\omega t+\int_u^t(p(t,u)-A(\tau))^2\mathrm d\tau.$$$S$ can be assumed to be known explicitly but in some cases it is relatively awkward (with LeafCounts in the >6000 range). Some notes on the behaviour of this system: I am interested in a specific box in the complex plane for $t$ and $u$. In some regimes some roots can wander off the top and bottom of this box (i.e. to higher $|\mathrm{Im}(t)|$ than prescribed), and in those cases the roots would mostly be ignored anyway. It would be nice to have them as it simplifies the accounting, but they're not crucial.
Some roots which are mostly not of interest can also wander off the sides of the box, and those would definitely get ignored anyway. However, I have no guarantees on the number of roots in my chosen box, and it takes some fiddling after I've got the entire curves w.r.t. $\omega$ to design a function $g(t_s,u_s)$ that will take a root $(t_s,u_s)$ and tell me whether it's a keeper or not. The regime $A_0^2\gg\gamma^2$ is of particular interest. In this case the roots will mostly have real $t$ values, and as $\omega$ varies they will approach each other, have avoided crossings, and veer off into nonzero $\mathrm{Im}(t)$. When $A_0^2\gg\gamma^2$ these avoided crossings can be very tight and happen very quickly with respect to variations in $\omega$, but I'm OK with having to deal with those $\omega$ regions separately. At the moment I have an explicit expression for $S(t,u)$ which I differentiate symbolically with Mathematica, and this gives derivatives with LeafCount roughly 6 to 8 times those of $S$. They're not stuff I'd like to simplify by hand. I would appreciate methods which integrate well with Mathematica (either already implemented, existent as third-party tools, or relatively easy to implement), but if none are available I'm interested in any methods in this area. Sometimes I do have access to reasonable guesses for the roots (via e.g. a related, simpler $A(t)$, or using the results from the previous $\omega$) but I would rather avoid this or at least have methods which work even without such guesses. If a simpler model is desired, setting $\gamma=0$ can yield good estimates for $t$ (but not necessarily for $u$).
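To make the "Newton's method on $f'(z)=0$" setup concrete, here is a minimal sketch in Python rather than Mathematica; the toy function $f(z)=z^3-3z$ and the seed are illustrative assumptions, not the saddle-point system above. The point it illustrates: for analytic $f$ the second derivative is available exactly, so the Newton step needs no finite differencing.

```python
# Newton iteration for zeros of g(z) = f'(z), with f holomorphic.
# Toy example: f(z) = z^3 - 3z, so g(z) = 3z^2 - 3 with zeros at z = +1, -1.

def newton(g, dg, z0, tol=1e-12, maxit=50):
    """Plain Newton iteration in the complex plane."""
    z = z0
    for _ in range(maxit):
        step = g(z) / dg(z)
        z -= step
        if abs(step) < tol:
            break
    return z

g  = lambda z: 3 * z**2 - 3   # f'(z)
dg = lambda z: 6 * z          # f''(z), exact since f is analytic

root = newton(g, dg, 0.5 + 0.1j)
print(root)  # converges to 1 from this seed
```

In practice one would iterate this over many quasirandom seeds in the box of interest and deduplicate the resulting roots, as the question describes.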
Hello, I've never ventured into chat before but cfr suggested that I ask in here about a better name for the quiz package that I am getting ready to submit to ctan (tex.stackexchange.com/questions/393309/…). Is something like latex2quiz too audacious? Also, is anyone able to answer my questions about submitting to ctan, in particular about the format of the zip file and putting a configuration file in $TEXMFLOCAL/scripts/mathquiz/mathquizrc Thanks. I'll email first but it sounds like a flat file with a TDS included is the right approach. (There are about 10 files for the package proper and the rest are for the documentation -- all of the images in the manual are auto-generated from "example" source files. The zip file is also auto generated so there's no packaging overhead...) @Bubaya I think luatex has a command to force “cramped style”, which might solve the problem. Alternatively, you can lower the exponent a bit with f^{\raisebox{-1pt}{$\scriptstyle(m)$}} (modify the -1pt if need be). @Bubaya (gotta go now, no time for followups on this one …) @egreg @DavidCarlisle I already tried to avoid ascenders. Consider this MWE: \documentclass[10pt]{scrartcl}\usepackage{lmodern}\usepackage{amsfonts}\begin{document}\noindentIf all indices are even, then all $\gamma_{i,i\pm1}=1$.In this case the $\partial$-elementary symmetric polynomialsspecialise to those from at $\gamma_{i,i\pm1}=1$,which we recognise as the ordinary elementary symmetric polynomials $\varepsilon^{(n)}_m$.The induction formula from indeed gives\end{document} @PauloCereda -- okay. poke away. (by the way, do you know anything about glossaries? i'm having trouble forcing a "glossary" that is really an index, and should have been entered that way, into the required series style.) @JosephWright I'd forgotten all about it but every couple of months it sends me an email saying I'm missing out.
Oddly enough facebook and LinkedIn do the same, as did research gate before I spam filtered RG:-) @DavidCarlisle Regarding github.com/ho-tex/hyperref/issues/37, do you think that \textNFSSnoboundary would be okay as a name? I don't want to use the suggested \textPUnoboundary as there is a similar definition in pdfx/l8uenc.def. And textnoboundary isn't imho good either, as it is more or less only an internal definition and not meant for users. @UlrikeFischer I think it should be OK to use @, I just looked at puenc.def and for example \DeclareTextCompositeCommand{\b}{PU}{\@empty}{\textmacronbelow}% so @ needs to be safe @UlrikeFischer that said I'm not sure it needs to be an encoding specific command, if it is only used as \let\noboundary\zzznoboundary when you know the PU encoding is going to be in force, it could just be \def\zzznoboundary{..} couldn't it? @DavidCarlisle But puarenc.def is actually only an extension of puenc.def, so it is quite possible to do \usepackage[unicode]{hyperref}\input{puarenc.def}. And while I used a lot of @ in the chess encodings, since I saw you do \input{tuenc.def} in an example I'm not sure if it was a good idea ... @JosephWright it seems to be the day for merge commits in pull requests. Does github's "squash and merge" make it all into a single commit anyway so the multiple commits in the PR don't matter, or should I be doing the cherry picking stuff (not that the git history is so important here) github.com/ho-tex/hyperref/pull/45 (@UlrikeFischer) @JosephWright I really think I should drop all the generation of README and ChangeLog in html and pdf versions it failed there as the xslt is version 1 and I've just upgraded to a version 3 engine, and it's dropped 1.0 compatibility:-)
For exercises 1-10, consider points \(P(−1,3), Q(1,5),\) and \(R(−3,7)\). Determine the requested vectors and express each of them \(a.\) in component form and \(b.\) by using standard unit vectors. 1) \(\vec{PQ}\) Answer: \(a. \vec{PQ}=⟨2,2⟩ \quad b. \vec{PQ}=2\hat{\mathbf i}+2\hat{\mathbf j}\) 2) \(\vec{PR}\) 3) \(\vec{QP}\) Answer: \(a. \vec{QP}=⟨−2,−2⟩ \quad b. \vec{QP}=−2\hat{\mathbf i}−2\hat{\mathbf j}\) 4) \(\vec{RP}\) 5) \(\vec{PQ}+\vec{PR}\) Answer: \(a. \vec{PQ}+\vec{PR}=⟨0,6⟩ \quad b. \vec{PQ}+\vec{PR}=6\hat{\mathbf j}\) 6) \(\vec{PQ}−\vec{PR}\) 7) \(2\vec{PQ}−2\vec{PR}\) Answer: \(a. 2\vec{PQ}−2\vec{PR}=⟨8,−4⟩ \quad b. 2\vec{PQ}−2\vec{PR}=8\hat{\mathbf i}−4\hat{\mathbf j}\) 8) \(2\vec{PQ}+\frac{1}{2}\vec{PR}\) 9) The unit vector in the direction of \(\vec{PQ}\) Answer: \(a. ⟨\frac{1}{\sqrt{2}},\frac{1}{\sqrt{2}}⟩ \quad b. \frac{1}{\sqrt{2}}\hat{\mathbf i}+\frac{1}{\sqrt{2}}\hat{\mathbf j}\) 10) The unit vector in the direction of \(\vec{PR}\) 11) A vector \(\vec{\mathbf v}\) has initial point \((−1,−3)\) and terminal point \((2,1)\). Find the unit vector in the direction of \(\vec{\mathbf v}\). Express the answer in component form. Answer: \(⟨\frac{3}{5},\frac{4}{5}⟩\) 12) A vector \(\vec{\mathbf v}\) has initial point \((−2,5)\) and terminal point \((3,−1)\). Find the unit vector in the direction of \(\vec{\mathbf v}\). Express the answer in component form. 13) The vector \(\vec{\mathbf v}\) has initial point \(P(1,0)\) and terminal point \(Q\) that is on the y-axis and above the initial point. Find the coordinates of terminal point \(Q\) such that the magnitude of the vector \(\vec{\mathbf v}\) is \(\sqrt{5}\). Answer: \(Q(0,2)\) 14) The vector \(\vec{\mathbf v}\) has initial point \(P(1,1)\) and terminal point \(Q\) that is on the x-axis and left of the initial point.
Find the coordinates of terminal point \(Q\) such that the magnitude of the vector \(\vec{\mathbf v}\) is \(\sqrt{10}\). For exercises 15 and 16, use the given vectors \(\vec{\mathbf a}\) and \(\vec{\mathbf b}\). a. Determine the vector sum \(\vec{\mathbf a}+\vec{\mathbf b}\) and express it in both the component form and by using the standard unit vectors. b. Find the vector difference \(\vec{\mathbf a}−\vec{\mathbf b}\) and express it in both the component form and by using the standard unit vectors. c. Verify that the vectors \(\vec{\mathbf a}, \, \vec{\mathbf b},\) and \(\vec{\mathbf a}+\vec{\mathbf b}\), and, respectively, \(\vec{\mathbf a}, \, \vec{\mathbf b}\), and \(\vec{\mathbf a}−\vec{\mathbf b}\) satisfy the triangle inequality. d. Determine the vectors \(2\vec{\mathbf a}, −\vec{\mathbf b},\) and \(2\vec{\mathbf a}−\vec{\mathbf b}.\) Express the vectors in both the component form and by using standard unit vectors. 15) \(\vec{\mathbf a}=2\hat{\mathbf i}+\hat{\mathbf j}, \vec{\mathbf b}=\hat{\mathbf i}+3\hat{\mathbf j}\) Answer: \(a.\, \vec{\mathbf a}+\vec{\mathbf b}=⟨3,4⟩, \quad \vec{\mathbf a}+\vec{\mathbf b}=3\hat{\mathbf i}+4\hat{\mathbf j}\) \(b.\, \vec{\mathbf a}−\vec{\mathbf b}=⟨1,−2⟩, \quad \vec{\mathbf a}−\vec{\mathbf b}=\hat{\mathbf i}−2\hat{\mathbf j}\) \(c.\) Answers will vary \(d.\, 2\vec{\mathbf a}=⟨4,2⟩, \quad 2\vec{\mathbf a}=4\hat{\mathbf i}+2\hat{\mathbf j}, \quad −\vec{\mathbf b}=⟨−1,−3⟩, \quad −\vec{\mathbf b}=−\hat{\mathbf i}−3\hat{\mathbf j}, \quad 2\vec{\mathbf a}−\vec{\mathbf b}=⟨3,−1⟩, \quad 2\vec{\mathbf a}−\vec{\mathbf b}=3\hat{\mathbf i}−\hat{\mathbf j}\) 16) \(\vec{\mathbf a}=2\hat{\mathbf i}, \vec{\mathbf b}=−2\hat{\mathbf i}+2\hat{\mathbf j}\) 17) Let \(\vec{\mathbf a}\) be a standard-position vector with terminal point \((−2,−4)\). Let \(\vec{\mathbf b}\) be a vector with initial point \((1,2)\) and terminal point \((−1,4)\). 
Find the magnitude of vector \(−3\vec{\mathbf a}+\vec{\mathbf b}−4\hat{\mathbf i}+\hat{\mathbf j}.\) Answer: \(15\) 18) Let \(\vec{\mathbf a}\) be a standard-position vector with terminal point at \((2,5)\). Let \(\vec{\mathbf b}\) be a vector with initial point \((−1,3)\) and terminal point \((1,0)\). Find the magnitude of vector \(\vec{\mathbf a}−3\vec{\mathbf b}+14\hat{\mathbf i}−14\hat{\mathbf j}.\) 19) Let \(\vec{\mathbf u}\) and \(\vec{\mathbf v}\) be two nonzero vectors that are nonequivalent. Consider the vectors \(\vec{\mathbf a}=4\vec{\mathbf u}+5\vec{\mathbf v}\) and \(\vec{\mathbf b}=\vec{\mathbf u}+2\vec{\mathbf v}\) defined in terms of \(\vec{\mathbf u}\) and \(\vec{\mathbf v}\). Find the scalar \(λ\) such that vectors \(\vec{\mathbf a}+λ\vec{\mathbf b}\) and \(\vec{\mathbf u}−\vec{\mathbf v}\) are equivalent. Answer: \(λ=−3\) 20) Let \(\vec{\mathbf u}\) and \(\vec{\mathbf v}\) be two nonzero vectors that are nonequivalent. Consider the vectors \(\vec{\mathbf a}=2\vec{\mathbf u}−4\vec{\mathbf v}\) and \(\vec{\mathbf b}=3\vec{\mathbf u}−7\vec{\mathbf v}\) defined in terms of \(\vec{\mathbf u}\) and \(\vec{\mathbf v}\). Find the scalars \(α\) and \(β\) such that vectors \(α\vec{\mathbf a}+β\vec{\mathbf b}\) and \(\vec{\mathbf u}−\vec{\mathbf v}\) are equivalent. 21) Consider the vector \(\vec{\mathbf a}(t)=⟨\cos t, \sin t⟩\) with components that depend on a real number \(t\). As the number \(t\) varies, the components of \(\vec{\mathbf a}(t)\) change as well, depending on the functions that define them. a. Write the vectors \(\vec{\mathbf a}(0)\) and \(\vec{\mathbf a}(π)\) in component form. b. Show that the magnitude \(∥\vec{\mathbf a}(t)∥\) of vector \(\vec{\mathbf a}(t)\) remains constant for any real number \(t\). c. As \(t\) varies, show that the terminal point of vector \(\vec{\mathbf a}(t)\) describes a circle centered at the origin of radius \(1\). 
Answer: \(a.\, \vec{\mathbf a}(0)=⟨1,0⟩, \quad \vec{\mathbf a}(π)=⟨−1,0⟩\) \(b.\) Answers may vary \(c.\) Answers may vary 22) Consider vector \(\vec{\mathbf a}(x)=⟨x,\sqrt{1−x^2}⟩\) with components that depend on a real number \(x∈[−1,1]\). As the number \(x\) varies, the components of \(\vec{\mathbf a}(x)\) change as well, depending on the functions that define them. a. Write the vectors \(\vec{\mathbf a}(0)\) and \(\vec{\mathbf a}(1)\) in component form. b. Show that the magnitude \(∥\vec{\mathbf a}(x)∥\) of vector \(\vec{\mathbf a}(x)\) remains constant for any real number \(x\). c. As \(x\) varies, show that the terminal point of vector \(\vec{\mathbf a}(x)\) describes a circle centered at the origin of radius \(1\). 23) Show that vectors \(\vec{\mathbf a}(t)=⟨\cos t, \sin t⟩\) and \(\vec{\mathbf a}(x)=⟨x,\sqrt{1−x^2}⟩\) are equivalent for \(x=1\) and \(t=2kπ\), where \(k\) is an integer. Answer: Answers may vary 24) Show that vectors \(\vec{\mathbf a}(t)=⟨\cos t, \sin t⟩\) and \(\vec{\mathbf a}(x)=⟨x,\sqrt{1−x^2}⟩\) are opposite for \(x=1\) and \(t=π+2kπ\), where \(k\) is an integer. For exercises 25-28, find a vector \(\vec{\mathbf v}\) with the given magnitude and in the same direction as the vector \(\vec{\mathbf u}\). 25) \(\|\vec{\mathbf v}\|=7, \quad \vec{\mathbf u}=⟨3,4⟩\) Answer: \(\vec{\mathbf v}=⟨\frac{21}{5},\frac{28}{5}⟩\) 26) \(‖\vec{\mathbf v}‖=3,\quad \vec{\mathbf u}=⟨−2,5⟩\) 27) \(‖\vec{\mathbf v}‖=7,\quad \vec{\mathbf u}=⟨3,−5⟩\) Answer: \(\vec{\mathbf v}=⟨\frac{21\sqrt{34}}{34},−\frac{35\sqrt{34}}{34}⟩\) 28) \(‖\vec{\mathbf v}‖=10,\quad \vec{\mathbf u}=⟨2,−1⟩\) For exercises 29-34, find the component form of vector \(\vec{\mathbf u}\), given its magnitude and the angle the vector makes with the positive x-axis. Give exact answers when possible.
29) \(‖\vec{\mathbf u}‖=2, θ=30°\) Answer: \(\vec{\mathbf u}=⟨\sqrt{3},1⟩\) 30) \(‖\vec{\mathbf u}‖=6, θ=60°\) 31) \(‖\vec{\mathbf u}‖=5, θ=\frac{π}{2}\) Answer: \(\vec{\mathbf u}=⟨0,5⟩\) 32) \(‖\vec{\mathbf u}‖=8, θ=π\) 33) \(‖\vec{\mathbf u}‖=10, θ=\frac{5π}{6}\) Answer: \(\vec{\mathbf u}=⟨−5\sqrt{3},5⟩\) 34) \(‖\vec{\mathbf u}‖=50, θ=\frac{3π}{4}\) For exercises 35 and 36, vector \(\vec{\mathbf u}\) is given. Find the angle \(θ∈[0,2π)\) that vector \(\vec{\mathbf u}\) makes with the positive direction of the x-axis, in a counter-clockwise direction. 35) \(\vec{\mathbf u}=5\sqrt{2}\hat{\mathbf i}−5\sqrt{2}\hat{\mathbf j}\) Answer: \(θ=\frac{7π}{4}\) 36) \(\vec{\mathbf u}=−\sqrt{3}\hat{\mathbf i}−\hat{\mathbf j}\) 37) Let \(\vec{\mathbf a}=⟨a_1,a_2⟩, \vec{\mathbf b}=⟨b_1,b_2⟩\), and \(\vec{\mathbf c}=⟨c_1,c_2⟩\) be three nonzero vectors. If \(a_1b_2−a_2b_1≠0\), then show there are two scalars, \(α\) and \(β\), such that \(\vec{\mathbf c}=α\vec{\mathbf a}+β\vec{\mathbf b}.\) Answer: Answers may vary 38) Consider vectors \(\vec{\mathbf a}=⟨2,−4⟩, \vec{\mathbf b}=⟨−1,2⟩,\) and \(\vec{\mathbf c}=\vec{\mathbf 0}\). Determine the scalars \(α\) and \(β\) such that \(\vec{\mathbf c}=α\vec{\mathbf a}+β\vec{\mathbf b}\). 39) Let \(P(x_0,f(x_0))\) be a fixed point on the graph of the differentiable function \(f\) with a domain that is the set of real numbers. a. Determine the real number \(z_0\) such that point \(Q(x_0+1,z_0)\) is situated on the line tangent to the graph of \(f\) at point \(P\). b. Determine the unit vector \(\vec{\mathbf u}\) with initial point \(P\) and terminal point \(Q\). Answer: \(a. \quad z_0=f(x_0)+f′(x_0); \quad b. \quad \vec{\mathbf u}=\frac{1}{\sqrt{1+[f′(x_0)]^2}}⟨1,f′(x_0)⟩\) 40) Consider the function \(f(x)=x^4,\) where \(x∈R\). a. Determine the real number \(z_0\) such that point \(Q(2,z_0)\) is situated on the line tangent to the graph of \(f\) at point \(P(1,1)\). b.
Determine the unit vector \(\vec{\mathbf u}\) with initial point \(P\) and terminal point \(Q\). 41) Consider \(f\) and \(g\) two functions defined on the same set of real numbers \(D\). Let \(\vec{\mathbf a}=⟨x,f(x)⟩\) and \(\vec{\mathbf b}=⟨x,g(x)⟩\) be two vectors that describe the graphs of the functions, where \(x∈D\). Show that if the graphs of the functions \(f\) and \(g\) do not intersect, then the vectors \(\vec{\mathbf a}\) and \(\vec{\mathbf b}\) are not equivalent. 42) Find \(x∈R\) such that vectors \(\vec{\mathbf a}=⟨x, \sin x⟩\) and \(\vec{\mathbf b}=⟨x, \cos x⟩\) are equivalent. 43) Calculate the coordinates of point \(D\) such that \(ABCD\) is a parallelogram, with \(A(1,1), B(2,4)\), and \(C(7,4)\). Answer: \(D(6,1)\) 44) Consider the points \(A(2,1), B(10,6), C(13,4)\), and \(D(16,−2)\). Determine the component form of vector \(\vec{AD}\). 45) The speed of an object is the magnitude of its related velocity vector. A football thrown by a quarterback has an initial speed of \(70\) mph and an angle of elevation of \(30°\). Determine the velocity vector in mph and express it in component form. (Round to two decimal places.) Answer: \(⟨60.62,35⟩\) 46) A baseball player throws a baseball at an angle of \(30°\) with the horizontal. If the initial speed of the ball is \(100\) mph, find the horizontal and vertical components of the initial velocity vector of the baseball. (Round to two decimal places.) 47) A bullet is fired with an initial velocity of \(1500\) ft/sec at an angle of \(60°\) with the horizontal. Find the horizontal and vertical components of the velocity vector of the bullet. (Round to two decimal places.) Answer: The horizontal and vertical components are \(750\) ft/sec and \(1299.04\) ft/sec, respectively. 48) [T] A 65-kg sprinter exerts a force of \(798\) N at a \(19°\) angle with respect to the ground on the starting block at the instant a race begins. Find the horizontal component of the force. (Round to two decimal places.) 
49) [T] Two forces, a horizontal force of \(45\) lb and another of \(52\) lb, act on the same object. The angle between these forces is \(25°\). Find the magnitude and direction angle from the positive x-axis of the resultant force that acts on the object. (Round to two decimal places.) Answer: The magnitude of resultant force is \(94.71\) lb; the direction angle is \(13.42°\). 50) [T] Two forces, a vertical force of \(26\) lb and another of \(45\) lb, act on the same object. The angle between these forces is \(55°\). Find the magnitude and direction angle from the positive x-axis of the resultant force that acts on the object. (Round to two decimal places.) 51) [T] Three forces act on object. Two of the forces have the magnitudes \(58\) N and \(27\) N, and make angles \(53°\) and \(152°\), respectively, with the positive x-axis. Find the magnitude and the direction angle from the positive x-axis of the third force such that the resultant force acting on the object is zero. (Round to two decimal places.) Answer: The magnitude of the third vector is \(60.03\)N; the direction angle is \(259.38°\). 52) Three forces with magnitudes 80 lb, 120 lb, and 60 lb act on an object at angles of \(45°, 60°\) and \(30°\), respectively, with the positive x-axis. Find the magnitude and direction angle from the positive x-axis of the resultant force. (Round to two decimal places.) 53) [T] An airplane is flying in the direction of \(43°\) east of north (also abbreviated as \(N43E\) at a speed of \(550\) mph. A wind with speed \(25\) mph comes from the southwest at a bearing of \(N15E\). What are the ground speed and new direction of the airplane? Answer: The new ground speed of the airplane is \(572.19\) mph; the new direction is \(N41.82E.\) 54) [T] A boat is traveling in the water at \(30\) mph in a direction of \(N20E\) (that is, \(20°\) east of north). A strong current is moving at \(15\) mph in a direction of \(N45E\). What are the new speed and direction of the boat? 
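The force-composition answers in this group can be reproduced in a few lines. Here is a sketch for Exercise 49; placing the 45-lb force along the positive x-axis is an assumption of this sketch (any choice differing by a rotation gives the same magnitude):

```python
import math

# Exercise 49: resultant of a 45-lb force (along the x-axis, by convention)
# and a 52-lb force at 25 degrees to it.
theta = math.radians(25)
F1 = (45.0, 0.0)
F2 = (52 * math.cos(theta), 52 * math.sin(theta))

Rx, Ry = F1[0] + F2[0], F1[1] + F2[1]
magnitude = math.hypot(Rx, Ry)                 # |R|
angle = math.degrees(math.atan2(Ry, Rx))       # direction from positive x-axis

print(round(magnitude, 2), round(angle, 2))    # 94.71 lb at 13.42 degrees
```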
55) [T] A 50-lb weight is hung by a cable so that the two portions of the cable make angles of \(40°\) and \(53°\), respectively, with the horizontal. Find the magnitudes of the forces of tension \(\vec{\mathbf T_1}\) and \(\vec{\mathbf T_2}\) in the cables if the resultant force acting on the object is zero. (Round to two decimal places.) Answer: \(\|\vec{\mathbf T_1}\|=30.13 \, lb, \quad \|\vec{\mathbf T_2}\|=38.35 \, lb\) 56) [T] A 62-lb weight hangs from a rope that makes the angles of \(29°\) and \(61°\), respectively, with the horizontal. Find the magnitudes of the forces of tension \(\vec{\mathbf T_1}\) and \(\vec{\mathbf T_2}\) in the cables if the resultant force acting on the object is zero. (Round to two decimal places.) 57) [T] A 1500-lb boat is parked on a ramp that makes an angle of \(30°\) with the horizontal. The boat’s weight vector points downward and is a sum of two vectors: a horizontal vector \(\vec{\mathbf v_1}\) that is parallel to the ramp and a vertical vector \(\vec{\mathbf v_2}\) that is perpendicular to the inclined surface. The magnitudes of vectors \(\vec{\mathbf v_1}\) and \(\vec{\mathbf v_2}\) are the horizontal and vertical component, respectively, of the boat’s weight vector. Find the magnitudes of \(\vec{\mathbf v_1}\) and \(\vec{\mathbf v_2}\). (Round to the nearest integer.) Answer: \(\|\vec{\mathbf v_1}\|=750 \, lb, \quad \|\vec{\mathbf v_2}\|=1299 \, lb\) 58) [T] An 85-lb box is at rest on a \(26°\) incline. Determine the magnitude of the force parallel to the incline necessary to keep the box from sliding. (Round to the nearest integer.) 59) A guy-wire supports a pole that is \(75\) ft high. One end of the wire is attached to the top of the pole and the other end is anchored to the ground \(50\) ft from the base of the pole. Determine the horizontal and vertical components of the force of tension in the wire if its magnitude is \(50\) lb. (Round to the nearest integer.) 
Answer: The two horizontal and vertical components of the force of tension are \(28\) lb and \(42\) lb, respectively. 60) A telephone pole guy-wire has an angle of elevation of \(35°\) with respect to the ground. The force of tension in the guy-wire is \(120\) lb. Find the horizontal and vertical components of the force of tension. (Round to the nearest integer.) Contributors Gilbert Strang (MIT) and Edwin “Jed” Herman (Harvey Mudd) with many contributing authors. This content by OpenStax is licensed with a CC-BY-SA-NC 4.0 license. Download for free at http://cnx.org. Exercises and LaTeX edited by Paul Seeburger
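The guy-wire answer in Exercise 59 (28 lb horizontal, 42 lb vertical) follows from similar triangles, since the tension vector points along the wire; a quick numerical check:

```python
import math

# Exercise 59: 75-ft pole, wire anchored 50 ft from the base, 50-lb tension.
# The tension components are proportional to the horizontal and vertical
# runs of the wire, scaled by tension / wire length.
height, base, tension = 75.0, 50.0, 50.0
length = math.hypot(height, base)          # wire length

horizontal = tension * base / length
vertical = tension * height / length
print(round(horizontal), round(vertical))  # 28 42, matching the stated answer
```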
My assignment has this question: given a topological space $X$ with finitely many connected components, a function $f:X\rightarrow Y$ is continuous if and only if it is continuous on each component. Is it still true if $X$ has infinitely many components? The forward direction is okay but the reverse makes me confused, so how can I prove the reverse? Moreover, if $X$ has infinitely many connected components, I know it is not true but I can't find an example. Anyone help? Consider the set $X=\{1/n\mid n\in\mathbb N\}\cup\{0\}$ and the map $f:X\to\mathbb R$, $f(0)=1$, $f(1/n)=0$. Then $f$ is clearly continuous on each component since the components are just the elements of $X$. But $f$ is not continuous on $X$. The reason is that a set need not be open even if its intersection with each component is open, i.e. $X$ is not the topological sum of its components. It would work if all components were open, though. This is the case if there are only finitely many, but also if the space is locally connected. Edit: If a space $X$ has the property you mention, then a map $f$ from $X$ to an arbitrary space is continuous if and only if its composition with each inclusion map $i_C:C\hookrightarrow X$ is continuous for each component $C$. Then the space has the so-called final topology with respect to all those maps. It can be shown that the topology then consists of all sets $U$ such that $i_C^{-1}(U)$ is open in $C$ for each $i_C$. Since a component's preimage under each inclusion is either the whole component or empty, and thus open in every component, the component itself must be open. If you are not familiar yet with final topologies and the universal property, then there is also a more direct proof, which uses the idea of the counterexample above. The characteristic map $\chi_C$ of a component $C$ is constant on each component, hence it must be continuous on all of $X$. But this implies that $C=\chi_C^{-1}(1)$ is open. So $X$ has this special property iff each component is open.
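The counterexample above can also be illustrated numerically (a sketch; the function is exactly the one defined in the answer): the values $f(1/n)$ stay at $0$ while the points $1/n$ approach $0$, where $f(0)=1$, so sequential continuity fails at $0$.

```python
# X = {1/n : n in N} union {0}, with f(0) = 1 and f(1/n) = 0.
# Each point of X is its own connected component, so f is trivially
# continuous on every component, yet f is not continuous at 0.

def f(x):
    return 1 if x == 0 else 0

points = [1 / n for n in range(1, 10001)]  # a sequence in X converging to 0
values = {f(x) for x in points}

print(values, f(0))  # the values along the sequence are all 0, but f(0) = 1
```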
On page $36$ of "Partial Differential Equations," Evans defines $v(z) = \Phi(z-x) - \phi^x(z)$, where $\phi^x(z)$ is a corrector function satisfying
$$ \Delta \phi^x = 0 \quad \text{in } U \tag{1}$$
$$ \phi^x = \Phi(y-x) \quad \text{on } \partial U \tag{2}$$
He then writes
$$\lim_{\epsilon \rightarrow 0} \int_{\partial B(x,\epsilon)} \frac{\partial v}{\partial \nu} w \,dS = \lim_{\epsilon \rightarrow 0} \int_{\partial B(x,\epsilon)} \frac{\partial \Phi}{\partial \nu}(x-z)\, w(z)\, dS $$
I am rather lost about how he got this equation.
Euler's Equation for Vanishing Variation is Invariant under Coordinate Transformations

Theorem

Euler's Equation for Vanishing Variation is invariant under coordinate transformations.

Proof

Let $J\sqbrk y$ be an integral functional:

$\displaystyle J\sqbrk y=\int_a^b \map F {x,y,y'}\rd x$

Suppose we introduce new curvilinear coordinates $u,v$ such that $x=\map x {u,v}$, $y=\map y {u,v}$, where:

$\begin{vmatrix} \dfrac {\partial x}{\partial u} \paren {u,v} & \dfrac {\partial x}{\partial v} \paren {u,v}\\ \dfrac {\partial y}{\partial u} \paren {u,v} & \dfrac {\partial y}{\partial v} \paren {u,v} \end{vmatrix} \ne 0$

After this transformation, a curve $y=\map y x$ in coordinates $\paren {x,y}$ will be described by an equation $v=\map v u$ in coordinates $\paren {u,v}$.

Let $\mathscr J \sqbrk v$ be the functional defined by the following chain of operations:

$\displaystyle J\sqbrk y = \int_{x \mathop = a}^{x \mathop = b}\map F {x,y,y'}\rd x$

$\displaystyle = \int_{u \mathop = \alpha}^{u \mathop = \beta}\map F {\map x {u,v},\map y {u,v},\frac {\d y}{\d x} }\paren {\frac{\partial x}{\partial u}\rd u+\frac{\partial x}{\partial v}\frac{\d v}{\d u}\rd u}$

$\displaystyle = \int_{u \mathop = \alpha}^{u \mathop = \beta}\map F {\map x {u,v},\map y {u,v},\frac{ \frac{\partial y}{\partial u} \rd u + \frac{\partial y}{\partial v} \frac{\d v}{\d u} \rd u}{\frac{\partial x}{\partial u} \rd u+\frac{\partial x}{\partial v} \frac{\d v}{\d u} \rd u} } \paren {x_u+x_v v'} \rd u$

$\displaystyle = \int_{u \mathop = \alpha}^{u \mathop = \beta}\map F {\map x {u,v},\map y {u,v},\frac{y_u+y_v v'}{x_u+x_v v'} }\paren {x_u+x_v v'}\rd u$

$\displaystyle = \int_{u \mathop = \alpha}^{u \mathop = \beta}\map {\mathscr F} {u,v,v'}\rd u$

$\displaystyle = \mathscr J\sqbrk v$

Now consider the method of finite differences as it is used here. For the functional $J\sqbrk y$ we have the area $\Delta\sigma$ bounded by the functions $\map y x$ and $\map y x+\map h x$. 
Likewise, for the functional $\mathscr J\sqbrk v$ we have the area $\Delta\Sigma$ bounded by the functions $\map v u$ and $\map v u+\map\eta u$. Note that just as $\map v u$ can be constructed from $\map y x$, the same can be done for $\map\eta u$, starting from $\map h x$. As both areas approach zero, they become differential surface elements, which are related by the corresponding Jacobian determinant of the coordinate transformation:

$\rd\sigma=\begin{vmatrix} x_u & x_v \\ y_u & y_v \end{vmatrix}\rd\Sigma$

The last thing to be shown is that the variational derivative vanishes in the new coordinates. By definition, it has to satisfy:

$\displaystyle\lim_{\Delta\sigma\to 0}\frac{\Delta J\sqbrk {y;h} }{\Delta\sigma}=0$

Since $J\sqbrk y=\mathscr J \sqbrk v$ with respect to a specific set of coordinates, we have:

$\displaystyle \lim_{\Delta\sigma\to 0}\frac{\Delta J\sqbrk {y;h} }{\Delta\sigma} = \lim_{\Delta\Sigma\to 0}\frac{\Delta\mathscr J\sqbrk {v;\eta} }{\displaystyle\begin{vmatrix} x_u & x_v \\ y_u & y_v \end{vmatrix} \Delta \Sigma } = \begin{vmatrix} x_u & x_v \\ y_u & y_v \end{vmatrix}^{-1}\lim_{\Delta\Sigma\to 0}\frac{\Delta\mathscr J\sqbrk{v;\eta} }{\Delta\Sigma}$

We see that, provided the Jacobian determinant is nonvanishing, the condition of vanishing variational derivative is the same in both coordinate systems. The equivalence of Euler's equation in both sets of coordinates follows because the same method of finite differences is used in each case.

$\blacksquare$
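As a concrete sanity check of the transformation rule (not a substitute for the proof), one can verify with a computer algebra system that, for the particular integrand $F = y'^2 + y^2$ and the shear $x = u$, $y = u + v$ (Jacobian $= 1$), the Euler equation expressions in the two coordinate systems agree:

```python
import sympy as sp

u = sp.symbols('u')
v = sp.Function('v')

# Shear change of coordinates: x = u, y = u + v  (Jacobian determinant = 1)
x = u
y = u + v(u)
yprime = sp.diff(y, u) / sp.diff(x, u)         # dy/dx along the curve

# Transformed integrand  F(u, v, v') = F * (x_u + x_v v')  (the factor is 1 here)
F_curve = yprime**2 + y**2                     # F = y'^2 + y^2 on the curve
script_F = F_curve * sp.diff(x, u)

# Euler-Lagrange expression for the transformed integrand in (u, v):
vp = sp.Symbol('vp')
expr = script_F.subs(sp.diff(v(u), u), vp)     # treat v' as an independent symbol
EL_new = sp.diff(expr, v(u)).subs(vp, sp.diff(v(u), u)) \
         - sp.diff(sp.diff(expr, vp).subs(vp, sp.diff(v(u), u)), u)

# Euler-Lagrange expression for F in (x, y), evaluated at y = u + v(u):
# F_y - d/dx F_{y'} = 2y - 2y''
EL_old = 2*y - 2*sp.diff(y, u, 2)

print(sp.simplify(EL_new - EL_old))            # 0: the two equations agree
```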
We have had several questions about the relation of Cook and Karp reductions. It's clear that Cook reductions (polynomial-time Turing reductions) do not define the same notion of NP-completeness as Karp reductions (polynomial-time many-one reductions), which are usually used. In particular, Cook reductions cannot separate NP from co-NP even if P $\neq$ NP. So we should not use Cook reductions in typical reduction proofs. Now, students found a peer-reviewed work [1] that uses a Cook reduction to show that a problem is NP-hard. I did not give them full credit for the reduction they took from there, but I wonder. Since Cook reductions do define a similar notion of hardness as Karp reductions, I feel they should be able to separate P from NPC resp. co-NPC, assuming P $\neq$ NP. In particular, (something like) the following should be true: $\qquad\displaystyle L_1 \in \mathrm{NP}, L_2 \in \mathrm{NPC}_{\mathrm{Karp}}, L_2 \leq_{\mathrm{Cook}} L_1 \implies L_1 \in \mathrm{NPC}_{\mathrm{Karp}}$. The important nugget is that $L_1 \in \mathrm{NP}$, so the insensitivity noted above is circumvented. We would then "know" -- by definition of NPC -- that $L_2 \leq_{\mathrm{Karp}} L_1$. As has been noted by Vor, it's not that easy (notation adapted): Suppose that $L_1 \in \mathrm{NPC}_{\mathrm{Cook}}$, then by definition, for all languages $L_2 \in \mathrm{NPC}_{\mathrm{Karp}} \subseteq \mathrm{NP}$ we have $L_2 \leq_{\mathrm{Cook}} L_1$; and if the above implication is true then $L_1 \in \mathrm{NPC}_{\mathrm{Karp}}$ and thus $\mathrm{NPC}_{\mathrm{Karp}} = \mathrm{NPC}_{\mathrm{Cook}}$, which is still an open question. There may be differences between the two notions of NP-completeness other than the co-NP issue. Failing that, are there any known (non-trivial) criteria for when a Cook reduction implies Karp-NP-hardness, i.e. do we know predicates $P$ with $\qquad\displaystyle L_2 \in \mathrm{NPC}_{\mathrm{Karp}}, L_2 \leq_{\mathrm{Cook}} L_1, P(L_1,L_2) \implies L_1 \in \mathrm{NPC}_{\mathrm{Karp}}$? 
[1] L. Wang and T. Jiang, "On the Complexity of Multiple Sequence Alignment", Journal of Computational Biology (1994)
Definition:Characteristic Polynomial Definition Let $K$ be a field. Let $L / K$ be a finite field extension of $K$. Let $\alpha \in L$, and $\theta_\alpha$ be the linear operator: $\theta_\alpha: L \to L : \beta \mapsto \alpha \beta$ The characteristic polynomial of $\alpha$ with respect to the extension $L / K$ is: $\det \sqbrk {X I_L - \theta_\alpha}$ where: $\det$ denotes the determinant of a linear operator $X$ is an indeterminate $I_L$ is the identity mapping on $L$.
Mathematics > Differential Geometry Title: Spectral sections, twisted rho invariants and positive scalar curvature (Submitted on 23 Sep 2013 (v1), last revised 25 Apr 2014 (this version, v3)) Abstract: We had previously defined the rho invariant $\rho_{spin}(Y,E,H, g)$ for the twisted Dirac operator $\not\partial^E_H$ on a closed odd dimensional Riemannian spin manifold $(Y, g)$, acting on sections of a flat hermitian vector bundle $E$ over $Y$, where $H = \sum i^{j+1} H_{2j+1} $ is an odd-degree differential form on $Y$ and $H_{2j+1}$ is a real-valued differential form of degree ${2j+1}$. Here we show that it is a conformal invariant of the pair $(H, g)$. In this paper we express the defect integer $\rho_{spin}(Y,E,H, g) - \rho_{spin}(Y,E, g)$ in terms of spectral flows and prove that $\rho_{spin}(Y,E,H, g)\in \mathbb Q$, whenever $g$ is a Riemannian metric of positive scalar curvature. In addition, if the maximal Baum-Connes conjecture holds for $\pi_1(Y)$ (which is assumed to be torsion-free), then we show that $\rho_{spin}(Y,E,H, rg) =0$ for all $r\gg 0$, significantly generalizing our earlier results. These results are proved using the Bismut-Weitzenb\"ock formula, a scaling trick, the technique of noncommutative spectral sections, and the Higson-Roe approach. Submission history From: Varghese Mathai [view email] [v1] Mon, 23 Sep 2013 09:48:25 GMT (23kb) [v2] Sat, 2 Nov 2013 20:05:59 GMT (326kb,D) [v3] Fri, 25 Apr 2014 20:53:53 GMT (313kb,D)
In this post I showed how the information flows in a simple market for apples, but here I'm going to show what I hinted at in the earlier post: money is a tool to transfer information from the demand to the supply. Let's start with a simple system of $d$ buyers (green) and two suppliers of gray bundles of something. Each sale adds $+\log_{2} d$ bits of information [1] about the size of the market $d$ and the distribution of demand to a supplier. Or at least it would if the supplier had some knowledge of the size of $d$ in the first place. If there was only one supplier, that supplier could use the total amount of goods sold $s$ as an estimate, since the total amount of information would have to be less than or equal to the information coming from the buyers (the source), i.e. (dropping the base of the $\log$'s):

$$ \text{(1) } I_{s} = n_{s} \log s \leq n_{d} \log d = I_{d} $$

You can't get more information from a message than the message contains! However, there is more than one supplier, so each supplier only sees a fraction of the total supply and a fraction of the total demand. If this were all there is to it, then while each transaction could transfer $\log d$ bits of information, the supplier would have no idea. Enter money; now with each sale a supplier acquires a few inherently worthless tokens: How does this help? Well, because these tokens are available to everyone (either set up by some government or based on custom), the supplier has some idea of how many are out there (let's call it $m$): Now each sale is accompanied by $n_{m}$ tokens of money, so that each token transfers $+ \log m$ bits of information from the demand to the supply. This monetary system could potentially work well enough so that we can say the information captured by the supply is equal to the demand, thus equation (1) becomes:

$$ \text{(2) } I_{s} = n_{s} \log s = n_{m} \log m = n_{d} \log d = I_{d} $$

We call this ideal information transfer when we use the equal sign. 
If we take $n_{s} = S/dS$, where $dS$ is the smallest/infinitesimal unit [2] of supply, and likewise $n_{d} = D/dD$ for demand, and assume $D, S \gg 1$ (a very large market), we can write:

$$ \frac{S}{dS} \log s = \frac{D}{dD} \log d $$
$$ \frac{dD}{dS} = \frac{\log d}{\log s} \frac{D}{S} $$
$$ \text{(3) } \frac{dD}{dS} = \frac{1}{\kappa} \frac{D}{S} $$

where we've defined the information transfer index $\kappa \equiv \log s/\log d$. The left hand side of equation (3) can be identified with the price $p$ because it is proportional to $dM/dS$, the rate of change of money for the smallest increase in the quantity supplied. But wait, there's more! See, money can be exchanged for all kinds of goods and services: This means that the total demand for all goods and services ($AD$, or aggregate demand) is related to the total amount of money ($M$) so that, again assuming ideal information transfer:

$$ \text{(4) } P = \frac{dAD}{dM} = \frac{1}{\kappa} \frac{AD}{M} $$

where $P$ is an overall measure of the price of all goods and services ($AD$); it's called the price level. The rate of change of the price level over time is inflation. Now for the totally awesome part: we can solve equation (4) for aggregate demand in terms of the money supply. It's a differential equation that can be solved by integration. We rearrange equation (4) like this:

$$ \text{(5) }\frac{dAD}{AD} = \frac{1}{\kappa} \frac{dM}{M} $$

and integrate:

$$ \int_{AD_{0}}^{AD} \frac{dAD'}{AD'} = \frac{1}{\kappa} \int_{M_{0}}^{M} \frac{dM'}{M'} $$
$$ \log \frac{AD}{AD_{0}} = \frac{1}{\kappa} \log \frac{M}{M_{0}} $$
$$ \frac{AD}{AD_{0}} = \left( \frac{M}{M_{0}}\right)^{1/\kappa} $$

Using equation (4) again, we have:

$$ \text{(6) } P = \frac{1}{\kappa} \frac{AD_{0}}{M_{0}} \left( \frac{M}{M_{0}}\right)^{1/\kappa -1} $$

If $\kappa = 1/2$, then $P \sim M$, and the price level rises with the money supply, i.e. the quantity theory of money. Awesome, huh? 
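As a quick numerical sanity check of equation (4) and its solution: with $AD(M) = AD_0 (M/M_0)^{1/\kappa}$, the derivative $dAD/dM$ should equal $(1/\kappa)\, AD/M$. The parameter values below are arbitrary illustrations, not fitted values:

```python
# Check that AD(M) = AD0 * (M/M0)**(1/kappa) solves equation (4):
# dAD/dM should equal (1/kappa) * AD / M.
kappa, AD0, M0 = 0.6, 100.0, 10.0   # illustrative values only

def AD(M):
    return AD0 * (M / M0) ** (1.0 / kappa)

M = 25.0
h = 1e-6
dAD_dM = (AD(M + h) - AD(M - h)) / (2 * h)   # central-difference derivative
P = AD(M) / (kappa * M)                      # right-hand side of equation (4)

print(dAD_dM, P)   # the two agree to numerical precision
```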
Except the quantity theory doesn't really work that well ($\kappa \sim 0.6$ works better for the US and other countries, except Japan, where $\kappa \sim 1$ is a better model). But we left out a big piece: aggregate demand (e.g. NGDP) is measured in the same units as the money supply. And to top it off, the money supply is adjusted by the central bank based on economic conditions! This is the picture of the macroeconomy: What does it mean? It means that $\kappa = \log m/\log d$, the amount of information transferred from the demand to the supply relative to the amount of information transferred by money, is changing! If we assume this happens somewhat slowly [3], we can transform equation (6) into:

$$ \text{(7) } P = \alpha \frac{1}{\kappa (MB, NGDP)} \left( \frac{MB}{MB_{0}}\right)^{1/\kappa (MB, NGDP) -1} $$
$$ \kappa (MB, NGDP) = \frac{\log MB/c_{0}}{\log NGDP/c_{0}} $$

where we've replaced $M$ with the monetary base, $AD$ with NGDP, grouped a bunch of constants as $\alpha$, and introduced a new constant $c_{0}$ since the units of money are arbitrary. We can fit this model to the price level (it works best -- see the lower right graph -- when the monetary base is actually just the currency component of the monetary base): Pretty cool, huh? Using this model (the information transfer model, or ITM), we seem to get less inflation from a given increase in the money supply as the money supply and the economy get bigger. In fact, it can even go the other way -- in Japan an increase in the money supply (blue) decreases the price level (brown). The rest of this blog is devoted to exploring this model and the concept of information transfer in economics, even commenting on current events based on these ideas. Have a look around! 
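A small sketch of how the exponent in equation (6) flips the sign of the price-level response (all numbers are illustrative, not the fitted values behind the graphs):

```python
# Equation (6): P = (1/kappa) * (AD0/M0) * (M/M0)**(1/kappa - 1).
# For kappa < 1 the exponent is positive and P rises with M;
# for kappa > 1 the exponent is negative and P *falls* with M ("Japan" regime).
def price_level(M, kappa, AD0=100.0, M0=10.0):
    return (1.0 / kappa) * (AD0 / M0) * (M / M0) ** (1.0 / kappa - 1.0)

for kappa in (0.5, 0.6, 1.1):
    p1, p2 = price_level(10.0, kappa), price_level(20.0, kappa)
    print(kappa, p1, p2, "inflation" if p2 > p1 else "deflation")
```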
Thus, a sale transfers $\log_{2} d$ bits from the buyer (demand) to the supplier (supply). I will drop the subscript 2 for the binary log in the rest of the post. [2] We are looking at infinitesimal units to see how supply and demand change with respect to each other. For those unfamiliar with this concept, it forms the basis of calculus. And as a physicist, I am given to frequent abuses of notation. [3] Technically, integrating equation (5) as we did assumes $\kappa$ is independent of $AD$ and $M$. However, if we assume $\kappa$ doesn't change rapidly with $M$ or $NGDP$, then the integration can proceed as shown, but is only an approximation.
This is my attempt at an exercise from Folland's Real Analysis. Could someone evaluate it (particularly the second claim)? Let $\mathcal M$ be an infinite $\sigma$-algebra. Claim: $\mathcal M$ contains an infinite sequence of disjoint sets. Proof: Since $\mathcal M$ is infinite, it must contain a sequence $\left\{E_j\right\}_{j=1}^\infty$ such that $E_n\neq E_m$ whenever $n\neq m$. Let $F_j=E_j\setminus\bigcup_{k=1}^{j-1}E_k$. Then $\left\{F_j\right\}_{j=1}^\infty$ is an infinite sequence in $\mathcal M$ of disjoint sets. Claim: $\text{card}\left(\mathcal M\right)\geq\mathfrak c$. Proof: Let $C=\left\{0,1\right\}^\mathbb N$. Then $\text{card}\left(C\right)=\mathfrak c$. Define $f:C\to\mathcal M$ by $c\mapsto\bigcup\left\{F_j:c_j=1,j\in\mathbb N\right\}$. Then $f$ is well-defined and injective: suppose otherwise that $c,d\in C$ with $f(c)=f(d)$ and $c\neq d$. Let $k\in\mathbb N$ be such that $c_k\neq d_k$. Assume, without loss of generality, that $c_k=1$. Then, since the $F_j$'s are disjoint, there is an $x\in F_k\subset f(c)$ such that $x\notin f(d)$, contradicting the assumption that $f(c)=f(d)$.
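A finite sanity check of the injection in the second claim (note that, like the argument above, it relies on the $F_j$ being nonempty as well as pairwise disjoint; the sets below are stand-ins for the $F_j$):

```python
from itertools import product

# Finite analogue of c |-> union{F_j : c_j = 1}: five pairwise disjoint,
# *nonempty* stand-in sets F_j.  Nonemptiness is exactly what makes the
# disjointness argument produce a witness x in F_k.
F = [frozenset({j}) for j in range(5)]

def f(c):
    return frozenset().union(*(F[j] for j in range(len(F)) if c[j]))

images = [f(c) for c in product((0, 1), repeat=len(F))]
print(len(images), len(set(images)))  # equal counts => f is injective on {0,1}^5
```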
Well, you just check the definitions. $T:V\to W$ is a linear transformation if, for all $v,w\in V$ and $a\in\mathbb{R}$ (this works for general base fields), we have $$T(av+w)=aT(v)+T(w)$$ Let's look at (a): pick $(0,1,0)$ and $(0,-1,0)$. Then $T(0,0,0)=0$, but $T(0,1,0)+T(0,-1,0)=(4,-1,0)+(4,1,0)=(8,0,0)\not=(0,0,0)=T((0,1,0)+(0,-1,0))$. So this is clearly not linear. The issue is obviously the quadratic terms. As for (b), is $P_3$ all real polynomials of degree $\leq 3$, or all polynomials of degree exactly $3$ (if $ax^3+bx^2+cx+d\in P_3$, do you require $a\not=0$)? Edit: Actually, it shouldn't matter: let $p,q\in P_3$ and $a\in\mathbb{R}$. Then \begin{align*}T(ap+q)&=(ap+q)(2)+3x\cdot (ap+q)'(x)=ap(2)+q(2)+3x\cdot (ap'(x)+q'(x))\\&=ap(2)+q(2)+a\,3x\cdot p'(x)+3x\cdot q'(x)=a(p(2)+3x\cdot p'(x))+q(2)+3x\cdot q'(x)\\&=aT(p)+T(q)\end{align*} So $T$ is linear. What's going on is that in (a) the processes are all nonlinear: squaring isn't linear because you always get cross terms and you lose signs. However, in (b) you have evaluation at $2$, addition, multiplication by $3x$, and differentiation. All of these processes are linear, so combining them in this manner ought to be linear, and as you can see it follows.
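The computation in the Edit can be double-checked symbolically; here is a sketch with generic cubics (the coefficient names are arbitrary):

```python
import sympy as sp

x, a = sp.symbols('x a')
# Generic cubics p, q in P_3 with free symbolic coefficients
p0, p1, p2, p3, q0, q1, q2, q3 = sp.symbols('p0 p1 p2 p3 q0 q1 q2 q3')
p = p0 + p1*x + p2*x**2 + p3*x**3
q = q0 + q1*x + q2*x**2 + q3*x**3

def T(poly):
    # T(p) = p(2) + 3x * p'(x), the map from part (b)
    return poly.subs(x, 2) + 3*x*sp.diff(poly, x)

lhs = T(a*p + q)
rhs = a*T(p) + T(q)
print(sp.expand(lhs - rhs))   # 0: T is linear for arbitrary coefficients
```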
Let $Y$ be the random variable counting how many successes you have in $N$ trials. If $X_i$ is the Bernoulli random variable giving $1$ if the $i$th draw is a successs and $0$ otherwise, then $Y = \sum\limits_{i=1}^N X_i$. Thus, $Y$ is a sum of presumably-independent Bernoulli random variables, hence a binomial random variable with probability of success $p = \mathbb{P}(X_i = 1)$. For example, the mass function is $$\mathbb{P}(Y = s) = \mathbb{P}\left(\sum\limits_{i=1}^N X_i = s\right) = {{N}\choose{s}} \mathbb{P}(X_i = 1)^s\, \mathbb{P}(X_i = 0)^{N-s} = {{N} \choose{s}} p^s \,(1-p)^{N-s}$$for $0 \leq s \leq N.$ At this point, you probably know that the expected value of a Binomial random variable with parameters $N$ and $p$ is $\mathbb{E}[Y] = Np$, or you can go through the calculation yourself easily since the expected value is linear to find$$\mathbb{E}[Y] = \sum\limits_{i=1}^N \mathbb{E}[X_i] = \sum\limits_{i=1}^N \Big[ 0 \cdot \mathbb{P}(X_i = 0) + 1 \cdot \mathbb{P}(X_i =1) \Big] = N \, \mathbb{P}(X_i = 1)$$ So, what you have left is to find $p = \mathbb{P}(X_i = 1)$. For this, it is helpful to consider the following:$$\mathbb{P}(X_i = 1) = \sum\limits_{r = 0}^m \mathbb{P}(X_i = 1 \mid r \text{ draws})\,p_r $$with\begin{align*}\mathbb{P}(X_i =1 \mid r \text{ draws}) = \begin{cases}1 & r > k_2 \\1 - \frac{{k_2} \choose {r}}{{k_1 + k_2} \choose {r}} & r \leq k_2\end{cases}\end{align*}where the last case with $r \leq k_2$ used: $\mathbb{P}(X_i = 1 \mid r) = 1 - \mathbb{P}(X_i = 0 \mid r)$. From here, you can put these pieces together.
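The remaining step can be sketched in code. The parameters $k_1$, $k_2$ and the value of $r$ below are illustrative choices, not values from the original problem; "success" is interpreted as drawing at least one of the $k_1$ special items in $r$ draws without replacement:

```python
import math
import random

# p(r) = P(X_i = 1 | r draws): 1 if r > k2, else 1 - C(k2, r) / C(k1 + k2, r)
k1, k2 = 3, 7   # illustrative urn: 3 special items, 7 ordinary ones

def p_success(r):
    if r > k2:
        return 1.0
    return 1.0 - math.comb(k2, r) / math.comb(k1 + k2, r)

# Monte Carlo check of the formula for one value of r
r = 4
random.seed(0)
trials = 200_000
items = [1] * k1 + [0] * k2
hits = sum(any(random.sample(items, r)) for _ in range(trials))
print(p_success(r), hits / trials)   # the two estimates agree closely
```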
Alright, I have this group $\langle x_i, i\in\mathbb{Z}\mid x_i^2=x_{i-1}x_{i+1}\rangle$ and I'm trying to determine whether $x_ix_j=x_jx_i$ or not. I'm unsure there is enough information to decide this, to be honest. Nah, I have a pretty garbage question. Let me spell it out. I have a fiber bundle $p : E \to M$ where $\dim M = m$ and $\dim E = m+k$. Usually a normal person defines $J^r E$ as follows: for any point $x \in M$ look at local sections of $p$ over $x$. For two local sections $s_1, s_2$ defined on some nbhd of $x$ with $s_1(x) = s_2(x) = y$, say $J^r_p s_1 = J^r_p s_2$ if with respect to some choice of coordinates $(x_1, \cdots, x_m)$ near $x$ and $(x_1, \cdots, x_{m+k})$ near $y$ such that $p$ is projection to first $m$ variables in these coordinates, $D^I s_1(0) = D^I s_2(0)$ for all $|I| \leq r$. This is a coordinate-independent (chain rule) equivalence relation on local sections of $p$ defined near $x$. So let the set of equivalence classes be $J^r_x E$ which inherits a natural topology after identifying it with $J^r_0(\Bbb R^m, \Bbb R^k)$ which is space of $r$-order Taylor expansions at $0$ of functions $\Bbb R^m \to \Bbb R^k$ preserving origin. Then declare $J^r p : J^r E \to M$ is the bundle whose fiber over $x$ is $J^r_x E$, and you can set up the transition functions etc no problem so all topology is set. This becomes an affine bundle. 
Define the $r$-jet sheaf $\mathscr{J}^r_E$ to be the sheaf which assigns to every open set $U \subset M$ an $(r+1)$-tuple $(s = s_0, s_1, s_2, \cdots, s_r)$ where $s$ is a section of $p : E \to M$ over $U$, $s_1$ is a section of $dp : TE \to TU$ over $U$, $\cdots$, $s_r$ is a section of $d^r p : T^r E \to T^r U$ where $T^k X$ is the iterated $k$-fold tangent bundle of $X$, and the tuple satisfies the following commutation relation for all $0 \leq k < r$ $$\require{AMScd}\begin{CD} T^{k+1} E @>>> T^k E\\ @AAA @AAA \\ T^{k+1} U @>>> T^k U \end{CD}$$ @user193319 It converges uniformly on $[0,r]$ for any $r\in(0,1)$, but not on $[0,1)$, cause deleting a measure zero set won't prevent you from getting arbitrarily close to $1$ (for a non-degenerate interval has positive measure). The top and bottom maps are tangent bundle projections, and the left and right maps are $s_{k+1}$ and $s_k$. @RyanUnger Well I am going to dispense with the bundle altogether and work with the sheaf, is the idea. The presheaf is $U \mapsto \mathscr{J}^r_E(U)$ where $\mathscr{J}^r_E(U) \subset \prod_{k = 0}^r \Gamma_{T^k E}(T^k U)$ consists of all the $(r+1)$-tuples of the sort I described It's easy to check that this is a sheaf, because basically sections of a bundle form a sheaf, and when you glue two of those $(r+1)$-tuples of the sort I describe, you still get an $(r+1)$-tuple that preserves the commutation relation The stalk of $\mathscr{J}^r_E$ over a point $x \in M$ is clearly the same as $J^r_x E$, consisting of all possible $r$-order Taylor series expansions of sections of $E$ defined near $x$ possible. Let $M \subset \mathbb{R}^d$ be a compact smooth $k$-dimensional manifold embedded in $\mathbb{R}^d$. Let $\mathcal{N}(\varepsilon)$ denote the minimal cardinal of an $\varepsilon$-cover $P$ of $M$; that is for every point $x \in M$ there exists a $p \in P$ such that $\| x - p\|_{2}<\varepsilon$.... The same result should be true for abstract Riemannian manifolds. 
Do you know how to prove it in that case? I think there you really do need some kind of PDEs to construct good charts. I might be way overcomplicating this. If we define $\tilde{\mathcal H}^k_\delta$ to be the $\delta$-Hausdorff "measure" but instead of $diam(U_i)\le\delta$ we set $diam(U_i)=\delta$, does this converge to the usual Hausdorff measure as $\delta\searrow 0$? I think so by the squeeze theorem or something. this is a larger "measure" than $\mathcal H^k_\delta$ and that increases to $\mathcal H^k$ but then we can replace all of those $U_i$'s with balls, incurring some fixed error In fractal geometry, the Minkowski–Bouligand dimension, also known as Minkowski dimension or box-counting dimension, is a way of determining the fractal dimension of a set S in a Euclidean space Rn, or more generally in a metric space (X, d). It is named after the German mathematician Hermann Minkowski and the French mathematician Georges Bouligand.To calculate this dimension for a fractal S, imagine this fractal lying on an evenly spaced grid, and count how many boxes are required to cover the set. The box-counting dimension is calculated by seeing how this number changes as we make the grid... @BalarkaSen what is this ok but this does confirm that what I'm trying to do is wrong haha In mathematics, Hausdorff dimension (a.k.a. fractal dimension) is a measure of roughness and/or chaos that was first introduced in 1918 by mathematician Felix Hausdorff. Applying the mathematical formula, the Hausdorff dimension of a single point is zero, of a line segment is 1, of a square is 2, and of a cube is 3. That is, for sets of points that define a smooth shape or a shape that has a small number of corners—the shapes of traditional geometry and science—the Hausdorff dimension is an integer agreeing with the usual sense of dimension, also known as the topological dimension. However, formulas... Let $a,b \in \Bbb{R}$ be fixed, and let $n \in \Bbb{Z}$. 
If $[\cdot]$ denotes the greatest integer function, is it possible to bound $|[abn] - [a[bn]]|$ by a constant that is independent of $n$? Are there any nice inequalities for the greatest integer function? I am trying to show that $n \mapsto [abn]$ and $n \mapsto [a[bn]]$ are equivalent quasi-isometries of $\Bbb{Z}$...that's the motivation.
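For what it's worth, the bound $|[abn] - [a[bn]]| \le |a| + 1$ does hold: writing $bn = [bn] + \{bn\}$ gives $|abn - a[bn]| = |a|\{bn\} \le |a|$, and two reals at distance at most $|a|$ have floors differing by at most $|a| + 1$. A quick exact-arithmetic check (rational $a, b$ so floating-point rounding can't flip a floor; purely illustrative):

```python
import math
import random
from fractions import Fraction

# |[abn] - [a[bn]]| <= |a| + 1, independently of n
def defect(a, b, n):
    return abs(math.floor(a * b * n) - math.floor(a * math.floor(b * n)))

random.seed(1)
for _ in range(5_000):
    a = Fraction(random.randint(-500, 500), random.randint(1, 97))
    b = Fraction(random.randint(-500, 500), random.randint(1, 97))
    n = random.randint(-1000, 1000)
    assert defect(a, b, n) <= abs(a) + 1
print("bound |a| + 1 held on all samples")
```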
2018-09-11 04:29
Proprieties of FBK UFSDs after neutron and proton irradiation up to $6\times10^{15}$ n$_{eq}$/cm$^2$ / Mazza, S.M. (UC, Santa Cruz, Inst. Part. Phys.) ; Estrada, E. (UC, Santa Cruz, Inst. Part. Phys.) ; Galloway, Z. (UC, Santa Cruz, Inst. Part. Phys.) ; Gee, C. (UC, Santa Cruz, Inst. Part. Phys.) ; Goto, A. (UC, Santa Cruz, Inst. Part. Phys.) ; Luce, Z. (UC, Santa Cruz, Inst. Part. Phys.) ; McKinney-Martinez, F. (UC, Santa Cruz, Inst. Part. Phys.) ; Rodriguez, R. (UC, Santa Cruz, Inst. Part. Phys.) ; Sadrozinski, H.F.-W. (UC, Santa Cruz, Inst. Part. Phys.) ; Seiden, A. (UC, Santa Cruz, Inst. Part. Phys.) et al.
The properties of 60-$\mu$m thick Ultra-Fast Silicon Detectors (UFSD) manufactured by Fondazione Bruno Kessler (FBK), Trento (Italy) were tested before and after irradiation with minimum ionizing particles (MIPs) from a $^{90}$Sr $\beta$-source. [...]
arXiv:1804.05449. - 13 p. Preprint - Full text

2018-08-25 06:58
Charge-collection efficiency of heavily irradiated silicon diodes operated with an increased free-carrier concentration and under forward bias / Mandić, I (Ljubljana U. ; Stefan Inst., Ljubljana) ; Cindro, V (Ljubljana U. ; Stefan Inst., Ljubljana) ; Kramberger, G (Ljubljana U. ; Stefan Inst., Ljubljana) ; Mikuž, M (Ljubljana U. ; Stefan Inst., Ljubljana) ; Zavrtanik, M (Ljubljana U. ; Stefan Inst., Ljubljana)
The charge-collection efficiency of Si pad diodes irradiated with neutrons up to $8 \times 10^{15} \ \rm{n} \ cm^{-2}$ was measured using a $^{90}$Sr source at temperatures from -180 to -30°C. The measurements were made with diodes under forward and reverse bias. [...]
2004 - 12 p. - Published in : Nucl. Instrum. Methods Phys. 
Res., A 533 (2004) 442-453

2018-08-23 11:31
Effect of electron injection on defect reactions in irradiated silicon containing boron, carbon, and oxygen / Makarenko, L F (Belarus State U.) ; Lastovskii, S B (Minsk, Inst. Phys.) ; Yakushevich, H S (Minsk, Inst. Phys.) ; Moll, M (CERN) ; Pintilie, I (Bucharest, Nat. Inst. Mat. Sci.)
Comparative studies employing Deep Level Transient Spectroscopy and C-V measurements have been performed on recombination-enhanced reactions between defects of interstitial type in boron-doped silicon diodes irradiated with alpha particles. It has been shown that self-interstitial related defects which are immobile even at room temperature can be activated by very low forward currents at liquid-nitrogen temperatures. [...]
2018 - 7 p. - Published in : J. Appl. Phys. 123 (2018) 161576

2018-08-23 11:31
Characterization of magnetic Czochralski silicon radiation detectors / Pellegrini, G (Barcelona, Inst. Microelectron.) ; Rafí, J M (Barcelona, Inst. Microelectron.) ; Ullán, M (Barcelona, Inst. Microelectron.) ; Lozano, M (Barcelona, Inst. Microelectron.) ; Fleta, C (Barcelona, Inst. Microelectron.) ; Campabadal, F (Barcelona, Inst. Microelectron.)
Silicon wafers grown by the Magnetic Czochralski (MCZ) method have been processed in the form of pad diodes at Instituto de Microelectrònica de Barcelona (IMB-CNM) facilities. The n-type MCZ wafers were manufactured by Okmetic OYJ and they have a nominal resistivity of $1 \rm{k} \Omega cm$. [...]
2005 - 9 p. - Published in : Nucl. Instrum. Methods Phys. Res., A 548 (2005) 355-363

2018-08-23 11:31
Silicon detectors: From radiation hard devices operating beyond LHC conditions to characterization of primary fourfold coordinated vacancy defects / Lazanu, I (Bucharest U.) 
; Lazanu, S (Bucharest, Nat. Inst. Mat. Sci.)
The physics potential at future hadron colliders such as the LHC and its upgrades in energy and luminosity (Super-LHC and Very-LHC, respectively), as well as the requirements for detectors under the possible radiation-environment scenarios, are discussed in this contribution. Silicon detectors will be used extensively in experiments at these new facilities, where they will be exposed to high fluences of fast hadrons. The principal obstacle to long-time operation arises from bulk displacement damage in silicon, which acts as an irreversible process in the material and conduces to an increase of the leakage current of the detector, degrades the Signal/Noise ratio, and increases the effective carrier concentration. [...]
2005 - 9 p. - Published in : Rom. Rep. Phys. 57 (2005), no. 3, pp. 342-348 External link: RORPE

2018-08-22 06:27
Numerical simulation of radiation damage effects in p-type and n-type FZ silicon detectors / Petasecca, M (Perugia U. ; INFN, Perugia) ; Moscatelli, F (Perugia U. ; INFN, Perugia ; IMM, Bologna) ; Passeri, D (Perugia U. ; INFN, Perugia) ; Pignatel, G U (Perugia U. ; INFN, Perugia)
In the framework of the CERN-RD50 Collaboration, the adoption of p-type substrates has been proposed as a suitable means to improve the radiation hardness of silicon detectors up to fluences of $1 \times 10^{16} \rm{n}/cm^2$. In this work two numerical simulation models will be presented for p-type and n-type silicon detectors, respectively. [...]
2006 - 6 p. - Published in : IEEE Trans. Nucl. Sci. 53 (2006) 2971-2976

2018-08-22 06:27
Technology development of p-type microstrip detectors with radiation hard p-spray isolation / Pellegrini, G (Barcelona, Inst. Microelectron.) ; Fleta, C (Barcelona, Inst. Microelectron.) ; Campabadal, F (Barcelona, Inst. Microelectron.) ; Díez, S (Barcelona, Inst. Microelectron.) 
; Lozano, M (Barcelona, Inst. Microelectron.) ; Rafí, J M (Barcelona, Inst. Microelectron.) ; Ullán, M (Barcelona, Inst. Microelectron.)
A technology for the fabrication of p-type microstrip silicon radiation detectors using p-spray implant isolation has been developed at CNM-IMB. The p-spray isolation has been optimized in order to withstand a gamma irradiation dose up to 50 Mrad (Si), which represents the ionizing radiation dose expected in the middle region of the SCT-Atlas detector of the future Super-LHC during 10 years of operation. [...]
2006 - 6 p. - Published in : Nucl. Instrum. Methods Phys. Res., A 566 (2006) 360-365

Defect characterization in silicon particle detectors irradiated with Li ions / Scaringella, M (INFN, Florence ; U. Florence (main)) ; Menichelli, D (INFN, Florence ; U. Florence (main)) ; Candelori, A (INFN, Padua ; Padua U.) ; Rando, R (INFN, Padua ; Padua U.) ; Bruzzi, M (INFN, Florence ; U. Florence (main))
High Energy Physics experiments at future very high luminosity colliders will require ultra radiation-hard silicon detectors that can withstand fast hadron fluences up to $10^{16}$ cm$^{-2}$. In order to test the detectors' radiation hardness in this fluence range, long irradiation times are required at the currently available proton irradiation facilities. [...]
2006 - 6 p. - Published in : IEEE Trans. Nucl. Sci. 53 (2006) 589-594
In about half an hour, UK #ISS pass, starting at 21:58:57, duration 54 secs, visible, Magnitude -1.3 In about half an hour, UK #ISS pass, starting at 20:23:08, duration 228 secs, bright, Magnitude -2.5 In about half an hour, UK #ISS pass, starting at 21:11:12, duration 120 secs, bright, Magnitude -2.4 In about half an hour, UK #ISS pass, starting at 20:24:51, duration 91 secs, visible, Magnitude -1.7 It's well known that \( |\mathbb{N}| = |\mathbb{N}^2| \) and it's not hard to make an explicit 1-1 mapping. Ditto \( |\mathbb{Q}| = |\mathbb{Q}^2| \). We also know that \( |\mathbb{R}| = |\mathbb{R}^2| \). Your challenge, should you decide to accept it, is to give an explicit one-to-one mapping. Go ... @cerisara No problem ... happy to help (when I can!) @cerisara So in particular: \( \frac{f(x)}{x^2} \) @cerisara There is, you open with backslash round bra and close with backslash round ket. \ ( e^{i\pi}+1=0 \ ) \( e^{i\pi}+1=0 \) Yesterday evening I measured the temperature of a cup of hot water cooling over a 1 hour time period. I then tried to fit an exponential decay curve (to a background temperature of 21°C) and a quadratic curve to the data: the values of R² agreed to 4 dp! Oops, just noticed I forgot to tag @11011110 in the above toot, for making that diagram. Follow him! @PuffAddison Hiya ... sorry for the delay in replying. I think the thread has examples of LaTeX, and hand-drawn images of what I'm hoping for, and what I get instead. The example LaTeX code I'm using is here: A Mastodon instance for maths people. The kind of people who make \(\pi z^2 \times a\) jokes. Use \( and \) for inline LaTeX, and \[ and \] for display mode.
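The bijection challenge in the toots above has a classic concrete answer: the Cantor pairing function, sketched here on $\mathbb N = \{0, 1, 2, \dots\}$ (shift by 1 if your $\mathbb N$ starts at 1):

```python
import math

# Cantor pairing: an explicit bijection N x N -> N (with N = {0, 1, 2, ...});
# it enumerates the grid diagonal by diagonal.
def pair(x, y):
    return (x + y) * (x + y + 1) // 2 + y

def unpair(z):
    # recover the diagonal index w: the largest w with w(w+1)/2 <= z
    w = (math.isqrt(8 * z + 1) - 1) // 2
    t = w * (w + 1) // 2
    y = z - t
    return (w - y, y)

# round trip on a grid of pairs
assert all(unpair(pair(x, y)) == (x, y) for x in range(50) for y in range(50))
# every natural number below 100 is hit exactly once
assert sorted(pair(x, y) for x in range(100) for y in range(100)
              if pair(x, y) < 100) == list(range(100))
print("pair/unpair round-trips correctly")
```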
An RLC circuit is a simple electric circuit with a resistor, inductor and capacitor in it -- with resistance R, inductance L and capacitance C, respectively. It's one of the simplest circuits that displays non-trivial behavior. You can derive an equation for the behavior by using Kirchhoff's laws (conservation of the stocks and flows of electrons) and the properties of the circuit elements. Wikipedia does a fine job. You arrive at a solution for the current as a function of time that looks generically like this (not the most general solution, but a solution): $$ i(t) = A e^{\left( -\alpha + \sqrt{\alpha^{2} - \omega^{2}} \right) t} $$ with $\alpha = R/2L$ and $\omega = 1/\sqrt{L C}$. If you fill in some numbers for these parameters, you can get all kinds of behavior: As you can tell from that diagram, the Kirchhoff conservation laws don't in any way nail down the behavior of the circuit. The values you choose for R, L and C do. You could have a slowly decaying current or a quickly oscillating one. It depends on R, L and C. Now you may wonder why I am talking about this on an economics blog. Well, Cullen Roche implicitly asked a question: Although [stock flow consistent models are] widely used in the Fed and on Wall Street it hasn’t made much impact on more mainstream academic economic modeling techniques for reasons I don’t fully know. The reason is that the content of stock flow consistent modeling is identical to Kirchhoff's laws. Currents are flows of electrons (flows of money); voltages are stocks of electrons (stocks of money). Kirchhoff's laws do not in any way nail down the behavior of an RLC circuit. SFC models do not nail down the behavior of the economy. If you asked what the impact of some policy was and I gave you the graph above, you'd probably never ask again. What SFC models do in order to hide the fact that anything could result from an SFC model is effectively assume R = L = C = 1, which gives you this: I'm sure to get objections to this. 
There might even be legitimate objections. But I ask of any would-be objector: How is accounting for money different from accounting for electrons? Before saying this circuit model is in continuous time, note that there are circuits with clock cycles -- in particular the device you are currently reading this post with. I can't for the life of me think of any objection, and I showed exactly this problem with a SFC model from Godley and Lavoie. But to answer Cullen's implicit question -- as the two Mathematica notebooks above show, SFC models don't specify the behavior of an economy without assuming R = L = C = 1 ... that is to say Γ = 1. Update: Nick Rowe is generally better than me at these things.
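The point that the conservation laws alone don't pin down behavior is easy to see numerically. A minimal sketch (Python; the parameter values are illustrative, not from the post) evaluating the solution above for two choices of R, L, C:

```python
import cmath

def current(t, R, L, C, A=1.0):
    """One solution i(t) = A exp((-alpha + sqrt(alpha^2 - omega^2)) t),
    with alpha = R/(2L) and omega = 1/sqrt(LC); take the real part."""
    alpha = R / (2 * L)
    omega = 1 / (L * C) ** 0.5
    s = -alpha + cmath.sqrt(alpha**2 - omega**2)
    return (A * cmath.exp(s * t)).real

# Same Kirchhoff laws, very different behavior:
overdamped = [current(t, R=10.0, L=1.0, C=1.0) for t in range(10)]  # slow decay
ringing    = [current(t, R=0.1,  L=1.0, C=1.0) for t in range(10)]  # oscillation
```

With R = 10 the current decays monotonically; with R = 0.1 it oscillates and changes sign, even though both circuits satisfy exactly the same conservation laws.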
I'd suggest trying it on your own. Do an expansion of your wavefunction in terms of spherical harmonics, $$\psi(\mathbf r) \ = \ \sum_{\ell} R_\ell(r,t) \, Y_{\ell 0} (\theta,\phi)\,.$$ Note that I've set the index $m$ in $Y_{\ell m}$ to zero, in order to account for the symmetry of your Hamiltonian with respect to rotations in the $x$-$y$ plane. This makes your problem essentially two-dimensional. You can also write the spherical harmonic in terms of Legendre polynomials, $$Y_{\ell0}(\theta,\varphi) = \sqrt{\frac{2\ell+1}{4\pi}} P_{\ell}(\cos\theta).$$ Insert this into your Schrödinger equation and project onto the spherical harmonics, and you'll end up with coupled equations for the radial functions $R_\ell(r,t)$. Here, "coupled" means coupled in $\ell$, i.e. the function $R_\ell(r,t)$ depends on all the other quantum numbers $0\leq \ell \leq L_{\max}$. Solve that with standard finite differences; it's not that hard. In the insertion-and-projection step above, the only problem appears in evaluating the matrix elements with $x^2+y^2$. If you write it in terms of $r^2 Y_{20}$, it boils down to an integral over three spherical harmonics, which is related to the Clebsch-Gordan or Wigner-3j coefficients. But it's an easy one, for which analytical formulae exist (just google the buzzwords in the previous sentence). If you arrive at the working formula and need further assistance, let me know. EDIT: summarizing our lengthy discussion in the comment section, here is the final equation which is to be solved numerically. 
$$i\frac{\partial}{\partial t} u_\ell(r,t) = \left(-\frac{1}{2} \frac{\partial^2}{\partial r^2} + \frac{\ell(\ell+1)}{2r^2} + V(r)\right) u_\ell(r,t) \\ \qquad \qquad\qquad\qquad\qquad\quad+ \frac{2}3 \,k(t)\, r^2 \; \sum_{\ell^\prime=\max(\ell-2,0)}^{\min(\ell+2,L_\max)} \left( \delta_{\ell,\ell^\prime} - \sqrt{\frac{4\pi}5} \alpha(\ell,\ell^\prime) \right)\,u_{\ell^\prime}(r,t)$$ Here the coefficient $\alpha(\ell,\ell^\prime)$ which you introduced is given by $$\alpha(\ell,\ell^\prime) \ = \ \int Y^\ast_{\ell 0}(\Omega) Y_{20}(\Omega) Y_{\ell^\prime 0}(\Omega)\ d\Omega$$ (you can also express it in more standard terms such as Wigner-3j symbols, see e.g. here). Note the restriction in the summation indices, which comes from the fact that $0\leq \ell^\prime \leq L_\max$ (where $L_\max$ is the maximum angular quantum number chosen in the numerical representation).
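The coefficient $\alpha(\ell,\ell^\prime)$ can also be checked numerically before reaching for Wigner-3j tables. A sketch (NumPy; this is my own check, not part of the answer's solver) using $Y_{\ell 0} = \sqrt{(2\ell+1)/4\pi}\, P_\ell(\cos\theta)$ and Gauss-Legendre quadrature in $x = \cos\theta$:

```python
import numpy as np
from numpy.polynomial.legendre import leggauss, Legendre

def alpha(l, lp, nodes=64):
    """alpha(l, lp) = integral of Y*_{l0} Y_{20} Y_{lp 0} over the sphere.
    With m = 0 the harmonics are real and the phi integral just gives 2*pi."""
    x, w = leggauss(nodes)  # quadrature nodes/weights on [-1, 1], x = cos(theta)
    def Y(l):               # Y_{l0} as a function of x = cos(theta)
        return np.sqrt((2 * l + 1) / (4 * np.pi)) * Legendre.basis(l)(x)
    return 2 * np.pi * np.sum(w * Y(l) * Y(2) * Y(lp))
```

Useful sanity checks: $\alpha(0,2) = 1/(2\sqrt{\pi})$ (since $Y_{00} = 1/\sqrt{4\pi}$ and $Y_{20}$ is normalized), $\alpha(0,1) = 0$ by parity, and the coefficient is symmetric in $\ell \leftrightarrow \ell^\prime$, consistent with the $\ell^\prime \in \{\ell-2, \ell, \ell+2\}$ selection rule used in the sum above.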
This is eyebrow-raisingly tricky to answer. The short answer is: you can define them, in a complicated way that's not really useful, but why would you want such a thing? There are two main reasons why this is complicated, which hold for integer and non-integer powers respectively. For one, the two operators will behave quite differently. Because $a$ annihilates the vacuum state, it is not invertible, and its inverse $a^{-1}$ will not behave as expected. (Note that $n^{-1}a^\dagger$ is a left inverse, but not a right one; $a^{-1}$ ought to commute with $a$.) The most you can hope for is a Moore-Penrose pseudoinverse, which will have a rank 1 kernel. (Similarly, further negative powers will increase the kernel dimension.) The creation operator $a^\dagger$ has the opposite problem, as there's no $|\psi\rangle$ such that $a^\dagger|\psi\rangle=|0\rangle$, so again you can only hope for a rank-deficient pseudoinverse. Further, $a$ does have eigenvalues, but they're complex: there's one coherent state $|\alpha\rangle$ for each $\alpha\in\mathbb C$ which obeys $a|\alpha\rangle=\alpha|\alpha\rangle$. Thus to make sense of $a^\nu$ for noninteger $\nu$ you need to make sense of $\alpha^\nu$ for all $\alpha$, and this means braving the branch cuts. While this can be dealt with in some arbitrary fashion, it will never hold up under the time evolution $a\mapsto a e^{-i\omega t}$, $\alpha\mapsto \alpha e^{-i\omega t}$ that's generated in phase space by the harmonic hamiltonian $H=\hbar\omega a^\dagger a$: coherent states rotate around the origin and will, within a period, meet any branch cut you set up. While this is in principle OK, it will never have enough "nice" properties to be useful in practice. That said, it is possible to construct such operators. 
Since $a$ has eigenvalues, we would expect $a^\nu$ to behave well with respect to that structure, which means that if it didn't satisfy $$a^\nu|\alpha\rangle=\alpha^\nu|\alpha\rangle$$then we would not find the construction useful at all. This expression is not particularly well-defined, though, and you need to fix a branch cut to make sense of $\alpha^\nu:=e^{\nu\log(\alpha)}$. Once you do that, this fixes the action of $a^\nu$ on any state $|\psi\rangle$, because the coherent states, while non-orthogonal and not a basis, do obey the resolution of identity$$\int\frac{\text d^2\alpha}{\pi}|\alpha\rangle\langle\alpha|=1.$$Thus, for any vector $|\psi\rangle$ you must have $$a^\nu|\psi\rangle=\int\frac{\text d^2\alpha}{\pi}a^\nu|\alpha\rangle\langle\alpha|\psi\rangle=\int\frac{\text d^2\alpha}{\pi}\alpha^\nu|\alpha\rangle\langle\alpha|\psi\rangle.\tag1$$ So there, you've defined it! You might even go as far as saying that the domain of this new operator is all those $|\psi\rangle$ such that the integral above converges. Now what do you want to do with it? It's even hard to define exactly why the construction above is unsatisfactory, and that's because there's really no call to use such operators in any physical situation, so it's hard to think of properties that it ought to satisfy but doesn't. I pointed out one example above: we'd like $a^\nu$ to evolve to $a^\nu (e^{-i\omega t})^\nu=a^\nu e^{-i\nu\omega t}$ under the harmonic motion that takes $a$ to $a e^{-i\omega t}$, but in general this is not the case. 
If $U(t)=e^{-i\hat{n}\omega t}$, so $U(t)|\alpha\rangle=|\alpha e^{-i\omega t}\rangle$, then $a^\nu$ will evolve to $U(t)^\dagger a^\nu U(t)$, which, as defined above, would give$$\begin{align}U(t)^\dagger a^\nu U(t)|\psi\rangle& =\int\frac{\text d^2\alpha}{\pi}\alpha^\nu U(t)^\dagger|\alpha\rangle\langle\alpha|U(t)|\psi\rangle\\ & =\int\frac{\text d^2\alpha}{\pi}\alpha^\nu |\alpha e^{+i\omega t}\rangle\langle\alpha e^{+i\omega t}|\psi\rangle\\ & =\int\frac{\text d^2\alpha'}{\pi}\left(\alpha'e^{-i\omega t}\right)^\nu |\alpha \rangle\langle\alpha |\psi\rangle.\end{align}$$This would simplify to $e^{-i\nu\omega t}a^\nu|\psi\rangle$, if $\left(\alpha'e^{-i\omega t}\right)^\nu $ simplified to $\alpha'^\nu e^{-i\nu\omega t}$, but it doesn't. Instead, you must define it as $\left(\alpha'e^{-i\omega t}\right)^\nu :=e^{\nu\log\left(\alpha'e^{-i\omega t}\right)}$. On the complex plane, the logarithm no longer splits products into sums; all you can say is equality up to a multiple of $2\pi i$:$$\log(\alpha\beta)\equiv\log(\alpha)+\log(\beta) \mod 2\pi i.$$This means that $\left(\alpha'e^{-i\omega t}\right)^\nu=\alpha'^\nu e^{-i\nu\omega t}e^{2\pi i \nu n(\alpha,t)}$, for some integer $n$ which, in general, depends on $\alpha$ and $t$. If $\nu$ is integer, this is not a problem. Otherwise, though, you've introduced a new, complicated function of $\alpha$ into your integrand, and it will not generally integrate to the same thing. Other than this, though, it depends on what you want to do with it. One is always free to define whatever one wants, but this right is earned a posteriori by doing something useful with the definition. Thus, where would you expect such a monster to occur, and what properties would you expect of it for it to be useful there? Addendum: some thoughts on your edit. 
OK, so what you really care about isn't so much a power of $a$, but instead an operator that will obey$$[a^\nu,a^\dagger]=\nu a^{\nu-1}.$$I'm not completely sure, but I think the construction (1) will not, in general, satisfy that constraint. I would look at it from the opposite perspective: what you're really trying to solve is the operator-valued functional equation$$[A(\nu),a^\dagger]=\nu A(\nu-1)$$for some function $A:\mathbb C\to \rm{End}(\mathcal H)$. Equivalently, you can take the matrix elements of this equation to get a countable infinity of coupled functional equations,$$\nu A_{mn}(\nu-1)=\sqrt{n+1}A_{m,n+1}(\nu)-\sqrt{m}A_{m-1,n}(\nu), \tag2$$where $A_{m,n}(\nu)=\langle m|A(\nu)|n\rangle$. You know already one solution to this equation, for integer $\nu$:$$A_{mn}(\nu)=\sqrt{\frac{n!}{(n-\nu)!}}\delta_{m-n,\nu}\ \text{for }\nu=0,1,2,\ldots.$$The trick is extending these solutions. Now this doesn't get you very far in that it's only a restatement of your problem (which is often, of course, very useful). I can only offer some thoughts as to how my construction (1) fares in this regard. If you take that, then it means postulating$$A_{mn}(\nu)=\int\frac{\text d^2\alpha}{\pi} e^{-|\alpha|^2} \frac{\alpha^m{\alpha^\ast}^n}{\sqrt{m!n!}} e^{\nu \log(\alpha)}$$as a solution. Plugging this into the equation (2), and using the fact that $\alpha^{\nu-1}=e^{(\nu-1)\log(\alpha)}=\alpha^\nu \alpha^{-1}$ (i.e. sums of powers do map into products) you can reduce the mess to the equation$$\int e^{-|\alpha|^2} {\alpha^m{\alpha^\ast}^n} e^{(\nu-1) \log(\alpha)}\left[|\alpha|^2-m-\nu\right]{\text d^2\alpha} =0.$$The reason this vanishes in the integer-$\nu$ case is because the angular integration, over the argument of $\alpha$, averages out to zero, unless all the phases cancel out exactly (in which case the radial integral fixes $\nu$). This will in general, I think, not happen. 
To see whether it does, write $\alpha=r e^{i\theta}$ and $\log(\alpha)= \ln(r) +i\theta +2\pi i n(\theta)$, where $n(\theta)$ is an integer-valued function such that $n(\theta+2\pi)=n(\theta)-1\ \forall\theta$, and which embodies your branch cut. The integral above then reduces to$$\iint e^{-r^2} r^{m+n+\nu-1}e^{i(m-n+\nu-1)\theta}e^{2\pi i n(\theta)(\nu-1) }\left[r^2-m-\nu\right]r\,\text d\theta\,\text dr=0.$$Thus, you want the product of integrals $$\int e^{i(m-n+\nu-1)\theta}e^{2\pi i n(\theta)(\nu-1) }\text d\theta\times\int_0^\infty e^{-r^2} r^{m+n+\nu-1}\left[r^2-m-\nu\right]r\,\text dr$$to vanish. For concreteness, take $n(\theta)$ to be zero for $\theta\in(-\pi, \pi]$, and integrate over that interval. This yields, after reducing the radial integral to$$\Gamma\left(\frac{m+n+\nu+1}{2}+1\right)-(m+\nu)\Gamma\left(\frac{m+n+\nu+1}{2}\right)\propto\left[\frac{m+n+\nu+1}{2}-(m+\nu)\right],$$the integral$$0\stackrel{?}{=}i\left(m-n+\nu-1\right)\int_{-\pi}^{\pi}e^{i(m-n+\nu-1)\theta}\text d\theta=\left.e^{i(m-n+\nu-1)\theta}\right|_{-\pi}^{\pi}=e^{i(m-n+\nu-1)\pi}-e^{-i(m-n+\nu-1)\pi}.$$Thus, then: for (1) to be a solution of (2), you need $\sin\left((m-n+\nu-1)\pi\right)=0$ or, equivalently,$$\boxed{\sin(\nu\pi)=0}$$to hold. This means that the construction in (1) only obeys the commutation relations for integer $\nu$. Bummer!
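For integer $\nu$, the target relation $[a^\nu, a^\dagger] = \nu a^{\nu-1}$ can be verified directly in a truncated Fock basis. A NumPy sketch (mine, not the answer's; truncation spoils the last rows and columns, so only the upper-left block is compared):

```python
import numpy as np

N = 40                                    # Fock-space truncation dimension
a = np.diag(np.sqrt(np.arange(1, N)), 1)  # annihilation: a|n> = sqrt(n)|n-1>
ad = a.T                                  # creation operator (matrices are real)

def comm(X, Y):
    """Matrix commutator [X, Y]."""
    return X @ Y - Y @ X

# [a^2, a+] should equal 2a away from the truncation boundary.
lhs = comm(a @ a, ad)
rhs = 2 * a
```

The identity holds exactly on the `[:N-2, :N-2]` block; the entries touching the highest Fock state are corrupted by the truncation (the truncated $[a, a^\dagger]$ has $-(N-1)$ instead of $1$ in its last diagonal entry), which is why the full matrices do not agree.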
OpenCV 4.1.2-pre Open Source Computer Vision This tutorial will demonstrate the basic concepts of the homography with some code. For detailed explanations about the theory, please refer to a computer vision course or a computer vision book, e.g.: Briefly, the planar homography relates the transformation between two planes (up to a scale factor): \[ s \begin{bmatrix} x^{'} \\ y^{'} \\ 1 \end{bmatrix} = H \begin{bmatrix} x \\ y \\ 1 \end{bmatrix} = \begin{bmatrix} h_{11} & h_{12} & h_{13} \\ h_{21} & h_{22} & h_{23} \\ h_{31} & h_{32} & h_{33} \end{bmatrix} \begin{bmatrix} x \\ y \\ 1 \end{bmatrix} \] The homography matrix is a 3x3 matrix but with 8 DoF (degrees of freedom), as it is estimated only up to a scale. It is generally normalized (see also 1) with \( h_{33} = 1 \) or \( h_{11}^2 + h_{12}^2 + h_{13}^2 + h_{21}^2 + h_{22}^2 + h_{23}^2 + h_{31}^2 + h_{32}^2 + h_{33}^2 = 1 \). The following examples show different kinds of transformations, but all relate a transformation between two planes. The homography can be estimated using, for instance, the Direct Linear Transform (DLT) algorithm (see 1 for more information). As the object is planar, the transformation between points expressed in the object frame and projected points in the image plane expressed in the normalized camera frame is a homography. Only because the object is planar can the camera pose be retrieved from the homography, assuming the camera intrinsic parameters are known (see 2 or 4). This can be tested easily using a chessboard object and findChessboardCorners() to get the corner locations in the image. 
The first step is to detect the chessboard corners; the chessboard size (patternSize), here 9x6, is required: The object points expressed in the object frame can be computed easily knowing the size of a chessboard square: The coordinate Z=0 must be removed for the homography estimation part: The image points expressed in the normalized camera frame can be computed from the corner points by applying a reverse perspective transformation using the camera intrinsics and the distortion coefficients: The homography can then be estimated with: A quick solution to retrieve the pose from the homography matrix is (see 5): \[ \begin{align*} \boldsymbol{X} &= \left( X, Y, 0, 1 \right ) \\ \boldsymbol{x} &= \boldsymbol{P}\boldsymbol{X} \\ &= \boldsymbol{K} \left[ \boldsymbol{r_1} \hspace{0.5em} \boldsymbol{r_2} \hspace{0.5em} \boldsymbol{r_3} \hspace{0.5em} \boldsymbol{t} \right ] \begin{pmatrix} X \\ Y \\ 0 \\ 1 \end{pmatrix} \\ &= \boldsymbol{K} \left[ \boldsymbol{r_1} \hspace{0.5em} \boldsymbol{r_2} \hspace{0.5em} \boldsymbol{t} \right ] \begin{pmatrix} X \\ Y \\ 1 \end{pmatrix} \\ &= \boldsymbol{H} \begin{pmatrix} X \\ Y \\ 1 \end{pmatrix} \end{align*} \] \[ \begin{align*} \boldsymbol{H} &= \lambda \boldsymbol{K} \left[ \boldsymbol{r_1} \hspace{0.5em} \boldsymbol{r_2} \hspace{0.5em} \boldsymbol{t} \right ] \\ \boldsymbol{K}^{-1} \boldsymbol{H} &= \lambda \left[ \boldsymbol{r_1} \hspace{0.5em} \boldsymbol{r_2} \hspace{0.5em} \boldsymbol{t} \right ] \\ \boldsymbol{P} &= \boldsymbol{K} \left[ \boldsymbol{r_1} \hspace{0.5em} \boldsymbol{r_2} \hspace{0.5em} \left( \boldsymbol{r_1} \times \boldsymbol{r_2} \right ) \hspace{0.5em} \boldsymbol{t} \right ] \end{align*} \] This is a quick solution (see also 2), as it does not ensure that the resulting rotation matrix will be orthogonal, and the scale is estimated roughly by normalizing the first column to 1. 
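The "quick solution" above is easy to prototype without OpenCV. A hedged NumPy sketch of the same decomposition (the function name and test values are mine; in the tutorial this is applied to the estimated H and the calibrated K):

```python
import numpy as np

def pose_from_homography(H, K):
    """Recover [r1 r2 t] from H ~ K [r1 r2 t] for a planar object (Z = 0).
    Quick version, as in the tutorial: the scale is fixed roughly by
    normalizing the first column, and the resulting rotation matrix is
    not guaranteed to be orthogonal."""
    M = np.linalg.inv(K) @ H
    lam = np.linalg.norm(M[:, 0])   # rough scale estimate
    M = M / lam
    r1, r2, t = M[:, 0], M[:, 1], M[:, 2]
    r3 = np.cross(r1, r2)           # complete the rotation: r3 = r1 x r2
    R = np.column_stack([r1, r2, r3])
    return R, t
```

Building H from a known pose and intrinsics and running it back through the function recovers the original rotation and translation, up to the caveats noted in the docstring.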
To check the result, the object frame projected into the image with the estimated camera pose is displayed: In this example, a source image will be transformed into a desired perspective view by computing the homography that maps the source points into the desired points. The following image shows the source image (left) and the chessboard view that we want to transform into the desired chessboard view (right). The first step is to detect the chessboard corners in the source and desired images: The homography is estimated easily with: To warp the source chessboard view into the desired chessboard view, we use cv::warpPerspective The result image is: To compute the coordinates of the source corners transformed by the homography: To check the correctness of the calculation, the matching lines are displayed: The homography relates the transformation between two planes, and it is possible to retrieve the corresponding camera displacement that allows going from the first to the second plane view (see [141] for more information). Before going into the details of how to compute the homography from the camera displacement, here are a few reminders about camera pose and homogeneous transformations. The function cv::solvePnP allows computing the camera pose from the correspondences between 3D object points (points expressed in the object frame) and the projected 2D image points (object points viewed in the image). The intrinsic parameters and the distortion coefficients are required (see the camera calibration process). 
\[ \begin{align*} s \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} &= \begin{bmatrix} f_x & 0 & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} r_{11} & r_{12} & r_{13} & t_x \\ r_{21} & r_{22} & r_{23} & t_y \\ r_{31} & r_{32} & r_{33} & t_z \end{bmatrix} \begin{bmatrix} X_o \\ Y_o \\ Z_o \\ 1 \end{bmatrix} \\ &= \boldsymbol{K} \hspace{0.2em} ^{c}\textrm{M}_o \begin{bmatrix} X_o \\ Y_o \\ Z_o \\ 1 \end{bmatrix} \end{align*} \] \( \boldsymbol{K} \) is the intrinsic matrix and \( ^{c}\textrm{M}_o \) is the camera pose. The output of cv::solvePnP is exactly this: rvec is the Rodrigues rotation vector and tvec the translation vector. \( ^{c}\textrm{M}_o \) can be represented in a homogeneous form and transforms a point expressed in the object frame into the camera frame: \[ \begin{align*} \begin{bmatrix} X_c \\ Y_c \\ Z_c \\ 1 \end{bmatrix} &= \hspace{0.2em} ^{c}\textrm{M}_o \begin{bmatrix} X_o \\ Y_o \\ Z_o \\ 1 \end{bmatrix} \\ &= \begin{bmatrix} ^{c}\textrm{R}_o & ^{c}\textrm{t}_o \\ 0_{1\times3} & 1 \end{bmatrix} \begin{bmatrix} X_o \\ Y_o \\ Z_o \\ 1 \end{bmatrix} \\ &= \begin{bmatrix} r_{11} & r_{12} & r_{13} & t_x \\ r_{21} & r_{22} & r_{23} & t_y \\ r_{31} & r_{32} & r_{33} & t_z \\ 0 & 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} X_o \\ Y_o \\ Z_o \\ 1 \end{bmatrix} \end{align*} \] Transforming a point from one frame to another is done with a matrix multiplication. To transform a 3D point expressed in the camera-1 frame to the camera-2 frame: \[ ^{c_2}\textrm{M}_{c_1} = \hspace{0.2em} ^{c_2}\textrm{M}_{o} \cdot \hspace{0.1em} ^{o}\textrm{M}_{c_1} = \hspace{0.2em} ^{c_2}\textrm{M}_{o} \cdot \hspace{0.1em} \left( ^{c_1}\textrm{M}_{o} \right )^{-1} = \begin{bmatrix} ^{c_2}\textrm{R}_{o} & ^{c_2}\textrm{t}_{o} \\ 0_{1 \times 3} & 1 \end{bmatrix} \cdot \begin{bmatrix} ^{c_1}\textrm{R}_{o}^T & - \hspace{0.2em} ^{c_1}\textrm{R}_{o}^T \cdot \hspace{0.2em} ^{c_1}\textrm{t}_{o} \\ 0_{1 \times 3} & 1 \end{bmatrix} \] In this example, 
we will compute the camera displacement between two camera poses with respect to the chessboard object. The first step is to compute the camera poses for the two images: The camera displacement can be computed from the camera poses using the formulas above: The homography related to a specific plane computed from the camera displacement is: On this figure, n is the normal vector of the plane and d the distance between the camera frame and the plane along the plane normal. The equation to compute the homography from the camera displacement is: \[ ^{2}\textrm{H}_{1} = \hspace{0.2em} ^{2}\textrm{R}_{1} - \hspace{0.1em} \frac{^{2}\textrm{t}_{1} \cdot n^T}{d} \] Where \( ^{2}\textrm{H}_{1} \) is the homography matrix that maps the points in the first camera frame to the corresponding points in the second camera frame, \( ^{2}\textrm{R}_{1} = \hspace{0.2em} ^{c_2}\textrm{R}_{o} \cdot \hspace{0.1em} ^{c_1}\textrm{R}_{o}^{T} \) is the rotation matrix that represents the rotation between the two camera frames, and \( ^{2}\textrm{t}_{1} = \hspace{0.2em} ^{c_2}\textrm{R}_{o} \cdot \left( - \hspace{0.1em} ^{c_1}\textrm{R}_{o}^{T} \cdot \hspace{0.1em} ^{c_1}\textrm{t}_{o} \right ) + \hspace{0.1em} ^{c_2}\textrm{t}_{o} \) the translation vector between the two camera frames. 
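The displacement-to-homography formula can be sanity-checked with plain NumPy. A sketch with made-up poses (not the tutorial's chessboard data); with the sign convention that plane points in the camera-1 frame satisfy n·X = d, the '+' variant of the formula applies, as the tutorial notes when the normal orientation flips:

```python
import numpy as np

def homography_from_displacement(R, t, n, d):
    """Euclidean homography 2H1 = R + t n^T / d, under the convention
    that points X on the plane in the camera-1 frame satisfy n . X = d
    (the other sign convention flips the second term)."""
    return R + np.outer(t, n) / d

# Check: for any 3D point X1 on the plane, H @ X1 equals X2 = R @ X1 + t,
# since t (n . X1) / d = t when n . X1 = d.
```

This is exactly why the formula works: on the plane, the rank-one correction term reproduces the translation.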
Here the normal vector n is the plane normal expressed in the camera-1 frame and can be computed as the cross product of 2 vectors (using 3 non-collinear points that lie on the plane) or, in our case, directly with: The distance d can be computed as the dot product between the plane normal and a point on the plane, or by computing the plane equation and using the D coefficient: The projective homography matrix \( \textbf{G} \) can be computed from the Euclidean homography \( \textbf{H} \) using the intrinsic matrix \( \textbf{K} \) (see [141]), here assuming the same camera between the two plane views: \[ \textbf{G} = \gamma \textbf{K} \textbf{H} \textbf{K}^{-1} \] In our case, the Z-axis of the chessboard goes inside the object, whereas in the homography figure it goes outside. This is just a matter of sign: \[ ^{2}\textrm{H}_{1} = \hspace{0.2em} ^{2}\textrm{R}_{1} + \hspace{0.1em} \frac{^{2}\textrm{t}_{1} \cdot n^T}{d} \] We will now compare the projective homography computed from the camera displacement with the one estimated with cv::findHomography The homography matrices are similar. If we compare image 1 warped using both homography matrices: Visually, it is hard to distinguish a difference between the result image from the homography computed from the camera displacement and the one estimated with the cv::findHomography function. OpenCV 3 contains the function cv::decomposeHomographyMat, which decomposes the homography matrix into a set of rotations, translations and plane normals. First we will decompose the homography matrix computed from the camera displacement: The results of cv::decomposeHomographyMat are: The result of the decomposition of the homography matrix can only be recovered up to a scale factor, which corresponds in fact to the distance d, as the normal is unit length. As you can see, there is one solution that matches almost perfectly with the computed camera displacement. 
As stated in the documentation: As the result of the decomposition is a camera displacement, if we have the initial camera pose \( ^{c_1}\textrm{M}_{o} \), we can compute the current camera pose \( ^{c_2}\textrm{M}_{o} = \hspace{0.2em} ^{c_2}\textrm{M}_{c_1} \cdot \hspace{0.1em} ^{c_1}\textrm{M}_{o} \) and test whether the 3D object points that belong to the plane are projected in front of the camera or not. Another solution could be to retain the solution with the closest normal, if we know the plane normal expressed at the camera-1 pose. The same procedure, but with the homography matrix estimated with cv::findHomography: Again, there is also a solution that matches the computed camera displacement. The homography transformation applies only to planar structures. But in the case of a rotating camera (pure rotation around the camera axis of projection, no translation), an arbitrary world can be considered (see previously). The homography can then be computed using the rotation transformation and the camera intrinsic parameters as (see for instance 8): \[ s \begin{bmatrix} x^{'} \\ y^{'} \\ 1 \end{bmatrix} = \bf{K} \hspace{0.1em} \bf{R} \hspace{0.1em} \bf{K}^{-1} \begin{bmatrix} x \\ y \\ 1 \end{bmatrix} \] To illustrate, we used Blender, a free and open-source 3D computer graphics program, to generate two camera views with only a rotation transformation between them. More information about how to retrieve the camera intrinsic parameters and the 3x4 extrinsic matrix with respect to the world with Blender can be found in 9 (an additional transformation is needed to get the transformation between the camera and the object frames). The figure below shows the two generated views of the Suzanne model, with only a rotation transformation between them: With the known associated camera poses and the intrinsic parameters, the relative rotation between the two views can be computed: Here, the second image will be stitched with respect to the first image. 
The homography can be calculated using the formula above: The stitching is made simply with: The resulting image is:
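The rotation-only case reduces to G = K R K⁻¹ acting directly on pixel coordinates. A NumPy sketch (illustrative K and R, not the Blender scene's values) verifying that G maps the projection of a point in view 1 to its projection in view 2:

```python
import numpy as np

K = np.array([[800., 0., 320.],
              [0., 800., 240.],
              [0., 0., 1.]])            # illustrative intrinsic matrix

theta = 0.15                            # pure rotation about the camera y-axis
c, s = np.cos(theta), np.sin(theta)
R = np.array([[ c, 0., s],
              [0., 1., 0.],
              [-s, 0., c]])

G = K @ R @ np.linalg.inv(K)            # homography for a rotating camera

def project(K, X):
    """Project a 3D point to homogeneous pixel coordinates."""
    x = K @ X
    return x / x[2]

X1 = np.array([0.2, -0.1, 2.0])         # a 3D point in the camera-1 frame
X2 = R @ X1                             # same point after the pure rotation

x1, x2 = project(K, X1), project(K, X2)
x2_from_G = G @ x1
x2_from_G = x2_from_G / x2_from_G[2]    # normalize the homogeneous scale s
```

Since x1 ~ K X1 and X2 = R X1, we get G x1 ~ K R X1 = K X2 ~ x2, which is the stitching relation used above; no depth information is needed, which is why an arbitrary (non-planar) world works here.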
In 1923 a French physics graduate student named Prince Louis-Victor de Broglie (1892–1987) made a radical proposal based on the hope that nature is symmetric. If EM radiation has both particle and wave properties, then nature would be symmetric if matter also had both particle and wave properties. If what we once thought of as an unequivocal wave (EM radiation) is also a particle, then what we think of as an unequivocal particle (matter) may also be a wave. De Broglie’s suggestion, made as part of his doctoral thesis, was so radical that it was greeted with some skepticism. A copy of his thesis was sent to Einstein, who said it was not only probably correct, but that it might be of fundamental importance. With the support of Einstein and a few other prominent physicists, de Broglie was awarded his doctorate. De Broglie took both relativity and quantum mechanics into account to develop the proposal that all particles have a wavelength, given by \[\lambda = \dfrac{h}{p} \, (matter \, and \, photons),\] where \(h\) is Planck’s constant and \(p\) is momentum. This is defined to be the de Broglie wavelength. (Note that we already have this for photons, from the equation \(p = h/\lambda\).) The hallmark of a wave is interference. If matter is a wave, then it must exhibit constructive and destructive interference. Why isn’t this ordinarily observed? The answer is that in order to see significant interference effects, a wave must interact with an object about the same size as its wavelength. Since \(h\) is very small, \(\lambda\) is also small, especially for macroscopic objects. A 3-kg bowling ball moving at 10 m/s, for example, has \[\lambda = h/p = (6.63 \times 10^{-34} \, J \cdot s)/[(3 \, kg)(10 \, m/s)] = 2 \times 10^{-35} \, m.\] This means that to see its wave characteristics, the bowling ball would have to interact with something about \(10^{-35} \, m\) in size—far smaller than anything known. 
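The bowling-ball estimate above is a one-line calculation to reproduce (a Python sketch; h is Planck's constant with the rounding used in the text):

```python
h = 6.63e-34  # Planck's constant, J s

def de_broglie(m, v):
    """de Broglie wavelength: lambda = h / p = h / (m v), nonrelativistic."""
    return h / (m * v)

lam_ball = de_broglie(3.0, 10.0)  # 3-kg bowling ball at 10 m/s
# ~2e-35 m: absurdly small compared with any known structure
```

The same function with an electron's mass and a lab-scale velocity gives wavelengths of atomic dimensions, which is why interference was first seen with electrons.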
When waves interact with objects much larger than their wavelength, they show negligible interference effects and move in straight lines (such as light rays in geometric optics). To get easily observed interference effects from particles of matter, the longest wavelength and hence smallest mass possible would be useful. Therefore, this effect was first observed with electrons. American physicists Clinton J. Davisson and Lester H. Germer in 1925 and, independently, British physicist G. P. Thomson (son of J. J. Thomson, discoverer of the electron) in 1926 scattered electrons from crystals and found diffraction patterns. These patterns are exactly consistent with interference of electrons having the de Broglie wavelength and are somewhat analogous to light interacting with a diffraction grating. (See Figure.) Connections: Waves All microscopic particles, whether massless, like photons, or having mass, like electrons, have wave properties. The relationship between momentum and wavelength is fundamental for all particles. De Broglie’s proposal of a wave nature for all particles initiated a remarkably productive era in which the foundations for quantum mechanics were laid. In 1926, the Austrian physicist Erwin Schrödinger (1887–1961) published four papers in which the wave nature of particles was treated explicitly with wave equations. At the same time, many others began important work. Among them was German physicist Werner Heisenberg (1901–1976) who, among many other contributions to quantum mechanics, formulated a mathematical treatment of the wave nature of matter that used matrices rather than wave equations. We will deal with some specifics in later sections, but it is worth noting that de Broglie’s work was a watershed for the development of quantum mechanics. De Broglie was awarded the Nobel Prize in 1929 for his vision, as were Davisson and G. P. Thomson in 1937 for their experimental verification of de Broglie’s hypothesis. 
Figure \(\PageIndex{1}\): This diffraction pattern was obtained for electrons diffracted by crystalline silicon. Bright regions are those of constructive interference, while dark regions are those of destructive interference. (credit: Ndthe, Wikimedia Commons) Example \(\PageIndex{1}\): Electron Wavelength versus Velocity and Energy For an electron having a de Broglie wavelength of 0.167 nm (appropriate for interacting with crystal lattice structures that are about this size): (a) Calculate the electron’s velocity, assuming it is nonrelativistic. (b) Calculate the electron’s kinetic energy in eV. Strategy For part (a), since the de Broglie wavelength is given, the electron’s velocity can be obtained from \(\lambda = h/p\) by using the nonrelativistic formula for momentum, \(p = mv\). For part (b), once \(v\) is obtained (and it has been verified that \(v\) is nonrelativistic), the classical kinetic energy is simply \((1/2)mv^2\). Solution for (a) Substituting the nonrelativistic formula for momentum \((p = mv)\) into the de Broglie wavelength gives \[\lambda = \dfrac{h}{p} = \dfrac{h}{mv}.\] Solving for \(v\) gives \[v =\dfrac{h}{m\lambda}.\] Substituting known values yields \[v = \dfrac{6.63 \times 10^{-34} \, J \cdot s}{(9.11 \times 10^{-31} \, kg)(0.167 \times 10^{-9} \, m)} = 4.36 \times 10^6 \, m/s.\] Solution for (b) While fast compared with a car, this electron’s speed is not highly relativistic, and so we can comfortably use the classical formula to find the electron’s kinetic energy and convert it to eV as requested. \[KE = \dfrac{1}{2} mv^2\] \[= \dfrac{1}{2}(9.11 \times 10^{-31} \, kg)(4.36 \times 10^6 \, m/s)^2\] \[= (8.66 \times 10^{-18} \, J)\left(\dfrac{1 \, eV}{1.602 \times 10^{-19} \, J}\right)\] \[= 54.0 \, eV\] Discussion This low energy means that these 0.167-nm electrons could be obtained by accelerating them through a 54.0-V electrostatic potential, an easy task. 
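The arithmetic in the example can be cross-checked quickly (a sketch; constants rounded as in the text):

```python
h = 6.63e-34    # Planck's constant, J s
m_e = 9.11e-31  # electron mass, kg
eV = 1.602e-19  # joules per electron-volt

lam = 0.167e-9                 # de Broglie wavelength, m
v = h / (m_e * lam)            # part (a): v = h / (m lambda)
KE_eV = 0.5 * m_e * v**2 / eV  # part (b): classical kinetic energy in eV
```

This reproduces v ≈ 4.36 × 10⁶ m/s and KE ≈ 54.0 eV, matching the accelerating voltage used in the Davisson-Germer-type experiments discussed above.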
The results also confirm the assumption that the electrons are nonrelativistic, since their velocity is just over 1% of the speed of light and the kinetic energy is about 0.01% of the rest energy of an electron (0.511 MeV). If the electrons had turned out to be relativistic, we would have had to use more involved calculations employing relativistic formulas. Electron Microscopes One consequence or use of the wave nature of matter is found in the electron microscope. As we have discussed, there is a limit to the detail observed with any probe having a wavelength. Resolution, or observable detail, is limited to about one wavelength. Since a potential of only 54 V can produce electrons with sub-nanometer wavelengths, it is easy to get electrons with much smaller wavelengths than those of visible light (hundreds of nanometers). Electron microscopes can, thus, be constructed to detect much smaller details than optical microscopes. (See Figure.) There are basically two types of electron microscopes. The transmission electron microscope (TEM) accelerates electrons that are emitted from a hot filament (the cathode). The beam is broadened and then passes through the sample. A magnetic lens focuses the beam image onto a fluorescent screen, a photographic plate, or (most probably) a CCD (light sensitive camera), from which it is transferred to a computer. The TEM is similar to the optical microscope, but it requires a thin sample examined in a vacuum. However it can resolve details as small as 0.1 nm (\(10^{-10} \, m\)), providing magnifications of 100 million times the size of the original object. The TEM has allowed us to see individual atoms and structure of cell nuclei. The scanning electron microscope (SEM) provides images by using secondary electrons produced by the primary beam interacting with the surface of the sample (see Figure). The SEM also uses magnetic lenses to focus the beam onto the sample. 
However, it moves the beam around electrically to “scan” the sample in the x and y directions. A CCD detector is used to process the data for each electron position, producing images like the one at the beginning of this chapter. The SEM has the advantage of not requiring a thin sample and of providing a 3-D view. However, its resolution is about ten times lower than that of a TEM. Figure \(\PageIndex{2}\): Schematic of a scanning electron microscope (SEM) (a) used to observe small details, such as those seen in this image of a tooth of a Himipristis, a type of shark (b). (credit: Dallas Krentzel, Flickr) Electrons were the first particles with mass to be directly confirmed to have the wavelength proposed by de Broglie. Subsequently, protons, helium nuclei, neutrons, and many others have been observed to exhibit interference when they interact with objects having sizes similar to their de Broglie wavelength. The de Broglie wavelength for massless particles was well established in the 1920s for photons, and it has since been observed that all massless particles have a de Broglie wavelength \(\lambda = h/p\). The wave nature of all particles is a universal characteristic of nature. We shall see in following sections that implications of the de Broglie wavelength include the quantization of energy in atoms and molecules, and an alteration of our basic view of nature on the microscopic scale. The next section, for example, shows that there are limits to the precision with which we may make predictions, regardless of how hard we try. There are even limits to the precision with which we may measure an object’s location or energy. MAKING CONNECTIONS: The wave nature of matter allows it to exhibit all the characteristics of other, more familiar, waves. Diffraction gratings, for example, produce diffraction patterns for light that depend on grating spacing and the wavelength of the light.
This effect, as with most wave phenomena, is most pronounced when the wave interacts with objects having a size similar to its wavelength. For gratings, this is the spacing between multiple slits. When electrons interact with a system having a spacing similar to the electron wavelength, they show the same types of interference patterns as light does for diffraction gratings, as shown at top left in Figure. Atoms are spaced at regular intervals in a crystal as parallel planes, as shown in the bottom part of Figure. The spacings between these planes act like the openings in a diffraction grating. At certain incident angles, the paths of electrons scattering from successive planes differ by one wavelength and, thus, interfere constructively. At other angles, the path length differences are not an integral wavelength, and there is partial to total destructive interference. This type of scattering from a large crystal with well-defined lattice planes can produce dramatic interference patterns. It is called Bragg reflection, for the father-and-son team who first explored and analyzed it in some detail. The expanded view also shows the path-length differences and indicates how these depend on incident angle \(\theta\) in a manner similar to the diffraction patterns for x rays reflecting from a crystal. Figure \(\PageIndex{3}\): The diffraction pattern at top left is produced by scattering electrons from a crystal and is graphed as a function of incident angle relative to the regular array of atoms in a crystal, as shown at bottom. Electrons scattering from the second layer of atoms travel farther than those scattered from the top layer. If the path length difference (PLD) is an integral wavelength, there is constructive interference. Let us take the spacing between parallel planes of atoms in the crystal to be \(d\).
As mentioned, if the path length difference (PLD) for the electrons is a whole number of wavelengths, there will be constructive interference—that is, \(PLD = n\lambda \, (n = 1, \, 2, \, 3, . . .).\) Because \(AB = BC = d \sin \theta\), we have constructive interference when \(n\lambda = 2d \sin \theta\). This relationship is called the Bragg equation and applies not only to electrons but also to x rays. The wavelength of matter is a submicroscopic characteristic that explains a macroscopic phenomenon such as Bragg reflection. Similarly, the wavelength of light is a submicroscopic characteristic that explains the macroscopic phenomenon of diffraction patterns. Summary Particles of matter also have a wavelength, called the de Broglie wavelength, given by \(\lambda = \frac{h}{p}\), where \(p\) is momentum. Matter is found to have the same interference characteristics as any other wave. Glossary de Broglie wavelength the wavelength possessed by a particle of matter, calculated by \(\lambda = h/p\) Contributors Paul Peter Urone (Professor Emeritus at California State University, Sacramento) and Roger Hinrichs (State University of New York, College at Oswego) with Contributing Authors: Kim Dirks (University of Auckland) and Manjula Sharma (University of Sydney). This work is licensed by OpenStax University Physics under a Creative Commons Attribution License (by 4.0).
From the commutation relations for the conformal Lie algebra, we may infer that the dilation operator plays the same role as the Hamiltonian in CFTs. The appropriate commutation relations are $[D,P_{\mu}] = iP_{\mu}$ and $[D,K_{\mu}] = -iK_{\mu}$, so that $P_{\mu}$ and $K_{\mu}$ are raising and lowering operators, respectively, for the operator $D$. This is analogous to the operators $\hat a^{\dagger}$ and $\hat a$ being creation and annihilation operators for $\hat H$ when discussing the energy spectra of the $n$-dimensional harmonic oscillator. My question is, while $\hat a^{\dagger}$ and $\hat a$ raise and lower the energy by one unit $( \pm \hbar \omega)$ for each application of the operator onto eigenstates of $\hat H$, what is being raised and lowered when we apply $P_{\mu}$ and $K_{\mu}$ onto the eigenvectors of $D$? Secondly, what exactly do we mean by the eigenvectors of $D$? Are they fields in space-time? Using the notation of Di Francesco in his book 'Conformal Field Theory', the fields transform under a dilation like $F(\Phi(x)) = \lambda^{-\Delta}\Phi(x)$, where $\lambda$ is the scale of the coordinates and $\Delta$ is the scaling dimension of the fields. Can I write $F(\Phi(x)) = D\Phi(x) = \lambda^{-\Delta}\Phi(x)$ to make the eigenvalue equation manifest? Thanks for any clarity.
The Annals of Probability Ann. Probab. Volume 19, Number 4 (1991), 1737-1755. Strong Limit Theorems of Empirical Functionals for Large Exceedances of Partial Sums of I.I.D. Variables Abstract Let $(X_i,U_i)$ be pairs of i.i.d. bounded real-valued random variables ($X_i$ and $U_i$ are generally mutually dependent). Assume $E\lbrack X_i\rbrack < 0$ and $\Pr\{X_i > 0\} > 0$. For the (rare) partial sum segments where $\sum^l_{i=k}X_i \rightarrow \infty$, strong limit laws are derived for the sums $\sum^l_{i=k}U_i$. In particular a strong law for the length $(l - k + 1)$ and the empirical distribution of $U_i$ in the event of large segmental sums of $\sum X_i$ are obtained. Applications are given in characterizing the composition of high scoring segments in letter sequences and for evaluating statistical hypotheses of sudden change points in engineering systems. Article information Source Ann. Probab., Volume 19, Number 4 (1991), 1737-1755. Dates First available in Project Euclid: 19 April 2007 Permanent link to this document https://projecteuclid.org/euclid.aop/1176990232 Digital Object Identifier doi:10.1214/aop/1176990232 Mathematical Reviews number (MathSciNet) MR1127724 Zentralblatt MATH identifier 0746.60028 JSTOR links.jstor.org Subjects Primary: 60F15: Strong theorems Secondary: 60F10: Large deviations 60G50: Sums of independent random variables; random walks Citation Dembo, Amir; Karlin, Samuel. Strong Limit Theorems of Empirical Functionals for Large Exceedances of Partial Sums of I.I.D. Variables. Ann. Probab. 19 (1991), no. 4, 1737--1755. doi:10.1214/aop/1176990232. https://projecteuclid.org/euclid.aop/1176990232
The conditional probability measures the probability of a certain event given previous information about another event. For example, if we want to calculate the probability that, after having rolled a die, a $$6$$ comes out, we already know, by the rule of Laplace, that the probability is $$\dfrac{1}{6}$$. Nevertheless, if we have the information that the result was an even number, there are only three possibilities: $$2, 4$$ and $$6$$, so the probability becomes higher: $$\dfrac{1}{3}$$. Given two events $$A$$ and $$B$$, such that $$P(B)\neq 0$$, we call the probability of $$A$$ conditioned on $$B$$, written $$P(A/B)$$: $$$P(A/B)=\dfrac{P(A\cap B)}{P(B)}$$$ From the formula of the conditional probability we can derive an expression that will turn out to be very useful further on: $$$P(A\cap B)=P(A/B)\cdot P(B)$$$ This expression is known as the principle of compound probability.
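The die example can be reproduced both exactly and by simulation; a stdlib-only sketch (the sample size is an arbitrary choice):

```python
import random
from fractions import Fraction

# Exact: P(6 | even) = P(6 and even) / P(even) = (1/6) / (3/6) = 1/3
p_6_and_even = Fraction(1, 6)
p_even = Fraction(3, 6)
print(p_6_and_even / p_even)   # → 1/3

# Simulation: condition by keeping only the even rolls, then count the sixes.
random.seed(1)
rolls = [random.randint(1, 6) for _ in range(100_000)]
even = [r for r in rolls if r % 2 == 0]
print(abs(sum(r == 6 for r in even) / len(even) - 1 / 3) < 0.02)   # → True
```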
Let A be reducible to B, i.e., $A \leq B$. Hence, the Turing machine accepting $A$ has access to an oracle for $B$. Let the Turing machine accepting $A$ be $M_{A}$ and the oracle for $B$ be $O_{B}$. The types of reductions: Turing reduction: $M_{A}$ can make multiple queries to $O_{B}$. Karp reduction: Also called "polynomial time Turing reduction": The input to $O_{B}$ must be constructed in polytime. Moreover, the number of queries to $O_{B}$ must be bounded by a polynomial. In this case: $P^{A} = P^{B}$. Many-one Turing reduction: $M_{A}$ can make only one query to $O_{B}$, during its last step. Hence the oracle response cannot be modified. However, the time taken to construct the input to $O_{B}$ need not be bounded by a polynomial. Equivalently: ($\leq_{m}$ denoting many-one reduction) $A \leq_{m} B$ if $\exists$ a computable function $f: \Sigma^{\ast} \to \Sigma^{\ast}$ such that $f(x) \in B \iff x\in A$. Cook reduction: Also called "polynomial time many-one reduction": A many-one reduction where the time taken to construct an input to $O_{B}$ must be bounded by a polynomial. Equivalently: ($\leq^{p}_{m}$ denoting polynomial-time many-one reduction) $A \leq^p_{m} B$ if $\exists$ a poly-time computable function $f: \Sigma^{\ast} \to \Sigma^{\ast}$ such that $f(x) \in B \iff x\in A$. Parsimonious reduction: Also called "polynomial time one-one reduction": A Cook reduction where every instance of $A$ is mapped to a unique instance of $B$. Equivalently: ($\leq^{p}_{1}$ denoting parsimonious reduction) $A \leq^p_{1} B$ if $\exists$ a poly-time computable bijection $f: \Sigma^{\ast} \to \Sigma^{\ast}$ such that $f(x) \in B \iff x\in A$. These reductions preserve the number of solutions. Hence $\#M_{A} = \#O_{B}$. We can define more types of reductions by bounding the number of oracle queries, but leaving those out, could someone kindly tell me if I have gotten the nomenclature for the different types of reductions correct?
Are NP-complete problems defined with respect to Cook reduction or parsimonious reduction? Can anyone kindly give an example of a problem that is NP-complete under Cook but not under parsimonious reduction? If I am not wrong, the class #P-Complete is defined with respect to Karp reductions.
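Whatever names one settles on, the $f(x) \in B \iff x \in A$ condition is easy to make concrete. Here is a toy poly-time many-one reduction (the two languages are illustrative choices, not from the question): $A = \{w : w \text{ has an even number of 1s}\}$, $B = \{w : w \text{ ends in } 0\}$, with $f$ appending the parity bit — computable in one linear pass:

```python
from itertools import product

def f(w: str) -> str:
    """Reduction map: append the parity of the number of 1s in w."""
    return w + str(w.count('1') % 2)

def in_A(w: str) -> bool:          # A = {w : even number of 1s}
    return w.count('1') % 2 == 0

def in_B(w: str) -> bool:          # B = {w : w ends in '0'}
    return w.endswith('0')

# Exhaustively verify f(x) ∈ B  ⟺  x ∈ A on all binary strings up to length 5.
for n in range(6):
    for bits in product('01', repeat=n):
        w = ''.join(bits)
        assert in_A(w) == in_B(f(w))
print("f is a correct many-one reduction on all tested strings")
```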
This question already has an answer here: How does one go about calculating $$\int_0^{\infty}\frac{\ln x}{1+x^2}dx$$ I've tried integration by parts, and failed over and over again. $$\int_0^{\infty}\frac{\ln(x)}{1+x^2}dx=\color{#C00000}{\int_0^{1}\frac{\ln(x)}{1+x^2}dx}+\int_1^{\infty} \frac{\ln(x)}{1+x^2}dx$$ Let's find out: $$\color{#C00000}{\int_0^{1}\frac{\ln(x)}{1+x^2}dx}$$ Substitute: $$t=\frac{1}{x}$$ $$\int_{\infty}^{1}\frac{\ln(\frac{1}{t})}{1+(\frac{1}{t})^2}\cdot-\frac{1}{t^2}dt=-\int_{\infty}^{1} \frac{\ln (t^{-1})}{t^2+1} dt=\int_{\infty}^{1} \frac{\ln (t)}{t^2+1} dt$$ Note:$$\color{blue}{\int_a^b f(t) dt= -\int_b^a f(t) dt}$$ $$\int_{\infty}^{1} \frac{\ln (t)}{1+t^2} dt=-\int_1^{\infty} \frac{\ln (t)}{1+t^2} dt$$ Back to $$\int_0^{\infty}\frac{\ln(x)}{1+x^2}dx=\color{#C00000}{\int_0^{1}\frac{\ln(x)}{1+x^2}dx}+\int_1^{\infty} \frac{\ln(x)}{1+x^2}dx$$ $$=\color{green}{-\int_1^{\infty} \frac{\ln (t)}{1+t^2} dt+\int_1^{\infty} \frac{\ln(x)}{1+x^2}dx}$$ I'm sure you can see that $x$ and $t$ are just two letters assigned to the integral, so that: $$\int_1^{\infty} \frac{\ln (t)}{1+t^2} dt=\int_1^{\infty} \frac{\ln(x)}{1+x^2}dx$$ Therefore: $$-\int_1^{\infty} \frac{\ln (t)}{1+t^2} dt+\int_1^{\infty} \frac{\ln(x)}{1+x^2}dx=0$$ $$\int_0^{\infty} \frac{\ln(x)}{1+x^2}dx=\color{green}{-\int_1^{\infty} \frac{\ln (t)}{1+t^2} dt+\int_1^{\infty} \frac{\ln(x)}{1+x^2}dx}=0$$ Hint Just write $$I=\int_0^{\infty}\frac{\ln(x)}{1+x^2}dx=\int_0^1\frac{\ln(x)}{1+x^2}dx+\int_1^{\infty}\frac{\ln(x)}{1+x^2}dx$$ For the second integral, change variable $x=\frac{1}{y}$
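The answer can also be checked numerically. Substituting $x = \tan u$ (so $dx = \sec^2 u\,du$ and $1 + x^2 = \sec^2 u$) turns the integral into $\int_0^{\pi/2} \ln(\tan u)\,du$, whose integrand is odd about $u = \pi/4$, so the value is $0$. A stdlib-only sketch (the midpoint rule and grid size are arbitrary choices):

```python
import math

def integral(n: int = 100_000) -> float:
    """Midpoint rule for ∫_0^{π/2} ln(tan u) du; midpoints avoid both
    endpoint singularities, and symmetric points cancel in pairs."""
    h = (math.pi / 2) / n
    return sum(math.log(math.tan((i + 0.5) * h)) for i in range(n)) * h

val = integral()
print(abs(val) < 1e-6)   # → True
```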
I am viewing an example of finding the Fisher information for a single sample from an exponential distribution where: $$P(x|\theta) = \frac{1}{\theta}e^{-\frac{x}{\theta}}$$ The score $S$ is $S(x|\theta) = \frac{\partial}{\partial\theta}\log P(x|\theta) = -\frac{1}{\theta} + \frac{x}{\theta^2}$. Fisher information is the expectation of $S^2$, which is: $$E_x[S^2] = E_x\left[\frac{1}{\theta^2} - 2\frac{x}{\theta^3} + \frac{x^2}{\theta^4}\right]$$ I know this might sound strange, but I don't know how to calculate this expectation. Something is mixed for me here. I know that $$E[x]=\int xp(x)dx$$ But I can't connect the two pieces of information. In the book, they got: $$E_x[S^2] = E_x\left[\frac{1}{\theta^2} - 2\frac{x}{\theta^3} + \frac{x^2}{\theta^4}\right] = \frac{1}{\theta^2} - 2\frac{\theta}{\theta^3} + \frac{2\theta^2}{\theta^4} = \frac{1}{\theta^2}$$ But I can't see how they got that. Any information will be useful. Thanks.
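A quick Monte Carlo check (not from the book; it relies on $E[x] = \theta$ and $E[x^2] = 2\theta^2$ implicitly through sampling, with an arbitrary $\theta$ and sample size) confirms the book's value $E_x[S^2] = 1/\theta^2$:

```python
import random

# Estimate E[S²] for the exponential density p(x|θ) = (1/θ)e^{-x/θ},
# where the score is S = -1/θ + x/θ².  The result should be ≈ 1/θ².
random.seed(0)
theta = 2.0
n = 200_000
samples = [random.expovariate(1 / theta) for _ in range(n)]  # mean θ
est = sum((-1 / theta + x / theta**2) ** 2 for x in samples) / n

print(abs(est - 1 / theta**2) < 0.02)   # → True  (here 1/θ² = 0.25)
```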
But if you don't want to have a Google account: Chrome is really good. Much faster than FF (I can't run FF on either of the laptops here) and more reliable (it restores your previous session if it crashes with 100% certainty). And Chrome has a Personal Blocklist extension which does what you want. : ) Of course you already have a Google account but Chrome is cool : ) Guys, I feel a little defeated in trying to understand infinitesimals. I'm sure you all think this is hilarious. But if I can't understand this, then I'm yet again stalled. How did you guys come to terms with them, later in your studies? do you know the history? Calculus was invented based on the notion of infinitesimals. There were serious logical difficulties found in it, and a new theory developed based on limits. In modern times using some quite deep ideas from logic a new rigorous theory of infinitesimals was created. @QED No. This is my question as best as I can put it: I understand that lim_{x->a} f(x) = f(a), but then to say that the gradient of the tangent curve is some value, is like saying that when x=a, then f(x) = f(a). The whole point of the limit, I thought, was to say, instead, that we don't know what f(a) is, but we can say that it approaches some value. I have problem with showing that the limit of the following function$$\frac{\sqrt{\frac{3 \pi}{2n}} -\int_0^{\sqrt 6}(1-\frac{x^2}{6}+\frac{x^4}{120})^ndx}{\frac{3}{20}\frac 1n \sqrt{\frac{3 \pi}{2n}}}$$equal to $1$, with $n \to \infty$. @QED When I said, "So if I'm working with function f, and f is continuous, my derivative dy/dx is by definition not continuous, since it is undefined at dx=0." I guess what I'm saying is that (f(x+h)-f(x))/h is not continuous since it's not defined at h=0. @KorganRivera There are lots of things wrong with that: dx=0 is wrong. dy/dx - what/s y? "dy/dx is by definition not continuous" it's not a function how can you ask whether or not it's continous, ... etc. 
In general this stuff with 'dy/dx' is supposed to help as some kind of memory aid, but since there's no rigorous mathematics behind it - all it's going to do is confuse people in fact there was a big controversy about it since using it in obvious ways suggested by the notation leads to wrong results @QED I'll work on trying to understand that the gradient of the tangent is the limit, rather than the gradient of the tangent approaches the limit. I'll read your proof. Thanks for your help. I think I just need some sleep. O_O @NikhilBellarykar Either way, don't highlight everyone and ask them to check out some link. If you have a specific user which you think can say something in particular feel free to highlight them; you may also address "to all", but don't highlight several people like that. @NikhilBellarykar No. I know what the link is. I have no idea why I am looking at it, what should I do about it, and frankly I have enough as it is. I use this chat to vent, not to exercise my better judgment. @QED So now it makes sense to me that the derivative is the limit. What I think I was doing in my head was saying to myself that g(x) isn't continuous at x=h so how can I evaluate g(h)? But that's not what's happening. The derivative is the limit, not g(h). @KorganRivera, in that case you'll need to be proving $\forall \varepsilon > 0,\,\,\,\, \exists \delta,\,\,\,\, \forall x,\,\,\,\, 0 < |x - a| < \delta \implies |f(x) - L| < \varepsilon.$ by picking some correct L (somehow) Hey guys, I have a short question a friend of mine asked me which I cannot answer because I have not learnt about measure theory (or whatever is needed to answer the question) yet. He asks what is wrong with \int_0^{2 \pi} \frac{d}{dn} e^{inx} dx when he applies Lesbegue's dominated convergence theorem, because apparently, if he first integrates and then derives, the result is 0 but if he first derives and then integrates it's not 0. Does anyone know?
I've seen a couple of questions regarding the pumping lemma that are pretty similar to each other, and this one is unfortunately no exception. Most likely this question will be marked as a duplicate. But I guess there's no harm in asking. I was trying to show using the pumping lemma that this language is not regular: $$ { L = \{a^n b a^n \mid n \in{\mathbb N}\} } $$ I've tried the following, although I'm not sure of what I'm doing. I've also tried to follow the example in this question: $$ w = xyz $$ $$ xyz = a^p b^p a^p $$ $$ |w| = 3p $$ $$ 3p > p $$ $$ |xy| \leq{p} $$ $$ |y| \geq{p} $$ but here is as far as I come. A thing that I don't understand is that $b$ doesn't have an $n$ exponent like $a$ does. Is it ok what I did by placing a $p$ exponent on $b$? I would appreciate any help in this matter.
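For what it's worth, the pumping argument can be exercised mechanically for a fixed $p$, using the standard conditions $|xy| \le p$ and $|y| \ge 1$ and the word $w = a^p b a^p$ (note $b$ appears exactly once, without an exponent). A sketch with an illustrative $p$:

```python
def in_L(w: str) -> bool:
    """Membership test for L = {a^n b a^n}."""
    if w.count('b') != 1 or set(w) - {'a', 'b'}:
        return False
    left, right = w.split('b')
    return left == right

p = 7                          # stands in for the pumping length
w = 'a' * p + 'b' + 'a' * p    # w ∈ L with |w| ≥ p

# Every decomposition w = xyz with |xy| ≤ p and |y| ≥ 1 puts y inside the
# first block of a's, so pumping y up unbalances the two blocks.
for i in range(p + 1):                 # x = w[:i]
    for j in range(i + 1, p + 1):      # y = w[i:j], so |xy| = j ≤ p, |y| ≥ 1
        x, y, z = w[:i], w[i:j], w[j:]
        assert not in_L(x + y * 2 + z)

print("no decomposition survives pumping, so L is not regular")
```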
In an atom, the electron is not just spread out evenly and motionless around the nucleus. The electron is still moving, however, it is moving in a very special way such that the wave that it forms around the nucleus keeps the shape of the orbital. In some sense, the orbital is constantly rotating. To understand precisely what is happening let's calculate some observables. Consider the Hydrogen $1s$ state which is described by \begin{equation} \psi _{ 1,0,0} = R _1 (r) Y _0 ^0 = R _{1,0} (r) \frac{1}{ \sqrt{ 4\pi } } \end{equation} where $ R _{1,0} \equiv 2 a _0 ^{ - 3/2} e ^{ - r / a _0 } $ is some function of only distance from the origin and is irrelevant for this discussion and the wavefunction is denoted by the quantum numbers, $n$, $ \ell $, and $ m $, $ \psi _{ n , \ell , m } $. The expectation value of momentum in the angular directions are both zero,\begin{equation} \int \,d^3r \psi _{ 1,0,0 } ^\ast p _\phi \psi _{ 1,0,0 } = \int \,d^3r \psi _{ 1,0,0 } ^\ast p _\theta \psi _{ 1,0,0 } = 0 \end{equation} where $ p _\phi \equiv - i \frac{1}{ r } \frac{ \partial }{ \partial \phi } $ and $ p _\theta \equiv \frac{1}{ r \sin \theta } \frac{ \partial }{ \partial \theta } $. However, this is not the case for the $ 2P _{+1} $ state ($ \ell = 1, m = 1 $) for example. Here we have,\begin{align} \left\langle p _\phi \right\rangle & = - i \int \,d^3r \frac{1}{ r}\psi _{ 2,1,1} ^\ast \frac{ \partial }{ \partial \phi }\psi _{ 2,1,1} \\ & = - i \int d r r R _{2,1} (r) ^\ast R _{ 2,1} (r) \int d \phi ( - i ) \sqrt{ \frac{ 3 }{ 8\pi }} \int d \theta \sin ^3 \theta \\ & = - \left( \int d r R _{2,1} (r) ^\ast R _{2,1} (r) \right) \sqrt{ \frac{ 3 }{ 8\pi }} 2\pi \frac{ 4 }{ 3} \\ & \neq 0\end{align} where $ R _{2,1} (r) \equiv \frac{1}{ \sqrt{3} } ( 2 a _0 ) ^{ - 3/2} \frac{ r }{ a _0 } e ^{ - r / 2 a _0 } $ (again the particular form is irrelevant for our discussion, the important point being that its integral is not zero).
Thus there is momentum moving in the $ \hat{\phi} $ direction. The electron is certainly spread out in a "dumbbell" shape, but the "dumbbell" isn't staying still. It's constantly rotating around in space instead. Note that this is distinct from the spin of an electron, which does not involve any movement in real space but is instead an intrinsic property of a particle.
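The angular integrals entering the $\langle p_\phi \rangle$ computation above can be spot-checked numerically; a stdlib-only sketch (midpoint rule, arbitrary grid size):

```python
import math

# Check ∫_0^π sin³θ dθ = 4/3, the θ-integral used in ⟨p_φ⟩, and the
# normalization of Y_1^1: (3/8π) · ∫ sin³θ dθ · ∫_0^{2π} dφ = 1,
# since |Y_1^1|² = (3/8π) sin²θ and dΩ contributes another sinθ.
n = 100_000
h = math.pi / n
I = sum(math.sin((i + 0.5) * h) ** 3 for i in range(n)) * h
print(abs(I - 4 / 3) < 1e-8)                    # → True

norm = (3 / (8 * math.pi)) * I * (2 * math.pi)
print(abs(norm - 1) < 1e-8)                     # → True
```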
Here is the proposed theory: Definition: Let $M$ be a nonempty set with a binary operation $+$ satisfying the following properties: P-0: The operation $+: M \times M \to M$ is both associative and commutative. P-1: $\text{For every } x,y,z \in M \text{, if } z + x = z + y \, \text{ then } \, x = y$. P-2: $\text{For every } x,y,z \in M \text{, if } z = x + y \, \text{ then } \, z \ne x$. P-3: $\text{For every } x,y \in M \text{, if } x \ne y \, \text{ then } \, [\exists u \; | \, x = y +u] \text{ or } [\exists u \; | \, y = x +u]$. P-4: $\text{For all } X \subset M \text{ such that } X \ne \emptyset $ $\quad \exists \, x_0 \in X \text{ such that }$ $\quad \forall x \in X \; \; [\,x = x_0 \text{ or } (\exists u \in M \text{ such that } x_0 + u = x)\,]$ =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-= Proposition 1: There exists a unique element, call it $1 \in M$, such that any other element $x \in M$ can be written uniquely in the form $x = 1 +u$. Proof Apply $\text{P-4}$ to the universal set $M$. $\quad \blacksquare$ Theorem 2: Let $N$ be a subset of $M$ closed under addition. If $1 \in N$ then $N = M$. Proof To get a contradiction, suppose $M$ has elements not in $N$. Then applying $\text{P-4}$ and proposition 1 to $M\setminus N$ the following must hold $\tag 1 1 + \alpha = \beta \text{ where } \beta \in M\setminus N$ $\quad \text{ and every other element of } M\setminus N \text{ is 'larger than' } \beta$ If $\alpha \in N$, then by $\text{(1)}$ the element $\beta$ would also belong to $N$. But $\text{(1)}$ also implies that $\alpha$ is 'less than' $\beta$, a contradiction.$\quad \blacksquare$ Theorem 3: Let $D$ be any subset of $M$ satisfying the following two properties: $\quad 1 \in D$ $\quad \text{If } d \in D \text{ then } d + 1 \in D$ Then $D = M$. Proof To arrive at a contradiction, apply $\text{P-4}$ to get the 'least' element $m$ of $M$ that is not in $D$. By proposition 1 we can write $1 + u = m$.
Now $u$ must be in $D$ (it is 'smaller' than $m$), but then so is $m$, a contradiction.$\quad \blacksquare$ We can also formulate the strong induction formulation of theorem 3, but we simply assume it in what follows. Now let $K$ be the carrier set for an algebraic structure defined by a binary operation $\times$. No properties on $\times$ need be postulated to prove the following result. Theorem 4: For any two morphisms $\quad f: (M,1,+) \to (K,\times)$ $\quad g: (M,1,+) \to (K,\times)$ if $f(1) = g(1)$ then $f = g$. Proof Let $D$ be the set of elements in $M$ where the two functions agree. Using induction (theorem 3), we see that the morphisms are equal. $\quad \blacksquare$ Theorem 5: If $(M,1,+)$ and $(N,1,+)$ are two additive systems satisfying $\text{P-0}$ thru $\text{P-4}$ then there exists one and only one isomorphism between them. Proof (sketch) We have the two 'increment by 1' functions, $\sigma_M$ and $\sigma_N$. The construction of a bijective mapping $\phi$ is defined by $\quad \phi: 1 \mapsto 1$ $\quad (\phi \circ \sigma_M)(m) = (\sigma_N \circ \phi)(m)$ and this mapping is the only viable morphism. To show $\phi$ is indeed a morphism, use induction, so we can check off $\quad \phi ((u+1) + v) = (\phi \circ \sigma_M) (u + v) = (\sigma_N \circ \phi)(u+v)=$ $\quad \quad \sigma_N(\phi(u) + \phi(v))= (\phi(u) + \phi(1)) + \phi(v) = \phi(u + 1) + \phi(v) $ etc. So the same relation defining the bijection is used to show it is a morphism. $\quad \blacksquare$ Are these valid arguments and logical constructions? If they are, we now get addition 'free of charge' and can use inductive arguments. Of course we still need to define multiplication, but that can be done by analyzing the morphisms $\mu: M \to M$. By theorem 4 any self-morphism of $M$ is completely specified by knowing where the generator $1$ goes. So do all morphisms of the form $\tag 2 \mu_n: 1 \mapsto n \text{ with } n \in M$ actually exist?
The identity mapping $\mu_1$ is a morphism and it commutes with itself. Assume that for $n \in M$ we have a morphism $\mu_n$ and that it commutes with all morphisms $\mu_k$ where $k + u = n$. For $n+1$ define $\tag 3 \mu_{n+1} = \mu_{n} + \mu_{1}$ It is immediate that $\mu_{n+1}$ is a morphism that commutes with itself and also commutes with $\mu_{n}$. Moreover, if $\mu_k$ commutes with $\mu_{n}$ then $\tag 4 \mu_{n+1} \circ \mu_k = (\mu_{n} + \mu_{1}) \circ \mu_k = (\mu_{n} \circ \mu_k) + (\mu_{1} \circ \mu_k) =$ $\quad \quad \quad \quad \quad \quad \quad \quad (\mu_{k} \circ \mu_n) + \mu_k = \mu_{k} \circ (\mu_n + \mu_1) = \mu_k \circ \mu_{n+1}$ We have defined a commutative binary operation $*$, call it multiplication, on $M$. Theorem 6: In a natural way, $M$ is the carrier set for two binary operations $(M,1,+,*)$ with multiplication distributing over addition. The above simple arguments, once verified, provide an alternative to using the Dedekind–Peano axioms.
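As a lightweight, finite sanity check (not a proof — P-4 needs the full well-ordering of the carrier), one can verify that the intended model, the positive integers under addition, satisfies P-1 through P-3 on an initial fragment:

```python
from itertools import product

# Check P-1, P-2, P-3 for (positive integers, +) on the fragment {1, ..., 19}.
# P-0 (associativity/commutativity) and P-4 (well-ordering) hold for all of ℕ⁺.
M = range(1, 20)

for x, y, z in product(M, repeat=3):     # P-1: cancellation
    if z + x == z + y:
        assert x == y

for x, y in product(M, repeat=2):        # P-2: z = x + y implies z ≠ x
    assert x + y != x

for x, y in product(M, repeat=2):        # P-3: comparability via a difference
    if x != y:
        assert (x - y in M) or (y - x in M)

print("P-1, P-2, P-3 hold on the fragment")
```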
Mathematics - Functional Analysis and Mathematics - Metric Geometry Abstract The following strengthening of the Elton-Odell theorem on the existence of $(1+\epsilon)$-separated sequences in the unit sphere $S_X$ of an infinite dimensional Banach space $X$ is proved: There exists an infinite subset $S\subseteq S_X$ and a constant $d>1$, satisfying the property that for every $x,y\in S$ with $x\neq y$ there exists $f\in B_{X^*}$ such that $d\leq f(x)-f(y)$ and $f(y)\leq f(z)\leq f(x)$, for all $z\in S$. Comment: 15 pages, to appear in Bulletin of the Hellenic Mathematical Society Given a finite dimensional Banach space $X$ with $\dim X = n$ and an Auerbach basis of $X$, it is proved that there exists a set $D$ of $n + 1$ linear combinations (with coordinates $0, -1, +1$) of the members of the basis, so that each pair of distinct elements of $D$ has distance greater than one. Comment: 15 pages. To appear in MATHEMATIKA
Permanent link: https://www.ias.ac.in/article/fulltext/pram/087/06/0086 The parameters of radio frequency helium discharge under atmospheric pressure were studied by electrical and optical measurements using a high voltage probe, a current probe and optical emission spectroscopy. Two discharge modes, $\alpha$ and $\gamma$, were observed within certain limits. During the $\alpha$ to $\gamma$ mode transition, a decrease in voltage (280–168 V), current (2.05–1.61 A) and phase angle ($76^{\circ}-56^{\circ}$) occurred. The discharge parameters such as resistance, reactance, sheath thickness, electron density, excitation temperature and gas temperature were assessed by electrical measurements using an equivalent circuit model and optical emission spectroscopy. In the $\alpha$ mode, the discharge current increased from 1.17 to 2.05 A and the electron density increased from $0.19 \times 10^{12}$ to $0.47 \times 10^{12}\ {\rm cm}^{-3}$, while the sheath thickness decreased from 0.40 to 0.25 mm. The excitation temperatures in the $\alpha$ and $\gamma$ modes were 3266 and 4500 K respectively, evaluated by Boltzmann’s plot method. The estimated gas temperature increased from 335 K in the $\alpha$ mode to 485 K in the $\gamma$ mode, suggesting that the radio frequency atmospheric pressure helium discharge can be used for surface treatment applications.
2018-09-11 04:29 Proprieties of FBK UFSDs after neutron and proton irradiation up to $6\times10^{15}$ neq/cm$^2$ / Mazza, S.M. (UC, Santa Cruz, Inst. Part. Phys.) ; Estrada, E. (UC, Santa Cruz, Inst. Part. Phys.) ; Galloway, Z. (UC, Santa Cruz, Inst. Part. Phys.) ; Gee, C. (UC, Santa Cruz, Inst. Part. Phys.) ; Goto, A. (UC, Santa Cruz, Inst. Part. Phys.) ; Luce, Z. (UC, Santa Cruz, Inst. Part. Phys.) ; McKinney-Martinez, F. (UC, Santa Cruz, Inst. Part. Phys.) ; Rodriguez, R. (UC, Santa Cruz, Inst. Part. Phys.) ; Sadrozinski, H.F.-W. (UC, Santa Cruz, Inst. Part. Phys.) ; Seiden, A. (UC, Santa Cruz, Inst. Part. Phys.) et al. The properties of 60-$\mu$m thick Ultra-Fast Silicon Detectors (UFSD) manufactured by Fondazione Bruno Kessler (FBK), Trento (Italy) were tested before and after irradiation with minimum ionizing particles (MIPs) from a $^{90}$Sr $\beta$-source. [...] arXiv:1804.05449. - 13 p. Preprint - Full text 2018-08-25 06:58 Charge-collection efficiency of heavily irradiated silicon diodes operated with an increased free-carrier concentration and under forward bias / Mandić, I (Ljubljana U. ; Stefan Inst., Ljubljana) ; Cindro, V (Ljubljana U. ; Stefan Inst., Ljubljana) ; Kramberger, G (Ljubljana U. ; Stefan Inst., Ljubljana) ; Mikuž, M (Ljubljana U. ; Stefan Inst., Ljubljana) ; Zavrtanik, M (Ljubljana U. ; Stefan Inst., Ljubljana) The charge-collection efficiency of Si pad diodes irradiated with neutrons up to $8 \times 10^{15} \ \rm{n} \ cm^{-2}$ was measured using a $^{90}$Sr source at temperatures from -180 to -30°C. The measurements were made with diodes under forward and reverse bias. [...] 2004 - 12 p. - Published in : Nucl. Instrum. Methods Phys. Res., A 533 (2004) 442-453 2018-08-23 11:31 Effect of electron injection on defect reactions in irradiated silicon containing boron, carbon, and oxygen / Makarenko, L F (Belarus State U.) ; Lastovskii, S B (Minsk, Inst. Phys.) 
; Yakushevich, H S (Minsk, Inst. Phys.) ; Moll, M (CERN) ; Pintilie, I (Bucharest, Nat. Inst. Mat. Sci.) Comparative studies employing Deep Level Transient Spectroscopy and C-V measurements have been performed on recombination-enhanced reactions between defects of interstitial type in boron-doped silicon diodes irradiated with alpha particles. It has been shown that self-interstitial related defects, which are immobile even at room temperature, can be activated by very low forward currents at liquid-nitrogen temperatures. [...] 2018 - 7 p. - Published in : J. Appl. Phys. 123 (2018) 161576 2018-08-23 11:31 Characterization of magnetic Czochralski silicon radiation detectors / Pellegrini, G (Barcelona, Inst. Microelectron.) ; Rafí, J M (Barcelona, Inst. Microelectron.) ; Ullán, M (Barcelona, Inst. Microelectron.) ; Lozano, M (Barcelona, Inst. Microelectron.) ; Fleta, C (Barcelona, Inst. Microelectron.) ; Campabadal, F (Barcelona, Inst. Microelectron.) Silicon wafers grown by the Magnetic Czochralski (MCZ) method have been processed in the form of pad diodes at Instituto de Microelectrònica de Barcelona (IMB-CNM) facilities. The n-type MCZ wafers were manufactured by Okmetic OYJ and have a nominal resistivity of $1 \rm{k} \Omega cm$. [...] 2005 - 9 p. - Published in : Nucl. Instrum. Methods Phys. Res., A 548 (2005) 355-363 2018-08-23 11:31 Silicon detectors: From radiation hard devices operating beyond LHC conditions to characterization of primary fourfold coordinated vacancy defects / Lazanu, I (Bucharest U.) ; Lazanu, S (Bucharest, Nat. Inst. Mat. Sci.) 
The physics potential at future hadron colliders such as the LHC and its upgrades in energy and luminosity (Super-LHC and Very-LHC, respectively), as well as the requirements for detectors under possible radiation-environment scenarios, are discussed in this contribution. Silicon detectors will be used extensively in experiments at these new facilities, where they will be exposed to high fluences of fast hadrons. The principal obstacle to long-term operation arises from bulk displacement damage in silicon, which is an irreversible process in the material and leads to an increase of the leakage current of the detector, degrades the signal/noise ratio, and increases the effective carrier concentration. [...] 2005 - 9 p. - Published in : Rom. Rep. Phys.: 57 (2005) , no. 3, pp. 342-348 External link: RORPE 2018-08-22 06:27 Numerical simulation of radiation damage effects in p-type and n-type FZ silicon detectors / Petasecca, M (Perugia U. ; INFN, Perugia) ; Moscatelli, F (Perugia U. ; INFN, Perugia ; IMM, Bologna) ; Passeri, D (Perugia U. ; INFN, Perugia) ; Pignatel, G U (Perugia U. ; INFN, Perugia) In the framework of the CERN-RD50 Collaboration, the adoption of p-type substrates has been proposed as a suitable means to improve the radiation hardness of silicon detectors up to fluences of $1 \times 10^{16} \rm{n}/cm^2$. In this work two numerical simulation models will be presented, for p-type and n-type silicon detectors respectively. [...] 2006 - 6 p. - Published in : IEEE Trans. Nucl. Sci. 53 (2006) 2971-2976 2018-08-22 06:27 Technology development of p-type microstrip detectors with radiation hard p-spray isolation / Pellegrini, G (Barcelona, Inst. Microelectron.) ; Fleta, C (Barcelona, Inst. Microelectron.) ; Campabadal, F (Barcelona, Inst. Microelectron.) ; Díez, S (Barcelona, Inst. Microelectron.) ; Lozano, M (Barcelona, Inst. Microelectron.) ; Rafí, J M (Barcelona, Inst. Microelectron.) 
; Ullán, M (Barcelona, Inst. Microelectron.) A technology for the fabrication of p-type microstrip silicon radiation detectors using p-spray implant isolation has been developed at CNM-IMB. The p-spray isolation has been optimized to withstand a gamma irradiation dose up to 50 Mrad (Si), which represents the ionization dose expected in the middle region of the ATLAS SCT detector at the future Super-LHC during 10 years of operation. [...] 2006 - 6 p. - Published in : Nucl. Instrum. Methods Phys. Res., A 566 (2006) 360-365 2018-08-22 06:27 Defect characterization in silicon particle detectors irradiated with Li ions / Scaringella, M (INFN, Florence ; U. Florence (main)) ; Menichelli, D (INFN, Florence ; U. Florence (main)) ; Candelori, A (INFN, Padua ; Padua U.) ; Rando, R (INFN, Padua ; Padua U.) ; Bruzzi, M (INFN, Florence ; U. Florence (main)) High-energy physics experiments at future very-high-luminosity colliders will require ultra radiation-hard silicon detectors that can withstand fast hadron fluences up to $10^{16}$ cm$^{-2}$. In order to test the detectors' radiation hardness in this fluence range, long irradiation times are required at the currently available proton irradiation facilities. [...] 2006 - 6 p. - Published in : IEEE Trans. Nucl. Sci. 53 (2006) 589-594
(Sorry, I was asleep at that time but forgot to log out, hence the apparent lack of response.) Yes you can (since $k=\frac{2\pi}{\lambda}$). To convert from path difference to phase difference, multiply by $k$ (equivalently, to go from phase difference back to path difference, divide by $k$); see this PSE post for details: http://physics.stackexchange.com/questions/75882/what-is-the-difference-between-phase-difference-and-path-difference
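For completeness, the conversion is trivial to put in code; here is a small sketch (the function names are mine, chosen for the example):

```python
import math

def phase_from_path(path_diff, wavelength):
    """Phase difference (radians) from a path difference: multiply by k = 2*pi/lambda."""
    return (2 * math.pi / wavelength) * path_diff

def path_from_phase(phase_diff, wavelength):
    """Path difference from a phase difference: divide by k = 2*pi/lambda."""
    return phase_diff / (2 * math.pi / wavelength)

# Half a wavelength of path difference corresponds to a phase difference of
# pi radians (destructive interference for two equal-amplitude waves)
delta_phi = phase_from_path(250e-9, 500e-9)  # approximately pi
```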
Mulliken populations (R.S. Mulliken, J. Chem. Phys. 23, 1833, 1841, 2338, 2343 (1955)) can be used to characterize the electronic charge distribution in a molecule and the bonding, antibonding, or nonbonding nature of the molecular orbitals for particular pairs of atoms. To develop the idea of these populations, consider a real, normalized molecular orbital composed from two normalized atomic orbitals. \[\psi _i = c_{ij}\phi _j + c_{ik}\phi_k \label{10-63}\] The charge distribution is described as a probability density by the square of this wavefunction. \[\psi ^2_i = c^2_{ij} \phi^2_j + c^2_{ik} \phi^2_k + 2c_{ij}c_{ik} \phi_j \phi_k \label{10-64}\] Integrating over all the electronic coordinates and using the fact that the molecular orbital and atomic orbitals are normalized produces \[1 = c^2_{ij} + c^2_{ik} + 2c_{ij}c_{ik}S_{jk} \label{10-65}\] where \(S_{jk}\) is the overlap integral involving the two atomic orbitals. Mulliken's interpretation of this result is that one electron in molecular orbital \(\psi_i\) contributes \(c^2_{ij}\) to the electronic charge in atomic orbital \(\phi_j\), \(c^2_{ik}\) to the electronic charge in atomic orbital \(\phi_k\), and \(2c_{ij}c_{ik}S_{jk}\) to the electronic charge in the overlap region between the two atomic orbitals. He therefore called \(c^2_{ij}\) and \(c^2_{ik}\) the atomic-orbital populations, and \(2c_{ij}c_{ik}S_{jk}\) the overlap population. The overlap population is >0 for a bonding molecular orbital, <0 for an antibonding molecular orbital, and 0 for a nonbonding molecular orbital. It is convenient to tabulate these populations in matrix form for each molecular orbital. Such a matrix is called the Mulliken population matrix. If there are two electrons in the molecular orbital, then these populations are doubled. 
Each column and each row in a population matrix corresponds to an atomic orbital; the diagonal elements give the atomic-orbital populations, and the off-diagonal elements give the overlap populations. For our example, Equation \(\ref{10-63}\), the population matrix is \[P_i = \begin {pmatrix} c^2_{ij} & 2c_{ij}c_{ik}S_{jk} \\ 2c_{ij}c_{ik}S_{jk} & c^2_{ik} \end {pmatrix} \label{10-66}\] Since there is one population matrix for each molecular orbital, it generally is difficult to deal with all the information in the population matrices. Forming the net population matrix reduces the amount of data. The net population matrix is the sum of the population matrices for all the occupied orbitals. \[NP = \sum \limits_{i = occupied} P_i \label{10-67}\] The net population matrix gives the atomic-orbital populations and overlap populations resulting from all the electrons in all the molecular orbitals. The diagonal elements give the total charge in each atomic orbital, and the off-diagonal elements give the total overlap population, which characterizes the total contribution of the two atomic orbitals to the bond between the two atoms. The gross population matrix condenses the data in a different way. Whereas the net population matrix combines the contributions from all the occupied molecular orbitals, the gross population matrix combines the overlap populations with the atomic-orbital populations for each molecular orbital. The columns of the gross population matrix correspond to the molecular orbitals, and the rows correspond to the atomic orbitals. A matrix element specifies the amount of charge, including the overlap contribution, that a particular molecular orbital contributes to a particular atomic orbital. Values for the matrix elements are obtained by dividing each overlap population in half and adding each half to the atomic-orbital populations of the participating atomic orbitals. 
The matrix elements provide the gross charge that a molecular orbital contributes to the atomic orbital. Gross means that overlap contributions are included. The gross population matrix therefore also is called the charge matrix for the molecular orbitals. An element of the gross population matrix (in the jth row and ith column) is given by \[GP_{ji} = (P_i)_{jj} + \frac {1}{2} \sum \limits _{k \ne j} (P_i)_{jk} \label{10-68}\] where \(P_i\) is the population matrix for the ith molecular orbital, \((P_i)_{jj}\) is the atomic-orbital population, and \((P_i)_{jk}\) is the overlap population for atomic orbitals j and k in the ith molecular orbital. Further condensation of the data can be obtained by considering atomic and overlap populations by atoms rather than by atomic orbitals. The resulting matrix is called the reduced population matrix. The reduced population matrix is obtained from the net population matrix by adding the atomic-orbital populations and the overlap populations of all the atomic orbitals of the same atom. The rows and columns of the reduced population matrix correspond to the atoms. Atomic-orbital charges are obtained by adding the elements in the rows of the gross population matrix for the occupied molecular orbitals. Atomic charges are obtained from the atomic-orbital charges by adding the atomic-orbital charges on the same atom. Finally, the net charge on an atom is obtained by subtracting the atomic charge from the nuclear charge adjusted for complete shielding by the 1s electrons. Exercise \(\PageIndex{1}\) Using your results from Exercise 10.29 for HF, determine the Mulliken population matrix for each molecular orbital, the net population matrix, the charge matrix for the molecular orbitals, the reduced population matrix, the atomic-orbital charges, the atomic charges, the net charge on each atom, and the dipole moment. Note: The bond length for HF is 91.7 pm and the experimental value for the dipole moment is \(6.37 \times 10^{-30}\, C \cdot m\). 
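The bookkeeping above can be sketched in a few lines of code for a toy two-orbital case. One convention note: Equation (10-66) writes the full overlap population \(2c_{ij}c_{ik}S_{jk}\) in each off-diagonal slot, whereas the sketch below stores half of it in each mirror slot so that all matrix entries sum to one electron per occupied orbital; the coefficients and overlap value are made up for the demonstration.

```python
def mo_population_matrix(c, S):
    """Population matrix for one real, normalized MO with AO coefficients c
    and AO overlap matrix S.  Diagonal entries are the AO populations
    c_j**2; the two mirror off-diagonal entries (j,k) and (k,j) together
    carry the overlap population 2*c_j*c_k*S_jk."""
    n = len(c)
    return [[c[j] * c[k] * S[j][k] for k in range(n)] for j in range(n)]

def gross_population(P):
    """Charge each AO receives from this MO: its AO population plus half of
    every overlap population it participates in (cf. Equation 10-68)."""
    n = len(P)
    return [P[j][j] + sum(P[j][k] for k in range(n) if k != j) for j in range(n)]

# Toy two-AO example: pick raw coefficients, then normalize the MO so that
# c_j**2 + c_k**2 + 2*c_j*c_k*S_jk = 1 (Equation 10-65)
S = [[1.0, 0.4], [0.4, 1.0]]          # illustrative overlap matrix
cj_raw, ck_raw = 0.6, 0.5             # illustrative raw coefficients
norm = (cj_raw**2 + ck_raw**2 + 2 * cj_raw * ck_raw * S[0][1]) ** 0.5
c = [cj_raw / norm, ck_raw / norm]

P = mo_population_matrix(c, S)
gp = gross_population(P)
overlap_pop = P[0][1] + P[1][0]       # = 2*c_j*c_k*S_jk, > 0 here: bonding
```

Because the MO is normalized, every entry of `P` taken together accounts for exactly one electron, and the gross populations redistribute the same single electron between the two atomic orbitals.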
Contributors Adapted from "Quantum States of Atoms and Molecules" by David M. Hanson, Erica Harvey, Robert Sweeney, Theresa Julia Zielinski
Let $X \in \mathbb{Z}^n$. I need to find the number of pairs $(i, j)$ with $0 < i < j \le n$ such that $X_i > X_j$, as well as the number of pairs $(i, j)$, $0 < i < j \le n$, such that $X_i = X_j$. I know how to calculate both in $O(n \log n)$ time. How can I prove that a comparison-based algorithm with better complexity does not exist (check the "Update" section at the bottom)? An algorithm with $O(n \log n)$ time complexity (one of the known methods): We store: 1) two accumulators, one for each value (both initially 0); 2) a balanced search tree (AVL, for example) maintaining the size of the subtree at each vertex (so the operation "add element" costs $O(\log \text{size})$). Elements in this tree are 2-tuples in $((-\infty\cup\mathbb{Z}\cup\infty), \mathbb{Z})$; comparison is done by the first element and, if the first elements are equal, by the second element. Initially this tree is empty. Subprocedure: find the number of elements in the tree that lie in the interval $[X, Y]$, at cost $O(\log n)$ (trivial). Call this procedure count$(X, Y)$. So, for $i$ from $1$ to $n$: add to the first accumulator count$((-\infty, 1), (X_i - 1, i))$; add to the second accumulator count$((X_i - 1, -1), (X_i - 1, i - 1))$; add $(X_i, i)$ to the balanced tree. So, $n$ steps with $O(\log n)$ cost. Update: It cannot be calculated faster than $O(n)$: the second count can be computed in $O(n)$ time (with simple hash tables), so I need to prove the time-complexity lower bound only for counting the pairs $(i, j)$, $0 < i < j \le n$, such that $X_i > X_j$.
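For reference, the first count is the classic inversion-counting problem; a merge-sort sketch (roughly equivalent in cost to the balanced-tree method above) together with the $O(n)$ hash-table count of equal pairs might look like this:

```python
from collections import Counter

def count_inversions(xs):
    """Count pairs (i, j), i < j, with xs[i] > xs[j] via merge sort: O(n log n)."""
    def sort(a):
        if len(a) <= 1:
            return a, 0
        mid = len(a) // 2
        left, inv_l = sort(a[:mid])
        right, inv_r = sort(a[mid:])
        merged, inv = [], inv_l + inv_r
        i = j = 0
        while i < len(left) and j < len(right):
            if left[i] <= right[j]:
                merged.append(left[i]); i += 1
            else:
                # left[i] > right[j]: right[j] forms an inversion with every
                # remaining element of `left`
                inv += len(left) - i
                merged.append(right[j]); j += 1
        merged.extend(left[i:]); merged.extend(right[j:])
        return merged, inv
    return sort(list(xs))[1]

def count_equal_pairs(xs):
    """Count pairs (i, j), i < j, with xs[i] == xs[j] in O(n) with a hash table."""
    return sum(m * (m - 1) // 2 for m in Counter(xs).values())

print(count_inversions([3, 1, 2, 3]), count_equal_pairs([3, 1, 2, 3]))  # -> 2 1
```

The merge step is where the counting happens: whenever an element of the right half is emitted before elements of the left half, each of those left-half elements is a larger element appearing earlier, i.e. an inversion.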
Conveners Flavor Violation David Hitlin (Caltech) Flavor Violation Alex Cerri (University of Sussex (GB)) Flavor Violation Yury Kolomensky (UC Berkeley/LBNL) Flavor Violation Alex Cerri (University of Sussex (GB)) Flavor Violation Hideki Okawa (University of Tsukuba (JP)) Flavor Violation Brando Bellazzini (CEA-Saclay) Flavor Violation Emmanuel Stamou (Weizmann Institute of Science) Flavor Violation Joshua Berger (Cornell University) Sridhara Dasu (University of Wisconsin (US)) 24/08/2015, 14:30 Flavor Violation Theory and Experiment Rare decays and angular distributions of B mesons are particularly sensitive probes of physics beyond the Standard Model. This talk reviews results on these processes obtained with the data collected by the CMS experiment during the first Run of the LHC. We will show the most precise single-experiment results on the measurement of the Bs->µµ branching fraction and search for Bd->µµ, along with... Gerald Eigen (University of Bergen (NO)) 24/08/2015, 14:55 Flavor Violation Theory and Experiment The large amount of Heavy Flavour data collected by the ATLAS experiment is potentially sensitive to New Physics, which may be found in the mixing of B meson states, or through processes that are naturally suppressed in the Standard Model. We present the most recent results on the measurement of the decay of the Bs into J/psi phi based on full data collected in LHC Run-1 and with updated... Mauro Valli 24/08/2015, 15:20 Flavor Violation Theory and Experiment We critically reassess the theoretical uncertainties in the Standard Model calculation of $B \rightarrow K^{*}\mu\,\mu$ observables, focusing on the low $q^{2}$ region. We point out that even optimized observables are affected by sizable uncertainties coming both from full form factors, when one considers the departure from the infinite mass limit, and from long distance effects. In... 
Marcin Chrzaszcz (University of Zurich (CH)) 24/08/2015, 16:30 Flavor Violation Theory and Experiment Electroweak and radiative penguin b-hadron and c-hadron decays set strong constraints on the SUSY parameter space. Recent LHCb measurements have shown indications of large unexpected asymmetries in B->K*mumu and hints of lepton universality violation. Latest results involving new decay modes are presented. Ramon Niet (Technische Universitaet Dortmund (DE)) 24/08/2015, 16:55 Flavor Violation Theory and Experiment B and D mesons provide an ideal laboratory for measurements of CP violation and searches for CPV beyond the Standard Model. We present recent LHCb results on measurements from several decay modes and based on different techniques, including time-dependent, time-integrated and Dalitz analyses. 157. Enhancement of Br($B_d \to \mu^+\mu^-$)/Br($B_s \to \mu^+\mu^-$) in supersymmetric unified models Dr Yukihiro Mimura (National Taiwan University) 24/08/2015, 17:20 Flavor Violation Theory and Experiment The recent measurement of the branching fractions of the rare B meson decays, $B_d^0 \to \mu^+\mu^-$ and $B_s^0 \to \mu^+\mu^-$, is one of the most impressive achievements of the LHC experiments. The ratio of the branching fractions is about 2 sigma above the Standard Model prediction. Although the deviation is not very significant in the current statistical status, it is interesting to study... Gerald Eigen (University of Bergen (NO)) 25/08/2015, 14:30 Flavor Violation Theory and Experiment We present a selection of recent studies performed by using the full data sample collected by the BABAR detector. Among them are a measurement of CP asymmetries in the B0-B0bar mixing process by using inclusive dilepton samples, an angular analysis of the B -> K* l+l- decay, which might indirectly probe the presence of beyond-Standard-Model particles in the loop diagrams, and studies of... 
Cristina Biino (INFN Torino (IT)) 25/08/2015, 14:55 Flavor Violation Theory and Experiment The rare decays $K^+ \rightarrow \pi^+ \nu \nu$ are excellent processes for testing new physics at the highest scales, complementary to the LHC, thanks to their theoretical cleanness. The NA62 experiment at the CERN SPS aims to collect on the order of 100 events in two years of data taking, keeping the background at the level of 10%. Part of the experimental apparatus has been commissioned during... Ayan Paul (INFN, Sezione di Roma) 25/08/2015, 15:20 Flavor Violation Theory and Experiment The question of the validity of analyzing charmed meson decays to pairs of hadrons within the SU(3) framework has been long and often debated. While there are convincing arguments that small breaking of this symmetry can accommodate the current experimental results, the inability to compute QCD effects in these modes renders it quite impossible to justify with complete authority the... Sridhara Dasu (University of Wisconsin (US)) 25/08/2015, 16:30 Flavor Violation Theory and Experiment In this talk we present analyses involving Higgs bosons and heavy flavors at high transverse momentum by the CMS Experiment. In particular, we show results on searches for top quark flavor-changing neutral-current (FCNC) and top quark flavor-changing Higgs-current (FCHC) decays. In addition, measurements of the Higgs boson decays to leptons and bottom quarks, together with Lepton Flavor... Sandro Palestini (CERN) 25/08/2015, 16:55 Flavor Violation Theory and Experiment Heavy Flavours Higgs and flavour (Higgs to light lepton+quark couplings, Higgs CPV, Higgs flavour violation) flavour at the high pT frontier, flavour related searches, not trivial SUSY/composite searches (non-degenerate SUSY and not SUSY partners, MFV DM), top FCNC with specific emphasis on lepton and quark flavour. Dr Andrew Chen (U. 
of Michigan) 27/08/2015, 14:00 Flavor Violation Theory and Experiment Top quark pairs at the Tevatron are produced mainly through proton-antiproton interactions. The asymmetry of this production is intriguing. The search for new physics beyond the Standard Model is ongoing using CDF Run II data. The latest results on the top quark production asymmetry and on searches for a fermiophobic Higgs and FCNC will be given at the conference. Mr Mateusz Kamil Iskrzynski (University of Warsaw) 27/08/2015, 14:25 Flavor Violation Theory and Experiment The simplest Grand Unified Theory (GUT) embedding the Standard Model (SM) is based on the SU(5) symmetry. The unification of gauge couplings, failing in the SM, takes place in the R-parity conserving Minimal Supersymmetric Standard Model (MSSM). We investigated the possibility of satisfying the minimal SU(5) boundary conditions also for Yukawa matrices at the GUT scale within the MSSM. We... 274. CANCELED - Analysis of the quark sector in the 2HDM with a four-zero Yukawa texture using experimental CKM matrix data Olga Felix 27/08/2015, 14:50 Flavor Violation Theory and Experiment We analyse the structure of the Yukawa matrices, $\widetilde{ \bf Y}_{ _{1,2} }^{q}$, by assuming a four-zero texture ansatz for their definition, in the frame of the general 2-Higgs-Doublet Model. Explicit and exact expressions for $\widetilde{ \bf Y}_{ _{1,2} }^{q}$ are shown. Naturally, these expressions have a functional structure similar to the Cheng and Sher ansatz. Furthermore, we perform a... Sven Heinemeyer (CSIC (Santander, ES)) 27/08/2015, 16:30 Flavor Violation Theory and Experiment We analyze the effects of (squark and slepton) flavor violation induced by RGE running from the GUT scale within the CMSSM. We show that these effects, in particular in the scalar quark sector, can induce corrections to electroweak precision observables that can set an upper limit on the scalar mass parameter m_0. 
Elena Ginina 27/08/2015, 16:55 Flavor Violation Theory and Experiment Already at tree level the theoretical prediction for the cross section of $P P \to {\tilde q_i} {\tilde q_i}^*, \tilde q = \tilde u, \tilde d; i = 1, \ldots, 6$, can have a strong dependence on squark flavour mixing parameters. Such a case has not been taken into account in any published LHC study up to now. As a logical next step we calculate the leading one-loop corrections to that... Yutaro Sato (Nagoya University) 27/08/2015, 17:20 Flavor Violation Theory and Experiment Various decays of B mesons are sensitive to physics beyond the Standard Model. New particles such as SUSY particles might enter the loop diagrams. Charged scalar particles such as charged Higgs bosons might contribute in addition to the W boson. We present results constraining new physics with a large data sample that contains 772 million BB pairs collected at the Upsilon(4S)... 241. Searches for lepton flavor violation and new physics signatures with the ATLAS detector at the LHC Dai Kobayashi (Tokyo Institute of Technology (JP)) 28/08/2015, 14:00 Flavor Violation Theory and Experiment Hints of new physics observed in B physics have led to considerable interest in the flavor sector and in lepton-flavor-violating (LFV) effects observable at the LHC. Searches for LFV decays of Standard Model particles or of new heavy particles have been conducted at the ATLAS experiment. Run 1 LFV search results are presented in this talk, together with searches for other signatures at ATLAS.... Dr Stefania Vecchi (INFN Ferrara) 28/08/2015, 14:25 Flavor Violation Theory and Experiment Many models extending the SM to account for dark matter or explain inflation predict the existence of O(1) GeV mass particles with long lifetimes. LHCb's detection capabilities for detached vertices are exploited to search for particles decaying to muon pairs. New results are presented. 
Masato Yamanaka (Nagoya University) 28/08/2015, 14:50 Flavor Violation Theory and Experiment We proposed a new charged lepton flavor violation (CLFV) process, $\mu^- e^- \rightarrow e^- e^-$ in a muonic atom, as one of the promising processes to search for new physics beyond the Standard Model [1]. It was found that the attractive interaction of the leptons with the nucleus in the muonic atom enhances the transition rate of the $\mu^- e^- \rightarrow e^- e^-$ process. We report on our... Prof. Joe Sato (Saitama University) 28/08/2015, 15:15 Flavor Violation Theory and Experiment We consider the case in which a μ-e conversion signal is discovered but other charged lepton flavor violating (cLFV) processes are never found. In such a case, we need other approaches to confirm the μ-e conversion and its underlying physics without conventional cLFV searches. We study R-parity violating (RPV) SUSY models as a benchmark. We briefly review how our case of interest is realized... Xabier Marcano (IFT-UAM/CSIC) 28/08/2015, 16:30 Flavor Violation Theory and Experiment Within the low-scale seesaw mechanism, in contrast to the standard type-I seesaw, one can obtain light neutrinos compatible with data by adding low-scale heavy Majorana neutrinos that can still have large Yukawa couplings, leading to potentially interesting new phenomenology. Taking the Inverse Seesaw model as an explicit realization of this kind of model, we study different aspects of this... Philipp Sicking (TU Dortmund) 28/08/2015, 16:55 Flavor Violation Theory and Experiment The recent measurement of high-energy extragalactic neutrinos by the IceCube Collaboration has opened a new window to probe non-standard neutrino properties. Among other effects, sterile-neutrino altered dispersion relations (ADRs) due to shortcuts in an extra dimension can significantly affect astrophysical flavor ratios. We discuss an MSW-like resonant conversion arising from geodesics... 
Dr Franklin Potter (Sciencegems.com) 28/08/2015, 17:20 Flavor Violation Theory and Experiment A different finite subgroup of SU(2) for each lepton and each quark family leads to the first-principles determination of the mixing angles for neutrinos, quarks, and their PMNS and CKM mixing matrices. For example, the neutrino $\theta_{13}$ = 8.56$^{\circ}$ and normal hierarchy are predicted. Connections of these subgroups to the j-invariant of elliptic modular functions (and the Monster...
I've been looking at $$\int\limits_0^\infty {\frac{{{x^n}}}{{1 + {x^m}}}dx }$$ It seems that it always evaluates in terms of $\sin X$ and $\pi$, where $X$ is to be determined. For example: $$\displaystyle \int\limits_0^\infty {\frac{{{x^1}}}{{1 + {x^3}}}dx = } \frac{\pi }{3}\frac{1}{{\sin \frac{\pi }{3}}} = \frac{{2\pi }}{{3\sqrt 3 }}$$ $$\int\limits_0^\infty {\frac{{{x^1}}}{{1 + {x^4}}}dx = } \frac{\pi }{4}$$ $$\int\limits_0^\infty {\frac{{{x^2}}}{{1 + {x^5}}}dx = } \frac{\pi }{5}\frac{1}{{\sin \frac{{2\pi }}{5}}}$$ So I guess there must be a closed form - the use of $\Gamma(x)\Gamma(1-x)$ first comes to my mind because of the $\dfrac{\pi}{\sin \pi x}$ appearing. Note that the arguments are always the ratio of the exponents, like $\dfrac{1}{4}$, $\dfrac{1}{3}$ and $\dfrac{2}{5}$. Is there any way of finding it? I'll work on it and update with any ideas. UPDATE: After substituting $x = e^{t/m}$, the integral reduces (up to a factor $\frac{1}{m}$) to finding $$\int\limits_{ - \infty }^\infty {\frac{{{e^{a t}}}}{{{e^t} + 1}}dt} $$ With $a =\dfrac{n+1}{m}$, which converges only if $$0 < a < 1$$ Using series I find the solution is $$\sum\limits_{k = - \infty }^\infty {\frac{{{{\left( { - 1} \right)}^k}}}{{a + k}}} $$
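A quick numerical sanity check of the conjectured closed form $\int_0^\infty \frac{x^n}{1+x^m}\,dx = \frac{\pi/m}{\sin((n+1)\pi/m)}$ for the three examples above. To keep it pure Python (no quadrature library), the tail $[1,\infty)$ is folded back onto $[0,1]$ with the substitution $x \to 1/x$, and composite Simpson's rule is applied:

```python
import math

def closed_form(n, m):
    """Conjectured value (pi/m)/sin((n+1)pi/m); needs 0 < (n+1)/m < 1."""
    return (math.pi / m) / math.sin(math.pi * (n + 1) / m)

def integral_numeric(n, m, steps=20000):
    """Integral of x**n/(1+x**m) over [0, inf) via Simpson's rule, after
    folding [1, inf) onto [0, 1] with x -> 1/x, which gives
    int_0^1 (x**n + x**(m-n-2))/(1+x**m) dx   (requires m > n + 1)."""
    f = lambda x: (x**n + x**(m - n - 2)) / (1 + x**m)
    h = 1.0 / steps
    acc = f(0.0) + f(1.0)
    for i in range(1, steps):            # Simpson weights 4, 2, 4, 2, ...
        acc += (4 if i % 2 else 2) * f(i * h)
    return acc * h / 3.0

for n, m in [(1, 3), (1, 4), (2, 5)]:
    print(n, m, integral_numeric(n, m), closed_form(n, m))
```

The agreement to many digits is consistent with the $\Gamma(a)\Gamma(1-a) = \pi/\sin(\pi a)$ reflection-formula route, with $a = (n+1)/m$.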
Originally Posted by philipishin Blackhole formula $\displaystyle E = mc^2$ is not the "black hole formula". It's Einstein's famous equation for the book-keeping associated with conversions between mass and energy. What makes it special is the fact that if 1 joule of energy is converted to mass, we can find out precisely how much mass results. Root(c^2)=c=Root(E/m) from (1+1=2) means energy is same with mass. If you take the following equation $\displaystyle E = mc^2$ and square root both sides $\displaystyle \sqrt{E} = \sqrt{m}c$ then divide both sides by $\displaystyle \sqrt{m}$, you obtain $\displaystyle c = \sqrt{\frac{E}{m}}$ but this is just algebra to make c the subject of the equation. It doesn't show or prove anything. Mass and energy are, in fact, not the same; the formula is a simplification of $\displaystyle E^2 = m^2 c^4 + p^2 c^2$ in the case of zero momentum (p=0). That full formula gives the total energy of an object that has both mass and momentum. This is required because in some nuclear reactions and particle collisions the momentum is an important contributor, such as in photo-absorption and photo-disintegration. So when we measure energy with mass, there is no number. Firstly, 1 + 1 = 2 is just 'counting' in mathematics. In physics, there's no additional meaning beyond what it states, which is how to get to the next number in a counting sequence. Secondly, we can measure the mass of an object to be 10 kg and the kinetic energy of that same object to be 20 J, or 30 J, or whatever. They're physical quantities just like any other. All physical quantities have an amount (a number) and a unit (like J or kg). (1=1. When we put 2 into equation, 1:2=1:2. It means it is changed 1=1 into 2=2. This means number is changed.) The energy of objects changes all the time, the mass less so. 
A black hole is an object whose mass is so high that no force can compete with the gravitational force that pulls everything inside. Therefore, it collapses to an object that has a very tiny volume (some believe it to be zero; a singularity) but a finite mass. This corresponds to an enormous density (possibly infinite if the volume is zero). Black holes are very strange objects indeed and they are under intense study. So physics formula as v=at(speed) means true as basic from its origin. (E=m*c^2) No, the formula $\displaystyle v = at$ is actually derived from the definition of acceleration, which is $\displaystyle a = \frac{dv}{dt}$ When the change in velocity is constant over some time interval we have $\displaystyle a = \frac{\Delta v}{\Delta t}$ Multiplying both sides by $\displaystyle \Delta t$ yields $\displaystyle \Delta v = a \Delta t$ By definition $\displaystyle v_2 - v_1 = a (t_2 - t_1)$ Finally, if the start velocity is $\displaystyle v_1 = 0$ m/s and the start time $\displaystyle t_1 = 0$ s, then $\displaystyle v_2 = a t_2$ or, for convenience, we can drop the index, giving $\displaystyle v = at$ The typical nomenclature is to keep $\displaystyle v_1$ as an unknown (usually denoted by the symbol $\displaystyle u$) but to set a reference time, $\displaystyle t_1 = 0$, for every problem. Therefore we instead have $\displaystyle v = u + at$ which is one of the famous SUVAT equations for the motion of objects at constant acceleration.
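The point about $E = mc^2$ being the zero-momentum limit of the full energy-momentum relation is easy to check numerically; a small sketch (variable names are mine):

```python
C = 299_792_458.0  # speed of light in vacuum, m/s (exact SI value)

def total_energy(m, p):
    """Total relativistic energy in joules from E^2 = (m*c^2)^2 + (p*c)^2,
    for mass m in kg and momentum p in kg*m/s."""
    return ((m * C**2) ** 2 + (p * C) ** 2) ** 0.5

m = 1.0e-3                        # one gram, in kg
E_rest = total_energy(m, 0.0)     # at p = 0 the formula reduces to E = m*c^2
E_moving = total_energy(m, 1.0)   # with momentum, E exceeds the rest energy
```

For a massless particle ($m = 0$) the same formula collapses to $E = pc$ instead, which is why photons carry energy and momentum despite having no mass.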
Production of charged pions, kaons and protons at large transverse momenta in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV (Elsevier, 2014-09) Transverse momentum spectra of $\pi^{\pm}, K^{\pm}$ and $p(\bar{p})$ up to $p_T$ = 20 GeV/c at mid-rapidity, |y| $\le$ 0.8, in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV have been measured using the ALICE detector ...
For the other sciences it's easy to point to the most important equations that ground the discipline. If I want to explain Economics to a physicist, say, what are considered to be the most important equations that underlie the subject, which I should introduce and attempt to explain? Instead of proposing specific equations, I will point to two concepts that lead to specific equations for specific theoretical set-ups: A) Equilibrium The most fundamental and the most misunderstood concept in Economics. People look around and see constant movement - how could a concept be more irrelevant than "equilibrium"? So the job here is to convey that Economics models the observation that things most of the time tend to "settle down" - so by characterizing this "fixed point", it gives us an anchor to understand the movements outside and around this equilibrium (which may itself be changing, of course). It is not the case that "quantity supplied equals quantity demanded" (here is a foundational equation) $$Q_d = Q_s$$ but it is the case that supply tends to equal demand (of anything), for reasons that any economist should be able to present convincingly to anyone interested in listening (and deep down they all have to do with finite resources). Also, by determining the conditions for equilibrium, we can understand, when we observe divergence, which conditions were violated. B) Marginal optimization under constraints In a static environment, it leads to the equating of marginal quantities/first derivatives of functions. Goods market: marginal revenue equals marginal cost. Inputs market: marginal revenue product equals marginal reward (rent, wage). Etc. (I left "utility maximization" out of the picture on purpose because, here, one would first have to present what this "utility index" is all about, and how crazy we are (not) to model human "enjoyment" through the concept of utility). 
Perhaps you could cover it all under the umbrella "marginal benefit equals marginal cost" as other questions suggested: $$MB = MC$$ Economists live in marginal optimization and most consider it self-evident. But if you try to explain it to an outsider, there is a respectable probability that he will object or remain unconvinced, instead usually proposing "average optimization" as "more realistic", since "people do not calculate derivatives" (we don't argue that they do, only that their thought processes can be modeled as if they were). So one has to get his story straight about marginal optimization, with convincing examples, and a discussion about "why not average optimization". In an intertemporal setting, it leads to the discounted trade-off between "the present and the future", again "at the margin" -starting with the "Euler equation in consumption", which in its discrete deterministic version reads $$u'(c_{t})=\beta(1+r_{t+1})u'(c_{t+1})$$ ...and one cannot avoid the theme of utility, after all: $u'()$ is marginal utility from consumption, $0<\beta<1$ is a discount factor and $r_{t+1}$ is the interest rate (don't consult the Wikipedia article on the Euler equation in consumption; the concept behind it is much more generally applicable and foundational than the specific application that the Wikipedia article discusses). Interestingly, although dynamic economics are more technically demanding, I find this more intuitively appealing since people seem to understand way better "what you save today will determine what you will consume tomorrow", than "your wage rate will be the marginal revenue product of all labor employed". As has already been said, the MOST fundamental equation is surely: $$\text{MB}=\text{MC}$$ EDIT: This equation is fundamental in terms of the way economists think.
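The discrete Euler equation above can be checked directly in a two-period saving problem. A sketch with log utility (all parameter values are illustrative):

```python
import numpy as np

# Two-period saving problem with log utility: max ln(c1) + beta*ln(c2)
# subject to c2 = (1+r)*(w - c1). Parameter values are illustrative.
w, beta, r = 10.0, 0.96, 0.05

c1 = np.linspace(1e-6, w - 1e-6, 1_000_000)
c2 = (1 + r) * (w - c1)
lifetime_utility = np.log(c1) + beta * np.log(c2)

i = np.argmax(lifetime_utility)
c1_star, c2_star = c1[i], c2[i]

# Euler equation: u'(c1) = beta*(1+r)*u'(c2), with u'(c) = 1/c for log utility
lhs = 1 / c1_star
rhs = beta * (1 + r) / c2_star
```

At the grid optimum the two sides of $u'(c_t)=\beta(1+r_{t+1})u'(c_{t+1})$ agree, and $c_1^*$ matches the closed form $w/(1+\beta)$ that log utility implies.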
As pointed out in the comments below, in terms of fundamental equations of economic models, the most fundamental equations describe equivalences between the uses and supplies of items (money, goods, etc.). These provide the tension on the marginal cost side of this equation. I would add equations related to comparative statics: Envelope theorem: $$V^\prime(y)=f_y(x,y)$$ "Delta" analysis, as described in Samuelson's Foundations of Economic Analysis: $$\Delta p\Delta y-\Delta w\Delta x\geq0$$ (this examines responses of price-taking producers in terms of vectors of production $y$ and uses of inputs $x$, to their prices $p$ and $w$; essentially revealed preference for producers). Revealed preference. If we can claim game theorists or mathematicians whose equations we use constantly: the Karush-Kuhn-Tucker conditions, especially complementary slackness. There's no single equation for linear programming, but I think econ has a claim to Kantorovich too. Stationarity: $$\nabla f(x^*) = \sum_{i=1}^m \mu_i \nabla g_i(x^*) + \sum_{j=1}^l \lambda_j \nabla h_j(x^*)$$ Primal feasibility: $$g_i(x^*) \le 0, \mbox{ for all } i = 1, \ldots, m$$ $$h_j(x^*) = 0, \mbox{ for all } j = 1, \ldots, l \,\!$$ Dual feasibility: $$\mu_i \ge 0, \mbox{ for all } i = 1, \ldots, m$$ Complementary slackness: $$\mu_i g_i (x^*) = 0, \mbox{ for all }\; i = 1,\ldots,m.$$ Nash equilibrium: $$\theta_{i}^\star = \arg \max_{\theta_i} u_i(\theta_i ,\theta_{-i}^\star)$$ Revelation principle: which to be fair isn't so much an equation as a theorem... Bellman equation: $$V(x)=\max_{c\in\Omega(x)} U(x,c)+\beta V(x^\prime)$$ Most of intro econ is intersecting lines. Specifically, $$\dfrac{MU_x}{p_x}=\dfrac{MU_y}{p_y}.$$ Marginal utility per unit cost should always be equalized across goods. Economics is about the logic of human behavior, how we make decisions in a world of scarcity. These equations describe constrained optimization under some usual assumptions like continuity, convex preferences, and no corner solutions.
I'd also give prominence to consumer theory over producer. Most of undergrad producer theory can be understood with the same tools used in consumer theory. I think one of the most important equations (at least within macroeconomics) is: $$E\left[ m R \right] = 1$$ This equation has been used to derive many foundational results. This equation motivated the Hansen–Jagannathan bound. It is fundamental for asset pricing as well. Also, something interesting I once saw from Tom Sargent. If you use the stochastic discount factor for a standard model, $m_{t+1} = \beta \frac{u'(c_{t+1})}{u'(c_t)}$, then depending on which piece of the equation you allow to be exogenous you can get some fundamental results of macro: Permanent Income Hypothesis: let $\beta R = 1$ (with quadratic utility); then we get $c_t = E [c_{t+1}]$. Lucas Asset Pricing Model: let the process for consumption be given. Then the price of an asset can be described by $R_t^{-1} = p_t = E_t \left[ \beta \frac{u'(c_{t+1})}{u'(c_t)} \right]$. I once heard Roger Myerson talk about why he thought Economics has, as a Social Science, been so successful in applying (or has so readily incorporated) mathematics. He suggested that perhaps it was due to some of the fundamental linearities within the world. Two examples would be the flow-balance constraints of scarce goods (commodity constraints) and no-arbitrage conditions. These are fundamentally linear constraints. It's important to emphasize the importance of these because we can get a surprising amount out of the two. For example, a lot of people think that the law of demand is a consequence of assuming rationality (specifically, preferences that exhibit a diminishing marginal rate of substitution). A result due to Gary Becker shows that the law of demand (albeit just a slightly weaker version) can be derived from the budget constraint alone.
(See Becker 1962, "Irrational Behavior and Economic Theory.") That is, this fundamental economic result can be derived from the reality of scarce resources alone---without assuming rationality. The no-arbitrage condition is an application of the linear duality theorem (Farkas' lemma). A lot of economics and finance (asset pricing) can be done just by the assumption that in economic equilibrium there is no arbitrage. Extra Notes: Gary Becker made a lot of advances in the field by studying the way constraints affect human behavior. One famous quote, taken from his Nobel prize lecture, is the remark that "different constraints are decisive for different situations, but the most fundamental constraint is limited time." (Some discussion here.) Some more resources about his work in this regard can be found here and here. Linear duality can be used to describe the no-arbitrage condition. More generally, this theorem is typically proved with the Hyperplane Separation Theorem, which is a mathematical tool that shows up a lot in economics textbooks. Also, keep in mind that it's enough just to assume that in economic equilibrium, there is approximately no arbitrage. Although I agree with Jyotirmoy Bhattacharya that the most interesting ideas in economics are not always best expressed through equations, I still want to mention the Slutsky or compensated law of demand from consumer theory $$ (p' -p) \Big[ x\big(p', p' x(p,w)\big) - x\big(p,w\big)\Big]^T \leq 0,$$ where $p',p \in \mathbb{R}_{++}^n$ are any two price vectors, $w \in \mathbb{R}_+$ is any level of income, and $x(\cdot,\cdot) \in \mathbb{R}^n$ is the demand function. The underlying relation is a couple of orders of certitude away from fundamental equations in other fields. Also, it does not ground the discipline, in the sense that it is not used all that often.
However, I tend to view it as fundamental because it is an absolutely non-trivial consequence of three simple and fundamental assumptions in consumer theory, namely: That the demand function $x(\cdot,\cdot)$ is homogeneous of degree zero (no money illusion); Walras' law (people do not burn money); The weak axiom of revealed preferences (if you choose A when B is available "today", you will not choose B "tomorrow" if A remains available). Therefore testing the inequality is equivalent to testing these three assumptions jointly. The three assumptions are used in the vast majority (maybe more than 90%?) of the models including consumers in economic theory. Their validity (at least as approximations) is therefore crucial to the validity of most models in economic theory (at least as approximations). Although it is not always obvious how to relate the notions of prices, goods and income to observables, all the elements in the equation are observable in principle (as opposed to utility levels, for instance) and the validity of the inequality can therefore be tested empirically. I don't think there are any economics equations with the same status as, say, Maxwell's equations in physics. In its place we have concepts like the equimarginal principle, competitive equilibrium or Nash equilibrium which are at the core of the "economist's approach". But I think that the real worth of economics is not even in these ideas themselves but in what we know about concrete problems in specific areas of application: for example what we know about business cycles in macro. In this economics may be more like medicine than physics. For me, one of the most important ones is the budget constraint. It might seem too obvious but a lot of laypersons (though maybe not physicists) don't get it!
$p \cdot x \leq w$ A bit late to the game, but I'm surprised no one has named the equation for calculating OLS estimates: $$ \hat\beta=(X'X)^{-1}X'y $$ Whilst not as foundational as, for example, the Slutsky equation, the condition on the Lerner index that a profit-maximising firm with price $p$, cost $c$, and price elasticity of demand $\eta$ satisfies $$\frac{p-c}{p}=-\frac{1}{\eta}$$ is an important equation in industrial organisation. This is not only an elegant formulation of the solution of the firm's problem, but it is also practically useful: A firm that estimates its $\eta$ and knows its $c$ can use this formula to calculate the profit-maximising price. A regulator that observes $p$ and estimates $\eta$ can use the formula to calculate $c$, which is important in many forms of regulation. The Euler equation has already been written, but in continuous time it yields $$\frac{\dot{C}}{C}=\sigma(r-\rho)$$ where $\sigma$ is the intertemporal elasticity of substitution, $r$ the interest rate and $\rho$ the discount rate (impatience level). The foundation of intertemporal economics is the net present value equation: the net present value of a future income stream is the sum of the yearly incomes, each divided by a discount factor based on the prevailing interest rate, $(1+r)^n$, where $n$ is the number of years until that income arrives. Well, for microeconomics there are several, however they all follow the same pattern. Here I'll attempt to teach an entire intermediate microeconomics course in one post. Most microeconomics problems follow this format: though leaving out some minor details, if you do enough microeconomics practice sets the problems end up looking the same after a while. This is what I have to share. Production/Utility functions There are three main types of utility/production functions you will be exposed to in an intermediate microeconomics course.
They are: Cobb-Douglas $$f(x_1,x_2)=x_1^ax_2^b$$ Leontief/perfect complements $$f(x_1,x_2)=\min\{x_1,x_2\}$$ Perfect substitutes $$f(x_1,x_2)=x_1+x_2$$ Budget lines and cost functions In consumer theory, you have a budget line represented by the formula: $$m=p_1x_1+p_2x_2$$ In producer theory we call it a cost function: $$C(x_1,x_2)=w_1x_1+w_2x_2$$ We either want to maximize utility/output given a budget/cost function, or minimize costs holding the utility/output level constant. To do this we use another equation: the Lagrange multiplier. Though not a tool exclusive to economics per se, it is the primary tool of all intermediate microeconomics students. $$\mathcal{L}=f(x_1,x_2)\pm\lambda(H-g(x_1,x_2))$$ where $H-g(x_1,x_2)$ is either a budget line/cost function or a utility/production function when it is equal to zero. We use this for calculating utility/profit-maximizing consumption bundles/inputs, or for minimizing costs holding output/utility constant. And that's a wrap!* *Though there is more to say on Marshallian and Hicksian demands, I'll leave that for others to fill in.
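The Lagrangian recipe above can be sanity-checked numerically for the Cobb-Douglas case, where the first-order conditions give the well-known closed form $x_1^* = \frac{a}{a+b}\frac{m}{p_1}$. A sketch (parameter values are arbitrary):

```python
import numpy as np

# Cobb-Douglas utility u = x1^a * x2^b on the budget line m = p1*x1 + p2*x2.
# Parameter values are arbitrary.
a, b, m, p1, p2 = 0.3, 0.7, 100.0, 2.0, 5.0

x1 = np.linspace(1e-6, m / p1 - 1e-6, 1_000_000)
x2 = (m - p1 * x1) / p2
log_utility = a * np.log(x1) + b * np.log(x2)   # monotone transform of x1^a * x2^b

x1_star = x1[np.argmax(log_utility)]
# The Lagrangian first-order conditions give x1* = a/(a+b) * m/p1 = 15 here.
```

The grid search over the budget line and the closed form from the Lagrangian agree, which is a useful check when you start varying the functional form.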
In using CES production functions of the form $f(x_1,x_2)=(x_1^\rho+x_2^\rho)^{1/\rho}$, we always assume that $\rho\leq1$. Why do we make that assumption? I understand that if $\rho>1$, the production function won't be concave anymore (and hence the production set will not be convex), but what does that imply about profit and cost functions? The problem with $\rho>1$ is that it means the marginal product of factors is not decreasing ($\rho<1$) or constant ($\rho=1$) but increasing, which is an odd assumption. Such functions yield isoquants that are concave, and might lead to only one factor being used (as BKay said). As in any generic CES, the marginal product of factor $x_i$ is $$ MP_i = \left(\frac{y}{x_i}\right)^{1-\rho} $$ The derivative of this MP with respect to $x_i$ is, after some rearranging, $$ (\rho-1) \left(\frac{y}{x_i}\right)^{1-\rho}\left(\frac{x_{-i}^{\rho}}{x_iy^{\rho}}\right) $$ For $\rho>1$, this expression is positive, which means that the productivity of a factor increases as more of that factor is used. Regarding isoquants, you can find these by rewriting the production function as $x_2=g(y,x_1)$. In the generic CES, this is $$ x_2 = \left(y^{\rho} - x_1^{\rho}\right)^\frac{1}{\rho} $$ These are linear in the case of $\rho=1$, convex in the case of Cobb-Douglas (where the function above is $x_2=\frac{y}{x_1}$, a hyperbola), and concave in the case of $\rho>1$. For example, select $\rho=2$ and you have: $$ x_2^2 = y^2 - x_1^2 $$ which is the formula of a circle centered at $(0,0)$, with radius $y$. Normally, for production theory only $x_i \geq 0$ is interesting, which gives you the concave isoquants for different levels of $y$. The figure below shows an example, where for a given factor-price ratio there is a corner solution (point A). Here is my attempt at this question; it's incomplete and/or incorrect, so please make suggestions and I will edit this.
Cost Minimization Since $f(x_1,x_2)$ is not quasi-concave, the corresponding isoquant curves are not going to be convex to the origin (i.e. their upper contour sets will not be convex). In this case the firm should employ a corner solution, and the conditional factor demands will be given as: $$x_1(p,y)=q^2 \quad \text{and} \quad x_2(p,y)=0 \quad\quad \text{if}\quad w_1< w_2$$$$x_1(p,y)=0 \quad \text{and} \quad x_2(p,y)=q^2 \quad\quad \text{if}\quad w_1>w_2$$$$x_1(p,y)=0 , x_2(p,y)=q^2 \quad \text{or} \quad x_1(p,y)=q^2 , x_2(p,y)=0 \quad \text{if}\quad w_1=w_2$$ These conditional factor demands give the cost function: $$C(w,y)=\min[w_1q^2,w_2q^2]$$ Profit Maximization I am really confused here. Even though the production function is convex, it still exhibits non-increasing returns to scale: $f(tx_1,tx_2)\le tf(x_1,x_2) \quad\forall \quad t>1$. That is, the solution will still exist (right?). So how does non-concavity of the production function affect the profit-maximizing solution? In short, for $\rho \geq 1$ there is going to be no solution for profit maximization in the short run (at least one factor is fixed) in the competitive case (price is fixed). In order to get from the production function to the cost function, we need to introduce factor prices ($r$ and $w$ for textbook examples) and solve the optimization problem. An extensive exposition can be found here. To build intuition, let's take $w=1$ and fix one factor. In order to deal with profit $\pi(q)$, we should introduce a price for the produced good as well, $p>0$. So the problem might look as follows ($\rho=2$): $$ \pi(q) = p \cdot q - 1 \cdot (q^2 - 1)^{1/2} $$ It can be shown that for a profit function of this sort the SOC fails: $\pi'' >0$, which means there is no global maximum (though a minimum exists). To see the same effect in a simpler example (not derived from CES), consider this: $$ \pi(q) = p \cdot q - 2 \cdot q^{1/2} $$ SOC is $\pi'' = (1/2)q^{-3/2} > 0$.
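The failure of profit maximization for $\rho=2$ can also be seen numerically: a finite-difference sketch confirming that $\pi(q)=pq-\sqrt{q^2-1}$ is convex, so there is no interior maximum and profit grows without bound (the price value is chosen for illustration):

```python
import numpy as np

# Short-run profit with p = 2, w = 1, rho = 2 and one factor fixed:
# pi(q) = p*q - sqrt(q^2 - 1), as in the example above.
p = 2.0
q = np.linspace(1.1, 50.0, 2_000)
profit = p * q - np.sqrt(q**2 - 1)

second_diff = np.diff(profit, 2)   # discrete analogue of pi''(q)
```

Every second difference is positive (convexity), and profit keeps rising at the upper end of the grid: for large $q$ the profit behaves like $(p-1)q$, so a competitive firm's problem has no solution.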
I'm right now learning about monodromy from self-studying Rick Miranda's fantastic book "Algebraic Curves and Riemann Surfaces". Today, I read about monodromy, and the monodromy representation of a holomorphic map between compact Riemann surfaces. I understand that we start by having a holomorphic map $F:X \rightarrow Y$ of degree $d$, where $X$ and $Y$ are Riemann surfaces, and then we remove the branch points from $Y$, and all the corresponding points in $X$ mapping to them. Let $B=\{b_1,\ldots,b_n\}$ be the branch points, $A=\{a_1,\ldots,a_m\}$ the ramification points. So, fix a point $q \in V = Y-B$. We have that there are $d$ preimages of $q$ in $U=X-A$. So for a specific branch point $b$, we choose some small open neighbourhood $W$ of $b$ so that $F^{-1}(W)$ gives a disjoint union $W_i$ of open neighbourhoods of the points mapping to $b$. Take some path from our basepoint $q$ to $q_0 \in W$, call this path $\alpha$. Choosing some small loop $\beta$ based at $q_0$, with winding number 1 around $b$, and then considering $\alpha^{-1}\circ \beta \circ \alpha$, gives a loop on $V$, based at $q$, around $b$. We can now see that this loop only depends on $\beta$, in some sense. Say that the points that map to $b$ have multiplicities $n_1,\ldots,n_k$. Then we have that, according to the local normal form, there are local coordinates $z_j$ on the open neighbourhoods from above, so that the map takes the form $z=z_j^{n_j}$. Now, we have that the loop around $b$, when we lift it up here, will simply yield a cyclic permutation of the preimages in each neighbourhood. Now, my question is mostly: How do I apply this concretely? Let us take an example (from Miranda's book): "Let $f(z) = 4z^2(z-1)^2/(2z-1)^2$ define a holomorphic map of degree 4 from $P^1$ to itself.
Show that there are three branch points, and that the three permutations in $S_4$ are $\rho_1=(12)(34)$, $\rho_2=(13)(24)$ and $\rho_3=(14)(23)$, up to conjugacy." I can find the branch points, and I see that each of the two points mapping to a given branch point has multiplicity 2, but I don't get how to rigorously show that the above are the associated permutations. Hope I was clear, and sorry if I wasn't. UPDATE: Now, rereading the question properly, maybe he doesn't want me to find the specific permutations, but simply to show that they have those conjugacy classes. I think that is the case. But I would still be curious how to find the specific permutation that the monodromy induces.
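One concrete way to see the permutations (not from Miranda's book; a numerical sketch) is to track the four preimages of a point as it loops around a branch point and read off how the sheets are permuted. For this $f$, the finite branch points work out to $w=0$ and $w=-1$ (the third is $\infty$), and $f(z)=w$ is the quartic $4z^4-8z^3+(4-4w)z^2+4wz-w=0$:

```python
import numpy as np

def monodromy_perm(b, eps=0.1, steps=400):
    """Numerically track the four preimages of f(z) = 4z^2(z-1)^2/(2z-1)^2
    as w traverses the loop w = b + eps*e^{i*theta} around the branch
    point b, and return the induced permutation of the sheets."""
    def sheet_points(w):
        # f(z) = w  <=>  4z^4 - 8z^3 + (4-4w)z^2 + 4wz - w = 0
        return np.roots([4, -8, 4 - 4 * w, 4 * w, -w])

    start = sheet_points(b + eps)          # fibre over the basepoint
    current = start.copy()
    for theta in np.linspace(0, 2 * np.pi, steps)[1:]:
        new = list(sheet_points(b + eps * np.exp(1j * theta)))
        # continue each sheet to the nearest root in the new fibre
        tracked = []
        for z in current:
            i = int(np.argmin([abs(z - u) for u in new]))
            tracked.append(new.pop(i))
        current = np.array(tracked)
    # sheet j of the lifted loop ends where sheet perm[j] started
    return [int(np.argmin(np.abs(start - z))) for z in current]
```

For each of the two finite branch points this returns a fixed-point-free product of two transpositions, i.e. an element conjugate to $(12)(34)$; the sheet labelling depends on the (arbitrary) ordering that the root-finder returns, which is exactly the "up to conjugacy" in the exercise.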
Definition:LAST Contents Definition LAST stands for LAnguage of Set Theory. Formal Language This is the formal language of LAST: The Alphabet The alphabet of LAST is as follows: The Letters The letters of LAST come in two varieties: Names of sets: $w_0, w_1, w_2, \ldots, w_n, \ldots$ These are used to refer to specific sets. Variables for sets: $v_0, v_1, v_2, \ldots, v_n, \ldots$ These are used to refer to arbitrary sets. The Signs The signs of LAST are as follows: The membership symbol: $\in$, to indicate that one set is an element of another. The equality symbol: $=$, to indicate that one set is equal to another. Quantifier symbols: the universal quantifier $\forall$ and the existential quantifier $\exists$. Punctuation symbols: Parentheses: $($ and $)$. Formal Grammar The formal grammar of LAST is as follows: Any expression of one of these forms: $\left({v_n = v_m}\right)$ $\left({v_n = w_m}\right)$ $\left({w_m = v_n}\right)$ $\left({w_n = w_m}\right)$ $\left({v_n \in v_m}\right)$ $\left({v_n \in w_m}\right)$ $\left({w_m \in v_n}\right)$ $\left({w_n \in w_m}\right)$ is a formula of LAST. If $\phi, \psi$ are formulas of LAST, then: $\left({\phi \land \psi}\right)$ $\left({\phi \lor \psi}\right)$ are formulas of LAST. If $\phi$ is a formula of LAST, then expressions of the form: $\left({\forall v_n \phi}\right)$ $\left({\exists v_n \phi}\right)$ are formulas of LAST. No expressions that cannot be constructed from the above rules are formulas of LAST.
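The formation rules above can be turned into a small well-formedness checker. A sketch, with formulas encoded as nested tuples (the tuple encoding is mine, not part of the definition):

```python
import re

TERM = re.compile(r"^[vw]\d+$")   # names w_n and variables v_n

def is_formula(phi):
    """Check the formation rules of LAST for formulas encoded as nested
    tuples, e.g. ('forall', 'v0', ('in', 'v0', 'w1'))."""
    if not isinstance(phi, tuple) or len(phi) != 3:
        return False
    op, x, y = phi
    if op in ("=", "in"):            # atomic formulas
        return all(isinstance(t, str) and TERM.match(t) for t in (x, y))
    if op in ("and", "or"):          # propositional connectives
        return is_formula(x) and is_formula(y)
    if op in ("forall", "exists"):   # quantifiers bind variables v_n only
        return bool(isinstance(x, str) and x.startswith("v")
                    and TERM.match(x) and is_formula(y))
    return False
```

Note that, exactly as in the definition, quantifiers range over variables $v_n$ only, never over names $w_n$, and only $\land$, $\lor$, $\forall$, $\exists$ appear in the grammar as given.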
CryptoDB Marco Streng Publications (Year, Venue, Title) 2008 EPRINT Abelian varieties with prescribed embedding degree We present an algorithm that, on input of a CM-field $K$, an integer $k \ge 1$, and a prime $r \equiv 1 \bmod k$, constructs a $q$-Weil number $\pi \in \O_K$ corresponding to an ordinary, simple abelian variety $A$ over the field $\F$ of $q$ elements that has an $\F$-rational point of order $r$ and embedding degree $k$ with respect to $r$. We then discuss how CM methods over $K$ can be used to explicitly construct $A$. 2008 EPRINT CM construction of genus 2 curves with p-rank 1 We present an algorithm for constructing cryptographic hyperelliptic curves of genus $2$ and $p$-rank $1$, using the CM method. We also present an algorithm for constructing such curves that, in addition, have a prescribed small embedding degree. We describe the algorithms in detail, and discuss other aspects of $p$-rank 1 curves too, including the reduction of the class polynomials modulo $p$.
On Wielandt-Mirsky's conjecture for matrix polynomials Bull. Korean Math. Soc. Published online March 14, 2019 Công-Trình Lê, Quy Nhon University Abstract: In matrix analysis, the \textit{Wielandt-Mirsky conjecture} states that $$ dist(\sigma(A), \sigma(B)) \leq \|A-B\|, $$ for any normal matrices $ A, B \in \mathbb C^{n\times n}$ and any operator norm $\|\cdot \|$ on $\mathbb C^{n\times n}$. Here $dist(\sigma(A), \sigma(B))$ denotes the optimal matching distance between the spectra of the matrices $A$ and $B$. It was proved by A.J. Holbrook (1992) that this conjecture is false in general. However, it is true for the Frobenius distance and the Frobenius norm (the Hoffman-Wielandt inequality). The main aim of this paper is to study the Hoffman-Wielandt inequality and some weaker versions of the Wielandt-Mirsky conjecture for matrix polynomials.
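The Frobenius case (the Hoffman-Wielandt inequality) is easy to probe numerically: for normal $A,B$, the squared optimal matching distance between the spectra is bounded by $\|A-B\|_F^2$. A sketch with random Hermitian matrices, brute-forcing the matching over $S_4$:

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)
n = 4

# Random Hermitian (hence normal) matrices A and B
M = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
N = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
A = (M + M.conj().T) / 2
B = (N + N.conj().T) / 2

lam = np.linalg.eigvalsh(A)   # real spectra of Hermitian matrices
mu = np.linalg.eigvalsh(B)

# Squared optimal matching distance, brute-forced over all of S_4
match2 = min(float(np.sum(np.abs(lam - mu[list(p)])**2))
             for p in itertools.permutations(range(n)))
bound2 = np.linalg.norm(A - B, 'fro')**2   # squared Frobenius norm of A - B
```

For Hermitian matrices the eigenvalues are real and `eigvalsh` already sorts them, so the identity permutation is typically the optimal matching; the brute force just makes the "optimal matching distance" in the abstract concrete.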
Anisotropic flow of inclusive and identified particles in Pb–Pb collisions at $\sqrt{{s}_{NN}}=$ 5.02 TeV with ALICE (Elsevier, 2017-11) Anisotropic flow measurements constrain the shear $(\eta/s)$ and bulk ($\zeta/s$) viscosity of the quark-gluon plasma created in heavy-ion collisions, as well as give insight into the initial state of such collisions and ... Investigations of anisotropic collectivity using multi-particle correlations in pp, p-Pb and Pb-Pb collisions (Elsevier, 2017-11) Two- and multi-particle azimuthal correlations have proven to be an excellent tool to probe the properties of the strongly interacting matter created in heavy-ion collisions. Recently, the results obtained for multi-particle ...
Based on the values you provide, here is what I think is happening. Your $\lambda$ value is incredibly high and your $\beta$s are very small. In essence, you are overfitting the data and modeling the noise. Not sure how many observations you have, but 15000 features is a lot, and hopefully your ratio of $p/n$ is not astronomical. I am assuming that in your data $p>n$, so here is one recommendation, or steps to try. Split the data into a training and testing set (80-20$\%$ ratio). Standardize the training set. Use the same parameters to standardize the testing set. So the mean of the columns of $X_{test}$ won't be zero and the std. won't be exactly 1, but that is okay. You are looking to generalize your results. It is important to note that there is no column of 1's in your training or testing set (not evaluating the intercept). Since you are doing classification, I assume that this is logistic ridge regression with class labels in $\{0,1\}$. Within the CV steps for the $k^{th}$ fold, for different values of $\lambda$, evaluate the generalized cross-validation error. $GCVE_{\lambda,k} =$ $\frac{\frac{1}{j}\sum_{i=1}^{j}(y_i-\hat{y_i})^2}{ (n_{cv} - df)} $. Here, $n_{cv}$ is the total number of samples used across the other $(k-1)$ CV folds in training the model, $df$ is the degrees of freedom of the model, which is the $trace$ of $X(X^\top X + \lambda I)^{-1} X^\top$ (where $X$ is constructed from the data in the $(k-1)$ folds), $j$ is the number of samples in the $k^{th}$ fold, and $\hat{y_i}$ are the predicted class labels. Note that if you add a 1 to the denominator of $GCVE_{\lambda,k}$ and set $\lambda=0$, the entire model reduces to straightforward logistic regression with no regularization. Without loss of generality, we can assume that $\lambda>0$ in your CV folds and compute the average $GCVE_\lambda$ across the $k$ folds. Also compute the mean $\hat{\beta_\lambda}$'s from the CV folds for each $\lambda$.
It is very important that you have an equal proportion of both classes in the training set, otherwise the solution can sometimes degenerate and simply predict one class label all the time. A plot of $\lambda$ (x-axis) vs. mean $GCVE_\lambda$ (y-axis) should give you a monotonically decreasing trace. As you increase $\lambda$, you are reducing the $df$ in the model, or its ability to do anything useful. However, at a certain $\lambda$, the $GCVE_\lambda$ will taper and not change much; this decay usually happens for small values of $\lambda$. At the same time, the ridge trace ($\lambda$ vs. mean $\hat{\beta_\lambda}$) should hopefully tell you that the above value of $\lambda$ coincides with stability in the ridge trace computed from the CV procedure (no wild oscillations). Basically you will have to eyeball the value of $\lambda$ that is in 'steady-state' mode in both plots, and the smaller the $\lambda$, the better. It is important to note what $\lambda$ actually does; you are basically correcting for multicollinearity or non-independence of the predictors. Hopefully the above traces converge to steady state quickly; the larger the value of $\lambda$, the more imperfect is your assumption that there is truly a relationship between the predictors and the class variable. For this visually determined value of $\lambda$, train on the entire training set to obtain the $\hat{\beta}$ parameters. Now evaluate the model by trying to predict the $y_{test}$ values; you can compute $R^2$ from this result, or you can test the null hypothesis of obtaining the test classification accuracy due to chance using the binomial distribution (for the 2-class case). If your testing set is imbalanced, use balanced accuracy as a metric rather than simple accuracy. Here, importance is also placed on misclassification, and it is given by $BA = 0.5(TP/(TP+FN) + TN/(TN+FP))$. TP = true positive, TN = true negative, FN = false negative, FP = false positive.
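The balanced-accuracy formula above as a quick helper (the confusion counts in the example are made up, purely to show how BA penalizes a classifier that plain accuracy flatters on an imbalanced test set):

```python
def balanced_accuracy(tp, fn, tn, fp):
    """BA = 0.5 * (sensitivity + specificity), the formula given above."""
    return 0.5 * (tp / (tp + fn) + tn / (tn + fp))

# Hypothetical imbalanced test set: 180 positives, 20 negatives.
# Plain accuracy is (178 + 2) / 200 = 0.90, but BA is only about 0.54,
# because the classifier misses 18 of the 20 negatives.
ba = balanced_accuracy(tp=178, fn=2, tn=2, fp=18)
```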
You can repeat the above procedure for different partitions of the training and testing set to get a distribution of $R^2$ or $p$ values, to run some simple statistics at the global level. Personally, I would recommend running a permutation test to evaluate the significance of the $\beta$'s. Here, across many iterations (say 10000), the elements of $y$ are shuffled so that the elements in row $j$ of $X$ do not correspond to the $j^{th}$ element of $y$. This procedure obtains a null distribution for each $\beta_i, i=1,\ldots,p$. Evaluate the p-value as the proportion of permutation values whose magnitude exceeds $|\beta_i|$. FDR (false discovery rate) correction can be used, as you are now running 15000 tests and you want to protect against Type I errors. It is important to note that if you are running this latter procedure for the significance test of the betas, partitioning into training and test sets is unnecessary and you can use the entire dataset as the 'training set', with the cross-validation procedure. For example, looking at your obtained betas (all of which are of order $10^{-3}$), none of them would be significant under a permutation test, as the magnitude of those obtained from the permutations would likely be of the same value. Try tuning your model using the above procedure, and if you still don't get any results, then it looks like there is really no relationship between your predictors and the class labels. Hope this helps...
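As a simplified sketch of the df/GCV bookkeeping described above, here is the linear-ridge analogue (a single fit rather than CV folds; the logistic version would need an IRLS loop on top of this, and the data below are synthetic):

```python
import numpy as np

def ridge_gcv(X, y, lambdas):
    """For each lambda, fit ridge beta = (X'X + lambda*I)^{-1} X'y and
    report the model's degrees of freedom df = trace(X (X'X + lambda*I)^{-1} X')
    together with a GCV-style score mean(residual^2) / (n - df)."""
    n, p = X.shape
    results = []
    for lam in lambdas:
        A = np.linalg.solve(X.T @ X + lam * np.eye(p), X.T)  # (X'X + lam*I)^{-1} X'
        beta = A @ y
        df = np.trace(X @ A)          # effective degrees of freedom
        resid = y - X @ beta
        score = (resid @ resid) / n / (n - df)
        results.append((lam, df, score))
    return results

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 8))
y = X @ rng.normal(size=8) + 0.5 * rng.normal(size=60)
fits = ridge_gcv(X, y, [0.1, 1.0, 10.0, 100.0])
```

The df trace makes the answer's point concrete: as $\lambda$ grows, the effective degrees of freedom shrink monotonically from $p$ toward zero, which is exactly the "reducing the model's ability to do anything useful" behavior to eyeball against the GCV curve.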
In this section, we shall discuss a few technical results about \(\gcd(a,b)\). Theorem \(\PageIndex{1}\label{thm:EA}\) Let \(d=\gcd(a,b)\), where \(a,b\in\mathbb{N}\). Then \[\{ as+bt \mid s,t\in\mathbb{Z} \} = \{ nd \mid n\in\mathbb{Z} \}.\] Hence, every linear combination of \(a\) and \(b\) is a multiple of \(\gcd(a,b)\), and vice versa, every multiple of \(\gcd(a,b)\) is expressible as a linear combination of \(a\) and \(b\). Proof For brevity, let \[S=\{as+bt\mid s,t\in\mathbb{Z}\}, \qquad\mbox{and}\qquad T=\{nd\mid n\in\mathbb{Z}\}.\] We shall show that \(S=T\) by proving that \(S\subseteq T\) and \(T\subseteq S\). Let \(x\in S\). To prove that \(S\subseteq T\), we want to show that \(x\in T\) as well. Being in \(S\) means \(x = as+bt\) for some integers \(s\) and \(t\). Since \(d=\gcd(a,b)\), we know that \(d\mid a\) and \(d\mid b\). Hence, \(a=da'\) and \(b=db'\) for some integers \(a'\) and \(b'\). Then \[x = as+bt = da's+db't = d(a's+b't),\] where \(a's+b't\) is an integer. This shows that \(x\) is a multiple of \(d\). Hence, \(x\in T\). To show that \(T\subseteq S\), it suffices to show that \(d\in S\). The reason is, if \(d=as+bt\) for some integers \(s\) and \(t\), then \(nd = n(as+bt) = a(ns)+b(nt)\) implies that \(nd\in S\). To prove that \(d\in S\), consider \(S^+\), the set of positive elements of \(S\). Since \(a=a\cdot1+b\cdot0\), we have \(a\in S^+\). Hence, \(S^+\) is a nonempty set of positive integers. The principle of well-ordering implies that \(S^+\) has a smallest element. Call it \(e\). Then \[e = as^*+bt^*\] for some integers \(s^*\) and \(t^*\). We already know that \(a\in S^+\). Being the smallest element in \(S^+\), we must have \(e\leq a\). By the division algorithm, \(a=eq+r\) for some integers \(q\) and \(r\), where \(0\leq r<e\). If \(r>0\), then \[r = a-eq = a-(as^*+bt^*)q = a(1-s^*q)+b(-t^*q).\] This makes \(r\) a linear combination of \(a\) and \(b\). Since \(r>0\), we find \(r\in S^+\). But \(r<e\), which would contradict the minimality of \(e\); hence we must have \(r=0\).
Consequently, \(a=eq\), thus \(e\mid a\). Similarly, since \(b = a\cdot0+b\cdot1\in S^+\), we can apply the same argument to show that \(e\mid b\). We conclude that \(e\) is a common divisor of \(a\) and \(b\). Let \(f\) be any common divisor of \(a\) and \(b\). Then \(f\mid a\) and \(f\mid b\). It follows that \(f\mid (ax+by)\) for any integers \(x\) and \(y\). In particular, \(f\mid(as^*+bt^*)=e\). Hence, \(f\leq e\). Since \(e\) is itself a common divisor of \(a\) and \(b\), and we have just proved that \(e\) is at least as large as any other common divisor of \(a\) and \(b\), the integer \(e\) itself must be the greatest common divisor. It follows that \(d=\gcd(a,b)=e\in S^+\). The proof is now complete. Corollary \(\PageIndex{2}\) The greatest common divisor of two nonzero integers \(a\) and \(b\) is the smallest positive integer among all their linear combinations. In other words, \(\gcd(a,b)\) is the smallest positive element in the set \(\{as+bt \mid s,t\in\mathbb{Z}\}\). Corollary \(\PageIndex{3}\) For any nonzero integers \(a\) and \(b\), there exist integers \(s\) and \(t\) such that \(\gcd(a,b)=as+bt\). Proof Theorem [thm:EA] maintains that the set of all the linear combinations of \(a\) and \(b\) equals the set of all the multiples of \(\gcd(a,b)\). Since \(\gcd(a,b)\) is a multiple of itself, it must equal one of those linear combinations. Thus, \(\gcd(a,b) = sa+tb\) for some integers \(s\) and \(t\). Theorem \(\PageIndex{4}\) Two nonzero integers \(a\) and \(b\) are relatively prime if and only if \(as+bt=1\) for some integers \(s\) and \(t\). Proof The result is a direct consequence of the definition that \(a\) and \(b\) are said to be relatively prime if \(\gcd(a,b)=1\). Example \(\PageIndex{1}\label{eg:moregcd-01}\) It is clear that 5 and 7 are relatively prime, so are 14 and 27. For each pair, find a linear combination that equals 1.
Solution By inspection, or using the extended Euclidean algorithm, we find \(3\cdot5-2\cdot7=1\), and \(2\cdot14-1\cdot27=1\). hands-on Exercise \(\PageIndex{1}\label{he:moregcd-01}\) Show that \(\gcd(133,143)=1\) by finding an appropriate linear combination. hands-on Exercise \(\PageIndex{2}\label{he:moregcd-02}\) Show that 757 and 1215 are relatively prime by finding an appropriate linear combination. Example \(\PageIndex{2}\label{eg:moregcd-02}\) It follows from \[(-1)\cdot n+1\cdot(n+1) = 1\] that \(\gcd(n,n+1)=1\). Thus, any pair of consecutive positive integers is relatively prime. Theorem \(\PageIndex{5}\) (Euclid's Lemma) Let \(a,b,c\in\mathbb{Z}\). If \(\gcd(a,c)=1\) and \(c\mid ab\), then \(c\mid b\). Discussion Let us write down what we know and what we want to show (WTS): \[\begin{array}{ll} \mbox{Know}: & as+ct=1 \mbox{ for some integers $s$ and $t$}, \\ & ab = cx \mbox{ for some integer $x$}, \\ \mbox{WTS}: & b = cq \mbox{ for some integer $q$}. \end{array}\] To be able to show that \(b=cq\) for some integer \(q\), we have to come up with some information about \(b\). This information must come from the two equations \(as+ct=1\) and \(ab=cx\). Since \(b=b\cdot1\), we can multiply \(b\) to both sides of \(as+ct=1\). By convention, we cannot write \((as+ct=1) \cdot b\). This notation is unacceptable! The reason is: we cannot multiply an equation by a number. Rather, we have to multiply both sides of an equation by the number: \[b = 1\cdot b = (as+ct)\cdot b = asb + ctb.\] Obviously, \(ctb\) is a multiple of \(c\); we are one step closer to our goal. Since \(asb = ab\cdot s\), and we do know that \(ab\) is indeed a multiple of \(c\), so the proof can be completed. We are now ready to tie up the loose ends, and polish up the proof. Proof Assume \(\gcd(a,c)=1\), and \(c\mid ab\). 
There exist integers \(s\) and \(t\) such that \[as + ct = 1.\] This leads to \[b = 1\cdot b = (as+ct)\cdot b = asb + ctb.\] Since \(c\mid ab\), there exists an integer \(x\) such that \(ab=cx\). Then \[b = ab\cdot s + ctb = cx\cdot s + ctb = c(xs+tb),\] where \(xs+tb\in\mathbb{Z}\). Therefore, \(c\mid b\). Corollary \(\PageIndex{6}\) If \(a,b\in\mathbb{Z}\) and \(p\) is a prime such that \(p\mid ab\), then either \(p\mid a\) or \(p\mid b\). Proof If \(p\mid a\), we are done with the proof. If \(p\nmid a\), then \(\gcd(p,a)=1\), and Euclid’s lemma implies that \(p\mid b\). We cannot apply the corollary if \(p\) is composite. For instance, \(6\mid 4\cdot15\), but \(6\nmid 4\) and \(6\nmid 15\). On the other hand, when \(p\mid ab\), where \(p\) is a prime, it is possible to have both \(p\mid a\) and \(p\mid b\). For instance, \(5\mid 15\cdot 25\), and we have both \(5\mid 15\) and \(5\mid 25\). Corollary \(\PageIndex{7}\) If \(a_1,a_2,\ldots,a_n\in\mathbb{Z}\) and \(p\) is a prime such that \(p\mid a_1 a_2 \cdots a_n\), then \(p\mid a_i\) for some \(i\), where \(1\leq i\leq n\). Consequently, if a prime \(p\) divides a product of \(n\) factors, then \(p\) must divide at least one of these \(n\) factors. Proof We leave the proof to you as an exercise. Example \(\PageIndex{3}\label{eg:moregcd-03}\) Prove that \(\sqrt{2}\) is irrational. Remark We proved previously that \(\sqrt{2}\) is irrational in a hands-on exercise. The solution we presented has a minor flaw. A key step in that proof claims that \[\mbox{The integer 2 divides $m^2$, therefore 2 divides $m$}.\] This claim is false in general. For example, 4 divides \(6^2\), but 4 does not divide 6. Therefore, we have to justify why this claim is valid for 2. Solution Suppose \(\sqrt{2}\) is rational, then we can write \[\sqrt{2} = \frac{m}{n}\] for some positive integers \(m\) and \(n\) that do not share any common divisor except 1.
Squaring both sides and cross-multiplying gives \[2n^2 = m^2.\] Thus 2 divides \(m^2\). Since 2 is prime, Euclid’s lemma implies that 2 must also divide \(m\). Then we can write \(m=2s\) for some integer \(s\). The equation above becomes \[2n^2 = m^2 = (2s)^2 = 4s^2.\] Hence, \[n^2 = 2s^2,\] which implies that 2 divides \(n^2\). Again, since 2 is prime, Euclid’s lemma implies that 2 also divides \(n\). We have proved that both \(m\) and \(n\) are divisible by 2. This contradicts the assumption that \(m\) and \(n\) do not share any common divisor except 1. Hence, \(\sqrt{2}\) must be irrational. hands-on Exercise \(\PageIndex{3}\label{he:moregcd-03}\) Prove that \(\sqrt{7}\) is irrational. We close this section with a truly fascinating result. Theorem \(\PageIndex{8}\) For any positive integers \(m\) and \(n\), \(\gcd(F_m,F_n)=F_{\gcd(m,n)}\). Corollary \(\PageIndex{9}\) For any positive integer \(n\), \(3\mid F_n \Leftrightarrow 4\mid n\). Proof (\(\Rightarrow\)) If \(3\mid F_n\), then, because \(F_4=3\), we have \[3 = \gcd(3,F_n) = \gcd(F_4,F_n) = F_{\gcd(4,n)}.\] It follows that \(\gcd(4,n)=4\), which in turn implies that \(4\mid n\). (\(\Leftarrow\)) If \(4\mid n\), then \(\gcd(4,n)=4\), and \[\gcd(3,F_n) = \gcd(F_4,F_n) = F_{\gcd(4,n)} = F_4 = 3;\] therefore, \(3\mid F_n\). Summary and Review Given any two nonzero integers, their greatest common divisor is the smallest positive value among all their linear combinations; every linear combination of the two integers is a multiple of their greatest common divisor. If \(a\) and \(c\) are relatively prime, then Euclid’s lemma asserts that if \(c\) divides \(ab\), then \(c\) must divide \(b\). In particular, if \(p\) is prime, and if \(p\mid ab\), then either \(p\mid a\) or \(p\mid b\). Exercise \(\PageIndex{1}\label{ex:moregcd-01}\) Given any arbitrary positive integer \(n\), prove that \(2n+1\) and \(3n+2\) are relatively prime.
Exercise \(\PageIndex{2}\label{ex:moregcd-02}\) Use induction to prove that for any integer \(n\geq2\), if \(a_1,a_2,\ldots,a_n\in\mathbb{Z}\) and \(p\) is a prime such that \(p\mid a_1 a_2 \cdots a_n\), then \(p\mid a_i\) for some \(i\), where \(1\leq i\leq n\). Exercise \(\PageIndex{3}\label{ex:moregcd-03}\) Prove that \(\sqrt{p}\) is irrational for any prime number \(p\). Exercise \(\PageIndex{4}\label{ex:moregcd-04}\) Prove that \(\sqrt[3]{2}\) is irrational. Exercise \(\PageIndex{5}\label{ex:moregcd-05}\) Given any arbitrary positive integers \(a\), \(b\), and \(c\), show that if \(a\mid c\), \(b\mid c\), and \(\gcd(a,b)=1\), then \(ab\mid c\). Remark. This result is very important. Remember it! Exercise \(\PageIndex{6}\label{ex:moregcd-06}\) Given any arbitrary positive integers \(a\), \(b\), and \(c\), show that if \(a\mid c\), and \(b\mid c\), then \(ab\mid cd\), where \(d=\gcd(a,b)\). Exercise \(\PageIndex{7}\label{ex:moregcd-07}\) Use induction to prove that \(3\mid(2^{4n}-1)\) and \(5\mid(2^{4n}-1)\) for any integer \(n\geq1\). Use these results to prove that \(15\mid (2^{4n}-1)\) for any integer \(n\geq1\). Exercise \(\PageIndex{8}\label{ex:moregcd-08}\) Prove that \(2\mid F_n \Leftrightarrow 3\mid n\) for any positive integer \(n\).
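As a computational companion to the linear-combination results above, the coefficients \(s\) and \(t\) with \(\gcd(a,b)=as+bt\) can be found with the extended Euclidean algorithm. A minimal sketch in Python (the function name is ad hoc):

```python
def extended_gcd(a, b):
    """Return (g, s, t) with g = gcd(a, b) = a*s + b*t."""
    if b == 0:
        return (a, 1, 0)
    g, s, t = extended_gcd(b, a % b)
    # g = b*s + (a % b)*t = b*s + (a - (a//b)*b)*t = a*t + b*(s - (a//b)*t)
    return (g, t, s - (a // b) * t)

# Exhibit 133 and 143 as relatively prime via an explicit combination.
g, s, t = extended_gcd(133, 143)
```

The invariant \(g = as+bt\) is maintained at every level of the recursion, so the returned pair is always a valid certificate of the gcd.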
Set of Words Generates Group Theorem Let $S \subseteq G$ where $G$ is a group. Let $\hat S$ be defined as $S \cup S^{-1}$, where $S^{-1}$ is the set of all the inverses of all the elements of $S$. Then $\gen S = \map W {\hat S}$, where $\map W {\hat S}$ is the set of words of $\hat S$. Proof Let $H = \gen S$ where $S \subseteq G$. $H$ must certainly include $\hat S$, because any group containing $s \in S$ must also contain $s^{-1}$. Thus $\hat S \subseteq H$. Since $H$ is closed under the group operation, $H$ contains every product of elements of $\hat S$, that is, every word of $\hat S$. Thus $\map W {\hat S} \subseteq H$. Now we prove that $\map W {\hat S} \le G$. By the Two-Step Subgroup Test: First, $\map W {\hat S}$ is non-empty, as it contains each $s \in S$ as a word of length $1$. Let $x, y \in \map W {\hat S}$, say $x = s_1 s_2 \ldots s_n$ and $y = t_1 t_2 \ldots t_m$ with each $s_i, t_j \in \hat S$. Then $x y = s_1 s_2 \ldots s_n t_1 t_2 \ldots t_m \in \map W {\hat S}$, as the concatenation of two words of $\hat S$ is again a word of $\hat S$. Also, $x^{-1} = s_n^{-1} \ldots s_2^{-1} s_1^{-1} \in \map W {\hat S}$, as each $s_i^{-1} \in \hat S$. Thus the conditions of the Two-Step Subgroup Test are fulfilled, and $\map W {\hat S} \le G$. So $\map W {\hat S}$ is a subgroup of $G$ containing $S$; since $\gen S$ is the smallest such subgroup, $H \subseteq \map W {\hat S}$. Combining the two inclusions: $\gen S = \map W {\hat S}$. $\blacksquare$ Sources 1965: J.A. Green: Sets and Groups... (previous) ... (next): $\S 5.3$. Subgroup generated by a subset 1966: Richard A. Dean: Elements of Abstract Algebra... (previous) ... (next): $\S 1.9$: Theorem $20$ 1967: George McCarty: Topology: An Introduction with Application to Topological Groups... (previous) ... (next): $\text{II}$: Exercise $\text{K}$ 1974: Thomas W. Hungerford: Algebra... (previous) ... (next): $\S 1.2$
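The theorem can be checked computationally in a small finite group. The sketch below (Python, with permutations of $\{0, 1, 2\}$ stored as tuples; all function names are ad hoc) closes $\hat S$ under products, i.e. enumerates the words of $\hat S$, and confirms that in $S_3$ a transposition and a $3$-cycle generate the whole group:

```python
def compose(p, q):
    # (p o q)(i) = p[q[i]]; permutations stored as tuples
    return tuple(p[q[i]] for i in range(len(p)))

def inverse(p):
    inv = [0] * len(p)
    for i, v in enumerate(p):
        inv[v] = i
    return tuple(inv)

def generated_subgroup(gens):
    """All words in S-hat = S union S^{-1}: repeatedly multiply
    until no new elements appear (a fixed point, since the group is finite)."""
    s_hat = set(gens) | {inverse(p) for p in gens}
    words = set(s_hat)
    frontier = set(s_hat)
    while frontier:
        new = {compose(x, y) for x in frontier for y in words} \
            | {compose(y, x) for x in frontier for y in words}
        frontier = new - words
        words |= frontier
    return words

# A transposition and a 3-cycle generate all of S_3 (order 6).
transposition = (1, 0, 2)
three_cycle = (1, 2, 0)
sub = generated_subgroup([transposition, three_cycle])
```

Note that the identity is not seeded explicitly: it arises as the word $t t$ for a transposition $t$, matching the theorem's description of $\gen S$ as a set of words.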
I was reading Schutz, A First Course in General Relativity. On page 9, he argued that the metric tensor is symmetric: $$ ds^2~=~\sum_{\alpha,\beta}\eta_{\alpha\beta} ~dx^{\alpha}~dx^{\beta} $$ "Note that we can suppose $\eta_{\alpha\beta}=\eta_{\beta\alpha}$ for all $\alpha$ and $\beta$, since only the sum $\eta_{\alpha\beta}+\eta_{\beta\alpha}$ ever appears in the above equation when $\alpha\neq\beta$." I don't understand his argument. If someone can explain why, I would really appreciate it.
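A quick numeric way to see the claim (a sketch, not from Schutz; the matrix entries and displacement below are arbitrary illustrative values): build a non-symmetric $\eta$, symmetrize it as $(\eta_{\alpha\beta}+\eta_{\beta\alpha})/2$, and check that both give the same $ds^2$ for any displacement, because only the symmetric part of $\eta$ ever contributes to the quadratic form.

```python
# An arbitrary NON-symmetric 4x4 "metric" and its symmetric part.
eta = [
    [-1.0, 0.3, 0.0, 0.7],
    [ 0.5, 1.0, 0.2, 0.0],
    [ 0.0, 0.8, 1.0, 0.1],
    [ 0.1, 0.0, 0.9, 1.0],
]
eta_sym = [[(eta[a][b] + eta[b][a]) / 2 for b in range(4)] for a in range(4)]

def ds2(g, dx):
    # ds^2 = sum over a, b of g_ab dx^a dx^b
    return sum(g[a][b] * dx[a] * dx[b] for a in range(4) for b in range(4))

dx = [0.5, -1.2, 2.0, 0.3]
# ds2(eta, dx) and ds2(eta_sym, dx) agree for every dx,
# so eta may as well be taken symmetric from the start.
```

The algebraic reason is exactly Schutz's: $dx^{\alpha}dx^{\beta}=dx^{\beta}dx^{\alpha}$, so the off-diagonal terms only ever enter through the combination $\eta_{\alpha\beta}+\eta_{\beta\alpha}$.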
The capillary action formula is given as: $$h=\frac{2T}{\rho gR}$$ where $h$ is the capillary rise, $R$ is the radius of curvature of the meniscus, $\rho$ is the density, $g$ is the acceleration due to gravity and $T$ is the surface tension. Now consider a capillary with a large base which gets narrower as we move along the top, such that the meniscus at the top has a radius $R$; now consider a tube of uniform radius made of the same material, such that it too allows a meniscus of radius $R$ to form. Looking at the formula, both capillaries will have the same height of liquid in them, but clearly the first one would have a greater weight of liquid suspended. What factors account for that? Or is my use of the formula wrong? If so, what is the proper interpretation? I would add to a very nice answer by @Farcher that, although the surface tension in the top narrow section will keep the liquid in place in both cases, the liquid will actually rise to that level in the first case, but not necessarily in the second. This is because as the height of the conical part decreases, the radius could be increasing faster, in which case the decrease in the surface tension ~$1/R$ would be faster than the decrease in the extra pressure ~$h$ it needs to support. In this scenario, to reproduce the second case, the cone would have to be lowered down to its neck and then carefully raised. The diagrams and some basic math below show that for a given radius of the capillary, $R_0$, and the height of the column, $h_0$, the half-angle of the cone that would support the rise of the liquid to the top has to be less than $\arcsin(R_0/h_0)$. Surface tension produces a pressure difference across the curved interface between the liquid and the air and it is this difference in pressure which results in the capillary rise. The pressure is atmospheric where the dots are black and less than atmospheric by an amount $\frac {2T}{R}$ where the dots are red.
Here $T$ is the surface tension and $R$ is the radius of the tube at the air-liquid interface. To ensure that the pressure where the bottom black dots are is atmospheric, the liquid rises so that the increase in pressure due to the liquid level being higher is equal to the decrease in pressure at the top due to the effect of surface tension. It is atmospheric pressure which keeps the columns of liquid in position, with surface tension and the radius of the tube at the air-liquid interface determining how high the column of liquid will rise. Note that I have assumed that at the top the air-liquid interface is in contact with a vertical tube. Your formula can only be used for a uniformly straight capillary, with both sides parallel to each other. For a conical frustum, the formula is $$h=\frac{2T\cos\theta}{\rho gR}$$ Hence, if $R$ were to be the same, $h$ would definitely not be the same. Firstly, the formula comes from equating the vertical components of the forces: $T\cos\theta \cdot 2\pi r = mg$, where $mg = \rho \cdot V \cdot g$ and the volume is $V = A \cdot h$ (cross-sectional area times height). In the second case, though the area is changing, the surface tension will still be similar to that of the first. This is because the water molecules causing the surface tension feel a downward force only from the water molecules directly below them, so the surface tension will remain the same.
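A quick numerical sanity check of the formula $h = 2T\cos\theta/(\rho g R)$ (a sketch with assumed room-temperature values for water in a clean glass tube, where the contact angle is approximately zero):

```python
import math

# Assumed physical constants for water at ~20 C (not from the question).
T = 0.0728        # surface tension of water, N/m
rho = 1000.0      # density of water, kg/m^3
g = 9.81          # acceleration due to gravity, m/s^2
theta = 0.0       # contact angle for water on clean glass, rad
R = 1e-3          # tube radius, m (a 1 mm capillary)

# Capillary rise: balance of surface tension against the weight of the column.
h = 2 * T * math.cos(theta) / (rho * g * R)
# roughly 1.5 cm of rise for a 1 mm tube
```

Halving $R$ doubles $h$, which is the scaling at the heart of the question: the meniscus radius at the interface, not the shape of the tube below it, sets the rise.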
I saw a flock of birds flying around today, and noticed that they didn't cast any shadows on the ground. I thought this to be rather strange, so I tried to resolve this mystery. My first idea was that the sun might actually be wide enough such that both 'ends' of the sun might cut underneath the birds, as the ground would still be illuminated by one or both sides of the sun as the bird flew in front. I calculated as follows: I assumed the distance from the earth to the sun to be $d = 150\times 10^6 \text{ km}$, and the radius of the sun to be $R = 6.957 \times 10^5 \text{ km}$. The angle $\alpha$, as shown in my poorly drawn figure, is the half angle between the two sides of the sun, and can be easily calculated: $$ \tan\alpha=\frac{R}{d} $$ $$ \alpha = \tan^{-1}\left(\frac{R}{d}\right) \approx 0.0046 \text{ rad} $$ This means that an object flying at a height of $20\ \mathrm{m}$ must have a radius of at least $$ r = 20\tan\alpha \approx 0.093\ \mathrm{m} $$ to block out the sun completely at some point on the ground. I assume the birds to have a wingspan greater than $20\ \mathrm{cm}$, so in this case they must cast a shadow. I also considered the possible effects of diffraction, but aren't those effects too small to be observed on such a scale? Does anybody have an explanation for why birds don't seem to cast shadows, or maybe where my attempt at explaining the phenomenon falls short?
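The arithmetic in the question can be reproduced in a couple of lines (same assumed values for the sun's radius and the earth-sun distance):

```python
import math

# Half-angle subtended by the sun's radius, as in the question.
R_sun = 6.957e5   # radius of the sun, km
d = 150e6         # earth-sun distance, km

alpha = math.atan(R_sun / d)     # ~0.0046 rad

# Minimum radius an object at height h must have to fully block the
# sun's disc (i.e. to cast a true umbra) at a point on the ground.
h = 20.0                         # height of the bird, m
r_min = h * math.tan(alpha)      # ~0.093 m
```

Anything smaller than `r_min` only casts a penumbra, a partial shadow whose contrast drops quickly with height, which is consistent with the question's suspicion about the sun's angular size.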
Linear Recurrences A linear recurrence is a linear equation that recursively defines a sequence. An example is the Fibonacci sequence, that is defined as \[F_0 = 0\] \[F_1 = 1\] \[F_n = F_{n-1} + F_{n-2}\] In general, a linear recurrence is a sequence \(\{a_n\}_n\) given by base cases and equations \[a_1 = x_1\] \[a_2 = x_2\] \[…\] \[a_k = x_k\] \[a_n = b_1 a_{n-1} + … + b_k a_{n-k}\] Notice that if the recurrence uses \(k\) previous terms, we need to have exactly \(k\) base cases; fewer won’t be enough and more would be redundant (it can even be contradictory). Now, we would like to solve this recurrence, that is, obtain a way to compute \(a_n\) for any \(n \in \mathbb{N}\) without having to compute all previous terms. We would like this method to be fast in terms of complexity, too. Enter the Matrix Linear algebra gives us a tool to solve all linear recurrences. First, let’s examine the Fibonacci case. Consider the matrix \[F = \begin{bmatrix} 0 & 1 \\ 1 & 1 \\ \end{bmatrix}\] It is very easy to see that \[F^n \begin{bmatrix} 0 \\ 1 \end{bmatrix} = \begin{bmatrix} F_n \\ F_{n+1} \end{bmatrix} \] Now, computing \(F^n\) naively takes \(n-1\) matrix multiplications. But throwing some divide & conquer at the problem (exponentiation by squaring), we can compute the n-th Fibonacci number in \(O(log(n))\), supposing matrix multiplication is constant (as every matrix is of size \(2 \times 2\) we can assume that). But we can do better than that. What if we could calculate \(F^n\) even faster? Well, we can. Diagonalization A matrix is diagonal if it is of the form \[\begin{bmatrix} a_1 & 0 & \cdots & 0 \\ 0 & a_2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & a_n \\ \end{bmatrix}\] Note that if \(A\) is diagonal, then \[A^k = \begin{bmatrix} a_1^k & 0 & \cdots & 0 \\ 0 & a_2^k & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & a_n^k \\ \end{bmatrix}\] A matrix \(A\) is diagonalizable if there exists an invertible matrix \(P\) such that \(PAP^{-1}\) is diagonal.
Now, knowing all of this, if we could find an invertible matrix \(P\) such that \(\begin{bmatrix} 0 & 1 \\ 1 & 1 \\ \end{bmatrix} = P D P^{-1}\) with \(D\) diagonal, then \[\begin{bmatrix} 0 & 1 \\ 1 & 1 \\ \end{bmatrix}^k = P D^k P^{-1} \] And, as we said before, calculating powers of diagonal matrices is very easy. Doing some math (the columns of \(P\) are the eigenvectors \((1, \phi)\) and \((1, -\phi^{-1})\) of the matrix), we can find out that \[P = \begin{bmatrix} 1 & 1 \\ \phi & -\phi^{-1} \\ \end{bmatrix}\] \[D = \begin{bmatrix} \phi & 0 \\ 0 & -\phi^{-1} \\ \end{bmatrix}\] do the job, where \(\phi = \frac{1 + \sqrt{5}}{2}\). So then we are done, we have two solutions. If we want to compute the \(n\)-th Fibonacci number using floating point approximations, we can do it in time \(O(1)\), and if we want to do it with integer types, without loss of precision, we can do it in \(O(log(n))\), with really low constants. General Case You may be asking what happens in general, for every linear recurrence relation, i.e., something like: \[a_1 = x_1, a_2 = x_2, …, a_k = x_k\] \[a_n = b_1 a_{n-1} + … + b_k a_{n-k}\] Well, this becomes easy after staring long enough at what we did previously. Note that \[\begin{bmatrix} 0 & 1 & 0 & \cdots & 0 \\ 0 & 0 & 1 & \ddots & \vdots \\ \vdots & 0 & \ddots & 1 & 0 \\ 0 & \vdots & \ddots & \ddots & 1 \\ b_k & b_{k-1} & \cdots & b_2 & b_1 \\ \end{bmatrix} \begin{bmatrix} a_{n-k} \\ a_{n-k+1} \\ \vdots \\ a_{n-2} \\ a_{n-1} \\ \end{bmatrix} = \begin{bmatrix} a_{n-k+1} \\ a_{n-k+2} \\ \vdots \\ a_{n-1} \\ a_{n} \\ \end{bmatrix}\] I will call this matrix the “generating matrix” of the linear recurrence. This matrix may remind you of a known and studied one: it is (up to transposition) the companion matrix of the recurrence’s characteristic polynomial. Now, as before, we diagonalize and solve the problem. But, what if the matrix is not diagonalizable? Well, then we can use Jordan normal form, for which there is a closed formula for powers. It is interesting to see that a really simple tool of linear algebra helped us solve a family of often-occurring problems. Another interesting thing I can talk about is what happens with eigenspaces.
Let \(C\) be the generating matrix of some linear recurrence; what happens if \((a_1, a_2, …, a_k)\), the vector formed by the first terms of the sequence, is an eigenvector? Well, then \[C^m \begin{bmatrix} a_1 \\ a_2 \\ \vdots \\ a_k \\ \end{bmatrix} = \lambda^m \begin{bmatrix} a_1 \\ a_2 \\ \vdots \\ a_k \\ \end{bmatrix} \] where \(\lambda\) is the corresponding eigenvalue. So then we are done. This allows us to take it even further, because if the generating matrix is diagonalizable, then we can write the whole space as a direct sum of eigenspaces, so we can decompose our vector of first terms into eigenvectors, do the aforementioned calculation, and then sum up all the results. This is one of the reasons why diagonalizable matrices are great: every vector of the space is a linear combination of eigenvectors. Another application of the same technique is linear differential equations, which can be solved using exactly the same method we used to solve linear recurrences. As you can probably imagine, this is very useful in physics, because several families of problems are solved using linear differential equations, and we have a general method to solve them.
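The whole method fits in a few lines of Python. This sketch uses the generating matrix with exponentiation by squaring over the integers, rather than the floating-point diagonalization (all function names are ad hoc):

```python
def mat_mul(A, B):
    k = len(A)
    return [[sum(A[i][l] * B[l][j] for l in range(k)) for j in range(k)]
            for i in range(k)]

def mat_pow(M, e):
    # exponentiation by squaring: O(log e) matrix multiplications
    k = len(M)
    R = [[int(i == j) for j in range(k)] for i in range(k)]  # identity
    while e:
        if e & 1:
            R = mat_mul(R, M)
        M = mat_mul(M, M)
        e >>= 1
    return R

def kth_term(bases, coeffs, n):
    """n-th term (1-indexed) of a_n = b_1 a_{n-1} + ... + b_k a_{n-k},
    with bases = [a_1, ..., a_k] and coeffs = [b_1, ..., b_k]."""
    k = len(bases)
    if n <= k:
        return bases[n - 1]
    C = [[0] * k for _ in range(k)]          # the generating matrix
    for i in range(k - 1):
        C[i][i + 1] = 1
    C[k - 1] = coeffs[::-1]                  # last row: b_k, ..., b_1
    M = mat_pow(C, n - 1)
    # the first row of C^(n-1) applied to (a_1, ..., a_k) gives a_n
    return sum(M[0][j] * bases[j] for j in range(k))
```

For Fibonacci, `kth_term([1, 1], [1, 1], n)` returns \(F_n\); a third-order recurrence like Tribonacci works the same way with three base cases and three coefficients.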
Now showing items 1-10 of 33 The ALICE Transition Radiation Detector: Construction, operation, and performance (Elsevier, 2018-02) The Transition Radiation Detector (TRD) was designed and built to enhance the capabilities of the ALICE detector at the Large Hadron Collider (LHC). While aimed at providing electron identification and triggering, the TRD ... Constraining the magnitude of the Chiral Magnetic Effect with Event Shape Engineering in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV (Elsevier, 2018-02) In ultrarelativistic heavy-ion collisions, the event-by-event variation of the elliptic flow $v_2$ reflects fluctuations in the shape of the initial state of the system. This allows to select events with the same centrality ... First measurement of jet mass in Pb–Pb and p–Pb collisions at the LHC (Elsevier, 2018-01) This letter presents the first measurement of jet mass in Pb-Pb and p-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV and 5.02 TeV, respectively. Both the jet energy and the jet mass are expected to be sensitive to jet ... First measurement of $\Xi_{\rm c}^0$ production in pp collisions at $\mathbf{\sqrt{s}}$ = 7 TeV (Elsevier, 2018-06) The production of the charm-strange baryon $\Xi_{\rm c}^0$ is measured for the first time at the LHC via its semileptonic decay into e$^+\Xi^-\nu_{\rm e}$ in pp collisions at $\sqrt{s}=7$ TeV with the ALICE detector. The ... D-meson azimuthal anisotropy in mid-central Pb-Pb collisions at $\mathbf{\sqrt{s_{\rm NN}}=5.02}$ TeV (American Physical Society, 2018-03) The azimuthal anisotropy coefficient $v_2$ of prompt D$^0$, D$^+$, D$^{*+}$ and D$_s^+$ mesons was measured in mid-central (30-50% centrality class) Pb-Pb collisions at a centre-of-mass energy per nucleon pair $\sqrt{s_{\rm ...
Search for collectivity with azimuthal J/$\psi$-hadron correlations in high multiplicity p-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 and 8.16 TeV (Elsevier, 2018-05) We present a measurement of azimuthal correlations between inclusive J/$\psi$ and charged hadrons in p-Pb collisions recorded with the ALICE detector at the CERN LHC. The J/$\psi$ are reconstructed at forward (p-going, ... Systematic studies of correlations between different order flow harmonics in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV (American Physical Society, 2018-02) The correlations between event-by-event fluctuations of anisotropic flow harmonic amplitudes have been measured in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV with the ALICE detector at the LHC. The results are ... $\pi^0$ and $\eta$ meson production in proton-proton collisions at $\sqrt{s}=8$ TeV (Springer, 2018-03) An invariant differential cross section measurement of inclusive $\pi^{0}$ and $\eta$ meson production at mid-rapidity in pp collisions at $\sqrt{s}=8$ TeV was carried out by the ALICE experiment at the LHC. The spectra ... J/$\psi$ production as a function of charged-particle pseudorapidity density in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV (Elsevier, 2018-01) We report measurements of the inclusive J/$\psi$ yield and average transverse momentum as a function of charged-particle pseudorapidity density ${\rm d}N_{\rm ch}/{\rm d}\eta$ in p-Pb collisions at $\sqrt{s_{\rm NN}}= 5.02$ ... Energy dependence and fluctuations of anisotropic flow in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 and 2.76 TeV (Springer Berlin Heidelberg, 2018-07-16) Measurements of anisotropic flow coefficients with two- and multi-particle cumulants for inclusive charged particles in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 and 2.76 TeV are reported in the pseudorapidity range |η| < 0.8 ...
Analytic Function Visualization This webapp is a somewhat-naive tool to visualize complex, and, in particular, complex-analytic functions. You can enter a function in the very top left; the left half shows $\mathbb C$ and the right half shows the image of $\mathbb C$ under the given function. But what's with those two circles in the domain and codomain? The circles have to do with complex-analytic functions. The idea of a function being 'analytic' may be thought of as it being, locally, a rotation and dilation. It is important to note that the center of the rotation and dilation is not zero but rather the center of a neighborhood. Woah, slow down. Hold up. From the beginning. We have a function $f : \mathbb C \to \mathbb C$. Let's take a look at a single point in $\mathbb C$, call it $z_0$. Consider some small movement or "arrow" $\Delta z$ within the domain that will take $z_0$ to $z_0 + \Delta z$. If we consider the image of $z_0$ and $z_0 + \Delta z$, which are $f(z_0)$ and $f(z_0 + \Delta z)$, respectively, we may draw another arrow, analogous to the $\Delta z$ arrow, between them which represents the movement it would take to bring $f(z_0)$ to $f(z_0 + \Delta z)$. The value of this second arrow is $f(z_0 + \Delta z) - f(z_0)$. But what's this got to do with analyticity? (It's actually slightly subtler than this, since a function may be differentiable at a point but may only be analytic on an open set.) Recall the derivative: $$ f'(z_0) = \lim_{\Delta z \to 0} \frac{f(z_0 + \Delta z) - f(z_0)}{\Delta z} $$ Hey, those symbols look familiar! You may see where this is going. Limits are beautiful and rigorous things, but we are going to be naughty and rewrite this statement as:$$ f'(z_0) \approx \frac{f(z_0 + \Delta z) - f(z_0)}{\Delta z} $$ and then multiply by $\Delta z$, giving:$$ f'(z_0) \cdot \Delta z \approx f(z_0 + \Delta z) - f(z_0) $$ In the context of the figure, this says that the two arrows, one in the domain and one in the codomain, are (approximately) multiples of each other. (The rigorous version is that in the limit case, the two arrows are multiples of each other.)
Since $\Delta z$ is arbitrary, we may point the domain arrow in any direction and give it any magnitude, and the corresponding arrow in the image will still be a multiple of it. In fact, since this multiple is $f'(z_0)$, which is independent of $\Delta z$, it's the same multiple for every arrow we draw. This means that we could instead draw a circle in the domain, and the corresponding circle in the image could also be approximated as a multiple of the domain's circle. This is what the circles are. They are a way of visualizing the local rotation and scaling of the function. A circle is drawn in the domain, centered around $z_0$, and drawn, rotated and scaled by $f'(z_0)$, in the codomain, centered around $f(z_0)$. The size of the circle may be changed in the settings menu, but note that it's called $\epsilon$ rather than $\Delta z$.
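The arrow picture above can be checked numerically. A minimal sketch, using $f(z) = z^2$ (so $f'(z) = 2z$) as the assumed example function:

```python
import cmath

# Check that f(z0 + dz) - f(z0) is approximately f'(z0) * dz,
# i.e. the image arrow is the domain arrow rotated/scaled by f'(z0).
f = lambda z: z * z
fprime = lambda z: 2 * z

z0 = 1 + 1j
# a small arrow pointing in an arbitrary direction
dz = 1e-6 * cmath.exp(1j * 0.7)

arrow_in_image = f(z0 + dz) - f(z0)
predicted = fprime(z0) * dz
# the two differ only by the quadratic term dz^2, of size ~1e-12
```

Multiplying by the complex number $f'(z_0)$ is exactly a rotation by $\arg f'(z_0)$ and a dilation by $|f'(z_0)|$, which is why the image circle in the webapp is a rotated, scaled copy of the domain circle.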
To answer your immediate question, $\gamma = 2$. You’re scaling the translated rectangle so that it has the same shape as the desheared parallelogram. Since you’ll be shearing parallel to the $x$-axis, you want the width of the scaled rectangle to equal the length of the parallelogram’s base, and the rectangle’s height should be equal to the parallelogram’s height (not its other edge length). [Note that you’ve gotten these two dilation factors swapped in your question, but corrected that in your own answer.] A simple way to compute this latter distance is to divide its area by its base length: $$\gamma = {\begin{vmatrix}4&3\\-2&1\end{vmatrix}\over\sqrt{4^2+3^2}} = \frac{10}5 = 2.$$ That aside, it’s convenient when working with affine transformations to use homogeneous coordinates so that translations can be included in the now $3\times3$ transformation matrix. (Depending on one’s point of view, this is either a cheap mathematical trick or reflects an important connection between the projective plane and Euclidean three-dimensional space.) The matrix that corresponds to the translation is, in block form, $$\left[ \begin{array}{c|c} I_2 & T \\ \hline 0&1 \end{array} \right]$$ and the complete transformation is $$\left[ \begin{array}{c|c} RA_2A_1 & RA_2A_1T \\ \hline 0&1 \end{array} \right] \begin{bmatrix}x\\y\\1\end{bmatrix} = \begin{bmatrix}u\\v\\1\end{bmatrix}.$$ This may not seem like much of an advantage over $2\times2$ matrices and a separate vector for the translation, but by turning an affine transformation of the plane into a linear transformation of 3-D space, this representation allows you to compute the complete transformation entirely mechanically in the following way: For any pairing of the vertices of two triangles in the plane, there is a unique affine transformation that maps between them. Thus, we only need to consider three of the four vertex pairs here; linearity of the map ensures that the fourth pair will be mapped correctly.
A linear transformation is completely determined by its action on a basis. In fact, relative to that basis the columns of its matrix are the images of the basis vectors. Therefore, if $\mathbf p_1$, $\mathbf p_2$ and $\mathbf p_3$ are the homogeneous coordinates of three of the rectangle’s vertices and $\mathbf q_1$, $\mathbf q_2$, $\mathbf q_3$ the corresponding parallelogram vertices, the required mapping is $$\begin{bmatrix}\mathbf q_1 & \mathbf q_2 & \mathbf q_3\end{bmatrix} \begin{bmatrix} \mathbf p_1 & \mathbf p_2 & \mathbf p_3 \end{bmatrix}^{-1}.$$ The left-hand matrix maps from the standard basis to the parallelogram, and the right-hand matrix maps from the rectangle to the standard basis. If neither the rectangle nor the parallelogram is degenerate, then both of these matrices are nonsingular. For the example in your answer, this is $$\begin{bmatrix}0 & 4 & -2 \\ 0 & 3 & 1 \\ 1&1&1 \end{bmatrix} \begin{bmatrix} 10 & 20 & 10 \\ 0 & 0 & 2\\ 1&1&1 \end{bmatrix}^{-1} = \begin{bmatrix} \frac25 & -1 & -4 \\ \frac3{10} & \frac12 & -3 \\ 0&0& 1 \end{bmatrix},$$ which corresponds to the mapping $$\begin{bmatrix}x\\y\end{bmatrix} = \begin{bmatrix}\frac25&-1\\\frac3{10}&\frac12\end{bmatrix}\begin{bmatrix}u\\v\end{bmatrix}-\begin{bmatrix}4\\3\end{bmatrix}.$$ Of course, you could skip all of this matrix stuff and work out an affine mapping directly as it’s just bilinear interpolation. Let $\lambda = (u-a)/(b-a)$ and $\mu = (v-c)/(d-c)$. Then $(x,y)^T = (1-\lambda-\mu)\mathbf P_1+\lambda\mathbf P_2+\mu\mathbf P_3$, where $\mathbf P_1\mathbf P_2$ is the side of the parallelogram that is the image of the lower edge of the rectangle and $\mathbf P_1\mathbf P_3$ is the image of the left side.
This corresponds to a different decomposition of the full transformation matrix: $$\begin{bmatrix}\mathbf P_2-\mathbf P_1 & \mathbf P_3-\mathbf P_1 & \mathbf P_1 \\ 0&0&1\end{bmatrix} \begin{bmatrix}(b-a)^{-1}&0&0\\0&(d-c)^{-1}&0\\0&0&1\end{bmatrix} \begin{bmatrix}1&0&-a\\0&1&-c\\0&0&1\end{bmatrix}.$$ Computing this will likely be more efficient than the matrix inversion above. Comparing this to your decomposition, the stretch of the unit square to match the parallelogram, the shear and the rotation have been consolidated into a single matrix that maps the unit square onto the parallelogram. The product of the other two matrices maps the source rectangle onto the unit square, which you can verify by setting $\mathbf P_1=(0,0)^T$, $\mathbf P_2 = (1,0)^T$ and $\mathbf P_3=(0,1)^T$ (which makes the left-hand matrix the identity). Note that this isn’t the only affine map of the rectangle to the parallelogram. There are eight possible such maps, four that preserve orientation and four that change it.
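The interpolation form is easy to code directly. A sketch (the rectangle bounds and parallelogram vertices below are the worked example's values; the function name is ad hoc):

```python
def rect_to_parallelogram(u, v, a, b, c, d, P1, P2, P3):
    """Affine map sending the rectangle [a,b] x [c,d] onto the
    parallelogram: (a,c) -> P1, (b,c) -> P2, (a,d) -> P3."""
    lam = (u - a) / (b - a)
    mu = (v - c) / (d - c)
    x = (1 - lam - mu) * P1[0] + lam * P2[0] + mu * P3[0]
    y = (1 - lam - mu) * P1[1] + lam * P2[1] + mu * P3[1]
    return x, y

# Worked example: rectangle [10,20] x [0,2] onto the parallelogram
# with vertices (0,0), (4,3), (-2,1)
P1, P2, P3 = (0, 0), (4, 3), (-2, 1)
```

The fourth corner $(b, d)$ lands on $\mathbf P_2 + \mathbf P_3 - \mathbf P_1$ automatically, which is the "linearity maps the fourth vertex correctly" point made above.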
Let $x_n$ be a sequence of independent random variables such that $P(x_n = 0) = 1 - \frac{1}{n}$. 1) Does $x_n$ converge to $0$ almost surely? 2) Does $x_n$ converge to $0$ in probability? 3) Does $x_n$ converge to $0$ in $L_p$? It is clear to me that the answer to 2) is yes: $P(|x_n| > \epsilon) \leq P(x_n \neq 0)$. Since the sequence on the right-hand side converges to $0$, we can conclude that $x_n$ converges to $0$ in probability. I need some hints on how to approach the remaining two questions. Thanks!
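For intuition on 1), a quick simulation sketch. The problem statement leaves the nonzero values unspecified, so this assumes $P(x_n = 1) = 1/n$; the relevant tool is the second Borel-Cantelli lemma, since $\sum 1/n$ diverges and the $x_n$ are independent:

```python
import random

# Empirical look at question 1), assuming P(x_n = 1) = 1/n.
# Second Borel-Cantelli: sum 1/n diverges and the x_n are independent,
# so x_n = 1 infinitely often almost surely; x_n should NOT -> 0 a.s.
random.seed(0)

def path_has_late_one(start, stop):
    # is there some n in [start, stop) with x_n != 0 on this sample path?
    return any(random.random() < 1.0 / n for n in range(start, stop))

trials = 200
late_hits = sum(path_has_late_one(100, 10000) for _ in range(trials))
frac = late_hits / trials
# theory: P(some hit in [100, 10000)) = 1 - 99/9999, about 0.99
```

Almost every simulated path still produces nonzero terms arbitrarily late, which is the behavior that rules out almost-sure convergence under this assumption.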
EUROPEAN TRANSPORT RESEARCH REVIEW, ISSN 1867-0717, 07/2019, Volume 11, Issue 1, pp. 1 - 16 PurposeIn terms of freight transportation it is essential to pick the most convenient mode(s) of transport (MOT). To get a more flexible system, one can assume... TRANSPORTATION | Vienna region | TRANSPORTATION SCIENCE & TECHNOLOGY | Quattromodal freight hub | Freight logistics | Quattromodality | Hubs | Freight transportation | Traffic planning | Traffic management | Transportation planning | Transportation research | Best practice Journal Article 2. Identification of regulatory variants associated with genetic susceptibility to meningococcal disease SCIENTIFIC REPORTS, ISSN 2045-2322, 05/2019, Volume 9, Issue 1, pp. 6966 - 15 Non-coding genetic variants play an important role in driving susceptibility to complex diseases but their characterization remains challenging. Here, we...
KAPPA-B | SEQUENCE VARIATION | BINDING | MULTIDISCIPLINARY SCIENCES | GENOME | NF-κB protein | Phenotypes | Disease | Epithelial cells | Association analysis | Single-nucleotide polymorphism | Regulatory sequences | Vaccines | Genetic variance | Infectious diseases | Genotyping | RelA protein | Binding sites | Meningococcal disease Journal Article BBA - Molecular and Cell Biology of Lipids, ISSN 1388-1981, 10/2019, Volume 1864, Issue 10, pp. 1363 - 1374 Endothelial lipase (EL) is a strong determinant of structural and functional properties of high-density lipoprotein (HDL). We examined whether the... HDL | Oxidation | LDL | Mass spectrometry | Proteomics | BIOCHEMISTRY & MOLECULAR BIOLOGY | LIPID HYDROPEROXIDES | ATHEROSCLEROSIS | MODULATES HDL | CELL BIOLOGY | HEPATIC LIPASE | BIOPHYSICS | FUNCTIONALITY | RELEVANCE | CHOLESTEROL EFFLUX | HDL PARTICLES | APOLIPOPROTEIN-A-I | Blood cholesterol | Lipase | Low density lipoproteins | Endothelium | Index Medicus Journal Article 4. A search for pair production of new light bosons decaying into muons in proton-proton collisions at 13 TeV Physics Letters B, ISSN 0370-2693, 09/2019, Volume 796, pp. 131 - 154 Phys. Lett.
B 796 (2019) 131 A search for new light bosons decaying into muon pairs is presented using a data sample corresponding to an integrated luminosity... Physics - High Energy Physics - Experiment Journal Article 5. Measurement of B$^0_\mathrm{s}$ meson production in pp and PbPb collisions at $\sqrt{s_\mathrm{NN}} =$ 5.02 TeV 10/2018 Phys. Lett. B 796 (2019) 168 The production cross sections of B$^0_\mathrm{s}$ mesons and charge conjugates are measured in proton-proton (pp) and PbPb... Journal Article 6. Search for vector-like leptons in multilepton final states in proton-proton collisions at $\sqrt{s}$ = 13 TeV 05/2019 Phys. Rev. D. 100 (2019) 052003 A search for vector-like leptons in multilepton final states is presented. The data sample corresponds to an integrated... Physics - High Energy Physics - Experiment Journal Article 7. A prospective cohort study to identify and evaluate endotypes of venous thromboembolism: Rationale and design of the Genotyping and Molecular Phenotyping in Venous ThromboEmbolism project (GMP-VTE) Thrombosis Research, ISSN 0049-3848, 09/2019, Volume 181, pp. 84 - 91 Several clinical, genetic and acquired risk factors for venous thromboembolism (VTE) have been identified. However, the molecular pathophysiology and...
DATABASE | THROMBOSIS | ONLINE MENDELIAN INHERITANCE | PLATELETS | INITIATE | PULMONARY-EMBOLISM | DISEASE | RISK | NEUTROPHILS | PERIPHERAL VASCULAR DISEASE | HEMATOLOGY | GENOME-WIDE ASSOCIATION | Pulmonary embolism | Machine learning | Development and progression | Information management | Thromboembolism | Medicine, Preventive | Preventive health services | Index Medicus Journal Article 8. Measurement of exclusive $\rho^0$(770) photoproduction in ultraperipheral pPb collisions at $\sqrt{s_\mathrm{NN}} =$ 5.02 TeV 02/2019 Eur. Phys. J. C 79 (2019) 702 Exclusive $\rho^0$(770) photoproduction is measured for the first time in ultraperipheral pPb collisions at $\sqrt{s_\mathrm{NN}}... Physics - High Energy Physics - Experiment Journal Article 9. Search for an exotic decay of the Higgs boson to a pair of light pseudoscalars in the final state with two muons and two b quarks in pp collisions at 13 TeV Physics Letters B, ISSN 0370-2693, 08/2019, Volume 795, Issue C, pp. 398 - 423 A search for exotic decays of the Higgs boson to a pair of light pseudoscalar particles is performed under the hypothesis that one of the pseudoscalars decays... 
CMS | BSM Higgs physics | Physics | ASTRONOMY & ASTROPHYSICS | PHYSICS, NUCLEAR | ROOT-S=13 TEV | PHYSICS, PARTICLES & FIELDS | Analysis | Quarks | Collisions (Nuclear physics) | Physics - High Energy Physics - Experiment | PHYSICS OF ELEMENTARY PARTICLES AND FIELDS Journal Article 10. Search for dark matter in events with a leptoquark and missing transverse momentum in proton-proton collisions at 13 TeV Physics Letters B, ISSN 0370-2693, 08/2019, Volume 795, Issue C, pp. 76 - 99 A search is presented for dark matter in proton-proton collisions at a center-of-mass energy of TeV using events with at least one high transverse momentum ( )... CMS | Leptoquarks | Dark matter | Physics | PARTICLE | ASTRONOMY & ASTROPHYSICS | PHYSICS, NUCLEAR | CANDIDATES | PHYSICS, PARTICLES & FIELDS | Analysis | Collisions (Nuclear physics) | Physics - High Energy Physics - Experiment | PHYSICS OF ELEMENTARY PARTICLES AND FIELDS Journal Article 11. Remote mobile manipulation with the centauro robot: Full-body telepresence and autonomous operator assistance Journal of Field Robotics, ISSN 1556-4959, 07/2019 Solving mobile manipulation tasks in inaccessible and dangerous environments is an important application of robots to support humans. Example domains are... Computer Science - Robotics Journal Article 12. 
Centrality and pseudorapidity dependence of the transverse energy density in pPb collisions at $\sqrt{s_\mathrm{NN}} =$ 5.02 TeV 10/2018 Phys. Rev. C 100 (2019) 024902 The almost hermetic coverage of the CMS detector is used to measure the distribution of transverse energy, $E_\mathrm{T}$, over... Journal Article 13. Search for charged Higgs bosons in the H$^{\pm}$ $\to$ $\tau^{\pm}\nu_\tau$ decay channel in proton-proton collisions at $\sqrt{s} =$ 13 TeV 03/2019 JHEP 07 (2019) 142 A search is presented for charged Higgs bosons in the H$^{\pm}$ $\to$ $\tau^{\pm}\nu_\tau$ decay mode in the hadronic final state and in... Physics - High Energy Physics - Experiment Journal Article 03/2019 Eur. Phys. J. C 79 (2019) 564 A search is presented for a heavy pseudoscalar boson A decaying to a Z boson and a Higgs boson with mass of 125 GeV. In the final... Physics - High Energy Physics - Experiment Journal Article 15. Search for supersymmetry in final states with photons and missing transverse momentum in proton-proton collisions at 13 TeV JOURNAL OF HIGH ENERGY PHYSICS, ISSN 1029-8479, 06/2019, Volume 2019, Issue 6, pp. 1 - 34 Results are reported of a search for supersymmetry in final states with photons and missing transverse momentum in proton-proton collisions at the LHC. The... 
Supersymmetry | BREAKING | ENERGY | Hadron-Hadron scattering (experiments) | MASS | SQUARK | PHYSICS, PARTICLES & FIELDS | Protons | Confidence intervals | Large Hadron Collider | Particle collisions | Transverse momentum | Luminosity | Pair production | Photons | Physics - High Energy Physics - Experiment | PHYSICS OF ELEMENTARY PARTICLES AND FIELDS Journal Article 06/2019 Suggested is a process for the production of solid cooling agents, wherein a pre-scraped melt, i.e., a melt of menthol compounds with added seed crystals, is... TRANSPORTING | THEIR RELEVANT APPARATUS | HYGIENE | CHEMICAL OR PHYSICAL PROCESSES, e.g. CATALYSIS OR COLLOIDCHEMISTRY | PREPARATIONS FOR MEDICAL, DENTAL, OR TOILET PURPOSES | MEDICAL OR VETERINARY SCIENCE | PHYSICAL OR CHEMICAL PROCESSES OR APPARATUS IN GENERAL | HUMAN NECESSITIES | PERFORMING OPERATIONS Patent
ARUNAVA MUKHERJEA Articles written in Proceedings – Mathematical Sciences Volume 119 Issue 5 November 2009 pp 669-677 This article gives sufficient conditions for the limit distribution of products of i.i.d. $2\times 2$ stochastic matrices to be continuous singular, when the support of the distribution of the individual random matrices is countably infinite. It extends a previous result for which the support of the random matrices is finite. The result is based on adapting existing proofs in the context of attractors and iterated function systems to the case of infinite iterated function systems. Volume 124 Issue 4 November 2014 pp 603-612 Problems similar to Volume 129 Issue 4 September 2019 Article ID 0063 Research Article In probability theory, often in connection with problems on weak convergence, and also in other contexts, convolution equations of the form $\sigma*\mu=\mu$ come up. Many years ago, Choquet and Deny (
A neural network is a non-linear classifier (the separator is not a linear function). It can also be used for regression. A shallow neural network is a one-hidden-layer neural network. A vanilla neural network is a regular neural network whose layers do not form cycles. TensorFlow Playground is an interactive web interface for learning neural networks: http://playground.tensorflow.org. Computational Graph Above is the computational graph for the function \(f(x) = (x-1)^2\). Forward propagation To minimize the function f, we assign a random value to x (e.g. x = 2), then we evaluate y, z, and f (forward propagation). Backward propagation Then we compute the partial derivative of f with respect to x step by step (backward propagation).\(\frac{\partial f}{\partial x} = \frac{\partial f}{\partial y} \cdot \frac{\partial y}{\partial x} + \frac{\partial f}{\partial z} \cdot \frac{\partial z}{\partial x} = 2 \\ \frac{\partial f}{\partial y} = z = 1 \\ \frac{\partial f}{\partial z} = y = 1 \\ \frac{\partial y}{\partial x} = \frac{\partial z}{\partial x} = 1\) Then we update \(x := x - \eta \cdot \frac{\partial f}{\partial x}\), where \(\eta\) is the learning rate. We repeat the operation until convergence. Activation functions Activation functions introduce nonlinearity into models. The most used activation functions are: \(f(x) = \frac{1}{1+exp(-x)}\) Sigmoid Sigmoid has a positive and non-zero-centred output (sigmoid(0) = 0.5). When all activation units are positive, the weight updates will all be in the same direction (all positive or all negative), and that will cause a zigzag path during optimization. \(z=\sum_i w_i a_i+b \\ \frac{dL}{dw_i}=\frac{dL}{dz} \cdot \frac{dz}{dw_i}=\frac{dL}{dz} \cdot a_i\) If all \(a_i>0\), then the gradient will have the same sign as \(\frac{dL}{dz}\) (all positive or all negative). \(f(x) = \frac{2}{1+exp(-2x)} -1\) TanH When |x| is large, the derivative of the sigmoid or TanH function is around zero (vanishing gradient/saturation). 
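The forward and backward passes above, for \(f(x) = (x-1)^2\) written as \(f = y \cdot z\) with \(y = z = x - 1\), can be sketched in plain Python (a minimal sketch; the function names `forward` and `backward` are mine):

```python
def forward(x):
    y = x - 1.0        # first intermediate node of the graph
    z = x - 1.0        # second intermediate node
    f = y * z          # output node: f = (x - 1)^2
    return y, z, f

def backward(x, y, z):
    # df/dx = df/dy * dy/dx + df/dz * dz/dx = z * 1 + y * 1
    return z + y

# Gradient descent: x := x - eta * df/dx, repeated until convergence.
x, eta = 2.0, 0.1
for _ in range(100):
    y, z, f = forward(x)
    x -= eta * backward(x, y, z)
```

At x = 2 the forward pass gives y = z = 1 and f = 1, and the backward pass gives df/dx = 2, matching the worked derivatives above; the repeated updates then drive x toward the minimizer x = 1.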
ReLU (Rectified Linear Unit) f(x) = max(0, x) Leaky ReLU f(x) = max(0.01x, x) Leaky ReLU was introduced to fix the "dying ReLU" problem.\(z=\sum_i w_i a_i+b \\ f=Relu(z) \\ \frac{dL}{dw_i}=\frac{dL}{df} \cdot \frac{df}{dz} \cdot \frac{dz}{dw_i}\) When z becomes negative, the derivative of f becomes equal to zero, and the weights stop being updated. PReLU (Parametric Rectifier) \(f(x) = max(\alpha x, x)\) ELU (Exponential Linear Unit) \(f(x) = \{x\ \text{if}\ x>0,\ \text{otherwise}\ \alpha(exp(x)-1)\}\) Other activation functions: Maxout Cost function \(J(\theta) = \frac{1}{m} \sum_{i=1}^{m} loss(y^{(i)}, f(x^{(i)}; \theta))\) We need to find \(\theta\) that minimizes the cost function: \(\underset{\theta}{argmin}\ J(\theta)\) Neural Network Regression Neural network regression has no activation function at the output layer. \(loss(y,\hat{y}) = |y - \hat{y}|\) L1 loss function \(loss(y,\hat{y}) = (y - \hat{y})^2\) L2 loss function Hinge loss function The hinge loss function is recommended when there are some outliers in the data.\(loss(y,\hat{y}) = max(0, |y-\hat{y}| - m)\) Two-Class Neural Network \(loss(y,\hat{y}) = - y \cdot log(\hat{y}) - (1-y) \cdot log(1 - \hat{y})\) Binary cross-entropy loss function Multi-Class Neural Network – One-Task Using Softmax, the output \(\hat{y}\) is modeled as a probability distribution, therefore we can assign only one label to each example. \(loss(Y,\widehat{Y}) = -\sum_{j=1}^c Y_{j} \cdot log(\widehat{Y}_{j})\) Cross-entropy loss function \(y = \begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix},\ \hat{y} = \begin{bmatrix} 2 \\ -5 \\ 3 \end{bmatrix} \\ loss(y,\hat{y}) = \sum_{c \neq 1} max(0, \hat{y}_c - \hat{y}_1 + m)\) Hinge loss (SVM) function For m = 1, the sum will be equal to 2. Multi-Class Neural Network – Multi-Task In this version, we assign multiple labels to each example. \(loss(Y,\widehat{Y}) = \sum_{j=1}^c - Y_j \cdot log(\widehat{Y}_j) - (1-Y_j) \cdot log(1 - \widehat{Y}_j)\) Loss function Regularization Regularization is a very important technique to prevent overfitting. Dropout For each training example, randomly drop a fraction p of the activation nodes of each hidden layer. p is called the dropout rate (\(p \in [0,1]\)). 
At test time, scale activations by the keep probability (1 - p). Inverted Dropout With inverted dropout, the scaling is applied at training time instead, and inversely: first drop activations with probability p, then scale the kept ones by 1/(1 - p). Nothing needs to be applied at test time. Data Augmentation As a regularization technique, we can apply random transformations to input images when training a model. Early stopping Stop when the error rate is still decreasing on training data while it increases on dev (cross-validation) data. \(J(\theta) = \frac{1}{m} \sum_{i=1}^{m} loss(y^{(i)}, f(x^{(i)}; \theta)) \color{blue} { + \lambda \sum_{j} |\theta_j|} \) L1 regularization \(\lambda\) is called the regularization parameter \(J(\theta) = \frac{1}{m} \sum_{i=1}^{m} loss(y^{(i)}, f(x^{(i)}; \theta)) \color{blue} { + \lambda \sum_{j} \theta_j^2} \) L2 regularization \(J(\theta) = \frac{1}{m} \sum_{i=1}^{m} loss(y^{(i)}, f(x^{(i)}; \theta)) \color{blue} { + \lambda \sum_{j} \theta_j^p} \) Lp regularization For example, if the cost function is \(J(\theta)=(\theta_1 - 1)^2 + (\theta_2 - 1)^2\), then the \(L_2\)-regularized cost function is \(J(\theta)=(\theta_1 - 1)^2 + (\theta_2 - 1)^2 + \lambda (\theta_1^2 + \theta_2^2)\). If \(\lambda\) is large, then the point that minimizes the regularized \(J(\theta)\) will be around (0,0) –> underfitting. If \(\lambda \approx 0\), then the point that minimizes the regularized \(J(\theta)\) will be around (1,1) –> overfitting. Elastic net Combination of L1 and L2 regularizations. Normalization Gradient descent converges quickly when data is normalized, \(X_i \in [-1,1]\). If features have different scales, then the updates of the parameters will not be on the same scale (zig-zag). For example, if the activation function g is the sigmoid function, then when W.x+b is large, g(W.x+b) is around 1, but the derivative of the sigmoid function is around zero. For this reason the gradient converges slowly when W.x+b is large. Below are some normalization functions. 
\(X:= \frac{X - \mu}{\sigma}\) ZScore \(X:= \frac{X - min}{max-min}\) MinMax \(X:= \frac{1}{1+exp(-X)}\) Logistic \(X:= \frac{1}{\sigma\sqrt{2\pi}} \int_{0}^{X} \frac{exp(\frac{-(ln(t) - \mu)^2}{2\sigma^2})}{t} dt\) LogNormal \(X:= tanh(X)\) Tanh Weight Initialization Weight initialization is important because if the weights are too big, activations explode. If the weights are too small, gradients will be around zero (no learning). When we normalize the input data, we make the mean of the input features equal to zero and the variance equal to one. To keep the activation units normalized too, we can initialize the weights \( W^{(1)}\) so that \(Var(g(W_{j}^{(1)}.x+b_{j}^{(1)}))\) equals one. If we suppose that g is ReLU and \(W_{i,j}, b_j, x_i\) are independent, then:\(Var(g(W_{j}^{(1)}.x+b_{j}^{(1)})) = Var(\sum_{i} W_{i,j}^{(1)}.x_i+b_{j}^{(1)}) =\sum_{i} Var(W_{i,j}^{(1)}.x_i) + 0 \\ = \sum_{i} E(x_i)^2.Var(W_{i,j}^{(1)}) + E(W_{i,j}^{(1)})^2.Var(x_i) + Var(W_{i,j}^{(1)}).Var(x_i) \\ = \sum_{i} 0 + 0 + Var(W_{i,j}^{(1)}).Var(x_i) = n.Var(W_{i,j}^{(1)}).Var(x_i) \) Xavier initialization If we define \(W_{i,j}^{(1)} \sim N(0,\frac{1}{\sqrt{n}})\), then the initial variance of the activation units will be one (n is the number of input units). We can apply this rule to all weights of the neural network. Batch Normalization Batch normalization is a technique to provide any layer in a neural network with normalized inputs. Batch normalization has a regularizing effect. After training, \(\sigma\) will converge to the standard deviation of the mini-batches and \(\mu\) will converge to their mean. The \(\gamma, \beta\) parameters give more flexibility when shifting or scaling is needed. Hyperparameters Neural network hyperparameters are: Learning rate (\(\eta\)) (e.g. 0.1, 0.01, 0.001,…) Number of hidden units Number of layers Mini-batch size Momentum rate (e.g. 0.9) Adam optimization parameters (e.g. 
\(\beta_1=0.9, \beta_2=0.999, \epsilon=10^{-8}\)) Learning rate decay Local Minimum The probability that gradient descent gets stuck in a local minimum in a high-dimensional space is extremely low. We could have a saddle point, but it's rare to have a local minimum. Transfer Learning Transfer learning consists in reusing the parameters of a trained model when training new hidden layers of an extended version of that model.
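The inverted-dropout bookkeeping from the regularization notes above can be sketched in plain Python (assumptions: `p` here denotes the drop probability, so survivors are scaled by 1/(1 - p), and the function name is mine):

```python
import random

def inverted_dropout(activations, p, rng):
    """Train-time inverted dropout: zero each activation with probability p,
    scale survivors by 1/(1 - p) so the expected activation is unchanged."""
    keep = 1.0 - p
    return [a / keep if rng.random() < keep else 0.0 for a in activations]

rng = random.Random(0)
out = inverted_dropout([1.0] * 100_000, 0.2, rng)
mean = sum(out) / len(out)   # stays close to 1.0
```

With p = 0.2, each kept activation becomes 1/0.8 = 1.25, so the expected value of each unit is unchanged and nothing needs to be rescaled at test time.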
For this assignment, we were told to go forth and create a linear motion axis built from scrap and then measure its precision. I had some leftover 8mm steel rod from what used to be 3D-printer materials, so I decided to make a small linear rail out of that. These rods were only 30cm long and rather thin for desk material, so I'm treating this one more like a scale model. I can use this to see what I can improve for the real thing. With round straight steel rod as my rails, I have two main concerns to address: Fabricate rail holders to maximize parallelism of the rods Make a carriage to achieve the best slidey-ness/load capacity with the least effort/wobbliness The rail holder objective is fairly straightforward, but the carriage requires some thought. Generally, the more constrained you make a slider, the less load capacity it gets (before you get friction problems). I decided to see how far I could get with a circular-bore carriage (the slider on one rail has cylindrical bores, and the other rail just has a flat). The fundamental failure mode of this design will be angular wobbliness in the xy-plane (parallel to the base, but it would move a laser beam side to side), since a slip-fit circular bore inherently allows side-to-side play. However, such a design would prevent the carriage from lifting off the rail, and I could later reduce angular errors with a preloading mechanism. Also, this design is really easy to machine. Below is some scratchwork: So if I have an 8mm rod, and I have a clearance bore of 21/64" (closest common machine tool size, equal to 8.335mm), what's the best error I can theoretically achieve? More fun scratchwork below: I can expect between 0.2° and 0.6° angular error, assuming my carriage connection is actually rigid. I needed to predict what sort of errors I would see from this device; for that I used another Slocum spreadsheet. 
This error apportionment spreadsheet explores allowable errors for all the components in a machine, based on the total error the engineer wants to achieve and how precisely the engineer can expect to make each individual part. The logic here is that I can easily acquire a decent actuator and build a decent structure through clever machining processes, but my sliding axis idea is going to rely on flawed delrin bearings and a really derpily-mounted and honestly not-well-collimated laser "sensor". So I expect the most error to come from these items. The goal is to achieve 0.5mm precision (arbitrarily lofty goal) despite these items - for that to happen, my linear axis needs to have 0.33mm precision (angular precision 0.09deg) before considering load. (I'm moving things on this axis too slowly to care about thermal or process errors.) Welp, here goes. I machined a block of delrin to create the four rail holders and the circular-bore carriage sliders. All the critical features of the rail holders (height, bore, and mounting bolt holes) were machined first; subsequently the block was bandsawed into four pieces. The slider pieces were also matched by machining everything before splitting, to improve the relative precision of the components during assembly. Bottom faces of rail holders and slider (left) and top faces (right) After assembling the more constrained rail, I measured the distance from the top of the rod to the slider - 1.42mm - and found a scrap piece of acrylic for the flat that reasonably matched that height. I then bolted my simple linear rail assembly to my lab's optical table, then attached a piece of sheet metal with VHB tape to test it. Simple Linear Axis with all the components For testing, I taped a laser pointer to the carriage and pointed it at a cabinet 20ft (6.12m) away. 
My carriage is 75mm long and wide, so using Abbe error principles, $\tan(\alpha) = \frac{\delta}{L} = \frac{g}{\text{carriage length}}$ and $\alpha = \arctan(\frac{\delta}{L})$, where if I want my bearing error to be at most 0.159mm (error apportionment), I want my angular error to be $\frac{0.159\,\text{mm} \times \text{gap}}{l} = 0.02°$, and therefore max $\delta$ = 2.167mm (repeatability at the same location) and max $\delta$ = 3.36mm (moving the carriage the full length of the 214mm-long rail). Laser target. The white paper is so I can draw on it, and the black tape spot is for the camera's benefit. Once I rotated the optical table to a reasonable target-width (video below), I was ready to start properly measuring my linear axis. It's possible to back-calculate the estimated angle of the assembly relative to the target based on the overall drift of the laser across all the trials, but I definitely won't conduct enough trials to properly statistics-away this particular source of error. Instead, I'll probably find a better calibration method for the next iteration of this linear axis once the actuator is attached. Anyway, during testing I discovered that my clearance-bore + flat method did indeed have noticeable side-to-side error, and worse than I calculated - 9.81mm, which was a 0.09deg angular error for repeatability testing. A lot of this is due to my setup itself not being squared up - angular error at the front was only 0.04deg compared to 0.14deg at the rear. Traveling from back to front multiple times, I accrued an overall angular error of 0.23degrees. Womp. My estimation from looking at the repeatability of the fronts and backs is that 0.05deg of that was due to the table itself. Given these results, I tried squaring up the optical table a bit better and put a 500g weight on the carriage to look at the effects of adding a load. This time, my fronts and backs had more similar displacements - both errors were 0.1deg. 
However, sliding back and forth got an error of 0.5deg - twice as much as when I tried this with minimal loading, and 5x what my error spreadsheet budgeted for. I suppose this is what I get for attaching my carriage to my bearings with compliant foam tape and attaching my laser with duct tape, and I'll find out how much better I can get when I add an actuator and reattach everything with more thought. However, the real experimental error matched up with my scratchwork predictions, despite having a bore gap 0.2mm larger than intended (using an 11/32" reamer instead of a 21/64"). So I probably shouldn't expect to achieve anything significantly better even with an actuator.
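The angular-error arithmetic above is easy to double-check in Python (a sketch only; `angular_error_deg` is my name, and the 6120 mm throw and 9.81 mm drift are the numbers quoted in the post):

```python
import math

def angular_error_deg(delta_mm, distance_mm):
    """Angle implied by a lateral beam displacement delta at a given throw."""
    return math.degrees(math.atan2(delta_mm, distance_mm))

throw_mm = 6120.0                            # 20 ft laser throw from the post
alpha = angular_error_deg(9.81, throw_mm)    # measured side-to-side drift
```

This reproduces the roughly 0.09deg repeatability figure quoted above from the 9.81mm drift.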
Taghavi, A., Rohi, H., Darvish, V. (2015). Additivity of maps preserving Jordan $\eta_{\ast}$-products on $C^{*}$-algebras. Bulletin of the Iranian Mathematical Society, 41(Issue 7 (Special Issue)), 107-116. Department of Mathematics, Faculty of Mathematical Sciences, University of Mazandaran, P.O. Box 47416-1468, Babolsar, Iran. Receive Date: 26 November 2014, Revise Date: 18 April 2015, Accept Date: 18 April 2015 Abstract Let $\mathcal{A}$ and $\mathcal{B}$ be two $C^{*}$-algebras such that $\mathcal{B}$ is prime. In this paper, we investigate the additivity of maps $\Phi$ from $\mathcal{A}$ onto $\mathcal{B}$ that are bijective, unital and satisfy $\Phi(AP+\eta PA^{*})=\Phi(A)\Phi(P)+\eta \Phi(P)\Phi(A)^{*},$ for all $A\in\mathcal{A}$ and $P\in\{P_{1},I_{\mathcal{A}}-P_{1}\}$ where $P_{1}$ is a nontrivial projection in $\mathcal{A}$. If $\eta$ is a non-zero complex number such that $|\eta|\neq1$, then $\Phi$ is additive. Moreover, if $\eta$ is rational, then $\Phi$ is $\ast$-additive.
Talk:Absolute continuity From Encyclopedia of Mathematics Revision as of 14:08, 30 July 2012 by Camillo.delellis Could I suggest using $\lambda$ rather than $\mathcal L$ for Lebesgue measure, since: it is very commonly used, almost standard; it would be consistent with the notation for a general measure, $\mu$; calligraphic is being used already for $\sigma$-algebras. --Jjg 12:57, 30 July 2012 (CEST)
[OS X TeX] Missing $ inserted in a "variations" environment ewan.Delanoy at math.unicaen.fr ewan.Delanoy at math.unicaen.fr Sun Mar 15 15:09:53 CET 2009 Hello all, I've just encountered the following problem with TeXShop : 1) Compilation stops with the error message "Missing $ inserted". 2) If I insist and press "Enter" twice in the console, it eventually compiles completely and produces a fine output. 3) By trial and error, I see that what causes this behaviour is a "variations" array (from the variations package) in the middle of my text. 4) The line number displayed in the error message, l.259, is not very illuminating because the source text is as follows \noindent \makebox[\textwidth][c]{ \begin{variations} x & \mI & & \alpha& & & 1 & & \beta& & & 2 & & \pI & \\ \filet f'(x) & \bg & + & \bb& & + & \z & -& \bb & & - &\z& + & &\bd \\ \filet \m{f(x)} & \bg \mI & \c \h{\pI} & \bb& \mI & \c & \h{-1} & \d \mI & \bb & \h{\pI}& \d &\frac{3}{2}& \c & \h{\pI} & \bd \\ \end{variations} }<- Line 259 is here Can anyone help me on this ? TIA, More information about the macostex-archives mailing list
Search Now showing items 1-9 of 9 Production of $K*(892)^0$ and $\phi$(1020) in pp collisions at $\sqrt{s}$ =7 TeV (Springer, 2012-10) The production of K*(892)$^0$ and $\phi$(1020) in pp collisions at $\sqrt{s}$=7 TeV was measured by the ALICE experiment at the LHC. The yields and the transverse momentum spectra $d^2 N/dydp_T$ at midrapidity |y|<0.5 in ... Transverse sphericity of primary charged particles in minimum bias proton-proton collisions at $\sqrt{s}$=0.9, 2.76 and 7 TeV (Springer, 2012-09) Measurements of the sphericity of primary charged particles in minimum bias proton--proton collisions at $\sqrt{s}$=0.9, 2.76 and 7 TeV with the ALICE detector at the LHC are presented. The observable is linearized to be ... Pion, Kaon, and Proton Production in Central Pb--Pb Collisions at $\sqrt{s_{NN}}$ = 2.76 TeV (American Physical Society, 2012-12) In this Letter we report the first results on $\pi^\pm$, K$^\pm$, p and pbar production at mid-rapidity (|y|<0.5) in central Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV, measured by the ALICE experiment at the LHC. The ... Measurement of prompt J/$\psi$ and beauty hadron production cross sections at mid-rapidity in pp collisions at $\sqrt{s}$ = 7 TeV (Springer-Verlag, 2012-11) The ALICE experiment at the LHC has studied J/ψ production at mid-rapidity in pp collisions at $\sqrt{s}$ = 7 TeV through its electron pair decay on a data sample corresponding to an integrated luminosity Lint = 5.6 nb−1. The fraction ... Suppression of high transverse momentum D mesons in central Pb--Pb collisions at $\sqrt{s_{NN}}=2.76$ TeV (Springer, 2012-09) The production of the prompt charm mesons $D^0$, $D^+$, $D^{*+}$, and their antiparticles, was measured with the ALICE detector in Pb-Pb collisions at the LHC, at a centre-of-mass energy $\sqrt{s_{NN}}=2.76$ TeV per ... 
J/$\psi$ suppression at forward rapidity in Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV (American Physical Society, 2012) The ALICE experiment has measured the inclusive J/ψ production in Pb-Pb collisions at √sNN = 2.76 TeV down to pt = 0 in the rapidity range 2.5 < y < 4. A suppression of the inclusive J/ψ yield in Pb-Pb is observed with ... Production of muons from heavy flavour decays at forward rapidity in pp and Pb-Pb collisions at $\sqrt {s_{NN}}$ = 2.76 TeV (American Physical Society, 2012) The ALICE Collaboration has measured the inclusive production of muons from heavy flavour decays at forward rapidity, 2.5 < y < 4, in pp and Pb-Pb collisions at $\sqrt {s_{NN}}$ = 2.76 TeV. The pt-differential inclusive ... Particle-yield modification in jet-like azimuthal dihadron correlations in Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV (American Physical Society, 2012-03) The yield of charged particles associated with high-pT trigger particles (8 < pT < 15 GeV/c) is measured with the ALICE detector in Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV relative to proton-proton collisions at the ... Measurement of the Cross Section for Electromagnetic Dissociation with Neutron Emission in Pb-Pb Collisions at √sNN = 2.76 TeV (American Physical Society, 2012-12) The first measurement of neutron emission in electromagnetic dissociation of 208Pb nuclei at the LHC is presented. The measurement is performed using the neutron Zero Degree Calorimeters of the ALICE experiment, which ...
The size of the band is defined by the bandwidth NBW. This is defined as the smallest number for which K_{i,j}=0 for all i,j satisfying |i-j| > NBW. If K is symmetric, we will store the lower half of the band, i.e. all entries K_{i,j} for which i >= j. The halfbandwidth NHBW is defined by NBW = 2*NHBW + 1. Since we only get a nonzero entry in K_{i,j} if nodes with global numbering i and j are contained in the same element, the halfbandwidth NHBW can be calculated as the maximum, over all elements l, of the difference in node numbers for the nodes in l. In POISS.FOR, this calculation is done at the start of subroutine assmb. We will therefore use NPOIN rows of GSTIF (one row for each node in the mesh) and NHBW+1 columns. We denote the latter value by NBAND, and dimension GSTIF at the start of POISS.FOR as DIMENSION GSTIF(MPOIN,MBAND) where the parameters MPOIN and MBAND are set in the PARAMETER statement. Then, on row i we store only k_{i,i-NHBW},...,k_{i,i}, for i=1,...,NTOTV. We store these entries in an array GSTIF with NTOTV rows (where NTOTV is the total number of degrees of freedom) and (NHBW+1) columns. The mapping from K to GSTIF is: K_{i,j} --> GSTIF(I, J-I+NHBW+1) for j=i-NHBW,...,i. Then the diagonal of the matrix (entries where i=j) will be stored in the final column of the array. You can see the entries of the element stiffness matrices being placed in GSTIF, in subroutine assmb of POISS.FOR. The mapping is slightly more complicated in ELAST.FOR, where there are two degrees of freedom per node. Here, the u and v d.o.f.s of node n (i.e. displacements in the x and y directions at node n) are global degrees of freedom numbers i_n+1 and i_n+2, where i_n=2(n-1). You can see this mapping being used at the start of subroutine assmb. After making this adjustment, the local d.o.f.s are mapped to GSTIF by the formula in the previous section. 
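The band-storage mapping just described can be sketched in Python with 0-based indices (the Fortran text is 1-based; the names here are illustrative, not from POISS.FOR):

```python
NHBW = 2          # halfbandwidth of a small example matrix
NBAND = NHBW + 1  # columns of GSTIF; the diagonal sits in the last one

def band_index(i, j, nhbw=NHBW):
    """Column of GSTIF holding K[i][j] (0-based), or None outside the band."""
    col = j - i + nhbw
    return col if 0 <= col <= nhbw else None
```

For any row i, band_index(i, i) is always nhbw, i.e. the diagonal lands in the final column of the array, exactly as described above.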
At the start of assmb, the halfbandwidth is found by locating the position of the first non-zero entry in each row I: this is stored in the array NMINVC. Thus, if NMINVC(I)=MI, it means that k_{i,mi} is non-zero but k_{i,j}=0 for j=1,...,mi-1. The halfbandwidth NHBW is then given by the maximum of (i - mi) over i=1,...,NTOTV. It will be seen that the size of the bandwidth, and hence the amount of storage required, depends on the order in which the nodes of the mesh have been numbered. For this reason it is very important to use the element renumbering facility in PREFEL when producing the .dat file for input to the `main engine', to obtain a sensible numbering of the elements --- and then to re-number the nodes accordingly. Note that the bandwidth does not depend on the element numbering directly. The mapping from K to GSTIF is again

      K_{i,j} --> GSTIF(I, J-I+NHBW+1).

Now, however, there are NBW columns, and the diagonal entries of K lie down the (NHBW+1)'th column. This storage technique is used in the soil consolidation program CONSL.FOR. As with band storage, the amount of storage used depends on the numbering of the nodes in the mesh. We again use the array NMINVC(I) to tell us the column where the first non-zero entry on row I of K lies. Then (as K will be symmetric in CONSL) we store only the entries from this point up to the diagonal entry, in GSTIF. GSTIF now becomes a one-dimensional array, and we use a pointer array LPOINT(I) to tell us where the I'th row of K is stored. That is, the diagonal entry K_{i,i} will be stored as GSTIF(LPI), where LPI=LPOINT(I), for I=1 to NTOTV. The dimension of GSTIF is then LPOINT(NTOTV), which we denote by NSTIF. The parameter MSTIF is used to dimension GSTIF at the start of the program. The arrays NMINVC and LPOINT are set up by a subroutine mask in CONSL.FOR. LPOINT(I) can be found from LPOINT(I-1) by

      LPOINT(I) = LPOINT(I-1) + I - NMINVC(I) + 1

since there are I-NMINVC(I)+1 entries to store on row I. We start with LPOINT(1)=1.
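The pointer recurrence, and the index lookup it supports, can be sketched in Python (illustrative only; the Fortran LPOINT/NMINVC are 1-based, this sketch is 0-based, and the row profile is made up):

```python
def build_lpoint(nminvc):
    """0-based analogue of LPOINT(I) = LPOINT(I-1) + I - NMINVC(I) + 1.

    nminvc[i] is the column of the first nonzero entry in row i, so row i
    contributes i - nminvc[i] + 1 stored entries (up to the diagonal).
    lpoint[i] is the index of the diagonal entry K[i][i] in the
    one-dimensional skyline array.
    """
    lpoint = []
    for i, mi in enumerate(nminvc):
        prev = lpoint[-1] if lpoint else -1
        lpoint.append(prev + (i - mi + 1))
    return lpoint

def gpos(lpoint, nminvc, i, j):
    """Skyline index of K[i][j] for j <= i, or None outside the stored
    envelope (the 1-based Fortran gpos returns zero there)."""
    if j < nminvc[i] or j > i:
        return None
    return lpoint[i] - i + j

# Row i first has a nonzero entry in column nminvc[i]:
nminvc = [0, 0, 1, 1]
lpoint = build_lpoint(nminvc)
```

For this profile the skyline array needs `lpoint[-1] + 1 = 8` entries, against 10 for the full lower triangle of a 4 x 4 matrix; the saving grows with mesh size and with a good node numbering.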
The mapping from the lower triangle of K to GSTIF is now

      K_{i,j} --> GSTIF(LPOINT(I) - I + J)   if NMINVC(I) <= J <= I.

This mapping is performed by a function gpos; if K_{i,j} lies outside the envelope of K which is being stored, gpos returns a value of zero. Otherwise,

      K_{i,j} --> GSTIF(GPOS(LPOINT,NTOTV,I,J)).

This algorithm is represented in the following fragments of pseudo-code. First we show the reduction phase, in which the lower triangular matrix L is formed:

C...Reduction:
      do 20 i=1,n-1
        pivot = G(i,i)
        do 10 k=i+1,n
          fact = G(k,i)/pivot
          do 5 j=i+1,n
            G(k,j) = G(k,j) - fact*G(i,j)
    5     continue
          f(k) = f(k) - fact*f(i)
   10   continue
   20 continue

Here, G(i,j) indicates the (i,j)'th entry of G; no modification has been made to represent the compact storage of G. (Compare this pseudo-code with the corresponding code in THERM.FOR.) The solution is completed by the back-substitution phase, performed in subroutine backsu of THERM.FOR:

c...back-substitution phase
      u(n) = f(n)/G(n,n)
      do 10 i=n-1,1, step -1
        pivot = G(i,i)
        do 5 j=i+1,n
          f(i) = f(i) - G(i,j)*u(j)
    5   continue
        u(i) = f(i)/pivot
   10 continue

The entries in L must be found systematically. The first one which can be found is l_{1,1}, since it will be seen, by forming the matrix LL^T and comparing it with K, that k_{1,1} = l_{1,1}^2. Once l_{1,1} is known, we could calculate all the entries below it in the first column. However, we will work in a row-based manner: find l_{2,1}, use this to find l_{2,2}, then move on to the 3rd row to find l_{3,1}, l_{3,2}, l_{3,3} --- then the 4th row, and so on. This row-based approach matches the row-based skyline storage algorithm described earlier. This Choleski decomposition algorithm is used in the Basic-level programs POISS and ELAST, and can be found in subroutine chodec.
The pseudo-code for this decomposition, applied to the matrix G, is as follows:

c...decompose G into L.L^T
      i = 1
      aii = G(1,1)
      clii = sqrt(aii)
      G(1,1) = clii
c...decompose each row in turn
      do 20 i=2,n
c...work along the row
        do 10 j=1,i-1
          cljj = G(j,j)
          aij = G(i,j)
          sum = 0.0
          do 5 k=1,j-1
            gik = G(i,k)
            gjk = G(j,k)
            sum = sum + gik*gjk
    5     continue
          clij = (aij - sum)/cljj
          G(i,j) = clij
   10   continue
c...finally find the diagonal entry on the row
        aii = G(i,i)
        sum = 0.0
        do 15 k=1,i-1
          clik = G(i,k)
          sum = sum + clik*clik
   15   continue
        clii2 = aii - sum
        clii = sqrt(clii2)
        G(i,i) = clii
   20 continue

Note that as each entry in L is found, it is written to G, so that the corresponding entry of the stiffness matrix is overwritten. At the end of the algorithm, G contains the entries of L in its lower triangle. In implementing this algorithm in a `main engine', we must use a mapping to convert G(i,j) to a storage space in GSTIF, as described in the earlier section on storage techniques. The system Gu=f can now be rewritten LL^Tu=f, and is solved in two stages:

c...forward substitution: solve Ly = f
      y(1) = f(1)/G(1,1)
      do 10 i=2,n
        bi = f(i)
        sum = 0.0
        do 5 k=1,i-1
          gik = G(i,k)
          ak = y(k)
          sum = sum + gik*ak
    5   continue
        clii = G(i,i)
        y(i) = (bi - sum)/clii
   10 continue
c...back-substitution phase: solve L^T u = y
      u(n) = y(n)/G(n,n)
      do 20 i=n-1,1, step -1
        yi = y(i)
        sum = 0.0
        do 15 k=i+1,n
          gki = G(k,i)
          ak = u(k)
          sum = sum + gki*ak
   15   continue
        clii = G(i,i)
        u(i) = (yi - sum)/clii
   20 continue

In program CONSL.FOR, we use the same solution algorithm, but modify its Fortran coding to take account of the skyline storage technique described above. When row-based skyline storage is used with a Choleski decomposition, the algorithm given above for the final back-substitution phase (solve L^T u = y) is rather inefficient. This is because it works with rows of L^T, which are columns of L, the entries of which are scattered throughout the row-based storage array GSTIF.
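The row-based decomposition can be sketched in Python (an illustrative translation of the chodec logic using 0-based indexing and a full square array, not the actual Fortran; the test matrix is made up):

```python
import math

def chodec(G):
    """Row-based Choleski decomposition: return lower-triangular L
    (list of lists) with G = L L^T, finding each row of L in turn."""
    n = len(G)
    L = [[0.0] * n for _ in range(n)]
    L[0][0] = math.sqrt(G[0][0])
    for i in range(1, n):
        # work along the row
        for j in range(i):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            L[i][j] = (G[i][j] - s) / L[j][j]
        # finally find the diagonal entry on the row
        s = sum(L[i][k] ** 2 for k in range(i))
        L[i][i] = math.sqrt(G[i][i] - s)
    return L

K = [[4.0, 2.0, 0.0],
     [2.0, 5.0, 1.0],
     [0.0, 1.0, 3.0]]
L = chodec(K)
```

Multiplying the factor back together reproduces K, which is the quickest way to convince yourself the row ordering above is consistent.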
This back-substitution phase can easily be modified, however, so that once an entry u_i is found, its term is eliminated from all equations from row i-1 up to row 1, which involves working up the column of L^T. The modified algorithm, replacing the last phase of the previous pseudo-code, is:

c...back-substitution phase: solve L^T u = y
c...We can do this in a column-based manner!
      do 20 i=n,1, step -1
        yi = y(i)
        clii = G(i,i)
        xi = yi/clii
        u(i) = xi
c...once we have found x_i, transfer the term involving this, to
c...the r.h.s. in all earlier equations
        if (i.eq.1) goto 20
        do 15 j=i-1,1, step -1
          clij = G(i,j)
          y(j) = y(j) - clij*xi
   15   continue
   20 continue

This form of the routine is also used in PLADV, where skyline storage is also employed.

The coding for this algorithm, for the case of a symmetric matrix, can be seen in subroutine front of the advanced elasticity program ELADV.FOR. I am indebted to Dr David Naylor for permission to use this subroutine. A number of global one-dimensional arrays have to be used, but the algorithm is still more space-efficient than skyline storage, and so ELADV should be used for elasticity analyses with complex meshes. There are comment lines in the subroutine, giving some guidance as to how the algorithm works. The algorithm can be adapted to the case of a non-symmetric matrix G, in which case approximately twice as much storage will be needed. My own adaptation of front for the nonsymmetric case is the subroutine afront in the elasto-viscoplasticity program VPLAS.FOR. This program handles non-associated plastic flow, which results in non-symmetric stiffness matrices. The amount of storage needed depends on the numbering of the elements, not the nodes, as the algorithm is dependent on the order in which the elements are assembled into the front. The subroutine makes a preliminary run-through of the elements to check that sufficient storage is provided; this is defined by the parameter MFRON at the start of the program.
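Both substitution phases, with the column-based variant for L^T u = y, can be sketched in Python (illustrative and 0-based; the factor below is the Choleski factor of the made-up matrix K = [[4,2,0],[2,5,1],[0,1,3]]):

```python
import math

def cho_solve(L, f):
    """Solve L L^T u = f: forward substitution for L y = f, then the
    column-based back-substitution for L^T u = y."""
    n = len(f)
    y = list(f)
    # forward substitution: L y = f
    for i in range(n):
        s = sum(L[i][k] * y[k] for k in range(i))
        y[i] = (y[i] - s) / L[i][i]
    # column-based back-substitution: L^T u = y
    u = [0.0] * n
    for i in range(n - 1, -1, -1):
        u[i] = y[i] / L[i][i]
        # once u[i] is known, transfer its term to the r.h.s. of all
        # earlier equations (working up the column of L^T)
        for j in range(i):
            y[j] -= L[i][j] * u[i]
    return u

# Choleski factor of K = [[4,2,0],[2,5,1],[0,1,3]]:
L = [[2.0, 0.0, 0.0],
     [1.0, 2.0, 0.0],
     [0.0, 0.5, math.sqrt(2.75)]]
f = [1.0, 2.0, 3.0]
u = cho_solve(L, f)
```

The point of the column-based loop is that it only ever touches `L[i][j]` for `j <= i`, i.e. entries within a single stored row of the skyline array, rather than scattered column entries.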
In the frontal technique, the entries of G are allocated spaces in the one-dimensional array GSTIF, which is of dimension MFRON. As the assembly process is handled within front, there is no assmb subroutine within ELADV and VPLAS.

Initialization:
      v^(0) = 0
      r^(0) = f
      k = 0
      z^(0) = C^{-1} r^(0)
      p^(0) = z^(0)
Begin iteration:
      alpha_k = (z^(k), r^(k)) / (G p^(k), p^(k))
      v^(k+1) = v^(k) + alpha_k p^(k)
      r^(k+1) = r^(k) - alpha_k G p^(k)
      z^(k+1) = C^{-1} r^(k+1)
      beta_k = (z^(k+1), r^(k+1)) / (z^(k), r^(k))
      p^(k+1) = z^(k+1) + beta_k p^(k)
      k = k + 1
Next iteration

This form of the PCG algorithm is taken from Bartelt (1989) --- see Chapter 10 for details of recommended texts. Here, (u,v) denotes the inner product of vectors u and v. The matrix C is the preconditioning matrix. The algorithm is terminated when the norm of the residual vector r^(k+1) is less than TOLER times the norm of f; then the solution is u = v^(k+1). The simplest form of preconditioning is diagonal preconditioning, in which C = diag(G). This algorithm is implemented in the Basic-level program FRAME.FOR.
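The algorithm translates almost line for line into Python (an illustrative sketch with diagonal preconditioning, not the FRAME.FOR code; the small test system is made up):

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def pcg(G, f, toler=1e-10, maxit=100):
    """Preconditioned conjugate gradients with C = diag(G),
    following the algorithm stated in the text."""
    n = len(f)
    v = [0.0] * n                                  # v^(0) = 0
    r = list(f)                                    # r^(0) = f
    z = [r[i] / G[i][i] for i in range(n)]         # z^(0) = C^-1 r^(0)
    p = list(z)
    ztr = dot(z, r)
    fnorm = dot(f, f) ** 0.5
    for _ in range(maxit):
        Gp = [dot(row, p) for row in G]
        alpha = ztr / dot(Gp, p)
        v = [vi + alpha * pi for vi, pi in zip(v, p)]
        r = [ri - alpha * gpi for ri, gpi in zip(r, Gp)]
        if dot(r, r) ** 0.5 < toler * fnorm:       # ||r|| < TOLER * ||f||
            break
        z = [r[i] / G[i][i] for i in range(n)]
        ztr0, ztr = ztr, dot(z, r)
        beta = ztr / ztr0
        p = [zi + beta * pi for zi, pi in zip(z, p)]
    return v

G = [[4.0, 1.0, 0.0],
     [1.0, 3.0, 1.0],
     [0.0, 1.0, 2.0]]
f = [1.0, 2.0, 0.0]
v = pcg(G, f)
```

On an n x n symmetric positive-definite system, exact arithmetic would converge in at most n iterations; in floating point the TOLER test does the work.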
The solution is performed in subroutine pcgsol, the pseudo-code for which is:

c...solve Kv=g using diagk as preconditioner
c...multiplications K.v are performed element-by-element,
c...reading the element matrices of K from the scratch file
c
c...initial guess
      do 2 i=1,n
        v(i) = 0.0
        r(i) = g(i)
        z(i) = 0.0
    2 continue
c
c...z = Kdiag^-1 * r
      do 15 i=1,n
        z(i) = r(i)/diagk(i)
   15 continue
c
      do 16 i=1,n
        p(i) = z(i)
   16 continue
      ztr = 0.0
      do 17 i=1,n
        zi = z(i)
        ri = r(i)
        ztr = ztr + zi*ri
   17 continue
      call norm2(mtotv,ntotv,r,rnorm0)
      itscg = 0
      write(6,*)'PGSOL: iteration',itscg,', norm = ',rnorm0
c
   25 itscg = itscg + 1
c
c...create y which is G*p
      call ematgv( p , y )
      pty = 0.0
      do 28 i=1,n
        pty = pty + p(i)*y(i)
   28 continue
      alpha = ztr/pty
      do 30 i=1,n
        v(i) = v(i) + alpha*p(i)
   30 continue
c
      do 40 i=1,n
        r(i) = r(i) - alpha*y(i)
   40 continue
c
      call norm2(mtotv,ntotv,r,rnorm)
      write(6,*)'       iteration',itscg,', norm = ',rnorm
      if(rnorm.lt.cgtol) goto 200
c
c...z = Kdiag^-1 * r
      do 45 i=1,n
        z(i) = r(i)/diagk(i)
   45 continue
c
      ztr0 = ztr
      ztr = 0.0
      do 60 i=1,ntotv
        ztr = ztr + z(i)*r(i)
   60 continue
      beta = ztr/ztr0
c
      do 130 i=1,ntotv
        p(i) = z(i) + beta*p(i)
  130 continue
c
      goto 25
c
  200 continue
      write(6,*)'Solution converged after ',itscg,' iterations.'

The diagonal preconditioner is stored in the array diagk. Diagonal preconditioning works surprisingly well, considering its simplicity. If a more sophisticated preconditioner is required, the most popular currently is to perform an Incomplete Choleski (IC) factorization of G. This is essentially a symmetric Choleski decomposition of G into LL^T, as described earlier, but avoiding some or all of the `fill-in' which occurs within the envelope of G. That is, we may set L(i,j)=0 instead of L(i,j)=l_{i,j} if G(i,j)=0. To compensate for this, the calculated l_{i,j} is instead added to the diagonal term L(i,i). Both diagonal and IC preconditioning are implemented in the advanced plasticity program PLADV.FOR.
The form of IC preconditioning in PLADV.FOR is adaptive: the fill-in is avoided if the magnitude of l_{i,j} is less than a user-specified multiple (filtol) of the diagonal term. PLADV.FOR uses the skyline storage technique for G. The subroutine inchol in PLADV.FOR performs this IC decomposition. The pseudo-code below should be compared with that for subroutine chodec:

c...Incomplete Choleski decomposition
      small = 1.e-8
c...count the amount of fill-in which occurs
      kfill = 0
      khole = 0
      i = 1
      a11 = G(1,1)
      cl11 = sqrt(a11)
      G(1,1) = cl11
c...decompose each row in turn
      do 20 i=2,n
        aii = G(i,i)
c...we will not fill in if the term is smaller than filtol
        filtol = factor*abs(aii)
c...work along the row
        do 10 j=1,i-1
          cljj = G(j,j)
          aij = G(i,j)
          sum = 0.0
          do 5 k=1,j-1
            sum = sum + G(i,k)*G(j,k)
    5     continue
          clij = (aij - sum)/cljj
c...here is the 'Incomplete' part
          gij = G(i,j)
          if (gij.eq.0.0) then
            khole = khole + 1
            if (abs(clij).lt.filtol) then
c...avoid fill-in: add i,j term to diagonal instead
              G(i,i) = G(i,i) + clij
              goto 10
            endif
c...fill-in will occur
            kfill = kfill + 1
          endif
          G(i,j) = clij
   10   continue
c...finally find the diagonal entry on the row
        aii = G(i,i)
        sum = 0.0
        do 15 k=1,i-1
          clik = G(i,k)
          if (clik.eq.0.0) goto 15
          sum = sum + clik*clik
   15   continue
        clii2 = aii - sum
        clii = sqrt(clii2)
        G(i,i) = clii
   20 continue
      write(6,*) 'No. of holes found in G matrix = ',khole
      write(6,*) 'No. of holes filled-in = ',kfill

As might be expected, there is a trade-off between the amount of fill-in which is prevented and the effectiveness of the resulting preconditioner. Note that this is a very simple form of IC algorithm. There have been many improved algorithms proposed in recent years, and users wishing to apply this technique in practice are advised to search specialist journals for the latest developments. The reader will have noticed that additional tolerance parameters must be specified when an iterative solver is used.
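The adaptive IC idea can be sketched in Python (illustrative only, using a full square array rather than skyline storage; with factor = 0.0 no fill-in is ever suppressed, so the sketch must reproduce the exact Choleski factor, which is what the test exploits):

```python
import math

def inchol(G, factor=0.0):
    """Incomplete Choleski in the spirit of subroutine inchol: where
    G[i][j] == 0 and the computed l_ij is below factor*|G[i][i]|, skip
    the fill-in and fold l_ij into the diagonal instead."""
    n = len(G)
    # copy the lower triangle of G into the working array
    L = [[G[i][j] if j <= i else 0.0 for j in range(n)] for i in range(n)]
    L[0][0] = math.sqrt(L[0][0])
    for i in range(1, n):
        filtol = factor * abs(L[i][i])
        for j in range(i):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            lij = (L[i][j] - s) / L[j][j]
            if G[i][j] == 0.0 and abs(lij) < filtol:
                L[i][i] += lij        # avoid fill-in: add to diagonal
                L[i][j] = 0.0
                continue
            L[i][j] = lij
        s = sum(L[i][k] ** 2 for k in range(i))
        L[i][i] = math.sqrt(L[i][i] - s)
    return L

K = [[4.0, 2.0, 0.0],
     [2.0, 5.0, 1.0],
     [0.0, 1.0, 3.0]]
L = inchol(K)          # factor = 0.0 reproduces the exact factor
```

Raising `factor` suppresses more fill-in, giving a sparser (cheaper) but less accurate preconditioner --- exactly the trade-off described above.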
In PLADV, the iterative solvers are chosen by setting the parameter ISOLA to 2 or 3.

The alternative method for imposing nodal fixities, used in ELAST.FOR, is sometimes called a `big spring' approach. To impose the condition that u_k is approximately 0.0 for some k, we add a large number (10^20) to the corresponding diagonal term in GSTIF; this is done in subroutine fixdof, called by solve before calling chodec. This will force the value of u_k arising from the solution process to be very small, and we later set such small values to zero. Fixed values of u_k which are non-zero (e.g. specified displacements, or the specified values of the potential for nodes lying on Dirichlet boundaries) are dealt with similarly. If we want to ensure that u_k = g, we multiply K_{k,k} by 10^12, and set the corresponding entry in the r.h.s. vector to F_k = g*K_{k,k}. This process can be seen in fixdof in POISS.FOR. When a displacement degree of freedom is fixed, there will be a reaction force in this direction at the node, to maintain equilibrium. This can be found after the main solution by calculating the r.h.s. for the corresponding row. This is done in subroutine reactn in ELAST.FOR and FRAME.FOR; these reactions are written into the load vector ASLOD, which is written to the result file.
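The `big spring' trick is easy to demonstrate (an illustrative Python sketch, not the fixdof code; the 2 x 2 system and the prescribed value are made up):

```python
def fixdof(K, f, fixed):
    """Impose prescribed values u[k] = g by scaling the diagonal K[k][k]
    by a huge factor and setting f[k] = g * K[k][k], so the k'th
    equation is dominated by big * K[k][k] * u[k] = g * big * K[k][k]."""
    K = [row[:] for row in K]
    f = list(f)
    big = 1.0e12
    for k, g in fixed.items():
        K[k][k] *= big
        f[k] = g * K[k][k]
    return K, f

def solve2(K, f):
    """Cramer's rule, just for the 2 x 2 example below."""
    det = K[0][0] * K[1][1] - K[0][1] * K[1][0]
    return [(f[0] * K[1][1] - K[0][1] * f[1]) / det,
            (K[0][0] * f[1] - f[0] * K[1][0]) / det]

K = [[4.0, 1.0],
     [1.0, 3.0]]
f = [1.0, 2.0]
Kf, ff = fixdof(K, f, {0: 0.5})   # prescribe u_0 = 0.5
u = solve2(Kf, ff)
```

The modified first equation reads 4e12*u_0 + u_1 = 2e12, so u_0 = 0.5 to within rounding, while the remaining equations are solved normally --- no rows or columns have to be deleted from the stored matrix.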
Let $M$ be a smooth compact $n$-manifold without boundary, $g$ some choice of Riemannian metric on $M$, and $\omega_g$ the volume form gotten from $g$. Say you're interested in finding extrema for quantities determined by a choice of Riemannian metric on $M$ . . . perhaps $\lambda_1(\int_M \omega_g)^{2/n}$, where $\lambda_1$ is the first nonzero eigenvalue of a Laplace-type operator, or maybe the zeta-regularized determinant of the conformal Laplacian. The setting for this variational problem is the space of Riemannian metrics $\mathcal{M}$ on $M$. This is a tame Frechet manifold and is an open convex cone inside the tame Frechet space $\Gamma^\infty\!(S^2T^\ast\!M)$ of symmetric covariant 2-tensors on $M$ (Hamilton, 1982). These objects are standard in conformal geometry. In particular, if $M=S^{n}$, then $\Gamma^\infty\!(S^2T^\ast\!M)$ is the space of sections of a vector bundle associated to a representation of the conformal group of $M$, and variational problems can be addressed via the action of the conformal group (Møller & Ørsted, 2009). My concern is whether there is an analogue of this picture for strongly pseudoconvex CR manifolds, in particular for $S^{2n+1}$ with its standard CR structure. What is the tangent bundle of the space of strongly pseudoconvex CR structures? What is the tangent bundle of the space of all CR structures? Is the space of strongly pseudoconvex CR structures on the sphere a tame Frechet manifold sitting as an open convex cone inside some tame Frechet space of sections of some vector bundle over $S^{2n+1}$? Is this vector bundle associated to a representation of a Lie group?

This post imported from StackExchange MathOverflow at 2014-09-20 22:39 (UCT), posted by SE-user user41626
Proper time

The proper time \(τ\) is the time measured by an observer \(O\) (which can just be a particle) who "stands still" in space relative to a coordinate system. For example, if I am standing still on the surface of the Earth one meter away from a tree (not moving toward or away from the tree), where the origin of a coordinate system \(x^i\) is located at the tree (and the coordinate system is fixed at that location), I will measure the proper time in my reference frame. Any other observer \(O'\) who is moving at a relative velocity away from the tree will measure a time \(t'\) which is longer than \(τ\). Suppose an observer \(O'\) moves away from the origin of the x-coordinate system through space (along the \(x^1\) dimension) and time (along the \(x^0=ct\) dimension). We can imagine someone at point A who does not move through any of the spatial dimensions \(x^1\), \(x^2\), and \(x^3\). Because he does not move through any of the spatial dimensions in the x-coordinate system, he is traveling through the time dimension \(x^0\) at the speed of light and he is measuring the proper time \(τ\). What about the observer \(O'\) who is moving through the spatial dimension \(x^1\) in the x-coordinate system at a velocity \(v\) away from \(O\)? The observer \(O\) (who is stationary at point A) measures \(O'\) traveling a distance \(x^1\) in a time \(t\). Because \(O'\) is moving through the spatial dimension \(x^1\) in the x-coordinate system, he measures a time \(t'=t/γ\) between A and B. (He also measures a different "x coordinate", which will be \(y^1\).) In other words, \(O'\) will measure that it took him less time to go from A to B; \(O\) will measure that it took \(O'\) a longer time \(t\) to go from A to B.

Time Dilation

In one of Einstein's original thought experiments, he imagined an observer \(O'\) riding in a train which was moving past another observer \(O\) standing still on the side of the tracks.
The train was moving at a velocity \(\vec{v}\) relative to \(O\). Two reference frames \(R'\) and \(R\) (which should be thought of as coordinate systems moving through space) are attached to \(O'\) and \(O\) respectively. The train is moving along only the x-axis in the \(R\)-frame; in the \(R'\)-frame the train is at rest, moving along none of the x'-, y'-, and z'-axes. Right at the moment when \(O'\) is at \(x=0\), both observers' clocks are synchronized and \(t'=t=0\). At this moment in time a light pulse is emitted from a light source \(S'\). This light pulse travels upwards along the vertical, bounces off of the mirror, and then arrives back at \(S'\). Let the distance between \(S'\) and the mirror be \(d\). In the \(R'\)-frame the light pulse travels along the y'-axis at a constant speed \(c\). Let the time interval for the light pulse to travel from \(S'\) to the mirror and then back to \(S'\) (in the \(R'\)-frame) be \(Δt'\). Then the amount of time necessary for the light pulse to travel from \(S'\) to the mirror (the "half-way distance") in the \(R'\)-frame is \(\frac{Δt'}{2}\). Since the speed of the light pulse is constant relative to \(R'\), we know from kinematics that \(d=c\frac{Δt'}{2}\) and that $$Δt'=\frac{2d}{c}.$$ We shall now see that the constancy of the speed of light with respect to both reference frames, combined with the fact that the light pulse must travel through a greater distance with respect to the \(R\)-frame, leads to \(O\) measuring a longer time interval \(Δt\) between event 1 (when the light pulse is emitted) and event 2 (when the light pulse arrives back at \(S'\)). If \(Δt\) is the time interval measured in the \(R\)-frame for the light pulse to travel from \(S'\) to the mirror and back to \(S'\), then the time interval measured in that frame for the light pulse to go from \(S'\) to the mirror is \(\frac{Δt}{2}\).
By the time the light pulse reaches the mirror, the train (and the mirror) will have moved a distance \(v\frac{Δt}{2}\). Thus the light pulse must have traveled a horizontal distance \(v\frac{Δt}{2}\) and a vertical distance \(d\) in this time interval. Using the Pythagorean theorem, the total distance \(x\) the light pulse traveled is related to the horizontal and vertical distances by \(x^2=(v\frac{Δt}{2})^2+d^2\). The speed of light is a constant \(c=3×10^8\frac{m}{s}\) relative to \(R\) (as it is for any frame), and because this speed is constant it follows from kinematics that \(x=c\frac{Δt}{2}\) and \((c\frac{Δt}{2})^2=(v\frac{Δt}{2})^2+d^2\). If we solve for \(Δt\) we get $$Δt=\frac{2d}{\sqrt{c^2-v^2}}=\frac{2d}{c\sqrt{1-\frac{v^2}{c^2}}}.$$ Since \(Δt'=\frac{2d}{c}\) we have $$Δt=\frac{Δt'}{\sqrt{1-\frac{v^2}{c^2}}}=γΔt'\text{, where } γ=\frac{1}{\sqrt{1-\frac{v^2}{c^2}}}.$$ In the \(R'\)-frame, events 1 and 2 occur at the same point in space (their \((x', y', z')\) coordinates are identical), as shown in Figure 1. Any observer who measures the time interval between two events in a frame in which those two events occur at the same point in space is said to be measuring the proper time interval \(Δτ\) between those two events. We see that \(Δt'=Δτ\) and that $$Δt=γΔτ.$$ Because \(γ\) is always greater than one, \(Δt\) is always greater than \(Δτ\). Fundamentally, this means that whenever an observer measures the time interval between two events that do not occur at the same point in space in his frame, he will measure a time interval \(Δt\) that is always greater than \(Δτ\). It is important to mention that this effect only becomes important when \(γ\) deviates significantly from one, which happens, roughly speaking, only when \(v>0.01c\).
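As a quick numerical check of \(Δt=γΔτ\) (a Python sketch; the light-clock height and train speed are made-up illustrative values):

```python
import math

def gamma(v, c=3.0e8):
    """Lorentz factor 1/sqrt(1 - v^2/c^2)."""
    return 1.0 / math.sqrt(1.0 - (v / c) ** 2)

c = 3.0e8            # speed of light, m/s
d = 1.5              # height of the light clock, m (illustrative)
v = 0.6 * c          # train speed (illustrative)

dt_proper = 2 * d / c          # Δτ = Δt' = 2d/c, measured on the train
dt = gamma(v) * dt_proper      # Δt measured from the trackside
```

With \(v=0.6c\) the Lorentz factor is exactly 1.25, so the trackside observer measures a tick of \(1.25×10^{-8}\) s against the train's \(10^{-8}\) s; at walking speed \(γ\) differs from one only by roughly \(10^{-17}\), matching the remark that the effect is negligible for \(v<0.01c\).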
If \(O'\) started walking around, "carrying" his coordinate system (which is attached to him) with him, he would be moving at a speed \(v\) relative to another frame (corresponding to someone "standing still" on the train) measuring \(Δτ\), and thus he would measure a slightly longer time. But since his walking speed is much slower than \(0.01c\), the effect of time dilation is negligible. Oftentimes, for practical purposes, if \(v<0.01c\) we can assume that \(Δt=Δτ\). Only when \(v>0.01c\) does \(γ\) start to deviate significantly from one. On the most fundamental level, photons are emitted from atoms composing the light source \(S'\). Those photons then travel through space, bounce off the atoms composing the mirror, and arrive back at the atoms composing \(S'\). All chemical and biological processes are, at the most fundamental level, due to atoms interacting with other atoms via photons of light and electromagnetic radiation. The fact that it takes a longer time \(Δt\) for any two atoms to interact with one another via a photon of light/radiation going from one atom to another means that it takes longer for chemical, and therefore biological, interactions to occur. Therefore, \(O\) in the \(R\)-frame will see all physical processes take a longer time to happen, and the march of time will progress more slowly in his reference frame. This article is licensed under a CC BY-NC-SA 4.0 license.
Search Now showing items 1-9 of 9 Production of $K*(892)^0$ and $\phi$(1020) in pp collisions at $\sqrt{s}$ =7 TeV (Springer, 2012-10) The production of K*(892)$^0$ and $\phi$(1020) in pp collisions at $\sqrt{s}$=7 TeV was measured by the ALICE experiment at the LHC. The yields and the transverse momentum spectra $d^2 N/dydp_T$ at midrapidity |y|<0.5 in ... Transverse sphericity of primary charged particles in minimum bias proton-proton collisions at $\sqrt{s}$=0.9, 2.76 and 7 TeV (Springer, 2012-09) Measurements of the sphericity of primary charged particles in minimum bias proton--proton collisions at $\sqrt{s}$=0.9, 2.76 and 7 TeV with the ALICE detector at the LHC are presented. The observable is linearized to be ... Pion, Kaon, and Proton Production in Central Pb--Pb Collisions at $\sqrt{s_{NN}}$ = 2.76 TeV (American Physical Society, 2012-12) In this Letter we report the first results on $\pi^\pm$, K$^\pm$, p and pbar production at mid-rapidity (|y|<0.5) in central Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV, measured by the ALICE experiment at the LHC. The ... Measurement of prompt J/psi and beauty hadron production cross sections at mid-rapidity in pp collisions at root s=7 TeV (Springer-verlag, 2012-11) The ALICE experiment at the LHC has studied J/ψ production at mid-rapidity in pp collisions at s√=7 TeV through its electron pair decay on a data sample corresponding to an integrated luminosity Lint = 5.6 nb−1. The fraction ... Suppression of high transverse momentum D mesons in central Pb--Pb collisions at $\sqrt{s_{NN}}=2.76$ TeV (Springer, 2012-09) The production of the prompt charm mesons $D^0$, $D^+$, $D^{*+}$, and their antiparticles, was measured with the ALICE detector in Pb-Pb collisions at the LHC, at a centre-of-mass energy $\sqrt{s_{NN}}=2.76$ TeV per ... 
J/$\psi$ suppression at forward rapidity in Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV (American Physical Society, 2012) The ALICE experiment has measured the inclusive J/ψ production in Pb-Pb collisions at √sNN = 2.76 TeV down to pt = 0 in the rapidity range 2.5 < y < 4. A suppression of the inclusive J/ψ yield in Pb-Pb is observed with ... Production of muons from heavy flavour decays at forward rapidity in pp and Pb-Pb collisions at $\sqrt {s_{NN}}$ = 2.76 TeV (American Physical Society, 2012) The ALICE Collaboration has measured the inclusive production of muons from heavy flavour decays at forward rapidity, 2.5 < y < 4, in pp and Pb-Pb collisions at $\sqrt {s_{NN}}$ = 2.76 TeV. The pt-differential inclusive ... Particle-yield modification in jet-like azimuthal dihadron correlations in Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV (American Physical Society, 2012-03) The yield of charged particles associated with high-pT trigger particles (8 < pT < 15 GeV/c) is measured with the ALICE detector in Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV relative to proton-proton collisions at the ... Measurement of the Cross Section for Electromagnetic Dissociation with Neutron Emission in Pb-Pb Collisions at √sNN = 2.76 TeV (American Physical Society, 2012-12) The first measurement of neutron emission in electromagnetic dissociation of 208Pb nuclei at the LHC is presented. The measurement is performed using the neutron Zero Degree Calorimeters of the ALICE experiment, which ...
Let \(G\) be a graph, and let \(\textbf{Free}(G)\) be the corresponding free category. Somebody tells you that the only isomorphisms in \(\textbf{Free}(G)\) are the identity morphisms. Is that person correct? Why or why not?

I presume that the person is making a statement about graphs generally and not a specific graph. The person is confused about the difference between a round-trip vs. a trip which has a return flight.

Yes, the only isomorphisms in the free category of any graph are the identity morphisms. This follows from the fact that the morphisms in \(\bf{Free}(G)\) do not obey any equations (other than those required in the definition of a category, i.e., the left and right unit laws).
Suppose, by way of contradiction, that \(f:A\to B\) is a non-identity isomorphism in \(\bf{Free}(G)(A,B)\). Then there is a morphism \(g:B\to A\) such that \(f\circ g=\text{id}_A\) and \(g\circ f=\text{id}_B\). However, since \(f\) is not an identity morphism and the only equations among elements of \(\bf{Free}(G)(A,B)\) are the left and right unit laws, no such morphism \(g\) exists. In particular, if \(A\neq B\), then there are no equations at all between elements of \(\bf{Free}(G)(A,B)\). Even if \(A=B\), the only option is \(g=\text{id}_A\), which yields \(f\circ\text{id}_A=\text{id}_A\), so that \(f=\text{id}_A\), which contradicts the assumption that \(f\) is not the identity morphism.

I see, some care must be taken when constructing a graph suitable for construction of a free category. When parallel paths are present in the graph, they are assumed to be unequal unless there is an equation specifically stating otherwise. The only exception to this rule is the case of the identity morphisms, which have the implicit \( id \circ id = id \) equation.
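The argument can also be checked by brute force on a small example. The sketch below (illustrative Python with a made-up two-node graph) models morphisms of \(\textbf{Free}(G)\) as edge paths; since path lengths add under composition, a search over short morphisms can only ever find the identities as isomorphisms:

```python
# Objects are nodes; a morphism A -> B is a tuple of edge names forming a
# path; composition is concatenation, the identity is the empty path.
edges = {("A", "B", "e1"), ("B", "A", "e2")}
nodes = {"A", "B"}

def paths_up_to(n):
    """All morphisms (source, target, edge tuple) of length <= n."""
    result = {(v, v, ()) for v in nodes}
    frontier = set(result)
    for _ in range(n):
        frontier = {(s, t2, p + (name,))
                    for (s, t, p) in frontier
                    for (s2, t2, name) in edges if s2 == t}
        result |= frontier
    return result

def compose(f, g):
    """g after f, where f: X -> Y and g: Y -> Z (diagrammatic order)."""
    assert f[1] == g[0]
    return (f[0], g[1], f[2] + g[2])

morphs = paths_up_to(4)
isos = {f for f in morphs for g in morphs
        if f[1] == g[0] and g[1] == f[0]
        and compose(f, g) == (f[0], f[0], ())
        and compose(g, f) == (g[0], g[0], ())}
```

The length argument is the formal content: `compose(f, g)` can only equal an empty identity path when both `f` and `g` are themselves empty.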
this is a mystery to me, despite having changed computers several times, despite the website rejecting the application, the very first sequence of numbers I entered into its search window which returned the same prompt to submit them for publication appears every time, I mean I've got hundreds of them now, and it's still far too much rope to give a person like me sitting alone in a bedroom the capacity to freely describe any such sequence and their meaning if there isn't any already there

my maturity levels are extremely variant in time, that's just way too much rope to give me considering it's only me the pursuits matter to, who knows what kind of outlandish crap I might decide to spam in each of them

but still, the first one from, well, almost a decade ago shows up as the default content in the search window

1,2,3,6,11,23,47,106,235

well, now there is a bunch of stuff about them pertaining to "trees" and "nodes" but that's what I mean by too much rope, you can't just let a lunatic like me start inventing terminology as I go

oh well "what would cotton mathers do?" the chat room unanimously ponders lol

i see Secret had a comment to make, is it really a productive use of our time censoring something that is most likely not blatant hate speech? that's the only real thing that warrants censorship, even still, it has its value, in a civil society it will be ridiculed anyway? or at least inform the room as to who is the big brother doing the censoring? No?
just suggestions trying to improve site functionality good sir

relax im calm we are all calm

A104101 is a hilarious entry as a side note, I love that Neil had to chime in in the comment section, after the big promotional message in the first part, to point out the sequence is totally meaningless as far as mathematics is concerned, just to save face for the website's integrity after plugging a TV series with a reference

But seriously @BalarkaSen, some of the most arrogant of people will attempt to play the most innocent of roles and accuse you of arrogance yourself in the most diplomatic way imaginable, if you still feel that your point is not being heard, persist until they give up the farce please

very general advice for any number of topics for someone like yourself sir

assuming gender because you would have hated text-based adam long ago if you were female or etc, if it's false then I apologise for the statistical approach to human interaction

So after having found the polynomial $x^6-3x^4+3x^2-3$ we can just apply Eisenstein to show that this is irreducible over $\mathbb{Q}$, and since it is monic, it follows that this is the minimal polynomial of $\sqrt{1+\sqrt[3]{2}}$ over $\mathbb{Q}$?

@MatheinBoulomenos So, in Galois fields, if you have two particular elements you are multiplying, can you necessarily discern the result of the product without knowing the monic irreducible polynomial that is being used to generate the field? (I will note that I might have my definitions incorrect. I am under the impression that a Galois field is a field of the form $\mathbb{Z}/p\mathbb{Z}[x]/(M(x))$ where $M(x)$ is a monic irreducible polynomial in $\mathbb{Z}/p\mathbb{Z}[x]$.)
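The Eisenstein step mentioned above is easy to sanity-check in plain Python (a throwaway helper written for this chat, not part of any library):

```python
def eisenstein(coeffs, p):
    """Eisenstein's criterion at the prime p, for a polynomial given by
    its integer coefficients [a_n, ..., a_1, a_0], leading first:
    p does not divide a_n, p divides every other a_i, p^2 does not
    divide a_0.  If it holds, the polynomial is irreducible over Q."""
    lead, rest, const = coeffs[0], coeffs[1:], coeffs[-1]
    return (lead % p != 0
            and all(a % p == 0 for a in rest)
            and const % (p * p) != 0)

# x^6 - 3x^4 + 3x^2 - 3 at p = 3:
assert eisenstein([1, 0, -3, 0, 3, 0, -3], 3)
```

At \(p=3\): 3 divides every non-leading coefficient, 3 does not divide the leading 1, and 9 does not divide the constant \(-3\), so the polynomial is indeed irreducible over \(\mathbb{Q}\), and being monic it is the minimal polynomial of \(\sqrt{1+\sqrt[3]{2}}\).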
(which is just the product of the integer and its conjugate) Note that $\alpha = a + bi$ is a unit iff $N\alpha = 1$ You might like to learn some of the properties of $N$ first, because this is useful for discussing divisibility in these kinds of rings (Plus I'm at work and am pretending I'm doing my job) Anyway, particularly useful is the fact that if $\pi \in \Bbb Z[i]$ is such that $N(\pi)$ is a rational prime then $\pi$ is a Gaussian prime (easily proved using the fact that $N$ is totally multiplicative) and so, for example $5 \in \Bbb Z$ is prime, but $5 \in \Bbb Z[i]$ is not prime because it is the norm of $1 + 2i$ and this is not a unit. @Alessandro in general if $\mathcal O_K$ is the ring of integers of $\Bbb Q(\alpha)$, then $\Delta(\Bbb Z[\alpha])=\Delta(\mathcal O_K) [\mathcal O_K:\Bbb Z[\alpha]]^2$, I'd suggest you read up on orders, the index of an order and discriminants for orders if you want to go into that rabbit hole also note that if the minimal polynomial of $\alpha$ is $p$-Eisenstein, then $p$ doesn't divide $[\mathcal{O}_K:\Bbb Z[\alpha]]$ this together with the above formula is sometimes enough to show that $[\mathcal{O}_K:\Bbb Z[\alpha]]=1$, i.e. $\mathcal{O}_K=\Bbb Z[\alpha]$ the proof of the $p$-Eisenstein thing even starts with taking a $p$-Sylow subgroup of $\mathcal{O}_K/\Bbb Z[\alpha]$ (just as a quotient of additive groups, that quotient group is finite) in particular, from what I've said, if the minimal polynomial of $\alpha$ is $p$-Eisenstein wrt every prime $p$ whose square divides the discriminant of $\Bbb Z[\alpha]$, then $\Bbb Z[\alpha]$ is a ring of integers that sounds oddly specific, I know, but you can also work with the minimal polynomial of something like $1+\alpha$ there's an interpretation of the $p$-Eisenstein results in terms of local fields, too. If the minimal polynomial of $\alpha$ is $p$-Eisenstein, then it is irreducible over $\Bbb Q_p$ as well. 
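The norm facts above are easy to poke at numerically; a throwaway sketch (not from the chat, just an illustration):

```python
# Work with Gaussian integers as pairs (a, b) representing a + bi.

def norm(a, b):
    """N(a + bi) = a^2 + b^2, the product of a + bi with its conjugate."""
    return a * a + b * b

def mul(z, w):
    """(a + bi)(c + di) = (ac - bd) + (ad + bc)i."""
    (a, b), (c, d) = z, w
    return (a * c - b * d, a * d + b * c)

# N is totally multiplicative:
z, w = (3, 2), (1, -4)
assert norm(*mul(z, w)) == norm(*z) * norm(*w)

# N(1 + 2i) = 5 is a rational prime, so 1 + 2i is a Gaussian prime,
# and 5 itself splits as (1 + 2i)(1 - 2i):
print(mul((1, 2), (1, -2)))  # (5, 0), i.e. the rational integer 5
```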
Now you can apply the Führerdiskriminantenproduktformel (yes, that's an accepted English terminus technicus) @MatheinBoulomenos You once told me a group cohomology story that I forget, can you remind me again? Namely, suppose $P$ is a Sylow $p$-subgroup of a finite group $G$, then there's a covering map $BP \to BG$ which induces chain-level maps $p_\# : C_*(BP) \to C_*(BG)$ and $\tau_\# : C_*(BG) \to C_*(BP)$ (the transfer hom), with the corresponding maps in group cohomology $p : H^*(G) \to H^*(P)$ and $\tau : H^*(P) \to H^*(G)$, the restriction and corestriction respectively. $\tau \circ p$ is multiplication by $|G : P|$, so if I work with $\Bbb F_p$ coefficients that's an injection. So $H^*(G)$ injects into $H^*(P)$. I should be able to say more, right? If $P$ is normal abelian, it should be an isomorphism onto the $G/P$-invariants. There might be easier arguments, but this is what pops to mind first: By the Schur-Zassenhaus theorem, $G = P \rtimes G/P$, and there is a fibration $BP \to BG \to B(G/P)$ whose monodromy is exactly the conjugation action induced on $H^*(P)$, so we run the Lyndon-Hochschild-Serre spectral sequence with coefficients in $\Bbb F_p$. The $E^2$ page is essentially zero except the bottom row since $H^i(G/P; M) = 0$ for $i > 0$ if $M$ is an $\Bbb F_p$-module by order reasons and the whole bottom row is the invariants $H^*(P; \Bbb F_p)^{G/P}$. This means the spectral sequence degenerates at $E^2$, which gets us $H^*(G; \Bbb F_p) \cong H^*(P; \Bbb F_p)^{G/P}$. @Secret that's a very lazy habit you should create a chat room for every purpose you can imagine take full advantage of the website's functionality as I do and leave the general purpose room for recommending art related to mathematics @MatheinBoulomenos No worries, thanks in advance. Just to add the final punchline, what I wanted to ask is what's the general algorithm to recover $H^*(G)$ back from $H^*(P; \Bbb F_p)$'s where $P$ runs over Sylow $p$-subgroups of $G$? 
Bacterial growth is the asexual reproduction, or cell division, of a bacterium into two daughter cells, in a process called binary fission. Providing no mutational event occurs, the resulting daughter cells are genetically identical to the original cell. Hence, bacterial growth occurs. Both daughter cells from the division do not necessarily survive. However, if the number surviving exceeds unity on average, the bacterial population undergoes exponential growth. The measurement of an exponential bacterial growth curve in batch culture was traditionally a part of the training of all microbiologists... As a result, there does not exist a single group which lived long enough to belong to, and hence one continues to search for new groups and activities; eventually, a social heat death occurs, where no groups will generate creativity and other activity anymore Had this kind of thought when I noticed how many forums etc. have a golden age, and then die away, and at the more personal level, all people who first knew me generate a lot of activity, and then are destined to die away and grow distant roughly every 3 years Well I guess the lesson you need to learn here champ is online interaction isn't something that was inbuilt into the human emotional psyche in any natural sense, and maybe it's time you saw the value in saying hello to your next door neighbour Or more likely, we will need to start recognising machines as a new species and interact with them accordingly so covert operations AI may still exist, even as domestic AIs continue to become widespread It seems more likely sentient AI will take similar roles as humans, and then humans will need to either keep up with them with cybernetics, or be eliminated by evolutionary forces But neuroscientists and AI researchers speculate it is more likely that the two types of races are so different we end up complementing each other that is, until their processing power becomes so strong that they can outdo human thinking But, I am not 
worried about that scenario, because if the next step is a sentient AI evolution, then humans would know they will have to give way However, the major issue right now in the AI industry is not that we will be replaced by machines, but that we are making machines quite widespread without really understanding how they work, and they are still not reliable enough given the mistakes still made by them and their human owners That is, we have become over-reliant on AI, and are not paying enough attention to whether they have interpreted the instructions correctly That's an extraordinary amount of unreferenced rhetorical statements I could find anywhere on the internet! When my mother disapproves of my proposals for subjects of discussion, she prefers to simply hold up her hand in the air in my direction for example i tried to explain to her that my inner heart chakras tell me that my spirit guide suggests that many females i have intercourse with are easily replaceable and this can be proven from historical statistical data, but she won't even let my spirit guide elaborate on that premise i feel as if it's an injustice to all child mans that have a compulsive need to lie to shallow women they meet and keep up a farce that they are either fully grown men (if sober) or an incredibly wealthy trust fund kid (if drunk) that's an important binary class dismissed Chatroom troll: A person who types messages in a chatroom with the sole purpose to confuse or annoy. I was just genuinely curious How does a message like this come from someone who isn't trolling: "for example i tried to explain to her that my inner heart chakras tell me that my spirit guide suggests that many ... 
with are easily replaceable and this can be proven from historical statistical data, but she wont even let my spirit guide elaborate on that premise" Anyway feel free to continue, it just seems strange @Adam I'm genuinely curious what makes you annoyed or confused yes I was joking in the line that you referenced but surely you can't assume me to be a simpleton of one definitive purpose that drives me each time I interact with another person? Does your mood or experiences vary from day to day? Mine too! so there may be particular moments that I fit your declared description, but only a simpleton would assume that to be the one and only facet of another's character wouldn't you agree? So, there are some weakened forms of associativity. Such as flexibility ($(xy)x=x(yx)$) or "alternativity" ($(xx)y=x(xy)$, iirc). Though, is there a place a person could look for an exploration of the way these properties inform the nature of the operation? (In particular, I'm trying to get a sense of how a "strictly flexible" operation would behave. I.e. $a(bc)=(ab)c\iff a=c$) @RyanUnger You're the guy to ask for this sort of thing I think: If I want to, by hand, compute $\langle R(\partial_1,\partial_2)\partial_2,\partial_1\rangle$, then I just want to expand out $R(\partial_1,\partial_2)\partial_2$ in terms of the connection, then use linearity of $\langle -,-\rangle$ and then use Koszul's formula? Or is there a smarter way? I realized today that the possible x inputs to Round(x^(1/2)) cover x^(1/2+epsilon). In other words we can always find an epsilon (small enough) such that x^(1/2) <> x^(1/2+epsilon) but at the same time have Round(x^(1/2))=Round(x^(1/2+epsilon)). Am I right? We have the following Simpson method $$y^{n+2}-y^n=\frac{h}{3}\left (f^{n+2}+4f^{n+1}+f^n\right ), n=0, \ldots , N-2 \\ y^0, y^1 \text{ given } $$ Show that the method is implicit and state the stability definition of that method. How can we show that the method is implicit? 
Do we have to try to solve $y^{n+2}$ as a function of $y^{n+1}$ ? @anakhro an energy function of a graph is something studied in spectral graph theory. You set up an adjacency matrix for the graph, find the corresponding eigenvalues of the matrix and then sum the absolute values of the eigenvalues. The energy function of the graph is defined for simple graphs by this summation of the absolute values of the eigenvalues
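That graph-energy definition is straightforward to compute; here is a small numpy sketch (the 4-cycle example is mine, not from the discussion):

```python
import numpy as np

# Graph energy: sum of the absolute values of the adjacency eigenvalues.
# Example graph: the 4-cycle C4 on vertices 0-1-2-3-0.
A = np.array([
    [0, 1, 0, 1],
    [1, 0, 1, 0],
    [0, 1, 0, 1],
    [1, 0, 1, 0],
])
eigenvalues = np.linalg.eigvalsh(A)  # adjacency matrix is symmetric
energy = np.abs(eigenvalues).sum()
print(energy)  # C4 has spectrum {2, 0, 0, -2}, so the energy is 4
```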
I'm doing some research into the RSA cryptosystem but I just need some clarity on how it worked when it was published in the 70s. Now I know that it works with public keys but did it also work with private keys back then or did it use a shared public key first and then private keys were introduced later? RSA never was intended as a symmetric/secret key cryptosystem, or extensively used as such. Public/Private key pairs have been used for RSA from day one. A very close cousin of RSA, also using Public/Private key pairs, was known (but not published) at the GCHQ significantly before RSA was published. See Clifford Cocks's declassified A Note on 'Non-secret Encryption' (1973). Public-key cryptography was theorized at the GCHQ even before, see James Ellis's declassified The Possibility of Secure Non-Secret Encryption (1969), and his account in the declassified The history of Non-Secret Encryption (1987). See this question for more references on these early works. As kindly reminded by poncho: the Pohlig-Hellman exponentiation cipher is a symmetric analog of textbook RSA. It uses as public parameters a large public prime $p$ with $(p-1)/2$ also prime, and two random odd secret exponents $e$ and $d$ with the relation $e\cdot d\equiv1\pmod{(p-1)}$; encryption of $m$ with $1<m<p$ is $c\gets m^e\bmod p$, decryption is $m\gets c^d\bmod p$. By incorporating the computation of $d$ from the encryption key $e$ into the decryption, and using cycle walking to coerce down the message space to bitstrings and remove a few fixed points, it becomes a full-blown block cipher by the modern definition of that. Security is related to the discrete logarithm problem in $\mathbb Z_p^*$. An algorithm for solving that is the main subject of the article, and what Pohlig-Hellman now designates. The encryption algorithm has little practical interest, because it is very slow for a symmetric-only algorithm. It never caught on in practice, and I believe never was intended to do so. 
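A toy-sized sketch of that exponentiation cipher (parameters chosen for illustration only, hopelessly insecure at this size):

```python
# Pohlig-Hellman exponentiation cipher with toy parameters.
p = 23                    # public prime with (p - 1)/2 = 11 also prime
e = 3                     # secret odd exponent, coprime to p - 1 = 22
d = pow(e, -1, p - 1)     # secret decryption exponent: e*d = 1 (mod p-1)

m = 5                     # message, 1 < m < p
c = pow(m, e, p)          # encrypt: c = m^e mod p
assert pow(c, d, p) == m  # decrypt: m = c^d mod p
print(c, pow(c, d, p))    # ciphertext 10, recovered plaintext 5
```

The three-argument `pow(e, -1, p - 1)` (modular inverse) needs Python 3.8 or later.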
I found no earlier reference than: Stephen C. Pohlig and Martin E. Hellman, An Improved Algorithm for Computing Logarithms over $GF(p)$ and Its Cryptographic Significance, published in IEEE Transactions on Information Theory, Volume 24 Issue 1, January 1978. RSA was clearly known to the authors when they submitted this correspondence. They make explicit reference to: Ronald L. Rivest, Adi Shamir, and Leonard Adleman, On Digital Signatures and Public-Key Cryptosystems, Technical Memo MIT/LCS/TM-82, dated April 1977 (received by the Defense Documentation Center on May 3, 1977; publication date unknown). Ronald L. Rivest, Adi Shamir, and Leonard Adleman, A Method for Obtaining Digital Signatures and Public-Key Cryptosystems, published in Communications of the ACM, Volume 21 Issue 2, February 1978 (received April 4, 1977; revised September 1, 1977). Note: I discovered (1.) only because it is referenced by Pohlig and Hellman; it has a number of rough edges fixed in (2.), including a byzantine and unnecessary complication in the handling of messages not coprime with the public modulus, that are telling of the novelty. I refer to poncho's account on chronology.
We need some definitions to state the problem. Let $B$ be a commutative ring, $A$ its subring. We denote by $(A : B)$ the set $\{x \in B \mid xB \subset A\}$. $(A : B)$ is an ideal of $B$. It is contained in $A$, hence it is also an ideal of $A$. It is called the conductor of the ring extension $B/A$. Let $K$ be an algebraic number field of degree $n$. An order of $K$ is a subring $R$ of $K$ such that $R$ is a free $\mathbb{Z}$-module of rank $n$. Now let $K$ be a quadratic number field. Let $\mathcal{O}_K$ be the ring of algebraic integers in $K$. Let $R$ be an order of $K$. Let $\mathfrak{f} = (R : \mathcal{O}_K)$. The ideal $\mathfrak{f}$ is important for the ideal theory of $R$ as shown in this question. Since $R$ is a finitely generated $\mathbb{Z}$-module, $R \subset \mathcal{O}_K$. Since both $\mathcal{O}_K$ and $R$ are free $\mathbb{Z}$-modules of rank $2$, the $\mathbb{Z}$-module $\mathcal{O}_K/R$ is finite. Let $f$ be the order of $\mathcal{O}_K/R$. I came up with the following proposition. Proposition. $\mathfrak{f} = f\mathcal{O}_K$. Outline of my proof. Let $d$ be the discriminant of $\mathcal{O}_K$. Then by this question, $1$ and $\omega = \frac{d + \sqrt d}{2}$ form a basis of $\mathcal{O}_K$ as a $\mathbb{Z}$-module. It is easy to see that $R = \mathbb{Z} + \mathbb{Z}f\omega$. Let $\alpha = a + bf\omega \in (R : \mathcal{O}_K)$. I deduce that $a$ is divisible by $f$ from $\alpha\omega \in R$ using $\omega^2 = d\omega - \frac{d(d-1)}{4}$. A full proof was given below as an answer. My question: How do you prove the proposition? I would like to know other proofs based on different ideas from mine. I welcome you to provide as many different proofs as possible. I wish the proofs would be detailed enough for people who have basic knowledge of introductory algebraic number theory to be able to understand.
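As a finite sanity check of the proposition (my own toy case, not part of the question): take $K = \mathbb{Q}(i)$, $\mathcal{O}_K = \mathbb{Z}[i]$ and $R = \mathbb{Z} + \mathbb{Z}\cdot 2i$, so $f = 2$; brute force over a box of Gaussian integers confirms $(R : \mathcal{O}_K) = 2\mathcal{O}_K$ there:

```python
def in_R(a, b):
    """a + bi lies in R = Z + Z*(2i) iff the coefficient of i is even."""
    return b % 2 == 0

def in_conductor(a, b):
    """x*O_K is contained in R iff x*1 in R and x*i in R, since 1, i
    generate O_K over Z and R is a Z-module.  x*1 = a + bi, x*i = -b + ai."""
    return in_R(a, b) and in_R(-b, a)

B = 10  # half-width of the search box
box = [(a, b) for a in range(-B, B + 1) for b in range(-B, B + 1)]
conductor = {z for z in box if in_conductor(*z)}
two_OK = {(a, b) for (a, b) in box if a % 2 == 0 and b % 2 == 0}
print(conductor == two_OK)  # True: within the box the conductor is 2*O_K
```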
$$\lim_{x\to\infty} \sqrt{4x^2 + 3x} - 2x$$ I thought I could multiply both numerator and denominator by $\frac{1}{x}$, giving $$\lim_{x\to\infty}\frac{\sqrt{4 + \frac{3}{x}} -2}{\frac{1}{x}}$$ then as x approaches infinity, $\frac{3}{x}$ essentially becomes zero, so we're left with 2-2 in the numerator and $\frac{1}{x}$ in the denominator, which I thought would mean that the limit is zero. That's apparently wrong and I understand (algebraically) how to solve the problem using the conjugate, but I don't understand what's wrong about the method I tried to use.
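For what it's worth, a CAS check (my illustration) agrees with the conjugate method, which rewrites the expression as $\frac{3x}{\sqrt{4x^2+3x}+2x} \to \frac{3}{4}$:

```python
from sympy import limit, oo, sqrt, symbols

x = symbols('x', positive=True)
expr = sqrt(4*x**2 + 3*x) - 2*x

# The flaw in the attempted method: after multiplying through by 1/x the
# expression has the indeterminate form 0/0, so numerator and denominator
# cannot be evaluated separately.  sympy evaluates the limit directly:
print(limit(expr, x, oo))  # 3/4
```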
H L LUO Articles written in Bulletin of Materials Science Volume 39 Issue 2 April 2016 pp 519-523 The effect of plating temperatures between 60 and 90$^{\circ}$C on structure and corrosion resistance for electroless NiWP coatings on AZ91D magnesium alloy substrate was investigated. Results show that temperature has a significant influence on the surface morphology and corrosion resistance of the NiWP alloy coating. An increase in temperature will lead to an increase in coating thickness and form a more uniform and dense NiWP coating. Moreover, cracks were observed by SEM in the coating surface and interface at the plating temperature of 90$^{\circ}$C. Coating corrosion resistance is highly dependent on temperature according to polarization curves. The optimum temperature is found to be 80$^{\circ}$C and the possible reasons for the corrosion resistance of the NiWP coating have been discussed. Volume 40 Issue 3 June 2017 pp 577-582 NiWP alloy coatings were prepared by electrodeposition, and the effects of ferrous chloride (FeCl$_2$), sodium tungstate (Na$_2$WO$_4$) and current density ($D_K$) on the properties of the coatings were studied. The results show that upon increasing the concentration of FeCl$_2$, initially the Fe content of the coating increased and then tended to be stable; the deposition rate and microhardness of the coating decreased while the cathodic current efficiency ($\eta$) initially increased and then decreased; and for a FeCl$_2$ concentration of 3.6 gl$^{−1}$, the cathodic current efficiency reached its maximum of 74.23%. Upon increasing the concentration of Na$_2$WO$_4$, the W content and microhardness of the coatings increased; the deposition rate and the cathodic current efficiency initially increased and then decreased. The cathodic current efficiency reached the maximum value of 70.33% with a Na$_2$WO$_4$ concentration of 50 gl$^{−1}$, whereas the deposition rate is maximum at 8.67 $\mu$mh$^{−1}$ with a Na$_2$WO$_4$ concentration of 40 gl$^{−1}$. 
Upon increasing the $D_K$, the deposition rate, microhardness, and Fe and W content of the coatings increased, while the cathodic current efficiency initially increased and then decreased. When $D_K$ was 4 A dm$^{−2}$, the current efficiency reached the maximum of 73.64%. Volume 41 Issue 2 April 2018 Article ID 0041 In this paper, ternary NiFeW alloy coatings were prepared by jet electrodeposition, and the effects of main salt concentration, jet speed, current density and temperature on the properties of the coatings, including the composition, microhardness, surface morphology, structure and corrosion resistance, were investigated. Results reveal that the deposition rate reaches a maximum value of 27.30 $\mu$m h$^{−1}$, and the total current efficiency is above 85%. The maximum microhardness is 605 HV, and the wear and corrosion resistance values of the alloy coating are good. Moreover, the ternary NiFeW alloy coating is smooth and bright, and it presents a dense cellular growth. The alloy plating is nanocrystalline and has a face-centered cubic structure.
The news reports from Jackson Hole are very interesting. Fed officials are grappling with a tough question: what will happen to inflation? Why is there so little inflation now? How will a rate rise affect inflation? How can we trust models of the latter that are so wrong on the former? Well, why don't we turn to the most utterly standard model for the answers to this question -- the sticky-price intertemporal substitution model. (It's often called "new-Keynesian" but I'm trying to avoid that word since its operation and predictions turn out to be diametrically opposed to anything "Keynesian," as we'll see.) Here is the model's answer: Response of inflation (red) and output (black) to a permanent rise in interest rates (blue). The blue line supposes a step function rise in nominal interest rates. The red line plots the response of inflation and the black line plots output. The solid lines plot the answer to the standard question, what if the Fed suddenly and unexpectedly raises rates? But the Fed is not suddenly and unexpectedly doing anything, so the dashed lines plot answers to the much more relevant question: what if the Fed tells us long in advance that the rate rise is coming? According to this standard model, the answer is clear: Inflation rises throughout the episode, smoothly joining the higher nominal interest rate. Output declines. The model: \begin{equation} x_{t} =E_{t}x_{t+1}-\sigma(i_{t}-E_{t}\pi_{t+1}) \label{one} \end{equation} \begin{equation} \pi_{t} =\beta E_{t}\pi_{t+1}+\kappa x_{t} \label{two} \end{equation} where \(x\) denotes the output gap, \(i\) is the nominal interest rate, and \(\pi\) is inflation. 
The solution is \begin{equation} \pi_{t+1}=\frac{\kappa\sigma}{\lambda_{1}-\lambda_{2}}E_{t+1}\left[ i_{t}+\sum _{j=1}^{\infty}\lambda_{1}^{-j}i_{t-j}+\sum_{j=1}^{\infty}\lambda_{2} ^{j}E_{t+1}i_{t+j}\right] \label{three} \end{equation} \begin{equation*} x_{t+1}=\frac{\sigma}{\lambda_{1}-\lambda_{2}}E_{t+1}\left[ (1-\beta\lambda_1^{-1}) \sum _{j=0}^{\infty}\lambda_{1}^{-j}i_{t-j}+(1-\beta \lambda_2^{-1}) \sum_{j=1}^{\infty}\lambda_{2}^{j}E_{t+1}i_{t+j}\right] \end{equation*} where \[ \lambda_{1} =\frac{1}{2} \left( 1+\beta+\kappa\sigma +\sqrt{\left( 1+\beta+\kappa\sigma\right)^{2}-4\beta}\right) > 1 \] \[ \lambda_{2} =\frac{1}{2}\left( 1+\beta+\kappa\sigma -\sqrt{\left( 1+\beta+\kappa\sigma\right)^{2}-4\beta}\right) < 1. \] I use \(\beta = 0.97, \ \kappa = 0.2, \ \sigma = 0.3 \) to make the plot. As you see from \((\ref{three})\), inflation is a two-sided geometrically-weighted moving average of the nominal interest rate, with positive weights. So the basic picture is not sensitive to parameter values. The expected and unexpected lines are the same once the announcement is made. This standard model embodies exactly zero of the rational expectations idea that unexpected policy moves matter more than expected policy moves. (That's not an endorsement, it's a fact about the model.) The Neo-Fisherian hypothesis and sticky prices A bit of context. In some earlier blog posts (start here) I explored the "neo-Fisherian" idea that perhaps raising interest rates raises inflation. The idea is simple. The nominal interest rate is the real rate plus expected inflation, \[ i_t = r_t + E_t \pi_{t+1} \] In the long run, real rates are independent of monetary policy. This "Fisher relation" is a steady state of any model -- higher interest rates correspond to higher inflation. However, is it a stable steady state, or unstable? If the nominal interest rate is stuck, say, at zero, do tiny bits of inflation spiral away from the Fisher equation? 
Or do blips in inflation melt away and converge steadily towards the interest rate? I'll call the latter the "long-run" Fisherian view. Even if that is true, perhaps an interest rate rise temporarily lowers inflation, and then inflation catches up in the long run. That's the "short-run" Fisherian question. One might suspect that the neo-Fisherian idea is true for flexible prices, but that sticky prices lead to a failure of either the short-run or long-run neo-Fisherian hypothesis. The graph shows that this supposition is absolutely false. The most utterly standard modern model of sticky prices generates a short-run and long-run neo-Fisherian response. And reduces output along the way. Multiple equilibria and other issues Obviously, it's not that easy. There are about a hundred objections. The most obvious: this model with a fixed interest rate target has multiple equilibria. On the date of the announcement of the policy change, inflation and output can jump. Inflation response to an interest rate rise: multiple equilibria The picture shows some of the possibilities when people learn rates will rise three periods ahead of the actual rise. The solid red line is the response I showed above. The dashed red lines show what happens if there is an additional "sunspot" jump in inflation, which can happen in these models. Math: You can add an arbitrary \(\lambda_{1}^{-t}\delta_\tau \) to the impulse-response function given by (\(\ref{three}\)), where \(\tau\) is the time of the announcement (\(\tau=-3\) in the graph), and it still obeys equations \((\ref{one})-(\ref{two})\). These are impulse response functions and sunspots must be unexpected. So the only issue is the jump on announcement. Response functions are thereafter unique. A huge amount of academic effort is expended on pruning these equilibria (me too), which I won't talk about here. 
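Returning to the baseline solid-line response: it can be reproduced numerically from the closed-form solution above, with the stated parameters ($\beta = 0.97$, $\kappa = 0.2$, $\sigma = 0.3$); a rough sketch:

```python
import numpy as np

beta, kappa, sigma = 0.97, 0.2, 0.3
b = 1 + beta + kappa * sigma
root = np.sqrt(b**2 - 4 * beta)
lam1, lam2 = (b + root) / 2, (b - root) / 2  # lam1 > 1 > lam2 > 0

T = 400
i_path = np.zeros(T)
i_path[T // 2:] = 1.0  # permanent one-point step in the nominal rate

def pi_next(t):
    """pi_{t+1}: the two-sided geometric moving average of the rate path."""
    back = sum(lam1**(-j) * i_path[t - j] for j in range(1, t + 1))
    fwd = sum(lam2**j * i_path[t + j] for j in range(1, T - t))
    return kappa * sigma / (lam1 - lam2) * (i_path[t] + back + fwd)

pi = [pi_next(t) for t in range(T // 2 - 10, T - 40)]
print(pi[0] > 0)               # inflation rises before the step arrives
print(abs(pi[-1] - 1) < 1e-3)  # long run: inflation matches the rate rise
```

The last line is the Fisherian property emphasized in the post: inflation smoothly joins the higher nominal interest rate.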
The bottom two lines show that it is possible to get a temporarily lower inflation response out of the model, if you can get a negative "sunspot" to coincide with the policy announcement. But I think the plot says we're mostly wasting our time on this issue. The alternative equilibria have the biggest effect on inflation when the policy is announced, not when the policy actually happens. But we do not see big changes in inflation when the Fed makes announcements. The Fed is not at all worried about inflation past that is slowly cooling as the day of the rise approaches, as these equilibria show. It's worried about inflation or deflation future in response to the actual rate rise. The graph suggests to me that most of the "sensible" equilibria are pretty near the solid line. The graph also shows that all the multiple equilibria are stable, and thus neo-Fisherian. At best we can have a short-run discussion. In the long run, a rate rise raises inflation in any equilibrium of this model. Yeah, there's lots more here -- what about Taylor rules, stochastic exits from the zero bound, off-equilibrium threats, QE, better Phillips curves with lagged inflation terms, habits in the IS curve, credit constraints, investment and capital, learning dynamics, fiscal policy, and so on and so on. This is a blog post, so we'll stop here. The paper to follow will deal with some of this. And the point is made. The basic simplest model makes a sharp and surprising prediction. Maybe that prediction is wrong because one or another epicycle matters. But I don't think much current discussion recognizes that this is the starting point, and you need patches to recover the opposite sign, not the other way around. Data and models I started with the observation that it would be nice if the model we use to analyze the rate rise gave a vaguely plausible description of recent reality. The graph shows the Federal Funds rate (green), the 10 year bond rate (red) and core CPI inflation (blue). 
The conventional way of reading this graph is that inflation is unstable, and so needs the Fed to actively adjust rates. Inflation is like a broom held upside down, with inflation on the top and the funds rate on the bottom. When inflation declines a bit, the Fed drives the funds rate down to push inflation back up, just as you would follow a falling broom. When inflation rises a bit, the Fed similarly quickly raises the funds rate. That view represents the conventional doctrine, that an interest rate peg is unstable, and will lead quickly to either hyperinflation (Milton Friedman's famous 1968 analysis) or to a deflationary "spiral" or "vortex." And this instability view predicts what will happen should the Fed deliberately raise rates. Raising rates is like deliberately moving the bottom of the broom. The top moves the other way, lowering inflation. When inflation is low enough, the Fed then quickly lowers rates to stop the broom from tipping off. But in 2008, interest rates hit zero. The broom handle could not move. The conventional view predicted that the broom would topple. Traditional Keynesians warned that a deflationary "spiral" or "vortex" would break out. Traditional monetarists looked at QE, and warned hyperinflation would break out. (I added the 10 year rate as an indicator of expected inflation, and to emphasize how little effect QE had. $3 trillion dollars of bond purchases later, good luck seeing anything but a steady downward trend in 10 year rates.) The amazing thing about the last 7 years in the US and Europe -- and 20 in Japan -- is that nothing happened! After the recession ended, inflation continued its gently downward trend. This is monetary economics' Michelson–Morley moment. We set off what were supposed to be atomic bombs -- reserves rose from $50 billion to $3,000 billion, the crucial stabilizer of interest rate movements was stuck, and nothing happened. Oh sure, you can try to patch it up. 
Maybe we discover after the fact that wages are eternally sticky, even for 7 to 20 years while half the population changes jobs, so, sorry, that deflation vortex we predicted can't happen after all. Maybe the Fed is so wise it neatly steered the economy between the Great Deflationary Vortex on one side with just enough of the Hyperinflationary Quantitative Easing on the other to produce quiet. Maybe the great Fiscal Stimulus really did have a multiplier of 6 or so (needed to be self-financing, as some claimed) and just offset the Deflationary Vortex. But when the seas are so quiet, and the tiller has been locked at 0 for seven years, it's awfully hard to take seriously the Captain's stories of great typhoons, vortices, and hyperwhales narrowly avoided by great skill and daring. Occam's razor says, let us take the facts seriously: An interest peg is stable after all. The classic theories that predict instability of an interest rate peg -- and consequently that higher rates will lead to lower inflation -- are just wrong, at least in our circumstances (important qualifier follows). But if those classic theories failed dramatically, what can take their place? Fortunately, I started this post with just one such theory. The utterly standard sticky-price model, sitting in Mike Woodford's and Jordi Gali's textbooks, predicts exactly what happened: inflation is stable under a peg, and thus raising interest rates to a new peg will raise inflation. The difference between traditional Keynesian or Monetarist models and this modern sticky-price model is deep and essential. In this model, people are forward-looking. In the standard unstable traditional-Keynesian or Monetarist model, people look backward. When written in equations, the traditional "IS" curve (\(\ref{one}\)) does not have \(E_t x_{t+1} \) or \(E_t\pi_{t+1}\) in it, and the "Phillips curve" (\(\ref{two}\)) has past inflation in it, not expected future inflation. 
Forward-looking people generate stability, and backward-looking people generate instability. If you drove a car by looking in the rear-view mirror, the car may indeed regularly veer off the road, unless the Fed sitting next to you yells about things to come and stabilizes the car. But when people drive looking through the front windshield, cars are quite stable, reverting to the middle of the road when the wind buffets them to one side or the other. The response function is also consistent with the experience of a few countries such as Sweden which did raise rates and swiftly abandoned the effort. Those rises didn't do much either way to inflation, but they did lower output. Just as the graph says. What to do? A robust approach I will not follow the standard economists' approach -- here's my bright new idea, the government should follow my advice tomorrow. Is this right? Maybe. Maybe not. I'm working on it, and hoping by that and this blog post to encourage others to do so as well. But if you're running the Fed, you don't have the luxury of waiting for research. You have to face an uncomfortable fact, which the news out of Jackson Hole says they're facing: They don't really know what will happen or how the economy works. Nor does anyone else. They know that their own forecasts and models have been wrong 7 years in a row -- as has everyone else's, except a few bloggers with remarkably spotty memories -- so pinpoint structural forecasts of what will happen by raising rates made by those same models and logic are darn suspect. A robust policy decision should integrate over possibilities. So as far as I'll go is that this is a decent possibility, and should add to the caution over raising rates. Raising rates if there is a fire -- actual inflation -- might be sensible. Raising rates because of inflation forecasts from models that have been wrong seven years in a row seems a bit dicier. Of course, there is a bit of divergence in goals as well. 
The Fed wants more inflation, so might take this model as more reason to tighten. And if this model is right, the Fed will produce the inflation which it desires and can then congratulate itself for foreseeing! I like zero. Zero rates are pretty darn good. Zero inflation is pretty darn good too. We get the Friedman-optimal quantity of money. And more. Financial stability: With no interest cost, people and businesses hold a lot of money, and don’t conjure complex but fragile cash-management schemes. Three trillion dollars of reserves are three trillion dollars of narrow banking. Taxes: You don’t pay taxes on inflationary gains and taxes erode less of the return on investments. We don't suffer sticky-price distortions from the economy. Yeah, growth is too slow, but monetary policy has nothing to do with long-run growth. So, face it, the outcomes we desire from monetary policy are just about perfect. We don't really know how this happened, but we should savor it while it lasts. This last point might be the main one. The model I showed above is utterly standard, as is the main result. "New-Keynesian" papers about the "zero bound" have been analyzing this state for nearly 20 years. The result that inflation is stable around the steady state is at least 20 years old. All the effort, however, has been about how to escape the zero bound. But why? If a very low interest peg is stable, and achieves the optimum quantity of money, why not leave it alone? OK, there's this multiple equilibrium technicality, but that hardly seems reason to go back to "normal." The only real concern is that some hidden force might be building up to upend this delightful state of affairs. That's behind most calls for raising rates. But clearly, nobody knows with any certainty what that force might be or how to adjust policy levers to head it off. One warning. In the above model, the interest rate peg is stable only so long as fiscal policy is solvent. 
Technically, I assume that fiscal surpluses are enough to pay off government debt at whatever inflation or deflation occurs. Historically, pegs have fallen apart many times, and always when the government did not have the fiscal resources or fiscal desire to support them. The statement "an interest rate peg is stable" needs this huge asterisk.
Article Keywords: Baire class one function; set of points of discontinuity; oscillation of a function Summary: A characterization of functions in the first Baire class in terms of their sets of discontinuity is given. More precisely, a function $f\colon \mathbb {R}\rightarrow \mathbb {R}$ is of the first Baire class if and only if for each $\epsilon >0$ there is a sequence of closed sets $\{C_n\}_{n=1}^{\infty }$ such that $D_f=\bigcup _{n=1}^{\infty }C_n$ and $\omega _f(C_n)<\epsilon $ for each $n$ where $$ \omega _f(C_n)=\sup \{|f(x)-f(y)|\colon x,y \in C_n\} $$ and $D_f$ denotes the set of points of discontinuity of $f$. The proof of the main theorem is based on a recent $\epsilon $-$\delta $ characterization of Baire class one functions as well as on a well-known theorem due to Lebesgue. Some direct applications of the theorem are discussed in the paper.
Last time we gave a quick intro to the chemistry and thermodynamics we'll use to understand 'coupling'. Now let's really get started! Suppose that we are in a setting in which some reaction $$ \mathrm{X} + \mathrm{Y} \mathrel{\substack{\alpha_{\rightarrow} \\\longleftrightarrow\\ \alpha_{\leftarrow}}} \mathrm{XY} $$ takes place. Let's also assume we are interested in the production of \(\mathrm{XY}\) from \(\mathrm{X}\) and \(\mathrm{Y},\) but that in our system, the reverse reaction is favored to happen. This means that the reverse rate constant exceeds the forward one, let's say by a lot: $$\alpha_\leftarrow \gg \alpha_\to$$ so that in equilibrium, the concentrations of the species will satisfy $$\displaystyle{ \frac{[\mathrm{XY}]}{[\mathrm{X}][\mathrm{Y}]}\ll 1 }$$ which we assume is undesirable. How can we influence this ratio to get a more desirable outcome? This is where coupling comes into play. Informally, we think of the coupling of two reactions as a process in which an endergonic reaction — one which does not 'want' to happen — is combined with an exergonic reaction — one that does 'want' to happen — in a way that improves the products-to-reactants concentration ratio of the first reaction. An important example of coupling, and one we will focus on, involves ATP hydrolysis: $$ \mathrm{ATP} + \mathrm{H}_2\mathrm{O} \mathrel{\substack{\beta_{\rightarrow} \\\longleftrightarrow\\ \beta_{\leftarrow}}} \mathrm{ADP} + \mathrm{P}_{\mathrm{i}} + \mathrm{H}^+ $$ where ATP (adenosine triphosphate) reacts with a water molecule. Typically, this reaction results in ADP (adenosine diphosphate), a phosphate ion \(\mathrm{P}_{\mathrm{i}}\), and a hydrogen ion \(\mathrm{H}^+\). 
To simplify calculations, we will replace the above equation with $$\mathrm{ATP} \mathrel{\substack{\beta_{\rightarrow} \\\longleftrightarrow\\ \beta_{\leftarrow}}} \mathrm{ADP} + \mathrm{P}_{\mathrm{i}} $$ since suppressing the bookkeeping of hydrogen and oxygen atoms in this manner will not affect our main points. One reason ATP hydrolysis is good for coupling is that this reaction is strongly exergonic: $$\beta_\to \gg \beta_\leftarrow$$ and in fact so much so that $$ \displaystyle{ \frac{\beta_\to}{\beta_\leftarrow} \gg \frac{\alpha_\leftarrow}{\alpha_\to} } $$ Yet this fact alone is insufficient to explain coupling! To see why, suppose our system consists merely of the two reactions $$ \begin{array}{ccc} \mathrm{X} + \mathrm{Y} & \mathrel{\substack{\alpha_{\rightarrow} \\\longleftrightarrow\\ \alpha_{\leftarrow}}} & \mathrm{XY} \\ \\ \mathrm{ATP} & \mathrel{\substack{\beta_{\rightarrow} \\\longleftrightarrow\\ \beta_{\leftarrow}}} & \mathrm{ADP} + \mathrm{P}_{\mathrm{i}} \end{array} $$ happening in parallel. We can study the concentrations in equilibrium to see that one reaction has no influence on the other. 
Indeed, the rate equation for this reaction network is $$ \begin{array}{ccl} \dot{[\mathrm{X}]} & = & -\alpha_\to [\mathrm{X}][\mathrm{Y}]+\alpha_\leftarrow [\mathrm{XY}]\\ \\ \dot{[\mathrm{Y}]} & = & -\alpha_\to [\mathrm{X}][\mathrm{Y}]+\alpha_\leftarrow [\mathrm{XY}]\\ \\ \dot{[\mathrm{XY}]} & = & \alpha_\to [\mathrm{X}][\mathrm{Y}]-\alpha_\leftarrow [\mathrm{XY}]\\ \\ \dot{[\mathrm{ATP}]} & =& -\beta_\to [\mathrm{ATP}]+\beta_\leftarrow [\mathrm{ADP}][\mathrm{P}_{\mathrm{i}}]\\ \\ \dot{[\mathrm{ADP}]} & = &\beta_\to [\mathrm{ATP}]-\beta_\leftarrow [\mathrm{ADP}][\mathrm{P}_{\mathrm{i}}]\\ \\ \dot{[\mathrm{P}_{\mathrm{i}}]} & = &\beta_\to [\mathrm{ATP}]-\beta_\leftarrow [\mathrm{ADP}][\mathrm{P}_{\mathrm{i}}] \end{array} $$ When concentrations are constant, these are equivalent to the relations $$\displaystyle{ \frac{[\mathrm{XY}]}{[\mathrm{X}][\mathrm{Y}]} = \frac{\alpha_\to}{\alpha_\leftarrow} \ \ \text{ and } \ \ \frac{[\mathrm{ADP}][\mathrm{P}_{\mathrm{i}}]}{[\mathrm{ATP}]} = \frac{\beta_\to}{\beta_\leftarrow} }$$ We thus see that ATP hydrolysis is in no way affecting the ratio of \([\mathrm{XY}]\) to \([\mathrm{X}][\mathrm{Y}].\) Intuitively, there is no coupling because the two reactions proceed independently. This 'independence' is clearly visible if we draw the reaction network as a so-called Petri net: So what really happens when we are in the presence of coupling? Stay tuned for the next episode! By the way, here's what ATP hydrolysis looks like in a bit more detail, from a website at Loreto College: You can also read comments on Azimuth, and make your own comments or ask questions there!
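The independence is easy to see numerically. Here's a small sketch (not from the post) that integrates the rate equations above with forward Euler steps; all rate constants and initial concentrations are made-up illustrative numbers, chosen so the reverse of \(\mathrm{X} + \mathrm{Y} \leftrightarrow \mathrm{XY}\) is favored while ATP hydrolysis is strongly forward:

```python
# Rate constants (illustrative, not from the post):
a_fwd, a_rev = 1.0, 10.0    # alpha_->, alpha_<-  so [XY]/([X][Y]) -> 0.1
b_fwd, b_rev = 10.0, 0.1    # beta_->,  beta_<-   so [ADP][P]/[ATP] -> 100

# Initial concentrations (also illustrative).
X, Y, XY = 1.0, 1.0, 0.0
ATP, ADP, P = 1.0, 0.0, 0.0

dt, steps = 1e-3, 10_000    # forward-Euler integration out to t = 10
for _ in range(steps):
    r1 = a_fwd * X * Y - a_rev * XY       # net rate of X + Y -> XY
    r2 = b_fwd * ATP - b_rev * ADP * P    # net rate of ATP -> ADP + P_i
    X, Y, XY = X - r1 * dt, Y - r1 * dt, XY + r1 * dt
    ATP, ADP, P = ATP - r2 * dt, ADP + r2 * dt, P + r2 * dt

print(XY / (X * Y))    # ~ alpha_->/alpha_<- = 0.1, untouched by ATP hydrolysis
print(ADP * P / ATP)   # ~ beta_->/beta_<-  = 100, untouched by X, Y, XY
```

However strongly the ATP reaction runs forward, the equilibrium ratio \([\mathrm{XY}]/[\mathrm{X}][\mathrm{Y}]\) stays pinned at \(\alpha_\to/\alpha_\leftarrow\), just as the rate equations predict.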
We start our consideration of rotational motion with a system consisting of two atoms connected by a rigid bond, shown in Figure \(\PageIndex{1}\). Translational motion can be separated from rotational motion if we specify the position of the center of mass by a vector \(R\), and the positions of each atom relative to the center of mass by vectors \(r_1\) and \(r_2\). The positions of the atoms then are given by \(R + r_1\) and \(R + r_2\). The motion of the two particles is described as the translational motion of the center of mass plus the rotational motion of the two particles around the center of mass. The quantum mechanical description of translational motion, which corresponds to a free particle with total mass \(m_1 + m_2\), was described in Chapter 5. Since translational motion and rotational motion are separable, i.e. independent, the translational and rotational energies will add, and the total wavefunction will be a product of a translational function and a rotational function. Figure \(\PageIndex{1}\): Diagrams of the coordinate systems and relevant vectors for a) a diatomic molecule with atoms of mass \(m_1\) and \(m_2\) and b) the equivalent reduced particle of reduced mass \(\mu\). Exercise \(\PageIndex{1}\) What do you need to know in order to write the Hamiltonian for the rigid rotor? We start our quantum mechanical description of rotation with the Hamiltonian: \[\hat {H} = \hat {T} + \hat {V} \label {7-1}\] To explicitly write the components of the Hamiltonian operator, first consider the classical energy of the two rotating atoms and then transform the classical momentum that appears in the energy equation into the equivalent quantum mechanical operator. In the classical analysis the rotational motion of two particles separated by a distance \(r\) can be treated as a single particle with reduced mass \(μ\) at a distance \(r\) from the center of rotation, which is the center of mass. 
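The two-body-to-one-body reduction above can be made concrete with a few lines of code. This is a sketch, not part of the text: it uses the standard reduced-mass formula \(\mu = m_1 m_2/(m_1+m_2)\) and the center-of-mass condition \(m_1 r_1 = m_2 r_2\); the masses and bond length below are illustrative numbers, not data from this chapter.

```python
def reduced_mass(m1, m2):
    """Reduced mass of a two-body system, mu = m1*m2/(m1 + m2)."""
    return m1 * m2 / (m1 + m2)

# Illustrative masses in atomic mass units (roughly H and Cl-35).
m1, m2 = 1.008, 34.969
mu = reduced_mass(m1, m2)
print(mu)                       # ~0.98 amu, smaller than either mass
assert mu < min(m1, m2)

# Positions relative to the center of mass: for bond length r,
# the center-of-mass condition m1*r1 = m2*r2 fixes both distances.
r = 1.27                        # illustrative bond length (angstroms)
r1 = m2 / (m1 + m2) * r         # distance of atom 1 from the center of mass
r2 = m1 / (m1 + m2) * r         # distance of atom 2 from the center of mass
assert abs(m1 * r1 - m2 * r2) < 1e-12
assert abs(r1 + r2 - r) < 1e-12
```

Note how the heavier atom sits much closer to the center of mass, and the reduced mass is dominated by the lighter atom, which is why rotational constants of hydrides are so large.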
The kinetic energy of the reduced particle is \[ T = \dfrac {p^2}{2 \mu} \label {7-2}\] where \[ p^2 = p^2_x + p^2_y + p^2_z \label {7-3}\] Transforming \(T\) to a quantum-mechanical operator yields \[ \hat {T} = - \dfrac {\hbar ^2 \nabla ^2}{2 \mu} \label {7-4}\] where \(\nabla ^2\) is the Laplacian operator. \[\nabla ^2 = \dfrac {\partial ^2}{\partial x^2} + \dfrac {\partial ^2}{\partial y^2} + \dfrac {\partial ^2}{\partial z^2} \label {7-5}\] The rigid rotor model does not include the presence of electric or magnetic fields, or any other external force. Since there are no forces acting on the rotating particle, the potential energy is constant, and we can set it to zero or to any other value because only changes in energy are significant, and there is no absolute zero of energy. \[ \hat {V} = 0 \label {7-6}\] Therefore, the Hamiltonian operator for the Schrödinger equation describing this system consists only of the kinetic energy term. \[ \hat {H} = \hat {T} + \hat {V} = - \dfrac {\hbar ^2 \nabla ^2}{2 \mu} \label {7-7}\] In Equation \ref{7-5} we wrote the Laplacian operator in Cartesian coordinates. Cartesian coordinates (x, y, z) describe position and motion relative to three axes that intersect at 90º. They work fine when the geometry of a problem reflects the symmetry of lines intersecting at 90º, but the Cartesian coordinate system is not so convenient when the geometry involves objects going in circles, as in the rotation of a molecule. In this case, the spherical coordinates \((r,\,\theta,\,\phi)\) shown in Figure \(\PageIndex{2}\) are better. Figure \(\PageIndex{2}\): Location of a point in three-dimensional space using both Cartesian and spherical coordinates. The following variable ranges define all of space in the spherical coordinate system: \(0 \le r < \infty\) \(0 \le \theta \le \pi\) \(0 \le \varphi < 2\pi\) Spherical coordinates are better because they reflect the spherical symmetry of a rotating molecule. 
Spherical coordinates have the advantage that motion in a circle can be described by using only a single coordinate. For example, as shown in Figure \(\PageIndex{2}\), changing \(φ\) describes rotation around the z‑axis. Changing \(θ\) is also very simple. It describes rotation in any plane containing the z‑axis, and \(r\) describes the distance from the origin for any value of \(θ\) and \(φ\). This situation is analogous to choosing Cartesian or spherical coordinates to locate rooms in a building. Cartesian coordinates are excellent if the building is designed with hallways intersecting at 90º and with an elevator running perpendicular to the floors. Cartesian coordinates would be awkward to use for addresses in a spherical satellite space station with spherical hallways at various distances from the center. Exercise \(\PageIndex{2}\) Imagine and draw a sketch of a spherical space station with spherical shells for hallways. Show how three spherical coordinates can be used to specify the address of a particular room in terms of values for \(r\), \(θ\), and \(φ\). In order to use spherical coordinates, we need to express \(\nabla ^2\) in terms of \(r\), \(θ\), and \(φ\). 
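Before transforming the operator itself, the coordinate change can be sketched in code. This is an illustration, not part of the text: it converts a Cartesian point to the spherical coordinates \((r, θ, φ)\) used here, with \(θ\) measured from the +z axis and \(φ\) the azimuth in the x-y plane, and checks the ranges \(0 \le θ \le \pi\), \(0 \le φ < 2\pi\) quoted above.

```python
import math

def to_spherical(x, y, z):
    """Cartesian (x, y, z) -> spherical (r, theta, phi)."""
    r = math.sqrt(x * x + y * y + z * z)
    theta = math.acos(z / r)                 # polar angle from the +z axis
    phi = math.atan2(y, x) % (2 * math.pi)   # azimuth, shifted into [0, 2*pi)
    return r, theta, phi

def to_cartesian(r, theta, phi):
    """Spherical (r, theta, phi) -> Cartesian (x, y, z)."""
    return (r * math.sin(theta) * math.cos(phi),
            r * math.sin(theta) * math.sin(phi),
            r * math.cos(theta))

x, y, z = -1.0, 2.0, 0.5
r, theta, phi = to_spherical(x, y, z)
assert 0 <= theta <= math.pi and 0 <= phi < 2 * math.pi

# The round trip recovers the original point.
xb, yb, zb = to_cartesian(r, theta, phi)
assert max(abs(x - xb), abs(y - yb), abs(z - zb)) < 1e-12
```

The `% (2 * math.pi)` step is needed because `atan2` returns angles in \((-\pi, \pi]\), while the convention here takes \(φ\) in \([0, 2\pi)\).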
The result of the coordinate transformation is \[\nabla ^2 = \dfrac {1}{r^2} \left ( \dfrac {\partial}{\partial r} r^2 \dfrac {\partial}{\partial r} + \dfrac {1}{\sin \theta} \dfrac {\partial}{\partial \theta} \sin \theta \dfrac {\partial}{\partial \theta} + \dfrac {1}{\sin ^2 \theta } \dfrac {\partial ^2}{\partial \varphi ^2} \right ) \label {7-10}\] The Hamiltonian operator in spherical coordinates now becomes \[ \hat {H} = \dfrac {-\hbar ^2}{2 \mu r^2} \left [ \dfrac {\partial}{\partial r} r^2 \dfrac {\partial}{\partial r} + \dfrac {1}{\sin \theta} \dfrac {\partial}{\partial \theta} \sin \theta \dfrac {\partial}{\partial \theta} + \dfrac {1}{\sin ^2 \theta } \dfrac {\partial ^2}{\partial \varphi ^2} \right ] \label {7-11}\] This version of the Hamiltonian looks more complicated than Equation \ref{7-7}, but it has the advantage of using variables that are separable (see Separation of Variables). As you may recall, when the variables are separable, the Schrödinger equation can be written as a sum of terms, with each term depending only on a single variable, and the wavefunction solutions are products of functions that each depend on only one variable. Contributors: Adapted from "Quantum States of Atoms and Molecules" by David M. Hanson, Erica Harvey, Robert Sweeney, Theresa Julia Zielinski
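As a sanity check on the transformed Laplacian (a sketch, not part of the text), we can expand it as \(f_{rr} + \tfrac{2}{r}f_r + \tfrac{1}{r^2}(f_{\theta\theta} + \cot\theta\, f_\theta) + \tfrac{1}{r^2\sin^2\theta} f_{\varphi\varphi}\) and apply it numerically to \(f = z^2 = (r\cos\theta)^2\), whose Cartesian Laplacian is exactly 2 everywhere; the test point below is arbitrary.

```python
import math

def f(r, th, ph):
    """Test function f = z**2 written in spherical coordinates."""
    return (r * math.cos(th)) ** 2

def laplacian_spherical(g, r, th, ph, h=1e-4):
    """Spherical Laplacian of g via central finite differences."""
    d2r = (g(r + h, th, ph) - 2 * g(r, th, ph) + g(r - h, th, ph)) / h**2
    dr  = (g(r + h, th, ph) - g(r - h, th, ph)) / (2 * h)
    d2t = (g(r, th + h, ph) - 2 * g(r, th, ph) + g(r, th - h, ph)) / h**2
    dt  = (g(r, th + h, ph) - g(r, th - h, ph)) / (2 * h)
    d2p = (g(r, th, ph + h) - 2 * g(r, th, ph) + g(r, th, ph - h)) / h**2
    return (d2r + 2 / r * dr
            + (d2t + dt / math.tan(th)) / r**2
            + d2p / (r * math.sin(th)) ** 2)

val = laplacian_spherical(f, r=1.3, th=0.8, ph=0.5)
print(val)   # ~ 2.0, matching the Cartesian result for f = z**2
```

Carrying out the radial and angular derivatives by hand gives \(6\cos^2\theta\) from the radial term and \(2\sin^2\theta - 4\cos^2\theta\) from the \(\theta\) term, which sum to 2, independent of the point chosen.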