I've been working with proofs involving $\limsup$ and $\liminf$, and I'm a bit confused regarding their general methodology. More specifically, I'm unsure about whether my approach to the following problem makes sense.

Problem. Let $(s_n)$ and $(t_n)$ be sequences and suppose that there exists $N_0$ such that $s_n \leq t_n$ for all $n > N_0$. Show that $\liminf s_n \leq \liminf t_n$ and that $\limsup s_n \leq \limsup t_n$.

The way I approached it was as follows: Let $N > N_0$. Then $\limsup_{N \rightarrow \infty} \{ s_n : n > N \} \leq t_n$ as $s_n \leq t_n$, and $\limsup s_n$ is the largest possible limit of a subsequence of $s_n$. As $t_n : n > N$ is (by definition) less than $\limsup_{N \rightarrow \infty} \{ t_n : n > N \}$, the proof is complete.

I'm pretty sure this is incorrect, however, and I'm generally unclear about the method behind such a proof. Any help is appreciated!
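In case it helps, here is a sketch of the standard argument (an addition, not part of the original question), working directly from the definition $\limsup s_n = \lim_{N\to\infty} \sup\{s_k : k > N\}$: for any fixed $N \ge N_0$ and any $k > N$ we have $s_k \leq t_k \leq \sup\{t_j : j > N\}$, so $\sup\{s_k : k > N\} \leq \sup\{t_k : k > N\}$. Both sides are non-increasing in $N$ and converge (in the extended reals) to $\limsup s_n$ and $\limsup t_n$ respectively, so letting $N \to \infty$ preserves the inequality. The $\liminf$ statement follows in exactly the same way with $\inf$ in place of $\sup$.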
ECE301 Fall 2008, Professor C.C. Wang. Use this area to post your questions.

I am trying to figure out how to compute the norm of the DT signal $ x[n]= e^{j 2 \pi n} $. According to the solutions, the answer is $ \left| e^{j 2 \pi n} \right| = 1 $. I don't get it. Shouldn't the answer be a function of n???

Response

Since we are dealing with complex numbers, we can make the following substitution: $ \left| e^{j 2 \pi n} \right| = \left| \cos{2 \pi n} + j \sin{2 \pi n} \right| $ The magnitude of this expression will always be one since $ \cos^2\theta+\sin^2\theta\equiv 1 $.

Related Question

I see your point. But if what you say is true, then we also have $ e^{j 2 \pi t} = \left( e^{j 2 \pi} \right)^t= \left( \cos{2 \pi } + j \sin{2 \pi } \right)^t= 1^t =1, $ which is clearly wrong, because my high school teacher showed us the graph of $ e^{j 2 \pi t} $ and it was oscillating. So what am I doing wrong?

MA181 to the rescue!

The identity $ (e^a)^t=e^{ta} $ is true for real numbers, but it is not always true for complex numbers, even when $ a $ is complex and $ t $ is real. When $ z $ is complex and $ t $ is real, $ z^t $ stands for $ e^{t\log z} $, where $ \log z $ is the complex logarithm of $ z $, which has infinitely many possible values. By the way, the constant function is a function. The modulus of $ e^{j\theta} $ is equal to one for any value of $ \theta $ because that complex number represents a point on the unit circle.

I saw the plea on our page and thought I'd check it out. While I can't claim to have much understanding of what is going on, it seems like that solution is valid because $ n $ represents either the natural numbers or integers, not the real numbers. Therefore, it would be like taking the value of the sine function at whole-number products of $ 2\pi $. Even though the function oscillates, the sequence is constant. Hope I haven't made a fool of myself. --Jmason
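As a quick numerical illustration (a small NumPy check added here, not from the original thread), the DT sequence really is constant while the CT signal oscillates:

```python
import numpy as np

# DT signal x[n] = e^{j 2 pi n}: sampled only at integer n, so it is constant.
n = np.arange(8)
x = np.exp(1j * 2 * np.pi * n)
print(np.abs(x))        # all 1 (up to floating-point error)
print(x.real.round(6))  # all 1: every sample sits at the same point of the unit circle

# CT signal e^{j 2 pi t}: same modulus 1, but the phase changes continuously with t.
t = np.linspace(0.0, 1.0, 5)
print(np.exp(1j * 2 * np.pi * t).round(3))
```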
Spring 2018, Math 171 Week 8

Exponential Distribution

Let \(X \sim \mathrm{exp}(\lambda)\).

- Find the distribution of \(Y = \lceil X \rceil\). (Answer) \(\mathrm{geometric}(1-e^{-\lambda})\)
- Show that \(X\) and \(Y\) are both memoryless.
- Find the distribution of \(\beta X\). (Answer) \(\mathrm{exponential}(\lambda/\beta)\)
- Find the distribution of \(e^{-X}\). (Solution) Let \(Y = e^{-X}\). Note that since \(X \in [0, \infty)\) we have \(Y \in (0, 1]\). \[\begin{aligned}F_Y(y) &= P(Y \le y)\\ &= P(e^{-X}\le y)\\&=P(-X \le \log(y))\\&= P(X \ge -\log(y))\\&=1-F_X(-\log(y))\end{aligned}\] \[\begin{aligned}f_Y(y) &= \frac{d}{dy}F_Y(y) \\ &= \frac{d}{dy}(1 - F_X(-\log(y))) \\ &= -f_X(-\log(y))\cdot \frac{-1}{y} \\ &= \frac{\lambda e^{-\lambda (-\log(y))}}{y} \\ &= \lambda y^{\lambda - 1}\end{aligned}\]
- Let \(U \sim \mathrm{uniform}[0,1]\). Find the distribution of \(-\alpha\log{U}\). (Answer) \(\mathrm{exponential}(1/\alpha)\)

Let \(X_1, X_2, \dots\overset{\mathrm{i.i.d}}{\sim} \mathrm{exp}(\lambda)\).

- (Discussed) Suppose \(N \sim \mathrm{geo}(p)\). Find the distribution of \(Z = \sum_{i=1}^N X_i\).
- Find the distribution of \(Q = \min(X_1, X_2, \dots, X_n)\). (Answer) \(\mathrm{exponential}(n\lambda)\)
- Find the cumulative distribution of \(V = \max(X_1, X_2, \dots, X_n)\). (Answer) \((1-e^{-v\lambda})^n\)

Poisson Process Basics

Let \(N(t)\) be a Poisson process with rate \(\lambda\).

- (Discussed) Find the probability of no arrivals in \((3,5]\).
- (Discussed) Find the probability that there is exactly one arrival in each of the intervals \((0,1], (1,2], (2,3], (3,4]\).
- (Discussed) Find the probability that there are two arrivals in \((0,2]\) and three arrivals in \((1,4]\).
- (Discussed) Find the covariance of \(N(t_1)\) and \(N(t_2)\) for \(0 < t_1 < t_2\).
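The sketch below (added here, not part of the original worksheet) numerically checks two of the answers above, the \(\mathrm{geometric}(1-e^{-\lambda})\) distribution of \(\lceil X\rceil\) and the \(\mathrm{exponential}(n\lambda)\) distribution of the minimum; the values of \(\lambda\), \(n\), and the sample size are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(0)
lam, n_samples = 0.7, 200_000

# X ~ exp(lambda); ceil(X) should be geometric with success probability p = 1 - e^{-lambda}.
X = rng.exponential(scale=1 / lam, size=n_samples)
Y = np.ceil(X)
p = 1 - np.exp(-lam)
print("P(Y = 1):", np.mean(Y == 1), "vs", p)
print("P(Y = 2):", np.mean(Y == 2), "vs", (1 - p) * p)

# The minimum of n i.i.d. exp(lambda) variables should be exp(n * lambda); compare means.
n = 5
mins = rng.exponential(scale=1 / lam, size=(n_samples, n)).min(axis=1)
print("E[min]:", mins.mean(), "vs", 1 / (n * lam))
```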
Decide if the series $$\sum_{n=1}^\infty\frac{4^{n+1}}{3^{n}-2}$$ converges or diverges and, if it converges, find its sum.

Is this how you would show divergence?

Attempt: For $n \in [1,\infty)$, $a_n = \frac{4^{n+1}}{3^n -2} \geq 0$. Moreover, since $3^n - 2 \leq 3^n$, we have $a_n = \frac{4^{n+1}}{3^n-2} \geq \frac{4^{n+1}}{3^n} = b_n$ for all $n \geq 1$. The series $\sum_{n=1}^{\infty} \frac{4^{n+1}}{3^n} = 4\sum_{n=1}^{\infty}\left(\frac{4}{3}\right)^n$ is a geometric series with ratio $r = \frac{4}{3} > 1$, so it diverges by the geometric series test, and therefore $\sum a_n$ diverges by the comparison test.
Simulating Nonlinear Sound Propagation in an Acoustic Horn When modeling acoustic devices, it’s often enough to account for linear propagation alone, even though nonlinearities are always present. However, when the signaling amplitude reaches high levels in a design, nonlinear effects become important. Engineers can include nonlinear effects in simulations by taking advantage of the Nonlinear Acoustics (Westervelt) feature in the COMSOL Multiphysics® software, as demonstrated by an exponential horn example. Using an Acoustic Horn to Increase Sound Amplitude One of the oldest ways of amplifying sound is by using an acoustic horn. A classic example is the mechanical phonograph. Invented by Thomas Edison in the 1870s, the phonograph is a system made of a foil-wrapped wooden cylinder (later made of wax); a needle; and a horn placed against the foil, or metal diaphragm. With a phonograph, you can make a recording simply by speaking into the horn, with the vibration causing the needle to etch grooves into the foil. You can also listen to a recording by placing the needle at the beginning of the groove and turning the handle of the machine. As the needle moves along the groove pattern, the vibrations it makes are amplified by the horn. These capabilities inspired acoustic engineers to improve upon the design and soon, the cylinders were replaced with flat record discs, and more advanced horns were used to improve the sound amplification. Left: Thomas Edison and an early phonograph. Image in the public domain in the United States, via Wikimedia Commons. Right: A phonograph with a classic horn shape next to cylinders. Image by Tomasz Sienicki — Own work. Licensed under CC BY-SA 3.0, via Wikimedia Commons. Nowadays, the acoustic horn is a common element used in electrodynamic loudspeakers or for signaling on ships and trains. At first, horn speakers could not amplify sound very far. After electricity entered the picture, though, horn speakers could transfer low levels of electric power into high levels of sound capable of filling large venues. Instead of a mechanically driven diaphragm, an electrically driven loudspeaker uses an electromagnetic moving coil and diaphragm to produce sound that is amplified through a horn. These high-efficiency speakers are often used in public address systems at outdoor parks or sports stadiums as well as in loud alarm systems. For high-amplitude signaling, the electromagnetic motor is often replaced with a compressed air driver. The reason the horn is so effective is because its shape allows for a controlled cross-section increase. This results in a so-called impedance match between the sound source (a loudspeaker) and the surrounding air. The idea is that an acoustic horn can radiate sound efficiently in a large frequency range. Efficient radiation is obtained when the pressure is in phase with the particle velocity, which requires a large surface at lower frequencies. The acoustic horn permits this; the sound is generated by a small source (at the throat of the horn) but radiated by a large surface (the mouth of the horn). The impedance matching properties of the horn ensure that the radiated wave front is altered as little as possible (from throat to mouth), keeping the pressure and particle velocity in phase. The simplest one-dimensional description of horn acoustics is given by the Webster horn equation. One common type of horn driver is the exponential horn, which has good impedance-matching capabilities. 
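For reference, and not part of the original post, the simplest one-dimensional description mentioned above is the Webster horn equation for the acoustic pressure $p$ in a duct of slowly varying cross-sectional area $S(x)$:

$$\frac{1}{S(x)}\frac{\partial}{\partial x}\!\left(S(x)\,\frac{\partial p}{\partial x}\right) = \frac{1}{c^2}\,\frac{\partial^2 p}{\partial t^2}.$$

An exponential horn corresponds to the flare $S(x) = S_0\,e^{m x}$ (with $S_0$ the throat area and $m$ the flare constant); within this 1D model the horn only supports propagating waves above the cutoff frequency $f_c = mc/4\pi$, which is one way of quantifying its impedance-matching behavior.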
When acoustic horns are driven at very high amplitudes, as is often the case for signaling (for ships or trains) or in sound systems used for concert venues, the nonlinear behavior of the acoustics needs to be taken into account. Because of the geometry of the horn, the high sound pressure levels (SPLs) are typically located in the throat of the horn. While nonlinear propagation is present at lower amplitudes, it doesn't show its effects until high sound amplitudes are reached. Thus, it's important to account for nonlinear effects in simulations when using an acoustic horn for high-amplitude signaling.

As a good rule of thumb, linear acoustics is applicable as long as the acoustic pressure $p$ is much smaller than $\rho c^2$ (that is, $|p| \ll \rho c^2$), where $\rho$ is the fluid density (1.2 kg/m³ for air) and $c$ is the speed of sound (343 m/s for air). This gives a value of $\rho c^2 = 1.4 \cdot 10^5$ Pa for air. Assuming that "much smaller than" corresponds to a factor of 100, linear acoustics applies up to roughly an SPL of 154 dB.

Modeling High-Amplitude Acoustics with the Westervelt Model

You can model the propagation of nonlinear acoustic waves generated by a horn using the Acoustics Module, an add-on to COMSOL Multiphysics. Simulation allows you to see how the input waveform at the horn's throat affects the waveform as output at the mouth. In this exponential horn example, the model is set up with a harmonic input at the throat driven at the frequency $f_0$ = 130 Hz. This generates an acoustic wave with a frequency spectrum containing the harmonics $2f_0$, $3f_0$, $4f_0$, etc. The model mesh resolves up to the fourth harmonic $4f_0$. Nonlinear acoustic simulations require a full nonlinear transient analysis of the system, as frequency-domain models only apply in the linear case.

A schematic of the acoustic horn model.

The Pressure Acoustics, Transient interface is used in this example for the transient computation of the acoustic pressure, while the dissipative (thermally conducting and viscous) material model and the Nonlinear Acoustics (Westervelt) domain condition (the latter of which is available as of version 5.4 of the COMSOL® software) simulate the nonlinear propagation of acoustics in the physical domain. As shown below for the 2D axisymmetric model, the model includes the Exterior Field Calculation boundary condition (also available as of version 5.4), which comes into play when computing and visualizing the radiation pattern (more on that later), as well as perfectly matched layers (PMLs), which are used together with the lossless Transient Pressure Acoustics Model node to simulate the open nonreflecting condition toward infinity.

2D axisymmetric model setup.

The nonlinear transient study has two steps:

1. Time-dependent analysis
2. Time-to-frequency fast Fourier transform (FFT)

For the first step, the Nonlinear Acoustics (Westervelt) feature automatically tunes the time-dependent solver. This convenient functionality helps make the solution of the underlying nonlinear problem more efficient. Once the solution reaches a steady state, a time-to-frequency FFT is performed, and the result is stored on the exterior field calculation boundary, where it is used to calculate the exterior field.

Evaluating the Simulation Results

First up in the results, you can take a look at the acoustic pressure. The plot to the left below compares the linear (green) and the nonlinear (blue) behaviors in a point just in front of the horn.
The red lines correspond to the amplitude computed from the frequency-domain model. From this graph, you can visualize the total nonlinear acoustic pressure at high amplitudes. On the left, there's a comparison of the linear and nonlinear approaches for computing the acoustic pressure. The animation on the right visualizes the total nonlinear acoustic pressure profile.

Next, you can analyze the frequency content of the signals. The image on the left shows the transient computation of the acoustic pressure, zoomed in on 5 periods. The image on the right displays the frequency spectrum for both the linear and nonlinear analyses. From the graph, it is evident that the nonlinear model contains higher harmonic components. Due to the nonlinear behavior, energy is pumped from the fundamental frequency to the higher harmonics.

Acoustic pressure as a function of time (left) and frequency spectrum with the nonlinear harmonic components clearly visible (right).

Next, you can examine the exterior field. The exterior field calculation feature makes it possible to visualize the radiation pattern of the acoustic field at any given distance from the source, enabling you to study the exterior field SPL. Below on the left is the normalized exterior field SPL, showing the nonlinear analysis versus the single-frequency (linear) domain analysis. In the image on the right, you can see the nonlinear transient analysis, showing the exterior field SPL at the first three harmonic frequency components. The latter graph also shows the relative amplitude of the various components.

On the left is the normalized exterior field SPL for a nonlinear analysis versus a single-frequency domain analysis. On the right is the nonlinear transient analysis for the exterior field SPL of the first three frequency components. Nonlinear effects in the exponential horn model.

As shown by this example, the Nonlinear Acoustics (Westervelt) feature and the Exterior Field Calculation boundary condition help account for and visualize nonlinear propagation and effects in acoustics simulations, thereby enabling engineers to improve upon acoustics designs requiring higher-amplitude signaling.

Next Steps

Try modeling an acoustic horn yourself: the step-by-step guide and the MPH-file are available in the Application Gallery (you must log into your COMSOL Access account and have a valid software license).
I have the following partial differential equation:

I'm asked to prove that if $f\equiv 0$, then the total energy (kinetic energy + potential energy) of the system decreases with time. What is the expression for the energy of this system? I know what the expression for the energy is for parabolic or hyperbolic partial differential equations, but this, clearly, is neither.

UPDATE: If we define the energy to be $\frac{1}{2}(u_t)^2+\frac{1}{2}\sum\limits_{ij}a^{ij}u_{x_i}u_{x_j}$, then it seems that $\frac{dE}{dt}=-\int{d(u_t)^2}$. I don't quite understand how one gets this final expression.
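Here is a sketch of where that identity comes from. It assumes (since the PDE itself is not shown above) that the equation is a damped wave equation of the form $u_{tt} + d\,u_t - \sum_{ij}\big(a^{ij}u_{x_j}\big)_{x_i} = f$ with symmetric, time-independent coefficients $a^{ij}$, a damping coefficient $d \ge 0$, and boundary terms that vanish when integrating by parts. With

$$E(t) = \frac{1}{2}\int \Big[(u_t)^2 + \sum_{ij} a^{ij}\, u_{x_i} u_{x_j}\Big]\,dx,$$

differentiating under the integral and integrating by parts in $x$ (using the symmetry of $a^{ij}$) gives

$$\frac{dE}{dt} = \int \Big[u_t\, u_{tt} + \sum_{ij} a^{ij}\, u_{x_i}\, u_{x_j t}\Big]\,dx = \int u_t\Big[u_{tt} - \sum_{ij}\big(a^{ij}u_{x_j}\big)_{x_i}\Big]\,dx = \int u_t\,(f - d\,u_t)\,dx.$$

With $f \equiv 0$ this is $\frac{dE}{dt} = -\int d\,(u_t)^2\,dx \le 0$, which is the claimed expression and shows that the energy is non-increasing.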
Here is a reason. The fourth of Maxwell's macroscopic equations says that$$ \nabla \times \vec{H} = \vec{J} +\frac{\partial \vec{D}}{\partial t},$$where $\vec{J}$ is the free current at a point. In general, it is not possible to rewrite this in terms of the B-field without a detailed knowledge of the microscopic behaviour of the medium (with the exception of vacuum) and what currents and polarisation charges are present, either inherently or induced by applied fields. Sometimes the approximation is made that $\vec{B} = \mu \vec{H}$, but this runs into trouble in even quite ordinary magnetic materials that have a permanent magnetisation or suffer from hysteresis, and the general relationship is that $$ \vec{B} = \mu_0 (\vec{H} + \vec{M}) , $$where $\vec{M}$ is the magnetisation field (permanent or induced magnetic dipole moment per unit volume). For these reasons, the auxiliary magnetic field strength $\vec{H}$ is invaluable for performing accurate calculations of the fields induced by currents, or vice versa, within magnetic materials.

On the other hand, the Lorentz force on charged particles is expressed in terms of the magnetic flux density $\vec{B}$:$$ \vec{F} = q\vec{E} + q\vec{v}\times \vec{B}.$$Indeed this can form the basis of the definition of the B-field and can be used, along with the lack of magnetic monopoles, to derive Maxwell's third equation (Faraday's law), which does not feature the H-field.

So, both fields are a necessary part of the physicist's toolbox. As Philosophiae Naturalis points out in a comment, the B-field can be thought of as the sum of the contributions from the (applied) H-field and whatever magnetisation (induced or intrinsic) is present. Often, we can only control or easily measure the applied H-field. In limited circumstances we can get away with using only one of the B- or H-field if the magnetisation is related to the applied H-field in a straightforward way. For other cases (and hence for most ferromagnetic materials or permanent magnets) both fields must be considered.
Vector Cross Product

The dot product, discussed in the previous section, was introduced through the requirement that arose in calculating the work done by a given force \(\vec F\) when the point of application of the force is displaced by a certain amount given by \(\vec s\):

\[W = \vec F \cdot \vec s\]

In this section, we'll see that another form of vector product exists and is extremely useful to discuss many different physical phenomena; this product is called the cross product. The cross product of \(\vec a\,\,{\text{and}}\,\,\vec b\) is another vector \(\vec c\):

\[\vec c = \vec a \times \vec b\]

Let us, through a physical example, understand what the cross product means. Consider a horizontal magnetic field, which we can represent by \(\vec B\), and a charge \(q\) projected into this field with a velocity \(\vec v\) (at an angle \(\theta\) with the horizontal). Experiments show that the force \(\vec F\) acting on this particle

(a) is perpendicular to the plane of \(\vec v\) and \(\vec B\) and goes into the plane in the figure above.

(b) increases with increase in \(\left| {\vec v} \right|\;\;{\text{and}}\;\;\left| {\vec B} \right|\).

(c) is such that its magnitude increases as \(\theta\) goes from \(0\;\;{\text{to}}\;\frac{\pi }{2}\). In fact, when \(\vec v\) and \(\vec B\) are parallel, the force on the particle is zero. For fixed magnitudes of \(\vec v\) and \(\vec B\), the force is the maximum when \(\begin{align}\theta = \frac{\pi }{2}\end{align}\).

(d) increases with increase in charge.

This suggests the dependence

\[\left| {\vec F} \right|\; \propto q\;\left| {\vec v} \right|\;\left| {\vec B} \right|\;\sin \theta \]

which has been confirmed experimentally. In fact, the relation is (exactly)

\[\left| {\vec F} \right| = q\;\left| {\vec v} \right|\;\left| {\vec B} \right|\;\sin \theta \]

The direction of \(\vec F\) is found to satisfy the right-hand thumb rule: holding out your thumb, use your right-hand fingers to map out the rotation from \(\vec v\) to \(\vec B\). The direction of \(\vec F\) is given by the direction in which the thumb points.

Now, since \(\vec F\) is a vector with direction perpendicular to both \(\vec v\) and \(\vec B\), we write the expression for \(\vec F\) as

\[\boxed{\vec F = q(\vec v \times \vec B)}\]

where the vector \(\vec v \times \vec B\), the cross product of \(\vec v\) and \(\vec B\), is understood to be a vector such that its magnitude is \(\left| {\vec v} \right|\left| {\vec B} \right|\sin \theta\) and its direction is given by the right-hand thumb rule.

In general, the cross product of \(\vec a\,\,{\text{and}}\,\,\vec b\), i.e. \(\vec c = \vec a \times \vec b\), is a vector with magnitude \(\left| {\vec a} \right|\;\left| {\vec b} \right|\sin \theta\) (\(\theta\) being the angle between \(\vec a\,\,{\text{and}}\,\,\vec b\)) and direction perpendicular to the plane of \(\vec a\,\,{\text{and}}\,\,\vec b\) such that \(\vec a\,,\,\vec b\) and this direction form a right-handed system. It is important to keep in mind that the cross product is a vector; the dot product was a scalar. The cross product is also referred to as the vector product.

The cross product of \(\vec a\,\,{\text{and}}\,\,\vec b\), say \(\vec c\), has an interesting geometrical interpretation.
Since \(\left| {\vec c} \right| = \left| {\vec a} \right|\;\left| {\vec b} \right|\;\sin \theta ,\,\,\left| {\vec c} \right|\) represents the area of the parallelogram with adjacent sides \(\vec a\,\,{\text{and}}\,\,\vec b\) : In fact, the area of the parallelogram can itself be treated as a vector (as it is in physical phenomena): \[\vec A = \vec a \times \vec b\] The area of the triangle formed with \(\vec a\,\,{\text{and}}\,\,\vec b\) as two sides is simply \(\begin{align}\frac{1}{2}\left| {\vec A} \right| = \frac{1}{2}\;\left| {\vec a \times \vec b} \right|\end{align}\) .
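To make the magnitude and direction rules concrete, here is a small numerical check (the two vectors are arbitrary examples, not taken from the text):

```python
import numpy as np

a = np.array([2.0, 1.0, 0.0])
b = np.array([0.5, 3.0, 0.0])

c = np.cross(a, b)                        # the cross product a x b
area_parallelogram = np.linalg.norm(c)    # |a||b| sin(theta)
area_triangle = 0.5 * area_parallelogram  # half the parallelogram area

print(c)                           # (0, 0, 5.5): along +z, as the right-hand rule predicts
print(np.dot(c, a), np.dot(c, b))  # both 0: c is perpendicular to a and b
print(area_parallelogram, area_triangle)
```

The zero dot products confirm that \(\vec c\) is perpendicular to both \(\vec a\,\,{\text{and}}\,\,\vec b\), and its length equals the area of the parallelogram they span.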
Occupation Methods

This document explains and compares the different occupation methods available in ATK. We suggest these guidelines for choosing the occupation method, depending on the system of interest:

Systems with a band gap (semiconductors, insulators, molecules): Use either Fermi-Dirac or Gaussian smearing with a low broadening, e.g. around 0.01 eV.

Metals: Use either Methfessel-Paxton or cold smearing with as large a broadening as possible, as long as the entropy contribution to the free energy remains small.

Note: The smearing width of the Fermi-Dirac distribution is roughly a factor of two larger than for the other functions. Therefore, in order to obtain the same \(\mathbf{k}\)-point convergence using one of these methods as obtained with the Fermi-Dirac method, one has to use a broadening of twice the size.

Background

Introduction to smearing methods

In ATK-DFT and ATK-SE the central object is the electron density \(n(\mathbf{r})\), which is calculated from the Kohn-Sham eigenvectors \(\psi_i(\mathbf{r})\) by the expression

\[n(\mathbf{r}) = \sum_i f_i \, |\psi_i(\mathbf{r})|^2,\]

where the index \(i\) runs over all states and \(f_i\) are the occupation numbers. The latter can be either 1 if the given state is occupied or 0 if the state is unoccupied. In periodic systems such as bulk materials or surfaces, the sum over states involves an integration over the Brillouin zone (BZ) of the system:

\[n(\mathbf{r}) = \sum_i \int_\text{BZ} f_{i\mathbf{k}} \, |\psi_{i\mathbf{k}}(\mathbf{r})|^2 \, \frac{\mathrm{d}\mathbf{k}}{\Omega_\text{BZ}}.\]

In practice this integration is carried out numerically by summing over a finite set of \(\mathbf{k}\)-points. For gapped systems the density and derived quantities, like the total energy, converge quickly with the number of \(\mathbf{k}\)-points used in the integration. However, for metals the bands crossing the Fermi level are only partially occupied, and a discontinuity exists at the Fermi surface, where the occupancies suddenly jump from 1 to 0. In this case, one will often need a prohibitively large amount of \(\mathbf{k}\)-points in order to make calculations converge.

The number of \(\mathbf{k}\)-points needed to make the calculations converge can be drastically reduced by replacing the integer occupation numbers \(f_{i\mathbf{k}}\) by a function that varies smoothly from 1 to 0 close to the Fermi level. The most natural choice is the Fermi-Dirac distribution,

\[f_{i\mathbf{k}} = \frac{1}{e^{(\epsilon_{i\mathbf{k}}-\mu)/\sigma}+1},\]

where \(\epsilon_{i\mathbf{k}}\) is the energy, \(\mu\) is the chemical potential and \(\sigma=k_\text{B}T\) is the broadening.

Fig. 104 shows the convergence of the total energy of bulk Aluminum, a typical simple metal. We see that using \(\sigma\) = 0.03 eV one needs a \(25\times 25\times 25\) \(\mathbf{k}\)-point sampling grid (a total of 15625 \(\mathbf{k}\)-points) in order to converge the total energy within 1 meV, whereas using \(\sigma\) = 0.43 eV the total energy is converged to within 1 meV using only a \(13\times 13\times 13\) grid (a total of 2197 \(\mathbf{k}\)-points). This results in a calculation which is roughly a factor of 7 faster.

Free energy functional

When introducing the Fermi-Dirac distribution one effectively considers an equivalent system of non-interacting electrons at a temperature \(T\). This also means that the variational internal energy functional \(E[n]\) that is minimized is replaced by the free energy functional [Mer65]

\[F[n] = E[n] - TS,\]

where \(S\) is the electronic entropy. All derived quantities such as the density, total energy, forces, etc., will therefore depend on the electron temperature \(T\). If one is actually interested in simulating a system at finite temperature, then the free energy is the relevant functional.
If that is not the case, the zero-temperature internal energy \(E(\sigma=0)\) can still be extrapolated from the free energy \(F(\sigma)\), due to the quadratic dependence (to the lowest order) of both \(E(\sigma)\) and \(F(\sigma)\) on \(\sigma\), by the formula [Gil89]

\[E(\sigma \to 0) \approx \tfrac{1}{2}\left[E(\sigma) + F(\sigma)\right].\]

Fig. 105 shows that, for all the values of the broadening \(\sigma\) considered, the value of the energy extrapolated to \(\sigma \to 0\) is basically spot on the actual value. Using this extrapolation method it is thus possible to do calculations with very high broadenings, necessary to converge metallic systems, with a reasonable number of \(\mathbf{k}\)-points and still get an accurate ground state energy. The extrapolated energy is by default shown in the output when doing a total energy calculation using QuantumATK. In order to get the value using QuantumATK see TotalEnergy.

Unfortunately a similar extrapolation method does not exist for forces and stress. Thus these properties will be those that correspond to the free energy and will be directly dependent on the chosen broadening. In order to minimize the errors introduced by the broadening, alternative occupation functions for which the entropic contribution to the free energy is smaller than for the Fermi-Dirac distribution have been developed.

The different occupation functions are introduced on the basis of considering the density of states,

\[D(\epsilon) = \sum_i \int_\text{BZ} \delta(\epsilon - \epsilon_{i\mathbf{k}}) \, \frac{\mathrm{d}\mathbf{k}}{\Omega_\text{BZ}}.\]

Since the \(\mathbf{k}\)-point integration is in practice carried out as a sum over a finite number of points, one has to replace the \(\delta\)-function by a smeared function \(\tilde{\delta}(x)\), whose width will be determined by a broadening \(\sigma\). With a choice of smearing function the occupation function is given by

\[f_{i\mathbf{k}} = \int_{-\infty}^{\mu} \tilde{\delta}(\epsilon - \epsilon_{i\mathbf{k}}) \, \mathrm{d}\epsilon,\]

where \(\mu\) is the Fermi level. Even without directly introducing temperature, one can then show that the functional that has to be minimized is the generalized free energy [WD92][DV92]. The generalized temperature is given by the broadening \(\sigma\), and the smearing method also directly determines the expression for the generalized entropy.

Comparison of smearing methods

In ATK four different smearing methods are available:

- Fermi-Dirac distribution (FermiDirac)
- Gaussian smearing (GaussianSmearing) [FH83]
- Methfessel-Paxton smearing (MethfesselPaxton) [MP89]
- Cold smearing (ColdSmearing) [MVDVP99]

Warning: While the broadening parameter of the Fermi-Dirac distribution has a real physical meaning and can actually be associated with an electronic temperature, this is not true for the other smearing methods, for which the broadening is simply a parameter without a well-defined physical meaning!

Fig. 106 shows plots of the smeared \(\delta\)-functions and occupation functions for the different methods. From the figure we note a few things:

The width of the Fermi-Dirac smearing function is larger than all the others. The ratio of the full width at half maximum between the Fermi-Dirac and the Gaussian smeared \(\delta\)-function is

\[\alpha = \frac{\text{FWHM}(\text{Fermi-Dirac})}{\text{FWHM}(\text{Gaussian})} = \frac{2 \cosh^{-1}(\sqrt{2})}{\sqrt{\ln(2)}} \approx 2.117.\]

This means that in order to get similar \(\mathbf{k}\)-point convergence as for the Fermi-Dirac method one has to use a broadening which is a factor of ~2 larger when using one of the other methods.

The Methfessel-Paxton function is special in that the occupations may take unphysical negative values and values larger than one.
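Returning for a moment to the FWHM ratio quoted above, it is easy to verify numerically. The snippet below is a plain NumPy sketch (it is not ATK/QuantumATK code, and the broadening value is arbitrary since the ratio does not depend on it):

```python
import numpy as np

sigma = 1.0                                  # arbitrary broadening
x = np.linspace(0.0, 10.0 * sigma, 200_001)  # energy axis, x = epsilon - mu

# Smeared delta-functions of the two methods.
delta_fd = 1.0 / (4.0 * sigma * np.cosh(x / (2.0 * sigma)) ** 2)    # -d f_FD / d epsilon
delta_gauss = np.exp(-(x / sigma) ** 2) / (sigma * np.sqrt(np.pi))  # Gaussian smearing

def fwhm(delta):
    # Both functions are symmetric and peak at x = 0, so the full width is twice
    # the largest x at which the function is still at least half of its peak value.
    half = delta[0] / 2.0
    return 2.0 * x[delta >= half][-1]

print(fwhm(delta_fd) / fwhm(delta_gauss))               # ~2.117
print(2 * np.arccosh(np.sqrt(2)) / np.sqrt(np.log(2)))  # the analytic ratio from the text
```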
For insulators and semiconductors, as well as for too coarsely sampled metals, these negative Methfessel-Paxton occupations may lead to a negative density of states and a negative density, which may cause computational problems. The cold smearing function is asymmetric but does not attain negative values, and problems with a negative density are therefore avoided.

From Fig. 107 it can be seen that for the Methfessel-Paxton and cold smearing the free energy, \(F(\sigma)\), hardly varies with \(\sigma\). In fact it can be shown that for these two methods \(F(\sigma)\) only has 3rd and higher order dependences on \(\sigma\).

The low \(\sigma\) dependence of the free energy for Methfessel-Paxton and cold smearing should carry over to derived quantities like forces and stress. This is indeed the case, as illustrated in Fig. 108, which shows the force on the uppermost atom in a 6-layer Aluminum 111 slab as a function of the used broadening. We see that for small values of the broadening the outer layers seek to contract, whereas this effect is reversed for the Fermi-Dirac distribution at a broadening of about 0.75 eV due to the introduced electron gas pressure. For Methfessel-Paxton and cold smearing the error is negligible for a large range of values of the broadening. This means that one can efficiently calculate accurate forces (for example, during structural optimizations, ab initio molecular dynamics and phonon calculations) for metals using sizeable broadenings and relatively low \(\mathbf{k}\)-point samplings.

References

[DV92] Alessandro De Vita. The Energetics of Defects and Impurities in Metals and Ionic Materials from First Principles. PhD thesis, University of Keele, September 1992.

[FH83] C.-L. Fu and K.-M. Ho. First-principles calculation of the equilibrium ground-state properties of transition metals: Applications to Nb and Mo. Phys. Rev. B, 28(10):5480–5486, November 1983. doi:10.1103/PhysRevB.28.5480.

[Gil89] M. J. Gillan. Calculation of the vacancy formation energy in aluminium. J. Phys.: Condens. Matter, 1(4):689, 1989. doi:10.1088/0953-8984/1/4/005.

[MVDVP99] Nicola Marzari, David Vanderbilt, Alessandro De Vita, and M. C. Payne. Thermal Contraction and Disordering of the Al(110) Surface. Phys. Rev. Lett., 82(16):3296–3299, April 1999. doi:10.1103/PhysRevLett.82.3296.

[Mer65] N. D. Mermin. Thermal properties of the inhomogeneous electron gas. Phys. Rev., 137:A1441–A1443, Mar 1965. doi:10.1103/PhysRev.137.A1441.

[MP89] M. Methfessel and A. T. Paxton. High-precision sampling for Brillouin-zone integration in metals. Phys. Rev. B, 40(6):3616–3621, August 1989. doi:10.1103/PhysRevB.40.3616.

[WD92] M. Weinert and J. W. Davenport. Fractional occupations and density-functional energies and forces. Phys. Rev. B, 45(23):13709–13712, June 1992. doi:10.1103/PhysRevB.45.13709.
The ShiftRegister PWM Library enables usage of shift register pins as pulse-width modulated (PWM) pins. Instead of setting them to either high or low, the library lets the user set them to up to 256 PWM levels. This post serves as a documentation page for the library and is to be extended over time.

Getting Started

In order to get started, you need an Arduino UNO, a 74HC595 shift register (NXP data sheet), and some LEDs. The figure below shows the wiring. The LEDs may be connected to the eight shift register output pins (Q0 to Q7). Note that the shift register's control wires (data, shift clock, and latch clock) are connected to the Arduino pins 2, 3, and 4. These pins cannot be changed easily, because the library internally uses port manipulation to maximize performance. The Custom Wiring section of this post explains how these pins can be altered.

After setting up the hardware, download and install the library on your machine. Now you can run the example sketch Sine by uploading the following code. The sketch makes the LEDs of the shift register pulse like a sine wave (as shown in the introduction video).

#include "ShiftRegisterPWM.h"

ShiftRegisterPWM sr(1, 16);

void setup() {
  pinMode(2, OUTPUT); // sr data pin
  pinMode(3, OUTPUT); // sr clock pin
  pinMode(4, OUTPUT); // sr latch pin

  // use timer1 for frequent update
  sr.interrupt(ShiftRegisterPWM::UpdateFrequency::SuperFast);
}

void loop() {
  for (uint8_t i = 0; i < 8; i++) {
    uint8_t val = (uint8_t)(((float) sin(millis() / 150.0 + i / 8.0 * 2.0 * PI) + 1) * 128);
    sr.set(i, val);
  }
}

The key things to note from the sketch above are:

1. Create a shift register object. An explanation of the resolution parameter can be found in the next section. It is important to keep the resolution as low as possible because the memory consumption grows linearly with it.

ShiftRegisterPWM sr(numShiftRegisters, resolution);

2. Enable timer interrupts, i.e. automatically update the shift register output pins using timer 1 of the ATmega328P. The parameter defines the clock frequency (see the next section for a table of possible values).

sr.interrupt(ShiftRegisterPWM::UpdateFrequency::SuperFast);

3. Set the i-th pin of the shift register to a PWM value between 0 (always off) and 255 (always on). If the output resolution is set to a lower value, e.g. 8, the PWM value will be scaled down accordingly.

sr.set(i, val);

Terminology

Pulse-width modulation (PWM) is a technique for encoding information in a digital signal through pulsing. See Wikipedia for details. The Arduino Uno has six pins that support PWM output (namely 3, 5, 6, 9, 10, and 11), which can be accessed using the function analogWrite. In some cases, however, more PWM pins might be required. This library makes the pins of a shift register PWM-capable.

A shift register in our use case is a storage element that can be serially fed with digital values. In the case of the 74HC595 shift register, it outputs the last eight bits of data in parallel.

The PWM carrier frequency, denoted as $f_\text{carrier}$, is the frequency at which PWM pulses are emitted. Its inverse value is the period. The Arduino's carrier frequency is $490$ or $980$ Hz by default (reference).

The PWM clock frequency, denoted as $f_\text{clock}$, is the maximum frequency at which outputs can be changed. Some possible values are predefined and listed in the table below. They can be passed to the interrupt function.
For example, like this:

sr.interrupt(ShiftRegisterPWM::UpdateFrequency::SuperFast);

| Name | Clock frequency $f_\text{clock}$ |
| --- | --- |
| VerySlow | $\approx6,400\text{ Hz}$ |
| Slow | $\approx12,800\text{ Hz}$ |
| Medium | $\approx25,600\text{ Hz}$ |
| Fast | $\approx35,714\text{ Hz}$ |
| SuperFast | $\approx51,281\text{ Hz}$ |

The PWM resolution $r$ is defined to be the fraction $r=\frac{f_\text{clock}}{f_\text{carrier}}$. Intuitively, it can be understood as the number of different brightness levels that LEDs can take when connected to the shift register. The resolution can be manually set on initialization of a shift register object (it is the second parameter of the constructor). Possible values are $r\in(0,255]$. For the Arduino's PWM pins it is fixed to $r_\text{Arduino}=256$.

Stacking Shift Registers

The library supports serial operation of multiple shift registers. The 74HC595 can be chained as shown in the following circuit diagram. Now, the ShiftRegisterPWM constructor needs to be called with the corresponding number of shift registers. For the circuit diagram above that would be srCount = 2.

ShiftRegisterPWM shiftRegisterPWM(srCount, resolution);

The time it takes to update the shift register output pins increases with the number of stacked shift registers. Therefore it is strongly recommended to decrease the resolution $r$ and $f_\text{clock}$ with an increasing number of shift registers.

Custom Wiring

By default, the shift register must be connected to the Arduino UNO's digital pins 2, 3, and 4.

| Arduino | Shift register | Role |
| --- | --- | --- |
| D2 | DS | Serial data |
| D3 | SH_CP | Serial data transmission clock |
| D4 | ST_CP | Shift register output flip-flop clock |

While other libraries for different purposes offer to manually set the pins by passing them as parameters, the ShiftRegister PWM Library does not offer that for performance reasons. Nevertheless, there is an option to change the pins. For each of the three wires, two macros can be defined which contain (1) the port and (2) a bit selection mask. The following table lists the macros with descriptions and the default values.

| Name | Default | Role |
| --- | --- | --- |
| ShiftRegisterPWM_DATA_PORT | PORTD | Register name of the bit that corresponds to the data pin |
| ShiftRegisterPWM_DATA_MASK | 0B00000100 | Byte that masks the bit that corresponds to the data pin (2) |
| ShiftRegisterPWM_CLOCK_PORT | PORTD | Register name of the bit that corresponds to the serial clock pin |
| ShiftRegisterPWM_CLOCK_MASK | 0B00001000 | Byte that masks the bit that corresponds to the serial clock pin (3) |
| ShiftRegisterPWM_LATCH_PORT | PORTD | Register name of the bit that corresponds to the latch clock pin |
| ShiftRegisterPWM_LATCH_MASK | 0B00010000 | Byte that masks the bit that corresponds to the latch clock pin (4) |

The macros can be overwritten by making a definition prior to including the library. In the following snippet, the latch clock pin is set to be the digital pin 8 (instead of 4).

#define ShiftRegisterPWM_LATCH_PORT PORTB
#define ShiftRegisterPWM_LATCH_MASK 1
#include "ShiftRegisterPWM.h"

Custom Timer

In order to update the digital pins of the shift register with the desired PWM frequency, the library uses timer interrupts. The library registers an interrupt if the function sr.interrupt() is called. Note that this works only for a single shift register object (or multiple in serial). If (1) multiple PWM shift registers shall be used in parallel, (2) an exact frequency is required, or (3) the timer is not available for usage, e.g. because another library is already using it, it is possible to manually configure the library to use another timer.
For calculating the compare-match-register values, I recommend using a timer interrupt calculator tool. The example sketch CustomTimerInterrupt demonstrates the manual timer operation mode. The library registers the timer 1 interrupt service routine (ISR) depending on a macro. In the source code, this is implemented as follows:

#ifndef ShiftRegisterPWM_CUSTOM_INTERRUPT
// Timer 1 interrupt service routine (ISR)
ISR(TIMER1_COMPA_vect) { // function which will be called when an interrupt occurs at timer 1
  cli(); // disable interrupts (in case update method takes too long)
  ShiftRegisterPWM::singleton->update();
  sei(); // re-enable
};
#endif

Therefore, by writing

#define ShiftRegisterPWM_CUSTOM_INTERRUPT

prior to the library import, the redefinition of 'void __vector_11()' error can be prevented and timer 1 is left free for other uses.
Polar or Distance Form of a Straight Line Equation

\(\textbf{Art 10 :} \qquad \boxed{{\text{Polar / Distance form of a line}}}\)

Sometimes, it is very convenient to write the equation of a straight line in polar / distance form. Suppose we know that the line passes through the fixed point \(P(h,\,k)\) and is at an inclination of \(\theta\). For any point \(Q(x,\,y)\) at a distance r from P along this line, we can write the simple relation

\[\boxed{{\frac{{x - h}}{{\cos \theta }} = \frac{{y - k}}{{\sin \theta }} = r}}\]

This is the required equation of the line. The point \(Q(x,\,y),\) at a distance r from P, has the coordinates

\[Q(x,\,y) \equiv (h + r\cos \theta ,\,k + r\sin \theta ).\]

\[\left\{ \begin{array}{l}{\text{Obviously, there will be another point, say }}Q'(x,y),{\text{ at a distance }}r{\text{ from }}P\\{\text{along this line but on the opposite side of }}Q{\text{; thus }}Q'(x,{\rm{ }}y){\text{ will have the }}\\{\text{coordinates }}Q'(x,\,y) \equiv (h - r\cos \theta ,\,\,\,k - r\sin \theta)\end{array} \right\}\]

Example – 15

A line through \(A( - 5,\, - 4)\) meets the lines \(x + 3y + 2 = 0,\,\,2x + y + 4 = 0\) and \(x - y - 5 = 0\) at the points B, C and D respectively. If

\[\begin{align}{\left( {\frac{{15}}{{AB}}} \right)^2} + {\left( {\frac{{10}}{{AC}}} \right)^2} = {\left( {\frac{6}{{AD}}} \right)^2},\end{align}\]

find the equation of the line.

Solution: The figure above roughly sketches the situation described in the question. Let B, C and D be at distances \({r_1},\,{r_2}\) and \({r_3}\) from A along the line \(L = 0,\) whose equation we wish to determine. Assume the inclination of L to be \(\theta .\) Thus, B, C and D have the coordinates (respectively):

\[\begin{align}B \equiv ( - 5 + {r_1}\cos \theta ,\,\,\,\,\, - 4 + {r_1}\sin \theta )\\C \equiv ( - 5 + {r_2}\cos \theta ,\,\,\,\,\, - 4 + {r_2}\sin \theta )\\D \equiv ( - 5 + {r_3}\cos \theta ,\,\,\,\,\, - 4 + {r_3}\sin \theta )\end{align}\]

Since these three points (respectively) satisfy the three given equations, we have:

Point B : \(\begin{align}( - 5 + {r_1}\cos \theta ) + 3( - 4 + {r_1}\sin \theta ) + 2 = 0 \quad \Rightarrow \qquad {r_1} = \frac{{15}}{{\cos \theta + 3\sin \theta }}\end{align}\)

Point C : \(\begin{align}2( - 5 + {r_2}\cos \theta ) + ( - 4 + {r_2}\sin \theta ) + 4 = 0 \quad \Rightarrow \qquad {r_2} = \frac{{10}}{{2\cos \theta + \sin \theta }}\end{align}\)

Point D : \(\begin{align}( - 5 + {r_3}\cos \theta ) - ( - 4 + {r_3}\sin \theta ) - 5 = 0 \quad \Rightarrow \qquad {r_3} = \frac{6}{{\cos \theta - \sin \theta }}\end{align}\)

It is given that

\[\begin{align}&{\left( {\frac{{15}}{{AB}}} \right)^2} + {\left( {\frac{{10}}{{AC}}} \right)^2} = {\left( {\frac{6}{{AD}}} \right)^2}\\ \text{i.e.,}\qquad \qquad \qquad &{\left( {\frac{{15}}{{{r_1}}}} \right)^2} + {\left( {\frac{{10}}{{{r_2}}}} \right)^2} = {\left( {\frac{6}{{{r_3}}}} \right)^2}\\ \Rightarrow \qquad &{(\cos \theta + 3\sin \theta )^2} + {(2\cos \theta + \sin \theta )^2} = {(\cos \theta - \sin \theta )^2}\\ \Rightarrow \qquad &4{\cos ^2}\theta + 9{\sin ^2}\theta + 12\sin \theta \cos \theta = 0\\ \Rightarrow \qquad &{(2\cos \theta + 3\sin \theta )^2} = 0\\ \Rightarrow \qquad &\tan \theta = \frac{{ - 2}}{3}\\ \Rightarrow \qquad &m = \frac{{ - 2}}{3}\end{align}\]

Thus, we obtain the slope of L as \(\begin{align}\frac{{ - 2}}{3}.\end{align}\) The equation of L can now be easily written:

\[\begin{align}&L:y - ( - 4) = \frac{{ - 2}}{3}(x - ( - 5))\\ \Rightarrow \qquad &L:2x + 3y + 22 = 0\end{align}\]
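As a quick numerical sanity check of Example 15 (a short script added here, not part of the original text), one can intersect the line found above with the three given lines and test the stated relation directly:

```python
import numpy as np

def intersect(l1, l2):
    # Each line is given as (a, b, c), meaning a*x + b*y + c = 0.
    A = np.array([l1[:2], l2[:2]], dtype=float)
    rhs = -np.array([l1[2], l2[2]], dtype=float)
    return np.linalg.solve(A, rhs)

A_pt = np.array([-5.0, -4.0])  # the point A(-5, -4)
L = (2, 3, 22)                 # the line 2x + 3y + 22 = 0 found above

B = intersect(L, (1, 3, 2))    # x + 3y + 2 = 0
C = intersect(L, (2, 1, 4))    # 2x + y + 4 = 0
D = intersect(L, (1, -1, -5))  # x - y - 5 = 0

AB, AC, AD = (np.linalg.norm(P - A_pt) for P in (B, C, D))
print((15 / AB) ** 2 + (10 / AC) ** 2)  # ~1.923 (= 25/13)
print((6 / AD) ** 2)                    # ~1.923 (= 25/13)
```

Both printed values agree, confirming that \(2x + 3y + 22 = 0\) satisfies the given relation.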
TRY YOURSELF - I

Q1. A variable straight line is drawn through the intersection of the lines \(\begin{align}\frac{x}{a} + \frac{y}{b} = 1\;\;{\text{and}}\;\;\frac{x}{b} + \frac{y}{a} = 1\end{align}\) and meets the axes in A and B. Show that the locus of the mid-point of AB is \(2xy(a + b) = ab(x + y)\).

Q2. The line \(bx + ay = ab\) cuts the axes in A and B. Another variable line cuts the axes in C and D such that \(OA + OB = OC + OD\), where O is the origin. Prove that the locus of the point of intersection of the lines AD and BC is the line \(x + y = a + b\).

Q3. A point P moves so that the square of its distance from (3, –2) is equal to its distance from the line \(5x - 12y = 13\). Find the locus of P.

Q4. A line intersects the x-axis in A(7, 0) and the y-axis in B(0, –5). A variable line perpendicular to AB intersects the x-axis in P and the y-axis in Q. If AQ and BP intersect in R, find the locus of R.

Q5. If the sum of the distances of a point from two perpendicular lines in a plane is 1, prove that its locus is a square.

Q6. A vertex of an equilateral triangle is (2, 3) and the opposite side is \(x + y = 2\). Find the equations of the other sides.

Q7. A ray of light along the line \(x - 2y - 3 = 0\) is incident upon the mirror-line \(3x - 2y - 5 = 0\). Find the equation of the reflected ray.

Q8. If the vertices of a triangle have integral coordinates, show that it cannot be equilateral.

Q9. Show using coordinate geometry that the angle bisectors of the sides of a triangle are concurrent.

Q10. The sides of a triangle are \(4x + 3y + 7 = 0,\;5x + 12y - 27 = 0\;\;{\text{and}}\;\;3x + 4y + 8 = 0\). By explicitly evaluating the medians of this triangle, show that they are concurrent.

Q11. A rod APB of constant length meets the axes in A and B. If AP = b and PB = a, and the rod slides between the axes, show that the locus of P is \({b^2}{x^2} + {a^2}{y^2} = {a^2}{b^2}\).

Q12. If p is the length of the perpendicular from the origin to the line whose intercepts on the axes are a and b, show that \(\begin{align}\frac{1}{{{p^2}}} = \frac{1}{{{a^2}}} + \frac{1}{{{b^2}}}\end{align}\).

Q13. The lines \(3x + 4y - 8 = 0\) and \(5x + 12y + 3 = 0\) intersect in A. Find the equations of the lines passing through A which intersect the given lines at B and C, such that \(AB = AC\).

Q14. The equal sides AB and AC of an isosceles triangle ABC are produced to the points P and Q such that \(BP \cdot CQ = A{B^2}\). Prove that the line PQ always passes through a fixed point.

Q15. One side of a square is inclined to the x-axis at an angle \(\alpha\) and one of its extremities is at the origin; prove that the equations to its diagonals are \[y(\cos \alpha - \sin \alpha ) = x(\sin \alpha + \cos \alpha )\quad{\text{and}}\quad y(\sin \alpha + \cos \alpha ) + x(\cos \alpha - \sin \alpha ) = a,\] where a is the length of the side of the square.
Complex number

A complex number is a number of the form $z=x+iy$, where $x$ and $y$ are real numbers (cf. Real number) and $i=\def\i{\sqrt{-1}}\i$ is the so-called imaginary unit, that is, a number whose square is equal to $-1$ (in engineering literature, the notation $j=\i$ is also used): $x$ is called the real part of the complex number $z$ and $y$ its imaginary part (written $x=\def\Re{\mathrm{Re}\;}\Re z$, $y=\def\Im{\mathrm{Im}\;}\Im z$). The real numbers can be regarded as special complex numbers, namely those with $y=0$. Complex numbers that are not real, that is, for which $y\ne 0$, are sometimes called imaginary numbers. The complicated historical process of the development of the notion of a complex number is reflected in the above terminology, which is mainly of traditional origin.

Algebraically speaking, a complex number is an element of the (algebraic) extension $\C$ of the field of real numbers $\R$ obtained by the adjunction to the field $\R$ of a root $i$ of the polynomial $X^2+1$. The field $\C$ obtained in this way is called the field of complex numbers or the complex number field. The most important property of the field $\C$ is that it is algebraically closed, that is, any polynomial with coefficients in $\C$ splits into linear factors. The property of being algebraically closed can be expressed in other words by saying that any polynomial of degree $n\ge 1$ with coefficients in $\C$ has at least one root in $\C$ (the d'Alembert–Gauss theorem or fundamental theorem of algebra).

The field $\C$ can be constructed as follows. The elements $z=(x,y)$, $z'=(x',y'),\dots$ or complex numbers, are taken to be the points $z=(x,y)$, $z'=(x',y'),\dots$ of the plane $\R^2$ in Cartesian rectangular coordinates $x$ and $y$, $x'$ and $y',\dots$. Here the sum of two complex numbers $z=(x,y)$ and $z'=(x',y')$ is the complex number $(x+x',y+y')$, that is, $$z+z'=(x,y)+(x',y')=(x+x',y+y'),\label{1}$$ and the product of those complex numbers is the complex number $(xx'-yy',xy'+x'y)$, that is, $$zz'=(x,y)(x',y') = (xx'-yy',xy'+x'y).\label{2}$$ The zero element $0=(0,0)$ is the same as the origin of coordinates, and the complex number $(1,0)$ is the identity of $\C$.

The plane $\R^2$ whose points are identified with the elements of $\C$ is called the complex plane. The real numbers $x,x',\dots$ are identified here with the points $(x,0)$, $(x',0),\dots$ of the $x$-axis which, when referring to the complex plane, is called the real axis. The points $(0,y)=iy$, $(0,y')=iy',\dots$ are situated on the $y$-axis, called the imaginary axis of the complex plane $\C$; numbers of the form $iy,iy',\dots$ are called pure imaginary. The representation of elements $z,z',\dots$ of $\C$, or complex numbers, as points of the complex plane with the rules (1) and (2) is equivalent to the above more widely used form of notating complex numbers: $$z=(x,y)=x+iy,\quad z'=(x',y')=x'+iy',\dots,$$ also called the algebraic or Cartesian form of writing complex numbers. With reference to the algebraic form, the rules (1) and (2) reduce to the simple condition that all operations with complex numbers are carried out as for polynomials, taking into account the property of the imaginary unit: $ii=i^2=-1$.

The complex numbers $z=(x,y)=x+iy$ and $\bar z=(x,-y)=x-iy$ are called conjugate or complex conjugates in the plane $\C$; they are symmetrically situated with respect to the real axis.
The sum and the product of two conjugate complex numbers are the real numbers $$z+\bar z = 2\Re z,\quad z\bar z=|z|^2,$$ where $|z|=r=\sqrt{x^2+y^2}$ is called the modulus or absolute value of $z$. The following inequalities always hold: $$|z|-|z'| \le |z+z'|\le |z|+|z'|.$$ A complex number $z$ is different from 0 if and only if $|z|>0$. The mapping $z\mapsto \bar z$ is an automorphism of the complex plane of order 2 (that is, $z = \bar{\bar z}$) that leaves all points of the real axis fixed. Furthermore, $\overline{z+z'} = \bar z + \bar{z'}$, $\overline{zz'} = \bar{z}\,\bar{z'}$.

The operations of addition and multiplication (1) and (2) are commutative and associative, they are related by the distributive law, and they have the inverse operations subtraction and division (except for division by zero). The latter are expressed in algebraic form as: $$z-z'=(x+iy)-(x'+iy')=(x-x')+i(y-y'),$$ $$\frac{z'}{z} = \frac{x'+iy'}{x+iy} = \frac{z'\bar z}{|z|^2} =\frac{xx'+yy'}{x^2+y^2}+i\frac{y'x-x'y}{x^2+y^2},\quad z\ne0.\label{3}$$ Division of a complex number $z'$ by a complex number $z\ne0$ thus reduces to multiplication of $z'$ by $$\frac{\bar z}{|z|^2} = \frac{x}{x^2+y^2}-i\frac{y}{x^2+y^2}.$$

It is an important question whether the extension $\C$ of the field of reals constructed above, with the rules of operation indicated, is the only possible one or whether essentially different variants are conceivable. The answer is given by the uniqueness theorem: Every (algebraic) extension of the field $\R$ obtained from $\R$ by adjoining a root $i$ of the equation $X^2+1=0$ is isomorphic to $\C$, that is, only the above rules of operation with complex numbers are compatible with the requirement that the root $i$ be algebraically adjoined. This fact, however, does not exclude the existence of interpretations of complex numbers other than as points of the complex plane. The following two interpretations are most frequently employed in applications.

Vector interpretation. A complex number $z=x+iy$ can be identified with the vector $(x,y)$ with coordinates $x$ and $y$ starting from the origin (see Fig.).

Figure: c024140a

In this interpretation, addition and subtraction of complex numbers is carried out according to the rules of addition and subtraction of vectors. However, multiplication and division of complex numbers, which must be performed according to (2) and (3), do not have immediate analogues in vector algebra (see [Sh], [LaSc]). The vector interpretation of complex numbers is immediately applicable, for example, in electrical engineering in the description of alternating sinusoidal currents and voltages.

Matrix interpretation. The complex number $w=u+iv$ can be identified with a $(2\times 2)$-matrix of special type $$w=\begin{pmatrix}\phantom{-}u&v\\ -v&u\end{pmatrix}$$ where the operations of addition, subtraction and multiplication are carried out according to the usual rules of matrix algebra.

By using polar coordinates in the complex plane $\C$, that is, the radius vector $r=|z|$ and polar angle $\def\phi{\varphi}\phi=\arg z$, called here the argument of $z$ (sometimes also called the phase of $z$), one obtains the trigonometric or polar form of a complex number: $$z=r(\cos\phi + i\sin\phi),\label{4}$$ $$r\cos\phi = \Re z,\quad r\sin\phi=\Im z.$$ The argument $\phi=\arg z$ is a many-valued real-valued function of the complex number $z\ne 0$, whose values for a given $z$ differ by integral multiples of $2\pi$; the argument of the complex number $z=0$ is not defined.
One usually takes the principal value of the argument $\phi = \def\Arg{\mathrm{Arg}} \Arg z$, defined by the additional condition $-\pi < \Arg z \le \pi$. The Euler formulas $e^{\pm i\phi} = \cos\phi\pm i\sin\phi$ transform the trigonometric form (4) into the exponential form of a complex number: $$z=re^{i\phi}.\label{5}$$

The forms (4) and (5) are particularly suitable for carrying out multiplication and division of complex numbers: $$zz'=rr'[\cos(\phi+\phi')+i\sin(\phi+\phi')]=rr'e^{i(\phi+\phi')},$$ $$\frac{z}{z'}=\frac{r}{r'}[\cos(\phi-\phi')+i\sin(\phi-\phi')] =\frac{r}{r'}e^{i(\phi-\phi')},\quad r'>0.$$ Under multiplication (or division) of complex numbers the moduli are multiplied (or divided) and the arguments are added (or subtracted). Raising to a power or extracting a root is carried out according to the so-called de Moivre formulas: $$z^n = r^n(\cos n\phi + i\sin n\phi) = r^n e^{in\phi},$$ $$z^{1/n} = r^{1/n}\Big(\cos\frac{\phi+2k\pi}{n}+i\sin\frac{\phi+2k\pi}{n}\Big) =r^{1/n}e^{i(\phi+2k\pi)/n},$$ $$k=0,\dots,n-1,$$ where the first of these is also applicable for negative integer exponents $n$.

Geometrically, multiplication of a complex number $z$ by a complex number $z'=r'e^{i\phi'}$ reduces to rotating the vector $z$ over the angle $\phi'$ (anti-clockwise if $\phi'>0$) and subsequently multiplying its length by $|z'|=r'$; in particular, multiplication by a complex number $z'=e^{i\phi'}$, which has modulus one, is merely rotation over the angle $\phi'$. Thus, complex numbers can be interpreted as operators of a special type (affinors, cf. Affinor). In this connection, the mixed vector-matrix interpretation of multiplication of complex numbers is sometimes useful: $$(x,y)\begin{pmatrix}\phantom{-}u&v\\ -v&u\end{pmatrix}=(xu-yv,xv+yu),$$ in which the multiplicand is treated as a matrix-vector and the multiplier as a matrix-operator.

The bijection $(x,y)\mapsto x+iy$ induces on the field $\C$ the topology of the $2$-dimensional real vector space $\R^2$; this topology is compatible with the field structure of $\C$ and thus $\C$ is a topological field. The modulus $|z|$ is the Euclidean norm of the complex number $z=(x,y)$, and $\C$ endowed with this norm is a complex one-dimensional Euclidean space, also called the complex $z$-plane. The topological product $\C^n=\C\times\cdots\times\C$ ($n$ times, $n\ge 1$) is a complex $n$-dimensional Euclidean space.

For a satisfactory analysis of functions it is usually necessary to consider their behaviour in the complex domain. This is due to the fact that $\C$ is algebraically closed. Even the behaviour of such elementary functions as $z^n$, $\cos z$, $\sin z$, $e^z$ can be properly understood only when they are regarded as functions of a complex variable (see Analytic function).

Apparently, imaginary quantities first occurred in the celebrated work The great art, or the rules of algebra by G. Cardano, 1545, who regarded them as useless and unsuitable for applications. R. Bombelli (1572) was the first to realize the value of the use of imaginary quantities, in particular for the solution of the cubic equation in the so-called irreducible case (when the real roots are expressed in terms of cube roots of imaginary quantities, cf. Cardano formula). He gave some of the simplest rules of operation with complex numbers. In general, expressions of the form $a+b\i$, $b\ne 0$, appearing in the solution of quadratic and cubic equations were called "imaginary" in the 16th century and 17th century.
However, even for many of the great scholars of the 17th century, the algebraic and geometric nature of imaginary quantities was unclear and even mystical. It is known, for example, that I. Newton did not include imaginary quantities within the notion of number, and that G. Leibniz said that "complex numbers are a fine and wonderful refuge of the divine spirit, as if it were an amphibian of existence and non-existence". The problem of expressing the $n$-th roots of a given number was mainly solved in the papers of A. de Moivre (1707, 1724) and R. Cotes (1722). The symbol $i=\i$ was proposed by L. Euler (1777, published 1794). It was he who in 1751 asserted that the field $\C$ is algebraically closed; J. d'Alembert (1747) came to the same conclusion. The first rigorous proof of this fact is due to C.F. Gauss (1799), who introduced the term "complex number" in 1831. The complete geometric interpretation of complex numbers and operations on them appeared first in the work of C. Wessel (1799). The geometric representation of complex numbers, sometimes called the "Argand diagram", came into use after the publication in 1806 and 1814 of papers by J.R. Argand, who rediscovered, largely independently, the findings of Wessel.

The purely arithmetic theory of complex numbers as pairs of real numbers was introduced by W. Hamilton (1837). He found a generalization of complex numbers, namely the quaternions (cf. Quaternion), which form a non-commutative algebra. More generally, it was proved at the end of the 19th century that any extension of the notion of number beyond the complex numbers requires sacrificing some property of the usual operations (primarily commutativity). See also Hypercomplex number; Double and dual numbers; Cayley numbers.

References

[Bo] N. Bourbaki, "Elements of mathematics. General topology", Addison-Wesley (1966) (Translated from French) MR0205211 MR0205210 Zbl 0301.54002 Zbl 0301.54001 Zbl 0145.19302

[Ha] G.H. Hardy, "A course of pure mathematics", Cambridge Univ. Press (1952) MR0049254 Zbl 0047.28304

[HuCo] A. Hurwitz, R. Courant, "Vorlesungen über allgemeine Funktionentheorie und elliptische Funktionen", Springer (1944) MR0011320

[Ko] A.I. Kostrikin, "Introduction to algebra", Springer (1982) (Translated from Russian) MR0661256 Zbl 0482.00001

[Ku] A.G. Kurosh, "Higher algebra", MIR (1972) (Translated from Russian) Zbl 0237.13001

[LaSc] M.A. Lavrent'ev, B.V. Shabat, "Problems in hydrodynamics and their mathematical models", Moscow (1973) (In Russian)

[Ma] A.I. Markushevich, "Theory of functions of a complex variable", 1–2, Chelsea (1977) (Translated from Russian) MR0444912 Zbl 0357.30002

[Sh] B.V. Shabat, "Introduction to complex analysis", 1, Moscow (1976) (In Russian)

How to Cite This Entry: Complex number. Encyclopedia of Mathematics. URL: http://www.encyclopediaofmath.org/index.php?title=Complex_number&oldid=35191
Malonic acid is used in the manufacture of barbiturates (sleeping pills). The composition of the acid is 34.6% C, 3.9% H, and 61.5% O. What is malonic acid’s empirical formula?
Solution: Assume a sample size of 100 g to make the math easier. Thus, we will assume there is a 100 g sample of malonic acid.
Now, we calculate the mass in grams that each element contributes to the sample:
Grams of carbon: 100 g \times 34.6% = 34.6 g
Grams of hydrogen: 100 g \times 3.9% = 3.9 g
Grams of oxygen: 100 g \times 61.5% = 61.5 g
Next, we convert each of these masses to moles by dividing by its molar mass.
\text{Mol C} = 34.6\,g\,\text{C}\times \frac{1\text{ mol C}}{12.01\, g\,\text{C}} = 2.881\text{ mol}
\text{Mol H} = 3.9\,g\,\text{H}\times \frac{1\text{ mol H}}{1.008\, g\,\text{H}} = 3.87\text{ mol}
\text{Mol O} = 61.5\,g\,\text{O}\times \frac{1\text{ mol O}}{15.999\, g\,\text{O}} = 3.844\text{ mol}
To figure out the subscripts of the empirical formula, divide each value by the smallest mole number we found, which was 2.881:
C= \frac{2.881}{2.881}=1
H=\frac{3.87}{2.881}=1.34\approx \frac{4}{3}
O=\frac{3.844}{2.881}= 1.334\approx \frac{4}{3}
Notice we got fractional values. To get whole numbers, multiply each value by 3, the common denominator of the fractions:
C= 1\times 3=3
H=\frac{4}{3}\times 3=4
O=\frac{4}{3}\times 3=4
Thus, the empirical formula is C_3H_4O_4
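The same bookkeeping is easy to automate. Below is a minimal Python sketch of the procedure used above (assume a 100 g sample, convert mass percentages to moles, divide by the smallest mole count, then clear the fractions); the molar-mass values and the small-integer multiplier search are illustrative choices, not part of the original solution.

molar_mass = {"C": 12.01, "H": 1.008, "O": 15.999}   # g/mol
percent = {"C": 34.6, "H": 3.9, "O": 61.5}           # mass percent
moles = {el: percent[el] / molar_mass[el] for el in percent}   # assumes a 100 g sample
smallest = min(moles.values())
ratios = {el: n / smallest for el, n in moles.items()}
# search for a small whole-number multiplier that clears the fractions
formula = None
for mult in range(1, 7):
    scaled = {el: r * mult for el, r in ratios.items()}
    if all(abs(v - round(v)) < 0.05 for v in scaled.values()):
        formula = {el: round(v) for el, v in scaled.items()}
        break
print(formula)   # {'C': 3, 'H': 4, 'O': 4}, i.e. C3H4O4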
It would be extremely unlikely. A typical bacteria is about 1 µm diameter - they come in all kinds of shapes, but let's assume a "spherical bacteria" (this is the microscopic equivalent of the spherical cow). The drag force depends on the Reynolds number. Recall that $$\rm{Re} = \frac{u\ell}{\nu}$$ Where $u$ is the velocity, $\ell$ the "typical length scale" (1 µm), and $\nu$ the kinematic viscosity (about $8.9\cdot 10^{-7} m^2/s$ for water at 25°C). For such a small object, the Reynolds number will be very small and flow around the bacteria will most likely be laminar. This means that the drag is given by Stokes' equation: $$F = -6\pi \eta r \mathbf{v}$$ Here, $\eta$ is the dynamic viscosity ($8.9\cdot 10^{-4} Pa\cdot s$) - so $F=- 8.4\cdot 10^{-9} \mathbf{v} ~\rm{N}$. With a mass of $5\cdot 10^{-16}$ kg, the slightest difference in velocity (between the bacteria and the liquid) will immediately result in an enormous acceleration. In other words - there is no way the bacteria can "swim upstream". That leaves the question - could there be a water flow in the opposite direction that could entrain the bacteria? It would be quite hard to deliberately design a nozzle that could do that - you can safely assume that this is not happening here. The final question you can ask: what about the boundary layer? With viscous flow, the liquid at the boundary is stationary ... could the bacteria "crawl along" that? Even if we assume a parabolic velocity profile, there is a significant velocity gradient at the wall of the nozzle (where the velocity is lowest). Quite apart from the fact that the flow is most likely turbulent, given the small diameter of the nozzle it seems likely that there is significant water velocity even at 1 µm from the wall (and that's how far the bacteria would "stick out" into the flow). That is almost certainly enough, given the above calculation, to prevent "crawling upstream along the wall". I can think of lots of things that can go wrong at the dentist's office - I don't see how this can be one of them. But I have been wrong before.
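For a rough feel for the numbers quoted in this answer, here is a small Python sketch; the 0.01 m/s relative (slip) velocity is an assumed illustrative value, not something stated above, while the other figures are the ones used in the answer.

import math
r = 0.5e-6       # radius in m (1 µm diameter "spherical bacterium")
ell = 1e-6       # typical length scale in m
nu = 8.9e-7      # kinematic viscosity of water at 25 °C, m^2/s
eta = 8.9e-4     # dynamic viscosity of water, Pa·s
m = 5e-16        # mass in kg, as in the answer
u = 0.01         # assumed relative speed between cell and water, m/s
Re = u * ell / nu                      # Reynolds number: ~0.01, creeping flow
stokes_coeff = 6 * math.pi * eta * r   # ~8.4e-9 N·s/m, matching the value in the answer
F = stokes_coeff * u                   # Stokes drag at this slip speed
a = F / m                              # resulting acceleration, ~1.7e5 m/s^2
print(Re, stokes_coeff, F, a)

Even at a centimetre per second of slip the implied acceleration is enormous compared with anything the cell can generate, which is the point made above.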
Huge cardinal Huge cardinals (and their variants) were introduced by Kenneth Kunen in 1972 as a very large cardinal axiom. Kenneth Kunen first used them to prove that the consistency of the existence of a huge cardinal implies the consistency of $\text{ZFC}$+"there is a $\omega_2$-saturated $\sigma$-ideal on $\omega_1$". It is now known that only a Woodin cardinal is needed for this result. However, the consistency of the existence of an $\omega_2$-complete $\omega_3$-saturated $\sigma$-ideal on $\omega_2$, as far as the set theory world is concerned, still requires an almost huge cardinal. [1] Contents 1 Definitions 2 References 3 Consistency strength and size 4 Relative consistency results 5 In set theoretic geology 6 References Definitions Their formulation is similar to that of the formulation of superstrong cardinals. A huge cardinal is to a supercompact cardinal as a superstrong cardinal is to a strong cardinal, more precisely. The definition is part of a generalized phenomenon known as the "double helix", in which for some large cardinal properties n-$P_0$ and n-$P_1$, n-$P_0$ has less consistency strength than n-$P_1$, which has less consistency strength than (n+1)-$P_0$, and so on. This phenomenon is seen only around the n-fold variants as of modern set theoretic concerns. [2] Although they are very large, there is a first-order definition which is equivalent to n-hugeness, so the $\theta$-th n-huge cardinal is first-order definable whenever $\theta$ is first-order definable. This definition can be seen as a (very strong) strengthening of the first-order definition of measurability. Elementary embedding definitions $\kappa$ is almost n-huge with target $\lambda$iff $\lambda=j^n(\kappa)$ and $M$ is closed under all of its sequences of length less than $\lambda$ (that is, $M^{<\lambda}\subseteq M$). $\kappa$ is n-huge with target $\lambda$iff $\lambda=j^n(\kappa)$ and $M$ is closed under all of its sequences of length $\lambda$ ($M^\lambda\subseteq M$). $\kappa$ is almost n-hugeiff it is almost n-huge with target $\lambda$ for some $\lambda$. $\kappa$ is n-hugeiff it is n-huge with target $\lambda$ for some $\lambda$. $\kappa$ is super almost n-hugeiff for every $\gamma$, there is some $\lambda>\gamma$ for which $\kappa$ is almost n-huge with target $\lambda$ (that is, the target can be made arbitrarily large). $\kappa$ is super n-hugeiff for every $\gamma$, there is some $\lambda>\gamma$ for which $\kappa$ is n-huge with target $\lambda$. $\kappa$ is almost huge, huge, super almost huge, and superhugeiff it is almost 1-huge, 1-huge, etc. respectively. Ultrahuge cardinals A cardinal $\kappa$ is $\lambda$-ultrahuge for $\lambda>\kappa$ if there exists a nontrivial elementary embedding $j:V\to M$ for some transitive class $M$ such that $j(\kappa)>\lambda$, $M^{j(\kappa)}\subseteq M$ and $V_{j(\lambda)}\subseteq M$. A cardinal is ultrahuge if it is $\lambda$-ultrahuge for all $\lambda\geq\kappa$. [1] Notice how similar this definition is to the alternative characterization of extendible cardinals. Furthermore, this definition can be extended in the obvious way to define $\lambda$-ultra n-hugeness and ultra n-hugeness, as well as the " almost" variants. Hyperhuge cardinals A cardinal $\kappa$ is $\lambda$-hyperhuge for $\lambda>\kappa$ if there exists a nontrivial elementary embedding $j:V\to M$ for some inner model $M$ such that $\mathrm{crit}(j) = \kappa$, $j(\kappa)>\lambda$ and $^{j(\lambda)}M\subseteq M$. 
A cardinal is hyperhuge if it is $\lambda$-hyperhuge for all $\lambda>\kappa$.[3, 4] Huge* cardinals A cardinal $κ$ is $n$-huge* if for some $α > κ$, $\kappa$ is the critical point of an elementary embedding $j : V_α → V_β$ such that $j^n (κ) < α$.[5] Hugeness* variant is formulated in a way allowing for a virtual variant consistent with $V=L$: A cardinal $κ$ is virtually $n$-huge* if for some $α > κ$, in a set-forcing extension, $\kappa$ is the critical point of an elementary embedding $j : V_α → V_β$ such that $j^n(κ) < α$.[5] Ultrafilter definition The first-order definition of n-huge is somewhat similar to measurability. Specifically, $\kappa$ is measurable iff there is a nonprincipal $\kappa$-complete ultrafilter, $U$, over $\kappa$. A cardinal $\kappa$ is n-huge with target $\lambda$ iff there is a normal $\kappa$-complete ultrafilter, $U$, over $\mathcal{P}(\lambda)$, and cardinals $\kappa=\lambda_0<\lambda_1<\lambda_2...<\lambda_{n-1}<\lambda_n=\lambda$ such that: $$\forall i<n(\{x\subseteq\lambda:\text{order-type}(x\cap\lambda_{i+1})=\lambda_i\}\in U)$$ Where $\text{order-type}(X)$ is the order-type of the poset $(X,\in)$. [1] $\kappa$ is then super n-huge if for all ordinals $\theta$ there is a $\lambda>\theta$ such that $\kappa$ is n-huge with target $\lambda$, i.e. $\lambda_n$ can be made arbitrarily large. If $j:V\to M$ is such that $M^{j^n(\kappa)}\subseteq M$ (i.e. $j$ witnesses n-hugeness) then there is a ultrafilter $U$ as above such that, for all $k\leq n$, $\lambda_k = j^k(\kappa)$, i.e. it is not only $\lambda=\lambda_n$ that is an iterate of $\kappa$ by $j$; all members of the $\lambda_k$ sequence are. As an example, $\kappa$ is 1-huge with target $\lambda$ iff there is a normal $\kappa$-complete ultrafilter, $U$, over $\mathcal{P}(\lambda)$ such that $\{x\subseteq\lambda:\text{order-type}(x)=\kappa\}\in U$. The reason why this would be so surprising is that every set $x\subseteq\lambda$ with every set of order-type $\kappa$ would be in the ultrafilter; that is, every set containing $\{x\subseteq\lambda:\text{order-type}(x)=\kappa\}$ as a subset is considered a "large set." As for hyperhugeness, the following are equivalent:[4] $κ$ is $λ$-hyperhuge; $μ > λ$ and a normal, fine, κ-complete ultrafilter exists on $[μ]^λ_{∗κ} := \{s ⊂ μ : |s| = λ, |s ∩ κ| ∈ κ, \mathrm{otp}(s ∩ λ) < κ\}$; $\mathbb{L}_{κ,κ}$ is $[μ]^λ_{∗κ}$-$κ$-compact for type omission. Coherent sequence characterization of almost hugeness $C^{(n)}$-$m$-huge cardinals (this section from [6]) $κ$ is $C^{(n)}$-$m$-huge iff it is $m$-huge and $j(κ) ∈ C^{(n)}$ ($C^{(n)}$-huge if it is huge and $j(κ) ∈ C^{(n)}$). Equivalent definition in terms of normal measures: κ is $C^{(n)}$-$m$-huge iff it is uncountable and there is a $κ$-complete normal ultrafilter $U$ over some $P(λ)$ and cardinals $κ = λ_0 < λ_1 < . . . < λ_m = λ$, with $λ_1 ∈ C (n)$ and such that for each $i < m$, $\{x ∈ P(λ) : ot(x ∩ λ i+1 ) = λ i \} ∈ U$. It follows that “$κ$ is $C^{(n)}$-$m$-huge” is $Σ_{n+1}$ expressible. Every huge cardinal is $C^{(1)}$-huge. The first $C^{(n)}$-$m$-huge cardinal is not $C^{(n+1)}$-$m$-huge, for all $m$ and $n$ greater or equal than $1$. For suppose $κ$ is the least $C^{(n)}$-$m$-huge cardinal and $j : V → M$ witnesses that $κ$ is $C^{(n+1)}$-$m$-huge. Then since “x is $C^{(n)}$-$m$-huge” is $Σ_{n+1}$ expressible, we have $V_{j(κ)} \models$ “$κ$ is $C^{(n)}$-$m$-huge”. Hence, since $(V_{j(κ)})^M = V_{j(κ)}$, $M \models$ “$∃_{δ < j(κ)}(V_{j(κ)} \models$ “δ is huge”$)$”. 
By elementarity, there is a $C^{(n)}$-$m$-huge cardinal less than $κ$ in $V$ – contradiction. Assuming $\mathrm{I3}(κ, δ)$, if $δ$ is a limit cardinal (instead of a successor of a limit cardinal – Kunen’s Theorem excludes other cases), it is equal to $sup\{j^m(κ) : m ∈ ω\}$ where $j$ is the elementary embedding. Then $κ$ and $j^m(κ)$ are $C^{(n)}$-$m$-huge (inter alia) in $V_δ$, for all $n$ and $m$. If $κ$ is $C^{(n)}$-$\mathrm{I3}$, then it is $C^{(n)}$-$m$-huge, for all $m$, and there is a normal ultrafilter $\mathcal{U}$ over $κ$ such that $\{α < κ : α$ is $C^{(n)}$-$m$-huge for every $m\} ∈ \mathcal{U}$. References Kanamori, Akihiro. Second, Springer-Verlag, Berlin, 2009. (Large cardinals in set theory from their beginnings, Paperback reprint of the 2003 edition) www bibtex The higher infinite. Kentaro, Sato. Double helix in large large cardinals and iteration ofelementary embeddings., 2007. www bibtex Usuba, Toshimichi. The downward directed grounds hypothesis and very large cardinals.Journal of Mathematical Logic 17(02):1750009, 2017. arχiv DOI bibtex Boney, Will. Model Theoretic Characterizations of Large Cardinals.arχiv bibtex Gitman, Victoria and Shindler, Ralf. Virtual large cardinals.www bibtex Bagaria, Joan. $C^{(n)}$-cardinals.Archive for Mathematical Logic 51(3--4):213--240, 2012. www DOI bibtex Consistency strength and size Hugeness exhibits a phenomenon associated with similarly defined large cardinals (the n-fold variants) known as the double helix. This phenomenon is when for one n-fold variant, letting a cardinal be called n-$P_0$ iff it has the property, and another variant, n-$P_1$, n-$P_0$ is weaker than n-$P_1$, which is weaker than (n+1)-$P_0$. [2] In the consistency strength hierarchy, here is where these lay (top being weakest): measurable = 0-superstrong = 0-huge n-superstrong n-fold supercompact (n+1)-fold strong, n-fold extendible (n+1)-fold Woodin, n-fold Vopěnka (n+1)-fold Shelah almost n-huge super almost n-huge n-huge super n-huge ultra n-huge (n+1)-superstrong All huge variants lay at the top of the double helix restricted to some natural number n, although each are bested by I3 cardinals (the critical points of the I3 elementary embeddings). In fact, every I3 is preceeded by a stationary set of n-huge cardinals, for all n. [1] Similarly, every huge cardinal $\kappa$ is almost huge, and there is a normal measure over $\kappa$ which contains every almost huge cardinal $\lambda<\kappa$. Every superhuge cardinal $\kappa$ is extendible and there is a normal measure over $\kappa$ which contains every extendible cardinal $\lambda<\kappa$. Every (n+1)-huge cardinal $\kappa$ has a normal measure which contains every cardinal $\lambda$ such that $V_\kappa\models$"$\lambda$ is super n-huge" [1], in fact it contains every cardinal $\lambda$ such that $V_\kappa\models$"$\lambda$ is ultra n-huge". Every n-huge cardinal is m-huge for every m<n. Similarly with almost n-hugeness, super n-hugeness, and super almost n-hugeness. Every almost huge cardinal is Vopěnka (therefore the consistency of the existence of an almost-huge cardinal implies the consistency of Vopěnka's principle). [1] Every ultra n-huge is super n-huge and a stationary limit of super n-huge cardinals. Every super almost (n+1)-huge is ultra n-huge and a stationary limit of ultra n-huge cardinals. In terms of size, however, the least n-huge cardinal is smaller than the least supercompact cardinal (assuming both exist). 
[1] This is because n-huge cardinals have upward reflection properties, while supercompacts have downward reflection properties. Thus for any $\kappa$ which is supercompact and has an n-huge cardinal above it, $\kappa$ "reflects downward" that n-huge cardinal: there are $\kappa$-many n-huge cardinals below $\kappa$. On the other hand, the least super n-huge cardinals have both upward and downward reflection properties, and are all much larger than the least supercompact cardinal. It is notable that, while almost 2-huge cardinals have higher consistency strength than superhuge cardinals, the least almost 2-huge is much smaller than the least super almost huge. While not every $n$-huge cardinal is strong, if $\kappa$ is almost $n$-huge with targets $\lambda_1,\lambda_2...\lambda_n$, then $\kappa$ is $\lambda_n$-strong as witnessed by the generated $j:V\prec M$. This is because $j^n(\kappa)=\lambda_n$ is measurable and therefore $\beth_{\lambda_n}=\lambda_n$ and so $V_{\lambda_n}=H_{\lambda_n}$ and because $M^{<\lambda_n}\subset M$, $H_\theta\subset M$ for each $\theta<\lambda_n$ and so $\cup\{H_\theta:\theta<\lambda_n\} = \cup\{V_\theta:\theta<\lambda_n\} = V_{\lambda_n}\subset M$. Every almost $n$-huge cardinal with targets $\lambda_1,\lambda_2...\lambda_n$ is also $\theta$-supercompact for each $\theta<\lambda_n$, and every $n$-huge cardinal with targets $\lambda_1,\lambda_2...\lambda_n$ is also $\lambda_n$-supercompact. An $n$-huge* cardinal is an $n$-huge limit of $n$-huge cardinals. Every $n + 1$-huge cardinal is $n$-huge*.[5] As for virtually $n$-huge*:[5] If $κ$ is virtually huge*, then $V_κ$ is a model of proper class many virtually extendible cardinals. A virtually $n+1$-huge* cardinal is a limit of virtually $n$-huge* cardinals. A virtually $n$-huge* cardinal is an $n+1$-iterable limit of $n+1$-iterable cardinals. If $κ$ is $n+2$-iterable, then $V_κ$ is a model of proper class many virtually $n$-huge* cardinals. Every virtually rank-into-rank cardinal is a virtually $n$-huge* limit of virtually $n$-huge* cardinals for every $n < ω$. The $\omega$-huge cardinals A cardinal $\kappa$ is almost $\omega$-huge iff there is some transitive model $M$ and an elementary embedding $j:V\prec M$ with critical point $\kappa$ such that $M^{<\lambda}\subset M$ where $\lambda$ is the smallest cardinal above $\kappa$ such that $j(\lambda)=\lambda$. Similarly, $\kappa$ is $\omega$-huge iff the model $M$ can be required to have $M^\lambda\subset M$. Sadly, $\omega$-huge cardinals are inconsistent with ZFC by a version of Kunen's inconsistency theorem. Now, $\omega$-hugeness is used to describe critical points of I1 embeddings. Relative consistency results Hugeness of $\omega_1$ In [2] it is shown that if $\text{ZFC +}$ "there is a huge cardinal" is consistent then so is $\text{ZF +}$ "$\omega_1$ is a huge cardinal" (with the ultrafilter characterization of hugeness). Generalizations of Chang's conjecture Cardinal arithmetic in $\text{ZF}$ If there is an almost huge cardinal then there is a model of $\text{ZF+}\neg\text{AC}$ in which every successor cardinal is a Ramsey cardinal. It follows that (1) for all inner models $W$ of $\text{ZFC}$ and every singular cardinal $\kappa$, one has $\kappa^{+W} < \kappa^+$ and that (2) for all ordinal $\alpha$ there is no injection $\aleph_{\alpha+1}\to 2^{\aleph_\alpha}$. This in turn imply the failure of the square principle at every infinite cardinal (and consequently $\text{AD}^{L(\mathbb{R})}$, see determinacy). 
[3] In set theoretic geology If $\kappa$ is hyperhuge, then $V$ has $<\kappa$ many grounds (so the mantle is a ground itself).[3] This result has been strengthened to extendible cardinals.[7] On the other hand, it is consistent that there is a supercompact cardinal and class many grounds of $V$ (because of the indestructibility properties of supercompactness).[3] References Kanamori, Akihiro. Second, Springer-Verlag, Berlin, 2009. (Large cardinals in set theory from their beginnings, Paperback reprint of the 2003 edition) www bibtex The higher infinite. Kentaro, Sato. Double helix in large large cardinals and iteration of elementary embeddings., 2007. www bibtex Usuba, Toshimichi. The downward directed grounds hypothesis and very large cardinals. Journal of Mathematical Logic 17(02):1750009, 2017. arχiv DOI bibtex Boney, Will. Model Theoretic Characterizations of Large Cardinals. arχiv bibtex Gitman, Victoria and Shindler, Ralf. Virtual large cardinals. www bibtex Bagaria, Joan. $C^{(n)}$-cardinals. Archive for Mathematical Logic 51(3--4):213--240, 2012. www DOI bibtex
We describe techniques for constructing models of size continuum in ω steps by simultaneously building a perfect set of enmeshed countable Henkin sets. Such models have perfect, asymptotically similar subsets. We survey applications involving Borel models, atomic models, two-cardinal transfers and models respecting various closure relations. Low energy and protein intakes have been associated with an increased risk of malnutrition in outpatients with chronic obstructive pulmonary disease (COPD). We aimed to assess the energy and protein intakes of hospitalised COPD patients according to nutritional risk status and requirements, and the relative contribution from meals, snacks, drinks and oral nutritional supplements (ONS), and to examine whether either energy or protein intake predicts outcomes. Subjects were COPD patients (n 99) admitted to Landspitali University Hospital in 1 year (March 2015–March 2016). Patients were screened for nutritional risk using a validated screening tool, and energy and protein intake for 3 d, 1–5 d after admission to the hospital, was estimated using a validated plate diagram sheet. The percentage of patients reaching energy and protein intake ≥75 % of requirements was on average 59 and 37 %, respectively. Malnourished patients consumed less at mealtimes and more from ONS than lower-risk patients, resulting in no difference in total energy and protein intakes between groups. No clear associations between energy or protein intake and outcomes were found, although the association between energy intake, as percentage of requirement, and mortality at 12 months of follow-up was of borderline significance (OR 0·12; 95 % CI 0·01, 1·15; P=0·066). Energy and protein intakes during hospitalisation in the study population failed to meet requirements. Further studies are needed on how to increase energy and protein intakes during hospitalisation and after discharge and to assess whether higher intake in relation to requirement of hospitalised COPD patients results in better outcomes. UBV observations, plus a few in R and I, were obtained during the 1984 eclipse of RZ Ophiuchi. Bolometric corrections and temperature calibration, applied to the magnitudes and colours of the stars, were used to derive the ratio of the stellar radii, and a light curve solution was obtained with this parameter fixed. Neither component fills its Roche lobe, but the system may be at a late stage of case C mass transfer. We introduce the concept of a locally finite abstract elementary class and develop the theory of disjoint$\left( { \le \lambda ,k} \right)$-amalgamation) for such classes. 
From this we find a family of complete${L_{{\omega _1},\omega }}$sentences${\phi _r}$that a) homogeneously characterizes${\aleph _r}$(improving results of Hjorth [11] and Laskowski–Shelah [13] and answering a question of [21]), while b) the${\phi _r}$provide the first examples of a class of models of a complete sentence in${L_{{\omega _1},\omega }}$where the spectrum of cardinals in which amalgamation holds is other that none or all. We introduce the notion of pseudoalgebraicity to study atomic models of first order theories (equivalently models of a complete sentence of${L_{{\omega _1},\omega }}$). Theorem: Let T be any complete first-order theory in a countable language with an atomic model. If the pseudominimal types are not dense, then there are 2ℵ0 pairwise nonisomorphic atomic models of T, each of size ℵ1. NGC 7027 is justifiably THE template spectrum for PNe. Its vast range of emission species – from molecular and neutral to ions with ionization potential > 120eV – its high surface brightness and accessibiliy for northern observatories make it the PN laboratory of choice. However the quality of the spectra from the UV to the IR is mixed, many line fluxes and identifications still remaining unchecked from photographic or image tube spectra. Very deep spectra of NGC 7027 (emission line strengths <10-4 of Hβ) in the 0.65 to 1.05μm region (Baluteau et al. 1995) showed the presence of many faint emission lines. Pequignot & Baluteau (1994) showed that heavy elements from the 4th, 5th and 6th rows of the Periodic Table have much higher abundances than Solar, confirming the synthesis of neutron capture elements in low mass stars and providing new constraints on stellar evolution theory. We report the direct detection of cyclic diameter variations in the Mira variable χ Cygni. Interferometric observations made between 1997 July and 1998 September, using the Cambridge Optical Aperture Synthesis Telescope (COAST) indicate periodic changes in the apparent angular diameter with amplitude 45 per-cent of the smallest value. The measurements were made in a 50 nm bandpass centred on 905 nm, which is only moderately contaminated by molecular absorption features. To assess the effects of atmospheric stratification on the apparent diameter measured in this band, we have also measured near-infrared diameters for a sample of five Miras, in both the J-band (1.3 μm) and Wing's (1971) 1.04 μm band, which is expected to isolate essentially pure continuum emission. We present J-band visibility curves which indicate that the intensity profiles of the stars in the sample differ greatly from each other. We have conducted a survey of the Lyα forest in the redshift domain 2.15 < z < 3.37 in front of nine QSOs within a 1o field to probe spatial structure along planes perpendicular to the line-of-sight. We find evidence for correlations of the Lyα absorption line wavelengths in the whole redshift range, and, at z > 2.8, of their equivalent widths. Such a correlation is consistent with the emerging picture that Lyα lines arise in filaments or large, flattened structures. 
In primates, the cortex adjoining the rostral border of V2 has been variously interpreted as belonging to a single visual area, V3, with dorsal V3 (V3d) representing the lower visual quadrant and ventral V3 (V3v) representing the upper visual quadrant, V3d and V3v constituting separate, incomplete visual areas, V3d and ventral posterior (VP), or V3d being divided into several visual areas, including a dorsomedial (DM) visual area, a medial visual area (M), and dorsal extension of VP (or VLP). In our view, the evidence from V1 connections strongly supports the contention that V3v and V3d are parts of a single visual area, V3, and that DM is a separate visual area along the rostral border of V3d. In addition, the retinotopy revealed by V1 connection patterns, microelectrode mapping, optical imaging mapping, and functional magnetic resonance imaging (fmri) mapping indicates that much of the proposed territory of V3d corresponds to V3. Yet, other evidence from microelectrode mapping and anatomical connection patterns supports the possibility of an upper quadrant representation along the rostral border of the middle of dorsal V2 (V2d), interpreted as part of DM or DM plus DI, and along the midline end of V2d, interpreted as the visual area M. While the data supporting these different interpretations appear contradictory, they also seem, to some extent, valid. We suggest that V3d may have a gap in its middle, possibly representing part of the upper visual quadrant that is not part of DM. In addition, another visual area, M, is likely located at the DM tip of V3d. There is no evidence for a similar disruption of V3v. For the present, we favor continuing the traditional concept of V3 with the possible modification of a gap in V3d in at least some primates.
The affine evaluation map is a surjective homomorphism from the quantumtoroidal ${\mathfrak {gl}}_n$ algebra ${\mathcal E}'_n(q_1,q_2,q_3)$ to thequantum affine algebra $U'_q\widehat{\mathfrak {gl}}_n$ at level $\kappa$completed with respect to the homogeneous grading, where $q_2=q^2$ and$q_3^n=\kappa^2$. We discuss ${\mathcal E}'_n(q_1,q_2,q_3)$ evaluation modules. We give highestweights of evaluation highest weight modules. We also obtain the decompositionof the evaluation Wakimoto module with respect to a Gelfand-Zeitlin typesubalgebra of a completion of ${\mathcal E}'_n(q_1,q_2,q_3)$, which describes adeformation of the coset theory $\widehat{\mathfrak {gl}}_n/\widehat{\mathfrak{gl}}_{n-1}$. We study plane partitions satisfying condition $a_{n+1,m+1}=0$ (thiscondition is called "pit") and asymptotic conditions along three coordinateaxes. We find the formulas for generating function of such plane partitions. Such plane partitions label the basis vectors in certain representations ofquantum toroidal $\mathfrak{gl}_1$ algebra, therefore our formulas can beinterpreted as the characters of these representations. The resulting formulasresemble formulas for characters of tensor representations of Lie superalgebra$\mathfrak{gl}_{m|n}$. We discuss representation theoretic interpretation ofour formulas using $q$-deformed $W$-algebra $\mathfrak{gl}_{m|n}$. On a Fock space constructed from $mn$ free bosons and lattice ${\Bbb{Z}}^{mn}$, we give a level $n$ action of the quantum toroidal algebra$\mathscr {E}_m$ associated to $\mathfrak{gl}_m$, together with a level $m$action of the quantum toroidal algebra ${\mathscr E}_n$ associated to${\mathfrak {gl}}_n$. We prove that the $\mathscr {E}_m$ transfer matricescommute with the $\mathscr {E}_n$ transfer matrices after an appropriateidentification of parameters. We use the Whittaker vectors and the Drinfeld Casimir element to show thateigenfunctions of the difference Toda Hamiltonian can be expressed viafermionic formulas. Motivated by the combinatorics of the fermionic formulas weuse the representation theory of the quantum groups to prove a number ofidentities for the coefficients of the eigenfunctions. We construct an analog of the subalgebra $Ugl(n)\otimes Ugl(m)$ of $Ugl(m+n)$in the setting of quantum toroidal algebras and study the restrictions ofvarious representations to this subalgebra. We identify the Taylor coefficients of the transfer matrices corresponding toquantum toroidal algebras with the elliptic local and non-local integrals ofmotion introduced by Kojima, Shiraishi, Watanabe, and one of the authors. That allows us to prove the Litvinov conjectures on the Intermediate LongWave model. We also discuss the (gl(m),gl(n)) duality of XXZ models in quantum toroidalsetting and the implications for the quantum KdV model. In particular, weconjecture that the spectrum of non-local integrals of motion of Bazhanov,Lukyanov, and Zamolodchikov is described by Gaudin Bethe ansatz equationsassociated to affine sl(2). We introduce and study a category $\text{Fin}$ of modules of the Borelsubalgebra of a quantum affine algebra $U_q\mathfrak{g}$, where the commutativealgebra of Drinfeld generators $h_{i,r}$, corresponding to Cartan currents, hasfinitely many characteristic values. This category is a natural extension ofthe category of finite-dimensional $U_q\mathfrak{g}$ modules. In particular, weclassify the irreducible objects, discuss their properties, and describe thecombinatorics of the q-characters. 
We study transfer matrices corresponding tomodules in $\text{Fin}$. Among them we find the Baxter $Q_i$ operators and$T_i$ operators satisfying relations of the form $T_iQ_i=\prod_j Q_j+ \prod_kQ_k$. We show that these operators are polynomials of the spectral parameterafter a suitable normalization. This allows us to prove the Bethe ansatzequations for the zeroes of the eigenvalues of the $Q_i$ operators acting in anarbitrary finite-dimensional representation of $U_q\mathfrak{g}$. We study highest weight representations of the Borel subalgebra of thequantum toroidal gl(1) algebra with finite-dimensional weight spaces. Inparticular, we develop the q-character theory for such modules. We introduceand study the subcategory of `finite type' modules. By definition, a moduleover the Borel subalgebra is finite type if the Cartan like current \psi^+(z)has a finite number of eigenvalues, even though the module itself can beinfinite dimensional. We use our results to diagonalize the transfer matrix T_{V,W}(u;p) analogousto those of the six vertex model. In our setting T_{V,W}(u;p) acts in a tensorproduct W of Fock spaces and V is a highest weight module over the Borelsubalgebra of quantum toroidal gl(1) with finite-dimensional weight spaces.Namely we show that for a special choice of finite type modules $V$ thecorresponding transfer matrices, Q(u;p) and T(u;p), are polynomials in u andsatisfy a two-term TQ relation. We use this relation to prove the Bethe Ansatzequation for the zeroes of the eigenvalues of Q(u;p). Then we show that theeigenvalues of T_{V,W}(u;p) are given by an appropriate substitution ofeigenvalues of Q(u;p) into the q-character of V. We study the conformal vertex algebras which naturally arise in relation tothe Nakajima-Yoshioka blow-up equations. We establish the method of Bethe ansatz for the XXZ type model obtained fromthe R-matrix associated to quantum toroidal gl(1). We do that by using shufflerealizations of the modules and by showing that the Hamiltonian of the model isobtained from a simple multiplication operator by taking an appropriatequotient. We expect this approach to be applicable to a wide variety of models. We define and study representations of quantum toroidal $gl_n$ with naturalbases labeled by plane partitions with various conditions. As an application,we give an explicit description of a family of highest weight representationsof quantum affine $gl_n$ with generic level. We establish the equivalence between the refined topological vertex ofIqbal-Kozcaz-Vafa and a certain representation theory of the quantum algebra oftype W_{1+infty} introduced by Miki. Our construction involves trivalentintertwining operators Phi and Phi^* associated with triples of the bosonicFock modules. Resembling the topological vertex, a triple of vectors in Z^2 isattached to each intertwining operator, which satisfy the Calabi-Yau andsmoothness conditions. It is shown that certain matrix elements of Phi andPhi^* give the refined topological vertex C_{lambda mu nu}(t,q) ofIqbal-Kozcaz-Vafa. With another choice of basis, we recover the refinedtopological vertex C_{lambda mu}^nu(q,t) of Awata-Kanno. The gluing factorsappears correctly when we consider any compositions of Phi and Phi^*. Thespectral parameters attached to Fock spaces play the role of the K"ahlerparameters. In third paper of the series we construct a large family of representationsof the quantum toroidal $\gl_1$ algebra whose bases are parameterized by planepartitions with various boundary conditions and restrictions. 
We study thecorresponding formal characters. As an application we obtain a Gelfand-Zetlintype basis for a class of irreducible lowest weight $\gl_\infty$-modules. We study the representation theory of the Ding-Iohara algebra $\calU$ to find$q$-analogues of the Alday-Gaiotto-Tachikawa (AGT) relations. We introduce theendomorphism $T(u,v)$ of the Ding-Iohara algebra, having two parameters $u$ and$v$. We define the vertex operator $\Phi(w)$ by specifying the permutationrelations with the Ding-Iohara generators $x^\pm(z)$ and $\psi^\pm(z)$ in termsof $T(u,v)$. For the level one representation, all the matrix elements of thevertex operators with respect to the Macdonald polynomials are factorized andwritten in terms of the Nekrasov factors for the $K$-theoretic partitionfunctions as in the AGT relations. For higher levels $m=2,3,...$, we presentsome conjectures, which imply the existence of the $q$-analogues of the AGTrelations. We begin a study of the representation theory of quantum continuous$\mathfrak{gl}_\infty$, which we denote by $\mathcal E$. This algebra dependson two parameters and is a deformed version of the enveloping algebra of theLie algebra of difference operators acting on the space of Laurent polynomialsin one variable. Fundamental representations of $\mathcal E$ are labeled by acontinuous parameter $u\in {\mathbb C}$. The representation theory of $\mathcalE$ has many properties familiar from the representation theory of$\mathfrak{gl}_\infty$: vector representations, Fock modules, semi-infiniteconstructions of modules. Using tensor products of vector representations, weconstruct surjective homomorphisms from $\mathcal E$ to spherical double affineHecke algebras $S\ddot H_N$ for all $N$. A key step in this construction is anidentification of a natural bases of the tensor products of vectorrepresentations with Macdonald polynomials. We also show that one of the Fockrepresentations is isomorphic to the module constructed earlier by means of the$K$-theory of Hilbert schemes. We construct a family of irreducible representations of the quantumcontinuous $gl_\infty$ whose characters coincide with the characters ofrepresentations in the minimal models of the $W_n$ algebras of $gl_n$ type. Inparticular, we obtain a simple combinatorial model for all representations ofthe $W_n$-algebras appearing in the minimal models in terms of $n$interrelating partitions. We introduce an analogue $K_n(x,z;q,t)$ of the Cauchy-type kernel functionfor the Macdonald polynomials, being constructed in the tensor product of thering of symmetric functions and the commutative algebra $\mathcal{A}$ over thedegenerate $\mathbb{C} \mathbb{P}^1$. We show that a certain restriction of$K_n(x,z;q,t)$ with respect to the variable $z$ is neatly described by thetableau sum formula of Macdonald polynomials. Next, we demonstrate that theinteger level representation of the Ding-Iohara quantum algebra naturallyproduces the currents of the deformed $\mathcal{W}$ algebra. Then we remarkthat the $K_n(x,z;q,t)$ emerges in the highest-to-highest correlation functionof the deformed $\mathcal{W}$ algebra. We introduce a class of quantum integrable systems generalizing the Gaudinmodel. The corresponding algebras of quantum Hamiltonians are obtained asquotients of the center of the enveloping algebra of an affine Kac-Moodyalgebra at the critical level, extending the construction of higher GaudinHamiltonians from hep-th/9402022 to the case of non-highest weightrepresentations of affine algebras. 
We show that these algebras are isomorphicto algebras of functions on the spaces of opers on P^1 with regular as well asirregular singularities at finitely many points. We construct eigenvectors ofthese Hamiltonians, using Wakimoto modules of critical level, and show thattheir spectra on finite-dimensional representations are given by opers withtrivial monodromy. We also comment on the connection between the generalizedGaudin models and the geometric Langlands correspondence with ramification. We derive a bosonic formula for the character of the principal space in thelevel $k$ vacuum module for $\widehat{\mathfrak{sl}}_{n+1}$, starting from aknown fermionic formula for it. In our previous work, the latter was written asa sum consisting of Shapovalov scalar products of the Whittaker vectors for$U_{v^{\pm1}}(\mathfrak{gl}_{n+1})$. In this paper we compute these scalarproducts in the bosonic form, using the decomposition of the Whittaker vectorsin the Gelfand-Zetlin basis. We show further that the bosonic formula obtainedin this way is the quasi-classical decomposition of the fermionic formula. We introduce a unital associative algebra A over degenerate CP^1. We showthat A is a commutative algebra and whose Poincar'e series is given by thenumber of partitions. Thereby we can regard A as a smooth degeneration limit ofthe elliptic algebra introduced by one of the authors and Odesskii. Then westudy the commutative family of the Macdonald difference operators acting onthe space of symmetric functions. A canonical basis is proposed for this familyby using A and the Heisenberg representation of the commutative family studiedby one of the authors. It is found that the Ding-Iohara algebra provides uswith an algebraic framework for the free filed construction. An ellipticdeformation of our construction is discussed, showing connections with theDrinfeld quasi-Hopf twisting a la Babelon Bernard Billey, the Ruijsenaarsdifference operator and the operator M(q,t_1,t_2) of Okounkov-Pandharipande. We explicitly construct two classes of infinitly many commutative operatorsin terms of the deformed Virasoro algebra. We call one of them local integralsand the other nonlocal one, since they can be regarded as elliptic deformationsof the local and nonlocal integrals of motion obtained by V.Bazhanov,S.Lukyanov and Al.Zamolodchikov. We study a class of representations of the Lie algebra of Laurent polynomialswith values in the nilpotent subalgebra of sl(3). We derive Weyl-type (bosonic)character formulas for these representations. We establish a connection betweenthe bosonic formulas and the Whittaker vector in the Verma module for thequantum group $U_v sl(3)$. We also obtain a fermionic formula for aneigenfunction of the sl(3) quantum Toda Hamiltonian. The filtration of the Virasoro minimal series representationsM^{(p,p')}_{r,s} induced by the (1,3)-primary field $\phi_{1,3}(z)$ is studied.For 1< p'/p< 2, a conjectural basis of M^{(p,p')}_{r,s} compatible with thefiltration is given by using monomial vectors in terms of the Fouriercoefficients of $\phi_{1,3}(z)$. In support of this conjecture, we give tworesults. First, we establish the equality of the character of the conjecturalbasis vectors with the character of the whole representation space. 
Second, forthe unitary series (p'=p+1), we establish for each $m$ the equality between thecharacter of the degree $m$ monomial basis and the character of the degree $m$component in the associated graded module Gr(M^{(p,p+1)}_{r,s}) with respect tothe filtration defined by $\phi_{1,3}(z)$. Let \{M_{r,s}\}_{0< r < p, 0< s < p'} be the irreducible Virasoro modules inthe $(p,p')$-minimal series. In our previous paper, we have constructed amonomial basis of \oplus_{r=1}^{p-1}M_{r,s} in the case of $1<p'/p<2$. By`monomials' we mean vectors of the form\phi^{(r_L,r_{L-1})}_{-n_L}...\phi^{(r_1,r_{0})}_{-n_1} |r_0,s >, where\phi_{-n}^{(r',r)} are the Fourier components of the (2,1)-primary fieldmapping M_{r,s} to M_{r',s}, and |r_0,s > is the highest weight vector ofM_{r_0,s}. In this article, for all p<p' with p>2 and s=1, we describe a subsetof such monomials which conjecturally forms a basis of\oplus_{r=1}^{p-1}M_{r,1}. We prove that the character of the combinatorial setlabeling these monomials coincides with the character of the correspondingVirasoro module. We also verify the conjecture in the case of p=3. A higher level analog of Weyl modules over multi-variable currents isproposed. It is shown that the sum of their dual spaces form a commutativealgebra. The structure of these modules and the geometry of the projectivespectrum of this algebra is studied for the currents of dimension one and two.Along the way we prove some particular cases of the conjectures in [FL1] andpropose a generalization of the notion of parking function representations.
I am working thru a derivation of the group velocity formula and I get to this stage: $$y=2A\cos(x\frac{\Delta K}{2} -t\frac{\Delta \omega}{2})\sin( \bar k x-\bar \omega t)$$ Then all the derivations I have seen say that $\frac{\Delta \omega}{\Delta K} $ is the group velocity. I know mathematically why this is a velocity but what I don't get is why do we know that this is the group velocity rather then the phase velocity and that $\frac{\bar \omega}{\bar k}$ is the phase velocity and not the group velocity? Consider a wave $$A = \int_{-\infty}^{\infty} a(k) e^{i(kx-\omega t)} \ dk,$$ where $a(k)$ is the amplitude of the kth wavenumber, and $\omega=\omega(k)$ is the frequency, related to $k$ via a dispersion relation. Note, if we wanted to track a wave, with wavenumber $k$, with constant phase, we would see that this occurs when $kx=\omega t$, i.e. $x/t = \omega/k = c$, with $c$ the $\textbf{phase}$ velocity. We would like to know the speed at which the envelope $|A|$ is traveling. For $\textbf{narrow banded}$ waves, the angular frequency $\omega$ can be approximated via the taylor expansion around a central wavenumber $k_o$, i.e. $$\omega(k) = \omega(k_o) + \frac{\partial \omega}{\partial k} (k-k_o) + \mathcal{O}((k-k_o)^2),$$ where the scale of the bandwidth is quantified by the small parameter $(k-k_o)$. Therefore, we can rewrite $A$ as $$A \approx e^{-i(\omega(k_o)t-k_o\frac{\partial \omega}{\partial k}t)} \int_{-\infty}^{\infty} a(k) e^{ik(x-\frac{\partial \omega}{\partial k} t)} \ dk.$$ Therefore $$|A| = \int_{-\infty}^{\infty} a(k) e^{ik(x-\frac{\partial \omega}{\partial k} t)} \ dk,$$ which says that the envelope, $|A|$, travels at speed $\frac{\partial \omega}{\partial k}$, i.e. $$|A(x,t)| = |A(x-c_g t,0)|,$$ where we have defined $$c_g \equiv \frac{\partial \omega}{\partial k}.$$ The group velocity has dynamical significance, as it is the velocity at which the energy travels. Definitions Before we begin, we should define some terms and parameters/functions that will be used later: Wave Number: $\equiv$ effectively the number of wave crests (i.e., anti-node of local maximum) per unit length $\leftrightharpoons$ ``density'' of waves $\rightarrow$ $\boldsymbol{\kappa}$ $=$ $\boldsymbol{\kappa}\left(\omega,\textbf{x},t\right)$ in general Wave Frequency: $\equiv$ effectively the number of wave crests crossing position $\mathbf{x}$ per unit time $\leftrightharpoons$ ``flux'' of waves $\rightarrow$ $\omega$ $=$ $\omega\left(\boldsymbol{\kappa},\textbf{x},t\right)$ in general Wave Phase: $\equiv$ position on a wave cycle between a crest and a trough (i.e., anti-node of local minimum) $\rightarrow$ $\phi$ $=$ $\phi\left(\textbf{x},t\right)$ in general Phase and Continuity Then, we can define an elementary solution to periodic wave equations as:$$ \psi\left( \mathbf{x}, t \right) = \mathcal{A} \ e^{ {\displaystyle i\left( \boldsymbol{\kappa} \cdot \mathbf{x} - \omega t \right) } }$$where $\mathcal{A}$ is the wave amplitude and, in general, can be a function of $\boldsymbol{\kappa}$ and/or $\omega$, but we will assume constant for now. Let us assume that a dispersion relation, $\omega$ $=$ $\mathcal{W}\left( \boldsymbol{\kappa}, \textbf{x}, t \right)$, exists and may be solved for positive real roots. In general, there will be multiple solutions to the dispersion relation, where each solution is referred to as different modes. 
The term in the exponent is known as the wave phase, given by:$$ \phi\left( \mathbf{x}, t \right) = \boldsymbol{\kappa}\left( \omega, \mathbf{x}, t \right) \cdot \mathbf{x} - \omega\left( \boldsymbol{\kappa}, \mathbf{x}, t \right) \ t + \phi{\scriptstyle_{o}}$$Because $\phi\left(\textbf{x},t\right)$ results from solutions of the wave equation, its derivatives must satisfy the dispersion relation through the following:$$ - \frac{ \partial \phi\left( \mathbf{x}, t \right) }{ \partial t } = \mathcal{W}\left( \frac{ \partial \phi\left( \mathbf{x}, t \right) }{ \partial \mathbf{x} }, \mathbf{x}, t \right)$$and we can see from the equation for $\phi\left(\textbf{x},t\right)$ that the following is true:$$\begin{align} \boldsymbol{\kappa} & = \frac{ \partial \phi\left( \mathbf{x}, t \right) }{ \partial \mathbf{x} } \\ \omega & = - \frac{ \partial \phi\left( \mathbf{x}, t \right) }{ \partial t }\end{align}$$We also know that $\partial^{2} \phi$/$\partial \mathbf{x} \partial t$ $=$ $\partial^{2} \phi$/$\partial t \partial \mathbf{x}$, therefore:$$\begin{align} \frac{ \partial^{2} \phi }{ \partial t \partial \mathbf{x} } - \frac{ \partial^{2} \phi }{ \partial \mathbf{x} \partial t } & = 0 \\ & = \frac{ \partial \boldsymbol{\kappa} }{ \partial t } - \frac{ - \partial \omega }{ \partial \mathbf{x} } = 0 \\ & = \frac{ \partial \boldsymbol{\kappa} }{ \partial t } + \frac{ \partial \omega }{ \partial \mathbf{x} } = 0 \\ & = \frac{ \partial \boldsymbol{\kappa} }{ \partial t } + \nabla \omega = 0\end{align}$$One can see that this final form looks similar to a continuity equation, so long as $\boldsymbol{\kappa}$ $\leftrightharpoons$ density of the waves, and $\omega$ $\leftrightharpoons$ flux of the waves. Phase Velocity From the above relations, we can see that on contours of constant $\phi\left(\textbf{x},t\right)$, we are sitting on local wave crests (i.e., phase fronts) where $\boldsymbol{\kappa}$ is orthogonal to these contours. These phase fronts move parallel to $\boldsymbol{\kappa}$ at a speed, $\mathbf{V}_{\phi}$, known as the phase velocity. The general form for this speed is given by:$$ \mathbf{V}_{\phi} \equiv \frac{ \mathcal{W}\left( \boldsymbol{\kappa}, \mathbf{x}, t \right) }{ \kappa } \boldsymbol{\hat{\kappa}}$$ Group Velocity We can rearrange our continuity equation by multiplying by unity to get:$$\begin{align} \frac{ \partial \boldsymbol{\kappa} }{ \partial t } + \frac{ \partial \omega }{ \partial \mathbf{x} } \cdot \frac{ \partial \boldsymbol{\kappa} }{ \partial \boldsymbol{\kappa} } & = 0 \\ \frac{ \partial \boldsymbol{\kappa} }{ \partial t } + \frac{ \partial \omega }{ \partial \boldsymbol{\kappa} } \cdot \frac{ \partial \boldsymbol{\kappa} }{ \partial \mathbf{x} } & = 0 \\ \frac{ \partial \boldsymbol{\kappa} }{ \partial t } + \left( \mathbf{V}_{g} \cdot \nabla \right) \boldsymbol{\kappa} & = 0\end{align}$$where $\mathbf{V}_{g}$ is called the group velocity, where we note that:$$\frac{ \partial \omega }{ \partial \mathbf{x} } = \frac{ \partial \mathcal{W}\left( \boldsymbol{\kappa}, \mathbf{x}, t \right) }{ \partial \boldsymbol{\kappa} } \cdot \frac{ \partial \boldsymbol{\kappa} }{ \partial \mathbf{x} } + \frac{ \partial \mathcal{W}\left( \boldsymbol{\kappa}, \mathbf{x}, t \right) }{ \partial \mathbf{x} }$$which shows that $\partial \mathcal{W}$/$\partial \boldsymbol{\kappa}$ $=$ $\left( \partial \omega / \partial \boldsymbol{\kappa} \right){\scriptstyle_{\textbf{x}}}$ $\Rightarrow$ different $\boldsymbol{\kappa}$'s propagate with velocity $\mathbf{V}_{g}$. 
In other words, $\mathbf{V}_{g}$ is the propagation velocity for $\boldsymbol{\kappa}$, and $\left|\mathcal{A}\right|^{2}$ propagates with velocity $\mathbf{V}_{g}$. Thus, an observer moving with the phase fronts (crests) moves at $\mathbf{V}_{\phi}$, but they observe the local wavenumber and frequency to change in time $\Rightarrow$ neighboring phase fronts (crests) move away from the observer in this frame. In contrast, an observer moving with $\mathbf{V}_{g}$ observes constant local wavenumber and frequency (with respect to time), but phase fronts (crests) continuously move past the observer in this frame. References Whitham, G. B. (1999), Linear and Nonlinear Waves, New York, NY: John Wiley & Sons, Inc.; ISBN:0-471-35942-4. The frequency $\omega$ can be high in a wave packet, but the envelope motion may be slow. The latter is determined by the $\cos(...)$ factor; that is why it is called the group velocity. It is the velocity at which the packet as a whole is displaced. There must be applets on the internet that show how a wave packet moves.
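To make the distinction concrete, here is a small numerical sketch in Python: it builds a narrow-band packet by superposing many components $a(k)\cos(kx-\omega(k)t)$ and tracks the centroid of the intensity, which should move at $d\omega/dk$ rather than $\omega/k$. The dispersion relation $\omega(k)=\sqrt{k}$ is an arbitrary illustrative choice (deep-water-like), made only so that the two speeds differ.

import numpy as np
def omega(k):
    return np.sqrt(k)              # illustrative dispersion relation
k0, sigma = 1.0, 0.05              # carrier wavenumber and (narrow) bandwidth
k = np.linspace(k0 - 4 * sigma, k0 + 4 * sigma, 401)
a = np.exp(-((k - k0) ** 2) / (2 * sigma ** 2))    # Gaussian spectrum
x = np.linspace(-60.0, 160.0, 8001)
def packet(t):
    phase = np.outer(x, k) - omega(k) * t          # phase of each component at time t
    return (np.cos(phase) * a).sum(axis=1)         # superpose all components
def centroid(t):
    y2 = packet(t) ** 2                            # intensity ~ envelope^2 (carrier averages out)
    return np.sum(x * y2) / np.sum(y2)
t1, t2 = 0.0, 120.0
v_measured = (centroid(t2) - centroid(t1)) / (t2 - t1)
v_group = 0.5 / np.sqrt(k0)        # dw/dk at k0
v_phase = omega(k0) / k0           # w/k at k0
print(v_measured, v_group, v_phase)   # the envelope moves at ~0.5, not at 1.0

The measured envelope speed agrees with $d\omega/dk$, while individual crests inside the packet move at the faster phase speed $\omega/k$.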
Fit Point Process Model Involving Irregular Trend Parameters
Experimental extension to ppm which finds optimal values of the irregular trend parameters in a point process model.
Usage
ippm(Q, …, iScore=NULL, start=list(), covfunargs=start, nlm.args=list(stepmax=1/2), silent=FALSE, warn.unused=TRUE)
Arguments
Q, … : Arguments passed to ppm to fit the point process model.
iScore : Optional. A named list of R functions that compute the partial derivatives of the logarithm of the trend, with respect to each irregular parameter. See Details.
start : Named list containing initial values of the irregular parameters over which to optimise.
covfunargs : Argument passed to ppm. A named list containing values for all irregular parameters required by the covariates in the model. Must include all the parameters named in start.
nlm.args : Optional list of arguments passed to nlm to control the optimization algorithm.
silent : Logical. Whether to print warnings if the optimization algorithm fails to converge.
warn.unused : Logical. Whether to print a warning if some of the parameters in start are not used in the model.
Details
This function is an experimental extension to the point process model fitting command ppm. The extension allows the trend of the model to include irregular parameters, which will be maximised by a Newton-type iterative method, using nlm.
For the sake of explanation, consider a Poisson point process with intensity function \(\lambda(u)\) at location \(u\). Assume that $$ \lambda(u) = \exp(\alpha + \beta Z(u)) \, f(u, \gamma) $$ where \(\alpha,\beta,\gamma\) are parameters to be estimated, \(Z(u)\) is a spatial covariate function, and \(f\) is some known function. Then the parameters \(\alpha,\beta\) are called regular because they appear in a loglinear form; the parameter \(\gamma\) is called irregular.
To fit this model using ippm, we specify the intensity using the trend formula in the same way as usual for ppm. The trend formula is a representation of the log intensity. In the above example the log intensity is $$ \log\lambda(u) = \alpha + \beta Z(u) + \log f(u, \gamma) $$ So the model above would be encoded with the trend formula ~Z + offset(log(f)). Note that the irregular part of the model is an offset term, which means that it is included in the log trend as it is, without being multiplied by another regular parameter.
The optimisation runs faster if we specify the derivative of \(\log f(u,\gamma)\) with respect to \(\gamma\). We call this the irregular score. To specify this, the user must write an R function that computes the irregular score for any value of \(\gamma\) at any location (x,y).
Thus, to code such a problem:
The argument trend should define the log intensity, with the irregular part as an offset;
The argument start should be a list containing initial values of each of the irregular parameters;
The argument iScore, if provided, must be a list (with one entry for each entry of start) of functions with arguments x,y,…, that evaluate the partial derivatives of \(\log f(u,\gamma)\) with respect to each irregular parameter.
The coded example below illustrates the model with two irregular parameters \(\gamma,\delta\) and irregular term $$ f((x,y), (\gamma, \delta)) = 1 + \exp(\gamma - \delta x^3) $$
Arguments … passed to ppm may also include interaction. 
In this case the model is not a Poisson point process but a more general Gibbs point process; the trend formula trend determines the first-order trend of the model (the first order component of the conditional intensity), not the intensity.
Value
A fitted point process model (object of class "ppm") which also belongs to the special class "ippm".
See Also
Aliases
ippm
Examples
nd <- 32
gamma0 <- 3
delta0 <- 5
POW <- 3
# Terms in intensity
Z <- function(x,y) { -2*y }
f <- function(x,y,gamma,delta) { 1 + exp(gamma - delta * x^POW) }
# True intensity
lamb <- function(x,y,gamma,delta) { 200 * exp(Z(x,y)) * f(x,y,gamma,delta) }
# Simulate realisation
lmax <- max(lamb(0,0,gamma0,delta0), lamb(1,1,gamma0,delta0))
set.seed(42)
X <- rpoispp(lamb, lmax=lmax, win=owin(), gamma=gamma0, delta=delta0)
# Partial derivatives of log f
DlogfDgamma <- function(x,y, gamma, delta) {
  topbit <- exp(gamma - delta * x^POW)
  topbit/(1 + topbit)
}
DlogfDdelta <- function(x,y, gamma, delta) {
  topbit <- exp(gamma - delta * x^POW)
  -(x^POW) * topbit/(1 + topbit)
}
# irregular score
Dlogf <- list(gamma=DlogfDgamma, delta=DlogfDdelta)
# fit model
ippm(X ~Z + offset(log(f)),
     covariates=list(Z=Z, f=f),
     iScore=Dlogf,
     start=list(gamma=1, delta=1),
     nlm.args=list(stepmax=1),
     nd=nd)
Documentation reproduced from package spatstat, version 1.59-0, License: GPL (>= 2)
In mathematics, solving linear equations is one of the important topics. We can say concept of linear equations is the base of advance algebra. Many students are scared of math and this anxiety should be the first thing that needs to be fixed as soon as possible. Many times students are afraid of asking any question related to it because they do not want others to think that they did not understand the concept. So we have designed cross multiplication method concept in such a way so that students can learn at their own pace and clear their doubts. Go through the below content and at the end you will be confident about the concept and able to solve problems at our own. Also we provide NCERT Solutions on cross multiplication method problems which are prepared by experts. We provide accurate and easy solutions for all questions covered in NCERT textbooks. Answers have been structured in a logical and easy language for quick revisions during examination or tests. In this section, we will discuss about simultaneous linear equations by using cross-multiplication method. What is Cross Multiplication Method A method can be used to determine the value of a variable from any equation. Usually in elementary algebra and elementary arithmetic, rational expressions and equations involving fractions are solved by using cross-multiplication method. Cross Multiplication Method Pair of Linear Equations General form of a linear equation in two unknown quantities: ax + by + c = 0, (a, b ≠0) Assume two linear equation for x and y variables be $a_1$ x + $b_1$y + $c_1$ = 0 ...(1) $a_2$ x + $b_2$ y + $c_2$ = 0 ...(2) The coefficients of x are: $a_1$ and $a_2$ The coefficients of y are: $b_1$ and $b_2$ The constant terms are: $c_1$ and $c_2$ Use method of elimination to solve both the equations: Equation (1) is multiplied with $b_2$ $b_2$($a_1$ x + $b_1$y + $c_1$ = 0) $a_1$$b_2$ x + $b_1$$b_2$y + $c_1$$b_2$ = 0 ...(3) Equation (2) is multiplied with $b_1$ $b_1$($a_2$ x + $b_2$ y + $c_2$ = 0) $a_2$$b_1$ x + $b_2$$b_1$ y + $c_2$$b_1$ = 0 ...(4) Subtract equation 4 from equation 3, we have ($a_1$$b_2$ - $a_2$$b_1$) x + ($b_1$$b_2$ - $b_2$$b_1$)y + ($c_1$$b_2$ - $c_2$$b_1$) = 0 This implies x = $\frac{ b_1\ c_2 -\ b_2\ c_1}{ b_2\ a_1 -\ a_2\ b_1}$; where ($a_1$$b_2$ - $a_2$$b_1$) $\neq$ 0 To obtain the value of y, substitute the value of x in equation (1), y = $\frac{ c_1\ a_2 -\ c_2\ a_1}{a_1\ b_2\ -\ a_2\ b_1}$; where ($a_1$$b_2$ - $a_2$$b_1$) $\neq$ 0 From the value of x and y we can obtain the result as: Examples Example: Solve below system of linear equations using cross multiplication method 3x + y = 10 and x + 2y = 5 Solution: 3x + y - 10 = 0 ...equation(1) x + 2y - 5 = 0 ...equation (2) Here, a$_1$ = 3, b$_1$ = 1, c$_1$ = -10 a$_2$ = 1, b$_2$ = 2, c$_2$ = -5 Use above derived formula, to find the value of x and y. x = $\frac{ b_1 c_2- b_2 c_1}{a_1 b_2 - a_2 b_1}$ = $\frac{-5 \times 1\ - ( -10)\ \times 2}{3\ \times 2\ -\ 1\ \times 1}$ = $\frac{(-5) - (-20)}{6 -1)}$ = $\frac{15}{5}$ = 3 Value of x is 3 Now, y = $\frac{ c_1\ a_2 -\ c_2\ a_1}{a_1\ b_2\ -\ a_2\ b_1}$ = $\frac{(-10)(1)-(-5)(3)}{(3)(2) - (1)(1)}$ = $\frac{(-10) - (-15)}{6 - 1}$ = $\frac{5}{5}$ = 1 Therefore, solution is : x = 3 and y = 1 Practice Problems Solve below linear equations using cross multiplication method: Problem 1 : Solve -x + y = 10 and 3x - 5y = 1 Problem 2 : Find the value of x and y: 3x - 1 = 5 and 7x = y - 10 Problem 3 : Solve linear equations: 1/2x - 4y - 7 = 0 and x - y = 1
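As a quick check of the formulas derived above, here is a minimal Python sketch; the function name is just for illustration, and it only handles the case $a_1 b_2 - a_2 b_1 \neq 0$ (the unique-solution case from the derivation).

def cross_multiplication(a1, b1, c1, a2, b2, c2):
    # solves a1*x + b1*y + c1 = 0 and a2*x + b2*y + c2 = 0
    det = a1 * b2 - a2 * b1
    if det == 0:
        raise ValueError("a1*b2 - a2*b1 = 0: no unique solution")
    x = (b1 * c2 - b2 * c1) / det
    y = (c1 * a2 - c2 * a1) / det
    return x, y

print(cross_multiplication(3, 1, -10, 1, 2, -5))   # (3.0, 1.0), matching the worked example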
The average wavenumber for a ketone is about $\pu{1720 cm-1}$ and the average wavenumber for an ester is about $\pu{1740 cm-1}$. This, however, does not make sense, as the carbonyl group of an ester should have a greater single bond character than the ketone due to resonance from the adjacent oxygen atom. This greater single bond character should thus result in a lower wavenumber for the ester, but it does not. Is there an explanation for this? For an undergraduate starting to learn IR spectroscopy, the stretching frequency of any $A-B$ chemical bond, $\bar\nu$ (in $\pu{cm-1}$), can be calculated using the following equation: $${\bar\nu} = \frac{1}{2\pi c}\sqrt{\frac{k}{\mu}}= \frac{1}{2\pi c}\sqrt{\frac{k(m_A+m_B)}{m_Am_B}}$$ where $k=\text{the force constant of the bond}=\text{bond strength}$, $m_A= \text{mass of atom }A$, $m_B= \text{mass of atom }B$, and $c= \text{speed of light}$. The equation shows that $\bar\nu$ depends on at least two factors: the reduced mass $\mu$ and the force constant of the bond $k$. The dependency on $\mu$ is illustrated by the difference in stretching frequency of $\ce{C-H}$ ($\pu{\approx 3000 cm-1}$) and $\ce{C-D}$ ($\pu{\approx 2200 cm-1}$); the force constants of $\ce{C-H}$ and $\ce{C-D}$ are approximately equal. When comparing different carbonyl stretching frequencies, the most important factor is the force constant of the bond of interest. For example, ring strain in a cyclic ketone usually increases the $\ce{C=O}$ stretching frequency. That of cycloheptanone is $\pu{\approx 1702 cm-1}$, cyclohexanone is $\pu{\approx 1714 cm-1}$, cyclopentanone is $\pu{\approx 1747 cm-1}$, and cyclobutanone is $\pu{\approx 1783 cm-1}$. Therefore, we can generally conclude that the stretching frequency of the bond increases with the reactivity of the carbonyl bond. Likewise, the reactivity of carbonyl compounds in decreasing order is: acid chlorides ($\pu{1780-1820 cm-1}$) > acid anhydrides ($\pu{\approx 1760 cm-1}$ and $\pu{\approx 1810 cm-1}$) > esters ($\pu{1730-1750 cm-1}$) > aldehydes ($\pu{1720-1740 cm-1}$) > ketones ($\pu{1705-1725 cm-1}$) > carboxylic acids ($\pu{1700-1725 cm-1}$) > acid amides ($\pu{1630-1680 cm-1}$). The reduction of double-bond character in an ester (the average $\ce{C=O}$ bond length in methyl acetate is $\pu{\approx 123.2 pm}$) is minimal compared with that in ketones and aldehydes (the average $\ce{C=O}$ bond length in acetaldehyde is $\pu{\approx 123.1 pm}$), but the ester has the better leaving group when it reacts.
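As a rough numerical illustration of the harmonic-oscillator formula above, here is a short Python sketch; the force constant of 500 N/m is an assumed, typical value for a C-H/C-D bond, not a number taken from the answer.

import math

C_LIGHT = 2.998e10      # speed of light in cm/s
AMU = 1.66054e-27       # kg per atomic mass unit

def wavenumber(k, m_a, m_b):
    # Stretching wavenumber (cm^-1) for an A-B bond: k in N/m, masses in amu
    mu = (m_a * m_b) / (m_a + m_b) * AMU
    return math.sqrt(k / mu) / (2 * math.pi * C_LIGHT)

k_assumed = 500.0                                   # assumed typical force constant, N/m
print(round(wavenumber(k_assumed, 12.0, 1.008)))    # ~3000 cm^-1 for C-H
print(round(wavenumber(k_assumed, 12.0, 2.014)))    # ~2200 cm^-1 for C-D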
I'm currently reading Quantum Computation and Quantum Information by Nielsen. I'm struggling to solve exercise 2.58. The problem is: Suppose we prepare a quantum system in an eigenstate $|\psi\rangle$ of some observable $M$, with corresponding eigenvalue $m$. What is the average observed value of $M$, and the standard deviation? The average is easy to find, $$\langle M \rangle = \langle \psi | M | \psi \rangle = m \langle \psi | \psi \rangle = m.$$ The standard deviation is by definition $$\Delta(M) = \sqrt{\langle M^2 \rangle - \langle M \rangle^2} = \sqrt{\langle M^2 \rangle - m^2},$$ so you just need to find the value of $\langle M^2 \rangle$, $$\langle M^2 \rangle = \langle \psi | M^2 | \psi \rangle = m \langle \psi | M | \psi \rangle = m^2,$$ which gives a standard deviation of zero. This seems to violate the Heisenberg uncertainty principle, so I believe I've got something wrong. What is the correct answer to this exercise?
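A quick numerical check of the algebra (not an answer to the conceptual worry) can be done with NumPy, using an arbitrary Hermitian matrix as the observable:

import numpy as np

M = np.array([[2.0, 1.0],
              [1.0, 2.0]])              # an arbitrary Hermitian observable

vals, vecs = np.linalg.eigh(M)
psi = vecs[:, 0]                        # eigenstate with eigenvalue m = vals[0]

exp_M = psi @ M @ psi                   # <psi|M|psi>
exp_M2 = psi @ (M @ M) @ psi            # <psi|M^2|psi>
std = np.sqrt(max(exp_M2 - exp_M**2, 0.0))

print(vals[0], exp_M, std)              # m, m, and a standard deviation of ~0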
A set of linear equations with two or more variables having degree one. Before we go in details it is also recommended to have a look on other form of linear equations like linear equations with one variable and with two variables and so on. System of linear equation is one the most prominent topics in algebra. In this section will learn more about system of linear equations with two variables and different methods to solve them such as substitution method and elimination methods etc. What is the System of Linear Equations System of linear equations is a set of two or more linear equations working together and involving the same set of variables. A general system can contains m linear equations with n unknowns. For Example: System of linear equations with 2 variables is: $a_1 x + b_1 y + c_1 = 0$ and $a_2 x + b_2 y + c_2 = 0$. System of linear equations with 3 variables: $a_1 x + b_1 y + c_1 z + d_1= 0$ $a_2 x + b_2 y + c_2 z + d_2 = 0$ and $a_3 x + b_3 y + c_3 z + d_3 = 0$ General Form For $x_1,x_2, x_3,.....x_n$ are the unknowns and $b_1, b_2,.... b_m $ are the constant terms and $a_{11}, a_{12},.......,a_{mn}$ are the coefficients of the system. A general system of m linear equations with n unknowns can be written as: $a_{11}x_1+a_{12}x_2+....+a_{1n}x_n=b_1$ $a_{21}x_1+a_{22}x_2+....+a_{2n}x_n=b_2$ . . . $a_{m1}x_1+a_{m2}x_2+....+a_{mn}x_n=b_m$ Solving System of Linear Equations in Two Variables Let us consider, $a_1 x + b_1 y + c_1 = 0$ and $a_2 x + b_2 y + c_2 = 0$. There can be different ways/methods to solve these homogeneous system of linear equations. Some are 1. Elimination Method 2. Substitution Method. 3. Matrix Method 4. Cross Multiplication Method The solutions for the system of equations can be consistent or inconsistent. All based upon the ratios of the coefficients. For the above system of equations: System of Equations Condition Solution Type Consistent $\frac{a_{1}}{a_{2}}\: \neq \: \frac{b_{1}}{b_{2}}$ Unique solution Consistent $\frac{a_{1}}{a_{2}}\: = \: \frac{b_{1}}{b_{2}} = \frac{c_{1}}{c_{2}}$ Infinite solution Inconsistent $\frac{a_{1}}{a_{2}}\: = \: \frac{b_{1}}{b_{2}}\: \neq \: \frac{c_{1}}{c_{2}}$ Solution does not exists System of Linear Equations in Three Variables: The general form of linear equation in three variables, x, y and z is ax + by + cz +d =0, where a, b, c are real numbers and a, b, c not all equal to 0. This represent the equation of a plane in three-dimensional co-ordinate system, where a, b, c are the direction ratios of the normal to the plane. To solve the equation in three variables, we need to have three conditions (equations) relating the variables x, y and z. Elimination method is the most suitable method to solve the equations. Linear System of Differential Equations: A differential equation in which the dependent variable and its derivatives appear only in first degree is called a linear differential equation. An ordinary linear differential equation of order n is of the form, $\frac{\mathrm{d^n}y }{\mathrm{d} x^n}+P_{1}\frac{d^{n-1}y }{dx^{n-1}}+P_{2}\frac{d^{n-2}y }{dx^{n-2}}+...................P_{n}y=X$, where $P_{1}, P_{2},....................P_{n}$ and X are functions of x. If X = 0, it is called homogeneous equation, otherwise it is a non-homogeneous equation. First order linear differential equation: The general form of a linear equation of the first order is, $\frac{\mathrm{d} y}{\mathrm{d} x}+ Py = Q$, where P and Q are functions of x. 
The solution of the above equation is given by y = $e^{-\int P\,dx}\left(\int Qe^{\int P\,dx}\,dx + C\right)$.

Examples

Example 1: Solve 2x + 3y = 25 and 3x + 2y = 25.

Solution:
2x + 3y = 25 ------------(1)
3x + 2y = 25 ------------(2)
Multiplying equation (1) by 3: 3(2x + 3y) = 3(25) => 6x + 9y = 75 -----------(3)
Multiplying equation (2) by 2: 2(3x + 2y) = 2(25) => 6x + 4y = 50 ------(4)
Subtracting (4) from (3): 5y = 25 => y = 25/5 = 5
Substituting y = 5 in (1): 2x + 3(5) = 25 => 2x + 15 = 25 => 2x = 25 - 15 = 10 => x = 10/2 = 5
Therefore, the two lines intersect at the point (5, 5).

Example 2: Solve for x and y: x + 3y = 8; 3x + 9y = 24.

Solution:
Let x + 3y = 8 --------(1)
3x + 9y = 24 ------- (2)
Substituting x = 8 - 3y in (2): 3(8 - 3y) + 9y = 24 => 24 - 9y + 9y = 24 => 24 = 24, which is true for every value of y.
Since $\frac{a_{1}}{a_{2}}\: = \: \frac{b_{1}}{b_{2}}\: = \: \frac{c_{1}}{c_{2}}$ = 1/3, the system is consistent with infinitely many solutions: the two equations represent the same line, and every point (8 - 3y, y) is a solution.

Word Problems

Follow the steps below to solve word problems.
Step 1: Read and understand the problem carefully.
Step 2: Identify the unknown quantities and represent each with a variable.
Step 3: Formulate the equations.
Step 4: Solve the equations, using any of the methods.
Step 5: Write your answer.

Example: 4 chairs and 3 tables cost Rs. 1400, and 5 chairs and 2 tables cost Rs. 1400. Find the cost of a chair and a table.

Solution: Let the cost of a chair be Rs. x and the cost of a table be Rs. y.
4 chairs and 3 tables: 4x + 3y = 1400
5 chairs and 2 tables: 5x + 2y = 1400
Hence we have the equations
4x + 3y = 1400 -----------(1)
5x + 2y = 1400 ---------(2)
Multiplying (1) by 5: 5(4x + 3y) = 5(1400) => 20x + 15y = 7000 -----------(3)
Multiplying (2) by 4: 4(5x + 2y) = 4(1400) => 20x + 8y = 5600 -------(4)
Subtracting (3) from (4): -7y = -1400 => y = -1400/-7 = 200
Substituting y = 200 in equation (1): 4x + 3(200) = 1400 => 4x + 600 = 1400 => 4x = 1400 - 600 = 800 => x = 800/4 = 200
Therefore the cost of a chair is Rs. 200 and the cost of a table is Rs. 200.
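For a quick check of both examples and the word problem, the ratio test and the elimination result can be reproduced in a few lines of Python (the names are illustrative):

import numpy as np

def classify(a1, b1, c1, a2, b2, c2):
    # Ratio test for a1x + b1y + c1 = 0 and a2x + b2y + c2 = 0 (cross-products avoid division)
    if a1 * b2 != a2 * b1:
        return "unique solution"
    if b1 * c2 == b2 * c1 and a1 * c2 == a2 * c1:
        return "infinitely many solutions"
    return "no solution"

print(classify(1, 3, -8, 3, 9, -24))            # Example 2: infinitely many solutions

# Word problem: 4x + 3y = 1400 and 5x + 2y = 1400
A = np.array([[4.0, 3.0], [5.0, 2.0]])
b = np.array([1400.0, 1400.0])
print(np.linalg.solve(A, b))                    # [200. 200.] -> chair Rs. 200, table Rs. 200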
Does anyone here understand why he set the Velocity of Center Mass = 0 here? He keeps setting the Velocity of center mass , and acceleration of center mass(on other questions) to zero which i dont comprehend why? @amanuel2 Yes, this is a conservation of momentum question. The initial momentum is zero, and since there are no external forces, after she throws the 1st wrench the sum of her momentum plus the momentum of the thrown wrench is zero, and the centre of mass is still at the origin. I was just reading a sci-fi novel where physics "breaks down". While of course fiction is fiction and I don't expect this to happen in real life, when I tired to contemplate the concept I find that I cannot even imagine what it would mean for physics to break down. Is my imagination too limited o... The phase-space formulation of quantum mechanics places the position and momentum variables on equal footing, in phase space. In contrast, the Schrödinger picture uses the position or momentum representations (see also position and momentum space). The two key features of the phase-space formulation are that the quantum state is described by a quasiprobability distribution (instead of a wave function, state vector, or density matrix) and operator multiplication is replaced by a star product.The theory was fully developed by Hilbrand Groenewold in 1946 in his PhD thesis, and independently by Joe... not exactly identical however Also typo: Wavefunction does not really have an energy, it is the quantum state that has a spectrum of energy eigenvalues Since Hamilton's equation of motion in classical physics is $$\frac{d}{dt} \begin{pmatrix} x \\ p \end{pmatrix} = \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix} \nabla H(x,p) \, ,$$ why does everyone make a big deal about Schrodinger's equation, which is $$\frac{d}{dt} \begin{pmatrix} \text{Re}\Psi \\ \text{Im}\Psi \end{pmatrix} = \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix} \hat H \begin{pmatrix} \text{Re}\Psi \\ \text{Im}\Psi \end{pmatrix} \, ?$$ Oh by the way, the Hamiltonian is a stupid quantity. We should always work with $H / \hbar$, which has dimensions of frequency. @DanielSank I think you should post that question. I don't recall many looked at the two Hamilton equations together in this matrix form before, which really highlight the similarities between them (even though technically speaking the schroedinger equation is based on quantising Hamiltonian mechanics) and yes you are correct about the $\nabla^2$ thing. I got too used to the position basis @DanielSank The big deal is not the equation itself, but the meaning of the variables. The form of the equation itself just says "the Hamiltonian is the generator of time translation", but surely you'll agree that classical position and momentum evolving in time are a rather different notion than the wavefunction of QM evolving in time. If you want to make the similarity really obvious, just write the evolution equations for the observables. The classical equation is literally Heisenberg's evolution equation with the Poisson bracket instead of the commutator, no pesky additional $\nabla$ or what not The big deal many introductory quantum texts make about the Schrödinger equation is due to the fact that their target audience are usually people who are not expected to be trained in classical Hamiltonian mechanics. No time remotely soon, as far as things seem. Just the amount of material required for an undertaking like that would be exceptional. 
It doesn't even seem like we're remotely near the advancement required to take advantage of such a project, let alone organize one. I'd be honestly skeptical of humans ever reaching that point. It's cool to think about, but so much would have to change that trying to estimate it would be pointless currently (lol) talk about raping the planet(s)... re dyson sphere, solar energy is a simplified version right? which is advancing. what about orbiting solar energy harvesting? maybe not as far away. kurzgesagt also has a video on a space elevator, its very hard but expect that to be built decades earlier, and if it doesnt show up, maybe no hope for a dyson sphere... o_O BTW @DanielSank Do you know where I can go to wash off my karma? I just wrote a rather negative (though well-deserved, and as thorough and impartial as I could make it) referee report. And I'd rather it not come back to bite me on my next go-round as an author o.o
Why All These Stresses and Strains? In structural mechanics you will come across a plethora of stress and strain definitions. It may be a Second Piola-Kirchhoff Stress or a Logarithmic Strain. In this blog post we will investigate these quantities, discuss why there is a need for so many variations of stresses and strains, and illuminate the consequences for you as a finite element analyst. The defining tensor expressions and transformations can be found in many textbooks, as well as through some web links at the end of this blog post, so they will not be given in detail here. The Tensile Test When evaluating the mechanical data of a material, it is common to perform a uniaxial tension test. What is actually measured is a force versus displacement curve, but in order to make these results independent of specimen size, the results are usually presented as stress versus strain. If the deformations are large enough, one question then is: do you compute the stress based on the original cross-sectional area of the specimen, or based on the current area? The answer is that both definitions are used, and are called Nominal stress and True stress, respectively. A second, and not so obvious, question is how to measure the relative elongation, i.e. the strain. The engineering strain is defined as the ratio between the elongation and the original length, \epsilon_{eng} = \frac{L-L_0}{L_0}. For larger stretches, however, it is more common to use either the stretch \lambda=\frac{L}{L_0} or the true strain (logarithmic strain) \epsilon_{true} = \log\frac{L}{L_0} = \log \lambda. The true strain is more common in metal testing, since it is a quantity suitable for many plasticity models. For materials with a very large possible elongation, like rubber, the stretch is a more common parameter. Note that for the undeformed material, the stretch is \lambda=1. In order to make use of the measured data in an analysis, you must make sure of the following two things: How the stress and strain are defined in the test In what form your analysis software expects it for a specific material model The transformation of the uniaxial data is not difficult, but it must not be forgotten. Stress-strain curves for the same tensile test. Geometric Nonlinearity Most structural mechanics problems can be analyzed under the assumption that the deformations are so small compared to the dimensions of the structure, that the equations of equilibrium can be formulated for the undeformed geometry. In this case, the distinctions between different stress and strain measures disappear. If displacements, rotations, or strains become large enough, then geometric nonlinearity must be taken into account. This is when we start to consider that area elements actually change, that there is a distinction between an original length and a deformed length, and that directions may change during the deformation. There are several mathematically equivalent ways of representing such finite deformations. For the uniaxial test above, the different representations are rather straight-forward. In real life however, geometries are three-dimensional, have multiaxial stress states, and might rotate in space. Even if we just consider the same tensile test, keep the stress and strain fixed at a certain level, and then rotate the specimen, questions arise. What results can we expect? Are the values of the stress and strain components expected to change or not? 
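Returning to the tensile-test quantities defined above, the conversions between the engineering and true measures are easy to script. The following Python sketch is only an illustration; it assumes a uniform, volume-preserving deformation when rescaling the nominal stress, which is a common textbook simplification rather than something stated in this post.

import numpy as np

def engineering_to_true(eps_eng, sigma_nom):
    # stretch = L / L0; true (logarithmic) strain = log(stretch)
    stretch = 1.0 + eps_eng
    eps_true = np.log(stretch)
    # Assuming constant volume, the current area is A0 / stretch,
    # so the true stress is the nominal stress times the stretch.
    sigma_true = sigma_nom * stretch
    return eps_true, sigma_true

eps = np.array([0.0, 0.1, 0.5])
sig = np.array([0.0, 200.0, 350.0])        # nominal stress, e.g. in MPa
print(engineering_to_true(eps, sig))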
Stress Measures The most fundamental and commonly used stress quantity is the Cauchy stress, also known as the true stress. It is defined by studying the forces acting on an infinitesimal area element in the deformed body. Both the force components and the normal to the area have fixed directions in space. This means that if a stressed body is subjected to a pure rotation, the actual values of the stress components will change. What was originally a uniaxial stress state might be transformed into a full tensor with both normal and shear stress components. In many cases, this is neither what you want to use nor what you would expect. Consider for example an orthotropic material with fibers having a certain orientation. It is much more plausible that you want to see the stress in the fiber direction, even if the component is rotated. The Second Piola-Kirchhoff stress has this property. It is defined along the material directions. In the figure below, an originally straight cantilever beam has been subjected to bending by a pure moment at the tip. The xx-component of the Cauchy stress (top) and Second Piola-Kirchhoff stress (below) are shown. Since the stress is physically directed along the beam, the xx-component of the Cauchy stress (which is related to the global x-direction) decreases with the deflection. The Second Piola-Kirchhoff stress however, has the same through-thickness distribution all along the beam, even in the deformed configuration. Cauchy and Second Piola-Kirchhoff stress for an initially straight beam with constant bending moment. Another stress measure that you may encounter is the First Piola-Kirchhoff stress. It is a multiaxial generalization of the nominal (or engineering) stress. The stress is defined as the force in the current configuration acting on the original area. The First Piola-Kirchhoff is an unsymmetric tensor, and is for that reason less attractive to work with. Sometimes you may also encounter the Kirchhoff stress. The Kirchhoff stress is just the Cauchy stress scaled by the volume change. It has little physical significance, but can be convenient in some mathematical and numerical operations. Unfortunately, even without a rotation, the actual values of all these stress representations are not the same. All of them scale differently with respect to local volume changes and stretches. This is illustrated in the graph below. The xx-component of several stress measures are plotted at the fixed end of the beam, where the beam axis coincides with the x-axis. In the center of the beam, where strains, and thereby volume changes are small, all values approach each other. So for a case with large rotation but small strains, the stress representations can be seen as pure rotations of the same stress tensor. The distribution of axial stress at the fixed end of the beam. If you want to compute the resulting force or a moment on a certain boundary, there are really only two possible choices: Either integrate the Cauchy stress over the deformed boundary, or integrate the First Piola-Kirchhoff stress over the same boundary in the undeformed configuration. In COMSOL Multiphysics this corresponds to selecting either “Spatial frame” or “Material frame” in the settings for the integration operator. Strain Measures When investigating the uniaxial tensile test above, three different representations of the strain were introduced. It is possible to generalize all of them to multiaxial cases, but for the true strain this is not trivial. 
It has to be done through a representation in the principal strain directions because that is the only way to take the logarithm of a tensor. The general tensor representation of the logarithmic strain is often called Hencky strain. There are also many other possible representations of the deformation. Any reasonable representation however, must be able to represent a rigid rotation of an unstrained body without producing any strain. The engineering strain fails here, thus it cannot be used for general geometrically nonlinear cases. One common choice for representing large strains is the Green-Lagrange strain. It contains derivatives of the displacements with respect to the original configuration. The values therefore represent strains in material directions, similar to the behavior of the Second Piola-Kirchhoff stress. This allows a physical interpretation, but it must be realized that even for a uniaxial case, the Green-Lagrange strain is strongly nonlinear with respect to the displacement. If an object is stretched to twice its original length, the Green-Lagrange strain is 1.5 in the stretching direction. If the object is compressed to half its length, the strain would read -0.375. An even more fundamental quantity is the deformation gradient, \mathbf F, which contains the derivatives of the deformed coordinates with respect to the original coordinates, \mathbf F = \frac{\partial \mathbf x}{\partial \mathbf X}. The deformation gradient contains all information about the local deformation in the solid, and can be used to form many other strain quantities. As an example, the Green-Lagrange strain is \frac{1}{2} (\mathbf{F}^T \mathbf F-\mathbf I). A similar strain tensor, but based on derivatives with respect to coordinates in the deformed configuration, is the Almansi strain tensor, \frac{1}{2} ( \mathbf I-( \mathbf{F} \mathbf F^T)^{-1}). The Almansi strain tensor will then refer to directions fixed in space.

Conjugate Quantities

A general way to express the continuum mechanics problem is by using a weak formulation. In mechanics this is known as the principle of virtual work, which states that the internal work done by an infinitesimal strain variation operating on the current stresses equals the external work done by a corresponding virtual displacement operating on the loads. The stress and strain measures must then be selected so that their product gives an accurate energy density. This energy density may be related either to the undeformed or deformed volume, depending on whether the internal virtual work is integrated over the original or the deformed geometry. In the table below, some corresponding conjugate stress-strain pairs are summarized:

Strain | Stress | Symmetry | Volume | Orientation
Engineering strain (based on deformed geometry); true strain; Almansi strain | Cauchy (true stress) | Symmetric | Deformed | Spatial
Engineering strain (based on deformed geometry); true strain; Almansi strain | Kirchhoff | Symmetric | Original | Spatial
Deformation gradient | First Piola-Kirchhoff (nominal stress) | Non-symmetric | Original | Mixed
Green-Lagrange strain | Second Piola-Kirchhoff (material stress) | Symmetric | Original | Material

In the Solid Mechanics interface in COMSOL Multiphysics, the principle of virtual work is always expressed in the undeformed geometry (the "Material frame"). Green-Lagrange strains and Second Piola-Kirchhoff stresses are then used. Such a formulation is sometimes called a "Total Lagrangian" formulation.
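Before the closing remarks, here is a small NumPy sketch that evaluates the measures discussed in this post from a given deformation gradient: the Green-Lagrange and Almansi strains, and the Kirchhoff, first and second Piola-Kirchhoff stresses obtained from a Cauchy stress via the standard transformations S = J F^-1 sigma F^-T and P = F S. The numbers are purely illustrative.

import numpy as np

def measures(F, sigma):
    I = np.eye(3)
    E = 0.5 * (F.T @ F - I)                     # Green-Lagrange strain
    e = 0.5 * (I - np.linalg.inv(F @ F.T))      # Almansi strain
    J = np.linalg.det(F)                        # volume change
    tau = J * sigma                             # Kirchhoff stress
    S = J * np.linalg.inv(F) @ sigma @ np.linalg.inv(F).T   # 2nd Piola-Kirchhoff
    P = F @ S                                   # 1st Piola-Kirchhoff
    return E, e, tau, S, P

# Uniaxial stretch to twice the original length in x
F = np.diag([2.0, 1.0, 1.0])
sigma = np.diag([100.0, 0.0, 0.0])
E, e, tau, S, P = measures(F, sigma)
print(E[0, 0], e[0, 0])    # 1.5 (as quoted above for a stretch of 2) and 0.375
print(S[0, 0], P[0, 0])    # 50.0 and 100.0 for this particular F and sigma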
A formulation that is instead based on quantities in the current configuration is called an "Updated Lagrangian" formulation.

Additional Resources on Stresses and Strains
I am trying to solve the 8-puzzle using two heuristics, which are: Pieces out of place and Distance Manhattan. After consulting several websites for example see and some books, I'm with some doubts, ... I am trying to write my own Allan Deviation calculator in Mathematica using the definition,$$\sigma(\tau)^{2}=\frac{1}{2M-2}\sum_{i=1}^{N-1}(\bar{y}_{i+1} - \bar{y}_{i})^{2}\text{.}$$Where the $\bar{... I am trying to repeat an initial condition many times such that I get a desired final output. ONE final output depends on many subsequent calculations from ONE initial condition. But I want to get a ... I am attempting to create a loop (either Do, For or While) that will continue to take antiderivatives of f(x)= x until the area under it between 0 and 10 is greater than 1000. And whichever function ...
Difference between revisions of "Lower attic"
From Cantor's Attic
Line 7: Line 7:
* [[Gamma | $\Gamma$]] * [[Gamma | $\Gamma$]]
* [[Church-Kleene omega_1 | $\omega_1^{ck}$]] * [[Church-Kleene omega_1 | $\omega_1^{ck}$]]
* [[epsilon0 | $\epsilon_0$]] * [[epsilon0 | $\epsilon_0$]]
* [[small countable ordinals | small countably infinite ordinals]] * [[small countable ordinals | small countably infinite ordinals]]
* [[Hilberts hotel | Hilbert's hotel]] * [[Hilberts hotel | Hilbert's hotel]]
* [[<math>\omega</math>]] * [[<math>\omega</math>]]
Revision as of 21:14, 27 December 2011
Welcome to the lower attic, where we store the comparatively smaller notions of infinity. Roughly speaking, this is the realm of countable ordinals and their friends.
Up to The middle attic
Difference between revisions of "Lower attic" From Cantor's Attic m (removing superfluous bullet points) Line 17: Line 17: * the [[Feferman-Schütte]] ordinal [[Feferman-Schütte | $\Gamma_0$]] * the [[Feferman-Schütte]] ordinal [[Feferman-Schütte | $\Gamma_0$]] * [[epsilon naught | $\epsilon_0$]] and the hierarchy of [[epsilon naught#epsilon_numbers | $\epsilon_\alpha$ numbers]] * [[epsilon naught | $\epsilon_0$]] and the hierarchy of [[epsilon naught#epsilon_numbers | $\epsilon_\alpha$ numbers]] − * the [[omega one chess | omega one of chess]], [[omega one chess| $\omega_1^{\ + * the [[omega one chess | omega one of chess]], [[omega one chess| $\omega_1^{\chess}$]] * [[indecomposable]] ordinal * [[indecomposable]] ordinal * the [[small countable ordinals]], such as [[small countable ordinals | $\omega,\omega+1,\ldots,\omega\cdot 2,\ldots,\omega^2,\ldots,\omega^\omega,\ldots,\omega^{\omega^\omega},\ldots$]] up to [[epsilon naught | $\epsilon_0$]] * the [[small countable ordinals]], such as [[small countable ordinals | $\omega,\omega+1,\ldots,\omega\cdot 2,\ldots,\omega^2,\ldots,\omega^\omega,\ldots,\omega^{\omega^\omega},\ldots$]] up to [[epsilon naught | $\epsilon_0$]] Revision as of 06:57, 27 July 2013 Welcome to the lower attic, where the countably infinite ordinals climb ever higher, one upon another, in an eternal self-similar reflecting ascent. $\omega_1$, the first uncountable ordinal, and the other uncountable cardinals of the middle attic stable ordinals The ordinals of infinite time Turing machines, including admissible ordinals and relativized Church-Kleene $\omega_1^x$ Church-Kleene $\omega_1^{ck}$, the supremum of the computable ordinals the Bachmann-Howard ordinal the large Veblen ordinal the small Veblen ordinal the Feferman-Schütte ordinal $\Gamma_0$ $\epsilon_0$ and the hierarchy of $\epsilon_\alpha$ numbers the omega one of chess, $\omega_1^{\mathfrak{Ch}}$, $\omega_1^{\mathfrak{Ch}_{\!\!\!\!\sim}}$ indecomposable ordinal the small countable ordinals, such as $\omega,\omega+1,\ldots,\omega\cdot 2,\ldots,\omega^2,\ldots,\omega^\omega,\ldots,\omega^{\omega^\omega},\ldots$ up to $\epsilon_0$ Hilbert's hotel and other toys in the playroom $\omega$, the smallest infinity down to the parlour, where large finite numbers dream
DFT-1/2 and DFT-PPS density functional methods for electronic structure calculations¶

Version: 2017.0

The 2017 release of QuantumATK introduces two novel density functional corrections for computing the electronic structure of semiconductors and insulators: the DFT-1/2 and pseudopotential projector shift (DFT-PPS) methods. This tutorial gives an introduction to how to use these methods.

DFT-1/2 is often also denoted LDA-1/2 or GGA-1/2, and is a semi-empirical approach to correcting the self-interaction error in local and semi-local exchange-correlation density functionals for extended systems, similar in spirit to the DFT+U method. As such, it can be viewed as an alternative to the TB09-Meta-GGA method with broadly the same aims: to improve the description of conduction-band energy levels and band gaps. Just like TB09-MGGA, the DFT-1/2 method is not suitable for calculations that rely on total energy and forces, e.g., geometry optimization.

PPS is an abbreviation for pseudopotential projector shift. This method introduces empirical shifts of the nonlocal projectors in the SG15 pseudopotentials, in the spirit of the empirical pseudopotentials proposed by Zunger and co-workers in [WZ95]. The projector shifts have been adjusted to reproduce technologically important properties of semiconductors such as the fundamental band gap and lattice constant. The PPS method currently works out-of-the-box for the elements silicon and germanium, but may be used for other elements also by manually specifying the appropriate projector shifts (these must somehow first be determined). Importantly, the PPS method does work for geometry optimization of semiconductor material structures.

DFT-1/2 methods¶

The DFT-1/2 method attempts to correct the DFT self-interaction error by defining an atomic self-energy potential that cancels the electron-hole self-interaction energy. This potential is calculated for atomic sites in the system, and is defined as the difference between the potential of the neutral atom and that of a charged ion resulting from the removal of a fraction of its charge, between 0 and 1 electrons. The total self-energy potential is the sum of these atomic potentials. The addition of the DFT-1/2 self-energy potential to the DFT Hamiltonian has been found to greatly improve band gaps for a wide range of semiconducting and insulating systems [FMT11]. For more information, see the QuantumATK Manual entry on the DFT-1/2 method.

Important

Note also that not all elements in the system necessarily require the DFT-1/2 correction; it is generally advisable only to add this to the anionic species, and leave the cationic species as normal. Default DFT-1/2 parameters are available in QuantumATK; these have been optimized against a wide range of materials, and should improve upon the standard LDA or GGA band gap in most cases.

InP bandstructure using PBE-1/2¶

Open the calculator widget to adjust the calculator parameters. Select a 9x9x9 k-point grid, and navigate to the Basis set/exchange correlation tab. Note that the default exchange-correlation method is "GGA" (PBE). Enable the DFT-1/2 correction by ticking the checkbox indicated below.
#----------------------------------------# Exchange-Correlation#----------------------------------------exchange_correlation = GGAHalf.PBE that is, GGA-1/2 using the PBE functional, which might also be denoted PBE-1/2.You may download the script here if needed: InP.py or from a terminal: $ atkpython InP.py > InP.log The calculation will be very fast.Once done, use the Bandstructure Analyzer to plot the resulting band structure.You will correctly find that InP is a direct-gap semiconductorwith a band band gap of 1.46 eV,which is close to the experimental gap of 1.34 eV from Ref. [LRS96]. Tip You may want to zoom in on the bands around the Fermi level for a better view.There are two ways to do this: use the Zoom Tool (third icon from the left); or click the y-axis label to select it, then right-click andselect Edit itemin order to open the Axis Propertieswidget, where you can adjust the limits of the y-axis. III-V type semiconductor band gaps¶ The plots below show how the DFT-1/2 methods (LDA-1/2 and GGA-1/2) compare to LDA, GGA, and TB09-MGGA standard band gap calculations using default pseudopotentials and basis sets. A 9x9x9 k-point grid was used in all calculations, and experimental band gaps were adapted from Ref. [LRS96]. It is quite clear that the DFT-1/2 correction improves on standardLDA and GGA band gaps, which are usually too small or non-existing(no bar). TB09-MGGA calculations with self-consistently determined c-parameteralso improves band gaps in general. The red bars for Ge using LDA-1/2 and GGA-1/2 indicate that a directband gap is wrongly predicted (\(\Gamma\)–\(\Gamma\) instead of \(\Gamma\)–Las in experiments).The orange bar for GaP using GGA-1/2 indicates that the gap is correctlypredicted to be indirect, but that the band energy in the L-valleyis wrongly predicted to be a little lower than in the X-valley. Manually specifying DFT-1/2 parameters¶ It is of course possible to manually specify the DFT-1/2 parametersinstead of using the default ones.This is done by creating an instance of the DFTHalfParametersclass, which is then given as an argument to the basis set. For example, let’s consider the case of GaAs. The following script manually sets upthe As DFT-1/2 parameters identical to the default ones,and leaves the DFT-1/2 correction disabled for Ga, which is also defaultbehavior. Note the dft_half_parameters argument to the basis set: #----------------------------------------# Basis Set#----------------------------------------# LDA-1/2 parameters for Asdft_half_parameters = DFTHalfParameters( element=Arsenic, fractional_charge=[0.3, 0.0], cutoff_radius=4.0*Bohr, )# No LDA-1/2 parameters are needed for Ga (Disabled)basis_set = [ LDABasis.Arsenic_DoubleZetaPolarized( dft_half_parameters=dft_half_parameters), LDABasis.Gallium_DoubleZetaPolarized( dft_half_parameters=Disabled), ] Warning Choosing the appropriate DFT-1/2 parameters may be a very delicate matter, and great care has been taken in determining the default parameters. If you choose to use non-default DFT-1/2 parameters, the quality of those parameters is entirely your own responsibility as a user! QuantumWise does not offer support for determining custom DFT-1/2 parameters; we recommend in general that users stick to the default ones. DFT-PPS method¶ As already mentioned, the DFT-PPS method applies shifts to the nonlocal projectors in the SG15 pseudopotentials. 
The nonlocal part of the pseudopotential, \(\hat{V}_\text{nl}\), is modified according to where the sum is over all projectors \(p_{l}\), and \(\alpha_{l}\) is an empirical parameter that depends on the orbital angular momentum quantum number \(l\). Note that this approach does not increase the computational cost of DFT calculations! The required projector shift parameters have been optimized for silicon and germanium and for use with the PBE density functional and SG15 pseudopotentials only. These are implemented as separate basis sets: BasisGGASG15.Silicon_LowProjectorShiftBasisGGASG15.Silicon_MediumProjectorShiftBasisGGASG15.Silicon_HighProjectorShiftBasisGGASG15.Silicon_UltraProjectorShiftBasisGGASG15.Germanium_MediumProjectorShiftBasisGGASG15.Germanium_HighProjectorShiftBasisGGASG15.Germanium_UltraProjectorShift For each element, the same set of optimized projector shifts are applied inall SG15 basis sets. The script projector_shifts.py prints the built-in DFT-PPS parametersfor Si and Ge: basis_sets = [ BasisGGASG15.Silicon_MediumProjectorShift, # Si PPS-PBE SG15-Medium BasisGGASG15.Germanium_HighProjectorShift , # Ge PPS-PBE SG15-High ]for basis_set,element in zip(basis_sets,['Si','Ge']): print(element) projector_shift = basis_set.projectorShift() print("s-shift: %+.3f eV" % projector_shift.sOrbitalShift().inUnitsOf(eV)) print("p-shift: %+.3f eV" % projector_shift.pOrbitalShift().inUnitsOf(eV)) print("d-shift: %+.3f eV" % projector_shift.dOrbitalShift().inUnitsOf(eV)) Running it produces the output shown below. The d-shift for Si is 0.0 eVsince silicon has no d-electrons: Sis-shift: +21.330 eVp-shift: -1.430 eVd-shift: +0.000 eVGes-shift: +13.790 eVp-shift: +0.220 eVd-shift: -2.030 eV Si, SiGe, and Ge band gaps and lattice constants¶ The DFT-PPS method is enabled in the Script Generatorby selecting one of the ProjectorShift basis sets for Si or Ge,as indicated below. No other calculator settings need to be changed in orderto switch from ordinary PBE to DFT-PPS. One very convenient aspect of the DFT-PPS method is that it allows for geometry optimization (forces and stress minimization) just like ordinary GGA calculations do – in fact, the DFT-PPS parameters may often be chosen such as to give highly accurate semiconductor lattice constants while also producing accurate band gaps. In the following, you will consider bulk Si and Ge, and a simple 50/50SiGe alloy. These 3 bulk configurations are defined in the script bulks.py.The script pbe.py runs geometry optimization and band structure analysisfor all 3 configurations,while the script pps.py does the same using the DFT-PPS method. The latter script looks like shown below. 
Note the lines from 2 to 12 in that particular script, where the bulk configurations are imported from the external script, and a Python loop over these configurations and basis sets is set up: 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 # -*- coding: utf-8 -*-from bulks import si, ge, sigesetVerbosity(MinimalLog)configurations = [si,ge,sige]labels = ['Si','Ge','SiGe']si_basis = BasisGGASG15.Silicon_MediumProjectorShiftge_basis = BasisGGASG15.Germanium_HighProjectorShiftbasis_sets = [[si_basis], [ge_basis], [si_basis,ge_basis]]for bulk_configuration,label,basis_set in zip(configurations,labels,basis_sets): outfile = "%s_PPS.hdf5" % label # ------------------------------------------------------------- # Calculator # ------------------------------------------------------------- k_point_sampling = MonkhorstPackGrid( na=9, nb=9, nc=9, ) numerical_accuracy_parameters = NumericalAccuracyParameters( k_point_sampling=k_point_sampling, density_mesh_cutoff=90.0*Hartree, ) calculator = LCAOCalculator( basis_set=basis_set, numerical_accuracy_parameters=numerical_accuracy_parameters, ) bulk_configuration.setCalculator(calculator) nlprint(bulk_configuration) bulk_configuration.update() nlsave(outfile, bulk_configuration) # ------------------------------------------------------------- # Optimize Geometry # ------------------------------------------------------------- fix_atom_indices_0 = [0, 1] constraints = [FixAtomConstraints(fix_atom_indices_0)] bulk_configuration = OptimizeGeometry( bulk_configuration, max_forces=0.05*eV/Ang, max_stress=0.1*GPa, max_steps=200, max_step_length=0.2*Ang, constraints=constraints, trajectory_filename=None, optimizer_method=LBFGS(), constrain_bravais_lattice=True, ) nlsave(outfile, bulk_configuration) nlprint(bulk_configuration) # ------------------------------------------------------------- # Bandstructure # ------------------------------------------------------------- bandstructure = Bandstructure( configuration=bulk_configuration, route=['L', 'G', 'X'], points_per_segment=50 ) nlsave(outfile, bandstructure) $ atkpython pps.py > pps.log$ atkpython pbe.py > pbe.log The jobs should take around 5 minutes each. Use then the script plot_pps.py to plot the results.Running that script should produce the figure shown below,where the calculated indirect band gaps are shown in red circles,and the calculated lattice constants are shown in blue squares,both for PBE (dashed lines) and DFT-PPS (solid lines). The DFT-PPS band gaps of Si and Ge compare very well to experiments (block dots; from Ref. [LRS96]), and varies roughly linearly with germanium content. On the contrary, the ordinary PBE method predicts zero Ge band gap. The DFT-PPS lattice constants of pure Si and Ge are also closer to experiments (grey squares) than found with non-corrected PBE. Manually specifying DFT-PPS parameters¶ It is of course possible to manually specify the DFT-PPS projector shift parameters instead of using the default ones. This may be especially useful in DFT-PPS calculations for elements where there are no default DFT-PPS parameters (only Si and Ge currently have defaults). 
The projector shifts are supplied using an instance of the PseudoPotentialProjectorShift class,which is then given as an argument to the SG15 basis set.As an example, the following script manually sets the Si and Ge DFT-PPSparameters identical to the default ones: #----------------------------------------# Basis Set#----------------------------------------# Basis set for SiliconSiliconBasis_projector_shift = PseudoPotentialProjectorShift( s_orbital_shift=21.33*eV, p_orbital_shift=-1.43*eV, d_orbital_shift=0.0*eV, f_orbital_shift=0.0*eV, g_orbital_shift=0.0*eV )SiliconBasis = BasisGGASG15.Silicon_Medium(projector_shift=SiliconBasis_projector_shift)# Basis set for GermaniumGermaniumBasis_projector_shift = PseudoPotentialProjectorShift( s_orbital_shift=13.79*eV, p_orbital_shift=0.22*eV, d_orbital_shift=-2.03*eV, f_orbital_shift=0.0*eV, g_orbital_shift=0.0*eV )GermaniumBasis = BasisGGASG15.Germanium_High(projector_shift=GermaniumBasis_projector_shift)# Total basis setbasis_set = [ SiliconBasis, GermaniumBasis, ] Warning Choosing the appropriate DFT-PPS parameters may be a very delicate matter, and often requires a numerical optimization procedure. The SciPy software offers numerous such routines, but if you choose to use non-default DFT-PPS parameters, the quality of those parameters is entirely your own responsibility as a user! QuantumWise does not offer support for optimizing DFT-PPS parameters; we recommend in general that users stick to the default parameters, if they exist. References¶ [FMT08] Luiz G. Ferreira, Marcelo Marques, and Lara K. Teles. Approximation to density functional theory for the calculation of band gaps of semiconductors. Phys. Rev. B, 78:125116, Sep 2008. doi:10.1103/PhysRevB.78.125116. [FMT11] (1, 2) Luiz G. Ferreira, Marcelo Marques, and Lara K. Teles. Slater half-occupation technique revisited: the LDA-1/2 and GGA-1/2 approaches for atomic ionization energies and band gaps in semiconductors. AIP Adv., 1(3):032119, 2011. doi:10.1063/1.3624562. [LRS96] (1, 2, 3) M. Levinshtein, S. Rumyantsev, and M. Shur, editors. Handbook Series on Semiconductor Parameters. volume 1. World Scientific Publishing Cp. Pte. Ltd., Singapore, 1996. [WZ95] L.-W. Wang and A. Zunger. Local-density-derived semiempirical pseudopotentials. Phys. Rev. B, 51:17398–17416, 1995. doi:10.1103/PhysRevB.51.17398.
Spring 2018, Math 171 Week 9 Miscillaneous Poisson Process Problems Let \(X_1, X_2, \dots\overset{\mathrm{i.i.d}}{\sim} \mathrm{exp}(\lambda)\), and let \(N(t)\) be a poisson process with rate \(\lambda\) Show the equality \(P(\sum_{i=1}^n X_i \le t) = P(N(t) \ge n)\) Find an analogous equality for \(P(s \le \sum_{i=1}^n X_i \le t)\) (Answer) \(P(N(s) < n, N(t) \ge n)\) \(n+m\) cars approach \(n\) toll booths \((m < n)\). The time taken for a car to pay its toll is exponentially distributed with rate \(\lambda\). When a toll booth becomes available, the next car in line fills it instantly if there are any cars waiting. What is the expected time before the first car exits the tolls? (Answer) \(\frac{1}{n\lambda}\) What is the expected time before the \(m^\mathrm{th}\) car exits the tolls? (Answer) \(\frac{m}{n\lambda}\) What is the expected time before the last car exits the tollbooths? (Solution) Let \(\tau_1, \tau_2, \dots \tau_{n+m}\) be the times between successive cars exiting any of the tollbooths. Then \(\tau_1\), the time of the first car to exit any of the tolls, is the minimum of \(n\) exponential waiting times (for each of the tolls), and is therefore distributed exponential with parameter \(n\lambda\). Likewise, \(\tau_2, \dots \tau_{m+1}\) are each the minimum of \(n\) exponential waiting times each (since the tolls are all occupied by cars until \(m+1\) cars have passed) and are therefore exponential with parameter \(n\lambda\). After \(m+1\) cars have passed, there are more tollbooths than cars left, so there are no cars occupying some of the tollbooths. For this reason, \(\tau_{m+2}\) is distributed exponentially with parameter \((n-1)\lambda\), \(\tau_{m+3}\) is distributed exponentially with parameter \((n-2)\lambda\), and so on. The quantity of interest is therefore \[\begin{aligned}\mathbb{E}[\tau_1 + \dots + \tau_{m+n}] &= \mathbb{E}[\tau_1] + \dots + \mathbb{E}[\tau_{n+m}]\\&=\frac{m+1}{n\lambda} + \frac{1}{(n-1)\lambda} + \frac{1}{(n-2)\lambda} + \dots \frac{1}{\lambda}\end{aligned}\] What is the probability that the \((n+1)\)st car to enter a toll exits before the first car to enter a toll? (Answer) \(\frac{n-1}{n} \frac{1}{2}\) Compound Poisson Process Let \(X_1, X_2, \dots\) be a sequence of i.i.d. random variables with mean \(\mu\) and variance \(\sigma^2\), and let \(N(t)\) be a poisson process with rate \(\lambda\) independent of all the \(X_k\). Define \(S(t) = \sum_{k=1}^{N(t)}X_k\). Compute \(\mathbb{E}[S(t)]\) (Answer) \(t \lambda \mu\) Compute \(\mathrm{Cov}(S(t_1), S(t_2))\) for \(t_1 < t_2\) (Solution)\[\begin{aligned}\mathrm{Cov}(S(t_1), S(t_2)) &= \mathrm{Cov}(S(t_1), S(t_2) - S(t_1) + S(t_1))\\ &= \mathrm{Cov}(S(t_1), S(t_2) - S(t_1)) + \mathrm{Var}(S(t_1))\\&=\mathrm{Cov}(\sum_{k=1}^{N(t_1)}X_k, \sum_{k=N(t_1)+1}^{N(t_2)}X_k) + \mathrm{Var}(S(t_1))\\&=\mathrm{Var}(S(t_1))\end{aligned}\] Where the last equality comes from the independence of \(X_i\) and \(X_j\) for \(i \neq j\) and the fact that the sums \(\sum_{k=1}^{N(t_1)}X_k\) and \(\sum_{k=N(t_1)+1}^{N(t_2)}X_k\) do not overlap in indices. Compute \(\mathrm{Var}(S(t))\) (Answer) \(t\lambda(\sigma^2 + \mu^2)\) Suppose the \(X_k\) have MGF \(M_X(u)\). Compute the MGF of \(S(t)\) (Answer) \(e^{t\lambda(M_X(u)-1)}\) Thinning and Superposition Let \(N_1(t)\) and \(N_2(t)\) be independent poisson processes with rates \(\lambda_1\) and \(\lambda_2\). Find the probability that the \(m_1^{\mathrm{th}}\) arrival of \(N_1(t)\) occurs before the \(m_2^{\mathrm{th}}\) arrival of \(N_2(t)\). 
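The first identity above can also be checked numerically: the sum of n independent exponential(λ) times is gamma-distributed, so its CDF must agree with the Poisson tail probability. A small scipy sketch (the parameter values are arbitrary):

from scipy import stats

lam, n, t = 2.0, 5, 3.0

# P(X_1 + ... + X_n <= t): a sum of n exp(lam) variables is gamma(n, scale=1/lam)
lhs = stats.gamma.cdf(t, a=n, scale=1.0 / lam)

# P(N(t) >= n) for a Poisson process with rate lam
rhs = 1.0 - stats.poisson.cdf(n - 1, mu=lam * t)

print(lhs, rhs)   # the two values agree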
Let \(N(t)\) be a poisson process with rate \(\lambda\) and let each arrival of the process be identified as either type 1 with probability \(p\) or type 2 with probability \(1-p\). Find the probability that the \(m_1^{\mathrm{th}}\) arrival of type 1 occurs before the \(m_2^{\mathrm{th}}\) arrival of type 2. \(n+m\) cars approach \(n\) toll booths \((m < n)\). The time taken for a car to pass through a toll booth is exponentially distributed with rate \(\lambda\). When a toll booth becomes available, a car fills it instantly if there are any cars waiting. (Discussed) Describe the exits of first \(m\) cars from the toll booths as a superposition of poisson processes (Discussed) What is the probability that the first car comes through the leftmost tollbooth? (Discussed) What is the probability that all of the first \(m\) cars comes through the leftmost tollbooth? What is the probability that each of the first \(m\) cars go exit through a different tollbooth? (Answer) \(\frac{n-1}{n}\frac{n-2}{n} \dots \frac{n-m+1}{n}\)
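For the superposition question at the start of this set, a Monte Carlo estimate is easy to write and can be compared against the thinning argument: each arrival of the merged process is from N1 with probability λ1/(λ1+λ2), so the m1-th type-1 arrival comes first exactly when at least m1 of the first m1+m2-1 merged arrivals are type 1. A Python sketch with arbitrary parameters:

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def race_prob(lam1, lam2, m1, m2, trials=200_000):
    # P(m1-th arrival of N1 occurs before the m2-th arrival of N2), by simulation
    t1 = rng.exponential(1.0 / lam1, size=(trials, m1)).sum(axis=1)
    t2 = rng.exponential(1.0 / lam2, size=(trials, m2)).sum(axis=1)
    return (t1 < t2).mean()

lam1, lam2, m1, m2 = 1.0, 2.0, 2, 3
p = lam1 / (lam1 + lam2)
exact = 1.0 - stats.binom.cdf(m1 - 1, m1 + m2 - 1, p)
print(race_prob(lam1, lam2, m1, m2), exact)   # both close to 0.407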
Difference between revisions of "Kunen inconsistency"
Line 43: Line 43:
Although the existence of Reinhardt cardinals has now been refuted in ZFC and GBC, the term is used in the ZF context to refer to the critical point of a nontrivial elementary embedding $j:V\to V$ of the set-theoretic universe to itself. Although the existence of Reinhardt cardinals has now been refuted in ZFC and GBC, the term is used in the ZF context to refer to the critical point of a nontrivial elementary embedding $j:V\to V$ of the set-theoretic universe to itself.
{{References}} {{References}}
Revision as of 14:40, 2 October 2014
The Kunen inconsistency, the theorem showing that there can be no nontrivial elementary embedding from the universe to itself, remains a focal point of large cardinal set theory, marking a hard upper bound at the summit of the main ascent of the large cardinal hierarchy, the first outright refutation of a large cardinal axiom. On this main ascent, large cardinal axioms assert the existence of elementary embeddings $j:V\to M$ where $M$ exhibits increasing affinity with $V$ as one climbs the hierarchy. The $\theta$-strong cardinals, for example, have $V_\theta\subset M$; the $\lambda$-supercompact cardinals have $M^\lambda\subset M$; and the huge cardinals have $M^{j(\kappa)}\subset M$. The natural limit of this trend, first suggested by Reinhardt, is a nontrivial elementary embedding $j:V\to V$, the critical point of which is accordingly known as a Reinhardt cardinal. Shortly after this idea was introduced, however, Kunen famously proved that there are no such embeddings, and hence no Reinhardt cardinals in ZFC. Since that time, the inconsistency argument has been generalized by various authors, including Harada [1](p. 320-321), Hamkins, Kirmayer and Perlmutter [2], Woodin [1](p. 320-321), Zapletal [3] and Suzuki [4, 5].
There is no nontrivial elementary embedding $j:V\to V$ from the set-theoretic universe to itself.
There is no nontrivial elementary embedding $j:V[G]\to V$ of a set-forcing extension of the universe to the universe, and neither is there $j:V\to V[G]$ in the converse direction.
More generally, there is no nontrivial elementary embedding between two ground models of the universe.
More generally still, there is no nontrivial elementary embedding $j:M\to N$ when both $M$ and $N$ are eventually stationary correct.
There is no nontrivial elementary embedding $j:V\to \text{HOD}$, and neither is there $j:V\to M$ for a variety of other definable classes, including gHOD and the $\text{HOD}^\eta$, $\text{gHOD}^\eta$.
If $j:V\to M$ is elementary, then $V=\text{HOD}(M)$.
There is no nontrivial elementary embedding $j:\text{HOD}\to V$.
More generally, for any definable class $M$, there is no nontrivial elementary embedding $j:M\to V$.
There is no nontrivial elementary embedding $j:\text{HOD}\to\text{HOD}$ that is definable in $V$ from parameters.
It is not currently known whether the Kunen inconsistency may be undertaken in ZF. Nor is it known whether one may rule out nontrivial embeddings $j:\text{HOD}\to\text{HOD}$ even in ZFC.
Metamathematical issues
Kunen formalized his theorem in Kelley-Morse set theory, but it is also possible to prove it in the weaker system of Gödel-Bernays set theory. In each case, the embedding $j$ is a GBC class, and the elementarity of $j$ is asserted as a $\Sigma_1$-elementary embedding, which implies $\Sigma_n$-elementarity when the two models have the same ordinals.
Reinhardt cardinal Although the existence of Reinhardt cardinals has now been refuted in ZFC and GBC, the term is used in the ZF context to refer to the critical point of a nontrivial elementary embedding $j:V\to V$ of the set-theoretic universe to itself. Super Reinhardt cardinal A super Reinhardt cardinal $\kappa$, is a cardinal which is the critical point of elementary embeddings $j:V\to V$, with $j(\kappa)$ as large as desired. References Kanamori, Akihiro. Second, Springer-Verlag, Berlin, 2009. (Large cardinals in set theory from their beginnings, Paperback reprint of the 2003 edition) www bibtex The higher infinite. Zapletal, Jindrich. A new proof of Kunen's inconsistency.Proc Amer Math Soc 124(7):2203--2204, 1996. www DOI MR bibtex Suzuki, Akira. Non-existence of generic elementary embeddings into the ground model.Tsukuba J Math 22(2):343--347, 1998. MR bibtex | Abstract Suzuki, Akira. No elementary embedding from $V$ into $V$ is definable from parameters.J Symbolic Logic 64(4):1591--1594, 1999. www DOI MR bibtex
Suppose $f,\omega:{\bf R}\to{\bf R}$ are functions with $\omega(0)=0$. Suppose for some $\alpha>1$, we have $$ f(b)\leq f(a)+\omega(|b-a|)^\alpha\quad\hbox{for all } a,b\in{\bf R}\tag{*} $$ If $\omega$ is differentiable at $x=0$, show that $f\in C^\infty({\bf R})$. The original problem is given as follows: I think $\omega(|b-a|)^\alpha$ should be understood as $[\omega(|b-a|)]^\alpha$. The condition (*) can be written as $$ \frac{|f(x+h)-f(x)|}{h}\leq \frac{\omega(|h|)^\alpha-\omega(0)^\alpha}{h}. $$ This seems to imply the differentiability of $f$. But how would one expect that $f$ could be smooth?
Neurons (Activation Functions)¶

Neurons can be attached to any layer. The neuron of each layer will affect the output in the forward pass and the gradient in the backward pass automatically unless it is an identity neuron. Layers have an identity neuron by default [1].

class Neurons. Identity¶

An activation function that does not change its input.

class Neurons. ReLU¶

Rectified Linear Unit. During the forward pass, it inhibits all negative activations. In other words, it computes point-wise \(y=\max(0, x)\). The point-wise derivative for ReLU is\[\begin{split}\frac{dy}{dx} = \begin{cases}1 & x > 0 \\ 0 & x \leq 0\end{cases}\end{split}\]

Note

ReLU is actually not differentiable at 0. But it has the subdifferential \([0,1]\). Any value in that interval can be taken as a subderivative, and can be used in SGD if we generalize from gradient descent to subgradient descent. In the implementation, we choose the subgradient at \(x==0\) to be 0.

class Neurons. Sigmoid¶

Sigmoid is a smoothed step function that produces approximate 0 for negative input with large absolute values and approximate 1 for large positive inputs. The point-wise formula is \(y = 1/(1+e^{-x})\). The point-wise derivative is\[\frac{dy}{dx} = \frac{e^{-x}}{\left(1+e^{-x}\right)^2} = (1-y)y\]

class Neurons. Tanh¶

Tanh is a transformed version of Sigmoid that takes values in \((-1, 1)\) instead of the unit interval. The point-wise formula is \(y = (1-e^{-2x})/(1+e^{-2x})\). The point-wise derivative is\[\frac{dy}{dx} = 4e^{2x}/(e^{2x} + 1)^2 = 1-y^2\]

[1] This is actually not true: not all layers in Mocha support neurons. For example, data layers currently do not have neurons, but this feature could be added by simply adding a neuron property to the data layer type. However, for some layer types like loss layers or accuracy layers, it does not make much sense to have neurons.
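The formulas above are easy to sanity-check numerically. The snippet below is plain Python/NumPy rather than Mocha's Julia, and is only meant to verify the point-wise definitions and derivatives:

import numpy as np

x = np.linspace(-3.0, 3.0, 7)

relu = np.maximum(0.0, x)                       # y = max(0, x)
drelu = (x > 0).astype(float)                   # subgradient, taken as 0 at x == 0

sig = 1.0 / (1.0 + np.exp(-x))                  # y = 1 / (1 + e^-x)
dsig = (1.0 - sig) * sig                        # dy/dx = (1 - y) y

tanh = (1.0 - np.exp(-2 * x)) / (1.0 + np.exp(-2 * x))
dtanh = 1.0 - tanh ** 2                         # dy/dx = 1 - y^2

print(np.allclose(tanh, np.tanh(x)))            # True: matches the library tanh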
Search Now showing items 1-6 of 6 Forward-backward multiplicity correlations in pp collisions at √s = 0.9, 2.76 and 7 TeV (Springer, 2015-05-20) The strength of forward-backward (FB) multiplicity correlations is measured by the ALICE detector in proton-proton (pp) collisions at s√ = 0.9, 2.76 and 7 TeV. The measurement is performed in the central pseudorapidity ... Rapidity and transverse-momentum dependence of the inclusive J/$\mathbf{\psi}$ nuclear modification factor in p-Pb collisions at $\mathbf{\sqrt{\textit{s}_{NN}}}=5.02$ TeV (Springer, 2015-06) We have studied the transverse-momentum ($p_{\rm T}$) dependence of the inclusive J/$\psi$ production in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV, in three center-of-mass rapidity ($y_{\rm cms}$) regions, down to ... Multiplicity dependence of two-particle azimuthal correlations in pp collisions at the LHC (Springer, 2013-09) We present the measurements of particle pair yields per trigger particle obtained from di-hadron azimuthal correlations in pp collisions at $\sqrt{s}$=0.9, 2.76, and 7 TeV recorded with the ALICE detector. The yields are ... Measurement of charm and beauty production at central rapidity versus charged-particle multiplicity in proton-proton collisions at $\sqrt{s}$ = 7 TeV (Springer, 2015-09) Prompt D meson and non-prompt J/$\psi$ yields are studied as a function of the multiplicity of charged particles produced in inelastic proton-proton collisions at a centre-of-mass energy of $\sqrt{s}=7$ TeV. The results ... Coherent $\rho^0$ photoproduction in ultra-peripheral Pb-Pb collisions at $\mathbf{\sqrt{\textit{s}_{\rm NN}}} = 2.76$ TeV (Springer, 2015-09) We report the first measurement at the LHC of coherent photoproduction of $\rho^0$ mesons in ultra-peripheral Pb-Pb collisions. The invariant mass and transverse momentum distributions for $\rho^0$ production are studied ... Inclusive, prompt and non-prompt J/ψ production at mid-rapidity in Pb-Pb collisions at √sNN = 2.76 TeV (Springer, 2015-07-10) The transverse momentum (p T) dependence of the nuclear modification factor R AA and the centrality dependence of the average transverse momentum 〈p T〉 for inclusive J/ψ have been measured with ALICE for Pb-Pb collisions ...
Basis Of A Three Dimensional Space
In the preceding discussion, we talked about the basis of a plane. We can easily extend that discussion to observe that any three non-coplanar vectors can form a basis of three-dimensional space: in other words, any vector \(\vec r\) in 3-D space can be expressed as a linear combination of three arbitrary non-coplanar vectors. From this, it also follows that for three non-coplanar vectors \(\vec a,\vec b,\vec c,\) if their linear combination is zero, i.e., if \[\lambda \vec a + \mu \vec b + \gamma \vec c = \vec 0\quad\qquad\qquad\left( {{\text{where}}\,\,\lambda ,\mu ,\gamma \in \mathbb{R}} \right)\] then \(\lambda ,\mu \,\,{\text{and}}\,\,\gamma \) must all be zero. To prove this, assume the contrary, say \(\lambda \ne 0\). Then, we have \[\vec a = \left( { - \frac{\mu }{\lambda }} \right)\vec b + \left( { - \frac{\gamma }{\lambda }} \right)\vec c\] which means that \(\vec a\) can be written as a linear combination of \(\vec b\,\,{\text{and}}\,\,\vec c\). However, this would make \(\vec a,\,\,\vec b\,\,{\text{and}}\,\vec c\) coplanar, contradicting our initial supposition. Thus, \(\lambda ,\mu \,\,{\text{and}}\,\,\gamma \) must be zero. We finally come to what we mean by linearly independent and linearly dependent vectors.
Linearly independent vectors: A set of non-zero vectors \({\vec a_1},{\vec a_2},{\vec a_3}....,{\vec a_n}\) is said to be linearly independent if \[{\lambda _1}{\vec a_1} + {\lambda _2}{\vec a_2} + ... + {\lambda _n}{\vec a_n} = \vec 0\quad\implies\quad{\lambda _1} = {\lambda _2} = .... = {\lambda _n} = 0\] Thus, a linear combination of linearly independent vectors cannot be zero unless all the scalars used to form the linear combination are zero.
Linearly dependent vectors: A set of non-zero vectors \({\vec a_1},{\vec a_2},{\vec a_3},....,{\vec a_n}\) is said to be linearly dependent if there exist scalars \({\lambda _1},{\lambda _2}....{\lambda _n},\) not all zero, such that \[{\lambda _1}{\vec a_1} + {\lambda _2}{\vec a_2} + ..... + {\lambda _n}{\vec a_n} = \vec 0\]
For example, based on our previous discussions, we see that
(i) Two non-zero, non-collinear vectors are linearly independent.
(ii) Two collinear vectors are linearly dependent.
(iii) Three non-zero, non-coplanar vectors are linearly independent.
(iv) Three coplanar vectors are linearly dependent.
(v) Any four vectors in 3-D space are linearly dependent.
You are urged to prove all these assertions for yourself.
Example – 5 Let \({\vec a},\vec b\,\,{\text{and}}\,\,\vec c\) be non-coplanar vectors. Are the vectors \(2\vec a - \vec b + 3\vec c,\,\,\vec a + \vec b - 2\vec c\,\,{\text{and}}\,\,\vec a + \vec b - 3\vec c\) coplanar or non-coplanar?
Solution: Three vectors are coplanar if there exist scalars \(\lambda ,\mu \in \mathbb{R}\) using which one vector can be expressed as the linear combination of the other two. Let us try to find such scalars: \[2\vec a - \vec b + 3\vec c = \lambda \left( {\vec a + \vec b - 2\vec c} \right) + \mu \left( {\vec a + \vec b - 3\vec c} \right)\] \[ \Rightarrow \quad \left( {2 - \lambda - \mu } \right)\vec a + \left( { - 1 - \lambda - \mu } \right)\vec b + \left( {3 + 2\lambda + 3\mu } \right)\vec c = \vec 0\] Since \(\vec a,\vec b,\vec c\) are non-coplanar, we must have \[2 - \lambda - \mu = 0\] \[ - 1 - \lambda - \mu = 0\] \[3 + 2\lambda + 3\mu = 0\] This system, as can be easily verified, does not have a solution for \(\lambda \,\,{\text{and}}\,\,\mu \): the first two equations require \(\lambda + \mu = 2\) and \(\lambda + \mu = -1\) simultaneously, which is impossible.
Thus, we cannot find scalars for which one vector can be expressed as the linear combination of the other two, implying the three vectors must be non-coplanar. As an additional exercise, show that for three non-coplanar vectors \(\vec a,\,\,\vec b\,\,{\text{and}}\,\,\vec c\) , the vectors \(\vec a - \,2\vec b\, + 3\vec c,\,\,\vec a - 3\vec b + 5\vec c\,\,\,and\,\,\, - 2\vec a - \,3\vec b\, - 4\vec c\) are coplanar.
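For readers who like to double-check such computations, here is a small NumPy sketch (my own addition, not part of the original text) of Example 5: writing each vector by its coefficients in the basis \((\vec a, \vec b, \vec c)\), the three vectors are coplanar exactly when the coefficient matrix is singular.

```python
import numpy as np

# Coordinates of the three vectors from Example 5 in the basis (a, b, c),
# which is legitimate because a, b, c are non-coplanar and hence form a basis.
M = np.array([[2, -1,  3],    # 2a - b + 3c
              [1,  1, -2],    # a + b - 2c
              [1,  1, -3]])   # a + b - 3c

# Coplanar exactly when the coordinate matrix is singular (determinant zero).
print(np.linalg.det(M))   # about -3.0: nonzero, so the vectors are non-coplanar
```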
Medium Affects Wavelength We have already discussed that the speed of a wave is determined by the medium, and this is true for the speed of light as well. Recall that the frequency of a wave is set by the source, so the frequency of a wave does not change as it travels into a new medium. This is true also for light waves traveling through different media Because we know the wave speed and frequency are related by \(v_{wave} = f \lambda\), we see that if \(v_{wave}\) changes \(\lambda\) must also change. In other words, the wavelength of light changes when it travels into a different medium with a different allowed speed of light. Another way of understanding this is as follows: if \(v_{wave}\) is small, then the wave cannot travel very far in one period so \(\lambda\) is small, and if \(v_{wave}\) is large then one peak can travel much further in one period so \(\lambda\) is large. Normal Incidence Let us consider the concrete case of light traveling from air into water. First, we establish that light travels faster in air than it does in water. Imagine the plane wavefronts of the light traveling from air into water in the same direction as the normal, so the wavefronts are parallel with the surface. This is called normal incidence, as the light rays are travelling along the normal of the air-water boundary. We note that light travels faster in air than it does in water, and this makes the wavelength of light in water shorter. How Refraction Occurs The situation becomes much more interesting if the wavefronts of the light rays are not lined up exactly with the air-water boundary as shown below. Imagine what happens to one wavefront as it enters the water. Part of a wavefront enters the water and slows down while the rest of the wavefront stays in the air at its original speed. The wavefront in the air tries to speed ahead the wavefront in the water, but they still have to join smoothly at the boundary. This causes the whole wavefront to bend as demonstrated below. Traveling plane wave entering a region of lower wave velocity at an angle, illustrating the decrease in wavelength and change of direction (refraction) that results. Image used with permission (CC-BY-SA; Richard F. Lyon) A useful analogy is to consider a car traveling from the road (a fast medium) to mud (a slow medium). As the car travels, one tire goes forward faster than the other, which causes the entire car to turn. This is demonstrated visually below. Note that this analogy works for both the cases where the car goes from a fast medium to a slow one or from a slow medium to a fast one. Going back to our original picture, we can draw in rays perpendicular to our wavefronts, and notice that the direction of the light bends as we go from air to water. This bending occurs for any two different media in which the light waves have different speeds. This bending of light as it goes from one medium into another is called refraction. Notice that, although the light clearly bends, light rays travel in straight lines within each medium. Combining Reflection and Refraction You may wonder when a light ray hits a surface, how can we tell if it is going to be reflected or refracted? The answer is that a light ray is typically both reflected and refracted (we discuss this more in Total Internal Reflection). You might already have some familiarity with this from your experience with swimming pools. It is possible to see the sun from inside a swimming pool, so we know that light from the sun must be able to make it into the water. 
Therefore the sun’s rays are refracting as they enter the water. Someone standing on the side of the pool can also see the reflection of the sun on the water’s surface (known as the “glare”), so the sun’s rays must also be reflecting off the surface of the pool. There is no contradiction here: when the sun’s rays hit the surface, some rays reflect and other rays refract. Because energy must be conserved, reflected and refracted rays have less energy than the incident ray.
In refraction it is common to talk about the “fast” medium (with the higher wave speed) and the “slow” medium (with the lower wave speed). In the case of light going from air to water, the fast medium is air and the slow medium is water. The above examples of refraction showed that when light travels from a fast medium to a slow medium the light rays bend toward the normal.
Exercise: Show that when light travels from a slow medium to a fast medium, the light rays bend away from the normal. The diagram below helps to illustrate what is meant when we say the rays bend "toward" or "away from" the normal.
Refractive Indices
That is all the qualitative information we need about refraction. We now turn to the quantitative task of determining precisely which way a refracted ray travels as it goes from one medium to another. We know that the amount of bending depends on the speed of the wave in the medium. For convenience we introduce a new concept, the refractive index. The refractive index \(n\) for light in a particular medium is defined such that \[n\equiv \dfrac{\text{speed of light in vacuum}}{\text{speed of light in medium}} = \dfrac{c}{v_{wave}}\] The reason for introducing this quantity is that the speed of light in materials is typically \(10^7\text{ - }10^8\text{ m/s}\), while \(n\) for most materials is between one and five; the values of \(n\) are simply easier to work with than the values of \(v_{wave}\). From the definition of the refractive index, we know three things: \(n_{medium} \geq 1\), because nothing can travel faster than the speed of light in a vacuum. Given two media, the slower medium will have the larger \(n\) value and the faster one will have the smaller \(n\) value. \(n_{vacuum} = \dfrac{c}{c} = 1\). The refractive indices \(n = c/v_{medium}\) for some materials are given in this table:
Vacuum: 1 (exact)
Air: 1.0003
Water: 1.33
Glass (crown): 1.50 - 1.62
Glass (flint): 1.57 - 1.75
Diamond: 2.42
Silicon: 3.5
Germanium: 4.0
Eye: 1.3
Eye lens: 1.41
Snell's Law
With the definition of refractive index we can now give a quantitative description of refraction. We will call the refractive index of one of the media \(n_1\) and the angle of the light ray in that medium \(\theta_1\); for the second medium we will use \(n_2\) and \(\theta_2\). All these quantities are related by Snell’s law: \[n_1 \sin \theta_1 = n_2 \sin \theta_2\] This result is simply presented above, but it can actually be derived from what we already know about waves (a derivation is presented in the summary). For your work, it will probably be more convenient to ignore the labels “1” and “2” and instead use the names of the media. An example of applying Snell's Law to light traveling from water to air is presented below:
Example 3
If we placed a point source of light in a calm pool, how would the light bend coming into the air?
Solution
We know that rays come off the light source in all different directions, but we will only consider the light that exits the surface of the pool.
We know that the light ray that is at normal incidence (\(\theta_{water} = 0\)) will pass through without bending. This result, which we described earlier, follows clearly from Snell's Law. By applying Snell’s law separately to rays pointing in different directions, we see that as we transition from normal incidence to high \(\theta_{inc}\) values, the bending becomes more severe. We illustrate multiple refracted rays in the picture below. Example 4 A ray of light (in air) hits a glass prism horizontally as shown below. The glass has a refractive index \(n= 1.5\). At what angle does the light refract inside the glass? Solution We know from a previous example (and trigonometry) that the incoming light ray is at an angle \(\theta_{air} = 60°\) from the normal. The refractive index for air is about one, so we can use Snell’s law: \[n_{air} \sin \theta_{air} = n_{glass} \sin \theta_{glass}\] \[\implies \sin \theta_{glass} = \dfrac{n_{air}}{n_{glass}} \sin \theta_{air} = \dfrac{1}{1.5} \times \sin 60° = 0.58\] We can use the inverse sine function (\(\sin^{-1}\)) to find \(\theta_{glass} = 35°\). The path of the ray looks like this Contributors Authors of Phys7C (UC Davis Physics Department)
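As a quick numerical companion to Examples 3 and 4, here is a minimal Python sketch of Snell's law; the function name and the sample angles are my own choices for illustration, not part of the original text.

```python
import numpy as np

def snell_refracted_angle(n1, theta1_deg, n2):
    """Return the refracted angle (degrees) from n1*sin(theta1) = n2*sin(theta2)."""
    s = n1 * np.sin(np.radians(theta1_deg)) / n2
    if abs(s) > 1:
        return None   # no refracted ray: total internal reflection
    return np.degrees(np.arcsin(s))

# Example 4: air (n ~ 1) into glass with n = 1.5, incident at 60 degrees.
print(snell_refracted_angle(1.0, 60.0, 1.5))   # about 35 degrees, toward the normal

# Slow-to-fast case from the exercise: water (1.33) into air (1.0) at 30 degrees.
print(snell_refracted_angle(1.33, 30.0, 1.0))  # about 42 degrees, away from the normal
```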
We introduce the notion of a symplectic capacity relative to a coisotropicsubmanifold of a symplectic manifold, and we construct two examples of suchcapacities through modifications of the Hofer-Zehnder capacity. As aconsequence, we obtain a non-squeezing theorem for symplectic embeddingsrelative to coisotropic constraints and existence results for leafwise chordson energy surfaces. We study quiver gauge theories on the round and squashed seven-spheres, andorbifolds thereof. They arise by imposing $G$-equivariance on the homogeneousspace $G/H=\mathrm{SU}(4)/\mathrm{SU}(3)$ endowed with its Sasaki-Einsteinstructure, and $G/H=\mathrm{Sp}(2)/\mathrm{Sp}(1)$ as a 3-Sasakian manifold. Inboth cases we describe the equivariance conditions and the resulting quivers.We further study the moduli spaces of instantons on the metric cones over thesespaces by using the known description for Hermitian Yang-Mills instantons onCalabi-Yau cones. It is shown that the moduli space of instantons on thehyper-Kahler cone can be described as the intersection of three HermitianYang-Mills moduli spaces. We also study moduli spaces of translationallyinvariant instantons on the metric cone $\mathbb{R}^8/\mathbb{Z}_k$ over$S^7/\mathbb{Z}_k$. This paper shows that the integral equivariant cohomology Chern numberscompletely determine the equivariant geometric unitary bordism classes ofclosed unitary $G$-manifolds, which gives an affirmative answer to theconjecture posed by Guillemin--Ginzburg--Karshon in [20, Remark H.5, $\S3$,Appendix H], where $G$ is a torus. As a further application, we also obtain asatisfactory solution of [20, Question (A), $\S1.1$, Appendix H] on unitaryHamiltonian $G$-manifolds. Our key ingredients in the proof are the universaltoric genus defined by Buchstaber--Panov--Ray and the Kronecker pairing ofbordism and cobordism. Our approach heavily exploits Quillen's geometricinterpretation of homotopic unitary cobordism theory. Moreover, this method canalso be applied to the study of $({\Bbb Z}_2)^k$-equivariant unoriented bordismand can still derive the classical result of tom Dieck. We give a quantum version of the Danilov-Jurkiewicz presentation of thecohomology of a compact toric orbifold with projective coarse moduli space.More precisely, we construct a canonical isomorphism from a formal version ofthe Batyrev ring to the quantum orbifold cohomology at a canonical bulkdeformation. This isomorphism generalizes results of Givental, Iritani, andFukaya-Oh-Ohta-Ono for toric manifolds and Coates-Lee-Corti-Tseng for weightedprojective spaces. The proof uses a quantum version of Kirwan surjectivity andan equality of dimensions deduced using a toric minimal model program (tmmp).We show that there is a natural decomposition of the quantum cohomology wheresummands correspond to singularities in the tmmp, each giving rise to acollection of Hamiltonian non-displaceable tori. We give an $h$--principle type result for a class of Legendrian embeddings incontact manifolds of dimension at least $5$. These Legendrians, referred to asloose, have trivial pseudo-holomorphic invariants. We demonstrate they areclassified up to Legendrian isotopy by their smooth isotopy class equipped withan almost complex framing. This result is inherently high dimensional:analogous results in dimension $3$ are false. We give a construction of contact homology in the sense ofEliashberg--Givental--Hofer. Specifically, we construct coherent virtualfundamental cycles on the relevant compactified moduli spaces ofpseudo-holomorphic curves. 
Let $S$ be a compact oriented surface. We construct homogeneousquasimorphisms on $Diff(S, area)$, on $Diff_0(S, area)$ and on $Ham(S)$generalizing the constructions of Gambaudo-Ghys and Polterovich. We prove that there are infinitely many linearly independent homogeneousquasimorphisms on $Diff(S, area)$, on $Diff_0(S, area)$ and on $Ham(S)$ whoseabsolute values bound from below the topological entropy. In case when $S$ hasa positive genus, the quasimorphisms we construct on $Ham(S)$ are$C^0$-continuous. We define a bi-invariant metric on these groups, called the entropy metric,and show that it is unbounded. In particular, we reprove the fact that theautonomous metric on $Ham(S)$ is unbounded. We use quilted Floer theory to generalize Seidel's long exact sequence insymplectic Floer theory to fibered Dehn twists. We then apply it to constructversions of the Floer and Khovanov-Rozansky exact triangles in Lagrangian Floertheory of moduli spaces of bundles. We study the noncommutative Poincar\'e duality between the Poisson homologyand cohomology of unimodular Poisson algebras, and show that Kontsevich'sdeformation quantization as well as Koszul duality preserve the correspondingPoincar\'e duality. As a corollary, the Batalin-Vilkovisky algebra structuresthat naturally arise in these cases are all isomorphic. The purpose of this paper is to study stable representations of partiallyordered sets (posets) and compare it to the well known theory for quivers. Inparticular, we prove that every indecomposable representation of a poset offinite type is stable with respect to some weight and construct that weightexplicitly in terms of the dimension vector. We show that if a poset isprimitive then Coxeter transformations preserve stable representations. Whenthe base field is the field of complex numbers we establish the connectionbetween the polystable representations and the unitary $\chi$-representationsof posets. This connection explains the similarity of the results obtained inthe series of papers. We derive constraints on Lagrangian embeddings in completions of certainstable symplectic fillings with semisimple symplectic cohomologies. Manifoldswith these properties can be constructed by generalizing the boundary connectedsum operation to our setting, and are related to certain birational surgerieslike blow-downs and flips. As a consequence, there are many non-toric(non-compact) monotone symplectic manifolds whose wrapped Fukaya categories areproper. Let $K$ be a compact Lie group with complexification $G$, and let $V$ be aunitary $K$-module. We consider the real symplectic quotient $M_0$ at level $0$of the homogeneous quadratic moment map as well as the complex symplecticquotient, defined here as the complexification of $M_0$. We show that if $(V,G)$ is $3$-large, a condition that holds generically, then the complexsymplectic quotient has symplectic singularities and is graded Gorenstein. Thisin particular implies that the real symplectic quotient is graded Gorenstein.In the case that $K$ is a torus or $\operatorname{SU}_2$, we show that theseresults hold without the hypothesis that $(V,G)$ is $3$-large. For any asymptotically dynamically convex contact manifold $Y$, we show that$SH_*(W)=0$ is a property independent of the choice of topologically simple(i.e.\ $c_1(W)=0$ and $\pi_{1}(Y)\rightarrow \pi_1(W)$ is injective) Liouvillefilling $W$. In particular, if $Y$ is the boundary of a flexible Weinsteindomain, then any topologically simple Liouville filling $W$ has vanishingsymplectic homology. 
As a consequence, we answer a question of Lazarevpartially: a contact manifold $Y$ admitting flexible fillings determines theintegral cohomology of all the topologically simple Liouville fillings of $Y$.The vanishing result provides an obstruction to flexible fillability. As anapplication, we show that all Brieskorn manifolds of dimension $\ge 5$ cannotbe filled by flexible Weinstein manifolds. We introduce Morse branes in the Fukaya category of a holomorphic symplecticmanifold, with the goal of constructing tilting objects in the category. Wegive a construction of a class of Morse branes in the cotangent bundles, andapply it to give the holomorphic branes that represent the big tilting sheaveson flag varieties. For a semisimple Lie group $G_\mathbb{C}$ over $\mathbb{C}$, we study thehomotopy type of the symplectomorphism group of the cotangent bundle of theflag variety and its relation to the braid group. We prove a homotopyequivalence between the two groups in the case of$G_\mathbb{C}=SL_3(\mathbb{C})$, under the $SU(3)$-equivariancy condition onsymplectomorphisms. This paper generalizes the bordered-algebraic knot invariant introduced in anearlier paper, giving an invariant now with more algebraic structure. It alsointroduces signs to define these invariants with integral coefficients. Wedescribe effective computations of the resulting invariant. We prove a microlocal counterpart of categorical localization for Fukayacategories in the setting of the coherent-constructible correspondence. We use Lagrangian torus fibrations on the mirror $X$ of a toric Calabi-Yauthreefold $\check X$ to construct Lagrangian sections and various Lagrangianspheres on $X$. We then propose an explicit correspondence between the sectionsand line bundles on $\check X$ and between spheres and sheaves supported on thetoric divisors of $\check X$. We conjecture that these correspondences inducean embedding of the relevant derived Fukaya category of $X$ inside the derivedcategory of coherent sheaves on $\check X$. We define the contact homology algebra for any contact manifold and show thatit is an invariant of the contact manifold. More precisely, given a contactmanifold $(M,\xi)$ and some auxiliary data $\mathcal{D}$, we define an algebra$HC(\mathcal{D})$. If $\mathcal{D}_1$ and $\mathcal{D}_2$ are two choices ofauxiliary data for $(M,\xi)$, then $HC(\mathcal{D}_1)$ and $HC(\mathcal{D}_2)$are isomorphic. We use a simplified version of Kuranishi perturbation theory,consisting of semi-global Kuranishi charts. Using the wonderful compactification of a semisimple adjoint affine algebraicgroup G defined over an algebraically closed field k of arbitrarycharacteristic, we construct a natural compactification Y of the G-charactervariety of any finitely generated group F. When F is a free group, we show thatthis compactification is always simply connected with respect to the \'etalefundamental group, and when k=C it is also topologically simply connected. Forother groups F, we describe conditions for the compactification of the modulispace to be simply connected and give examples when these conditions aresatisfied, including closed surface groups and free abelian groups whenG=PGL(n,C). Additionally, when F is a free group we identify the boundarydivisors of Y in terms of previously studied moduli spaces, and we construct afamily of Poisson structures on Y and its boundary divisors arising fromBelavin-Drinfeld splittings of the double of the Lie algebra of G. 
In theappendix, authored by Sam Evens and Arlo Caine, we explain how to put a Poissonstructure on a quotient of a Poisson algebraic variety by the action of areductive Poisson algebraic group. Let $L \subset \mathbb R \times J^1(M)$ be a spin, exact Lagrangian cobordismin the symplectization of the 1-jet space of a smooth manifold $M$. Assume that$L$ has cylindrical Legendrian ends $\Lambda_\pm \subset J^1(M)$. It is wellknown that the Legendrian contact homology of $\Lambda_\pm$ can be defined withinteger coefficients, via a signed count of pseudo-holomorphic disks in thecotangent bundle of $M$. It is also known that this count can be lifted to amod 2 count of pseudo-holomorphic disks in the symplectization $\mathbb R\times J^1(M)$, and that $L$ induces a morphism between the $\mathbbZ_2$-valued DGA:s of the ends $\Lambda_\pm$ in a functorial way. We prove thatthis hold with integer coefficients as well. The proofs are built on thetechnique of orienting the moduli spaces of pseudo-holomorphic disks usingcapping operators at the Reeb chords. We give an expression for how the DGA:schange if we change the capping operators. We study cosmetic contact surgeries along transverse knots in the standardcontact 3-sphere, i.e. contact surgeries that yield again the standard contact3-sphere. The main result is that we can exclude non-trivial cosmetic contactsurgeries along all transverse knots not isotopic to the transverse unknot withself-linking number -1. As a corollary it follows that every transverse knot inthe standard contact 3-sphere is determined by the contactomorphism type of itsexteriors. Moreover, we give counterexamples to this for transverse links inthe standard contact 3-sphere. We define a new class of symplectic objects called "stops", which roughlyspeaking are Liouville hypersurfaces in the boundary of a Liouville domain.Locally, these can be viewed as pages of a compatible open book. To a Liouvilledomain with a collection of disjoint stops, we assign an $A_\infty$-categorycalled its partially wrapped Fukaya category. An exact Landau-Ginzburg modelgives rise to a stop, and the corresponding partially wrapped Fukaya categoryis meant to agree with the Fukaya category one is supposed to assign to theLandau-Ginzburg model. As evidence, we prove a formula that relates thesepartially wrapped Fukaya categories to the wrapped Fukaya category of theunderlying Liouville domain. This operation is mirror to removing a divisor. In v2, we also construct continuation functors without cascades, which shouldbe of independent interest. We prove that for a compact toric manifold whose anti-canonical divisor isnumerically effective, the Lagrangian Floer superpotential defined byFukaya-Oh-Ohto-Ono is equal to the superpotential written down by using thetoric mirror map under a convergence assumption. This gives a method to computeopen Gromov-Witten invariants using mirror symmetry. Consider the differential forms $A^*(L)$ on a Lagrangian submanifold $L\subset X$. Following ideas of Fukaya-Oh-Ohta-Ono, we construct a family ofcyclic unital curved $A_\infty$ structures on $A^*(L),$ parameterized by thecohomology of $X$ relative to $L.$ The family of $A_\infty$ structuressatisfies properties analogous to the axioms of Gromov-Witten theory. Ourconstruction is canonical up to $A_\infty$ pseudoisotopy. We work in thesituation that moduli spaces are regular and boundary evaluation maps aresubmersions, and thus we do not use the theory of the virtual fundamentalclass.
Let $k$ be a positive integer. For any positive integer $n\ge 2$, show that: $$\left[\dfrac{n}{\sqrt{3}}\right]+1>\dfrac{n^2}{\sqrt{3n^2-5}}>\dfrac{n}{\sqrt{3}}$$ where $[x]$ is the largest integer not greater than $x$. (As the discussion at the end makes clear, the constant $5$ in the denominator plays the role of the largest admissible value of $k$.) Let $q_n = \left\lfloor \frac{n}{\sqrt{3}}\right\rfloor + 1$. When $n \ge 11\sqrt{3}$, we have $$\frac{n}{\sqrt{3}q_n} \ge \frac{\frac{n}{\sqrt{3}}}{\frac{n}{\sqrt{3}}+1} = 1 - \frac{\sqrt{3}}{n+\sqrt{3}} \ge \frac{11}{12} \quad\implies\quad \frac{n^2}{3q_n^2} \ge \left(\frac{11}{12}\right)^2 > \frac{5}{6} $$ By definition, $\left\lfloor \frac{n}{\sqrt{3}}\right\rfloor$ is the largest integer less than or equal to $\frac{n}{\sqrt{3}}$. This implies $$\frac{n}{\sqrt{3}} < q_n \quad\implies\quad 3q_n^2 - n^2 > 0 \quad\implies\quad 3 q_n^2 - n^2 \ge 2$$ The last inequality holds because $3q_n^2 - n^2$ is an integer and the equation $3 q^2 - n^2 = 1$ has no integer solutions. Combining these, we find that for any $n \ge 20 > 11\sqrt{3}$, $$3 n^2 - \frac{n^4}{q_n^2} = 3\left(\frac{n^2}{3q_n^2}\right)(3q_n^2 - n^2) > 3 \left( \frac{5}{6}\right) 2 = 5$$ This leads to $$3n^2 - 5 > \frac{n^4}{q_n^2} \quad\iff\quad q_n > \frac{n^2}{\sqrt{3n^2 -5}} \quad\text{ for } n \ge 20 \tag{*1}$$ The lower bound is immediate: $\frac{n^2}{\sqrt{3n^2-5}} > \frac{n}{\sqrt{3}}$ is equivalent to $3n^2 > 3n^2 - 5$, which always holds. By brute force, one can verify that $(*1)$ also holds for $2 \le n \le 19$. As pointed out by Macavity in a comment, the largest admissible $k$ for $n = 5$ is $5$. This means the maximum value of $k$ which works for all $n$ is indeed $5$.
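The small cases $2 \le n \le 19$ that are dispatched "by brute force" above can be checked with a few lines of Python (a quick numerical verification, not part of the original argument):

```python
from math import sqrt, floor

# Check floor(n/sqrt(3)) + 1 > n^2/sqrt(3n^2 - 5) > n/sqrt(3) for 2 <= n <= 19,
# the range handled "by brute force" in the argument above.
for n in range(2, 20):
    q = floor(n / sqrt(3)) + 1
    mid = n * n / sqrt(3 * n * n - 5)
    assert q > mid > n / sqrt(3), n
print("verified for n = 2, ..., 19")
```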
$$\sum_{i=1}^n \frac1{4i-1}$$ I know I have to integrate the function but from what to find lower and upper bound. It sounds like you may be asking for a representation of $$\sum_{i=1}^n \frac{1}{4i-1}$$ as a Riemann sum estimate of two different integrals, one of which underestimates the true value of the integral (giving you an upper bound on your sum), and the other of which overestimates the true value of the integral (giving you a lower bound on your sum). The right-hand Riemann sum approximation of $$\int_0^n \frac{1}{4x-1} dx$$ with $\Delta x = 1$ is $\sum_{i=1}^n \frac{1}{4i-1}$ and will underestimate the integral. However, the integrand is undefined at $x = 1/4$, and so you're better off looking at $$\frac{1}{3} + \sum_{i=2}^n \frac{1}{4i-1}$$ as an underestimate of $$\frac{1}{3} + \int_1^n \frac{1}{4x-1} dx.$$ Similarly, the left-hand Riemann sum approximation of $$\int_0^n \frac{1}{4(x+1)-1} dx$$ with $\Delta x = 1$ is $\sum_{i=1}^n \frac{1}{4i-1}$ and will overestimate the integral. Drawing the Riemann sum approximations and the graphs of the functions will help to see this. Thus after putting it all together, we have $$\int_0^n \frac{1}{4x+3} dx < \sum_{i=1}^n \frac{1}{4i-1} \leq \frac{1}{3} + \int_1^n \frac{1}{4x-1} dx,$$ with equality holding in the second case only when $n=1$. $\sum_{i=1}^n \frac{1}{4i - 1} = \frac{1}{3} + \sum_{i=2}^n \frac{1}{4i-1}.$ Now apply the usual technique for obtaining a lower bound on the sum on the right. You can write $$\frac{1}{4i-1} = \frac{1}{4i} + \frac{1}{4i(4i-1)}.$$ Therefore $$\sum_{i=1}^n \frac{1}{4i-1} = \frac{1}{4} \sum_{i=1}^n \frac{1}{i} + \sum_{i=1}^n \frac{1}{4i(4i-1)}.$$ The second term is bounded (by comparison with the convergent infinite series $\sum_{i=1}^\infty 1/i^2$), and the first term is familiar.
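The two integral bounds are easy to test numerically. Here is a short Python sketch (my own addition) using the closed forms $\int_0^n \frac{dx}{4x+3} = \frac14\ln\frac{4n+3}{3}$ and $\int_1^n \frac{dx}{4x-1} = \frac14\ln\frac{4n-1}{3}$:

```python
from math import log

# Numerical check of the bounds derived above:
#   integral_0^n dx/(4x+3)  <  sum_{i=1}^n 1/(4i-1)  <=  1/3 + integral_1^n dx/(4x-1)
for n in (1, 5, 50, 1000):
    s = sum(1.0 / (4 * i - 1) for i in range(1, n + 1))
    lower = 0.25 * log((4 * n + 3) / 3)             # integral of 1/(4x+3) on [0, n]
    upper = 1.0 / 3 + 0.25 * log((4 * n - 1) / 3)   # 1/3 + integral of 1/(4x-1) on [1, n]
    print(n, round(lower, 4), round(s, 4), round(upper, 4))
    assert lower < s <= upper   # equality in the upper bound only at n = 1
```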
The first question to answer: which matrices satisfy $A^2 = I$ and $\det(A) = 1$? It suffices to note that since $A^2 - I = 0$, the minimal polynomial of $A$ must divide $x^2 - 1 = (x-1)(x + 1)$. Hence, $A$ must be diagonalizable with eigenvalues equal to $\pm 1$. Moreover, since the product of eigenvalues is equal to $\det(A)$ which is equal to $1$, the $-1$ eigenvalue must have even multiplicity. All together, we can characterize these matrices as those of the form$$A = S \pmatrix{-I_{2k}&0\\0&I_{n - 2k}}S^{-1}, \quad k = 0,1,\dots,\lfloor n/2 \rfloor$$ where $I_k$ denotes the $k \times k$ identity matrix and the matrices here are $n \times n$. The second question: which matrices satisfy $A^2 = -I$ and $\det(A) = -1$? In fact, there are no such real matrices. Because $A^2 + I = 0$, the (complex) eigenvalues of $A$ must solve $x^2 + 1 = 0$, which is to say that the eigenvalues of $A$ are $\pm i$. Because $A$ is a real matrix, its complex eigenvalues come in conjugate pairs. Thus, its determinant must have the form $\det(A) = (-i)^k(i)^k = 1 \neq -1$. An alternative explanation: the minimal polynomial $x^2 + 1$ of $A$ is an irreducible polynomial over $\Bbb R$, so the characteristic polynomial must have the form $\det(xI - A) = (x^2 + 1)^k$. Thus, we find that $A$ has even size, and that $\det(A) = \det(0I - A) = (0^2 + 1)^k = 1$. As I note in the comments above, there are no solutions satisfying $A^2 = 0$, and in fact no non-invertible solutions other than $A = 0$.
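A small NumPy sketch illustrating the first characterization (the particular $S$ and the $3 \times 3$ size are arbitrary choices of mine for illustration): build a matrix of the stated form and check the two properties.

```python
import numpy as np

rng = np.random.default_rng(0)

# Build A = S * diag(-1, -1, 1) * S^{-1}  (n = 3, k = 1) with a random invertible S,
# then confirm the two defining properties: A^2 = I and det(A) = 1.
S = rng.standard_normal((3, 3))
D = np.diag([-1.0, -1.0, 1.0])
A = S @ D @ np.linalg.inv(S)

print(np.allclose(A @ A, np.eye(3)))      # True: A^2 = I
print(np.round(np.linalg.det(A), 6))      # 1.0: the -1 eigenvalue has even multiplicity
```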
Your comments indicate that you are dubious about iterating operations on ordinals more than finitely many times, and very dubious about iterating them uncountably many times. This is a fundamental point in set theory. The danger is in thinking of recursive definitions as processes which need to be carried out, in which case our "finitary biases" get in the way. Instead, you should think of a recursive definition as "happening all at once." Essentially, we can show that every recursive description corresponds to a unique function, and what lets us do this is transfinite induction. (It shouldn't be surprising that "we can do recursion for as long as we can do induction.") Specifically, suppose that $F:Ord\rightarrow Ord$ is a function on ordinals (or rather, class function; for simplicity I'm assuming we're working in a theory like NBG that makes all this much simpler to say). For $\theta>0$ an ordinal, say that a function $G$ iterates $F$ along $\theta$ starting at $\alpha$ iff The domain of $G$ is $\theta$, $G(0)=\alpha$, for $\beta+1<\theta$ we have $G(\beta+1)=F(G(\beta))$, and for $\lambda<\theta$ a limit we have $G(\lambda)=\sup\{G(\beta): \beta<\lambda\}$. Incidentally, this last condition is really only a natural thing to do if $F$ is nondecreasing, but strictly speaking this works for any $F$. In principle, there could be many $G$ with this property, or none at all. However, it turns out that there is only ever exactly one: For every $F:Ord\rightarrow Ord$ (= the function to be iterated), $\theta>0$ (= the iteration length), and $\alpha$ (= the starting value), there is exactly one $G$ which iterates $F$ along $\theta$ starting at $\alpha$. Moreover, the $G$s "cohere" in the sense that if $G$ iterates $F$ along $\theta$ starting at $\alpha$ and $G'$ iterates $F$ along $\theta'$ starting at $\alpha$, with $\theta<\theta'$, then for each $\eta<\theta$ we have $G(\eta)=G'(\eta)$. So in some sense there is a unique way to iterate $F$ along $Ord$. The proof is by transfinite induction: fixing an arbitrary $F$ and $\alpha$, consider some $\theta$ such that the claim holds for all iteration lengths $<\theta$. Intuitively, if $\theta=\gamma+1$ we just take the $G$ for $\gamma$ and "stick one more value onto it," and if $\theta$ is a limit we "glue the earlier $G$s together." It's a good exercise to turn this vague hint into an actual proof. The sequence of $\beth$ numbers can be constructed in this way: $F$ is the map sending an ordinal $\alpha$ to the cardinality of the powerset of $\alpha$ (which, remember, is itself an ordinal - cardinals are just initial ordinals). The starting value $\alpha$ is $\omega$: this amounts to setting $\beth_0=\omega$. To determine what $\beth_\eta$ should be, we set $\theta=\eta+1$ - or really we pick any $\theta>\eta$, by the "coherence" point above it doesn't affect the answer.
Hi everyone, I'd like to know if the following is correct and if someone knows a better way to do it. Definition: Let $x>0$ and $\alpha$ be real numbers. We define the quantity $x^{\alpha}$ by the formula $\lim_{n\rightarrow\infty} x^{q_n}$, where $(q_n)$ is a sequence of rationals which converges to $\alpha$. (I've already shown that this is well defined.) Lemma: Let $r,s \in \mathbb{R}$ and $x\in \mathbb{R}^{>0}$. Then $(x^r)^s=x^{rs}$. Proof: Let $(r_m)$ and $(s_n)$ be sequences of rational numbers which converge to $r$ and $s$ respectively. \begin{align} (x^r)^s=\lim_{n\rightarrow \infty}\left( \lim_{m\rightarrow \infty}x^{r_m}\right)^{s_n}=\lim_{n\rightarrow \infty} \lim_{m\rightarrow \infty}\left((x^{r_m})^{s_n}\right) = \lim_{n\rightarrow \infty} \lim_{m\rightarrow \infty} x^{r_ms_n} \end{align} We will show that $\lim_{n\rightarrow \infty} \lim_{m\rightarrow \infty} x^{r_ms_n}= \lim_{n\rightarrow \infty} x^{rs_n}=x^{rs}$. Claim 1: For each fixed $n$, $\lim_{m\rightarrow \infty} x^{r_ms_n}=x^{rs_n}$. It will suffice to show that $$\lim_{m\rightarrow \infty}x^{r_ms_n-rs_n}= 1,$$ since the claim would then follow by the limit laws and the properties of rational exponents (because $x^{r_ms_n}=x^{r_ms_n-rs_n}x^{rs_n}$). We know by hypothesis that $r_m \rightarrow r$; thus, using the limit laws, we conclude that $r_ms_n-rs_n \rightarrow 0$ as $m\to\infty$ (with $n$ fixed). Write $t_m = r_ms_n-rs_n$. We have to show that $(x^{t_m}) \rightarrow 1$. Let $\varepsilon>0$ be given. We already know that $(x^{1/k}) \rightarrow 1$ and also $(x^{-1/k}) \rightarrow 1$ by the limit laws. So there is some $K$ for which $x^{1/K}$ and $x^{-1/K}$ are simultaneously $\varepsilon$-close to $1$. Let us fix $K$. Since $t_m$ converges to zero, there is some $M$ such that $|t_m|\le 1/K$ for all $m\ge M$. Thus $$-1/K \le t_m \le 1/K.$$ If $x>1$ we have $x^{-1/K}\le x^{t_m}\le x^{1/K}$, and in particular $x^{t_m}$ is $\varepsilon$-close to $1$. A similar argument works when $x<1$, just with the inequalities reversed (and the case $x=1$ is trivial). Hence $x^{t_m}$ converges to $1$ and the claim follows. Then $$\lim_{n\rightarrow \infty} \lim_{m\rightarrow \infty} x^{r_ms_n}=\lim_{n\rightarrow \infty} x^{rs_n}=x^{rs}$$ as desired. The only step I still need to justify is that the limit commutes with a fixed rational exponent, i.e. $\lim_n {a_n}^q=(\lim _n a_n)^q$, assuming that $a_n$ converges to a positive real number. Is this a correct argument? Thanks in advance.
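This is a proof question, but a quick numerical illustration (my own addition, using decimal truncations as the rational sequence) can make the definition feel concrete:

```python
from fractions import Fraction
import math

# Numerical illustration (not a proof): approximate x^alpha through rational
# exponents q_n -> alpha and watch x^{q_n} approach the usual value of x^alpha.
x, alpha = 3.0, math.sqrt(2)
for digits in range(1, 7):
    q = Fraction(round(alpha * 10**digits), 10**digits)   # rational approximation of alpha
    print(digits, float(q), x ** float(q))
print("target:", x ** alpha)
```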
On a question on this site there is an explanation of the algorithm Knuth gives in The Art Of Computer Programming to compute an approximation of $y = \log_bx$. Now, I understand why it works; anyway, the only question arising in my mind is: how can we pre-compute a table of logarithms with arguments of the type $\frac{2^k}{2^k-1}$? Or, generally speaking, is there an algorithm to compute a good approximation of $y = \log_b\frac{a^k}{a^k-1}$, considering such a logarithm as a special case? I see that the simplest case is when $a = b$. So we can write $y = k - \log_b(b^k-1)$. But then? Of course we cannot execute the same algorithm to compute $y = \log_b(b^k-1)$, for, unless $k=1$, the argument $x = b^k-1$ won't respect the initial condition of $1 \leq x < a$ Probably a solution would be to factorize $x$ in prime numbers and then sum the logarithms of each one, since once I read that Henry Briggs (who derived the fundamental idea behind this algorithm) found a clever way to take logarithms of prime numbers (this is Chapter Nine of his Arithmetica Logarithmica: see here); but, you know, I had no more motivation to inform myself on that as I got to the first page of his book: "Logarithms are numbers which, adjoined to numbers in proportions, mantain equal differences". I would rather like more "modern" explanations of the problem :)
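For what it is worth, one "modern" way to build such a table is to note that $\log_b\frac{2^k}{2^k-1} = -\log_b(1 - 2^{-k})$ and to use the series $-\ln(1-x) = \sum_{m\ge1} x^m/m$ with $x = 2^{-k}$, which converges very quickly. A small Python sketch of this (my own, with base $b=2$ chosen for concreteness):

```python
import math

# Precompute lg(2^k / (2^k - 1)) via 2^k/(2^k-1) = 1/(1 - 2^{-k}) and the series
# -ln(1 - x) = x + x^2/2 + x^3/3 + ...  with x = 2^{-k}, then convert to base 2.
def log2_table_entry(k, terms=60):
    x = 2.0 ** -k
    s, power = 0.0, x
    for m in range(1, terms + 1):
        s += power / m
        power *= x
    return s / math.log(2)          # convert the natural log to a base-2 log

for k in range(1, 6):
    exact = math.log2(2**k / (2**k - 1))
    print(k, log2_table_entry(k), exact)
```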
Today we will apply the ideas of the previous posts in a simple example. But first, we are going to answer the question from last week. What exactly is $h_{opt}$ if we assume that
$$f(x) = \frac{1}{\sqrt{2\pi}} \exp\left(\frac{-x^2}{2}\right)?$$
Since $f(x)$ is the density of the standard normal distribution, it is easy to see that $f^\prime(x)=(-x)f(x)$, so we have
$$\|f^\prime \|_2^2=\frac{1}{\sqrt{4\pi}}\int x^2\, \frac{1}{\sqrt{2\pi}}\, \frac{1}{\sqrt{\tfrac{1}{2}}}\, \exp(-x^2)\,dx.$$
We recognize the integral as the variance of a normal random variable with mean $0$ and variance $1/2$. Then we get
$$\|f^\prime \|_2^2 = \frac{1}{\sqrt{4\pi}} \cdot \frac{1}{2} = \frac{1}{4\sqrt{\pi}}.$$
Finally, using the results of Part III,
$$h_{opt}= \left(\frac{6}{n\|f^\prime \|_2^2}\right)^{1/3} = \left(\frac{24\sqrt{\pi}}{n}\right)^{1/3}\approx 3.5\, n^{-1/3}.$$
This $h_{opt}$ is not useful in many cases, but it could work as a rule-of-thumb binwidth. For the example, we will use a sample of 1000 normally distributed numbers and we will try to use all the theory seen before. I did this little script in R to compute the MSE and MISE and plot the bias-variance trade-offs between them (Note: this is my very first script in R, so I will appreciate any comments).
[sourcecode language="r"]
# Function to get the breaks
breaks <- function(x,h){
  b = floor(min(x)/h) : ceiling(max(x)/h)
  b = b*h
  return(b)
}

# Generate 1000 numbers from the standard normal distribution
x = rnorm(1000)
n = length(x)

# Point to evaluate the MSE
x0 = 0
# Real value of f(x0)
f_x0 = dnorm(x0)
# || f' ||_2^2 for the standard normal distribution
norm_f_prime = 1/(4*sqrt(pi))

# Sequence of bandwidths to try
hvec = seq(0.1,0.7,by=0.0005)

Bias = numeric(); Var = numeric(); MSE = numeric()
Bias_MISE = numeric(); Var_MISE = numeric(); MISE = numeric()

par(mfrow=c(1,2))
hist(x,breaks=breaks(x,0.001),freq=F,xlab="Bandwidth with h=0.001")
hist(x,breaks=breaks(x,2),freq=F,xlab="Bandwidth with h=2")
par(mfrow=c(1,1))

for(h in hvec){
  xhist = hist(x,breaks=breaks(x,h),plot=FALSE)
  # Average count of the bins whose midpoints lie within h of x0
  bins_near_x0 = xhist$mids >= x0 - h & xhist$mids <= x0 + h
  p = mean(xhist$counts[bins_near_x0])/n
  # Expectation of \hat{f} at x0
  E_fhat_x0 = p/h
  # Compute Bias, Var, MSE and MISE
  Bias = c(Bias, E_fhat_x0 - f_x0)
  Var = c(Var, (p*(1-p))/(n*h^2))
  MSE = c(MSE, tail(Var,1) + tail(Bias,1)^2)
  Var_MISE = c(Var_MISE, 1/(n*h))
  Bias_MISE = c(Bias_MISE, h^2 * norm_f_prime / 12)
  MISE = c(MISE, tail(Var_MISE,1) + tail(Bias_MISE,1))
}

# Trade-off for the MSE at x0
max_range = range(Bias^2,Var,MSE)
plot(hvec,MSE,ylim=max_range,type="l",col="blue",lwd=3,xlab="Bandwidth h")
lines(hvec,Bias^2,type="l",lty=2,col="red",lwd=3)
lines(hvec,Var,type="S",lty=6,col="black",lwd=3)
legend(x="topleft",legend=c("MSE","Bias^2","Var"),col=c("blue","red","black"),lwd=3,lty=c(1,2,6))

# Trade-off for the MISE
max_range = range(Bias_MISE,Var_MISE,MISE)
plot(hvec,MISE,ylim=max_range,type="l",col="blue",lwd=3,xlab="Bandwidth h")
lines(hvec,Bias_MISE,type="l",lty=5,col="red",lwd=3)
lines(hvec,Var_MISE,type="S",lty=6,col="black",lwd=3)
legend(x="topleft",legend=c("MISE","Bias^2_MISE","Var_MISE"),col=c("blue","red","black"),lwd=3,lty=c(1,2,6))

# h optimal for the point x0
h_x0 = hvec[MSE==min(MSE)]
# h optimal for any point using the minimal MISE
h_opt_MISE = hvec[MISE==min(MISE)]

# h optimal for any point using the rule-of-thumb
h_opt = (6/(n*norm_f_prime))^(1/3)

# Histogram with h_opt
breaks = floor(min(x)/h_opt):ceiling(max(x)/h_opt)
breaks = breaks*h_opt

# Plot the histogram with the fitted density on top
h = hist(x,breaks=breaks,freq=FALSE,ylim=c(0,0.5))
lines(sort(x),dnorm(sort(x)),type="l")
[/sourcecode]
To start, notice that if the binwidth is too small or too large we will get a very bad approximation. As we can see in the next plot, we get a terrible fit of the histogram for $h=0.001$ and for $h=2$: The plots of the MSE and MISE trade-offs are, respectively: For the MSE at $x_0=0$ the optimal value is $h=0.255$. In the case of the MISE, the optimal value obtained by minimization is $h=0.349$, and the theoretical value, using the formula seen before, is $h=0.349083021225025$. Finally, we get the best-fitted histogram: Next week we will start density estimation with more general kernels, and we will see that the histogram is, in fact, a particular case of one of them.
Bharadwaj, BVS and Chandran, LS and Das, Anita (2008) Isoperimetric Problem and Meta-fibonacci Sequences. In: 14th Annual International Conference on Computing and Combinatorics (COCOON 2008), JUN 27-29, 2008, Dalian.
Abstract: Let $G = (V,E)$ be a simple, finite, undirected graph. For $S \subseteq V$, let $\delta(S,G) = \{ (u,v) \in E : u \in S \text{ and } v \in V-S \}$ and $\phi(S,G) = \{ v \in V-S : \exists u \in S \text{ such that } (u,v) \in E\}$ be the edge and vertex boundary of $S$, respectively. Given an integer $i$, $1 \le i \le |V|$, the edge and vertex isoperimetric values at $i$ are defined as $b_e(i,G) = \min_{S \subseteq V,\, |S| = i} |\delta(S,G)|$ and $b_v(i,G) = \min_{S \subseteq V,\, |S| = i} |\phi(S,G)|$, respectively. The edge (vertex) isoperimetric problem is to determine the value of $b_e(i, G)$ ($b_v(i, G)$) for each $i$, $1 \le i \le |V|$. If we have the further restriction that the set $S$ should induce a connected subgraph of $G$, then the corresponding variation of the isoperimetric problem is known as the connected isoperimetric problem. The connected edge (vertex) isoperimetric values are defined in a corresponding way. It turns out that the connected edge isoperimetric and the connected vertex isoperimetric values are equal at each $i$, $1 \le i \le |V|$, if $G$ is a tree. Therefore we use the notation $b_c(i, T)$ to denote the connected edge (vertex) isoperimetric value of $T$ at $i$. Hofstadter introduced the interesting concept of meta-Fibonacci sequences in his famous book "Gödel, Escher, Bach: An Eternal Golden Braid". The sequence he introduced is known as the Hofstadter sequence, and most of the problems he raised regarding it are still open. Since then mathematicians have studied many other closely related meta-Fibonacci sequences such as Tanny sequences, Conway sequences, Conolly sequences, etc. Let $T_2$ be an infinite complete binary tree. In this paper we relate the connected isoperimetric problem on $T_2$ to the Tanny sequence, which is defined by the recurrence relation $a(i) = a(i-1-a(i-1)) + a(i-2-a(i-2))$, $a(0) = a(1) = a(2) = 1$. In particular, we show that $b_c(i, T_2) = i + 2 - 2a(i)$ for each $i \ge 1$. We also propose efficient polynomial-time algorithms to find the vertex isoperimetric value at $i$ for graphs of bounded pathwidth and bounded treewidth.
Item Type: Conference Proceedings
Additional Information: Copyright of this article belongs to Springer.
Department/Centre: Division of Electrical Sciences > Computer Science & Automation
URI: http://eprints.iisc.ac.in/id/eprint/26493
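For readers who want to see the sequence appearing in the main result, here is a short Python sketch (my own, not from the paper) that computes the Tanny sequence from its recurrence and tabulates the values $i + 2 - 2a(i)$ that the paper identifies with $b_c(i, T_2)$:

```python
from functools import lru_cache

# Tanny sequence from its recurrence, with a(0) = a(1) = a(2) = 1.
@lru_cache(maxsize=None)
def a(i):
    if i <= 2:
        return 1
    return a(i - 1 - a(i - 1)) + a(i - 2 - a(i - 2))

print([a(i) for i in range(1, 16)])                 # Tanny sequence a(1), ..., a(15)
print([i + 2 - 2 * a(i) for i in range(1, 16)])     # values claimed to equal b_c(i, T_2)
```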
Hello one and all! Is anyone here familiar with planar prolate spheroidal coordinates? I am reading a book on dynamics and the author states If we introduce planar prolate spheroidal coordinates $(R, \sigma)$ based on the distance parameter $b$, then, in terms of the Cartesian coordinates $(x, z)$ and also of the plane polars $(r , \theta)$, we have the defining relations $$r\sin \theta=x=\pm R^2−b^2 \sin\sigma, r\cos\theta=z=R\cos\sigma$$ I am having a tough time visualising what this is? Consider the function $f(z) = Sin\left(\frac{1}{cos(1/z)}\right)$, the point $z = 0$a removale singularitya polean essesntial singularitya non isolated singularitySince $Cos(\frac{1}{z})$ = $1- \frac{1}{2z^2}+\frac{1}{4!z^4} - ..........$$$ = (1-y), where\ \ y=\frac{1}{2z^2}+\frac{1}{4!... I am having trouble understanding non-isolated singularity points. An isolated singularity point I do kind of understand, it is when: a point $z_0$ is said to be isolated if $z_0$ is a singular point and has a neighborhood throughout which $f$ is analytic except at $z_0$. For example, why would $... No worries. There's currently some kind of technical problem affecting the Stack Exchange chat network. It's been pretty flaky for several hours. Hopefully, it will be back to normal in the next hour or two, when business hours commence on the east coast of the USA... The absolute value of a complex number $z=x+iy$ is defined as $\sqrt{x^2+y^2}$. Hence, when evaluating the absolute value of $x+i$ I get the number $\sqrt{x^2 +1}$; but the answer to the problem says it's actually just $x^2 +1$. Why? mmh, I probably should ask this on the forum. The full problem asks me to show that we can choose $log(x+i)$ to be $$log(x+i)=log(1+x^2)+i(\frac{pi}{2} - arctanx)$$ So I'm trying to find the polar coordinates (absolute value and an argument $\theta$) of $x+i$ to then apply the $log$ function on it Let $X$ be any nonempty set and $\sim$ be any equivalence relation on $X$. Then are the following true: (1) If $x=y$ then $x\sim y$. (2) If $x=y$ then $y\sim x$. (3) If $x=y$ and $y=z$ then $x\sim z$. Basically, I think that all the three properties follows if we can prove (1) because if $x=y$ then since $y=x$, by (1) we would have $y\sim x$ proving (2). (3) will follow similarly. This question arised from an attempt to characterize equality on a set $X$ as the intersection of all equivalence relations on $X$. I don't know whether this question is too much trivial. But I have yet not seen any formal proof of the following statement : "Let $X$ be any nonempty set and $∼$ be any equivalence relation on $X$. If $x=y$ then $x\sim y$." That is definitely a new person, not going to classify as RHV yet as other users have already put the situation under control it seems... (comment on many many posts above) In other news: > C -2.5353672500000002 -1.9143250000000003 -0.5807385400000000 C -3.4331741299999998 -1.3244286800000000 -1.4594762299999999 C -3.6485676800000002 0.0734728100000000 -1.4738058999999999 C -2.9689624299999999 0.9078326800000001 -0.5942069900000000 C -2.0858929200000000 0.3286240400000000 0.3378783500000000 C -1.8445799400000003 -1.0963522200000000 0.3417561400000000 C -0.8438543100000000 -1.3752198200000001 1.3561451400000000 C -0.5670178500000000 -0.1418068400000000 2.0628359299999999 probably the weirdness bunch of data I ever seen with so many 000000 and 999999s But I think that to prove the implication for transitivity the inference rule an use of MP seems to be necessary. 
But that would mean that for logics for which MP fails we wouldn't be able to prove the result. Also in set theories without Axiom of Extensionality the desired result will not hold. Am I right @AlessandroCodenotti? @AlessandroCodenotti A precise formulation would help in this case because I am trying to understand whether a proof of the statement which I mentioned at the outset depends really on the equality axioms or the FOL axioms (without equality axioms). This would allow in some cases to define an "equality like" relation for set theories for which we don't have the Axiom of Extensionality. Can someone give an intuitive explanation why $\mathcal{O}(x^2)-\mathcal{O}(x^2)=\mathcal{O}(x^2)$. The context is Taylor polynomials, so when $x\to 0$. I've seen a proof of this, but intuitively I don't understand it. @schn: The minus is irrelevant (for example, the thing you are subtracting could be negative). When you add two things that are of the order of $x^2$, of course the sum is the same (or possibly smaller). For example, $3x^2-x^2=2x^2$. You could have $x^2+(x^3-x^2)=x^3$, which is still $\mathscr O(x^2)$. @GFauxPas: You only know $|f(x)|\le K_1 x^2$ and $|g(x)|\le K_2 x^2$, so that won't be a valid proof, of course. Let $f(z)=z^{n}+a_{n-1}z^{n-1}+\cdot\cdot\cdot+a_{0}$ be a complex polynomial such that $|f(z)|\leq 1$ for $|z|\leq 1.$ I have to prove that $f(z)=z^{n}.$I tried it asAs $|f(z)|\leq 1$ for $|z|\leq 1$ we must have coefficient $a_{0},a_{1}\cdot\cdot\cdot a_{n}$ to be zero because by triangul... @GFauxPas @TedShifrin Thanks for the replies. Now, why is it we're only interested when $x\to 0$? When we do a taylor approximation cantered at x=0, aren't we interested in all the values of our approximation, even those not near 0? Indeed, one thing a lot of texts don't emphasize is this: if $P$ is a polynomial of degree $\le n$ and $f(x)-P(x)=\mathscr O(x^{n+1})$, then $P$ is the (unique) Taylor polynomial of degree $n$ of $f$ at $0$.
Consider a $C^k$, $k\ge 2$, Lorentzian manifold $(M,g)$ and let $\Box$ be the usual wave operator $\nabla^a\nabla_a$. Given $p\in M$, $s\in\Bbb R,$ and $v\in T_pM$, can we find a neighborhood $U$ of $p$ and $u\in C^k(U)$ such that $\Box u=0$, $u(p)=s$ and $\mathrm{grad}\, u(p)=v$? The tog is a measure of thermal resistance of a unit area, also known as thermal insulance. It is commonly used in the textile industry and often seen quoted on, for example, duvets and carpet underlay.The Shirley Institute in Manchester, England developed the tog as an easy-to-follow alternative to the SI unit of m2K/W. The name comes from the informal word "togs" for clothing which itself was probably derived from the word toga, a Roman garment.The basic unit of insulation coefficient is the RSI, (1 m2K/W). 1 tog = 0.1 RSI. There is also a clo clothing unit equivalent to 0.155 RSI or 1.55 tog... The stone or stone weight (abbreviation: st.) is an English and imperial unit of mass now equal to 14 pounds (6.35029318 kg).England and other Germanic-speaking countries of northern Europe formerly used various standardised "stones" for trade, with their values ranging from about 5 to 40 local pounds (roughly 3 to 15 kg) depending on the location and objects weighed. The United Kingdom's imperial system adopted the wool stone of 14 pounds in 1835. With the advent of metrication, Europe's various "stones" were superseded by or adapted to the kilogram from the mid-19th century on. The stone continues... Can you tell me why this question deserves to be negative?I tried to find faults and I couldn't: I did some research, I did all the calculations I could, and I think it is clear enough . I had deleted it and was going to abandon the site but then I decided to learn what is wrong and see if I ca... I am a bit confused in classical physics's angular momentum. For a orbital motion of a point mass: if we pick a new coordinate (that doesn't move w.r.t. the old coordinate), angular momentum should be still conserved, right? (I calculated a quite absurd result - it is no longer conserved (an additional term that varies with time ) in new coordinnate: $\vec {L'}=\vec{r'} \times \vec{p'}$ $=(\vec{R}+\vec{r}) \times \vec{p}$ $=\vec{R} \times \vec{p} + \vec L$ where the 1st term varies with time. (where R is the shift of coordinate, since R is constant, and p sort of rotating.) would anyone kind enough to shed some light on this for me? From what we discussed, your literary taste seems to be classical/conventional in nature. That book is inherently unconventional in nature; it's not supposed to be read as a novel, it's supposed to be read as an encyclopedia @BalarkaSen Dare I say it, my literary taste continues to change as I have kept on reading :-) One book that I finished reading today, The Sense of An Ending (different from the movie with the same title) is far from anything I would've been able to read, even, two years ago, but I absolutely loved it. I've just started watching the Fall - it seems good so far (after 1 episode)... I'm with @JohnRennie on the Sherlock Holmes books and would add that the most recent TV episodes were appalling. I've been told to read Agatha Christy but haven't got round to it yet ?Is it possible to make a time machine ever? Please give an easy answer,a simple one A simple answer, but a possibly wrong one, is to say that a time machine is not possible. Currently, we don't have either the technology to build one, nor a definite, proven (or generally accepted) idea of how we could build one. 
— Countto1047 secs ago @vzn if it's a romantic novel, which it looks like, it's probably not for me - I'm getting to be more and more fussy about books and have a ridiculously long list to read as it is. I'm going to counter that one by suggesting Ann Leckie's Ancillary Justice series Although if you like epic fantasy, Malazan book of the Fallen is fantastic @Mithrandir24601 lol it has some love story but its written by a guy so cant be a romantic novel... besides what decent stories dont involve love interests anyway :P ... was just reading his blog, they are gonna do a movie of one of his books with kate winslet, cant beat that right? :P variety.com/2016/film/news/… @vzn "he falls in love with Daley Cross, an angelic voice in need of a song." I think that counts :P It's not that I don't like it, it's just that authors very rarely do anywhere near a decent job of it. If it's a major part of the plot, it's often either eyeroll worthy and cringy or boring and predictable with OK writing. A notable exception is Stephen Erikson @vzn depends exactly what you mean by 'love story component', but often yeah... It's not always so bad in sci-fi and fantasy where it's not in the focus so much and just evolves in a reasonable, if predictable way with the storyline, although it depends on what you read (e.g. Brent Weeks, Brandon Sanderson). Of course Patrick Rothfuss completely inverts this trope :) and Lev Grossman is a study on how to do character development and totally destroys typical romance plots @Slereah The idea is to pick some spacelike hypersurface $\Sigma$ containing $p$. Now specifying $u(p)$ is trivial because the wave equation is invariant under constant perturbations. So that's whatever. But I can specify $\nabla u(p)|\Sigma$ by specifying $u(\cdot, 0)$ and differentiate along the surface. For the Cauchy theorems I can also specify $u_t(\cdot,0)$. Now take the neigborhood to be $\approx (-\epsilon,\epsilon)\times\Sigma$ and then split the metric like $-dt^2+h$ Do forwards and backwards Cauchy solutions, then check that the derivatives match on the interface $\{0\}\times\Sigma$ Why is it that you can only cool down a substance so far before the energy goes into changing it's state? I assume it has something to do with the distance between molecules meaning that intermolecular interactions have less energy in them than making the distance between them even smaller, but why does it create these bonds instead of making the distance smaller / just reducing the temperature more? Thanks @CooperCape but this leads me another question I forgot ages ago If you have an electron cloud, is the electric field from that electron just some sort of averaged field from some centre of amplitude or is it a superposition of fields each coming from some point in the cloud?
Given an expression $\chi = C_{p_1}\left[ h\,e^{-i\,p_1\cdot x} + h^{\dagger}\,e^{+i\,p_1\cdot x}\right]$, where $p_1$ and $x$ are four-vectors, $C_{p_1} = \frac{1}{\sqrt{(2 \pi)^3} \sqrt{2 \omega(p_1,\ m)}}$, and $p_1 \cdot x = \omega(p_1,m)\,t - \vec{p}_1\cdot\vec{x}$. Please note that "$x$" and "$\chi$" are different variables: $x$ is a 4-vector whose components are $t$ and the three components of $\vec{x}$. Similarly, $p_1$ is a 4-vector with components $\omega(p_1,m)$ and the three components of $\vec{p}_1$. How does one teach Mathematica to do things like the gradient of $\chi$, $\partial_{t} \chi$, products of $\chi$ (i.e. $\chi^2$), etc., without having to explicitly type the full form of the four-vectors in the subsequent input and in the results of evaluations? For the product of $\chi$s, it would be great if the variables could be programmed such that the first $\chi$ takes $p_1$ and $x_1$ as arguments, the second $\chi$ takes $p_2$ and $x_2$, etc.
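I can't sketch the Mathematica side here, but as a rough illustration of the underlying idea (make the field a function of a momentum label and a coordinate label, so that time derivatives, gradients, and products of fields with different momenta come out automatically), here is a minimal Python/SymPy sketch. All names (chi, omega, make_coords, ...) are invented for this illustration, and the operators $h$, $h^{\dagger}$ are treated as plain commuting symbols.

import sympy as sp

m = sp.symbols('m', positive=True)          # particle mass

def make_coords(label):
    """Time and spatial coordinates tagged with a label, e.g. t1, x1x, x1y, x1z."""
    t = sp.Symbol(f't{label}', real=True)
    xs = sp.symbols(f'x{label}x x{label}y x{label}z', real=True)
    return t, list(xs)

def make_momentum(label):
    """Spatial momentum components tagged with a label."""
    return list(sp.symbols(f'p{label}x p{label}y p{label}z', real=True))

def omega(p):
    """On-shell energy omega(p, m)."""
    return sp.sqrt(sum(pi**2 for pi in p) + m**2)

def chi(plabel, xlabel):
    """The field chi built from momentum labelled plabel and coordinates labelled xlabel."""
    p = make_momentum(plabel)
    t, x = make_coords(xlabel)
    h, hdag = sp.symbols(f'h{plabel} hdag{plabel}')             # stand-ins for h, h^dagger
    phase = omega(p)*t - sum(pi*xi for pi, xi in zip(p, x))     # p1 . x
    C = 1 / sp.sqrt((2*sp.pi)**3 * 2*omega(p))
    return C * (h*sp.exp(-sp.I*phase) + hdag*sp.exp(sp.I*phase))

t1, x1 = make_coords(1)
field    = chi(1, 1)
dt_chi   = sp.diff(field, t1)                   # partial_t chi
grad_chi = [sp.diff(field, xi) for xi in x1]    # spatial gradient of chi
product  = sp.expand(chi(1, 1) * chi(2, 2))     # chi(p1, x1) * chi(p2, x2)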
Discussion on International Mathematical Olympiad (IMO) Here we'll talk about IMO level problems. Rules: 1. You can post any 'math problem' from anywhere if you are sure it has a solution. Also you should give the source (like book name, link, or self-made). 2. If a problem remains unsolved for two days, the proposer must post the solution (for self-made problems) or the official solution will be posted (for contest problems). 3. Anyone can post a new problem iff the previous problem has been solved already. 4. Don't forget to type the problem number. Problem 1: Circles $\omega _1$ with center $O$ and $\omega _2$ with center $O'$ intersect at points $P,Q$. A line $\ell$ passes through $P$ and intersects $\omega _1$, $\omega _2$ at $K,L$, respectively. Points $A,B$ are on arcs $KQ,LQ$ (arcs not containing $P$) respectively, with $\angle KPA=\angle LPB$ and $\angle KAP=90^\circ-\angle LBP$. Prove that $OO'$ is parallel to $KL$. source: http://www.artofproblemsolving.com/Foru ... 6&t=506365 Welcome to BdMO Online Forum. Check out Forum Guides & Rules Well, I am confused about your last conclusion, Nadim vai. I have the following proof in favor of the statement. And I think there is an error in your proof. Adib, please confirm which one of us is correct. $\color{blue}{\textit{To}} \color{red}{\textit{ problems }} \color{blue}{\textit{I am encountering with-}} \color{green}{\textit{AVADA KEDAVRA!}}$ Sorry I couldn't check the proofs of Sanzeed and Nadim vai since I'm a little busy with my texts. I found another constructive disproof. Choose any point $K$ on $\omega_1$ such that $KQ$ is not perpendicular to $PQ$. [We still haven't defined $\omega_2$.] Now draw $KQ \bot QL$ such that $L \in KP$. Draw the circumcircle of $\triangle PQL$. Let this be $\omega_2$. Now take any point $A$ on arc $KQ$. Take point $B$ on arc $QL$ such that $\angle KPA=\angle LPB$. Notice that the construction satisfies all the properties of the problem yet doesn't satisfy the requirement. Note: This is a confusing problem. So I'm posting another problem. Problem 2: Determine all functions $f:\mathbb{N}\to\mathbb{N}$ such that for every pair $(m,n)\in\mathbb{N}^2$ we have that: \[f(m)+f(n)\mid m+n\] Source: Iran NMO-2004-P4. I love you very much, Mom For Problem 2: Let us denote the statement with $P(m,n)$. Now, $P(1,1)\Rightarrow 2f(1)\mid 2\Rightarrow f(1)=1$ since $f(m)\in \mathbb N$. Let $p$ be a prime.
$P(p-1,1)\Rightarrow f(p-1)+f(1)\mid p-1+1=p$. Since $f(p-1)+f(1)=f(p-1)+1>1$, we must have $f(p-1)+1=p$, i.e. $f(p-1)=p-1$. Now, $f(p-1)+f(n)\mid p-1+n\Rightarrow (p-1+f(n))\mid(p-1+f(n))+(n-f(n))$. So, $(p-1+f(n))\mid(n-f(n))$. If we fix $n$, then we can take an arbitrarily large value of $p$ such that $p-1+f(n)>|n-f(n)|$. Still $(p-1+f(n))$ will divide $(n-f(n))$, so we must have $n-f(n)=0$, i.e. $f(n)=n$ for all $n\in \mathbb N$, which is indeed a solution. Last edited by SANZEED on Sat Nov 10, 2012 9:00 pm, edited 1 time in total. $\color{blue}{\textit{To}} \color{red}{\textit{ problems }} \color{blue}{\textit{I am encountering with-}} \color{green}{\textit{AVADA KEDAVRA!}}$ SANZEED wrote: $LQ\parallel LQ'$, which is impossible. Nadim Ul Abrar wrote: What if $Q'$ lies on $LQ$? Then connect $C$ and the midpoint of $KQ$, say $N$. Then $CN\parallel LQ$ and again $CO\parallel LQ$, which will bring the necessary contradiction, I think. $\color{blue}{\textit{To}} \color{red}{\textit{ problems }} \color{blue}{\textit{I am encountering with-}} \color{green}{\textit{AVADA KEDAVRA!}}$ For problem 1: note that condition 2 implies those two circles are orthogonal, and the first condition can then be arranged for any $\ell$ for such two circles. So the problem is certainly not true. You spin my head right round right round, (-$from$ "$THE$ $UGLY$ $TRUTH$") When you go down, when you go down down...... When you go down, when you go down down......
Thermodynamic property relations From Thermal-FluidsPedia Revision as of 01:03, 29 January 2010

For a single-component closed system (fixed mass), the first law of thermodynamics gives us:
<center><math>d\hat E = \delta Q - \delta W \qquad \qquad (50)</math></center>
where <math>\hat E</math> is the total energy of the closed system, <math>\delta Q</math> is heat transferred to a system and <math>\delta W</math> is the work done by the system to the surroundings. The contribution to the total energy is due to internal (<math>E</math>), kinetic, potential, electromagnetic, surface tension or other forms of energy. If the change of all other forms of energy can be neglected, then <math>\hat E = E</math>. Heat transfer to a system is positive (the system receives heat), whereas heat transfer from the system is negative (the system loses heat). In contrast, work done by a system is positive (the system loses work), and work done to the system is negative (the system receives work). The mechanical work for a closed system is usually expressed as <math>\delta W = pdV,</math> where <math>p</math> is the pressure and <math>V</math> is the volume of the system – both are thermodynamic properties of the system. Change of a thermodynamic property depends on the initial and final states only and does not depend on the path by which the change occurred. Therefore, thermodynamic properties are ''path-independent'' and their infinitesimal changes are represented by the exact differential <math>d</math> (such as <math>dE</math> or <math>dV</math>). The heat transfer, <math>Q</math>, and work, <math>W</math>, on the other hand, are ''path-dependent'' functions. Infinitesimal heat transfer and work are represented by <math>\delta Q</math> and <math>\delta W,</math> respectively, in order to distinguish them from the change of a path-independent function. If it is assumed that the only work done is by volume change, and that potential and kinetic energies are negligible,
<center><math>dE = dQ - pdV \qquad \qquad (51)</math></center>
The second law of thermodynamics for the single-component closed system can be described by the Clausius inequality, i.e.,
<center><math>dS \ge \frac{{\delta Q}}{T} \qquad \qquad (52)</math></center>
where <math>dS</math> is the change of entropy of the closed system. The equal sign designates a reversible process, which is defined as an ideal process that after taking place can be reversed without leaving any change to either the system or the surroundings. The greater-than sign denotes an irreversible process.

Combining these general forms of the first two laws of thermodynamics results in an expression that is very useful for determining the conditions for equilibrium and stability of systems, namely, the fundamental relation of thermodynamics:
<center><math>dE \le TdS - \delta W \qquad \qquad (53)</math></center>
where the inequality is used for irreversible processes and the equality for reversible processes. For a finite change in a system, the fundamental thermodynamic relationship becomes
<center><math>\Delta E \le T\Delta S - W \qquad \qquad (54)</math></center>
where <math>W</math> is the work done by the system to the surroundings.

It is desirable to have a property that allows us to compare the energy storage capabilities of various substances under various processes such as constant volume or constant pressure. This property is specific heat. Two kinds of specific heat are used: specific heat at constant volume, <math>c_v</math>, and specific heat at constant pressure, <math>c_p</math>. For a closed system undergoing a constant volume process, consider the first and second laws of thermodynamics for a reversible process, and using the definition of specific heat,
<center><math>{\left( {\delta Q} \right)_{rev}} = de = {c_v}dT \qquad \qquad (55)</math></center>
<center><math>{c_v} = {\left( {\frac{{\delta {Q_{rev}}}}{{mdT}}} \right)_v} = {\left( {\frac{{\partial e}}{{\partial T}}} \right)_v} = T{\left( {\frac{{\partial s}}{{\partial T}}} \right)_v} \qquad \qquad (56)</math></center>
Similarly, the expression for specific heat at constant pressure <math>{c_p}</math> can be obtained for a constant pressure process,
<center><math>{c_p} = {\left( {\frac{{\delta {Q_{rev}}}}{{mdT}}} \right)_p} = {\left( {\frac{{\partial h}}{{\partial T}}} \right)_p} = T{\left( {\frac{{\partial s}}{{\partial T}}} \right)_p} \qquad \qquad (57)</math></center>
where <math>h = e + pv</math> is the specific enthalpy and <math>s</math> is the specific entropy.

Another important property that is often used for constant pressure processes is the volumetric coefficient of thermal expansion in terms of specific volume (<math>v</math>),
<center><math>\beta = \frac{1}{v}{\left( {\frac{{\partial v}}{{\partial T}}} \right)_p} \qquad \qquad (58)</math></center>
The coefficient of thermal expansion is often used in natural convective heat and mass transfer in terms of density,
<center><math>\beta = - \frac{1}{\rho }{\left( {\frac{{\partial \rho }}{{\partial T}}} \right)_p} \qquad \qquad (59)</math></center>
The first and second laws of thermodynamics for open systems will be discussed in Chapter 3.
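As a quick worked example of eq. (58): for an ideal gas obeying <math>pv = RT</math>, the volumetric coefficient of thermal expansion reduces to the reciprocal of the absolute temperature,
<center><math>\beta = \frac{1}{v}\left( {\frac{{\partial v}}{{\partial T}}} \right)_p = \frac{1}{v}\cdot\frac{R}{p} = \frac{R}{pv} = \frac{1}{T},</math></center>
so for air near room temperature (<math>T \approx 300\,\mathrm{K}</math>) one gets <math>\beta \approx 3.3\times 10^{-3}\,\mathrm{K^{-1}}</math>.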
A C*-dynamical system is said to have the ideal separation property if every ideal in the corresponding crossed product arises from an invariant ideal in the C*-algebra. In this paper we characterize this property for unital C*-dynamical systems over discrete groups. To every C*-dynamical system we associate a "twisted" partial C*-dynamical system that encodes much of the structure of the action. This system can often be "untwisted," for example when the algebra is commutative, or when the algebra is prime and a certain specific subgroup has vanishing Mackey obstruction. In this case, we obtain relatively simple necessary and sufficient conditions for the ideal separation property. A key idea is a notion of noncommutative boundary for a C*-dynamical system that generalizes Furstenberg's notion of topological boundary for a group.

A group is said to be C*-simple if its reduced C*-algebra is simple. We establish an intrinsic (group-theoretic) characterization of groups with this property. Specifically, we prove that a discrete group is C*-simple if and only if it has no non-trivial amenable uniformly recurrent subgroups. We further prove that a group is C*-simple if and only if it satisfies an averaging property considered by Powers.

In this short note we prove that the reduced group C*-algebra of a locally compact group admits a non-zero trace if and only if the amenable radical of the group is open. This completely answers a question raised by Forrest, Spronk and Wiersma.

A discrete group is said to be C*-simple if its reduced C*-algebra is simple, and is said to have the unique trace property if its reduced C*-algebra has a unique tracial state. A dynamical characterization of C*-simplicity was recently obtained by the second and third named authors. In this paper, we introduce new methods for working with group and crossed product C*-algebras that allow us to take the study of C*-simplicity a step further, and in addition to settle the longstanding open problem of characterizing groups with the unique trace property. We give a new and self-contained proof of the aforementioned characterization of C*-simplicity. This yields a new characterization of C*-simplicity in terms of the weak containment of quasi-regular representations. We introduce a convenient algebraic condition that implies C*-simplicity, and show that this condition is satisfied by a vast class of groups, encompassing virtually all previously known examples as well as many new ones. We also settle a question of Skandalis and de la Harpe on the simplicity of reduced crossed products. Finally, we introduce a new property for discrete groups that is closely related to C*-simplicity, and use it to prove a broad generalization of a theorem of Zimmer, originally conjectured by Connes and Sullivan, about amenable actions.

We establish a new characterization of the Choquet order on the space of probability measures on a compact convex set. The characterization is dilation-theoretic, meaning that it relates to the representation theory of positive linear maps on the C*-algebra of continuous functions on the set. This yields an extension of Cartier's theorem on dilation of measures that is valid in the non-metrizable setting. As an application, we prove Arveson's hyperrigidity conjecture for function systems, and obtain new approximation theorems for positive maps from commutative C*-algebras into B(H).

We consider reduced crossed products of twisted C*-dynamical systems over C*-simple groups.
We prove there is a bijective correspondence between maximal ideals of the reduced crossed product and maximal invariant ideals of the underlying C*-algebra, and a bijective correspondence between tracial states on the reduced crossed product and invariant tracial states on the underlying C*-algebra. In particular, the reduced crossed product is simple if and only if the underlying C*-algebra has no proper non-trivial invariant ideals, and the reduced crossed product has a unique tracial state if and only if the underlying C*-algebra has a unique invariant tracial state. We also show that the reduced crossed product satisfies an averaging property analogous to Powers' averaging property.

In our paper "Essential normality, essential norms and hyperrigidity" we claimed that the restriction of the identity representation of a certain operator system (constructed from a polynomial ideal) has the unique extension property, however the justification we gave was insufficient. In this note we provide the required justification under some additional assumptions. Fortunately, homogeneous ideals that are "sufficiently non-trivial" are covered by these assumptions. This affects the section of our paper relating essential normality and hyperrigidity. We show here that Proposition 4.11 and Theorem 4.12 hold under the additional assumptions. We do not know if they hold in the generality considered in our paper.

The classification of separable operator spaces and systems is commonly believed to be intractable. We analyze this belief from the point of view of Borel complexity theory. On one hand we confirm that the classification problems for arbitrary separable operator systems and spaces are intractable. On the other hand we show that the finitely generated operator systems and spaces are completely classifiable (or smooth); in fact a finitely generated operator system is classified by its complete theory when regarded as a structure in continuous logic. In the particular case of operator systems generated by a single unitary, a complete invariant is given by the spectrum of the unitary up to a rigid motion of the circle, provided that the spectrum contains at least 5 points. As a consequence of these results we show that the relation on compact subsets of $\mathbb{C}^{n}$, given by homeomorphism via a degree 1 polynomial, is smooth.

A theorem of Thompson provides a non-self-adjoint variant of the classical Schur-Horn theorem by characterizing the possible diagonal values of a matrix with given singular values. We prove an analogue of Thompson's theorem for II_1 factors.

We consider the Schur-Horn problem for normal operators in von Neumann algebras, which is the problem of characterizing the possible diagonal values of a given normal operator based on its spectral data. For normal matrices, this problem is well-known to be extremely difficult, and in fact, it remains open for matrices of size greater than $3$. We show that the infinite dimensional version of this problem is more tractable, and establish approximate solutions for normal operators in von Neumann factors of type I$_\infty$, II and III. A key result is an approximation theorem that can be seen as an approximate multivariate analogue of Kadison's Carpenter Theorem.

For a discrete group G, we consider the minimal C*-subalgebra of $\ell^\infty(G)$ that arises as the image of a unital positive G-equivariant projection. This algebra always exists and is unique up to isomorphism. It is trivial if and only if G is amenable.
We prove that, more generally, it can be identified with the algebra $C(\partial_F G)$ of continuous functions on Furstenberg's universal G-boundary $\partial_F G$. This operator-algebraic construction of the Furstenberg boundary has a number of interesting consequences. We prove that G is exact precisely when the G-action on $\partial_F G$ is amenable, and use this fact to prove Ozawa's conjecture that if G is exact, then there is an embedding of the reduced C*-algebra $\mathrm{C}_r^*(G)$ of G into a nuclear C*-algebra which is contained in the injective envelope of $\mathrm{C}_r^*(G)$. It is a longstanding open problem to determine which groups are C*-simple, in the sense that the algebra $\mathrm{C}_r^*(G)$ is simple. We prove that this problem can be reformulated as a problem about the structure of the G-action on the Furstenberg boundary. Specifically, we prove that a discrete group G is C*-simple if and only if the G-action on the Furstenberg boundary is topologically free. We apply this result to prove that Tarski monster groups are C*-simple. This provides another solution to a problem of de la Harpe (recently answered by Olshanskii and Osin) about the existence of C*-simple groups with no free subgroups.

Let $S = (S_1, \ldots, S_d)$ denote the compression of the $d$-shift to the complement of a homogeneous ideal $I$ of $\mathbb{C}[z_1, \ldots, z_d]$. Arveson conjectured that $S$ is essentially normal. In this paper, we establish new results supporting this conjecture, and connect the notion of essential normality to the theory of the C*-envelope and the noncommutative Choquet boundary. The unital norm closed algebra $\mathcal{B}_I$ generated by $S_1,\ldots,S_d$ modulo the compact operators is shown to be completely isometrically isomorphic to the uniform algebra generated by polynomials on $\overline{V} := \overline{\mathcal{Z}(I) \cap \mathbb{B}_d}$, where $\mathcal{Z}(I)$ is the variety corresponding to $I$. Consequently, the essential norm of an element in $\mathcal{B}_I$ is equal to the sup norm of its Gelfand transform, and the C*-envelope of $\mathcal{B}_I$ is identified as the algebra of continuous functions on $\overline{V} \cap \partial \mathbb{B}_d$, which means it is a complete invariant of the topology of the variety determined by $I$ in the ball. Motivated by this determination of the C*-envelope of $\mathcal{B}_I$, we suggest a new, more qualitative approach to the problem of essential normality. We prove the tuple $S$ is essentially normal if and only if it is hyperrigid as the generating set of a C*-algebra, which is a property closely connected to Arveson's notion of a boundary representation. We show that most of our results hold in a much more general setting. In particular, for most of our results, the ideal $I$ can be replaced by an arbitrary (not necessarily homogeneous) invariant subspace of the $d$-shift.

We study the Hopf structure of a class of dual operator algebras corresponding to certain semigroups. This class of algebras arises in dilation theory, and includes the noncommutative analytic Toeplitz algebra and the multiplier algebra of the Drury-Arveson space, which correspond to the free semigroup and the free commutative semigroup respectively. The preduals of the algebras in this class naturally form Hopf (convolution) algebras. The original algebras and their preduals form (non-self-adjoint) dual Hopf algebras in the sense of Effros and Ruan. We study these algebras from this perspective, and obtain a number of results about their structure.
We show that every operator system (and hence every unital operator algebra) has sufficiently many boundary representations to generate the C*-envelope.

We study the structure of bounded linear functionals on a class of non-self-adjoint operator algebras that includes the multiplier algebra of every complete Nevanlinna-Pick space, and in particular the multiplier algebra of the Drury-Arveson space. Our main result is a Lebesgue decomposition expressing every linear functional as the sum of an absolutely continuous (i.e. weak-* continuous) linear functional, and a singular linear functional that is far from being absolutely continuous. This is a non-self-adjoint analogue of Takesaki's decomposition theorem for linear functionals on von Neumann algebras. We apply our decomposition theorem to prove that the predual of every algebra in this class is (strongly) unique.

We establish the essential normality of a large new class of homogeneous submodules of the finite rank d-shift Hilbert module. The main idea is a notion of essential decomposability that determines when an arbitrary submodule can be decomposed into the sum of essentially normal submodules. We prove that every essentially decomposable submodule is essentially normal, and using ideas from convex geometry, we introduce methods for establishing that a submodule is essentially decomposable. It turns out that many homogeneous submodules of the finite rank d-shift Hilbert module have this property. We prove that many of the submodules considered by other authors are essentially decomposable, and in addition establish the essential decomposability of a large new class of homogeneous submodules. Our results support Arveson's conjecture that every homogeneous submodule of the finite rank d-shift Hilbert module is essentially normal.

We consider the Arveson-Douglas conjecture on the essential normality of homogeneous submodules corresponding to algebraic subvarieties of the unit ball. We prove that the property of essential normality is preserved by isomorphisms between varieties, and we establish a similar result for maps between varieties that are not necessarily invertible. We also relate the decomposability of an algebraic variety to the problem of establishing the essential normality of the corresponding submodule. These results are applied to prove that the Arveson-Douglas conjecture holds for submodules corresponding to varieties that decompose into linear subspaces, and varieties that decompose into components with mutually disjoint linear spans.

An $n$-tuple of operators $(V_1,...,V_n)$ acting on a Hilbert space $H$ is said to be isometric if the row operator $(V_1,...,V_n) : H^n \to H$ is an isometry. We prove that every isometric $n$-tuple is hyperreflexive, in the sense of Arveson. For $n = 1$, the hyperreflexivity constant is at most 95. For $n \geq 2$, the hyperreflexivity constant is at most 6.

An $n$-tuple of operators $(V_1,...,V_n)$ acting on a Hilbert space $H$ is said to be isometric if the operator $[V_1\ ...\ V_n]:H^n\to H$ is an isometry. We prove a decomposition for an isometric tuple of operators that generalizes the classical Lebesgue-von Neumann-Wold decomposition of an isometry into the direct sum of a unilateral shift, an absolutely continuous unitary and a singular unitary. We show that, as in the classical case, this decomposition determines the weakly closed algebra and the von Neumann algebra generated by the tuple.
We show that for all q in the interval (-1,1), the Fock representation of the q-commutation relations can be unitarily embedded into the Fock representation of the extended Cuntz algebra. In particular, this implies that the C*-algebra generated by the Fock representation of the q-commutation relations is exact. An immediate consequence is that the q-Gaussian von Neumann algebra is weakly exact for all q in the interval (-1,1).

A free semigroup algebra S is the weak-operator-closed (non-self-adjoint) operator algebra generated by n isometries with pairwise orthogonal ranges. A unit vector x is said to be wandering for S if the set of images of x under non-commuting words in the generators of S is orthonormal. We establish the following dichotomy: either a free semigroup algebra has a wandering vector, or it is a von Neumann algebra. Consequences include that every free semigroup algebra is reflexive, and that certain free semigroup algebras are hyper-reflexive with a very small hyper-reflexivity constant.

We investigate the properties of bounded operators which satisfy a certain spectral additivity condition, and use our results to study Lie and Jordan algebras of compact operators. We prove that these algebras have nontrivial invariant subspaces when their elements have sublinear or submultiplicative spectrum, and when they satisfy simple trace conditions. In certain cases we show that these conditions imply that the algebra is (simultaneously) triangularizable.

We show that finitely subgraded Lie algebras of compact operators have invariant subspaces when conditions of quasinilpotence are imposed on certain components of the subgrading. This allows us to obtain some useful information about the structure of such algebras. As an application, we prove a number of results on the existence of invariant subspaces for algebraic structures of compact operators. Along the way we obtain new criteria for the triangularizability of a Lie algebra of compact operators.

We show that a Jordan algebra of compact quasinilpotent operators which contains a nonzero trace class operator has a common invariant subspace. As a consequence of this result, we obtain that a Jordan algebra of quasinilpotent Schatten operators is simultaneously triangularizable.
Consider a real scalar field $\phi$ in a theory with a Lagrangian $$ \mathcal{L}:=-\frac{1}{2}\partial _\mu \phi \partial ^\mu \phi -V(\phi ), $$ where $$ V(\phi ):= -\mu ^2\phi ^2+\frac{\lambda}{4!}\phi ^4, $$ where both $\mu$ and $\lambda$ are positive real numbers. We see that the potential has a couple of non-zero minima: $$ V'(\phi )=-2\mu ^2\phi +\frac{\lambda}{3!}\phi ^3=0\Rightarrow \phi =0,\pm 2\mu \sqrt{\frac{3}{\lambda}}=:\pm V_0 $$ (It turns out that $\phi =0$ is a local max, and $\phi =\pm V_0$ are local mins; check the second derivative.) As the usual story goes, we must define a new field $\psi :=\phi -V_0$ and re-write the theory in terms of this $\psi$ to get the appropriate Feynman rules of the quantum theory. If I did my algebra correctly (the details aren't exactly relevant here anyways), this substitution gives us $$ \mathcal{L}=-\frac{1}{2}\partial _\mu \psi \partial ^\mu \psi -2\mu ^2\psi ^2+\mu \sqrt{\frac{\lambda}{3}}\psi ^3+\frac{\lambda}{4!}\psi ^4-6\frac{\mu ^4}{\lambda}. $$ (Our Lagrangian no longer admits the symmetry $\psi \mapsto -\psi$, hence the term "symmetry breaking".) The question arises: Why is this substitution special? This form of the Lagrangian has some nice properties (namely that the potential has a local min at $0$), but surely there are some other substitutions that could give us some other nice properties as well. What about those? My understanding of this was the following: The LSZ Reduction Formula, among other things, requires a priori that the fields one is working with have vanishing vacuum expectation value. Thus, when applying the LSZ formula, we must be working with $\psi$, not $\phi$, and so the appropriate Feynman rules can be read off only when the Lagrangian is written in terms of $\psi$. I have just recently discovered a problem with this explanation, however. Before, I was under the impression that $\langle 0_\pm |\phi |0_\pm \rangle =\pm V_0$ (this theory evidently has two physical vacuums, whatever that precisely means), so that the definition of $\psi$ forces $\psi$ to have vanishing expectation value, so that the LSZ formula can be applied. However, I recently learned that $\pm V_0$ is only an approximation to $\langle 0_\pm |\phi |0_\pm \rangle$, which implies that $\psi$ only approximately has vanishing vacuum expectation value, which means that LSZ doesn't technically apply. It seems that the proper substitution is in fact $\psi :=\phi -\langle 0_+|\phi|0_+\rangle$. There are several problems I see with this: 1. The Lagrangian re-written in terms of $\psi$ should have a small, but non-zero, linear term in $\psi$. 2. The Feynman rules I've been using all along that arise from the substitution $\psi :=\phi -V_0$ are only approximations. 3. The coefficients that arise from the 'proper' substitution $\psi :=\phi -\langle 0_+|\phi |0_+\rangle$ are going to be written in terms of something that can (to the best of my knowledge) only be calculated perturbatively (namely $\langle 0_+|\phi |0_+\rangle$), but we need to know these coefficients to obtain the Feynman rules to begin with (resulting in a 'circularity' problem). How does one go about resolving all these issues? (Disclaimer: I asked a very similar question here not quite a year ago, but my understanding of the situation has improved since then, and as is usual, my improvement of understanding has only brought forth many more questions regarding this, so I felt it was appropriate to address the issue once again.)
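For what it's worth, the classical statement in parentheses above ($\phi=0$ is a local max, $\phi=\pm V_0$ are local mins) is easy to check with a symbolic second-derivative test; here is a small SymPy sketch of that check (variable names are my own, purely illustrative):

import sympy as sp

phi = sp.symbols('phi', real=True)
mu, lam = sp.symbols('mu lambda', positive=True)

V = -mu**2 * phi**2 + lam / sp.factorial(4) * phi**4
v0 = 2 * mu * sp.sqrt(3 / lam)

print(sp.solve(sp.diff(V, phi), phi))                   # critical points: 0 and +/- v0
print(sp.simplify(sp.diff(V, phi, 2).subs(phi, 0)))     # -2*mu**2 < 0  -> local maximum
print(sp.simplify(sp.diff(V, phi, 2).subs(phi, v0)))    # 4*mu**2 > 0   -> local minimum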
[OS X TeX] Missing $ inserted in a "variations" environment Ross Moore ross at ics.mq.edu.au Sun Mar 15 22:13:11 CET 2009 Hi Alain, On 16/03/2009, at 7:56 AM, Alain Schremmer wrote: > On Mar 15, 2009, at 4:24 PM, Peter Dyballa wrote: > >> discipline like you demonstrated > > You really ought NOT to make fun of your elders. > >> or uses GNU Emacs, which in AUCTeX mode colourizes the & in red. > > Forget THAT. I think we need a few smileys here. :-) What Peter was saying, and Jonathan has implied, is that if you use the coding: %\noindent \makebox[\textwidth][c]{ \begin{variations} x & \mI & & \alpha& & & 1 & & \beta& & & 2 & & \pI & \\ \filet f'(x) & \bg & + & \bb& & + & \z & -& \bb & & - &\z& + & &\bd \\ \filet \m{f(x)} & \bg \mI & \c \h{\pI} & \bb& \mI & \c & \h{-1} & \d \mI & \bb & \h{\pI}& \d &\frac{3}{2}& \c & \h{\pI} & \bd \\ \end{variations} %} then error messages become more meaningful. Note the comment characters before the \noindent and '}'. These are essential. The reason is that \makebox reads its full argument; ie. down to '}' before processing any of the commands it contains. Thus if an error or warning occurs, then TeX reports it as occurring at the line number where it has read to --- namely at the end, where '}' occurred. By removing that \makebox (using comments, since presumably we'll need it back again after having debugged the problem) then TeX is reading and interpreting contiguously. Hence when an error occurs, the messages will identify the correct place within this tabular material. This is generally a better debugging technique than inserting extra stuff within the output, using iXXXX as has been suggested. The latter works here, but only because TeX isn't too confused by the particular error. In other circumstances you may get no output at all, so iXXXX would leave you none the wiser. > > Best regards > --schremmer > ----------- Hope this helps, Ross ------------------------------------------------------------------------ Ross Moore ross at maths.mq.edu.au Mathematics Department office: E7A-419 Macquarie University tel: +61 (0)2 9850 8955 Sydney, Australia 2109 fax: +61 (0)2 9850 8114 ------------------------------------------------------------------------ More information about the macostex-archivesmailing list
The modern understanding of forces and the detailed relation of forces to motion was worked out by Newton and summarized in what has become known as “Newton's Laws of Motion.” We have already spent considerable time making sense of Newton’s first and third laws, which are really part of our understanding of forces. The heart of this chapter, Newton’s second law, tells us how motion changes in time as a result of an unbalanced force. Be sure to refer back to Chapter 6 as frequently as necessary to refresh your understanding of forces, net force, and Newton’s 1 st and 2 nd laws. Newton’s 2 nd Law When the forces (torques) don't balance, the relation between the unbalanced force \(\sum F\) and the degree of change of motion is given by Newton’s 2 nd law. As you might surmise, when \(\sum F \neq 0\), there will be a change in the motion. From Chapter 7, we know that an unbalanced force acting over a time produces a change in the momentum of the object: \[\int \sum Fdt = \Delta p\] Newton’s 2 nd Law expresses this relationship in terms of the instantaneous time rate of change of momentum: \[\sum F = \dfrac{dp}{dt}\] Since momentum, \(p\), is equal to the product of mass and velocity: \(p = m v\), we can rewrite the previous relation as: \[\sum F = \dfrac{d(mv)}{dt} = m \dfrac{dv}{dt} = ma\] or \[\sum F = ma\] where acceleration, \(a\), is the derivative with respect to time of the velocity, \(v\). For rotation, Newton’s 2 nd law takes the form: \[\sum \tau = \dfrac{dL}{dt}~~ or~~ \sum \tau = I \alpha \] Several of the most important points concerning Newton’s 2 nd law are summarized below: An unbalanced force or torque \((\sum F \neq 0~~ or~~ \sum \tau \neq 0)\) causes a change in motion of an object. The change in motion is actually the time rate of change of velocity or angular velocity. The degree of “change of motion,” the acceleration, is proportional to the unbalanced force and inversely proportional to the mass of the object. Some General Comments on Newton’s 2 nd Law What is Newton’s 2 nd Law useful for? What can it tell us? Well, there are two basic situations: We know the forces that act on an object and we want to know what its motion is. We know the motion of an object and want to know the details of the forces acting on the object. The approaches we will develop apply to the motion of all objects in the “classical” realm. That is, they apply to planets orbiting around stars as well as baseballs tossed into the air. This is the physics that NASA uses to know how to fire off a Mars probe that travels millions of miles through space and actually lands on the red (actually a yellowish brown) planet. However, Newton’s 2 nd Law is not the physics that describes the motion of the electrons whirring about the nucleus of an atom, or the motion of the neutrons and protons in the nucleus of that atom. Quantum mechanics takes over from Newtonian dynamics when sizes begin to approach atomic dimensions. Newtonian mechanics also gets modified by special relativity when relative speeds of objects become significant compared to the speed of light \((3 \times 10^8 m/s)\). Another thing that makes the Newtonian approach rather simple and straightforward (at least compared to Quantum Mechanics), and what makes Newton’s laws so easy to write down, is the simplification that comes from being able to combine the separate microscopic electric and gravitational forces that act between individual atoms into a few macroscopic forces that we model as acting on the entire object at a single point.
For example, the gravitational forces that act on each individual atom combine into one gravitational force that acts on the entire object at its center of gravity. The perpendicular contact force is the net result of all of the forces acting between the atoms of the two surfaces that actually are close to each other. We thus reduce the trillion or so individual electric forces that act between the closely spaced atoms on the surfaces to one net contact force. A warning: It is easy to write down Newton’s 2 nd law. Often we can figure out the forces that are acting, and thus get \(\sum F\). This immediately gives us the acceleration, a. In principle, as we shall see, it is straightforward to get velocity and displacement (and the time required for certain motions to occur) from the acceleration. But in practice, unless we know something about solving differential equations with computers, there are not many examples of motion for which we can easily write down the solution to the differential equation we call Newton’s 2 nd law. There are in fact, only three fairly straightforward cases. When the forces are constant, which leads to a constant acceleration. When the forces combine to produce an acceleration that always points toward the same point in space and always has the same magnitude (circular motion), and When the net force is always like the spring force—directly proportional to the displacement of the particle, but in the opposite direction. This last case leads to the very common oscillatory spring-mass motion we are familiar with from Chapter 3. In all three of these special cases, it is straightforward to write down simple algebraic expressions for the position of the particle as a function of time. That is, you tell me the time you want to know the position of the particle, and I can use my algebraic equation(s) to predict the position (and velocity and acceleration as well). We will examine the first two cases— constant acceleration and circular motion—in some detail in this chapter and the details of oscillatory motion in Chapter 8. The danger: it is easy to think that these motions are all there is; that Newton’s laws don’t apply to the infinity of other types of motion. They do! It is just that we can’t get nice algebraic expressions for the position of the object as a function of time. But it can always be done with a computer. And we can understand qualitatively what will happen, even if we don’t have a simple algebraic expression to “plug into.” Finding the Change in Motion from the 2 nd Law in a Step-Wise Fashion Before we look at the two special cases, let’s examine in a little more detail how we find the change in motion using Newton’s second law in a step-by-step fashion. We will make use of the general relationship we have used many times in this text: the new value of some variable is equal to the old value plus the change in that variable. We apply this to all three motion variables, \(r\), \(v\), and \(a\). \[r_f = r_i + \Delta r\] \[v_f = v_i + \Delta v\] \[a_f = a_i + \Delta a\] The figure 8.1.1 shows these relationships in two-dimensions: Figure 8.1.1 We can interpret these figures this way: If I know the position at the initial time, I can get the position at the final time by adding to the initial position the change in position that occurred during that time interval. Similarly, I can get the velocity at the final time by adding the change in velocity that occurred during the time interval to the initial velocity. 
Likewise, we get the final acceleration by adding the change in acceleration to the initial acceleration. How do we find these changes. Well, two of them come directly from the defining relations: The defining relation for the vector \(v\) is \[v = \dfrac{dr}{dt}\] We rewrite this to emphasize the change in the position vector (displacement) \[dr = v dt~~ or~~ \Delta r = v \Delta t\] The second form is exact if the velocity is constant over the time interval \(\Delta t\), or if we consider \(v\) to be the average value of the velocity, \(v_{avg}\), over the time interval, \(\Delta t\). Similarly for velocity: \[dv = a dt~~ or~~ \Delta v = a \Delta t \] Now we invoke Newton’s 2 nd law to relate the acceleration \(a\), to the net force: \[\sum F = m a ~~or~~ a = \sum F/m \] Now we have a way to step out the motion of an object (modeled as a point particle), if we know the net force that acts on the object. This is illustrated for one time interval, \(\Delta t\) in the figure 8.1.2. Figure 8.1.2: Going from net force to a to \(\Delta v\) to \(\Delta r\) Consistent lengths have been chosen for \(r\), \(v\), and \(a\), so that with a time interval of unity, all three figures can be plotted on the same graph. Knowing the net force, we can get the acceleration. Knowing the acceleration, we know how the velocity changes. Knowing the change in velocity, we know how the position of the object changes during the time interval. The basic approach outlined above, with only minor refinements, is exactly how Newton’s law is solved using a computer. The relationships expressed above in vector form can, of course, also be expressed in component form. \[\sum F_x = ma_x\] \[\sum F_y= ma_y \] \[a_x = \dfrac{dv_x}{ dt} , ~~or~~ v_x (t) = \int a_x (t)dt + v_{x 0}, \] \[a_y = \dfrac{dv_y}{ dt} ,~~ or~~ v_y (t) = \int a_y (t )dt + v_{y 0}. \] \[v_x = \dfrac{dx}{ dt} , ~~or~~ x(t ) = \int v_x (t)dt + x_0 , \] \[v_y = \dfrac{dy}{ dt} , ~~or~~ y(t) = \int v_y(t)dt + y_0.\] These separate sets of x- and y equations are completely independent of one another. This is a result of the independence of the spatial dimensions in the Galilean Space-Time Model. The usefulness of this approach—the separation into separate equations for each of the perpendicular directions—is due to the fact that they truly are independent. We can separately treat motion in two dimensions as two one-dimensional problems. For example, a thrown ball, if air friction is negligible, experiences a constant acceleration in the vertical direction due to the gravity force of the Earth pulling down, and zero acceleration in the horizontal direction. Each of these separate motions is straightforward to deal with separately, one dimension at a time.. Whether we work with the vector representations or the component representations depends on the particular questions we are trying to answer. Sometimes one is more useful; sometimes the other. The component equations are especially useful when there is an obvious difference in the forces (and resulting acceleration) in two perpendicular directions. The vector representation is often useful when the directions of the forces are continually changing and when we want to visualize the total force and total acceleration. Contributors Authors of Phys7B (UC Davis Physics Department)
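To make the step-wise procedure concrete, here is a minimal sketch of the kind of computer solution described above (an illustration, not part of the original notes; the ball's mass, initial values, and time step are arbitrary): compute the net force, get the acceleration from Newton's 2 nd law, update the velocity, update the position, and repeat.

import numpy as np

m = 0.15                      # kg, mass of the thrown ball (illustrative value)
g = np.array([0.0, -9.8])     # m/s^2, acceleration due to gravity
r = np.array([0.0, 1.0])      # m, initial position
v = np.array([6.0, 8.0])      # m/s, initial velocity
dt = 0.01                     # s, time step

while r[1] >= 0.0:            # step until the ball comes back down to the ground
    F = m * g                 # net force (here gravity is the only force)
    a = F / m                 # Newton's 2nd law: a = (sum of F) / m
    v = v + a * dt            # Delta v = a * dt
    r = r + v * dt            # Delta r = v * dt

print("horizontal range is roughly", round(r[0], 2), "m")

Because the horizontal component of the net force is zero, the horizontal velocity never changes, while the vertical velocity decreases steadily, exactly as the component equations above predict.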
Introduction

Cross-validation is a common technique to calibrate the binwidth of a histogram. Histograms are the simplest and, sometimes, the most effective tool to describe the density of a dataset. As usual, suppose you have an independent and identically distributed random sample $X_1, X_2, \ldots, X_n$ from some unknown continuous distribution called $f$. Recall that in a past post, we explained the construction of the histogram and found the following histogram estimator for $f(x)$, \begin{equation*} \displaystyle \hat{f}_h(x)=\frac{1}{nh}\sum_{i=1}^n \sum_j \I(X_i\in B_j) \I(x\in B_j), \end{equation*} where the $B_j$ form a uniform partition of the real line into bins of width $h$. There, we adjusted the binwidth by minimizing the mean integrated squared error (MISE), getting \begin{equation*} \displaystyle h_{opt}= \left(\frac{6}{n\|f^\prime \|_2^2}\right)^{1/3}\approx n^{-1/3}. \end{equation*} Notice the binwidth depends on the unknown quantity $\|f^\prime \|_2$, thus the main problem remains unsolved and we cannot use it in practice. Yeah, I know! I cheated before, minimizing the MISE and forgetting the influence of the constant. In fact, everything worked well because my example was normally distributed with a relatively large sample. Of course, those conditions are rare in statistics and we need to improve the choice of $h$. Consequently, we will find a fully data-driven estimator for $h$ using a technique called cross-validation.

Derivation of the cross-validation formula

Define the integrated squared error as follows, \begin{align*} ISE(h) & = \displaystyle \int \left(\hat{f}_h(x) - f(x)\right)^2\, dx \\ & = \displaystyle \int \hat{f}_h^2(x)\, dx - 2 \int \hat{f}_h(x) f(x)\, dx + \int f(x)^2\, dx \end{align*} Notice that minimizing the integrated squared error is equivalent to minimizing the expression \begin{equation*} \displaystyle J(h) = \int \hat{f}_h^2(x)\, dx - 2 \int \hat{f}_h(x) f(x)\, dx. \end{equation*} We aim to find an estimator $\hat{J}(h)$ of $J(h)$ such that $\E[\hat{J}(h)] = J(h)$. Remark that we can estimate the first term using only the available sample. However, the second one depends on the unknown function $f$. Thus, the first thought that comes to mind to approximate $\int \hat{f}_h(x) f(x)\, dx$ is to use the empirical estimator \begin{equation*} \displaystyle \frac{1}{n} \sum_{i=1}^n \hat{f}_h(X_i). \end{equation*} The problem here is that we are using the data twice, once to estimate $\hat{f}_h$ and once again to evaluate the empirical sum. To remedy this situation define, \begin{equation*} \displaystyle \hat{f}_{h, -i}(x) = \frac{1}{(n-1)h} \sum_k \sum_{\substack{j=1 \\j \neq i}}^n \I(X_j\in B_k) \I(x\in B_k). \end{equation*} Here, $\hat{f}_{h, -i}(x)$ is the leave-one-out estimator. You guessed right! We have removed the $i^{\text{th}}$ sample in each evaluation to ensure the independence between $\hat{f}_{h, -i}(\cdot)$ and $X_i$. In fact, one can prove that $\E[\hat{J}(h)] = J(h)$ (e.g., [1]). The general criterion to find $h$ by cross-validation is \begin{equation} \label{eq:hat_J_h} \displaystyle \hat{J}(h) = \int \hat{f}_h^2(x)\, dx - \frac{2}{n} \sum_{i=1}^n \hat{f}_{h, -i}(X_i). \end{equation}

Particular case: The histogram

The last equation looks ugly, and trying to minimize it in this state seems futile. Given that we are working (for now) with the histogram case, we can simplify it even further and find something easier to estimate. First, denote by $N_k$ the number of observations belonging to the $k^{\text{th}}$ interval $B_k$.
The random variable $N_k$ has the form \begin{equation*} N_k = \sum_{i=1}^n \I(X_i\in B_k). \end{equation*} Let us start with the first term of equation \eqref{eq:hat_J_h}. \begin{align*} \int \hat{f}_h^2(x)\, dx & = \displaystyle \frac{1}{n^2h^2} \int \left(\sum_k \sum_{i=1}^n \I(x \in B_k) \I(X_i \in B_k)\right)^2\, dx \\ & = \displaystyle \frac{1}{n^2h^2} \int \left(\sum_k \I(x \in B_k) N_k \right)^2\, dx \\ & = \displaystyle \frac{1}{n^2h^2} \int \left(\sum_k \I(x \in B_k) N_k^2 + 2\sum_{k\lt l} \I(x \in B_k) \I(x \in B_l) N_k N_l\right) dx. \end{align*} The second sum inside the integral is zero (Why?). Given that each $B_k$ has size $h$ we get, \begin{align} \int \hat{f}_h^2(x)\, dx & = \frac{1}{n^2h^2} \sum_k N_k^2 \int \I(x \in B_k) \, dx \nonumber\\ & = \frac{1}{n^2h} \sum_k N_k^2. \label{eq:first_term} \end{align} On the other hand, we have to handle the second term of equation \eqref{eq:hat_J_h}. We write out the full expression of this term and rearrange the sums, \begin{equation*} \sum_{i=1}^n \hat{f}_{h, -i}(X_i) = \frac{1}{(n-1)h} \sum_k \sum_{i=1}^n \I(X_i\in B_k) \sum_{\substack{j=1 \\j \neq i}}^n \I(X_j\in B_k). \end{equation*} Remark that the expression $\sum_{j=1, \, j \neq i}^n \I(X_j\in B_k)$ is equal to $N_k - \I(X_i\in B_k)$. We can simplify the second term into, \begin{align} \sum_{i=1}^n \hat{f}_{h, -i}(X_i) & = \displaystyle \frac{1}{(n-1)h} \sum_k \sum_{i=1}^n \I(X_i\in B_k) \left(N_k - \I(X_i\in B_k) \right) \nonumber \\ & = \displaystyle \frac{1}{(n-1)h} \sum_k \left(N_k^2 - N_k\right). \label{eq:second_term} \end{align} Gathering the expressions \eqref{eq:first_term} and \eqref{eq:second_term} we obtain, \begin{equation*} \hat{J}(h) = \frac{2}{(n-1)h} - \frac{n+1}{n^2(n-1)h} \sum_k N_k^2. \end{equation*} We have achieved our main goal: finding a statistic (a formula which depends only on the data) that we can minimize numerically to find the optimal binwidth $h$.

[1] Introduction to Nonparametric Estimation, Springer Series in Statistics, Springer.
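To make the final criterion concrete, here is a short numerical sketch (the function and variable names are mine, not from the post) that evaluates $\hat{J}(h) = \frac{2}{(n-1)h} - \frac{n+1}{n^2(n-1)h}\sum_k N_k^2$ on a grid of candidate binwidths and picks the minimizer:

import numpy as np

def J_hat(h, x):
    """Cross-validation criterion for a histogram with binwidth h on the sample x."""
    n = len(x)
    edges = np.arange(x.min(), x.max() + 2 * h, h)   # uniform bins of width h
    N_k, _ = np.histogram(x, bins=edges)             # bin counts N_k
    return 2.0 / ((n - 1) * h) - (n + 1) / (n ** 2 * (n - 1) * h) * np.sum(N_k ** 2)

rng = np.random.default_rng(0)
x = rng.normal(size=500)                             # toy i.i.d. sample
grid = np.linspace(0.05, 1.0, 200)                   # candidate binwidths
h_cv = grid[np.argmin([J_hat(h, x) for h in grid])]
print("cross-validated binwidth:", h_cv)

Note that the value of the criterion also depends mildly on where the bin grid is anchored; the sketch simply anchors the bins at the sample minimum.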
April 12th, 2016, 10:11 PM # 1 Senior Member Joined: Apr 2008 Posts: 194 Thanks: 3 How can I find the two missing solutions? Solve tan(2x) = cot(x) for 0 $\small\leqslant$ x < 2$\pi$. This is how I do it. Rewriting both sides in terms of tan(x) gives 2tan(x)/(1 - tan²(x)) = 1/tan(x). After doing some algebra, I get the equation below. tan²(x) = 1/3 Next, I square root both sides and get two equations. tan(x) = 1/$\sqrt{3}$ and tan(x) = -1/$\sqrt{3}$ The reference angle is $\pi$/6. For the equation with the positive root, I find the two solutions in quadrants I and III, which are x = $\pi$/6 and 7$\pi$/6. For the equation with the negative root, I find the two solutions in quadrants II and IV, which are x = 5$\pi$/6 and 11$\pi$/6. In addition to the four solutions above, the answer key also gives two more, which are x = $\pi$/2 and 3$\pi$/2. I am not sure how to find the last two solutions. Can someone explain it? Thank you very much. Last edited by skipjack; April 13th, 2016 at 12:49 AM. April 13th, 2016, 01:01 AM # 2 Global Moderator Joined: Dec 2006 Posts: 20,978 Thanks: 2229 tan(x) isn't defined for x = $\pi$/2 or 3$\pi$/2, but cot(x) is. tan(2x) = cot(x) = tan($\pi$/2 - x) 2x = $\pi$/2 - x + k$\pi$, where k is an integer 3x = $\pi$/2 + k$\pi$ x = $\pi$/6 + k$\pi$/3 Now one chooses the values of k such that 0 $\small\leqslant$ x < 2$\pi$: k = 0, 1, 2, 3, 4, or 5. x = $\pi$/6, $\pi$/2, 5$\pi$/6, 7$\pi$/6, 3$\pi$/2, or 11$\pi$/6. April 13th, 2016, 02:01 PM # 3 Senior Member Joined: Apr 2008 Posts: 194 Thanks: 3 Thanks, skipjack. I would have never thought of approaching the problem the way you did. Your way is really a genius method. Is there a way I can find the two missing solutions by using my method? Thanks. April 13th, 2016, 02:45 PM # 4 Math Team Joined: Jul 2011 From: Texas Posts: 3,017 Thanks: 1603 Quote: $\tan(2x) = \cot{x}$ $\dfrac{\sin(2x)}{\cos(2x)} - \dfrac{\cos{x}}{\sin{x}} = 0$ $\dfrac{2\sin{x}\cos{x}}{\cos^2{x}-\sin^2{x}} - \dfrac{\cos{x}}{\sin{x}} = 0$ $\dfrac{2\sin^2{x}\cos{x}-\cos{x}(\cos^2{x}-\sin^2{x})}{\sin{x}(\cos^2{x}-\sin^2{x})} = 0$ $\dfrac{3\sin^2{x}\cos{x}-\cos^3{x}}{\sin{x}(\cos^2{x}-\sin^2{x})} = 0$ $\dfrac{3(1-\cos^2{x})\cos{x}-\cos^3{x}}{\sin{x}(\cos^2{x}-\sin^2{x})} = 0$ $\dfrac{\cos{x}(3-4\cos^2{x})}{\sin{x}(\cos^2{x}-\sin^2{x})} = 0$ setting the numerator = 0 ... $\cos{x} = 0 \implies x = \dfrac{\pi}{2} \, , \, \dfrac{3\pi}{2}$ $\cos^2{x} = \dfrac{3}{4} \implies \cos{x} = \pm \dfrac{\sqrt{3}}{2} \implies x = \dfrac{\pi}{6} \, , \, \dfrac{5\pi}{6} \, , \, \dfrac{7\pi}{6}\, , \, \dfrac{11\pi}{6}$ April 15th, 2016, 01:14 AM # 6 Global Moderator Joined: Dec 2006 Posts: 20,978 Thanks: 2229 At your first step, you assumed that tan(x) is defined. The alternative is that x = $\pi/2$ or $3\pi/2$, which, by inspection, are solutions. Initially expanding tan(2x) in terms of cot(x) gives 2cot(x)/(cot²(x) - 1) = cot(x), which leads to cot(x)(cot²(x) - 3) = 0. Hence cot(x) = 0, which implies x = $\pi/2$ or $3\pi/2$, or cot(x) = ±√3, which implies x = $\pi/6$, $5\pi/6$, $7\pi/6$ or $11\pi/6$.
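As a quick numerical sanity check of the six answers (a short illustration in Python, not part of the thread): tan(2x) and cot(x) agree at each value, and at x = $\pi$/2 and 3$\pi$/2 both sides equal 0, which is exactly why the tangent-based approach, which assumes tan(x) is defined, misses them.

import math

solutions = [math.pi/6, math.pi/2, 5*math.pi/6, 7*math.pi/6, 3*math.pi/2, 11*math.pi/6]
for x in solutions:
    lhs = math.tan(2*x)                 # tan(2x)
    rhs = math.cos(x) / math.sin(x)     # cot(x)
    print(f"x = {x:.4f}: tan(2x) = {lhs:+.6f}, cot(x) = {rhs:+.6f}")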
Construction of a formula in a SageMath program Let $P_k:= \mathbb{F}_2[x_1,x_2,\ldots ,x_k]$ be the polynomial algebra in $k$ variables, each $x_i$ of degree $1,$ regarded as a module over the mod-$2$ Steenrod algebra $\mathcal{A}.$ Here $\mathcal{A} = \langle Sq^{2^m}\,\,|\,\,m\geq 0\rangle.$ Being the cohomology of a space, $P_k$ is a module over $\mathcal{A}.$ The action of $\mathcal{A}$ on $P_k$ is explicitly given by the formula $$Sq^m(x_j^d) = \binom{d}{m}x_j^{m+d},$$ where $\binom{d}{m}$ is reduced mod $2$ and $\binom{d}{m} = 0$ if $m > d.$ Now, I want to use the Steenrod algebra package and the multivariate polynomial ring package, together with the formula above, to implement the following (Cartan) formula in a SageMath program: $$ Sq^m(f) = \sum\limits_{m_1 + m_2 + \cdots + m_k = m}\binom{d_1}{m_1}x_1^{m_1+d_1}\binom{d_2}{m_2}x_2^{m_2+d_2}\cdots \binom{d_k}{m_k}x_k^{m_k+d_k}$$ for all $f = x_1^{d_1}x_2^{d_2}\ldots x_k^{d_k}\in P_k.$ Example: Let $k = 5, m = 2$ and $f = x_1^2x_2^3x_3^2x_4x_5\in P_5.$ We have $$ Sq^2(x_1^2x_2^3x_3^2x_4x_5) = x_1^4x_2^3x_3^2x_4x_5 + x_1^2x_2^5x_3^2x_4x_5 + x_1^2x_2^3x_3^4x_4x_5 +x_1^2x_2^3x_3^2x_4^2x_5^2 + x_1^2x_2^4x_3^2x_4x_5^2 + x_1^2x_2^4x_3^2x_4^2x_5.$$ I hope that someone can help. Thanks!
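Not part of the original question, but here is one possible sketch of such a routine in SageMath. It implements the Cartan-formula expansion directly over the multivariate polynomial ring rather than going through the built-in Steenrod algebra class; the names Sq_on_monomial and Sq are illustrative choices, not an official API.

```python
# SageMath sketch (assumed, illustrative implementation of the Cartan formula).
from itertools import product
from sage.all import GF, PolynomialRing, binomial, prod

k = 5
R = PolynomialRing(GF(2), 'x', k)   # P_k = F_2[x0, ..., x_{k-1}]
x = R.gens()

def Sq_on_monomial(m, mono):
    """Apply Sq^m to a single monomial x0^{d_0} ... x_{k-1}^{d_{k-1}}."""
    d = mono.exponents()[0]                        # exponent tuple (d_0, ..., d_{k-1})
    result = R.zero()
    for split in product(range(m + 1), repeat=k):  # all (m_0, ..., m_{k-1}) summing to m
        if sum(split) != m:
            continue
        coeff = prod(binomial(d[i], split[i]) for i in range(k)) % 2
        if coeff:
            result += prod(x[i] ** (d[i] + split[i]) for i in range(k))
    return result

def Sq(m, f):
    """Extend the action additively to an arbitrary polynomial f in P_k."""
    return sum((Sq_on_monomial(m, mono) for mono in f.monomials()), R.zero())

f = x[0]**2 * x[1]**3 * x[2]**2 * x[3] * x[4]
print(Sq(2, f))   # should reproduce the six terms of the worked example above
```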
February 22nd, 2018, 03:50 PM # 1 Senior Member Joined: Apr 2017 From: New York Posts: 155 Thanks: 6 Integration By Parts Hi guys. Can somebody explain to me only the last line (in the red frame)? How does the last line follow from the previous one? I know the topic well, I just didn't get the algebra part of the ln transformation. Appreciate it. February 22nd, 2018, 03:54 PM # 2 Global Moderator Joined: Oct 2008 From: London, Ontario, Canada - The Forest City Posts: 7,963 Thanks: 1148 Math Focus: Elementary mathematics and beyond Factor a 1/3 out of I. That should get you started. February 22nd, 2018, 04:01 PM # 3 Math Team Joined: Jul 2011 From: Texas Posts: 3,017 Thanks: 1603 $x \color{red}{\ln(3x+1)} -x+\dfrac{1}{3} \color{red}{\ln(3x+1)} + C$ factor out $\color{red}{\ln(3x+1)}$ from the two terms ... $\color{red}{\ln(3x+1)}\left[x + \dfrac{1}{3}\right]- x + C$ $\color{red}{\ln(3x+1)}\left[\dfrac{3x}{3} + \dfrac{1}{3}\right]- x + C$ $\color{red}{\ln(3x+1)}\left[\dfrac{3x+1}{3}\right]- x + C$ $\dfrac{1}{3}\ln(3x+1) \cdot (3x+1) - x + C$ February 22nd, 2018, 04:17 PM # 4 Global Moderator Joined: Oct 2008 From: London, Ontario, Canada - The Forest City Posts: 7,963 Thanks: 1148 Math Focus: Elementary mathematics and beyond $\displaystyle x\ln(3x+1)-x+\frac13\ln(3x+1)+C$ $\displaystyle \frac13(3x\ln(3x+1)+\ln(3x+1))-x+C$ $\displaystyle \frac13(3x+1)\ln(3x+1)-x+C$
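For anyone who wants to double-check the factoring symbolically, here is a small SymPy snippet (my addition, not part of the thread) confirming that the unfactored and factored forms are identical.

```python
from sympy import symbols, log, simplify, Rational

x = symbols('x', positive=True)
before = x*log(3*x + 1) - x + Rational(1, 3)*log(3*x + 1)
after = Rational(1, 3)*(3*x + 1)*log(3*x + 1) - x
print(simplify(before - after))  # 0, so the two expressions differ at most by the constant C
```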
Symbols:Greek Contents The $1$st letter of the Greek alphabet. Minuscule: $\alpha$ Majuscule: $\Alpha$ The $\LaTeX$ code for \(\alpha\) is \alpha . The $\LaTeX$ code for \(\Alpha\) is \Alpha . The $2$nd letter of the Greek alphabet. Minuscule: $\beta$ Majuscule: $\Beta$ The $\LaTeX$ code for \(\beta\) is \beta . The $\LaTeX$ code for \(\Beta\) is \Beta . The $3$rd letter of the Greek alphabet. Minuscule: $\gamma$ Majuscule: $\Gamma$ The $\LaTeX$ code for \(\gamma\) is \gamma . The $\LaTeX$ code for \(\Gamma\) is \Gamma . The $4$th letter of the Greek alphabet. Minuscule: $\delta$ Majuscule: $\Delta$ The $\LaTeX$ code for \(\delta\) is \delta . The $\LaTeX$ code for \(\Delta\) is \Delta . The $5$th letter of the Greek alphabet. Minuscules: $\epsilon$ and $\varepsilon$ Majuscule: $\Epsilon$ The $\LaTeX$ code for \(\epsilon\) is \epsilon . The $\LaTeX$ code for \(\varepsilon\) is \varepsilon . The $\LaTeX$ code for \(\Epsilon\) is \Epsilon . The $6$th letter of the Greek alphabet. Minuscule: $\zeta$ Majuscule: $\Zeta$ The $\LaTeX$ code for \(\zeta\) is \zeta . The $\LaTeX$ code for \(\Zeta\) is \Zeta . The $7$th letter of the Greek alphabet. Minuscule: $\eta$ Majuscule: $\Eta$ The $\LaTeX$ code for \(\eta\) is \eta . The $\LaTeX$ code for \(\Eta\) is \Eta . The $8$th letter of the Greek alphabet. Minuscules: $\theta$ and $\vartheta$ Majuscule: $\Theta$ The $\LaTeX$ code for \(\theta\) is \theta . The $\LaTeX$ code for \(\vartheta\) is \vartheta . The $\LaTeX$ code for \(\Theta\) is \Theta . The $9$th letter of the Greek alphabet. Minuscule: $\iota$ Majuscule: $\Iota$ The $\LaTeX$ code for \(\iota\) is \iota . The $\LaTeX$ code for \(\Iota\) is \Iota . The $10$th letter of the Greek alphabet. Minuscule: $\kappa$ Majuscule: $\Kappa$ The $\LaTeX$ code for \(\kappa\) is \kappa . The $\LaTeX$ code for \(\Kappa\) is \Kappa . The $11$th letter of the Greek alphabet. Minuscule: $\lambda$ Majuscule: $\Lambda$ The $\LaTeX$ code for \(\lambda\) is \lambda . The $\LaTeX$ code for \(\Lambda\) is \Lambda . The $12$th letter of the Greek alphabet. Minuscule: $\mu$ Majuscule: $\Mu$ The $\LaTeX$ code for \(\mu\) is \mu . The $\LaTeX$ code for \(\Mu\) is \Mu . The $13$th letter of the Greek alphabet. Minuscule: $\nu$ Majuscule: $\Nu$ The $\LaTeX$ code for \(\nu\) is \nu . The $\LaTeX$ code for \(\Nu\) is \Nu . The $14$th letter of the Greek alphabet. Minuscule: $\xi$ Majuscule: $\Xi$ The $\LaTeX$ code for \(\xi\) is \xi . The $\LaTeX$ code for \(\Xi\) is \Xi . The $15$th letter of the Greek alphabet. Minuscule: $\omicron$ Majuscule: $\textrm O$ The $\LaTeX$ code for \(\omicron\) is \omicron . The $\LaTeX$ code for \(\textrm O\) is \textrm O . The $16$th letter of the Greek alphabet. Minuscules: $\pi$ and $\varpi$ Majuscule: $\Pi$ The $\LaTeX$ code for \(\pi\) is \pi . The $\LaTeX$ code for \(\varpi\) is \varpi . The $\LaTeX$ code for \(\Pi\) is \Pi . The $17$th letter of the Greek alphabet. Minuscules: $\rho$ and $\varrho$ Majuscule: $\Rho$ The $\LaTeX$ code for \(\rho\) is \rho . The $\LaTeX$ code for \(\varrho\) is \varrho . The $\LaTeX$ code for \(\Rho\) is \Rho . The $18$th letter of the Greek alphabet. Minuscules: $\sigma$ and $\varsigma$ Majuscule: $\Sigma$ The $\LaTeX$ code for \(\sigma\) is \sigma . The $\LaTeX$ code for \(\varsigma\) is \varsigma . The $\LaTeX$ code for \(\Sigma\) is \Sigma . The $19$th letter of the Greek alphabet. Minuscule: $\tau$ Majuscule: $\Tau$ The $\LaTeX$ code for \(\tau\) is \tau . The $\LaTeX$ code for \(\Tau\) is \Tau . The $20$th letter of the Greek alphabet. 
Minuscule: $\upsilon$ Majuscule: $\Upsilon$ The $\LaTeX$ code for \(\upsilon\) is \upsilon . The $\LaTeX$ code for \(\Upsilon\) is \Upsilon . The $21$st letter of the Greek alphabet. Minuscules: $\phi$ and $\varphi$ Majuscules: $\Phi$ and $\varPhi$ The $\LaTeX$ code for \(\phi\) is \phi . The $\LaTeX$ code for \(\varphi\) is \varphi . The $\LaTeX$ code for \(\Phi\) is \Phi . The $\LaTeX$ code for \(\varPhi\) is \varPhi . The $22$nd letter of the Greek alphabet. Minuscule: $\chi$ Majuscule: $\Chi$ The $\LaTeX$ code for \(\chi\) is \chi . The $\LaTeX$ code for \(\Chi\) is \Chi . The $23$rd letter of the Greek alphabet. Minuscule: $\psi$ Majuscule: $\Psi$ The $\LaTeX$ code for \(\psi\) is \psi . The $\LaTeX$ code for \(\Psi\) is \Psi . The $24$th and final letter of the Greek alphabet. Minuscule: $\omega$ Majuscule: $\Omega$ The $\LaTeX$ code for \(\omega\) is \omega . The $\LaTeX$ code for \(\Omega\) is \Omega .
Position Lowercase Uppercase Name
1 $\alpha$ $\Alpha$ Alpha
2 $\beta$ $\Beta$ Beta
3 $\gamma$ $\Gamma$ Gamma
4 $\delta$ $\Delta$ Delta
5 $\epsilon$ $\Epsilon$ Epsilon
6 $\zeta$ $\Zeta$ Zeta
7 $\eta$ $\Eta$ Eta
8 $\theta$ $\Theta$ Theta
9 $\iota$ $\Iota$ Iota
10 $\kappa$ $\Kappa$ Kappa
11 $\lambda$ $\Lambda$ Lambda
12 $\mu$ $\Mu$ Mu
13 $\nu$ $\Nu$ Nu
14 $\xi$ $\Xi$ Xi
15 $o$ $\textrm O$ Omicron
16 $\pi$ $\Pi$ Pi
17 $\rho$ $\Rho$ Rho
18 $\sigma$ $\Sigma$ Sigma
19 $\tau$ $\Tau$ Tau
20 $\upsilon$ $\Upsilon$ Upsilon
21 $\phi$ $\Phi$ Phi
22 $\chi$ $\Chi$ Chi
23 $\psi$ $\Psi$ Psi
24 $\omega$ $\Omega$ Omega
Lowercase variants
25 $\varepsilon$ Varepsilon
26 $\vartheta$ Vartheta
27 $\varpi$ Varpi
28 $\varrho$ Varrho
29 $\varsigma$ Varsigma
30 $\varphi$ Varphi
New option Wavefront/Taper analysis is implemented. The new option allows you to estimate the influence of inhomogeneities of the deposition on the spectral characteristics and on the wavefront of the reflected/transmitted wave. The option is available at Analysis --> More --> Wavefront/Taper. Phase computations include the total path of the beam, including the extra space in the incident medium due to the changed total thickness of the coating at different positions (Fig. 1). Fig. 1. Schematic of the coating with thickness non-uniformity. To calculate the effect of a lack of uniformity on the wavefront, an extra incident material (gray region) is added to the front surface of the coating, so that the reference surface for phase calculations is completely free from non-uniformity. Fig. 2. Schematic of the multilayer at the central position (left) and the multilayer at the position \(x_1\) (right). Phase at different positions can be calculated as the phase of the amplitude reflection coefficient in the following way: \[ \varphi (x=0):\;\;\; r(d_1,...,d_m)\] \[ \varphi (x=x_1):\;\;\; r(d_1(x_1),...,d_m(x_1))\] Wavefront can be calculated and plotted vs. the wavelength or the relative coordinate \(x\): \[ \delta R(\lambda)=\frac{\varphi(\lambda)}{2\pi} \] \[ \delta R(x)=\frac{\varphi(x)}{2\pi} \] Fig. 3. Dependence of the reflected wavefront vs. relative coordinate calculated for the parabolic taper interpolation specified in Fig. 4. The wavelength can be varied using the slider on the bottom of the window. In the Taper/Wavefront parameters window, available by pressing the Parameters button, you can choose between a simple taper function \(f(x)\) and a more complicated taper dependence specified in the Environments tab. Example. \(f(x)\) are parabolic functions defined through taper coefficients \(a\) and \(b\) for high- and low-index materials, respectively (Fig. 4). Fig. 4. Schematic of the parabolic non-uniformity and the corresponding window in OptiLayer. Example. \(f(x)\) can be specified as a linear function through \(a\) and \(b\) for high- and low-index materials, respectively (Fig. 5). Fig. 5. Schematic of the linear non-uniformity and the corresponding window in OptiLayer. Fig. 6. Dependence of the reflected wavefront vs. relative coordinate calculated for the linear taper interpolation specified in Fig. 5. The wavelength can be varied using the slider on the bottom of the window. Fig. 7. Specification of taper coefficients in the environment manager. Reflected/transmitted wavefront can be calculated for more complicated taper interpolations specified through Taper coefficients in the Environments manager (Data --> Environments manager) (Fig. 7) and Relative Positions (Fig. 8). Fig. 8. Schematic of a complicated taper interpolation and the corresponding window in OptiLayer. OptiLayer allows you to calculate the dependence of spectral characteristics (reflectance, transmittance, phase, GD, and GDD) on the relative coordinate and on the wavelength. Fig. 9. Schematic of a coating with layer non-uniformity. Fig. 10. Schematic of the coating with layer non-uniformity at the central position (left) and at position \(x_1\) (right). Computations of spectral characteristics (reflectance, transmittance, phase, GD, and GDD) are performed at each relative coordinate/wavelength using standard formulas. Fig. 11. Wavelength dependence of reflectance at a relative coordinate of 0.45. You can vary the relative position using the slider. Also, you can vary the angle of incidence and polarization (s, p or average (both)). Fig. 12. 
Dependence of the coating reflectance on the position at the wavelength of 552nm. You can vary the wavelength with the help of the slider. Also, you can vary the angle of incidence and polarization (s, p or average (both)).
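The passage above describes what the Wavefront/Taper option computes but not the formulas used internally. As a rough illustration of the underlying idea (recomputing the phase of the amplitude reflection coefficient with position-dependent layer thicknesses), here is a hedged Python sketch using the standard characteristic-matrix method at normal incidence. The stack, refractive indices and linear taper coefficients a and b are invented toy values, not OptiLayer data, and the extra incident-medium path shown in Fig. 1 is omitted for brevity.

```python
import numpy as np

def r_coefficient(n_layers, d_layers, n_inc, n_sub, wavelength):
    """Amplitude reflection coefficient of a stack at normal incidence
    (standard characteristic-matrix method)."""
    M = np.eye(2, dtype=complex)
    for n, d in zip(n_layers, d_layers):
        delta = 2 * np.pi * n * d / wavelength
        layer = np.array([[np.cos(delta), 1j * np.sin(delta) / n],
                          [1j * n * np.sin(delta), np.cos(delta)]])
        M = M @ layer
    B, C = M @ np.array([1.0, n_sub])
    return (n_inc * B - C) / (n_inc * B + C)

# Toy quarter-wave stack (assumed values) with linear thickness tapers f(x) = 1 + a*x, 1 + b*x.
wavelength = 550.0                       # nm
n_H, n_L = 2.35, 1.45                    # assumed high/low indices
d_H, d_L = wavelength / (4 * n_H), wavelength / (4 * n_L)
indices = [n_H, n_L] * 8
nominal = [d_H, d_L] * 8
a, b = 0.02, 0.01                        # assumed taper coefficients for H and L layers

for x in np.linspace(0.0, 1.0, 5):       # relative lateral coordinate
    taper = [1 + (a if n == n_H else b) * x for n in indices]
    d = [t * d0 for t, d0 in zip(taper, nominal)]
    r = r_coefficient(indices, d, n_inc=1.0, n_sub=1.52, wavelength=wavelength)
    phase = np.angle(r)
    print(f"x = {x:.2f}  |r|^2 = {abs(r)**2:.4f}  wavefront (waves) = {phase/(2*np.pi):+.4f}")
```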
In "Entropy in Black Hole Pair Production" (arXiv:gr-qc/9306023), Strominger et al. note: The issue of whether (1.2) can be taken literally has bearing on the vexing question of what happens to information cast into a black hole. If one assumes that (1.2) counts all the black hole states, and that information is preserved, then one is forced to conclude that information escapes from a black hole at a rapid rate (proportional to the rate of area decrease) during the Hawking process. We do not think this is likely because it seems to require a breakdown of semiclassical methods for arbitrarily large black holes and at arbitrarily weak curvatures, although this point is certainly the subject of heated debates! Here $(1.2)$ is $$N=e^{S_{bh}} \tag{1.2}$$ where $S_{bh}$ is the black hole entropy. How do we conclude that the information escapes at a rate proportional to the rate of area decrease? We have $N=e^{S_{bh}}$. Therefore, I get: $\dfrac{dN}{dt}=e^{S_{bh}}\dfrac{dS_{bh}}{dt}=e^{S_{bh}}\dfrac{1}{4}\dfrac{dA}{dt} {\sim} e^{S_{bh}}\dfrac{dM^2}{dt}{\sim}e^{S_{bh}}r\big(r^2T^4\big){\sim}e^{S_{bh}}r^3\bigg(\dfrac{1}{r^4}\bigg){\sim}\dfrac{e^{A/4}}{\sqrt{A}}$ Even if I figure out how the information escape rate is proportional to the rate of area decrease, how do I make sense of the claim that this requires a breakdown of semiclassical methods for arbitrarily large black holes and arbitrarily weak curvatures? It seems to me that, since they say such a conclusion requires a breakdown of semiclassical results for arbitrarily large black holes, the semiclassical treatment I used in Point 1 isn't what they are using to make this claim. They are using something different. But they haven't linked any reference for the context in which they are claiming this. Maybe I am naive and am unfamiliar with some basic results of black hole thermodynamics that they are using here. Kindly suggest how to understand the two points mentioned above.
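A sketch of the standard counting argument (my own paraphrase, not taken from the paper) may help with the first point: under the stated assumptions that $S_{bh}=A/4$ counts all internal states and that evolution is unitary, the information still hidden inside the hole at time $t$ is bounded by the instantaneous entropy, so the information already radiated must keep pace with the shrinking bound: $$ I_{\text{hidden}}(t) \le S_{bh}(t) = \frac{A(t)}{4} \quad\Longrightarrow\quad I_{\text{out}}(t) \ge I_{\text{total}} - \frac{A(t)}{4}, \qquad\text{so on average}\quad \frac{dI_{\text{out}}}{dt} \gtrsim -\frac{1}{4}\frac{dA}{dt}. $$ This is a rate proportional to the rate of area decrease, as claimed; whether such a rate can actually be carried by semiclassical Hawking radiation from a large, weakly curved black hole is exactly what the authors doubt.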
Definition:Zero Vector Definition Let $\struct {R, +_R, \times_R}$ be a ring. Let $\struct {G, +_G}$ be an abelian group. Let $\struct {G, +_G, \circ}_R$ be an $R$-module. The identity of $\struct {G, +_G}$ is usually denoted $\mathbf 0$, or some variant of this, and called the zero vector. Note that on occasion it is advantageous to denote the zero vector differently, for example by $e$, or $0_V$ or $0_G$, in order to highlight the fact that the zero vector is not the same object as the zero scalar. Let $\struct {\R^n, +, \times}_\R$ be a real vector space. The zero vector in $\struct {\R^n, +, \times}_\R$ is: $\mathbf 0_{n \times 1} := \begin {bmatrix} 0 \\ 0 \\ \vdots \\ 0 \end {bmatrix}$ where $0 \in \R$. Also known as The zero vector is also sometimes known as the null vector. Sources 1964: Iain T. Adamson: Introduction to Field Theory: $\S 1.4$ 1965: Seth Warner: Modern Algebra: $\S 26$ 1968: Murray R. Spiegel: Mathematical Handbook of Formulas and Tables: $\S 22$: Fundamental Definitions: $2.$ 1969: C.R.J. Clapham: Introduction to Abstract Algebra: Chapter $7$: Vector Spaces: $\S 32$. Definition of a Vector Space 2008: David Nelson: The Penguin Dictionary of Mathematics (4th ed.): Entry: zero vector (null vector): 2.
Authors Index, Methods Funct. Anal. Topology 16 (2010), no. 4, 289-290 Methods Funct. Anal. Topology 16 (2010), no. 2, 112-119 In this paper we introduce the generalized continuous version of fusion frame, namely $gc$-fusion frame. Also we get some new results about Bessel mappings and perturbation in this case. On mixing and completely mixing properties of positive $L^1$-contractions of finite real W* -algebras Methods Funct. Anal. Topology 16 (2010), no. 3, 259-263 We consider a non-commutative real analogue of Akcoglu and Sucheston's result about the mixing properties of positive L$^1$-contractions of the L$^1$-space associated with a measure space with probability measure. This result generalizes an analogous result obtained for the L$^1$-space associated with a finite (complex) W$^*$-algebras. Methods Funct. Anal. Topology 16 (2010), no. 3, 197-202 We present solutions to some boundary value and initial-boundary value problems for the "wave" equation with the infinite dimensional L\'evy Laplacian $\Delta _L$ $$\frac{\partial^2 U(t,x)}{\partial t^2}=\Delta_LU(t,x)$$ in the Shilov class of functions. The strong Hamburger moment problem and related direct and inverse spectral problems for block Jacobi-Laurent matrices Methods Funct. Anal. Topology 16 (2010), no. 3, 203-241 In this article we propose an approach to the strong Hamburger moment problem based on the theory of generalized eigenvectors expansion for a selfadjoint operator. Such an approach to another type of moment problems was given in our works earlier, but for strong Hamburger moment problem it is new. We get a sufficiently complete account of the theory of such a problem, including the spectral theory of block Jacobi-Laurent matrices. Methods Funct. Anal. Topology 16 (2010), no. 1, 1-5 In this paper, we investigate hereditary properties of hyperspaces. Our basic cardinals are the Suslin hereditary number, the hereditary $\pi$-weight, the Shanin hereditary number, the hereditary density, the hereditary cellularity. We prove that the hereditary cellularity, the hereditary $\pi$-weight, the Shanin hereditary number, the hereditary density, the hereditary cellularity for any Eberlein compact and any Danto space and their hyperspaces coincide. Methods Funct. Anal. Topology 16 (2010), no. 4, 304-332 We propose a new axiomatics for a locally compact hypergroup. On the one hand, the new object generalizes a DJS-hypergroup and, on the other hand, it allows to obtain results similar to those for a unimodular hypecomplex system with continuous basis. We construct a harmonic analysis and, for a commutative locally compact hypergroup, give an analogue of the Pontryagin duality theorem. Methods Funct. Anal. Topology 16 (2010), no. 2, 101-111 Traces $\Phi$ on von Neumann algebras with values in complex order complete vector lattices are considered. The full description of these traces is given for the case when $\Phi$ is the Maharam trace. The version of Radon-Nikodym-type theorem for Maharam traces is established. Methods Funct. Anal. Topology 16 (2010), no. 2, 140-157 In this paper, a class of special finite dimensional perturbations of Volterra operators in Hilbert spaces is investigated. The main result of the article is finding necessary and sufficient conditions for an operator in a chosen class to be similar to the orthogonal sum of a dissipative and an anti-dissipative operators with finite dimensional imaginary parts. Methods Funct. Anal. Topology 16 (2010), no. 
4, 298-303 We give an effective description of finite rank singular perturbations of a normal operator by using the concepts we introduce of an admissible subspace and corresponding admissible operators. We give a description of rank one singular perturbations in terms of a scale of Hilbert spaces, which is constructed from the unperturbed operator. Dimension stabilization effect for the block Jacobi-type matrix of a bounded normal operator with the spectrum on an algebraic curve Methods Funct. Anal. Topology 16 (2010), no. 1, 28-41 Under some natural assumptions, any bounded normal operator in an appropriate basis has a three-diagonal block Jacobi-type matrix. Just as in the case of classical Jacobi matrices (e.g. of self-adjoint operators) such a structure can be effectively used. There are two sources of difficulties: rapid growth of blocks in the Jacobi-type matrix of such operators (they act in $\mathbb C^1\oplus\mathbb C^2\oplus\mathbb C^3\oplus\cdots$) and potentially complicated spectra structure of the normal operators. The aim of this article is to show that these two aspects are closely connected: simple structure of the spectra can effectively bound the complexity of the matrix structure. The main result of the article claims that if the spectra is concentrated on an algebraic curve the dimensions of Jacobi-type matrix blocks do not grow starting with some value. Methods Funct. Anal. Topology 16 (2010), no. 2, 120-130 The paper deals with the singular Sturm-Liouville expressions $$l(y) = -(py')' + qy$$ with the coefficients $$q = Q', \quad 1/p, Q/p, Q^2/p \in L_1, $$ where the derivative of the function $Q$ is understood in the sense of distributions. Due to a new regularization, the corresponding operators are correctly defined as quasi-differentials. Their resolvent approximation is investigated and all self-adjoint and maximal dissipative extensions and generalized resolvents are described in terms of homogeneous boundary conditions of the canonical form. Methods Funct. Anal. Topology 16 (2010), no. 2, 131-139 We study systems of one-dimensional subspaces of a Hilbert space. For such systems, symmetric and orthoscalar systems, as well as graph related configurations of one-dimensional subspaces have been studied. Methods Funct. Anal. Topology 16 (2010), no. 4, 291-297 Methods Funct. Anal. Topology 16 (2010), no. 1, 6-16 Using a general approach that covers the cases of Gaussian, Poissonian, Gamma, Pascal and Meixner measures on an infinite- dimensional space, we construct a general integration by parts formula for analysis connected with each of these measures. Our consideration is based on the constructions of the extended stochastic integral and the stochastic derivative that are connected with the structure of the extended Fock space. Methods Funct. Anal. Topology 16 (2010), no. 1, 17-27 Let $E$ be either $\ell_1$ of $L_1$. We consider $E$-unattainable continuous linear operators $T$ from $L_1$ to a Banach space $Y$, i.e., those operators which do not attain their norms on any subspace of $L_1$ isometric to $E$. It is not hard to see that if $T: L_1 \to Y$ is $\ell_1$-unattainable then it is also $L_1$-unattainable. We find some equivalent conditions for an operator to be $\ell_1$-unattainable and construct two operators, first $\ell_1$-unattainable and second $L_1$-unattainable but not $\ell_1$-unattainable. Some open problems remain unsolved. Methods Funct. Anal. Topology 16 (2010), no. 
4, 333-348 $J$-self-adjoint extensions of the Phillips symmetric operator $S$ are studied. The concepts of stable and unstable $C$-symmetry are introduced in the extension theory framework. The main results are the following: if ${A}$ is a $J$-self-adjoint extension of $S$, then either $\sigma({A})=\mathbb{R}$ or $\sigma({A})=\mathbb{C}$; if ${A}$ has a real spectrum, then ${A}$ has a stable $C$-symmetry and ${A}$ is similar to a self-adjoint operator; there are no $J$-self-adjoint extensions of the Phillips operator with unstable $C$-symmetry. Methods Funct. Anal. Topology 16 (2010), no. 3, 242-258 The $H$-ring structure of certain infinite dimensional Grassmannians is discussed using various algebraic and analytical methods but avoiding cellular arguments. These methods allow us to treat these Grassmannians in a greater generality. Methods Funct. Anal. Topology 16 (2010), no. 2, 158-166 We study Michael's lower semifinite topology and Fell's topology on the collection of all closed limit subsets of a topological space. Special attention is given to the subfamily of all maximal limit sets. Methods Funct. Anal. Topology 16 (2010), no. 2, 167-182 Let $M$ be a smooth connected compact surface, $P$ be either a real line $\mathbb R$ or a circle $S^1$. Then we have a natural right action of the group $D(M)$ of diffeomorphisms of $M$ on $C^\infty(M,P)$. For $f\in C^\infty(M,P)$ denote respectively by $S(f)$ and $O(f)$ its stabilizer and orbit with respect to this action. Recently, for a large class of smooth maps $f:M\to P$ the author calculated the homotopy types of the connected components of $S(f)$ and $O(f)$. It turned out that except for a few cases the identity component of $S(f)$ is contractible, $\pi_i O(f)=\pi_i M$ for $i\geq3$, and $\pi_2 O(f)=0$, while $\pi_1 O(f)$ is only proved to be a finite extension of $\pi_1D_{Id}M\oplus\mathbb Z^{l}$ for some $l\geq0$. In this note it is shown that if $\chi(M)<0$, then $\pi_1O(f)=G_1\times\cdots\times G_n$, where each $G_i$ is a fundamental group of the restriction of $f$ to a subsurface $B_i\subset M$ being either a $2$-disk or a cylinder or a Mobius band. For the proof of the main result incompressible subsurfaces and cellular automorphisms of surfaces are studied. Methods Funct. Anal. Topology 16 (2010), no. 4, 349-358 We describe the spectrum of the problem generated by the Stieltjes string recurrence relations on a figure-of-eight graph. The continuity and the force balance conditions are imposed at the vertex of the graph. It is shown that the eigenvalues of such (main) problem are interlaced with the elements of the union of sets of eigenvalues of the Dirichlet problems generated by the parts of the string which correspond to the loops of the figure-of-eight graph. Also the eigenvalues of the main problem are interlaced with the elements of the union of sets of eigenvalues of the periodic problems generated by the same parts of the string. Methods Funct. Anal. Topology 16 (2010), no. 4, 359-382 All matrix modifications of the classical Nevanlinna-Pick interpolation problem with a finite number of nonreal nodes which can be investigated by the V. P. Potapov method are described. Methods Funct. Anal. Topology 16 (2010), no. 1, 42-50 We give necessary and sufficient conditions for a one-dimensional Schrodinger operator to have the number of negative eigenvalues equal to the number of negative intensities in the case of $\delta$ interactions. 
On the number of negative eigenvalues of a multi-dimensional Schrodinger operator with point interactions Methods Funct. Anal. Topology 16 (2010), no. 4, 383-392 We prove that the number $N$ of negative eigenvalues of a Schr\"odinger operator $L$ with finitely many points of $\delta$-interactions on $\mathbb R^{d}$ (${d}\le3$) is equal to the number of negative eigenvalues of a certain class of matrix $M$ up to a constant. This $M$ is expressed in terms of distances between the interaction points and the intensities. As applications, we obtain sufficient and necessary conditions for $L$ to satisfy $N=m,n,n$ for ${d}=1,2,3$, respectively, and some estimates of the minimum and maximum of $N$ for fixed intensities. Here, we denote by $n$ and $m$ the numbers of interaction points and negative intensities, respectively. Methods Funct. Anal. Topology 16 (2010), no. 2, 183-196 For mappings acting from an interval into a locally convex space, we study properties of strong compact variation and strong compact absolute continuity connected with an expansion of the space into subspaces generated by the compact sets. A description of strong $K$-absolutely continuous mappings in terms of indefinite Bochner integral is obtained. A special class of the spaces having $K$-Radon-Nikodym property is obtained. A relation between the $K$-Radon-Nikodym property and the classical Radon-Nikodym property is considered. Methods Funct. Anal. Topology 16 (2010), no. 1, 51-56 It was proved in~\cite{Pop09b} that a $*$-algebra is $C^*$-representable, i.e., $*$-isomorphic to a self-adjoint subalgebra of bounded operators acting on a Hilbert space if and only if there is an algebraically admissible cone in the real space of Hermitian elements of the algebra such that the algebra unit is an Archimedean order unit. In the present paper we construct such cones in free products of $C^*$-representable $*$-algebras generated by unitaries. We also express the reducing ideal of any algebraically bounded $*$-algebra with corepresentation $\mathcal F/\mathcal J$ where $\mathcal F$ is a free algebra as a closure of the ideal $\mathcal J$ in some universal enveloping $C^*$-algebra. Methods Funct. Anal. Topology 16 (2010), no. 1, 57-68 In this paper we consider decompositions of the identity operator into a linear combination of $k\ge 5$ orthogonal projections with real coefficients. It is shown that if the sum $A$ of the coefficients is closed to an integer number between $2$ and $k-2$ then such a decomposition exists. If the coefficients are almost equal to each other, then the identity can be represented as a linear combination of orthogonal projections for $\frac{k-\sqrt{k^2-4k}}{2} < A < \frac{k+\sqrt{k^2-4k}}{2}$. In the case where some coefficients are sufficiently close to $1$ we find necessary conditions for the existence of the decomposition. Inverse theorems in the theory of approximation of vectors in a Banach space with exponential type entire vectors Methods Funct. Anal. Topology 16 (2010), no. 1, 69-82 An arbitrary operator $A$ on a Banach space $X$ which is a generator of a $C_0$-group with a certain growth condition at infinity is considered. A relationship between its exponential type entire vectors and its spectral subspaces is found. 
Inverse theorems on the connection between the degree of smoothness of a vector $x\in X$ with respect to the operator $A$, the rate of convergence to zero of the best approximation of $x$ by exponential type entire vectors for operator $A$, and the $k$-module of continuity with respect to $A$ are established. Also, a generalization of the Bernstein-type inequality is obtained. The results allow to obtain Bernstein-type inequalities in weighted $L_p$ spaces. Methods Funct. Anal. Topology 16 (2010), no. 3, 264-270 We prove that every Schur representation of a poset corresponding to $\widetilde{E_8}$ can be unitarized with some character. Methods Funct. Anal. Topology 16 (2010), no. 1, 83-100 We study positive definite kernels $K = (K_{n,m})_{n,m\in A}$, $A=\mathbb Z$ or $A=\mathbb Z_+$, which satisfy a difference equation of the form $L_n K = \overline L_m K$, or of the form $L_n \overline L_m K = K$, where $L$ is a linear difference operator (here the subscript $n$ ($m$) means that $L$ acts on columns (respectively rows) of $K$). In the first case, we give new proofs of Yu.M. Berezansky results about integral representations for $K$. In the second case, we obtain integral representations for $K$. The latter result is applied to strengthen one our result on abstract stochastic sequences. As an example, we consider the Hamburger moment problem and the corresponding positive matrix of moments. Classical results on the Hamburger moment problem are derived using an operator approach, without use of Jacobi matrices or orthogonal polynomials. Methods Funct. Anal. Topology 16 (2010), no. 3, 271-288 We describe all solutions of the matrix Hamburger moment problem in a general case (no conditions besides solvability are assumed). We use the fundamental results of A. V. Shtraus on the generalized resolvents of symmetric operators. All solutions of the truncated matrix Hamburger moment problem with an odd number of given moments are described in an "almost nondegenerate" case. Some conditions of solvability for the scalar truncated Hamburger moment problem with an even number of given moments are given.
amp-mathml Displays a MathML formula. Required Script <script async custom-element="amp-mathml" src="https://cdn.ampproject.org/v0/amp-mathml-0.1.js"></script> Supported Layouts container Examples amp-mathml.amp.html Behavior This extension creates an iframe and renders a MathML formula. Example: The Quadratic Formula <amp-mathml layout="container" data-formula="\[x = {-b \pm \sqrt{b^2-4ac} \over 2a}.\]"> </amp-mathml> Example: Cauchy's Integral Formula <amp-mathml layout="container" data-formula="\[f(a) = \frac{1}{2\pi i} \oint\frac{f(z)}{z-a}dz\]"> </amp-mathml> Example: Double angle formula for Cosines <amp-mathml layout="container" data-formula="$$ \cos(θ+φ)=\cos(θ)\cos(φ)−\sin(θ)\sin(φ) $$"> </amp-mathml> Example: Inline formula This is an example of a formula of <amp-mathml layout="container" inline data-formula="`x`"></amp-mathml>, <amp-mathml layout="container" inline data-formula="\(x = {-b \pm \sqrt{b^2-4ac} \over 2a}\)"></amp-mathml> placed inline in the middle of a block of text. <amp-mathml layout="container" inline data-formula="\( \cos(θ+φ) \)"></amp-mathml> This shows how the formula will fit inside a block of text and can be styled with CSS. Attributes data-formula (required) Specifies the formula to render. inline (optional) If specified, the component renders inline ( inline-block in CSS). Validation See amp-mathml rules in the AMP validator specification.
I'm working on a mobile application to help technicians carry out inspections. They have a lot of questions for each task; each question is about one inspection item. When the answer is positive, no additional action is required (80% of cases). But when the answer is No, the technician has to identify the anomaly, so I show a new page with the anomaly choices. I chose to represent them as cards because each needs an "on" and "off" status. When the technician chooses an anomaly, a new screen appears because we need some additional information, and this is where my problem appears. I cannot create a classic stepper because the number and type of questions depend on the answers: First he needs to say whether he "fixed", "momentarily fixed" or "can not fix for the moment" the anomaly. If the technician answers "fixed" or "momentarily fixed", we ask him to leave a comment explaining how he managed it. If the technician answers "can not fix for the moment", we have two additional questions: we have to identify the reason for the failure (choice in a list of 6 items) and the estimated repair time (choice in a list of 2 items). I don't know how to represent this user flow, this is my headache, but at the end the user needs to come back to the anomaly screen, because he can add another anomaly with the same flow. Thanks for your help! Quoting from McAfee MVISION CASB: Many cloud services use custom content disposition headers in an effort to improve the performance of their applications. These custom headers have the unintended side effect of preventing network security solutions (and on-premises DLP solutions that integrate to them via ICAP) from inspecting content for DLP. What exactly are content disposition headers, and why do they prevent network solutions from inspecting content? As per Microsoft: Content-disposition is an extension to the MIME protocol that instructs a MIME user agent on how it should display an attached file. A number of popular mobile apps utilize certificate pinning, such as Facebook. Does this mean that these applications cease to function completely on corporate and academic networks that utilize SSL inspection, unless the administrator specifically exempts them? If for any strong digraph $ H$ we let $ \lambda(H)$ be the length of a shortest closed walk traveling over every arc in $ H$, then what is the maximum value of $ \lambda(D)$ for any strong digraph $ D$ with $ n$ arcs? I.e., for any $ n\in\mathbb{N}$ how well can we approximate $ M_n=\max(\lambda(D):{\small D\text{ is strong and }|E(D)|=n})$ ? I can prove $ \frac{1}{4}n^2-17n^{3/2}\leq M_n\leq 2n^2$, so I'm curious whether there exists $ c\in\mathbb{R}$ for which $ M_n\sim cn^2$. Per What is the best way to carry photographic film when travelling?, when traveling with photographic film, it's best to pack it in hand luggage and request hand inspection of undeveloped film. In the United States, there's an explicit TSA rule that allows you to request hand inspection of undeveloped film, and in all cases, the security officers will respect that. However, from what I've read online, security checkpoints in some countries often insist that one put undeveloped film through the scanners. (While this often results in not much visible effect for lower-speed films, it can be a problem if it's scanned multiple times, especially when transiting through different countries, and higher-speed films shouldn't be scanned at all.) I've searched online, but I've not been able to find any information on the security rules for India, unlike the U.S. 
TSA, which fully documents its rules online. Is it possible or easy to request that photographic film be hand inspected at airport security checkpoints in India? Will Indian security officers usually honor the request? The Google Index tab in URL Inspection in Google Search Console shows incorrect info (https://www.dropbox.com/s/dvifnkv144noanj/incorrect.png?dl=0), while the Live Test tab shows the correct info (https://www.dropbox.com/s/bkh1je2ntybmoco/correct.png?dl=0). From my understanding the results should be the same, perhaps with some delay until the next indexing, but the results have remained different for a long time. As a result, Google search shows me the incorrect version (https://www.dropbox.com/s/7yiiv3062mjl8d7/result.png?dl=0).
Authors Index, Methods Funct. Anal. Topology 16 (2010), no. 4, 289-290 Methods Funct. Anal. Topology 16 (2010), no. 2, 112-119 In this paper we introduce the generalized continuous version of fusion frame, namely $gc$-fusion frame. Also we get some new results about Bessel mappings and perturbation in this case. On mixing and completely mixing properties of positive $L^1$-contractions of finite real W* -algebras Methods Funct. Anal. Topology 16 (2010), no. 3, 259-263 We consider a non-commutative real analogue of Akcoglu and Sucheston's result about the mixing properties of positive L$^1$-contractions of the L$^1$-space associated with a measure space with probability measure. This result generalizes an analogous result obtained for the L$^1$-space associated with a finite (complex) W$^*$-algebras. Methods Funct. Anal. Topology 16 (2010), no. 3, 197-202 We present solutions to some boundary value and initial-boundary value problems for the "wave" equation with the infinite dimensional L\'evy Laplacian $\Delta _L$ $$\frac{\partial^2 U(t,x)}{\partial t^2}=\Delta_LU(t,x)$$ in the Shilov class of functions. The strong Hamburger moment problem and related direct and inverse spectral problems for block Jacobi-Laurent matrices Methods Funct. Anal. Topology 16 (2010), no. 3, 203-241 In this article we propose an approach to the strong Hamburger moment problem based on the theory of generalized eigenvectors expansion for a selfadjoint operator. Such an approach to another type of moment problems was given in our works earlier, but for strong Hamburger moment problem it is new. We get a sufficiently complete account of the theory of such a problem, including the spectral theory of block Jacobi-Laurent matrices. Methods Funct. Anal. Topology 16 (2010), no. 1, 1-5 In this paper, we investigate hereditary properties of hyperspaces. Our basic cardinals are the Suslin hereditary number, the hereditary $\pi$-weight, the Shanin hereditary number, the hereditary density, the hereditary cellularity. We prove that the hereditary cellularity, the hereditary $\pi$-weight, the Shanin hereditary number, the hereditary density, the hereditary cellularity for any Eberlein compact and any Danto space and their hyperspaces coincide. Methods Funct. Anal. Topology 16 (2010), no. 4, 304-332 We propose a new axiomatics for a locally compact hypergroup. On the one hand, the new object generalizes a DJS-hypergroup and, on the other hand, it allows to obtain results similar to those for a unimodular hypecomplex system with continuous basis. We construct a harmonic analysis and, for a commutative locally compact hypergroup, give an analogue of the Pontryagin duality theorem. Methods Funct. Anal. Topology 16 (2010), no. 2, 101-111 Traces $\Phi$ on von Neumann algebras with values in complex order complete vector lattices are considered. The full description of these traces is given for the case when $\Phi$ is the Maharam trace. The version of Radon-Nikodym-type theorem for Maharam traces is established. Methods Funct. Anal. Topology 16 (2010), no. 2, 140-157 In this paper, a class of special finite dimensional perturbations of Volterra operators in Hilbert spaces is investigated. The main result of the article is finding necessary and sufficient conditions for an operator in a chosen class to be similar to the orthogonal sum of a dissipative and an anti-dissipative operators with finite dimensional imaginary parts. Methods Funct. Anal. Topology 16 (2010), no. 
4, 298-303 We give an effective description of finite rank singular perturbations of a normal operator by using the concepts we introduce of an admissible subspace and corresponding admissible operators. We give a description of rank one singular perturbations in terms of a scale of Hilbert spaces, which is constructed from the unperturbed operator. Dimension stabilization effect for the block Jacobi-type matrix of a bounded normal operator with the spectrum on an algebraic curve Methods Funct. Anal. Topology 16 (2010), no. 1, 28-41 Under some natural assumptions, any bounded normal operator in an appropriate basis has a three-diagonal block Jacobi-type matrix. Just as in the case of classical Jacobi matrices (e.g. of self-adjoint operators) such a structure can be effectively used. There are two sources of difficulties: rapid growth of blocks in the Jacobi-type matrix of such operators (they act in $\mathbb C^1\oplus\mathbb C^2\oplus\mathbb C^3\oplus\cdots$) and potentially complicated spectra structure of the normal operators. The aim of this article is to show that these two aspects are closely connected: simple structure of the spectra can effectively bound the complexity of the matrix structure. The main result of the article claims that if the spectra is concentrated on an algebraic curve the dimensions of Jacobi-type matrix blocks do not grow starting with some value. Methods Funct. Anal. Topology 16 (2010), no. 2, 120-130 The paper deals with the singular Sturm-Liouville expressions $$l(y) = -(py')' + qy$$ with the coefficients $$q = Q', \quad 1/p, Q/p, Q^2/p \in L_1, $$ where the derivative of the function $Q$ is understood in the sense of distributions. Due to a new regularization, the corresponding operators are correctly defined as quasi-differentials. Their resolvent approximation is investigated and all self-adjoint and maximal dissipative extensions and generalized resolvents are described in terms of homogeneous boundary conditions of the canonical form. Methods Funct. Anal. Topology 16 (2010), no. 2, 131-139 We study systems of one-dimensional subspaces of a Hilbert space. For such systems, symmetric and orthoscalar systems, as well as graph related configurations of one-dimensional subspaces have been studied. Methods Funct. Anal. Topology 16 (2010), no. 4, 291-297 Methods Funct. Anal. Topology 16 (2010), no. 1, 6-16 Using a general approach that covers the cases of Gaussian, Poissonian, Gamma, Pascal and Meixner measures on an infinite- dimensional space, we construct a general integration by parts formula for analysis connected with each of these measures. Our consideration is based on the constructions of the extended stochastic integral and the stochastic derivative that are connected with the structure of the extended Fock space. Methods Funct. Anal. Topology 16 (2010), no. 1, 17-27 Let $E$ be either $\ell_1$ of $L_1$. We consider $E$-unattainable continuous linear operators $T$ from $L_1$ to a Banach space $Y$, i.e., those operators which do not attain their norms on any subspace of $L_1$ isometric to $E$. It is not hard to see that if $T: L_1 \to Y$ is $\ell_1$-unattainable then it is also $L_1$-unattainable. We find some equivalent conditions for an operator to be $\ell_1$-unattainable and construct two operators, first $\ell_1$-unattainable and second $L_1$-unattainable but not $\ell_1$-unattainable. Some open problems remain unsolved. Methods Funct. Anal. Topology 16 (2010), no. 
4, 333-348 $J$-self-adjoint extensions of the Phillips symmetric operator $S$ are studied. The concepts of stable and unstable $C$-symmetry are introduced in the extension theory framework. The main results are the following: if ${A}$ is a $J$-self-adjoint extension of $S$, then either $\sigma({A})=\mathbb{R}$ or $\sigma({A})=\mathbb{C}$; if ${A}$ has a real spectrum, then ${A}$ has a stable $C$-symmetry and ${A}$ is similar to a self-adjoint operator; there are no $J$-self-adjoint extensions of the Phillips operator with unstable $C$-symmetry. Methods Funct. Anal. Topology 16 (2010), no. 3, 242-258 The $H$-ring structure of certain infinite dimensional Grassmannians is discussed using various algebraic and analytical methods but avoiding cellular arguments. These methods allow us to treat these Grassmannians in a greater generality. Methods Funct. Anal. Topology 16 (2010), no. 2, 158-166 We study Michael's lower semifinite topology and Fell's topology on the collection of all closed limit subsets of a topological space. Special attention is given to the subfamily of all maximal limit sets. Methods Funct. Anal. Topology 16 (2010), no. 2, 167-182 Let $M$ be a smooth connected compact surface, and let $P$ be either the real line $\mathbb R$ or the circle $S^1$. Then we have a natural right action of the group $D(M)$ of diffeomorphisms of $M$ on $C^\infty(M,P)$. For $f\in C^\infty(M,P)$ denote respectively by $S(f)$ and $O(f)$ its stabilizer and orbit with respect to this action. Recently, for a large class of smooth maps $f:M\to P$ the author calculated the homotopy types of the connected components of $S(f)$ and $O(f)$. It turned out that except for a few cases the identity component of $S(f)$ is contractible, $\pi_i O(f)=\pi_i M$ for $i\geq3$, and $\pi_2 O(f)=0$, while $\pi_1 O(f)$ is only proved to be a finite extension of $\pi_1D_{Id}M\oplus\mathbb Z^{l}$ for some $l\geq0$. In this note it is shown that if $\chi(M)<0$, then $\pi_1O(f)=G_1\times\cdots\times G_n$, where each $G_i$ is a fundamental group of the restriction of $f$ to a subsurface $B_i\subset M$ being either a $2$-disk or a cylinder or a Mobius band. For the proof of the main result, incompressible subsurfaces and cellular automorphisms of surfaces are studied. Methods Funct. Anal. Topology 16 (2010), no. 4, 349-358 We describe the spectrum of the problem generated by the Stieltjes string recurrence relations on a figure-of-eight graph. The continuity and the force balance conditions are imposed at the vertex of the graph. It is shown that the eigenvalues of such a (main) problem are interlaced with the elements of the union of sets of eigenvalues of the Dirichlet problems generated by the parts of the string which correspond to the loops of the figure-of-eight graph. Also the eigenvalues of the main problem are interlaced with the elements of the union of sets of eigenvalues of the periodic problems generated by the same parts of the string. Methods Funct. Anal. Topology 16 (2010), no. 4, 359-382 All matrix modifications of the classical Nevanlinna-Pick interpolation problem with a finite number of nonreal nodes which can be investigated by the V. P. Potapov method are described. Methods Funct. Anal. Topology 16 (2010), no. 1, 42-50 We give necessary and sufficient conditions for a one-dimensional Schrodinger operator to have the number of negative eigenvalues equal to the number of negative intensities in the case of $\delta$ interactions.
On the number of negative eigenvalues of a multi-dimensional Schrodinger operator with point interactions Methods Funct. Anal. Topology 16 (2010), no. 4, 383-392 We prove that the number $N$ of negative eigenvalues of a Schr\"odinger operator $L$ with finitely many points of $\delta$-interactions on $\mathbb R^{d}$ (${d}\le3$) is equal to the number of negative eigenvalues of a certain class of matrix $M$ up to a constant. This $M$ is expressed in terms of distances between the interaction points and the intensities. As applications, we obtain sufficient and necessary conditions for $L$ to satisfy $N=m,n,n$ for ${d}=1,2,3$, respectively, and some estimates of the minimum and maximum of $N$ for fixed intensities. Here, we denote by $n$ and $m$ the numbers of interaction points and negative intensities, respectively. Methods Funct. Anal. Topology 16 (2010), no. 2, 183-196 For mappings acting from an interval into a locally convex space, we study properties of strong compact variation and strong compact absolute continuity connected with an expansion of the space into subspaces generated by the compact sets. A description of strong $K$-absolutely continuous mappings in terms of an indefinite Bochner integral is obtained. A special class of the spaces having the $K$-Radon-Nikodym property is obtained. A relation between the $K$-Radon-Nikodym property and the classical Radon-Nikodym property is considered. Methods Funct. Anal. Topology 16 (2010), no. 1, 51-56 It was proved in~\cite{Pop09b} that a $*$-algebra is $C^*$-representable, i.e., $*$-isomorphic to a self-adjoint subalgebra of bounded operators acting on a Hilbert space, if and only if there is an algebraically admissible cone in the real space of Hermitian elements of the algebra such that the algebra unit is an Archimedean order unit. In the present paper we construct such cones in free products of $C^*$-representable $*$-algebras generated by unitaries. We also express the reducing ideal of any algebraically bounded $*$-algebra with corepresentation $\mathcal F/\mathcal J$, where $\mathcal F$ is a free algebra, as a closure of the ideal $\mathcal J$ in some universal enveloping $C^*$-algebra. Methods Funct. Anal. Topology 16 (2010), no. 1, 57-68 In this paper we consider decompositions of the identity operator into a linear combination of $k\ge 5$ orthogonal projections with real coefficients. It is shown that if the sum $A$ of the coefficients is close to an integer between $2$ and $k-2$, then such a decomposition exists. If the coefficients are almost equal to each other, then the identity can be represented as a linear combination of orthogonal projections for $\frac{k-\sqrt{k^2-4k}}{2} < A < \frac{k+\sqrt{k^2-4k}}{2}$. In the case where some coefficients are sufficiently close to $1$ we find necessary conditions for the existence of the decomposition. Inverse theorems in the theory of approximation of vectors in a Banach space with exponential type entire vectors Methods Funct. Anal. Topology 16 (2010), no. 1, 69-82 An arbitrary operator $A$ on a Banach space $X$ which is a generator of a $C_0$-group with a certain growth condition at infinity is considered. A relationship between its exponential type entire vectors and its spectral subspaces is found.
Inverse theorems on the connection between the degree of smoothness of a vector $x\in X$ with respect to the operator $A$, the rate of convergence to zero of the best approximation of $x$ by exponential type entire vectors for the operator $A$, and the $k$-modulus of continuity with respect to $A$ are established. Also, a generalization of the Bernstein-type inequality is obtained. The results allow one to obtain Bernstein-type inequalities in weighted $L_p$ spaces. Methods Funct. Anal. Topology 16 (2010), no. 3, 264-270 We prove that every Schur representation of a poset corresponding to $\widetilde{E_8}$ can be unitarized with some character. Methods Funct. Anal. Topology 16 (2010), no. 1, 83-100 We study positive definite kernels $K = (K_{n,m})_{n,m\in A}$, $A=\mathbb Z$ or $A=\mathbb Z_+$, which satisfy a difference equation of the form $L_n K = \overline L_m K$, or of the form $L_n \overline L_m K = K$, where $L$ is a linear difference operator (here the subscript $n$ ($m$) means that $L$ acts on columns (respectively rows) of $K$). In the first case, we give new proofs of Yu. M. Berezansky's results about integral representations for $K$. In the second case, we obtain integral representations for $K$. The latter result is applied to strengthen one of our results on abstract stochastic sequences. As an example, we consider the Hamburger moment problem and the corresponding positive matrix of moments. Classical results on the Hamburger moment problem are derived using an operator approach, without the use of Jacobi matrices or orthogonal polynomials. Methods Funct. Anal. Topology 16 (2010), no. 3, 271-288 We describe all solutions of the matrix Hamburger moment problem in a general case (no conditions besides solvability are assumed). We use the fundamental results of A. V. Shtraus on the generalized resolvents of symmetric operators. All solutions of the truncated matrix Hamburger moment problem with an odd number of given moments are described in an "almost nondegenerate" case. Some conditions of solvability for the scalar truncated Hamburger moment problem with an even number of given moments are given.
ISSN: 1078-0947 eISSN: 1553-5231 Discrete & Continuous Dynamical Systems - A, January 1999, Volume 5, Issue 1 Abstract: This paper deals with various applications of two basic theorems in order-preserving systems under a group action -- monotonicity theorem and convergence theorem. Among other things we show symmetry properties of stable solutions of semilinear elliptic equations and systems. Next we apply our theory to traveling waves and pseudo-traveling waves for a certain class of quasilinear diffusion equations and systems, and show that stable traveling waves and pseudo-traveling waves have monotone profiles and, conversely, that monotone traveling waves and pseudo-traveling waves are stable with asymptotic phase. We also discuss pseudo-traveling waves for equations of surface motion. Abstract: We establish the existence of solutions to an anti-periodic non-monotone boundary value problem. Our approach relies on a combination of monotonicity and compactness methods. Abstract: This paper is a study of the global structure of the attractors of a dynamical system. The dynamical system is associated with an oriented graph called a Symbolic Image of the system. The symbolic image can be considered as a finite discrete approximation of the dynamical system flow. Investigation of the symbolic image provides an opportunity to localize the attractors of the system and to estimate their domains of attraction. A special sequence of symbolic images is considered in order to obtain precise knowledge about the global structure of the attractors and to get filtrations of the system. Abstract: We study special symmetric periodic solutions of the equation $\dot x(t) =\alpha f(x(t), x(t-1))$ where $\alpha$ is a positive parameter and the nonlinearity $f$ satisfies the symmetry conditions $f(-u, v) = -f(u,-v) = f(u, v).$ We establish the existence and stability properties for such periodic solutions with small amplitude. Abstract: Topological transitivity, weak mixing and non-wandering are definitions used in topological dynamics to describe the ways in which open sets feed into each other under iteration. Using finite directed graphs, these definitions are generalized to obtain topological mapping properties. The extent to which these mapping properties are logically distinct is examined. There are three distinct properties which entail "interesting" dynamics. Two of these, transitivity and weak mixing, are already well known. The third does not appear in the literature but turns out to be close to weak mixing in a sense to be discussed. The remaining properties comprise a countably infinite collection of distinct properties entailing somewhat less interesting dynamics and including non-wandering. Abstract: We study the Cauchy problem for a nonlinear Schrödinger equation which is the generalization of one arising in plasma physics. We focus on the so-called subcritical case and prove that when the initial datum is "small", the solution exists globally in time and decays in time just like in the linear case. For a certain range of the exponent in the nonlinear term, we prove that the solution is asymptotic to a "final state" and the nonexistence of asymptotically free solutions. The method used in this paper is based on some gauge transformation and on a certain phase function.
Abstract: The rich diversity of patterns and concepts intrinsic to the Julia and the Mandelbrot sets of the quadratic map in the complex plane invites a search for higher dimensional generalisations. Quaternions provide a natural framework for such an endeavour. The objective of this investigation is to provide explicit formulae for the domain of stability of multiple cycles of classes of quaternionic maps $F(Q)+C$ or $CF(Q)$ where $C$ is a quaternion and $F(Q)$ is an integral function of $Q$. We introduce the concept of quaternionic differentials and employ this in the linear stability analysis of multiple cycles. Abstract: Nonlinear stability and some other dynamical properties for a KS type equation in space dimension two are studied in this article. We consider here a variation of the KS equation where the derivatives in the nonlinear and the antidissipative linear terms are in one single direction. We prove the nonlinear stability for all positive times and study the corresponding attractor. Abstract: Given a control system (formulated as a nonconvex and unbounded differential inclusion) we study the problem of reaching a closed target with trajectories of the system. A controllability condition around the target allows us to construct a path that steers each point nearby into it in finite time and using a finite amount of energy. In applications to minimization problems, limits of such trajectories could be discontinuous. We extend the inclusion so that all the trajectories of the extension can be approached by (graphs of) solutions of the original system. In the extended setting the value function of an exit time problem with Lagrangian affine in the unbounded control can be shown to coincide with the value function of the original problem, to be continuous and to be the unique (viscosity) solution of a Hamilton-Jacobi equation with suitable boundary conditions. Abstract: We study the regularity of the composition operator $((f, g)\to g \circ f)$ in spaces of Hölder differentiable functions. Depending on the smooth norms used to topologize $f, g$ and their composition, the operator has different differentiability properties. We give complete and sharp results for the classical Hölder spaces of functions defined on geometrically well behaved open sets in Banach spaces. We also provide examples that show that the regularity conclusions are sharp and also that if the geometric conditions fail, even in finite dimensions, many elements of the theory of functions (smoothing, interpolation, extensions) can have somewhat unexpected properties. Abstract: In this paper, we give some existence results for equilibrium problems by proceeding to a perturbation of the initial problem and using techniques of recession analysis. We develop and thoroughly describe recession conditions which ensure the existence of at least one solution for hemivariational inequalities introduced by Panagiotopoulos. Then we give two applications to the resolution of concrete variational inequalities. We shall examine two examples. The first one concerns the unilateral boundary condition. In the second, we shall consider the contact problem with given friction on part of the boundary. Abstract: In this paper we consider the notion of determining projections for two classes of stochastic dissipative equations: a reaction-diffusion equation and a 2-dimensional Navier-Stokes equation. We define certain finite dimensional objects that can capture the asymptotic behavior of the related dynamical system.
They are projections on a space of polynomial functions, generalizing the classical (but not very much studied in a stochastic context) concepts of determining modes, nodes and volumes. Abstract: We show the local in time solvability of the Cauchy problem for nonlinear wave equations in the Sobolev space of critical order with a nonlinear term of exponential type.
In the course of some physics research I've been working on, a very annoying integral has appeared that I'm having difficulty evaluating numerically. Any help you could offer would be greatly appreciated. The integral to be evaluated is as follows: $$ \int_{0}^{\infty}d\omega\int_{-\infty}^{\infty}(dk_{x})k_{x}\frac{e^{-2|k_{\bot}|}(2\operatorname{Im} R_{1})(2\operatorname{Im} R_{2})}{|1-e^{-2|k_{\bot}|}R_{1}R_{2}|^{2}}\Theta(vk_{x}-\omega) $$ Here $v$ is a constant, $\Theta$ is the Heaviside theta function, and $$k_{\bot} = \sqrt{\frac{\omega^{2}}{c^{2}}-k_{x}^{2}} $$ where $c$ is a constant which we can take to be one. Finally, the function $R_{1}$ is given by $$ R_{1}[\omega,k_{x}] = \cfrac{\sqrt{\cfrac{\omega^{2}}{c^{2}}-k_{x}^{2}}-\sqrt{\cfrac{\omega^{2}\left(1-\cfrac{\omega_{p}^{2}}{\omega^{2}-\omega_{0}^{2}}\right)}{c^{2}}-k_{x}^{2}}}{\sqrt{\cfrac{\omega^{2}}{c^{2}}-k_{x}^{2}}+\sqrt{\cfrac{\omega^{2}\left(1-\cfrac{\omega_{p}^{2}}{\omega^{2}-\omega_{0}^{2}}\right)}{c^{2}}-k_{x}^{2}}} $$ The function $R_{2}(\omega,k_{x}) = R_{1}(\omega-vk_{x},k_{x})$. The parameters $\omega_{0},\omega_{p}$ which appear in the $R$ functions are parameters of the theory which we are free to choose. For obvious reasons, this is not a set of functions which are easy to visualize. However, careful examination of where the integrand goes to zero can substantially reduce the range of integration which we need to consider: the simplified integral is $$ \int_{0}^{\omega_{0}}d\omega\int_{0}^{\infty}(dk_{x})k_{x}\frac{e^{-2|k_{\bot}|}(2\operatorname{Im} R_{1})(2\operatorname{Im} R_{2})}{|1-e^{-2|k_{\bot}|}R_{1}R_{2}|^{2}}\Theta(vk_{x}-\omega) $$ The integrand is exponentially decaying with increasing $k_{x}$ for $k_{x}>\omega_{0}$, so it should be possible to truncate the $k_{x}$ integral with little error for $k_{x}\gg\omega_{0}$. My strategy for computing the integral so far has been as follows: I declare the function integrand[ω, kx] in Mathematica to be the aforementioned integrand; I then define integral[p_?NumberQ] := NIntegrate[integrand[ω, kx] /. {v -> p (*, other substitutions *)}, {ω, 0, ω0}, {kx, 0, 10 ω0}] where "other substitutions" stands for replacements for constants in the integrand, and $10\omega_{0}$ was chosen as an arbitrary upper bound on the $k_{x}$ integral. At the suggestion of readers, I'll post the substitutions which I have been using, but please keep in mind that for all of the constants, any positive real value is allowed in principle. The parameters I've chosen are $$ \omega_{p}\rightarrow 10^{5},\omega_{0}\rightarrow 50 $$ Mathematica can then, in principle, compute the integral given a choice of $v$. The problem with this is that there seems to be consistent underestimation of the result. I say this because the integrand is positive definite in the region of interest, so increasing the size of the integration region should never decrease the magnitude of the integral. However, I have found situations where increasing the size of the integration region causes the value of the integral to drop, which I assume is because there is some sort of inverse relationship between the accuracy of integrand sampling and the size of the region of integration. If anyone can provide any insight into how to improve my approach, I would really appreciate the help. I have played with options like AccuracyGoal, but I'm open to suggestions if there are clever options to take besides just "increase accuracy". Thanks!
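For what it is worth, the same truncated double integral can be sketched outside Mathematica. The following is a rough SciPy transcription of the setup described above, not the poster's Mathematica code: the value v = 0.5, the principal square-root branch, and the 10*ω0 cutoff are assumptions made for this illustration, and the quadrature will need care near resonances where the denominator becomes small.

import numpy as np
from scipy import integrate

# Parameters quoted in the post; the value of v is an assumption for this sketch.
wp, w0, c, v = 1.0e5, 50.0, 1.0, 0.5

def R1(w, kx):
    # Reflection coefficient exactly as written above; numpy's principal
    # square-root branch is assumed for both roots.
    a = np.sqrt(complex(w**2 / c**2 - kx**2))
    b = np.sqrt(complex(w**2 * (1.0 - wp**2 / (w**2 - w0**2)) / c**2 - kx**2))
    return (a - b) / (a + b)

def integrand(kx, w):
    kperp = abs(np.sqrt(complex(w**2 / c**2 - kx**2)))
    r1 = R1(w, kx)
    r2 = R1(w - v * kx, kx)       # R2(w, kx) = R1(w - v*kx, kx)
    num = np.exp(-2.0 * kperp) * (2.0 * r1.imag) * (2.0 * r2.imag)
    den = abs(1.0 - np.exp(-2.0 * kperp) * r1 * r2) ** 2
    return kx * num / den

# Reduced domain: w in (0, w0); the Heaviside step Theta(v*kx - w) is enforced
# through the inner lower limit kx = w/v, and kx is truncated at 10*w0 as in the post.
result, error = integrate.dblquad(integrand, 0.0, w0,
                                  lambda w: w / v, lambda w: 10.0 * w0)
print(result, error)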
The voltage across an element is $12e^{-2t}$ V. The current entering the positive terminal of the element is $2e^{-2t}$ A. Find the energy absorbed by the element in 1.5 s starting from t = 0. Solution: The energy absorbed is the time integral of the instantaneous power, $$W=\int v\, i \, dt$$ (where W is energy absorbed, v is voltage, t is time, and i is current). Substitute our voltage and current equations: $$W=\int^{1.5}_{0} (24e^{-4t})\,dt$$ $$W=\dfrac{24e^{-4t}}{-4}\Big|^{1.5}_{0}$$ $$W=5.985 \text{ J}$$
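As a quick numerical cross-check of the arithmetic above (an illustration, not part of the original solution), the same power can be integrated directly:

import numpy as np
from scipy.integrate import quad

# p(t) = v(t) * i(t) = 12e^{-2t} * 2e^{-2t} = 24e^{-4t} watts
power = lambda t: 24.0 * np.exp(-4.0 * t)

W, _ = quad(power, 0.0, 1.5)
print(W)  # approximately 5.985 J, i.e. 6*(1 - exp(-6))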
Would it be correct to characterize loop invariants as a type of tautology? I ask since the invariant must basically always be true: before the loop starts, before each iteration and after the loop terminates. I realize that there is the possibility that the invariant could become false during the body of the loop. But since what happens inside the loop body "doesn't count", is it fair to characterize the invariant as a tautology? A tautology is a formula (in a certain logic) that is true under every model of that logic. That is, it is equivalent to the formula "$True$". A loop invariant, however, is a certain claim that is usually true under some models, and false under others (a model in this case is an algorithm). Then, you prove that the invariant is true under your specific model. If you add axioms to your logic that force the only model to be the one of your specific program, then indeed this would be a tautology. But such a process (adding axioms and proving that something is a tautology) is what is more commonly known as a "proof". (For clarification: even if you add enough axioms, you may not be able to prove your claim, even if it's a tautology, in the case of incomplete systems.) For example, consider a loop that increases a variable $i$ by $1$. An invariant of the loop may be that if before the loop $i>0$, then after the loop $i>0$. Indeed, this loop satisfies it. But it is not a tautology, since we can come up with other loops that do not satisfy it. The word tautology is a technical word. The following is a tautology of classical propositional logic. $\vdash p \lor \neg p$ When interpreted over the natural numbers, the following is a theorem. $ (\mathbb{N},<)\vdash \forall x. \exists y. x < y$ But we do not say it is a tautology in the strict logical sense of the word because there are structures where this is not true. Considering $S = \{a,b\}$, $<$ defined as $\{(a,b)\}$, we have $(S,<) \not \vdash \forall x. \exists y. x < y$ Similarly, if you think of a loop $P$ as implicitly defining the axioms of a theory, then a loop invariant is an assertion $I$ satisfying $P \vdash \text{Every execution satisfies } I $ Thus, in exactly the same way as the existence of successors is a theorem of arithmetic, a loop invariant is a theorem of a logical theory defined by the program. A loop invariant is not a tautology in the standard mathematical sense of the word tautology. A tautology in this context would satisfy $ \vdash \text{Every execution satisfies } I $ From which we can conclude that every tautology is a loop invariant, but not every loop invariant is a tautology. I am guessing that, by "tautology," you mean a property that is true in all states. (I have seen some lecturers use the term in that way, e.g., $x > 1 \Longrightarrow x > 0$, which is true in all states no matter what $x$ is, might be called a "tautology". The technical definition of "tautology" in logic is more narrow, but I will continue to use your terminology.) A loop invariant is only true at a particular program point in the loop. It is true for every state encountered at that point, but it might be false for states encountered at other program points (inside the loop as well as outside the loop). So, clearly, it is not a "tautology" in the sense I stated above. However, there is an interesting proof rule formulated in Reynolds's extension of Hoare Logic.
If, in a particular piece of code, there are no operations that affect the truth/falsity of an assertion, and we know that the assertion is true at the beginning of the code, then we can pretend that the assertion is a "tautology" in the middle of that code. A good example of this is a binary search procedure for an array $A$. Before the procedure starts, the pre-condition states that the array is sorted. Inside the binary search procedure, we don't do anything to modify the array. So, it will continue to be sorted throughout the procedure. Reynolds's rule says that, for the duration of the procedure, we can pretend that "$A$ is sorted" is a "tautology". This is a useful trick to use. Without it we would need to add "$A$ is sorted" to every assertion in the middle of the procedure, and we can see that it is quite pointless to keep repeating this silly condition because we are never modifying the array. Reynolds's rule allows us to avoid this silliness. For interesting applications of this rule, see Chapter 5 of Reynolds's The Craft of Programming.
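To make the distinction concrete, here is a small, purely hypothetical Python loop (not from either answer): the stated invariant holds before the loop, at the top of every iteration, and after the loop, yet it is a fact about this particular loop rather than a tautology.

def total_of(xs):
    """Sum a list while checking the loop invariant total == sum(xs[:i])."""
    total, i = 0, 0
    assert total == sum(xs[:i])          # holds before the loop
    while i < len(xs):
        assert total == sum(xs[:i])      # holds at the top of every iteration
        total += xs[i]                   # invariant temporarily false here...
        i += 1                           # ...and restored after this update
    assert total == sum(xs[:i]) and i == len(xs)   # holds after the loop
    return total

print(total_of([3, 1, 4, 1, 5]))  # 14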
DynamicalMatrix

class DynamicalMatrix(configuration, filename, object_id, calculator=None, repetitions=None, atomic_displacement=None, acoustic_sum_rule=None, finite_difference_method=None, constraints=None, constrain_electrodes=None, use_equivalent_bulk=None, max_interaction_range=None, force_tolerance=None, processes_per_displacement=1, log_filename_prefix='forces_displacement_', use_wigner_seitz_scheme=None, use_symmetry=None, symmetrize=None)

Constructor for the DynamicalMatrix object.

Parameters:
configuration (BulkConfiguration | MoleculeConfiguration | DeviceConfiguration) – The configuration for which to calculate the dynamical matrix.
calculator (Calculators) – The calculator to be used in the dynamical matrix calculations. Default: The calculator attached to the configuration.
filename (str) – The full or relative path to save the results to. See nlsave().
object_id (str) – The object id to use when saving. See nlsave().
repetitions (Automatic | list of ints) – The number of repetitions of the system in the A, B, and C-directions given as a list of three positive integers, e.g. [3, 3, 3], or Automatic. Each repetition value must be odd. If use_wigner_seitz_scheme is set to True the only values allowed for repetitions are [1, 1, 1] or Automatic. Default: Automatic
atomic_displacement (PhysicalQuantity of type length) – The distance the atoms are displaced in the finite difference method. Default: 0.01 * Angstrom
acoustic_sum_rule (bool) – Control if the acoustic sum rule should be invoked. Default: True
finite_difference_method (Forward | Central) – The finite difference scheme to use. Default: Central
constraints (list of int) – List of atomic indices that will be constrained, e.g. [0, 2, 10]. Default: Empty list []
constrain_electrodes (bool) – Control if the electrodes and electrode extensions should be constrained in case of a DeviceConfiguration. Default: False
use_equivalent_bulk (bool) – Control if a DeviceConfiguration should be treated as a BulkConfiguration. Default: True
max_interaction_range (PhysicalQuantity of type length) – Set the maximum range of the interactions. Default: All atoms are included
force_tolerance (PhysicalQuantity of type energy per length squared) – All force constants below this value will be truncated to zero. Default: 1e-8 * Hartree/Bohr**2
processes_per_displacement (int) – The number of processes assigned to calculating a single displacement. Default: 1 process per displacement.
log_filename_prefix (str or None) – Prefix for the filenames where the logging output for every displacement calculation is stored. The filenames are formed by appending a number and the file extension (".log"). If a value of None is given then all logging output is done to stdout. If a classical calculator is used, no per-displacement log files will be generated. Default: 'forces_displacement_'
use_wigner_seitz_scheme (bool) – Control if the real space Dynamical Matrix should be extended according to the Wigner-Seitz construction. use_wigner_seitz_scheme=True is only supported for simple orthorhombic, simple tetragonal and simple cubic lattices. Default: False
use_symmetry (bool) – If enabled, only the symmetrically unique atoms are displaced and the remaining force constants are calculated using symmetry. Default: True

acousticSumRule()
Returns: Whether the acoustic sum rule is invoked.
Return type: bool

atomicDisplacement()
Returns: The distance the atoms are displaced in the finite difference method.
Return type: PhysicalQuantity with length unit

constrainElectrodes()
Returns: Boolean determining if the electrodes and electrode extensions are constrained in case of a DeviceConfiguration.
Return type: bool

constraints()
Returns: The list of constrained atoms.
Return type: list of int

filename()
Returns: The filename where the study object is stored.
Return type: str

finiteDifferenceMethod()
Returns: The finite difference scheme to use.
Return type: Central | Forward

forceTolerance()
Returns: The force tolerance.
Return type: PhysicalQuantity with an energy per length squared unit, e.g. Hartree/Bohr**2

logFilenamePrefix()
Returns: The filename prefix for the logging output of the study.
Return type: str | LogToStdOut

maxInteractionRange()
Returns: The maximum interaction range.
Return type: PhysicalQuantity with length unit

nlprint(stream=None)
Print a string containing an ASCII table useful for plotting the Study object.
Parameters: stream (python stream) – The stream the table should be written to. Default: NLPrintLogger()

numberOfProcessesPerTask()
Returns: The number of processes to be used to execute each task. If None, all available processes should execute each task collaboratively.
Return type: int | None

objectId()
Returns: The name of the study object in the file.
Return type: str

phononEigensystem(q_point=None, constrained_atoms=None)
Calculate the eigenvalues and eigenvectors for the dynamical matrix at a specified q-point.
Parameters:
q_point (list of 3 floats) – The fractional q-point to use. Default: [0.0, 0.0, 0.0]
constrained_atoms (list of ints) – List of atoms being constrained. The matrix elements from these atoms will not be included in the calculation of the eigensystem. Default: [] (empty list)
Returns: The eigenvalues and eigenvectors of the dynamical matrix.
Return type: 2-tuple containing the eigenvalues and eigenvectors

processesPerDisplacement()
Returns: The number of processes per displacement.
Return type: int

realSpaceDynamicalMatrix()
Returns the real space dynamical matrix. The shape of the matrix is (N, M), where N is the number of degrees of freedom (3 * number of atoms), and M = N * R, where R is the total number of repetitions. Each subblock D[i*N:(i+1)*N, :] corresponds to the matrix elements between the center block (where the atoms have been displaced) and a neighbouring cell translated from the central cell by translations[i] in fractional coordinates.
Returns: The real-space dynamical matrix as a sparse matrix together with a list of translation vectors in fractional coordinates. The real space dynamical matrix is given in units of (meV / hbar)**2.
Return type: (scipy.sparse.csr_matrix, list of list(3) of integers)

reciprocalSpaceDynamicalMatrix(q_point=None)
Evaluate the reciprocal space dynamical matrix for a given q-point in reciprocal space.
Parameters: q_point (list of floats) – The fractional q-point to use. Default: [0.0, 0.0, 0.0]
Returns: The dynamical matrix for q_point.
Return type: PhysicalQuantity of units (meV / hbar)**2

repetitions()
Returns: The number of repetitions in the A, B, and C-directions for the supercell that is used in the finite displacement calculation.
Return type: list of three int

symmetry()
Returns: True if the use of crystal symmetry to reduce the number of displacements is enabled.
Return type: bool

update()
Run the calculations for the DynamicalMatrix study object.

useEquivalentBulk()
Returns: Boolean determining if a DeviceConfiguration is treated as a BulkConfiguration.
Return type: bool

wignerSeitzScheme()
Returns: Boolean to control if the real space Dynamical Matrix should be extended according to the Wigner-Seitz construction.
Return type: bool

Usage Examples

Note: Study objects behave differently from analysis objects. See the Study object overview for more details.

Calculate the DynamicalMatrix for a system repeated five times in the B direction and three times in the C direction.

dynamical_matrix = DynamicalMatrix(
    configuration,
    filename='DynamicalMatrix.hdf5',
    object_id='dynamical_matrix',
    repetitions=(1, 5, 3),
)
dynamical_matrix.update()

When using repetitions=Automatic, the cell is repeated such that all atoms within a pre-defined, element-pair dependent interaction range are included.

dynamical_matrix = DynamicalMatrix(
    configuration,
    filename='DynamicalMatrix.hdf5',
    object_id='dynamical_matrix',
    repetitions=Automatic,
)
dynamical_matrix.update()

The default number of repetitions, i.e. repetitions=Automatic, can be found before a calculation using the function checkNumberOfRepetitions().

(nA, nB, nC) = checkNumberOfRepetitions(configuration)

The maximum interaction range between two atoms can be specified manually by using the max_interaction_range keyword.

dynamical_matrix = DynamicalMatrix(
    configuration,
    filename='DynamicalMatrix.hdf5',
    object_id='dynamical_matrix',
    repetitions=Automatic,
    max_interaction_range=12.0*Ang,
)
dynamical_matrix.update()

Notes

The DynamicalMatrix is calculated using the finite difference method in a repeated cell, which is sometimes also referred to as the frozen-phonon or super-cell method. In the following, we denote the atoms in the unit cell by \(\mu\) and the atoms in the repeated cell by \(i\). Furthermore, denote the dynamical matrix elements by \(D_{\mu \alpha, i \beta}\), where \(\alpha, \beta\) are the Cartesian directions, i.e. \(x, y, z\). A dynamical matrix element is given by
\[ D_{\mu \alpha, i \beta} = -\frac{1}{\sqrt{M_\mu M_i}} \frac{\partial F_{i \beta}}{\partial r_{\mu \alpha}}, \]
where \(F_{i \beta}\) is the force on atom \(i\) in direction \(\beta\) due to a displacement of atom \(\mu\) in direction \(\alpha\), and \(M_\mu, M_i\) are the atomic masses. The derivative is calculated by either forward or central finite differences, where in the following we will focus on the latter. Atom \(\mu\) is displaced by \(\Delta r_\alpha\) and \(-\Delta r_\alpha\), and the changes in the force, \(\Delta F_{i \beta}\), are calculated to approximate the dynamical matrix element
\[ D_{\mu \alpha, i \beta} \approx -\frac{1}{\sqrt{M_\mu M_i}} \frac{F_{i \beta}(\Delta r_\alpha) - F_{i \beta}(-\Delta r_\alpha)}{2\, \Delta r_\alpha}. \]
The default is to use repetitions=Automatic. In this case the cell is repeated such that all atoms within a pre-defined element-dependent distance from the atoms in the unit cell are included in the repeated cell. The repetitions used are written to the output file. These default interaction ranges are suitable for most systems. However, if you are using long-ranged interactions, e.g. classical potentials with electrostatic interactions in the TremoloXCalculator, it might be necessary to increase the number of repetitions. For a 1D or 2D system, the unit cell should not be repeated in the confined directions. This is only discovered by repetitions=Automatic if there is enough vacuum in the unit cell in the directions that should not be repeated; that is typically 10-20 Å of vacuum, depending on the elements and their interaction ranges. Thus, for confined systems it is recommended to check the repetitions used and possibly use manual instead of automatic repetition. DynamicalMatrix calculations using DFT or Semi-Empirical calculators have functionality to fully resume partially completed calculations by re-running the same script or reading the study object from file and calling update() on it.
The study object will automatically detect which displacement calculations have already been carried out and only run the ones that are not yet completed. To ensure the highest performance, this functionality is not available for ATK-ForceField calculations.

Notes for DFT

In ATK-DFT the number of repetitions of the unit cell in the super cell must ensure that the change in the force on atoms outside the super cell is zero for every atomic displacement in the center cell. An equivalent discussion of the number of k-points of the super cell and the number of repetitions can be found for HamiltonianDerivatives in the section Notes for DFT. Consider a system with e.g. \(x\) and \(y\) as confined directions and the k-point sampling of the unit cell \((1, 1, N_{\text{C}})\), see Fig. 131 (a). Assume that the number of repetitions in the C-direction is known for the change in the force on atoms outside the super cell to be zero. Then the number of repetitions must be \((1, 1, {\scriptstyle \text{repetitions in C}})\). Furthermore, the k-point sampling of the super cell becomes \((1, 1, \frac{N_{\text{C}}}{\text{repetitions in C}})\).

Note: From QuantumATK-2019.03 onwards, the k-point sampling and density-mesh-cutoff will be automatically adapted to the given number of repetitions when setting up the super cell inside DynamicalMatrix and HamiltonianDerivatives. That means you can specify the calculator settings for the unit cell and use it with any desired number of repetitions in dynamical matrix and hamiltonian derivatives calculations.

When calculating the DynamicalMatrix with ATK-DFT, accurate results may require a higher precision than usual, by increasing the density_mesh_cutoff in NumericalAccuracyParameters and decreasing the tolerance in IterationControlParameters, e.g.

numerical_accuracy_parameters = NumericalAccuracyParameters(
    density_mesh_cutoff=150.0*Hartree
)
iteration_control_parameters = IterationControlParameters(
    tolerance=1e-6
)

Notes for the simplified supercell method (use_wigner_seitz_scheme=True)

The simplified supercell method is an approximation which allows one to obtain the dispersion of vibrational eigenmodes with a force calculation of the unit cell only, i.e. having repetitions=[1,1,1] (the poor man's frozen phonon calculation). It is valid if the unit cell is large enough, i.e. if it accommodates 200 atoms or more. The convergence should be checked with respect to the number of atoms per unit cell or by a conventional frozen phonon calculation.

Idea of the simplified supercell method

In large unit cells, the force on atom i due to a displacement of atom j is small for distances of half the length of the unit cell vectors. Due to the translational invariance, a displaced atom i contributes from two sides to the force on atom j, which can be exactly decomposed if the distance between the atoms is large enough. As an example, we will look at a 1D chain with 6 atoms per unit cell (see Fig. 127). All atoms are at their equilibrium positions, besides atom 1 which is displaced along the \(x\) direction. The force on atom 5, for example, can be regarded as "repulsive" due to the displacement of atom 1 in the unit cell with translation \(T=0\) along the \(x\)-axis. However, there is also an "attractive" contribution from the image of the displaced atom 1 in the unit cell with translation \(T=1\).
To decompose the two contributions we construct a Wigner-Seitz cell centered at the undisplaced position of atom 1 and check the distance to the nearest neighbor representations of atom 5. The representation of atom 5 inside the cell at \(T=0\) is not part of the Wigner-Seitz cell, and thus this contribution is neglected. In contrast, the representation of atom 5 at \(T=-1\) is inside the Wigner-Seitz cell and we keep this contribution. If an atom is at the face (like atom 4), the edge or a vertex of the Wigner-Seitz cell, the force is included weighted by the inverse multiplicity of this Wigner-Seitz grid point. This construction is repeated for all atoms inside the unit cell. In this way it is possible to approximately calculate the entries of the DynamicalMatrix for all repetitions of the cell from \(T = -1\) to \(T = 1\) and thereby get the dispersion, by only calculating the forces in a single unit cell.

Note: The Wigner-Seitz construction ensures that the phonon spectrum at the \(\Gamma\)-point is preserved. Hence, a separate \(\Gamma\)-point calculation is not necessary.
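As a rough sketch of how the documented methods above can be combined once update() has finished (this is not taken from the manual; the filename follows the usage examples, and the unit handling relies on the (meV/hbar)**2 convention stated for the dynamical matrix), one can read the study back and extract Gamma-point phonon energies:

import numpy

# Read the finished study back from file (same filename as in the usage examples).
dynamical_matrix = nlread('DynamicalMatrix.hdf5', DynamicalMatrix)[0]

# Gamma-point eigensystem; q_point defaults to [0.0, 0.0, 0.0].
eigenvalues, eigenvectors = dynamical_matrix.phononEigensystem(q_point=[0.0, 0.0, 0.0])

# The dynamical matrix carries units of (meV/hbar)**2, so phonon energies in meV
# follow from the square roots of the (non-negative) eigenvalues.
print(numpy.sqrt(numpy.abs(numpy.array(eigenvalues))))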
This is really just an expansion of Michael Brown's comment. Let's say someone hands us any two quantum systems described by Hilbert spaces $\mathcal H_1$ and $\mathcal H_2$. Then we might be curious to know how we can write down an interaction Hamiltonian for these systems. This includes, as a subcase, systems consisting of particles with different statistics. To write down a Hamiltonian for the composite system that, in particular, can incorporate interactions between the two systems, we first need to decide what the Hilbert space of the composite system will be. The natural choice is called the tensor product of $\mathcal H_1$ and $\mathcal H_2$ written as $\mathcal H_1\otimes \mathcal H_2$. See the following physics.SE post for more information on why this choice is natural: Should it be obvious that independent quantum states are composed by taking the tensor product? Once we have chosen this as our composite Hilbert space, then there is no problem including interactions between the systems we started with. Let's examine Michael Brown's example in a bit more notational detail. Suppose the two systems consisted of a particle of spin $s_1$ and a particle of spin $s_2$, and suppose that we want to write a spin-spin interaction term in the Hamiltonian. To do this, we appeal to a certain kind of product of two operators called the tensor product (not to be confused with the tensor product of the two Hilbert spaces). In particular, if $S_1^i$ are the operators representing the components of the first spin (which are operators on $\mathcal H_1$), and if $S_2^i$ are the operators representing the components of the second spin (which are operators on $\mathcal H_2$), then we first form operators $S_1^i\otimes I_2$ and $I_1\otimes S_2^i$ which act on the total Hilbert space $\mathcal H_1\otimes\mathcal H_2$ and represent "copies" of the original operators acting on the total Hilbert space. The notation here is that $I_1$ and $I_2$ are the identity operators on $\mathcal H_1$ and $\mathcal H_2$ respectively. Once we have done this, we can now write down the spin-spin interaction Hamiltonian as \begin{align} H_\mathrm{int} = \sum_i (S_1^i\otimes I_2)(I_1\otimes S_2^i)\end{align} As I wrote in the comments above, the following physics.SE post has more detail on what this tensor product notation means: How to tackle 'dot' product for spin matrices
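As a concrete numerical illustration of this construction (my addition, not part of the original answer), the following snippet builds $H_\mathrm{int}$ for two spin-1/2 particles using Kronecker products and recovers the familiar triplet/singlet eigenvalues $1/4$ and $-3/4$ (in units of $\hbar^2$):

import numpy as np

# Spin-1/2 component operators S^i = sigma^i / 2 (hbar = 1).
sx = np.array([[0.0, 0.5], [0.5, 0.0]])
sy = np.array([[0.0, -0.5j], [0.5j, 0.0]])
sz = np.array([[0.5, 0.0], [0.0, -0.5]])
I2 = np.eye(2)

# (S1^i tensor I2)(I1 tensor S2^i) acting on H1 tensor H2, summed over i = x, y, z.
H_int = sum(np.kron(S, I2) @ np.kron(I2, S) for S in (sx, sy, sz))

print(np.round(np.linalg.eigvalsh(H_int), 6))  # [-0.75, 0.25, 0.25, 0.25]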
It is generally known that 'jumps' in frequency data are difficult to estimate. In the current literature, many different techniques for estimating such jumps have been tested, often with satisfactory results. A summarizing paper about some of these techniques would be, for example, Riley, 2008. However, all these techniques concern frequency data that 'floats', in the sense that the data returns to a mean $\alpha>0$. I'm interested in detecting outbreaks for frequency data where $\alpha = 0$. A visualisation of this type of frequency data (graph from Brookmeyer & Stroup, 2003) would be: Now I found that this sort of data is often considered in 'disease outbreak detection' literature. But I am not able to find good transformations, algorithms or estimation procedures at all. This might be due to the fact that I am unsure about the name of this sort of frequency data. The graph I showed above from Brookmeyer & Stroup is a CUSUM plot of 'floating' frequency data, so it's not the data itself. They state that if the CUSUM plot exceeds $h\sigma$, an alert is declared. This makes sense, as the CUSUM is a transformation of the deviations from the mean. But in the case of $\alpha=0$ type frequency data, this technique can't be applied. So, I have two questions: What are known transformations (such as CUSUM for 'floating' frequency data) for this type? What are well known and widely used detection algorithms for this type? Any insights are highly appreciated. Edit (some intuition) Simple algorithms that work well in real-time might be some type of transformation that takes into account the mean or stdev. of the data. Even though such transformations are easy to construct, it might be very difficult to separate 'real outbreaks' from normal observations. This is because the mean and stdev. both tend to zero when taken over a certain window. E.g. a threshold such as $k \sigma$ or $k \mu$ will not be robust.
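To make the discussion of thresholds concrete, here is a hedged, minimal sketch of a one-sided CUSUM-style detector for count data with a near-zero baseline. The parameters (baseline mu0, allowance k, threshold h) and the reset-after-alert behaviour are illustrative choices of mine, not taken from Riley or Brookmeyer & Stroup.

import numpy as np

def cusum_alerts(counts, mu0=0.1, k=0.5, h=4.0):
    """One-sided CUSUM on deviations from a small baseline mean mu0.
    Returns the CUSUM path and the indices where it exceeds h."""
    s, path, alerts = 0.0, [], []
    for t, y in enumerate(counts):
        s = max(0.0, s + (y - mu0 - k))   # accumulate only upward deviations
        path.append(s)
        if s > h:
            alerts.append(t)
            s = 0.0                        # restart after declaring an alert
    return np.array(path), alerts

counts = [0, 0, 1, 0, 0, 0, 2, 3, 4, 1, 0, 0]
print(cusum_alerts(counts))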
Some Advances in Sidorenko’s Conjecture 2014/09/04 *Thursday* 4PM-5PM Room 1409 Sidorenko’s conjecture states that for every bipartite graph \(H\) on \(\{1,\cdots,k\}\) \begin{eqnarray*} \int \prod_{(i,j)\in E(H)} h(x_i, y_j) d\mu^{|V(H)|} \ge \left( \int h(x,y) \,d\mu^2 \right)^{|E(H)|} \end{eqnarray*} holds, where \(\mu\) is the Lebesgue measure on \([0,1]\) and \(h\) is a bounded, non-negative, symmetric, measurable function on \([0,1]^2\). An equivalent discrete form of the conjecture is that the number of homomorphisms from a bipartite graph \(H\) to a graph \(G\) is asymptotically at least the expected number of homomorphisms from \(H\) to the Erdos-Renyi random graph with the same expected edge density as \(G\). In this talk, we will give an overview on known results and new approaches to attack Sidorenko’s conjecture. This is a joint work with Jeong Han Kim and Choongbum Lee.
I'm concerned with equation 2.24 of http://arxiv.org/abs/1601.00482 The superconformal hypermultiplets in this paper have a conic hyperkahler target manifold and the authors want to gauge some isometries of this manifold. Letting the isometry group be $G$ and to have an associated Lie algebra $\mathfrak{g}$ generated by Killing vectors $k_I$, we can express this as $\mathcal{L}_{k_I} g=0$ where $g$ is the metric on the conic hyperkahler manifold. Then, in order to not break SUSY, the $k_I$ must commute with the SUSY generators. Apparently this is equivalent to the Killing vectors being triholomorphic $\mathcal{L}_{K_I} J_\alpha=0$ where $J_\alpha$ are the triplet of complex structures. Does anyone know why this is the case? Secondly, they say in 2.24 of this paper that the moment maps associated to these symmetries must satisfy the "equivariance condition". Unfortunately they don't offer any explanation of what this is or where it comes from. There is some discussion in other literature along the lines of "we can also derive the equivariance condition...." but they never say what it is or explain how they found it. The best I've found is in the Freedman/van Proeyen Supergravity book where in eqn (13.61), they seem to say it comes from requiring the moment maps to transform in the adjoint: $$(k_I^\alpha \partial_\alpha + k_I^{\bar{\alpha}} \partial_{\bar{\alpha}} ) \mu_J = f_{IJ}^K \mu_K$$ They then use some identities to write this as (13.62): $$k_I^\alpha g_{\alpha \bar{\beta}} k_J^{\bar{\beta}} - k_J^\alpha g_{\alpha \bar{\beta}} k_I^{\bar{\beta}} = i f_{IJ}^K \mu_K$$ Although I don't see how this looks anything like (2.24) of the attached paper. If anyone can offer any help or thoughts on either of these issues I'd greatly appreciate it!This post imported from StackExchange Physics at 2016-06-26 09:50 (UTC), posted by SE-user user11128
Evaluation of color characteristics
OptiLayer provides calculations of color properties in almost all existing color coordinate systems. You can view color coordinates in graphical or tabular form. The light source, detector, observer, integration step, reference white, and incident angle used for the color evaluation can all be specified. OptiLayer also provides a set of powerful options and color targets allowing you to design coatings with specified color properties.
Tristimulus values and chromaticities
In the XYZ CIE 1931 color space, color coordinates are called tristimulus values X, Y, Z and are determined as:
\(X=\int\limits_{380 nm}^{780 nm} x(\lambda) E(\lambda) d\lambda \)
\(Y=\int\limits_{380 nm}^{780 nm} y(\lambda) E(\lambda) d\lambda \)
\(Z=\int\limits_{380 nm}^{780 nm} z(\lambda) E(\lambda) d\lambda \)
where \(x(\lambda),\; y(\lambda),\; z(\lambda)\) are the color basis (matching) functions and \(E(\lambda)\) is the spectral distribution being evaluated. All colors can be represented on the chromaticity diagram. The chromaticity of a color can be specified by two parameters x and y, which are functions of the tristimulus values X, Y, Z:
\(x=\displaystyle \frac{X}{X+Y+Z},\;\;y=\frac{Y}{X+Y+Z},\;\;z=1-x-y \)
Y is the luminance factor: the larger Y is, the brighter the color. The corresponding numerical values of the color coordinates can be displayed in spreadsheets.
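As a rough numerical sketch of these integrals (not OptiLayer code), the tristimulus values and chromaticities can be approximated on a wavelength grid. The Gaussian matching functions below are crude placeholders, not the CIE 1931 tables, and the equal-energy source is an assumption.

import numpy as np

lam = np.arange(380.0, 781.0, 5.0)          # wavelength grid in nm (integration step)

def gauss(l, mu, sigma):
    return np.exp(-0.5 * ((l - mu) / sigma) ** 2)

xbar = 1.06 * gauss(lam, 600, 38) + 0.36 * gauss(lam, 445, 20)   # stand-ins for x(lambda)
ybar = 1.00 * gauss(lam, 555, 42)                                # stand-in for y(lambda)
zbar = 1.80 * gauss(lam, 450, 25)                                # stand-in for z(lambda)

E = np.ones_like(lam)                        # equal-energy light source (assumption)

X = np.trapz(xbar * E, lam)
Y = np.trapz(ybar * E, lam)
Z = np.trapz(zbar * E, lam)
x, y = X / (X + Y + Z), Y / (X + Y + Z)      # chromaticity coordinates
print(round(x, 3), round(y, 3), "luminance factor Y =", round(Y, 1))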
Question: As the homogeneous cylinder of mass m enters the cylindrical surface, the velocity of its center is 1.5 m/s. Determine the angle {eq}\theta {/eq} at which the cylinder will come to a momentary stop. Assume that the cylinder rolls without slipping. Conservation of energy: It states that for a body in linear or rotational motion, the kinetic energy lost by the body is equal to the potential energy gained by the body during its motion (and vice versa). Answer and Explanation: Given Data The velocity of the centre of the cylinder is: {eq}v = 1.5\;{\rm{m/s}} {/eq} The figure below represents the height gained by the cylinder. The mass moment of inertia of the cylinder is, {eq}I = \dfrac{{m{r^2}}}{2} {/eq} The energy equation for the cylinder is, {eq}K.{E_{lost}} = P.{E_{gain}} \;\;(\text{I}) {/eq} Here, the loss in kinetic energy of the cylinder is {eq}K.{E_{lost}} {/eq} and the gain in the potential energy of the cylinder is {eq}P.{E_{gain}}. {/eq} The height gained by the cylinder is, {eq}\begin{align*} h &= 1.2 - 1.2\cos \theta \\ & = 1.2\left( {1 - \cos \theta } \right) \end{align*} {/eq} The gain in potential energy is, {eq}P.{E_{gain}} = mg\left( {1.2\left( {1 - \cos \theta } \right)} \right) {/eq} The loss in kinetic energy is, {eq}K.{E_{lost}} = \dfrac{{m{v^2}}}{2} + \dfrac{{I{\omega ^2}}}{2} {/eq} Substitute the values in the above equation. {eq}\begin{align*} K.{E_{lost}}& = \dfrac{{m{v^2}}}{2} + \dfrac{{m{r^2}{\omega ^2}}}{4}\\ & = \dfrac{{m{v^2}}}{2} + \dfrac{{m{v^2}}}{4}\\ & = \dfrac{{3m{v^2}}}{4} \end{align*} {/eq} Substitute the values in equation (I). {eq}\begin{align*} mg\left( {1.2\left( {1 - \cos \theta } \right)} \right)& = \dfrac{{3m{v^2}}}{4}\\ g\left( {1.2\left( {1 - \cos \theta } \right)} \right)& = \dfrac{{3{v^2}}}{4} \end{align*} {/eq} Substitute the values in the above equation. {eq}\begin{align*} 9.81\left( {1.2\left( {1 - \cos \theta } \right)} \right)& = \dfrac{{3{{\left( {1.5} \right)}^2}}}{4}\\ 1 - \cos \theta & = 0.1433\\ \theta & = 31.05^\circ \end{align*} {/eq} Thus, the angle at which the cylinder stops is {eq}31.05^\circ {/eq}
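A quick numerical re-check of this result (a sketch using the same energy balance, (3/4) m v^2 = m g R (1 - cos θ) with the 1.2 m radius taken from the solution above):

import math

v, g, R = 1.5, 9.81, 1.2
cos_theta = 1.0 - 3.0 * v**2 / (4.0 * g * R)
theta = math.degrees(math.acos(cos_theta))
print(round(theta, 2))   # about 31.1 degrees, matching the worked answer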
Spring 2018, Math 171 Week 10 Poisson Process Conditioning Let \(N(t)\) be a Poisson process with rate \(\lambda\). Fix \(0 \le n\), \(s \le t\). Compute \(\mathbb{P}(N(s)=m \mid N(t) = n)\) for \(m \le n\). Give the name and parameters of the distribution. (Answer) Binomial(\(n,\frac{s}{t}\)) Compute \(\mathbb{E}[N(s) \mid N(t) = n]\) (Answer) \(n\frac{s}{t}\) Let \(N(t)\) be a Poisson process with rate \(\lambda\). Fix \(0 \le m \le n\), \(r \le s \le t\). Compute \(\mathbb{P}(N(s)-N(r)=k \mid N(t) = n, N(r)=m)\). Give the name and parameters of the distribution. (Answer) Binomial(\(n-m,\frac{s-r}{t-r}\)) Compute \(\mathbb{E}[N(s) \mid N(t) = n, N(r)=m]\) (Answer) \((n-m)\frac{s-r}{t-r}\) Compute the expected amount if time between the first and last arrivals in \([r, t]\) given that \(N(t) = n\) and \(N(r)=m\). (Solution) For convenience of writing, I will omit the condition on \(N(t) = n\) and \(N(r)=m\) in the expectations. Let \(L\) be the time of the last arrival in the interval, and \(F\) be the time of the first arrival in the interval. Then we seek \[\begin{aligned}\mathbb{E}[L-F] &= \mathbb{E}[L] - \mathbb{E}[F]\\&=t - \mathbb{E}[t-L] - r - \mathbb{E}[F-r] \\&= (t-r) - \mathbb{E}[t-L] - \mathbb{E}[F-r].\end{aligned}\] By symmetry, we can see that \(t-L\) (the time between the last arrival and the end of the interval) and \(F-r\) (the time between the start of the interval and the first arrival) have the same distribution, so \[\mathbb{E}[L-F] = (t-r) - 2\mathbb{E}[F-r].\] Since we have conditioned on \(N(t) = n\) and \(N(r)=m\), we know that there are \(n-m\) arrivals uniformly distributed in the interval by the conditioning property of the poisson process. Therefore, letting \(U_1, \dots U_{n-m}\) be i.i.d uniform\([0, t-r]\) random variables, we have \[\begin{aligned}\mathbb{E}[F-r] &= \mathbb{E}[\min(U_1, \dots U_{n-m})]\\&= \int_0^{t-r}P(\min(U_1, \dots U_{n-m}) > s) ds\\&=\int_0^{t-r}P(U_1>s, \dots U_{n-m}>s) ds\\&=\int_0^{t-r}P(U_1>s)^{n-m} ds\\&=\int_0^{t-r}\left(\frac{t-r-s}{t-r}\right)^{n-m} ds\\&=-(t-r)\frac{\left(\frac{t-r-s}{t-r}\right)^{n-m+1}}{n-m+1} \Big\vert_0^{t-r}\\&=\frac{t-r}{n-m+1}\end{aligned}\] So finally, we have \[\begin{aligned}\mathbb{E}[L-F] &= (t-r) - 2\frac{t-r}{n-m+1}\\&=(t-r)\left(1-\frac{2}{n-m+1}\right)\end{aligned}\] As a sanity check, we check that if \(n-m=1\) we have \(\mathbb{E}[L-F]=0\), and if \(n-m=\infty\) we have \(\mathbb{E}[L-F]=(t-r)\) Renewal Process Cars queue at a gate. The lengths of the cars are i.i.d with distribution \(F_L\) and mean \(\mu\). Let \(L \sim F_L\). Each successive car stops leaving a gap, distributed according to a uniform distribution on \((0, 1)\), to the car in front (or to the gate in the case of the car at the head of the queue). Consider the number of cars \(N(t)\) lined up within distance t of the gate. Determine \(\lim_{t \to \infty} \frac{\mathbb{E}[N(t)]}{t}\) if \(L = c\) is a fixed constant \(L\) is exponentially distributed with parameter \(\lambda\) Potential customers arrive at a service kiosk in a bank as a Poisson process of rate \(\lambda\). Being impatient, the customers leave immediately unless the assistant is free. Customers are served independently, with mean service time \(\mu\). Find the mean time between the starts of two successive service periods. (Answer) \(\mu + 1/\lambda\) Find the long run rate at which customers are served (Answer) \(\frac{1}{\mu + 1/\lambda}\) What proportion of customers who arrive at the bank actually get served? 
(Answer) \(\frac{1}{\lambda\mu + 1}\) Customers arrive at a 24 hour restaurant (which has only one table) according to a Poisson process with rate 1 party per hour. If the table is occupied, they leave immediately. If the table is unoccupied, they stay and eat. Customers spend an average of $20 each. The length of time a party stays is uniformly distributed in \([0, N/2 \mathrm{\ hours}]\) where \(N\) is the number of people in the party. Answer the following questions for the cases when \(N\) is uniform\(\{2, 3, 4\}\) and when \(N\) is geometric(\(1/2\)). Let \(\tau_k\) be the time between when the \((k-1)\)st party finishes eating and the \(k\)th party finishes eating. Compute \(\mathbb{E}[\tau_k]\) (Answer) \(\mathbb{E}[N/4]+1\) Let \(X_k\) be the amount the \(k\)th party spends. Compute \(\mathbb{E}[X_k]\). (Answer) \(20\mathbb{E}[N]\) Let \(S(t)\) be the total amount paid by all parties by time \(t\). Compute \(\lim_{t\to \infty}\frac{S(t)}{t}\), and justify the validity of your computation. (Answer) \(\frac{20\mathbb{E}[N]}{\mathbb{E}[N/4]+1}\). Justify by SLLN. (A small simulation check of this rate appears after the formulas below.) Useful Formulas For a discrete random variable \(X\) taking values in \(\{0, 1, 2, \dots\}\), we have \(\mathbb{E}[X] = \sum_{k=0}^\infty P(X > k)\) For a continuous random variable \(T\) taking values in \([0, \infty)\), we have \(\mathbb{E}[T] = \int_0^\infty P(T > s) ds\)
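A small simulation sketch (illustrative code, not part of the worksheet) checking the renewal-reward answer for the restaurant problem with \(N\) uniform on \(\{2,3,4\}\): the long-run revenue rate should approach \(20\,\mathbb{E}[N]/(\mathbb{E}[N/4]+1)\). The memoryless property justifies restarting the exponential clock after each meal, since customers arriving during a meal leave immediately.

import numpy as np

rng = np.random.default_rng(0)

def simulate(horizon=500_000.0):
    t, revenue = 0.0, 0.0
    while t < horizon:
        t += rng.exponential(1.0)            # wait for the next party that finds the table free
        n = rng.integers(2, 5)               # party size, uniform on {2, 3, 4}
        t += rng.uniform(0.0, n / 2.0)       # meal length ~ uniform[0, N/2]; table blocked meanwhile
        revenue += 20.0 * n
    return revenue / t

EN, EN4 = 3.0, 0.75
print("theory:", round(20.0 * EN / (EN4 + 1.0), 2))   # about 34.29 dollars per hour
print("simulation:", round(simulate(), 2))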
The spectral distribution $f(\omega)$ of a stationary time series $\{Y_t\}_{t\in\mathbb{Z}}$ can be used to investigate whether or not periodic structures are present in $\{Y_t\}_{t\in\mathbb{Z}}$, but $f(\omega)$ has some limitations due to its dependence on the autocovariances $\gamma(h)$. For example, $f(\omega)$ can not distinguish white i.i.d. noise from GARCH-type models (whose terms are dependent, but uncorrelated), which implies that $f(\omega)$ can be an inadequate tool when $\{Y_t\}_{t\in\mathbb{Z}}$ contains asymmetries and nonlinear dependencies. Asymmetries between the upper and lower tails of a time series can be investigated by means of the local Gaussian autocorrelations $\gamma_{v}(h)$ introduced in Tj{\o}stheim and Hufthammer (2013), and these local measures of dependence can be used to construct the local Gaussian spectral density $f_{v}(\omega)$ that is presented in this paper. A key feature of $f_{v}(\omega)$ is that it coincides with $f(\omega)$ for Gaussian time series, which implies that $f_{v}(\omega)$ can be used to detect non-Gaussian traits in the time series under investigation. In particular, if $f(\omega)$ is flat, then peaks and troughs of $f_{v}(\omega)$ can indicate nonlinear traits, which potentially might discover local periodic phenomena that go undetected in an ordinary spectral analysis.

Spectrum analysis can detect frequency related structures in a time series $\{Y_t\}_{t\in\mathbb{Z}}$, but may in general be an inadequate tool if asymmetries or other nonlinear phenomena are present. This limitation is a consequence of the way the spectrum is based on the second order moments (auto and cross-covariances), and alternative approaches to spectrum analysis have thus been investigated based on other measures of dependence. One such approach was developed for univariate time series in Jordanger and Tj{\o}stheim (2017), where it was seen that a local Gaussian auto-spectrum $f_{v}(\omega)$, based on the local Gaussian autocorrelations $\rho_v(\omega)$ from Tj{\o}stheim and Hufthammer (2013), could detect local structures in time series that looked like white noise when investigated by the ordinary auto-spectrum $f(\omega)$. The local Gaussian approach in this paper is extended to a local Gaussian cross-spectrum $f_{kl:v}(\omega)$ for multivariate time series. The local cross-spectrum $f_{kl:v}(\omega)$ has the desirable property that it coincides with the ordinary cross-spectrum $f_{kl}(\omega)$ for Gaussian time series, which implies that $f_{kl:v}(\omega)$ can be used to detect non-Gaussian traits in the time series under investigation. In particular: if the ordinary spectrum is flat, then peaks and troughs of the local Gaussian spectrum can indicate nonlinear traits, which potentially might discover local periodic phenomena that go undetected in an ordinary spectral analysis.

We are studying the problems of modeling and inference for multivariate count time series data with Poisson marginals. The focus is on linear and log-linear models. For studying the properties of such processes we develop a novel conceptual framework which is based on copulas. However, our approach does not impose the copula on a vector of counts; instead the joint distribution is determined by imposing a copula function on a vector of associated continuous random variables. This specific construction avoids conceptual difficulties resulting from the joint distribution of discrete random variables, yet it keeps the properties of the Poisson process marginally.
We employ Markov chain theory and the notion of weak dependence to study ergodicity and stationarity of the models we consider. We obtain easily verifiable conditions for both linear and log-linear models under both theoretical frameworks. Suitable estimating equations are suggested for estimating unknown model parameters. The large sample properties of the resulting estimators are studied in detail. The work concludes with some simulations and a real data example.

Let $\textbf{X} = (X_1,\ldots, X_p)$ be a stochastic vector having joint density function $f_{\textbf{X}}(x)$ with partitions $\textbf{X}_1 = (X_1,\ldots, X_k)$ and $\textbf{X}_2 = (X_{k+1},\ldots, X_p)$. A new method for estimating the conditional density function of $\textbf{X}_1$ given $\textbf{X}_2$ is presented. It is based on locally Gaussian approximations, but simplified in order to tackle the curse of dimensionality in multivariate applications, where both response and explanatory variables can be vectors. We compare our method to some available competitors, and the error of approximation is shown to be small in a series of examples using real and simulated data, and the estimator is shown to be particularly robust against noise caused by independent variables. We also present examples of practical applications of our conditional density estimator in the analysis of time series. Typical values for $k$ in our examples are 1 and 2, and we include simulation experiments with values of $p$ up to 6. Large sample theory is established under a strong mixing condition.

In this paper, we study parametric nonlinear regression under the Harris recurrent Markov chain framework. We first consider the nonlinear least squares estimators of the parameters in the homoskedastic case, and establish asymptotic theory for the proposed estimators. Our results show that the convergence rates for the estimators rely not only on the properties of the nonlinear regression function, but also on the number of regenerations for the Harris recurrent Markov chain. Furthermore, we discuss the estimation of the parameter vector in a conditional volatility function, and apply our results to the nonlinear regression with $I(1)$ processes and derive an asymptotic distribution theory which is comparable to that obtained by Park and Phillips [Econometrica 69 (2001) 117-161]. Some numerical studies including simulation and empirical application are provided to examine the finite sample performance of the proposed approaches and results.

Estimation mainly for two classes of popular models, single-index and partially linear single-index models, is studied in this paper. Such models feature nonstationarity. Orthogonal series expansion is used to approximate the unknown integrable link functions in the models and a profile approach is used to derive the estimators. The findings include the dual rate of convergence of the estimators for the single-index models and a trio of convergence rates for the partially linear single-index models. A new central limit theorem is established for a plug-in estimator of the unknown link function. Meanwhile, a considerable extension to a class of partially nonlinear single-index models is discussed in Section 4. Monte Carlo simulation verifies these theoretical results. An empirical study furnishes an application of the proposed estimation procedures in practice.

This paper considers a class of nonparametric autoregressive models with nonstationarity.
We propose a nonparametric kernel test for the conditional mean and then establish an asymptotic distribution of the proposed test. Both the setting and the results differ from earlier work on nonparametric autoregression with stationarity. In addition, we develop a new bootstrap simulation scheme for the selection of a suitable bandwidth parameter involved in the kernel test as well as the choice of a simulated critical value. The finite-sample performance of the proposed test is assessed using one simulated example and one real data example.

We propose to approximate the conditional expectation of a spatial random variable given its nearest-neighbour observations by an additive function. The setting is meaningful in practice and requires no unilateral ordering. It is capable of catching nonlinear features in spatial data and exploring local dependence structures. Our approach is different from both Markov field methods and disjunctive kriging. The asymptotic properties of the additive estimators have been established for $\alpha$-mixing spatial processes by extending the theory of the backfitting procedure to the spatial case. This facilitates the confidence intervals for the component functions, although the asymptotic biases have to be estimated via (wild) bootstrap. Simulation results are reported. Applications to real data illustrate that the improvement in describing the data over the auto-normal scheme is significant when nonlinearity or non-Gaussianity is pronounced.

We derive an asymptotic theory of nonparametric estimation for a time series regression model $Z_t=f(X_t)+W_t$, where $\{X_t\}$ and $\{Z_t\}$ are observed nonstationary processes and $\{W_t\}$ is an unobserved stationary process. In econometrics, this can be interpreted as a nonlinear cointegration type relationship, but we believe that our results are of wider interest. The class of nonstationary processes allowed for $\{X_t\}$ is a subclass of the class of null recurrent Markov chains. This subclass contains random walk, unit root processes and nonlinear processes. We derive the asymptotics of a nonparametric estimate of $f(x)$ under the assumption that $\{W_t\}$ is a Markov chain satisfying some mixing conditions. The finite-sample properties of $\hat{f}(x)$ are studied by means of simulation experiments.

Nonparametric methods have been very popular in the last couple of decades in time series and regression, but no such development has taken place for spatial models. A rather obvious reason for this is the curse of dimensionality. For spatial data on a grid, evaluating the conditional mean given its closest neighbors requires a four-dimensional nonparametric regression. In this paper a semiparametric spatial regression approach is proposed to avoid this problem. An estimation procedure based on combining the so-called marginal integration technique with local linear kernel estimation is developed in the semiparametric spatial regression setting. Asymptotic distributions are established under some mild conditions. The same convergence rates as in the one-dimensional regression case are established. An application of the methodology to the classical Mercer and Hall wheat data set is given and indicates that one directional component appears to be nonlinear, which has gone unnoticed in earlier analyses.
Learning Objectives
Given the linear kinematic equation, write the corresponding rotational kinematic equation
Calculate the linear distances, velocities, and accelerations of points on a rotating system given the angular velocities and accelerations
In this section, we relate each of the rotational variables to the translational variables defined in Motion Along a Straight Line and Motion in Two and Three Dimensions. This will complete our ability to describe rigid-body rotations.
Angular vs. Linear Variables
In Rotational Variables, we introduced angular variables. If we compare the rotational definitions with the definitions of linear kinematic variables from Motion Along a Straight Line and Motion in Two and Three Dimensions, we find that there is a mapping of the linear variables to the rotational ones. Linear position, velocity, and acceleration have their rotational counterparts, as we can see when we write them side by side:
Position: linear $$x$$, rotational $$\theta$$
Velocity: linear $$v = \frac{dx}{dt}$$, rotational $$\omega = \frac{d \theta}{dt}$$
Acceleration: linear $$a = \frac{dv}{dt}$$, rotational $$\alpha = \frac{d \omega}{dt}$$
Let’s compare the linear and rotational variables individually. The linear variable of position has physical units of meters, whereas the angular position variable has dimensionless units of radians, as can be seen from the definition of \(\theta = \frac{s}{r}\), which is the ratio of two lengths. The linear velocity has units of m/s, and its counterpart, the angular velocity, has units of rad/s. In Rotational Variables, we saw in the case of circular motion that the linear tangential speed of a particle at a radius r from the axis of rotation is related to the angular velocity by the relation \(v_{t} = r\omega\). This could also apply to points on a rigid body rotating about a fixed axis. Here, we consider only circular motion. In circular motion, both uniform and nonuniform, there exists a centripetal acceleration (Motion in Two and Three Dimensions). The centripetal acceleration vector points inward from the particle executing circular motion toward the axis of rotation. The derivation of the magnitude of the centripetal acceleration is given in Motion in Two and Three Dimensions. From that derivation, the magnitude of the centripetal acceleration was found to be $$a_{c} = \frac{v_{t}^{2}}{r}, \label{10.14}$$ where r is the radius of the circle. Thus, in uniform circular motion, when the angular velocity is constant and the angular acceleration is zero, we have a linear acceleration, namely centripetal acceleration, since the tangential speed in Equation 10.14 is a constant. If nonuniform circular motion is present, the rotating system has an angular acceleration, and we have both a linear centripetal acceleration that is changing (because \(v_{t}\) is changing) as well as a linear tangential acceleration. These relationships are shown in Figure 10.14, where we show the centripetal and tangential accelerations for uniform and nonuniform circular motion. The centripetal acceleration is due to the change in the direction of the tangential velocity, whereas the tangential acceleration is due to any change in the magnitude of the tangential velocity. The tangential and centripetal acceleration vectors \(\vec{a}_{t}\) and \(\vec{a}_{c}\) are always perpendicular to each other, as seen in Figure 10.14. To complete this description, we can assign a total linear acceleration vector to a point on a rotating rigid body or a particle executing circular motion at a radius r from a fixed axis.
The total linear acceleration vector \(\vec{a}\) is the vector sum of the centripetal and tangential accelerations, $$\vec{a} = \vec{a}_{c} + \vec{a}_{t} \ldotp \label{10.15}$$ The total linear acceleration vector in the case of nonuniform circular motion points at an angle between the centripetal and tangential acceleration vectors, as shown in Figure 10.15. Since \(\vec{a}_{c} \perp \vec{a}_{t}\), the magnitude of the total linear acceleration is $$|\vec{a}| = \sqrt{a_{c}^{2} + a_{t}^{2}} \ldotp$$ Note that if the angular acceleration is zero, the total linear acceleration is equal to the centripetal acceleration.
Relationships between Rotational and Translational Motion
We can look at two relationships between rotational and translational motion. Generally speaking, the linear kinematic equations have their rotational counterparts. Table 10.2 lists the four linear kinematic equations and the corresponding rotational counterpart. The two sets of equations look similar to each other, but describe two different physical situations, that is, rotation and translation.
Table 10.2 - Rotational and Translational Kinematic Equations
Rotational: $$\theta_{f} = \theta_{0} + \bar{\omega} t$$; Translational: $$x_{f} = x_{0} + \bar{v} t$$
Rotational: $$\omega_{f} = \omega_{0} + \alpha t$$; Translational: $$v_{f} = v_{0} + at$$
Rotational: $$\theta_{f} = \theta_{0} + \omega_{0} t + \frac{1}{2} \alpha t^{2}$$; Translational: $$x_{f} = x_{0} + v_{0} t + \frac{1}{2} a t^{2}$$
Rotational: $$\omega_{f}^{2} = \omega_{0}^{2} + 2 \alpha (\Delta \theta)$$; Translational: $$v_{f}^{2} = v_{0}^{2} + 2a (\Delta x)$$
The second correspondence has to do with relating linear and rotational variables in the special case of circular motion. This is shown in Table 10.3, where in the third column, we have listed the connecting equation that relates the linear variable to the rotational variable. The rotational variables of angular velocity and acceleration have subscripts that indicate their definition in circular motion.
Table 10.3 - Rotational and Translational Quantities: Circular Motion (r = radius)
Rotational: $$\theta$$; Translational: $$s$$; Relationship: $$\theta = \frac{s}{r}$$
Rotational: $$\omega$$; Translational: $$v_{t}$$; Relationship: $$\omega = \frac{v_{t}}{r}$$
Rotational: $$\alpha$$; Translational: $$a_{t}$$; Relationship: $$\alpha = \frac{a_{t}}{r}$$
Translational: $$a_{c}$$; Relationship: $$a_{c} = \frac{v_{t}^{2}}{r}$$
Example 10.7 Linear Acceleration of a Centrifuge
A centrifuge has a radius of 20 cm and accelerates from a maximum rotation rate of 10,000 rpm to rest in 30 seconds under a constant angular acceleration. It is rotating counterclockwise. What is the magnitude of the total acceleration of a point at the tip of the centrifuge at t = 29.0 s? What is the direction of the total acceleration vector?
Strategy
With the information given, we can calculate the angular acceleration, which then will allow us to find the tangential acceleration. We can find the centripetal acceleration at t = 29.0 s by calculating the tangential speed at this time. With the magnitudes of the accelerations, we can calculate the total linear acceleration. From the description of the rotation in the problem, we can sketch the direction of the total acceleration vector.
Solution
The angular acceleration is $$\alpha = \frac{\omega - \omega_{0}}{t} = \frac{0 - (1.0 \times 10^{4}) \left(\dfrac{2 \pi\; rad}{60.0\; s}\right)}{30.0\; s} = -34.9\; rad/s^{2} \ldotp$$ Therefore, the tangential acceleration is $$a_{t} = r \alpha = (0.2\; m)(-34.9\; rad/s^{2}) = -7.0\; m/s^{2} \ldotp$$ The angular velocity at t = 29.0 s is $$\begin{split} \omega & = \omega_{0} + \alpha t = (1.0 \times 10^{4}) \left(\dfrac{2 \pi\; rad}{60.0\; s}\right) + (-34.9\; rad/s^{2})(29.0\; s) \\ & = 1047.2\; rad/s - 1012.1\; rad/s = 35.1\; rad/s \ldotp \end{split}$$ Thus, the tangential speed at t = 29.0 s is $$v_{t} = r \omega = (0.2\; m)(35.1\; rad/s) = 7.0\; m/s \ldotp$$ We can now calculate the centripetal acceleration at t = 29.0 s: $$a_{c} = \frac{v^{2}}{r} = \frac{(7.0\; m/s)^{2}}{0.2\; m} = 245.0\; m/s^{2} \ldotp$$ Since the two acceleration vectors are perpendicular to each other, the magnitude of the total linear acceleration is $$|\vec{a}| = \sqrt{a_{c}^{2} + a_{t}^{2}} = \sqrt{(245.0)^{2} + (-7.0)^{2}} = 245.1\; m/s^{2} \ldotp$$ Since the centrifuge has a negative angular acceleration, it is slowing down. The total acceleration vector is as shown in Figure 10.16. The angle with respect to the centripetal acceleration vector is $$\theta = \tan^{-1} \left(\dfrac{-7.0}{245.0}\right) = -1.6^{o} \ldotp$$ The negative sign means that the total acceleration vector is angled toward the clockwise direction.
Significance
From Figure 10.16, we see that the tangential acceleration vector is opposite the direction of rotation. The magnitude of the tangential acceleration is much smaller than the centripetal acceleration, so the total linear acceleration vector will make a very small angle with respect to the centripetal acceleration vector. A short numerical re-check of these values is given at the end of this section.
Exercise 10.3
A boy jumps on a merry-go-round with a radius of 5 m that is at rest. It starts accelerating at a constant rate up to an angular velocity of 5 rad/s in 20 seconds. What is the distance travelled by the boy?
Simulation
Check out this PhET simulation to change the parameters of a rotating disk (the initial angle, angular velocity, and angular acceleration), and place bugs at different radial distances from the axis. The simulation then lets you explore how circular motion relates to the bugs’ xy-position, velocity, and acceleration using vectors or graphs.
Contributors
Samuel J. Ling (Truman State University), Jeff Sanny (Loyola Marymount University), and Bill Moebs with many contributing authors. This work is licensed by OpenStax University Physics under a Creative Commons Attribution License (by 4.0).
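Returning to Example 10.7, here is a quick numerical re-check of the worked values (a sketch using the same inputs as the text; small differences come from rounding the intermediate quantities):

import math

r = 0.20                                   # m
omega0 = 1.0e4 * 2.0 * math.pi / 60.0      # 10,000 rpm converted to rad/s
alpha = (0.0 - omega0) / 30.0              # constant angular acceleration
a_t = r * alpha                            # tangential acceleration

t = 29.0
omega = omega0 + alpha * t
a_c = r * omega**2                         # centripetal acceleration, equal to v_t^2 / r
a_total = math.hypot(a_c, a_t)
angle = math.degrees(math.atan2(a_t, a_c))

print(round(alpha, 1), round(a_t, 1))      # -34.9 rad/s^2, -7.0 m/s^2
print(round(omega, 1), round(a_c, 1))      # ~34.9 rad/s, ~244 m/s^2 (text's 245 uses rounded v_t)
print(round(a_total, 1), round(angle, 1))  # ~244 m/s^2, about -1.6 degrees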
Inhibitory competition plays a critical role in enabling us to focus on a few things at a time, which we can then process effectively without getting overloaded. Inhibition also ensures that those detectors that do get activated are the ones that are the most excited by a given input -- in Darwinian evolutionary terms, these are the fittest detectors. Without inhibition, the bidirectional excitatory connectivity in the cortex would quickly cause every neuron to become highly excited, because there would be nothing to check the spread of activation. There are so many excitatory connections among neurons that it doesn't take long for every neuron to become activated. A good analogy is placing a microphone near a speaker that is playing the sound from that microphone -- this is a bidirectional excitatory system, and it quickly leads to that familiar, very loud "feedback" squeal. If one's audio system had the equivalent of the inhibitory system in the cortex, it would actually be able to prevent this feedback by dynamically turning down the input gain on the microphone, and/or the output volume of the speaker. Another helpful analogy is to an air conditioner (AC), which has a thermostat control that determines when it kicks in (and potentially how strong it is). This kind of feedback control system allows the room to warm up to a given set point (e.g., 75 degrees F) before it starts to counter the heat. Similarly, inhibition in the cortex is proportional to the amount of excitation, and it produces a similar set point behavior, where activity is prevented from getting too high: typically no more than roughly 15-25% of neurons in any given area are active at a time. The importance of inhibition goes well beyond this basic regulatory function, however. Inhibition gives rise to competition -- only the most strongly excited neurons are capable of overcoming the inhibitory feedback signal to get activated and send action potentials to other neurons. This competitive dynamic has numerous benefits in processing and learning. For example, selective attention depends critically on inhibitory competition. In the visual domain, selective attention is evident when searching for a stimulus in a crowded scene (e.g., searching for a friend in a crowd as described in the introduction). You cannot process all of the people in the crowd at once, so only a relatively few capture your attention, while the rest are ignored. In neural terms, we say that the detectors for the attended few were sufficiently excited to out-compete all the others, which remain below the firing threshold due to the high levels of inhibition. Both bottom-up and top-down factors can contribute to which neural detectors get over threshold or not, but without inhibition, there wouldn't be any ability to select only a few to focus on in the first place. Interestingly, people with Balint's syndrome, who have bilateral damage to the parietal cortex (which plays a critical role in spatial attention of this sort), show reduced attentional effects and also are typically unable to process anything if a visual display contains more than one item (i.e., "simultanagnosia" -- the inability to recognize objects when there are multiple simultaneously present in a scene). We will explore these phenomena in the Perception Chapter. 
We will see in the Learning Chapter that inhibitory competition facilitates learning by providing this selection pressure, whereby only the most excited detectors get activated, which then gets reinforced through the learning process to make the most active detectors even better tuned for the current inputs, and thus more likely to respond to them again in the future. This kind of positive feedback loop over episodes of learning leads to the development of very good detectors for the kinds of things that tend to arise in the environment. Without the inhibitory competition, a large percentage of neurons would get trained up for each input, and there would be no specialization of detectors for specific categories in the environment. Every neuron would end up weakly detecting everything, and thus accomplish nothing. Thus, again we see that competition and limitations can actually be extremely beneficial. A summary term for the kinds of neural patterns of activity that develop in the presence of inhibitory competition is sparse distributed representations. These have relatively few (15-25%) neurons active at a time, and thus these neurons are more highly tuned for the current inputs than they would otherwise be in a fully distributed representation with much higher levels of overall activity. Thus, although technically inhibition does not contribute directly to the basic information processing functions like categorization, because inhibitory connectivity is strictly local within a given cortical area, inhibition does play a critical indirect role in shaping neural activity patterns at each level. Feedforward and Feedback Inhibition Figure \(3.16\): Feedforward and Feedback Inhibition. Feedback inhibition reacts to the actual level of activity in the excitatory neurons, by directly responding to this activity (much like an air conditioner reacts to excess heat). Feedforward inhibition anticipates the level of excitation of the excitatory neurons by measuring the level of excitatory input they are getting from the Input area. A balance of both types works best. There are two distinct patterns of neural connectivity that drive inhibitory interneurons in the cortex, feedforward and feedback (Figure 3.16). Just to keep things interesting, these are not the same as the connections among excitatory neurons. Functionally, feedforward inhibition can anticipate how excited the excitatory neurons will become, whereas feedback accurately reflects the actual level of activation they achieve. Feedback inhibition is the most intuitive, so we'll start with it. Here, the inhibitory interneurons are driven by the same excitatory neurons that they then project back to and inhibit. This is the classical "feedback" circuit from the AC example. When a set of excitatory neurons starts to get active, they then communicate this activation to the inhibitory interneurons (via excitatory glutamatergic synapses onto inhibitory interneurons -- inhibitory neurons have to get excited just like everyone else). This excitation of the inhibitory neurons then causes them to fire action potentials that come right back to the excitatory neurons, opening up their inhibitory ion channels via GABA release. The influx of Cl- (chloride) ions from the inhibitory input channels on these excitatory neurons acts to drive them back down in the direction of the inhibitory driving potential (in the tug-of-war analogy, the inhibitory guy gets bigger and pulls harder). 
Thus, excitation begets inhibition which counteracts the excitation and keeps everything under control, just like a blast of cold air from the AC unit. Feedforward inhibition is perhaps a bit more subtle. It operates when the excitatory synaptic inputs to excitatory neurons in a given area also drive the inhibitory interneurons in that area, causing the interneurons to inhibit the excitatory neurons in proportion to the amount of excitatory input they are currently receiving. This would be like a thermostat reacting to the anticipated amount of heat, for example, by turning on the AC based on the outside temperature. Thus, the key difference between feedforward and feedback inhibition is that feedforward reflects the net excitatory input, whereas feedback reflects the actual activation output of a given set of excitatory neurons. As we will see in the exploration, the anticipatory function of feedforward inhibition is crucial for limiting the kinds of dramatic feedback oscillations that can develop in a purely feedback-driven system. However, too much feedforward inhibition makes the system very slow to respond, so there is an optimal balance of the two types that results in a very robust inhibitory dynamic. Exploration of Inhibitory Interneuron Dynamics Inhibition (inhib.proj) -- this simulation shows how feedforward and feedback inhibitory dynamics lead to the robust control of excitatory pyramidal neurons, even in the presence of bidirectional excitation. FFFB Inhibition Function We can efficiently implement the feedforward (FF) and feedback (FB) form of inhibition without actually requiring the inhibitory interneurons, by using the average net input and activity levels in a given layer, in a simple equation shown below. This works surprisingly well, without requiring subsequent parameter adaptation during learning, and this FFFB form of inhibition is now the default, replacing the k-Winners-Take-All (kWTA) form of inhibition used in the 1st Edition of the textbook. The average excitatory net input to a layer (or group of units within a layer, if inhibition is operating at that level) is just the average of the net input (\(\eta_{i}\)) of each unit in the layer / group: \(<\eta>=\sum_{n} \frac{1}{n} \eta_{i}\) Similarly, the average activation is just the average of the activation values (\(y_{i}\)): \(<y>=\sum_{n} \frac{1}{n} y_{i}\) We compute the overall inhibitory conductance applied uniformly to all the units in the layer / group with just a few key parameters applied to each of these two averages. Because the feedback component tends to drive oscillations (alternately over and under reacting to the average activation), we apply a simple time integration dynamic on that term. The feedforward does not require this time integration, but it does require an offset term, which was determined by fitting the actual inhibition generated by our earlier kWTA equations. Thus, the overall inhibitory conductance is just the sum of the two terms (ff and fb), with an overall inhibitory gain factor gi: \(g_{i}(t)=\operatorname{gi}[\mathrm{ff}(t)+\mathrm{fb}(t)]\) This gi factor is typically the only parameter manipulated to determine how active overall a layer is. Typically a value of 1.5 is as low as is used, to give a more widely distributed activation pattern, with values around 2.0 (often 2.1 or 2.2 works best) being very typical. For very sparse layers (e.g., a single output unit active), values up to around 3.5 or so can be used. 
The feedforward (ff) term is: \(\mathrm{ff}(t)=\mathrm{ff}[<\eta>-\mathrm{ff} 0]_{+}\) where ff is the overall gain factor for the feedforward component (set to 1.0 by default), and ff0 is an offset (set to 0.1 by default) that is subtracted from the average netinput value . The feedback (fb) term is: \(\mathrm{fb}(t)=\mathrm{fb}(t-1)+d t[\mathrm{fb}<y>-\mathrm{fb}(t-1)]\) where fb is the overall gain factor for the feedback component (0.5 default), dt is the time constant for integrating the feedback inhibition (0.7 default), and the t-1 indicates the previous value of the feedback inhibition -- this equation specifies a graded folding-in of the new inhibition factor on top of what was there before, and the relatively fast dt value of 0.7 makes it track the new value fairly quickly -- there is just enough lag to iron out the oscillations. Overall, it should be clear that this FFFB inhibition is extremely simple to compute (much simpler than the previous kWTA computation), and it behaves in a much more proportional manner relative to the excitatory drive on the units -- if there is higher overall excitatory input, then the average activation overall in the layer will be higher, and vice-versa. The previous kWTA-based computation tended to be more rigid and imposed a stronger set-point like behavior. The FFFB dynamics, being much more closely tied to the way inhibitory interneurons actually function, should provide a more biologically accurate simulation. Exploration of FFFB Inhibition To see FFFB inhibition in action, you can follow the instructions at the last part of the Inhibition (inhib.proj) model.
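As a compact summary, the FFFB equations above can be transcribed almost literally into code. This is a sketch of my own; the parameter names (gi, ff, ff0, fb, dt) follow the text, and the default gain values are illustrative choices within the ranges mentioned above.

import numpy as np

def fffb_inhibition(eta, y, fb_prev, gi=2.0, ff=1.0, ff0=0.1, fb=0.5, dt=0.7):
    """One update of layer-level FFFB inhibition.

    eta     : array of net inputs for the units in the layer
    y       : array of activations for the units in the layer
    fb_prev : feedback term fb(t-1) from the previous update
    Returns (g_i, new feedback term).
    """
    eta_avg = float(np.mean(eta))
    y_avg = float(np.mean(y))
    ff_term = ff * max(eta_avg - ff0, 0.0)            # ff(t) = ff * [<eta> - ff0]_+
    fb_term = fb_prev + dt * (fb * y_avg - fb_prev)   # time-integrated feedback term
    g_i = gi * (ff_term + fb_term)                    # overall inhibitory conductance
    return g_i, fb_term

# Example: a layer receiving moderate input before any activation has developed
g_i, fb_t = fffb_inhibition(eta=np.full(100, 0.4), y=np.zeros(100), fb_prev=0.0)
print(round(g_i, 3), round(fb_t, 3))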
Denote by $\varSigma_1$ and $\varSigma_2$ your matrices, both of dimension $p$.

Condition number: $\log(\lambda_1)-\log(\lambda_p)$, where $\lambda_1$ ($\lambda_p$) is the largest (smallest) eigenvalue of $\varSigma^*$, and $\varSigma^*$ is defined as $\varSigma^*:=\varSigma_1^{-1/2}\varSigma_2\varSigma_1^{-1/2}$.

Edit: I edited out the second of the two proposals. I think I had misunderstood the question. The proposal based on condition numbers is used in robust statistics a lot to assess quality of fit. An old source I could find for it is:

Yohai, V.J. and Maronna, R.A. (1990). The Maximum Bias of Robust Covariances. Communications in Statistics–Theory and Methods, 19, 3925–2933.

I had originally included the Det ratio measure:

Det ratio: $\log\left(\det(\varSigma^{**})/\sqrt{\det(\varSigma_2)\,\det(\varSigma_1)}\right)$, where $\varSigma^{**}=(\varSigma_1+\varSigma_2)/2$,

which would be the Bhattacharyya distance between two Gaussian distributions having the same location vector. I must have originally read the question as pertaining to a setting where the two covariances were coming from samples from populations assumed to have equal means.
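A small sketch of the condition-number measure for two symmetric positive-definite matrices (the function name is mine; numpy and scipy are assumed to be available):

import numpy as np
from scipy.linalg import sqrtm, eigvalsh

def cond_number_distance(S1, S2):
    """log(lambda_max) - log(lambda_min) of Sigma1^{-1/2} Sigma2 Sigma1^{-1/2}."""
    S1_inv_sqrt = np.linalg.inv(np.real(sqrtm(S1)))
    M = S1_inv_sqrt @ S2 @ S1_inv_sqrt
    lam = eigvalsh((M + M.T) / 2.0)          # symmetrize against round-off
    return np.log(lam[-1]) - np.log(lam[0])

S1 = np.array([[2.0, 0.3], [0.3, 1.0]])
S2 = np.array([[1.5, -0.2], [-0.2, 1.2]])
print(cond_number_distance(S1, S1))          # 0 when the two matrices coincide
print(round(cond_number_distance(S1, S2), 3))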
The theory of quasifree quantum stochastic calculus for infinite-dimensional noise is developed within the framework of Hudson-Parthasarathy quantum stochastic calculus. The question of uniqueness for the covariance amplitude with respect to which a given unitary quantum stochastic cocycle is quasifree is addressed, and related to the minimality of the corresponding stochastic dilation. The theory is applied to the identification of a wide class of quantum random walks whose limit processes are driven by quasifree noises.

Let $|\psi\rangle\langle \psi|$ be a random pure state on $\mathbb{C}^{d^2}\otimes \mathbb{C}^s$, where $\psi$ is a random unit vector uniformly distributed on the sphere in $\mathbb{C}^{d^2}\otimes \mathbb{C}^s$. Let $\rho_1$ be random induced states $\rho_1=Tr_{\mathbb{C}^s}(|\psi\rangle\langle \psi |)$ whose distribution is $\mu_{d^2,s}$; and let $\rho_2$ be random induced states following the same distribution $\mu_{d^2,s}$, independent from $\rho_1$. Let $\rho$ be a random state induced by the entanglement swapping of $\rho_1$ and $\rho_2$. We show that the empirical spectrum of $\rho- {1\mkern -4mu{\rm l}}/d^2$ converges almost surely to the Marcenko-Pastur law with parameter $c^2$ as $d\rightarrow \infty$ and $s/d \rightarrow c$. As an application, we prove that the state $\rho$ is separable generically if $\rho_1, \rho_2$ are PPT entangled.

In this paper we study a massive MIMO relay system with linear precoding under the conditions of imperfect channel state information at the transmitter (CSIT) and per-user channel transmit correlation. In our system the source-relay channels are massive multiple-input multiple-output (MIMO) ones and the relay-destination channels are massive multiple-input single-output (MISO) ones. Large random matrix theory (RMT) is used to derive a deterministic equivalent of the signal-to-interference-plus-noise ratio (SINR) at each user in massive MIMO amplify-forward and decode-forward (M-MIMO-ADF) relaying with regularized zero-forcing (RZF) precoding, as the number of transmit antennas and users M, K approaches infinity and M >> K. In this paper we obtain a closed-form expression for the deterministic equivalent of h^H_k W(hat)_l h(hat)_k, and we give two theorems and a corollary to derive the deterministic equivalent of the SINR at each user. Simulation results show that the deterministic equivalent of the SINR at each user in M-MIMO-ADF relaying and the results of Theorem 1, Theorem 2, Proposition 1 and Corollary 1 are accurate.

We consider the sequence $( Q_n )_{n=1}^{\infty}$ of semi-meander polynomials which are used in the enumeration of semi-meandric systems (a family of diagrams related to the classical stamp-folding problem). We show that for a fixed natural number $d$, the sequence $( Q_n (d) )_{n=1}^{\infty}$ appears as the sequence of moments for a compactly supported probability measure $\nu_d$ on the real line. More generally, we consider a two-variable generalization $Q_n(t,u)$ of $Q_n(t)$, which is related to a natural concept of "self-intersecting meandric system"; the second variable of $Q_n (t,u)$ keeps track of the crossings of such a system (and one has, in particular, that $Q_n (t,0)$ is the original semi-meander polynomial $Q_n (t)$). We prove that for a fixed natural number $d$ and a fixed real number $q$ with $|q| < 1$, the sequence $( Q_n(d,q) )_{n=1}^{\infty}$ appears as the sequence of moments for a compactly supported probability measure $\nu_{d:q}$ on the real line.
The measure $\nu_{d;q}$ is found as the scalar spectral measure for an operator $T_{d;q}$ constructed by using left and right creation/annihilation operators on a $q$-deformation of the full Fock space introduced by Bozejko and Speicher. The relevant calculations of moments for $T_{d;q}$ are made by using a two-sided version of a (previously studied in the one-sided case) $q$-Wick formula, which involves the number of crossings of a pair-partition.

Let $\mu$ be a compactly supported probability measure on the positive half-line and let $\mu^{\boxtimes t}$ be the free multiplicative convolution semigroup. We show that the support of $\mu^{\boxtimes t}$ varies continuously as $t$ changes. We also obtain the asymptotic length of the support of these measures.

The phenomenon of superconvergence is proved for all freely infinitely divisible distributions. Precisely, suppose that the partial sums of a sequence of free identically distributed, infinitesimal random variables converge in distribution to a nondegenerate freely infinitely divisible law. Then the distribution of the sum becomes Lebesgue absolutely continuous with a continuous density in finite time, and this density can be approximated by that of the limit law uniformly, as well as in all $L^{p}$-norms for $p>1$, on the real line except possibly in the neighborhood of one point. Applications include the global superconvergence to freely stable laws and that to free compound Poisson laws over the whole real line.

The free contraction norm (or the (t)-norm) was introduced by Belinschi, Collins and Nechita as a tool to compute the typical location of the collection of singular values associated to a random subspace of the tensor product of two Hilbert spaces. In turn, it was used by them in order to obtain sharp bounds for the violation of the additivity of the minimum output entropy for random quantum channels with Bell states. This free contraction norm, however, is difficult to compute explicitly. The purpose of this note is to give a good estimate for this norm. Our technique is based on results of superconvergence in the context of free probability theory. As an application, we give a new, simple and conceptual proof of the violation of the additivity of the minimum output entropy.

This paper describes the quality of convergence to an infinitely divisible law relative to free multiplicative convolution. We show that convergence in distribution for products of identically distributed and infinitesimal free random variables implies superconvergence of their probability densities to the density of the limit law. Superconvergence to the marginal law of free multiplicative Brownian motion at a specified time is also studied. In the unitary case, the superconvergence to free Brownian motion and that to the Haar measure are shown to be uniform over the entire unit circle, implying further a free entropic limit theorem and a universality result for unitary free L\'{e}vy processes. Finally, the method of proofs on the positive half-line gives rise to a new multiplicative Boolean to free Bercovici-Pata bijection.

We obtain a formula for the density of the free convolution of an arbitrary probability measure on the unit circle of $\mathbb{C}$ with the free multiplicative analogues of the normal distribution on the unit circle. This description relies on a characterization of the image of the unit disc under the subordination function, which also allows us to prove some regularity properties of the measures obtained in this way.
As an application, we give a new proof for Biane's classic result on the densities of the free multiplicative analogue of the normal distributions. We obtain analogous results for probability measures on $\mathbb{R}^+$. Finally, we describe the density of the free multiplicative analogue of the normal distributions as an example and prove unimodality and some symmetry properties of these measures.

We consider a pair of probability measures $\mu,\nu$ on the unit circle such that $\Sigma_{\lambda}(\eta_{\nu}(z))=z/\eta_{\mu}(z)$. We prove that the same type of equation holds for any $t\geq 0$ when we replace $\nu$ by $\nu\boxtimes\lambda_t$ and $\mu$ by $\mathbb{M}_t(\mu)$, where $\lambda_t$ is the free multiplicative analogue of the normal distribution on the unit circle of $\mathbb{C}$ and $\mathbb{M}_t$ is the map defined by Arizmendi and Hasebe. These equations are a multiplicative analogue of equations studied by Belinschi and Nica. In order to achieve this result, we study infinite divisibility of the measures associated with subordination functions in multiplicative free Brownian motion and multiplicative free convolution semigroups. We use the modified $\mathcal{S}$-transform introduced by Raj Rao and Speicher to deal with the case that $\nu$ has mean zero. The same type of result holds for convolutions on the positive real line. We also obtain some regularity properties for the free multiplicative analogue of the normal distributions.

For a series of free $R$-diagonal operators, we prove an analogue of the three series theorem. We show that a series of free $R$-diagonal operators converges almost uniformly if and only if two numerical series converge.

In this paper, we study the supports of measures in multiplicative free semigroups on the positive real line and on the unit circle. We provide formulas for the density of the absolutely continuous parts of measures in these semigroups. The descriptions rely on the characterizations of the images of the upper half-plane and the unit disc under certain subordination functions. These subordination functions are $\eta$-transforms of infinitely divisible measures with respect to multiplicative free convolution. The characterizations also help us study the regularity properties of these measures. One of the main results is that the number of components in the support of measures in the semigroups is a decreasing function of the semigroup parameter.

Given a probability measure $\mu$ on the real line, there exists a semigroup $\mu_t$ with real parameter $t>1$ which interpolates the discrete semigroup of measures $\mu_n$ obtained by iterating its free convolution. It was shown in \cite{[BB2004]} that it is impossible that $\mu_t$ has no mass in an interval whose endpoints are atoms. We extend this result to semigroups related to multiplicative free convolution. The proofs use subordination results.
Does anyone here understand why he set the Velocity of Center Mass = 0 here? He keeps setting the Velocity of center mass , and acceleration of center mass(on other questions) to zero which i dont comprehend why? @amanuel2 Yes, this is a conservation of momentum question. The initial momentum is zero, and since there are no external forces, after she throws the 1st wrench the sum of her momentum plus the momentum of the thrown wrench is zero, and the centre of mass is still at the origin. I was just reading a sci-fi novel where physics "breaks down". While of course fiction is fiction and I don't expect this to happen in real life, when I tired to contemplate the concept I find that I cannot even imagine what it would mean for physics to break down. Is my imagination too limited o... The phase-space formulation of quantum mechanics places the position and momentum variables on equal footing, in phase space. In contrast, the Schrödinger picture uses the position or momentum representations (see also position and momentum space). The two key features of the phase-space formulation are that the quantum state is described by a quasiprobability distribution (instead of a wave function, state vector, or density matrix) and operator multiplication is replaced by a star product.The theory was fully developed by Hilbrand Groenewold in 1946 in his PhD thesis, and independently by Joe... not exactly identical however Also typo: Wavefunction does not really have an energy, it is the quantum state that has a spectrum of energy eigenvalues Since Hamilton's equation of motion in classical physics is $$\frac{d}{dt} \begin{pmatrix} x \\ p \end{pmatrix} = \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix} \nabla H(x,p) \, ,$$ why does everyone make a big deal about Schrodinger's equation, which is $$\frac{d}{dt} \begin{pmatrix} \text{Re}\Psi \\ \text{Im}\Psi \end{pmatrix} = \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix} \hat H \begin{pmatrix} \text{Re}\Psi \\ \text{Im}\Psi \end{pmatrix} \, ?$$ Oh by the way, the Hamiltonian is a stupid quantity. We should always work with $H / \hbar$, which has dimensions of frequency. @DanielSank I think you should post that question. I don't recall many looked at the two Hamilton equations together in this matrix form before, which really highlight the similarities between them (even though technically speaking the schroedinger equation is based on quantising Hamiltonian mechanics) and yes you are correct about the $\nabla^2$ thing. I got too used to the position basis @DanielSank The big deal is not the equation itself, but the meaning of the variables. The form of the equation itself just says "the Hamiltonian is the generator of time translation", but surely you'll agree that classical position and momentum evolving in time are a rather different notion than the wavefunction of QM evolving in time. If you want to make the similarity really obvious, just write the evolution equations for the observables. The classical equation is literally Heisenberg's evolution equation with the Poisson bracket instead of the commutator, no pesky additional $\nabla$ or what not The big deal many introductory quantum texts make about the Schrödinger equation is due to the fact that their target audience are usually people who are not expected to be trained in classical Hamiltonian mechanics. No time remotely soon, as far as things seem. Just the amount of material required for an undertaking like that would be exceptional. 
It doesn't even seem like we're remotely near the advancement required to take advantage of such a project, let alone organize one. I'd be honestly skeptical of humans ever reaching that point. It's cool to think about, but so much would have to change that trying to estimate it would be pointless currently (lol) talk about raping the planet(s)... re dyson sphere, solar energy is a simplified version right? which is advancing. what about orbiting solar energy harvesting? maybe not as far away. kurzgesagt also has a video on a space elevator, its very hard but expect that to be built decades earlier, and if it doesnt show up, maybe no hope for a dyson sphere... o_O BTW @DanielSank Do you know where I can go to wash off my karma? I just wrote a rather negative (though well-deserved, and as thorough and impartial as I could make it) referee report. And I'd rather it not come back to bite me on my next go-round as an author o.o
Buckling, When Structures Suddenly Collapse

Buckling instability is a treacherous phenomenon in structural engineering, where a small increase in the load can lead to a sudden catastrophic failure. In this blog post, we will investigate some classes of buckling problems and how they can be analyzed.

What Is Buckling?

Have you ever seen the party trick where a full-grown person can balance on an emptied soda can? Even though the can's wall is only 0.1 millimeter thick aluminum, it can sustain the load as long as its shape is perfectly cylindrical. The axial stress is below the yield stress, which is easily checked by just dividing the force by the cross-sectional area. But, if you just press lightly against a point on the cylindrical surface, the can will collapse. The collapse load for the perfect cylinder is higher than the weight of the person performing the trick, while only a slight distortion will significantly decrease the load bearing capacity. This phenomenon is called imperfection sensitivity and is one of the possible pitfalls when designing structures under compression. You can see some cases of collapsed shells with dimensions much larger than soda cans on this page.

Mathematically, buckling is a bifurcation problem. At a certain load level, there is more than one solution. The sketch below shows a bifurcation point and three different possible paths for the solution, branching out at the bifurcation point. The secondary path can be of three fundamentally different types as indicated in the sketch.

A solution with a bifurcation.

If the load carrying capacity continues to increase, the solution can be characterized as stable. This is the least dangerous situation, but if you fail to recognize it, you will probably compute too low stresses and thereby overestimate the load carrying capacity. The neutral and unstable paths are more dangerous, since once the peak load is reached, there is no limitation of the displacements.

When there is more than one solution, the question about which one is correct arises. All solutions will satisfy the equations of equilibrium, but in real life, the structure will have to select a path. It will do so based on where the energy can be minimized. The solution you compute using conventional linear theory will in general not be the preferred solution. You can make the analogy with a ball on a wavy surface. It can be in equilibrium both on the hilltops and in the valleys, but any perturbation will make it drop into the valley. In the same way, even the smallest perturbation to the structure will make it jump to the more energetically preferable state. In real life, there are no perfect structures; there will always be perturbations in geometry, material, or loads.

Linearized Buckling Analysis

The easiest way in which you can approach a buckling problem is by doing a linearized buckling analysis. This is essentially what you do with pen and paper for simple structures in basic engineering courses. Computing the critical loads for compressed struts (like the Euler buckling cases) is one such example. In COMSOL Multiphysics, there is a special study type called "Linear Buckling". When performing such a study, you add the external loads with an arbitrary scale. It can be a unit load or the intended operating load. The study consists of two study steps:

1. A Stationary study step where the stress state from the applied load is computed.
2. A Linear Buckling study step.
This is an eigenvalue solution where the stress state is used for determining the critical load factor. The critical load factor is the factor by which you need to multiply the applied loads to reach the buckling load. If you modeled with operational loads, the critical load factor can be interpreted as a factor of safety. The critical load factor can be smaller than unity, in which case the critical load is smaller than the one you applied. This in itself is not a problem, since the analysis is linear. The critical load factor can even be negative, in which case the lowest load needed for buckling acts in the opposite direction from the one in which you applied the load. The eigenvalue solution will also give you the shape of the buckling mode. Note that the mode shape is only known to within an arbitrary scale factor, just like an eigenmode in an eigenfrequency analysis.

Before going into detail, some words of warning are appropriate:

- For some real-life structures, the theoretical buckling load obtained using this approach can be significantly higher than what would be encountered in practice due to imperfection sensitivity. This is especially important for thin shells.
- Some structures show significant nonlinearity even before buckling. The reasons can be both geometrical and material nonlinearity.
- Never use symmetry conditions in a buckling analysis. Even though the structure and loads are symmetric, the buckling shape may not be.

The buckling shapes of two symmetric frames with slightly different cross sections and equal symmetric load.

The idea with the linearized buckling analysis is that the problem can be solved as a linear eigenvalue problem. The buckling criterion is that the stiffness matrix is singular, so that the displacements are indeterminate. The applied set of loads is called \mathbf P_0, and the critical load state is called \mathbf P_c = \lambda \mathbf P_0, where \lambda is a scalar multiplier. The total stiffness matrix for the full geometrically nonlinear problem, \mathbf K, can be seen as a sum of two contributions. One is the ordinary stiffness matrix for a linear problem, \mathbf K_L, and the second is a nonlinear addition, \mathbf K_{NL}, which depends on the load. In the linear approximation, \mathbf K_{NL} is proportional to the load, so that

\mathbf K = \mathbf K_L + \lambda \, \mathbf K_{NL}(\mathbf P_0)

The stiffness matrix is singular when its determinant is zero,

\det \left( \mathbf K_L + \lambda \, \mathbf K_{NL}(\mathbf P_0) \right) = 0

This forms an eigenvalue problem for the parameter \lambda. The lowest eigenvalue \lambda is the critical load factor, and the corresponding eigenmode, \mathbf u, shows the buckling shape. By default, only one buckling mode corresponding to the lowest critical load is computed. You can select to compute any number of modes, and for a complex structure this can have some interest. There may be several buckling modes with similar critical load factors. The lowest one may not correspond with the most critical one in real life due to, for example, imperfection sensitivity.

In the COMSOL software, you should not mark the Linear Buckling study step as being geometrically nonlinear. The nonlinear terms giving \mathbf K_{NL} are added separately. However, if you do select geometric nonlinearity, you will solve the following problem:

\det \left( \mathbf K_L + (1+\lambda) \, \mathbf K_{NL}(\mathbf P_0) \right) = 0

The extra '1' in the term \lambda+1 is automatically compensated for, so the computed load factor is the same in either case. The best rule is to use the same setting for geometric nonlinearity in both the preload study step and the buckling study step.
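To make the eigenvalue formulation above concrete, here is a minimal stand-alone sketch, not a COMSOL model: it assembles \mathbf K_L and a unit-load geometric stiffness for a pinned-pinned Euler column with finite differences and solves the generalized eigenvalue problem. All material data, the column length, and the grid size are illustrative assumptions; the smallest eigenvalue should reproduce the classical Euler load \pi^2 EI/L^2.

```python
# Linearized buckling as (K_L + lambda*K_NL) u = 0, illustrated on a pinned-pinned
# Euler column (EI w'''' + P w'' = 0) discretized with central finite differences.
import numpy as np
from scipy.linalg import eigh

E, I, L = 210e9, 1e-8, 2.0           # Young's modulus, second moment of area, length (assumed)
n = 200                               # number of interior nodes
h = L / (n + 1)

# K_L ~ EI * d^4/dx^4 (bending stiffness), with w = w'' = 0 at both ends
D4 = (np.diag(6 * np.ones(n)) + np.diag(-4 * np.ones(n - 1), 1) + np.diag(-4 * np.ones(n - 1), -1)
      + np.diag(np.ones(n - 2), 2) + np.diag(np.ones(n - 2), -2)) / h**4
D4[0, 0] = D4[-1, -1] = 5 / h**4      # moment-free (pinned) end correction
K_L = E * I * D4

# geometric stiffness per unit axial load ~ -d^2/dx^2 (symmetric positive definite)
D2 = (np.diag(-2 * np.ones(n)) + np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)) / h**2
K_G = -D2

# EI*D4 w = P * K_G w : smallest eigenvalue P is the critical load
P, modes = eigh(K_L, K_G)
print(f"FD critical load : {P[0]:.1f} N")
print(f"Euler formula    : {np.pi**2 * E * I / L**2:.1f} N")
```

The generalized symmetric solver is used here because the geometric stiffness of this toy problem is positive definite; for a general structure the matrices come out of the assembly of the finite element model instead.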
You can study an example of a linearized buckling analysis in the model Linear Buckling Analysis of a Truss Tower. Fixed Loads and Variable Loads Sometimes, there is one set of loads, \mathbf Q, which can be considered as fixed with respect to the buckling analysis, whereas another set of loads, \mathbf P_0, will be multiplied by the load factor \lambda. Still, the combination of both load systems must be taken into account when computing the critical load factor. Mathematically, this problem can be stated as This kind of problem can also be solved in COMSOL Multiphysics using one of two strategies: Run it as a post-buckling analysis, with one set of loads fixed and the other set of loads ramped up. This is straightforward, but unnecessarily heavy from the computational point of view. Use a modified version of the Linear Buckling study as described below. Due to the flexibility of the software, it is not difficult to modify the built-in Linear Buckling study so that it can handle the two separate load systems. To do that, start by adding an extra physics interface, which is used only to compute the stress state caused by the fixed load. Solve for this interface only in the stationary analysis, but not in the buckling step. The extra physics interface is not active in the Linear Buckling step. Now, you need to generate the extra stiffness matrix contribution in the buckling study from the stresses that were computed in the second physics interface. You do that by adding the following extra weak contribution: Here, \boldsymbol \sigma^{Q} is the stress tensor from the fixed load, and \mathbf E and \boldsymbol \epsilon are the Green-Lagrange and linear strain tensors, respectively. In other words, the difference \mathbf E-\boldsymbol \epsilon contains the quadratic terms of the Green-Lagrange strain tensor. Contribution from the fixed load system for a 2D Solid Mechanics problem. Now, you can run the study sequence as usual, and the computed critical load factor applies only to the second load system. Post-Buckling Analysis With a linearized buckling analysis, you will only find the critical load, but not what happens once it has been reached. In many cases, you are only interested in ensuring the safety against reaching the buckling load, and then a linearized study may be sufficient. Sometimes, you will, however, need the full deformation history. Some of the reasons for this might be: The structure has significant nonlinearity also before the critical load, so a linearized analysis is not applicable. You need to investigate imperfection sensitivity. The operation of the component intentionally acts in the post-buckling regime. In order to perform a post-buckling analysis, you will need to load the structure incrementally, and trace the load-deflection history. In the COMSOL software, you can use the parametric continuation solver to do this. Doing a post-buckling analysis is not a trivial task. An inherent problem is that there are several solutions to a bifurcation problem, so how do you know that the solution you get is as intended? Also, in many cases the buckling instability will manifest itself numerically as an ill-conditioned or singular stiffness matrix, so that the solver will fail to converge unless you use appropriate modeling techniques. Below, I outline some useful approaches. Symmetric Structures Consider a simple case like a cantilever beam with a compressive load at the tip. 
When it reaches the collapse load, it can deflect in an arbitrary direction in 3D, or in two possible directions in 2D. It is, however, unlikely that the solver will converge to any of these solutions, unless the symmetry is disturbed since the symmetric problem will become singular at the buckling load. If you add a small transverse load at the tip, the solution can be traced without problems. An example using this technique can be found in the Large Deformation Beam model. Snap-Through Problems In many cases, the structure will “jump” from one state to another. A simple example of this can be displayed by the two-bar truss structure below. Snap-through analysis of a simple truss structure. At deflection 0.2, the two bars are horizontal. When the force is increased, it will reach the peak value at A. Numerically, the stiffness matrix will become singular. Physically, the structure will suddenly invert and jump to the state B along the red dotted line. In real life, this will be a dynamic event. The stored strain energy will be released and converted into kinetic energy. One way of solving this problem is to actually run a time-dependent analysis, where the inertia forces will balance the external load and internal elastic forces. However, such an approach is seldom used, since it is computationally expensive. To trace the solid green line, you can replace the prescribed load by a prescribed displacement, and instead record the reaction force. Replacing loads with prescribed displacements is a simple method to stabilize models, but the method has limitations: It is more or less limited to cases where the external load is a single point load. The displacement you prescribe must be monotonically increasing. To introduce a more general method, consider the shallow cylindrical shell below. It is subjected to a single point load at the center, so in this case it would also be tempting to use displacement control. But, as you can see in the graph below, neither the force nor the displacement under the force is monotonic during the buckling event. A shallow cylindrical shell and graph of the load versus displacement. Animation of the buckling event. For problems like this, literature will recommend that you use an arc-length solver. The popular Riks method is one such method, and we are frequently asked why we do not add such a solver to our software. The simple answer is that we do not need one. A problem like this one is actually quite easy to solve using the continuation solver in COMSOL Multiphysics, once you have learned how to do it. All you need is to figure out a quantity in your model that will increase monotonically, and then use it to drive the analysis. For instance, in the model above, you can select the average vertical displacement of the shell surface as the controlling parameter. You will then add the load intensity as an extra degree of freedom in the problem, introduced through a Global Equation. The equation to be fulfilled is that the average displacement (defined through an average operator) should be equal to the continuation parameter (called disp in the screenshot below). Adding a Global Equation to control the load. The Stationary solver is set up to run the continuation sweep. You can download the full model from the Model Gallery. The method I just described above is by no means limited to buckling problems in mechanics. It can be used for any unstable problem, like a pull-in analysis of an electromechanical system, for example. 
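As a complement to the snap-through discussion above, here is a small stand-alone sketch, not the COMSOL shell model: it evaluates the analytical load-deflection relation of a shallow two-bar (von Mises) truss under a prescribed apex displacement. The geometry and material values are made-up illustration values; the point is that the force passes through a limit point and even changes sign, which is exactly why load control fails and a monotone control parameter (displacement or an averaged quantity) works.

```python
# Load-deflection curve of a shallow two-bar truss traced under displacement control.
import numpy as np

E, A = 210e9, 1e-4          # Young's modulus and bar cross-section area (assumed)
a, h = 1.0, 0.2             # half-span and initial apex height (assumed)
L0 = np.hypot(a, h)         # initial bar length

def apex_force(d):
    """Vertical force needed to hold the apex at downward displacement d."""
    y = h - d                        # current apex height
    L = np.hypot(a, y)               # current bar length
    eps = (L - L0) / L0              # engineering strain (compressive for 0 < d < 2h)
    return -2 * E * A * eps * y / L  # vertical equilibrium of the apex node

d = np.linspace(0.0, 2 * h, 401)     # the prescribed, monotonically increasing parameter
F = apex_force(d)
print(f"peak (limit-point) load : {F.max():.1f} N at d = {d[F.argmax()]:.3f} m")
print(f"force with bars flat    : {apex_force(h):.1f} N")   # zero when the bars are horizontal
```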
Imperfections

Occasionally, it is necessary to model imperfections explicitly. As an example, there are standards stating that a load must have a certain eccentricity, or that a beam must have a certain assumed initial curvature. When you introduce an imperfection, the load-deflection curve will take a "shortcut" between the branches of the ideal bifurcation curve.

Solution path for a model with initial imperfection.

When you include a disturbance in the model of a geometry that is imperfection sensitive, the peak load may decrease significantly. This is what happens to the soda can in the scenario mentioned earlier, and it is a physical reality, not just an effect of finite element modeling. Thus, it is of the utmost importance to actually take imperfections into account for this class of structures.

Solution path for a model with imperfection sensitivity.

How should you then select an appropriate imperfection in your model? One good strategy is to first perform a linearized buckling analysis and then use the computed mode shape as the imperfection. The idea is that the structure will be most sensitive to this shape. It is, however, not essential that you capture the exact shape, so you could use anything similar. The size of the perturbation should be similar to what you would expect in your real structure when considering manufacturing tolerances and operating conditions. In some cases, it is also a good idea to compute several buckling modes and try more than one of them if the critical load factors are of the same order of magnitude. The imperfection sensitivity can vary a lot between different buckling modes.

Instead of actually changing the geometry, it is often easier to obtain the perturbation using an additional load. If you do so, you should make sure that the stresses introduced by that load do not significantly change the problem.
Consider a $C^k$, $k\ge 2$, Lorentzian manifold $(M,g)$ and let $\Box$ be the usual wave operator $\nabla^a\nabla_a$. Given $p\in M$, $s\in\Bbb R,$ and $v\in T_pM$, can we find a neighborhood $U$ of $p$ and $u\in C^k(U)$ such that $\Box u=0$, $u(p)=s$ and $\mathrm{grad}\, u(p)=v$? The tog is a measure of thermal resistance of a unit area, also known as thermal insulance. It is commonly used in the textile industry and often seen quoted on, for example, duvets and carpet underlay.The Shirley Institute in Manchester, England developed the tog as an easy-to-follow alternative to the SI unit of m2K/W. The name comes from the informal word "togs" for clothing which itself was probably derived from the word toga, a Roman garment.The basic unit of insulation coefficient is the RSI, (1 m2K/W). 1 tog = 0.1 RSI. There is also a clo clothing unit equivalent to 0.155 RSI or 1.55 tog... The stone or stone weight (abbreviation: st.) is an English and imperial unit of mass now equal to 14 pounds (6.35029318 kg).England and other Germanic-speaking countries of northern Europe formerly used various standardised "stones" for trade, with their values ranging from about 5 to 40 local pounds (roughly 3 to 15 kg) depending on the location and objects weighed. The United Kingdom's imperial system adopted the wool stone of 14 pounds in 1835. With the advent of metrication, Europe's various "stones" were superseded by or adapted to the kilogram from the mid-19th century on. The stone continues... Can you tell me why this question deserves to be negative?I tried to find faults and I couldn't: I did some research, I did all the calculations I could, and I think it is clear enough . I had deleted it and was going to abandon the site but then I decided to learn what is wrong and see if I ca... I am a bit confused in classical physics's angular momentum. For a orbital motion of a point mass: if we pick a new coordinate (that doesn't move w.r.t. the old coordinate), angular momentum should be still conserved, right? (I calculated a quite absurd result - it is no longer conserved (an additional term that varies with time ) in new coordinnate: $\vec {L'}=\vec{r'} \times \vec{p'}$ $=(\vec{R}+\vec{r}) \times \vec{p}$ $=\vec{R} \times \vec{p} + \vec L$ where the 1st term varies with time. (where R is the shift of coordinate, since R is constant, and p sort of rotating.) would anyone kind enough to shed some light on this for me? From what we discussed, your literary taste seems to be classical/conventional in nature. That book is inherently unconventional in nature; it's not supposed to be read as a novel, it's supposed to be read as an encyclopedia @BalarkaSen Dare I say it, my literary taste continues to change as I have kept on reading :-) One book that I finished reading today, The Sense of An Ending (different from the movie with the same title) is far from anything I would've been able to read, even, two years ago, but I absolutely loved it. I've just started watching the Fall - it seems good so far (after 1 episode)... I'm with @JohnRennie on the Sherlock Holmes books and would add that the most recent TV episodes were appalling. I've been told to read Agatha Christy but haven't got round to it yet ?Is it possible to make a time machine ever? Please give an easy answer,a simple one A simple answer, but a possibly wrong one, is to say that a time machine is not possible. Currently, we don't have either the technology to build one, nor a definite, proven (or generally accepted) idea of how we could build one. 
— Countto1047 secs ago @vzn if it's a romantic novel, which it looks like, it's probably not for me - I'm getting to be more and more fussy about books and have a ridiculously long list to read as it is. I'm going to counter that one by suggesting Ann Leckie's Ancillary Justice series Although if you like epic fantasy, Malazan book of the Fallen is fantastic @Mithrandir24601 lol it has some love story but its written by a guy so cant be a romantic novel... besides what decent stories dont involve love interests anyway :P ... was just reading his blog, they are gonna do a movie of one of his books with kate winslet, cant beat that right? :P variety.com/2016/film/news/… @vzn "he falls in love with Daley Cross, an angelic voice in need of a song." I think that counts :P It's not that I don't like it, it's just that authors very rarely do anywhere near a decent job of it. If it's a major part of the plot, it's often either eyeroll worthy and cringy or boring and predictable with OK writing. A notable exception is Stephen Erikson @vzn depends exactly what you mean by 'love story component', but often yeah... It's not always so bad in sci-fi and fantasy where it's not in the focus so much and just evolves in a reasonable, if predictable way with the storyline, although it depends on what you read (e.g. Brent Weeks, Brandon Sanderson). Of course Patrick Rothfuss completely inverts this trope :) and Lev Grossman is a study on how to do character development and totally destroys typical romance plots @Slereah The idea is to pick some spacelike hypersurface $\Sigma$ containing $p$. Now specifying $u(p)$ is trivial because the wave equation is invariant under constant perturbations. So that's whatever. But I can specify $\nabla u(p)|\Sigma$ by specifying $u(\cdot, 0)$ and differentiate along the surface. For the Cauchy theorems I can also specify $u_t(\cdot,0)$. Now take the neigborhood to be $\approx (-\epsilon,\epsilon)\times\Sigma$ and then split the metric like $-dt^2+h$ Do forwards and backwards Cauchy solutions, then check that the derivatives match on the interface $\{0\}\times\Sigma$ Why is it that you can only cool down a substance so far before the energy goes into changing it's state? I assume it has something to do with the distance between molecules meaning that intermolecular interactions have less energy in them than making the distance between them even smaller, but why does it create these bonds instead of making the distance smaller / just reducing the temperature more? Thanks @CooperCape but this leads me another question I forgot ages ago If you have an electron cloud, is the electric field from that electron just some sort of averaged field from some centre of amplitude or is it a superposition of fields each coming from some point in the cloud?
I'm given the series $\sum_{n = 1}^{+\infty} \frac{(-1)^{n}}{(n+1)!}\left(1! + 2! + \cdots + n!\right)$ and I have to determine whether it converges. Testing for absolute convergence, we have $a_n = \frac{1!}{(n+1)!} + \frac{2!}{(n+1)!} + \cdots + \frac{(n-1)!}{(n+1)!} + \frac{n!}{(n+1)!}$, and since the last term is $\frac{n!}{(n+1)!} = \frac{1}{n+1}$, the series of absolute values diverges by comparison with the harmonic series. Hence the original series can at most be conditionally convergent, which I will try to prove from the Leibniz criterion. Now I have to show that the term $a_n$ is monotonically decreasing and that $\lim a_n = 0$. Treating $a_n$ as $\frac{a_n}{b_n} = \frac{1! + 2! + \cdots + n!}{(n+1)!}$, I can use the Stolz-Cesàro theorem ($\lim \frac{a_n}{b_n} = \lim\frac{a_{n+1} - a_n}{b_{n+1} - b_n}$), since $b_n$ is monotonically increasing and $\lim b_n = +\infty$. Then $$\lim \frac{a_n}{b_n} = \lim\frac{a_{n+1} - a_n}{b_{n+1} - b_n} = \lim\frac{(n+1)!}{(n+2)! - (n+1)!} = \lim \frac{1}{n+2}\frac{1}{1 - \frac{1}{n+2}} = 0.$$ But how to prove monotonicity? I've tried $\frac{a_{n+1}}{a_n}$ but it didn't get me anywhere. What are some ways to show monotonicity of sequences like $a_n$?
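Not an answer to the monotonicity question, just a quick numerical sanity check (a sketch, not a proof) that $a_n = \frac{1! + 2! + \cdots + n!}{(n+1)!}$ is non-increasing and tends to $0$. Note that $a_1 = a_2 = \tfrac12$, so strict decrease only starts at $n = 2$, which is still enough for the Leibniz criterion.

```python
# Numerical check (not a proof) that a_n = (1! + ... + n!)/(n+1)! is non-increasing.
from fractions import Fraction
from math import factorial

def a(n):
    return Fraction(sum(factorial(k) for k in range(1, n + 1)), factorial(n + 1))

vals = [a(n) for n in range(1, 13)]
print([float(v) for v in vals])            # 0.5, 0.5, 0.375, 0.275, ...
print("non-increasing:", all(vals[i + 1] <= vals[i] for i in range(len(vals) - 1)))
```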
Magnitudes Direction Cosines And Direction Ratios Of Vectors

MAGNITUDE, DIRECTION COSINES AND DIRECTION RATIOS

Consider a vector \[\vec r = x\hat i + y\hat j + z\hat k\] as shown in the figure below: The magnitude of \(\vec r\) is simply the length of the diagonal of the cuboid whose sides are x, y and z. Thus \[\left| {\vec r} \right| = \sqrt {{x^2} + {y^2} + {z^2}}\quad\quad\quad ...{\text{ }}\left( 1 \right)\] Suppose \(\vec r\) makes angles \(\alpha ,\beta \,\,{\text{and}}\,\gamma \) with the X, Y and Z axes, as shown: Then the quantities \[\begin{align}&{l = {\text{ }}\cos\,\alpha } \\ &{m = {\text{ }}\cos\,\beta } \\ &{n = {\text{ }}\cos\,\gamma } \end{align}\] are called the direction cosines of \(\vec r\) (abbreviated as DCs). The DCs uniquely determine the direction of the vector. Note that since \[\vec r = x\hat i + y\hat j + z\hat k,\] we have \[\begin{align}&\qquad\quad x = \left| {\vec r} \right|\cos \alpha = l\left| {\vec r} \right| \hfill \\& \qquad\quad y = \left| {\vec r} \right|\cos \beta = m\left| {\vec r} \right| \hfill \\&\qquad\quad z = \left| {\vec r} \right|\cos \gamma = n\left| {\vec r} \right| \hfill \\&\Rightarrow~ \quad {x^2} + {y^2} + {z^2} = \left( {{l^2} + {m^2} + {n^2}} \right){\left| {\vec r} \right|^2} \hfill \\ \end{align} \] From (1), this gives \[\boxed{{l^2} + {m^2} + {n^2} = 1}\] We can also infer from this discussion that the unit vector \(\hat r\) along \(\vec r\) can be written as \[\begin{align}&\hat r = \frac{{\vec r}}{{\left| {\vec r} \right|}} = \frac{{x\hat i + y\hat j + z\hat k}}{{\left| {\vec r} \right|}} \hfill \\&\;\;= l\hat i + m\hat j + n\hat k \hfill \\ \end{align} \] Direction ratios (DRs) of a vector are simply three numbers, say a, b and c, which are proportional to the DCs, i.e. \[\frac{l}{a} = \frac{m}{b} = \frac{n}{c}\] It follows that DRs are not unique (DCs obviously are). From a set of DRs {a, b, c}, the DCs can easily be deduced: \[\frac{l}{a} = \frac{m}{b} = \frac{n}{c} = \frac{{\sqrt {{l^2} + {m^2} + {n^2}} }}{{\sqrt {{a^2} + {b^2} + {c^2}} }} = \frac{1}{{\sqrt {{a^2} + {b^2} + {c^2}} }}\] \[ \Rightarrow \quad l = \frac{a}{{\sqrt {{a^2} + {b^2} + {c^2}} }},\quad m = \frac{b}{{\sqrt {{a^2} + {b^2} + {c^2}} }},\quad n = \frac{c}{{\sqrt {{a^2} + {b^2} + {c^2}} }}\] Before we go on to solving examples involving the concepts we've seen till now, you are urged to once again go over the entire earlier discussion we've had, so that the "big picture" is clear in your mind.
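As a quick illustration of the formulas above, the following short sketch (the particular direction ratios are an arbitrary choice) converts a set of DRs into DCs and verifies that \(l^2 + m^2 + n^2 = 1\).

```python
# Direction cosines from direction ratios, plus a check of l^2 + m^2 + n^2 = 1.
import math

def direction_cosines(a, b, c):
    norm = math.sqrt(a * a + b * b + c * c)
    return a / norm, b / norm, c / norm

l, m, n = direction_cosines(2.0, -1.0, 2.0)     # DRs {2, -1, 2} (assumed example)
print(f"l = {l:.4f}, m = {m:.4f}, n = {n:.4f}")
print(f"l^2 + m^2 + n^2 = {l*l + m*m + n*n:.4f}")
angles = [math.degrees(math.acos(v)) for v in (l, m, n)]
print("angles with the X, Y, Z axes (deg):", [round(t, 2) for t in angles])
```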
What is Uniform Distribution

A uniform distribution is a continuous probability distribution concerned with events that are equally likely to occur. It is defined by two parameters, x and y, where x = minimum value and y = maximum value. It is generally denoted by U(x,y).

In other words, if the probability density function of a continuous random variable X is f(a) = 1/(y−x) for x \(\leq\) a \(\leq\) y, where x and y are constants, then X follows a uniform distribution, written X \(\sim\) U(x,y).

(Note: Check whether the data is inclusive or exclusive before working out problems with the uniform distribution.)

Uniform Distribution Examples

Example: The data below are 55 yawn times, in seconds, of a 9-week-old baby girl.

10.4 19.6 18.8 13.9 17.8 16.8 21.6 17.9 12.5 11.1 4.9
12.8 14.0 22.8 20.8 15.9 16.3 13.4 17.1 14.5 19.0 22.8
1.3 0.7 8.9 11.9 10.9 7.3 5.9 3.7 17.9 19.2 9.8
5.8 6.9 2.6 5.8 21.7 11.8 3.4 2.1 4.5 6.3 10.7
8.9 9.7 9.1 7.7 10.1 3.5 6.9 7.8 11.6 13.8 18.6

The sample mean = 11.49. The sample standard deviation = 6.23.

The yawn times are assumed to follow a uniform distribution between 0 and 23 seconds (inclusive), so any yawning time from 0 to 23 seconds is equally likely. Histogram type: empirical distribution (it matches the theoretical uniform distribution).

Let A be the length, in seconds, of the baby's yawn. The uniform distribution notation is A \(\sim\) U(x,y), where x = the lowest value of A and y = the highest value of A, with probability density function f(a) = 1/(y−x) for x \(\leq\) a \(\leq\) y.

In this example: A \(\sim\) U(0,23) and f(a) = 1/(23−0) for 0 \(\leq\) a \(\leq\) 23.

Theoretical mean formula: \(\mu\) = (x+y)/2

Standard deviation formula: \(\sigma\) = \(\sqrt{\frac{(y-x)^{2}}{12}}\)

In this example, the theoretical mean is \(\mu\) = (0+23)/2 = 11.50 and the standard deviation is \(\sigma\) = \(\sqrt{\frac{(23-0)^{2}}{12}}\) = 6.64 seconds.
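The worked example can be checked numerically. The following short sketch (the simulated sample is random, it is not the yawn data above) compares the theoretical mean and standard deviation of U(0, 23) with a Monte Carlo estimate.

```python
# Theoretical vs. simulated mean and standard deviation of a U(0, 23) distribution.
import numpy as np

x, y = 0.0, 23.0
mean_theory = (x + y) / 2
std_theory = np.sqrt((y - x) ** 2 / 12)

rng = np.random.default_rng(0)
sample = rng.uniform(x, y, size=100_000)

print(f"theoretical mean = {mean_theory:.2f}, std = {std_theory:.2f}")
print(f"sample mean      = {sample.mean():.2f}, std = {sample.std():.2f}")
```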
I think all the other answers do a better job at exactly replicating the original image than what I am going to share, but my main intention here is to provide some exposition and show the utility in a particular coordinate transformation that naturally results in graphics having similar properties to the original image. I will refer to this transformation as a log-polar transform (also referred to as log-polar coordinates) for reasons which will become clear after its definition is given below. Interestingly enough, what makes this transformation "natural" and yield psychedelic characteristics is its relationship with the anatomical properties of the human eye and its neurological basis in explaining the various form constants perceived during visual hallucinations. To the best of my knowledge, the earliest account of such a mathematical modelling in the literature seems to be the 1979 paper, by J. D. Cowan and G. B, Ermentrout, “A Mathematical theory of Visual Hallucinations”. For some motivation, consider some image in the complex plane with coordinates given in polar form as: $z=re^{i\theta}$. Taking the complex natural logarithm of $z$ gives: $\ln(z)=\ln(r)+i\theta$, which is now expressed in standard form. Here, the real part is the logarithm of the radial component of $z$ and the imaginary part is just the angular component of $z$. The log-polar transform, in the context of the complex plane, is just the mapping which results from taking the complex logarithm of each of the points in the plane. Instead, in the context of the Cartesian plane, the log-polar transform can be though of as the mapping which takes points $(x,y)=(r\cos(\theta),r\sin(\theta))$ to the points $(x',y')=(\ln(r),\theta))$, or more explicitly as: $(x',y')=(\ln(\sqrt{x^2+y^2}),\mathrm{atan2}(y,x))$. Some properties of this conformal mapping: vertical lines turn into circles (constant radius) horizontal lines turn into radial rays (constant angle) lines at other angles spiral out from the origin As an illustration, consider the following periodic density plot: img = ImageCrop@DensityPlot[ Sin[2 x - 20 Log[2 (Sin[y]^2 + 1), 2]], {x, 0, 16 Pi}, {y, 0, 32 Pi}, PlotPoints -> 250, ColorFunction -> "SunsetColors", Frame -> False, ImageSize -> 600] To apply the log-polar transform to this image, first define the map: LogPolar[x_, y_] := {Log[Sqrt[x^2 + y^2]], ArcTan[x, y]} Then use Mathematica's ImageTransformation command on the original image: ImageTransformation[img, LogPolar[#[[1]], #[[2]]] &, DataRange -> {{-Pi, Pi}, {-Pi, Pi}}] Note: in order for the transformed image to appear seamless at the angle corresponding to $\pi$ radians, the top and bottom edges of the original image should appear seamless if joined together. We can exploit the translational symmetry of the original plot to create a zooming animation after the log-polar transform has been applied. Instead of having to recompute the plot for every frame of the animation, lets use ImageTake to crop a portion of the original image and then shift this crop vertically by an amount that corresponds to the periodicity of the plot: d = ImageDimensions[img][[1]] Export["LPTzoom.gif", Table[ ImageResize[ ImageTransformation[ ImageTake[ img, {1, 14*d/16}, {1 + (2 - 2 t)*d/32, (32 - 2 t)*d/32}], LogPolar[#[[1]], #[[2]]] &, DataRange -> {{-Pi, Pi}, {-Pi, Pi}}], 500], {t, 0, 6/7, 1/7}] ] Similarly, translating the original image horizontally would produce a spinning animations instead of a zooming one. 
For good measure, combining both of these two directions of motion results in a spiraling animation: ProTip: try looking at the still image after staring at the animation for a little motion aftereffect. The interested viewer is invited to explore log-polar transforms of various images in excess at this link.
Material objects consist of charged particles. An electromagnetic wave incident on the object exerts forces on the charged particles, in accordance with the Lorentz force. These forces do work on the particles of the object, increasing its energy, as discussed in the previous section. The energy that sunlight carries is a familiar part of every warm sunny day. A much less familiar feature of electromagnetic radiation is the extremely weak pressure that electromagnetic radiation produces by exerting a force in the direction of the wave. This force occurs because electromagnetic waves contain and transport momentum. To understand the direction of the force for a very specific case, consider a plane electromagnetic wave incident on a metal in which electron motion, as part of a current, is damped by the resistance of the metal, so that the average electron motion is in phase with the force causing it. This is comparable to an object moving against friction and stopping as soon as the force pushing it stops (Figure \(\PageIndex{1}\)). When the electric field is in the direction of the positive y-axis, electrons move in the negative y-direction, with the magnetic field in the direction of the positive z-axis. By applying the right-hand rule, and accounting for the negative charge of the electron, we can see that the force on the electron from the magnetic field is in the direction of the positive x-axis, which is the direction of wave propagation. When the \(\vec{E}\) field reverses, the \(\vec{B}\) field does too, and the force is again in the same direction. Maxwell’s equations together with the Lorentz force equation imply the existence of radiation pressure much more generally than this specific example, however. Figure \(\PageIndex{1}\): Electric and magnetic fields of an electromagnetic wave can combine to produce a force in the direction of propagation, as illustrated for the special case of electrons whose motion is highly damped by the resistance of a metal. Maxwell predicted that an electromagnetic wave carries momentum. An object absorbing an electromagnetic wave would experience a force in the direction of propagation of the wave. The force corresponds to radiation pressure exerted on the object by the wave. The force would be twice as great if the radiation were reflected rather than absorbed. Maxwell’s prediction was confirmed in 1903 by Nichols and Hull by precisely measuring radiation pressures with a torsion balance. The schematic arrangement is shown in Figure \(\PageIndex{2}\). The mirrors suspended from a fiber were housed inside a glass container. Nichols and Hull were able to obtain a small measurable deflection of the mirrors from shining light on one of them. From the measured deflection, they could calculate the unbalanced force on the mirror, and obtained agreement with the predicted value of the force. Figure \(\PageIndex{2}\): Simplified diagram of the central part of the apparatus Nichols and Hull used to precisely measure radiation pressure and confirm Maxwell’s prediction. The radiation pressure \(p_{rad}\) applied by an electromagnetic wave on a perfectly absorbing surface turns out to be equal to the energy density of the wave: \[ \underbrace{p_{rad} = u \space} _{ \text{Perfect absorber}}. 
\label{eq5}\] If the material is perfectly reflecting, such as a metal surface, and if the incidence is along the normal to the surface, then the pressure exerted is twice as much because the momentum direction reverses upon reflection: \[ \underbrace{ p_{rad} = 2u }_{ \text{Perfect reflector}}. \label{eq10}\] We can confirm that the units are right: \[[u] = \dfrac{J}{m^3} = \dfrac{N \cdot m}{m^3} = \dfrac{N}{m^2} = units \, of \, pressure.\] Equations \ref{eq5} and \ref{eq10} give the instantaneous pressure, but because the energy density oscillates rapidly, we are usually interested in the time-averaged radiation pressure, which can be written in terms of intensity: \[ p = \langle p_{rad}\rangle = \begin{cases} I/c & \text{Perfect absorber} \\ 2I/c & \text{Perfect reflector} \end{cases} \label{eq20}\] Radiation pressure plays a role in explaining many observed astronomical phenomena, including the appearance of comets. Comets are basically chunks of icy material in which frozen gases and particles of rock and dust are embedded. When a comet approaches the Sun, it warms up and its surface begins to evaporate. The coma of the comet is the hazy area around it from the gases and dust. Some of the gases and dust form tails when they leave the comet. Notice in Figure \(\PageIndex{3}\) that a comet has two tails. The ion tail (or gas tail) is composed mainly of ionized gases. These ions interact electromagnetically with the solar wind, which is a continuous stream of charged particles emitted by the Sun. The force of the solar wind on the ionized gases is strong enough that the ion tail almost always points directly away from the Sun. The second tail is composed of dust particles. Because the dust tail is electrically neutral, it does not interact with the solar wind. However, this tail is affected by the radiation pressure produced by the light from the Sun. Although quite small, this pressure is strong enough to cause the dust tail to be displaced from the path of the comet. Figure \(\PageIndex{3}\): Evaporation of material being warmed by the Sun forms two tails, as shown in this photo of Comet Ison. (credit: modification of work by E. Slawik—ESO) Example \(\PageIndex{1}\): Halley’s Comet On February 9, 1986, Comet Halley was at its closest point to the Sun, about \(9.0 \times 10^{10} m\) from the center of the Sun. The average power output of the Sun is \(3.8 \times 10^{26} \, W\). Calculate the radiation pressure on the comet at this point in its orbit. Assume that the comet reflects all the incident light. Suppose that a 10-kg chunk of material of cross-sectional area \(4.0 \times 10^{-2} m^2\) breaks loose from the comet. Calculate the force on this chunk due to the solar radiation. Compare this force with the gravitational force of the Sun. Strategy Calculate the intensity of solar radiation at the given distance from the Sun and use that to calculate the radiation pressure. From the pressure and area, calculate the force. Solution a. The intensity of the solar radiation is the average solar power per unit area. Hence, at \(9.0 \times 10^{10} m\) from the center of the Sun, we have \[\begin{align} I &= S_{avg} \nonumber \\[5pt] &= \dfrac{3.8 \times 10^{26} \, W}{4\pi (9.0 \times 10^{10} \, m)^2} \nonumber \\[5pt] &= 3.7 \times 10^3 \, W/m^2. 
\nonumber \end{align} \nonumber\] Assuming the comet reflects all the incident radiation, we obtain from Equation \ref{eq20} \[\begin{align}p &= \dfrac{2I}{c} \nonumber \\[5pt] &= \dfrac{2(3.7 \times 10^3 \, W/m^2)}{3.00 \times 10^8 \, m/s} \nonumber \\[5pt] &= 2.5 \times 10^{-5} \, N/m^2. \nonumber \end{align} \nonumber\] b. The force on the chunk due to the radiation is \[\begin{align}F &= pA \nonumber \\[5pt] &= (2.5 \times 10^{-5} N/m^2)(4.0 \times 10^{-2} m^2) \nonumber \\[5pt] &= 1.0 \times 10^{-6} \, N, \nonumber \end{align} \nonumber\] whereas the gravitational force of the Sun is \[\begin{align} F_g &= \dfrac{GMm}{r^2} \nonumber \\[5pt] &= \dfrac{(6.67 \times 10^{-11} \, N \cdot m^2 /kg^2)(2.0 \times 10^{30} kg)(10 \, kg)}{(9.0 \times 10^{10} m)^2} \nonumber \\[5pt] &= 0.16 \, N. \nonumber \end{align} \nonumber\] Significance The gravitational force of the Sun on the chunk is therefore much greater than the force of the radiation.

After Maxwell showed that light carried momentum as well as energy, a novel idea eventually emerged, initially only as science fiction. Perhaps a spacecraft with a large reflecting light sail could use radiation pressure for propulsion. Such a vehicle would not have to carry fuel. It would experience a constant but small force from solar radiation, instead of the short bursts from rocket propulsion. It would accelerate slowly, but by being accelerated continuously, it would eventually reach great speeds. A spacecraft with small total mass and a sail with a large area would be necessary to obtain a usable acceleration.

When the space program began in the 1960s, the idea started to receive serious attention from NASA. The most recent development in light-propelled spacecraft has come from a citizen-funded group, the Planetary Society. It is currently testing the use of light sails to propel a small vehicle built from CubeSats, tiny satellites that NASA places in orbit for various research projects during space launches intended mainly for other purposes. The LightSail spacecraft shown below (Figure \(\PageIndex{4}\)) consists of three CubeSats bundled together. It has a total mass of only about 5 kg and is about the size of a loaf of bread. Its sails are made of very thin Mylar and open after launch to have a surface area of \(32 \, m^2\). Figure \(\PageIndex{4}\): Two small CubeSat satellites deployed from the International Space Station in May, 2016. The solar sails open out when the CubeSats are far enough away from the Station.

Example \(\PageIndex{2}\): LightSail Acceleration

The first LightSail spacecraft was launched in 2015 to test the sail deployment system. It was placed in low-earth orbit in 2015 by hitching a ride on an Atlas 5 rocket launched for an unrelated mission. The test was successful, but the low-earth orbit allowed too much drag on the spacecraft to accelerate it by sunlight. Eventually, it burned in the atmosphere, as expected. The next Planetary Society LightSail solar sailing spacecraft is scheduled for 2018. The LightSail is based on NASA's NanoSail-D project. Image used with permission (Public domain; NASA).

The intensity of energy from sunlight at a distance of 1 AU from the Sun is \(1370 \, W/m^2\). The LightSail spacecraft has sails with total area of \(32 \, m^2\) and a total mass of 5.0 kg. Calculate the maximum acceleration LightSail spacecraft could achieve from radiation pressure when it is about 1 AU from the Sun.
Strategy The maximum acceleration can be expected when the sail is opened directly facing the Sun. Use the light intensity to calculate the radiation pressure and from it, the force on the sails. Then use Newton's second law to calculate the acceleration.

Solution The force due to the radiation pressure is \[F = pA = 2uA = \dfrac{2I}{c}A = \dfrac{2(1370 \, W/m^2)(32 \, m^2)}{(3.00 \times 10^8 m/s)} = 2.92 \times 10^{-4} N.\] The resulting acceleration is \[a = \dfrac{F}{m} = \dfrac{2.92 \times 10^{-4} N}{5.0 \, kg} = 5.8 \times 10^{-5} m/s^2.\]

Significance If this small acceleration continued for a year, the craft would attain a speed of 1829 m/s, or 6600 km/h.

Exercise \(\PageIndex{2}\)

How would the speed and acceleration of a radiation-propelled spacecraft be affected as it moved farther from the Sun on an interplanetary space flight?

Solution Its acceleration would decrease because the radiation force is proportional to the intensity of light from the Sun, which decreases with distance. Its speed, however, would not change except for the effects of gravity from the Sun and planets.

Contributors

Paul Peter Urone (Professor Emeritus at California State University, Sacramento) and Roger Hinrichs (State University of New York, College at Oswego) with Contributing Authors: Kim Dirks (University of Auckland) and Manjula Sharma (University of Sydney). This work is licensed by OpenStax University Physics under a Creative Commons Attribution License (by 4.0).
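The arithmetic in the two worked examples above (the comet chunk and the LightSail) can be reproduced with a few lines. This sketch simply re-evaluates the quoted formulas with the constants given in the text; nothing beyond those numbers is assumed.

```python
# Re-evaluating the radiation-pressure examples above.
import math

c = 3.00e8                      # speed of light, m/s

# Example 1: perfectly reflecting chunk near Comet Halley
P_sun = 3.8e26                  # average solar power output, W
r = 9.0e10                      # distance from the Sun's center, m
I = P_sun / (4 * math.pi * r**2)
p_rad = 2 * I / c               # perfect reflector
F_rad = p_rad * 4.0e-2          # cross-sectional area 0.040 m^2
print(f"I = {I:.2e} W/m^2, p_rad = {p_rad:.2e} N/m^2, F_rad = {F_rad:.2e} N")

# Example 2: LightSail at 1 AU
I_au, A, m = 1370.0, 32.0, 5.0
F = 2 * I_au * A / c
print(f"LightSail force = {F:.2e} N, acceleration = {F / m:.2e} m/s^2")
```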
I am very confused with the meaning of the Fermi sphere. I understand that it is essentially the same idea as the energy levels and the Fermi energy, except that the Fermi sphere lives in k-space rather than real space, but I don't know why this is important. What I have understood is that the relation of the Fermi sphere to the real-space energy levels is similar to that of the reciprocal lattice to the crystal lattice in real space, but could anyone explain the significance of the Fermi sphere in a very clear way?

Consider a system of electrons confined in a cube of side length $L=V^{1/3}$. Let's assume that this "electron gas" is dilute enough that we can neglect the electron-electron interactions. This means that we have to solve the Schroedinger's equation for a free particle: $$-\frac{\hbar^2}{2m} \nabla^2 \psi(\mathbf r) = E\ \psi(\mathbf r)\tag{1}\label{1}$$ Moreover, since we are interested in the bulk properties of the material, we will assume periodic boundary conditions (PBC): $$\psi(x+L,y,z) = \psi(x,y,z)\\ \psi(x,y+L,z) = \psi(x,y,z)\\ \psi(x,y,z+L) = \psi(x,y,z) \tag{2}\label{2}$$ It is well known that the solution of Eq. \ref{1} is a plane wave: $$\psi_{\mathbf k}(\mathbf r) = \frac 1 {\sqrt{V}} e^{i \mathbf k \cdot \mathbf r} \tag{3}\label{3}$$ with energy $$E(\mathbf k) = \frac{\hbar^2k^2}{2m} \tag{4}\label{4}$$ If you apply the conditions \ref{2} to the solution \ref{3}, you will get $$e^{i k_x L}=e^{i k_y L}=e^{i k_z L}=1 \tag{5}\label{5}$$ and therefore $$k_\alpha = \frac{2 \pi n_\alpha} L \ \ \ (\alpha =x,y,z) \tag{6}\label{6}$$ where the $n_\alpha$ are integers. Therefore, the allowed wavevectors form a discrete "grid" in reciprocal space (figure below - from D.J. Griffiths, Introduction to Quantum Mechanics). Notice how reciprocal space comes out naturally because of the relation \ref{4} between the energy of an electron and its wavevector.

At $T=0$, the electrons will occupy the lowest available energy levels, starting with $\mathbf k = \mathbf 0$. Since electrons are fermions with spin $1/2$, to satisfy Pauli's exclusion principle we can accommodate only two of them in each allowed $\mathbf k$-state. Therefore, starting from $\mathbf k=\mathbf 0$, we can imagine placing 2 electrons at every point in reciprocal space allowed by Eq. \ref{6}. If the number of electrons is very large, it is easy to see that the result of this filling looks very much like a sphere: this is what we call the Fermi sphere. The radius of this sphere, $k_F$, is related to energy by equation \ref{4}. The energy of the electrons on the surface of the Fermi sphere is the Fermi energy: $$E_F= \frac{\hbar^2k_F^2}{2m} \tag{7}\label{7}$$

References For a detailed discussion of Fermi surfaces in general, see for example Ashcroft-Mermin, Solid State Physics.
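The filling procedure described in the answer can also be illustrated numerically. Below is a minimal sketch, with the electron number and box size chosen arbitrarily for illustration, that sorts the allowed wavevectors $\mathbf k = 2\pi \mathbf n/L$ by length, fills them two electrons per point, and compares the resulting radius with the standard free-electron result $k_F = (3\pi^2 n_e)^{1/3}$, where $n_e = N/V$.

```python
# Filling the grid of allowed wavevectors and measuring the radius of the Fermi sphere.
import numpy as np

L = 1.0                      # box side (arbitrary units)
N_e = 20000                  # number of electrons (assumed)
n_max = 25                   # half-width of the search window of integer triples

n = np.arange(-n_max, n_max + 1)
nx, ny, nz = np.meshgrid(n, n, n, indexing="ij")
k = 2 * np.pi / L * np.stack([nx, ny, nz], axis=-1)
k_norm = np.sort(np.linalg.norm(k.reshape(-1, 3), axis=1))

# fill states in order of |k|, two electrons per k-point
k_F_numeric = k_norm[int(np.ceil(N_e / 2)) - 1]
k_F_analytic = (3 * np.pi**2 * N_e / L**3) ** (1 / 3)
print(f"numeric  k_F = {k_F_numeric:.2f}")
print(f"analytic k_F = {k_F_analytic:.2f}")
```

Plugging the resulting $k_F$ into Eq. (7) then gives the Fermi energy of the gas.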
I am self-studying models for financial economics and encountered the following problem: I don't see how the author can conclude that $\gamma = -0.62$. Let's rearrange the second to last equation: $$\gamma - r = -4(0.19 - r)$$ as $$r = \frac{\gamma + 0.76}{5}.$$ If $\gamma = 0.62$, then $r = 0.276.$ If $\gamma = -0.62$, then $r = 0.028$, as the author states. So I don't see how the author can conclude $\gamma = -0.62$ when letting $\gamma = 0.62$ does not contradict that $r \geq 0$.
Academic is designed to give technical content creators a seamless experience. You can focus on the content and Academic handles the rest. Highlight your code snippets, take notes on math classes, and draw diagrams from textual representation. On this page, you'll find some examples of the types of technical content that can be rendered with Academic.

Examples

Code

Academic supports a Markdown extension for highlighting code syntax. You can enable this feature by toggling the highlight option in your config/_default/params.toml file.

```python
import pandas as pd
data = pd.read_csv("data.csv")
data.head()
```

renders as

import pandas as pd
data = pd.read_csv("data.csv")
data.head()

Math

Academic supports a Markdown extension for $\LaTeX$ math. You can enable this feature by toggling the math option in your config/_default/params.toml file and adding markup: mmark to your page front matter. To render inline or block math, wrap your LaTeX math with $$...$$.

Example math block:

$$\gamma_{n} = \frac{ \left | \left (\mathbf x_{n} - \mathbf x_{n-1} \right )^T \left [\nabla F (\mathbf x_{n}) - \nabla F (\mathbf x_{n-1}) \right ] \right |}{\left \|\nabla F(\mathbf{x}_{n}) - \nabla F(\mathbf{x}_{n-1}) \right \|^2}$$

renders as

\[\gamma_{n} = \frac{ \left | \left (\mathbf x_{n} - \mathbf x_{n-1} \right )^T \left [\nabla F (\mathbf x_{n}) - \nabla F (\mathbf x_{n-1}) \right ] \right |}{\left \|\nabla F(\mathbf{x}_{n}) - \nabla F(\mathbf{x}_{n-1}) \right \|^2}\]

Example inline math $$\left \|\nabla F(\mathbf{x}_{n}) - \nabla F(\mathbf{x}_{n-1}) \right \|^2$$ renders as \(\left \|\nabla F(\mathbf{x}_{n}) - \nabla F(\mathbf{x}_{n-1}) \right \|^2\).

Example multi-line math using the \\ math linebreak:

$$f(k;p_0^*) = \begin{cases} p_0^* & \text{if }k=1, \\ 1-p_0^* & \text{if }k=0.\end{cases}$$

renders as

\[f(k;p_0^*) = \begin{cases} p_0^* & \text{if }k=1, \\ 1-p_0^* & \text{if }k=0.\end{cases}\]

Diagrams

Academic supports a Markdown extension for diagrams. You can enable this feature by toggling the diagram option in your config/_default/params.toml file or by adding diagram: true to your page front matter.

An example flowchart:

```mermaid
graph TD;
  A-->B;
  A-->C;
  B-->D;
  C-->D;
```

renders as

graph TD;
  A-->B;
  A-->C;
  B-->D;
  C-->D;

An example sequence diagram:

```mermaid
sequenceDiagram
  participant Alice
  participant Bob
  Alice->John: Hello John, how are you?
  loop Healthcheck
    John->John: Fight against hypochondria
  end
  Note right of John: Rational thoughts <br/>prevail...
  John-->Alice: Great!
  John->Bob: How about you?
  Bob-->John: Jolly good!
```

renders as

sequenceDiagram
  participant Alice
  participant Bob
  Alice->John: Hello John, how are you?
  loop Healthcheck
    John->John: Fight against hypochondria
  end
  Note right of John: Rational thoughts <br/>prevail...
  John-->Alice: Great!
  John->Bob: How about you?
  Bob-->John: Jolly good!
An example Gantt diagram:

```mermaid
gantt
  dateFormat YYYY-MM-DD
  section Section
  A task          :a1, 2014-01-01, 30d
  Another task    :after a1, 20d
  section Another
  Task in sec     :2014-01-12, 12d
  another task    :24d
```

renders as

gantt
  dateFormat YYYY-MM-DD
  section Section
  A task          :a1, 2014-01-01, 30d
  Another task    :after a1, 20d
  section Another
  Task in sec     :2014-01-12, 12d
  another task    :24d

Todo lists

You can even write your todo lists in Academic too:

- [x] Write math example
- [x] Write diagram example
- [ ] Do something else

renders as

Write math example
Write diagram example
Do something else

Tables

Represent your data in tables:

| First Header  | Second Header |
| ------------- | ------------- |
| Content Cell  | Content Cell  |
| Content Cell  | Content Cell  |

renders as

First Header | Second Header
Content Cell | Content Cell
Content Cell | Content Cell

Asides

Academic supports a Markdown extension for asides, also referred to as notices or hints. By prefixing a paragraph with A>, it will render as an aside. You can enable this feature by adding markup: mmark to your page front matter, or alternatively using the Alert shortcode.

A> A Markdown aside is useful for displaying notices, hints, or definitions to your readers.

renders as
That's a great question ! What you are asking about is one of the missing links between classical and quantum gravity. On their own, the Einstein equations are local field equations: $$ G_{\mu\nu} = 8 \pi G T_{\mu\nu} $$ and do not contain any topological information. At the level of the action principle: $$ S_{eh} = \int_\mathcal{M} d^4 x \sqrt{-g} \mathbf{R} $$ the term we generally include is the Ricci scalar $ \mathbf{R} = Tr[ R_{\mu\nu} ] $, which depends only on the first and second derivatives of the metric and is, again, a local quantity. So the action does not tell us about topology either, unless you're in two dimensions, where the Euler characteristic is given by the integral of the ricci scalar: $$ \int d^2 x \mathcal{R} = \chi $$ (modulo some numerical factors). So gravity in 2 dimensions is entirely topological. This is in contrast to the 4D case where the Einstein-Hilbert action appears to contain no topological information. This should cover your first question. All is not lost, however. One can add topological degrees of freedom to 4D gravity by the addition of terms corresponding to various topological invariants (Chern-Simons, Nieh-Yan and Pontryagin). For instance, the Chern-Simons contribution to the action looks like: $$ S_{cs} = \int d^4 x {}^\star R \, R $$ where $ R \equiv R_{abcd} $ is the Riemann tensor and $ {}^\star R_{abcd} = 1/2 \epsilon_{ab}{}^{ij} R_{cd\,ij} $ is its dual. Here is a very nice paper by Jackiw and Pi for the details of this construction. There's plenty more to be said about topology and general relativity. Your question only scratches the surface. But there's a goldmine underneath ! I'll let someone else tackle your second question. Short answer is "yes".This post imported from StackExchange Physics at 2014-04-01 16:47 (UCT), posted by SE-user user346
As proved by Euler, the value of any infinite continued fraction is an irrational number. Just as every finite continued fraction is a rational number, every infinite continued fraction represents an irrational number. We consider the class of irrational numbers of the form \(√n\), where \(n\) is any non-square positive integer. These numbers are called the quadratic irrationals and arise as roots of quadratic equations of the form \(ax² + bx + c = 0\), where \(a, b\) and \(c\) are integers.

We use the continued fraction algorithm. Let \(x₁\) be an irrational number of the form \(√n\): i) express this number as an integer part and a fractional part; ii) the integer part is the next partial quotient; iii) calculate the reciprocal of the fractional part.

To determine the infinite continued fraction of \(√2\): \(x₁ = √2\) lies between \(1\) and \(2\) since \(1² < 2 < 2²\), and so int\((√2) = 1\) and frac\((√2) = √2 – 1\). Therefore, \(√2 = 1 + (√2 – 1)\), giving \(a₁ = 1\) and \(x₂ = \frac{1}{√2 – 1}\).

We need to find the integer and fractional parts of \(x₂\): \(\frac{1}{√2 – 1} = \frac{1}{√2 – 1}\times\frac{√2 + 1}{√2 + 1} = \frac{√2 + 1}{1} = √2 + 1\). The above has just multiplied the fraction by one to clear the denominator of surds. Now we rewrite \(√2 + 1 = 1 + (√2 – 1) + 1 = 2 + (√2 – 1)\), giving \(a₂ = 2\) and \(x₃ = \frac{1}{√2 – 1}\).

When we repeat for \(x₃\) we go through the same cycle again, and \(a₃ = 2\) and \(x₄ = \frac{1}{√2 – 1}\). Thus, \(√2 = [1; 2, 2, 2, 2, 2, . . .]\). \(√2\) is said to be a periodic infinite continued fraction with a cycle length of one since \(2\) repeats to infinity. All infinite continued fractions of the form \(√n\) are periodic. We introduce some new notation: \(√2 = [1; ⟨2⟩]\). The angular brackets indicate that the number(s) inside them are repeated to infinity.

We find the convergents of \(\sqrt{2}\) in the same way as last time:
\( \begin{array}{c | r r r r r r r} k&1&2&3&4&5&6&7\\ \hline a_k&1&2&2&2&2&2&2\\ p_k&1&3&7&17&41&99&239\\ q_k&1&2&5&12&29&70&169 \end{array} \)

The convergents give rational approximations to \(√2\):
\( \begin{align} 99/70&\approx1.414\,285\,7\\ 239/169&\approx1.414\,201\,2\\ \sqrt{2}&\approx1.414\,213\,6 \end{align} \)

The convergents are used to find solutions to the quadratic Diophantine equation known as Pell's Equation \(x² \,-\, ny² = ± 1\). When \(n = 2\), the equation is \(x² \,-\,2y² = ± 1\). \(x = 3, y = 2\) is a solution since \(3²\,-\,2 \times2² = 9\,-\,8 = 1\), and \(x = 7, y = 5\) is a solution since \(7²\,-\,2 \times5² = 49\,-\,50 = -1\). All solutions are convergents of \(√n\), but not all convergents are solutions; it depends on the cycle length of the infinite continued fraction.
\( \begin{align} \sqrt{3}=&[1; ⟨1, 2 ⟩]\\ \sqrt{7}= &[2; ⟨1, 1, 1, 4⟩]\\ \sqrt{13} =&\; \,[3; ⟨1, 1, 1, 1, 6⟩] \end{align} \)

© OldTrout 2017
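The convergent table and the Pell's-equation check can be reproduced mechanically. Here is a brief sketch in plain Python, using the standard recurrence \(p_k = a_k p_{k-1} + p_{k-2}\), \(q_k = a_k q_{k-1} + q_{k-2}\) with \(a_1 = 1\) and \(a_k = 2\) thereafter, that prints the first seven convergents of \(√2\) and evaluates \(p² - 2q²\) for each.

```python
# Convergents p_k/q_k of sqrt(2) from [1; <2>] and the Pell check p^2 - 2q^2.
def convergents_sqrt2(k_max):
    p_prev, q_prev = 1, 0      # conventional "k = 0" seed values
    p, q = 1, 1                # first convergent 1/1 (a_1 = 1)
    out = [(p, q)]
    for _ in range(k_max - 1): # a_k = 2 for every later partial quotient
        p, p_prev = 2 * p + p_prev, p
        q, q_prev = 2 * q + q_prev, q
        out.append((p, q))
    return out

for p, q in convergents_sqrt2(7):
    print(f"{p:>4}/{q:<4} = {p / q:.7f}   p^2 - 2q^2 = {p * p - 2 * q * q:+d}")
```

The output reproduces the table above (1/1, 3/2, 7/5, 17/12, 41/29, 99/70, 239/169), with the Pell value alternating between -1 and +1.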
Global existence of weak solution in a chemotaxis-fluid system with nonlinear diffusion and rotational flux

1. School of Mathematics, Southeast University, Nanjing 210096, China
2. Institute for Applied Mathematics, School of Mathematics, Southeast University, Nanjing 211189, China

This paper considers the chemotaxis-fluid system
$\begin{eqnarray*} \left\{\begin{array}{lll}n_{t}+u\cdot\nabla n=\Delta n^m-\nabla\cdot(nS(x,n,c)\cdot\nabla c),&x\in\Omega,\ \ t>0,\\[1mm]c_t+u\cdot\nabla c=\Delta c-c+n,&x\in\Omega,\ \ t>0,\\[1mm]u_t+k(u\cdot\nabla)u=\Delta u+\nabla P+n\nabla\phi,&x\in\Omega,\ \ t>0,\\[1mm]\nabla\cdot u=0,&x\in\Omega,\ \ t>0, \end{array}\right.\end{eqnarray*}$
in a bounded domain $\Omega\subset\mathbb{R}^3$, where $k\in\mathbb{R}$, $\phi\in W^{2,\infty}(\Omega)$, and the tensor-valued sensitivity $S:\overline\Omega\times[0,\infty)^2\rightarrow\mathbb{R}^{3\times 3}$ satisfies $|S(x,n,c)|\leq S_0(n+1)^{-\alpha}\ \ {\rm for\ all}\ x\in\mathbb{R}^3,\ n\geq0,\ c\geq0.$ Global existence of a weak solution is established under the assumptions $m+\alpha>\frac{4}{3}$ and $m>\frac{1}{3}$.

Mathematics Subject Classification: 35K65, 35Q35, 35Q51, 92C17.

Citation: Feng Li, Yuxiang Li. Global existence of weak solution in a chemotaxis-fluid system with nonlinear diffusion and rotational flux. Discrete & Continuous Dynamical Systems - B, 2019, 24 (10): 5409-5436. doi: 10.3934/dcdsb.2019064
MLE vs. MAP

import numpy as np
import scipy as sp
import seaborn as sns
from matplotlib import pyplot as plt
%matplotlib inline

Motivation

While I am happily (and painfully) learning mean field variational inference, I suddenly found that I am not 100% sure about the differences between maximum likelihood estimation (MLE), maximum a posteriori (MAP), expectation maximization (EM), and variational inference (VI). It turns out that they are easy to distinguish after searching here and there! In this study note, I will focus only on MLE vs. MAP because they are very similar. In fact, as we will see later, MLE is a special case of MAP in which a uniform prior is used.

TL;DR

MLE produces a point estimate that maximizes the likelihood function of the unknown parameters given observations (i.e., data).
MAP is a generalization of MLE. It also produces a point estimate, which is the mode of the posterior distribution of the parameters.
EM is an iterative method that tries to find MLE/MAP estimates of parameters when the marginal probability is intractable (e.g., when there are missing data or latent variables).
VI is a Bayesian method that provides a posterior distribution over the parameters instead of point estimates.

Coin toss

Most tutorials on MLE/MAP start with a coin toss because it is a simple yet useful example for explaining this topic. Suppose that we have a coin but we do not know whether it is fair or not. In other words, we have no idea whether the probability of getting head (H) is the same as tail (T). In this case, how can we estimate such a probability?

A natural way to do this is to flip the coin several times and see how many H's and T's we get. Let's go with a random experiment. Before we start, let's define some notation:

$X$: a random variable that represents the coin toss outcome ($1$ for H and $0$ for T)
$\theta$: the probability of getting H

Now, let's assume that we don't know $\theta$ (here we secretly set $\theta=0.7$ to generate data) and use a random number generator to get some samples and see what the data looks like. Let's start by flipping the coin 10 times.

n = 10
theta = 0.7
X_arr = np.random.choice([0, 1], p=[1-theta, theta], size=n)
X_arr

array([1, 1, 0, 1, 0, 1, 1, 1, 1, 1])

We get 8 H's and 2 T's. Intuitively, we will do the following calculation even if we are not statisticians:

$\hat{\theta} = \dfrac{\text{number of heads}}{\text{number of tosses}} = \dfrac{8}{10} = 0.8$

This seems to be a reasonable guess. But why?

MLE

In fact, $\hat{\theta}$ is exactly what we get by using MLE! In the context of a coin toss, we can use the Bernoulli distribution to model $X$:

$X \sim \mathrm{Bernoulli}(\theta)$

MLE states that our best guess (technically, estimate) for $\theta$ is the value that maximizes the likelihood function $L(\theta;x)$ based on the observations we have. That's also why this method is named maximum likelihood estimation.

Concretely, in our example, we can write down the probability mass function of $x$ as:

$p(x;\theta) = \theta^{x}(1-\theta)^{1-x}, \quad x \in \{0, 1\}$

What is the likelihood function then? It is actually just the equation above. However, instead of thinking of $p(x;\theta)$ as a function of $x$, we think of it as a function of $\theta$, given the data:

$L(\theta; x) = \theta^{x}(1-\theta)^{1-x}$

In the case where we have more than one experiment (say $x=\{x_1, \dots, x_n\}$) and assume independence between individual coin tosses, we have the following likelihood:

$L(\theta; x) = \prod_{i=1}^{n} \theta^{x_i}(1-\theta)^{1-x_i}$

Most of the time, we apply a logarithm to $L$ for simplicity of computation:

$\ell(\theta; x) = \log L(\theta; x) = \sum_{i=1}^{n} \left[ x_i \log\theta + (1-x_i)\log(1-\theta) \right]$

Now this has become an optimization problem: given observations $x$, how do we maximize $\ell$?

$\hat{\theta}_{MLE} = \underset{\theta}{\arg\max}\ \ell(\theta; x)$

It is not difficult to show that $\ell$ is a concave function. Recall that a twice-differentiable function $f$ is concave if and only if its second derivative is nonpositive (I use the case of a single Bernoulli experiment, but it is very similar when there are $n$):

$\ell''(\theta) = -\dfrac{x}{\theta^2} - \dfrac{1-x}{(1-\theta)^2} \le 0$
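As a quick numerical sanity check (my own addition, not part of the original note; neg_log_likelihood is just an illustrative name), we can maximize $\ell$ directly with scipy and confirm that the maximizer sits at $n_H/n = 0.8$ for the sample above.

# Numerically maximize the Bernoulli log-likelihood for the observed sample.
import numpy as np
from scipy.optimize import minimize_scalar

def neg_log_likelihood(t, x):
    # -ell(theta) = -sum_i [x_i log(theta) + (1 - x_i) log(1 - theta)]
    return -np.sum(x * np.log(t) + (1 - x) * np.log(1 - t))

x_obs = np.array([1, 1, 0, 1, 0, 1, 1, 1, 1, 1])   # the 8-heads / 2-tails sample above
res = minimize_scalar(neg_log_likelihood, bounds=(1e-6, 1 - 1e-6),
                      args=(x_obs,), method='bounded')
res.x, x_obs.mean()   # res.x should be (very close to) 0.8, the sample mean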
Since $\ell$ is concave, we can simply take its derivative and set it to zero. The resulting $\theta$ is the one that maximizes the likelihood function:

$\ell'(\theta) = \sum_{i=1}^{n}\left[\dfrac{x_i}{\theta} - \dfrac{1-x_i}{1-\theta}\right] = 0 \quad\Longrightarrow\quad \hat{\theta}_{MLE} = \dfrac{1}{n}\sum_{i=1}^{n} x_i$

Notice that since $x_i$ can only take the values 0 or 1, we can write $\sum_i x_i = n_{H}$, the total number of heads from all the experiments. And that is, in fact, what we did previously: divide $n_{H}$ by $n$!

MAP

MLE works pretty well in the previous example. However, it is not quite how humans infer things. Typically, our belief about things varies over time: we start with some prior knowledge to draw an initial guess, and with more evidence we modify our belief and obtain the posterior probability of the events of interest. This is exactly Bayesian statistics. (Note that MAP is not completely Bayesian because it only gives a point estimate.)

Back to the coin toss example. Suppose our data looks like this:

X_arr = np.ones(n)
X_arr

array([1., 1., 1., 1., 1., 1., 1., 1., 1., 1.])

Our MLE will simply be $\dfrac{n_H}{n} = \dfrac{10}{10} = 1$. However, this probably does not make sense. Intuitively, we would guess that, for example, $\theta$ should be a value close to 0.5 (although this is not necessarily true). Therefore, we can introduce a prior distribution for our unknown parameter, $p(\theta)$. By doing this, we are dragging our estimate towards our prior belief. However, the effect of the prior fades away as the amount of data increases.

According to Bayes' theorem, we have:

$p(\theta \mid x) = \dfrac{p(x \mid \theta)\, p(\theta)}{p(x)}$

The denominator $p(x)$ is fixed given $x$, so we can simplify:

$p(\theta \mid x) \propto p(x \mid \theta)\, p(\theta)$

Now let's rewrite our optimization problem as:

$\hat{\theta}_{MAP} = \underset{\theta}{\arg\max}\ \left[\log p(x \mid \theta) + \log p(\theta)\right]$

When we take the logarithm of this objective, we are essentially maximizing the posterior and therefore obtaining its mode as the point estimate. Compared with MLE, MAP has one more term: the prior of the parameters, $p(\theta)$. In fact, if we apply a uniform prior in MAP ($\log p(\theta) = \log \text{constant}$), MAP turns into MLE.

Still coin toss!

Let's reuse our coin toss example. Now we introduce a prior distribution for $\theta$: $\theta \sim \mathrm{Beta}(\alpha, \beta)$. This choice is motivated by the fact that our likelihood follows a Bernoulli distribution: a Beta prior turns the posterior into another Beta distribution, thanks to the beautiful property of conjugacy. Let's prove this below:

$p(\theta \mid x) \propto p(x \mid \theta)\, p(\theta) \propto \prod_{i=1}^{n}\theta^{x_i}(1-\theta)^{1-x_i} \cdot \theta^{\alpha-1}(1-\theta)^{\beta-1} = \theta^{\alpha + \sum_i x_i - 1}(1-\theta)^{\beta + n - \sum_i x_i - 1}$

which is proportional to the density of $\mathrm{Beta}\!\left(\alpha + \sum_i x_i,\ \beta + n - \sum_i x_i\right)$.

Therefore, the posterior of $\theta$ is updated with more data, slowly departing from the given prior. From a different perspective, we can think of the hyperparameters $\alpha$ and $\beta$ as pseudo counts that smooth the posterior distribution: we assume there were $\alpha$ successes and $\beta$ failures before any data was seen. In our example of all 1's, MAP drags the MLE estimate $\hat{\theta}=1$ towards our prior belief that it is probably NOT true that a coin toss will always give H.
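Because the posterior is a Beta distribution, the MAP estimate has a closed form: the mode of $\mathrm{Beta}(\alpha + n_H,\ \beta + n - n_H)$ is $\hat{\theta}_{MAP} = \dfrac{n_H + \alpha - 1}{n + \alpha + \beta - 2}$ whenever both parameters exceed 1. Here is a tiny sketch of my own (not from the original note; map_estimate is just an illustrative name) comparing it with the MLE on the all-heads sample.

# Closed-form MAP under a Beta(alpha, beta) prior with a Bernoulli likelihood.
import numpy as np

def map_estimate(x, alpha, beta):
    n_h, n = x.sum(), x.size
    # mode of Beta(alpha + n_h, beta + n - n_h); valid when both parameters > 1
    return (n_h + alpha - 1) / (n + alpha + beta - 2)

x_all_heads = np.ones(10)                            # the "all heads" sample above
x_all_heads.mean(), map_estimate(x_all_heads, 2, 2)  # MLE = 1.0, MAP = 11/12 ≈ 0.917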
Visualizing MAP

As a last example, let's watch the iterative process of how the posterior is updated as new data arrives. Assume the true $\theta$ is 0.7, and use a non-flat Beta prior with $\alpha=\beta=2$.

alpha = beta = 2
theta = 0.7
n = 50
X_arr = np.random.choice([0, 1], p=[1-theta, theta], size=n)
sum(X_arr) / X_arr.size

0.68

Recall that with every new observation $x_i$ the posterior parameters are updated as $(\alpha, \beta) \rightarrow (\alpha + x_i,\ \beta + 1 - x_i)$.

beta_arr = np.asarray([[alpha + sum(X_arr[:i+1]), beta + (i + 1 - sum(X_arr[:i+1]))]
                       for i in range(X_arr.size)])
beta_arr = np.insert(beta_arr, 0, [alpha, beta], 0)   # prepend the prior itself

Let's see how the posterior changes as we accumulate more data points.

beta_X = np.linspace(0, 1, 1000)
my_color = '#2E8B57'
fig, ax_arr = plt.subplots(ncols=4, figsize=(16, 4), sharex=True)
for i, iter_ in enumerate([0, 5, 15, 30]):
    ax = ax_arr[i]
    a, b = beta_arr[iter_]
    beta_Y = sp.stats.beta.pdf(x=beta_X, a=a, b=b)
    ax.plot(beta_X, beta_Y, color=my_color, linewidth=3)
    if a > 1 and b > 1:
        mode = (a - 1) / (a + b - 2)
    else:
        mode = a / (a + b)   # fall back to the mean when the interior mode does not exist
    ax.axvline(x=mode, linestyle='--', color='k')
    ax.set_title('Iteration %d: $\hat{\\theta}_{MAP}$ = %.2f' % (iter_, mode))
fig.tight_layout()

Note that although this example shows that MAP can track a whole posterior distribution for the parameter $\theta$, the goal of MAP is still to obtain a point estimate. This simplified example is easy because, thanks to conjugacy, we can solve the problem analytically.
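As a closing aside of my own (not in the original note): when the prior is not conjugate, the MAP point estimate can still be obtained numerically by maximizing $\log p(x\mid\theta)+\log p(\theta)$ directly, for example over a grid of $\theta$ values. The sketch below reuses the X_arr sample generated above and the same Beta(2, 2) prior; the grid search is just one possible choice.

# Numerical MAP: maximize log-likelihood + log-prior on a grid over theta.
# This works for any prior whose density we can evaluate; conjugacy is not needed.
import numpy as np
from scipy import stats

theta_grid = np.linspace(1e-4, 1 - 1e-4, 10000)
log_lik = (X_arr.sum() * np.log(theta_grid)
           + (X_arr.size - X_arr.sum()) * np.log(1 - theta_grid))
log_prior = stats.beta.logpdf(theta_grid, a=2, b=2)   # the same Beta(2, 2) prior
theta_map = theta_grid[np.argmax(log_lik + log_prior)]
theta_map   # should be close to (n_H + 1) / (n + 2), the analytic posterior mode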