If you try to solve a math problem without knowing the right method, it is easy to get confused and tempted to give up. Some students memorize formulas and rules to get through many problems, but memorization alone does not make anyone good at mathematics; math is about understanding and practice. Many students feel that solving linear equations by the matrix method is more complicated than other methods. Below we explain, step by step, how to solve linear equations using matrices. By the end of this section you should understand the concepts behind each step, and the matrix method may well become your first choice for systems of linear equations.

What is the Matrix Method?

Like the substitution and elimination methods, the matrix method is used for solving systems of linear equations. (It is sometimes confused with Cramer's rule; both use determinants, but Cramer's rule solves for each variable as a ratio of determinants, whereas the matrix method uses the inverse of the coefficient matrix.) Before moving ahead, you should be familiar with systems of linear equations and know how to find the inverse of a matrix.

Solving Systems of Equations by the Matrix Method

Consider a system of linear equations with three variables:

$a_1 x + b_1 y + c_1 z = d_1$
$a_2 x + b_2 y + c_2 z = d_2$
$a_3 x + b_3 y + c_3 z = d_3$

Let $A$ be the square matrix of coefficients of the given system and $B$ the column matrix of the constant terms. Then we may write $A\,X = B$, where $X$ is the column matrix of the variables. It follows that $X = A^{-1}\,B$. Using this, a set of linear equations can be solved by finding the inverse of the coefficient matrix.

Step 1: Find the cofactor matrix of the coefficient matrix $A$ formed from the given system.
Step 2: Write its transpose by interchanging rows and columns. This is called the adjoint of $A$, written $adj(A)$.
Step 3: Calculate the determinant of $A$, i.e. $|A|$.
Step 4: Divide $adj(A)$ by $|A|$ to find the inverse, using the formula $A^{-1} = \frac{adj(A)}{|A|}$. (This requires $|A| \neq 0$.)
Step 5: Multiply $A^{-1}$ and $B$; the result is a column matrix.
Step 6: Compare the elements of this matrix with the matrix $X$ to obtain the values of the variables.

Examples

Example: Solve the linear system using the matrix method.
5x + y = 13
3x + 2y = 5

Solution:
$\begin{bmatrix} 5 & 1\\ 3 & 2\end{bmatrix} \begin{bmatrix} x\\ y\end{bmatrix} = \begin{bmatrix} 13\\ 5 \end{bmatrix}$, i.e. $A\,X = B$.
Now, $A = \begin{bmatrix} 5 & 1\\ 3 & 2 \end{bmatrix}$
$cof(A) = \begin{bmatrix} 2 & -3\\ -1 & 5\end{bmatrix}$
$adj(A)$ = transpose of $cof(A)$ = $\begin{bmatrix} 2 & -1\\ -3 & 5\end{bmatrix}$
Determinant of $A$: $|A| = 10 - 3 = 7$
$A^{-1} = \frac{adj(A)}{|A|} = \frac{1}{7}\begin{bmatrix} 2 & -1\\ -3 & 5 \end{bmatrix}$
$X = A^{-1}\,B$:
$\begin{bmatrix} x\\ y \end{bmatrix} = \begin{bmatrix} \frac{2}{7} & -\frac{1}{7}\\ -\frac{3}{7} & \frac{5}{7} \end{bmatrix} \begin{bmatrix} 13\\ 5 \end{bmatrix} = \begin{bmatrix} \frac{21}{7}\\ \frac{-14}{7} \end{bmatrix} = \begin{bmatrix} 3\\ -2 \end{bmatrix}$
On comparing, x = 3 and y = -2.

Practice Problems

Practice the problems below to sharpen your skills.
Problem 1: Solve 3x - y = 10 and x + 2y = 1
Problem 2: Solve using the matrix method: x + 3y = 6 and 2y - 5z = 4
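The steps above are easy to script. Below is a minimal Python sketch (the function name solve_2x2 is just an illustrative choice) that applies the adjoint-over-determinant formula to a 2x2 system:

```python
def solve_2x2(a, b, c, d, e, f):
    """Solve  a x + b y = e,  c x + d y = f  by the matrix method:
    X = A^{-1} B with A^{-1} = adj(A) / |A|."""
    det = a * d - b * c              # Step 3: |A|
    if det == 0:
        raise ValueError("coefficient matrix is singular; no unique solution")
    # Steps 1-2: for a 2x2 matrix, adj(A) = [[d, -b], [-c, a]]
    # Steps 4-6: multiply adj(A)/|A| by the constant column B = (e, f)
    x = (d * e - b * f) / det
    y = (-c * e + a * f) / det
    return x, y

# The worked example: 5x + y = 13, 3x + 2y = 5
print(solve_2x2(5, 1, 3, 2, 13, 5))  # (3.0, -2.0)
```

The same function solves Practice Problem 1 (3x - y = 10, x + 2y = 1), giving x = 3, y = -1.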
Construct Definitions and Relationships

In physics, many phenomena behave cyclically, and springs are the prototypical example.

Elastic object: an object which, when deformed (perhaps by squeezing or stretching it), returns to its original shape.

Spring: a coiled, spiral-shaped object constructed to be elastic over a large range of deformation. Some springs can only be stretched, but if the coils are initially separated, the spring is elastic when either stretched or compressed.

Spring-mass: a spring with an object attached to one end, where the object's mass is considerably greater than the mass of the spring.

Hanging spring-mass: a spring-mass with one end solidly attached to a support so that the spring hangs vertically with the mass at the bottom. Before the mass is attached, the bottom of the spring (the "free end") sits at a definite position relative to the attached end: this is the equilibrium position of the free end without the mass attached. When the mass is attached, the free end of the spring moves to a new, lower position: the equilibrium position of the free end with the mass attached.

Assumptions and Simplifications: For the sake of being able to perform calculations, we assume that springs have negligible mass and always have a mass attached at one end. These assumptions are frequently violated in the real world, but without them even basic problems become very difficult. For example, finding the kinetic energy of a single mass in motion is trivial, but finding the kinetic energy of a spring alone, whose free end always moves faster than its base, is a daunting task that detracts from learning the physics principles. Reassuringly, these simplifications yield realistic results in most cases involving spring-masses, since the spring's mass is often negligible compared to the mass at its end.
A mass connected to a spring that follows Hooke's law is a system that, when displaced from its equilibrium position, experiences a restoring force \(F\) proportional to the displacement \(x\).

Hooke's Law

The force with which a spring pulls back when stretched (or pushes back when compressed) is proportional to the amount of stretch from equilibrium, provided the spring is not stretched too far. (Historically, this linear proportionality between force and stretch is referred to as Hooke's law behavior.) We write the force with which the spring pulls back (the restoring force) as \[F = -k x\] where k is the "spring constant" or "force constant" (which depends on the stiffness of the particular spring). The minus sign indicates that the force is opposite in direction to the stretch or compression. An important point to notice in this expression: x is measured from the unstretched position of the free end of the spring. For the force to have units of newtons, k must have units of newtons per meter. The force that you (an external agent) must exert on the spring to stretch it a distance \(x\) is opposite in direction to the restoring force and equal to \(+k x\).

Hooke's law breaks down at the extremes of a spring's motion. When stretched to the point of breaking or permanent deformation, a spring's behavior deviates substantially from the linear expectation; when compressed so far that its coils begin to touch, the forces at play clearly change. For these reasons, it is usually assumed that springs stretch only within a small portion of their maximum deformation.

Example: Deducing a Spring Constant

Suppose a spring stretches 13 cm when a force of 780 N is applied to it. What is its spring constant?

Solution

To find the spring constant, model the spring as a Hooke's law spring and solve for k.
\[ F = kx \] \[ k = \frac{F}{x} \] Convert from cm to m: 13 cm = 0.13 m. \[ k = \frac{780\ \text{N}}{0.13\ \text{m}} = 6000\ \text{N/m} \] The minus sign in the force equation is dropped because we are referring to the magnitude of the force we exert, not the force the spring exerts; the sign is normally negative to indicate that the spring pulls opposite the direction of its deformation.

Spring-Mass Force and Force Constant for a Hanging Mass

A spring with a mass hanging on it acts exactly like a horizontal spring, except that the end of the spring has a different equilibrium position. There are now two forces acting on the mass: the force from the spring (pulling up or pushing down) and the force from the Earth (always pulling down). We will not show this here, but the combination of the two forces is completely accounted for if we measure all stretches and compressions of the spring from the equilibrium position of the free end of the spring with the mass attached.

The force with which a spring-mass pulls back when stretched (or pushes back when compressed) is proportional to the amount of stretch from the equilibrium determined with the mass attached (provided the spring is not stretched too far). We will usually write the restoring force using the symbol "y" instead of "x", but this is just a convention: \[F = -k y\] where k is the "spring constant" or "force constant" (which depends on the stiffness of the particular spring, not on the mass). The minus sign indicates that the force is opposite to the direction the spring-mass was stretched or compressed. The force that you (an external agent) must exert on the spring-mass to stretch it a distance y is opposite in direction to the restoring force and equal to \(+k y\). A critically important point to notice in this expression: y is measured from the equilibrium position of the free end with the mass attached.
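The arithmetic of the example above is easy to check in code. A minimal Python sketch (the function name spring_constant is an illustrative choice):

```python
def spring_constant(force_newtons, stretch_metres):
    """Hooke's law in magnitude: F = k x, so k = F / x."""
    return force_newtons / stretch_metres

# 780 N applied force, 13 cm = 0.13 m of stretch
k = spring_constant(780.0, 0.13)
print(round(k))  # 6000 (N/m)
```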
By measuring the stretch from this "new" equilibrium position, the effect of the force of the Earth is automatically taken into account and does not have to be added back in. It is as if we were in far outer space with no force of gravity; the only force is the "modified" force of the spring.

Energies of a Spring-Mass

The work I do when I pull on the mass increases the potential energy of the mass/spring system. I can calculate this work: it is simply the product of the force I apply and the distance through which I move while pulling or pushing. The only tricky part is that the force is not constant: since the magnitude of the force is kx, it is proportional to the distance I have pushed or pulled. So I need to take the average force times the distance. The force varies from zero to a maximum of kx, so its average is \(\dfrac{1}{2}kx\). Multiplying this average force by the distance gives the work I do on the system: \[W = (\text{average force}) \times (\text{distance}) = \left(\dfrac{1}{2}kx\right)(x) = \dfrac{1}{2}kx^{2}\] Notice that while the force scales linearly with deformation, the work required to deform the spring scales as the square of the deformation.

Potential Energy of a Spring-Mass

The expression in the preceding paragraph represents the work I did on the system, so by energy conservation the potential energy of the spring must have changed by that same amount. It does not matter how the spring got stretched (that is, whether it was extended or compressed). If deformed by an amount x, it must have a PE relative to the equilibrium position (with mass attached) of \[\text{PE}_{\text{spring-mass}} = \dfrac{1}{2}kx^{2}\] The change in the spring-mass potential energy, ∆PE spring-mass, can be found using the Energy-Interaction Model in the standard way. Let's assume in this example that I compress or stretch the spring further from its equilibrium position, from an initial position \(y_i\) to a final position \(y_f\).
Therefore, I am doing positive work on the spring-mass system; that is, I am adding energy to it. (The work I do on the spring-mass system will be a positive number of joules.) Thus, its energy must increase. If I do not change its KE, i.e., do not change its speed (the final speed equals the initial speed), the only energy system that changes is the PE spring-mass energy system.

Kinetic Energy of a Spring-Mass

The kinetic energy of the spring-mass system is the same as for any mass moving with speed v: \[\text{KE}_{\text{spring-mass}} = \dfrac{1}{2}mv^{2}\] where m is the mass of the object attached to the spring and v is the speed of the mass. The change in the KE spring-mass is calculated the same way as the change of any energy: \[\Delta KE = KE_{f} - KE_{i} = \dfrac{1}{2}m(v_{f}^{2} - v_{i}^{2}) \]

Example: Applying Energy Conservation to a Spring-Mass

A spring of negligible mass and spring constant 120 N/m is fixed to a wall, free to oscillate. A ball of mass 1.5 kg is attached to its other end. The spring-mass is stretched 0.4 m and released. What is the top speed of the attached ball?

Solution

Upon stretching the spring, energy is stored in the spring as potential energy. This potential energy is released when the spring is allowed to oscillate. The maximum speed occurs when the spring returns to its equilibrium position and all the energy is kinetic. First, calculate the amount of potential energy the spring initially stores: \[ PE_{i} = \frac{1}{2} k x_{i}^{2} = \frac{1}{2} (120\ \text{N/m})(0.4\ \text{m})^{2} = 9.6\ \text{J} \] When the mass reaches its maximum speed, all of this potential energy has become kinetic.
Use this to find the top speed of the mass: \[ PE_{max} = KE_{max} = \frac{1}{2}mv_{max}^{2} \] \[ v_{max} = \sqrt{ \frac{2KE}{m} } = \sqrt{ \frac{2 \times 9.6\ \text{J}}{1.5\ \text{kg}} } \approx 3.58\ \text{m/s} \]

Spring-Mass Systems: a Universal Motion

Most physical systems that vibrate back and forth do so like a hanging spring-mass, particularly when the amplitude of vibration is not too large. Because this motion is so common, it is worth looking at it a little more closely. The PE of this spring-mass system is a maximum when the mass is at its extreme positions and its speed is zero. Conversely, the KE of the spring-mass system is a maximum when the mass is at its equilibrium position (y = 0). We know from the last chapter that in the absence of friction, ∆E tot = ∆KE + ∆PE = 0; in words, E tot is constant. Looking at E tot at any particular time in the cycle of vibration, the energy always equals the same total value. Written symbolically, this becomes: \[E_{tot} = PE + KE = \text{constant} = PE_{max} = KE_{max}\] The graph below shows the kinetic energy K and potential energy U of a spring-mass system as a function of the position from equilibrium. The kinetic energy as a function of y, K(y), is just the difference between the maximum energy, Emax, and U(y); this is just another way of expressing conservation of energy for this system. Both PE(y) and KE(y) are parabolas centered about y = 0; the shape is due to the dependence of the PE on the square of y. Although we cannot show it now without investigating the time behavior of the motion of the mass, it turns out that the time averages of the PE and KE are the same, and are consequently both equal to one-half the total energy.
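The energy bookkeeping in the worked example above is easy to verify numerically; here is a minimal Python sketch using the example's numbers (k = 120 N/m, x = 0.4 m, m = 1.5 kg):

```python
import math

k = 120.0   # spring constant, N/m
x = 0.4     # initial stretch, m
m = 1.5     # attached mass, kg

pe_initial = 0.5 * k * x**2            # elastic PE stored at release
v_max = math.sqrt(2 * pe_initial / m)  # all PE converted to KE at equilibrium

print(round(pe_initial, 2))  # 9.6 (J)
print(round(v_max, 2))       # 3.58 (m/s)
```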
\[\text{avg } KE_{spring-mass} = \text{ avg } PE_{spring-mass} = \dfrac{1}{2}E_{total} \] The fact that the time-average potential and kinetic energies are the same has a profound implication for the model of matter that we are about to develop in Chapter 3. This result is true only for a potential energy that depends on the square of the variable. It is precisely the fact that the potential energy is quadratic in position that makes a spring-mass system so special, so universal, and so important.

The lowest point on the PE(y) curve is frequently called the potential energy minimum. Why is the position at which the potential energy is a minimum significant? Consider what happens as energy is removed from a system oscillating about the equilibrium value of its position (e.g., a real spring-mass system, because of friction, gradually transfers mechanical energy to thermal energy). The amplitude (maximum extent) of the oscillations decreases until eventually, when all mechanical energy has been transferred to thermal systems, the system comes to rest at the equilibrium position: the position of the potential minimum.

Now consider any physical system that oscillates and that settles down to a stable position as energy is transferred to thermal systems. That stable equilibrium position represents the physical state where the potential energy is smallest. (All of the PE and KE have been transferred to thermal systems.) The potential energy must increase as y moves away from zero, the equilibrium position, in either direction. Nature seems to prefer smooth changes of quantities like potential energies, and the simplest smooth mathematical function that increases for both positive and negative y is \(y^2\). When we look at real physical systems, it turns out that, sufficiently close to the minimum, the potential energy always "looks parabolic"! This is a result with far-reaching consequences.
It implies that any oscillating system behaves just like our simple spring-mass system, at least for small amplitudes of oscillation! What's important is not that the spring-mass system is itself so special; it is, rather, that the behavior of the spring-mass system represents a truly universal behavior of any oscillating system. We will use this property shortly when we model the real atoms of liquids and solids as "oscillating masses and springs."

Contributors: Authors of Phys7A (UC Davis Physics Department)
I just want to add something to the correct @annav answer, with a practical example from basic Quantum Field Theory.

Imagine a particle process with $2$ initial particles and $2$ final particles. You have some initial state (say at $t=-\infty$), which is $|i\rangle =|1\rangle |2\rangle$, where $|1\rangle$ and $|2\rangle$ are the states (at $t=-\infty$) of the initial particles. This initial state $|i\rangle$ has a unitary evolution. Practically, the non-trivial part of this evolution is due to the exchange of "virtual particles" (for instance, you may imagine two initial electrons exchanging a "virtual photon", or an initial left-handed electron and an initial right-handed electron exchanging a "virtual Higgs").

Now, the initial state $|i\rangle$ evolves, so at $t = +\infty$ the final state can be written $|f\rangle = \sum\limits_{k,l} A_{1,2;k,l}|k\rangle |l\rangle$, where $|k\rangle$ and $|l\rangle$ represent possible states of the final particles. Up to this point there is a (unitary) evolution due to the interaction, but no "collapse". $A_{1,2;k,l}$ in the above expression is simply the probability amplitude to find the final particles in a state $|k\rangle |l\rangle$, given initial particles in the state $|1\rangle |2\rangle$. However, if you make a measurement (at $t=+\infty$), you will have a "collapse", and you will find a final state $|k\rangle |l\rangle$ with probability $|A_{1,2;k,l}|^2$.

Another interesting point: considering here simple Quantum Mechanics, interactions between a particle and a measurement apparatus may be described through entanglement. We may consider the example of the 2-slit experiment with photons. Without any measurement apparatus, the total state is $|\psi\rangle = |\psi_L\rangle + |\psi_R \rangle$ (up to normalization), where $L$ and $R$ represent the two slits.
If you bring in a measurement apparatus potentially able to detect which slit the photon used, but without explicitly performing the measurement, the new state is $|\psi'\rangle = |\psi_L\rangle |M_L \rangle + |\psi_R \rangle |M_R \rangle$, where $|M_L\rangle$ and $|M_R\rangle$ are states of the measurement apparatus that are quasi-orthogonal ($\langle M_R|M_L\rangle \approx 0$). This is a pre-measurement state: there is an entanglement between the states of the particle and the states of the measurement apparatus. Because the states of the apparatus are (quasi-)orthogonal, this destroys the interference pattern.

Now, you may really perform a measurement; in this case, you explicitly detect which slit the photon used. After this, the final state would be $|\psi''\rangle = |\psi_L\rangle |M_L \rangle$ if the $L$ slit path is detected. More correct models would in fact involve entangled (pre-measurement) states between the particle, the measurement apparatus and the environment: $ \sum\limits_i |\psi_i\rangle |M_i \rangle |E_i \rangle$.
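The role of the apparatus overlap can be made concrete with a small numerical toy model (my own sketch, not part of the original answer): in the pre-measurement state, the interference (cross) term in the screen intensity is weighted by $\langle M_L|M_R\rangle$, so quasi-orthogonal apparatus states wash out the fringes.

```python
import cmath

def intensity(x, overlap):
    """Toy two-slit intensity at screen coordinate x.

    psi_L, psi_R are illustrative plane-wave amplitudes from the two
    slits; `overlap` stands for <M_L|M_R>, which multiplies the
    interference (cross) term.
    """
    psi_L = cmath.exp(1j * x)
    psi_R = cmath.exp(-1j * x)
    cross = 2 * (overlap * psi_L.conjugate() * psi_R).real
    return abs(psi_L) ** 2 + abs(psi_R) ** 2 + cross

# No which-path record (<M_L|M_R> = 1): full fringes
print(intensity(0.0, 1.0))  # 4.0  (bright fringe)
# Perfect which-path record (<M_L|M_R> = 0): cross term gone, flat pattern
print(intensity(0.0, 0.0))  # 2.0
```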
Public member functions:

AliFMDMultCuts ()
AliFMDMultCuts (EMethod method, Double_t fmd1i, Double_t fmd2i=-1, Double_t fmd2o=-1, Double_t fmd3i=-1, Double_t fmd3o=-1)
AliFMDMultCuts (const AliFMDMultCuts &o)
AliFMDMultCuts & operator= (const AliFMDMultCuts &o)
void Reset ()
Double_t GetMultCut (UShort_t d, Char_t r, Double_t eta, Bool_t errors) const
Double_t GetMultCut (UShort_t d, Char_t r, Int_t etabin, Bool_t errors) const
void SetMultCuts (Double_t fmd1i, Double_t fmd2i=-1, Double_t fmd2o=-1, Double_t fmd3i=-1, Double_t fmd3o=-1)
void SetMPVFraction (Double_t frac=0)
void SetNXi (Double_t nXi)
void SetIncludeSigma (Bool_t in)
void SetProbability (Double_t cut=1e-5)
void Set (EMethod method, Double_t fmd1i, Double_t fmd2i=-1, Double_t fmd2o=-1, Double_t fmd3i=-1, Double_t fmd3o=-1)
void Print (Option_t *option="") const
void FillHistogram (TH2 *h) const
void Output (TList *l, const char *name=0) const
Bool_t Input (TList *l, const char *name)
EMethod GetMethod () const
const char * GetMethodString (Bool_t latex=false) const

Cuts used when calculating the multiplicity. We can define our cuts in several ways (in order of priority):

- Using a fixed value \( v\) (AliFMDMultCuts::SetMultCuts)
- Using a fraction \( f\) of the most probable value ( \( \Delta_p\)) from the energy loss fits
- Using some number \( n\) of widths ( \( \xi\)) below the most probable value ( \( \Delta_p\)) from the energy loss fits
- Using some number \( n\) of widths ( \( \xi+\sigma\)) below the most probable value ( \( \Delta_p\)) from the energy loss fits
- Using the \( x\) value for which \( P(x>p)\), given some cut value \( p\)
- Using the lower fit range of the energy loss fits

The member function AliFMDMultCuts::Reset resets all cut values, meaning the lower bound on the fits will be used by default; this is useful to ensure a fresh start. The member function AliFMDMultCuts::GetMethod returns the identifier of the current method (AliFMDMultCuts::EMethod).
Likewise, the member function AliFMDMultCuts::GetMethodString gives a human-readable string describing the current method.

Definition at line 38 of file AliFMDMultCuts.h.

Set the cut for the specified method. Note that if method is kFixed and only fmd1i is specified, then the cut value for the outer rings is increased by 20% relative to fmd1i. Also note that if method is kLandauWidth and cut2 is larger than zero, then the \(\sigma\) of the fits is included in the cut value.

Parameters:
method: Method to use
fmd1i: Value for FMD1i
fmd2i: Value for FMD2i (if < 0, use fmd1i)
fmd2o: Value for FMD2o (if < 0, use fmd1i)
fmd3i: Value for FMD3i (if < 0, use fmd1i)
fmd3o: Value for FMD3o (if < 0, use fmd1i)

Definition at line 63 of file AliFMDMultCuts.cxx.

Referenced by AliFMDMultCuts(), AliFMDSharingFilter::AliFMDSharingFilter(), and DepSet().
b. 1.4.2002

Description: b. is a web-based bookmarking manager accessible from any browser. Features include:
- Bookmark sharing, a new and powerful feature: shared bookmarks may be publicly viewed by all, while a certain set of users has the ability to add or change them
- Bookmark data stored in XML format
- Customizable appearance: the look of b. can be determined with CSS stylesheets, custom graphics, and HTML templates
- Support for multi-user environments using basic Web server user authentication

License: Freeware
Downloads: 24

More Similar Code

FLATTEXT Class B Scripts offers among the fastest and cheapest ways to place databases online. This is achieved through creation of custom Perl scripts that allow you to search, edit and delete databases over the web. This script searches your exported file or creates a database from the very beginning using the selected delimiter.

ZBit B+Tree is an ASP web-based online database component which can be integrated into a user's website to collect data in a database with the help of a COM object that uses the B+ Tree algorithm. This component has features like multithreaded data,...

I present a method of computing the 1F1(a,b,x) function using a contour integral. The method is based on a numerical inversion, basically the Laplace inversion. The integral is 1F1(a,b,x) = Gamma(b)/2\pi i \int_\rho exp(zx)z^(-b)(1+x/z)^(-a)dz, \rho...

L-BFGS-B is a collection of Fortran 77 routines for solving nonlinear optimization problems with bound constraints on the variables. One of the key features of the nonlinear solver is that the Hessian is not needed. I've designed an interface to...

*Description and Cautions: The SFNG m-file is used to simulate the B-A algorithm and returns scale-free networks of given node sizes. Understanding the B-A algorithm is key to using this code to its fullest. Due to Matlab resource...
The Mann-Kendall Tau-b non-parametric function computes a coefficient representing the strength and direction of a trend for equally spaced data. While you do not need the Statistics Toolbox to compute Tau-b, you do need it to test for significance....

This is a pair of routines for computing Erlang B and C probabilities used in queueing theory and telecommunications. The routines use a numerically stable recurrence relation which works well for large numbers of servers.

fastBSpline: a fast, lightweight class that implements non-uniform B-splines of any order. Matlab's spline functions are very general, and this generality comes at the price of speed. For large-scale applications, including...

rand_extended computes a random number in the interval [a,b]. It is basically an extended version of the command rand. Inputs: a, the lower bound of the interval; b, the upper bound of the interval; m, n (optional):...

fcnPseudoBmodeUltrasoundSimulator generates a simulated pseudo B-mode ultrasound image given the tissue acoustic echogenicity model for the structure to be imaged. The simulated image has the same matrix size as the input echogenicity map....
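The "numerically stable recurrence relation" mentioned in the Erlang B listing is a standard one; here is a minimal Python sketch of it (my own illustration, not the listed MATLAB code):

```python
def erlang_b(traffic, servers):
    """Erlang B blocking probability via the stable recurrence
    B(E, 0) = 1;  B(E, m) = E*B(E, m-1) / (m + E*B(E, m-1)).

    Avoids the overflow-prone direct formula (E^m/m!) / sum_k (E^k/k!).
    """
    b = 1.0
    for m in range(1, servers + 1):
        b = traffic * b / (m + traffic * b)
    return b

# 2 Erlangs of offered traffic on 3 servers
print(round(erlang_b(2.0, 3), 4))  # 0.2105 (= 4/19)
```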
Consider a $C^k$, $k\ge 2$, Lorentzian manifold $(M,g)$ and let $\Box$ be the usual wave operator $\nabla^a\nabla_a$. Given $p\in M$, $s\in\Bbb R,$ and $v\in T_pM$, can we find a neighborhood $U$ of $p$ and $u\in C^k(U)$ such that $\Box u=0$, $u(p)=s$ and $\mathrm{grad}\, u(p)=v$?

The tog is a measure of thermal resistance of a unit area, also known as thermal insulance. It is commonly used in the textile industry and often seen quoted on, for example, duvets and carpet underlay. The Shirley Institute in Manchester, England developed the tog as an easy-to-follow alternative to the SI unit of m²K/W. The name comes from the informal word "togs" for clothing, which itself was probably derived from the word toga, a Roman garment. The basic unit of insulation coefficient is the RSI (1 m²K/W); 1 tog = 0.1 RSI. There is also a clo clothing unit equivalent to 0.155 RSI or 1.55 tog...

The stone or stone weight (abbreviation: st.) is an English and imperial unit of mass now equal to 14 pounds (6.35029318 kg). England and other Germanic-speaking countries of northern Europe formerly used various standardised "stones" for trade, with their values ranging from about 5 to 40 local pounds (roughly 3 to 15 kg) depending on the location and objects weighed. The United Kingdom's imperial system adopted the wool stone of 14 pounds in 1835. With the advent of metrication, Europe's various "stones" were superseded by or adapted to the kilogram from the mid-19th century on. The stone continues...

Can you tell me why this question deserves to be negative? I tried to find faults and I couldn't: I did some research, I did all the calculations I could, and I think it is clear enough. I had deleted it and was going to abandon the site but then I decided to learn what is wrong and see if I ca...

I am a bit confused about classical physics's angular momentum. For an orbital motion of a point mass: if we pick a new coordinate (that doesn't move w.r.t.
the old coordinate), angular momentum should still be conserved, right? But I calculated a quite absurd result: it is no longer conserved (there is an additional term that varies with time) in the new coordinates: $\vec {L'}=\vec{r'} \times \vec{p'} =(\vec{R}+\vec{r}) \times \vec{p} =\vec{R} \times \vec{p} + \vec L$, where the first term varies with time ($\vec R$ is the constant shift of coordinates, and $\vec p$ is, roughly, rotating). Would anyone be kind enough to shed some light on this for me?

From what we discussed, your literary taste seems to be classical/conventional in nature. That book is inherently unconventional in nature; it's not supposed to be read as a novel, it's supposed to be read as an encyclopedia. @BalarkaSen Dare I say it, my literary taste continues to change as I have kept on reading :-) One book that I finished reading today, The Sense of an Ending (different from the movie with the same title), is far from anything I would've been able to read even two years ago, but I absolutely loved it. I've just started watching The Fall; it seems good so far (after 1 episode)... I'm with @JohnRennie on the Sherlock Holmes books and would add that the most recent TV episodes were appalling. I've been told to read Agatha Christie but haven't got round to it yet.

Is it possible to make a time machine, ever? Please give an easy answer, a simple one. A simple answer, but a possibly wrong one, is to say that a time machine is not possible. Currently, we don't have either the technology to build one, nor a definite, proven (or generally accepted) idea of how we could build one. (Countto10)

@vzn if it's a romantic novel, which it looks like, it's probably not for me; I'm getting to be more and more fussy about books and have a ridiculously long list to read as it is.
I'm going to counter that one by suggesting Ann Leckie's Ancillary Justice series. Although if you like epic fantasy, the Malazan Book of the Fallen is fantastic. @Mithrandir24601 lol it has some love story but it's written by a guy so it can't be a romantic novel... besides, what decent stories don't involve love interests anyway :P ... was just reading his blog, they are gonna do a movie of one of his books with Kate Winslet, can't beat that right? :P variety.com/2016/film/news/… @vzn "he falls in love with Daley Cross, an angelic voice in need of a song." I think that counts :P It's not that I don't like it, it's just that authors very rarely do anywhere near a decent job of it. If it's a major part of the plot, it's often either eyeroll-worthy and cringy or boring and predictable with OK writing. A notable exception is Steven Erikson. @vzn depends exactly what you mean by 'love story component', but often yeah... It's not always so bad in sci-fi and fantasy, where it's not in the focus so much and just evolves in a reasonable, if predictable, way with the storyline, although it depends on what you read (e.g. Brent Weeks, Brandon Sanderson). Of course Patrick Rothfuss completely inverts this trope :) and Lev Grossman is a study in how to do character development and totally destroys typical romance plots.

@Slereah The idea is to pick some spacelike hypersurface $\Sigma$ containing $p$. Now specifying $u(p)$ is trivial because the wave equation is invariant under constant perturbations. So that's whatever. But I can specify $\nabla u(p)|_\Sigma$ by specifying $u(\cdot, 0)$ and differentiating along the surface. For the Cauchy theorems I can also specify $u_t(\cdot,0)$.
Now take the neighborhood to be $\approx (-\epsilon,\epsilon)\times\Sigma$ and then split the metric like $-dt^2+h$ Do forwards and backwards Cauchy solutions, then check that the derivatives match on the interface $\{0\}\times\Sigma$ Why is it that you can only cool down a substance so far before the energy goes into changing its state? I assume it has something to do with the distance between molecules: the intermolecular interactions have less energy in them than would be needed to make the distance between them even smaller. But why does the substance form these bonds instead of making the distance smaller / just reducing the temperature more? Thanks @CooperCape but this leads me to another question I forgot ages ago If you have an electron cloud, is the electric field from that electron just some sort of averaged field from some centre of amplitude, or is it a superposition of fields each coming from some point in the cloud?
Spring 2018, Math 171 Week 3 Stopping/Non-Stopping times Let \(T_1, T_2\) be stopping times for some Markov Chain \(\{X_n:n \ge 0\}\). Which of the following will also necessarily be stopping times? Prove your claims. (Discussed) \(T_3=5\) \(T_4=T_1 + T_2 + 1\) (Discussed) \(T_5=T_1 + T_2 - 1\) (Solution) \(T_5\) will not necessarily be a stopping time. Suppose \(\{X_n:n \ge 0\}\) is the Markov Chain corresponding to the transition matrix \[P = \begin{matrix} & \mathbf 0 & \mathbf 1 \cr \mathbf 0 & 1/2 & 1/2 \cr \mathbf 1 & 0 & 1 \end{matrix}\] Suppose further that \(T_1 = \min\{n \ge 0: X_n = \mathbf 0\}\) and \(T_2 = \min\{n \ge 0: X_n = \mathbf 1\}\). If \(T_5\) were a stopping time we would have \(P(T_5 = n | X_n = x_n, \dots, X_0=x_0)\in \{0, 1\} \; \forall n\). However, by definition of \(T_5\)\[P(T_5 = 0 | X_0=\mathbf 0) = P(T_1 + T_2 = 1 | X_0=\mathbf 0)\] by directly enumerating the possibilities we see \[= P(T_1 = 1, T_2 = 0 | X_0=\mathbf 0)\] \[+ P(T_1 = 0, T_2 = 1 | X_0=\mathbf 0)\] and now using the Multiplication Rule \[= P(T_1 = 1 | T_2 = 0, X_0=\mathbf 0)P(T_2 = 0 | X_0=\mathbf 0)\] \[+ P(T_2 = 1 | T_1 = 0, X_0=\mathbf 0)P(T_1 = 0 | X_0=\mathbf 0)\] since \(T_1\) and \(T_2\) are stopping times this simplifies to \[= P(T_1 = 1 | X_0=\mathbf 0)P(T_2 = 0 | X_0=\mathbf 0)\]\[ + P(T_2 = 1 | X_0=\mathbf 0)P(T_1 = 0 | X_0=\mathbf 0)\] each term of which can be computed \[=0 \cdot 0 + \frac 1 2 \cdot 1\]\[= \frac 1 2 \notin \{0, 1\}\] (Discussed) Solve problem 4(ii) on HW 2 (Discussed) Show Lemma 1.3 from the textbook Classification of States Consider the Markov chain defined by the following transition matrix: \[P = \begin{matrix} & \mathbf 1 & \mathbf 2 & \mathbf 3 & \mathbf 4 & \mathbf 5 & \mathbf 6 & \mathbf 7 & \mathbf 8 \cr \mathbf 1 & 0.5 & 0 & 0.5 & 0 & 0 & 0 & 0 & 0 \cr \mathbf 2 & 0.5 & 0.5 & 0 & 0 & 0 & 0 & 0 & 0 \cr \mathbf 3 & 0 & 0 & 0 & 0.5 & 0 & 0 & 0 & 0.5 \cr \mathbf 4 & 0 & 0 & 0.5 & 0 & 0.5 & 0 & 0 & 0 \cr \mathbf 5 & 0 & 0 & 0 & 0 & 0 & 
1 & 0 & 0 \cr \mathbf 6 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 \cr \mathbf 7 & 0 & 0 & 0 & 0 & 0 & 0.5 & 0 & 0.5 \cr \mathbf 8 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 \end{matrix}\] Identify the transient and recurrent states, and the irreducible closed sets in the Markov chain. Give reasons for your answers. Consider the Markov chain defined by the following transition matrix: \[P = \begin{matrix} & \mathbf 1 & \mathbf 2 & \mathbf 3 & \mathbf 4 & \mathbf 5 & \mathbf 6 \cr \mathbf 1 & 0 & 0 & 1 & 0 & 0 & 0 \cr \mathbf 2 & 0 & 0 & 0 & 0 & 0 & 1 \cr \mathbf 3 & 0 & 0 & 0 & 0 & 1 & 0 \cr \mathbf 4 & 0.25 & 0.25 & 0 & 0.5 & 0 & 0 \cr \mathbf 5 & 1 & 0 & 0 & 0 & 0 & 0 \cr \mathbf 6 & 0 & 0.5 & 0 & 0 & 0 & 0.5 \end{matrix}\] Identify the transient and recurrent states, and the irreducible closed sets in the Markov chain. Give reasons for your answers. Stationary Distributions Recall a stationary distribution is a vector \(\pi\) satisfying: \[\sum _i \pi(i)=1\] \[\pi(i)\ge 0, \quad \forall i\] \[\pi P = \pi\] Compute any and all stationary distributions of \[P = \begin{bmatrix} 0 & 0 & 1 \cr 1 & 0 & 0 \cr 0 & 1 & 0 \end{bmatrix}\] If you claim \(P\) has a unique stationary distribution, please justify. (Partially Discussed) Under what circumstances is the stationary distribution of \[P = \begin{bmatrix} 1-r & 0 & r \cr p & 1-p & 0 \cr 0 & q & 1-q \end{bmatrix}\] unique? Justify your answer. Compute the stationary distribution in this case. Compute any and all stationary distributions of \[P = \begin{bmatrix} 1 & 0 \cr 0 & 1 \end{bmatrix}\] If you claim \(P\) has a unique stationary distribution, please justify. Compute any and all stationary distributions of \[P = \begin{bmatrix} P_1 & 0 \cr 0 & P_2 \end{bmatrix}\] where \(P_1\) has a unique stationary distribution \(\pi_1\) and \(P_2\) has a unique stationary distribution \(\pi_2\). If you claim \(P\) has a unique stationary distribution, please justify. 
Compute any and all stationary distributions of \[P = \begin{bmatrix} 0 & p & 0 & 1-p \cr q & 0 & 1-q & 0 \cr 0 & 1-r & 0 & r \cr 1-s & 0 & s & 0 \end{bmatrix}\] If you claim \(P\) has a unique stationary distribution, please justify.
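For checking answers to these stationary-distribution exercises numerically, one can solve \(\pi P = \pi\) together with \(\sum_i \pi(i) = 1\) as a linear system. A minimal sketch (the helper name is mine, not from the course):

```python
import numpy as np

def stationary_distribution(P):
    """Solve pi P = pi with sum(pi) = 1 as a stacked linear system.

    Returns the unique stationary distribution when one exists."""
    n = P.shape[0]
    # (P^T - I) pi = 0, plus a normalization row of ones
    A = np.vstack([P.T - np.eye(n), np.ones(n)])
    b = np.zeros(n + 1)
    b[-1] = 1.0
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pi

# The 3-state cyclic permutation chain from the first exercise
P = np.array([[0, 0, 1],
              [1, 0, 0],
              [0, 1, 0]], dtype=float)
pi = stationary_distribution(P)
print(np.round(pi, 6))  # uniform: [1/3, 1/3, 1/3]
```

For this doubly stochastic cyclic chain the stationary distribution comes out uniform. Beware that for the identity or block-diagonal examples the system is rank-deficient and `lstsq` silently returns just one of infinitely many solutions, which is exactly why those exercises ask you to justify uniqueness.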
(Sorry, I was asleep at that time but forgot to log out, hence the apparent lack of response.) Yes you can (since $k=\frac{2\pi}{\lambda}$). To convert from path difference to phase difference, multiply by $k$; see this PSE question for details http://physics.stackexchange.com/questions/75882/what-is-the-difference-between-phase-difference-and-path-difference
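For reference, the conversion is \(\Delta\varphi = k\,\Delta x = \frac{2\pi}{\lambda}\,\Delta x\), i.e. one multiplies the path difference by \(k\). A one-line sketch, with names of my choosing:

```python
import math

def phase_difference(path_difference, wavelength):
    """Phase difference (radians) from a path difference, via k = 2*pi/lambda."""
    k = 2 * math.pi / wavelength
    return k * path_difference

# A path difference of half a wavelength gives a phase difference of pi
assert math.isclose(phase_difference(0.5, 1.0), math.pi)
```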
Definition:Summation/Indexed Definition Let $\tuple {a_1, a_2, \ldots, a_n} \in S^n$ be an ordered $n$-tuple in $S$. The composite is called the summation of $\tuple {a_1, a_2, \ldots, a_n}$, and is written: $\displaystyle \sum_{j \mathop = 1}^n a_j = \paren {a_1 + a_2 + \cdots + a_n}$ The set of elements $\set {a_j \in S: 1 \le j \le n}$ is called the summand. The sign $\sum$ is called the summation sign and is sometimes referred to as sigma (as that is its name in Greek). Also see Results about summations can be found here. Le signe $\displaystyle \sum_{i \mathop = 1}^{i \mathop = \infty}$ indique que l'on doit donner au nombre entier $i$ toutes les valeurs $1, 2, 3, \ldots$, et prendre la somme des termes. (The sign $\displaystyle \sum_{i \mathop = 1}^{i \mathop = \infty}$ indicates that one must give to the whole number $i$ all the values $1, 2, 3, \ldots$, and take the sum of the terms.) -- 1820: Refroidissement séculaire du globe terrestre (Bulletin des Sciences par la Société Philomathique de Paris Vol. 3, 7: 58-70) However, some sources suggest that it was in fact first introduced by Euler. Sources 1965: Seth Warner: Modern Algebra ... (previous) ... (next): $\S 18$ 1971: George E. Andrews: Number Theory ... (previous) ... (next): $\text {1-1}$ Principle of Mathematical Induction: Theorem $\text{1-2}$: Remark 1982: P.M. Cohn: Algebra Volume 1 (2nd ed.) ... (previous) ... (next): Chapter $2$: Integers and natural numbers: $\S 2.1$: The integers 1997: Donald E. Knuth: The Art of Computer Programming: Volume 1: Fundamental Algorithms (3rd ed.) ... (previous) ... (next): $\S 1.2.3$: Sums and Products: $(1)$
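As a programming aside (not part of the definition itself): the indexed summation is precisely a left fold of the binary operation \(+\) over the tuple, which a sketch makes concrete:

```python
from functools import reduce
import operator

def summation(a):
    """Left fold of + over the tuple (a_1, ..., a_n): ((a_1 + a_2) + ...) + a_n."""
    return reduce(operator.add, a, 0)

assert summation((1, 2, 3, 4)) == 10  # same as a_1 + a_2 + a_3 + a_4
```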
Search Now showing items 1-10 of 18 J/Ψ production and nuclear effects in p-Pb collisions at √sNN=5.02 TeV (Springer, 2014-02) Inclusive J/ψ production has been studied with the ALICE detector in p-Pb collisions at the nucleon–nucleon center of mass energy √sNN = 5.02TeV at the CERN LHC. The measurement is performed in the center of mass rapidity ... Suppression of ψ(2S) production in p-Pb collisions at √sNN=5.02 TeV (Springer, 2014-12) The ALICE Collaboration has studied the inclusive production of the charmonium state ψ(2S) in proton-lead (p-Pb) collisions at the nucleon-nucleon centre of mass energy √sNN = 5.02TeV at the CERN LHC. The measurement was ... Event-by-event mean pT fluctuations in pp and Pb–Pb collisions at the LHC (Springer, 2014-10) Event-by-event fluctuations of the mean transverse momentum of charged particles produced in pp collisions at s√ = 0.9, 2.76 and 7 TeV, and Pb–Pb collisions at √sNN = 2.76 TeV are studied as a function of the ... Centrality, rapidity and transverse momentum dependence of the J/$\psi$ suppression in Pb-Pb collisions at $\sqrt{s_{NN}}$=2.76 TeV (Elsevier, 2014-06) The inclusive J/$\psi$ nuclear modification factor ($R_{AA}$) in Pb-Pb collisions at $\sqrt{s_{NN}}$=2.76TeV has been measured by ALICE as a function of centrality in the $e^+e^-$ decay channel at mid-rapidity |y| < 0.8 ... Multiplicity Dependence of Pion, Kaon, Proton and Lambda Production in p-Pb Collisions at $\sqrt{s_{NN}}$ = 5.02 TeV (Elsevier, 2014-01) In this Letter, comprehensive results on $\pi^{\pm}, K^{\pm}, K^0_S$, $p(\bar{p})$ and $\Lambda (\bar{\Lambda})$ production at mid-rapidity (0 < $y_{CMS}$ < 0.5) in p-Pb collisions at $\sqrt{s_{NN}}$ = 5.02 TeV, measured ... 
Multi-strange baryon production at mid-rapidity in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV (Elsevier, 2014-01) The production of ${\rm\Xi}^-$ and ${\rm\Omega}^-$ baryons and their anti-particles in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV has been measured using the ALICE detector. The transverse momentum spectra at ... Measurement of charged jet suppression in Pb-Pb collisions at √sNN = 2.76 TeV (Springer, 2014-03) A measurement of the transverse momentum spectra of jets in Pb–Pb collisions at √sNN = 2.76TeV is reported. Jets are reconstructed from charged particles using the anti-kT jet algorithm with jet resolution parameters R ... Two- and three-pion quantum statistics correlations in Pb-Pb collisions at √sNN = 2.76 TeV at the CERN Large Hadron Collider (American Physical Society, 2014-02-26) Correlations induced by quantum statistics are sensitive to the spatiotemporal extent as well as dynamics of particle-emitting sources in heavy-ion collisions. In addition, such correlations can be used to search for the ... Exclusive J /ψ photoproduction off protons in ultraperipheral p-Pb collisions at √sNN = 5.02TeV (American Physical Society, 2014-12-05) We present the first measurement at the LHC of exclusive J/ψ photoproduction off protons, in ultraperipheral proton-lead collisions at √sNN=5.02 TeV. Events are selected with a dimuon pair produced either in the rapidity ... Measurement of quarkonium production at forward rapidity in pp collisions at √s=7 TeV (Springer, 2014-08) The inclusive production cross sections at forward rapidity of J/ψ , ψ(2S) , Υ (1S) and Υ (2S) are measured in pp collisions at s√=7 TeV with the ALICE detector at the LHC. The analysis is based on a data sample corresponding ...
Subharmonic solutions for a class of Lagrangian systems 1. Department of Mathematics, Faculty of Sciences, University of Monastir, 5019 Monastir, Tunisia 2. Faculty of Applied Physics and Mathematics, Gdańsk University of Technology, Narutowicza 11/12, 80-233 Gdańsk, Poland We prove that second order Hamiltonian systems $ -\ddot{u} = V_{u}(t,u) $ with a potential $ V\colon \mathbb{R} \times \mathbb{R}^N\to \mathbb{R} $ of class $ C^1 $, periodic in time and superquadratic at infinity with respect to the space variable have subharmonic solutions. Our intention is to generalise a result on subharmonics for Hamiltonian systems with a potential satisfying the global Ambrosetti-Rabinowitz condition from [ Mathematics Subject Classification: Primary: 37J45, 70H03; Secondary: 34C25, 34C37. Citation: Anouar Bahrouni, Marek Izydorek, Joanna Janczewska. Subharmonic solutions for a class of Lagrangian systems. Discrete & Continuous Dynamical Systems - S, 2019, 12 (7) : 1841-1850. doi: 10.3934/dcdss.2019121 References: [1] A. Abbondandolo, [2] A. Ambrosetti and V. Coti Zelati, [3] [4] K. Ch. Chang, [5] J. Ciesielski, J. Janczewska and N. Waterstraat, On the existence of homoclinic type solutions of inhomogenous Lagrangian systems, [6] [7] [8] M. Izydorek, Equivariant Conley index in Hilbert spaces and applications to strongly indefinite problems, [9] [10] M. Izydorek and J. Janczewska, The shadowing chain lemma for singular Hamiltonian systems involving strong forces, [11] J. Janczewska, An approximative scheme of finding almost homoclinic solutions for a class of Newtonian systems, [12] [13] [14] [15] P. H. Rabinowitz, [16] E. Serra, M. Tarallo and S. Terracini, On the existence of homoclinic solutions for almost periodic second order systems, [17]
Class 12 Physics is a very important subject for plus two students, and so they prepare ahead hoping to do well in the exams. They reach out to all resources, ranging from textbooks to class notes, solution books, previous year question papers, model question papers etc., so that they can study well and also get the practice needed to write the exams well. We at BYJU'S, catering to this requirement, have compiled some important questions for Class 12 Kerala Board students. We will also see here how the Kerala Plus Two Physics Important Questions can help students prepare for the upcoming board exams. They give students the confidence to write exams and to face the boards without fear, and students gain more practice in solving physics questions. The important question papers are created on the basis of the Kerala Board Plus Two Physics syllabus, thus covering all the important topics and concepts taught in the subject. Here is an overview of the important physics questions for Class 12 Kerala Board: 1. Two equal and opposite charges placed in air are as shown in the figure: (a) Redraw the figure and show the direction of the dipole moment (P) and the direction of the resultant electric field (E) at P. (b) Write an equation to find the electric field at P. 2. What do you mean by drift velocity? Write a relation between drift velocity and electric current. 3. A galvanometer is connected as shown in the picture below: This combination can be used as ___________________ (voltmeter/ rheostat/ ammeter) Derive an expression to find the value of resistance S. 4. The block diagram below shows the general form of a communication system Identify the blocks X and Y What is the difference between attenuation and amplification? 5. (a) Name the part of the electromagnetic spectrum: used in radar systems Produced by bombarding a metal target with high speed electrons (b) Electromagnetic waves are produced by ___________________ (charges at rest/ charges in uniform motion/ charges in accelerated motion) (c)
Why are only microwaves used in microwave ovens? 6. Two thin convex lenses of focal lengths f₁ and f₂ are placed in contact: If the object is on the principal axis, draw the ray diagram of the image formation by this combination of lenses Obtain a general expression for the effective focal length of the combination in terms of f₁ and f₂ 7. (a) Derive an expression for the self inductance of a solenoid (b) What do you mean by eddy current? Write any two applications of it. 8. (a) Name the different series of lines observed in the hydrogen spectrum (b) Draw the energy level diagram of the hydrogen atom 9. The symbol of a diode is given below: The diode is a __________ (rectifier diode/ photo diode/ zener diode) Draw the V-I characteristics of the above diode A zener diode with $V_Z$ = 6.0 V is used for voltage regulation. The current through the load is to be 4.0 mA and that through the zener diode is 20 mA. If the unregulated input is 10.0 V, what is the value of the series resistor R? What is the fundamental frequency of the ripple in a full wave rectifier circuit operating from 50 Hz mains? 10. Two spheres enclose charges as shown in the diagram below: Derive an expression for the electric field intensity at any point on the surface S₂ What is the ratio of the electric flux through S₁ and S₂? 11. (a) Choose the wrong option: (i) Volt = Weber/second (ii) Weber = Henry × Ampere (iii) Joule = Henry × Ampere² (iv) Volt = Weber × Second (b) The current in a coil of self-inductance 0.1 H varies from 2 A to 5 A in a time of 1 ms. Find the induced emf across the coil. 12. (a) Sound waves do not exhibit _______ (i) interference (ii) diffraction (iii) polarisation (iv) reflection (b) Describe Young's double slit experiment to determine the bandwidth of the interference pattern Or The intensity of scattered light in Rayleigh scattering is proportional to __________ Explain the diffraction pattern obtained due to a single slit and represent graphically the variation of intensity with angle of diffraction 13.
Choose the appropriate values for X-rays from the table below: Wavelength options: 1 mm, 1 m, 1 nm; Frequency options: 3 × 10¹⁷ Hz, 3 × 10⁸ Hz, 3 × 10²¹ Hz 14. (a) Define the half life period of a radioactive nucleus. Write down the relation connecting half life period and mean life. (b) Define 1 amu. Calculate its energy equivalent in MeV. 15. Write down the truth table of a NOR gate. 16. (a) The work function of a metal is 6 eV. If two photons, each of energy 4 eV, strike the metal surface: (i) will the emission be possible? (ii) why? (b) The waves associated with matter are called matter waves. Let \(\lambda_e\) and \(\lambda_p\) be the de-Broglie wavelengths associated with an electron and a proton respectively. If they are accelerated through the same potential, then (i) \(\lambda_e > \lambda_p\) (ii) \(\lambda_p > \lambda_e\) (iii) \(\lambda_e = \lambda_p\) (iv) \(\lambda_e^{-1}= \frac{1}{\lambda _{p}}\) 17. (a) The core of a transformer has the following properties: (i) the core is laminated (ii) the hysteresis loop is narrow Explain the significance of each property 18. (a) List any two limitations of the Bohr atom model. (b) According to de-Broglie's explanation of Bohr's second postulate of quantization, the standing particle wave on a circular orbit for n = 4 is given by: (i) \({2\pi r_{n}=4/\lambda}\) (ii) \(\frac{2\pi }{\lambda }= 4r_{n}\) (iii) \(2\pi r_{n}=4\lambda\) (iv) \(\frac{\lambda }{2\pi}= 4r_{n}\) 19. (a) What do you mean by the Q value of a nuclear reaction? (b) Write down the expression for the Q value in the case of decay (c) Two nuclei have mass numbers in the ratio 1:64. What is the ratio of their nuclear radii? 20. (a) State Gauss's Law for magnetism (b) How does this differ from Gauss's law for electrostatics? (c) Why is there a difference in the two cases? Students can also know more about the Kerala Board of Higher Secondary Education Class 12 exams from BYJU'S.
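As a worked sketch for question 9 (assuming the standard zener-regulator circuit, in which the series resistor carries both the load and zener currents; the function name is mine):

```python
def series_resistor(v_in, v_zener, i_load, i_zener):
    """R drops (v_in - v_zener) while carrying the load plus zener current."""
    return (v_in - v_zener) / (i_load + i_zener)

R = series_resistor(10.0, 6.0, 4e-3, 20e-3)
print(round(R, 1))  # about 166.7 ohms

# Full-wave rectifier ripple: twice the mains frequency
ripple_hz = 2 * 50
print(ripple_hz)  # 100 Hz
```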
I read in an old textbook that every chemical reaction is theoretically in equilibrium. If this is true, how can a reaction be one way? Yes, every chemical reaction can theoretically be in equilibrium. Every reaction is reversible. See my answer to chem.SE question 43258 for more details. This includes even precipitation reactions and reactions that release gases. Equilibrium isn't just for liquids! Multiphase equilibria exist. The only thing that stops chemical reactions from being "in equilibrium" is the lack of the proper number of molecules. For a reaction to be in equilibrium, the concentrations of reactants and products must be related by the equilibrium constant. $$ \ce{ A <=> B} $$ $$ K = \frac{[B]}{[A]} $$ When equilibrium constants are extremely large or small, then extremely large numbers of molecules are required to satisfy this equation. If $K = 10^{30}$, then at equilibrium there will be $10^{30}$ molecules of B for every molecule of A. Another way to look at this is that for equilibrium to happen, there need to be at least $10^{30}$ molecules of B, i.e. more than one million moles of B, in the system for there to be "enough" B to guarantee an equilibrium, i.e. to guarantee that there will be a well-defined "equilibrium" concentration of A. When this many molecules are not present, then there is no meaningful equilibrium. For very large (or very small) equilibrium constants, it will be very difficult to obtain an equilibrium. In addition to needing a megamole-sized system (or bigger), the system will have to be well-mixed, isothermal, and isobaric. That's not easy to achieve on such large scales! Update Commenters suggest that "irreversible" reactions do not have an equilibrium. This is true, but tautological. In the real world, all reactions are reversible, at least to a (perhaps vanishingly small) degree. To say otherwise would violate microscopic reversibility. A reaction that was 100% irreversible would have an equilibrium constant of infinity.
But if $K = \infty$, then $\Delta G^{\circ} = -RT \ln{K}$ would turn into $\Delta G^{\circ} = -\infty$. So to get infinite energy we would just have to use 100% irreversible reactions! Hopefully the problems with the idea of "irreversible" reactions are becoming apparent. Equilibrium can only apply to a closed system. Reactions which form insoluble precipitates, or gases which escape, do not exhibit the behavior of a closed system. Therefore, these reactions may not be in equilibrium. However, these claims are pragmatic rather than real. As it turns out, in the above answer, barium sulphate has a $K_{\mathrm{sp}}$ of $1.1 \times 10^{-10}$, so formally there is some small equilibrium concentration of dissolved barium sulphate, about $1.05 \times 10^{-5}$ mol/L. As gases escape from solution, they may be readsorbed, and thus there would be some small equilibrium for processes like that. But pragmatically, these reactions are not at equilibrium. Yes, every reaction is an equilibrium. A complete reaction is an equilibrium with a high equilibrium constant. If you write the expression for the equilibrium constant, you will find that a high equilibrium constant implies that the concentration of the products is very high, i.e. the reaction has reached completion. $$ \ce{BaCl2 (aq) + H2SO4 (aq) -> BaSO4 (s) + 2 HCl (aq)} $$ Let's just think about what (aq) means; it means you have ions floating about in there which are in equilibrium with their solids. If you start from thinking there is no precipitate, just dissolved ions, we have $\ce{Ba^{2+}}$, $\ce{H+}$, $\ce{Cl-}$, and $\ce{SO4^{2-}}$. Then you consider the $K_{\mathrm{s}}$ of the different salts, which are $\ce{BaCl2}$, $\ce{HCl}$, $\ce{BaSO4}$, and $\ce{H2SO4}$. They will all go to $K_{\mathrm{s}}$, so all the salts would be forming and dissolving unless blocked, e.g. things can become supersaturated. $\ce{BaSO4}$ has an extremely low $K_{\mathrm{s}}$ so most will precipitate at the same time.
$\ce{BaCl2}$ will go to $K_{\mathrm{s}}$ with the ions $\ce{Ba^{2+}}$ and $\ce{Cl-}$ which are still in solution, and $\ce{H2SO4}$ would also go to $K_{\mathrm{s}}$, which means $\ce{BaCl2}$ is being formed, and therefore there is a reverse reaction. Note that if I had $\ce{BaSO4}$ in water, which would be in equilibrium (so a tiny bit dissolved, i.e. tiny amounts of $\ce{Ba^{2+}}$ ions and $\ce{SO4^{2-}}$ ions), and I added $\ce{Cl-}$ ions, a negligible amount more $\ce{BaSO4}$ would dissolve, as $\ce{BaCl2}$ would go to equilibrium, reducing the $\ce{Ba^{2+}}$ ions and leading to negligible amounts of $\ce{BaSO4}$ dissolving to remain at $K_{\mathrm{s}}$. This also illustrates Le Chatelier's principle. No, not every reaction is in equilibrium with its products. Consider the following irreversible reaction: $$\ce{BaCl2(aq) + H2SO4(aq) -> BaSO4(ppt) + 2HCl(aq)}$$ By definition, if the reaction is irreversible then there is no equilibrium for that reaction. If there were an "equilibrium" for the reaction then the equation would be something like: $$K_{\mathrm{eq}} = \dfrac{\ce{[BaSO4][HCl]^2}}{\ce{[BaCl2][H2SO4]}}$$ and such an equilibrium just doesn't exist, since when the barium sulfate precipitates there could be a microgram or a kilogram as the product. Think of it another way - adding $\ce{HCl}$ (in dilute solution) or $\ce{BaSO4}$ won't shift the reaction to the left. (Adding more HCl would shift $\ce{HSO4^{-} <=> H+ + SO4^{2-}}$ in concentrated solutions, which is beside the point I'm trying to make.) There is a solubility product for barium sulfate, but the solubility product doesn't depend on the amount of barium sulfate precipitate, nor on the concentration of HCl. So the solubility product isn't for the overall reaction but rather for part of the system: $$\ce{[Ba^{2+}][SO4^{2-}]} = K_{\mathrm{sp}}$$ (Full disclosure - theoretically the barium sulfate solubility product wouldn't depend on the HCl concentration, but really that isn't quite true.
The barium sulfate solubility product really depends on the activity of the barium and sulfate ions, so the ionic strength of the solution matters.)
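The two numerical claims in this thread (the \(10^{30}\)-molecule argument and the barium sulphate figures) are easy to reproduce; a sketch, assuming ideal behaviour and a 1:1 salt so that \([\mathrm{Ba^{2+}}] = \sqrt{K_\mathrm{sp}}\):

```python
import math

K_sp = 1.1e-10                 # barium sulphate, (mol/L)^2
solubility = math.sqrt(K_sp)   # [Ba2+] = [SO4^2-] for a 1:1 salt
print(f"{solubility:.2e}")     # ~1.05e-05 mol/L, the figure quoted in the thread

N_A = 6.022e23                 # Avogadro's number, 1/mol
moles_needed = 1e30 / N_A      # molecules of B needed when K = 1e30
print(f"{moles_needed:.2e}")   # ~1.66e+06 mol: "more than one million moles"
```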
First, a trivial example that might anger you: Let $A_i$ be the observables of the Mermin-Peres square, and $a_i$ their non-contextual values. Then $\prod_i A_i = -\mathbb{1}$, but $\prod_i a_i = 1$, contradiction. In this case $f$ is multiplicative. But the same contradiction can be obtained considering $\prod_i A_i+\prod_i A_i = -2\mathbb{1}$ and $\prod_i a_i+\prod_i a_i = 2$, where $f$ is neither multiplicative nor additive. Now, a more interesting example, which I've found in a paper by Adán Cabello about inequalities for testing state-independent contextuality: Let $$A = \begin{pmatrix} Z \otimes \mathbb{1} & \mathbb{1} \otimes Z & Z \otimes Z \\ \mathbb{1} \otimes X & X \otimes \mathbb{1} & X \otimes X \\ Z \otimes X & X \otimes Z & Y \otimes Y \end{pmatrix}$$ be the Mermin-Peres square. If one ascribes non-contextual values $a_{ij} = \pm 1$ to the observables $A_{ij}$, one can then prove that$$ a_{11} a_{12} a_{13} + a_{21} a_{22} a_{23} + a_{31} a_{32} a_{33} \\+ a_{11} a_{21} a_{31} + a_{12} a_{22} a_{32} - a_{13} a_{23} a_{33} \le 4, $$whereas in quantum mechanics$$ \langle A_{11} A_{12} A_{13}\rangle + \langle A_{21} A_{22} A_{23}\rangle + \langle A_{31} A_{32} A_{33}\rangle \\+ \langle A_{11} A_{21} A_{31}\rangle + \langle A_{12} A_{22} A_{32}\rangle - \langle A_{13} A_{23} A_{33}\rangle = 6.$$The proof of the inequality may be done simply by enumerating the $2^9$ possibilities, if you're lazy, or by playing around with the triangle inequality. In either case, we have an $f$ that's neither additive nor multiplicative. Of course, in this case the contradiction takes the form of an inequality, instead of a definite value for non-contextual values. I guess then that they always used a multiplicative or additive $f$ because it's easier to construct this kind of contradiction, based on parity arguments. But I don't think there's anything fundamental to it.
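The six product identities behind the square can be checked directly with 4×4 matrices; a sketch with NumPy (my own, not from Cabello's paper):

```python
import numpy as np

# Single-qubit Pauli matrices
I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)

kron = np.kron
# The Mermin-Peres square as 4x4 operators
A = [[kron(Z, I2), kron(I2, Z), kron(Z, Z)],
     [kron(I2, X), kron(X, I2), kron(X, X)],
     [kron(Z, X),  kron(X, Z),  kron(Y, Y)]]

I4 = np.eye(4)
# Each row multiplies to +1, and so do the first two columns ...
for i in range(3):
    assert np.allclose(A[i][0] @ A[i][1] @ A[i][2], I4)
assert np.allclose(A[0][0] @ A[1][0] @ A[2][0], I4)
assert np.allclose(A[0][1] @ A[1][1] @ A[2][1], I4)
# ... but the third column multiplies to -1, which no assignment
# of non-contextual values +/-1 can reproduce
assert np.allclose(A[0][2] @ A[1][2] @ A[2][2], -I4)
print("all six product identities hold")
```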
Search Now showing items 1-2 of 2 Production of charged pions, kaons and protons at large transverse momenta in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV (Elsevier, 2014-09) Transverse momentum spectra of $\pi^{\pm}, K^{\pm}$ and $p(\bar{p})$ up to $p_T$ = 20 GeV/c at mid-rapidity, |y| $\le$ 0.8, in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV have been measured using the ALICE detector ... Beauty production in pp collisions at √s=2.76 TeV measured via semi-electronic decays (Elsevier, 2014-11) The ALICE Collaboration at the LHC reports measurement of the inclusive production cross section of electrons from semi-leptonic decays of beauty hadrons with rapidity |y|<0.8 and transverse momentum 1<pT<10 GeV/c, in pp ...
If dark matter has gravity just like normal matter, does that mean it can also form planets, solar systems and so on? Any answer will be appreciated. Planets and stars, no. Globular clusters and galaxies, yes. Small scales To condense into such relatively compact objects as planets, stars, and even the more diffuse star-forming clouds, particles need to be able to dissipate their energy. If they can't do this, their velocities prohibit them from forming anything. "Normal" particles, i.e. atoms, do this by colliding. When atoms collide, they're excited, and when they de-excite, they emit radiation which leaves the system, carrying away energy. In this way, an ensemble of particles can relax into a less energetic system, eventually condensing into e.g. a star. Additionally, the collisions cause more energetic particles to donate energy to the less energetic ones, making the ensemble reach thermodynamic equilibrium, i.e. all particles have the same energy on average. Dark matter is, by definition, unable to collide and radiate, and hence, on such small scales as stars and planets, a particle that enters a potential well with a given energy will maintain that energy. It will thus accelerate toward the center, then decelerate after its closest approach to the center, and finally leave the system with the same energy as before (if it was unbound to begin with). This makes it impossible for collisionless matter to form such small objects. Large scales On the scale of galaxies, however, various relaxation mechanisms allow dark matter to form structure. This is the reason that in pure N-body simulations of the Universe, such as the Millennium Simulation, you will see galaxies. The sizes of these structures depend on the resolution, but are measured in millions of Solar masses. The relaxation mechanisms include: Phase mixing This is sort of like galaxy arms winding up, but in phase space rather than real space.
Chaotic mixing This happens when particles come so close that their trajectories diverge exponentially. Violent relaxation The two mechanisms listed above assume a constant gravitational potential $\Phi$, but as the system relaxes, $\Phi$ changes, giving rise to an additional relaxation process. For instance, more massive particles tend to transfer more energy to their lighter neighbors and so become more tightly bound, sinking towards the center of the gravitational potential. This effect is known as mass segregation and is particularly important in the evolution of globular star clusters. Landau damping For a perturbation/wave with velocity $v_p$, if a particle comes with $v\gg v_p$, it will overtake the wave, first gaining energy as it falls into the potential, but later losing the same amount of energy as it climbs up again. The same holds for particles with $v\ll v_p$ which are overtaken by the wave. However, particles with $v\sim v_p$ (i.e. that are near resonance with the wave) may experience a net gain or loss of energy. Consider a particle with $v$ slightly larger than $v_p$. Depending on its phase when interacting with the wave, it will either be accelerated and move away from resonance, or decelerated and move closer to resonance. The latter particles interact more effectively with the wave (i.e. are decelerated for a longer time), and on average there is thus a net transfer of energy from particles with $v \gtrsim v_p$ to the wave. The opposite is true for particles with $v$ slightly smaller than $v_p$. You can read more about these mechanisms in Mo, van den Bosch & White's Galaxy Formation and Evolution.
I am trying to download the OpenJDK Java 8 runtime so I can download Minecraft, but I keep getting an authentication error saying I don't have permission and that I have to delete stuff. Is it possible to restrict certain app notifications from going to the Notification Center on Apple Watch? One use case: I’d like to receive phone calls on the watch, but I don’t want those unanswered calls to end up in Notification Center. Another: I’d like to get notified of calendar updates, but I don’t want them to end up in Notification Center. I am a newbie at this and I’m just trying to learn how to do different things. I want to move a character sprite in a horizontal line, toward the center of the screen when the primary mouse button is detected, and then have it move back to the edge of the screen when no input is detected. Any help would be greatly appreciated. Let $X_i$ be a sequence of iid random variables with $E[X] = 0$, $E[X^2] = 1$ and $E[|X|^k] < \infty$ for some $k \ge 3$. The classical local CLT says that the density function $f_n$ of $\frac{1}{\sqrt n}\sum_1^n X_i$ satisfies $$f_n(x) - \phi(x)\left(1 + \sum_{j=1}^{k-2} n^{-\frac{j}{2}}P_j(x)\right) = o\left(n^{-\frac{k-2}{2}}\right), \quad \phi(x) = \frac{1}{\sqrt{2\pi}}e^{-\frac{x^2}{2}},$$ where $P_j$ is some polynomial of order $j+2$, and the RHS is uniformly small for $x \in \mathbb{R}$. This gives us a very good estimate for constant $x$. My question is: can we get a similar expansion for $f_n(\sqrt n x)$? Since $\phi(\sqrt n x)$ decays faster than any polynomial order in $n$, we cannot apply the local CLT directly.
Remark: I consider this in order to estimate the following expression for $x \ne 0$ and $y$: $$\frac{1}{f_n(\sqrt n x)}f_{n-1}\left(\frac{nx + y}{\sqrt{n-1}}\right), \quad n \text{ sufficiently large}.$$ When $x = 0$, by the local CLT this is bounded by $1 + C(1+y^2)/n + o(1/n)$. If $x \ne 0$, I expect the upper bound $$\exp\left\{-\frac{x^2}{2} - xy\right\}\left[1 + \frac{C(x^2 + y^2)}{n} + o\left(\frac{1}{n}\right)\right].$$ I have a record center that is used for my send-to connection to archive documents in my farm. I have created a content organizer rule to move documents to specific document libraries based on their content type. The rule works: my documents are moved to their designated libraries. However, I see a weird behaviour: for all my documents, the ModifiedBy field gets the value of the account that created the content organizer rule, in this case my user account. The Modified and Created dates also get changed. Shouldn’t the ModifiedBy and other fields keep the values from the document source? If I use < to represent the setting “align left” and > to represent the setting “align right”, what symbol should I use for “align center”? Is there any de facto standard for this? The symbol has to be an ASCII character the user can type on a US keyboard layout. Context I am working on the command line interface of a console application for Unix-like operating systems that can output pseudographical tables, like so: ┌─────┬─────┬────────┬─────┐ │ PID │ TTY │ TIME │ CMD │ ├─────┼─────┼────────┼─────┤ │ 8580│pts/1│00:00:00│ ps │ ├─────┼─────┼────────┼─────┤ │28075│pts/1│00:00:01│ zsh │ └─────┴─────┴────────┴─────┘ The application can be told to align the text in each column left, right or center. To make it do that, the user gives it the command line option -align LIST, where LIST is a list of the words “left”, “right” or “center” in which each word corresponds to one column, e.g., -align 'left left right center'.
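The option parsing described above can be sketched in a few lines of Python. This is purely illustrative: the `|` entry for "center" is a hypothetical candidate (the question is precisely about which symbol to pick), and all names are made up.

```python
# Hypothetical parser for the -align option described above. '<' and '>' are
# the question's "left"/"right" symbols; '|' is only one candidate for
# "center", not an established convention.

ALIGN_MAP = {
    "left": "left", "l": "left", "<": "left",
    "right": "right", "r": "right", ">": "right",
    "center": "center", "c": "center", "|": "center",
}

def parse_align(spec):
    """Turn an -align value like 'l r | center' into a list of alignments."""
    try:
        return [ALIGN_MAP[token] for token in spec.split()]
    except KeyError as exc:
        raise ValueError(f"unknown alignment {exc.args[0]!r}") from None
```

For example, `parse_align("left l <")` yields `["left", "left", "left"]`, so the word, letter, and symbol forms can be mixed freely.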
I found that having to write each word in full takes too much effort. I intend to introduce l (a small L), r and c as shortcuts (which has precedent in LaTeX), but I also want to offer another, more graphical set of shortcut characters that would be easier to understand at a glance, say, when reading a shell script. Since using <, > for “left” and “right” respectively seems inevitable, I am looking for the third, unknown symbol to go with those two. The iOS Control Center contains two rows of buttons at the top and bottom of the area. The previous implementation of it made more sense to me, as the top buttons were switchers, and the bottom buttons were mostly launchers (except for the flashlight, which feels out of place). (source: wikimedia.org) The most recent update introducing the Night Shift feature brought a new button to the Control Center. It’s now located in the middle of the bottom row and is responsible for toggling that feature on/off. I can see how the Flashlight could have been a one-time trade-off, because on the other hand the interaction span with that particular feature is supposed to be short: you launch it, quickly use it, and get back to whatever you were doing (just like with the Camera, Calculator or Timer). But now I don’t understand the logic behind those placement decisions completely. The Night Shift button is definitely a switcher, it can stay on for a long period of time, and it does feel like it belongs to the area where most of the switchers are. I do realise that the area would become too crowded, but then again, it is possible to have two rows of icons in there, with the secondary ones grouped in a collapsible/expandable area, just like the one that lets you act on a banner notification (e.g. reply to a text message). That would also make it possible to include the switches for the Low Power Mode, Cellular Data and Auto Brightness in that quick access area, this way making it even more feature-rich.
Before you tell me this actually belongs to Apple’s feedback website, let me finally ask my question: is there any logic behind this placement? It just doesn’t feel right, consistent or predictable. Yet I’m sure they know what they’re doing, which makes me wonder if I’m missing something. I did a search before asking to make sure this is not going to be a duplicate. The email says, “Your UK visa application has been dispatched from the UK Visa Section.” and I was waiting for my passport to be mailed back through the UPS return label I put into the package. But it’s been 4 days and the label is still not used, and my passport has not been mailed back. What should I do?! Please help. I’m in big trouble…
Hydroquinone, used as a photographic developer, is 65.4% C, 5.5% H, and 29.1% O by mass. What is the empirical formula of hydroquinone? Solution: In most cases, when a question asks us to find an empirical formula without giving us a specific sample size, it’s best to assume a sample size of 100 g. It makes the math easy and leads us to a solution more quickly. Thus, we will assume there is a 100 g sample of hydroquinone. We will now multiply by the percentage compositions to figure out the number of grams of each individual element. Grams of carbon: \(100\,g \times 65.4\% = 65.4\,g\) Grams of hydrogen: \(100\,g \times 5.5\% = 5.5\,g\) Grams of oxygen: \(100\,g \times 29.1\% = 29.1\,g\) Now, we will convert each of these masses to moles by dividing by its specific molar mass. \(\text{Mol C} = 65.4\,g\,\text{C}\times \frac{1\,\text{mol C}}{12.01\,g\,\text{C}} = 5.445\,\text{mol}\) \(\text{Mol H} = 5.5\,g\,\text{H}\times \frac{1\,\text{mol H}}{1.008\,g\,\text{H}} = 5.46\,\text{mol}\) \(\text{Mol O} = 29.1\,g\,\text{O}\times \frac{1\,\text{mol O}}{15.999\,g\,\text{O}} = 1.819\,\text{mol}\) To figure out the integers of the empirical formula, simply divide each value by the smallest mole number we found, which was 1.819. \(C = \frac{5.445}{1.819}\approx 3\) (rounded to the closest integer) \(H = \frac{5.46}{1.819}\approx 3\) \(O = \frac{1.819}{1.819} = 1\) The empirical formula is therefore \(C_3H_3O\).
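The percent-to-moles-to-integers arithmetic above can be written as a short script; the molar masses and the 100 g sample assumption are taken straight from the worked solution.

```python
# Empirical formula from mass percents, following the steps worked out above.

molar_mass = {"C": 12.01, "H": 1.008, "O": 15.999}   # g/mol
percent = {"C": 65.4, "H": 5.5, "O": 29.1}           # % by mass

# Assume a 100 g sample, so percents become grams, then convert to moles.
moles = {el: percent[el] / molar_mass[el] for el in percent}

# Divide by the smallest mole count and round to the nearest integer.
smallest = min(moles.values())
subscripts = {el: round(n / smallest) for el, n in moles.items()}
# subscripts is {"C": 3, "H": 3, "O": 1}, i.e. the empirical formula C3H3O
```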
I am learning the finite element method (Galerkin method) for solving ODEs/PDEs. When searching this topic, I often see examples using the hat function UnitTriangle[x] as the basis function of the Galerkin approximation. I understand that the Galerkin method expresses the objective function as a sum of basis functions with coefficients, and solves the algebraic equations that result from integrating the residual against each basis function. Something like: f is the approximate function (solution), u is the true (target) function approximated by f, and R is the residual, for example $R[f(x)]=(u(x)-f(x))^2$. Then $f=a_1 \phi_1+a_2 \phi_2+a_3 \phi_3+\dots+a_n \phi_n$ where $\int_{R}\phi_i \phi_j =0$ if $i\neq j$, and for all $j=1,2,\dots,n$: $\int_{R}R[f(x)]\phi_j=0$. However, this is contrary to my intuition, because a linear combination of hat functions doesn't seem to provide a practical approximation of the target function that can be used for the Galerkin method. Below is an example which (by my reasoning) can't be used for the Galerkin method:

(*Hat function*)
kernel[j_] := UnitTriangle[x - j]
(*candidate solution*)
f = Total@Table[c[j]*kernel[j], {j, -10, 10, 1}];
(*L2 norm between target function and candidate solution*)
L2norm = Total[Power[Table[Sin[j] - (f /. x -> j), {j, -10, 10, 1}], 2]];
sol = Last@NMinimize[L2norm, Table[c[j], {j, -10, 10, 1}]];
Plot[{Sin[x], f /. sol}, {x, -10, 10}]

In the above, the inner product of adjacent basis functions isn't 0:

Table[(*inner product of basis functions*)
 NIntegrate[kernel[j]*kernel[j + 1], {x, -10, 10}], {j, -10, 9}]

{0.166667, 0.166667, 0.166667, 0.166667, 0.166667, 0.166667, 0.166667, 0.166667, 0.166667, 0.166667, 0.166667, 0.166667, 0.166667, 0.166667, 0.166667, 0.166667, 0.166667, 0.166667, 0.166667, 0.166667}

Next is an example which can be used for the Galerkin method, where the inner products of the basis functions are always 0:
(*Hat function*)
kernel[j_] := UnitTriangle[x - j]
(*candidate solution*)
f = Total@Table[c[j]*kernel[j], {j, -10, 10, 2}];
(*L2 norm between target function and candidate solution*)
L2norm = Total[Power[Table[Sin[j] - (f /. x -> j), {j, -10, 10, 1}], 2]];
sol = Last@NMinimize[L2norm, Table[c[j], {j, -10, 10, 2}]];
Plot[{Sin[x], f /. sol}, {x, -10, 10}]

Table[(*inner product of basis functions*)
 NIntegrate[kernel[j]*kernel[j + 2], {x, -10, 10}], {j, -10, 8}]

{0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.}

It's clear something is wrong...
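For what it's worth, the overlapping-hats case can be tested with an actual Galerkin (L2-projection) system: orthogonality of the basis is not required, only that the residual be orthogonal to each basis function, which yields a tridiagonal rather than diagonal Gram ("mass") matrix. A NumPy sketch with unit node spacing (all names are illustrative, not the questioner's code):

```python
# L2 (Galerkin) projection of sin(x) onto overlapping hat functions.
# The Gram matrix G_ij = integral of phi_i * phi_j is tridiagonal
# (2/3 on the diagonal, 1/6 next to it for unit spacing) and solvable.
import numpy as np

nodes = np.arange(-10, 11)                       # one hat per integer node

def hat(j, x):
    return np.maximum(0.0, 1.0 - np.abs(x - j))  # UnitTriangle[x - j]

x = np.linspace(-10, 10, 20001)
dx = x[1] - x[0]
B = np.array([hat(j, x) for j in nodes])         # basis sampled on the grid

G = (B @ B.T) * dx                               # Gram ("mass") matrix
rhs = (B @ np.sin(x)) * dx                       # load vector
c = np.linalg.solve(G, rhs)                      # Galerkin coefficients
approx = c @ B                                   # piecewise-linear approximant
```

The overlap integrals of adjacent hats come out as 1/6, not 0, yet the projection still reproduces sin(x) to within the accuracy expected of a piecewise-linear basis with this spacing.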
This is really embarrassing, but I'm not quite sure where I'm going wrong here... Why is this calculation of the gravitational potential inside a sphere with uniform mass distribution incorrect? Set-Up Let's say the sphere has mass $M$ and radius $R$ (and uniform mass density $\mu$), and what we want to find is the potential at any distance $r$ from the center of the sphere, where $r<R$. We normalize the potential to zero at infinity. Calculation The potential $\phi(r)$ is equal to the potential right outside of the sphere, plus the potential difference between some point inside the sphere and a point right outside. $$ \phi(r)=\phi_0-\int_R^r \frac{\mu G}{r}dV $$ (Sorry for using $r$ for the upper limit of the integral as well as for the variable in the integrand. Hopefully this doesn't cause confusion.) Now to figure out the different pieces of the above equation. The potential right outside of the sphere is: $$\phi_0=-\frac{MG}{R}$$ The differential volume element can be expressed as the constant-potential spherical shell's surface area times the shell's differential width: $$dV=4\pi r^2 dr$$ And one final detail, the mass density of the sphere: $$\mu=\frac{3M}{4\pi R^3}$$ Using this information, $$\phi(r)=-\frac{MG}{R}-\frac{3MG}{R^3}\int_R^r r dr$$ $$\phi(r)=-\frac{MG}{R^3}\left[R^2+\frac{3r^2}{2}-\frac{3R^2}{2}\right]$$ $$\phi(r)=-\frac{MG}{2R^3}(3r^2-R^2)$$ Conclusion This result disagrees with a few places I've visited, like this one, which states that the correct result (in terms of the variables I've used) is $$\phi(r)=-\frac{MG}{2R^3}(3R^2-r^2)$$ Both results give the same potential at $r=R$, obviously, but my result starts to look ridiculous for values like $r=R/2$. The only part in my calculation that seems sketchy to me is that first equation, where I talk about the potential difference at points inside and outside of the sphere; I don't know if it's correct to be dividing by $r$ in the integrand...
Or maybe I just made a stupid algebra mistake somewhere in there. Where did I go wrong?
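For reference, the two candidate formulas can be checked against a direct numerical integration of $d\phi/dr = GM_{\rm enc}(r)/r^2$ inside the sphere, starting from $\phi(R) = -GM/R$. A sketch in Python with illustrative units $G = M = R = 1$ (the function names are mine):

```python
# Numerical cross-check, in units G = M = R = 1. Inside a uniform sphere
# M_enc(r) = M r^3 / R^3, so dphi/dr = G M r / R^3.

G = M = R = 1.0

def phi_numeric(r, steps=100_000):
    """phi(r) = phi(R) minus the integral of (G M s / R^3) ds from r to R."""
    total = 0.0
    ds = (R - r) / steps
    s = r
    for _ in range(steps):
        total += G * M * (s + ds / 2) / R**3 * ds   # midpoint rule
        s += ds
    return -G * M / R - total

def phi_textbook(r):                 # the result quoted by the linked source
    return -G * M / (2 * R**3) * (3 * R**2 - r**2)

def phi_question(r):                 # the result derived in the question
    return -G * M / (2 * R**3) * (3 * r**2 - R**2)
```

At $r = R/2$ the numerical integration gives $-1.375$ in these units, agreeing with the quoted textbook formula rather than with the question's expression.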
Evaluation of color characteristics OptiLayer provides calculations of color properties in almost all existing color coordinate systems. You can view color coordinates in graphical or tabular form. The light source, detector, observer, integration step, reference white and incident angle used for color evaluation can all be specified. OptiLayer provides a set of powerful options and color targets allowing you to design coatings with specified color properties. Tristimulus values and chromaticities XYZ CIE 1931 color space. Color coordinates are called tristimulus values X, Y, Z and are determined as: \(X=\int\limits_{380 nm}^{780 nm} x(\lambda) E(\lambda) d\lambda \) \(Y=\int\limits_{380 nm}^{780 nm} y(\lambda) E(\lambda) d\lambda \) \(Z=\int\limits_{380 nm}^{780 nm} z(\lambda) E(\lambda) d\lambda \) Here \(x(\lambda),\; y(\lambda),\; z(\lambda) \) are the color basis functions. All colors can be represented on the chromaticity diagram. The chromaticity of a color can be specified by two parameters x and y, which are functions of the tristimulus values X, Y, Z: \(x=\displaystyle \frac{X}{X+Y+Z},\;\;y=\frac{Y}{X+Y+Z},\;\;z=1-x-y \) Y is the luminance factor. The larger Y is, the brighter the color. Corresponding numerical values of color coordinates can be observed in spreadsheets.
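The chromaticity definitions above translate directly into code; a minimal sketch of the formulas (not OptiLayer's API):

```python
# Chromaticity (x, y, z) from tristimulus values (X, Y, Z), as defined above.

def chromaticity(X, Y, Z):
    """Return (x, y, z); by construction x + y + z == 1."""
    total = X + Y + Z
    x = X / total
    y = Y / total
    return x, y, 1.0 - x - y
```

For instance, the standard D65 reference white, with tristimulus values of roughly (95.047, 100.0, 108.883), maps to x ≈ 0.3127, y ≈ 0.3290.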
Title: Stability for a System of N Fermions Plus a Different Particle with Zero-Range Interactions. Publication Type: Journal Article. Year of Publication: 2012. Authors: Correggi, M, Dell'Antonio, G, Finco, D, Michelangeli, A, Teta, A. Journal: Rev. Math. Phys. 24 (2012), 1250017. Abstract: We study the stability problem for a non-relativistic quantum system in dimension three composed by $N \geq 2$ identical fermions, with unit mass, interacting with a different particle, with mass $m$, via a zero-range interaction of strength $\alpha \in \mathbb{R}$. We construct the corresponding renormalised quadratic (or energy) form $\mathcal{F}_\alpha$ and the so-called Skornyakov-Ter-Martirosyan symmetric extension $H_{\alpha}$, which is the natural candidate as Hamiltonian of the system. We find a value of the mass $m^*(N)$ such that for $m > m^*(N)$ the form $\mathcal{F}_\alpha$ is closed and bounded from below. As a consequence, $\mathcal{F}_\alpha$ defines a unique self-adjoint and bounded from below extension of $H_{\alpha}$ and therefore the system is stable. On the other hand, we also show that the form $\mathcal{F}_\alpha$ is unbounded from below for $m < m^*(2)$. In analogy with the well-known bosonic case, this suggests that the system is unstable for $m < m^*(2)$ and the so-called Thomas effect occurs. URL: http://hdl.handle.net/1963/6069 DOI: 10.1142/S0129055X12500171
Example: In a factory there are two machines, Machine $A$ and Machine $B$, producing an item. We are told that Machine $A$ produces $30\%$ of the items and $3\%$ of the items it produces are defective. Machine $B$ produces $70\%$ of the items and $4\%$ of the items produced by Machine $B$ are defective. If an item is drawn and found to be defective, what is the probability that Machine $B$ produced it? Solution: Problems like these make use of Bayes' theorem. Let us see how we can use it. Let us check if we can partition the sample space into mutually exclusive and exhaustive partitions. Let $A_1$ be the event of producing items on Machine $A$. Let $A_2$ be the event of producing items on Machine $B$. Let $B$ be the event of drawing a defective item. We can partition $B$ into $A_1 \cap B$ and $A_2 \cap B$. Together they cover the whole event $B$, and they are mutually exclusive and exhaustive. We have already selected a defective item and want to know the probability that Machine $B$ produced it. The formula can be written as $P(A_i \mid B)$ = $\frac{P(A_i)\, P(B \mid A_i)}{P(A_1) P(B \mid A_1) + P(A_2) P(B \mid A_2) + \dots + P(A_n) P(B \mid A_n)}$ Note that the denominator depends on the number of events. $P(A_1)$ = $0.3$ and $P(B \mid A_1)$ = $0.03$ $P(A_2)$ = $0.7$ and $P(B \mid A_2)$ = $0.04$ Now we apply our formula: $P(A_2 \mid B)$ = $\frac{(0.7)(0.04)}{(0.3)(0.03) + (0.7)(0.04)}$ $P(A_2 \mid B)$ = $\frac{0.028}{0.009+0.028}$ = $\frac{0.028}{0.037}$ = $0.7568$ This tells us the probability that the defective item was manufactured by Machine $B$ is $75.68\%$.
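The two-machine example above can be verified with a few lines of Python, computing the total probability of a defect and the posterior for Machine $B$:

```python
# Bayes' theorem for the two-machine example above.

prior = {"A": 0.3, "B": 0.7}          # P(machine produced the item)
defect_rate = {"A": 0.03, "B": 0.04}  # P(defective | machine)

# Total probability of drawing a defective item.
p_defective = sum(prior[m] * defect_rate[m] for m in prior)   # 0.037

# Posterior probability that Machine B produced the defective item.
posterior_B = prior["B"] * defect_rate["B"] / p_defective     # ~0.7568
```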
Spring 2018, Math 171 Week 6 Exit Distributions A person is terminally ill. On a day when the person is awake, there is a 0.2 chance they will die overnight, and they are equally likely to be awake or unconscious the next day. On a day when the person is unconscious, there is a 0.2 chance they will be awake the next day, and they are equally likely to stay unconscious or die. Let \(X_n\) be the person’s state on day \(n\) (awake, unconscious, or dead). Show that \((X_n)_{n\ge 0}\) is a Markov chain. Find its transition matrix. (Answer) \[\begin{matrix}& \mathbf A & \mathbf U & \mathbf D \cr \mathbf A & 0.4 & 0.4 & 0.2 \cr \mathbf U & 0.2 & 0.4 & 0.4 \cr \mathbf D & 0 & 0 & 1\end{matrix}\] Compute the probability that the person spends at least one day awake before dying given that they are initially unconscious. (Answer) \(\frac{0.2}{1-0.4} = \frac 1 3\) Compute the expected number of days the person will spend awake before dying given that they are initially unconscious. (Answer) \(\frac 5 7\) In the Markov chain corresponding to the following transition matrix \[\begin{matrix} & \mathbf 1 & \mathbf 2 & \mathbf 3 & \mathbf 4 \cr \mathbf 1 & 0.1 & 0.5 & 0.2 & 0.2 \cr \mathbf 2 & 0.2 & 0.4 & 0.4 & 0 \cr \mathbf 3 & 0 & 0.5 & 0.3 & 0.2 \cr \mathbf 4 & 0.5 & 0.5 & 0 & 0 \end{matrix}\] compute the probability that the chain reaches state 1 before it reaches state 4 for each starting state. (Answer) Define \(h(x)=P_x(T_1 < T_4)\).
Then \(h(1)=1\), \(h(4)=0\), and \[\begin{bmatrix}h(2) \cr h(3)\end{bmatrix} = \left(I - \begin{bmatrix}0.4 & 0.4 \cr 0.5 & 0.3\end{bmatrix}\right)^{-1}\begin{bmatrix}0.2 \cr 0\end{bmatrix}\] (Discussed) In the Markov chain corresponding to the following transition matrix \[\begin{matrix} & \mathbf 1 & \mathbf 2 & \mathbf 3 & \mathbf 4 & \mathbf 5 \cr \mathbf 1 & 0.1 & 0.5 & 0.2 & 0.2 & 0 \cr \mathbf 2 & 0.4 & 0.2 & 0.3 & 0 & 0.1\cr \mathbf 3 & 0 & 0.5 & 0.3 & 0.2 & 0\cr \mathbf 4 & 0.2 & 0 & 0.5 & 0.1 & 0.2 \cr \mathbf 5 & 0.1 & 0.1 & 0.2 & 0.1 & 0.5 \end{matrix}\] Compute the probability that the chain reaches states 1 or 2 before it reaches state 5 for each starting state. Exit Times In the Markov chain from 1.2 compute the expected time taken to reach state 4 from each of the other states. (Answer) \(g(x)=\mathbb E_x[T_4]\). Then \(g(4)=0\) and \[\begin{bmatrix}g(1) \cr g(2) \cr g(3)\end{bmatrix} = \left(I - \begin{bmatrix}0.1 & 0.5 & 0.2 \cr 0.2 & 0.4 & 0.4 \cr 0 & 0.5 & 0.3 \end{bmatrix}\right)^{-1} \begin{bmatrix}1 \cr 1 \cr 1\end{bmatrix}\] (Discussed) In the Markov chain from 1.3 compute the expected time taken to reach either of states 2 or 5 from each of the other states. That is, \(\mathbb E_x[T]\) where \(T = \min \{n \ge 0 \mid X_n \in \{2, 5\}\}\)
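The exit-distribution formula \(h = (I - Q)^{-1} b\) used in the answer above can be checked numerically for the 4-state chain; a NumPy sketch (the state ordering and variable names are mine):

```python
# h = (I - Q)^{-1} b for the chain in 1.2: Q restricts the transition
# matrix to the non-absorbing states {2, 3}; b holds the one-step
# probabilities of jumping directly to state 1.
import numpy as np

Q = np.array([[0.4, 0.4],
              [0.5, 0.3]])     # rows/columns: states 2, 3
b = np.array([0.2, 0.0])

h = np.linalg.solve(np.eye(2) - Q, b)   # h = [h(2), h(3)] = [7/11, 5/11]
```

So starting from state 2 the chain reaches state 1 before state 4 with probability 7/11, and from state 3 with probability 5/11.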
Jets subgroup

Goal of the group

Group Contacts: Daniele del Re (CMS), Bruce Mellado (ATLAS), Gavin Salam (Theory), Frank Tackmann (Theory)

Available Tools

Settings for comparison studies

H+2-jet NLO+shower comparison study

Process/Inputs:
===============
* pure gg -> H + 2j NLO+shower at parton level
* m_H = 125 GeV
* \sqrt{s} = 8 TeV
* strict mtop -> \infty limit (no mtop rescaling or bottom interference)
* on-shell stable Higgs
* MSTW2008 NLO PDFs (68CL) (always also at LO) with its alphas(mZ) = 0.12018, nf = 5, 2-loop running
* \mu_r = \mu_f = \mu = m_H in (N)LO hard matrix element corrections

Jet cuts:
=========
(\eta is pseudorapidity, y is rapidity)
a) Jet selection:
* anti-kT with R = 0.4
* at least two jets with pTj > 25 GeV and |\eta_j| < 5
b) VBF cuts:
* \delta y_jj = |y_j1 - y_j2| > 2.8
* m_jj > 400 GeV

Distributions:
==============
All distributions with
* only cuts a) and cuts a)+b)
* at LO and NLO each with \mu = {2 m_H, m_H, m_H/2}
1) pTj1 [0, 300] in steps of 5 GeV
2) pTj2 [0, 300] in steps of 5 GeV
3) yj1 [-5, 5] in steps of 0.5
4) yj2 [-5, 5] in steps of 0.5
5) |\delta y_jj| [0, 10] in steps of 0.5
6) m_jj [0, 1000] in steps of 40 GeV
7) \Delta\phi_jj [0, Pi] in steps of Pi/20
8) pTj3 [0, 300] in steps of 5 GeV
9) yj3 [-5, 5] in steps of 0.5
10) |\eta_H - 0.5(\eta_j1 + \eta_j2)| [0, 10] in steps of 0.5
11) pT_{Hjj} = |\vec{p}_{TH} + \vec{p}_{Tj1} + \vec{p}_{Tj2}| [0, 300] in steps of 5 GeV
12) Pi - \Delta\phi_{H,jj} [0, 1.5] in steps of 0.05

Results

First comparisons can be found here.

Guidelines for UE related uncertainties

The following are guidelines for the estimation of UE related uncertainties in ggF and VBF processes:
1) Turn UE on/off for the nominal default tune (expect ~10-20% variations depending on selection cuts and tune)
2) Cross check on/off effect for alternative tunes (that are also deemed "reasonable")
3) Cross checks can include the use of tunes performed within a common framework but using different PDFs (eg. NLO v.
LO, as is the case with AU2-CT10 and AU2-CTEQ6L1)

First studies can be found here, and these studies will continue past the Moriond conferences. Until these guidelines can be implemented, the following temporary uncertainties are suggested:
1) For ggF+2j use 30% uncertainty
2) For VBF use 7% uncertainty
These uncertainties pertain to the normalization of both samples in the loose and tight VBF categories.

References

Meetings

Links

-- ReiTanaka - 02-May-2012
In the late 1960s, the strongly interacting particles were a jungle. Protons, neutrons, pions, kaons, lambda hyperons, other hyperons, additional resonances, and so on. It seemed like dozens of elementary particles that strongly interacted. There was no order. People thought that quantum field theory had to die. However, they noticed regularities such as Regge trajectories. The minimal mass of a particle of spin $J$ went like $$M^2 = aJ + b,$$ i.e. the squared mass is a linear function of the spin. This relationship was confirmed phenomenologically for a couple of the particles. In the $M^2$-$J$ plane, you had these straight lines, the Regge trajectories. Building on this and related insights, Veneziano "guessed" a nice formula for the scattering amplitude of the $\pi+\pi \to \pi+\rho$ process, or something like that. It had four mesons and one of them was different. His first amplitude was the Euler beta function $$M = \frac{\Gamma(u)\Gamma(v)}{\Gamma(u+v)},$$ where $\Gamma$ is the generalized factorial and $u,v$ are linear functions of the Mandelstam variables $s,t$ with fixed coefficients again. This amplitude agrees with the Regge trajectories because $\Gamma(x)$ has poles at all non-positive integers. These poles in the amplitude correspond to the exchange of particles in the $s,t$ channels. One may show that if we expand the amplitude into its residues, the exchanged particles' maximum spin is indeed a linear function of the squared mass, just like in the Regge trajectory. So why are there infinitely many particles that may be exchanged? Susskind, Nielsen, Yoneya, and maybe others realized that there has to be "one particle" of a sort that may have any internal excitations, like the hydrogen atom. Except that the simple spacing of the levels looked much easier than the hydrogen atom: it was like harmonic oscillators. Infinitely many of them were still needed.
They ultimately realized that if we postulate that the mesons are (open) strings, you reproduce the whole Veneziano formula because of an integral that may be used to define it. One of the immediate properties that the "string concept" demystified was the "duality" in the language of the 1960s, currently called the "world sheet duality". The amplitude $M$ above is $u,v$-symmetric. But it can be expanded in terms of poles for various values of $u$; or various values of $v$. So it may be calculated as a sum of exchanges purely in the $s$-channel; or purely in the $t$-channel. You don't need to sum up diagrams with the $s$-channel or with the $t$-channel: one of them is enough! This simple principle, one that Veneziano actually correctly guessed to be a guiding principle for his search for the meson amplitude, is easily explained by string theory. The diagram in which 2 open strings merge into 1 open string and then split may be interpreted as a thickened $s$-channel graph; or a thickened $t$-channel graph. There's no qualitative difference between them, so they correspond to a single stringy integral for the amplitude. This is more general: one stringy diagram usually reduces to the sum of many field-theoretical Feynman diagrams in various limits. String theory automatically resums them. Around 1970, many things worked for the strong interactions in the stringy language. Others didn't. String theory turned out to be too good; in particular, it was "too soft" at high energies (the amplitudes decrease exponentially with energy). QCD and quarks emerged. Around the mid 1970s, 't Hooft wrote his famous paper on large $N$ gauge theory, in which some strings emerge, too. Only in 1997 were these hints made explicit by Maldacena, who showed that string theory was the right description of a gauge theory (or many of them) at the QCD scale, after all: the relevant target space must however be higher-dimensional and be an anti de Sitter space.
In AdS/CFT, many of the original strategies, e.g. the assumption that mesons are open strings of a sort, get revived and become quantitatively accurate. It just works. Of course, meanwhile, around the mid 1970s, it was also realized that string theory was primarily a quantum theory of gravity, because the spin 2 massless modes inevitably exist and inevitably interact via general relativity at long distances. In the early and mid 1980s, it was realized that string theory includes the right excitations and interactions to describe all particle species and all forces we know in Nature, and this insight could not be undone later. Today, we know that the original motivation of string theory wasn't really wrong: it was just trying to use non-minimal compactifications of string theory. Simpler vacua of string theory explain gravity in a quantum language long before they explain the strong interactions. This post imported from StackExchange Physics at 2014-03-17 04:01 (UCT), posted by SE-user Luboš Motl
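The $u \leftrightarrow v$ symmetry of the Veneziano amplitude quoted above, the "world-sheet duality" in action, is easy to check numerically with the standard Gamma function; a minimal sketch:

```python
# Euler beta amplitude M(u, v) = Gamma(u) Gamma(v) / Gamma(u + v).
# Symmetric in u and v, so s-channel and t-channel expansions agree;
# its poles sit at non-positive integer values of u or v.
from math import gamma

def veneziano(u, v):
    return gamma(u) * gamma(v) / gamma(u + v)
```

For example, `veneziano(1.0, 1.0)` is 1, and swapping the two arguments never changes the value.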
For a compact Riemannian locally symmetric space $\mathcal M$ of rank one and an associated vector bundle $\mathbf V_\tau$ over the unit cosphere bundle $S^\ast\mathcal M$, we give a precise description of those classical (Pollicott-Ruelle) resonant states on $\mathbf V_\tau$ that vanish under covariant derivatives in the Anosov-unstable directions of the chaotic geodesic flow on $S^\ast\mathcal M$. In particular, we show that they are isomorphically mapped by natural pushforwards into generalized common eigenspaces of the algebra of invariant differential operators $D(G,\sigma)$ on compatible associated vector bundles $\mathbf W_\sigma$ over $\mathcal M$. As a consequence of this description, we obtain an exact band structure of the Pollicott-Ruelle spectrum. Further, under some mild assumptions on the representations $\tau$ and $\sigma$ defining the bundles $\mathbf V_\tau$ and $\mathbf W_\sigma$, we obtain a very explicit description of the generalized common eigenspaces. This allows us to relate classical Pollicott-Ruelle resonances to quantum eigenvalues of a Laplacian in a suitable Hilbert space of sections of $\mathbf W_\sigma$. Our methods of proof are based on representation theory and Lie theory. We establish an equidistribution result for Ruelle resonant states on compact locally symmetric spaces of rank one. More precisely, we prove that among the first band Ruelle resonances there is a density one subsequence such that the respective products of resonant and co-resonant states converge weakly to the Liouville measure. We prove this result by establishing an explicit quantum-classical correspondence between eigenspaces of the scalar Laplacian and the resonant states of the first band of Ruelle resonances which also leads to a new description of Patterson-Sullivan distributions. We consider an $\mathbb{R}$-extension of one dimensional uniformly expanding open dynamical systems and prove a new explicit estimate for the asymptotic spectral gap.
To get these results, we use a new application of a "global normal form" for the dynamical system, a "semiclassical expression beyond the Ehrenfest time" that expresses the transfer operator at large time as a sum over rank one operators (each is associated to one orbit). In this paper we establish the validity of the so-called "diagonal approximation" up to twice the local Ehrenfest time. For compact and for convex co-compact oriented hyperbolic surfaces, we prove an explicit correspondence between classical Ruelle resonant states and quantum resonant states, except at negative integers where the correspondence involves holomorphic sections of line bundles. We give a new fractal Weyl upper bound for resonances of convex co-compact hyperbolic manifolds in terms of the dimension $n$ of the manifold and the dimension $\delta$ of its limit set. More precisely, we show that as $R\to\infty$, the number of resonances in the box $[R,R+1]+i[-\beta,0]$ is $O(R^{m(\beta,\delta)+})$, where the exponent $m(\beta,\delta)=\min(2\delta+2\beta+1-n,\delta)$ changes its behavior at $\beta=(n-1-\delta)/2$. In the case $\delta<(n-1)/2$, we also give an improved resolvent upper bound in the standard resonance free strip $\{\mathrm{Im}\,\lambda > \delta-(n-1)/2\}$. Both results use the fractal uncertainty principle point of view recently introduced in [arXiv:1504.06589]. The appendix presents numerical evidence for the Weyl upper bound. Let $G$ be a real, reductive algebraic group, and let $X$ be a homogeneous space for $G$ with a non-zero invariant density. We give an explicit description of a Zariski open, dense subset of the asymptotics of the tempered support of $L^2(X)$. Under additional hypotheses, this result remains true for vector bundle valued harmonic analysis on $X$. These results follow from an upper bound on the wave front set of an induced Lie group representation under a uniformity condition.
We show that all generalized Pollicott-Ruelle resonant states of a topologically transitive $C^\infty$-Anosov flow with an arbitrary $C^\infty$ potential have full support. Given a holomorphic iterated function scheme with a finite symmetry group $G$, we show that the associated dynamical zeta function factorizes into symmetry-reduced analytic zeta functions that are parametrized by the unitary irreducible representations of $G$. We show that this factorization implies a factorization of the Selberg zeta function on symmetric $n$-funneled surfaces and that the symmetry factorization simplifies the numerical calculations of the resonances by several orders of magnitude. As an application this allows us to provide a detailed study of the spectral gap, and we observe for the first time the existence of a macroscopic spectral gap on Schottky surfaces. We prove equivariant spectral asymptotics for $h$-pseudodifferential operators for compact orthogonal group actions, generalizing results of El-Houakmi and Helffer (1991) and Cassanas (2006). Using recent results for certain oscillatory integrals with singular critical sets (Ramacher 2010) we can deduce a weak equivariant Weyl law. Furthermore, we can prove a complete asymptotic expansion for the Gutzwiller trace formula without any additional condition on the group action by a suitable generalization of the dynamical assumptions on the Hamilton flow. In many non-integrable open systems in physics and mathematics, resonances have been found to be surprisingly ordered along curved lines in the complex plane. In this article we provide a unifying approach to these resonance chains by generalizing dynamical zeta functions. By means of a detailed numerical study we show that these generalized zeta functions explain the mechanism that creates the chains of quantum resonances and classical Ruelle resonances for 3-disk systems as well as geometric resonances on Schottky surfaces.
We also present a direct system-intrinsic definition of the continuous lines on which the resonances are strung together as a projection of an analytic variety. Additionally, this approach shows that the existence of resonance chains is directly related to a clustering of the classical length spectrum on multiples of a base length. Finally, this link is used to construct new examples where several different structures of resonance chains coexist. Resonance chains have been observed in many different physical and mathematical scattering problems. Recently, numerical studies linked the phenomenon of resonance chains to an approximate clustering of the length spectrum on integer multiples of a base length. A canonical example of such a scattering system is provided by 3-funneled hyperbolic surfaces where the lengths of the three geodesics around the funnels have rational ratios. In this article we present a mathematically rigorous study of the resonance chains for these systems. We prove the analyticity of the generalized zeta function, which provides the central mathematical tool for understanding the resonance chains. Furthermore, we prove, for a fixed ratio between the funnel lengths and in the limit of large lengths, that after a suitable rescaling the resonances in a bounded domain align equidistantly along certain lines. The position of these lines is given by the zeros of an explicit polynomial which only depends on the ratio of the funnel lengths. We consider a simple model of an open partially expanding map. Its trapped set $K$ in phase space is a fractal set. We first show that there is a well defined discrete spectrum of Ruelle resonances which describes the asymptotics of correlation functions for large time and which is parametrized by the Fourier component $\nu$ on the neutral direction of the dynamics. We introduce a specific hypothesis on the dynamics that we call "minimal captivity".
This hypothesis is stable under perturbations and means that the dynamics is univalued on a neighborhood of $K$. Under this hypothesis we show the existence of an asymptotic spectral gap and a fractal Weyl law for the upper bound of the density of Ruelle resonances in the semiclassical limit $\nu \to \infty$. Some numerical computations with the truncated Gauss map illustrate these results.
Question: The 0.5-lb block is pushed against the spring at A and released from rest. Neglecting friction, determine the smallest deflection of the spring for which the block will travel around the loop ABCDE and remain at all times in contact with the loop. Energy Conservation: By the law of conservation of energy, the total mechanical energy of an isolated system remains constant: energy can change form, but its total value does not change. Answer and Explanation: Given data: Weight of the block is: {eq}W = 0.5\;{\rm{lb}} {/eq} Spring constant is: {eq}k = 3\;{\rm{lb/in}} = 3\;{\rm{lb/in}} \times \dfrac{{12\;{\rm{in}}}}{{1\;{\rm{ft}}}} = 36\;{\rm{lb/ft}} {/eq} Radius of the loop is: {eq}r = 2\;{\rm{ft}} {/eq} Expression for the weight of the block. {eq}W = mg {/eq} Here, the mass of the block is {eq}m {/eq} and the acceleration due to gravity is {eq}g = 32.2\;{\rm{ft/}}{{\rm{s}}^2}. {/eq} Substitute the values in the above expression. {eq}\begin{align*} 0.5\;{\rm{lb}} &= m\left( {32.2\;{\rm{ft/}}{{\rm{s}}^2}} \right)\\ m &= 0.015527\;{\rm{lb}} \cdot {{\rm{s}}^2}/{\rm{ft}} \end{align*} {/eq} At the top of the loop (point D), the block remains in contact as long as the normal force is non-negative; the minimum speed corresponds to gravity alone supplying the centripetal force. {eq}W = \dfrac{{m{v_D}^2}}{r} {/eq} Here, the velocity of the block at D is {eq}{v_D}. {/eq} Substitute the values in the above expression. {eq}\begin{align*} m\left( {32.2\;{\rm{ft/}}{{\rm{s}}^2}} \right) &= \dfrac{{m{v_D}^2}}{{2\;{\rm{ft}}}}\\ {v_D}^2 &= 64.4\;{\rm{f}}{{\rm{t}}^2}/{{\rm{s}}^2} \end{align*} {/eq} Expression for the initial spring energy. {eq}S.{E_1} = \dfrac{1}{2}k{x^2} {/eq} Here, {eq}x {/eq} is the smallest deflection of the spring for which the block will travel around the loop ABCDE and remain at all times in contact with the loop. Substitute the values in the above expression. {eq}\begin{align*} S.{E_1} &= \dfrac{1}{2}\left( {36\;{\rm{lb/ft}}} \right) \times {x^2}\\ S.{E_1} &= 18{x^2} \end{align*} {/eq} Expression for the final potential energy.
{eq}P.{E_2} = mg\left( {2r} \right) {/eq} Substitute the values in the above expression (using {eq}mg = W = 0.5\;{\rm{lb}} {/eq}). {eq}\begin{align*} P.{E_2} &= 0.5\;{\rm{lb}}\left( {2 \times 2\;{\rm{ft}}} \right)\\ P.{E_2} &= 2\;{\rm{lb}} \cdot {\rm{ft}} \end{align*} {/eq} Expression for the final kinetic energy. {eq}K.{E_2} = \dfrac{1}{2}m{\left( {{v_D}} \right)^2} {/eq} Substitute the values in the above expression. {eq}\begin{align*} K.{E_2} &= \dfrac{1}{2}\left( {0.015527\;{\rm{lb}} \cdot {{\rm{s}}^2}/{\rm{ft}}} \right) \times 64.4\;{\rm{f}}{{\rm{t}}^2}/{{\rm{s}}^2}\\ K.{E_2} &= 0.499969\;{\rm{lb}} \cdot {\rm{ft}} \end{align*} {/eq} Consider the conservation of energy. {eq}S.{E_1} + P.{E_1} + K.{E_1} = S.{E_2} + P.{E_2} + K.{E_2} {/eq} Here, the initial potential energy of the block is {eq}P.{E_1} = 0 {/eq}, the initial kinetic energy of the block is {eq}K.{E_1} = 0 {/eq} and the final spring energy is {eq}S.{E_2} = 0. {/eq} Substitute the values in the above expression. {eq}\begin{align*} 18{x^2} + 0 + 0 &= 0 + 2 + 0.499969\\ {x^2} &= 0.138887\\ x &= 0.3727\;{\rm{ft}} \end{align*} {/eq} Thus, the smallest deflection of the spring for which the block will travel around the loop ABCDE and remain at all times in contact with the loop is 0.3727 ft (about 4.47 in).
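The chain of substitutions above can be checked numerically. The sketch below (plain Python; the variable names are mine, not from the original solution) reproduces each step in US customary units and serves as a check on the arithmetic in the final division:

```python
g = 32.2          # ft/s^2, acceleration due to gravity
W = 0.5           # lb, weight of the block
k = 3.0 * 12.0    # lb/in converted to lb/ft (36 lb/ft)
r = 2.0           # ft, loop radius

m = W / g                              # slugs (lb*s^2/ft)
vD2 = g * r                            # ft^2/s^2, from W = m*vD^2/r at the top of the loop
PE2 = W * 2 * r                        # lb*ft, the block rises a height of 2r
KE2 = 0.5 * m * vD2                    # lb*ft
x = ((PE2 + KE2) / (0.5 * k)) ** 0.5   # from (1/2)k x^2 = PE2 + KE2
print(round(x, 4))                     # deflection in ft: 0.3727
```

The last line evaluates (2 + 0.5)/18 = 0.1389 ft² under the square root, i.e. a deflection of about 0.373 ft, or 4.47 in.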
OpenCV 3.4.6 Open Source Computer Vision void cv::accumulate (InputArray src, InputOutputArray dst, InputArray mask=noArray()) Adds an image to the accumulator image. More... void cv::accumulateProduct (InputArray src1, InputArray src2, InputOutputArray dst, InputArray mask=noArray()) Adds the per-element product of two input images to the accumulator image. More... void cv::accumulateSquare (InputArray src, InputOutputArray dst, InputArray mask=noArray()) Adds the square of a source image to the accumulator image. More... void cv::accumulateWeighted (InputArray src, InputOutputArray dst, double alpha, InputArray mask=noArray()) Updates a running average. More... void cv::createHanningWindow (OutputArray dst, Size winSize, int type) Computes Hanning window coefficients in two dimensions. More... Point2d cv::phaseCorrelate (InputArray src1, InputArray src2, InputArray window=noArray(), double *response=0) Detects translational shifts that occur between two images. More... void cv::accumulate ( InputArray src, InputOutputArray dst, InputArray mask = noArray() ) Python: dst = cv.accumulate( src, dst[, mask] ) #include <opencv2/imgproc.hpp> Adds an image to the accumulator image. The function adds src or some of its elements to dst : \[\texttt{dst} (x,y) \leftarrow \texttt{dst} (x,y) + \texttt{src} (x,y) \quad \text{if} \quad \texttt{mask} (x,y) \ne 0\] The function supports multi-channel images. Each channel is processed independently. The function cv::accumulate can be used, for example, to collect statistics of a scene background viewed by a still camera and for further foreground-background segmentation. src Input image of type CV_8UC(n), CV_16UC(n), CV_32FC(n) or CV_64FC(n), where n is a positive integer. dst Accumulator image with the same number of channels as input image, and a depth of CV_32F or CV_64F. mask Optional operation mask.
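The per-element updates these accumulation functions perform are easy to illustrate without OpenCV. The sketch below (plain Python over nested lists, with my own helper names; not the library implementation) mirrors the masked update of cv::accumulate and the running-average update of cv::accumulateWeighted from the summary list above:

```python
def accumulate(src, dst, mask=None):
    # dst(x,y) += src(x,y) wherever mask(x,y) != 0 (every pixel if mask is None)
    for y in range(len(dst)):
        for x in range(len(dst[0])):
            if mask is None or mask[y][x]:
                dst[y][x] += src[y][x]
    return dst

def accumulate_weighted(src, dst, alpha, mask=None):
    # dst(x,y) = (1 - alpha)*dst(x,y) + alpha*src(x,y) wherever mask(x,y) != 0
    for y in range(len(dst)):
        for x in range(len(dst[0])):
            if mask is None or mask[y][x]:
                dst[y][x] = (1.0 - alpha) * dst[y][x] + alpha * src[y][x]
    return dst

acc = [[0.0, 0.0]]
accumulate([[1.0, 2.0]], acc, mask=[[1, 0]])
print(acc)            # [[1.0, 0.0]] -- the masked-out pixel is untouched

avg = [[0.0]]
for _ in range(2):    # feed two identical "frames"
    accumulate_weighted([[10.0]], avg, alpha=0.5)
print(avg)            # [[7.5]] -- the average "forgets" the initial 0 at rate alpha
```

With alpha close to 1 the accumulator tracks the newest frame almost exactly; with alpha close to 0 it changes slowly, which is what makes the running average useful for background estimation.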
void cv::accumulateProduct ( InputArray src1, InputArray src2, InputOutputArray dst, InputArray mask = noArray() ) Python: dst = cv.accumulateProduct( src1, src2, dst[, mask] ) #include <opencv2/imgproc.hpp> Adds the per-element product of two input images to the accumulator image. The function adds the product of two images or their selected regions to the accumulator dst : \[\texttt{dst} (x,y) \leftarrow \texttt{dst} (x,y) + \texttt{src1} (x,y) \cdot \texttt{src2} (x,y) \quad \text{if} \quad \texttt{mask} (x,y) \ne 0\] The function supports multi-channel images. Each channel is processed independently. src1 First input image, 1- or 3-channel, 8-bit or 32-bit floating point. src2 Second input image of the same type and the same size as src1 . dst Accumulator image with the same number of channels as input images, 32-bit or 64-bit floating-point. mask Optional operation mask. void cv::accumulateSquare ( InputArray src, InputOutputArray dst, InputArray mask = noArray() ) Python: dst = cv.accumulateSquare( src, dst[, mask] ) #include <opencv2/imgproc.hpp> Adds the square of a source image to the accumulator image. The function adds the input image src or its selected region, raised to a power of 2, to the accumulator dst : \[\texttt{dst} (x,y) \leftarrow \texttt{dst} (x,y) + \texttt{src} (x,y)^2 \quad \text{if} \quad \texttt{mask} (x,y) \ne 0\] The function supports multi-channel images. Each channel is processed independently. src Input image as 1- or 3-channel, 8-bit or 32-bit floating point. dst Accumulator image with the same number of channels as input image, 32-bit or 64-bit floating-point. mask Optional operation mask. void cv::accumulateWeighted ( InputArray src, InputOutputArray dst, double alpha, InputArray mask = noArray() ) Python: dst = cv.accumulateWeighted( src, dst, alpha[, mask] ) #include <opencv2/imgproc.hpp> Updates a running average. 
The function calculates the weighted sum of the input image src and the accumulator dst so that dst becomes a running average of a frame sequence: \[\texttt{dst} (x,y) \leftarrow (1- \texttt{alpha} ) \cdot \texttt{dst} (x,y) + \texttt{alpha} \cdot \texttt{src} (x,y) \quad \text{if} \quad \texttt{mask} (x,y) \ne 0\] That is, alpha regulates the update speed (how fast the accumulator "forgets" about earlier images). The function supports multi-channel images. Each channel is processed independently. src Input image as 1- or 3-channel, 8-bit or 32-bit floating point. dst Accumulator image with the same number of channels as input image, 32-bit or 64-bit floating-point. alpha Weight of the input image. mask Optional operation mask. void cv::createHanningWindow ( OutputArray dst, Size winSize, int type ) Python: dst = cv.createHanningWindow( winSize, type[, dst] ) #include <opencv2/imgproc.hpp> This function computes Hanning window coefficients in two dimensions. dst Destination array to place Hann coefficients in winSize The window size specifications (both width and height must be > 1) type Created array type Point2d cv::phaseCorrelate ( InputArray src1, InputArray src2, InputArray window = noArray(), double * response = 0 ) Python: retval, response = cv.phaseCorrelate( src1, src2[, window] ) #include <opencv2/imgproc.hpp> The function is used to detect translational shifts that occur between two images. The operation takes advantage of the Fourier shift theorem for detecting the translational shift in the frequency domain. It can be used for fast image registration as well as motion estimation. For more information please see http://en.wikipedia.org/wiki/Phase_correlation Calculates the cross-power spectrum of two supplied source arrays. The arrays are padded if needed with getOptimalDFTSize.
The function performs the following equations: \[\mathbf{G}_a = \mathcal{F}\{src_1\}, \; \mathbf{G}_b = \mathcal{F}\{src_2\}\]where \(\mathcal{F}\) is the forward DFT. \[R = \frac{ \mathbf{G}_a \mathbf{G}_b^*}{|\mathbf{G}_a \mathbf{G}_b^*|}\] \[r = \mathcal{F}^{-1}\{R\}\] \[(\Delta x, \Delta y) = \texttt{weightedCentroid} \{\arg \max_{(x, y)}\{r\}\}\] src1 Source floating point array (CV_32FC1 or CV_64FC1) src2 Source floating point array (CV_32FC1 or CV_64FC1) window Floating point array with windowing coefficients to reduce edge effects (optional). response Signal power within the 5x5 centroid around the peak, between 0 and 1 (optional).
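The equations above can be exercised on a small one-dimensional signal. The sketch below (pure Python with a naive O(N²) DFT and my own helper names; not the OpenCV implementation, which works in 2-D and refines the peak with a weighted centroid) recovers a circular shift between two impulse signals via the cross-power spectrum:

```python
import cmath

def dft(x):
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N)
                for n in range(N)) for k in range(N)]

def idft(X):
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * n / N)
                for k in range(N)) / N for n in range(N)]

def phase_correlate_1d(src1, src2):
    Ga, Gb = dft(src1), dft(src2)
    # cross-power spectrum R = Ga * conj(Gb) / |Ga * conj(Gb)|
    R = [p / abs(p) if abs(p) > 1e-12 else 0.0
         for p in (ga * gb.conjugate() for ga, gb in zip(Ga, Gb))]
    r = idft(R)
    return max(range(len(r)), key=lambda i: r[i].real)  # peak of the correlation

a = [0, 0, 1, 0, 0, 0, 0, 0]
b = [0, 0, 0, 0, 1, 0, 0, 0]      # a circularly shifted right by 2
peak = phase_correlate_1d(a, b)
shift = (len(a) - peak) % len(a)  # peaks past N/2 encode the offset modulo N
print(peak, shift)                # 6 2
```

Here the correlation peak sits at index 6 = -2 mod 8, so the detected shift of src2 relative to src1 is 2, as constructed.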
So after a lot of research, and tons and tons of papers that I've gone through, I finally have some idea how to solve the equations that will give me candidates for the asymptotic symmetry group for Kerr/CFT correspondence. One has boundary conditions $h_{\mu\nu}$, and a metric that behaves like $g_{\mu\nu}=\bar{g}_{\mu\nu}+h_{\mu\nu}$, where $\bar{g}_{\mu\nu}$ is the background metric (in this case it's taken as the near-horizon extreme Kerr metric), and $h_{\mu\nu}$ are perturbations (given). So the task is finding diffeomorphisms $\xi$ for which this metric will transform according to $$\bar{g}_{\mu\nu}+\mathcal{L}_\xi \bar{g}_{\mu\nu}=g_{\mu\nu}=\bar{g}_{\mu\nu}+h_{\mu\nu}$$ And one solves that by solving the equations $$\mathcal{L}_\xi g_{\mu\nu}=\mathcal{O}(r^m)$$ where $\mathcal{O}(r^m)$ are boundary conditions, given in terms of the radial coordinate. From this, and the fact that $h_{\mu\nu}=h_{\nu\mu}$, I get ten equations. Now my issue is this: What to take as ansatz for $\xi$? In some papers, they take this: $$\xi^\mu=\xi^\mu_0(t,\theta,\phi)+r\xi^\mu_1(t,\theta,\phi)+\mathcal{O}(r^2)$$ But in certain papers I've found that authors take this form (see after eq. 4.4 here, or here eq. 5.1) $$\xi^\mu=\sum\limits_{n=-1}^\infty \xi^\mu_n r^{-n}$$ such that we have subleading contributions (a falloff in the radial coordinate). Now, I'm inclined to use the second one, since the diffeomorphism given in Guica et al. is given with subleading terms. I should solve this by putting a certain few terms in, and seeing what the boundary condition says. For instance we have $$\mathcal{L}_\xi g_{tt}=\mathcal{O}(r^2)$$ That means that all $\mathcal{O}(r^2)$ contributions will cancel, and only equations with $r$, or smaller $r$ contributions will survive. But the answer is drastically different if I take the first or second ansatz. So what to take? Am I on the right track? If I take the second one, what $n$ should I start from? $n=-1$ or $n=-2$?
Oh and I finally got how to solve this by looking at this article and following how they got it in appendix A. EDIT: I've been thinking. Does the fact that in their article the metric and the ansatz go with powers of $v$ mean that, since their equations are limited to $\mathcal{O}(v)$ onward, all higher powers will cancel each other? In that case, since I'm taking only subleading terms in the metric and boundary conditions (different for each of the 10 terms), and if I take that my ansatz falls off as $r^{-n}$, does that mean that all lower order corrections cancel out? The question is still: what $n$ to take? Since it's not the same if my $\xi$ starts with $r^2$, or $r$ :\ EDIT2: I'll put one component in to show what I'm doing. Although I think something is wrong, since I'm getting complicated equations (when things get too complicated, it's kind of a sign that things might not be going in the right direction, at least that's what experience has shown me). So, my metric is $g_{\mu\nu}=\bar{g}_{\mu\nu}+h_{\mu\nu}$, as explained above. The first question that comes to my mind: do I put $h_{\mu\nu}$ into this expression manually, or do I use the given boundary conditions? I could say that $h_{\mu\nu}$ is arbitrary, and my metric has to have power fall-off perturbations, so that my $tt$ component would be $$g_{tt}=-\Omega^2(\theta)(1+r^2(1-\Lambda^2(\theta)))+r^{-1}h_{tt}(t,\theta,\phi)+\mathcal{O}(r^{-2})\quad (\star)$$ I removed $2GJ$ since it's just a factor in front of the metric, not depending on any of the variables ($t,r,\theta,\phi$), and it doesn't contribute to the diffeomorphisms. I will try this after I finish what I did. For now, however, I just put in the given boundary conditions. For the $tt$ component I have $$\mathcal{L}_\xi g_{tt}=\mathcal{O}(r^2)=\xi^\sigma\partial_\sigma g_{tt}+g_{\sigma t}\partial_t\xi^\sigma+g_{t\sigma}\partial_t\xi^\sigma$$ Since $g_{tt}$ depends on $r$ and $\theta$, I have two components in the first argument.
In the second and third arguments are the same since $g_{t\phi}=g_{\phi t}$. So I have $$\xi^\theta\partial_\theta g_{tt}+\xi^r\partial_r g_{tt}+2(g_{tt}\partial_t\xi^t+g_{t\phi}\partial_t \xi^\phi)=\mathcal{O}(r^2)$$ $$(\xi^\theta_{-1}r+\xi^\theta_0+\xi^\theta_1r^{-1}+\mathcal{O}(r^{-2}))(-2\Omega\Omega'+2\Omega(\Lambda\Lambda'\Omega-\Omega'(1-\Lambda^2))r^2+\mathcal{O}(r^2))+(\xi^r_{-1}r+\xi^r_0+\xi^r_1r^{-1}+\mathcal{O}(r^{-2}))(-2\Omega^2(1-\Lambda^2)r+\mathcal{O}(r))+2((-\Omega^2-\Omega^2(1-\Lambda^2)r^2+\mathcal{O}(r^2))(\partial_t\xi^t_{-1}r+\partial_t\xi^t_0+\mathcal{O}(r^{-1}))+(\Lambda^2\Omega^2r+\mathcal{O}(1))(\partial_t\xi^\phi_{-1}r+\partial_t\xi^\phi_0+\partial_t\xi^\phi_1 r+\mathcal{O}(r^{-2})))=\mathcal{O}(r^2)$$ After sorting all out, and saying that all the corrections of $\mathcal{O}(r^2)$ vanish, I end up with $$(\Lambda\Lambda'\Omega-\Omega'(1-\Lambda^2))\xi^\theta_{-1}-\Omega(1-\Lambda^2)\partial_t\xi^t_{-1}=0$$ But, this one is relatively ok, the other ones sometimes have six or more components ($\xi^\mu$). I'll work all of them out, and then try with $(\star)$ as an assumption and see if things simplify... EDIT 3: So things aren't simplifying. But one thing I've learned: I need to solve asymptotic Killing equation. My next try, the one I think is a fast and relatively good try, is to just see what the leading orders of components should be, such that they satisfy the asymptotic Killing equations, given by boundary conditions. Then I'll try to do explicit calculation. 
For instance (and I hope I'm on the right track) for the $tt$ component I would have $$\mathcal{L}_\xi g_{tt}=\mathcal{O}(r^2)$$$$\xi^\theta\partial_\theta g_{tt}+\xi^r\partial_r g_{tt}+2(g_{tt}\partial_t\xi^t+g_{t\phi}\partial_t \xi^\phi)=\mathcal{O}(r^2)$$$$\xi^\theta\mathcal{O}(r^2)+\xi^r\mathcal{O}(r)+\mathcal{O}(r^2)\partial_t\xi^t+\mathcal{O}(r)\partial_t\xi^\phi=\mathcal{O}(r^2)$$ So for this to be true (LHS=RHS) I should have $$\xi^r\to\mathcal{O}(r)$$ $$\xi^\theta\to\mathcal{O}(1)$$ $$\partial_t\xi^t\to\mathcal{O}(1)$$ $$\partial_t\xi^\phi\to\mathcal{O}(r)$$ I hope I'm on the right track. I got a hint of how Guica et al. did it. First we need to guess what the symmetries are and then generate boundary conditions by acting with those diffeomorphisms, and then check that the charges are finite and integrable. So it's more of a guessing game than exact solving :S EDIT 4: So far (by solving asymptotic Killing equations) I only get the $\xi^\theta$ component right. That is, I have $$\xi^\theta_0=\xi^\theta_{-1}r$$ which, when I put in the ansatz, gives $$\xi^\theta=\xi^\theta_{-1}r+\xi^\theta_0+\xi^\theta_1r^{-1}+\mathcal{O}(r^{-2})$$$$\xi^\theta=\xi^\theta_1 r^{-1}+\mathcal{O}(r^{-2})$$ which is exactly the one in the article by Guica et al. But no luck with other components. If only there was some info on what I need to put in the asymptotic Killing equation after the equal sign...
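The order bookkeeping in the step above can be mechanized. A small sketch (plain Python; the dictionary keys are my own labels, and the coefficient orders are simply read off from the $tt$ equation): each term contributes order(coefficient) + order(unknown), and demanding that every term stays within the $\mathcal{O}(r^2)$ boundary condition fixes the largest power of $r$ each component of $\xi$ may carry.

```python
# leading power of r multiplying each unknown in L_xi g_tt:
#   xi^theta * d_theta g_tt ~ r^2,   xi^r * d_r g_tt ~ r^1,
#   g_tt * d_t xi^t ~ r^2,           g_{t phi} * d_t xi^phi ~ r^1
coefficient_order = {"xi^theta": 2, "xi^r": 1, "d_t xi^t": 2, "d_t xi^phi": 1}
target = 2   # boundary condition: L_xi g_tt = O(r^2)

# each term obeys order(coefficient) + order(unknown) <= target
allowed = {name: target - p for name, p in coefficient_order.items()}
print(allowed)   # {'xi^theta': 0, 'xi^r': 1, 'd_t xi^t': 0, 'd_t xi^phi': 1}
```

The output reproduces the conclusion above: $\xi^\theta \to \mathcal{O}(1)$, $\xi^r \to \mathcal{O}(r)$, $\partial_t\xi^t \to \mathcal{O}(1)$, $\partial_t\xi^\phi \to \mathcal{O}(r)$.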
EDIT 5: I'll write the equations I got by putting $\mathcal{L}_\xi g_{\mu\nu}=\mathcal{O}(r^n)$ where $\mathcal{O}(r^n)$ are given by boundary conditions: $$tt:\quad (\Lambda \Lambda' \Omega+(\Lambda^2-1)\Omega')\xi^\theta_{-1}+(\Lambda^2-1)\Omega \partial_t\xi^t_{-1}=0$$ $$t r:\quad (\Lambda^2-1)r(\xi^t_{-1}+\xi^t_0r^{-1})+(\Lambda^2-1)r^3(\xi^t_{-1}+\xi^t_0r^{-1}+\xi^t_1r^{-2}+\xi^t_2r^{-3})-$$$$-r(\xi^t_{-1}+\xi^t_0r^{-1})+\Lambda^2\xi^\phi_{-1}+\Lambda^2r^2(\xi^\phi_{-1}+\xi^\phi_0r^{-1}+\xi^\phi_1r^{-2})+\partial_t\xi^r_{-1}=0$$ $$t\theta:\quad (\Lambda^2-1)r^2(\partial_\theta\xi^t_{-1}r+\partial_\theta\xi^t_0+\partial_\theta\xi^t_1r^{-1}+\partial_\theta\xi^t_2r^{-2})-(\partial_\theta\xi^t_{-1}r+\partial_\theta\xi^t_0)+$$$$+\Lambda^2r(\partial_\theta\xi^\phi_{-1}r+\partial_\theta\xi^\phi_0+\partial_\theta\xi^\phi_2r^{-1})+(\partial_t\xi^\theta_{-1}r+\partial_t\xi^\theta_0)=0$$ $$t\phi:\quad (2\Lambda(\Lambda'\Omega+\Lambda\Omega'))(\xi^\theta_{-1}r+\xi^\theta_0)+\Lambda^2\Omega\xi^r_{-1}+$$$$+(\Omega(\Lambda^2-1))(\partial_\phi\xi^t_{-1}r+\partial_\phi\xi^t_0+\partial_\phi\xi^t_1r^{-1})-\Omega\partial_\phi\xi^t_{-1}+$$$$+\Lambda^2\Omega(\partial_\phi\xi^\phi_{-1}r+\partial_\phi\xi^\phi_0)+\Lambda^2\Omega(\partial_t\xi^\phi_{-1}r+\partial_t\xi^t_0)+\Lambda^2\Omega\partial_t\xi^\phi_{-1}=0$$ $$rr:\quad \xi^\theta_{-1}r+\xi^\theta_0=0$$ $$r\theta:\quad \partial_\theta\xi^r_{-1}+(\xi^\theta_{-1}r+\xi^\theta_0)=0$$ $$r\phi:\quad (\xi^t_{-1}r+\xi^t_0)+\xi^\phi_{-1}=0$$ $$\theta\theta:\quad \Omega'(\xi^\theta_{-1}r+\xi^\theta_0)+\Omega(\partial_\theta\xi^\theta_{-1}r+\partial_\theta\xi^\theta_0)=0$$ $$\theta\phi:\quad (\partial_\phi\xi^\theta_{-1}r+\partial_\phi\xi^\theta_0)+\Lambda^2r(\partial_\theta\xi^t_{-1}r+\partial_\theta\xi^t_0+\partial_\theta\xi^t_1r^{-1})+$$$$+\Lambda^2(\partial_\theta\xi^\phi_{-1}r+\partial_\theta\xi^\phi_0)=0$$ $$\phi\phi:\quad (\Omega\Lambda'+\Lambda\Omega')\xi^\theta_{-1}r+\Lambda\Omega 
r(\partial_\phi\xi^t_{-1}r+\partial_\phi\xi^t_0)+$$$$+\Lambda\Omega\partial_\phi\xi^\phi_{-1}r=0$$ The problem is, the only equation I can get some information, about how my vector should behave in components is $rr$, from which I get the general form of $\xi^\theta$ component of diffeomorphism. From $r\theta$ I get that the $\xi^r_{-1}$ term is a constant that doesn't depend on $\theta$, and $\theta\theta$ equation will just confirm that $rr$ one is correct. I guess from the fact that $\xi^r_{-1}$ is not zero I could say that the $r$ component will start at that term (like in the article). EDIT 6: I have added the bounty to raise awareness to this, if anyone knows if what I did so far is correct, and what to do next, please do tell :)This post imported from StackExchange Physics at 2014-03-07 13:48 (UCT), posted by SE-user dingo_d
What is a Blackbody? A black body is an idealization in physics: a body that absorbs all electromagnetic radiation incident on it, irrespective of frequency or angle of incidence. By the second law of thermodynamics, a body tends toward thermal equilibrium with its surroundings. To stay in thermal equilibrium, a black body must emit radiation at the same rate as it absorbs it, so it must also be a good emitter of radiation, emitting electromagnetic waves at all the frequencies it can absorb. What is Blackbody Radiation? The radiation emitted by a blackbody is known as blackbody radiation. The blackbody spectrum is plotted with wavelength on the x-axis and the spectral distribution on the y-axis; a separate curve is obtained for each temperature. Characteristics of Blackbody Radiation The characteristics of blackbody radiation are described by the following laws: Wien’s displacement law Planck’s law Stefan-Boltzmann law Wien’s Displacement Law Wien’s displacement law states that the blackbody radiation curve for different temperatures peaks at a wavelength that is inversely proportional to the temperature. Wien’s Law Formula \(\lambda_{max}=\frac{b}{T}\) Planck’s Law Using Planck’s law of blackbody radiation, the spectral energy density of the emission is determined for each wavelength at a particular temperature. Planck’s Law Formula \(E_{\lambda }=\frac{8\pi hc}{\lambda ^{5}\left(e^{\frac{hc}{\lambda k T}}-1\right)}\) Stefan-Boltzmann Law The Stefan-Boltzmann law relates the total energy emitted to the absolute temperature. Stefan-Boltzmann Law Formula \(E \propto T^{4}\) Wien’s Displacement Law Example We can easily deduce that a wood fire, which is approximately 1500 K hot, gives out peak radiation at about 2000 nm.
This means that the majority of the radiation from the wood fire is beyond the human eye’s visibility. This is why a campfire is an excellent source of warmth but a very poor source of light. The temperature of the sun’s surface is 5700 K. Using Wien’s displacement law, we can calculate that the peak radiation output is at a wavelength of about 500 nm. This lies in the green portion of the visible light spectrum. It turns out that our eyes are highly sensitive to this particular wavelength of visible light. We really should be appreciative of the fact that a rather unusually large portion of the sun’s radiation falls within the fairly small visible spectrum. When a piece of metal is heated, it first becomes ‘red hot’: red is the longest visible wavelength. On further heating, it moves from red to orange and then yellow. At its hottest, the metal glows white, as the shorter wavelengths come to dominate the radiation.
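Both numbers quoted above follow directly from Wien's displacement law, \(\lambda_{max} = b/T\) with \(b \approx 2.898\times10^{-3}\) m·K. A quick check in plain Python:

```python
b = 2.898e-3                      # Wien's displacement constant, m*K

def peak_wavelength_nm(T):
    # lambda_max = b / T, converted from metres to nanometres
    return b / T * 1e9

print(round(peak_wavelength_nm(1500)))  # 1932 -- wood fire, well into the infrared
print(round(peak_wavelength_nm(5700)))  # 508  -- the Sun, green part of the visible band
```

The 1500 K fire peaks near 1900 nm (the "2000 nm" quoted above), and the 5700 K solar surface peaks near 500 nm, as stated.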
Rudin's Real and Complex Analysis Chapter 3 Exercise 4 is: Assume that $\varphi$ is a continuous real function on $(a,b)$ s.t. $$\varphi\left(\frac{x+y}{2}\right)\leq \frac{\varphi(x)+\varphi(y)}{2}$$ for all $x,y\in(a,b)$. Prove that $\varphi$ is convex. The conclusion does not follow if continuity is omitted from the hypotheses. My question is, is there some way to explicitly construct a counterexample such that $\varphi\left(\frac{x+y}{2}\right)\leq \frac{\varphi(x)+\varphi(y)}{2}$ for all $x,y\in(a,b)$, but $\varphi$ is not convex?
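A remark on what any such construction must look like (this is the standard argument, not a fully explicit formula — none can exist, since by a theorem of Sierpiński every Lebesgue-measurable midpoint-convex function is convex, so any counterexample is non-measurable and requires the axiom of choice):

```latex
% Take a Hamel basis H of R as a vector space over Q, and let
% f : R -> R be Q-linear with values assigned freely on H, chosen so
% that f is not R-linear (e.g. swap the values on two basis vectors).
% Q-linearity gives equality in the midpoint condition:
f\!\left(\frac{x+y}{2}\right) = \tfrac{1}{2}\,f(x+y) = \frac{f(x)+f(y)}{2},
% but such an f has a graph dense in R^2 and is therefore unbounded on
% every subinterval of (a,b); since a convex function on an open
% interval is continuous, f restricted to (a,b) cannot be convex.
```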
ASU Electronic Theses and Dissertations This collection includes most of the ASU Theses and Dissertations from 2011 to present. ASU Theses and Dissertations are available in downloadable PDF format; however, a small percentage of items are under embargo. Information about the dissertations/theses includes degree information, committee members, an abstract, and supporting data or media. In addition to the electronic theses found in the ASU Digital Repository, ASU Theses and Dissertations can be found in the ASU Library Catalog. Dissertations and Theses granted by Arizona State University are archived and made available through a joint effort of the ASU Graduate College and the ASU Libraries. For more information or questions about this collection, visit the Digital Repository ETD Library Guide or contact the ASU Graduate College at [email protected]. One-dimensional (1D) and quasi-one-dimensional quantum wires have been a subject of both theoretical and experimental interest since the 1990s and before. Phenomena such as the "0.7 structure" in the conductance leave many open questions. In this dissertation, I study the properties and the internal electron states of semiconductor quantum wires with the path integral Monte Carlo (PIMC) method. PIMC is a tool for simulating many-body quantum systems at finite temperature. Its ability to calculate thermodynamic properties and various correlation functions makes it an ideal tool in bridging experiments with theories. A general study of the features interpreted by the … Contributors Liu, Jianheng, Shumway, John B, Schmidt, Kevin E, et al. Created Date 2012 This work presents analysis and results for the NPDGamma experiment, measuring the spin-correlated photon directional asymmetry in the $\vec{n}p\rightarrow d\gamma$ radiative capture of polarized, cold neutrons on a parahydrogen target. The parity-violating (PV) component of this asymmetry $A_{\gamma,PV}$ is unambiguously related to the $\Delta I = 1$ component of the hadronic weak interaction due to pion exchange. Measurements in the second phase of NPDGamma were taken at the Oak Ridge National Laboratory (ORNL) Spallation Neutron Source (SNS) from late 2012 to early 2014, and then again in the first half of 2016 for an unprecedented level of statistics in order … Contributors Blyth, David Cooper, Alarcon, Ricardo O, Ritchie, Barry G, et al. Created Date 2017 Monte Carlo methods often used in nuclear physics, such as auxiliary field diffusion Monte Carlo and Green's function Monte Carlo, have typically relied on phenomenological local real-space potentials containing as few derivatives as possible, such as the Argonne-Urbana family of interactions, to make sampling simple and efficient.
Basis set methods such as no-core shell model or coupled-cluster techniques typically use softer non-local potentials because of their more rapid convergence with basis set size. These non-local potentials are typically defined in momentum space and are often based on effective field theory. Comparisons of the results of the two types of methods … Contributors Lynn, Joel Eric, Schmidt, Kevin E, Alarcón, Ricardo, et al. Created Date 2013 In this dissertation two kinds of strongly interacting fermionic systems were studied: cold atomic gases and nucleon systems. In the first part I report T=0 diffusion Monte Carlo results for the ground-state and vortex excitation of unpolarized spin-1/2 fermions in a two-dimensional disk. I investigate how vortex core structure properties behave over the BEC-BCS crossover. The vortex excitation energy, density profiles, and vortex core properties related to the current are calculated. A density suppression at the vortex core on the BCS side of the crossover and a depleted core on the BEC limit is found. Size-effect dependencies in the disk … Contributors Madeira, Lucas, Schmidt, Kevin E, Alarcon, Ricardo, et al. Created Date 2018 Sample delivery is an essential component in biological imaging using serial diffraction from X-ray Free Electron Lasers (XFEL) and synchrotrons. Recent developments have made possible the near-atomic resolution structure determination of several important proteins, including one G protein-coupled receptor (GPCR) drug target, whose structure could not easily have been determined otherwise (Appendix A). In this thesis I describe new sample delivery developments that are paramount to advancing this field beyond what has been accomplished to date. Soft Lithography was used to implement sample conservation in the Gas Dynamic Virtual Nozzle (GDVN). A PDMS/glass composite microfluidic injector was created and given … Contributors Nelson, Garrett, Spence, John C, Weierstall, Uwe J, et al. 
Created Date 2015 Spin-orbit interactions are important in determining nuclear structure. They lead to a shift in the energy levels in the nuclear shell model, which could explain the sequence of magic numbers in nuclei. Also in nucleon-nucleon scattering, the large nucleon polarization observed perpendicular to the plane of scattering needs to be explained by adding the spin-orbit interactions in the potential. Their effects change the equation of state and other properties of nuclear matter. Therefore, the simulation of spin-orbit interactions is necessary in nuclear matter. The auxiliary field diffusion Monte Carlo is an effective and accurate method for calculating the ground state … Contributors Zhang, Jie, Schmidt, Kevin E, Alarcon, Ricardo, et al. Created Date 2014 The structure of glass has been the subject of many studies; however, some details remain to be resolved. With the advancement of microscopic imaging techniques and the successful synthesis of two-dimensional materials, images of two-dimensional glasses (bilayers of silica) are now available, confirming that this glass structure closely follows the continuous random network model. These images provide complete in-plane structural information, such as ring correlations and intermediate-range order, and with computer refinement contain indirect information such as angular distributions and tilting. This dissertation reports the first work that integrates the actual atomic coordinates obtained from such images with structural … Contributors Sadjadi, Seyed Mahdi, Thorpe, Michael F, Beckstein, Oliver, et al. Created Date 2018
Optical response functions

The optical response functions couple an external electric field, \(E_\mathrm{ext}\), with the internal electric field arising from the response of the crystal. It is convenient to introduce the displacement field, \({\bf D}\), which determines the electric field from external charges, \(\rho_\mathrm{ext}\). The displacement field is related to the internal electric field, \({\bf E}\), through the polarization, \({\bf P}\):

\[{\bf D} = {\bf E} + {\bf P},\]

where \(-{\bf P}\) is the electric field from the internal charges, \(\rho_\mathrm{int}\). The polarization is related to the dipole moment of the material, \({\bf p}\), by

\[{\bf P} = \frac{{\bf p}}{V},\]

where \(V\) is the volume of the material.

Linear response coefficients

The linear response coefficients \(\chi\) (susceptibility), \(\epsilon_r\) (dielectric constant), and \(\alpha\) (polarizability) relate the electrodynamic quantities outlined above to each other.

Optical conductivity

For a perturbation \({\bf E}({\bf r}) = {\bf E}_0 \exp({\rm i}{\bf q} \cdot {\bf r})\), the linear response current in the long-wavelength limit (\(q \ll 1/a\), where \(a\) is the lattice constant) is given by [Wis63]

\[{\bf J} = \sigma {\bf E},\]

where \(\sigma\) is the optical conductivity.

Units

The units of the linear response coefficients are:

Coefficient | Unit
\(\alpha\) | C\(^2\)/Nm\(^5\)
\(\epsilon_r\) | 1
\(\chi\) | 1
\(\sigma\) | C\(^2\)/Nsm\(^2\)

Relation between the linear response coefficients

All the response coefficients follow from the susceptibility, \(\chi\). The derivation of the last relation can be found in [Mar04].

[Mar04] Richard M. Martin. Electronic Structure: Basic Theory and Practical Methods. Cambridge University Press, New York, 2004.
[Wis63] N. Wiser. Dielectric constant with local field effects included. Phys. Rev., 129:62–69, Jan 1963. doi:10.1103/PhysRev.129.62.
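A minimal sketch of how the first two coefficients can be defined, assuming the unit convention in which the field from the polarization charges is \(-{\bf P}\) (in SI conventions, factors of \(\epsilon_0\) would appear in these relations):

```latex
% Susceptibility and dielectric constant, assuming D = E + P
\begin{align*}
  {\bf P} &= \chi\, {\bf E}, \\
  {\bf D} &= \epsilon_r\, {\bf E}
           = {\bf E} + {\bf P}
           = (1 + \chi)\,{\bf E}
  \quad\Longrightarrow\quad \epsilon_r = 1 + \chi .
\end{align*}
```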
ISSN: 1078-0947, eISSN: 1553-5231. Discrete & Continuous Dynamical Systems - A, November 2008, Volume 21, Issue 4.

Abstract: We consider two (densely defined) involutions on the space of $q\times q$ matrices; $I(x_{ij})$ is the matrix inverse of $(x_{ij})$, and $J(x_{ij})$ is the matrix whose $ij$th entry is the reciprocal $x_{ij}^{-1}$. Let $K=I\circ J$. The set $\mathcal{SC}_q$ of symmetric, cyclic matrices is invariant under $K$. In this paper, we determine the degrees of the iterates $K^n=K\circ...\circ K$ restricted to $\mathcal{SC}_q$.

Abstract: We determine the Hausdorff dimension of self-affine limit sets for some class of iterated function systems in the plane with an invariant direction. In particular, the method applies to some type of generalized non-self-similar Sierpiński triangles. This partially answers a question asked by Falconer and Lammering and extends a result by Lalley and Gatzouras.

Abstract: We establish the existence of smooth stable manifolds for nonautonomous differential equations $v'=A(t)v+f(t,v)$ in a Banach space, obtained from sufficiently small perturbations of a linear equation $v'=A(t)v$ admitting a nonuniform exponential dichotomy. In addition to the exponential decay of the flow on the stable manifold we also obtain the exponential decay of its derivative with respect to the initial condition. Furthermore, we give a characterization of the stable manifold in terms of the exponential growth rate of the solutions.

Abstract: In this article, we continue the study of viscosity solutions for second-order fully nonlinear parabolic equations, having an $L^1$ dependence in time, associated with nonlinear Neumann boundary conditions, which started in a previous paper (cf. [2]). First, we obtain the existence of continuous viscosity solutions by adapting Perron's method and using the comparison results obtained in [2].
Then, we apply these existence and comparison results to the study of the level-set approach for front propagation problems when the normal velocity has an $L^1$-dependence in time.

Abstract: Using the relation between Hill's equations and the Ermakov-Pinney equations established by Zhang [27], we will give some interesting lower bounds of rotation numbers of Hill's equations. Based on the Birkhoff normal forms and the Moser twist theorem, we will prove that two classes of nonlinear, scalar, time-periodic, Newtonian equations will have twist periodic solutions, one class being regular and another class being singular.

Abstract: By adapting a method in [11] with a suitable modification, we show that the critical dissipative quasi-geostrophic equation in $R^2$ has global well-posedness with arbitrary $H^1$ initial data. A decay in time estimate for homogeneous Sobolev norms of solutions is also discussed.

Abstract: We prove that systems satisfying the specification property are saturated in the sense that the topological entropy of the set of generic points of any invariant measure is equal to the measure-theoretic entropy of the measure. We study Banach-valued Birkhoff ergodic averages and obtain a variational principle for the topological entropy spectrum. As an application, we examine a particular example concerning the set of real numbers for which the frequencies of occurrences in their dyadic expansions of infinitely many words are prescribed. This relies on our explicit determination of a maximal entropy measure.

Abstract: We study existence and positivity properties for solutions of Cauchy problems for both linear and semilinear parabolic equations with the biharmonic operator as elliptic principal part. The self-similar kernel of the parabolic operator $\partial_t+\Delta^2$ is a sign changing function and the solution of the evolution problem with a positive initial datum may display almost instantaneous change of sign.
We determine conditions on the initial datum for which the corresponding solution exhibits some kind of positivity behaviour. We prove eventual local positivity properties both in the linear and semilinear case. At the same time, we show that negativity of the solution may occur also for arbitrarily large given time, provided the initial datum is suitably constructed.

Abstract: Inspired by a biological model on genetic repression proposed by P. Jacob and J. Monod, we introduce a new class of delay equations with nonautonomous past and nonlinear delay operator. With the aid of some new techniques from functional analysis we prove that these equations, which cover the biological model, are well-posed.

Abstract: We prove the existence of attractors for higher dimensional wave equations with nonlinear interior damping which grows faster than polynomials at infinity.

Abstract: This paper is devoted to determining the scalar relaxation kernel $a$ in a second-order (in time) integrodifferential equation related to a Banach space when an additional measurement involving the state function is available. A result concerning global existence and uniqueness is proved. The novelty of this paper consists in looking for the kernel $a$ in the Banach space $BV(0,T)$, consisting of functions of bounded variation, instead of the space $W^{1,1}(0,T)$ used up to now to identify $a$. An application is given, in the framework of $L^2$-spaces, to the case of hyperbolic second-order integrodifferential equations endowed with initial and Dirichlet boundary conditions.

Abstract: In this work we study unfoldings of planar vector fields in a neighbourhood of a resonant saddle. We give a $\mathcal C^k$ normal form for the unfolding with respect to the conjugacy relation. Using our normal form we determine an asymptotic development, uniform with respect to the parameters, of the Dulac time of a resonant saddle deformation.
The conjugacy relation, rather than the weaker equivalence relation, is necessary when studying the time function. The Dulac time of a resonant saddle can be seen as the basic building block of the total period function of an unfolding of a hyperbolic polycycle.

Abstract: We consider the two-dimensional Navier-Stokes equations with a time-delayed convective term and a forcing term which contains some hereditary features. Some results on existence and uniqueness of solutions are established. We discuss the asymptotic behaviour of solutions and we also show the exponential stability of stationary solutions.

Abstract: In this paper, we first establish a criterion for finite fractal dimensionality of a family of compact subsets of a Hilbert space, and apply it to obtain an upper bound on the fractal dimension of compact kernel sections of first order non-autonomous lattice systems. Then we consider the upper semicontinuity of kernel sections of general first order non-autonomous lattice systems and give an application.
Simulating Gaussian processes in Julia

This is my interpretation of the information provided in this tutorial on Noise Problems in the DifferentialEquations package.

A Simple Wiener Process (White Noise)

A Wiener process is a continuous process that has:
- Independent increments
- Gaussian increments
- Continuous paths, i.e. our Wiener process W is continuous in t

The increments of a Wiener process can be described by the following equation,

\[ dW_t = \varepsilon_t\cdot\sqrt{dt}, \]

where \(\varepsilon_t\) is a standard normal random variable.

Generating a Wiener Process

The first argument to the WienerProcess(t0, W0) function is the initial value of time (when we begin the process). The second argument is the initial value of our process. We can also set $dt$ for our process.

using DifferentialEquations, Plots
W = WienerProcess(0.0, 0.0)
dt = 0.1
W.dt = dt

Generate more values

Now we have a Wiener process object. However, this object doesn't have multiple points and is not plottable. We can add to our process using the code below.

setup_next_step!(W)
for i in 1:100
    accept_step!(W, dt)
end

Plotting the process

The Brownian Bridge

A Brownian bridge is a Wiener process with a predefined beginning and end point. For the function BrownianBridge(t0, tend, W0, Wend), you input the first time period, the last time period, and the desired initial and terminal values for the process.
BB = BrownianBridge(0.0, 100.0, 1.0, 1.0)
BB.dt = 0.1
# For this Brownian bridge the initial and terminal value of our process should be 1.0

Generate more values

Again, we need to add more values to our object:

setup_next_step!(BB)
for i in 1:1000
    accept_step!(BB, dt)
end

Plotting the Brownian Bridge

Now we can look at specific types of Brownian motion.

Geometric Brownian motion

Geometric Brownian motion can be defined as:

\[ dx_t = \mu x_t dt + \sigma x_t dW_t \]

For the GeometricBrownianMotionProcess() function we need several inputs:
- $\mu$, our "drift" parameter
- $\sigma$, our "variance" parameter
- Initial time period
- Initial value for our process

μ = 1.0
σ = 0.8
GB = GeometricBrownianMotionProcess(μ, σ, 0.0, 1.0)
GB.dt = dt

Generate more values

setup_next_step!(GB)
for i in 1:50
    accept_step!(GB, dt)
end

Plotting Geometric Brownian Motion

An Ornstein-Uhlenbeck process

The Ornstein-Uhlenbeck process is the continuous-time analog of an AR(1) process. We can write this process in the following form:

\[ dx_t = \theta(\mu-x_t)dt + \sigma dW_t \]

For the OrnsteinUhlenbeckProcess() function we need:
- $\theta$, our "drift" parameter
- $\mu$, our "mean" parameter
- $\sigma$, our variance parameter
- Our initial time period t0
- Our initial value for our process

θ = 1.5
μ = 2.0
σ = 1.5
OU = OrnsteinUhlenbeckProcess(θ, μ, σ, 0.0, 0.0)
OU.dt = 0.1

Generate more values

setup_next_step!(OU)
for i in 1:100
    accept_step!(OU, dt)
end

Plotting the Ornstein-Uhlenbeck Process

Backing out parameters

Another feature of the DifferentialEquations package is the ability to solve SDE problems and back out the coefficients of our diffusion process.

OU_prob = NoiseProblem(OU, (0.0, 100.0))
solve_OU = solve(OU_prob, dt = 0.12, save_noise = true)
# Plotting this and our graph from before, we can see that this gives us back the exact same process we had earlier
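For readers who prefer Python, the same Ornstein-Uhlenbeck dynamics can be simulated with a plain Euler-Maruyama loop; this is a sketch independent of the DifferentialEquations package, and `simulate_ou` is a made-up helper name:

```python
import numpy as np

def simulate_ou(theta, mu, sigma, x0, t0, t_end, dt, seed=0):
    """Euler-Maruyama simulation of dx_t = theta*(mu - x_t)*dt + sigma*dW_t."""
    rng = np.random.default_rng(seed)
    n = int(round((t_end - t0) / dt))
    x = np.empty(n + 1)
    x[0] = x0
    for i in range(n):
        # Wiener increment: Gaussian with mean 0 and variance dt
        dW = rng.normal(0.0, np.sqrt(dt))
        x[i + 1] = x[i] + theta * (mu - x[i]) * dt + sigma * dW
    return x

# Same parameter values as the Julia example above
path = simulate_ou(theta=1.5, mu=2.0, sigma=1.5, x0=0.0, t0=0.0, t_end=10.0, dt=0.1)
```

For large t the path fluctuates around the mean μ = 2.0 with stationary variance σ²/(2θ), which is a quick sanity check on any plot of the result.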
We want to solve the eigenproblem $\hat{H}|\psi\rangle=\lambda |\psi\rangle$. Since $|\psi\rangle$ lives in an infinite dimensional Hilbert space, this isn't fit to put onto a computer. Instead, we discretize space into $N$ points. $\vec{\psi}$ is now an $N$ dimensional column vector, and $\bf H$ is now an $N\times N$ matrix. For this post I will write matrices in bold face ($\bf V$), column vectors with an arrow over them ($\vec{\psi}$), and operators with hats over them ($\hat{x}$). For this post I'll choose to discretize the region from $0$ to $L$ into $N$ points, starting with a point at $x_1=\frac{1}{N+1}L$ and ending at a point $x_N=\frac{N}{N+1}L$. I do this because $\psi(0)=\psi(L)=0$, and I want $N$ nonzero points. Then $x_i=\frac{i}{N+1}L$ and we have our column vector $\psi_i=\psi(x_i)$. Call the step size between these points $b=\frac{L}{N+1}$. There are a few parts to this: how to write $\psi$ as a column vector, how to write the potential $V$ as a matrix, how to write the derivative operator as a matrix, and how to write the full Hamiltonian as a matrix.

Psi as a column vector

Start with an example and take $L=1$. Suppose we want to represent $\psi(x)=\sqrt{\frac{2}{L}}\sin\left(\frac{\pi x}{L}\right)$ as a column vector. If $N=5$, the column vector representing this would be $$\vec{\psi}=\left[\begin{array}{c} \frac{1}{\sqrt{2}}\\\sqrt{\frac{3}{2}}\\\sqrt{2}\\\sqrt{\frac{3}{2}}\\\frac{1}{\sqrt{2}}\end{array}\right]$$This column vector has entries $\psi_i=\psi(x_i)$. Note that the normalization condition $\int_0^L \psi(x)^2dx=1$ turns into the condition $\vec{\psi}^T\vec{\psi} b=1$. (Check it: $\vec{\psi}^T\vec{\psi}=\frac{1}{2}+\frac{3}{2}+2+\frac{3}{2}+\frac{1}{2}=6$ and $b=1/6$. "b" takes the place of "dx".) It's a coincidence that this works exactly for the function we chose, but it tells us we should choose $\vec{\psi}^T\vec{\psi} b=1$ as a normalization condition.
If $\psi$ were complex then this should indeed be $\vec{\psi}^\dagger \vec{\psi}b=1$.

V as a matrix

The operator $\hat{V}$ sends $\psi(x)$ to $V(x)\psi(x)$ for some function $V(x)$. So the matrix should send $\psi_i=\psi(x_i)$ to $V(x_i)\psi(x_i)$. This can be represented by a diagonal matrix. For example, with $N=3$, $${\bf V}=\left[ \begin{array}{ccc}V(x_1)&0&0\\0&V(x_2)&0\\0&0&V(x_3)\end{array}\right]$$ because $${\bf V}\vec{\psi}=\left[\begin{array}{c} V(x_1)\psi(x_1)\\V(x_2)\psi(x_2)\\V(x_3)\psi(x_3)\end{array}\right]$$ In Mathematica this can be constructed using the Array command (I use a lower case "n" because upper case N is taken):

DiagonalMatrix[Array[V[#/(n + 1) L] &, n]]

The derivative as a matrix

There are many possible ways to approximate the derivative. For example, we could recall the limit definition of the derivative, and send $\psi(x_i)$ to $\frac{1}{b}\left(\psi(x_{i+1})-\psi(x_i)\right)$. The approximation I'll use for the second derivative will be sending $\psi(x_i)$ to $\frac{1}{b^2}\left(\psi(x_{i+1})+\psi(x_{i-1})-2\psi(x_i)\right)$. Let's consider the $N=5$ case again. Then the operator $\frac{\partial^2}{\partial x^2}$ can be written as: $${\bf D^2}={\bf \frac{\partial^2}{\partial x^2}}=\frac{1}{b^2}\left[\begin{array}{ccccc}-2 &1&0&0&0\\1&-2&1&0&0\\0&1&-2&1&0\\0&0&1&-2&1\\0&0&0&1&-2\end{array}\right]$$ (I'm giving it the name ${\bf D^2}$, but please don't think it's the square of an actual matrix ${\bf D}$!) Note that we run into trouble at the corners, but are saved because $\psi(x_0)=\psi(0)=0$, so we can leave terms out. If the boundary conditions were different, we wouldn't be able to represent the derivative as a matrix and would have to add additional terms. So the nice boundary conditions mean we dodged a bullet! This stripe of diagonal and immediately off-diagonal terms comes up incredibly often in numerics. It's called the "discrete Laplacian", and is an example of a tridiagonal matrix.
I learned about these from a book A First Course in Computational Physics by DeVries and Hasbun, and I highly recommend it. The eigenvectors of $\bf D^2$ are discretized sine functions, just as the eigenfunctions of the second derivative operator are sines and cosines. In Mathematica, you can use the Band[] function to specify these off-diagonal terms:

D2=1/b^2 SparseArray[{Band[{1, 1}] -> -2, Band[{2, 1}] -> 1, Band[{1, 2}] -> 1}, {n,n}]

The total Hamiltonian

We can now write ${\bf H}=-\frac{\hbar^2}{2m} {\bf D^2}+{\bf V}$. For example, for $N=5$, if I put all the Mathematica code together: $${\bf H}=\left(\begin{array}{ccccc} \frac{\hbar ^2}{b^2 m}+V\left(\frac{L}{6}\right) & -\frac{\hbar ^2}{2 b^2 m} & 0 & 0 & 0 \\ -\frac{\hbar ^2}{2 b^2 m} & \frac{\hbar ^2}{b^2 m}+V\left(\frac{L}{3}\right) & -\frac{\hbar ^2}{2 b^2 m} & 0 & 0 \\ 0 & -\frac{\hbar ^2}{2 b^2 m} & \frac{\hbar ^2}{b^2 m}+V\left(\frac{L}{2}\right) & -\frac{\hbar ^2}{2 b^2 m} & 0 \\ 0 & 0 & -\frac{\hbar ^2}{2 b^2 m} & \frac{\hbar ^2}{b^2 m}+V\left(\frac{2 L}{3}\right) & -\frac{\hbar ^2}{2 b^2 m} \\ 0 & 0 & 0 & -\frac{\hbar ^2}{2 b^2 m} & \frac{\hbar ^2}{b^2 m}+V\left(\frac{5 L}{6}\right) \\\end{array}\right)$$

n=5;
x[i_]=i L/(n+1);
Vmatrix=DiagonalMatrix[Array[V[x[#]]&,n]];
D2=1/b^2 SparseArray[{Band[{1,1}]->-2,Band[{2,1}]->1,Band[{1,2}]->1},{n,n}];
H=-hbar^2/(2m)D2+Vmatrix

Finding the Eigensystem

It's not easy to find the eigenvalues of a general matrix. Tridiagonal matrices are easier, but not trivial by any means. You will probably want to use a library or consult more advanced references for this! Fortunately, Mathematica has a nice self-contained function for this called Eigensystem[]. The eigenvectors are our orthonormal wavefunctions, and the eigenvalues are our energy levels. Here's a working, full example, with a potential going like $V(x)=200(1-4(x-L/2)^2)$ and 100 points. (Note: this means $H$ is a 100x100 matrix!)
Input:

n=100; L=1; hbar=1; m=1;
b=L/(n+1);
V[x_]:=200(1-4(x-L/2)^2);
x[i_]=i L/(n+1);
normalize[v_]:=v/Sqrt[b v.v]; (* ensure that normalize[v].normalize[v]*b==1 *)
Vmatrix=DiagonalMatrix[Array[V[x[#]]&,n]];
D2=1/b^2 SparseArray[{Band[{1,1}]->-2,Band[{2,1}]->1,Band[{1,2}]->1},{n,n}];
H=-hbar^2/(2m)D2+Vmatrix;
eigensystem=Eigensystem[N[H]]; (* find its eigensystem *)
eigensystem={eigensystem[[1]],normalize/@eigensystem[[2]]}; (* normalize all the eigenvectors *)
Table[ListLinePlot[eigensystem[[2]][[-k]],AxesLabel->{"i","\[Psi]"},PlotLabel->"Eigenstate "<>ToString[k]<>", energy="<>ToString[NumberForm[eigensystem[[1]][[-k]],3]]],{k,1,5}]

Setting V=0 and comparing our numerical accuracy, we find the first eight eigenvalues to be:

Energies = {4.9344, 19.7328, 44.381, 78.855, 123.122, 177.138, 240.852, 314.2011}

whereas the exact values are $n^2 \pi^2/2$, giving:

Energies = {4.9348, 19.7392, 44.4132, 78.9568, 123.37, 177.653, 241.805, 315.827}

So this approach is pretty accurate, giving an error of 0.5% for the eighth energy level.
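The same construction carries over nearly line-for-line to Python with NumPy; here is a sketch (with ħ = m = 1, as in the example above) that builds the tridiagonal Hamiltonian and checks the V = 0 energies against the exact values n²π²/2:

```python
import numpy as np

n, L = 100, 1.0
hbar = m = 1.0
b = L / (n + 1)                       # grid spacing
x = np.arange(1, n + 1) * b           # interior points x_i = i*L/(N+1)

V = np.zeros_like(x)                  # free particle; try 200*(1 - 4*(x - L/2)**2) too

# Discrete Laplacian: -2 on the diagonal, +1 on the off-diagonals
D2 = (np.diag(-2.0 * np.ones(n)) +
      np.diag(np.ones(n - 1), 1) +
      np.diag(np.ones(n - 1), -1)) / b**2

H = -hbar**2 / (2 * m) * D2 + np.diag(V)

energies = np.linalg.eigvalsh(H)      # ascending eigenvalues of the symmetric H
exact = np.arange(1, 9) ** 2 * np.pi**2 / 2
print(energies[:8])                   # close to 4.9348, 19.739, ... (slightly low)
```

Using `eigvalsh` rather than a general eigensolver exploits the fact that H is real symmetric, which is both faster and numerically more stable.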
In quantum mechanics, observables are represented by Hermitian operators. But does every Hermitian operator represent an observable? If not, how do we know whether a Hermitian operator represents an observable or not? What is the precise definition of the term "observable"?

Given a quantum system with associated Hilbert space $\mathcal H$, the set of all self-adjoint bounded operators is $\newcommand{\bh}{\mathcal B(\mathcal H)_\text{sa}}\bh$. In general, only a small subset of $\bh$ will represent physically observable operators. For infinite-dimensional systems, $\bh$ is huge and there's no hope of ever finding experiments for all its members; even in finite-dimensional systems it is very challenging to find experimental schemes sensitive to even a vector-space basis for $\bh$. The physical approach to this is to begin with a finite set of operators which you know you can measure. For a single free particle, for example, you'd take position and momentum; for a finite set of spins you'd take all their Pauli matrices. You then form the set $\mathcal A$ of all operators that can be formed from them via products and linear combinations, which has the structure of a $\mathcal C^\ast$ algebra, and that is your set of physical observables. The $\mathcal C^\ast$ algebra itself is the really fundamental description of the system; the Hilbert space is simply one possible representation. In this formalism, states are functionals on $\mathcal A$: they are functions $$\rho:\mathcal A\rightarrow \mathbb C $$ that take an observable and give its measured value (or probable measured value, etc.) in that state. (In a Hilbert space representation, each such functional is associated with a density matrix $\hat\rho$, a trace-class positive operator such that $\rho(A)=\text{Tr}(\hat\rho\hat A)$ for $\hat A$ the Hilbert space operator associated with an arbitrary $A\in\mathcal A$.)
Edit: As joshphysics and WetSavannaAnimal rightly point out, this works as stated only for bounded operators and not for unbounded ones like position or energy. I'm afraid I don't know well enough how this extends to that class of operators - that needs someone with much stronger functional analysis chops than mine.
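The functional picture in the answer, ρ(A) = Tr(ρ̂Â), is easy to play with numerically; here is a small qubit-sized sketch (the particular state and observables are arbitrary illustrations, not tied to any physical system):

```python
import numpy as np

# Pauli matrices: a convenient operator basis for single-qubit observables
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

# An arbitrary density matrix: positive, trace one
rho = np.array([[0.75, 0.0], [0.0, 0.25]], dtype=complex)

def expectation(rho, A):
    """The state as a functional on observables: rho(A) = Tr(rho A)."""
    return np.trace(rho @ A).real

print(expectation(rho, sz))  # 0.5
print(expectation(rho, sx))  # 0.0
```

Linearity of the trace makes the functional linear on the algebra of observables, exactly as the abstract definition requires.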
The answer from the link provides a formula for the ratio of likelihoods of the null and the alternative hypotheses with a detailed derivation, and the R code below is my implementation of it with $\theta_1 = 1$, $\theta_2 = 2$, $n_1 = 70$, and $n_2 = 100$.

To generate samples, get the cdf of the exponential distribution first. If $f(x) = \frac{1}{\theta}e^{-\frac{x}{\theta}}$, then $F(x) = \int_{0}^{x} \frac{1}{\theta}e^{-\frac{t}{\theta}} dt = 1 - e^{-\frac{x}{\theta}}$. If $u = 1 - e^{-\frac{x}{\theta}}$, then $x = -\theta\ln(1-u)$. So draw $n_1$ samples $u \in [0, 1)$ and compute $x_i$ values with $\theta = \theta_1$, draw $n_2$ samples $\in [0, 1)$ and compute $y_i$ values with $\theta = \theta_2$, and then follow the likelihood ratio formula to compute the ratio. After testing a few times, the computed ratio is not always 1.

theta_ln_1Minusu <- function(theta, vec) {
  # inverse-cdf sampling: x = -theta * log(1 - u)
  -theta * log(1 - vec)
}

> theta1 = 1; theta2 = 2; n1 = 70; n2 = 100;
> u1vals = runif(n1); u2vals = runif(n2);
> xvals = theta_ln_1Minusu(theta1, u1vals);
> yvals = theta_ln_1Minusu(theta2, u2vals);
> x_avg = sum(xvals) / n1; y_avg = sum(yvals) / n2;
> x_avg
[1] 1.041831
> y_avg
[1] 1.733426
> w1 = n1/(n1+n2); w2 = n2/(n1+n2);
> likelihoodRatio = (w1+w2*y_avg/x_avg)^n1*(w1*x_avg/y_avg+w2)^n2;
> likelihoodRatio
[1] 168.8613

On the comment below: In addition, regarding EngrStudent's comment, I spent a while computing and found that the ratio is close to 1 only when x_avg is close to y_avg; in other words, when $\theta_1$ is close to $\theta_2$. Denote $\frac{y_{avg}}{x_{avg}}$ as $x$, $x > 0$, take the log of the last term in the link and divide by $(n_1+n_2)$: $w_1\ln(w_1+w_2x) + w_2\ln(w_1/x+w_2) = (w_1+w_2)\ln(w_1+w_2x) - w_2\ln(x) = \ln(w_1+w_2x) - \ln(x^{w_2})$. If this log value is 0, then $w_1+w_2x=x^{w_2}$. It is known that this equality holds when $x = 1$. Both sides are continuous. Now check the derivatives of both sides.
The left side's derivative is $w_2$, and the right side's is $w_2 x^{w_2-1}$. The increment rate of the left side is fixed at $w_2$, with $w_2 < 1$. If $x < 1$, then $x^{w_2-1} > 1$, so the increment rate of the right side is always larger than that of the left, $w_2$, and the left side is larger than the right side for $x > 0$ close to 0. And if $x > 1$, the increment rate of the right side is always smaller than $w_2$. Thus, the equality holds only when $x = 1$.
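That final argument is easy to check numerically. A short sketch (`log_ratio_term` is my name for the per-observation log of the expression in the link), using the same $w_1 = 70/170$ and $w_2 = 100/170$:

```python
import numpy as np

n1, n2 = 70, 100
w1, w2 = n1 / (n1 + n2), n2 / (n1 + n2)

def log_ratio_term(x):
    """log(w1 + w2*x) - w2*log(x); zero exactly when w1 + w2*x == x**w2."""
    return np.log(w1 + w2 * x) - w2 * np.log(x)

# Scan a grid of ratios x = y_avg/x_avg; the term vanishes only at x = 1
xs = np.linspace(0.05, 5.0, 200)
vals = log_ratio_term(xs)
print(log_ratio_term(1.0))   # ~0, up to floating-point rounding
```

The positivity away from x = 1 is the weighted AM-GM inequality, $w_1 + w_2 x \ge x^{w_2}$, with equality iff x = 1, which matches the derivative argument above.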
I got this question on an interview (Codility) the other day. Imagine we have a system responsible for counting the total number of pages on a web site. It's a single process on a single machine, and it has a RESTful HTTP API with one endpoint. It looks like

POST /increment?n=N

Whenever the endpoint is hit, the system will increment its internal count by N, open its state file on disk, overwrite the state file with the new value of N and close the file. There are many webserver processes on many machines, all calling into this system. Every time a webserver sees a web request, it calls out to the counter system like this

POST /increment?n=1

The system is having trouble keeping up. What steps would you take to improve the throughput of the system? I had a difficult time understanding the scenario they were trying to explain. I answered that instead of having the counting system do the reading and writing to disk, it would simply put the work into a queue (like Redis or something similar) and then the queue could process the work in due course. I have no idea if that comes anywhere close and there were no other clarifications given in the question. Curious how other folks would approach this.

Summary

I was recently provided with a 2018 MBP by my employer. The network throughput slows gradually until I'm forced to reboot.

Setup

The following machines are connected via wired ethernet to a gigabit switch:
- 2018 MBP (the problematic machine, 10.14.4)
- 2015 MBP (the control, thunderbolt ethernet adapter, 10.14.4)
- 2010 iMac (the test target, built in ethernet, ubuntu)

The 2018 MBP has a usbc-thunderbolt adapter, with a thunderbolt display attached and the ethernet cable connected to the display. When the throughput drops, I've tried the following:
- Switch to WiFi – doesn't help, speed starts off around 300MBit/s but drops to the level the wired connection is currently at.
- Use a different switch port / ethernet cable – doesn't help
- Test speed on control MBP using 2018 MBP's ethernet cable & switch port (it's fine)
- Connect thunderbolt monitor to control MBP and test throughput (also fine)
- Disconnect usbc-thunderbolt adapter in case it's doing something odd (no difference)
- Disconnect power from 2018 MBP (no difference)
- Sleep/Wake 2018 MBP (no difference)

I set up a test to show the problem: rebooted the 2018 MBP, then looped iPerf making connections to the iMac and plotted the results; sure enough, after a couple of days I'm getting 10MBits/s across a gigabit switch. The problem isn't the iMac (test target), as I also made periodic tests from the control MBP to the iMac which consistently showed ~940Mbits/s. I've been using the 2015 MBP and thunderbolt display with its ethernet connection for years without issue, and have ruled that out as a cause. Also, WiFi shows the same problem, which suggests it's not related to a specific interface. I suspect a software problem with the networking stack on the 2018. Potentially suspicious software:
- Symantec Endpoint Protection (mandated by employer)
- Pulse Secure (required for VPN connection to employer, but not connected during these tests)
- Docker (required for development, does add bridges etc to local networking – this doesn't cause a problem on the 2015 MBP though)

Questions

Any idea what this might be about? Is there any way to reset the networking stack or some workaround that doesn't involve a reboot? Unfortunately removing SEP is not an option 🙁

I have a running IPS/IDS on an access point; all traffic goes through this access point. Is there any way or tool to calculate the throughput of this IPS/IDS? Thanks in advance.

We have mostly large file downloads and don't care a lot about latency but only throughput into the major eyeball networks. We … | Read the rest of http://www.webhostingtalk.com/showthread.php?t=1764344&goto=newpost

I have a few questions regarding slotted-ALOHA.
Assume a network has 25 users and transmission request probability = 0.25.

1) What is the throughput, and what is the probability that a user will successfully transmit a frame after three unsuccessful attempts? I have managed to calculate the throughput as 0.00627. But the major problem is the probability of succeeding after 3 attempts. Should I use these two formulas?

$$ n_a = \sum_{n=0}^\infty n(1-p_a)^n p_a $$

$$ n_a = \frac{1-p_a}{p_a} $$

2) What is the average number of unsuccessful attempts before a user can transmit a frame in the above problem? Can somebody assist me? Best regards

I am practicing socket programming in the C language. I have two programs – server and client (based on TCP) – on two different laptops. The server forks a new process for every new request. For simulating multiple simultaneous clients, I have used the pthreads library in my client program. I have lots of files at the server, each of fixed size (2 MB). I am calculating two things at the client – throughput and response time. Throughput is the average number of files downloaded per second, and response time is the average time taken to download a file. I want to see at what number of simultaneous users (threads) the throughput saturates. But I am facing a problem in analyzing that, since the network speed varies a lot. Can someone suggest a way to analyse that? Maybe some factor other than throughput and response time which does not depend on network speed, because I need to find the number N (simultaneous users) for which the server is at maximum load. If network bandwidth control is necessary, then is there some simple way to control the maximum upload speed at the server? I am using Ubuntu 18.04. The max limit may be imposed on the whole system only (for simplicity), since there is no other foreground process running in parallel. For example, my upload speed varies from 3-5 MBps; can it be restricted to some lower value like 2 MBps?
Or should I restrict the download speed at the client rather than the server's upload speed, so it is constant and always less than the upload speed at the server, since the analysis is done by the client program?

Say that I have a superscalar processor and I am given the latency, issue and capacity (in clock cycles) for different instructions. What is the general formula for latency bound and throughput bound? (I will convert to cost per instruction and billion instructions per second.) This field seems to be too niche to find information online. In other words, how can I find the latency bound and throughput bound for fdiv?

Instruction | Latency | Issue | Capacity
fdiv | L | I | C

where L is the number of cycles for the fdiv latency, I for issue and C for capacity. Would $\text{latency bound} = \frac{L}{I} \times C$ and $\text{throughput bound} = \frac{L}{C}$? This is not the homework problem, but a "required" definition in order to be able to start it.

I've got a question about 10gb networking that has me very stumped… Here is my situation: I'm only getting transfer rates of ~150mb/sec between my PC and NAS on my 10gb home network.

My setup:
- [PC] Has Intel X550-T2 & a Samsung EVO 970 NVMe drive.
- [Server] Has Intel X540-T2 & 3x 250gb WD Blue SSDs in RAID 5
- [NAS] Synology DS1517+ with an Intel X550-T2.
- [Switch] MikroTik CRS305-1G-4S+IN with four Mikrotik 10GB SFP+ modules
NOTE: All cabling is Cat6a SFTP.

ISSUE: Copying test files from [Server] to [NAS] has normal/fast speed (about 700mb/sec average with spikes up to 1gb/sec). Unfortunately, [PC] to [NAS] is ultra slow (~150mb/sec average).

Troubleshooting Info: I ran iperf on all devices and speed was always ok (from the NAS to PC and also from the NAS to Server). I tried swapping out cables, tested cables via Intel driver software, tried using both ports of the X550-T2 on [PC] and also tried alternate ports on the MikroTik Switch. No joy. I also ran disk speed tests on [PC] with the Samsung Magician software.
Sequential Read speed is 3,525 and Sequential Write speed is 2,302. "Real world" speed on [PC] copying files within the NVMe is also fast (over 1gb/sec for a random assortment of files totaling ~6gb). Supplemental Info: [PC] is running Windows 10 64bit. [Server] is running Windows Server 2016. [NAS] has the newest version of DSM (6.2.1-23824 Update 6). Any suggestions?

This is what my project asks, but I am really stuck on how to calculate the actual throughput. The throughput measurement should be carried out for two scenarios, with and without a security protocol implemented. To investigate where the observed difference comes from, you need to look into the security mechanism you have implemented and find out what additional overhead has been introduced by it and how this additional overhead results in the reduced throughput.
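One way to see how per-message overhead turns into reduced throughput is a back-of-the-envelope goodput calculation; the byte counts below are illustrative assumptions, not measurements from any particular protocol:

```python
def goodput(raw_bps, payload_bytes, overhead_bytes):
    """Payload throughput when each record carries extra protocol bytes."""
    return raw_bps * payload_bytes / (payload_bytes + overhead_bytes)

# Illustrative (made-up) numbers: a 100 Mbit/s link, 1400-byte payloads,
# and ~50 bytes of header + MAC/tag added per record by the security layer
plain = goodput(100e6, 1400, 0)
secured = goodput(100e6, 1400, 50)
print(plain, secured)  # the secured case is ~3.4% lower
```

In a real measurement the handshake cost and any extra round trips would need to be accounted for separately, since they hit short transfers much harder than this steady-state estimate suggests.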
There are two fascinating aspects of angular motion that don’t exist for linear motion in quite the same way. The first is that the rotational inertia is readily changed, as for example when a skater extends or pulls in a leg. The second is related to the fact that both p and L are vector quantities and can change in direction without changing in magnitude.

When an ice skater begins to spin with a leg extended, there is only a small torque exerted on the skater by the ice. Thus, angular momentum diminishes rather slowly (she can spin for a long time). Now, if she pulls in her leg, her rotational inertia is reduced considerably, and her rotational velocity (spin velocity) increases considerably. This is most easily seen by writing the angular momentum as \( L = I\omega\) and noting that if L remains almost constant, then the product \(I\omega\) must remain constant.

Another fascinating, and rather startling, situation is the change in direction of the angular momentum of a spinning object when it is acted upon by a torque that is not along the direction of the angular momentum vector itself. This is the weird behavior exhibited by a spinning top or gyroscope. Figure 7.7.1 shows a bicycle wheel supported by a rope at the left end of a short axle. Figure 7.7.2 is the extended force diagram. The force of the Earth acting down at the center of gravity of the wheel produces a torque that is perpendicular to both this force and the axle; it points into the figure. If the wheel is not spinning, it just falls, rotating about the pivot point, because this is the only point of support. However, when the wheel is spinning with angular momentum \(L_0\), the situation is much different.

Figure 7.7.1 Figure 7.7.2

Figure 7.7.3 is a top view, showing the original angular momentum vector, \(L_i\), the new angular momentum vector, \(L_f\), and the torque \(\tau\). The torque acts for a time \(\Delta t\).
Figure 7.7.3

We use the angular impulse equation to give the change in angular momentum, \[ \tau \Delta t = \Delta L = L_f - L_i \] or \[ L_f = L_i + \tau\Delta t \] That is, the direction of the initial angular momentum, \(L_i\), is changed by the presence of the angular impulse, and is moved to the direction shown by \(L_f\). But if L is in a new direction, then the orientation of the wheel must have changed, because L is due to the spinning wheel and points along \(\omega\). This turning motion of the orientation of the wheel is called precession. Instead of falling, the wheel precesses. Of course, once the angular momentum (and the wheel) point in a new direction, the torque comes into play again, causing the wheel to precess still farther. In this fashion, the wheel is caused to precess in a horizontal circle about the pivot point.

Precession is analogous to the situation of a ball being twirled around in a circle on the end of a string. Why doesn’t the tension in the string pull the ball in toward the center of the circle? The answer is that it does, but the large tangential velocity also moves the ball in a direction tangent to the circle. The net result is that the ball travels in a circular path. If there were no large tangential velocity, the ball would indeed be pulled directly toward the center of the circle due to the tension in the string. A similar thing happens with the bike wheel. The torque causes a change in the direction of the large angular momentum of the spinning wheel. If the wheel did not have this large angular momentum, the torque would cause the wheel to tip over, or “fall down.”
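A quick numerical illustration of the steady precession described above: for a constant torque, the precession rate is \(\Omega = \tau / L_0\). All the wheel parameters below (mass, radius, pivot-to-center distance, spin rate) are made-up example values, not numbers from the text.

```python
import math

# Hypothetical bicycle-wheel parameters (example values only)
m = 2.0        # wheel mass, kg
R = 0.33       # wheel radius, m
r = 0.15       # distance from the rope pivot to the center of gravity, m
omega = 30.0   # spin rate, rad/s
g = 9.8        # gravitational acceleration, m/s^2

I = m * R**2          # thin-hoop approximation for the wheel's inertia
L0 = I * omega        # spin angular momentum
tau = m * g * r       # torque about the pivot from gravity

# In time dt, |dL| = tau*dt rotates L horizontally, so the
# precession angular velocity is tau / L0:
Omega = tau / L0
print(f"precession rate = {Omega:.3f} rad/s, "
      f"period = {2 * math.pi / Omega:.1f} s")
```

With these numbers the wheel sweeps a full horizontal circle in roughly fourteen seconds, slow compared to the spin itself, which is why the approximation of treating L as dominated by the spin is reasonable.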
In this article we define and publish the exact pre-flop probabilities for each possible combination of two hands in Texas Hold’em poker. An online tool at tools.timodenk.com/poker-odds-pre-flop makes the data visually accessible.

Introduction

A deck of French playing cards, as used in Texas Hold’em, contains 52 different cards; in a heads-up game two players are playing against each other. Both of them get two private cards dealt pre-flop, face down. There are $\binom{52}{2}=1326$ different possible pairs of cards that players can get. In this work we determine the odds of each starting hand to win against any other starting hand. There is no equation or easy way of calculating the winning probability of a given hand, since it would be required to contain all the rules and mechanics of the game. Statistical approaches can determine the winning odds of one starting hand against another very quickly. This gives results that are statistically accurate but not guaranteed to be exact. However, for mathematical analysis of certain properties of pre-flop situations, the precise numbers are a requirement.

Win Function

Hold’em cards can have 13 different ranks and four different suits $$\begin{align} \mathcal{R}=&\left\{\text{Ace}, \text{2}, \text{3}, \text{4}, \text{5}, \text{6}, \text{7}, \text{8}, \text{9}, \text{10}, \text{Jack}, \text{Queen}, \text{King}\right\}\,,\\ \mathcal{S}=&\left\{\text{Club}, \text{Heart}, \text{Spade}, \text{Diamond}\right\}\,.\end{align}$$ A card is an ordered pair of rank and suit; the set of all cards that exist in the card deck is denoted as $$\begin{align}\mathcal{C}=\mathcal{R}\times\mathcal{S}\,,\end{align}$$ with $\left\lvert\mathcal{C}\right\rvert=52$.
The set of possible starting hands, every possible combination of two distinct cards, is defined by $$\begin{align}\mathcal{H}=\left\{\left\{c_1\in\mathcal{C},c_2\in\mathcal{C}\right\}\mid c_1\neq c_2\right\}\end{align}$$ and has a cardinality of $\lvert\mathcal{H}\rvert=\binom{52}{2}=1326$. The set contains unordered pairs because the two private cards are not ordered either. Calculating the Cartesian product of the set of possible hands with itself gives a new set of ordered pairs, $\mathcal{M}=\mathcal{H}\times\mathcal{H}$, with $\left\lvert\mathcal{M}\right\rvert=1,758,276$. This set contains all combinations of two starting hands as ordered pairs. The order matters, because we define the meaning of each pair as a pre-flop situation where the first starting hand plays against the second. In this work we search for the winning odds function $$\begin{align} o:\mathcal{M}\rightarrow\left\{x\in\mathbb{Q}\mid0\le x\le1\right\}\,, \end{align}$$ that outputs, for any pre-flop situation $\left(h_1,h_2\right)$, the odds of starting hand $h_1$ winning against $h_2$. Since every card exists only once in a deck, two players cannot play against each other if $h_1\cap h_2\ne\emptyset$. For these cases $o$ is undefined. Post-flop, five community cards are dealt: three on the flop, followed by the turn card, and finally the river card. $\binom{52}{5}=2598960$ different combinations are possible. We define $\mathcal{P}$ as the set of all possible community cards, where each $p\in\mathcal{P}$ is itself a set of five different cards. The community cards $p$ determine which player wins, that is, the one whose starting hand builds the best showdown hand. If both showdown hands are of equal rank at showdown, the pot is split.
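The cardinalities above are easy to verify with the Python standard library (a sanity check on the counts, not part of the original article):

```python
from math import comb

n_hands = comb(52, 2)       # distinct two-card starting hands, |H|
n_matchups = n_hands ** 2   # ordered pairs of starting hands, |M|
n_boards = comb(52, 5)      # possible five-card community boards, |P|

print(n_hands, n_matchups, n_boards)
# 1326 1758276 2598960 — matching |H|, |M|, and |P| in the text
```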
Given a pre-flop situation $m=(h_1,h_2)$, where the starting hand $h_1$ plays against $h_2$, only a subset of $\mathcal{P}$, namely $$\begin{align}\mathcal{P}_{(h_1,h_2)}=\left\{p\in\mathcal{P}\mid p\cap h_1=p\cap h_2=\emptyset\right\}\end{align}$$ can be dealt, as some cards are already taken from the deck. Furthermore, a win function $$\begin{align}w:\{(m\in\mathcal{M},p\in\mathcal{P}_m)\}\rightarrow\{0,1\}\end{align}$$ is required that assesses a situation at showdown and returns $1$ if the first starting hand in $m$ wins against the second hand, given the public cards $p$. In the case of a loss or a split it returns $0$. $w$ is defined by the rules of poker. With these definitions at hand we can define the odds of a hand winning against another hand as $$\begin{align}o(m)=\frac{\displaystyle\sum_{p\in\mathcal{P}_m}w(m,p)}{\lvert \mathcal{P}_m\rvert}\,.\end{align}$$ In words, $o$ divides the number of possible community-card sets on which $h_1$ wins against $h_2$ by the number of community-card sets that can be dealt altogether.

Data and Results

With the odds function $o$ at hand, we can define the winning odds matrix $\mathbf{M}\in\mathbb{Q}^{\lvert\mathcal{H}\rvert\times\lvert\mathcal{H}\rvert}$ as $$M_{i,j}=o(\mathcal{H}_i,\mathcal{H}_j)\,.$$ The matrix contains the winning odds for every heads-up pre-flop situation and is undefined at places where $\mathcal{H}_i$ and $\mathcal{H}_j$ cannot play against each other (because of shared cards). From its row vectors, i.e. $\mathbf{M}_{i,:}$, we can compute the average odds of a hand $\mathcal{H}_i$ winning against a random hand by taking the average of all defined entries. The probability of a split between two hands is given by $1-M_{i,j}-M_{j,i}$. We release $\mathbf{M}$ in two ways. First, as an online tool with a user interface at tools.timodenk.com, where the user can pick their pre-flop situation and get the exact odds, i.e. $o(m)$ and $o(m)\lvert\mathcal{P}_m\rvert$.
Second, as a serialized Java object holding the $o(m)\lvert\mathcal{P}_m\rvert$ values for every $m$. The object can be imported into a Java program and processed from there.

We have already conducted some experiments related to the non-transitivity of the win function. For example, we found the three hands $$\begin{align} h_1=&\left\{\left(\text{Ace},\text{Club}\right),\left(\text{2},\text{Club}\right)\right\}\\ h_2=&\left\{\left(\text{10},\text{Spade}\right),\left(\text{9},\text{Spade}\right)\right\}\\ h_3=&\left\{\left(\text{2},\text{Heart}\right),\left(\text{2},\text{Diamond}\right)\right\}\,, \end{align}$$ to satisfy $$\begin{align} o\left(h_1,h_2\right)\approx&\,0.54\gt0.5\\ o\left(h_2,h_3\right)\approx&\,0.53\gt0.5\\ o\left(h_3,h_1\right)\approx&\,0.61\gt0.5\,. \end{align}$$ Expressed in words, this means that for a hand $h_1$ which statistically beats $h_2$, which in turn beats a third hand $h_3$, it cannot be concluded that $h_1$ beats $h_3$ as well. The win function is not transitive.

The most uneven pre-flop situation exists if the hands $$\begin{align} h_1=&\left\{\left(\text{King},\text{Club}\right),\left(\text{King},\text{Diamond}\right)\right\}\\ h_2=&\left\{\left(\text{King},\text{Heart}\right),\left(\text{2},\text{Club}\right)\right\} \end{align}$$ play against each other (or suit permutations thereof). The pair of kings has a $94.16\%$ chance of winning. The pot is chopped in $1.53\%$ of all possible outcomes. Surprisingly, aces do not appear in this constellation. The reason is that the hand A2 could hit a straight with just three cards (3, 4, 5). On the other hand, K2 needs four cards (A, 3, 4, 5 or 3, 4, 5, 6) in order to build a straight that uses the 2. Also noteworthy is the fact that the King of Clubs in $h_1$ blocks flushes which $h_2$ could otherwise have made using the Deuce of Clubs.
The lowest winning probability and the highest probability for a split pot exist if $$\begin{align} h_1=&\left\{\left(\text{3},\text{Club}\right),\left(\text{2},\text{Diamond}\right)\right\}\\ h_2=&\left\{\left(\text{3},\text{Diamond}\right),\left(\text{2},\text{Club}\right)\right\} \end{align}$$ battle each other. Both hands have an equal chance of winning, namely $0.71\%$. Consequently, a split occurs with a likelihood of $98.57\%$. Interestingly, the matchup of 43 vs. 43 (same suits as above) comes with a higher winning probability ($0.73\%$). That’s (at least partly) because there are a few more flushes for which one of the hands does not merely play the board. The lowest split probability, of just $0.19\%$, occurs if the following hands play: $$\begin{align} h_1=&\left\{\left(\text{9},\text{Club}\right),\left(\text{8},\text{Diamond}\right)\right\}\\ h_2=&\left\{\left(\text{Ace},\text{Spade}\right),\left(\text{Ace},\text{Heart}\right)\right\}\,. \end{align}$$ In which cases would the pot be chopped? For instance, if a straight flush of Clubs from 2 to 6 occurs.

Still unsolved is the search for the longest non-transitive chain of mutually disjoint pre-flop hands, such that $$o(h_1, h_2)>0.5\land o(h_2,h_3)>0.5\land\dots\land o(h_n,h_1)>0.5\,.$$ Special thanks go to Dominik Müller for fruitful discussions and many algorithm optimization ideas.
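The exhaustive enumeration behind the article's numbers can be sketched in a few lines. The `wins` callback below is a stub standing in for the full showdown evaluator $w$ (which the article leaves to "the rules of poker" and is far more involved); the sketch only shows the structure of $o(m)$.

```python
from itertools import combinations
from math import comb

def odds(h1, h2, wins, deck):
    """o(m): the fraction of boards on which h1 beats h2.

    `wins(h1, h2, board) -> bool` must implement the showdown
    comparison w from the rules of poker (not provided here).
    """
    remaining = [c for c in deck if c not in h1 and c not in h2]
    total = comb(len(remaining), 5)   # |P_m|: boards that can still be dealt
    won = sum(wins(h1, h2, board) for board in combinations(remaining, 5))
    return won / total

# For a full 52-card deck, every valid matchup removes four cards,
# so |P_m| = C(48, 5) = 1712304 boards must be evaluated per entry of M.
print(comb(48, 5))
```

Enumerating all 1,712,304 boards for each of the defined entries of $\mathbf{M}$ is what makes the exact computation expensive compared to the statistical sampling the article mentions.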
Equation of Circle

Theorem

The circle with center $\tuple {a, b}$ and radius $R$ has the Cartesian equation:

$\paren {x - a}^2 + \paren {y - b}^2 = R^2$

and the parametric equation:

$x = a + R \cos t, \ y = b + R \sin t$

In polar coordinates, the center of the circle is commonly denoted $\polar {r_0, \varphi}$, where $r_0$ is the distance from the origin and $\varphi$ is the angle from the polar axis in the counterclockwise direction. The equation of the circle is then:

$r^2 - 2 r r_0 \map \cos {\theta - \varphi} + \paren {r_0}^2 = R^2$
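The Cartesian, parametric, and polar forms above can be checked against one another numerically; a small sketch with arbitrarily chosen center and radius (the polar equation is the law of cosines applied to the triangle origin-center-point):

```python
import math

# Example circle: center (a, b), radius R (arbitrary values)
a, b, R = 3.0, 4.0, 2.0

# A point on the circle via the parametric form
t = 1.2
x = a + R * math.cos(t)
y = b + R * math.sin(t)

# Cartesian form: (x - a)^2 + (y - b)^2 = R^2
assert math.isclose((x - a)**2 + (y - b)**2, R**2)

# Polar form: convert the point and the center to polar coordinates
r, theta = math.hypot(x, y), math.atan2(y, x)
r0, phi = math.hypot(a, b), math.atan2(b, a)
lhs = r**2 - 2 * r * r0 * math.cos(theta - phi) + r0**2
assert math.isclose(lhs, R**2)
```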
Consider a $C^k$, $k\ge 2$, Lorentzian manifold $(M,g)$ and let $\Box$ be the usual wave operator $\nabla^a\nabla_a$. Given $p\in M$, $s\in\Bbb R,$ and $v\in T_pM$, can we find a neighborhood $U$ of $p$ and $u\in C^k(U)$ such that $\Box u=0$, $u(p)=s$ and $\mathrm{grad}\, u(p)=v$?

The tog is a measure of thermal resistance of a unit area, also known as thermal insulance. It is commonly used in the textile industry and often seen quoted on, for example, duvets and carpet underlay. The Shirley Institute in Manchester, England developed the tog as an easy-to-follow alternative to the SI unit of m²K/W. The name comes from the informal word "togs" for clothing, which itself was probably derived from the word toga, a Roman garment. The basic unit of insulation coefficient is the RSI (1 m²K/W). 1 tog = 0.1 RSI. There is also a clo clothing unit equivalent to 0.155 RSI or 1.55 tog...

The stone or stone weight (abbreviation: st.) is an English and imperial unit of mass now equal to 14 pounds (6.35029318 kg). England and other Germanic-speaking countries of northern Europe formerly used various standardised "stones" for trade, with their values ranging from about 5 to 40 local pounds (roughly 3 to 15 kg) depending on the location and objects weighed. The United Kingdom's imperial system adopted the wool stone of 14 pounds in 1835. With the advent of metrication, Europe's various "stones" were superseded by or adapted to the kilogram from the mid-19th century on. The stone continues...

Can you tell me why this question deserves to be negative? I tried to find faults and I couldn't: I did some research, I did all the calculations I could, and I think it is clear enough. I had deleted it and was going to abandon the site, but then I decided to learn what is wrong and see if I ca...

I am a bit confused about angular momentum in classical physics. For orbital motion of a point mass: if we pick a new coordinate (that doesn't move w.r.t.
the old coordinate), angular momentum should still be conserved, right? (I calculated a quite absurd result: it is no longer conserved; there is an additional term that varies with time in the new coordinate: $\vec {L'}=\vec{r'} \times \vec{p'}$ $=(\vec{R}+\vec{r}) \times \vec{p}$ $=\vec{R} \times \vec{p} + \vec L$ where the 1st term varies with time. (R is the shift of coordinate; R is constant, and p is sort of rotating.) Would anyone be kind enough to shed some light on this for me?

From what we discussed, your literary taste seems to be classical/conventional in nature. That book is inherently unconventional in nature; it's not supposed to be read as a novel, it's supposed to be read as an encyclopedia. @BalarkaSen Dare I say it, my literary taste continues to change as I have kept on reading :-) One book that I finished reading today, The Sense of an Ending (different from the movie with the same title), is far from anything I would've been able to read even two years ago, but I absolutely loved it. I've just started watching The Fall - it seems good so far (after 1 episode)... I'm with @JohnRennie on the Sherlock Holmes books and would add that the most recent TV episodes were appalling. I've been told to read Agatha Christie but haven't got round to it yet.

Is it possible to ever make a time machine? Please give an easy answer, a simple one. A simple answer, but a possibly wrong one, is to say that a time machine is not possible. Currently, we have neither the technology to build one, nor a definite, proven (or generally accepted) idea of how we could build one.

@vzn if it's a romantic novel, which it looks like, it's probably not for me - I'm getting to be more and more fussy about books and have a ridiculously long list to read as it is.
I'm going to counter that one by suggesting Ann Leckie's Ancillary Justice series Although if you like epic fantasy, Malazan Book of the Fallen is fantastic @Mithrandir24601 lol it has some love story but its written by a guy so cant be a romantic novel... besides what decent stories dont involve love interests anyway :P ... was just reading his blog, they are gonna do a movie of one of his books with kate winslet, cant beat that right? :P variety.com/2016/film/news/… @vzn "he falls in love with Daley Cross, an angelic voice in need of a song." I think that counts :P It's not that I don't like it, it's just that authors very rarely do anywhere near a decent job of it. If it's a major part of the plot, it's often either eyeroll worthy and cringy or boring and predictable with OK writing. A notable exception is Stephen Erikson @vzn depends exactly what you mean by 'love story component', but often yeah... It's not always so bad in sci-fi and fantasy where it's not in the focus so much and just evolves in a reasonable, if predictable way with the storyline, although it depends on what you read (e.g. Brent Weeks, Brandon Sanderson). Of course Patrick Rothfuss completely inverts this trope :) and Lev Grossman is a study on how to do character development and totally destroys typical romance plots @Slereah The idea is to pick some spacelike hypersurface $\Sigma$ containing $p$. Now specifying $u(p)$ is trivial because the wave equation is invariant under constant perturbations. So that's whatever. But I can specify $\nabla u(p)|_\Sigma$ by specifying $u(\cdot, 0)$ and differentiating along the surface. For the Cauchy theorems I can also specify $u_t(\cdot,0)$.
Now take the neighborhood to be $\approx (-\epsilon,\epsilon)\times\Sigma$ and then split the metric like $-dt^2+h$. Do forwards and backwards Cauchy solutions, then check that the derivatives match on the interface $\{0\}\times\Sigma$.

Why is it that you can only cool down a substance so far before the energy goes into changing its state? I assume it has something to do with the distance between molecules, meaning that intermolecular interactions have less energy in them than making the distance between them even smaller, but why does it create these bonds instead of making the distance smaller / just reducing the temperature more?

Thanks @CooperCape, but this leads me to another question I forgot ages ago. If you have an electron cloud, is the electric field from that electron just some sort of averaged field from some centre of amplitude, or is it a superposition of fields each coming from some point in the cloud?
SPPU Civil Engineering (Semester 3)
Geotechnical Engineering May 2015
Total marks: -- Total time: --

INSTRUCTIONS
(1) Assume appropriate data and state your reasons
(2) Marks are given to the right of every question
(3) Draw neat diagrams wherever necessary

Answer any one question from Q1 and Q2

1 (a) Starting from first principles derive the following equation with usual nomenclature: \[ \gamma = \dfrac {(G+eS_r)\gamma_\omega}{1+e} \] 6 M
1 (b) Explain with a diagram a method for determining the coefficient of permeability 'K' for clayey soils in the laboratory. 6 M
2 (a) On a single graph paper, draw neat labelled graphs for: i) Uniformly graded soil ii) Well graded soil iii) Gap graded soil iv) Show on the same graph the zones of clay size, silt size, sand and gravel clearly. 6 M
2 (b) State the applications of a flow net and explain how seepage through a dam can be determined using a flow net. (State the equation and the terms involved in it.) 6 M

Answer any one question from Q3 and Q4

3 (a) Write a note on the Vane Shear Test with a neat sketch and the formulae involved. 6 M
3 (b) A load of 1000 kN acts as a point load at the surface of a soil mass. Estimate the stress at a point 3 m below and 4 m away from the point of action of the load by Boussinesq's formula. Compare the value with the result from Westergaard's theory. 6 M
4 (a) Draw a curve showing the relationship between dry density and moisture content for the Standard Proctor test and indicate the salient features of the curve. 6 M
4 (b) Define total and effective stress.
Determine the shear strength in terms of effective stress on a plane within a saturated soil mass at a point where the total normal stress is 200 kN/m² and the pore water pressure is 80 kN/m². The effective stress shear strength parameters for the soil are c' = 16 kN/m² and ϕ' = 39°. 6 M

Answer any one question from Q5 and Q6

5 (a) Describe Rebhann's construction for the determination of earth pressure with a neat sketch. 7 M
5 (b) Derive the expression for the active state of pressure at any point for a submerged cohesionless backfill along with pressure diagrams. 6 M
6 (a) Explain how surcharge will affect earth pressure for cohesionless and cohesive soils in the active state, with pressure diagrams. 7 M
6 (b) A smooth vertical wall retains a level surface with γ = 18 kN/m³, ϕ = 30°, to a depth of 8 m. Draw the lateral pressure diagram and compute the total active pressure in the dry condition and when the water table rises to the GL. Assume γ_sat = 22 kN/m³. 6 M
7 (a) Write short notes on the causes and remedial measures of landslides. 7 M
7 (b) Explain controlling techniques for subsurface contamination. 6 M
8 (a) What is slope stability and how are the different types of factor of safety determined? 7 M
8 (b) Discuss sources and types of ground contamination. 6 M
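Question 4(b) is a direct application of the effective-stress Mohr-Coulomb criterion, τ = c' + (σ - u)·tan ϕ'; a quick check with the numbers given in the question:

```python
import math

sigma = 200.0    # total normal stress, kN/m^2
u = 80.0         # pore water pressure, kN/m^2
c_eff = 16.0     # effective cohesion c', kN/m^2
phi_eff = 39.0   # effective friction angle phi', degrees

# Effective normal stress, then Mohr-Coulomb shear strength
sigma_eff = sigma - u   # = 120 kN/m^2
tau = c_eff + sigma_eff * math.tan(math.radians(phi_eff))
print(f"shear strength = {tau:.1f} kN/m^2")   # about 113.2 kN/m^2
```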
CDS 110b: Stochastic Systems

This set of lectures presents an overview of random processes and stochastic systems. We begin with a short review of continuous random variables and then consider random processes and linear stochastic systems. Basic concepts include probability density functions (pdfs), joint probability, covariance, correlation and stochastic response.

References and Further Reading
R. M. Murray, Optimization-Based Control. Preprint, 2008: Chapter 4 - Stochastic Systems
Hoel, Port and Stone, Introduction to Probability Theory - a good reference for basic definitions of random variables
Apostol II, Chapter 14 - another reference for basic definitions in probability and random variables

Frequently Asked Questions

Q (2008): Why does E{X Y} = 0 if two random variables are independent?

By definition, we have that <amsmath> E\{X Y\} = \int_{-\infty}^\infty \int_{-\infty}^\infty x y p(x, y)\, dx\, dy</amsmath> where $p(x, y)$ is the joint probability density function. If $X$ and $Y$ are independent then $p(x, y) = p(x) p(y)$ and we have <amsmath> \begin{aligned} E\{X Y\} &= \int_{-\infty}^\infty \int_{-\infty}^\infty x y p(x) p(y)\, dx\, dy \\ &= \int_{-\infty}^\infty \left( \int_{-\infty}^\infty x p(x)\, dx \right) y p(y)\, dy = \int_{-\infty}^\infty \mu_X y p(y) dy = \mu_X \mu_Y. \end{aligned}</amsmath> If we assume that $\mu_X = 0$ then the result follows. (Alternatively, compute …)

Q (2007): How do you determine the covariance and how does it relate to random processes?

The covariance of two random variables $X$ and $Y$ is given by $\operatorname{cov}(X, Y) = E\{(X - \mu_X)(Y - \mu_Y)\}$. For the case when $X = Y$, the covariance is called the variance, $\sigma_X^2$. For a random process, $X(t)$, with zero mean, we define the covariance as $\Sigma(t_1, t_2) = E\{X(t_1) X(t_2)\}$. If $X$ is a vector of length $n$, then the covariance matrix is an $n \times n$ matrix with entries $\Sigma_{ij}(t_1, t_2) = E\{X_i(t_1) X_j(t_2)\}$, where the expectation is taken over the joint probability density function between $X_i(t_1)$ and $X_j(t_2)$. Intuitively, the covariance of a vector random process describes how elements of the process vary together. If the covariance is zero, then the two elements are uncorrelated (though not necessarily independent).
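The identity from the first FAQ (E{XY} = μ_X μ_Y, which is 0 when a mean vanishes) is easy to check empirically; a Monte Carlo sketch with two independent zero-mean Gaussians, standard library only (the seed and sample size are arbitrary choices):

```python
import random

random.seed(0)
n = 100_000

# Two independent, zero-mean, unit-variance Gaussian sample sets
x = [random.gauss(0.0, 1.0) for _ in range(n)]
y = [random.gauss(0.0, 1.0) for _ in range(n)]

# Sample estimate of E{XY}; should be near 0, within about 1/sqrt(n)
exy = sum(a * b for a, b in zip(x, y)) / n
print(f"E{{XY}} estimate: {exy:.4f}")
```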
Q (2006): Can you explain the jump from pdfs to correlations in more detail?

The probability density function (pdf), $p(x; t)$, tells us how the value of a random process is distributed at a particular time: <amsmath> P(a \leq X(t) \leq b) = \int_a^b p(x; t) dx.</amsmath> You can interpret this by thinking of $X(t)$ as a separate random variable for each time $t$. The correlation for a random process tells us how the value of the random process at one time, $t_1$, is related to the value at a different time, $t_2$. This relationship is probabilistic, so it is also described in terms of a distribution. In particular, we use the joint probability density function, $p(x_1, x_2; t_1, t_2)$, to characterize this: <amsmath> P(a_1 \leq X_1(t_1) \leq b_1, a_2 \leq X_2(t_2) \leq b_2) = \int_{a_1}^{b_1} \int_{a_2}^{b_2} p(x_1, x_2; t_1, t_2) dx_1 dx_2</amsmath> Given any random process, $p(x_1, x_2; t_1, t_2)$ describes (as a density) how the value of the random variable at time $t_1$ is related (or "correlated") with the value at time $t_2$. We can thus describe a random process according to its joint probability density function. In practice, we don't usually describe random processes in terms of their pdfs and joint pdfs. It is usually easier to describe them in terms of their statistics (mean, variance, etc.). In particular, we almost never describe the correlation in terms of joint pdfs, but instead use the correlation function: <amsmath> \rho(t, \tau) = E\{X(t) X(\tau)\} = \int_{-\infty}^\infty \int_{-\infty}^\infty x_1 x_2 p(x_1, x_2; t, \tau) dx_1 dx_2</amsmath> The utility of this particular function is seen primarily through its application: if we know the correlation for one random process and we "filter" that random process through a linear system, we can compute the correlation for the corresponding output process.

Q (2006): What is the meaning of a white noise process?

The definition of a white noise process is that it is a Gaussian process with constant power spectral density.
The intuition behind this definition is that the spectral content of the process is constant at all frequencies. The term "white" noise comes from the fact that the color white comes from having light present at all frequencies. Another interpretation of white noise is through the power spectrum of a signal. In this case, we simply compute the Fourier transform of the signal. The signal is said to be white if it has a constant spectrum across all frequencies.

Q (2006): What is a random process (in relation to transfer functions)?

Formally, a random process is a continuous collection of random variables $X(t)$. It is perhaps easiest to think first of a discrete-time random process. At each time instant $t$, $X(t)$ is a random variable according to some distribution. If the process is white, then there is no correlation between $X(t_1)$ and $X(t_2)$ when $t_1 \neq t_2$. If, on the other hand, the value of $X(t_1)$ gives us information about what $X(t_2)$ will be, then the processes are correlated and $\rho(t_1, t_2)$ is the correlation function. These concepts can also be written in continuous time, in which case each $X(t)$ is a random variable and $\rho(t, \tau)$ is the correlation function. This takes some time to get used to, since $X(t)$ is not a signal, but rather a description of a class of signals (satisfying some probability measures). A transfer function describes how we map signals in the frequency domain. We can use transfer functions to describe how random processes are mapped through a linear system (this is called the spectral response; see the lecture notes or text).

Q (2006): What is the transfer function for a parallel combination of $H_1$ and $H_2$?

If two transfer functions are in parallel (meaning: they receive the same input and the output is the sum of the outputs from the individual transfer functions), the net transfer function is $H_1 + H_2$. Note that this is different from the formula that you get when you have parallel interconnections of resistors in electrical engineering.
This is because when two outputs come together in a circuit diagram, this restricts the voltage to be the same at the corresponding terminals, whereas in a block diagram we sum the output signals.
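The parallel-interconnection rule is easy to verify numerically. The two first-order blocks below are made-up examples; the last line contrasts the block-diagram sum with the resistor-style formula, which gives a different answer here.

```python
# Parallel blocks: same input, outputs summed -> H = H1 + H2.
# (Contrast with parallel resistors, where R = R1*R2/(R1+R2).)
def H1(s): return 1.0 / (s + 1.0)   # example first-order block
def H2(s): return 2.0 / (s + 3.0)   # example first-order block

def H_parallel(s): return H1(s) + H2(s)

s = 1j * 2.0   # evaluate at s = j*omega, omega = 2 rad/s
resistor_style = H1(s) * H2(s) / (H1(s) + H2(s))
print(abs(H_parallel(s)), abs(resistor_style))   # clearly different magnitudes
```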
HEALPix and spherical harmonic sampling

Laurent Jacques
Posts: 7
Joined: October 20 2004
Affiliation: FYMA/PHYS/UCL

Hi, I still have a question about the HEALPix grid, in the particular context of spherical harmonic (SH) sampling and the SH transform. Are we sure that the SH transform can be realized exactly to all orders, as it can be on a traditional equi-angular grid (I mean, Δθ = const, Δφ = const)? The problem is that, in the polar caps, the number of points on each iso-latitude ring decreases as the ring approaches the North pole, from 4·Nside down to 4. However, spherical harmonics Y_l^m of order (l,m) (with |m| <= l) contain the term exp(i m phi), which needs at least m+1 points to be correctly sampled (without aliasing) on each iso-latitude ring. So, I'm not sure that spherical Fourier transforms, i.e. the spherical harmonic transform, can be realized exactly on the HEALPix grid. At best, this can only be an approximation with a certain error. What do you think? Perhaps it is already well known, but I haven't read such a remark elsewhere. Do you know a reference which explains that effect? Best, Laurent.

Antony Lewis
Posts: 1498
Joined: September 23 2004
Affiliation: University of Sussex

Yes, it is approximate. However, near the poles the spherical harmonics go like [tex]Y_{lm} \sim (-1)^m \frac{e^{im\phi}}{m!}\sqrt{\frac{(2l+1)}{4\pi} \frac{(l+m)!}{(l-m)!}} \left( \frac{\theta}{2}\right)^m[/tex] so for large m and small [tex]\theta[/tex] the contributions go like [tex]\theta^m[/tex], which is tiny. So although HEALPix does include all [tex]m\le l[/tex], near the polar caps you can to very good accuracy neglect the high-m contributions (much better than numerical precision). This makes sense because [tex]e^{i\ell\phi}[/tex] has a spatial frequency much higher near the poles than at the equator, so you wouldn't expect a significant contribution near the poles unless m is small compared to [tex]\ell[/tex].
So a small number of pixels near the poles is not so bad.

Laurent Jacques
Posts: 7
Joined: October 20 2004
Affiliation: FYMA/PHYS/UCL

OK. Thanks for the answer. It would be good to control this error (I don't know how), which is not present on an equi-angular grid (I mean, for band-limited signals). I think that, for instance, CMB analysis relies partially on these spherical harmonic transforms, and results in this field have to be error-controlled. However, IMHO, the approximation is not so harmless, since for instance for [tex]i = N_{\rm side}/2 [/tex] (i.e. in the North cap, with [tex]i[/tex] the iso-latitude ring label), the spherical harmonics are not so well estimated by [tex]C e^{{\rm i} m\phi} \big(\frac{\theta}{2}\big)^m [/tex], leading to more important contributions and preventing good computation of the SH transform for [tex]|m|\geq N_{\rm side}/2 [/tex] in cases where [tex]l\geq N_{\rm side}/2 [/tex] (i.e. at high frequencies). Best, Laurent.

Antony Lewis
Posts: 1498
Joined: September 23 2004
Affiliation: University of Sussex

From the differential equation for the [tex]P_{lm}[/tex] you can show that the turning point (where the character changes from oscillatory to decaying) is at [tex]\sin\theta \sim m/\ell[/tex]. Hence for [tex]m >> \ell \sin\theta[/tex] you are always well in the damped tail, and the contributions are small. However, it's true that the pixelization error is worse around the polar caps. I believe they are writing a paper on the error properties.
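The (θ/2)^m suppression in the approximation Antony quotes can be illustrated numerically; the colatitude and the m values below are arbitrary examples, and only the m-dependent factor is computed:

```python
import math

theta = 0.05   # colatitude near the pole, radians (example value)
for m in (2, 5, 10, 20):
    # m-dependence of |Y_lm| near the pole, from the small-theta form
    damping = (theta / 2) ** m
    print(f"m = {m:2d}: (theta/2)^m = {damping:.3e}")
```

Already at m = 10 the factor is of order 1e-16, i.e. below double-precision round-off, which is why neglecting high-m terms near the caps costs nothing numerically.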
We highly recommend (I cannot stress this enough) that you download the LaTeX fonts (just 151 KB) to your PC and install them. This will not only improve the look of the website, but also save bandwidth, as you will not need to download the images every time you see an equation.

Instructions to install and enable fonts:

A. How (and why) to install LaTeX fonts on your PC:
1. Download the fonts attached to this post (we have attached 8 fonts) and unzip them. Then copy all 8 fonts (Ctrl+C). Go to Start > Control Panel > Fonts. Now paste the fonts (Ctrl+V). The fonts will be installed.
2. Then restart your browser and go to this topic.
3. After that, click on the jsMath button at the bottom-right corner of your page. Click on "Options", and check whether your settings are as shown in the following image. If not, change them. (Especially, set the setting for 5 years, and select "use native TeX fonts".)
4. Bingo! We are done!

When you don't have the fonts installed, you shall see something like this (and it will take an eon to load): Whereas, with the settings enabled and the fonts installed, you should see something nice like the following. Now compare and decide if you have done everything correctly.

$2\sum \sqrt{(a^{4}+b^{2}c^{2})(b^{4}+c^{2}a^{2})} \geq 2[3,1,0]$ (Cauchy)
$a^4+b^4+c^4+a^{2}b^{2}+a^{2}c^{2}+b^{2}c^{2}+ 2\sum \sqrt{(a^{4}+b^{2}c^{2})(b^{4}+c^{2}a^{2})}\geq 2[3,1,0]+\frac{[4,0,0]+[2,2,0]}{2}$
$2[3,1,0]+\frac{[4,0,0]+[2,2,0]}{2}\geq\frac{[4,0,0]}{2}+ \frac{3[3,1,0]}{2}+\frac{[2,1,1]+[2,2,0]}{2}$

I think that after seeing these examples, I don't need to repeat why we should install the LaTeX fonts.

B. LaTeX Intro:
In advanced mathematics or science textbooks you can see nicely typeset equations. LaTeX makes this possible. LaTeX is a typesetting program that can generate professional-looking equations (and many more cool things!). However, the aim of this post is to tell you how to write nice simple equations.
The Art of Problem Solving (AoPS) forum has a great LaTeX guide for beginners. However, we shall focus on a few parts, to be able to write in our posts: 1. How to write equations in the posts. 2. All the symbols (you probably haven't seen all of them before). Now you can write equations without learning LaTeX code. However, learning LaTeX is fun, and might be useful in your later life. So try learning LaTeX by reading the codes. Read: Writing Equations using LaTeX was never easier!

C. Our own guide:

If you want to write a simple equation within a line, just use dollar signs (technically, use them when you are writing inline math), like the following: This is Pythagoras' theorem $a^2+b^2=c^2$

Code: Select all
This is Pythagoras' theorem $a^2+b^2=c^2$

For a displayed equation, use: The following is the normal distribution function \[F(x) = \tfrac{1}{\sqrt{2\pi\sigma^2}}\; e^{ -\frac{(x-\mu)^2}{2\sigma^2} } \]

Code: Select all
\[ Your equation within these backslashed square brackets \]

Code: Select all
The following is the normal distribution function \[F(x) = \tfrac{1}{\sqrt{2\pi\sigma^2}}\; e^{ -\frac{(x-\mu)^2}{2\sigma^2} } \]

The best way to learn LaTeX is to see examples. Double-click on the examples to see the code.
*** For writing a power, use ^, i.e. x^a = $x^a$
*** For writing a subscript, use _, i.e. x_a = $x_a$
*** For writing nice-looking fractions, use \frac{a}{b} = $\frac{a}{b}$. You can use \frac within a \frac command, i.e. \frac{\frac{ab}{c}}{\frac{f}{g}} $=\dfrac{\frac{ab}{c}}{\frac{f}{g}}$
*** The "\" sign is very frequently used in LaTeX. See the symbol guide of AoPS to learn how to write symbols and letters like \pi = $\pi$

When you think that you have learned some basic LaTeX, you can try it on the codecogs equation editor or on our Test Forum.
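Putting the pieces above together, here is a small practice snippet you could paste into a post (the equation itself is just an arbitrary illustration, not from the guide):

```latex
% inline math between dollar signs, display math between \[ ... \]
The quadratic formula is $x = \frac{-b \pm \sqrt{b^2-4ac}}{2a}$, obtained from
\[
  a\left(x + \frac{b}{2a}\right)^2 = \frac{b^2 - 4ac}{4a}, \qquad a \neq 0.
\]
```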
The pressure changes, so we cannot use the simplified equation you used. From the perfect gas law, we could calculate the volume change, but we don't need to go there. The big problem is that in addition to the heat transfer, pressure-volume work is done. Work is easy to determine at constant pressure; when the pressure also changes, it gets harder. However, let's use enthalpy to get rid of the work (and thus the change in volume), leaving us with $dS=f\left(\frac{dT}{T},\frac{dP}{P}\right)$, which matches the data we are given. Here's the derivation. Second Law of Thermodynamics: $$dS = \frac{\delta Q}{T}$$ First Law of Thermodynamics: $$dU = \delta Q + \delta W = \delta Q - PdV_m$$ Definition of Enthalpy: $$dH = dU + d(PV_m) = dU +PdV_m + V_mdP$$ Substitutions: $$dH = \delta Q + PdV_m + V_mdP - PdV_m = \delta Q +V_mdP$$$$\delta Q = dH - V_mdP$$ Perfect gas equation (notice that I am using the molar volume $V_m$, so we will be calculating the molar entropy change): $$PV_m = RT$$$$V_m = \frac{RT}{P}$$ At constant pressure (yes, I know the pressure changes; bear with me, we'll put that back in): $$dH = C_p dT$$ Substitute to get rid of $V_m$, since we don't have volume data: $$\delta Q = C_p dT - \frac{RT}{P}dP$$ Substitute into the second law: $$dS = C_p \frac{dT}{T} - R\frac{dP}{P}$$ Integrate (now we account for the change in pressure): $$\int_{S_1}^{S_2} dS = C_p \int_{T_1}^{T_2} \frac{dT}{T} - R \int_{P_1}^{P_2} \frac{dP}{P} $$ $$\Delta S = C_p \ln \left(\frac{T_2}{T_1}\right) - R \ln \left(\frac{P_2}{P_1} \right)$$ Using this method, I get the correct answer after I multiply the molar entropy change by the number of moles. Note that if you are given the volume change instead, you can use a similar derivation to reach a similar equation: $$\Delta S = C_v \ln \left(\frac{T_2}{T_1}\right) + R \ln \left(\frac{V_{m,2}}{V_{m,1}} \right)$$ Reference:
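As a sanity check on the two final formulas, here is a small numeric sketch. The gas, temperatures, and pressures are purely illustrative assumptions (one mole of a diatomic ideal gas with $C_p = \frac{7}{2}R$, heated from 300 K to 400 K while compressed from 1 bar to 2 bar); the point is that the pressure form and the volume form agree:

```python
import math

R = 8.314          # J/(mol*K)
Cp = 3.5 * R       # illustrative diatomic ideal gas
Cv = Cp - R        # Mayer's relation

T1, T2 = 300.0, 400.0    # K
P1, P2 = 1.0e5, 2.0e5    # Pa

# Pressure form derived above
dS_p = Cp * math.log(T2 / T1) - R * math.log(P2 / P1)

# Volume form, using Vm = R*T/P from the perfect gas law
Vm1, Vm2 = R * T1 / P1, R * T2 / P2
dS_v = Cv * math.log(T2 / T1) + R * math.log(Vm2 / Vm1)

print(dS_p, dS_v)  # both ≈ 2.61 J/(mol*K): the two forms agree
```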
Why does the following DFA have (to have) the state $b_4$? Shouldn't states $b_1,b_2,b_3$ already cover "exactly two 1s"? Wouldn't state $b_4$ mean "more than two 1s", even though it doesn't lead to an accept state?

$b_4$ is what is called a trap state: a state that exists just so that all possible transitions are explicitly represented, even those that do not lead to a final state. It doesn't change the language being defined, and can be omitted for the sake of brevity.

$b_4$ exists to cover the entire alphabet ($\{0,1\}$, in this case) for each state. While this is not strictly necessary, conventions differ on whether to include such states. By showing the complete graph, it is more obvious that a third '1' in your input string permanently moves you out of the accept state $b_3$.

The formal definition of a DFA is $M = (Q, \Sigma, \delta, q_0, F)$, where $Q$ is the finite set of states, $\Sigma$ is the alphabet, $\delta$ is the transition function, $q_0 \in Q$ is the start state, and $F \subseteq Q$ is the set of final states. Note that $\delta \colon Q \times \Sigma \to Q$ is specified to be a function, i.e., it has to be defined for all states and symbols. The graphical depiction of the DFA is complete in this sense with $b_4$. Often such dead states are just omitted for the sake of clarity of the diagram; the reader is surely capable of adding them if required.

Answering your question, I have to say (sadly) that it depends. It depends on the definition of DFA that you are using, because there appears to be no consensus on a unique definition. For example, I use the definition of a DFA where $\delta$ is a function. The next question is: is $\delta$ a total function or a partial function? Personally, when I use the term function I am referring to total functions by default.
But someone may disagree with me. More importantly, when I studied the definition of a DFA, my teacher told me that $\delta$ is a total function. Summarizing: I use a particular definition of a DFA where the $b_4$ state has to exist. I may skip drawing it out of laziness or for clarity, but I know it exists. Finally, to answer your question more precisely, we have to know which definition of DFA you use.

Wouldn't state $b_4$ mean "more than two 1s", even if it doesn't trigger an accept state?

The state $b_4$ means that if a word $\sigma$ has more than two "1"s, it will never reach an accepting state, so $\sigma\notin L = \{w\,|\, w \text{ contains exactly two ones}\}$.
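To make the trap state concrete, here is a minimal sketch of a DFA for "exactly two 1s" with a total transition function (the state names are illustrative, not taken from the original diagram):

```python
# States count the 1s seen so far; "trap" absorbs everything after a third 1,
# so delta is defined for every (state, symbol) pair, i.e. it is total.
DELTA = {
    ("zero", "0"): "zero", ("zero", "1"): "one",
    ("one",  "0"): "one",  ("one",  "1"): "two",
    ("two",  "0"): "two",  ("two",  "1"): "trap",
    ("trap", "0"): "trap", ("trap", "1"): "trap",  # the trap state
}
START, ACCEPT = "zero", {"two"}

def accepts(word: str) -> bool:
    state = START
    for symbol in word:
        state = DELTA[(state, symbol)]
    return state in ACCEPT

print([accepts(w) for w in ["11", "0101", "111", "1", ""]])
# → [True, True, False, False, False]
```

Deleting the two "trap" rows would leave the language unchanged; the lookup would simply fail (a partial $\delta$) on a third 1 instead of running to a rejecting state.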
Definition:Positive/Integer

Definition

Informally, the positive integers are the set: $\Z_{\ge 0} = \left\{{0, 1, 2, 3, \ldots}\right\}$

As the set of integers $\Z$ is the Inverse Completion of the Natural Numbers, it follows that the elements of $\Z$ are the isomorphic images of the equivalence classes of $\N \times \N$, where two tuples are equivalent if the difference between the two elements of each tuple is the same. Thus positive can be formally defined on $\Z$ as a relation induced on those equivalence classes, as specified in the definition of the integers.

The integer $z \in \Z: z = \left[\!\left[{\left({a, b}\right)}\right]\!\right]_\boxminus$ is positive if and only if $b \le a$.

The set of positive integers is denoted $\Z_{\ge 0}$.

An element of $\Z$ can be specifically indicated as being positive by prepending a $+$ sign: $+x := x \in \Z_{\ge 0}$.

$\forall x, y \in \Z: x \le y \iff y - x \in \Z_{\ge 0}$

Also known as

As there is often confusion as to whether or not $0$ is included in the set of positive integers, it may be preferable to refer to the set of non-negative integers instead.

Also see
The title says it all. Quaternions are widely used to represent the orientation of a spacecraft. Why is that, and how do quaternions compare to the alternatives? What are quaternions, and how are they used in spacecraft dynamics?

Background

Newtonian mechanics says we live in a universe with three spatial dimensions, plus a universal time that acts as the independent variable, in which we can describe translation and rotation. Relativity theory says we live in a universe with three space-like dimensions and one time-like dimension that are interweaved. This answer ignores relativistic aspects. With this assumption, describing translation is easy, at least compared to rotation. An object's center of mass has a position in some frame of reference (preferably inertial) that changes over time. Gravitation, drag, and other interactions such as firing the translational thrusters change this value as a function of time. Simulate these interactions and you have a three degree of freedom (3DOF) simulation. What makes 3DOF modeling so easy is that it is a commutative space. Walk north ten meters and then east five meters and you'll arrive at a certain spot. Alternatively, walking east five meters and then north ten meters will bring you to that same spot. Translation in Euclidean space commutes.

Rotation in three-dimensional space is not commutative. Pick up a book and rotate it about an axis parallel to the book's spine, then rotate the book about an axis perpendicular to the book's face. Do these actions in the opposite order (first rotate about an axis perpendicular to the book's face and then about its spine) and you'll get a different end result.

Rotations in three-dimensional space

As rotations in three-dimensional space do not commute, whatever they are, they are not vectors. Commutativity is an essential quality of being a vector. Since rotations aren't vectors, how should we represent them?
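The book experiment can be checked directly with rotation matrices. A minimal numpy sketch (the 90° angles and the choice of which axis plays "spine" are arbitrary illustrations):

```python
import numpy as np

def rot_x(a):
    """Rotation about the x axis (here standing in for the book's spine)."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def rot_z(a):
    """Rotation about the z axis (perpendicular to the book's face)."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

a = np.pi / 2
spine_then_face = rot_z(a) @ rot_x(a)   # matrices apply right-to-left
face_then_spine = rot_x(a) @ rot_z(a)

# The two end orientations differ: rotations in 3D space do not commute.
print(np.allclose(spine_then_face, face_then_spine))  # → False
```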
It turns out that there are many different ways to represent rotations in three-dimensional space. Euler was one of the first to investigate this issue, realizing a number of key concepts. One is that a rigid body rotating in three-dimensional space can always be described as instantaneously rotating about a single axis. This concept is unique to 3D space, and is also key to why (unit) quaternions are useful for describing rotations; I'll address this a bit later. He also realized that the orientation of a planet, or of a rigid body in general, can be described in terms of three rotations. Euler used a sequence of three rotations: a rotation about the Z axis, then another about the once-rotated X axis, and then yet another about the twice-rotated Z axis. There's nothing special about this canonical Z-X'-Z'' sequence. All that is needed is that the first and third rotations be about the same body-fixed axis and that the middle rotation be about a body-fixed axis orthogonal to that axis. A related choice is to use three orthogonal body-fixed axes, resulting in (for example) a roll-pitch-yaw sequence. There are six classical Euler sequences (e.g., Z-X'-Z''), plus six Tait-Bryan sequences (e.g., roll-pitch-yaw). Yet another variation on this theme is to rotate about a fixed set of axes; this leads to the extrinsic Euler sequences. All together, there are 24 different things that can be called Euler rotation sequences.

Another way to represent rotations is via a proper orthogonal matrix. An orthogonal matrix is an NxN matrix in which each row (or each column) is a unit vector and is orthogonal to all of the other rows (or columns). This representation is generic; it applies to any N-dimensional Cartesian space. The 3x3 representation is of special interest because we live in a 3D universe.
A key problem with an NxN matrix used to represent orientation is that rotations in N-dimensional space have $(N^2-N)/2$ degrees of freedom while the matrix has $N^2$ elements. Over half of the elements of the matrix are redundant. (This problem of redundancy also applies to quaternions: rotations in 3D space have three degrees of freedom, while quaternions have four elements.) Yet another way to represent orientation / rotation in 3D space is via Euler's single-axis theorem. No matter how weirdly a rigid body is oriented, there always exists an axis, combined with a rotation by some angle about that axis, that can be used to represent the orientation of the body in question with respect to some given frame of reference. This axis/angle representation is very close to a quaternionic representation. Given a unit vector $\hat u$ and an angle $\theta$, the quaternion whose scalar part is $\cos(\theta/2)$ and whose vector part is either $\hat u \sin(\theta/2)$ or $-\hat u \sin(\theta/2)$ captures all of the information needed to represent the orientation, completely avoiding the singularity issues associated with Euler sequences, and almost avoiding the over-specification issue associated with matrices.

WTF are quaternions?

The above raises a question: I wrote about quaternions before discussing what quaternions are. William Rowan Hamilton struggled to find an extension of the complex numbers, where $i=\sqrt{-1}$. He finally had an insight while taking a walk with his wife and invented the quaternions. To ensure he wouldn't forget this incredible insight, he carved some graffiti on a bridge: $$i^2=j^2=k^2=ijk=-1$$ This seemingly simple graffiti is incredibly complex (pun intended). Complex numbers have two elements, one real part and one imaginary part. Negative one has two square roots in the complex numbers. Quaternions have four elements, one real part and three imaginary parts. Negative one has uncountably many square roots in the quaternions.
There is a natural progression from the quaternions, discovered post-Hamilton. The octonions have eight elements, one of which is real and the other seven of which are (somehow) imaginary. The sedenions have sixteen elements, one of which is real and the other fifteen imaginary, and so on, with powers of two. The problem is that each step up the ladder drops a key mathematical concept: quaternions don't commute (multiplicatively), and octonions don't even associate. Sedenions and higher are generally rather worthless.

Quaternion mathematics

Quaternions have two key binary operators, addition and multiplication, plus additive and multiplicative identity elements, additive inverses, and, except for zero, multiplicative inverses. Addition works analogously to addition of a pair of four-dimensional Cartesian vectors. Multiplication is where quaternions get tricky. From Hamilton's graffiti, one can deduce (for example) that $ij=k$ but that $ji=-k$. Quaternion multiplication is noncommutative. If one views quaternions as comprising a real scalar part and a vectorial imaginary part, the product of two quaternions $(a+\vec b)(c+\vec d)$ is $(ac-\vec b \cdot \vec d) + (a\vec d + c\vec b + \vec b \times \vec d)$. This gives quaternions just the right amount of messiness needed to describe the rotational difference between two frames of reference in three-dimensional space. In particular, given some quaternion $q$ and a vector $v$ viewed as a pure imaginary quaternion, the quaternion operation $q\,v\,q^{-1}$ is another pure imaginary quaternion. This operation rotates the vector $v$ in three-dimensional space. So does the operation $q^{-1}\,v\,q$.

Unit quaternions

Because we're dealing with inverses, it makes sense to scale these quaternions so they are unitary. The inverse of a quaternion is its conjugate divided by the square of its magnitude. For a unit quaternion $q$, the inverse and the quaternion conjugate are one and the same: $q^{-1} = q^\ast$ for all unit quaternions $q$.
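The product formula and the $q\,v\,q^{-1}$ rotation described above are easy to verify numerically. A minimal sketch (quaternions stored as plain (w, x, y, z) arrays; the half-angle construction follows the axis/angle correspondence mentioned earlier, and none of this is from a particular library):

```python
import numpy as np

def qmul(p, q):
    """Hamilton product: (a + b)(c + d) = (ac - b.d) + (ad + cb + b x d)."""
    a, b = p[0], np.asarray(p[1:])
    c, d = q[0], np.asarray(q[1:])
    return np.array([a * c - b.dot(d), *(a * d + c * b + np.cross(b, d))])

def qconj(q):
    """Conjugate; for a unit quaternion this is also the inverse."""
    return np.array([q[0], -q[1], -q[2], -q[3]])

# Hamilton's graffiti: i^2 = j^2 = k^2 = ijk = -1, and ij = k while ji = -k
i = np.array([0., 1., 0., 0.])
j = np.array([0., 0., 1., 0.])
k = np.array([0., 0., 0., 1.])
print(qmul(i, j), qmul(j, i))   # k and -k: multiplication is noncommutative

def rotate(q, v):
    """Rotate a 3-vector v, viewed as a pure imaginary quaternion, via q v q*."""
    return qmul(qmul(q, np.array([0.0, *v])), qconj(q))[1:]

# Unit quaternion (cos(t/2), u*sin(t/2)) for a 90-degree rotation about z
theta = np.pi / 2
q = np.array([np.cos(theta / 2), 0.0, 0.0, np.sin(theta / 2)])
print(rotate(q, [1.0, 0.0, 0.0]))   # ≈ [0, 1, 0]
```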
Another reason to use unit quaternions is that they form the right kind of mathematical structure needed to represent rotations in 3D space. The unit quaternions form a mathematical group, as do rotations in 3D space. Every unit quaternion represents some rotation in 3D space, and every rotation in 3D space can be represented by a unit quaternion. Actually, every rotation in 3D space can be represented by two unit quaternions: multiply a unit quaternion by -1 and you'll get another unit quaternion that represents the same rotation as the first one. The unit quaternions form a double cover of the rotations in 3D space.

How are quaternions used to represent rotations in 3D space?

This is the key question. The answer is that there is a very simple relation between the single-axis representation of a rotation and a unit quaternion. The scalar part (aka real part) of this quaternion is $\cos(\theta/2)$, where $\theta$ is the single-axis rotation angle, and the imaginary part is either $\hat u \sin(\theta/2)$ or $-\hat u \sin(\theta/2)$, where $\hat u$ is the unit vector along the axis of rotation. Whether one uses a plus or a minus sign is completely arbitrary. This of course results in a number of arguments over "the right way to do it." There is no right way; both approaches are equally valid.

What are some of the gotchas?

While quaternions aren't evil the way Euler angles are, there are issues associated with their use. Left versus right quaternions. There are two ways to use unit quaternions to represent a rotation or transformation in 3D space: $q v q^{\ast}$ or $q^{\ast} v q$. Both approaches are equally valid mathematically; the only difference is whether the unconjugated quaternion precedes or follows the vector to be transformed or rotated. Transformation versus rotation quaternions. Quaternions can be used to represent the physical rotation of a vector in 3D space and to represent the transformation of the same vector from one coordinate system to another.
These turn out to be conjugate operations. Order of the elements. Quaternions have four elements, a scalar real part and a vectorial imaginary part. Which should go first, the real or the imaginary part, when storing them or exchanging them with someone else? The answer is that there is no right answer. How to numerically integrate them. The above three are representational issues only. This final issue is complex enough to be handled separately. The first three issues mean that when working with someone else (e.g., on a joint integrated simulation), the two teams had better get their nomenclature straightened out. The odds are strong that the two groups will not be using the same representations, and the odds are also strong that neither team will change their internal scheme. Fortunately, the conversion from one to another is easily handled. There will be lots of finger-pointing if it's not handled.

How to numerically integrate unit quaternions?

There are a number of reasons one may need to integrate a unit quaternion over time; a non-zero angular velocity changes a spacecraft's orientation. I use left transformation unit quaternions in my work. The time derivative of such a quaternion is $\dot q_{I\to B} = -\frac 1 2 \omega \, q_{I\to B}$, where $q_{I\to B}$ is the inertial-to-spacecraft-body left transformation quaternion and $\omega$ is the spacecraft's angular velocity expressed in body frame coordinates. Numerical integration inevitably involves steps along the lines of $x(t+\Delta t) = x(t) + \dot x(t)\Delta t$. This is mathematically invalid for unit quaternions; the unit quaternions do not have an addition operator. It is valid for quaternions in general, but the result is inevitably a non-unit quaternion. One simple expedient is to normalize the result. This works, a bit, for very small steps. There has been a huge amount of work on geometric integrators over the last two decades, in particular Lie group integrators. The unit quaternions are a fairly simple Lie group.
The tangent space is the pure imaginary quaternions. This is the space in which angular velocities live. The pure imaginary quaternions form an algebra, a Lie algebra in particular. All of the work done on Lie group integration techniques applies directly to the unit quaternions.
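The integrate-and-normalize expedient described above can be sketched in a few lines, using the same left-transformation convention $\dot q_{I\to B} = -\frac12 \omega\, q_{I\to B}$ (the spin rate, step size, and body axis are arbitrary illustrative choices, not from the answer):

```python
import numpy as np

def qmul(p, q):
    """Hamilton product of quaternions stored as (w, x, y, z) arrays."""
    a, b = p[0], np.asarray(p[1:])
    c, d = q[0], np.asarray(q[1:])
    return np.array([a * c - b.dot(d), *(a * d + c * b + np.cross(b, d))])

def qdot(q, omega_body):
    """dq/dt = -1/2 * omega * q, with omega as a pure imaginary quaternion."""
    return -0.5 * qmul(np.array([0.0, *omega_body]), q)

q = np.array([1.0, 0.0, 0.0, 0.0])   # identity attitude
omega = np.array([0.0, 0.0, 0.1])    # rad/s, constant spin about body z
dt, t_end = 1e-3, 10.0

for _ in range(int(t_end / dt)):
    q = q + qdot(q, omega) * dt      # Euler step: leaves the unit sphere...
    q = q / np.linalg.norm(q)        # ...so renormalize every step

# For constant omega the exact solution is q(t) = exp(-omega*t/2) * q(0)
half = 0.5 * np.linalg.norm(omega) * t_end
q_exact = np.array([np.cos(half), 0.0, 0.0, -np.sin(half)])
print(np.abs(q - q_exact).max())     # small, since the steps are very small
```

A Lie group integrator would instead update via the exact exponential of each step, keeping the result on the unit sphere by construction rather than by renormalization.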
Consider a $C^k$, $k\ge 2$, Lorentzian manifold $(M,g)$ and let $\Box$ be the usual wave operator $\nabla^a\nabla_a$. Given $p\in M$, $s\in\Bbb R,$ and $v\in T_pM$, can we find a neighborhood $U$ of $p$ and $u\in C^k(U)$ such that $\Box u=0$, $u(p)=s$ and $\mathrm{grad}\, u(p)=v$?

The tog is a measure of thermal resistance of a unit area, also known as thermal insulance. It is commonly used in the textile industry and is often seen quoted on, for example, duvets and carpet underlay. The Shirley Institute in Manchester, England developed the tog as an easy-to-follow alternative to the SI unit of m²K/W. The name comes from the informal word "togs" for clothing, which itself was probably derived from the word toga, a Roman garment. The basic unit of insulation coefficient is the RSI (1 m²K/W). 1 tog = 0.1 RSI. There is also a clo clothing unit equivalent to 0.155 RSI or 1.55 tog...

The stone or stone weight (abbreviation: st.) is an English and imperial unit of mass now equal to 14 pounds (6.35029318 kg). England and other Germanic-speaking countries of northern Europe formerly used various standardised "stones" for trade, with their values ranging from about 5 to 40 local pounds (roughly 3 to 15 kg) depending on the location and objects weighed. The United Kingdom's imperial system adopted the wool stone of 14 pounds in 1835. With the advent of metrication, Europe's various "stones" were superseded by or adapted to the kilogram from the mid-19th century on. The stone continues...

Can you tell me why this question deserves to be negative? I tried to find faults and I couldn't: I did some research, I did all the calculations I could, and I think it is clear enough. I had deleted it and was going to abandon the site, but then I decided to learn what is wrong and see if I ca...

I am a bit confused about angular momentum in classical physics. For the orbital motion of a point mass: if we pick a new coordinate system (that doesn't move w.r.t.
the old coordinate system), angular momentum should still be conserved, right? (I calculated a quite absurd result: it is no longer conserved in the new coordinate system; there is an additional term that varies with time.) $\vec {L'}=\vec{r'} \times \vec{p'} =(\vec{R}+\vec{r}) \times \vec{p} =\vec{R} \times \vec{p} + \vec L$, where the first term varies with time ($\vec R$ is the shift of coordinates; $\vec R$ is constant while $\vec p$ is, in a sense, rotating). Would anyone be kind enough to shed some light on this for me?

From what we discussed, your literary taste seems to be classical/conventional in nature. That book is inherently unconventional in nature; it's not supposed to be read as a novel, it's supposed to be read as an encyclopedia @BalarkaSen Dare I say it, my literary taste continues to change as I have kept on reading :-) One book that I finished reading today, The Sense of an Ending (different from the movie with the same title), is far from anything I would've been able to read even two years ago, but I absolutely loved it. I've just started watching The Fall - it seems good so far (after 1 episode)... I'm with @JohnRennie on the Sherlock Holmes books and would add that the most recent TV episodes were appalling. I've been told to read Agatha Christie but haven't got round to it yet.

Is it possible to make a time machine, ever? Please give an easy answer, a simple one. A simple answer, but a possibly wrong one, is to say that a time machine is not possible. Currently, we don't have either the technology to build one, nor a definite, proven (or generally accepted) idea of how we could build one. — Countto10, 47 secs ago

@vzn if it's a romantic novel, which it looks like, it's probably not for me - I'm getting to be more and more fussy about books and have a ridiculously long list to read as it is.
I'm going to counter that one by suggesting Ann Leckie's Ancillary Justice series Although if you like epic fantasy, Malazan book of the Fallen is fantastic @Mithrandir24601 lol it has some love story but its written by a guy so cant be a romantic novel... besides what decent stories dont involve love interests anyway :P ... was just reading his blog, they are gonna do a movie of one of his books with kate winslet, cant beat that right? :P variety.com/2016/film/news/… @vzn "he falls in love with Daley Cross, an angelic voice in need of a song." I think that counts :P It's not that I don't like it, it's just that authors very rarely do anywhere near a decent job of it. If it's a major part of the plot, it's often either eyeroll worthy and cringy or boring and predictable with OK writing. A notable exception is Stephen Erikson @vzn depends exactly what you mean by 'love story component', but often yeah... It's not always so bad in sci-fi and fantasy where it's not in the focus so much and just evolves in a reasonable, if predictable way with the storyline, although it depends on what you read (e.g. Brent Weeks, Brandon Sanderson). Of course Patrick Rothfuss completely inverts this trope :) and Lev Grossman is a study on how to do character development and totally destroys typical romance plots @Slereah The idea is to pick some spacelike hypersurface $\Sigma$ containing $p$. Now specifying $u(p)$ is trivial because the wave equation is invariant under constant perturbations. So that's whatever. But I can specify $\nabla u(p)|\Sigma$ by specifying $u(\cdot, 0)$ and differentiate along the surface. For the Cauchy theorems I can also specify $u_t(\cdot,0)$. 
Now take the neighborhood to be $\approx (-\epsilon,\epsilon)\times\Sigma$ and then split the metric like $-dt^2+h$. Do forwards and backwards Cauchy solutions, then check that the derivatives match on the interface $\{0\}\times\Sigma$.

Why is it that you can only cool down a substance so far before the energy goes into changing its state? I assume it has something to do with the distance between molecules, meaning that intermolecular interactions have less energy in them than making the distance between them even smaller, but why does it create these bonds instead of making the distance smaller / just reducing the temperature more? Thanks @CooperCape, but this leads me to another question I forgot ages ago: if you have an electron cloud, is the electric field from that electron just some sort of averaged field from some centre of amplitude, or is it a superposition of fields, each coming from some point in the cloud?
That's a question from some exam in calculus. Can someone help? Does $\int _0^{\infty }\frac{\sin\pi x}{\left|\ln \left(x\right)\right|^{\frac{3}{2}}}\,dx$ converge? I proved that it converges between 1 and infinity using the comparison test with the integral of $\frac{1}{x^{\frac{3}{2}}}$. Between 1/2 and 1 I used Dirichlet's test to prove it converges. Is that true? Any thoughts about how I can prove convergence between 0 and 1/2?
I'm reading a proof of the multivariate CLT using the Lindeberg theorem. Let $X_n = (X_{n1},\ldots ,X_{nk})$ be independent random vectors all having the same distribution. Suppose that $E[X_{nu}^2]<\infty$; let the vector of means be $c=(c_1,\ldots, c_k)$, where $c_u=E[X_{nu}]$, and let the covariance matrix be $\Sigma = [\sigma_{uv}]$, where $\sigma_{uv}=E[(X_{nu} - c_u)(X_{nv} - c_v)]$. Put $S_n=X_{1}+\cdots +X_{n}.$ Under these assumptions, the distribution of the random vector $(S_n - nc)/\sqrt{n}$ converges weakly to the centered normal distribution with covariance matrix $\Sigma$. The proof is as follows: Let $Y =(Y_1,\ldots,Y_{k})$ be a normally distributed random vector with zero means and covariance matrix $\Sigma.$ For given $t=(t_1,\ldots,t_k)$ let $Z_n=\displaystyle\sum_{u=1}^{k}t_u(X_{nu}-c_{u})$ and $Z=\displaystyle\sum_{u=1}^{k}t_uY_u.$ Then it suffices to prove that $n^{-1/2}\displaystyle\sum_{j=1}^{n}Z_j$ converges in distribution to $Z$ (for arbitrary $t$). But this is an instant consequence of the Lindeberg-Lévy theorem. I'm stuck following this proof. I'm not sure whether the Lindeberg condition is satisfied, i.e. $$\displaystyle\lim_{n\rightarrow\infty}\displaystyle\sum_{k=1}^{n}\frac{1}{s_n^2}\int_{\{|Z_k/\sqrt{n}|>\epsilon s_{n}\}}\frac{|Z_k|^2}{n}\, dP=0.$$ My idea is that $\{|Z_k/\sqrt{n}|>\epsilon s_{n}\}$ decreases to $\emptyset$; that would be the reason the integral converges to $0,$ but what about the convergence or divergence of $s_{n}$ and the sum that tends to infinity? Any kind of help is thanked in advance.
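The statement being proved can be illustrated empirically with a quick simulation (a hedged sketch, not part of the proof: the exponential marginals, sample sizes, seed, and tolerances are arbitrary choices). The sample covariance of many independent copies of $(S_n - nc)/\sqrt{n}$ should be close to $\Sigma$ even though the underlying vectors are far from Gaussian:

```python
import numpy as np

rng = np.random.default_rng(0)

# iid non-Gaussian random vectors with mean c and covariance Sigma
c = np.array([1.0, -2.0])
Sigma = np.array([[1.0, 0.5],
                  [0.5, 2.0]])
A = np.linalg.cholesky(Sigma)

n, reps = 500, 2000
# standardized centered exponentials: mean 0, variance 1, heavily skewed
U = rng.exponential(1.0, size=(reps, n, 2)) - 1.0
X = c + U @ A.T                  # each row vector has covariance A A^T = Sigma

S = X.sum(axis=1)                # reps independent copies of S_n
Z = (S - n * c) / np.sqrt(n)     # (S_n - n c)/sqrt(n)

print(Z.mean(axis=0))            # ≈ 0
print(np.cov(Z.T))               # ≈ Sigma, as the multivariate CLT predicts
```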
My following questions come from the understanding of the relations between the PSGs for two gauge-equivalent mean-field (MF) Hamiltonians (or MF ansatz). Considering the Schwinger-fermion ($f_{\sigma}$) MF approach to spin-liquid phases of spin-1/2 system. Let $H$ and $H'$ be two $SU(2)$ gauge-equivalent MF Hamiltonians, i.e., $H'=RHR^{-1}$, where the unitary operator $R$ represents an $SU(2)$ gauge rotation generated by $J=\frac{1}{2}\psi^\dagger \mathbf{\tau} \psi$ with $\psi=(f_\uparrow,f_\downarrow^\dagger)^T$. Now, if $G_UU\in PSG(H)$, then simple calculation shows that $RG_UUR^{-1}\in PSG(H')$, where $U$ represents a symmetry operator and $G_U$ is the gauge operator associated with $U$. But we are used to the combined form of a symmetry operator followed by a gauge operator for an element in PSG, thus there are several ways to rewrite the expression $RG_UUR^{-1}$ as: (1) $(RG_UUR^{-1}U^{-1})U$; (2) $(RG_U)U'$ with $U'=UR^{-1}$; or (3) $(RG_UR^{-1})U''$ with $U''=RUR^{-1}$. So how to understand these expressions? As for (1): The question is whether $RG_UUR^{-1}U^{-1}$ is an $SU(2)$ gauge operator? More specifically, it seems generally impossible to write $UR^{-1}U^{-1}$ as an gauge operator generated by $J=\frac{1}{2}\psi^\dagger \mathbf{\tau} \psi$. However, if we generalize the definition of $SU(2)$ gauge operators $R$ to those satisfying 3 properties A: Unitary; B: $R\psi_iR^{-1}=W_i\psi_i, W_i\in SU(2)$ matrices, which implies that physical spins should be gauge invariant (e.g., $R\mathbf{S}_iR^{-1}=\mathbf{S}_i$); and C: $RP=PR=P$ with projection operator $P$, which implies that physical spin-space should be gauge invariant. Then one can show that $UR^{-1}U^{-1}$ indeed fulfills the above 3 properties A,B,C where $U$ is time reversal, $SU(2)$ spin rotation, or lattice symmetries. (Furthermore, $R$ respects A,B,C $\Rightarrow R^{-1}$ respects A,B,C; $R_1,R_2$ both respect A,B,C $\Rightarrow R_1R_2$ respects A,B,C.) 
Therefore, the expression $RG_UUR^{-1}U^{-1}$ in (1) is an $SU(2)$ gauge operator in the sense A,B,C. As for (2) or (3): We may ask: If $U$ represents some symmetry (e.g., time reversal, $SU(2)$ spin rotation, or lattice symmetries), then does $UR$ or $RU$ still represent the same physical symmetry? Where $R$ is an $SU(2)$ gauge operator (in the sense A,B,C mentioned above). One can show that $U'=UR$ or $RU$ represents the same physical symmetry as $U$ in the following sense: $U'\mathbf{S}_iU'^{-1}=U\mathbf{S}_iU^{-1}$ and $U'\phi=U\phi$, where $\phi=P\phi\in$ physical spin space. (Note that $U'\phi=U\phi$ is still a physical spin state due to $[P,U]=[P,U']=0$.) Therefore, the $U'$ and $U''$ in expressions (2) and (3) indeed represent the same physical symmetry as $U$. Are my understandings correct? Thanks in advance. This post imported from StackExchange Physics at 2015-02-25 16:26 (UTC), posted by SE-user Kai Li A useful formula: Let $G_iU_i\in PSG(H), i=1,2,...,n$, then the $SU(2)$ gauge operator $G_U$ associated with the combined symmetry $U=U_1U_2\cdots U_n$ has the following form $$G_U=G_1U_1G_2U_2\cdots G_{n-1}U_{n-1}G_nU_{n-1}^{-1}\cdots U_2^{-1}U_1^{-1}$$ such that $G_UU\in PSG(H)$.
Periodic and subharmonic solutions for a 2$n$th-order $\phi_c$-Laplacian difference equation containing both advances and retardations School of Mathematics and Information Science, Guangzhou University, Guangzhou 510006, China We consider a 2$n$th-order nonlinear difference equation containing both many advances and retardations with $\phi_c$-Laplacian. Using the critical point theory, we obtain some new and concrete criteria for the existence and multiplicity of periodic and subharmonic solutions in the more general case of the nonlinearity. Keywords:Periodic and subharmonic solution, 2$n$th-order, nonlinear difference equation, $\phi_c$-Laplacian, critical point theory. Mathematics Subject Classification:39A23. Citation:Peng Mei, Zhan Zhou, Genghong Lin. Periodic and subharmonic solutions for a 2$n$th-order $\phi_c$-Laplacian difference equation containing both advances and retardations. Discrete & Continuous Dynamical Systems - S, 2019, 12 (7) : 2085-2095. doi: 10.3934/dcdss.2019134 References: [1] Z. AlSharawi, J. M. Cushing and S. Elaydi, [2] Z. Balanov, C. Garcia-Azpeitia and W. Krawcewicz, On variational and topological methods in nonlinear difference equations, [3] X. C. Cai and J. S. Yu, Existence of periodic solutions for a 2$n$th-order nonlinear difference equation, [4] P. Chen and X. H. Tang, Existence of homoclinic orbits for 2$n$th-order nonlinear difference equations containing both many advances and retardations, [5] L. H. Erbe, H. Xia and J. S. Yu, Global stability of a linear nonautonomous delay difference equations, [6] Z. M. Guo and J. S. Yu, Existence of periodic and subharmonic solutions for second-order superlinear difference equations, [7] Z. M. Guo and J. S. Yu, The existence of periodic and subharmonic solutions of subquadratic second order difference equations, [8] Z. M. Guo and J. S. Yu, Applications of critical point theory to difference equations, [9] J. H. 
Ground states of nonlinear Schrödinger equations with fractional Laplacians

1. Center for Applied Mathematics, Guangzhou University, Guangzhou 510405, China
2. School of Mathematical Sciences, Dalian University of Technology, Dalian 116024, China

$ \begin{equation*} \left\{ \begin{aligned} (-\Delta)^\alpha u+u& = f(u), x\in\mathbb{R}^N,\\ u(x)&\geq 0. \end{aligned} \right. \end{equation*} $

Mathematics Subject Classification: Primary: 58F15, 58F17; Secondary: 53C35.

Citation: Zupei Shen, Zhiqing Han, Qinqin Zhang. Ground states of nonlinear Schrödinger equations with fractional Laplacians. Discrete & Continuous Dynamical Systems - S, 2019, 12 (7) : 2115-2125. doi: 10.3934/dcdss.2019136
Consider a $C^k$, $k\ge 2$, Lorentzian manifold $(M,g)$ and let $\Box$ be the usual wave operator $\nabla^a\nabla_a$. Given $p\in M$, $s\in\Bbb R,$ and $v\in T_pM$, can we find a neighborhood $U$ of $p$ and $u\in C^k(U)$ such that $\Box u=0$, $u(p)=s$ and $\mathrm{grad}\, u(p)=v$?

The tog is a measure of thermal resistance of a unit area, also known as thermal insulance. It is commonly used in the textile industry and often seen quoted on, for example, duvets and carpet underlay. The Shirley Institute in Manchester, England developed the tog as an easy-to-follow alternative to the SI unit of m²K/W. The name comes from the informal word "togs" for clothing, which itself was probably derived from the word toga, a Roman garment. The basic unit of insulation coefficient is the RSI (1 m²K/W); 1 tog = 0.1 RSI. There is also a clo clothing unit equivalent to 0.155 RSI or 1.55 tog...

The stone or stone weight (abbreviation: st.) is an English and imperial unit of mass now equal to 14 pounds (6.35029318 kg). England and other Germanic-speaking countries of northern Europe formerly used various standardised "stones" for trade, with their values ranging from about 5 to 40 local pounds (roughly 3 to 15 kg) depending on the location and objects weighed. The United Kingdom's imperial system adopted the wool stone of 14 pounds in 1835. With the advent of metrication, Europe's various "stones" were superseded by or adapted to the kilogram from the mid-19th century on. The stone continues...

Can you tell me why this question deserves to be negative? I tried to find faults and I couldn't: I did some research, I did all the calculations I could, and I think it is clear enough. I had deleted it and was going to abandon the site, but then I decided to learn what is wrong and see if I ca...

I am a bit confused about angular momentum in classical physics. For an orbital motion of a point mass: if we pick a new coordinate (that doesn't move w.r.t.
the old coordinate), angular momentum should still be conserved, right? (I calculated a rather absurd result: it is no longer conserved in the new coordinate; there is an additional term that varies with time:) $\vec {L'}=\vec{r'} \times \vec{p'}$ $=(\vec{R}+\vec{r}) \times \vec{p}$ $=\vec{R} \times \vec{p} + \vec L$ where the first term varies with time (here $\vec R$ is the constant shift of the origin, while $\vec p$ keeps rotating). Would anyone be kind enough to shed some light on this for me?

From what we discussed, your literary taste seems to be classical/conventional in nature. That book is inherently unconventional in nature; it's not supposed to be read as a novel, it's supposed to be read as an encyclopedia @BalarkaSen Dare I say it, my literary taste continues to change as I have kept on reading :-) One book that I finished reading today, The Sense of an Ending (different from the movie with the same title), is far from anything I would've been able to read even two years ago, but I absolutely loved it. I've just started watching the Fall - it seems good so far (after 1 episode)... I'm with @JohnRennie on the Sherlock Holmes books and would add that the most recent TV episodes were appalling. I've been told to read Agatha Christie but haven't got round to it yet

Is it possible to make a time machine ever? Please give an easy answer, a simple one. A simple answer, but a possibly wrong one, is to say that a time machine is not possible. Currently, we don't have either the technology to build one, nor a definite, proven (or generally accepted) idea of how we could build one. — Countto10 47 secs ago

@vzn if it's a romantic novel, which it looks like, it's probably not for me - I'm getting to be more and more fussy about books and have a ridiculously long list to read as it is.
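The angular-momentum puzzle above is easy to reproduce numerically, and the "absurd" result is in fact correct: angular momentum is conserved about a point only when the torque about that point vanishes, so shifting the origin of a circular orbit makes $\vec{L'} = \vec L + \vec R \times \vec p$ oscillate. A minimal sketch (planar motion, so only the $z$-component matters; the unit orbit and the shift $\vec R=(1,0)$ are arbitrary choices):

```python
from math import cos, sin, isclose

def L_z(x, y, px, py):
    # z-component of r x p in the plane
    return x * py - y * px

R = (1.0, 0.0)  # shift between the two coordinate origins (arbitrary)
samples = []
for k in range(8):
    t = k * 0.5
    # unit-mass particle on a unit circular orbit about the old origin
    x, y = cos(t), sin(t)
    px, py = -sin(t), cos(t)
    L_old = L_z(x, y, px, py)                # about the old origin
    L_new = L_z(x + R[0], y + R[1], px, py)  # about the shifted origin
    samples.append((L_old, L_new))

# L about the orbit centre is conserved (always 1 here)...
assert all(isclose(L, 1.0) for L, _ in samples)
# ...but about the shifted origin it is not: L' = 1 + cos(t)
assert not all(isclose(Lp, samples[0][1]) for _, Lp in samples)
```

Here $L'_z = (x+1)p_y - y\,p_x = 1 + \cos t$, exactly the time-dependent extra term $\vec R\times\vec p$ from the derivation above.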
I'm going to counter that one by suggesting Ann Leckie's Ancillary Justice series Although if you like epic fantasy, Malazan book of the Fallen is fantastic @Mithrandir24601 lol it has some love story but its written by a guy so cant be a romantic novel... besides what decent stories dont involve love interests anyway :P ... was just reading his blog, they are gonna do a movie of one of his books with kate winslet, cant beat that right? :P variety.com/2016/film/news/… @vzn "he falls in love with Daley Cross, an angelic voice in need of a song." I think that counts :P It's not that I don't like it, it's just that authors very rarely do anywhere near a decent job of it. If it's a major part of the plot, it's often either eyeroll worthy and cringy or boring and predictable with OK writing. A notable exception is Stephen Erikson @vzn depends exactly what you mean by 'love story component', but often yeah... It's not always so bad in sci-fi and fantasy where it's not in the focus so much and just evolves in a reasonable, if predictable way with the storyline, although it depends on what you read (e.g. Brent Weeks, Brandon Sanderson). Of course Patrick Rothfuss completely inverts this trope :) and Lev Grossman is a study on how to do character development and totally destroys typical romance plots @Slereah The idea is to pick some spacelike hypersurface $\Sigma$ containing $p$. Now specifying $u(p)$ is trivial because the wave equation is invariant under constant perturbations. So that's whatever. But I can specify $\nabla u(p)|\Sigma$ by specifying $u(\cdot, 0)$ and differentiate along the surface. For the Cauchy theorems I can also specify $u_t(\cdot,0)$. 
Now take the neighborhood to be $\approx (-\epsilon,\epsilon)\times\Sigma$ and then split the metric like $-dt^2+h$ Do forwards and backwards Cauchy solutions, then check that the derivatives match on the interface $\{0\}\times\Sigma$

Why is it that you can only cool down a substance so far before the energy goes into changing its state? I assume it has something to do with the distance between molecules, meaning that intermolecular interactions have less energy in them than making the distance between them even smaller, but why does it create these bonds instead of making the distance smaller / just reducing the temperature more? Thanks @CooperCape, but this leads me to another question I forgot ages ago: if you have an electron cloud, is the electric field from that electron just some sort of averaged field from some centre of amplitude, or is it a superposition of fields, each coming from some point in the cloud?
D-meson nuclear modification factor and elliptic flow measurements in Pb–Pb collisions at $\sqrt{s_{NN}}$ = 5.02 TeV with ALICE at the LHC (Elsevier, 2017-11) ALICE measured the nuclear modification factor ($R_{AA}$) and elliptic flow ($v_{2}$) of D mesons ($D^{0}$, $D^{+}$, $D^{*+}$ and $D_{s}^{+}$) in semi-central Pb–Pb collisions at $\sqrt{s_{NN}} = 5.02$ TeV. The increased ...

ALICE measurement of the $J/\psi$ nuclear modification factor at mid-rapidity in Pb–Pb collisions at $\sqrt{s_{NN}} =$ 5.02 TeV (Elsevier, 2017-11) ALICE at the LHC provides unique capabilities to study charmonium production at low transverse momenta ($p_{T}$). At central rapidity ($|y| < 0.8$), ALICE can reconstruct $J/\psi$ via their decay into two electrons down to zero ...

Net-baryon fluctuations measured with ALICE at the CERN LHC (Elsevier, 2017-11) First experimental results are presented on event-by-event net-proton fluctuation measurements in Pb–Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV, recorded by the ALICE detector at the CERN LHC. The ALICE detector is well ...
Suppose 4 tables and 2 chairs cost Rs. 500, and 5 tables and 4 chairs cost Rs. 700; what will 10 tables and 10 chairs cost? When we come across situations like this, we need to frame the corresponding equations. We use applications of linear equations to solve such real-life problems. Linear equations can be used to determine the values of unknown quantities and to solve many arithmetic problems. In the above situation, we assume one variable for the cost of a table and another for the cost of a chair. We then solve the equations for the variables and get the exact costs of tables and chairs.

What are Equations? Equations are statements in which an algebraic expression is equated to a constant or to another algebraic expression. For example, 2x + 3y = 5z. Equations are solved to find the exact values of the variables involved.

Different Forms A linear equation does not involve any products or roots of variables; all variables have degree one. There are many ways of writing linear equations, but they usually have constants and variables. The general form of a linear equation with,
- One variable is ax + b = 0, where a ≠ 0 and x is the variable.
- Two variables is ax + by + c = 0, where a ≠ 0, b ≠ 0, and x and y are the variables.
- Three variables is ax + by + cz + d = 0, where a ≠ 0, b ≠ 0, c ≠ 0, and x, y, z are the variables.

Linear Equations in one Variable The only power of the variable is 1. In a linear equation there will be definite values of the variable that satisfy the equation. General form: ax + b = 0, where a $\neq$ 0.

Linear Equations in Two Variables Linear equations in two variables are of the form ax + by + c = 0 (where a $\neq$ 0 and b $\neq$ 0), also known as the general form of the straight line. To solve for the variables x and y, we need two equations. Otherwise, for every value of x there will be a corresponding value of y.
Hence a single equation has an infinite number of solutions, and each solution is a point on the line. Below are different forms of the linear equation:
1. Slope–intercept form: y = mx + c
2. Point–slope form: y - y_1 = m(x - x_1)
3. Intercept form: x/x_0 + y/y_0 = 1
where m is the slope of the line and x_0, y_0 are the x- and y-intercepts.

System of Linear Equations A system of linear equations is of the form $a_1$x + $b_1$y + $c_1$ = 0 and $a_2$x + $b_2$y + $c_2$ = 0. Such a system may have a unique solution, no solution, or an infinite number of solutions, depending on the ratios of the corresponding coefficients.

Graphing Linear Equations The graph of a linear equation in one variable is a point on the real number line. For example, draw the graph for the linear equation 2x + 5 = 9:
2x + 5 = 9
=> 2x + 5 - 5 = 9 - 5
=> 2x = 4
=> $\frac{2x}{2} = \frac{4}{2}$
=> x = 2
This can be represented on the real number line, where the solution x = 2 is marked. The graph of a linear equation in two variables is a straight line, which can be shown on a coordinate graph. Let us draw the graph for the equation x + y = 5; the points (-3, 8), (0, 5) and (5, 0) all lie on this line.

Solving Linear Equations How to solve linear equations? Linear equations in 2 or more variables can be solved by various methods. For equations involving 2 variables, we need two equations to solve for the two variables. There are various methods to solve these equations; choose whichever suits the problem best.
Some of them are as follows: Let us solve a pair of equations using the method of substitution: x + y = 12 and 2x + 3y = 32.
Solution: Given
x + y = 12 ---------- (1)
2x + 3y = 32 ---------- (2)
From Equation (1), y = 12 - x. Substituting y = 12 - x in Equation (2):
2x + 3(12 - x) = 32
=> 2x + 36 - 3x = 32
=> 2x - 3x = 32 - 36
=> -x = -4
=> x = 4
Substituting the value of x in y = 12 - x, we get y = 12 - 4 = 8. Hence the solution is (x, y) = (4, 8).

Linear Equations and Inequalities A linear equation is a statement involving one or more variables whose degree is one. Inequalities are those in which algebraic expressions are connected by an inequality sign: less than (<), greater than (>), less than or equal to ($\leq$), greater than or equal to ($\geq$).
Examples:
Linear equations: 3x = 9, (1/2)x + 5 = 0, 2x + 1/2 = 3
Linear inequalities: 2x > 0, 2x + 8 < 6, (1/3)x + 10 $\leq$ 5
Let us solve an inequality: find the values of x with 2x + 3 < 7, over the natural numbers.
2x + 3 < 7
=> 2x + 3 - 3 < 7 - 3
=> 2x < 4
=> $\frac{2x}{2} < \frac{4}{2}$
=> x < 2
=> x = 1 is the only natural number that satisfies the inequality.
Check: 2(1) + 3 = 2 + 3 = 5 < 7.
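The substitution method just shown, the word problem from the top of this section, and the inequality can all be checked with a short script (plain Python; the helper names are my own):

```python
def solve_by_substitution(s, a, b, t):
    """Solve x + y = s and a*x + b*y = t by substituting y = s - x.
    Assumes a != b, otherwise x drops out of the substituted equation."""
    # a*x + b*(s - x) = t  =>  (a - b)*x = t - b*s
    x = (t - b * s) / (a - b)
    return x, s - x

# The worked example: x + y = 12, 2x + 3y = 32
assert solve_by_substitution(12, 2, 3, 32) == (4.0, 8.0)

def solve_2x2(a1, b1, d1, a2, b2, d2):
    """Solve a1*x + b1*y = d1, a2*x + b2*y = d2 (Cramer's rule)."""
    det = a1 * b2 - a2 * b1
    if det == 0:
        raise ValueError("no unique solution")
    return (d1 * b2 - d2 * b1) / det, (a1 * d2 - a2 * d1) / det

# Opening word problem: 4 tables + 2 chairs = 500, 5 tables + 4 chairs = 700
table, chair = solve_2x2(4, 2, 500, 5, 4, 700)
assert (table, chair) == (100.0, 50.0)
assert 10 * table + 10 * chair == 1500.0  # cost of 10 tables and 10 chairs

# The inequality 2x + 3 < 7 over the natural numbers
assert [x for x in range(1, 10) if 2 * x + 3 < 7] == [1]
```

So a table costs Rs. 100, a chair Rs. 50, and 10 tables with 10 chairs cost Rs. 1500.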
@Mathphile I found no prime of the form $$n^{n+1}+(n+1)^{n+2}$$ for $n>392$ yet, and neither a reason why the expression cannot be prime for odd $n$, although there are far more even cases without a known factor than odd cases. @TheSimpliFire That's what I'm thinking about; I had some "vague feeling" that there must be some elementary proof, so I decided to find it, and then I found it. It is really "too elementary", but I like surprises, if they're good. It is in fact difficult, I did not understand all the details either. But the ECM method is analogous to the $p-1$ method, which works well when there is a factor $p$ such that $p-1$ is smooth (has only small prime factors).

Brocard's problem is a problem in mathematics that asks to find integer values of $n$ and $m$ for which $n!+1=m^2$, where $n!$ is the factorial. It was posed by Henri Brocard in a pair of articles in 1876 and 1885, and independently in 1913 by Srinivasa Ramanujan. Brown numbers: pairs of numbers $(n, m)$ that solve Brocard's problem are called Brown numbers. There are only three known pairs of Brown numbers: (4,5), (5,11...

$\textbf{Corollary.}$ No solutions to Brocard's problem (with $n>10$) occur when $n$ satisfies either \begin{equation}n!=[2\cdot 5^{2^k}-1\pmod{10^k}]^2-1\end{equation} or \begin{equation}n!=[2\cdot 16^{5^k}-1\pmod{10^k}]^2-1\end{equation} for a positive integer $k$. These are the OEIS sequences A224473 and A224474. Proof: First, note that since $(10^k\pm1)^2-1\equiv((-1)^k\pm1)^2-1\equiv1\pm2(-1)^k\not\equiv0\pmod{11}$, $m\ne 10^k\pm1$ for $n>10$.
If $k$ denotes the number of trailing zeros of $n!$, Legendre's formula implies that \begin{equation}k=\min\left\{\sum_{i=1}^\infty\left\lfloor\frac n{2^i}\right\rfloor,\sum_{i=1}^\infty\left\lfloor\frac n{5^i}\right\rfloor\right\}=\sum_{i=1}^\infty\left\lfloor\frac n{5^i}\right\rfloor\end{equation} where $\lfloor\cdot\rfloor$ denotes the floor function. The upper limit can be replaced by $\lfloor\log_5n\rfloor$ since for $i>\lfloor\log_5n\rfloor$, $\left\lfloor\frac n{5^i}\right\rfloor=0$. An upper bound can be found using geometric series and the fact that $\lfloor x\rfloor\le x$: \begin{equation}k=\sum_{i=1}^{\lfloor\log_5n\rfloor}\left\lfloor\frac n{5^i}\right\rfloor\le\sum_{i=1}^{\lfloor\log_5n\rfloor}\frac n{5^i}=\frac n4\left(1-\frac1{5^{\lfloor\log_5n\rfloor}}\right)<\frac n4.\end{equation} Thus $n!$ has $k$ zeroes for some $n\in(4k,\infty)$. Since $m=2\cdot5^{2^k}-1\pmod{10^k}$ and $2\cdot16^{5^k}-1\pmod{10^k}$ have at most $k$ digits, $m^2-1$ has at most $2k$ digits by the conditions in the Corollary. The Corollary follows if $n!$ has more than $2k$ digits for $n>10$. From equation $(4)$, $n!$ has at least the same number of digits as $(4k)!$.
Stirling's formula implies that \begin{equation}(4k)!>\frac{\sqrt{2\pi}\left(4k\right)^{4k+\frac{1}{2}}}{e^{4k}}\end{equation} Since the number of digits of an integer $t$ is $1+\lfloor\log t\rfloor$ where $\log$ denotes the logarithm in base $10$, the number of digits of $n!$ is at least \begin{equation}1+\left\lfloor\log\left(\frac{\sqrt{2\pi}\left(4k\right)^{4k+\frac{1}{2}}}{e^{4k}}\right)\right\rfloor\ge\log\left(\frac{\sqrt{2\pi}\left(4k\right)^{4k+\frac{1}{2}}}{e^{4k}}\right).\end{equation} Therefore it suffices to show that for $k\ge2$ (since $n>10$ and $k<n/4$), \begin{equation}\log\left(\frac{\sqrt{2\pi}\left(4k\right)^{4k+\frac{1}{2}}}{e^{4k}}\right)>2k\iff8\pi k\left(\frac{4k}e\right)^{8k}>10^{4k}\end{equation} which holds if and only if \begin{equation}\left(\frac{10}{\left(\frac{4k}e\right)^2}\right)^{4k}<8\pi k\iff k^2(8\pi k)^{\frac1{4k}}>\frac58e^2.\end{equation} Now consider the function $f(x)=x^2(8\pi x)^{\frac1{4x}}$ over the domain $\Bbb R^+$, which is clearly positive there. Then after considerable algebra it is found that \begin{align*}f'(x)&=2x(8\pi x)^{\frac1{4x}}+\frac14(8\pi x)^{\frac1{4x}}(1-\ln(8\pi x))\\\implies f'(x)&=\frac{2f(x)}{x^2}\left(x+\frac18-\frac18\ln(8\pi x)\right)>0\end{align*} for $x>0$ as $\min\{x+\frac18-\frac18\ln(8\pi x)\}>0$ in the domain. Thus $f$ is monotonically increasing in $(0,\infty)$, and since $2^2(8\pi\cdot2)^{\frac18}>\frac58e^2$, the inequality in equation $(8)$ holds. This means that the number of digits of $n!$ exceeds $2k$, proving the Corollary. $\square$

We get $n^n+3\equiv 0\pmod 4$ for odd $n$, so we can see from here that it is even (or, we could have used @TheSimpliFire's one-or-two-step method to derive this without any contradiction - which is better) @TheSimpliFire Hey! with $4\pmod {10}$ and $0\pmod 4$ then this is the same as $10m_1+4$ and $4m_2$. If we set them equal to each other, we have that $5m_1=2(m_2-1)$, which means $m_1$ is even.
We get $4\pmod {20}$ now :P

Yet again a conjecture! Motivated by Catalan's conjecture and a recent question of mine, I conjecture that for distinct, positive integers $a,b$, the only solution to this equation $$a^b-b^a=a+b\tag1$$ is $(a,b)=(2,5).$ It is anticipated that there will be much fewer solutions for incr...
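The Brown-number claim quoted earlier, and the trailing-zero count from Legendre's formula used in the Corollary's proof, can both be sanity-checked by brute force. This is an exhaustive search over a small range, not a proof:

```python
from math import factorial, isqrt

def is_square(m):
    # exact integer perfect-square test
    r = isqrt(m)
    return r * r == m

# n for which n! + 1 is a perfect square, searched up to 100:
brown_n = [n for n in range(1, 101) if is_square(factorial(n) + 1)]
assert brown_n == [4, 5, 7]  # the pairs (n, m) are (4,5), (5,11), (7,71)

def trailing_zeros(n):
    """Number of trailing zeros of n!, via Legendre's formula k = sum(n // 5^i)."""
    k, p = 0, 5
    while p <= n:
        k += n // p
        p *= 5
    return k

for n in range(1, 60):
    s = str(factorial(n))
    assert trailing_zeros(n) == len(s) - len(s.rstrip('0'))
    assert trailing_zeros(n) < n / 4  # the bound k < n/4 from the proof
```

Also: $2^5-5^2 = 7 = 2+5$, so $(2,5)$ does satisfy equation $(1)$ of the conjecture just stated.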
Find the points on the curve $x^2+xy+y^2=7$ where the tangent is (a) parallel to the X axis and (b) parallel to the Y axis. For part (a), if we differentiate w.r.t. $x$ we get $y'=-\frac{2x+y}{x+2y}$; this, when equated to zero (parallel to the X axis means the slope is zero), gives $x=-y/2$, which when substituted into the equation of the curve gives $\left(\sqrt{\frac{7}{3}},-2\sqrt{\frac{7}{3}}\right),\left(-\sqrt{\frac{7}{3}},2\sqrt{\frac{7}{3}}\right)$. For part (b) we find $dx/dy$, equate it to zero and find that the points are $\left(-2\sqrt{\frac{7}{3}},\sqrt{\frac{7}{3}}\right),\left(2\sqrt{\frac{7}{3}},-\sqrt{\frac{7}{3}}\right)$. $\underline{\text{My question}}$ is: suppose we consider two points on the curve, $(x_1, y_1),(x_2, y_2)$, such that $x_1 = x_2$, because along a line parallel to the Y axis the x-coordinate stays the same. Substituting these two points into the equation of the curve we get $$x_1^2+x_1y_1+y_1^2=7$$ $$x_1^2+x_1y_2+y_2^2=7$$ Solving them simultaneously we get $x_1=-(y_1+y_2)=x_2$, but this does not agree with the points that we calculated in part (b), $\left(-2\sqrt{\frac{7}{3}},\sqrt{\frac{7}{3}}\right),\left(2\sqrt{\frac{7}{3}},-\sqrt{\frac{7}{3}}\right)$: there, $y_1+y_2=0\neq x_1$ or $x_2$.
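The computed tangent points themselves check out numerically; the sketch below verifies that all four points lie on the curve, that the part (a) points have zero slope, and that the part (b) points satisfy $x+2y=0$ (where $dx/dy = -(x+2y)/(2x+y)$ vanishes):

```python
from math import sqrt, isclose

def F(x, y):
    # the curve: x^2 + x*y + y^2 - 7 = 0
    return x * x + x * y + y * y - 7

def dydx(x, y):
    # implicit differentiation: y' = -(2x + y)/(x + 2y)
    return -(2 * x + y) / (x + 2 * y)

r = sqrt(7 / 3)
horizontal = [(r, -2 * r), (-r, 2 * r)]   # part (a): y' = 0
vertical = [(-2 * r, r), (2 * r, -r)]     # part (b): x + 2y = 0

assert all(isclose(F(x, y), 0, abs_tol=1e-12) for x, y in horizontal + vertical)
assert all(isclose(dydx(x, y), 0, abs_tol=1e-12) for x, y in horizontal)
assert all(isclose(x + 2 * y, 0, abs_tol=1e-12) for x, y in vertical)
```

Note that each vertical-tangent point is a single point of tangency, so the two-distinct-points setup $(x_1,y_1)\neq(x_2,y_2)$ describes a vertical secant rather than a vertical tangent.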
The title is pretty self explanatory. If I have $V$ nodes and $E$ edges in a connected undirected graph, is there a formula to determine an upper bound on the maximum possible diameter? The exact graph is unknown, but the number of edges and the number of vertices is. I do know that when $E=V(V-1)/2$ (complete graph), the maximum possible diameter is $1$, and when $E=V-1$ (line graph), the maximum possible diameter is $V-1$, but I have no idea about anything in between. We assume that $v \geq n-1$ and $v \leq \frac{n(n-1)}{2}$. Given $v$ edges and $n$ nodes, let's compute the minimal number of nodes $u$ needed to absorb the excess edges in a "spending hole" $SH$. We know that saturating $u$ nodes needs $\frac{u(u-1)}{2}$ edges, and there remain $w = n-u$ nodes to constitute a linear graph generating long distances. First, let's try to minimize $u$ and maximize $w$ in: $n = w + u$ and $v = w-1 + 1 + \frac{u(u-1)}{2}$, where $w-1$ edges are for the long-distance subgraph, $\frac{u(u-1)}{2}$ for the spending hole $SH$, and $1$ edge to join them. $\Rightarrow u' = \frac{3+\sqrt{8(v-n)+9}}{2}$ $\Rightarrow u = \lceil u' \rceil = \left\lceil \frac{3+\sqrt{8(v-n)+9}}{2} \right\rceil$, taking the ceiling because we deal with integers and $u'$ was just a computed bound. Then we compute the remaining nodes $n-u$, take from the $SH$ as many edges as needed to reach them, and we may compute the distance $d = n-u+1$, to which $1$ is added unless the $SH$ is saturated, i.e. unless $\frac{u(u-1)}{2}-u= v-n$. Numerical application:
/* main(5,true) :
5 nodes :
for v=4 : 2 nodes will consume 1 edges from 1 ; it remains 3 nodes 3 edges, d =4
 5 : 3 3 3 ; 2 2 3
 6 : 4 5 6 ; 1 1 3
 7 : 4 6 6 ; 1 1 2
 8 : 5 8 10 ; 0 0 2
 9 : 5 9 10 ; 0 0 2
 10 : 5 10 10 ; 0 0 1
*/
Note how the distance $d$ changes with the edges in excess, and how it decreases by $1$ when the $SH$ is saturated. I recall that the number of edges is bounded by the question, and we cannot have double edges, no edges, or edges without nodes.
function main(n, all) {
    var u, spent, spentmax, v, V = n * (n - 1) / 2;
    var res = n + " nodes :\n", exm = -1, d, firstline = true;
    for (v = n - 1; v <= V; v++) {
        u = Math.ceil((3 + Math.sqrt(8 * (v - n) + 9)) / 2);
        spentmax = u * (u - 1) / 2;
        spent = v - n + u;
        if (u != exm || all) {
            if (u != exm) {
                if (all) res += "\n";
                exm = u;
            }
            d = 1 + n - u + (spentmax == spent ? 0 : 1);
            if (firstline)
                res += "for v=" + v + " : " + u + " nodes will consume " + spent + " edges from " + spentmax + " ; it remains " + (n - u) + " nodes " + (v - spent) + " edges,d =" + d + "\n";
            else
                res += " " + v + " : " + u + " " + spent + " " + spentmax + " ; " + (n - u) + " " + (v - spent) + " " + d + "\n";
            firstline = false;
        }
    }
    return res;
}

// scratchpad formalism to get the result by typing CTRL L at the end of the script
var z1, z2 = main(5, false); // number of nodes, and true to get all the intermediate edge steps
z1 = z2;

/* main(12,false) :
12 nodes :
for v=11 : 2 nodes will consume 1 edges from 1 ; it remains 10 nodes 10 edges,d =11
 12 : 3 3 3 ; 9 9 10
 13 : 4 5 6 ; 8 8 10
 15 : 5 8 10 ; 7 7 9
 18 : 6 12 15 ; 6 6 8
 22 : 7 17 21 ; 5 5 7
 27 : 8 23 28 ; 4 4 6
 33 : 9 30 36 ; 3 3 5
 40 : 10 38 45 ; 2 2 4
 48 : 11 47 55 ; 1 1 3
 57 : 12 57 66 ; 0 0 2
*/

Even if the proof is not fully detailed, one can see that the construction is minimal: starting from a linear graph with $v = n-1$, the excess edges are added to a spending hole (or a ball of wool, if one prefers). When the latter is saturated, we "sacrifice" a new node until the next saturation. Once all the excess edges have used a minimum of nodes, what remains is a piece of linear graph joined to the $SH$ by one edge to one node. The same question without the connectivity requirement is interesting too ... This kind of problem has a lot of applications when the algorithm may add nodes at its convenience (the Steiner tree problem family).
ps : feel free to edit and correct obscure translations, TY This is not a complete answer but rather an observation which leads to good results in some special cases. An interesting family of graphs to consider is the following. Take a complete graph $K_k$ and draw a simple path of length $v-k$ from one of its vertices. The graph you obtain has $v$ vertices, $v+(k^2-3k)/2$ edges, and diameter $v-k+1$. (The diameter is attained at a furthest point on the path and any vertex in $K_k$ that is not on the path.) A straightforward computation shows that you can get a graph with diameter $$V-\left\lceil\sqrt{2E-2V+\frac94}+\frac12\right\rceil,$$ $V$ vertices, and $E$ edges. Note that this bound is asymptotically best possible when there are not too many edges, that is, when $E=o(V^2)$, because then you get a graph with diameter $V-o(V)$. My bound is not supposed to be good for large $E$, though.
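The closed-form bound is easy to evaluate; a sketch in Python (the function name is mine) that also recovers the two known extremes, a tree with $E=V-1$ giving diameter $V-1$ and the complete graph giving diameter $1$:

```python
import math

def diameter_bound(V, E):
    # Diameter achievable by a K_k with a path attached:
    # V - ceil(sqrt(2E - 2V + 9/4) + 1/2), where k solves E = V + (k^2 - 3k)/2.
    return V - math.ceil(math.sqrt(2 * E - 2 * V + 9 / 4) + 1 / 2)

print(diameter_bound(5, 4))    # path on 5 vertices -> 4
print(diameter_bound(5, 10))   # K_5 -> 1
print(diameter_bound(12, 11))  # tree on 12 vertices -> 11
```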
Consider a $C^k$, $k\ge 2$, Lorentzian manifold $(M,g)$ and let $\Box$ be the usual wave operator $\nabla^a\nabla_a$. Given $p\in M$, $s\in\Bbb R,$ and $v\in T_pM$, can we find a neighborhood $U$ of $p$ and $u\in C^k(U)$ such that $\Box u=0$, $u(p)=s$ and $\mathrm{grad}\, u(p)=v$? The tog is a measure of thermal resistance of a unit area, also known as thermal insulance. It is commonly used in the textile industry and often seen quoted on, for example, duvets and carpet underlay. The Shirley Institute in Manchester, England developed the tog as an easy-to-follow alternative to the SI unit of m2K/W. The name comes from the informal word "togs" for clothing, which itself was probably derived from the word toga, a Roman garment. The basic unit of insulation coefficient is the RSI, (1 m2K/W). 1 tog = 0.1 RSI. There is also a clo clothing unit equivalent to 0.155 RSI or 1.55 tog... The stone or stone weight (abbreviation: st.) is an English and imperial unit of mass now equal to 14 pounds (6.35029318 kg). England and other Germanic-speaking countries of northern Europe formerly used various standardised "stones" for trade, with their values ranging from about 5 to 40 local pounds (roughly 3 to 15 kg) depending on the location and objects weighed. The United Kingdom's imperial system adopted the wool stone of 14 pounds in 1835. With the advent of metrication, Europe's various "stones" were superseded by or adapted to the kilogram from the mid-19th century on. The stone continues... Can you tell me why this question deserves to be negative? I tried to find faults and I couldn't: I did some research, I did all the calculations I could, and I think it is clear enough. I had deleted it and was going to abandon the site but then I decided to learn what is wrong and see if I ca... I am a bit confused about angular momentum in classical physics. For an orbital motion of a point mass: if we pick a new coordinate (that doesn't move w.r.t.
the old coordinate), angular momentum should still be conserved, right? (I calculated a quite absurd result - it is no longer conserved (an additional term that varies with time) in the new coordinates: $\vec {L'}=\vec{r'} \times \vec{p'}$ $=(\vec{R}+\vec{r}) \times \vec{p}$ $=\vec{R} \times \vec{p} + \vec L$ where the 1st term varies with time. (Here $\vec{R}$ is the shift of coordinates; $\vec{R}$ is constant, and $\vec{p}$ is sort of rotating.) Would anyone be kind enough to shed some light on this for me? From what we discussed, your literary taste seems to be classical/conventional in nature. That book is inherently unconventional in nature; it's not supposed to be read as a novel, it's supposed to be read as an encyclopedia @BalarkaSen Dare I say it, my literary taste continues to change as I have kept on reading :-) One book that I finished reading today, The Sense of An Ending (different from the movie with the same title) is far from anything I would've been able to read, even, two years ago, but I absolutely loved it. I've just started watching the Fall - it seems good so far (after 1 episode)... I'm with @JohnRennie on the Sherlock Holmes books and would add that the most recent TV episodes were appalling. I've been told to read Agatha Christie but haven't got round to it yet. Is it possible to make a time machine ever? Please give an easy answer, a simple one. A simple answer, but a possibly wrong one, is to say that a time machine is not possible. Currently, we don't have either the technology to build one, nor a definite, proven (or generally accepted) idea of how we could build one. — Countto10, 47 secs ago @vzn if it's a romantic novel, which it looks like, it's probably not for me - I'm getting to be more and more fussy about books and have a ridiculously long list to read as it is.
I'm going to counter that one by suggesting Ann Leckie's Ancillary Justice series Although if you like epic fantasy, Malazan book of the Fallen is fantastic @Mithrandir24601 lol it has some love story but its written by a guy so cant be a romantic novel... besides what decent stories dont involve love interests anyway :P ... was just reading his blog, they are gonna do a movie of one of his books with kate winslet, cant beat that right? :P variety.com/2016/film/news/… @vzn "he falls in love with Daley Cross, an angelic voice in need of a song." I think that counts :P It's not that I don't like it, it's just that authors very rarely do anywhere near a decent job of it. If it's a major part of the plot, it's often either eyeroll worthy and cringy or boring and predictable with OK writing. A notable exception is Stephen Erikson @vzn depends exactly what you mean by 'love story component', but often yeah... It's not always so bad in sci-fi and fantasy where it's not in the focus so much and just evolves in a reasonable, if predictable way with the storyline, although it depends on what you read (e.g. Brent Weeks, Brandon Sanderson). Of course Patrick Rothfuss completely inverts this trope :) and Lev Grossman is a study on how to do character development and totally destroys typical romance plots @Slereah The idea is to pick some spacelike hypersurface $\Sigma$ containing $p$. Now specifying $u(p)$ is trivial because the wave equation is invariant under constant perturbations. So that's whatever. But I can specify $\nabla u(p)|\Sigma$ by specifying $u(\cdot, 0)$ and differentiate along the surface. For the Cauchy theorems I can also specify $u_t(\cdot,0)$. 
Now take the neighborhood to be $\approx (-\epsilon,\epsilon)\times\Sigma$ and then split the metric like $-dt^2+h$. Do forwards and backwards Cauchy solutions, then check that the derivatives match on the interface $\{0\}\times\Sigma$. Why is it that you can only cool down a substance so far before the energy goes into changing its state? I assume it has something to do with the distance between molecules, meaning that intermolecular interactions have less energy in them than making the distance between them even smaller, but why does it create these bonds instead of making the distance smaller / just reducing the temperature more? Thanks @CooperCape, but this leads me to another question I forgot ages ago: If you have an electron cloud, is the electric field from that electron just some sort of averaged field from some centre of amplitude, or is it a superposition of fields each coming from some point in the cloud?
I have discovered that the following two tags are too similar to each other: riemann-zeta-function, with 299 questions tagged, has the description The Riemann zeta function is the function of one complex variable $s$ defined by the series $\zeta(s) = \sum_{n \geq 1} \frac{1}{n^s}$ when $\operatorname{Re}(s) > 1$. It admits a meromorphic continuation to $\mathbb{C}$ with only a simple pole at $1$. This function satisfies a functional equation relating the values at $s$ and $1-s$. This is the most simple example of an $L$-function and a central object of number theory. and no tag wiki; zeta-functions, with 195 questions tagged, has the description The Riemann zeta function is defined as the analytic continuation of the function defined for $\sigma > 1$ by the sum of the preceding series. and tag wiki The Riemann zeta function, $\zeta(s)$, is a function of a complex variable $s$ that analytically continues the sum of the infinite series $$\zeta(s) =\sum_{n=1}^\infty\frac{1}{n^s}$$ which converges when the real part of $s$ is greater than $1$. There are two problems here:
- does it make sense to have one tag dedicated to just the Riemann $\zeta$, and a single other one for the rest of the $\zeta$ functions? A single tag for all these functions should suffice.
- if we still want tags to distinguish between Riemann and "non-Riemann" $\zeta$ functions, then the latter class of functions should be correctly described in the tag description and tag wiki - a thing that does not currently happen with the latter tag.
My suggestion is to just merge the former tag into the latter, and automatically retag all the questions.
We all know the thin lens equation. For $o$ being a horizontal object distance and $f$ being the focal length, the horizontal image distance $i$ is described by: $$\frac{1}{f} = \frac{1}{o} + \frac{1}{i}$$ However, I haven't seen this applied to determine where an incoming ray should go. This is a problem when writing, for example, a ray tracer. Working it out seemed to be pretty straightforward. I reasoned that the above equation gives a location of an image point that any object point focuses to. Therefore, if you consider an incoming ray starting at some position $\vec{p}_1$, then we can consider $\vec{p}_1$ to be some "object" and figure out where its image is. Since by definition the object point focuses to the image point, by definition every ray from the object point through the lens passes through the image. Therefore, you can just take the incoming ray's intersection with the lens, and then shoot a new ray from that point out through the image. In math, that looks like (feel free to skip) this. Let's assume everything is in 2D and that the ray is entering from the left and hits a thin lens centered at the origin. Ray $\vec{R}_1(s)=\vec{p}_1+s\cdot\vec{d}_1$ is the incoming ray and we want to find $\vec{p}_2$, $\vec{d}_2$ for outgoing ray $\vec{R}_2(t)=\vec{p}_2+t\cdot\vec{d}_2$. Finding $\vec{p}_2$ is simple, since this is just the intersection of $\vec{R}_1$ with the lens. To find $\vec{d}_2$, first we find an "image" of the ray's origin, $\vec{I}$: $$o = |\vec{p}_{1,x}|\tag{horizontal distance}$$ $$i = \left( \frac{1}{f} - \frac{1}{o} \right)^{-1} \tag{thin lens eq.}$$ $$\vec{I} = -\frac{i}{o} \vec{p}_{1} \tag{scale through lens}$$ Note that the above works regardless of whether the ray origin $\vec{p}_1$ is inside the focal distance or not. For the next step, it matters, though. 
If the image is real (i.e., on the opposite side of the lens from the incoming ray): $$\vec{d}_2 = \frac{\vec{I}_{real} - \vec{p}_2}{|| \vec{I}_{real} - \vec{p}_2 ||}$$ If the image is virtual (i.e., on the same side of the lens as the incoming ray), then the concept is the same; you just need to reverse the direction: $$\vec{d}_2 = \frac{\vec{p}_2 - \vec{I}_{virtual}}{|| \vec{p}_2 - \vec{I}_{virtual} ||}$$ Note that all of the above works regardless of which side the ray enters on (so we can relax the constraint about the ray coming from the left). All of this makes nice pseudocode (originally from Python) (takes into account a lens center and a few edge cases):

o = [(positive) horizontal distance from ray origin to lens center]
lens_to_obj = [ray origin] - [lens center]
lens_to_objn = normalized(lens_to_obj)
if o == [focal length]:
    #Object is exactly at the focal length; rays are parallel and no finite image is formed.
    new_direction = -lens_to_objn
else:
    #Thin lens equation
    i = 1.0 / (1.0/[focal length] - 1.0/o)
    if o < [focal length]:
        #Object is inside the focal length; rays diverge and the virtual image is behind
        # the object on the same side of the lens.
        image = (-i/o)*lens_to_obj + [lens center]
        new_direction = normalized([lens hit pt] - image)
    else:
        #Object is outside the focal length; rays converge and the real image is on the
        # other side of the lens.
        image = (i/-o)*lens_to_obj + [lens center]
        new_direction = normalized(image - [lens hit pt])
return Ray([lens hit pt], new_direction)

When I use this code in a raytracer, I get nice looking images, but Monte Carlo integrals running forward and backward don't agree (in a way that they actually do for other kinds of lenses (for example, I tried a glass sphere)). Have some eye candy: In the above, the first picture is of an aperture; note how the flux estimates agree. The second picture is of a thin lens; note how they don't. The 275 line, well commented source that generated the above is available here.
The point of this very long question is this: something has gone wrong with these flux calculations. Under the circumstances, I'd say it's something else, but retrying these experiments with other kinds of distorting objects (e.g. mirrored/glass circles) shows that the problem really only seems to be occurring with thin lenses. So, my question is this: What's wrong with the above algorithm for computing refracted rays?
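For reference, the algorithm described above can be written as a self-contained 2-D function. This is only a sketch of the stated method, not the poster's actual source: the lens is placed on the $y$-axis at the origin, names are mine, and the ray is assumed to actually cross the lens plane. A ray entering parallel to the axis must exit through the far focal point, which gives a cheap sanity check:

```python
import math

def normalized(v):
    n = math.hypot(v[0], v[1])
    return (v[0] / n, v[1] / n)

def refract_thin_lens(p1, d1, f):
    """Bend ray (origin p1, direction d1) at a thin lens lying on the y-axis,
    focal length f. Assumes d1[0] != 0 so the ray crosses the lens plane x = 0."""
    s = -p1[0] / d1[0]
    p2 = (0.0, p1[1] + s * d1[1])       # hit point on the lens plane
    o = abs(p1[0])                      # horizontal object distance
    if o == f:                          # object at focal length: exit parallel
        return p2, normalized((-p1[0], -p1[1]))
    i = 1.0 / (1.0 / f - 1.0 / o)       # thin-lens equation (i < 0 => virtual image)
    image = (-i / o * p1[0], -i / o * p1[1])
    if o < f:                           # virtual image: shoot away from it
        return p2, normalized((p2[0] - image[0], p2[1] - image[1]))
    return p2, normalized((image[0] - p2[0], image[1] - p2[1]))

# Sanity check: a ray parallel to the axis from (-2, 1) with f = 1 leaves the
# lens at (0, 1) and should cross the axis at the focal point x = 1.
hit, d2 = refract_thin_lens((-2.0, 1.0), (1.0, 0.0), 1.0)
t = (1.0 - hit[0]) / d2[0]
print(abs(hit[1] + t * d2[1]) < 1e-9)  # True
```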
A departmental store maintains its monthly sales details in a spreadsheet. The rows are used to represent the items, and the sales amounts for the items are shown in columns representing the days in the month. Such an arrangement of collective information in rows and columns is called a matrix. The plural of matrix is matrices. Definition of a Matrix: A matrix is a rectangular array consisting of numbers arranged in rows and columns. If a matrix contains "m" rows and "n" columns, it is called an m $\times$ n matrix (read as m by n) or a matrix of size m $\times$ n. Example: $A_{4\times 3} = \begin{bmatrix} 2 &-1 & \frac{3}{2}\\ 1 & 0 & -2\\ \frac{2}{5} & -3 &1 \\ 2 & -1 & 0 \end{bmatrix}$ Matrix A in the example given has 4 rows and 3 columns and hence is a 4 $\times$ 3 matrix. Each entry in the matrix is called an element of the matrix. The general entry of a matrix is written as $a_{ij}$, where i and j respectively represent the row and column the element is contained in. Thus, $a_{11}$ = 2, $a_{12}$ = -1, $a_{13}$ = $\frac{3}{2}$, $a_{21}$ = 1, $a_{22}$ = 0, $a_{23}$ = -2 and so on. The notation {$a_{ij}$} containing the general element is also used to represent the matrix $A$. Matrices are widely used in solving many mathematical problems; their most significant application is perhaps in solving systems of linear equations. There are four different types of matrices, as follows: Square matrix Diagonal matrix Unit matrix Zero matrix Square Matrix: A matrix of size n $\times$ n, where number of rows = number of columns = n, is called a Square Matrix of order n. For a square matrix, the entries $a_{11}$, $a_{22}$, $a_{33}$, ...., $a_{nn}$ form the main diagonal of the matrix. Example: A 3 $\times$ 3 square matrix is given below. The entries 1, 2 and 0 form the main diagonal of the matrix and are also called the diagonal elements. Diagonal Matrix: A diagonal matrix is a square matrix whose non-diagonal entries are all zero.
In general, B = {$b_{ij}$} is a Diagonal Matrix, where $b_{ij}$ = 0 whenever i $\neq $ j. Example: $B_{4\times 4}= \begin{bmatrix} 2 &0 & 0 &0 \\ 0 & 3 & 0 & 0\\ 0 & 0 & -1 &0 \\ 0 & 0& 0& 1 \end{bmatrix}$ Unit or Identity Matrix: A unit or identity matrix is a diagonal matrix whose diagonal elements are all 1's and whose remaining entries are 0's. Identity Matrix of order 2: $\begin{bmatrix} 1 & 0\\ 0& 1 \end{bmatrix}$ Identity Matrix of order 3: $\begin{bmatrix} 1 & 0 &0 \\ 0 & 1 &0 \\ 0 & 0 & 1 \end{bmatrix}$ Identity matrices get their name from the fact that they serve as the identity in matrix multiplication. Zero Matrix: If the elements of a matrix are all zeros, then it is called a zero matrix. Examples: The zero matrix of order m $\times$ n serves as the additive identity for matrices of size m $\times$ n. $0_{2\times 3}= \begin{bmatrix} 0 & 0 & 0\\ 0 & 0 & 0 \end{bmatrix}$ $0_{3\times 3}= \begin{bmatrix} 0 & 0 & 0\\ 0& 0 & 0\\ 0& 0& 0 \end{bmatrix}$
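The identity and additive-identity facts above can be checked with plain Python lists; a small sketch (helper names are mine) that multiplies a matrix by the identity and adds the zero matrix:

```python
def matmul(A, B):
    # Standard row-by-column product: (AB)_ij = sum_k A_ik * B_kj
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def identity(n):
    return [[1 if i == j else 0 for j in range(n)] for i in range(n)]

def zero(m, n):
    return [[0] * n for _ in range(m)]

A = [[2, -1], [1, 0], [3, 4]]                    # a 3 x 2 matrix
assert matmul(A, identity(2)) == A               # I is the multiplicative identity
assert matmul(identity(3), A) == A               # ... on either side
Z = zero(3, 2)
assert [[A[i][j] + Z[i][j] for j in range(2)]
        for i in range(3)] == A                  # 0 is the additive identity
print("identity and zero matrix checks pass")
```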
Najati, A., Abdollahpour, M., Park, C. (2017). On the stability of linear differential equations of second order. International Journal of Nonlinear Analysis and Applications, 8(2), 65-70. doi: 10.22075/ijnaa.2017.1078.1226 On the stability of linear differential equations of second order 1Department of Mathematics, Faculty of Sciences, University of Mohaghegh Ardabili, Ardabil 56199-11367, Iran 2Department of Mathematics, Hanyang University, Seoul, 133-791, South Korea Abstract The aim of this paper is to investigate the Hyers-Ulam stability of the linear differential equation $$y''(x)+\alpha y'(x)+\beta y(x)=f(x)$$ in general case, where $y\in C^2[a,b],$ $f\in C[a,b]$ and $-\infty<a<b<+\infty$. The result of this paper improves a result of Li and Shen [\textit{Hyers-Ulam stability of linear differential equations of second order,} Appl. Math. Lett. 23 (2010) 306--309].
Spring 2018, Math 171 Week 7
Martingales
Let \((X_n)_{n \ge 0}\) be i.i.d. uniform on \([-1, 0) \cup (0, 1]\).
 Show that \((M_n)_{n \ge 0}\) with \(M_n = X_0 + \dots + X_n\) is a martingale.
 Show that \((M_n)_{n \ge 0}\) with \(M_n = \frac{1}{X_0} + \dots + \frac{1}{X_n}\) is not a martingale. (Answer) \(\mathbb{E}|M_n| = \infty\)
Let \((X_n)_{n \ge 1}\) be i.i.d. uniform on \(\{-1, 1\}\), AKA Rademacher distributed. Let \(S_n = X_1 + \dots + X_n\), \(S_0 = 0\).
 Compute the moment generating function of \(S_n\). That is, \(\mathbb{E}[e^{tS_n}]\). (Answer) \(\left(\frac{e^t + e^{-t}}{2}\right)^n\)
 Find the odd moments of \(S_n\). That is, \(\mathbb{E}[S_n^{2m+1}]\). Explain briefly why the result makes sense. (Answer) 0. Makes sense because \(S_n\) is symmetric about 0.
 Find a formula for the even moments of \(S_n\). That is, \(\mathbb{E}[S_n^{2m}]\). (Answer) \(\sum_{k=0}^n \binom{n}{k}\left(\frac{1}{2}\right)^n (2k-n)^{2m}\)
 For what values of \(c_n\) is \(M_n = S_n^2 - c_n\) a martingale? (Answer) \(c_n = n\)
 Find a formula for \(\mathbb{E}[S_{n+1}^{2m} \mid S_n=t]\) which depends only on \(m\) and \(t\). (Answer) \(\sum_{k=0}^m {2m \choose 2k} t^{2k}\)
Optional Stopping
 Problem 5.16 from the textbook (2nd edition)
 Problem 5.17 from the textbook (2nd edition)
 Problem 5.8 from the textbook (2nd edition)
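The even-moment formula in the worksheet can be confirmed by brute force over all \(2^n\) sign sequences; a small sketch in Python (function names are mine, and the values of n and m are chosen arbitrarily):

```python
from itertools import product
from math import comb

def even_moment_formula(n, m):
    # E[S_n^{2m}] = sum_k C(n,k) (1/2)^n (2k - n)^{2m}, k = number of +1 steps
    return sum(comb(n, k) * (0.5 ** n) * (2 * k - n) ** (2 * m)
               for k in range(n + 1))

def even_moment_bruteforce(n, m):
    # Average S_n^{2m} over all 2^n equally likely +/-1 sequences.
    return sum(sum(w) ** (2 * m) for w in product((-1, 1), repeat=n)) / 2 ** n

for n in (3, 4, 5):
    for m in (1, 2):
        assert abs(even_moment_formula(n, m) - even_moment_bruteforce(n, m)) < 1e-9

print(even_moment_formula(4, 1))  # E[S_4^2] = 4, consistent with c_n = n
```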
I've read that realized kernels are the thing to use for calculating daily volatility from high-frequency data. So I've got minute data, how do I actually use such a kernel? Will it give me minute-ly volatility, do I have to normalize it somehow? Also, what data do I feed it - minute data since the start of the trading day, one day's worth of data, all historical data I have until this point, or something else? The use of kernels to estimate volatility using intraday data is "nothing more" than combining:
- intraday volatility estimation
- kernel smoothing
Thus you have to take care of the usual pitfalls of these two approaches. Intraday volatility estimation. I hope you know the "signature plot" effect. Of course if you use the proper estimation method, it should take care of it, but just in case you should check that you do not suffer from it. Kernel smoothing. You will have to tune the time scale of your kernel. Of course theoretical papers like Designing Realized Kernels to Measure the ex post Variation of Equity Prices in the Presence of Noise do not really have to do it since their results are asymptotic, but real life is not. Moreover, since your estimator $\hat\sigma_t$ contains, at date $t$, information from previous dates, if you multiply it by a statistic from the past $X_{t-\delta t}$ you will obtain a "trace" of the correlation between $X$ and $\sigma$ (i.e. somewhere inside $\mathbb{E}(\hat\sigma_t \cdot X_{t-\delta t})$ you have a term in $\mathbb{E}(K(\delta t) \cdot \sigma_{t-\delta t} \cdot X_{t-\delta t})$, where $K$ is your kernel). It may add bias if you use your kernel volatility estimate to build other estimators. And not only for multiplications. Using a realized kernel for calculating volatility will give you results in the same resolution as the data you feed them. So if you feed them minute-by-minute data, then the volatility will be calculated minute-by-minute.
What that really means is that only once per minute will you have a good estimate of the volatility of whatever asset you're looking at. The other 99.99% of the time, the market might introduce changes which may throw that estimate out of the window. If you're not interested in high-frequency volatility estimates, then it's a completely different matter. You'd be better off trying to pre-filter your data before feeding it to the realized kernel. The goal there is to reduce the noise so that the remaining signal matches up to the frequency you wish to use. So if you want to have a day-by-day estimate based on minute-by-minute data, you can probably get a pretty good result by boiling down the minute-by-minute event data into hour-by-hour events. I'm not familiar with such algorithms that give weight to temporal heuristics such as day-of-week or month-in-year or year-by-year cycles. Unless you know you're using such an algorithm, there's no reason to feed more than just today's data if you want an estimate for the current day. If anything, adding more data only causes the estimate to become duller, giving you only week- or month-accurate estimates. If you don't weight your data at all, then feeding in all your historical data might give you a volatility estimate for the next decade, but it will be off by a mile in the short term.
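To make the mechanics concrete, here is a minimal Bartlett-weighted sketch of a kernel realized-variance estimator. This is illustrative only (it is not the tuned flat-top estimator of the Barndorff-Nielsen et al. paper cited above); the function name and the bandwidth parameter H are my own, and with H = 0 it reduces to plain realized variance:

```python
def realized_kernel(returns, H):
    """Kernel-weighted realized variance: gamma_0 + 2 * sum_h k(h) * gamma_h,
    with Bartlett weights k(h) = 1 - h/(H+1) and autocovariance-style terms
    gamma_h = sum_t r_t * r_{t-h}. H = 0 gives the plain realized variance."""
    n = len(returns)

    def gamma(h):
        return sum(returns[t] * returns[t - h] for t in range(h, n))

    return gamma(0) + 2 * sum((1 - h / (H + 1)) * gamma(h)
                              for h in range(1, H + 1))

r = [0.001, -0.002, 0.0015, -0.001, 0.002]       # one day of intraday returns
print(realized_kernel(r, 0) == sum(x * x for x in r))  # True
```

The kernel terms with h > 0 are what soak up the microstructure-noise autocorrelation that makes the naive sum of squared returns blow up at fine sampling (the "signature plot" effect mentioned above).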
Using PIPE-FLO ® Professional to Evaluate Compressible Choked Flow in Control Valves PIPE-FLO Professional uses equations from recognized industrial standards for control valves, making it an excellent tool to evaluate cavitation and choked flow in compressible and incompressible fluid flow through control valves in a piping system. PIPE-FLO Professional 2009 used the ANSI/ISA-75.01.01-2007 Flow Equations for Sizing Control Valves standard, while PIPE-FLO Professional v12 (and above) implemented updated equations based on IEC 60534-2-1 (2011) Industrial Process Control Valves – Flow Capacity – Sizing Equations for Fluid Flow under Installed Conditions (the international equivalent of the ANSI/ISA standard). PIPE-FLO Professional v12 also implements additional calculations and a new methodology for predicting and performing calculations around the choked flow condition. These changes result in lower calculated choked flow rates in v12 than v2009, resulting in choked flow warnings in v12 where they were not indicated in v2009. Customers are encouraged to evaluate models built in earlier versions of PIPE-FLO Professional to determine if this change impacts their systems. One key to successfully modeling compressible fluid flow in PIPE-FLO Professional is to define the pipe's fluid zone pressure as close as possible to the calculated inlet or average pressures at the device. For control valves, the compressible flow equations from the ANSI/ISA and IEC standards account for the compressibility effects of the gas flow by calculating an Expansion Factor (Y) for the valve. Background on Control Valve Performance and Choked Flow for Gas Applications The hydraulic performance of a control valve is characterized by its Flow Coefficient (Cv), which defines the amount of pressure drop across a valve at a given flow rate, or conversely the flow rate at a given pressure drop.
The performance of a control valve at a fixed position (fixed Cv) is shown in Figure 1 for a control valve with Cv = 218.4 and xTP = 0.8575 for air at 100 psia inlet static pressure and 60 degF. The dimensionless Pressure Drop Ratio (x) is used on the vertical axis, which can only vary from 0.0 to 1.0. The Expansion Factor (Y) is shown as the green line and uses the upper horizontal axis with x, and takes into account the adiabatic expansion of the air across the valve. The incompressible flow rate (in lb/sec) is the blue line and the compressible flow rate (in lb/sec) is the red line. Note: the flow equation gives results in units of lb/hr, which is divided by 3600 to obtain units of lb/sec. The incompressible equation in Figure 1 displays a 2nd order relationship between flow rate and pressure drop across the valve. Multiplying the Expansion Factor (Y) curve by the incompressible flow curve gives the compressible flow curve. For compressible gas applications, choked flow occurs in a control valve when the gas velocity at the vena contracta approaches the local speed of sound and the Mach Number approaches 1.0. The ANSI/ISA and IEC standards state that choked flow conditions occur when Y = 2/3 and the flow rate no longer increases with increasing dP (increasing x). The pressure drop across the valve cannot exceed the choked pressure drop, even if the exit pressure should result in a greater dP and x. When the actual pressure drop is greater than the choked pressure drop, the excess pressure drop must occur at the exit plane in the form of normal shock waves, resulting in a "back pressure" being felt at the outlet of the control valve. From Figure 1, the choked flow conditions occur at Ychoked = 2/3, xchoked = 0.859, wchoked = 17.1 lb/sec, dPchoked = 85.9 psi.
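As a cross-check, the choked-flow point quoted above can be reproduced from the sizing relation w = 63.3 · Y · Cv · sqrt(x · P1 · ρ1) with Y = 1 − x/(3 · Fγ · xTP). The sketch below is an approximation, not the software's calculation: it assumes Fγ = 1 for air, an ideal-gas inlet density at 100 psia / 60 degF (R = 53.35 ft·lbf/lb·°R), and takes the choked point at Y = 2/3, i.e. x = Fγ · xTP, which is why it lands at x ≈ 0.8575 rather than the 0.859 read off the figure:

```python
import math

# Figure 1 conditions (assumptions: ideal gas, F_gamma = 1 for air)
Cv, xTP, Fg = 218.4, 0.8575, 1.0
P1 = 100.0                                  # inlet static pressure, psia
rho1 = P1 * 144.0 / (53.35 * 519.67)        # lb/ft^3 at 60 degF (519.67 R)

x_choked = 3 * Fg * xTP * (1 - 2 / 3)       # solves Y = 1 - x/(3*Fg*xTP) = 2/3
Y = 1 - x_choked / (3 * Fg * xTP)           # = 2/3 at choking
w_choked = 63.3 * Y * Cv * math.sqrt(x_choked * P1 * rho1) / 3600.0  # lb/hr -> lb/sec

print(round(x_choked, 4))   # 0.8575
print(round(w_choked, 2))   # close to the 17.1 lb/sec quoted from Figure 1
```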
w_compressible = 63.3 · Y · Cv · sqrt(x · P1 · ρ1)

Y = 1 − x / (3 · Fγ · xTP)

x = (P1 static abs − P2 static abs) / P1 static abs = dP / P1 static abs

Figure 1. Mass Flow Rate (w) and Expansion Factor (Y) vs Pressure Drop Ratio (x) for Cv = 218.4 PIPE-FLO Professional 2009 Results PIPE-FLO 2009 uses equations from the ANSI/ISA-75.01.01-2007 standard to determine the pressure drop and flow rate using the flow coefficient relationship and total inlet pressure. Choked flow conditions were flagged, but there was increased uncertainty in the calculated results when near the choked flow condition. Figure 2 shows the calculated results for the conditions shown in Figure 1 for a control valve with Cv = 218.4 with air at 60 deg F and 100 psia inlet static pressure (107.8 psia total pressure). The valve discharges to a total pressure of 14.7 psia. The inlet and outlet pipes are 4" NPS Sched 40 with a very short length so that P1 and P2 essentially define the inlet and outlet total pressures of the control valve. Note that there are two fluid zones, one at 100 psia for the inlet pipe and the other at 14.7 psia for the outlet pipe. In PF 2009, the choked flow rate was labeled as Qmax, equal to 18.44 lb/sec in Figure 2. Since the calculated flow rate is 17.49 lb/sec (less than Qmax), choked flow was not indicated. This value of Qmax is 7.8% greater than the choked flow rate in Figure 1, but the calculated flow rate is only 2.3% greater than the Figure 1 choked flow rate. The calculated air velocity at the outlet is 2582 ft/sec, which is above the sonic velocity of air (around 1100 ft/sec), indicating that there is inaccuracy that needs to be further evaluated. The calculated dP across the valve is 93.1 psi, which is established by the entered values at P1 and P2, but this is greater than the dPchoked from Figure 1, so there is inaccuracy in this value.
The calculated Qmax and dP would put this point to the right of the red curve in Figure 1. Figure 2. Control valve results in PIPE-FLO 2009 PIPE-FLO Professional v12.0 and v12.1 Results In PIPE-FLO Professional v12, updated equations based on IEC 60534-2-1 (2011) Industrial Process Control Valves – Flow Capacity – Sizing Equations for Fluid Flow under Installed Conditions (the ANSI/ISA equivalent) were implemented, as well as new calculations and a different methodology to evaluate performance around the choked flow condition. The IEC standard shifts focus from the choked flow rate to the choked pressure drop, so PFv12.0 added the calculations for Choked dP, but did not calculate Qmax. Additional control valve parameters were also calculated (xT, xTP, Y). Figure 3 shows the results calculated in v12.1 for the same conditions modeled in Figures 1 and 2 above. Since the calculated dP (93.1 psi) is greater than the Choked dP (85.89 psi), a warning (Message ID 157 Choked flow through the device) is generated indicating choked flow conditions. (Also note the calculated Y < 2/3, also indicating choked flow). Because of the choked flow conditions, the calculated flow rate in v12.1 is 2% less than the v2009 flow rate. The choked condition creates another inaccuracy in the results which shows up in the calculated outlet pressure of the valve. The outlet pressure of the valve is set by the entered value at P2. But the choked condition limits the dP across the valve to the Choked dP value, so the outlet total pressure should be (107.8 - 85.89) = 21.91 psia, not 14.7 psia. The rest of the pressure drop would occur beyond the end of the pipe. The inaccuracy in the calculated velocity in Pipe 2 is also an indication of the uncertainty in the calculated results that needs further evaluation. Improvements have been made to further this evaluation in v14. Figure 3.
Control valve results in PIPE-FLO v12.1
PIPE-FLO Professional v14 Results
Additional calculations were added with the development of PIPE-FLO v14.0, including the static pressures at the inlet and outlet of pipes and the choked flow rate for the control valves (which was Qmax in v2009). The absolute total inlet pressure is now used for P1 in the control valve equations, as opposed to the absolute static inlet pressure that was used in v12.1 and v2009. This change was made to conform to the IEC and ANSI/ISA standards, and it results in lower choked flow rates, especially at high inlet velocities. These additional results give more insight into evaluating choked flow conditions in control valves and adjusting the model accordingly. Figure 4 shows the calculated results for PIPE-FLO v14.1 for the same conditions modeled in Figures 1, 2, and 3 above, with the additional calculated results displayed. Choked flow conditions are still indicated, with a calculated dP > Choked dP (93.1 > 85.89 psi) and a calculated Expansion Factor Y = 0.6387 < 2/3 (the same results as v12.1). The calculated flow rate = 17.14 lb/sec, which is 0.12% greater than the choked flow rate (17.12 lb/sec), also indicating choked flow conditions. The new calculated results for static and total pressure at the inlet and outlet of pipes allow the adjustment of the Pressure Boundaries, which use the entered Total Pressure value for calculations. In order to obtain 100 psia static pressure at the inlet of the control valve, the total pressure must be 107.8 psia for the calculated flow rate (the dynamic pressure is 7.8 psi). As with v12.1 in Figure 3, the calculated outlet pressure of the control valve is inaccurate due to the choked flow conditions. Since Choked dP = 85.89 psi and P1 total = 107.8 psia, the actual valve outlet total pressure cannot go below 21.91 psia, even though a lower pressure exists downstream.
The outlet pressure shown in the results is calculated from the user-entered total pressure P2 and the dP across the outlet pipe, Pipe 2, which is essentially dP = 0 psi due to the very short length of pipe entered. A new warning implemented in PIPE-FLO v14.0 indicates that the static pressure at P2 is below 0 psia absolute, meaning the model is depicting a physical impossibility. In other words, the user-entered conditions cannot be achieved due to choked flow conditions. The calculated velocity also indicates a physical impossibility, since it is above sonic velocity, or Mach 1. Although the calculated results and warnings indicate a problem with how the system is modeled, the results can be interpreted to help adjust the boundary conditions and eliminate the inaccuracy.
Figure 4. Control valve results in PIPE-FLO v14.0
Adjusting Your PIPE-FLO Professional Model for Choked Flow Conditions
The PIPE-FLO model can be adjusted to account for the choked flow conditions created in the control valve, as shown in Figure 5. By increasing the total pressure specified at P2 and assigning Pipe 2 a fluid zone with a pressure close to that value, the results can be corrected. This is an iterative process to find the correct outlet total pressure, P2, and the appropriate fluid zone pressure. Figure 5 shows a set of results with P2 = 27.8 psia total and a fluid zone pressure of 28 psia assigned to Pipe 2. The calculated flow rate (17.09 lb/sec) is very close to but just below the choked flow value (17.12 lb/sec), so the choked flow condition is not flagged (also, the calculated dP is less than the choked dP). These results indicate the valve is just at the verge of the fully choked condition. The calculated velocity in Pipe 2 is closer to sonic velocity, so there is greater accuracy in this calculation. Further iterations of P2 and reassigning fluid zones may be needed to narrow down the solution.
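The back-pressure floor that drives this adjustment follows directly from the choked pressure drop. A minimal sketch, using the values from the worked example (the helper name is ours, not a PIPE-FLO function):

```python
def min_outlet_total_pressure(p1_total, dp_choked):
    """Lowest physically attainable valve outlet total pressure (psia):
    under choked conditions the valve cannot drop more than dp_choked,
    so any boundary pressure entered below this floor is unreachable."""
    return p1_total - dp_choked

# Worked example: P1 = 107.8 psia total, Choked dP = 85.89 psi
p2_floor = min_outlet_total_pressure(107.8, 85.89)   # about 21.91 psia
```

Starting the P2 iteration a few psi above this floor (27.8 psia in Figure 5) keeps the valve just below fully choked conditions.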
The outlet pressure of the control valve now shows the true amount of back pressure that would be created by the choked flow condition in the valve.
Figure 5. Control valve results with P2 adjusted for choked flow conditions in v14.1.
Summary
Evaluating choked flow conditions in PIPE-FLO Professional has improved with each update of the software. New calculations and tools have been added to improve the accuracy of the calculations and help the user evaluate the calculated results and adjust the model to more accurately depict the real-world performance of the control valve in a piping system. For additional reading on choked flow in control valves, refer to the article "Gas Flow in Control Valves" by Jon Monsen at Valin Corporation.
Definition:Number Definition There are five main classes of number: $(1): \quad$ The natural numbers: $\N = \set {0, 1, 2, 3, \ldots}$ $(2): \quad$ The integers: $\Z = \set {\ldots, -3, -2, -1, 0, 1, 2, 3, \ldots}$ $(3): \quad$ The rational numbers: $\Q = \set {p / q: p, q \in \Z, q \ne 0}$ $(4): \quad$ The real numbers: $\R = \set {x: x = \sequence {s_n} }$ where $\sequence {s_n}$ is a Cauchy sequence in $\Q$ $(5): \quad$ The complex numbers: $\C = \set {a + i b: a, b \in \R, i^2 = -1}$ It is possible to categorize numbers further, for example: The set of algebraic numbers $\mathbb A$ is the subset of the complex numbers which are roots of polynomials with rational coefficients. The algebraic numbers include the rational numbers, $\sqrt 2$, and the golden section $\varphi$. The set of transcendental numbers is the set of all the real numbers which are not algebraic. The transcendental numbers include $\pi, e$ and $\sqrt 2^{\sqrt 2}$. The set of prime numbers (sometimes referred to as $\mathbb P$) is the subset of the integers which have exactly two positive divisors, $1$ and the number itself. The first several positive primes are $2, 3, 5, 7, 11, 13, \ldots$ Number Sets as Algebraic Structures Note that: $\struct {\N, +, \le}$ can be defined as a naturally ordered semigroup. $\struct {\Z, +, \times, \le}$ is a totally ordered integral domain. $\struct {\Q, +, \times, \le}$ is a totally ordered field, and also a metric space. $\struct {\R, +, \times, \le}$ is a totally ordered field, and also a complete metric space. $\struct {\C, +, \times}$ is a field, but cannot be ordered. However, it can be treated as a metric space. Also see It is possible to continue from the concept of complex numbers and define: The quaternions $\mathbb H$ (labelled $\mathbb H$ for William Rowan Hamilton who discovered / invented them) The octonions $\mathbb O$ The sedenions $\mathbb S$ and so forth. Thence follows an entire branch of mathematics: see Cayley-Dickson construction. 
Comment Note that (up to isomorphism): $\N \subseteq \Z \subseteq \Q \subseteq \R \subseteq \C$ and of course $\mathbb P \subseteq \Z$. Sources 1965: J.A. Green: Sets and Groups... (next): $\S 1.1$. Sets 1965: Seth Warner: Modern Algebra... (previous) ... (next): Chapter $1$: Algebraic Structures: $\S 1$: The Language of Set Theory 1967: George McCarty: Topology: An Introduction with Application to Topological Groups... (next): Introduction 1975: T.S. Blyth: Set Theory and Abstract Algebra... (previous) ... (next): $\S 1$. Sets; inclusion; intersection; union; complementation; number systems 1975: W.A. Sutherland: Introduction to Metric and Topological Spaces... (previous) ... (next): Notation and Terminology 1977: K.G. Binmore: Mathematical Analysis: A Straightforward Approach... (previous) ... (next): $\S 1$: Real Numbers: $\S 1.2$: The set of real numbers 1978: Thomas A. Whitelaw: An Introduction to Abstract Algebra... (previous) ... (next): $\S 2$: Introductory remarks on sets: $\text{(b)}$ 1996: H. Jerome Keisler and Joel Robbin: Mathematical Logic and Computability... (previous) ... (next): Appendix $\text{A}.1$: Sets
@Rubio The options are available to me and I've known about them the whole time, but I have to admit that it feels a bit rude if I act like an attribution vigilante that goes around flagging everything and leaving comments. I don't know how the process behind the scenes works, but what I have done up to this point is leave a comment then wait for a while. Normally I get a response, or I flag after some time has passed. I'm guessing you say this because I've forgotten to flag several times.

You can always leave a friendly comment if you like, but flagging gets eyes on it to get the problem addressed - ideally before people start answering it. Something we don't want is for people to farm rep off someone else's content, which we see occasionally; but even beyond that, SE in general and we in particular dislike it when people post content they didn't create without properly acknowledging its source. And most of the creative effort here is in the question. So yeah, it's best to flag it when you see it. That'll put it into the queue for reviewers to agree (or not) - so don't worry that you're single-handedly (-footedly?) stomping on people :) Unfortunately, a significant part of the time, the asker never supplies the origin. Sometimes they self-delete the question rather than just tell us where it came from. Other times they ignore the request and the whole thing, including whatever effort people put into answering, gets discarded when the question is deleted.

Okay. This is the first Riley I've written, and it gets progressively harder as you go along, so here goes. I wrote this, and then realized that I used a mispronunciation of the target, so I had to sloppily improvise. I apologize. Anyway, I hope you enjoy it! My prefix is just shy of white, Yet...

IBaNTsJTtStPMP means "I'm Bad at Naming Things, so Just Try to Solve this Patterned Masyu Puzzle!". The original Masyu rules apply. Make a single loop with lines passing through the centers of cells, horizontally or vertically.
The loop never crosses itself, branches off, or goe...

This puzzle is based off the What is a Word™ and What is a Phrase™ series started by JLee and their spin-off What is a Number™ series. If a word conforms to a certain rule, I call it an Etienne Word™. Use the following examples to find the rule: These are not the only examples of Etienne Wo...

This puzzle is based off the What is a Word™ and What is a Phrase™ series started by JLee and their spin-off What is a Number™ series. If a word conforms to a certain rule, I call it an Eternal Word™. Use the following examples to find the rule: $$% set Title text. (spaces around the text ARE ...

Introduction
I am an enthusiastic geometry student, preparing for my first quiz. Yet while revising I accidentally spilt my coffee onto my notes. Can you rescue me and draw me a diagram so that I can revise it for tomorrow's test? Thank you very much! My Notes

Sometimes you are this word
Remove the first letter, does not change the meaning
Remove the first two letters, still feels the same
Remove the first three letters and you find a way
Remove the first four letters and you get a number
The letters rearranged is a surname
Wh...

– "Sssslither..." Brigitte jumped. The voice had whispered almost directly into her ear, yet there was nobody to be seen. She looked at the ground beneath her feet. Was something moving? She was probably imagining things again. – Did you hear something? she asked her guide, Skaylee....

The creator of this masyu forgot to add the final stone, so the puzzle remains incomplete. Finish his job for him by placing one additional stone (either black or white) on the board so that the result is a uniquely solvable masyu. Normal masyu rules apply.

So here's a standard Nurikabe puzzle. I'll be using the final (solved) grid for my upcoming local puzzle competition logo as it will spell the abbreviation of the competition name. So, what does it spell? Rules (adapted from Nikoli): Fill in the cells under the following rules....
I've designed a set of dominoes puzzles that I call Donimoes. You slide the dominoes like the cars in Nob Yoshigahara's Rush Hour puzzle, always along their long axis. The goal of Blocking Donimoes is to slide all the dominoes into a rectangle, without sliding any matching numbers next to each ot...

I am mud that will trap you. I am a colloid hydrogel. What am I? Take the first half of me and add me to this: I am dangerous to wolves and werewolves alike. Some people even say that I am dangerous to unholy things. Use the creator of Poirot to find out: What am I? Now, take another word for ...

Clark who is consecutive in nature, lives in California near the 100th street. Today he decided to take his palindromic boat and visit France. He booked a room which has a number of thrice a prime. Then he ordered Taco and Cola for his breakfast. The online food delivery site asked him to enter t...

Suppose you are sitting comfortably in your universe admiring the word SING. Just then, Q enters your universe and insists that you insert the string "IMMER" into your precious word to create a new word for his amusement. Okay, you can make the word IMMERSING... But then you realize, you can a...

You! I see you walking there
Nary a worry or a care
Come, listen to me speak
My mind is strong, though my body is weak.
I've got a riddle for you to ponder
Something to think about whilst you wander
It's a classic Riley, a word split in three
For a prefix, an...

@OmegaKrypton rather a poor solution, I think, but I'll try it anyway: Quarrel = cross words. When combined heartlessly: put them together by removing the middle space. Thus, crosswords. Nonstop: remove the final letter. We've made crossword = feature in daily newspaper

I saw this photo on LinkedIn: Is this a puzzle?
If so, what does it mean and what is a solution? What I've found so far:
$a = \pi r^2$ is clearly the area of a disk of radius $r$
$2\pi r$ is clearly its circumference
$\displaystyle \int\dfrac{dx}{\sin x} = \ln\left(\left| \tan \dfrac{x}{2}\right|\...
Daniel Soltész I'm a PhD student at Budapest University of Technology and Economics, department of Computer science. My main interest is graph theory and extremal set theory. Email: protosdrone (gmail) 9 answers 1 question ~6k people reached Budapest Member for 6 years, 1 month 59 profile views Last seen Jul 27 at 18:38 Communities (8) Top network posts 49 Strange (or stupid) arithmetic derivation 27 Computer calculations in a paper 20 A Ramsey avoidance game 19 Reproving a known theorem in an article 13 Order of magnitude of $\sum \frac{1}{\log{p}}$ 12 Set system with different differences 12 Coloring the edges of a torus graph View more network posts → Top tags (14) 16 Are these close-votes cast correctly? Oct 7 '15 15 Play the game: an observation about the reputation Jul 14 '14 10 Data on voting by 'association bonus only' users Jan 2 '14 8 Does Mathoverflow really want the 'Publicist' badge? Dec 10 '13 6 Why so heavy downvoting? Sep 10 '15 4 Soft questions without reputation Jan 6 '14 1 Question about “auto-awarding” a bounty Sep 19 '14 0 Discreet editing Nov 2 '14
Definition:Positive Part Definition Let $f: X \to \overline \R$ be an extended real-valued function. Then the positive part of $f$, $f^+: X \to \overline \R$, is the extended real-valued function defined by: $\forall x \in X: f^+ \left({x}\right) := \max \left\{{0, f \left({x}\right)}\right\}$ Also defined as Some sources insist that $f$ be a real-valued function instead. However, $\R \subseteq \overline \R$ by definition of $\overline \R$. Thus, the definition given above incorporates this approach. Also see Definition:Negative Part, the natural associate of positive part
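As a quick computational illustration (a hypothetical helper, with `math.inf` standing in for the point $+\infty$ of $\overline \R$):

```python
import math

def positive_part(f):
    """Positive part f+ of an (extended) real-valued function f:
    f+(x) = max(0, f(x)), so f+ never takes negative values."""
    return lambda x: max(0.0, f(x))

g = positive_part(lambda x: x - 2)
# g(5) = 3 where x - 2 is positive; g(1) = 0.0 where it is negative
```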
FermiDirac¶ class FermiDirac(broadening)¶ Parameters: broadening (PhysicalQuantity of type energy or temperature) – The broadening of the Fermi-Dirac distribution. Usage Examples¶ Use the Fermi-Dirac occupation function with a broadening of 0.1 eV on an LCAOCalculator:

numerical_accuracy_parameters = NumericalAccuracyParameters(
    occupation_method=FermiDirac(0.1*eV))
calculator = LCAOCalculator(
    numerical_accuracy_parameters=numerical_accuracy_parameters)

Notes¶ Note For comparison of different occupation methods and suggestions for which one to choose, see Occupation Methods. In the Fermi-Dirac smearing scheme one effectively considers the system at finite temperature. This means that one replaces the integer occupation numbers when calculating e.g. the electron density in DFT by fractional occupation numbers given by the Fermi-Dirac distribution \(f(\epsilon) = \frac{1}{1 + e^{(\epsilon - \mu)/\sigma}}\), where \(\epsilon\) is the energy of a given state, \(\mu\) is the chemical potential/Fermi level and \(\sigma\) is a broadening parameter. For finite temperature calculations \(\sigma=k_\text{B} T\) with \(T\) the temperature. The Fermi-Dirac smearing scheme also corresponds to replacing the Dirac delta-function in the density of states by a smeared function given by the negative derivative of the occupation with respect to energy, \(-\partial f/\partial \epsilon\). In the Fermi-Dirac smearing scheme the contribution to the generalized entropy from a state with occupation \(f\) is [Mer65] \(S_i = -\left[f \ln f + (1-f)\ln(1-f)\right]\). The total energy can be extrapolated to zero broadening by adding to the total internal energy a correction term proportional to the broadening and the summed entropy contributions \(S_i\), where \(i\) runs over all states. [Mer65] N. D. Mermin. Thermal properties of the inhomogeneous electron gas. Phys. Rev., 137:A1441–A1443, Mar 1965. doi:10.1103/PhysRev.137.A1441.
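The occupation and per-state entropy expressions above are easy to sketch in plain Python. This is an illustration of the formulas themselves, not QuantumATK code; energies and the broadening σ are assumed to be in the same units (e.g. eV):

```python
import math

def fermi_dirac(energy, mu, sigma):
    """Fractional occupation f = 1 / (1 + exp((e - mu)/sigma));
    sigma = kB*T corresponds to a finite-temperature calculation."""
    x = (energy - mu) / sigma
    if x > 700:                     # avoid overflow far above the Fermi level
        return 0.0
    return 1.0 / (1.0 + math.exp(x))

def fd_entropy(f):
    """Entropy contribution -[f*ln(f) + (1-f)*ln(1-f)] of one state."""
    if f <= 0.0 or f >= 1.0:        # fully empty/occupied states contribute 0
        return 0.0
    return -(f * math.log(f) + (1.0 - f) * math.log(1.0 - f))
```

A state at the Fermi level (energy = μ) is half occupied and contributes the maximum entropy, ln 2.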
You've formulated the question very well, but you have left out just one variable. (Trying to understand something that is not in your grade level is a major crime, but that is your teacher's fault: he hasn't properly taught you not to think). The way to split up the problem is: Get from the surface of planet A to "outer space" in the vicinity of planet A. Have enough velocity left over to get to the vicinity of planet B. Fall onto planet B. It is convenient to consider "outer space" as being infinity: it makes the calculations easier and it makes only the tiniest difference to the figures. 1. From the surface of A to outer space The initial velocity required to get from the surface of a planet to "infinity" is called the escape velocity. For the Earth I find it convenient to remember it as $11$ km/sec, or as $\sqrt{2}$ times the orbital velocity of a very low orbit, which I remember as $8$ km/sec. (That $\sqrt{2}$ is true for any gravitating body, anywhere). You can look up the exact figures in Wikipedia, but I hope you don't. Despite what your teacher thinks, science is not about blind obedience to established authority. According to Newton's law of gravitation, the gravitational force of a planet of mass $M$ on an object of mass $m$ at a distance $r$ from its centre is $$-\frac{GMm}{r^2}\text{,}$$ where $G$ is Newton's gravitational constant, which is the same always and everywhere. You can look it up, and also celebrate the fact that out of all the constants of nature it is the least accurately known. The minus sign is because it makes most sense to treat all distances, velocities, accelerations and forces as acting upwards - and of course gravity pulls downwards. Now, even at your grade you should know that the work performed by a force is equal to the force times the distance moved. So on a tiny bit of the object's journey up from the planet's surface (a distance $\Delta{r}$, say) the work performed by gravity is $-\frac{GMm}{r^2}\Delta{r}$.
Adding up all the little pieces, the total amount of work performed by gravity on the object's journey from the surface into outer space is $$-\int_{r_A}^{\infty}\frac{GMm}{r^2}d{r}\text.$$(If your teacher says that integration is beyond your grade level, strangle him. Integration is easy. Get an elementary calculus book and read it for fun and see). Doing the integration, the total work done by gravity turns out to be $-\frac{GMm}{r_A}$. If you launched your particle with a velocity $v$, that means that it started with a kinetic energy of $\frac12{m}v^2$. When it gets to outer space, the work done by gravity means that the resultant kinetic energy is $$\frac12{m}v^2-\frac{GMm}{r_A}$$ and you'll see that this makes sense, because for small $v$ it's negative (the particle never gets that far), for $v$ equal to the escape velocity it's exactly zero (the particle escapes but that's that), and for larger $v$ it's positive, so there's still some kinetic energy left. A couple of points: There is a factor of $m$ in both halves of the equation. This shows that the mass of the particle isn't relevant to the dynamics of its motion. If planet A is the Earth, you don't know $M$ without looking it up in a book, and you don't know the value of $G$ without looking it up in a book. That would be immoral. On the other hand, you could measure the radius of the Earth if you wanted (Eratosthenes seems to have been the first to do this, and it's quite a doable experiment for everybody), and you could also measure the acceleration due to gravity at the Earth's surface. You would therefore be able to use "acceleration = $GM/r^2$" to work out the value of the product $GM$, and thus be able to work out the escape velocity without looking up anything at all. 2. From outer space near planet A to outer space near planet B I'll be much briefer here. Planet A and planet B are both (I hope) orbiting the Sun. 
If planet B is further away than planet A, you will need some extra kinetic energy to climb out of the Sun's "gravity well". If you prefer, you can think of it as needing "surplus velocity" after escaping from planet A. I will now cheat and say that if you are going out from the Earth to Mars, you need to have $2.9$km/sec of velocity left over, once you have got to outer space, to get out from the vicinity of the Earth to the vicinity of Mars. You could do this working out for yourself, by deducing the acceleration due to the Sun's gravity at the Earth's distance from the Sun (using the length of the year) and comparing it to that at Mars's distance from the Sun (using the Martian year). But I do need to let you do some of the work yourself! Just one other point: it isn't $11.2+2.9=14.1$km/sec you'd need to get to Mars. You need a starting kinetic energy which gets you to $2.9$km/sec when you get to "outer space", and because kinetic energy is proportional to the square of velocity, this means that you only need $11.6$km/sec to start with. On the other hand, if Planet B is nearer to the Sun than planet A (Venus, for example), then you don't need any extra velocity at all. The escape velocity is enough. The relative orbits of planets A and B are the variable that you left out of your question. 3. From outer space near Planet B to the surface of Planet B. No extra velocity needed. Start at zero, and Planet B's gravity will carry you in all the way. I've taken a long time over this because you sound like the sort of person who doesn't just want canned answers from books. Working things out for yourself is what science ought to be about (life, too). It's just unfortunate that so many schools seem to teach the opposite.
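The arithmetic in steps 1 and 2 can be checked in a few lines, in line with the answer's "measure, don't look up" spirit: surface gravity and the planet's radius are the only inputs, since $GM = g r^2$. The function names are ours:

```python
import math

def escape_velocity(g_surface, radius):
    """Escape velocity from surface gravity and radius alone:
    GM = g * r**2, so v_esc = sqrt(2*G*M/r) = sqrt(2*g*r)."""
    return math.sqrt(2.0 * g_surface * radius)

def launch_speed_for_surplus(v_escape, v_surplus):
    """Launch speed needed so that v_surplus is left over 'at infinity':
    kinetic energies add, so v0 = sqrt(v_esc**2 + v_surplus**2)."""
    return math.sqrt(v_escape**2 + v_surplus**2)

v_esc = escape_velocity(9.81, 6.371e6) / 1000.0   # Earth: about 11.2 km/sec
v0 = launch_speed_for_surplus(11.2, 2.9)          # about 11.6 km/sec, not 14.1
```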
Pigeonhole Principle Theorem Let $S$ be a finite set whose cardinality is $n$. Let $S_1, S_2, \ldots, S_k$ be a partition of $S$ into $k$ subsets. Then: at least one subset $S_i$ of $S$ contains at least $\ceiling {\dfrac n k}$ elements where $\ceiling {\, \cdot \,}$ denotes the ceiling function. Proof Aiming for a contradiction, suppose no subset $S_i$ of $S$ has as many as $\ceiling {\dfrac n k}$ elements. Then the maximum number of elements of any $S_i$ would be $\ceiling {\dfrac n k} - 1$. So the total number of elements of $S$ would be no more than $k \paren {\ceiling {\dfrac n k} - 1} = k \ceiling {\dfrac n k} - k$. There are two cases: Suppose $k \divides n$. Then $\ceiling {\dfrac n k} = \dfrac n k$ is an integer and: $k \ceiling {\dfrac n k} - k = n - k$ Thus: $\displaystyle \sum_{i \mathop = 1}^k \card {S_i} \le n - k < n$ This contradicts the fact that the subsets $S_i$ contain all $n$ elements of $S$ between them. Next, suppose that $k \nmid n$. Then $\ceiling {\dfrac n k} < \dfrac {n + k} k$, so: $k \ceiling {\dfrac n k} - k < \dfrac {k \paren {n + k} } k - k = n$ and again this contradicts the fact that the subsets $S_i$ contain all $n$ elements of $S$ between them. Either way, there has to be at least $\ceiling {\dfrac n k}$ elements in at least one $S_i \subseteq S$. $\blacksquare$ Source of Name It is known as the Pigeonhole Principle because of the following. Suppose you have $n + 1$ pigeons, but have only $n$ holes for them to stay in. By the Pigeonhole Principle at least one of the holes houses at least $2$ pigeons. It is also known as Dirichlet's Box (or Drawer) Principle, or, as Dirichlet named it, Schubfachprinzip (drawer principle or shelf principle). In Russian and some other languages, it is known as the Dirichlet principle or Dirichlet's principle, which name ambiguously also refers to the minimum principle for harmonic functions. Also see Sources 1971: Gaisi Takeuti and Wilson M. Zaring: Introduction to Axiomatic Set Theory: $\S 10.18$ 1977: Gary Chartrand: Introductory Graph Theory... (previous) ... (next): $\S 2.2$: Isomorphic Graphs: Problem $23$ 2008: David Nelson: The Penguin Dictionary of Mathematics (4th ed.) ... (previous) ...
(next): Entry: Dirichlet's principle 2014: Christopher Clapham and James Nicholson: The Concise Oxford Dictionary of Mathematics (5th ed.) ... (previous) ... (next): Entry: Pigeonhole Principle Weisstein, Eric W. "Dirichlet's Box Principle." From MathWorld--A Wolfram Web Resource. http://mathworld.wolfram.com/DirichletsBoxPrinciple.html
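The bound is easy to sanity-check computationally (a small illustration, not part of the proof; the helper name is ours):

```python
import math

def max_bucket_size(n, k, assign):
    """Distribute items 0..n-1 into k buckets via assign(i) and return the
    largest bucket size; the Pigeonhole Principle guarantees the result
    is at least ceil(n / k), whatever the assignment."""
    counts = [0] * k
    for i in range(n):
        counts[assign(i)] += 1
    return max(counts)

# n + 1 pigeons in n holes: some hole houses at least 2 pigeons
for n, k in [(10, 3), (9, 3), (7, 2), (5, 4)]:
    assert max_bucket_size(n, k, lambda i: i % k) >= math.ceil(n / k)
```

The round-robin assignment `i % k` attains the bound exactly, which is why it is the worst case in the proof above.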
Zero sharp $0^{\#}$ is a $\Sigma_3^1$ real number which cannot be proven to exist in $\text{ZFC}$. Its existence contradicts the Axiom of Constructibility, $V=L$. In fact, its existence is somewhat equivalent to $L$ being completely different from $V$. Definition $0^{\#}$ is defined as the set of all Gödel numberings of first-order formulas $\varphi$ such that $L\models\varphi(\aleph_0,\aleph_1,\ldots,\aleph_n)$ for some $n$. Because of the stability of $\aleph_\omega$, $0^{\#}$ is equivalent to the set of all Gödel numberings of first-order formulas $\varphi$ such that $L_{\aleph_{\omega}}\models\varphi(\aleph_0,\aleph_1,\ldots,\aleph_n)$. This definition implies the existence of Silver indiscernibles. Moreover, it implies: Given any set $X\in L$ which is first-order definable in $L$, $X\in L_{\omega_1}$. This of course implies that $\aleph_1$ is not first-order definable in $L$, because $\aleph_1\not\in L_{\omega_1}$. This is already a disproof of $V=L$ (because $\aleph_1$ is first-order definable). For every $\alpha\in\omega_1^L$, every uncountable cardinal is $\alpha$-iterable, greater than or equal to an $\alpha$-Erdős cardinal, and totally ineffable in $L$. There are $\mathfrak{c}$ many reals which are not constructible (that is, $x\not\in L$). The existence of $0^\#$ is implied by: Chang's Conjecture. The existence of an $\omega_1$-iterable cardinal. The negation of the singular cardinal hypothesis ($\text{SCH}$). The axiom of determinacy ($\text{AD}$). $0^{\#}$ cardinal $0^{\#}$ exists iff there is a nontrivial elementary embedding $j:L\rightarrow L$ (by a theorem of Kunen). The critical point of such an embedding is sometimes called a $0^{\#}$ cardinal, and sometimes called a $j:L\rightarrow L$ cardinal. These cardinals do not coincide with measurable cardinals: while the least measurable cardinal is $\Sigma_1^2$-describable, each of these cardinals is totally indescribable.
Furthermore, the least measurable cardinal $\kappa$ such that $V_\kappa$ satisfies the existence of a measurable cardinal is not a $j:L\rightarrow L$ cardinal, nor is the least measurable cardinal $\kappa$ such that $V_\kappa$ satisfies the existence of such a cardinal, and so on. However, the existence of a measurable cardinal suffices to prove the existence and consistency of a $j:L\rightarrow L$ cardinal. More information to be added here. References Jech, Thomas J. Set Theory (The 3rd Millennium Ed.). Springer, 2003.
What is the relationship between the Flow Coefficient (Cv) and the Discharge Coefficient (Cd)? Good communication between various groups involved with fluid piping systems is critical for the proper design, operation, and determination of cost for many systems in residential, commercial, and industrial applications. It is crucial that the engineer understand and apply equations correctly to prevent costly mistakes in the sizing and selection of equipment, operating within safety limits, and avoiding unnecessary modifications later in plant life. One potential area for costly miscommunication is the use of coefficients for devices that have a fluid flowing through them. Manufacturers of various equipment use different coefficients to characterize the hydraulic performance of their devices, and these differences must be understood when applying them to calculations involving piping systems. In a previous article, the difference between the Resistance Coefficient (K) and Flow Coefficient (Cv) was evaluated and a relationship between the two was derived. The Flow Coefficient (Cv in US units, Kv in SI units) is typically associated with the hydraulic performance of a control valve, but other devices such as safety relief valves are characterized by the Discharge Coefficient (Cd, sometimes designated Kd), which is also associated with orifices and nozzles. They are not numerically equivalent, so what is the relationship between the two? There are various standards in the U.S. and internationally that are used to size and select control valves and relief valves, most notably ANSI/ISA-75.01.01, Flow Equations for Sizing Control Valves (IEC 60534-2-1 equivalent), and API Standard 520 Part 1, Sizing, Selection, and Installation of Pressure-relieving Devices in Refineries. These two standards can be used to derive the relationship between the Flow Coefficient (Cv) and the Discharge Coefficient (Cd) for relief valves.
There are minor differences in the nomenclature used in each standard, so for the purpose of this article, the nomenclature is defined for the equations below along with the engineering units being used. Control Valve Sizing Equations When sizing a control valve, the minimum required flow coefficient is calculated based on the design flow rate and expected pressure drop across the valve, and a valve is selected that has a flow coefficient greater than the calculated value. Here is the general sizing equation for control valves for incompressible fluids according to ANSI/ISA-75.01.01 Equation 1, non-choked turbulent flow:

(1) $Q = N_1 C_v \sqrt{\dfrac{dP}{SG}}$

Where:
Q = volumetric flow rate (gpm, m³/hr, or lpm)
dP = pressure drop across the valve (psi, kPa, or bar)
ρ1 = density of the fluid flowing through the valve (lb/ft³ or kg/m³)
ρ0 = density of water flowing through the valve (lb/ft³ or kg/m³)
SG = ρ1/ρ0 = specific gravity of the fluid (dimensionless)
N1 = constant that depends on the units used for Q and dP (N1 = 1.0 for units of gpm and psi)
There are other factors that may be included in the sizing equation to account for piping geometry, high viscosity, or choked flow conditions. Using U.S. units of gpm and psi, the flow coefficient equation in its simplest form is:

(2) $C_v = Q \sqrt{\dfrac{SG}{dP}}$

Relief Valve Sizing Equations When sizing a relief valve, the minimum required effective area is calculated and a relief valve is selected that has an effective area greater than the calculated value. The sizing equation for relief valves for liquids using U.S.
units according to Equation 28 in the API 520 standard is:

$A = \dfrac{Q}{38\,K_d K_w K_c K_v} \sqrt{\dfrac{SG}{P_1 - P_2}}$

Where:
A = required effective orifice area (in²)
Q = volumetric flow rate (gpm)
SG = specific gravity of the fluid (dimensionless)
P1 = upstream relieving pressure (psig)
P2 = backpressure (psig)
Kd = rated discharge coefficient (dimensionless) = Cd = (actual flow rate) / (ideal flow rate)
Kw = correction factor for backpressure (= 1 if discharging to atmosphere or if backpressure is less than 50% of inlet pressure)
Kc = rupture disc correction factor, if installed (= 1 if none installed)
Kv = viscosity correction factor (= 1 if Re > 10⁵)
38 = all unit conversions compiled into one constant
Assuming no rupture disc is installed, no viscosity correction, and backpressure < 50% inlet pressure, the API 520 equation (using Cd instead of Kd) boils down to:

(4) $A = \dfrac{Q}{38\,C_d} \sqrt{\dfrac{SG}{dP}}$

Rearranging Equation 4 yields:

(5) $38\,C_d\,A = Q \sqrt{\dfrac{SG}{dP}}$

Relationship Between Cv and Cd The right-hand side of Equation 5 is the same as the right-hand side of the flow coefficient equation, Equation 2 above. Therefore, for liquids:

$C_v = 38\,C_d\,A$

A similar evaluation can be done for compressible gases and vapors (using Equation 11a in the ANSI/ISA 75.01.01 standard and Equation 3 in the API Standard 520 Part 1, for example), but the relationship becomes:

$C_v = 27.66\,C_d\,A$

The next question is: "Why are the constants different?" The answer is that the discharge coefficient for a given valve is smaller for a liquid than it is for a gas due to the expansion of the gas as it passes through the valve. For example, one manufacturer shows the discharge coefficient for one of their valves in liquid service is 0.579, but for gas service is 0.801. The ratio of the discharge coefficients is 0.801/0.579 = 1.38. The ratio of the constants in the above equations is 38/27.66 = 1.37, roughly equal. Deriving the Numerical Constant in the Cv to Cd Relationship The next question a good engineer will ask is: "Where does the constant 38 come from?"
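The Cv-to-Cd relationship can be expressed in a few lines of Python (a sketch using the constants derived in this article; the function name is ours):

```python
# Cv = 38 * Cd * A for liquids and Cv = 27.66 * Cd * A for gases/vapors,
# with the effective orifice area A in in^2 (US units).
LIQUID_CONST = 38.0
GAS_CONST = 27.66

def cv_from_cd(cd, area_in2, service="liquid"):
    """Equivalent flow coefficient for a relief valve with effective
    orifice area area_in2 and discharge coefficient cd."""
    k = LIQUID_CONST if service == "liquid" else GAS_CONST
    return k * cd * area_in2
```

Note that the ratio of the two constants, 38/27.66 ≈ 1.37, tracks the cited ratio of gas to liquid discharge coefficients, 0.801/0.579 ≈ 1.38.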
The answer to that requires some unit analysis of the one-dimensional isentropic nozzle flow energy balance equation, which is given in Appendix B of the API 520 standard. Using U.S. units for liquid, the mass flow rate per unit area through a nozzle (mass flux, G), using Equations B.1 and B.6 in the API standard, is:

(8) G = \frac{w}{a} = \sqrt{(2)(g)(144)\rho\, dP}

Where:
G = mass flux in lb/(sec·ft²)
w = theoretical mass flow rate in lb/sec
a = flow area in ft²
g = 32.174 ft/sec²
ρ = fluid density in lb/ft³
dP = pressure drop across the relief, in lb/in²
144 = conversion between in² and ft²

Disregarding the unit conversion needed for the moment, the mass flow rate is related to the volumetric flow rate by:

(9) w = \rho Q

Therefore, the mass flux is:

(10) G = \frac{\rho Q}{a}

Solving for area (a) and taking the fluid density into the square root:

(11) a = \frac{\rho Q}{\sqrt{(2)(g)(144)\rho\, dP}} = \frac{Q\sqrt{\rho^2}}{\sqrt{(2)(g)(144)\rho\, dP}} = Q\sqrt{\frac{\rho}{(2)(g)(144)\,dP}}

The density (ρ) in Equation 11 is the fluid density, but the valve sizing equations use the specific gravity. Specific gravity is:

(12) SG = \frac{\rho}{\rho_0}

where ρ₀ = density of water at 60 °F = 62.37 lb/ft³. Taking this relationship into the area equation yields:

(13) a = Q\sqrt{\frac{\Big(62.37\frac{lb}{ft^3}\Big)(SG)}{(2)(g)(144)\,dP}}

Before we throw in all the units, we need the area in square inches, not square feet, so:

(15) A = a \bigg(144\frac{in^2}{ft^2}\bigg) = \bigg(144\frac{in^2}{ft^2}\bigg)Q\sqrt{\frac{\Big(62.37\frac{lb}{ft^3}\Big)(SG)}{(2)(g)\Big(144\frac{in^2}{ft^2}\Big)dP}}

Now let's put in all the units:

(16) A = \bigg|\frac{144\,in^2}{ft^2}\bigg|\frac{Q\,gal}{min}\bigg|\frac{min}{60\,sec}\bigg|\frac{ft^3}{7.48055\,gal}\bigg|\sqrt{\frac{62.37\,lb}{ft^3}\bigg|\frac{1}{2}\bigg|\frac{sec^2}{32.174\,ft}\bigg|\frac{ft^2}{144\,in^2}\bigg|\frac{in^2}{dP\,lb}\bigg|\frac{(SG)}{1}}

The discharge coefficient comes into the equation above because the flow rate, Q, is the theoretical flow rate assuming incompressible isentropic flow.
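Collecting all of the conversion factors in the unit-analysis equation above should reproduce the constant 38; a quick numerical check (a sketch of that arithmetic):

```python
import math

# gal/min -> ft^3/sec conversion (60 sec/min, 7.48055 gal/ft^3),
# water density 62.37 lb/ft^3, g = 32.174 ft/sec^2,
# and the 144 in^2/ft^2 factors from the unit string:
prefactor = (144.0 / (60.0 * 7.48055)) * math.sqrt(62.37 / (2.0 * 32.174 * 144.0))
constant = 1.0 / prefactor
print(round(constant, 2))  # ~37.99, matching the 38 in the API equation
```

The small difference from exactly 38 is just rounding in the compiled constant.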
The discharge coefficient is the ratio of the actual flow to the theoretical flow:

(17) C_d = \frac{Q_{actual}}{Q_{theoretical}}

Rearranged:

(18) Q_{theoretical} = \frac{Q_{actual}}{C_d}

Substituting into the area equation above (Equation 16) yields the relief valve sizing equation:

(19) A = \frac{Q}{38\,C_d}\sqrt{\frac{SG}{dP}}

Summary

Over the course of history, the scientific and engineering study of fluid flow in piping systems has resulted in the development of different coefficients to characterize the hydraulic performance of various devices that obstruct fluid flow. Because engineers view the hydraulic performance of these devices differently, mistakes can be made if the proper concepts and equations are not applied correctly. Such mistakes in sizing and selection can be costly: choosing the wrong equipment can mean the difference between a system having sufficient pressure-relieving capacity and the system rupturing during a high-pressure relief incident.
Consider a $C^k$, $k\ge 2$, Lorentzian manifold $(M,g)$ and let $\Box$ be the usual wave operator $\nabla^a\nabla_a$. Given $p\in M$, $s\in\Bbb R,$ and $v\in T_pM$, can we find a neighborhood $U$ of $p$ and $u\in C^k(U)$ such that $\Box u=0$, $u(p)=s$ and $\mathrm{grad}\, u(p)=v$? The tog is a measure of thermal resistance of a unit area, also known as thermal insulance. It is commonly used in the textile industry and often seen quoted on, for example, duvets and carpet underlay. The Shirley Institute in Manchester, England developed the tog as an easy-to-follow alternative to the SI unit of m²K/W. The name comes from the informal word "togs" for clothing, which itself was probably derived from the word toga, a Roman garment. The basic unit of insulation coefficient is the RSI (1 m²K/W); 1 tog = 0.1 RSI. There is also a clo clothing unit equivalent to 0.155 RSI or 1.55 tog... The stone or stone weight (abbreviation: st.) is an English and imperial unit of mass now equal to 14 pounds (6.35029318 kg). England and other Germanic-speaking countries of northern Europe formerly used various standardised "stones" for trade, with their values ranging from about 5 to 40 local pounds (roughly 3 to 15 kg) depending on the location and objects weighed. The United Kingdom's imperial system adopted the wool stone of 14 pounds in 1835. With the advent of metrication, Europe's various "stones" were superseded by or adapted to the kilogram from the mid-19th century on. The stone continues... Can you tell me why this question deserves to be negative? I tried to find faults and I couldn't: I did some research, I did all the calculations I could, and I think it is clear enough. I had deleted it and was going to abandon the site but then I decided to learn what is wrong and see if I ca... I am a bit confused about classical physics's angular momentum. For the orbital motion of a point mass: if we pick a new coordinate (that doesn't move w.r.t.
the old coordinate), angular momentum should still be conserved, right? (I calculated a quite absurd result: it is no longer conserved (there is an additional term that varies with time) in the new coordinates: $\vec {L'}=\vec{r'} \times \vec{p'}$ $=(\vec{R}+\vec{r}) \times \vec{p}$ $=\vec{R} \times \vec{p} + \vec L$ where the 1st term varies with time (where $\vec{R}$ is the shift of coordinates; $\vec{R}$ is constant, and $\vec{p}$ is sort of rotating). Would anyone be kind enough to shed some light on this for me? From what we discussed, your literary taste seems to be classical/conventional in nature. That book is inherently unconventional in nature; it's not supposed to be read as a novel, it's supposed to be read as an encyclopedia @BalarkaSen Dare I say it, my literary taste continues to change as I have kept on reading :-) One book that I finished reading today, The Sense of An Ending (different from the movie with the same title) is far from anything I would've been able to read, even, two years ago, but I absolutely loved it. I've just started watching the Fall - it seems good so far (after 1 episode)... I'm with @JohnRennie on the Sherlock Holmes books and would add that the most recent TV episodes were appalling. I've been told to read Agatha Christie but haven't got round to it yet. Is it possible to make a time machine ever? Please give an easy answer, a simple one. A simple answer, but a possibly wrong one, is to say that a time machine is not possible. Currently, we don't have either the technology to build one, nor a definite, proven (or generally accepted) idea of how we could build one. — Countto10 @vzn if it's a romantic novel, which it looks like, it's probably not for me - I'm getting to be more and more fussy about books and have a ridiculously long list to read as it is.
I'm going to counter that one by suggesting Ann Leckie's Ancillary Justice series Although if you like epic fantasy, Malazan book of the Fallen is fantastic @Mithrandir24601 lol it has some love story but its written by a guy so cant be a romantic novel... besides what decent stories dont involve love interests anyway :P ... was just reading his blog, they are gonna do a movie of one of his books with kate winslet, cant beat that right? :P variety.com/2016/film/news/… @vzn "he falls in love with Daley Cross, an angelic voice in need of a song." I think that counts :P It's not that I don't like it, it's just that authors very rarely do anywhere near a decent job of it. If it's a major part of the plot, it's often either eyeroll worthy and cringy or boring and predictable with OK writing. A notable exception is Stephen Erikson @vzn depends exactly what you mean by 'love story component', but often yeah... It's not always so bad in sci-fi and fantasy where it's not in the focus so much and just evolves in a reasonable, if predictable way with the storyline, although it depends on what you read (e.g. Brent Weeks, Brandon Sanderson). Of course Patrick Rothfuss completely inverts this trope :) and Lev Grossman is a study on how to do character development and totally destroys typical romance plots @Slereah The idea is to pick some spacelike hypersurface $\Sigma$ containing $p$. Now specifying $u(p)$ is trivial because the wave equation is invariant under constant perturbations. So that's whatever. But I can specify $\nabla u(p)|\Sigma$ by specifying $u(\cdot, 0)$ and differentiate along the surface. For the Cauchy theorems I can also specify $u_t(\cdot,0)$. 
Now take the neighborhood to be $\approx (-\epsilon,\epsilon)\times\Sigma$ and then split the metric like $-dt^2+h$ Do forwards and backwards Cauchy solutions, then check that the derivatives match on the interface $\{0\}\times\Sigma$ Why is it that you can only cool down a substance so far before the energy goes into changing its state? I assume it has something to do with the distance between molecules, meaning that intermolecular interactions have less energy in them than making the distance between them even smaller, but why does it create these bonds instead of making the distance smaller / just reducing the temperature more? Thanks @CooperCape, but this leads me to another question I forgot ages ago: If you have an electron cloud, is the electric field from that electron just some sort of averaged field from some centre of amplitude, or is it a superposition of fields each coming from some point in the cloud?
I have a personal problem I want to solve. I have a system of linear inequalities with 97 unknowns and 150000 inequalities. I think the formal notation of the problem should be something like this \begin{equation} \begin{bmatrix} x_{00000101} & \cdots & x_{00000197}\\ \vdots & \ddots &\vdots\\ x_{15000001} & \cdots & x_{15000097}\\ \end{bmatrix} \times \left[ \begin{array}{c} y_1 \\ \vdots \\ y_{97} \end{array} \right] = \left[ \begin{array}{c} z_1 \\ \vdots \\ z_{150000} \end{array} \right] \end{equation} \begin{equation} \forall i \in \{1,\dots,150000\},\ z_i > 0 \end{equation} We also know that every entry $x$ of the leftmost matrix satisfies $x \in \left\{-1,0,1 \right\}$ (i.e. coefficients can only take $-1,0,1$ as values), and there are exactly five $1$s and five $-1$s in every row of the leftmost matrix, so each row sums up to $0$. What I want to do is to determine if I can order the $y_i$'s, and if so, I want to find an algorithm to order them. I am considering using a genetic algorithm and trying arbitrary values for every $y_i$, but I guess that could be computationally expensive. Is there a way I can deterministically solve this problem? Some background information: every $y_i$ represents a player in a game, and I have data from 150000 matches of 5 players vs 5 players. I am assuming a team's ability to win is the sum of the abilities of its players. So I am wondering if I can order the players from best to worst.
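One deterministic approach worth sketching (an illustration with a tiny invented data set, not the asker's 150000 matches) is a perceptron-style update: whenever a constraint row r has r·y ≤ 0, replace y by y + r. If the system is strictly feasible (the matches are linearly separable by some rating vector), this terminates with a y satisfying every constraint:

```python
def find_ratings(rows, n_players, max_rounds=10000):
    """Perceptron-style search for y with row . y > 0 for every row.
    Each row holds +1 for the winning team's players, -1 for the losers."""
    y = [0.0] * n_players
    for _ in range(max_rounds):
        violated = False
        for row in rows:
            if sum(r * v for r, v in zip(row, y)) <= 0:
                y = [v + r for v, r in zip(y, row)]
                violated = True
        if not violated:
            return y  # every match constraint is now strictly satisfied
    return None  # gave up: the system may be infeasible

# Toy invented data: 4 players, 2-vs-2 matches (the real problem is 5-vs-5)
rows = [
    [+1, +1, -1, -1],  # players {0,1} beat {2,3}
    [+1, -1, +1, -1],  # players {0,2} beat {1,3}
]
ratings = find_ratings(rows, 4)
print(ratings)
```

If upsets make the system infeasible (which is likely with real match data), a least-squares or logistic-regression fit of the same ±1 design matrix gives an ordering that merely minimizes violations instead.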
Introduction to Area

If you needed to cover the floor of your house from wall to wall, you would share the area of the floor with the carpet store, and they would tell you how much carpet you would need and what it would cost. If you wanted to varnish the dining room table, you would need to know its area so that you can estimate the cost. These are just a few everyday examples of why it is important to know and calculate the areas of closed shapes.

The Big Idea: The two dimensions of area

What is area? The area of a closed two-dimensional figure (like a circle, triangle, rectangle, square, rhombus or parallelogram) is the amount of space enclosed by the outer boundary of the figure. This outer boundary is referred to as the perimeter, and it is a one-dimensional measure of length. As soon as we try to calculate area, however, we move into two dimensions of measurement. In the example of the varnishing of the table, if you wanted to find out the length of the wooden edge of the table, you would add up all four sides of the rectangular table to get the length of the perimeter. However, when it comes to area, take a look at the graph illustration below and try to understand why it is that we multiply the sides in order to get the desired result: The following image is another way of visualising the basic difference between area and perimeter. Remember: area represents the space enclosed by a figure and is two-dimensional, while the perimeter is the length of the edge and is a one-dimensional quantity.

How is it important?

Area of a rectangle

The only difference between a square and a rectangle is that a rectangle has both pairs of opposite sides equal, instead of all four sides being equal.
So, if we represent the length of a rectangle by the letter ‘\({l}\)’, and the breadth by the letter ‘\({b}\)’, then \(\begin{align}{\text{Area of rectangle}} = {\text{length}} \times {\text{breadth}} = l \times b\end{align}\) If you have a rectangular wall in your house which needs painting, then you can calculate the area to be painted by seeing what the length and the height of the wall are. For all four walls of a room, you calculate the areas of the two pairs of opposite rectangular walls and add them together. If you need the ceiling painted as well, you need to calculate the area of the rectangular ceiling also.

Area of a square

A square is a four-sided figure in which all four sides are equal to each other, and all four angles are right angles. If we represent each side of a square by the letter \({a}\), then \(\begin{align}{\text{Area of square}} = a \times a = {a^2}\end{align}\) If we continue with the example of the sandwich and consider the original square-shaped bread slices (before you cut them into two triangles), then the area would be calculated as \(\begin{align}{\text{Area of each square sandwich}} = 5 \times 5 = {25}\,\rm{cm^2}\end{align}\)

Area of a triangle

A triangle is a closed space enclosed by three sides. If we keep any side parallel to the bottom of the page, then that side is called the base of the triangle and its length is represented by the letter \({b}\). The perpendicular line joining the vertex opposite this base to a point on this base is called the height or altitude of the triangle and is usually represented by the letter \({h}\). The formula for the area of a triangle is then written as: \(\begin{align}{\text{Area of triangle}} = \frac{1}{2} \times b \times h\end{align}\) Another interesting way to understand the origin of this formula is to picture it as half the area enclosed by a rectangle. To help visualize it, look at the image below: If you want further clarification on this concept, take a closer look at the next section.
A simple tip

Did you notice that the area of the triangular sandwich was exactly half of the area of the square sandwich? That is because the ‘\({b}\)’ and ‘\({h}\)’ of the triangle were both equal to the ‘\({a}\)’ of the square. Have a look at this image to get a better idea.
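The formulas in this section can be checked with a few lines of code (a sketch; the 5 cm figure is the square sandwich from the text):

```python
def rectangle_area(length, breadth):
    return length * breadth

def square_area(side):
    return side * side

def triangle_area(base, height):
    return 0.5 * base * height

# The 5 cm square sandwich from the text...
assert square_area(5) == 25          # cm^2
# ...cut along a diagonal gives a triangle with b = h = 5:
assert triangle_area(5, 5) == 12.5   # exactly half of the square's area
```

The last assertion is the "simple tip" in code: when b and h equal a, the triangle formula is exactly half the square formula.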
What is the difference in energy between the two levels responsible for the ultraviolet emission line of the magnesium atom at 285.2 nm?

Solution:

We will first figure out the frequency, and then use that value to calculate the difference in energy. The first equation we will use is:

v\,=\,\frac{c}{\lambda}

(where v is the frequency, c is the speed of light, and \lambda is the wavelength). The question states that the line of emission for the magnesium atom is at 285.2 nm; written in meters, this is 2.852\times 10^{-7} m. Let us substitute the values into our equation to figure out the frequency (the speed of light is equal to 2.998 \times 10^8 m/s):

v\,=\,\frac{2.998 \times 10^8\ \text{m/s}}{2.852\times 10^{-7}\ \text{m}}

v\,=\,1.05 \times 10^{15} /s

We will now use the following formula to figure out the energy:

E\,=\,hv

(where E is the energy, h is Planck's constant, and v is the frequency).

E\,=\,(6.626\times 10^{-34}\text{J}\cdot \text{s})(1.05 \times 10^{15}\text{/s})

E\,=\,6.96\times 10^{-19} J

Final Answer: E\,=\,6.96\times 10^{-19} J
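The whole calculation can be reproduced in a few lines (a sketch using the same constants as the worked solution):

```python
c = 2.998e8            # speed of light in m/s
h = 6.626e-34          # Planck's constant in J*s
wavelength = 285.2e-9  # 285.2 nm expressed in meters

frequency = c / wavelength  # about 1.05e15 per second
energy = h * frequency      # about 6.96e-19 J
print(f"frequency = {frequency:.3e} /s, energy = {energy:.3e} J")
```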
A particle moves along the x-axis so that at time t its position is given by $x(t) = t^3-6t^2+9t+11$. During what time intervals is the particle moving to the left? So I know that we need the velocity for that, and we can get that by taking the derivative, but I don't know what to do after that. The velocity would then be $v(t) = 3t^2-12t+9$; how could I find the intervals? Fix $c\in\{0,1,\dots\}$, let $K\geq c$ be an integer, and define $z_K=K^{-\alpha}$ for some $\alpha\in(0,2)$. I believe I have numerically discovered that$$\sum_{n=0}^{K-c}\binom{K}{n}\binom{K}{n+c}z_K^{n+c/2} \sim \sum_{n=0}^K \binom{K}{n}^2 z_K^n \quad \text{ as } K\to\infty$$but cannot ... So, the whole discussion is about some polynomial $p(A)$, for $A$ an $n\times n$ matrix with entries in $\mathbf{C}$, and eigenvalues $\lambda_1,\ldots, \lambda_k$. Anyways, part (a) is talking about proving that $p(\lambda_1),\ldots, p(\lambda_k)$ are eigenvalues of $p(A)$. That's basically routine computation. No problem there. The next bit is to compute the dimension of the eigenspaces $E(p(A), p(\lambda_i))$. Seems like this bit follows from the same argument. An eigenvector for $A$ is an eigenvector for $p(A)$, so the rest seems to follow. Finally, the last part is to find the characteristic polynomial of $p(A)$. I guess this means in terms of the characteristic polynomial of $A$. Well, we do know what the eigenvalues are... The so-called Spectral Mapping Theorem tells us that the eigenvalues of $p(A)$ are exactly the $p(\lambda_i)$. Usually, by the time you start talking about complex numbers you consider the real numbers as a subset of them, since a and b are real in a + bi. But you could define it that way and call it a "standard form" like ax + by = c for linear equations :-) @Riker "a + bi where a and b are integers" Complex numbers a + bi where a and b are integers are called Gaussian integers.
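For the kinematics question at the start of this exchange, the standard sign analysis can be sketched in code: v(t) = 3t² − 12t + 9 = 3(t − 1)(t − 3), which is negative exactly for 1 < t < 3, so the particle moves left on that interval (a sketch, not an answer from the chat itself):

```python
def v(t):
    """Velocity: derivative of x(t) = t^3 - 6t^2 + 9t + 11."""
    return 3*t**2 - 12*t + 9

# Quadratic formula: t = (12 +/- sqrt(144 - 108)) / 6, i.e. t = 1 and t = 3
assert v(1) == 0 and v(3) == 0
# The parabola opens upward, so v < 0 strictly between the roots:
assert v(2) < 0               # moving left on (1, 3)
assert v(0) > 0 and v(4) > 0  # moving right outside [1, 3]
```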
I was wondering if it is easier to factor in a non-UFD than it is to factor in a UFD. I can come up with arguments for that, but I also have arguments in the opposite direction. For instance: it should be easier to factor when there are more possibilities (multiple factorizations in a non-UFD... Does anyone know if $T: V \to R^n$ is an inner product space isomorphism if $T(v) = (v)_S$, where $S$ is a basis for $V$? My book isn't saying so explicitly, but there was a theorem saying that an inner product isomorphism exists, and another theorem kind of suggesting that it should work. @TobiasKildetoft Sorry, I meant that they should be equal (accidentally sent this before writing my answer. Writing it now) Isn't there this theorem saying that if $v,w \in V$ ($V$ being an inner product space), then $||v|| = ||(v)_S||$? (where the left norm is defined as the norm in $V$ and the right norm is the euclidean norm) I thought that this would somehow result from isomorphism @AlessandroCodenotti Actually, such a $f$ in fact needs to be surjective. Take any $y \in Y$; the maximal ideal of $k[Y]$ corresponding to that is $(Y_1 - y_1, \cdots, Y_n - y_n)$. The ideal corresponding to the subvariety $f^{-1}(y) \subset X$ in $k[X]$ is then nothing but $(f^* Y_1 - y_1, \cdots, f^* Y_n - y_n)$. If this is empty, weak Nullstellensatz kicks in to say that there are $g_1, \cdots, g_n \in k[X]$ such that $\sum_i (f^* Y_i - y_i)g_i = 1$. Well, better to say that $(f^* Y_1 - y_1, \cdots, f^* Y_n - y_n)$ is the trivial ideal I guess. Hmm, I'm stuck again O(n) acts transitively on S^(n-1) with stabilizer at a point O(n-1) For any transitive G action on a set X with stabilizer H, G/H $\cong$ X set theoretically. In this case, as the action is a smooth action by a Lie group, you can prove this set-theoretic bijection gives a diffeomorphism
This is one of the accompanying workshops of the 50 Years Faculty of Mathematics Anniversary Conference. The conference takes place at Bielefeld University. You can find detailed directions here. All talks will be held in T2-213. If you are planning your trip to Germany, you might also be interested in the conference Buildings 2019, which will take place in Magdeburg in the following week. This conference is supported by the DFG priority programme 2026 Geometry at Infinity.

Tuesday, 24.09
10:00 - 11:00 Rigidity of the Torelli subgroup in Out(F_n) (Camille Horbez)
11:30 - 12:30 Maximal direct products of free groups in Out(F_N) (Ric Wade)
14:00 - 15:00 Typical Trees: An Out(F_r) Excursion (Catherine Pfaff)
15:45 - 16:45 Problem session

Wednesday, 25.09
10:00 - 11:00 The geometry of hyperbolic free-by-cyclic groups (Yael Algom-Kfir)
11:30 - 12:30 The minimally displaced set of an irreducible automorphism is locally finite (Dionysios Syrigos)
14:00 - 15:00 Homotopy type of the free factor complex (Radhika Gupta)
15:45 - 16:45 Surface subgroups of Out(F_n) via combination of Veech subgroups (Funda Gultepe)
19:00 - 00:00 Conference Dinner

Thursday, 26.09
10:00 - 11:00 Aut(F_n) has property (T) (Marek Kaluba)
11:30 - 12:30 Graph products, quasi-median graphs, and automorphisms (Anthony Genevois)

For $\phi$ an outer automorphism of $F_n$, the corresponding free-by-cyclic group is the group with the following presentation: $G_\phi = \langle x_1, \dots, x_n, t \mid t x_i t^{-1} = \phi(x_i) \rangle$. There is a satisfying correspondence between properties of $\phi$ and properties of the group $G_\phi$. For example, $\phi$ is atoroidal if and only if $G_\phi$ is hyperbolic (Brinkman). In the hyperbolic case, the Gromov boundary of $G_\phi$ contains a cut point if and only if some power of $\phi$ is reducible (Bowditch and Kapovich-Kleiner).
We restrict to the case that $G_\phi$ is hyperbolic and $\partial G_\phi$ contains no cut points; then $\partial G_\phi$ is homeomorphic to the Menger curve (Kapovich-Kleiner). Thus, from the point of view of the topology of their boundary, $G_\phi, G_\psi$ are indistinguishable for $\phi \in Out(F_n)$ and $\psi \in Out(F_m)$. However, Paulin showed that the conformal structure of a hyperbolic group is a complete quasi-isometric invariant. Is it possible that all groups of this form are quasi-isometric? In this talk we shall discuss some geometric aspects of $G_\phi$ that affect the conformal structure of $\partial G_\phi$ and ultimately, its dimension. This is joint work with Arnaud Hilion, Emily Stark and Mladen Bestvina. This talk is dedicated to graph products of groups, a common generalisation of free products and right-angled Artin groups. I will explain how quasi-median graphs appear as natural geometric models for graph products, and how they can be used in order to study their automorphisms. Using an amalgamation of Veech subgroups of the mapping class group à la Leininger-Reid, we construct new examples of surface subgroups of \(\mathrm{Out}(F_n)\) whose elements are either conjugate to elements in the Veech group, or except for one accidental parabolic, all fully irreducible. I will talk about this construction, and on the way describe Veech subgroups of \(\mathrm{Out}(F_n)\). This is part of a joint work with Binbin Xu. The mapping class group of a surface acts on the curve complex which is known to be homotopy equivalent to a wedge of spheres. In this talk, I will define the 'free factor complex', an analog of the curve complex, on which the group of outer automorphisms of a free group acts by isometries. This complex has many similarities with the curve complex. I will present the result (joint with Benjamin Brück) that the free factor complex is also homotopy equivalent to a wedge of spheres.
We will also look at higher connectivity results for the simplicial boundary of Outer space. I will present a joint work with Sebastian Hensel and Ric Wade. We prove that when n is at least 4, every injective morphism from \(IA_n\) (outer automorphisms of a free group \(F_n\) acting trivially on homology) to \(Out(F_n)\) differs from the inclusion by a conjugation. This applies more generally to a wide collection of subgroups of \(Out(F_n)\) that we call twist-rich, which include all terms in the Andreadakis-Johnson filtration and all subgroups of \(Out(F_n)\) that contain a power of every Dehn twist. This extends previous works on commensurations of \(Out(F_n)\) and its subgroups. I will sketch the recent proof (arXiv:1812.03456) that the group of automorphisms of the free group on \(n \geq 6\) generators has Kazhdan's property (T). The proof follows by estimating the spectral gap of \(\Delta_n\), the group Laplace operator, via a sum-of-squares decomposition in the real group algebra. We use the action of the "Weyl" group to simplify the combinatorics of computing \(\Delta_n^2\) and reduce the problem of finding sum-of-squares decompositions (for all \(n \geq 6\)) to a single computation for \(n=5\). The final computation is just small enough to be performed using computer software. As a side-result we produce asymptotically optimal lower estimates on Kazhdan constants for both \(\operatorname{SAut}(F_n)\) and \(\operatorname{SL}_n(\mathbb{Z})\). In the latter case these considerably narrow the gap between the upper and lower bounds. This is joint work with Dawid Kielak and Piotr W. Nowak. Random walks are not new to geometric group theory (see, for example, work of Furstenberg, Kaimonovich, Masur). However, following independent proofs by Maher and Rivin that pseudo-Anosovs are generic within mapping class groups, and then new techniques developed by Maher-Tiozzo, Sisto, and others, the field has seen in the past decade a veritable explosion of results.
In a two-paper series, we answer with fine detail a question posed by Handel-Mosher asking about invariants of generic outer automorphisms of free groups, and then a question posed by Bestvina as to properties of R-trees of full hitting measure in the boundary of Culler-Vogtmann outer space. This is joint work with Ilya Kapovich, Joseph Maher, and Samuel J. Taylor. Let \(G\) be a group which splits as \(G = F_n * G_1 *...*G_k\), where every \(G_i\) is freely indecomposable and not isomorphic to the group of integers. Guirardel and Levitt generalised the Culler-Vogtmann Outer space of a free group by introducing an Outer space on which \(Out(G)\) acts. Francaviglia and Martino introduced the Lipschitz metric for the Culler-Vogtmann space and later for the general Outer space. For any automorphism in \(Out(G)\), we can define the displacement function with respect to the Lipschitz metric and the corresponding level sets. Recently, the same authors proved that for every L, the L-level set (the set of points of the outer space which are displaced by at most L) is connected, whenever it is non-empty. In the special case of an irreducible automorphism, they proved that the Min-set (the set of points which are minimally displaced) is always non-empty and it coincides with the set of the points that admit train track representatives for the automorphism. In a joint paper with Francaviglia and Martino, we prove that the Min-set of a (hyperbolic) irreducible automorphism is (uniformly) locally finite, even if the relative outer space is locally infinite. Compared to maximal rank free abelian groups, maximal direct products of free groups in \(Out(F_N)\) are remarkably rigid. I will talk about some joint work with Martin Bridson, where we show that every subgroup of \(Out(F_N)\) isomorphic to a direct product of 2N-4 free groups fixes a splitting called an N-2 rose. This rose is canonical, so also gives us information about the centralizers and normalizers of such subgroups.
As an application, we show that every endomorphism of \(Out(F_N)\) sends Nielsen automorphisms to powers of Nielsen automorphisms.
@Mathphile I found no prime of the form $$n^{n+1}+(n+1)^{n+2}$$ for $n>392$ yet, and neither a reason why the expression cannot be prime for odd $n$, although there are far more even cases without a known factor than odd cases. @TheSimpliFire That's what I'm thinking about. I had some "vague feeling" that there must be some elementary proof, so I decided to find it, and then I found it. It is really "too elementary", but I like surprises, if they're good. It is in fact difficult; I did not understand all the details either. But the ECM method is analogous to the $p-1$ method, which works well when there is a factor $p$ such that $p-1$ is smooth (has only small prime factors). Brocard's problem is a problem in mathematics that asks to find integer values of $n$ and $m$ for which $n!+1=m^2$, where $n!$ is the factorial. It was posed by Henri Brocard in a pair of articles in 1876 and 1885, and independently in 1913 by Srinivasa Ramanujan. Brown numbers: Pairs of the numbers $(n, m)$ that solve Brocard's problem are called Brown numbers. There are only three known pairs of Brown numbers: (4,5), (5,11... $\textbf{Corollary.}$ No solutions to Brocard's problem (with $n>10$) occur when $n$ satisfies either \begin{equation}n!=[2\cdot 5^{2^k}-1\pmod{10^k}]^2-1\end{equation} or \begin{equation}n!=[2\cdot 16^{5^k}-1\pmod{10^k}]^2-1\end{equation} for a positive integer $k$. These are the OEIS sequences A224473 and A224474. Proof: First, note that since $(10^k\pm1)^2-1\equiv((-1)^k\pm1)^2-1\equiv1\pm2(-1)^k\not\equiv0\pmod{11}$, $m\ne 10^k\pm1$ for $n>10$.
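The known Brown pairs quoted above can be reproduced with a brute-force search (a sketch; the search bound of 100 is arbitrary):

```python
import math

def brown_pairs(n_max):
    """Search n! + 1 = m^2 for n <= n_max using exact integer square roots."""
    pairs = []
    for n in range(1, n_max + 1):
        target = math.factorial(n) + 1
        m = math.isqrt(target)
        if m * m == target:
            pairs.append((n, m))
    return pairs

print(brown_pairs(100))  # [(4, 5), (5, 11), (7, 71)]
```

Using `math.isqrt` keeps the check exact for arbitrarily large integers, avoiding the floating-point errors a `math.sqrt`-based test would hit once n! overflows a double.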
If $k$ denotes the number of trailing zeros of $n!$, Legendre's formula implies that \begin{equation}k=\min\left\{\sum_{i=1}^\infty\left\lfloor\frac n{2^i}\right\rfloor,\sum_{i=1}^\infty\left\lfloor\frac n{5^i}\right\rfloor\right\}=\sum_{i=1}^\infty\left\lfloor\frac n{5^i}\right\rfloor\end{equation} where $\lfloor\cdot\rfloor$ denotes the floor function. The upper limit can be replaced by $\lfloor\log_5n\rfloor$ since for $i>\lfloor\log_5n\rfloor$, $\left\lfloor\frac n{5^i}\right\rfloor=0$. An upper bound can be found using geometric series and the fact that $\lfloor x\rfloor\le x$: \begin{equation}k=\sum_{i=1}^{\lfloor\log_5n\rfloor}\left\lfloor\frac n{5^i}\right\rfloor\le\sum_{i=1}^{\lfloor\log_5n\rfloor}\frac n{5^i}=\frac n4\left(1-\frac1{5^{\lfloor\log_5n\rfloor}}\right)<\frac n4.\end{equation} Thus if $n!$ has $k$ trailing zeros, then $n>4k$. Since $m=2\cdot5^{2^k}-1\pmod{10^k}$ or $m=2\cdot16^{5^k}-1\pmod{10^k}$ has at most $k$ digits, $m^2-1$ has at most $2k$ digits under the conditions in the Corollary. The Corollary therefore follows if $n!$ has more than $2k$ digits for $n>10$. From equation $(4)$, $n!$ has at least the same number of digits as $(4k)!$.
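As a sanity check, Legendre's count of trailing zeros is easy to compute directly. The following Python sketch (not part of the proof) verifies the formula and the bound $k < n/4$ for small $n$:

```python
# Sketch (not part of the proof): Legendre's formula for the number k of
# trailing zeros of n!, which the argument above bounds by k < n/4.
import math

def trailing_zeros_factorial(n: int) -> int:
    """k = sum over i of floor(n / 5^i), the exponent of 5 (and of 10) in n!."""
    k, power = 0, 5
    while power <= n:
        k += n // power
        power *= 5
    return k

# Spot-check against a direct computation, and the bound k < n/4.
for n in range(1, 200):
    k = trailing_zeros_factorial(n)
    digits = str(math.factorial(n))
    assert len(digits) - len(digits.rstrip("0")) == k
    assert k < n / 4
```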
Stirling's formula implies that \begin{equation}(4k)!>\frac{\sqrt{2\pi}\left(4k\right)^{4k+\frac{1}{2}}}{e^{4k}}\end{equation} Since the number of digits of an integer $t$ is $1+\lfloor\log t\rfloor$ where $\log$ denotes the logarithm in base $10$, the number of digits of $n!$ is at least \begin{equation}1+\left\lfloor\log\left(\frac{\sqrt{2\pi}\left(4k\right)^{4k+\frac{1}{2}}}{e^{4k}}\right)\right\rfloor\ge\log\left(\frac{\sqrt{2\pi}\left(4k\right)^{4k+\frac{1}{2}}}{e^{4k}}\right).\end{equation} Therefore it suffices to show that for $k\ge2$ (since $n>10$ and $k<n/4$), \begin{equation}\log\left(\frac{\sqrt{2\pi}\left(4k\right)^{4k+\frac{1}{2}}}{e^{4k}}\right)>2k\iff8\pi k\left(\frac{4k}e\right)^{8k}>10^{4k}\end{equation} which holds if and only if \begin{equation}\left(\frac{10}{\left(\frac{4k}e\right)^2}\right)^{4k}<8\pi k\iff k^2(8\pi k)^{\frac1{4k}}>\frac58e^2.\end{equation} Now consider the function $f(x)=x^2(8\pi x)^{\frac1{4x}}$ over the domain $\Bbb R^+$, which is clearly positive there. Then after considerable algebra it is found that \begin{align*}f'(x)&=2x(8\pi x)^{\frac1{4x}}+\frac14(8\pi x)^{\frac1{4x}}(1-\ln(8\pi x))\\\implies f'(x)&=\frac{2f(x)}{x^2}\left(x+\frac18-\frac18\ln(8\pi x)\right)>0\end{align*} for $x>0$, as $\min\left\{x+\frac18-\frac18\ln(8\pi x)\right\}=\frac14-\frac18\ln\pi>0$ on the domain (the minimum is attained at $x=\frac18$). Thus $f$ is monotonically increasing in $(0,\infty)$, and since $2^2(8\pi\cdot2)^{\frac18}>\frac58e^2$, the inequality in equation $(8)$ holds. This means that the number of digits of $n!$ exceeds $2k$, proving the Corollary. $\square$ We get $n^n+3\equiv 0\pmod 4$ for odd $n$, so we can see from here that it is even (or, we could have used @TheSimpliFire's one-or-two-step method to derive this without any contradiction - which is better). @TheSimpliFire Hey! With $4\pmod {10}$ and $0\pmod 4$, this is the same as $10m_1+4$ and $4m_2$. If we set them equal to each other, we have that $5m_1=2(m_2-1)$, which means $m_1$ is even.
We get $4\pmod {20}$ now :P Yet again a conjecture! Motivated by Catalan's conjecture and a recent question of mine, I conjecture that for distinct, positive integers $a,b$, the only solution to the equation $$a^b-b^a=a+b\tag1$$ is $(a,b)=(2,5)$. I anticipate that there will be far fewer solutions for incr...
Loss Layers¶ class MultinomialLogisticLossLayer¶ The multinomial logistic loss is defined as \(\ell = -w_g\log(x_g)\), where \(x_1,\ldots,x_C\) are probabilities for each of the \(C\) classes conditioned on the input data, \(g\) is the corresponding ground-truth category, and \(w_g\) is the weight for the \(g\)-th class (default 1, see below). The conditional probability blob should be of the shape \((W,H,C,N)\), and the ground-truth blob should be of the shape \((W,H,1,N)\). Typically there is only one label for each instance, so \(W=H=1\). The ground-truth should be a zero-based index in the range \(0,\ldots,C-1\). Should be a vector containing two symbols. The first one specifies the name for the conditional probability input blob, and the second one specifies the name for the ground-truth input blob. weights¶ This could be used to specify weights for different classes. The following values are allowed: Empty array (default). This means each category is equally weighted. A 3D tensor of the shape (width, height, channels). Here the (w,h,c) entry indicates the weight for category c at location (w,h). A 1D vector of length channels. When both width and height are 1, this is equivalent to the case above. Otherwise, the weight vector across channels is repeated at every location (w,h). normalize¶ Indicates how the weights should be normalized if given. The following values are allowed: :local (default): Normalize the weights locally at each location (w,h), across the channels. :global: Normalize the weights globally. :no: Do not normalize the weights. The weight normalization is done in a way that you get the same objective function when specifying equal weights for each class as when you do not specify any weights. In other words, the total sum of the weights is scaled to be equal to width ⨉ height ⨉ channels. If you specify :no, it is your responsibility to properly normalize the weights.
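The loss and the weight-normalization modes described above can be sketched in a few lines. This is an illustrative Python/NumPy sketch, not Mocha code: the function name, the batch averaging, and the exact handling of the weight modes are assumptions made for this sketch.

```python
# Illustrative sketch (Python/NumPy, not Mocha code) of the weighted
# multinomial logistic loss described above.
import numpy as np

def multinomial_logistic_loss(probs, labels, weights=None, normalize="local"):
    """probs: (W, H, C, N) class probabilities; labels: (W, H, 1, N)
    zero-based class indices; weights: None, (C,), or (W, H, C)."""
    W, H, C, N = probs.shape
    if weights is None:
        w = np.ones((W, H, C))
    else:
        w = np.broadcast_to(weights, (W, H, C)).astype(float).copy()
        if normalize == "local":
            # scale so that the weights at each location (w, h) sum to C
            w *= C / w.sum(axis=2, keepdims=True)
        elif normalize == "global":
            # scale so that the total weight sum equals W * H * C
            w *= (W * H * C) / w.sum()
    loss = 0.0
    for n in range(N):
        for x in range(W):
            for y in range(H):
                g = int(labels[x, y, 0, n])
                loss -= w[x, y, g] * np.log(probs[x, y, g, n])
    return loss / N   # averaged over the N instances (an assumption here)
```

With equal weights, both normalization modes reduce to the unweighted loss, matching the normalization convention described above.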
class SoftmaxLossLayer¶ This layer first transforms the input with the softmax function \[\sigma(x_1,\ldots,x_C) = (\sigma_1,\ldots,\sigma_C) = \left(\frac{e^{x_1}}{\sum_j e^{x_j}},\ldots,\frac{e^{x_C}}{\sum_je^{x_j}}\right)\] which essentially turns the predictions into non-negative values via the exponential function and then re-normalizes them so that they behave like probabilities. The transformed values are then used to compute the multinomial logistic loss \[\ell = -w_g \log(\sigma_g)\] Here \(g\) is the ground-truth label, and \(w_g\) is the weight for the \(g\)-th category. See the documentation of MultinomialLogisticLossLayer for more details on what the weights mean and how to specify them. The shapes of the inputs are the same as for MultinomialLogisticLossLayer: the multi-class predictions are assumed to be along the channel dimension. The reason we provide a combined softmax loss layer instead of using one softmax layer and one multinomial logistic layer is that the combined layer produces the back-propagation error in a more numerically robust way. \[\frac{\partial \ell}{\partial x_i} = w_g\left(\frac{e^{x_i}}{\sum_j e^{x_j}} - \delta_{ig}\right) = w_g\left(\sigma_i - \delta_{ig}\right)\] Here \(\delta_{ig}\) is 1 if \(i=g\), and 0 otherwise. Should be a vector containing two symbols. The first one specifies the name for the conditional probability input blob, and the second one specifies the name for the ground-truth input blob.
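The numerical-robustness point can be illustrated with a minimal sketch (Python/NumPy, illustrative rather than Mocha code): computing the loss through a max-shifted log-softmax avoids overflow in the exponentials, while the gradient is exactly the simple expression \(w_g(\sigma_i - \delta_{ig})\) above.

```python
# A minimal single-instance sketch (not Mocha code) of the combined
# softmax-loss computation: loss via log-softmax with a max shift, and the
# gradient w_g * (sigma_i - delta_ig) given above.
import numpy as np

def softmax_loss(x, g, w_g=1.0):
    """x: (C,) raw predictions; g: ground-truth index; returns (loss, grad)."""
    shifted = x - x.max()                                 # guards exp against overflow
    log_sigma = shifted - np.log(np.exp(shifted).sum())   # log-softmax
    loss = -w_g * log_sigma[g]
    sigma = np.exp(log_sigma)
    delta = np.zeros_like(sigma)
    delta[g] = 1.0
    grad = w_g * (sigma - delta)                          # back-propagated error
    return loss, grad
```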
Spring 2018, Math 171 Week 5 Reversibility/Detailed Balance Condition Show that Ehrenfest’s chain is reversible. \[P(x,y)= \begin{cases}\frac{N-x}{N}, & \mathrm{if\ } y=x+1 \cr \frac{x}{N}, & \mathrm{if\ } y=x-1 \cr 0, & \mathrm{otherwise}\end{cases}\] (Answer) Show KCC on simple cycles: All simple cycles are length 2, and cycles of length 2 always satisfy KCC. Consider the Markov chain \(\{X_n:n \ge 0\}\) on \(\mathcal{S} = \{1, 2, \dots, N\}\) with a transition matrix of the form \[P(x,x-1)=q(x), \; P(x,x)=r(x), \; P(x,x+1)=p(x)\] Find conditions on \(q, r, p\) which guarantee the chain will be irreducible. (Answer) \(q(x) \neq 0\) and \(p(x) \neq 0\) \(\forall x\) Show the chain is reversible (Answer) Show KCC on simple cycles: All simple cycles are length 2, and cycles of length 2 always satisfy KCC. Find the stationary distribution, assuming the chain is irreducible (Discussed) Consider the Markov chain \(\{X_n:n \ge 0\}\) on \(\mathcal{S} = \{1, 2, \dots, N\}\) with a transition matrix of the form \[P(x,y)= \begin{cases}q(x) & \mathrm{if\ } x \neq y \cr 1-(N-1)q(x) & \mathrm{if\ } x = y \end{cases}\] Verify that such chains are reversible Suppose \(P\) is the transition matrix for a reversible Markov chain. Fix \(i_0, j_0 \in \mathcal{S}\) Define a new transition matrix \[P'(x,y)= \begin{cases}aP(i_0, j_0) & \mathrm{if\ } x=i_0, y=j_0 \cr aP(j_0, i_0) & \mathrm{if\ } x=j_0, y=i_0 \cr P(i_0, i_0) + (1-a)P(i_0, j_0) & \mathrm{if\ } x=y=i_0 \cr P(j_0, j_0) + (1-a)P(j_0, i_0) & \mathrm{if\ } x=y=j_0 \cr P(x,y) & \mathrm{otherwise} \end{cases}\] Verify that the Markov chain defined by the new transition matrix will be reversible How will the stationary distribution of the new Markov chain relate to the stationary distribution of the original? Limiting Behavior (Discussed) Consider Ehrenfest’s chain \(\{X_n:n \ge 0\}\) subject to the transition probabilities. 
\[P(x,y)= \begin{cases}\frac{N-x}{N}, & \mathrm{if\ } y=x+1 \cr \frac{x}{N}, & \mathrm{if\ } y=x-1 \cr 0, & \mathrm{otherwise}\end{cases}\] Compute the period of \(\{X_n:n \ge 0\}\) (Answer) 2 Show that \(Y_n=X_{2n}\) is a Markov chain. Is it irreducible? (Answer) No. The state space has been split into two parts which don’t communicate with each other. Consider \(Y_n=X_{2n}\) under the restriction \(X_0 \in \{0,\ 2,\ \dots,\ 2\lfloor\frac{N}{2}\rfloor\}\). Compute its stationary distribution. Explain why \(Y_n\) converges in distribution to the stationary distribution as \(n \to \infty\). (Discussed) Consider the Markov chain \(\{X_n:n \ge 0\}\) on \(\mathcal{S} = \{1, 2, \dots, N\}\) with a transition matrix of the form \[P(x,x-1)=q(x), \; P(x,x)=r(x), \; P(x,x+1)=p(x)\] Find conditions on \(q, r, p\) which lead to a period of 2 (Answer) \(r(x) = 0\) Suppose \(q(x)=0\). Find a formula for \(\mathbb{E}_x[N(y)]\) for \(x < y\) Consider the Markov chain \(\{X_n:n \ge 0\}\) on \(\mathcal{S} = \{1, 2, \dots, N\}\) with a transition matrix of the form \[P(x,y)= \begin{cases}p, & \mathrm{if\ } y=x+1, \; x<N \cr 1-p, & \mathrm{if\ } x=0, \; y=0 \cr r & \mathrm{if\ } y = x, \; 0<y<N \cr q, & \mathrm{if\ } y=x-1, \; x>0 \cr 1-q, & \mathrm{if\ } x=N, \; y=N \cr 0, & \mathrm{otherwise}\end{cases}\] Find a system of equations which could be solved to find \(\mathbb{E}_x[T_N]\) for any \(x\). Compute \(\mathbb{E}_x[T_N]\) when \(r=0\) and \(q=1-p\)
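The facts above about Ehrenfest's chain — detailed balance with the Binomial\((N, 1/2)\) stationary distribution, and period 2 — are easy to verify numerically. A Python/NumPy sketch (illustrative, not part of the course notes):

```python
# Numerical sanity check for Ehrenfest's chain: the Binomial(N, 1/2)
# distribution satisfies detailed balance, and the chain has period 2.
import numpy as np
from math import comb

N = 6
P = np.zeros((N + 1, N + 1))
for x in range(N + 1):
    if x < N:
        P[x, x + 1] = (N - x) / N
    if x > 0:
        P[x, x - 1] = x / N

pi = np.array([comb(N, x) for x in range(N + 1)]) / 2 ** N   # Binomial(N, 1/2)

# Detailed balance: pi(x) P(x, y) = pi(y) P(y, x) for all x, y.
assert np.allclose(pi[:, None] * P, (pi[:, None] * P).T)

# Period 2: the diagonal of P^m vanishes for odd m, is positive for even m.
assert np.all(np.diag(np.linalg.matrix_power(P, 3)) == 0)
assert np.all(np.diag(np.linalg.matrix_power(P, 2)) > 0)
```

Detailed balance immediately gives stationarity, which the Kolmogorov cycle condition argument above establishes in general for these chains.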
Measurement of $J/\psi$ production as a function of event multiplicity in pp collisions at $\sqrt{s} = 13\,\mathrm{TeV}$ with ALICE (Elsevier, 2017-11) The availability at the LHC of the largest collision energy in pp collisions allows a significant advance in the measurement of $J/\psi$ production as a function of event multiplicity. The interesting relative increase ... Multiplicity dependence of jet-like two-particle correlations in pp collisions at $\sqrt s$ = 7 and 13 TeV with ALICE (Elsevier, 2017-11) Two-particle correlations in relative azimuthal angle (Δϕ) and pseudorapidity (Δη) have been used to study heavy-ion collision dynamics, including medium-induced jet modification. Further investigations also showed the ... The new Inner Tracking System of the ALICE experiment (Elsevier, 2017-11) The ALICE experiment will undergo a major upgrade during the next LHC Long Shutdown scheduled in 2019–20 that will enable a detailed study of the properties of the QGP, exploiting the increased Pb-Pb luminosity ... Azimuthally differential pion femtoscopy relative to the second and third harmonic in Pb-Pb 2.76 TeV collisions from ALICE (Elsevier, 2017-11) Azimuthally differential femtoscopic measurements, being sensitive to spatio-temporal characteristics of the source as well as to the collective velocity fields at freeze-out, provide very important information on the ... Charmonium production in Pb–Pb and p–Pb collisions at forward rapidity measured with ALICE (Elsevier, 2017-11) The ALICE collaboration has measured the inclusive charmonium production at forward rapidity in Pb–Pb and p–Pb collisions at $\sqrt{s_{\mathrm{NN}}} = 5.02$ TeV and $\sqrt{s_{\mathrm{NN}}} = 8.16$ TeV, respectively. In Pb–Pb collisions, the $J/\psi$ and $\psi(2S)$ nuclear ...
Investigations of anisotropic collectivity using multi-particle correlations in pp, p-Pb and Pb-Pb collisions (Elsevier, 2017-11) Two- and multi-particle azimuthal correlations have proven to be an excellent tool to probe the properties of the strongly interacting matter created in heavy-ion collisions. Recently, the results obtained for multi-particle ... Jet-hadron correlations relative to the event plane at the LHC with ALICE (Elsevier, 2017-11) In ultrarelativistic heavy-ion collisions at the Large Hadron Collider (LHC), conditions are met to produce a hot, dense and strongly interacting medium known as the Quark Gluon Plasma (QGP). Quarks and gluons from incoming ... Measurements of the nuclear modification factor and elliptic flow of leptons from heavy-flavour hadron decays in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 and 5.02 TeV with ALICE (Elsevier, 2017-11) We present the ALICE results on the nuclear modification factor and elliptic flow of electrons and muons from open heavy-flavour hadron decays at mid-rapidity and forward rapidity in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ ... Production of charged pions, kaons and protons at large transverse momenta in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV (Elsevier, 2014-09) Transverse momentum spectra of $\pi^{\pm}, K^{\pm}$ and $p(\bar{p})$ up to $p_T$ = 20 GeV/c at mid-rapidity, |y| $\le$ 0.8, in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV have been measured using the ALICE detector ...
This article by Yngvason is probably a good start: Yngvason, J. (2005). The role of type III factors in quantum field theory. Reports on Mathematical Physics, 55(1), 135–147. (arxiv) The Type III property says something about statistical independence. Let $\mathcal{O}$ be a double cone, and let $\mathfrak{A}(\mathcal{O})$ be the associated algebra of observables. Assuming Haag duality, we have $\mathfrak{A}(\mathcal{O}')'' = \mathfrak{A}(\mathcal{O})$. If $\mathfrak{A}(\mathcal{O})$ is not of Type I, the Hilbert space $\mathcal{H}$ of the system does not decompose as $\mathcal{H} = \mathcal{H}_1 \otimes \mathcal{H}_2$ in such a way that $\mathfrak{A}(\mathcal{O})$ acts on the first tensor factor, and $\mathfrak{A}(\mathcal{O}')$ on the second. This implies that one cannot prepare the system in a certain state when restricted to measurements in $\mathcal{O}$ regardless of the state in the causal complement. It should be noted that if the split property holds, that is, there is a Type I factor $\mathfrak{N}$ such that $\mathfrak{A}(\mathcal{O}) \subset \mathfrak{N} \subset \mathfrak{A}(\widehat{\mathcal{O}})$ for some region $\mathcal{O} \subset \widehat{\mathcal{O}}$, a slightly weaker property is available: a state can be prepared in $\mathcal{O}$ regardless of the state in $\widehat{\mathcal{O}}'$. An illustration of the consequences can be found in the article above. Another consequence is that the Borchers property B automatically holds: if $P$ is some projection in $\mathfrak{A}(\mathcal{O})$, then there is some isometry $W$ in the same algebra such that $W^*W = I$ and $W W^* = P$. This implies that we can modify the state locally to be an eigenstate of $P$, by making the modification $\omega(A) \to \omega_W(A) = \omega(W^*AW)$. Note that $\omega_W(P) = 1$ and $\omega_W(A) = \omega(A)$ for $A$ localised in the causal complement of $\mathcal{O}$. Type III$_1$ implies something slightly stronger; see the article cited for more details.
As to the first question, one can prove that the local algebras of free field theories are Type III. This was done by Araki in the 1960s. You can find references in the article mentioned above. In general, the Type III condition follows from natural assumptions on the observable algebras. Non-trivial examples probably have to be found in conformal field theory, but I do not know any references off the top of my head.
$\vec{B}=\nabla \times \vec{A}\tag1$ This is true because at every point $\nabla\cdot\vec{B}=0 \tag2$ At points in free space, $\displaystyle \vec{B}=\dfrac{\mu_0}{4 \pi}\int_C \dfrac{I\ dl \times\hat{r}}{r^2}\tag3$ and consequently $\nabla \cdot\vec{B}=0$. At the points on the circuit there is a singularity, and we cannot directly apply equation $(3)$, i.e. the Biot-Savart law. So in this case how can $\nabla \cdot\vec{B}=0$? Edit (my understanding) @garyp: I am a graduate student, so I may seem to be a bit naive. Anyway, please tell me whether I am understanding this in the right way. Here I am not considering the circuit as three-dimensional. By considering it as one-dimensional, the (closed) circuit becomes equivalent to a magnetic dipole layer of infinitesimal thickness. By using the inverse-square law of magnetic poles, we can find the magnetic field (intensity) at any point outside the magnetic dipole layer (even at points infinitely close to the dipole layer). Let's first see the magnetic field due to an element of the dipole layer at a point infinitely close to it: $$\vec{B}=\mu_0\vec{H}= k \ M\ \left[ \dfrac{\hat{r_1}}{r_1^2}-\dfrac{\hat{r_2}}{r_2^2} \right] dS'$$ (where $M$ is the magnetic pole density and $S'$ is the surface of the magnetic dipole layer). Now, using the divergence formula for a radial field in spherical coordinates, $\nabla\cdot\left(\dfrac{\hat{r}}{r^2}\right)=\dfrac{1}{r^2}\dfrac{\partial}{\partial r}\left(r^2\cdot\dfrac{1}{r^2}\right)$: \begin{align} \nabla \cdot \vec{B} &= k \ M\ \left[ \nabla \cdot \dfrac{\hat{r_1}}{r_1^2}-\nabla \cdot \dfrac{\hat{r_2}}{r_2^2} \right]\ dS' \\ & = k \ M\ \left[ \dfrac{1}{r_1^2} \dfrac{\partial}{\partial r_1}\left( r_1^2\cdot\dfrac{1}{r_1^2} \right) - \dfrac{1}{r_2^2} \dfrac{\partial}{\partial r_2}\left( r_2^2\cdot\dfrac{1}{r_2^2} \right) \right] dS' \\ & = 0 \end{align} (the two $r^2$ factors cancel, leaving the derivative of the constant $1$, which vanishes). Therefore the divergence (at points outside the dipole layer) due to each element of the dipole layer is zero. That is, the divergence (at points outside the dipole layer) due to the magnetic dipole layer is zero. Thus the magnetic field may blow up at points infinitely close to the dipole layer, but its divergence is still zero. Hence the divergence of the magnetic field due to the (closed) circuit is zero everywhere except at points on the (closed) circuit. Now comes the key point: since the divergence of the magnetic field due to a (closed) circuit is zero even at points infinitely close to the circuit, we ignore the circuit and say $\nabla \cdot \vec{B}=0$ everywhere on $\mathbb R^3$.
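This conclusion can also be checked numerically. The following Python sketch (illustrative, not from the question) discretizes a unit circular loop, evaluates the Biot-Savart sum (dropping the $\mu_0 I/4\pi$ prefactor), and estimates $\nabla\cdot\vec{B}$ at an off-wire point with central differences; each straight-segment contribution is exactly divergence-free, so the estimate should vanish up to finite-difference and rounding error.

```python
# Numerical check: div B = 0 off the wire for a Biot-Savart field.
import numpy as np

def biot_savart(point, n_seg=2000):
    """B field (up to a constant prefactor) of a unit loop in the xy-plane."""
    t = np.linspace(0.0, 2 * np.pi, n_seg, endpoint=False)
    pos = np.column_stack((np.cos(t), np.sin(t), np.zeros_like(t)))
    dl = np.column_stack((-np.sin(t), np.cos(t), np.zeros_like(t)))
    dl *= 2 * np.pi / n_seg                      # segment length elements
    r = point - pos
    r_norm = np.linalg.norm(r, axis=1, keepdims=True)
    return np.sum(np.cross(dl, r) / r_norm**3, axis=0)

def divergence(point, h=1e-4):
    """Central-difference estimate of div B at `point`."""
    div = 0.0
    for i in range(3):
        e = np.zeros(3)
        e[i] = h
        div += (biot_savart(point + e)[i] - biot_savart(point - e)[i]) / (2 * h)
    return div

# At the loop's center the field is 2*pi in these units; off the wire the
# divergence vanishes to within discretization error.
assert np.isclose(biot_savart(np.zeros(3))[2], 2 * np.pi)
assert abs(divergence(np.array([0.3, 0.2, 0.4]))) < 1e-5
```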
Linear algebra is a branch of mathematics, but the truth of it is that linear algebra is the mathematics of data. Linear algebra is the study of vector spaces, lines and planes, the mappings required for linear transformations, and much more. Dealing with linear equations is also a very prominent part of linear algebra. In this section students will learn how to solve linear equations using different methods. There are three possible outcomes for a system of linear equations: one unique solution, infinitely many solutions, or no solution. How to Solve Linear Equations in Two Variables Any system that can be written in the form ax + by = m cx + dy = n where a, b, c, d, m and n are real numbers and a, b, c, d are nonzero, is a system of linear equations in two variables. Solving linear equations means finding the values of the unknown variables, here x and y, which satisfy both equations. With the help of algebraic methods such as the substitution method, elimination method, graphing method and matrix method, we find the solution set for a pair of linear equations. These methods also answer questions such as: how many solutions does a system of linear equations in two variables have? We will see the conditions under which a system of equations has a unique solution, infinitely many solutions, or no solution. Linear Equations in Two Variables Questions A few solved examples are illustrated below for better understanding. Example 1: Solve 10x - 20y = 110, 5x - 14y = 71 Solution: 10x - 20y = 110 ...(1) 5x - 14y = 71 ...(2) Use the elimination method to solve the above system of equations. Multiply equation (1) by 5 and equation (2) by 10: 5{10x - 20y = 110} gives 50x - 100y = 550 ...(3) 10{5x - 14y = 71} gives 50x - 140y = 710 ...(4) The coefficient of x in both equations is now the same, so we can eliminate x from the system by subtracting one equation from the other.
(Equation 4) - (Equation 3): (50x - 140y) - (50x - 100y) = 710 - 550, which gives -40y = 160, or y = -4 (dividing both sides by -40). To get the value of x, put y = -4 in equation (2): 5x - 14y = 71 5x - 14(-4) = 71 5x + 56 = 71 5x = 71 - 56 5x = 15 x = 15/5 This implies x = 3 Answer: x = 3, y = -4 Example 2: Solve the below pair of linear equations using the matrix method: 3x - 2y = 17 6x + 3y = 27 Solution: $\begin{bmatrix} 3 & -2\\ 6 & 3\end{bmatrix} \begin{bmatrix} x\\ y\end{bmatrix} = \begin{bmatrix} 17\\ 27 \end{bmatrix}$ i.e. $A\ X$ = $B$ A = $\begin{bmatrix} 3 & -2\\ 6 & 3 \end{bmatrix}$ Now, write the matrix of cofactors: $cof(A)$ = $\begin{bmatrix} 3 & -6\\ 2 & 3\end{bmatrix}$ $Adj(A)$ = Transpose of $cof(A)$ = $\begin{bmatrix} 3 & 2\\ -6 & 3\end{bmatrix}$ Determinant of A = |A| = 9 - (-12) = 9 + 12 = 21 $A^{-1}$ = $\frac{adj(A)}{|A|}$ = $\begin{bmatrix} \frac{1}{7} & \frac{2}{21}\\ \frac{-2}{7} & \frac{1}{7} \end{bmatrix}$ X = $A^{-1}\ B$ $\begin{bmatrix} x\\ y \end{bmatrix} = \begin{bmatrix} \frac{1}{7} & \frac{2}{21}\\ -\frac{2}{7} & \frac{1}{7} \end{bmatrix} \times \begin{bmatrix} 17\\ 27 \end{bmatrix}$ = $\begin{bmatrix} \frac{17}{7} + \frac{18}{7}\\ \frac{-34}{7}+\frac{27}{7} \end{bmatrix}$ $\begin{bmatrix} x\\ y \end{bmatrix} = \begin{bmatrix} 5\\ -1 \end{bmatrix}$ Therefore, the values of x and y are: x = 5 and y = -1 Practice Problems Practice Problem 1: Solve 11x - 9y = 4 ; 6x - 5y = -3 Practice Problem 2: Solve x + 5y = 2 and x + 3y = 10 Practice Problem 3: Solve the pair of equations using the substitution method: x - 2y = 6 and -5x + 2y = 1
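The matrix-method computation from Example 2 above can be checked with a few lines of NumPy. This is a sketch; in practice `np.linalg.solve` is preferred over forming the inverse explicitly, which is done here only to mirror the hand computation.

```python
# Example 2 above, redone with NumPy: X = A^{-1} B.
import numpy as np

A = np.array([[3.0, -2.0],
              [6.0,  3.0]])
B = np.array([17.0, 27.0])

det = np.linalg.det(A)       # |A| = 21
A_inv = np.linalg.inv(A)     # equals adj(A) / |A|
X = A_inv @ B                # X = A^{-1} B

assert np.isclose(det, 21.0)
assert np.allclose(X, [5.0, -1.0])
assert np.allclose(np.linalg.solve(A, B), X)   # the direct solver agrees
```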
What is the difference between linearly and affinely independent vectors? Why does affine independence not imply linear independence necessarily? Can someone explain using an example? To augment Lord Shark's answer, I just wanted to talk a little about the intuition behind it. Intuitively, a set of vectors is linearly dependent if there are more vectors than necessary to generate their span, i.e. the smallest subspace containing them. On the other hand, a set of vectors is affinely dependent if there are more vectors than necessary to generate their affine hull, i.e. the smallest flat (translate of a linear space) containing them. A single vector $v$ in a vector space generates an affine hull of $\lbrace v \rbrace$, which is just the trivial subspace $\lbrace 0 \rbrace$ translated by $v$. But, if $v \neq 0$, the span is the entire line between $0$ and $v$, as $0$ must be part of any subspace. To generate that line as an affine hull, you could look at the list $v, 0$. So, $v, 0$ are linearly dependent (e.g. $0 = 0 \cdot v + 5 \cdot 0$) as $0$ is not necessary to generate the span (just $v$ would have done fine), but both are necessary to generate the line as the affine hull, so they are affinely independent. To prove this, suppose $\lambda_1 + \lambda_2 = 0$ and, $$\lambda_1 \cdot v + \lambda_2 \cdot 0 = 0.$$ Then $\lambda_1 \cdot v = 0$, which implies $\lambda_1 = 0$, since $v \neq 0$. Since $\lambda_1 + \lambda_2 = 0$, we therefore also have $\lambda_2 = 0$. This proves affine independence. Vectors $v_1,\ldots,v_n$ are linearly dependent if there are$\lambda_1,\ldots,\lambda_n$, not all zero, with $\lambda_1 v_1+\cdots+\lambda_n v_n=0$. Vectors $v_1,\ldots,v_n$ are affinely dependent if there are$\lambda_1,\ldots,\lambda_n$, not all zero, with $\lambda_1 v_1+\cdots+\lambda_n v_n=0$ and $\lambda_1+\cdots+\lambda_n=0$. In $\Bbb R^1$ any two distinct vectors are linearly dependent but affinely independent. 
The set $\{ v_1,...,v_n\}$ is linearly independent if the only scalars $c_1, c_2,...,c_n$ satisfying $$\sum_{k=1}^n c_kv_k =0$$ are $c_j=0$ for all $j=1,2,...,n$. The set is affinely independent if the only scalars satisfying both $\sum_{k=1}^n c_kv_k =0$ and $$ \sum_{k=1}^n c_k=0$$ are $c_j=0$ for all $j=1,2,...,n$.
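These definitions admit an equivalent rank test, which the following Python/NumPy sketch illustrates (the rank characterization is a standard equivalence, not taken from the answers above): vectors are linearly independent iff the matrix with them as columns has full column rank, and affinely independent iff appending a row of ones still gives full column rank (the extra row encodes the constraint $\sum_k c_k = 0$).

```python
# Rank test for linear vs. affine independence.
import numpy as np

def linearly_independent(vectors):
    V = np.column_stack(vectors)
    return np.linalg.matrix_rank(V) == V.shape[1]

def affinely_independent(vectors):
    V = np.column_stack(vectors)
    ones = np.ones((1, V.shape[1]))
    return np.linalg.matrix_rank(np.vstack([V, ones])) == V.shape[1]

# The example above: v and 0 in R^1 are linearly dependent (0 is redundant
# for the span) but affinely independent (both are needed for the line).
v, zero = np.array([2.0]), np.array([0.0])
assert not linearly_independent([v, zero])
assert affinely_independent([v, zero])
```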
Learning Objectives Explain the difference between mass and weight Explain why falling objects on Earth are never truly in free fall Describe the concept of weightlessness Mass and weight are often used interchangeably in everyday conversation. For example, our medical records often show our weight in kilograms but never in the correct units of newtons. In physics, however, there is an important distinction. Weight is the pull of Earth on an object. It depends on the distance from the center of Earth. Unlike weight, mass does not vary with location. The mass of an object is the same on Earth, in orbit, or on the surface of the Moon. Units of Force The equation \(F_{net} = ma\) is used to define net force in terms of mass, length, and time. As explained earlier, the SI unit of force is the newton. Since \(F_{net} = ma\), $$1\; N = 1\; kg \cdotp m/s^{2} \ldotp \nonumber$$ Although almost the entire world uses the newton for the unit of force, in the United States, the most familiar unit of force is the pound (lb), where 1 N = 0.225 lb. Thus, a 225-lb person weighs 1000 N. Weight and Gravitational Force When an object is dropped, it accelerates toward the center of Earth. Newton’s second law says that a net force on an object is responsible for its acceleration. If air resistance is negligible, the net force on a falling object is the gravitational force, commonly called its weight \(\vec{w}\), or its force due to gravity acting on an object of mass m. Weight can be denoted as a vector because it has a direction; down is, by definition, the direction of gravity, and hence, weight is a downward force. The magnitude of weight is denoted as w. Galileo was instrumental in showing that, in the absence of air resistance, all objects fall with the same acceleration g. Using Galileo’s result and Newton’s second law, we can derive an equation for weight. Consider an object with mass m falling toward Earth. It experiences only the downward force of gravity, which is the weight \(\vec{w}\).
Newton’s second law says that the magnitude of the net external force on an object is \(\vec{F}_{net} = m \vec{a}\). We know that the acceleration of an object due to gravity is \(\vec{g}\), or \(\vec{a} = \vec{g}\). Substituting these into Newton’s second law gives us the following equations. Definition: Weight The gravitational force on a mass is its weight. We can write this in vector form, where \(\vec{w}\) is weight and m is mass, as $$\vec{w} = m \vec{g} \ldotp \label{5.8}$$ In scalar form, we can write $$w = mg \ldotp \label{5.9}$$ Since g = 9.80 m/s 2 on Earth, the weight of a 1.00-kg object on Earth is 9.80 N: $$w = mg = (1.00\; kg)(9.80 m/s^{2}) = 9.80\; N \ldotp$$ When the net external force on an object is its weight, we say that it is in free fall, that is, the only force acting on the object is gravity. However, when objects on Earth fall downward, they are never truly in free fall because there is always some upward resistance force from the air acting on the object. Acceleration due to gravity g varies slightly over the surface of Earth, so the weight of an object depends on its location and is not an intrinsic property of the object. Weight varies dramatically if we leave Earth’s surface. On the Moon, for example, acceleration due to gravity is only 1.67 m/s 2. A 1.0-kg mass thus has a weight of 9.8 N on Earth and only about 1.7 N on the Moon. The broadest definition of weight in this sense is that the weight of an object is the gravitational force on it from the nearest large body, such as Earth, the Moon, or the Sun. This is the most common and useful definition of weight in physics. It differs dramatically, however, from the definition of weight used by NASA and the popular media in relation to space travel and exploration. When they speak of “weightlessness” and “microgravity,” they are referring to the phenomenon we call “free fall” in physics.
We use the preceding definition of weight, force \(\vec{w}\) due to gravity acting on an object of mass m, and we make careful distinctions between free fall and actual weightlessness. Be aware that weight and mass are different physical quantities, although they are closely related. Mass is an intrinsic property of an object: It is a quantity of matter. The quantity or amount of matter of an object is determined by the numbers of atoms and molecules of various types it contains. Because these numbers do not vary, in Newtonian physics, mass does not vary; therefore, its response to an applied force does not vary. In contrast, weight is the gravitational force acting on an object, so it does vary depending on gravity. For example, a person closer to the center of Earth, at a low elevation such as New Orleans, weighs slightly more than a person who is located in the higher elevation of Denver, even though they may have the same mass. It is tempting to equate mass to weight, because most of our examples take place on Earth, where the weight of an object varies only a little with the location of the object. In addition, it is difficult to count and identify all of the atoms and molecules in an object, so mass is rarely determined in this manner. If we consider situations in which \(\vec{g}\) is a constant on Earth, we see that weight \(\vec{w}\) is directly proportional to mass m, since \(\vec{w} = m \vec{g}\), that is, the more massive an object is, the more it weighs. Operationally, the masses of objects are determined by comparison with the standard kilogram, as we discussed in Units and Measurement. But by comparing an object on Earth with one on the Moon, we can easily see a variation in weight but not in mass. For instance, on Earth, a 5.0-kg object weighs 49 N; on the Moon, where g is 1.67 m/s 2, the object weighs 8.4 N. However, the mass of the object is still 5.0 kg on the Moon. 
Example \(\PageIndex{1}\): Clearing a Field A farmer is lifting some moderately heavy rocks from a field to plant crops. He lifts a stone that weighs 40.0 lb. (about 180 N). What force does he apply if the stone accelerates at a rate of 1.5 m/s 2? Strategy We were given the weight of the stone, which we use in finding the net force on the stone. However, we also need to know its mass to apply Newton’s second law, so we must apply the equation for weight, w = mg, to determine the mass. Solution No forces act in the horizontal direction, so we can concentrate on vertical forces, as shown in the following free-body diagram. We label the acceleration to the side; technically, it is not part of the free-body diagram, but it helps to remind us that the object accelerates upward (so the net force is upward). $$w = mg \nonumber $$ $$m = \frac{w}{g} = \frac{180\; N}{9.8\; m/s^{2}} = 18\; kg \nonumber$$ $$\sum F = ma \nonumber$$ $$F - w = ma \nonumber$$ $$F - 180\; N = (18\; kg)(1.5\; m/s^{2}) \nonumber$$ $$F - 180\; N = 27\; N \nonumber$$ $$F = 207\; N = 210\; N\; \text{ to two significant figures} \nonumber$$ Significance To apply Newton’s second law as the primary equation in solving a problem, we sometimes have to rely on other equations, such as the one for weight or one of the kinematic equations, to complete the solution. Exercise \(\PageIndex{1}\) For Example \(\PageIndex{1}\), find the acceleration when the farmer’s applied force is 230.0 N. Answer With \(m = 18\; kg\) from the example, \(a = \frac{F - w}{m} = \frac{230.0\; N - 180\; N}{18\; kg} \approx 2.8\; m/s^{2}\), directed upward. Simulation Can you avoid the boulder field and land safely just before your fuel runs out, as Neil Armstrong did in 1969? This version of the classic video game accurately simulates the real motion of the lunar lander, with the correct mass, thrust, fuel consumption rate, and lunar gravity. The real lunar lander is hard to control.
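The example and the exercise amount to a few lines of arithmetic; here is a short Python sketch, with values rounded the same way as in the text:

```python
# Weight, mass, and Newton's second law for the stone-lifting example above.
# Values are rounded as in the text (g = 9.8 m/s^2, m ~ 18 kg).

g = 9.8                      # m/s^2
w = 180.0                    # N, weight of the stone (about 40 lb)
m = round(w / g)             # from w = m g  ->  about 18 kg

# Example: force needed for an upward acceleration of 1.5 m/s^2,
# from F - w = m a:
F = w + m * 1.5              # 207 N, or about 210 N to two significant figures

# Exercise: acceleration when the applied force is 230.0 N:
a = (230.0 - w) / m          # a bit under 2.8 m/s^2, directed upward
```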
Phet Simulation Use this interactive simulation to move the Sun, Earth, Moon, and space station to see the effects on their gravitational forces and orbital paths. Visualize the sizes and distances between different heavenly bodies, and turn off gravity to see what would happen without it. Contributors Samuel J. Ling (Truman State University), Jeff Sanny (Loyola Marymount University), and Bill Moebs with many contributing authors. This work is licensed by OpenStax University Physics under a Creative Commons Attribution License (by 4.0).
Huge

Revision as of 09:16, 10 October 2018

Huge cardinals (and their variants) were introduced by Kenneth Kunen in 1972 as a very large cardinal axiom. Kunen first used them to prove that the consistency of the existence of a huge cardinal implies the consistency of $\text{ZFC}$ + "there is an $\omega_2$-saturated $\sigma$-ideal on $\omega_1$". It is now known that only a Woodin cardinal is needed for this result. However, as far as the set theory world is concerned, the consistency of the existence of an $\omega_2$-complete $\omega_3$-saturated $\sigma$-ideal on $\omega_2$ still requires an almost huge cardinal. [1]

Definitions

Their formulation is similar to that of the formulation of superstrong cardinals. More precisely, a huge cardinal is to a supercompact cardinal as a superstrong cardinal is to a strong cardinal. The definition is part of a generalized phenomenon known as the "double helix", in which for some large cardinal properties n-$P_0$ and n-$P_1$, n-$P_0$ has less consistency strength than n-$P_1$, which has less consistency strength than (n+1)-$P_0$, and so on. This phenomenon is seen only around the n-fold variants as of modern set theoretic concerns. [2]

Although they are very large, there is a first-order definition which is equivalent to n-hugeness, so the $\theta$-th n-huge cardinal is first-order definable whenever $\theta$ is first-order definable. This definition can be seen as a (very strong) strengthening of the first-order definition of measurability.
Elementary embedding definitions

Throughout, $j:V\to M$ denotes a nontrivial elementary embedding with critical point $\kappa$ into a transitive class $M$.

$\kappa$ is almost n-huge with target $\lambda$ iff $\lambda=j^n(\kappa)$ and $M$ is closed under all of its sequences of length less than $\lambda$ (that is, $M^{<\lambda}\subseteq M$).

$\kappa$ is n-huge with target $\lambda$ iff $\lambda=j^n(\kappa)$ and $M$ is closed under all of its sequences of length $\lambda$ ($M^\lambda\subseteq M$).

$\kappa$ is almost n-huge iff it is almost n-huge with target $\lambda$ for some $\lambda$.

$\kappa$ is n-huge iff it is n-huge with target $\lambda$ for some $\lambda$.

$\kappa$ is super almost n-huge iff for every $\gamma$, there is some $\lambda>\gamma$ for which $\kappa$ is almost n-huge with target $\lambda$ (that is, the target can be made arbitrarily large).

$\kappa$ is super n-huge iff for every $\gamma$, there is some $\lambda>\gamma$ for which $\kappa$ is n-huge with target $\lambda$.

$\kappa$ is almost huge, huge, super almost huge, and superhuge iff it is almost 1-huge, 1-huge, etc. respectively.

Ultrahuge cardinals

A cardinal $\kappa$ is $\lambda$-ultrahuge for $\lambda>\kappa$ if there exists a nontrivial elementary embedding $j:V\to M$ for some transitive class $M$ such that $j(\kappa)>\lambda$, $M^{j(\kappa)}\subseteq M$ and $V_{j(\lambda)}\subseteq M$. A cardinal is ultrahuge if it is $\lambda$-ultrahuge for all $\lambda\geq\kappa$. [1] Notice how similar this definition is to the alternative characterization of extendible cardinals. Furthermore, this definition can be extended in the obvious way to define $\lambda$-ultra n-hugeness and ultra n-hugeness, as well as the "almost" variants.

Ultrafilter definition

The first-order definition of n-huge is somewhat similar to measurability. Specifically, $\kappa$ is measurable iff there is a nonprincipal $\kappa$-complete ultrafilter, $U$, over $\kappa$.
A cardinal $\kappa$ is n-huge with target $\lambda$ iff there is a normal $\kappa$-complete ultrafilter, $U$, over $\mathcal{P}(\lambda)$, and cardinals $\kappa=\lambda_0<\lambda_1<\lambda_2...<\lambda_{n-1}<\lambda_n=\lambda$ such that: $$\forall i<n(\{x\subseteq\lambda:\text{order-type}(x\cap\lambda_{i+1})=\lambda_i\}\in U)$$ where $\text{order-type}(X)$ is the order-type of the poset $(X,\in)$. [1]

$\kappa$ is then super n-huge if for all ordinals $\theta$ there is a $\lambda>\theta$ such that $\kappa$ is n-huge with target $\lambda$, i.e. $\lambda_n$ can be made arbitrarily large. If $j:V\to M$ is such that $M^{j^n(\kappa)}\subseteq M$ (i.e. $j$ witnesses n-hugeness) then there is an ultrafilter $U$ as above such that, for all $k\leq n$, $\lambda_k = j^k(\kappa)$; i.e. it is not only $\lambda=\lambda_n$ that is an iterate of $\kappa$ by $j$; all members of the $\lambda_k$ sequence are.

As an example, $\kappa$ is 1-huge with target $\lambda$ iff there is a normal $\kappa$-complete ultrafilter, $U$, over $\mathcal{P}(\lambda)$ such that $\{x\subseteq\lambda:\text{order-type}(x)=\kappa\}\in U$. What makes this so strong is that the collection of all sets $x\subseteq\lambda$ of order-type exactly $\kappa$ is itself in the ultrafilter; that is, every set containing $\{x\subseteq\lambda:\text{order-type}(x)=\kappa\}$ as a subset is considered a "large set."

Coherent sequence characterization of almost hugeness

Consistency strength and size

Hugeness exhibits a phenomenon associated with similarly defined large cardinals (the n-fold variants) known as the double helix. In this phenomenon, for two n-fold variant properties, calling a cardinal n-$P_0$ iff it has the first property and n-$P_1$ iff it has the second, n-$P_0$ is weaker than n-$P_1$, which in turn is weaker than (n+1)-$P_0$.
[2] In the consistency strength hierarchy, here is where these lie (top being weakest):

measurable = 0-superstrong = 0-huge
n-superstrong
n-fold supercompact
(n+1)-fold strong, n-fold extendible
(n+1)-fold Woodin, n-fold Vopěnka
(n+1)-fold Shelah
almost n-huge
super almost n-huge
n-huge
super n-huge
ultra n-huge
(n+1)-superstrong

All huge variants lie at the top of the double helix restricted to some natural number n, although each is bested by I3 cardinals (the critical points of the I3 elementary embeddings). In fact, every I3 cardinal is preceded by a stationary set of n-huge cardinals, for all n. [1]

Similarly, every huge cardinal $\kappa$ is almost huge, and there is a normal measure over $\kappa$ which contains every almost huge cardinal $\lambda<\kappa$. Every superhuge cardinal $\kappa$ is extendible and there is a normal measure over $\kappa$ which contains every extendible cardinal $\lambda<\kappa$. Every (n+1)-huge cardinal $\kappa$ has a normal measure which contains every cardinal $\lambda$ such that $V_\kappa\models$"$\lambda$ is super n-huge" [1]; in fact it contains every cardinal $\lambda$ such that $V_\kappa\models$"$\lambda$ is ultra n-huge".

Every n-huge cardinal is m-huge for every $m<n$. Similarly with almost n-hugeness, super n-hugeness, and super almost n-hugeness. Every almost huge cardinal is Vopěnka (therefore the consistency of the existence of an almost huge cardinal implies the consistency of Vopěnka's principle). [1] Every ultra n-huge is super n-huge and a stationary limit of super n-huge cardinals. Every super almost (n+1)-huge is ultra n-huge and a stationary limit of ultra n-huge cardinals.

In terms of size, however, the least n-huge cardinal is smaller than the least supercompact cardinal (assuming both exist). [1] This is because n-huge cardinals have upward reflection properties, while supercompacts have downward reflection properties.
Thus for any $\kappa$ which is supercompact and has an n-huge cardinal above it, $\kappa$ "reflects downward" that n-huge cardinal: there are $\kappa$-many n-huge cardinals below $\kappa$. On the other hand, the least super n-huge cardinals have both upward and downward reflection properties, and are all much larger than the least supercompact cardinal. It is notable that, while almost 2-huge cardinals have higher consistency strength than superhuge cardinals, the least almost 2-huge is much smaller than the least super almost huge. While not every $n$-huge cardinal is strong, if $\kappa$ is almost $n$-huge with targets $\lambda_1,\lambda_2...\lambda_n$, then $\kappa$ is $\lambda_n$-strong as witnessed by the generated $j:V\prec M$. This is because $j^n(\kappa)=\lambda_n$ is measurable and therefore $\beth_{\lambda_n}=\lambda_n$ and so $V_{\lambda_n}=H_{\lambda_n}$ and because $M^{<\lambda_n}\subset M$, $H_\theta\subset M$ for each $\theta<\lambda_n$ and so $\cup\{H_\theta:\theta<\lambda_n\} = \cup\{V_\theta:\theta<\lambda_n\} = V_{\lambda_n}\subset M$. Every almost $n$-huge cardinal with targets $\lambda_1,\lambda_2...\lambda_n$ is also $\theta$-supercompact for each $\theta<\lambda_n$, and every $n$-huge cardinal with targets $\lambda_1,\lambda_2...\lambda_n$ is also $\lambda_n$-supercompact. The $\omega$-huge cardinals A cardinal $\kappa$ is almost $\omega$-huge iff there is some transitive model $M$ and an elementary embedding $j:V\prec M$ with critical point $\kappa$ such that $M^{<\lambda}\subset M$ where $\lambda$ is the smallest cardinal above $\kappa$ such that $j(\lambda)=\lambda$. Similarly, $\kappa$ is $\omega$-huge iff the model $M$ can be required to have $M^\lambda\subset M$. Sadly, $\omega$-huge cardinals are inconsistent with ZFC by a version of Kunen's inconsistency theorem. Now, $\omega$-hugeness is used to describe critical points of I1 embeddings. 
Relative consistency results

Hugeness of $\omega_1$

In [2] it is shown that if $\text{ZFC +}$ "there is a huge cardinal" is consistent then so is $\text{ZF +}$ "$\omega_1$ is a huge cardinal" (with the ultrafilter characterization of hugeness).

Generalizations of Chang's conjecture

Cardinal arithmetic in $\text{ZF}$

If there is an almost huge cardinal then there is a model of $\text{ZF+}\neg\text{AC}$ in which every successor cardinal is Ramsey. It follows that for all ordinals $\alpha$ there is no injection $\aleph_{\alpha+1}\to 2^{\aleph_\alpha}$. This in turn implies the failure of the square principle at every infinite cardinal (and consequently $\text{AD}^{L(\mathbb{R})}$, see determinacy). [3]

References

Kanamori, Akihiro. The Higher Infinite: Large Cardinals in Set Theory from Their Beginnings. Second edition, Springer-Verlag, Berlin, 2009.
Sato, Kentaro. Double helix in large large cardinals and iteration of elementary embeddings. 2007.
Neurons (Activation Functions)

Neurons can be attached to any layer. The neuron of each layer will affect the output in the forward pass and the gradient in the backward pass automatically unless it is an identity neuron. Layers have an identity neuron by default [1].

class Neurons.Identity
An activation function that does not change its input.

class Neurons.ReLU
Rectified Linear Unit. During the forward pass, it truncates all values below some threshold \(\epsilon\), typically 0. In other words, it computes point-wise \(y=\max(\epsilon, x)\). The point-wise derivative for ReLU is
\[\begin{split}\frac{dy}{dx} = \begin{cases}1 & x > \epsilon \\ 0 & x \leq \epsilon\end{cases}\end{split}\]

epsilon
Specifies the minimum threshold at which the neuron will truncate. Default 0.

Note: ReLU is actually not differentiable at \(\epsilon\). But it has subdifferential \([0,1]\). Any value in that interval can be taken as a subderivative, and can be used in SGD if we generalize from gradient descent to subgradient descent. In the implementation, we choose the subgradient at \(x = \epsilon\) to be 0.

class Neurons.LReLU
Leaky Rectified Linear Unit. A Leaky ReLU can help fix the "dying ReLU" problem. ReLUs can "die" if a large enough gradient changes the weights such that the neuron never activates on new data. It computes \(y = x\) for \(x > 0\) and \(y = 0.01x\) otherwise, so the point-wise derivative is
\[\begin{split}\frac{dy}{dx} = \begin{cases}1 & x > 0 \\ 0.01 & x \leq 0\end{cases}\end{split}\]

class Neurons.Sigmoid
Sigmoid is a smoothed step function that produces approximate 0 for negative inputs with large absolute values and approximate 1 for large positive inputs. The point-wise formula is \(y = 1/(1+e^{-x})\). The point-wise derivative is
\[\frac{dy}{dx} = \frac{e^{-x}}{\left(1+e^{-x}\right)^2} = (1-y)y\]

class Neurons.Tanh
Tanh is a transformed version of Sigmoid that takes values in \((-1, 1)\) instead of the unit interval. The point-wise formula is \(y = (1-e^{-2x})/(1+e^{-2x})\).
The point-wise derivative is
\[\frac{dy}{dx} = \frac{4e^{2x}}{(e^{2x} + 1)^2} = 1-y^2\]

class Neurons.Exponential
The exponential function.
\[y = \exp(x)\]

[1] This is actually not true: not all layers in Mocha support neurons. For example, data layers currently do not have neurons, but this feature could be added by simply adding a neuron property to the data layer type. However, for some layer types like loss layers or accuracy layers, it does not make much sense to have neurons.
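The formulas above are easy to check numerically. Mocha itself is a Julia library, so the following scalar Python sketch is illustrative only; the function names are ours, and the derivative identities are verified against finite differences:

```python
import math

def relu(x, epsilon=0.0):
    # y = max(epsilon, x); subgradient at x == epsilon taken as 0
    return max(epsilon, x)

def lrelu(x):
    # Leaky ReLU: slope 0.01 for non-positive inputs
    return x if x > 0 else 0.01 * x

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def sigmoid_grad(x):
    y = sigmoid(x)
    return (1.0 - y) * y        # equals e^{-x} / (1 + e^{-x})^2

def tanh_grad(x):
    y = math.tanh(x)
    return 1.0 - y * y          # equals 4 e^{2x} / (e^{2x} + 1)^2

# Spot checks of the forward formulas
assert relu(-1.0) == 0.0
assert abs(lrelu(-2.0) + 0.02) < 1e-12

# Verify each closed-form derivative against a central finite difference
h = 1e-6
for f, g in [(sigmoid, sigmoid_grad), (math.tanh, tanh_grad)]:
    numeric = (f(0.3 + h) - f(0.3 - h)) / (2 * h)
    assert abs(numeric - g(0.3)) < 1e-6
```

The finite-difference check is a quick way to catch sign errors like the one corrected in the Sigmoid derivative above.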
Category:Fourier Series

Let $f$ be a real function and let $\alpha \in \R$ be a real number. Let:

\(\displaystyle a_n\) \(=\) \(\displaystyle \dfrac 1 \pi \int_\alpha^{\alpha + 2 \pi} f \left({x}\right) \cos n x \rd x\)

\(\displaystyle b_n\) \(=\) \(\displaystyle \dfrac 1 \pi \int_\alpha^{\alpha + 2 \pi} f \left({x}\right) \sin n x \rd x\)

Then:

$\displaystyle \frac {a_0} 2 + \sum_{n \mathop = 1}^\infty \left({a_n \cos n x + b_n \sin n x}\right)$

is called the Fourier Series for $f$.

Subcategories

This category has the following 3 subcategories, out of 3 total.

Pages in category "Fourier Series"

The following 14 pages are in this category, out of 14 total.

Fourier Cosine Coefficients for Even Function over Symmetric Range
Fourier Cosine Coefficients for Odd Function over Symmetric Range
Fourier Series for Even Function over Symmetric Range
Fourier Series for Odd Function over Symmetric Range
Fourier Series over General Range from Specific
Fourier Sine Coefficients for Even Function over Symmetric Range
Fourier Sine Coefficients for Odd Function over Symmetric Range
Fourier's Theorem
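As a numerical illustration of the coefficient formulas (our own example, not from the category pages): for $f(x) = x$ on $(-\pi, \pi)$ the coefficients are known to be $a_n = 0$ and $b_n = 2(-1)^{n+1}/n$, which a simple midpoint-rule approximation of the defining integrals reproduces:

```python
import math

def fourier_coefficients(f, n, alpha=-math.pi, steps=100000):
    """Approximate a_n and b_n of f over (alpha, alpha + 2*pi) by a midpoint Riemann sum."""
    h = 2 * math.pi / steps
    a = b = 0.0
    for k in range(steps):
        x = alpha + (k + 0.5) * h       # midpoint of the k-th subinterval
        a += f(x) * math.cos(n * x) * h
        b += f(x) * math.sin(n * x) * h
    return a / math.pi, b / math.pi

# f(x) = x on (-pi, pi): expect a_n = 0 and b_n = 2*(-1)^(n+1)/n
for n in (1, 2, 3):
    a, b = fourier_coefficients(lambda x: x, n)
    assert abs(a) < 1e-6
    assert abs(b - 2 * (-1) ** (n + 1) / n) < 1e-6
```

The vanishing $a_n$ here is an instance of the "Fourier Sine Coefficients for Odd Function over Symmetric Range" result listed in this category.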