Created in the early 17th century, the gas laws have assisted scientists in finding volumes, amounts, pressures, and temperatures of gases. The gas laws consist of three primary laws: Charles' Law, Boyle's Law, and Avogadro's Law (all of which later combine into the General Gas Equation and Ideal Gas Law).

Introduction
The three fundamental gas laws describe the relationships among pressure, temperature, volume, and amount of gas. Boyle's Law tells us that the volume of a gas increases as the pressure decreases. Charles' Law tells us that the volume of a gas increases as the temperature increases. And Avogadro's Law tells us that the volume of a gas increases as the amount of gas increases. The ideal gas law is the combination of the three simple gas laws.

Ideal Gases
An ideal gas, or perfect gas, is a theoretical substance that helps establish the relationship of four gas variables: pressure (P), volume (V), amount of gas (n), and temperature (T). It has the following characteristics: the particles of the gas are extremely small, so the gas itself does not occupy any space; the particles are in constant, random, straight-line motion; there are no forces between the particles; and particles collide only elastically, with each other and with the walls of the container.

A real gas, in contrast, has real particle volume, and the collisions of its particles are not elastic, because there are attractive forces between particles. As a result, the volume of a real gas is larger than that of an ideal gas, and the pressure of a real gas is lower than that of an ideal gas. All real gases tend toward ideal gas behavior at low pressure and relatively high temperature. The compressibility factor (Z) tells us how much a real gas deviates from ideal gas behavior: \[ Z = \dfrac{PV}{nRT} \] For ideal gases, \( Z = 1 \); for real gases, \( Z\neq 1 \).
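As a quick numerical illustration (a sketch added here, not part of the original text), the compressibility factor can be computed directly from measured values of P, V, n, and T; the helper name below is my own:

```python
# Compressibility factor Z = PV/(nRT); Z = 1 for an ideal gas.
R = 0.082057  # gas constant, L·atm·mol⁻¹·K⁻¹

def compressibility_factor(P_atm, V_L, n_mol, T_K):
    """Return Z = PV/(nRT). Z = 1 for an ideal gas; Z != 1 signals real-gas behavior."""
    return (P_atm * V_L) / (n_mol * R * T_K)

# One mole of an ideal gas at STP (1 atm, 273.15 K) occupies 22.414 L, so Z should be ~1.
print(compressibility_factor(1.0, 22.414, 1.0, 273.15))
```

A measured molar volume smaller or larger than 22.414 L at STP would give Z below or above 1, quantifying the deviation.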
Boyle's Law
In 1662, Robert Boyle discovered the correlation between pressure (P) and volume (V), assuming temperature (T) and amount of gas (n) remain constant: \[ P\propto \dfrac{1}{V} \rightarrow PV=x \] where x is a constant depending on the amount of gas at a given temperature. Pressure is inversely proportional to volume. Another form of the equation (assuming there are two sets of conditions, and setting both constants equal to each other) that might help solve problems is: \[ P_1V_1 = x = P_2V_2 \]

Example 1.1
A 17.50 mL sample of gas is at 4.500 atm. What will be the volume if the pressure becomes 1.500 atm, with a fixed amount of gas and temperature?

Charles' Law
In 1787, the French physicist Jacques Charles discovered the correlation between temperature (T) and volume (V), assuming pressure (P) and amount of gas (n) remain constant: \[ V \propto T \rightarrow V=yT \] where y is a constant depending on the amount of gas and the pressure. Volume is directly proportional to temperature. Another form of the equation (assuming there are two sets of conditions, and setting both constants equal to each other) that might help solve problems is: \[ \dfrac{V_1}{T_1} = y = \dfrac{V_2}{T_2} \]

Example 1.2
A sample of carbon dioxide in a pump has a volume of 20.5 mL at 40.0 °C. With the amount of gas and the pressure held constant, what will its volume be when the temperature is raised to 65.0 °C? \[ V_2=\dfrac{V_1 \cdot T_2}{T_1} =\dfrac{20.5\,mL \cdot (65.0+273.15)\,K}{(40.0+273.15)\,K} = 22.1\,mL \]

Avogadro's Law
In 1811, Amedeo Avogadro fixed Gay-Lussac's issue in finding the correlation between the amount of gas (n) and volume (V), assuming temperature (T) and pressure (P) remain constant: \[ V \propto n \rightarrow V = zn\] where z is a constant depending on pressure and temperature. Volume (V) is directly proportional to the amount of gas (n). Another form of the equation (assuming there are two sets of conditions, and setting both constants equal to each other) that might help solve problems is: \[ \dfrac{V_1}{n_1} = z = \dfrac{V_2}{n_2}\]

Example 1.3
A sample of 3.80 g of oxygen gas in a pump has a volume of 150 mL at
constant temperature and pressure. If 1.20 g of oxygen gas is added into the pump, what will be the new volume of oxygen gas in the pump if temperature and pressure are held constant? \[ n_1= \dfrac{m_1}{M_{O_2}}, \qquad n_2= \dfrac{m_2}{M_{O_2}} \] \[ V_2=\dfrac{V_1 \cdot n_2}{n_1} = \dfrac{150\,mL \cdot \dfrac{5.00\,g}{32.0\,g \cdot mol^{-1}}}{\dfrac{3.80\,g}{32.0\,g\cdot mol^{-1}}} = 197\,mL\]

Ideal Gas Law
The ideal gas law is the combination of the three simple gas laws. By setting all three laws directly or inversely proportional to volume, you get: \[ V \propto \dfrac{nT}{P}\] Next, replacing the proportionality sign with a constant (R), you get: \[ V = \dfrac{RnT}{P}\] and finally the equation: \[ PV = nRT \] where P = the absolute pressure of the ideal gas, V = the volume of the ideal gas, n = the amount of gas, T = the absolute temperature, and R = the gas constant. The value of R is determined experimentally, and its numerical value depends on the units used: R = 8.3145 J · mol⁻¹ · K⁻¹ (SI units) = 0.082057 L · atm · K⁻¹ · mol⁻¹.

Example 1.4
At 655 mm Hg and 25.0 °C, a sample of gas has a volume of 0.75 L. How many moles of gas are present? \[ n=\frac{PV}{RT} =\frac{655\,mm\,Hg \cdot \frac{1\,atm}{760\,mm\,Hg} \cdot 0.75\,L}{0.082057\,L \cdot atm \cdot mol^{-1} \cdot K^{-1} \cdot (25.0+273.15)\,K} =0.026\,mol\]

Evaluation of the Gas Constant, R
You can get the numerical value of the gas constant, R, from the ideal gas equation, PV = nRT.
At standard temperature and pressure, where the temperature is 0 °C (273.15 K), the pressure is 1 atm, and one mole of gas occupies 22.4140 L: \[ R= \frac{PV}{nT} = \frac{1\,atm \cdot 22.4140\,L}{1\,mol \cdot 273.15\,K} =0.082057 \; L \cdot atm \cdot mol^{-1} \cdot K^{-1} \] In SI units: \[ R= \frac{PV}{nT} = \frac{1.01325 \cdot 10^{5}\,Pa \cdot 2.24140 \cdot 10^{-2}\,m^3}{1\,mol \cdot 273.15\,K} = 8.3145\; Pa \cdot m^3 \cdot mol^{-1} \cdot K^{-1} \]

General Gas Equation
In an ideal gas situation, \( \frac{PV}{nRT} = 1 \) (assuming all gases are "ideal" or perfect). In cases where \( \frac{PV}{nRT} \neq 1 \), or where there are multiple sets of conditions (pressure (P), volume (V), amount of gas (n), and temperature (T)), use the General Gas Equation. Assuming two sets of conditions: Initial case: \( P_iV_i = n_iRT_i \)  Final case: \( P_fV_f = n_fRT_f \) Setting both sides equal to R (which is a constant with the same value in each case), one gets: \[ R= \dfrac{P_iV_i}{n_iT_i} \; \; \; \; \; \; R= \dfrac{P_fV_f}{n_fT_f} \] Substituting one R for the other, one gets the final equation, the General Gas Equation: \[ \dfrac{P_iV_i}{n_iT_i} = \dfrac{P_fV_f}{n_fT_f} \]

Standard Conditions
If in any of the laws a variable is not given, assume that it takes its standard value:
1. Temperature: absolute zero is 0 K = −273.15 °C; convert with T(K) = T(°C) + 273.15 (the temperature must be in Kelvin).
2. Pressure: 1 atmosphere (760 mmHg).
3. Amount: 1 mol of gas occupies 22.4 L at STP.
4. In the Ideal Gas Law, the gas constant R = 8.3145 J · mol⁻¹ · K⁻¹ = 0.082057 L · atm · K⁻¹ · mol⁻¹.

The Van der Waals Equation For Real Gases
The Dutch physicist Johannes van der Waals developed an equation describing the deviation of real gases from the ideal gas. Two correction terms are added to the ideal gas equation: the pressure is corrected by the term \( a\frac{n^2}{V^2} \) and the volume by the term \( -nb \), giving \[ \left(P + a\frac{n^2}{V^2}\right)(V-nb) = nRT \]
Since attractive forces between molecules do exist in real gases, the pressure of a real gas is actually lower than the ideal gas equation predicts. This is accounted for in the van der Waals equation: the correction term \( a\frac{n^2}{V^2} \) corrects the pressure of a real gas for the effect of attractive forces between gas molecules. Similarly, because gas molecules themselves have volume, the volume available to a real gas is smaller than the container volume, and the correction term \( -nb \) corrects the volume for the space filled by the gas molecules.

Practice Problems
1. If 4 L of H₂ gas at 1.43 atm is at standard temperature, and the pressure were to increase by a factor of 2/3, what is the final volume of the H₂ gas? (Hint: Boyle's Law)
2. If 1.25 L of gas exists at 35 °C with a constant pressure of 0.70 atm in a cylindrical block and the volume were to be multiplied by a factor of 3/5, what is the new temperature of the gas? (Hint: Charles's Law)
3. A balloon with 4.00 g of helium gas has a volume of 500 mL. The temperature and pressure remain constant. What will be the new volume of helium in the balloon if another 4.00 g of helium is added? (Hint: Avogadro's Law)

Solutions
1. 2.40 L
To solve this question you need to use Boyle's Law: \[ P_1V_1 = P_2V_2 \] Keeping the key variables in mind, temperature and the amount of gas are constant and can therefore be set aside; the only values needed are:
Initial pressure: 1.43 atm
Initial volume: 4 L
Final pressure: 1.43 atm × 5/3 = 2.38 atm (an increase by a factor of 2/3)
Final volume (unknown): V₂
Plugging these values into the equation you get: V₂ = (1.43 atm × 4 L)/(2.38 atm) = 2.40 L

2. 184.89 K
To solve this question you need to use Charles's Law: \[ \dfrac{V_1}{T_1} = \dfrac{V_2}{T_2} \] Once again keep the key variables in mind. The pressure remained constant, and since the amount of gas is not mentioned, we assume it remains constant.
Otherwise the key variables are:
Initial volume: 1.25 L
Initial temperature: 35 °C + 273.15 = 308.15 K
Final volume: 1.25 L × 3/5 = 0.75 L
Final temperature (unknown): T₂
Since we need to solve for the final temperature, you can rearrange Charles's Law: \[ T_2 = \dfrac{T_1 V_2}{V_1} \] Once you plug in the numbers, you get: T₂ = (308.15 K × 0.75 L)/(1.25 L) = 184.89 K

3. 1000 mL or 1 L
Using Avogadro's Law to solve this problem, you can rearrange the equation into \( V_2=\frac{n_2\cdot V_1}{n_1} \). However, you first need to convert grams of helium gas into moles: \[ n_1 = \frac{4.00\,g}{4.00\,g/mol} = 1\,mol \] Similarly, n₂ = 2 mol. \[ V_2=\frac{n_2 \cdot V_1}{n_1} =\frac{2\,mol \cdot 500\,mL}{1\,mol} = 1000\,mL = 1\,L \]
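As a cross-check (a sketch added here, not part of the original article), the worked examples and the first practice answer above can be verified with a few lines of Python using the numbers given in the text:

```python
# Cross-check the gas-law examples above (all values taken from the text).

# Example 1.1 (Boyle's Law): V2 = P1*V1/P2
v2_boyle = 4.500 * 17.50 / 1.500                        # mL; 52.5 mL

# Example 1.2 (Charles' Law): V2 = V1*T2/T1, temperatures in kelvin
v2_charles = 20.5 * (65.0 + 273.15) / (40.0 + 273.15)   # mL; ≈ 22.1 mL

# Example 1.3 (Avogadro's Law): V2 = V1*n2/n1, with n = m/M and M(O2) = 32.0 g/mol
v2_avogadro = 150 * (5.00 / 32.0) / (3.80 / 32.0)       # mL; ≈ 197 mL

# Example 1.4 (Ideal Gas Law): n = PV/(RT)
R = 0.082057                                            # L·atm·mol⁻¹·K⁻¹
n = (655 / 760) * 0.75 / (R * (25.0 + 273.15))          # mol; ≈ 0.026 mol

# Practice problem 1 (Boyle): pressure increases by a factor of 2/3, i.e. P2 = P1 * 5/3
v2_practice = 1.43 * 4 / (1.43 * 5 / 3)                 # L; 2.40 L

print(v2_boyle, v2_charles, v2_avogadro, n, v2_practice)
```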
A particle moves along the x-axis so that at time t its position is given by $x(t) = t^3-6t^2+9t+11$. During what time intervals is the particle moving to the left? I know that we need the velocity for that, and we can get it by taking the derivative, but I don't know what to do after that. The velocity would then be $v(t) = 3t^2-12t+9$; how could I find the intervals? Fix $c\in\{0,1,\dots\}$, let $K\geq c$ be an integer, and define $z_K=K^{-\alpha}$ for some $\alpha\in(0,2)$. I believe I have numerically discovered that $$\sum_{n=0}^{K-c}\binom{K}{n}\binom{K}{n+c}z_K^{n+c/2} \sim \sum_{n=0}^K \binom{K}{n}^2 z_K^n \quad \text{ as } K\to\infty$$ but cannot ... So, the whole discussion is about some polynomial $p(A)$, for $A$ an $n\times n$ matrix with entries in $\mathbf{C}$, and eigenvalues $\lambda_1,\ldots, \lambda_k$. Anyway, part (a) is about proving that $p(\lambda_1),\ldots, p(\lambda_k)$ are eigenvalues of $p(A)$. That's basically routine computation. No problem there. The next bit is to compute the dimension of the eigenspaces $E(p(A), p(\lambda_i))$. That seems to follow from the same argument: an eigenvector for $A$ is an eigenvector for $p(A)$, so the rest follows. Finally, the last part is to find the characteristic polynomial of $p(A)$. I guess this means in terms of the characteristic polynomial of $A$. Well, we do know what the eigenvalues are... The so-called Spectral Mapping Theorem tells us that the eigenvalues of $p(A)$ are exactly the $p(\lambda_i)$. Usually, by the time you start talking about complex numbers you consider the real numbers as a subset of them, since a and b are real in a + bi. But you could define it that way and call it a "standard form", like ax + by = c for linear equations :-) @Riker "a + bi where a and b are integers" Complex numbers a + bi where a and b are integers are called Gaussian integers.
I was wondering if it is easier to factor in a non-UFD than it is to factor in a UFD. I can come up with arguments for that, but I also have arguments in the opposite direction. For instance: it should be easier to factor when there are more possibilities (multiple factorizations in a non-UFD)... Does anyone know if $T: V \to R^n$ is an inner product space isomorphism if $T(v) = (v)_S$, where $S$ is a basis for $V$? My book isn't saying so explicitly, but there was a theorem saying that an inner product isomorphism exists, and another theorem kind of suggesting that it should work. @TobiasKildetoft Sorry, I meant that they should be equal (accidentally sent this before writing my answer; writing it now). Isn't there a theorem saying that if $v,w \in V$ ($V$ being an inner product space), then $||v|| = ||(v)_S||$? (where the left norm is defined as the norm in $V$ and the right norm is the Euclidean norm) I thought that this would somehow result from the isomorphism. @AlessandroCodenotti Actually, such an $f$ in fact needs to be surjective. Take any $y \in Y$; the maximal ideal of $k[Y]$ corresponding to it is $(Y_1 - y_1, \cdots, Y_n - y_n)$. The ideal corresponding to the subvariety $f^{-1}(y) \subset X$ in $k[X]$ is then nothing but $(f^* Y_1 - y_1, \cdots, f^* Y_n - y_n)$. If $f^{-1}(y)$ is empty, the weak Nullstellensatz kicks in to say that there are $g_1, \cdots, g_n \in k[X]$ such that $\sum_i (f^* Y_i - y_i)g_i = 1$. Well, better to say that $(f^* Y_1 - y_1, \cdots, f^* Y_n - y_n)$ is the trivial ideal, I guess. Hmm, I'm stuck again. O(n) acts transitively on S^(n-1) with stabilizer at a point O(n-1). For any transitive G-action on a set X with stabilizer H, G/H $\cong$ X set-theoretically. In this case, as the action is a smooth action by a Lie group, you can prove this set-theoretic bijection gives a diffeomorphism.
Regularity of extremal solutions of semilinear elliptic problems with non-convex nonlinearities on general domains

1. School of Mathematics, Iran University of Science and Technology, Narmak, Tehran, Iran
2. School of Mathematics, Institute for Research in Fundamental Sciences (IPM), Tehran, P.O. Box: 19395-5746, Iran

We consider the semilinear elliptic problem $ -\Delta u =\lambda f(u) $ on a bounded domain $ \Omega \subset \Bbb{R}^{n} $, where $ f $ is a $ C^{1} $ function on $ [0, \infty) $ with $ \frac{f(t)}{t} \rightarrow \infty $ as $ t \rightarrow \infty $. For general $ \Omega $ and $ f $, the boundedness of the extremal solution $ u^{*} $ is known when $ n = 2 $ [5]. In this paper, we prove this for higher dimensions, depending on the nonlinearity $ f $. Assume that $ \frac{1}{2} < \beta_{-}:=\liminf\limits_{t\rightarrow\infty} \frac{f'(t)F(t)}{f(t)^{2}}\leq \beta_{+}:=\limsup\limits_{t\rightarrow\infty} \frac{f'(t)F(t)}{f(t)^{2}} < \infty, $ where $ F(t)=\int_{0}^{t}f(s)ds $. Then $ u^{*} \in L^{\infty}(\Omega) $ for $ n \leq 6 $; if moreover $\beta_{-}=\beta_{+}>\frac{1}{2} $ or $ \frac{1}{2} < \beta_{-}\leq \beta_{+} < \frac{7}{10} $, then $ u^{*} \in L^{\infty}(\Omega) $ for $ n \leq 9 $; and if $ \beta_{-} > \frac{1}{2} $, then $ u^{*} \in H^{1}_{0}(\Omega) $ for every $ n \geq 1 $. Similar results hold for every $ \epsilon > 0 $ under the condition $$ \frac{tf'(t)}{f(t)} \geq 1+\frac{1}{(\ln t)^{2-\epsilon}} ~~ \text{for large} ~ t $$ [4].

Mathematics Subject Classification: Primary: 35K57, 35B65; Secondary: 35J60.

Citation: Asadollah Aghajani. Regularity of extremal solutions of semilinear elliptic problems with non-convex nonlinearities on general domains. Discrete & Continuous Dynamical Systems - A, 2017, 37 (7): 3521-3530. doi: 10.3934/dcds.2017150

References: [1] [2] S. Agmon, A. Douglis and L. Nirenberg, Estimates near the boundary for solutions of elliptic partial differential equations satisfying general boundary conditions. Ⅰ, [3] [4] [5] [6] [7] X. Cabré, A. Capella and M. Sanchón, Regularity of radial minimizers of reaction equations involving the $ p $-Laplacian, [8] X. Cabré and X. Ros-Oton, Regularity of stable solutions up to dimension 7 in domains of double revolution, [9] [10] X. Cabré, M. Sanchón and J.
Spruck, A priori estimates for semistable solutions of semilinear elliptic equations, [11] M. G. Crandall and P. H. Rabinowitz, Some continuation and variational methods for positive solutions of nonlinear elliptic eigenvalue problems, [12] [13] [14] [15] F. Mignot and J.-P. Puel, Sur une classe de problèmes non linéaires avec non linéarité positive, [16] [17] [18] [19] [20] [21] [22]
Group Generated by Commutators of Two Normal Subgroups is a Normal Subgroup

Problem 129
Let $G$ be a group and $H$ and $K$ be subgroups of $G$. For $h \in H$ and $k \in K$, we define the commutator $[h, k]:=hkh^{-1}k^{-1}$. Let $[H,K]$ be the subgroup of $G$ generated by all such commutators. Show that if $H$ and $K$ are normal subgroups of $G$, then the subgroup $[H, K]$ is normal in $G$.

We first prove that a conjugate of each generator is in $[H,K]$. Let $h \in H, k \in K$. For any $g \in G$, by inserting $g^{-1}g=e$ between the factors we have\begin{align*}g[h,k]g^{-1}&=ghkh^{-1}k^{-1}g^{-1}=(ghg^{-1})(gkg^{-1})(gh^{-1}g^{-1})(gk^{-1}g^{-1})\\& = (ghg^{-1})(gkg^{-1})(ghg^{-1})^{-1}(gkg^{-1})^{-1}.\end{align*} Now note that $ghg^{-1} \in H$ since $H$ is normal in $G$, and $gkg^{-1} \in K$ since $K$ is normal in $G$. Thus $g[h,k]g^{-1} \in [H, K]$. By taking the inverse of the above equality, we also see that $g[k,h]g^{-1} \in [H, K]$. Thus the conjugate of the inverse $[h,k]^{-1}=[k,h]$ is in $[H, K]$.

Next, note that any element $x \in [H,K]$ is a product of generators or their inverses. So let us write\[x=[h_1, k_1]^{\pm 1}[h_2, k_2]^{\pm 1}\cdots [h_n, k_n]^{\pm 1},\]where $h_i \in H, k_i\in K$ for $i=1,\dots, n$. Then for any $g \in G$, we have\begin{align*}gxg^{-1}=(g[h_1, k_1]^{\pm 1}g^{-1})(g[h_2, k_2]^{\pm 1}g^{-1})\cdots(g [h_n, k_n]^{\pm 1} g^{-1}).\end{align*} We saw that the conjugate of a generator, or of its inverse, by $g \in G$ is in $[H,K]$. Thus $gxg^{-1}$ is also in $[H, K]$. This proves that the group $[H,K]$ is a normal subgroup of $G$.
Production of charged pions, kaons and protons at large transverse momenta in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV (Elsevier, 2014-09) Transverse momentum spectra of $\pi^{\pm}, K^{\pm}$ and $p(\bar{p})$ up to $p_T$ = 20 GeV/c at mid-rapidity, |y| $\le$ 0.8, in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV have been measured using the ALICE detector ... Beauty production in pp collisions at √s=2.76 TeV measured via semi-electronic decays (Elsevier, 2014-11) The ALICE Collaboration at the LHC reports measurement of the inclusive production cross section of electrons from semi-leptonic decays of beauty hadrons with rapidity |y|<0.8 and transverse momentum 1<pT<10 GeV/c, in pp ...
Production of light nuclei and anti-nuclei in $pp$ and Pb-Pb collisions at energies available at the CERN Large Hadron Collider (American Physical Society, 2016-02) The production of (anti-)deuteron and (anti-)$^{3}$He nuclei in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV has been studied using the ALICE detector at the LHC. The spectra exhibit a significant hardening with ... Forward-central two-particle correlations in p-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 TeV (Elsevier, 2016-02) Two-particle angular correlations between trigger particles in the forward pseudorapidity range ($2.5 < |\eta| < 4.0$) and associated particles in the central range ($|\eta| < 1.0$) are measured with the ALICE detector in ... Measurement of D-meson production versus multiplicity in p-Pb collisions at $\sqrt{s_{\rm NN}}=5.02$ TeV (Springer, 2016-08) The measurement of prompt D-meson production as a function of multiplicity in p–Pb collisions at $\sqrt{s_{\rm NN}}=5.02$ TeV with the ALICE detector at the LHC is reported. D$^0$, D$^+$ and D$^{*+}$ mesons are reconstructed ... Measurement of electrons from heavy-flavour hadron decays in p–Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV (Elsevier, 2016-03) The production of electrons from heavy-flavour hadron decays was measured as a function of transverse momentum ($p_{\rm T}$) in minimum-bias p–Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV with ALICE at the LHC for $0.5 ... Direct photon production in Pb-Pb collisions at $\sqrt{s_{NN}}$=2.76 TeV (Elsevier, 2016-03) Direct photon production at mid-rapidity in Pb-Pb collisions at $\sqrt{s_{\rm NN}} = 2.76$ TeV was studied in the transverse momentum range $0.9 < p_{\rm T} < 14$ GeV/$c$. Photons were detected via conversions in the ALICE ...
Multi-strange baryon production in p-Pb collisions at $\sqrt{s_\mathbf{NN}}=5.02$ TeV (Elsevier, 2016-07) The multi-strange baryon yields in Pb--Pb collisions have been shown to exhibit an enhancement relative to pp reactions. In this work, $\Xi$ and $\Omega$ production rates have been measured with the ALICE experiment as a ... $^{3}_{\Lambda}\mathrm H$ and $^{3}_{\bar{\Lambda}} \overline{\mathrm H}$ production in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV (Elsevier, 2016-03) The production of the hypertriton nuclei $^{3}_{\Lambda}\mathrm H$ and $^{3}_{\bar{\Lambda}} \overline{\mathrm H}$ has been measured for the first time in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV with the ALICE ... Multiplicity dependence of charged pion, kaon, and (anti)proton production at large transverse momentum in p-Pb collisions at $\sqrt{s_{\rm NN}}$= 5.02 TeV (Elsevier, 2016-09) The production of charged pions, kaons and (anti)protons has been measured at mid-rapidity ($-0.5 < y < 0$) in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV using the ALICE detector at the LHC. Exploiting particle ... Jet-like correlations with neutral pion triggers in pp and central Pb–Pb collisions at 2.76 TeV (Elsevier, 2016-12) We present measurements of two-particle correlations with neutral pion trigger particles of transverse momenta $8 < p_{\mathrm{T}}^{\rm trig} < 16 \mathrm{GeV}/c$ and associated charged particles of $0.5 < p_{\mathrm{T}}^{\rm ... Centrality dependence of charged jet production in p-Pb collisions at $\sqrt{s_\mathrm{NN}}$ = 5.02 TeV (Springer, 2016-05) Measurements of charged jet production as a function of centrality are presented for p-Pb collisions recorded at $\sqrt{s_{\rm NN}} = 5.02$ TeV with the ALICE detector. Centrality classes are determined via the energy ...
I must use each of the numbers 2, 0, 1, 9 (each exactly once) to come up with an answer of 76.

How about this: $\frac{9}{.\overline{1}} - \frac{0!}{.2} = 76$, where $.\overline{1} = .111111\ldots$

$$-\log_{\sqrt{9}!-2}(\log(\underbrace{\sqrt{\sqrt{...\sqrt{10}}}}_\text{158 square roots})$$ The "158" is not part of the equation; normally, you'd write down all 158 square roots.

I was doing a bit of research into factorials, and found both hyperfactorials (denoted by an $H$) and alternating factorials (denoted by an $AF$). Hopefully this answer fulfills your need. $AF(\sqrt{9} + 1) * H(2 + 0)$ First we take the hyperfactorial $H(2 + 0)$, giving $AF(\sqrt{9} + 1) * 4$. We then resolve the other pair of brackets: $AF(4) * 4$. Now we take the alternating factorial: $19 * 4$. And then some basic multiplication gives $76$.

(2+1)! + 0! = 7, concatenated with 9 flipped over = 76. Or, since you used 19 in your guess, I'm assuming concatenating the original numbers is allowed: (9+2)!!!!! + 10

(9^2)-(2^2)-(1^2)-(0^2)=76. If we take the square of each number and apply subtraction, we get 76.

$(((\sqrt{9})!)!!-10)*2 = 76$

Here is the answer! Finally! And thanks to all who helped in the spirit of solving a puzzle.
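As a quick sanity check of the first answer (added here as an illustration, not part of the original thread), exact rational arithmetic confirms it, since $.\overline{1} = 1/9$ and $.2 = 1/5$:

```python
from fractions import Fraction
from math import factorial

# 9 / .111... - 0! / .2, with .111... = 1/9 and .2 = 1/5
result = 9 / Fraction(1, 9) - factorial(0) / Fraction(1, 5)
print(result)  # 76
```

Each digit 9, 1, 0, 2 is used exactly once, and the arithmetic is exact because no floating point is involved.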
Microscopic Realization of the Kerr/CFT Correspondence

Author: Guica, Monica
Published Version: https://doi.org/10.1007/JHEP02(2011)010
Citation: Guica, Monica, and Andrew Strominger. 2011. Microscopic Realization of the Kerr/CFT Correspondence. Journal of High Energy Physics 2011(2): 1-20.

Abstract: Supersymmetric M/string compactifications to five dimensions contain BPS black string solutions with magnetic graviphoton charge \(P\) and near-horizon geometries which are quotients of \(AdS_3 \times S^2\). The holographic duals are typically known 2D CFTs with central charges \(c_L = c_R = 6P^3\) for large \(P\). These same 5D compactifications also contain non-BPS but extreme Kerr-Newman black hole solutions with \(SU(2)_L\) spin \(J_L\) and electric graviphoton charge \(Q\) obeying \(Q^3 \le J_L^2\). It is shown that in the maximally charged limit \(Q^3 \rightarrow J_L^2\), the near-horizon geometry coincides precisely with the right-moving temperature \(T_R = 0\) limit of the black string with magnetic charge \(P = J_L^{1/3}\). The known dual of the latter is identified as the \(c_L = c_R = 6J_L\) CFT predicted by the Kerr/CFT correspondence. Moreover, at linear order away from maximality, one finds a \(T_R \neq 0\) quotient of the \(AdS_3\) factor of the black string solution, and the associated thermal CFT entropy reproduces the linearly sub-maximal Kerr-Newman entropy. Beyond linear order, for general \(Q^3 < J_L^2\), one has a finite-temperature quotient of a warped deformation of the magnetic string geometry. The corresponding dual deformation of the magnetic string CFT potentially supplies, for the general case, the \(c_L = c_R = 6J_L\) CFT predicted by Kerr/CFT.
Citable link to this page: http://nrs.harvard.edu/urn-3:HUL.InstRepos:8087213
Manual definition of Feynman-Kac models

It is not particularly difficult to define manually your own FeynmanKac classes. Consider the following problem: we would like to approximate the probability that \(X_t \in [a,b]\) for all \(0\leq t < T\), where \((X_t)\) is a random walk: \(X_0\sim N(0,1)\), and \(X_t = X_{t-1} + U_t\) with \(U_t \sim N(0,1)\).

This probability, at time \(t\), equals \(L_t\), the normalising constant of the following Feynman-Kac sequence of distributions: \[ \mathbb{Q}_t(dx_{0:t}) \propto M_0(dx_0)\, G_0(x_0) \prod_{s=1}^{t} M_s(x_{s-1}, dx_s)\, G_s(x_{s-1}, x_s) \] where: \(M_0(dx_0)\) is the \(N(0,1)\) distribution; \(M_s(x_{s-1},dx_s)\) is the \(N(x_{s-1}, 1)\) distribution; \(G_s(x_{s-1}, x_s)= \mathbb{1}_{[a,b]}(x_s)\).

Let’s define the corresponding FeynmanKac object:

[1]:
%matplotlib inline
import warnings; warnings.simplefilter('ignore')  # hide warnings

from matplotlib import pyplot as plt
import seaborn as sb
import numpy as np
from scipy import stats

import particles

class GaussianProb(particles.FeynmanKac):
    def __init__(self, a=0., b=1., T=10):
        self.a, self.b, self.T = a, b, T

    def M0(self, N):
        return stats.norm.rvs(size=N)

    def M(self, t, xp):
        return stats.norm.rvs(loc=xp, size=xp.shape)

    def logG(self, t, xp, x):
        return np.where((x < self.b) & (x > self.a), 0., -np.inf)

The class above defines the initial distribution \(M_0(dx_0)\) and the kernels \(M_t(x_{t-1}, dx_t)\) through methods M0(self, N) and M(self, t, xp). In fact, these methods simulate \(N\) random variables from the corresponding distributions. Function logG(self, t, xp, x) returns the log of function \(G_t(x_{t-1}, x_t)\). Methods M0 and M also define implicitly how the \(N\) particles should be represented internally: as a (N,) numpy array. Indeed, at time \(0\), method M0 generates a (N,) numpy array, and at times \(t\geq 1\), method M takes as an input (xp) and returns as an output arrays of shape (N,).
We could use another type of object to represent our \(N\) particles; for instance, the smc_samplers module defines a ThetaParticles class for storing \(N\) particles representing \(N\) parameter values (and associated information).

Now let's run the corresponding SMC algorithm:

[2]:
```python
fk_gp = GaussianProb(a=0., b=1., T=30)
alg = particles.SMC(fk=fk_gp, N=100)
alg.run()

plt.style.use('ggplot')
plt.plot(alg.summaries.logLts)
plt.xlabel('t')
plt.ylabel(r'log-probability');
```

That was not so hard. However, our implementation suffers from several limitations:

1. The SMC sampler we ran may be quite inefficient when interval \([a,b]\) is small; in that case many particles should get a zero weight at each iteration.
2. We cannot currently run the SQMC algorithm (the quasi-Monte Carlo version of SMC); to do so, we need to specify the Markov kernels \(M_t\) in a different way: not as simulators, but as deterministic functions that take as inputs uniform variates (see below).

Let's address the second point:

[3]:
```python
class GaussianProb(particles.FeynmanKac):
    du = 1  # dimension of uniform variates

    def __init__(self, a=0., b=1., T=10):
        self.a, self.b, self.T = a, b, T

    def M0(self, N):
        return stats.norm.rvs(size=N)

    def M(self, t, xp):
        return stats.norm.rvs(loc=xp, size=xp.shape)

    def Gamma0(self, u):
        return stats.norm.ppf(u)

    def Gamma(self, t, xp, u):
        return stats.norm.ppf(u, loc=xp)

    def logG(self, t, xp, x):
        return np.where((x < self.b) & (x > self.a), 0., -np.inf)


fk_gp = GaussianProb(a=0., b=1., T=30)
```

We have added:

- methods Gamma0 and Gamma, which define the deterministic functions \(\Gamma_0\) and \(\Gamma\) we mentioned above. Mathematically, for \(U\sim \mathcal{U}([0,1]^{d_u})\), \(\Gamma_0(U)\) is distributed according to \(M_0(dx_0)\), and \(\Gamma_t(x_{t-1}, U)\) is distributed according to \(M_t(x_{t-1}, dx_t)\).
- class attribute du, i.e. \(d_u\), the dimension of the \(u\)-argument of functions \(\Gamma_0\) and \(\Gamma_t\).
We are now able to run both the SMC and SQMC algorithms that correspond to the Feynman-Kac model of interest; let's compare their respective performance. (Recall that function multiSMC runs several algorithms multiple times, possibly with varying parameters; here we vary parameter qmc, which determines whether we run SMC or SQMC.)

[5]:
```python
results = particles.multiSMC(fk=fk_gp, qmc={'smc': False, 'sqmc': True},
                             N=100, nruns=10)
sb.boxplot(x=[r['qmc'] for r in results],
           y=[r['output'].logLt for r in results]);
```

We do get some variance reduction, but not so much. Let's see if we can do better by addressing point 1 above. The considered problem has the structure of a state-space model, where process \((X_t)\) is a random walk, \(Y_t = \mathbb{1}_{[a,b]}(X_t)\), and \(y_t=1\) for all \(t\)'s. This remark leads us to define alternative Feynman-Kac models that correspond to the guided and auxiliary formalisms of that state-space model. In particular, for the guided filter, the optimal proposal distribution, i.e. the distribution of \(X_t|X_{t-1}, Y_t\), is simply a Gaussian distribution truncated to the interval \([a, b]\); let's implement the corresponding Feynman-Kac class.
[6]:
```python
def logprobint(a, b, x):
    """Returns log probability that X_t is in [a, b], conditional on X_{t-1} = x."""
    return np.log(stats.norm.cdf(b - x) - stats.norm.cdf(a - x))


class Guided_GP(GaussianProb):
    def Gamma(self, t, xp, u):
        au = stats.norm.cdf(self.a - xp)
        bu = stats.norm.cdf(self.b - xp)
        return xp + stats.norm.ppf(au + u * (bu - au))

    def Gamma0(self, u):
        return self.Gamma(0, 0., u)

    def M(self, t, xp):
        return self.Gamma(t, xp, stats.uniform.rvs(size=xp.shape))

    def M0(self, N):
        return self.Gamma0(stats.uniform.rvs(size=N))

    def logG(self, t, xp, x):
        if t == 0:
            return np.full(x.shape, logprobint(self.a, self.b, 0.))
        else:
            return logprobint(self.a, self.b, xp)


fk_guided = Guided_GP(a=0., b=1., T=30)
```

In this particular case, it is a bit more convenient to define methods Gamma0 and Gamma first, and then define methods M0 and M.

To derive the APF version, we must define the auxiliary functions (functions \(\eta_t\) in Chapter 10 of the book) that modify the resampling probabilities; in practice, we define the log of these functions, as follows:

[7]:
```python
class APF_GP(Guided_GP):
    def logeta(self, t, x):
        return logprobint(self.a, self.b, x)


fk_apf = APF_GP(a=0., b=1., T=30)
```

OK, now everything is set! We can do a 3x2 comparison of SMC versus SQMC, for the 3 considered Feynman-Kac models.

[8]:
```python
results = particles.multiSMC(fk={'boot': fk_gp, 'guided': fk_guided, 'apf': fk_apf},
                             N=100, qmc={'smc': False, 'sqmc': True}, nruns=20)
sb.boxplot(x=['%s-%s' % (r['fk'], r['qmc']) for r in results],
           y=[r['output'].logLt for r in results]);
```

Let's discard the bootstrap algorithms to better visualise the results for the other algorithms:

[9]:
```python
res_noboot = [r for r in results if r['fk'] != 'boot']
sb.boxplot(x=['%s-%s' % (r['fk'], r['qmc']) for r in res_noboot],
           y=[r['output'].logLt for r in res_noboot]);
```

Voilà!
The web presence of the MPIM has been relaunched, offering a fresh design and a number of new features:

- There is a calendar, which can be subscribed to with current calendar applications (see the link at the lower right corner).
- The management of events is improved, e.g. conference programmes are created automatically and in a uniform format.
- The website can handle TeX almost everywhere, i.e. expressions enclosed in \$ are formatted properly. This should improve the readability of abstracts and other pages. Example: $$\zeta(s) := \sum_{n=1}^{\infty} \frac{1}{n^s}\,.$$
- The list of guests is more detailed and can include photos.
- Access to the database of MPIM preprints is improved.
- The site is mostly bilingual.
- The design and structure are modernized.

Of course, everything you see here is work in progress. We are looking forward to your comments and suggestions.
A discrete-time Markov chain (DTMC) is a tuple $M=(S,s_{init},P)$ where $S$ is a finite set of states, $s_{init}\in S$ the initial state, and $P:S\times S\to[0,1]$ the one-step transition probability matrix. For a subset $S'\subseteq S$ with $s_{init},t\in S'$ we define the induced sub-DTMC $M_{S'}=(S',s_{init},P')$ with $P'(s,s')=P(s,s')$ for all $s,s'\in S'$. If the sum of the outgoing probabilities of a state $s\in S'$ is less than 1, a deadlock state is entered with probability $1-\sum_{s'\in S'}P(s,s')$, such that $t$ cannot be reached anymore.

Assume we are given a DTMC $M=(S,s_{init},P)$ and a target state $t\in S$ such that the probability of eventually reaching $t$ from $s_{init}$ is larger than a given bound $\lambda$. We are interested in subsets $S'\subseteq S$ with $s_{init},t\in S'$ such that the probability of reaching $t$ from $s_{init}$ in $M_{S'}$ is still larger than $\lambda$.

Question: What is the complexity of deciding whether there is such an $S'$ containing at most $k$ states (for a given $k$)?

We suspect that this problem is NP-complete (obviously it is in NP: guess an appropriate $S'$ and compute the reachability probabilities to check whether the bound $\lambda$ is exceeded; this can be done by solving a linear equation system). However, we have not found a reduction to show NP-hardness. Maybe someone can help ...

For Markov decision processes, which feature non-deterministic choices in addition to the probabilistic choices of DTMCs, the same problem has been proven to be NP-complete, see Chadha, Viswanathan - A counterexample-guided abstraction-refinement framework for Markov decision processes, ACM Trans. Comput. Log. 12(1), 2010, http://dl.acm.org/citation.cfm?doid=1838552.1838553

The same holds also for DTMCs and PCTL properties, if nested formulae are allowed. However, for pure reachability I was not able to find a proof of NP-hardness.
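For concreteness, the reachability computation mentioned in the membership argument (solving a linear equation system) can be sketched as follows. The function name and deadlock handling are my own; the sketch assumes the resulting system matrix is non-singular (e.g., from every non-target state, either $t$ or a deadlock is eventually reached).

```python
import numpy as np

def reach_prob(P, init, target):
    """Probability of eventually reaching `target` from `init` in a DTMC.

    P may be sub-stochastic: if a row sums to less than 1, the missing
    mass goes to an implicit deadlock state from which `target` is
    unreachable (as for an induced sub-DTMC M_{S'}).
    Solves x_s = P(s, t) + sum_{s' != t} P(s, s') * x_{s'}.
    """
    n = P.shape[0]
    others = [s for s in range(n) if s != target]
    A = np.eye(len(others)) - P[np.ix_(others, others)]
    b = P[others, target]
    x = np.linalg.solve(A, b)
    probs = np.zeros(n)
    probs[target] = 1.0
    probs[others] = x
    return probs[init]

# Two states: from state 0, stay with prob. 0.5, reach target 1 with
# prob. 0.25, deadlock with the remaining 0.25; reachability = 0.5.
P = np.array([[0.5, 0.25],
              [0.0, 1.0]])
print(reach_prob(P, 0, 1))  # → 0.5
```

Deciding the question above would then amount to searching over subsets $S'$, which is exactly the part suspected to be hard.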
Answer

The displacement amplitude is $2.77\times 10^{-7}~m$. The displacement amplitude is about 92 times larger than the average distance between molecules in a room.

Work Step by Step

We can find the intensity of the sound:

$\beta = 10~\log\frac{I}{I_0}$

$60.0 = 10~\log\frac{I}{I_0}$

$6.0 = \log\frac{I}{I_0}$

$10^{6.0} = \frac{I}{I_0}$

$I = (10^{6.0})~I_0 = (10^{6.0})~(1.0\times 10^{-12}~W/m^2) = 1.0\times 10^{-6}~W/m^2$

We can use $343~m/s$ as the speed of sound in air and $\rho = 1.2~kg/m^3$ as the density of air. We can find the displacement amplitude:

$s_0 = \sqrt{\frac{I}{2\pi^2 \rho f^2 v}} = \sqrt{\frac{1.0\times 10^{-6}~W/m^2}{(2\pi^2)(1.2~kg/m^3)(40~Hz)^2(343~m/s)}} = 2.77\times 10^{-7}~m$

The displacement amplitude is $2.77\times 10^{-7}~m$. We can compare the displacement amplitude to $3~nm$:

$\frac{2.77\times 10^{-7}~m}{3\times 10^{-9}~m} \approx 92$

The displacement amplitude is about 92 times larger than the average distance between molecules in a room.
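As a quick numerical check (my own, not part of the original solution), the same two steps can be reproduced in a few lines:

```python
import math

I0 = 1.0e-12                  # reference intensity, W/m^2
beta = 60.0                   # sound level, dB
I = 10 ** (beta / 10) * I0    # intensity from beta = 10 log10(I/I0)

rho = 1.2                     # density of air, kg/m^3
f = 40.0                      # frequency, Hz
v = 343.0                     # speed of sound, m/s

# Displacement amplitude: s0 = sqrt(I / (2 pi^2 rho f^2 v))
s0 = math.sqrt(I / (2 * math.pi**2 * rho * f**2 * v))
print(s0)           # ≈ 2.77e-7 m
print(s0 / 3e-9)    # ≈ 92 times the 3 nm molecular spacing
```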
Now showing items 21-26 of 26

- Measurement of transverse energy at midrapidity in Pb-Pb collisions at $\sqrt{s_{\rm NN}} = 2.76$ TeV (American Physical Society, 2016-09): We report the transverse energy ($E_{\mathrm T}$) measured with ALICE at midrapidity in Pb-Pb collisions at ${\sqrt{s_{\mathrm {NN}}}}$ = 2.76 TeV as a function of centrality. The transverse energy was measured using ...

- Elliptic flow of electrons from heavy-flavour hadron decays at mid-rapidity in Pb–Pb collisions at $\sqrt{s_{\rm NN}}= 2.76$ TeV (Springer, 2016-09): The elliptic flow of electrons from heavy-flavour hadron decays at mid-rapidity ($|y| < 0.7$) is measured in Pb–Pb collisions at $\sqrt{s_{\rm NN}}= 2.76$ TeV with ALICE at the LHC. The particle azimuthal distribution with ...

- Higher harmonic flow coefficients of identified hadrons in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV (Springer, 2016-09): The elliptic, triangular, quadrangular and pentagonal anisotropic flow coefficients for $\pi^{\pm}$, $\mathrm{K}^{\pm}$ and p+$\overline{\mathrm{p}}$ in Pb-Pb collisions at $\sqrt{s_\mathrm{{NN}}} = 2.76$ TeV were measured ...

- D-meson production in $p$–Pb collisions at $\sqrt{s_{\rm NN}}=5.02$ TeV and in $pp$ collisions at $\sqrt{s}=7$ TeV (American Physical Society, 2016-11): The production cross sections of the prompt charmed mesons D$^0$, D$^+$, D$^{*+}$ and D$_{\rm s}^+$ were measured at mid-rapidity in p-Pb collisions at a centre-of-mass energy per nucleon pair $\sqrt{s_{\rm NN}}=5.02$ TeV ...

- Azimuthal anisotropy of charged jet production in $\sqrt{s_{\rm NN}}=2.76$ TeV Pb–Pb collisions (Elsevier, 2016-02): This paper presents measurements of the azimuthal dependence of charged jet production in central and semi-central $\sqrt{s_{\rm NN}}=2.76$ TeV Pb–Pb collisions with respect to the second harmonic event plane, quantified ...
Particle identification in ALICE: a Bayesian approach (Springer Berlin Heidelberg, 2016-05-25) We present a Bayesian approach to particle identification (PID) within the ALICE experiment. The aim is to more effectively combine the particle identification capabilities of its various detectors. After a brief explanation ...
Learning Objectives

- Explain the Ideal Gas Law

There are a number of chemical reactions that require ammonia. In order to carry out the reaction efficiently, we need to know how much ammonia we have for stoichiometric purposes. Using gas laws, we can determine the number of moles present in the tank if we know the volume, temperature, and pressure of the system.

Ideal Gas Law

The combined gas law shows that the pressure of a gas is inversely proportional to volume and directly proportional to temperature. Avogadro's Law shows that volume or pressure is directly proportional to the number of moles of gas. Putting these together leaves us with the following equation:

\[\dfrac{P_1 \times V_1}{T_1 \times n_1} = \dfrac{P_2 \times V_2}{T_2 \times n_2}\]

As with the other gas laws, we can also say that \(\frac{\left( P \times V \right)}{\left( T \times n \right)}\) is equal to a constant. The constant can be evaluated provided that the gas being described is considered to be ideal. The ideal gas law is a single equation which relates the pressure, volume, temperature, and number of moles of an ideal gas. If we substitute in the variable \(R\) for the constant, the equation becomes:

\[\dfrac{P \times V}{T \times n} = R\]

The ideal gas law is conveniently rearranged to look this way, with the multiplication signs omitted:

\[PV = nRT\]

The variable \(R\) in the equation is called the ideal gas constant.

Evaluating the Ideal Gas Constant

The value of \(R\), the ideal gas constant, depends on the units chosen for pressure, temperature, and volume in the ideal gas equation. It is necessary to use Kelvin for the temperature, and it is conventional to use the SI unit of liters for the volume. However, pressure is commonly measured in one of three units: \(\text{kPa}\), \(\text{atm}\), or \(\text{mm} \: \ce{Hg}\). Therefore, \(R\) can have three different values. We will demonstrate how \(R\) is calculated when the pressure is measured in \(\text{kPa}\).
The volume of \(1.00 \: \text{mol}\) of any gas at STP (standard temperature, 273.15 K, and standard pressure, 1 atm) is measured to be \(22.414 \: \text{L}\). We can substitute \(101.325 \: \text{kPa}\) for pressure, \(22.414 \: \text{L}\) for volume, and \(273.15 \: \text{K}\) for temperature into the ideal gas equation and solve for \(R\).

\[\begin{align*} R &= \frac{PV}{nT} \\[4pt] &= \frac{101.325 \: \text{kPa} \times 22.414 \: \text{L}}{1.000 \: \text{mol} \times 273.15 \: \text{K}} \\[4pt] &= 8.314 \: \text{kPa} \cdot \text{L/K} \cdot \text{mol} \end{align*}\]

This is the value of \(R\) that is to be used in the ideal gas equation when the pressure is given in \(\text{kPa}\). The table below shows a summary of this and the other possible values of \(R\). It is important to choose the correct value of \(R\) to use for a given problem.

| Unit of \(P\) | Unit of \(V\) | Unit of \(n\) | Unit of \(T\) | Value and Unit of \(R\) |
|---|---|---|---|---|
| \(\text{kPa}\) | \(\text{L}\) | \(\text{mol}\) | \(\text{K}\) | \(8.314 \: \text{J/K} \cdot \text{mol}\) |
| \(\text{atm}\) | \(\text{L}\) | \(\text{mol}\) | \(\text{K}\) | \(0.08206 \: \text{L} \cdot \text{atm/K} \cdot \text{mol}\) |
| \(\text{mm} \: \ce{Hg}\) | \(\text{L}\) | \(\text{mol}\) | \(\text{K}\) | \(62.36 \: \text{L} \cdot \text{mm} \: \ce{Hg}/\text{K} \cdot \text{mol}\) |

Notice that the unit for \(R\) when the pressure is in \(\text{kPa}\) has been changed to \(\text{J/K} \cdot \text{mol}\). A kilopascal multiplied by a liter is equal to the SI unit for energy, a joule \(\left( \text{J} \right)\).

Example \(\PageIndex{1}\): Oxygen Gas

What volume is occupied by \(3.76 \: \text{g}\) of oxygen gas at a pressure of \(88.4 \: \text{kPa}\) and a temperature of \(19^\text{o} \text{C}\)? Assume the oxygen is ideal.

SOLUTION

Identify the "given" information and what the problem is asking you to "find."
Given: Mass \(\ce{O_2} = 3.76 \: \text{g}\); \(P = 88.4 \: \text{kPa}\); \(T = 19^\text{o} \text{C} = 292 \: \text{K}\)

Find: \(V = ? \: \text{L}\)

List other known quantities

Molar mass \(\ce{O_2} = 32.00 \: \text{g/mol}\)

\(R = 8.314 \: \text{J/K} \cdot \text{mol}\)

Plan the problem

1. Convert the mass of oxygen to moles.
2. Rearrange the ideal gas law to solve for \(V\):

\[V = \frac{nRT}{P} \nonumber\]

Calculate

1. \[3.76 \: \cancel{\text{g}} \times \frac{1 \: \text{mol} \: \ce{O_2}}{32.00 \: \cancel{\text{g}} \: \ce{O_2}} = 0.1175 \: \text{mol} \: \ce{O_2} \nonumber\]

2. Now substitute the known quantities into the equation and solve.

\[V = \frac{nRT}{P} = \frac{0.1175 \: \cancel{\text{mol}} \times 8.314 \: \cancel{\text{J/K}} \cdot \cancel{\text{mol}} \times 292 \: \cancel{\text{K}}}{88.4 \: \cancel{\text{kPa}}} = 3.23 \: \text{L} \: \ce{O_2} \nonumber\]

Think about your result. The number of moles of oxygen is far less than one mole, so the volume should be fairly small compared to molar volume \(\left( 22.4 \: \text{L/mol} \right)\) since the pressure and temperature are reasonably close to standard. The result has three significant figures because of the values for \(T\) and \(P\). Since a joule \(\left( \text{J} \right) = \text{kPa} \cdot \text{L}\), the units cancel out correctly, leaving a volume in liters.

Example \(\PageIndex{2}\): Argon Gas

A 4.22 mol sample of Ar has a pressure of 1.21 atm and a temperature of 34°C. What is its volume?

SOLUTION

Identify the "given" information and what the problem is asking you to "find."

Given: n = 4.22 mol; P = 1.21 atm; T = 34°C

Find: \(V = ? \: \text{L}\)

List other known quantities

none

Plan the problem

1. The first step is to convert temperature to kelvin.
2. Then, rearrange the equation algebraically to solve for \(V\):

\[V = \frac{nRT}{P} \nonumber\]

Calculate

1. 34 + 273 = 307 K

2. Now substitute the known quantities into the equation and solve.

\[ \begin{align*} V &= \frac{(4.22\, \cancel{mol})(0.08206\frac{L \cdot \cancel{atm}}{\cancel{mol \cdot K}})(307\, \cancel{K})}{1.21\,\cancel{atm}} \\[4pt] &= 87.9 \,L \end{align*}\]

Think about your result. The number of moles of Ar is large, so the expected volume should also be large.
Exercise \(\PageIndex{1}\)

A 0.0997 mol sample of \(\ce{O_2}\) has a pressure of 0.692 atm and a temperature of 333 K. What is its volume?

Answer

3.94 L

Exercise \(\PageIndex{2}\)

For a 0.00554 mol sample of \(\ce{H_2}\), P = 23.44 torr and T = 557 K. What is its volume?

Answer

8.21 L

Summary

The ideal gas constant is calculated. An example of calculations using the ideal gas law is shown.
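The table values of \(R\) and both exercise answers can be verified with a short script. This is my own check, not part of the original lesson; the helper function `ideal_gas_volume` is mine.

```python
# Molar volume at STP, used to recompute the R values from the table.
V_STP, n_STP, T_STP = 22.414, 1.000, 273.15

R_kpa = 101.325 * V_STP / (n_STP * T_STP)   # ≈ 8.314  kPa·L/(K·mol)
R_atm = 1.000 * V_STP / (n_STP * T_STP)     # ≈ 0.08206 L·atm/(K·mol)
R_mmhg = 760.0 * V_STP / (n_STP * T_STP)    # ≈ 62.36  L·mmHg/(K·mol)

def ideal_gas_volume(n, T, P, R):
    """Volume from PV = nRT, with P in the units matching R."""
    return n * R * T / P

# Exercise 1: 0.0997 mol O2 at 0.692 atm and 333 K.
print(ideal_gas_volume(0.0997, 333, 0.692, R_atm))    # ≈ 3.94 L

# Exercise 2: 0.00554 mol H2 at 23.44 torr and 557 K
# (torr is the same unit as mm Hg, so R = 62.36 applies).
print(ideal_gas_volume(0.00554, 557, 23.44, R_mmhg))  # ≈ 8.21 L
```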
Difference between revisions of "Geometry and Topology Seminar"

Revision as of 11:32, 10 February 2014

Contents

1 Fall 2013
2 Fall Abstracts
3 Spring 2014
4 Spring Abstracts
5 Archive of past Geometry seminars

Fall 2013

| date | speaker | title | host(s) |
|---|---|---|---|
| September 6 | | | |
| September 13, 10:00 AM in 901! | Alex Zupan (Texas) | Totally geodesic subgraphs of the pants graph | Kent |
| September 20 | | | |
| September 27 | | | |
| October 4 | | | |
| October 11 | | | |
| October 18 | Jayadev Athreya (Illinois) | Gap Distributions and Homogeneous Dynamics | Kent |
| October 25 | Joel Robbin (Wisconsin) | GIT and [math]\mu[/math]-GIT | local |
| November 1 | Anton Lukyanenko (Illinois) | Uniformly quasi-regular mappings on sub-Riemannian manifolds | Dymarz |
| November 8 | Neil Hoffman (Melbourne) | Verified computations for hyperbolic 3-manifolds | Kent |
| November 15 | Khalid Bou-Rabee (Minnesota) | On generalizing a theorem of A. Borel | Kent |
| November 22 | Morris Hirsch (Wisconsin) | Common zeros for Lie algebras of vector fields on real and complex 2-manifolds | local |
| Thanksgiving Recess | | | |
| December 6 | Sean Paul (Wisconsin) | (Semi)stable Pairs I | local |
| December 13 | Sean Paul (Wisconsin) | (Semi)stable Pairs II | local |

Fall Abstracts

Alex Zupan (Texas), "Totally geodesic subgraphs of the pants graph"

Abstract: For a compact surface S, the associated pants graph P(S) consists of vertices corresponding to pants decompositions of S and edges corresponding to elementary moves between pants decompositions. Motivated by the Weil-Petersson geometry of Teichmüller space, Aramayona, Parlier, and Shackleton conjecture that the full subgraph G of P(S) determined by fixing a multicurve is totally geodesic in P(S). We resolve this conjecture in the case that G is a product of Farey graphs. This is joint work with Sam Taylor.

Jayadev Athreya (Illinois), "Gap Distributions and Homogeneous Dynamics"

Abstract: We discuss the notion of gap distributions of various lists of numbers in [0, 1], in particular focusing on those which are associated to certain low-dimensional dynamical systems. We show how to explicitly compute some examples using techniques of homogeneous dynamics, generalizing earlier work on gaps between Farey Fractions. This work gives some possible notions of `randomness' of special trajectories of billiards in polygons, and is based partly on joint works with J. Chaika, with J. Chaika and S. Lelievre, and with Y. Cheung. This talk may also be of interest to number theorists.

Joel Robbin (Wisconsin), "GIT and [math]\mu[/math]-GIT"

Many problems in differential geometry can be reduced to solving a PDE of the form [math] \mu(x)=0 [/math] where [math]x[/math] ranges over some function space and [math]\mu[/math] is an infinite-dimensional analog of the moment map in symplectic geometry. In Hamiltonian dynamics the moment map was introduced to use a group action to reduce the number of degrees of freedom in the ODE.
It was soon discovered that the moment map could be applied to Geometric Invariant Theory: if a compact Lie group [math]G[/math] acts on a projective algebraic variety [math]X[/math], then the complexification [math]G^c[/math] also acts and there is an isomorphism of orbifolds [math] X^s/G^c=X//G:=\mu^{-1}(0)/G [/math] between the space of orbits of Mumford's stable points and the Marsden-Weinstein quotient. In September of 2013 Dietmar Salamon, his student Valentina Georgoulas, and I wrote an exposition of (finite dimensional) GIT from the point of view of symplectic geometry. The theory works for compact Kaehler manifolds, not just projective varieties. I will describe our paper in this talk; the following Monday Dietmar will give more details in the Geometric Analysis Seminar. Anton Lukyanenko (Illinois) Uniformly quasi-regular mappings on sub-Riemannian manifolds Abstract: A quasi-regular (QR) mapping between metric manifolds is a branched cover with bounded dilatation, e.g. f(z)=z^2. In a joint work with K. Fassler and K. Peltonen, we define QR mappings of sub-Riemannian manifolds and show that: 1) Every lens space admits a uniformly QR (UQR) mapping f. 2) Every UQR mapping leaves invariant a measurable conformal structure. The first result uses an explicit "conformal trap" construction, while the second builds on similar results by Sullivan-Tukia and a connection to higher-rank symmetric spaces. Neil Hoffman (Melbourne) Verified computations for hyperbolic 3-manifolds Abstract: Given a triangulated 3-manifold M a natural question is: Does M admit a hyperbolic structure? While this question can be answered in the negative if M is known to be reducible or toroidal, it is often difficult to establish a certificate of hyperbolicity, and so computer methods have developed for this purpose. In this talk, I will describe a new method to establish such a certificate via verified computation and compare the method to existing techniques. 
This is joint work with Kazuhiro Ichihara, Masahide Kashiwagi, Hidetoshi Masai, Shin'ichi Oishi, and Akitoshi Takayasu.

Khalid Bou-Rabee (Minnesota), "On generalizing a theorem of A. Borel"

The proof of the Hausdorff-Banach-Tarski paradox relies on the existence of a nonabelian free group in the group of rotations of [math]\mathbb{R}^3[/math]. To help generalize this paradox, Borel proved the following result on free groups.

Borel's Theorem (1983): Let [math]F[/math] be a free group of rank two. Let [math]G[/math] be an arbitrary connected semisimple linear algebraic group (e.g., [math]G = \mathrm{SL}_n[/math] where [math]n \geq 2[/math]). If [math]\gamma[/math] is any nontrivial element in [math]F[/math] and [math]V[/math] is any proper subvariety of [math]G(\mathbb{C})[/math], then there exists a homomorphism [math]\phi: F \to G(\mathbb{C})[/math] such that [math]\phi(\gamma) \notin V[/math].

What is the class, [math]\mathcal{L}[/math], of groups that may play the role of [math]F[/math] in Borel's Theorem? Since the free group of rank two is in [math]\mathcal{L}[/math], it follows that all residually free groups are in [math]\mathcal{L}[/math]. In this talk, we present some methods for determining whether a finitely generated group is in [math]\mathcal{L}[/math]. Using these methods, we give a concrete example of a finitely generated group in [math]\mathcal{L}[/math] that is *not* residually free. After working out a few other examples, we end with a discussion on how this new theory provides an answer to a question of Breuillard, Green, Guralnick, and Tao concerning double word maps. This talk covers joint work with Michael Larsen.

Morris Hirsch (Wisconsin), "Common zeros for Lie algebras of vector fields on real and complex 2-manifolds"
The celebrated Poincare-Hopf theorem states that a vector field [math]X[/math] on a manifold [math]M[/math] has nonempty zero set [math]Z(X)[/math], provided [math]M[/math] is compact with empty boundary and [math]M[/math] has nonzero Euler characteristic. Surprisingly little is known about the set of common zeros of two or more vector fields, especially when [math]M[/math] is not compact. One of the few results in this direction is a remarkable theorem of Christian Bonatti (Bol. Soc. Brasil. Mat. 22 (1992), 215–247), stated below. When [math]Z(X)[/math] is compact, [math]i(X)[/math] denotes the intersection number of [math]X[/math] with the zero section of the tangent bundle.

• Assume [math]\dim_{\mathbb{R}} M \leq 4[/math], [math]X[/math] is analytic, [math]Z(X)[/math] is compact and [math]i(X) \neq 0[/math]. Then every analytic vector field commuting with [math]X[/math] has a zero in [math]Z(X)[/math].

In this talk I will discuss the following analog of Bonatti's theorem. Let [math]\mathfrak{g}[/math] be a Lie algebra of analytic vector fields on a real or complex 2-manifold [math]M[/math], and set [math]Z(\mathfrak{g}) := \cap_{Y \in \mathfrak{g}} Z(Y)[/math].

• Assume [math]X[/math] is analytic, [math]Z(X)[/math] is compact and [math]i(X) \neq 0[/math]. Let [math]\mathfrak{g}[/math] be generated by analytic vector fields [math]Y[/math] on [math]M[/math] such that the vectors [math][X,Y]_p[/math] and [math]X_p[/math] are linearly dependent at all [math]p \in M[/math]. Then [math]Z(\mathfrak{g}) \cap Z(X) \neq \emptyset [/math].

Related results on Lie group actions, and nonanalytic vector fields, will also be treated.
Sean Paul (Wisconsin), "(Semi)stable Pairs I"

Sean Paul (Wisconsin), "(Semi)stable Pairs II"

Spring 2014

| date | speaker | title | host(s) |
|---|---|---|---|
| January 24 | | | |
| January 31 | Spencer Dowdall (UIUC) | Fibrations and polynomial invariants for free-by-cyclic groups | Kent |
| February 7 | | | |
| February 14 | | | |
| February 21 | Ioana Suvaina (Vanderbilt) | ALE Ricci flat Kahler surfaces from a Tian-Yau construction approach | Maxim |
| February 28 | Jae Choon Cha (POSTECH, Korea) | TBA | Maxim |
| March 7 | | | |
| March 14 | | | |
| Spring Break | | | |
| March 28 | | | |
| April 4 | Matthew Kahle (Ohio) | TBA | Dymarz |
| April 11 | | | |
| April 18 | Pallavi Dani (LSU) | TBA | Dymarz |
| April 25 | Jingzhou Sun (Stony Brook) | TBA | Wang |
| May 2 | | | |
| May 9 | | | |

Spring Abstracts

Spencer Dowdall (UIUC), "Fibrations and polynomial invariants for free-by-cyclic groups"

The beautiful theory developed by Thurston, Fried and McMullen provides a near-complete picture of the various ways a hyperbolic 3-manifold M can fiber over the circle. Namely, there are distinguished convex cones in the first cohomology H^1(M;R) whose integral points all correspond to fibrations of M, and the dynamical features of these fibrations are all encoded by McMullen's "Teichmuller polynomial." This talk will describe recent work developing aspects of this picture in the setting of a free-by-cyclic group G. Specifically, I will introduce a polynomial invariant that determines a convex polygonal cone C in the first cohomology of G whose integral points all correspond to algebraically and dynamically interesting splittings of G. The polynomial invariant additionally provides a wealth of dynamical information about these splittings. This is joint work with Ilya Kapovich and Christopher J. Leininger.

Ioana Suvaina (Vanderbilt), "ALE Ricci flat Kahler surfaces from a Tian-Yau construction approach"

The talk presents an explicit classification of the ALE Ricci flat Kahler surfaces (M,J,g), generalizing previous classification results of Kronheimer.
The manifolds are related to Q-Gorenstein deformations of quotient singularities of type C^2/G, with G a finite subgroup of U(2). Using this classification, we show how these metrics can also be obtained by a construction of Tian-Yau. In particular, we find good compactifications of the underlying complex manifold M.

Matthew Kahle (Ohio): TBA

Pallavi Dani (LSU): TBA

Jingzhou Sun (Stony Brook): TBA
Miloslav Znojil

The quantum-catastrophe (QC) benchmark Hamiltonians of paper I (M. Znojil, J. Phys. A: Math. Theor. 45 (2012) 444036) are reconsidered, with the infinitesimal QC distance \(\lambda\) replaced by the total time \(\tau\) of the fall into the singularity. Our amended model becomes unique, describing the complete QC history as initiated by a Hermitian and diagonalized N-level oscillator Hamiltonian at \(\tau=0\). In the limit \(\tau \to 1\) the system finally collapses into the completely (i.e., N-times) degenerate QC state. The closed and compact Hilbert-space metrics are then calculated and displayed up to N=7. The phenomenon of the QC collapse is finally attributed to the manifest time-dependence of the Hilbert space and, in particular, to the emergence and growth of its anisotropy. A quantitative measure of such a time-dependent anisotropy is found in the spread of the N-plet of the eigenvalues of the metric. Unexpectedly, the model appears exactly solvable: at any multiplicity N, the N-plet of these eigenvalues is obtained in closed form.

http://arxiv.org/abs/1212.0734

Quantum Physics (quant-ph); Mathematical Physics (math-ph)
1. Analogy

Generally speaking, a base is a reference system in which things are expressed. For example, the number $8$ is written $1000$ in base $2$ (binary). They are in fact the same number, but they are expressed (represented) in different bases. The natural base for numbers is base 10. As regards vectors, the natural basis is called the standard basis (also called canonical basis or normal basis).

2. Representations

As for numbers, vectors are represented with respect to a basis.

Example I

Figure 8.1 shows the vector $\vec{v}= \left( \begin{smallmatrix} 11 \\ 8 \end{smallmatrix} \right)$ represented in the standard basis of $\mathbb{R}^2$, namely $STD = \{ \underbrace{\left( \begin{smallmatrix} 1 \\ 0 \end{smallmatrix} \right)}_\vec{e_1}, \underbrace{\left( \begin{smallmatrix} 0 \\ 1 \end{smallmatrix} \right)}_\vec{e_2} \}$. In fact, $\vec{v}$ is the result of a linear combination of the basis vectors:

$$\vec{v} = 11\,\vec{e_1} + 8\,\vec{e_2}$$

Let $B= \{ \underbrace{ \left( \begin{smallmatrix} 4 \\ 1 \end{smallmatrix} \right)}_{\vec{k_1}}, \underbrace{ \left( \begin{smallmatrix} 1 \\ 2 \end{smallmatrix} \right) }_{\vec{k_2}} \}$ be a basis of $\mathbb{R}^2$. Let's represent $\vec{v}$ in $B$. To do that, we need to solve the following equation:

$$\lambda_1 \vec{k_1} + \lambda_2 \vec{k_2} = \vec{v} \quad (3)$$

In $(3)$ the coefficients $\lambda_i$ are the so-called components (also called coordinates). By solving $(3)$ we get $2$ and $3$ as the components of $\vec{v}$ with respect to $B$. Figure 8.2 shows $\vec{v}$ in $B$. We thus have the vector $\vec{v}$ represented in two different bases:

$$\vec{v}_{STD} = \left( \begin{smallmatrix} 11 \\ 8 \end{smallmatrix} \right), \qquad \vec{v}_{B} = \left( \begin{smallmatrix} 2 \\ 3 \end{smallmatrix} \right)$$

3. Conditions

For a set of vectors to be a basis, the following conditions have to be met:

- Any vector of a given space must be constructed from a linear combination of the basis vectors. That is the goal of a basis.
- The vectors of the basis must be linearly independent.

4. Change of basis

To convert a vector from a given basis $B$ into the standard basis, we take the linear combination of the basis vectors given by its components:

$$\vec{v}_{STD} = \lambda_1 \vec{k_1} + \lambda_2 \vec{k_2}$$

Example II

Let's convert $\vec{v}_B=\left( \begin{smallmatrix} 2 \\ 3 \end{smallmatrix} \right)$ from the basis $B=\{\left( \begin{smallmatrix} 4 \\ 1 \end{smallmatrix} \right),\left( \begin{smallmatrix} 1 \\ 2 \end{smallmatrix} \right)\}$ into the standard basis:

$$2 \left( \begin{smallmatrix} 4 \\ 1 \end{smallmatrix} \right) + 3 \left( \begin{smallmatrix} 1 \\ 2 \end{smallmatrix} \right) = \left( \begin{smallmatrix} 11 \\ 8 \end{smallmatrix} \right)$$

Recapitulation

- Any vector is expressed with respect to a given basis.
- By default, vectors are expressed in the standard basis.
- The standard basis of the space $\mathbb{R}^{2}$ is $\{ \left( \begin{smallmatrix} 1 \\ 0 \end{smallmatrix} \right), \left( \begin{smallmatrix} 0 \\ 1 \end{smallmatrix} \right) \}$.
- The standard basis of the space $\mathbb{R}^{3}$ is $\{ \left( \begin{smallmatrix} 1 \\ 0 \\ 0 \end{smallmatrix} \right), \left( \begin{smallmatrix} 0 \\ 1 \\ 0 \end{smallmatrix} \right), \left( \begin{smallmatrix} 0 \\ 0 \\ 1 \end{smallmatrix} \right) \}$.
- A basis of a given space generates, by linear combination, any vector of that space.
- The components (or coordinates) of a vector are its representation in a given basis.
- All bases of $\mathbb{R}^{2}$ have $2$ vectors, all bases of $\mathbb{R}^{3}$ have $3$ vectors, etc. Generally speaking, all bases of $\mathbb{R}^{n}$ have $n$ vectors.
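The worked example translates directly into a numerical check. This is my own sketch: the matrix $B$ below stacks the basis vectors $\vec{k_1}$ and $\vec{k_2}$ as columns, so multiplying by $B$ converts components in $B$ to the standard basis, and solving the linear system goes the other way.

```python
import numpy as np

# Basis from Examples I and II: columns are k1 = (4, 1) and k2 = (1, 2).
B = np.array([[4.0, 1.0],
              [1.0, 2.0]])

v_std = np.array([11.0, 8.0])   # v in the standard basis

# Components of v with respect to B: solve B @ c = v_std.
c = np.linalg.solve(B, v_std)
print(c)                        # → [2. 3.]

# Change of basis back to the standard basis: multiply by B.
print(B @ c)                    # → [11. 8.]
```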
Higher harmonic flow coefficients of identified hadrons in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV (Springer, 2016-09) The elliptic, triangular, quadrangular and pentagonal anisotropic flow coefficients for $\pi^{\pm}$, $\mathrm{K}^{\pm}$ and p+$\overline{\mathrm{p}}$ in Pb-Pb collisions at $\sqrt{s_\mathrm{{NN}}} = 2.76$ TeV were measured ...
Find eigenvalues and a basis for the eigenspace (2) Substituting our new $\lambda$ back into our A matrix, (3) Row reduction here doesn't quite work the way we understand it, but we can notice that each equation in the matrix should have the same nontrivial solution. This was given by the book and I have no justification for it beyond "the book said so." Because this example is based on preliminary reading, I'm not certain as to why, just that it is. I hope to have this question answered in class. (The likely reason: $A - \lambda I$ is singular by construction, so its rows are linearly dependent and both equations describe the same solution set.) Continuing! (4) If we choose a nice $x_2$ (like 5), we eliminate the need for any ugly decimals or even worse…fractions…*shudder* Therefore, $\vec{v}_1 = \begin{bmatrix} -2-4\imath\\5 \end{bmatrix}$ & $\vec{v}_2 = \begin{bmatrix} -2+4\imath\\5 \end{bmatrix}$ Complex eigenvalues of a real matrix ALWAYS occur in complex-conjugate pairs. This is again stated in the book as a fact. If one plugs in our other eigenvalue (which is the complex conjugate of the first), we do in fact find our second eigenvector to be the complex conjugate of the first, but I have not seen it generally proved that this is the case. (A short argument: conjugating both sides of $A\vec{v} = \lambda\vec{v}$ for real $A$ gives $A\bar{\vec{v}} = \bar{\lambda}\bar{\vec{v}}$, so $\bar{\lambda}$ is an eigenvalue with eigenvector $\bar{\vec{v}}$.)
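The conjugate-pair fact is easy to see numerically. A quick illustration (the matrix below is an arbitrary real example, not the one from the book's exercise): compute both roots of the characteristic polynomial of a real $2\times 2$ matrix and compare them.

```python
import cmath

# Any real 2x2 matrix with complex eigenvalues will do; this one is
# purely illustrative (not the matrix from the book's example).
a, b = 0.0, -2.0
c, d = 1.0,  0.0

# Eigenvalues are roots of lambda^2 - (a+d)*lambda + (a*d - b*c) = 0.
tr, det = a + d, a*d - b*c
disc = cmath.sqrt(tr*tr - 4*det)   # negative discriminant -> imaginary part
lam1 = (tr + disc) / 2
lam2 = (tr - disc) / 2
```

Here `lam1` and `lam2` come out as a complex-conjugate pair, exactly as the book asserts for any real matrix.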
February 4, 2015 ece1229 I’ve now posted a first set of notes for the antenna theory course that I am taking this term at UofT. Unlike most of the other classes I have taken, I am not attempting to take comprehensive notes for this class. The class is taught on slides that match the textbook so closely that there is little value in taking notes that just replicate the text. Instead, I am annotating my copy of the textbook with little details. My usual notes collection for the class will contain musings on details that were unclear, or in some cases, details that were provided in class but are not in the text (and too long to pencil into my book.)
The notes linked above include: Reading notes for chapter 2 (Fundamental Parameters of Antennas) and chapter 3 (Radiation Integrals and Auxiliary Potential Functions) of the class text. Geometric Algebra musings. How to formulate Maxwell’s equations when magnetic sources are also included (those modeling magnetic dipoles). Some problems for chapter 2 content. [Click here for a PDF of this post with nicer formatting] This is a small addition to Phasor form of (extended) Maxwell’s equations in Geometric Algebra. Relative to the observer frame implicitly specified by \( \gamma_0 \), here’s an expansion of the curl of the electric four potential \begin{equation}\label{eqn:phasorMaxwellsWithElectricAndMagneticCharges:720} \begin{aligned} \grad \wedge A_{\textrm{e}} &= \inv{2}\lr{ \grad A_{\textrm{e}} - A_{\textrm{e}} \grad } \\ &= \inv{2}\lr{ \gamma_0 \lr{ \spacegrad + j k } \gamma_0 \lr{ A_{\textrm{e}}^0 - \BA_{\textrm{e}} } - \gamma_0 \lr{ A_{\textrm{e}}^0 - \BA_{\textrm{e}} } \gamma_0 \lr{ \spacegrad + j k } } \\ &= \inv{2}\lr{ \lr{ -\spacegrad + j k } \lr{ A_{\textrm{e}}^0 - \BA_{\textrm{e}} } - \lr{ A_{\textrm{e}}^0 + \BA_{\textrm{e}} } \lr{ \spacegrad + j k } } \\ &= \inv{2}\lr{ - 2 \spacegrad A_{\textrm{e}}^0 + j k A_{\textrm{e}}^0 - j k A_{\textrm{e}}^0 + \spacegrad \BA_{\textrm{e}} - \BA_{\textrm{e}} \spacegrad - 2 j k \BA_{\textrm{e}} } \\ &= - \lr{ \spacegrad A_{\textrm{e}}^0 + j k \BA_{\textrm{e}} } + \spacegrad \wedge \BA_{\textrm{e}} \end{aligned} \end{equation} In the above expansion when the gradients appeared on the right of the field components, they are acting from the right (i.e. implicitly using the Hestenes dot convention.)
The electric and magnetic fields can be picked off directly from above, and in the units implied by this choice of four-potential are \begin{equation}\label{eqn:phasorMaxwellsWithElectricAndMagneticCharges:760} \BE_{\textrm{e}} = - \lr{ \spacegrad A_{\textrm{e}}^0 + j k \BA_{\textrm{e}} } = -j \lr{ \inv{k}\spacegrad \spacegrad \cdot \BA_{\textrm{e}} + k \BA_{\textrm{e}} } \end{equation} \begin{equation}\label{eqn:phasorMaxwellsWithElectricAndMagneticCharges:780} c \BB_{\textrm{e}} = \spacegrad \cross \BA_{\textrm{e}}. \end{equation} For the fields due to the magnetic potentials \begin{equation}\label{eqn:phasorMaxwellsWithElectricAndMagneticCharges:800} \lr{ \grad \wedge A_{\textrm{e}} } I = - \lr{ \spacegrad A_{\textrm{e}}^0 + j k \BA_{\textrm{e}} } I - \spacegrad \cross \BA_{\textrm{e}}, \end{equation} so the fields are \begin{equation}\label{eqn:phasorMaxwellsWithElectricAndMagneticCharges:840} c \BB_{\textrm{m}} = - \lr{ \spacegrad A_{\textrm{m}}^0 + j k \BA_{\textrm{m}} } = -j \lr{ \inv{k}\spacegrad \spacegrad \cdot \BA_{\textrm{m}} + k \BA_{\textrm{m}} } \end{equation} \begin{equation}\label{eqn:phasorMaxwellsWithElectricAndMagneticCharges:860} \BE_{\textrm{m}} = -\spacegrad \cross \BA_{\textrm{m}}. \end{equation} Including both electric and magnetic sources the fields are \begin{equation}\label{eqn:phasorMaxwellsWithElectricAndMagneticCharges:900} \BE = -\spacegrad \cross \BA_{\textrm{m}} -j \lr{ \inv{k}\spacegrad \spacegrad \cdot \BA_{\textrm{e}} + k \BA_{\textrm{e}} } \end{equation} \begin{equation}\label{eqn:phasorMaxwellsWithElectricAndMagneticCharges:920} c \BB = \spacegrad \cross \BA_{\textrm{e}} -j \lr{ \inv{k}\spacegrad \spacegrad \cdot \BA_{\textrm{m}} + k \BA_{\textrm{m}} } \end{equation}
Recent developments of the CERN RD50 collaboration / Menichelli, David (U. Florence (main) ; INFN, Florence)/CERN RD50 The objective of the RD50 collaboration is to develop radiation hard semiconductor detectors for very high luminosity colliders, particularly to face the requirements of the possible upgrade of the large hadron collider (LHC) at CERN. Some of the RD50 most recent results about silicon detectors are reported in this paper, with special reference to: (i) the progresses in the characterization of lattice defects responsible for carrier trapping; (ii) charge collection efficiency of n-in-p microstrip detectors, irradiated with neutrons, as measured with different readout electronics; (iii) charge collection efficiency of single-type column 3D detectors, after proton and neutron irradiations, including position-sensitive measurement; (iv) simulations of irradiated double-sided and full-3D detectors, as well as the state of their production process. 2008 - 5 p. - Published in : Nucl. Instrum. Methods Phys. Res., A 596 (2008) 48-52 In : 8th International Conference on Large Scale Applications and Radiation Hardness of Semiconductor Detectors, Florence, Italy, 27 - 29 Jun 2007, pp.48-52 Performance of irradiated bulk SiC detectors / Cunningham, W (Glasgow U.) ; Melone, J (Glasgow U.) ; Horn, M (Glasgow U.) ; Kazukauskas, V (Vilnius U.) ; Roy, P (Glasgow U.) ; Doherty, F (Glasgow U.) ; Glaser, M (CERN) ; Vaitkus, J (Vilnius U.) ; Rahman, M (Glasgow U.)/CERN RD50 Silicon carbide (SiC) is a wide bandgap material with many excellent properties for future use as a detector medium. We present here the performance of irradiated planar detector diodes made from 100-$\mu \rm{m}$-thick semi-insulating SiC from Cree. [...] 2003 - 5 p. - Published in : Nucl. Instrum. Methods Phys.
Res., A 509 (2003) 127-131 In : 4th International Workshop on Radiation Imaging Detectors, Amsterdam, The Netherlands, 8 - 12 Sep 2002, pp.127-131 Measurements and simulations of charge collection efficiency of p$^+$/n junction SiC detectors / Moscatelli, F (IMM, Bologna ; U. Perugia (main) ; INFN, Perugia) ; Scorzoni, A (U. Perugia (main) ; INFN, Perugia ; IMM, Bologna) ; Poggi, A (Perugia U.) ; Bruzzi, M (Florence U.) ; Lagomarsino, S (Florence U.) ; Mersi, S (Florence U.) ; Sciortino, Silvio (Florence U.) ; Nipoti, R (IMM, Bologna) Due to its excellent electrical and physical properties, silicon carbide can represent a good alternative to Si in applications like the inner tracking detectors of particle physics experiments (RD50, LHCC 2002–2003, 15 February 2002, CERN, Ginevra). In this work p$^+$/n SiC diodes realised on a medium-doped ($1 \times 10^{15} \rm{cm}^{−3}$), 40 $\mu \rm{m}$ thick epitaxial layer are exploited as detectors and measurements of their charge collection properties under $\beta$ particle radiation from a $^{90}$Sr source are presented. [...] 2005 - 4 p. - Published in : Nucl. Instrum. Methods Phys. Res., A 546 (2005) 218-221 In : 6th International Workshop on Radiation Imaging Detectors, Glasgow, UK, 25-29 Jul 2004, pp.218-221 Measurement of trapping time constants in proton-irradiated silicon pad detectors / Krasel, O (Dortmund U.) ; Gossling, C (Dortmund U.) ; Klingenberg, R (Dortmund U.) ; Rajek, S (Dortmund U.) ; Wunstorf, R (Dortmund U.) Silicon pad-detectors fabricated from oxygenated silicon were irradiated with 24-GeV/c protons with fluences between $2 \cdot 10^{13} \ n_{\rm{eq}}/\rm{cm}^2$ and $9 \cdot 10^{14} \ n_{\rm{eq}}/\rm{cm}^2$. The transient current technique was used to measure the trapping probability for holes and electrons. [...] 2004 - 8 p. - Published in : IEEE Trans. Nucl. Sci.
51 (2004) 3055-3062 In : 50th IEEE 2003 Nuclear Science Symposium, Medical Imaging Conference, 13th International Workshop on Room Temperature Semiconductor Detectors and Symposium on Nuclear Power Systems, Portland, OR, USA, 19 - 25 Oct 2003, pp.3055-3062 Lithium ion irradiation effects on epitaxial silicon detectors / Candelori, A (INFN, Padua ; Padua U.) ; Bisello, D (INFN, Padua ; Padua U.) ; Rando, R (INFN, Padua ; Padua U.) ; Schramm, A (Hamburg U., Inst. Exp. Phys. II) ; Contarato, D (Hamburg U., Inst. Exp. Phys. II) ; Fretwurst, E (Hamburg U., Inst. Exp. Phys. II) ; Lindstrom, G (Hamburg U., Inst. Exp. Phys. II) ; Wyss, J (Cassino U. ; INFN, Pisa) Diodes manufactured on a thin and highly doped epitaxial silicon layer grown on a Czochralski silicon substrate have been irradiated by high energy lithium ions in order to investigate the effects of high bulk damage levels. This information is useful for possible developments of pixel detectors in future very high luminosity colliders because these new devices present better radiation hardness than present-day silicon detectors. [...] 2004 - 7 p. - Published in : IEEE Trans. Nucl. Sci. 51 (2004) 1766-1772 In : 13th IEEE-NPSS Real Time Conference 2003, Montreal, Canada, 18 - 23 May 2003, pp.1766-1772 Radiation hardness of different silicon materials after high-energy electron irradiation / Dittongo, S (Trieste U. ; INFN, Trieste) ; Bosisio, L (Trieste U. ; INFN, Trieste) ; Ciacchi, M (Trieste U.) ; Contarato, D (Hamburg U., Inst. Exp. Phys. II) ; D'Auria, G (Sincrotrone Trieste) ; Fretwurst, E (Hamburg U., Inst. Exp. Phys. II) ; Lindstrom, G (Hamburg U., Inst. Exp. Phys. II) The radiation hardness of diodes fabricated on standard and diffusion-oxygenated float-zone, Czochralski and epitaxial silicon substrates has been compared after irradiation with 900 MeV electrons up to a fluence of $2.1 \times 10^{15} \ \rm{e} / cm^2$.
The variation of the effective dopant concentration, the current related damage constant $\alpha$ and their annealing behavior, as well as the charge collection efficiency of the irradiated devices have been investigated. 2004 - 7 p. - Published in : Nucl. Instrum. Methods Phys. Res., A 530 (2004) 110-116 In : 6th International Conference on Large Scale Applications and Radiation Hardness of Semiconductor Detectors, Florence, Italy, 29 Sep - 1 Oct 2003, pp.110-116 Recovery of charge collection in heavily irradiated silicon diodes with continuous hole injection / Cindro, V (Stefan Inst., Ljubljana) ; Mandić, I (Stefan Inst., Ljubljana) ; Kramberger, G (Stefan Inst., Ljubljana) ; Mikuž, M (Stefan Inst., Ljubljana ; Ljubljana U.) ; Zavrtanik, M (Ljubljana U.) Holes were continuously injected into irradiated diodes by light illumination of the n$^+$-side. The charge of holes trapped in the radiation-induced levels modified the effective space charge. [...] 2004 - 3 p. - Published in : Nucl. Instrum. Methods Phys. Res., A 518 (2004) 343-345 In : 9th Pisa Meeting on Advanced Detectors, La Biodola, Italy, 25 - 31 May 2003, pp.343-345 First results on charge collection efficiency of heavily irradiated microstrip sensors fabricated on oxygenated p-type silicon / Casse, G (Liverpool U.) ; Allport, P P (Liverpool U.) ; Martí i Garcia, S (CSIC, Catalunya) ; Lozano, M (Barcelona, Inst. Microelectron.) ; Turner, P R (Liverpool U.) Heavy hadron irradiation leads to type inversion of n-type silicon detectors. After type inversion, the charge collected at low bias voltages by silicon microstrip detectors is higher when read out from the n-side compared to p-side read out. [...] 2004 - 3 p. - Published in : Nucl. Instrum. Methods Phys.
Res., A 518 (2004) 340-342 In : 9th Pisa Meeting on Advanced Detectors, La Biodola, Italy, 25 - 31 May 2003, pp.340-342 Formation and annealing of boron-oxygen defects in irradiated silicon and silicon-germanium n$^+$–p structures / Makarenko, L F (Belarus State U.) ; Lastovskii, S B (Minsk, Inst. Phys.) ; Korshunov, F P (Minsk, Inst. Phys.) ; Moll, M (CERN) ; Pintilie, I (Bucharest, Nat. Inst. Mat. Sci.) ; Abrosimov, N V (Unlisted, DE) New findings on the formation and annealing of the interstitial boron-interstitial oxygen complex ($\rm{B_iO_i}$) in p-type silicon are presented. Different types of n$^+$–p structures irradiated with electrons and alpha-particles have been used for DLTS and MCTS studies. [...] 2015 - 4 p. - Published in : AIP Conf. Proc. 1583 (2015) 123-126
The Symmetric Group is a Semi-Direct Product of the Alternating Group and a Subgroup $\langle(1,2) \rangle$ Problem 465 Prove that the symmetric group $S_n$, $n\geq 3$, is a semi-direct product of the alternating group $A_n$ and the subgroup $\langle(1,2) \rangle$ generated by the element $(1,2)$. Definition (Semi-Direct Product). Internal Semi-Direct Product Recall that a group $G$ is said to be an (internal) semi-direct product of subgroups $H$ and $K$ if the following conditions hold. $H$ is a normal subgroup of $G$. $H\cap K=\{e\}$, where $e$ is the identity element in $G$. $G=HK$. In this case, we denote the group by $G=H\rtimes K$. External Semi-Direct Product If $G$ is an internal semi-direct product of $H$ and $K$, it is an external semi-direct product defined by the homomorphism $\phi:K \to \Aut(H)$ given by mapping $k\in K$ to the automorphism of left conjugation by $k$ on $H$. That is, $G \cong H \rtimes_{\phi} K$. Proof. Recall that each element of the symmetric group $S_n$ can be written as a product of transpositions (permutations which exchange only two elements). This defines a group homomorphism $\operatorname{sgn}:S_n\to \{\pm1\}$ that maps each element of $S_n$ that is a product of an even number of transpositions to $1$, and maps each element of $S_n$ that is a product of an odd number of transpositions to $-1$. The alternating group $A_n$ is defined to be the kernel of the homomorphism $\operatorname{sgn}:S_n \to \{\pm1\}$: \[A_n:=\ker(\operatorname{sgn}).\] As it is the kernel, the alternating group $A_n$ is a normal subgroup of $S_n$. Also by the first isomorphism theorem, we have \[S_n/A_n\cong \{\pm1\},\] and it yields that \[|A_n|=\frac{|S_n|}{|\{\pm1\}|}=\frac{n!}{2}.\] Since $\operatorname{sgn}\left(\,(1,2) \,\right)=-1$, the intersection of $A_n$ and $\langle(1,2)\rangle$ is trivial: \[A_n \cap \langle(1,2) \rangle=\{e\}.\] Let $H=A_n$ and $K=\langle(1,2) \rangle$.
Then we have \begin{align*} |HK|=\frac{|H|\cdot |K|}{|H\cap K|}=|H|\cdot | K|=\frac{n!}{2}\cdot 2=n!. \end{align*} Since $HK$ is a subgroup of $S_n$ and both groups have order $n!$, we have $S_n=HK$. In summary, we have observed that $H=A_n$ and $K=\langle(1,2) \rangle$ satisfy the conditions for a semi-direct product of $G=S_n$. Hence \[S_n=A_n\rtimes \langle(1,2) \rangle.\] As an external semi-direct product, it is given by \[S_n \cong A_n\rtimes_{\phi} \langle(1,2) \rangle,\] where $\phi: \langle(1,2) \rangle \to \Aut(A_n)$ is given by \[\phi\left(\, (1,2) \,\right)(x)=(1,2)x(1,2)^{-1}.\]
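For a small $n$, the three conditions (normality, trivial intersection, $G=HK$) can be verified by brute force. A sketch in Python, illustrative only, with $n = 3$ and permutations represented as tuples:

```python
from itertools import permutations

n = 3
S = list(permutations(range(n)))                    # S_n as tuples

def compose(p, q):                                  # (p*q)(i) = p(q(i))
    return tuple(p[q[i]] for i in range(n))

def inverse(p):
    inv = [0] * n
    for i, pi in enumerate(p):
        inv[pi] = i
    return tuple(inv)

def sign(p):                                        # parity via inversion count
    inversions = sum(1 for i in range(n) for j in range(i + 1, n) if p[i] > p[j])
    return 1 if inversions % 2 == 0 else -1

A = [p for p in S if sign(p) == 1]                  # alternating group A_n
t = tuple([1, 0] + list(range(2, n)))               # the transposition (1,2)
K = [tuple(range(n)), t]                            # the subgroup <(1,2)>

# The three semi-direct-product conditions from the definition above:
normal  = all(compose(compose(s, a), inverse(s)) in A for s in S for a in A)
trivial = [p for p in A if p in K] == [tuple(range(n))]
product = sorted(set(compose(h, k) for h in A for k in K)) == sorted(S)
```

All three flags come out true, matching the proof; changing `n` to 4 or 5 checks the larger cases at the cost of a longer loop.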
1. Analogy In basic algebra, the solution of the following equation is said to be the multiplicative inverse : Solution of $(1)$ is $\frac{1}{a}$, which is also written $a^{-1}$. In linear algebra, the equivalent equation to $(1)$ is : In $(2)$, $I$ is the so-called identity matrix. It contains $1$’s on its diagonal and $0$’s everywhere else, as follows : $\left( \begin{smallmatrix} 1 & \cdots & 0 \\ \vdots & \phantom{/} 1 & \vdots \\ 0 & \cdots & 1 \end{smallmatrix} \right)$. 2. Calculation The inverse matrix of $A=\left( \begin{smallmatrix} a_{1,1} & \cdots & a_{1,n} \\ \vdots & \cdots & \phantom{1}\vdots \\ a_{n,1} & \cdots & a_{n,n} \end{smallmatrix} \right)$ is obtained by dividing the transpose of the cofactor matrix, denoted $C^{T}$, by the determinant of $A$ : In $(3)$ there are two new concepts: first, the matrix of cofactors (or comatrix); second, the transpose of a matrix. Let’s first discuss how to transpose a matrix. To transpose a matrix, turn its rows into columns. Example I Let $A=\left( \begin{smallmatrix} -5 & 0 \\ -8 & -1 \end{smallmatrix} \right)$. Let’s compute its transpose. Let’s now say more about the cofactor matrix. The cofactor $c_{i,j}$ of $a_{i,j}$ is defined as follows : In $(5)$, $D_{i,j}$ is the determinant of $A$ with row $\textbf{i}$ and column $\textbf{j}$ removed. Example II Let $A=\left( \begin{smallmatrix} -5 & 0 \\ -8 & -1 \end{smallmatrix} \right)$. Let’s compute $D_{1,2}$ of $A$. Example III Let $A=\left( \begin{smallmatrix} 2 & 1 \\ 5 & -1 \end{smallmatrix} \right)$. Let’s compute its inverse. First, let’s compute its determinant : Second, let’s compute the cofactor matrix : Third, let’s compute the transpose of $C$ and thus get the inverse of $A$ : 3. Inverse function More generally, the matrix inverse can be seen as an inverse function. In analysis, an inverse function, denoted $f^{-1}$, associates the images back with the elements of the starting set, as Figure 10.1 illustrates.
It turns out that a function has an inverse only if it is bijective. Example IV Let $t$ : $t$ is not bijective since many $\vec{x}$ are associated to $\vec{y}=\left( \begin{smallmatrix} 0 \\ 0 \end{smallmatrix} \right)$. Indeed, by decomposing, $x_1 \left( \begin{smallmatrix} 2 \\ -6 \end{smallmatrix} \right) + x_2 \left( \begin{smallmatrix} 1 \\-3 \end{smallmatrix} \right) = \left( \begin{smallmatrix} 0 \\ 0 \end{smallmatrix} \right)$ has infinitely many solutions because the vectors are linearly dependent. Recapitulation The matrix inverse of $A$, denoted $A^{-1}$, has the following property : $AA^{-1} = A^{-1}A = I$. A function $f$ has an inverse, denoted $f^{-1}$, only if it is bijective. Therefore, to have an inverse a matrix $A$ has to satisfy the following two conditions : $A$ must be square the columns of $A$ must be linearly independent $A^{-1}$ is obtained by dividing the transpose of the cofactor matrix by the determinant of $A$, namely :
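Example III can be reproduced in a few lines. A sketch in plain Python (no libraries), following the determinant / cofactor / transpose steps above:

```python
# Inverse via the transposed cofactor matrix, for the 2x2 matrix
# A = [[2, 1], [5, -1]] of Example III.
A = [[2, 1], [5, -1]]

det = A[0][0]*A[1][1] - A[0][1]*A[1][0]            # 2*(-1) - 1*5 = -7

# Cofactors of a 2x2 matrix: c_{i,j} = (-1)^(i+j) * D_{i,j},
# where D_{i,j} is the entry left after deleting row i and column j.
C = [[ A[1][1], -A[1][0]],
     [-A[0][1],  A[0][0]]]

# Transpose C, then divide by the determinant: A^{-1} = C^T / det(A).
A_inv = [[C[j][i] / det for j in range(2)] for i in range(2)]

# Sanity check: A * A_inv should be the identity matrix.
prod = [[sum(A[i][k] * A_inv[k][j] for k in range(2)) for j in range(2)]
        for i in range(2)]
```

The product `prod` comes out as the identity (up to floating-point rounding), confirming the cofactor formula on this example.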
Finitely Generated Torsion Module Over an Integral Domain Has a Nonzero Annihilator Problem 432 (a) Let $R$ be an integral domain and let $M$ be a finitely generated torsion $R$-module. Prove that the module $M$ has a nonzero annihilator. In other words, show that there is a nonzero element $r\in R$ such that $rm=0$ for all $m\in M$. Here $r$ does not depend on $m$. (b) Find an example of an integral domain $R$ and a torsion $R$-module $M$ whose annihilator is the zero ideal. Proof. (a) Prove that the module $M$ has a nonzero annihilator. Since $M$ is a finitely generated $R$-module, there is a finite set \[A:=\{a_1, a_2, \dots, a_n\} \subset M\] such that $M=RA$. As $M$ is a torsion $R$-module, for each $a_i\in A\subset M$ there is a nonzero element $r_i\in R$ such that \[r_ia_i=0.\] Define $r\in R$ to be the product of these $r_i$: \[r:=r_1 r_2 \cdots r_n.\] Note that $r$ is a nonzero element of $R$ since each $r_i$ is nonzero and $R$ is an integral domain. We claim that the element $r$ annihilates the module $M$. Let $m$ be an arbitrary element in $M$. Since $M$ is generated by the set $A$, we can write \[m=s_1a_1+s_2a_2+\cdots +s_n a_n\] for some elements $s_1, s_2, \dots, s_n\in R$. Note that since $R$ is an integral domain, it is commutative by definition. Hence we can change the order of the product in $r$ freely. Thus for each $i$ we can write \[r=p_ir_i,\] where $p_i$ is the product of all $r_j$ except $r_i$. Then it follows that we have \begin{align*} ra_i&=p_ir_ia_i=p_i0=0 \tag{*} \end{align*} for each $i$. Using this, we obtain \begin{align*} rm&=r(s_1a_1+s_2a_2+\cdots +s_n a_n)\\ &=rs_1a_1+rs_2a_2+\cdots +rs_n a_n\\ &=s_1ra_1+s_2ra_2+\cdots +s_n ra_n && \text{as $R$ is commutative}\\ &=s_10+s_20+\cdots +s_n 0 && \text{by (*)}\\ &=0. \end{align*} Therefore, for any element $m\in M$ we have proved that $rm=0$. Thus the nonzero element $r$ annihilates the module $M$.
(b) Find an example of an integral domain $R$ and a torsion $R$-module $M$ whose annihilator is the zero ideal. Let $R=\Z$ be the ring of integers. Then $R=\Z$ is an integral domain. Consider the $\Z$-module \[M=\oplus_{i=1}^{\infty}\Zmod{2^i}.\] Then each element $a\in M$ can be written as \[a=(a_1+\Zmod{2}, a_2+\Zmod{2^2}, \dots, a_k+\Zmod{2^k}, 0, 0, \dots)\] for some $a_1, a_2, \dots, a_k\in \Z$. (Here $k$ depends on $a$.) It follows that we have \[2^ka=0,\] and thus $M$ is a torsion $\Z$-module. We now prove that any annihilator of $M$ must be the zero element of $R=\Z$. Let $r\in \Z$ be an annihilator of $M$. Choose an integer $k$ so that $|r| < 2^k$. Consider the element \[a=(0, 0, \dots, 1+\Zmod{2^k}, 0, 0, \dots)\] in $M$. The only nonzero entry of $a$ is at the $k$-th place. Since $r$ is an annihilator, we have \begin{align*} 0=ra=(0, 0, \dots, r+\Zmod{2^k}, 0, 0, \dots) \end{align*} and this implies that $2^k$ divides $r$; because $|r| < 2^k$, this forces $r=0$. We conclude that the annihilator is the zero ideal.
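The conclusion of (b) can be illustrated numerically (a toy check, not part of the proof): for any nonzero integer $r$, the coordinate $\Zmod{2^k}$ with $2^k > |r|$ contains an element, namely $1$, that $r$ does not kill.

```python
# For any nonzero r, pick k with 2**k > |r|; then r acting on the element
# of M = ⊕ Z/2^i that is 1 in the k-th coordinate gives r mod 2^k != 0.
def survives(r):
    k = abs(r).bit_length() + 1        # guarantees 2**k > |r|
    return (r * 1) % (2**k) != 0       # r * (1 + 2^k Z) is nonzero

checks = all(survives(r) for r in range(-100, 101) if r != 0)
```

Every nonzero `r` in the tested range fails to annihilate some coordinate, mirroring the argument that the annihilator ideal is zero.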
Panakhov E. S. Ukr. Mat. Zh. - 2012. - 64, № 11. - pp. 1516-1525 We consider the inverse problem for second-order differential operators with regular singularity and show that the potential function can be uniquely determined by the set of values of eigenfunctions at some interior point and parts of two spectra. Ukr. Mat. Zh. - 2006. - 58, № 1. - pp. 132–138 In the paper, an inverse problem with two given spectra for second order differential operator with singularity of type $\cfrac{2}{r} + \cfrac{l(l+1)}{r^2}$ (here, $l$ is a positive integer or zero) at zero point is studied. It is well known that two spectra $\{\lambda_n\}$ and $\{\mu_n\}$ uniquely determine the potential function $q(r)$ in a singular Sturm-Liouville equation defined on interval $(0, \pi]$. One of the aims of the paper is to prove the generalized degeneracy of the kernel $K(r, s)$. In particular, we obtain a new proof of Hochstadt's theorem concerning the structure of the difference $\widetilde{q}(r) - q(r)$.
This example explores the physics of the damped harmonic oscillator by solving the equations of motion in the case of no driving forces, investigating the cases of under-, over-, and critical damping. Derive Equation of Motion Solve the Equation of Motion (F = 0) Underdamped Case ($\zeta < 1$) Overdamped Case ($\zeta > 1$) Critically Damped Case ($\zeta = 1$) Conclusion Consider a forced harmonic oscillator with damping shown below. Model the resistance force as proportional to the speed with which the oscillator moves. Define the equation of motion, where $m$ is the mass, $c$ is the damping coefficient, $k$ is the spring constant, and $F(t)$ is a driving force. syms x(t) m c k F(t) eq = m*diff(x,t,t) + c*diff(x,t) + k*x == F eq(t) = Rewrite the equation using $\gamma = c/m$ and $\omega_0^2 = k/m$. syms gamma omega_0 eq = subs(eq, [c k], [m*gamma, m*omega_0^2]) eq(t) = Divide out the mass $m$. Now we have the equation in a convenient form to analyze. eq = collect(eq, m)/m eq(t) = Solve the equation of motion using dsolve in the case of no external forces, where $F = 0$. Use the initial conditions of unit displacement and zero velocity. vel = diff(x,t); cond = [x(0) == 1, vel(0) == 0]; eq = subs(eq,F,0); sol = dsolve(eq, cond) sol = Examine how to simplify the solution by expanding it. sol = expand(sol) sol = Notice that each term has a factor of $e^{-\gamma t/2}$; use collect to gather these terms. sol = collect(sol, exp(-gamma*t/2)) sol = The term $\sqrt{\gamma^2 - 4\omega_0^2}$ appears in various parts of the solution. Rewrite it in a simpler form by introducing the damping ratio $\zeta = \gamma/(2\omega_0)$. Substituting $\zeta$ into the term above gives $2\omega_0\sqrt{\zeta^2-1}$: syms zeta; sol = subs(sol, ... sqrt(gamma^2 - 4*omega_0^2), ... 2*omega_0*sqrt(zeta^2-1)) sol = Further simplify the solution by substituting $\gamma$ in terms of $\zeta$ and $\omega_0$, using $\gamma = 2\zeta\omega_0$. sol = subs(sol, gamma, 2*zeta*omega_0) sol = We have derived the general solution for the motion of the damped harmonic oscillator with no driving forces. Next, we'll explore three special cases of the damping ratio $\zeta$ where the motion takes on simpler forms. These cases are called underdamped ($\zeta < 1$), overdamped ($\zeta > 1$), and critically damped ($\zeta = 1$).
If $\zeta < 1$, then $\sqrt{\zeta^2-1}$ is purely imaginary. solUnder = subs(sol, sqrt(zeta^2-1), 1i*sqrt(1-zeta^2)) solUnder = Notice the terms $e^{\pm i \omega_0 \sqrt{1-\zeta^2}\, t}$ in the above equation and recall the identity $\cos x = \tfrac{1}{2}(e^{ix} + e^{-ix})$. Rewrite the solution in terms of $\cos$. solUnder = coeffs(solUnder, zeta);solUnder = solUnder(1);c = exp(-omega_0 * zeta * t);solUnder = c * rewrite(solUnder / c, 'cos') solUnder = solUnder(t, omega_0, zeta) = solUnder solUnder(t, omega_0, zeta) = The system oscillates at a natural frequency of $\omega_0\sqrt{1-\zeta^2}$ and decays at an exponential rate of $\zeta\omega_0$. Plot the solution with fplot as a function of $t$ for several values of $\zeta$. z = [0 1/4 1/2 3/4]; w = 1; T = 4*pi; lineStyle = {'-','--',':k','-.'}; fplot(@(t)solUnder(t, w, z(1)), [0 T], lineStyle{1}); hold on; for k = 2:numel(z) fplot(@(t)solUnder(t, w, z(k)), [0 T], lineStyle{k}); end hold off; grid on; xticks(T*linspace(0,1,5)); xticklabels({'0','\pi','2\pi','3\pi','4\pi'}); xlabel('\omega_0 t'); ylabel('amplitude'); lgd = legend('0','1/4','1/2','3/4'); title(lgd,'\zeta'); title('Underdamped'); If $\zeta > 1$, then $\sqrt{\zeta^2-1}$ is purely real and the solution can be rewritten as solOver = sol solOver = solOver = coeffs(solOver, zeta); solOver = solOver(1) solOver = Notice the terms $e^{\pm \omega_0 \sqrt{\zeta^2-1}\, t}$ and recall the identity $\cosh x = \tfrac{1}{2}(e^{x} + e^{-x})$. Rewrite the expression in terms of $\cosh$. c = exp(-omega_0*t*zeta);solOver = c*rewrite(solOver / c, 'cosh') solOver = solOver(t, omega_0, zeta) = solOver solOver(t, omega_0, zeta) = Plot the solution to see that it decays without oscillating. z = 1 + [1/4 1/2 3/4 1]; w = 1; T = 4*pi; lineStyle = {'-','--',':k','-.'}; fplot(@(t)solOver(t, w, z(1)), [0 T], lineStyle{1}); hold on; for k = 2:numel(z) fplot(@(t)solOver(t, w, z(k)), [0 T], lineStyle{k}); end hold off; grid on; xticks(T*linspace(0,1,5)); xticklabels({'0','\pi','2\pi','3\pi','4\pi'}); xlabel('\omega_0 t'); ylabel('amplitude'); lgd = legend('1+1/4','1+1/2','1+3/4','2'); title(lgd,'\zeta'); title('Overdamped'); If $\zeta = 1$, then the solution simplifies to solCritical(t, omega_0) = limit(sol, zeta, 1) solCritical(t, omega_0) = Plot the solution for the critically damped case.
w = 1; T = 4*pi; fplot(solCritical(t, w), [0 T]) xlabel('\omega_0 t'); ylabel('x'); title('Critically damped, \zeta = 1'); grid on; xticks(T*linspace(0,1,5)); xticklabels({'0','\pi','2\pi','3\pi','4\pi'}); We have examined the different damping states for the harmonic oscillator by solving the ODE that represents its motion, using the damping ratio $\zeta$. Plot all three cases together to compare and contrast them. zOver = pi; zUnder = 1/zOver; w = 1; T = 2*pi; lineStyle = {'-','--',':k'}; fplot(@(t)solOver(t, w, zOver), [0 T], lineStyle{1},'LineWidth',2); hold on; fplot(solCritical(t, w), [0 T], lineStyle{2},'LineWidth',2) fplot(@(t)solUnder(t, w, zUnder), [0 T], lineStyle{3},'LineWidth',2); hold off; textColor = lines(3); text(3*pi/2, 0.3 , 'over-damped' ,'Color',textColor(1,:)); text(pi*3/4, 0.05, 'critically-damped','Color',textColor(2,:)); text(pi/8 , -0.1, 'under-damped'); grid on; xlabel('\omega_0 t'); ylabel('amplitude'); xticks(T*linspace(0,1,5)); xticklabels({'0','\pi/2','\pi','3\pi/2','2\pi'}); yticks((1/exp(1))*[-1 0 1 2 exp(1)]); yticklabels({'-1/e','0','1/e','2/e','1'}); lgd = legend('\pi','1','1/\pi'); title(lgd,'\zeta'); title('Damped Harmonic Oscillator');
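As a sanity check outside MATLAB, the closed-form underdamped solution can be compared against direct numerical integration of the same ODE. A sketch in Python; the closed form and the parameter values $\zeta = 0.25$, $\omega_0 = 1$ are illustrative assumptions consistent with the initial conditions $x(0)=1$, $x'(0)=0$ used above:

```python
import math

# Closed-form underdamped solution of x'' + 2*zeta*w0*x' + w0^2*x = 0
# with x(0) = 1, x'(0) = 0:
#   x(t) = exp(-zeta*w0*t) * (cos(wd*t) + (zeta*w0/wd)*sin(wd*t)),
# where wd = w0*sqrt(1 - zeta^2) is the damped natural frequency.
zeta, w0 = 0.25, 1.0
wd = w0 * math.sqrt(1.0 - zeta**2)

def x_exact(t):
    return math.exp(-zeta*w0*t) * (math.cos(wd*t) + (zeta*w0/wd)*math.sin(wd*t))

def x_rk4(t_end, n=20000):
    """Integrate the first-order system (x, v)' with classical RK4."""
    h = t_end / n
    x, v = 1.0, 0.0
    acc = lambda x, v: -2.0*zeta*w0*v - w0**2 * x
    for _ in range(n):
        k1x, k1v = v, acc(x, v)
        k2x, k2v = v + h/2*k1v, acc(x + h/2*k1x, v + h/2*k1v)
        k3x, k3v = v + h/2*k2v, acc(x + h/2*k2x, v + h/2*k2v)
        k4x, k4v = v + h*k3v, acc(x + h*k3x, v + h*k3v)
        x += h/6*(k1x + 2*k2x + 2*k3x + k4x)
        v += h/6*(k1v + 2*k2v + 2*k3v + k4v)
    return x

t_end = 4 * math.pi
err = abs(x_rk4(t_end) - x_exact(t_end))
```

The numerical and analytic values agree to high precision over the plotted interval, which is a cheap cross-check of the derivation.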
In this post we’re going to cover some basic intuition to work on logistic regression for Deep Learning algorithms. Logistic regression is an algorithm for binary classification, which is basically used when you want your model to return 0 or 1. Some examples: is this image a cat? is this email spam? etc. The basic equation is: $$ \begin{align} \hat{y} = w^T x + b \label{basic} \end{align} $$ where: $\mathbf{\hat{y}}$: is the value that our model predicts $\mathbf{w \in \mathbb{R}^n}$: is a vector of $\mathbf{n}$ parameters representing the weights. $\mathbf{x \in \mathbb{R}^n}$: is a vector of $\mathbf{n}$ parameters representing the features. $\mathbf{b \in \mathbb{R}}$: is a scalar representing the bias (or intercept) term $\mathbf{w}$ and $\mathbf{b}$ are the parameters that control the behavior of the model. We can think of $\mathbf{w}$ as the weights that determine how each feature $\mathbf{x_i}$ affects the prediction. The objective of the machine learning algorithm is to learn the parameters $\mathbf{w}$ and $\mathbf{b}$ so that $\mathbf{\hat{y}}$ becomes a good estimate of the probability that $\mathbf{y} = 1$. The output of equation ($\ref{basic}$) is a linear function. So, how do we transform this linear result into a non-linear one?
The answer is the sigmoid function that transforms our input to a binary output: $$ \begin{align} \hat{y} = \sigma(w^T x + b) \end{align} $$ where $$ \begin{align} \sigma(z) = \frac{1}{1 + e^{-z}} \end{align} $$ The sigmoid function can be represented as: As you can see, this activation function allows us to map results to 0 or 1, given: For large positive values of $\mathbf{z}$ we will have $\mathbf{\sigma(z)}$ near 1 For large negative values of $\mathbf{z}$ we will have $\mathbf{\sigma(z)}$ near 0 First of all, we have the loss function, which is computed on a single training example: $$ \begin{align} \mathcal{L}(\hat{y}, y) = - \bigl(y\log\hat{y} + (1 - y) \log(1 - \hat{y})\bigr) \end{align} $$ And the cost function measures how you are performing over the entire training set: $$ \begin{align} \mathcal{J}(w, b) = \frac1m \sum_{i=1}^m \mathcal{L}(\hat{y}^{(i)}, y^{(i)}) \end{align} $$ As we want to perform as well as possible, we are going to try to find the $w$ and $b$ values that minimize this cost function. And that is basically what gradient descent does for us. Gradient descent is one of the most popular optimization methods for neural networks because of its simplicity (although it can have convergence problems due to local minima). Other optimization methods are Adam or RMSprop. The basic idea of gradient descent is that on each iteration the weights are updated incrementally, in the direction given by the slope (the derivative), scaled by a learning rate $\alpha$. A visual interpretation of gradient descent is the following: Given our cost function $\mathcal{J}(w, b)$, weights and bias are updated with the following formulas: $$ \begin{align} w = w - \alpha\,\frac{\partial\,\mathcal{J}(w, b)}{\partial w} \end{align} $$ $$ \begin{align} b = b - \alpha\,\frac{\partial\,\mathcal{J}(w, b)}{\partial b} \end{align} $$ where the symbol $\partial$ in $\partial\,\mathcal{J}(w, b)$ denotes the partial derivative of the cost function.
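The pieces above (sigmoid, loss, cost, update rule) fit together in a few lines. A minimal sketch in plain Python on a single feature, so $w$ and $b$ are scalars; the dataset, learning rate, and iteration count below are made-up illustrative assumptions:

```python
import math

# Toy 1-D dataset (hypothetical): small x -> class 0, large x -> class 1.
xs = [0.5, 1.5, 2.5, 3.5, 1.0, 3.0]
ys = [0,   0,   1,   1,   0,   1  ]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def cost(w, b):
    # J(w, b) = (1/m) * sum of the per-example losses L(y_hat, y)
    total = 0.0
    for x, y in zip(xs, ys):
        y_hat = sigmoid(w*x + b)
        total += -(y*math.log(y_hat) + (1 - y)*math.log(1 - y_hat))
    return total / len(xs)

w, b, alpha = 0.0, 0.0, 0.1
initial_cost = cost(w, b)
for _ in range(2000):
    dw = db = 0.0
    for x, y in zip(xs, ys):
        err = sigmoid(w*x + b) - y      # gradient of the logistic loss w.r.t. z
        dw += err * x
        db += err
    w -= alpha * dw / len(xs)           # w = w - alpha * dJ/dw
    b -= alpha * db / len(xs)           # b = b - alpha * dJ/db
final_cost = cost(w, b)
```

After training, the cost has dropped and the model separates the two classes; frameworks like TensorFlow automate exactly these gradient computations and updates.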
In the next post, we will see how to apply this theory with an example written with python & TensorFlow.
Authors: Malaj V. P., Ratsa M. F. Abstract A system $\Sigma$ of functions from the set $K$ is called a chain (completive) system in $K$, if the set of closed systems $K'$, such that $\Sigma \subseteq K'\subseteq K$, forms a chain with respect to inclusion (is finite). A system $\Sigma \subseteq K$ is precompletive in $K$, if $\Sigma$ is not completive in $K$, but for every function $f$ of $K$, which is not expressible via $\Sigma$, the system $\Sigma \cup \{f\}$ is completive in $K$. Let the classes $J^{\mu}_1,\dots ,J^{\mu}_4$ be the classes of the pseudo-Boolean functions, which preserve respectively the predicates: $(x_1\ne \tau)\& \dots \& (x_{\mu}\ne \tau)\& ((x_1\& \dots \& x_{\mu +1})\ne \tau)$; $(x_1\ne \tau)\& \dots \& (x_{\mu}\ne \tau)\& ((x_1 \vee \dots \vee x_{\mu +1})\ne \tau)$; $(x_1 \ne \tau)\& ((x_1 \vee \dots \vee x_{\mu +1})\ne \tau)\&(\neg x_2=\dots =\neg x_{\mu +1})$; $((x_1\vee \dots \vee x_{\mu+1})\ne \tau)\& (\neg x_1=\dots =\neg x_{\mu +1})$ on the set $\{0,\tau, 1\}$, where $\mu = 1,2,\dots$. Let $J^{\infty}_i = J^1_i\cap J^2_i \cap \dots$ $(i=1,\dots, 4)$. Then the systems $J^{\infty}_1,\dots ,J^{\infty}_4$ are the only classically complete chain and completive systems in the set of all the pseudo-Boolean functions. In the paper it is proved that these systems have a finite basis. Some examples of bases are presented.
If $W_1 \cup W_2$ is a subspace, then $W_1 \subset W_2$ or $W_2 \subset W_1$. $(\implies)$ Suppose that the union $W_1\cup W_2$ is a subspace of $V$. Seeking a contradiction, assume that $W_1 \not \subset W_2$ and $W_2 \not \subset W_1$. This means that there are elements\[x\in W_1\setminus W_2 \text{ and } y \in W_2 \setminus W_1.\] Since $W_1 \cup W_2$ is a subspace, it is closed under addition. Thus, we have $x+y\in W_1 \cup W_2$. It follows that we have either\[x+y\in W_1 \text{ or } x+y\in W_2.\]Suppose that $x+y\in W_1$. Then we write \begin{align*}y=(x+y)-x.\end{align*}Since both $x+y$ and $x$ are elements of the subspace $W_1$, their difference $y=(x+y)-x$ is also in $W_1$. However, this contradicts the choice of $y \in W_2 \setminus W_1$. Similarly, when $x+y\in W_2$, then we have\[x=(x+y)-y\in W_2,\]and this contradicts the choice of $x \in W_1 \setminus W_2$. In either case, we have reached a contradiction. Therefore, we have either $W_1 \subset W_2$ or $W_2 \subset W_1$. If $W_1 \subset W_2$ or $W_2 \subset W_1$, then $W_1 \cup W_2$ is a subspace. $(\impliedby)$ If we have $W_1 \subset W_2$, then it yields that $W_1 \cup W_2=W_2$ and it is a subspace of $V$. Similarly, if $W_2 \subset W_1$, then we have $W_1\cup W_2=W_1$ and it is a subspace of $V$. In either case, the union $W_1 \cup W_2$ is a subspace. 
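The key step of the proof, that closure under addition fails when neither subspace contains the other, can be illustrated numerically. Below, $W_1$ is the $x$-axis and $W_2$ is the $y$-axis in $\mathbb{R}^2$; this concrete pair is an illustrative choice, not part of the original proof:

```python
import numpy as np

def in_W1(v):  # W1 = x-axis: vectors of the form (a, 0)
    return v[1] == 0

def in_W2(v):  # W2 = y-axis: vectors of the form (0, b)
    return v[0] == 0

x = np.array([1.0, 0.0])   # x in W1 \ W2
y = np.array([0.0, 1.0])   # y in W2 \ W1
s = x + y                  # s = (1, 1)

# s lies in neither subspace, so W1 ∪ W2 is not closed under addition
print(in_W1(s), in_W2(s))  # False False
```

This is exactly the $y=(x+y)-x$ argument run backwards: since $s$ escapes both subspaces, the union cannot be a subspace.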
When my information changes, I alter my conclusions. What do you do, sir? —attributed to John Maynard Keynes The last two chapters showed how Bayesians make personal probabilities objective. They can be quantified using betting rates. And they are bound to the laws of probability by Dutch books. But what about learning from evidence? Observation and evidence-based reasoning are the keystones of science. They’re supposed to separate the scientific method from other ways of viewing the world, like superstition or faith. So where do they fit into the Bayesian picture? When we observe something new, we change our beliefs. A doctor sees the results of her patient’s lab test and concludes he doesn’t have strep throat after all, just a bad cold. In Bayesian terms, the beliefs you have before a change are called your priors. We denote your prior beliefs with the familiar operator \(\pr\). Your prior belief about hypothesis \(H\) is written \(\pr(H)\). The new beliefs you form based on the evidence are called your posteriors, denoted \(\po\). What’s the rule for changing your beliefs? When you get new evidence, how do you go from \(\pr(H)\) to \(\po(H)\)? Let’s start by thinking about an example. Imagine you’re about to test a chemical with litmus paper to determine whether it’s an acid or a base. Before you do the test, you think it’s probably an acid if the paper turns red, and it’s probably a base if the paper turns blue. Suppose the paper turns red. Conclusion: the sample is probably an acid. So your new belief in hypothesis \(H\) is determined by your prior conditional belief. Before, you thought \(H\) was probably true if \(E\) is true. When you learn that \(E\) in fact is true, you conclude that \(H\) is probably true. This is the Conditionalization rule: when you learn new evidence \(E\), your posterior probability in hypothesis \(H\) should match your prior conditional probability: \[ \po(H) = \pr(H \given E). \] For example, imagine I’m going to roll a six-sided die behind a screen so you can’t see the result. 
But I’ll tell you whether the result is odd or even. Before I do, what is your personal probability that the die will land on a high number (either \(4\), \(5\), or \(6\))? Let’s assume your answer is \(\pr(H) = 1/2\). Also before I tell you the result, what is your personal probability that the die will land on a high number given that it lands on an even number? Let’s assume your answer here is \(\pr(H \given E) = 2/3\). Figure 18.1: Prior vs. posterior probabilities in a die-roll problem. \(H\) \(=\) the die landed \(4\), \(5\), or \(6\). \(E\) \(=\) the die landed even. \(Pr(H) = 1/2\), \(Pr^*(H) = 2/3\). Now I roll the die and I tell you it did in fact land even. What is your new personal probability that it landed on a high number? Following the Conditionalization rule, \(\po(H) = \pr(H \given E) = 2/3\). We learned how to use Bayes’ theorem to calculate \(\pr(H \given E)\). If we combine Bayes’ theorem with Conditionalization we get: \[ \po(H) = \pr(H) \frac{\pr(E \given H)}{\pr(E)}. \] Because this formula is so useful for figuring out what conclusion to draw from new evidence, the Bayesian school of thought is named after it. Bayesian statisticians use it to evaluate evidence in actual scientific research. And Bayesian philosophers use it to explain the logic behind the scientific method. We learned part of this story back in Chapter 10. Bayes’ theorem provides an objective guide for changing your personal probabilities. Given the prior probabilities on the right-hand side, you can calculate what your new probabilities should be on the left. But where do the prior probabilities on the right come from? Are there any objective rules for determining them? How do we calculate \(\pr(H)\), for example? Let’s go back to our example where I roll a die behind a screen. Before I tell you whether the die landed on an even number, it seems reasonable to assign probability \(1/2\) to the proposition that the die will land on a high number (\(4\), \(5\), or \(6\)). 
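The die example can be checked with a short script using exact fractions; this is just the arithmetic from the text, spelled out:

```python
from fractions import Fraction

# Sample space of a fair six-sided die, each outcome with prior 1/6
outcomes = [1, 2, 3, 4, 5, 6]
pr = {o: Fraction(1, 6) for o in outcomes}

H = {4, 5, 6}           # "high number"
E = {2, 4, 6}           # "even number"

pr_H = sum(pr[o] for o in H)                       # prior Pr(H)
pr_E = sum(pr[o] for o in E)                       # Pr(E)
pr_H_given_E = sum(pr[o] for o in H & E) / pr_E    # Pr(H | E)

# Conditionalization: the posterior equals the prior conditional probability
posterior_H = pr_H_given_E
print(pr_H, posterior_H)  # 1/2 2/3
```

Using `Fraction` instead of floats keeps the answers exact, matching the \(1/2\) and \(2/3\) in the text.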
But what if someone had a different prior probability, like \(\pr(H) = 1/10\)? That seems like a strange opinion to have. Why would they think the die is so unlikely to land on a high number, when there are just as many high numbers as low ones? On the other hand, if you don’t know whether the die is fair, it is possible it’s biased against high numbers. So maybe they’re on to something. And notice, assigning \(\pr(H) = 1/10\) doesn’t violate the laws of probability, as long as they also assign \(\pr(\neg H) = 9/10\). So we couldn’t make a Dutch book against them. Where do prior probabilities come from then? How do we decide whether to start with \(\pr(H) = 1/2\) or \(\pr(H) = 1/10\)? Here is a very natural proposal, the Principle of Indifference: if there are \(n\) possible outcomes, each outcome should have the same prior probability, \(1/n\). The Principle of Indifference dates back to the very early days of probability theory. In fact Laplace seems to have thought it was the central principle of probability. For a long time it was known by a different name: “The Principle of Insufficient Reason”. The idea was that, without any reason to think one outcome more likely than another, they should all get the same probability. In \(1921\) it was renamed “The Principle of Indifference” by economist John Maynard Keynes (1883–1946). The idea behind the new name is that you should be indifferent about which outcome to bet on, since they all have the same probability of winning. In the die example, there are six possible outcomes. So each would have prior probability \(1/6\), and thus \(\pr(H) = 1/2\): \[ \begin{aligned} \pr(H) &= \pr(4) + \pr(5) + \pr(6)\\ &= 1/6 + 1/6 + 1/6\\ &= 1/2. \end{aligned} \] Here’s one more example. In North American roulette, the wheel has \(38\) pockets, \(2\) of which are green: zero (\(\mathtt{0}\)) and double-zero (\(\mathtt{00}\)). If you don’t know whether the wheel is fair, what should your prior probability be that the ball will land in a green pocket? 
Figure 18.2: A North American roulette wheel According to the Principle of Indifference, each space has equal probability, \(1/38\). So \(\pr(G) = 1/19\): \[ \begin{aligned} \pr(G) &= \pr(\mathtt{0}) + \pr(\mathtt{00})\\ &= 1/38 + 1/38\\ &= 1/19. \end{aligned} \] So far so good, but there’s a problem. Sometimes the number of possible outcomes isn’t a finite number \(n\), it’s a continuum. Suppose you had to bet on the angle the roulette wheel will stop at, rather than just the colour it will land on. There’s a continuum of possible angles, from \(0\deg\) to \(360\deg\). It could land at an angle of \(3\deg\), or \(314.1\deg\), or \(100\pi\deg\), etc. So what’s the probability the wheel will stop at, say, an angle between \(180\deg\) and \(270\deg\)? Well, this range is \(1/4\) of the whole range of possibilities from \(0\deg\) to \(360\deg\). So the natural answer is \(1/4\). Generalizing this idea gives us another version of the Principle of Indifference. If there is an interval of possible outcomes from \(a\) to \(b\), the probability of any subinterval from \(c\) to \(d\) is: \[\frac{d-c}{b-a}.\] Figure 18.3: The continuous version of the Principle of Indifference: \(Pr(H)\) is the length of the \(c\)-to-\(d\) interval divided by the length of the whole \(a\)-to-\(b\) interval. The idea is that the prior probability of a hypothesis \(H\) is just the proportion of possibilities where \(H\) occurs. If the full range of possibilities goes from \(a\) to \(b\), and the subrange of \(H\) possibilities is from \(c\) to \(d\), then we just calculate how big that subrange is compared to the whole range. Unfortunately, there’s a serious problem with this way of thinking. In fact it’s so serious that the Principle of Indifference is not accepted as part of the modern theory of probability. You won’t find it in a standard mathematics or statistics textbook on probability. What’s the problem? 
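All three indifference calculations so far (the die, the roulette wheel, and the continuous angle) can be reproduced with exact fractions; this is just the arithmetic from the text:

```python
from fractions import Fraction

# Die: 6 equally likely outcomes, H = {4, 5, 6}
pr_H = 3 * Fraction(1, 6)
print(pr_H)  # 1/2

# North American roulette: 38 equally likely pockets, 2 of them green
pr_G = 2 * Fraction(1, 38)
print(pr_G)  # 1/19

# Continuous version: angle between 180 and 270 out of the 0..360 range
a, b, c, d = 0, 360, 180, 270
pr_angle = Fraction(d - c, b - a)
print(pr_angle)  # 1/4
```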
Imagine a factory makes square pieces of paper, whose sides always have length somewhere between \(1\) and \(3\) feet. What is the probability the sides of the next piece of paper they manufacture will be between \(1\) and \(2\) feet long? Applying the Principle of Indifference we get \(1/2\):\[ \frac{d-c}{b-a} = \frac{2-1}{3-1} = \frac{1}{2}. \]That seems reasonable, but now suppose we rephrase the question. What is the probability that the area of the next piece of paper will be between \(1\) ft\(^2\) and \(4\) ft\(^2\)? Applying the Principle of Indifference again, we get a different number, \(3/8\):\[ \frac{d-c}{b-a} = \frac{4-1}{9-1} = \frac{3}{8}. \]But the answer should have been the same as before: it’s the same question, just rephrased! If the sides are between \(1\) and \(2\) feet long, that’s the same as the area being between \(1\) ft\(^2\) and \(4\) ft\(^2\). Figure 18.4: Joseph Bertrand (1822–1900) presented this paradox in his \(1889\) book Calcul des Probabilités. He used a different example though. Our example is a bit easier to understand, and comes from the book Laws and Symmetry by Bas van Fraassen. So which answer is right, \(1/2\) or \(3/8\)? It depends on which dimension we apply the Principle of Indifference to: length vs. area. And there doesn’t seem to be any principled way of deciding which dimension to use. So we don’t have a principled way to apply the Principle of Indifference. Here’s a video explaining Bertrand’s paradox thanks to wi-phi.com: There’s nothing special about the example of the paper factory, the same problem comes up all the time. Take the continuous roulette wheel. Suppose the angle it stops at depends on how hard it’s spun. The wheel’s starting speed can be anywhere between \(1\) and \(10\) miles per hour, let’s suppose. And if it’s between \(2\) and \(5\) miles per hour, it lands at an angle between \(180\deg\) and \(270\deg\). Otherwise it lands at an angle outside that range. 
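The paper-factory version of the paradox can be made vivid with a quick simulation: sampling the side length uniformly gives one answer, sampling the area uniformly gives another, even though both describe the same factory. The sample size and seed are arbitrary choices:

```python
import random

random.seed(0)
N = 100_000

# Indifference over side length: uniform on [1, 3] feet
hits_length = sum(1 <= random.uniform(1, 3) <= 2 for _ in range(N))

# Indifference over area: uniform on [1, 9] square feet
hits_area = sum(1 <= random.uniform(1, 9) <= 4 for _ in range(N))

# "Side between 1 and 2 ft" and "area between 1 and 4 ft^2" are the
# same event, yet the two parameterizations disagree:
print(hits_length / N)  # close to 1/2
print(hits_area / N)    # close to 3/8
```

The simulation makes the source of the paradox concrete: "indifference" is only defined relative to a parameterization, and nothing in the problem picks one out.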
If we apply the Principle of Indifference to the wheel’s starting speed we get a probability of \(1/3\) that it will land at an angle between \(180\deg\) and \(270\deg\): \[ \frac{d-c}{b-a} = \frac{5-2}{10-1} = \frac{1}{3}. \] But we got an answer of \(1/4\) when we solved the same problem before. Once again, what answer we get depends on how we apply the Principle of Indifference. If we apply it to the final angle we get \(1/4\), if we apply it to the starting speed we get \(1/3\). And there doesn’t seem to be any principled way of deciding which way to go. There is no accepted solution to Bertrand’s paradox. Some Bayesians think it shows that prior probabilities should be somewhat subjective. Your beliefs have to follow the laws of probability to avoid Dutch books. But beyond that you can start with whatever prior probabilities seem right to you. (The Principle of Indifference should be abandoned.) Others think the paradox shows that Bayesianism is too subjective. The whole idea of “prior” and “posterior” probabilities was a mistake, say the frequentists. Probability isn’t a matter of personal beliefs. There are objective rules for using probability to evaluate a hypothesis, but Bayes’ theorem is the wrong way to go about it. So what’s the right way, according to frequentism? The next two chapters introduce the frequentist method. Suppose a carpenter makes circular tables that always have a diameter between \(40\) and \(50\) inches. Use the Principle of Indifference to answer the following questions. (Give exact answers, not decimal approximations.) Joe spends his afternoons whittling cubes that have a side length between \(2\) and \(10\) centimetres. Use the Principle of Indifference to answer the following questions. (Give exact answers, not decimal approximations.) Joel is in New York and he needs to be in Montauk by \(4\):\(00\) to meet Clementine. 
He boards a train departing at \(3\):\(00\) and asks the conductor whether they’ll be in Montauk by \(4\):\(00\). The conductor says the train will arrive some time between \(3\):\(50\) and \(4\):\(12\), but she refuses to be more specific. After thinking it over, Joel realizes that his odds may actually be better than that. It’s a \(60\) mile trip to Montauk, so the train must travel at an average speed between \(a\) and \(b\) miles per hour. A factory makes triangular traffic signs. The height of their signs is always the same as the width of the base. And the base is always between \(3\) and \(6\) feet. A factory makes circular dartboards whose diameter is always between \(1\) and \(2\) feet. Some bars water down their whisky to save money. Suppose the proportion of whisky to water at your local bar is always somewhere between \(1/2\) and \(2\). That is, there’s always at least \(1\) unit of whisky for every \(2\) units of water. But there’s never more than \(2\) units of whisky for every \(1\) unit of water. Suppose you order a “whisky”.
(a) Is it true that $A$ must commute with its transpose? The answer is no. We give a counterexample. Let\[A=\begin{bmatrix}1 & -1\\0& 2\end{bmatrix}.\]Then the transpose of $A$ is\[A^{\trans}=\begin{bmatrix}1 & 0\\-1& 2\end{bmatrix}.\]We compute\[AA^{\trans}=\begin{bmatrix}1 & -1\\0& 2\end{bmatrix}\begin{bmatrix}1 & 0\\-1& 2\end{bmatrix}=\begin{bmatrix}2 & -2\\-2& 4\end{bmatrix},\]and\[A^{\trans}A=\begin{bmatrix}1 & 0\\-1& 2\end{bmatrix}\begin{bmatrix}1 & -1\\0& 2\end{bmatrix}=\begin{bmatrix}1 & -1\\-1& 5\end{bmatrix}.\]Therefore, we see that\[AA^{\trans}\neq A^{\trans} A,\]that is, $A$ does not commute with its transpose $A^{\trans}$. (b) Is it true that the rows of $A$ must also form an orthonormal set? The answer is yes. Note that in general the column vectors of a matrix $M$ form an orthonormal set if and only if $M^{\trans}M=I$, where $I$ is the identity matrix. (Such a matrix is called an orthogonal matrix.) Thus, by assumption we have $A^{\trans} A=I$. Let $B=A^{\trans}$. Then the column vectors of $B$ are the row vectors of $A$. Hence it suffices to show that $B^{\trans}B=I$. Since $A^{\trans} A=I$, we know that $A$ is invertible and the inverse is $A^{-1}=A^{\trans}$. In particular, we have $A^{\trans} A=A A^{\trans}=I$. We have\begin{align*}B^{\trans}B=(A^{\trans})^{\trans}A^{\trans}=(AA^{\trans})^{\trans}=I^{\trans}=I.\end{align*}Thus, we obtain $B^{\trans}B=I$ and by the general fact stated above, the column vectors of $B$ form an orthonormal set. Hence the row vectors of $A$ form an orthonormal set. 
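Both answers can be verified numerically with NumPy, using the counterexample matrix from part (a) and, for part (b), a rotation matrix as a sample orthogonal matrix (the angle is an arbitrary choice):

```python
import numpy as np

# (a) The counterexample: A does not commute with its transpose
A = np.array([[1.0, -1.0], [0.0, 2.0]])
commutes = np.allclose(A @ A.T, A.T @ A)
print(commutes)  # False

# (b) An orthogonal matrix: orthonormal columns force orthonormal rows
theta = 0.7  # any rotation matrix is orthogonal
Q = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
cols_orthonormal = np.allclose(Q.T @ Q, np.eye(2))
rows_orthonormal = np.allclose(Q @ Q.T, np.eye(2))
print(cols_orthonormal, rows_orthonormal)  # True True
```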
Apps for Teaching Mathematical Modeling of Tubular Reactors The Tubular Reactor application is a tool where students can model a nonideal tubular reactor, including radial and axial variations in temperature and composition, and investigate the impact of different operating conditions. It also exemplifies how teachers can build tailored interfaces for problems that challenge the students’ imagination. The model and exercise are originally described in Scott Fogler’s book Elements of Chemical Reaction Engineering. I wish I had access to this type of tool when I was a student! Apps Simplify Teaching and Learning Mathematical Modeling Concepts I still remember the calculus classes at engineering school where we first encountered partial differential equations. Despite the teacher’s efforts in trying to exemplify diffusion with the distance and the time it takes for a shark to detect your blood in the water if you cut yourself while diving, the rest of the course was mostly overshadowed by theorems. Theorems that could prove existence and uniqueness, for relatively simple problems, and by techniques such as variable separation and conformal mapping. Apart from math theory and solving techniques, I realize now that what we really needed, in order to understand mathematical models, was to study the solution to the model equations and investigate this for different assumptions and conditions. The Tubular Reactor with Jacket application in the COMSOL Multiphysics® software version 5.0 gives students the possibility to go from a mathematical model of a nonideal tubular reactor straight to the solution of the corresponding numerical model. The model is taken from an exercise in Scott Fogler’s book Elements of Chemical Reaction Engineering, which is one of the most popular books in undergraduate and graduate courses in chemical reaction engineering. 
Value to the Student The mathematical model consists of an energy balance and a material balance described in an axisymmetric coordinate system. As a student, you can change the activation energy of the reaction, the thermal conductivity, and the heat of reaction in the reactor (see Step 2 in the figure above). The resulting solution gives the axial and radial conversion and temperature profiles in the reactor. For some data, the results from the simulation are not obvious, which means that the interpretation of the model results also becomes a problem-solving exercise. Value to the Teacher The Tubular Reactor app can be accessed by a teacher in the Application Builder. As a teacher, you can investigate how to include model and application documentation in an application’s user interface. You can also learn how to include user interface commands that allow the students to generate a report from each simulation. In addition, the application accessed in the Application Builder also shows you how to create menu bars, ribbons, ribbon tabs, form collections, and forms in an application’s user interface and how to link these user interface components with settings and results in the underlying embedded model. The Tubular Reactor Application The different steps in the exercise for the tubular reactor problem are reflected in the ribbon on Windows® operating systems or in the main toolbar on Linux® operating systems and Mac OS in the application’s user interface. The natural first step is to read the documentation (see Step 1 in Figure 1 above). The students can then change the activation energy and the heat of reaction, as well as the thermal conductivity in the reactor in Step 2. The third step is to compute the solution to the model equations (Step 3). 
This makes it possible for the students to analyze the solution in four different plots (Step 4): Two surface plots that show the temperature and the conversion of the reactant in the reactor, and two cut line plots that show the temperature and conversion of the reactant in the reactor along three different lines placed at three different z-positions (see Figure 3 further down the page for an example). The four different plots are found under their respective tabs in a so-called form collection. The last step is to generate a report (Step 5) that documents the model and the results from the simulation. In this case, the output is in Microsoft® Word® format, but you may also generate HTML reports. For the teacher, the application builder tree and the member form preview in the Application Builder reveal the structure of the app (see Figure 2 below). The Main Window node (labeled 1 in the screenshot below) contains the child nodes that describe the file menu (2) and the ribbon (3). In Linux® operating systems, the ribbon is shown as a toolbar. It also contains a reference to the main form. The Form node (4) contains five forms in this case: One form that describes the main form and four forms that describe the different members in the graphics form collection. These four graphics form members correspond to the four plots mentioned above. Figure 2. The Application Builder user interface that includes the application builder tree to the left and the preview of the included forms to the right. In between is the settings window for each selected form, declaration, method, library, or model nodes. The text input widgets for the activation energy, the thermal conductivity, and the heat of reaction (5) are linked to the corresponding parameters in the embedded model. The range of values is also limited in order to provide a safe input range that does not produce garbage. 
The Declarations node (6) includes the declaration of variables that are not defined in the embedded model. For instance, you can declare a string variable that displays a message in the user interface when the app is run based on a selection by the user. In this example, a string variable is created to show if the simulation results are updated or not (i.e., if the student changes the activation energy without re-solving the model equation, a string variable displayed in the graphics window is set to “*Not Updated”). The application further contains a set of methods (7) that correspond to loading the model documentation, computing the results in a simulation, and generating the report. These methods are linked to the corresponding menus in the ribbon or in the main menu. The methods are graphically generated, but can then be edited manually using the method editor for further flexibility. The Library node (8) contains files that are embedded in the application. In this example, we have a PDF-file that contains the application’s documentation linked to the corresponding ribbon menus. The Tubular Reactor Model The process described by the model is that for the exothermic reaction of propylene oxide with water to form propylene glycol. This reaction takes place in a tubular reactor equipped with a cooling jacket in order to avoid explosion (see the figure in the “Model Results” section below). The reaction takes place in the liquid phase and in the presence of a solvent. The density of the reactor solution is therefore assumed to vary to a negligible extent despite variations in composition and temperature. Under these assumptions, it is possible to define a fully developed velocity profile along the radius of the reactor. The model equations describe the conservation of material and energy. The dependent variables are the concentration, c, and the temperature, T, in the reactor. 
The material and energy equations are defined along two independent variables: the variable for the radial direction, r, and the axial direction, z. These equations form a system of two coupled partial differential equations (PDEs) in r and z. The boundary conditions define the concentration and temperature at the inlet of the reactor. At the outlet, the outward flux of material and energy is dominated by advection and is described accordingly. At the reactor wall, the heat flux is proportional to the temperature difference between the reactor and the cooling jacket. (1) \[\begin{array}{l} \nabla \cdot \left( { - D\nabla c} \right) + \nabla c \cdot {\bf{u}} + {k_f}c = 0\\ c = {c_0}\quad \text{at inlet};\quad \left( {\left( { - D\nabla c} \right) + c{\bf{u}}} \right) \cdot {\bf{n}} = c{\bf{u}} \cdot {\bf{n}}\quad \text{at outlet}\\ \\ \nabla \cdot \left( { - k\nabla T} \right) + \rho {C_p}\nabla T \cdot {\bf{u}} + {k_f}c\,\Delta H = 0\\ T = {T_0}\quad \text{at inlet};\quad \left( {\left( { - k\nabla T} \right) + \rho {C_p}T\,{\bf{u}}} \right) \cdot {\bf{n}} = \rho {C_p}T\,{\bf{u}} \cdot {\bf{n}}\quad \text{at outlet}\\ - k\nabla T \cdot {\bf{n}} = {s_a}h\left( {{T_j} - T} \right)\quad \text{at reactor walls} \end{array}\] Model Results The results from the simulation are quite interesting. For example, the conversion profiles along the radial cut lines display a minimum and a maximum, as seen in Figure 3 below. In Fogler’s book, one of the tasks for the student is to explain these profiles. Here, we can reveal that the profile is explained by the combination of the exothermic reaction, the advective term, and the cooling from the jacket. In the middle of the reactor, the large flow velocity reduces the conversion, since the reactants reach far into the reactor before they react. This is labeled 1 in Figure 3. Figure 3. Cut lines plot of the conversion in the reactor along the radial direction at different axial positions: Inlet, half axial location, and outlet. 
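As a rough feel for what the full 2D model computes, here is a drastically simplified 1D plug-flow sketch of the same balances: radial variation and diffusion are dropped, and every parameter value below is hypothetical, chosen only for illustration (these are not Fogler's data or the app's settings):

```python
import numpy as np

# Hypothetical parameters -- illustrative only, not the model's actual data
u = 0.1                  # axial velocity, m/s
kf0, Ea = 3e11, 7.5e4    # Arrhenius pre-exponential (1/s), activation energy (J/mol)
Rg = 8.314               # gas constant, J/(mol K)
dH = -8.5e4              # heat of reaction, J/mol (exothermic)
rho_cp = 4e6             # rho * Cp, J/(m^3 K)
a_w = 1e4                # lumped wall heat-transfer term, W/(m^3 K)
Tj = 300.0               # jacket temperature, K
c0, T0 = 1000.0, 315.0   # inlet concentration (mol/m^3) and temperature (K)

# March the 1D balances: u dc/dz = -kf c  and
# rho Cp u dT/dz = -dH kf c - a_w (T - Tj), with forward Euler steps
L, n = 1.0, 20000
dz = L / n
c, T = c0, T0
for _ in range(n):
    kf = kf0 * np.exp(-Ea / (Rg * T))        # Arrhenius rate constant
    c += dz * (-kf * c / u)                  # material balance
    T += dz * (-dH * kf * c - a_w * (T - Tj)) / (rho_cp * u)  # energy balance

print(f"conversion = {1 - c / c0:.2f}, outlet T = {T:.1f} K")
```

Even this 1D caricature reproduces the qualitative coupling in the full model: the exothermic reaction raises the temperature, which accelerates the rate, while the jacket term pulls the temperature back toward $T_j$.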
Closer to the wall, the flow rate decreases and the conversion then increases, since the temperature is still relatively high far from the jacket wall, which also gives a high reaction rate (2). However, as we get even closer to the wall, the conversion starts to decrease due to the cooling of the jacket, which decreases the reaction rate (3) in the figure above. At the reactor wall, the cooling is very efficient, which should decrease the conversion even more. However, the conversion increases slightly, since there is no advection of reactants at the wall. In other words, the space time for the volume elements that travel at the wall is very high, since the flow is zero at the wall (4). The reactants are therefore consumed to a larger extent. Applications in Teaching The Tubular Reactor example shows how to create a dedicated user interface based on a model — an application — where students can build an intuitive connection between a physical description of a reactor and the implications of this description in the operation of the reactor. An important component in this exercise is that the results are not obvious; the interpretation of the results requires some thinking. The Application Builder provides a user-friendly tool for the teacher to graphically create application interfaces. It allows teachers to concentrate on the exercise itself rather than investing time in explaining software tools or programming interfaces in the traditional way. They can focus on generating simulation results that trigger thinking. The students get more challenging and entertaining exercises that focus on the problem, not on the technicalities of running simulation software. Next Steps Intrigued? Learn more about the Application Builder on the 5.0 Release Highlights page. Download the Tubular Reactor Jacket app Microsoft and Windows are either registered trademarks or trademarks of Microsoft Corporation in the United States and/or other countries. 
Linux is a registered trademark of Linus Torvalds. Mac OS is a trademark of Apple Inc., registered in the U.S. and other countries.
The matrix size, noted $m \times n$, is the number of rows and the number of columns, respectively, that a matrix contains. A matrix is said to be square when $m=n$. The matrices of example II are of size $2 \times 3$.

1. Multiplication

We have seen in the previous chapter how coefficients and variables are separated from one another, which leads to a matrix multiplying a vector. This requires the multiplication to be defined. Let $A$ be a matrix of size $\, {\color{orangered}{m}} \times {\color{steelblue}{n}}$ and $B$ be a matrix of size $\, {\color{grey}{m'}} \times {\color{darkred}{n'}}$. $AB=C$ is possible only if ${\color{steelblue}{n}}={\color{grey}{m'}}$. The matrix $C$, the product, is of size ${\color{orangered}{m}} \times {\color{darkred}{n'}}$. The multiplication of $A$ by $B$ is done by distributing each row of $A$ over each column of $B$ and adding the products, namely $c_{i,j} = \sum_{k=1}^{n} a_{i,k}\,b_{k,j}$.

Example I

2. Addition

The addition of two matrices is possible only if they both have the same size. The corresponding elements are added. Subtraction is defined in a similar way.

Example II

3. Multiplication by a scalar (number)

All elements of the matrix are multiplied by the scalar.

Example III

4. Decomposition

A matrix multiplying a column matrix can be decomposed into a sum of multiplications. This follows from the previous rules.

Example IV

Recapitulation

Multiplication: $AB = C$, where $A$ is a matrix of size $\,{\color{orangered}{m}} \times {\color{steelblue}{n}}$ and $B$ a matrix of size $\, {\color{grey}{m'}} \times {\color{darkred}{n'}}$. The multiplication is possible only if ${\color{steelblue}{n}}={\color{grey}{m'}}$. Matrix $C$ is of size ${\color{orangered}{m}} \times {\color{darkred}{n'}}$.
$AB \ne BA$ in general.

If $K=T$, left multiplication by $P$ gives $PK=PT$.
If $K=T$, right multiplication by $P$ gives $KP=TP$.

Addition: $A + B = (a_{i,j} + b_{i,j})$

Multiplication by a scalar: $\lambda A = (\lambda a_{i,j})$

Distributivity: $A(B+C)=AB+AC$

Decomposition (a matrix multiplying a column matrix): $ \left(\begin{smallmatrix} v_{1} & \cdots & w_{1} \\ \vdots & \cdots & \vdots \\ v_n & \cdots & w_n \end{smallmatrix}\right) \left(\begin{smallmatrix} \lambda_1 \\ \vdots \\ \lambda_n \end{smallmatrix}\right) = \lambda_1 \left(\begin{smallmatrix} v_{1} \\ \vdots \\ v_n \end{smallmatrix}\right) + \cdots + \lambda_n \left(\begin{smallmatrix} w_{1} \\ \vdots \\ w_n \end{smallmatrix}\right) $
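The size rule and the sum formula for matrix multiplication can be checked with a short Python sketch (not part of the original chapter; the function name is ours):

```python
def matmul(A, B):
    # C = AB is defined only when A's column count equals B's row count;
    # c[i][j] = sum over k of a[i][k] * b[k][j]
    m, n = len(A), len(A[0])
    rows_B, cols_B = len(B), len(B[0])
    assert n == rows_B, "inner dimensions must agree"
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(cols_B)]
            for i in range(m)]

A = [[1, 2, 3],
     [4, 5, 6]]            # size 2 x 3
x = [[1], [0], [2]]        # a 3 x 1 column matrix

# The product is 2 x 1, as the size rule predicts:
assert matmul(A, x) == [[7], [16]]

# Decomposition: A x = 1 * (first column) + 0 * (second) + 2 * (third)
assert [1*1 + 0*2 + 2*3, 1*4 + 0*5 + 2*6] == [7, 16]
```

Trying to multiply in the other order, `matmul(x, A)`, trips the dimension assertion, illustrating that the product is not commutative in general.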
It is well-known that the complement of $\{ ww \mid w\in \Sigma^*\}$ is context-free. But what about the complement of $\{ www \mid w\in \Sigma^*\}$? Still CFL I believe, with an adaptation of the classical proof. Here's a sketch. Consider $L = \{xyz : |x|=|y|=|z| \land (x \neq y \lor y \neq z)\}$, which is the complement of $\{www\}$, with the words of length not $0$ mod $3$ removed. Let $L' = \{uv : |u| \equiv_3 |v| \equiv_3 0 \land u_{2|u|/3} \neq v_{|v|/3}\}$. Clearly, $L'$ is CFL, since you can guess a position $p$ and consider that $u$ ends $p/2$ after that. We show that $L = L'$. $L \subseteq L'$: Let $w = xyz \in L$. Assume there's a $p$ such that $x_p \neq y_p$. Then write $u$ for the $3p/2$ first characters of $w$, and $v$ for the rest. Naturally, $u_{2|u|/3} = x_p$. Now what is $v_{|v|/3}$? First: $$|v|/3 = (|w| - 3p/2)/3 = |w|/3 - p/2.$$ Hence, in $w$, this is position: $$|u|+|v|/3 = 3p/2 + |w|/3 - p/2 = |w|/3 + p,$$ or, in other words, position $p$ in $y$. This shows that $u_{2|u|/3} = x_p \neq y_p = v_{|v|/3}$. If $y_p \neq z_p$, then let $u$ be the first ${3\over2}(|w|/3 + p)$ characters of $w$, so that $u_{2|u|/3}$ is $y_p$; $v$ is the rest of $w$. Then: $$|u| + |v|/3 = 2|w|/3 + p$$ hence similarly, $v_{|v|/3} = z_p$. $L' \subseteq L$: We reverse the previous process. Let $w = uv \in L'$. Write $p = 2|u|/3$. Then: $$p+|w|/3 = 2|u|/3+|uv|/3 = |u| + |v|/3.$$ Thus $w_p = u_{2|u|/3} \neq v_{|v|/3} = w_{p + |w|/3}$, and $w \in L$ (since if $w$ is of the form $xxx$, it must hold that $w_p = w_{p+|w|/3}$ for all $p$). Here is the way I think about solving this problem. In my opinion, it's intuitively clearer. A word $x$ is not of the form $www$ iff either (i) $|x| \not\equiv 0$ (mod 3), which is easy to check, or (ii) there is some input symbol $a$ that differs from the corresponding symbol $b$ that occurs $|w|$ positions later. 
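Condition (ii), that some symbol differs from the symbol $|w|$ positions later, is easy to sanity-check by brute force over a binary alphabet. A small Python sketch (helper names are ours):

```python
from itertools import product

def is_cube(x):
    # x is of the form www
    n = len(x)
    return n % 3 == 0 and x == x[: n // 3] * 3

def condition(x):
    # (i) |x| is not divisible by 3, or (ii) some symbol differs from the
    # symbol |x|/3 positions later
    n = len(x)
    if n % 3 != 0:
        return True
    m = n // 3
    return any(x[i] != x[i + m] for i in range(n - m))

# Exhaustive check on all binary strings up to length 9
for n in range(10):
    for bits in product("01", repeat=n):
        x = "".join(bits)
        assert condition(x) == (not is_cube(x))
```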
We use the usual trick of using the stack to maintain an integer $t$ by having a new "bottom-of-stack" symbol $Z$, storing the absolute value $|t|$ as the number of counters on the stack, and sgn($t$) by the state of the PDA. Thus we can increment or decrement $t$ by doing the appropriate operation. The goal is to use nondeterminism to guess the positions of the two symbols you are comparing, and use the stack to record $t := |x|-3d$, where $d$ is the distance between these two symbols. We accomplish this as follows: increment $t$ for each symbol seen until the first guessed symbol $a$ is chosen, and record $a$ in the state. For each subsequent input symbol, until you decide you've seen $b$, decrement $t$ by $2$ ($1$ for the input length and $-3$ for the distance). Guess the position of the second symbol $b$ and record whether $a \not= b$. Continue incrementing $t$ for subsequent input symbols. Accept if $t = 0$ (detectable by $Z$ at top) and $a \not= b$. The nice thing about this is that it should be completely clear how to extend this to arbitrary powers. Just a different ("grammar oriented") perspective to prove that the complement of $\{ w^k \}$ is CF for any fixed $k$ using closure properties. First note that in the complement of $\{ w^k \}$ there is always $i$ such that $w_i \neq w_{i+1}$. We focus on $w_1 \neq w_2$ and start with a simple CF grammar that generates: $L = \{\underbrace{a00...0}_{w_1} \; \underbrace{b00...0}_{w_2} ... \underbrace{000...0}_{w_k} \mid |w_i|=n \} = \{ a 0^{n-1} \, b 0^{n(k-1)-1} \}$ E.g. 
for $k = 3$, we have $L = \{ a\,b\,0, a0\,b0\,00, a00\,b00\,000, ...\}$, $G_L = \{ S \to ab0 | aX00, X \to 0X00 | 0b0 \}$ Then apply closure under inverse homomorphism, and union: First homomorphism: $\varphi(1) \to a, \varphi(0) \to b, \varphi(1)\to 0, \varphi(0) \to 0 $ Second homomorphism: $\varphi'(0) \to a, \varphi'(1) \to b, \varphi'(1)\to 0, \varphi'(0) \to 0$ $L' = \varphi^{-1}(L) \cup \varphi'^{-1}(L)$ is still context free Apply closure under cyclic shifts to $L'$ to get the set of strings of length $kn$ not of the form $w^k$: $L'' = Shift(L') = \{ u \mid u \neq w^k \land |u| = kn \}$. Finally add the regular set of strings whose length is not divisible by $k$ in order to get exactly the complement of $\{w^k\}$: $L'' \cup \{\{0,1\}^n\mid n \bmod k \neq 0\} = \{ u \mid u \neq w^k\}$
General Math Forum - For general math related discussion and news

Poll: Is x/0 infinitesimal? Results: "Don't agree." 2 votes (100%); all other options 0 votes. Voters: 2.

January 28th, 2016, 08:47 PM, #1 - Chichenwin (Newbie; Joined: Jan 2016; From: Canada)

Dividing by Zero Theories

I recently watched a Numberphile video on YouTube about the problems with zero, and it got me thinking. Looking at the linguistics, dividing means putting into a given number of groups, and if you have zero groups, then every (for lack of a better term) 'piece' would be by itself, and each piece would be infinitesimally small. So by that thinking, x/0 = 0.000...01, but since you can't write that in a finite space, we would need a new symbol for it.

January 28th, 2016, 08:53 PM, #2 - Chichenwin

Please share your opinion! I'm only in grade 9 and don't know much about this stuff, so if anyone more experienced is completely facepalming right now, please don't hate; explain it to me. And use simple language.

Last edited by Chichenwin; January 28th, 2016 at 08:58 PM. Reason: Afterthought

January 28th, 2016, 09:45 PM, #3 - Math Team (Joined: Nov 2014; From: Australia)

Below is the graph of $y = \dfrac{a}{x}$. Can you see what happens near $x = 0$? The right side of the graph shoots up towards infinity and the left side shoots down towards negative infinity. That's certainly not infinitesimal.

January 29th, 2016, 12:03 AM, #4 - Senior Member (Joined: Apr 2014; From: Glasgow; Math Focus: Physics, mathematical modelling, numerical and computational solutions)

Another fun case is $\displaystyle y = \frac{\sin x}{x}$ as $x$ gets closer and closer to 0 (i.e. closer and closer towards 0/0). 
Here's a plot of the function: https://www.google.co.uk/webhp?sourc...%20x%20%2F%20x

y = 1 at x = 0! The function above is well known and is called sinc x. Weird, no? The point of this is that the divide-by-zero operation really does give weird behaviour... functions that contain a possible divide by zero, such as $\displaystyle y = \frac{a}{x}$ and $\displaystyle y = \frac{\sin x}{x}$, can behave differently as the divisor gets closer and closer to zero. So what do you do to get around this? In practice, you would look to limit theory and evaluate the limit of the function as the divisor tends to 0.

January 29th, 2016, 05:08 AM, #5 - Math Team (Joined: Dec 2013; From: Colombia; Math Focus: Mainly analysis and algebra)

On a technical note, $y={\sin x \over x}$ does not have $y(0)=1$. Instead, $y(0)$ is undefined; it has no value. This is precisely because the operation of dividing by zero is not well defined. On a more general mathematical note: mathematical results are not a matter of opinion determined by the result of a poll. A statement is either true, false or undecidable. Sometimes you can get different results in different mathematical systems, but no one mathematical system is more true than any other: each is simply a product of the axioms and definitions that underpin it. Some systems are considered more standard than others, and if you don't specify a particular system, most people will assume you mean one of those.

Last edited by v8archie; January 29th, 2016 at 05:23 AM.

February 2nd, 2016, 03:53 AM, #6 - Member (Joined: Oct 2014; From: UK)

To me, the many problems with zero are an indication of serious flaws in the fundamentals of mathematics. It is stated that zero has no multiplicative inverse, and this is used as an excuse to treat zero differently from all other numbers. Divide by zero is just one of many zero-related problems. 
Another obvious one is the belief that zero times anything equals zero. If I have zero apples, this is not the same as there being zero universes, but I can equate these two because they both supposedly equal 0. I have also read that zero to the power zero can supposedly have different values in different situations. In base 10 there are lots of division operations that are problematic, not just divide by zero. A simple example is 1 divided by 3. The problem is that the division algorithm (for long/short division) has no defined way it can end; it cannot complete. It is a huge cop-out to avoid the issue by calling it a ratio, or to simply assert that using the words 'infinitely many' will somehow mysteriously cause it to finish. To solve problems like dividing 1 by 3, I produced my own algorithm for division in any given base where an end point is always achieved. My new algorithm works for any integer values, including divide by zero. For example, it evaluates 0 divided by 0 to be 1, which just happens to support the argument that sin(x)/x = 1 at x = 0 (as opposed to being undefined). Sometime soon I hope to write a blog article on this subject at Extreme Finitism. The introduction of complex numbers made many problems solvable which were previously not solvable using algebra. The treatment of zero should have a similarly elegant solution; it should not involve lots of annoying rules that say what we can and cannot do when rearranging algebraic expressions to avoid hitting problems with zero. My response to the OP is that the question makes no sense, because I do not accept 'infinitesimal' to be well defined.

Last edited by Karma Peny; February 2nd, 2016 at 04:32 AM.

February 2nd, 2016, 04:25 AM, #7 - Math Team (Joined: Dec 2013; From: Colombia; Math Focus: Mainly analysis and algebra)

Most of what you wrote is either wrong or right, but not for the reasons you give. 
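As a numerical footnote to the limits discussed in the thread, here is a short Python sketch (not from the thread itself) showing how $\sin x / x$ approaches 1 while $1/x$ blows up as $x \to 0^+$:

```python
import math

# Neither sin(x)/x nor 1/x is defined AT x = 0; we only probe values near it.
for x in (0.1, 0.01, 0.001, 1e-6):
    print(f"x = {x:g}: sin(x)/x = {math.sin(x)/x:.12f}, 1/x = {1/x:g}")

# sin(x)/x -> 1 (the limit), even though sin(0)/0 itself is undefined
assert abs(math.sin(1e-6) / 1e-6 - 1) < 1e-9
```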
An argument is valid if it is impossible for the premises to be true and the conclusion false. An argument is sound if it is valid and all the premises are true.

There are three connectives: \(\neg\) (negation), \(\wedge\) (conjunction), and \(\vee\) (disjunction). Their truth tables are as follows:

\(A\) | \(B\) | \(\neg A\) | \(A \wedge B\) | \(A \vee B\)
T | T | F | T | T
T | F | F | F | T
F | T | T | F | T
F | F | T | F | F

A tautology is a proposition that is always true. A contradiction is a proposition that is never true. Two propositions are mutually exclusive if they cannot both be true. One proposition logically entails another if it is impossible for the first to be true and the second false. Two propositions are logically equivalent if they entail one another. Proposition \(A\) is independent of proposition \(B\) if the truth (or falsity) of \(B\) makes no difference to the probability of \(A\). A repeating process is fair if each repetition has the same probability and the repetitions are independent of one another.

If \(A\) and \(B\) are independent then \(\p(A \wedge B) = \p(A) \times \p(B)\).
If \(A\) and \(B\) are mutually exclusive then \(\p(A \vee B) = \p(A) + \p(B)\).
If \(A\) is a tautology then \(\p(A) = 1\).
If \(A\) is a contradiction then \(\p(A) = 0\).
If \(A\) and \(B\) are logically equivalent then \(\p(A) = \p(B)\).
\[\p(A \given B) = \frac{\p(A \wedge B)}{\p(B)}.\]
\(A\) is independent of \(B\) if \(\p(A \given B) = \p(A)\).
\(\p(\neg A) = 1 - \p(A)\).
\(\p(A \wedge B) = \p(A \given B) \p(B)\) if \(\p(B) > 0\).
\(\p(A \vee B) = \p(A) + \p(B) - \p(A \wedge B)\).
If \(1 > \p(B) > 0\) then \[\p(A) = \p(A \given B)\p(B) + \p(A \given \neg B)\p(\neg B).\]
If \(\p(A), \p(B) > 0\) then \[\p(A \given B) = \p(A) \frac{\p(B \given A)}{\p(B)}.\]
If \(1 > \p(A) > 0\) and \(\p(B) > 0\) then \[\p(A \given B) = \frac{\p(B \given A)\p(A)}{\p(B \given A)\p(A) + \p(B \given \neg A)\p(\neg A)}.\]

Suppose act \(A\) has possible payoffs \(\$x_1, \$x_2, \ldots, \$x_n\). 
Then the expected monetary value of \(A\) is defined: \[\E(A) = \p(\$x_1) \times \$x_1 + \p(\$x_2) \times \$x_2 + \ldots + \p(\$x_n) \times \$x_n.\]

Suppose act \(A\) has possible consequences \(C_1, C_2, \ldots, C_n\). Denote the utility of each outcome \(\u(C_1)\), \(\u(C_2)\), etc. Then the expected utility of \(A\) is defined: \[\EU(A) = \p(C_1)\u(C_1) + \p(C_2)\u(C_2) + \ldots + \p(C_n)\u(C_n).\]

Suppose an agent's best and worst possible outcomes are \(B\) and \(W\). Let \(\u(B) = 1\) and \(\u(W) = 0\). And let \(\p(B)\) be the lowest probability such that they are indifferent between outcome \(O\) and a gamble with probability \(\p(B)\) of outcome \(B\) and probability \(1 - \p(B)\) of outcome \(W\). Then, if the agent is following the expected utility rule, \(\u(O) = \p(B)\).

If you would choose \(X\) over \(Y\) if you knew that \(E\) was true, and you'd also choose \(X\) over \(Y\) if you knew \(E\) wasn't true, then you should choose \(X\) over \(Y\) when you don't know whether \(E\) is true or not.

Personal probabilities are measured by fair betting rates, if the agent is following the expected value rule. More concretely, suppose an agent regards as fair a bet where they win \(w\) if \(A\) is true and lose \(l\) if \(A\) is false. Then, if they are following the expected value rule, their personal probability for \(A\) is: \[\p(A) = \frac{l}{w + l}.\]

A Dutch book is a set of bets where each bet is fair according to the agent's betting rates, and yet the set of bets is guaranteed to lose them money. Agents who violate the laws of probability can be Dutch booked. Agents who obey the laws of probability cannot be Dutch booked.

If there are \(n\) possible outcomes, each outcome should have the same prior probability: \(1/n\). 
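The betting-rates formula and the Dutch book claim above can be illustrated with a tiny Python sketch (the numbers are our own, not from the notes):

```python
def personal_probability(win, lose):
    # p(A) = l / (w + l) for an agent who regards the bet as fair
    return lose / (win + lose)

# A fair bet that wins 2 if A and loses 1 otherwise reveals p(A) = 1/3
assert personal_probability(2, 1) == 1 / 3

# Dutch book: suppose an agent's betting rates give p(A) = 0.6 AND
# p(not-A) = 0.6, violating p(A) + p(not-A) = 1.  With a total stake of 1
# per bet, the agent accepts both of these "fair" bets:
#   bet 1 (on A):     win 0.4 if A,     lose 0.6 if not-A
#   bet 2 (on not-A): win 0.4 if not-A, lose 0.6 if A
payoff_if_A = 0.4 - 0.6
payoff_if_not_A = -0.6 + 0.4

# Either way the agent loses 0.2: a guaranteed loss
assert payoff_if_A < 0 and payoff_if_not_A < 0
```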
If there is an interval of possible outcomes from \(a\) to \(b\), the probability of any subinterval from \(c\) to \(d\) is: \[\frac{d-c}{b-a}.\] A significance test at the \(.05\) level can be described in three steps: For a test at the \(.01\) level, follow the same steps but check instead whether \(k\) falls outside the range of outcomes expected \(99\%\) of the time. Suppose an event has two possible outcomes, with probabilities \(p\) and \(1-p\). And suppose the event will be repeated \(n\) independent times. We define the mean \(\mu = np\) and the standard deviation \(\sigma = \sqrt{np(1-p)}\). Let \(k\) be the number of times the first outcome occurs. Then, if \(n\) is large enough:
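The mean and standard deviation above are straightforward to compute. The Python sketch below also applies the common normal-approximation rule of thumb that about 95% of outcomes fall within two standard deviations of the mean (that rule is our assumption here; the helper names are ours):

```python
import math

def binomial_mean_sd(n, p):
    # mu = n*p, sigma = sqrt(n*p*(1-p)) for n independent repetitions
    mu = n * p
    sigma = math.sqrt(n * p * (1 - p))
    return mu, sigma

def significant_at_05(k, n, p):
    # Normal approximation: flag k if it lies outside mu +/- 2*sigma,
    # the range expected about 95% of the time (requires large n)
    mu, sigma = binomial_mean_sd(n, p)
    return abs(k - mu) > 2 * sigma

mu, sigma = binomial_mean_sd(100, 0.5)       # 100 tosses of a fair coin
assert (mu, sigma) == (50.0, 5.0)
assert significant_at_05(61, 100, 0.5)       # 61 heads: outside [40, 60]
assert not significant_at_05(58, 100, 0.5)   # 58 heads: inside
```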
Inverse Map of a Bijective Homomorphism is a Group Homomorphism

Problem 445

Let $G$ and $H$ be groups and let $\phi: G \to H$ be a group homomorphism. Suppose that $\phi$ is bijective. Then there exists a map $\psi:H\to G$ such that \[\psi \circ \phi=\id_G \text{ and } \phi \circ \psi=\id_H.\] Prove that $\psi:H \to G$ is also a group homomorphism.
Define $\Pi_k \text{SAT}$ by: 'Given a quantified boolean formula $\varphi = \forall y_1\exists y_2\dots Q_ky_k\ \phi(y_1, \dots, y_k)$, where $\phi(y_1, \dots, y_k)$ is a boolean predicate, each $y_i$ is a vector of variables, and $Q_{2j-1} = \forall$, $Q_{2j} = \exists$ for every $j\in\mathbb N$, is $\varphi$ valid?' (page 99 in the Arora book). ETH says that $\text{SAT}$, and hence $\Pi_1\text{SAT}=\overline{\text{SAT}}$, needs $\Omega(2^{c\cdot n_1})$ time for some $c>0$, where $n_1$ is the number of variables in $y_1$. If there are $n=n_1+\dots+n_k$ variables, where each $y_i$ has $n_i$ variables, then what is the best known algorithm for $\Pi_k \text{SAT}$ for each fixed $k$? Is there a generalized ETH that applies to $\Pi_k\text{SAT}$?
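For small instances, validity of such a formula can be decided by the trivial brute-force recursion over quantifier blocks, the $O^*(2^n)$ baseline the question implicitly asks to beat. A Python sketch (names are ours):

```python
from itertools import product

def pi_k_valid(block_sizes, phi):
    # Evaluate  forall y1 exists y2 forall y3 ... phi(y1, ..., yk).
    # block_sizes[i] is the number of boolean variables in block y_{i+1};
    # quantifiers alternate, starting with "forall".
    def go(i, assignment):
        if i == len(block_sizes):
            return phi(assignment)
        branches = (go(i + 1, assignment + list(bits))
                    for bits in product((False, True), repeat=block_sizes[i]))
        return all(branches) if i % 2 == 0 else any(branches)
    return go(0, [])

# forall y1 exists y2: y1 != y2  -- valid (choose y2 = not y1)
assert pi_k_valid([1, 1], lambda v: v[0] != v[1])
# forall y1 exists y2: y1 and y2 -- not valid (fails when y1 is False)
assert not pi_k_valid([1, 1], lambda v: v[0] and v[1])
```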
Eigenvector

A vector which has the property that its product with $A$ is the same as its product with a scalar quantity known as its eigenvalue. This follows the form

(1) $A\vec{x} = \lambda\vec{x}$

To find said eigenvector, one must subtract $\lambda\vec{x}$ from both sides, after multiplying the right side of the equation by the identity matrix (essentially multiplying by 1, so it's allowed) to give the eigenvalue matrix positions so the subtraction makes sense. Therefore, equation 1 becomes

(2) $(A - \lambda I)\vec{x} = \vec{0}$

Since we only want the values of $\lambda$ for which equation 2 has nonzero solutions $\vec{x}$ (the eigenvectors), the matrix $A - \lambda I$ must be singular, so its determinant must vanish. This can be represented by the determinant of the $n \times n$ matrix:

(3) $\det(A - \lambda I) = 0$

Now, to avoid the horrible, horrible algebra, let's just say our matrix $A$ is $\begin{bmatrix} 4&0&3&-1\\0&1&1&2\\0&0&-2&0\\0&0&0&4 \end{bmatrix}$. Following the above general archetype (and using the fact that $A$ is upper triangular, so the determinant is the product of the diagonal entries), we can find

(4) $\det(A - \lambda I) = (4-\lambda)(1-\lambda)(-2-\lambda)(4-\lambda) = 0$, giving $\lambda = 4, 1, -2$.

We say that the eigenvalue 4 has a multiplicity of 2 because it appears twice.

Similar Matrices

Matrix $A$ is similar to $B$ if there exists an invertible matrix $P$ such that

(7) $B = P^{-1}AP$
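Because the example matrix is upper triangular, its eigenvalues can be read off the diagonal; the Python sketch below double-checks them against $\det(A-\lambda I)=0$ (helper names are ours):

```python
from fractions import Fraction

def det(M):
    # Laplace expansion along the first row; fine for small matrices
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([row[:j] + row[j+1:] for row in M[1:]])
               for j in range(len(M)))

A = [[4, 0, 3, -1],
     [0, 1, 1,  2],
     [0, 0, -2, 0],
     [0, 0, 0,  4]]

def char_poly(lam):
    # det(A - lam * I), computed exactly with fractions
    n = len(A)
    return det([[Fraction(A[i][j]) - (lam if i == j else 0)
                 for j in range(n)] for i in range(n)])

diagonal = [A[i][i] for i in range(4)]      # upper triangular: eigenvalues
assert sorted(set(diagonal)) == [-2, 1, 4]
assert all(char_poly(lam) == 0 for lam in set(diagonal))
assert char_poly(3) != 0                    # 3 is not an eigenvalue
assert diagonal.count(4) == 2               # eigenvalue 4 has multiplicity 2
```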
proof: pair up people from different sets; no pair from same set

imagine you have people from different groups and you would like to pair them up so that no pair is constituted by people of the same group. i've spent a couple of minutes simulating different possibilities and have come to the conclusion that it is always possible given that no set (group) has more than $n/2$ elements (people) in it. but i am looking for a mathematical proof or some reference to what this problem is called. i have a feeling it is a very general set theory concept and i'm overcomplicating it. thanks!

Reply: Actually, one can do this if each set has the same number of members. This assumes two groups and that everyone is assigned to a couple with someone from the other group. If this is an incorrect reading of the question, please correct me. If there are more than two groups, please explain exactly the setup.

Reply: this is correct, except the number of groups is variable (let's say $m$). the number of members of each group is also variable: $n(i)$. for $m=2$ the problem is, of course, trivial.

Reply: If each of $A~\&~B$ is a finite set and $|A|=|B|=m$ (number in the sets) then all you must do is construct an injection from $A\to B$. An injection is a one-to-one function, a unique pairing. There are $m!$ (factorial) such mappings.

Reply: no, no, you misunderstood. $m$ is the number of sets, but $|A_1|, |A_2|, \dots, |A_m|$ can all be different. 
the point is, no matter how many sets or how many items, to show the requirements necessary in order to create non-homogeneous pairs (i.e. each item is paired with an item from a different set). i think the only requirements are that $n$ is even and that $\forall i : 1 \leq i \leq m,\ |A_i| \leq \frac{n}{2}$, where $n$ is the total number of items and $m$ is the total number of sets. but i'm looking for a mathematical proof.

Reply: Can anyone else understand the problem? I doubt it! It seems to be terribly confusing.
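One constructive way to probe the sufficiency of these conditions is a greedy strategy: repeatedly pair one item from each of the two largest remaining groups. The Python sketch below (our code; it tests examples, it is not a proof) implements that idea:

```python
import heapq

def pair_up(group_sizes):
    # Returns a list of (group_i, group_j) pairs with i != j, or None when
    # the conditions fail (n odd, or one group holds more than n/2 items)
    n = sum(group_sizes)
    if n % 2 != 0 or (group_sizes and max(group_sizes) > n // 2):
        return None
    # Max-heap of (negated remaining count, group id), one entry per group
    heap = [(-s, g) for g, s in enumerate(group_sizes) if s > 0]
    heapq.heapify(heap)
    pairs = []
    while heap:
        s1, g1 = heapq.heappop(heap)   # largest remaining group
        s2, g2 = heapq.heappop(heap)   # second largest (a distinct group)
        pairs.append((g1, g2))
        if s1 + 1 < 0:
            heapq.heappush(heap, (s1 + 1, g1))
        if s2 + 1 < 0:
            heapq.heappush(heap, (s2 + 1, g2))
    return pairs

assert pair_up([2, 2, 2]) is not None        # three groups of two: fine
assert pair_up([4, 1, 1]) is None            # one group exceeds n/2
pairs = pair_up([3, 3, 2])
assert len(pairs) == 4 and all(a != b for a, b in pairs)
```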
Skills to Develop

Predict the acidity of a salt solution.
Calculate the pH of a salt solution.
Calculate the concentrations of various ions in a salt solution.
Explain hydrolysis reactions.

A salt is formed by the reaction of an acid and a base. Usually, a neutral salt is formed when a strong acid and a strong base are neutralized in the reaction:

\[\ce{H+ + OH- \rightleftharpoons H2O} \label{1}\]

The bystander ions in an acid-base reaction form a salt solution. Most neutral salts consist of cations and anions listed in the table below. These ions have little tendency to react with water, so salts consisting of these ions are neutral salts. For example: \(\ce{NaCl}\), \(\ce{KNO3}\), \(\ce{CaBr2}\) and \(\ce{CsClO4}\) are neutral salts.

When weak acids and bases react, the relative strength of the conjugate acid-base pair in the salt determines the pH of its solutions. The salt, or its solution, so formed can be acidic, neutral or basic. A salt formed between a strong acid and a weak base is an acidic salt, for example \(\ce{NH4Cl}\). A salt formed between a weak acid and a strong base is a basic salt, for example \(\ce{NaCH3COO}\). These salts are acidic or basic due to their acidic or basic ions, as shown in Table \(\PageIndex{1}\).

Table \(\PageIndex{1}\): Neutral, acidic and basic ions

Neutral cations: \(\ce{Na+}\), \(\ce{K+}\), \(\ce{Rb+}\), \(\ce{Cs+}\), \(\ce{Mg^2+}\), \(\ce{Ca^2+}\), \(\ce{Sr^2+}\), \(\ce{Ba^2+}\)
Neutral anions: \(\ce{Cl-}\), \(\ce{Br-}\), \(\ce{I-}\), \(\ce{ClO4-}\), \(\ce{BrO4-}\), \(\ce{ClO3-}\), \(\ce{NO3-}\), \(\ce{SO4^2-}\)
Acidic cations: \(\ce{NH4+}\), \(\ce{Al^3+}\), \(\ce{Pb^2+}\), \(\ce{Sn^2+}\)
Acidic anions: \(\ce{HSO4-}\), \(\ce{HPO4^2-}\), \(\ce{H2PO4-}\)
Basic anions: \(\ce{F-}\), \(\ce{C2H3O2-}\), \(\ce{NO2-}\), \(\ce{HCO3-}\), \(\ce{CN-}\), \(\ce{CO3^2-}\), \(\ce{S^2-}\), \(\ce{PO4^3-}\)

Hydrolysis of Acidic Salts

A salt formed between a strong acid and a weak base is an acidic salt. Ammonia is a weak base, and its salt with any strong acid gives a solution with a pH lower than 7. 
For example, let us consider the reaction:

\[\ce{HCl + NH4OH \rightleftharpoons NH4+ + Cl- + H2O} \label{2}\]

In the solution, the \(\ce{NH4+}\) ion reacts with water (this reaction is called hydrolysis) according to the equation:

\[\ce{NH4+ + H2O \rightleftharpoons NH3 + H3O+} \label{3}\]

The acidity constant can be derived from \(K_w\) and \(K_b\):

\[\begin{align} K_{\large\textrm a} &= \dfrac{\ce{[H3O+] [NH3]}}{\ce{[NH4+]}} \dfrac{\ce{[OH- ]}}{\ce{[OH- ]}}\\ &= \dfrac{K_{\large\textrm w}}{K_{\large\textrm b}}\\ &= \dfrac{1.00 \times 10^{-14}}{1.75 \times 10^{-5}} = 5.7 \times 10^{-10} \end{align}\]

Example \(\PageIndex{1}\)

What are the concentrations of \(\ce{NH4+}\), \(\ce{NH3}\), and \(\ce{H+}\) in a 0.100 M \(\ce{NH4NO3}\) solution?

SOLUTION

Assume that \(\ce{[NH3]} = x\); then \(\ce{[H3O+]} = x\) as well, and you write the concentrations below the formulas in the reaction:

\(\begin{array}{ccccccc} \ce{NH4+ &+ &H2O &\rightleftharpoons &NH3 &+ &H3O+}\\ 0.100-x &&&&x &&x \end{array}\)

\(K_{\large\textrm a} = \textrm{5.7E-10} = \dfrac{x^2}{0.100-x}\)

Since the concentration is much greater than \(K_a\), you may use the approximation \(0.100 - x \approx 0.100\), so that

\(x = (0.100\times\textrm{5.7E-10})^{1/2} = \textrm{7.5E-6}\)

\(\ce{[NH3]} = \ce{[H+]} = x = \textrm{7.5E-6 M}\)
\(\ce{pH} = -\log(\textrm{7.5E-6}) = 5.12\)
\(\ce{[NH4+]} = \textrm{0.100 M}\)

DISCUSSION

Since pH = 5.12, the contribution of \(\ce{[H+]}\) due to the self-ionization of water may be neglected.

Hydrolysis of Basic Salts

A basic salt is formed between a weak acid and a strong base. The basicity is due to the hydrolysis of the conjugate base of the (weak) acid used in the neutralization reaction. For example, sodium acetate, formed between the weak acetic acid and the strong base \(\ce{NaOH}\), is a basic salt. 
When the salt is dissolved, ionization takes place:

\[\ce{NaAc \rightleftharpoons Na+ + Ac-} \label{4}\]

In the presence of water, \(\ce{Ac-}\) undergoes hydrolysis:

\[\ce{H2O + Ac- \rightleftharpoons HAc + OH-} \label{5}\]

The equilibrium constant for this reaction is the \(K_b\) of the conjugate base \(\ce{Ac-}\) of the acid \(\ce{HAc}\). Note the following equilibrium constants: acetic acid (\(K_a=1.75 \times 10^{-5}\)) and ammonia (\(K_b=1.75 \times 10^{-5}\)).

\(\begin{align} K_{\large\textrm b} &= \ce{\dfrac{[HAc] [OH- ]}{[Ac- ]}}\\ &= \ce{\dfrac{[HAc] [OH- ]}{[Ac- ]} \dfrac{[H+]}{[H+]}}\\ &= \ce{\dfrac{[HAc]}{[Ac- ][H+]} [OH- ][H+]}\\ &= \dfrac{K_{\large\textrm w}}{K_{\large\textrm a}}\\ &= \dfrac{\textrm{1.00e-14}}{\textrm{1.75e-5}} = \textrm{5.7e-10} \end{align}\)

Thus, \(K_{\large\ce a} K_{\large\ce b} = K_{\large\ce w}\), or \(\mathrm{p\mathit K_{\large a} + p\mathit K_{\large b} = 14}\), for a conjugate acid-base pair. Let us look at a numerical problem of this type.

Example \(\PageIndex{2}\)

Calculate \(\ce{[Na+]}\), \(\ce{[Ac- ]}\), \(\ce{[H+]}\) and \(\ce{[OH- ]}\) for a solution of 0.100 M \(\ce{NaAc}\) at 298 K (\(K_a = \textrm{1.8E-5}\)).

SOLUTION

Let x represent \(\ce{[OH- ]}\); then

\(\begin{array}{ccccccc} \ce{H2O &+ &Ac- &\rightleftharpoons &HAc &+ &OH-}\\ &&0.100-x &&x &&x \end{array}\)

\(\dfrac{x^2}{0.100-x} = \dfrac{\textrm{1E-14}}{\textrm{1.8E-5}} = \textrm{5.6E-10}\)

Solving for x results in

\(x = \sqrt{0.100\times\textrm{5.6E-10}} = \textrm{7.5E-6}\)

\(\ce{[OH- ]} = \ce{[HAc]} = \textrm{7.5E-6}\)
\(\ce{[Na+]} = \textrm{0.100 F}\)

DISCUSSION

This corresponds to a pH of 8.9, or \(\ce{[H+]} = \textrm{1.3E-9}\). Note that \(\dfrac{K_{\large\ce w}}{K_{\large\ce a}} = K_{\large\ce b}\) of \(\ce{Ac-}\), so \(K_b\) rather than \(K_a\) may be given as data in this question. 
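Both worked examples follow the same recipe: get the hydrolysis constant from \(K_w\) and the given \(K_a\) or \(K_b\), then take \(x=\sqrt{K \cdot C}\). A short Python check (the helper function and its name are ours, not from the text):

```python
import math

KW = 1.00e-14  # ion product of water at 298 K

def salt_pH(C, K_parent, acidic_salt):
    # C: formal concentration of the salt.
    # K_parent: Kb of the weak base (acidic salt) or Ka of the weak acid
    # (basic salt).  Uses x = sqrt(K*C), valid when C >> K.
    K = KW / K_parent
    x = math.sqrt(K * C)              # x = [H3O+] or [OH-]
    return -math.log10(x) if acidic_salt else 14 + math.log10(x)

# Example 1: 0.100 M NH4NO3 with Kb(NH3) = 1.75e-5  ->  pH about 5.12
assert abs(salt_pH(0.100, 1.75e-5, acidic_salt=True) - 5.12) < 0.01

# Example 2: 0.100 M NaAc with Ka(HAc) = 1.8e-5  ->  pH about 8.9
assert abs(salt_pH(0.100, 1.8e-5, acidic_salt=False) - 8.87) < 0.01
```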
Salts of Weak Acids and Weak Bases

A salt formed between a weak acid and a weak base can be neutral, acidic, or basic, depending on the relative strengths of the acid and base.

If \(K_a(\text{cation}) > K_b(\text{anion})\), the solution of the salt is acidic.
If \(K_a(\text{cation}) = K_b(\text{anion})\), the solution of the salt is neutral.
If \(K_a(\text{cation}) < K_b(\text{anion})\), the solution of the salt is basic.

Example \(\PageIndex{3}\)

Arrange the three salts according to their acidity: \(\ce{NH4CH3COO}\) (ammonium acetate), \(\ce{NH4CN}\) (ammonium cyanide), and \(\ce{NH4HC2O4}\) (ammonium oxalate). \(K_{\large\ce a}(\textrm{acetic acid}) = \textrm{1.85E-5}\), \(K_{\large\ce a}(\textrm{hydrogen cyanide}) = \textrm{6.2E-10}\), \(K_{\large\ce a}(\textrm{oxalic acid}) = \textrm{5.6E-2}\), \(K_{\large\ce b}(\ce{NH3}) = \textrm{1.8E-5}\).

SOLUTION

ammonium oxalate -- acidic, \(K_{\large\ce a}(\textrm{oxalic acid}) > K_{\large\ce b}(\ce{NH3})\)
ammonium acetate -- neutral, \(K_{\large\ce a} = K_{\large\ce b}\)
ammonium cyanide -- basic, \(K_{\large\ce a}(\ce{HCN}) < K_{\large\ce b}(\ce{NH3})\)

Questions

1. The reaction of an acid and a base always produces a salt as the by-product, true or false?
2. Is a solution of sodium acetate acidic, neutral or basic?
3. Are solutions of ammonium chloride acidic, basic or neutral?
4. Calculate the pH of a 0.100 M \(\ce{KCN}\) solution. \(K_{\large\ce a}(\ce{HCN}) = \textrm{6.2E-10}\), \(K_{\large\ce b}(\ce{CN-}) = \textrm{1.6E-5}\).
5. The symbol \(K_{\large\ce b}(\ce{HS-})\) is the equilibrium constant for which reaction?
a. \(\ce{HS- + OH- \rightleftharpoons S^2- + H2O}\)
b. \(\ce{HS- + H2O \rightleftharpoons H2S + OH-}\)
c. \(\ce{HS- + H2O \rightleftharpoons H3O+ + S^2-}\)
d. \(\ce{HS- + H3O+ \rightleftharpoons H2S + H2O}\)
6. What symbol would you use for the equilibrium constant of \(\ce{HS- \rightleftharpoons H+ + S^2-}\)?

Solutions

1. Answer: true. Consider: water is the real product, while the salt is formed from the spectator ions.

2. Answer: basic. Consider: 
Acetic acid is a weak acid that forms a salt with a strong base, \(\ce{NaOH}\). The salt solution turns bromothymol blue blue.

3. Answer: acidic. Consider: ammonium hydroxide does not have the same strength as a base as \(\ce{HCl}\) has as an acid. Ammonium chloride solutions turn bromothymol blue yellow.

4. Answer: 11.1. Consider:

\(\begin{array}{ccccccccccc} \ce{KCN &\rightarrow &K+ &+ &CN- &&&&&&}\\ \ce{&&& &CN- &+ &H2O &\rightleftharpoons &HCN &+ &OH-}\\ &&& &(0.100-x) &&&&x &&x \end{array}\)

\(x = (0.100\times\textrm{1.6E-5})^{1/2} = \textrm{1.3E-3}\)
\(\ce{pOH} = 2.9\), so \(\ce{pH} = 11.1\)

5. Answer: b. Consider: write an equation for \(K_b\) yourself; do not guess. Choice b is the closest among the four.

6. Answer: \(K_a\). Consider: this is the ionization of \(\ce{HS-}\); it is \(K_a\) for \(\ce{HS-}\), or \(K_{\large\ce a_{\Large 2}}\) for \(\ce{H2S}\).

Contributors

Chung (Peter) Chieh (Professor Emeritus, Chemistry @ University of Waterloo)
Bayesian inference for state-space models¶ Defining a prior distribution¶ We have already seen that module particles.distributions defines various ProbDist objects; i.e. objects that represent probability distributions. Such objects have methods to simulate random variates, compute the log-density, and so on. This module defines in particular a class called StructDist, whose methods take structured arrays as inputs and outputs. This is what we are going to use to define prior distributions. Here is a simple example:

[2]:
%matplotlib inline
import warnings; warnings.simplefilter('ignore')  # hide warnings
from matplotlib import pyplot as plt
import numpy as np
from particles import distributions as dists

prior_dict = {'mu': dists.Normal(scale=2.),
              'rho': dists.Uniform(a=-1., b=1.),
              'sigma': dists.Gamma()}
my_prior = dists.StructDist(prior_dict)

Object my_prior represents a distribution for \(\theta=(\mu, \rho, \sigma)\) where \(\mu\sim N(0,2^2)\), \(\rho \sim \mathcal{U}([-1,1])\), \(\sigma \sim \mathrm{Gamma}(1, 1)\), independently. We may now sample from this distribution, compute its pdf, and so on. For each of these operations, the inputs and outputs must be structured arrays, with named variables 'mu', 'rho' and 'sigma'.

[3]:
theta = my_prior.rvs(size=500)  # sample 500 theta-parameters
plt.style.use('ggplot')
plt.hist(theta['sigma'], 30);
plt.xlabel('sigma')
plt.figure()
z = my_prior.logpdf(theta)
plt.hist(z, 30)
plt.xlabel('log-pdf');

We may want to transform sigma into its logarithm, so that the support of the distribution is all of \(\mathbb{R}\) rather than \(\mathbb{R}^+\):

[4]:
another_prior_dict = {'rho': dists.Uniform(a=-1., b=1.),
                      'log_sigma': dists.LogD(dists.Gamma())}
another_prior = dists.StructDist(another_prior_dict)
another_theta = another_prior.rvs(size=100)
plt.hist(another_theta['log_sigma'], 20)
plt.xlabel('log-sigma');

Now, another_theta contains two variables, rho and log_sigma, and the latter variable is distributed according to \(Y=\log(X)\), with \(X\sim \mathrm{Gamma}(1, 1)\).
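As a quick sanity check of the log transform, independent of the particles library, we can sample \(X\sim \mathrm{Gamma}(1,1)\) with plain NumPy and confirm that \(Y=\log(X)\) takes values on all of \(\mathbb{R}\):

```python
import numpy as np

# Plain-NumPy check (not the particles API): if X ~ Gamma(1, 1), then
# Y = log(X) has unbounded support, with E[Y] = digamma(1) ~ -0.577.
rng = np.random.default_rng(0)
x = rng.gamma(shape=1.0, scale=1.0, size=100_000)
y = np.log(x)
print(y.min() < 0 < y.max())  # -> True
print(y.mean())               # close to digamma(1) = -0.577
```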
(The documentation of module distributions has more details on transformed distributions.) We may also want to introduce dependencies between \(\rho\) and \(\sigma\). Consider this:

[5]:
from collections import OrderedDict

dep_prior_dict = OrderedDict()
dep_prior_dict['rho'] = dists.Uniform(a=0., b=1.)
dep_prior_dict['sigma'] = dists.Cond(
    lambda theta: dists.Gamma(b=1./theta['rho']))
dep_prior = dists.StructDist(dep_prior_dict)
dep_theta = dep_prior.rvs(size=2000)
plt.scatter(dep_theta['rho'], dep_theta['sigma'])
plt.axis([0., 1., 0., 8.])
plt.xlabel('rho')
plt.ylabel('sigma');

The lines above encode a chain rule decomposition: first we specify the marginal distribution of \(\rho\), then we specify the distribution of \(\sigma\) given \(\rho\). A standard dictionary in Python is unordered: there is no way to make sure that the keys appear in a certain order. Thus we use instead an OrderedDict, and define first the distribution of \(\rho\), then the distribution of \(\sigma\) given \(\rho\); Cond is a particular ProbDist class that defines a conditional distribution, based on a function that takes an argument theta, and returns a ProbDist object. All the examples above involve univariate distributions; however, the components of StructDist also accept multivariate distributions.

[6]:
reg_prior_dict = OrderedDict()
reg_prior_dict['sigma2'] = dists.InvGamma(a=2., b=3.)
reg_prior_dict['beta'] = dists.MvNormal(cov=np.eye(20))
reg_prior = dists.StructDist(reg_prior_dict)
reg_theta = reg_prior.rvs(size=200)

Bayesian inference for state-space models¶ We return to the simplified stochastic volatility model introduced in the basic tutorial, which we implemented as follows (this time with default values for the parameters):

[7]:
from particles import state_space_models as ssm

class StochVol(ssm.StateSpaceModel):
    default_parameters = {'mu': -1., 'rho': 0.95, 'sigma': 0.2}
    def PX0(self):  # Distribution of X_0
        return dists.Normal(loc=self.mu,
                            scale=self.sigma / np.sqrt(1. - self.rho**2))
    def PX(self, t, xp):  # Distribution of X_t given X_{t-1}=xp (p=past)
        return dists.Normal(loc=self.mu + self.rho * (xp - self.mu),
                            scale=self.sigma)
    def PY(self, t, xp, x):  # Distribution of Y_t given X_t=x (and possibly X_{t-1}=xp)
        return dists.Normal(loc=0., scale=np.exp(x))

We mentioned in the basic tutorial that StochVol represents a parametric class of univariate stochastic volatility models. Indeed, StochVol will be the object we pass to Bayesian inference algorithms (such as PMMH or SMC\(^2\)) in order to perform inference with respect to that class of models. PMMH (Particle marginal Metropolis-Hastings)¶ Let's try PMMH first. This is a Metropolis-Hastings algorithm that samples from the posterior of parameter \(\theta\) (given the data). However, since the corresponding likelihood is intractable, each iteration of PMMH runs a particle filter that approximates it.

[8]:
from particles import mcmc

# real data
raw_data = np.loadtxt('../../../datasets/GBP_vs_USD_9798.txt',
                      skiprows=2, usecols=(3,), comments='(C)')
full_data = np.diff(np.log(raw_data))
data = full_data[:50]
my_pmmh = mcmc.PMMH(ssm_cls=StochVol, prior=my_prior, data=data,
                    Nx=200, niter=1000)
my_pmmh.run()  # may take several seconds...

The arguments we set when instantiating class PMMH require little explanation; just in case: Nx is the number of particles (for the particle filter run at each iteration); niter is the number of MCMC iterations. Upon completion, object my_pmmh.chain is a ThetaParticles object, with the following attributes: my_pmmh.chain.theta is a structured array of size 1000 (the number of iterations) with keys 'mu', 'rho' and 'sigma'; my_pmmh.chain.lpost is an array of length 1000, containing the (estimated) log-posterior density for each simulated \(\theta\). Let's plot the MCMC traces.
[9]:
for p in prior_dict.keys():  # loop over mu, rho, sigma
    plt.figure()
    plt.plot(my_pmmh.chain.theta[p])
    plt.xlabel('iter')
    plt.ylabel(p)

You might wonder what type of Metropolis sampler is really implemented here: the starting point of the chain is sampled from the prior; you may instead set it to a specific value using option starting_point (when instantiating PMMH); the proposal is an adaptive Gaussian random walk: this means that the covariance matrix of the random step is calibrated on the fly on past simulations (using vanishing adaptation). This may be disabled by setting option adaptive=False; a bootstrap filter is run to approximate the log-likelihood; you may use a different filter (e.g. a guided filter) by passing a FeynmanKac class to option fk_cls; you may also want to pass various parameters to each call to SMC through (dict) argument smc_options; e.g. smc_options={'qmc': True} will make each particle filter a SQMC algorithm. Thus, by and large, quite a lot of flexibility is hidden behind this default behaviour. Particle Gibbs¶ PMMH is just a particular instance of the general family of PMCMC samplers; that is, MCMC samplers that run some particle filter at each iteration. Another instance is Particle Gibbs (PG), where one simulates alternately: 1. from the distribution of \(\theta\) given the states and the data; 2. a renewed state trajectory, through a CSMC (conditional SMC) step. Since Step 1 is model- (and user-)dependent, you need to define it for the model you are considering. This is done by sub-classing ParticleGibbs and defining method update_theta as follows:

[10]:
class PGStochVol(mcmc.ParticleGibbs):
    def update_theta(self, theta, x):
        new_theta = theta.copy()
        sigma, rho = 0.2, 0.95  # fixed values
        xlag = np.array(x[1:] + [0.,])
        dx = (x - rho * xlag) / (1. - rho)
        s = sigma / (1. - rho)**2
        new_theta['mu'] = self.prior.laws['mu'].posterior(dx, sigma=s).rvs()
        return new_theta

For simplicity \(\rho\) and \(\sigma\) are kept constant; only \(\mu\) is updated. This means we are actually sampling from the posterior of \(\mu\) given the data, while the other parameters are kept constant. Let's run our PG algorithm:

[11]:
pg = PGStochVol(ssm_cls=StochVol, data=data, prior=my_prior, Nx=200, niter=1000)
pg.run()  # may take several seconds...

Now let's plot the results:

[12]:
plt.plot(pg.chain.theta['mu'])
plt.xlabel('iter')
plt.ylabel('mu')
plt.figure()
plt.hist(pg.chain.theta['mu'][20:], 50)
plt.xlabel('mu');

SMC^2¶ Finally, we consider SMC\(^2\), a SMC algorithm that makes it possible to approximate: all the partial posteriors (of \(\theta\) given \(y_{0:t}\), for \(t=0, 1, ..., T\)) rather than only the final posterior; the marginal likelihoods of the data. SMC\(^2\) is a two-level SMC sampler: it simulates many \(\theta\)-values from the prior, and updates their weights recursively, according to the likelihood of each new datapoint; however, since these likelihood factors are intractable, for each \(\theta\) a particle filter is run to approximate them; hence a number \(N_x\) of \(x\)-particles are generated for, and attached to, each \(\theta\). The class SMC2 is defined inside module smc_samplers. It is run in the same way as the other SMC algorithms.

[13]:
import particles
from particles import smc_samplers as ssp

fk_smc2 = ssp.SMC2(ssm_cls=StochVol, data=data, prior=my_prior,
                   init_Nx=50, ar_to_increase_Nx=0.1)
alg_smc2 = particles.SMC(fk=fk_smc2, N=500)
alg_smc2.run()

Again, a few choices are made for you by default: A bootstrap filter is run for each \(\theta\)-particle; this may be changed by setting option fk_class while instantiating SMC2; e.g. fk_class=ssm.GuidedPF will run instead a guided filter.
Option init_Nx determines the initial number of \(x\)-particles; the algorithm automatically increases \(N_x\) each time the acceptance rate drops below \(10\%\) (as specified through option ar_to_increase_Nx=0.1). Set this option to 0. if you do not want to increase \(N_x\) in the course of the algorithm. The particle filters (in the \(x\)-dimension) are run with the default options of class SMC; e.g. resampling is set to systematic, and so on; other options may be set by using option smc_options.

[14]:
plt.scatter(alg_smc2.X.theta['mu'], alg_smc2.X.theta['rho'])
plt.xlabel('mu')
plt.ylabel('rho');
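To make the inner loop of PMMH and SMC\(^2\) concrete, here is a from-scratch NumPy sketch (deliberately not using the particles API) of what each iteration computes: a bootstrap particle filter's estimate of the log-likelihood of the data under the StochVol model, with its default parameter values:

```python
import numpy as np

# From-scratch sketch of a bootstrap particle filter's log-likelihood
# estimate for the stochastic volatility model above (not the particles API).
def bootstrap_loglik(data, mu=-1., rho=0.95, sigma=0.2, Nx=200, seed=0):
    rng = np.random.default_rng(seed)
    # X_0 ~ PX0: stationary distribution of the AR(1) log-volatility
    x = rng.normal(mu, sigma / np.sqrt(1. - rho**2), size=Nx)
    loglik = 0.0
    for y in data:
        # weight by PY: y ~ N(0, exp(x)^2), computed in log-space for stability
        logw = -0.5 * np.log(2 * np.pi) - x - 0.5 * y**2 * np.exp(-2 * x)
        m = logw.max()
        w = np.exp(logw - m)
        loglik += m + np.log(w.mean())      # log of the average weight
        idx = rng.choice(Nx, size=Nx, p=w / w.sum())  # multinomial resampling
        # propagate via PX: AR(1) step around mu
        x = mu + rho * (x[idx] - mu) + sigma * rng.normal(size=Nx)
    return loglik

print(np.isfinite(bootstrap_loglik(np.zeros(5))))  # -> True
```

In PMMH this estimate stands in for the intractable likelihood in the Metropolis-Hastings ratio; in SMC\(^2\), the per-observation increments update the \(\theta\)-particle weights.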
NTS Abstracts Spring 2019 Contents Jan 23 Yunqing Tang Feb 1 Yunqing Tang The diophantine exponent of the $\mathbb{Z}/q\mathbb{Z}$ points of $S^{d-2}\subset S^d$ Abstract: Assume a polynomial-time algorithm for factoring integers, Conjecture~\ref{conj}, $d\geq 3$, and $q$ and $p$ prime numbers, where $p\leq q^A$ for some $A>0$. We develop a polynomial-time algorithm in $\log(q)$ that lifts every $\mathbb{Z}/q\mathbb{Z}$ point of $S^{d-2}\subset S^{d}$ to a $\mathbb{Z}[1/p]$ point of $S^d$ with the minimum height. We implement our algorithm for $d=3 \text{ and } 4$. Based on our numerical results, we formulate a conjecture which can be checked in polynomial time and gives the optimal bound on the diophantine exponent of the $\mathbb{Z}/q\mathbb{Z}$ points of $S^{d-2}\subset S^d$. Feb 8 Roman Fedorov A conjecture of Grothendieck and Serre on principal bundles in mixed characteristic Abstract: Let $G$ be a reductive group scheme over a regular local ring $R$. An old conjecture of Grothendieck and Serre predicts that a principal $G$-bundle over $R$ is trivial if it is trivial over the fraction field of $R$. The conjecture has recently been proved in the "geometric" case, that is, when $R$ contains a field. In the remaining case, the difficulty comes from the fact that the situation is more rigid, so that a certain general position argument does not go through. I will discuss this difficulty and a way to circumvent it to obtain some partial results. Feb 13 Frank Calegari Recent Progress in Modularity Abstract: We survey some recent work in modularity lifting, and also describe some applications of these results. This will be based partly on joint work with Allen, Caraiani, Gee, Helm, Le Hung, Newton, Scholze, Taylor, and Thorne, and also on joint work with Boxer, Gee, and Pilloni.
Feb 15 Junho Peter Whang Integral points and curves on moduli of local systems Abstract: We consider the Diophantine geometry of moduli spaces for special linear rank two local systems on surfaces with fixed boundary traces. After motivating their Diophantine study, we establish a structure theorem for their integral points via mapping class group descent, generalizing classical work of Markoff (1880). We also obtain Diophantine results for algebraic curves in these moduli spaces, including effective finiteness of imaginary quadratic integral points for non-special curves. Feb 22 Yifan Yang Rational torsion on the generalized Jacobian of a modular curve with cuspidal modulus Abstract: In this talk we consider the rational torsion subgroup of the generalized Jacobian of the modular curve $X_0(N)$ with respect to a reduced divisor given by the sum of all cusps. When $N=p$ is a prime, we find that the rational torsion subgroup is always cyclic of order 2 (while that of the usual Jacobian of $X_0(p)$ grows linearly as $p$ tends to infinity, according to a well-known result of Mazur). Subject to an unproven conjecture about the rational torsion of the Jacobian of $X_0(p^n)$, we also determine the structure of the rational torsion subgroup of the generalized Jacobian of $X_0(p^n)$. This is joint work with Takao Yamazaki. March 22 Fang-Ting Tu Title: Supercongruences for Rigid Hypergeometric Calabi-Yau Threefolds Abstract: This is joint work with Ling Long, Noriko Yui, and Wadim Zudilin. We establish the supercongruences for the rigid hypergeometric Calabi-Yau threefolds over the rational numbers. These supercongruences were conjectured by Rodriguez-Villegas in 2003. In this work, we use two different approaches. The first method is based on Dwork's p-adic unit root theory, and the other is based on the theory of hypergeometric motives and hypergeometric functions over finite fields.
In this talk, I will introduce the first method, which allows us to obtain the supercongruences for ordinary primes. April 12 Junehyuk Jung Title: Quantum Unique Ergodicity and the number of nodal domains of automorphic forms Abstract: It has been known for decades that on a flat torus or on a sphere, there exist sequences of eigenfunctions having a bounded number of nodal domains. In contrast, for a manifold with chaotic geodesic flow, the number of nodal domains of eigenfunctions is expected to grow with the eigenvalue. In this talk, I will explain how one can prove that this is indeed true for the surfaces where the Laplacian is quantum uniquely ergodic, under certain symmetry assumptions. As an application, we prove that the number of nodal domains of Maass-Hecke eigenforms on compact arithmetic triangles tends to $+\infty$ as the eigenvalue grows. I will also discuss the nodal domains of automorphic forms on $SL_2(\mathbb{Z})\backslash SL_2(\mathbb{R})$. Under a minor assumption, I will give a quick proof that the real part of a weight $k\neq 0$ automorphic form has only two nodal domains. This result captures the fact that a 3-manifold with Sasaki metric never admits a chaotic geodesic flow. This talk is based on joint works with S. Zelditch and S. Jang. April 19 Hang Xue (Arizona) Title: Arithmetic theta lifts and the arithmetic Gan--Gross--Prasad conjecture. Abstract: I will explain the arithmetic analogue of the Gan--Gross--Prasad conjecture for unitary groups. I will also explain how to use arithmetic theta lifts to prove certain endoscopic cases of it. May 3 Matilde Lalin (Université de Montréal) Title: The mean value of cubic $L$-functions over function fields. Abstract: We will start by exploring the problem of finding moments for Dirichlet $L$-functions, including the first main results and the standard conjectures. We will then discuss the problem for function fields.
We will then present a result about the first moment of $L$-functions associated to cubic characters over $\mathbb{F}_q(t)$, when $q\equiv 1 \bmod{3}$. The case of number fields was considered in previous work, but never for the full family of cubic twists over a field containing the third roots of unity. This is joint work with C. David and A. Florea. May 10 Hector Pasten (Harvard University) Title: Shimura curves and estimates for abc triples. Abstract: I will explain a new connection between modular forms and the abc conjecture. In this approach, one considers maps to a given elliptic curve coming from various Shimura curves, which gives a way to obtain unconditional results towards the abc conjecture starting from good estimates for the variation of the degree of these maps. The approach to control this variation of degrees involves a number of tools, such as Arakelov geometry, automorphic forms, and analytic number theory. The final result is an unconditional estimate that lies beyond the existing techniques in the context of the abc conjecture, such as linear forms in logarithms.
Since $M$ is finitely generated, let $x_1, \dots, x_n$ be generators of $M$.Similarly, let $z_1, \dots, z_m$ be generators of $M^{\prime\prime}$. The exactness of the sequence (*) yields that the homomorphism $g:M'\to M^{\prime\prime}$ is surjective.Thus, there exist $y_1, \dots, y_m\in M'$ such that\[g(y_i)=z_i\]for $i=1, \dots, m$. We claim that the elements\[f(x_1), \dots, f(x_n), y_1, \dots, y_m\]generate the module $M'$.Let $w$ be an arbitrary element of $M'$. Then $g(w)\in M^{\prime\prime}$ and we can write\[g(w)=\sum_{i=1}^m r_iz_i\]for some $r_i\in R$ as $z_i$ generate $M^{\prime\prime}$.Then we have\begin{align*}g(w)&=\sum_{i=1}^m r_iz_i\\&=\sum_{i=1}^m r_i g(y_i)\\&=g\left(\, \sum_{i=1}^m r_iy_i \,\right)\end{align*}since $g$ is a module homomorphism. It follows that we have\begin{align*}g\left(\, w- \sum_{i=1}^m r_iy_i \,\right)=g(w)-g\left(\, \sum_{i=1}^m r_iy_i \,\right)=0,\end{align*}and thus\[w- \sum_{i=1}^m r_iy_i \in \ker(g).\] Since the sequence (*) is exact, we have $\ker(g)=\im(f)$.Hence there exists $x\in M$ such that\[f(x)=w- \sum_{i=1}^m r_iy_i.\]Since $x_i$ generate $M$, we can write\[x=\sum_{i=1}^n s_i x_i\]for some $s_i\in R$.Thus, we have\begin{align*}w&=f(x)+\sum_{i=1}^m r_iy_i\\&=f\left(\, \sum_{i=1}^n s_i x_i \,\right)+\sum_{i=1}^m r_iy_i\\&=\sum_{i=1}^n s_if(x_i)+\sum_{i=1}^m r_iy_i.\end{align*} This proves that any element $w\in M'$ can be written as a linear combination of\[f(x_1), \dots, f(x_n), y_1, \dots, y_m,\]and we conclude that $M'$ is generated by these elements and thus finitely generated.
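For reference, here is the statement the argument above proves, written out explicitly; this assumes, as the proof uses, that the sequence (*) is exact at $M'$ and ends with $M''\to 0$ (so that $g$ is surjective and $\ker(g)=\im(f)$):

```latex
\textbf{Proposition.} Let $R$ be a ring and let
\[ M \xrightarrow{\ f\ } M' \xrightarrow{\ g\ } M'' \longrightarrow 0 \tag{*} \]
be an exact sequence of $R$-modules. If $M$ and $M''$ are finitely generated,
then so is $M'$. More precisely, if $x_1, \dots, x_n$ generate $M$ and
$z_1, \dots, z_m$ generate $M''$, with $g(y_i) = z_i$ for each $i$, then
$f(x_1), \dots, f(x_n), y_1, \dots, y_m$ generate $M'$.
```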
Find an Orthonormal Basis of the Range of a Linear Transformation Problem 478 Let $T:\R^2 \to \R^3$ be a linear transformation given by \[T\left(\, \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} \,\right) = \begin{bmatrix} x_1-x_2 \\ x_2 \\ x_1+ x_2 \end{bmatrix}.\] Find an orthonormal basis of the range of $T$. (The Ohio State University, Linear Algebra Final Exam Problem) Solution. Let $A$ be the matrix representation of the linear transformation $T$. That is, \[A=\begin{bmatrix} T(\mathbf{e}_1) & T(\mathbf{e}_2) \end{bmatrix},\] where \[\mathbf{e}_1=\begin{bmatrix} 1 \\ 0 \end{bmatrix}, \mathbf{e}_2=\begin{bmatrix} 0 \\ 1 \end{bmatrix}\] form the standard basis of the vector space $\R^2$. By the formula, we see that \[T\left(\, \begin{bmatrix} 1 \\ 0 \end{bmatrix} \,\right) =\begin{bmatrix} 1 \\ 0 \\ 1 \end{bmatrix}, T\left(\, \begin{bmatrix} 0 \\ 1 \end{bmatrix}\,\right)=\begin{bmatrix} -1 \\ 1 \\ 1 \end{bmatrix},\] and thus the matrix $A$ for $T$ is \[A=\begin{bmatrix} 1 & -1 \\ 0 & 1 \\ 1 &1 \end{bmatrix}.\] Note that the range of $T$ is the same as the range of $A$. We reduce the matrix $A$ by the elementary row operations as follows: \begin{align*} A=\begin{bmatrix} 1 & -1 \\ 0 & 1 \\ 1 &1 \end{bmatrix} \xrightarrow{R_3-R_1} \begin{bmatrix} 1 & -1 \\ 0 & 1 \\ 0 & 0 \end{bmatrix} \xrightarrow{R_1+R_2} \begin{bmatrix} 1 & 0 \\ 0 & 1 \\ 0 & 0 \end{bmatrix}. \end{align*} Since both columns contain leading $1$'s, we conclude that \[\left\{\, \begin{bmatrix} 1 \\ 0 \\ 1 \end{bmatrix}, \begin{bmatrix} -1 \\ 1 \\ 1 \end{bmatrix} \,\right\}\] is a basis of the range of $A$ by the leading $1$ method. Note that the dot product of these basis vectors is $0$, hence they are already orthogonal. Hence to obtain an orthonormal basis, we just need to normalize the length of these vectors to $1$.
In summary, an orthonormal basis of the range of $T$ is \[\left\{\, \frac{1}{\sqrt{2}}\begin{bmatrix} 1 \\ 0 \\ 1 \end{bmatrix}, \frac{1}{\sqrt{3}}\begin{bmatrix} -1 \\ 1 \\ 1 \end{bmatrix} \,\right\}.\] Final Exam Problems and Solutions. (Linear Algebra Math 2568 at the Ohio State University) This problem is one of the final exam problems of the Linear Algebra course at the Ohio State University (Math 2568). The other problems can be found from the links below. Find All the Eigenvalues of 4 by 4 Matrix Find a Basis of the Eigenspace Corresponding to a Given Eigenvalue Diagonalize a 2 by 2 Matrix if Diagonalizable Find an Orthonormal Basis of the Range of a Linear Transformation (This page) The Product of Two Nonsingular Matrices is Nonsingular Determine Whether Given Subsets in $\R^4$ are Subspaces or Not Find a Basis of the Vector Space of Polynomials of Degree 2 or Less Among Given Polynomials Find Values of $a, b, c$ such that the Given Matrix is Diagonalizable Idempotent Matrix and its Eigenvalues Diagonalize the 3 by 3 Matrix Whose Entries are All One Given the Characteristic Polynomial, Find the Rank of the Matrix Compute $A^{10}\mathbf{v}$ Using Eigenvalues and Eigenvectors of the Matrix $A$ Determine Whether There Exists a Nonsingular Matrix Satisfying $A^4=ABA^2+2A^3$
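The orthonormality claimed in the solution can also be verified numerically. This NumPy sketch is a check of the worked answer, not part of the original exam solution:

```python
import numpy as np

# The columns of A span range(T); normalizing them should give the
# orthonormal basis found in the solution.
A = np.array([[1., -1.],
              [0.,  1.],
              [1.,  1.]])
u1 = A[:, 0] / np.linalg.norm(A[:, 0])   # (1/sqrt(2)) * (1, 0, 1)
u2 = A[:, 1] / np.linalg.norm(A[:, 1])   # (1/sqrt(3)) * (-1, 1, 1)
print(np.dot(u1, u2))                    # orthogonal: 0.0
print(np.linalg.norm(u1), np.linalg.norm(u2))  # both unit length
```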
Spherical Harmonics are a group of functions used in math and the physical sciences to solve problems in disciplines including geometry, partial differential equations, and group theory. The general, normalized Spherical Harmonic is depicted below: \[ Y_{l}^{m}(\theta,\phi) = \sqrt{ \dfrac{(2l + 1)(l - |m|)!}{4\pi (l + |m|)!} } P_{l}^{|m|}(\cos\theta)e^{im\phi} \] One of the most prevalent applications for these functions is in the description of angular quantum mechanical systems. A Brief History Utilized first by Laplace in 1782, these functions did not receive their name until nearly ninety years later, from Lord Kelvin. Any harmonic is a function that satisfies Laplace's differential equation: \[ \nabla^2 \psi = 0 \] These harmonics are classified as spherical because they are the solution to the angular portion of Laplace's equation in the spherical coordinate system. Laplace's work involved the study of gravitational potentials, and Kelvin used them in a collaboration with Peter Tait to write a textbook. In the 20th century, Erwin Schrödinger and Wolfgang Pauli both released papers in 1926 with details on how to solve the "simple" hydrogen atom system. Now, another ninety years later, the exact solutions to the hydrogen atom are still used to analyze multi-electron atoms and even entire molecules. Much of modern physical chemistry is based around the framework that was established by these quantum mechanical treatments of nature. The "Basic" Description The \(2p_x\) and \(2p_z\) (angular) probability distributions, depicted on the left and graphed on the right using Desmos. As Spherical Harmonics are unearthed by working with Laplace's equation in spherical coordinates, these functions are often products of trigonometric functions. These products are represented by the \( P_{l}^{|m|}(\cos\theta)\) term, which is called an associated Legendre polynomial.
The details of where these polynomials come from are largely unnecessary here; suffice it to say that they are the set of solutions to another differential equation that arises from attempting to solve Laplace's equation. Unsurprisingly, that equation is called "Legendre's equation", and it features the transformation \(\cos\theta = x\). As the general function shows above, for the spherical harmonic where \(l = m = 0\), the bracketed term turns into a simple constant. The exponential equals one and we say that: \[ Y_{0}^{0}(\theta,\phi) = \sqrt{ \dfrac{1}{4\pi} }\] What is not shown in full is what happens to the Legendre polynomial attached to our bracketed expression. In the simple \(l = m = 0\) case, it disappears. It is no coincidence that this article discusses both quantum mechanics and two variables, \(l\) and \(m\). These are exactly the angular momentum quantum number and magnetic quantum number, respectively, that are mentioned in General Chemistry classes. If we consider spectroscopic notation, an angular momentum quantum number of zero suggests that we have an s orbital if all of \(\psi(r,\theta,\phi)\) is present. This s orbital appears spherically symmetric on the boundary surface. In other words, the function looks like a ball. This is consistent with our constant-valued harmonic, for it would be constant-radius. Extending these functions to larger values of \(l\) leads to increasingly intricate Legendre polynomials and their associated \(m\) values. The \({Y_{1}^{0}}^{*}Y_{1}^{0}\) and \({Y_{1}^{1}}^{*}Y_{1}^{1}\) functions are plotted above. Recall that these functions are multiplied by their complex conjugate to properly represent the Born Interpretation of "probability-density" (\(\psi^{*}\psi\)). It is also important to note that these functions alone are not referred to as orbitals, for that would imply that both the radial and angular components of the wavefunction are used.
Example \(\PageIndex{1}\): Identify the location(s) of all planar nodes of the following spherical harmonic: \[Y_{2}^{0}(\theta,\phi) = \sqrt{ \dfrac{5}{16\pi} }(3\cos^2\theta - 1)\] Solution Nodes are points at which our function equals zero, or in a more natural extension, they are locations in the probability-density where the electron will not be found (i.e. \(\psi^{*}\psi = 0\)). As this specific function is real, we could square it to find our probability-density; the squared function vanishes exactly where the function itself does: \[ [Y_{2}^{0}]^2 = 0 \iff Y_{2}^{0} = 0\] As the non-squared function will be computationally easier to work with, and will give us an equivalent answer, we do not bother to square the function. The constant in front can be divided out of the expression, leaving: \[3\cos^2\theta - 1 = 0\] \[\theta = \cos^{-1}\bigg[\pm\dfrac{1}{\sqrt3}\bigg]\] \[\theta = 54.7^{\circ} \text{ and } 125.3^{\circ}\] The Advanced Description We have described these functions as a set of solutions to a differential equation but we can also look at Spherical Harmonics from the standpoint of operators and the field of linear algebra. For the curious reader, a more in depth treatment of Laplace's equation and the methods used to solve it in the spherical domain are presented in this section of the text. For a brief review, partial differential equations are often simplified using a separation of variables technique that turns one PDE into several ordinary differential equations (which is easier, promise). This allows us to say \(\psi(r,\theta,\phi) = R_{nl}(r)Y_{l}^{m}(\theta,\phi)\), and to form a linear operator that can act on the Spherical Harmonics in an eigenvalue problem. The more important results from this analysis include (1) the recognition of an \(\hat{L}^2\) operator and (2) the fact that the Spherical Harmonics act as an eigenbasis for the given vector space. The \(\hat{L}^2\) operator is the operator associated with the square of angular momentum.
It is directly related to the Hamiltonian operator (with zero potential) in the same way that kinetic energy and angular momentum are connected in classical physics: \[\hat{H} = \dfrac{\hat{L}^2}{2I}\] for \(I\) equal to the moment of inertia of the represented system. It is a linear operator (follows rules regarding additivity and homogeneity). More specifically, it is Hermitian. This means that when it is used in an eigenvalue problem, all eigenvalues will be real and the eigenfunctions will be orthogonal. In Dirac notation, orthogonality means that the inner product of any two different eigenfunctions will equal zero: \[\langle \psi_{i} | \psi_{j} \rangle = 0\] When we consider the fact that these functions are also often normalized, we can write the classic relationship between eigenfunctions of a quantum mechanical operator using a piecewise function: the Kronecker delta. \[\langle \psi_{i} | \psi_{j} \rangle = \delta_{ij} \quad \text{for} \quad \delta_{ij} = \begin{cases} 0 & i \neq j \\ 1 & i = j \end{cases} \] This relationship also applies to the spherical harmonic set of solutions, and so we can write an orthonormality relationship for each quantum number: \[\langle Y_{l}^{m} | Y_{k}^{n} \rangle = \delta_{lk}\delta_{mn}\] Example \(\PageIndex{2}\): Symmetry The parity operator is sometimes denoted by "P", but will be referred to as \(\Pi\) here to not confuse it with the momentum operator. When this Hermitian operator is applied to a function, the signs of all variables within the function flip. This operator gives us a simple way to determine the symmetry of the function it acts on. Recall that even functions appear as \(f(x) = f(-x)\), and odd functions appear as \(f(-x) = -f(x)\). Combining this with \(\Pi\) gives the conditions: If \[\Pi Y_{l}^{m}(\theta,\phi) = Y_{l}^{m}(-\theta,-\phi) = Y_{l}^{m}(\theta,\phi)\] then the harmonic is even. If \[\Pi Y_{l}^{m}(\theta,\phi) = -Y_{l}^{m}(\theta,\phi)\] then the harmonic is odd.
Using the parity operator and properties of integration, determine \(\langle Y_{l}^{m}| Y_{k}^{n} \rangle\) for any \( l\) an even number and \(k\) an odd number. Solution As this question is for any even and odd pairing, the task seems quite daunting, but analyzing the parity for a few simple cases will lead to a dramatic simplification of the problem. Start by applying the parity operator to the simplest spherical harmonic, \(l = m = 0\): \[\Pi Y_{0}^{0}(\theta,\phi) = \sqrt{\dfrac{1}{4\pi}} = Y_{0}^{0}(-\theta,-\phi)\] Now we can scale this up to the \(Y_{2}^{0}(\theta,\phi)\) case given in example one: \[\Pi Y_{2}^{0}(\theta,\phi) = \sqrt{ \dfrac{5}{16\pi} }(3\cos^2(-\theta) - 1)\] but cosine is an even function, so again, we see: \[ Y_{2}^{0}(-\theta,-\phi) = Y_{2}^{0}(\theta,\phi)\] It appears that for every even angular QM number, the spherical harmonic is even. As it turns out, every odd angular QM number yields odd harmonics as well! If this is the case (verified after the next example), then we now have a simple task ahead of us. Note: the integral of an odd function over a symmetric interval is zero. \[\langle Y_{l}^{m}| Y_{k}^{n} \rangle = \int_{-\infty}^{\infty} (EVEN)(ODD)d\tau \] An even function multiplied by an odd function is an odd function (the parities combine like adding an even and an odd number). As such, this integral will always be zero, no matter what specific \(l\) and \(k\) are used. As one can imagine, this is a powerful tool. The impact is lessened slightly when coming on the heels of the idea that Hermitian operators like \(\hat{L}^2\) yield orthogonal eigenfunctions, but general parity of functions is useful! Consider the question of wanting to know the expectation value of our colatitudinal coordinate \(\theta\) for any given spherical harmonic with even-\(l\).
\[\langle \theta \rangle = \langle Y_{l}^{m} | \theta | Y_{l}^{m} \rangle \] \[\langle \theta \rangle = \int_{-\infty}^{\infty} (EVEN)(ODD)(EVEN)d\tau \] Again, a complex-sounding problem is reduced to a very straightforward analysis. Using integral properties, we see this is equal to zero for any even-\(l\). A photo-set reminder of why an eigenvector (blue) is special. From https://en.wikipedia.org/wiki/Eigenvalues_and_eigenvectors. Lastly, the Spherical Harmonics form a complete set, and as such can act as a basis for the given (Hilbert) space. This means any spherical function can be written as a linear combination of these basis functions (for the basis spans the space of continuous spherical functions by definition): \[f(\theta,\phi) = \sum_{l}\sum_{m} \alpha_{lm} Y_{l}^{m}(\theta,\phi) \] While any particular basis can act in this way, the fact that the Spherical Harmonics can do this shows a nice relationship between these functions and the Fourier Series, a basis set of sines and cosines. Spherical Harmonics are considered the higher-dimensional analogs of these Fourier combinations, and are incredibly useful in applications involving frequency domains. In the past few years, with the advancement of computer graphics and rendering, the modeling of dynamic lighting systems has led to a new use for these functions. Example \(\PageIndex{3}\): In order to do any serious computations with a large sum of Spherical Harmonics, we need to be able to generate them via computer in real time (most specifically for real-time graphics systems). This requires the use of either recurrence relations or generating functions. While the general formula for our functions sits at the very top of this page, the Legendre polynomials are still as yet undefined.
The two major statements required for this example are listed: \( P_{l}(x) = \dfrac{1}{2^{l}l!} \dfrac{d^{l}}{dx^{l}}[(x^{2} - 1)^{l}]\) \( P_{l}^{|m|}(x) = (1 - x^{2})^{\tiny\dfrac{|m|}{2}}\dfrac{d^{|m|}}{dx^{|m|}}P_{l}(x)\) Using these relations, write the spherical harmonic \(Y_{1}^{1}(\theta,\phi)\). Solution To solve this problem, we can break up our process into four major parts. The first is determining our \(P_{l}(x)\) function. As \(l = 1\): \( P_{1}(x) = \dfrac{1}{2^{1}1!} \dfrac{d}{dx}[(x^{2} - 1)]\) \( P_{1}(x) = \dfrac{1}{2}(2x)\) \( P_{1}(x) = x\) Now that we have \(P_{l}(x)\), we can plug this into our Legendre recurrence relation to find the associated Legendre function, with \(m = 1\): \( P_{1}^{1}(x) = (1 - x^{2})^{\tiny\dfrac{1}{2}}\dfrac{d}{dx}x\) \( P_{1}^{1}(x) = (1 - x^{2})^{\tiny\dfrac{1}{2}}\) At the halfway point, we can use our general definition of Spherical Harmonics with the newly determined Legendre function. With \(m = l = 1\): \[ Y_{1}^{1}(\theta,\phi) = \sqrt{ \dfrac{(2(1) + 1)(1 - 1)!}{4\pi (1 + |1|)!} } (1 - x^{2})^{\tiny\dfrac{1}{2}}e^{i\phi} \] \[ Y_{1}^{1}(\theta,\phi) = \sqrt{ \dfrac{3}{8\pi} } (1 - x^{2})^{\tiny\dfrac{1}{2}}e^{i\phi} \] The last step is converting our Cartesian function into the proper coordinate system, i.e., making the switch from \(x\) to \(\cos\theta\). \[ Y_{1}^{1}(\theta,\phi) = \sqrt{ \dfrac{3}{8\pi} } (1 - \cos^{2}\theta)^{\tiny\dfrac{1}{2}}e^{i\phi} \] \[ Y_{1}^{1}(\theta,\phi) = \sqrt{ \dfrac{3}{8\pi} } (\sin^{2}\theta)^{\tiny\dfrac{1}{2}}e^{i\phi} \] \[ Y_{1}^{1}(\theta,\phi) = \sqrt{ \dfrac{3}{8\pi} }\sin\theta \, e^{i\phi} \] As a side note, there are a number of different relations one can use to generate Spherical Harmonics or Legendre polynomials. Oftentimes, efficient computer algorithms have much longer polynomial terms than the short, derivative-based statements from the beginning of this problem.
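The four-part construction above can be scripted directly. The following sketch (in Python with sympy and numpy, which are not part of this page's material; the function names are my own) rebuilds \(Y_{1}^{1}\) from the two stated relations, checks it against the closed form just derived, and numerically spot-checks the orthonormality relation from Example 2.

```python
import numpy as np
import sympy as sp

x, theta, phi = sp.symbols('x theta phi', real=True)

def legendre(l):
    # Rodrigues-style formula quoted at the start of this example
    return sp.diff((x**2 - 1)**l, x, l) / (2**l * sp.factorial(l))

def assoc_legendre(l, m):
    # associated Legendre function, as defined in this example
    return (1 - x**2)**sp.Rational(abs(m), 2) * sp.diff(legendre(l), x, abs(m))

def Y(l, m):
    # spherical harmonic with the normalization quoted in the general formula
    N = sp.sqrt((2*l + 1) * sp.factorial(l - abs(m)) /
                (4 * sp.pi * sp.factorial(l + abs(m))))
    return N * assoc_legendre(l, m).subs(x, sp.cos(theta)) * sp.exp(sp.I * m * phi)

# Check Y_1^1 against the closed form derived above, at a sample point
Y11 = Y(1, 1)
closed = sp.sqrt(3 / (8 * sp.pi)) * sp.sin(theta) * sp.exp(sp.I * phi)
pt = {theta: 0.7, phi: 0.3}
assert abs(complex(sp.N(Y11.subs(pt))) - complex(sp.N(closed.subs(pt)))) < 1e-12

# Numeric spot-check of orthonormality on a midpoint grid over the sphere
f11 = sp.lambdify((theta, phi), Y11, 'numpy')
f00 = sp.lambdify((theta, phi), Y(0, 0), 'numpy')
n = 200
T, F = np.meshgrid((np.arange(n) + 0.5) * np.pi / n,
                   (np.arange(n) + 0.5) * 2 * np.pi / n)
w = np.sin(T) * (np.pi / n) * (2 * np.pi / n)   # area element sin(theta) dtheta dphi
print(np.sum(np.conj(f11(T, F)) * f11(T, F) * w).real)  # ~ 1
print(abs(np.sum(np.conj(f11(T, F)) * f00(T, F) * w)))  # ~ 0
```

The same three helpers generate any \(Y_{l}^{m}\) in this family, so the orthonormality check can be repeated for other quantum-number pairs.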
As a final topic, we should take a closer look at the two recursive relations of Legendre polynomials together. As derivatives of even functions yield odd functions and vice versa, we note that for our first equation, an even \(l\) value implies an even number of derivatives, and this will yield another even function. When we plug this into our second relation, we now have to deal with \(|m|\) derivatives of our \(P_{l}\) function. We are in luck though, as in the spherical harmonic functions there is a separate component entirely dependent upon the sign of \(m\). As such, any changes in parity to the Legendre polynomial (to create the associated Legendre function) will be undone by the flip in sign of \(m\) in the azimuthal component. Parity only depends on \(l\)! This confirms our prediction from the second example that any Spherical Harmonic with even-\(l\) is also even, and any odd-\(l\) leads to odd \(Y_{l}^{m}\). Sources Details on the History of S.H. - http://www.liquisearch.com/spherical_harmonics/history A collection of Schrödinger's papers, dated 1926 - http://www.physics.drexel.edu/~bob/Quantum_Papers/Schr_1.pdf Details on Kelvin and Tait's Collaboration - http://www.oxfordscholarship.com/view/10.1093/acprof:oso/9780199231256.001.0001/acprof-9780199231256-chapter-11 Graph \(\theta\) Traces of S.H. Functions with Desmos - https://www.desmos.com/ Information on Hermitian Operators - http://www.pa.msu.edu/~mmoore/Lect4_BasisSet.pdf Discussions of S.H. Functions and Computer Graphics - https://www.cs.dartmouth.edu/~wjarosz/publications/dissertation/appendixB.pdf and http://www.cs.columbia.edu/~dhruv/lighting.pdf Contributors Alexander Staat
Conditioning is the soul of statistics. —Joe Blitzstein We often need to account for multiple pieces of evidence. More than one witness testifies about the colour of a taxicab; more than one person responds to our poll about an upcoming election; etc. How do we calculate a conditional probability when there are multiple conditions? In other words, how do we handle quantities of the form \(\p(A \given B_1 \wedge B_2 \wedge \ldots)\)? Imagine you’re faced with another one of our mystery urns. There are two equally likely possibilities: \[ \begin{aligned} A &: \mbox{The urn contains $70$ black marbles, $30$ white marbles.}\\ \neg A &: \mbox{The urn contains $20$ black marbles, $80$ white marbles.}\\ \end{aligned} \] Now suppose you draw a marble at random and it’s black. You put it back, give the urn a good shake, and then draw another: black again. What’s the probability the urn has \(70\) black marbles? We need to calculate \(\p(A \given B_1 \wedge B_2)\), the probability of \(A\) given that the first and second draws were both black. We already know how to do this calculation for one draw, \(\p(A \given B_1)\). We use Bayes’ theorem to get: \[ \begin{aligned} \p(A \given B_1) &= \frac{\p(B_1 \given A)\p(A)}{\p(B_1 \given A) \p(A) + \p(B_1 \given \neg A) \p(\neg A)} \\ &= \frac{(70/100)(1/2)}{(70/100)(1/2) + (20/100)(1/2)}\\ &= 7/9. \end{aligned} \] But for two draws, Bayes’ theorem gives us: \[ \begin{aligned} \p(A \given B_1 \wedge B_2) &= \frac{\p(B_1 \wedge B_2 \given A)\p(A)}{\p(B_1 \wedge B_2 \given A) \p(A) + \p(B_1 \wedge B_2 \given \neg A) \p(\neg A)}. \end{aligned} \] To fill in the values on the right hand side, we need to know these quantities: \(\p(B_1 \wedge B_2 \given A)\) and \(\p(B_1 \wedge B_2 \given \neg A)\). To get the first quantity, remember that we replaced the first marble before doing the second draw. So, given \(A\), the second draw is independent of the first. There are still \(70\) black marbles out of \(100\) on the second draw, so the chance of black on the second draw is still \(70/100\).
In other words: \[ \begin{aligned} \p(B_1 \wedge B_2 \given A) &= \p(B_1 \given A) \p(B_2 \given A)\\ &= (70/100)^2. \end{aligned} \] The same reasoning applies given \(\neg A\), too. Except here the chance of black on each draw is \(20/100\). So: \[ \begin{aligned} \p(B_1 \wedge B_2 \given \neg A) &= \p(B_1 \given \neg A) \p(B_2 \given \neg A)\\ &= (20/100)^2. \end{aligned} \] Returning to Bayes’ theorem, we can now finish the calculation: \[ \begin{aligned} \p(A \given B_1 \wedge B_2) &= \frac{\p(B_1 \wedge B_2 \given A)\p(A)}{\p(B_1 \wedge B_2 \given A) \p(A) + \p(B_1 \wedge B_2 \given \neg A) \p(\neg A)} \\ &= \frac{(70/100)^2(1/2)}{(70/100)^2(1/2) + (20/100)^2(1/2)}\\ &= 49/53. \end{aligned} \] The same solution can also be captured in a probability tree. The tree will have an extra stage now, because there’s a second draw. And it will have many more leaves, but luckily we can ignore most of them. We just need to worry about the two leaves where both draws have come up black. And we only need to fill in the probabilities along the paths that lead to those two leaves. The result is Figure 9.1. So \(\p(A \given B_1 \wedge B_2) = 0.245 / (0.245 + 0.02)\), which is the same as \(49/53\), the answer we got with Bayes’ theorem. You might be able to guess now what would happen after three black draws. Instead of getting squared probabilities in Bayes’ theorem, we’d get cubed probabilities. And using the same logic, we could keep going. We could use Bayes’ theorem to calculate \(\p(A \given B_1 \wedge \ldots \wedge B_n)\) for as many draws \(n\) as you like. Let’s try a different sort of problem with multiple conditions. Recall the taxicab problem from Chapter 8: A cab was involved in a hit and run accident at night. Two cab companies, the Green and the Blue, operate in the city. You are given the following data: \(85\%\) of the cabs in the city are Green and \(15\%\) are Blue; a witness identified the cab as Blue; and when tested under similar conditions, the witness correctly identified each of the two colours \(80\%\) of the time and failed \(20\%\) of the time. We saw it’s only about \(41\%\) likely the cab was really blue, even with the witness’ testimony.
But what if there had been two witnesses, both saying the cab was blue? Let’s use Bayes’ theorem again: \[ \begin{aligned} \p(B \given W_1 \wedge W_2) &= \frac{\p(B)\p(W_1 \wedge W_2 \given B)}{\p(W_1 \wedge W_2)}. \end{aligned} \] We have one of the terms here already: \(\p(B) = 15/100\). What about the other two, \(\p(W_1 \wedge W_2 \given B)\) and \(\p(W_1 \wedge W_2)\)? Let’s make things easy on ourselves by assuming our two witnesses are reporting independently. They don’t talk to each other, or influence one another in any way. They’re only reporting what they saw (or think they saw). Then we can “factor” these probabilities like we did when sampling with replacement: \[ \begin{aligned} \p(W_1 \wedge W_2 \given B) &= \p(W_1 \given B) \p(W_2 \given B)\\ &= (80/100)^2. \end{aligned} \] And for the denominator we use the Law of Total Probability: \[ \begin{aligned} \p(W_1 \wedge W_2) &= \p(W_1 \wedge W_2 \given B)\p(B) + \p(W_1 \wedge W_2 \given \neg B)\p(\neg B)\\ &= (80/100)^2(15/100) + (20/100)^2(85/100)\\ &= 96/1000 + 34/1000\\ &= 13/100. \end{aligned} \] Now we can return to Bayes’ theorem to finish the problem: \[ \begin{aligned} \p(B \given W_1 \wedge W_2) &= \frac{(15/100)(80/100)^2}{13/100}\\ &= 96/130\\ &\approx .74. \end{aligned} \] So, with two witnesses independently agreeing that the cab was blue, the probability goes up from less than \(1/2\) to almost \(3/4\). Figure 9.2: Tree diagram for the two-witness taxicab problem We can use a tree here too, similar to the one we made when sampling two black marbles with replacement. As before, we only need to worry about the \(W_1 \wedge W_2\) leaves, the ones where both witnesses say the cab was blue. The result is Figure 9.2, which tells us that \(\p(B \given W_1 \wedge W_2) = 0.096 / (0.096 + 0.034)\), which is approximately \(0.74\). The problems we’ve done so far were simplified by assuming independence. We sampled with replacement in the urn problem, and we assumed our two witnesses were independently reporting what they saw in the taxicab problem.
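If you like checking such results by computer, the two calculations above can be reproduced with a few lines of Python — a sketch using exact fractions, where the helper name is mine, not part of the text:

```python
from fractions import Fraction as F

def posterior(prior, like_h, like_not_h):
    # Bayes' theorem: P(H | E) from P(H), P(E | H), and P(E | not-H)
    num = prior * like_h
    return num / (num + (1 - prior) * like_not_h)

# Urn: two black draws with replacement, so the likelihoods factor as squares
urn = posterior(F(1, 2), F(70, 100)**2, F(20, 100)**2)
print(urn)   # 49/53

# Taxicab: two witnesses independently saying "blue"
cab = posterior(F(15, 100), F(80, 100)**2, F(20, 100)**2)
print(cab)   # 48/65, i.e. 96/130, approximately 0.74
```

Because the conditions are independent, squaring the single-observation likelihoods is all that changes between the one-witness and two-witness versions.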
What about when independence doesn’t hold? Let’s go back to our urn problem, but this time suppose we don’t replace the marble after the first draw. How do we calculate \(\p(A \given B_1 \wedge B_2)\) then? We’re still going to start with Bayes’ theorem: \[ \begin{aligned} \p(A \given B_1 \wedge B_2) &= \frac{\p(B_1 \wedge B_2 \given A)\p(A)}{\p(B_1 \wedge B_2 \given A) \p(A) + \p(B_1 \wedge B_2 \given \neg A) \p(\neg A)}. \end{aligned} \] But to calculate terms like \(\p(B_1 \wedge B_2 \given A)\) now, we need to think things through in two steps. We know the first draw has a \(70/100\) chance of coming up black if \(A\) is true:\[ \p(B_1 \given A) = 70/100. \]And once the first draw has come up black, if \(A\) is true then there are 69 black balls remaining and 30 white. So:\[ \p(B_2 \given B_1 \wedge A) = 69/99. \]So instead of multiplying \(70/100\) by itself, we’re multiplying \(70/100\) by almost \(70/100\):\[ \begin{aligned} \p(B_1 \wedge B_2 \given A) &= (70/100)(69/99)\\ &= 161/330. \end{aligned}\] Using similar reasoning for the possibility that \(\neg A\) instead, we can calculate \[ \begin{aligned} \p(B_1 \wedge B_2 \given \neg A) &= (20/100)(19/99)\\ &= 19/495. \end{aligned} \] Returning to Bayes’ theorem to finish the calculation: \[ \begin{aligned} \p(A \given B_1 \wedge B_2) &= \frac{\p(B_1 \wedge B_2 \given A)\p(A)}{\p(B_1 \wedge B_2 \given A) \p(A) + \p(B_1 \wedge B_2 \given \neg A) \p(\neg A)} \\ &= \frac{(161/330)(1/2)}{(161/330)(1/2) + (19/495)(1/2)} \\ &= 483/521 \\ &\approx .93. \end{aligned} \] Notice how similar this answer is to the \(.92\) we got when sampling with replacement. With so many black and white marbles in the urn, taking one out doesn’t make much difference. The second draw is almost the same as the first, so the final answer isn’t much affected. Figure 9.3: Tree diagram for two draws without replacement, values rounded The tree diagram for this problem will also be similar to the with-replacement version.
The key difference is the probabilities at the last stage of the tree. Without independence, the probability of a \(B_2\) branch is affected by the \(B_1\) that precedes it. The result is Figure 9.3, though note that some values are rounded. Still we find that: \[ \begin{aligned} \p(A \given B_1 \wedge B_2) &\approx \frac{ 0.2439 }{ 0.2439 + 0.0192 } \\ &\approx 0.93. \end{aligned} \] The calculation we just did relied on a new rule, which we should make explicit. Start by recalling a familiar rule: \(\p(A \wedge B) = \p(A \given B) \p(B).\) Our new rule applies the same idea to situations where some proposition \(C\) is taken as a given. \(\p(A \wedge B \given C) = \p(A \given B \wedge C) \p(B \given C).\) In a way, the new rule isn’t really new. We just have to realize that the probabilities we get when we take a condition \(C\) as given are still probabilities. They obey all the same rules as unconditional probabilities, and this includes the General Multiplication Rule. Another example which illustrates this point is the Negation Rule. The following conditional version is also valid: \(\p(\neg A \given C) = 1 - \p(A \given C).\) We could go through all the rules of probability we’ve learned and write out the conditional version for each one. But we’ve already got enough rules and equations to keep track of. So let’s just remember this mantra instead: Conditional probabilities are probabilities. So if we have a rule of probability, the same rule will hold if we add a condition \(C\) into each of the \(\p(\ldots)\) terms. We’ve learned two strategies for calculating conditional probabilities with multiple conditions. The first strategy is easier, but it only works when the conditions are appropriately independent. Like when we sample with replacement, or when two witnesses independently report what they saw. 
In this kind of case, we first use Bayes’ theorem, and then “factor” the terms: \[ \begin{aligned} \p(A \given B_1 \wedge B_2) &= \frac{\p(B_1 \wedge B_2 \given A)\p(A)}{% \p(B_1 \wedge B_2 \given A)\p(A) +% \p(B_1 \wedge B_2 \given \neg A)\p(\neg A)}\\ &= \frac{\p(B_1 \given A)\p(B_2 \given A)\p(A)}{% \p(B_1 \given A)\p(B_2 \given A)\p(A) +% \p(B_1 \given \neg A)\p(B_2 \given \neg A)\p(\neg A)}\\ &= \frac{(\p(B_1 \given A))^2\p(A)}{% (\p(B_1 \given A))^2\p(A) +% (\p(B_1 \given \neg A))^2\p(\neg A)}. \end{aligned} \] Our second strategy is a little more difficult. But it works even when the conditions are not independent. We still start with Bayes’ theorem. But then we apply the conditional form of the General Multiplication Rule:\[ \begin{aligned} \p(A \given B_1 \wedge B_2) &= \frac{\p(B_1 \wedge B_2 \given A)\p(A)}{% \p(B_1 \wedge B_2 \given A)\p(A) +% \p(B_1 \wedge B_2 \given \neg A)\p(\neg A)}\\ &= \frac{\p(B_2 \given B_1 \wedge A)\p(B_1 \given A)\p(A)}{% \p(B_2 \given B_1 \wedge A)\p(B_1 \given A)\p(A) +% \p(B_2 \given B_1 \wedge \neg A)\p(B_1 \given \neg A)\p(\neg A)}. \end{aligned}\] These are some pretty hairy formulas, so memorizing them probably isn’t a good idea. It’s better to understand how they flow from Bayes’ theorem or a tree diagram. Recall the following problem from Chapter 8. Willy Wonka Co. makes two kinds of boxes of chocolates. The “wonk box” has four caramel chocolates and six regular chocolates. The “zonk box” has six caramel chocolates, two regular chocolates, and two mint chocolates. A third of their boxes are wonk boxes, the rest are zonk boxes. They don’t mark the boxes. The only way to tell what kind of box you’ve bought is by trying the chocolates inside. In fact, all the chocolates look the same; you can only tell the difference by tasting them. Previously you calculated the probability a randomly chosen box is a wonk box given that a chocolate randomly selected from it is caramel. 
This time, suppose you randomly select two chocolates. Recall the following problem from Chapter 8. A magic shop sells two kinds of trick coins. The first kind are biased towards heads: they come up heads \(9\) times out of \(10\) (the tosses are independent). The second kind are biased towards tails: they come up tails \(8\) times out of \(10\) (tosses still independent). Half the coins are the first kind, half are the second kind. But they don’t label the coins, so you have to experiment to find out which are which. Previously, you picked a coin at random and flipped it once. But now suppose you flip it a second time. What’s the probability it’s the first kind of coin if it lands heads both times? Recall the following problem from Chapter 8. There is a room filled with two types of urns. The two types of urn look identical, but \(80\%\) of them are Type A. Previously you calculated the probability a randomly selected urn is Type B given that one marble randomly drawn from it is yellow. Suppose now you put the yellow marble back, shake hard, and draw another marble at random from the same urn. Recall the following problem from Chapter 8. A room contains four urns. Three of them are Type X, one is Type Y. You are going to pick an urn at random and start drawing marbles from it at random without replacement. Previously you calculated the probability the urn is Type X given that the first draw is black. The order in which conditions are given doesn’t matter. More precisely, the following equation always holds: \[ \p(A \given B \wedge C) = \p(A \given C \wedge B).\] Use the rules of probability to prove that it always holds. The order in which things happen often matters. If the light was red but is now green, the intersection is probably safe to drive through. But if the light was green and is now red, it’s probably not safe. We just saw, though, that the order in which conditions are given doesn’t make any difference to the probability.
Explain why these two observations do not conflict. Above we observed that \(\p(\neg A \given C) = 1 - \p(A \given C)\). Prove that this equation holds. Hint: start with the definition of conditional probability, and then recall that \(1 = \p(C) / \p(C)\).
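The identities in the last few exercises can be sanity-checked (though of course not proved) by brute-force enumeration over a small finite sample space. A sketch in Python, with three fair coin flips as an arbitrary toy example:

```python
from fractions import Fraction
from itertools import product

omega = list(product("HT", repeat=3))          # three fair coin flips

def pr(event):
    return Fraction(sum(1 for w in omega if event(w)), len(omega))

def cond(event, given):
    return pr(lambda w: event(w) and given(w)) / pr(given)

A = lambda w: w[0] == "H"                      # first flip heads
B = lambda w: w[1] == "H"                      # second flip heads
C = lambda w: "H" in w                         # at least one head

both = lambda w: A(w) and B(w)

# General Multiplication Rule, conditional form
assert cond(both, C) == cond(A, lambda w: B(w) and C(w)) * cond(B, C)
# Order of conditions doesn't matter
assert cond(A, lambda w: B(w) and C(w)) == cond(A, lambda w: C(w) and B(w))
# Conditional Negation Rule
assert cond(lambda w: not A(w), C) == 1 - cond(A, C)
print("all conditional identities hold on this space")
```

Swapping in other events, or a larger sample space, exercises the same rules; a proof, of course, still requires the definition of conditional probability.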
Orthogonal Two vectors are orthogonal if their dot product is equal to zero, by the relationship $\vec{u} \cdot \vec{v} = |\vec{u}||\vec{v}|\cos{\theta}$. If two vectors are orthogonal (90 degrees or $\frac{\pi}{2}$ rads apart), $\cos{\theta}=0$, so the dot product vanishes. Unit Vector A vector of magnitude 1. This is often denoted as $\hat{v}$. In fact, in physics, this is where we get the symbols $\hat{i}$, $\hat{j}$, and $\hat{k}$, which symbolize the unit vectors in the x, y, and z directions respectively. To make a vector a unit vector, simply divide it by its magnitude, which is written as $\frac{\vec{r}}{|\vec{r}|} = \hat{r}$ On a side note, if two vectors are orthogonal, $|\vec{u}|^2 + |\vec{v}|^2 = |\vec{u}+\vec{v}|^2$ because, if $\vec{u}$ and $\vec{v}$ are orthogonal, they intersect at a right angle and $\vec{u}+\vec{v}$ signifies the diagonal of the parallelogram formed by the two vectors. In the case of two orthogonal vectors, the parallelogram is a rectangle (remember squares are rectangles, but rectangles are not necessarily squares) and $\vec{u}+\vec{v}$ is the hypotenuse of a right triangle. Thus by the Pythagorean theorem, $|\vec{u}|^2 + |\vec{v}|^2 = |\vec{u}+\vec{v}|^2$.
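These facts are easy to verify numerically; a quick sketch with numpy (the example vectors are arbitrary):

```python
import numpy as np

u = np.array([3.0, 0.0, 0.0])
v = np.array([0.0, 4.0, 0.0])

# Orthogonality: the dot product is zero
assert np.dot(u, v) == 0

# Unit vector: divide the vector by its magnitude
r = np.array([2.0, 3.0, 6.0])
r_hat = r / np.linalg.norm(r)          # |r| = 7, so r_hat = r / 7
assert np.isclose(np.linalg.norm(r_hat), 1.0)

# Pythagorean property for orthogonal vectors: |u|^2 + |v|^2 = |u+v|^2
lhs = np.linalg.norm(u)**2 + np.linalg.norm(v)**2
rhs = np.linalg.norm(u + v)**2
assert np.isclose(lhs, rhs)            # 9 + 16 == 25
```

For non-orthogonal vectors, the last assertion fails, which is exactly the parallelogram-versus-rectangle point made above.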
Although machine learning is great for shape classification, for shape recognition we must still use the older methods, such as the Hough Transform and RANSAC. In this post, we’ll look into using the Hough Transform for recognizing straight lines. The following is taken from E. R. Davies’ book, Computer Vision: Principles, Algorithms, Applications, Learning, and from Digital Image Processing by Gonzalez and Woods. Straight edges are amongst the most common features of the modern world, arising in perhaps the majority of manufactured objects and components – not least in the very buildings in which we live. Yet, it is arguable whether true straight lines ever arise in the natural state: possibly the only example of their appearance in virgin outdoor scenes is the horizon – although even this is clearly seen from space as a circular boundary! The surface of water is essentially planar, although it is important to realize that this is a deduction: the fact remains that straight lines seldom appear in completely natural scenes. Be all this as it may, it is clearly vital both in city pictures and in the factory to have effective means of detecting straight edges. This chapter studies available methods for locating these important features. Historically, the HT has been the main means of detecting straight edges, and since the method was originally invented by Hough in 1962, it has been developed and refined for this purpose. We’re going to concentrate on it in this blog post, and this also prepares you to use the HT to detect circles, ellipses, corners, etc., which we’ll talk about in the not-too-distant future. We start by examining the original Hough scheme, even though it is now seen to be wasteful in computation, since the method has since evolved. First, let us introduce the Hough Transform. Often, we have to work in unstructured environments in which all we have is an edge map and no knowledge about where objects of interest might be.
In such situations, all pixels are candidates for linking, and thus have to be accepted or eliminated based on predefined global properties. In this section, we develop an approach based on whether sets of pixels lie on curves of a specified shape. Once detected, these curves form the edge or region boundaries of interest. Given $n$ points in the image, suppose that we want to find subsets of these points that lie on straight lines. One possible solution is to find all lines determined by every pair of points, then find all subsets of points that are close to particular lines. This approach involves finding $n(n-1)/2 \sim n^2$ lines, then performing $(n)(n(n-1))/2 \sim n^3$ comparisons of every point to all lines. As you might have guessed, this is an extremely computationally expensive task. Imagine it: we check every pixel for neighboring pixels and compare their distances to see if they form a straight line. Impossible! Hough, as we said, proposed in 1962 an alternative approach to this scanline method, commonly referred to as the Hough transform. Let $(x_i, y_i)$ denote a point in the xy-plane and consider the general equation of a straight line in slope-intercept form: $y_i = ax_i + b$. Infinitely many lines pass through $(x_i, y_i)$, but they all satisfy this equation for varying values of $a$ and $b$. However, writing this equation as $b = -x_i a + y_i$ and considering the ab-plane – also called parameter space – yields the equation of a single line for the fixed pair $(x_i, y_i)$. Furthermore, a second point $(x_j, y_j)$ also has a single line in parameter space associated with it, which intersects the line associated with $(x_i, y_i)$ at some point $(a', b')$ in parameter space, where $a'$ is the slope and $b'$ is the intercept of the line containing both $(x_i, y_i)$ and $(x_j, y_j)$ in the xy-plane (assuming, of course, that the parameter-space lines are not parallel). In fact, all points on this xy-line have lines in parameter space that intersect at $(a', b')$.
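To see the intersection property concretely, here's a toy sketch (mine, not from either book): a few collinear points in the xy-plane, each mapped to its line in the ab-plane, all of which pass through the same $(a', b')$.

```python
# Collinear points on the line y = 2x + 1 in the xy-plane
# (true slope a' = 2, true intercept b' = 1)
points = [(0, 1), (1, 3), (2, 5), (3, 7)]

# Each point (x_i, y_i) contributes the line b = -x_i * a + y_i in the ab-plane
def b_of(a, x_i, y_i):
    return -x_i * a + y_i

# Evaluating every parameter-space line at a = 2 gives b = 1 each time:
# all four lines intersect at (a', b') = (2, 1)
values = [b_of(2, x_i, y_i) for x_i, y_i in points]
print(values)   # [1, 1, 1, 1]
```

An accumulator over a quantized ab-plane (or, better, the rho-theta plane introduced next) turns this intersection into a vote count: collinear points pile votes into the same cell.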
Here, this figure illustrates the concept: In principle, the parameter-space lines corresponding to all points $(x_k, y_k)$ in the xy-plane could be plotted, and the principal lines in that plane could be found by identifying points in parameter space where large numbers of parameter-space lines intersect. However, a difficulty with this approach is that $a$ approaches infinity as the line approaches the vertical direction. One way around this difficulty is to use the normal representation of a line: \[ x \cos(\theta) + y \sin(\theta) = \rho \] The figure on the right below demonstrates the geometrical interpretation of the parameters $\rho$ and $\theta$. A horizontal line has $\theta = 0^\circ$, with $\rho$ being equal to the positive x-intercept. Similarly, a vertical line has $\theta = 90^\circ$, with $\rho$ being equal to the positive y-intercept. Each sinusoidal curve in the middle of the figure below represents the family of lines that pass through a particular point $(x_k, y_k)$ in the xy-plane. Let’s talk about the properties of the Hough transform. The figure below illustrates the Hough transform based on the equation above. On the top, you see an image of size $M\times M$, with $M=101$, containing five labeled white points, and below it each of these points is mapped into the parameter space, the $\rho\theta$-plane, using subdivisions of one unit for the $\rho$ and $\theta$ axes. The range of $\theta$ values is $\pm 90^\circ$ and the range of $\rho$ values is $\pm \sqrt{2} M$. As the bottom image shows, each curve has a different sinusoidal shape. The horizontal line resulting from the mapping of point 1 is a sinusoid of zero amplitude. The points labeled A and B in the image on the bottom illustrate the colinearity detection property of the Hough transform. For example, point B marks the intersection of the curves corresponding to points 2, 3, and 4 in the xy image plane.
The location of point A indicates that points 1, 3, and 5 lie on a straight line passing through the origin $(\rho = 0)$ and oriented at $-45^\circ$. Similarly, the curves intersecting at point B in parameter space indicate that points 2, 3, and 4 lie on a straight line oriented at $45^\circ$, and whose distance from the origin is $\rho = 71$. Finally, the points labeled Q, R, and S illustrate the fact that the Hough transform exhibits a reflective adjacency relationship at the right and left edges of the parameter space. Now that we know the basics of the HT and line detection using the HT, let’s take a look at longitudinal line localization. The previous method is insensitive to where along the infinite idealized line an observed segment appears. The reason for this is that we only have two parameters, $\rho$ and $\theta$. There is some advantage to be gained in this, in that partial occlusion of a line does not prevent its detection: indeed, if several segments of a line are visible, they can all contribute to the peak in parameter space, hence improving sensitivity. On the other hand, for full image interpretation, it is useful to have information about the longitudinal placement of line segments. This is achieved by a further stage of processing. The additional stage involves finding which points contributed to each peak in the main parameter space, and carrying out connectivity analysis in each case. Some call this process xy-grouping. It is not vital that the line segments be 4-connected (meaning a neighborhood with only the vertical and horizontal neighbors) or 8-connected (with diagonal neighbors as well) – just that there should be sufficient points on them so that adjacent points are within a threshold distance apart, i.e., groups of points are merged if they are within a prespecified distance. Finally, segments shorter than a certain minimum length can be ignored as too insignificant to help with image interpretation. An alternative method for saving computation time is the Foot-of-Normal method.
Created by the author of the book I’m quoting from, it eliminates the use of trigonometric functions such as arctan by employing a different parametrization scheme. Both methods described so far employ abstract parameter spaces in which points bear no immediately obvious visual relation to image space. In this alternative scheme, the parameter space is a second image space, which is congruent to the image space. This type of parameter space is obtained in the following way. First, each edge fragment in the image is processed much as before so that $\theta$ can be measured, but this time the foot of the normal from the origin is taken as a voting position in the parameter space. Taking $(x_0, y_0)$ as the foot of the normal from the origin to the relevant line, it is found that: \[b/a = y_0/x_0 \] \[(x-x_0)x_0 + (y-y_0)y_0 = 0 \] These two equations are sufficient to compute the two coordinates, $(x_0, y_0)$. Solving for $x_0$ and $y_0$ gives: \[ x_0 = \nu a \] \[y_0 = \nu b \] where: \[ \nu = \frac{ax + by}{a^2 + b^2} \] Well, we’re done for now! It’s time to take a shower, then study regression, as I’m done with classification. I’m going to write a post about regression, so stay tuned!
Sylow’s Theorem (Summary) In this post we review Sylow’s theorem and, as an example, we solve the following problem. Problem 64 Show that a group of order $200$ has a normal Sylow $5$-subgroup. Review of Sylow’s Theorem One of the important theorems in group theory is Sylow’s theorem. Sylow’s theorem is a very powerful tool to solve the classification problem of finite groups of a given order. In this article, we review several terminologies, the contents of Sylow’s theorem, and its corollary. We also give an example that can be solved using Sylow’s theorem. At the end of this post, links to various problems on Sylow’s theorem are given. We first introduce several definitions. Definition 1. Let $G$ be a group and $p$ be a prime number. A group of order $p^{\alpha}$ for some non-negative integer $\alpha$ is called a $p$-group. A subgroup of $G$ which is a $p$-group is called a $p$-subgroup. Definition 2. Let $G$ be a finite group of order $n$. Let $p$ be a prime number dividing $n$. Write $n=p^{\alpha}m$, where $\alpha, m \in \Z$ and $p$ does not divide $m$. Then any subgroup $H$ of $G$ is called a Sylow $p$-subgroup of $G$ if the order of $H$ is $p^{\alpha}$. Sylow’s theorem Let $G$ be a finite group of order $p^{\alpha}m$, where the prime number $p$ does not divide $m$. There exists at least one Sylow $p$-subgroup of $G$. If $P$ is a Sylow $p$-subgroup of $G$ and $Q$ is any $p$-subgroup of $G$, then there exists $g \in G$ such that $Q$ is a subgroup of $gPg^{-1}$. In particular, any two Sylow $p$-subgroups of $G$ are conjugate in $G$. The number $n_p$ of Sylow $p$-subgroups of $G$ satisfies \[n_p \equiv 1 \pmod p.\] That is, $n_p=pk+1$ for some $k\in \Z$. The number $n_p$ of Sylow $p$-subgroups of $G$ is the index of the normalizer $N_G(P)$ in $G$ for any Sylow $p$-subgroup $P$; hence $n_p$ divides $m$.
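The congruence $n_p \equiv 1 \pmod p$ together with the divisibility $n_p \mid m$ is easy to turn into a quick computational filter; the sketch below (in Python; the helper name is my own, not standard) lists the values of $n_p$ that Sylow’s theorem allows.

```python
def sylow_counts(n, p):
    # Candidate values of n_p allowed by Sylow's theorem for |G| = n = p^a * m
    m = n
    while m % p == 0:
        m //= p
    # n_p must divide m and be congruent to 1 mod p
    return [d for d in range(1, m + 1) if m % d == 0 and d % p == 1]

print(sylow_counts(200, 5))   # [1]  -> the Sylow 5-subgroup is unique
print(sylow_counts(12, 3))    # [1, 4]
```

When the list collapses to $[1]$, as it does for $n=200$, $p=5$, the Sylow $p$-subgroup is unique.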
Corollary In the notation of the previous theorem, if the number $n_p$ of Sylow $p$-subgroups of $G$ is $n_p=1$, then the Sylow $p$-subgroup is a normal subgroup of $G$. Example/Problem. Now, as an example, we solve the problem. Problem. Show that a group of order $200$ has a normal Sylow $5$-subgroup. Solution. We have the factorization $200=2^3\cdot 5^2$. By Sylow’s theorem, the number of Sylow $5$-subgroups satisfies $n_5 \equiv 1 \pmod 5$ and $n_5$ divides $8$. The numbers satisfying $n_5 \equiv 1 \pmod 5$ are $n_5=1, 6, 11, \cdots$. Among these numbers, only $1$ divides $8$. Thus the only number satisfying both conditions is $1$. Hence $n_5=1$ and there is only one Sylow $5$-subgroup. Then by the corollary, the Sylow $5$-subgroup is normal. More Problems on Sylow’s theorem Sylow’s theorem is a handy tool to determine the group structure of a finite group. We list here several problems/examples that can be solved using Sylow’s theorem. All solutions are given in the links below. Sylow subgroups of a group of order $33$ are normal subgroups A group of order pq has a normal Sylow subgroup and is solvable If the order is an even perfect number, then a group is not simple A group of order pqr contains a normal subgroup Groups of order 100, 200. Is it simple? If a Sylow subgroup is normal in a normal subgroup, it is a normal subgroup A subgroup containing all p-Sylow subgroups of a group A group of order $20$ is solvable Non-abelian group of order $pq$ and its Sylow subgroups Prove that a Group of Order 217 is Cyclic and Find the Number of Generators Every Group of Order 20449 is an Abelian Group Every Sylow 11-Subgroup of a Group of Order 231 is Contained in the Center $Z(G)$ Every Group of Order 72 is Not a Simple Group
Since $I_1+I_2=R$, there exist $a \in I_1$ and $b \in I_2$ such that\[a+b=1.\]Then we have\begin{align*}1&=1^{m+n-1}=(a+b)^{m+n-1}\\[6pt]&=\sum_{k=0}^{m+n-1}\binom{m+n-1}{k}a^k b^{m+n-1-k}\\[6pt]&=\sum_{k=0}^{m-1}\binom{m+n-1}{k}a^k b^{m+n-1-k}+\sum_{k=m}^{m+n-1}\binom{m+n-1}{k}a^k b^{m+n-1-k}.\end{align*}In the second equality, we used the binomial expansion. Note that each term of the first sum lies in $I_2^n$: for $k \le m-1$ the exponent of $b$ satisfies $m+n-1-k \ge n$, so the term is divisible by $b^n \in I_2^n$. Each term of the second sum lies in $I_1^m$, since for $k \ge m$ the factor $a^k$ is divisible by $a^m \in I_1^m$. Thus the whole sum is in $I_1^m+I_2^n$, and hence we have $1 \in I_1^m+I_2^n$, which implies that $I_1^m+I_2^n=R$.
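The statement specializes nicely to $\Z$, where the ideals $(a)$ and $(b)$ are comaximal exactly when $\gcd(a,b)=1$. A quick numerical sanity check of the conclusion (our own illustration, not part of the proof):

```python
from math import gcd

# In Z, (a) + (b) = Z iff gcd(a, b) == 1. The argument above then
# gives (a)^m + (b)^n = (a^m) + (b^n) = Z, i.e. gcd(a^m, b^n) == 1.
def comaximal(a, b):
    return gcd(a, b) == 1

a, b, m, n = 2, 3, 2, 3
assert comaximal(a, b)
assert comaximal(a**m, b**n)  # gcd(4, 27) == 1
print("powers of comaximal ideals stay comaximal")
```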
Authors: Ashraf Karamzadeh, Hamid Reza Maimani, Ali Zaeembashi Keywords: Domination, Signed Italian Dominating Function, Signed Italian Domination Number. Abstract A signed Italian dominating function on a graph $G=(V,E)$ is a function $f:V\to \{ -1, 1, 2 \}$ satisfying the condition that $f[u]\ge 1$ for every vertex $u$, where $f[u]$ denotes the sum of the function values over the closed neighborhood of $u$. The weight of a signed Italian dominating function is the value $f(V)=\sum_{u\in V}f(u)$. The signed Italian domination number of a graph $G$, denoted by $\gamma_{sI}(G)$, is the minimum weight of a signed Italian dominating function on $G$. In this paper, we determine the signed Italian domination number of some classes of graphs. We also present several lower bounds on the signed Italian domination number of a graph. In particular, for a graph $G$ of order $n$ with $m$ edges and without isolated vertices, we show that $\gamma_{sI}(G)\ge \frac{3n-4m}{2}$ and characterize all graphs attaining equality in this bound. We show that if $G$ is a graph of order $n\ge2$, then $\gamma_{sI}(G)\ge 3\sqrt{\frac{n}{2}}-n$ and this bound is sharp. Mathematics Section, Department of Basic Sciences, Shahid Rajaee Teacher Training University, P.O. Box 16785-163, Tehran, Iran
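As an illustration of the definition (our own example, not taken from the paper), the SIDF condition is easy to check by brute force on a small graph; here a hypothetical function on the $4$-cycle:

```python
# Check the signed Italian dominating function (SIDF) condition:
# f maps vertices to {-1, 1, 2} and the sum of f over each closed
# neighborhood N[u] must be at least 1. Graph and f are illustrative.
def is_sidf(adj, f):
    return all(f[u] + sum(f[v] for v in adj[u]) >= 1 for u in adj)

def weight(f):
    return sum(f.values())

c4 = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}  # the cycle C_4
f = {0: 2, 1: 1, 2: -1, 3: 1}
print(is_sidf(c4, f), weight(f))  # True 3
```

This exhibits a valid SIDF of weight $3$ on $C_4$; whether it achieves $\gamma_{sI}(C_4)$ is determined in the paper, not here.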
Measuring distance is an important task for many applications like preprocessing, clustering or classification of data. In general, the distance between two points can be calculated as\begin{equation} \label{eq:EuclideanStandardizationMahalanobis_Distance} \operatorname{d}(\fvec{x}, \fvec{y}) = \sqrt{\left( \fvec{x} - \fvec{y} \right)^T S^{-1} \left( \fvec{x} - \fvec{y} \right)} \end{equation} where \(S\) is an \(n \times n\) matrix which defines the distance type. It starts with the common Euclidean distance, which simply measures the length of the line segment between the two points. If the variance of the feature dimensions is very different (e.g. if the first dimension measures in metres and the second in kilograms, the ranges will very likely differ), then we can compensate by standardizing each dimension first. To also compensate for the correlation between feature dimensions, the Mahalanobis distance is useful. The following table summarizes the basic properties of each distance type (for the 2D case).
Euclidean: \(S = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}\). Distance pattern: equal circles. Information from the dataset: none.

Standardization: \(S = \begin{pmatrix} \sigma_1^2 & 0 \\ 0 & \sigma_2^2 \end{pmatrix}\). Distance pattern: axis-aligned ellipses. Information from the dataset: the variance in each dimension (\(\sigma_1^2, \sigma_2^2\)).

Mahalanobis: \(S = \begin{pmatrix} \sigma_1^2 & \sigma_{12} \\ \sigma_{21} & \sigma_2^2 \end{pmatrix}\). Distance pattern: data-aligned ellipses. Information from the dataset: the variance in each dimension (\(\sigma_1^2, \sigma_2^2\)) and the covariance across dimensions (\(\sigma_{12} = \sigma_{21}\)).

Note that the definition of the Euclidean distance is indeed the same as the L2-norm of the difference vector (shown for the 2D case)\begin{align*} \sqrt{\left( \fvec{x} - \fvec{y} \right)^T S_E^{-1} \left( \fvec{x} - \fvec{y} \right)} &= \sqrt{ \begin{pmatrix} x_1 - y_1 & x_2 - y_2 \end{pmatrix} \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} \begin{pmatrix} x_1 - y_1 \\ x_2 - y_2 \end{pmatrix} } \\ &= \sqrt{ \begin{pmatrix} x_1 - y_1 & x_2 - y_2 \end{pmatrix} \begin{pmatrix} x_1 - y_1 \\ x_2 - y_2 \end{pmatrix} } \\ &= \sqrt{ (x_1 - y_1)^2 + (x_2 - y_2)^2 } \\ &= \left\| \fvec{x} - \fvec{y} \right\|_2. \end{align*} To visualize the different distances, consider the following example where we have two sets of data points, both randomly generated from a normal distribution but with different variances. Additionally, they are translated and rotated to different positions in the coordinate frame. Consider this as a two-class problem where all points from the first dataset belong to the first class and all points from the second dataset belong to the second class. We can also define the cluster centre for each class as\begin{equation*} \fvec{c}_i = \frac{1}{\left| C_i \right|} \sum_{\fvec{x} \in C_i} \fvec{x} \end{equation*} where \(C_i\) is the set which stores every data point from class \(i\). Suppose a new data point should be classified, e.g.
assigned to one of the two classes (\(\omega_1\) or \(\omega_2\)). A very simple approach is to calculate the distance to the centre of each class and assign the new point to the class for which this distance is minimized\begin{equation} \label{eq:EuclideanStandardizationMahalanobis_Classifier} \omega_* = \operatorname{argmin}_{i\in\{1,2\}} \sqrt{\left( \fvec{x} - \fvec{c}_i \right)^T S_i^{-1} \left( \fvec{x} - \fvec{c}_i \right)}. \end{equation} The classifier basically applies \eqref{eq:EuclideanStandardizationMahalanobis_Distance}, but instead of calculating the distance between arbitrary points it always calculates the distance to the centre of the current class. Note also that each class has its own (co)-variance matrix \(S_i\). We want to take into account the different variances of the data points of each class, so that the natural structure of the data points is reflected in the distance calculation as well. Hence, the points of the two classes are considered separately, leading to covariance matrices of\begin{equation*} S_1 = \begin{pmatrix} 11.4278 & 6.73963 \\ 6.73963 & 4.91336 \\ \end{pmatrix} \quad \text{and} \quad S_2 = \begin{pmatrix} 7.163 & -33.9277 \\ -33.9277 & 228.504 \\ \end{pmatrix}. \end{equation*} These matrices also define the distance pattern around each centre, which visualizes how a distance function behaves relative to a fixed point (the cluster centre \(\fvec{c}_i\) in this case). Take a look at the following animation to explore the different methods and their distance patterns. As you can see, using the Euclidean distance results in equal circles around each centre. The structure of the data is not taken into consideration; we only search for the nearest red point. Using standardized variables, we incorporate the fact that the variance of the feature dimensions (\(x_1, x_2\)) is not the same. To take this into account, the equal circles are transformed into ellipses. But note that these ellipses are still aligned with the coordinate frame.
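The classification rule above can be sketched in a few lines of numpy. The data and names here are our own synthetic choices; in the article, each \(S_i\) is the covariance matrix estimated from the points of class \(i\):

```python
import numpy as np

# Minimal nearest-centroid classifier using a per-class Mahalanobis
# distance, as in the argmin rule above. We compare squared distances,
# which is equivalent since sqrt is monotone.
def fit(points_by_class):
    centres = [pts.mean(axis=0) for pts in points_by_class]
    covs = [np.cov(pts.T) for pts in points_by_class]
    return centres, covs

def classify(x, centres, covs):
    def d2(c, S):
        v = x - c
        return v @ np.linalg.inv(S) @ v
    return int(np.argmin([d2(c, S) for c, S in zip(centres, covs)]))

rng = np.random.default_rng(0)
c1 = rng.normal([0.0, 0.0], 1.0, size=(200, 2))   # synthetic class 1
c2 = rng.normal([6.0, 6.0], 1.0, size=(200, 2))   # synthetic class 2
centres, covs = fit([c1, c2])
print(classify(np.array([5.5, 6.2]), centres, covs))  # 1 (second class)
```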
Finally, the Mahalanobis distance also analyses the correlation between the feature dimensions in the dataset (e.g. is there a linear dependency between \(x_1\) and \(x_2\)?) and uses this information in the distance calculation. The distance pattern still consists of ellipses, but compared to standardized variables they are now transformed to the shape of the data. More precisely: they are aligned with the principal components of the data. List of attached files: EuclideanStandardizationMahalanobis.nb [PDF] (Mathematica notebook which was used to create the animation with the simple classifier)
As Diwali approaches, we have learned to worry about air quality. Over the last few years, several studies have noted the increase in pollution levels during the period of Diwali, owing to increased commercial activity and firework displays. However, as we showed in our previous article, there is considerable variation in PM 2.5 levels in Delhi in terms of location/time/month: Time Effect: The effect of Diwali is not uniform throughout the day and is more prevalent at particular times of the day than others. We also need to adjust for the confounding effect of time: pollution levels are high during the night and low during the day. Location Effect: Several areas of Delhi are severely polluted all the time, whereas others see large variations in their pollution levels. Month Effect: The day of the Diwali festival varies in the Gregorian calendar between 17th October and 15th November every year. Existing pollution levels in this period are already high when compared to the annual average. This is a confounding effect. It is possible that the bad air that we see in Delhi at the time of Diwali is just the bad air quality of winter, and is not causally impacted by Diwali. All these reasons make it difficult to attribute the entire increase in PM2.5 to Diwali. In this article, we attempt to quantify the increase in PM 2.5 levels during the Diwali period. Does Diwali have an impact upon air quality? If so, by how much? Issues in research design The opportunity to identify a Diwali effect comes from the fact that Diwali is a `moving holiday' which takes place on a different day each year. If this were not the case, it would be strongly correlated with changing climate. Our ability to analyse these questions is greatly hampered by the lack of data. As of today, the data only runs from 1/2013 to 10/2016. The air pollution caused by fireworks includes many contaminants. The data that we are studying covers only PM2.5.
Pollution levels on Diwali The data used for the analysis comes from the US Consulate based in Chanakyapuri and the Central Pollution Control Board for 4 locations (R K Puram, Punjabi Bagh, Mandir Marg, Anand Vihar). The data consists of hourly PM 2.5 levels across the five locations from January 2013 to October 2016. We winsorise the data at 1% on both ends to remove the extreme tail values. The effect of Diwali on pollution levels We first estimate the effect of Diwali on daily data using an event study. We aggregate the hourly concentration of PM2.5, at each location, to arrive at daily numbers. The day of the Lakshmi Puja is taken as the event day. Therefore, we get 3 events for each location. Next, we calculate the percentage change in PM2.5 concentration levels by differencing the logarithm of PM2.5 values. These are then re-indexed to show the cumulative change over a 20-day window. Event study showing the change in PM2.5 around Diwali date (in days) The solid line represents the average cumulative percentage change in PM2.5 values during the window, whereas the dashed lines represent the confidence intervals calculated using bootstrapped standard errors. We see that pollution levels start increasing one day before Diwali, and keep increasing until two days after Diwali. It is also interesting to note that the increase in pollution levels is significant during the two days after Diwali. This can be attributed to the fact that Diwali celebrations begin only on the night of Diwali, leading to a significant increase the next day, and to Diwali being celebrated over an extended period of time. We now come to the same set of questions using a regression.
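The event-study construction described above (log-differences, re-indexed to cumulative change) can be sketched as follows. The series here is synthetic; the article's version additionally averages across events and locations and bootstraps standard errors:

```python
import numpy as np

# Cumulative percent change of daily PM2.5 around an event day,
# computed from log-differences (a minimal sketch, synthetic data).
def cumulative_change(daily_pm25, event_idx, window=10):
    logdiff = np.diff(np.log(daily_pm25))
    seg = logdiff[event_idx - window:event_idx + window]
    return np.cumsum(seg) * 100  # approx. cumulative percent change

# Flat series with a spike at the "Diwali" day (index 15).
series = np.concatenate([np.full(15, 100.0),
                         [150.0, 220.0, 180.0],
                         np.full(12, 110.0)])
curve = cumulative_change(series, event_idx=15, window=5)
print(round(curve[-1], 1))  # net change over the window, in percent
```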
Contribution of Diwali to PM2.5: Regression analysis Since Diwali is celebrated over a number of days, we define the following models: Diwali=t: Diwali day only. Diwali={t-1:t+1}: 3 days (day before Diwali, Diwali, day after Diwali). Diwali={t-1:t+2}: 4 days (day before Diwali to two days after Diwali). The model is as follows: \[ PM2.5_{it} = \alpha + \beta_1 \cdot Diwali_{t}+ \beta_2 \cdot Diwali_{t} \cdot l_{i} + m_t + h_t + l_i+\epsilon_{it} \] where $i$ is location and $t$ is time. Here, PM 2.5 is the hourly measured level of the pollutant. The first model takes Diwali to be only the date of Diwali, the second model defines the Diwali days from one day before to one day after, and the third model considers Diwali from the preceding day to two days after Diwali. In addition, we have month ($m_t$), location ($l_i$), and hour ($h_t$) fixed effects. The base for the location interaction term is Anand Vihar. Robust standard errors are used for our analysis throughout.

Dependent variable: Hourly PM2.5 concentration

                               (1) Diwali=t           (2) Diwali={t-1:t+1}     (3) Diwali={t-1:t+2}
Diwali                         -3.720 (t = -0.177)    98.687 (t = 8.496***)    134.709 (t = 13.181***)
Chanakyapuri*Diwali            17.270 (t = 0.638)     -75.878 (t = -5.100***)  -87.035 (t = -6.692***)
Mandir Marg*Diwali             73.078 (t = 2.606***)  -67.943 (t = -4.450***)  -66.844 (t = -4.979***)
Punjabi Bagh*Diwali            65.630 (t = 2.374**)   -49.033 (t = -3.254***)  -52.254 (t = -3.945***)
R K Puram*Diwali               63.348 (t = 2.291**)   -54.228 (t = -3.589***)  -67.094 (t = -5.055***)
Month FE                       Yes                    Yes                      Yes
Location FE                    Yes                    Yes                      Yes
Hour FE                        Yes                    Yes                      Yes
Observations                   118,847                118,847                  118,847
R2                             0.264                  0.264                    0.266
Adjusted R2                    0.264                  0.264                    0.266
F Statistic (df = 39; 118803)  1,091.020***           1,094.274***             1,103.673***

The first model (Column 1) shows that the baseline effect (i.e. at Anand Vihar) is not statistically different from non-Diwali days. For locations other than Chanakyapuri, there is a differential effect on Diwali relative to Anand Vihar.
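A dummy-variable version of this fixed-effects specification can be sketched with plain OLS in numpy. The data below is synthetic with a known Diwali effect of 100, and all variable names are illustrative (the article's estimation also includes month and location dummies, interactions, and robust standard errors):

```python
import numpy as np

# OLS with a Diwali indicator and hour fixed effects (one-hot dummies).
def ols(X, y):
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

rng = np.random.default_rng(1)
n = 1000
diwali = rng.integers(0, 2, n)       # Diwali-window indicator
hour = rng.integers(0, 24, n)
hour_fe = np.eye(24)[hour]           # hour-of-day dummies

# Synthetic PM2.5: baseline 150, Diwali effect 100, hour effects, noise.
y = 150 + 100 * diwali + hour_fe @ rng.normal(0, 5, 24) + rng.normal(0, 10, n)

X = np.column_stack([np.ones(n), diwali, hour_fe[:, 1:]])  # drop one dummy
beta = ols(X, y)
print(beta[1])  # estimated Diwali effect (true value: 100)
```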
For instance, Diwali adds on average 69.36 (73.08 - 3.72) µg/m³ of PM2.5 particulate matter to the air at Mandir Marg relative to Anand Vihar. When we consider the second (Column 2) and third (Column 3) specifications, there is a statistically significant effect in Anand Vihar. The average particulate matter is 99 µg/m³ higher when we consider the three-day Diwali window, and 135 µg/m³ higher when we consider the four-day Diwali window. While this may not seem like much, given the already degraded air quality during these months, Diwali pushes the pollution level to alarming levels (above 400 µg/m³; the monthly average in October and November is around 340), which can have severe impacts on the health of people. The Diwali effect is lower in other locations relative to Anand Vihar. Thus we see that on the main day of Diwali, Anand Vihar is not too different from other days, while other locations have more pollutants relative to Anand Vihar. However, once we take into account 1-2 days after Diwali, we see that Anand Vihar is the most polluted location, and other locations have lower pollutants relative to Anand Vihar. Conclusion Very little is known, at present, about air quality and Diwali. Using the admittedly weak data resources, we have begun analysing this question here. To the extent that these results are persuasive, they could help individuals plan strategies to avoid being in Delhi on these days. There is also a case for a Pigouvian tax on fireworks, in order to overcome the externality. Previous work on Diwali, which helps us see other dimensions of Diwali, includes: Seasonal adjustment with Indian data: how big are the gains and how to do it by Rudrani Bhattacharya, Radhika Pandey, Ila Patnaik, Ajay Shah, and IEDs in Diwali and Toxic chemicals in Holi by Ajay Shah.
Problem Set 13

This is to be completed by February 1st, 2018.

Exercises

Datacamp: Complete the lesson: a. Python Data Science Toolbox (Part II)

For a logistic regressor (multiclass, ending in softmax) write down the update rules for gradient descent. For a two-layer perceptron ending in softmax with an intermediate relu non-linearity, write down the update rules for gradient descent.

Python Lab

Build a two-layer perceptron (choose your non-linearity) in numpy for a multi-class classification problem and test it on MNIST. Build an MLP in Keras and test it on MNIST.

Problem Set 12

This is to be completed by January 25th, 2018.

Exercises

Datacamp: Complete the lesson: a. Python Data Science Toolbox (Part I)

Let $S\subset \Bbb R^n$ with $|S|<\infty$. Let $\mu=\frac{1}{|S|}\sum_{x_i\in S} x_i$. Show that $$ \frac{1}{|S|}\sum_{(x_i,x_j)\in S\times S} ||x_i-x_j||^2 = 2\sum_{x_i\in S} ||x_i-\mu||^2.$$ Prove that the $K$-means clustering algorithm converges.

Python Lab

Implement a $K$-Nearest Neighbors classifier and apply it to the MNIST dataset (you will probably need to apply PCA; you can use a library for this at this point). Implement a $K$-Means clustering algorithm and apply it to the MNIST dataset (after removing the labels and applying a PCA transformation) with $K=10$. Compare the cluster labelings with the actual labelings. Complete the implementation of the decision tree algorithm from last week.

Problem Set 11

This is to be completed by January 18th, 2018.

Exercises

Datacamp: Complete the lesson: a. Intermediate Python for Data Science

What is the maximum depth of a decision tree trained on $N$ samples? If we train a decision tree to an arbitrary depth, what will be the training error? How can we alter a loss function to help regularize a decision tree?

Python Lab

1.
Construct a function which will transform a dataframe of numerical features into a dataframe of binary features of the same shape by setting the value of the jth feature of the ith sample to be true precisely when the value is greater than or equal to the median value of that feature. 2. Construct a function which, when presented with a dataframe of binary features, labeled outputs, and a corresponding loss function, chooses the feature to split upon so as to minimize the loss function. Here we assume that on each split the function will just return the mean value of the outputs. 3. Test these functions on a real-world dataset (for classification), either from ISLR or from Kaggle.

Problem Set 10

This is to be completed by January 11th, 2018.

Exercises

Datacamp: Complete the lesson: a. Intro to Python for Data Science

During this week’s problem session I will provide an introduction to Python.

Problem Set 9

This is to be completed by December 21st, 2017.

Exercises

Datacamp: Complete the lesson: a. Intermediate R: Practice

R Lab: Consider a two-class classification problem with one class denoted positive. Given a list of probability predictions for the positive class, a list of the correct labels (0s and 1s), and a number N>=2 of data points, construct a function which produces an Nx2 matrix/dataframe whose ith row (starting at 1) is the pair (x,y), where x is the false positive rate and y is the true positive rate of a classifier which classifies to true if the probability is greater than or equal to (i-1)/(N-1). Construct another function which produces the line graph associated to the points from the previous function. Finally, produce another function which estimates the area under the curve of the previous graph.

Problem Set 8

This is to be completed by December 14th, 2017. There will be no exercise session this week.

Exercises

Datacamp: Complete the lesson: a. Beginning Bayes in R

Problem Set 7

This is to be completed by December 7th, 2017.
Exercises

Datacamp: Complete the lesson: a. Credit risk modeling in R.

Exercises from Elements of Statistical Learning: Complete exercise: a. 4.5 (Use the reduced form of the logistic classifier that fits an (n,k-1)-matrix for a problem with n features and k classes).

R Lab: Construct a logistic regression classifier by hand and test it on MNIST.

Problem Set 6

This is to be completed by November 30th, 2017.

Exercises

Datacamp: Complete the lesson: a. Text Mining: Bag of Words

Exercises from Elements of Statistical Learning: Complete exercises: a. 4.2 b. 4.6

Run the perceptron learning algorithm by hand for the two-class classification problem with $(X,Y)$-pairs (given by bitwise or): $((0,0), 0), ((1,0),1), ((0,1),1), ((1,1),1)$.

R Lab: Update the LDA Classifier from last week as follows. a. After fitting an LDA Classifier, produce a function which projects an input sample onto the hyperplane containing the class centroids. b. Update the classifier to use these projections for classification. Compare the runtimes of prediction of the two methods when the number of features is large relative to the number of classes. Construct a perceptron classifier for two-class classification. Put an upper bound on the number of steps. a. Evaluate the perceptron on the above problem and on the bitwise xor problem: $((0,0), 0), ((1,0),1), ((0,1),1), ((1,1),0)$.

Problem Set 5

This is to be completed by November 23rd, 2017.

Exercises

Datacamp: Complete the lesson: a. Machine Learning Toolbox

R Lab: Write a function in R that will take in a vector of discrete variables and will produce the corresponding one-hot encodings. Write a function in R that will take in a matrix $X$ of samples and a vector $Y$ of classes (in $(1,…,K)$) and produces a function which classifies a new sample according to the LDA rule (do not use R’s built-in machine learning facilities). Do the same for QDA. Apply your models to the MNIST dataset for handwriting classification.
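The perceptron exercise above (the bitwise-OR data) can also be run in code rather than by hand. A minimal Python sketch of our own, which additionally shows that the XOR variant never converges:

```python
import numpy as np

# Perceptron learning: fold the bias in as a constant-1 feature, map
# labels {0,1} to {-1,+1}, and apply w <- w + y * x on mistakes.
def perceptron(X, y, max_epochs=100):
    Xb = np.hstack([X, np.ones((len(X), 1))])
    w = np.zeros(Xb.shape[1])
    s = 2 * y - 1                        # {0,1} -> {-1,+1}
    for _ in range(max_epochs):
        errors = 0
        for xi, yi in zip(Xb, s):
            if yi * (w @ xi) <= 0:       # misclassified (or on boundary)
                w += yi * xi
                errors += 1
        if errors == 0:
            return w
    return None                          # no convergence (not separable)

X = np.array([[0, 0], [1, 0], [0, 1], [1, 1]])
print(perceptron(X, np.array([0, 1, 1, 1])) is not None)  # True: OR is separable
print(perceptron(X, np.array([0, 1, 1, 0])))              # None: XOR is not
```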
There are various ways to get this dataset, but perhaps the easiest is to pull it in through the keras package. Besides, having keras installed is useful anyway. You may need to reduce the dimension of the data and/or the number of samples to get this to work in a reasonable amount of time.

Problem Set 4

This is to be completed by November 16th, 2017.

Exercises

Datacamp: Complete the lessons: a. Supervised Learning in R: Regression b. Supervised Learning in R: Classification c. Exploratory Data Analysis (if you did not already do so)

Let $\lambda\geq 0$, $X\in \Bbb R^n\otimes \Bbb R^m$, $Y\in \Bbb R^n$, and $\beta \in \Bbb R^m$, suitably regarded as matrices. Identify when $$\textrm{argmin}_\beta (X\beta-Y)^t(X\beta-Y)+\lambda \beta^t\beta$$ exists, and determine it in these cases. How does the size of $\lambda$ affect the solution? When might it be desirable to set $\lambda$ to be positive?

Bayesian approach to linear regression. Suppose that $\beta\sim N(0,\tau^2 I)$, and the distribution of $Y$ conditional on $X$ is $N(X\beta,\sigma^2I)$, i.e., $\beta$, $X$, and $Y$ are vector-valued random variables. Show that, after seeing some data $D$, the MAP and mean estimates of the posterior distribution for $\beta$ correspond to solutions of the previous problem.

R Lab: Write a linear regression function that takes in a matrix of $x$-values and a corresponding vector of $y$-values and returns a function derived from the linear regression fit. Write a function that takes in a non-negative number (the degree), a vector of $x$-values and a corresponding vector of $y$-values and returns a function derived from the polynomial regression fit. Write a function that takes in a number $n$, a vector of $x$-values, and a corresponding vector of $y$-values and returns a function of the form: $$f(x)=\sum_{i=0}^n a_i \sin(ix)+b_i\cos(ix).$$ Generate suitable testing data for the three functions constructed above and plot the fitted functions.
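For the ridge-regression exercise above: when $\lambda>0$ the objective is strictly convex and the minimizer has the closed form $\beta = (X^{t}X+\lambda I)^{-1}X^{t}Y$. A quick numerical check of this (our own sketch, random data):

```python
import numpy as np

# Closed-form ridge solution and the penalized least-squares objective.
def ridge(X, Y, lam):
    m = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(m), X.T @ Y)

def objective(beta, X, Y, lam):
    r = X @ beta - Y
    return r @ r + lam * (beta @ beta)

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
Y = rng.normal(size=50)
beta = ridge(X, Y, lam=2.0)

# Perturbing the solution in any coordinate should increase the
# objective, since the penalized problem is strictly convex.
base = objective(beta, X, Y, 2.0)
print(all(objective(beta + d, X, Y, 2.0) > base
          for d in 0.01 * np.eye(3)))  # True
```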
The chances of crashing your car are pretty low, but they’re considerably higher if you’re drunk. Probabilities change depending on the conditions. We symbolize this idea by writing \(\p(A \given B)\), the probability that \(A\) is true given that \(B\) is true. And we call this kind of probability conditional probability. To say \(B\) increases the chance of \(A\) we write \(\p(A \given B) \gt \p(A)\). And to say \(B\) doubles the chance of \(A\) we write \(\p(A \given B) = 2 \times \p(A)\). For example, to say the probability of \(A\) given \(B\) is 30%, we write: \[ \p(A \given B) = .3. \] But how do we calculate conditional probabilities? Figure 6.1: Conditional probability in a fair die roll Suppose I roll a fair, six-sided die behind a screen. You can’t see the result, but I tell you it’s an even number. What’s the probability it’s also a “high” number: either a \(4\), \(5\), or \(6\)? Maybe you figured out the correct answer: \(2/3\). But why is that correct? Because, out of the three even numbers (\(2\), \(4\), and \(6\)), two of them are high (\(4\) and \(6\)). And since the die is fair, we expect it to land on a high number \(2/3\) of the times it lands on an even number. This hints at a formula for \(\p(A \given B)\): \[ \p(A \given B) = \frac{\p(A \wedge B)}{\p(B)}. \] In the die-roll example, we considered how many of the \(B\) possibilities were also \(A\) possibilities, which means we divided \(\p(A \wedge B)\) by \(\p(B)\). In fact, this formula is our official definition for the concept of conditional probability. When we write the sequence of symbols \(\p(A \given B)\), it’s really just shorthand for the fraction \(\p(A \wedge B) / \p(B)\). Figure 6.2: Conditional probability is the size of the \(A \wedge B\) region compared to the entire \(B\) region. In terms of an Euler diagram (Figure 6.2), the definition of conditional probability compares the size of the purple \(A \wedge B\) region to the size of the whole \(B\) region, purple and blue together.
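The die-roll reasoning can be verified by brute-force enumeration of equally likely outcomes (a small sketch of ours, not from the book):

```python
from fractions import Fraction

# For equally likely outcomes, P(A|B) = |A and B| / |B|.
def cond_prob(A, B, outcomes):
    both = sum(1 for o in outcomes if o in A and o in B)
    return Fraction(both, sum(1 for o in outcomes if o in B))

die = range(1, 7)
high = {4, 5, 6}
even = {2, 4, 6}
print(cond_prob(high, even, die))  # 2/3, matching the text
```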
If you don’t mind getting a little colourful with your algebra: \[ \p(A \given B) = \frac{\color{bookpurple}{\blacksquare}}{\color{bookpurple}{\blacksquare} + \color{bookblue}{\blacksquare}}. \] So the definition works because, informally speaking, \(\p(A \wedge B)/\p(B)\) is the proportion of the \(B\) outcomes that are also \(A\) outcomes. Dividing by zero is a common pitfall with conditional probability. Notice how the definition of \(\p(A \given B)\) depends on \(\p(B)\) being larger than zero. If \(\p(B) = 0\), then the formula \[ \p(A \given B) = \frac{\p(A \wedge B)}{\p(B)} \] doesn’t even make any sense. There is no number that results from the division on the right hand side. (The comedian Steven Wright once quipped that “black holes are where God divided by zero.”) There are alternative mathematical systems of probability, where conditional probability is defined differently to avoid this problem. But in this book we’ll stick to the standard system. In this system, there’s just no such thing as “the probability of \(A\) given \(B\)” when \(B\) has zero probability. In such cases we say that \(\p(A \given B)\) is undefined. It’s not zero, or some special number. It just isn’t a number. We already encountered conditional probabilities informally, when we used a tree diagram to solve the Monty Hall problem. In a tree diagram, each branch represents a possible outcome. The number placed on that branch represents the chance of that outcome occurring. But that number is based on the assumption that all branches leading up to it occur. So the probability on that branch is conditional on all previous branches. For example, suppose there are two urns of coloured marbles: I flip a fair coin to decide which urn to draw from, heads for Urn X and tails for Urn Y. Then I draw one marble at random. The probability of drawing a black marble on the top path is \(3/4\) because we are assuming the coin landed heads, and thus I’m drawing from Urn X.
If the coin lands tails instead, and I draw from Urn Y, then the chance of a black marble is instead \(1/4\). So these quantities are conditional probabilities: \[ \begin{aligned} \p(B \given H) &= 3/4,\\ \p(B \given T) &= 1/4. \end{aligned} \] Notice, though, the first branch in a tree diagram is different. In the \(H\)-vs.-\(T\) branch, the probabilities are unconditional, since there are no previous branches for them to be conditional on. Imagine an urn contains marbles of three different colours: 20 are red, 30 are blue, and 40 are green. I draw a marble at random. What is \(\p(R \given \neg B)\), the probability it’s red given that it’s not blue? \[ \begin{aligned} \p(R \given \neg B) &= \frac{\p(R \wedge \neg B)}{\p(\neg B)}\\ &= \frac{\p(R)}{\p(\neg B)}\\ &= \frac{20/90}{60/90}\\ &= 1/3. \end{aligned} \] This calculation relies on the fact that \(R \wedge \neg B\) is logically equivalent to \(R\). A red marble is automatically not blue, so \(R\) is true under exactly the same circumstances as \(R \wedge \neg B\). The Equivalence Rule thus tells us \(\p(R \wedge \neg B) = \p(R)\). Suppose a university has 10,000 students, and each student is studying under one of four broad headings: Humanities, Social Sciences, STEM, or Professional. Within each of these categories, the number of students with an average grade of A, B, C, or D is as follows:

      Humanities   Social Sciences   STEM   Professional
A     200          600               400    900
B     500          800               1600   900
C     250          400               1500   750
D     50           200               500    450

What is the probability a randomly selected student will have an A average, given that they are studying either Humanities or Social Sciences? \[ \begin{aligned} \p(A \given H \vee S) &= \frac{\p(A \wedge (H \vee S))}{\p(H \vee S)}\\ &= \frac{800/10,000}{3,000/10,000}\\ &= 4/15. \end{aligned} \] What about the reverse probability, that a student is studying either Humanities or Social Sciences given that they have an A average?
\[ \begin{aligned} \p(H \vee S \given A) &= \frac{\p((H \vee S) \wedge A)}{\p(A)}\\ &= \frac{800/10,000}{2,100/10,000}\\ &= 8/21. \end{aligned} \] Notice how we get a different number now. In general, the probability of \(A\) given \(B\) will be different from the probability of \(B\) given \(A\). These are different concepts. For example, university students are usually young, but young people aren’t usually university students. Most aren’t even old enough to be in university. So the probability someone is young given they are in university is high. But the probability someone is in university given that they are young is low. So \(\p(Y \given U) \neq \p(U \given Y)\). Once in a while we do find cases where \(\p(A \given B) = \p(B \given A)\). For example, suppose we throw a dart at random at a circular board, divided into four quadrants. The chance the dart will land on the left half given that it lands on the top half is the same as the chance it lands on the top half given it lands on the left. Both probabilities are \(1/2\). But this kind of thing is the exception rather than the rule. Usually, \(\p(A \given B)\) will be a different number from \(\p(B \given A)\). So it’s important to remember how order matters. When we write \(\p(A \given B)\), we are discussing the probability of \(A\). But we are discussing it under the assumption that \(B\) is true. We explained independence informally back in Chapter 4: \(A\) and \(B\) are independent if the truth of one doesn’t change the probability of the other. Now that we’ve formally defined conditional probability, we can formally define independence too. \(A\) is independent of \(B\) if \(\p(A \given B) = \p(A)\) and \(\p(A) > 0\). In other words, they’re independent if \(A\)’s probability is the same after \(B\) is given as it was before (and not just for the silly reason that there was no chance of \(A\) being true to begin with). Now we can establish three useful facts about independence. 
The first is summed up in the mantra “independence means multiply”. This actually has two parts. We already learned the first part with the Multiplication Rule: if \(A\) is independent of \(B\), then \(\p(A \wedge B) = \p(A)\p(B)\). Except now we can see why this rule holds, using the definition of conditional probability and some algebra: \[ \begin{aligned} \p(A \given B) &= \frac{\p(A \wedge B)}{\p(B)} & \mbox{by definition}\\ \p(A \given B)\p(B) &= \p(A \wedge B) & \mbox{by algebra}\\ \p(A)\p(B) &= \p(A \wedge B) & \mbox{by independence}. \end{aligned} \] The second part of the “independence means multiply” mantra is new though. It basically says that the reverse also holds. As long as \(\p(A) > 0\) and \(\p(B) > 0\), if \(\p(A \wedge B) = \p(A)\p(B)\), then \(A\) is independent of \(B\). Bottom line: as long as there are no zeros to worry about, independence is the same thing as \(\p(A \wedge B) = \p(A)\p(B)\). Second, independence is symmetric. If \(A\) is independent of \(B\) then \(B\) is independent of \(A\). Informally speaking, if \(B\) makes no difference to \(A\)’s probability, then \(A\) makes no difference to \(B\)’s probability. This is why we often say “\(A\) and \(B\) are independent”, without specifying which is independent of which. Since independence goes both ways, they’re automatically independent of each other. Third, independence extends to negations. If \(A\) is independent of \(B\), then it’s also independent of \(\neg B\) (as long as \(\p(\neg B) > 0\), so that \(\p(A \given \neg B)\) is well-defined). Notice, this also means that if \(A\) is independent of \(B\), then \(\neg A\) is independent of \(\neg B\) (as long as \(\p(\neg A) > 0\)). So far our definition of independence only applies to two propositions. We can extend it to three as follows: \(A\), \(B\), and \(C\) are independent if each pair of them is independent and, in addition, \(\p(A \wedge B \wedge C) = \p(A)\p(B)\p(C)\). In other words, a trio of propositions is independent if each pair of them is independent, and the multiplication rule applies to their conjunction.
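The “independence means multiply” criterion is easy to check by enumeration. Here is a two-dice example of our own, where the events feel related but are in fact independent:

```python
from fractions import Fraction
from itertools import product

# Two fair dice; A = first die is even, B = the sum is 7.
outcomes = list(product(range(1, 7), repeat=2))

def pr(event):
    return Fraction(sum(1 for o in outcomes if event(o)), len(outcomes))

A = lambda o: o[0] % 2 == 0
B = lambda o: o[0] + o[1] == 7

# P(A and B) = 1/12 = (1/2)(1/6) = P(A) P(B): independent.
print(pr(lambda o: A(o) and B(o)) == pr(A) * pr(B))  # True
```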
The same idea can be extended to define independence for four propositions, five, etc. Answer each of the following: Suppose \(\p(B) = 4/10\), \(\p(A) = 7/10\), and \(\p(B \wedge A) = 2/10\). What are each of the following probabilities? Five percent of tablets made by the company Ixian have factory defects. Ten percent of the tablets made by their competitor company Guild do. A computer store buys \(40\%\) of its tablets from Ixian, and \(60\%\) from Guild. This exercise and the next one are based on very similar exercises from Ian Hacking’s wonderful book, An Introduction to Probability and Inductive Logic. Draw a probability tree to answer the following questions. In the city of Elizabeth, the neighbourhood of Southside has lots of chemical plants. \(2\%\) of Elizabeth’s children live in Southside, and \(14\%\) of those children have been exposed to toxic levels of lead. Elsewhere in the city, only \(1\%\) of the children have toxic levels of exposure. Draw a probability tree to answer the following questions. Imagine 100 prisoners are sentenced to death. 70 of them are housed in cell block A, the other 30 are in cell block B. Of the prisoners in cell block A, 9 are innocent. Only 1 prisoner in cell block B is innocent. The law requires that one prisoner be pardoned. The lucky prisoner will be selected by flipping a fair coin to choose either cell block A or B. Then a fair lottery will be used to select a random prisoner from the chosen cell block. What is the probability the pardoned prisoner comes from cell block A if she is innocent? Answer each of the following to find out. \(I\) = The pardoned prisoner is innocent. \(A\) = The pardoned prisoner comes from cell block A. Suppose \(A\), \(B\), and \(C\) are independent, and they each have the same probability: \(1/3\). What is \(\p(A \wedge B \given C)\)? If \(A\) and \(B\) are mutually exclusive, what is \(\p(A \given B)\)? Justify your answer using the definition of conditional probability. 
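The arithmetic of the prisoner tree can be checked with exact fractions. This is only a numeric sketch of the two-stage tree described above (the variable names are mine), not a substitute for working the exercise:

```python
from fractions import Fraction

# A fair coin picks the cell block, then a uniform lottery within it.
p_block = {"A": Fraction(1, 2), "B": Fraction(1, 2)}
p_innocent_given_block = {"A": Fraction(9, 70), "B": Fraction(1, 30)}

# joint probability of each branch: P(block) * P(innocent | block)
joint = {blk: p_block[blk] * p_innocent_given_block[blk]
         for blk in ("A", "B")}
p_innocent = sum(joint.values())

# P(A | innocent) by the definition of conditional probability
p_A_given_innocent = joint["A"] / p_innocent
```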
Which of the following situations is impossible? Justify your answer. Is the following statement true or false: if \(A\) and \(B\) are mutually exclusive, then \(\p(A \vee B \given C) = \p(A \given C) + \p(B \given C)\)? Justify your answer. Justify the second part of the “independence means multiply” mantra: if \(\p(A) > 0\), \(\p(B) > 0\), and \(\p(A \wedge B) = \p(A) \p(B)\), then \(A\) is independent of \(B\). Hint: start by supposing \(\p(A) > 0\), \(\p(B) > 0\), and \(\p(A \wedge B) = \p(A)\p(B)\). Then apply some algebra and the definition of conditional probability. Justify the claim that independence is symmetric: if \(A\) is independent of \(B\), then \(B\) is independent of \(A\). Hint: start by supposing that \(A\) is independent of \(B\). Then write out \(\p(A \given B)\) and apply the definition of conditional probability.
Sometimes it is desirable to derive attributes for an arbitrary position based on the attributes of some known points. Imagine for example a triangle where each of the three base points has its own colour and you want to derive the colour of any other position in the grid. In such cases, barycentric coordinates are useful. This coordinate system is built upon some base points (say, three for a triangle) and every other point inside or outside of the triangle can be represented as a linear combination of these base points:\begin{equation*} \fvec{x} = \alpha_1 \fvec{x}_1 + \alpha_2 \fvec{x}_2 + \alpha_3 \fvec{x}_3. \end{equation*} Each point \(\fvec{x}_i\) is weighted into the resulting point by the corresponding weighting factor \(\alpha_i\). All weights are positive for points inside the triangle; for points outside of the triangle, at least one weight is negative. What is important is that the weights always sum to one:\begin{equation*} \sum_{i=1}^3 \alpha_i = 1. \end{equation*} As an example, consider the representation of the first base point itself:\begin{equation*} \fvec{\alpha} = \begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix} \Rightarrow 1 \cdot \fvec{x}_1 + 0 \cdot \fvec{x}_2 + 0 \cdot \fvec{x}_3 = \fvec{x}_1. \end{equation*} The resulting weights can also be used to interpolate between colours associated with the base points. In total, each point in the grid gets its own (unique) weight vector. The technique can also be extended to higher dimensions, e.g. four weights and corresponding points forming a tetrahedron.
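As a sketch of how the weights might be computed in practice for a 2-D triangle (the function name `barycentric` and the signed-area formulation are my own choices, not from the article):

```python
def barycentric(p, a, b, c):
    """Barycentric weights of 2-D point p w.r.t. triangle (a, b, c),
    using the standard signed-area (edge-function) formulation."""
    (px, py), (ax, ay), (bx, by), (cx, cy) = p, a, b, c
    det = (by - cy) * (ax - cx) + (cx - bx) * (ay - cy)
    a1 = ((by - cy) * (px - cx) + (cx - bx) * (py - cy)) / det
    a2 = ((cy - ay) * (px - cx) + (ax - cx) * (py - cy)) / det
    return a1, a2, 1.0 - a1 - a2            # weights sum to one

tri = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]  # base points
w = barycentric((0.25, 0.25), *tri)         # an interior point

# interpolate colours attached to the three base points
colours = [(255, 0, 0), (0, 255, 0), (0, 0, 255)]
mixed = tuple(sum(wi * col[k] for wi, col in zip(w, colours))
              for k in range(3))
```

Feeding a base point itself back in returns a weight vector like \((1, 0, 0)\), matching the example above.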
Shortest and straightest geodesics in sub-Riemannian geometry Starts: 15:00 10 May 2019 Ends: 16:00 10 May 2019 What is it: Seminar Organiser: Department of Mathematics Who is it for: University staff, External researchers, Adults, Alumni, Current University students Speaker: Professor Dmitri Alekseevsky Join us for this research seminar, part of the Geometry, topology and mathematical physics seminar series. There are several different, but equivalent, definitions of geodesics in a Riemannian manifold. They can be generalized to sub-Riemannian manifolds, but become non-equivalent there. H. R. Hertz remarked that there are two main approaches to the definition of geodesics: geodesics as shortest curves, based on Maupertuis' principle of least action (the variational approach), and geodesics as straightest curves, based on d'Alembert's principle of virtual work (which leads to geometric descriptions based on the notion of parallel transport). We briefly discuss different definitions of sub-Riemannian geodesics and the interrelations between them. A.M. Vershik and L.D. Faddeev showed that for a generic sub-Riemannian manifold Q all shortest geodesics (defined as projections of integral curves of the corresponding Hamiltonian flow) are different from straightest geodesics (defined by the Schouten partial connection). They gave the first example where shortest geodesics coincide with straightest Hamiltonian geodesics (with the zero initial covector \lambda \in T^*Q) and stated the problem of characterising sub-Riemannian manifolds with this property. We show that this class contains Chaplygin transversally homogeneous systems, defined by the sub-Riemannian metric on the total space Q of a principal bundle \pi: Q \to M = Q/G over a Riemannian manifold (M, g^M), associated with a principal connection.
Hamiltonian geodesics of such a system describe the evolution of a charged particle in a Yang--Mills field, and straightest geodesics describe the motion of a classical mechanical system with non-holonomic constraints. We describe some classes of homogeneous sub-Riemannian manifolds where straightest geodesics coincide with shortest geodesics, including sub-Riemannian symmetric spaces. Speaker Professor Dmitri Alekseevsky Organisation: Kharkevich Institute for Information Transmission Problems of the Russian Academy of Sciences Biography: See Professor Dmitri Alekseevsky's profile. Location: Frank Adams 2, Alan Turing Building, Manchester
The ALICE Transition Radiation Detector: Construction, operation, and performance (Elsevier, 2018-02) The Transition Radiation Detector (TRD) was designed and built to enhance the capabilities of the ALICE detector at the Large Hadron Collider (LHC). While aimed at providing electron identification and triggering, the TRD ... Constraining the magnitude of the Chiral Magnetic Effect with Event Shape Engineering in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV (Elsevier, 2018-02) In ultrarelativistic heavy-ion collisions, the event-by-event variation of the elliptic flow $v_2$ reflects fluctuations in the shape of the initial state of the system. This allows to select events with the same centrality ... First measurement of jet mass in Pb–Pb and p–Pb collisions at the LHC (Elsevier, 2018-01) This letter presents the first measurement of jet mass in Pb-Pb and p-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV and 5.02 TeV, respectively. Both the jet energy and the jet mass are expected to be sensitive to jet ... First measurement of $\Xi_{\rm c}^0$ production in pp collisions at $\mathbf{\sqrt{s}}$ = 7 TeV (Elsevier, 2018-06) The production of the charm-strange baryon $\Xi_{\rm c}^0$ is measured for the first time at the LHC via its semileptonic decay into e$^+\Xi^-\nu_{\rm e}$ in pp collisions at $\sqrt{s}=7$ TeV with the ALICE detector. The ... D-meson azimuthal anisotropy in mid-central Pb-Pb collisions at $\mathbf{\sqrt{s_{\rm NN}}=5.02}$ TeV (American Physical Society, 2018-03) The azimuthal anisotropy coefficient $v_2$ of prompt D$^0$, D$^+$, D$^{*+}$ and D$_s^+$ mesons was measured in mid-central (30-50% centrality class) Pb-Pb collisions at a centre-of-mass energy per nucleon pair $\sqrt{s_{\rm ...
Search for collectivity with azimuthal J/$\psi$-hadron correlations in high multiplicity p-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 and 8.16 TeV (Elsevier, 2018-05) We present a measurement of azimuthal correlations between inclusive J/$\psi$ and charged hadrons in p-Pb collisions recorded with the ALICE detector at the CERN LHC. The J/$\psi$ are reconstructed at forward (p-going, ... Systematic studies of correlations between different order flow harmonics in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV (American Physical Society, 2018-02) The correlations between event-by-event fluctuations of anisotropic flow harmonic amplitudes have been measured in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV with the ALICE detector at the LHC. The results are ... $\pi^0$ and $\eta$ meson production in proton-proton collisions at $\sqrt{s}=8$ TeV (Springer, 2018-03) An invariant differential cross section measurement of inclusive $\pi^{0}$ and $\eta$ meson production at mid-rapidity in pp collisions at $\sqrt{s}=8$ TeV was carried out by the ALICE experiment at the LHC. The spectra ... J/$\psi$ production as a function of charged-particle pseudorapidity density in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV (Elsevier, 2018-01) We report measurements of the inclusive J/$\psi$ yield and average transverse momentum as a function of charged-particle pseudorapidity density ${\rm d}N_{\rm ch}/{\rm d}\eta$ in p-Pb collisions at $\sqrt{s_{\rm NN}}= 5.02$ ... Energy dependence and fluctuations of anisotropic flow in Pb-Pb collisions at √sNN=5.02 and 2.76 TeV (Springer Berlin Heidelberg, 2018-07-16) Measurements of anisotropic flow coefficients with two- and multi-particle cumulants for inclusive charged particles in Pb-Pb collisions at 𝑠NN‾‾‾‾√=5.02 and 2.76 TeV are reported in the pseudorapidity range |η| < 0.8 ...
An eigenvalue semiclassical problem for the Schrödinger operator with an electrostatic field DOI: http://dx.doi.org/10.12775/TMNA.2006.006 Abstract We consider the following system of Schrödinger-Maxwell equations in the unit ball $B_1$ of ${\mathbb R}^3$ $$ -\frac{\hbar^2}{2m}\Delta v+ e\phi v=\omega v, \quad -\Delta\phi=4\pi e v^2 $$ with the boundary conditions $ v=0$, $ \phi=g$ on $\partial B_1$, where $\hbar$, $m$, $e$, $\omega > 0$, $v$, $\phi\colon B_1\rightarrow {\mathbb R}$, $g\colon \partial B_1\to {\mathbb R}$. Such a system describes the interaction of a particle constrained to move in $B_1$ with its own electrostatic field. We exhibit a family of positive solutions $(v_\hbar, \phi_\hbar)$ corresponding to eigenvalues $\omega_\hbar$ such that $v_\hbar$ concentrates around some points of the boundary $\partial B_1$ which are minima for $g$ when $\hbar\rightarrow 0$. Keywords Schrödinger-Maxwell system; existence; concentration
Positive integers $\displaystyle a, b$ are relatively prime and both less than or equal to 2008. $\displaystyle a^2 + b^2$ is a perfect square, and $\displaystyle b$ has the same digits as $\displaystyle a$ in the reverse order. The number of such ordered pairs $\displaystyle (a, b)$ is _________ . I started with 2 digits: let $\displaystyle a=\overline{xy}$ and $\displaystyle b=\overline{yx}$. Then $\displaystyle (10x+y)^2 + (10y+x)^2 = 101x^2 + 40xy + 101y^2$, which can't be factorized and isn't a perfect square. Tried the same with 3 digits and ended up with this: $\displaystyle 10001x^2 + 200y^2 + 10001z^2 + 400xz + 2020xy + 2020yz$. This also isn't factorizable. I've simply got no idea of how to proceed from here. Jun 20th 2010, 06:25 AM Wilmer There simply is NONE. You sure your question is CORRECTLY worded? Jun 21st 2010, 01:36 AM darknight Quote: Originally Posted by Wilmer There simply is NONE. You sure your question is CORRECTLY worded? Dunno, someone challenged me to solve it. Guess it was his idea of a joke. :( My Apologies.. Jun 21st 2010, 02:46 AM simplependulum I am not sure but my answer is zero !
It is a famous property ( which is not what I am confused about (Happy) ) that $\displaystyle a-b \equiv 0 \bmod{9}$ ; the proof is as follows : Let $\displaystyle a = \sum_{i=0}^n a_i 10^i ~~ a_i \in \{\ 0,1,2,...,9 \}\ $ so $\displaystyle b = \sum_{i=0}^n a_i 10^{n-i} $ and $\displaystyle a-b = \sum_{i=0}^n a_i ( 10^i - 10^{n-i} ) \equiv \sum_{i=0}^n a_i ( 1-1) \bmod{9} \equiv 0 \bmod{9} $ We have $\displaystyle a^2 + b^2 = c^2 $ Since $\displaystyle a,b$ are coprime , they can be expressed as $\displaystyle m^2 - n^2 ~,~ 2mn $ ; wlog let $\displaystyle a = m^2 - n^2 ~,~ b = 2mn $ , so we have $\displaystyle a - b = (m-n)^2 - 2n^2 \equiv 0 \bmod{9} $ . I consider the form $\displaystyle x^2 - 2y^2 $ and whether it can be a multiple of $\displaystyle 9$ . Be careful: the quadratic residues mod 9 are $\displaystyle [R] = \{\ 0,1,4,7 \} $ so $\displaystyle 2[R] = \{\ 0,2,8,14 \}= \{\ 0,2,5,8 \}$ ; since the intersection of the sets is just $\displaystyle \{\ 0 \}$ , we conclude $\displaystyle x^2 - 2y^2 \equiv 0 \bmod{9} $ iff $\displaystyle x \equiv y \equiv 0 \bmod{3} $ . Therefore , $\displaystyle m-n \equiv n \equiv 0 \bmod{3} ~ \implies m \equiv n \equiv 0 \bmod{3} $ , which is false because $\displaystyle (a,b)=1 \implies (m,n)=1$ . So we can never find any ordered pair $\displaystyle (a,b) $ . EDIT: I have made it more complicated ; in fact we can consider the quadratic residues from here : $\displaystyle a^2 + b^2 = c^2 $ It is easy to show $\displaystyle a \equiv b \bmod{9} $ since we have already proved that $\displaystyle a - b \equiv 0 \bmod{9} $ , so we have : $\displaystyle 2a^2 \equiv c^2 \bmod{9} $ ; consider the residues as I mentioned . Jun 21st 2010, 06:33 AM Wilmer Quote: Originally Posted by simplependulum EDIT: I have made it more complicated , in fact we can consider the quadratic residues from here : $\displaystyle a^2 + b^2 = c^2 $ We could simply apply the "pythagorean triplet" rules, couldn't we, SimpleP ?
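The conclusion that no such pair exists can also be confirmed by brute force over the stated range; a small sketch (the helper name `reverse_digits` is mine):

```python
from math import gcd, isqrt

def reverse_digits(n):
    return int(str(n)[::-1])

pairs = []
for a in range(1, 2009):
    b = reverse_digits(a)
    if b > 2008 or gcd(a, b) != 1:
        continue
    s = a * a + b * b
    if isqrt(s) ** 2 == s:          # perfect-square test
        pairs.append((a, b))

# pairs stays empty, agreeing with the mod-9 argument above
```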
We say that a group $G$ is residually finite if for each $g\in G$ that is not equal to the identity of $G$, there exists a finite group $F$ and a group homomorphism $$\varphi:G\to F$$ such that $\varphi(g)$ is not the identity of $F$. The definition does not change if we require that $\varphi$ be surjective. Therefore, a group $G$ is residually finite if and only if for each $g\in G$ that is not the identity, there exists a finite-index normal subgroup $N$ of $G$ such that $g\not\in N$. Hence, if $G$ is residually finite, then the intersection of all finite-index normal subgroups is trivial. The converse holds, too (why?). Examples and Non-examples of Residually Finite Groups Before we examine why residually finite groups are interesting, let's first take a look at some examples. The following groups are residually finite: Finite groups are residually finite $\Z$ is residually finite: if $z\in \Z$ is a nonzero integer, then the reduction map $\Z\to \Z/(|z| + 1)$ does not send $z$ to zero More generally, any finitely-generated abelian group is residually finite The automorphism group of a residually finite group is residually finite. For example, $\Aut(\Z\times \Z)\cong \GL_2(\Z)$ is residually finite (though it is also easy to see this directly) Free groups are residually finite John Hempel proved that fundamental groups of 2-manifolds are residually finite (Proc. Amer. Math. Soc. 32 (1972), 323) It seems like quite a lot of groups are residually finite. So we need some examples of groups that aren't residually finite, right? Here they are: The additive group of rational numbers $\Q$ is not residually finite. That's because it's divisible. If $D$ is a divisible group and $F$ is finite, then every homomorphism $\varphi:D\to F$ is in fact trivial! That's because there exists a positive integer $n$ such that $g^n$ is the identity for all $g\in F$. So, if $x\in D$, there exists a $y\in D$ such that $y^n = x$ and so $\varphi(x) = \varphi(y^n) = \varphi(y)^n = 1_F$.
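The $\Z$ example can be phrased as a one-line check; a minimal sketch, assuming the reduction map $\Z\to\Z/(|z|+1)$ from the list above (the function name is mine):

```python
def separating_quotient(z):
    """For nonzero z in Z, return a modulus m with z % m != 0, so the
    reduction map Z -> Z/m separates z from the identity."""
    if z == 0:
        raise ValueError("the identity cannot be separated from itself")
    return abs(z) + 1

# every nonzero integer survives in some finite quotient
for z in (1, -1, 5, -17, 1000):
    assert z % separating_quotient(z) != 0
```

The point is simply that a nonzero $z$ is never a multiple of $|z|+1$, so its image in $\Z/(|z|+1)$ is nonzero.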
The subgroup of ${\rm Sym}(\Z)$ generated by the translation map $n\mapsto n + 1$ and the permutation $(0~1)$ is finitely generated and not residually finite. Residually Finite Groups and the Word Problem If a group $G$ is given by a presentation $\langle X~|~ R\rangle$, where $X$ is a set of generators and $R$ is a set of relations (meaning that $G$ is the quotient of the free group on $X$ by the smallest normal subgroup containing the words in the set $R$), then a natural question to ask is: given a word in the symbols in $X$ and their formal inverses, does the word represent the identity element? If there is an algorithm, guaranteed to terminate, that answers this question for any word, then the group $G$ is said to have solvable word problem. Let's take an example: let $G$ be the group presented by $\langle x,y~|~ xyx^2 \rangle$. Does the word $xy^2xy$ represent the identity element of $G$? If you take a few moments to try to prove this, you'll see that it is actually very difficult. That's the word problem for you: it is a hard problem. That's because working combinatorially with presented groups is tough. But presented groups arise naturally in mathematics as the fundamental groups of manifolds, so it makes sense to try and figure them out. If a group $G$ has a presentation $\langle X~|~R\rangle$ such that $X$ and $R$ are both finite sets, then $G$ is called finitely presented. Because you have a finite number of relations and generators, you can enumerate the words that represent the identity in $G$ in such a way that if $w$ is any word that represents the identity, $w$ will appear on this list in a finite amount of time. Great! But that still doesn't solve the word problem. That's because if $w$ does not represent the identity, merely enumerating all the words that represent the identity of successively longer lengths still won't tell you that $w$ does not represent the identity. That's where residually finite groups come into play.
All finite groups of a given size can be enumerated in a finite amount of time, and all homomorphisms from a finitely-presented group to a given finite group can be enumerated in a finite amount of time. Therefore, if $G$ is residually finite and finitely presented and the given word $w$ does not represent the identity, then by enumerating all these homomorphisms you will eventually find one where $w$ is sent to a nontrivial element. So, enumerating all words that represent the identity and all homomorphisms to a finite group at the same time is an algorithm that is guaranteed to determine whether a given word represents the identity in a finitely-presented residually finite group. As in the case with modules, finitely-presented groups are better behaved than finitely-generated groups. Meskin (Proc. Amer. Math. Soc. 43 (1974), 8–10) gave an example of a finitely-generated residually finite group that has unsolvable word problem! Hopfian Groups There is another popular concept in group theory: we call a group $G$ Hopfian if every surjective homomorphism $G\to G$ is also injective. The idea of Hopfian groups also generalises a property of finite groups in a different way: of course, every finite group is Hopfian. Residually finite groups are not always Hopfian. For example, a free group on infinitely many generators is residually finite but not Hopfian. That's because you can take any surjective map $X\to X$ on an infinite set that is not injective and extend it to a map of the corresponding free groups. However, if $G$ is residually finite and finitely generated, then it is Hopfian. Indeed, suppose $G$ is residually finite and finitely generated, and that $\varphi:G\to G$ is surjective. Let $K$ be any normal subgroup of $G$ of finite index. Define a map \begin{align} \Phi: {\rm Hom}(G,G/K)&\longrightarrow {\rm Hom}(G,G/K)\\ f&\longmapsto f\circ\varphi.\end{align} Since $G$ is finitely generated and $G/K$ is finite, the set ${\rm Hom}(G,G/K)$ is finite.
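The algorithm sketched above can be tried on the earlier example $\langle x,y \mid xyx^2\rangle$ with the word $xy^2xy$. The sketch below brute-forces homomorphisms into $S_3$ (the choice of $S_3$ and all helper names are mine); finding a homomorphism that sends the word to a non-identity permutation certifies that the word is not the identity in the presented group:

```python
from itertools import permutations, product

def compose(p, q):
    """(p . q)(i) = p(q(i)) for permutations stored as tuples."""
    return tuple(p[i] for i in q)

def evaluate(word, assignment):
    """Evaluate a word over the generators 'x' and 'y' as a permutation."""
    result = tuple(range(3))            # identity of S3
    for letter in word:
        result = compose(result, assignment[letter])
    return result

identity = tuple(range(3))
S3 = list(permutations(range(3)))

# Homomorphisms <x, y | x y x^2> -> S3 are pairs (px, py) that send
# the relator "xyxx" to the identity.
homs = [{"x": px, "y": py} for px, py in product(S3, S3)
        if evaluate("xyxx", {"x": px, "y": py}) == identity]

# A homomorphism sending "xyyxy" (= x y^2 x y) to a non-identity
# permutation shows the word is not trivial in the group.
witnesses = [h for h in homs if evaluate("xyyxy", h) != identity]
```

For instance, sending $x$ to a 3-cycle and $y$ to the identity satisfies the relation (since $x^3 = e$ there) but sends the word to $x^2 \neq e$, so `witnesses` is non-empty.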
Since $\varphi$ is surjective, $\Phi$ is injective and thus also surjective. Therefore, there exists an $f:G\to G/K$ such that $\Phi(f)$ is the quotient map $G\to G/K$. This means that the kernel of $\varphi$ is contained in $K$. Since $K$ was arbitrary, the kernel of $\varphi$ is contained in the intersection of all finite-index normal subgroups. This intersection is trivial because $G$ is residually finite. Thus, the kernel of $\varphi$ is trivial and hence $\varphi$ is injective. This result is quite interesting. For example, you can't find a homomorphism $\Z\times \Z/2\to\Z\times\Z/2$ that is surjective but not injective. You can prove this directly, but it is not altogether obvious.
A particle moves along the x-axis so that at time t its position is given by $x(t) = t^3-6t^2+9t+11$. During what time intervals is the particle moving to the left? So I know that we need the velocity for that, and we can get that after taking the derivative, but I don't know what to do after that. The velocity would then be $v(t) = 3t^2-12t+9$. How could I find the intervals? Fix $c\in\{0,1,\dots\}$, let $K\geq c$ be an integer, and define $z_K=K^{-\alpha}$ for some $\alpha\in(0,2)$. I believe I have numerically discovered that $$\sum_{n=0}^{K-c}\binom{K}{n}\binom{K}{n+c}z_K^{n+c/2} \sim \sum_{n=0}^K \binom{K}{n}^2 z_K^n \quad \text{ as } K\to\infty$$ but cannot ... So, the whole discussion is about some polynomial $p(A)$, for $A$ an $n\times n$ matrix with entries in $\mathbf{C}$, and eigenvalues $\lambda_1,\ldots, \lambda_k$. Anyways, part (a) is talking about proving that $p(\lambda_1),\ldots, p(\lambda_k)$ are eigenvalues of $p(A)$. That's basically routine computation. No problem there. The next bit is to compute the dimension of the eigenspaces $E(p(A), p(\lambda_i))$. Seems like this bit follows from the same argument. An eigenvector for $A$ is an eigenvector for $p(A)$, so the rest seems to follow. Finally, the last part is to find the characteristic polynomial of $p(A)$. I guess this means in terms of the characteristic polynomial of $A$. Well, we do know what the eigenvalues are... The so-called Spectral Mapping Theorem tells us that the eigenvalues of $p(A)$ are exactly the $p(\lambda_i)$. Usually, by the time you start talking about complex numbers you consider the real numbers as a subset of them, since a and b are real in a + bi. But you could define it that way and call it a "standard form" like ax + by = c for linear equations :-) @Riker "a + bi where a and b are integers" Complex numbers a + bi where a and b are integers are called Gaussian integers.
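One way to finish the velocity question: find the roots of $v(t)$ and test the sign between them. A sketch (not from the thread):

```python
from math import sqrt

# position x(t) = t^3 - 6t^2 + 9t + 11, so v(t) = x'(t) = 3t^2 - 12t + 9
def v(t):
    return 3 * t**2 - 12 * t + 9

# roots of v(t) = 0 via the quadratic formula; v factors as 3(t - 1)(t - 3)
a, b, c = 3, -12, 9
disc = sqrt(b * b - 4 * a * c)
t1, t2 = (-b - disc) / (2 * a), (-b + disc) / (2 * a)

# the parabola opens upward, so v < 0 strictly between the roots:
# the particle moves left exactly on the interval (1, 3)
```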
I was wondering if it is easier to factor in a non-UFD than it is to factor in a UFD. I can come up with arguments for that, but I also have arguments in the opposite direction. For instance: it should be easier to factor when there are more possibilities (multiple factorizations in a non-UFD... Does anyone know if $T: V \to R^n$ is an inner product space isomorphism if $T(v) = (v)_S$, where $S$ is a basis for $V$? My book isn't saying so explicitly, but there was a theorem saying that an inner product isomorphism exists, and another theorem kind of suggesting that it should work. @TobiasKildetoft Sorry, I meant that they should be equal (accidentally sent this before writing my answer. Writing it now) Isn't there this theorem saying that if $v,w \in V$ ($V$ being an inner product space), then $||v|| = ||(v)_S||$? (where the left norm is defined as the norm in $V$ and the right norm is the euclidean norm) I thought that this would somehow result from isomorphism @AlessandroCodenotti Actually, such an $f$ in fact needs to be surjective. Take any $y \in Y$; the maximal ideal of $k[Y]$ corresponding to that is $(Y_1 - y_1, \cdots, Y_n - y_n)$. The ideal corresponding to the subvariety $f^{-1}(y) \subset X$ in $k[X]$ is then nothing but $(f^* Y_1 - y_1, \cdots, f^* Y_n - y_n)$. If this is empty, weak Nullstellensatz kicks in to say that there are $g_1, \cdots, g_n \in k[X]$ such that $\sum_i (f^* Y_i - y_i)g_i = 1$. Well, better to say that $(f^* Y_1 - y_1, \cdots, f^* Y_n - y_n)$ is the trivial ideal, I guess. Hmm, I'm stuck again O(n) acts transitively on S^(n-1) with stabilizer at a point O(n-1) For any transitive G action on a set X with stabilizer H, G/H $\cong$ X set theoretically. In this case, as the action is a smooth action by a Lie group, you can prove this set-theoretic bijection gives a diffeomorphism
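The spectral-mapping discussion above can be illustrated concretely. A sketch using an upper-triangular matrix, whose eigenvalues are its diagonal entries (all helper names are mine):

```python
def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def poly_of_matrix(coeffs, A):
    """Evaluate p(A) for p(t) = coeffs[0] + coeffs[1]*t + ... via Horner."""
    n = len(A)
    result = [[0] * n for _ in range(n)]
    for c in reversed(coeffs):
        result = matmul(result, A)
        for i in range(n):
            result[i][i] += c        # add c * I
    return result

# Upper-triangular A has eigenvalues on its diagonal: 2 and 5.
A = [[2, 1], [0, 5]]
PA = poly_of_matrix([1, 1, 1], A)    # p(t) = 1 + t + t^2
# PA is again upper triangular with diagonal p(2) = 7 and p(5) = 31,
# matching the spectral mapping theorem.
```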
1. Search for heavy ZZ resonances in the $\ell^+\ell^-\ell^+\ell^-$ and $\ell^+\ell^-\nu\bar{\nu}$ final states using proton–proton collisions at $\sqrt{s}=13$ TeV with the ATLAS detector European Physical Journal C, ISSN 1434-6044, 04/2018, Volume 78, Issue 4 Journal Article 2. Measurement of the ZZ production cross section in proton-proton collisions at $\sqrt{s}$ = 8 TeV using the $ZZ \to \ell^-\ell^+\ell'^-\ell'^+$ and $ZZ\to\ell^-\ell^+\nu\bar{\nu}$ decay channels with the ATLAS detector Journal of High Energy Physics, ISSN 1126-6708, 2017, Volume 2017, Issue 1, pp. 1 - 53 A measurement of the ZZ production cross section in the $\ell^-\ell^+\ell'^-\ell'^+$ and $\ell^-\ell^+\nu\bar{\nu}$ channels (ℓ = e, μ) in proton-proton collisions at $\sqrt{s}=8$ TeV at the Large Hadron... Hadron-Hadron scattering (experiments) | Physics | Physical Sciences | Subatomic Physics | Natural Sciences Journal Article 3. Search for new resonances decaying to a W or Z boson and a Higgs boson in the $\ell^+\ell^-b\bar{b}$, $\ell\nu b\bar{b}$, and $\nu\bar{\nu}b\bar{b}$ channels with pp collisions at $\sqrt{s}=13$ TeV with the ATLAS detector Physics Letters B, ISSN 0370-2693, 02/2017, Volume 765, Issue C, pp. 32 - 52 Journal Article 4. Search for heavy ZZ resonances in the $\ell^+\ell^-\ell^+\ell^-$ and $\ell^+\ell^-\nu\bar{\nu}$ final states using proton-proton collisions at $\sqrt{s}=13$ TeV with the ATLAS detector The European physical journal. C, Particles and fields, ISSN 1434-6044, 2018, Volume 78, Issue 4, pp. 293 - 34 Journal Article 5. Measurement of exclusive $\gamma\gamma\to\ell^+\ell^-$ production in proton–proton collisions at $\sqrt{s}=7$ TeV with the ATLAS detector Physics Letters B, ISSN 0370-2693, 10/2015, Volume 749, Issue C, pp. 242 - 261 Journal Article 6.
Measurement of the ZZ production cross section in proton-proton collisions at $\sqrt{s}=8$ TeV using the $ZZ \to \ell^-\ell^+\ell'^-\ell'^+$ and $ZZ\to \ell^-\ell^+\nu\bar{\nu}$ decay channels with the ATLAS detector Journal of High Energy Physics, ISSN 1029-8479, 1/2017, Volume 2017, Issue 1, pp. 1 - 53 A measurement of the ZZ production cross section in the $\ell^-\ell^+\ell'^-\ell'^+$ and $\ell^-\ell^+\nu\bar{\nu}$ channels (ℓ = e, μ) in... Quantum Physics | Quantum Field Theories, String Theory | Hadron-Hadron scattering (experiments) | Classical and Quantum Gravitation, Relativity Theory | Physics | Elementary Particles, Quantum Field Theory | Nuclear Experiment Journal Article 7. Search for heavy ZZ resonances in the $\ell^+\ell^-\ell^+\ell^-$ and $\ell^+\ell^-\nu\bar{\nu}$ final states using proton–proton collisions at $\sqrt{s}= 13$ TeV with the ATLAS detector The European Physical Journal C, ISSN 1434-6044, 4/2018, Volume 78, Issue 4, pp. 1 - 34 A search for heavy resonances decaying into a pair of $Z$ bosons leading to $\ell^+\ell^-\ell^+\ell^-$ and $\ell^+\ell^-\nu\bar{\nu}$... Nuclear Physics, Heavy Ions, Hadrons | Measurement Science and Instrumentation | Nuclear Energy | Quantum Field Theories, String Theory | Physics | Elementary Particles, Quantum Field Theory | Astronomy, Astrophysics and Cosmology Journal Article 8.
$ZZ \to \ell^+\ell^-\ell'^+\ell'^-$ cross-section measurements and search for anomalous triple gauge couplings in 13 TeV pp collisions with the ATLAS detector PHYSICAL REVIEW D, ISSN 2470-0010, 02/2018, Volume 97, Issue 3 Measurements of ZZ production in the $\ell^+\ell^-\ell'^+\ell'^-$ channel in proton-proton collisions at 13 TeV center-of-mass energy at the Large Hadron Collider are... PARTON DISTRIBUTIONS | EVENTS | ASTRONOMY & ASTROPHYSICS | PHYSICS, PARTICLES & FIELDS | Couplings | Large Hadron Collider | Particle collisions | Transverse momentum | Sensors | Cross sections | Bosons | Muons | Particle data analysis | PHYSICS OF ELEMENTARY PARTICLES AND FIELDS | Physical Sciences | Natural Sciences Journal Article 9. Measurement of event-shape observables in $Z\to\ell^+\ell^-$ events in pp collisions at $\sqrt{s}$ = 7 TeV with the ATLAS detector at the LHC European Physical Journal C, ISSN 1434-6044, 2016, Volume 76, Issue 7, pp. 1 - 40 Journal Article 10. Search for heavy ZZ resonances in the $\ell^+\ell^-\ell^+\ell^-$ and $\ell^+\ell^-\nu\bar{\nu}$ final states using proton-proton collisions at $\sqrt{s}=13$ TeV with the ATLAS detector EUROPEAN PHYSICAL JOURNAL C, ISSN 1434-6044, 04/2018, Volume 78, Issue 4 A search for heavy resonances decaying into a pair of Z bosons leading to $\ell^+\ell^-\ell^+\ell^-$ and $\ell^+\ell^-\nu\bar{\nu}$ final states, where $\ell$ stands for...
DISTRIBUTIONS | BOSON | DECAY | MASS | TAUOLA | TOOL | PHYSICS, PARTICLES & FIELDS | Physical Sciences | Natural Sciences Journal Article 11. Search for new phenomena in the $WW\to \ell\nu\ell'\nu'$ final state in pp collisions at $\sqrt{s}=7$ TeV with the ATLAS detector Physics Letters B, ISSN 0370-2693, 01/2013, Volume 718, Issue 3, pp. 860 - 878 Journal Article
The ATLAS detector as installed in its experimental cavern at point 1 at CERN is described in this paper. A brief overview of the expected performance of the detector when the Large Hadron Collider… Searches for high-mass resonances in the dijet invariant mass spectrum with one or two jets identified as $b$-jets are performed using an integrated luminosity of $3.2$ fb$^{-1}$ of proton--proton… Charged Higgs bosons produced in association with a single top quark and decaying via $H^{\pm} \rightarrow \tau\nu$ are searched for with the ATLAS experiment at the LHC, using proton-proton… A search for long-lived particles is performed using a data sample of 4.7 fb$^{-1}$ from proton-proton collisions at a centre-of-mass energy $\sqrt{s}=7$ TeV collected by the ATLAS detector at the LHC.… This paper describes the data acquisition and high level trigger system of the ATLAS experiment at the Large Hadron Collider at CERN, as deployed during Run 1. Data flow as well as control,… The Trigger and Data Acquisition system (TDAQ) of the ATLAS experiment at the CERN Large Hadron Collider is based on a multi-level selection process and a hierarchical acquisition tree. The system,… This Letter reports a search for a heavy particle that decays to WW using events produced in pp collisions at $\sqrt{s}=7$ TeV. The data were recorded in 2011 by the ATLAS detector and correspond to… Author(s): Aad, G; Abbott, B; Abdallah, J; Abdinov, O; Aben, R; Abolins, M; AbouZeid, OS; Abramowicz, H; Abreu, H; Abreu, R; Abulaiti, Y; Acharya, BS; Adamczyk, L; Adams, DL; Adelman, J; Adomeit, S;… © 2015, CERN for the benefit of the ATLAS collaboration. The production of a (Formula presented.) boson in association with a (Formula presented.)
meson in proton–proton collisions probes the… (More) A measurement of the production processes of the recently discovered Higgs boson is performed in the two-photon final state using 4.5 fb(-1) of proton-proton collisions data at root s = 7 TeV and… (More)
This article is all about the basics of probability. There are two interpretations of a probability, but the difference only matters when we consider inference: probability as a long-run frequency, and probability as a degree of belief. Axioms of Probability A function \(P\) which assigns a value \(P(A)\) to every event \(A\) is a probability measure or probability distribution if it satisfies the following three axioms. \(P(A) \geq 0 \text{ } \forall \text{ } A\) \(P(\Omega) = 1\) If \(A_1, A_2, \dots\) are disjoint then \(P(\bigcup_{i=1}^{\infty} A_i) = \sum_{i=1}^{\infty} P(A_i) \) These axioms give rise to the following five properties. \(P(\emptyset) = 0\) \(A \subset B \Rightarrow P(A) \leq P(B)\) \(0 \leq P(A) \leq 1\) \(P(A^\mathsf{c}) = 1 - P(A)\) \(A \cap B = \emptyset \Rightarrow P(A \cup B) = P(A) + P(B)\) The Sample Space The sample space, \(\Omega\), is the set of all possible outcomes, \(\omega\). Subsets of \(\Omega\) are events. The empty set \(\emptyset\) contains no elements. Example – Tossing a coin Toss a coin once: \(\Omega = \{H, T\}\). Toss a coin twice: \(\Omega = \{HH, HT, TH, TT\}\). Then the event that the first toss is heads is \(\{HH, HT\}\). Set Operations – Complement, Union and Intersection Complement Given an event \(A\), the complement of \(A\) is \(A^\mathsf{c}\), where: \(A^\mathsf{c} = \{\omega \in \Omega : \omega \notin A\}\). Union The union of two sets \(A\) and \(B\), \(A \cup B\), is the set of outcomes which are in either \(A\), or in \(B\), or in both. Intersection The intersection of two sets \(A\) and \(B\), \(A \cap B\), is the set of outcomes which are in both \(A\) and \(B\). Difference Set The difference set \(A \setminus B\) is the set of outcomes in one set which are not in the other. Subsets If every element of \(A\) is contained in \(B\) then \(A\) is a subset of \(B\): \(A \subset B\), or equivalently, \(B \supset A\). Counting elements If \(A\) is a finite set, then \(|A|\) denotes the number of elements in \(A\). Indicator function An indicator function can be defined: \(I_A(\omega) = 1\) if \(\omega \in A\), and \(I_A(\omega) = 0\) otherwise. Disjoint events Two events \(A\) and \(B\) are disjoint or mutually exclusive if \(A \cap B = \emptyset\) (the empty set) – i.e. there are no outcomes in both \(A\) and \(B\). More generally, \(A_1, A_2, \dots\) are disjoint if \(A_i \cap A_j = \emptyset\) whenever \(i \neq j\). Example – intervals of the real line The intervals \([0,1)\) and \([1,2)\) are disjoint. The intervals \([0,1]\) and \([1,2]\) are not disjoint. For example, \(1\) belongs to both.
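The set operations above can be illustrated with Python's built-in `set` type; the two-coin-toss space is the assumed example:

```python
# Basic set operations on a small sample space, illustrating complement,
# union, intersection, difference, subsets, and disjointness.
omega = {"HH", "HT", "TH", "TT"}   # sample space: two coin tosses
A = {"HH", "HT"}                   # event: first toss is heads
B = {"HH", "TH"}                   # event: second toss is heads

complement_A = omega - A           # A^c: outcomes not in A
union = A | B                      # outcomes in A, in B, or in both
intersection = A & B               # outcomes in both A and B
difference = A - B                 # outcomes in A but not in B

print(complement_A == {"TH", "TT"})   # True
print(A <= omega)                     # A is a subset of omega: True
print(A.isdisjoint({"TT"}))           # A and {TT} are disjoint: True
```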
Partitions A partition of the sample space \(\Omega\) is a set of disjoint events \(A_1, A_2, \dots\) such that \(\bigcup_{i=1}^{\infty} A_i = \Omega\). Monotone increasing and monotone decreasing sequences A sequence of events \(A_1, A_2, \dots\) is monotone increasing if \(A_1 \subset A_2 \subset \cdots\). Here we define \(A = \bigcup_{i=1}^{\infty} A_i\) and write \(A_n \rightarrow A\). Similarly, a sequence of events is monotone decreasing if \(A_1 \supset A_2 \supset \cdots\). Here we define \(A = \bigcap_{i=1}^{\infty} A_i\). Again we write \(A_n \rightarrow A\). Hello. I’ve started this blog to use it as a sort of notebook. My plan is to learn about things which interest me, and then to take notes here. The idea is that it will help me to consolidate what I learn, and it will serve as a reference. Hopefully someone else will get some use from it too.
Computational Aerodynamics Questions & Answers I'm glad to hear that. Because your post may help others, I'll give you a 2-point bonus boost. I corrected it. Both the integral form and the differential form can be used in CFD, but we can derive the integral form by integrating the differential form over a volume. We'll get to this at one point. Interesting question: I'll give a 2-point bonus boost. The following is always correct: $$ \frac { \partial ( {\frac {1} {2}} {\phi^2} ) } { \partial \xi} = {\phi} {\frac {\partial \phi} {\partial \xi}} $$ where $\phi$ is any property and $\xi$ can be $x$, $y$, $t$, or any coordinate. It doesn't matter whether $\phi$ is $v_x$ or $t$: the above is a mathematical transformation, not a physical one. Not a bad question, I'll give you 1.5 bonus points.
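The identity can also be checked numerically with finite differences; a sketch assuming a sample property $\phi(\xi)=\sin\xi$ (the identity itself is just the chain rule, so any differentiable $\phi$ works):

```python
import math

def phi(xi):
    # sample property; the choice of sin is an assumption for illustration
    return math.sin(xi)

def dphi(xi, h=1e-6):
    # central finite difference for d(phi)/d(xi)
    return (phi(xi + h) - phi(xi - h)) / (2 * h)

def dhalfphi2(xi, h=1e-6):
    # central finite difference for d(phi^2 / 2)/d(xi)
    return (0.5 * phi(xi + h) ** 2 - 0.5 * phi(xi - h) ** 2) / (2 * h)

xi = 0.7
lhs = dhalfphi2(xi)          # d(phi^2/2)/dxi
rhs = phi(xi) * dphi(xi)     # phi * dphi/dxi
print(abs(lhs - rhs) < 1e-8)   # True: the two sides agree
```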
From the relation (1), we have\[xy^2x^{-1}=y^3.\]Raising this equality to the $n$-th power yields\[xy^{2n}x^{-1}= y^{3n} \tag{3}\]for any $n\in \N$. In particular, we have\[xy^4x^{-1}=y^6 \text{ and } xy^6x^{-1}=y^9.\]Substituting the former into the latter, we obtain\[x^2y^4x^{-2}=y^9. \tag{4}\]Cubing both sides gives\[x^2y^{12}x^{-2}=y^{27}.\] Using the relation (3) with $n=4$, we have $xy^8x^{-1}=y^{12}$. Substituting this into the cubed equality yields $x^3y^8x^{-3}=y^{27}$. Now, using the relation (2), we have\begin{align*}y^{27}=x^3y^8x^{-3}=(x^3y)y^8(y^{-1}x^{-3})=yx^2y^8x^{-2}y^{-1}.\end{align*}Squaring the relation (4), we have $x^2y^8x^{-2}=y^{18}$. Substituting this into the previous equality, we obtain $y^{27}=y^{18}$, and hence\[y^9=e,\]where $e$ is the identity element of $G$. Note that as we have $xy^2x^{-1} =y^3$, the elements $y^2, y^3$ are conjugate to each other. Thus, their orders must be the same. This observation together with $y^9=e$ implies $y=e$. It follows from the relation (2) that $x=e$ as well. Therefore, the group $G$ is the trivial group. A Simple Abelian Group if and only if the Order is a Prime NumberLet $G$ be a group. (Do not assume that $G$ is a finite group.)Prove that $G$ is a simple abelian group if and only if the order of $G$ is a prime number.Definition.A group $G$ is called simple if $G$ is a nontrivial group and the only normal subgroups of $G$ is […] Dihedral Group and Rotation of the PlaneLet $n$ be a positive integer. Let $D_{2n}$ be the dihedral group of order $2n$.
Using the generators and the relations, the dihedral group $D_{2n}$ is given by\[D_{2n}=\langle r,s \mid r^n=s^2=1, sr=r^{-1}s\rangle.\]Put $\theta=2 \pi/n$.(a) Prove that the matrix […] Centralizer, Normalizer, and Center of the Dihedral Group $D_{8}$Let $D_8$ be the dihedral group of order $8$.Using the generators and relations, we have\[D_{8}=\langle r,s \mid r^4=s^2=1, sr=r^{-1}s\rangle.\](a) Let $A$ be the subgroup of $D_8$ generated by $r$, that is, $A=\{1,r,r^2,r^3\}$.Prove that the centralizer […] Non-Abelian Simple Group is Equal to its Commutator SubgroupLet $G$ be a non-abelian simple group. Let $D(G)=[G,G]$ be the commutator subgroup of $G$. Show that $G=D(G)$.Definitions/Hint.We first recall relevant definitions.A group is called simple if its normal subgroups are either the trivial subgroup or the group […] Every Cyclic Group is AbelianProve that every cyclic group is abelian.Proof.Let $G$ be a cyclic group with a generator $g\in G$.Namely, we have $G=\langle g \rangle$ (every element in $G$ is some power of $g$.)Let $a$ and $b$ be arbitrary elements in $G$.Then there exists […]
Note that since $n>2$, the primitive $n$-th root $\zeta$ is not a real number.Also, we have\begin{align*}\zeta+\zeta^{-1}=2\cos(2\pi /n),\end{align*}which is a real number. Thus the field $\Q(\zeta+\zeta^{-1})$ is real. Since $\zeta$ is not real, it does not lie in $\Q(\zeta+\zeta^{-1})$, and therefore the degree of the extension satisfies\[ [\Q(\zeta):\Q(\zeta+\zeta^{-1})] \geq 2.\] We actually prove that the degree is $2$.To see this, consider the polynomial\[f(x)=x^2-(\zeta+\zeta^{-1})x+1\]in $\Q(\zeta+\zeta^{-1})[x]$. The polynomial factors as\[f(x)=x^2-(\zeta+\zeta^{-1})x+1=(x-\zeta)(x-\zeta^{-1}).\]Hence $\zeta$ is a root of this polynomial, so the minimal polynomial of $\zeta$ over $\Q(\zeta+\zeta^{-1})$ has degree at most $2$. It follows from $[\Q(\zeta):\Q(\zeta+\zeta^{-1})] \geq 2$ that $f(x)$ is the minimal polynomial of $\zeta$ over $\Q(\zeta+\zeta^{-1})$, and hence the extension degree is\[ [\Q(\zeta):\Q(\zeta+\zeta^{-1})] =2.\] Comment. The subfield $\Q(\zeta+\zeta^{-1})$ is called the maximal real subfield.The reason it is called so should be clear from the proof. Degree of an Irreducible Factor of a Composition of PolynomialsLet $f(x)$ be an irreducible polynomial of degree $n$ over a field $F$.
Let $g(x)$ be any polynomial in $F[x]$.Show that the degree of each irreducible factor of the composite polynomial $f(g(x))$ is divisible by $n$.Hint.Use the following fact.Let $h(x)$ be an […] Galois Group of the Polynomial $x^p-2$.Let $p \in \Z$ be a prime number.Then describe the elements of the Galois group of the polynomial $x^p-2$.Solution.The roots of the polynomial $x^p-2$ are\[ \sqrt[p]{2}\zeta^k, k=0,1, \dots, p-1\]where $\sqrt[p]{2}$ is a real $p$-th root of $2$ and $\zeta$ […] $x^3-\sqrt{2}$ is Irreducible Over the Field $\Q(\sqrt{2})$Show that the polynomial $x^3-\sqrt{2}$ is irreducible over the field $\Q(\sqrt{2})$.Hint.Consider the field extensions $\Q(\sqrt{2})$ and $\Q(\sqrt[6]{2})$.Proof.Let $\sqrt[6]{2}$ denote the positive real $6$-th root of $2$.Then since $x^6-2$ is […] Application of Field Extension to Linear CombinationConsider the cubic polynomial $f(x)=x^3-x+1$ in $\Q[x]$.Let $\alpha$ be any real root of $f(x)$.Then prove that $\sqrt{2}$ cannot be written as a linear combination of $1, \alpha, \alpha^2$ with coefficients in $\Q$.Proof.We first prove that the polynomial […]
A particle moves along the x-axis so that at time t its position is given by $x(t) = t^3-6t^2+9t+11$. During what time intervals is the particle moving to the left? I know that we need the velocity for that, and we can get it by taking the derivative, but I don't know what to do after that. The velocity would then be $v(t) = 3t^2-12t+9$; how could I find the intervals? Fix $c\in\{0,1,\dots\}$, let $K\geq c$ be an integer, and define $z_K=K^{-\alpha}$ for some $\alpha\in(0,2)$.I believe I have numerically discovered that$$\sum_{n=0}^{K-c}\binom{K}{n}\binom{K}{n+c}z_K^{n+c/2} \sim \sum_{n=0}^K \binom{K}{n}^2 z_K^n \quad \text{ as } K\to\infty$$but cannot ... So, the whole discussion is about some polynomial $p(A)$, for $A$ an $n\times n$ matrix with entries in $\mathbf{C}$, and eigenvalues $\lambda_1,\ldots, \lambda_k$. Anyways, part (a) is about proving that $p(\lambda_1),\ldots, p(\lambda_k)$ are eigenvalues of $p(A)$. That's basically routine computation. No problem there. The next bit is to compute the dimension of the eigenspaces $E(p(A), p(\lambda_i))$. Seems like this bit follows from the same argument. An eigenvector for $A$ is an eigenvector for $p(A)$, so the rest seems to follow. Finally, the last part is to find the characteristic polynomial of $p(A)$. I guess this means in terms of the characteristic polynomial of $A$. Well, we do know what the eigenvalues are... The so-called Spectral Mapping Theorem tells us that the eigenvalues of $p(A)$ are exactly the $p(\lambda_i)$. Usually, by the time you start talking about complex numbers you consider the real numbers as a subset of them, since a and b are real in a + bi. But you could define it that way and call it a "standard form" like ax + by = c for linear equations :-) @Riker "a + bi where a and b are integers" Complex numbers a + bi where a and b are integers are called Gaussian integers.
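For the particle question at the top: factor the velocity, $v(t)=3t^2-12t+9=3(t-1)(t-3)$, which is negative exactly on $(1,3)$, so the particle moves left on that interval. A quick numeric sign check (plain Python):

```python
# v(t) = 3t^2 - 12t + 9 = 3(t-1)(t-3); the particle moves left where v < 0.
def v(t):
    return 3 * t**2 - 12 * t + 9

print(v(1), v(3))          # 0 0  -> roots at t = 1 and t = 3
print(v(2) < 0)            # True -> moving left on (1, 3)
print(v(0) > 0, v(4) > 0)  # True True -> moving right outside [1, 3]
```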
I was wondering if it is easier to factor in a non-UFD than it is to factor in a UFD. I can come up with arguments for that, but I also have arguments in the opposite direction. For instance: it should be easier to factor when there are more possibilities (multiple factorizations in a non-UFD... Does anyone know if $T: V \to R^n$ is an inner product space isomorphism if $T(v) = (v)_S$, where $S$ is a basis for $V$? My book isn't saying so explicitly, but there was a theorem saying that an inner product isomorphism exists, and another theorem kind of suggesting that it should work. @TobiasKildetoft Sorry, I meant that they should be equal (accidentally sent this before writing my answer; writing it now). Isn't there this theorem saying that if $v,w \in V$ ($V$ being an inner product space), then $||v|| = ||(v)_S||$? (where the left norm is defined as the norm in $V$ and the right norm is the Euclidean norm) I thought that this would somehow result from the isomorphism @AlessandroCodenotti Actually, such an $f$ in fact needs to be surjective. Take any $y \in Y$; the maximal ideal of $k[Y]$ corresponding to that is $(Y_1 - y_1, \cdots, Y_n - y_n)$. The ideal corresponding to the subvariety $f^{-1}(y) \subset X$ in $k[X]$ is then nothing but $(f^* Y_1 - y_1, \cdots, f^* Y_n - y_n)$. If this is empty, the weak Nullstellensatz kicks in to say that there are $g_1, \cdots, g_n \in k[X]$ such that $\sum_i (f^* Y_i - y_i)g_i = 1$. Well, better to say that $(f^* Y_1 - y_1, \cdots, f^* Y_n - y_n)$ is the trivial ideal, I guess. Hmm, I'm stuck again. O(n) acts transitively on S^(n-1) with stabilizer at a point O(n-1). For any transitive G action on a set X with stabilizer H, G/H $\cong$ X set-theoretically. In this case, as the action is a smooth action by a Lie group, you can prove this set-theoretic bijection gives a diffeomorphism.
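On the coordinate-map question above: the equality $||v|| = ||(v)_S||$ holds when $S$ is an *orthonormal* basis. A quick numeric check in $\mathbb{R}^2$ with an assumed rotated basis:

```python
import math

# If S = {u1, u2} is orthonormal, the coordinate map v -> (v)_S preserves
# the norm. Here S is an assumed rotated orthonormal basis of R^2.
theta = 0.6
u1 = (math.cos(theta), math.sin(theta))
u2 = (-math.sin(theta), math.cos(theta))   # u1, u2 orthonormal

def dot(a, b):
    return a[0] * b[0] + a[1] * b[1]

v = (3.0, -2.0)
coords = (dot(v, u1), dot(v, u2))          # (v)_S for orthonormal S

norm_v = math.sqrt(dot(v, v))              # norm of v in V
norm_coords = math.hypot(coords[0], coords[1])   # Euclidean norm of (v)_S
print(abs(norm_v - norm_coords) < 1e-12)   # True
```

For a non-orthonormal basis the same computation fails, which is why the book's theorem needs the orthonormality hypothesis.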
Let $G$ be a connected (strongly connected) graph (digraph). Assume that the minimal vertex degree (in/out degrees) of the graph is $\delta$ (are $\delta^-,\delta^+$). What is the maximal diameter possible for such a graph? For example, if $\delta \geq \frac {n}{2}$ ($\delta^- + \delta^+ \geq n-1$), then the graph has diameter at most 2. Also, it seems that $\delta \geq c\cdot n$ gives an upper bound of $\lceil \frac{2}{c} \rceil - 1$ on the diameter, but I'm not sure it's tight (also, I haven't tried proving it yet, so I might be wrong). (For $c=\frac{1}{2}$ it's definitely not tight.) What other bounds can we get?
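A quick exhaustive check of the first bound ($\delta \geq n/2 \Rightarrow$ diameter $\leq 2$) for $n = 5$; a sketch in pure Python that enumerates all $2^{10}$ graphs on 5 vertices:

```python
from itertools import combinations, product
from collections import deque

n = 5
pairs = list(combinations(range(n), 2))   # the 10 possible edges

def diameter(adj):
    # max over all BFS eccentricities; inf if disconnected
    best = 0
    for s in range(n):
        dist = {s: 0}
        q = deque([s])
        while q:
            u = q.popleft()
            for w in adj[u]:
                if w not in dist:
                    dist[w] = dist[u] + 1
                    q.append(w)
        if len(dist) < n:
            return float("inf")
        best = max(best, max(dist.values()))
    return best

worst = 0
for mask in product([0, 1], repeat=len(pairs)):
    adj = {v: set() for v in range(n)}
    for bit, (u, w) in zip(mask, pairs):
        if bit:
            adj[u].add(w)
            adj[w].add(u)
    if min(len(adj[v]) for v in range(n)) >= 3:   # delta >= n/2 = 2.5
        worst = max(worst, diameter(adj))
print(worst)   # 2: the bound holds, and K5 minus an edge attains it
```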
Global weak solution to the quantum Navier-Stokes-Landau-Lifshitz equations with density-dependent viscosity 1. School of Mathematics and Information Science, Guangzhou University, Guangzhou, 510006, China 2. Institute of Applied Physics and Computational Mathematics, China Academy of Engineering Physics, Beijing, 100088, China In this paper we investigate the global existence of weak solutions to the quantum Navier-Stokes-Landau-Lifshitz equations with density-dependent viscosity in the two-dimensional case. We study the model with singular pressure and the dispersive term. The main technique is to use uniform energy estimates and B-D entropy estimates to prove the convergence of the solutions to the approximate system. We also use some convergence theorems in Sobolev spaces. Keywords: Navier-Stokes-Landau-Lifshitz equations, global weak solutions, energy estimate, B-D entropy estimate. Mathematics Subject Classification: Primary: 35A01, 35D30, 35M31, 35Q40; Secondary: 76N10. Citation: Guangwu Wang, Boling Guo. Global weak solution to the quantum Navier-Stokes-Landau-Lifshitz equations with density-dependent viscosity. Discrete & Continuous Dynamical Systems - B, 2019, 24 (11): 6141-6166. doi: 10.3934/dcdsb.2019133
In this article, we will learn about successive percentage change, which deals with two or more consecutive percentage changes in a quantity. Why isn't this the simple addition of two percentage changes? Successive Percentage Change: If there are percentage changes of a% and b% in a quantity consecutively, then the total equivalent percentage change will be `(a + b + \frac { ab }{ 100 } )%`. Example 1: There are two outlets; one is offering a discount of 50% + 50% and the other is offering a discount of 60% + 40%. Which outlet must one visit to get the bigger discount? Solution: Case 1: 50% + 50%. Total discount = `(-50)+(-50)+\frac {(-50)\times (-50) }{ 100 }` = -100 + 25 = -75% ⇒ 75% discount. Case 2: 60% + 40%. Total discount = `(-60)+(-40)+\frac { (-60)\times(-40) }{ 100 } =-100+24=-76%` ⇒ 76% discount. Therefore, she must visit the outlet offering a discount of 60% + 40%. Example 2: The length and breadth of a rectangle have been increased by 30% and 20% respectively. By what percentage will its area increase? Solution: Area = length × breadth. Total percentage change = `( a+b+\frac { ab }{ 100 } ) %` = `30+20+\frac { 30 \times 20 }{ 100 }= 56%`. Example 3: The length of a rectangle has been increased by 30% and its breadth decreased by 20%. By what percentage will its area change?
Solution: Area = length × breadth. Total percentage change = `( a+b+\frac { ab }{ 100 } ) %` = `30+(-20)+\frac { 30 \times (-20) }{ 100 }= 4%`. Example 4: There is 10%, 15% and 20% depreciation in the value of a mobile phone in the 1st, 2nd and 3rd months after sale. If the price at the beginning was Rs 10,000, then the price of the mobile after the 3rd month will be: Solution: Total percentage change = `( a+b+\frac { ab }{ 100 } ) %`. Take 10% and 20%: percentage equivalent = `-10 -20 +\frac { (-10)\times(-20) }{ 100 }` = -28%. Now take 28% and 15%: percentage equivalent = `-28 -15 +\frac { (-28)\times(-15) }{ 100 }` = -38.8%. Therefore, the price after the 3rd month = 10000 × (100 - 38.8)% = Rs 6120. Aliter: Final price = original price × ` MF _{ 1 }\times MF _{ 2 }\times MF _{ 3 }` = 10000 × 0.9 × 0.85 × 0.8 = Rs 6120. Example 5: The price of an item is increased by 40% and its sales decrease by 20%. What will be the percentage effect on the income of the shopkeeper? Solution: Income = price × sales ⇒ percentage effect = `40 +(-20) +\frac { (40)\times(-20) }{ 100 }` = 12% increase. Example 6: The radius of a circle is increased by 15%. By what percent will its area increase? Solution: Area of circle = ` \pi r ^{ 2 }`. With a = b = P, the equivalent change is `( a+b+\frac { ab }{ 100 } ) % = P + P + \frac { P\times P }{ 100 } = 2P+ \frac { P ^{ 2 } }{ 100 }` ⇒ `2\times 15+ \frac { { 15 }^{ 2 } }{ 100 } = 32.25%`. Note: Effect on area = 2P + (P×P)/100 (where P is the % change in the linear dimension); this is valid for a circle, square and equilateral triangle.
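All of the worked examples above reduce to the same equivalent-change formula; a minimal Python sketch (the function name is mine):

```python
# a% followed by b% is equivalent to a single change of (a + b + ab/100)%.
def successive_change(a, b):
    return a + b + a * b / 100

print(successive_change(-50, -50))   # -75.0 -> 75% discount (Example 1)
print(successive_change(-60, -40))   # -76.0 -> 76% discount (Example 1)
print(successive_change(30, 20))     # 56.0  -> area up 56%  (Example 2)
print(successive_change(30, -20))    # 4.0   -> area up 4%   (Example 3)
# Three changes chain by applying the formula twice (Example 4):
print(successive_change(successive_change(-10, -20), -15))   # approx -38.8
```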
Faulty Balance: Example 1: A milkman mixes 100 litres of water with every 800 litres of milk and sells at a markup of 11.11%. Find the percentage profit. Solution: Total profit = adding water + mark-up ⇒ Profit = `\frac { 100 }{ 800 } + \frac { 1 }{ 9 } + \frac { 100 }{ 800 } \times \frac { 1 }{ 9 } = \frac { 9+8+1 }{ 72 }= \frac { 1 }{ 4 }` ⇛ 25% profit. Aliter: 100 litres of water + 800 litres of milk = 900 litres sold as milk. Let CP = Rs 1/litre ⇒ total CP = Rs 800. There is a mark-up as well, so mark-up = `\frac { 1 }{ 9 } \times 900` = Rs 100 ⇒ total SP = 900 + 100 = Rs 1000 ⇒ `\frac { SP }{ CP }= \frac { 1000 }{ 800 }= \frac { 5 }{ 4 }= 1.25` ⇛ 25% profit. Compound Interest: Compound interest, in simple terms, is the successive percentage equivalent of simple interest. Example 1: The difference between compound interest and simple interest on a sum for two years at 8% per annum, where the interest is compounded annually, is Rs 16. Find the principal amount. Solution: Simple interest for 2 years = 2 × 8% = 16%. Compound interest for 2 years = `8+8+\frac { 8\times 8 }{ 100 } =16.64%`. Therefore, the difference = 0.64% of the principal = Rs 16 ⇒ Principal = `16\times\frac { 100 }{ 0.64 }= 2500 Rs`
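The compound-interest example can be checked numerically; a small sketch using the values from the example above (for 2 years, CI − SI = P·r²):

```python
# Difference between CI and SI over 2 years is P * r^2; here r = 0.08
# and the difference is Rs 16, so P = 16 / 0.08^2 = 2500.
P = 16 / 0.08**2
print(abs(P - 2500) < 1e-6)        # True

ci = P * ((1 + 0.08) ** 2 - 1)     # compound interest for 2 years
si = P * 0.08 * 2                  # simple interest for 2 years
print(abs((ci - si) - 16) < 1e-6)  # True: the difference is Rs 16
```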
Prove that $\F_3[x]/(x^2+1)$ is a Field and Find the Inverse Elements Problem 529 Let $\F_3=\Zmod{3}$ be the finite field of order $3$. Consider the ring $\F_3[x]$ of polynomials over $\F_3$ and its ideal $I=(x^2+1)$ generated by $x^2+1\in \F_3[x]$. (a) Prove that the quotient ring $\F_3[x]/(x^2+1)$ is a field. How many elements does the field have? (b) Let $ax+b+I$ be a nonzero element of the field $\F_3[x]/(x^2+1)$, where $a, b \in \F_3$. Find the inverse of $ax+b+I$. (c) Recall that the multiplicative group of nonzero elements of a field is a cyclic group. Confirm that the element $x$ is not a generator of $E^{\times}$, where $E=\F_3[x]/(x^2+1)$, but $x+1$ is a generator. Proof. (a) Prove that the quotient ring $\F_3[x]/(x^2+1)$ is a field Let $f(x)=x^2+1$. We claim that the polynomial $f(x)$ is irreducible over $\F_3$. To see this, note that $f(x)$ is a quadratic polynomial. So $f(x)$ is irreducible over $\F_3$ if it does not have a root in $\F_3$. We have \begin{align*} f(0)=1, \quad f(1)=2, \quad f(2)=2^2+1=2 \text{ in } \F_3. \end{align*} Hence $f(x)$ does not have a root in $\F_3$ and it is irreducible over $\F_3$. It follows that the quotient $\F_3[x]/(x^2+1)$ is a field. Since $x^2+1$ is quadratic, the extension degree of $\F_3[x]/(x^2+1)$ over $\F_3$ is $2$. Hence the number of elements in the field is $3^2=9$. (b) Find the inverse of $ax+b+I$ Let $ax+b$ be a representative of a nonzero element of the field $\F_3[x]/(x^2+1)$. Let $cx+d$ be its inverse. Then we have \begin{align*} 1&=(ax+b)(cx+d)=acx^2+(ad+bc)x+bd\\ &=(ad+bc)x+bd-ac \end{align*} since $x^2=-1$ in $\F_3[x]/(x^2+1)$. Hence we obtain two equations \begin{align*} ad+bc=0 \text{ and } bd-ac=1. \end{align*} Since $ax+b$ is a nonzero element, at least one of $a, b$ is not zero. If $a\neq 0$, then the first equation gives \[d=-\frac{bc}{a}. \tag{*}\] Substituting this into the second equation, we obtain \begin{align*} \left(\, \frac{-b^2-a^2}{a} \,\right)c=1.
\end{align*} Observe that $a^2+b^2$ is not zero in $\F_3$. (Since $a \neq 0$, we have $a^2=1$. Also $b^2=0$ or $1$.) Hence we have \begin{align*} c=-\frac{a}{a^2+b^2}. \end{align*} It follows from (*) that \[d=\frac{b}{a^2+b^2}.\] Thus, if $a \neq 0$, then the inverse element is \[(ax+b)^{-1}=\frac{1}{a^2+b^2}(-ax+b). \tag{**}\] If $a=0$, then $b\neq 0$ and it is clear that the inverse element of $ax+b=b$ is $1/b$. Note that the formula (**) is still true in this case. In summary, we have \[(ax+b)^{-1}=\frac{1}{a^2+b^2}(-ax+b)\] for any nonzero element $ax+b$ in the field $\F_3[x]/(x^2+1)$. (c) $x$ is not a generator but $x+1$ is a generator Note that the order of $E^{\times}$ is $8$ since $E$ is a finite field of order $9$ by part (a). We compute the powers of $x$ and obtain \begin{align*} x, \quad x^2=-1, \quad x^3=-x, \quad x^4=-x^2=1. \end{align*} Thus, the order of the element $x$ is $4$, hence $x$ is not a generator of the cyclic group $E^{\times}$. Next, let us check that $x+1$ is a generator. We compute the powers of $x+1$ as follows. \begin{align*} &x+1, \quad (x+1)^2=x^2+2x+1=2x, \\ &(x+1)^3=2x(x+1)=2x^2+2x=2x-2=2x+1\\ &(x+1)^4=(2x+1)(x+1)=2x^2+3x+1=2. \end{align*} Observe that at this point $(x+1)^4=2\neq 1$, so the order of $x+1$ must be larger than $4$. Since the order of $E^{\times}$ is $8$, the order of $x+1$ must be $8$ by Lagrange’s theorem. Just for reference we give the complete list of powers of $x+1$. \[\begin{array}{ |c|c|} \hline n & (x+1)^n \\ \hline 1 & x+1 \\ 2 & 2x \\ 3 & 2x+1 \\ 4 & 2 \\ 5 & 2x+2\\ 6 & x\\ 7 &x+2\\ 8 & 1\\ \hline \end{array}\]
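The computations in parts (b) and (c) can be verified by brute force, representing $ax+b$ as the pair (a, b) of coefficients mod 3 (this pair encoding is my own):

```python
# Arithmetic in F_3[x]/(x^2+1): represent ax+b as (a, b) mod 3.
# Since x^2 = -1, (ax+b)(cx+d) = (ad+bc)x + (bd-ac).
def mul(p, q):
    a, b = p
    c, d = q
    return ((a * d + b * c) % 3, (b * d - a * c) % 3)

def order(p):
    # multiplicative order of a nonzero element; (0, 1) represents 1
    k, acc = 1, p
    while acc != (0, 1):
        acc = mul(acc, p)
        k += 1
    return k

print(order((1, 0)))   # order of x   -> 4 (not a generator)
print(order((1, 1)))   # order of x+1 -> 8 (a generator)

# Check the inverse formula (ax+b)^{-1} = (a^2+b^2)^{-1} (-ax+b):
for a in range(3):
    for b in range(3):
        if (a, b) != (0, 0):
            s = pow(a * a + b * b, -1, 3)     # inverse of a^2+b^2 mod 3
            inv = ((-a * s) % 3, (b * s) % 3)
            print(mul((a, b), inv) == (0, 1))   # True for all 8 elements
```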
Suppose that we have\[\phi(m)=0.\]Then we have $2m=0$, and hence $m=0$. It follows that the group homomorphism $\phi$ is injective. (c) Prove that there does not exist a group homomorphism $\psi:B \to A$ such that $\psi \circ \phi=\id_A$. Seeking a contradiction, assume that there exists a group homomorphism $\psi:B \to A$ such that $\psi \circ \phi =\id_A$. Then we compute\begin{align*}&1=\id_A(1)=\psi \circ \phi(1)\\&=\psi(2)=\psi(1+1)\\&=\psi(1)+\psi(1) && \text{since $\psi$ is a group homomorphism}\\&=2\psi(1).\end{align*}This yields\[\psi(1)=\frac{1}{2}.\]However, note that $\psi(1)$ is an element of $A$, thus $\psi(1)$ is an integer. Hence we have a contradiction, and we conclude that there is no such $\psi$. A Homomorphism from the Additive Group of Integers to ItselfLet $\Z$ be the additive group of integers. Let $f: \Z \to \Z$ be a group homomorphism.Then show that there exists an integer $a$ such that\[f(n)=an\]for any integer $n$.Hint.Let us first recall the definition of a group homomorphism.A group homomorphism from a […] A Group Homomorphism is Injective if and only if MonicLet $f:G\to G'$ be a group homomorphism.
Self-focusing Multibump Standing Waves in Expanding Waveguides

Abstract Let $M$ be a smooth $k$-dimensional closed submanifold of $\mathbb{R}^N$, $N \geq 2$, and let $\Omega_R$ be the open tubular neighborhood of radius 1 of the expanded manifold $M_R := \{Rx : x \in M\}$. For $R$ sufficiently large we show the existence of positive multibump solutions to the problem $$ -\Delta u + \lambda u = f(u)\,{\rm in}\,\Omega_R,\quad u= 0\,{\rm on}\,\partial\Omega_R. $$ The function $f$ is superlinear and subcritical, and $\lambda > -\lambda_1$, where $\lambda_1$ is the first Dirichlet eigenvalue of $-\Delta$ in the unit ball in $\mathbb{R}^{N-k}$.

Keywords: Tangent Space, Tubular Neighborhood, Nonlinear Elliptic Equation, Ground State Solution, Dirichlet Eigenvalue

Copyright information © Springer Basel AG 2011
Let $p(t) = \sum_{k=1}^n c_k e^{i \lambda_k t}$ be an exponential polynomial. In the paper "Local estimates for exponential polynomials and their applications to inequalities of the uncertainty principle type" http://www.math.msu.edu/~fedja/Published/paper.ps Nazarov proves an estimate on the maximum value attained by the polynomial $p$ in an interval $I$, in terms of the maximum of $p$ in a subset $E \subset I$. To be precise, he obtains the following estimate: $$ \sup_{t \in I} |p(t)| \leq \left( \frac{A \mu(I)}{\mu(E)} \right)^{n-1} \sup_{t\in E} |p(t)|.$$ At one point he mentions that the result holds true for more general functions of the type $$p(t) = \sum_{k=1}^n q_k(t) e^{i \lambda_k t},$$ where the $q_k(t)$ are algebraic polynomials of degree $d_k$, by an "obvious" approximation argument. It is not clear to me what exactly the argument is that he is suggesting. One of the methods he uses to obtain an estimate as mentioned above is by using Turan's Lemma, although in that case one gets exponent $2n^2$ instead of $n-1$. $\underline{\text{Turan's Lemma}}$: Let $z_1,\dots,z_n$ be complex numbers, $|z_j|\geq 1$, $j=1,\dots,n$. Let $b_1,\dots, b_n \in \mathbb C$ and $$S_j:= \sum_{k=1}^n b_k z_k^j.$$ Then $$|S_0| \leq \left\{\frac{4 e (m+n-1)}{n}\right\}^{n-1} \max_{j=m+1}^{m+n} |S_j|.$$ As a simple consequence of this result, when the value of an exponential polynomial (with constant coefficients) is known for $n$ consecutive terms of an arithmetic progression, one can get an estimate of the value of the polynomial along that arithmetic progression. That is, let $p(t)=\sum_{k=1}^n c_k e^{i \lambda_k t}$ and assume that the value of the polynomial $p(t)$ is known for $t_j=t_0+j \delta$ for $j= m+1,\dots,m+n$. Then substitute $b_k=c_k e^{i \lambda_k t_0}$ and $z_k= e^{i \lambda_k \delta}$ and apply Turan's lemma. The result now follows by Lebesgue's density theorem and some averaging argument.
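Turan's Lemma as stated is easy to probe numerically. The sketch below (my own example; the particular $z_k$, $b_k$, $m$, $n$ are arbitrary choices constrained only by $|z_k| \geq 1$) evaluates both sides of the inequality for one instance — an illustration, not a proof:

```python
# Numerical illustration of Turan's Lemma:
#   |S_0| <= (4e(m+n-1)/n)^{n-1} * max_{j=m+1..m+n} |S_j|
# for one hand-picked instance with n = 3, m = 2.
import cmath
import math

n, m = 3, 2
z = [cmath.exp(1j * t) for t in (0.0, 0.7, -1.3)]  # unimodular, so |z_k| >= 1
b = [1.0, 0.5 - 0.2j, -0.8]

def S(j):
    """The power sum S_j = sum_k b_k z_k^j."""
    return sum(bk * zk ** j for bk, zk in zip(b, z))

lhs = abs(S(0))
bound = (4 * math.e * (m + n - 1) / n) ** (n - 1) \
    * max(abs(S(j)) for j in range(m + 1, m + n + 1))
```

Running this for the values above gives `lhs` well below `bound`, as the lemma predicts (the constant leaves a lot of slack for generic data).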
We might want to get a similar result to Turan's Lemma for the more general type of exponential polynomial $p(t) = \sum_{k=1}^n q_k(t) e^{i \lambda_k t}$, where the $q_k(t)$ are algebraic polynomials of degree $d_k$. But I doubt this is what he is suggesting here, as later, in order to get the sharper result (the one with exponent $n-1$), he uses some weak-type estimates, it seems. (I have not read this part of the proof yet.) So, what exactly is the obvious approximation argument he is trying to suggest here?
Although machine learning is great for shape classification, for shape recognition we must still use the old methods, such as the Hough Transform and RANSAC. In this post, we’ll look into using the Hough Transform for recognizing straight lines. The following is taken from E. R. Davies’ book, Computer Vision: Principles, Algorithms, Applications, Learning, and from Digital Image Processing by Gonzalez and Woods. Straight edges are amongst the most common features of the modern world, arising in perhaps the majority of manufactured objects and components – not least in the very buildings in which we live. Yet, it is arguable whether true straight lines ever arise in the natural state: possibly the only example of their appearance in virgin outdoor scenes is the horizon – although even this is clearly seen from space as a circular boundary! The surface of water is essentially planar, although it is important to realize that this is a deduction: the fact remains that straight lines seldom appear in completely natural scenes. Be all this as it may, it is clearly vital both in city pictures and in the factory to have effective means of detecting straight edges. This chapter studies available methods for locating these important features. Historically, the Hough transform (HT) has been the main means of detecting straight edges, and since the method was originally invented by Hough in 1962, it has been developed and refined for this purpose. We’re going to concentrate on it in this blog post, and this also prepares you to use HT to detect circles, ellipses, corners, etc., which we’ll talk about in the not-too-distant future. We start by examining the original Hough scheme, even though it is now seen to be wasteful in computation, since the method has since evolved. First, let us introduce the Hough Transform. Often, we have to work in unstructured environments in which all we have is an edge map and no knowledge about where objects of interest might be.
In such situations, all pixels are candidates for linking, and thus have to be accepted or eliminated based on predefined global properties. In this section, we develop an approach based on whether sets of pixels lie on curves of a specified shape. Once detected, these curves form the edge or region boundaries of interest. Given $n$ points in the image, suppose that we want to find subsets of these points that lie on straight lines. One possible solution is to find all lines determined by every pair of points, then find all subsets of points that are close to particular lines. This approach involves finding $n(n-1)/2 \sim n^2$ lines, then performing $(n)(n(n-1))/2 \sim n^3$ comparisons of every point against all lines. As you might have guessed, this is an extremely computationally expensive task. Imagine it: we check every pixel for neighboring pixels and compare their distances to see if they form a straight line. Impossible! Hough, as we said, proposed in 1962 an alternative approach to this scanline method, commonly referred to as the Hough transform. Let $(x_i, y_i)$ denote a point in the xy-plane and consider the general equation of a straight line in slope-intercept form: $y_i = ax_i + b$. Infinitely many lines pass through $(x_i, y_i)$, but they all satisfy this equation for varying values of $a$ and $b$. However, writing this equation as $b = -x_i a+y_i$ and considering the ab-plane – also called parameter space – yields the equation of a single line in parameter space associated with $(x_i, y_i)$. A second point has a single line in parameter space associated with it as well, which intersects the line associated with $(x_i, y_i)$ at some point $(a\prime, b\prime)$ in parameter space (assuming the two parameter-space lines are not parallel), where $a\prime$ is the slope and $b\prime$ is the intercept of the line containing both points in the xy-plane. In fact, all points on this xy-plane line have lines in parameter space that intersect at $(a\prime, b\prime)$.
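The slope-intercept construction above can be made concrete: the parameter-space lines $b = -x_i a + y_i$ of two image points meet exactly at the $(a\prime, b\prime)$ of the xy-plane line joining them. A minimal sketch (my own helper, assuming the two points do not share an x-coordinate):

```python
# For two image points, the parameter-space lines b = -x_i * a + y_i
# intersect at (a', b'): the slope and intercept of the line through both
# points. Assumes x1 != x2 (otherwise the line is vertical and a is undefined,
# which is exactly the difficulty motivating the normal form below).
def param_space_intersection(p1, p2):
    (x1, y1), (x2, y2) = p1, p2
    a = (y2 - y1) / (x2 - x1)  # slope of the xy-plane line through p1 and p2
    b = y1 - a * x1            # intercept; equivalently b = -x1 * a + y1
    return a, b
```

For example, the points $(1,3)$ and $(2,5)$ yield $(a\prime, b\prime) = (2, 1)$, i.e. the line $y = 2x + 1$ through both.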
Here, this figure illustrates the concept: in principle, the parameter-space lines corresponding to all points $(x_k, y_k)$ in the xy-plane could be plotted, and the principal lines in that plane could be found by identifying points in parameter space where large numbers of parameter-space lines intersect. However, a difficulty with this approach is that $a$ approaches infinity as the line approaches the vertical direction. One way around this difficulty is to use the normal representation of a line: \[ x \cos(\theta) + y \sin(\theta) = \rho \] The figure on the right below demonstrates the geometrical interpretation of the parameters $\rho$ and $\theta$. A horizontal line has $\theta = 0^\circ$, with $\rho$ being equal to the positive x-intercept. Similarly, a vertical line has $\theta = 90^\circ$, with $\rho$ being equal to the positive y-intercept. Each sinusoidal curve in the middle of the figure below represents the family of lines that pass through a particular point $(x_k, y_k)$ in the xy-plane. Let’s talk about the properties of the Hough transform. The figure below illustrates the Hough transform based on the equation above. On the top, you see an image of size $M\times M$, with $M=101$, containing five labeled white points, and below it each of these points is mapped into the parameter space, the $\rho\theta$-plane, using subdivisions of one unit for the $\rho$ and $\theta$ axes. The range of $\theta$ values is $\pm 90^\circ$ and the range of $\rho$ values is $\pm \sqrt{2} M$. As the bottom image shows, each curve has a different sinusoidal shape. The horizontal line resulting from the mapping of point 1 is a sinusoid of zero amplitude. The points labeled A and B in the image on the bottom illustrate the colinearity detection property of the Hough transform. For example, point B marks the intersection of the curves corresponding to points 2, 3, and 4 in the xy image plane.
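Before continuing with the figure, the $\rho\theta$ voting procedure just described can be sketched in a few lines. The bin sizes below (one degree, one pixel) follow the one-unit subdivisions mentioned above; the dictionary-based accumulator is my simplification of the usual 2-D array:

```python
# A minimal Hough accumulator for the normal form x*cos(theta) + y*sin(theta) = rho.
# theta ranges over [-90, 90) degrees, rho over [-sqrt(2)*M, sqrt(2)*M];
# each edge point votes for every (rho, theta) bin its sinusoid passes through.
import math

def hough_lines(points, M):
    acc = {}  # (rho_bin, theta_degrees) -> number of votes
    for x, y in points:
        for t in range(180):
            theta_deg = t - 90
            theta = math.radians(theta_deg)
            rho = x * math.cos(theta) + y * math.sin(theta)
            key = (round(rho), theta_deg)   # 1-pixel x 1-degree bins
            acc[key] = acc.get(key, 0) + 1
    return acc
```

Feeding in five collinear points on the line $y = x$ (which has $\rho = 0$, $\theta = -45^\circ$ in this parametrization), the bin $(0, -45)$ collects all five votes, exactly the peak-detection behavior described above.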
The location of point A indicates that these three points lie on a straight line passing through the origin $(\rho = 0)$ and oriented at $-45^\circ$. Similarly, the curves intersecting at point B in parameter space indicate that points 2, 3, and 4 lie on a straight line oriented at $45^\circ$, whose distance from the origin is $\rho = 71$. Finally, Q, R, and S illustrate the fact that the Hough transform exhibits a reflective adjacency relationship at the right and left edges of the parameter space. Now that we know the basics of HT and line detection using HT, let’s take a look at longitudinal line localization. The previous method is insensitive to where along the infinite idealized line an observed segment appears. The reason for this is that we only have two parameters, $\rho$ and $\theta$. There is some advantage to be gained in this, in that partial occlusion of a line does not prevent its detection: indeed, if several segments of a line are visible, they can all contribute to the peak in parameter space, hence improving sensitivity. On the other hand, for full image interpretation, it is useful to have information about the longitudinal placement of line segments. This is achieved by a further stage of processing. The additional stage involves finding which points contributed to each peak in the main parameter space, and carrying out connectivity analysis in each case. Some call this process xy-grouping. It is not vital that the line segments should be 4-connected (meaning a neighborhood with only the vertical and horizontal neighbors) or 8-connected (with diagonal neighbors) – just that there should be sufficient points on them so that adjacent points are within a threshold distance of each other, i.e., groups of points are merged if they are within a prespecified distance. Finally, segments shorter than a certain minimum length can be ignored as too insignificant to help with image interpretation. An alternative method for saving computation time is the foot-of-normal method.
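Backing up a step, the xy-grouping stage just described can be sketched as follows. Sorting the contributing points and splitting at gaps larger than a threshold is my simplification of the connectivity analysis; a full implementation would walk the points along the fitted line:

```python
# Merge the points that voted for one (rho, theta) peak into segments:
# consecutive points (in sorted order) are kept in the same segment whenever
# they are within max_gap of each other, per the grouping rule above.
def group_segments(points, max_gap):
    pts = sorted(points)
    segments, current = [], [pts[0]]
    for p in pts[1:]:
        (x1, y1), (x2, y2) = current[-1], p
        if ((x2 - x1) ** 2 + (y2 - y1) ** 2) ** 0.5 <= max_gap:
            current.append(p)
        else:
            segments.append(current)
            current = [p]
    segments.append(current)
    return segments
```

Short segments returned here can then be discarded against a minimum-length threshold, as the text suggests.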
Created by the author of the book I’m quoting from, it eliminates the use of trigonometric functions such as arctan by employing a different parametrization scheme. Both of the methods we’ve described employ abstract parameter spaces in which points bear no immediately obvious visual relation to image space. In this alternative scheme, the parameter space is a second image space, which is congruent to image space. This type of parameter space is obtained in the following way. First, each edge fragment in the image is produced much as required previously so that $\rho$ can be measured, but this time the foot of the normal from the origin is taken as a voting position in the parameter space. Taking $(x_0, y_0)$ as the foot of the normal from the origin to the relevant line, it is found that: \[b/a = y_0/x_0 \] \[(x-x_0)x_0 + (y-y_0)y_0 = 0 \] These two equations are sufficient to compute the two coordinates $(x_0, y_0)$. Solving for $x_0$ and $y_0$ gives: \[ x_0 = va \] \[y_0 = vb \] where \[ v = \frac{ax + by}{a^2 + b^2} \] Well, we’re done for now! It’s time to take a shower, then study regression, as I’m done with classification. I’m going to write a post about regression, stay tuned!
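The foot-of-normal formulas above translate directly into code. In this sketch, $(a, b)$ is the normal direction of the line through the edge point $(x, y)$ — i.e. the line $ax + by = \mathrm{const}$ — and the function name is mine:

```python
# Foot of the normal from the origin to the line through (x, y) with normal
# direction (a, b): (x0, y0) = v * (a, b), where v = (a*x + b*y) / (a^2 + b^2).
# No arctan needed, which is the whole point of the scheme.
def foot_of_normal(x, y, a, b):
    v = (a * x + b * y) / (a * a + b * b)
    return v * a, v * b
```

As a check, the line $x + y = 2$ (normal direction $(1, 1)$) passes through $(2, 0)$, and its foot of normal from the origin is $(1, 1)$.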
Higher order concentration of measure We study sharpened forms of the concentration of measure phenomenon, typically centered at stochastic expansions of order $d-1$ for any $d \in \mathbb{N}$. The bounds are based on $d$th-order derivatives or difference operators. In particular, we consider deviations of functions of independent random variables and differentiable functions over probability measures satisfying a logarithmic Sobolev inequality, and functions on the unit sphere. Applications include concentration inequalities for U-statistics as well as for classes of symmetric functions via polynomial approximations on the sphere (Edgeworth-type expansions). We find sufficient conditions for a probability measure $\mu$ to satisfy an inequality of the type $$ \int_{\R^d} f^2 F\Bigl(\frac{f^2}{\int_{\R^d} f^2 d \mu} \Bigr) d \mu \le C \int_{\R^d} f^2 c^{*}\Bigl(\frac{|\nabla f|}{|f|} \Bigr) d \mu + B \int_{\R^d} f^2 d \mu, $$ where $F$ is concave and $c$ (a cost function) is convex. We show that under broad assumptions on $c$ and $F$ the above inequality holds if for some $\delta>0$ and $\epsilon>0$ one has $$ \int_{0}^{\epsilon} \Phi\Bigl(\delta c\Bigl[\frac{t F(\frac{1}{t})}{{\mathcal I}_{\mu}(t)} \Bigr] \Bigr) dt < \infty, $$ where ${\mathcal I}_{\mu}$ is the isoperimetric function of $\mu$ and $\Phi = (y F(y) -y)^{*}$. In the partial case $${\mathcal I}_{\mu}(t) \ge k t \phi ^{1-\frac{1}{\alpha}} (1/t), $$ where $\phi$ is a concave function growing not faster than $\log$, $k>0$, $1 < \alpha \le 2$ and $t \le 1/2$, we establish a family of tight inequalities interpolating between the $F$-Sobolev and modified inequalities of log-Sobolev type. A basic example is given by convex measures satisfying certain integrability assumptions. A model for organizing cargo transportation between two node stations connected by a railway line which contains a certain number of intermediate stations is considered. The movement of cargo is in one direction.
Such a situation may occur, for example, if one of the node stations is located in a region which produces raw material for the manufacturing industry located in another region, where the other node station is. The organization of freight traffic is performed by means of a number of technologies. These technologies determine the rules for taking on cargo at the initial node station, the rules of interaction between neighboring stations, as well as the rule of distribution of cargo to the final node stations. The process of cargo transportation is governed by the set rule of control. For such a model, one must determine possible modes of cargo transportation and describe their properties. This model is described by a finite-dimensional system of differential equations with nonlocal linear restrictions. The class of solutions satisfying the nonlocal linear restrictions is extremely narrow. This results in the need for the “correct” extension of solutions of a system of differential equations to a class of quasi-solutions having the distinctive feature of gaps at a countable number of points. It was possible, numerically using the Runge–Kutta method of fourth order, to build these quasi-solutions and determine their rate of growth. Let us note that on the technical side the main complexity consisted in obtaining quasi-solutions satisfying the nonlocal linear restrictions. Furthermore, we investigated the dependence of quasi-solutions and, in particular, of the sizes of gaps (jumps) of solutions, on a number of parameters of the model characterizing the rule of control, the technologies for transportation of cargo, and the intensity of cargo supply at a node station. This proceedings publication is a compilation of selected contributions from the “Third International Conference on the Dynamics of Information Systems” which took place at the University of Florida, Gainesville, February 16–18, 2011.
The purpose of this conference was to bring together scientists and engineers from industry, government, and academia in order to exchange new discoveries and results in a broad range of topics relevant to the theory and practice of the dynamics of information systems. Dynamics of Information Systems: Mathematical Foundation presents state-of-the-art research and is intended for graduate students and researchers interested in some of the most recent discoveries in information theory and dynamical systems. Scientists in other disciplines may also benefit from the applications of new developments to their own area of study.
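Returning to the cargo model above: the quasi-solutions were built with the classical fourth-order Runge–Kutta method. A generic RK4 step for $y' = f(t, y)$ is sketched below; the model's actual right-hand side and its nonlocal restrictions are not reproduced here, so this is only the integration kernel such a computation would use:

```python
# One classical fourth-order Runge-Kutta step for y' = f(t, y).
# Works for scalar y; for systems, replace the arithmetic with vector ops.
def rk4_step(f, t, y, h):
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h / 2 * k1)
    k3 = f(t + h / 2, y + h / 2 * k2)
    k4 = f(t + h, y + h * k3)
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
```

On the test problem $y' = y$, $y(0) = 1$, one hundred steps of size $0.01$ reproduce $e$ to well under $10^{-6}$, the expected fourth-order accuracy.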
Search for top squark pair production in final states with one isolated lepton, jets, and missing transverse momentum in √s = 8 TeV pp collisions with the ATLAS detector (Springer, 2014-11) The results of a search for top squark (stop) pair production in final states with one isolated lepton, jets, and missing transverse momentum are reported. The analysis is performed with proton-proton collision data at √s ... Search for supersymmetry in events with large missing transverse momentum, jets, and at least one tau lepton in 20 fb−1 of √s = 8 TeV proton-proton collision data with the ATLAS detector (Springer, 2014-09-18) A search for supersymmetry (SUSY) in events with large missing transverse momentum, jets, at least one hadronically decaying tau lepton and zero or one additional light leptons (electron/muon), has been performed using ... Measurement of the top quark pair production charge asymmetry in proton-proton collisions at √s = 7 TeV using the ATLAS detector (Springer, 2014-02) This paper presents a measurement of the top quark pair ($t\bar{t}$) production charge asymmetry $A_C$ using 4.7 fb−1 of proton-proton collisions at a centre-of-mass energy √s = 7 TeV collected by the ATLAS detector at the LHC. ... Measurement of the low-mass Drell-Yan differential cross section at √s = 7 TeV using the ATLAS detector (Springer, 2014-06) The differential cross section for the process Z/γ∗ → ℓℓ (ℓ = e, μ) as a function of dilepton invariant mass is measured in pp collisions at √s = 7 TeV at the LHC using the ATLAS detector. The measurement is performed in ... Measurements of fiducial and differential cross sections for Higgs boson production in the diphoton decay channel at √s = 8 TeV with ATLAS (Springer, 2014-09-19) Measurements of fiducial and differential cross sections are presented for Higgs boson production in proton-proton collisions at a centre-of-mass energy of √s = 8 TeV. The analysis is performed in the H → γγ decay channel ...
Measurement of the inclusive jet cross-section in proton-proton collisions at \( \sqrt{s}=7 \) TeV using 4.5 fb−1 of data with the ATLAS detector (Springer, 2015-02-24) The inclusive jet cross-section is measured in proton-proton collisions at a centre-of-mass energy of 7 TeV using a data set corresponding to an integrated luminosity of 4.5 fb−1 collected with the ATLAS detector at the ... ATLAS search for new phenomena in dijet mass and angular distributions using pp collisions at $\sqrt{s}$=7 TeV (Springer, 2013-01) Mass and angular distributions of dijets produced in LHC proton-proton collisions at a centre-of-mass energy $\sqrt{s}$=7 TeV have been studied with the ATLAS detector using the full 2011 data set with an integrated ... Search for direct chargino production in anomaly-mediated supersymmetry breaking models based on a disappearing-track signature in pp collisions at $\sqrt{s}$=7 TeV with the ATLAS detector (Springer, 2013-01) A search for direct chargino production in anomaly-mediated supersymmetry breaking scenarios is performed in pp collisions at $\sqrt{s}$ = 7 TeV using 4.7 fb$^{-1}$ of data collected with the ATLAS experiment at the LHC. ... Search for heavy lepton resonances decaying to a $Z$ boson and a lepton in $pp$ collisions at $\sqrt{s}=8$ TeV with the ATLAS detector (Springer, 2015-09) A search for heavy leptons decaying to a $Z$ boson and an electron or a muon is presented. The search is based on $pp$ collision data taken at $\sqrt{s}=8$ TeV by the ATLAS experiment at the CERN Large Hadron Collider, ... Evidence for the Higgs-boson Yukawa coupling to tau leptons with the ATLAS detector (Springer, 2015-04-21) Results of a search for $H \to \tau \tau$ decays are presented, based on the full set of proton--proton collision data recorded by the ATLAS experiment at the LHC during 2011 and 2012. The data correspond to integrated ...
The Annals of Statistics Ann. Statist. Volume 11, Number 1 (1983), 104-113. The Generalised Problem of the Nile: Robust Confidence Sets for Parametric Functions Abstract The pivotal model is described and applied to the estimation of parametric functions $\phi(\theta)$. This leads to equations of the form $H(x; \theta) = G\{p(x, \theta)\}$. These can be solved directly or by the use of differential equations. Examples include various parametric functions $\phi(\theta, \sigma)$ in a general location-scale distribution $f(p), p = (x - \theta)/\sigma$ and in two location-scale distributions. The latter case includes the ratio of the two scale parameters $\sigma_1/\sigma_2$, the difference and ratio of the two location parameters $\theta_1 - \theta_2$ and the common location $\theta$ when $\theta_1 = \theta_2 = \theta$. The use of the resulting pivotals to make inferences is discussed along with their relation to examples of non-uniqueness occurring in the literature. Article information Source Ann. Statist., Volume 11, Number 1 (1983), 104-113. Dates First available in Project Euclid: 12 April 2007 Permanent link to this document https://projecteuclid.org/euclid.aos/1176346061 Digital Object Identifier doi:10.1214/aos/1176346061 Mathematical Reviews number (MathSciNet) MR684868 Zentralblatt MATH identifier 0514.62004 JSTOR links.jstor.org Citation Barnard, G. A.; Sprott, D. A. The Generalised Problem of the Nile: Robust Confidence Sets for Parametric Functions. Ann. Statist. 11 (1983), no. 1, 104--113. doi:10.1214/aos/1176346061. https://projecteuclid.org/euclid.aos/1176346061
Illinois Journal of Mathematics Illinois J. Math. Volume 59, Number 3 (2015), 801-817. A note on reduced and von Neumann algebraic free wreath products Abstract We study operator algebraic properties of the reduced and von Neumann algebraic versions of the free wreath products $\mathbb{G}\wr_{*}S_{N}^{+}$, where $\mathbb{G}$ is a compact matrix quantum group. Based on recent results on their corepresentation theory by Lemeux and Tarrago in [Lemeux and Tarrago (2014)], we prove that $\mathbb{G}\wr_{*}S_{N}^{+}$ is of Kac type whenever $\mathbb{G}$ is, and that the reduced version of $\mathbb{G}\wr_{*}S_{N}^{+}$ is simple with unique trace state whenever $N\geq8$. Moreover, we prove that the reduced von Neumann algebra of $\mathbb{G}\wr_{*}S_{N}^{+}$ does not have property $\Gamma$. Article information Source Illinois J. Math., Volume 59, Number 3 (2015), 801-817. Dates Received: 11 February 2016 Revised: 22 March 2016 First available in Project Euclid: 30 September 2016 Permanent link to this document https://projecteuclid.org/euclid.ijm/1475266409 Digital Object Identifier doi:10.1215/ijm/1475266409 Mathematical Reviews number (MathSciNet) MR3554234 Zentralblatt MATH identifier 1355.46056 Subjects Primary: 46L54: Free probability and free operator algebras Citation Wahl, Jonas. A note on reduced and von Neumann algebraic free wreath products. Illinois J. Math. 59 (2015), no. 3, 801--817. doi:10.1215/ijm/1475266409. https://projecteuclid.org/euclid.ijm/1475266409
Recall that a group $G$ is said to be solvable if $G$ has a subnormal series\[\{e\}=G_0 \triangleleft G_1 \triangleleft G_2 \triangleleft \cdots \triangleleft G_n=G\]such that the factor groups $G_i/G_{i-1}$ are all abelian groups for $i=1,2,\dots, n$. Proof. Since $18=2\cdot 3^2$, the number $n_3$ of Sylow $3$-subgroups is $1$ by the Sylow theorem. (Sylow’s theorem implies that $n_3 \equiv 1 \pmod{3}$ and that $n_3$ divides $2$.) Hence the unique Sylow $3$-subgroup $P$ is a normal subgroup of $G$.
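The counting step in the proof above — that $n_3 \equiv 1 \pmod{3}$ together with $n_3 \mid 2$ forces $n_3 = 1$ — can be checked mechanically. A small sketch (the helper below is my own, not part of the original proof):

```python
# List all values n_p allowed by Sylow's theorem for a group of the given
# order: n_p ≡ 1 (mod p) and n_p divides the index m of a Sylow p-subgroup.
def sylow_counts(order, p):
    m = order
    while m % p == 0:   # strip the p-part, leaving the index m
        m //= p
    return [n for n in range(1, order + 1) if n % p == 1 and m % n == 0]
```

For a group of order $18$ this returns `[1]`, so the Sylow $3$-subgroup is unique (hence normal); for order $12$ it returns `[1, 4]`, which is why the order-$12$ case needs a further argument.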
Quiz 12. Find Eigenvalues and their Algebraic and Geometric Multiplicities Problem 376 (a) Let \[A=\begin{bmatrix} 0 & 0 & 0 & 0 \\ 1 &1 & 1 & 1 \\ 0 & 0 & 0 & 0 \\ 1 & 1 & 1 & 1 \end{bmatrix}.\] Find the eigenvalues of the matrix $A$. Also give the algebraic multiplicity of each eigenvalue. (b) Let \[A=\begin{bmatrix} 0 & 0 & 0 & 0 \\ 1 &1 & 1 & 1 \\ 0 & 0 & 0 & 0 \\ 1 & 1 & 1 & 1 \end{bmatrix}.\] One of the eigenvalues of the matrix $A$ is $\lambda=0$. Find the geometric multiplicity of the eigenvalue $\lambda=0$. Solution (a). Eigenvalues of $A$ and their algebraic multiplicities Eigenvalues and their algebraic multiplicities are determined by the characteristic polynomial $p(t)$ of $A$. By definition, the characteristic polynomial of $A$ is $p(t)=\det(A-tI)$. We have \begin{align*} &p(t)=\det(A-tI)\\ &=\begin{vmatrix} -t & 0 & 0 & 0 \\ 1 &1-t & 1 & 1 \\ 0 & 0 & -t & 0 \\ 1 & 1 & 1 & 1-t \end{vmatrix}\\[6pt] &=-t\begin{vmatrix} 1-t & 1 & 1 \\ 0 &-t &0 \\ 1 & 1 & 1-t \end{vmatrix} && \text{by the first row cofactor expansion}\\[6pt] &=-t\left(\, -t\begin{vmatrix} 1-t & 1\\ 1& 1-t \end{vmatrix} \,\right)&& \text{by the second row cofactor expansion}\\[6pt] &=t^2\left(\, (1-t)^2-1 \,\right)\\ &=t^2(t^2-2t)\\ &=t^3(t-2). \end{align*} Thus the characteristic polynomial is \[p(t)=t^3(t-2).\] From this, the eigenvalues of $A$ are $0$ and $2$ with algebraic multiplicities $3$ and $1$, respectively. Solution (b). Geometric multiplicity We give two solutions for part (b). First Solution (b). (Finding the rank first) Recall that the geometric multiplicity of $\lambda$ is the dimension of the eigenspace $E_{\lambda}=\calN(A-\lambda I)$. That is, the geometric multiplicity of $\lambda$ is the nullity of the matrix $A-\lambda I$. Let us now consider the case $\lambda=0$. We first find the rank of $A-0 I=A$ as follows.
\begin{align*} A-0 I= A=\begin{bmatrix} 0 & 0 & 0 & 0 \\ 1 &1 & 1 & 1 \\ 0 & 0 & 0 & 0 \\ 1 & 1 & 1 & 1 \end{bmatrix} \xrightarrow{R_4-R_2} \begin{bmatrix} 0 & 0 & 0 & 0 \\ 1 &1 & 1 & 1 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{bmatrix} \xrightarrow{R_1 \leftrightarrow R_2} \begin{bmatrix} 1 &1 & 1 & 1\\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{bmatrix}. \end{align*} The last matrix is in reduced row echelon form. Hence the rank of $A$ is $1$. The rank-nullity theorem says that \[\text{rank of $A$ } + \text{ nullity of $A$}=4.\] Thus, the nullity of $A=A-0I$ is $3$, and hence the geometric multiplicity of $\lambda=0$ is $3$. Second Solution (b). (Finding a basis of the eigenspace) In this solution, we find a basis of the eigenspace $E_0$. By definition $E_0=\calN(A-0I)=\calN(A)$. Thus, the eigenspace $E_0$ is the null space of the matrix $A$. We solve the equation $A\mathbf{x}=\mathbf{0}$ as follows. The augmented matrix of this equation is \begin{align*} [A\mid \mathbf{0}]= \left[\begin{array}{rrrr|r} 0 & 0 & 0 & 0 &0 \\ 1 &1 & 1 & 1 &0 \\ 0 & 0 & 0 & 0 &0\\ 1 & 1 & 1 & 1 &0 \end{array} \right] \xrightarrow{R_4-R_2} \left[\begin{array}{rrrr|r} 0 & 0 & 0 & 0 &0 \\ 1 &1 & 1 & 1 &0 \\ 0 & 0 & 0 & 0 &0\\ 0 & 0 & 0 & 0 &0 \end{array} \right] \xrightarrow{R_1 \leftrightarrow R_2} \left[\begin{array}{rrrr|r} 1 &1 & 1 & 1 &0 \\ 0 & 0 & 0 & 0 &0 \\ 0 & 0 & 0 & 0 &0\\ 0 & 0 & 0 & 0 &0 \end{array} \right]. \end{align*} Hence the solution satisfies \[x_1=-x_2-x_3-x_4\] and the general solution is \begin{align*} \mathbf{x}=\begin{bmatrix} -x_2-x_3-x_4 \\ x_2 \\ x_3 \\ x_4 \end{bmatrix} =x_2\begin{bmatrix} -1 \\ 1 \\ 0 \\ 0 \end{bmatrix}+x_3\begin{bmatrix} -1 \\ 0 \\ 1 \\ 0 \end{bmatrix}+x_4 \begin{bmatrix} -1 \\ 0 \\ 0 \\ 1 \end{bmatrix}. 
\end{align*} Therefore, the eigenspace is \begin{align*} &E_0=\calN(A)\\ &=\left\{\, \mathbf{x}\in \C^4 \quad \middle | \quad \mathbf{x}=x_2\begin{bmatrix} -1 \\ 1 \\ 0 \\ 0 \end{bmatrix}+x_3\begin{bmatrix} -1 \\ 0 \\ 1 \\ 0 \end{bmatrix}+x_4 \begin{bmatrix} -1 \\ 0 \\ 0 \\ 1 \end{bmatrix}, \text{ for any } x_2, x_3, x_4\in \C \,\right\}\\[10pt] &=\Span\left\{\, \begin{bmatrix} -1 \\ 1 \\ 0 \\ 0 \end{bmatrix}, \begin{bmatrix} -1 \\ 0 \\ 1 \\ 0 \end{bmatrix}, \begin{bmatrix} -1 \\ 0 \\ 0 \\ 1 \end{bmatrix} \,\right\}. \end{align*} Thus the set \[\left\{\, \begin{bmatrix} -1 \\ 1 \\ 0 \\ 0 \end{bmatrix}, \begin{bmatrix} -1 \\ 0 \\ 1 \\ 0 \end{bmatrix}, \begin{bmatrix} -1 \\ 0 \\ 0 \\ 1 \end{bmatrix} \,\right\}\] is a spanning set of $E_0$, and it is straightforward to check that the set is linearly independent. Hence this set is a basis of $E_0$, and the dimension of $E_0$ is $3$. The geometric multiplicity of $\lambda=0$ is the dimension of $E_0$ by definition. Thus, the geometric multiplicity of $\lambda=0$ is $3$. Comment. These are Quiz 12 problems for Math 2568 (Introduction to Linear Algebra) at OSU in Spring 2017. I could have combined these two problems into one problem, and just asked to find eigenvalues and algebraic/geometric multiplicities for each eigenvalue. The reason I didn't do so is that I wanted to rescue students who didn't get a correct answer in part (a) for some reason. Also, since one of the eigenvalues is given in (b), students could use this information to double-check their solutions in (a). (At least if you didn't get the eigenvalue $0$, you made a mistake somewhere.) List of Quiz Problems of Linear Algebra (Math 2568) at OSU in Spring 2017 There were 13 weekly quizzes. Here is the list of links to the quiz problems and solutions. Quiz 1. Gauss-Jordan elimination / homogeneous system. Quiz 2. The vector form for the general solution / Transpose matrices. Quiz 3.
Condition that vectors are linearly dependent/ orthogonal vectors are linearly independent Quiz 4. Inverse matrix/ Nonsingular matrix satisfying a relation Quiz 5. Example and non-example of subspaces in 3-dimensional space Quiz 6. Determine vectors in null space, range / Find a basis of null space Quiz 7. Find a basis of the range, rank, and nullity of a matrix Quiz 8. Determine subsets are subspaces: functions taking integer values / set of skew-symmetric matrices Quiz 9. Find a basis of the subspace spanned by four matrices Quiz 10. Find orthogonal basis / Find value of linear transformation Quiz 11. Find eigenvalues and eigenvectors/ Properties of determinants Quiz 12. Find eigenvalues and their algebraic and geometric multiplicities Quiz 13 (Part 1). Diagonalize a matrix. Quiz 13 (Part 2). Find eigenvalues and eigenvectors of a special matrix
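As a quick numerical sanity check of the quiz solution above (not part of the original post), NumPy can confirm both the eigenvalues with their algebraic multiplicities and, via the rank-nullity theorem, the geometric multiplicity of $\lambda=0$:

```python
# Verify: eigenvalues of A are 0 (algebraic multiplicity 3) and 2
# (multiplicity 1), and rank(A) = 1, so the geometric multiplicity
# of lambda = 0 is 4 - rank(A) = 3 by the rank-nullity theorem.
import numpy as np

A = np.array([[0, 0, 0, 0],
              [1, 1, 1, 1],
              [0, 0, 0, 0],
              [1, 1, 1, 1]], dtype=float)

eigvals = np.sort(np.linalg.eigvals(A).real)
print(eigvals)                                # three zeros and a 2

rank = np.linalg.matrix_rank(A)
geometric_multiplicity = A.shape[0] - rank    # nullity of A
print(rank, geometric_multiplicity)
```

This matches both solutions of part (b): the rank is 1, so the nullity (and hence the geometric multiplicity of $\lambda=0$) is 3.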
Computational Aerodynamics Questions & Answers I'm glad to hear that. Because your post may help others, I'll give you 2 points bonus boost. I corrected it. Both the integral form and the differential form can be used in CFD, but we can derive the integral form by integrating the differential form over a volume. We'll get to this at one point. Interesting question: I'll give 2 points bonus boost. The following is always correct: $$ \frac { \partial ( {\frac {1} {2}} {\phi^2} ) } { \partial \xi} = {\phi} {\frac {\partial \phi} {\partial \xi}} $$ where $\phi$ is any property and $\xi$ can be $x$, $y$, $t$, or any coordinate. It doesn't matter if $\phi$ is $v_x$ or $t$, the above is a mathematical transformation, not a physical one. Not a bad question, I'll give you 1.5 points bonus.
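The identity quoted above is just the chain rule applied to $\tfrac{1}{2}\phi^2$. A quick finite-difference sanity check, using $\phi(\xi)=\sin\xi$ as an arbitrary test function (my choice, purely for illustration):

```python
# Check d/dxi ( phi^2 / 2 ) = phi * dphi/dxi numerically,
# with phi(xi) = sin(xi), so dphi/dxi = cos(xi).
import math

def phi(xi):
    return math.sin(xi)

xi, h = 0.7, 1e-6
# central difference of (1/2) phi^2
lhs = (0.5 * phi(xi + h)**2 - 0.5 * phi(xi - h)**2) / (2 * h)
# phi * dphi/dxi evaluated exactly
rhs = phi(xi) * math.cos(xi)
print(abs(lhs - rhs))   # agrees to within finite-difference error
```

The same check works with any smooth choice of $\phi$, which is the point being made: the identity is purely mathematical.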
Hello all, I have a multi-stage stochastic problem and I am employing multi-cut Benders decomposition to solve it. The objective function is that of a typical planning problem, essentially the summation of expected investment and operation cost across all scenario tree nodes. A node-variable formulation of the non-decomposed problem is as follows: \(z = \min \sum_{m=1}^{M} \pi_{m}\left ( cx_{m} + qy_{m} \right )\), where \(x_{m}\) is the investment decision at scenario tree node \(m\), \(y_{m}\) is the operation decision at scenario tree node \(m\), and \(\pi_{m}\) is the probability of occurrence of scenario tree node \(m\). When moving all operation variables to the sub-problem, the master problem objective function is: \(z = \min \sum_{m=1}^{M} \pi_{m}\left ( cx_{m} + \alpha_{m} \right )\), where \(\alpha_{m}\) is the sub-problem approximation subject to the appended Benders cuts. The cuts to be appended at iteration \(v\) are: \(\alpha_{m} \geq h_{m}^{v-1} + \lambda_{m}^{v-1} ( x_{m}-x_{m}^{v-1} ), \forall m\), where \(h_{m}^{v-1}\) is the optimal sub-problem solution from the previous iteration and \(\lambda_{m}^{v-1}\) is the dual multiplier from the previous iteration. Currently, I am looking to move from an expected cost decision criterion to minimization of the maximum regret experienced. My non-decomposed formulation is: \(z = \min r \) \(r \geq \sum_{m \epsilon \Omega_{s}}\left ( cx_{m} + qy_{m} \right ) - d_{s}, \forall s\), where \(\Omega_{s}\) is the set of scenario tree nodes belonging to scenario \(s\), and \(d_{s}\) is the optimal cost when scenario \(s\) is known in advance, so the right-hand side is the regret of scenario \(s\). The above approach works fine. However, I am having difficulty understanding how a Benders decomposition scheme (again, separating \(x\) and \(y\)) could be applied to speed up convergence. According to [1], a straightforward implementation of Benders should be possible, but I am not certain. Thank you in advance for your input! ikonikon [1] Gorenstin, B.G.; Campodonico, N.M.; Costa, J.P.; Pereira, M.V.F., "Power system expansion planning under uncertainty," IEEE Transactions on Power Systems, vol. 8, no. 1, pp. 129-136, Feb. 1993.
I don't do stochastic models, but it seems to me your decomposition would be essentially the same. Rewrite the master as \(\min r \ni r \ge \sum_{m\in\Omega_{s}}\left(cx_{m}+\alpha_{m}\right)-d_{s}\,\,\forall s\), then minimize \(qy_{m}\) given \(x_{m}\) in a subproblem and, if \(\alpha_{m}\) underestimates it, add the optimality (point) cut you got before. Convergence would be an optimal master solution that did not generate any new Benders cuts.
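To make the cut-loop mechanics concrete, here is a deliberately tiny, deterministic sketch (one node, one continuous investment variable, all numbers invented). The subproblem min \(qy\) s.t. \(y \ge d - x\), \(y \ge 0\) has the closed-form value \(h(x) = q\max(0, d-x)\) with subgradient \(-q\) where \(x < d\), and the master LP is replaced by a crude grid search purely for illustration — a real implementation would use an LP solver for both problems:

```python
# Toy Benders loop for:  min  c*x + q*y   s.t.  y >= d - x,  0 <= x <= x_max,  y >= 0.
# Subproblem value h(x) = q*max(0, d - x) and a subgradient lam are closed-form here.
c, q, d, x_max = 1.0, 2.0, 5.0, 10.0   # made-up data
cuts = []   # list of (h_k, lam_k, x_k): each cut reads  alpha >= h_k + lam_k*(x - x_k)

def solve_master(cuts, n=2001):
    """Minimize c*x + alpha over a grid, alpha bounded below by the cuts."""
    best = None
    for i in range(n):
        x = x_max * i / (n - 1)
        # alpha >= 0 is a valid initial bound here: the subproblem cost is nonnegative
        alpha = max([0.0] + [h + lam * (x - xk) for h, lam, xk in cuts])
        val = c * x + alpha
        if best is None or val < best[0]:
            best = (val, x, alpha)
    return best

for iteration in range(20):
    lower, x, alpha = solve_master(cuts)   # lower bound on the true optimum
    h = q * max(0.0, d - x)                # subproblem optimum at this x
    lam = -q if x < d else 0.0             # subgradient of h at this x
    upper = c * x + h                      # cost of the current feasible plan
    if upper - lower < 1e-6:               # alpha matches the subproblem: converged
        break
    cuts.append((h, lam, x))

print(x, upper)   # converges to x = 5, total cost = 5
```

Convergence is exactly the criterion described above: the master stops generating new cuts once \(\alpha\) equals the true subproblem value at the master's optimum.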
Simulation Tools for Solving Wave Electromagnetics Problems When solving wave electromagnetics problems with either the RF or Wave Optics modules, we use the finite element method to solve the governing Maxwell's equations. In this blog post, we will look at the various modeling, meshing, solving, and postprocessing options available to you and when you should use them. The Governing Equation for Modeling Frequency Domain Wave Electromagnetics Problems COMSOL Multiphysics uses the finite element method to solve for the electromagnetic fields within the modeling domains. Under the assumption that the fields vary sinusoidally in time at a known angular frequency \omega = 2 \pi f and that all material properties are linear with respect to field strength, the governing Maxwell's equations in three dimensions reduce to: \[ \nabla \times \left( \mu_r^{-1} \nabla \times \mathbf{E} \right) - k_0^2 \left( \epsilon_r - \frac{j \sigma}{\omega \epsilon_0} \right) \mathbf{E} = 0, \qquad k_0 = \frac{\omega}{c_0} \] With the speed of light in vacuum, c_0, the above equation is solved for the electric field, \mathbf{E}=\mathbf{E}(x,y,z), throughout the modeling domain, where \mathbf{E} is a vector with components E_x, E_y, and E_z. All other quantities (such as magnetic fields, currents, and power flow) can be derived from the electric field. It is also possible to reformulate the above equation as an eigenvalue problem, where a model is solved for the resonant frequencies of the system, rather than the response of the system at a particular frequency. The above equation is solved via the finite element method. For a conceptual introduction to this method, please see our blog series on the weak form, and for a more in-depth reference, which will explain the nuances related to electromagnetic wave problems, please see The Finite Element Method in Electromagnetics by Jian-Ming Jin.
From the point of view of this blog post, however, we can break down the finite element method into these four steps: Model Set-Up: Defining the equations to solve, creating the model geometry, defining the material properties, setting up metallic and radiating boundaries, and connecting the model to other devices. Meshing: Discretizing the model space using finite elements. Solving: Solving a set of linear equations that describe the electric fields. Postprocessing: Extracting useful information from the computed electric fields. Let's now look at each one of these steps in more detail and describe the options available at each step. Options for Modifying the Governing Equations The governing equation shown above is the frequency domain form of Maxwell's equations for wave-type problems in its most general form. However, this equation can be reformulated for several special cases. Let us first consider the case of a modeling domain in which there is a known background electric field and we wish to place some object into this background field. The background field can be a linearly polarized plane wave, a Gaussian beam, or any general user-defined beam that satisfies Maxwell's equations in free space. Placing an object into this field will perturb the field and lead to scattering of the background field. In such a situation, you can use the Scattered Field formulation, which solves the above equation, but makes the following substitution for the electric field: \[ \mathbf{E} = \mathbf{E}_{background} + \mathbf{E}_{relative} \] where the background electric field is known and the relative field is the field that, once added to the background field, gives the total field that satisfies the governing Maxwell's equations. Rather than solving for the total field, it is the relative field that is being solved for. Note that the relative field is not the scattered field.
For an example of the usage of this Scattered Field formulation, which considers the radar scattering off of a perfectly electrically conductive sphere in a background plane wave and compares it to the analytic solution, please see our Computing the Radar Cross Section of a Perfectly Conducting Sphere tutorial model. Next, let's consider modeling in a 2D plane, where we solve for \mathbf{E}=\mathbf{E}(x,y) and can additionally simplify the modeling by considering an electric field that is polarized either In-Plane or Out-of-Plane. The In-Plane case will assume that E_z=0, while the Out-of-Plane case assumes that E_x=E_y=0. These simplifications reduce the size of the problem being solved, compared to solving for all three components of the electric field vector. For modeling in the 2D axisymmetric plane, we solve for \mathbf{E}=\mathbf{E}(r,z), where the vector \mathbf{E} has the components E_r, E_\phi, and E_z. We can again simplify our modeling by considering the In-Plane and Out-of-Plane cases, which assume E_\phi=0 and E_r=E_z=0, respectively. When using either the 2D or the 2D axisymmetric In-Plane formulations, it is also possible to specify an Out-of-Plane Wave Number. This is appropriate to use when there is a known out-of-plane propagation constant, or known number of azimuthal modes. For 2D problems, the electric field can be rewritten as: \[ \mathbf{E}(x,y,z) = \tilde{\mathbf{E}}(x,y) e^{-i k_z z} \] and for 2D axisymmetric problems, the electric field can be rewritten as: \[ \mathbf{E}(r,\phi,z) = \tilde{\mathbf{E}}(r,z) e^{-i m \phi} \] where k_z or m, the out-of-plane wave number, must be specified. This modeling approach can greatly simplify the computational complexity for some types of models. For example, a structurally axisymmetric horn antenna will have a solution that varies in 3D but is composed of a sum of known azimuthal modes. It is possible to recover the 3D solution from a set of 2D axisymmetric analyses by solving for these out-of-plane modes at a much lower computational cost, as demonstrated in our Corrugated Circular Horn Antenna tutorial model.
Meshing Requirements and Capabilities Whenever solving a wave electromagnetics problem, you must keep in mind the mesh resolution. Any wave-type problem must have a mesh that is fine enough to resolve the wavelengths in all media being modeled. This idea is fundamentally similar to the concept of the Nyquist frequency in signal processing: The sampling size (the finite element mesh size) must be less than one-half of the wavelength being resolved. By default, COMSOL Multiphysics uses second-order elements to discretize the governing equations. A minimum of two elements per wavelength is necessary to solve the problem, but such a coarse mesh would give quite poor accuracy. At least five second-order elements per wavelength are typically used to resolve a wave propagating through a dielectric medium. First-order and third-order discretization is also available, but these are generally of more academic interest, since the second-order elements tend to be the best compromise between accuracy and memory requirements. The meshing of domains to fulfill the minimum criterion of five elements per wavelength in each medium is now automated within the software, as shown in this video, which shows not only the meshing of different dielectric domains, but also the automated meshing of Perfectly Matched Layer domains. The new automated meshing capability will also set up an appropriate periodic mesh for problems with periodic boundary conditions, as demonstrated in this Frequency Selective Surface, Periodic Complementary Split Ring Resonator tutorial model. With respect to the type of elements used, tetrahedral (in 3D) or triangular (in 2D) elements are preferred over hexahedral and prismatic (in 3D) or rectangular (in 2D) elements due to their lower dispersion error.
This is a consequence of the fact that the maximum distance within an element is approximately the same in all directions for a tetrahedral element, but for a hexahedral element, the ratio of the longest to the shortest line that fits within a perfect cubic element is \sqrt{3}. This leads to greater error when resolving the phase of a wave traveling diagonally through a hexahedral element. It is only necessary to use hexahedral, prismatic, or rectangular elements when you are meshing a perfectly matched layer or have some foreknowledge that the solution is strongly anisotropic in one or two directions. When resolving a wave that is decaying due to absorption in a material, such as a wave impinging upon a lossy medium, it is additionally necessary to manually resolve the skin depth with the finite element mesh, typically using a boundary layer mesh, as described here. Manual meshing is still recommended, and usually needed, for cases when the material properties will vary during the simulation. For example, during an electromagnetic heating simulation, the material properties can be made functions of temperature. This possible variation in material properties should be considered before the solution, during the meshing step, as it is often more computationally expensive to remesh during the solution than to start with a mesh that is fine enough to resolve the eventual variations in the fields. This can require a manual and iterative approach to meshing and solving. When solving over a wide frequency band, you can consider one of three options: Solve over the entire frequency range using a mesh that will resolve the shortest wavelength (highest frequency) case. This avoids any computational cost associated with remeshing, but you will use an overly fine mesh for the lower frequencies. Remesh at each frequency, using the parametric solver.
This is an attractive option if your increments in frequency space are quite widely spaced, and if the meshing cost is relatively low. Use different meshes in different frequency bands. This will reduce the meshing cost, and keep the solution cost relatively low. It is essentially a combination of the above two approaches, but requires the most user effort. It is difficult to determine ahead of time which of the above three options will be the most efficient for a particular model. Regardless of the initial mesh that you use, you will also always want to perform a mesh refinement study. That is, re-run the simulation with progressively finer meshes and observe how the solution changes. As you make the mesh finer, the solution will become more accurate, but at a greater computational cost. It is also possible to use adaptive mesh refinement if your mesh is composed entirely of tetrahedral or triangular elements. Solver Options Once you have properly defined the problem and meshed your domains, COMSOL Multiphysics will take this information and form a system of linear equations, which are solved using either a direct or iterative solver. These solvers differ only in their memory requirements and solution time, but there are several options that can make your modeling more efficient, since 3D electromagnetics models will often require a lot of RAM to solve. The direct solvers will require more memory than the iterative solvers. They are used for problems with periodic boundary conditions, eigenvalue problems, and for all 2D models. Problems with periodic boundary conditions do require the use of a direct solver, and the software will automatically do so in such cases. Eigenvalue problems will solve faster when using a direct solver as compared to using an iterative solver, but will use more memory. 
For this reason, it can often be attractive to reformulate an eigenvalue problem as a frequency domain problem excited over a range of frequencies near the approximate resonances. By solving in the frequency domain, it is possible to use the more memory-efficient iterative solvers. However, for systems with high Q-factors it becomes necessary to solve at many points in frequency space. For an example of reformulating an eigenvalue problem as a frequency domain problem, please see these examples of computing the Q-factor of an RF coil and the Q-factor of a Fabry-Perot cavity. The iterative solvers used for frequency-domain simulations come with three different options defined by the Analysis Methodology settings of Robust (the default), Intermediate, or Fast, and can be changed within the physics interface settings. These different settings alter the type of iterative solver being used and the convergence tolerance. Most models will solve with any of these settings, and it can be worth comparing them to observe the differences in solution time and accuracy and choose the option most appropriate for your needs. Models that contain materials that have very large contrasts in the dielectric constants (~100:1) will need the Robust setting and may even require the use of the direct solver, if the iterative solver convergence is very slow. Postprocessing Capabilities Once you’ve solved your model, you will want to extract data from the computed electromagnetic fields. COMSOL Multiphysics will automatically produce a slice plot of the magnitude of the electric field, but there are many other postprocessing visualizations you can set up. Please see the Postprocessing & Visualization Handbook and our blog series on Postprocessing for guidance and to learn how to create images such as those shown below. Attractive visualizations can be created by plotting combinations of the solution fields, meshes, and geometry. 
Of course, good-looking images are not enough — we also want to extract numerical information from our models. COMSOL Multiphysics will automatically make available the S-parameters whenever using Ports or Lumped Ports, as well as the Lumped Port current, voltage, power, and impedance. For a model with multiple Ports or Lumped Ports, it is also possible to automatically set up a Port Sweep, as demonstrated in this tutorial model of a Ferrite Circulator, and write out a Touchstone file of the results. For eigenvalue problems, the resonant frequencies and Q-factors are automatically computed. For models of antennas or for scattered field models, it is additionally possible to compute and plot the far-field radiated pattern, the gain, and the axial ratio. Far-field radiation pattern of a Vivaldi antenna. You can also integrate any derived quantity over domains, boundaries, and edges to compute, for example, the heat dissipated inside of lossy materials or the total electromagnetic energy within a cavity. Of course, there is a great deal more that you can do, and here we have just looked at the most commonly used postprocessing features. Summary of Wave Electromagnetics Simulation Tools We’ve looked at the various different formulations of the governing frequency domain form of Maxwell’s equations as applied to solving wave electromagnetics problems and when they should be used. The meshing requirements and capabilities have been discussed as well as the options for solving your models. You should also have a broad overview of the postprocessing functionality and where to go for more information about visualizing your data in COMSOL Multiphysics. 
This information, along with the previous blog posts on defining the material properties, setting up metallic and radiating boundaries, and connecting the model to other devices should now give you a reasonably complete picture of what can be done with frequency domain electromagnetic wave modeling in the RF and Wave Optics modules. The software documentation, of course, goes into greater depth about all of the features and capabilities within the software. If you are interested in using the RF or Wave Optics modules for your modeling needs, please contact us.
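The two meshing rules of thumb discussed above — at least five second-order elements per wavelength, and a boundary-layer mesh resolving the skin depth in lossy conductors — can be turned into quick back-of-the-envelope estimates. The formulas below are the standard textbook ones, and the function names and example numbers are my own illustration, not COMSOL-specific:

```python
# Rule-of-thumb estimates for meshing a wave electromagnetics model:
#   max element size  h = lambda / 5 = c0 / (5 * f * n)     (5 elements/wavelength)
#   skin depth        delta = sqrt(2 / (omega * mu * sigma)) (good conductors)
import math

c0 = 299_792_458.0        # speed of light in vacuum, m/s
mu0 = 4e-7 * math.pi      # vacuum permeability, H/m

def max_element_size(f_hz, n_refr=1.0, elems_per_wavelength=5):
    wavelength = c0 / (f_hz * n_refr)     # wavelength in the medium
    return wavelength / elems_per_wavelength

def skin_depth(f_hz, sigma, mu_r=1.0):
    omega = 2 * math.pi * f_hz
    return math.sqrt(2.0 / (omega * mu_r * mu0 * sigma))

# Example: a 10 GHz wave in a dielectric with refractive index n = 2,
# and the skin depth in copper (sigma ~ 5.8e7 S/m) at the same frequency.
print(max_element_size(10e9, n_refr=2.0))   # ~3 mm maximum element size
print(skin_depth(10e9, 5.8e7))              # sub-micrometre skin depth
```

The contrast between the two numbers is exactly why a boundary-layer mesh is needed at conductor surfaces: the skin depth is orders of magnitude smaller than the bulk element size.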
In this chapter we'll introduce the last few concepts we need from deductive logic, and we'll learn a useful technique in the process: truth tables. Complex propositions can be built up out of other, simpler propositions. Here we've used two simple propositions to build up longer, more complex ones using the terms and, either/or, and it's not true that. Such terms are called connectives. (Notice that we call it's not true that a connective even though it doesn't actually connect two propositions together.) The three connectives just listed are the only ones we'll need in this book. Each has a name and a shorthand symbol:

Name | English | Symbol | Example
conjunction | and | \(\wedge\) | \(A \wedge B\)
disjunction | either/or | \(\vee\) | \(A \vee B\)
negation | it's not true that | \(\neg\) | \(\neg A\)

Here are some more examples of complex propositions: Sometimes we also need parentheses, to avoid ambiguity. Consider an example from arithmetic: \[ 4 \div 2 \times 2 = 1. \] Is this equation true? That depends on what you mean. Does the division operation come first, or the multiplication? So we use parentheses to clarify: \(4 \div (2 \times 2) = 1\), but \((4 \div 2) \times 2 = 4\). In logic we use parentheses to prevent ambiguity similarly. Consider: \[ A \vee B \wedge C. \] This proposition is ambiguous: it has two interpretations. In English we can distinguish them with a comma: "Either Aegon is a tyrant or Brandon is a wizard, and Cerci is the queen," versus "Either Aegon is a tyrant, or Brandon is a wizard and Cerci is the queen." Notice how these statements make different claims. The first takes a definite stand on Cerci: she is the queen. It only leaves open the question whether Aegon is a tyrant or Brandon is a wizard. Whereas the second statement takes no definite stand on any of our three characters. Maybe Aegon is a tyrant, maybe not. Maybe Brandon is a wizard and Cerci is the queen, maybe not. In logic we use parentheses to clarify which interpretation we mean: \((A \vee B) \wedge C\) for the first, and \(A \vee (B \wedge C)\) for the second. Notice how the first statement is primarily an \(\wedge\) statement. It uses \(\wedge\) to combine the simpler statements \(C\) and \(A \vee B\) together.
Whereas the second statement is primarily a \(\vee\) statement. It uses \(\vee\) to combine \(A\) with \(B \wedge C\). We call the last connective used to build up the statement the main connective. Two more examples: Technically, the last example should have parentheses to prevent ambiguity, like so: \((\neg A) \vee B\). But things get cluttered and hard to read if we add parentheses around every negation. So we have a special understanding for \(\neg\) in order to keep things tidy. This special understanding for \(\neg\) mirrors the one for minus signs in arithmetic. The negation symbol \(\neg\) only applies to the proposition immediately following it. So in the proposition \(\neg A \vee B\), the \(\neg\) only applies to \(A\). And in \(\neg (A \wedge B) \vee C\), it only applies to \(A \wedge B\). The truth of a complex proposition built using our three connectives depends on the truth of its components. For example, \(\neg A\) is false if \(A\) is true, and it's true if \(A\) is false:

Table 3.1: Truth table for \(\neg\)
\(A\) | \(\neg A\)
T | F
F | T

Slightly more complicated is the rule for \(\wedge\):

Table 3.2: Truth table for \(\wedge\)
\(A\) | \(B\) | \(A \wedge B\)
T | T | T
T | F | F
F | T | F
F | F | F

There are four rows now because \(\wedge\) combines two propositions \(A\) and \(B\) together to make the more complex proposition \(A \wedge B\). Since each of those propositions could be either true or false, there are \(2 \times 2 = 4\) possible situations to consider. Notice that in only one of these situations is \(A \wedge B\) true, namely the first row where both \(A\) and \(B\) are true. The truth table for \(\vee\) ("either/or") is a little more surprising:

Table 3.3: Truth table for \(\vee\)
\(A\) | \(B\) | \(A \vee B\)
T | T | T
T | F | T
F | T | T
F | F | F

Now the complex proposition is always true, except in one case: the last row where \(A\) and \(B\) are both false. It makes sense that \(A \vee B\) is false when both sides are false. But why is it true when both sides are true?
Doesn't "Either \(A\) or \(B\)" mean that just one of these is true? Sometimes it does have that meaning. But sometimes it means "Either A or B, or both". Consider this exchange: X: What are you doing tomorrow night? Y: I'm either going to a friend's house or out to a club. I might even do both, if there's time. Person Y isn't necessarily changing their mind here. They could just be clarifying: they're doing at least one of these things, possibly even both of them. Although it's common to use "either/or" in English to mean just one or the other, in logic we use the more permissive reading. So \(A \vee B\) means either \(A\), or \(B\), or both. We can always convey the stricter way of meaning "either/or" with a more complex construction: \[(A \vee B) \wedge \neg (A \wedge B).\] That says: \[ \mbox{Either $A$ or $B$ is true, and it's not the case that both $A$ and $B$ are true}.\] Which is just a very explicit way of saying: either one or the other, but not both. We can even verify that the complex construction captures the meaning we want using a truth table. We start with an empty table, where the header lists all the formulas we use to build up to the final, complex one we're interested in:

\(A\) | \(B\) | \(A \vee B\) | \(A \wedge B\) | \(\neg(A \wedge B)\) | \((A \vee B) \wedge \neg (A \wedge B)\)

Then we fill in the possible truth values for the simplest propositions, \(A\) and \(B\):

\(A\) | \(B\) | \(A \vee B\) | \(A \wedge B\) | \(\neg(A \wedge B)\) | \((A \vee B) \wedge \neg (A \wedge B)\)
T | T | | | |
T | F | | | |
F | T | | | |
F | F | | | |

Next we consult the truth tables above for \(\wedge\) and \(\vee\) to fill in the columns at the next level of complexity:

\(A\) | \(B\) | \(A \vee B\) | \(A \wedge B\) | \(\neg(A \wedge B)\) | \((A \vee B) \wedge \neg (A \wedge B)\)
T | T | T | T | |
T | F | T | F | |
F | T | T | F | |
F | F | F | F | |

Then move up to the next level of complexity.
To fill in the column for \(\neg(A \wedge B)\), we consult the column for \(A \wedge B\) and apply the rules from the table for \(\neg\):

\(A\) | \(B\) | \(A \vee B\) | \(A \wedge B\) | \(\neg(A \wedge B)\) | \((A \vee B) \wedge \neg (A \wedge B)\)
T | T | T | T | F |
T | F | T | F | T |
F | T | T | F | T |
F | F | F | F | T |

Finally, we consult the columns for \(A \vee B\) and \(\neg(A \wedge B)\), and the table for \(\wedge\), to fill in the column for \((A \vee B) \wedge \neg(A \wedge B)\):

\(A\) | \(B\) | \(A \vee B\) | \(A \wedge B\) | \(\neg(A \wedge B)\) | \((A \vee B) \wedge \neg (A \wedge B)\)
T | T | T | T | F | F
T | F | T | F | T | T
F | T | T | F | T | T
F | F | F | F | T | F

Complex constructions like this are difficult at first, but don't worry. With practice they quickly become routine. Some propositions come out true in every row of the truth table. Consider \(A \vee \neg A\) for example:

\(A\) | \(\neg A\) | \(A \vee \neg A\)
T | F | T
F | T | T

Such propositions are especially interesting because they must be true. Their truth is guaranteed, just as a matter of logic. So we call them logical truths. The other side of this coin is propositions that are false in every row of the truth table, like \(A \wedge \neg A\):

\(A\) | \(\neg A\) | \(A \wedge \neg A\)
T | F | F
F | T | F

These propositions are called contradictions. Notice that the negation of a contradiction is a logical truth. For example, consider the truth table for \(\neg (A \wedge \neg A)\):

\(A\) | \(\neg A\) | \(A \wedge \neg A\) | \(\neg (A \wedge \neg A)\)
T | F | F | T
F | T | F | T

Truth tables can be used to establish that two propositions are mutually exclusive. A very simple example is the propositions \(A\) and \(\neg A\):

\(A\) | \(\neg A\)
T | F
F | T

There is no row in the table where both propositions are true. And if two propositions can't both be true, they are mutually exclusive by definition.
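Tables like these are also easy to check mechanically. The sketch below (my own illustration, not from the text) uses Python's `and`, `or`, and `not` in place of \(\wedge\), \(\vee\), and \(\neg\), and rebuilds the columns computed above:

```python
from itertools import product

def truth_table(prop, n=2):
    """Evaluate prop over every assignment of n truth values.

    Rows come out in the textbook order: TT, TF, FT, FF.
    """
    return [prop(*row) for row in product([True, False], repeat=n)]

# (A or B) and not (A and B): the strict "one or the other, but not both"
xor = truth_table(lambda a, b: (a or b) and not (a and b))
print(xor)   # [False, True, True, False], i.e. the column F T T F

# A or not A is true in every row: a logical truth ...
assert all(truth_table(lambda a: a or not a, n=1))
# ... and A and not A is false in every row: a contradiction.
assert not any(truth_table(lambda a: a and not a, n=1))
```

Any of the propositions in this chapter can be checked the same way by passing a different lambda.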
A slightly more complex example is the propositions \(A \vee B\) and \(\neg A \wedge \neg B\):

\(A\) | \(B\) | \(\neg A\) | \(\neg B\) | \(A \vee B\) | \(\neg A \wedge \neg B\)
T | T | F | F | T | F
T | F | F | T | T | F
F | T | T | F | T | F
F | F | T | T | F | T

Again there's no row where \(A \vee B\) and \(\neg A \wedge \neg B\) are both true. So they are mutually exclusive. Truth tables can also be used to establish that an argument is valid. Here's a very simple example: \(A \wedge B\). Therefore, \(A\). Obviously it's not possible for the premise to be true and the conclusion false, so the argument is valid (if a bit silly). Accordingly, there is no line of the truth table where \(A \wedge B\) comes out true, yet \(A\) comes out false:

\(A\) | \(B\) | \(A \wedge B\)
T | T | T
T | F | F
F | T | F
F | F | F

The only line where \(A \wedge B\) comes out true is the first one. And on that line \(A\) is true too. So the argument from \(A \wedge B\) to \(A\) is valid. One more example: \(A \vee B\). \(\neg A\). Therefore, \(B\). This argument is valid because the first premise says that at least one of the two propositions \(A\) and \(B\) must be true, and the second line says it's not \(A\). So it must be \(B\) that's true, as the conclusion asserts. And once again there is no line of the truth table where both \(A \vee B\) and \(\neg A\) are true, yet \(B\) is false:

\(A\) | \(B\) | \(\neg A\) | \(A \vee B\)
T | T | F | T
T | F | F | T
F | T | T | T
F | F | T | F

The only line where both \(A \vee B\) and \(\neg A\) are true is the third row, and \(B\) is true on that row. So once again the truth table tells us this argument is valid. In the previous chapter we introduced the concept of logical entailment. \(A\) logically entails \(B\) when it's impossible for \(A\) to be true and \(B\) false. When one proposition entails another, there is no line of the truth table where the first proposition is true and the second is false. Sometimes entailment goes in both directions: the first proposition entails the second and the second entails the first.
For example, not only does \(A \wedge B\) entail \(B \wedge A\), but also \(B \wedge A\) entails \(A \wedge B\). We say such propositions are logically equivalent. In terms of truth tables, their columns match perfectly: they are identical copies of T's and F's.

\(A\) | \(B\) | \(A \wedge B\) | \(B \wedge A\)
T | T | T | T
T | F | F | F
F | T | F | F
F | F | F | F

A more complex example is the propositions \(\neg (A \vee B)\) and \(\neg A \wedge \neg B\):

\(A\) | \(B\) | \(\neg A\) | \(\neg B\) | \(A \vee B\) | \(\neg(A \vee B)\) | \(\neg A \wedge \neg B\)
T | T | F | F | T | F | F
T | F | F | T | T | F | F
F | T | T | F | T | F | F
F | F | T | T | F | T | T

Here again the columns under these two propositions are identical. Connectives can be used to build more complex propositions, like \(A \wedge B\) or \(A \vee \neg B\). We introduced three connectives: In a complex proposition, the main connective is the last one used to build it up from simpler components. In \(A \vee \neg B\) the main connective is the \(\vee\). An argument's validity can be established with a truth table, if there's no row where all the premises have a T and yet the conclusion has an F. Truth tables can also be used to establish that two propositions are mutually exclusive, if there is no row of the table where both propositions have a T. Logically equivalent propositions entail one another. When two propositions have identical columns in a truth table, they are logically equivalent. Using the following abbreviations: \[ \begin{aligned} A &= \mbox{Asha loves Cerci},\\ B &= \mbox{Balon loves Cerci}, \end{aligned} \] translate each of the following into logicese (e.g. \(\neg A \vee B\)). For each pair of propositions, use a truth table to determine whether they are mutually exclusive. For each pair of propositions, use a truth table to determine whether they are logically equivalent. The proposition \(A \vee (B \wedge C)\) features three simple propositions, so its truth table has 8 rows.
Fill in the rest of the table:

\(A\) | \(B\) | \(C\) | \(B \wedge C\) | \(A \vee (B \wedge C)\)
T | T | T | |
T | T | F | |
T | F | T | |
T | F | F | |
F | T | T | |
F | T | F | |
F | F | T | |
F | F | F | |

Use a truth table to determine whether the propositions \(A \vee (B \wedge C)\) and \((A \vee B) \wedge (A \vee C)\) are equivalent.
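Checks like these are easy to automate. A minimal Python sketch that enumerates truth-table rows, covering the two arguments worked above and the pair from the last exercise (which come out equivalent — the distributive law):

```python
from itertools import product

def rows(n):
    """All truth-value assignments to n atomic propositions."""
    return product([True, False], repeat=n)

# Validity of "A and B; therefore A": no row with a true premise and a false conclusion.
assert all(a for a, b in rows(2) if a and b)

# Validity of "A or B; not A; therefore B".
assert all(b for a, b in rows(2) if (a or b) and not a)

# Equivalence of A or (B and C) with (A or B) and (A or C): identical columns in every row.
assert all((a or (b and c)) == ((a or b) and (a or c)) for a, b, c in rows(3))

print("all truth-table checks pass")
```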
Hi, can someone provide me some self-study reading material for condensed matter theory? I've done QFT previously, for which I could happily read Peskin supplemented with David Tong. Can you please suggest some references along those lines? Thanks @skullpatrol The second one was in my MSc and covered considerably less than my first and (I felt) didn't do it in any particularly great way, so distinctly average. The third was pretty decent - I liked the way he did things and was essentially a more mathematically detailed version of the first :) 2. A weird particle or state that is made of a superposition of a torus region with clockwise momentum and anticlockwise momentum, resulting in one that has no momentum along the major circumference of the torus but still nonzero momentum in directions that are not pointing along the torus Same thought as you, however I think the major challenge of such a simulator is the computational cost. GR calculations, with their highly nonlinear nature, might be more costly than a computation of a protein. However, I can see some ways of approaching it. Recall how Slereah was building some kind of spacetime database; that could be the first step. Next, one might look for machine learning techniques to help with the simulation by using the classifications of spacetimes, as machines are known to perform very well on sign problems, as a recent paper has shown. Since the GR equations are ultimately a system of 10 nonlinear PDEs, it might be possible that the solution strategy has some relation to the class of spacetime under consideration, and that might help heavily reduce the parameters needed to simulate them. I just mean this: the EFE is a tensor equation relating a set of symmetric 4 × 4 tensors. Each tensor has 10 independent components.
The four Bianchi identities reduce the number of independent equations from 10 to 6, leaving the metric with four gauge-fixing degrees of freedom, which correspond to the freedom to choose a coordinate system. @ooolb Even if that is really possible (I can always talk about things from a non-joking perspective), the issue is that 1) unlike other people, I cannot incubate my dreams for a certain topic due to Mechanism 1 (conscious desires have reduced probability of appearing in dreams), and 2) for 6 years, my dreams have yet to show any sign of revisiting the exact same idea, and there are no known instances of either sequel dreams or recurring dreams @0celo7 I felt this aspect can be helped by machine learning. You can train a neural network with some PDEs of a known class with some known constraints, and let it figure out the best solution for some new PDE after, say, training it on 1000 different PDEs Actually that makes me wonder: is the space of all coordinate choices larger than that of all possible moves in Go? enumaris: From what I understood from the dream, the warp drive shown here may be some variation of the Alcubierre metric with a global topology that has 4 holes in it, whereas the original Alcubierre drive, if I recall, doesn't have holes orbit stabilizer: h bar is my home chat, because this is the first SE chat I joined. Maths chat is the 2nd one I joined, followed by periodic table, biosphere, factory floor and many others Btw, since gravity is nonlinear, do we expect that if we have a region where spacetime is frame-dragged in the clockwise direction superimposed on a spacetime that is frame-dragged in the anticlockwise direction, the result will be a spacetime with no frame drag? (One possible physical scenario where I can envision this occurring may be when two massive rotating objects with opposite angular velocities are on course to merge.) Well, I'm a beginner in the study of General Relativity, ok?
My knowledge about the subject is based on books like Schutz, Hartle, Carroll and introductory papers. About quantum mechanics I have poor knowledge as yet. So, what I meant by a "gravitational double-slit experiment" is: is there a gravitational analogue of the double-slit experiment for gravitational waves? @JackClerk the double-slit experiment is just interference of two coherent sources, where we get the two sources from a single light beam using the two slits. But gravitational waves interact so weakly with matter that it's hard to see how we could screen a gravitational wave to get two coherent GW sources. But if we could figure out a way to do it then yes, GWs would interfere just like light waves. Thank you @Secret and @JohnRennie. But to conclude the discussion, I want to put a "silly picture" here: imagine a huge double-slit plate in space close to a strong source of gravitational waves. Then, like water waves and light, we will see the pattern? So, if the source (like a black hole binary) is sufficiently far away, then in the regions of destructive interference, spacetime would have a flat geometry, and if we put a spherical object in this region the metric will become Schwarzschild-like. Pardon, I just spent some naive-philosophy time here with these discussions. The situation was even more dire for Calculus and I managed! This is a neat strategy I have found: revision becomes more bearable when I have The h Bar open on the side. In all honesty, I actually prefer exam season! At all other times, as I have observed in this semester at least, there is nothing exciting to do. This system of tortuous panic, followed by a reward, is obviously very satisfying.
My opinion is that I need you kaumudi to decrease the probability of h bar having software system infrastructure conversations, which confuse me like hell and are why I took refuge in the maths chat a few weeks ago (Not that I have questions to ask or anything; like I said, it is a little relieving to be with friends while I am panicked. I think it is possible to gauge how much of a social recluse I am from this, because I spend some of my free time hanging out with you lot, even though I am literally inside a hostel teeming with hundreds of my peers) that's true. Though back in high school, regardless of the language, our teacher taught us to always indent our code to allow easy reading and troubleshooting. We were also taught the 4-space indentation convention @JohnRennie I wish I could just tab because I am also lazy, but sometimes tab inserts 4 spaces while other times it inserts 5-6 spaces, thus screwing up a block of if-then conditions in my code, which is why I had no choice I currently automate almost everything from job submission to data extraction, and later on, with the help of the machine learning group at my uni, we might be able to automate a GUI library search thingy I can do all tasks related to my work without leaving the text editor (of course, that text editor is Emacs). The only inconvenience is that some websites don't render in an optimal way (but most of the work-related ones do) Hi to all. Does anyone know where I could write MATLAB code online (for free)? Apparently another one of my institution's great inspirations is to have a MATLAB-oriented computational physics course without having MATLAB on the university's PCs. Thanks. @Kaumudi.H Hacky way: 1st thing is that $\psi\left(x, y, z, t\right) = \psi\left(x, y, t\right)$, so no propagation in $z$-direction. Now, in '$1$ unit' of time, it travels $\frac{\sqrt{3}}{2}$ units in the $y$-direction and $\frac{1}{2}$ units in the $x$-direction.
Use this to form a triangle and you'll get the answer with simple trig :) @Kaumudi.H Ah, it was okayish. It was mostly memory-based. Each small question was worth 10-15 marks. No idea what they expect me to write for questions like "Describe acoustic and optic phonons" for 15 marks!! I only wrote two small paragraphs...meh. I don't like this subject much :P (physical electronics). Hope to do better in the upcoming tests so that there isn't a huge effect on the GPA. @Blue Ok, thanks. I found a way by connecting to the servers of the university (the program isn't installed on the PCs in the computer room, but if I connect to the server of the university, which means remotely running another environment, I found an older version of MATLAB). But thanks again. @user685252 No; I am saying that it has no bearing on how good you actually are at the subject - it has no bearing on how good you are at applying knowledge; it doesn't test problem-solving skills; it doesn't take into account that, if I'm sitting in the office having forgotten the difference between different types of matrix decomposition or something, I can just search the internet (or a textbook), so it doesn't say how good someone is at research in that subject; and it doesn't test how good you are at deriving anything - someone can write down a definition without any understanding, while someone who can derive it but has forgotten it probably won't have time in an exam situation. In short, testing memory is not the same as testing understanding. If you really want to test someone's understanding, give them a few problems in that area that they've never seen before and give them a reasonable amount of time to do it, with access to textbooks etc.
Ground states of nonlinear Schrödinger systems with periodic or non-periodic potentials

1. School of Mathematics and Statistics, Central South University, Changsha, 410083 Hunan, China
2. School of Traffic and Transportation Engineering, Central South University, Changsha, 410075 Hunan, China

In this paper we study a class of weakly coupled Schrödinger systems arising in several branches of the sciences, such as nonlinear optics and Bose-Einstein condensates. Instead of the well-known super-quadratic condition that $\lim_{|z|\to\infty}\frac{F(x,z)}{|z|^2} = \infty$ uniformly in $x$, we introduce a new local super-quadratic condition that allows the nonlinearity $F$ to be super-quadratic at some $x\in \mathbb{R}^N$ and asymptotically quadratic at other $x\in \mathbb{R}^N$. Employing some analytical skills and using the variational method, we prove some results about the existence of ground states for the system with periodic or non-periodic potentials. In particular, any nontrivial solutions are continuous and decay to zero exponentially as $|x| \to \infty$. Our main results extend and improve some recent ones in the literature.

Keywords: Schrödinger system, superlinear, asymptotically linear, ground states, local super-quadratic conditions.

Mathematics Subject Classification: 35J50; 35J47.

Citation: Dongdong Qin, Xianhua Tang, Qingfang Wu. Ground states of nonlinear Schrödinger systems with periodic or non-periodic potentials. Communications on Pure & Applied Analysis, 2019, 18 (3): 1261-1280. doi: 10.3934/cpaa.2019061
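For orientation, weakly coupled gradient-type systems of the kind described in the abstract are commonly written in the following form (this is the standard template for such problems, not necessarily the paper's exact coupling; here $z=(u,v)$ and $F$ is the coupling nonlinearity whose growth in $|z|$ the super-quadratic condition constrains):

\[
\begin{cases}
-\Delta u + V(x)\,u = F_u(x, u, v), \\
-\Delta v + V(x)\,v = F_v(x, u, v),
\end{cases}
\qquad (u, v) \in H^1(\mathbb{R}^N) \times H^1(\mathbb{R}^N),
\]

and a ground state is a nontrivial solution minimizing the associated energy functional among all nontrivial solutions.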
Mathematics About the Number 2017 Happy New Year 2017!! Here is the list of mathematical facts about the number 2017 that you can brag about to your friends or family as a math geek. 2017 is a prime number Of course, I start with the fact that the number 2017 is a prime number. The previous prime year was 2011. The next prime year is 2027, and it is actually a twin prime year (2027 and 2029 are both primes). The 2017th prime number is 17539. The combined number 201717539 is also prime, yet the combined number 175392017 is composite. 2017 is the 306th prime number. $306=2\cdot 3^2\cdot 17$ contains the prime factor 17. Also, 2017+2+0+1+7=2027 is the next prime year. You may find more prime years from the list of one million primes that I made. 2017 is not a Gaussian prime The number 2017 is congruent to 1 mod 4. (When we divide 2017 by 4, the remainder is 1.) Such a number can be factored in the ring of Gaussian integers $\Z[i]$, where $i=\sqrt{-1}$. Explicitly we have \[2017=(44+9i)(44-9i).\] 2017 is not an Eisenstein prime The number 2017 can be factored in the ring of Eisenstein integers $\Z[\omega]$, where $\omega=e^{2\pi i/3}$ is a primitive third root of unity, as \[2017=(-7-48\omega^2)(41+48\omega^2).\] 2017 is a sum of squares We can write 2017 as a sum of two squares: \[2017=44^2+9^2.\] 2017 is part of a Pythagorean triple (To obtain these numbers, note that in general for any integers $m>n>0$, the triple $(a, b, c)$, where \[a=m^2-n^2, b=2mn, c=m^2+n^2,\] is a Pythagorean triple by Euclid’s formula. Since we know $2017=44^2+9^2$, apply this formula with $m=44, n=9$.) A Pythagorean triple $(a, b, c)$ is said to be primitive if the integers $a, b, c$ are coprime. A Pythagorean triple obtained from Euclid’s formula is primitive if and only if $m$ and $n$ are coprime. In our case, since $m=44$ and $n=9$ are coprime, the Pythagorean triple $(1855, 792, 2017)$ is primitive. By the way, Carl Friedrich Gauss passed away on February 23rd 1855. (Reference: Wikipedia, Carl Friedrich Gauss.)
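Facts like the ones above are easy to machine-check. A quick sketch using trial-division primality (slow but fine at this scale):

```python
def is_prime(n):
    """Trial-division primality test; adequate for small n."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

assert is_prime(2017) and is_prime(2011) and is_prime(2027)
assert sum(1 for k in range(2, 2018) if is_prime(k)) == 306  # 2017 is the 306th prime
assert 44**2 + 9**2 == 2017                                   # sum of two squares

# Euclid's formula with m = 44, n = 9 gives the primitive triple (1855, 792, 2017).
m, n = 44, 9
a, b, c = m*m - n*n, 2*m*n, m*m + n*n
assert (a, b, c) == (1855, 792, 2017) and a*a + b*b == c*c

print("all checks pass")
```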
2017 is a sum of three cubes The number 2017 can be expressed as a sum of three cubes of primes: \[2017=7^3+7^3+11^3.\] 2017 appears in $\pi$ The number 2017 appears in the decimal expansion of $\pi=3.1415\ldots$. Look at the last four digits of $\pi$ truncated to $8900$ decimal places: $\pi=3.1415\ldots2017$. On the other hand, the number 2017 does not appear in the decimal expansion of $2017^{2017}$. Exam problem using 2017 Let \[A=\begin{bmatrix} -1 & 2 \\ 0 & -1 \end{bmatrix} \text{ and } \mathbf{u}=\begin{bmatrix} 1\\ 0 \end{bmatrix}.\] Compute $A^{2017}\mathbf{u}$. This is one of the exam problems at the Ohio State University. Check out the solutions of this problem here. How many prime numbers are there? 2017 is a prime number. How many prime numbers exist? In fact, there are infinitely many prime numbers. Please check out the post; as the title suggests, the proof is only one line. More fun with 2017? If you know or come up with more interesting properties of the number 2017, please let me know. I hope 2017 will be a wonderful year for everyone!!
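The exam problem above can also be checked numerically. A sketch with a hand-rolled exact 2x2 integer matrix power (just a check, not the intended pen-and-paper solution):

```python
def mat_mul(X, Y):
    """Product of two 2x2 integer matrices."""
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def mat_pow(X, n):
    """X**n by repeated squaring."""
    R = [[1, 0], [0, 1]]  # identity
    while n:
        if n & 1:
            R = mat_mul(R, X)
        X = mat_mul(X, X)
        n >>= 1
    return R

A = [[-1, 2], [0, -1]]
u = [1, 0]
P = mat_pow(A, 2017)
result = [sum(P[i][j] * u[j] for j in range(2)) for i in range(2)]
print(result)  # [-1, 0]
```

This matches the hand computation: $A^{2k}$ is upper triangular with ones on the diagonal and $-4k$ in the corner, so $A^{2017}=A^{2016}A$ has first column $(-1,0)^{\mathsf{T}}$.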
Algebra and Algebraic Geometry Seminar Spring 2018 Latest revision as of 10:25, 26 December 2018 The seminar meets on Fridays at 2:25 pm in room B235. Algebra and Algebraic Geometry Mailing List Please join the AGS Mailing List to hear about upcoming seminars, lunches, and other algebraic geometry events in the department (it is possible you must be on a math department computer to use this link). Abstracts Tasos Moulinos: Derived Azumaya Algebras and Twisted K-theory Topological K-theory of dg-categories is a localizing invariant of dg-categories over [math] \mathbb{C} [/math] taking values in the [math] \infty [/math]-category of [math] KU [/math]-modules. In this talk I describe a relative version of this construction; namely, for [math]X[/math] a quasi-compact, quasi-separated [math] \mathbb{C} [/math]-scheme I construct a functor valued in the [math] \infty [/math]-category of sheaves of spectra on [math] X(\mathbb{C}) [/math], the complex points of [math]X[/math]. For inputs of the form [math]\operatorname{Perf}(X, A)[/math] where [math]A[/math] is an Azumaya algebra over [math]X[/math], I characterize the values of this functor in terms of the twisted topological K-theory of [math] X(\mathbb{C}) [/math].
From this I deduce a certain decomposition, for [math] X [/math] a finite CW-complex equipped with a bundle [math] P [/math] of projective spaces over [math] X [/math], of [math] KU(P) [/math] in terms of the twisted topological K-theory of [math] X [/math]; this is a topological analogue of a result of Quillen’s on the algebraic K-theory of Severi-Brauer schemes. Roman Fedorov: A conjecture of Grothendieck and Serre on principal bundles in mixed characteristic Let G be a reductive group scheme over a regular local ring R. An old conjecture of Grothendieck and Serre predicts that a principal G-bundle over R is trivial if it is trivial over the fraction field of R. The conjecture has recently been proved in the "geometric" case, that is, when R contains a field. In the remaining case, the difficulty comes from the fact that the situation is more rigid, so that a certain general position argument does not go through. I will discuss this difficulty and a way to circumvent it to obtain some partial results. Juliette Bruce: Asymptotic Syzygies in the Semi-Ample Setting In recent years numerous conjectures have been made describing the asymptotic Betti numbers of a projective variety as the embedding line bundle becomes more ample. I will discuss recent work attempting to generalize these conjectures to the case when the embedding line bundle becomes more semi-ample. (Recall a line bundle is semi-ample if a sufficiently large multiple is base point free.) In particular, I will discuss how the monomial methods of Ein, Erman, and Lazarsfeld used to prove non-vanishing results on projective space can be extended to prove non-vanishing results for products of projective space. Andrei Caldararu: Computing a categorical Gromov-Witten invariant In his 2005 paper "The Gromov-Witten potential associated to a TCFT" Kevin Costello described a procedure for recovering an analogue of the Gromov-Witten potential directly out of a cyclic A-infinity algebra or category.
Applying his construction to the derived category of sheaves of a complex projective variety provides a definition of higher genus B-model Gromov-Witten invariants, independent of the BCOV formalism. This has several advantages. Due to the categorical invariance of these invariants, categorical mirror symmetry automatically implies classical mirror symmetry to all genera. Also, the construction can be applied to other categories, like categories of matrix factorizations, giving a direct definition of FJRW invariants, for example. In my talk I shall describe the details of the computation (joint with Junwu Tu) of the invariant, at g=1, n=1, for elliptic curves. The result agrees with the predictions of mirror symmetry, matching classical calculations of Dijkgraaf. It is the first non-trivial computation of a categorical Gromov-Witten invariant. Aron Heleodoro: Normally ordered tensor product of Tate objects and decomposition of higher adeles In this talk I will introduce the different tensor products that exist on Tate objects over vector spaces (or more generally coherent sheaves on a given scheme). As an application, I will explain how these can be used to describe higher adeles on an n-dimensional smooth scheme. Both Tate objects and higher adeles will be introduced in the talk. (This is based on joint work with Braunling, Groechenig and Wolfson.) Moisés Herradón Cueto: Local type of difference equations The theory of algebraic differential equations on the affine line is very well-understood. In particular, there is a well-defined notion of restricting a D-module to a formal neighborhood of a point, and these restrictions are completely described by two vector spaces, called vanishing cycles and nearby cycles, and some maps between them.
We give an analogous notion of "restriction to a formal disk" for difference equations that satisfies several desirable properties: first of all, a difference module can be recovered uniquely from its restriction to the complement of a point and its restriction to a formal disk around this point. Secondly, it gives rise to a local Mellin transform, which relates vanishing cycles of a difference module to nearby cycles of its Mellin transform. Since the Mellin transform of a difference module is a D-module, the Mellin transform brings us back to the familiar world of D-modules. Eva Elduque: On the signed Euler characteristic property for subvarieties of Abelian varieties Franecki and Kapranov proved that the Euler characteristic of a perverse sheaf on a semi-abelian variety is non-negative. This result has several purely topological consequences regarding the sign of the (topological and intersection homology) Euler characteristic of a subvariety of an abelian variety, and it is natural to attempt to justify them by more elementary methods. In this talk, we'll explore the geometric tools used recently in the proof of the signed Euler characteristic property. Joint work with Christian Geske and Laurentiu Maxim. Harrison Chen: Equivariant localization for periodic cyclic homology and derived loop spaces There is a close relationship between derived loop spaces, a geometric object, and (periodic) cyclic homology, a categorical invariant. In this talk we will discuss this relationship and how it leads to an equivariant localization result, which has an intuitive interpretation using the language of derived loop spaces. We discuss ongoing generalizations and potential applications in computing the periodic cyclic homology of categories of equivariant (coherent) sheaves on algebraic varieties.
Phil Tosteson: Stability in the homology of Deligne-Mumford compactifications The space [math]\bar M_{g,n}[/math] is a compactification of the moduli space of algebraic curves with marked points, obtained by allowing smooth curves to degenerate to nodal ones. We will talk about how the asymptotic behavior of its homology, [math]H_i(\bar M_{g,n})[/math], for [math]n \gg 0[/math] can be studied using the representation theory of the category of finite sets and surjections. Wei Ho: Noncommutative Galois closures and moduli problems In this talk, we will discuss the notion of a Galois closure for a possibly noncommutative algebra. We will explain how this problem is related to certain moduli problems involving genus one curves and torsors for Jacobians of higher genus curves. This is joint work with Matt Satriano. Daniel Corey: Initial degenerations of Grassmannians Let Gr_0(d,n) denote the open subvariety of the Grassmannian Gr(d,n) consisting of d-1 dimensional subspaces of P^{n-1} meeting the toric boundary transversely. We prove that Gr_0(3,7) is schoen in the sense that all of its initial degenerations are smooth. The main technique we will use is to express the initial degenerations of Gr_0(3,7) as an inverse limit of thin Schubert cells. We use this to show that the Chow quotient of Gr(3,7) by the maximal torus H in GL(7) is the log canonical compactification of the moduli space of 7 lines in P^2 in linear general position. Alena Pirutka: Irrationality problems Let X be a projective algebraic variety, the set of solutions of a system of homogeneous polynomial equations. Several classical notions describe how "unconstrained" the solutions are, i.e., how close X is to projective space: there are notions of rational, unirational and stably rational varieties. Over the field of complex numbers, these notions coincide in dimensions one and two, but diverge in higher dimensions.
In recent years, many new classes of non stably rational varieties were found, using a specialization technique introduced by C. Voisin. This method also made it possible to prove that rationality is not a deformation invariant in smooth and projective families of complex varieties: this is joint work with B. Hassett and Y. Tschinkel. In my talk I will describe classical examples, as well as the recent progress around these rationality questions. Nero Budur: Homotopy of singular algebraic varieties By work of Simpson, Kollár, and Kapovich, every finitely generated group can be the fundamental group of an irreducible complex algebraic variety with only normal crossings and Whitney umbrellas as singularities. In contrast, we show that if a complex algebraic variety has no weight zero 1-cohomology classes, then the fundamental group is strongly restricted: the irreducible components of the cohomology jump loci of rank one local systems containing the constant sheaf are complex affine tori. The same holds for links and Milnor fibers. This is joint work with Marcel Rubió. Alexander Yom Din: Drinfeld-Gaitsgory functor and contragredient duality for (g,K)-modules Drinfeld suggested the definition of a certain endo-functor, called the pseudo-identity functor (or the Drinfeld-Gaitsgory functor), on the category of D-modules on an algebraic stack. We extend this definition to an arbitrary DG category, and show that if certain finiteness conditions are satisfied, this functor is the inverse of the Serre functor. We show that the pseudo-identity functor for (g,K)-modules is isomorphic to the composition of cohomological and contragredient dualities, which is parallel to an analogous assertion for p-adic groups. In this talk I will try to discuss some of these results and around them. This is joint work with Dennis Gaitsgory.
John Lesieutre: Some higher-dimensional cases of the Kawaguchi-Silverman conjecture Given a dominant rational self-map f : X --> X of a variety defined over a number field, the first dynamical degree $\lambda_1(f)$ and the arithmetic degree $\alpha_f(P)$ are two measures of the complexity of the dynamics of f: the first measures the rate of growth of the degrees of the iterates f^n, while the second measures the rate of growth of the heights of the iterates f^n(P) for a point P. A conjecture of Kawaguchi and Silverman predicts that if P has Zariski-dense orbit, then these two quantities coincide. I will prove this conjecture in several higher-dimensional settings, including for all automorphisms of hyper-K\"ahler varieties. This is joint work with Matthew Satriano.
A commutative ring $R$ is called a principal ideal domain (PID) if every ideal of $R$ can be generated by a single element. If $R$ is a principal ideal domain, is every subring of $R$ a principal ideal domain? No, definitely not. That is because you can take any integral domain that is not a […] An associative ring $R$ is called von Neumann regular if for each $x\in R$ there exists a $y\in R$ such that $x = xyx$. Now let $R$ be a commutative ring. Its dimension is the supremum over lengths of chains of prime ideals in $R$. So for example, fields are zero dimensional because the only […] Let's see an example of a finitely-generated flat module that is not projective! What does this provide a counterexample to? If $R$ is a ring that is either right Noetherian or a local ring (that is, has a unique maximal right ideal or equivalently, a unique maximal left ideal), then every finitely-generated flat right $R$-module […] Let $R$ be a commutative ring and $M_n(R)$ denote the ring of $n\times n$ matrices with coefficients in $R$. For $X,Y\in M_n(R)$, their commutator $[X,Y]$ is defined by $$[X,Y] := XY - YX.$$ The trace of any matrix is defined as the sum of its diagonal entries. If $X$ and $Y$ are any matrices, what […] Suppose $I$ is an ideal in a ring $R$ and $J,K$ are ideals such that $I\subseteq J\cup K$. Then either $I\subseteq J$ or $I\subseteq K$. Indeed, suppose that there is some $x\in I$ such that $x\not\in J$. If $y\in I$ is arbitrary and $y\not\in K$ then $x + y$ is in neither $J$ nor $K$. […] Let $\F_q$ be a finite field. For any function $f:\F_q\to \F_q$, there exists a polynomial $p\in \F_q[x]$ such that $f(a) = p(a)$ for all $a\in \F_q$. In other words, every function from a finite field to itself can be represented by a polynomial. In particular, every permutation of $\F_q$ can be represented by a polynomial. […] Let $R$ be a commutative ring. The zero divisors of $R$, which we denote $Z(R)$, form a set-theoretic union of prime ideals.
This is just because in any commutative ring, the set of subsets of $R$ that can be written as unions of prime ideals is in bijection with the saturated multiplicatively closed sets (the […] Let $\Z[\Z/n]$ denote the integral group ring of the cyclic group $\Z/n$. How would you create $\Z[\Z/n]$ in Sage so that you could easily multiply elements? First, if you've already assigned a group to the variable 'A', then R = GroupAlgebra(A,ZZ) will give you the corresponding group ring and store it in the variable 'R'. The first […] Let $R$ be a commutative ring and $(p)$ be a principal prime ideal. What can be said about the intersection $\cap_{k=1}^\infty (p)^k$? Let's abbreviate this $\cap (p)^k$ (I like to use the convention that when limits are not specified, the operation, like intersection, is taken over all possible indices). Let's try an example. For […] For a commutative ring, what does the partially ordered set (= poset) of primes look like? I already talked a little about totally ordered sets of primes, but what about in general? For a general partially ordered set $S$ there are two immediate questions that come to mind: Does there exist a commutative ring whose poset […]
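The claim above that every function $\F_q\to\F_q$ is given by a polynomial can be checked directly by Lagrange interpolation. Here is a minimal sketch, restricted to a prime field $\F_p$ for simplicity; the function names are my own, not from the original post:

```python
# Lagrange interpolation over a prime field F_p: given the value table of an
# arbitrary function f : F_p -> F_p, produce a polynomial agreeing with it.

def interpolate_mod_p(values, p):
    """Coefficients (lowest degree first) of a polynomial over F_p
    with poly(a) = values[a] for every a in {0, ..., p-1}."""
    coeffs = [0] * p
    for a in range(p):
        # Build the Lagrange basis polynomial L_a(x) = prod_{b != a} (x - b)/(a - b)
        basis = [1]      # the constant polynomial 1
        denom = 1
        for b in range(p):
            if b == a:
                continue
            # multiply basis by (x - b)
            new = [0] * (len(basis) + 1)
            for i, c in enumerate(basis):
                new[i + 1] = (new[i + 1] + c) % p
                new[i] = (new[i] - c * b) % p
            basis = new
            denom = (denom * (a - b)) % p
        inv = pow(denom, p - 2, p)          # inverse via Fermat's little theorem
        scale = (values[a] * inv) % p
        for i, c in enumerate(basis):
            coeffs[i] = (coeffs[i] + c * scale) % p
    return coeffs

def evaluate(coeffs, x, p):
    """Evaluate a coefficient list at x over F_p."""
    return sum(c * pow(x, i, p) for i, c in enumerate(coeffs)) % p

# Represent the permutation x -> x + 1 of F_5 by a polynomial:
p = 5
f = [(x + 1) % p for x in range(p)]
poly = interpolate_mod_p(f, p)
assert all(evaluate(poly, x, p) == f[x] for x in range(p))
```

By uniqueness of the interpolating polynomial of degree at most $p-1$, the recovered polynomial here is just $x+1$, but the same routine works for any value table, including non-polynomial-looking permutations.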
Matching Digit Sums Problem 676 Let $d(i,b)$ be the digit sum of the number $i$ in base $b$. For example $d(9,2)=2$, since $9=1001_2$. When using different bases, the respective digit sums usually differ from each other; for example $d(9,4)=3 \ne d(9,2)$. However, for some numbers $i$ there will be a match, like $d(17,4)=d(17,2)=2$. Let $M(n,b_1,b_2)$ be the sum of all natural numbers $i \le n$ for which $d(i,b_1)=d(i,b_2)$. For example, $M(10,8,2)=18$, $M(100,8,2)=292$ and $M(10^6,8,2)=19173952$. Find $\displaystyle \sum_{k=3}^6 \sum_{l=1}^{k-2}M(10^{16},2^k,2^l)$, giving the last 16 digits as the answer.
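A direct brute-force transcription of the definitions reproduces the small examples in the problem statement (it is of course hopeless for $10^{16}$, where a digit-DP or similar counting technique is needed):

```python
# Brute-force sketch of d(i, b) and M(n, b1, b2) from the problem statement.

def d(i, b):
    """Digit sum of i in base b."""
    s = 0
    while i:
        s += i % b
        i //= b
    return s

def M(n, b1, b2):
    """Sum of all natural numbers i <= n with d(i, b1) == d(i, b2)."""
    return sum(i for i in range(1, n + 1) if d(i, b1) == d(i, b2))

assert d(9, 2) == 2 and d(9, 4) == 3
assert d(17, 4) == d(17, 2) == 2
assert M(10, 8, 2) == 18
assert M(100, 8, 2) == 292
```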
We give an example of a group of infinite order each of whose elements has finite order. Consider the group of rational numbers $\Q$ and its subgroup $\Z$. The quotient group $\Q/\Z$ will serve as an example, as we verify below. Note that each element of $\Q/\Z$ is of the form \[\frac{m}{n}+\Z,\] where $m$ and $n$ are integers. This implies that the representatives of $\Q/\Z$ are rational numbers in the interval $[0, 1)$. There are infinitely many rational numbers in $[0, 1)$, and hence the order of the group $\Q/\Z$ is infinite. On the other hand, as each element of $\Q/\Z$ is of the form $\frac{m}{n}+\Z$ for $m, n\in \Z$, we have \[n\cdot \left(\, \frac{m}{n}+\Z \,\right)=m+\Z=0+\Z\] because $m\in \Z$. Thus the order of the element $\frac{m}{n}+\Z$ is at most $n$. Hence the order of each element of $\Q/\Z$ is finite. Therefore, $\Q/\Z$ is an infinite group whose elements all have finite order. Group of Order 18 is Solvable: Let $G$ be a finite group of order $18$. Show that the group $G$ is solvable. Definition: Recall that a group $G$ is said to be solvable if $G$ has a subnormal series \[\{e\}=G_0 \triangleleft G_1 \triangleleft G_2 \triangleleft \cdots \triangleleft G_n=G\] such […] The Group of Rational Numbers is Not Finitely Generated: (a) Prove that the additive group $\Q=(\Q, +)$ of rational numbers is not finitely generated. (b) Prove that the multiplicative group $\Q^*=(\Q\setminus\{0\}, \times)$ of nonzero rational numbers is not finitely generated. Proof. (a) Prove that the additive […] Commutator Subgroup and Abelian Quotient Group: Let $G$ be a group and let $D(G)=[G,G]$ be the commutator subgroup of $G$. Let $N$ be a subgroup of $G$. Prove that the subgroup $N$ is normal in $G$ and $G/N$ is an abelian group if and only if $N \supset D(G)$. Definitions: Recall that for any $a, b \in G$, the […] Normal Subgroups, Isomorphic Quotients, But Not Isomorphic: Let $G$ be a group.
Suppose that $H_1, H_2, N_1, N_2$ are all normal subgroups of $G$, $H_1 \lhd N_1$, and $H_2 \lhd N_2$. Suppose also that $N_1/H_1$ is isomorphic to $N_2/H_2$. Then prove or disprove that $N_1$ is isomorphic to $N_2$. Proof. We give a […]
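The $\Q/\Z$ argument above is easy to experiment with using Python's exact rational arithmetic; this is a small sketch of my own (the helper name is made up), finding the order of a coset $q+\Z$ by brute force:

```python
# The coset q + Z in Q/Z is determined by the fractional part of q, and
# n * (m/n + Z) = 0 + Z, so every element has finite order. The order of
# m/n + Z (in lowest terms) is in fact exactly the denominator n.

from fractions import Fraction

def order_in_Q_mod_Z(q):
    """Order of the coset q + Z in Q/Z: the least k >= 1 with k*q an integer."""
    q = Fraction(q) % 1          # reduce to the representative in [0, 1)
    k = 1
    while (k * q).denominator != 1:
        k += 1
    return k

assert order_in_Q_mod_Z(Fraction(1, 2)) == 2
assert order_in_Q_mod_Z(Fraction(3, 4)) == 4   # order equals the reduced denominator
assert order_in_Q_mod_Z(Fraction(5, 1)) == 1   # integers form the identity coset
```

Every coset thus has finite order, while the group itself is infinite, matching the proof above.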
This is a very quick introduction to Galois descent for schemes defined over fields. It is a very special case of faithfully flat descent and other topos-descent theorems, which I won't go into at all. Typically, if you look up descent in an algebraic geometry text you will quickly run into all sorts of diagrams […] This is mostly a continuation on the group I gave in the last post, which is given by the presentation $$G = \langle a,t ~|~ t^{-1}a^2t = a^3\rangle.$$ At the risk of beating a dead horse, I proved that the homomorphism $f:G\to G$ given on generators by $f(t) = t$ and $f(a) = a^2$ is […] A few weeks ago I gave an example of a non-Hopfian finitely-presented group. Recall that a group $G$ is said to be Hopfian if every surjective group homomorphism $G\to G$ is actually an isomorphism. All finitely-generated, residually finite groups are Hopfian. So for example, the group of the integers $\Z$ is Hopfian. Another example of […] Every once in a while I spot a true gem on the arXiv. Unsolved Problems in Group Theory: The Kourovka Notebook is such a gem: it is a huge collection of open problems in group theory. Started in 1965, it is now in its 19th volume, which contains hundreds of problems posed by mathematicians around the world. Additionally, problems solved […] In a recent post on residually finite groups, I talked a bit about Hopfian groups. A group $G$ is Hopfian if every surjective group homomorphism $G\to G$ is an isomorphism. This concept connected back to residually finite groups because if a group $G$ is residually finite and finitely generated, then it is Hopfian. A free […] In a talk yesterday by Boris Kunyavski at the University of Ottawa, I learned a little about the Ore conjecture, which was proved in 2010 in: Liebeck, Martin W.; O'Brien, E. A.; Shalev, Aner; Tiep, Pham Huu. The Ore conjecture. J. Eur. Math. Soc. (JEMS) 12 (2010), no. 4, 939-1008.
It's quite a […] We say that a group $G$ is residually finite if for each $g\in G$ that is not equal to the identity of $G$, there exists a finite group $F$ and a group homomorphism $$\varphi:G\to F$$ such that $\varphi(g)$ is not the identity of $F$. The definition does not change if we require that $\varphi$ be […] The Fibonacci sequence is an infinite sequence of integers $f_0,f_1,f_2,\dots$ defined by the initial values $f_0 = f_1 = 1$ and the rule $$f_{n+1} = f_n + f_{n-1}.$$ In other words, to get the next term you take the sum of the two previous terms. For example, it starts off with: $$1,1,2,3,5,8,13,21,34,55,\dots$$ You can define […] When I first saw convolution I happily used it without thinking about why we should define it in the first place. Here's a post that might tell you why it makes sense from an algebraic viewpoint. Let's recall the convolution product, as it is for functions $f:\R\to\R$. If $f,g:\R\to\R$ are two such functions, then the […] Let $k$ be a commutative ring. Let $\G_a$ be the group functor $\G_a(R) = R$ and $\G_m$ be the group functor $\G_m(R) = R^\times$, both over the base ring $k$. What are the homomorphisms $\G_a\to \G_m$? In other words, what are the characters of $\G_a$? This depends on the ring, of course! The representing Hopf algebra […]
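The Fibonacci rule quoted above, with the post's indexing $f_0 = f_1 = 1$, is a one-liner to check numerically:

```python
# Fibonacci numbers with the convention f_0 = f_1 = 1 used in the post.

def fib(n):
    """Return f_n, where f_0 = f_1 = 1 and f_{n+1} = f_n + f_{n-1}."""
    a, b = 1, 1
    for _ in range(n):
        a, b = b, a + b
    return a

# Reproduces the opening terms listed in the post:
assert [fib(n) for n in range(10)] == [1, 1, 2, 3, 5, 8, 13, 21, 34, 55]
```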
This is a short list of books to get you started on learning automorphic representations. Before I talk about them, I will first define automorphic representation, which will take a few paragraphs. To start, we need an affine algebraic $F$-group scheme $G$ where $F$ is a number field or function field. We let $\A_F$ be […] A perfect number is a positive integer $n$ such that $n$ is the sum of its proper divisors. For example $6 = 1 + 2 + 3$. The symbol $\sigma(n)$ is usually used for the sum of all the divisors of a positive integer $n$, so that a number is perfect if and only if […] You can ask lots of questions about primes. After posting 50 facts about primes, I couldn't resist making another graph. In this one, the x-axis is $n$ and the y-axis is the number of primes up to $n$ that contain a given decimal digit (written in decimal, of course). I've plotted all of these on […] A prime is a natural number greater than one whose only factors are one and itself. I find primes pretty cool, so I made a list of 50 facts about primes: The first twenty primes are 2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47, 53, 59, 61, 67, […] Sir Michael Atiyah's preprints are now on the internet: The Riemann Hypothesis The Fine Structure Constant The meat of the claimed proof of the Riemann hypothesis is in Atiyah's construction of the Todd map $T:\C\to \C$. It supposedly comes from the composition of two different isomorphisms $$\C\xrightarrow{t_+} C(A)\xrightarrow{t^{-1}_{-}} \C$$ of the complex field $\C$ with […] Well this is strange indeed: according to this New Scientist article published today, the famous Sir Michael Atiyah is supposed to talk this Monday at the Heidelberg Laureate Forum. The topic: a proof of the Riemann hypothesis. The Riemann hypothesis states that the Riemann Zeta function defined by the analytic continuation of $\zeta(s) = \sum_{n=1}^\infty […] This is the final post on the Jacobi symbol.
Recall that the Jacobi symbol $(m/n)$ for relatively prime integers $m$ and $n$ is defined to be the sign of the permutation $x\mapsto mx$ on the ring $\Z/n$. In the introductory post we saw this definition, some examples, and basic properties for calculation purposes. In Part […] In the last post, we examined the Jacobi symbol: for two relatively prime integers $m$ and $n$, we defined the Jacobi symbol $(m/n)$ to be the sign of the permutation $x\mapsto mx$ on the ring $\Z/n$. It turns out that the Jacobi symbol plays a part in the theory of quadratic residues. For a number […] If $m$ and $n$ are relatively prime integers, the Jacobi symbol $(m/n)$ is defined as the sign of the permutation $x\mapsto mx$ on the set $\Z/n$. Let's give a simple example: $(7/5)$. The permutation on $\{1,2,3,4\}$ is given by $(1 2 4 3) = (1 2)(2 4)(4 3)$ which has an odd number of transpositions. […] A perfect number is a positive integer $n$ such that $$\sum_{d|n} d = 2n.$$ Put another way, $n$ is the sum of its proper divisors. Check out a quick intro to perfect numbers that I wrote last November. The first three perfect numbers are $6, 28,$ and $496$. Currently, the largest perfect number, corresponding to […]
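The permutation-sign definition of the Jacobi symbol recalled above can be computed mechanically by decomposing $x\mapsto mx$ on $\Z/n$ into cycles; a cycle of even length contributes a factor $-1$ to the sign. A small sketch (the function name is mine):

```python
# Jacobi symbol (m/n) as the sign of the permutation x -> m*x on Z/n,
# computed by counting cycle lengths. Requires gcd(m, n) = 1 so that the
# map really is a permutation.

from math import gcd

def jacobi_via_sign(m, n):
    """Sign of x -> m*x mod n on {0, ..., n-1}, for gcd(m, n) = 1."""
    assert gcd(m, n) == 1
    seen = [False] * n
    sign = 1
    for start in range(n):
        if seen[start]:
            continue
        # trace the cycle through `start`
        length = 0
        x = start
        while not seen[x]:
            seen[x] = True
            x = (m * x) % n
            length += 1
        if length % 2 == 0:      # an even-length cycle is an odd permutation
            sign = -sign
    return sign

assert jacobi_via_sign(7, 5) == -1   # the worked example (7/5) above
assert jacobi_via_sign(1, 5) == 1    # the identity permutation
assert jacobi_via_sign(4, 5) == 1    # 4 = 2^2 is a square mod 5
```

For $(7/5)$ the map is $x\mapsto 2x \bmod 5$, whose nontrivial cycle is the 4-cycle $(1\,2\,4\,3)$, giving sign $-1$, in agreement with the transposition count in the post.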
Hi, can someone provide me some self-reading material for condensed matter theory? I've done QFT previously, for which I could happily read Peskin supplemented with David Tong. Can you please suggest some references along those lines? Thanks @skullpatrol The second one was in my MSc and covered considerably less than my first and (I felt) didn't do it in any particularly great way, so distinctly average. The third was pretty decent - I liked the way he did things and it was essentially a more mathematically detailed version of the first :) 2. A weird particle or state that is made of a superposition of a torus region with clockwise momentum and anticlockwise momentum, resulting in one that has no momentum along the major circumference of the torus but still nonzero momentum in directions that are not pointing along the torus Same thought as you, however I think the major challenge of such a simulator is the computational cost. GR calculations, with their highly nonlinear nature, might be more costly than a computation of a protein. However I can see some ways of approaching it. Recall how Slereah was building some kind of spacetime database; that could be the first step. Next, one might look to machine learning techniques to help with the simulation by using the classifications of spacetimes, as machines are known to perform very well on sign problems, as a recent paper has shown Since the GR equations are ultimately a system of 10 nonlinear PDEs, it might be possible that the solution strategy has some relation to the class of spacetime under consideration, and that might help heavily reduce the parameters needed to simulate them I just mean this: the EFE is a tensor equation relating a set of symmetric 4 × 4 tensors. Each tensor has 10 independent components.
The four Bianchi identities reduce the number of independent equations from 10 to 6, leaving the metric with four gauge-fixing degrees of freedom, which correspond to the freedom to choose a coordinate system. @ooolb Even if that is really possible (I can always talk about things from a non-joking perspective), the issue is that 1) unlike other people, I cannot incubate my dreams for a certain topic due to Mechanism 1 (conscious desires have a reduced probability of appearing in dreams), and 2) in 6 years, my dreams have yet to show any sign of revisiting the exact same idea, and there are no known instances of either sequel dreams or recurring dreams @0celo7 I felt this aspect could be helped by machine learning. You can train a neural network with some PDEs of a known class with some known constraints, and let it figure out the best solution for some new PDE after, say, training it on 1000 different PDEs Actually that makes me wonder: is the space of all coordinate choices larger than that of all possible moves in Go? enumaris: From what I understood from the dream, the warp drive shown here may be some variation of the Alcubierre metric with a global topology that has 4 holes in it, whereas the original Alcubierre drive, if I recall, doesn't have holes orbit stabilizer: the h Bar is my home chat, because it is the first SE chat I joined. The maths chat is the 2nd one I joined, followed by periodic table, biosphere, factory floor and many others Btw, since gravity is nonlinear, do we expect that a region where spacetime is frame-dragged in the clockwise direction, superimposed on a spacetime that is frame-dragged in the anticlockwise direction, will result in a spacetime with no frame drag? (one possible physical scenario where I can envision this occurring is when two massive rotating objects with opposite angular velocities are on course to merge) Well, I'm a beginner in the study of General Relativity, ok?
My knowledge of the subject is based on books like Schutz, Hartle, Carroll and introductory papers. About quantum mechanics I still have poor knowledge. So, what I meant by "gravitational double-slit experiment" is: is there a gravitational analogue of the double-slit experiment for gravitational waves? @JackClerk the double-slit experiment is just interference of two coherent sources, where we get the two sources from a single light beam using the two slits. But gravitational waves interact so weakly with matter that it's hard to see how we could screen a gravitational wave to get two coherent GW sources. But if we could figure out a way to do it then yes, GWs would interfere just like light waves. Thank you @Secret and @JohnRennie. But to conclude the discussion, I want to put a "silly picture" here: imagine a huge double-slit plate in space close to a strong source of gravitational waves. Then, like water waves and light, will we see the pattern? So, if the source (like a black hole binary) is sufficiently far away, then in the regions of destructive interference spacetime would have a flat geometry, and if we put a spherical object in this region the metric will become Schwarzschild-like. Pardon, I just spent some naive-philosophy time here with these discussions. The situation was even more dire for Calculus and I managed! This is a neat strategy I have found: revision becomes more bearable when I have The h Bar open on the side. In all honesty, I actually prefer exam season! At all other times, as I have observed this semester at least, there is nothing exciting to do. This system of tortuous panic followed by a reward is obviously very satisfying.
My opinion is that I need you Kaumudi to decrease the probability of the h Bar having software-infrastructure conversations, which confuse me like hell and are why I took refuge in the maths chat a few weeks ago (not that I have questions to ask or anything; like I said, it is a little relieving to be with friends while I am panicked. I think it is possible to gauge how much of a social recluse I am from this, because I spend some of my free time hanging out with you lot, even though I am literally inside a hostel teeming with hundreds of my peers) that's true. though back in high school, regardless of language, our teacher taught us to always indent your code to allow easy reading and troubleshooting. We were also taught the four-space indentation convention @JohnRennie I wish I could just tab because I am also lazy, but sometimes tab inserts 4 spaces while other times it inserts 5-6 spaces, thus screwing up a block of if-then conditions in my code, which is why I had no choice I currently automate almost everything from job submission to data extraction, and later on, with the help of the machine learning group at my uni, we might be able to automate a GUI library search thingy I can do all tasks related to my work without leaving the text editor (of course, that text editor is emacs). The only inconvenience is that some websites don't render in an optimal way (but most of the work-related ones do) Hi to all. Does anyone know where I could write MATLAB code online (for free)? Apparently another one of my institution's great inspirations is to have a MATLAB-oriented computational physics course without having MATLAB on the university's PCs. Thanks. @Kaumudi.H Hacky way: 1st thing is that $\psi\left(x, y, z, t\right) = \psi\left(x, y, t\right)$, so no propagation in the $z$-direction. Now, in '$1$ unit' of time, it travels $\frac{\sqrt{3}}{2}$ units in the $y$-direction and $\frac{1}{2}$ units in the $x$-direction.
Use this to form a triangle and you'll get the answer with simple trig :) @Kaumudi.H Ah, it was okayish. It was mostly memory-based. Each small question was worth 10-15 marks. No idea what they expect me to write for questions like "Describe acoustic and optic phonons" for 15 marks!! I only wrote two small paragraphs...meh. I don't like this subject much :P (physical electronics). Hope to do better in the upcoming tests so that there isn't a huge effect on the GPA. @Blue Ok, thanks. I found a way by connecting to the servers of the university (the program isn't installed on the PCs in the computer room, but if I connect to the server of the university, which means remotely running another environment, I found an older version of MATLAB). But thanks again. @user685252 No; I am saying that it has no bearing on how good you actually are at the subject - it has no bearing on how good you are at applying knowledge; it doesn't test problem-solving skills; it doesn't take into account that, if I'm sitting in the office having forgotten the difference between different types of matrix decomposition or something, I can just search the internet (or a textbook), so it doesn't say how good someone is at research in that subject; it doesn't test how good you are at deriving anything - someone can write down a definition without any understanding, while someone who can derive it but has forgotten it probably won't have time in an exam situation. In short, testing memory is not the same as testing understanding. If you really want to test someone's understanding, give them a few problems in that area that they've never seen before and give them a reasonable amount of time to do it, with access to textbooks etc.
\(\langle M \rangle = \beta N m^2 H\)

\(\sigma = N\ln 2 -\frac{1}{2}N\left(\beta m H\right)^2\)

\(d\sigma = 0\Rightarrow \sigma = \text{const}\)

In \(\sigma\), the quantities \(N\), \(m\) and \(\ln 2\) are trivially constant. So if \(\sigma\) is constant (adiabatic process), then \(\beta H\) is constant too. But in \(\langle M \rangle\), the only factor which isn't trivially constant is also \(\beta H\). So it is likewise constant.
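The argument in the post can be compressed into a single chain of implications; this is a restatement of the formulas already given above, not new physics:

```latex
% Adiabatic process: sigma depends on the field and temperature only
% through the combination (beta H), so holding sigma fixed pins down
% beta H, and hence the magnetization as well.
\sigma = N\ln 2 - \tfrac{1}{2} N (\beta m H)^2 = \text{const}
\;\Longrightarrow\; \beta H = \text{const}
\;\Longrightarrow\; \langle M \rangle = N m^2 (\beta H) = \text{const}.
```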