The Product of a Subgroup and a Normal Subgroup is a Subgroup

Problem 448. Let $G$ be a group, let $H$ be a subgroup of $G$, and let $N$ be a normal subgroup of $G$. The product of $H$ and $N$ is defined to be the subset
\[H\cdot N=\{hn\in G\mid h \in H, n\in N\}.\]
Prove that the product $H\cdot N$ is a subgroup of $G$.

Recall that a subgroup $N$ of a group $G$ is called a normal subgroup if for any $g\in G$ and $n\in N$ we have $gng^{-1}\in N$.

Proof. Let $e$ be the identity element of $G$. Note that $H\cdot N$ is nonempty, since it contains $e=ee$. We prove that $H\cdot N$ is closed under products and inverses.

Let $h_1n_1$ and $h_2n_2$ be elements of $H\cdot N$, where $h_1, h_2\in H$ and $n_1, n_2\in N$. We have
\begin{align*}(h_1n_1)(h_2n_2)&=h_1en_1h_2n_2\\&=h_1(h_2h_2^{-1})n_1h_2n_2 && \text{since $h_2h_2^{-1}=e$}\\&=(h_1h_2)(h_2^{-1}n_1h_2n_2). \tag{*}\end{align*}
Since $H$ is a subgroup, the element $h_1h_2$ is in $H$. Also, since $N$ is a normal subgroup, the element $h_2^{-1}n_1h_2$ is in $N$. Hence
\[h_2^{-1}n_1h_2n_2=(h_2^{-1}n_1h_2)n_2\in N.\]
It follows from (*) that the product
\[(h_1n_1)(h_2n_2)=(h_1h_2)(h_2^{-1}n_1h_2n_2)\in H\cdot N.\]
Therefore, the product $H\cdot N$ is closed under products.

Next, let $hn$ be any element of $H\cdot N$, where $h\in H$ and $n\in N$. Then we have
\begin{align*}(hn)^{-1}&=n^{-1}h^{-1}\\&=en^{-1}h^{-1}\\&=(h^{-1}h)n^{-1}h^{-1} &&\text{since $h^{-1}h=e$}\\&=h^{-1}(hn^{-1}h^{-1}).\end{align*}
Since $N$ is a normal subgroup, we have $hn^{-1}h^{-1}\in N$, and hence
\[(hn)^{-1}=h^{-1}(hn^{-1}h^{-1})\in H\cdot N.\]
Thus, the product $H\cdot N$ is closed under inverses. This completes the proof that the product $H\cdot N$ is a subgroup of $G$.
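As a quick sanity check of the statement (an illustration, not part of the proof), here is a minimal Python sketch that verifies $H\cdot N$ is a subgroup in the concrete case $G=S_3$, $H=\{e,(0\,1)\}$, $N=A_3$:

# Sketch: verify H·N is a subgroup in a concrete example, G = S_3 realized as
# permutations of (0, 1, 2), with H generated by the transposition (0 1) and
# N = A_3 (which is normal in S_3).
from itertools import permutations

def compose(p, q):
    """Composition p∘q of permutations given as tuples: apply q first, then p."""
    return tuple(p[q[i]] for i in range(len(q)))

def inverse(p):
    inv = [0] * len(p)
    for i, pi in enumerate(p):
        inv[pi] = i
    return tuple(inv)

H = {(0, 1, 2), (1, 0, 2)}               # {e, (0 1)}
N = {(0, 1, 2), (1, 2, 0), (2, 0, 1)}    # A_3: identity and the two 3-cycles

HN = {compose(h, n) for h in H for n in N}

# Check the subgroup axioms directly on the finite set HN.
closed_under_products = all(compose(a, b) in HN for a in HN for b in HN)
closed_under_inverses = all(inverse(a) in HN for a in HN)
print(len(HN), closed_under_products, closed_under_inverses)  # 6 True True

In this example $|H\cdot N| = |H|\,|N|/|H\cap N| = 6$, so $H\cdot N$ is all of $S_3$.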
(a) Prove that the column vectors of every $3\times 5$ matrix $A$ are linearly dependent.

Note that the column vectors of the matrix $A$ are linearly dependent if and only if the matrix equation
\[A\mathbf{x}=\mathbf{0}\]
has a nonzero solution $\mathbf{x}\in \R^5$. The equation is equivalent to a $3\times 5$ homogeneous system. As there are more variables than equations, the homogeneous system has infinitely many solutions. In particular, the equation has a nonzero solution $\mathbf{x}$. Hence the column vectors are linearly dependent.

(b) Prove that the row vectors of every $5\times 3$ matrix $B$ are linearly dependent.

Observe that the row vectors of the matrix $B$ are the column vectors of the transpose $B^{\trans}$. Note that the size of $B^{\trans}$ is $3\times 5$. In part (a), we showed that the column vectors of any $3\times 5$ matrix are linearly dependent. It follows that the column vectors of $B^{\trans}$ are linearly dependent. Hence the row vectors of $B$ are linearly dependent.
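As a numerical illustration of part (a) (a sketch with an arbitrarily chosen matrix, not part of the proof), a nonzero null-space vector can be exhibited explicitly:

# Sketch: exhibit a nonzero x with A x = 0 for a particular 3x5 matrix.
# The specific matrix below is just an example, not from the text.
import numpy as np
from scipy.linalg import null_space

A = np.array([[1., 2., 3., 4., 5.],
              [0., 1., 0., 1., 0.],
              [2., 0., 1., 0., 2.]])

ns = null_space(A)            # columns form an orthonormal basis of the null space
print(ns.shape[1] >= 2)       # True: nullity = 5 - rank(A) >= 2
x = ns[:, 0]                  # one nonzero solution
print(np.allclose(A @ x, 0))  # True, so the columns of A are linearly dependent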
Infinitely many positive solutions for nonlinear first-order BVPs with integral boundary conditions on time scales

Abstract: In this paper, we investigate the existence of infinitely many positive solutions for the nonlinear first-order BVP with integral boundary conditions $$ \cases x^{\Delta}(t)+p(t)x^{\sigma}(t)=f(t,x^{\sigma}(t)), & t\in (0,T)_{\mathbb{T}}, \\ x(0)-\beta x^{\sigma}(T)=\alpha\int_{0}^{\sigma(T)}x^{\sigma}(s)\Delta g(s), \endcases $$ where $x^{\sigma}=x\circ\sigma$, $f\colon [0,T]_{\mathbb{T}}\times\mathbb{R^{+}}\rightarrow\mathbb{R^{+}}$ is continuous, $p$ is regressive and rd-continuous, $\alpha,\beta\geq0$, and $g\colon [0,T]_{\mathbb{T}}\rightarrow \mathbb{R}$ is a nondecreasing function. By using the fixed-point index theory and a new fixed-point theorem in a cone, we provide sufficient conditions for the existence of infinitely many positive solutions to the above boundary value problem on the time scale $\mathbb{T}$.

Keywords: Time scale; boundary value problem; positive solution; integral boundary condition
Geometry and Topology Seminar

Fall 2016

September 9: Bing Wang (UW Madison), "The extension problem of the mean curvature flow" (local)
September 16: Ben Weinkove (Northwestern University), "Gauduchon metrics with prescribed volume form" (host: Lu Wang)
September 23: Jiyuan Han (UW Madison), "Deformation theory of scalar-flat ALE Kahler surfaces" (local)
September 30: open
October 7: Yu Li (UW Madison), "Ricci flow on asymptotically Euclidean manifolds" (local)
October 14: Sean Howe (University of Chicago), "Representation stability and hypersurface sections" (host: Melanie Matchett Wood)
October 21: Nan Li (CUNY), "Quantitative estimates on the singular sets of Alexandrov spaces" (host: Lu Wang)
October 28: Ronan Conlon (Florida International University), "New examples of gradient expanding K\"ahler-Ricci solitons" (host: Bing Wang)
November 4: Jonathan Zhu (Harvard University), "Entropy and self-shrinkers of the mean curvature flow" (host: Lu Wang)
November 11: Richard Kent (Wisconsin), "Analytic functions from hyperbolic manifolds" (local)
November 18: Caglar Uyanik (Illinois), "TBA" (host: Kent)
Thanksgiving Recess
December 2: Peyman Morteza (UW Madison), "TBA" (local)
December 9: Yu Zeng (University of Rochester), "TBA" (host: Bing Wang)
December 16: open

Spring 2017

Jan 20, Jan 27, Feb 3, Feb 10, Feb 17, Feb 24, March 3, March 10, March 17, March 31, April 7, April 14, April 21: open. March 24: Spring Break.
April 28: Bena Tshishiku (Harvard), "TBA" (host: Dymarz)

Fall Abstracts

Ronan Conlon: New examples of gradient expanding K\"ahler-Ricci solitons
A complete K\"ahler metric $g$ on a K\"ahler manifold $M$ is a \emph{gradient expanding K\"ahler-Ricci soliton} if there exists a smooth real-valued function $f:M\to\mathbb{R}$ with $\nabla^{g}f$ holomorphic such that $\operatorname{Ric}(g)-\operatorname{Hess}(f)+g=0$. I will present new examples of such metrics on the total space of certain holomorphic vector bundles. This is joint work with Alix Deruelle (Universit\'e Paris-Sud).

Jiyuan Han: Deformation theory of scalar-flat ALE Kahler surfaces
We prove a Kuranishi-type theorem for deformations of complex structures on ALE Kahler surfaces. This is used to prove that for any scalar-flat Kahler ALE surface, all small deformations of complex structure also admit scalar-flat Kahler ALE metrics. A local moduli space of scalar-flat Kahler ALE metrics is then constructed, which is shown to be universal up to small diffeomorphisms (that is, diffeomorphisms which are close to the identity in a suitable sense). A formula for the dimension of the local moduli space is proved in the case of a scalar-flat Kahler ALE surface which deforms to a minimal resolution of $\C^2/\Gamma$, where $\Gamma$ is a finite subgroup of U(2) without complex reflections. This is joint work with Jeff Viaclovsky.

Sean Howe: Representation stability and hypersurface sections
We give stability results for the cohomology of natural local systems on spaces of smooth hypersurface sections as the degree goes to $\infty$. These results give new geometric examples of a weak version of representation stability for symmetric, symplectic, and orthogonal groups.
The stabilization occurs in point-counting and in the Grothendieck ring of Hodge structures, and we give explicit formulas for the limits using a probabilistic interpretation. These results have natural geometric analogs; for example, we show that the "average" smooth hypersurface in $\mathbb{P}^n$ is $\mathbb{P}^{n-1}$!

Nan Li: Quantitative estimates on the singular sets of Alexandrov spaces
The definition of quantitative singular sets was initiated by Cheeger and Naber. They proved some volume estimates on such singular sets in non-collapsed manifolds with lower Ricci curvature bounds and their limit spaces. On the quantitative singular sets in Alexandrov spaces, we obtain stronger estimates in a collapsing fashion. We also show that the $(k,\epsilon)$-singular sets are $k$-rectifiable and such structure is sharp in some sense. This is joint work with Aaron Naber.

Yu Li: Ricci flow on asymptotically Euclidean manifolds
In this talk, we prove that if an asymptotically Euclidean (AE) manifold with nonnegative scalar curvature has long time existence of Ricci flow, it converges to the Euclidean space in the strong sense. By convergence, the mass will drop to zero as time tends to infinity. Moreover, in the three dimensional case, we use Ricci flow with surgery to give an independent proof of the positive mass theorem. A classification of diffeomorphism types is also given for all AE 3-manifolds with nonnegative scalar curvature.

Gaven Marin: TBA

Peyman Morteza: TBA

Richard Kent: Analytic functions from hyperbolic manifolds
Thurston's Geometrization Conjecture, now a celebrated theorem of Perelman, tells us that most 3-manifolds are naturally geometric in nature. In fact, most 3-manifolds admit hyperbolic metrics. In the 1970s, Thurston proved the Geometrization Conjecture in the case of Haken manifolds, and the proof revolutionized 3-dimensional topology, hyperbolic geometry, Teichmüller theory, and dynamics. Thurston's proof is by induction, constructing a hyperbolic structure from simpler pieces. At the heart of the proof is an analytic function called the skinning map that one must understand in order to glue hyperbolic structures together. A better understanding of this map would more brightly illuminate the interaction between topology and geometry in dimension three. I will discuss what is currently known about this map.

Caglar Uyanik: TBA

Bing Wang: The extension problem of the mean curvature flow
We show that the mean curvature blows up at the first finite singular time for a closed smooth embedded mean curvature flow in $\mathbb{R}^3$. A key ingredient of the proof is to show a two-sided pseudo-locality property of the mean curvature flow, whenever the mean curvature is bounded. This is joint work with Haozhao Li.

Ben Weinkove: Gauduchon metrics with prescribed volume form
Every compact complex manifold admits a Gauduchon metric in each conformal class of Hermitian metrics. In 1984 Gauduchon conjectured that one can prescribe the volume form of such a metric. I will discuss the proof of this conjecture, which amounts to solving a nonlinear Monge-Ampere type equation. This is joint work with Gabor Szekelyhidi and Valentino Tosatti.

Jonathan Zhu: Entropy and self-shrinkers of the mean curvature flow
The Colding-Minicozzi entropy is an important tool for understanding the mean curvature flow (MCF), and is a measure of the complexity of a submanifold. Together with Ilmanen and White, Colding and Minicozzi conjectured that the round sphere minimises entropy amongst all closed hypersurfaces.
We will review the basics of MCF and their theory of generic MCF, then describe the resolution of the above conjecture, due to J. Bernstein and L. Wang for dimensions up to six and recently claimed by the speaker for all remaining dimensions. A key ingredient in the latter is the classification of entropy-stable self-shrinkers that may have a small singular set.

Spring Abstracts

Bena Tshishiku: "TBA"

Archive of past Geometry seminars
2015-2016: Geometry_and_Topology_Seminar_2015-2016
2014-2015: Geometry_and_Topology_Seminar_2014-2015
2013-2014: Geometry_and_Topology_Seminar_2013-2014
2012-2013: Geometry_and_Topology_Seminar_2012-2013
2011-2012: Geometry_and_Topology_Seminar_2011-2012
2010: Fall-2010-Geometry-Topology
NTS Abstracts, Spring 2019

Jan 25: Asif Ali Zaman, "A log-free zero density estimate for Rankin-Selberg $L$-functions and applications"
Abstract: We discuss a log-free zero density estimate for Rankin-Selberg $L$-functions of the form $L(s,\pi\times\pi_0)$, where $\pi$ varies in a given set of cusp forms and $\pi_0$ is a fixed cusp form. This estimate is unconditional in many cases of interest, and holds in full generality assuming an average form of the generalized Ramanujan conjecture. There are several applications of this density estimate related to the rarity of Landau-Siegel zeros of Rankin-Selberg $L$-functions, the Chebotarev density theorem, and nontrivial bounds for torsion in class groups of number fields assuming the existence of a Siegel zero. We will highlight the latter two topics. This represents joint work with Jesse Thorner.

Feb 1: Yunqing Tang, "Exceptional splitting of reductions of abelian surfaces with real multiplication"
Abstract: Zywina showed that after passing to a suitable field extension, every abelian surface $A$ with real multiplication over some number field has geometrically simple reduction modulo $\frak{p}$ for a density one set of primes $\frak{p}$. One may ask whether its complement, the density zero set of primes $\frak{p}$ such that the reduction of $A$ modulo $\frak{p}$ is not geometrically simple, is infinite. Such a question is analogous to the study of exceptional mod $\frak{p}$ isogeny between two elliptic curves in the recent work of Charles. In this talk, I will show that abelian surfaces over number fields with real multiplication have infinitely many non-geometrically-simple reductions. This is joint work with Ananth Shankar.

Feb 8: Roman Fedorov, "A conjecture of Grothendieck and Serre on principal bundles in mixed characteristic"
Abstract: Let $G$ be a reductive group scheme over a regular local ring $R$. An old conjecture of Grothendieck and Serre predicts that a principal $G$-bundle over $R$ is trivial if it is trivial over the fraction field of $R$. The conjecture has recently been proved in the "geometric" case, that is, when $R$ contains a field. In the remaining case, the difficulty comes from the fact that the situation is more rigid, so that a certain general position argument does not go through. I will discuss this difficulty and a way to circumvent it to obtain some partial results.

Feb 13: Frank Calegari, "Recent Progress in Modularity"
Abstract: We survey some recent work in modularity lifting, and also describe some applications of these results. This will be based partly on joint work with Allen, Caraiani, Gee, Helm, Le Hung, Newton, Scholze, Taylor, and Thorne, and also on joint work with Boxer, Gee, and Pilloni.

Feb 15: Junho Peter Whang, "Integral points and curves on moduli of local systems"
Abstract: We consider the Diophantine geometry of moduli spaces for special linear rank two local systems on surfaces with fixed boundary traces. After motivating their Diophantine study, we establish a structure theorem for their integral points via mapping class group descent, generalizing classical work of Markoff (1880). We also obtain Diophantine results for algebraic curves in these moduli spaces, including effective finiteness of imaginary quadratic integral points for non-special curves.
Feb 22: Yifan Yang, "Rational torsion on the generalized Jacobian of a modular curve with cuspidal modulus"
Abstract: In this talk we consider the rational torsion subgroup of the generalized Jacobian of the modular curve $X_0(N)$ with respect to a reduced divisor given by the sum of all cusps. When $N=p$ is a prime, we find that the rational torsion subgroup is always cyclic of order 2 (while that of the usual Jacobian of $X_0(p)$ grows linearly as $p$ tends to infinity, according to a well-known result of Mazur). Subject to some unproven conjecture about the rational torsion of the Jacobian of $X_0(p^n)$, we also determine the structure of the rational torsion subgroup of the generalized Jacobian of $X_0(p^n)$. This is joint work with Takao Yamazaki.

March 22: Fang-Ting Tu, "Supercongruence for Rigid Hypergeometric Calabi-Yau Threefolds"
Abstract: This is joint work with Ling Long, Noriko Yui, and Wadim Zudilin. We establish the supercongruences for the rigid hypergeometric Calabi-Yau threefolds over the rational numbers. These supercongruences were conjectured by Rodriguez-Villegas in 2003. In this work, we use two different approaches. The first method is based on Dwork's p-adic unit root theory, and the other is based on the theory of hypergeometric motives and hypergeometric functions over finite fields. In this talk, I will introduce the first method, which allows us to obtain the supercongruences for ordinary primes.

April 12: Junehyuk Jung, "Quantum Unique Ergodicity and the number of nodal domains of automorphic forms"
Abstract: It has been known for decades that on a flat torus or on a sphere, there exist sequences of eigenfunctions having a bounded number of nodal domains. In contrast, for a manifold with chaotic geodesic flow, the number of nodal domains of eigenfunctions is expected to grow with the eigenvalue. In this talk, I will explain how one can prove that this is indeed true for surfaces where the Laplacian is quantum uniquely ergodic, under certain symmetry assumptions. As an application, we prove that the number of nodal domains of Maass-Hecke eigenforms on compact arithmetic triangles tends to $+\infty$ as the eigenvalue grows. I am also going to discuss the nodal domains of automorphic forms on $SL_2(\mathbb{Z})\backslash SL_2(\mathbb{R})$. Under a minor assumption, I will give a quick proof that the real part of a weight $k\neq 0$ automorphic form has only two nodal domains. This result captures the fact that a 3-manifold with the Sasaki metric never admits a chaotic geodesic flow. This talk is based on joint works with S. Zelditch and S. Jang.

April 19: Hang Xue (Arizona), "Arithmetic theta lifts and the arithmetic Gan--Gross--Prasad conjecture"
Abstract: I will explain the arithmetic analogue of the Gan--Gross--Prasad conjecture for unitary groups. I will also explain how to use arithmetic theta lifts to prove certain endoscopic cases of it.

May 3: Matilde Lalin (Université de Montréal), "The mean value of cubic $L$-functions over function fields"
Abstract: We will start by exploring the problem of finding moments for Dirichlet $L$-functions, including the first main results and the standard conjectures. We will then discuss the problem for function fields. We will then present a result about the first moment of $L$-functions associated to cubic characters over $\F_q(t)$, when $q\equiv 1 \bmod{3}$. The case of number fields was considered in previous work, but never for the full family of cubic twists over a field containing the third roots of unity.
This is joint work with C. David and A. Florea.

May 10: Hector Pasten (Harvard University), "Shimura curves and estimates for abc triples"
Abstract: I will explain a new connection between modular forms and the abc conjecture. In this approach, one considers maps to a given elliptic curve coming from various Shimura curves, which gives a way to obtain unconditional results towards the abc conjecture starting from good estimates for the variation of the degree of these maps. The approach to control this variation of degrees involves a number of tools, such as Arakelov geometry, automorphic forms, and analytic number theory. The final result is an unconditional estimate that lies beyond the existing techniques in the context of the abc conjecture, such as linear forms in logarithms.
Collisional frequency is the average rate at which two reactants collide in a given system; it expresses the average number of collisions per unit time in a defined system.

Background and Overview

To fully understand how the collisional frequency equation is derived, consider a simple system (a jar full of helium) and add each new concept in a step-by-step fashion. Before continuing with this topic, it is suggested that the articles on collision theory and the collisional cross section be reviewed, as these topics are essential to understanding collisional frequency. The equation for collisional frequency is the following:

\(Z_{AB} = N_{A}N_{B}\left(r_{A} + r_{B}\right)^2\sqrt{ \dfrac{8\pi k_{B}T}{\mu_{AB}}} \)

Although these statements are idealizations, the following assumptions are used when deriving and calculating the collisional frequency:

All molecules travel through space in straight lines.
All molecules are hard, solid spheres.
The reaction of interest is between only two molecules.
Collisions are hit or miss only. They occur when the distance between the centers of the two reactants is less than or equal to the sum of their respective radii. Even if the two molecules barely miss each other, it is still considered a complete miss.
The two molecules do not interact (in reality, their electron clouds would interact, but this has no bearing on the equation).

Single Molecule Moving

In determining the collisional frequency for a single molecule, \(Z_i\), picture a jar filled with helium atoms. These atoms collide by hitting other helium atoms within the jar. If every atom except one were frozen and the number of collisions in one minute counted, the collisional frequency (per minute) of a single atom of helium within the container could be determined. This is the basis for the equation

\(Z_i = \dfrac{(\text{Volume of Collisional Cylinder})(\text{Density})}{\text{Time}}\)

While the helium atom is moving through space, it sweeps out a collisional cylinder. If the center of another helium atom is present within the cylinder, a collision occurs. The length of the cylinder is the helium atom's mean relative speed, \(\sqrt{2}\langle c \rangle\), multiplied by the elapsed time, \(\Delta{t}\). The mean relative speed is used instead of the average speed because, in reality, the other atoms are moving as well, and this factor accounts for some of that. The cross-sectional area of the cylinder is the helium atom's collisional cross section. Although a collision will most likely change the direction in which an atom moves, it does not affect the volume of the collisional cylinder, because the density is uniform throughout the system. Therefore, an atom has the same chance of colliding with another atom regardless of direction, as long as the distance traveled is the same.

\(\text{Volume of Collisional Cylinder} = \sqrt{2}\pi{d^2}\langle c \rangle\Delta{t}\)

Density

Next, account must be taken of the other atoms that are moving and that the helium atom can hit; this is simply the density \(\rho\) of helium within the system. The density component can be expanded in terms of N and V, where N is the number of atoms in the system and V is the volume of the system.
Alternatively, the density can be written in terms of pressure, relating pressure to volume through the ideal gas law, $PV = nRT$ (here $n$ is the number of moles and $N_{\mathrm{Avog}}$ is Avogadro's number):

\[\rho = \dfrac{N}{V} = \dfrac{nN_{\mathrm{Avog}}}{V} = \dfrac{PN_{\mathrm{Avog}}}{RT} = \dfrac{P}{k_BT}\]

The Full Equation

When the values above are substituted into $Z_i$, the following equation results:

\[{Z_{i} = \dfrac{\sqrt{2}\pi d^{2} \left \langle c \right \rangle\Delta{t}\left(\dfrac{N}{V}\right)}{\Delta{t}}}\]

Cancel $\Delta{t}$:

\[Z_{i} = \sqrt{2}\pi d^{2} \left \langle c \right \rangle\left(\dfrac{N}{V}\right)\]

All Molecules Moving: \(Z_{ii}\)

Now imagine that all of the helium atoms in the jar are moving again. When all of the collisions for every atom of helium moving within the jar in a minute are counted, $Z_{ii}$ results. The relation is thus:

\[Z_{ii} = \dfrac{1}{2}Z_{i}\left(\dfrac{N}{V}\right)\]

This expands to:

\[Z_{ii} = \dfrac{\sqrt{2}}{2}\pi d^{2}\left \langle c \right \rangle\left(\dfrac{N}{V}\right)^2\]

System with Collisions Between Different Types of Molecules: \(Z_{AB}\)

Consider a system of hydrogen in a jar:

\[H_{A} + H_{BC} \leftrightharpoons H_{AB} + H_{C}\]

In considering hydrogen in a jar instead of helium, there are several complications. First, the $H_{A}$ atoms have a smaller radius than the $H_{BC}$ molecules. This is easily solved by accounting for the different radii, which changes \(d^{2}\) to \(\left(r_A + r_B\right)^2\). The second complication is that the number of $H_{A}$ atoms can be quite different from the number of $H_{BC}$ molecules. So we replace \(\dfrac{\sqrt{2}}{2}\left(\dfrac{N}{V}\right)^2\) with $N_AN_B$ to account for the numbers of both reacting species. Because two reactants are considered, $Z_{ii}$ becomes $Z_{AB}$, and the two changes are combined to give the following equation:

\[Z_{AB} = N_{A}N_{B}\pi\left(r_{A} + r_{B}\right)^2 \left \langle c \right \rangle\]

The mean speed, \( \left \langle c \right \rangle \), can be expanded:

\[ \left \langle c \right \rangle = \sqrt{\dfrac{8k_BT}{\pi m}}\]

This leads to the final change to the collisional frequency equation. Because two different molecules must be taken into account, the equation must accommodate molecules of different masses ($m$). So the mass $m$ is replaced by the reduced mass, \( \mu_{AB} \), converting a two-body problem into an effective one-body problem. Substituting \( \left \langle c \right \rangle \) into the $Z_{AB}$ equation gives:

\[Z_{AB} = N_{A}N_{B}\left(r_{A} + r_{B}\right)^2\pi\sqrt{ \dfrac{8k_{B}T}{\pi\mu_{AB}}}\]

Cancel \(\pi\):

\[Z_{AB} = N_{A}N_{B}\left(r_{A} + r_{B}\right)^2\sqrt{\dfrac{8\pi{k_{B}T}}{\mu_{AB}}}\]

where
\(N_A\) is the number of A molecules in the system,
\(N_B\) is the number of B molecules in the system,
\(r_A\) is the radius of molecule A,
\(r_B\) is the radius of molecule B,
\(k_B\) is the Boltzmann constant, \(k_B = 1.380 \times 10^{-23}\) J/K,
\(T\) is the temperature in Kelvin, and
\(\mu_{AB}\) is the reduced mass, given by \(\mu_{AB} = \dfrac{m_Am_B}{m_A + m_B}\).

Variables that Affect Collisional Frequency

Temperature: As is evident from the collisional frequency equation, the collisional frequency increases when the temperature increases.

Density: From a conceptual point of view, if the density is increased, the number of molecules per volume is also increased. If everything else remains constant, a single reactant comes in contact with more atoms in a denser system. Thus if the density is increased, the collisional frequency must also increase.
Size of Reactants: Increasing the size of the reactants increases the collisional frequency. This is directly due to increasing the radii of the reactants, which increases the collisional cross section and, in turn, the collisional cylinder. Because the $(r_A + r_B)$ term is squared, doubling the sum of the radii (for example, by doubling both radii) quadruples the collisional frequency.

Problems

If the temperature of the system were increased, how would the collisional frequency be affected?
If the masses of both the reactants were increased, how would the collisional frequency be affected?
0.4 moles of N$_2$ gas (molecular diameter $= 3.8\times 10^{-10}$ m and molar mass $= 28$ g/mol) occupies a 1-liter ($0.001\ \mathrm{m^3}$) container at 1 atm of pressure and at room temperature (298 K).
a) Calculate the number of collisions a single molecule makes in one second (hint: use $Z_i$).
b) Calculate the binary collision frequency (hint: use $Z_{ii}$).
A numerical sketch of these calculations is given after the references below.

References

Atkins, Peter and Julio de Paula. Physical Chemistry for the Life Sciences. New York, NY: W.H. Freeman and Company, 2006. pp. 282-288, 290.
Atkins, Peter. Physical Chemistry, Sixth Edition. New York, NY: W.H. Freeman and Company, 2000. pp. 29-30.
Atkins, Peter. Concepts in Physical Chemistry. New York, NY: W.H. Freeman and Company, 1995. p. 64.
Collision frequency. Web. 14 Mar 2011. <http://www.everyscience.com/Chemistr...ses/c.1255.php>
James, Ames. "Problem set 1." (2011): Print.

Contributors: Keith Dunaway (UCD), Imteaz Siddique (UCD)
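The following minimal Python sketch (not part of the original article) evaluates the $Z_i$ and $Z_{ii}$ formulas derived above for the N$_2$ problem, taking the number density from the stated amount (0.4 mol in 0.001 m³); the printed numbers are simply what the formulas give.

# Sketch: single-molecule collision rate Z_i and binary collision frequency Z_ii
# for N2 at 298 K, using the formulas derived above. Inputs follow the problem
# statement (0.4 mol in 1 L, d = 3.8e-10 m, M = 28 g/mol).
import math

k_B = 1.380649e-23        # Boltzmann constant, J/K
N_Avog = 6.02214076e23    # Avogadro's number, 1/mol

T = 298.0                 # temperature, K
d = 3.8e-10               # molecular diameter, m
M = 28e-3                 # molar mass, kg/mol
m = M / N_Avog            # mass of one molecule, kg

N = 0.4 * N_Avog          # number of molecules
V = 0.001                 # volume, m^3
n_density = N / V         # number density, 1/m^3

c_mean = math.sqrt(8 * k_B * T / (math.pi * m))            # mean speed, m/s

Z_i = math.sqrt(2) * math.pi * d**2 * c_mean * n_density   # collisions per second for one molecule
Z_ii = 0.5 * Z_i * n_density                               # total collisions per second per m^3

print(f"mean speed = {c_mean:.3e} m/s")
print(f"Z_i  = {Z_i:.3e} s^-1")
print(f"Z_ii = {Z_ii:.3e} m^-3 s^-1")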
Is it true that the cardinality of every maximal linearly independent subset of a finitely generated free module $A^{n}$ is equal to $n$ (not just at most $n$, but in fact $n$)? Here $A$ is a nonzero commutative ring. I know that it's true if $A$ is Noetherian or an integral domain. I thought it was not true in general, but I came up with something that looks like a proof and I can't figure out where it went wrong.

I think I have a counter-example. Let $A$ be the ring of functions $f$ from $\mathbb{C}^2 \setminus (0,0) \to \mathbb{C}$ such that there is a polynomial $\widetilde{f} \in \mathbb{C}[x,y]$ with $\widetilde{f}(x,y)=f(x,y)$ for all but finitely many $(x,y)$ in $\mathbb{C}^2$. Map $A$ into $A^2$ by $f \mapsto (fx, fy)$. We check that this is injective: if $fx=0$ then $f$ vanishes wherever $x\neq 0$, that is, off the $y$-axis. Similarly, if $fy=0$, then $f$ vanishes off the $x$-axis. So $(fx, fy) = (0,0)$ implies that $f$ is zero everywhere on $\mathbb{C}^2 \setminus (0,0)$.

We now claim that there does not exist $(u,v)$ in $A^2$ such that $(f,g) \mapsto (fx+gu, \ fy+gv)$ is injective. Suppose such a $(u,v)$ exists. Let $\widetilde{u}$ and $\widetilde{v}$ be the polynomials in $\mathbb{C}[x,y]$ which coincide with $u$ and $v$ at all but finitely many points. Let $\Delta=\widetilde{u} y - \widetilde{v} x$. Since $\Delta$ is a polynomial which vanishes at $(0,0)$, it is not a nonzero constant. Thus, $\Delta$ vanishes on an infinite subset of $\mathbb{C}^2$. Let $(p,q)$ be a point in $\mathbb{C}^2 \setminus (0,0)$ such that $\Delta(p,q)=0$, $\widetilde{u}(p,q)= u(p,q)$ and $\widetilde{v}(p,q)=v(p,q)$. So $q u(p,q) - p v(p,q) =0$. Since $(p,q) \neq (0,0)$, there is some $k \in \mathbb{C}$ such that $(u(p,q), v(p,q)) = (kp, kq)$. Take $f$ to be $-k$ at $(p,q)$ and $0$ elsewhere; let $g$ be $1$ at $(p,q)$ and $0$ elsewhere. Then $(fx+gu, fy+gv)=0$, and the map $(f,g) \mapsto (fx+gu, \ fy+gv)$ is not injective.

We have to prove that $m \leq n$ if there is a monomorphism $A^m \to A^n$. Such a map is given by an $n \times m$ matrix with entries in $A$, and the subring of $A$ generated by these finitely many entries is a finitely generated $\mathbb{Z}$-algebra, hence noetherian by the Hilbert basis theorem; since the same matrix defines a monomorphism over this subring, it is enough to consider the case that $A$ is noetherian. You already know the proof for this case, but I will add it anyway. Pick a minimal prime ideal $\mathfrak{p} \subseteq A$. This exists since $A \neq 0$. Now localize at $\mathfrak{p}$. Then we may replace $A$ by $A_{\mathfrak{p}}$, and thereby assume that $A$ is a $0$-dimensional noetherian ring, thus artinian. For such a ring it is known that the length of finitely generated modules is finite, and additive on short exact sequences. In particular $m \cdot l(A) \leq n \cdot l(A)$. Since $l(A) \neq 0$ is finite, we get $m \leq n$.

By the way, the assertion can be generalized to the infinite case: Let $M$ be a free module with basis $B$ and $L \subseteq M$ a linearly independent subset. Then $|L| \leq |B|$. Proof: Let $B$ be infinite. Representing elements of $L$ as linear combinations of elements in $B$ yields a map $f : L \to E(B)$, where $E(B)$ denotes the set of finite subsets of $B$. Now let $F$ be such a finite subset with $n$ elements. The finite case yields that there are at most $n$ linearly independent elements in $\langle F \rangle$, thus also in $f^{-1}(F)$. Now we use cardinal arithmetic: $|L| = \sum_{n > 0} \sum_{F \in E(B), |F|=n} |f^{-1}(F)| \leq \sum_{n > 0} n\,|B^n| = \sum_{n > 0} |B| = |B|.$

EDIT: See the comments; this does not answer kwan's question yet.

I have spotted the mistake in my proof.
So here is the "wrong" proof: Let $v_{1},\ldots,v_{m}$ be linearly independent elements in $A^{n}$, where $m\lt n$. Write them as $n$-tuples of elements in $A$, thereby forming an $n$-by-$m$ matrix. Linear independence of the $v_{i}$'s means that the rank of this matrix is $m$. So there is an $m$-by-$m$ submatrix with nonzero determinant. By exchanging rows if necessary, bring these $m$ rows to the top part of the matrix. Now add a column to the right side of the matrix whose entries are $0$ except at the $(m+1)$-th position, where the entry is $1$. Then the new $n$-by-$(m+1)$ matrix has rank $m+1$, and hence the $m+1$ columns, the first $m$ of which are the $v_{i}$'s, are linearly independent.

The mistake was in the notion of the rank of a matrix. When the entries are not from an integral domain, the proper definition is the largest integer $m$ such that no nonzero element of the ring annihilates the determinants of all $m$-by-$m$ submatrices. With that definition, in the above example I can't conclude that the rank of the $n$-by-$(m+1)$ matrix is $m+1$. With this, I can now exhale a sigh of relief and continue believing that this is not true in general. (By the way, I also know that it is true for free modules of infinite rank.)

Assuming that $A$ has a maximal ideal $\mathfrak{m}$ (which always exists, by Zorn's Lemma), one can proceed as follows: if $M$ is a free $A$-module with basis $(v_i)_{i\in I}$, then $M \cong A^I$, whence $M / \mathfrak{m} M \cong A^I / \mathfrak{m} A^I \cong (A / \mathfrak{m} A)^I$. This is a vector space over $k := A / \mathfrak{m} A$ of dimension $|I|$. Since over fields all bases of the same vector space have the same cardinality, and since the $k$-vector space structure of $M / \mathfrak{m} M$ is independent of the choice of the basis, this shows that all $A$-bases of $M$ have the same cardinality. I don't remember where I first saw this, though... maybe someone else has a reference? I saw this first in the case $A = \mathbb{Z}$ and $\mathfrak{m} = (2)$ for free abelian groups $M$, to show that the rank is well-defined.
The answer is yes. Suppose we have a factorization $Q = A\cdot B$. One easy observation is that $A$ and $B$ must be disjoint (since for $w\in A\cap B$ we would get $w^2\in Q$). In particular, only one of $A,B$ can contain $\epsilon$. We can assume wlog (since the other case is completely symmetric) that $\epsilon\in B$. Then, since $a$ and $b$ cannot be factored into non-empty factors, we must have $a,b\in A$.

Next we get that $a^mb^n\in A$ (and, completely analogously, $b^ma^n\in A$) for all $m,n>0$ by induction on $m$:

For $m=1$, since $ab^n\in Q$, we must have $ab^n = uv$ with $u\in A, v\in B$. Since $u\neq\epsilon$, $v$ must be $b^k$ for some $k\le n$. But if $k>0$, then since $b\in A$ we get $b^{1+k}\in Q$, a contradiction. So $v=\epsilon$, and $ab^n\in A$.

For the inductive step, since $a^{m+1}b^n\in Q$ we have $a^{m+1}b^n = uv$ with $u\in A, v\in B$. Since again $u\neq\epsilon$, we have either $v = a^kb^n$ for some $0<k<m+1$, or $v=b^k$ for some $k\le n$. But in the former case, $v$ is already in $A$ by the induction hypothesis, so $v^2\in Q$, a contradiction. In the latter case, we must have $k=0$ (i.e. $v=\epsilon$), since from $b\in A$ we would get $b^{1+k}\in Q$. So $u=a^{m+1}b^n\in A$.

Now consider the general case of primitive words with $r$ alternations between $a$ and $b$, i.e. $w$ is either $a^{m_1}b^{n_1}\ldots a^{m_s}b^{n_s}$ or $b^{m_1}a^{n_1}\ldots b^{m_s}a^{n_s}$ (for $r=2s-1$), or $a^{m_1}b^{n_1}\ldots a^{m_{s+1}}$ or $b^{m_1}a^{n_1}\ldots b^{m_{s+1}}$ (for $r=2s$); we can show that they are all in $A$ using induction on $r$. What we did so far covers the base cases $r=0$ and $r=1$.

For $r>1$, we use another induction on $m_1$, which works very much the same way as the one for $r=1$ above: If $m_1=1$, then $w=uv$ with $u\in A, v\in B$, and since $u\neq\epsilon$, $v$ has fewer than $r$ alternations. So unless $v=\epsilon$, $v$ (or its root in case $v$ itself is not primitive) is in $A$ by the induction hypothesis on $r$, which gives a contradiction as above. So $w=u\in A$. If $m_1>1$, then in any factorization $w=uv$ with $u\neq\epsilon$, $v$ either has fewer alternations (and its root is in $A$ unless $v=\epsilon$, by the induction hypothesis on $r$), or a shorter first block (and its root is in $A$ unless $v=\epsilon$, by the induction hypothesis on $m_1$). In either case we get that we must have $v=\epsilon$, i.e. $w=u\in A$.

The case of $Q' := Q\cup\{\epsilon\}$ is rather more complicated. The obvious things to note are that in any decomposition $Q' = A\cdot B$, both $A$ and $B$ must be subsets of $Q'$ with $A\cap B = \{\epsilon\}$. Also, $a,b$ must be contained in $A\cup B$. With a bit of extra work, one can show that $a$ and $b$ must be in the same subset. Otherwise, assume wlog that $a\in A$ and $b\in B$. Let us say that $w\in Q'$ has a proper factorization if $w=uv$ with $u\in A\setminus\{\epsilon\}$ and $v\in B\setminus\{\epsilon\}$. We have two (symmetric) subcases depending on where $ba$ goes (it must be in $A$ or $B$ since it has no proper factorization).

If $ba\in A$, then $aba$ has no proper factorization, since $ba,a\notin B$. Since $aba\in A$ would imply $abab\in A\cdot B$, we get $aba\in B$. As a consequence, $bab$ is neither in $A$ (which would imply $bababa\in A\cdot B$) nor in $B$ (which would imply $abab\in A\cdot B$). Now consider the word $babab$. It has no proper factorization, since $bab\notin A\cup B$ and $abab,baba$ are not primitive. If $babab\in A$, then since $aba\in B$ we get $(ba)^4\in A\cdot B$; if $babab\in B$, then since $a\in A$ we get $(ab)^3\in A\cdot B$.
So there is no way to have $babab\in A\cdot B$, contradiction. The case $ba\in B$ is completely symmetric. In a nutshell: $bab$ has no proper factorization and cannot be in $B$, so it must be in $A$; therefore $aba$ cannot be in $A$ or $B$; therefore $ababa$ has no proper factorization but also cannot be in either $A$ or $B$, contradiction. I am currently not sure how to proceed beyond this point; it would be interesting to see if the above argument can be systematically generalized.
This question comes from the fact that all the counterexamples I know where second-order stochastic dominance holds but first-order stochastic dominance fails do not satisfy the increasing likelihood ratio condition. A natural question is therefore whether second-order stochastic dominance together with an increasing likelihood ratio implies first-order stochastic dominance.

(1) Second-order stochastic dominance: $\int_{-\infty}^x (G(t)-F(t))\,\mathrm{d}t\geq 0\quad\forall x.$

(2) Increasing likelihood ratio: $\dfrac{\mathrm{d}F(t)}{\mathrm{d}G(t)}=\dfrac{f(t)}{g(t)}$ is an increasing function of $t$.

Do (1) and (2) imply $G(x)\geq F(x)$ for all $x$? Any ideas? Thanks.
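Edit: for concreteness, here is a small numerical sketch of how conditions (1), (2), and the conclusion can be checked on a grid. The two distributions are chosen arbitrarily for illustration, so this only illustrates the definitions; it does not answer the question.

# Sketch: check (1) second-order dominance, (2) monotone likelihood ratio f/g,
# and the first-order conclusion G >= F on a grid, for two example distributions.
import numpy as np
from scipy import stats

x = np.linspace(-10, 10, 2001)
F_dist, G_dist = stats.norm(0.5, 1.0), stats.norm(0.0, 1.0)   # arbitrary example choices
F, G = F_dist.cdf(x), G_dist.cdf(x)
f, g = F_dist.pdf(x), G_dist.pdf(x)

dx = x[1] - x[0]
sosd = np.all(np.cumsum(G - F) * dx >= -1e-12)   # (1): integral of (G-F) up to each x is >= 0
mlr = np.all(np.diff(f / g) >= -1e-9)            # (2): f/g is nondecreasing
fosd = np.all(G >= F - 1e-12)                    # conclusion: G(x) >= F(x) for all x
print(sosd, mlr, fosd)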
(a) Prove that the additive group $\Q=(\Q, +)$ is not finitely generated.

Seeking a contradiction, assume that the group $\Q=(\Q, +)$ is finitely generated and let $r_1, \dots, r_n$ be nonzero generators of $\Q$. Express the generators as fractions
\[r_i=\frac{a_i}{b_i},\]
where $a_i, b_i$ are integers. Then every rational number $r$ can be written as the sum
\[r=c_1r_1+\cdots+c_n r_n\]
for some integers $c_1, \dots, c_n$. Putting this sum over the common denominator $b_1\cdots b_n$, we have
\[r=\frac{m}{b_1\cdots b_n}, \quad\text{where } m=\sum_{i=1}^n c_i a_i \prod_{j\neq i} b_j \text{ is an integer.}\]
Let $p$ be a prime number that does not divide $b_1\cdots b_n$, and choose $r=1/p$. Then we must have
\[\frac{1}{p}=\frac{m}{b_1\cdots b_n}\]
for some integer $m$. Then we have
\[pm=b_1\cdots b_n\]
and this implies that $p$ divides $b_1\cdots b_n$, which contradicts our choice of the prime number $p$. Thus, the group $\Q$ cannot be finitely generated.

(b) Prove that the multiplicative group $\Q^*=(\Q\setminus\{0\}, \times)$ of nonzero rational numbers is not finitely generated.

Suppose on the contrary that the group $\Q^*=(\Q\setminus\{0\}, \times)$ is finitely generated and let
\[r_i=\frac{a_i}{b_i}\]
be generators for $i=1, \dots, n$, where $a_i, b_i$ are nonzero integers. Then every nonzero rational number $r$ can be written as
\[r=r_1^{c_1}\cdots r_n^{c_n}\]
for some integers $c_1, \dots, c_n$. Since the exponents $c_i$ may be negative, both the $a_i$ and the $b_i$ can contribute to the denominator; in any case, $r$ can be written as a fraction whose denominator divides a product of the integers $a_1, \dots, a_n, b_1, \dots, b_n$. Let $p$ be a prime number that does not divide $a_1\cdots a_n b_1\cdots b_n$, and consider $r=1/p$. Then, as in part (a), this leads to a contradiction. Hence $\Q^*$ is not finitely generated.
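As a small illustration of the argument in part (a) (a sketch with arbitrarily chosen generators, not part of the proof): every integer combination of finitely many fractions has a denominator dividing the product of their denominators, so a fraction such as $1/7$, with a new prime in its denominator, is never reached.

# Illustration: for the candidate generators below, every integer combination has
# denominator dividing 2*4*6 = 48, so it is never equal to 1/7 (since 7 does not divide 48).
from fractions import Fraction
from itertools import product

gens = [Fraction(1, 2), Fraction(3, 4), Fraction(-5, 6)]   # arbitrary example generators
B = 2 * 4 * 6                                              # product of denominators

for coeffs in product(range(-10, 11), repeat=len(gens)):
    r = sum(c * g for c, g in zip(coeffs, gens))
    assert B % r.denominator == 0        # the reduced denominator divides 48
    assert r != Fraction(1, 7)           # 1/7 is unreachable
print("checked", 21**3, "combinations")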
I know I promised a post on regression, but then I realized I only have a shallow understanding of Boosting and AdaBoost. So I biked to the nearest public library, went to the index cards, searched for 'Boost' and, after perusing through hundreds of self-help books, I found the greatest resource on AdaBoost: "How to Boost Your Spirit by Ada MacNally". Nah, kidding. No such book exists. The following, as always, is my study notes, taken from Foundations of Machine Learning (MIT Press) and Machine Learning in Action by Peter Harrington (Manning). The first book is heavy on math. The second book is more of a fluff book and is much simpler than the first one.

It is often difficult, for a non-trivial learning task, to directly devise an accurate algorithm satisfying the strong PAC-learning requirements that we saw in the PAC-learnable algorithm post I wrote before (link), but there can be more hope for finding simple predictors guaranteed only to perform slightly better than random. The following gives a formal definition of such weak learners. As in the PAC-learning post, we let $n$ be a number such that the computational cost of representing any element $x \in \mathcal{X}$ is at most $O(n)$ and denote by size(c) the maximal cost of the computational representation of $c \in \mathcal{C}$.

Let's define weak learning. A concept class $\mathcal{C}$ is said to be weakly PAC-learnable if there exist an algorithm $\mathcal{A}$, a constant $\gamma > 0$, and a polynomial function $poly(\cdot, \cdot, \cdot)$ such that for any $\delta > 0$, for all distributions $\mathcal{D}$ on $\mathcal{X}$ and for any target concept $c \in \mathcal{C}$, the following holds for any sample size $m \geq poly(1/\delta,n,size(c))$:
\[ \underset{\mathcal{S} \sim \mathcal{D}^m}{\mathbb{P}}\left[R(h_S) \leq \frac{1}{2} - \gamma \right] \geq 1 - \delta, \]
where $h_S$ is the hypothesis returned by algorithm $\mathcal{A}$ when trained on sample $S$. When such an algorithm $\mathcal{A}$ exists, it is called a weak learning algorithm for $\mathcal{C}$ or a weak learner. The hypothesis returned by a weak learning algorithm is called a base classifier.

The key idea behind boosting techniques is to use a weak learning algorithm to build a strong learner, that is, an accurate PAC-learning algorithm. To do so, boosting techniques use an ensemble method: they combine different base classifiers returned by a weak learner to create a more accurate predictor. But which base classifiers should be used and how should they be combined?

(The AdaBoost pseudocode figure from the book is omitted here; the line numbers below refer to it.) The algorithm takes as input a labeled sample $S=((x_1, y_1),\ldots,(x_m, y_m))$, with $(x_i, y_i) \in \mathcal{X} \times \{-1, +1\}$ for all $i \in [m]$, and maintains a distribution over the indices $\{1, \ldots, m\}$. Initially (lines 1-2), the distribution is uniform ($\mathcal{D}_1$). At each round of boosting, that is, each iteration $t \in [T]$ of the loop in lines 3-8, a new base classifier $ h_t \in \mathcal{H} $ is selected that minimizes the error on the training sample weighted by the distribution $ \mathcal{D}_t$:
\[ h_t \in \underset{h \in \mathcal{H}}{\operatorname{argmin}}\,\underset{i \sim \mathcal{D}_t}{\mathbb{P}} [h(x_i) \neq y_i] = \underset{h \in \mathcal{H}}{\operatorname{argmin}}\sum_{i=1}^{m}\mathcal{D}_t(i)1_{h(x_i) \neq y_i} .\]
$Z_t$ is simply a normalization factor chosen to ensure that the weights $D_{t+1}(i)$ sum to one. The precise reason for the definition of the coefficient $\alpha_t$ will become clear later.
For now, observe that if $\epsilon_t$, the error of the base classifier, is less than $\frac{1}{2}$, then $\frac{1-\epsilon_t}{\epsilon_t} > 1$ and $\alpha_t$ is positive. Thus, the new distribution $\mathcal{D}_{t+1}$ is defined from $\mathcal{D}_t$ by substantially increasing the weight on $i$ if point $x_i$ is incorrectly classified, and, on the contrary, decreasing it if $x_i$ is correctly classified. This has the effect of focusing more, at the next round of boosting, on the points incorrectly classified and less on those correctly classified by $h_t$.

After $T$ rounds of boosting, the classifier returned by AdaBoost is based on the sign of the function $f$, which is a non-negative linear combination of the base classifiers $h_t$. The weight $\alpha_t$ assigned to $h_t$ in that sum is a logarithmic function of the ratio of the accuracy $1-\epsilon_t$ and error $\epsilon_t$ of $h_t$. Thus, more accurate base classifiers are assigned a larger weight in that sum. For any $t \in [T]$, we will denote by $f_t$ the linear combination of the base classifiers after $t$ rounds of boosting: $f_t = \sum_{s=1}^{t}\alpha_sh_s$. In particular, we have $f_T = f$. The distribution $\mathcal{D}_{t+1}$ can be expressed in terms of $f_t$ and the normalization factors $Z_s$, $s \in [t]$, as follows:
\[ \forall i \in [m], \quad \mathcal{D}_{t+1}(i)=\frac{e^{-y_if_t(x_i)}}{ m \prod_{s=1}^{t}Z_s} .\]

The AdaBoost algorithm can be generalized in several ways:
– Instead of a hypothesis with minimal weighted error, $h_t$ can be more generally the base classifier returned by a weak learning algorithm trained on $\mathcal{D}_t$;
– The range of the base classifiers could be $[-1, +1]$, or more generally a bounded subset of $\mathbb{R}$. The coefficients $\alpha_t$ can then be different and may not even admit a closed form. In general, they are chosen to minimize an upper bound on the empirical error, as discussed in the next section. Of course, in that general case the hypotheses $h_t$ are not binary classifiers, but their sign could define the label and their magnitude could be interpreted as a measure of confidence.

AdaBoost was originally designed to address the theoretical question of whether a weak learning algorithm could be used to derive a strong learning one. Here, we will show that it coincides in fact with a very simple algorithm, which consists of applying a general coordinate descent technique to a convex and differentiable objective function. For simplicity, in this section we assume that the base classifier set $\mathcal{H}$ is finite, with cardinality $N$: $\mathcal{H} = \{h_1, \ldots, h_N\}$. An ensemble function $f$ such as the one returned by AdaBoost can then be written as $f = \sum_{j=1}^N\bar{\alpha}_j h_j$. Given a labeled sample $S = ((x_1, y_1), \ldots, (x_m, y_m))$, let $F$ be the objective function defined for all $\boldsymbol{\bar{\alpha}} = (\bar{\alpha}_1, \ldots, \bar{\alpha}_N) \in \mathbb{R}^N$ by:
\[ F(\boldsymbol{\bar{\alpha}}) = \frac{1}{m}\sum_{i=1}^{m}e^{-y_i f(x_i)} = \frac{1}{m}\sum_{i=1}^m e^{-y_i \sum_{j=1}^{N}\bar{\alpha}_j h_j(x_i)}. \]
$F$ is a convex function of $\boldsymbol{\bar{\alpha}}$ since it is a sum of convex functions, each obtained by composition of the (convex) exponential function with an affine function of $\boldsymbol{\bar{\alpha}}$.

The AdaBoost algorithm takes as input the dataset, the class labels, and the number of iterations; the number of iterations is the only parameter you need to specify in most ML libraries. Here, we briefly describe the standard practical use of AdaBoost.
An important requirement for the algorithm is the choice of the base classifiers or that of the weak learner. The family of base classifiers typically used with AdaBoost in practice is that of decision trees, which are equivalent to hierarchical partitions of the space. Among decision trees, those of depth one, also known as stumps, are by far the most frequently used base classifiers. Boosting stumps are threshold functions associated with a single feature. Thus, a stump corresponds to a single axis-aligned partition of the space. If the data is in $\mathbb{R}^N$, we can associate a stump to each of the $N$ components. Thus, to determine the stump with the minimal weighted error at each round of boosting, the best component and the best threshold for each component must be computed.

Now, let's create a weak learner with a decision stump, and then implement the AdaBoost algorithm using it. If you're familiar with decision trees, you'll understand this part easily. If you're not, either learn about them or wait until I cover them tomorrow. A decision tree with only one split is a decision stump.

The pseudocode to generate a simple decision stump looks like this:

Set minError to +∞
For every feature in the dataset:
    For every step:
        For each inequality:
            Build a decision stump and test it with the weighted dataset
            If the error is less than minError:
                Set this stump as the best stump
Return the best stump

Now that we have generated a decision stump, let's train it:

For each iteration:
    Find the best stump using buildStump()
    Add the best stump to the stump array
    Calculate α
    Calculate the new weight vector
    Update the aggregate class estimate
    If the error rate is 0:
        Break out of the loop

A runnable Python sketch along these lines is given below.

Well, that is it for today! I know I keep promising one thing and delivering another, but this time I'm really going to talk about decision trees! Semper Fudge!
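Here is a minimal, self-contained Python/NumPy sketch of AdaBoost with decision stumps, following the pseudocode above. The function names (build_stump, ada_boost) and the tiny toy dataset are my own illustrations, not from the book.

# Minimal AdaBoost with decision stumps (a sketch of the pseudocode above).
import numpy as np

def build_stump(X, y, D, n_steps=10):
    """Return the stump (feature, threshold, inequality) with minimal weighted error."""
    m, n = X.shape
    best, best_pred, min_err = None, None, np.inf
    for j in range(n):                                   # every feature
        lo, hi = X[:, j].min(), X[:, j].max()
        for thresh in np.linspace(lo, hi, n_steps + 1):  # every step (candidate threshold)
            for ineq in ("lt", "gt"):                    # each inequality direction
                pred = np.ones(m)
                if ineq == "lt":
                    pred[X[:, j] <= thresh] = -1.0
                else:
                    pred[X[:, j] > thresh] = -1.0
                err = np.sum(D * (pred != y))            # weighted error
                if err < min_err:
                    min_err, best, best_pred = err, (j, thresh, ineq), pred
    return best, min_err, best_pred

def stump_predict(X, stump):
    j, thresh, ineq = stump
    pred = np.ones(X.shape[0])
    if ineq == "lt":
        pred[X[:, j] <= thresh] = -1.0
    else:
        pred[X[:, j] > thresh] = -1.0
    return pred

def ada_boost(X, y, T=20):
    """Train AdaBoost; returns a list of (alpha_t, stump_t)."""
    m = X.shape[0]
    D = np.full(m, 1.0 / m)            # D_1: uniform distribution over indices
    ensemble, agg = [], np.zeros(m)    # agg accumulates the aggregate estimate f_t(x_i)
    for _ in range(T):
        stump, eps, pred = build_stump(X, y, D)
        eps = max(eps, 1e-16)                       # avoid division by zero
        alpha = 0.5 * np.log((1.0 - eps) / eps)     # alpha_t
        D = D * np.exp(-alpha * y * pred)           # up-weight mistakes, down-weight correct points
        D = D / D.sum()                             # normalize (this is the Z_t factor)
        ensemble.append((alpha, stump))
        agg += alpha * pred
        if np.mean(np.sign(agg) != y) == 0:         # training error is 0: break out of the loop
            break
    return ensemble

def ada_predict(X, ensemble):
    f = sum(alpha * stump_predict(X, stump) for alpha, stump in ensemble)
    return np.sign(f)

# Tiny usage example with toy data.
X = np.array([[1.0, 2.1], [2.0, 1.1], [1.3, 1.0], [1.0, 1.0], [2.0, 1.0]])
y = np.array([1.0, 1.0, -1.0, -1.0, 1.0])
model = ada_boost(X, y, T=10)
print(ada_predict(X, model))          # reproduces y on this toy set after a few rounds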
Emil Stoyanov's New Year's Problem

Problem

In $\Delta ABC,\,$ $M\in BC,\,$ $N\in AC;\,$ $K=AM\cap BN;\,$ the circumcircles $(AKN)\,$ and $(BKM)$ intersect at the orthocenter $H\,$ of $\Delta ABC.$ Prove that $AM=BN.$

Solution 1

Let $AD\,$ and $BE\,$ be two altitudes in $\Delta ABC.\,$ Since $\Delta BEC\,$ is right, $\angle MBH=\angle CBE=90^{\circ}-\angle C.\,$ Similarly, $\angle NAH =90^{\circ}-\angle C.\,$ As two inscribed angles subtended by the same arc, $\angle BMH=\angle BKH,\,$ so that $\angle HKN =180^{\circ}-\angle BMH.\,$ But also $\angle HKN =180^{\circ}-\angle NAH,\,$ implying $\angle BMH = \angle NAH = 90^{\circ}-\angle C =\angle MBH.\,$ This makes $\Delta BHM\,$ isosceles, implying that the same is true for $\Delta BAM:\,$ $AM=AB.\,$ Similarly we show that $BN=AB\,$ and, therefore, $AM=BN.$

Solution 2

We'll assume, as in the diagram, that $\Delta ABC\,$ is acute. Let $D\,$ and $E\,$ be the projections of $A\,$ and $B\,$ on $BC\,$ and $AC,\,$ respectively. We'll work in complex numbers.

Since $BHKM\,$ is cyclic, $\displaystyle\frac{K-M}{B-M}\cdot\frac{B-H}{K-H}\lt 0.\,$ From the diagram, $K-M=\alpha (A-K),\,$ $B-M=\beta (B-C),\,$ with $\alpha, \beta \gt 0.\,$ Thus, $\displaystyle\frac{A-K}{B-C}\cdot\frac{B-H}{K-H}\lt 0.\,$ Similarly, $\displaystyle\frac{B-K}{A-C}\cdot\frac{A-H}{K-H}\lt 0.\,$ Dividing one by the other gives $\displaystyle\frac{A-K}{B-K}\cdot\frac{A-C}{B-C}\cdot\frac{B-H}{A-H}\gt 0.\,$

Now, we have three positively oriented triangles, with the following implications:

$\displaystyle\begin{align} \Delta KAB:\; & \frac{A-K}{B-K}=\frac{AK}{BK}e^{-i\angle AKB}\\ \Delta CAB:\; & \frac{A-C}{B-C}=\frac{AC}{BC}e^{-i\angle C}\\ \Delta HAB:\; & \frac{B-H}{A-H}=\frac{BH}{AH}e^{i(\angle B+\angle C)}. \end{align}$

We deduce that

$\displaystyle\begin{align} 0&\lt \frac{A-K}{B-K}\cdot\frac{A-C}{B-C}\cdot\frac{B-H}{A-H}\\ &=\frac{AK}{BK}\cdot\frac{AC}{BC}\cdot\frac{BH}{AH}e^{i(-\angle AKB-\angle C+\angle A+\angle B)}, \end{align}$

from which it follows that $\angle AKB=-\angle C+\angle A+\angle B\,$ and, subsequently, $\angle MKB=2\angle C.\,$ As two inscribed angles, $\angle MHB=\angle MKB=2\angle C.\,$ Finally, since $\angle MBH=90^{\circ}-\angle C,\,$ $\Delta MHB\,$ is isosceles and so is $\Delta MAB,\,$ making $AM=AB.\,$ Similarly, $BN=AB.\;$ By transitivity, $AM=BN.$

Extras

Extra 1 (Statement: Leo Giugiuc; proof: Leo Giugiuc)

$KH\,$ is the internal bisector of $\angle AKB.$ Indeed, $\angle BKH=\angle BMH=\angle MBH=90^{\circ}-\angle C.\,$ Similarly, $\angle AKH=90^{\circ}-\angle C.$

Extra 2 (Statement: Leo Giugiuc; proof: Leo Giugiuc)

Let $P\,$ and $Q\,$ be the second intersections of $(AKH)\,$ and $(BKH),\,$ respectively, with $AB.\,$ Then $AQ=AK\,$ and $BP=BK.$ By the Power of a Point theorem, $AK\cdot AM=AQ\cdot AB,\,$ and since $AM=AB,\,$ then $AQ=AK.\,$ $BP=BK\,$ is shown similarly.
Extra 3 (Statement: Sorin Borodi; proof: Leo Giugiuc)

$CP=CQ.$ Indeed, $\angle HPQ=\angle AKH\,$ and $\angle HQP=\angle BKH,\,$ and, since $\angle AKH=\angle BKH,\,$ also $\angle HPQ=\angle HQP.\,$ Thus, in $\Delta HPQ,\,$ the altitude from $H\,$ is also the median, making $CH\,$ both the altitude and the median from $C\,$ in $\Delta CPQ,\,$ so that the triangle is isosceles: $CP=CQ.$

Extra 4

Let $U\,$ be the second intersection of $BE\,$ with $(AKN),\,$ and $V\,$ the second intersection of $AD\,$ with $(BKM).\,$ Then $UH\,$ is a diameter of $(AKN)\,$ and $VH\,$ is a diameter of $(BKM).$

It suffices to observe that, say, $E\,$ is the midpoint of $AN\,$ and $BE\perp AN.\,$ Since the perpendicular bisector of a chord passes through the center of the circle, $UH\,$ is bound to pass through the center of $(AKN),\,$ which makes it a diameter. As a consequence, $\angle HKU=90^{\circ}.\,$ Similarly, $\angle HKV=90^{\circ}.\,$ Thus, in particular, $U,K,V\,$ are collinear and $UV\,$ serves as the external bisector of $\angle AKB.$ In particular, $UV\,$ bisects $\angle BKM\,$ and $\angle AKN.$

Extra 5

Let $T\,$ be the second intersection of $UV\,$ with $(AKB).\,$ Then $T\,$ is the midpoint of $UV.$

Extra 6 (Statement: Emil Stoyanov)

Let $T\,$ be the second intersection of $(KMN)\,$ and $(ABK).\,$ Then $\angle HKT=90^{\circ}.$ In particular, point $T\,$ coincides with point $T\,$ from Extra 5.

Extra 7 (Statement: Emil Stoyanov)

The center $O_1\,$ of $(AKN)\,$ and the center $O_2\,$ of $(BKM)\,$ lie on $(AKB).\,$ Obviously $H\,$ serves as the incenter of $\Delta AKB.$

Extra 8 (Statement: Emil Stoyanov)

The point $T\,$ from Extra 6 has additional related properties: $AT=BT\,$ and $NT=MT;\,$ $\Delta AMT\sim\Delta BNT,\,$ and, in fact, $\Delta AMT=\Delta BNT;\,$ $KT\,$ is the external angle bisector of $\angle AKB.$

Acknowledgment

The problem above has been kindly posted by Emil Stoyanov at the CutTheKnotMath facebook page, with New Year greetings. Solution 1 has been arrived at in online collaboration with Leo Giugiuc, who also came up with a partially analytic solution (Solution 2). Leo Giugiuc, Sorin Borodi, and Emil Stoyanov have discovered additional properties of the configuration. These are collected in the Extras part, with authorship assigned per item. This part of the page grew to such an extent that when more features were discovered by a Romanian teacher, Gabriela Negutescu, I decided to put them on a separate page.
An $n\times n$ matrix $A$ is said to be singular if there exists a nonzero vector $\mathbf{v}$ such that $A\mathbf{v}=\mathbf{0}$. Otherwise, we say that $A$ is a nonsingular matrix. Proof. Let $\mathbf{v}=\begin{bmatrix}1 \\1 \\\vdots \\1\end{bmatrix}$ be the $n$-dimensional vector all of whose entries are $1$.Then we compute $A\mathbf{v}$. Note that $A\mathbf{v}$ is an $n$-dimensional vector, and the $i$-th entry of $A\mathbf{v}$ is the sum of the $i$-th row of the matrix $A$ for $i=1, \dots, n$.It follows from the assumption that the sum of elements in each row of $A$ is $0$ that we have\[A\mathbf{v}=\mathbf{0}.\] As $\mathbf{v}$ is a nonzero vector, this equality implies that $A$ is a singular matrix. Example To illustrate the proof, let us consider a concrete example.Let $A=\begin{bmatrix}1 & -4 & 3 \\2 & 5 &-7 \\3 & 0 & -3\end{bmatrix}$.Then the sum of each row of $A$ is zero. Now we compute\begin{align*}\begin{bmatrix}1 & -4 & 3 \\2 & 5 &-7 \\3 & 0 & -3\end{bmatrix}\begin{bmatrix}1 \\1 \\1\end{bmatrix}=\begin{bmatrix}1+(-4)+3\\2+5+(-7) \\3+0+(-3)\end{bmatrix}=\begin{bmatrix}0 \\0 \\0\end{bmatrix}.\end{align*}This shows that the matrix $A$ is singular because we have $A\mathbf{v}=\mathbf{0}$ for a nonzero vector $\mathbf{v}$. If a Matrix is the Product of Two Matrices, is it Invertible?(a) Let $A$ be a $6\times 6$ matrix and suppose that $A$ can be written as\[A=BC,\]where $B$ is a $6\times 5$ matrix and $C$ is a $5\times 6$ matrix.Prove that the matrix $A$ cannot be invertible.(b) Let $A$ be a $2\times 2$ matrix and suppose that $A$ can be […] Find All Values of $x$ so that a Matrix is SingularLet\[A=\begin{bmatrix}1 & -x & 0 & 0 \\0 &1 & -x & 0 \\0 & 0 & 1 & -x \\0 & 1 & 0 & -1\end{bmatrix}\]be a $4\times 4$ matrix. Find all values of $x$ so that the matrix $A$ is singular.Hint.Use the fact that a matrix is singular if and only […] Perturbation of a Singular Matrix is NonsingularSuppose that $A$ is an $n\times n$ singular matrix.Prove that for sufficiently small $\epsilon>0$, the matrix $A-\epsilon I$ is nonsingular, where $I$ is the $n \times n$ identity matrix.Hint.Consider the characteristic polynomial $p(t)$ of the matrix $A$.Note […]
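For readers who want to reproduce the concrete example above numerically, here is a minimal sketch (assuming NumPy is available). It only illustrates the argument; it is not part of the proof.

```python
# A matrix whose rows each sum to zero sends the all-ones vector to zero,
# so it is singular; its rank is at most n - 1.
import numpy as np

A = np.array([[1, -4,  3],
              [2,  5, -7],
              [3,  0, -3]])
v = np.ones(3)

print(A @ v)                     # [0. 0. 0.]
print(np.linalg.matrix_rank(A))  # 2  (< 3, confirming that A is singular)
```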
Mathematics > Spectral Theory Title: Example of periodic Neumann waveguide with gap in spectrum (Submitted on 25 May 2016) Abstract: In this note we investigate spectral properties of a periodic waveguide $\Omega^\varepsilon$ ($\varepsilon$ is a small parameter) obtained from a straight strip by attaching an array of $\varepsilon$-periodically distributed identical protuberances having "room-and-passage" geometry. In the current work we consider the operator $\mathcal{A}^\varepsilon=-\rho^\varepsilon\Delta_{\Omega^\varepsilon}$, where $\Delta_{\Omega^\varepsilon}$ is the Neumann Laplacian in $\Omega^\varepsilon$ and the weight $\rho^\varepsilon$ is equal to $1$ everywhere except the union of the "rooms". We will prove that the spectrum of $\mathcal{A}^\varepsilon$ has at least one gap when $\varepsilon$ is small enough, provided certain conditions on the weight $\rho^\varepsilon$ and the sizes of the attached protuberances hold. (Dedicated to Pavel Exner's 70th birthday) Submission history From: Andrii Khrabustovskyi [v1] Wed, 25 May 2016 10:22:31 GMT (150kb,D)
NTS ABSTRACTS Spring 2019 Jan 23 Yunqing Tang Reductions of abelian surfaces over global function fields For a non-isotrivial ordinary abelian surface $A$ over a global function field, under mild assumptions, we prove that there are infinitely many places modulo which $A$ is geometrically isogenous to the product of two elliptic curves. This result can be viewed as a generalization of a theorem of Chai and Oort. This is joint work with Davesh Maulik and Ananth Shankar. Jan 24 Hassan–Mao–Smith–Zhu The diophantine exponent of the $\mathbb{Z}/q\mathbb{Z}$ points of $S^{d-2}\subset S^d$ Abstract: Assume a polynomial-time algorithm for factoring integers, Conjecture~\ref{conj}, $d\geq 3,$ and $q$ and $p$ prime numbers, where $p\leq q^A$ for some $A>0$. We develop a polynomial-time algorithm in $\log(q)$ that lifts every $\mathbb{Z}/q\mathbb{Z}$ point of $S^{d-2}\subset S^{d}$ to a $\mathbb{Z}[1/p]$ point of $S^d$ with the minimum height. We implement our algorithm for $d=3 \text{ and } 4$. Based on our numerical results, we formulate a conjecture which can be checked in polynomial time and gives the optimal bound on the diophantine exponent of the $\mathbb{Z}/q\mathbb{Z}$ points of $S^{d-2}\subset S^d$. Jan 31 Kyle Pratt Breaking the $\frac{1}{2}$-barrier for the twisted second moment of Dirichlet $L$-functions Abstract: I will discuss recent work, joint with Bui, Robles, and Zaharescu, on a moment problem for Dirichlet $L$-functions. By way of motivation I will spend some time discussing the Lindelöf Hypothesis and work of Bettin, Chandee, and Radziwiłł. The talk will be accessible, as I will give lots of background information and will not dwell on technicalities. Feb 7 Shamgar Gurevich Harmonic Analysis on $GL_n$ over finite fields Abstract: There are many formulas that express interesting properties of a group $G$ in terms of sums over its characters. For evaluating or estimating these sums, one of the most salient quantities to understand is the character ratio: $$\mathrm{trace}(\rho(g))/\dim(\rho),$$ for an irreducible representation $\rho$ of $G$ and an element $g$ of $G$. For example, Diaconis and Shahshahani stated a formula of this type for analyzing $G$-biinvariant random walks on $G$. It turns out that, for classical groups $G$ over finite fields (which provide most examples of finite simple groups), there is a natural invariant of representations that provides strong information on the character ratio. We call this invariant rank. This talk will discuss the notion of rank for $GL_n$ over finite fields, and apply the results to random walks. This is joint work with Roger Howe (TAMU). Feb 14 Tonghai Yang The Lambda invariant and its CM values Abstract: The Lambda invariant, which parametrizes elliptic curves with two-torsion ($X_0(2)$), has some interesting properties, some similar to those of the $j$-invariant, and some not. For example, $\lambda(\frac{d+\sqrt d}2)$ is sometimes a unit. In this talk, I will briefly describe some of these properties. This is joint work with Hongbo Yin and Peng Yu. Feb 28 Brian Lawrence Diophantine problems and a p-adic period map. Abstract: I will outline a proof of Mordell's conjecture / Faltings's theorem using p-adic Hodge theory. Joint with Akshay Venkatesh. March 7 Masoud Zargar Sections of quadrics over the affine line Abstract: Suppose we have a quadratic form $Q(x)$ in $d\geq 4$ variables over $\mathbb{F}_q[t]$ and $f(t)$ is a polynomial over $\mathbb{F}_q$.
We consider the affine variety $X$ given by the equation $Q(x)=f(t)$ as a family of varieties over the affine line $\mathbb{A}^1_{\mathbb{F}_q}$. Given finitely many closed points in distinct fibers of this family, we ask when there exists a section passing through these points. We study this problem using the circle method over $\mathbb{F}_q((1/t))$. Time permitting, I will mention connections to Lubotzky-Phillips-Sarnak (LPS) Ramanujan graphs. Joint with Naser T. Sardari. March 14 Elena Mantovan p-adic automorphic forms, differential operators and Galois representations A strategy pioneered by Serre and Katz in the 1970s yields a construction of p-adic families of modular forms via the study of Serre's weight-raising differential operator $\Theta$. This construction is a key ingredient in Deligne-Serre's theorem associating Galois representations to modular forms of weight 1, and in the study of the weight part of Serre's conjecture. In this talk I will discuss recent progress towards generalizing this theory to automorphic forms on unitary and symplectic Shimura varieties. In particular, I will introduce certain p-adic analogues of Maass-Shimura weight-raising differential operators, and discuss their action on p-adic automorphic forms, and on the associated mod p Galois representations. In contrast with Serre's classical approach, where q-expansions play a prominent role, our approach is geometric in nature and is inspired by earlier work of Katz and Gross. This talk is based on joint work with Eishen, and also with Fintzen–Varma, and with Flander–Ghitza–McAndrew. March 28 Adebisi Agboola Relative K-groups and rings of integers Abstract: Suppose that $F$ is a number field and $G$ is a finite group. I shall discuss a conjecture in relative algebraic K-theory (in essence, a conjectural Hasse principle applied to certain relative algebraic K-groups) that implies an affirmative answer both to the inverse Galois problem for $F$ and $G$ and to an analogous problem concerning the Galois module structure of rings of integers in tame extensions of $F$. It also implies the weak Malle conjecture on counting tame $G$-extensions of $F$ according to discriminant. The K-theoretic conjecture can be proved in many cases (subject to mild technical conditions), e.g. when $G$ is of odd order, giving a partial analogue of a classical theorem of Shafarevich in this setting. While this approach does not, as yet, resolve any new cases of the inverse Galois problem, it does yield substantial new results concerning both the Galois module structure of rings of integers and the weak Malle conjecture. April 4 Wei-Lun Tsai Hecke L-functions and $\ell$-torsion in class groups Abstract: The canonical Hecke characters in the sense of Rohrlich form a set of algebraic Hecke characters with important arithmetic properties. In this talk, we will explain how one can prove quantitative nonvanishing results for the central values of their corresponding L-functions using methods of an arithmetic-statistical flavor. In particular, the methods used rely crucially on recent work of Ellenberg, Pierce, and Wood concerning bounds for $\ell$-torsion in class groups of number fields. This is joint work with Byoung Du Kim and Riad Masri.
Express a Hermitian Matrix as a Sum of Real Symmetric Matrix and a Real Skew-Symmetric Matrix Problem 405 Recall that a complex matrix is called Hermitian if $A^*=A$, where $A^*=\bar{A}^{\trans}$.Prove that every Hermitian matrix $A$ can be written as the sum\[A=B+iC,\]where $B$ is a real symmetric matrix and $C$ is a real skew-symmetric matrix. Since $A$ is Hermitian, we have\[\bar{A}^{\trans}=A.\]Taking the conjugate of this identity, we also have\[A^{\trans}=\bar{A}. \tag{*}\] Let\[B=\frac{1}{2}(A+\bar{A})\]and\[C=\frac{1}{2i}(A-\bar{A}).\]We claim that $B$ is a real symmetric matrix and $C$ is a real skew-symmetric matrix. We have\begin{align*}\bar{B}=\frac{1}{2}\overline{(A+\bar{A})}=\frac{1}{2}(\bar{A}+\bar{\bar{A}})=\frac{1}{2}(\bar{A}+A)=B.\end{align*}Thus, the matrix $B$ is real. To prove $B$ is symmetric, we compute\begin{align*}&B^{\trans}=\frac{1}{2}(A+\bar{A})^{\trans}\\&=\frac{1}{2}(A^{\trans}+\bar{A}^{\trans})\\&=\frac{1}{2}(\bar{A}+A) && \text{by (*) and $A$ is Hermitian}\\&=B.\end{align*}This proves that $B$ is symmetric. The matrix $C$ is real because we have\begin{align*}\bar{C}=\frac{1}{-2i}(\bar{A}-\bar{\bar{A}})=\frac{1}{2i}(A-\bar{A})=C.\end{align*}We also have\begin{align*}&C^{\trans}=\frac{1}{2i}(A^{\trans}-\bar{A}^{\trans})\\&=\frac{1}{2i}(\bar{A}-A)&& \text{by (*) and $A$ is Hermitian}\\&=-\frac{1}{2i}(A-\bar{A})=-C.\end{align*}Hence $C$ is a skew-symmetric matrix. Finally, we compute\begin{align*}B+iC&=\frac{1}{2}(A+\bar{A})+i\cdot \frac{1}{2i}(A-\bar{A})\\&=\frac{1}{2}(A+\bar{A})+\frac{1}{2}(A-\bar{A})\\&=A.\end{align*}Therefore, we have obtained the sum as described in the problem. Related Question. Prove that each complex $n\times n$ matrix $A$ can be written as\[A=B+iC,\]where $B$ and $C$ are Hermitian matrices. 7 Problems on Skew-Symmetric MatricesLet $A$ and $B$ be $n\times n$ skew-symmetric matrices. Namely $A^{\trans}=-A$ and $B^{\trans}=-B$.(a) Prove that $A+B$ is skew-symmetric.(b) Prove that $cA$ is skew-symmetric for any scalar $c$.(c) Let $P$ be an $m\times n$ matrix. Prove that $P^{\trans}AP$ is […] Eigenvalues of a Hermitian Matrix are Real NumbersShow that eigenvalues of a Hermitian matrix $A$ are real numbers.(The Ohio State University Linear Algebra Exam Problem)We give two proofs. These two proofs are essentially the same.The second proof is a bit simpler and concise compared to the first one.[…] Questions About the Trace of a MatrixLet $A=(a_{i j})$ and $B=(b_{i j})$ be $n\times n$ real matrices for some $n \in \N$. Then answer the following questions about the trace of a matrix.(a) Express $\tr(AB^{\trans})$ in terms of the entries of the matrices $A$ and $B$. Here $B^{\trans}$ is the transpose matrix of […] Subspaces of Symmetric, Skew-Symmetric MatricesLet $V$ be the vector space over $\R$ consisting of all $n\times n$ real matrices for some fixed integer $n$. Prove or disprove that the following subsets of $V$ are subspaces of $V$.(a) The set $S$ consisting of all $n\times n$ symmetric matrices.(b) The set $T$ consisting of […] There is at Least One Real Eigenvalue of an Odd Real MatrixLet $n$ be an odd integer and let $A$ be an $n\times n$ real matrix.Prove that the matrix $A$ has at least one real eigenvalue.We give two proofs.Proof 1.Let $p(t)=\det(A-tI)$ be the characteristic polynomial of the matrix $A$.It is a degree $n$ […]
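The decomposition above is easy to verify numerically. Here is a minimal sketch (assuming NumPy is available) that builds a random Hermitian matrix, forms $B$ and $C$ exactly as in the proof, and checks the claimed properties.

```python
# Check A = B + iC with B = (A + conj(A))/2 real symmetric and
# C = (A - conj(A))/(2i) real skew-symmetric, for a random Hermitian A.
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
A = (X + X.conj().T) / 2            # a random Hermitian matrix

B = (A + A.conj()) / 2              # entrywise conjugate, NOT conjugate transpose
C = (A - A.conj()) / (2j)

print(np.allclose(B.imag, 0), np.allclose(B, B.T))    # real and symmetric
print(np.allclose(C.imag, 0), np.allclose(C, -C.T))   # real and skew-symmetric
print(np.allclose(A, B + 1j * C))                     # A = B + iC
```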
Triple Time Limit: 12000/6000 MS (Java/Others) Memory Limit: 65536/65536 K (Java/Others) Description Given a finite multi-set $A$ of $n$ pairs of integers and another finite multi-set $B$ of $m$ triples of integers, we define the product of $A$ and $B$ as the multi-set $C = A * B = \{\langle a,c,d\rangle \mid \langle a,b\rangle\in A,~\langle c,d,e\rangle\in B~\text{and}~b=e\}.$ For each $\langle a,b,c\rangle\in C$, its BETTER set is defined as $BETTER_C(\langle a,b,c\rangle) = \{ \langle u,v,w\rangle\in C \mid \langle u,v,w\rangle \neq \langle a,b,c\rangle,~u\ge a,~v\ge b,~w\ge c \}.$ As a multi-set of triples, we define the TOP subset (as a multi-set as well) of $C$, denoted by $TOP(C)$, as $TOP(C) = \{ \langle a,b,c\rangle\in C \mid BETTER_C(\langle a,b,c\rangle) = \emptyset \}.$ You need to compute the size of $TOP(C)$. Input The input contains several test cases. The first line of the input is a single integer $t~(1\le t\le 10)$, which is the number of test cases. Then $t$ test cases follow. Each test case contains three lines. The first line contains two integers $n~(1\le n\le 10^5)$ and $m~(1\le m\le 10^5)$ corresponding to the sizes of $A$ and $B$, respectively. The second line contains $2\times n$ nonnegative integers \[a_1,b_1,a_2,b_2,\cdots,a_n,b_n\] which describe the multi-set $A$, where $1\le a_i,b_i\le 10^5$. The third line contains $3\times m$ nonnegative integers \[c_1,d_1,e_1,c_2,d_2,e_2,\cdots,c_m,d_m,e_m\] corresponding to the $m$ triples of integers in $B$, where $1\le c_i,d_i\le 10^3$ and $1\le e_i\le 10^5$. Output For each test case, you should output the size of the set $TOP(C)$. Sample Input 2 5 9 1 1 2 2 3 3 3 3 4 2 1 4 1 2 2 1 4 1 1 1 3 2 3 2 2 4 1 2 2 4 3 3 2 3 4 1 3 3 4 2 7 2 7 2 7 1 4 7 2 3 7 3 2 7 4 1 7 Source 2015 ACM/ICPC Asia Regional, Shenyang Site (Replay Contest; thanks to Northeastern University)
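For checking a fast solution on tiny inputs, a literal brute-force reference can help. The sketch below is hypothetical helper code, far too slow for the stated limits, and it follows one reading of the multiset condition (equal triples do not dominate each other); it simply implements the definitions of $C$, $BETTER_C$, and $TOP(C)$ on the input format described above.

```python
# Naive reference implementation of the Triple problem definitions
# (exponential in |C|; usable only for very small inputs).
import sys
from collections import Counter

def top_size(pairs, triples):
    # C = A * B : join pairs <a,b> with triples <c,d,e> on b == e.
    by_e = {}
    for c, d, e in triples:
        by_e.setdefault(e, []).append((c, d))
    C = Counter()
    for a, b in pairs:
        for c, d in by_e.get(b, []):
            C[(a, c, d)] += 1
    # A triple is in TOP(C) iff no *different* value in C dominates it.
    keys = list(C)
    total = 0
    for a, b, c in keys:
        dominated = any((u, v, w) != (a, b, c) and u >= a and v >= b and w >= c
                        for (u, v, w) in keys)
        if not dominated:
            total += C[(a, b, c)]      # multiset: count every copy
    return total

def main():
    data = iter(sys.stdin.read().split())
    t = int(next(data))
    for _ in range(t):
        n, m = int(next(data)), int(next(data))
        pairs = [(int(next(data)), int(next(data))) for _ in range(n)]
        triples = [(int(next(data)), int(next(data)), int(next(data))) for _ in range(m)]
        print(top_size(pairs, triples))

if __name__ == "__main__":
    main()
```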
Introduction to Semiconductors concept Our study of analog circuits will be built on our understanding of semiconductor physics. Semiconductors (materials whose resistance lies somewhere between that of copper wire and a brick wall) allow for some very clever devices to be built that work in strange, nonlinear ways and are nothing like our familiar resistors, capacitors and inductors. Through this topic we will come to master transistor circuits. The transistor is based on understanding two back-to-back "PN junctions", so we'll need to learn what a PN junction is and how it works. But to do that we'll need to learn what P and N are, which is where we're starting here. Although much of the physics content in the first few topics might seem too theoretical to be useful, it will become more and more important as your learning goes on. In analog circuits more than anywhere else, your ability to "picture" what is happening in the circuit is of the utmost importance. Your ability to gain an intuitive understanding of how complex and complicated circuits will behave begins with understanding what's happening "under the hood". So work through the mathematics and theory so that you can become a master of the applications. fact Charge is carried around in circuits by the movement of valence electrons into available spaces in adjacent valence shells. We can also view this as the available spaces in valence shells moving in the opposite direction. This might sound confusing but it's really a straightforward idea. While current is really electrons moving around on the outside of molecules into gaps in an adjacent molecule's outermost electron "shell", we can pretend that these "free spaces" in an electron shell are really the bits moving, since if an electron leaves one molecule's shell it must leave a "free space" in its place. We call these free spaces in valence shells "holes". Now, as you may remember from high school chemistry, atoms with only one or two electrons in their valence shell are quite "willing" to give them up, whereas atoms with an almost completely full valence shell (a valence shell is often full with 8 electrons) are very "willing" to accept another electron. So elements with three to five valence electrons are somewhere in between and are often referred to as "semiconductors". These elements exhibit interesting chemical properties that make them useful for building solid state devices capable of more complex functions than resistors, capacitors and inductors. fact Elements with three to five valence electrons are termed "semiconductors" and are the basis of microelectronic devices like diodes and transistors. More formally, we can calculate the energy required to dislodge an electron from an atom (which is required for a current to flow). We call this energy the "bandgap energy": a very high bandgap energy means we have an insulator, while a very low bandgap energy means a strong conductor. fact The energy required to dislodge an electron from an atom is called the "bandgap energy". For silicon this is \(E_g = 1.12\) eV Using the bandgap energy we can calculate the number of free electrons at a given temperature. The more free electrons, the more conductive the material will be. A larger \(E_g\) will lead to fewer free electrons, and a higher temperature will lead to a greater number of free electrons.
fact The density of free electrons in some material is given by: $$ n_i = 5.2\times 10^{15}T^{3/2}e^{\frac{-E_g}{2kT}}\ \text{electrons/cm}^3 $$ where k is the Boltzmann constant \(k = 1.38\times 10^{-23}\) J/K and T is the temperature in kelvins. Semiconductors typically have an \(E_g\) between 1.0 eV and 1.5 eV. example Find the density of free electrons in silicon at room temperature (T = 300 K) and at T = 400 K (about 127\(^\circ\)C). From above we see that for silicon \(E_g = 1.12\) eV \( = 1.792\times 10^{-19}\) J, so plugging things into our formula: $$ n_{i(T = 300)} = 1.08\times 10^{10}\text{ electrons/cm}^3 $$ $$ n_{i(T = 400)} = 3.7126\times 10^{12}\text{ electrons/cm}^3 $$ While these numbers are very large, the fact that silicon has \(5\times 10^{22}\) atoms/cm\(^3\) means that only about one in five trillion atoms has a free electron! For this reason we do something called "doping", where we insert some other atoms (at a very low density) into our collection of silicon atoms in order to change the number of free electrons. These free electrons (or holes) come from the dopant atoms. Denoting the total number of free electrons by \(n\) and the total number of available holes by \(p\), we come to the very useful equation: fact $$np = n_i^2$$ in both undoped and doped semiconductors. When we add extra free electrons or holes, these will combine with some of the existing holes or free electrons to essentially cancel each other out. So while their product is always constant, their ratio (and total number) can be changed by doping (since \(n + p\) is NOT constant). We call semiconductors where \(n \gt p\) n-type semiconductors and those where \(n \lt p\) p-type semiconductors. fact To create an n-type semiconductor we add a doping element with more valence electrons than silicon (five, so that each dopant atom donates an easily dislodged extra electron). To create a p-type semiconductor we add a doping element with fewer valence electrons (three, so that each dopant atom readily accepts an electron, contributing a hole). practice problems
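As a quick check of the carrier-density formula quoted above, the short sketch below evaluates it with the constants given in the text; the values it prints are of the same order as those in the example.

```python
# Evaluate n_i = 5.2e15 * T^(3/2) * exp(-Eg / (2kT)) for silicon (Eg = 1.12 eV).
import math

K_BOLTZMANN = 1.38e-23      # J/K
EV_TO_JOULE = 1.602e-19     # J per eV

def intrinsic_density(T, Eg_eV=1.12):
    """Free-electron density in electrons/cm^3 from the formula above."""
    Eg = Eg_eV * EV_TO_JOULE
    return 5.2e15 * T**1.5 * math.exp(-Eg / (2 * K_BOLTZMANN * T))

print(f"{intrinsic_density(300):.3e}")   # on the order of 1e10 (example quotes 1.08e10)
print(f"{intrinsic_density(400):.3e}")   # roughly 3.7e12, as in the example
```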
Consider the Koch curve $G \subseteq \mathbb{R}^2$. Clearly $G$ is the invariant set (IS) of the iterated function system (IFS) $\lbrace \phi_1, \phi_2, \phi_3, \phi_4 \rbrace$, where (not wanting to jump between $\mathbb{R}^2$ and $\mathbb{C}$, but doing so for ease): $\phi_1(x) = \frac{1}{3} x$, $\phi_2(x) = \frac{1}{3} (x \exp(\frac{i \pi}{3}) + 1)$, $\phi_3(x) = \frac{1}{3} (x \exp(-\frac{i \pi}{3}) + 1 + \exp(\frac{i \pi}{3}))$, $\phi_4(x) = \frac{1}{3} (x + 2)$ However, can we do better? I.e., can we find an IFS consisting of fewer contractions such that its IS is $G$? In this case, yes. The IFS $\lbrace \psi_1, \psi_2 \rbrace$ also has $G$ as its IS, where: $\psi_1(x) = \frac{1}{\sqrt{3}} x \exp(-\frac{5 i \pi}{6}) + \frac{1}{3} (1 + \exp(\frac{i \pi}{3}))$, $\psi_2(x) = \frac{1}{\sqrt{3}} x \exp(\frac{5 i \pi}{6}) + 1$ And since an IFS consisting of a single contraction has a single point as its IS, we know that this is the best we can do. But what about in general? If $G \subseteq \mathbb{R}^n$ is the IS of the IFS $\lbrace \phi_1, \phi_2, \ldots, \phi_m \rbrace$, when can we tell whether there exists an IFS with $G$ as its IS consisting of strictly fewer than $m$ contractions? As a specific example: how about the Sierpinski gasket / carpet? Can we do better than the obvious 3-map / 8-map construction IFS?
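One low-tech way to eyeball the claim that the two-map IFS $\lbrace \psi_1, \psi_2 \rbrace$ has the Koch curve as its invariant set is the "chaos game": iterate randomly chosen maps and plot the orbit, which accumulates on the attractor. The sketch below assumes matplotlib is available and simply transcribes the two maps from the question.

```python
# Chaos game for the two-map IFS {psi_1, psi_2} given above.
import cmath
import random
import matplotlib.pyplot as plt

def psi1(z):
    return z / 3**0.5 * cmath.exp(-5j * cmath.pi / 6) + (1 + cmath.exp(1j * cmath.pi / 3)) / 3

def psi2(z):
    return z / 3**0.5 * cmath.exp(5j * cmath.pi / 6) + 1

z, points = 0j, []
for _ in range(100_000):
    z = random.choice((psi1, psi2))(z)   # both maps contract by 1/sqrt(3)
    points.append(z)

plt.scatter([p.real for p in points], [p.imag for p in points], s=0.1)
plt.gca().set_aspect("equal")
plt.show()   # the orbit traces out the Koch curve over [0, 1]
```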
Recent developments of the CERN RD50 collaboration / Menichelli, David (U. Florence (main) ; INFN, Florence) / CERN RD50 The objective of the RD50 collaboration is to develop radiation hard semiconductor detectors for very high luminosity colliders, particularly to face the requirements of the possible upgrade of the large hadron collider (LHC) at CERN. Some of the RD50 most recent results about silicon detectors are reported in this paper, with special reference to: (i) the progresses in the characterization of lattice defects responsible for carrier trapping; (ii) charge collection efficiency of n-in-p microstrip detectors, irradiated with neutrons, as measured with different readout electronics; (iii) charge collection efficiency of single-type column 3D detectors, after proton and neutron irradiations, including position-sensitive measurement; (iv) simulations of irradiated double-sided and full-3D detectors, as well as the state of their production process. 2008 - 5 p. - Published in : Nucl. Instrum. Methods Phys. Res., A 596 (2008) 48-52 In : 8th International Conference on Large Scale Applications and Radiation Hardness of Semiconductor Detectors, Florence, Italy, 27 - 29 Jun 2007, pp.48-52
Performance of irradiated bulk SiC detectors / Cunningham, W (Glasgow U.) ; Melone, J (Glasgow U.) ; Horn, M (Glasgow U.) ; Kazukauskas, V (Vilnius U.) ; Roy, P (Glasgow U.) ; Doherty, F (Glasgow U.) ; Glaser, M (CERN) ; Vaitkus, J (Vilnius U.) ; Rahman, M (Glasgow U.) / CERN RD50 Silicon carbide (SiC) is a wide bandgap material with many excellent properties for future use as a detector medium. We present here the performance of irradiated planar detector diodes made from 100-$\mu \rm{m}$-thick semi-insulating SiC from Cree. [...] 2003 - 5 p. - Published in : Nucl. Instrum. Methods Phys. Res., A 509 (2003) 127-131 In : 4th International Workshop on Radiation Imaging Detectors, Amsterdam, The Netherlands, 8 - 12 Sep 2002, pp.127-131
Measurements and simulations of charge collection efficiency of p$^+$/n junction SiC detectors / Moscatelli, F (IMM, Bologna ; U. Perugia (main) ; INFN, Perugia) ; Scorzoni, A (U. Perugia (main) ; INFN, Perugia ; IMM, Bologna) ; Poggi, A (Perugia U.) ; Bruzzi, M (Florence U.) ; Lagomarsino, S (Florence U.) ; Mersi, S (Florence U.) ; Sciortino, Silvio (Florence U.) ; Nipoti, R (IMM, Bologna) Due to its excellent electrical and physical properties, silicon carbide can represent a good alternative to Si in applications like the inner tracking detectors of particle physics experiments (RD50, LHCC 2002–2003, 15 February 2002, CERN, Ginevra). In this work p$^+$/n SiC diodes realised on a medium-doped ($1 \times 10^{15} \rm{cm}^{-3}$), 40 $\mu \rm{m}$ thick epitaxial layer are exploited as detectors and measurements of their charge collection properties under $\beta$ particle radiation from a $^{90}$Sr source are presented. [...] 2005 - 4 p. - Published in : Nucl. Instrum. Methods Phys. Res., A 546 (2005) 218-221 In : 6th International Workshop on Radiation Imaging Detectors, Glasgow, UK, 25-29 Jul 2004, pp.218-221
Measurement of trapping time constants in proton-irradiated silicon pad detectors / Krasel, O (Dortmund U.) ; Gossling, C (Dortmund U.) ; Klingenberg, R (Dortmund U.) ; Rajek, S (Dortmund U.) ; Wunstorf, R (Dortmund U.)
Silicon pad-detectors fabricated from oxygenated silicon were irradiated with 24-GeV/c protons with fluences between $2 \cdot 10^{13} \ n_{\rm{eq}}/\rm{cm}^2$ and $9 \cdot 10^{14} \ n_{\rm{eq}}/\rm{cm}^2$. The transient current technique was used to measure the trapping probability for holes and electrons. [...] 2004 - 8 p. - Published in : IEEE Trans. Nucl. Sci. 51 (2004) 3055-3062 In : 50th IEEE 2003 Nuclear Science Symposium, Medical Imaging Conference, 13th International Workshop on Room Temperature Semiconductor Detectors and Symposium on Nuclear Power Systems, Portland, OR, USA, 19 - 25 Oct 2003, pp.3055-3062
Lithium ion irradiation effects on epitaxial silicon detectors / Candelori, A (INFN, Padua ; Padua U.) ; Bisello, D (INFN, Padua ; Padua U.) ; Rando, R (INFN, Padua ; Padua U.) ; Schramm, A (Hamburg U., Inst. Exp. Phys. II) ; Contarato, D (Hamburg U., Inst. Exp. Phys. II) ; Fretwurst, E (Hamburg U., Inst. Exp. Phys. II) ; Lindstrom, G (Hamburg U., Inst. Exp. Phys. II) ; Wyss, J (Cassino U. ; INFN, Pisa) Diodes manufactured on a thin and highly doped epitaxial silicon layer grown on a Czochralski silicon substrate have been irradiated by high energy lithium ions in order to investigate the effects of high bulk damage levels. This information is useful for possible developments of pixel detectors in future very high luminosity colliders because these new devices present superior radiation hardness than nowadays silicon detectors. [...] 2004 - 7 p. - Published in : IEEE Trans. Nucl. Sci. 51 (2004) 1766-1772 In : 13th IEEE-NPSS Real Time Conference 2003, Montreal, Canada, 18 - 23 May 2003, pp.1766-1772
Radiation hardness of different silicon materials after high-energy electron irradiation / Dittongo, S (Trieste U. ; INFN, Trieste) ; Bosisio, L (Trieste U. ; INFN, Trieste) ; Ciacchi, M (Trieste U.) ; Contarato, D (Hamburg U., Inst. Exp. Phys. II) ; D'Auria, G (Sincrotrone Trieste) ; Fretwurst, E (Hamburg U., Inst. Exp. Phys. II) ; Lindstrom, G (Hamburg U., Inst. Exp. Phys. II) The radiation hardness of diodes fabricated on standard and diffusion-oxygenated float-zone, Czochralski and epitaxial silicon substrates has been compared after irradiation with 900 MeV electrons up to a fluence of $2.1 \times 10^{15} \ \rm{e} / cm^2$. The variation of the effective dopant concentration, the current related damage constant $\alpha$ and their annealing behavior, as well as the charge collection efficiency of the irradiated devices have been investigated. 2004 - 7 p. - Published in : Nucl. Instrum. Methods Phys. Res., A 530 (2004) 110-116 In : 6th International Conference on Large Scale Applications and Radiation Hardness of Semiconductor Detectors, Florence, Italy, 29 Sep - 1 Oct 2003, pp.110-116
Recovery of charge collection in heavily irradiated silicon diodes with continuous hole injection / Cindro, V (Stefan Inst., Ljubljana) ; Mandić, I (Stefan Inst., Ljubljana) ; Kramberger, G (Stefan Inst., Ljubljana) ; Mikuž, M (Stefan Inst., Ljubljana ; Ljubljana U.) ; Zavrtanik, M (Ljubljana U.) Holes were continuously injected into irradiated diodes by light illumination of the n$^+$-side. The charge of holes trapped in the radiation-induced levels modified the effective space charge. [...] 2004 - 3 p. - Published in : Nucl. Instrum. Methods Phys.
Res., A 518 (2004) 343-345 In : 9th Pisa Meeting on Advanced Detectors, La Biodola, Italy, 25 - 31 May 2003, pp.343-345
First results on charge collection efficiency of heavily irradiated microstrip sensors fabricated on oxygenated p-type silicon / Casse, G (Liverpool U.) ; Allport, P P (Liverpool U.) ; Martí i Garcia, S (CSIC, Catalunya) ; Lozano, M (Barcelona, Inst. Microelectron.) ; Turner, P R (Liverpool U.) Heavy hadron irradiation leads to type inversion of n-type silicon detectors. After type inversion, the charge collected at low bias voltages by silicon microstrip detectors is higher when read out from the n-side compared to p-side read out. [...] 2004 - 3 p. - Published in : Nucl. Instrum. Methods Phys. Res., A 518 (2004) 340-342 In : 9th Pisa Meeting on Advanced Detectors, La Biodola, Italy, 25 - 31 May 2003, pp.340-342
Formation and annealing of boron-oxygen defects in irradiated silicon and silicon-germanium n$^+$–p structures / Makarenko, L F (Belarus State U.) ; Lastovskii, S B (Minsk, Inst. Phys.) ; Korshunov, F P (Minsk, Inst. Phys.) ; Moll, M (CERN) ; Pintilie, I (Bucharest, Nat. Inst. Mat. Sci.) ; Abrosimov, N V (Unlisted, DE) New findings on the formation and annealing of interstitial boron-interstitial oxygen complex ($\rm{B_iO_i}$) in p-type silicon are presented. Different types of n$^+$–p structures irradiated with electrons and alpha-particles have been used for DLTS and MCTS studies. [...] 2015 - 4 p. - Published in : AIP Conf. Proc. 1583 (2015) 123-126
Display Math The famous result (once more) is given by \startformula c^2 = a^2 + b^2. \stopformula This, when typeset, produces the following: Numbering Formulae The famous result (once more) is given by \placeformula \startformula c^2 = a^2 + b^2. \stopformula This, when typeset, produces the following: The \placeformula command is optional, and produces the equation number; leaving it off produces an unnumbered equation. Changing format of numbers You can use \setupformulas to change the format of numbers. For example, to get bold numbers inside square brackets use \setupformulas[left={[},right={]},numberstyle=bold] which gives To get equations also numbered by section, add the command \setupnumber[formula][way=bysection] to the start of your document. To get letters instead of numbers, use \setupformulas[numberconversion=Character] which gives Changing Formula alignment Normally a formula is centered, but in case you want to align it left or right, you can set up formulas to behave that way. Normally a formula will adapt its left indentation to the environment: \setuppapersize[A5] \setuplayout[textwidth=8cm] \setupformulas[align=left] \startformula c^2 = a^2 + b^2 \stopformula \setupformulas[align=middle] \startformula c^2 = a^2 + b^2 \stopformula \setupformulas[align=right] \startformula c^2 = a^2 + b^2 \stopformula Or in print: With formula numbers the code is: \setuppapersize[A5] \setuplayout[textwidth=8cm] \setupformulas[align=left] \placeformula \startformula c^2 = a^2 + b^2 \stopformula \setupformulas[align=middle] \placeformula \startformula c^2 = a^2 + b^2 \stopformula \setupformulas[align=right] \placeformula \startformula c^2 = a^2 + b^2 \stopformula And the formulas look like: When tracing is turned on (\tracemathtrue) you can visualize the bounding box of the formula. As you can see, the dimensions are the natural ones, but if needed you can force a normalized line: \setuppapersize[A5] \setuplayout[textwidth=8cm] \setupformulas[align=middle,strut=yes] \tracemathtrue \placeformula \startformula c^2 = a^2 + b^2 \stopformula This time we get a more spacy result. [Ed. Note: For this example equation, there appears to be no visible change.] We will now show a couple more settings and combinations of settings. In centered formulas, the number takes no space: \setuppapersize[A5] \setuplayout[textwidth=8cm] \tracemathtrue \setupformulas[align=middle] \startformula c^2 = a^2 + b^2 \stopformula \placeformula \startformula c^2 = a^2 + b^2 \stopformula You can influence the placement of the whole box with the parameters leftmargin and rightmargin. \setuppapersize[A5] \setuplayout[textwidth=8cm] Some example text, again, to show where the right and left margins of the text block are. \tracemathtrue \setupformulas[align=right,leftmargin=3em] \startformula c^2 = a^2 + b^2 \stopformula \placeformula \startformula c^2 = a^2 + b^2 \stopformula \setupformulas[align=left,rightmargin=1em] \startformula c^2 = a^2 + b^2 \stopformula \placeformula \startformula c^2 = a^2 + b^2 \stopformula You can also inherit the margin from the environment. \setuppapersize[A5] \setuplayout[textwidth=8cm] Some example text, again, to show where the right and left margins of the text block are. \tracemathtrue \setupformulas[align=right,margin=standard] \startformula c^2 = a^2 + b^2 \stopformula \placeformula \startformula c^2 = a^2 + b^2 \stopformula The distance between the formula and the number is only applied when the formula is left or right aligned.
\setuppapersize[A5] \setuplayout[textwidth=8cm] \tracemathtrue \setupformulas[align=left,distance=2em] \startformula c^2 = a^2 + b^2 \stopformula \placeformula \startformula c^2 = a^2 + b^2 \stopformula Referencing formulae The famous result (and again) is given by \placeformula[formulalabel] \startformula c^2 = a^2 + b^2. \stopformula And now we can refer to formula \ref[][formulalabel]. This, when typeset, produces the following: Note that \ref expects two arguments; therefore you need the brackets twice. By default, only the formula number appears as a reference. This can be changed by using \definereferenceformat. For example, to create a command \eqref which shows the formula number in brackets, use \definereferenceformat[eqref][left=(,right=)] Sub-Formula Numbering Automatic Sub-Formula Numbering Examples: \startsubformulas[eq:1] \placeformula[eq:first] \startformula c^2 = a^2 + b^2 \stopformula \placeformula[eq:second] \startformula c^2 = a^2 + b^2 \stopformula \stopsubformulas Formula (\in[eq:1]) states Pythagoras' Theorem twice, once in (\in[eq:first]) and again in (\in[eq:second]). The Manual Method Sometimes, you need more fine-grained control over the numbering of subformulas. In that case one can make use of the optional argument of the \placeformula command and the related \placesubformula command, which can be used to produce sub-formula numbering. For example: Examples: \placeformula{a} \startformula c^2 = a^2 + b^2 \stopformula \placesubformula{b} \startformula c^2 = a^2 + b^2 \stopformula What's going on here is simpler than it might appear at first glance. Both \placeformula and \placesubformula produce equation numbers with the optional tag added at the end; the sole difference is that the former increments the equation number first, while the latter does not (and thus can be used for the second and subsequent formulas that use the same formula number but presumably have different tags). This is sufficient for cases where the standard ConTeXt equation numbers suffice, and where only one equation number is needed per formula. However, there are many cases where this is insufficient, and \placeformula defines the \formulanumber and \subformulanumber commands, which provide hooks to allow the use of ConTeXt-managed formula numbers with plain TeX equation numbering. These, when used within a formula, simply return the formula number in properly formatted form, as can be seen in this simple example with plain TeX's \eqno. Note that the optional tag is inherited from \placeformula. More examples: \placeformula{c} \startformula \let\doplaceformulanumber\empty c^2 = a^2 + b^2 \eqno{\formulanumber} \stopformula In order for this to work properly, we need to turn off ConTeXt's automatic formula number placement; thus the \let command to empty \doplaceformulanumber, which must be placed after the start of the formula. In many practical examples, however, this is not necessary; ConTeXt redefines \displaylines and \eqalignno to do this automatically. For more control over sub-formula numbering, \formulanumber and \subformulanumber have an optional argument parallel to that of \placeformula, as demonstrated in this use of plain TeX's \eqalignno, which places multiple equation numbers within one formula.
\placeformula \startformula \eqalignno{ c^2 &= a^2 + b^2 &\formulanumber{a} \cr c &= \left(a^2 + b^2\right)^{\vfrac{1}{2}} &\subformulanumber{b}\cr a^2 + b^2 &= c^2 &\subformulanumber{c} \cr d^2 &= e^2 &\formulanumber\cr} \stopformula Note that both \formulanumber and \subformulanumber can be used within the same formula, and the formula number is incremented as expected. Also, if an optional argument is specified in both \placeformula and \formulanumber, the latter takes precedence. More examples, for a left-located equation number: \setupformulas[location=left] \placeformula{d} \startformula \let\doplaceformulanumber\empty c^2 = a^2 + b^2 \leqno{\formulanumber} \stopformula and \placeformula \startformula \leqalignno{c^2 &= a^2 + b^2 &\formulanumber{a} \cr a^2 + b^2 &= c^2 &\subformulanumber{b} \cr d^2 &= e^2 &\formulanumber\cr} \stopformula -- 23:46, 15 Aug 2005 (CEST) Prinse Wang List of Formulas You can have a list of the formulas contained in a document by using \placenamedformula instead of \placeformula. Only the formulas written with \placenamedformula are put in the list, so that you can control precisely the content of the list. Example: \subsubject{List of Formulas} \placelist[formula][criterium=text,alternative=c] \subsubject{Formulas} \placenamedformula[one]{First listed Formula} \startformula a = 1 \stopformula \endgraf \placeformula \startformula a = 2 \stopformula \endgraf \placenamedformula{Second listed Formula}{b} \startformula a = 3 \stopformula \endgraf Gives: Shaded background for part of a displayed equation (see also Framed) To highlight part of a formula, you can give it a gray background using \mframed: the following is the code you can use in mkii (see below what one has to do in mkiv): \setuppapersize[A5] \setupcolors[state=start] \def\graymath{\mframed[frame=off, background=color, backgroundcolor=gray, backgroundoffset=3pt]} \startformula \ln (1+x) =\, \graymath{x - {x^2\over2}} \,+ {x^3\over3}-\cdots. \stopformula \setuppapersize[A5] \definemathframed[graymath] [ frame=off, location=mathematics, background=color, backgroundcolor=lightgray, backgroundoffset=2pt ] \starttext Since for $|x| < 1$ we have \startformula \log(1+x) = \graymath{x- \displaystyle{x^2\over2}} + {x^3 \over 3} + \cdots \stopformula we may write $\log(1+x) = x + O(x^2)$. \stoptext The result is shown below (possibly the framed part of the formula is not aligned correctly with the remainder of the formula because the mkiv engine on Context Garden is not up to date…).
We've been talking about the Miller-Rabin randomized primality test, which is one of the easiest to implement and most effective tests that, given a number, will either prove it to be composite or state that it is most likely prime. As good as it is for practical applications, the Miller-Rabin test leaves something to be […] Last time, I explained the Miller-Rabin probabilistic primality test. Let's recall it: Theorem. Let $p$ be an odd prime and write $p-1 = 2^kq$ where $q$ is an odd number. If $a$ is relatively prime to $p$ then at least one of the following statements is true: $a^q\equiv 1\pmod{p}$, or One of $a^q,a^{2q},a^{4q},\dots,a^{2^{k-1}q}$ is congruent […] Fermat's little theorem states that for a prime number $p$, any $a\in \Z/p^\times$ satisfies $a^{p-1} = 1$. If $p$ is not prime, this may not necessarily be true. For example: $$2^{402} = 376 \in \Z/403^\times.$$ Therefore, we can conclude that 403 is not a prime number. In fact, $403 = 13\cdot 31$. Fermat's little theorem […] Alice wants her friends to send her stuff only she can read. RSA public-key encryption allows her to do that: she chooses huge primes $p$ and $q$ and releases $N = pq$ along with an encryption exponent $e$ such that ${\rm gcd}(e,(p-1)(q-1)) = 1$. If Bob wants to send Alice a message $m$, he sends […] In a group $G$, the discrete logarithm problem is to solve for $x$ in the equation $g^x =h$ where $g,h\in G$. In Part 1, we saw that solving the discrete log problem for finite fields would imply that we could solve the Diffie-Hellman problem and crack the ElGamal encryption scheme. Obviously, the main question at […] Given a group $G$, and an element $g\in G$ (the "base"), the discrete logarithm $\log_g(h)$ of an $h\in G$ is an integer $x$ such that $g^x = h$, if it exists. Its name "discrete logarithm" essentially means that we are only allowed to use integer powers in the group, rather than extending the definition […] There is a cool way to express 1 as a sum of unit fractions using partitions of a fixed positive integer. What do we mean by partition? If $n$ is such an integer then a partition is just a sum $e_1d_1 + \cdots + e_kd_k = n$ where $d_i$ are positive integers. For example, 7 […] A real number is simply normal in base $b$ if the frequency of each base $b$ digit in the first $n$ digits tends to a limit as $n$ goes to infinity, and each of these limits is the same. In other words, a real number is simply normal in base $b$ if each digit appears […] Lately I've been thinking about primes, and I've plotted a few graphs to illustrate some beautiful ideas involving primes. Even though you might not always be thinking about primes, they are always quietly haunting in the background. Abundance of primes in an arithmetic progression Let's start out with the oddest prime of all: 2. Get it? […] Since the days of antiquity, we've always been looking for ways to determine whether a natural number is prime. Trial division up to the square root of a number quickly becomes tedious, though it is worth noting that even on my fairly old laptop a slightly optimised trial-division algorithm will list all the primes under […]
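Since several of the excerpts above revolve around the Miller-Rabin test and the Fermat test, here is a compact reference implementation (a sketch with random bases, not the code from the posts themselves). It also reproduces the $2^{402}\equiv 376\pmod{403}$ computation quoted above.

```python
# Miller-Rabin: write n - 1 = 2^k * q with q odd, then test random bases a.
import random

def miller_rabin(n, rounds=20):
    if n < 2:
        return False
    if n in (2, 3):
        return True
    if n % 2 == 0:
        return False
    k, q = 0, n - 1
    while q % 2 == 0:
        k += 1
        q //= 2
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, q, n)
        if x in (1, n - 1):
            continue
        for _ in range(k - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False        # a is a witness: n is definitely composite
    return True                 # probably prime

print(pow(2, 402, 403))                        # 376, so 403 fails the Fermat test
print(miller_rabin(403), miller_rabin(401))    # False True
```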
(This is basically an extension of $\pi$ Day puzzle one to twenty.) $\tau$ is greater than $\pi$ ($\tau>\pi$). Create the numbers from $1$ to $20$ using only: Tau ($\tau$, equivalent to $2\pi$) Basic arithmetic operations ($+-\times\div$) Square roots ($\sqrt{x}$ or $\sqrt[2]{x}$) Exponentiation ($x^y$) Negative tau ($-\tau$) Floor functions ($\lfloor x\rfloor$) Anything not in this list is forbidden. You are not allowed to place a negative sign anywhere except directly in front of $\tau$ or as the subtraction operation (e.g. $-\lfloor\tau\rfloor$ is forbidden, but $\tau-\lfloor\tau\rfloor$ is allowed). You are also not allowed to use parentheses, although $\lfloor x\rfloor$ can make a good substitute. Some basic MathJaX syntax: $\tau, +, -, \times, \div, \sqrt{\tau^{\tau}}, \lfloor\tau\rfloor, \sqrt[2]{\tau}$ Some more: Remember the order of operations. 1 = $\tau\div\tau$ (Uses 2 $\tau$s, worse score) 1 = $\lfloor\sqrt{\sqrt\tau}\rfloor$ (Uses 1 $\tau$, better score) 2 = $\tau\div\tau+\tau\div\tau$ (Uses 4 $\tau$s, worse score) 2 = $\lfloor\sqrt\tau\rfloor$ (Uses 1 $\tau$, better score) Try to use the fewest $\tau$s possible.
Define $\tilde{\phi}([g])=\phi(g)$ and show that this is well-defined. Show that $\tilde{\phi}$ is a homomorphism. Show that $\tilde{\phi}$ is injective. Proof. Define the map $\tilde{\phi}: G/\ker{\phi} \to G’$ by sending $[g]$ to $\phi(g)$. Here $[g]$ is the element of $G/\ker{\phi}$ represented by $g\in G$. We need to show that this is well-defined.Namely, we need to show that $\tilde{\phi}$ does not depend on the choice of representative. So suppose $[g]=[h]$ for $g, h \in G$. Then we have $x:=gh^{-1} \in \ker{\phi}$. Thus we have\[e’=\phi(x)=\phi(gh^{-1})=\phi(g)\phi(h)^{-1},\]where $e’\in G’$ is the identity element of $G’$.Here the third equality follows because $\phi$ is a homomorphism.Hence we obtain $\phi(g)=\phi(h)$, equivalently $\tilde{\phi}([g])=\tilde{\phi}([h])$. Thus $\tilde{\phi}$ is well-defined. Now we show that $\tilde{\phi}: G/\ker{\phi} \to G’$ is a homomorphism. Let $[g], [h] \in G/\ker{\phi}$. Then we have\[\tilde{\phi}([g][h])=\tilde{\phi}([gh])=\phi(gh)=\phi(g)\phi(h)=\tilde{\phi}([g])\tilde{\phi}([h]) \]and $\tilde{\phi}$ is a homomorphism. Finally, we prove that $\tilde{\phi}$ is injective. Suppose that $\tilde{\phi}([g])=e’$. Then this means $\phi(g)=e’$, hence $g\in \ker{\phi}$.Thus $[g]=[e]$, where $e$ is the identity element of $G$.Hence $\tilde{\phi}$ is injective and the proof is complete. A Group Homomorphism is Injective if and only if MonicLet $f:G\to G'$ be a group homomorphism. We say that $f$ is monic whenever we have $fg_1=fg_2$, where $g_1:K\to G$ and $g_2:K \to G$ are group homomorphisms for some group $K$, we have $g_1=g_2$.Then prove that a group homomorphism $f: G \to G'$ is injective if and only if it is […] Injective Group Homomorphism that does not have Inverse HomomorphismLet $A=B=\Z$ be the additive group of integers.Define a map $\phi: A\to B$ by sending $n$ to $2n$ for any integer $n\in A$.(a) Prove that $\phi$ is a group homomorphism.(b) Prove that $\phi$ is injective.(c) Prove that there does not exist a group homomorphism $\psi:B […] Dihedral Group and Rotation of the PlaneLet $n$ be a positive integer. Let $D_{2n}$ be the dihedral group of order $2n$. Using the generators and the relations, the dihedral group $D_{2n}$ is given by\[D_{2n}=\langle r,s \mid r^n=s^2=1, sr=r^{-1}s\rangle.\]Put $\theta=2 \pi/n$.(a) Prove that the matrix […] Group Homomorphism from $\Z/n\Z$ to $\Z/m\Z$ When $m$ Divides $n$Let $m$ and $n$ be positive integers such that $m \mid n$.(a) Prove that the map $\phi:\Zmod{n} \to \Zmod{m}$ sending $a+n\Z$ to $a+m\Z$ for any $a\in \Z$ is well-defined.(b) Prove that $\phi$ is a group homomorphism.(c) Prove that $\phi$ is surjective.(d) Determine […] Normal Subgroups, Isomorphic Quotients, But Not IsomorphicLet $G$ be a group. Suppose that $H_1, H_2, N_1, N_2$ are all normal subgroup of $G$, $H_1 \lhd N_2$, and $H_2 \lhd N_2$.Suppose also that $N_1/H_1$ is isomorphic to $N_2/H_2$. Then prove or disprove that $N_1$ is isomorphic to $N_2$.Proof.We give a […] Group Homomorphism, Preimage, and Product of GroupsLet $G, G'$ be groups and let $f:G \to G'$ be a group homomorphism.Put $N=\ker(f)$. Then show that we have\[f^{-1}(f(H))=HN.\]Proof.$(\subset)$ Take an arbitrary element $g\in f^{-1}(f(H))$. Then we have $f(g)\in f(H)$.It follows that there exists $h\in H$ […]
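A small concrete illustration of the construction above (not part of the proof): take the reduction map $\phi:\Z/12\Z \to \Z/4\Z$, whose kernel is $\{0,4,8\}$, and check by brute force that $\tilde{\phi}$ is well defined and injective on the cosets.

```python
# phi: Z/12 -> Z/4, phi(g) = g mod 4, with kernel {0, 4, 8}.
G = range(12)
phi = lambda g: g % 4
kernel = {g for g in G if phi(g) == 0}

# The cosets of the kernel, i.e. the elements of G / ker(phi).
cosets = {frozenset((g + k) % 12 for k in kernel) for g in G}

# Well defined: phi is constant on each coset, so phi_tilde([g]) := phi(g) makes sense.
print(all(len({phi(g) for g in coset}) == 1 for coset in cosets))   # True

# Injective: distinct cosets have distinct images under phi_tilde.
images = {phi(next(iter(coset))) for coset in cosets}
print(len(images) == len(cosets))                                   # True
```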
With which notation do you feel uncomfortable? closed as not constructive by Loop Space, Chris Schommer-Pries, Qiaochu Yuan, Scott Morrison♦ Mar 19 '10 at 6:10 There is a famous anecdote about Barry Mazur coming up with the worst notation possible at a seminar talk in order to annoy Serge Lang. Mazur defined $\Xi$ to be a complex number and considered the quotient of the conjugate of $\Xi$ and $\Xi$: $$\frac{\overline{\Xi}}{\Xi}.$$ This looks even better on a blackboard since $\Xi$ is drawn as three horizontal lines. My favorite example of bad notation is using $\textrm{sin}^2(x)$ for $(\textrm{sin}(x))^2$ and $\textrm{sin}^{-1}(x)$ for $\textrm{arcsin}(x)$, since this is basically the same notation used for two different things ($\textrm{sin}^2(x)$ should mean $\textrm{sin}(\textrm{sin}(x))$ if $\textrm{sin}^{-1}(x)$ means $\textrm{arcsin}(x)$). It might not be horrible, since it rarely leads to confusion, but it is inconsistent notation, which should be avoided in general. I personally hate the notation $x \mid y$, for "$x$ divides $y$". Of course, I'm used to reading it by now, but a general principle I follow and recommend is: Never use a symmetric symbol to denote an asymmetric relation! I never liked the notation ${\mathbb Z}_p$ for the ring of residue classes modulo $p$. At one point, it confused the hell out of me, and this confusion is easily avoided by writing $C_p$, $C(p)$ or ${\mathbb Z}/p$. Mathematicians are really quite bad when it comes to notation. They should learn from programming languages people. Bad notation actually makes it difficult for students to understand the concepts. Here are some really bad ones: Using $f(x)$ to denote both the value of $f$ at $x$ and the function $f$ itself. Because of this, students in programming classes cannot tell the difference between $f$ (the function) and $f(x)$ (the function applied to an argument). When I was a student nobody ever managed to explain to me why $dy/dx$ made sense. What is $dy$ and what is $dx$? They're not numbers, yet we divide them (I am just giving a student's perspective). In Lagrangian mechanics and calculus of variations people take the partial derivative of the Lagrangian $L$ with respect to $\dot q$, where $\dot q$ itself is the derivative of the coordinate $q$ with respect to time. That's crazy. The summation convention, e.g., that ${\Gamma^{ij}}_j$ actually means $\sum_j {\Gamma^{ij}}_j$, is useful but very hard to get used to. In category theory I wish people sometimes used any notation at all, as opposed to nameless arrows which are introduced in accompanying text as "the evident arrow". Physicists will hate me for this, but I never liked Einstein's summation convention, nor the famous bra ($\langle\phi|$) and ket ($|\psi\rangle$) notation. Both notations make easy things look unnecessarily complicated, and especially the bra-ket notation is no fun to use in LaTeX. My candidate would be the (internal) direct sum of subspaces $U \oplus V$ in linear algebra. As an operator it is equivalent to the sum, but with the side effect of implying that $U \cap V = \lbrace 0\rbrace$.
Whenever I had a chance to teach linear algebra I found this terribly confusing for students. I think composition of arrows $f:X\to Y$ and $g:Y\to Z$ should be written $fg$, not $gf$. First of all, it would make the notation $\hom(X,Y)\to\hom(Y,Z)\to \hom(X,Z)$ much more natural: $\hom(E,X)$ should be a left $\hom(E,E)$ module because $E$ is on the left :) Secondly, diagrams are written from left to right (even stronger: almost anything in the Western world is written left to right). And I think the strange (-1) needed when shifting complexes is an effect of this twisted notation. The notation $]a,b[$ for open intervals and its ilk. Sorry, Bourbaki. Writing a finite field of size $q$ as $\mathrm{GF}(q)$ instead of as $\mathbf{F}_q$ always rubbed me the wrong way. I know where it comes from (Galois Field), and I think it is still widely used in computer science, and maybe in some allied areas of discrete math, but I still dislike it. As Trevor Wooley used to always say in class, "Vinogradov's notation sucks... the constants away." For those who don't know, Vinogradov's notation in this context is $f(x)\ll g(x)$, meaning $f(x) = O(g(x))$ (if you prefer big-O notation, that is). I rather dislike the notation $$\int_{\Omega}f(x)\,\mu(dx)$$ myself. I realize that just as the integral sign is a generalized summation sign, the $dx$ in $\mu(dx)$ would stand for some small measurable set of which you take the measure, but it still rubs me the wrong way. Is it only because I was brought up with the $\int\cdots\,d\mu(x)$ notation? The latter nicely generalizes the notation for the Stieltjes integral at least. I get very frustrated when an author or speaker writes "Let $X\colon= A\sqcup B$..." to mean: $A$ and $B$ are disjoint sets (in whatever the appropriate universe is), and let $X\colon= A\cup B$. If they just meant "form the disjoint union of $A$ and $B$" this would be fine. But I've seen speakers later use the fact that $A$ and $B$ are disjoint, which was never stated anywhere except as above. You should never hide an assumption implicitly in your notation. The use of square brackets $\left[...\right]$ for anything. It's not bad per se, but unfortunately it is used both as a substitute for $\left(...\right)$ and as a notation for the floor function. And there are cases when it takes a while to figure out which of these is meant - I'm not making this up. The word "character" meaning: a 1-dimensional representation, a representation, a trace form of a representation, a formal linear combination of representations, a formal linear combination of trace forms of representations. The word "adjoint", and the corresponding notation $A\mapsto A^{\ast}$, having two completely unrelated meanings. The term "symplectic group" used to mean the group $U(n,{\mathbb H})$. It's as if people called $U(n)$ and $GL(n,{\mathbb R})$ by some single name. My personal pet peeve of notation HAS to be algebraists writing functions on the right, à la Herstein's "Topics In Algebra". I don't know why they do it when everyone else doesn't. I think one of them got up one day and decided they wanted to be cooler than everyone else, seriously... I don't like (but maybe for a bad reason) the notation $F\vdash G$ for "$F$ is left adjoint to $G$". Any comments? A cute idea, but one for which I have yet to find supporters, is D. G. Northcott's notation (used at least in [Northcott, D. G. A first course of homological algebra. Cambridge University Press, London, 1973. xi+206 pp.
MR0323867]) for maps in a commutative diagram, which consists in enumerating the names of the objects placed at the vertices along the way of the composition. Thus, if there is only one map in sight from $M$ to $N$, he writes it simply $MN$, so he has formulas looking like $$A'A(ABB'') = A'ABB'' = A'B'BB'' = 0.$$ He also writes maps on the right, so his $$xMN=0$$ means that the image of $x$ under the map from $M$ to $N$ is zero. I would not say this is among the worst notations ever, though. Students have big difficulties when first confronted with the $o(\cdot)$ and $O(\cdot)$ notation. The term $o(x^3)$, e.g., does not denote a certain function evaluated at $x^3$, but a function of $x$, defined by the context, that converges to zero when divided by $x^3$. I have struggled with 'dx'. I've spent years trying to study every different approach to calculus that I could find to try and make sense of it. I read about the limit definitions in my first book, vector calculus with them as pullbacks of linear transformations or flows/flux, differential forms from the bridge project, k-forms, nonstandard analysis which enlarges $\mathbb{R}$ to give you infinitesimals (and unbounded numbers) but the same first-order properties and lets the integral be defined as a sum, and constructive analysis using a monad to take the closure of the rationals to give the reals... but I am still just as confused as ever. I understand that the mathematical notation doesn't have a compositional semantics, but I still don't really get it. One of the problems is that, despite not really understanding it or having any abstract definition of it, I can still get correct answers, and I really hope this doesn't become a theme as I study more topics in mathematics. $p < q$ as in "the forcing condition $p$ is stronger than $q$". I hate the shortcut $ab$ for $a\cdot b$. Everyone gets used to it, BUT it creates very deep problems with all other notation; say, you never can be sure what $f(x+y)$ or $2\!\tfrac23$ might be... Also, in modern mathematics people do not multiply things too often, so it does not make much sense to have such a shortcut. The shortcut $x^n$ is a really bad one, too. One cannot use upper indices after this. It would be easy to write $x^{\cdot n}$ instead.
An $n\times n$ matrix is invertible if and only if its rank is $n$.The rank of a matrix is the number of nonzero rows of a (reduced) row echelon form matrix that is row equivalent to the given matrix. Solution. We compute the rank of the matrix $A$.Applying elementary row operations, we obtain\begin{align*}\begin{bmatrix}1 & 1 & 1 \\1 &2 &k \\1 & 4 & k^2\end{bmatrix}\xrightarrow{\substack{R_2-R_1 \\ R_3-R_1}}\begin{bmatrix}1 & 1 & 1 \\0 & 1 &k-1 \\0 & 3 & k^2-1\end{bmatrix}\xrightarrow{\substack{R_1-R_2 \\ R_3-3R_2}}\begin{bmatrix}1 & 0 & 2-k \\0 &1 &k-1 \\0 & 0 & k^2-3k+2\end{bmatrix}.\end{align*}The last matrix is in row echelon form. Note that $A$ is an invertible matrix if and only if its rank is $3$.Therefore the $(3,3)$-entry of the last matrix must be nonzero: $k^2-3k+2=(k-1)(k-2)\neq 0$. It follows that the matrix $A$ is invertible for any $k$ except $k=1, 2$. Projection to the subspace spanned by a vectorLet $T: \R^3 \to \R^3$ be the linear transformation given by orthogonal projection to the line spanned by $\begin{bmatrix}1 \\2 \\2\end{bmatrix}$.(a) Find a formula for $T(\mathbf{x})$ for $\mathbf{x}\in \R^3$.(b) Find a basis for the image subspace of $T$.(c) Find […] Find the Rank of a Matrix with a ParameterFind the rank of the following real matrix.\[ \begin{bmatrix}a & 1 & 2 \\1 &1 &1 \\-1 & 1 & 1-a\end{bmatrix},\]where $a$ is a real number.(Kyoto University, Linear Algebra Exam)Solution.The rank is the number of nonzero rows of a […] Given the Characteristic Polynomial, Find the Rank of the MatrixLet $A$ be a square matrix and its characteristic polynomial is given by\[p(t)=(t-1)^3(t-2)^2(t-3)^4(t-4).\]Find the rank of $A$.(The Ohio State University, Linear Algebra Final Exam Problem)Solution.Note that the degree of the characteristic polynomial […]
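The row-reduction above can be double-checked symbolically (assuming SymPy is available): the determinant of the matrix factors as $(k-1)(k-2)$, which vanishes exactly at the excluded values $k=1,2$.

```python
# Symbolic check: det A = (k - 1)(k - 2), so A is invertible iff k != 1, 2.
from sympy import Matrix, symbols, factor

k = symbols('k')
A = Matrix([[1, 1, 1],
            [1, 2, k],
            [1, 4, k**2]])

print(factor(A.det()))          # (k - 1)*(k - 2), up to ordering of the factors
print(A.subs(k, 3).rank())      # 3, so A is invertible for, e.g., k = 3
```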
Keeping Track of Element Order in Multiphysics Models Whenever you are building a finite element model in COMSOL Multiphysics, you should be aware of the element order that is being used. This is particularly important for multiphysics models as there are some distinct benefits to using different element orders for different physics. Today, we will review the key concepts behind element order and discuss how it applies to some common multiphysics models. What Is Element Order? Whenever we solve a finite element problem, we are approximating the true solution field to a partial differential equation (PDE) over a domain. The finite element method starts by subdividing the modeling domain up into smaller, simpler domains called elements. These elements are defined by a set of points, traditionally called nodes, and each node has a set of shape functions or basis functions. Every shape function is associated with some degrees of freedom. The set of all of these discrete degrees of freedom is traditionally referred to as the solution vector. Note: You can read more about the process of going from the governing PDE to the solution vector in our previous blog posts “A Brief Introduction to the Weak Form” and “Discretizing the Weak Form Equations“. Once the solution vector is computed, the finite element approximation to the solution field is constructed by interpolation using the solution vector and the set of all of the basis functions in all of the elements. The element order refers to the type of basis functions that are used. Let’s now visualize some of the basis functions for one of the more commonly used elements in COMSOL Multiphysics: the two-dimensional Lagrange element. We will look at a square domain meshed with a single quadrilateral (four-sided) element that has a node at each corner. If we are computing a scalar field, then the Lagrange element has a single degree of freedom at each node. You can visualize the shape functions for a first-order Lagrange element in the image below. The shape functions for a first-order square quadrilateral Lagrange element. The first-order shape functions are each unity at one node and zero at all of the others. The complete finite element solution over this element is the sum of each shape function times its associated degree of freedom. We’ll now compare our first-order shape functions with our second-order shape functions. The shape functions for a single second-order square quadrilateral Lagrange element. Observe that the second-order quadrilateral Lagrange element has node points at the midpoints of the sides as well as in the element center. It has a total of nine shape functions and, again, each shape function is unity at one node and zero elsewhere. Let’s now look at what happens when our single quadrilateral element represents a domain that is not a perfect square but rather a domain with some curved sides. In such cases, it is common to use a so-called isoparametric element, meaning that the geometry is approximated with the same shape functions as the one used for the solution. This geometric approximation is shown below for the first- and second-order cases. A domain with curved sides. Single first- and second-order quadrilateral elements are applied. As we can see in the image above, the first-order element simply approximates the curved sides as straight sides. The second-order element much more accurately approximates these curved boundaries. 
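To make the idea of nodal shape functions concrete, here is a small NumPy sketch (not COMSOL code, just an illustration with names of my own choosing) that evaluates the four first-order Lagrange shape functions on the reference square $[-1,1]\times[-1,1]$ and checks their two defining properties: each function equals 1 at its own node and 0 at the others, and together they sum to 1 everywhere.

import numpy as np

# Corner nodes of the reference quadrilateral, in (xi, eta) coordinates.
nodes = np.array([[-1.0, -1.0], [1.0, -1.0], [1.0, 1.0], [-1.0, 1.0]])

def shape_functions(xi, eta):
    """First-order (bilinear) Lagrange shape functions N_1..N_4 at (xi, eta)."""
    return np.array([0.25 * (1 + xi * xn) * (1 + eta * en) for xn, en in nodes])

# Kronecker-delta property: N_i is 1 at node i and 0 at the other nodes.
values_at_nodes = np.array([shape_functions(xi, eta) for xi, eta in nodes])
assert np.allclose(values_at_nodes, np.eye(4))

# Partition of unity at an arbitrary interior point.
assert np.isclose(shape_functions(0.3, -0.7).sum(), 1.0)

# The finite element field on the element is sum_i N_i(xi, eta) * u_i,
# where u_i are the nodal degrees of freedom.
u_nodes = np.array([2.0, 3.0, 5.0, 7.0])
print(shape_functions(0.3, -0.7) @ u_nodes)

The second-order element works the same way with nine such functions (corner, edge-midpoint, and center nodes), which is also what lets it represent curved element edges more faithfully, the geometric difference taken up next.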
This difference, known as a geometric discretization error, is discussed in greater detail in an earlier blog post. The shape functions for the isoparametric first- and second-order Lagrange elements are shown below. The shape functions of a single first-order isoparametric Lagrange element for the domain with curved sides. The shape functions of a single second-order isoparametric Lagrange element for a domain with curved sides. We can observe from the above two images that the first-order element approximates all sides of the domain as straight lines, while the second-order element approximates the curved shapes much more accurately. Thus, if we are modeling a domain with curved sides, we need to use several linear elements along any curved domain boundaries just so that we can accurately represent the domain itself. For any real-world finite element model, there will of course always be more than one element describing the geometry. Additionally, keep in mind that regardless of the element order, you will want to perform a mesh refinement study, also called a mesh convergence study. That is, you will use finer and finer meshes (smaller and smaller elements) to solve the same problem and see how the solution converges. You terminate this mesh refinement procedure after achieving your desired accuracy. A good example of a mesh refinement study is presented in the application example of a Stress Analysis of an Elliptic Membrane. All well-posed, single-physics finite element problems will converge toward the same answer, regardless of the element order. However, different element orders will converge at different rates and therefore require various computational resources. Let's explore why different PDEs have different element orders. Element Order in Single-Physics Models For the purposes of this discussion, let's consider just the set of PDEs governing common single-physics problems that exhibit no variation in time. We can put all of these PDEs into one of two broad categories: Poisson-type: Poisson-type PDEs are used to describe heat transfer in solids, solid mechanics, electric currents, electrostatics and magnetostatics, thin-film flow, and flow in porous media governed by Darcy's law or the Richards' equation. Such governing PDEs are all of the form: \[\nabla \cdot (- D \nabla u ) = f.\] Note that this is a second-order PDE, thus second-order (quadratic) elements are the default choice within COMSOL Multiphysics for all of these types of equations. Transport-type: Transport-type PDEs are used to describe chemical species transport as well as heat transfer in fluids and porous media. The governing equations here are quite similar to Poisson's equation, with one extra term — a velocity vector: \[\nabla \cdot ( -D \nabla u + \mathbf{v} u ) = f.\] The extra velocity term results in a governing equation that is closer to a first-order PDE. The velocity field is usually computed by solving the Navier-Stokes equation, which is itself a type of transport equation that describes fluid flow. It is often the case that, for such problems, there is a high Péclet number or Reynolds number. This is one of the reasons why the default choice is to use first-order (linear) elements for these PDEs. Note that for fluid flow problems where the Reynolds number is low, the default is to use the so-called P2 + P1 elements that solve for the fluid velocity via second-order discretization and solve for the pressure via first-order discretization.
The P2 + P1 elements are the default for the Creeping Flow, Brinkman Equations and Free and Porous Media Flow interfaces. This is also the case for the Two-Phase Flow, Level Set and Two-Phase Flow, Phase Field interfaces. Further, any type of transport or fluid flow interface uses stabilization to solve the problem more quickly and robustly. For an overview of stabilization methods, check out our earlier blog post “Understanding Stabilization Methods”. So how can we check the default settings for the element order used by a particular physics interface? Within the Model Builder, we first need to go to the Show menu and toggle on Discretization. After doing so, you will see a Discretization section within the physics interface settings, as shown in the screenshot below. Screenshot showing how to view the element order of a physics interface. Keep in mind that as long as you're working with only single physics, it typically does not matter too much which element order you use as long as you remember to perform a mesh convergence study. The solutions with a different element order may require quite varying amounts of memory and time to solve, but they will all converge toward the same solution with sufficient mesh refinement. However, when we start dealing with multiphysics problems, things become a little bit more complicated. Next, we'll look at two special cases of multiphysics modeling where you should be aware of element order. Conjugate Heat Transfer: Heat Transfer in Solids with Heat Transfer in Fluids COMSOL Multiphysics includes a predefined multiphysics coupling between heat transfer and fluid flow that is meant for simulating the temperature of objects that are cooled or heated by a surrounding fluid. The Conjugate Heat Transfer interface (and the functionally equivalent Non-Isothermal Flow interface) is available with the Heat Transfer Module and the CFD Module for both laminar and turbulent fluid flow. The Conjugate Heat Transfer interface is composed of two physics interfaces: the Heat Transfer interface and the Fluid Flow interface. The Fluid Flow interface (whether laminar or turbulent) uses linear element order to solve for the fluid velocity and pressure fields. The Heat Transfer interface solves for the temperature field in the fluid as well as the temperature field in the solid. The same linear element discretization is used throughout the temperature field in both the solid and fluid domains. Now, if you are setting up a conjugate heat transfer problem by manually adding the various physics interfaces, you do need to be careful. If you start with the Heat Transfer in Solids interface and add a Heat Transfer in Fluids domain feature to the interface, a second-order discretization will be used for the temperature field by default. This is not generally advised, as it will require more memory than a first-order temperature discretization. The default first-order discretization of the fluid flow field justifies using first-order elements throughout the model. It is also worth mentioning a related multiphysics coupling: the Local Thermal Non-Equilibrium interface available with the Heat Transfer Module. This interface is designed to solve for the temperature field of a fluid flowing through a porous matrix medium as well as the temperature of the matrix through which the fluid flows. That is, there are two different temperatures, the fluid and the solid matrix temperature, at each point in space. The interface also uses first-order discretization for both of the temperatures.
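Before moving on to the thermal stress case, the earlier claim that linear and quadratic elements converge to the same answer, only at different rates, can be illustrated with a small self-contained experiment (plain NumPy, not COMSOL; it measures piecewise-polynomial interpolation error, which under standard assumptions is a reasonable proxy for the finite element approximation error, and all names are mine):

import numpy as np

def interp_error(n_elem, order):
    """Max error of piecewise-polynomial interpolation of sin(x) on [0, pi]."""
    x_fine = np.linspace(0.0, np.pi, 20001)
    edges = np.linspace(0.0, np.pi, n_elem + 1)
    err = 0.0
    for a, b in zip(edges[:-1], edges[1:]):
        # order + 1 equispaced nodes per element: 2 for linear, 3 for quadratic.
        xn = np.linspace(a, b, order + 1)
        coeffs = np.polyfit(xn, np.sin(xn), order)   # local interpolating polynomial
        mask = (x_fine >= a) & (x_fine <= b)
        err = max(err, np.max(np.abs(np.polyval(coeffs, x_fine[mask]) - np.sin(x_fine[mask]))))
    return err

for order, label in [(1, "linear"), (2, "quadratic")]:
    errs = np.array([interp_error(n, order) for n in (4, 8, 16, 32)])
    rates = np.log2(errs[:-1] / errs[1:])
    print(label, ["%.1e" % e for e in errs], "observed rates:", np.round(rates, 2))

# Both error sequences go to zero, but roughly like h^2 for the linear basis
# and h^3 for the quadratic basis: the same answer, reached at different rates.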
Thermal Stress: Heat Transfer in Solids with Solid Mechanics The other common case where a multiphysics coupling uses different element orders from a single-physics problem is when computing thermal stresses. For the Thermal Stress multiphysics coupling, the default is to use linear discretization for the temperature and quadratic discretization for the structural displacements. To understand why this is so, we can look at the governing Poisson-type PDE for linear elasticity (written here without body loads): \[\nabla \cdot \left( \mathbf{C} : \mathbf{\epsilon} \right) = \mathbf{0},\] where $\mathbf{C}$ is the stiffness tensor and $\mathbf{\epsilon}$ is the strain tensor. For a problem where temperature variation affects stresses, the strain tensor is \[\mathbf{\epsilon} = \frac{1}{2}\left( \nabla \mathbf{u} + (\nabla \mathbf{u})^T \right) - \mathbf{\alpha}\left( T - T_0 \right),\] where $\mathbf{\alpha}$ is a tensor containing the coefficients of thermal expansion, $T$ is the temperature, $T_0$ is the strain-free reference temperature, and $\mathbf{u}$ is the structural displacement field. By default, we solve for the structural displacements using quadratic discretization, but we can see from the equation above that the strains are computed by taking the gradients of the displacement fields. This lowers the discretization order of the strains to a linear order. Hence, the temperature field discretization should also be lowered to a linear order. Closing Remarks We have discussed the meaning of discretization order in COMSOL Multiphysics and why it is relevant for two different multiphysics cases that frequently arise. If you are putting together your own multiphysics models, you'll want to keep element order in mind. Additionally, it is good to address what can happen if you build a multiphysics model with element orders that disagree with what we've outlined here. As it turns out, in many cases, the worst thing that will happen is that your model will simply require more memory and converge to a solution more slowly. In the limit of mesh refinement, any combination of element orders in different physics will give the same results, but the convergence may well be very slow and oscillatory. If you do observe any spatial oscillations in the solution (for example, a stress field that looks rippled or wavy), then check the element orders. Today's blog post is designed as a practical guideline for element selection in multiphysics problems within COMSOL Multiphysics. A more in-depth discussion of stability criteria for mixed (hybrid) finite element methods can be found in many texts, such as Concepts and Applications of Finite Element Analysis by Robert D. Cook, David S. Malkus, Michael E. Plesha, and Robert J. Witt.
S Muralithar Articles written in Pramana – Journal of Physics Volume 55 Issue 3 September 2000 pp L471-L478 Rapid Communication Excited states of 63Cu were populated via the $^{52}{\rm Cr} + {}^{16}{\rm O}$ (65 MeV) reaction using the gamma detector array equipped with charged particle detector array for reaction channel separation. On the basis of $\gamma-\gamma$ coincidence relations and angular distribution ratios, a level scheme was constructed up to $E_{x} = 7$ MeV and $J^{\pi} = 23/2^{(+)}$. The decay scheme deduced was interpreted in terms of shell model calculations, with a restricted basis of the $f_{5/2}$, $p_{3/2}$, $p_{1/2}$, $g_{9/2}$ orbitals outside a $^{56}_{28}$Ni core. Volume 75 Issue 2 August 2010 pp 317-331 Accelerators and Instrumentation for Nuclear Physics N Madhavan S Nath T Varughese J Gehlot A Jhingan P Sugathan A K Sinha R Singh K M Varier M C Radhakrishna E Prasad S Kalkal G Mohanto J J Das Rakesh Kumar R P Singh S Muralithar R K Bhowmik A Roy Rajesh Kumar S K Suman A Mandal T S Datta J Chacko A Choudhury U G Naik A J Malyadri M Archunan J Zacharias S Rao Mukesh Kumar P Barua E T Subramanian K Rani B P Ajith Kumar K S Golda Hybrid recoil mass analyzer (HYRA) is a unique, dual-mode spectrometer designed to carry out nuclear reaction and structure studies in heavy and medium-mass nuclei using gas-filled and vacuum modes, respectively and has the potential to address newer domains in nuclear physics accessible using high energy, heavy-ion beams from superconducting LINAC accelerator (being commissioned) and ECR-based high current injector system (planned) at IUAC. The first stage of HYRA is operational and initial experiments have been carried out using gas-filled mode for the detection of heavy evaporation residues and heavy quasielastic recoils in the direction of primary beam. Excellent primary beam rejection and transmission efficiency (comparable with other gas-filled separators) have been achieved using a smaller focal plane detection system. There are plans to couple HYRA to other detector arrays such as Indian national gamma array (INGA) and $4\pi$ spin spectrometer for ER tagged spectroscopic/spin distribution studies and for focal plane decay measurements. Volume 79 Issue 3 September 2012 pp 403-415 High-spin states of 216Ra $(Z = 88,N = 128)$ have been investigated through 209Bi( 10B, 3n) reaction at an incident beam energy of 55 MeV and 209Bi( 11B, 4n) reaction at incident beam energies ranging from 65 to 78 MeV. Based on $\gamma \gamma$ coincidence data, the level scheme for 216Ra has been considerably extended up to $\sim 33\hbar$ spin and 7.2 MeV excitation energy in the present experiment with placement of 28 new 𝛾-transitions over what has been reported earlier. Tentative spin-parity assignments are done for the newly proposed levels on the basis of the DCO ratios corresponding to strong gates. Empirical shell model calculations were carried out to provide an understanding of the underlying nuclear structure. Volume 82 Issue 4 April 2014 pp 683-696 Three different types of experiments have been performed to explore the complete and incomplete fusion dynamics in heavy-ion collisions. In this respect, first experiment for the measurement of excitation functions of the evaporation residues produced in the 20Ne+ 165Ho system at projectile energy ranges ≈2–8 MeV/nucleon has been done. Measured cumulative and direct crosssections have been compared with the theoretical model code PACE-2, which takes into account only the complete fusion process. 
It has been observed that the incomplete fusion fraction is sensitively dependent on the projectile energy and the mass asymmetry between the projectile and target systems. A second experiment, measuring the forward recoil range distributions of the evaporation residues produced in the $^{20}$Ne + $^{165}$Ho system at projectile energy ≈8 MeV/nucleon, has been done. It has been observed that some evaporation residues show additional peaks in the measured forward recoil range distributions at cumulative thicknesses relatively smaller than the expected range of the residues produced via complete fusion. The results indicate the occurrence of incomplete fusion involving the breakup of $^{20}$Ne into $^{4}$He + $^{16}$O and/or $^{8}$Be + $^{12}$C, followed by fusion of one of the fragments with the target nucleus $^{165}$Ho. A third experiment, measuring the spin distribution of the evaporation residues produced in the $^{16}$O + $^{124}$Sn system at projectile energy ≈6 MeV/nucleon, showed that the residues produced as incomplete fusion products, associated with fast $\alpha$ and $2\alpha$-emission channels observed in the forward cone, are distinctly different from the residues produced as complete fusion products. From the spin distribution of the evaporation residues it was also inferred that in incomplete fusion reaction channels the input angular momentum ($J_0$) increases with fusion incompleteness when compared to complete fusion reaction channels. The present observations clearly show that the production of fast forward $\alpha$-particles arises from relatively larger angular momentum in the entrance channel, leading to peripheral collisions. Volume 82 Issue 4 April 2014 pp 769-778 A multidetector gamma array (GDA) for studying nuclear structure was built with ancillary devices, namely a gamma multiplicity filter and a charged particle detector array. This facility was designed for in-beam gamma spectroscopy measurements in fusion evaporation reactions at the Inter-University Accelerator Centre, New Delhi. A description of the facility and its in-beam performance, together with two experimental studies, is presented. This array was used in a number of nuclear spectroscopic and reaction investigations.
Fractal Weyl bounds and Hecke triangle groups 1. Laboratoire de Mathématiques D'avignon, Université d'Avignon, 301 rue Baruch de Spinoza, 84916 Avignon Cedex, France 2. University of Bremen, Department 3 - Mathematics, Bibliothekstr. 5, 28359 Bremen, Germany 3. Institute for Mathematics, University of Jena, Ernst-Abbe-Platz 2, 07743 Jena, Germany Let $ \Gamma_w $ be a non-cofinite Hecke triangle group with cusp width $ w>2 $ and let $ \varrho\colon\Gamma_w\to U(V) $ be a finite-dimensional unitary representation of $ \Gamma_w $. In this note we announce a new fractal upper bound for the Selberg zeta function of $ \Gamma_w $ twisted by $ \varrho $. In strips parallel to the imaginary axis and bounded away from the real axis, the Selberg zeta function is bounded by $ \exp\left( C_{\varepsilon} \vert s\vert^{\delta + \varepsilon} \right) $, where $ \delta = \delta_w $ denotes the Hausdorff dimension of the limit set of $ \Gamma_w. $ This bound implies fractal Weyl bounds on the resonances of the Laplacian for any geometrically finite surface $ X = \widetilde{\Gamma}\backslash \mathbb{H}^2 $ whose fundamental group $ \widetilde{\Gamma} $ is a finite index, torsion-free subgroup of $ \Gamma_w $. Keywords: Fractal Weyl bound, resonances, hyperbolic surfaces of infinite area, Hecke triangle groups, Selberg zeta function, transfer operator. Mathematics Subject Classification: Primary: 58J50; Secondary: 37C30, 37D35, 11M36. Citation: Frédéric Naud, Anke Pohl, Louis Soares. Fractal Weyl bounds and Hecke triangle groups. Electronic Research Announcements, 2019, 26: 24-35. doi: 10.3934/era.2019.26.003
A particle moves along the x-axis so that at time t its position is given by $x(t) = t^3-6t^2+9t+11$. During what time intervals is the particle moving to the left? So I know that we need the velocity for that, and we can get that after taking the derivative, but I don't know what to do after that. The velocity would then be $v(t) = 3t^2-12t+9$. How could I find the intervals? Fix $c\in\{0,1,\dots\}$, let $K\geq c$ be an integer, and define $z_K=K^{-\alpha}$ for some $\alpha\in(0,2)$. I believe I have numerically discovered that $$\sum_{n=0}^{K-c}\binom{K}{n}\binom{K}{n+c}z_K^{n+c/2} \sim \sum_{n=0}^K \binom{K}{n}^2 z_K^n \quad \text{ as } K\to\infty$$ but cannot ... So, the whole discussion is about some polynomial $p(A)$, for $A$ an $n\times n$ matrix with entries in $\mathbf{C}$, and eigenvalues $\lambda_1,\ldots, \lambda_k$. Anyways, part (a) is talking about proving that $p(\lambda_1),\ldots, p(\lambda_k)$ are eigenvalues of $p(A)$. That's basically routine computation. No problem there. The next bit is to compute the dimension of the eigenspaces $E(p(A), p(\lambda_i))$. Seems like this bit follows from the same argument. An eigenvector for $A$ is an eigenvector for $p(A)$, so the rest seems to follow. Finally, the last part is to find the characteristic polynomial of $p(A)$. I guess this means in terms of the characteristic polynomial of $A$. Well, we do know what the eigenvalues are... The so-called Spectral Mapping Theorem tells us that the eigenvalues of $p(A)$ are exactly the $p(\lambda_i)$. Usually, by the time you start talking about complex numbers you consider the real numbers as a subset of them, since a and b are real in a + bi. But you could define it that way and call it a "standard form" like ax + by = c for linear equations :-) @Riker "a + bi where a and b are integers" Complex numbers a + bi where a and b are integers are called Gaussian integers. I was wondering if it is easier to factor in a non-UFD than it is to factor in a UFD. I can come up with arguments for that, but I also have arguments in the opposite direction. For instance: it should be easier to factor when there are more possibilities (multiple factorizations in a non-UFD... Does anyone know if $T: V \to R^n$ is an inner product space isomorphism if $T(v) = (v)_S$, where $S$ is a basis for $V$? My book isn't saying so explicitly, but there was a theorem saying that an inner product isomorphism exists, and another theorem kind of suggesting that it should work. @TobiasKildetoft Sorry, I meant that they should be equal (accidentally sent this before writing my answer. Writing it now) Isn't there this theorem saying that if $v,w \in V$ ($V$ being an inner product space), then $||v|| = ||(v)_S||$? (where the left norm is defined as the norm in $V$ and the right norm is the euclidean norm) I thought that this would somehow result from isomorphism @AlessandroCodenotti Actually, such an $f$ in fact needs to be surjective. Take any $y \in Y$; the maximal ideal of $k[Y]$ corresponding to that is $(Y_1 - y_1, \cdots, Y_n - y_n)$. The ideal corresponding to the subvariety $f^{-1}(y) \subset X$ in $k[X]$ is then nothing but $(f^* Y_1 - y_1, \cdots, f^* Y_n - y_n)$. If this is empty, the weak Nullstellensatz kicks in to say that there are $g_1, \cdots, g_n \in k[X]$ such that $\sum_i (f^* Y_i - y_i)g_i = 1$. Well, better to say that $(f^* Y_1 - y_1, \cdots, f^* Y_n - y_n)$ is the trivial ideal I guess.
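For the first question in this exchange (the particle on the x-axis), the missing step is just solving $v(t)<0$; here is a minimal SymPy check, written only as an illustration:

from sympy import symbols, diff, factor, solve

t = symbols('t', real=True)
x = t**3 - 6*t**2 + 9*t + 11
v = diff(x, t)              # velocity: 3*t**2 - 12*t + 9

print(factor(v))            # 3*(t - 1)*(t - 3)
print(solve(v < 0, t))      # (1 < t) & (t < 3)

So the particle moves to the left exactly for $1<t<3$, where the factored velocity $3(t-1)(t-3)$ is negative.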
Hmm, I'm stuck again O(n) acts transitively on S^(n-1) with stabilizer at a point O(n-1) For any transitive G action on a set X with stabilizer H, G/H $\cong$ X set theoretically. In this case, as the action is a smooth action by a Lie group, you can prove this set-theoretic bijection gives a diffeomorphism
Group of Invertible Matrices Over a Finite Field and its Stabilizer Problem 108 Let $\F_p$ be the finite field of $p$ elements, where $p$ is a prime number. Let $G_n=\GL_n(\F_p)$ be the group of $n\times n$ invertible matrices with entries in the field $\F_p$. As usual in linear algebra, we may regard the elements of $G_n$ as linear transformations on $\F_p^n$, the $n$-dimensional vector space over $\F_p$. Therefore, $G_n$ acts on $\F_p^n$. Let $e_n \in \F_p^n$ be the vector $(1,0, \dots,0)$. (The so-called first standard basis vector in $\F_p^n$.) Find the size of the $G_n$-orbit of $e_n$, and show that $\Stab_{G_n}(e_n)$ has order $|G_{n-1}|\cdot p^{n-1}$. Conclude by induction that \[|G_n|=p^{n^2}\prod_{i=1}^{n} \left(1-\frac{1}{p^i} \right).\] Proof. Let $\calO$ be the orbit of $e_n$ in $\F_p^n$. We claim that $\calO=\F_p^n \setminus \{0\}$, hence \[|\calO|=p^n-1.\] To prove the claim, let $a_1 \in \F_p^n$ be a nonzero vector. Then we can extend this vector to a basis of $\F_p^n$, that is, there are $a_2, \dots, a_n \in \F_p^n$ such that $a_1,\ a_2, \dots, a_n$ is a basis of $\F_p^n$. Since these vectors form a basis, the matrix $A=[a_1 \dots a_n]$ is invertible, that is, $A \in G_n$. We have \[Ae_n=a_1.\] Thus $a_1\in \calO$. It is clear that $0 \not \in \calO$. Thus we have proved the claim. Next we show that \[|\Stab_{G_n}(e_n)|=|G_{n-1}|\cdot p^{n-1}. \tag{*} \] Note that $A \in \Stab_{G_n}(e_n)$ if and only if $A e_n=e_n$. Thus $A$ is of the form \[ \left[\begin{array}{r|r} 1 & A_2 \\ \hline \mathbf{0} & A_1 \end{array} \right], \] where $A_1$ is an $(n-1)\times (n-1)$ matrix, $A_2$ is a $1\times (n-1)$ matrix, and $\mathbf{0}$ is the $(n-1) \times 1$ zero matrix. Since $A$ is invertible, the matrix $A_1$ must be invertible as well, hence $A_1 \in G_{n-1}$. The matrix $A_2$ can be anything. Thus there are $|G_{n-1}|$ choices for $A_1$ and $p^{n-1}$ choices for $A_2$. In total, there are $|G_{n-1}|p^{n-1}$ possible choices for $A \in \Stab_{G_n}(e_n)$. This proves (*). Finally we prove that \[|G_n|=p^{n^2}\prod_{i=1}^{n} \left(1-\frac{1}{p^i} \right)\] by induction on $n$. When $n=1$, we have \[|G_1|=|\F_p\setminus \{0\}|=p-1=p\left(1-\frac{1}{p} \right).\] Now we assume that the formula is true for $n-1$. By the orbit-stabilizer theorem, we have \[ |G_n: \Stab_{G_n}(e_n)|=|\calO|.\] Since $G_n$ is finite, we have \begin{align*} |G_n|&=|\Stab_{G_n}(e_n)||\calO|\\ &=(p^n-1)|G_{n-1}|p^{n-1}\\ &=(p^n-1)p^{n-1}\cdot p^{(n-1)^2}\prod_{i=1}^{n-1} \left(1-\frac{1}{p^i} \right) \text{ by the induction hypothesis}\\ &=p^n\left(1-\frac{1}{p^n} \right)p^{n-1}p^{n^2-2n+1} \prod_{i=1}^{n-1} \left(1-\frac{1}{p^i} \right) \\ &=p^{n^2}\prod_{i=1}^{n} \left(1-\frac{1}{p^i} \right). \end{align*} Thus the formula is true for $n$ as well. By induction, the formula is true for any $n$.
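The order formula is easy to sanity-check by brute force for very small $n$ and $p$ (a throwaway sketch of my own using SymPy and itertools, not part of the original solution):

from itertools import product
from sympy import Matrix, Rational

def count_gl(n, p):
    """Count invertible n x n matrices over F_p by enumeration (tiny n, p only)."""
    return sum(1 for entries in product(range(p), repeat=n * n)
               if Matrix(n, n, list(entries)).det() % p != 0)

def formula(n, p):
    """p^(n^2) * prod_{i=1}^{n} (1 - 1/p^i), as derived above."""
    result = Rational(p) ** (n * n)
    for i in range(1, n + 1):
        result *= 1 - Rational(1, p ** i)
    return result

for n, p in [(2, 2), (2, 3), (3, 2)]:
    print(n, p, count_gl(n, p), formula(n, p))   # both give 6, 48, 168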
April 23rd, 2016, 05:13 AM # 11 Banned Camp Joined: Dec 2012 Posts: 1,028 Thanks: 24 Luckily my last post was blown away for an unknown reason.... The point is show that a proportional condition on $Y$ (here Delta/Delta=1) on a curve $Y=X^2$, that has a first linear derivate can again "generate" a proportional condition on $X$, here $(C-A)/(C-B)= p/q$ with $(p,q)\in N$. While this cannot happen on $Y=X^n$ wiht $n>2$ because the non linear first derivates define an irrational ratio between the "middle" integer height ($C^n\in N$) and the correspondent irrational X ($C$), that is his n-th root. Math asap.... but picture already tell the story... Happy to hear that Mr. Whiles becomes Sir. and 750KUsd reach, while we, humans, are already searching for a more usefull solution to FLT problem ;-P Last edited by complicatemodulus; April 23rd, 2016 at 05:18 AM. May 14th, 2016, 11:39 PM # 12 Banned Camp Joined: Dec 2012 Posts: 1,028 Thanks: 24 I return in math remembering my first post in this topic: Part 1) FLT rewritten with Sum is: $\displaystyle A^3 = \sum_{X=B+1}^{C} (3X^2-3X+1)$ or $\displaystyle A^3 = \sum_{X=1}^{C-B} (3(X+B)^2-3(X+B)+1)$ or $\displaystyle A^3 = \sum_{X=1}^{C-B} (3X^2+6BX+3B^2-3X-3B +1)$ or $\displaystyle A^3 = \sum_{X=1}^{C-B} ((3X^2-3X +1) +6BX-3B +3B^2) $ or $\displaystyle A^3 = (C-B)^3 + 3B \sum_{X=1}^{C-B} (2X-1) +3B^2(C-B)$ or $\displaystyle A^3 = (C-B)^3 + 3B (C-B)^2 +3B^2(C-B)$ so $(C-B)>=1$ is a factor of A Part 2) for the same reason/process will be: $\displaystyle B^3 = (C-A)^3 + 3A (C-A)^2 +3A^2(C-A)$ so $(C-A)>=1$ is a factor of B This is true for any $n>=2$ since rising n the limits are the same and change just the term $Mn= (X^n-(X-1)^n)$ Part 3) Why for n=2 there is a solution: Remembering via Step Sum we prove we are able to represent also Rational value/powers, for n=2 we can write: (1) $\displaystyle \sum_{X=1}^{A} (2X-1) = \sum_{X=B+1}^{C} (2X-1)= \sum_{X=1}^{C-B} (2(X+B)-1)$ Introducing the Step Sum, keeping: $K=A$ and calling $x=X/A$ I already show nothing change if we will rewrite the (1) as the Step Sum, Step $x_1=1/A$, $x_2= 2/A$; $x_3= 3/A$ etc...: $\displaystyle \sum_{x=1/A}^{A} (2x/A-1/A^2) = \sum_{x=1/A}^{C-B} (2(x+B)/A-1/A^2))$ And now we can divide both terms by $A^2$, dividing the upper limit of the Step Sum by $A$: $\displaystyle \sum_{x=1/A}^{1} (2x/A-1/A^2) = \sum_{x=1/A}^{(C-B)/A)} (2(x+B)/A-1/A^2))$ But remembering the is: $ A= \pi_1 *(C-B) $ And solving and grouping the new known square: $\displaystyle 1= \sum_{x=1/A}^{(C-B)/A)} (2x/A-1/A^2)) + 2B/A *((C-B)/A) $ or: $\displaystyle 1 = ((C-B)/A)^2 + 2B(C-B)/(A^2) $ or: $\displaystyle 1 = 1/{\pi_1}^2 + 2B/((C-B)* \pi_1) $ or: $\displaystyle {{\pi_1}^2} * (C-B) = (C-B)+2B $ or: $\displaystyle {\pi_1}^2 = (C+B)/(C-B) $ that fit for many values: f.ex $\displaystyle {\pi_1}^2 = (5+4)/(5-4) = 9$ $\displaystyle {\pi_1}=A = 3 $ As already written it works for any $A>=3$ infact in case "A" is ODD: $A$; $B= (A^2-1)/2$ ; $C=(A^2+1)/2$ in case "A" is EVEN: $A$; $B= (A/2)^2 -1)/2$ ; $C=(A/2)^2+1$ Part 4) Why Fermat is right for $n=3$ Because doing the same tedious "process" starting from: $\displaystyle A^3 = \sum_{X=B+1}^{C} (3X^2-3X+1)$ Sorry for this mental torture... I'll jump directly to the reduced Step Sum already divided by $A^3$: $\displaystyle \sum_{x=1/A}^{1} (3x^2/A-3x/A^2+ 1/A^3) = \sum_{x=1/A}^{(C-B)/A} (3(x+B)^2/A -3(x+B)/A^2+1/A^3))$ so.... 
remembering that $ A= \pi_1 *(C-B) $ or $ B= \pi_2 *(C-A) $, we arrive at the inelegant solution: $\displaystyle {\pi_1}^3- 3B^2{\pi_1}/(C-B) - 3B-1 =0 $ or (if we start from $B^3=C^3- A^3$) at: $\displaystyle {\pi_2}^3- 3A^2{\pi_2}/(C-A) - 3A-1 =0 $ which is not elegant AT ALL, since as $n$ rises we have to repeat all this torture again... ...but I hope it is much easier here to "see" the dust (what I call the mixed product) that locks the gears as $n$ rises... Waiting for your kind check... and probably a way to add some shortcut using your more powerful math... Thanks Ciao Stefano Last edited by complicatemodulus; May 14th, 2016 at 11:44 PM. May 15th, 2016, 02:01 AM # 13 Banned Camp Joined: Dec 2012 Posts: 1,028 Thanks: 24 One minute more: let me keep a hope for the final elegant proof just by remembering the binomial expansion rule... Thanks Ciao Stefano May 15th, 2016, 09:21 AM # 14 Senior Member Joined: Sep 2010 Posts: 221 Thanks: 20 $A^n=(C-B)[C^{n-1}+\dots+B^{n-1}]=(C-B)\{(C-B)^{n-1}+nCB[C^{n-3}-\dots+B^{n-3}]\}$. $C-B$ and the polynomial in brackets are coprime unless $C-B$ is divisible by $n$. The rest of its factors must be to the power $n$ (including $n=3$) and divide $A$.
Another reason for the popularity of statistical significance testing is probably that complicated mathematical procedures lend an air of scientific objectivity to conclusions. —Ronald P. Carver Significance testing is all about whether the outcome would be too much of a coincidence for the hypothesis to be true. But how much of a coincidence is too much? Should we reject a hypothesis when we find results that are significant at the \(.05\)-level? At the \(.01\)-level? Critics say this decision is subjective. In the social sciences, researchers have mainly adopted \(.05\) as the cutoff, while in other sciences the convention is to use \(.01\). But we saw there’s nothing special about these numbers. Except that they’re easy to estimate without the help of a computer. It makes a big difference what cutoff we choose, though. The lower the cutoff, the less often a result will be deemed significant. So a lower cutoff means fewer hypotheses ruled out, which means fewer scientific discoveries. Think of all the studies of potential cancer treatments being done around the world. In each study, the researcher is testing the hypothesis that their treatment has no effect, in the hopes of disproving that hypothesis. They’re hoping enough patients get better in their study to show the treatment genuinely helps. If just a few more patients improve with their treatment compared to a placebo, it could just be a fluke. But if a lot more improve, that means the treatment is probably making a real difference. The lower the significance level, the more patients have to improve before they can call their study’s results “significant”. So a lower significance level means fewer treatments will be approved by the medical community. That might seem like a bad thing, but it also has an upside. It means fewer bogus treatments being adopted. Medical treatments are often expensive, painful, or dangerous. So it’s a serious problem to approve ones that don’t actually help. But sometimes it just leads to silly dietary choices. After all, occasionally a study of a useless treatment will have lots of patients improving anyway, just by luck. When that happens, the medical community adopts a treatment that doesn’t actually work. And there might be lots of studies out there experimenting with treatments that don’t actually help. So if we do enough studies, we’re bound to get flukey results in some of them, and adopt a treatment that doesn’t actually work by mistake. Suppose for example there are only two types of potential treatment being studied. The useless kind just leave the patient’s recovery up to chance: it’s a \(50\%\) shot. But the effective kind increase their chances of recovery by \(15\%\): they have a \(65\%\) chance of getting better with these treatments. Let’s also imagine we’re studying \(200\) different treatments, \(160\) of which are actually useless and \(40\) of which are effective. Since we don’t know which are which, we’ll run \(200\) studies, one for each treatment. And just to make things concrete, let’s suppose each study has \(n = 100\) subjects enrolled. What will happen if our researchers set the cutoff for statistical significance at \(.05\)? Only a few of the bogus treatments will be approved. Just \(5\%\) of those studies will get flukey, statistically significant results. And half of those will look like they’re harming patients’ chances of recovery, rather than helping. So only \(2.5\%\) of the \(160\) bogus treatments will be approved, which is \(4\) treatments. 
But also, a good number of the genuine treatments will be missed. Only about \(85\%\) will be discovered as it turns out: see Figure 20.1. Figure 20.2: The results of our \(200\) studies. Green pills represent genuinely effective treatments, red pills represent useless treatments. The dashed green line represents the treatments we approve: only \(34\) out of \(38\) of these are genuinely effective. As a result, only about \(89\%\) of the treatments we approve will actually be genuinely effective. We can’t see this from Figure 20.1 because it doesn’t show the base rates. We have to turn to the kind of reasoning we did in the taxicab problem instead. Figure 20.2 shows the results. Since \(2.5\%\) of the \(160\) useless treatments (red pills) will be approved, that’s \(4\) bogus “discoveries”. And since \(85\%\) of the \(40\) genuine ones (green pills) will be approved, that’s about \(34\) genuine treatments discovered. So only about \(34/38 \approx 89\%\) of our approved treatments actually work. We could improve this percentage by lowering the threshold for significance to \(.01\). But then only half of the genuine treatments would be identified by our studies (Figure 20.3). Figure 20.4: With a stricter significance cutoff of \(.01\), we make fewer “bogus” discoveries. But we miss out on a lot of genuine discoveries too. Figure 20.4 shows what happens now: about \(95\%\) of our approved treatments will be genuine. We discover half of the \(40\) genuine treatments, which is \(20\). And \(.005\) of the \(160\) useless treatments is \(.8\), which we’ll round up to \(1\) for purposes of the diagram. So \(20/21 \approx .95\) of our approved treatments are genuinely effective. But we’ve paid a dear price for this increase in precision: we’ve failed to identify half the treatments there are to be discovered. We’ve missed out on a lot of potential benefit to our patients. Notice, however, that if bogus treatments were rarer the problem wouldn’t be so pressing. For example, Figure 20.5 shows what would happen if half the treatments being studied were bogus, instead of \(80\%\). Then we could have stuck with the \(.05\) threshold and still had very good results: we would discover about \(85\) genuine treatments and only approved about \(3\) bogus ones, a precision of about \(97\%\). There are two lessons here. First, lowering the threshold for significance has both benefits and costs. A lower threshold means fewer false discoveries, but it means fewer genuine discoveries too. Second, the base rate informs our decision about how to make this tradeoff. The more false hypotheses there are to watch out for, the stronger the incentive to use a lower threshold.10 Is the real base rate low enough that medical researchers actually need to worry? Yes, according to this research. Figure 20.5: A significance cutoff of \(.05\) does much better if the base rate is more favourable. Bayesian critics of significance testing conclude that, when we choose where to set the cutoff, we’re choosing based on our “priors”. We’re relying on assumptions about the base rate, the prior probability of the null hypothesis. So, these critics say, frequentism faces exactly the same subjectivity problem as Bayesianism. Bayesians use Bayes’ theorem to evaluate hypothesis \(H\) in light of evidence \(E\): \[ \p(H \given E) = \p(H) \frac{\p(E \given H)}{\p(E)} .\] But we saw there’s no recipe for calculating the prior probability of \(H\), \(\p(H)\). You just have to start with your best guess about how plausible \(H\) is. 
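The bookkeeping in this example is simple enough to reproduce in a few lines (my own sketch of the arithmetic above; the power values 0.85 and 0.5 are the ones quoted in the text):

def precision(n_useless, n_genuine, alpha, power):
    """Fraction of approved treatments that are genuinely effective.

    Only half of the flukey significant results favour the treatment,
    hence the alpha / 2 factor for the useless treatments.
    """
    false_discoveries = n_useless * (alpha / 2)
    true_discoveries = n_genuine * power
    return true_discoveries / (false_discoveries + true_discoveries)

# Cutoff .05, power 0.85: about 34 genuine vs 4 bogus approvals, precision ~0.89.
print(round(precision(160, 40, 0.05, 0.85), 2))
# Cutoff .01, power 0.5: 20 genuine vs 0.8 bogus approvals, precision ~0.96
# (the text rounds the 0.8 expected false discoveries up to 1, giving 20/21 ~ 0.95).
print(round(precision(160, 40, 0.01, 0.5), 2))
# With a 50/50 base rate, the .05 cutoff already gives ~0.97 precision.
print(round(precision(100, 100, 0.05, 0.85), 2))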
Likewise, frequentists have to start with their best guess about how many of the potential cancer treatments being studied are bogus, and how many are real. That’s how we decide where to set the cutoff for statistical significance. From a Bayesian point of view, significance testing focuses on just one term in Bayes’ theorem, the numerator \(\p(E \given H)\). We suppose the hypothesis is true, and then consider how likely the sort of outcome we’ve observed is. But Bayes’ theorem tells us we need more information to find what we really want, namely \(\p(H \given E)\). And for that, we need to know \(\p(H)\), how plausible \(H\) is to begin with. So significance testing is based on a mistake, according to Bayesian critics. It ignores essential base rate information, namely how many of the hypotheses we study are true. And this can lead to very wrong results, as we learned from the taxicab problem in Chapter 8. Figure 20.6: Harold Jeffreys (1891–1989) first raised the problem known as Lindley’s paradox in \(1939\). Dennis Lindley labeled it a paradox in \(1957\), hence the name. It is sometimes called the Jeffreys-Lindley paradox. This critique of frequentism is sharpened by a famous problem known as Lindley’s paradox. The tulip example is based on an example from Henry Kyburg’s book Logical Foundations of Statistical Inference. It also appears in Howson & Urbach’s Scientific Reasoning: A Bayesian Approach. Suppose a florist receives a large shipment of tulip bulbs with the label scratched off. The company that sent the shipment only sends two kinds of shipments. The first kind contains \(25\%\) red bulbs, the second kind has \(50\%\) red bulbs. The two kinds of shipment are equally common. So the store owner figures this shipment could be of either kind, with equal probability. To figure out which kind of shipment she has, she takes a sample of \(48\) bulbs and plants them to see what colour they grow. Of the \(48\) planted, \(36\) grow red. What should she conclude? Intuitively, this result fits much better with the \(50\%\) hypothesis than the \(25\%\) hypothesis. So she should conclude she got the second, \(50\%\) kind of shipment. It’s just a coincidence that well over half the bulbs in her experiment were red. But if she uses significance testing, she won’t get this result. In fact she’ll get an impossible result. Let’s see how. Figure 20.7: The result \(k = 36\) out of \(n = 48\) is easily statistically significant for the null hypothesis \(p = .25\). Our florist starts by testing the hypothesis that \(25\%\) of the bulbs in the shipment are red. She calculates \(\mu\) and \(\sigma\): \[ \begin{aligned} \mu &= np = (48)(1/4) = 12,\\ \sigma &= \sqrt{np(1-p)} = \sqrt{(48)(1/4)(3/4)} = 3. \end{aligned} \] The \(99\%\) range is \(12 \pm (3)(3)\), or from \(3\) to \(21\). Her finding of \(k = 36\) is nowhere close to this range, so the result is significant at the \(.01\) level. She rejects the \(25\%\) hypothesis. Figure 20.8: The result \(k = 36\) out of \(n = 48\) is also statistically significant for the null hypothesis \(p = .5\). So far so good, but what if she tests the \(50\%\) hypothesis too? She calculates the new \(\mu\) and \(\sigma\): \[ \begin{aligned} \mu &= np = (48)(1/2) = 24,\\ \sigma &= \sqrt{np(1-p)} = \sqrt{(48)(1/2)(1/2)} \approx 3.5. \end{aligned} \] So her result \(k = 36\) is also significant at the \(.01\) level! The \(99\%\) range is \(13.5\) to \(34.5\), which doesn’t include \(k = 36\). So she rejects the \(50\%\) hypothesis also. 
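The two significance checks the florist runs can be reproduced directly; this small sketch uses the same normal-approximation recipe as the text, with the 99% range taken as roughly $\mu \pm 3\sigma$:

from math import sqrt

n, k = 48, 36

for p in (0.25, 0.5):
    mu = n * p
    sigma = sqrt(n * p * (1 - p))
    lo, hi = mu - 3 * sigma, mu + 3 * sigma
    rejected = not (lo <= k <= hi)
    print(f"p = {p}: 99% range is about ({lo:.1f}, {hi:.1f}); k = {k} rejected: {rejected}")

# Both null hypotheses get rejected at the .01 level, even though they are
# the only two possibilities on offer.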
But now she has rejected the only two possibilities. There are only two kinds of shipment, and she’s ruled them both out. Something seems to have gone wrong! How did things go so wrong? Figure 20.9 shows what’s happening here. Neither hypothesis fits the finding \(k = 36\) very well: it’s an extremely improbable result on either hypothesis. This is why both hypotheses end up being rejected by a significance test. But one of these hypotheses still fits the finding much better than the other. The blue curve (\(p = .25\)) flatlines long before it gets to \(k = 36\), while the red curve (\(p = .5\)) is only close to flatlining. So the most plausible interpretation is that the shipment is half red bulbs (\(p = .5\)), and it’s just a fluke that we happen to have gotten much more than half red in our sample. Bayesians will happily point out the source of the trouble: our florist has ignored the prior probabilities. If we use Bayes’ theorem instead of significance testing, we’ll find that the store owner should believe the second hypothesis, which seems right. \(36\) red bulbs out of \(48\) fits much better with the \(50\%\) hypothesis than with the \(25\%\) hypothesis. How do we apply Bayes’ theorem in the tulip example? First we label our two hypotheses and the evidence: \[ \begin{aligned} H &= \mbox{$25\%$ of the bulbs are red},\\ \neg H &= \mbox{$50\%$ of the bulbs are red,}\\ E &= \mbox{Out of 48 randomly selected bulbs, 36 grew red.} \end{aligned} \] Because the two kinds of shipment are equally common, the prior probabilities of our hypotheses are: \[ \begin{aligned} \p(H) &= 1/2,\\ \p(\neg H) &= 1/2. \end{aligned} \] So we just need to calculate \(\p(E \given H)\) and \(\p(E \given \neg H)\). That’s actually not so easy, but with the help of a computer we get: \[ \begin{aligned} \p(E \given H) &\approx 4.7 \times 10^{-13},\\ \p(E \given \neg H) &\approx 2.5 \times 10^{-4}. \end{aligned} \] So we plug these numbers into Bayes’ theorem and get: \[ \begin{aligned} \p(H \given E) &= \frac{\p(E \given H)\p(H)}{\p(E \given H)\p(H) + \p(E \given \neg H)\p(\neg H)}\\ &\approx \frac{(4.7 \times 10^{-13})(1/2)}{(4.7 \times 10^{-13})(1/2) + (2.5 \times 10^{-4})(1/2)}\\ &\approx .000000002,\\ \p(\neg H \given E) &= \frac{\p(E \given \neg H)\p(\neg H)}{\p(E \given \neg H)\p(\neg H) + \p(E \given H)\p(H)}\\ &\approx \frac{(2.5 \times 10^{-4})(1/2)}{(2.5 \times 10^{-4})(1/2) + (4.7 \times 10^{-13})(1/2)}\\ &\approx .999999998. \end{aligned} \] Conclusion: the probability of the first hypothesis \(H\) has gone way down, from \(1/2\) to about \(.000000002\). But the probability of the second hypothesis \(\neg H\) has gone way up from \(1/2\) to about \(.999999998\)! So we should believe the second hypothesis, not reject it. According to Bayesian critics, this shows that significance testing is misguided. It ignores crucial background information. In this example, there were only two possible hypotheses, and they were equally likely. So whichever one fits the results best is supported by the evidence. In fact, the second hypothesis is strongly supported by the evidence, even though it fits the result quite poorly! Sometimes it makes more sense to reconcile ourselves to a coincidence than to reject the null hypothesis. Which of the following statements are true? Select all that apply. Suppose we have \(1,000\) coins and we are going to conduct an experiment on each one to determine which are biased. Each coin will be tossed \(100\) times. 
In each experiment, the null hypothesis is always that the coin is fair, and we will reject this hypothesis when the results of the experiment are significant at the \(.05\) level. Suppose half the coins are fair and half are not. Suppose also that when a coin is unfair, the probability of getting a result that is not statistically significant is \(0.2\). Suppose we are going to investigate \(1,000\) null hypotheses by running an experiment on each. In each experiment, we will reject the null hypothesis when the results are significant at the \(.01\) level. Suppose \(90\%\) of our hypotheses are false. Suppose also that when a null hypothesis is false, the results will not be statistically significant \(25\%\) of the time. True or false: it is possible for the results of an experiment to be significant at the \(.05\) level even though the posterior probability of the null hypothesis is \(\p(H \given E) = .99\). Suppose there are two types of urns, Type A and Type B. Type A urns contain \(1/5\) black balls, the rest white. Type B urns contain \(1/10\) black balls, the rest white. You have an urn that could be either Type A or Type B, you aren’t sure. You think it’s equally likely to be Type A as Type B. So you decide to use a significance test to find out. Your null hypothesis is that it’s a Type A urn. You draw \(25\) marbles at random, and \(13\) of them are black. Suppose the government is testing a new education policy. There are only two possibilities: either the policy will work and it will help high school students learn to write better \(3/4\) of the time, or it will have no effect and students’ writing will only improve \(1/2\) of the time, as usual. The government does a study of \(432\) students and finds that, under the new policy, \(285\) of them improved in their writing. Lindley’s paradox occurs when a significance test will direct us to reject the hypothesis even though the posterior probability of the hypothesis \(\p(H \given E)\) is high. Describe your own example of this kind of case.
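Returning to the tulip example worked through before these exercises: the two likelihoods and the posterior can be computed exactly with SciPy's binomial distribution (a sketch of my own; it reproduces the approximate figures quoted in the text rather than relying on them):

from scipy.stats import binom

n, k = 48, 36
prior_H, prior_notH = 0.5, 0.5          # the two shipment types are equally common

like_H = binom.pmf(k, n, 0.25)          # P(E | H):  25% red bulbs
like_notH = binom.pmf(k, n, 0.50)       # P(E | ~H): 50% red bulbs

evidence = like_H * prior_H + like_notH * prior_notH
post_H = like_H * prior_H / evidence
post_notH = like_notH * prior_notH / evidence

print(f"P(E|H)  ~ {like_H:.2e}")        # on the order of 1e-13
print(f"P(E|~H) ~ {like_notH:.2e}")     # on the order of 1e-4
print(f"P(H|E)  ~ {post_H:.2e}")        # tiny: the 25% hypothesis is ruled out
print(f"P(~H|E) ~ {post_notH:.10f}")    # essentially 1: keep the 50% hypothesis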
True or False Problems of Vector Spaces and Linear TransformationsThese are True or False problems.For each of the following statements, determine if it contains a wrong information or not.Let $A$ be a $5\times 3$ matrix. Then the range of $A$ is a subspace in $\R^3$.The function $f(x)=x^2+1$ is not in the vector space $C[-1,1]$ because […] Subspace Spanned by Trigonometric Functions $\sin^2(x)$ and $\cos^2(x)$Let $C[-2\pi, 2\pi]$ be the vector space of all real-valued continuous functions defined on the interval $[-2\pi, 2\pi]$.Consider the subspace $W=\Span\{\sin^2(x), \cos^2(x)\}$ spanned by functions $\sin^2(x)$ and $\cos^2(x)$.(a) Prove that the set $B=\{\sin^2(x), \cos^2(x)\}$ […] Linear Algebra Midterm 1 at the Ohio State University (3/3)The following problems are Midterm 1 problems of Linear Algebra (Math 2568) at the Ohio State University in Autumn 2017.There were 9 problems that covered Chapter 1 of our textbook (Johnson, Riess, Arnold).The time limit was 55 minutes.This post is Part 3 and contains […] Linear Transformation and a Basis of the Vector Space $\R^3$Let $T$ be a linear transformation from the vector space $\R^3$ to $\R^3$.Suppose that $k=3$ is the smallest positive integer such that $T^k=\mathbf{0}$ (the zero linear transformation) and suppose that we have $\mathbf{x}\in \R^3$ such that $T^2\mathbf{x}\neq \mathbf{0}$.Show […] Subspace of Skew-Symmetric Matrices and Its DimensionLet $V$ be the vector space of all $2\times 2$ matrices. Let $W$ be a subset of $V$ consisting of all $2\times 2$ skew-symmetric matrices. (Recall that a matrix $A$ is skew-symmetric if $A^{\trans}=-A$.)(a) Prove that the subset $W$ is a subspace of $V$.(b) Find the […]
Find a Matrix so that a Given Subset is the Null Space of the Matrix, hence it’s a Subspace Problem 252 Let $W$ be the subset of $\R^3$ defined by \[W=\left \{ \mathbf{x}=\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix}\in \R^3 \quad \middle| \quad 5x_1-2x_2+x_3=0 \right \}.\] Exhibit a $1\times 3$ matrix $A$ such that $W=\calN(A)$, the null space of $A$. Conclude that the subset $W$ is a subspace of $\R^3$. Solution. Note that the defining equation $5x_1-2x_2+x_3=0$ can be written as \[\begin{bmatrix} 5 & -2 & 1 \\ \end{bmatrix}\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix}=0.\] Hence if we put $A=\begin{bmatrix} 5 & -2 & 1 \\ \end{bmatrix}$, then the defining equation becomes \[A\mathbf{x}=0.\] Therefore, we have \[W=\{\mathbf{x} \in \R^3 \mid A\mathbf{x}=0\},\] which is exactly the null space of the $1\times 3$ matrix $A$. Hence we have proved that $W=\calN(A)$. In general, the null space of an $m\times n$ matrix is a subspace of the vector space $\R^n$. (See the post The null space (the kernel) of a matrix is a subspace of $\R^n$.) Since we showed that $W$ is the null space of the $1\times 3$ matrix $A$, we conclude that $W$ is a subspace of $\R^3$.
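A quick computational check of this identification, as a SymPy sketch of my own:

from sympy import Matrix

A = Matrix([[5, -2, 1]])

# A basis of the null space of A; W is spanned by these two vectors.
basis = A.nullspace()
print(basis)

# Each basis vector satisfies the defining equation 5*x1 - 2*x2 + x3 = 0.
assert all((A * v)[0] == 0 for v in basis)
print(len(basis))   # dim W = 3 - rank(A) = 2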
Search Now showing items 1-10 of 50 Kaon femtoscopy in Pb-Pb collisions at $\sqrt{s_{\rm{NN}}}$ = 2.76 TeV (Elsevier, 2017-12-21) We present the results of three-dimensional femtoscopic analyses for charged and neutral kaons recorded by ALICE in Pb-Pb collisions at $\sqrt{s_{\rm{NN}}}$ = 2.76 TeV. Femtoscopy is used to measure the space-time ... J/$\psi$ elliptic flow in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 TeV (American Physical Society, 2017-12-15) We report a precise measurement of the J/$\psi$ elliptic flow in Pb-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV with the ALICE detector at the LHC. The J/$\psi$ mesons are reconstructed at mid-rapidity ($|y| < 0.9$) ... The ALICE Transition Radiation Detector: Construction, operation, and performance (Elsevier, 2018-02) The Transition Radiation Detector (TRD) was designed and built to enhance the capabilities of the ALICE detector at the Large Hadron Collider (LHC). While aimed at providing electron identification and triggering, the TRD ... Energy dependence of forward-rapidity J/$\psi$ and $\psi(2S)$ production in pp collisions at the LHC (Springer, 2017-06) We present ALICE results on transverse momentum ($p_{\rm T}$) and rapidity ($y$) differential production cross sections, mean transverse momentum and mean transverse momentum square of inclusive J/$\psi$ and $\psi(2S)$ at ... Production of $\pi^0$ and $\eta$ mesons up to high transverse momentum in pp collisions at 2.76 TeV (Springer, 2017-05) The invariant differential cross sections for inclusive $\pi^{0}$ and $\eta$ mesons at midrapidity were measured in pp collisions at $\sqrt{s}=2.76$ TeV for transverse momenta $0.4<p_{\rm T}<40$ GeV/$c$ and $0.6<p_{\rm ... Linear and non-linear flow modes in Pb-Pb collisions at $\sqrt{s_{\rm NN}} =$ 2.76 TeV (Elsevier, 2017-10) The second and the third order anisotropic flow, $V_{2}$ and $V_3$, are mostly determined by the corresponding initial spatial anisotropy coefficients, $\varepsilon_{2}$ and $\varepsilon_{3}$, in the initial density ... Constraining the magnitude of the Chiral Magnetic Effect with Event Shape Engineering in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV (Elsevier, 2018-02) In ultrarelativistic heavy-ion collisions, the event-by-event variation of the elliptic flow $v_2$ reflects fluctuations in the shape of the initial state of the system. This allows to select events with the same centrality ... Production of muons from heavy-flavour hadron decays in p-Pb collisions at $\sqrt{s_{\rm NN}}=5.02$ TeV (Elsevier, 2017-07) The production of muons from heavy-flavour hadron decays in p-Pb collisions at $\sqrt{s_{\rm NN}}=5.02$ TeV was studied for $2<p_{\rm T}<16$ GeV/$c$ with the ALICE detector at the CERN LHC. The measurement was performed ... First measurement of jet mass in Pb–Pb and p–Pb collisions at the LHC (Elsevier, 2018-01) This letter presents the first measurement of jet mass in Pb-Pb and p-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV and 5.02 TeV, respectively. Both the jet energy and the jet mass are expected to be sensitive to jet ... Measurement of deuteron spectra and elliptic flow in Pb–Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV at the LHC (Springer, 2017-10) The transverse momentum ($p_{\rm T}$) spectra and elliptic flow coefficient ($v_2$) of deuterons and anti-deuterons at mid-rapidity ($|y|<0.5$) are measured with the ALICE detector at the LHC in Pb-Pb collisions at ...
Application of Field Extension to Linear Combination Problem 335 Consider the cubic polynomial $f(x)=x^3-x+1$ in $\Q[x]$. Let $\alpha$ be any real root of $f(x)$. Then prove that $\sqrt{2}$ cannot be written as a linear combination of $1, \alpha, \alpha^2$ with coefficients in $\Q$. Proof. We first prove that the polynomial $f(x)=x^3-x+1$ is irreducible over $\Q$. Since $f(x)$ is a monic cubic polynomial, the only possible rational roots are $\pm 1$, the divisors of the constant term $1$. As we have $f(1)=f(-1)=1\neq 0$, the polynomial has no rational roots. Since a cubic polynomial is reducible over $\Q$ only if it has a linear factor, and hence a rational root, it follows that $f(x)$ is irreducible over $\Q$. Then $f(x)$ is the minimal polynomial of $\alpha$ over $\Q$, and hence the field extension $\Q(\alpha)$ over $\Q$ has degree $3$. If $\sqrt{2}$ is a linear combination of $1, \alpha, \alpha^2$, then it follows that $\sqrt{2}\in \Q(\alpha)$. Then $\Q(\sqrt{2})$ is a subfield of $\Q(\alpha)$, and the degree of the field extension is \begin{align*} 3=[\Q(\alpha): \Q]=[\Q(\alpha): \Q(\sqrt{2})] [\Q(\sqrt{2}): \Q]. \end{align*} Since $[\Q(\sqrt{2}): \Q]=2$ does not divide $3$, this is impossible. Thus, $\sqrt{2}$ is not a linear combination of $1, \alpha, \alpha^2$.
I was hoping someone might be able to offer some advice on placing knots for restricted cubic splines. In brief, I am helping a team of clinicians who want to predict which patients require a specialist team to meet them when they arrive (via ambulance) at the hospital. The outcome is severe trauma or not (assessed post-treatment but based on patient condition on arrival; it is a binary outcome) and the independent variables are measures the paramedics take at the scene - blood pressure, pulse, etc. It is clear based on contextual knowledge that a non-linear relationship with the outcome is expected; e.g., extremely low or high blood pressure would be associated with a worse outcome. As a result, I am using restricted cubic splines; however, I have noticed that for some of the independent variables, there is a small group of patients (approx. 20-30 out of 3000) who score very differently (and not always the same people) and may be exerting large influence on the splines. For example, for oxygen saturation (O2 sats), most people score 80% or above, but there is a group who score in the region 30-70. The expectation would be that as O2 sats increase, the probability of severe trauma would reduce. This is indeed what the model returns if I ignore those with O2 sats less than 80 (which I’d prefer not to do) - there is a reasonably rapid decrease in probability as O2 sats increase to ~90% before slowing down - but including them means the probability of severe trauma flattens out (possibly slightly decreases) below ~90% O2 sats. Is there a better approach than simply trying other knot positions (e.g., moving the first knot to 70-80% or adding another knot around this point) and reporting that this is what was done based on the rationale above? I.e., is it justifiable to allow contextual knowledge and some exploration to dictate the approach?

Frank also recommends using Akaike’s information criterion (AIC) to help choose the number of knots (k). As you have noticed, splines are still sensitive to data scarcity. In my experience, increasing the number of knots - rather than the placements - tends to improve the fit. I’m currently working on a Shiny app that allows you to toy with non-linearity (of the relationship with Y) and the number of knots. It might help you visualize the impact of k a little better? Feedback appreciated!

Restricted cubic splines often require pre-specification of knots, and the important issue is the choice of the number of knots: the most commonly used choices are 5 knots (= 4 df), 4 knots (= 3 df), and 3 knots (= 2 df). Pre-specification of the location of knots is not easy, but fortunately the location of the knots is not crucial in model fitting. The number of knots is more important than their location! So, practically, we may pre-specify the number of knots as follows: "When the sample size is large (e.g., n ≥ 100 with a continuous uncensored response variable), k = 5 is a good choice. Small samples (< 30, say) may require the use of k = 3." Although five knots are sufficient to capture many non-linear patterns, it may not be wise to include 5 knots for each continuous predictor in a multivariable model. Too much flexibility would lead to overfitting. One strategy is to define a priori how much flexibility will be allowed for each predictor, i.e. how many df will be spent. In smaller data sets, we may for example choose to use only linear terms or splines with 3 knots (2 df), especially if no strong prior information suggests that a nonlinear function is necessary. 
Alternatively, we might examine different RCS. Ref: Harrell RMS book and Steyerberg CPM book Excellent answers. A few other thoughts: Include a graph showing the fits with confidence bands, and spike histograms along the x-axis to show the data density for O_2. Aside from the fact that the number of knots is the most important aspect of restricted cubic spline modeling, we tend to put knots where we can afford to put knots. Hence the use of quantiles for default knot locations. The spike histogram mentioned above will help on this count. Most analysts would just force linearity on the predictor, so you’re way ahead of the game. Thank you all for your input! It’s greatly appreciated. I should have given a little more information on what I had attempted so far. I originally planned to use 5 knots based on the locations given in Frank’s book; this results in knots at 88, 95, 97, 99 and 100, though 100 is the maximum score and so was not included - hence 4 knots. I had also looked at using existing clinical thresholds for placing the knots (92, 94, 96). Whilst this latter approach is not sufficient to capture what’s happening in the upper end of the data (as per the default knot placings, 50% of the data is 95+), it has more expected behaviour in terms of what happens below the last knot: the prob of outcome decreases as O2 sats increase, unlike with the default knots. It seems the placing of the last knot is vital - with the first knot below about 89 the line is increasing, and with the first knot above 89 it is decreasing. I think this is a feature of this slightly strange data where 95% of the people occupy 12% of the scale and the remaining 5% are spread across much of the other 88%. The point re confidence bands is key - the bands are very wide when O2 is below approx. 80 and the changing gradient of the line below the last knot (depending on where it is placed) is a reflection of this. I could be over-thinking it! I believe I should stick with the default knot locations and accept the counter-intuitive findings are a result of uncertainty (reflected in the confidence bands) rather than fudge it so the line looks more like what we expect (and then possibly lend an air of unfounded certainty). My hunch, if your effect sample size is fairly large, is to put knots at 82 88 95 97 but the suggested graphs would help in the decision. It would help to see some of your graphs and how knot placement influence the curves. The first plot is using the default knots (I used 5 knots, but two knots are the same based on the default locations, so only 4 are actually used): The second plot uses the current clinical thresholds (3 knots): This demonstrates something more like we’d expect for O2 sats below 90 (only in the sense that prob of outcome decreases - whether it decreases at a realistic rate is another matter), though clearly misses something the default knots approach picks up closer to 100. Thank you for sharing the figures. In my opinion difficult to make much of the “squiggle” you observe when you use 5 knots. Even with perfectly normally distributed predictors, using many knots can lead to this phenomenon (7 knots in this example): Same data with 3 knots: Overall it might still lead to a model that is well fitted and yield better predictions than without the splines. Given the terribly skewed nature of the data, I’m wondering if data transformation (log?) might be useful here? @f2harrell Very interesting. You have strong evidence for non-monotonicity with a nadir at about an O2 of 0.97. 
I’d be interested in seeing a loess fit, and AIC values for the various knot choices. Having worse results for an O2 sat near 100% smells of measurement error in that part of the scale, or sicker patients getting external O2 support to prop up the O2 sat. I’ve encountered this before.

Setting knots for splines is difficult. If the sample size is reasonably small (under roughly 2,000, though it depends on the model), I’d recommend Gaussian Processes. It’s a great way to fit a non-linear function while keeping interpretability. I’ve had success with this with clinicians in the past - specifically, in cardiology, estimating a non-linear risk function where the non-linear predictor is blood pressure. The relationship between BP and death is obviously non-linear; we know this from clinical experience. The best software for this so far is either GPstuff or GPy. I’d recommend a logistic linear model: logit(p) = \beta_0 + x_1 \beta_1 + x_2 \beta_2 + \ldots + \epsilon where \epsilon \sim N(0, 1), for which you specify a non-linear prior on each parameter where you expect a non-linear relationship. So: \beta_{\text{non-linear}} \sim GP\big(k(x,x') \mid \theta\big), where for smoothness we select a radial basis function (synonymous with exponentiated quadratic or Gaussian kernel). A relevant reference is Gaussian Process Based Approaches for Survival Analysis by Alan Saul. For many applications GPs are not applicable, but for this application it works well. If you’re having issues implementing it, feel free to shoot me an email: [email protected] and I can code it up quickly.

Do you have information on clinical diagnoses of these people? Either pre-existing or due to whatever brought them into hospital? There are conditions that would mean their O2 sats don’t follow the same patterns as the average person. COPD would be one common example - folks with COPD typically have much lower O2 sats than the average person of the same gender/age.

My limited experience with Gaussian processes is that they fit no better than splines and take much, much more computation time. For this particular application I’d compare with nonparametric regression a la loess.

There is some information on the cause of needing admission (it’s fairly crudely divided into 6 categories: fall from less than 2m, more than 2m, vehicle collision, etc.) but nothing on their clinical history.

Ah right, no worries - I thought it worth checking.

My concern is that you are facing significant measurement errors for lower values of SO2… At the lower end of the tail you have both very few subjects and likely very large measurement errors. A more reliable measurement would be the O2 arterial partial pressure (pO2), unfortunately hard to assess on site.

This is a situation where (I imagine) you’re likely to have: a number of numeric predictors with potentially dramatic effects at either end when you get out of the “normal range” (in the sense of a lab result being “normal”), lots of holes in the data, and potentially interactions every-which-way. I’d want to run the data through a Random Forest model, just to see what a totally flexible model that has no moral scruples, and is robust to missing data, would do. Might not be what you’d go with ultimately, but it’d be a good starting benchmark IMO. Is this intended to replace triage-by-phone/trauma-team-activation for incoming patients? That’d be a usage case where the validity testing would presumably have to be rigorous^2.
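To make the knot discussion above concrete, here is a minimal Python sketch (my own addition; the thread itself leans on R's rms). It builds Harrell-style restricted cubic spline basis columns at pre-specified knots and fits a logistic model; the simulated data, the knot list, and the statsmodels call are illustrative assumptions, and the basis is normalized by (t_k - t_1)^2, one common convention.

import numpy as np
import statsmodels.api as sm

def rcs_basis(x, knots):
    # restricted cubic spline basis (truncated power form):
    # returns columns [x, s_1, ..., s_{k-2}] for k knots
    x = np.asarray(x, dtype=float)
    t = np.asarray(knots, dtype=float)
    k = len(t)
    def pos3(u):                      # (u)_+^3
        return np.where(u > 0, u, 0.0) ** 3
    norm = (t[-1] - t[0]) ** 2        # scaling convention
    cols = [x]
    for j in range(k - 2):
        s = (pos3(x - t[j])
             - pos3(x - t[-2]) * (t[-1] - t[j]) / (t[-1] - t[-2])
             + pos3(x - t[-1]) * (t[-2] - t[j]) / (t[-1] - t[-2]))
        cols.append(s / norm)
    return np.column_stack(cols)

# hypothetical example: O2 sats vs a binary severe-trauma outcome
rng = np.random.default_rng(0)
o2 = np.clip(rng.normal(96, 4, 3000), 30, 100)
y = rng.binomial(1, 1 / (1 + np.exp(0.5 * (o2 - 92))))   # made-up risk curve

knots = [82, 88, 95, 97]              # pre-specified, as suggested in the thread
X = sm.add_constant(rcs_basis(o2, knots))
fit = sm.Logit(y, X).fit(disp=0)
print(fit.params)                      # inspect, then plot predicted risk vs o2

Plotting the fitted probability against O2 sats, together with the spike histogram suggested above, then shows directly how moving the lowest knot changes the curve below ~90%.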
I have an LP problem with n decision variables, \(x_1, \ldots, x_n\), and one of the constraints is \[\max(x_1, \ldots, x_n) = c\] where \(c\) is a constant and \(\max\) is the max function (i.e., it returns the highest of the parameters \(x_1, \ldots,x_n\)). Is there a way to cope with this constraint, transforming it into a set of linear constraints and/or enriching the objective function with some penalty? Thanks.

The origin of the constraint is the following. I have an objective function \[f(y_1, y_2, \ldots, y_m)\] where \[y_i \in \{0,1\}.\] I want exactly \(c\) out of the \(m\) variables to take the value 1, so one constraint is \[\sum_{i=1}^m y_i = c.\] Additionally, I want these \(c\) variables to be consecutive, e.g., \(y_1=1, y_3=1, \ldots, y_{c+1}=1\) is not a valid solution. Thus I was thinking the following: let \[x_1 = y_1 + y_2 +\ldots+ y_c\] \[x_2 = y_2 + y_3 +\ldots+ y_{c+1}\] \[\ldots\] \[x_n = y_{m-c+1} + y_{m-c+2} +\ldots+ y_{m}\] For a valid solution, there must exist exactly one variable \(x_j\) such that \(x_j=c\), while all the other variables are necessarily \(<c\). This is the origin of the constraint \[\max(x_1,\ldots,x_n)=c.\]

I believe that this could and should be modeled without the x variables altogether. If you would like the ones to appear consecutively in the y variables, you should say so: forbid a "hole" (a 0) in between two ones: \[y_i + y_{i+2} \leq 1 + y_{i+1}.\] Hope this helps.

Think of the constraint as a disjunction. You want \(x_i \leq c, \forall i\) (the easy part) and you want \(x_i = c\) for some \(i\) (the hard part). You can use an auxiliary binary variable and big-\(M\) to enforce or relax a lower bound. Then you want to enforce just one lower bound. Does that help?

In terms of the original problem (the \(y\) version), why not just introduce binary variables \(z_i, i\in \{1,\dots,m-c+1\}\), where \(z_i=1\) if the sequence of unit-value \(y\) variables starts at \(i\). The new variables form an SOS1 (\(z_1+\dots+z_{m-c+1}=1\)), and \(y\) (which can now be declared real rather than binary) is given by \(y_i=z_{i-c+1}+\dots+z_i\), with appropriate adjustments to the starting index to prevent it from going negative.
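To make the big-M suggestion concrete, here is one explicit linearization (a sketch; the \(w_i\) are new binary variables and \(M\) is any constant large enough that \(x_i \ge c - M\) holds for every feasible \(x_i\)): \[\begin{aligned} & x_i \le c && i=1,\dots,n,\\ & x_i \ge c - M(1-w_i) && i=1,\dots,n,\\ & \sum_{i=1}^{n} w_i = 1, \qquad w_i \in \{0,1\}. \end{aligned}\] If \(w_i=1\), the lower bound forces \(x_i \ge c\), which together with \(x_i \le c\) gives \(x_i = c\); if \(w_i=0\), the bound is vacuous. Since \(\sum_i w_i = 1\), at least one \(x_i\) attains \(c\), so \(\max(x_1,\ldots,x_n) = c\).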
Let $n$ be a positive integer.Since $G/N$ is a cyclic group, let $g$ be a generator of $G/N$.So we have $G/N=\langle g\rangle$.Then $\langle g^n \rangle$ is a subgroup of $G/N$ of index $n$. By the fourth isomorphism theorem, every subgroup of $G/N$ is of the form $H/N$ for some subgroup $H$ of $G$ containing $N$.Thus we have $\langle g^n \rangle=H/N$ for some subgroup $H$ in $G$ containing $N$. Since $G/N$ is cyclic, it is in particular abelian.Thus $H/N$ is a normal subgroup of $G/N$. The fourth isomorphism theorem also implies that $H$ is a normal subgroup of $G$, and we have\begin{align*}[G:H]=[G/N : H/N]=n.\end{align*}Hence $H$ is a normal subgroup of $G$ of index $n$. Any Finite Group Has a Composition SeriesLet $G$ be a finite group. Then show that $G$ has a composition series.Proof.We prove the statement by induction on the order $|G|=n$ of the finite group.When $n=1$, this is trivial.Suppose that any finite group of order less than $n$ has a composition […] Normal Subgroups, Isomorphic Quotients, But Not IsomorphicLet $G$ be a group. Suppose that $H_1, H_2, N_1, N_2$ are all normal subgroup of $G$, $H_1 \lhd N_2$, and $H_2 \lhd N_2$.Suppose also that $N_1/H_1$ is isomorphic to $N_2/H_2$. Then prove or disprove that $N_1$ is isomorphic to $N_2$.Proof.We give a […] Isomorphism Criterion of Semidirect Product of GroupsLet $A$, $B$ be groups. Let $\phi:B \to \Aut(A)$ be a group homomorphism.The semidirect product $A \rtimes_{\phi} B$ with respect to $\phi$ is a group whose underlying set is $A \times B$ with group operation\[(a_1, b_1)\cdot (a_2, b_2)=(a_1\phi(b_1)(a_2), b_1b_2),\]where $a_i […] A Simple Abelian Group if and only if the Order is a Prime NumberLet $G$ be a group. (Do not assume that $G$ is a finite group.)Prove that $G$ is a simple abelian group if and only if the order of $G$ is a prime number.Definition.A group $G$ is called simple if $G$ is a nontrivial group and the only normal subgroups of $G$ is […] If the Quotient Ring is a Field, then the Ideal is MaximalLet $R$ be a ring with unit $1\neq 0$.Prove that if $M$ is an ideal of $R$ such that $R/M$ is a field, then $M$ is a maximal ideal of $R$.(Do not assume that the ring $R$ is commutative.)Proof.Let $I$ be an ideal of $R$ such that\[M \subset I \subset […] Group of Order 18 is SolvableLet $G$ be a finite group of order $18$.Show that the group $G$ is solvable.DefinitionRecall that a group $G$ is said to be solvable if $G$ has a subnormal series\[\{e\}=G_0 \triangleleft G_1 \triangleleft G_2 \triangleleft \cdots \triangleleft G_n=G\]such […] A Group of Order $20$ is SolvableProve that a group of order $20$ is solvable.Hint.Show that a group of order $20$ has a unique normal $5$-Sylow subgroup by Sylow's theorem.See the post summary of Sylow’s Theorem to review Sylow's theorem.Proof.Let $G$ be a group of order $20$. The […]
I have an output signal $y$ which is an input signal $x$ convolved ($\star$) with an impulse response function $h$, with some added noise $n$: $$y(t) = h(t) \star x(t) + n(t)$$ I know the input signal $x$ and the output signal $y$ and would like to calculate $h$, the impulse response function. I found that deconvolution is not as straightforward as convolution because the spectrum of the input signal contains zeros, and then the division in the frequency domain would not be defined. Looking around the internet for ways to "deconvolve" I found two methods: Wiener deconvolution and regularized deconvolution. The Wiener deconvolution seemed easier to understand so I wanted to try and implement it in Matlab (the Matlab function deconv gives me errors about the input signal having a zero at the first entry, and if I read the help file it only seems to work correctly for polynomials?). So per the Wikipedia explanation you want to find $g$ so that: $$\hat{x}(t) = g(t) \star y(t)$$ But then in the definition of how to calculate $G$ they use all the variables in the original equation. Also they only show how to find $x$, but probably $x$ and $h$ can be exchanged because convolution is commutative; I'm unsure about the correct length of both vectors, though. Currently they are the same length. $$G(f) = \frac{H^*(f) S(f)}{|H(f)|^2 S(f) + N(f)}$$ where: $H = {\tt fft}(h)$, $G = {\tt fft}(g)$, $S =$ power spectral density of $x$; is this ${\tt fft}(x)$? $N =$ mean power spectral density of $n$; I don't really understand what this is. My question is how do I get the impulse response function $h$ without already knowing it (as it is both in the definition of $G$ and in the original equation)? Since I know both input and output it should not be very different from finding the input with a known impulse response function. I want to do this in Matlab.
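For what it's worth, here is a minimal sketch of the idea with the roles of $x$ and $h$ swapped, so we solve for $H$ directly. It is written in Python/NumPy rather than Matlab, and the regularization constant lam stands in for the noise-to-signal term $N/S$ in the Wiener formula (a tuning assumption, not a value from the question).

import numpy as np

def estimate_impulse_response(x, y, lam=1e-3):
    # estimate h from y = h * x + n via regularized frequency-domain division;
    # lam damps frequencies where x has little energy (Wiener-style)
    n = len(x) + len(y) - 1                  # length for linear (not circular) convolution
    X = np.fft.rfft(x, n)
    Y = np.fft.rfft(y, n)
    H = np.conj(X) * Y / (np.abs(X) ** 2 + lam)
    return np.fft.irfft(H, n)

# quick self-check with a known impulse response
rng = np.random.default_rng(1)
x = rng.standard_normal(1024)
h_true = np.array([0.5, 0.3, 0.15, 0.05])
y = np.convolve(x, h_true) + 0.01 * rng.standard_normal(1024 + 3)
h_est = estimate_impulse_response(x, y)
print(np.round(h_est[:4], 3))                # should be close to h_true

The same lines translate almost verbatim to Matlab with fft, conj, abs and ifft; the key point is that because you know $x$ and $y$, you divide by the (regularized) spectrum of $x$, and $h$ never needs to be known in advance.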
N Nimai Singh Articles written in Pramana – Journal of Physics Volume 60 Issue 2 February 2003 pp 405-409 Raj Gandhi Kamales Kar S Uma Sankar Abhijit Bandyopadhyay Rahul Basu Pijushpani Bhattacharjee Biswajoy Brahmachari Debrupa Chakraborti M Chaudhury J Chaudhury Sandhya Choubey E J Chun Atri Desmukhya Anindya Datta Gautam Dutta Sukanta Dutta Anjan Giri Sourendu Gupta Srubabati Goswami Namit Mahajan H S Mani A Mukherjee Biswarup Mukhopadhyaya S N Nayak M Randhawa Subhendu Rakshit Asim K Ray Amitava Raychaudhuri D P Roy Probir Roy Suryadeep Roy Shiv Sethi G Sigl Arunansu Sil N Nimai Singh Mark Vagins Urjit Yagnik This is the report of neutrino and astroparticle physics working group at WHEPP-7. Discussions and work on CP violation in long baseline neutrino experiments, ultra high energy neutrinos, supernova neutrinos and water Cerenkov detectors are discussed. Volume 65 Issue 6 December 2005 pp 1015-1025 In this paper we briefly outline the quadrature method for estimating uncertainties in a function which depends on several variables, and apply it to estimate the numerical uncertainties in QCD-QED rescaling factors. We employ here the one-loop order in QED and three-loop order in QCD evolution equations of the fermion mass renormalisation. Our present calculation is found to be new and also reliable when compared to the earlier values employed by various authors Volume 66 Issue 2 February 2006 pp 361-375 An attempt has been made to discriminate theoretically the three possible patterns of neutrino mass models,viz., degenerate, inverted hierarchical and normal hierachical models, within the framework of Type-II see-saw formula. From detailed numerical analysis we are able to arrive at a conclusion that the inverted hierarchical model with the same CP phase (referred to as Type [IIA]), appears to be most favourable to survive in nature (and hence most stable), with the normal hierarchical model (Type [III]) and inverted hierarchical model with opposite CP phase (Type [IIB]), follow next. The degenerate models (Types [IA,IB,IC]) are found to be most unstable. The neutrino mass matrices which are obtained using the usual canonical see-saw formula (Type I), and which also give almost good predictions of neutrino masses and mixings consistent with the latest neutrino oscillation data, are re-examined in the presence of the left-handed Higgs triplet within the framework of non-canonical see-saw formula (Type II). We then estimate a parameter (the so-called discriminator) which may represent the minimum degree of suppression of the extra term arising from the presence of left-handed Higgs triplet, so as to restore the good predictions on neutrino masses and mixings already acquired in Type-I see-saw model. The neutrino mass model is said to be favourable and hence stable when its canonical see-saw term dominates over the non-canonical (perturbative) term, and this condition is used here as a criterion for discriminating neutrino mass models. Volume 69 Issue 4 October 2007 pp 533-549 Research Articles We explore a novel possibility for lowering the solar mixing angle ($\theta_{12}$) from tri-bimaximal mixings, without sacrificing the predictions of maximal atmospheric mixing angle ($\theta_{23} = 45^{\circ}$) and zero reactor angle ($\theta_{13} = 0^{\circ}$) in the inverted and normal hierarchical neutrino mass models having 2-3 symmetry. 
This can be done through the identification of a flavour twister term in the texture of the neutrino mass matrix, and the variation of such a term leads to lowering of the solar mixing angle. For the observed ranges of $\Delta m_{21}^{2}$ and $\Delta m_{23}^{2}$, we calculate the predictions on $\tan^2\theta_{12} = 0.5, 0.45, 0.35$ for different input values of the parameters in the neutrino mass matrix. We also observe a possible transition from the inverted hierarchical model having even CP parity (Type-IHA) to the inverted hierarchical model having odd CP parity (Type-IHB) in the first two mass eigenvalues, when there is a change in input values of parameters in the same mass matrix. The present work differs from the conventional approaches for the deviations from tri-bimaximal mixing, where the 2-3 symmetry is broken, leading to $\theta_{23} \neq 45^{\circ}$ and $\theta_{13} \neq 0^{\circ}$.
I programmed one. Simple, i know, but it does the job. Let me know what you guys think of it. EDIT: Download Math Tool 3.5 here

Originally Posted by janvdl: I programmed one. Simple, i know, but it does the job. Download it here. Let me know what you guys think of it.
it can't find complex roots

Originally Posted by Jhevon: it can't find complex roots
No it uses the $\displaystyle \frac{ -b \pm \sqrt{b^2 - 4ac} }{2a}$ formula. I guess only kids in lower grades might find it useful then... *sigh*

Originally Posted by janvdl: No it uses the $\displaystyle \frac{ -b \pm \sqrt{b^2 - 4ac} }{2a}$ formula. I guess only kids in lower grades might find it useful then... *sigh*
i realize that you used the quadratic formula. couldn't you alter the code to recognize complex roots though?

Originally Posted by Jhevon: i realize that you used the quadratic formula. couldn't you alter the code to recognize complex roots though?
Expand on that?

Originally Posted by janvdl: Expand on that?
as in tell the program that if it gets $\displaystyle \sqrt{-2}$ for example, it should interpret it as $\displaystyle \sqrt{2}~i$. would that be too hard? i never checked, how does it deal with double roots? that is, when what's under the square root sign is zero?

Originally Posted by Jhevon: as in tell the program that if it gets $\displaystyle \sqrt{-2}$ for example, it should interpret it as $\displaystyle \sqrt{2}~i$. would that be too hard? i never checked, how does it deal with double roots? that is, when what's under the square root sign is zero?
You started this idea, so you're going to help me put it to work

Originally Posted by janvdl: You started this idea, so you're going to help me put it to work
i don't know what programming language you are using, and chances are, even if i did, i wouldn't know it

Originally Posted by Jhevon: i don't know what programming language you are using, and chances are, even if i did, i wouldn't know it
Oh no, you're just going to do the maths...

Okay, i got it to calculate complex roots. I was wondering if i could put a link to MHF up on it? Would that be okay? EDIT: I guess everyone would like to see first. See the attached pic

Originally Posted by janvdl: Okay, i got it to calculate complex roots. I was wondering if i could put a link to MHF up on it? Would that be okay? EDIT: I guess everyone would like to see first. See the attached pic
Good job!

Originally Posted by colby2152: Good job!
Thanks. I sent a message to MathGuru. I'll have to wait and see what he thinks of it.

I like it.

Great Now here's the link chaps Download Quadratic Solver 2

Originally Posted by janvdl: Great Now here's the link chaps Download Quadratic Solver 2
The link takes forever to load. -Dan
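For anyone curious how the complex-root handling discussed above might look in code, here is a small sketch in Python (not janvdl's program, which is a compiled download): cmath.sqrt takes care of a negative discriminant, and a zero discriminant simply returns the double root twice.

import cmath

def solve_quadratic(a, b, c):
    # roots of a*x^2 + b*x + c = 0 via the quadratic formula;
    # cmath.sqrt handles a negative discriminant, giving complex roots
    if a == 0:
        raise ValueError("not a quadratic: a must be nonzero")
    root = cmath.sqrt(b * b - 4 * a * c)
    return (-b + root) / (2 * a), (-b - root) / (2 * a)

print(solve_quadratic(1, 0, 2))    # complex pair: +/- sqrt(2) j
print(solve_quadratic(1, -2, 1))   # double root: (1+0j, 1+0j)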
1. Linear combination

If a vector can be expressed as a linear combination of other vectors, they are said to be linearly dependent.

Example I. We’d like to know if one vector in Figure 6.1 depends upon the others, as follows: We have thus: Let’s factorize the right-hand side of $(2)$: Let’s proceed with the matrix elimination as we have seen in the dedicated chapter: From the matrix in echelon form, we get: $-4\lambda_3 = 12 \iff \lambda_3 = -3$. Let’s replace $\lambda_3$ in the first row: $2\lambda_2 + (-2) \cdot \underbrace{(-3)}_{\lambda_3} = 8 \iff \lambda_2 = 1$. The solution is: Therefore any vector $\vec{v_i} \in \{\vec{v_1}, \vec{v_2}, \vec{v_3}\}$ can be expressed as a linear combination of the others.

2. Generalization

$(1)$ is obviously equivalent to $(4)$: More generally, we have: If the unique solution to $(5)$ is $\lambda_1=\lambda_2=\lambda_3=0$, then none of the vectors can be expressed as a linear combination of the others. Otherwise the vectors are linearly dependent.

Recapitulation

If a vector can be expressed as a linear combination of other vectors, they are said to be linearly dependent. For example: $\left( \begin{smallmatrix} 2 \\ -1 \\ -1 \end{smallmatrix} \right) = {\small -1} \left( \begin{smallmatrix} 1 \\ -1 \\ 0 \end{smallmatrix} \right) + {\small -1} \left( \begin{smallmatrix} -3 \\ 2 \\ 1 \end{smallmatrix} \right)$. To know if one vector $\vec{v_i} \in \{\vec{v_1}, \vec{v_2}, \ldots{} ,\vec{v_n} \}$ can be expressed as a linear combination of the others, the following equation has to be solved: If that equation has as its unique solution all $\lambda_i=0$, it means that none of the vectors $\vec{v_i}$ depends upon the others. In that case, the vectors are said to be linearly independent.
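As a quick numerical check of the recapitulated example (my own addition): stack the three vectors as columns and test whether the homogeneous system $\lambda_1\vec{v_1}+\lambda_2\vec{v_2}+\lambda_3\vec{v_3}=\vec{0}$ has only the trivial solution, which happens exactly when the rank equals the number of vectors.

import numpy as np

# columns are the vectors from the recapitulation example
V = np.array([[ 2,  1, -3],
              [-1, -1,  2],
              [-1,  0,  1]])

# independent iff the only solution of V @ lam = 0 is lam = 0,
# i.e. iff rank(V) equals the number of vectors
rank = np.linalg.matrix_rank(V)
print(rank)   # 2, so the vectors are linearly dependent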
For Bayesians, probabilities are beliefs. When I say it’ll probably rain today, I’m telling you something about my personal level of confidence in rain today. I’m saying I’m more than \(50\%\) confident it’ll rain. But how can we quantify something as personal and elusive as a level of confidence? Bayesians answer this question using the same basic idea we used for utility in Chapter 12. They look at people’s willingness to risk things they care about. The more confident someone is, the more they’ll be willing to bet. So let’s use betting rates to quantify personal probabilities. I said I’m more than \(50\%\) confident it’ll rain today. But exactly how confident: \(60\%\)? \(70\%\)? Well, I’d give two-to-one odds on it raining today, and no higher. In other words, I’d accept a deal that pays \(\$1\) if it rains, and costs me \(\$2\) otherwise. But I wouldn’t risk more than \(\$2\) when I only stand to win \(\$1\). In this example I put \(2\) dollars on the table, and you put down \(1\) dollar. Whoever wins the bet keeps all \(3\) dollars. The sum of all the money on the table is called the stake. In this case the stake is \(\$2 + \$1 = \$3\). If it doesn’t rain, I’ll lose \(\$2\). To find my fair betting rate, we divide this potential loss by the stake:\[ \begin{aligned} \mbox{betting rate} &= \frac{\mbox{potential loss}}{\mbox{stake}}\\ &= \frac{\$2}{\$2 + \$1}\\ &= \frac{2}{3}. \end{aligned}\]
Figure 16.1: A bet that pays \(\$1\) if you win and costs \(\$2\) if you lose, is fair when the blue and red regions have equal size: when the probability of winning is \(2/3\).
A person’s betting rate reflects their degree of confidence. The more confident they are of winning, the more they’ll be willing to risk losing. In this example my betting rate is \(2/3\) because I’m \(2/3\) confident it will rain. That’s my personal probability: \(\p(R) = 2/3\). Notice that a bet at two-to-one odds has zero expected value given my personal probability of \(2/3\): \[ (2/3)(\$1) + (1/3)(-\$2) = 0. \] This makes sense: it’s a fair bet from my point of view, after all.
Figure 16.2: A bet that pays \(\$9\) if you win and costs \(\$1\) if you lose is fair when the probability of winning is \(1/10\).
What if I were less confident in rain, say just \(1/10\) confident? Then I’d be willing to stake much less. I’d need you to put down at least \(\$9\) before I’d put down even \(\$1\). Only then would the bet have \(0\) expected value: \[ (1/10)(\$9) + (9/10)(-\$1) = 0. \] So, for the bet to be fair in my eyes, the odds have to match my fair betting rate. Here’s the general recipe for quantifying someone’s personal probability in proposition \(A\): find a bet on \(A\) they regard as fair, write down the expected value of that bet using \(\p(A)\), set it equal to zero (that is what fair means here), and solve for \(\p(A)\). Notice how we got the same formula we started with: potential loss divided by total stake. You can memorize this formula, but personally, I prefer to apply the recipe. It shows why the formula works, and it also exposes the formula’s limitations. It helps us understand when the formula doesn’t work. Personal probabilities aren’t revealed by just any old betting rate a person will accept. They’re exposed by the person’s fair betting rates. Consider: I’d take a bet where you pay me a million dollars if it rains today, and I pay you just \(\$1\) otherwise. But that’s because I think this bet is advantageous. I don’t think this is a fair bet, which is why I’d only take one side of it. I wouldn’t take the reverse deal, where I win just \(\$1\) if it doesn’t rain and I pay you a million dollars if it does. That’s a terrible deal from my point of view! 
So you can’t just look at a bet a person is willing to accept. You have to look at a bet they’re willing to accept because they think it’s fair. Another caveat is that we’re cheating by using dollars instead of utils. When we learned about utility, we saw that utility and dollars can be quite different. Gaining a dollar and losing a dollar aren’t necessarily comparable. Especially if it’s your last dollar! So, to really measure personal probabilities accurately, we’d have to substitute utilities for dollars. Nevertheless, we’ll pretend dollars and utils are equal for simplicity. Dollars are a decent approximation of utils for many people, as long as we stick to small sums. Last but definitely not least, our method only works when the person is following the expected value formula. Setting the expected value equal to zero was the key to deriving the formula: \[ \p(A) = \frac{\mbox{potential loss}}{\mbox{stake}}. \] But we know people don’t always follow the expected value formula, that’s one of the lessons of the Allais paradox. So this way of measuring personal probabilities is limited. Sometimes we don’t have the betting rate we need in order to apply the loss/stake formula directly. But we can still figure things out indirectly, given the betting rates we do have. For example, I’m not very confident there’s intelligent life on other planets. But I’d be much more confident if we learned there was life of any kind on another planet. If NASA finds bacteria living on Mars, I’ll be much less surprised to learn there are intelligent aliens on Alpha Centauri. Exactly how confident will I be? What is \(\p(I \given L)\), my personal probability that there is intelligent life on other planets given that there’s life of some kind on other planets at all? Suppose I tell you my betting rates for \(I\) and \(L\). I deem the following bets fair: You can apply the loss/stake formula to figure \(\p(I) = 1/10\) and \(\p(L) = 4/10\). But what about \(\p(I \given L)\)? You can figure that out by starting with the definition of conditional probability: \[ \begin{aligned} \p(I \given L) &= \p(I \wedge L)/\p(L) \\ &= \p(I)/\p(L) \\ &= 1/4. \end{aligned} \] The second line in this calculation uses the fact that \(I\) is equivalent to \(I \wedge L\). If there’s intelligent life, then there must be life, by definition. So \(I \wedge L\) is redundant. We can drop the second half and replace the whole statement with just \(I\). The general strategy here is: 1) identify what betting rates you have, 2) apply the loss/stakes formula to get those personal probabilities, and then 3) apply familiar rules of probability to derive other personal probabilities. We have to be careful though. This technique only works if the subject’s betting rates follow the familiar rules of probability. If my betting rate for rain tomorrow is \(3/10\), you might expect my betting rate for no rain to be \(7/10\). But people don’t always follow the laws of probability, just as they don’t always follow the expected utility rule. The taxicab problem from Chapter 8 illustrates one way people commonly violate the rules of probability. We’ll encounter another way in the next chapter. Li thinks humans will eventually colonize Mars. More exactly, he regards the following deal as fair: if he’s right about that, you pay him \(\$3\), otherwise he’ll pay you \(\$7\). Suppose Li equates money with utility: for him, the utility of gaining \(\$3\) is \(3\), the utility of losing \(\$7\) is \(-7\), and so on. 
Li also thinks there’s an even better chance of colonization if Elon Musk is elected president of the United States. If Musk is elected, Li will regard the following deal as fair: if colonization happens you pay him \(\$3\), otherwise he pays you \(\$12\). Li thinks the chances of colonization are lower if Musk is not elected. His personal conditional probability that colonization will happen given that Musk is not elected is \(1/2\). Sam thinks the Saskatchewan Roughriders will win the next Grey Cup game. She’s confident enough that she regards the following deal as fair: if they win, you pay her \(\$3\), otherwise she’ll pay you \(\$7\). Suppose Sam equates money with utility: for her, the utility of gaining \(\$3\) is \(3\), the utility of losing \(\$7\) is \(-7\), and so on. Sam thinks the Roughriders will have an even better chance in the snow. If it snows during the game, she will regard the following deal as fair: if the Roughriders win, you pay her \(\$3\), otherwise she’ll pay you \(\$12\). Sam thinks that the Roughriders will lose their advantage if it doesn’t snow. Her personal conditional probability that the Roughriders will win if it doesn’t snow is \(1/2\). Sam thinks the Leafs have a real shot at the playoffs next year. In fact, she regards the following deal as fair: if the Leafs make the playoffs, you pay her \(\$2\), otherwise she pays you \(\$10\). Suppose Sam equates money with utility: for her, the utility of gaining \(\$2\) is \(2\), the utility of losing \(\$10\) is \(-10\), and so on. Sam also thinks the Leafs might even have a shot at winning the Stanley Cup. She’s willing to pay you \(\$1\) if they don’t win the Cup, if you agree to pay her \(\$2\) if they do. That’s a fair deal for her. Freya isn’t sure whether it will snow tomorrow. For her, a fair gamble is one where she gets \(\$10\) if it snows and she pays \(\$10\) if it doesn’t. Assume Freya equates money with utility. Here’s another gamble Freya regards as fair: she’ll check her phone to see whether tomorrow’s forecast calls for snow. If it does predict snow, she’ll pay you \(\$10\), but you have to pay her \(\$5\) if it doesn’t. After checking the forecast and seeing that it does predict snow, Freya changes her betting odds for snow tomorrow. Now she’s willing to accept as little as \(\$5\) if it snows, while still paying \(\$10\) if it doesn’t. Ben’s favourite TV show is Community. He thinks it’s so good they’ll make a movie of it. In fact, he’s so confident that he thinks the following is a fair deal: he pays you \(\$8\) if they don’t make it into a movie and you pay him \(\$1\) if they do. Assume Ben equates money with utility. Ben thinks the odds of a Community movie getting made are even higher if his favourite character, Shirly, returns to the show (she’s on leave right now). If Shirly returns, he’s willing to pay as much as \(\$17\) if the movie does not get made, in return for \(\$1\) if it does. Ben also thinks the chances of a movie go down drastically if Shirly doesn’t return. His personal conditional probability that the movie will happen without Shirly is only \(1/3\).
Higher harmonic flow coefficients of identified hadrons in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV (Springer, 2016-09): The elliptic, triangular, quadrangular and pentagonal anisotropic flow coefficients for $\pi^{\pm}$, $\mathrm{K}^{\pm}$ and p+$\overline{\mathrm{p}}$ in Pb-Pb collisions at $\sqrt{s_\mathrm{{NN}}} = 2.76$ TeV were measured ...
Solving Laplace Equation having boundary conditions

Hello, please watch this video https://youtu.be/_cPU-nf9owk and tell me whether A) $C_{n,m}=\frac {16V_0}{\pi^2 mn\cosh\Big(\sqrt{(\frac{n\pi}{a})^2+ (\frac{m\pi}{a})^2}\Big)}$ or B) $C_{n,m}=\frac {16V_0}{\pi^2 mn\cosh\Big(\sqrt{(\frac{n\pi}{a})^2+ (\frac{m\pi}{a})^2}\,\frac{a}{2}\Big)}$. Which is correct, A) or B)?

Re: Solving Laplace Equation having boundary conditions
You are kidding, right?

Quote: Originally Posted by Vinod
Re: Solving Laplace Equation having boundary conditions
Hello,

Quote: Originally Posted by Walagaster
Notice the argument of $\cosh$.

The math professor in the video put $\beta= \sqrt{(\frac{n\pi}{a})^2+(\frac{m\pi}{a})^2}\,y$ where $y=\pm\frac{a}{2}$. But I think the math professor in the video forgot to put $y$ in the argument of $\cosh$ while calculating $C_{n,m}$. Also, the math professor didn't comment on $V=0$ at the centre of the cube. Is $E=0$ there?
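Spelling out the hint about the argument of $\cosh$ (my own sketch, assuming the standard cube setup the quoted $\beta$ suggests, with the potential specified on the faces $y=\pm\frac{a}{2}$): the separated solution carries the factor $\cosh\Big(\sqrt{(\frac{n\pi}{a})^2+(\frac{m\pi}{a})^2}\;y\Big)$, so matching the boundary condition at $y=\frac{a}{2}$ fixes the coefficient through $C_{n,m}\cosh\Big(\sqrt{(\frac{n\pi}{a})^2+(\frac{m\pi}{a})^2}\,\frac{a}{2}\Big)=$ the corresponding Fourier coefficient of the boundary potential. In other words, under that setup the factor $\frac{a}{2}$ belongs inside the argument of $\cosh$, not outside it.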
A question that pops up for many DSP-ers working with IIR and FIR filters, I think, is how to look at a filter’s frequency and phase response. For many, maybe they’ve calculated filter coefficients with something like the biquad calculator on this site, or maybe they’ve used MATLAB, Octave, or Python (with the scipy library) and functions like freqz to compute and plot responses. But what if you want to code your own, perhaps to plot within a plugin written in C++? You can find methods of calculating biquads, for instance, but here we’ll discuss a general solution. Fortunately, the general solution is easier to understand than starting with an equation that may have been optimized for a specific task, such as plotting biquad response.

Plotting an impulse response

One way we could approach it is to plot the impulse response of the filter. That works for any linear, time-invariant process, and a fixed filter qualifies. One problem is that we don’t know how long the impulse response might be, for an arbitrary filter. IIR (Infinite Impulse Response) filters can have a very long impulse response, as the name implies. We can feed a 1.0 sample followed by 0.0 samples to obtain the impulse response of the filter. While we don’t know how long it will be, we could take a long impulse response, perhaps windowing it, use an FFT to convert it to the frequency domain, and get a pretty good picture. But it’s not perfect. For an FIR (Finite Impulse Response) filter, though, the results are precise. And the impulse response is equal to the coefficients themselves. So: For the FIR, we simply run the coefficients through an FFT, and take the absolute value of the complex result to get the magnitude response. (The FFT requires a power-of-2 length, so we’d need to append zeros to fill, or use a DFT. But we probably want to append zeros anyway, to get more frequency points out for our graph.)

Plotting the filter precisely

Let’s look for a more precise way to plot an arbitrary filter’s response, which might be IIR. Fortunately, if we have the filter coefficients, we have everything we need, because we have the filter’s transfer function, from which we can calculate a response for any frequency. The transfer function of an IIR filter is given by \(H(z)=\frac{a_{0}z^{0}+a_{1}z^{-1}+a_{2}z^{-2}\cdots}{b_{0}z^{0}+b_{1}z^{-1}+b_{2}z^{-2}\cdots}\) \(z^{0}\) is 1, of course, as is any value raised to the power of 0. And for normalized biquads, \(b_{0}\) is always 1, but I’ll leave it here for generality—you’ll see why soon. To translate that to an analog response, we substitute \(e^{j\omega}\) for \(z\), where \(\omega\) is 2π*freq, with freq being the normalized frequency, or frequency/samplerate: \(H(e^{j\omega})=\frac{a_{0}e^{0j\omega}+a_{1}e^{-1j\omega}+a_{2}e^{-2j\omega}\cdots}{b_{0}e^{0j\omega}+b_{1}e^{-1j\omega}+b_{2}e^{-2j\omega}\cdots}\) Again, \(e^{0j\omega}\) is simply 1.0, but left so you can see the pattern. Here it is restated using summations of an arbitrary number of poles and zeros: \(H(e^{j\omega})=\frac{\sum_{n=0}^{N}a_{n}e^{-nj\omega}}{\sum_{m=0}^{M}b_{m}e^{-mj\omega}}\) For any angular frequency, \(\omega\), we can solve \(H(e^{j\omega})\). A normalized frequency of 0.5 is half the sample rate, so we probably want to step it from 0 to 0.5—\(\omega\) from 0 to π—for however many points we want to evaluate and plot.

Coding it

From that last equation, we can see that a single FOR loop will handle the top or the bottom coefficient sets. Here, we’ll code that into a function that can evaluate either zeros (\(a\) terms) or poles (\(b\) terms). 
We’ll refer to this as our direct evaluation function, since it evaluates the coefficients directly (as opposed to evaluating an impulse response). You’ve probably noticed the j, meaning an imaginary part of a complex number—the output will be complex. That’s OK, the output of an FFT is complex too, and we know how to get magnitude and phase from it already. Some languages support complex arithmetic, and have no problem evaluating “e**(-2*j*0.5)”—either directly, or with an “exp” (exponential) function. It’s pretty easy in Python, for instance. (Something like coef[idx] * math.e**(-idx * w * 1j), as the variable idx steps through the coefficients array.) For languages that don’t, we can use Euler’s formula, \(e^{jx} = \cos(x) + j\sin(x)\); that is, the real part is the cosine of the argument, and the imaginary part is the sine of it. (Remember, j is the same as i—electrical engineers already used i to symbolize current, so they diverged from physicists and used j. Computer programmers often use j, maybe because i is a commonly used index variable.) So, we create our function, run it on the numerator coefficients for a given frequency, run it again on the denominator coefficients, and divide the two. The result will be complex—taking the absolute value gives us the magnitude response at that frequency.

Revisiting the FIR

Since we already had a precise method of looking at FIR response via the FFT/DFT, let’s compare the two methods to see how similar they are. To use our new method for the case of an FIR, we note that the denominator is simply 1, so there is no denominator to evaluate, no need for division. So: For the FIR, we simply run the coefficients through our evaluation function, and take the absolute value of the complex result to get the magnitude response. Does that sound familiar? It’s the same process we outlined using the FFT.

And back to IIR

OK, we just showed that our new evaluation function and the FFT are equivalent. (There is a difference—our evaluation function can check the response at an arbitrary frequency, whereas the FFT frequency spacing is defined by the FFT size, but we’ll set that aside for the moment. For a given frequency, the two produce identical results.) Now, if the direct evaluation function and the FFT give the same results, for the same frequency point, and the numerator and denominator are evaluated by the same function, by extension we could also get a precise evaluation by substituting an FFT process for both the numerator and denominator, and dividing the two as before. Note that we’re no longer talking about the FFT of the impulse response, but the coefficients themselves. That means we no longer have the problem of getting the response of an impulse that can ring out for an unknown time—we have a known number of coefficients to run through the FFT.

Which is better?

In general, the answer is our direct evaluation method. Why? We can decide exactly where we want to evaluate each point. That means that we can just as easily plot with log frequency as we can linear. But, there may be times that the FFT is more suitable—it is extremely efficient for power-of-2 lengths. (And don’t forget that we can use a real FFT—the upper half of the general FFT results would mirror the lower half and not be needed.)

An implementation

We probably want to evaluate \(\omega\) from 0 to π, corresponding to a range of half the sample rate. 
So, we’d call the evaluation function with the numerator coefficients and with the denominator coefficients, for every \(\omega\) that we want to know (spacing can be linear or log), and divide the two. For frequency response, we’d take the absolute value (equivalently, the square root of the sum of the squared real and imaginary parts) of each complex result to obtain magnitude, and the arc tangent of the imaginary part divided by the real part (specifically, we use the atan2 function, which takes into account quadrants). Note that this is the same conversion we use for FFT results, as you can see in my article, A gentle introduction to the FFT. \(magnitude:=\left |H \right |=abs(H)=\sqrt{H.real^2+H.imag^2}\) \(phase := atan2(H.imag,H.real)\) For now, I’ll leave you with some Python code, as it’s cleaner and leaner than a C or C++ implementation. It will make it easier to transfer to any language you might want (Python can be quite compact and elegant—I’m going for easy to understand and translate with this code). Here’s the direct evaluation routine corresponding to the summation part of the equation (you’ll also need to “import numpy” to have e available—also available in the math library, but we’ll use numpy later, so we’ll stick with numpy alone):

import numpy as np

# direct evaluation of coefficients at a given angular frequency
def coefsEval(coefs, w):
    res = 0
    idx = 0
    for x in coefs:
        res += x * np.e**(-idx * 1j * w)
        idx += 1
    return res

Again, we call this with the coefficients for each frequency of interest. Once for the numerator coefficients (the a coefficients on this website, corresponding to zeros), once for the denominator coefficients (b, for the poles—and don’t forget that if there is no b0, the case for a normalized filter, insert a 1.0 in its place). Divide the first result by the second. Use abs (or equivalent) for magnitude and atan2 for phase on the result. Repeat for every frequency of interest. Here’s a Python function that evaluates numerator and denominator coefficients at an arbitrary number of points from 0 to π radians, with equal spacing, returning arrays of magnitude (in dB) and phase (in radians, between +/- π); it also uses the standard math module for pi and atan2, so import that as well:

import math

# filter response, evaluated at numPoints from 0-pi, inclusive
def filterEval(zeros, poles, numPoints):
    magdB = np.empty(0)
    phase = np.empty(0)
    for jdx in range(0, numPoints):
        w = jdx * math.pi / (numPoints - 1)
        resZeros = coefsEval(zeros, w)
        resPoles = coefsEval(poles, w)

        # output magnitude in dB, phase in radians
        Hw = resZeros / resPoles
        mag = abs(Hw)
        if mag == 0:
            mag = 0.0000000001  # limit to -200 dB for log
        magdB = np.append(magdB, 20 * np.log10(mag))
        phase = np.append(phase, math.atan2(Hw.imag, Hw.real))
    return (magdB, phase)

Here’s an example of evaluating biquad coefficients at 64 evenly spaced frequencies from 0 Hz to half the sample rate (these coefficients are right out of the biquad calculator on this website—don’t forget to include b0 = 1.0):

zeros = [ 0.2513643668578741, 0.5027287337157482, 0.2513643668578741 ]
poles = [ 1.0, -0.17123074520885395, 0.1766882126403502 ]

(magdB, phase) = filterEval(zeros, poles, 64)

print("\nMagnitude:\n")
for x in magdB:
    print(x)
print("\nPhase:\n")
for x in phase:
    print(x)

Next up, a javascript widget to plot magnitude and phase of arbitrary filter coefficients.

Extra credit

The direct evaluation function performs a Fourier analysis at a frequency of interest. For better understanding, reconcile it with the discrete Fourier transform described in A gentle introduction to the FFT. 
In that article, I describe probing the signal with cosine and sine waves to obtain the response at a given frequency. Look again at Euler’s formula, which shows that \(e^{j\omega}\) is cosine (real part) and sine (imaginary part), which the article alludes to under the section “Getting complex”. You should understand that the direct evaluation function presented here could be used to produce a DFT (given complete evaluation of the signals at appropriately spaced frequencies). The main difference is that for this analysis, we need not do a complete and reversible transform—we need only analyze frequency response values that we want to graph.
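As a quick sanity check of the FIR equivalence discussed above (my own addition, not part of the original article), the direct evaluation at the FFT bin frequencies should match the FFT of the zero-padded coefficients; this snippet assumes the coefsEval function and the numpy/math imports from the code above:

# for an FIR, coefsEval at the FFT bin frequencies matches the FFT of the
# (zero-padded) coefficient list
fir = [0.25, 0.5, 0.25]          # a small, arbitrary FIR
N = 64                           # zero-pad length / number of FFT bins
fftMag = np.abs(np.fft.rfft(fir, N))
directMag = [abs(coefsEval(fir, 2 * math.pi * k / N)) for k in range(N // 2 + 1)]
print(np.allclose(fftMag, directMag))   # expect True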
A particle moves along the x-axis so that at time t its position is given by $x(t) = t^3-6t^2+9t+11$ during what time intervals is the particle moving to the left? so I know that we need the velocity for that and we can get that after taking the derivative but I don't know what to do after that the velocity would than be $v(t) = 3t^2-12t+9$ how could I find the intervals Fix $c\in\{0,1,\dots\}$, let $K\geq c$ be an integer, and define $z_K=K^{-\alpha}$ for some $\alpha\in(0,2)$.I believe I have numerically discovered that$$\sum_{n=0}^{K-c}\binom{K}{n}\binom{K}{n+c}z_K^{n+c/2} \sim \sum_{n=0}^K \binom{K}{n}^2 z_K^n \quad \text{ as } K\to\infty$$but cannot ... So, the whole discussion is about some polynomial $p(A)$, for $A$ an $n\times n$ matrix with entries in $\mathbf{C}$, and eigenvalues $\lambda_1,\ldots, \lambda_k$. Anyways, part (a) is talking about proving that $p(\lambda_1),\ldots, p(\lambda_k)$ are eigenvalues of $p(A)$. That's basically routine computation. No problem there. The next bit is to compute the dimension of the eigenspaces $E(p(A), p(\lambda_i))$. Seems like this bit follows from the same argument. An eigenvector for $A$ is an eigenvector for $p(A)$, so the rest seems to follow. Finally, the last part is to find the characteristic polynomial of $p(A)$. I guess this means in terms of the characteristic polynomial of $A$. Well, we do know what the eigenvalues are... The so-called Spectral Mapping Theorem tells us that the eigenvalues of $p(A)$ are exactly the $p(\lambda_i)$. Usually, by the time you start talking about complex numbers you consider the real numbers as a subset of them, since a and b are real in a + bi. But you could define it that way and call it a "standard form" like ax + by = c for linear equations :-) @Riker "a + bi where a and b are integers" Complex numbers a + bi where a and b are integers are called Gaussian integers. I was wondering If it is easier to factor in a non-ufd then it is to factor in a ufd.I can come up with arguments for that , but I also have arguments in the opposite direction.For instance : It should be easier to factor When there are more possibilities ( multiple factorizations in a non-ufd... Does anyone know if $T: V \to R^n$ is an inner product space isomorphism if $T(v) = (v)_S$, where $S$ is a basis for $V$? My book isn't saying so explicitly, but there was a theorem saying that an inner product isomorphism exists, and another theorem kind of suggesting that it should work. @TobiasKildetoft Sorry, I meant that they should be equal (accidently sent this before writing my answer. Writing it now) Isn't there this theorem saying that if $v,w \in V$ ($V$ being an inner product space), then $||v|| = ||(v)_S||$? (where the left norm is defined as the norm in $V$ and the right norm is the euclidean norm) I thought that this would somehow result from isomorphism @AlessandroCodenotti Actually, such a $f$ in fact needs to be surjective. Take any $y \in Y$; the maximal ideal of $k[Y]$ corresponding to that is $(Y_1 - y_1, \cdots, Y_n - y_n)$. The ideal corresponding to the subvariety $f^{-1}(y) \subset X$ in $k[X]$ is then nothing but $(f^* Y_1 - y_1, \cdots, f^* Y_n - y_n)$. If this is empty, weak Nullstellensatz kicks in to say that there are $g_1, \cdots, g_n \in k[X]$ such that $\sum_i (f^* Y_i - y_i)g_i = 1$. Well, better to say that $(f^* Y_1 - y_1, \cdots, f^* Y_n - y_n)$ is the trivial ideal I guess. 
Hmm, I'm stuck again O(n) acts transitively on S^(n-1) with stabilizer at a point O(n-1) For any transitive G action on a set X with stabilizer H, G/H $\cong$ X set theoretically. In this case, as the action is a smooth action by a Lie group, you can prove this set-theoretic bijection gives a diffeomorphism
Nullspace Let A be a mxn matrix. The nullspace Nul(A) is the set of vectors $\vec{x}$ such that $A\vec{x} = \vec{0}$. Example: A = $\begin{bmatrix} 1&0&1\\2&-1&1\end{bmatrix}$. Is $b = \begin{bmatrix} 2\\-2 \end{bmatrix} \in$ Nul(A)? NO! The product $Ab$ is not even defined, since $b$ has only two entries. It can therefore be seen that THE NULLSPACE LIVES IN $\Re^n$. But can we describe a nullspace? Sure! We've been doing this a while without knowing it. If we augment a matrix with the zero vector and solve down as we've been doing for some time, we find a set of vectors which describe the nullspace of the matrix A. Columnspace The set of all linear combinations of the columns of A. A = $\begin{bmatrix} 1&0&1\\2&-1&1 \end{bmatrix}$ Therefore, Col(A) = span{$\begin{bmatrix} 1\\2 \end{bmatrix}, \begin{bmatrix} 0\\-1 \end{bmatrix}, \begin{bmatrix} 1\\1 \end{bmatrix}$}. In general, we can therefore say the column space lives in $\Re^m$.
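As a quick check of the example above (a sketch of mine, not part of the original notes; scipy.linalg.null_space returns an orthonormal basis), we can compute Nul(A) numerically:

import numpy as np
from scipy.linalg import null_space

A = np.array([[1, 0, 1],
              [2, -1, 1]])

ns = null_space(A)             # orthonormal basis for Nul(A); it lives in R^3
print(ns.shape)                # (3, 1): one basis vector, proportional to (-1, -1, 1)
print(np.allclose(A @ ns, 0))  # True: A times anything in Nul(A) is the zero vector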
I know I promised a post on regression, but then I realized I only have a shallow understanding of Boosting and AdaBoost. So I biked to the nearest public library, went to the index cards, searched for ‘Boost’ and, after perusing through hundreds of self-help books, I found the greatest resource on AdaBoost: “How to Boost Your Spirit by Ada MacNally”. Nah, kidding. Such a book doesn’t exist. The following, as always, are my study notes, taken from Foundations of Machine Learning by MIT Press and Machine Learning in Action by Peter Harrington, published by Manning. The first book is heavy on math. The second book is more of a fluff book and is much simpler than the first one.

It is often difficult, for a non-trivial learning task, to directly devise an accurate algorithm satisfying the strong PAC-learning requirements that we saw in the PAC-learnable algorithm post I wrote before (link), but there can be more hope for finding simple predictors guaranteed only to perform slightly better than random. The following gives a formal definition of such weak learners. As in the PAC-learning post, we let $n$ be a number such that the computational cost of representing any element $x \in \mathcal{X}$ is at most $O(n)$ and denote by size(c) the maximal cost of the computational representation of $c \in \mathcal{C}$. Let’s define weak learning.

A concept class $\mathcal{C}$ is said to be weakly PAC-learnable if there exist an algorithm $\mathcal{A}$, a constant $\gamma > 0$, and a polynomial function $poly(., ., .)$ such that for any $\delta > 0$, for all distributions $\mathcal{D}$ on $\mathcal{X}$ and for any target concept $c \in \mathcal{C}$, the following holds for any sample size $m \geq poly(1/\delta,n,size(c))$: \[ \underset{\mathcal{S} \sim \mathcal{D}^m}{\mathbb{P}}\left[R(h_S) \leq \frac{1}{2} - \gamma \right] \geq 1 - \delta, \] where $h_S$ is the hypothesis returned by algorithm $\mathcal{A}$ when trained on sample $S$. When such an algorithm $\mathcal{A}$ exists, it is called a weak learning algorithm for $\mathcal{C}$ or a weak learner. The hypothesis returned by a weak learning algorithm is called a base classifier.

The key idea behind boosting techniques is to use a weak learning algorithm to build a strong learner, that is, an accurate PAC-learning algorithm. To do so, boosting techniques use an ensemble method: they combine different base classifiers returned by a weak learner to create more accurate predictors. But which base classifiers should be used and how should they be combined?

The algorithm takes as input a labeled sample $S=((x_1, y_1),\ldots,(x_m, y_m))$, with $(x_i, y_i) \in \mathcal{X} \times \{-1, +1\}$ for all $i \in [m]$, and maintains a distribution over the indices $\{1, \ldots, m\}$. Initially (lines 1-2), the distribution is uniform ($\mathcal{D}_1$). At each round of boosting, that is, each iteration $t \in [T]$ of the loop in lines 3-8, a new base classifier $h_t \in \mathcal{H}$ is selected that minimizes the error on the training sample weighted by the distribution $\mathcal{D}_t$: \[ h_t \in \underset{h \in \mathcal{H}}{argmin}\,\underset{i \sim \mathcal{D}_t}{\mathbb{P}} [h(x_i) \neq y_i] = \underset{h \in \mathcal{H}}{argmin}\sum_{i=1}^{m}\mathcal{D}_t(i)1_{h(x_i) \neq y_i} .\] $Z_t$ is simply a normalization factor to ensure that the weights $D_{t+1}(i)$ sum to one. The precise reason for the definition of the coefficient $\alpha_t$ will become clear later. 
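For reference (an addition on my part; the excerpt defers this, but these are the standard AdaBoost choices consistent with the observation that follows), the coefficient and the normalization factor at round $t$ are \[ \alpha_t = \frac{1}{2}\log\frac{1-\epsilon_t}{\epsilon_t}, \qquad Z_t = 2\sqrt{\epsilon_t(1-\epsilon_t)}, \] where $\epsilon_t = \sum_{i=1}^{m}\mathcal{D}_t(i)1_{h_t(x_i)\neq y_i}$ is the weighted error of $h_t$.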
For now, observe that if $\epsilon_t$, the error of the base classifier, is less than $\frac{1}{2}$, then $\frac{1-\epsilon_t}{\epsilon_t} > 1$ and $\alpha_t$ is positive. Thus, the new distribution $\mathcal{D}_{t+1}$ is defined from $\mathcal{D}_t$ by substantially increasing the weight on $i$ if point $x_i$ is incorrectly classified, and, on the contrary, decreasing it if $x_i$ is correctly classified. This has the effect of focusing more on the points incorrectly classified at the next round of boosting, less on those correctly classified by $h_t$. After $T$ rounds of boosting, the classifier returned by AdaBoost is based on the sign of the function $f$, which is a non-negative linear combination of the base classifiers $h_t$. The weight $\alpha_t$ assigned to $h_t$ in that sum is a logarithmic function of the ratio of the accuracy $1-\epsilon_t$ and error $\epsilon_t$ of $h_t$. Thus, more accurate base classifiers are assigned a larger weight in that sum. For any $t \in [T]$, we will denote by $f_t$ the linear combination of the base classifiers after $t$ rounds of boosting: $f_t = \sum_{s=1}^{t}\alpha_sh_s$. In particular, we have $f_T = f$. The distribution $\mathcal{D}_{t+1}$ can be expressed in terms of $f_t$ and the normalization factors $Z_s$, $s \in [t]$, as follows: \[ \forall i \in [m], \quad \mathcal{D}_{t+1}(i)=\frac{e^{-y_if_t(x_i)}}{ m \prod_{s=1}^{t}Z_s} .\] The AdaBoost algorithm can be generalized in several ways: – Instead of a hypothesis with minimal weighted error, $h_t$ can be more generally the base classifier returned by a weak learning algorithm trained on $D_t$; – The range of the base classifiers could be $[-1, +1]$, or more generally a bounded subset of $\mathbb{R}$. The coefficients $\alpha_t$ can then be different and may not even admit a closed form. In general, they are chosen to minimize an upper bound on the empirical error, as discussed in the next section. Of course, in that general case the hypotheses $h_t$ are not binary classifiers, but their sign could define the label and their magnitude could be interpreted as a measure of confidence. AdaBoost was originally designed to address the theoretical question of whether a weak learning algorithm could be used to derive a strong learning one. Here, we will show that it coincides in fact with a very simple algorithm, which consists of applying a general coordinate descent technique to a convex and differentiable objective function. For simplicity, in this section, we assume that the base classifier set $\mathcal{H}$ is finite, with cardinality $N$: $\mathcal{H} = \{h_1, \ldots, h_N\}$. An ensemble function $f$ such as the one returned by AdaBoost can then be written as $f = \sum_{j=1}^N\bar{\alpha}_j h_j$. Given a labeled sample $S = ((x_1, y_1), \ldots, (x_m, y_m))$, let $F$ be the objective function defined for all $\boldsymbol{\bar{\alpha}} = (\bar{\alpha}_1, \ldots, \bar{\alpha}_N) \in \mathbb{R}^N$ by: \[ F(\boldsymbol{\bar{\alpha}}) = \frac{1}{m}\sum_{i=1}^{m}e^{-y_i f(x_i)} = \frac{1}{m}\sum_{i=1}^m e^{-y_i \sum_{j=1}^{N}\bar{\alpha}_j h_j(x_i)}. \] $F$ is a convex function of $\boldsymbol{\bar{\alpha}}$ since it is a sum of convex functions, each obtained by composition of the (convex) exponential function with an affine function of $\boldsymbol{\bar{\alpha}}$. The AdaBoost algorithm takes the input dataset, the class labels, and the number of iterations $T$. This is the only parameter you need to specify in most ML libraries. Here, we briefly describe the standard practical use of AdaBoost.
An important requirement for the algorithm is the choice of the base classifiers, or that of the weak learner. The family of base classifiers typically used with AdaBoost in practice is that of decision trees, which are equivalent to hierarchical partitions of the space. Among decision trees, those of depth one, also known as stumps, are by far the most frequently used base classifiers. Boosting stumps are threshold functions associated to a single feature. Thus, a stump corresponds to a single axis-aligned partition of the space. If the data is in $\mathbb{R}^N$, we can associate a stump to each of the $N$ components. Thus, to determine the stump with the minimal weighted error at each round of boosting, the best component and the best threshold for each component must be computed. Now, let’s create a weak learner with a decision stump, and then implement the AdaBoost algorithm using it. If you’re familiar with decision trees, that’s OK, you’ll understand this part. However, if you’re not, either learn about them, or wait until I cover them tomorrow. A decision tree with only one split is a decision stump. The pseudocode to generate a simple decision stump looks like this:

Set the minError to +∞
For every feature in the dataset:
    For every step:
        For each inequality:
            Build a decision stump and test it with the weighted dataset
            If the error is less than minError:
                Set this stump as the best stump
Return the best stump

Now that we have generated a decision stump, let’s train it:

For each iteration:
    Find the best stump using buildStump()
    Add the best stump to the stump array
    Calculate α
    Calculate the new weight vector D
    Update the aggregate class estimate
    If the error rate is 0:
        Break out of the loop

Well, that is it for today! I know I keep promising one thing, and delivering another, but this time I’m really going to talk about decision trees! Semper Fudge!
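Addendum to the notes above: to make the two pseudocode blocks concrete, here is a rough, self-contained Python/NumPy sketch of a decision-stump weak learner and the AdaBoost training loop. It is my own toy rendering of the pseudocode, not code from either book; names like `build_stump` and `ada_boost_train` are made up for illustration.

```python
import numpy as np

def build_stump(X, y, D, n_steps=10):
    """Find the stump (feature, threshold, inequality) with minimal weighted error under D."""
    m, n = X.shape
    best = {"error": np.inf}
    for j in range(n):                                   # for every feature
        lo, hi = X[:, j].min(), X[:, j].max()
        for thresh in np.linspace(lo, hi, n_steps):      # for every step (candidate threshold)
            for ineq in ("lt", "gt"):                    # for each inequality direction
                pred = np.ones(m)
                if ineq == "lt":
                    pred[X[:, j] <= thresh] = -1.0
                else:
                    pred[X[:, j] > thresh] = -1.0
                err = np.sum(D * (pred != y))            # weighted error under D
                if err < best["error"]:
                    best = {"error": err, "feature": j, "thresh": thresh,
                            "ineq": ineq, "pred": pred.copy()}
    return best

def ada_boost_train(X, y, T=40):
    """AdaBoost with decision stumps; labels y must be in {-1, +1}."""
    m = X.shape[0]
    D = np.full(m, 1.0 / m)          # D_1: uniform distribution over the examples
    stumps, agg = [], np.zeros(m)    # agg accumulates f_t = sum_s alpha_s * h_s
    for t in range(T):
        stump = build_stump(X, y, D)
        eps = max(stump["error"], 1e-16)
        alpha = 0.5 * np.log((1.0 - eps) / eps)          # alpha_t
        stump["alpha"] = alpha
        stumps.append(stump)
        D *= np.exp(-alpha * y * stump["pred"])          # increase weight on misclassified points
        D /= D.sum()                                     # normalize (the Z_t factor)
        agg += alpha * stump["pred"]
        if np.mean(np.sign(agg) != y) == 0.0:            # aggregate training error 0: stop early
            break
    return stumps
```

On a toy dataset, `ada_boost_train(X, y)` returns the list of weighted stumps; prediction is the sign of the aggregated score, which is exactly the sign-of-$f$ rule described in the notes.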
I’m extremely agitated today. I dunno why. Maybe because there was some convulsion in the peaceful tidings of the house I live in, or the fact that I’m kinda hungry at the moment. Anyways, I don’t have time for chitchat. Let’s get to the studying. The following is taken from Foundations of Machine Learning by Mohri, Rostamizadeh, and Talwalkar. Support Vector Machines are among the most theoretically well motivated and practically effective classification algorithms in modern machine learning. Consider an input space $\mathcal{X}$ that is a subset of $\mathbb{R}^N$ with $N \geq 1$, and the output or target space $\mathcal{Y}=\{-1, +1\}$, and let $f : \mathcal{X} \rightarrow \mathcal{Y} $ be the target function. Given a hypothesis set $\mathcal{H}$ of functions mapping $\mathcal{X}$ to $\mathcal{Y}$, the binary classification task is formulated as follows: The learner receives a training sample $S$ of size $m$ drawn i.i.d. from $\mathcal{X}$ according to some unknown distribution $\mathcal{D}$, $S = ((x_1, y_1), \ldots, (x_m, y_m)) \in (\mathcal{X}\times\mathcal{Y})^m$, with $y_i = f(x_i) $ for all $i \in [m]$. The problem consists of determining a hypothesis $ h \in \mathcal{H}$, a binary classifier, with small generalization error, the probability that the hypothesis disagrees with the target function: \[ R_{\mathcal{D}}(h) = \underset{x\sim\mathcal{D}}{\mathbb{P}} [h(x) \neq f(x)]. \] Different hypothesis sets $\mathcal{H}$ can be selected for this task. Hypothesis sets with smaller complexity provide better learning guarantees, everything else being equal. A natural hypothesis set with relatively small complexity is that of linear classifiers, or hyperplanes, which can be defined as follows: \[ \mathcal{H}= \{x \rightarrow sign(w.x+b) : w \in \mathbb{R}^N, b \in \mathbb{R}\} \] The learning problem is then referred to as a linear classification problem. The general equation of a hyperplane in $\mathbb{R}^N$ is $w.x+b=0$, where $w\in\mathbb{R}^N$ is a non-zero vector normal to the hyperplane and $b\in\mathbb{R}$ is a scalar. A hypothesis of the form $x\rightarrow sign(w.x+b)$ thus labels positively all points falling on one side of the hyperplane $w.x+b=0$ and negatively all others. From now until we say so, we’ll assume that the training sample $S$ can be linearly separated, that is, we assume the existence of a hyperplane that perfectly separates the training sample into two populations of positively and negatively labeled points, as illustrated by the left panel of the figure below. This is equivalent to the existence of $ (\boldsymbol{w}, b) \in (\mathbb{R}^N \setminus \{\boldsymbol{0}\}) \times \mathbb{R}$ such that: \[ \forall i \in [m], \quad y_i(\boldsymbol{w}.x_i + b) \geq 0 \] But, as you can see above, there are then infinitely many such separating hyperplanes. Which hyperplane should a learning algorithm select? The definition of the SVM solution is based on the notion of geometric margin. Let’s define what we just came up with: The geometric margin $\rho_h(x)$ of a linear classifier $h: x \rightarrow \boldsymbol{w.x} + b $ at a point $x$ is its Euclidean distance to the hyperplane $\boldsymbol{w.x}+b=0$: \[ \rho_h(x) = \frac{|w.x+b|}{||w||_2} \] The geometric margin $\rho_h$ of a linear classifier $h$ for a sample $S = (x_1, …, x_m) $ is the minimum geometric margin over the points in the sample, $\rho_h = min_{i\in[m]} \rho_h(x_i)$, that is, the distance from the hyperplane defining $h$ to the closest sample points. So what is the solution?
It is the separating hyperplane with the maximum geometric margin, which is thus known as the maximum-margin hyperplane. The right panel of the figure above illustrates the maximum-margin hyperplane returned by the SVM algorithm in the separable case. We will present later in this chapter a theory that provides a strong justification for this solution. We can observe already, however, that the SVM solution can also be viewed as the safest choice in the following sense: a test point is classified correctly by a separating hyperplane with geometric margin $\rho$ even when it falls within a distance $\rho$ of the training samples sharing the same label; for the SVM solution, $\rho$ is the maximum geometric margin and thus the safest value. We now derive the equations and optimization problem that define the SVM solution. By definition of the geometric margin, the maximum margin $\rho$ of a separating hyperplane is given by: \[ \rho = \underset{w,b : y_i(w.x_i+b) \geq 0}{max}\,\underset{i\in[m]}{min}\frac{|w.x_i+b|}{||w||} = \underset{w,b}{max}\,\underset{i\in[m]}{min}\frac{y_i(w.x_i+b)}{||w||} \] The second equality follows from the fact that, since the sample is linearly separable, for the maximizing pair $(w, b)$, $y_i(w.x_i+b)$ must be non-negative for all $i\in[m]$. Now, observe that the last expression is invariant to multiplication of $(w, b)$ by a positive scalar. Thus, we can restrict ourselves to pairs $(\boldsymbol{w},b)$ scaled such that $min_{i\in[m]}y_i(\boldsymbol{w}.x_i+b) = 1$: \[ \rho = \underset{min_{i\in[m]}y_i(w.x_i+b)=1}{max}\frac{1}{||w||} = \underset{\forall i \in[m],\, y_i(w.x_i+b) \geq 1}{max}\frac{1}{||w||} \] The figure below illustrates the solution $(w, b)$ of the maximization we just formalized. In addition to the maximum-margin hyperplane, it also shows the marginal hyperplanes, which are the hyperplanes parallel to the separating hyperplane and passing through the closest points on the negative or positive sides. Since maximizing $1/||w||$ is equivalent to minimizing $\frac{1}{2}||w||^2$, in view of the equation above, the pair $(\boldsymbol{w}, b)$ returned by the SVM in the separable case is the solution of the following convex optimization problem: \[ \underset{w, b}{min}\frac{1}{2}||w||^2 \]\[ \text{subject to}: y_i(\boldsymbol{w}.x_i+b) \geq 1, \forall i \in[m] \] Since the objective function is quadratic and the constraints are affine, the optimization problem above is in fact a specific instance of quadratic programming (QP), a family of problems extensively studied in optimization. A variety of commercial and open-source solvers are available for solving convex QP problems. Additionally, motivated by the empirical success of SVMs along with their rich theoretical underpinnings, specialized methods have been developed to more efficiently solve this particular convex QP problem, notably block coordinate descent algorithms with blocks of just two coordinates. So what are support vectors? Looking at the optimization problem above, we note that the constraints are affine. We introduce Lagrange variables $\alpha_i \geq 0, i\in[m]$, associated to the $m$ constraints, and denote by $\boldsymbol{\alpha}$ the vector $(\alpha_1, \ldots, \alpha_m)^T$.
The Lagrangian can then be defined, for all $\boldsymbol{w}\in\mathbb{R}^N$, $b\in\mathbb{R}$, and $\boldsymbol{\alpha}\in\mathbb{R}_+^m$, by: \[ \mathcal{L}(\boldsymbol{w},b,\boldsymbol{\alpha}) = \frac{1}{2}||w||^2 - \sum_{i = 1}^{m}\alpha_i[y_i(w.x_i+b) -1] \] Support vectors fully define the maximum-margin hyperplane or SVM solution, which justifies the name of the algorithm. By definition, vectors not lying on the marginal hyperplanes do not affect the definition of these hyperplanes; in their absence, the solution to the SVM problem remains unchanged. Note that while the solution to the SVM problem is unique, the support vectors are not. In dimension $N$, $N+1$ points are sufficient to define a hyperplane. Thus, when more than $N+1$ points lie on a marginal hyperplane, different choices are possible for the $N+1$ support vectors. But the points in the space are not always separable. In most practical settings, the training data is not linearly separable, which implies that for any hyperplane $\boldsymbol{w.x}+b=0$, there exists $x_i \in S$ such that: \[ y_i[\boldsymbol{w.x_i}+b] \ngeq 1 \] Thus, the constraints imposed in the linearly separable case cannot all hold simultaneously. However, a relaxed version of these constraints can indeed hold, that is, for each $i\in[m]$, there exists $\xi_i \geq 0$ such that: \[ y_i[\boldsymbol{w.x_i}+b] \geq 1-\xi_i \] The variables $\xi_i$ are known as slack variables and are commonly used in optimization to define relaxed versions of constraints. Here, a slack variable $\xi_i$ measures the distance by which vector $x_i$ violates the desired inequality, $y_i(\boldsymbol{w.x_i} + b) \geq 1$. The figure illustrates this situation: for the marginal hyperplane $y_i(w.x_i+b) = 1$, a vector $x_i$ with $\xi_i > 0$ can be viewed as an outlier; to not be considered an outlier, $x_i$ must be positioned on the correct side of the appropriate marginal hyperplane. Here’s the formula we use to optimize the non-separable case: \[ \underset{w, b, \xi}{min} \frac{1}{2}||w||^2 + C\sum_{i=1}^{m}\xi_i^p \]\[ \text{subject to} \quad y_i(w.x_i+b) \geq 1-\xi_i \wedge \xi_i \geq 0, i\in[m] \] Okay! Alright! I think I understand it now. That’s enough classification for today. I’m going to study something FUN next. Although I’m a bit drowsy… No matter! I have some energy drinks at home. Plus I have some methamphetamine which I have acquired to boost my energy… Nah, kidding. I’m a cocaine man!
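Appendix to these notes: a tiny NumPy sketch of the soft-margin objective above, minimized by plain sub-gradient descent on $\frac{1}{2}\|w\|^2 + C\sum_i \max(0, 1 - y_i(w.x_i + b))$ (the $p=1$ hinge form). This is my own illustrative toy, not the QP or two-coordinate block descent (SMO-style) solvers the text mentions; in practice one would use one of those.

```python
import numpy as np

def svm_subgradient(X, y, C=1.0, lr=0.01, epochs=500):
    """Toy soft-margin linear SVM: minimize 0.5*||w||^2 + C * sum(hinge), with y in {-1,+1}."""
    m, n = X.shape
    w, b = np.zeros(n), 0.0
    for _ in range(epochs):
        margins = y * (X @ w + b)            # y_i (w.x_i + b)
        viol = margins < 1                   # points with nonzero slack xi_i
        # Sub-gradient of the objective with respect to w and b
        grad_w = w - C * (y[viol][:, None] * X[viol]).sum(axis=0)
        grad_b = -C * y[viol].sum()
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

# Usage sketch: classify with the sign of w.x + b
# w, b = svm_subgradient(X_train, y_train, C=1.0)
# y_pred = np.sign(X_test @ w + b)
```

The points with `margins < 1` are exactly those with positive slack; only they (and the regularizer) contribute to the update, which mirrors the role the support vectors and outliers play in the discussion above.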
If you look at the little network diagram below, you’ll probably agree that $P$ is the most “central” node in some intuitive sense. This post is about using a belief’s centrality in the web of belief to give a coherentist account of its justification. The more central a belief is, the more justified it is. But how do we quantify “centrality”? The rough idea: the more ways there are to arrive at a proposition by following inferential pathways in the web of belief, the more central it is. Since we’re coherentists today (for the next 10 minutes, anyway), cyclic pathways are allowed here. If we travel $P \rightarrow Q \rightarrow R \rightarrow P$, that counts as an inferential path leading to $P$. And if we go around that cycle twice, that counts as another such pathway. You might think this just wrecks the whole idea. Every node has infinitely many such pathways leading to it, after all. By cycling around and around we can come up with literally any number of pathways ending at a given node. But, by examining how these pathways differ in the limit, we can differentiate between more and less central nodes/beliefs. We can thus clarify a sense in which $P$ is most central, and quantify that centrality. We can even use that quantity to answer a classic objection to coherentism leveled by Klein & Warfield (1994). As a bonus, we can do all this without ever giving an account of what makes a corpus of beliefs “coherent.” This flips the script on a lot of contemporary formal work on coherentism. 1 Because coherentism is holistic, you might think it has to evaluate the coherence of a whole corpus first, before it can assess the individual members. 2 But we’ll see this isn’t so.$$\newcommand\T{\intercal}\newcommand{\A}{\mathbf{A}}\renewcommand{\v}{\mathbf{v}}$$

Counting Pathways

Our idea is to count how many paths there are leading to $P$ vs. other nodes. We start with paths of length $1$, then count paths of length $2$, then length $3$, and so on. As we count longer and longer paths, each node’s count approaches infinity. But not their relative ratios! If, at each step, we divide the number of paths ending at $P$ by the number of all paths, this ratio converges. To find its limit, we represent our graph numerically. A graph can be represented in a table, where each node corresponds to a row and column. The columns represent “sources” and the rows represent “targets.” We put a $1$ where the column node points to the row node, otherwise we put a $0$.

        $P$   $Q$   $R$
$P$      0     1     1
$Q$      1     0     0
$R$      0     1     0

Hiding the row and column names gives us a matrix we’ll call $\A$: $$\A =\left[ \begin{matrix} 0 & 1 & 1 \\ 1 & 0 & 0 \\ 0 & 1 & 0 \end{matrix} \right]. $$ Notice how each row records the length-$1$ paths leading to the corresponding node. There are two such paths to $P$, and one each to $Q$ and $R$. 3 The key to counting longer paths is to take powers of $\A$. If we multiply $\A$ by itself to get $\A^2$, we get a record of the length-$2$ paths: $$\A^2 = \A \times \A = \left[ \begin{matrix} 0 & 1 & 1 \\ 1 & 0 & 0 \\ 0 & 1 & 0 \end{matrix} \right] \left[ \begin{matrix} 0 & 1 & 1 \\ 1 & 0 & 0 \\ 0 & 1 & 0 \end{matrix} \right] = \left[ \begin{matrix} 1 & 1 & 0 \\ 0 & 1 & 1 \\ 1 & 0 & 0 \end{matrix} \right]. $$ There are two such paths to $P$: $$ \begin{aligned} Q \rightarrow R \rightarrow P,\\ P \rightarrow Q \rightarrow P. \end{aligned} $$ Similarly for $Q$: $$ \begin{aligned} Q \rightarrow P \rightarrow Q,\\ R \rightarrow P \rightarrow Q.
\end{aligned} $$ While $R$ has just one length-$2$ path: $$ P \rightarrow Q \rightarrow R. $$ If we go on to examine $\A^3$, its rows will tally the length-$3$ paths; in general, $\A^n$ tallies the paths of length $n$. But we want relative ratios, not raw counts. The trick to getting these is to divide $\A$ at each step by a special number $\lambda$, known as the “leading eigenvalue” of $\A$ (details below). If we take the limit $$ \lim_{n \rightarrow \infty} \left(\frac{\A}{\lambda}\right)^n $$ we get a matrix whose columns all have a special property: $$\left[ \begin{matrix} 0.41 & 0.55 & 0.31 \\ 0.31 & 0.41 & 0.23 \\ 0.23 & 0.31 & 0.18 \end{matrix} \right]. $$ They all have the same relative proportions. They’re multiples of the same “frequency vector,” a vector of positive values that sum to $1$: $$ \left[ \begin{matrix} 0.43 \\ 0.32 \\ 0.25 \\ \end{matrix} \right]. $$ So as we tally longer and longer paths, we find that $43\%$ of those paths lead to $P$, compared with $32\%$ for $Q$ and $25\%$ for $R$. Thus $P$ is about $1.3$ times as justified as $Q$ ($.43/.32$), and about $1.7$ times as justified as $R$ ($.43/.25$). We want absolute degrees of justification though, not just comparative ones. So we borrow a trick from probability theory and use a tautology for scale. We add a special node $\top$ to our graph, which every other node points to, though $\top$ doesn’t point back. Updating our matrix $\A$ accordingly, we insert $\top$ in the first row/column: $$\A =\left[ \begin{matrix} 0 & 1 & 1 & 1 \\ 0 & 0 & 1 & 1 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \end{matrix} \right]. $$ Redoing our limit analysis gives us the vector $(1.00, 0.57, 0.43, 0.33)$. But this isn’t our final answer, because it’s actually not possible for the non-$\top$ nodes to get a value higher than $2/3$ in a graph with just $3$ non-$\top$ nodes. 4 So we divide elementwise by $(1, 2/3, 2/3, 2/3)$ to scale things, giving us our final result: $$\left[ \begin{matrix} 1.00 \\ 0.85 \\ 0.65 \\ 0.49 \end{matrix} \right]. $$ The relative justifications are the same as before, e.g. $P$ is still $1.3$ times as justified as $Q$. But now we can make absolute assessments too. $R$ comes out looking pretty bad ($0.49$), as seems right, while $Q$ looks a bit better ($0.65$). Of course $P$ looks best ($0.85$), though maybe not quite good enough to be justified tout court.

The Klein–Warfield Problem

Ok, that’s theoretically nifty and all, but does it work on actual cases? Let’s try it out by looking at a notorious objection to coherentism. Klein & Warfield (1994) argue that coherentism flouts the laws of probability. How so? Making sense of things often means believing more: taking on new beliefs to resolve the tensions in our existing ones. For example, if we think Tweety is a bird who can’t fly, the tension is resolved if we also believe they’re a penguin. 5 But believing more means believing less probably. Increases in logical strength bring decreases in probability (unless the stronger content was already guaranteed with probability $1$). So increasing the coherence in one’s web of belief will generally mean decreasing its probability. How could increasing coherence increase justification, then? Merricks (1995) points out that, even though the probability of the whole corpus goes down, the probabilities of individual beliefs go up in a way. After all, it’s more likely Tweety can’t fly if they’re a penguin, than if they’re just a bird of some unknown species. That’s only the beginning of a satisfactory answer though.
After all, we might not be justified in believing Tweety’s a penguin in the first place! Adding a new belief to support an existing belief doesn’t help if the new belief has no support itself. We need a more global assessment, which is where the present account shines. Suppose we add $P$ = Tweety is a penguin to the network containing $B$ = Tweety is a bird and $\neg F$ = Tweety can’t fly. Will this increase the centrality/justification of $B$ and of $\neg F$? Yes, but we need to sort out the support relations to verify this. Presumably $P$ supports $B$, and $\neg F$ too. But what about the other way around? If Tweety is a flightless bird, there’s a decent chance they’re a penguin. But it’s hardly certain; they might be an emu or kiwi instead. Come to think of it, isn’t support a matter of degree, so don’t we need finer tools than just on/off arrows? Yes, and the refinement is easy. We accommodate degrees of support by attaching weights to our arrows. Instead of just placing a $1$ in our matrix $\A$ wherever the column-node points to the row-node, we put a number from the $[0,1]$ interval that reflects the strength of support. The same limit analysis as before still works, as it turns out. We just think of our inferential links as “leaky pipes” now, where weaker links make for leakier pipelines. We still need concrete numbers to analyze the Tweety example. But it’s a toy example, so let’s just make up some plausible-ish numbers to get us going. Let’s suppose $1\%$ of birds are flightless, and birds are an even smaller percentage of the flightless things, say $0.1\%$. Let’s also pretend that $20\%$ of flightless birds are penguins. Before believing Tweety is a penguin then, our web of belief looks like this: Calculating the degrees of justification for $B$ and $\neg F$, both come out very close to $0$ as you’d expect (with $B$ closer to $0$ than $\neg F$). Now we add $P$. Recalculating degrees of justification, we find that they increase drastically. $B$ and $\neg F$ are now justified to degree $0.85$, while $P$ is justified to degree $0.26$. (All numbers approximate.) So our account vindicates Merricks. Not only does adding $P$ to the corpus add “local” justification for $B$ and for $\neg F$. It also improves their standing on a more global assessment. You might be worried though: did $P$ come out too weakly justified, at just $0.26$? No: that’s either an artifact of oversimplification, or else it’s actually the appropriate outcome. Notice that $B$ and $\neg F$ don’t really support Tweety being a penguin. They’re a flightless bird, sure, but maybe they’re an emu, kiwi, or moa. We chose to believe penguin, and maybe we have our reasons. If we do, then the graph is missing background beliefs which would improve $P$’s standing once added. But otherwise, we just fell prey to stereotyping or comes-to-mind bias, in which case it’s right that $P$ stand poorly.

Technical Background

The notion of centrality used here is a common tool in network analysis, where it’s known as “eigenvector centrality,” because the frequency vector we arrive at in the limit is an eigenvector of the matrix $\A$. In fact it’s a special eigenvector, the only one with all-positive values. Since we’re measuring justification on a $0$-to-$1$ scale, our account depends on there always being such an eigenvector for $\A$. In fact we need it to be unique, up to scaling (i.e. up to multiplication by a constant).
The theorem that guarantees this is actually quite old, going back to work by Oskar Perron and Georg Frobenius published around 1910. Here’s one version of it. Perron–Frobenius Theorem. Let $\A$ be a square matrix whose entries are all positive. Then all of the following hold. $\A$ has an eigenvalue $\lambda$ that is larger (in absolute value) than $\A$’s other eigenvalues. We call $\lambda$ the leading eigenvalue. $\A$’s leading eigenvalue has an eigenvector $\v$ whose entries are all positive. We call $\v$ the leading eigenvector. $\A$ has no other positive eigenvectors, save multiples of $\v$. The powers $(\A/\lambda)^n$ as $n \rightarrow \infty$ approach a matrix whose columns are all multiples of $\v$. Now, our matrices had some zeros, so they weren’t positive in all their entries. But it doesn’t really matter, as it turns out. Frobenius’ contribution was to generalize this result to many cases that feature zeros. But even in cases where Frobenius’ weaker conditions aren’t satisfied, we can just borrow a trick from Google. 6 Instead of using a $0$-to-$1$ scale, we use $\epsilon$-to-$1$ for some very small positive number $\epsilon$. Then all entries in $\A$ are guaranteed to be positive, and we just rescale our results accordingly. (Choose $\epsilon$ small enough and the difference is negligible in practice.)

Acknowledgments

This post owes a lot to prior work by Elena Derksen and Selim Berker. I’d never really thought much about how coherence and justification relate prior to reading Derksen’s work. And Berker’s prompted me to take graphs more seriously as a way of formalizing coherentism. I’m also grateful to David Wallace for introducing me to the Perron–Frobenius theorem’s use as a tool in network analysis. [return] In his seminal book on coherentism, Bonjour (1985) writes: “the justification of a particular empirical belief finally depends, not on other particular beliefs as the linear conception of justification would have it, but instead on the overall system and its coherence.” This doesn’t commit us to assessing overall coherence before individual justification. But that’s a natural conclusion you might come away with. [return] We could count every proposition as pointing to itself. This would mean putting $1$’s down the diagonal, i.e. adding the identity matrix $\mathbf{I}$ to $\A$. This can be useful as a way to ensure the limits we’ll require exist. But we’ll solve that problem differently in the “Technical Background” section. And otherwise it doesn’t really affect our results. It increases the leading eigenvalue by $1$, but doesn’t affect the leading eigenvector. [return] In general, the maximum possible centrality is $(k-1)/k$ in a graph with $k$ non-$\top$ nodes. [return] Hat tip to Erik J. Olsson’s entry on coherentism in the SEP, which uses this example in place of Klein & Warfield’s slightly more involved one. [return] Google’s founders used a variant of eigenvector centrality called “PageRank” in their original search engine. [return]
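A small numerical illustration of the machinery in this post (my own sketch, not part of the original): computing the limiting frequency vector for the three-node example with NumPy, either directly from the leading eigenvector or by iterating $(\A/\lambda)^n$ on a positive starting vector.

```python
import numpy as np

# Adjacency matrix of the P, Q, R example: A[target, source] = 1 if source -> target.
A = np.array([[0, 1, 1],
              [1, 0, 0],
              [0, 1, 0]], dtype=float)

# Leading eigenvalue/eigenvector (Perron-Frobenius guarantees a positive one here).
eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)                  # index of the leading (real) eigenvalue
v = np.abs(eigvecs[:, k].real)
print(v / v.sum())                           # ~ [0.43, 0.32, 0.25] for P, Q, R

# Equivalent power-iteration view: repeatedly apply A / lambda to any positive vector.
lam = eigvals.real[k]
x = np.ones(3)
for _ in range(100):
    x = A @ x / lam
print(x / x.sum())                           # same frequency vector in the limit
```

Both print (approximately) the $43\%/32\%/25\%$ split reported above for $P$, $Q$, and $R$.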
Thanks for all the answers. I realize that I made a big mistake: I did not specify the distribution of the pseudo-random sequences. I am trying to use PRGs or PRFs to define pseudo-random sequences; if there is a standard formal definition, please tell me. Of course, the pseudo-random sequence and the truly random sequence should be indistinguishable. And the definition should cover some natural collections of "pseudo-random sequences" (e.g. M-sequences). I am also unsure what a "sequence" is here: is it a kind of "number" or a kind of "function"? As the book "Lecture Notes on Cryptography" written by Shafi Goldwasser and Mihir Bellare says, The one-time pad is generally impractical because of the large amount of key that must be stored. In practice, one prefers to store only a short random key, from which a long pad can be produced with a suitable cryptographic operator. Such an operator, which can take a short random sequence $x$ and deterministically “expand” it into a pseudo-random sequence $y$, is called a pseudo-random sequence generator. Usually $x$ is called the seed for the generator. The sequence $y$ is called “pseudo-random” rather than random since not all sequences $y$ are possible outputs; the number of possible $y$’s is at most the number of possible seeds. Nonetheless, the intent is that for all practical purposes $y$ should be indistinguishable from a truly random sequence of the same length. It is important to note that the use of pseudo-random sequence generator reduces but does not eliminate the need for a natural source of random bits; the pseudo-random sequence generator is a “randomness expander”, but it must be given a truly random seed to begin with. To achieve a satisfactory level of cryptographic security when used in a one-time pad scheme, the output of the pseudo-random sequence generator must have the property that an adversary who has seen a portion of the generator’s output $y$ must remain unable to efficiently predict other unseen bits of $y$. For example, note that an adversary who knows the ciphertext $C$ can guess a portion of $y$ by correctly guessing the corresponding portion of the message $M$, such as a standardized closing “Sincerely yours,”. We would not like him thereby to be able to efficiently read other portions of $M$, which he could do if he could efficiently predict other bits of $y$. Most importantly, the adversary should not be able to efficiently infer the seed $x$ from the knowledge of some bits of $y$. It seems that the notion of a pseudo-random sequence is an approach to generating pseudo-random numbers, although perhaps pseudo-random sequences have further uses as well. Thus, I think it is reasonable to use the definition of PRGs to define pseudo-random sequences. Call $G \colon \{\, 0,1 \,\}^{*} \rightarrow \{\, 0,1 \,\}^{\infty}$ a generator of sequences if, for every polynomial $p(\cdot)$, the map $G_{p}$ is polynomial-time computable, where $G_{p}(s)$ is the first $p(|s|)$ bits of $G(s)$. We say $G$ is a pseudo-random-sequence generator if there exists a negligible function $\varepsilon(\cdot)$ such that for every polynomial $p(\cdot)$ and every PPT distinguisher $D$, $$\left\vert \Pr\left[ x \leftarrow \{\, 0,1 \,\}^{p(n)}: D(x) = 1\right] - \Pr \left[ s \leftarrow \{\, 0,1 \,\}^{n}: D(G_{p}(s)) = 1 \right] \right\vert \leq \varepsilon(n).$$ There are some classical examples of such generators: linear feedback shift registers (LFSRs), and the binary expansion of an algebraic number (such as $\sqrt{5}= 10.001111000110111 \ldots $).
They are both insecure, though. The second example shows that the sequence does not need to have a finite period. I still do not know how to phrase the definition using PRFs, because I cannot find a suitable function family $f_{s} \colon \{\, 0,1 \,\}^{\lambda(|s|)} \rightarrow \{\, 0,1 \,\}^{\lambda(|s|)}$, where $s$ is the key.
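For concreteness, here is a small Python sketch of the first classical example mentioned above, a linear feedback shift register. It is only meant to make the "short seed in, long bit sequence out" picture concrete; as noted, an LFSR is not a cryptographically secure PRG.

```python
def lfsr_bits(seed_bits, taps, n):
    """Toy Fibonacci LFSR: expand a short seed into n output bits (insecure!).

    seed_bits: list of 0/1 values (the seed, i.e. the initial register state)
    taps: register positions XORed together to form the feedback bit
    """
    state = list(seed_bits)
    out = []
    for _ in range(n):
        out.append(state[-1])                 # output the last bit of the register
        fb = 0
        for t in taps:                        # feedback = XOR of the tapped bits
            fb ^= state[t]
        state = [fb] + state[:-1]             # shift, inserting the feedback bit
    return out

# Example: a 4-bit register with taps at positions 0 and 3 produces a period-15 m-sequence.
print(lfsr_bits([1, 0, 0, 1], taps=(0, 3), n=20))
```

Any nonzero 4-bit seed here yields the same maximal-period sequence up to a shift, which is exactly why the output is so predictable once a few bits are observed.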
($M(n)$ is "the time required to perform precision-$n$ multiplication.") By page 10 of this paper, a precision-$n$ approximation to $\pi$ can be computed in time $O\big(M(n)\cdot \log(n)\big)$. By the very end of this paper, one can find a rational number $c$ such that for all $n$, the first $n$ bits of $\pi$ are the first $n$ bits of every precision-$\lfloor (19.9\cdot n)+c\rfloor$ approximation of $\pi$, so the first $n$ bits of $\pi$ can be computed in time $O\big(M(n)\cdot \log(n)\big)$. Thus, if $\Theta\big(M(n)\cdot \log(n)\big)$ contains a time-constructible function, one can print $\pi$ "with the time spent between printing digit" $n-1$ "and digit $n$ being" $O\big(\big(M(n)\cdot \log(n)\big)/n\big)$, as follows: (Recall that the 0th and 1st bits of $\pi$ are both "1".) Set $i = 0$ and $t_{curr} = 1$ and output "1", then run the following loop forever. Compute [the first $2^{\,i+2}$ bits of $\pi$] and [an integer $t_{next}$ in $\Theta\big(M(2^i)\cdot i\big)$ such that $t_{next}$ is an upper bound on the time it will take to compute $t_{next}$ and those bits of $\pi$]. At the start of those computations, output the $2^i$-th bit of $\pi$. Interleaved with those computations, output a bit of $\pi$ approximately $t_{curr}/2^i$ steps after outputting the previous bit of $\pi$. At the end of those computations, output the rest of the first $2^{\,i+1}$ bits of $\pi$, then increment $i$, set $t_{curr} = t_{next}$, and go back to the beginning of this loop. With the fastest known integer multiplication algorithm, that would achieve $t(n) = (\log(n))^{2}\cdot 8^{\log^{*}(n)}$. To avoid that general trick, one should probably impose a sublinear space bound.
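To illustrate just the doubling structure of that loop (not the careful time accounting, which is the actual point of the argument), here is a rough Python sketch of my own that streams decimal digits of $\pi$ by repeatedly recomputing a prefix twice as long as what has been emitted so far; the mpmath library stands in for the precision-$n$ approximation routine.

```python
# Rough sketch only: at stage i, recompute a prefix of length 2^(i+2) digits
# and emit the digits up to position 2^(i+1). Timing/pacing is deliberately omitted.
import mpmath

def stream_pi_digits(stages=10):
    emitted = 0
    for i in range(stages):
        need = 2 ** (i + 2)
        mpmath.mp.dps = need + 10                       # working precision with guard digits
        digits = mpmath.nstr(mpmath.pi, need).replace(".", "")
        new_block = digits[emitted : 2 ** (i + 1)]      # digits not yet printed
        print(new_block, end="", flush=True)
        emitted = 2 ** (i + 1)
    print()

stream_pi_digits()
```

Each emitted digit sits well inside the recomputed prefix, which is the sketch's analogue of only ever printing bits already pinned down by a sufficiently precise approximation.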
Hi, can someone provide me some self-study material for condensed matter theory? I've done QFT previously, for which I could happily read Peskin supplemented with David Tong. Can you please suggest some references along those lines? Thanks @skullpatrol The second one was in my MSc and covered considerably less than my first and (I felt) didn't do it in any particularly great way, so distinctly average. The third was pretty decent - I liked the way he did things and it was essentially a more mathematically detailed version of the first :) 2. A weird particle or state that is made of a superposition of a torus region with clockwise momentum and anticlockwise momentum, resulting in one that has no momentum along the major circumference of the torus but still nonzero momentum in directions that are not pointing along the torus Same thought as you, however I think the major challenge of such a simulator is the computational cost. GR calculations, with their highly nonlinear nature, might be more costly than a computation of a protein. However I can see some ways of approaching it. Recall how Slereah was building some kind of spacetime database; that could be the first step. Next, one might look to machine learning techniques to help with the simulation by using the classifications of spacetimes, as machines are known to perform very well on sign problems, as a recent paper has shown Since the GR equations are ultimately a system of 10 nonlinear PDEs, it might be possible that the solution strategy has some relation to the class of spacetime under consideration, and that might help heavily reduce the parameters needed to simulate them I just mean this: The EFE is a tensor equation relating a set of symmetric 4 × 4 tensors. Each tensor has 10 independent components. The four Bianchi identities reduce the number of independent equations from 10 to 6, leaving the metric with four gauge-fixing degrees of freedom, which correspond to the freedom to choose a coordinate system. @ooolb Even if that is really possible (I can always talk about things from a non-joking perspective), the issue is that 1) Unlike other people, I cannot incubate my dreams for a certain topic due to Mechanism 1 (conscious desires have reduced probability of appearing in dreams), and 2) For 6 years, my dreams have yet to show any sign of revisiting the exact same idea, and there are no known instances of either sequel dreams or recurring dreams @0celo7 I felt this aspect could be helped by machine learning. You can train a neural network with some PDEs of a known class with some known constraints, and let it figure out the best solution for some new PDE after, say, training it on 1000 different PDEs Actually that makes me wonder, is the space of all coordinate choices larger than that of all possible moves of Go? enumaris: From what I understood from the dream, the warp drive shown here may be some variation of the Alcubierre metric with a global topology that has 4 holes in it, whereas the original Alcubierre drive, if I recall, doesn't have holes orbit stabilizer: h bar is my home chat, because this is the first SE chat I joined. Maths chat is the 2nd one I joined, followed by periodic table, biosphere, factory floor and many others Btw, since gravity is nonlinear, do we expect that a region where spacetime is frame-dragged in the clockwise direction, superimposed on a spacetime that is frame-dragged in the anticlockwise direction, will result in a spacetime with no frame dragging?
(one possible physical scenario where I can envision this occurring may be when two massive rotating objects with opposite angular velocities are on course to merge) Well, I'm a beginner in the study of General Relativity, ok? My knowledge about the subject is based on books like Schutz, Hartle, Carroll and introductory papers. My knowledge of quantum mechanics is still poor. So, what I meant by "gravitational double-slit experiment" is: is there a gravitational analogue of the double-slit experiment, for gravitational waves? @JackClerk the double-slit experiment is just interference of two coherent sources, where we get the two sources from a single light beam using the two slits. But gravitational waves interact so weakly with matter that it's hard to see how we could screen a gravitational wave to get two coherent GW sources. But if we could figure out a way to do it then yes, GWs would interfere just like light waves. Thank you @Secret and @JohnRennie. But to conclude the discussion, I want to put a "silly picture" here: Imagine a huge double-slit plate in space close to a strong source of gravitational waves. Then, like water waves and light, would we see the pattern? So, if the source (like a black hole binary) is sufficiently far away, then in the regions of destructive interference, spacetime would have a flat geometry, and if we put a spherical object in this region the metric would become Schwarzschild-like. Pardon, I just spent some naive-philosophy time here with these discussions. The situation was even more dire for Calculus and I managed! This is a neat strategy I have found: revision becomes more bearable when I have The h Bar open on the side. In all honesty, I actually prefer exam season! At all other times (as I have observed in this semester, at least) there is nothing exciting to do. This system of tortuous panic, followed by a reward, is obviously very satisfying. My opinion is that I need you kaumudi to decrease the probability of h bar having software system infrastructure conversations, which confuse me like hell and are why I took refuge in the maths chat a few weeks ago (Not that I have questions to ask or anything; like I said, it is a little relieving to be with friends while I am panicked. I think it is possible to gauge how much of a social recluse I am from this, because I spend some of my free time hanging out with you lot, even though I am literally inside a hostel teeming with hundreds of my peers) that's true, though back in high school, regardless of language, our teacher taught us to always indent our code to allow easy reading and troubleshooting. We were also taught the 4-space indentation convention @JohnRennie I wish I could just hit tab because I am also lazy, but sometimes tab inserts 4 spaces while other times it inserts 5-6 spaces, thus screwing up a block of if-then conditions in my code, which is why I had no choice I currently automate almost everything from job submission to data extraction, and later on, with the help of the machine learning group at my uni, we might be able to automate a GUI library search thingy I can do all tasks related to my work without leaving the text editor (of course, that text editor is Emacs). The only inconvenience is that some websites don't render in an optimal way (but most of the work-related ones do) Hi to all. Does anyone know where I could write MATLAB code online (for free)?
Apparently another one of my institution's great inspirations is to have a MATLAB-oriented computational physics course without having MATLAB on the university's PCs. Thanks. @Kaumudi.H Hacky way: the 1st thing is that $\psi\left(x, y, z, t\right) = \psi\left(x, y, t\right)$, so there is no propagation in the $z$-direction. Now, in '$1$ unit' of time, it travels $\frac{\sqrt{3}}{2}$ units in the $y$-direction and $\frac{1}{2}$ units in the $x$-direction. Use this to form a triangle and you'll get the answer with simple trig :) @Kaumudi.H Ah, it was okayish. It was mostly memory-based. Each small question was worth 10-15 marks. No idea what they expect me to write for questions like "Describe acoustic and optic phonons" for 15 marks!! I only wrote two small paragraphs... meh. I don't like this subject much :P (physical electronics). Hope to do better in the upcoming tests so that there isn't a huge effect on the GPA. @Blue Ok, thanks. I found a way by connecting to the university's servers (the program isn't installed on the PCs in the computer room, but if I connect to the university's server - which means remotely running another environment - I can use an older version of MATLAB). But thanks again. @user685252 No; I am saying that it has no bearing on how good you actually are at the subject - it has no bearing on how good you are at applying knowledge; it doesn't test problem-solving skills; it doesn't take into account that, if I'm sitting in the office having forgotten the difference between different types of matrix decomposition or something, I can just search the internet (or a textbook), so it doesn't say how good someone is at research in that subject; and it doesn't test how good you are at deriving anything - someone can write down a definition without any understanding, while someone who can derive it, but has forgotten it, probably won't have time in an exam situation. In short, testing memory is not the same as testing understanding. If you really want to test someone's understanding, give them a few problems in that area that they've never seen before and give them a reasonable amount of time to do it, with access to textbooks etc.
The Annals of Probability Ann. Probab. Volume 22, Number 2 (1994), 833-853. Ergodic Theorems for Infinite Systems of Locally Interacting Diffusions

Abstract

Let $x(t) = \{x_i(t), i \in \mathbb{Z}^d\}$ be the solution of the system of stochastic differential equations $dx_i(t) = \bigg(\sum_{j\in\mathbb{Z}^d}a(i,j)x_j(t) - x_i(t)\bigg) dt + \sqrt{2g (x_i(t))} dw_i(t), \quad i \in \mathbb{Z}^d.$ Here $g: \lbrack 0, 1\rbrack \rightarrow \mathbb{R}^+$ satisfies $g > 0$ on $(0, 1)$, $g(0) = g(1) = 0$, $g$ is Lipschitz, $a(i,j)$ is an irreducible random walk kernel on $\mathbb{Z}^d$ and $\{w_i(t), i \in \mathbb{Z}^d\}$ is a family of standard, independent Brownian motions on $\mathbb{R}$; $x(t)$ is a Markov process on $X = \lbrack 0, 1\rbrack^{\mathbb{Z}^d}$. This class of processes was studied by Notohara and Shiga; the special case $g(v) = v(1 - v)$ has been studied extensively by Shiga. We show that the long term behavior of $x(t)$ depends only on $\hat{a}(i,j) = (a(i,j) + a(j, i))/2$ and is universal for the entire class of $g$ considered. If $\hat{a}(i,j)$ is transient, then there exists a family $\{\nu_\theta, \theta \in \lbrack 0, 1\rbrack\}$ of extremal, translation invariant equilibria. Each $\nu_\theta$ is mixing and has density $\theta = \int x_0 d\nu_\theta$. If $\hat{a}(i,j)$ is recurrent, then the set of extremal translation invariant equilibria consists of the point masses $\{\delta_0, \delta_1\}$. The process starting in a translation invariant, shift ergodic measure $\mu$ on $X$ with $\int x_0 d\mu = \theta$ converges weakly as $t \rightarrow \infty$ to $\nu_\theta$ if $\hat{a}(i,j)$ is transient, and to $(1 - \theta)\delta_0 + \theta\delta_1$ if $\hat{a}(i,j)$ is recurrent. (Our results in the recurrent case remove a mild assumption on $g$ imposed by Notohara and Shiga.) For the case $\hat{a}(i,j)$ transient we use methods developed for infinite particle systems by Liggett and Spitzer. For the case $\hat{a}(i,j)$ recurrent we use a duality comparison argument.

Article information

Source: Ann. Probab., Volume 22, Number 2 (1994), 833-853.
Dates: First available in Project Euclid: 19 April 2007
Permanent link to this document: https://projecteuclid.org/euclid.aop/1176988732
Digital Object Identifier: doi:10.1214/aop/1176988732
Mathematical Reviews number (MathSciNet): MR1288134
Zentralblatt MATH identifier: 0806.60100

Citation: Cox, J. T.; Greven, Andreas. Ergodic Theorems for Infinite Systems of Locally Interacting Diffusions. Ann. Probab. 22 (1994), no. 2, 833--853. doi:10.1214/aop/1176988732. https://projecteuclid.org/euclid.aop/1176988732
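(Not part of the article page.) As a concrete reading of the model in the abstract, here is a minimal Euler-Maruyama sketch of these interacting diffusions on a finite periodic lattice, in one dimension with a nearest-neighbour kernel and the special case $g(v)=v(1-v)$; it is only meant to illustrate the dynamics being analysed, not the infinite-volume theorems.

```python
import numpy as np

def simulate(L=200, steps=2000, dt=0.01, theta=0.3, seed=0):
    """Euler-Maruyama for dx_i = (mean of neighbours - x_i) dt + sqrt(2 g(x_i)) dW_i,
    with g(v) = v(1 - v), on a 1-d periodic lattice of L sites."""
    rng = np.random.default_rng(seed)
    x = np.full(L, theta)                                   # constant-density initial profile
    for _ in range(steps):
        drift = 0.5 * (np.roll(x, 1) + np.roll(x, -1)) - x  # a(i,j): nearest-neighbour kernel
        g = x * (1.0 - x)
        x = x + drift * dt + np.sqrt(2.0 * g * dt) * rng.standard_normal(L)
        x = np.clip(x, 0.0, 1.0)                            # keep the state in [0,1]^L
    return x

print(simulate().mean())   # the spatial density stays near theta while sites cluster toward 0 or 1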
I am going to assume that by constant, you really mean constant as opposed to instantaneous. In other words, we are still bound by the speed of light propagation delay. We are also bound by the laws of orbital mechanics as currently understood. Since you use "planets" plural, I take it that humanity has colonies on multiple planets and possibly moons, as opposed to just one outpost away from Earth or Earth-centric orbit (which we already have one of: the International Space Station). I'm also going to, for simplicity's sake, assume that you have unlimited power output for the transmitters. In practice this is not going to be the case, but to a first order approximation to maintain reader suspension of disbelief, it works okay. Also, you can trade power output for data rate, as described by the Shannon–Hartley theorem, so if you can accept a lower data transmission rate, you can make do with less power (to a point). Let's start with colonies only on the planets' surface, not any of the moons in the solar system. The problem here is that planets orbit the Sun with little regard to their respective orbital alignment with the other planets. The easiest way to ensure that every planet is always in view of at least one communications relay satellite is probably to put the relay sats in an orbit around the Sun which is highly inclined relative to the solar system ecliptic (the imaginary disk that is formed by the orbits of the planets, which traces back to the protoplanetary disk of the solar system). A simple way to do that (well, "simple", but still expensive in terms of orbital maneuvering to get into place) would be to use a solar polar orbit. This is an orbit that goes across the poles of the Sun, rather than around the Sun's equator, at a 90 degree angle to the ecliptic. Having three relay satellites in a solar polar orbit, 120 degrees out of phase, will ensure that there is always one within view of somewhere on every planet in the solar system, as the Sun will only block the view of one at any one time (as viewed from any particular planet). You may want a few extra for redundancy, but doing so does not significantly change the setup. Given that the other end of the link is close to the ecliptic, having three ensures that one is always within view of each planet, whereas with two, the situation could arise that one is behind the Sun and the other is directly in front of the Sun. That would almost certainly work from a geometric point of view, but in practice you would have serious trouble picking the signal out from the Sun's noise (see below). Now, notice that I said somewhere on every planet. You are going to need a similar constellation in orbit around each planet where there is a human colony, to ensure that there is a satellite in view of every point on the surface where it is needed. At this point it comes down to a similar scenario to that described in Minimum number of satellites to image the entirety of Earth's surface at all times. It turns out that this is possible to do with four to six satellites (mostly depending on your ground station capabilities, I suppose; four is the absolute minimum required for the satellite constellation to be able to see every point on the surface all the time, but you also need particular spots on the surface to be able to communicate with at least one of the satellites at any given time). Again, you may want a few extra for redundancy, but solving this problem is not an insurmountable task. 
Once you add colonies on the planets' moons or otherwise in orbit of the planets, you will need a reliable method for communication from the colony to the relay sats around the planet. For this, you can look to the Tracking and Data Relay Satellite System (TDRSS) for inspiration. Long story short, you need at least three satellites in geostationary orbit to maintain constant communications between any point in orbit and any point on the ground, after which getting the signal to the solar system relay sats is merely a matter of getting a signal (any signal) from point A to point B on the planet's or moon's surface, or between the TDRSS-like satellites. Once the signal is within view of one of the solar relay sats, have the satellite shoot it off toward the solar relay satellite, and the signal is off to its penultimate destination. There are two big problems with this that your world's engineers would be facing, which I can think of. First, the Sun is rather noisy well into the RF spectrum. That's a problem when the Sun is in line with the desired signal. So you will need to either place the Sun-orbiting satellites in a relatively high orbit around the Sun to ensure sufficient separation that high-gain antennas can select against the radio noise of the Sun, or extremely high-gain antennas at the ends of the links. I don't know which of these would be easier, but given that fighting the ecliptic is already difficult, either might well be worth the price to pay. Note that the higher-gain antennas require more accurate aiming, which will require more station-keeping, necessitating more reaction mass ("fuel") on-board the satellites for a given service life. Again, not insurmountable, but worth keeping in mind as it is an issue that real-life engineers would have to contend with and make tradeoffs in. Second, solar polar orbit is hard. I alluded to this above, but don't dismiss the importance of it; it really is crazy hard. Let's say you want to place the Sun-orbiting relay satellites at the distance to the Sun of Venus (0.73 AU), with an inclination to the ecliptic of 90 degrees. First, you need to get to Venus' orbit, which can be accomplished with a Hohmann transfer (calculated based on a heliocentric, or Sun-centered, reference frame): $$ r_1 = 1.00~\text{AU} \approx 149\,598\,023\,000~\text{m} \\r_2 = 0.73~\text{AU} \approx 109\,206\,445\,611~\text{m} \\\Delta v_1 = \sqrt{\frac{\mu_\text{Sun}}{r_1}} \left( \sqrt{\frac{2r_2}{r_1 + r_2}} - 1 \right) = \sqrt{\frac{1.3271244 \times 10^{20}}{149\,598\,023\,000}} \left( \sqrt{\frac{218\,412\,891\,222}{258\,804\,468\,611}} - 1 \right) \\\Delta v_2 = \sqrt{\frac{\mu_\text{Sun}}{r_2}} \left( 1 - \sqrt{\frac{2r_1}{r_1 + r_2}} \right) = \sqrt{\frac{1.3271244 \times 10^{20}}{109\,206\,445\,611}} \left( 1 - \sqrt{\frac{299\,196\,046\,000}{258\,804\,468\,611}} \right) \\\Delta v_1 \approx 2\,423~\text{m/s} \\\Delta v_2 \approx 2\,622~\text{m/s} \\\Delta v = \Delta v_1 + \Delta v_2 \approx 5\,045~\text{m/s} $$ which is manageable (going to the Moon took a total of about 11 km/s delta-v for the trip out, plus some for landing and going back, for a total delta-v budget of somewhere in the vicinity of 20 km/s split among the Saturn, service module, lunar module descent and lunar module ascent stages). This puts you in the neighborhood of Venus; not necessarily in Venus' actual location (that depends on orbital transfer timing, or what we refer to as launch windows), but at least approximately co-orbiting with it.
Now, assume that your orbit is circular, and change its inclination by 90 degrees while maintaining its circularity (technically, its eccentricity), where $v = 35.02~\text{km/s}$ is Venus' orbital velocity around the Sun: $$ \Delta v_i = 2v \sin\left({\frac{\Delta i}{2}}\right) = 70\,040~\text{m/s} \times \sin\left(\frac{90°}{2}\right) \approx 49\,526~\text{m/s} $$ So if your satellite-carrying spacecraft is already in Earth's orbit (which is not the same thing as an orbit around the Earth, but rather, co-orbiting the Sun with Earth), you need a total velocity change (delta-v) budget of about 54,600 m/s to enter a solar polar orbit at Venus' distance from the Sun, and that's after you apply almost 8 km/s plus drag losses to get to low Earth orbit. While there are almost certainly tricks you can use to cut down on how much of this you need to apply under power (with rocket engines running), that remains a massive undertaking. I wouldn't be the least bit surprised if you'd be looking at something similar to the Saturn C-8, which was about the same height but much bulkier than the Saturn V which sent Apollo towards the Moon. Compare also Is it possible to communicate in space while the sun is between parties? on the Space Exploration SE.
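For anyone who wants to play with the numbers, here is a small Python sketch of my own reproducing the two calculations above (the Hohmann transfer from Earth's orbit down to 0.73 AU, and the 90 degree plane change); the constants are the ones used in the answer, and the burn values are reported as magnitudes.

```python
import math

MU_SUN = 1.3271244e20          # Sun's gravitational parameter, m^3/s^2
AU = 149_598_023_000.0         # metres

def hohmann_dv(r1, r2, mu=MU_SUN):
    """Magnitudes of the two burns for a Hohmann transfer between circular orbits r1 -> r2."""
    dv1 = abs(math.sqrt(mu / r1) * (math.sqrt(2 * r2 / (r1 + r2)) - 1))
    dv2 = abs(math.sqrt(mu / r2) * (1 - math.sqrt(2 * r1 / (r1 + r2))))
    return dv1, dv2

def plane_change_dv(v, delta_i_deg):
    """Delta-v for changing inclination by delta_i degrees at circular orbital speed v."""
    return 2 * v * math.sin(math.radians(delta_i_deg) / 2)

dv1, dv2 = hohmann_dv(1.00 * AU, 0.73 * AU)
print(round(dv1), round(dv2))                  # ~2423 and ~2622 m/s
print(round(plane_change_dv(35_020, 90)))      # ~49526 m/s
```

Adding the three numbers gives the roughly 54,600 m/s total quoted above, which is what makes the plane change the dominant (and painful) part of the budget.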
Imagine you’re going to flip a fair coin twice. You could get two heads, two tails, or one of each. How probable is each outcome? It’s tempting to say they’re equally probable, \(1/3\) each. But actually the first two are only \(1/4\) likely, while the last is \(1/2\) likely. Why? There are actually four possible outcomes here, but we have to consider the order of events to see how. If you get one each of heads and tails, what order will they come in? You could get the head first and then the tail, or the reverse. So there are four possible sequences: HH, TT, HT, and TH. And all four sequences are equally likely, a probability of \(1/4\) each. How do we know each sequence has \(1/4\) probability though? And how does that tell us the probability is \(1/2\) that you’ll get one each of heads and tails? We need to introduce some mechanics of probability to settle these questions. We denote the probability of proposition \(A\) with \(Pr(A)\). For example, \(Pr(A)=2/3\) means there’s a \(2/3\) chance \(A\) is true. Now, our coin is fair, and by definition that means it always has a \(1/2\) chance of landing heads and a \(1/2\) chance of landing tails. For a single toss, we can use \(H\) for the proposition that it lands heads, and \(T\) for the proposition that it lands tails. We can then write \(Pr(H) = 1/2\) and \(Pr(T) = 1/2\). For a sequence of two tosses, we can use \(H_1\) for heads on the first toss, and \(H_2\) for heads on the second toss. Similarly, \(T_1\) and \(T_2\) represent tails on the first and second tosses, respectively. The four possible sequences are then expressed by the complex propositions \(H_1 \,\&\, H_2\), \(T_1 \,\&\, T_2\), \(H_1 \,\&\, T_2\), and \(T_1 \,\&\, H_2\). We want to calculate the probabilities of these propositions. For example, we want to know what number \(Pr(H_1 \,\&\, H_2)\) is equal to. Because the coin is fair, we know \(Pr(H_1) = 1/2\) and \(Pr(H_2) = 1/2\). The probability of heads on any given toss is always \(1/2\), no matter what came before. To get the probability of \(H_1 \,\&\, H_2\) it’s then natural to compute:\[ \begin{aligned} Pr(H_1 \,\&\, H_2) &= Pr(H_1) \times Pr(H_2)\\ &= 1/2 \times 1/2\\ &= 1/4. \end{aligned}\]And this is indeed correct, but only because the coin is fair and thus the tosses are independent. The following is a general rule of probability: If \(A\) and \(B\) are independent, then \(Pr(A \,\&\, B) = Pr(A) \times Pr(B)\). So, because our two coin tosses are independent, we can multiply to calculate \(Pr(H_1 \,\&\, H_2) = 1/4\). And the same reasoning applies to all four possible sequences, so we have: \[ \begin{aligned} Pr(H_1 \,\&\, H_2) &= 1/4,\\ Pr(T_1 \,\&\, T_2) &= 1/4,\\ Pr(H_1 \,\&\, T_2) &= 1/4,\\ Pr(T_1 \,\&\, H_2) &= 1/4. \end{aligned} \] The Multiplication Rule only applies to independent propositions. Otherwise it gives the wrong answer. For example, the propositions \(H_1\) and \(T_1\) are definitely not independent. If the coin lands heads on the first toss (\(H_1\)), that drastically alters the chances of tails on the first toss (\(T_1\)). It changes that probability to zero! If you were to apply the Multiplication Rule though, you would get \(Pr(H_1 \,\&\, T_1) = Pr(H_1) \times Pr(T_1) = 1/2 \times 1/2 = 1/4\), which is definitely wrong. Only use the Multiplication Rule on independent propositions. We observed that you can get one head and one tail two different ways. You can either get heads then tails (\(H_1 \,\&\, T_2\)), or you can get tails then heads (\(T_1 \,\&\, H_2\)). So the logical expression for “one of each” is: \[ (H_1 \,\&\, T_2) \vee (T_1 \,\&\, H_2).
\] This proposition is a disjunction: its main connective is \(\vee\). How do we calculate the probability of a disjunction? If \(A\) and \(B\) are mutually exclusive, then \(Pr(A \vee B) = Pr(A) + Pr(B)\). In this case the two sides of our disjunction are mutually exclusive: they describe the two opposite orders in which the head and the tail could come, and those can’t both happen. So we can apply the Addition Rule to calculate: \[ \begin{aligned} Pr((H_1 \,\&\, T_2) \vee (T_1 \,\&\, H_2)) &= Pr(H_1 \,\&\, T_2) + Pr(T_1 \,\&\, H_2)\\ &= 1/4 + 1/4\\ &= 1/2. \end{aligned} \] This completes the solution to our opening problem. We’ve now computed the three probabilities we wanted: In the process we introduced two central rules of probability, one for \(\,\&\,\) and one for \(\vee\). The Multiplication Rule for \(\,\&\,\) only applies when the propositions are independent. The Addition Rule for \(\vee\) only applies when the propositions are mutually exclusive. Why does the Addition Rule for \(\vee\) sentences only apply when the propositions are mutually exclusive? Well, imagine the weather forecast says there’s a \(90\%\) chance of rain in the morning, and there’s also a \(90\%\) chance of rain in the afternoon. What’s the chance it’ll rain at some point during the day, either in the morning or the afternoon? Writing \(M\) for rain in the morning and \(A\) for rain in the afternoon, if we calculate \(Pr(M \vee A) = Pr(M) + Pr(A)\), we get \(90\% + 90\% = 180\%\), which doesn’t make any sense. There can’t be a \(180\%\) chance of rain tomorrow. The problem is that \(M\) and \(A\) are not mutually exclusive. It could rain all day, both morning and afternoon. We’ll see the correct way to handle this kind of situation in Chapter 7. In the meantime just be careful: Only use the Addition Rule on mutually exclusive propositions. Figure 5.1: Mutually exclusive propositions don’t overlap. Exclusivity and independence can be hard to keep straight at first. One way to keep track of the difference is to remember that mutually exclusive propositions don’t overlap, but independent propositions usually do. Independence means the truth of one proposition doesn’t affect the chances of the other. So if you find out that \(A\) is true, \(B\) still has the same chance of being true. Which means there have to be some \(B\) possibilities within the \(A\) circle (unless the probability of \(A\) was zero to start with). Figure 5.2: Independent propositions do overlap (unless one of them has zero probability). So independence and exclusivity are very different. Generally speaking, exclusive propositions are not independent, and independent propositions are not exclusive. There is one exception. If \(Pr(A) = 0\), then \(A\) and \(B\) can be both independent and mutually exclusive. If they’re mutually exclusive, the probability of \(A\) just stays \(0\) after we learn \(B\). But otherwise, independence and mutual exclusivity are incompatible with one another. Another marker that may help you keep these two concepts straight: exclusivity is a concept of deductive logic. It’s about whether it’s possible for both propositions to be true (even if that possibility is very unlikely). But independence is a concept of inductive logic. It’s about whether one proposition being true changes the probability of the other being true. Figure 5.3: The Tautology Rule. Every point falls in either the \(A\) region or the \(\neg A\) region, so \(Pr(A \vee \neg A) = 1\). A tautology is a proposition that must be true, so its probability is always 1. \(Pr(T) = 1\) for every tautology \(T\). For example, \(A \vee \neg A\) is a tautology, so \(Pr(A \vee \neg A) = 1\). 
In terms of an Euler diagram, the \(A\) and \(\neg A\) regions together take up the whole diagram. To put it a bit colourfully, the red \(A\) region and the blue \(\neg A\) region add up to the whole diagram, so \(Pr(A \vee \neg A) = 1\). The flipside of a tautology is a contradiction, a proposition that can’t possibly be true. So it has probability 0. \(Pr(C) = 0\) for every contradiction \(C\). For example, \(A \,\&\, \neg A\) is a contradiction, so \(Pr(A \,\&\, \neg A) = 0\). In terms of our Euler diagram, there is no region where \(A\) and \(\neg A\) overlap. So the portion of the diagram devoted to \(A \,\&\, \neg A\) is nil: zero. Equivalent propositions are true under exactly the same circumstances (and false under exactly the same circumstances). So they have the same chance of being true (ditto false). Figure 5.4: The Equivalence Rule. The \(A \vee B\) region is identical to the \(B \vee A\) region, so they have the same probability. \(Pr(A) = Pr(B)\) if \(A\) and \(B\) are logically equivalent. For example, \(A \vee B\) is logically equivalent to \(B \vee A\), so \(Pr(A \vee B) = Pr(B \vee A)\). In terms of an Euler diagram, the \(A \vee B\) region is exactly the same as the \(B \vee A\) region: the red region. So both propositions take up the same amount of space in the diagram. In math and statistics books you’ll often see a lot of the same concepts from this chapter introduced in different language. Instead of propositions, they’ll discuss events, which are sets of possible outcomes. For example, the roll of a six-sided die has six possible outcomes: \(1, 2, 3, 4, 5, 6\). And the event of the die landing on an even number is the set \(\{2, 4, 6\}\). In this way of doing things, rather than consider the probability that a proposition \(A\) is true, we consider the probability that event \(E\) occurs. Instead of considering a conjunction of propositions like \(A \,\&\, B\), we consider the intersection of two events, \(E \cap F\). And so on. If you’re used to seeing probability presented this way, there’s an easy way to translate into logic-ese. For any event \(E\), there’s the corresponding proposition that event \(E\) occurs. And you can translate the usual set operations into logic as follows:
Table 5.1: Translating between events and propositions
Events | Propositions
\(E^c\) | \(\neg A\)
\(E \cap F\) | \(A \,\&\, B\)
\(E \cup F\) | \(A \vee B\)
We won’t use the language of events in this book. I’m just mentioning it in case you’ve come across it before and you’re wondering how it connects. If you’ve never seen it before, you can safely ignore this section. In this chapter we learned how to represent probabilities of propositions using the \(Pr(\ldots)\) operator. We also learned some fundamental rules of probability. There were three rules corresponding to the concepts of tautology, contradiction, and equivalence. And there were two rules corresponding to the connectives \(\,\&\,\) and \(\vee\). The restrictions on these two rules are essential. If you ignore them, you will get wrong answers.
What is the probability of each of the following propositions?
Give an example of each of the following:
For each of the following, say whether it is true or false.
Assume \(Pr(A \,\&\, B)=1/3\) and \(Pr(A \,\&\, \neg B)=1/5\). Answer each of the following:
Suppose \(A\) and \(B\) are independent, and \(A\) and \(C\) are mutually exclusive. Assume \(Pr(A) = 1/3\), \(Pr(B) = 1/6\), \(Pr(C) = 1/9\), and answer each of the following:
True or false? 
If \(Pr(A)=Pr(B)\), then \(A\) and \(B\) are logically equivalent.
Consider the following argument: If a coin is fair, then the probability of getting at least one heads in a sequence of four tosses is quite high: above 90%. Therefore, if a fair coin has landed tails three times in a row, the next toss will probably land heads. Answer each of the following questions.
Suppose a fair, six-sided die is rolled two times. What is the probability of it landing on the same number each time? Hint: calculate the probability of it landing on a different number each time. To do this, first count the number of possible ways the two rolls could turn out. Then count how many of these are “no-repeats.”
Same as the previous exercise but with four rolls instead of two. That is, suppose a fair, six-sided die is rolled four times. What is the probability of it landing on the same number four times in a row?
The Addition Rule can be extended to three propositions. If \(A\), \(B\), and \(C\) are all mutually exclusive with one another, then \[ Pr(A \vee B \vee C) = Pr(A) + Pr(B) + Pr(C).\] Explain why this rule is correct. Would the same idea extend to four mutually exclusive propositions? To five? (Hint: there’s more than one way to do this. You can use an Euler diagram. Or you can derive the new rule from the original one, by thinking of \(A \vee B \vee C\) as a disjunction of \(A \vee B\) and \(C\).)
You have a biased coin, where each toss has a \(3/5\) chance of landing heads. But each toss is independent of the others. Suppose you’re going to flip the coin \(1,000\) times. The first 998 tosses all land tails. What is the probability at least one of the last two flips will be tails?
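If you'd like to check the chapter's opening calculation and the die-rolling exercises by brute-force counting, here is a small sketch (Python, not part of the text; it simply enumerates equally likely outcomes):

from itertools import product
from fractions import Fraction

# Opening problem: two tosses of a fair coin.
sequences = list(product("HT", repeat=2))                                            # HH, HT, TH, TT
print(Fraction(sum(1 for s in sequences if s == ("H", "H")), len(sequences)))        # 1/4
print(Fraction(sum(1 for s in sequences if set(s) == {"H", "T"}), len(sequences)))   # 1/2

# Die exercises: probability that n rolls of a fair die all show the same number.
def prob_all_same(n):
    rolls = list(product(range(1, 7), repeat=n))
    return Fraction(sum(1 for r in rolls if len(set(r)) == 1), len(rolls))

print(prob_all_same(2))   # 1/6
print(prob_all_same(4))   # 1/216

Counting outcomes this way works precisely because each sequence of tosses or rolls is equally likely, which is what the Multiplication Rule for independent propositions guarantees.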
Let $N$ be a type ${\rm II}$ factor, with trace $\tau$. Consider its fundamental group$$ \mathcal{F}(N)= \{ \tau(p)/\tau(q) \ | \ p,q \text{ non-zero finite projections in } N \text{ and } pNp \simeq qNq \}. $$ Let $\alpha$ be a free ergodic measure preserving action of a countable ICC group $\Gamma$ on a $\sigma$-finite standard Borel measure space $(X,\mu)$. Then $L^{\infty}(X,\mu) \rtimes_{\alpha} \Gamma$ and $L(\Gamma)$ are type ${\rm II}$ factors. Question: Is $\mathcal{F}(L^{\infty}(X,\mu) \rtimes_{\alpha} \Gamma)$ a subgroup of $\mathcal{F}(L(\Gamma))$? I did not find a counterexample in the following reference: On the fundamental group of ${\rm II}_1$ factors and equivalence relations arising from group actions, by Sorin Popa and Stefaan Vaes. Application: A positive answer would solve the free group factor isomorphism problem. Proof: The group measure space construction $\mathcal{M} = L^{\infty}(\mathbb{S}^{1}, Leb) \rtimes_{\alpha} \mathbb{F}_{2}$ in this post is a ${\rm III}_1$ factor, so that its core $\widetilde{\mathcal{M}} = \mathcal{M} \rtimes_{\sigma} \mathbb{R} = L^{\infty}(\mathbb{S}^1 \times \mathbb{R}_{+}^*, Leb) \rtimes_{\widetilde{\alpha}} \mathbb{F}_2$ (see this answer) is a ${\rm II}_{\infty}$ factor of fundamental group $\mathbb{R}_{+}^*$ (moreover $\widetilde{\alpha}$ is free and ergodic). But by assumption $\mathcal{F}(\widetilde{\mathcal{M}})$ would be a subgroup of $\mathcal{F}(L(\mathbb{F}_{2}))$, so that $\mathcal{F}(L(\mathbb{F}_{2})) = \mathbb{R}_{+}^*$ also, implying that for all $n \ge 2$, $L(\mathbb{F}_{n}) \simeq L(\mathbb{F}_{2})$, by the works of Voiculescu and Radulescu. Bonus question: Can every subgroup of $\mathcal{F}(L(\Gamma))$ be realized as $\mathcal{F}(L^{\infty}(X,\mu) \rtimes_{\alpha} \Gamma)$? For this bonus question, I expect at most a counter-example (because a proof could be very hard). Naive approach for a positive answer to the main question: Let $N$ be $L^{\infty}(X,\mu) \rtimes_{\alpha} \Gamma$, as specified above. First of all, $L(\Gamma)$ can be taken as a subfactor of $N$. Take $t \in \mathcal{F}(N)$, then there are projections $p,q \in L(\Gamma) \subset N$ such that $\tau(p)/\tau(q) = t$. Then, by definition of the fundamental group, $pNp$ is isomorphic to $qNq$ (because the isom. class of such compression depends only on the trace of the projection). Let $\Phi: pNp \to qNq$ be an isomorphism. Then $\Phi(pL(\Gamma)p)$ = $qKq$, for some $K \subset N$. Can we choose $\Phi$ such that we can take $K = L(\Gamma)$? If so, $t$ is in $\mathcal{F}(L(\Gamma))$, and the result follows.
A particle moves along the x-axis so that at time t its position is given by $x(t) = t^3-6t^2+9t+11$ during what time intervals is the particle moving to the left? so I know that we need the velocity for that and we can get that after taking the derivative but I don't know what to do after that the velocity would than be $v(t) = 3t^2-12t+9$ how could I find the intervals Fix $c\in\{0,1,\dots\}$, let $K\geq c$ be an integer, and define $z_K=K^{-\alpha}$ for some $\alpha\in(0,2)$.I believe I have numerically discovered that$$\sum_{n=0}^{K-c}\binom{K}{n}\binom{K}{n+c}z_K^{n+c/2} \sim \sum_{n=0}^K \binom{K}{n}^2 z_K^n \quad \text{ as } K\to\infty$$but cannot ... So, the whole discussion is about some polynomial $p(A)$, for $A$ an $n\times n$ matrix with entries in $\mathbf{C}$, and eigenvalues $\lambda_1,\ldots, \lambda_k$. Anyways, part (a) is talking about proving that $p(\lambda_1),\ldots, p(\lambda_k)$ are eigenvalues of $p(A)$. That's basically routine computation. No problem there. The next bit is to compute the dimension of the eigenspaces $E(p(A), p(\lambda_i))$. Seems like this bit follows from the same argument. An eigenvector for $A$ is an eigenvector for $p(A)$, so the rest seems to follow. Finally, the last part is to find the characteristic polynomial of $p(A)$. I guess this means in terms of the characteristic polynomial of $A$. Well, we do know what the eigenvalues are... The so-called Spectral Mapping Theorem tells us that the eigenvalues of $p(A)$ are exactly the $p(\lambda_i)$. Usually, by the time you start talking about complex numbers you consider the real numbers as a subset of them, since a and b are real in a + bi. But you could define it that way and call it a "standard form" like ax + by = c for linear equations :-) @Riker "a + bi where a and b are integers" Complex numbers a + bi where a and b are integers are called Gaussian integers. I was wondering If it is easier to factor in a non-ufd then it is to factor in a ufd.I can come up with arguments for that , but I also have arguments in the opposite direction.For instance : It should be easier to factor When there are more possibilities ( multiple factorizations in a non-ufd... Does anyone know if $T: V \to R^n$ is an inner product space isomorphism if $T(v) = (v)_S$, where $S$ is a basis for $V$? My book isn't saying so explicitly, but there was a theorem saying that an inner product isomorphism exists, and another theorem kind of suggesting that it should work. @TobiasKildetoft Sorry, I meant that they should be equal (accidently sent this before writing my answer. Writing it now) Isn't there this theorem saying that if $v,w \in V$ ($V$ being an inner product space), then $||v|| = ||(v)_S||$? (where the left norm is defined as the norm in $V$ and the right norm is the euclidean norm) I thought that this would somehow result from isomorphism @AlessandroCodenotti Actually, such a $f$ in fact needs to be surjective. Take any $y \in Y$; the maximal ideal of $k[Y]$ corresponding to that is $(Y_1 - y_1, \cdots, Y_n - y_n)$. The ideal corresponding to the subvariety $f^{-1}(y) \subset X$ in $k[X]$ is then nothing but $(f^* Y_1 - y_1, \cdots, f^* Y_n - y_n)$. If this is empty, weak Nullstellensatz kicks in to say that there are $g_1, \cdots, g_n \in k[X]$ such that $\sum_i (f^* Y_i - y_i)g_i = 1$. Well, better to say that $(f^* Y_1 - y_1, \cdots, f^* Y_n - y_n)$ is the trivial ideal I guess. 
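For the velocity question at the top of this exchange, one way to finish the reasoning is to factor the velocity and solve the inequality $v(t) < 0$; here is a quick sketch with sympy (not part of the original discussion, and the variable names are purely illustrative):

import sympy

t = sympy.symbols('t', real=True)
v = sympy.diff(t**3 - 6*t**2 + 9*t + 11, t)          # v(t) = 3t^2 - 12t + 9
print(sympy.factor(v))                               # 3*(t - 1)*(t - 3)
print(sympy.solve_univariate_inequality(v < 0, t))   # (1 < t) & (t < 3)

The particle moves to the left exactly when the velocity is negative, i.e. for $1 < t < 3$.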
Hmm, I'm stuck again O(n) acts transitively on S^(n-1) with stabilizer at a point O(n-1) For any transitive G action on a set X with stabilizer H, G/H $\cong$ X set theoretically. In this case, as the action is a smooth action by a Lie group, you can prove this set-theoretic bijection gives a diffeomorphism
Miloslav Znojil For non-Hermitian equilateral q-pointed star-shaped quantum graphs of paper I [Can. J. Phys. 90, 1287 (2012), arXiv 1205.5211] we show that due to certain dynamical aspects of the model as controlled by the external, rotation-symmetric complex Robin boundary conditions, the spectrum is obtainable in a closed asymptotic-expansion form, in principle at least. Explicit formulae up to the second order are derived for illustration, and a few comments on their consequences are added. http://arxiv.org/abs/1411.3828 Quantum Physics (quant-ph); Spectral Theory (math.SP) Miloslav Znojil Among quantum systems with finite Hilbert space a specific role is played by systems controlled by non-Hermitian Hamiltonian matrices \(H\neq H^\dagger\) for which one has to upgrade the Hilbert-space metric by replacing the conventional form \(\Theta^{(Dirac)}=I\) of this metric by a suitable upgrade \(\Theta^{(non-Dirac)}\neq I\) such that the same Hamiltonian becomes self-adjoint in the new, upgraded Hilbert space, \(H=H^\ddagger=\Theta^{-1}H^\dagger\Theta\). The problems only emerge in the context of scattering, where the requirement of unitarity was found to imply the necessity of a non-locality in the interaction, compensated by important technical benefits in the short-range-nonlocality cases. In the present paper we show that, and why, these technical benefits (i.e., basically, the recurrent-construction availability of closed-form Hermitizing metrics \(\Theta^{(non-Dirac)}\)) can survive also in certain specific long-range-interaction models. http://arxiv.org/abs/1410.3583 Quantum Physics (quant-ph) Miloslav Znojil A non-Hermitian N-level quantum model with two free real parameters is proposed in which the bound-state energies are given as roots of an elementary trigonometric expression and in which they are, in a physical domain of parameters, all real. The wave function components are expressed as closed-form superpositions of two Chebyshev polynomials. In any eligible physical Hilbert space of finite dimension \(N<\infty\) our model is constructed as unitary with respect to an underlying Hilbert-space metric \(\Theta\neq I\). The simplest version of the latter metric is finally constructed, at any dimension N=2,3,…, in closed form. This version of the model may be perceived as an exactly solvable N-site lattice analogue of the \(N=\infty\) square well with complex Robin-type boundary conditions. At any \(N<\infty\) our closed-form metric becomes trivial (i.e., equal to the most common Dirac’s metric \(\Theta^{(Dirac)}=I\)) at the special, Hermitian-Hamiltonian-limit parameters. http://arxiv.org/abs/1409.3788 Mathematical Physics (math-ph); Quantum Physics (quant-ph) Miloslav Znojil It is shown that the toy-model-based considerations of loc. cit. (see also arXiv:1312.3395) are based on an incorrect, manifestly unphysical choice of the Hilbert space of admissible quantum states. A two-parametric family of all of the eligible correct and potentially physical Hilbert spaces of the model is then constructed. The implications of this construction are discussed. In particular, it is emphasized that contrary to the conclusions of loc. cit. there is no reason to believe that the current form of the PT-symmetric quantum theory should be false as a fundamental theory. 
http://arxiv.org/abs/1404.1555 Quantum Physics (quant-ph); High Energy Physics – Theory (hep-th); Mathematical Physics (math-ph) Geza Levai, Frantisek Ruzicka, Miloslav Znojil Three classes of finite-dimensional models of quantum systems exhibiting spectral degeneracies called quantum catastrophes are described in detail. Computer-assisted symbolic manipulation techniques are shown unexpectedly efficient for the purpose. http://arxiv.org/abs/1403.0723 Quantum Physics (quant-ph); Mathematical Physics (math-ph) D. Krejcirik, P. Siegl, M. Tater, J. Viola We propose giving the mathematical concept of the pseudospectrum a central role in quantum mechanics with non-Hermitian operators. We relate pseudospectral properties to quasi-Hermiticity, similarity to self-adjoint operators, and basis properties of eigenfunctions. The abstract results are illustrated by unexpected wild properties of operators familiar from PT-symmetric quantum mechanics. http://arxiv.org/abs/1402.1082 Spectral Theory (math.SP); Mathematical Physics (math-ph); Quantum Physics (quant-ph) Miloslav Znojil The elementary quadratic plus inverse sextic interaction containing a strongly singular repulsive core in the origin is made regular by a complex shift of coordinate \(x=s−i\epsilon\). The shift \(\epsilon>0\) is fixed while the value of s is kept real and potentially observable, \(s∈(−\infty,\infty)\). The low-lying energies of bound states are found in closed form for the large couplings g. Within the asymptotically vanishing \(\mathcal{O}(g^{−1/4})\) error bars these energies are real so that the time-evolution of the system may be expected unitary in an {\em ad hoc} physical Hilbert space. http://arxiv.org/abs/1401.1435 Quantum Physics (quant-ph) Miloslav Znojil The practical use of non-Hermitian (i.e., typically, PT-symmetric) phenomenological quantum Hamiltonians is discussed as requiring an explicit reconstruction of the ad hoc Hilbert-space metrics which would render the time-evolution unitary. Just the N-dimensional matrix toy models Hamiltonians are considered, therefore. For them, the matrix elements of alternative metrics are constructed via solution of a coupled set of polynomial equations, using the computer-assisted symbolic manipulations for the purpose. The feasibility and some consequences of such a model-construction strategy are illustrated via a discrete square well model endowed with multi-parametric close-to-the-boundary real bidiagonal-matrix interaction. The degenerate exceptional points marking the phase transitions are then studied numerically. A way towards classification of their unfoldings in topologically non-equivalent dynamical scenarios is outlined. http://arxiv.org/abs/1305.4822 Quantum Physics (quant-ph) Miloslav Znojil The answer is yes. Via an elementary, exactly solvable crypto-Hermitian example it is shown that inside the interval of admissible couplings the innocent-looking point of a smooth unavoided crossing of the eigenvalues of Hamiltonian $H$ may carry a dynamically nontrivial meaning of a phase-transition boundary or “quantum horizon”. Passing this point requires a change of the physical Hilbert-space metric $\Theta$, i.e., a thorough modification of the form and of the interpretation of the operators of all observables. http://arxiv.org/abs/1303.4876 Quantum Physics (quant-ph); Mathematical Physics (math-ph) Miloslav Znojil A compact review is given, and a few new numerical results are added to the recent studies of the q-pointed one-dimensional star-shaped quantum graphs. 
These graphs are assumed endowed with certain specific, manifestly non-Hermitian point interactions, localized either in the inner vertex or in all of the outer vertices and carrying, in the latter case, an interesting zero-inflow interpretation. http://arxiv.org/abs/1303.4331 Quantum Physics (quant-ph)
The Blum-Blum-Shub generator is a deterministic Pseudo-Random Bit Generator with security reducible to that of integer factorization. Setup: Secretly choose random primes $P$, $Q$, with $P\equiv Q\equiv 3\pmod4$, and compute $N=P\cdot Q$. Secretly choose a random seed $x_0$ in $[1\dots N-1]$ with $\gcd(x_0,N)=1$. Use: To generate the $i$-th bit, compute $x_i=x_{i-1}^2\bmod N$, and output the low-order bit of $x_i$. In that definition, BBS produces 1 bit per iteration. For a given $N$, how much can this be improved while maintaining security demonstrably reducible to factorization of $N$ (or determining quadratic residuosity $\bmod N$)? This is discussed in Vazirani & Vazirani: Efficient and Secure Pseudo-Random Number Generation, with proof that the low 2 bits can be safely extracted, and even (if I get it correctly) $\log n$ bits where $n=\lg_2 N$. However the authors "notice that in all the proofs, $\log n$ can be replaced by $c\cdot\log n$, for any constant $c$". Note 5.41 in the HAC gives it as $c\cdot\lg_2 n$ bits and warns that "for a modulus $N$ of a fixed bitlength (eg. $n=$1024 bits), an explicit range of values of $c$ for which the resulting generator is cryptographically secure under the intractability assumption of the integer factorization problem has not been determined".
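For concreteness, the one-bit-per-iteration construction described above can be sketched in a few lines of Python. This is a toy illustration only: the primes are tiny, hand-picked values (so the parameters are utterly insecure), and it extracts just the single low-order bit, which is exactly the baseline the question asks to improve on.

from math import gcd

# Toy Blum integers: both primes are congruent to 3 mod 4 (far too small for real use).
P, Q = 499, 547
N = P * Q

def bbs_bits(seed, count):
    assert gcd(seed, N) == 1
    x = seed
    bits = []
    for _ in range(count):
        x = (x * x) % N          # x_i = x_{i-1}^2 mod N
        bits.append(x & 1)       # output the low-order bit of x_i
    return bits

print(bbs_bits(seed=159201, count=16))

A multi-bit variant would take several of the least significant bits of each $x_i$ instead of one; the open issue quoted from the HAC is about how many such bits per iteration keep the security reduction intact.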
A Relation of Nonzero Row Vectors and Column Vectors Problem 406 Let $A$ be an $n\times n$ matrix. Suppose that $\mathbf{y}$ is a nonzero row vector such that\[\mathbf{y}A=\mathbf{y}.\](Here a row vector means a $1\times n$ matrix.)Prove that there is a nonzero column vector $\mathbf{x}$ such that\[A\mathbf{x}=\mathbf{x}.\](Here a column vector means an $n \times 1$ matrix.) We give two proofs. The first proof does not use the theory of eigenvalues and the second one uses it. Proof 1.(Without the theory of eigenvalues) Let $I$ be the $n\times n$ identity matrix. Then we have\begin{align*}\mathbf{0}_{1\times n}=\mathbf{y}A-\mathbf{y}=\mathbf{y}(A-I),\end{align*}where $\mathbf{0}_{1\times n}$ is the row zero vector. Taking the transpose, we have\begin{align*}\mathbf{0}_{n\times 1}&=\mathbf{0}_{1\times n}^{\trans}=\left(\,\mathbf{y}(A-I) \,\right)^{\trans}\\&=(A-I)^{\trans}\mathbf{y}^{\trans}.\end{align*} Since the vector $\mathbf{y}$ is nonzero, the transpose $\mathbf{y}^{\trans}$ is a nonzero column vector.Thus, the above equality yields that the matrix $(A-I)^{\trans}$ is singular.It follows that the matrix $A-I$ is singular as well.Hence there exists a nonzero column vector $\mathbf{x}$ such that\[(A-I)\mathbf{x}=\mathbf{0}_{n\times 1},\]and consequently we have\[A\mathbf{x}=\mathbf{x}\]for a nonzero column vector $\mathbf{x}$. Proof 2. (Using the theory of eigenvalues) Taking the transpose of both sides of the identity $\mathbf{y}A=\mathbf{y}$, we obtain\[A^{\trans}\mathbf{y}^{\trans}=\mathbf{y}^{\trans}.\]Since $\mathbf{y}$ is a nonzero row vector, $\mathbf{y}^{\trans}$ is a nonzero column vector.It follows that $1$ is an eigenvalue of the matrix $A^{\trans}$ and $\mathbf{y}^{\trans}$ is a corresponding eigenvector. Since the matrices $A$ and $A^{\trans}$ have the same eigenvalues, we deduce that the matrix $A$ has an eigenvalue $1$.(See part (b) of the post “Transpose of a matrix and eigenvalues and related questions.“.)Let $\mathbf{x}$ be an eigenvector corresponding to the eigenvalue $1$ (by definition $\mathbf{x}$ is nonzero). Then we have\[A\mathbf{x}=\mathbf{x}\]as required. Subspaces of Symmetric, Skew-Symmetric MatricesLet $V$ be the vector space over $\R$ consisting of all $n\times n$ real matrices for some fixed integer $n$. Prove or disprove that the following subsets of $V$ are subspaces of $V$.(a) The set $S$ consisting of all $n\times n$ symmetric matrices.(b) The set $T$ consisting of […] Rotation Matrix in Space and its Determinant and EigenvaluesFor a real number $0\leq \theta \leq \pi$, we define the real $3\times 3$ matrix $A$ by\[A=\begin{bmatrix}\cos\theta & -\sin\theta & 0 \\\sin\theta &\cos\theta &0 \\0 & 0 & 1\end{bmatrix}.\](a) Find the determinant of the matrix $A$.(b) Show that $A$ is an […] Eigenvalues of a Matrix and its Transpose are the SameLet $A$ be a square matrix.Prove that the eigenvalues of the transpose $A^{\trans}$ are the same as the eigenvalues of $A$.Proof.Recall that the eigenvalues of a matrix are roots of its characteristic polynomial.Hence if the matrices $A$ and $A^{\trans}$ […] Find All Values of $x$ so that a Matrix is SingularLet\[A=\begin{bmatrix}1 & -x & 0 & 0 \\0 &1 & -x & 0 \\0 & 0 & 1 & -x \\0 & 1 & 0 & -1\end{bmatrix}\]be a $4\times 4$ matrix. Find all values of $x$ so that the matrix $A$ is singular.Hint.Use the fact that a matrix is singular if and only […]
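A quick numerical illustration of Proof 1 above (a sketch with an arbitrarily chosen matrix, not part of the post): a row-stochastic matrix $A$ has a nonzero row vector $\mathbf{y}$ (its stationary distribution) with $\mathbf{y}A=\mathbf{y}$, and a column vector $\mathbf{x}$ with $A\mathbf{x}=\mathbf{x}$ can be read off from the null space of $A-I$.

import numpy as np
from scipy.linalg import null_space

# Each row sums to 1, so y A = y has a nonzero row-vector solution (the stationary distribution).
A = np.array([[0.5, 0.3, 0.2],
              [0.1, 0.8, 0.1],
              [0.4, 0.4, 0.2]])

x = null_space(A - np.eye(3))      # columns span {x : (A - I) x = 0}, i.e. A x = x
print(x)                           # a nonzero column vector (proportional to all-ones here)
print(np.allclose(A @ x, x))       # True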
$L^q$-Extensions of $L^p$-spaces by fractional diffusion equations
1. Department of Mathematics and Department of Computer Science, Georgetown University, Washington D.C. 20057
2. Department of Mathematics and Statistics, Memorial University of Newfoundland, St. John's, NL A1C 5S7, Canada
Mathematics Subject Classification: Primary: 31C15, 35K08, 35K9.
Citation: Der-Chen Chang, Jie Xiao. $L^q$-Extensions of $L^p$-spaces by fractional diffusion equations. Discrete & Continuous Dynamical Systems - A, 2015, 35 (5) : 1905-1920. doi: 10.3934/dcds.2015.35.1905
Let $x$ be an element in $F$. We want to show that there exists $a, b\in F$ such that\[x=a^2+b^2.\] Since $F$ is a finite field, the characteristic $p$ of the field $F$ is a prime number. If $p=2$, then the map $\phi:F\to F$ defined by $\phi(a)=a^2$ is a field homomorphism (the Frobenius endomorphism of $F$). As a field homomorphism it is injective, and since $F$ is finite it is also surjective. Thus, for any element $x\in F$, there exists $a\in F$ such that $\phi(a)=x$.Hence $x$ can be written as the sum of two squares $x=a^2+0^2$. Now consider the case $p > 2$.We consider the map $\phi:F^{\times}\to F^{\times}$ defined by $\phi(a)=a^2$. The image of $\phi$ is the subset of $F^{\times}$ consisting of the elements that can be written as $a^2$ for some $a\in F^{\times}$. If $\phi(a)=\phi(b)$, then we have\[0=a^2-b^2=(a-b)(a+b).\]Hence we have $a=b$ or $a=-b$.Since $b \neq 0$ and $p > 2$, we know that $b\neq -b$.Thus the map $\phi$ is a two-to-one map. Thus, there are $\frac{|F^{\times}|}{2}=\frac{|F|-1}{2}$ square elements in $F^{\times}$.Since $0$ is also a square in $F$, there are\[\frac{|F|-1}{2}+1=\frac{|F|+1}{2}\]square elements in the field $F$. Put\[A:=\{a^2 \mid a\in F\}.\]We just observed that $|A|=\frac{|F|+1}{2}$. Fix an element $x\in F$ and consider the subset\[B:=\{x-b^2 \mid b\in F\}.\]Clearly $|B|=|A|=\frac{|F|+1}{2}$. Observe that both $A$ and $B$ are subsets of $F$ and\[|A|+|B|=|F|+1 > |F|,\]and hence $A$ and $B$ cannot be disjoint. Therefore, there exists $a, b \in F$ such that $a^2=x-b^2$, or equivalently,\[x=a^2+b^2.\] Hence each element $x\in F$ is the sum of two squares. Explicit Field Isomorphism of Finite Fields(a) Let $f_1(x)$ and $f_2(x)$ be irreducible polynomials over a finite field $\F_p$, where $p$ is a prime number. Suppose that $f_1(x)$ and $f_2(x)$ have the same degrees. Then show that fields $\F_p[x]/(f_1(x))$ and $\F_p[x]/(f_2(x))$ are isomorphic.(b) Show that the polynomials […] Prove that $\F_3[x]/(x^2+1)$ is a Field and Find the Inverse ElementsLet $\F_3=\Zmod{3}$ be the finite field of order $3$.Consider the ring $\F_3[x]$ of polynomial over $\F_3$ and its ideal $I=(x^2+1)$ generated by $x^2+1\in \F_3[x]$.(a) Prove that the quotient ring $\F_3[x]/(x^2+1)$ is a field. How many elements does the field have?(b) […] Prove that any Algebraic Closed Field is InfiniteProve that any algebraic closed field is infinite.Definition.A field $F$ is said to be algebraically closed if each non-constant polynomial in $F[x]$ has a root in $F$.Proof.Let $F$ be a finite field and consider the polynomial\[f(x)=1+\prod_{a\in […]
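The counting argument above is easy to confirm by brute force on small prime fields $\F_p$ (a sketch covering only the fields $\F_p$, not general finite fields):

def every_element_is_a_sum_of_two_squares(p):
    squares = {a * a % p for a in range(p)}                  # the set A = {a^2}
    return all(any((x - s) % p in squares for s in squares)  # x = a^2 + b^2 for some a, b
               for x in range(p))

print(all(every_element_is_a_sum_of_two_squares(p) for p in [2, 3, 5, 7, 11, 13, 97]))  # True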
This article is all about the basics of probability. There are two interpretations of a probability, but the difference only matters when we consider inference: the frequency interpretation, and the degree-of-belief interpretation.
Axioms of Probability
A function \(P\) which assigns a value \(P(A)\) to every event \(A\) is a probability measure or probability distribution if it satisfies the following three axioms. \(P(A) \geq 0 \text{ } \forall \text{ } A\) \(P(\Omega) = 1\) If \(A_1, A_2, …\) are disjoint then \(P(\bigcup_{i=1}^{\infty} A_i) = \sum_{i=1}^{\infty} P(A_i) \) These axioms give rise to the following five properties. \(P(\emptyset) = 0\) \(A \subset B \Rightarrow P(A) \leq P(B)\) \(0 \leq P(A) \leq 1\) \(P(A^\mathsf{c}) = 1 – P(A)\) \(A \cap B = \emptyset \Rightarrow P(A \cup B) = P(A) + P(B)\)
The Sample Space
The sample space, \(\Omega\), is the set of all possible outcomes, \(\omega\). Subsets of \(\Omega\) are events. The empty set \(\emptyset\) contains no elements.
Example – Tossing a coin. Toss a coin once: \(\Omega = \{H, T\}\). Toss a coin twice: \(\Omega = \{HH, HT, TH, TT\}\). Then the event that the first toss is heads: \(A = \{HH, HT\}\).
Set Operations – Complement, Union and Intersection
Complement: Given an event \(A\), the complement of \(A\) is \(A^\mathsf{c}\), where \(A^\mathsf{c} = \{\omega \in \Omega : \omega \notin A\}\).
Union: The union of two sets A and B, \(A \cup B\), is the set of the events which are in either A, or in B, or in both.
Intersection: The intersection of two sets A and B, \(A \cap B\), is the set of the events which are in both A and B.
Difference Set: The difference set is the set of events in one set which are not in the other: \(A \setminus B = \{\omega : \omega \in A, \omega \notin B\}\).
Subsets: If every element of A is contained in B then A is a subset of B: \(A \subset B\), or equivalently, \(B \supset A\).
Counting elements: If A is a finite set, then \(|A|\) denotes the number of elements in A.
Indicator function: An indicator function can be defined: \(I_A(\omega) = 1\) if \(\omega \in A\), and \(I_A(\omega) = 0\) otherwise.
Disjoint events: Two events A and B are disjoint or mutually exclusive if \(A \cap B = \emptyset\) (the empty set) – i.e. there are no events in both A and B. More generally, \(A_1, A_2, \dots\) are disjoint if \(A_i \cap A_j = \emptyset\) whenever \(i \neq j\).
Example – intervals of the real line: The intervals are disjoint. The intervals are not disjoint. For example, .
Partitions: A partition of the sample space is a set of disjoint events \(A_1, A_2, \dots\) such that \(\bigcup_{i=1}^{\infty} A_i = \Omega\).
Monotone increasing and monotone decreasing sequences: A sequence of events \(A_1, A_2, \dots\) is monotone increasing if \(A_1 \subset A_2 \subset \cdots\). Here we define \(\lim_{n\to\infty} A_n = \bigcup_{i=1}^{\infty} A_i\) and write \(A_n \rightarrow A\). Similarly, a sequence of events \(A_1, A_2, \dots\) is monotone decreasing if \(A_1 \supset A_2 \supset \cdots\). Here we define \(\lim_{n\to\infty} A_n = \bigcap_{i=1}^{\infty} A_i\). Again we write \(A_n \rightarrow A\).
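For a finite sample space with equally likely outcomes, the axioms and the derived properties can be verified mechanically; the following is an illustrative sketch (not part of the article) using a fair six-sided die.

from fractions import Fraction

omega = frozenset(range(1, 7))            # sample space of one roll of a fair die

def P(event):
    return Fraction(len(set(event) & omega), len(omega))

A = {1, 2}
B = {5, 6}                                # disjoint from A

assert P(omega) == 1                      # axiom 2
assert P(A | B) == P(A) + P(B)            # additivity for disjoint events
assert P(omega - A) == 1 - P(A)           # derived property: P(A^c) = 1 - P(A)
print("all checks pass")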
ISSN: 1078-0947 eISSN: 1553-5231 All Issues Discrete & Continuous Dynamical Systems - A April 2007 , Volume 17 , Issue 2 Select all articles Export/Reference: Abstract: This special issue of DCDS is dedicated to Carlos Gutierrez and Marco Antonio Teixeira, on the occasion of their 60th birthday. Born in Peru, C. Gutierrez obtained his Ph.D. degree from IMPA in 1974, under the supervision of Jorge Sotomayor. In the same year he began to serve as a researcher there. He retired from IMPA and now is a full professor at ICMC-University of São Paulo (USP), where his leadership has been crucial to the development of the research on dynamical systems. For more information please click the “Full Text” above. Abstract: We prove that the shadowing property does not hold for diffeomorphisms in an open and dense subset of the set of $C^1$-robustly non-hyperbolic transitive diffeomorphisms (i.e., diffeomorphisms with a $C^1$-neighborhood consisting of non-hyperbolic transitive diffeomorphisms). Abstract: We study, from a new point of view, families of planar vector fields without singularities $ \{ X_{\mu}$  :  $-\varepsilon < \mu < \varepsilon\} $ defined on the complement of an open ball centered at the origin such that, at $\mu=0$, infinity changes from repellor to attractor, or vice versa. We also study a sort of local stability of some $C^1$ planar vector fields around infinity. Abstract: The statistical analysis of the structurally stable quadratic vector fields made in [4] shows that the phase portrait 7.1 (see Figure 1) appears without limit cycles, when the other three phase portraits in the same family with low probability sometimes appear with limit cycles. Here we prove that quadratic vector fields having the phase portrait 7.1 have no limit cycles. Abstract: We consider the set $\F$ of the $C^1$-maps $f:\S^1 \to \S^1$ which are of degree 2, uniformly expanding except for a small interval and such that the origin is a fixed critical point. Fixing a $f \in \F$, we show that, for a generic $\alpha$-Hölder potential $A:\S^1 \to \RR$, the $f$-invariant probability measure that maximizes the action given by the integral of $A$ is unique and uniquely ergodic on its support. Furthermore, restricting $f$ to a suitable subset of $\S^1$, we also show that this measure is supported on periodic orbits. Our main tool is the existence of a $\alpha$-Hölder sub-action function for each fixed $f \in \F$ and each fixed potential $A$. We also show that these results can be applied to the special potential $A=\log|f'|$. Abstract: We extend the Hartman-Grobman theorems for discrete random dynamical systems (RDS), proved in [7], in two directions: for continuous RDS and for hyperbolic stationary trajectories. In this last case there exists a conjugacy between travelling neighborhoods of trajectories and neighborhoods of the origin in the corresponding tangent bundle. We present applications to deterministic dynamical systems. Abstract: This paper studies the behavior of lines of curvature near umbilic points that appear generically on surfaces depending on two parameters. Abstract: This paper studies the differential equation $\dot z=f(z)$, where $f$ is an analytic function in $\mathbb C$ except, possibly, at isolated singularities. We give a unify treatment of well known results and provide new insight into the local normal forms and global properties of the solutions for this family of differential equations. 
Abstract: We present an algorithm which determines global conditions for a class of discontinuous vector fields in 4D (called polynomial relay systems) to have periodic orbits. We present explicit results relying on constructive proofs, which involve classical Effective Algebraic Geometry algorithms. Abstract: We study geometric properties of the integral curves of an implicit differential equation in a neighbourhood of a codimension $\le 1$ singularity. We also deal with the way these singularities bifurcate in generic families of equations and the changes in the associated geometry. The main tool used here is the Legendre transformation. Abstract: In this note we present a simple computable criteria that assures the existence of hyperbolic horseshoes for certain diffeomorphisms of the torus. The main advantage of our method is that it is very easy to check numerically whether the criteria is satisfied or not. Abstract: We obtain results on existence and continuity of physical measures through equilibrium states and apply these to non-uniformly expanding transformations on compact manifolds with non-flat critical sets, deducing sufficient conditions for continuity of physical measures and, for local diffeomorphisms, necessary and sufficient conditions for stochastic stability. In particular we show that, under certain conditions, stochastically robust non-uniform expansion implies existence and continuous variation of physical measures. Abstract: We present some results and one open question on the existence of polynomial inverse integrating factors for polynomial vector fields. Abstract: Using the half-Reeb component technique as introduced in [10], we try to clarify the intrinsic relation between the injectivity of differentiable local homeomorphisms $X$ of $R^2$ and the asymptotic behavior of real eigen-values of derivations $DX(x)$. The main result shows that a differentiable local homeomorphism $X$ of $R^2$ is injective and that its image $X(R^2)$ is a convex set if $X$ satisfies the following condition: (*) There does not exist a sequence $R^2$ ∋ $x_i\rightarrow \infty$ such that $X(x_i)\rightarrow a\in \R^2$ and $DX(x_i)$ has a real eigenvalue $\lambda _i\rightarrow 0$.When the graph of $X$ is an algebraic set, this condition becomes a necessary and sufficient condition for $X$ to be a global diffeomorphism. Abstract: We show that there are examples of expansive, non-Anosov geodesic flows of compact surfaces with non-positive curvature, where the Livsic Theorem holds in its classical (continuous, Hölder) version. We also show that such flows have continuous subaction functions associated to Hölder continuous observables. Abstract: In this paper we obtain some non-linear analogues of Schur's theorem asserting that a finitely generated subgroup of a linear group all of whose elements have finite order is, in fact, finite. The main result concerns groups of symplectomorphisms of certain manifolds of dimension $4$ including the torus $T^4$. Abstract: A lot of partial results are known about the Liénard differential equations : $\dot x= y -F_a^n(x),\ \ \dot y =-x.$ Here $F_a^n$ is a polynomial of degree $2n+1,\ \ F_a^n(x)= \sum_{i=1}^{2n}a_ix^i+x^{2n+1},$ where $a = (a_1,\cdots,a_{2n}) \in \R^{2n}.$ For instance, it is easy to see that for any $a$ the related vector field $X_a$ has just a finite number of limit cycles. This comes from the fact that $X_a$ has a global return map on the half-axis $Ox=\{x \geq 0\},$ and that this map is analytic and repelling at infinity. 
It is also easy to verify that at most $n$ limit cycles can bifurcate from the origin. For these reasons, Lins Neto, de Melo and Pugh have conjectured that the total number of limit cycles is also bounded by $n,$ in the whole plane and for any value $a.$ In fact it is not even known if there exists a finite bound $L(n),$ independent of $a,$ for the number of limit cycles. In this paper, I want to investigate this question of finiteness. I show that there exists a finite bound $L(K,n)$ if one restricts the parameter to a compact set $K,$ and that there is a natural way to put a boundary on the space of Liénard equations. This boundary is made of slow-fast equations of Liénard type, obtained as singular limits of the Liénard equations for large values of the parameter. Then the existence of a global bound $L(n)$ can be related to the finiteness of the number of limit cycles which bifurcate from slow-fast cycles of these singular equations.
Formula Constrained Optimization
For example, a quarter-wave stack with $m$ pairs and QWOT thicknesses equal to $a$ and $b$ can be represented as \((aH\,bL)^m\). We can consider some set of possible values for the integer parameters, and for every possible combination OptiLayer will try to find optimal values of the continuous parameters $a$ and $b$, optimizing the design performance with respect to the loaded targets. Another example may be a stack having a period with varying optical thicknesses. Ultra-wide range high reflectors, chirp mirrors and other coatings can be designed in this way.
Example. High reflector operating in the spectral range from 400 nm to 900 nm. Refractive indices of layer materials: \(n_L=1.45\) for the low-index material; glass substrate. The width of the first high reflection zone of a quarter wave mirror with the central wavelength \(\lambda_0\) can be estimated with the help of a known formula: \(\Delta=\lambda_u-\lambda_l,\;\;\displaystyle\frac{\lambda_u}{\lambda_l}=\frac{\pi+\arccos(-\xi)}{\pi-\arccos(-\xi)},\) \(\displaystyle\xi=\frac{n_H^2+n_L^2-6 n_Hn_L}{(n_H+n_L)^2},\) where \(\lambda_u\) and \(\lambda_l\) are the upper and lower boundaries of the high reflection zone. If the central wavelength is 650 nm, then the width of the high reflection zone of a quarter wave mirror is about 200 nm. In our design problem we need to cover a spectral range of 400-900 nm, i.e. we need a high reflection zone of 500 nm. It means that we need to combine several quarter wave or near quarter wave stacks.
Illustration of the design process of a wide band reflector.
Assuming that \(\lambda_l=400\) nm and using the formulas above, it is possible to estimate that three quarter wave stacks are required. The corresponding widths of the high reflection zones are: \(\Delta_1=140\) nm, \(\Delta_2=200\) nm, \(\Delta_3=260\) nm. Then we need a design formula of the form \(c_1 H \;d_1 L\; (a_1H \;b_1L)^{m_1} \;c_2H \;d_2L \;(a_2H\; b_2L)^{m_2}\; c_3H \;d_3L \;(a_3H\;b_3L)^{m_3} \;c_4H.\) In OptiLayer, design formulas and their parameters can be specified, and OptiLayer will consider all possible combinations of the specified integer parameters \(m_1, m_2, m_3\); for each combination, OptiLayer optimizes the design with respect to the continuous parameters \(a_1, b_1, c_1, \ldots\) All calculated designs are stored. This design example is illustrated in our video example "High Reflector Part II" on YouTube.
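The zone-width estimate can be reproduced numerically. In the sketch below the high-index material is assumed to have \(n_H = 2.35\) (only \(n_L = 1.45\) is stated on this page, so that value is an assumption chosen as a typical high-index coating material); with it, a band-edge relation equivalent to the formula above gives a first high-reflection zone of roughly 200 nm around 650 nm.

import math

n_H, n_L = 2.35, 1.45        # n_H is an assumed value; the page only gives n_L
lam0 = 650.0                 # nm, central wavelength of the quarter-wave stack

xi = (n_H**2 + n_L**2 - 6 * n_H * n_L) / (n_H + n_L)**2
ratio = (math.pi + math.acos(-xi)) / (math.pi - math.acos(-xi))   # lambda_u / lambda_l

# At the band edges the reduced wavenumber g = lam0 / lam satisfies cos(pi * g) = xi.
lam_u = lam0 * math.pi / math.acos(xi)
lam_l = lam0 * math.pi / (2 * math.pi - math.acos(xi))
print(round(ratio, 3), round(lam_l), round(lam_u), round(lam_u - lam_l))
# ~1.36  ~564  ~767  ~202 nm, i.e. a zone of about 200 nm, as stated in the text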
Expectation Maximization
Last updated on 02-15-2018.
Table of Contents: Introduction: Density estimation; Jensen Inequality; EM Algorithm Formalization; Towards deeper understanding of EM: Evidence Lower Bound (ELBO); Applying EM on Gaussian Mixtures; Real example.
Note: All the materials below are based on the excellent lecture videos by Prof. Andrew Ng, along with the lecture notes.

import numpy as np
import scipy as sp
import scipy.stats  # used later via sp.stats.multivariate_normal
import seaborn as sns
from matplotlib import pyplot as plt
%matplotlib inline

Introduction: Density estimation
We start by defining a problem in the context of unsupervised learning. Suppose we have 1-dimensional data, denoted by $\mathbf{x} = (x^{(1)}, …, x^{(m)})$, shown as below.

N = 100
mu_arr = np.array([1, 10])
sigma_arr = np.array([1, 1])
x = np.append(np.random.normal(mu_arr[0], sigma_arr[0], N),
              np.random.normal(mu_arr[1], sigma_arr[1], N))
x[:10]

array([ 1.00612457,  2.06217951,  1.08963858,  2.97049835,  1.01856583,
       -0.12178341,  0.68514333,  0.4269212 ,  0.43700963, -0.21670371])

fig, ax = plt.subplots(figsize=(12,3))
ax.scatter(x[:N], np.zeros(N), c='green', marker=2, s=150)
ax.scatter(x[N:], np.zeros(N), c='orange', marker='x', s=150)
ax.set_ylim([-1e-4, 1e-4])
_ = ax.set_yticks([])

sns.distplot(x[:N], color='green')
sns.distplot(x[N:], color='orange')

However, we may not know the real labels for each data point in reality. This, naturally, introduces a latent (a.k.a. hidden/unobserved) variable called $\mathbf{z} = (z^{(1)}, …, z^{(m)})$, which is multinomial: $z^{(i)}$ indicates which distribution a specific point $x^{(i)}$ belongs to. In the example above, $z^{(i)}$ actually follows a Bernoulli, since there are only two possible outcomes. In this case, we have a model: $p(\mathbf{x},\mathbf{z}; \Theta)$, where only $\mathbf{x}$ is observed. Our goal is to maximize $L(\Theta)=\prod_{i}p(x^{(i)};\Theta)$. It is common to instead maximize the log likelihood: $\ell(\Theta)=\sum_{i}ln~p(x^{(i)}; \Theta)$. This is also called the incomplete data log likelihood because we do not know the latent variable $\mathbf{z}$ that indicates each data point’s membership (the density to which a data point belongs). To gain more insight on why this problem is difficult, we decompose the log likelihood: $\ell(\Theta) = \sum_{i} ln~\sum_{z^{(i)}} p(x^{(i)}, z^{(i)}; \Theta)$. Note that there is a summation over $z^{(i)}$ inside the logarithm. Even if the individual joint probability distributions are in the exponential families, the summation still makes the derivative intractable.
Jensen Inequality
Before we talk about how the EM algorithm can help us solve the intractability, we need to introduce Jensen's inequality. This will be used later to construct a (tight) lower bound of the log likelihood. Definition: Let function $f$ be a convex function (e.g., $f'' \geq 0$). Let $x$ be a random variable. Then we have $f(E[x]) \leq E[f(x)]$. Further, $f$ is strictly convex if $f'' > 0$. In such a case, $f(E[x]) = E[f(x)]$ if and only if $x=E[x]$ (i.e., $x$ is a constant). On the other hand, $f$ is concave if $f'' \leq 0$, and we have $f(E[x]) \geq E[f(x)]$. A useful example (that will be applied in the EM algorithm) is that $f(x) = ln~x$ is strictly concave for $x > 0$. 
Proof: for $x > 0$ we have $f'(x) = 1/x$ and $f''(x) = -1/x^2 < 0$, so $f(x) = ln~x$ is strictly concave. Therefore, we have $ln~E[x] \geq E[ln~x]$. Let’s go with a concrete example by plotting $f(x) = ln~x$:

x = np.linspace(.1, 5, 100)
y = np.log(x)
fig, ax = plt.subplots(figsize=(8,4))
ax.scatter(x, y, color='green', marker='+')
## 1 (p=0.5)
ax.vlines(x=1, ymin=-2.5, ymax=0, linestyle='--', color='green')
ax.hlines(y=0, xmin=-.1, xmax=1, linestyle='--', color='green')
ax.text(x=1.1, y=np.log(.7), s='$f(1)=0$', fontdict={'size': 13})
## 4 (p=0.5)
ax.vlines(x=4, ymin=-2.5, ymax=np.log(4), linestyle='--', color='green')
ax.hlines(y=np.log(4), xmin=-.1, xmax=4, linestyle='--', color='green')
ax.text(x=4.1, y=np.log(3.1), s='$f(4)=%.2f$'%(np.log(4)), fontdict={'size': 13})
ax.plot([1, 4], np.log([1, 4]), color='green', linestyle='dotted')
# E(x) = (1+4)/2 = 2.5
ax.vlines(x=2.5, ymin=-2.5, ymax=np.log(2.5), linestyle='--', color='green')
ax.hlines(y=np.log(2.5), xmin=-.1, xmax=2.5, linestyle='--', color='green')

EM Algorithm Formalization
Derivation
Now we can formalize the problem, starting from the log likelihood from Section 1: $\ell(\Theta) = \sum_i ln~\sum_{z^{(i)}} p(x^{(i)}, z^{(i)}; \Theta) = \sum_i ln~\sum_{z^{(i)}} q_i(z^{(i)}) \frac{p(x^{(i)}, z^{(i)}; \Theta)}{q_i(z^{(i)})}$, where $q_i(z^{(i)})$ is any arbitrary probability distribution for $z^{(i)}$ and thus: $q_i(z^{(i)}) \geq 0; \sum_{z^{(i)}}~q_i(z^{(i)}) = 1$. It is interesting (actually important) to note that the summation inside the logarithm, $\sum_{z^{(i)}} q_i(z^{(i)}) \frac{p(x^{(i)}, z^{(i)}; \Theta)}{q_i(z^{(i)})}$, takes the form of an expectation of $\frac{p(x^{(i)}, z^{(i)}; \Theta)}{q_i(z^{(i)})}$, given $x^{(i)}$. Recall that $E[f(x)] = \sum_i p(x^{(i)})f(x^{(i)})$, where $p(x)$ is the probability mass function for $x^{(i)}$. Therefore, we can write the above into: $\ell(\Theta) = \sum_i ln~E_{z^{(i)} \sim q_i}\left[\frac{p(x^{(i)}, z^{(i)}; \Theta)}{q_i(z^{(i)})}\right]$. Now, it is not difficult to see that we can apply the Jensen Inequality on the term within the summation over $i$: $\ell(\Theta) \geq \sum_i E_{z^{(i)} \sim q_i}\left[ln~\frac{p(x^{(i)}, z^{(i)}; \Theta)}{q_i(z^{(i)})}\right] = \sum_i \sum_{z^{(i)}} q_i(z^{(i)})~ln~\frac{p(x^{(i)}, z^{(i)}; \Theta)}{q_i(z^{(i)})}$. In this way, we successfully construct the lower bound for $\ell(\Theta)$. However, this is not enough. Note that we want to squeeze the lower bound as much as we can to obtain a tight bound. Based on Section 2, we need equality in Jensen's inequality, which requires $\frac{p(x^{(i)}, z^{(i)}; \Theta)}{q_i(z^{(i)})} = c$, where $c$ is some constant. And this leads to $q_i(z^{(i)}) \propto p(x^{(i)}, z^{(i)}; \Theta)$. From the formula above, together with $\sum_{z^{(i)}} q_i(z^{(i)}) = 1$, we can determine the choice of $q$: $q_i(z^{(i)}) = \frac{p(x^{(i)}, z^{(i)}; \Theta)}{\sum_{z} p(x^{(i)}, z; \Theta)} = p(z^{(i)} \vert x^{(i)}; \Theta)$. The last term is actually the posterior distribution of $z^{(i)}$ (soft membership) given the observation $x^{(i)}$ and the parameter $\Theta$.
Algorithm Operationalization
EM is an iterative algorithm that consists of two steps:
E step: Let $q_i(z^{(i)}) = p(z^{(i)}\vert x^{(i)}; \Theta)$. This gives a tight lower bound for $\ell(\Theta)$ and amounts to computing the expectation shown above.
M step: Update the parameters to maximize the lower bound: $\Theta := argmax_{\Theta} \sum_i \sum_{z^{(i)}} q_i(z^{(i)})ln~\frac{p(x^{(i)}, z^{(i)}; \Theta)}{q_i(z^{(i)})}$.
Convergence
To prove the convergence of the EM algorithm, we can prove the fact that $\ell(\Theta^{(t+1)}) \geq \ell(\Theta^{(t)})$ for any $t$, where $\Theta^{(t)}$ is the parameter estimate at iteration $t$. Recall that we choose $q_i(z^{(i)}) = p(z^{(i)}\vert x^{(i)}; \Theta^{(t)})$ so that Jensen's inequality holds with equality at $\Theta^{(t)}$. In the M step, we then update our parameter to be the $\Theta^{(t+1)}$ that maximizes the R.H.S. of the equation above; the lower bound at $\Theta^{(t+1)}$ is therefore at least $\ell(\Theta^{(t)})$, and $\ell(\Theta^{(t+1)})$ is at least that lower bound. Therefore, the EM algorithm makes the log likelihood increase monotonically. A common practice to monitor the convergence is to test if the difference of the log likelihoods at two successive iterations is less than some predefined tolerance parameter.
Towards deeper understanding of EM: Evidence Lower Bound (ELBO)
We just derived the formulation of the EM algorithm in detail. However, one thing we did not do is to explicitly write out the decomposition of $\ell(\Theta)$. 
In fact, we can mathematically write out the log likelihood as the sum of two terms:

$$\ell(\Theta) = KL(q\vert\vert p) + \mathcal{L}(\mathbf{x}; \Theta),$$

where

$KL(q\vert \vert p) = \sum_{z} q(z) ln~ \frac{q(z)}{p(z \vert \mathbf{x}; \Theta)}$

$\mathcal{L}(\mathbf{x}; \Theta) = \sum_{z} q(z) ln~ \frac{p(\mathbf{x}, z; \Theta)}{q(z)}$

Derivation

ELBO

Here, we define the evidence lower bound to be $ELBO = \mathcal{L}(\mathbf{x}; \Theta)$. Because the KL-divergence is non-negative, $\ell(\Theta) \geq \mathcal{L}(\mathbf{x}; \Theta)$, and the equality only holds when $q(z) = p(z \vert \mathbf{x}; \Theta)$. Therefore, the $ELBO$ can be seen as a (tight, for that choice of $q$) lower bound of the evidence (the incomplete data likelihood). Such a result is consistent with the previous derivation, where we directly applied Jensen's inequality.

Applying EM on Gaussian Mixtures

In this section, we will use the example of a Gaussian Mixture to demonstrate the application of the EM algorithm. Suppose we have some data $\mathbf{x}={x^{(1)}, …, x^{(m)}}$, which come from $K$ different Gaussian distributions ($K$ mixtures). We will use the following notation:

$\mu_k$: the mean of the $k^{th}$ Gaussian component
$\Sigma_k$: the covariance matrix of the $k^{th}$ Gaussian component
$\phi_k$: the multinomial (prior) probability that a specific data point belongs to the $k^{th}$ component
$z^{(i)}$: the latent variable (multinomial) for each $x^{(i)}$

We also assume that the dimension of each $x^{(i)}$ is $n$. The goal is: $max_{\mu, \Sigma, \phi}~ln~p(\mathbf{x};\mu, \Sigma, \phi)$. Therefore this follows exactly the EM framework.

E step

We set $w_j^{(i)} = q_i(z^{(i)}=j) = p(z^{(i)}=j \vert x^{(i)}; \mu, \Sigma, \phi)$.

M step

We will write down the lower bound and take derivatives with respect to each of the three parameters. Note that:

$x^{(i)} \vert z^{(i)}=j; \mu, \Sigma \sim \mathcal{N}(\mu_j, \Sigma_j)$
$z^{(i)}=j; \phi \sim Multi(\phi)$

We can then leverage these probability distributions and continue. Now, we need to maximize this lower bound with respect to each of the three parameters. Many of the vector/matrix derivatives below are based on the Matrix Cookbook.

Derivative of $\mu_j$

Setting the last term to zero, we have

$$\mu_j = \frac{\sum_i w_j^{(i)} x^{(i)}}{\sum_i w_j^{(i)}}.$$

Derivative of $\Sigma_j$

First, we consider the derivative of the first term in the square bracket. Then, we do the second term. Combining these results and setting the derivative to zero, we have, after rearranging the equation:

$$\Sigma_j = \frac{\sum_i w_j^{(i)} (x^{(i)}-\mu_j)(x^{(i)}-\mu_j)^\top}{\sum_i w_j^{(i)}}.$$

Derivative of $\phi_j$

This one is relatively simple, but we need to apply Lagrange multipliers because $\sum_j \phi_j = 1$. We construct the Lagrangian, with $\lambda$ as the Lagrange multiplier, take the derivative of $\mathcal{L}$ with respect to $\phi_j$, and set it to zero. Rearranging, we have $\phi_j = -\frac{\sum_i w_j^{(i)}}{\lambda}$.
Recall that $\sum_j \phi_j = 1$, so summing over $j$ gives $-\frac{\sum_i \sum_j w_j^{(i)}}{\lambda} = 1$, i.e. $\lambda = -\sum_i \sum_j w_j^{(i)} = -m$. Finally, we have:

$$\phi_j = \frac{1}{m}\sum_i w_j^{(i)}.$$

Real example

class GMM(object):
    def __init__(self, X, k=2):
        # dimension
        X = np.asarray(X)
        self.m, self.n = X.shape
        self.data = X.copy()
        # number of mixtures
        self.k = k

    def _init(self):
        # init mixture means/sigmas
        self.mean_arr = np.asmatrix(np.random.random((self.k, self.n)))
        self.sigma_arr = np.array([np.asmatrix(np.identity(self.n)) for i in range(self.k)])
        self.phi = np.ones(self.k)/self.k
        self.w = np.asmatrix(np.empty((self.m, self.k), dtype=float))
        #print(self.mean_arr)
        #print(self.sigma_arr)

    def fit(self, tol=1e-4):
        self._init()
        num_iters = 0
        ll = 1
        previous_ll = 0
        while(ll-previous_ll > tol):
            previous_ll = self.loglikelihood()
            self._fit()
            num_iters += 1
            ll = self.loglikelihood()
            print('Iteration %d: log-likelihood is %.6f'%(num_iters, ll))
        print('Terminate at %d-th iteration:log-likelihood is %.6f'%(num_iters, ll))

    def loglikelihood(self):
        # incomplete data log likelihood, summed over all data points
        ll = 0
        for i in range(self.m):
            tmp = 0
            for j in range(self.k):
                #print(self.sigma_arr[j])
                tmp += sp.stats.multivariate_normal.pdf(self.data[i, :],
                                                        self.mean_arr[j, :].A1,
                                                        self.sigma_arr[j, :]) *\
                       self.phi[j]
            ll += np.log(tmp)
        return ll

    def _fit(self):
        self.e_step()
        self.m_step()

    def e_step(self):
        # calculate the responsibilities w_j^{(i)} = p(z^{(i)}=j | x^{(i)})
        for i in range(self.m):
            den = 0
            for j in range(self.k):
                num = sp.stats.multivariate_normal.pdf(self.data[i, :],
                                                       self.mean_arr[j].A1,
                                                       self.sigma_arr[j]) *\
                      self.phi[j]
                den += num
                self.w[i, j] = num
            self.w[i, :] /= den
            assert self.w[i, :].sum() - 1 < 1e-4

    def m_step(self):
        # closed-form updates for phi_j, mu_j and Sigma_j derived above
        for j in range(self.k):
            const = self.w[:, j].sum()
            self.phi[j] = 1/self.m * const
            _mu_j = np.zeros(self.n)
            _sigma_j = np.zeros((self.n, self.n))
            for i in range(self.m):
                _mu_j += (self.data[i, :] * self.w[i, j])
                _sigma_j += self.w[i, j] * ((self.data[i, :] - self.mean_arr[j, :]).T *
                                            (self.data[i, :] - self.mean_arr[j, :]))
                #print((self.data[i, :] - self.mean_arr[j, :]).T * (self.data[i, :] - self.mean_arr[j, :]))
            self.mean_arr[j] = _mu_j / const
            self.sigma_arr[j] = _sigma_j / const
        #print(self.sigma_arr)

Setup dataset.

X = np.random.multivariate_normal([0, 3], [[0.5, 0], [0, 0.8]], 20)
X = np.vstack((X, np.random.multivariate_normal([20, 10], np.identity(2), 50)))
X.shape

(70, 2)

gmm = GMM(X)
gmm.fit()

Iteration 1: log-likelihood is -396.318688
Iteration 2: log-likelihood is -358.902036
Iteration 3: log-likelihood is -357.894632
Iteration 4: log-likelihood is -353.797418
Iteration 5: log-likelihood is -332.842685
Iteration 6: log-likelihood is -287.787552
Iteration 7: log-likelihood is -277.701280
Iteration 8: log-likelihood is -268.521010
Iteration 9: log-likelihood is -249.376342
Iteration 10: log-likelihood is -230.906536
Iteration 11: log-likelihood is -230.883025
Iteration 12: log-likelihood is -230.883025
Terminate at 12-th iteration:log-likelihood is -230.883025

gmm.mean_arr

matrix([[-0.05739539, 2.69216832],
        [20.04929409, 9.69219807]])

gmm.sigma_arr

array([[[0.30471149, 0.27830426],
        [0.27830426, 1.3309902 ]],

       [[0.94378014, 0.07495073],
        [0.07495073, 1.13089165]]])

gmm.phi

array([0.28571429, 0.71428571])
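To complement the loop-based class above, here is a minimal vectorized sketch of the same E-step/M-step updates. It is my own addition rather than part of the original post; the function name em_gmm and the initialization scheme are arbitrary choices, and it assumes scipy.stats.multivariate_normal for the component densities.

import numpy as np
from scipy.stats import multivariate_normal

def em_gmm(X, k=2, n_iter=50, seed=0):
    rng = np.random.default_rng(seed)
    m, n = X.shape
    mu = X[rng.choice(m, k, replace=False)]          # initialize means from random data points
    sigma = np.array([np.eye(n) for _ in range(k)])  # identity covariances
    phi = np.full(k, 1.0 / k)                        # uniform mixing weights
    for _ in range(n_iter):
        # E-step: responsibilities w[i, j] = p(z^{(i)} = j | x^{(i)})
        w = np.column_stack([phi[j] * multivariate_normal.pdf(X, mu[j], sigma[j]) for j in range(k)])
        w /= w.sum(axis=1, keepdims=True)
        # M-step: closed-form updates derived above
        Nj = w.sum(axis=0)
        phi = Nj / m
        mu = (w.T @ X) / Nj[:, None]
        for j in range(k):
            d = X - mu[j]
            sigma[j] = (w[:, j, None] * d).T @ d / Nj[j]
    return mu, sigma, phi

For the two-cluster dataset above, em_gmm(X, k=2) should recover means close to [0, 3] and [20, 10], matching the fitted GMM class.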
Most kings together in a game ⛀ L. Camara ⛂ M. Jaggoe ⚐ Drawn Game 1985-02-22 World Championship Youth 1985 A game with 5 against 2 kings is more common. It's a difficult endgame; the player with the 5 kings has a winning position, but some players don't know how to convert it. One example: Herman Meijer - M. Visser (Netherlands, 1984). After 128 moves and 9 hours Meijer accepted that he didn't know how to win and proposed a draw. This is an input Result \[\frac{x^2}{a^2}+\frac{y^2}{b^2}=1\] Another example Result: the output is a numbered equation like this \begin{equation}\int_{-\infty}^{+\infty}e^{-x^2/2}dx=?\label{eq2}\end{equation} Just type \ref to cite the equation labeled eq2, like (\ref{eq2}). Now let's see inline equations. This \(\frac{x^2}{a^2}+\frac{y^2}{b^2}=1\) is an inline equation. Do you like it? Get JExtBOX Equation PSPicture Example Another example Attention If you want to use or test it on your site, please open the page of JExtBOX Article Auto Manager. This is a demo of the JExtBOX Code Display plugin. This is a demo of the JExtBOX Auto Paging plugin. The demo differs from the plugin in order to hide some important JavaScript code (for example, it is non-responsive). Please open this article to view the result of the plugin. Mongolia From Wikipedia, the free encyclopedia Mongolia (Mongolian: Монгол улс, literally Mongol country/state) is a landlocked country in East and Central Asia. It is bordered by Russia to the north and China to the south, east and west. Although Mongolia does not share a border with Kazakhstan, its westernmost point is only 38 kilometres (24 mi) from Kazakhstan's eastern tip. Ulan Bator, the capital and largest city, is home to about 45% of the population. Mongolia's political system is a parliamentary republic. This is a demo of the JExtBOX Social Share Buttons plugin.
The risk-neutral measure $\mathbb{Q}$ is a mathematical construct which stems from the law of one price, also known as the principle of no riskless arbitrage and which you may already have heard of in the following terms: "there is no free lunch in financial markets". This law is at the heart of securities' relative valuation, see this very nice paper by Emmanuel Derman ("Metaphors, Models & Theories", 2011) and some part of this discussion. In what follows, assume for the sake of simplicity: existence of a risk-free asset; deterministic and constant rates, with risk-free rate $r$; no dividends and no additional equity funding costs. How to relate $\mathbb{Q}$ to $\mathbb{P}$: some useful concepts The risk-neutral measure $\mathbb{Q}$ is a probability measure which is equivalent to $\mathbb{P}$ and under which the prices of assets (I should rather say the prices of self-financing portfolios composed of marketed securities, to be perfectly rigorous), discounted at the risk-free rate, turn out to be martingales. If one assumes there is no free lunch in the real world (hence under $\mathbb{P}$), then the above definition (more specifically the "equivalent" part) suggests that there will be no free lunch under $\mathbb{Q}$ either. To convince yourself, have a look at the accepted answer to this SE question. This answers your question concerning no arbitrage conditions. The martingale property is convenient since it allows us to represent asset prices as expectations conditional on the information we currently have, which seems intuitive and natural. Indeed, from the definition, if $X_t$ is a $\mathbb{Q}$-martingale then$$ X_0 = E^{\mathbb{Q}}[X_t \vert \mathcal{F}_0] $$ The adjective risk-neutral comes from the fact that, using a replication argument (static for linear contracts, dynamic for most of the others) and under the assumption of no free lunch (+ market completeness, continuous trading, no frictions), one can show that the true performance of the stock simply disappears from the option valuation problem. Risk aversion thus disappears and only the risk-free rate $r$ remains. This is exactly what Black-Scholes-Merton showed, and it is what earned them the Nobel prize in the first place, see below. A simple example: the Black-Scholes model Assume that the stock price $S_t$ follows a GBM under $\mathbb{P}$$$ \frac{dS_t}{S_t} = \mu dt + \sigma dW_t^{\mathbb{P}}\ \ \ (1) $$where $\mu$ is the expected performance of the stock and $\sigma$ the annualised volatility of log-returns. This equation describes the dynamics of the stock in the real world. Consider the pricing (we are still working under $\mathbb{P}$, i.e. in the real world) of a contingent claim $V_t = V(t,S_t)$ of which the only thing we know is that it pays out $\phi(S_T)$ to its holder when $t=T$ (generic European option). Now, consider the following self-financing portfolio: $$\Pi_t = V_t - \alpha S_t$$ Using Itô's lemma along with the self-financing property yields:\begin{align}d\Pi_t &= dV_t - \alpha dS_t \\ &= \left( \frac{\partial V}{\partial t} + \frac{1}{2} \sigma^2 S_t^2 \frac{\partial^2 V}{\partial S^2}\right) dt + \left( \frac{\partial V}{\partial S} - \alpha \right) dS_t\\\end{align} The original argument of Black-Scholes-Merton is then that, if we can dynamically rebalance the portfolio $\Pi_t$ so that the number of shares held is continuously adjusted to be equal to $\alpha = \frac{\partial V}{\partial S}$, then the portfolio $\Pi_t$ would drift at a deterministic rate which, by absence of arbitrage opportunity, should match the risk-free rate.
Writing this as $d\Pi_t = \Pi_t r dt$ and remembering that we've picked $\alpha = \frac{\partial V}{\partial S}$ to reach this conclusion, we have \begin{align}&d\Pi_t = \Pi_t r dt \\\iff& \left( \frac{\partial V}{\partial t} + \frac{1}{2} \sigma^2 S_t^2 \frac{\partial^2 V}{\partial S^2} \right) dt = \left( V_t - \frac{\partial V}{\partial S} S_t \right) r dt \\\iff& \frac{\partial V}{\partial t}(t,S) + r S \frac{\partial V}{\partial S}(t,S) + \frac{1}{2} \sigma^2 S^2 \frac{\partial^2 V}{\partial S^2}(t,S) - rV(t,S) = 0\end{align}which is the famous Black-Scholes pricing equation. Now, the Feynman-Kac theorem tells us that the solution to the above PDE can be computed as:$$ V_0 = E^\mathbb{Q}[ e^{-rT} \phi(S_T) \vert \mathcal{F}_0 ] $$where under a certain measure $\mathbb{Q}$$$ \frac{dS_t}{S_t} = r dt + \sigma dW_t^{\mathbb{Q}} $$which shows that$$\frac{V_t}{B_t} \text{ and } \frac{S_t}{B_t} \text{ are } \mathbb{Q}\text{-martingales}$$with $B_t = e^{rt}$ representing the value of the risk-free asset we mentioned in the introduction. Notice how $\mu$ has completely disappeared from the pricing equation. Because this Feynman-Kac formula very much resembles a magical trick, let us zoom in on the change of measure from a more mathematical perspective (the above was indeed the financial argument... at least for deriving the pricing equation, not for expressing its solution in martingale form). Starting from $(1)$, let us define the quantity $\lambda$ as the excess return over the risk-free rate of our stock, expressed in volatility units (i.e. its Sharpe ratio):$$ \lambda = \frac{\mu - r}{\sigma} $$Plugging this into $(1)$ gives:$$ \frac{dS_t}{S_t} = r dt + \sigma (dW_t^{\mathbb{P}} + \lambda dt) $$Now the Girsanov theorem tells us that if we define the Radon-Nikodym derivative of the change of measure as$$ \left. \frac{d\mathbb{Q}}{d\mathbb{P}} \right\vert_{\mathcal{F}_t} = \mathcal{E}(-\lambda W_t^{\mathbb{P}}) $$then the process$$ W_t^{\mathbb{Q}} := W_t^{\mathbb{P}} - \langle W^{\mathbb{P}}, -\lambda W^{\mathbb{P}} \rangle_t = W_t^{\mathbb{P}} + \lambda t $$will emerge as a $\mathbb{Q}$-Brownian motion, hence we can write:$$ \frac{dS_t}{S_t} = r dt + \sigma dW_t^{\mathbb{Q}} $$ Okay, this might seem even more magical to you than earlier, but don't worry, there is a rigorous mathematical treatment behind it. Anyway, an interesting feature of writing and manipulating the Radon-Nikodym derivative is that one can eventually show that: $$V_0 = E^{\mathbb{Q}} \left[ \left. \frac{V_T}{B_T} \right\vert \mathcal{F}_0 \right] = E^{\mathbb{P}} \left[ \left. \frac{V_T}{B_T} \mathcal{E}(-\lambda W_T^\mathbb{P}) \right\vert \mathcal{F}_0 \right]$$ where I have used the Bayes' rule for conditional expectations, with $$ X := V_T/B_T,\ \ \ f := \left. \frac{d\mathbb{Q}}{d\mathbb{P}} \right\vert_{\mathcal{F}_T} = \mathcal{E}(-\lambda W_T^{\mathbb{P}}),\ \ \ E^\mathbb{P}[f \vert \mathcal{F}_0 ] = 1 $$ The above result is extremely interesting and can here be re-expressed as $$ V_0 = E^{\mathbb{Q}} \left[ e^{-rT} \phi(S_T) \vert \mathcal{F}_0 \right] = E^{\mathbb{P}} \left[ e^{-\left(r+\frac{\lambda^2}{2}+\frac{\lambda}{T} W_T^{\mathbb{P}}\right)T} \phi(S_T) \vert \mathcal{F}_0 \right] $$ This shows that, under BS assumptions: The option price can be calculated as an expectation under $\mathbb{Q}$, in which case we discount cash flows at the risk-free rate.
The option price can also be calculated as an expectation under $\mathbb{P}$, but this time we need to discount cash flows based on our risk aversion, which transpires through the market risk premium $\lambda$ (which depends on $\mu$). This answers your question: Therefore, in addition to use expected returns, which other adjustment might be needed in order to estimate real-world probabilities? You need to use a stochastic discount factor accounting for the risk aversion, see above and further remarks below. Estimating real-world probabilities assuming BS You have different possibilities here. The first idea which springs to mind is to calibrate your diffusion model to observed time series. When doing that, you hope to get an estimate for $\mu$ and $\sigma$ in the GBM case. Now, given what we just said earlier, you must be very careful when pricing under $\mathbb{P}$: you cannot discount at the risk-free rate. Also, obtaining a statistically significant estimate for $\mu$ (and the latent equity risk premium) may not be as easy as it seems; see the discussion here. It's more complicated than that when you choose a model other than BS The relationship:$$ V_0 = E^{\mathbb{Q}} \left[ \frac{V_T}{B_T} \vert \mathcal{F}_0 \right] = E^{\mathbb{P}} \left[ \frac{V_T}{B_T} f \vert \mathcal{F}_0 \right] $$with $$f = \left. \frac{d\mathbb{Q}}{d\mathbb{P}} \right\vert_{\mathcal{F}_T}$$will hold (under mild technical conditions). Compared to the risk-free discount factor$$DF(0,T):=1/B_T $$the quantity$$SDF(0,T):=f/B_T$$is best known as a Stochastic Discount Factor (maybe you've already heard about SDF models, this is precisely that) and we can write, without loss of generality: $$ V_0 = E^{\mathbb{Q}} \left[ DF(0,T) V_T \vert \mathcal{F}_0 \right] = E^{\mathbb{P}} \left[ SDF(0,T) V_T \vert \mathcal{F}_0 \right] $$ The problem is that, depending on the model assumptions you use, you cannot always obtain a simple and/or unique form for $f$ (hence for $SDF(0,T)$) as was the case in BS. This is notably the case for incomplete models (i.e. models that include jumps and/or stochastic volatility etc.). So now you understand why, when we need models to price options, we directly calibrate them under $\mathbb{Q}$ and not on time series observed under $\mathbb{P}$.
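To make the equivalence between the $\mathbb{Q}$-expectation and the $\mathbb{P}$-expectation with a stochastic discount factor concrete, here is a small Monte Carlo sketch for a European call under the BS assumptions above. It is my own illustration, not part of the original answer, and the parameter values are arbitrary.

import numpy as np

S0, K, T, r, mu, sigma = 100.0, 100.0, 1.0, 0.02, 0.08, 0.2
lam = (mu - r) / sigma                      # market price of risk (Sharpe ratio)
rng = np.random.default_rng(0)
Z = rng.standard_normal(1_000_000)
W_T = np.sqrt(T) * Z                        # Brownian increment at T (same draws reused under both measures)

def payoff(S):
    return np.maximum(S - K, 0.0)

# Under Q: drift r, discount at the risk-free rate
S_T_Q = S0 * np.exp((r - 0.5 * sigma**2) * T + sigma * W_T)
price_Q = np.exp(-r * T) * payoff(S_T_Q).mean()

# Under P: drift mu, discount with the stochastic discount factor f / B_T
S_T_P = S0 * np.exp((mu - 0.5 * sigma**2) * T + sigma * W_T)
sdf = np.exp(-r * T) * np.exp(-lam * W_T - 0.5 * lam**2 * T)   # Radon-Nikodym density times 1/B_T
price_P = (sdf * payoff(S_T_P)).mean()

print(price_Q, price_P)

Both estimates should agree up to Monte Carlo error with the Black-Scholes price, roughly 8.9 for these inputs.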
As others have pointed it out, it's difficult to know what he meant. Remember that he was reporting on something he had been told which he probably hadn't fully comprehended at the time, or of which he had only an inaccurate recollection at the time of writing. This answer is the configuration that came to my mind as having some chance of being the one he had had in mind. (If he didn't have it in mind but had only been told of at some point, then I expect the example to be more complicated than mine.) When Newman writes 'which are able to divide a space', I take it that the space can be the Euclidean plane; he would have written the space if he had meant the three-dimensional space. My guess then is a hyperbolic pencil of circles (the set of blue circles in the drawing is a finite sample of this): The set of blue circles, including the vertical straight line which is missing from the middle of the drawing, contains infinitely many curves. If you pick countably infinitely many (or finitely many) of them including the straight line such that the sizes of the circles on both sides are unbounded (which also means that there is one arbitrarily near the straight line on either side), then those lines partition the plane ('divide a space') into shapes such that none of those shapes contains a straight line. Addendum. A simpler (but equivalent, as we'll see) example is an unbounded set of concentric circles and the concentric annuli defined by them, e.g. with equal spacing:$$\Big\{\{x\in\mathbb{R}^2\,|\, n<|x|<n+1\}\ \Big|\ n\in\mathbb{N}\Big\}.$$This is projectively isomorphic to my first example because there is a Möbius transformation which maps one configuration to the other. You can map the two focuses (limiting points) of the pencil of circles, and the point where the vertical line intersects the segment between the two focuses to the centre of the concentric circles, to $\infty$, and to an arbitrary point of any circle of the set of concentric circles, respectively. Compare with this: A family of concentric circles centered at a single focus C forms a special case of a hyperbolic pencil, in which the other focus is the point at infinity of the complex projective line. The corresponding elliptic pencil consists of the family of straight lines through C; these should be interpreted as circles that all pass through the point at infinity. (Pencils of circles, Wikipedia)
Given that $\sin\theta = \dfrac{1}{2}$ and that $\cos\theta = -\dfrac{\sqrt{3}}{2}$ and $0^\circ \leq \theta \leq 360^\circ$, find the value of $\theta$. Since $\sin\theta > 0$ and $\cos\theta < 0$, you have correctly concluded that $\theta$ is a second-quadrant angle. You also took the inverse cosine of $-\dfrac{\sqrt{3}}{2}$, from which you can conclude that $\theta = 150^\circ$. Let's see why. I will be working in radians. The arccosine function (inverse cosine function) $\arccos: [-1, 1] \to [0, \pi]$ is defined by $\arccos x = \theta$ if $\theta$ is the unique angle in $[0, \pi]$ such that $\cos\theta = x$. Since $\dfrac{5\pi}{6}$ is the unique angle $\theta \in [0, \pi]$ such that $\cos\theta = -\dfrac{\sqrt{3}}{2}$, $$\theta = \arccos\left(-\dfrac{\sqrt{3}}{2}\right) = \dfrac{5\pi}{6}$$Converting to degrees yields $\theta = 150^\circ$. To reiterate, since there is only one angle $\theta$ in $[0, \pi]$ such that $\cos\theta = -\dfrac{\sqrt{3}}{2}$, we may conclude that $$\theta = \arccos\left(-\dfrac{\sqrt{3}}{2}\right) = \frac{5\pi}{6}$$ While it is not needed to solve this problem, consider the diagram below. Two angles in standard position (vertex at the origin, initial side on the positive $x$-axis) have the same sine if the $y$-coordinates of the points where their terminal sides intersect the unit circle are equal. By symmetry, $$\sin(\pi - \theta) = \sin\theta$$ Any angle coterminal with one of these angles will also have the same sine. Hence, $\sin\theta = \sin\varphi$ if $$\varphi = \theta + 2n\pi, n \in \mathbb{Z}$$or $$\varphi = \pi - \theta + 2n\pi, n \in \mathbb{Z}$$Two angles in standard position have the same cosine if the $x$-coordinates of the points where their terminal sides intersect the unit circle are equal. By symmetry, $$\cos(-\theta) = \cos\theta$$Any angle coterminal with one of these angles will also have the same cosine. Hence, $\cos\theta = \cos\varphi$ if $$\varphi = \theta + 2n\pi, n \in \mathbb{Z}$$or $$\varphi = -\theta + 2n\pi, n \in \mathbb{Z}$$
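As a quick numerical sanity check (my addition, not part of the answer), the arccosine indeed returns the second-quadrant angle directly:

import numpy as np
print(np.degrees(np.arccos(-np.sqrt(3) / 2)))  # ~150.0
print(np.degrees(np.arcsin(1 / 2)))            # ~30.0: arcsin alone gives the first-quadrant reference angle,
                                               # which is why the sign of cos(theta) is needed to select 150 degrees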
I was reading the following notes on tensor products: http://www.math.uconn.edu/~kconrad/blurbs/linmultialg/tensorprod.pdf At some point (p. 39) there is the following example In the last paragraph, he says that using exterior powers it can be proved that if $I\oplus I\simeq S^2$ as $S$-module, then $I\otimes_S I\simeq S$ as $S$-modules. I do not know a lot about exterior powers (just the definition), but I would like to know what is the property being used here and what is the isomorphism he finds out. Can you give me some hints? As matter of fact I think it really proves that $I\otimes_S I$ is isomorphic to $S\otimes_S S\simeq S$, but I cannot construct a surjective map from $S\otimes_S S$ to $I\otimes_S I$.
Part a) of this is fine, but I'm really stuck on part b) and I have a test on this in an hours time, does anyone have any hints? closed as off-topic by Toby Mak, Dietrich Burde, воитель, mrtaurho, Yanior Weg Aug 12 at 19:32 This question appears to be off-topic. The users who voted to close gave this specific reason: " This question is missing context or other details: Please provide additional context, which ideally explains why the question is relevant to you and our community. Some forms of context include: background and motivation, relevant definitions, source, possible strategies, your current progress, why the question is interesting or important, etc." – Toby Mak, Dietrich Burde, воитель, mrtaurho, Yanior Weg I think the best place to start is to first give things names, e.g. $$(a,b,c) \ast (\lambda, \mu, \nu) =: (x,y,z) $$ And then simplify the RHS of the expression $$ F(x,y,z) = F(a,b,c) \cdot F(\lambda, \mu, \nu)$$ to get an expression for $x$, $y$ and $z$. You'll use part a) for this. EDIT: $(\alpha, \beta, \gamma)$ were not the best choice of letters! Good luck! $(a\alpha^2+b\alpha+c)(d\alpha^2+e\alpha+f)=ad(\alpha^4)+(bd+ae)\alpha^3+(af+be+cd)\alpha^2+(bf+ce)\alpha+cf$ Then use the relation $\alpha^3=-3\alpha+24/5$ to reduce this to something like $ad(-3\alpha^2+24/5\alpha)+(bd+ae)(-3\alpha+24/5)+(af+be+cd)\alpha^2+(bf+ce)\alpha+cf =(-3ad+af+be+cd)\alpha^2+(24ad/5-3bd-3ae+bf+ce)\alpha+(24bd/5+24ae/5+cf)$ which will give us a way to compose $(a,b,c)*(d,e,f)=(-3ad+af+be+cd,24ad/5-3bd-3ae+bf+ce,24bd/5+24ae/5+cf)$
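A quick symbolic check of the product rule above (my addition, not from the answer), assuming $\alpha$ satisfies $\alpha^3 = -3\alpha + 24/5$:

import sympy as sp

a, b, c, d, e, f, alpha = sp.symbols('a b c d e f alpha')
relation = alpha**3 + 3*alpha - sp.Rational(24, 5)   # alpha^3 = -3*alpha + 24/5

product = sp.expand((a*alpha**2 + b*alpha + c) * (d*alpha**2 + e*alpha + f))
reduced = sp.rem(product, relation, alpha)           # reduce modulo the relation on alpha

claimed = ((-3*a*d + a*f + b*e + c*d) * alpha**2
           + (sp.Rational(24, 5)*a*d - 3*b*d - 3*a*e + b*f + c*e) * alpha
           + (sp.Rational(24, 5)*b*d + sp.Rational(24, 5)*a*e + c*f))

print(sp.simplify(sp.expand(reduced - claimed)))     # 0

The printed difference is 0, confirming the coefficients given in the answer.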
Goldbach's Theorem Theorem Let $F_m$ and $F_n$ be Fermat numbers such that $m \ne n$. Then $F_m$ and $F_n$ are coprime. Proof 1 Without loss of generality, suppose that $m > n$. Then $m = n + k$ for some $k \in \Z_{>0}$. Aiming for a contradiction, suppose $p$ is a prime number which divides both $F_m$ and $F_n$. Since Fermat numbers are odd, $p$ is odd. Then: \(\displaystyle F_m - 1\) \(\equiv\) \(\displaystyle -1\) \(\displaystyle \pmod p\) as $p \divides F_m$ \(\displaystyle F_n - 1\) \(\equiv\) \(\displaystyle -1\) \(\displaystyle \pmod p\) as $p \divides F_n$ \(\displaystyle \leadsto \ \ \) \(\displaystyle \paren {F_n - 1}^{2^k}\) \(\equiv\) \(\displaystyle -1\) \(\displaystyle \pmod p\) Fermat Number whose Index is Sum of Integers \(\displaystyle \leadsto \ \ \) \(\displaystyle \paren {-1}^{2^k}\) \(\equiv\) \(\displaystyle -1\) \(\displaystyle \pmod p\) Congruence of Product \(\displaystyle \leadsto \ \ \) \(\displaystyle 1\) \(\equiv\) \(\displaystyle -1\) \(\displaystyle \pmod p\) Congruence of Powers \(\displaystyle \leadsto \ \ \) \(\displaystyle 0\) \(\equiv\) \(\displaystyle 2\) \(\displaystyle \pmod p\) Hence $p = 2$. However, it has already been established that $p$ is odd. From this contradiction it is deduced that there is no such $p$. Hence the result. $\blacksquare$ Proof 2 Let $F_m$ and $F_n$ be Fermat numbers such that $m < n$. Let $d = \gcd \set {F_m, F_n}$. Since $m < n$: $F_m \divides F_n - 2$ But then: \(\displaystyle d\) \(\divides\) \(\displaystyle F_n\) Definition of Common Divisor of Integers \(\, \displaystyle \land \, \) \(\displaystyle d\) \(\divides\) \(\displaystyle F_m\) (where $\divides$ denotes divisibility) \(\displaystyle \leadsto \ \ \) \(\displaystyle d\) \(\divides\) \(\displaystyle F_n - 2\) as $F_m \divides F_n - 2$ \(\displaystyle \leadsto \ \ \) \(\displaystyle d\) \(\divides\) \(\displaystyle F_n - \paren {F_n - 2}\) \(\displaystyle \leadsto \ \ \) \(\displaystyle d\) \(\divides\) \(\displaystyle 2\) As $F_m$ and $F_n$ are odd: $d \ne 2$ It follows that $d = 1$. The result follows by definition of coprime. $\blacksquare$ Source of Name This entry was named for Christian Goldbach. Sources 1982: P.M. Cohn: Algebra Volume 1 (2nd ed.) ... (previous) ... (next): $\S 2.4$: The rational numbers and some finite fields: Further Exercises $9$ 1986: David Wells: Curious and Interesting Numbers ... (previous) ... (next): $257$ 2008: David Nelson: The Penguin Dictionary of Mathematics (4th ed.) ... (previous) ... (next): Entry: Fermat number (P. de Fermat, 1640)
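A quick numerical illustration of the theorem (my addition, not part of the entry):

from math import gcd

F = [2**(2**n) + 1 for n in range(7)]   # F_0 .. F_6
print(F[:5])                            # [3, 5, 17, 257, 65537]
# pairwise coprime:
print(all(gcd(F[m], F[n]) == 1 for m in range(len(F)) for n in range(m + 1, len(F))))   # True
# the divisibility fact used in the second proof: F_m divides F_n - 2 for m < n
print(all((F[n] - 2) % F[m] == 0 for n in range(len(F)) for m in range(n)))             # True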
Consider a $C^k$, $k\ge 2$, Lorentzian manifold $(M,g)$ and let $\Box$ be the usual wave operator $\nabla^a\nabla_a$. Given $p\in M$, $s\in\Bbb R,$ and $v\in T_pM$, can we find a neighborhood $U$ of $p$ and $u\in C^k(U)$ such that $\Box u=0$, $u(p)=s$ and $\mathrm{grad}\, u(p)=v$? The tog is a measure of thermal resistance of a unit area, also known as thermal insulance. It is commonly used in the textile industry and often seen quoted on, for example, duvets and carpet underlay.The Shirley Institute in Manchester, England developed the tog as an easy-to-follow alternative to the SI unit of m2K/W. The name comes from the informal word "togs" for clothing which itself was probably derived from the word toga, a Roman garment.The basic unit of insulation coefficient is the RSI, (1 m2K/W). 1 tog = 0.1 RSI. There is also a clo clothing unit equivalent to 0.155 RSI or 1.55 tog... The stone or stone weight (abbreviation: st.) is an English and imperial unit of mass now equal to 14 pounds (6.35029318 kg).England and other Germanic-speaking countries of northern Europe formerly used various standardised "stones" for trade, with their values ranging from about 5 to 40 local pounds (roughly 3 to 15 kg) depending on the location and objects weighed. The United Kingdom's imperial system adopted the wool stone of 14 pounds in 1835. With the advent of metrication, Europe's various "stones" were superseded by or adapted to the kilogram from the mid-19th century on. The stone continues... Can you tell me why this question deserves to be negative?I tried to find faults and I couldn't: I did some research, I did all the calculations I could, and I think it is clear enough . I had deleted it and was going to abandon the site but then I decided to learn what is wrong and see if I ca... I am a bit confused in classical physics's angular momentum. For a orbital motion of a point mass: if we pick a new coordinate (that doesn't move w.r.t. the old coordinate), angular momentum should be still conserved, right? (I calculated a quite absurd result - it is no longer conserved (an additional term that varies with time ) in new coordinnate: $\vec {L'}=\vec{r'} \times \vec{p'}$ $=(\vec{R}+\vec{r}) \times \vec{p}$ $=\vec{R} \times \vec{p} + \vec L$ where the 1st term varies with time. (where R is the shift of coordinate, since R is constant, and p sort of rotating.) would anyone kind enough to shed some light on this for me? From what we discussed, your literary taste seems to be classical/conventional in nature. That book is inherently unconventional in nature; it's not supposed to be read as a novel, it's supposed to be read as an encyclopedia @BalarkaSen Dare I say it, my literary taste continues to change as I have kept on reading :-) One book that I finished reading today, The Sense of An Ending (different from the movie with the same title) is far from anything I would've been able to read, even, two years ago, but I absolutely loved it. I've just started watching the Fall - it seems good so far (after 1 episode)... I'm with @JohnRennie on the Sherlock Holmes books and would add that the most recent TV episodes were appalling. I've been told to read Agatha Christy but haven't got round to it yet ?Is it possible to make a time machine ever? Please give an easy answer,a simple one A simple answer, but a possibly wrong one, is to say that a time machine is not possible. Currently, we don't have either the technology to build one, nor a definite, proven (or generally accepted) idea of how we could build one. 
— Countto1047 secs ago @vzn if it's a romantic novel, which it looks like, it's probably not for me - I'm getting to be more and more fussy about books and have a ridiculously long list to read as it is. I'm going to counter that one by suggesting Ann Leckie's Ancillary Justice series Although if you like epic fantasy, Malazan book of the Fallen is fantastic @Mithrandir24601 lol it has some love story but its written by a guy so cant be a romantic novel... besides what decent stories dont involve love interests anyway :P ... was just reading his blog, they are gonna do a movie of one of his books with kate winslet, cant beat that right? :P variety.com/2016/film/news/… @vzn "he falls in love with Daley Cross, an angelic voice in need of a song." I think that counts :P It's not that I don't like it, it's just that authors very rarely do anywhere near a decent job of it. If it's a major part of the plot, it's often either eyeroll worthy and cringy or boring and predictable with OK writing. A notable exception is Stephen Erikson @vzn depends exactly what you mean by 'love story component', but often yeah... It's not always so bad in sci-fi and fantasy where it's not in the focus so much and just evolves in a reasonable, if predictable way with the storyline, although it depends on what you read (e.g. Brent Weeks, Brandon Sanderson). Of course Patrick Rothfuss completely inverts this trope :) and Lev Grossman is a study on how to do character development and totally destroys typical romance plots @Slereah The idea is to pick some spacelike hypersurface $\Sigma$ containing $p$. Now specifying $u(p)$ is trivial because the wave equation is invariant under constant perturbations. So that's whatever. But I can specify $\nabla u(p)|\Sigma$ by specifying $u(\cdot, 0)$ and differentiate along the surface. For the Cauchy theorems I can also specify $u_t(\cdot,0)$. Now take the neigborhood to be $\approx (-\epsilon,\epsilon)\times\Sigma$ and then split the metric like $-dt^2+h$ Do forwards and backwards Cauchy solutions, then check that the derivatives match on the interface $\{0\}\times\Sigma$ Why is it that you can only cool down a substance so far before the energy goes into changing it's state? I assume it has something to do with the distance between molecules meaning that intermolecular interactions have less energy in them than making the distance between them even smaller, but why does it create these bonds instead of making the distance smaller / just reducing the temperature more? Thanks @CooperCape but this leads me another question I forgot ages ago If you have an electron cloud, is the electric field from that electron just some sort of averaged field from some centre of amplitude or is it a superposition of fields each coming from some point in the cloud?
That's a great question! What you are asking about is one of the missing links between classical and quantum gravity. On their own, the Einstein equations, $ G_{\mu\nu} = 8 \pi G T_{\mu\nu}$, are local field equations and do not contain any topological information. At the level of the action principle, $$ S_{\mathrm{eh}} = \int_\mathcal{M} d^4 x \, \sqrt{-g} \, \mathbf{R} $$ the term we generally include is the Ricci scalar $ \mathbf{R} = \mathrm{Tr}[ R_{\mu\nu} ] $, which depends only on the first and second derivatives of the metric and is, again, a local quantity. So the action does not tell us about topology either, unless you're in two dimensions, where the Euler characteristic is given by the integral of the ricci scalar: $$ \int d^2 x \, \mathcal{R} = \chi $$ (modulo some numerical factors). So gravity in 2 dimensions is entirely topological. This is in contrast to the 4D case where the Einstein-Hilbert action appears to contain no topological information. This should cover your first question. All is not lost, however. One can add topological degrees of freedom to 4D gravity by the addition of terms corresponding to various topological invariants (Chern-Simons, Nieh-Yan and Pontryagin). For instance, the Chern-Simons contribution to the action looks like: $$ S_{cs} = \int d^4 x \frac{1}{2} \left(\epsilon_{ab} {}^{ij}R_{cdij}\right)R_{abcd} $$ Here is a very nice paper by Jackiw and Pi for the details of this construction. There's plenty more to be said about topology and general relativity. Your question only scratches the surface. But there's a goldmine underneath ! I'll let someone else tackle your second question. Short answer is "yes".
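As a quick sanity check of the two-dimensional statement (my addition, not part of the original answer): for a round 2-sphere of radius $r$, the Ricci scalar is $\mathcal{R} = 2/r^2$ and the area is $4\pi r^2$, so $$ \int d^2 x \sqrt{g}\, \mathcal{R} = \frac{2}{r^2} \cdot 4\pi r^2 = 8\pi = 4\pi\chi, $$ with $\chi = 2$ for the sphere. This pins down the numerical factor alluded to above; it is just the Gauss-Bonnet theorem $\int K\, dA = 2\pi\chi$ with $K = \mathcal{R}/2$.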
Search Now showing items 1-10 of 165 J/Ψ production and nuclear effects in p-Pb collisions at √sNN=5.02 TeV (Springer, 2014-02) Inclusive J/ψ production has been studied with the ALICE detector in p-Pb collisions at the nucleon–nucleon center of mass energy √sNN = 5.02TeV at the CERN LHC. The measurement is performed in the center of mass rapidity ... Measurement of electrons from beauty hadron decays in pp collisions at root √s=7 TeV (Elsevier, 2013-04-10) The production cross section of electrons from semileptonic decays of beauty hadrons was measured at mid-rapidity (|y| < 0.8) in the transverse momentum range 1 < pT <8 GeV/c with the ALICE experiment at the CERN LHC in ... Suppression of ψ(2S) production in p-Pb collisions at √sNN=5.02 TeV (Springer, 2014-12) The ALICE Collaboration has studied the inclusive production of the charmonium state ψ(2S) in proton-lead (p-Pb) collisions at the nucleon-nucleon centre of mass energy √sNN = 5.02TeV at the CERN LHC. The measurement was ... Event-by-event mean pT fluctuations in pp and Pb–Pb collisions at the LHC (Springer, 2014-10) Event-by-event fluctuations of the mean transverse momentum of charged particles produced in pp collisions at s√ = 0.9, 2.76 and 7 TeV, and Pb–Pb collisions at √sNN = 2.76 TeV are studied as a function of the ... Kaon femtoscopy in Pb-Pb collisions at $\sqrt{s_{\rm{NN}}}$ = 2.76 TeV (Elsevier, 2017-12-21) We present the results of three-dimensional femtoscopic analyses for charged and neutral kaons recorded by ALICE in Pb-Pb collisions at $\sqrt{s_{\rm{NN}}}$ = 2.76 TeV. Femtoscopy is used to measure the space-time ... Anomalous evolution of the near-side jet peak shape in Pb-Pb collisions at $\sqrt{s_{\mathrm{NN}}}$ = 2.76 TeV (American Physical Society, 2017-09-08) The measurement of two-particle angular correlations is a powerful tool to study jet quenching in a $p_{\mathrm{T}}$ region inaccessible by direct jet identification. In these measurements pseudorapidity ($\Delta\eta$) and ... Online data compression in the ALICE O$^2$ facility (IOP, 2017) The ALICE Collaboration and the ALICE O2 project have carried out detailed studies for a new online computing facility planned to be deployed for Run 3 of the Large Hadron Collider (LHC) at CERN. Some of the main aspects ... Evolution of the longitudinal and azimuthal structure of the near-side peak in Pb–Pb collisions at $\sqrt{s_{\rm NN}}=2.76$ TeV (American Physical Society, 2017-09-08) In two-particle angular correlation measurements, jets give rise to a near-side peak, formed by particles associated to a higher $p_{\mathrm{T}}$ trigger particle. Measurements of these correlations as a function of ... J/$\psi$ elliptic flow in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 TeV (American Physical Society, 2017-12-15) We report a precise measurement of the J/$\psi$ elliptic flow in Pb-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV with the ALICE detector at the LHC. The J/$\psi$ mesons are reconstructed at mid-rapidity ($|y| < 0.9$) ... Enhanced production of multi-strange hadrons in high-multiplicity proton-proton collisions (Nature Publishing Group, 2017) At sufficiently high temperature and energy density, nuclear matter undergoes a transition to a phase in which quarks and gluons are not confined: the quark–gluon plasma (QGP)1. Such an exotic state of strongly interacting ...
amp-mathml Displays a MathML formula. Required Script <script async custom-element="amp-mathml" src="https://cdn.ampproject.org/v0/amp-mathml-0.1.js"></script> Supported Layouts container Examples amp-mathml.amp.html Behavior This extension creates an iframe and renders a MathML formula. Example: The Quadratic Formula <amp-mathml layout="container" data-formula="\[x = {-b \pm \sqrt{b^2-4ac} \over 2a}.\]"> </amp-mathml> Example: Cauchy's Integral Formula <amp-mathml layout="container" data-formula="\[f(a) = \frac{1}{2\pi i} \oint\frac{f(z)}{z-a}dz\]"> </amp-mathml> Example: Double angle formula for Cosines <amp-mathml layout="container" data-formula="$$ \cos(θ+φ)=\cos(θ)\cos(φ)−\sin(θ)\sin(φ) $$"> </amp-mathml> Example: Inline formula This is an example of a formula of <amp-mathml layout="container" inline data-formula="`x`"></amp-mathml>, <amp-mathml layout="container" inline data-formula="\(x = {-b \pm \sqrt{b^2-4ac} \over 2a}\)"></amp-mathml> placed inline in the middle of a block of text. <amp-mathml layout="container" inline data-formula="\( \cos(θ+φ) \)"></amp-mathml> This shows how the formula will fit inside a block of text and can be styled with CSS. Attributes data-formula (required) Specifies the formula to render. inline (optional) If specified, the component renders inline ( inline-block in CSS). Validation See amp-mathml rules in the AMP validator specification.
Suppose that $A$ and $B$ are DFAs. We know that there is some DFA $M$ such that $L(M) = L(A) \bigtriangleup L(B)$, the symmetric difference. Also, we can construct this $M$ by some Turing machine $N$. But can we ensure that $N$ has the following form? $N$ consists of (i) a read-only input tape, (ii) a work tape that is log-space with respect to $|\langle A, B\rangle|$, and (iii) a one-way, write-only, polynomial-time output tape. This really comes down to showing that this kind of TM can construct DFAs for $L(A) \cup L(B)$ and $L(A) \cap L(B)$. But it's not clear to me how this would work. Any help is appreciated.
Let us denote $X^\top X$ by $A$. By construction, it is a $n\times n$ square symmetric positive semi-definite matrix, i.e. it has an eigenvalue decomposition $A=V\Lambda V^\top$, where $V$ is the matrix of eigenvectors (each column is an eigenvector) and $\Lambda$ is a diagonal matrix of non-negative eigenvalues $\lambda_i$ sorted in the descending order. You want to maximize $$\operatorname{Tr}(D^\top A D),$$ where $D$ has $l$ orthonormal columns. Let us write it as $$\operatorname{Tr}(D^\top V\Lambda V^\top D)=\operatorname{Tr}(\tilde D^\top\Lambda \tilde D)=\operatorname{Tr}\big(\tilde D^\top \operatorname{diag}\{\lambda_i\}\, \tilde D\big)=\sum_{i=1}^n\lambda_i\sum_{j=1}^l\tilde D_{ij}^2.$$ This algebraic manipulation corresponds to rotating the coordinate frame such that $A$ becomes diagonal. The matrix $D$ gets transformed as $\tilde D=V^\top D$ which also has $l$ orthonormal columns. And the whole trace is reduced to a linear combination of eigenvalues $\lambda_i$. What can we say about the coefficients $a_i=\sum_{j=1}^l\tilde D_{ij}^2$ in this linear combination? They are row sums of squares in $\tilde D$, and hence (i) they are all $\le 1$ and (ii) they sum to $l$. If so, then it is rather obvious that to maximize the sum, one should take these coefficients to be $(1,\ldots, 1, 0, \ldots, 0)$, simply selecting the top $l$ eigenvalues. Indeed, if e.g. $a_1<1$ then the sum will increase if we set $a_1=1$ and reduce the size of the last non-zero $a_i$ term accordingly. This means that the maximum will be achieved if $\tilde D$ is the first $l$ columns of the identity matrix. And accordingly if $D$ is the first $l$ columns of $V$, i.e. the first $l$ eigenvectors. QED. (Of course this is a not a unique solution. $D$ can be rotated/reflected with any $l\times l$ orthogonal matrix without changing the value of the trace.) This is very close to my answer in Why does PCA maximize total variance of the projection? This reasoning follows @whuber's comment in that thread: [I]s it not intuitively obvious that given a collection of wallets of various amounts of cash (modeling the non-negative eigenvalues), and a fixed number $k$ that you can pick, that selecting the $k$ richest wallets will maximize your total cash? The proof that this intuition is correct is almost trivial: if you haven't taken the $k$ largest, then you can improve your sum by exchanging the smallest one you took for a larger amount.
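A small numerical check of this argument (my addition; the variable names and sizes are arbitrary): a random matrix $D$ with orthonormal columns never beats the top-$l$ eigenvectors.

import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 6))
A = X.T @ X
l = 2

eigval, eigvec = np.linalg.eigh(A)           # eigenvalues in ascending order
best = eigval[-l:].sum()                     # claimed maximum: sum of the top-l eigenvalues

def random_orthonormal(n, l):
    Q, _ = np.linalg.qr(rng.standard_normal((n, l)))
    return Q

trials = [np.trace(D.T @ A @ D) for D in (random_orthonormal(6, l) for _ in range(10_000))]
print(best, max(trials))

The maximum over random trials stays below best, approaching it only when the random columns happen to span the top eigenspace.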
Using usual notation we have, $SDP(G) \geq OPT(G) \geq Alg_{GW}(G) \geq \alpha_{GW} SDP(G) \geq \alpha_{GW} OPT(G)$ where we mean, $SDP(G)$ = The maximum value that the SDP finds of the objective function $\sum_{(u,v) \in E} w_{uv}( 1-\vec{y_u}.\vec{y_v})/2$ (one unit vector $\vec{y_v}$ in $\mathbb{R}^d$ for each vertex $v$ for some $d$) And $w_{uv}$ are the given edge weights such that $\sum_{(u,v) \in E} w_{uv} =1$ $OPT(G)$ = The actual size of the max-cut for the graph $Alg_{GW}(G)$ = The value of the max-cut returned by the randomized rounding prescription of Goemans-Williamson $\alpha_{GW}$ = the famous constant of $\sim 0.878$ If say someone shows that for every $\epsilon$ there is a graph $G(\epsilon)$ such that $(\alpha_{GW}+\epsilon)OPT(G(\epsilon)) \geq Alg_{GW}(G(\epsilon))$ then does this imply that this is a class of graphs on which the algorithm is performing as bad as it could? Why does showing the existence of such a family of graphs as above also necessarily imply that on this family $SDP(G(\epsilon)) = OPT(G(\epsilon))$ ?
Pole (of a function) $ \newcommand{\abs}[1]{\left| #1 \right|} \newcommand{\set}[1]{\left\{ #1 \right\}} $ An isolated singular point $a$ of single-valued character of an analytic function $f(z)$ of the complex variable $z$ for which $\abs{f(z)}$ increases without bound when $z$ approaches $a$: $\lim_{z\rightarrow a} f(z) = \infty$. In a sufficiently small punctured neighbourhood $V=\set{z\in\C : 0 < \abs{z-a} < R}$ of the point $a \neq \infty$, or $V'=\set{z\in\C : r < \abs{z} < \infty}$ in the case of the point at infinity $a=\infty$, the function $f(z)$ can be written as a Laurent series of special form: \begin{equation} \label{eq1} f(z) = \sum_{k=-m}^\infty c_k (z-a)^k,\quad c_{-m} \neq 0,\ m \geq 1,\ z \in V, \end{equation} or, respectively, \begin{equation} \label{eq2} f(z) = \sum_{k=-m}^\infty \frac{c_k}{z^k},\quad c_{-m} \neq 0,\ m \geq 1,\ z \in V', \end{equation} with finitely many negative exponents if $a\neq\infty$, or, respectively, finitely many positive exponents if $a=\infty$. The natural number $m$ in these expressions is called the order, or multiplicity, of the pole $a$; when $m=1$ the pole is called simple. The expressions \ref{eq1} and \ref{eq2} show that the function $p(z)=(z-a)^m f(z)$ if $a\neq\infty$, or $p(z)=z^{-m}f(z)$ if $a=\infty$, can be analytically continued to a full neighbourhood of the pole $a$, and, moreover, $p(a) \neq 0$. Alternatively, a pole $a$ of order $m$ can also be characterized by the fact that the function $1/f(z)$ has a zero of multiplicity $m$ at $a$. A point $a=(a_1,\ldots,a_n)$ of the complex space $\C^n$, $n\geq2$, is called a pole of the analytic function $f(z)$ of several complex variables $z=(z_1,\ldots,z_n)$ if the following conditions are satisfied: 1) $f(z)$ is holomorphic everywhere in some neighbourhood $U$ of $a$ except at a set $P \subset U$, $a \in P$; 2) $f(z)$ cannot be analytically continued to any point of $P$; and 3) there exists a function $q(z) \not\equiv 0$, holomorphic in $U$, such that the function $p(z) = q(z)f(z)$, which is holomorphic in $U \setminus P$, can be holomorphically continued to the full neighbourhood $U$, and, moreover, $p(a) \neq 0$. Here also $$ \lim_{z\rightarrow a}f(z) = \lim_{z\rightarrow a}\frac{p(z)}{q(z)} = \infty; $$ however, for $n \geq 2$, poles, as with singular points in general, cannot be isolated. References [1] B.V. Shabat, "Introduction to complex analysis" , 2 , Moscow (1976) (In Russian) Comments References [a1] L.V. Ahlfors, "Complex analysis" , McGraw-Hill (1979) pp. Chapt. 8 [a2] H. Grauert, K. Fritzsche, "Several complex variables" , Springer (1976) (Translated from German) [a3] R.M. Range, "Holomorphic functions and integral representation in several complex variables" , Springer (1986) pp. Chapt. 1, Sect. 3 How to Cite This Entry: Pole (of a function). Encyclopedia of Mathematics. URL: http://www.encyclopediaofmath.org/index.php?title=Pole_(of_a_function)&oldid=25728
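For example (an added illustration, not part of the original entry): the function $f(z) = e^z / z^2$ has a pole of order $m = 2$ at $a = 0$. Its Laurent series is $$ \frac{e^z}{z^2} = \frac{1}{z^2} + \frac{1}{z} + \frac{1}{2} + \frac{z}{6} + \cdots, $$ the function $p(z) = z^2 f(z) = e^z$ extends analytically to a full neighbourhood of $0$ with $p(0) = 1 \neq 0$, and $1/f(z) = z^2 e^{-z}$ has a zero of multiplicity $2$ at $0$, in agreement with both characterizations above.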
I came across John Duffield Quantum Computing SE via this hot question. I was curious to see an account with 1 reputation and a question with hundreds of upvotes.It turned out that the reason why he has so little reputation despite a massively popular question is that he was suspended.May I ... @Nelimee Do we need to merge? Currently, there's just one question with "phase-estimation" and another question with "quantum-phase-estimation". Might we as well use just one tag? (say just "phase-estimation") @Blue 'merging', if I'm getting the terms right, is a specific single action that does exactly that and is generally preferable to editing tags on questions. Having said that, if it's just one question, it doesn't really matter although performing a proper merge is still probably preferable Merging is taking all the questions with a specific tag and replacing that tag with a different one, on all those questions, on a tag level, without permanently changing anything about the underlying tags @Blue yeah, you could do that. It generally requires votes, so it's probably not worth bothering when only one question has that tag @glS "Every hermitian matrix satisfy this property: more specifically, all and only Hermitian matrices have this property" ha? I though it was only a subset of the set of valid matrices ^^ Thanks for the precision :) @Nelimee if you think about it it's quite easy to see. Unitary matrices are the ones with phases as eigenvalues, while Hermitians have real eigenvalues. Therefore, if a matrix is not Hermitian (does not have real eigenvalues), then its exponential will not have eigenvalues of the form $e^{i\phi}$ with $\phi\in\mathbb R$. Although I'm not sure whether there could be exceptions for non diagonalizable matrices (if $A$ is not diagonalizable, then the above argument doesn't work) This is an elementary question, but a little subtle so I hope it is suitable for MO.Let $T$ be an $n \times n$ square matrix over $\mathbb{C}$.The characteristic polynomial $T - \lambda I$ splits into linear factors like $T - \lambda_iI$, and we have the Jordan canonical form:$$ J = \begin... @Nelimee no! unitarily diagonalizable matrices are all and only the normal ones (satisfying $AA^\dagger =A^\dagger A$). For general diagonalizability if I'm not mistaken onecharacterization is that the sum of the dimensions of the eigenspaces has to match the total dimension @Blue I actually agree with Nelimee here that it's not that easy. You get $UU^\dagger = e^{iA} e^{-iA^\dagger}$, but if $A$ and $A^\dagger$ do not commute it's not straightforward that this doesn't give you an identity I'm getting confused. I remember there being some theorem about one-to-one mappings between unitaries and hermitians provided by the exponential, but it was some time ago and may be confusing things in my head @Nelimee if there is a $0$ there then it becomes the normality condition. Otherwise it means that the matrix is not normal, therefore not unitarily diagonalizable, but still the product of exponentials is relatively easy to write @Blue you are right indeed. If $U$ is unitary then for sure you can write it as exponential of an Hermitian (time $i$). This is easily proven because $U$ is ensured to be unitarily diagonalizable, so you can simply compute it's logarithm through the eigenvalues. However, logarithms are tricky and multivalued, and there may be logarithms which are not diagonalizable at all. 
I've actually recently asked some questions on math.SE on related topics @Mithrandir24601 indeed, that was also what @Nelimee showed with an example above. I believe my argument holds for unitarily diagonalizable matrices. If a matrix is only generally diagonalizable (so it's not normal) then it's not true also probably even more generally without $i$ factors so, in conclusion, it does indeed seem that $e^{iA}$ unitary implies $A$ Hermitian. It therefore also seems that $e^{iA}$ unitary implies $A$ normal, so that also my argument passing through the spectra works (though one has to show that $A$ is ensured to be normal) Now what we need to look for is 1) The exact set of conditions for which the matrix exponential $e^A$ of a complex matrix $A$, is unitary 2) The exact set of conditions for which the matrix exponential $e^{iA}$ of a real matrix $A$ is unitary @Blue fair enough - as with @Semiclassical I was thinking about it with the t parameter, as that's what we care about in physics :P I can possibly come up with a number of non-Hermitian matrices that gives unitary evolution for a specific t Or rather, the exponential of which is unitary for $t+n\tau$, although I'd need to check If you're afraid of the density of diagonalizable matrices, simply triangularize $A$. You get $$A=P^{-1}UP,$$ with $U$ upper triangular and the eigenvalues $\{\lambda_j\}$ of $A$ on the diagonal.Then$$\mbox{det}\;e^A=\mbox{det}(P^{-1}e^UP)=\mbox{det}\;e^U.$$Now observe that $e^U$ is upper ... There's 15 hours left on a bountied question, but the person who offered the bounty is suspended and his suspension doesn't expire until about 2 days, meaning he may not be able to award the bounty himself?That's not fair: It's a 300 point bounty. The largest bounty ever offered on QCSE. Let h...
Primary groups and pure subgroups. Hi: Let G be a p-primary group that is not divisible. Assume there is $x \in G[p]$ that is divisible by $p^k$ but not by $p^{k+1}$ and let $x= p^k y$. I must prove that $<y> \cap nG \subset n<y>$. $p^{k+1}y= px= 0$. In case $(n,p)=1$, there is $b \in <y>$ such that $y=nb$ and then $sy= snb= n(sb) \in n<y>$. But when $(n,p) \ne 1$ I cannot find a way. Could you tell me how I should proceed? I'm not familiar with the notation G[p], but I assume you mean the following: Let p be a prime and G an abelian p-group. Let k be a non-negative integer and suppose $x\in p^kG\setminus p^{k+1}G$ is of order p; say $p^ky=x$ but $p^{k+1}z=x$ has no solution z. Then <y> is a pure subgroup; i.e. for any integer n, $<y>\cap \,nG\subseteq n<y>$. Proof. For any p-group H and any integer n prime to p, nH=H. So it is sufficient to prove the statement for n of the form $p^m$ for any positive integer m. Now for any finite cyclic p-group <y> of order $p^{k+1}$, the subgroups of <y> form a chain with respect to set inclusion. That is, every subgroup is of the form $<p^sy>$ and $<y>\supset <py>\supset <p^2y>\supset\cdots \supset<p^ky>\supset <0>$. 1. Suppose $m\leq k$. Then $p^{m-1}y\notin p^mG$: otherwise there is $z\in G$ with $p^mz=p^{m-1}y$. Let $r=k-m+1$. Then $p^{k+1}z=p^r(p^mz)=p^r(p^{m-1}y)=p^ky=x$, a contradiction. Since $H=<y>\cap p^mG$ is a subgroup of <y>, $p^{m-1}y\notin H$, and the subgroups of <y> form a chain, it follows that $H\subseteq<p^my>=p^m<y>$. 2. First, $<y>\cap p^{k+1}G=<0>$: if not, then again since the subgroups of <y> form a chain, $<p^ky>\subseteq <y>\cap p^{k+1}G\subseteq p^{k+1}G$, and so $x=p^ky\in p^{k+1}G$, a contradiction. Then for any $m\geq k+1$, $<y>\cap \,p^mG\subseteq <y>\cap \,p^{k+1}G=<0>=<p^my>$.
Geometrical Optics 101: Paraxial Ray Tracing Calculations Ray tracing is the primary method used by optical engineers to determine optical system performance. Ray tracing is the act of manually tracing a ray of light through a system by calculating the angle of refraction/reflection at each surface. This method is extremely useful in systems with many surfaces, where Gaussian and Newtonian imaging equations are unsuitable given the degree of complexity. Today, ray tracing software such as ZEMAX® or CODE V® enable optical engineers to quickly simulate the performance of very complicated systems. Paraxial ray tracing involves small ray angles and heights. To understand the basic principles of paraxial ray tracing, consider the necessary calculations and ray tracing tables employed in manually tracing rays of light through a system. This will in turn highlight the usefulness of modern computing software. PARAXIAL RAY TRACING STEPS: CALCULATING BFL OF A PCX LENS Paraxial ray tracing by hand is typically done with the aid of a ray tracing sheet (Figure 1). The number of optical lens surfaces is indicated horizontally and the key lens parameters vertically. There are also sections to differentiate the marginal and chief ray. Table 1 explains the key optical lens parameters. To illustrate the steps in paraxial ray tracing by hand, consider a plano-convex (PCX) lens. For this example, #49-849 25.4mm Diameter x 50.8mm FL lens is used for simplicity. This particular calculation is used to calculate the back focal length (BFL) of the PCX lens, but it should be noted that ray tracing can be used to calculate a wide variety of system parameters ranging from cardinal points to pupil size and location. Figure 1: Sample Ray Tracing Sheet Step 1: Enter Known Values To begin, enter the known dimensional values of #49-849 into the ray tracing sheet (Figure 2). Surface 0 is the object plane, Surface 1 is the convex surface of the lens, Surface 2 is the plano surface of the lens, and Surface 3 is the image plane (Figure 3). Remember that the curvature (C) is equivalent to 1 divided by the radius of curvature (R). The first thickness value (t) (25mm in this example) is the distance from the object to the first surface of the lens. This value is arbitrary for incident collimated light (i.e. light parallel to the optical axis of the optical lens). The index of refraction (n) can be approximated as 1 in air and as 1.517 for the N-BK7 substrate of the lens. Variable Description C Curvature t Thickness n Index of Refraction Φ Surface Power y Ray Height u Ray Angle Table 1: Optical Lens Parameters for Ray Tracing In Figure 2, the red box is the value to be calculated because it is the distance from the second surface to the point of focus (BFL). The power (Φ) of the individual surfaces is given by the fourth line and is calculated using Equation 1. Note: A negative sign is added to this line to make further calculations easier. In this example, Surface 1 is the only surface with power as it is the only curved surface in the system. (1)$$ \phi = \left( n_2 - n_1 \right) C_1 $$ Figure 2: Entering Known Lens Parameter Values into Ray Tracing Sheet Figure 3: Surfaces of a Plano-Convex (PCX) Lens Step 2: Add a Marginal Ray to the System The next step is to add a marginal ray to the system. Since the PCX lens is spherical with a constant radius of curvature and a collimated input beam is used, the ray height (y) is arbitrary. To simplify calculations, use a height of 1mm. 
A collimated beam also means the initial ray angle (u) is 0 degrees. In the ray tracing sheet, nu is simply the angle of the ray multiplied by the refractive index of that medium. Both variables are included to make subsequent calculations simpler (Figure 4). Figure 4: Adding a Marginal Ray to the Ray Tracing Sheet Step 3: Calculate BFL with Equations and the Ray Tracing Sheet Ray tracing involves two primary equations in addition to the one for calculating power. Equations 2 – 3 are necessary for any ray tracing calculations. (2)$$ y' = y + u't' $$ (3)$$ n'u' = nu - y \phi $$ where an apostrophe denotes the subsequent surface, angle, thickness, etc. In this example, to find the ray height at Surface 2 (y'), take the ray height at Surface 1 (y) and add it to -0.0197 multiplied by 3.296: (2.1)\begin{align} y' & = y + u't' \\ & = y + \left( \frac{t'}{n'} \right) n'u' \\ & = 1 + \left( 3.296 \right) \left( -0.0197 \right) = 0.93508 \end{align} Performing this for ray angle yields the following value. The entire process is repeated until the ray trace is complete (Figure 5). (3.1)\begin{align} n'u' & = nu - y \phi \\ & = -0.0197 + \left( 0.93508 \right) \left( -0 \right) = -0.0197 \end{align} Figure 5: Propagating the Ray through the System Now, solve for the BFL by either adjusting the thickness value until the final ray height is 0 (Figure 6) or by backwards calculating the BFL for a ray height of 0. For #49-849, the final BFL value is 47.48mm. This is very close to the 47.50mm listed in the lens' specifications. The difference is attributed to the rounding error of using an index of refraction of 1.517 instead of a slightly more accurate value that was used when the lens was initially designed. Figure 6: Calculating Back Focal Length of a Plano-Convex (PCX) Lens using a Ray Tracing Sheet DECIPHERING A TWO LENS RAY TRACING SHEET To completely understand a ray tracing sheet, consider a two lens system consisting of a double-concave (DCV) lens, an iris, and a double-convex (DCX) lens (Figures 7 - 8). To learn more about DCV and DCX lenses, please read Understanding Optical Lens Geometries. Figure 7: Double-Concave (DCV) and Double-Convex (DCX) Lens System Figure 8: Sample Double-Concave (DCV) and Double-Convex (DCX) Ray Tracing System The aperture stop is the limiting aperture and defines how much light is allowed through the system. The aperture stop can be an optical lens surface or an iris, but it is always a physical surface. The entrance pupil is the image of the aperture stop when it is imaged through the preceding lens elements into object space. The exit pupil is the image of the aperture stop when it is imaged through the following lens elements into image space. In an optical system, the aperture stop and the pupils are used to define two very important rays. The chief ray is one that begins at the edge of the object and goes through the center of the entrance pupil, exit pupil, and the stop (in other words, it has a height (Ӯ) of 0 at those locations). The chief ray, therefore, defines the size of the object and image and the locations of the pupils. The marginal ray of an optical system begins on-axis at the object plane. This ray encounters the edge of the pupils and stops and crosses the axis at the object and image points. The marginal ray, therefore, defines the location of the object and image and the sizes of the pupils. Aperture Stop Location If the location of the aperture stop is unknown, a trial ray, known as the pseudo marginal ray, must be propagated through the system. 
For an object not at infinity, this ray must begin at the axial position of the object and can have an arbitrary incident angle. For an object at infinity, the ray can begin at an arbitrary height, but must have an incident angle of 0°. Once this is accomplished, the aperture stop is simply the surface that has the smallest CA/y p value, where CA is the surface clear aperture and y p is the height of the pseudo marginal ray at that surface. After locating the aperture stop, the pseudo marginal ray can be scaled appropriately to obtain the actual marginal ray (remember the marginal ray should touch the edge of the aperture stop). Once the size and location of the aperture stop is known, the marginal ray height is equal to the radius of the stop and the chief ray height is zero at that location. Paraxial ray tracing can then be carried out in both the forward and the reverse directions from those points. When doing ray tracing in reverse, Equations 4 – 5 are useful. Note the similarities to Equations 2 – 3. (4)$$ y = y' - u't' $$ (5)$$ nu = n'u' + y \phi $$ Vignetting Analysis Once the location and size of the aperture stop is known, use vignetting analysis to see which surfaces will vignette, or cause rays to be blocked. Vignetting analysis is accomplished by taking the clear aperture at every surface and dividing it by two. That value is then compared to the heights of the chief and marginal rays at that surface (Equation 6). Equation 6 can be easily reordered to Equation 7. If Equation 7 is true, the surface does not vignette. (6)$$ \frac{\text{CA}}{2} \geq \left| \bar{y} \right| + \left| y \right|$$ (7)$$ \frac{\text{CA}}{\left| \bar{y} \right| + \left| y \right|} \geq 2 $$ Notice in the preceding DCV and DCX example how Surface 3 is the aperture stop where the CA/(|Ӯ |+|y|) value is the smallest among all surfaces. Also, none of the surfaces vignette because all values are greater than or equal to 2. Object/Image Size and Location Object (Surface 0) Size is 10mm in diameter (twice the chief ray height at Surface 0) Location is 5mm in front of the first lens (the first thickness value) Image (Surface 6) Size is 18.2554mm in diameter (twice the final chief ray height) Location is 115.4897mm behind the final lens surface (the last thickness value) It is important to note that the Surface 0 chief ray height is positive while the Surface 6 chief ray height is negative. This indicates that the image is inverted. Effective Focal Length To solve for the effective focal length (EFL), it is first necessary to trace a pseudo marginal ray through the system for an object at infinity (i.e. the first ray angle will be 0). In Figure 9, an arbitrary initial height of 1 is chosen to simplify calculations. Once this is accomplished, the EFL of the system is given by Equation 8. Figure 9: Pseudo Marginal Ray (8)\begin{align} \text{EFL} & = \frac{y_1}{nu_{\text{last}}} \\ & = \frac{1}{0.02870} = 34.843 \text{mm} \end{align} (9)\begin{align} \text{FOV} & = 2 \left( n \bar{u} \right) \\ & = 2 \left( 0.182 \right) = 0.364 \text{radians} = 20.856° \end{align} where nū is the first chief ray angle. Lagrange Invariant The optical invariant is a useful tool that allows optical designers to determine various values without having to completely ray trace a system. It is obtained by comparing two rays within a system at any axial point. The optical invariant is constant for any two rays at every point in the system. 
In other words, if the invariant for a set of two rays is known, ray trace one of the rays and then scale that by the invariant to find the second. The Lagrange Invariant is a version of the optical invariant that uses the chief ray and the marginal ray as the two rays of interest. It is solved using Equation 10 and is illustrated in Figure 10. (10)$$ Ж = n \bar{u} y - n u \bar{y} $$ Figure 10: The Lagrange Invariant of Ray Tracing REAL-WORLD RAY TRACING AND SOFTWARE ADVANTAGES Within paraxial ray tracing, there are several assumptions that introduce error into the calculations. Paraxial ray tracing assumes that the tangent and sine of all angles are equal to the angles themselves (in other words, tan(u) = u and sin(u) = u). This approximation is valid for small angles, but can lead to the propagation of error as ray angles increase. Real ray tracing is a method of reducing paraxial error by eliminating the small-angle approximation and by accounting for the sag of each surface to better model the refraction of off-axis rays. As with paraxial ray tracing, real ray tracing can be done by hand with the help of a ray trace sheet. For the sake of brevity, only the paraxial method has been demonstrated. Ray tracing software such as CODE V and ZEMAX use real ray tracing to model user-inputted optical systems. Ray tracing by hand is a tedious process. Consequently, ray tracing software is usually the preferred method of analysis. Figure 11 shows the DCV-DCX system from the section on "Deciphering a Two Lens Ray Tracing Sheet". The following ZEMAX screenshot shows a focal length value of 34.699mm – confirming the paraxial calculation previously performed. Figure 11: Sample ZEMAX System Data Ray tracing is an important tool for any optical designer. While the proliferation of ray tracing software has minimized the need for paraxial ray tracing by hand, it is still useful to understand conceptually how individual rays of light move through an optical system. Paraxial ray tracing and real ray tracing are great ways to approximate optical lens performance before finalizing a design and going into production. Without ray tracing, system design is much more difficult, expensive, and time-intensive. References Dereniak, Eustace L., and Teresa D. Dereniak. "Chapter 10. Paraxial Ray Tracing." In Geometrical and Trigonometric Optics, 255-91. Cambridge, UK: Cambridge University Press, 2008. Geary, Joseph M. "Chapter 4 – Paraxial World." In Introduction to Lens Design: With Practical Zemax Examples, 33-42. Richmond, VA: Willmann-Bell, 2002. Greivenkamp, John E. "Paraxial Raytrace." In Field Guide to Geometrical Optics, 20-32. Vol. FG01. Bellingham, WA: SPIE—The International Society for Optical Engineers, 2004. Smith, Warren J. "Chapter 3. Paraxial Optics and Calculations." In Modern Optical Engineering, 35-51. 4th ed. New York, NY: McGraw-Hill Education, 2007.
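As a wrap-up of the single-lens example from the beginning of this article, the short MATLAB sketch below repeats the y-nu (marginal ray) trace used to find the back focal length. The radius of curvature and center thickness are inferred from the numbers used in that worked example (roughly a 26.25 mm radius and a 5 mm center thickness, with n = 1.517 for N-BK7); treat them as illustrative assumptions rather than catalog values.

% Paraxial y-nu trace of the PCX example, using Equations 1 - 3.
% R1 and t are inferred from the worked example, not catalog data.
R1 = 26.25;                      % convex surface radius [mm] (assumed)
t  = 5.0;                        % lens center thickness [mm] (assumed)
n_air = 1.0;  n_glass = 1.517;   % N-BK7 approximated as 1.517

C1   = 1/R1;                     % curvature of surface 1
phi1 = (n_glass - n_air)*C1;     % surface power (Equation 1)

y1  = 1;  nu0 = 0;               % collimated marginal ray, 1 mm high
nu1 = nu0 - y1*phi1;             % refraction at surface 1 (Equation 3)
y2  = y1 + (t/n_glass)*nu1;      % transfer to the plano surface (Equation 2)
nu2 = nu1;                       % plano surface has zero power
BFL = -y2/nu2;                   % distance for the ray height to reach zero
fprintf('BFL = %.2f mm\n', BFL);

Running it reproduces a back focal length of about 47.5 mm, matching the value found with the ray tracing sheet above.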
Huge cardinal Huge cardinals (and their variants) were introduced by Kenneth Kunen in 1972 as a very large cardinal axiom. Kenneth Kunen first used them to prove that the consistency of the existence of a huge cardinal implies the consistency of $\text{ZFC}$+"there is a $\omega_2$-saturated $\sigma$-ideal on $\omega_1$". It is now known that only a Woodin cardinal is needed for this result. However, the consistency of the existence of an $\omega_2$-complete $\omega_3$-saturated $\sigma$-ideal on $\omega_2$, as far as the set theory world is concerned, still requires an almost huge cardinal. [1] Contents 1 Definitions 2 Consistency strength and size 3 Relative consistency results 4 References Definitions Their formulation is similar to that of the formulation of superstrong cardinals. A huge cardinal is to a supercompact cardinal as a superstrong cardinal is to a strong cardinal, more precisely. The definition is part of a generalized phenomenon known as the "double helix", in which for some large cardinal properties n-$P_0$ and n-$P_1$, n-$P_0$ has less consistency strength than n-$P_1$, which has less consistency strength than (n+1)-$P_0$, and so on. This phenomenon is seen only around the n-fold variants as of modern set theoretic concerns. [2] Although they are very large, there is a first-order definition which is equivalent to n-hugeness, so the $\theta$-th n-huge cardinal is first-order definable whenever $\theta$ is first-order definable. This definition can be seen as a (very strong) strengthening of the first-order definition of measurability. Elementary embedding definitions $\kappa$ is almost n-huge with target $\lambda$iff $\lambda=j^n(\kappa)$ and $M$ is closed under all of its sequences of length less than $\lambda$ (that is, $M^{<\lambda}\subseteq M$). $\kappa$ is n-huge with target $\lambda$iff $\lambda=j^n(\kappa)$ and $M$ is closed under all of its sequences of length $\lambda$ ($M^\lambda\subseteq M$). $\kappa$ is almost n-hugeiff it is almost n-huge with target $\lambda$ for some $\lambda$. $\kappa$ is n-hugeiff it is n-huge with target $\lambda$ for some $\lambda$. $\kappa$ is super almost n-hugeiff for every $\gamma$, there is some $\lambda>\gamma$ for which $\kappa$ is almost n-huge with target $\lambda$ (that is, the target can be made arbitrarily large). $\kappa$ is super n-hugeiff for every $\gamma$, there is some $\lambda>\gamma$ for which $\kappa$ is n-huge with target $\lambda$. $\kappa$ is almost huge, huge, super almost huge, and superhugeiff it is almost 1-huge, 1-huge, etc. respectively. Ultrahuge cardinals A cardinal $\kappa$ is $\lambda$-ultrahuge for $\lambda>\kappa$ if there exists a nontrivial elementary embedding $j:V\to M$ for some transitive class $M$ such that $j(\kappa)>\lambda$, $M^{j(\kappa)}\subseteq M$ and $V_{j(\lambda)}\subseteq M$. A cardinal is ultrahuge if it is $\lambda$-ultrahuge for all $\lambda\geq\kappa$. [1] Notice how similar this definition is to the alternative characterization of extendible cardinals. Furthermore, this definition can be extended in the obvious way to define $\lambda$-ultra n-hugeness and ultra n-hugeness, as well as the " almost" variants. Ultrafilter definition The first-order definition of n-huge is somewhat similar to measurability. Specifically, $\kappa$ is measurable iff there is a nonprincipal $\kappa$-complete ultrafilter, $U$, over $\kappa$. 
A cardinal $\kappa$ is n-huge with target $\lambda$ iff there is a normal $\kappa$-complete ultrafilter, $U$, over $\mathcal{P}(\lambda)$, and cardinals $\kappa=\lambda_0<\lambda_1<\lambda_2...<\lambda_{n-1}<\lambda_n=\lambda$ such that: $$\forall i<n(\{x\subseteq\lambda:\text{order-type}(x\cap\lambda_{i+1})=\lambda_i\}\in U)$$ Where $\text{order-type}(X)$ is the order-type of the poset $(X,\in)$. [1] $\kappa$ is then super n-huge if for all ordinals $\theta$ there is a $\lambda>\theta$ such that $\kappa$ is n-huge with target $\lambda$, i.e. $\lambda_n$ can be made arbitrarily large. If $j:V\to M$ is such that $M^{j^n(\kappa)}\subseteq M$ (i.e. $j$ witnesses n-hugeness) then there is a ultrafilter $U$ as above such that, for all $k\leq n$, $\lambda_k = j^k(\kappa)$, i.e. it is not only $\lambda=\lambda_n$ that is an iterate of $\kappa$ by $j$; all members of the $\lambda_k$ sequence are. As an example, $\kappa$ is 1-huge with target $\lambda$ iff there is a normal $\kappa$-complete ultrafilter, $U$, over $\mathcal{P}(\lambda)$ such that $\{x\subseteq\lambda:\text{order-type}(x)=\kappa\}\in U$. The reason why this would be so surprising is that every set $x\subseteq\lambda$ with every set of order-type $\kappa$ would be in the ultrafilter; that is, every set containing $\{x\subseteq\lambda:\text{order-type}(x)=\kappa\}$ as a subset is considered a "large set." Coherent sequence characterization of almost hugeness Consistency strength and size Hugeness exhibits a phenomenon associated with similarly defined large cardinals (the n-fold variants) known as the double helix. This phenomenon is when for one n-fold variant, letting a cardinal be called n-$P_0$ iff it has the property, and another variant, n-$P_1$, n-$P_0$ is weaker than n-$P_1$, which is weaker than (n+1)-$P_0$. [2] In the consistency strength hierarchy, here is where these lay (top being weakest): measurable = 0-superstrong = 0-huge n-superstrong n-fold supercompact (n+1)-fold strong, n-fold extendible (n+1)-fold Woodin, n-fold Vopěnka (n+1)-fold Shelah almost n-huge super almost n-huge n-huge super n-huge ultra n-huge (n+1)-superstrong All huge variants lay at the top of the double helix restricted to some natural number n, although each are bested by I3 cardinals (the critical points of the I3 elementary embeddings). In fact, every I3 is preceeded by a stationary set of n-huge cardinals, for all n. [1] Similarly, every huge cardinal $\kappa$ is almost huge, and there is a normal measure over $\kappa$ which contains every almost huge cardinal $\lambda<\kappa$. Every superhuge cardinal $\kappa$ is extendible and there is a normal measure over $\kappa$ which contains every extendible cardinal $\lambda<\kappa$. Every (n+1)-huge cardinal $\kappa$ has a normal measure which contains every cardinal $\lambda$ such that $V_\kappa\models$"$\lambda$ is super n-huge" [1], in fact it contains every cardinal $\lambda$ such that $V_\kappa\models$"$\lambda$ is ultra n-huge". Every n-huge cardinal is m-huge for every m<n. Similarly with almost n-hugeness, super n-hugeness, and super almost n-hugeness. Every almost huge cardinal is Vopěnka (therefore the consistency of the existence of an almost-huge cardinal implies the consistency of Vopěnka's principle). [1] Every ultra n-huge is super n-huge and a stationary limit of super n-huge cardinals. Every super almost (n+1)-huge is ultra n-huge and a stationary limit of ultra n-huge cardinals. 
In terms of size, however, the least n-huge cardinal is smaller than the least supercompact cardinal (assuming both exist). [1] This is because n-huge cardinals have upward reflection properties, while supercompacts have downward reflection properties. Thus for any $\kappa$ which is supercompact and has an n-huge cardinal above it, $\kappa$ "reflects downward" that n-huge cardinal: there are $\kappa$-many n-huge cardinals below $\kappa$. On the other hand, the least super n-huge cardinals have both upward and downward reflection properties, and are all much larger than the least supercompact cardinal. It is notable that, while almost 2-huge cardinals have higher consistency strength than superhuge cardinals, the least almost 2-huge is much smaller than the least super almost huge. While not every $n$-huge cardinal is strong, if $\kappa$ is almost $n$-huge with targets $\lambda_1,\lambda_2...\lambda_n$, then $\kappa$ is $\lambda_n$-strong as witnessed by the generated $j:V\prec M$. This is because $j^n(\kappa)=\lambda_n$ is measurable and therefore $\beth_{\lambda_n}=\lambda_n$ and so $V_{\lambda_n}=H_{\lambda_n}$ and because $M^{<\lambda_n}\subset M$, $H_\theta\subset M$ for each $\theta<\lambda_n$ and so $\cup\{H_\theta:\theta<\lambda_n\} = \cup\{V_\theta:\theta<\lambda_n\} = V_{\lambda_n}\subset M$. Every almost $n$-huge cardinal with targets $\lambda_1,\lambda_2...\lambda_n$ is also $\theta$-supercompact for each $\theta<\lambda_n$, and every $n$-huge cardinal with targets $\lambda_1,\lambda_2...\lambda_n$ is also $\lambda_n$-supercompact. The $\omega$-huge cardinals A cardinal $\kappa$ is almost $\omega$-huge iff there is some transitive model $M$ and an elementary embedding $j:V\prec M$ with critical point $\kappa$ such that $M^{<\lambda}\subset M$ where $\lambda$ is the smallest cardinal above $\kappa$ such that $j(\lambda)=\lambda$. Similarly, $\kappa$ is $\omega$-huge iff the model $M$ can be required to have $M^\lambda\subset M$. Sadly, $\omega$-huge cardinals are inconsistent with ZFC by a version of Kunen's inconsistency theorem. Now, $\omega$-hugeness is used to describe critical points of I1 embeddings. Relative consistency results Hugeness of $\omega_1$ In [2] it is shown that if $\text{ZFC +}$ "there is a huge cardinal" is consistent then so is $\text{ZF +}$ "$\omega_1$ is a huge cardinal" (with the ultrafilter characterization of hugeness). Generalizations of Chang's conjecture Cardinal arithmetic in $\text{ZF}$ If there is an almost huge cardinal then there is a model of $\text{ZF+}\neg\text{AC}$ in which every successor cardinal is Ramsey. It follows that for all ordinals $\alpha$ there is no injection $\aleph_{\alpha+1}\to 2^{\aleph_\alpha}$. This in turns imply the failure of the square principle at every infinite cardinal (and consequently $\text{AD}^{L(\mathbb{R})}$, see determinacy). [3] References Kanamori, Akihiro. Second, Springer-Verlag, Berlin, 2009. (Large cardinals in set theory from their beginnings, Paperback reprint of the 2003 edition) www bibtex The higher infinite. Kentaro, Sato. Double helix in large large cardinals and iteration ofelementary embeddings., 2007. www bibtex
$\ce{NH3}$ is a weak base so I would have expected $\ce{NH4+}$ to be a strong acid. I can't find a good explanation anywhere and am very confused. Since only a small proportion of $\ce{NH3}$ molecules turn into $\ce{NH4+}$ molecules, I would have expected a large amount of $\ce{NH4+}$ molecules to become $\ce{NH3}$ molecules. First, let’s get the definition of weak and strong acids or bases out of the way. The way I learnt it (and the way everybody seems to be using it) is: $\displaystyle \mathrm{p}K_\mathrm{a} < 0$ for a strong acid $\displaystyle \mathrm{p}K_\mathrm{b} < 0$ for a strong base $\displaystyle \mathrm{p}K_\mathrm{a} > 0$ for a weak acid $\displaystyle \mathrm{p}K_\mathrm{b} > 0$ for a weak base Thus strong acid and weak base are not arbitrary labels but clear definitions based on an arbitrary measurable physical value — which becomes a lot less arbitrary if you remember that this conincides with acids stronger than $\ce{H3O+}$ or acids weaker than $\ce{H3O+}$. Your point of confusion seems to be a statement that is commonly taught and unquestionably physically correct, which, however, students have a knack of misusing: The conjugate base of a strong acid is a weak base. Maybe we should write that in a more mathematical way: If an acid is strong, its conjugate base is a weak base. Or in mathematical symbolism: $$\mathrm{p}K_\mathrm{a} (\ce{HA}) < 0 \Longrightarrow \mathrm{p}K_\mathrm{b} (\ce{A-}) > 0\tag{1}$$ Note that I used a one-sided arrow. These two expressions are not equivalent. One is the consequence of another. This is in line with another statement that we can write pseudomathematically: If it is raining heavily the street will be wet. $$n(\text{raindrops}) \gg 0 \Longrightarrow \text{state}(\text{street}) = \text{wet}\tag{2}$$ I think we immediately all agree that this is true. And we should also all agree that the reverse is not necessarily true: if I empty a bucket of water on the street, then te street will be wet but it is not raining. Thus: $$\text{state}(\text{street}) = \text{wet} \rlap{\hspace{0.7em}/}\Longrightarrow n(\text{raindrops}) \gg 0\tag{2'}$$ This should serve to show that sometimes, consequences are only true in one direction. Spoiler: this is also the case for the strength of conjugate acids and bases. Why is the clause above on strength and weakness only true in one direction? Well, remember the way how $\mathrm{p}K_\mathrm{a}$ values are defined: $$\begin{align}\ce{HA + H2O &<=> H3O+ + A-} && K_\mathrm{a} (\ce{HA}) = \frac{[\ce{A-}][\ce{H3O+}]}{[\ce{HA}]}\tag{3}\\[0.6em] \ce{A- + H2O &<=> HA + OH-} && K_\mathrm{b} (\ce{A-}) = \frac{[\ce{HA}][\ce{OH-}]}{[\ce{A-}]}\tag{4}\end{align}$$ Mathematically and physically, we can add equations $(3)$ and $(4)$ together giving us $(5)$: $$\begin{align}\ce{HA + H2O + A- + H2O &<=> A- + H3O+ + HA + OH-}&& K = K_\mathrm{a}\times K_\mathrm{b}\tag{5.1}\\[0.6em] \ce{2 H2O &<=> H3O+ + OH-}&&K = K_\mathrm{w}\tag{5.2}\end{align}$$ We see that everything connected to the acid $\ce{HA}$ cancels out in equation $(5)$ (see $(\text{5.2})$) and thus that the equilibrium constant of that reaction is the autodissociation constant of water $K_\mathrm{w}$. 
From that, equations $(6)$ and $(7)$ show us how to arrive at a well-known and important formula: $$\begin{align}K_\mathrm{w} &= K_\mathrm{a} \times K_\mathrm{b}\tag{6}\\[0.6em] 10^{-14} &= K_\mathrm{a} \times K_\mathrm{b}\\[0.6em] 14 &= \mathrm{p}K_\mathrm{a} (\ce{HA}) + \mathrm{p}K_\mathrm{b} (\ce{A-})\tag{7}\end{align}$$ Now let us assume the acid in question is strong, e.g. $\mathrm{p}K_\mathrm{a} (\ce{HA}) = -1$. Then, by definition the conjugate base must be (very) weak: $$\mathrm{p}K_\mathrm{b}(\ce{A-}) = 14- \mathrm{p}K_\mathrm{a}(\ce{HA}) = 14-(-1) = 15\tag{8}$$ Hence, our forward direction of statement $(1)$ holds true. However, the same is not true if we add an arbitrary weak acid to the equation; say $\mathrm{p}K_\mathrm{a} (\ce{HB}) = 5$. Then we get: $$\mathrm{p}K_\mathrm{b} (\ce{B-}) = 14-\mathrm{p}K_\mathrm{a}(\ce{HB}) = 14-5 = 9\tag{9}$$ A base with a $\mathrm{p}K_\mathrm{b} = 9$ is a weak base. Thus, the conjugate base of the weak acid $\ce{HB}$ is a weak base. We realise that we can generate a weak base in two ways: by plugging a strong acid into equation $(7)$ or by plugging a certain weak base. Since the sum of $\mathrm{p}K_\mathrm{a} + \mathrm{p}K_\mathrm{b}$ must equal $14$, it is easy to see that both cannot be strong. However, it is very possible that both the base and the acid are weak. Thus, the reverse statement of $(1)$ is not true. $$\mathrm{p}K_\mathrm{a}(\ce{HA}) < 0 \rlap{\hspace{1em}/}\Longleftarrow \mathrm{p}K_\mathrm{b} (\ce{A-}) > 0\tag{1'}$$ First, let's understand the perspective of weak acid and weak base. This is in relation to pure water — like in most general chemistry courses. Pure Water has a $\mathrm{p}K_\mathrm{a}$ of 14. $\ce{NH3/NH4+}$ has a $\mathrm{p}K_\mathrm{a}$ of 9.25. We know: $$\ce{NH4+ + H2O <--> NH3 + H3O+}$$ Thus, we solve for the $K_\mathrm{a}$ (which depends on the concentration of each of your chemicals): $$K_\mathrm{a} = \frac{[\ce{NH3}][\ce{H3O+}]}{[\ce{NH4+}]}$$ Similarly, we can solve for $K_\mathrm{b}$: We know the ionization equation for a base is: $$\ce{B + H2O <--> HB+ + OH-}$$ Which means: $$\ce{NH3 + H2O <--> NH4+ + OH-}$$ So in order to solve for the $K_\mathrm{b}$, plug the concentration of your chemicals in: $$K_\mathrm{b} = \frac{[\ce{NH4+}][\ce{OH-}]}{[\ce{NH3}]}$$ tl;dr: As the math shows, you can think of these as a "weak but not meaningless" acid/base. Don't let your assumptions throw you off — a weak acid generates a weak base, and vice versa. See the Henderson-Hasselbalch Equation. $\ce{NH3}$ is not in the same class of weak bases as say, $\ce{Cl-}$. The acid to base curve isn't extreme like $\ce{HCl + H2O -> Cl- + H3O+}$. In case a less abstract answer will help: A base, like $\ce{NH_3}$, is a base because it has a significant chance of picking up protons in water. You can almost think of it as a competition between the $\ce{NH_3}$'s and the $\ce{H_2O}$'s to pick up the free protons. It is a weak base because it is not a certainty that all the $\ce{NH_3}$'s in a given sample will pick up a proton and hold onto it. In any given sample at any point in time, a certain portion of the $\ce{NH_4^+}$'s ($\ce{NH_3}$'s that won the proton) will give their protons up and a certain number of $\ce{NH_3}$'s will pick up new protons. Eventually, the system reaches a steady state (quantifiable with $K_b$). Our definition of an "acid" is just something that donates protons, as the $\ce{NH_4^+}$ does above. 
It's not that they're really different things: the $\ce{NH_3}$ is a base when it's picking up protons and an acid when it lets them go.
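Putting the specific numbers quoted above through the $K_\mathrm{w} = K_\mathrm{a} K_\mathrm{b}$ relation (this arithmetic is added here for concreteness; it uses the $\mathrm{p}K_\mathrm{a} = 9.25$ and $K_\mathrm{w} = 10^{-14}$ figures that appear in the answers):

$$K_\mathrm{a}(\ce{NH4+}) = 10^{-9.25} \approx 5.6\times 10^{-10}, \qquad K_\mathrm{b}(\ce{NH3}) = \frac{K_\mathrm{w}}{K_\mathrm{a}} = 10^{-14+9.25} = 10^{-4.75} \approx 1.8\times 10^{-5}.$$

Both $\mathrm{p}K_\mathrm{a} = 9.25 > 0$ and $\mathrm{p}K_\mathrm{b} = 4.75 > 0$, so by the definition at the top of the first answer the acid and its conjugate base are both weak, which is exactly the point being made.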
I want to calculate the kinetic energy of a disk (radius $R$ and mass $M$) rolling without slipping on a horizontal plane, but with center of mass displaced a distance $r$ from the geometrical center. I have no problem with the translational part (center of mass), but when dealing with the rotational $\dfrac{1}{2}I\dot{\theta}^2$ I don't know which expression is the correct one for the moment of inertia. My attempts: $1.$ $I=\dfrac{1}{2}Mr^2$, as a particle rolling around the center. $2.$ $I=I_{CM}+Mr^2$, but in this case (using the parallel axis theorem) I don't know how to calculate the $I_{CM}$, since the mass distribution is unknown (and not uniform because the center of mass is not in the center). $3.$ $I=I_{CM}+Mr^2$, where this time $I_{CM}$ is that of a uniform disk of mass M (so equals $\dfrac{1}{2}MR^2$), but this would be like saying that the disk is rolling about its center of mass, which is not the case. Any help would be appreciated. ---------------------EDIT:-------------------- (changing notation to $r=a$, $\theta=\phi$, $I=I_{CM}$) I found in Landau the same problem (solved): He uses the inertia about the axis through the center of mass, I, and with the parallel axis theorem translates this to the contact point. But now I want to do the same calculation using the approach above. The position of the center of mass is $(-a\sin(\phi)+R\phi,-a\cos(\phi)+R)$, so its velocity is $(-a\dot{\phi}\cos(\phi)+R\dot{\phi},\,a\dot{\phi}\sin(\phi))$, and the translational energy of the center of mass is $\dfrac{1}{2}m(R^2+a^2-2aR\cos{\phi})\dot{\phi}^2$. Now I have to add the rotational part $\dfrac{1}{2}I^{*}\dot{\phi}^2$. So the only way to get the same result as Landau is $I^{*}=I$, but that would say the disk rotates about its center of mass, which I don't think is the case. If instead $I^{*}=I+Ma^2$ (that is, translating the rotation to the center) as suggested, then the result is not the same. What's wrong?
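One way to see that nothing is actually wrong (this reconciliation is added here; it is not part of the original post): in the decomposition $T = T_{\text{CM}} + \tfrac12 I^{*}\dot\phi^{2}$, König's theorem always takes $I^{*}=I$, the moment of inertia about the axis through the center of mass, regardless of where the actual rotation axis is. With that choice,

$$T=\tfrac12 m\big(R^{2}+a^{2}-2aR\cos\phi\big)\dot\phi^{2}+\tfrac12 I\dot\phi^{2} =\tfrac12\Big[I+m\big(R^{2}+a^{2}-2aR\cos\phi\big)\Big]\dot\phi^{2},$$

and $R^{2}+a^{2}-2aR\cos\phi$ is precisely the squared distance from the contact point to the center of mass, so the bracket is the parallel-axis moment of inertia about the contact point, i.e. Landau's expression. Taking $I^{*}=I+Ma^{2}$ instead double-counts the offset $a$, since that term is already contained in the translational energy of the center of mass.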
I think it'd be nice to have a whole set of possible messages, each associated with some equation which has zero on the right hand side. It'd look something like this: "404; Did you just [insert operation that yields zero*]? Because there's nothing here! (Insert relevant equation**)" *Examples; (**Corresponding equations): 1. symmetrize the electromagnetic field strength tensor \(\bigl(F^{(\mu\nu)}=0\bigr)\) 2. take a covariant derivative of the metric \(\bigl(\nabla_\rho g_{\mu\nu}=0\bigr)\) 3. calculate a lightlike interval \(\bigl(ds^2=0\bigr)\) 4. vary the action of the internet \(\bigl(\delta S=0\bigr)\) 5. take the d'Alembertian of a massless field \(\bigl(\square\phi=0\bigr)\) 6. check the Bianchi identities \(\bigl(\nabla_{[\mu}F_{\nu\sigma]}=0\bigr)\ \text{or}\ \bigl(\nabla_{[\lambda}R_{\mu\nu]\rho\sigma}=0\bigr)\) feel free to add other/better ones if you'd like
Developing flow

For forced convective heat and mass transfer with constant properties, the hydrodynamic entrance length is independent of Pr or Sc. When assuming fully developed flow, the point at which the temperature profile becomes fully developed for forced convection in tubes is linearly proportional to RePr. Analysis of these criteria for a fully developed flow and temperature profile shows that when Pr ≫ 1, as is the case with fluids with high viscosities such as oils, the temperature profile takes a longer distance to completely develop. In these circumstances (Pr ≫ 1), it makes sense to assume fully developed velocity since the thermal entrance is much longer than the hydrodynamic entrance. Obviously, from the definition of the Prandtl number and the above criteria, one expects that when Pr ≈ 1, for fluids such as gases, the temperature and velocity develop at the same rate. When Pr ≪ 1, as in the case of liquid metals, the temperature profile will develop much faster than the velocity profile, and therefore a uniform velocity assumption (slug flow) is appropriate.

Similar analysis and conclusions can be made with the Schmidt number, Sc, relative to mass transfer problems concerning the entrance effects due to mass diffusion. If one needs to get detailed information concerning the hydrodynamic, thermal or concentration entrance effects, the conservation equations should be solved without a fully developed velocity, concentration, or temperature profile.

Consider laminar forced convective heat and mass transfer in a circular tube for the case of steady two-dimensional constant properties. The inlet velocity, temperature and concentration are uniform at the entrance, with the possibility of mass transfer between the wall and fluid, as shown in the figure to the right.

Figure: Forced convective heat and mass transfer in a circular tube.

The conservation equations follow from the above assumptions, together with neglecting the viscous dissipation and assuming an incompressible Newtonian fluid. Typical boundary conditions are:

<math>\text{Axial velocity at the wall: } u(x, r_o) = 0 \quad \text{(no-slip boundary condition)}</math>

<math>\text{Radial velocity at the wall: } v(x, r_o) = \begin{cases} v_w = 0 & \text{impermeable wall} \\ v_w > 0 \text{ (injection)},\ v_w < 0 \text{ (suction)} & \\ \dot{m}''_w = \rho\left[\omega_{1,w} v_w - D_{12}\left.\dfrac{\partial \omega_1}{\partial r}\right|_{r=r_o}\right] & \text{mass flux due to diffusion} \end{cases}</math>

<math>\text{Thermal condition at the wall } (r = r_o): \quad \begin{cases} T_w = \text{const.} \\ q''_w = -k\left.\dfrac{\partial T}{\partial r}\right|_{r=r_o} = \text{const., or} \\ T_w = f(x) \ \text{ or } \ q''_w = g(x) \end{cases}</math>

<math>\text{Inlet condition at } x = 0: \quad \begin{cases} T = T_{in} \\ \omega_1 = \omega_{1,in} \\ u = u_{in} \end{cases}</math>

<math>\text{Outlet condition at } x = L: \quad \begin{cases} T = ? \\ \omega_1 = ? \\ u = ? \\ P = ? \end{cases}</math>

Clearly there are five partial differential equations and five unknowns (u, v, P, T, ω_1). All equations are of elliptic nature, and one can neglect the axial diffusion terms, <math>\left(\frac{\partial^2 u}{\partial x^2},\ \frac{\partial^2 v}{\partial x^2},\ \frac{\partial^2 T}{\partial x^2},\ \frac{\partial^2 \omega_1}{\partial x^2}\right)</math>, under some circumstances in order to make the conservation equations of parabolic nature. These axial diffusion terms can also be neglected under boundary layer assumptions. Making boundary layer assumptions makes the result invalid very close to the tube entrance, where the Reynolds number is very small. Shah and London [1] showed that the momentum boundary layer assumption will lead to error if Re < 400 and L_H/D < 0.005Re. In these circumstances, the full Navier–Stokes equations should be solved. It was also shown in Basics of Internal Forced Convection that there are circumstances other than boundary layer assumptions where axial diffusion terms, such as the axial conduction term, can be neglected. However, as we showed in the case of the energy equation, one cannot neglect axial conduction for a very low Prandtl number despite the thermal boundary layer assumption. In general, elliptic equations are more complex to solve analytically or numerically than parabolic equations.
Furthermore, to solve the equations as elliptic you need pertinent information at the outlet as well, which in some cases is unknown. The momentum equation is nonlinear while the energy equation is linear under the constant property assumption. In most cases, the momentum, energy, and species equations are uncoupled, except under the following circumstances, which make the equations coupled. 1. Variable properties, such as density variation as a function of temperature in natural convection problems. 2. Coupled governing equations and/or boundary conditions in phase change problems, such as absorption or dissolution problems. 3. Existence of a source term in one conservation equation that is a function of the dependent variable in another conservation equation. Langhaar obtained approximate solutions for the momentum equation for circular tubes by solving the linearized momentum equation. Hornbeck [2] solved the momentum equation numerically by making boundary layer assumptions (parabolic form). Several investigators solved the energy equation either using Langhaar's approximate velocity profile, or solving the momentum and energy equations numerically for both constant wall temperature and constant wall heat flux in circular tubes. Heat transfer in the hydrodynamic and thermal entrance regions has been solved numerically based on full elliptic governing equations [3] (Bahrami, H., 2009, Personal Communication, Storrs, CT). Variations of local and average Nusselt numbers for different Prandtl numbers under constant wall temperature and constant heat flux using full elliptic governing equations are shown in the two figures to the right, respectively. The local and average Nusselt numbers for different Prandtl numbers and boundary conditions are also presented in the following two tables.

Table: Local and average Nusselt number for the entrance region of a circular tube with constant wall temperature

x+       Nu_x (Pr=0.7)  Nu_x (Pr=2)  Nu_x (Pr=5)  Nu_m (Pr=0.7)  Nu_m (Pr=2)  Nu_m (Pr=5)
0.001    17.0           12.5         10.6         60.7           34.9         24.9
0.002    11.6           8.86         8.04         37.3           22.6         17.0
0.004    8.29           6.64         6.30         23.3           15.1         12.0
0.008    6.29           5.23         5.09         14.9           10.4         8.80
0.01     5.81           4.89         4.80         13.1           9.36         8.03
0.02     4.69           4.12         4.12         8.9            6.89         6.21
0.04     4.01           3.75         3.76         6.46           5.39         5.06
0.06     3.79           3.67         3.68         5.56           4.83         4.61
0.08     3.71           3.66         3.66         5.09           4.54         4.37
0.1      3.68           3.66         3.66         4.81           4.36         4.23
0.12     3.66           3.66         3.66         4.62           4.24         4.14
∞        3.66           3.66         3.66         3.66           3.66         3.66

Table: Local and mean Nusselt number for the entrance region of a circular tube with constant wall heat flux

x+       Nu_x (Pr=0.7)  Nu_x (Pr=2)  Nu_x (Pr=5)  Nu_m (Pr=0.7)  Nu_m (Pr=2)  Nu_m (Pr=5)
0.001    23.0           17.8         15.0         61.0           43.4         34.2
0.002    15.6           12.5         11.1         39.8           29.0         23.5
0.004    10.9           9.16         8.44         26.3           19.8         16.5
0.008    7.92           7.01         6.65         17.7           13.8         11.9
0.01     7.24           6.49         6.22         15.7           12.4         10.8
0.02     5.69           5.30         5.21         11.0           9.11         8.23
0.04     4.81           4.64         4.62         8.09           7.00         6.54
0.06     4.53           4.46         4.45         6.94           6.18         5.87
0.08     4.43           4.39         4.39         6.33           5.74         5.51
0.1      4.39           4.37         4.37         5.95           5.47         5.29
0.12     4.37           4.36         4.36         5.69           5.29         5.13
0.16     4.36           4.36         4.36         5.36           5.23         4.94
∞        4.36           4.36         4.36         4.36           4.36         4.36

Heaton et al. [4] approximated the result for linearized momentum and energy equations using the energy equation for constant wall heat flux for a family of circular tube annuli for several Prandtl numbers. The following table summarizes the results for parallel plates and a circular annulus.
Table: Local Nusselt number for the entrance region of a group of circular tube annuli with constant wall heat flux. Columns: parallel planes (Nu_11, θ*_1) and circular-tube annulus with K = 0.50 (Nu_ii, Nu_oo, θ*_i, θ*_o); each Pr block lists values at successive axial locations, ending at the fully developed limit.

Pr = 0.01
  Nu_11   θ*_1    Nu_ii   Nu_oo   θ*_i    θ*_o
  24.2    0.048   --      24.2    --      0.0322
  11.7    0.117   --      11.8    --      0.0786
  8.8     0.176   9.43    8.9     0.252   0.118
  5.77    0.378   6.4     5.88    0.525   0.231
  5.53    0.376   6.22    5.6     0.532   0.238
  5.39    0.346   6.18    5.04    0.528   0.216

Pr = 0.7
  18.5    0.037   19.22   18.3    0.0513  0.0243
  9.62    0.096   10.47   9.45    0.139   0.063
  7.68    0.154   8.52    7.5     0.228   0.0998
  5.55    0.327   6.35    5.27    0.498   0.207
  5.4     0.345   6.19    5.06    0.527   0.215
  5.39    0.346   6.18    5.04    0.528   0.216

Pr = 10
  15.6    0.0311  16.86   15.14   0.045   0.0201
  9.2     0.092   10.2    8.75    0.136   0.0583
  7.49    0.149   8.43    7.09    0.224   0.0943
  5.55    0.327   6.35    5.2     0.498   0.204
  5.4     0.345   6.19    5.05    0.527   0.215
  5.39    0.346   6.18    5.04    0.528   0.216
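To make the entrance-length discussion at the top of this article concrete, the sketch below compares hydrodynamic and thermal entrance lengths using the common laminar-flow estimates L_h ≈ 0.05 Re D and L_t ≈ 0.05 Re Pr D. Both correlations and the sample numbers are assumptions added here for illustration, not values taken from this article.

% Hydrodynamic vs. thermal entrance lengths for laminar tube flow,
% using the common estimates L_h ~ 0.05*Re*D and L_t ~ 0.05*Re*Pr*D.
% D, Re and the Pr values below are illustrative assumptions.
D  = 0.01;                    % tube diameter [m]
Re = 500;                     % laminar Reynolds number
Pr = [0.01 1 100];            % liquid metal, gas, oil
L_h = 0.05*Re*D;              % hydrodynamic entrance length [m]
L_t = 0.05*Re*Pr*D;           % thermal entrance length [m], one value per Pr
fprintf('L_h = %.3f m\n', L_h);
fprintf('Pr = %6.2f :  L_t = %7.3f m  (L_t/L_h = %g)\n', [Pr; L_t; L_t./L_h]);

With Pr ≫ 1 the thermal entrance dwarfs the hydrodynamic one (so assuming a fully developed velocity profile is reasonable), while with Pr ≪ 1 the situation reverses, matching the slug-flow remark above.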
Production of charged pions, kaons and protons at large transverse momenta in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV (Elsevier, 2014-09) Transverse momentum spectra of $\pi^{\pm}, K^{\pm}$ and $p(\bar{p})$ up to $p_T$ = 20 GeV/c at mid-rapidity, |y| $\le$ 0.8, in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV have been measured using the ALICE detector ...
I am preparing for an exam and am stuck on the following problem: Let $G$ be a group of order $2555 = 5 \cdot 7 \cdot 73$, show that $G$ is cyclic. It is not hard to show that the Sylow-73 subgroup is normal in $G$ and that either the Sylow-5 or the Sylow-7 subgroup is normal. My thought was to prove that both the Sylow-5 and Sylow-7 subgroups are normal, because then the claim would follow, but I am unsure how to proceed. Perhaps using the fact that $G$ mod the Sylow-73 subgroup is cyclic? Any hint would be appreciated. EDIT: So it suffices to show that $G$ is abelian, because there is only one abelian group of order 2555, namely the cyclic group of order $2555$. Let $P_{73}$ denote the Sylow 73-subgroup. It is easy to see that $G/P_{73}$ is cyclic. Further we have that $G$ is abelian if $P_{73} \subseteq Z(G)$ (since then $G/Z(G)$ is a quotient of the cyclic group $G/P_{73}$, hence cyclic, forcing $G$ to be abelian). I believe that $P_{73} \subseteq Z(G)$ can (may?) be shown by letting $G$ act on $P_{73}$ via conjugation. Then by the class equation, $73 = \sum_{i =1}^n |\text{cl}(x_{i})|$ where $x_{1}, \dots, x_{n}$ is a set of representatives of each conjugacy class. I think that one can show that each class $\text{cl}(x_{i})$ is trivial, which implies that $gx_{i}g^{-1} = x_{i}$ for all $g \in G$ or, equivalently, that $gx_{i} = x_{i}g$. This would show that each element of $P_{73}$ is an element of the center of $G$, and then the claim would follow.
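One standard way to push the centre idea through (this sketch is added here; it is not part of the original question): the conjugation action of $G$ on $P_{73}$ gives a homomorphism $G \to \operatorname{Aut}(P_{73})$ with kernel $C_G(P_{73})$, so

$$\left|G/C_G(P_{73})\right| \ \text{divides}\ \left|\operatorname{Aut}(\mathbb{Z}/73)\right| = 72, \qquad \gcd(2555, 72) = 1 \ \Longrightarrow\ C_G(P_{73}) = G,$$

that is, $P_{73} \subseteq Z(G)$. Then $G/Z(G)$ is a quotient of the cyclic group $G/P_{73}$, hence cyclic, so $G$ is abelian and therefore cyclic of order $2555$.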
Numerical Solution of the KdV Contents Introduction We present here a method to solve the KdV equation numerically. There are many different methods to solve the KdV and we use here a spectral method which has been found to work well. Spectral methods work by using the Fourier transform (or some variant of it) to calculate the derivative. Fourier Transform Recall that the Fourier transform is given by [math]\hat{f}(k)=\int_{-\infty }^{\infty } f(x)\, e^{-\mathrm{i} k x}\, \mathrm{d}x[/math] and the inverse Fourier transform is given by [math]f(x)=\frac{1}{2\pi }\int_{-\infty }^{\infty } \hat{f}(k)\, e^{\mathrm{i} k x}\, \mathrm{d}k[/math] (note that there are other ways of writing this transform). The most important property of the Fourier transform is that [math]\widehat{\partial_x f}(k) = \mathrm{i} k \hat{f}(k),[/math] where we have assumed that the function [math]f\left( x\right) [/math] vanishes at [math]\pm \infty .[/math] This means that the Fourier transform converts differentiation to multiplication by [math]\mathrm{i}k[/math]. Solution for the Linearized KdV We begin with a simple example. Suppose we want to solve the linearized KdV equation [math]\partial_t u + \partial_x^3 u = 0[/math] subject to the initial condition [math]u(x,0)=f(x).[/math] We can solve this equation by taking the Fourier transform in [math]x[/math] and obtain [math]\partial_t \hat{u} - \mathrm{i}k^3 \hat{u} = 0.[/math] So that [math]\hat{u}(k,t) = \hat{f}(k)\, e^{\mathrm{i}k^3 t}.[/math] Therefore [math]u(x,t) = \frac{1}{2\pi}\int_{-\infty}^{\infty} \hat{f}(k)\, e^{\mathrm{i}k^3 t} e^{\mathrm{i}kx}\, \mathrm{d}k.[/math] Numerical Implementation Using the FFT The Fast Fourier Transform (FFT) is a method to calculate the Fourier transform efficiently for discrete sets of points. These points need to be evenly spaced in the [math]x[/math] plane and are given by [math]x_n = x_1 + (n-1)\Delta x, \quad n = 1,\ldots,N[/math] (note they can start at any [math]x[/math] value). For the FFT to be as efficient as possible [math]N[/math] should be a power of [math]2.[/math] Corresponding to the discrete set of points in the [math]x[/math] domain is a discrete set of points in the [math]k[/math] plane given by [math]k = \left(0,\ \Delta k,\ 2\Delta k,\ \ldots,\ \tfrac{N}{2}\Delta k,\ -\left(\tfrac{N}{2}-1\right)\Delta k,\ \ldots,\ -\Delta k\right),[/math] where [math]\Delta k=2\pi /(N\Delta x).[/math] Note that this numbering seems slightly odd and is due to aliasing. We are not that interested in the frequency domain solution but we need to make sure that we select the correct values of [math]k[/math] for our numerical code. Numerical Code for the Linear KdV Here is the code to solve the linear KdV using MATLAB
N = 1024; t = 0.1;
x = linspace(-10,10,N);
delta_x = x(2) - x(1);
delta_k = 2*pi/(N*delta_x);
k = [0:delta_k:N/2*delta_k, -(N/2-1)*delta_k:delta_k:-delta_k];  % FFT ordering of wavenumbers
f = exp(-x.^2);                          % initial condition u(x,0)
f_hat = fft(f);
u = real(ifft(f_hat.*exp(1i*k.^3*t)));   % solution of the linear KdV at time t
Numerical Solution of the KdV It turns out that a method to solve the KdV equation can be derived using spectral methods. We begin with the KdV equation written as [math]\partial_t u + 6 u \partial_x u + \partial_x^3 u = 0.[/math] The Fourier transform of the KdV is therefore [math]\partial_t \hat{u} - \mathrm{i}k^3 \hat{u} + 3\mathrm{i}k\, \widehat{u^2} = 0.[/math] We solve this equation by a split step method. We write the equation as [math]\partial_t \hat{u} = \mathrm{i}k^3 \hat{u}[/math] and [math]\partial_t \hat{u} = -3\mathrm{i}k\, \widehat{u^2}.[/math] We can solve the first equation exactly while the second equation needs to be solved by time stepping. The idea of the split step method is to alternately solve each of these equations when stepping from [math]t[/math] to [math] t+\Delta t.[/math] Therefore we solve [math]\hat{u}^{*} = e^{\mathrm{i}k^3 \Delta t}\, \hat{u}(k,t), \qquad \hat{u}(k,t+\Delta t) = \hat{u}^{*} - 3\mathrm{i}k\,\Delta t\, \widehat{u^2}^{\,*}.[/math] Note that we are using Euler's method to time step and that the solution could be improved by using a better method, such as the Runge-Kutta 4 method. The only slightly tricky thing is that we have both [math]u[/math] and [math]\hat{u}[/math] in the equation, but we can simply use the Fourier transform to connect these.
The equation then becomes [math]\hat{u}(k,t+\Delta t) = \hat{u}^{*} - 3\mathrm{i}k\,\Delta t\, \mathrm{fft}\!\left[ \left( \mathrm{ifft}\, \hat{u}^{*} \right)^{2} \right].[/math] Numerical Code to solve the KdV by the split step method Here is some code to solve the KdV using MATLAB
N = 256;
x = linspace(-10,10,N);
delta_x = x(2) - x(1);
delta_k = 2*pi/(N*delta_x);
k = [0:delta_k:N/2*delta_k, -(N/2-1)*delta_k:delta_k:-delta_k];
c = 16;
u = 1/2*c*(sech(sqrt(c)/2*(x+8))).^2;   % soliton initial condition
delta_t = 0.4/N^2;
tmax = 0.1;
nmax = round(tmax/delta_t);
U = fft(u);
for n = 1:nmax
    % first we solve the linear part
    U = U.*exp(1i*k.^3*delta_t);
    % then we solve the non linear part
    U = U - delta_t*(3i*k.*fft(real(ifft(U)).^2));
end
Example Calculations We consider the evolution of the KdV with two solitons as initial condition as given below. Animation Three-dimensional plot.
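A quick way to sanity-check the split-step output (this check is an addition, not part of the original page): for the single-soliton initial condition used in the script above, the exact solution of [math]\partial_t u + 6u\partial_x u + \partial_x^3 u = 0[/math] is the travelling wave [math]u = \tfrac{c}{2}\,\mathrm{sech}^2\!\big(\tfrac{\sqrt{c}}{2}(x - ct + 8)\big)[/math], so the numerical solution at [math]t_{max}[/math] can be compared against it directly. The snippet assumes the variables x, c, tmax and U from the script above are still in the workspace.

% Compare the split-step solution at t = tmax with the exact one-soliton
% solution of u_t + 6*u*u_x + u_xxx = 0 (uses x, c, tmax, U from above).
u_num   = real(ifft(U));
u_exact = 1/2*c*(sech(sqrt(c)/2*(x - c*tmax + 8))).^2;
fprintf('max abs difference = %.3e\n', max(abs(u_num - u_exact)));
plot(x, u_num, 'b-', x, u_exact, 'r--');
legend('split-step', 'exact soliton'); xlabel('x'); ylabel('u');

How small the difference comes out depends on N and delta_t; refining either, or replacing the Euler step for the nonlinear part with a Runge-Kutta step as suggested above, should shrink it.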
Measure algebra (measure theory) $\newcommand{\A}{\mathcal A}\newcommand{\B}{\mathcal B} $A measure algebra is a pair $(\B,\mu)$ where $\B$ is a Boolean σ-algebra and $\mu$ is a (strictly) positive measure on $\B$. The (strict) positivity means $\mu(x)\ge0$ and $\mu(x)=0\iff x=\bszero_{\B}$ for all $x\in\B$. However, about the greatest value $\mu(\bsone_{\B})$ of $\mu$, assumptions differ: from $\mu(\bsone_{\B})=1$ (that is, $\mu$ is a probability measure) in [H2, p. 43] and [K, Sect. 17.F] to $\mu(\bsone_{\B})<\infty$ (that is, $\mu$ is a totally finite measure) in [G, Sect. 2.1] to $\mu(\bsone_{\B})\le\infty$ in [P, Sect. 1.4C], [H1, Sect. 40], [F, Vol. 3, Sect. 321]. A measure algebra of a measure space consists, by definition, of all equivalence classes of measurable sets. (The equivalence is equality mod 0. Sets of the original σ-algebra or its completion give the same result.) Contents On motivation Measure algebras are "a coherent way to ignore the sets of measure $0$ in a measure space" [P, Sect. 1.4C, page 15]. "Many of the difficulties of measure theory and all the pathology of the subject arise from the existence of sets of measure zero. The algebraic treatment gets rid of this source of unpleasantness by refusing to consider sets at all; it considers sets modulo sets of measure zero instead." [H2, page 42] Probability theory without sets of probability zero (in particular, in terms of measure algebras), proposed long ago [S], [D], is "more in agreement with the historical and conceptual development of probability theory" [S, Introduction]. An event is defined here as an element of a $\B$ where $(\B,\mu)$ is a measure algebra; accordingly, a random variable with values in a measurable space $(X,\A)$ is defined as a σ-homomorphism from $\A$ (treated as a Boolean σ-algebra) to $\B$; see [S, p. 727] and [D, p. 273]. "The basic conceptual concern in statistics is not so much with the values of the measurable function $f$ representing a random variable ... as with the sets ... where $f$ takes on certain values (and with the probabilities of those sets)." [S, p. 727] In spite of elegance and other advantages, the measure algebra approach to probability is not the mainstream. When dealing with random processes, "the equivalence-class formulation just will not work: the 'cleverness' of introducing quotient spaces loses the subtlety which is essential even for formulating the fundamental results on existence of continuous modifications, etc., unless one performs contortions which are hardly elegant. Even if these contortions allow one to formulate results, one would still have to use genuine functions to prove them; so where does the reality lie?!" [W, p. xiii] Bad news: we cannot get rid of measure spaces and sets of measure zero. Good news: we can get rid of pathological measure spaces, thus achieving harmony between measure spaces and measure algebras. "Since it can be argued that sets of measure zero are worthless, not only from the algebraic but also from the physical point of view, and since every measure algebra can be represented as the algebra associated with a non-pathological measure space, the poverty of some measure spaces may be safely ignored." [H2, p. 43] Basic notions and facts Let $(\B,\mu)$ be a measure algebra, and $\mu(\bsone_{\B})<\infty$. Defining the distance between $A,B\in\B$ as $\mu(A\Delta B)$ (the measure of their symmetric difference) one turns $B$ into a metric space. This is always a complete metric space. 
If it is separable, the measure algebra $(\B,\mu)$ is also called separable. An atom of $\B$ is, by definition, an element $A\in\B$ such that $A>\bszero_{\B}$ and no $B\in\B$ satisfies $A>B>\bszero_{\B}$. If $\B$ contains no atoms it is called nonatomic (or atomless). The isomorphism theorem Theorem 1. All separable nonatomic normalized measure algebras are mutually isomorphic. Here "normalized" means $\mu(\bsone_{\B})=1$. Isomorphic classification of all totally finite (and σ-finite, and some more; not necessarily separable or nonatomic) measure algebras is available, see [F, Vol. 3 "Measure algebras", Chapter 33 "Maharam's theorem"]. Realization of homomorphisms Every measure preserving map $\phi:X_1\to X_2$ between measure spaces $(X_1,\A_1,\mu_1)$, $(X_2,\A_2,\mu_2)$ induces a homomorphism $\Phi:\B_2\to\B_1$ between their measure algebras $(\B_1,\nu_1)$, $(\B_2,\nu_2)$ as follows: $\Phi(B_2)$ (for $B_2\in\B_2$) is the equivalence class of the inverse image $\phi^{-1}(A_2)$ of some (therefore every) set $A_2\in\A_2$ belonging to the equivalence class $B_2$. In general, a homomorphism $\Phi:\B_2\to\B_1$ is not necessarily induced by some measure preserving map $\phi:X_1\to X_2$ (even if $(X_1,\A_1,\mu_1)=(X_2,\A_2,\mu_2)$ is a probability space and $\Phi$ is an automorphism). According to [F], it is "one of the central problems of measure theory: under what circumstances will a homomorphism between measure algebras be representable by a function between measure spaces?" [F, Vol. 3, Chap. 34, p. 162; see also pp. 174, 182]. References [P] Karl Petersen, "Ergodic theory", Cambridge (1983). MR0833286 Zbl 0507.28010 [H1] P.R. Halmos, "Measure theory", Van Nostrand (1950). MR0033869 Zbl 0040.16802 [H2] P.R. Halmos, "Lectures on ergodic theory", Math. Soc. Japan (1956). MR0097489 Zbl 0073.09302 [G] Eli Glasner, "Ergodic theory via joinings", Amer. Math. Soc. (2003). MR1958753 Zbl 1038.37002 [K] Alexander S. Kechris, "Classical descriptive set theory", Springer-Verlag (1995). MR1321597 Zbl 0819.04002 [F] D.H. Fremlin, "Measure theory", Torres Fremlin, Colchester. Vol. 1: 2004 MR2462519 Zbl 1162.28001; Vol. 2: 2003 MR2462280 Zbl 1165.28001; Vol. 3: 2004 MR2459668 Zbl 1165.28002; Vol. 4: 2006 MR2462372 Zbl 1166.28001 [S] I.E. Segal, "Abstract probability spaces and a theorem of Kolmogoroff", Amer. J. Math. 76 (1954), 721–732. MR0063602 Zbl 0056.12301 [D] L.E. Dubins, "Generalized random variables", Trans. Amer. Math. Soc. 84 (1957), 273–309. MR0085326 Zbl 0078.31003 [W] David Williams, "Probability with martingales", Cambridge (1991). MR1155402 Zbl 0722.60001 [C] Constantin Carathėodory, "Die homomorphieen von Somen und die Multiplikation von Inhaltsfunktionen" (German), Annali della R. Scuola Normale Superiore di Pisa (Ser. 2) 8 (1939), 105–130. MR1556820 Zbl 0021.11403 [HN] P.R. Halmos, J. von Neumann, "Operator methods in classical mechanics, II", Annals of Mathematics (2) 43 (1942), 332–350. MR0006617 Zbl 0063.01888 How to Cite This Entry: Measure algebra (measure theory). Encyclopedia of Mathematics.URL: http://www.encyclopediaofmath.org/index.php?title=Measure_algebra_(measure_theory)&oldid=21757
Vopěnka's principle is a large cardinal axiom at the upper end of the large cardinal hierarchy that is particularly notable for its applications to category theory. In a set theoretic setting, the most common definition is the following: For any language $\mathcal{L}$ and any proper class $C$ of $\mathcal{L}$-structures, there are distinct structures $M, N\in C$ and an elementary embedding $j:M\to N$. For example, taking $\mathcal{L}$ to be the language with one unary and one binary predicate, we can consider for any ordinal $\eta$ the class of structures $\langle V_{\alpha+\eta},\{\alpha\},\in\rangle$, and conclude from Vopěnka's principle that a cardinal that is at least $\eta$-extendible exists. In fact if Vopěnka's principle holds then there is a proper class of extendible cardinals; bounding the strength of the axiom from above, we have that if $\kappa$ is almost huge, then $V_\kappa$ satisfies Vopěnka's principle. Formalisations As stated above and from the point of view of ZFC, this is actually an axiom schema, as we quantify over proper classes, which from a purely ZFC perspective means definable proper classes. A somewhat stronger alternative is to view Vopěnka's principle as an axiom in second-order set theory capable of dealing with proper classes, such as von Neumann–Gödel–Bernays set theory. This is a strictly stronger assertion. [1] Finally, one may relativize the principle to a particular cardinal, leading to the concept of a Vopěnka cardinal. Vopěnka cardinals An inaccessible cardinal $\kappa$ is a Vopěnka cardinal if and only if $V_\kappa$ satisfies Vopěnka's principle, that is, when we interpret the proper classes of $V_\kappa$ as the subsets of $V_\kappa$ of cardinality $\kappa$. As we mentioned above, every almost huge cardinal is a Vopěnka cardinal. Equivalent statements The schema form of Vopěnka's principle is equivalent to the existence of a proper class of $C^{(n)}$-extendible cardinals for every $n$; indeed there is a level-by-level stratification of Vopěnka's principle, where Vopěnka's principle for a $\Sigma_{n+2}$-definable class corresponds to the existence of a $C^{(n)}$-extendible cardinal greater than the ranks of the parameters. [2] Other points to note Whilst Vopěnka cardinals are very strong in terms of consistency strength, a Vopěnka cardinal need not even be weakly compact. Indeed, the definition of a Vopěnka cardinal is a $\Pi^1_1$ statement over $V_\kappa$, and $\Pi^1_1$ indescribability is one of the equivalent definitions of weak compactness.
Thus, the least weakly compact Vopěnka cardinal must have (many) other Vopěnka cardinals less than it. References Bagaria, Joan and Casacuberta, Carles and Mathias, A R D and Rosický, Jiří. Definable orthogonality classes in accessible categories are small.Journal of the European Mathematical Society 17(3):549--589. arχiv bibtex