A remark on an eigenvalue condition for the global injectivity of differentiable maps of $R^2$
1.
Instituto de Ciências Matemáticas e de Computação - USP, Cx. Postal 668, CEP 13560–970, São Carlos, SP, Brazil
2.
Institute of Mathematics, P.O. Box 1078, Hanoi, Vietnam
There does not exist a sequence $\mathbb{R}^2 \ni x_i \rightarrow \infty$ such that $X(x_i) \rightarrow a \in \mathbb{R}^2$ and $DX(x_i)$ has a real eigenvalue $\lambda_i \rightarrow 0$. When the graph of $X$ is an algebraic set, this condition becomes a necessary and sufficient condition for $X$ to be a global diffeomorphism. Mathematics Subject Classification: Primary: 14R15; Secondary: 14E07, 14E09, 14E4. Citation: Carlos Gutierrez, Nguyen Van Chau. A remark on an eigenvalue condition for the global injectivity of differentiable maps of $R^2$. Discrete & Continuous Dynamical Systems - A, 2007, 17 (2) : 397-402. doi: 10.3934/dcds.2007.17.397
|
The Hierarchical Strauss Hard Core Point Process Model
Creates an instance of the hierarchical Strauss-hard core point process model which can then be fitted to point pattern data.
Usage
HierStraussHard(iradii, hradii=NULL, types=NULL, archy=NULL)
Arguments
iradii
Matrix of interaction radii
hradii
Optional matrix of hard core distances
types
Optional; vector of all possible types (i.e. the possible levels of the marks variable in the data)
archy
Optional: the hierarchical order. See Details.
Details
This is a hierarchical point process model for a multitype point pattern (Hogmander and Sarkka, 1999; Grabarnik and Sarkka, 2009). It is appropriate for analysing multitype point pattern data in which the types are ordered so that the points of type \(j\) depend on the points of type \(1,2,\ldots,j-1\).
The hierarchical version of the (stationary) Strauss hard core process with \(m\) types, with interaction radii \(r_{ij}\), hard core distances \(h_{ij}\) and parameters \(\beta_j\) and \(\gamma_{ij}\) is a point process in which each point of type \(j\) contributes a factor \(\beta_j\) to the probability density of the point pattern, and a pair of points of types \(i\) and \(j\) closer than \(r_{ij}\) units apart contributes a factor \(\gamma_{ij}\) to the density
provided \(i \le j\). If any pair of points of types \(i\) and \(j\) lies closer than \(h_{ij}\) units apart, the configuration of points is impossible (probability density zero).
The nonstationary hierarchical Strauss hard core process is similar except that the contribution of each individual point \(x_i\) is a function \(\beta(x_i)\) of location and type, rather than a constant \(\beta\).
The function
ppm(), which fits point process models to point pattern data, requires an argument of class
"interact" describing the interpoint interaction structure of the model to be fitted. The appropriate description of the hierarchical Strauss hard core process pairwise interaction is yielded by the function
HierStraussHard(). See the examples below.
The argument
types need not be specified in normal use. It will be determined automatically from the point pattern data set to which the HierStraussHard interaction is applied, when the user calls
ppm. However, the user should be confident that the ordering of types in the dataset corresponds to the ordering of rows and columns in the matrices
iradii and hradii.
The argument
archy can be used to specify a hierarchical ordering of the types. It can be either a vector of integers or a character vector matching the possible types. The default is the sequence \(1,2, \ldots, m\) meaning that type \(j\) depends on types \(1,2, \ldots, j-1\).
The matrices
iradii and
hradii must be square, with entries which are either positive numbers or zero or
NA. A value of zero or
NA indicates that no interaction term should be included for this combination of types.
Note that only the interaction radii and hard core distances are specified in
HierStraussHard. The canonical parameters \(\log(\beta_j)\) and \(\log(\gamma_{ij})\) are estimated by
ppm(), not fixed in
HierStraussHard().
Value
An object of class
"interact" describing the interpoint interaction structure of the hierarchical Strauss-hard core process with interaction radii \(iradii[i,j]\) and hard core distances \(hradii[i,j]\).
References
Grabarnik, P. and Sarkka, A. (2009) Modelling the spatial structure of forest stands by multivariate point processes with hierarchical interactions.
Ecological Modelling 220, 1232--1240.
Hogmander, H. and Sarkka, A. (1999) Multitype spatial point patterns with hierarchical interactions.
Biometrics 55, 1051--1058. See Also
MultiStraussHard for the corresponding symmetrical interaction.
Aliases
HierStraussHard
Examples
# NOT RUN {
r <- matrix(c(30, NA, 40, 30), nrow=2, ncol=2)
h <- matrix(c(4, NA, 10, 15), 2, 2)
HierStraussHard(r, h)   # prints a sensible description of itself
ppm(ants ~1, HierStraussHard(r, h))   # fit the stationary hierarchical Strauss-hard core process to ants data
# }
Documentation reproduced from package spatstat, version 1.55-1, License: GPL (>= 2) |
Production of $K^*(892)^0$ and $\phi(1020)$ in pp collisions at $\sqrt{s} = 7$ TeV
(Springer, 2012-10)
The production of $K^*(892)^0$ and $\phi(1020)$ in pp collisions at $\sqrt{s} = 7$ TeV was measured by the ALICE experiment at the LHC. The yields and the transverse momentum spectra $d^2 N/dy\,dp_T$ at midrapidity $|y| < 0.5$ in ...
Transverse sphericity of primary charged particles in minimum bias proton-proton collisions at $\sqrt{s}$=0.9, 2.76 and 7 TeV
(Springer, 2012-09)
Measurements of the sphericity of primary charged particles in minimum bias proton--proton collisions at $\sqrt{s}$=0.9, 2.76 and 7 TeV with the ALICE detector at the LHC are presented. The observable is linearized to be ...
Pion, Kaon, and Proton Production in Central Pb--Pb Collisions at $\sqrt{s_{NN}}$ = 2.76 TeV
(American Physical Society, 2012-12)
In this Letter we report the first results on $\pi^\pm$, $K^\pm$, $p$ and $\bar{p}$ production at mid-rapidity ($|y| < 0.5$) in central Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV, measured by the ALICE experiment at the LHC. The ...
Measurement of prompt J/$\psi$ and beauty hadron production cross sections at mid-rapidity in pp collisions at $\sqrt{s} = 7$ TeV
(Springer-verlag, 2012-11)
The ALICE experiment at the LHC has studied J/$\psi$ production at mid-rapidity in pp collisions at $\sqrt{s} = 7$ TeV through its electron pair decay on a data sample corresponding to an integrated luminosity $L_{\rm int}$ = 5.6 nb$^{-1}$. The fraction ...
Suppression of high transverse momentum D mesons in central Pb--Pb collisions at $\sqrt{s_{NN}}=2.76$ TeV
(Springer, 2012-09)
The production of the prompt charm mesons $D^0$, $D^+$, $D^{*+}$, and their antiparticles, was measured with the ALICE detector in Pb-Pb collisions at the LHC, at a centre-of-mass energy $\sqrt{s_{NN}}=2.76$ TeV per ...
J/$\psi$ suppression at forward rapidity in Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV
(American Physical Society, 2012)
The ALICE experiment has measured the inclusive J/$\psi$ production in Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV down to $p_T = 0$ in the rapidity range $2.5 < y < 4$. A suppression of the inclusive J/$\psi$ yield in Pb-Pb is observed with ...
Production of muons from heavy flavour decays at forward rapidity in pp and Pb-Pb collisions at $\sqrt {s_{NN}}$ = 2.76 TeV
(American Physical Society, 2012)
The ALICE Collaboration has measured the inclusive production of muons from heavy flavour decays at forward rapidity, $2.5 < y < 4$, in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV. The $p_T$-differential inclusive ...
Particle-yield modification in jet-like azimuthal dihadron correlations in Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV
(American Physical Society, 2012-03)
The yield of charged particles associated with high-$p_T$ trigger particles ($8 < p_T < 15$ GeV/c) is measured with the ALICE detector in Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV relative to proton-proton collisions at the ...
Measurement of the Cross Section for Electromagnetic Dissociation with Neutron Emission in Pb-Pb Collisions at $\sqrt{s_{NN}}$ = 2.76 TeV
(American Physical Society, 2012-12)
The first measurement of neutron emission in electromagnetic dissociation of $^{208}$Pb nuclei at the LHC is presented. The measurement is performed using the neutron Zero Degree Calorimeters of the ALICE experiment, which ... |
Two-parameter homogenization for a Ginzburg-Landau problem in a perforated domain
1.
Department of Mathematics and Materials Research Institute, Penn State University, University Park, PA 16802
2.
Université de Lyon, Université Lyon 1, Institut Camille Jordan CNRS UMR 5208, 43, boulevard du 11 novembre 1918, F-69622 Villeurbanne, France
We consider the existence of a minimizer of the Ginzburg-Landau energy
$E_\lambda(u)=\frac{1}{2}\int_{A_\delta}\left(|\nabla u|^2+\frac{\lambda}{2}(1-|u|^2)^2\right)$
among all maps $u\in\J$.
It turns out that, under appropriate assumptions on $\lambda=\lambda(\delta)$, existence is governed by the asymptotic behavior of the $H^1$-capacity of $A_\delta$. When the limit of the capacities is $>\pi$, we show that minimizers exist and that they are, when $\delta\to 0$, equivalent to minimizers of the same problem in the subclass of $\J$ formed by the $\mathbb{S}^1$-valued maps. This result parallels the one obtained, for a fixed domain, in [3], and reduces homogenization of the Ginzburg-Landau functional to the one of harmonic maps, already known from [2].
When the limit is $<\pi$, we prove that, for small $\delta$, the minimum is not attained, and that minimizing sequences develop vortices. In the case of a fixed domain, this was proved in [1].
Mathematics Subject Classification: Primary: 35B27; Secondary: 55M2. Citation: Leonid Berlyand, Petru Mironescu. Two-parameter homogenization for a Ginzburg-Landau problem in a perforated domain. Networks & Heterogeneous Media, 2008, 3 (3) : 461-487. doi: 10.3934/nhm.2008.3.461
|
A field $F$ is said to be algebraically closed if each non-constant polynomial in $F[x]$ has a root in $F$.
We prove that every finite field is not algebraically closed.
Proof.
Let $F$ be a finite field and consider the polynomial\[f(x)=1+\prod_{a\in F}(x-a).\]The coefficients of $f(x)$ lie in the field $F$, and thus $f(x)\in F[x]$. Of course, $f(x)$ is a non-constant polynomial.
Note that for each $a \in F$, we have\[f(a)=1\neq 0.\]So the polynomial $f(x)$ has no root in $F$. Hence the finite field $F$ is not algebraically closed.
It follows that every algebraically closed field must be infinite.
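The computation in the proof is easy to sanity-check by machine. Here is a minimal Python sketch (my own illustration, not part of the original post), taking $F = \mathbb{F}_p$ for a prime $p$:

```python
from functools import reduce

def f_of(a, p):
    # f(a) = 1 + prod_{c in F_p} (a - c), computed mod p;
    # the factor with c = a is 0, so the product vanishes and f(a) = 1
    prod = reduce(lambda acc, c: acc * (a - c) % p, range(p), 1)
    return (1 + prod) % p

p = 7
print([f_of(a, p) for a in range(p)])  # [1, 1, 1, 1, 1, 1, 1]: f has no root in F_p
```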
Prove that $\F_3[x]/(x^2+1)$ is a Field and Find the Inverse Elements: Let $\F_3=\Zmod{3}$ be the finite field of order $3$. Consider the ring $\F_3[x]$ of polynomials over $\F_3$ and its ideal $I=(x^2+1)$ generated by $x^2+1\in \F_3[x]$. (a) Prove that the quotient ring $\F_3[x]/(x^2+1)$ is a field. How many elements does the field have? (b) […]
Explicit Field Isomorphism of Finite Fields: (a) Let $f_1(x)$ and $f_2(x)$ be irreducible polynomials over a finite field $\F_p$, where $p$ is a prime number. Suppose that $f_1(x)$ and $f_2(x)$ have the same degrees. Then show that the fields $\F_p[x]/(f_1(x))$ and $\F_p[x]/(f_2(x))$ are isomorphic. (b) Show that the polynomials […]
Application of Field Extension to Linear Combination: Consider the cubic polynomial $f(x)=x^3-x+1$ in $\Q[x]$. Let $\alpha$ be any real root of $f(x)$. Then prove that $\sqrt{2}$ cannot be written as a linear combination of $1, \alpha, \alpha^2$ with coefficients in $\Q$. Proof. We first prove that the polynomial […]
Each Element in a Finite Field is the Sum of Two Squares: Let $F$ be a finite field. Prove that each element in the field $F$ is the sum of two squares in $F$. Proof. Let $x$ be an element in $F$. We want to show that there exist $a, b\in F$ such that\[x=a^2+b^2.\]Since $F$ is a finite field, the characteristic $p$ of the field […]
Galois Group of the Polynomial $x^2-2$: Let $\Q$ be the field of rational numbers. (a) Is the polynomial $f(x)=x^2-2$ separable over $\Q$? (b) Find the Galois group of $f(x)$ over $\Q$. Solution. (a) The polynomial $f(x)=x^2-2$ is separable over $\Q$. The roots of the polynomial $f(x)$ are $\pm […]
Degree of an Irreducible Factor of a Composition of Polynomials: Let $f(x)$ be an irreducible polynomial of degree $n$ over a field $F$. Let $g(x)$ be any polynomial in $F[x]$. Show that the degree of each irreducible factor of the composite polynomial $f(g(x))$ is divisible by $n$. Hint. Use the following fact. Let $h(x)$ be an […] |
Given a number of vectors with $n$ elements, i.e., $S=(a_1, \cdots, a_n)$, $T_j=(b_1^j, \cdots, b_n^j)$ for $j=1,\cdots, m$, where each $a_i$ or $b_i^j$ is a natural number.
Question: determine whether, for every subset $I\subseteq \{1, \cdots, n\}$, there is some $T_j$ ($1\leq j\leq m$) such that $\max\{a_i\mid i\in I\}=\max\{b_i^j\mid i\in I\}$.
Obviously there is an exponential time algorithm to do this (one can just enumerate all $I$), but can we do better to have a polynomial-time algorithm? or is it NP-hard?
Example: $S=(1,2,0)$, $T_1=(2,1,0)$, $T_2=(2,0,1)$, $T_3=(1,0,2)$, $T_4=(0,2,1)$ gives an affirmative answer.
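For concreteness, here is a brute-force checker in Python (my own sketch; it enumerates all $2^n - 1$ non-empty subsets, so it is exactly the exponential-time algorithm mentioned above):

```python
from itertools import combinations

def check(S, Ts):
    n = len(S)
    for r in range(1, n + 1):
        for I in combinations(range(n), r):
            target = max(S[i] for i in I)
            # look for some T_j whose max over I matches S's max over I
            if not any(max(T[i] for i in I) == target for T in Ts):
                return False, I  # witness subset with no matching T_j
    return True, None

S = (1, 2, 0)
Ts = [(2, 1, 0), (2, 0, 1), (1, 0, 2), (0, 2, 1)]
print(check(S, Ts))  # (True, None): the affirmative example above
```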
Motivation: this question is from database research, the background of which is a bit hard to describe precisely here. |
We prove simplicity of all intermediate $C^{\ast}$-algebras $C_{r}^{\ast}(\Gamma)\subseteq \mathcal{B}\subseteq \Gamma\ltimes_{r}C(X)$ in the case of minimal actions of $C^{\ast}$-simple groups $\Gamma$ on compact spaces $X$. For this, we use the notion of stationary states, recently introduced by Hartman and Kalantar [Stationary $C^{\ast}$-dynamical systems. Preprint, 2017, arXiv:1712.10133]. We show that the Powers' averaging property holds for the reduced crossed product $\Gamma\ltimes_{r}\mathcal{A}$ for any action $\Gamma\curvearrowright \mathcal{A}$ of a $C^{\ast}$-simple group $\Gamma$ on a unital $C^{\ast}$-algebra $\mathcal{A}$, and use it to prove a one-to-one correspondence between stationary states on $\mathcal{A}$ and those on $\Gamma\ltimes_{r}\mathcal{A}$.
In this paper, we revisit the theory of induced representations in the setting of locally compact quantum groups. In the case of induction from open quantum subgroups, we show that the constructions of Kustermans and Vaes are equivalent to the classical, and much simpler, construction of Rieffel. We also prove, in the general setting, the continuity of induction in the sense of Vaes with respect to weak containment.
A locally compact group G is compact if and only if its convolution algebras contain non-zero (weakly) completely continuous elements. Dually, G is discrete if its function algebras contain non-zero completely continuous elements. We prove non-commutative versions of these results in the case of locally compact quantum groups.
We prove that if $\rho$ is an irreducible positive definite function in the Fourier–Stieltjes algebra $B(G)$ of a locally compact group $G$ with $\Vert \rho\Vert_{B(G)}=1$, then the iterated powers $(\rho^{n})$, as a sequence of unital completely positive maps on the group $C^{\ast}$-algebra, converge to zero in the strong operator topology.
We show that a regular locally compact quantum group $\mathbb{G}$ is discrete if and only if $\mathcal{L}^{\infty}(\mathbb{G})$ contains non-zero compact operators on $\mathcal{L}^{2}(\mathbb{G})$. As a corollary we classify all discrete quantum groups among regular locally compact quantum groups $\mathbb{G}$ where $\mathcal{L}^{1}(\mathbb{G})$ has the Radon–Nikodym property.
In this paper we use the recent developments in the representation theory of locally compact quantum groups to assign to each locally compact quantum group $\mathbb{G}$ a locally compact group $\tilde{\mathbb{G}}$ that is the quantum version of point-masses, and is an invariant for the latter. We show that "quantum point-masses" can be identified with several other locally compact groups that can be naturally assigned to the quantum group $\mathbb{G}$. This assignment preserves compactness as well as discreteness (hence also finiteness), and, for large classes of quantum groups, amenability. We calculate this invariant for some of the most well-known examples of non-classical quantum groups. Also, we show that several structural properties of $\mathbb{G}$ are encoded by $\tilde{\mathbb{G}}$: the latter, despite being a simpler object, can carry very important information about $\mathbb{G}$.
|
I’m extremely agitated today. I dunno why. Maybe because there was some convulsion in the peaceful tidings of the house I live in, or the fact that I’m kinda hungry at the moment. Anyways, I don’t have time for chitchat. Let’s get to the studying.
The following is taken from
Foundations of Machine Learning by Mohri, Rostamizadeh, and Talwalkar.
Support Vector Machines are among the most theoretically well-motivated and practically effective classification algorithms in modern machine learning.
Consider an input space $\mathcal{X}$ that is a subset of $\mathbb{R}^N$ with $N \geq 1$, and the output or target space $\mathcal{Y}=\{-1, +1\}$, and let $f : \mathcal{X} \rightarrow \mathcal{Y} $ be the target function. Given a hypothesis set $\mathcal{H}$ of functions mapping $\mathcal{X}$ to $\mathcal{Y}$, the binary classification task is formulated as follows:
The learner receives a training sample $S$ of size $m$ drawn independently and identically according to some unknown distribution $\mathcal{D}$ over $\mathcal{X}$, $S = ((x_1, y_1), \ldots, (x_m, y_m)) \in (\mathcal{X}\times\mathcal{Y})^m$, with $y_i = f(x_i)$ for all $i \in [m]$. The problem consists of determining a hypothesis $h \in \mathcal{H}$, a
binary classifier, with small generalization error: the probability that $h$ disagrees with the target function $f$,
\[ R_{\mathcal{D}}(h) = \underset{x\sim\mathcal{D}}{\mathbb{P}} [h(x) \neq f(x)]. \]
Different hypothesis sets $\mathcal{H}$ can be selected for this task. Hypothesis sets with smaller complexity provide better learning guarantees, everything else being equal. A natural hypothesis set with relatively small complexity is that of a linear classifier, or hyperplanes, which can be defined as follows:
\[ \mathcal{H}= \{x \rightarrow sign(w.x+b) : w \in \mathbb{R}^N, b \in \mathbb{R}\} \]
The learning problem is then referred to as a
linear classification problem. The general equation of a hyperplane in $\mathbb{R}^N$ is $w.x+b=0$, where $w\in\mathbb{R}^N$ is a non-zero vector normal to the hyperplane and $b\in\mathbb{R}$ is a scalar. A hypothesis of the form $x\rightarrow sign(w.x+b)$ thus labels positively all points falling on one side of the hyperplane $w.x+b=0$ and negatively all others.
From now until we say so, we'll assume that the training sample $S$ can be linearly separated, that is, we assume the existence of a hyperplane that perfectly separates the training sample into two populations of positively and negatively labeled points, as illustrated by the left panel of the figure below. This is equivalent to the existence of $(\boldsymbol{w}, b) \in (\mathbb{R}^N \setminus \{\boldsymbol{0}\}) \times \mathbb{R}$ such that:
\[ \forall i \in [m], \quad y_i(\boldsymbol{w}.x_i + b) \geq 0 \]
But, as you can see above, there are then infinitely many such separating hyperplanes. Which hyperplane should a learning algorithm select? The definition of the SVM solution is based on the notion of
geometric margin.
Let’s define what we just came up with: The geometric margin $\rho_h(x)$ of a
linear classifier $h: x \rightarrow \boldsymbol{w}.x + b$ at a point $x$ is its Euclidean distance to the hyperplane $\boldsymbol{w}.x+b=0$:
\[ \rho_h(x) = \frac{|w.x+b|}{||w||_2} \]
The geometric margin $\rho_h$ of a linear classifier $h$ for a sample $S = (x_1, \ldots, x_m)$ is the minimum geometric margin over the points in the sample, $\rho_h = \min_{i\in[m]} \rho_h(x_i)$, that is, the distance from the hyperplane defining $h$ to the closest sample points.
So what is the solution? The separating hyperplane with the maximum geometric margin is known as the
maximum-margin hyperplane. The right panel of the figure above illustrates the maximum-margin hyperplane returned by the SVM algorithm in the separable case. We will present later in this chapter a theory that provides a strong justification for this solution. We can observe already, however, that the SVM solution can also be viewed as the safest choice in the following sense: a test point is classified correctly by a separating hyperplane with geometric margin $\rho$ even when it falls within a distance $\rho$ of the training samples sharing the same label; for the SVM solution, $\rho$ is the maximum geometric margin and thus the safest value.
We now derive the equations and optimization problem that define the SVM solution. By definition of the geometric margin, the maximum margin $\rho$ of a separating hyperplane is given by:
\[ \rho = \underset{w,b : y_i(w.x_i+b) \geq 0}{max}\,\underset{i\in[m]}{min}\frac{|w.x_i+b|}{||w||} = \underset{w,b}{max}\,\underset{i\in[m]}{min}\frac{y_i(w.x_i+b)}{||w||} \]
The second equality follows from the fact that, since the sample is linearly separable, for the maximizing pair $(w, b)$, $y_i(w.x_i+b)$ must be non-negative for all $i\in[m]$. Now, observe that the last expression is invariant to multiplication of $(w, b)$ by a positive scalar. Thus, we can restrict ourselves to pairs $(\boldsymbol{w},b)$ scaled such that $\min_{i\in[m]} y_i(\boldsymbol{w}.x_i+b) = 1$:
\[ \rho = \underset{min_{i\in[m]}y_i(w.x_i+b)=1}{max}\frac{1}{||w||} = \underset{\forall i \in[m],\, y_i(w.x_i+b) \geq 1}{max}\frac{1}{||w||} \]
Figure below illustrates the solution $(w, b)$ of the maximization we just formalized. In addition to the maximum-margin hyperplane, it also shows the
marginal hyperplanes, which are the hyperplanes parallel to the separating hyperplane and passing through the closest points on the negative or positive sides.
Since maximizing $1/||w||$ is equivalent to minimizing $\frac{1}{2}||w||^2$, in view of the equation above, the pair $(\boldsymbol{w}, b)$ returned by the SVM in the separable case is the solution of the following convex optimization problem:
\[ \underset{w, b}{min}\frac{1}{2}||w||^2 \]\[ \text{subject to}: y_i(\boldsymbol{w}.x_i+b) \geq 1, \forall i \in[m] \]
Since the objective function is quadratic and the constraints are
affine (linear in $(w,b)$ plus a constant), the optimization problem above is in fact a specific instance of quadratic programming (QP), a family of problems extensively studied in optimization. A variety of commercial and open-source solvers are available for solving convex QP problems. Additionally, motivated by the empirical success of SVMs along with their rich theoretical underpinnings, specialized methods have been developed to solve this particular convex QP problem more efficiently, notably block coordinate descent algorithms with blocks of just two coordinates.
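To make the QP concrete, here is a minimal sketch in Python using the generic convex solver cvxpy (my own illustration, not from the book; the toy data X, y are made up and assumed linearly separable):

```python
import cvxpy as cp
import numpy as np

# Toy linearly separable data (an assumption for this sketch).
X = np.array([[2.0, 2.0], [1.5, 2.5], [-2.0, -1.0], [-1.0, -2.5]])
y = np.array([1.0, 1.0, -1.0, -1.0])
m, N = X.shape

w = cp.Variable(N)
b = cp.Variable()

# Primal hard-margin SVM: minimize (1/2)||w||^2  s.t.  y_i (w.x_i + b) >= 1.
problem = cp.Problem(cp.Minimize(0.5 * cp.sum_squares(w)),
                     [cp.multiply(y, X @ w + b) >= 1])
problem.solve()
print(w.value, b.value)  # parameters of the maximum-margin hyperplane
```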
So
what are support vectors? Looking at the optimization problem above, note that the constraints are affine; the solution can therefore be analysed through its Lagrangian.
We introduce Lagrange variables $\alpha_i \geq 0$, $i\in[m]$, associated to the $m$ constraints, and denote by $\boldsymbol{\alpha}$ the vector $(\alpha_1, \ldots, \alpha_m)^T$. The Lagrangian can then be defined for all $\boldsymbol{w}\in\mathbb{R}^N, b\in\mathbb{R}$, and $\boldsymbol{\alpha}\in\mathbb{R}_+^m$ by:
\[ \mathcal{L}(\boldsymbol{w},b,\boldsymbol{\alpha}) = \frac{1}{2}||w||^2 - \sum_{i = 1}^{m}\alpha_i[y_i(w.x_i+b) -1] \]
The training vectors $x_i$ that lie on the marginal hyperplanes are called support vectors. Support vectors fully define the maximum-margin hyperplane or SVM solution, which justifies the name of the algorithm. By definition, vectors not lying on the marginal hyperplanes do not affect the definition of these hyperplanes: in their absence, the solution remains unchanged. Note that while the solution to the SVM problem is unique, the support vectors are not. In dimension $N$, $N+1$ points are sufficient to define a hyperplane. Thus, when more than $N+1$ points lie on the marginal hyperplanes, different choices are possible for the $N+1$ support vectors.
But the points in the space are not always separable. In most practical settings, the training data is not linearly separable, which implies that for any hyperplane $\boldsymbol{w.x}+b=0$, there exists $x_i \in S$ such that:
\[ y_i[\boldsymbol{w.x_i}+b] \ngeq 1 \]
Thus, the constraints imposed in the linearly separable case cannot all hold simultaneously. However, a relaxed version of these constraints can indeed hold, that is, for each $i\in[m]$, there exists $\xi_i \geq 0$ such that:
\[ y_i[\boldsymbol{w.x_i}+b] \geq 1-\xi_i \]
The variables $\xi_i$ are known as
slack variables and are commonly used in optimization to define relaxed versions of constraints. Here, a slack variable $\xi_i$ measures the distance by which vector $x_i$ violates the desired inequality, $y_i(\boldsymbol{w}.x_i + b) \geq 1$. This figure illustrates the situation:
For the marginal hyperplane $y_i(w.x_i+b) = 1$, a vector $x_i$ with $\xi_i > 0$ can be viewed as an
outlier. Each $x_i$ must be positioned on the correct side of the appropriate marginal hyperplane. Here's the formula we use to optimize the non-separable case:
\[ \underset{w, b, \xi}{min} \frac{1}{2}||w||^2 + C\sum_{i=1}^{m}\xi_i^p \]\[ \text{subject to} \quad y_i(w.x_i+b) \geq 1-\xi_i \wedge \xi_i \geq 0, i\in[m] \]
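The same solver sketch extends to this soft-margin formulation with $p = 1$ (again my own illustration, with made-up data and an arbitrary choice of $C$):

```python
import cvxpy as cp
import numpy as np

X = np.array([[2.0, 2.0], [1.5, 2.5], [0.3, 0.2], [-2.0, -1.0], [-1.0, -2.5]])
y = np.array([1.0, 1.0, -1.0, -1.0, -1.0])  # data no longer assumed separable
m, N = X.shape
C = 1.0  # trade-off between margin size and total slack

w, b, xi = cp.Variable(N), cp.Variable(), cp.Variable(m)
problem = cp.Problem(
    cp.Minimize(0.5 * cp.sum_squares(w) + C * cp.sum(xi)),
    [cp.multiply(y, X @ w + b) >= 1 - xi, xi >= 0])
problem.solve()
print(w.value, b.value, xi.value)  # nonzero xi_i indicate margin violations, if any
```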
Okay! Alright! I think I understand it now. That's enough classification for today. I'm going to study something FUN next. Although I'm a bit drowsy… No matter! I have some energy drinks at home. Plus I have some methamphetamine which I have acquired to boost my energy… Nah, kidding. I'm a cocaine man! |
The nLab says the following about closed monoidal functor categories:
Let $C$ be a complete closed monoidal category and $I$ any small category. Then the functor category $[I, C]$ is closed monoidal with the pointwise tensor product, $(F \otimes G)(x) = F(x) \otimes G(x)$.
Now I wonder what the right adjoint of $F \otimes {-}$ is. I suppose that it fulfills the following equation (which is a generalization of the equation for exponentials in functor categories):
$$(F \multimap G)(x) = \int_{y : I} \prod_{I(x, y)} F(y) \multimap G(y)$$
Is this correct? And if yes, is there a more standard way of representing the right adjoint? |
And I think people said that reading first chapter of Do Carmo mostly fixed the problems in that regard. The only person I asked about the second pset said that his main difficulty was in solving the ODEs
Yeah here there's the double whammy in grad school that every grad student has to take the full year of algebra/analysis/topology, while a number of them already don't care much for some subset, and then they only have to pass the class
I know 2 years ago apparently it mostly avoided commutative algebra, half because the professor himself doesn't seem to like it that much and half because he was like yeah the algebraists all place out so I'm assuming everyone here is an analyst and doesn't care about commutative algebra
Then the year after another guy taught and made it mostly commutative algebra + a bit of varieties + Cech cohomology at the end from nowhere and everyone was like uhhh. Then apparently this year was more of an experiment, in part from requests to make things more geometric
It's got 3 "underground" floors (quotation marks because the place is on a very tall hill, so the first 3 floors are a good bit above the street), and then 9 floors above ground. The grad lounge is on the top floor and overlooks the city and lake, it's real nice
The basement floors have the library and all the classrooms (each of them has a lot more area than the higher ones), floor 1 is basically just the entrance, I'm not sure what's on the second floor, 3-8 are all offices, and 9 has the grad lounge mainly
And then there's one weird area called the math bunker that's trickier to access, you have to leave the building from the first floor, head outside (still walking on the roof of the basement floors), go to this other structure, and then get in. Some number of grad student cubicles are there (other grad students get offices in the main building)
It's hard to get a feel for which places are good at undergrad math. Highly ranked places are known for having good researchers but there's no "How well does this place teach?" ranking which is kinda more relevant if you're an undergrad
I think interest might have started the trend, though it is true that grad admissions now is starting to make it closer to an expectation (friends of mine say that for experimental physics, classes and all definitely don't cut it anymore)
In math I don't have a clear picture. It seems there are a lot of Mickey Mouse projects that don't seem to help people much, but more and more people seem to do more serious things, and that seems to become a bonus
One of my professors said it to describe a bunch of REUs, basically boils down to problems that some of these give their students which nobody really cares about but which undergrads could work on and get a paper out of
@TedShifrin i think universities have been ostensibly a game of credentialism for a long time, they just used to be gated off to a lot more people than they are now (see: ppl from backgrounds like mine) and now that budgets shrink to nothing (while administrative costs balloon) the problem gets harder and harder for students
In order to show that $x=0$ is asymptotically stable, one needs to show that $$\forall \varepsilon > 0, \; \exists\, T > 0 \; \mathrm{s.t.} \; t > T \implies || x ( t ) - 0 || < \varepsilon.$$The intuitive sketch of the proof is that one has to fit a sublevel set of continuous functions $...
"If $U$ is a domain in $\Bbb C$ and $K$ is a compact subset of $U$, then for all holomorphic functions on $U$, we have $\sup_{z \in K}|f(z)| \leq C_K \|f\|_{L^2(U)}$ with $C_K$ depending only on $K$ and $U$" this took me way longer than it should have
Well, $A$ has these two distinct eigenvalues, meaning that $A$ can be diagonalised to a diagonal matrix with these two values on its diagonal. What will that mean when it is multiplied with a given vector $(x,y)$, and how will the magnitude of that vector change?
Alternately, compute the operator norm of $A$ and see if it is larger or smaller than 2, 1/2
Generally speaking, given $\alpha=a+b\sqrt{\delta}$, $\beta=c+d\sqrt{\delta}$, we have that multiplication (which I am writing as $\otimes$) is $\alpha\otimes\beta=(a\cdot c+b\cdot d\cdot\delta)+(b\cdot c+a\cdot d)\sqrt{\delta}$
Yep, the reason I am exploring alternative routes of showing associativity is because writing out three elements' worth of variables is taking up more than a single line in LaTeX, and that is really bugging my desire to keep things straight.
hmm... I wonder if you can argue about the rationals forming a ring (hence using commutativity, associativity and distributivity). You cannot do that for the field you are calculating, but you might be able to take shortcuts by using the multiplication rule and then properties of the ring $\Bbb{Q}$
for example writing $x = ac+bd\delta$ and $y = bc+ad$ we then have $(\alpha \otimes \beta) \otimes \gamma = (xe +yf\delta) + (ye + xf)\sqrt{\delta}$ and then you can argue with the ring property of $\Bbb{Q}$ thus allowing you to deduce $\alpha \otimes (\beta \otimes \gamma)$
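If it helps, the associativity claim for this multiplication rule can also be checked symbolically; a quick Python/SymPy sketch (mine, not anyone's posted code):

```python
import sympy as sp

a, b, c, d, e, f, delta = sp.symbols('a b c d e f delta')

def mult(p, q):
    # (x + y*sqrt(delta)) (u + v*sqrt(delta)) = (xu + yv*delta) + (yu + xv) sqrt(delta)
    (x, y), (u, v) = p, q
    return (x*u + y*v*delta, y*u + x*v)

lhs = mult(mult((a, b), (c, d)), (e, f))   # (alpha ⊗ beta) ⊗ gamma
rhs = mult((a, b), mult((c, d), (e, f)))   # alpha ⊗ (beta ⊗ gamma)
print([sp.expand(L - R) for L, R in zip(lhs, rhs)])  # [0, 0]
```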
I feel like there's a vague consensus that an arithmetic statement is "provable" if and only if ZFC proves it. But I wonder what makes ZFC so great, that it's the standard working theory by which we judge everything.
I'm not sure if I'm making any sense. Let me know if I should either clarify what I mean or shut up. :D
Associativity proofs in general have no shortcuts for arbitrary algebraic systems; that is why non-associative algebras are more complicated and need things like Lie algebra machinery and morphisms to make sense of them
One aspect, which I will illustrate, of the "push-button" efficacy of Isabelle/HOL is its automation of the classic "diagonalization" argument by Cantor (recall that this states that there is no surjection from the naturals to its power set, or more generally any set to its power set).theorem ...
The axiom of triviality is also used extensively in computer verification languages... take Cantor's diagonalization theorem. It is obvious.
(but seriously, the best tactic is over powered...)
Extensions are such a powerful idea. I wonder if there exists an algebraic structure such that any extension of it will produce a contradiction. Oh wait, there are maximal algebraic structures such that, given some ordering, each is the largest possible, e.g. the surreals are the largest ordered field possible
It says on Wikipedia that any ordered field can be embedded in the Surreal number system. Is this true? How is it done, or if it is unknown (or unknowable) what is the proof that an embedding exists for any ordered field?
Here's a question for you: We know that no set of axioms will ever decide all statements, from Gödel's Incompleteness Theorems. However, do there exist statements that cannot be decided by any set of axioms except ones which contain one or more axioms dealing directly with that particular statement?
"Infinity exists" comes to mind as a potential candidate statement.
Well, take ZFC as an example: CH is independent of ZFC, meaning you cannot prove nor disprove CH using anything from ZFC. However, there are many axioms equivalent to CH or deriving CH; thus if your set of axioms contains those, then you can decide the truth value of CH in that system
@Rithaniel That is really the crux of those rambles about infinity I made in this chat some weeks ago. I wonder whether one can show that is false by finding a finite sentence and procedure that can produce infinity
but so far failed
Put it in another way, an equivalent formulation of that (possibly open) problem is:
> Does there exist a computable proof verifier P such that the axiom of infinity becomes a theorem without assuming the existence of any infinite object?
If you were to show that you can attain infinity from finite things, you'd have a bombshell on your hands. It's widely accepted that you can't. In fact, I believe there are some proofs floating around that you can't attain infinity from the finite.
My philosophy of infinity, however, is not good enough, as implicitly pointed out when many users who engaged with my rambles always managed to find counterexamples that escape every definition of an infinite object I proposed, which is why you don't see my rambles about infinity in recent days, until I finish reading that philosophy of infinity book
The knapsack problem or rucksack problem is a problem in combinatorial optimization: Given a set of items, each with a weight and a value, determine the number of each item to include in a collection so that the total weight is less than or equal to a given limit and the total value is as large as possible. It derives its name from the problem faced by someone who is constrained by a fixed-size knapsack and must fill it with the most valuable items.The problem often arises in resource allocation where there are financial constraints and is studied in fields such as combinatorics, computer science...
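Since the excerpt above describes the 0/1 variant, a minimal dynamic-programming sketch in Python may be useful (my own illustration; the weights, values and capacity are made up):

```python
def knapsack(weights, values, capacity):
    # dp[c] = best total value achievable with total weight <= c
    dp = [0] * (capacity + 1)
    for w, v in zip(weights, values):
        # iterate capacities downward so each item is used at most once
        for c in range(capacity, w - 1, -1):
            dp[c] = max(dp[c], dp[c - w] + v)
    return dp[capacity]

print(knapsack([3, 4, 5], [30, 50, 60], 8))  # 90: take the items of weight 3 and 5
```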
O great, given a transcendental $s$, computing $\min_P(|P(s)|)$ is a knapsack problem
hmm...
By the fundamental theorem of algebra, every complex polynomial $P$ can be expressed as:
$$P(x) = \prod_{k=0}^n (x - \lambda_k)$$
If the coefficients of $P$ are natural numbers, then all $\lambda_k$ are algebraic
Thus given $s$ transcendental, to minimise $|P(s)|$ will be given as follows:
The first thing I think of with that particular one is to replace the $(1+z^2)$ with $z^2$. Though, this is just at a cursory glance, so it would be worth checking to make sure that such a replacement doesn't have any ugly corner cases.
In number theory, a Liouville number is a real number x with the property that, for every positive integer n, there exist integers p and q with q > 1 such that $0 < \left|x - \frac{p}{q}\right| < \frac{1}{q^n}$.
Do these still exist if the axiom of infinity is blown up?
Hmmm...
Under a finitist framework where only potential infinity in the form of natural induction exists, define the partial sum:
$$\sum_{k=1}^M \frac{1}{b^{k!}}$$
The resulting partial sums for each M form a monotonically increasing sequence, which converges by the ratio test
therefore by induction, there exists some number $L$ that is the limit of the above partial sums. The proof of transcendence can then proceed as usual; thus transcendental numbers can be constructed in a finitist framework
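A quick numerical illustration of those partial sums (my sketch, taking $b = 10$ and exact rational arithmetic):

```python
from fractions import Fraction
from math import factorial

def partial_sum(M, b=10):
    # sum_{k=1}^{M} 1 / b^{k!}
    return sum(Fraction(1, b ** factorial(k)) for k in range(1, M + 1))

for M in range(1, 5):
    print(M, float(partial_sum(M)))
# the increments shrink so fast that the M = 4 term is already far below
# double precision; with b = 10 the limit L is Liouville's constant
```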
There's this theorem in Spivak's book of Calculus:Theorem 7Suppose that $f$ is continuous at $a$, and that $f'(x)$ exists for all $x$ in some interval containing $a$, except perhaps for $x=a$. Suppose, moreover, that $\lim_{x \to a} f'(x)$ exists. Then $f'(a)$ also exists, and$$f'...
and neither Rolle's theorem nor the mean value theorem needs the axiom of choice nor an infinite set
Thus under finitism, we can construct at least one transcendental number. If we throw away all transcendental functions, it means we can construct a number that cannot be reached by any algebraic procedure
Therefore, the conjecture is that actual infinity has a close relationship to transcendental numbers. For anything else I need to finish that book to comment
> are there palindromes such that the explosion of palindromes is a palindrome nonstop palindrome explosion palindrome prime square palindrome explosion palirome prime explosion explosion palindrome explosion cyclone cyclone cyclone hurricane palindrome explosion palindrome palindrome explosion explosion cyclone clyclonye clycone mathphile palirdlrome explosion rexplosion palirdrome expliarome explosion exploesion |
Coupling Heat Transfer with Subsurface Porous Media Flow
In the second part of our Geothermal Energy series, we focus on the coupled heat transport and subsurface flow processes that determine the thermal development of the subsurface due to geothermal heat production. The described processes are demonstrated in an example model of a hydrothermal doublet system.
Deep Geothermal Energy: The Big Uncertain Potential
One of the greatest challenges in geothermal energy production is minimizing the prospecting risk. How can you be sure that the desired production site is appropriate for, let’s say, 30 years of heat extraction? Usually, only very little information is available about the local subsurface properties and it is typically afflicted with large uncertainties.
Over the last decades, numerical models became an important tool to estimate risks by performing parametric studies within reasonable ranges of uncertainty. Today, I will give a brief introduction to the mathematical description of the coupled subsurface flow and heat transport problem that needs to be solved in many geothermal applications. I will also show you how to use COMSOL software as an appropriate tool for studying and forecasting the performance of (hydro-) geothermal systems.
Governing Equations in Hydrothermal Systems
The heat transport in the subsurface is described by the heat transport equation:
(1) $(\rho C_p)_{eq} \dfrac{\partial T}{\partial t} + \rho_f C_{p,f}\, \mathbf{u} \cdot \nabla T = \nabla \cdot (k_{eq} \nabla T) + Q$
Heat is balanced by conduction and convection processes and can be generated or lost via the source term, $Q$. A special feature of the
Heat Transfer in Porous Media interface is the implemented Geothermal Heating feature, represented as a domain condition: $Q_{geo}$.
There is also another feature that makes the life of a geothermal energy modeler a little easier. It's possible to implement an averaged representation of the thermal parameters, composed from the rock matrix and the groundwater, using the matrix volume fraction, $\theta$, as a weighting factor. You may choose between volume and power law averaging for several immobile solids and fluids.
In the case of volume averaging, the volumetric heat capacity in the heat transport equation becomes:
(2) $(\rho C_p)_{eq} = \theta\, (\rho C_p)_s + (1-\theta)\, (\rho C_p)_f$
and the thermal conductivity becomes:
(3) $k_{eq} = \theta\, k_s + (1-\theta)\, k_f$
(subscripts $s$ and $f$ denote the rock matrix and the groundwater, respectively)
Solving the heat transport properly requires incorporating the flow field. Generally, there can be various situations in the subsurface requiring different approaches to describe the flow mathematically. If the focus is on the micro scale and you want to resolve the flow in the pore space, you need to solve the creeping flow or Stokes flow equations. In partially saturated zones, you would solve Richards’ equation, as it is often done in studies concerning environmental pollution (see our past Simulating Pesticide Runoff, the Effects of Aldicarb blog post, for instance).
However, the fully-saturated and mainly pressure-driven flows in deep geothermal strata are sufficiently described by Darcy’s law:
(4) $\mathbf{u} = -\dfrac{\kappa}{\mu} \nabla p$
where the velocity field, $\mathbf{u}$, depends on the permeability, $\kappa$, and the fluid's dynamic viscosity, $\mu$, and is driven by the gradient of the pressure, $p$. Darcy's law is then combined with the continuity equation:
(5) $\dfrac{\partial}{\partial t}(\rho\, \epsilon_p) + \nabla \cdot (\rho \mathbf{u}) = Q_m$
If your scenario concerns long geothermal time scales, the time dependence due to storage effects in the flow is negligible. Therefore, the first term on the left-hand side of the equation above vanishes because the density, $\rho$, and the porosity, $\epsilon_p$, can be assumed to be constant. Usually, the temperature dependencies of the hydraulic properties are negligible. Thus, the (stationary) flow equations are independent of the (time-dependent) heat transfer equations. In some cases, especially if the number of degrees of freedom is large, it can make sense to utilize the independence by splitting the problem into one stationary and one time-dependent study step.
Fracture Flow and Poroelasticity
Fracture flow may locally dominate the flow regime in geothermal systems, such as in karst aquifer systems. The Subsurface Flow Module offers the
Fracture Flow interface for a 2D representation of the Darcy flow field in fractures and cracks.
Hydrothermal heat extraction systems usually consist of one or more injection and production wells. Those are in many cases realized as separate boreholes, but the modern approach is to create one (or more) multilateral wells. There are even tactics that consist of single boreholes with separate injection and production zones.
Note that artificial pressure changes due to water injection and extraction can influence the structure of the porous medium and produce hydraulic fracturing. To take these effects into account, you can perform poroelastic analyses, but we will not consider these here.
COMSOL Model of a Hydrothermal Application: A Geothermal Doublet
It is easy to set up a COMSOL Multiphysics model that features long time predictions for a hydro-geothermal application.
The model region contains three geologic layers with different thermal and hydraulic properties in a box with a volume V≈500 [m³]. The box represents a section of a geothermal production site that is ranged by a large fault zone. The layer elevations are interpolation functions from an external data set. The concerned aquifer is fully saturated and confined on top and bottom by aquitards (impermeable beds). The temperature distribution is generally a factor of uncertainty, but a good guess is to assume a geothermal gradient of 0.03 [°C/m], leading to an initial temperature distribution $T_0(z) = 10\,[°C] - z \cdot 0.03\,[°C/m]$.
Hydrothermal doublet system in a layered subsurface domain, ranged by a fault zone. The edge is about 500 meters long. The left drilling is the injection well; the production well is on the right. The lateral distance between the wells is about 120 meters.
COMSOL Multiphysics creates a mesh that is perfectly fine for this approach, except for one detail — the mesh on the wells is refined to resolve the expected high gradients in that area.
Now, let’s crank the heat up! Geothermal groundwater is pumped (produced) through the production well on the right at a rate of 50 [l/s]. The well is implemented as a cylinder that was cut out of the geometry to allow inlet and outlet boundary conditions for the flow. The extracted water is, after using it for heat or power generation, re-injected by the left well at the same rate, but with a lower temperature (in this case 5 [°C]).
The resulting flow field and temperature distribution after 30 years of heat production are displayed below:
Result after 30 years of heat production: Hydraulic connection between the production and injection zones and temperature distribution along the flow paths. Note that only the injection and production zones of the boreholes are considered. The rest of the boreholes are not implemented, in order to reduce the meshing effort.
The model is a suitable tool for estimating the development of a geothermal site under varied conditions. For example, how is the production temperature affected by the lateral distance of the wells? Is it worthwhile to reach a large spread or is a moderate distance sufficient?
This can be studied by performing a parametric study by varying the well distance:
Flow paths and temperature distribution between the wells for different lateral distances. The graph shows the production temperature after reaching stationary conditions as a function of the lateral distance.
With this model, different borehole systems can easily be realized just by changing the positions of the injection/production cylinders. For example, here are the results of a single-borehole system:
Results of a single-borehole approach after 30 years of heat production. The vertical distance between the injection (up) and production (down) zones is 130 meters.
So far, we have only looked at aquifers without ambient groundwater movement. What happens if there is a hydraulic gradient that leads to groundwater flow?
The following figure shows the same situation as the figure above, except that now there is a hydraulic head gradient of $\nabla H = 0.01$ [m/m], leading to a superposed flow field:
Single borehole after 30 years of heat production and overlapping groundwater flow due to a horizontal pressure gradient.
Other Posts in This Series: Modeling Geothermal Processes with COMSOL Software; Geothermal Energy: Using the Earth to Heat and Cool Buildings.
Further Reading: Download the Geothermal Doublet tutorial; Explore the Subsurface Flow Module. Related papers and posters presented at the COMSOL Conference: Hydrodynamic and Thermal Modeling in a Deep Geothermal Aquifer, Faulted Sedimentary Basin, France; Simulation of Deep Geothermal Heat Production; Full Coupling of Flow, Thermal and Mechanical Effects in COMSOL Multiphysics® for Simulation of Enhanced Geothermal Reservoirs; Multiphysics Between Deep Geothermal Water Cycle, Surface Heat Exchanger Cycle and Geothermal Power Plant Cycle; Modelling Reservoir Stimulation in Enhanced Geothermal Systems. |
Any element of the ring $\Z[\sqrt{-5}]$ is of the form $a+b\sqrt{-5}$ for some integers $a, b$.The associated (field) norm $N$ is given by\[N(a+b\sqrt{-5})=(a+b\sqrt{-5})(a-b\sqrt{-5})=a^2+5b^2.\]
Consider the case when $a=2, b=1$.Then we have\begin{align*}(2+\sqrt{-5})(2-\sqrt{-5})=9=3\cdot 3. \tag{*}\end{align*}
We claim that the numbers $3, 2\pm \sqrt{-5}$ are irreducible elements in the ring $\Z[\sqrt{-5}]$.
To prove the claim at once, we show that any element in $\Z[\sqrt{-5}]$ of norm $9$ is irreducible.
Let $\alpha$ be an element in $\Z[\sqrt{-5}]$ such that $N(\alpha)=9$. Suppose that $\alpha=\beta \gamma$ for some $\beta, \gamma \in \Z[\sqrt{-5}]$. Our goal is to show that either $\beta$ or $\gamma$ is a unit.
We have\begin{align*}9&=N(\alpha)=N(\beta)N(\gamma).\end{align*}Since the norms are nonnegative integers, $N(\beta)$ is one of $1, 3, 9$.
If $N(\beta)=1$, then it yields that $\beta$ is a unit.
If $N(\beta)=3$, then we write $\beta=a+b\sqrt{-5}$ for some integers $a, b$, and we obtain\[3=N(\beta)=a^2+5b^2.\]A quick inspection yields that there are no integers $a, b$ satisfying this equality.Thus $N(\beta)=3$ is impossible.
If $N(\beta)=9$, then $N(\gamma)=1$ and thus $\gamma$ is a unit.
Therefore, we have shown that either $\beta$ or $\gamma$ is a unit.
Note that the elements $3, 2\pm \sqrt{-5}$ have norm $9$, and hence they are irreducible by what we have just proved.
It follows from the equalities in (*) that the factorization of the element $9$ into irreducible elements is not unique. Thus, the ring $\Z[\sqrt{-5}]$ is not a UFD.
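The norm computations used above are easy to verify mechanically; here is a short Python sketch (mine, not part of the original solution):

```python
def norm(a, b):
    # N(a + b*sqrt(-5)) = a^2 + 5 b^2
    return a * a + 5 * b * b

print(norm(3, 0), norm(2, 1), norm(2, -1))  # 9 9 9

# no element of norm 3: a^2 + 5 b^2 = 3 forces |a| <= 1 and b = 0
print(any(norm(a, b) == 3 for a in range(-2, 3) for b in range(-1, 2)))  # False
```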
Related Question.
Problem.Prove that the quadratic integer ring $\Z[\sqrt{5}]$ is not a Unique Factorization Domain (UFD).
Ring of Gaussian Integers and Determine its Unit Elements: Denote by $i$ the square root of $-1$. Let\[R=\Z[i]=\{a+ib \mid a, b \in \Z \}\]be the ring of Gaussian integers. We define the norm $N:\Z[i] \to \Z$ by sending $\alpha=a+ib$ to\[N(\alpha)=\alpha \bar{\alpha}=a^2+b^2.\]Here $\bar{\alpha}$ is the complex conjugate of […]
The Ring $\Z[\sqrt{2}]$ is a Euclidean Domain: Prove that the ring of integers\[\Z[\sqrt{2}]=\{a+b\sqrt{2} \mid a, b \in \Z\}\]of the field $\Q(\sqrt{2})$ is a Euclidean Domain. Proof. First of all, it is clear that $\Z[\sqrt{2}]$ is an integral domain since it is contained in $\R$. We use the […]
A Ring is Local if and only if the set of Non-Units is an Ideal: A ring is called local if it has a unique maximal ideal. (a) Prove that a ring $R$ with $1$ is local if and only if the set of non-unit elements of $R$ is an ideal of $R$. (b) Let $R$ be a ring with $1$ and suppose that $M$ is a maximal ideal of $R$. Prove that if every […]
5 is Prime But 7 is Not Prime in the Ring $\Z[\sqrt{2}]$: In the ring\[\Z[\sqrt{2}]=\{a+\sqrt{2}b \mid a, b \in \Z\},\]show that $5$ is a prime element but $7$ is not a prime element. Hint. An element $p$ in a ring $R$ is prime if $p$ is a nonzero, non-unit element and whenever $p$ divides $ab$ for $a, b \in R$, then $p$ […]
Three Equivalent Conditions for an Ideal to be Prime in a PID: Let $R$ be a principal ideal domain. Let $a\in R$ be a nonzero, non-unit element. Show that the following are equivalent. (1) The ideal $(a)$ generated by $a$ is maximal. (2) The ideal $(a)$ is prime. (3) The element $a$ is irreducible. Proof. (1) $\implies$ […] |
Hi, Can someone provide me some self reading material for Condensed matter theory? I've done QFT previously for which I could happily read Peskin supplemented with David Tong. Can you please suggest some references along those lines? Thanks
@skullpatrol The second one was in my MSc and covered considerably less than my first and (I felt) didn't do it in any particularly great way, so distinctly average. The third was pretty decent - I liked the way he did things and was essentially a more mathematically detailed version of the first :)
2. A weird particle or state that is made of a superposition of a torus region with clockwise momentum and anticlockwise momentum, resulting in one that has no momentum along the major circumference of the torus but still nonzero momentum in directions that are not pointing along the torus
Same thought as you; however, I think the major challenge of such a simulator is the computational cost. GR calculations, with their highly nonlinear nature, might be more costly than a computation of a protein.
However I can see some ways of approaching it. Recall how Slereah was building some kind of spacetime database; that could be the first step. Next, one might look to machine learning techniques to help with the simulation by using the classifications of spacetimes, as machines are known to perform very well on sign problems, as a recent paper has shown
Since the GR equations are ultimately a system of 10 nonlinear PDEs, it is possible that the solution strategy has some relation to the class of spacetime under consideration; that might help heavily reduce the parameters needed to simulate them
I just mean this: The EFE is a tensor equation relating a set of symmetric 4 × 4 tensors. Each tensor has 10 independent components. The four Bianchi identities reduce the number of independent equations from 10 to 6, leaving the metric with four gauge fixing degrees of freedom, which correspond to the freedom to choose a coordinate system.
@ooolb Even if that is really possible (I can always talk about things from a non-joking perspective), the issue is that 1) unlike other people, I cannot incubate my dreams for a certain topic due to Mechanism 1 (conscious desires have a reduced probability of appearing in dreams), and 2) for 6 years, my dreams have yet to show any sign of revisiting the exact same idea, and there are no known instances of either sequel dreams or recurring dreams
@0celo7 I felt this aspect can be helped by machine learning. You can train a neural network with some PDEs of a known class with some known constraints, and let it figure out the best solution for some new PDE after say training it on 1000 different PDEs
Actually that makes me wonder, are the space of all coordinate choices more than all possible moves of Go?
enumaris: From what I understood from the dream, the warp drive shown here may be some variation of the Alcubierre metric with a global topology that has 4 holes in it, whereas the original Alcubierre drive, if I recall, doesn't have holes
orbit stabilizer: h bar is my home chat, because this is the first SE chat I joined. Maths chat is the 2nd one I joined, followed by periodic table, biosphere, factory floor and many others
Btw, since gravity is nonlinear, do we expect that a region where spacetime is frame-dragged in the clockwise direction, superimposed on a spacetime that is frame-dragged in the anticlockwise direction, will result in a spacetime with no frame drag? (One possible physical scenario where this can occur may be when two massive rotating objects with opposite angular velocities are on course to merge)
Well, I'm a beginner in the study of General Relativity, ok? My knowledge about the subject is based on books like Schutz, Hartle, Carroll and introductory papers. About quantum mechanics I have poor knowledge as yet.
So, what I meant by "gravitational double-slit experiment" is: is there a gravitational analogue of the double-slit experiment for gravitational waves?
@JackClerk the double slits experiment is just interference of two coherent sources, where we get the two sources from a single light beam using the two slits. But gravitational waves interact so weakly with matter that it's hard to see how we could screen a gravitational wave to get two coherent GW sources.
But if we could figure out a way to do it then yes GWs would interfere just like light wave.
Thank you @Secret and @JohnRennie. But to conclude the discussion, I want to put a "silly picture" here: Imagine a huge double-slit plate in space close to a strong source of gravitational waves. Then, like water waves and light, will we see the pattern?
So, if the source (like a black hole binary) is sufficiently far away, then in the regions of destructive interference, spacetime would have a flat geometry, and then if we put a spherical object in this region the metric will become Schwarzschild-like.
Pardon, I just spent some naive-philosophy time here with these discussions**
The situation was even more dire for Calculus and I managed!
This is a neat strategy I have found: revision becomes more bearable when I have The h Bar open on the side.
In all honesty, I actually prefer exam season! At all other times (as I have observed in this semester, at least) there is nothing exciting to do. This system of torturous panic, followed by a reward, is obviously very satisfying.
My opinion is that I need you Kaumudi to decrease the probability of the h Bar having software system infrastructure conversations, which confuse me like hell and are why I took refuge in the maths chat a few weeks ago
(Not that I have questions to ask or anything; like I said, it is a little relieving to be with friends while I am panicked. I think it is possible to gauge how much of a social recluse I am from this, because I spend some of my free time hanging out with you lot, even though I am literally inside a hostel teeming with hundreds of my peers)
that's true. Though back in high school, regardless of the code, our teacher taught us to always indent our code to allow easy reading and troubleshooting. We were also taught the 4-space indentation convention
@JohnRennie I wish I could just tab because I am also lazy, but sometimes tab inserts 4 spaces while other times it inserts 5-6 spaces, thus screwing up a block of if-then conditions in my code, which is why I had no choice
I currently automate almost everything from job submission to data extraction, and later on, with the help of the machine learning group in my uni, we might be able to automate a GUI library search thingy
I can do all tasks related to my work without leaving the text editor (of course, that text editor is emacs). The only inconvenience is that some websites don't render in an optimal way (but most of the work-related ones do)
Hi to all. Does anyone know where I could write MATLAB code online (for free)? Apparently another one of my institution's great inspirations is to have a MATLAB-oriented computational physics course without having MATLAB on the university's PCs. Thanks.
@Kaumudi.H Hacky way: 1st thing is that $\psi\left(x, y, z, t\right) = \psi\left(x, y, t\right)$, so no propagation in $z$-direction. Now, in '$1$ unit' of time, it travels $\frac{\sqrt{3}}{2}$ units in the $y$-direction and $\frac{1}{2}$ units in the $x$-direction. Use this to form a triangle and you'll get the answer with simple trig :)
@Kaumudi.H Ah, it was okayish. It was mostly memory based. Each small question was of 10-15 marks. No idea what they expect me to write for questions like "Describe acoustic and optic phonons" for 15 marks!! I only wrote two small paragraphs...meh. I don't like this subject much :P (physical electronics). Hope to do better in the upcoming tests so that there isn't a huge effect on the gpa.
@Blue Ok, thanks. I found a way by connecting to the servers of the university (the program isn't installed on the PCs in the computer room, but by connecting to the university's server, which means running another environment remotely, I found an older version of MATLAB). But thanks again.
@user685252 No; I am saying that it has no bearing on how good you actually are at the subject - it has no bearing on how good you are at applying knowledge; it doesn't test problem solving skills; it doesn't take into account that, if I'm sitting in the office having forgotten the difference between different types of matrix decomposition or something, I can just search the internet (or a textbook), so it doesn't say how good someone is at research in that subject;
it doesn't test how good you are at deriving anything - someone can write down a definition without any understanding, while someone who can derive it, but has forgotten it probably won't have time in an exam situation. In short, testing memory is not the same as testing understanding
If you really want to test someone's understanding, give them a few problems in that area that they've never seen before and give them a reasonable amount of time to do it, with access to textbooks etc. |
Free keywords: Mathematics, Number Theory, math.NT, High Energy Physics - Theory, hep-th, Mathematical Physics, math-ph, Mathematics, Mathematical Physics, math.MP
Abstract: We study the correlators of irregular vertex operators in two-dimensional conformal field theory (CFT) in order to propose an exact analytic formula for calculating numbers of partitions, that is:
1) for given $N,k$, finding the total number $\lambda(N|k)$ of length-$k$ partitions of $N$: $N=n_1+...+n_k;\ 0<n_1\leq n_2\leq ...\leq n_k$;
2) finding the total number $\lambda(N)=\sum_{k=1}^N\lambda(N|k)$ of partitions of a natural number $N$.
We propose an exact analytic expression for $\lambda(N|k)$ by relating two-point short-distance correlation functions of irregular vertex operators in $c=1$ conformal field theory (the form of the operators is established in this paper), with the first correlator counting the partitions in the upper half-plane and the second one obtained from the first correlator by conformal transformations of the form $f(z)=h(z)e^{-{i\over{z}}}$, where $h(z)$ is regular and non-vanishing at $z=0$. The final formula for $\lambda(N|k)$ is given in terms of regularized ($\epsilon$-ordered) finite series in the generalized higher-derivative Schwarzians and incomplete Bell polynomials of the above conformal transformation at $z=i\epsilon$ ($\epsilon\rightarrow{0}$).
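As a quick sanity check on the quantities involved, $\lambda(N|k)$ obeys the standard recurrence $\lambda(N|k)=\lambda(N-1|k-1)+\lambda(N-k|k)$ (either some part equals 1 and can be removed, or every part is at least 2 and 1 can be subtracted from each). A small Python sketch of this recurrence (my illustration, not part of the paper):

from functools import lru_cache

@lru_cache(maxsize=None)
def partitions_into_k(N, k):
    # lambda(N|k): partitions of N into exactly k parts
    if k == 0:
        return 1 if N == 0 else 0
    if N < k:
        return 0
    # either some part equals 1, or all parts are >= 2
    return partitions_into_k(N - 1, k - 1) + partitions_into_k(N - k, k)

def total_partitions(N):
    # lambda(N) = sum over k of lambda(N|k)
    return sum(partitions_into_k(N, k) for k in range(1, N + 1))

print(partitions_into_k(5, 2))  # 2: the partitions 1+4 and 2+3
print(total_partitions(5))      # 7
|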
Let A = $\begin{bmatrix} a_1,&a_2,&...,&a_n \end{bmatrix} \text{ and } \vec{x} = \begin{bmatrix} x_1\\x_2\\x_3\\...\\x_n \end{bmatrix}$
$A \vec{x} = x_1a_1 + x_2a_2+...+ x_na_n$
Thus, the product $A\vec{x}$ can be computed in two equivalent ways: (1) entry by entry, via row-by-column products, or (2) as the linear combination of the columns of $A$ shown above.
The first way of multiplying takes each row in the first matrix ($A$) and multiplies it by the corresponding column entries in the vector ($\vec{x}$), adding the products to make the new entry in the matrix of dimensions defined by the number of rows in the first matrix by the number of columns in the second matrix. If it's a matrix of $n\times m$ dimensions times a vector of $m\times 1$ dimensions, the end result will always be $n\times 1$. An easy mnemonic for remembering this is that the inner two dimensions must match.
If we have the matrix equation $A\vec{x} = \vec{b}$, where $A \text{ and } \vec{b}$ are given, $A = \begin{bmatrix}1&2&3\\0&1&1\\-1&-1&0\\ \end{bmatrix} \text{ and } \vec{b} = \begin{bmatrix} 1\\2\\2 \end{bmatrix}$, and are asked to find a vector that satisfies $A\vec{x} = \vec{b}$, we simply augment $A$ with $\vec{b}$ and solve as we have learned in previous sections. The three values obtained are the entries of the solution vector $\begin{bmatrix} x_1\\x_2\\x_3 \end{bmatrix}$ (assuming the augmented matrix is consistent).
In this case, we find that vector to be $\begin{bmatrix} -\frac{7}{2}\\ \frac{3}{2} \\ \frac{1}{2} \end{bmatrix}$. It doesn't turn out too nicely, but try the reduction yourself and see if the logic makes sense.
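A quick way to check this example (my sketch using NumPy; not part of the original notes):

import numpy as np

A = np.array([[1, 2, 3],
              [0, 1, 1],
              [-1, -1, 0]], dtype=float)
b = np.array([1, 2, 2], dtype=float)

x = np.linalg.solve(A, b)  # solve A x = b
print(x)                   # [-3.5  1.5  0.5], i.e. (-7/2, 3/2, 1/2)

# the row picture and the column picture of A x agree
assert np.allclose(A @ x, x[0]*A[:, 0] + x[1]*A[:, 1] + x[2]*A[:, 2])
|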
A particle moves along the x-axis so that at time $t$ its position is given by $x(t) = t^3-6t^2+9t+11$. During what time intervals is the particle moving to the left? So I know that we need the velocity for that, and we can get that by taking the derivative, but I don't know what to do after that. The velocity would then be $v(t) = 3t^2-12t+9$. How could I find the intervals?
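One way to finish from here (a sketch with SymPy; the particle moves left exactly where the velocity is negative):

import sympy as sp

t = sp.symbols('t', real=True)
v = sp.diff(t**3 - 6*t**2 + 9*t + 11, t)         # v(t) = 3t^2 - 12t + 9
print(sp.factor(v))                               # 3*(t - 1)*(t - 3)
print(sp.solve_univariate_inequality(v < 0, t))   # 1 < t < 3

So $v(t)=3(t-1)(t-3)<0$ exactly on $1<t<3$, which is when the particle moves to the left.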
Fix $c\in\{0,1,\dots\}$, let $K\geq c$ be an integer, and define $z_K=K^{-\alpha}$ for some $\alpha\in(0,2)$.I believe I have numerically discovered that$$\sum_{n=0}^{K-c}\binom{K}{n}\binom{K}{n+c}z_K^{n+c/2} \sim \sum_{n=0}^K \binom{K}{n}^2 z_K^n \quad \text{ as } K\to\infty$$but cannot ...
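A minimal numeric probe of the conjectured equivalence (my own sketch; the values c = 2 and alpha = 1.0 are illustrative choices, and the sums are evaluated in log space to avoid overflow):

from math import lgamma, log, exp

def log_comb(K, n):
    # log of the binomial coefficient C(K, n)
    return lgamma(K + 1) - lgamma(n + 1) - lgamma(K - n + 1)

def lhs(K, c, alpha):
    lz = -alpha * log(K)
    return sum(exp(log_comb(K, n) + log_comb(K, n + c) + (n + c / 2) * lz)
               for n in range(K - c + 1))

def rhs(K, alpha):
    lz = -alpha * log(K)
    return sum(exp(2 * log_comb(K, n) + n * lz) for n in range(K + 1))

for K in (10, 100, 1000):
    # the ratio should drift toward 1 as K grows, if the conjecture holds
    print(K, lhs(K, 2, 1.0) / rhs(K, 1.0))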
So, the whole discussion is about some polynomial $p(A)$, for $A$ an $n\times n$ matrix with entries in $\mathbf{C}$, and eigenvalues $\lambda_1,\ldots, \lambda_k$.
Anyways, part (a) is talking about proving that $p(\lambda_1),\ldots, p(\lambda_k)$ are eigenvalues of $p(A)$. That's basically routine computation. No problem there. The next bit is to compute the dimension of the eigenspaces $E(p(A), p(\lambda_i))$.
Seems like this bit follows from the same argument. An eigenvector for $A$ is an eigenvector for $p(A)$, so the rest seems to follow.
Finally, the last part is to find the characteristic polynomial of $p(A)$. I guess this means in terms of the characteristic polynomial of $A$.
Well, we do know what the eigenvalues are...
The so-called Spectral Mapping Theorem tells us that the eigenvalues of $p(A)$ are exactly the $p(\lambda_i)$.
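A two-minute numeric illustration of the statement (my sketch, not part of the discussion):

import numpy as np

A = np.array([[2.0, 1.0],
              [0.0, 3.0]])       # eigenvalues 2 and 3

def p(M):
    # the polynomial p(t) = t^2 + 1 applied to a matrix
    return M @ M + np.eye(2)

print(np.linalg.eigvals(A))      # [2. 3.]
print(np.linalg.eigvals(p(A)))   # [5. 10.] = [p(2), p(3)]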
Usually, by the time you start talking about complex numbers you consider the real numbers as a subset of them, since a and b are real in a + bi. But you could define it that way and call it a "standard form" like ax + by = c for linear equations :-) @Riker
"a + bi where a and b are integers" Complex numbers a + bi where a and b are integers are called Gaussian integers.
I was wondering if it is easier to factor in a non-UFD than it is to factor in a UFD. I can come up with arguments for that, but I also have arguments in the opposite direction. For instance: it should be easier to factor when there are more possibilities (multiple factorizations in a non-UFD...
Does anyone know if $T: V \to R^n$ is an inner product space isomorphism if $T(v) = (v)_S$, where $S$ is a basis for $V$? My book isn't saying so explicitly, but there was a theorem saying that an inner product isomorphism exists, and another theorem kind of suggesting that it should work.
@TobiasKildetoft Sorry, I meant that they should be equal (accidentally sent this before writing my answer. Writing it now)
Isn't there this theorem saying that if $v,w \in V$ ($V$ being an inner product space), then $||v|| = ||(v)_S||$? (where the left norm is defined as the norm in $V$ and the right norm is the euclidean norm) I thought that this would somehow result from isomorphism
@AlessandroCodenotti Actually, such a $f$ in fact needs to be surjective. Take any $y \in Y$; the maximal ideal of $k[Y]$ corresponding to that is $(Y_1 - y_1, \cdots, Y_n - y_n)$. The ideal corresponding to the subvariety $f^{-1}(y) \subset X$ in $k[X]$ is then nothing but $(f^* Y_1 - y_1, \cdots, f^* Y_n - y_n)$. If this is empty, weak Nullstellensatz kicks in to say that there are $g_1, \cdots, g_n \in k[X]$ such that $\sum_i (f^* Y_i - y_i)g_i = 1$.
Well, better to say that $(f^* Y_1 - y_1, \cdots, f^* Y_n - y_n)$ is the trivial ideal I guess. Hmm, I'm stuck again
O(n) acts transitively on S^(n-1) with stabilizer at a point O(n-1)
For any transitive G action on a set X with stabilizer H, G/H $\cong$ X set theoretically. In this case, as the action is a smooth action by a Lie group, you can prove this set-theoretic bijection gives a diffeomorphism |
In mathematics, the limit comparison test (LCT) (in contrast with the related direct comparison test) is a method of testing for the convergence of an infinite series.

Statement

Suppose that we have two series $\Sigma_n a_n$ and $\Sigma_n b_n$ with $a_n \geq 0$, $b_n > 0$ for all $n$. Then if $\lim_{n \to \infty} \frac{a_n}{b_n} = c$ with $0 < c < \infty$, then either both series converge or both series diverge. [1]

Because $\lim_{n \to \infty} \frac{a_n}{b_n} = c$, we know that for every $\varepsilon > 0$ there is a positive integer $n_0$ such that for all $n \geq n_0$ we have $\left| \frac{a_n}{b_n} - c \right| < \varepsilon$, or equivalently

$$-\varepsilon < \frac{a_n}{b_n} - c < \varepsilon$$
$$c - \varepsilon < \frac{a_n}{b_n} < c + \varepsilon$$
$$(c - \varepsilon) b_n < a_n < (c + \varepsilon) b_n$$

As $c > 0$, we can choose $\varepsilon$ sufficiently small such that $c - \varepsilon$ is positive. So $b_n < \frac{1}{c - \varepsilon} a_n$, and by the direct comparison test, if $\sum_n a_n$ converges then so does $\sum_n b_n$.

Similarly $a_n < (c + \varepsilon) b_n$, so if $\sum_n b_n$ converges, again by the direct comparison test, so does $\sum_n a_n$.

That is, both series converge or both series diverge.
Example

We want to determine whether the series $\sum_{n=1}^{\infty} \frac{1}{n^2 + 2n}$ converges. For this we compare it with the convergent series $\sum_{n=1}^{\infty} \frac{1}{n^2} = \frac{\pi^2}{6}$. As

$$\lim_{n \to \infty} \frac{1/(n^2 + 2n)}{1/n^2} = 1 > 0,$$

the original series also converges.
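To see the test in action numerically (a sketch, not part of the article):

# partial sums of 1/(n^2 + 2n) against the convergent series 1/n^2
N = 100_000
s1 = sum(1.0 / (n * n + 2 * n) for n in range(1, N + 1))
s2 = sum(1.0 / (n * n) for n in range(1, N + 1))
print(s1)  # approaches 3/4 (the series telescopes)
print(s2)  # approaches pi^2/6 ~ 1.6449
print((1.0 / (N * N + 2 * N)) / (1.0 / (N * N)))  # a_n/b_n, close to 1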
One-sided version

One can state a one-sided comparison test by using the limit superior. Let $a_n, b_n \geq 0$ for all $n$. Then if $\limsup_{n \to \infty} \frac{a_n}{b_n} = c$ with $0 \leq c < \infty$ and $\Sigma_n b_n$ converges, necessarily $\Sigma_n a_n$ converges.

Example

Let $a_n = \frac{1 - (-1)^n}{n^2}$ and $b_n = \frac{1}{n^2}$ for all natural numbers $n$. Now $\lim_{n \to \infty} \frac{a_n}{b_n} = \lim_{n \to \infty} (1 - (-1)^n)$ does not exist, so we cannot apply the standard comparison test. However, $\limsup_{n \to \infty} \frac{a_n}{b_n} = \limsup_{n \to \infty} (1 - (-1)^n) = 2 \in [0, \infty)$, and since $\sum_{n=1}^{\infty} \frac{1}{n^2}$ converges, the one-sided comparison test implies that $\sum_{n=1}^{\infty} \frac{1 - (-1)^n}{n^2}$ converges.
Converse of the one-sided comparison test

Let $a_n, b_n \geq 0$ for all $n$. If $\Sigma_n a_n$ diverges and $\Sigma_n b_n$ converges, then necessarily $\limsup_{n \to \infty} \frac{a_n}{b_n} = \infty$, that is, $\liminf_{n \to \infty} \frac{b_n}{a_n} = 0$. The essential content here is that in some sense the numbers $a_n$ are larger than the numbers $b_n$.

Example

Let $f(z) = \sum_{n=0}^{\infty} a_n z^n$ be analytic in the unit disc $D = \{z \in \mathbb{C} : |z| < 1\}$ and have image of finite area. By Parseval's formula the area of the image of $f$ is $\pi \sum_{n=1}^{\infty} n |a_n|^2$. Moreover, $\sum_{n=1}^{\infty} 1/n$ diverges. Therefore, by the converse of the comparison test, we have $\liminf_{n \to \infty} \frac{n |a_n|^2}{1/n} = \liminf_{n \to \infty} (n |a_n|)^2 = 0$, that is, $\liminf_{n \to \infty} n |a_n| = 0$.

Further reading

Rinaldo B. Schinazi: From Calculus to Analysis. Springer, 2011, ISBN 9780817682897, pp. 50.
Michele Longo and Vincenzo Valori: The Comparison Test: Not Just for Nonnegative Series. Mathematics Magazine, Vol. 79, No. 3 (Jun., 2006), pp. 205–210 (JSTOR).
J. Marshall Ash: The Limit Comparison Test Needs Positivity. Mathematics Magazine, Vol. 85, No. 5 (December 2012), pp. 374–375 (JSTOR). |
Square Space Silo Problem 431
Fred the farmer arranges to have a new storage silo installed on his farm and having an obsession for all things square he is absolutely devastated when he discovers that it is circular. Quentin, the representative from the company that installed the silo, explains that they only manufacture cylindrical silos, but he points out that it is resting on a square base. Fred is not amused and insists that it is removed from his property.
Quick thinking Quentin explains that when granular materials are delivered from above a conical slope is formed and the natural angle made with the horizontal is called the angle of repose. For example if the angle of repose, $\alpha = 30$ degrees, and grain is delivered at the centre of the silo then a perfect cone will form towards the top of the cylinder. In the case of this silo, which has a diameter of 6m, the amount of space wasted would be approximately 32.648388556 m³. However, if grain is delivered at a point on the top which has a horizontal distance of $x$ metres from the centre then a cone with a strangely curved and sloping base is formed. He shows Fred a picture.
We shall let the amount of space wasted in cubic metres be given by $V(x)$. If $x = 1.114785284$, which happens to have three squared decimal places, then the amount of space wasted, $V(1.114785284) \approx 36$. Given the range of possible solutions to this problem there is exactly one other option: $V(2.511167869) \approx 49$. It would be like knowing that the square is king of the silo, sitting in splendid glory on top of your grain.
Fred's eyes light up with delight at this elegant resolution, but on closer inspection of Quentin's drawings and calculations his happiness turns to despondency once more. Fred points out to Quentin that it's the radius of the silo that is 6 metres, not the diameter, and the angle of repose for his grain is 40 degrees. However, if Quentin can find a set of solutions for this particular silo then he will be more than happy to keep it.
If Quick thinking Quentin is to satisfy frustratingly fussy Fred the farmer's appetite for all things square then determine the values of $x$ for all possible square space wastage options and calculate $\sum x$ correct to 9 decimal places.
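Before attacking the off-centre case, Quentin's central-delivery figure is easy to verify: for central delivery the wasted space is the part of the top cylinder section above the cone, $\pi r^2 h - \frac{1}{3}\pi r^2 h = \frac{2}{3}\pi r^3 \tan\alpha$ with $h = r\tan\alpha$. A quick check in Python (my sketch; the real difficulty, of course, is $V(x)$ for off-centre delivery):

from math import pi, tan, radians

r, alpha = 3.0, radians(30)            # 6 m diameter silo, 30 degree repose
wasted = (2.0 / 3.0) * pi * r**3 * tan(alpha)
print(wasted)                          # 32.648388556..., as quoted
|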
A perfect number is a positive integer $n$ such that $$\sum_{d|n} d = 2n.$$ Put another way, $n$ is the sum of its proper divisors. Check out a quick intro to perfect numbers that I wrote last November. The first three perfect numbers are $6, 28,$ and $496$. Currently, the largest perfect number, corresponding to the largest Mersenne prime, is $$(2^{77232917} - 1)\cdot 2^{77232916}$$ This perfect number is over 46 million digits long!
Here's another fun fact about perfect numbers. A positive integer $n$ is perfect if and only if
$$\sum_{d\mid n} \frac{1}{d} = 2.$$ This is just a slightly disguised form of the definition of perfect, for $$\sum_{d\mid n} \frac{1}{d} = \sum_{d\mid n} \frac{n/d}{n} = \frac{1}{n}\sum_{d\mid n} d = \frac{2n}{n} = 2.$$ Even so, the sum $\sum_{d\mid n} 1/d$ is cool-looking as a function of $n$. What are those stronger lines all about? I don't know. Is this sum bounded? No, because $\sum_{d\mid n} 1/d$ contains the first $k$ terms of the harmonic series for $n = k!$.
Speaking of sums of divisors, can a number ever be the sum of the squares of all its proper divisors? What about cubes? I couldn't find any examples… can you?
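All of these claims are easy to check by brute force (my sketch):

def divisors(n):
    return [d for d in range(1, n + 1) if n % d == 0]

for n in (6, 28, 496):
    assert sum(divisors(n)) == 2 * n                        # perfect
    assert abs(sum(1.0 / d for d in divisors(n)) - 2) < 1e-12

def proper_square_divisor_sum(n):
    # sum of d^2 over proper divisors d of n, found via divisor pairs
    total, d = 0, 1
    while d * d <= n:
        if n % d == 0:
            total += d * d
            if n // d != d:
                total += (n // d) ** 2
        d += 1
    return total - n * n    # drop n itself

hits = [n for n in range(2, 100_000) if proper_square_divisor_sum(n) == n]
print(hits)   # [] -- no examples below 100000, at least
|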
Smectic-A Order at the Surface of a Nematic Liquid Crystal: Synchrotron X-Ray Diffraction

Authors: Als-Nielsen, J.; Christensen, F.; Pershan, Peter S.
Published Version: https://doi.org/10.1103/PhysRevLett.48.1107

Citation: Als-Nielsen, J., F. Christensen, and Peter S. Pershan. 1982. Smectic-A order at the surface of a nematic liquid crystal: Synchrotron x-ray diffraction. Physical Review Letters 48(16): 1107–1110.

Abstract: A novel geometry in which it is possible to do x-ray diffraction from a horizontal surface of fluids is applied to liquid crystals. A large-diameter drop of octyloxycyanobiphenyl (8OCB) on a glass plate treated for homeotropic alignment yields perfect alignment of the smectic-A layers at the top surface over an area of several square millimeters. The surface in the bulk nematic as well as in the isotropic phase was found to consist of smectic-A layers with a penetration depth equal to the longitudinal smectic-A correlation length \(\xi_{\parallel} \sim (T-T_{NA})^{-\nu_{\parallel}}\) determined previously.

Terms of Use: This article is made available under the terms and conditions applicable to Other Posted Material, as set forth at http://nrs.harvard.edu/urn-3:HUL.InstRepos:dash.current.terms-of-use#LAA

Citable link to this page: http://nrs.harvard.edu/urn-3:HUL.InstRepos:10361859
Any Automorphism of the Field of Real Numbers Must be the Identity Map Problem 507
Prove that any field automorphism of the field of real numbers $\R$ must be the identity automorphism.
We prove the problem by proving the following sequence of claims.
Let $\phi:\R \to \R$ be an automorphism of the field of real numbers $\R$.
Claim 1. For any positive real number $x$, we have $\phi(x)>0$. Claim 2. For any $x, y\in \R$ such that $x>y$, we have $\phi(x) > \phi(y)$. Claim 3. The automorphism $\phi$ is the identity on positive integers. Claim 4. The automorphism $\phi$ is the identity on rational numbers. Claim 5. The automorphism $\phi$ is the identity on real numbers.
Let us now start proving the claims.
Claim 1. For any positive real number $x$, we have $\phi(x)>0$.
Since $x$ is a positive real number, we have $\sqrt{x}\in \R$ and
\[\phi(x)=\phi\left(\sqrt{x}^2\right)=\phi(\sqrt{x})^2 \geq 0.\]
Note that since $\phi(0)=0$ and $\phi$ is bijective, $\phi(x)\neq 0$ for any $x\neq 0$.
Thus, it follows that $\phi(x) > 0$ for each positive real number $x$. Claim 1 is proved. Claim 2. For any $x, y\in \R$ such that $x>y$, we have $\phi(x) > \phi(y)$.
Since $x > y$, we have $x-y > 0$ and it follows from Claim 1 that
\[0<\phi(x-y)=\phi(x)-\phi(y).\] Hence, $\phi(x)> \phi(y)$. Claim 3. The automorphism $\phi$ is the identity on positive integers.
Let $n$ be a positive integer. Then we have
\begin{align*} \phi(n)=\phi(\underbrace{1+1+\cdots+1}_{\text{$n$ times}})=\underbrace{\phi(1)+\phi(1)+\cdots+\phi(1)}_{\text{$n$ times}}=n \end{align*} since $\phi(1)=1$. Claim 4. The automorphism $\phi$ is the identity on rational numbers.
Any rational number $q$ can be written as $q=\pm m/n$, where $m, n$ are positive integers.
Then we have \begin{align*} \phi(q)=\phi\left(\, \pm \frac{m}{n} \,\right)=\pm \frac{\phi(m)}{\phi(n)}=\pm \frac{m}{n}=q, \end{align*} where the third equality follows from Claim 3. Claim 5. The automorphism $\phi$ is the identity on real numbers.
In this claim, we finish the proof of the problem.
Let $x$ be any real number.
Seeking a contradiction, assume that $\phi(x)\neq x$.
There are two cases to consider:
\[x < \phi(x) \text{ or } x > \phi(x).\]
First, suppose that $x < \phi(x)$. Then there exists a rational number $q$ such that \[x< q < \phi(x).\] Then we have \begin{align*} \phi(x) &< \phi(q) && \text{by Claim 2 since $x < q$}\\ &=q && \text{by Claim 4 since $q$ is rational}\\ &<\phi(x) && \text{by the choice of $q$}, \end{align*} and this is a contradiction. Next, consider the case when $x > \phi(x)$.
There exists a rational number $q$ such that \[\phi(x) < q < x.\] Then by the same argument as above, we have \[\phi(x) < q =\phi(q) < \phi(x),\] which is a contradiction. Thus, in either case we reached a contradiction, and hence we must have $\phi(x)=x$ for all real numbers $x$. This proves that the automorphism $\phi: \R \to \R$ is the identity map.
Now showing items 1-2 of 2
Search for new resonances in $W\gamma$ and $Z\gamma$ Final States in $pp$ Collisions at $\sqrt{s}=8\,\mathrm{TeV}$ with the ATLAS Detector
(Elsevier, 2014-11-10)
This letter presents a search for new resonances decaying to final states with a vector boson produced in association with a high transverse momentum photon, $V\gamma$, with $V= W(\rightarrow \ell \nu)$ or $Z(\rightarrow ...
Fiducial and differential cross sections of Higgs boson production measured in the four-lepton decay channel in $\boldsymbol{pp}$ collisions at $\boldsymbol{\sqrt{s}}$ = 8 TeV with the ATLAS detector
(Elsevier, 2014-11-10)
Measurements of fiducial and differential cross sections of Higgs boson production in the ${H \rightarrow ZZ ^{*}\rightarrow 4\ell}$ decay channel are presented. The cross sections are determined within a fiducial phase ... |
Shortcut keys for inserting Greek symbols into the equation: type \ + the name of the symbol
\alpha \kappa \varrho \beta \lambda \sigma \chi \mu \varsigma \delta \nu \tau \epsilon \o \upsilon \varepsilon \pi \omega \phi \varpi \xi \varphi \theta \psi \gamma \vartheta \zeta \eta \rho
See Shortcut keys for inserting symbols and templates into the equation to find other frequently used symbols.
To insert a capital letter of the Greek alphabet, simply enter
\+ name of the symbol starting with a capital letter:
\Delta \Phi \Gamma \Lambda \Mu \Pi \Theta \Sigma \Upsilon \Omega \Chi \Psi
To find out how to insert other symbols and templates in an equation, see Shortcut keys for inserting symbols and templates into the equation.
To use all these symbols outside the equation, select the option Use Math AutoCorrect rules outside of math regions in the Word Options. For how to do this, see Choosing Math AutoCorrect Options.
Hybrid Interaction Point Process Model
Creates an instance of a hybrid point process model which can then be fitted to point pattern data.
Usage
Hybrid(...)
Arguments

…: Two or more interactions (objects of class "interact") or objects which can be converted to interactions. See Details.
Details
A hybrid (Baddeley, Turner, Mateu and Bevan, 2013) is a point process model created by combining two or more point process models, or an interpoint interaction created by combining two or more interpoint interactions.
The hybrid of two point processes, with probability densities \(f(x)\) and \(g(x)\) respectively, is the point process with probability density $$h(x) = c \, f(x) \, g(x)$$ where \(c\) is a normalising constant.

Equivalently, the hybrid of two point processes with conditional intensities \(\lambda(u,x)\) and \(\kappa(u,x)\) is the point process with conditional intensity $$ \phi(u,x) = \lambda(u,x) \, \kappa(u,x). $$ The hybrid of \(m > 2\) point processes is defined in a similar way.
The function ppm, which fits point process models to point pattern data, requires an argument of class "interact" describing the interpoint interaction structure of the model to be fitted. The appropriate description of a hybrid interaction is yielded by the function Hybrid().
The arguments … will be interpreted as interpoint interactions (objects of class "interact") and the result will be the hybrid of these interactions. Each argument must either be an interpoint interaction (object of class "interact"), or a point process model (object of class "ppm") from which the interpoint interaction will be extracted.
The arguments … may also be given in the form name=value. This is purely cosmetic: it can be used to attach simple mnemonic names to the component interactions, and makes the printed output from print.ppm neater.
Value

An object of class "interact" describing an interpoint interaction structure.
References

Baddeley, A., Turner, R., Mateu, J. and Bevan, A. (2013) Hybrids of Gibbs point process models and their implementation. Journal of Statistical Software 55:11, 1--43. http://www.jstatsoft.org/v55/i11/

Examples
Hybrid(Strauss(0.1), Geyer(0.2, 3))
Hybrid(Ha=Hardcore(0.05), St=Strauss(0.1), Ge=Geyer(0.2, 3))

fit <- ppm(redwood, ~1, Hybrid(A=Strauss(0.02), B=Geyer(0.1, 2)))
fit
ctr <- rmhcontrol(nrep=5e4, expand=1)
plot(simulate(fit, control=ctr))

# hybrid components can be models (including hybrid models)
Hybrid(fit, S=Softcore(0.5))

# plot.fii only works if every component is a pairwise interaction
data(swedishpines)
fit2 <- ppm(swedishpines, ~1, Hybrid(DG=DiggleGratton(2,10), S=Strauss(5)))
plot(fitin(fit2))
plot(fitin(fit2), separate=TRUE, mar.panel=rep(4,4))
Documentation reproduced from package spatstat, version 1.55-1, License: GPL (>= 2) |
Let $( \mathbb{R}^n, \| \cdot \|_p)$ be the $n$-dimensional Euclidean space equipped with the $\ell_p$-norm $\| \cdot \|_p$ for some $p\in [1, + \infty]$. Let $A$ be a convex set in $\mathbb{R}^n$ and define \begin{align} A^{\epsilon} = \{ y \in \mathbb{R}^n \colon \exists x \in A~\text{such that}~\| x -y \| _{p} \leq \epsilon \}, \end{align} where $\epsilon >0$ is a real number. In general, what is the condition under which there exists a $3$-times differentiable function $f \colon \mathbb{R}^n \rightarrow [0,1]$ such that $f$ is a good approximation of $\mathbb{1}_{A}$, the indicator function of the set $A$?
Specifically, it is ideal that $f(x) = 1$ when $x\in A$ and $f(x) = 0$ when $x\in \mathbb{R}^n \setminus A^{\epsilon}$. Moreover, the $\ell_q$-norm of the $i$-th order gradient of $f$ should be proportional to $\epsilon^{-i}$ for $i = 1,2,3$. That is, \begin{align} \| D^{(i)} f (x) \|_{q} \leq C_i \cdot \epsilon^{-i}, ~\text{for any}~~ i = 1,2,3. \end{align} Here the high-order gradients are taken as vectors, $1/q + 1/p = 1$, and $\| \cdot \|_q$ is the dual norm of $\| \cdot \|_p$.
An inspiration for this problem is that for $p = q= 2$, the problem is solved for any convex set. In addition, for rectangles of the form $\{ x \in \mathbb{R}^n \colon a_i \leq x_i\leq b_i, i =1, \ldots, n\}$, approximation under $\ell_{\infty}$ is also established. This approximation depends on the function $g(x) = 1/\rho \cdot \log [\sum_{j=1}^n \exp( \rho\cdot x_j)]$, which approximates the function that returns the maximum element of a vector in $\mathbb{R}^n$. See https://arxiv.org/abs/1412.3661v4. But general convex sets in $(\mathbb{R}^n, \| \cdot \|_{\infty})$ are not covered.
Moreover, for Banach spaces, similar results are also established for $\| \cdot \|_{B}$, the norm of the Banach space. However, such a result depends on the differentiability of $\| \cdot \|_B$, which does not handle convex sets in $(\mathbb{R}^n, \| \cdot \|_{\infty})$.
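For intuition, in one dimension (A an interval, so the choice of p is immaterial) a $C^3$ construction with exactly the desired $\epsilon^{-i}$ scaling comes from composing the distance to $A$ with a polynomial smoothstep whose first three derivatives vanish at both ends; each derivative then picks up one factor of $\epsilon^{-1}$ from the chain rule. A sketch (my illustration; the genuinely open part of the question is general convex $A$ in $(\mathbb{R}^n, \|\cdot\|_\infty)$):

import numpy as np

def smoothstep7(t):
    # C^3 step: 0 for t <= 0, 1 for t >= 1; derivatives 1..3 vanish at both ends
    t = np.clip(t, 0.0, 1.0)
    return t**4 * (35 - 84*t + 70*t**2 - 20*t**3)

def approx_indicator(x, a, b, eps):
    # ~1 on A = [a, b], ~0 outside A^eps, C^3 in between,
    # with |f^(i)| = O(eps^{-i}) by the chain rule
    dist = np.maximum.reduce([a - x, x - b, np.zeros_like(x)])
    return smoothstep7(1.0 - dist / eps)

x = np.linspace(-1.0, 3.0, 9)
print(approx_indicator(x, 0.0, 2.0, eps=0.5))
|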
Intermediate Thermodynamics Questions & Answers
If all molecules would always have a velocity vector pointing in the positive $x$ direction, then the average velocity in the positive $x$ direction would be $\overline{q}$ (with $\overline{q}$ the molecular speed). But, molecules move in all directions randomly, not just in the positive $x$ direction. When integrating and taking a time average over all molecules in a gas, then it can be shown that the average velocity in the positive $x$ direction is $\frac{1}{4} \overline{q}$. I can give you 0.5 point for this question. For more points, you need to express better what you don't understand.
To answer your question, no, you cannot substitute $\overline{u^2}$ by $\overline{q_x}^2$. This is because by definition: $$ \overline{u^2} \equiv \frac{1}{\Delta t}\int_0^{\Delta t} u^2 dt $$ $$ \overline{q_x} \equiv \frac{1}{\Delta t}\int_0^{\Delta t} \max(0,u) dt $$ That is, $\overline{u^2}$ is the average in time of the square of the $x$ component of the velocity while $\overline{q_x}$ is the average in time of the component of the velocity in the positive $x$ direction. Taking the square of $\overline{q_x}$ will give a totally different answer as $\overline{u^2}$: $$ \frac{1}{\Delta t}\int_0^{\Delta t} u^2 dt \ne \left( \frac{1}{\Delta t}\int_0^{\Delta t} \max(0,u) dt\right)^2 $$ You can find more information about how to integrate the latter integrals in some book on the “Kinetic Theory of Gases” — but this is beyond the scope of this course. I'll give you 1 point for this question — for more points, you need to formulate it correctly the first time with the right notation.
For the second part of your question, please delete it and ask a new question below (only one question per post).
In class, we found that $\xi=N \bar{q}_x$. We used dimensions to give us a hint only. Ultimately, the number of particles hitting the wall per unit time per unit area is equal to the number of particles per unit volume ($N$) times the average velocity of the particles perpendicular to the wall. The velocity perpendicular to the wall corresponds to the average velocity of the molecules along one direction ($\bar{q}_x$, or $\bar{q}_y$, but not $\bar{q}$..). I chose the positive $x$ direction for illustrative purposes; I could have chosen the negative $x$ direction, or the positive $y$ direction, and we would have obtained the same answer. I'll give you 0.5 point for this question.
You're on the right track. Consider one molecule. It can move in any direction with equal probability. Thus, to find the average velocity along one direction, you need to integrate the molecule's velocity in spherical coordinates (over the surface of a sphere). I'll give you 0.5 point for this question. I would have given more if you had asked it the first time without using an attached image and if the post were free of spelling mistakes.
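The $\frac{1}{4}\overline{q}$ factor discussed above is easy to check by simulation: sample isotropic directions on the unit sphere and average the positive $x$-component (my sketch, for unit molecular speed):

import numpy as np

rng = np.random.default_rng(0)
v = rng.normal(size=(1_000_000, 3))              # isotropic directions...
v /= np.linalg.norm(v, axis=1, keepdims=True)    # ...on the unit sphere
qx_positive = np.maximum(v[:, 0], 0.0)           # velocity toward +x only
print(qx_positive.mean())                        # ~0.25 = (1/4) * qbar
|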
A method is disclosed whereby a semiconductor silicon substrate wafer is diffused with a P or N type dopant or "impurity" in an open tube at a temperature of about 1050 °C in the presence of at… (More)
One investigates inequalities for the probabilities and mathematical expectations which follow from the postulates of the local quantum theory. It turns out that the relation between the quantum and… (More)
Abstract. ((Without abstract))
Classical ergodic theory deals with measure (or measure class) preserving actions of locally compact groups on Lebesgue spaces. An important tool in this setting is a theorem of Mackey which provides… (More)
Contrary to the classical wisdom, processes with independent values (defined properly) are much more diverse than white noises combined with Poisson point processes, and product systems are much… (More)
Uncountably many mutually non-isomorphic product systems (that is, continuous tensor products of Hilbert spaces) of types II-0 and III are constructed by probabilistic means (random sets and… (More)
Abstract: We show that the flat chaotic analytic zero points (i.e. zeroes of a random entire function $$\psi (z) = \sum_{k = 0}^{\infty} \zeta_k \frac{z^k}{\sqrt{k!}}$$ where $\zeta_0, \zeta_1, \ldots$ are… (More)
We consider symmetric auctions that may be multi-unit, with multi-dimensional bids and correlated multi-dimensional signals. Payment and allocation mechanisms are quite arbitrary. There are n… (More)
For the white noise, the spectral density is constant, and the past (restriction to (−∞,0)) is independent from the future (restriction to (0,+∞)). If the spectral density is not too far from being… (More) |
Let $\mathbf{x}$ be an eigenvector corresponding to the eigenvalue $\lambda$. Then we have\[A\mathbf{x}=\lambda \mathbf{x}.\]Taking the conjugate of both sides, we have\[\overline{A\mathbf{x}}=\overline{\lambda \mathbf{x}}.\]
Since $A$ is a real matrix, it yields that\[A\bar{\mathbf{x}}=\bar{\lambda}\bar{\mathbf{x}}. \tag{*}\]Note that $\mathbf{x}$ is a nonzero vector as it is an eigenvector. Then the complex conjugate $\bar{\mathbf{x}}$ is a nonzero vector as well.Thus the equality (*) implies that the complex conjugate $\bar{\lambda}$ is an eigenvalue of $A$ with corresponding eigenvector $\bar{\mathbf{x}}$.
Proof 2.
Let $p(t)$ be the characteristic polynomial of $A$.Recall that the roots of the characteristic polynomial $p(t)$ are the eigenvalues of $A$.Thus, we have\[p(\lambda)=0.\]
As $A$ is a real matrix, the characteristic polynomial $p(t)$ has real coefficients.It follows that\[\overline{p(t)}=p(\,\bar{t}\,).\]The previous two identities yield that\begin{align*}p(\bar{\lambda})=\overline{p(\lambda)}=\bar{0}=0,\end{align*}and the complex conjugate $\bar{\lambda}$ is a root of $p(t)$, and hence $\bar{\lambda}$ is an eigenvalue of $A$.
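A quick numeric illustration of both proofs' conclusion (my sketch):

import numpy as np

A = np.array([[0.0, -1.0],
              [1.0,  0.0]])            # a real matrix (rotation by 90 degrees)
w, V = np.linalg.eig(A)
print(w)                               # [0.+1.j  0.-1.j]: a conjugate pair
lam, x = w[0], V[:, 0]
# A xbar = lambda-bar xbar, exactly equation (*) above
assert np.allclose(A @ np.conj(x), np.conj(lam) * np.conj(x))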
There is at Least One Real Eigenvalue of an Odd Real Matrix: Let $n$ be an odd integer and let $A$ be an $n\times n$ real matrix. Prove that the matrix $A$ has at least one real eigenvalue. We give two proofs. Proof 1. Let $p(t)=\det(A-tI)$ be the characteristic polynomial of the matrix $A$. It is a degree $n$ […]
Eigenvalues of a Hermitian Matrix are Real Numbers: Show that eigenvalues of a Hermitian matrix $A$ are real numbers. (The Ohio State University Linear Algebra Exam Problem) We give two proofs. These two proofs are essentially the same. The second proof is a bit simpler and concise compared to the first one. […]
Find Eigenvalues, Eigenvectors, and Diagonalize the 2 by 2 Matrix: Consider the matrix $A=\begin{bmatrix}a & -b\\b& a\end{bmatrix}$, where $a$ and $b$ are real numbers and $b\neq 0$. (a) Find all eigenvalues of $A$. (b) For each eigenvalue of $A$, determine the eigenspace $E_{\lambda}$. (c) Diagonalize the matrix $A$ by finding a […] |
Difference between revisions of "Algebra and Algebraic Geometry Seminar Spring 2018"
Line 1: Line 1:
The seminar meets on Fridays at 2:25 pm in room B113.
The seminar meets on Fridays at 2:25 pm in room B113.
− Here is the schedule for [[Algebraic Geometry Seminar Spring 2017 | the previous semester]]
+ Here is the schedule for [[Algebraic Geometry Seminar Spring 2017 | the previous semester]], [[Algebra and Algebraic Geometry Seminar Spring 2018 | the next semester]], and for [[Algebra and Algebraic Geometry Seminar | this semester]].
==Algebra and Algebraic Geometry Mailing List==
==Algebra and Algebraic Geometry Mailing List==
Revision as of 14:15, 23 August 2018
The seminar meets on Fridays at 2:25 pm in room B113.
Algebra and Algebraic Geometry Mailing List

Please join the AGS Mailing List to hear about upcoming seminars, lunches, and other algebraic geometry events in the department (it is possible you must be on a math department computer to use this link).

Spring 2018 Schedule

Abstracts

Tasos Moulinos Derived Azumaya Algebras and Twisted K-theory
Topological K-theory of dg-categories is a localizing invariant of dg-categories over [math] \mathbb{C} [/math] taking values in the [math] \infty [/math]-category of [math] KU [/math]-modules. In this talk I describe a relative version of this construction; namely for [math]X[/math] a quasi-compact, quasi-separated [math] \mathbb{C} [/math]-scheme I construct a functor valued in the [math] \infty [/math]-category of sheaves of spectra on [math] X(\mathbb{C}) [/math], the complex points of [math]X[/math]. For inputs of the form [math]\operatorname{Perf}(X, A)[/math] where [math]A[/math] is an Azumaya algebra over [math]X[/math], I characterize the values of this functor in terms of the twisted topological K-theory of [math] X(\mathbb{C}) [/math]. From this I deduce a certain decomposition, for [math] X [/math] a finite CW-complex equipped with a bundle [math] P [/math] of projective spaces over [math] X [/math], of [math] KU(P) [/math] in terms of the twisted topological K-theory of [math] X [/math] ; this is a topological analogue of a result of Quillen’s on the algebraic K-theory of Severi-Brauer schemes.
Roman Fedorov A conjecture of Grothendieck and Serre on principal bundles in mixed characteristic
Let G be a reductive group scheme over a regular local ring R, and consider a principal G-bundle over R. An old conjecture of Grothendieck and Serre predicts that such a principal bundle is trivial if it is trivial over the fraction field of R. The conjecture has recently been proved in the "geometric" case, that is, when R contains a field. In the remaining case, the difficulty comes from the fact that the situation is more rigid, so that a certain general position argument does not go through. I will discuss this difficulty and a way to circumvent it to obtain some partial results.
Juliette Bruce Asymptotic Syzygies in the Semi-Ample Setting
In recent years numerous conjectures have been made describing the asymptotic Betti numbers of a projective variety as the embedding line bundle becomes more ample. I will discuss recent work attempting to generalize these conjectures to the case when the embedding line bundle becomes more semi-ample. (Recall a line bundle is semi-ample if a sufficiently large multiple is base point free.) In particular, I will discuss how the monomial methods of Ein, Erman, and Lazarsfeld used to prove non-vanishing results on projective space can be extended to prove non-vanishing results for products of projective space.
Andrei Caldararu Computing a categorical Gromov-Witten invariant
In his 2005 paper "The Gromov-Witten potential associated to a TCFT" Kevin Costello described a procedure for recovering an analogue of the Gromov-Witten potential directly out of a cyclic A-infinity algebra or category. Applying his construction to the derived category of sheaves of a complex projective variety provides a definition of higher genus B-model Gromov-Witten invariants, independent of the BCOV formalism. This has several advantages. Due to the categorical invariance of these invariants, categorical mirror symmetry automatically implies classical mirror symmetry to all genera. Also, the construction can be applied to other categories like categories of matrix factorizations, giving a direct definition of FJRW invariants, for example.
In my talk I shall describe the details of the computation (joint with Junwu Tu) of the invariant, at g=1, n=1, for elliptic curves. The result agrees with the predictions of mirror symmetry, matching classical calculations of Dijkgraaf. It is the first non-trivial computation of a categorical Gromov-Witten invariant.
Aron Heleodoro Normally ordered tensor product of Tate objects and decomposition of higher adeles
In this talk I will introduce the different tensor products that exist on Tate objects over vector spaces (or more generally coherent sheaves on a given scheme). As an application, I will explain how these can be used to describe higher adeles on an n-dimensional smooth scheme. Both Tate objects and higher adeles would be introduced in the talk. (This is based on joint work with Braunling, Groechenig and Wolfson.)
Moisés Herradón Cueto Local type of difference equations
The theory of algebraic differential equations on the affine line is very well-understood. In particular, there is a well-defined notion of restricting a D-module to a formal neighborhood of a point, and these restrictions are completely described by two vector spaces, called vanishing cycles and nearby cycles, and some maps between them. We give an analogous notion of "restriction to a formal disk" for difference equations that satisfies several desirable properties: first of all, a difference module can be recovered uniquely from its restriction to the complement of a point and its restriction to a formal disk around this point. Secondly, it gives rise to a local Mellin transform, which relates vanishing cycles of a difference module to nearby cycles of its Mellin transform. Since the Mellin transform of a difference module is a D-module, the Mellin transform brings us back to the familiar world of D-modules.
Eva Elduque On the signed Euler characteristic property for subvarieties of Abelian varieties
Franecki and Kapranov proved that the Euler characteristic of a perverse sheaf on a semi-abelian variety is non-negative. This result has several purely topological consequences regarding the sign of the (topological and intersection homology) Euler characteristic of a subvariety of an abelian variety, and it is natural to attempt to justify them by more elementary methods. In this talk, we'll explore the geometric tools used recently in the proof of the signed Euler characteristic property. Joint work with Christian Geske and Laurentiu Maxim.
Harrison Chen Equivariant localization for periodic cyclic homology and derived loop spaces
There is a close relationship between derived loop spaces, a geometric object, and (periodic) cyclic homology, a categorical invariant. In this talk we will discuss this relationship and how it leads to an equivariant localization result, which has an intuitive interpretation using the language of derived loop spaces. We discuss ongoing generalizations and potential applications in computing the periodic cyclic homology of categories of equivariant (coherent) sheaves on algebraic varieties.
Phil Tosteson Stability in the homology of Deligne-Mumford compactifications
The space [math]\bar M_{g,n}[/math] is a compactification of the moduli space of algebraic curves with marked points, obtained by allowing smooth curves to degenerate to nodal ones. We will talk about how the asymptotic behavior of its homology, [math]H_i(\bar M_{g,n})[/math], for [math]n \gg 0[/math] can be studied using the representation theory of the category of finite sets and surjections.
Wei Ho Noncommutative Galois closures and moduli problems
In this talk, we will discuss the notion of a Galois closure for a possibly noncommutative algebra. We will explain how this problem is related to certain moduli problems involving genus one curves and torsors for Jacobians of higher genus curves. This is joint work with Matt Satriano.
Daniel Corey Initial degenerations of Grassmannians
Let Gr_0(d,n) denote the open subvariety of the Grassmannian Gr(d,n) consisting of d-1 dimensional subspaces of P^{n-1} meeting the toric boundary transversely. We prove that Gr_0(3,7) is schoen in the sense that all of its initial degenerations are smooth. The main technique we will use is to express the initial degenerations of Gr_0(3,7) as an inverse limit of thin Schubert cells. We use this to show that the Chow quotient of Gr(3,7) by the maximal torus H in GL(7) is the log canonical compactification of the moduli space of 7 lines in P^2 in linear general position.
Alena Pirutka Irrationality problems
Let X be a projective algebraic variety, the set of solutions of a system of homogeneous polynomial equations. Several classical notions describe how "unconstrained" the solutions are, i.e., how close X is to projective space: there are notions of rational, unirational and stably rational varieties. Over the field of complex numbers, these notions coincide in dimensions one and two, but diverge in higher dimensions. In the last years, many new classes of non stably rational varieties were found, using a specialization technique introduced by C. Voisin. This method also allowed to prove that rationality is not a deformation invariant in smooth and projective families of complex varieties: this is a joint work with B. Hassett and Y. Tschinkel. In my talk I will describe classical examples, as well as the recent progress around these rationality questions.
Nero Budur Homotopy of singular algebraic varieties
By work of Simpson, Kollár, Kapovich, every finitely generated group can be the fundamental group of an irreducible complex algebraic variety with only normal crossings and Whitney umbrellas as singularities. In contrast, we show that if a complex algebraic variety has no weight zero 1-cohomology classes, then the fundamental group is strongly restricted: the irreducible components of the cohomology jump loci of rank one local systems containing the constant sheaf are complex affine tori. Same for links and Milnor fibers. This is joint work with Marcel Rubió.
Alexander Yom Din Drinfeld-Gaitsgory functor and contragredient duality for (g,K)-modules
Drinfeld suggested the definition of a certain endo-functor, called the pseudo-identity functor (or the Drinfeld-Gaitsgory functor), on the category of D-modules on an algebraic stack. We extend this definition to an arbitrary DG category, and show that if certain finiteness conditions are satisfied, this functor is the inverse of the Serre functor. We show that the pseudo-identity functor for (g,K)-modules is isomorphic to the composition of cohomological and contragredient dualities, which is parallel to an analogous assertion for p-adic groups.
In this talk I will try to discuss some of these results and around them. This is joint work with Dennis Gaitsgory.
John Lesieutre Some higher-dimensional cases of the Kawaguchi-Silverman conjecture
Given a dominant rational self-map f : X -->X of a variety defined over a number field, the first dynamical degree $\lambda_1(f)$ and the arithmetic degree $\alpha_f(P)$ are two measures of the complexity of the dynamics of f: the first measures the rate of growth of the degrees of the iterates f^n, while the second measures the rate of growth of the heights of the iterates f^n(P) for a point P. A conjecture of Kawaguchi and Silverman predicts that if P has Zariski-dense orbit, then these two quantities coincide. I will prove this conjecture in several higher-dimensional settings, including for all automorphisms of hyper-K\"ahler varieties. This is joint work with Matthew Satriano. |
Prefixes
concept
When dealing with the values in electrical circuits we often have to use numbers from 0.0000001 to 1000000. Sometimes we use numbers at both extremes in the same circuit for different things. This becomes tedious and difficult to work with. It's also error-prone: with so many zeros, it's easy to leave one off or add an extra one, which can wreak havoc on your answers. To fix this we use simple prefixes before the units of a number to indicate how big or small it is without having to write out all the zeros. You're almost certainly used to using some of these prefixes already. The kilo, mega and giga are (almost) the same as the ones you use when talking about data usage on the internet. Working with prefixes is awkward and a real pain at first, but it makes everything much, much simpler as you get into more advanced work. Don't worry too much about memorising the different letters and their values; just keep a table written next to you as you do your work and you'll automatically pick it up eventually.
fact
Each prefix is a single letter and represents \(\times 10^x\) where \(x\) is a number determined by the letter of the prefix.
fact
The following table lists the most common prefixes and what they represent as well as their full names.
Symbol | Multiplier | Full Name
n | \(\times 10^{-9}\) | nano
u (or \(\mu\)) | \(\times 10^{-6}\) | micro
m | \(\times 10^{-3}\) | milli
k | \(\times 10^{3}\) | kilo
M | \(\times 10^{6}\) | mega
G | \(\times 10^{9}\) | giga
fact
Prefixes always come at the end of a number but before the unit. They're called prefixes because they prefix the unit, not the number.
example
Expand 10k. We just replace the 'k' with \(\times 10^3\) to get \(10\times10^3 = 10000\).
example
Expand 1u. Looking up 'u' in the table we see that our number becomes \(1\times 10^{-6} = 0.000001\).
example
Expand 1.23M. 'M' is \(\times 10^6\), so 1.23M \(= 1.23\times 10^6 = 1230000\).
example
Simplify 1000. When we simplify numbers using prefixes we want to pick the prefix that makes the number out front between 1 and 1000. In this case the prefix 'k' is exactly the number we need, so we can say \(1000 = 1k\).
example
Simplify 1300. Now to find the right number and prefix we divide our 1300 by various prefixes until we get a number between 1 and 1000. As you can probably see, if we again choose 'k' we get \(\frac{1300}{k} = 1.3 \implies 1300 = 1.3k\).
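If you like, a few lines of code can do the table lookup for you (a sketch in Python; the cutoffs implement the "between 1 and 1000" rule):

PREFIXES = [(1e9, 'G'), (1e6, 'M'), (1e3, 'k'),
            (1.0, ''), (1e-3, 'm'), (1e-6, 'u'), (1e-9, 'n')]

def simplify(value):
    # pick the prefix that leaves the leading number between 1 and 1000
    for mult, symbol in PREFIXES:
        if abs(value) >= mult:
            return f"{value / mult:g}{symbol}"
    return f"{value:g}"

print(simplify(1300))      # 1.3k
print(simplify(0.000001))  # 1u
print(simplify(10000))     # 10k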
practice problems |
I think -- and hope -- that every computer science student is confronted with this problem, which feels like a paradox. It is a very good example of the difference between computable in the TCS sense and computable in a practical sense.
My thoughts back then were: "Yeah, if I knew the answer, it would obviously be computable. But how to find out?" The trick is to rid yourself of the illusion that you have to find out whether $\pi$ has this property or not. Because this, obviously (read: imho), cannot be done by a Turing machine (as long as we do not have more knowledge than we have about $\pi$).
Consider your definition of computability: we say $f$ is (Turing-)computable if and only if $\exists M \in TM : f_M = f$. That is, you only have to show the existence of an appropriate Turing machine, not give one. What you -- we -- try to do there is to compute the Turing machine that computes the required function. This is a way harder problem!
The basic idea of the proof is: I give you an infinite class of functions, all of them computable (to show; trivial here). Then I prove that the function you are looking for is in that class (to show; case distinction here). q.e.d.
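To make this concrete, here is a small sketch of ours; the specific property of $\pi$, namely "does its decimal expansion contain at least n consecutive 7s?", is the usual textbook example and is assumed here purely for illustration:

# g(n) = 1 if pi's decimal expansion contains a run of at least n consecutive 7s, else 0.
# g is monotone in n, so it must equal one of the following functions, each computable:
def f_infinite(n):
    return 1                               # case: arbitrarily long runs of 7s exist

def f_up_to(N):
    return lambda n: 1 if n <= N else 0    # case: the longest run has length exactly N

# We cannot tell WHICH of these g equals, but every candidate is computable,
# so a Turing machine computing g exists. |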
In mathematics, a theory like the theory of probability is developed axiomatically. That means we begin with fundamental laws or principles called axioms, which are the assumptions the theory rests on. Then we derive the consequences of these axioms via proofs: deductive arguments which establish additional principles that follow from the axioms. These further principles are called theorems.
In the case of probability theory, we can build the whole theory from just three axioms. And that makes certain tasks much easier. For example, it makes it easy to establish that anyone who violates a law of probability can be Dutch booked. Because, if you violate a law of probability, you must also be violating one of the three axioms that entail the law you’ve violated. And with only three axioms to check, we can verify pretty quickly that violating an axiom always makes you vulnerable to a Dutch book.
The axiomatic approach is useful for lots of other reasons too. For example, we can program the axioms into a computer and use it to solve real-world problems. Or, we could use the axioms to verify that the theory is consistent: if we can establish that the axioms don’t contradict one another, then we know the theory makes sense. Axioms are also a useful way to summarize a theory, which makes it easier to compare it to alternative theories.
In addition to axioms, a theory typically includes some definitions. Definitions construct new concepts out of existing ones, ones that already appear in the axioms. Definitions don’t add new assumptions to the theory. Instead they’re useful because they give us new language in which to describe what the axioms already entail.
So a theory is a set of statements that tells us everything true about the subject at hand. There are three kinds of statements: axioms, definitions, and theorems.
In this appendix we’ll construct probability theory axiomatically. We’ll learn how to derive all the laws of probability discussed in Part I from three simple statements.
Probability theory has three axioms, and they’re all familiar laws of probability. But they’re fundamental laws in a way. All the other laws can be derived from them.
The three axioms are:
For any proposition \(A\), \(0 \leq \p(A) \leq 1\).
If \(A\) is a logical truth then \(\p(A) = 1\).
If \(A\) and \(B\) are mutually exclusive then \(\p(A \vee B) = \p(A) + \p(B)\).
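To make the axioms concrete, here is a minimal sketch in Python (our illustration, not part of the theory): it checks the second and third axioms, together with two of the laws derived below, on a toy sample space. The uniform measure P and the events A and B are assumptions chosen for the example.

# A toy probability measure on a finite sample space (a fair die).
# The measure P and the events A, B are illustrative assumptions.
from fractions import Fraction

omega = {1, 2, 3, 4, 5, 6}

def P(event):
    """Uniform probability measure on omega."""
    return Fraction(len(event), len(omega))

A, B = {1, 2, 3}, {3, 4}
assert P(omega) == 1                       # Tautology axiom: omega is always true
assert P(A | {4}) == P(A) + P({4})         # Additivity: A and {4} are mutually exclusive
assert P(omega - A) == 1 - P(A)            # Negation rule (derived below)
assert P(A | B) == P(A) + P(B) - P(A & B)  # General Addition rule (derived below)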
Our task now is to derive from these three axioms the other laws of probability. We do this by stating each law, and then giving a proof of it: a valid deductive argument showing that it follows from the axioms and definitions.
Let’s start with one of the easier laws to derive.
\(\p(\neg A) = 1 - \p(A)\)
Proof. To prove this rule, start by noticing that \(A \vee \neg A\) is a logical truth. So we can reason as follows:\[ \begin{aligned} \p(A \vee \neg A) &= 1 & \mbox{ by Tautology}\\ \p(A) + \p(\neg A) &= 1 & \mbox{ by Additivity}\\ \p(\neg A) &= 1 - \p(A) & \mbox{ by algebra.} \end{aligned}\]
The black square indicates the end of the proof. Notice how each line of our proof is justified by either applying an axiom or using basic algebra. This ensures it’s a valid deductive argument.
Now we can use the Negation rule to establish the flipside of the Tautology rule: the Contradiction rule.
If \(A\) is a contradiction then \(\p(A) = 0\).
Proof. Notice that if \(A\) is a contradiction, then \(\neg A\) must be a tautology. So \(\p(\neg A) = 1\). Therefore:\[ \begin{aligned} \p(A) &= 1 - \p(\neg A) & \mbox{by Negation}\\ &= 1 - 1 & \mbox{by Tautology}\\ &= 0 & \mbox{by arithmetic.} \end{aligned}\]
Our next theorem is about conditional probability. But the concept of conditional probability isn’t mentioned in the axioms, so we need to define it first.
The conditional probability of \(A\) given \(B\) is written \(\p(A \given B)\) and is defined: \[\p(A \given B) = \frac{\p(A \wedge B)}{\p(B)},\] provided that \(\p(B) > 0\).
From this definition we can derive the following theorem.
If \(\p(B) > 0\), then \(\p(A \wedge B) = \p(A \given B)\p(B)\).
Proof. \[ \begin{aligned} \p(A \given B) &= \frac{\p(A \wedge B)}{\p(B)} & \mbox{ by definition}\\ \p(A \given B)\p(B) &= \p(A \wedge B) & \mbox{ by algebra}\\ \p(A \wedge B) &= \p(A \given B)\p(B) & \mbox{ by algebra.} \end{aligned}\]
Notice that the first step in this proof wouldn’t make sense if we didn’t assume from the beginning that \(\p(B) > 0\). That’s why the theorem begins with the qualifier, “If \(\p(B) > 0\)…”.
Next we’ll prove the Equivalence rule and the General Addition rule. These proofs are longer and more difficult than the ones we’ve done so far.
When \(A\) and \(B\) are logically equivalent, \(\p(A) = \p(B)\).
Proof. Suppose that \(A\) and \(B\) are logically equivalent. Then \(\neg A\) and \(B\) are mutually exclusive: if \(B\) is true then \(A\) must be true, hence \(\neg A\) false. So \(B\) and \(\neg A\) can’t both be true.
So we can apply the Additivity axiom to \(\neg A \vee B\): \[ \begin{aligned} \p(\neg A \vee B) &= \p(\neg A) + \p(B) & \mbox{ by Additivity}\\ &= 1 - \p(A) + \p(B) & \mbox{ by Negation.} \end{aligned} \]
Next notice that, because \(A\) and \(B\) are logically equivalent, we also know that \(\neg A \vee B\) is a necessary truth. If \(B\) is false, then \(A\) must be false, so \(\neg A\) must be true. So either \(B\) is true, or \(\neg A\) is true. So \(\neg A \vee B\) is always true, no matter what.
So we can apply the Tautology axiom: \[ \begin{aligned} \p(\neg A \vee B) &= 1 & \mbox{ by Tautology.} \end{aligned} \] Combining the previous two equations we get: \[ \begin{aligned} 1 &= 1 - \p(A) + \p(B) & \mbox{ by algebra}\\ \p(A) &= \p(B) & \mbox{ by algebra}. \end{aligned} \]
Now we can use this theorem to derive the General Addition rule.
\(\p(A \vee B) = \p(A) + \p(B) - \p(A \wedge B)\).
Proof. Start with the observation that \(A \vee B\) is logically equivalent to:\[ (A \wedge \neg B) \vee (A \wedge B) \vee (\neg A \wedge B). \]This is easiest to see with an Euler diagram, but you can also verify it with a truth table. (We won’t go through either of these exercises here.)
So we can apply the Equivalence rule to get: \[ \begin{aligned} \p(A \vee B) &= \p((A \wedge \neg B) \vee (A \wedge B) \vee (\neg A \wedge B)). \end{aligned} \] And thus, by Additivity: \[ \begin{aligned} \p(A \vee B) &= \p(A \wedge \neg B) + \p(A \wedge B) + \p(\neg A \wedge B). \end{aligned} \]
We can also verify with an Euler diagram (or truth table) that \(A\) is logically equivalent to \((A \wedge B) \vee (A \wedge \neg B)\), and that \(B\) is logically equivalent to \((A \wedge B) \vee (\neg A \wedge B)\). So, by Additivity, we also have the equations: \[ \begin{aligned} \p(A) &= \p(A \wedge \neg B) + \p(A \wedge B).\\ \p(B) &= \p(A \wedge B) + \p(\neg A \wedge B). \end{aligned} \] Notice, the last equation here can be transformed to: \[ \begin{aligned} \p(\neg A \wedge B) &= \p(B) - \p(A \wedge B). \end{aligned} \] Putting the previous four equations together, we can then derive: \[ \begin{aligned} \p(A \vee B) &= \p(A \wedge \neg B) + \p(A \wedge B) + \p(\neg A \wedge B) & \mbox{by algebra}\\ &= \p(A) + \p(\neg A \wedge B) & \mbox{by algebra}\\ &= \p(A) + \p(B) - \p(A \wedge B) & \mbox{by algebra.} \end{aligned} \]
Next we derive the Law of Total Probability and Bayes’ theorem.
If \(0 < \p(B) < 1\), then
\[ \p(A) = \p(A \given B)\p(B) + \p(A \given \neg B)\p(\neg B). \]
Proof. \[ \begin{aligned} \p(A) &= \p((A \wedge B) \vee (A \wedge \neg B)) & \mbox{ by Equivalence}\\ &= \p(A \wedge B) + \p(A \wedge \neg B) & \mbox{ by Additivity}\\ &= \p(A \given B)\p(B) + \p(A \given \neg B)\p(\neg B) & \mbox{ by Multiplication.} \end{aligned}\]
Notice, the last line of this proof only makes sense if \(\p(B) > 0\) and \(\p(\neg B) > 0\). That’s the same as \(0 < \p(B) < 1\), which is why the theorem begins with the condition: “If \(0 < \p(B) < 1\)…”.
Now for the first version of Bayes’ theorem:
If \(\p(A),\p(B)>0\), then \[ \p(A \given B) = \p(A)\frac{\p(B \given A)}{\p(B)}. \]
Proof. \[ \begin{aligned} \p(A \given B) &= \frac{\p(A \wedge B)}{\p(B)} & \mbox{by definition}\\ &= \frac{\p(B \given A)\p(A)}{\p(B)} & \mbox{by Multiplication}\\ &= \p(A)\frac{\p(B \given A)}{\p(B)} & \mbox{by algebra.}\\ \end{aligned}\]
And next the long version:
If \(1 > \p(A) > 0\) and \(\p(B)>0\), then \[ \p(A \given B) = \frac{\p(A)\p(B \given A)}{\p(A)\p(B \given A) + \p(\neg A)\p(B \given \neg A)}. \]
Proof. \[ \begin{aligned} \p(A \given B) &= \frac{\p(A)\p(B \given A)}{\p(B)} & \mbox{by Bayes' theorem}\\ &= \frac{\p(A)\p(B \given A)}{\p(A)\p(B \given A) + \p(\neg A)\p(B \given \neg A)} & \mbox{by Total Probability.} \end{aligned}\]
Finally, let’s introduce the concept of independence, and two key theorems that deal with it.
\(A\) is independent of \(B\) if \(\p(A \given B) = \p(A)\) and \(\p(A) > 0\).
Now we can state and prove the Multiplication rule.
If \(A\) is independent of \(B\), then \(\p(A \wedge B) = \p(A)\p(B)\).
Proof. Suppose \(A\) is independent of \(B\). Then:\[ \begin{aligned} \p(A \given B) &= \p(A) & \mbox{ by definition}\\ \frac{\p(A \wedge B)}{\p(B)} &= \p(A) & \mbox{ by definition}\\ \p(A \wedge B) &= \p(A) \p(B) & \mbox{ by algebra.}\end{aligned}\]
Finally, we prove another useful fact about independence, namely that it goes both ways.
If \(A\) is independent of \(B\), then \(B\) is independent of \(A\).
Proof. To derive this fact, suppose \(A\) is independent of \(B\). Then:\[ \begin{aligned} \p(A \wedge B) &= \p(A) \p(B) & \mbox{ by Multiplication}\\ \p(B \wedge A) &= \p(A) \p(B) & \mbox{ by Equivalence}\\ \frac{\p(B \wedge A)}{\p(A)} &= \p(B) & \mbox{ by algebra}\\ \p(B \given A) &= \p(B) & \mbox{ by definition.} \end{aligned}\]
We’ve now established that the laws of probability used in this book can be derived from the three axioms we began with. |
Geometry and Topology Seminar
Revision as of 22:00, 8 November 2016

Fall 2016
date | speaker | title | host(s)
September 9 | Bing Wang (UW Madison) | "The extension problem of the mean curvature flow" | (Local)
September 16 | Ben Weinkove (Northwestern University) | "Gauduchon metrics with prescribed volume form" | Lu Wang
September 23 | Jiyuan Han (UW Madison) | "Deformation theory of scalar-flat ALE Kahler surfaces" | (Local)
September 30 | | |
October 7 | Yu Li (UW Madison) | "Ricci flow on asymptotically Euclidean manifolds" | (Local)
October 14 | Sean Howe (University of Chicago) | "Representation stability and hypersurface sections" | Melanie Matchett Wood
October 21 | Nan Li (CUNY) | "Quantitative estimates on the singular sets of Alexandrov spaces" | Lu Wang
October 28 | Ronan Conlon (Florida International University) | "New examples of gradient expanding K\"ahler-Ricci solitons" | Bing Wang
November 4 | Jonathan Zhu (Harvard University) | "Entropy and self-shrinkers of the mean curvature flow" | Lu Wang
November 11 | Richard Kent (Wisconsin) | Analytic functions from hyperbolic manifolds | local
November 18 | Caglar Uyanik (Illinois) | "TBA" | Kent
Thanksgiving Recess
December 2 | Peyman Morteza (UW Madison) | "TBA" | (Local)
December 9 | Yu Zeng (University of Rochester) | "TBA" |
December 16 | | |

Spring 2017
date | speaker | title | host(s)
Jan 20, Jan 27, Feb 3, Feb 10, Feb 17, Feb 24, March 3, March 10, March 17 | | |
March 24 | Spring Break | |
March 31, April 7, April 14, April 21 | | |
April 28 | Bena Tshishiku (Harvard) | "TBA" | Dymarz

Fall Abstracts

Ronan Conlon: New examples of gradient expanding K\"ahler-Ricci solitons
A complete K\"ahler metric $g$ on a K\"ahler manifold $M$ is a \emph{gradient expanding K\"ahler-Ricci soliton} if there exists a smooth real-valued function $f:M\to\mathbb{R}$ with $\nabla^{g}f$ holomorphic such that $\operatorname{Ric}(g)-\operatorname{Hess}(f)+g=0$. I will present new examples of such metrics on the total space of certain holomorphic vector bundles. This is joint work with Alix Deruelle (Universit\'e Paris-Sud).
Jiyuan Han: Deformation theory of scalar-flat ALE Kahler surfaces
We prove a Kuranishi-type theorem for deformations of complex structures on ALE Kahler surfaces. This is used to prove that for any scalar-flat Kahler ALE surface, all small deformations of complex structure also admit scalar-flat Kahler ALE metrics. A local moduli space of scalar-flat Kahler ALE metrics is then constructed, which is shown to be universal up to small diffeomorphisms (that is, diffeomorphisms which are close to the identity in a suitable sense). A formula for the dimension of the local moduli space is proved in the case of a scalar-flat Kahler ALE surface which deforms to a minimal resolution of $\mathbb{C}^2/\Gamma$, where $\Gamma$ is a finite subgroup of $U(2)$ without complex reflections. This is joint work with Jeff Viaclovsky.
Sean Howe: Representation stability and hypersurface sections
We give stability results for the cohomology of natural local systems on spaces of smooth hypersurface sections as the degree goes to $\infty$. These results give new geometric examples of a weak version of representation stability for symmetric, symplectic, and orthogonal groups. The stabilization occurs in point-counting and in the Grothendieck ring of Hodge structures, and we give explicit formulas for the limits using a probabilistic interpretation. These results have natural geometric analogs -- for example, we show that the "average" smooth hypersurface in $\mathbb{P}^n$ is $\mathbb{P}^{n-1}$!
Nan Li: Quantitative estimates on the singular sets of Alexandrov spaces
The definition of quantitative singular sets was initiated by Cheeger and Naber. They proved some volume estimates on such singular sets in non-collapsed manifolds with lower Ricci curvature bounds and their limit spaces. On the quantitative singular sets in Alexandrov spaces, we obtain stronger estimates in a collapsing fashion. We also show that the (k,\epsilon)-singular sets are k-rectifiable and such structure is sharp in some sense. This is a joint work with Aaron Naber.
Yu Li: Ricci flow on asymptotically Euclidean manifolds
In this talk, we prove that if an asymptotically Euclidean (AE) manifold with nonnegative scalar curvature has long time existence of Ricci flow, it converges to the Euclidean space in the strong sense. By convergence, the mass will drop to zero as time tends to infinity. Moreover, in three dimensional case, we use Ricci flow with surgery to give an independent proof of positive mass theorem. A classification of diffeomorphism types is also given for all AE 3-manifolds with nonnegative scalar curvature.
Gaven Marin: TBA

Peyman Morteza: TBA

Richard Kent: Analytic functions from hyperbolic manifolds
Thurston's Geometrization Conjecture, now a celebrated theorem of Perelman, tells us that most 3-manifolds are naturally geometric in nature. In fact, most 3-manifolds admit hyperbolic metrics. In the 1970s, Thurston proved the Geometrization conjecture in the case of Haken manifolds, and the proof revolutionized 3-dimensional topology, hyperbolic geometry, Teichmüller theory, and dynamics. Thurston's proof is by induction, constructing a hyperbolic structure from simpler pieces. At the heart of the proof is an analytic function called the
skinning map that one must understand in order to glue hyperbolic structures together. A better understanding of this map would more brightly illuminate the interaction between topology and geometry in dimension three. I will discuss what is currently known about this map.

Caglar Uyanik: TBA

Bing Wang: The extension problem of the mean curvature flow
We show that the mean curvature blows up at the first finite singular time for a closed smooth embedded mean curvature flow in R^3. A key ingredient of the proof is to show a two-sided pseudo-locality property of the mean curvature flow, whenever the mean curvature is bounded. This is a joint work with Haozhao Li.
Ben Weinkove: Gauduchon metrics with prescribed volume form
Every compact complex manifold admits a Gauduchon metric in each conformal class of Hermitian metrics. In 1984 Gauduchon conjectured that one can prescribe the volume form of such a metric. I will discuss the proof of this conjecture, which amounts to solving a nonlinear Monge-Ampere type equation. This is a joint work with Gabor Szekelyhidi and Valentino Tosatti.
Jonathan Zhu: Entropy and self-shrinkers of the mean curvature flow
The Colding-Minicozzi entropy is an important tool for understanding the mean curvature flow (MCF), and is a measure of the complexity of a submanifold. Together with Ilmanen and White, they conjectured that the round sphere minimises entropy amongst all closed hypersurfaces. We will review the basics of MCF and their theory of generic MCF, then describe the resolution of the above conjecture, due to J. Bernstein and L. Wang for dimensions up to six and recently claimed by the speaker for all remaining dimensions. A key ingredient in the latter is the classification of entropy-stable self-shrinkers that may have a small singular set.
Spring Abstracts

Bena Tshishiku:
"TBA"
Archive of past Geometry seminars
2015-2016: Geometry_and_Topology_Seminar_2015-2016
2014-2015: Geometry_and_Topology_Seminar_2014-2015
2013-2014: Geometry_and_Topology_Seminar_2013-2014
2012-2013: Geometry_and_Topology_Seminar_2012-2013
2011-2012: Geometry_and_Topology_Seminar_2011-2012
2010: Fall-2010-Geometry-Topology |
Recall that a complex matrix $M$ is said to be Hermitian if $M^*=M$. Here $M^*$ is the conjugate transpose of $M$, that is, $M^* = \overline{M}^{\mathsf{T}}$.
Proof.
Let\[B=\frac{A+A^*}{2} \text{ and } C=\frac{A-A^*}{2i}.\]We claim that $B$ and $C$ are Hermitian matrices.Using the fact that $(A^*)^*=A$, we compute\begin{align*}B^*&=\left(\, \frac{A+A^*}{2} \,\right)^*\\&=\frac{A^*+(A^*)^*}{2}\\&=\frac{A^*+A}{2}=B.\end{align*}It yields that the matrix $B$ is Hermitian.
We also have\begin{align*}C^*&=\left(\, \frac{A-A^*}{2i} \,\right)^*\\&=\frac{A^*-(A^*)^*}{-2i}\\&=\frac{A^*-A}{-2i}\\&=\frac{A-A^*}{2i}=C.\end{align*}Thus, the matrix $C$ is also Hermitian.
Finally, note that we have\begin{align*}B+iC&=\frac{A+A^*}{2}+i\frac{A-A^*}{2i}\\&=\frac{A+A^*}{2}+\frac{A-A^*}{2}\\&=A.\end{align*}Therefore, each complex matrix $A$ can be written as $A=B+iC$, where $B$ and $C$ are Hermitian matrices.
(b) By the proof of part (a), it suffices to compute\[B=\frac{A+A^*}{2} \text{ and } C=\frac{A-A^*}{2i}.\]
We have\[A^*=\begin{bmatrix}-i & 2+i\\6& 1-i\end{bmatrix}.\]
A direct computation yields that\[B=\begin{bmatrix}0 & 4+\frac{i}{2}\\[6pt]4-\frac{i}{2}& 1\end{bmatrix} \text{ and } C=\begin{bmatrix}1 & -\frac{1}{2}-2i\\[6pt]-\frac{1}{2}+2i& 1\end{bmatrix}.\]
By the result of part (a), these matrices are Hermitian and satisfy $A=B+iC$, as required.
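As a quick numerical sanity check (ours, not part of the original solution), the part (b) computation can be verified with NumPy:

# Check that A = B + iC with B, C Hermitian, for the matrix of part (b).
import numpy as np

A = np.array([[1j, 6], [2 - 1j, 1 + 1j]])  # the matrix whose A* is given above
B = (A + A.conj().T) / 2
C = (A - A.conj().T) / 2j
assert np.allclose(B, B.conj().T)   # B is Hermitian
assert np.allclose(C, C.conj().T)   # C is Hermitian
assert np.allclose(A, B + 1j * C)   # A = B + iC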
Related Question.
Problem. Prove that every Hermitian matrix $A$ can be written as the sum\[A=B+iC,\]where $B$ is a real symmetric matrix and $C$ is a real skew-symmetric matrix.
Eigenvalues of a Hermitian Matrix are Real Numbers
Show that eigenvalues of a Hermitian matrix $A$ are real numbers. (The Ohio State University Linear Algebra Exam Problem) We give two proofs. These two proofs are essentially the same; the second proof is a bit simpler and more concise than the first one. […]

Diagonalize the $2\times 2$ Hermitian Matrix by a Unitary Matrix
Consider the Hermitian matrix \[A=\begin{bmatrix}1 & i\\-i& 1\end{bmatrix}.\] (a) Find the eigenvalues of $A$. (b) For each eigenvalue of $A$, find the eigenvectors. (c) Diagonalize the Hermitian matrix $A$ by a unitary matrix. Namely, find a diagonal matrix […] |
2014 Heat Transfer Midterm Exam
May 2nd 2014
19:00 — 21:00
NO NOTES OR BOOKS; USE HEAT TRANSFER TABLES THAT WERE DISTRIBUTED; ANSWER ALL 4 QUESTIONS; ALL QUESTIONS HAVE EQUAL VALUE.
Assuming that the concrete has properties of:
$c_{\rm c}=900$ J/kg$^\circ$C, $\rho_{\rm c}=2000$ kg/m$^3$, $k_{\rm c}=0.3$ W/m$^\circ$C
that the earth has properties of:
$c_{\rm e}=1000$ J/kg$^\circ$C, $\rho_{\rm e}=700$ kg/m$^3$, $k_{\rm e}=2.0$ W/m$^\circ$C
and that the temperature of the earth surface is 20$^\circ$C, and that the contact resistance between the earth and the shell is 0.1 m$^2$$^\circ$C/W, determine the maximum amount of radioactive wastes (in grams) that can be inserted in the spherical shell such that the temperature anywhere within the concrete shell does not exceed 50$^\circ$C.
$L=0.4$ m, $H=0.2$ m, $D=0.2$ m
$c=900$ J/kg$^\circ$C, $\rho=2000$ kg/m$^3$, $k=1.4$ W/m$^\circ$C
and that $h$ can be taken as 14 W/m$^2$$^\circ$C and that $T_\infty$ corresponds to 20$^\circ$C, find the following temperatures at a time 3 hours after the concrete starts to be cooled by the air flow:
|
Exponential Functions Form a Basis of a Vector Space Problem 590
Let $C[-1, 1]$ be the vector space over $\R$ of all continuous functions defined on the interval $[-1, 1]$. Let
\[V:=\{f(x)\in C[-1,1] \mid f(x)=a e^x+b e^{2x}+c e^{3x}, a, b, c\in \R\}\] be a subset in $C[-1, 1]$. (a) Prove that $V$ is a subspace of $C[-1, 1]$. (b) Prove that the set $B=\{e^x, e^{2x}, e^{3x}\}$ is a basis of $V$. (c) Prove that \[B’=\{e^x-2e^{3x}, e^x+e^{2x}+2e^{3x}, 3e^{2x}+e^{3x}\}\] is a basis for $V$.
Proof. (a) Prove that $V$ is a subspace of $C[-1, 1]$.
Note that each function in the subset $V$ is a linear combination of the functions $e^x, e^{2x}, e^{3x}$.
Namely, we have \[V=\Span\{e^x, e^{2x}, e^{3x}\}\] and we know that the span is always a subspace. Hence $V$ is a subspace of $C[-1,1]$.

(b) Prove that the set $B=\{e^x, e^{2x}, e^{3x}\}$ is a basis of $V$.
We noted in part (a) that $V=\Span(B)$. So it suffices to show that $B$ is linearly independent.
Consider the linear combination \[c_1e^x+c_2 e^{2x}+c_3 e^{3x}=\theta(x),\] where $\theta(x)$ is the zero function (the zero vector in $V$). Taking the derivative, we get \[c_1e^x+2c_2 e^{2x}+3c_3 e^{3x}=\theta(x).\] Taking the derivative again, we obtain \[c_1e^x+4c_2 e^{2x}+9c_3 e^{3x}=\theta(x).\]
Evaluating at $x=0$, we obtain the system of linear equations
\begin{align*} c_1+c_2+c_3&=0\\ c_1+2c_2+3c_3&=0\\ c_1+4c_2+9c_3&=0. \end{align*}
We reduce the augmented matrix for this system as follows:
\begin{align*}
\left[\begin{array}{rrr|r}
1 & 1 & 1 & 0 \\
1 &2 & 3 & 0 \\
1 & 4 & 9 & 0
\end{array} \right] \xrightarrow[R_3-R_1]{R_2-R_1}
\left[\begin{array}{rrr|r}
1 & 1 & 1 & 0 \\
0 &1 & 2 & 0 \\
0 & 3 & 8 & 0
\end{array} \right] \xrightarrow[R_3-3R_2]{R_1-R_2}\\[6pt] \left[\begin{array}{rrr|r}
1 & 0 & -1 & 0 \\
0 &1 & 2 & 0 \\
0 & 0 & 2 & 0
\end{array} \right] \xrightarrow{\frac{1}{2}R_3}
\left[\begin{array}{rrr|r}
1 & 0 & -1 & 0 \\
0 &1 & 2 & 0 \\
0 & 0 & 1 & 0
\end{array} \right] \xrightarrow[R_2-2R_3]{R_1+R_3}
\left[\begin{array}{rrr|r}
1 & 0 & 0 & 0 \\
0 &1 & 0 & 0 \\
0 & 0 & 1 & 0
\end{array} \right].
\end{align*}
It follows that the solution of the system is $c_1=c_2=c_3=0$.
Hence the set $B$ is linearly independent, and thus $B$ is a basis for $V$.
Another approach.
Alternatively, we can show that the coefficient matrix is nonsingular by using the Vandermonde determinant formula as follows.
Observe that the coefficient matrix of the system is a Vandermonde matrix: \[A:=\begin{bmatrix} 1 & 1 & 1 \\ 1 &2 &3 \\ 1^2 & 2^2 & 3^2 \end{bmatrix}.\] The Vandermonde determinant formula yields that \[\det(A)=(3-1)(3-2)(2-1)=2\neq 0.\] Hence the coefficient matrix $A$ is nonsingular. Thus we obtain the solution $c_1=c_2=c_3=0$.

(c) Prove that $B’=\{e^x-2e^{3x}, e^x+e^{2x}+2e^{3x}, 3e^{2x}+e^{3x}\}$ is a basis for $V$.
We consider the coordinate vectors of vectors in $B’$ with respect to the basis $B$.
The coordinate vectors with respect to basis $B$ are \[[e^x-2e^{3x}]_B=\begin{bmatrix} 1 \\ 0 \\ -2 \end{bmatrix}, [e^x+e^{2x}+2e^{3x}]_B=\begin{bmatrix} 1 \\ 1 \\ 2 \end{bmatrix}, [3e^{2x}+e^{3x}]_B=\begin{bmatrix} 0 \\ 3 \\ 1 \end{bmatrix}.\] Let $\mathbf{v}_1, \mathbf{v}_2, \mathbf{v}_3$ be these vectors and let $T=\{\mathbf{v}_1, \mathbf{v}_2, \mathbf{v}_3\}$. Then we know that $B’$ is a basis for $V$ if and only if $T$ is a basis for $\R^3$.
We claim that $T$ is linearly independent.
Consider the matrix whose column vectors are $\mathbf{v}_1, \mathbf{v}_2, \mathbf{v}_3$: \begin{align*} \begin{bmatrix} 1 & 1 & 0 \\ 0 &1 &3 \\ -2 & 2 & 1 \end{bmatrix} \xrightarrow{R_3+2R_1} \begin{bmatrix} 1 & 1 & 0 \\ 0 &1 &3 \\ 0 & 4 & 1 \end{bmatrix} \xrightarrow[R_3-4R_2]{R_1-R_2}\\[6pt] \begin{bmatrix} 1 & 0 & -3 \\ 0 &1 &3 \\ 0 & 0 & -11 \end{bmatrix} \xrightarrow{-\frac{1}{11}R_3} \begin{bmatrix} 1 & 0 & -3 \\ 0 &1 &3 \\ 0 & 0 & 1 \end{bmatrix} \xrightarrow[R_2-3R_3]{R_1+3R_3} \begin{bmatrix} 1 & 0 & 0 \\ 0 &1 &0 \\ 0 & 0 & 1 \end{bmatrix}. \end{align*}
Thus, the matrix is nonsingular and hence the column vectors $\mathbf{v}_1, \mathbf{v}_2, \mathbf{v}_3$ are linearly independent.
As $T$ consists of three linearly independent vectors in the three-dimensional vector space $\R^3$, we conclude that $T$ is a basis for $\R^3$.
Therefore, by the correspondence of the coordinates, we see that $B’$ is a basis for $V$.
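As a quick numerical check of ours, both key matrices in this problem are indeed nonsingular:

# Determinants of the Vandermonde matrix from part (b) and the coordinate matrix from part (c).
import numpy as np

V = np.array([[1, 1, 1],
              [1, 2, 3],
              [1, 4, 9]])
T = np.array([[1, 1, 0],
              [0, 1, 3],
              [-2, 2, 1]])
print(np.linalg.det(V))  # approximately 2
print(np.linalg.det(T))  # approximately -11; both nonzero, so both matrices are nonsingular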
Related Question.
If you know the Wronskian, then you may use the Wronskian to prove that the exponential functions $e^x, e^{2x}, e^{3x}$ are linearly independent.
See the post
Using the Wronskian for Exponential Functions, Determine Whether the Set is Linearly Independent for the details.
Try the next more general question.
The solution is given in the post
Exponential Functions are Linearly Independent
|
Chemical Equations (YAY!)
Given the combustion reaction $x_1CH_4 + x_2O_2 \rightarrow x_3CO_2 +x_4H_2O$, what are the values of $\vec{x}$?
Using linear algebra and assigning each element to a ROW, we can reduce this apparently difficult problem to a simple matrix (1)
Therefore, $\vec{x} = x_4 \begin{bmatrix} \frac{1}{2}\\ 1\\ \frac{1}{2}\\ 1 \end{bmatrix}$. Now, chemists are afraid of fractions, so we choose an $x_4$ such that every entry is a whole number, so why not $x_4 = 2$? That gives $\vec{x} = (1, 2, 1, 2)$, i.e. $CH_4 + 2O_2 \rightarrow CO_2 + 2H_2O$.
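For readers who want to see the computation, here is a sketch of ours of the same balancing done with SymPy's nullspace routine; the matrix has one row per element (C, H, O), with the product coefficients entered with a minus sign:

# Balance CH4 + O2 -> CO2 + H2O by computing the null space of the element matrix.
from sympy import Matrix

M = Matrix([[1, 0, -1,  0],   # carbon:   x1 - x3 = 0
            [4, 0,  0, -2],   # hydrogen: 4*x1 - 2*x4 = 0
            [0, 2, -2, -1]])  # oxygen:   2*x2 - 2*x3 - x4 = 0
v = M.nullspace()[0]          # Matrix([[1/2], [1], [1/2], [1]])
print(2 * v)                  # Matrix([[1], [2], [1], [2]]): CH4 + 2 O2 -> CO2 + 2 H2O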
Network Flow
I apologize that no picture will be accompanying this one because I have no idea how to make a diagram of that complexity in LaTeX and I don't really feel like learning all that right now…so! Here's the situation
Note: All positive integers are flow in, all negative integers are flow out.
At point A: $x_5 + x_1 - 300 + x_2 = 0$
At point B: $x_2 + 200 - 700 - x_3 = 0$
At point C: $x_3 + x_4 - 200 + x_1 = 0$
Total: $x_5 + x_4 + 200 = 200 + 300 + 700$
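As a sketch of ours (with the diagram missing, this simply transcribes the four equations above), the flow balance can be handed to SymPy:

# Solve the network-flow balance equations symbolically.
from sympy import symbols, linsolve

x1, x2, x3, x4, x5 = symbols('x1:6')
eqs = [x5 + x1 - 300 + x2,                 # point A
       x2 + 200 - 700 - x3,                # point B
       x3 + x4 - 200 + x1,                 # point C
       x5 + x4 + 200 - (200 + 300 + 700)]  # total
print(linsolve(eqs, [x1, x2, x3, x4, x5]))
# {(-x3 - 500, x3 + 500, x3, 700, 300)}: x4 = 700, x5 = 300, one free parameter |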
Take: $M$ a Riemannian manifold, ${X_0}\in M$, $N_{X_0}$ a submanifold of $M$ going through ${X_0}$, and $Z \in N_{X_0}$ in a neighborhood of ${X_0}$.
At ${X_0} \in N_{X_0}$, we consider the orthogonal splitting of the tangent space: $T_{X_0} M=T_{X_0} N_{X_0} \oplus H$. The coordinates of $Z$ can be written $(z,F(z))$. More precisely, we have:
$$F(z)^a=-\frac{1}{2}h(0)_{kl}^a z^kz^l +O(||z||^3)$$
where $h$ is the second fundamental form of $N_{X_0}$ in $M$ at ${X_0}$. $F$ represents the local equation of the submanifold $N_{X_0}$ going through ${X_0}$, in the tangent space at ${X_0}$. Am I right so far?
=======
Now we assume that there is an isometric group action on $M$. Thus $M$ is foliated by submanifolds $N_X$ (the orbits) which we index by $X$. We choose the points $X$ lying on a geodesic crossing all orbits orthogonally.
We consider ${X_0}$ and work in the tangent space at ${X_0}$. We take a submanifold $N_X$ going through $X$, a point in a neighborhood of $X_0$, and we take $Z \in N_X$ such that $Z$ is in a neighborhood of ${X}$.
My questions are:
Can we say something about the coordinates of $Z$ in $T_{X_0} M=T_{X_0}N_{X_0} \oplus H$? Could we write something like $(z,x+F(z,x))$ where $x$ are the coordinates of $X$? What would be $F$? Can we have a Taylor expansion of $F$ similar to the one above, but in terms of $O(|(z,x)|^n)$?
Is there a theory on Riemannian foliations dealing with these questions? I couldn't find anything but abstract theorems.
Many thanks in advance for your help! |
Is there any way to theoretically, by the use of mathematics, to calculate the time taken to brute-force RSA keys?
Even classically, this is not as easy as you seem to imply.
RSA is based on the hardness of the integer factorization problem. The fastest classical algorithm known that solves this problem is the General Number Field Sieve (GNFS), and it solves integer factorization in subexponential time, or $\exp((\sqrt[3]{64/9}+o(1)) (\log N)^{1/3} (\log \log N)^{2/3})$ where $N$ is the modulus, and $\log$ refers to the natural logarithm.
However, we can't easily calculate an exact running time using this formula. Asymptotic running times hide constant values, which can vary wildly depending on the algorithm. Additionally, they don't tell us anything about what kind of parallelization may be possible.
GNFS happens to be a very complex algorithm. It involves a significant number of parameters. There are four different "stages" to the algorithm. Some stages can be made to take more time, in order to make the other stages faster. Additionally, some stages are easily parallelizable, while others are not, or only to a limited degree. In other words, coming up with the right set of parameters in order to minimize the total time to break an RSA modulus is very far from trivial. Let alone somewhat accurately estimate how much time it will take.
So unfortunately there's no easy answer to this question. However, you can get an idea by looking at some numbers that have been successfully factored.
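To get a rough sense of scale (our illustration; it ignores the $o(1)$ term and all constant factors, so it is only good for comparing key sizes), one can evaluate the GNFS formula directly:

# Relative GNFS cost for a 2048-bit vs a 1024-bit modulus, constants ignored.
from math import exp, log

def gnfs_cost(bits):
    n = bits * log(2)  # log N for a modulus of the given bit length
    return exp((64 / 9) ** (1 / 3) * n ** (1 / 3) * log(n) ** (2 / 3))

print(gnfs_cost(2048) / gnfs_cost(1024))  # on the order of 10^9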
How would one calculate the time to brute-force RSA keys using a quantum computer?
This is easier to calculate but at the same time impossible. Integer factorization can be done using Shor's algorithm on a quantum computer. This algorithm runs in polynomial time, $O((\log N)^2 (\log \log N) (\log \log \log N))$. From this paper you can see that the number of logical qubits required are approximately $2 \log N$, and the number of logical quantum gates $4(\log N)^3$.
As mentioned in the paper, these logical qubits and quantum gates would require many physical qubits and quantum gates to actually implement. It's still not certain whether quantum computation is even possible on a large scale, due to issues with quantum decoherence. So again, I'm afraid there's no answer to this question. How could you possibly estimate the running time of an algorithm on a computer that doesn't yet exist? |
Note that the polynomial $x^3-2$ is irreducible over $\Q$ by Eisenstein’s criterion (with prime $p=2$).This implies that if $\alpha$ is any root of $x^3-2$, then the degree of the field extension $\Q(\alpha)$ over $\Q$ is $3$:\[[\Q(\alpha) : \Q]=3. \tag{*}\]
Seeking a contradiction, assume that $x^3-2$ is reducible over $\Q(i)$.Then $x^3-2$ has a root in $\Q(i)$ as it is a reducible degree $3$ polynomial. So let us call the root $\alpha \in \Q(i)$.
Then $\Q(\alpha)$ is a subfield of $\Q(i)$ and thus we have\[2=[\Q(i) :\Q]=[\Q(i): \Q(\alpha)][\Q(\alpha):\Q]\geq 3\]by (*). Hence we have reached a contradiction.As a result, $x^3-2$ is irreducible over $\Q(i)$.
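As a quick sanity check (ours, not part of the proof), one can ask SymPy to factor the polynomial over both fields; assuming factor's extension keyword behaves as documented, both calls should return $x^3-2$ unchanged:

# x^3 - 2 should remain irreducible over Q and over Q(i).
from sympy import symbols, factor, I

x = symbols('x')
print(factor(x**3 - 2))               # over Q:    x**3 - 2
print(factor(x**3 - 2, extension=I))  # over Q(i): x**3 - 2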
Application of Field Extension to Linear Combination
Consider the cubic polynomial $f(x)=x^3-x+1$ in $\Q[x]$. Let $\alpha$ be any real root of $f(x)$. Then prove that $\sqrt{2}$ can not be written as a linear combination of $1, \alpha, \alpha^2$ with coefficients in $\Q$. Proof. We first prove that the polynomial […]

$x^3-\sqrt{2}$ is Irreducible Over the Field $\Q(\sqrt{2})$
Show that the polynomial $x^3-\sqrt{2}$ is irreducible over the field $\Q(\sqrt{2})$. Hint. Consider the field extensions $\Q(\sqrt{2})$ and $\Q(\sqrt[6]{2})$. Proof. Let $\sqrt[6]{2}$ denote the positive real $6$-th root of $2$. Then since $x^6-2$ is […]

Example of an Infinite Algebraic Extension
Find an example of an infinite algebraic extension over the field of rational numbers $\Q$ other than the algebraic closure $\bar{\Q}$ of $\Q$ in $\C$. Definition (Algebraic Element, Algebraic Extension). Let $F$ be a field and let $E$ be an extension of […]

Galois Group of the Polynomial $x^p-2$
Let $p \in \Z$ be a prime number. Then describe the elements of the Galois group of the polynomial $x^p-2$. Solution. The roots of the polynomial $x^p-2$ are \[ \sqrt[p]{2}\zeta^k, k=0,1, \dots, p-1\] where $\sqrt[p]{2}$ is a real $p$-th root of $2$ and $\zeta$ […] |
This article is all about the basics of probability. There are two interpretations of a probability, but the difference only matters when we consider inference:
Frequency
Degree of belief

Axioms of Probability
A function \(P\) which assigns a value \(P(A)\) to every event \(A\) is a probability measure or probability distribution if it satisfies the following three axioms.
1. \(P(A) \geq 0\) for all \(A\)
2. \(P(\Omega) = 1\)
3. If \(A_1, A_2, \dots\) are disjoint then \(P(\bigcup_{i=1}^{\infty} A_i) = \sum_{i=1}^{\infty} P(A_i)\)
These axioms give rise to the following five properties.
1. \(P(\emptyset) = 0\)
2. \(A \subset B \Rightarrow P(A) \leq P(B)\)
3. \(0 \leq P(A) \leq 1\)
4. \(P(A^\mathsf{c}) = 1 - P(A)\)
5. \(A \cap B = \emptyset \Rightarrow P(A \cup B) = P(A) + P(B)\)

The Sample Space

The sample space, \(\Omega\), is the set of all possible outcomes, \(\omega\). Subsets of \(\Omega\) are events. The empty set \(\emptyset\) contains no elements.

Example – Tossing a coin
Toss a coin once: \(\Omega = \{H, T\}\)
Toss a coin twice: \(\Omega = \{HH, HT, TH, TT\}\)
Then the event that the first toss is heads is \(A = \{HH, HT\}\)
Set Operations – Complement, Union and Intersection

Complement

Given an event \(A\), the complement of \(A\) is \(A^\mathsf{c} = \{\omega \in \Omega : \omega \notin A\}\).
Union

The union of two sets \(A\) and \(B\), \(A \cup B\), is the set of the outcomes which are in either \(A\), or in \(B\), or in both.
Intersection

The intersection of two sets \(A\) and \(B\), \(A \cap B\), is the set of the outcomes which are in both \(A\) and \(B\).

Difference Set
The difference set is the set of outcomes in one set which are not in the other: \(A \setminus B = \{\omega : \omega \in A, \omega \notin B\}\).

Subsets
If every element of \(A\) is contained in \(B\) then \(A\) is a subset of \(B\): \(A \subset B\), or equivalently, \(B \supset A\).

Counting elements
If \(A\) is a finite set, then \(|A|\) denotes the number of elements in \(A\).
Indicator function

An indicator function can be defined: \(I_A(\omega) = 1\) if \(\omega \in A\), and \(I_A(\omega) = 0\) otherwise.

Disjoint events
Two events \(A\) and \(B\) are disjoint or mutually exclusive if \(A \cap B = \emptyset\) (the empty set), i.e. there are no outcomes in both \(A\) and \(B\). More generally, \(A_1, A_2, \dots\) are disjoint if \(A_i \cap A_j = \emptyset\) whenever \(i \neq j\).

Example – intervals of the real line
The intervals \([0,1), [1,2), [2,3), \dots\) are disjoint.
The intervals \([0,2)\) and \([1,3)\) are not disjoint. For example, \(1.5\) lies in both.

Partitions
A partition of the sample space is a set of disjoint events \(A_1, A_2, \dots\) such that \(\bigcup_{i=1}^{\infty} A_i = \Omega\).

Monotone increasing and monotone decreasing sequences
A sequence of events \(A_1, A_2, \dots\) is monotone increasing if \(A_1 \subset A_2 \subset \dots\). Here we define \(\lim_{n\to\infty} A_n = \bigcup_{i=1}^{\infty} A_i\) and write \(A_n \rightarrow A\).
Similarly, a sequence of events \(A_1, A_2, \dots\) is monotone decreasing if \(A_1 \supset A_2 \supset \dots\). Here we define \(\lim_{n\to\infty} A_n = \bigcap_{i=1}^{\infty} A_i\). Again we write \(A_n \rightarrow A\).
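As a small illustration of ours, these set operations map directly onto Python's built-in sets:

# Set operations on a toy sample space (one roll of a die).
omega = {1, 2, 3, 4, 5, 6}
A = {2, 4, 6}   # "even"
B = {4, 5, 6}   # "at least 4"

print(omega - A)   # complement of A: {1, 3, 5}
print(A | B)       # union: {2, 4, 5, 6}
print(A & B)       # intersection: {4, 6}
print(A - B)       # difference set: {2}
indicator = lambda w, E: 1 if w in E else 0
print(indicator(4, A), indicator(3, A))   # 1 0
print(A & (omega - A) == set())           # A and its complement are disjoint: True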
Hello.
I’ve started this blog to use it as a sort of notebook.
My plan is to learn about things which interest me, and then to take notes here. The idea is that it will help me to consolidate what I learn, and it will serve as a reference. Hopefully someone else will get some use from it too. |
1. Introduction to bearings
2. Bearings and direction word problems
3. Angle of elevation and depression
Theorems that are useful:
Pythagorean Theorem: \(a^{2} + b^{2} = c^{2}\)
Trig ratios: \(\sin \theta = \frac{O}{H}\), \(\cos \theta = \frac{A}{H}\), \(\tan \theta = \frac{O}{A}\)
Law of sines: \(\frac{a}{\sin A} = \frac{b}{\sin B} = \frac{c}{\sin C}\)
Law of cosines: \(c^{2} = a^{2} + b^{2} - 2ab \cos C\)
Charlie leaves home for a bike ride, heading 040°T for 5km.
A camping group made a return journey from their base camp. From the camp, they first travelled 120°T for 3km. Then they travelled 210°T for 9km. Determine the direction and distance they need to travel if they want to return to the base camp now.
Melody and April go to the same school. Melody's home is 3.5km with a bearing of S16°W from school whilst April's home is 2.4km with a bearing of N42°E from school. How far away are their homes from each other?
Radar X detected an earthquake N55°E of it. 16km due east of Radar X, Radar Y detected the same earthquake N14°W of it.
A plane is sighted by Tom and Mary at bearings 028°T and 012°T respectively. If they are 2km away from each other, how high is the plane?
Consider the following diagram.
Find the distance between P and Q.
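As a sketch of ours of the vector method these problems call for: a leg with bearing t degrees T and distance d contributes the displacement d(sin t, cos t) in (east, north) coordinates, so for the camping-group problem the legs can simply be summed:

# Solve the camping-group return journey with displacement vectors.
from math import sin, cos, atan2, radians, degrees, hypot

legs = [(120, 3), (210, 9)]   # (bearing in degrees T, distance in km)
e = sum(d * sin(radians(b)) for b, d in legs)   # total easting
n = sum(d * cos(radians(b)) for b, d in legs)   # total northing
dist = hypot(e, n)                       # distance back to base camp, ~9.49 km
bearing = degrees(atan2(-e, -n)) % 360   # direction of the return leg, ~012 degrees T
print(dist, bearing) |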
April 27th, 2014, 11:02 AM, #1 (Newbie)
Set Theory

I'm reading Jech's book. Can someone solve exercises 1.6, 1.7 and 2.12, help me to understand Lemmas 3.6-3.10 and 5.2, and Theorems 3.11, 4.5 and 4.8 (the Baire category theorem)?
April 28th, 2014, 11:00 AM, #2 (Newbie)
($\emptyset \in S$ and (for each $x \in S$) $x \cup \{x\} \in S$)
We call a set $S$ with the above property inductive.
A set $T$ is transitive if $x \in T$ implies $x \subset T$.
Exercise 1.6. If $X$ is inductive, then $\{x \in X : x$ is transitive and every nonempty $z \subset x$ has an $\in$-minimal element$\}$ is inductive ($t$ is $\in$-minimal in $z$ if there is no $s \in z$ such that $s \in t$).
Last edited by raul14; April 28th, 2014 at 11:04 AM.
May 22nd, 2014, 12:27 PM, #3 (Senior Member)
Theorem 4.5. Every perfect set has the cardinality of $\mathbb{R}$.
Proof. Given a perfect set $P$, we want to find a one-to-one function $F$ from $\{0, 1\}^\omega$ into $P$.
$\{0, 1\}^\omega$ is equipotent to $\{0, 2\}^\omega$: given a sequence of 0s and 1s, you just map it to the sequence with 2s instead of 1s.
$C$ (the Cantor set) is equipotent to $\{0, 2\}^\omega$: given a real of the form $\sum_{n=1}^{\infty}\frac{a_n}{3^n}$, where each $a_n$ is 0 or 2, map it to the sequence with corresponding 0s and 2s; so the cardinality of $C$ is $2^{\aleph_0}$.
Therefore the cardinality of $\mathbb{R}$ is $\geq 2^{\aleph_0}$, because $C$ is a subset of the reals.
As the set $\mathbb{Q}$ is dense in the reals, every real number $r$ is equal to $\sup\{q \in \mathbb{Q} : q < r\}$, and because $\mathbb{Q}$ is countable, it follows that the cardinality of $\mathbb{R}$ is $\leq$ the cardinality of the power set of $\mathbb{Q}$, which is $2^{\aleph_0}$.
I don't understand the last paragraph. Help?
By Cantor–Bernstein, the cardinality of $\mathbb{R}$ is $2^{\aleph_0}$.
So $\mathbb{R}$ is equipotent to $C$, and to $\{0, 1\}^\omega$.
|
Yes:
imho geometrically the most interesting one is obtained as a quotient $SO(5)/SO(4)$ where the Poisson structure on $SO(5)$ is not the so-called standard one, but one determined by an element in the maximal torus (sometimes they are called twisted). This is the Poisson analogue of what is mentioned in Quantum symmetry groups of noncommutative spheres - Varilly, Joseph C. Commun.Math.Phys. 221 (2001) 511-523 (where only the quantum counterpart is developed), i.e. the Poisson version of the Connes–Landi noncommutative 4-sphere. As I mentioned in the comment above, $SO(4)$ is not a Poisson-Lie subgroup of $SO(5)$ but only a coisotropic subgroup.
The symplectic foliation is very interesting. In fact you have a level function which is a Casimir, so that leaves are contained in the 3-dimensional spheres $t= const.$ (0-dim leaves when $t=0,1$). Inside such spheres 2-dimensional leaves correspond to the usual description of the 3-sphere as two solid tori glued together. Of course the rank drops down to zero in two copies of $S^1$.
I sort of got the impression this is the only way you obtain a Poisson homogeneous structure on $S^4$ starting from a compact Poisson-Lie group.
There is another interesting Poisson structure on $S^4$ coming from Poisson-Lie groups, not homogeneous one but as a double coset. There the symplectic foliation consists only of two leaves: a 0-dimensional one and a $4$--dimensional symplectomorphic to the standard one on $C^2$ (so a sort of "Poisson compactification"). But since I contributed to this maybe it is interesting only for me...
ADDED
About the notion of Poisson homogeneous spaces $G/H$ of a Poisson-Lie group $G$ one may consider:
1) $H$ is a Poisson-Lie subgroup of $G$ (def. of Chari-Pressley, indeed);
2) the projection $G \to G/H$ is a Poisson map;
3) the action of $G$ on $G/H$ is both a homogeneous action and a Poisson action.
We have $1)\Rightarrow 2) \Rightarrow 3)$ but none of the arrows is reversible. $2)$ is equivalent to $H$ being a coisotropic subgroup of $G$ and is characterized by the fact that there exists a point in $G/H$ in which the Poisson bivector vanishes. Remark that in general the trivial $\pi=0$ Poisson structure on $G/H$ is not necessarily Poisson homogeneous, as you seem to assume, unless the Poisson-Lie structure on the whole $G$ is trivial. |
Here is the answer to Question 2. It may probably be simplified.
Denote $y=3-x$, then we rewrite your identity as $$\binom{y+K-2}K=\frac{(y-1)y(y+1)\dots (y+K-2)}{K!}=c_0\binom{y}0+c_1\binom{y}1+\dots+c_K\binom{y}K,$$where $$c_p=p!\sum_{n=p}^K(-K)^{n-p}\frac1{n!}\sum\limits_{ \begin{subarray}{c} k_1+\dotsb+k_{n}=K \\ k_i \geq 1 \end{subarray}}\sigma_p(k_1,\dotsc,k_{n}) \prod\limits_{i=1}^n \dfrac{k_i^{k_i-2}}{(k_i-1)!}.$$On the other hand, by Vandermonde--Chu identity we have $$\binom{y+K-2}K=\sum_{i=2}^{K}\binom{y}i\binom{K-2}{K-i},$$so your identity is equivalent to the formula$$\sum_{n=p}^K(-K)^{n-p}\frac{K!}{n!}\sum\limits_{ \begin{subarray}{c} k_1+\dotsb+k_{n}=K \\ k_i \geq 1 \end{subarray}}\sigma_p(k_1,\dotsc,k_{n}) \prod\limits_{i=1}^n \dfrac{k_i^{k_i-2}}{(k_i-1)!}=\frac{K!}{p!}\binom{K-2}{K-p},$$I multiplied both parts by $K!/p!$. Note that$$\frac{K!}{n!}\sum\limits_{ \begin{subarray}{c} k_1+\dotsb+k_{n}=K \\ k_i \geq 1 \end{subarray}}\sigma_p(k_1,\dotsc,k_{n}) \prod\limits_{i=1}^n \dfrac{k_i^{k_i-2}}{(k_i-1)!}$$is a number of the trees $T$ on $\{0,1,\dots,K\}$ such that degree of 0 equals $n$ and $p$ vertices in different components of $T\setminus\{0\}$ are marked. Indeed, if these components $A_1,\dots,A_n$ are enumerated (this corresponds to the multiple $n!$) and $i$-th component $A_i$ has $k_i$ vertices, then we have $\frac{K!}{k_1!\dots k_n!}$ ways to choose $A_i$, $\sigma_p(k_1,\dotsc,k_{n}) $ ways to mark $p$ vertices in different components, $k_i^{k_i-1}$ ways to make a tree on $A_i$ and choose a vertex in $A_i$ joined with 0.
Note that each (out of $\binom{K}p$ sets) set of $p$ marked vertices makes the same contribution to the sum. So, we may suppose that the marked set is $\{1,2,\dots,p\}$ and we have to prove that the sum of $(-K)^{n-p}$ over admissible trees (where the tree $T$ is admissible if $1,2,\dots,p$ are in different components of $T\setminus \{0\}$) equals $\frac1{\binom{K}p}\frac{K!}{p!}\binom{K-2}{K-p}=(p-1)p\dots (K-2)$.
We start to prove this from the cases $p=0$ and $p=1$, where the restriction that $1,2,\dots,p$ are in different components of $T\setminus \{0\}$ disappears. Then the sum of $z_0^{n-1}z_1^{d_1-1}\dots z_K^{d_K-1}$, $d_i=\deg(i)$, over all trees on $\{0,\dots,K\}$ equals, as is well known and easy to prove, $(z_0+\dots+z_K)^{K-1}$. Substituting $z_0=-K$, $z_1=\dots=z_K=1$ we get the result.
Now we deal with the more involved case $p\geqslant 2$. Denote $K=p+m$ and consider the variables $z_0,z_1,\dots,z_p,z_{p+1},\dots$ (infinitely many for simplicity of notations). Denote $s=z_0+z_1+\dots$, write $\sigma_i$ for the $i$-th elementary symmetric polynomial of $z_{p+1},z_{p+2},\dots$. Denote $\varphi_0=1$, $\varphi_m=s\varphi_{m-1}+(p-1)p\dots (p+m-2)\sigma_m$ for $m\geqslant 1$. I claim that the sum of $z_0^{n-p}z_1^{d_1-1}\dots z_{p+m}^{d_{p+m}-1}$ over all admissible trees equals $\varphi_m(z_0,z_1,\dots,z_{p+m},0,0,\dots)$.
Note that this implies our claim, as follows from the substitution $z_0=-K=-p-m,z_1=\dots=z_{p+m}=1$.
The proof is by induction on $m$. The base case $m=0$ is clear. For the induction step, look at the coefficients of any specific monomial $z_0^{n-p}z_1^{d_1-1}\dots z_{p+m}^{d_{p+m}-1}$. Consider two cases:
1) $d_i=1$ for a certain index $i\in \{p+1,\dots,p+m\}$, without loss of generality $i=p+m$. This corresponds to the case when $p+m$ has degree 1, such a vertex may be joined with any of other vertices, and removing corresponding edge we get a tree (it remains admissible) on $\{0,1,\dots,K-1\}$. This corresponds to the summand $s\varphi_{m-1}$: namely, $z_j\varphi_{m-1}$ corresponds to the edge between $p+m$ and $j$; $j=0,1,\dots,p+m-1$.
2) $d_{p+1},\dots,d_{p+m}$ are greater than 1. Then they are all equal to 2, since the degree of the whole monomial equals $m$. In this case there are $p(p+1)\dots (p+m-1)$ admissible trees (well, they are all admissible for such a choice of degrees and we may either apply the above formula for all trees, or prove it by induction, or as you wish). It remains to prove that the coefficient of $z_{p+1}\dots z_{p+m}$ in the function $\varphi_m$ equals $p(p+1)\dots (p+m-1)$. Since $\varphi_m=s\varphi_{m-1}+(p-1)p\dots (p+m-2)\sigma_m$, it is equivalent to proving that the coefficient of $z_{p+1}\dots z_{p+m}$ in $s\varphi_{m-1}$ equals $p(p+1)\dots (p+m-1)-(p-1)p\dots (p+m-2)=mp(p+1)\dots(p+m-2)$. We should take some $z_j$, $p+1\leqslant j\leqslant p+m$, from the multiple $s=\sum z_i$, and for each choice of $j$ we have a coefficient of $z_j^{-1}\cdot z_{p+1}\dots z_{p+m}$ in $\varphi_{m-1}$ equal to $p(p+1)\dots(p+m-2)$ - by induction (base $m-1=0$ is clear). |
Advanced Tutorial (geared toward state-space models)
This tutorial covers more or less the same topics as the basic tutorial (filtering, smoothing, and parameter estimation of state-space models), but in greater detail.
Defining state-space models
We consider a state-space model of the form:
\[\begin{aligned}
X_0 & \sim N(0, 1), \\
X_t & = X_{t-1} + f(X_{t-1}) + \sigma_X U_t, \qquad U_t \sim N(0, 1), \\
Y_t & = X_t + \sigma_Y V_t, \qquad V_t \sim N(0, 1),
\end{aligned}\]
where function \(f\) is defined as follows: \(f(x) = \tau_0 - \tau_1 * \exp( \tau_2 * x)\). This model comes from Population Ecology; there \(X_t\) stands for the logarithm of the population size of a given species. This model may be defined as follows.
[2]:
%matplotlib inline
import warnings; warnings.simplefilter('ignore')  # hide warnings
# the usual imports
from matplotlib import pyplot as plt
import seaborn as sb
import numpy as np
# imports from the package
import particles
from particles import state_space_models as ssm
from particles import distributions as dists

class ThetaLogistic(ssm.StateSpaceModel):
    """ Theta-Logistic state-space model (used in Ecology). """
    default_params = {'tau0': .15, 'tau1': .12, 'tau2': .1,
                      'sigmaX': 0.47, 'sigmaY': 0.39}

    def PX0(self):  # Distribution of X_0
        return dists.Normal()

    def f(self, x):
        return (x + self.tau0 - self.tau1 * np.exp(self.tau2 * x))

    def PX(self, t, xp):  # Distribution of X_t given X_{t-1} = xp (p=past)
        return dists.Normal(loc=self.f(xp), scale=self.sigmaX)

    def PY(self, t, xp, x):  # Distribution of Y_t given X_t=x, and X_{t-1}=xp
        return dists.Normal(loc=x, scale=self.sigmaY)
This is most similar to what we did in the previous tutorial (for stochastic volatility models): methods PX0, PX and PY return objects defined in module distributions. (See the documentation of that module for a list of available distributions.)
The only novelty is that we defined (as a class attribute) the dictionary default_params, which provides default values for each parameter. When it is defined, each parameter that is not set explicitly when instantiating (calling) ThetaLogistic is replaced by its default value:
[3]:
my_ssm = ThetaLogistic()  # use default values for all parameters
x, y = my_ssm.simulate(100)
plt.style.use('ggplot')
plt.plot(y)
plt.xlabel('t')
plt.ylabel('data')
[3]:
Text(0, 0.5, 'data')
“Bogus parameters” (parameters that do not appear in PX0, PX and PY) are simply ignored:
[4]:
just_for_fun = ThetaLogistic(tau2=0.3, bogus=92.) # ok
This behaviour may look surprising, but it will allow us to define prior distributions that involve hyper-parameters.
Automatic definition of FeynmanKac objects
We have seen in the previous tutorial how to run a bootstrap filter: we first define a Bootstrap object, and then pass it to SMC.
[5]:
fk_boot = ssm.Bootstrap(ssm=my_ssm, data=y)
my_alg = particles.SMC(fk=fk_boot, N=100)
my_alg.run()
In fact, ssm.Bootstrap is a subclass of FeynmanKac, the base class for objects that represent “Feynman-Kac models” (covered in Chapters 5 and 10 of the book). To make things simple, a Feynman-Kac model is a “recipe” for our SMC algorithms; in particular, it tells us:
how to sample each particle \(X_t^n\) at time \(t\), given their ancestors \(X_{t-1}^n\);
how to reweight each particle \(X_t^n\) at time \(t\).
The bootstrap filter is a particular “recipe”, where:
we sample the particles \(X_t^n\) according to the state transition of the model; in our case a \(N(f(x_{t-1}),\sigma_X^2)\) distribution.
we reweight the particles according to the likelihood of the model; here the density of \(N(x_t,\sigma_Y^2)\) at point \(y_t\).
The class ssm.Bootstrap defines this recipe automatically from the supplied state-space model and data.
The bootstrap filter is not the only available “recipe”. We may want to run a guided filter, where the particles are simulated according to user-chosen proposal kernels. Such proposal kernels may be defined by adding methods proposal and proposal0 to our StateSpaceModel class:
[6]:
class ThetaLogistic_with_prop(ThetaLogistic):
    def proposal0(self, data):
        return self.PX0()

    def proposal(self, t, xp, data):
        prec_prior = 1. / self.sigmaX**2
        prec_lik = 1. / self.sigmaY**2
        var = 1. / (prec_prior + prec_lik)
        mu = var * (prec_prior * self.f(xp) + prec_lik * data[t])
        return dists.Normal(loc=mu, scale=np.sqrt(var))

my_better_ssm = ThetaLogistic_with_prop()
In this particular case, we implemented the “optimal” proposal, that is, the distribution of \(X_t\) given \(X_{t-1}\) and \(Y_t\). (Check that this is indeed the case; it is a simple exercise!) (For simplicity, the proposal at time 0 is simply the distribution of \(X_0\), so this one is not optimal.)
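For reference, here is the one-line derivation (ours, not in the original tutorial). Multiplying the transition density by the observation density gives
\[ p(x_t \mid x_{t-1}, y_t) \propto \exp\left\{-\frac{(x_t - f(x_{t-1}))^2}{2\sigma_X^2} - \frac{(y_t - x_t)^2}{2\sigma_Y^2}\right\}, \]
a Gaussian with precision \(\sigma_X^{-2} + \sigma_Y^{-2}\) and mean \(\frac{\sigma_X^{-2} f(x_{t-1}) + \sigma_Y^{-2} y_t}{\sigma_X^{-2} + \sigma_Y^{-2}}\); these are exactly the var and mu computed in the proposal method above.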
Now we may define our guided Feynman-Kac model:
[7]:
fk_guided = ssm.GuidedPF(ssm=my_better_ssm, data=y)
An APF (auxiliary particle filter) may be implemented in the same way: for this, we must also define method logeta, which computes the auxiliary function used in the resampling step; see the documentation and the end of Chapter 10 of the book.
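As a sketch of ours (assuming the logeta(self, t, x, data) signature and the ssm.AuxiliaryPF class behave as described in the package documentation), a natural auxiliary function for this model is the predictive density of the next observation given the current state:

class ThetaLogistic_APF(ThetaLogistic_with_prop):
    def logeta(self, t, x, data):
        # log p(y_{t+1} | x_t = x): for this model, a N(f(x), sigmaX^2 + sigmaY^2)
        # density evaluated at the next data point
        law = dists.Normal(loc=self.f(x),
                           scale=np.sqrt(self.sigmaX**2 + self.sigmaY**2))
        return law.logpdf(data[t + 1])

fk_apf = ssm.AuxiliaryPF(ssm=ThetaLogistic_APF(), data=y)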
Running a particle filter
Here is the signature of class SMC:
[8]:
alg = particles.SMC(fk=fk_guided, N=100, seed=None, ESSrmin=0.5,
                    resampling='systematic', store_history=False,
                    compute_moments=False, online_smoothing=None,
                    verbose=False)
Apart from fk (which expects a FeynmanKac object), all the other arguments are optional. Here is what they do:
N: the number of particles
seed: value used to initialise the pseudo-random generator before the particle filter is run (if None, the algorithm is not seeded)
resampling: which resampling scheme to use (possible choices: 'multinomial', 'residual', 'stratified', 'systematic' and 'ssp')
ESSrmin: the particle filter resamples at each iteration where ESS / N is below this threshold; set it to 1. (resp. 0.) to resample every time (resp. to never resample)
The remaining arguments (store_history, compute_moments and online_smoothing) will be explained in the following sections.
Once we have created an SMC object, we may run it, either step by step, or in one go. For instance:
[9]:
next(alg)  # processes data-point y_0
next(alg)  # processes data-point y_1
for _ in range(8):
    next(alg)  # processes data-points y_2 to y_9
# alg.run()  # would process all the remaining data-points
At any time, the object alg has the following attributes:
alg.t: index of next iteration
alg.X: the N current particles \(X_t^n\); typically a (N,) or (N,d) numpy array
alg.W: the N normalised weights \(W_t^n\) (a (N,) numpy array)
alg.Xp: the N particles at the previous iteration, \(X_{t-1}^n\)
alg.A: the N ancestor variables: A[3] = 12 means that the parent of \(X_t^3\) was \(X_{t-1}^{12}\).
alg.summaries: various summaries collected at each iteration.
For instance, let’s plot a weighted histogram of the particles.
[10]:
plt.hist(alg.X, 20, weights=alg.W);
Object alg.summaries contains various lists of quantities collected at each iteration, such as:
alg.summaries.ESSs: the ESS (effective sample size) at each iteration
alg.summaries.rs_flags: whether or not resampling was triggered at each step
alg.summaries.logLts: estimates of the log-likelihood of the data \(y_{0:t}\)
All this and more is explained in the documentation of the collectors module. Let’s plot the ESS and the log-likelihood:
[11]:
plt.plot(alg.summaries.ESSs)
plt.xlabel('t')
plt.ylabel('ESS')
[11]:
Text(0, 0.5, 'ESS')
[12]:
plt.plot(alg.summaries.logLts)
plt.xlabel('t')
plt.ylabel('log-likelihood')
[12]:
Text(0, 0.5, 'log-likelihood')
Running many particle filters in one go
Function multiSMC accepts the same arguments as SMC plus the following extra arguments:
nruns: number of runs
nprocs: if >0, number of CPU cores to use; if <=0, number of cores not to use; i.e. nprocs=0 means use all cores
out_func: a function that is applied to each resulting particle filter (see below).
To explain how exactly multiSMC works, let’s try to compare the bootstrap and guided filters for the theta-logistic model we defined at the beginning of this tutorial:
[13]:
outf = lambda pf: pf.logLt
results = particles.multiSMC(fk={'boot': fk_boot, 'guid': fk_guided},
                             nruns=20, nprocs=1, out_func=outf)
The command above runs 40 particle algorithms (on a single core): 20 bootstrap filters, and 20 guided filters. The output, results, is a list of 40 dictionaries; each dictionary contains the following (key, value) pairs:
'fk': either 'boot' or 'guid' (according to whether a bootstrap or guided filter has been run)
'run': a run indicator (between 0 and 19)
'output': the result of outf(pf) where pf is the SMC object that was run. (If outf is set to None, then the SMC object is returned.)
The rationale behind function outf is that SMC objects may take up a lot of memory in certain cases (especially if you set store_history=True, see the section on smoothing below), so we may want to save only some results of interest rather than the complete object itself. Here the output is simply the estimate of the log-likelihood of the (complete) data computed by each particle filter. Let’s check whether the guided filter provides lower-variance estimates than the bootstrap filter.
[14]:
sb.boxplot(x=[r['fk'] for r in results], y=[r['output'] for r in results])
[14]:
<matplotlib.axes._subplots.AxesSubplot at 0x7fae78e0aba8>
This is indeed the case. To understand this line of code, you must be a bit familiar with list comprehensions.
More generally, function multiSMC may be used to run multiple SMC algorithms, while varying any possible arguments; for more details, see the documentation of multiSMC and of the module particles.utils.
Summaries, on-line smoothing
We have said that alg.summaries (where alg is a SMC object) contains various lists that collect information at each iteration (such as the ESS and the log-likelihood estimates). The following options (of class SMC) produce extra summaries:
moments: if set to True, the weighted, component-wise mean and variance of the particles are computed and stored in a list of dictionaries (with keys 'mean' and 'var'), called alg.summaries.moments.
online_smoothing: may be set to None (no on-line smoothing), 'naive' (standard forward smoothing), or 'ON2' (the \(O(N^2)\) version, which is expensive).
For more details on on-line smoothing, see the documentation of module particles.collectors.
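For example, naive on-line smoothing is switched on simply by passing the corresponding option when the SMC object is created (a minimal sketch; where exactly the collected estimates end up inside the summaries object should be checked in the collectors documentation):

alg_os = particles.SMC(fk=fk_guided, N=100, online_smoothing='naive')
alg_os.run()
# the on-line smoothing estimates are then collected in alg_os.summaries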
Let’s compute the moments:
[15]:
alg_with_mom = particles.SMC(fk=fk_guided, N=100, moments=True)
alg_with_mom.run()
plt.plot([m['mean'] for m in alg_with_mom.summaries.moments],
         label='filtered mean')
plt.plot(y, label='data')
plt.legend()
[15]:
<matplotlib.legend.Legend at 0x7fae78c72160>
Off-line smoothing
Off-line smoothing is the task of approximating, at some final time \(T\) (i.e. when we have stopped acquiring data), the distribution of all the states, \(X_{0:T}\), given the full data, \(Y_{0:T}\).
To run a particular off-line smoothing algorithm, one must first run a particle filter, and save its history:
[16]:
alg = particles.SMC(fk=fk_guided, N=100, store_history=True)
alg.run()
Now alg has a hist attribute, which is a ParticleHistory object. Basically, alg.hist recorded, at each time \(t\):
the N particles \(X_t^n\)
their weights \(W_t^n\)
the N ancestor variables
Smoothing algorithms are implemented as methods of class ParticleHistory. For instance, the FFBS (forward filtering backward sampling) algorithm, which samples complete smoothing trajectories, may be called as follows:
[17]:
trajectories = alg.hist.backward_sampling(5, linear_cost=False)
plt.plot(trajectories)
[17]:
[<matplotlib.lines.Line2D at 0x7fae78bf1e48>, <matplotlib.lines.Line2D at 0x7fae78bf1f98>, <matplotlib.lines.Line2D at 0x7fae78bf9128>, <matplotlib.lines.Line2D at 0x7fae78bf9278>, <matplotlib.lines.Line2D at 0x7fae78bf93c8>]
The output of backward_sampling is a list of 100 arrays: trajectories[t][m] is the \(t\)-component of trajectory \(m\). (If you want to turn it into a numpy array, simply do: np.array(trajectories).)
Option linear_cost determines whether we use the standard, \(O(N^2)\) version of FFBS (where generating a single trajectory costs \(O(N)\)), or the \(O(N)\) version which relies on rejection. The latter algorithm requires us to specify an upper bound for the transition density of \(X_t | X_{t-1}\); this may be done by defining a method upper_bound_log_pt(self, t) in the considered state-space model, as in the cell below.
[18]:
class ThetaLogistic_with_upper_bound(ThetaLogistic_with_prop):
    def upper_bound_log_pt(self, t):
        return -np.log(np.sqrt(2 * np.pi) * self.sigmaX)

my_ssm = ThetaLogistic_with_upper_bound()
alg = particles.SMC(fk=ssm.GuidedPF(ssm=my_ssm, data=y), N=100, store_history=True)
alg.run()
(more_trajectories, acc_rate) = alg.hist.backward_sampling(10, linear_cost=True, return_ar=True)
print('acceptance rate was %1.3f' % acc_rate)
plt.plot(more_trajectories)
acceptance rate was 0.413
[18]:
[<matplotlib.lines.Line2D at 0x7fae78b62dd8>, <matplotlib.lines.Line2D at 0x7fae78b62f28>, <matplotlib.lines.Line2D at 0x7fae78b6d0b8>, <matplotlib.lines.Line2D at 0x7fae78b6d208>, <matplotlib.lines.Line2D at 0x7fae78b6d358>, <matplotlib.lines.Line2D at 0x7fae78b6d4a8>, <matplotlib.lines.Line2D at 0x7fae78b6d5f8>, <matplotlib.lines.Line2D at 0x7fae78b4e828>, <matplotlib.lines.Line2D at 0x7fae78b6d860>, <matplotlib.lines.Line2D at 0x7fae78b6d9b0>]
Two-filter smoothing is also available. The difficulty with two-filter smoothing is that it requires designing an “information filter”, that is, a particle filter that computes recursively (backwards) the likelihood of the model. Since this is not trivial for the model considered here, we refer to Section 11.6 of the book and the documentation of package smoothing.
The equation $M \mathbf{x} = \mathbf{0}$ then yields a system of linear equations with $n$ equations and $n$ variables. To find a solution, consider the augmented matrix $\left[\begin{array}{c|c} M & \mathbf{0} \end{array}\right]$.
Because $M$ is upper-triangular, we can use back-substitution to solve. The bottom row of the augmented matrix gives the equation $m_{n, n} x_n = 0$. By assumption, $m_{n, n} \neq 0$ because it is a diagonal entry. Thus we must have that $x_n=0$.
Next, the second-to-last row in the augmented matrix gives the equation $m_{n-1, n-1} x_{n-1} + m_{n-1, n} x_n = 0$. Because $x_n = 0$ and $m_{n-1, n-1} \neq 0$, we must have that $x_{n-1} = 0$.
We continue working backward in this way to see that $x_i = 0$ for all $1 \leq i \leq n$. Thus $\mathbf{x} = \mathbf{0}$, and so the columns of $M$ must be linearly independent.
Does the conclusion hold if we do not assume that $M$ has non-zero diagonal entries?
If the diagonal entries of $M$ are allowed to be zero, then the columns might be linearly dependent. Consider the simple example \[M = \begin{bmatrix} 1 & 1 \\ 0 & 0 \end{bmatrix}.\] Its two columns are equal, hence linearly dependent.
It is known that the arguments of prime elements of $\mathbb{Z}[i]$ are equidistributed in $(0,2\pi)$ (by Theorem 5.36 of Iwaniec and Kowalski, or one of Kubilius' papers cited below). This theorem extends to any imaginary quadratic number ring $\mathcal{O}$ if one uses prime ideal numbers (especially for those number rings that are not UFDs) as mentioned for instance in Dias's paper (cited below).
One way to remove the reliance on prime ideal numbers is to restrict the set of prime ideals under consideration to those arising from rational primes splitting into principal prime ideals (so that the prime ideal numbers are associates to the generators of these prime ideals; moreover the generators themselves are prime numbers in $\mathcal{O}$).
Now I can finally pose my question: Can someone refer me to a reference/proof in the literature that states that the prime elements arising from a rational prime splitting into principal ideals are also equidistributed in $(0,\pi/U)$ where $U$ denotes the number of units in $\mathcal{O}$? As far as I understand, this should be provable by applying Fourier analysis to the Chebotarev corollary I stated above (instead of the full Prime Ideal Theorem) to pick off the primes in a given sector $(\alpha, \beta) \subset (0, 2\pi)$. This process should yield an asymptotic formula of the form $\frac{\beta - \alpha}{2\pi} \cdot \frac{1}{2h} \frac{x}{\log{x}}$. Am I right about this?
This is the first time I am posting something of this magnitude on mathoverflow, so I hope I phrased it properly enough to convey what I am asking. I will happily fix or clarify anything that may be a bit imprecise. Thank you!
Remark: I am asking about the existence of this theorem in the literature so that I don't have to unnecessarily reprove it for an article I am writing.
References:
1) D. Dias, The angular distribution of integral ideal numbers with a fixed norm in quadratic extensions, 2014, available at http://arxiv.org/pdf/1404.6271v1.pdf.
2) J. Kubilius, The distribution of Gaussian primes in sectors and contours,
Leningrad. Gos. Univ. U\v{c}. Zap, Cer. Mat. Nauk 137 (19) (1950) 40-52.
3) J. Kubilius, On some problems of the geometry of prime numbers,
Mat. Sbornik N.S. 31 (73) (1952) 507-542.
Meshing Considerations for Linear Static Problems
In this blog entry, we introduce meshing considerations for linear static finite element problems. This is the first in a series of postings on meshing techniques that is meant to provide guidance on how to approach the meshing of your finite element model with confidence.
About Finite Element Meshing
The finite element mesh serves two purposes. It first subdivides the CAD geometry being modeled into smaller pieces, or elements, over which it is possible to write a set of equations describing the solution to the governing equation. The mesh is also used to represent the solution field to the physics being solved. There is error associated with both the discretization of the geometry as well as discretization of the solution, so let’s examine these separately.

Geometric Discretization
Consider two very simple geometries, a block and a cylindrical shell:
There are four different types of elements that can be used to mesh these geometries — tetrahedra (tets), hexahedra (bricks), triangular prisms (prisms), and pyramid elements:
The grey circles represent the corners, or nodes, of the elements. Any combination of the above four elements can be used. (For 2D modeling, triangular and quadrilateral elements are available.) You can see by examination that both of these geometries could be meshed with as few as one brick element, two prisms, three pyramids, or five tets. As we learned in the previous blog post about solving linear static finite element problems, you will always arrive at a solution in one Newton-Raphson iteration. This is true for linear finite element problems regardless of the mesh. So let’s take a look at the simplest mesh we could put on these structures. Here’s a plot of a single brick element discretizing these geometries:
The mesh of the block is obviously a perfect representation of the true geometry, while the mesh of the cylindrical shell appears quite poor. In fact, it only appears that way when plotted. Elements are always plotted on the screen as having straight edges (this is done for graphics performance purposes) but COMSOL usually uses a second-order Lagrangian element to discretize the geometry (and the solution). So although the element edges always appear straight, they are internally represented as:
The white circles represent the midpoint nodes of these second-order element edges. That is, the lines defining the edges of the elements are represented by three points, and the edges approximated via a polynomial fit. There are also additional nodes at the center of each of these quadrilateral faces and in the center of the volume for these second-order Lagrangian hexahedral elements (omitted for clarity). Clearly, these elements do a better job of representing the curved boundaries of the elements. By default, COMSOL uses second-order elements for most physics; the two exceptions are problems involving chemical species transport and fluid flow. (Since those types of problems are convection dominated, the governing equations are better solved with first-order elements.) Higher order elements are also available, but the default second-order elements usually represent a good compromise between accuracy and computational requirements.
The figure below shows the geometric discretization error when meshing a 90° arc in terms of the number of first- and second-order elements:
The conclusion that can be made from this is that at least two second-order elements, or at least eight first-order elements, are needed to reduce the geometric discretization error below 1%. In fact, two second-order elements introduce a geometric discretization error of less than 0.1%. Finer meshes will more accurately represent the geometry, but will take more computational resources. This gives us a couple of good practical guidelines (a quick numerical check follows the list):

When using first-order elements, adjust the mesh such that there are at least eight elements per 90° arc
When using second-order elements, use two elements per 90° arc
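As a rough sanity check of the first rule, one can measure the geometric error of a straight (first-order) element edge on a unit circle by its maximum radial deviation, r(1 - cos φ), where φ is half the angle spanned by the element. This is a simplification (the figure above may use a different error measure), but it reproduces the guideline:

import numpy as np

# Maximum radial deviation of a straight chord from a circular arc of
# half-angle phi is r * (1 - cos(phi)); here r = 1.
for n_elem in (1, 2, 4, 8, 16):
    phi = np.deg2rad(90.0 / n_elem) / 2.0
    rel_error = 1.0 - np.cos(phi)
    print(f"{n_elem:2d} first-order elements per 90-degree arc: error ~ {rel_error:.3%}")

With eight elements per 90° arc this gives roughly 0.5%, in line with the sub-1% claim above.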
With these rules of thumb, we can now estimate the error we’ve introduced by meshing the geometry, and we can do so with some confidence before even having to solve the model. Now let’s turn our attention to how the mesh discretizes the solution.
Solution Discretization
The finite element mesh is also used to represent the solution field. The solution is computed at the node points, and a polynomial basis is used to interpolate this solution throughout the element to recover the total solution field. When solving linear finite elements problems, we are always able to compute a solution, no matter how coarse the mesh, but it may not be very accurate. To understand how mesh density affects solution accuracy, let’s look at a simple heat transfer problem on our previous geometries:
A temperature difference is applied to opposing faces of the block and the cylindrical shell. The thermal conductivity is constant, and all other surfaces are thermally insulated.
The solution for the case of the square block is that the temperature field varies linearly throughout the block. So for this model, a single, first-order, hexahedral element would actually be sufficient to compute the true solution. Of course, you will rarely be that lucky!
Therefore, let’s look at the slightly more challenging case. We’ve already seen that the cylindrical shell model will have geometric discretization error due to the curved edges, so we would start this model with at least two second-order (or eight first-order) elements along the curved edges. If you look closely at the above plot, you can see that the element edges on the boundaries are curved, while the interior elements have straight edges.
Along the axis of the cylinder, we can use a single element, since the temperature field will not vary in this direction. However, in the radial direction, from the inside to outside surface, we also need to have enough elements to discretize the solution. The analytic solution for this case goes as \ln(r) and can be compared against our finite element solution. Since the polynomial basis functions cannot perfectly describe the function, let’s plot the error in the finite element solution for both the linear and quadratic elements:
What you can see from this plot is that, as you increase the number of elements in the model, the error goes down. This is a fundamental property of the finite element method: the more elements, the more accurate your solution. Of course, there is also a cost associated with this. More computational resources, both time and hardware, are required to solve larger models. Now, you’ll notice that there are no units on the x-axis of this graph, and that is on purpose. The rate at which error decreases with respect to mesh refinement will be different for every model, and depends on many factors. The only important point is that it will always go down, monotonically, for well-posed problems.
You’ll also notice that, after a point, the error starts to go back up. This will happen once the individual mesh elements start to get very small, and we run into the limits of numerical precision. That is, the numbers in our model are smaller than can be accurately represented on a computer. This is an inherent problem with all computational methods, not just the finite element method; computers cannot represent all real numbers accurately. The point at which the error starts to go back up will be around \sqrt{2^{-52}} \approx 1.5 \times 10^{-8}, and to be on the safe and practical side, we often say that the minimal achievable error is 10^{-6}. Thus, if we integrate the scaled difference between the true and computed solution over the entire model, we say that the error, \epsilon, can typically be made as small as 10^{-6} in the limits of mesh refinement. In practice, the inputs to our models will anyway usually have much greater uncertainty than this. Also keep in mind that in general we don’t know the true solution; we will instead have to compare the computed solutions between different sized meshes and observe what values the solution converges toward.

Adaptive Mesh Refinement
I would like to close this blog post by introducing a better way to refine the mesh. The plots above show that error decreases as all of the elements in the model are made smaller. However, ideally you would only make the elements smaller in regions where the error is high. COMSOL addresses this via Adaptive Mesh Refinement, which first solves on an initial mesh, iteratively inserts elements into regions where the error is estimated to be high, and then re-solves the model. This can be continued for as many iterations as desired. This functionality works with triangular elements in 2D and tetrahedra in 3D. Let’s examine this in the context of a simple structural mechanics problem — a plate under uniaxial tension with a hole, as shown in the figure below. Using symmetry, only one quarter of the model needs to be solved.
The computed displacement fields, and the resultant stresses, are quite uniform some distance away from the hole, but vary strongly nearby. The figure below shows an initial mesh, as well as the results of several adaptive mesh refinement iterations, along with the computed stress field.
Note how COMSOL preferentially inserts smaller elements around the hole. This should not be a surprise, since we already know there will be higher stresses around the hole. In practice, it is recommended to use a combination of adaptive mesh refinement, engineering judgment, and experience to find an acceptable mesh.
Summary of Main Points

You will always want to perform a mesh refinement study and compare results on different sized meshes
Use your knowledge of geometric discretization error to choose as coarse a starting mesh as possible, and refine from there
You can use adaptive mesh refinement, or your own engineering judgment, to refine the mesh
Let $P=(p_1,\ldots,p_d)$ be a distribution on $[d]$. Given $n$ iid draws from $P$, we construct some empirical estimate $\hat P_n=(\hat p_{n,1},\ldots,\hat p_{n,d})$. Let us define the $r$-risk by $$ J_n^r = \sum_{i=1}^d |p_i-\hat p_{n,i}|^r. $$
It is known (see, e.g., Lemma 2.4 here) that when $\hat P_n$ is the maximum likelihood (i.e., empirical frequency) estimator and $r\ge2$, we have $\mathbb{E}[J_n^r]\le1/n$. In particular, the expected $r$-risk decays at a dimension-free rate.
It is also known that for $r=1$, the risk decays at a minimax rate of $\Theta(\sqrt{d/n})$.
Question: what is known for $1<r<2$?
If a Sylow Subgroup is Normal in a Normal Subgroup, it is a Normal Subgroup
Problem 226
Let $G$ be a finite group. Suppose that $p$ is a prime number that divides the order of $G$. Let $N$ be a normal subgroup of $G$ and let $P$ be a $p$-Sylow subgroup of $G$. Show that if $P$ is normal in $N$, then $P$ is a normal subgroup of $G$.
To prove the problem, let $g\in G$ be any element and try to show that both $P$ and $g^{-1}Pg$ are $p$-Sylow subgroups of $N$. Then use the fact that any two $p$-Sylow subgroups $Q_1, Q_2$ of a group $H$ are conjugate in $H$, applied with $Q_1=P$, $Q_2=g^{-1}Pg$, and $H=N$.
We use the following notations: $A < B$ means that $A$ is a subgroup of a group $B$, and $A \triangleleft B$ denotes that $A$ is a normal subgroup of $B$.
Proof.
For any $g \in G$, since $P < N$ and $N \triangleleft G$, we have\begin{align*}g^{-1}Pg < g^{-1}Ng=N.\end{align*}Since $P$ is a $p$-Sylow subgroup of $G$ contained in $N$, it is also a $p$-Sylow subgroup of $N$; and $g^{-1}Pg$, having the same order, is a $p$-Sylow subgroup of $N$ as well. By Sylow's theorem, any two $p$-Sylow subgroups in a group are conjugate. Since $P$ and $g^{-1}Pg$ are both $p$-Sylow subgroups in $N$, there exists $n \in N$ such that\[n^{-1}Pn=g^{-1}Pg.\]Since $n\in N$ and $P$ is normal in $N$, we have $n^{-1}Pn=P$. Hence we obtain\[P=g^{-1}Pg.\]Since $g\in G$ is arbitrary, this implies that $P$ is a normal subgroup of $G$.
The Order of a Conjugacy Class Divides the Order of the Group

Problem 455
Let $G$ be a finite group.
The centralizer of an element $a$ of $G$ is defined to be \[C_G(a)=\{g\in G \mid ga=ag\}.\]
A conjugacy class is a set of the form \[\Cl(a)=\{bab^{-1} \mid b\in G\}\] for some $a\in G$.

(a) Prove that the centralizer of an element $a$ of $G$ is a subgroup of the group $G$.
(b) Prove that the order (the number of elements) of every conjugacy class in $G$ divides the order of the group $G$.
Proof. (a) Prove that the centralizer of $a$ in $G$ is a subgroup of $G$.
Since the identity element $e$ of $G$ satisfies $ea=a=ae$, it is in the centralizer $C_G(a)$.
Hence $C_G(a)$ is not an empty set. We show that $C_G(a)$ is closed under multiplications and inverses.
Let $g, h \in C_G(a)$. Then we have
\begin{align*} (gh)a&=g(ha)\\ &=g(ah) && \text{since $h\in C_G(a)$}\\ &=(ga)h\\ &=(ag)h&& \text{since $g\in C_G(a)$}\\ &=a(gh). \end{align*} So $gh$ commutes with $a$ and thus $gh \in C_G(a)$. Thus $C_G(a)$ is closed under multiplications.
Let $g\in C_G(a)$. This means that we have $ga=ag$.
Multiplying by $g^{-1}$ on the left and on the right, we obtain \begin{align*} g^{-1}(ga)g^{-1}=g^{-1}(ag)g^{-1}, \end{align*} and thus we have \[ag^{-1}=g^{-1}a.\] This implies that $g^{-1}\in C_G(a)$, hence $C_G(a)$ is closed under inverses.
Therefore, $C_G(a)$ is a subgroup of $G$.
(b) Prove that the order of every conjugacy class in $G$ divides the order of $G$.
We give two proofs for part (b). The first one is a more direct proof and the second one uses the orbit-stabilizer theorem.
The First Proof of (b).
By part (a), the centralizer $C_G(a)$ is a subgroup of the finite group $G$.
Hence the set of left cosets $G/C_G(a)$ is a finite set, and its order divides the order of $G$ by Lagrange’s theorem.
We prove that there is a bijective map from $G/C_G(a)$ to $\Cl(a)$.
Define the map $\phi:G/C_G(a) \to \Cl(a)$ by \[\phi\left(\, gC_G(a) \,\right)=gag^{-1}.\]
We must show that it is well-defined.
For this, note that we have \begin{align*} gC_G(a)=hC_G(a) &\Leftrightarrow h^{-1}g\in C_G(a)\\ & \Leftrightarrow (h^{-1}g)a(h^{-1}g)^{-1}=a\\ & \Leftrightarrow gag^{-1}=hah^{-1}. \end{align*} This computation shows that the map $\phi$ is well-defined, and also that $\phi$ is injective. Since $\phi$ is surjective by the definition of $\Cl(a)$, it follows that $\phi$ is bijective. Thus, the two sets have the same order.
It yields that the order of $\Cl(a)$ divides the order of the finite group $G$.
The Second Proof of (b). Use the Orbit-Stabilizer Theorem
We now move on to the alternative proof.
Consider the action of the group $G$ on itself by conjugation: \[\psi:G\times G \to G, \quad (g,h)\mapsto g\cdot h=ghg^{-1}.\]
Then the orbit $\calO(a)$ of an element $a\in G$ under this action is
\[\calO(a)=\{ g\cdot a \mid g\in G\}=\{gag^{-1} \mid g\in G\}=\Cl(a).\]
Let $G_a$ be the stabilizer of $a$.
Then the orbit-stabilizer theorem for finite groups says that we have \begin{align*} |\Cl(a)|=|\calO(a)|=[G:G_a]=\frac{|G|}{|G_a|} \end{align*} and hence the order of $\Cl(a)$ divides the order of $G$.
Note that the stabilizer $G_a$ of $a$ is the centralizer $C_G(a)$ of $a$ since
\[G_a=\{g \in G \mid g\cdot a =a\}=\{g\in G \mid ga=ag\}=C_G(a).\]
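As a quick numerical sanity check of part (b) — not part of the original proof — one can enumerate the conjugacy classes of a small group such as $S_3$ and verify that each class size divides the group order:

from itertools import permutations

# Permutations of {0,1,2} as tuples g, where g[i] is the image of i.
def compose(g, h):                    # (g*h)(i) = g(h(i))
    return tuple(g[h[i]] for i in range(len(h)))

def inverse(g):
    inv = [0] * len(g)
    for i, gi in enumerate(g):
        inv[gi] = i
    return tuple(inv)

G = list(permutations(range(3)))      # the group S_3, |G| = 6
for a in G:
    cl = {compose(compose(g, a), inverse(g)) for g in G}   # the class Cl(a)
    assert len(G) % len(cl) == 0                           # |Cl(a)| divides |G|
    print(a, '-> class size', len(cl))

The class sizes found are 1, 2, and 3, all of which divide 6.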
1. Analogy
Let’s assume that we’d like to do some numerical processing in the base best suited to that purpose.
Let’s take an example: we’d like to divide an even number by $2$. It turns out that base 2 is the best suited base for doing that. Indeed, a simple right shift performs that operation.
We have the three following steps:

Change of basis (from decimal to binary)
We do the processing in that new base (shift)
Change of basis (from binary to decimal)

Example I
Let’s divide $208$ by $16$.
Let’s proceed with the steps discussed above:

Conversion of $208$ into base 2: 11010000
1st shift: 01101000
2nd shift: 00110100
3rd shift: 00011010
4th shift: 00001101
Conversion of 00001101 into the decimal base: $13$
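In Python, for instance, the whole computation is a single shift; this is just an illustration of the three steps above:

n = 208
print(bin(n))   # '0b11010000'
print(n >> 4)   # four right shifts = division by 2**4 = 16, giving 13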
2. Calculation
A diagonalizable matrix $A$ is decomposed as follows:
\[A = PDP^{-1} \tag{1}\]
In $(1)$, $P$ is the matrix of eigenvectors of $A$ and $D$ the diagonal matrix of eigenvalues of $A$.
In our analogy above, the calculation of $A\vec{x}$ would be the numerical processing. We then have in $(1)$, from right to left:

Conversion of $\vec{x}$ into a new basis, becoming $\vec{x}'$
Processing the linear transformation in the new basis: $D\vec{x}'=\vec{y}'$
Conversion of $\vec{y}'$ into the starting basis, becoming $\vec{y}$

Example II
Let $A=\left( \begin{smallmatrix} 2 & 1 \\ 3 & 4 \end{smallmatrix} \right)$. Let’s diagonalize $A$.
We determined the eigenvectors of $A$ in the dedicated chapter (let $\beta=1$).
Then $P=\left( \begin{smallmatrix} 1 & 1 \\ -1 & 3 \end{smallmatrix} \right)$ and $P^{-1}= \frac{1}{4}\left( \begin{smallmatrix} 3 & -1 \\ 1 & 1 \end{smallmatrix} \right).$ The matrix of the eigenvalues is : $D=\left( \begin{smallmatrix} 1 & 0 \\ 0 & 5 \end{smallmatrix} \right)$.
The diagonalization of $A$ is thus:
\[A = \left( \begin{smallmatrix} 1 & 1 \\ -1 & 3 \end{smallmatrix} \right) \left( \begin{smallmatrix} 1 & 0 \\ 0 & 5 \end{smallmatrix} \right) \cdot \frac{1}{4}\left( \begin{smallmatrix} 3 & -1 \\ 1 & 1 \end{smallmatrix} \right)\]
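A quick numerical check of this decomposition (a sketch using numpy, with the matrices of Example II):

import numpy as np

A = np.array([[2., 1.], [3., 4.]])
P = np.array([[1., 1.], [-1., 3.]])   # eigenvectors as columns
D = np.diag([1., 5.])                 # matching eigenvalues
P_inv = np.linalg.inv(P)              # equals (1/4) * [[3, -1], [1, 1]]

assert np.allclose(A, P @ D @ P_inv)  # A = P D P^{-1}

# Applying A to x via the three steps: convert, scale each axis, convert back
x = np.array([2., 1.])
x_prime = P_inv @ x       # change of basis
y_prime = D @ x_prime     # independent scaling of each component
y = P @ y_prime           # back to the starting basis
assert np.allclose(y, A @ x)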
3. But more concretely?
The point of the diagonalization is to perform the linear transformation in a better suited basis. This new basis is composed of the eigenvectors and turns out to be a new axis system.
This reduces the amount of calculation, since in the new basis, $\vec{y}'$ is obtained by multiplying the components of $\vec{x}'$ by a factor $\lambda_i$ on each axis, independently from one another. Figure 12.1 shows the linear transformation of Example II in the new basis: $D \underbrace{ \left( \begin{smallmatrix} v \\ w \end{smallmatrix} \right) }_{\vec{x}'} = \underbrace{\left( \begin{smallmatrix} 1v \\ 5w \end{smallmatrix} \right) }_{\vec{y}'}$.

Recapitulation
The diagonalization performs a change of basis. The eigenvectors become the new basis, provided they exist and form a basis.
A diagonalizable matrix $A$ is decomposed as follows:
\[A = PDP^{-1}\]
Each eigenvalue in $D$ must stand in the same column as the related eigenvector in $P$.
$D$ is a diagonal matrix. Since the elements outside of its diagonal equal zero, $D$ acts on each axis independently of the others, which reduces the amount of calculation.
Tagged: skew-symmetric matrix

Problem 593
We fix a nonzero vector $\mathbf{a}$ in $\R^3$ and define a map $T:\R^3\to \R^3$ by
\[T(\mathbf{v})=\mathbf{a}\times \mathbf{v}\] for all $\mathbf{v}\in \R^3$. Here the right-hand side is the cross product of $\mathbf{a}$ and $\mathbf{v}$.

(a) Prove that $T:\R^3\to \R^3$ is a linear transformation.
(b) Determine the eigenvalues and eigenvectors of $T$.

Problem 564
Let $A$ and $B$ be $n\times n$ skew-symmetric matrices. Namely $A^{\trans}=-A$ and $B^{\trans}=-B$.
(a) Prove that $A+B$ is skew-symmetric.
(b) Prove that $cA$ is skew-symmetric for any scalar $c$.
(c) Let $P$ be an $n\times m$ matrix. Prove that $P^{\trans}AP$ is skew-symmetric.
(d) Suppose that $A$ is real skew-symmetric. Prove that $iA$ is an Hermitian matrix.
(e) Prove that if $AB=-BA$, then $AB$ is a skew-symmetric matrix.
(f) Let $\mathbf{v}$ be an $n$-dimensional column vector. Prove that $\mathbf{v}^{\trans}A\mathbf{v}=0$.
(g) Suppose that $A$ is a real skew-symmetric matrix and $A^2\mathbf{v}=\mathbf{0}$ for some vector $\mathbf{v}\in \R^n$. Then prove that $A\mathbf{v}=\mathbf{0}$.

Quiz 8. Determine Subsets are Subspaces: Functions Taking Integer Values / Set of Skew-Symmetric Matrices

Problem 328

(a) Let $C[-1,1]$ be the vector space over $\R$ of all real-valued continuous functions defined on the interval $[-1, 1]$. Consider the subset $F$ of $C[-1, 1]$ defined by \[F=\{ f(x)\in C[-1, 1] \mid f(0) \text{ is an integer}\}.\] Prove or disprove that $F$ is a subspace of $C[-1, 1]$.
(b) Let $n$ be a positive integer. An $n\times n$ matrix $A$ is called skew-symmetric if $A^{\trans}=-A$. Let $M_{n\times n}$ be the vector space over $\R$ of all $n\times n$ real matrices. Consider the subset $W$ of $M_{n\times n}$ defined by \[W=\{A\in M_{n\times n} \mid A \text{ is skew-symmetric}\}.\] Prove or disprove that $W$ is a subspace of $M_{n\times n}$.

Problem 166
Let $V$ be the vector space of all $2\times 2$ matrices. Let $W$ be a subset of $V$ consisting of all $2\times 2$ skew-symmetric matrices. (Recall that a matrix $A$ is skew-symmetric if $A^{\trans}=-A$.)
(a) Prove that the subset $W$ is a subspace of $V$. (b) Find the dimension of $W$.
(The Ohio State University Linear Algebra Exam Problem)

Problem 143
Let $V$ be the vector space over $\R$ consisting of all $n\times n$ real matrices for some fixed integer $n$. Prove or disprove that the following subsets of $V$ are subspaces of $V$.
(a) The set $S$ consisting of all $n\times n$ symmetric matrices. (b) The set $T$ consisting of all $n \times n$ skew-symmetric matrices.
(c) The set $U$ consisting of all $n\times n$ nonsingular matrices.
Properties of FBK UFSDs after neutron and proton irradiation up to $6\times10^{15}$ neq/cm$^2$ / Mazza, S.M. (UC, Santa Cruz, Inst. Part. Phys.) ; Estrada, E. (UC, Santa Cruz, Inst. Part. Phys.) ; Galloway, Z. (UC, Santa Cruz, Inst. Part. Phys.) ; Gee, C. (UC, Santa Cruz, Inst. Part. Phys.) ; Goto, A. (UC, Santa Cruz, Inst. Part. Phys.) ; Luce, Z. (UC, Santa Cruz, Inst. Part. Phys.) ; McKinney-Martinez, F. (UC, Santa Cruz, Inst. Part. Phys.) ; Rodriguez, R. (UC, Santa Cruz, Inst. Part. Phys.) ; Sadrozinski, H.F.-W. (UC, Santa Cruz, Inst. Part. Phys.) ; Seiden, A. (UC, Santa Cruz, Inst. Part. Phys.) et al. The properties of 60-$\mu$m thick Ultra-Fast Silicon Detectors (UFSD) manufactured by Fondazione Bruno Kessler (FBK), Trento (Italy) were tested before and after irradiation with minimum ionizing particles (MIPs) from a $^{90}$Sr $\beta$-source. [...] arXiv:1804.05449. - 13 p. Preprint - Full text
Charge-collection efficiency of heavily irradiated silicon diodes operated with an increased free-carrier concentration and under forward bias / Mandić, I (Ljubljana U. ; Stefan Inst., Ljubljana) ; Cindro, V (Ljubljana U. ; Stefan Inst., Ljubljana) ; Kramberger, G (Ljubljana U. ; Stefan Inst., Ljubljana) ; Mikuž, M (Ljubljana U. ; Stefan Inst., Ljubljana) ; Zavrtanik, M (Ljubljana U. ; Stefan Inst., Ljubljana) The charge-collection efficiency of Si pad diodes irradiated with neutrons up to $8 \times 10^{15} \ \rm{n} \ cm^{-2}$ was measured using a $^{90}$Sr source at temperatures from -180 to -30°C. The measurements were made with diodes under forward and reverse bias. [...] 2004 - 12 p. - Published in : Nucl. Instrum. Methods Phys. Res., A 533 (2004) 442-453
Effect of electron injection on defect reactions in irradiated silicon containing boron, carbon, and oxygen / Makarenko, L F (Belarus State U.) ; Lastovskii, S B (Minsk, Inst. Phys.) ; Yakushevich, H S (Minsk, Inst. Phys.) ; Moll, M (CERN) ; Pintilie, I (Bucharest, Nat. Inst. Mat. Sci.) Comparative studies employing Deep Level Transient Spectroscopy and C-V measurements have been performed on recombination-enhanced reactions between defects of interstitial type in boron doped silicon diodes irradiated with alpha-particles. It has been shown that self-interstitial related defects which are immobile even at room temperatures can be activated by very low forward currents at liquid nitrogen temperatures. [...] 2018 - 7 p. - Published in : J. Appl. Phys. 123 (2018) 161576
Characterization of magnetic Czochralski silicon radiation detectors / Pellegrini, G (Barcelona, Inst. Microelectron.) ; Rafí, J M (Barcelona, Inst. Microelectron.) ; Ullán, M (Barcelona, Inst. Microelectron.) ; Lozano, M (Barcelona, Inst. Microelectron.) ; Fleta, C (Barcelona, Inst. Microelectron.) ; Campabadal, F (Barcelona, Inst. Microelectron.) Silicon wafers grown by the Magnetic Czochralski (MCZ) method have been processed in the form of pad diodes at Instituto de Microelectrònica de Barcelona (IMB-CNM) facilities. The n-type MCZ wafers were manufactured by Okmetic OYJ and they have a nominal resistivity of $1 \rm{k} \Omega cm$. [...] 2005 - 9 p. - Published in : Nucl. Instrum. Methods Phys. Res., A 548 (2005) 355-363
Silicon detectors: From radiation hard devices operating beyond LHC conditions to characterization of primary fourfold coordinated vacancy defects / Lazanu, I (Bucharest U.) ; Lazanu, S (Bucharest, Nat. Inst. Mat. Sci.) The physics potential at future hadron colliders such as the LHC and its upgrades in energy and luminosity, Super-LHC and Very-LHC respectively, as well as the requirements for detectors in the conditions of possible scenarios for radiation environments, are discussed in this contribution. Silicon detectors will be used extensively in experiments at these new facilities where they will be exposed to high fluences of fast hadrons. The principal obstacle to long-time operation arises from bulk displacement damage in silicon, which acts as an irreversible process in the material and leads to an increase of the leakage current of the detector, a decrease of the Signal/Noise ratio, and an increase of the effective carrier concentration. [...] 2005 - 9 p. - Published in : Rom. Rep. Phys.: 57 (2005) , no. 3, pp. 342-348 External link: RORPE
Numerical simulation of radiation damage effects in p-type and n-type FZ silicon detectors / Petasecca, M (Perugia U. ; INFN, Perugia) ; Moscatelli, F (Perugia U. ; INFN, Perugia ; IMM, Bologna) ; Passeri, D (Perugia U. ; INFN, Perugia) ; Pignatel, G U (Perugia U. ; INFN, Perugia) In the framework of the CERN-RD50 Collaboration, the adoption of p-type substrates has been proposed as a suitable means to improve the radiation hardness of silicon detectors up to fluences of $1 \times 10^{16} \rm{n}/cm^2$. In this work two numerical simulation models will be presented for p-type and n-type silicon detectors, respectively. [...] 2006 - 6 p. - Published in : IEEE Trans. Nucl. Sci. 53 (2006) 2971-2976
Technology development of p-type microstrip detectors with radiation hard p-spray isolation / Pellegrini, G (Barcelona, Inst. Microelectron.) ; Fleta, C (Barcelona, Inst. Microelectron.) ; Campabadal, F (Barcelona, Inst. Microelectron.) ; Díez, S (Barcelona, Inst. Microelectron.) ; Lozano, M (Barcelona, Inst. Microelectron.) ; Rafí, J M (Barcelona, Inst. Microelectron.) ; Ullán, M (Barcelona, Inst. Microelectron.) A technology for the fabrication of p-type microstrip silicon radiation detectors using p-spray implant isolation has been developed at CNM-IMB. The p-spray isolation has been optimized in order to withstand a gamma irradiation dose up to 50 Mrad (Si), which represents the ionization radiation dose expected in the middle region of the SCT-Atlas detector of the future Super-LHC during 10 years of operation. [...] 2006 - 6 p. - Published in : Nucl. Instrum. Methods Phys. Res., A 566 (2006) 360-365
Defect characterization in silicon particle detectors irradiated with Li ions / Scaringella, M (INFN, Florence ; U. Florence (main)) ; Menichelli, D (INFN, Florence ; U. Florence (main)) ; Candelori, A (INFN, Padua ; Padua U.) ; Rando, R (INFN, Padua ; Padua U.) ; Bruzzi, M (INFN, Florence ; U. Florence (main)) High Energy Physics experiments at future very high luminosity colliders will require ultra radiation-hard silicon detectors that can withstand fast hadron fluences up to $10^{16}$ cm$^{-2}$. In order to test the detectors radiation hardness in this fluence range, long irradiation times are required at the currently available proton irradiation facilities. [...] 2006 - 6 p. - Published in : IEEE Trans. Nucl. Sci. 53 (2006) 589-594
Let \( A \) be any real square matrix (not necessarily symmetric). Prove that: $$ (x'A x)^2 \leq (x'A A'x)(x'x) $$
The key point in proving this inequality is to recognize that \( x'A A'x \) is the squared norm of \( A'x \); the inequality is then the Cauchy–Schwarz inequality applied to the vectors \( A'x \) and \( x \).
Proof:
If \( x=0 \), then the inequality is trivial.
Suppose \( x \neq 0 \).
\( \frac{x'A x}{x'x} = \frac{(A'x)'x}{\| x \|^2} = \left(A'\frac{x}{\| x \|}\right)'\frac{x}{\| x \|} \)

Because \( \frac{x}{\| x \|} \) is a unit vector, \( A'\frac{x}{\| x \|} \) can be considered as a scaling and rotation of \( \frac{x}{\| x \|} \) by \( A' \). Thus, the resulting vector \( A'\frac{x}{\| x \|} \) has norm \( \alpha \) for some \( \alpha \geq 0 \). And \( \left(A'\frac{x}{\| x \|}\right)'\frac{x}{\| x \|}=\alpha \cos(\beta) \) for some \( -\pi \leq \beta \leq \pi \), where \( \beta \) is the angle between the vector before and after premultiplying by \( A' \).
Now:
\( ( \frac{x'A x}{x'x} )^2 \)
\(= ( (A'\frac{x}{\| x \|})'\frac{x}{\| x \|} )^2 \)
\( =\alpha^2 \cos^2(\beta) \)
\( \leq \alpha^2 \)
\(= (A'\frac{x}{\| x \|})'A'\frac{x}{\| x \|} \)
\(= \frac{(A'x)'A'x}{\| x \|^2} \)
\(= \frac{x'A A'x}{x'x} \)
Finally, multiplying both sides by \( (x'x)^2 \) completes the proof.
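As a complement (not part of the proof), the inequality is easy to check numerically on random instances:

import numpy as np

rng = np.random.default_rng(0)
for _ in range(1000):
    n = int(rng.integers(1, 6))
    A = rng.normal(size=(n, n))
    x = rng.normal(size=n)
    lhs = (x @ A @ x) ** 2
    rhs = (x @ A @ A.T @ x) * (x @ x)
    assert lhs <= rhs + 1e-9   # small tolerance for floating-point error
print('inequality verified on 1000 random cases')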
PDE Geometric Analysis seminar
The seminar will be held in room 901 of Van Vleck Hall on Mondays from 3:30pm - 4:30pm, unless indicated otherwise.
Seminar Schedule Spring 2016

date | speaker | title | host(s)
January 25 | Tianling Jin (HKUST and Caltech) | Holder gradient estimates for parabolic homogeneous p-Laplacian equations | Zlatos
February 1 | Russell Schwab (Michigan State University) | Neumann homogenization via integro-differential methods | Lin
February 8 | Jingrui Cheng (UW Madison) | Semi-geostrophic system with variable Coriolis parameter | Tran & Kim
February 15 | Paul Rabinowitz (UW Madison) | On A Double Well Potential System | Tran & Kim
February 22 | Hong Zhang (Brown) | On an elliptic equation arising from composite material | Kim
February 29 | Aaron Yip (Purdue University) | TBD | Tran
March 7 | Hiroyoshi Mitake (Hiroshima University) | Selection problem for fully nonlinear equations | Tran
March 15 | Nestor Guillen (UMass Amherst) | TBA | Lin
March 21 | (Spring Break) | |
March 28 | Ryan Denlinger (Courant Institute) | The propagation of chaos for a rarefied gas of hard spheres in vacuum | Lee
April 4 | | |
April 11 | | |
April 18 | | |
April 25 | Moon-Jin Kang (UT-Austin) | | Kim
May 2 | | |

Abstracts

Tianling Jin
Holder gradient estimates for parabolic homogeneous p-Laplacian equations
We prove interior Holder estimates for the spatial gradient of viscosity solutions to the parabolic homogeneous p-Laplacian equation $u_t=|\nabla u|^{2-p} \mathrm{div}(|\nabla u|^{p-2}\nabla u)$, where $1<p<\infty$. This equation arises from tug-of-war-like stochastic games with white noise. It can also be considered as the parabolic p-Laplacian equation in non-divergence form. This is joint work with Luis Silvestre.
Russell Schwab
Neumann homogenization via integro-differential methods
In this talk I will describe how one can use integro-differential methods to attack some Neumann homogenization problems-- that is, describing the effective behavior of solutions to equations with highly oscillatory Neumann data. I will focus on the case of linear periodic equations with a singular drift, which includes (with some regularity assumptions) divergence equations with non-co-normal oscillatory Neumann conditions. The analysis focuses on an induced integro-differential homogenization problem on the boundary of the domain. This is joint work with Nestor Guillen.
Jingrui Cheng
Semi-geostrophic system with variable Coriolis parameter.
The semi-geostrophic system (abbreviated as SG) is a model of large-scale atmospheric/ocean flows. Previous works about the SG system have been restricted to the case of constant Coriolis force, where we write the equation in "dual coordinates" and solve. This method does not apply for variable Coriolis parameter case. We develop a time-stepping procedure to overcome this difficulty and prove local existence and uniqueness of smooth solutions to SG system. This is joint work with Michael Cullen and Mikhail Feldman.
Hong Zhang
On an elliptic equation arising from composite material
I will present some recent results on second-order divergence type equations with piecewise constant coefficients. This problem arises in the study of composite materials with closely spaced interface boundaries, and the classical elliptic regularity theory is not applicable. In the 2D case, we show that any weak solution is piecewise smooth without restriction on the underlying domain where the equation is satisfied. This completely answers a question raised by Li and Vogelius (2000) in the 2D case. Joint work with Hongjie Dong.
Paul Rabinowitz
On A Double Well Potential System
We will discuss an elliptic system of partial differential equations of the form
\begin{equation} \label{*} \tag{*} -\Delta u + V_u(x,u) = 0,\;\;x \in \Omega = \R \times \mathcal{D}\subset \R^n, \;\;\mathcal{D} \text{ bounded} \subset \R^{n-1} \end{equation} \[\frac{\partial u}{\partial \nu} = 0 \;\;\text{on}\;\;\partial \Omega,\] with $u \in \R^m$,\; $\Omega$ a cylindrical domain in $\R^n$, and $\nu$ the outward pointing normal to $\partial \Omega$. Here $V$ is a double well potential with $V(x, a^{\pm})=0$ and $V(x,u)>0$ otherwise. When $n=1$ and $u \in \R^m$, \eqref{*} is a Hamiltonian system of ordinary differential equations. When $m=1$, it is a single PDE that arises as an Allen-Cahn model for phase transitions. We will discuss the existence of solutions of \eqref{*} that are heteroclinic from $a^{-}$ to $a^{+}$ or homoclinic to $a^{-}$, i.e. solutions that are of phase transition type.
This is joint work with Jaeyoung Byeon (KAIST) and Piero Montecchiari (Ancona).
Hiroyoshi Mitake
Selection problem for fully nonlinear equations
Recently, there was substantial progress on the selection problem for the ergodic problem for Hamilton-Jacobi equations, which had been open for almost 30 years. In the talk, I will first show a result on the convex Hamilton-Jacobi equation, then describe important problems which still remain. Next, I will mainly focus on a recent joint work with H. Ishii (Waseda U.) and H. V. Tran (U. Wisconsin-Madison), which is about the selection problem for fully nonlinear, degenerate elliptic partial differential equations. I will present a new variational approach for this problem.
CDS 212, Homework 7, Fall 2010
From MurrayWiki
J. Doyle
Issued: 9 Nov 2010
CDS 212, Fall 2010
Due: 18 Nov 2010

Problems

1. Show that <amsmath>E(s) = D+C(sI-A)^{-1}B</amsmath> has <amsmath>H_\infty</amsmath> norm <amsmath>< \gamma</amsmath> if the following LMI is satisfied: <amsmath> \left[\begin{array}{ccc} A^TP+PA& PB& C^T\\ B^TP& -\gamma^2 I& D^T\\ C& D& -I\end{array}\right]\leq 0,</amsmath> for some <amsmath>P>0.</amsmath> (A numerical feasibility sketch follows this list.)

2. Formulate the model fitting problem <amsmath> \min ||(G-\hat{G})||_{H_\infty}</amsmath> where <amsmath>\hat{G}=\hat{D} + \hat{C} (sI - \hat{A})^{-1}\hat{B}</amsmath> with <amsmath>\hat{A}</amsmath> and <amsmath>\hat{B}</amsmath> given and <amsmath>\hat{C}</amsmath> and <amsmath>\hat{D}</amsmath> to be optimized, as an LMI. Write a MATLAB/cvx code for this problem.

3. Consider the system <amsmath>G(s) =\frac{P(s)}{(s+0.1)}</amsmath> where <amsmath>P(s)</amsmath> is a 10th order Pade approximation to a 1 second delay. Calculate the Hankel singular values for this system (using balancmr). Output the truncated balanced truncations of orders 1:10 (note that balancmr can produce a set of output ss systems) and compare the norm of the error with the upper and lower bounds.

4. For <amsmath>G(s)</amsmath> as above calculate the optimal Hankel norm approximations. Note the Hankel singular values of the error system and comment. Note that the better error bound on the <amsmath>H_\infty</amsmath> norm requires a non-zero <amsmath>D</amsmath>-term but the hankelmr function does not output this. By examining the Nyquist plot of the error in an example, demonstrate that there exists such a <amsmath>D</amsmath>-term. Note how the pole positions vary with the order of the approximation.

5. Use cvx to examine improvements to the above <amsmath>H_\infty</amsmath> norm errors that can be achieved by optimizing the <amsmath>C</amsmath> and <amsmath>D</amsmath> terms with the <amsmath>A</amsmath> and <amsmath>B</amsmath> terms from the balanced and Hankel-norm approximants.
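As a sketch of how the LMI in Problem 1 can be checked numerically — using Python/cvxpy rather than MATLAB/cvx, and with a made-up test system that is not part of the assignment:

import cvxpy as cp
import numpy as np

# Hypothetical stable test system; its H-infinity norm is 1.5, so gamma = 2
# should make the LMI feasible.
A = np.array([[-1.0, 1.0], [0.0, -2.0]])
B = np.array([[1.0], [1.0]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.0]])
gamma = 2.0

n, m = B.shape
p = C.shape[0]

P = cp.Variable((n, n), symmetric=True)
lmi = cp.bmat([
    [A.T @ P + P @ A, P @ B,                 C.T],
    [B.T @ P,         -gamma**2 * np.eye(m), D.T],
    [C,               D,                     -np.eye(p)],
])
lmi_sym = 0.5 * (lmi + lmi.T)   # symmetrize for the solver (symmetric in exact arithmetic)
prob = cp.Problem(cp.Minimize(0),
                  [lmi_sym << 0, P >> 1e-6 * np.eye(n)])
prob.solve(solver=cp.SCS)
print(prob.status)   # 'optimal' means the LMI is feasible, i.e. the norm is < gamma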
Tagged: matrix

Problem 250
Let $\mathbf{u}$ and $\mathbf{v}$ be vectors in $\R^n$, and let $I$ be the $n \times n$ identity matrix. Suppose that the inner product of $\mathbf{u}$ and $\mathbf{v}$ satisfies
\[\mathbf{v}^{\trans}\mathbf{u}\neq -1.\] Define the matrix \[A=I+\mathbf{u}\mathbf{v}^{\trans}.\]
Prove that $A$ is invertible and the inverse matrix is given by the formula
\[A^{-1}=I-a\mathbf{u}\mathbf{v}^{\trans},\] where \[a=\frac{1}{1+\mathbf{v}^{\trans}\mathbf{u}}.\] This formula is known as the Sherman–Morrison formula.
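The formula is easy to verify numerically (a quick sanity check with random vectors, not part of the problem):

import numpy as np

rng = np.random.default_rng(1)
n = 4
u = rng.normal(size=(n, 1))
v = rng.normal(size=(n, 1))
assert (v.T @ u).item() != -1          # the hypothesis of the problem

A = np.eye(n) + u @ v.T
a = 1.0 / (1.0 + (v.T @ u).item())
A_inv = np.eye(n) - a * (u @ v.T)
assert np.allclose(A @ A_inv, np.eye(n))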
Problem 249

Suppose that the following matrix $A$ is the augmented matrix for a system of linear equations.
\[A= \left[\begin{array}{rrr|r} 1 & 2 & 3 & 4 \\ 2 &-1 & -2 & a^2 \\ -1 & -7 & -11 & a \end{array} \right],\] where $a$ is a real number. Determine all the values of $a$ so that the corresponding system is consistent.

Problem 248
We say that two $m\times n$ matrices are
row equivalent if one can be obtained from the other by a sequence of elementary row operations.
Let $A$ and $I$ be $2\times 2$ matrices defined as follows.
\[A=\begin{bmatrix} 1 & b\\ c& d \end{bmatrix}, \qquad I=\begin{bmatrix} 1 & 0\\ 0& 1 \end{bmatrix}.\] Prove that the matrix $A$ is row equivalent to the matrix $I$ if $d-cb \neq 0$.

Problem 222
Suppose that $n\times n$ matrices $A$ and $B$ are similar.
Then show that the nullity of $A$ is equal to the nullity of $B$.
In other words, the dimension of the null space (kernel) $\calN(A)$ of $A$ is the same as the dimension of the null space $\calN(B)$ of $B$.

Problem 218
For a real number $0\leq \theta \leq \pi$, we define the real $3\times 3$ matrix $A$ by
\[A=\begin{bmatrix} \cos\theta & -\sin\theta & 0 \\ \sin\theta &\cos\theta &0 \\ 0 & 0 & 1 \end{bmatrix}.\] (a) Find the determinant of the matrix $A$. (b) Show that $A$ is an orthogonal matrix.
(c) Find the eigenvalues of $A$.

Given Graphs of Characteristic Polynomial of Diagonalizable Matrices, Determine the Rank of Matrices

Problem 217
Let $A, B, C$ are $2\times 2$ diagonalizable matrices.
The graphs of characteristic polynomials of $A, B, C$ are shown below. The red graph is for $A$, the blue one for $B$, and the green one for $C$.
From this information, determine the rank of the matrices $A, B,$ and $C$.
Problem 216
Let
\[A=\begin{bmatrix} 1 & 3 & 3 \\ -3 &-5 &-3 \\ 3 & 3 & 1 \end{bmatrix} \text{ and } B=\begin{bmatrix} 2 & 4 & 3 \\ -4 &-6 &-3 \\ 3 & 3 & 1 \end{bmatrix}.\] For this problem, you may use the fact that both matrices have the same characteristic polynomial: \[p_A(\lambda)=p_B(\lambda)=-(\lambda-1)(\lambda+2)^2.\] (a) Find all eigenvectors of $A$. (b) Find all eigenvectors of $B$. (c) Which matrix $A$ or $B$ is diagonalizable? (d) Diagonalize the matrix stated in (c), i.e., find an invertible matrix $P$ and a diagonal matrix $D$ such that $A=PDP^{-1}$ or $B=PDP^{-1}$.
(Stanford University Linear Algebra Final Exam Problem)

Problem 200
Let
\[ A=\begin{bmatrix} 5 & 2 & -1 \\ 2 &2 &2 \\ -1 & 2 & 5 \end{bmatrix}.\]
Pick your favorite number $a$. Find the dimension of the null space of the matrix $A-aI$, where $I$ is the $3\times 3$ identity matrix.
Your score of this problem is equal to that dimension times five.
(The Ohio State University Linear Algebra Practice Problem)

Problem 194
Find the value(s) of $h$ for which the following set of vectors
\[\left \{ \mathbf{v}_1=\begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix}, \mathbf{v}_2=\begin{bmatrix} h \\ 1 \\ -h \end{bmatrix}, \mathbf{v}_3=\begin{bmatrix} 1 \\ 2h \\ 3h+1 \end{bmatrix}\right\}\] is linearly independent.
(Boston College, Linear Algebra Midterm Exam Sample Problem)

Problem 193
Let $A$ be a $3 \times 3$ matrix.
Let $\mathbf{x}, \mathbf{y}, \mathbf{z}$ are linearly independent $3$-dimensional vectors. Suppose that we have \[A\mathbf{x}=\begin{bmatrix} 1 \\ 0 \\ 1 \end{bmatrix}, A\mathbf{y}=\begin{bmatrix} 0 \\ 1 \\ 0 \end{bmatrix}, A\mathbf{z}=\begin{bmatrix} 1 \\ 1 \\ 1 \end{bmatrix}.\]
Then find the value of the determinant of the matrix $A$.
Recall that the extension degree of the cyclotomic field of $n$-th roots of unity is given by $\phi(n)$, the Euler totient function. Thus we have \[[\Q(\zeta_8):\Q]=\phi(8)=4.\]
Without loss of generality, we may assume that\[\zeta_8=e^{2 \pi i/8}=\frac{1}{\sqrt{2}}+\frac{1}{\sqrt{2}}i.\]
Then $i=\zeta_8^2 \in \Q(\zeta_8)$ and $\zeta_8+\zeta_8^7=\sqrt{2}\in \Q(\zeta_8)$. Thus, we have \[\Q(i, \sqrt{2}) \subset \Q(\zeta_8).\]
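These two membership claims are easy to check numerically (a quick sketch, not part of the proof):

import cmath

zeta8 = cmath.exp(2j * cmath.pi / 8)
print(zeta8 ** 2)           # ~ 1j, i.e. i
print(zeta8 + zeta8 ** 7)   # ~ 1.4142135..., i.e. sqrt(2)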
It suffices now to prove that $[\Q(i, \sqrt{2}):\Q]=4$. Note that we have $[\Q(i):\Q]=[\Q(\sqrt{2}):\Q]=2$. Since $\Q(\sqrt{2}) \subset \R$, we know that $i\not\in \Q(\sqrt{2})$. Thus, we have \begin{align*} [\Q(i, \sqrt{2}):\Q]=[\Q(\sqrt{2})(i):\Q(\sqrt{2})]\,[\Q(\sqrt{2}):\Q]=2\cdot 2=4. \end{align*}
A particle moves along the x-axis so that at time $t$ its position is given by $x(t) = t^3-6t^2+9t+11$. During what time intervals is the particle moving to the left? I know that we need the velocity for that, and we can get it by taking the derivative, but I don't know what to do after that. The velocity would then be $v(t) = 3t^2-12t+9$. How could I find the intervals?
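One way to finish (a short sketch, not part of the original question): factor the velocity and determine where it is negative,
\[v(t) = 3t^2 - 12t + 9 = 3(t-1)(t-3) < 0 \quad\Longleftrightarrow\quad 1 < t < 3,\]
so the particle moves to the left exactly on the interval $(1, 3)$.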
Fix $c\in\{0,1,\dots\}$, let $K\geq c$ be an integer, and define $z_K=K^{-\alpha}$ for some $\alpha\in(0,2)$.I believe I have numerically discovered that$$\sum_{n=0}^{K-c}\binom{K}{n}\binom{K}{n+c}z_K^{n+c/2} \sim \sum_{n=0}^K \binom{K}{n}^2 z_K^n \quad \text{ as } K\to\infty$$but cannot ...
So, the whole discussion is about some polynomial $p(A)$, for $A$ an $n\times n$ matrix with entries in $\mathbf{C}$, and eigenvalues $\lambda_1,\ldots, \lambda_k$.
Anyways, part (a) is talking about proving that $p(\lambda_1),\ldots, p(\lambda_k)$ are eigenvalues of $p(A)$. That's basically routine computation. No problem there. The next bit is to compute the dimension of the eigenspaces $E(p(A), p(\lambda_i))$.
Seems like this bit follows from the same argument. An eigenvector for $A$ is an eigenvector for $p(A)$, so the rest seems to follow.
Finally, the last part is to find the characteristic polynomial of $p(A)$. I guess this means in terms of the characteristic polynomial of $A$.
Well, we do know what the eigenvalues are...
The so-called Spectral Mapping Theorem tells us that the eigenvalues of $p(A)$ are exactly the $p(\lambda_i)$.
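To make that last step precise (a sketch over $\mathbf{C}$, where every matrix can be upper-triangularized): write the characteristic polynomial of $A$ with multiplicity as \[\chi_A(t) = \prod_{j=1}^{n}(t - \mu_j).\] Choosing $Q$ with $A = QTQ^{-1}$ and $T$ upper triangular with diagonal entries $\mu_1, \dots, \mu_n$, we get $p(A) = Q\,p(T)\,Q^{-1}$, and $p(T)$ is upper triangular with diagonal entries $p(\mu_j)$. Hence \[\chi_{p(A)}(t) = \prod_{j=1}^{n}\bigl(t - p(\mu_j)\bigr).\]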
Usually, by the time you start talking about complex numbers you consider the real numbers as a subset of them, since a and b are real in a + bi. But you could define it that way and call it a "standard form" like ax + by = c for linear equations :-) @Riker
"a + bi where a and b are integers" Complex numbers a + bi where a and b are integers are called Gaussian integers.
I was wondering if it is easier to factor in a non-UFD than it is to factor in a UFD. I can come up with arguments for that, but I also have arguments in the opposite direction. For instance: it should be easier to factor when there are more possibilities (multiple factorizations in a non-UFD...
Does anyone know if $T: V \to R^n$ is an inner product space isomorphism if $T(v) = (v)_S$, where $S$ is a basis for $V$? My book isn't saying so explicitly, but there was a theorem saying that an inner product isomorphism exists, and another theorem kind of suggesting that it should work.
@TobiasKildetoft Sorry, I meant that they should be equal (accidentally sent this before writing my answer. Writing it now)
Isn't there this theorem saying that if $v,w \in V$ ($V$ being an inner product space), then $||v|| = ||(v)_S||$? (where the left norm is defined as the norm in $V$ and the right norm is the euclidean norm) I thought that this would somehow result from isomorphism
@AlessandroCodenotti Actually, such a $f$ in fact needs to be surjective. Take any $y \in Y$; the maximal ideal of $k[Y]$ corresponding to that is $(Y_1 - y_1, \cdots, Y_n - y_n)$. The ideal corresponding to the subvariety $f^{-1}(y) \subset X$ in $k[X]$ is then nothing but $(f^* Y_1 - y_1, \cdots, f^* Y_n - y_n)$. If this is empty, weak Nullstellensatz kicks in to say that there are $g_1, \cdots, g_n \in k[X]$ such that $\sum_i (f^* Y_i - y_i)g_i = 1$.
Well, better to say that $(f^* Y_1 - y_1, \cdots, f^* Y_n - y_n)$ is the trivial ideal I guess. Hmm, I'm stuck again
$O(n)$ acts transitively on $S^{n-1}$ with stabilizer $O(n-1)$ at a point.
For any transitive $G$-action on a set $X$ with stabilizer $H$, $G/H \cong X$ set-theoretically. In this case, as the action is a smooth action by a Lie group, you can prove this set-theoretic bijection gives a diffeomorphism.
I have a (classical) scheduling problem using completion date variables, with a constraint like
$$ c_j = s_j + p_j $$
where \(s_j\) and \(c_j\) are variables representing the start date and the completion date of job \(j\), respectively, and \(p_j\) is a parameter representing the duration of job \(j\) (I skip the other constraints).
Although I understand this may not be the ideal formulation, I am trying to understand - for the sake of experimentation - whether it is possible to detect if a job \(j\) is running during a generic period \(t\) where
$$ \min\left(s_j\right) \leq t \leq \max\left(c_j\right). $$
I was thinking (so far with no success) about a binary variable \(u_{jt}\) equal to 1 if job \(j\) is being worked in period \(t\) and 0 otherwise. The rough idea is that I can use the variable \(u\) to represent the status of a resource over time, like a machine or a worker.
Is that possible, keeping the model linear?
@Marco posted in a comment that you could replace \(s_j\) with \(\sum_t t * x_{jt}\) where \(x_{jt}\) is 1 if job \(j\) starts at time \(t\), 0 otherwise. You would also want the constraint \(\sum_t x_{jt}=1 \ \forall j\). Your \(u_{jt}\) variable would then be defined by \(u_{jt}=\sum_{\tau=t-p_j+1}^t x_{j\tau}\).
You could also do this with the original \(s_j\) variables by introducing …
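Picking up the time-indexed idea from the comment above, here is a minimal sketch in Python with PuLP; the job data, horizon, objective, and the unit-capacity machine constraint are all made up for illustration, and PuLP is just one convenient modeling layer:

```python
import pulp

p = {1: 3, 2: 2}          # hypothetical processing times p_j
T = list(range(8))        # discretized time horizon
jobs = list(p)

prob = pulp.LpProblem("toy_schedule", pulp.LpMinimize)
x = pulp.LpVariable.dicts("x", (jobs, T), cat="Binary")  # job j starts at t
u = pulp.LpVariable.dicts("u", (jobs, T), cat="Binary")  # job j running at t

for j in jobs:
    prob += pulp.lpSum(x[j][t] for t in T) == 1          # start exactly once
    for t in range(len(T) - p[j] + 1, len(T)):
        prob += x[j][t] == 0                             # too late to finish
    for t in T:
        # u_jt = sum of starts within the last p_j periods (no big-M needed)
        prob += u[j][t] == pulp.lpSum(
            x[j][tau] for tau in T if t - p[j] + 1 <= tau <= t)

for t in T:                                              # e.g. one machine
    prob += pulp.lpSum(u[j][t] for j in jobs) <= 1

prob += pulp.lpSum(t * x[j][t] for j in jobs for t in T) # toy objective
prob.solve(pulp.PULP_CBC_CMD(msg=False))
```

Because the equality constraint defines \(u_{jt}\) directly from the start variables, the model stays linear with no big-M constants.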
An electrolyte solution is a solution that generally contains ions, atoms or molecules that have lost or gained electrons, and is electrically conductive. For this reason they are often called ionic solutions, however there are some cases where the electrolytes are not ions. For this discussion we will only consider solutions of ions. A basic principle of electrostatics is that opposite charges attract and like charges repel. It also takes a great deal of force to overcome this electrostatic attraction.
Introduction
The general form of Coulomb's law describes the force of attraction between charges:
\[F=k\frac{q_1q_2}{r^2}\]
However, we must make some changes to this physics formula to be able to use it for a solution of oppositely charged ions. In Coulomb's Law, the constant \(k=\frac{1}{4\pi\varepsilon_{0}}\), where \(\varepsilon_{0}\) is the permittivity of free space, such as in a vacuum. However, since we are looking at a solution, we must consider the effect that the medium (the solvent in this case) has on the electrostatic force, which is represented by the dielectric constant \(\varepsilon\):
\[F=\frac{q_{1}q_{2}}{4\pi\varepsilon_{0}\varepsilon r^{2}}\]
Polar substances such as water have a relatively high dielectric constant.
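As a quick numerical illustration of the effect of the dielectric constant (the ion pair, separation, and \(\varepsilon \approx 78\) for water are generic textbook values, not taken from this text):

```python
# Force between Na+ and Cl- at 0.5 nm, in vacuum vs. in water.
import math

e = 1.602e-19          # elementary charge, C
eps0 = 8.854e-12       # permittivity of free space, C^2 / (J m)
r = 0.5e-9             # ion separation, m

def coulomb_force(q1, q2, r, eps_r=1.0):
    """F = q1*q2 / (4*pi*eps0*eps_r*r^2); a negative sign means attraction."""
    return q1 * q2 / (4 * math.pi * eps0 * eps_r * r**2)

f_vacuum = coulomb_force(+e, -e, r)            # ~ -9.2e-10 N
f_water = coulomb_force(+e, -e, r, eps_r=78)   # ~ -1.2e-11 N, about 78x weaker
print(f_vacuum, f_water)
```

This is why water, with its high dielectric constant, is so effective at keeping dissolved ions apart.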
Standard Definitions of Enthalpy, Entropy, and Gibbs Energy for Ions
Ions are not stable on their own, and thus no ions can ever be studied separately. Particularly in biology, all ions in a certain cell or tissue have a counterion that balances this charge. Therefore, we cannot measure the enthalpy or entropy of a single ion as we can atoms of a pure element. So we define a reference point. The \(\Delta_{f}\overline{H}^{\circ} \) of a hydrogen ion \(H^+\) is equal to zero, as are the other thermodynamic quantities.
\[\Delta_{f}\overline{H}^{\circ}[H^{+}(aq)]=0\]
\[\Delta_{f}\overline{G}^{\circ}[H^{+}(aq)]=0\]
\[\overline{S}^{\circ}[H^{+}(aq)]=0\]
When studying the formation of ionic solutions, the most useful quantity to describe is the chemical potential \(\mu\), defined as the partial molar Gibbs energy of the \(i\)-th component in a substance:
\[\mu_{i}=\overline{G}_{i}=\left(\frac{\partial G}{\partial n_{i}}\right)_{T,P,n_{j}}=\mu_{i}^{\circ}+RT\ln x_{i}\]
where \(x_{i}\) can be any unit of concentration of the component: mole fraction, molality, or for gases, the partial pressure divided by the pressure of pure component.
Ionic Solutions
To express the chemical potential of an electrolyte in solution in terms of molality, let us use the example of a dissolved salt such as magnesium chloride, \(MgCl_{2}\).
\[MgCl_{2}\rightleftharpoons Mg^{2+}+2Cl^{-} \label{1}\]
We can now write a more general equation for a dissociated salt:
\[M_{\nu+}X_{\nu-}\rightleftharpoons\nu_{+}M^{z+}+\nu_{-}X^{z-} \label{2} \]
where \(\nu_{\pm}\) represents the stoichiometric coefficient of the cation or anion and \(z_\pm\) represents the charge, and M and X are the metal and halide, respectively.
The total chemical potential for this anion-cation pair would be the sum of their individual potentials multiplied by their stoichiometric coefficients:
\[\mu=\nu_{+}\mu_{+}+\nu_{-}\mu_{-} \label{3} \]
The chemical potentials of the individual ions are:
\[\mu_{+} = \mu_+^{\circ}+RT\ln m_+ \label{4} \]
\[\mu_{-} = \mu_-^{\circ}+RT\ln m_- \label{5} \]
And the molalities of the individual ions are related to the original molality of the salt \(m\) by their stoichiometric coefficients:
\[m_{+}=\nu_{+}m \qquad \text{and} \qquad m_{-}=\nu_{-}m\]
Substituting Equations \(\ref{4}\) and \(\ref{5}\) into Equation \(\ref{3}\),
\[ \mu=\left( \nu_{+}\mu_{+}^{\circ}+\nu_{-}\mu_{-}^{\circ}\right)+RT\ln\left(m_{+}^{\nu_{+}}m_{-}^{\nu_{-}}\right) \label{6} \]
since the total number of moles \(\nu=\nu_{+}+\nu_{-}\), we can define the mean ionic molality as the geometric average of the molalities of the two ions:
\[ m_{\pm}=(m_{+}^{\nu_{+}}m_{-}^{\nu_{-}})^{\frac{1}{\nu}}\]
then Equation \(\ref{6}\) becomes
\[\mu=(\nu_{+}\mu_{+}^{\circ}+\nu_{-}\mu_{-}^{\circ})+\nu RT\ln m_{\pm} \label{7} \]
We have derived this equation for an ideal solution, but ions in solution exert electrostatic forces on one another and deviate from ideal behavior, so instead of molalities we must use the activity \(a\) to represent how the ion is behaving in solution. Therefore the mean ionic activity is defined as
\[a_{\pm}=(a_{+}^{\nu_{+}}a_{-}^{\nu_{-}})^{\frac{1}{\nu}}\]
where
\[a_{\pm}=\gamma_{\pm} m_{\pm} \label{mean}\]
and \(\gamma_{\pm}\) is the
mean ionic activity coefficient, which is dependent on the substance. Substituting the mean ionic activity of Equation \(\ref{mean}\) into Equation \(\ref{7}\),
\[\mu=(\nu_{+}\mu_{+}^{\circ}+\nu_{-}\mu_{-}^{\circ})+\nu RT\ln a_{\pm}=(\nu_{+}\mu_{+}^{\circ}+\nu_{-}\mu_{-}^{\circ})+RT\ln a_{\pm}^{\nu}=(\nu_{+}\mu_{+}^{\circ}+\nu_{-}\mu_{-}^{\circ})+RT \ln a \label{11}\]
where \(a=a_{\pm}^{\nu}\). Equation \(\ref{11}\) then represents the chemical potential of a nonideal electrolyte solution. To calculate the mean ionic activity coefficient requires the use of the Debye-Hückel limiting law, part of the Debye-Hückel theory of electrolytes.
Example 1
Let us now write out the chemical potential in terms of molality of the salt in our first example, \(MgCl_{2}\). First from Equation \(\ref{1}\), the stoichiometric coefficients of the ions are:
\[\nu_{+} = 1,\nu_{-} = 2,\nu\; = 3\]
The mean ionic molality is
\[m_{\pm} = (m_{+}^{1}m_{-}^{2})^{\frac{1}{3}}= \left(\nu_{+}m(\nu_{-}m)^{2}\right)^{\frac{1}{3}}=m(1^{1}\cdot 2^{2})^{\frac{1}{3}}\approx 1.6m\]
The expression for the chemical potential of \(MgCl_{2}\) is
\[\mu_{MgCl_{2}}=\mu_{MgCl_{2}}^{\circ}+3RT\ln(1.6m)\]
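A small numeric check of this example (the molality value is arbitrary, chosen only for illustration):

```python
# Mean ionic molality m_pm = (m_+^nu+ * m_-^nu-)^(1/nu) for a salt M_nu+ X_nu-.
def mean_ionic_molality(m, nu_plus, nu_minus):
    nu = nu_plus + nu_minus
    m_plus, m_minus = nu_plus * m, nu_minus * m
    return (m_plus**nu_plus * m_minus**nu_minus) ** (1.0 / nu)

m = 0.10  # mol/kg of MgCl2, illustrative
m_pm = mean_ionic_molality(m, nu_plus=1, nu_minus=2)
print(m_pm, m_pm / m)   # ~0.159 mol/kg, i.e. 4^(1/3) = 1.587... times m
```

The ratio \(4^{1/3} \approx 1.587\) is the "1.6" that appears in the worked expression above.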
Separable ODEs
concept
Separable ODEs are some of the simplest to solve and require no skills you haven't used before. As such, they are a perfect, gentle introduction to the world of ODE solving.
fact
A separable ODE is one that can be written in the following form: $$f(x)dx = g(y)dy$$ In other words, everything involving one variable can be placed on one side of the equals sign and everything involving the other variable can be placed on the other side.
example
Determine whether the following are separable ODEs:
\(\frac{dy}{dx} = x^2\): We can rewrite this one as \(dy = x^2dx\), so it is separable.
\(x^2ydy = y^2dx\): Dividing both sides by \(x^2y^2\) gives us \(\frac{1}{y}dy = \frac{1}{x^2}dx\), which is separated, so it is separable.
\(\frac{dy}{dx} = x^y\): This one can't be rearranged into a separable form. No matter what we do, we can't place all the \(x\)s on one side and all the \(y\)s on the other. So this one is NOT separable.
fact
Solving a separable ODE is simple. We place it in its separated form and then integrate both sides to get a solution.
example
Solve \(\frac{dy}{dx} = x^2\). Placing it in separable form we get \(dy = x^2dx\). Now we integrate both sides: \(\int dy = \int x^2dx\), so \(y = \frac{1}{3}x^3 + c\). We must ALWAYS remember that constant of integration. A solution to a differential equation is never one function, but a family of such functions differing by an additive constant.
example
Solve \(\frac{dy}{dx} = y\). Rearrange to get \(\frac{1}{y}dy = dx\) and integrate both sides: \(\int \frac{1}{y}dy = \int dx\), so \(\log(y) = x + c\). Exponentiate both sides to get a nicer answer: \(y = e^{x + c}\), which can be simplified by letting \(e^c = A\): \(y = Ae^x\). This simplification is very common in ODEs as exponentials are everywhere.
example
Solve \(x\frac{dy}{dx} = y\). Separating, \(\frac{1}{y}dy = \frac{1}{x}dx\), so \(\int \frac{1}{y}dy = \int \frac{1}{x}dx\) and \(\log(y) = \log(x) + c\). Raise \(e\) to both sides: \(y = e^{\log(x) + c} = Ax\), where \(A = e^c\).
example
Solve \(y' = e^{-y}(2x - 4)\). This one looks a little more complicated than the others but works the same way. Remember that \(y' = \frac{dy}{dx}\). Then \(e^ydy = (2x-4)dx\), so \(\int e^ydy = \int (2x - 4)dx\) and \(e^y = x^2 - 4x + c\). We won't always get our answer in the form \(y = f(x)\); although we could here, it isn't necessary.
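If you want to machine-check any of these, a minimal sketch with SymPy (assuming the sympy package is available):

```python
import sympy as sp

x = sp.symbols("x")
y = sp.Function("y")

# dy/dx = e^(-y) (2x - 4), the last example above
ode = sp.Eq(y(x).diff(x), sp.exp(-y(x)) * (2 * x - 4))
print(sp.dsolve(ode, y(x)))   # y(x) = log(C1 + x**2 - 4*x), i.e. e^y = x^2 - 4x + c
```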
practice problems
Do you ever get the feeling that mathematics uses the word dimension a lot? Well, that's for good reason. The concept of dimension is fundamental in mathematics. What is dimension? You can think of dimension as a numerical invariant characterizing the number of parameters required to do a certain thing. For example, for vector spaces, dimension is the cardinality of a basis, and a basis is a minimal set from which you can specify all vectors via linear combinations. The cartesian plane is two-dimensional because you need two coordinates in order to specify any point.
There are other more exotic types of dimension used in ring theory, and this post aims to be a quick introduction to them.
Rank of a free module Possible values: all cardinals.
As we've already talked about, the dimension of a vector space is the cardinality of a basis of that space. Every vector space has a basis, and all bases of a given vector space have the same cardinality. Therefore, the concept of dimension in this case is well-defined.
If $R$ is a ring, then we could try and define a dimension (called the
rank) for an $R$-module $M$, as the cardinality of a minimal generating set. The only problem is, the rank may not be well-defined. What is true is that if a minimal generating set for $M$ is infinite, then all minimal generating sets for $M$ have the same cardinality. But the conclusion is not necessarily true if there exists some minimal generating set for $M$ that is finite.
Actually, there exists a ring $R$ and two natural numbers $m$ and $n$ such that $R^n\cong R^m$. One such example is the endomorphism ring of a vector space of infinite dimension. Whoa, right? However, suppose a ring $R$ has the property that $R^n\cong R^m$ implies $m=n$. Then $R$ is said to have the
invariant basis property. All commutative rings have the invariant basis property, so the concept of rank of a free module always makes sense here. Some noncommutative rings, such as finite rings, also have the invariant basis property.
Krull dimension of a commutative ring Possible values: all natural numbers and infinity.
If $R$ is a commutative ring, its
Krull dimension is the supremum over the set of integers $n$ such that there exists a chain \[ P_0\subset \cdots \subset P_n\] of prime ideals of $R$. For example, fields have Krull dimension zero since the only prime ideal of a field is the zero ideal. The integers (and more generally, Dedekind domains) have Krull dimension one because every nonzero prime in these rings is maximal.
If $F$ is a field, the Krull dimension of the polynomial ring $F[x_1,\dots,x_n]$ is $n$. Algebro-geometrically, the ring $F[x_1,\dots,x_n]$ represents $n$-dimensional $F$-space, and in general, Krull dimension accurately captures the concept of dimension of an algebraic variety. In more general cases, Krull dimension can behave bizarrely.
Embedding dimension of a local ring Possible values: all cardinals
If $R$ is a commutative local ring with maximal ideal $M$, then the
embedding dimension of $R$ is defined to be the dimension of $M/M^2$ as an $R/M$-vector space. For example, consider the commutative ring $\Z/4$. It is local with maximal ideal $(2)$ whose square is zero. Over $\Z/2\cong (\Z/4)/(2)$, the ideal $(2)$ is one-dimensional. Notice on the other hand that the Krull dimension of $\Z/4$ is zero, so this dimension is not the same as the Krull dimension. In fact:
Theorem. The Krull dimension of a commutative local ring is always less than or equal to its embedding dimension.
In fact, the local rings for which the embedding dimension is equal to the Krull dimension are exactly the
regular local rings. The grade Possible values: natural numbers and infinity
The grade is not exactly a dimension, but I included it since it is so closely related. For any commutative ring, it is defined as the length of the longest regular sequence on $R$ (the grade also makes sense for $R$-modules). It is close to a dimension, and the grade of a commutative ring is always less than or equal to its Krull dimension. Rings for which these values are equal are called
Cohen-Macaulay. As you might expect, regular local rings are Cohen-Macaulay, but the converse is not true. Projective dimension of a module Possible values: natural numbers and infinity
The projective dimension of an $R$-module is the infimum over the set of integers $n$ such that there exists a projective resolution
\[0\to P_n\to\cdots\to P_0\to M\to 0\] of $M$. Of course, if no such integer exists, then the projective dimension is infinity, which is consistent with our definition as the infimum of the empty set is infinity. We write ${\rm projdim}_R(M)$ for the projective dimension of $M$.
The projective dimension is a measure of how many projective modules it takes to "specify" $M$. If ${\rm projdim}_R(M) = 0$ then $M$ is actually projective. Here's an example: if $k$ is an integer greater than $1$, then ${\rm projdim}_\Z(\Z/k) = 1$. That is because as a $\Z$-module, $\Z/k$ is not projective but has a projective resolution
\[ 0\to\Z\xrightarrow{k}\Z\to \Z/k\to 0.\] Projective dimension gives rise to a really cool concept: global dimension. Global dimension of a ring Possible values: natural numbers and infinity
Because submodules of free abelian groups are free abelian, we see that ${\rm projdim}_\Z(\Z/k) = 1$, and so all $\Z$-modules have projective dimension
at most one. Motivated by this, we can define the global dimension of a ring to be the supremum over the projective dimensions of all its modules.
Actually, we have to be a bit careful. We should really define the
left global dimension of $R$ to be the supremum over the projective dimensions of all the left modules, and similarly for right modules. The left and right global dimensions need not agree, though if one is zero, the other is also zero.
Speaking of global dimension zero: a ring is semisimple if every one of its left modules is projective (equivalently, injective). It turns out this implies that all the right modules are projective. The Artin-Wedderburn theorem says that such rings are finite direct products of full matrix rings over division rings.
Global dimension one rings, or
hereditary rings include the integers and many other classes of rings as well. In fact, a ring $R$ has right global dimension one if and only if every submodule of every free $R$-module is projective.
The rings of larger global dimension don't have as nice a classification, but with other characteristics, global dimension provides a useful tool to classify and study rings.
Flat dimension and weak dimension Possible values: natural numbers and infinity
If we repeat the definitions we had for projective dimension and global dimension but replace
projective module with flat module, then we get the concepts of flat dimension of a module (length of smallest flat resolution) and weak dimension of a ring (supremum over flat dimensions of modules). Only, in this case, we don't need to distinguish left and right modules because left weak dimension and right weak dimension are always the same for any ring, whether or not it is commutative.
Rings of flat dimension zero are exactly the von Neumann regular rings: those rings $R$ such that for every $x\in R$ there exists a $y\in R$ such that $x = xyx$. They were named after John von Neumann of course. Commutative von Neumann regular rings are exactly those rings that are reduced and have Krull dimension zero. In fact, I wrote about this more than two years ago.
One strategy to classify rings is to mix various dimensions together, and see if you can come up with some nice hidden characterizations!
Injective dimension of a module Possible values: natural numbers and infinity
There is also the injective dimension of an $R$-module: it is the infimum over lengths of its injective resolutions. It is the dual notion to projective dimension. If you take the supremum over injective dimensions of a ring, you just get global dimension again. That's also really cool. For example, if a ring $R$ has left global dimension five, then there exists a left $R$-module that has a projective resolution of length five, and there exists another left $R$-module that has an injective resolution of length five, and no shorter resolutions of these modules exist.
Variations of homological dimensions Possible values: depends on the type of dimension
Projective and injective dimensions are examples of
homological dimensions: these are those dimensions that are defined by resolutions of modules. There are many variations, many of them based on relative homological algebra. Perhaps the most famous are the "Gorenstein" dimensions, named after Daniel Gorenstein. These concepts became well-studied because for local rings, finite Gorenstein dimension characterizes whether that ring has finite injective dimension over itself.
I am not very familiar with Gorenstein dimensions, but Lars Winther Christensen wrote a Springer LNM called
Gorenstein dimensions, which is a good start to learn more about these cool dimensions. Gelfand-Kirillov dimension Possible values: $0, 1, [2,\infty]$ (cool, right?)
The Gelfand-Kirillov dimension is much more mysterious than the other dimensions I defined so far, and is more analytic in nature. It is defined for $k$-algebras $A$ where $k$ is a field. We'll need some preliminary definitions to define the Gelfand-Kirillov dimension.
A finite-dimensional subspace $V$ of $A$ that contains $1$ is called a
subframe of $A$. If $V$ is a subframe with ordered basis $\{ v_1,\dots,v_n\}$, we define $V^i$ to be the set of monomials of length $i$ in the $v_1,\dots, v_n$. So for example, $V^3$ contains monomials like $v_1v_2v_4$, etc.
Define $F_n^V = k + V + V^2 + \cdots + V^n$, and define $d_V(n) = \dim_k(F_n^V(A))$. Thus, $d_V(n)$ is a natural number and depends of course on the choice of the vector space $V$ contained in $A$. The number $d_V(n)$ as a function of $n$ measures how fast you can get elements of $A$ by taking longer and longer products of vectors in $V$. It is a growth function.
We define the Gelfand-Kirillov dimension of $k[V]$ by
\[{\rm GKdim}(k[V]) := \limsup \frac{\log(d_V(n))}{\log(n)}.\] The Gelfand-Kirillov dimension of $A$ is defined to be \[{\rm GKdim}(A) := \sup_V {\rm GKdim}(k[V]) \] where the supremum is taken over all subframes of $A$ (recall, these are finite-dimensional subspaces that contain the algebra identity). That these quantities exist are left as an exercise for the interested reader.
If $A$ is a finite-dimensional $k$-algebra, then for every subframe $V$, the dimension of the sets $F_n^V(A)$ are bounded, and so ${\rm GKdim}(A) = 0$ in this case. Thus, the Gelfand-Kirillov dimension is only interesting for infinite-dimensional algebras. It is also not hard to show that if $A$ is a free noncommutative $k$-algebra on a two-element set, then ${\rm GKdim}(A)= \infty$. That is because if you take $V$ to be spanned by $\{1,X,Y\}$ then the dimension of $F_n^V(A)$ grows exponentially in $n$.
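A crude numerical illustration of these two growth regimes, using the standard counting formulas for monomials in two commuting variables versus words in two free generators:

```python
# d_V(n) for V = span{1, x, y}: the commutative case counts monomials of
# degree <= n in two variables; the free case counts all words of length <= n.
import math

def d_commutative(n):       # dim of polynomials of degree <= n in x, y
    return (n + 1) * (n + 2) // 2

def d_free(n):              # 1 + 2 + 4 + ... + 2^n words in X, Y
    return 2 ** (n + 1) - 1

for n in (10, 100, 1000):
    print(n,
          math.log(d_commutative(n)) / math.log(n),   # tends to 2 (GK dim 2)
          math.log(d_free(n)) / math.log(n))          # grows without bound
```

The first column of ratios settles toward $2$, matching ${\rm GKdim}(k[x,y]) = 2$, while the second diverges, matching ${\rm GKdim} = \infty$ for the free algebra.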
To produce other basic examples, we can use the following:
Theorem.For any $k$-algebra,
\[{\rm GKdim}(A[x_1,\dots,x_d]) = {\rm GKdim}(A) + d.\]
Thus, the polynomial ring in $d$ variables over a field has Gelfand-Kirillov dimension $d$. These give examples of all possible natural numbers. In fact, non-natural-number Gelfand-Kirillov dimensions can only occur for non-commutative rings! Indeed, a result in
Borho, Walter; Kraft, Hanspeter. Über die Gelfand-Kirillov-Dimension. Math. Ann. 220 (1976), no. 1, 1–24. MR0412240
is that any number in $[2,\infty]$ can appear as a Gelfand-Kirillov dimension. Unfortunately, this paper is in German so I can't read it and possibly explain it on this blog.
Conclusion
This is by no means an exhaustive list of all dimensions, and I probably will talk a lot more about some of these and others in future posts.
The Finite Element Method is a powerful numerical technique for solving ordinary and partial differential equations in a range of complex science and engineering applications, such as multi-domain analysis and structural engineering. It involves decomposing the analysis domain into a discrete mesh before constructing and then solving a system of equations built over mesh elements. The number of equations involved grows as the mesh is refined, making the Finite Element Method computationally very intensive. However, various stages of the process can be easily parallelized.
In this article we perform coupled electro-mechanical finite element analysis of an electrostatically actuated micro-electro-mechanical (MEMS) device. We apply parallel computing techniques to the most computationally intensive part of the mechanical analysis stage. Using a 40-worker[1] setup, we will reduce the time taken for the mechanical analysis with an approximately one-million-DOF mesh from nearly 60 hours to less than 2 hours.
MEMS Devices
MEMS devices typically consist of thin, high-aspect ratio, movable beams or electrodes suspended over a fixed electrode (Figures 1 and 2). They integrate mechanical elements on silicon substrates using microfabrication.
The electrode deformation caused by the application of voltage between the movable and fixed electrodes can be used for actuation, switching, and other signal and information processing functions.
FEM provides a convenient tool for characterizing the inner workings of MEMS devices so as to predict temperatures, stresses, dynamic response characteristics, and possible failure mechanisms.
One of the most common MEMS switches is the cantilever series (Figure 3). This consists of beams suspended over a ground electrode.
Figure 4 shows the modeled geometry. The top electrode is 150μm in length, and 2μm in thickness. Young’s modulus E is 170 GPa, and the Poisson ratio υ is 0.34. The bottom electrode is 50μm in length, 2μm in thickness, and located 100μm from the leftmost end of the top electrode. The gap between the top and bottom electrodes is 2μm.
When a voltage is applied between the top electrode and the ground plane, electrostatic charges are induced on the surface of the conductors, which give rise to electrostatic forces acting normal to the surface of the conductors. Since the ground plane is fixed, the electrostatic forces deform only the top electrode. When the beam deforms, the charge redistributes on the surface of the conductors. The resultant electrostatic forces and the deformation of the beam also change. This process continues until a state of equilibrium is reached.
Applying FEM to Coupled Electro-Mechanical Analysis
For simplicity, we will use the relaxation-based algorithm rather than the Newton method to couple the electrostatic and the mechanical domains[2]. The steps are as follows:
1. Solve the electrostatic FEA problem in the nondeformed geometry with constant potential \(V_0\) on the movable electrode.
2. Compute load and boundary conditions for the mechanical solution using the calculated values of the charge density along the movable electrode.
The electrostatic pressure on the movable electrode is given by \[P=\frac{1}{2\epsilon}|D|^2\]
Where,
\(|D|\) = Magnitude of the electric flux density \(\epsilon\) = Electric permittivity next to the movable electrode
3. Solve the mechanical FEA to compute the deformation of the movable electrode.
4. Using the calculated displacement of the movable electrode, update the charge density along the movable electrode.
\[\left|D_{\mathrm{def}}(x)\right| \approx \left|D_0(x)\right|\frac{G}{G - v(x)}\]
Where,
\(|D_{\mathrm{def}}(x)|\) = Magnitude of the electric flux density in the deformed electrode \(|D_0(x)|\) = Magnitude of the electric flux density in the undeformed electrode \(G\) = Distance between the movable and fixed electrodes in the absence of actuation \(v(x)\) = Displacement of the movable electrode at position x along its axis
5. Repeat steps 2 – 4 until the electrode deformation values in the last two iterations converge.
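A schematic of this relaxation loop, sketched in Python under stated assumptions: `solve_mechanical` is a hypothetical stand-in for the mechanical FEA solve of step 3, the permittivity value assumes vacuum next to the electrode, and arrays are sampled along the movable electrode's axis:

```python
# Relaxation coupling, steps 2-5 of the algorithm above (illustrative only).
import numpy as np

EPS = 8.854e-12  # permittivity next to the movable electrode (vacuum assumed)

def relax(D0, G, solve_mechanical, tol=1e-9, max_iter=100):
    """D0: |D| along the undeformed electrode; G: undeformed gap."""
    v = np.zeros_like(D0)
    for _ in range(max_iter):
        D = D0 * G / (G - v)                 # step 4: update charge density
        P = D**2 / (2 * EPS)                 # step 2: electrostatic pressure
        v_new = solve_mechanical(P)          # step 3: mechanical FEA (stub)
        if np.max(np.abs(v_new - v)) < tol:  # step 5: convergence check
            return v_new
        v = v_new
    raise RuntimeError("relaxation did not converge")
```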
The electrostatic analysis (step 1) involves five steps:
Draw the cantilever switch geometry. Specify the Dirichlet boundary conditions as a constant potential of 20 V to the movable electrode. Mesh the exterior domain. Solve for the unknown electric potential in the exterior domain. Plot the solution (Figure 5).
We can quickly perform each of these tasks via the Partial Differential Equation Toolbox™ interface.
Mechanical Analysis
The mechanical analysis involves five steps:
Meshing the domain Deriving element-level equations Assembly Imposing boundary conditions Solving the equations Meshing the Domain
In this step we discretize the system domain into smaller domains, or elements. Discretization lets us represent the geometry of the domain and approximate the solution over each element to better represent the solution over the entire domain. The number of elements determines the size of the problem.
Deriving Element-Level Equations
In this step, we assume an approximate solution for the differential equations over an element at selected points, or nodes. In our example, the solution is determined in terms of discrete values of \(\phi\), the displacements in the \(x\) and \(y\) directions (2D analysis). The number of unknown primary fields at a node is called the degrees of freedom (DOF) at that node.
The governing differential equation is now applied to the domain of a single element (Figure 6). At the element level, the solution to the governing equation is replaced by a continuous function approximating the distribution of \(\phi\) over the element domain \(D^e\), expressed in terms of the unknown nodal values \(\phi_1\), \(\phi_2\), and \(\phi_3\) of the solution \(\phi\). A system of equations in terms of \(\phi_1\), \(\phi_2\), and \(\phi_3\) can then be formulated for the element.
Assembly
We obtain the solution equations for the system by combining the solution equations of each element to ensure continuity at each node. In our example, the element-level stiffness and force matrices (\(K_e\) and \(F_e\)) are assembled to create a global stiffness matrix (\(K\)) and a force matrix (\(F\)) over the entire domain.
Imposing Boundary Conditions
We impose the necessary boundary conditions at the boundary nodes and solve the global system of equations. The solution \(\phi(x,y)\) to the problem becomes a piecewise approximation, expressed in terms of the nodal values of \(\phi\). A system of linear algebraic equations results from the assembly procedure: \(K \phi = F\).
It is not uncommon for practical engineering problems to have a system containing thousands of equations. Parallel computing techniques can greatly reduce the time required to assemble the matrices and compute the solution for problems of such massive size.
Solving the Equations
We solve the global system of equations \(K \phi = F\) for the displacements in the \(x\) and \(y\) directions.
Parallelizing the Problem
We implement each step defined in the Finite Element Method in a MATLAB® function. Meshing is done using mesh2d, a MATLAB-based mesh generation application for 2D geometries. The Profiling tool in MATLAB shows that the most time-consuming operations belong to the stiffness matrix assembly step.
This step involves three main operations for each element:
1. Compute the element stiffness matrix (\(K_e\)) from the element-level solution equations. \(K_e\) is of size \(N_e \times N_e\), with \(N_e = n \times D\). Where,
\(N_e\) is the number of DOF per element
\(n\) is the number of nodes per element
\(D\) is the number of DOF per node
2. Map the local positions of the \(K_e\) matrix values to their position in the global stiffness matrix.
3. Populate the global stiffness matrix (\(K\)) using the map with the element stiffness matrix values.
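For concreteness, the same three operations can be sketched outside MATLAB; this is a generic COO-triplet assembly in Python/SciPy, with `element_stiffness` and `dof_map` as hypothetical stand-ins for the element routines, not the article's code:

```python
# COO-style assembly: accumulate (row, col, value) triplets per element, then
# let the sparse constructor sum duplicate entries into the global K.
import numpy as np
from scipy.sparse import coo_matrix

def assemble(elements, n_dof, element_stiffness, dof_map):
    rows, cols, vals = [], [], []
    for e in elements:
        Ke = element_stiffness(e)        # 1. element stiffness, Ne x Ne
        dofs = dof_map(e)                # 2. local -> global DOF numbers
        ne = len(dofs)
        for i in range(ne):              # 3. scatter Ke into triplet lists
            for j in range(ne):
                rows.append(dofs[i])
                cols.append(dofs[j])
                vals.append(Ke[i, j])
    return coo_matrix((vals, (rows, cols)), shape=(n_dof, n_dof)).tocsr()
```

Because elements contribute independently to the triplet lists, the loop over elements is exactly the part that parallelizes well, which is what the article exploits below.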
Number of Elements | Degrees of Freedom (DOF) | Assembly Time (seconds) | Total Execution Time (seconds) | Assembly Time / Total Execution Time (%)
528 | 806 | 4.09 | 4.82 | 84.5
6584 | 7690 | 45.073 | 46.371 | 97.2
53550 | 55882 | 960.069 | 1005.448 | 95.5
460752 | 469566 | 64573.472 | 64616.662 | 99.9
Table 1. Comparison of assembly time and total execution time for different DOF.
As the number of elements increases, so do the number of iterations and the size of \(K\). Figure 7 and Table 1 compare the time taken for assembly to the total execution time in serial mode for the system with different DOF. Clearly, assembly is the most time-consuming portion, taking up more than 99% of the total execution time when the system has high DOF.
Stiffness Matrix Assembly
Fortunately, the final assembled \(K\) matrix is independent of the order in which the elements are chosen within the loop. We can evaluate the contributions of several elements to the global stiffness matrix (\(K\)) simultaneously by distributing the computations across multiple MATLAB workers. The assembly operations are normally executed in a serial for-loop, which steps through each element and determines its contribution to the global stiffness matrix. We simply convert the serial for-loop to a parallel for-loop using the parfor construct in Parallel Computing Toolbox™ (Figure 8).
In our example, the global stiffness matrix is the sum of the contributions of the element stiffness matrices across all the iterations of the loop. The parfor construct enables us to handle such reduction assignments (usually of the form \(r = f(\text{expr}, r)\)) automatically.
To demonstrate the performance gains achieved by using a parallel approach, the problem was scaled up from a coarse mesh to a super-refined mesh. The coarse mesh contained about 128 elements with a total of 150 DOF. We refined the mesh until it contained 856,800 elements with 861,122 DOF. As we refined the mesh, the displacement of the free end of the cantilever beam converged.
For the parallel approach, we used a computer cluster with one head node and 5 machines, each with the following configuration: dual Intel® Xeon® 1.6 GHz quad-core processors (8 cores per machine for a total of 40 cores), 13 GB RAM, and a Windows® 64-bit operating system. Each machine ran 8 MATLAB workers for a total of 40 workers. To measure the serial execution time, we used a single MATLAB worker running on the head node. Using a 64-bit OS enabled us to create large sparse matrices (up to 861,122 x 861,122) without running into memory limitations. In most finite element applications, the resultant \(K\) is sparse in nature.
Figures 9a and 9b compare the total execution time in seconds with increasing DOF between the serial (red) and parallel (green) modes of execution. Table 2 summarizes the results.
Degrees of Freedom (DOF) | Total Execution Time, Serial Mode (seconds) | Total Execution Time, Parallel Mode (seconds)
150 | 0.53 | 1.15
250 | 0.77 | 1.18
806 | 4.82 | 1.37
2200 | 12.93 | 2.8
7690 | 46.37 | 6.1
28546 | 355.04 | 20.92
103822 | 3406.61 | 129.23
218862 | 14871.7 | 496.47
469566 | 64616.66 | 1911.71
866122 | 218674.88 | 6237.91
Table 2. Representative data used in Figures 9a and 9b.
Notice that for a system with few DOF, the cost of distributing the operations is much higher compared to the execution times of these operations. As a result, when the system had only 250 DOF, serial execution was actually faster than the parallel execution.
Figure 9b shows that up to about 400 DOF, the point where the two curves intersect, the serial-mode execution is faster than the parallel mode. We see a performance improvement by switching to parallel mode only after this point. The actual execution time and cross-over point depend on several factors, including the execution speed of the MATLAB functions involved, the processing speed of the worker machines, network speed, number of workers, available memory, and system load.
Summary
In this article we demonstrated a simple approach to parallelizing an FEA application. We began by analyzing serial code performance, focusing on the most computationally intensive part of our setup. With simple code changes we were able to significantly boost our application performance, cutting the time for analyzing an 800,000-DOF system from 60 hours to less than 2 hours on a 40-worker setup.
[1] Workers are MATLAB computational engines that run separately from your MATLAB session.
[2] P.S. Sumant, N.R. Aluru and A.C. Cangellaris, "A methodology for fast finite element modeling of electrostatically actuated MEMS." International Journal for Numerical Methods in Engineering 2009; 77:1789-1808.
The Annals of Probability, Volume 25, Number 4 (1997), 1545-1587.
Limit theorems for products of positive random matrices
Abstract
Let $S$ be the set of $q \times q$ matrices with positive entries, such that each column and each row contains a strictly positive element, and denote by $S^\circ$ the subset of these matrices, all entries of which are strictly positive. Consider a random ergodic sequence $(X_n)_{n \geq 1}$ in $S$. The aim of this paper is to describe the asymptotic behavior of the random products $X^{(n)} = X_n \cdots X_1$, $n \geq 1$, under the main hypothesis $P(\bigcup_{n\geq 1}[X^{(n)}\in S^\circ])>0$. We first study the behavior "in direction" of row and column vectors of $X^{(n)}$. Then, adding a moment condition, we prove a law of large numbers for the entries and lengths of these vectors and also for the spectral radius of $X^{(n)}$. Under the mixing hypotheses that are usual in the case of sums of real random variables, we get a central limit theorem for the previous quantities. The variance of the Gaussian limit law is strictly positive except when $(X^{(n)})_{n\geq 1}$ is tight. This tightness property is fully studied when the $X_n$, $n\geq 1$, are independent.
Article information
Source: Ann. Probab., Volume 25, Number 4 (1997), 1545-1587.
Dates: First available in Project Euclid: 7 June 2002
Permanent link to this document: https://projecteuclid.org/euclid.aop/1023481103
Digital Object Identifier: doi:10.1214/aop/1023481103
Mathematical Reviews number (MathSciNet): MR1487428
Zentralblatt MATH identifier: 0903.60027
Citation
Hennion, H. Limit theorems for products of positive random matrices. Ann. Probab. 25 (1997), no. 4, 1545--1587. doi:10.1214/aop/1023481103. https://projecteuclid.org/euclid.aop/1023481103
Geometry and Topology Seminar
Fall 2016
Spring 2017
date | speaker | title | host(s)
Jan 20 | Carmen Rovi (University of Indiana Bloomington) | "The mod 8 signature of a fiber bundle" | Maxim
Jan 27 | | |
Feb 3 | Rafael Montezuma (University of Chicago) | "TBA" | Lu Wang
Feb 10 | | |
Feb 17 | Yair Hartman (Northwestern University) | "Intersectional Invariant Random Subgroups and Furstenberg Entropy." | Dymarz
Feb 24 | Lucas Ambrozio (University of Chicago) | "TBA" | Lu Wang
March 3 | Mark Powell (Université du Québec à Montréal) | "TBA" | Kjuchukova
March 10 | Autumn Kent (Wisconsin) | Analytic functions from hyperbolic manifolds | local
March 17 | | |
March 24 | Spring Break | |
March 31 | Xiangwen Zhang (University of California-Irvine) | "TBA" | Lu Wang
April 7 | | |
April 14 | Xianghong Gong (Wisconsin) | "TBA" | local
April 21 | Joseph Maher (CUNY) | "TBA" | Dymarz
April 28 | Bena Tshishiku (Harvard) | "TBA" | Dymarz
Fall Abstracts
Ronan Conlon New examples of gradient expanding K\"ahler-Ricci solitons
A complete K\"ahler metric $g$ on a K\"ahler manifold $M$ is a \emph{gradient expanding K\"ahler-Ricci soliton} if there exists a smooth real-valued function $f:M\to\mathbb{R}$ with $\nabla^{g}f$ holomorphic such that $\operatorname{Ric}(g)-\operatorname{Hess}(f)+g=0$. I will present new examples of such metrics on the total space of certain holomorphic vector bundles. This is joint work with Alix Deruelle (Universit\'e Paris-Sud).
Jiyuan Han Deformation theory of scalar-flat ALE Kahler surfaces
We prove a Kuranishi-type theorem for deformations of complex structures on ALE Kahler surfaces. This is used to prove that for any scalar-flat Kahler ALE surfaces, all small deformations of complex structure also admit scalar-flat Kahler ALE metrics. A local moduli space of scalar-flat Kahler ALE metrics is then constructed, which is shown to be universal up to small diffeomorphisms (that is, diffeomorphisms which are close to the identity in a suitable sense). A formula for the dimension of the local moduli space is proved in the case of a scalar-flat Kahler ALE surface which deforms to a minimal resolution of \C^2/\Gamma, where \Gamma is a finite subgroup of U(2) without complex reflections. This is a joint work with Jeff Viaclovsky.
Sean Howe Representation stability and hypersurface sections
We give stability results for the cohomology of natural local systems on spaces of smooth hypersurface sections as the degree goes to \infty. These results give new geometric examples of a weak version of representation stability for symmetric, symplectic, and orthogonal groups. The stabilization occurs in point-counting and in the Grothendieck ring of Hodge structures, and we give explicit formulas for the limits using a probabilistic interpretation. These results have natural geometric analogs -- for example, we show that the "average" smooth hypersurface in \mathbb{P}^n is \mathbb{P}^{n-1}!
Nan Li Quantitative estimates on the singular sets of Alexandrov spaces
The definition of quantitative singular sets was initiated by Cheeger and Naber. They proved some volume estimates on such singular sets in non-collapsed manifolds with lower Ricci curvature bounds and their limit spaces. On the quantitative singular sets in Alexandrov spaces, we obtain stronger estimates in a collapsing fashion. We also show that the (k,\epsilon)-singular sets are k-rectifiable and such structure is sharp in some sense. This is a joint work with Aaron Naber.
Yu Li
In this talk, we prove that if an asymptotically Euclidean (AE) manifold with nonnegative scalar curvature has long time existence of Ricci flow, it converges to the Euclidean space in the strong sense. By convergence, the mass will drop to zero as time tends to infinity. Moreover, in three dimensional case, we use Ricci flow with surgery to give an independent proof of positive mass theorem. A classification of diffeomorphism types is also given for all AE 3-manifolds with nonnegative scalar curvature.
Peyman Morteza
We develop a procedure to construct Einstein metrics by gluing the Calabi metric to an Einstein orbifold. We show that our gluing problem is obstructed and we calculate the obstruction explicitly. When our obstruction does not vanish, we obtain a non-existence result in the case that the base orbifold is compact. When our obstruction vanishes and the base orbifold is non-degenerate and asymptotically hyperbolic, we prove an existence result. This is joint work with Jeff Viaclovsky.
Caglar Uyanik Geometry and dynamics of free group automorphisms
A common theme in geometric group theory is to obtain structural results about infinite groups by analyzing their action on metric spaces. In this talk, I will focus on two geometrically significant groups: mapping class groups and outer automorphism groups of free groups. We will describe a particular instance of how the dynamics and geometry of their actions on various spaces provide deeper information about the groups.
Bing Wang The extension problem of the mean curvature flow
We show that the mean curvature blows up at the first finite singular time for a closed smooth embedded mean curvature flow in R^3. A key ingredient of the proof is to show a two-sided pseudo-locality property of the mean curvature flow, whenever the mean curvature is bounded. This is a joint work with Haozhao Li.
Ben Weinkove Gauduchon metrics with prescribed volume form
Every compact complex manifold admits a Gauduchon metric in each conformal class of Hermitian metrics. In 1984 Gauduchon conjectured that one can prescribe the volume form of such a metric. I will discuss the proof of this conjecture, which amounts to solving a nonlinear Monge-Ampere type equation. This is a joint work with Gabor Szekelyhidi and Valentino Tosatti.
Jonathan Zhu Entropy and self-shrinkers of the mean curvature flow
The Colding-Minicozzi entropy is an important tool for understanding the mean curvature flow (MCF), and is a measure of the complexity of a submanifold. Together with Ilmanen and White, they conjectured that the round sphere minimises entropy amongst all closed hypersurfaces. We will review the basics of MCF and their theory of generic MCF, then describe the resolution of the above conjecture, due to J. Bernstein and L. Wang for dimensions up to six and recently claimed by the speaker for all remaining dimensions. A key ingredient in the latter is the classification of entropy-stable self-shrinkers that may have a small singular set.
Yu Zeng Short time existence of the Calabi flow with rough initial data
Calabi flow was introduced by Calabi back in 1950’s as a geometric flow approach to the existence of extremal metrics. Analytically it is a fourth order nonlinear parabolic equation on the Kaehler potentials which deforms the Kaehler potential along its scalar curvature. In this talk, we will show that the Calabi flow admits short time solution for any continuous initial Kaehler metric. This is a joint work with Weiyong He.
Spring Abstracts Lucas Ambrozio
"TBA"
Rafael Montezuma
"TBA"
Carmen Rovi The mod 8 signature of a fiber bundle
In this talk we shall be concerned with the residues modulo 4 and modulo 8 of the signature of a 4k-dimensional geometric Poincare complex. I will explain the relation between the signature modulo 8 and two other invariants: the Brown-Kervaire invariant and the Arf invariant. In my thesis I applied the relation between these invariants to the study of the signature modulo 8 of a fiber bundle. In 1973 Werner Meyer used group cohomology to show that a surface bundle has signature divisible by 4. I will discuss current work with David Benson, Caterina Campagnolo and Andrew Ranicki where we are using group cohomology and representation theory of finite groups to detect non-trivial signatures modulo 8 of surface bundles.
Bena Tshishiku
"TBA"
Autumn Kent Analytic functions from hyperbolic manifolds
At the heart of Thurston's proof of Geometrization for Haken manifolds is a family of analytic functions between Teichmuller spaces called "skinning maps." These maps carry geometric information about their associated hyperbolic manifolds, and I'll discuss what is presently known about their behavior. The ideas involved form a mix of geometry, algebra, and analysis.
Xiangwen Zhang
"TBA"
Archive of past Geometry seminars
2015-2016: Geometry_and_Topology_Seminar_2015-2016
2014-2015: Geometry_and_Topology_Seminar_2014-2015
2013-2014: Geometry_and_Topology_Seminar_2013-2014
2012-2013: Geometry_and_Topology_Seminar_2012-2013
2011-2012: Geometry_and_Topology_Seminar_2011-2012
2010: Fall-2010-Geometry-Topology
I had originally contacted Frank Harrell with this issue and he suggested I post here for some discussion. While reviewing a JAMA article (doi:10.1001/jama.2018.14276) attempting to understand the application of a Bayesian analysis of existing RCT data, I happened to run across this NEJM article (doi:10.1056/NEJMoa1900906).
Part of my interest in this area is to address the woefully inefficient clinical trial process we have evolved to with nearly every clinical question requiring a large RCT. A “failed” trial means either “rejecting” the hypothesis or repeating a larger version. Most worrisome to me is that in many situations such as off patent drugs, there is insufficient funding or incentive for such a trial. Furthermore, with nutritionally related questions such as caloric intake, food composition, or vitamins/ minerals/ supplements, the interventions are treated as drugs where the placebo arm is treated as no intake which does not make sense. Finally, for non-pharmacologic interventions such as exercise, traditional RCTs of sufficient magnitude will never take place. Thus alternative approaches are desperately needed.
Below is my original email to Frank:
The NEJM article basically concludes that vitamin D does not lower the risk of diabetes. They powered their trial for a 25% reduction in risk. When I look at figure 3, it appears that there was reduction, but not to the degree they hypothesized. Furthermore, most of the subgroups demonstrate trends in the direction that I would predict are consistent with their hypothesis. For example, lower serum levels of vitamin D (25-hydroxyvitamin D) which should respond better to normalization of serum levels display a slightly lower risk. Blacks would be expected have lower serum level also demonstrate a greater risk reduction. Obese individuals who need higher intakes of vitamin D (vitamin D partitions to fat tissue), don’t respond as well as non-obese people; there’s a similar effect with waist circumference. Recommendations for vitamin D intakes are higher in the elderly and a larger response is seen. Finally, there’s a bigger effect in individuals from higher latitudes which would also be expected to start with lower vitamin D status.
In summary, all the directions of the various subgroups are consistent with the overall hypothesis, but because of the expectation of a specific effect size, they conclude no effect. An accompanying commentary does remark that there may be a smaller effect size, but this would require another larger trial.
My question relates to how Bayesian analysis can extract some useful information from this data set as well as what would need to be set up at the outset to allow a Bayesian analysis so that we’re not always in the position of looking at a “failed” trial and either carving out specific subgroups for a follow-up trials or simply to lather, rinse, repeat with a larger trial? Getting away from yes / no trials to an approach that can offer a spectrum of results would be truly innovative as well as accelerate our ability to translate clinical concepts to medical practice.
You make excellent points about other interventions not having enough funding for an RCT. I’ve asked myself similar questions.
In terms of the primary study – if you assume an effect distribution around zero before seeing the data, then use their data to update your prior, you can provide evidence for a range of effects that were not initially planned for in the study.
I would also think a Bayesian analysis of the subgroups could be persuasive evidence in terms of the effect.
Edit: Links to related and relevant posts on this topic:
From a broader POV, a better analysis of RCT data can (when examined from a Bayesian decision theory POV) lead to the derivation of an experiment that will decide the clinically relevant question.
Here is what I think after having given this issue a lot of thought. It seems reasonable to me, but I would value some additional input from scholars in this area.
A quick way to describe my thoughts would be a Bayesian parametric meta-analysis of non-parametric primary effect sizes.
My main emphasis would be on the effect estimate (regardless of significance) and the design (to see if there need to be downward adjustments to precision based on errors in the analysis such as dichotomization, improper change score analysis, etc.; Dr. Harrell lists a number in his free booklet Biostats for Biomedical Research, AKA BBR).
My preferred estimate of effect would be some sort of odds ratio related to the logistic model. I think parametric effect sizes based on standardized means are more fragile than is understood. Standardized mean differences are easily translated into log odds.
See the following for an informal proof of translating means into odds:
The actual ratio to multiply the standardized mean effect by is \(\frac{\pi}{\sqrt{3}}\).
I guess you can say I share Dr. Harrell’s preference for the Wilcoxon-Mann-Whitney as my default 2 sample test, and the logistic model from which it is derived.
You could do a meta-analysis of the relevant trials, adjust for publication bias, then do a bootstrap on the corrected effect size estimates.
Points inside the bootstrap CI could be defensible point estimates to base a Bayesian prior distribution on. If the 25th percentile of the bootstrap distribution is assumed to be the mean of a normal distribution – is it far enough from 0 that another study would be hard to justify?
More complicated models would require the use of meta-regression. The logistic model would be a natural fit here.
Empirical Bayes techniques have been described in this area that might help you persuade the dogmatic frequentists. Using an Empirical Bayes approach gives you a posterior distribution than can be interpreted as a predictive model for future studies.
I’ve already collected a number of papers related to the issue of meta-analysis in this thread
Michael, your questions are excellent. There are a lot of basic issues that pertain that I'd like to start with. First, in the Pittas vitamin D paper, the NEJM did what they so often do: make the "absence of evidence is not evidence of absence" error in completely misinterpreting the p-value of 0.12. Bayesian posterior inference has many advantages, and here is one of them: the posterior distribution tells you to what extent you can conclude that two treatments are similar. For example you can compute P(hazard ratio between .9 and 1/.9 | data) if your "similarity zone" is a 10% reduction in hazard, up to the corresponding increase. To the issue of efficacy of vitamin D, the assessment of any efficacy is P(hr < 1 | data) given your prior. Any authors who want to conclude that a treatment should not work should go through this exercise.
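To make that concrete, here is a minimal sketch of such a posterior calculation using a normal approximation on the log hazard ratio; the point estimate, interval, and skeptical prior below are illustrative numbers, not taken from the trial:

```python
import numpy as np
from scipy.stats import norm

# Illustrative summary: hazard ratio 0.88 with 95% CI (0.75, 1.04)
log_hr = np.log(0.88)
se = (np.log(1.04) - np.log(0.75)) / (2 * 1.96)

# Skeptical normal prior on log(HR), centered at no effect
prior_sd = 0.5
post_var = 1.0 / (1.0 / prior_sd**2 + 1.0 / se**2)   # conjugate normal update
post_mean = post_var * (log_hr / se**2)
post_sd = np.sqrt(post_var)

p_efficacy = norm.cdf(0.0, post_mean, post_sd)        # P(HR < 1 | data)
lo, hi = np.log(0.9), np.log(1 / 0.9)
p_similar = norm.cdf(hi, post_mean, post_sd) - norm.cdf(lo, post_mean, post_sd)
print(p_efficacy, p_similar)                          # efficacy and similarity-zone probabilities
```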
A second way that Bayes helps is that all efficacy assessments in Bayes are directional. Contrast that with a 2-tailed test. A 2-sided p-value is effectively using a multiplicity adjustment for the off chance that you may want to make a claim that a drug increases mortality. By being interested only in amassing evidence for a mortality reduction, Bayes provides a higher posterior probability of efficacy than you would imagine from a 2-sided p-value.
Finally, I’ll briefly address the central question of how do we quantify evidence and how does this relate to decision making. Since a p-value is the probability that someone else’s data will be more extreme than yours if your null hyothesis is true for them, it provides no direct evidence for an effect whatsoever. Because of that, we don’t have much of a clue for when to act as if a treatment is effective, and we don’t have a clue about the chance that we will be making a mistake in taking a certain action. By contrast, a posterior probability of efficacy of 0.94 immediately tells us that if we act as if the treatment is effective we have a 0.06 chance of being wrong.
An optimum Bayes decision does something like maximizing expected utility. Expected utility is a function of the entire posterior distribution and the utility function. When the utility function puts a high penalty on using a drug when it doesn’t work (regulator’s regret), higher values of the posterior probability of efficacy will be required to make the decision of approving a drug. On the other hand, when patients do not have an alternative treatment available, as in rare diseases or Alzeimer’s, or when a drug is cheap and has no side effects, most people’s utility function will be such that just playing the odds will give a good decision. So in some cases if the probability of efficacy is 0.51 or greater it would be wise to act as if the drug is effective, and use it.
The latter issue underscores the silliness of the NEJM paper’s conclusion.
I’m convinced that’s the way it is. The difficulty is that simple tutorials or even books on Bayesian survival analysis in R are still rare today. This is a problem given that the selection of priors is subtle and nuanced. Do you have any suggestions?
I’ll soon be released a semi-comprehensive set of handouts on Bayesian methods in treatment comparisons, and will follow that up with a journal article that will hopefully be a tutorial. These papers should help.
Frank, it's very stimulating. Your commitment is remarkable, as always. I've been reading the BEST package tutorial tonight, and the Kruschke article. Very interesting and useful. The graphics are very nice, and the information it provides is incredible. However, note, there are no simple tutorials on Bayesian survival analysis. The explanations about available R packages on that topic are quite complicated, without many examples. Only real experts in the field can really take advantage of those explanations about Bayesian survival methods in R, so they don't play any didactic role. Today it is impossible for me to learn Bayesian survival analysis with these texts. I'd have to stop practicing medicine and study for two years. The lack of tutorials, I think, is the main reason why Bayesian statistics are not popular. No one knows how to handle Bayesian survival analysis (at least in R) except a few experts in the world. Just look at the explanation of the spBayesSurv package, possibly the work of a great genius, but an indecipherable hieroglyph for most people. I think a good contribution could be made from your field to ours, if any of you could bridge the gap between those apparently complicated algorithms and clinical reality. Perhaps a specific thread, similar to the dictionary of statistical terms, could be opened in Datamethods to gather simple information on Bayesian survival tutorials, when available.
We definitely need more tutorials in Bayesian survival analysis. I’ll keep a lookout for them. In the meantime look at this and ask for guidance here. Also this. Maybe one of us will do a tutorial with spBayesSurv. |
AKS primality testing decides whether a given integer is prime in $P$. The AKS algorithm is as follows:
Input: integer n > 1.
Check if $n$ is a perfect power: if $n = a^b$ for integers $a > 1$ and $b > 1$, output composite.
Find the smallest $r$ such that $\operatorname{ord}_r(n) > (\log_2 n)^2$ (if $r$ and $n$ are not coprime, then skip this $r$).
For all $2 \leq a \leq \min(r, n-1)$, check that $a$ does not divide $n$: if $a \mid n$ for some $2 \leq a \leq \min(r, n-1)$, output composite.
If $n ≤ r$, output prime.
For $a = 1$ to $\left\lfloor \sqrt{\varphi(r)}\,\log_2 n \right\rfloor$ do
if $(X+a)^n \not\equiv X^n+a \pmod{X^r - 1,\ n}$, output composite;
Output prime.
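A rough Python sketch of the first two steps above (the perfect-power test and the search for $r$); this is purely illustrative and makes no attempt at the efficiency or parallelism discussed below:

```python
import math

def is_perfect_power(n):
    # Step 1: check n = a^b for integers a > 1, b > 1.
    # (Float-based root finding; fine for moderate n in a sketch.)
    for b in range(2, n.bit_length() + 1):
        a = round(n ** (1.0 / b))
        for cand in (a - 1, a, a + 1):  # guard against rounding error
            if cand > 1 and cand ** b == n:
                return True
    return False

def find_r(n):
    # Step 2: smallest r with ord_r(n) > (log2 n)^2,
    # skipping any r not coprime to n.
    limit = math.log2(n) ** 2
    r = 2
    while True:
        if math.gcd(r, n) == 1:
            k, x = 1, n % r
            while x != 1 and k <= limit:
                x = (x * n) % r
                k += 1
            if k > limit:   # order exceeds (log2 n)^2
                return r
        r += 1
```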
After Emil's comment, note that step 5 is not in the $NC$ hierarchy. It also seems step 2 is not in $NC$, since $GCD$ is not known to be in $NC$.
Since we have the unconditional bound $r\leq \operatorname{polylog}(n)$, we may even look for the smallest $r$ in parallel (by testing $r\mid(n^k-1)$ in $NC$, with each parallel processor testing a different $r$) if coprimality is in $NC$, and we can use Boolean logic to combine steps 2 and 5 so they run in parallel. All other steps seem wholly parallelizable with some Boolean logic.
So is it true that if coprimality testing (rather than $GCD$ finding) and polynomial identity testing for the particular polynomial in step 5 are proved to be in $NC$, then primality is in $NC$?
It seems possible that coprimality is in $NC$. Note this is easier than $GCD$ finding.
Chemical stoichiometry
Category : JEE Main & Advanced
Stoichiometry (pronounced “stoy-key om-e-tree”) is the calculation of the quantities of reactants and products involved in a chemical reaction. That means quantitative calculations of chemical composition and reaction are referred to as stoichiometry.
Basically, this topic involves two types of calculations.
(a) Simple calculations (gravimetric analysis) and
(b) More complex calculations involving concentration and volume of solutions (volumetric analysis).
There is no borderline that distinguishes the set of laws applicable to gravimetric analysis from those applicable to volumetric analysis. All the laws used in one are equally applicable to the other, i.e., both the mole and the equivalent concept. But in actual practice, gravimetric problems involve simpler reactions, so the mole concept is convenient to apply, while volumetric reactions are complex and often unknown (unknown simply means not known to you, as it is not possible to remember all possible reactions), so the equivalent concept is easier to apply, as it does not require knowledge of the balanced equation.
(1) Gravimetric analysis : In gravimetric analysis we relate the weights of two substances, or the weight of a substance with the volume of a gas, or the volumes of two or more gases.

Problems Involving Mass-Mass Relationship
(i) Write down the balanced equation to represent the chemical change.
(ii) Write the number of moles below the formula of the reactants and products. Also write the relative weights of the reactants and products (calculated from the respective molecular formula), below the respective formula.
(iii) Apply the unitary method to calculate the unknown factor(s).
Problems Involving Mass-Volume Relationship
For solving problems involving mass-volume relationship, proceed according to the following instructions,
(i) Write down the relevant balanced chemical equation(s).
(ii) Write the weights of various solid reactants and products.
(iii) Gases are usually expressed in terms of volumes. In case the volume of the gas is measured at room temperature and pressure (or under conditions other than N.T.P.), convert it to N.T.P. by applying the gas equation.
(iv) The volume of a gas at any temperature and pressure can be converted into its weight, and vice-versa, with the help of the relation \[PV=\frac{g}{M}\times RT\] where \[g\] is the weight of the gas, \[M\] is the molecular weight of the gas, and \[R\] is the gas constant.
(v) Calculate the unknown factor by the unitary method.
Problems Based on Volume-Volume Relationship
Such problems can be solved using the balanced chemical equation, as follows:
(i) Write down the relevant balanced chemical equation.
(ii) Write down the volumes of reactants and products below the formula of each reactant and product, using the fact that 1 gram-molecule of every gaseous substance occupies 22.4 litres at N.T.P.
(iii) In case the volume of the gas is measured at a particular (or room) temperature, convert it to the volume at N.T.P. by using the ideal gas equation.
Take the help of Avogadro’s hypothesis: “Equal volumes of different gases under similar conditions of temperature and pressure contain the same number of molecules.”
(2) Volumetric analysis : It is a method which involves quantitative determination of the amount of a substance present in solution through volume measurements. For the analysis a standard solution is required. (A solution which contains a known weight of the solute in a known volume of the solution is known as a standard solution.)
Determining the strength of an unknown solution with the help of a known (standard) solution is known as titration. Different types of titrations are possible, which are summarised as follows:
(i) Redox titrations : To determine the strength of oxidising agents or reducing agents by titration with the help of a standard solution of reducing agents or oxidising agents.
Examples:
\[K_2Cr_2O_7+4H_2SO_4\to K_2SO_4+Cr_2(SO_4)_3+4H_2O+3[O]\]
\[[2FeSO_4+H_2SO_4+O\to Fe_2(SO_4)_3+H_2O]\times 3\]
Adding:
\[6FeSO_4+K_2Cr_2O_7+7H_2SO_4\to 3Fe_2(SO_4)_3+K_2SO_4+Cr_2(SO_4)_3+7H_2O\]
\[2KMnO_4+3H_2SO_4\to K_2SO_4+2MnSO_4+3H_2O+5[O]\]
\[[2FeSO_4+H_2SO_4+O\to Fe_2(SO_4)_3+H_2O]\times 5\]
Adding:
\[10FeSO_4+2KMnO_4+8H_2SO_4\to 5Fe_2(SO_4)_3+K_2SO_4+2MnSO_4+8H_2O\]
Similarly with \[H_2C_2O_4\]:
\[2KMnO_4+3H_2SO_4+5H_2C_2O_4\to K_2SO_4+2MnSO_4+8H_2O+10CO_2\] etc.
(ii) Acid-base titrations : To determine the strength of an acid or base with the help of a standard solution of base or acid.
Example: \[NaOH+HCl\to NaCl+{{H}_{2}}O\]
and \[NaOH+C{{H}_{3}}COOH\to C{{H}_{3}}COONa+{{H}_{2}}O\] etc.
(iii) Iodimetric titrations : This is a simple titration involving free iodine. It involves the titration of an iodine solution with a sodium thiosulphate solution whose normality is \[N\]. Let the volume of sodium thiosulphate used be \[V\,ml\].
\[{{I}_{2}}+2N{{a}_{2}}{{S}_{2}}{{O}_{3}}\to 2NaI+N{{a}_{2}}{{S}_{4}}{{O}_{6}}\]
($n$-factor: \[n=2\] for \[{{I}_{2}}\] and \[n=1\] for \[N{{a}_{2}}{{S}_{2}}{{O}_{3}}\])
Equivalents of \[{{I}_{2}}=\]Equivalent of \[N{{a}_{2}}{{S}_{2}}{{O}_{3}}\]
\[\therefore \] Equivalents of \[{{I}_{2}}=N\times V\times {{10}^{-3}}\]
Moles of \[{{I}_{2}}=\frac{N\times V\times {{10}^{-3}}}{2}\]
Mass of free \[{{I}_{2}}\] in the solution \[=\left[ \frac{N\times V\times {{10}^{-3}}}{2}\times 254 \right]\,g\].
(iv) Iodometric titrations : This is an indirect method of estimation of iodine. An oxidising agent is made to react with an excess of solid \[KI\]. The oxidising agent oxidises \[{{I}^{-}}\] to \[{{I}_{2}}\]. This iodine is then made to react with \[N{{a}_{2}}{{S}_{2}}{{O}_{3}}\] solution.
Oxidising agent: \[(A)+KI\to {{I}_{2}}\xrightarrow{2N{{a}_{2}}{{S}_{2}}{{O}_{3}}}2NaI+N{{a}_{2}}{{S}_{4}}{{O}_{6}}\]
Let the normality of the \[N{{a}_{2}}{{S}_{2}}{{O}_{3}}\] solution be \[N\] and the volume of thiosulphate consumed be \[V\,ml\].
Equivalent of \[A=\]Equivalent of \[{{I}_{2}}=\]Equivalents of \[N{{a}_{2}}{{S}_{2}}{{O}_{3}}\]
Equivalents of \[{{I}_{2}}\] liberated from \[KI=N\times V\times {{10}^{-3}}\]
Moles of \[{{I}_{2}}\] liberated from \[KI=\frac{N\times V\times {{10}^{-3}}}{2}\]
Mass of \[{{I}_{2}}\] liberated from \[KI=\left[ \frac{N\times V\times {{10}^{-3}}}{2}\times 254 \right]g\].
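A quick numerical check of the last three formulas (the normality and volume are made-up values):

```python
# Hypothetical iodometric titration: normality N and volume V (ml)
# of Na2S2O3 consumed; 254 g/mol is the molar mass of I2.
N = 0.1     # normality of Na2S2O3 (assumed)
V = 25.0    # volume consumed, in ml (assumed)

equivalents_I2 = N * V * 1e-3   # equivalents of I2 liberated from KI
moles_I2 = equivalents_I2 / 2   # n-factor of I2 is 2
mass_I2 = moles_I2 * 254        # grams of I2

print(f"{mass_I2:.4f} g of I2")  # 0.3175 g for these inputs
```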
(v) Precipitation titrations : Determining anions like \[C{{N}^{-}},\ AsO_{3}^{3-},\ PO_{4}^{3-},\ {{X}^{-}}\] etc., by precipitating them with \[AgN{{O}_{3}}\] provides examples of precipitation titrations.
\[NaCl+AgN{{O}_{3}}\to AgCl\downarrow +NaN{{O}_{3}}\] \[KSCN+AgN{{O}_{3}}\to AgSCN\downarrow +KN{{O}_{3}}\]
End point and equivalence point : The point at which the titration is stopped is known as the end point, while the point at which the acid and base (or oxidising and reducing agents) have been added in equivalent quantities is known as the equivalence point. Since the purpose of the indicator is to stop the titration close to the point at which the reacting substances were added in equivalent quantities, it is important that the equivalence point and the end point be as close as possible.

Normal solution : A solution containing one gram equivalent weight of the solute dissolved per litre is called a normal solution; e.g., when 40 g of NaOH are present in one litre of NaOH solution, the solution is known as a normal (N) solution of NaOH. Similarly, a solution containing a fraction of the gram equivalent weight of the solute dissolved per litre is known as a subnormal solution. For example, a solution of NaOH containing 20 g (1/2 of the g eq. wt.) of NaOH dissolved per litre is a subnormal solution. It is written as N/2 or 0.5 N solution.
Formula used in solving numerical problems on volumetric analysis
(1) Strength of solution = amount of substance in g \[litr{{e}^{-1}}\]
(2) Strength of solution = amount of substance in g moles \[litr{{e}^{-1}}\]
(3) Strength of solution = Normality \[\times\] Eq. wt. of the solute = Molarity \[\times\] Mol. wt. of solute
(4) \[\text{Molarity}=\frac{\text{Moles of solute}}{\text{Volume in litre}}\]
(5) \[\text{Number of moles}=\frac{\text{Wt}\text{. in }gm}{\text{Mol}\text{. wt}\text{.}}=M\times {{V}_{(in\,l)}}\]
\[=\frac{\text{Volume in litres}}{22.4}\] at NTP (only for gases)
(6) Number of millimoles \[=\frac{\text{Wt}\text{. in }gm\text{ }\times \text{1000}}{\text{mol}\text{. wt}\text{.}}\]
\[=\text{Molarity}\times \text{Volume in }ml.\]
(7) Number of equivalents
\[=\frac{\text{Wt}\text{. in }gm}{\text{Eq}\text{. wt}\text{.}}=x\times \text{No}\text{. of moles}=\text{Normality}\times \text{Volume in litre}\]
(8) Number of milliequivalents (meq.)
\[=\frac{\text{Wt}\text{. in }gm\times \text{1000}}{\text{Eq}\text{. wt}\text{.}}=\text{normality}\times \text{Volume in }ml.\]
(9) Normality \[=x\times \text{Molarity}=\frac{\text{Strength in }gm\,litr{{e}^{-1}}}{\text{Eq}\text{. wt}\text{.}}\]
where \[x=\frac{\text{Mol}\text{. wt}\text{.}}{\text{Eq}\text{. wt}\text{.}}\] = valency or change in oxidation number.
(10) Normality formula, \[{{N}_{1}}{{V}_{1}}={{N}_{2}}{{V}_{2}}\]
(11) % by weight \[=\frac{\text{Wt}\text{. of solute}}{\text{Wt}\text{. of solution}}\times 100\]
(12) % by volume \[=\frac{\text{Wt}\text{. of solute}}{\text{Vol}\text{. of solution}}\times 100\]
(13) % by strength \[=\frac{\text{Vol}\text{. of solute}}{\text{Vol}\text{. of solution}}\times 100\]
(14) Specific gravity \[=\frac{\text{Wt}\text{. of solution}}{\text{Vol}\text{. of solution}}=\text{Wt}\text{. of 1 }ml.\text{ of solution}\]
(15) Formality \[=\frac{\text{Wt}\text{. of ionic solute}}{\text{Formula Wt}\text{. of solute}\times {{V}_{in\,l}}}\]
(16) Mol. wt. = V.D. \[\times\] 2 (for gases only; V.D. = vapour density)
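A small sketch tying a few of these formulas together (the solution data are invented for illustration):

```python
# Hypothetical solution: 4.0 g of NaOH made up to 500 ml of solution.
wt = 4.0          # g of solute (assumed)
mol_wt = 40.0     # molecular weight of NaOH
eq_wt = 40.0      # equivalent weight of NaOH, so x = 1
volume_l = 0.5    # volume of solution in litres

moles = wt / mol_wt                   # formula (5)
molarity = moles / volume_l           # formula (4)
strength = wt / volume_l              # formula (1), g per litre
normality = strength / eq_wt          # formula (9)

print(molarity, normality, strength)  # 0.2 M, 0.2 N, 8.0 g/litre
```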
Consider a scenario tree with 4 stages. The first stage is the root node. This root node has two children, so the second stage has 2 nodes. Each node of the scenario tree has 2 children. So in total we have 15 nodes.
How many nodes will the deterministic equivalent of this have?
I'm not sure I understand your scenario tree structure. Therefore, I will give a hypothetical example to help you understand the reasoning procedure. Let's assume your problem has three random parameters, each with two possible outcomes (let's say zero or one). First, note that the root node does not represent any of the random parameters, and each column after the root node represents a random parameter. Consequently, columns #1, #2, and #3 would have 2, 4, and 8 nodes. Ultimately, the probability space \(\Omega\) would have eight scenarios (equal to the number of nodes in the last column). Assuming that each parameter takes either zero or one with equal probability, the probability of each scenario would be \(0.5 \times 0.5 \times 0.5 = 0.125\).
$$ \Omega = \begin{Bmatrix} \xi_1=(0, 0, 0),p_1 = 0.125\\ \xi_2=(0, 0, 1),p_2 = 0.125\\ \xi_3=(0, 1, 0),p_3 = 0.125\\ \xi_4=(0, 1, 1),p_4 = 0.125\\ \xi_5=(1, 0, 0),p_5 = 0.125\\ \xi_6=(1, 0, 1),p_6 = 0.125\\ \xi_7=(1, 1, 0),p_7 = 0.125\\ \xi_8=(1, 1, 1),p_8 = 0.125 \end{Bmatrix} $$ |
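The scenario set above is easy to enumerate programmatically; a sketch (names are my own):

```python
from itertools import product

# Three binary random parameters, each 0 or 1 with probability 0.5.
outcomes, p = (0, 1), 0.5

scenarios = [(xi, p ** 3) for xi in product(outcomes, repeat=3)]
for xi, prob in scenarios:
    print(xi, prob)   # eight scenarios, each with probability 0.125

# Sanity check: the scenario probabilities sum to one.
assert abs(sum(prob for _, prob in scenarios) - 1.0) < 1e-12
```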
Introduction to AC Circuits
concept
All through DC we dealt with voltage and current sources that were constant. We analysed our circuits and figured out the one, single value of a voltage here or a current there. In AC we'll be dealing with voltage and current sources that not only change their values, but change their polarity as well. So a current that was 5A in one direction could shift to be 5A in the opposite direction a second later. AC is how power is delivered to your home and is a fundamental part of all electronics that draw power from the power grid. Even if you want your circuits to work with DC, if you're powering them from a socket in your home you'll have to convert that AC to DC, which you can't do unless you know how to analyse AC circuits. AC can seem a little daunting at first because it changes a lot of what we've been doing up until now. But the truth is that going from DC to AC is much less difficult than picking up DC was in the first place. If you are already able to analyse DC circuits then you can learn to analyse AC circuits; just don't let the differences intimidate you.
fact
AC stands for "Alternating Current", because circuits driven by AC signals cause the current to change direction back and forth as they operate.
We'll primarily deal with AC voltage sources (since this is what you deal with most often out in the real world).
fact
The symbol for an AC voltage source is:
fact
AC sources don't just change their values randomly, they are periodic, which means that they give off a repeating pattern (most often a sine wave) like the image below:
Sine waves are, by far, the most common way to transmit AC power and they'll be the main type of AC source we deal with here.
A sine wave is defined by three things:
Amplitude, frequency, and phase.
When dealing with AC we'll most often use radians rather than degrees. If you're not familiar with radians go take a quick read, it's important you understand what it is, how it connects to the sine wave, and how to use it.
fact
A sine wave with amplitude A, frequency \(\omega\) and phase \(\theta\) is given mathematically by: \(g(t) = A\sin(\omega t + \theta)\)
fact
The amplitude of a sine wave \(A\) is half the vertical distance between the top peak and the bottom peak. We'll almost always have the sine wave centered on the zero line so the amplitude is the height of the peak.
The amplitude is also sometimes called the magnitude, as a historical reference to the sine wave's connection to circles and polar coordinates.
fact
When writing the amplitude of a specific quantity (like the voltage), we often write it like \(V_m\) rather than a generic \(A\).
fact
The period of a sine wave is the time it takes for the wave to complete one cycle.
fact
The frequency of a sine wave (\(f\)) is a measure of how quickly it repeats itself. One period (\(T\)) is the time it takes for the sine wave to repeat itself completely, for instance going from the top peak, down to the bottom peak, and back up to the top peak. The frequency is given by \(f = \frac{1}{T}\) Hz, where \(T\) is the period in seconds and Hz stands for "Hertz", the unit of cycles per second.
example
Find the frequency in Hz of the following sine wave: The frequency can be found either by counting the number of times the wave repeats itself in 1 second (which is the definition of Hertz) or by finding the period of the wave (\(T\)) and using the formula \(f = \frac{1}{T}\). For this example it seems easier to use the second method, since we can clearly see that the wave repeats every 20 ms. So \(f = \frac{1}{20\cdot 10^{-3}} = 50\) Hz. Incidentally, voltage travelling through power lines to homes is often either 50 Hz or 60 Hz depending on your country.
We often use a different unit for frequency in some applications, called "radians per second" which is given the symbol \(\omega\) (that's a small Greek omega).
fact
To find the frequency of a sine wave in radians per second (often written rad/s or just rads) use the following formula: \(\omega = \frac{2\pi}{T}\)rad/s Where \(T\) is the period of the wave.
fact
Convert between Hertz and rad/s using the following identities: \(f = \frac{\omega}{2\pi}\)Hz \(\omega = 2\pi f\)rad/s
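A tiny sketch of these conversions (illustrative only):

```python
import math

def hz_to_rads(f):
    return 2 * math.pi * f       # omega = 2*pi*f

def rads_to_hz(omega):
    return omega / (2 * math.pi)

# The 50 Hz mains example from earlier:
omega = hz_to_rads(50)           # ~314.16 rad/s
print(omega, rads_to_hz(omega))  # round-trips back to 50 Hz
```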
You're likely to see \(2\pi f\) quite frequently when working with AC electronics.
fact
The phase of a sine wave is a measure of which angle the sine wave "started" at. For a wave \(g(t) = A\sin(\omega t + \theta)\) it is given by the formula \(\theta = \sin^{-1}(g(0)/A)\). That is, the phase shift is the angle that the wave "starts" at, assuming that it is the \(\sin\) function.
example
Find the phase of the following sine wave: Here the amplitude is \(A = 1\) and \(g(0) = \frac{1}{\sqrt{2}}\), so \(\theta = \frac{\pi}{4}\).
fact
As you can see in the figure below, the two sine waves are really the same wave but one has been shifted a little compared to the other. For unit-amplitude waves we measure this shift (\(\phi\)) by the formula \(\phi = \sin^{-1}(g(0)) - \sin^{-1}(h(0))\). If \(\phi \gt 0\) we say that h "leads" g (or that g "lags" h). If \(\phi \lt 0\) we say that g "leads" h (or that h "lags" g). Our selection of which one is "h" and which is "g" is arbitrary.
You can see that leading just means that the wave seems to have started earlier whereas lagging means the wave seems to have started later. Of course since these waves are periodic we could say that the lagging wave is really just leading by a whole lot, we choose "leading" and "lagging" based on whichever results in the lowest difference between the two waves. It's always important to know if you're working in degrees or radians, lost marks abound when students forget which one they're using; or which one their calculator is using.
practice problems |
The classic “Lockean” thesis about full and partial belief says full belief is rational iff strong partial belief is rational. Hannes Leitgeb’s “Humean” thesis proposes a subtler connection. $ \newcommand\p{Pr} \newcommand{\B}{\mathbf{B}} \newcommand{\given}{\mid} $
The Humean Thesis
For a rational agent whose full beliefs are given by the set $\mathbf{B}$, and whose credences by the probability function $\p$: $B \in \mathbf{B}$ iff $\p(B \given A) > t$ for all $A$ consistent with $\mathbf{B}$.
Notice that we can think of this as, instead, a coherentist theory of justification. Suppose we replace credence with “evidential” probability (think: Carnap, Williamson). Then we get a theory of justification where beliefs aren’t justified in isolation. It’s not enough for a belief to be highly probable in its own right, it has to be part of a larger body that underwrites that high probability.
Flipping things around, the coherentist theory of justification from my last wacky post doubles as an even wackier theory of full belief. The Humean view is roughly that a belief is justified iff its fellows secure its high probability. Now the “Super-Humean” view says a belief is justified to the extent its fellows secure its high centrality.
(Last time we explored one fun way of measuring centrality, drawing on coherentism for inspiration, and network theory for the math. But network theory offers many others ways of measuring centrality, which could be slotted in here to provide alternative theories of full and partial belief.)
Like Leitgeb’s Humean view, the Super-Humean view has a holistic character. Instead of evaluating full beliefs just by looking at your credences, we also have to look at what else you believe.
Another parallel: both theories have a permissive quality. Leitgeb presents examples where more than one set $\B$ fits with a given credence function $\p$, on the Humean view. And the same will be true on the Super-Humean view.
But there are interesting differences. We can evaluate beliefs individually on the Super-Humean account, even though our method of evaluation is holistic. True, a belief’s justification depends on what else you believe. But your beliefs don’t all stand or fall together; some can come out justified even though others come out unjustified.
Strictly speaking, some beliefs come out highly justified even though others come out hardly justified. Because, differing again from the Humean view, evaluations are graded on the Super-Humean view. Each belief is assigned a degree of justification.
One nice thing about the Super-Humean view, then, is that it allows for “non-ideal” theorizing. We can study non-ideal agents, and discern more justified beliefs from lesser ones.
“But does it handle the lottery and preface paradoxes?”, is the question we always ask about a theory of full belief. As is so often the case, the answer is “yes, but…”.
Consider a lottery of $100$ tickets with one to be selected at random as the winner. If you believe of each ticket that it will lose, we have a network of $101$ nodes: $L_1$ through $L_{100}$, plus the tautology node $\top$. How strong are the connections between these nodes? Assuming we take $L_3$ through $L_{100}$ as givens in determining the weight of the $L_2 \rightarrow L_1$ arrow, it gets weight $0$ since$$\p(L_1 \given L_2 \wedge L_3 \wedge \ldots \wedge L_{100}) = 0.$$And likewise for all the other arrows,[1] except those pointing to the $\top$ node (they always get weight $1$). All the $L_i$ beliefs thus come out with rock-bottom justification compared to $\top$, i.e. you aren't justified in believing these lottery propositions.
Contrast that with a preface case, where you believe each of $100$ claims you’ve researched, $C_1$ through $C_{100}$. These claims are positively correlated though, or at least independent.[2] So$$\p(C_1 \given C_2 \wedge C_3 \wedge \ldots \wedge C_{100}) \approx 1,$$and likewise for the other $C_i$. The belief-graph here is thus tightly connected, and the $C_i$ nodes will score high on centrality compared to $\top$. So you’re highly justified in your beliefs in the preface case.
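To make the contrast concrete, here is a rough sketch of the kind of computation involved, using PageRank as the centrality measure (my choice, echoing the "Google hack"; the edge weights are stylized stand-ins for the conditional probabilities, not outputs of a real probability model):

```python
import networkx as nx

def belief_graph(n_beliefs, weight, eps=1e-3):
    # Directed graph: each belief points at every other belief with the
    # given weight, and at the tautology node "T" with weight 1.
    G = nx.DiGraph()
    nodes = [f"B{i}" for i in range(n_beliefs)]
    for u in nodes:
        G.add_edge(u, "T", weight=1.0)
        for v in nodes:
            if u != v:
                G.add_edge(u, v, weight=max(weight, eps))
    return G

# Lottery: P(L_i | rest) = 0, so inter-belief arrows get ~zero weight.
lottery = nx.pagerank(belief_graph(10, 0.0), weight="weight")
# Preface: P(C_i | rest) ~ 1, so inter-belief arrows get weight ~1.
preface = nx.pagerank(belief_graph(10, 1.0), weight="weight")

print(lottery["B0"], lottery["T"])  # belief nodes far below T
print(preface["B0"], preface["T"])  # belief nodes much closer to T
```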
So far so good, at least if you think—as I tend to—that lottery beliefs should come out unjustified, while preface beliefs should come out justified. What’s the “but…” then? I see two issues (at least).
First, we had to assume that all your remaining beliefs are taken as given in assessing the weight of a connection like $L_2 \rightarrow L_1$. That worked out well here. But as a general rule, it doesn’t always have great results, as Juan Comesaña noted about our treatment of the Tweety case last time.
We could go all the way to the other extreme of course, and just evaluate the $L_2 \rightarrow L_1$ connection in isolation by looking at $\p(L_1 \given L_2)$. But that seems too extreme, since it means ignoring the agent’s other beliefs altogether.
What we want is something in between, it seems. We want the agent’s other beliefs to “get in the way” enough that they substantially weaken the connections in the lottery graph. But we don’t want them to be taken entirely for granted. Exactly how to achieve the right balance here is something I’m not sure about.
Second issue: what if you only adopt a few lottery beliefs, just $L_1$ and $L_2$ for example? Then we can’t exploit the “collective defeat” that drove our treatment of the lottery.
You might respond that this is a fine result, since isolated lottery beliefs are actually justified. It’s only when you apply the same logic to all the tickets that your justification is undercut. But I find this unsatisfying.
Maybe a student encountering the paradox for the first time is justified in believing their ticket will lose. But it should be enough to defeat that justification that they merely realize they could believe the same thing about all the other tickets, for identical reasons. Even if they don’t go ahead to form those beliefs, they should drop the one belief they had about their own ticket.
This is one way in which Leitgeb’s Humean theory seems superior to me. On the Humean view, which beliefs are rational depends on how the space of possibilities is partitioned (see Leitgeb 2014). And the partition is determined by the context—how the subject frames the situation in their mind. (At least, that’s how I understand Leitgeb here.) So just realizing the symmetry of the lottery paradox is enough to defeat justification, on the Humean view.
Example: imagine we’ll flip a coin of unknown bias $10$ times. And suppose the probabilities obey Laplace’s Rule of Succession (a.k.a. Carnap’s $\mathfrak{m}^*$ confirmation function). Then, if you believe each flip will land heads, your beliefs will all come out highly justified, i.e. highly central in your web of $10$ beliefs. But they’d have the same justification if you instead believed each flip will land tails.
Permissivism aside, this might seem a pretty bad result on its own. Even if our theory fixed which way you should go, say heads instead of tails, that would be pretty weird. Shouldn’t you wait for at least a few flips before forming any such beliefs?
The problem is that we haven’t required your beliefs to be inherently probable, only that they render one another probable. The Lockean and Humean theories have such a threshold requirement built-in, but we can build it into our theory too. We can just stipulate that a full belief should be highly probable, as well as being highly central in the network of all your beliefs.
[1] More carefully, each arrow gets the minimum possible weight. If we use the “Google hack” from last time, this is some small positive number $\epsilon$ instead of $0$.
[2] Notice we’re borrowing the crux of Pollock’s (1994) classic treatment of the lottery and preface paradoxes. We’re just plugging his observation into a different formal framework.
Powers of a Matrix Cannot be a Basis of the Vector Space of Matrices
Problem 375
Let $n>1$ be a positive integer. Let $V=M_{n\times n}(\C)$ be the vector space over the complex numbers $\C$ consisting of all complex $n\times n$ matrices. The dimension of $V$ is $n^2$. Let $A \in V$ and consider the set \[S_A=\{I=A^0, A, A^2, \dots, A^{n^2-1}\}\] of $n^2$ elements. Prove that the set $S_A$ cannot be a basis of the vector space $V$ for any $A\in V$.
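A proof sketch (my addition, not the site's posted solution): by the Cayley–Hamilton theorem, $A$ satisfies its characteristic polynomial, which has degree $n$, so $A^n\in\operatorname{span}\{I, A, \dots, A^{n-1}\}$, and by induction every power $A^k$ with $k\geq n$ lies in that span. Hence \[\dim \operatorname{span}(S_A)\leq n < n^2 \quad (n>1),\] so the $n^2$ elements of $S_A$ are linearly dependent and cannot form a basis of $V$.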
Every Basis of a Subspace Has the Same Number of Vectors: Let $V$ be a subspace of $\R^n$. Suppose that $B=\{\mathbf{v}_1, \mathbf{v}_2, \dots, \mathbf{v}_k\}$ is a basis of the subspace $V$. Prove that every basis of $V$ consists of $k$ vectors in $V$. Hint. You may use the following fact: Fact. If […]
Dimension of the Sum of Two Subspaces: Let $U$ and $V$ be finite dimensional subspaces in a vector space over a scalar field $K$. Then prove that \[\dim(U+V) \leq \dim(U)+\dim(V).\] Definition (The sum of subspaces). Recall that the sum of subspaces $U$ and $V$ is \[U+V=\{\mathbf{x}+\mathbf{y} \mid […]
Linear Transformation and a Basis of the Vector Space $\R^3$: Let $T$ be a linear transformation from the vector space $\R^3$ to $\R^3$. Suppose that $k=3$ is the smallest positive integer such that $T^k=\mathbf{0}$ (the zero linear transformation) and suppose that we have $\mathbf{x}\in \R^3$ such that $T^2\mathbf{x}\neq \mathbf{0}$. Show […]
Prove a Given Subset is a Subspace and Find a Basis and Dimension: Let \[A=\begin{bmatrix}4 & 1\\3& 2\end{bmatrix}\] and consider the following subset $V$ of the 2-dimensional vector space $\R^2$. \[V=\{\mathbf{x}\in \R^2 \mid A\mathbf{x}=5\mathbf{x}\}.\] (a) Prove that the subset $V$ is a subspace of $\R^2$. (b) Find a basis for […]
Any Vector is a Linear Combination of Basis Vectors Uniquely: Let $B=\{\mathbf{v}_1, \mathbf{v}_2, \mathbf{v}_3\}$ be a basis for a vector space $V$ over a scalar field $K$. Then show that any vector $\mathbf{v}\in V$ can be written uniquely as \[\mathbf{v}=c_1\mathbf{v}_1+c_2\mathbf{v}_2+c_3\mathbf{v}_3,\] where $c_1, c_2, c_3$ are […]
Measurement of electrons from beauty hadron decays in pp collisions at $\sqrt{s} = 7$ TeV
(Elsevier, 2013-04-10)
The production cross section of electrons from semileptonic decays of beauty hadrons was measured at mid-rapidity ($|y| < 0.8$) in the transverse momentum range $1 < p_T < 8$ GeV/$c$ with the ALICE experiment at the CERN LHC in ...
Multiplicity dependence of the average transverse momentum in pp, p-Pb, and Pb-Pb collisions at the LHC
(Elsevier, 2013-12)
The average transverse momentum $\langle p_T \rangle$ versus the charged-particle multiplicity $N_{ch}$ was measured in p-Pb collisions at a collision energy per nucleon-nucleon pair $\sqrt{s_{NN}}$ = 5.02 TeV and in pp collisions at ...
Directed flow of charged particles at mid-rapidity relative to the spectator plane in Pb-Pb collisions at $\sqrt{s_{NN}}$=2.76 TeV
(American Physical Society, 2013-12)
The directed flow of charged particles at midrapidity is measured in Pb-Pb collisions at $\sqrt{s_{NN}}$=2.76 TeV relative to the collision plane defined by the spectator nucleons. Both the rapidity-odd ($v_1^{odd}$) and ...
Long-range angular correlations of π, K and p in p–Pb collisions at $\sqrt{s_{NN}}$ = 5.02 TeV
(Elsevier, 2013-10)
Angular correlations between unidentified charged trigger particles and various species of charged associated particles (unidentified particles, pions, kaons, protons and antiprotons) are measured by the ALICE detector in ...
Anisotropic flow of charged hadrons, pions and (anti-)protons measured at high transverse momentum in Pb-Pb collisions at $\sqrt{s_{NN}}$=2.76 TeV
(Elsevier, 2013-03)
The elliptic, $v_2$, triangular, $v_3$, and quadrangular, $v_4$, azimuthal anisotropic flow coefficients are measured for unidentified charged particles, pions, and (anti-)protons in Pb–Pb collisions at $\sqrt{s_{NN}}$ = ...
Measurement of inelastic, single- and double-diffraction cross sections in proton-proton collisions at the LHC with ALICE
(Springer, 2013-06)
Measurements of cross sections of inelastic and diffractive processes in proton--proton collisions at LHC energies were carried out with the ALICE detector. The fractions of diffractive processes in inelastic collisions ...
Transverse Momentum Distribution and Nuclear Modification Factor of Charged Particles in p-Pb Collisions at $\sqrt{s_{NN}}$ = 5.02 TeV
(American Physical Society, 2013-02)
The transverse momentum ($p_T$) distribution of primary charged particles is measured in non single-diffractive p-Pb collisions at $\sqrt{s_{NN}}$ = 5.02 TeV with the ALICE detector at the LHC. The $p_T$ spectra measured ...
Mid-rapidity anti-baryon to baryon ratios in pp collisions at $\sqrt{s}$ = 0.9, 2.76 and 7 TeV measured by ALICE
(Springer, 2013-07)
The ratios of yields of anti-baryons to baryons probe the mechanisms of baryon-number transport. Results for anti-proton/proton, anti-$\Lambda/\Lambda$, anti-$\Xi^{+}/\Xi^{-}$ and anti-$\Omega^{+}/\Omega^{-}$ in pp ...
Charge separation relative to the reaction plane in Pb-Pb collisions at $\sqrt{s_{NN}}$= 2.76 TeV
(American Physical Society, 2013-01)
Measurements of charge dependent azimuthal correlations with the ALICE detector at the LHC are reported for Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV. Two- and three-particle charge-dependent azimuthal correlations ...
Multiplicity dependence of two-particle azimuthal correlations in pp collisions at the LHC
(Springer, 2013-09)
We present the measurements of particle pair yields per trigger particle obtained from di-hadron azimuthal correlations in pp collisions at $\sqrt{s}$=0.9, 2.76, and 7 TeV recorded with the ALICE detector. The yields are ... |
1. Matrix
A matrix is a grid of $m$ rows and $n$ columns. It contains the coefficients of a linear system. These are the so-called elements of the matrix.
Example I shows how a linear system is turned into a matricial equation in the form $A\vec{x} = \vec{y}$.
Example I
Index notation is used to specify a matrix element $a_{i,j}$, with $i$ the row and $j$ the column. For example, the element $a_{3,2}$ equals $4$ in the $A$ matrix of example I.
2. Matrix elimination
The starting point for a matrix elimination is the augmented matrix, as example II shows. The variable elimination is performed by adding two rows together. One of the two rows is then replaced by the result of the addition.

Example II

Once we get the augmented matrix in echelon form we can solve back the system by going from the bottom up, as follows:
$ 10z = 30 \iff$ $z = 3$
Let’s replace $z$ in the second row :
$ y + 1 \cdot \underbrace{3}_{z} = 1 \iff$ $y = -2$
Let’s replace $y$ and $z$ in the first row :
$-1x + (-3) \cdot \underbrace{(-2)}_{y} + 1 \cdot \underbrace{3}_{z} = 10 \iff -x + 9 = 10 \iff$ $x = -1$
The solution is: $(x, y, z) = (-1, -2, 3)$.
Example III
From the last row of the echelon matrix we have :
$0x+0y+0z = 5$
That equation is unsolvable; the system then has no solution.
Example IV
From the last row of the echelon matrix we have :
$0x+0y = 0 \iff 0=0$
That equation does not provide any information about the variables.
Let’s move on to the upper row from which we get :
$1x - 1y = 1 \iff x = 1+y$. Let $y = \beta$.
Only one relevant equation remains but the system has two variables. Thus the system has infinitely many solutions, and the general solution is: $(x, y) = (1+\beta,\ \beta)$ for any $\beta \in \mathbb{R}$.
Recapitulation
Transforming a linear system into a matricial equation separates the coefficients from the variables. This makes things more readable.
Solving a matricial equation is done by performing operations on the augmented matrix rows, namely interchanging rows, multiplying a row by a factor (different from $0$), and replacing a row by the sum of two rows. We repeat this until we eventually get a matrix in echelon form (like a staircase).
From the echelon matrix, we solve the rows by substituting back the values found, from the bottom up. Rows of all zeroes do not provide any information about the variables and can thus be ignored.
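For readers who want to check such computations numerically, here is a small sketch using NumPy, solving the system of Example II as reconstructed above:

```python
import numpy as np

# Echelon system from Example II:
#   -x - 3y +  z = 10
#         y +  z =  1
#             10z = 30
A = np.array([[-1.0, -3.0,  1.0],
              [ 0.0,  1.0,  1.0],
              [ 0.0,  0.0, 10.0]])
b = np.array([10.0, 1.0, 30.0])

print(np.linalg.solve(A, b))   # [-1. -2.  3.]
```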
A (quasi)variety $\mathcal{K}$ of algebraic structures has equationally definable principal (relative) congruences (EDP(R)C) if there is a finite conjunction of atomic formulas $\phi(u,v,x,y)$ such that for all algebraic structures $\mathbf{A}\in\mathcal{K}$ we have $\langle x,y\rangle\in\mbox{Cg}_{\mathcal{K}}(u,v)\iff \mathbf{A}\models \phi(u,v,x,y)$. Here $\theta=\mbox{Cg}_{\mathcal{K}}(u,v)$ denotes the smallest (relative) congruence that identifies the elements $u,v$, where “relative” means that $\mathbf{A}//\theta\in\mathcal{K}$. Note that when the structures are algebras, the atomic formulas are simply equations.
Relative congruence extension property
Relatively congruence distributive
W. J. Blok and D. Pigozzi, On the structure of varieties with equationally definable principal congruences. I, II, III, IV, Algebra Universalis, 15 (1982), 195-227 MRreview; 18 (1984), 334-379 MRreview; 32 (1994), 545-608 MRreview; 31 (1994), 1-35 MRreview
Use Sylow’s theorem to determine the number of $5$-Sylow subgroups of the group $G$. Check out the post Sylow’s Theorem (summary) for a review of Sylow’s theorem.
Proof.
(a) When $|G|=100$.
The prime factorization of $100$ is $2^2\cdot 5^2$. Let us determine the number $n_5$ of $5$-Sylow subgroups of $G$. By Sylow’s theorem, we know that $n_5 \equiv 1 \pmod{5}$ and $n_5$ divides $2^2$. The only number satisfying both constraints is $n_5=1$. Thus there is only one $5$-Sylow subgroup of $G$. This implies that the $5$-Sylow subgroup is a normal subgroup of $G$. Since the order of the $5$-Sylow subgroup is $25$, it is a proper nontrivial normal subgroup. Thus, the group $G$ is not simple.
(b) When $|G|=200$
The prime factorization is $200=2^3\cdot 5^2$. We again consider the number $n_5$ of $5$-Sylow subgroups of $G$.
Sylow’s theorem implies that $n_5 \equiv 1 \pmod{5}$ and $n_5$ divides $2^3$. These two constraints have only one solution, $n_5=1$. Thus the group $G$ has a proper nontrivial normal $5$-Sylow subgroup of order $25$. Hence $G$ is not a simple group.
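The arithmetic in both cases is simple enough to check mechanically (a throwaway sketch):

```python
# Candidates for n_5: divisors of the 2-part of |G| that are
# congruent to 1 modulo 5.
def n5_candidates(two_part):
    return [d for d in range(1, two_part + 1)
            if two_part % d == 0 and d % 5 == 1]

print(n5_candidates(4))   # |G| = 100 = 2^2 * 5^2  ->  [1]
print(n5_candidates(8))   # |G| = 200 = 2^3 * 5^2  ->  [1]
```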
Group of Order $pq$ Has a Normal Sylow Subgroup and is Solvable: Let $p, q$ be prime numbers such that $p>q$. If a group $G$ has order $pq$, then show the following. (a) The group $G$ has a normal Sylow $p$-subgroup. (b) The group $G$ is solvable. Definition/Hint. For (a), apply Sylow's theorem. To review Sylow's theorem, […]
Sylow Subgroups of a Group of Order 33 are Normal Subgroups: Prove that any $p$-Sylow subgroup of a group $G$ of order $33$ is a normal subgroup of $G$. Hint. We use Sylow's theorem. Review the basic terminology and Sylow's theorem. Recall that if there is only one $p$-Sylow subgroup $P$ of $G$ for a fixed prime $p$, then $P$ […]
Group of Order 18 is Solvable: Let $G$ be a finite group of order $18$. Show that the group $G$ is solvable. Definition. Recall that a group $G$ is said to be solvable if $G$ has a subnormal series \[\{e\}=G_0 \triangleleft G_1 \triangleleft G_2 \triangleleft \cdots \triangleleft G_n=G\] such […]
Non-Abelian Group of Order $pq$ and its Sylow Subgroups: Let $G$ be a non-abelian group of order $pq$, where $p, q$ are prime numbers satisfying $q \equiv 1 \pmod p$. Prove that a $q$-Sylow subgroup of $G$ is normal and the number of $p$-Sylow subgroups is $q$. Hint. Use Sylow's theorem. To review Sylow's theorem, check […]
Every Group of Order 12 Has a Normal Subgroup of Order 3 or 4: Let $G$ be a group of order $12$. Prove that $G$ has a normal subgroup of order $3$ or $4$. Hint. Use Sylow's theorem. (See Sylow’s Theorem (Summary) for a review of Sylow's theorem.) Recall that if there is a unique Sylow $p$-subgroup in a group $G$, then it is […]
A Group of Order $20$ is Solvable: Prove that a group of order $20$ is solvable. Hint. Show that a group of order $20$ has a unique normal $5$-Sylow subgroup by Sylow's theorem. See the post summary of Sylow’s Theorem to review Sylow's theorem. Proof. Let $G$ be a group of order $20$. The […]
Let $X$ be a smooth complex quasi-projective variety. We can find a good compactification: a smooth proper variety $\bar{X}$ such that ${\bar X} \setminus X$ is a divisor with normal crossings. The variety $\bar{X}$ is then stratified by the singularities of the divisor, and one can compute the mixed Hodge structure on $H^{\bullet}(X)$ in terms of the pure Hodge structures $H^{\bullet}(S_\alpha)$ of the smooth closed strata using a spectral sequence.
Let's say a variety $Y$ is Hodge-Tate if $h^{p,q}(Y) = 0$ for $p\neq q$.
If all the closed strata of $\bar{X}$ are Hodge-Tate then $X$ is Hodge-Tate.
Question: Let $X$ be a smooth complex quasi-projective variety. Assume $X$ is Hodge-Tate.
Can one find a good compactification $\bar{X}$ with Hodge-Tate strata? Are all good compactifications of $X$ of this type? (Edit: Answer is no, see Torsten's elementary example). |
Hi, Can someone provide me some self reading material for Condensed matter theory? I've done QFT previously for which I could happily read Peskin supplemented with David Tong. Can you please suggest some references along those lines? Thanks
@skullpatrol The second one was in my MSc and covered considerably less than my first and (I felt) didn't do it in any particularly great way, so distinctly average. The third was pretty decent - I liked the way he did things and was essentially a more mathematically detailed version of the first :)
2. A weird particle or state that is made of a superposition of a torus region with clockwise momentum and anticlockwise momentum, resulting in one that has no momentum along the major circumference of the torus but still nonzero momentum in directions that are not pointing along the torus
Same thought as you; however, I think the major challenge of such a simulator is the computational cost. GR calculations, with their highly nonlinear nature, might be more costly than a computation of a protein.
However I can see some ways of approaching it. Recall how Slereah was building some kind of spacetime database; that could be the first step. Next, one might look for machine learning techniques to help with the simulation by using the classifications of spacetimes, as machines are known to perform very well on sign problems, as a recent paper has shown.
Since the GR equations are ultimately a system of 10 nonlinear PDEs, it is possible the solution strategy has some relation to the class of spacetime under consideration, and that might help heavily reduce the parameters needed to simulate them.
I just mean this: The EFE is a tensor equation relating a set of symmetric 4 × 4 tensors. Each tensor has 10 independent components. The four Bianchi identities reduce the number of independent equations from 10 to 6, leaving the metric with four gauge fixing degrees of freedom, which correspond to the freedom to choose a coordinate system.
@ooolb Even if that is really possible (I can always talk about things in a non-joking perspective), the issue is that 1) Unlike other people, I cannot incubate my dreams on a certain topic due to Mechanism 1 (conscious desires have a reduced probability of appearing in dreams), and 2) For 6 years, my dreams have yet to show any sign of revisiting the exact same idea, and there are no known instances of either sequel dreams or recurring dreams.
@0celo7 I felt this aspect can be helped by machine learning. You can train a neural network with some PDEs of a known class with some known constraints, and let it figure out the best solution for some new PDE after say training it on 1000 different PDEs
Actually that makes me wonder, are the space of all coordinate choices more than all possible moves of Go?
enumaris: From what I understood from the dream, the warp drive showed here may be some variation of the alcuberrie metric with a global topology that has 4 holes in it whereas the original alcuberrie drive, if I recall, don't have holes
orbit stabilizer: h bar is my home chat, because this is the first SE chat I joined. Maths chat is the 2nd one I joined, followed by periodic table, biosphere, factory floor and many others
Btw, since gravity is nonlinear, do we expect that superimposing a region where spacetime is frame-dragged in the clockwise direction on a spacetime that is frame-dragged in the anticlockwise direction will result in a spacetime with no frame drag? (One possible physical scenario where this could occur may be when two massive rotating objects with opposite angular velocities are on course to merge.)
Well, I'm a beginner in the study of General Relativity, ok? My knowledge of the subject is based on books like Schutz, Hartle, Carroll and introductory papers. About quantum mechanics I have poor knowledge yet.
So, what I meant by "gravitational double-slit experiment" is: is there a gravitational analogue of the double-slit experiment for gravitational waves?
@JackClerk the double slits experiment is just interference of two coherent sources, where we get the two sources from a single light beam using the two slits. But gravitational waves interact so weakly with matter that it's hard to see how we could screen a gravitational wave to get two coherent GW sources.
But if we could figure out a way to do it then yes, GWs would interfere just like light waves.
Thank you @Secret and @JohnRennie. But to conclude the discussion, I want to put a "silly picture" here: imagine a huge double-slit plate in space close to a strong source of gravitational waves. Then, like water waves and light, would we see the pattern?
So, if the source (like a black hole binary) is sufficiently far away, then in the regions of destructive interference, spacetime would have a flat geometry, and then if we put a spherical object in this region the metric will become Schwarzschild-like.
Pardon, I just spent some naive-philosophy time here with these discussions.
The situation was even more dire for Calculus and I managed!
This is a neat strategy I have found: revision becomes more bearable when I have The h Bar open on the side.
In all honesty, I actually prefer exam season! At all other times-as I have observed in this semester, at least-there is nothing exciting to do. This system of tortuous panic, followed by a reward is obviously very satisfying.
My opinion is that I need you, Kaumudi, to decrease the probability of the h bar having software-system-infrastructure conversations, which confuse me like hell and are why I took refuge in the maths chat a few weeks ago.
(Not that I have questions to ask or anything; like I said, it is a little relieving to be with friends while I am panicked. I think it is possible to gauge how much of a social recluse I am from this, because I spend some of my free time hanging out with you lot, even though I am literally inside a hostel teeming with hundreds of my peers)
that's true. Though back in high school, regardless of the language, our teacher taught us to always indent our code to allow easy reading and troubleshooting. We were also taught the 4-spacebar indentation convention.
@JohnRennie I wish I could just tab because I am also lazy, but sometimes tab inserts 4 spaces while other times it inserts 5-6 spaces, thus screwing up a block of if-then conditions in my code, which is why I had no choice.
I currently automate almost everything from job submission to data extraction, and later on, with the help of the machine learning group in my uni, we might be able to automate a GUI library search thingy
I can do all tasks related to my work without leaving the text editor (of course, such text editor is emacs). The only inconvenience is that some websites don't render in a optimal way (but most of the work-related ones do)
Hi to all. Does anyone know where I could write MATLAB code online (for free)? Apparently another one of my institution's great inspirations is to have a MATLAB-oriented computational physics course without having MATLAB on the university's PCs. Thanks.
@Kaumudi.H Hacky way: 1st thing is that $\psi\left(x, y, z, t\right) = \psi\left(x, y, t\right)$, so no propagation in $z$-direction. Now, in '$1$ unit' of time, it travels $\frac{\sqrt{3}}{2}$ units in the $y$-direction and $\frac{1}{2}$ units in the $x$-direction. Use this to form a triangle and you'll get the answer with simple trig :)
@Kaumudi.H Ah, it was okayish. It was mostly memory based. Each small question was of 10-15 marks. No idea what they expect me to write for questions like "Describe acoustic and optic phonons" for 15 marks!! I only wrote two small paragraphs...meh. I don't like this subject much :P (physical electronics). Hope to do better in the upcoming tests so that there isn't a huge effect on the gpa.
@Blue Ok, thanks. I found a way by connecting to the servers of the university (the program isn't installed on the PCs in the computer room, but if I connect to the university's server, which means remotely running another environment, I find an older version of MATLAB). But thanks again.
@user685252 No; I am saying that it has no bearing on how good you actually are at the subject - it has no bearing on how good you are at applying knowledge; it doesn't test problem solving skills; it doesn't take into account that, if I'm sitting in the office having forgotten the difference between different types of matrix decomposition or something, I can just search the internet (or a textbook), so it doesn't say how good someone is at research in that subject;
it doesn't test how good you are at deriving anything - someone can write down a definition without any understanding, while someone who can derive it, but has forgotten it probably won't have time in an exam situation. In short, testing memory is not the same as testing understanding
If you really want to test someone's understanding, give them a few problems in that area that they've never seen before and give them a reasonable amount of time to do it, with access to textbooks etc. |
I am a new aviation student and I was reading about induced drag the other day. I know that it is produced as a result of the tip vortices and that the greater the aspect ratio of an airplane the less the induced drag force. But when it came to the equation of the force, it is equal to:
$D_i = \frac{1}{2}\rho V^2 S \frac{C_L^2}{\pi AR \epsilon}$
If we substitute the aspect ratio $AR$ with span/chord, $\frac{b}{c}$, and the planform area $S$ with $b\cdot c$, the span terms cancel and the induced drag is affected by the chord length only.
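Writing out the substitution (my arithmetic, just spelling out the claim above): $\frac{S}{AR} = \frac{b\cdot c}{b/c} = c^2$, so at fixed $C_L$ the expression becomes $D_i = \frac{1}{2}\rho V^2 \frac{C_L^2}{\pi \epsilon} c^2$, with no explicit $b$ remaining.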
It kind of contradicts the effect of aspect ratio on the induced drag force, doesn't it? |
Author Archives: Erin Kraeher
Lake Titicaca An issue that I was not aware of at all before this class was the drying out of many lakes around the world. It is a huge problem for the communities that surround the lake, as well as … Continue reading
To start off, what is the term “aridification”? I had no idea when I was looking at environmental articles. Aridification is “the process of a region becoming increasingly dry.” Instead of seasonal variation, this refers to a long-term change in … Continue reading
Imagine walking through the streets of center city Philadelphia, sometimes it just feels like every step you take there’s another piece of litter on the ground. Cities are especially bad because of the large population, but imagine being surrounded by … Continue reading
I think as a global community when we see nature at its finest we are in awe of how grand things are how beautiful the wildlife is, how exotic the flowers are and how giant trees can grow to be … Continue reading
\(x^3-3x^2-10x=0\) \((1+r)^n\) \((5.7\times 10^{-8})\times (1.6\times 10^{12})= 9.12\times 10^{4}\) \(\pi L (1-\alpha)R^2=4\pi\sigma T^4R^2\) \[12\text{ km} \times \frac{0.6 \text{ mile}}{1 \text{ km}} \approx 7.2 \text{ mile}\] \(4{,}173{,}445{,}346.50 \approx 4{,}200{,}000{,}000\) … Continue reading
A particle moves along the x-axis so that at time $t$ its position is given by $x(t) = t^3-6t^2+9t+11$. During what time intervals is the particle moving to the left? I know that we need the velocity for that, and we can get it by taking the derivative, but I don't know what to do after that. The velocity would then be $v(t) = 3t^2-12t+9$. How could I find the intervals?
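One way to finish (my addition, not part of the original exchange): factor the velocity and find where it is negative. $v(t) = 3t^2-12t+9 = 3(t-1)(t-3)$, which is negative exactly when one factor is negative and the other positive, i.e. for $1 < t < 3$. So the particle moves to the left on the interval $(1,3)$.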
Fix $c\in\{0,1,\dots\}$, let $K\geq c$ be an integer, and define $z_K=K^{-\alpha}$ for some $\alpha\in(0,2)$.I believe I have numerically discovered that$$\sum_{n=0}^{K-c}\binom{K}{n}\binom{K}{n+c}z_K^{n+c/2} \sim \sum_{n=0}^K \binom{K}{n}^2 z_K^n \quad \text{ as } K\to\infty$$but cannot ...
So, the whole discussion is about some polynomial $p(A)$, for $A$ an $n\times n$ matrix with entries in $\mathbf{C}$, and eigenvalues $\lambda_1,\ldots, \lambda_k$.
Anyways, part (a) is talking about proving that $p(\lambda_1),\ldots, p(\lambda_k)$ are eigenvalues of $p(A)$. That's basically routine computation. No problem there. The next bit is to compute the dimension of the eigenspaces $E(p(A), p(\lambda_i))$.
Seems like this bit follows from the same argument. An eigenvector for $A$ is an eigenvector for $p(A)$, so the rest seems to follow.
Finally, the last part is to find the characteristic polynomial of $p(A)$. I guess this means in terms of the characteristic polynomial of $A$.
Well, we do know what the eigenvalues are...
The so-called Spectral Mapping Theorem tells us that the eigenvalues of $p(A)$ are exactly the $p(\lambda_i)$.
Usually, by the time you start talking about complex numbers you consider the real numbers as a subset of them, since a and b are real in a + bi. But you could define it that way and call it a "standard form" like ax + by = c for linear equations :-) @Riker
"a + bi where a and b are integers" Complex numbers a + bi where a and b are integers are called Gaussian integers.
I was wondering if it is easier to factor in a non-UFD than it is to factor in a UFD. I can come up with arguments for that, but I also have arguments in the opposite direction. For instance: it should be easier to factor when there are more possibilities (multiple factorizations in a non-UFD...).
Does anyone know if $T: V \to R^n$ is an inner product space isomorphism if $T(v) = (v)_S$, where $S$ is a basis for $V$? My book isn't saying so explicitly, but there was a theorem saying that an inner product isomorphism exists, and another theorem kind of suggesting that it should work.
@TobiasKildetoft Sorry, I meant that they should be equal (accidently sent this before writing my answer. Writing it now)
Isn't there this theorem saying that if $v,w \in V$ ($V$ being an inner product space), then $||v|| = ||(v)_S||$? (where the left norm is defined as the norm in $V$ and the right norm is the euclidean norm) I thought that this would somehow result from isomorphism
@AlessandroCodenotti Actually, such an $f$ in fact needs to be surjective. Take any $y \in Y$; the maximal ideal of $k[Y]$ corresponding to it is $(Y_1 - y_1, \cdots, Y_n - y_n)$. The ideal corresponding to the subvariety $f^{-1}(y) \subset X$ in $k[X]$ is then nothing but $(f^* Y_1 - y_1, \cdots, f^* Y_n - y_n)$. If this is empty, the weak Nullstellensatz kicks in to say that there are $g_1, \cdots, g_n \in k[X]$ such that $\sum_i (f^* Y_i - y_i)g_i = 1$.
Well, better to say that $(f^* Y_1 - y_1, \cdots, f^* Y_n - y_n)$ is the trivial ideal I guess. Hmm, I'm stuck again
$O(n)$ acts transitively on $S^{n-1}$ with stabilizer at a point $O(n-1)$.
For any transitive $G$-action on a set $X$ with stabilizer $H$, $G/H \cong X$ set-theoretically. In this case, as the action is a smooth action by a Lie group, you can prove this set-theoretic bijection gives a diffeomorphism.
The most important thing you need to know about LaTeX on Wikidot is that any "mathy" writing needs to be inside either the math tag [[math]] Math goes here [[/math]] or the inline math tag [[$ math goes here $]]. Every symbol should be done in LaTeX for consistency of font throughout your portfolio.
Here are some of the most important symbols that will be used this semester.
vector: $\vec{xy}$
subscripts: $x_1, x_2, x_3, \ldots$
superscripts: $a^1, a^2, a^3, \ldots, a^{10}, a^{11}, \ldots$
length of a vector: $|\vec{v}|$
dot product: $\vec{v} \cdot \vec{w}$
real space: $\mathbb{R}^{n}$
gradients: $\nabla$
lambda symbol: $\lambda$
the plane: $\mathbb{R}^2$
a line: $\ell$
fractions: $\frac{1}{b}$
define a function: $f:D \to E$
implications and contradictions: $\Rightarrow$, $\Rightarrow\Leftarrow$
square root: $\sqrt{2}$
pi: $\pi$
limits: $\lim_{n\to\infty} \frac{1}{n} = 0$
percentages: % (this symbol is special in LaTeX, and the usual way to get it in LaTeX doesn't work here; just leave it out of the dollar-sign environment)
Matrix:
Feel free to add your own.
To see the code, click on 'edit' at the top of the page, highlight the code and then paste it into your portfolio page.
I have written this so that you could copy each item individually.
Besides the 'inline' math code, we can have larger equations show up on their own line using [[math]] Math goes here [[/math]]. See the difference between the typesetting below: Let $f(x) = x^2+1$. Then
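For instance, the display version of that equation could be entered like this (a sketch using the Wikidot tags described above):

[[math]]
f(2) = 2^2 + 1 = 5
[[/math]]

while the inline version would be written [[$ f(2) = 2^2 + 1 = 5 $]].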
You will need to put a space between two different symbols in the same piece of code, or it will try to interpret them as one! In addition, include the whole symbol in LaTeX, not just the unusual character: $\triangle ABC$ is better than $\triangle$ABC.
I suggest that you add WebEquation to your browser's bookmarks. You can also download a Detexify app for your smartphone. And a Google search for "comprehensive LaTeX symbol list" will get you a HUGE pdf containing everything a beginner or intermediate LaTeX user could ever want, including a very nice index.
A perfect number is a positive integer $n$ such that $n$ is the sum of its proper divisors. For example $6 = 1 + 2 + 3$. The symbol $\sigma(n)$ is usually used for the sum of all the divisors of a positive integer $n$, so that a number is perfect if and only if $\sigma(n) = 2n$. All known perfect numbers are even, and they correspond to Mersenne primes. These are primes of the form $2^k - 1$. For example, if $k=5$ then $2^5 - 1 = 31$, a prime number. The correspondence between Mersenne primes and even perfect numbers is given by: Theorem. Every even perfect number is of the form $2^{k-1}(2^k - 1)$, and such a number is perfect if and only if $2^k-1$ is a prime number.
What about odd perfect numbers? No one knows if they exist. However, Euler did prove that an odd perfect number has to be of the form
\[ n = p^a q_1^{2b_1}\cdots q_r^{2b_r}\] where $p,q_1,\dots,q_r$ are distinct odd primes, $p\equiv 1\pmod{4}$ and $a\equiv 1\pmod{4}$. In the paper
Hagis, Peter, Jr. Outline of a proof that every odd perfect number has at least eight prime factors. Math. Comp. 35 (1980), no. 151, 1027–1032. MR0572873
the author has shown that $r \geq 7$. This already gives a pretty good lower bound for an odd perfect number! Using Euler's restrictions, we see that an odd perfect number would have to be greater than 29276722732257. That's on the order of $10^{13}$. Actually, a lot more is known about the various factors that a perfect number must have. Using all of them, it is shown in
Brent, R. P.; Cohen, G. L.; te Riele, H. J. J. Improved techniques for lower bounds for odd perfect numbers. Math. Comp. 57 (1991), no. 196, 857–868
that an odd perfect number has to be greater than $10^{300}$.
Abundance
The number $A(n) = \sigma(n) - 2n$ is called the abundance of the number $n$. Of course, $A(n) = 0$ if and only if $n$ is perfect. For odd $n$, there are very few numbers for which $|A(n)|$ is small. Of course, the numbers 1, 3, 5, 6, 7, 9, and 15 are small and so $|A(n)|$ is small for them. But here are the first few numbers after $n=15$ with $|A(n)| < 10$:
n                | 315 | 1155 | 8925 | 32445 | 442365
$\sigma(n) - 2n$ | -6  | -6   | 6    | 6     | 6
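This table is easy to reproduce with a short script; here is a minimal sketch in Python (naive trial-division divisor sums, which take a couple of minutes in pure Python at this range):

# Abundance A(n) = sigma(n) - 2n via a naive divisor sum.
def sigma(n):
    total, d = 0, 1
    while d * d <= n:
        if n % d == 0:
            total += d
            if d != n // d:
                total += n // d
        d += 1
    return total

# Odd n > 15 below two million with |A(n)| < 10
hits = [n for n in range(17, 2_000_000, 2) if abs(sigma(n) - 2 * n) < 10]
print(hits)  # [315, 1155, 8925, 32445, 442365]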
In fact, these are the twelve odd numbers with $|A(n)| < 10$ under two million. In contrast, there are 81 even numbers satisfying this inequality. |
A linear equation in the variables $x_1, x_2, \ldots , x_n$ is an equation that can be written in the form $a_1x_1 + a_2x_2 + \cdots + a_nx_n = b$, where $b$ and the coefficients $a_1, \ldots, a_n$ are real numbers.
A system of linear equations is a collection of linear equations involving the same variables.
A solution of the system is a list of numbers that makes each equation true. If the system has a solution, it is called consistent; if not, it is inconsistent.
The set of all solutions is the solution set.
Two systems are equivalent if they have the same solution set.
In a matrix, a vertical dashed line indicates the other side of an equation; this is a way of distinguishing an augmented matrix from a plain coefficient matrix. While this notation is not standard in mathematics (it is Dr. Villalpando's convention), it arguably should be.
Linear Combination
Let $\bar{v}_1 , \bar{v}_2, \ldots , \bar{v}_p$ be vectors in $\mathbb{R}^n$ and let $c_1, c_2, \ldots, c_p$ be scalars. Then
\[ c_1\bar{v}_1 + c_2\bar{v}_2 + \cdots + c_p\bar{v}_p \]
is a linear combination of $\bar{v}_1 , \bar{v}_2, \ldots , \bar{v}_p$.
Span
Let $\bar{v}_1 , \bar{v}_2, \ldots , \bar{v}_p \in \mathbb{R}^n$. The span of $\bar{v}_1 , \bar{v}_2, \ldots , \bar{v}_p$ is the set of all linear combinations of $\bar{v}_1 , \bar{v}_2, \ldots , \bar{v}_p$.

Echelon Forms

A matrix is in echelon form if:
- All nonzero rows are above any rows of all zeros.
- Each leading entry of a row is in a column to the right of the leading entry of the row above it.
- All entries in a column below a leading entry are zero.

A matrix is in reduced echelon form if, in addition to the above three:
- The leading entry in each row is 1.
- Each leading 1 is the only non-zero entry in its column.

Row Reductions

Row reduction brings a matrix to (reduced) echelon form by elementary row operations, as in the sketch below.
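Row reduction is easy to experiment with in SymPy; a minimal sketch (the matrix is a made-up example, not one from class):

# Reduced echelon form (RREF) with SymPy.
from sympy import Matrix

M = Matrix([[1, 3, -1, 0, 2],
            [0, 1,  0, 1, 1],
            [1, 4, -1, 1, 3]])   # third row = first row + second row

rref, pivot_cols = M.rref()
print(rref)        # leading 1s, with zeros above and below each pivot
print(pivot_cols)  # (0, 1): the pivot columns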
Vector

A vector is a quantity that has both direction and magnitude (oh yeah!), represented as a column matrix of dimensions $n \times 1$, where $\vec{u} = \begin{bmatrix} u_1 \\ u_2 \\ \vdots \\ u_n \end{bmatrix}$.
Properties:
$\vec{u} + \vec{v} = \vec{v} + \vec{u}$ (Commutative Property)
$(\vec{u} + \vec{v}) + \vec{w} = \vec{u} + (\vec{v} + \vec{w})$ (Associative Property)
$\vec{u} + \vec{0} = \vec{u}$ (Identity Property)
$\vec{u} + (-\vec{u}) = \vec{0}$ (Inverse Property)
$c(\vec{u} + \vec{v}) = c\vec{u} + c\vec{v}$ (Distributive Property)
Linear combinations of any two vectors are in the span of those vectors.
Homogeneous Equations
A homogeneous equation is one of the form $A\vec{x} = \vec{0}$; a solution is any $\vec{x}$ satisfying this equation.
For one such system (after row reduction), $x_1 = 4x_3$, $x_2 = -2x_3$, $x_3 = x_3$ (free), or in
Parametric Form, $\vec{x} = x_3 \begin{bmatrix} 4\\-2\\1 \end{bmatrix}$
This is the homogeneous solution.
Non-Homogeneous
A non-homogeneous equation is one of the form $A\vec{x} = \vec{b}$, where $A$ is an $n \times m$ matrix and $\vec{b}$ is a nonzero $n \times 1$ vector.
To solve the non-homogeneous system for $\vec{x}$, simply augment $A$ with $\vec{b}$ and solve. We get $\vec{x} = x_3 \begin{bmatrix} 4\\-2\\1 \end{bmatrix} + \begin{bmatrix} 9\\-3\\0 \end{bmatrix}$ in parametric form. Notice that the solution looks very much like the homogeneous solution (which is what it reduces to if $\vec{b} = \vec{0}$), but there is now a point $(9,-3,0)$ that this line is forced through. That's the difference: the homogeneous solution is the set of all solutions of $A\vec{x} = \vec{0}$, while the non-homogeneous solution adds a particular solution. Therefore, all the lines such that $A\vec{x} = \vec{b}$ will have the same direction part $x_3 \begin{bmatrix} 4\\-2\\1 \end{bmatrix}$ regardless of $\vec{b}$; changing $\vec{b}$ only changes the point the line is forced through.
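This particular-plus-homogeneous structure is easy to verify numerically; a minimal sketch with a hypothetical $A$ chosen so that its null space is spanned by $(4,-2,1)$:

# Solutions of A x = b as (particular solution) + (null-space direction).
import numpy as np
from scipy.linalg import null_space

A = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 2.0]])          # null space spanned by (4, -2, 1)
b = A @ np.array([9.0, -3.0, 0.0])       # b is consistent by construction

p, *_ = np.linalg.lstsq(A, b, rcond=None)  # some particular solution
N = null_space(A)                          # columns span {x : A x = 0}

for c in (0.0, 1.0, -2.5):                 # every p + c*N[:,0] solves A x = b
    assert np.allclose(A @ (p + c * N[:, 0]), b)
print("particular:", p, "null direction:", N[:, 0].round(3))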
Linear Independence
The vectors $\vec{v_1}, \vec{v_2}, \ldots ,\vec{v_n}$ in $\mathbb{R}^n$ are linearly independent if the vector equation
\[ x_1\vec{v_1} + x_2\vec{v_2} + \cdots + x_n\vec{v_n} = \vec{0} \]
has only the trivial solution ($x_1 = \cdots = x_n = 0$).
In summary, for a linearly independent set this vector equation has exactly one solution, and each vector supplies independent information.
Two vectors that are not scalar multiples of each other must be linearly independent.
If the vectors are linearly dependent, there must exist scalars $c_1, c_2, c_3, \ldots , c_p$, at least one of which is nonzero, such that
\[ c_1\vec{v_1} + c_2\vec{v_2} + \cdots + c_p\vec{v_p} = \vec{0}. \]
In other words, there is more than just the zero solution. Ergo, if there are infinitely many solutions, the set is linearly dependent.
If, however, there are only two vectors, they are linearly dependent precisely when one is a scalar multiple of the other.
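In practice, independence is checked by comparing the rank of the matrix whose columns are the vectors against the number of vectors; a quick sketch with made-up vectors:

# Vectors are linearly independent iff the matrix having them as columns
# has rank equal to the number of vectors.
import numpy as np

v1 = np.array([1.0, 0.0, 1.0])
v2 = np.array([0.0, 1.0, 1.0])
v3 = np.array([1.0, 1.0, 2.0])     # v3 = v1 + v2, so the set is dependent

V = np.column_stack([v1, v2, v3])
print(np.linalg.matrix_rank(V) == V.shape[1])   # False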
A function is a system with one output for every input
$A\vec{x}$ maps a vector $\vec{x}$ in $\mathbb{R}^n$ to a vector in $\mathbb{R}^m$.
A = $\begin{bmatrix} 1&3&-1&0\\0&1&0&1 \end{bmatrix}$
$T: \mathbb{R}^4 \rightarrow \mathbb{R}^2$, $T(\vec{x}) = A\vec{x}$
The image of $\vec{x}$ is $T(\vec{x}) = A\vec{x}$.
Do note that $T(\vec{x_4}) = T(\vec{x_6})$ is possible with $\vec{x_4} \neq \vec{x_6}$. This means the function $T$ need not be one-to-one: there can be more than one input with the same output, much as a parabola has the same $y$ value for two different $x$ values.
Linear Transformations
A transformation $T$ with domain $D$ is linear if
$T(\vec{u} + \vec{v}) = T(\vec{u}) + T(\vec{v})$
$T(c\vec{u}) = cT(\vec{u})$ |
Simplification of a formula
We consider a fixed parameter $\theta>0$.
For all $t>0$ we write:
$$u(t)=\frac{\sinh\big(\frac{t}{2}\cosh(\theta)\big)}{\cosh(\theta)}$$
$$A(t)=\frac{\sqrt{\cosh^2(\theta)u^2(t)+1}-1}{\cosh^2(\theta)}-\Big(\cosh\Big(\frac{t}{2}\Big)-1\Big)$$
$$f(t)=-\ln\Big(1-\frac{2A(t)}{u(t)+\sinh(t)+A(t)}\Big)$$
For all $\theta>0$ the equation $$2\sinh\big(\cosh(\theta)\,t\big)\sinh(t)=1, \quad t>0,$$
has a unique solution, which we denote by $t^*>0$.
I want to know if we can simplify $f(t^*)$, or else find a constant $c>0$ such that $f(t^*)\ge c$.
Please help me. Thanks.
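(Not an answer to the simplification question, but a quick numerical sketch, with $\theta$ fixed at an example value, shows how $t^*$ and $f(t^*)$ can be computed:)

# Solve 2*sinh(cosh(theta)*t)*sinh(t) = 1 for t*, then evaluate f(t*).
# theta = 1.0 is just an illustrative choice.
import numpy as np
from scipy.optimize import brentq

theta = 1.0
c = np.cosh(theta)

u = lambda t: np.sinh(0.5 * t * c) / c
A = lambda t: (np.sqrt(c**2 * u(t)**2 + 1) - 1) / c**2 - (np.cosh(t / 2) - 1)
f = lambda t: -np.log(1 - 2 * A(t) / (u(t) + np.sinh(t) + A(t)))

g = lambda t: 2 * np.sinh(c * t) * np.sinh(t) - 1   # increasing; g(0+) = -1
t_star = brentq(g, 1e-9, 5.0)                        # unique positive root
print(t_star, f(t_star))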
|
In mathematical logic, satisfiability and validity are elementary concepts of semantics. A formula is satisfiable if it is possible to find an interpretation (model) that makes the formula true. [1] A formula is valid if all interpretations make the formula true. The opposites of these concepts are unsatisfiability and invalidity: a formula is unsatisfiable if no interpretation makes the formula true, and invalid if some interpretation makes the formula false. These four concepts are related to each other in a manner exactly analogous to Aristotle's square of opposition.
The four concepts can be lifted to apply to whole theories: a theory is satisfiable (valid) if one (all) of the interpretations make(s) each of the axioms of the theory true, and a theory is unsatisfiable (invalid) if all (one) of the interpretations make(s) some of the axioms of the theory false.
It is also possible to consider only interpretations that make all of the axioms of a second theory true. This generalization is commonly called satisfiability modulo theories.
The question whether a sentence in propositional logic is satisfiable is a decidable problem. In general, the question whether sentences in first-order logic are satisfiable is not decidable. In universal algebra and equational theory, the methods of term rewriting, congruence closure and unification are used to attempt to decide satisfiability. Whether a particular theory is decidable or not depends on whether the theory is variable-free or on other conditions.
Reduction of validity to satisfiability
For classical logics, it is generally possible to re-express the question of the validity of a formula as one involving satisfiability, because of the relationships between the concepts expressed in the above square of opposition. In particular, φ is valid if and only if ¬φ is unsatisfiable, which is to say it is not true that ¬φ is satisfiable. Put another way, φ is satisfiable if and only if ¬φ is invalid.
For logics without negation, such as the positive propositional calculus, the questions of validity and satisfiability may be unrelated. In the case of the positive propositional calculus, the satisfiability problem is trivial, as every formula is satisfiable, while the validity problem is co-NP complete.
Propositional satisfiability
In the case of classical propositional logic, satisfiability is decidable for propositional formulae. In particular, satisfiability is an NP-complete problem, and is one of the most intensively studied problems in computational complexity theory.
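Since a propositional formula over $n$ variables has only $2^n$ truth assignments, satisfiability can always be decided by brute force (the exponential blow-up is exactly what makes the problem hard); a minimal sketch for formulas in CNF:

# Brute-force SAT for a CNF formula: each clause is a list of literals,
# where the integer i stands for variable x_i and -i for its negation.
from itertools import product

def satisfiable(clauses, n_vars):
    for assignment in product([False, True], repeat=n_vars):
        def holds(lit):
            value = assignment[abs(lit) - 1]
            return value if lit > 0 else not value
        if all(any(holds(l) for l in clause) for clause in clauses):
            return assignment            # a satisfying model
    return None                          # unsatisfiable

# (x1 or x2) and (not x1 or x2) and (not x2 or x3)
print(satisfiable([[1, 2], [-1, 2], [-2, 3]], 3))  # (False, True, True)

By the duality above, the same routine decides validity: φ is valid exactly when a CNF of ¬φ makes this routine return None.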
Satisfiability in first-order logic
Satisfiability is undecidable and indeed it is not even a semidecidable property of formulae in first-order logic (FOL). [2]
This fact has to do with the undecidability of the validity problem for FOL. The question of the status of the validity problem was first posed by David Hilbert, as the so-called Entscheidungsproblem. The universal validity of a formula is a semi-decidable problem: if a formula is valid, a systematic proof search will eventually find a proof. If satisfiability were also a semi-decidable problem, then the problem of the existence of counter-models would be too (a formula has counter-models iff its negation is satisfiable), and running the two semi-procedures in parallel would decide validity. So the problem of logical validity would be decidable, which contradicts the Church–Turing theorem, a result stating the negative answer for the Entscheidungsproblem.

Satisfiability in model theory
In model theory, an atomic formula is satisfiable if there is a collection of elements of a structure that render the formula true. [3] If A is a structure, φ is a formula, and a is a collection of elements, taken from the structure, that satisfy φ, then it is commonly written that A ⊧ φ [a].
If φ has no free variables, that is, if φ is a sentence, and it is satisfied by A, then one writes A ⊧ φ. In this case, one may also say that A is a model for φ, or that φ is true in A. If T is a collection of sentences (a theory) satisfied by A, one writes A ⊧ T.

Finite satisfiability
A problem related to satisfiability is that of finite satisfiability, which is the question of determining whether a formula admits a finite model that makes it true. For a logic that has the finite model property, the problems of satisfiability and finite satisfiability coincide, as a formula of that logic has a model if and only if it has a finite model. This question is important in the mathematical field of finite model theory.
Nevertheless, finite satisfiability and satisfiability need not coincide in general. For instance, consider the first-order logic formula obtained as the conjunction of the following sentences, where $a_0$ and $a_1$ are constants:

$$R(a_0, a_0)$$
$$R(a_0, a_1)$$
$$\forall x \forall y \, \big(R(x, y) \rightarrow \exists z \, R(y, z)\big)$$
$$\forall x \forall y \forall z \, \big(R(y, x) \wedge R(z, x) \rightarrow y = z\big)$$
The resulting formula has the infinite model $R(a_0, a_0), R(a_0, a_1), R(a_1, a_2), \ldots$, but it can be shown that it has no finite model: starting at the fact $R(a_0, a_1)$ and following the chain of $R$ atoms that must exist by the third axiom, the finiteness of a model would require the chain to loop; whether it loops back on $a_0$ or on a different element, the looped-to element would then have two distinct $R$-predecessors, violating the fourth axiom.
The computational complexity of deciding satisfiability for an input formula in a given logic may differ from that of deciding finite satisfiability; in fact, for some logics, only one of them is decidable.
Notes

1. See, for example, Boolos and Jeffrey, 1974, chapter 11.
2. Baier, Christel (2012). "Chapter 1.3 Undecidability of FOL". Lecture Notes — Advanced Logics. Technische Universität Dresden — Institute for Technical Computer Science. pp. 28–32. Retrieved 21 July 2012.
3. Wilfrid Hodges (1997). A Shorter Model Theory. Cambridge University Press. p. 12.

References
Boolos and Jeffrey, 1974. Computability and Logic. Cambridge University Press.
|
Press ? to see shortcuts.
Markdown is the simple, readable formatting markup used by Gingko. It’s quite common all over the web.
Get a Markdown cheatsheet with Ctrl+M.
# This is a heading!
A sentence or two. This is in plain text, and this is *italic text*.
We need some **bold** text:
- You can also make lists quite easily
- With links http://bbc.co.uk
1. numbered lists too
2. including **formatting**
- Want a [fancier link](http://c2.com)?
Or an image? 
# Heading One
## Heading Two
###### Heading Six
Normal, *Italic*, **Bold**, ***BoldItalic***
Lorem ipsum dolor sit amet, consectetuer adipiscing elit. Aliquam hendrerit mi posuere lectus. Vestibulum enim wisi, viverra nec, fringilla in, laoreet vitae, risus.
Regular link: http://duckduckgo.com
Named Link: [LinkName](https://link.com)
> "Not everything worth measuring can be measured."> *-- Albert Einstein*

FilePicker allows you to auto-upload images from DropBox and link to your Gingko Tree.

(You have to edit the card to get a new one.)
Images free for commercial purposes with attribution.
[ ] Unchecked
[X] Checked
Header 1 | Header 2
-------- | --------
A table cell! | What's this?
Another cell | More cell!
Gingko can also do standard HTML & CSS.
HTML & CSS are big topics. Here are a couple of short introductions:
<span class="key">Shift</span><span class="key">Cmd</span><span class="key">S</span>
(No Line Breaks)
ShiftCmdS
Its HTML ID is: card55c55cce66fd9c8c649117b0
Pop up the menu and choose Inspect Element to find its ID.
This style code can be anywhere in the tree.
<style>
#card55c55cce66fd9c8c649117b0 {
  background-color: #000000;
  color: #FFFFFF;
}
</style>
When exporting to HTML, you can customise the style by adding a card with:
<style>
article.html-export {
  /* CSS, fonts etc. here */
}
</style>
You can change margins, borders, images, colours, fonts, and logos.
Very handy for styling output for different projects, customers or workshops.
You can embed loads of content into Gingko.
Click Share, and choose embed.
<iframe src="https://player.vimeo.com/video/24247978" width="400" height="170" frameborder="0" webkitallowfullscreen mozallowfullscreen allowfullscreen></iframe>
<iframe width="400" height="300" src="https://www.youtube.com/embed/egCKZHsICm8" frameborder="0" allowfullscreen></iframe>
Paste plain text publicly, for free, forever.
CURRENTLY BROKEN In PASTEBIN
<iframe src="http://pastebin.com/embed_iframe.php?i=7jqTJVrn" style="border:none;width:400px;height:200px"></iframe>
Desmos Graphing calculator: only pastes images for now.
CURRENTLY BROKEN
Just add $...$ for inline equations ($\tau \equiv 2\pi$), and $$...$$ for display equations:
$$f(a) = \frac{1}{2\pi i} \oint_\gamma \frac{f(z)}{z-a}\, dz$$
If not displaying correctly, click Enable LaTeX in the gear menu.
CSS for this guide |
Global attractor for a Klein-Gordon-Schrodinger type system
1.
Department of Mathematics, National Technical University, Zografou Campus 157 80, Athens, Greece
2.
Department of Mathematics, National Technical University, Zografou Campus 157 80, Athens, Hellas, Greece
We consider the following Klein-Gordon-Schrodinger type system:
$$i\psi_t + k\psi_{xx} + i\alpha\psi = \phi\psi + f(x),$$
$$\phi_{tt} - \phi_{xx} + \phi + \lambda\phi_t = -\mathrm{Re}\,\psi_x + g(x),$$
$$\psi(x,0)=\psi_0(x), \quad \phi(x,0) = \phi_0(x), \quad \phi_t(x,0)=\phi_1(x),$$
$$\psi(x,t)=\phi(x,t)=0, \quad x\in\partial\Omega, \; t>0,$$
where $x \in \Omega$, $t > 0$, $k > 0$, $\alpha > 0$, $\lambda > 0$, $f(x)$ and $g(x)$ are the driving terms, and $\Omega \subset \mathbb{R}$ is bounded. We prove the existence and uniqueness of solutions, the continuous dependence of solutions of the system on the initial data, as well as the existence of a global attractor.
Keywords: Klein-Gordon-Schrodinger equation, global attractor, absorbing set, asymptotic compactness, uniqueness, continuity. Mathematics Subject Classification: Primary: 58F15, 58F17; Secondary: 53C3. Citation: Marilena N. Poulou, Nikolaos M. Stavrakakis. Global attractor for a Klein-Gordon-Schrodinger type system. Conference Publications, 2007, 2007 (Special) : 844-854. doi: 10.3934/proc.2007.2007.844
|
This is a short list of books to get you started on learning automorphic representations. Before I talk about them, I will first define automorphic representation, which will take a few paragraphs.
To start, we need an affine algebraic $F$-group scheme $G$, where $F$ is a number field or function field. We let $\mathbb{A}_F$ be the adeles of $F$. The idelic norm is defined as
\[|-| = \prod_v |-|_v : F^\times\backslash\mathbb{A}_F^\times\to \mathbb{R}.\]
That is, the idelic norm is the product of all the local norms, where the product runs over all the places of $F$. We define
\[ G(\mathbb{A}_F)^1 = \cap_{\chi\in X^*(G)}\ker(|-|\circ\chi). \]
That is, $G(\mathbb{A}_F)^1$ is the subgroup of $G(\mathbb{A}_F)$ consisting of all elements $g$ such that $|\chi(g)| = 1$ for all characters $\chi\in X^*(G)$. The reason for introducing this subgroup rather than working with the full adelic group $G(\mathbb{A}_F)$ is representation-theoretic: the group $G(\mathbb{A}_F)^1$ is unimodular, and under the unique-up-to-scale Haar measure the quotient $G(F)\backslash G(\mathbb{A}_F)^1$ has finite volume, and therefore we can do a lot more representation theory than with $G(\mathbb{A}_F)$.
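For example, take $G = {\rm GL}_1$, so that $G(\mathbb{A}_F) = \mathbb{A}_F^\times$ is the idele group. Here $X^*(G)$ is generated by the identity character, so $G(\mathbb{A}_F)^1$ is the group of norm-one ideles, and the quotient $F^\times\backslash G(\mathbb{A}_F)^1$ is in fact compact — a classical fact equivalent to the finiteness of the class number together with Dirichlet's unit theorem.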
So far, we are talking about pretty concrete objects. However, if you are a little shaky with adeles and places, a good place to start is the book
Ramakrishnan, Dinakar; Valenza, Robert J. Fourier analysis on number fields. Graduate Texts in Mathematics, 186. Springer-Verlag, New York, 1999. xxii+350 pp. ISBN: 0-387-98436-4
You should know the first five chapters of this book pretty well. It will take you through some basic representation theory, local fields and global fields, and adeles. Anyways, let's continue with our definition of an automorphic representation. We have the group $G(\mathbb{A}_F)^1$, and I mentioned that it is unimodular. We fix some Haar measure, and consider the quotient $G(F)\backslash G(\mathbb{A}_F)^1$, which has finite volume under the induced measure. It is natural therefore to consider the space $L^2(G(F)\backslash G(\mathbb{A}_F)^1)$. This space has a natural pairing defined by an integral:
\[ (f_1,f_2) = \int_{G(F)\backslash G(\mathbb{A}_F)^1} f_1(x)\overline{f_2(x)}\,{\rm d}x.\]

Hecke algebras
In order to define automorphic representation, we need to introduce the Hecke algebra. The Hecke algebra is actually a pretty concrete object, especially for nonarchimedean fields. In fact, if you've never seen these types of constructions before, it would be a good idea to consult Fiona Murnaghan's course notes on the representation theory of locally compact groups and reductive groups.
There are two definitions of the Hecke algebra, one for number fields and one for function fields. The one for function fields is easier, because we don't have to worry about the infinite places. If $F$ is a function field, the Hecke algebra of $G$ is just the space of locally constant (a.k.a. smooth) compactly supported functions on $G(\mathbb{A}_F)$. We denote this space by $\mathcal{H}$.
If $F$ is a number field, then we define $\mathcal{H}^\infty$ as the space of locally constant, compactly supported functions on $G(\mathbb{A}_F^\infty)$, where $\mathbb{A}_F^\infty$ is the subring of the adeles given by the restricted direct product of all the $F_v$ over the nonarchimedean places $v$ of $F$. So it's just like the function field case, because in both cases we are just considering a restricted direct product over nonarchimedean local fields.
However, we still want to define the Hecke algebra $\mathcal{H}$ for number fields, and we only have $\mathcal{H}^\infty$, which is just one part of $\mathcal{H}$. The other part is defined as follows. Let $K$ be a maximal compact subgroup of the real Lie group $G(\mathbb{R}\otimes_{\mathbb{Q}} F)$. Then we define $\mathcal{H}_\infty$ as the convolution algebra of distributions on $G(\mathbb{R}\otimes_{\mathbb{Q}} F)$ supported on $K$. The Hecke algebra when $F$ is a number field is then defined as $\mathcal{H} = \mathcal{H}_\infty\otimes\mathcal{H}^\infty$. Just for understanding the definition, it might be easier to just think of function fields, but it is the number field case that is the most interesting from a number-theoretic perspective.
The definition
There is a natural action of the Hecke algebra $\mathcal{H}$ on the Hilbert space $L^2 = L^2(G(F)\backslash G(\mathbb{A}_F)^1)$ defined by
\[\begin{align*}R:\mathcal{H}\times L^2&\longrightarrow L^2\\ (f,\phi)&\longmapsto \left(g\mapsto \int_{G(\mathbb{A}_F)}\phi(gh)f(h)\,{\rm d}h\right). \end{align*}\]
Note that because $f$ is locally constant and compactly supported, this integral makes sense. This gives a representation of $\mathcal{H}$. An automorphic representation is defined to be an admissible representation of $\mathcal{H}$ isomorphic to a subquotient of the representation of $\mathcal{H}$ on $L^2$. Here, a representation of $\mathcal{H}$ on $V$ is admissible if the fixed-point set $V^K$ is finite dimensional for every open compact $K$ and $V$ is nondegenerate. Nondegenerate means that every element of $V$ can be written as $\sum_i h_iv_i$ for $h_i\in \mathcal{H}$ and $v_i\in V$; this is not a trivial condition, since Hecke algebras for noncompact groups do not have an identity.

Books on automorphic representations
People studying automorphic representations are really lucky to have a few good books on the topic, some of which have come out in the last ten years. An obvious first choice is the two-volume series:
Goldfeld, Dorian; Hundley, Joseph. Automorphic representations and $L$-functions for the general linear group. Volume I. With exercises and a preface by Xander Faber. Cambridge Studies in Advanced Mathematics, 129. Cambridge University Press, Cambridge, 2011. xx+550 pp. ISBN: 978-0-521-47423-8
Goldfeld, Dorian; Hundley, Joseph. Automorphic representations and $L$-functions for the general linear group. Volume II. With exercises and a preface by Xander Faber. Cambridge Studies in Advanced Mathematics, 130. Cambridge University Press, Cambridge, 2011. xx+188 pp. ISBN: 978-1-107-00799-4
These books are great because prerequisites are kept to a minimum and everything is done for the general linear group ${\rm GL}_n$. This special case has a lot of simplifications compared to a general reductive group. Also in this book is the connection to automorphic forms and other classical number theory topics.
Students may face some difficulties with Goldfeld and Hundley's books, partially because they are so long and contain a lot of different but important details. For readers looking for a more compact source but still very readable, the book
Bump, Daniel. Automorphic forms and representations. Cambridge Studies in Advanced Mathematics, 55. Cambridge University Press, Cambridge, 1997. xiv+574 pp. ISBN: 0-521-55098-X
should be useful. It's especially good for someone who already has some familiarity with modular forms, although it can be read by someone with little knowledge of them as well. Gelbart's book
Gelbart, Stephen S. Automorphic forms on adèle groups. Annals of Mathematics Studies, No. 83. Princeton University Press, Princeton, N.J.; University of Tokyo Press, Tokyo, 1975. x+267 pp.
is also good, as it focuses on ${\rm GL}_2$ and uses adeles.
Also exciting, and an even more modern treatment, is the Springer GTM An Introduction to Automorphic Representations with a view toward Trace Formulae by Jayce Getz and Heekyoung Hahn. It is currently being written and should come out this year; at the time of this post, the first sixteen chapters are available for download at that link. This book is quite different from the books by Goldfeld and Hundley. As the title suggests, it is much more focused on the mathematics behind Arthur-Selberg-type trace formulae, orbital integrals, and Langlands functoriality. Once completed, this book will certainly stand as the best entry into the trace formula.
There is also a two volume series edited by Borel and Casselman
Borel, Armand, and William Casselman, eds. Automorphic Forms, Representations and L-Functions. Vol. 1/2. American Mathematical Soc., 1979.
which contains one of the best summaries of the mathematics surrounding automorphic representations. |
Production of Σ(1385)± and Ξ(1530)0 in proton–proton collisions at √s = 7 TeV
(Springer, 2015-01-10)
The production of the strange and double-strange baryon resonances (Σ(1385)±, Ξ(1530)0) has been measured at mid-rapidity (|y| < 0.5) in proton–proton collisions at √s = 7 TeV with the ALICE detector at the LHC. Transverse ...
Forward-backward multiplicity correlations in pp collisions at √s = 0.9, 2.76 and 7 TeV
(Springer, 2015-05-20)
The strength of forward-backward (FB) multiplicity correlations is measured by the ALICE detector in proton-proton (pp) collisions at √s = 0.9, 2.76 and 7 TeV. The measurement is performed in the central pseudorapidity ...
Inclusive photon production at forward rapidities in proton-proton collisions at $\sqrt{s}$ = 0.9, 2.76 and 7 TeV
(Springer Berlin Heidelberg, 2015-04-09)
The multiplicity and pseudorapidity distributions of inclusive photons have been measured at forward rapidities ($2.3 < \eta < 3.9$) in proton-proton collisions at three center-of-mass energies, $\sqrt{s}=0.9$, 2.76 and 7 ...
Rapidity and transverse-momentum dependence of the inclusive J/$\mathbf{\psi}$ nuclear modification factor in p-Pb collisions at $\mathbf{\sqrt{\textit{s}_{NN}}}=5.02$ TeV
(Springer, 2015-06)
We have studied the transverse-momentum ($p_{\rm T}$) dependence of the inclusive J/$\psi$ production in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV, in three center-of-mass rapidity ($y_{\rm cms}$) regions, down to ...
Measurement of pion, kaon and proton production in proton–proton collisions at √s = 7 TeV
(Springer, 2015-05-27)
The measurement of primary $\pi^{\pm}$, $K^{\pm}$, p and $\bar{p}$ production at mid-rapidity (|y| < 0.5) in proton–proton collisions at √s = 7 TeV performed with A Large Ion Collider Experiment (ALICE) at the Large Hadron Collider (LHC) is reported. ...
Two-pion femtoscopy in p-Pb collisions at $\sqrt{s_{\rm NN}}=5.02$ TeV
(American Physical Society, 2015-03)
We report the results of the femtoscopic analysis of pairs of identical pions measured in p-Pb collisions at $\sqrt{s_{\mathrm{NN}}}=5.02$ TeV. Femtoscopic radii are determined as a function of event multiplicity and pair ...
Measurement of charm and beauty production at central rapidity versus charged-particle multiplicity in proton-proton collisions at $\sqrt{s}$ = 7 TeV
(Springer, 2015-09)
Prompt D meson and non-prompt J/$\psi$ yields are studied as a function of the multiplicity of charged particles produced in inelastic proton-proton collisions at a centre-of-mass energy of $\sqrt{s}=7$ TeV. The results ...
Charged jet cross sections and properties in proton-proton collisions at $\sqrt{s}=7$ TeV
(American Physical Society, 2015-06)
The differential charged jet cross sections, jet fragmentation distributions, and jet shapes are measured in minimum bias proton-proton collisions at centre-of-mass energy $\sqrt{s}=7$ TeV using the ALICE detector at the ...
Centrality dependence of high-$p_{\rm T}$ D meson suppression in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV
(Springer, 2015-11)
The nuclear modification factor, $R_{\rm AA}$, of the prompt charmed mesons ${\rm D^0}$, ${\rm D^+}$ and ${\rm D^{*+}}$, and their antiparticles, was measured with the ALICE detector in Pb-Pb collisions at a centre-of-mass ...
K*(892)$^0$ and $\Phi$(1020) production in Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV
(American Physical Society, 2015-02)
The yields of the K*(892)$^0$ and $\Phi$(1020) resonances are measured in Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV through their hadronic decays using the ALICE detector. The measurements are performed in multiple ... |
L-function
Calculates an estimate of the inhomogeneous version of the \(L\)-function (Besag's transformation of Ripley's \(K\)-function) for a spatial point pattern.
Usage
Linhom(...)
Arguments …
Arguments passed to Kinhom to estimate the inhomogeneous K-function.
Details
This command computes an estimate of the inhomogeneous version of the \(L\)-function for a spatial point pattern.
The original \(L\)-function is a transformation (proposed by Besag) of Ripley's \(K\)-function, $$L(r) = \sqrt{\frac{K(r)}{\pi}}$$ where \(K(r)\) is the Ripley \(K\)-function of a spatially homogeneous point pattern, estimated by Kest.
The inhomogeneous \(L\)-function is the corresponding transformation of the inhomogeneous \(K\)-function, estimated by Kinhom. It is appropriate when the point pattern clearly does not have a homogeneous intensity of points. It was proposed by Baddeley, Moller and Waagepetersen (2000).
The command Linhom first calls Kinhom to compute the estimate of the inhomogeneous K-function, and then applies the square root transformation.
For a Poisson point pattern (homogeneous or inhomogeneous), the theoretical value of the inhomogeneous \(L\)-function is \(L(r) = r\). The square root also has the effect of stabilising the variance of the estimator, so that \(L\) is more appropriate for use in simulation envelopes and hypothesis tests.
Value
Essentially a data frame containing the following columns:

r: the vector of values of the argument \(r\) at which the function \(L\) has been estimated

theo: the theoretical value \(L(r) = r\) for a stationary Poisson process
References
Baddeley, A., Moller, J. and Waagepetersen, R. (2000) Non- and semiparametric estimation of interaction in inhomogeneous point patterns.
Statistica Neerlandica 54, 329–350.

See Also

Kinhom

Aliases

Linhom

Examples
# NOT RUN {
data(japanesepines)
X <- japanesepines
L <- Linhom(X, sigma=0.1)
plot(L, main="Inhomogeneous L function for Japanese Pines")
# }
Documentation reproduced from package spatstat, version 1.55-1, License: GPL (>= 2) |
Geometry and Topology Seminar
Revision as of 23:13, 27 January 2017

Fall 2016

Spring 2017
date | speaker | title | host(s)
Jan 20 | Carmen Rovi (University of Indiana Bloomington) | "The mod 8 signature of a fiber bundle" | Maxim
Jan 27 | | |
Feb 3 | Rafael Montezuma (University of Chicago) | "Metrics of positive scalar curvature and unbounded min-max widths" | Lu Wang
Feb 10 | | |
Feb 17 | Yair Hartman (Northwestern University) | "Intersectional Invariant Random Subgroups and Furstenberg Entropy" | Dymarz
Feb 24 | Lucas Ambrozio (University of Chicago) | "TBA" | Lu Wang
March 3 | Mark Powell (Université du Québec à Montréal) | "TBA" | Kjuchukova
March 10 | Autumn Kent (Wisconsin) | "Analytic functions from hyperbolic manifolds" | local
March 17 | | |
March 24 | Spring Break | |
March 31 | Xiangwen Zhang (University of California-Irvine) | "TBA" | Lu Wang
April 7 | reserved | | Lu Wang
April 14 | Xianghong Gong (Wisconsin) | "TBA" | local
April 21 | Joseph Maher (CUNY) | "TBA" | Dymarz
April 28 | Bena Tshishiku (Harvard) | "TBA" | Dymarz

Fall Abstracts

Ronan Conlon: New examples of gradient expanding Kähler-Ricci solitons
A complete K\"ahler metric $g$ on a K\"ahler manifold $M$ is a \emph{gradient expanding K\"ahler-Ricci soliton} if there exists a smooth real-valued function $f:M\to\mathbb{R}$ with $\nabla^{g}f$ holomorphic such that $\operatorname{Ric}(g)-\operatorname{Hess}(f)+g=0$. I will present new examples of such metrics on the total space of certain holomorphic vector bundles. This is joint work with Alix Deruelle (Universit\'e Paris-Sud).
Jiyuan Han: Deformation theory of scalar-flat ALE Kähler surfaces
We prove a Kuranishi-type theorem for deformations of complex structures on ALE Kähler surfaces. This is used to prove that for any scalar-flat Kähler ALE surface, all small deformations of complex structure also admit scalar-flat Kähler ALE metrics. A local moduli space of scalar-flat Kähler ALE metrics is then constructed, which is shown to be universal up to small diffeomorphisms (that is, diffeomorphisms which are close to the identity in a suitable sense). A formula for the dimension of the local moduli space is proved in the case of a scalar-flat Kähler ALE surface which deforms to a minimal resolution of $\mathbb{C}^2/\Gamma$, where $\Gamma$ is a finite subgroup of ${\rm U}(2)$ without complex reflections. This is joint work with Jeff Viaclovsky.
Sean Howe: Representation stability and hypersurface sections
We give stability results for the cohomology of natural local systems on spaces of smooth hypersurface sections as the degree goes to $\infty$. These results give new geometric examples of a weak version of representation stability for symmetric, symplectic, and orthogonal groups. The stabilization occurs in point-counting and in the Grothendieck ring of Hodge structures, and we give explicit formulas for the limits using a probabilistic interpretation. These results have natural geometric analogs -- for example, we show that the "average" smooth hypersurface in $\mathbb{P}^n$ is $\mathbb{P}^{n-1}$!
Nan Li: Quantitative estimates on the singular sets of Alexandrov spaces
The definition of quantitative singular sets was initiated by Cheeger and Naber. They proved some volume estimates on such singular sets in non-collapsed manifolds with lower Ricci curvature bounds and their limit spaces. On the quantitative singular sets in Alexandrov spaces, we obtain stronger estimates in a collapsing fashion. We also show that the $(k,\epsilon)$-singular sets are $k$-rectifiable and such structure is sharp in some sense. This is joint work with Aaron Naber.
Yu Li
In this talk, we prove that if an asymptotically Euclidean (AE) manifold with nonnegative scalar curvature admits a long time solution of the Ricci flow, then it converges to Euclidean space in the strong sense. By the convergence, the mass drops to zero as time tends to infinity. Moreover, in the three-dimensional case, we use Ricci flow with surgery to give an independent proof of the positive mass theorem. A classification of diffeomorphism types is also given for all AE 3-manifolds with nonnegative scalar curvature.
Peyman Morteza

We develop a procedure to construct Einstein metrics by gluing the Calabi metric to an Einstein orbifold. We show that our gluing problem is obstructed and we calculate the obstruction explicitly. When our obstruction does not vanish, we obtain a non-existence result in the case that the base orbifold is compact. When our obstruction vanishes and the base orbifold is non-degenerate and asymptotically hyperbolic, we prove an existence result. This is joint work with Jeff Viaclovsky.

Caglar Uyanik: Geometry and dynamics of free group automorphisms
A common theme in geometric group theory is to obtain structural results about infinite groups by analyzing their actions on metric spaces. In this talk, I will focus on two geometrically significant groups: mapping class groups and outer automorphism groups of free groups. We will describe a particular instance of how the dynamics and geometry of their actions on various spaces provide deeper information about the groups.
Bing Wang: The extension problem of the mean curvature flow
We show that the mean curvature blows up at the first finite singular time for a closed smooth embedded mean curvature flow in $\mathbb{R}^3$. A key ingredient of the proof is to show a two-sided pseudo-locality property of the mean curvature flow, whenever the mean curvature is bounded. This is joint work with Haozhao Li.
Ben Weinkove: Gauduchon metrics with prescribed volume form
Every compact complex manifold admits a Gauduchon metric in each conformal class of Hermitian metrics. In 1984 Gauduchon conjectured that one can prescribe the volume form of such a metric. I will discuss the proof of this conjecture, which amounts to solving a nonlinear Monge-Ampere type equation. This is a joint work with Gabor Szekelyhidi and Valentino Tosatti.
Jonathan Zhu: Entropy and self-shrinkers of the mean curvature flow
The Colding-Minicozzi entropy is an important tool for understanding the mean curvature flow (MCF), and is a measure of the complexity of a submanifold. Colding and Minicozzi, together with Ilmanen and White, conjectured that the round sphere minimises entropy amongst all closed hypersurfaces. We will review the basics of MCF and the Colding-Minicozzi theory of generic MCF, then describe the resolution of the above conjecture, due to J. Bernstein and L. Wang for dimensions up to six and recently claimed by the speaker for all remaining dimensions. A key ingredient in the latter is the classification of entropy-stable self-shrinkers that may have a small singular set.
Yu Zeng: Short time existence of the Calabi flow with rough initial data
Calabi flow was introduced by Calabi back in the 1950s as a geometric flow approach to the existence of extremal metrics. Analytically it is a fourth order nonlinear parabolic equation on the Kähler potentials which deforms the Kähler potential along its scalar curvature. In this talk, we will show that the Calabi flow admits a short time solution for any continuous initial Kähler metric. This is joint work with Weiyong He.
Spring Abstracts

Lucas Ambrozio
"TBA"
Rafael Montezuma
"Metrics of positive scalar curvature and unbounded min-max widths"
In this talk, I will construct a sequence of Riemannian metrics on the three-dimensional sphere with scalar curvature greater than or equal to 6, and arbitrarily large min-max widths. The search for such metrics is motivated by a rigidity result of min-max minimal spheres in three-manifolds obtained by Marques and Neves.
Carmen Rovi: The mod 8 signature of a fiber bundle
In this talk we shall be concerned with the residues modulo 4 and modulo 8 of the signature of a 4k-dimensional geometric Poincare complex. I will explain the relation between the signature modulo 8 and two other invariants: the Brown-Kervaire invariant and the Arf invariant. In my thesis I applied the relation between these invariants to the study of the signature modulo 8 of a fiber bundle. In 1973 Werner Meyer used group cohomology to show that a surface bundle has signature divisible by 4. I will discuss current work with David Benson, Caterina Campagnolo and Andrew Ranicki where we are using group cohomology and representation theory of finite groups to detect non-trivial signatures modulo 8 of surface bundles.
Yair Hartman
"Intersectional Invariant Random Subgroups and Furstenberg Entropy."
In this talk I'll present a joint work with Ariel Yadin, in which we solve the Furstenberg Entropy Realization Problem for finitely supported random walks (finite range jumps) on free groups and lamplighter groups. This generalizes a previous result of Bowen. The proof consists of several reductions which have geometric and probabilistic flavors of independent interests. All notions will be explained in the talk, no prior knowledge of Invariant Random Subgroups or Furstenberg Entropy is assumed.
Bena Tshishiku
"TBA"
Autumn Kent: Analytic functions from hyperbolic manifolds
At the heart of Thurston's proof of Geometrization for Haken manifolds is a family of analytic functions between Teichmuller spaces called "skinning maps." These maps carry geometric information about their associated hyperbolic manifolds, and I'll discuss what is presently known about their behavior. The ideas involved form a mix of geometry, algebra, and analysis.
Xiangwen Zhang
"TBA"
Archive of past Geometry seminars
2015-2016: Geometry_and_Topology_Seminar_2015-2016
2014-2015: Geometry_and_Topology_Seminar_2014-2015 2013-2014: Geometry_and_Topology_Seminar_2013-2014 2012-2013: Geometry_and_Topology_Seminar_2012-2013 2011-2012: Geometry_and_Topology_Seminar_2011-2012 2010: Fall-2010-Geometry-Topology |
Let's try to generalize the VC-dimension (of the class of hyperplanes) to include accuracy/error. Let $S$ be a set of points in $\mathbb{R}^d$ and let $t \in [0,1]$. We say that the class of hyperplanes $t$-shatters $S$ if for every binary labeling of the points in $S$, there exists some hyperplane which separates $S$ with accuracy at least $t$ (i.e., at least $t \cdot |S|$ of the points in $S$ are classified correctly by the hyperplane). We then define the $VC(t)$-dimension of the class of hyperplanes (in a feature space of dimension $d$) to be the size of the largest set $S$ which the hyperplanes $t$-shatter, i.e.,
$VC(t) = \max_{S \subset \mathbb{R}^d} |S|$, subject to the constraint that the hyperplanes $t$-shatter $S$.
For example, the usual VC-dimension is $VC(1)$. So $VC(1)=d+1$. $VC(t)$ is clearly nonincreasing in $t$. $VC(0)=\infty$ (in fact, I think $VC(0.5)=\infty$).
Question: Have people studied this generalization of VC-dimension, or something similar? If so, what is it called, and can you point me to resources about it? What is the best lower bound on $VC(t)$? Thanks!
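(For small point sets the definition can be explored by brute force; here is a heuristic sketch that uses scikit-learn's LinearSVC as a stand-in for the accuracy-maximizing hyperplane, so it can only underestimate the achievable accuracy — a True answer is reliable, a False answer is not:)

# Heuristic check of whether a point set S is t-shattered by hyperplanes.
import itertools
import numpy as np
from sklearn.svm import LinearSVC

def is_t_shattered(S, t):
    for labels in itertools.product([0, 1], repeat=len(S)):
        y = np.array(labels)
        if y.min() == y.max():             # constant labelings are trivial
            continue
        clf = LinearSVC(C=1e6).fit(S, y)   # large C: prioritize accuracy
        if clf.score(S, y) < t:
            return False
    return True

S = np.random.default_rng(0).normal(size=(6, 2))   # 6 random points in R^2
print(is_t_shattered(S, 0.8))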
Edit: Here's the motivation for my question. For any set of $d+1$ non-coplanar points in $\mathbb{R}^d$, and any binary labeling of these points, there is a linear classifier (a hyperplane) which classifies these points with 100% accuracy. More generally, given $n>d+1$ points (satisfying some "weak" condition like non-coplanarity) and a binary labeling of them, what can we say about the accuracy of a linear classifier for these points? If $t \in [0,1]$ and $VC(t) \geq n$, then we know that for any $n$ points in $\mathbb{R}^d$ (satisfying some "weak" condition) and any labeling of these points, there is a linear classifier which has accuracy at least $t$ on these points. |
Production of charged pions, kaons and protons at large transverse momenta in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV
(Elsevier, 2014-09)
Transverse momentum spectra of $\pi^{\pm}, K^{\pm}$ and $p(\bar{p})$ up to $p_T$ = 20 GeV/c at mid-rapidity, |y| $\le$ 0.8, in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV have been measured using the ALICE detector ... |
Anisotropic flow of inclusive and identified particles in Pb–Pb collisions at $\sqrt{{s}_{NN}}=$ 5.02 TeV with ALICE
(Elsevier, 2017-11)
Anisotropic flow measurements constrain the shear $(\eta/s)$ and bulk ($\zeta/s$) viscosity of the quark-gluon plasma created in heavy-ion collisions, as well as give insight into the initial state of such collisions and ... |