I am reading tom Dieck's book, page 537, and I am not sure what the vertical map that I put in the title is in the diagram at the bottom of the page. This map is labeled Thom isomorphism. Here $MSO(k)$ is the Thom space of the tautological bundle over $BSO(k)$. There is a Thom isomorphism $H_{j-1}(MSO(k))\to H_{j-k-1}(BSO(k))$ given by capping with the $k$-dimensional cohomology class in $H^k(MSO(k))$.
The suspension isomorphism on reduced homology is from $H_j(MSO(k)) \to H_{j-k}(\Sigma BSO(k))$. I am skeptical that $H_{j-k}(\Sigma BSO(k))=H_{j-k}(BSO(k))$. For example, $\Sigma BSO(2)=\Sigma BS^1=\Sigma K(\mathbb{Z},2)=\Sigma \Omega K(\mathbb{Z},3)$, which admits an evaluation map to $K(\mathbb{Z},3)$ inducing isomorphisms on low-dimensional homotopy groups, and $K(\mathbb{Z},3)$ doesn't have the same low-dimensional homotopy groups as $BSO(2)=K(\mathbb{Z},2)$.
Another random idea: I know that the Thom isomorphism commutes with the boundary map and hence with suspension. Do you have any ideas on identifying this map?
edit: I put the wrong map in the title and consequently my post made no sense. It should be readable now.
Let's focus on the volatility contract price. Generalisation to cubic and quartic contracts is straightforward.
Following the paper's notations, the evaluation date is $t$ and the (European) contracts all expire at $T = t+\tau$. A
volatility contract is specifically associated to the payoff function
$$ H[S] = R(t,\tau;S)^2 = \left(\ln S(t+\tau) - \ln S(t) \right)^2 = \left[ \ln \left(\frac{S}{S(t)}\right) \right]^2 $$ where we've used the paper's notation $$ S(t+\tau) := S $$
According to
arbitrage-free pricing theory, the price of a volatility contract should be calculated as $$ V(t,\tau) = \mathcal{E}_t^*\left\{ e^{-r \tau} H[S] \right\} $$ where $\mathcal{E}^*_t\{ \cdot \}$ denotes an expectation taken under the (risk-neutral) measure $\Bbb{Q}$ associated to the risk-free money market account numéraire, conditional on the information available at $t$.
The key result to conclude is the
Carr-Madan formula (see references mentioned in the paper or here), which tells you that, for a sufficiently regular payoff function, one can write the conditional expectation above as $$ \mathcal{E}_t^*\left\{ e^{-r \tau} H[S] \right\} = H[\bar{S}] + (S - \bar{S}) H_S[\bar{S}] + \int_{\bar{S}}^\infty H_{SS}[K] C(t,\tau;K) dK + \int_{0}^\bar{S} H_{SS}[K] P(t,\tau;K) dK \tag{3} $$ for any $\bar{S}$.
From the definition of $H[S]$, by differentiating we get\begin{align}H_S[S] &= 2 R(t,\tau;S) \frac{1}{S} \\H_{SS}[S] &= \frac{2}{S^2}(1-R(t,\tau;S))\end{align}
Now let's further simplify equation $(3)$ by picking $\bar{S} = S(t)$. This is a convenient choice since it means that terms involving $H[\bar{S}]$ and $H_S[\bar{S}]$ will disappear (because $R(t,\tau;\bar{S})=0$). We are then left with
\begin{align}\mathcal{E}_t^*\left\{ e^{-r \tau} H[S] \right\} &= \int_{S(t)}^\infty \frac{2 \left(1-\ln\left(\frac{K}{S(t)}\right)\right)}{K^2} C(t,\tau;K) dK + \int_{0}^{S(t)} \frac{2\left(1-\ln\left(\frac{K}{S(t)}\right)\right)}{K^2} P(t,\tau;K) dK \\&= \int_{S(t)}^\infty \frac{2 \left(1-\ln\left(\frac{K}{S(t)}\right)\right)}{K^2} C(t,\tau;K) dK + \int_{0}^{S(t)} \frac{2\left(1+\ln\left(\frac{S(t)}{K}\right)\right)}{K^2} P(t,\tau;K) dK \tag{7} \\&= V(t,\tau) \end{align}
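As a numerical sanity check (my own sketch, not part of the paper), the replication integral $(7)$ can be evaluated under an assumed Black-Scholes model, where the closed-form value $e^{-r\tau}\left(\sigma^2\tau + (r-\sigma^2/2)^2\tau^2\right)$ is available for comparison. All parameter values below are hypothetical:

```python
import numpy as np
from scipy.integrate import quad
from scipy.stats import norm

# Hypothetical Black-Scholes inputs (assumptions, not from the paper)
S0, r, sigma, tau = 100.0, 0.02, 0.25, 0.5

def bs_price(K, call=True):
    """Black-Scholes price of a European call/put struck at K."""
    d1 = (np.log(S0 / K) + (r + 0.5 * sigma**2) * tau) / (sigma * np.sqrt(tau))
    d2 = d1 - sigma * np.sqrt(tau)
    if call:
        return S0 * norm.cdf(d1) - K * np.exp(-r * tau) * norm.cdf(d2)
    return K * np.exp(-r * tau) * norm.cdf(-d2) - S0 * norm.cdf(-d1)

# H_SS[K] = 2 (1 - ln(K / S0)) / K^2, from the derivation above
H_SS = lambda K: 2.0 * (1.0 - np.log(K / S0)) / K**2

# Replication integral (7), truncated at K in [0.01, 5000]
V_puts, _ = quad(lambda K: H_SS(K) * bs_price(K, call=False), 1e-2, S0, limit=200)
V_calls, _ = quad(lambda K: H_SS(K) * bs_price(K, call=True), S0, 5000.0, limit=200)
V_repl = V_puts + V_calls

# Closed form: under Q, ln(S_T / S0) ~ N((r - sigma^2/2) tau, sigma^2 tau)
mu = (r - 0.5 * sigma**2) * tau
V_exact = np.exp(-r * tau) * (sigma**2 * tau + mu**2)

print(V_repl, V_exact)  # the two numbers should agree to several decimals
```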
That's it!
Schedule of the International Workshop on Logic and Algorithms in Group Theory Monday, October 22
10:15 - 10:50 Registration & Welcome coffee 10:50 - 11:00 Opening remarks 11:00 - 12:00 Alex Lubotzky: First order rigidity of high-rank arithmetic groups 12:00 - 14:00 Lunch break 14:00 - 15:00 Zlil Sela: Basic conjectures and preliminary results in non-commutative algebraic geometry 15:00 - 16:00 Katrin Tent: Burnside groups of relatively small odd exponent 16:00 - 16:30 Coffee, tea and cake 16:30 - 17:30 Harald Andres Helfgott: Growth in linear algebraic groups and permutation groups: towards a unified perspective afterwards Reception Tuesday, October 23
09:30 - 10:30 Chloe Perin: Forking independence in the free group 10:30 - 11:00 Group photo and coffee break 11:00 - 12:00 Krzysztof Krupinski: Amenable theories 12:00 - 14:00 Lunch break 14:00 - 15:00 Gregory Cherlin: The Relational Complexity of a Finite Permutation Group 15:00 - 16:00 Todor Tsankov: A model-theoretic approach to rigidity in ergodic theory 16:00 - 16:30 Coffee, tea and cake Wednesday, October 24
09:30 - 10:30 Dan Segal: Small profinite groups 10:30 - 11:00 Coffee break 11:00 - 12:00 Martin Kassabov: On the complexity of counting homomorphism to finite groups 12:00 - 13:00 Anna Erschler: Arboreal structures, Poisson boundary and growth of Groups 19:00 - Dinner - Tuscolo Münsterblick, Gerhard-von-Are-Straße 8, Bonn Thursday, October 25
09:30 - 10:30 Laura Ioana Ciobanu Radomirovic: Equations in groups, formal languages and complexity 10:30 - 11:00 Coffee break 11:00 - 12:00 James Wilson: Distinguishing Groups and the Group Isomorphism problem 12:00 - 14:00 Lunch break 14:00 - 15:00 Alan Reid: Distinguishing certain triangle groups by their finite quotients 15:00 - 16:00 Alla Detinko: Computing with infinite linear groups: methods, algorithms, and applications 16:00 - 16:30 Tea and cake Friday, October 26
09:30 - 10:30 George Willis: Computing the scale 10:30 - 11:00 Coffee break 11:00 - 12:00 Agatha Atkarskaya: Towards a Group-like Small Cancellation Theory for Rings

Abstracts

Agatha Atkarskaya: Towards a Group-like Small Cancellation Theory for Rings
Let a group $G$ be given by generators and defining relations. It is known that we cannot extract specific information about the structure of $G$ from the defining relations in the general case. However, if these defining relations satisfy small cancellation conditions, then we possess a great deal of knowledge about $G$. In particular, such groups are hyperbolic, that is, we can express the multiplication in the group by means of thin triangles. It seems of interest to develop a similar theory for rings. Let $kF$ be the group algebra of the free group $F$ over some field $k$. Let $F$ have a fixed system of generators; then its elements are reduced words in these generators that we call monomials. Let $\mathcal{I}$ be the ideal of $kF$ generated by a set of polynomials and let $kF / \mathcal{I}$ be the corresponding quotient algebra. In the present work we state conditions on these polynomials that will enable a combinatorial description of the quotient algebra similar to small cancellation quotients of the free group. In particular, we construct a linear basis of $kF / \mathcal{I}$ and describe a special system of linear generators of $kF / \mathcal{I}$ for which the multiplication table amounts to a linear combination of thin triangles. Constructions of groups with exotic properties make extensive use of small cancellation theory and its generalizations. In a similar way, generalizations of our approach allow one to construct various examples of algebras with exotic properties. This is joint work with A. Kanel-Belov, E. Plotkin and E. Rips.
Gregory Cherlin: The Relational Complexity of a Finite Permutation Group
I am interested in a numerical invariant of finite permutation groups called the relational complexity, which is suggested by the model-theoretic point of view. (From a model theorist's perspective, the study of finite structures and the study of finite permutation groups are the same subject.) One conjecture about this invariant states that an almost simple primitive permutation group of relational complexity 2 must be the symmetric group acting naturally [1]. Considerable progress toward a proof has been made lately using a combination of theory and machine computation (e.g., [2]). A variety of computational methods have been devised which are very helpful in the limited context of the stated conjecture, and perhaps in other cases. I aim to provide a sense of what the invariant measures, and to discuss some ways that it can be determined or estimated (structurally, group-theoretically, or computationally).
References:
[1] Gregory Cherlin, Sporadic homogeneous structures, The Gelfand Mathematical Seminars, 1996-1999, pp. 15-48, Birkhäuser, 2000.
[2] Nick Gill and Pablo Spiga, Binary permutation groups: Alternating and Classical Groups, preprint. arXiv:1610.01792 [math.GR]
Related Talks:
* The relational complexity of a finite primitive structure, ICMS, Sep. 19, 2018
* Finite binary homogeneous structures, ICMS, July 10, 2014

Alla Detinko: Computing with infinite linear groups: methods, algorithms, and applications
In the talk we will survey our ongoing collaborative project in a novel domain of computational group theory: computing with linear groups over infinite fields. We provide an introduction to the area, and discuss available methods and algorithms. Special consideration will be given to the most recent developments in computing with Zariski dense groups and applications. This talk is aimed at a general algebraic audience (see also our expository article https://doi.org/10.1016/j.exmath.2018.07.002).
Anna Erschler: Arboreal structures, Poisson boundary and growth of groups
A group is said to be ICC if all non-identity elements have infinitely many conjugates. In a joint work with Vadim Kaimanovich, given an ICC group, we construct a forest F with the vertex set G and a probability measure on G such that trajectories of the random walk tend almost surely to points of the boundary of F; we show that the Poisson boundary can be identified with the boundary of the forest, endowed with the hitting distribution; we show that the convergence to Poisson boundary has strong convergence property, resembling the case of simple random walks on free groups and we show that the action of G on the Poisson boundary is free.
Our result is a development of a recent result of Joshua Frisch, Yair Hartman, Omer Tamuz and Pooya Vahidi Ferdowsi, who has shown that any ICC group admits a measure with non-trivial Poisson boundary. In a joint work with Tianyi Zheng, we construct measures with power law decay on torsion Grigorchuk groups that have non-trivial Poisson boundary. As an application we obtain near optimal lower bound for the growth of these groups.
Harald Andres Helfgott: Growth in linear algebraic groups and permutation groups: towards a unified perspective
Given a finite group $G$ and a set $A$ of generators, the diameter $\mathrm{diam}(\Gamma(G,A))$ of the Cayley graph $\Gamma(G,A)$ is the smallest $\ell$ such that every element of $G$ can be expressed as a word of length at most $\ell$ in $A \cup A^{-1}$. We are concerned with bounding $\mathrm{diam}(G):= \max_A \mathrm{diam}(\Gamma(G,A))$. It has long been conjectured that the diameter of the symmetric group of degree $n$ is polynomially bounded in $n$. In 2011, Helfgott and Seress gave a quasipolynomial bound $\exp((\log n)^{4+\epsilon})$. We will discuss a recent, much simplified version of the proof, emphasising the links in common with previous work on growth in linear algebraic groups.
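For a fixed generating set, the diameter in this definition can be computed directly by breadth-first search on the Cayley graph. The following toy sketch (my own construction, not from the talk) does this for $G=S_4$ with a transposition and a 4-cycle as generators, using the fact that a Cayley graph is vertex-transitive, so its diameter equals the eccentricity of the identity:

```python
from collections import deque

# Toy example: Cayley graph of G = S_4 with A = {(0 1), (0 1 2 3)}.
# Words are over A together with the inverses of A.
n = 4
identity = tuple(range(n))
gens = [(1, 0, 2, 3), (1, 2, 3, 0)]                          # (0 1), (0 1 2 3)
gens += [tuple(g.index(i) for i in range(n)) for g in gens]  # their inverses

def compose(g, h):
    """Permutation product: apply h, then g (tuples map i -> g[i])."""
    return tuple(g[h[i]] for i in range(n))

# BFS from the identity; dist[g] = word length of g in A u A^{-1}
dist = {identity: 0}
queue = deque([identity])
while queue:
    g = queue.popleft()
    for a in gens:
        ga = compose(g, a)
        if ga not in dist:
            dist[ga] = dist[g] + 1
            queue.append(ga)

print(len(dist), max(dist.values()))  # |S_4| = 24, and diam for this A
```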
Martin Kassabov: On the complexity of counting homomorphism to finite groups (joint with Eric Samperton)
I will discuss the complexity of determining the number of homomorphisms from finitely presented groups to finite groups, which turns out to be a #P-complete problem. Somewhat surprisingly, this is the case even when the target is a finite nilpotent group.
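On a tiny instance the counting problem is easy to brute-force: a homomorphism from a finitely presented group is determined by the images of the generators, subject to the relators mapping to the identity. The sketch below (my own toy example, not from the talk) counts homomorphisms from the presented group $\langle x,y \mid x^2, y^2, (xy)^3\rangle$ (a presentation of $S_3$) into $S_3$:

```python
from itertools import product, permutations

# Enumerate all pairs of images (x, y) in S_3 x S_3 and keep those
# for which every relator maps to the identity.
n = 3
S3 = list(permutations(range(n)))
e = tuple(range(n))

def mul(g, h):
    """Permutation product: apply h, then g."""
    return tuple(g[h[i]] for i in range(n))

def power(g, k):
    r = e
    for _ in range(k):
        r = mul(r, g)
    return r

count = sum(
    1
    for x, y in product(S3, repeat=2)
    if power(x, 2) == e and power(y, 2) == e and power(mul(x, y), 3) == e
)
print(count)  # 10 homomorphisms (= |End(S_3)|)
```

The #P-hardness in the talk is of course about the general problem; enumeration like this is exponential in the presentation size.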
Krzysztof Krupinski: Amenable theories
I will introduce the notion of an amenable theory as a natural counterpart of the notion of a definably amenable group. Roughly speaking, amenability means that there are invariant (under the action of the group of automorphisms of a sufficiently saturated model) Borel probability measures on various type spaces. I will discuss several equivalent definitions and give some examples. Then I will discuss the result that each amenable theory is G-compact. This is part of my recent paper (still in preparation) with Udi Hrushovski and Anand Pillay.
Alex Lubotzky: First order rigidity of high-rank arithmetic groups
The family of high rank arithmetic groups is a class of groups playing an important role in various areas of mathematics. It includes SL(n,Z) for n>2, SL(n,Z[1/p]) for n>1, their finite index subgroups and many more.
A number of remarkable results about them have been proven, including Mostow rigidity, Margulis superrigidity and quasi-isometric rigidity. We will talk about a further type of rigidity: "first order rigidity" (also called quasi-axiomatisability in some articles). Namely, if G is such a non-uniform characteristic zero arithmetic group and H is a finitely generated group which is elementarily equivalent to it, then H is isomorphic to G. This stands in contrast with Zlil Sela's remarkable work, which implies that free groups, surface groups and hyperbolic groups (many of which are low-rank arithmetic groups) have many non-isomorphic finitely generated groups which are elementarily equivalent to them. Joint work with Nir Avni and Chen Meiri.
Chloe Perin: Forking independence in the free group
Model theorists define, in structures whose first-order theory is "stable" (i.e. suitably nice), a notion of independence between elements. This notion coincides for example with linear independence when the structure considered is a vector space, and with algebraic independence when it is an algebraically closed field. Sela showed that the theory of the free group is stable. In a joint work with Rizos Sklinos, we give an interpretation of this model theoretic notion of independence in the free group using Grushko and JSJ decompositions.
Laura Ioana Ciobanu Radomirovic: Equations in groups, formal languages and complexity
For a group G, solving equations where the coefficients are elements of G and the solutions take values in G can be seen as akin to solving Diophantine equations in number theory, answering questions from linear algebra or, more generally, algebraic geometry. Moreover, the question of satisfiability of equations fits naturally into the framework of the first order theory of G. I will start the talk with a survey containing results from both mathematics and computer science about solving equations in infinite nonabelian groups, with emphasis on free and hyperbolic groups. I will then show how for these groups the solutions to equations can be beautifully described in terms of formal languages, and that the latest techniques involving string compression produce optimal space complexity algorithms. If time allows, I will show how some of the results can carry over to certain group extensions. This is joint work, in several projects, with Volker Diekert, Murray Elder, Derek Holt and Sarah Rees.
Alan Reid: Distinguishing certain triangle groups by their finite quotients
We prove that certain arithmetic Fuchsian triangle groups are profinitely rigid in the sense that they are determined by their set of finite quotients amongst all finitely generated residually finite groups. Amongst the examples are the (2,3,8) triangle group.
Dan Segal: Small profinite groups
I will explore the connections between various conditions of smallness on a profinite group, such as being (topologically) finitely generated, having only finitely many open subgroups of each finite index, or having all finite-index subgroups open, and the extent to which these can be characterized by algebraic properties of the associated system of finite groups.
Reference:
Remarks on profinite groups having few open subgroups, J. Combinatorial Algebra 2 (2018), 87-101.

Zlil Sela: Basic conjectures and preliminary results in non-commutative algebraic geometry
Algebraic geometry studies the structure of varieties over fields and commutative rings. Starting in the 1960's, ring theorists (Cohn, Bergman and others) have tried to study the structure of varieties over some non-commutative rings (notably free associative algebras). The lack of unique factorization, which they tackled and studied in detail, and the pathologies that they were aware of, prevented any attempt to prove or even speculate what the properties of such varieties can be. Using techniques and concepts from geometric group theory and from low dimensional topology, we formulate concrete conjectures about the structure of these varieties, and prove preliminary results in the direction of these conjectures.
Katrin Tent: Burnside groups of relatively small odd exponent
(joint work with A. Atkarskaya and E. Rips)
The free Burnside group B(n,m) of exponent m is the quotient of the free group on n generators by the normal subgroup generated by all mth powers. Novikov and Adyan showed that for odd m sufficiently large and n at least 2, the group B(n,m) is infinite. In subsequent work, Adyan improved the lower bound on m, the latest bound being 101.
Our approach to Burnside groups combines geometric and combinatorial aspects. We first define a collection of canonical relators for the normal subgroup with nice combinatorial properties. These properties allow us to use generalized small cancellation methods, leading to a version of Greendlinger's Lemma sufficient to deduce that these groups are infinite.
Todor Tsankov: A model-theoretic approach to rigidity in ergodic theory
(joint work with Tomás Ibarlucía)
I will discuss a new approach to some rigidity results in the ergodic theory of non-amenable groups via model theory. I will explain how ergodic theory can be formalized in continuous logic and how the model-theoretic notion of algebraic closure plays an important role in understanding strongly ergodic systems.
No prior knowledge of model theory or ergodic theory will be assumed.
George Willis: Computing the scale
The scale function on a totally disconnected, locally compact (t.d.l.c.) group is continuous and takes positive integer values. In special cases, such as when the group has a linear [1] or geometric [2] representation, it may be computed. However not all t.d.l.c. groups have representations which allow computation of the scale and other features of the group and the extent to which it is possible to obtain such representations is not known.
[1] H. Glöckner, Scale functions on linear groups over local skew fields, J. Algebra 205, 525-541 (1998).
[2] G. A. Willis, The structure of totally disconnected, locally compact groups, Math. Ann. 300, 341-363 (1994).
Question: Estimate the value to the nearest tenth
$$\sqrt{47}$$
But I don't know how I could estimate without using the calculator
Thank You and Help is appreciated
The value $\sqrt{47}$ is between $\sqrt{36}$ and $\sqrt{49}$, which means $6<\sqrt{47}<7$. Now we can use casework. To find $n$ with $n^2=47$, we first try a number such as $6.7$: we find $6.7^2=44.89$, so $6.7<\sqrt{47}$. Next, we try $6.9$: $6.9^2=47.61$. We try $6.8$ for good measure, and we find $6.8^2=46.24$. Therefore $\sqrt{47}$ rounded to the nearest tenth is $6.9$.
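The casework above can be mechanised in a few lines (a throwaway sketch of my own, just squaring trial values):

```python
# Trial squaring: bracket sqrt(47) by squaring guesses
for x in (6.7, 6.8, 6.85, 6.9):
    print(x, "->", x * x)

# 6.85^2 = 46.9225 < 47 < 47.61 = 6.9^2, so sqrt(47) lies in (6.85, 6.9)
# and therefore rounds to 6.9
```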
Typically the binomial theorem is used in this type of situation as follows: $$\sqrt{47}=7\sqrt{\frac{47}{49}}=7\sqrt{1-\frac2{49}}$$ $$\therefore\sqrt{47}\approx7\left(1+\frac12\left(-\frac2{49}\right)\right)=\frac{48}7\approx6.9$$ With the final answer rounded to the nearest tenth. But all you actually need to do is find the values of some estimates squared and conclude the rounded value of the answer from these results. $$6.8^2=46.24$$ $$6.85^2=46.9225$$ $$6.9^2=47.61$$ So from this you can see that $$6.85^2\lt 47 \lt 6.9^2$$ So you can conclude that $$6.85\lt\sqrt{47}\lt6.9$$ So the value of $\sqrt{47}$ is $6.9$ to the nearest tenth.
When dealing with algebraic integers / numbers of degree two (radicals $\sqrt N$ basicly, $N$ an integer,) the (subjectively) best way to approximate by using rational numbers is to build the
continued fraction of the number.
This answer goes in this direction. (Using the calculator, we get
a priori some $6.855654600\dots$ value, and we have to consider at least the approximation $6.855$, then round to $6.9$.)
Let us compute the continued fraction of the given number $a=\sqrt{47}$:
$$ \begin{aligned} a &= \boxed{6} + \color{blue}{(a-6)} \\ &= \boxed{6}+\frac1 {\displaystyle \frac{a+6}{a^2-6^2}} = \boxed{6}+\frac1 {\displaystyle \frac{a+6}{11}} = \boxed{6}+\frac1 {\displaystyle \boxed{1}+\frac{a-5}{11}} \\ &= \boxed6+\frac1 {\displaystyle \boxed1 + \frac1 {\displaystyle \frac{11(a+5)}{a^2-5^2}}} = \boxed6+\frac1 {\displaystyle \boxed1 + \frac1 {\displaystyle \frac{a+5}{2}}} \\ &= \boxed6+\frac1 {\displaystyle \boxed1 + \frac1 {\displaystyle \boxed 5 + \frac{a-5}{2}}} \\ &= \boxed6+\frac1 {\displaystyle \boxed1 + \frac1 {\displaystyle \boxed 5 + \frac 1 {\displaystyle \boxed 1 +\frac {a-6}{11}}}} \\ &= \boxed6+\frac1 {\displaystyle \boxed1 + \frac1 {\displaystyle \boxed 5 + \frac 1 {\displaystyle \boxed 1 +\frac1 {\displaystyle \boxed{12} + \color{blue}{(a-6)}}}}} \\ &=\dots \end{aligned} $$ Note that the $\color{blue}{(a-6)}$ was "already computed". So, repeating the computation from the first blue position and substituting it recursively at the last blue position, we get better and better approximations. The continued fraction collects, in this sense, only the "boxed" data, and gives a good way to denote this number: $$ a =\sqrt{47} =[6;\ 1,5,1,12,\ 1,5,1,12\ ,\ \dots] =[6;\ \overline{1,5,1,12}] $$ Now the fractions obtained by truncating this infinite representation are better and better approximations of the given number $a$. Let us see them: $$ \begin{aligned} x_0 &= [6;] = 6\ ,\\ x_1 &= [6;1] = 6+1/1=7\ ,\\ x_2 &= [6;1,5] = 6+1/(1+1/5)=41/6=6.8(3)\ ,\\ x_3 &= [6;1,5,1] = 6+1/(1+1/(5+1/1))=48/7=6.(857142)\ ,\\ x_4 &= [6;1,5,1,12] = 6+1/(1+1/(5+1/(1+1/12)))=617/90=6.8(5)\ ,\\ \end{aligned} $$ and so on. Observe that the "convergents" above are alternately less and more than $a=\sqrt{47}$. At this point we can enclose $a$ between the numbers $6.855555555555\dots$ and $6.857142857142\dots$, so the best approximation to the nearest tenth is obtained by rounding to $6.9$.
In fact, some computer algebra software like sage computes easily something like:
sage: K.<a> = QuadraticField(47)
sage: c = a.continued_fraction()
sage: c
[6; (1, 5, 1, 12)*]
sage: c.convergents()
lazy list [6, 7, 41/6, ...]
sage: for k in [0..10]:
....:     ck = c.convergent(k)
....:     print "%s. convergent = %s = %s" % (k, ck, ck.n())
....:
0. convergent = 6 = 6.00000000000000
1. convergent = 7 = 7.00000000000000
2. convergent = 41/6 = 6.83333333333333
3. convergent = 48/7 = 6.85714285714286
4. convergent = 617/90 = 6.85555555555556
5. convergent = 665/97 = 6.85567010309278
6. convergent = 3942/575 = 6.85565217391304
7. convergent = 4607/672 = 6.85565476190476
8. convergent = 59226/8639 = 6.85565458965158
9. convergent = 63833/9311 = 6.85565460208356
10. convergent = 378391/55194 = 6.85565460013770
(The above 10th convergent is for instance the best fractional approximation of $a$ with denominator $\le 55194$. The bigger the coefficients appearing in the continued fraction, the better the approximation, and the smaller the $k$ needed to reach a given bound. In our case, each time we pass through the $12$ in the continued fraction we get a "jump" to a better approximation, well, just my feeling...)
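For readers without Sage, the convergents can also be generated in plain Python from the periodic coefficient list (a sketch of my own) via the standard recurrence $p_k = a_k p_{k-1} + p_{k-2}$, $q_k = a_k q_{k-1} + q_{k-2}$:

```python
from itertools import chain, cycle, islice

def convergents(coeffs):
    """Yield convergents (p_k, q_k) of a continued fraction, using
    p_k = a_k p_{k-1} + p_{k-2} and q_k = a_k q_{k-1} + q_{k-2}."""
    p, p_prev = 1, 0          # p_{-1}, p_{-2}
    q, q_prev = 0, 1          # q_{-1}, q_{-2}
    for a in coeffs:
        p, p_prev = a * p + p_prev, p
        q, q_prev = a * q + q_prev, q
        yield p, q

cf = chain([6], cycle([1, 5, 1, 12]))      # sqrt(47) = [6; 1,5,1,12, ...]
for p, q in islice(convergents(cf), 5):
    print(f"{p}/{q} = {p / q:.11f}")
```

The first five pairs reproduce the Sage output above: 6/1, 7/1, 41/6, 48/7, 617/90.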
Let $f(x) = y = \sqrt x$
At $x=49$ ,$y=\sqrt{49} = 7$
Let $dx\approx\Delta x = -2$ so that $x+\Delta x = 49-2=47$
Now, $dy = \frac{1}{2\sqrt x}\,dx = -2\cdot\frac{1}{2\sqrt{49}}\approx-0.1428\cdots$
$\Delta y \approx dy = - 0.1428\cdots$
So, $f(x+\Delta x) = f(47) = y +\Delta y \approx 7-0.14 \approx 6.86$
$$\sqrt{47} \approx 6.86 \quad\text{or}\quad \sqrt{47} \approx 6.9 $$
This is how to do it without a calculator. We will show in detail how to estimate the square root of $47$.
$$\begin{array}{r} & & 6 &. & 8 & 5 & 5 \\ & &-- &. &-- &-- &--\\ &) &47 &. &00 &00 &00\\ 6 & &36 &.\\ & &-- \\ & &11 &. &00\\ 128 & &10 &. &24\\ & &-- &. &-- \\ & & &. &76 &00 \\ 1365 & & &. &68 &25 \\ & & &. &-- &-- \\ & & &. & 7 &75 &00\\ 13705 & & &. & 6 &85 &25 \\ & & &. &-- &-- &-- \\ & & &. & &89 &75\\ \end{array}$$
To see whether or not to round off the last digit, try $5$ for the next digit. We would get $137105 \times 5 = 685525 < 897500$. So $6.856$ is closer to the true answer.
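For completeness, here is the same digit-by-digit scheme in Python (my own sketch, not from the answer): at each step the algorithm brings down a pair of digits and finds the largest digit $d$ with $(20p+d)\,d \le$ remainder, where $p$ is the root built so far, exactly the $128\times 8$, $1365\times 5$, $13705\times 5$ steps above:

```python
def sqrt_digit_by_digit(n, frac_digits):
    """Long-division square root of the integer n, as in the scaffold.
    At each step, bring down a digit pair and take the largest digit d
    with (20*p + d)*d <= remainder, where p is the root built so far."""
    s = str(n)
    if len(s) % 2:                          # pad to whole digit pairs
        s = "0" + s
    pairs = [int(s[i:i + 2]) for i in range(0, len(s), 2)]
    pairs += [0] * frac_digits              # pairs of zeros past the point
    remainder, p, digits = 0, 0, []
    for pair in pairs:
        remainder = remainder * 100 + pair
        d = 9
        while (20 * p + d) * d > remainder:
            d -= 1
        remainder -= (20 * p + d) * d
        p = 10 * p + d
        digits.append(str(d))
    int_len = len(s) // 2
    return "".join(digits[:int_len]) + "." + "".join(digits[int_len:])

print(sqrt_digit_by_digit(47, 3))   # -> 6.855
```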
The Maclaurin series of $\sqrt{49-x}$ is $$\sqrt{49-x}=7-\frac{x}{14}-\frac{x^2}{2744}-\cdots$$ At $x=2$, $$\sqrt{47}\approx 7-\frac{2}{14} = 7-\frac{1}{7}\approx 6.857$$
There is also the Scaffold Square Root Algorithm (Digit by Digit with Examples): $$ \require{enclose} \begin{array}{rl} \color{#090}{6}.\phantom{0}\color{#090}{8}\,\phantom{0}\color{#090}{5}\,\phantom{0}\color{#090}{5}\\[-4pt] \enclose{radical}{47.00\,00\,00}\\[-4pt] \underline{36}\phantom{.00\,00\,00}&\quad\leftarrow\color{#090}{6}\cdot\color{#090}{6}\\[-4pt] 11\,00\phantom{\,00\,00}\\[-4pt] \underline{10\,24}\phantom{\,00\,00}&\quad\leftarrow\color{#C00}{12}\color{#090}{8}\cdot\color{#090}{8}\\[-4pt] 76\,00\phantom{\,00}\\[-4pt] \underline{68\,25}\phantom{\,00}&\quad\leftarrow\color{#C00}{136}\color{#090}{5}\cdot\color{#090}{5}\\[-4pt] 7\,75\,00\\[-4pt] \underline{6\,85\,25}&\quad\leftarrow\color{#C00}{1370}\color{#090}{5}\cdot\color{#090}{5}\\[-4pt] \phantom{0.00}\,89\,75 \end{array} $$ The digits in red are twice the digits collected so far on top of the vinculum.
There are many methods to compute square roots of a number $s$. One that predates calculators by several millennia is Heron's method, a.k.a. the Babylonian method (which is also an easy case of Newton's method), and it roughly says that if you have a first not-too-bad estimate $x$, then
$$\dfrac12 (x + \dfrac{s}{x})$$
-- the average of $x$ and $s/x$ -- will be an even better estimate for $\sqrt s$.
For $x=6$, the calculation is easily done without a calculator and gives $\frac{83}{12} = 6 \frac{11}{12}$.
For $x=7$, the calculation is easily done without a calculator and gives $\frac{48}{7} = 6 \frac{6}{7}$.
Note that both are $\approx 6.9$. You can of course take either value and put it into the formula again to get better estimates.
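A couple of Heron iterations with exact fractions (a quick sketch of my own) reproduce the estimate above and then improve it rapidly:

```python
from fractions import Fraction

def heron(s, x0, steps):
    """Iterate Heron's rule x <- (x + s/x)/2 with exact rational arithmetic."""
    x = Fraction(x0)
    for _ in range(steps):
        x = (x + Fraction(s) / x) / 2
    return x

print(heron(47, 7, 1), float(heron(47, 7, 1)))  # 48/7 ~ 6.857, as above
print(heron(47, 7, 2), float(heron(47, 7, 2)))  # 4607/672 ~ 6.8556548
```

The convergence is quadratic: each iteration roughly doubles the number of correct digits.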
Hi, can someone provide me some self-study material for condensed matter theory? I've done QFT previously, for which I could happily read Peskin supplemented with David Tong. Can you please suggest some references along those lines? Thanks
@skullpatrol The second one was in my MSc and covered considerably less than my first and (I felt) didn't do it in any particularly great way, so distinctly average. The third was pretty decent - I liked the way he did things and was essentially a more mathematically detailed version of the first :)
2. A weird particle or state that is made of a superposition of a torus region with clockwise momentum and anticlockwise momentum, resulting in one that has no momentum along the major circumference of the torus but still nonzero momentum in directions that are not pointing along the torus
Same thought as you; however, I think the major challenge of such a simulator is the computational cost. GR calculations, with their highly nonlinear nature, might be more costly than a computation of a protein.
However I can see some ways of approaching it. Recall how Slereah was building some kind of spacetime database; that could be the first step. Next, one might look for machine learning techniques to help with the simulation by using the classifications of spacetimes, since machines are known to perform very well on sign problems, as a recent paper has shown
Since the GR equations are ultimately a system of 10 nonlinear PDEs, it might be possible that the solution strategy has some relation to the class of spacetime under consideration; that might help heavily reduce the parameters needed to simulate them
I just mean this: The EFE is a tensor equation relating a set of symmetric 4 × 4 tensors. Each tensor has 10 independent components. The four Bianchi identities reduce the number of independent equations from 10 to 6, leaving the metric with four gauge fixing degrees of freedom, which correspond to the freedom to choose a coordinate system.
@ooolb Even if that is really possible (I always can talk about things from a non-joking perspective), the issue is that 1) unlike other people, I cannot incubate my dreams for a certain topic due to Mechanism 1 (conscious desires have reduced probability of appearing in dreams), and 2) in 6 years, my dreams have yet to show any sign of revisiting the exact same idea, and there are no known instances of either sequel dreams or recurring dreams
@0celo7 I felt this aspect can be helped by machine learning. You can train a neural network with some PDEs of a known class with some known constraints, and let it figure out the best solution for some new PDE after say training it on 1000 different PDEs
Actually that makes me wonder: is the space of all coordinate choices larger than the set of all possible moves in Go?
enumaris: From what I understood from the dream, the warp drive shown here may be some variation of the Alcubierre metric with a global topology that has 4 holes in it, whereas the original Alcubierre drive, if I recall, doesn't have holes
orbit stabilizer: h bar is my home chat, because this is the first SE chat I joined. Maths chat is the 2nd one I joined, followed by periodic table, biosphere, factory floor and many others
Btw, since gravity is nonlinear, do we expect if we have a region where spacetime is frame dragged in the clockwise direction being superimposed on a spacetime that is frame dragged in the anticlockwise direction will result in a spacetime with no frame drag? (one possible physical scenario that I can envision such can occur may be when two massive rotating objects with opposite angular velocity are on the course of merging)
Well, I'm a beginner in the study of General Relativity, ok? My knowledge of the subject is based on books like Schutz, Hartle, Carroll and introductory papers. My knowledge of quantum mechanics is still poor.
So, what I meant by "gravitational double slit experiment" is: is there a gravitational analogue of the double slit experiment for gravitational waves?
@JackClerk the double slits experiment is just interference of two coherent sources, where we get the two sources from a single light beam using the two slits. But gravitational waves interact so weakly with matter that it's hard to see how we could screen a gravitational wave to get two coherent GW sources.
But if we could figure out a way to do it then yes, GWs would interfere just like light waves.
Thank you @Secret and @JohnRennie. But to conclude the discussion, I want to put a "silly picture" here: imagine a huge double slit plate in space close to a strong source of gravitational waves. Then, like water waves and light, will we see the pattern?
So, if the source (like a black hole binary) is sufficiently far away, then in the regions of destructive interference spacetime would have a flat geometry, and then with we put a spherical object in this region the metric will become Schwarzschild-like.
if**
Pardon, I just spent some naive-philosophy time here with these discussions**
The situation was even more dire for Calculus and I managed!
This is a neat strategy I have found: revision becomes more bearable when I have The h Bar open on the side.
In all honesty, I actually prefer exam season! At all other times, as I have observed in this semester at least, there is nothing exciting to do. This system of tortuous panic followed by a reward is obviously very satisfying.
My opinion is that I need you, Kaumudi, to decrease the probability of the h bar having software-infrastructure conversations, which confuse me like hell and are why I took refuge in the maths chat a few weeks ago
(Not that I have questions to ask or anything; like I said, it is a little relieving to be with friends while I am panicked. I think it is possible to gauge how much of a social recluse I am from this, because I spend some of my free time hanging out with you lot, even though I am literally inside a hostel teeming with hundreds of my peers)
that's true. Though back in high school, regardless of code, our teacher taught us to always indent your code to allow easy reading and troubleshooting. We were also taught the 4-space indentation convention
@JohnRennie I wish I could just tab because I am also lazy, but sometimes tab inserts 4 spaces while other times it inserts 5-6 spaces, thus screwing up a block of if-then conditions in my code, which is why I had no choice
I currently automate almost everything from job submission to data extraction, and later on, with the help of the machine learning group in my uni, we might be able to automate a GUI library search thingy
I can do all tasks related to my work without leaving the text editor (of course, such text editor is emacs). The only inconvenience is that some websites don't render in a optimal way (but most of the work-related ones do)
Hi to all. Does anyone know where I could write MATLAB code online (for free)? Apparently another one of my institution's great inspirations is to have a MATLAB-oriented computational physics course without having MATLAB on the university's PCs. Thanks.
@Kaumudi.H Hacky way: 1st thing is that $\psi\left(x, y, z, t\right) = \psi\left(x, y, t\right)$, so no propagation in $z$-direction. Now, in '$1$ unit' of time, it travels $\frac{\sqrt{3}}{2}$ units in the $y$-direction and $\frac{1}{2}$ units in the $x$-direction. Use this to form a triangle and you'll get the answer with simple trig :)
@Kaumudi.H Ah, it was okayish. It was mostly memory based. Each small question was of 10-15 marks. No idea what they expect me to write for questions like "Describe acoustic and optic phonons" for 15 marks!! I only wrote two small paragraphs...meh. I don't like this subject much :P (physical electronics). Hope to do better in the upcoming tests so that there isn't a huge effect on the gpa.
@Blue Ok, thanks. I found a way by connecting to the servers of the university (the program isn't installed on the PCs in the computer room, but if I connect to the university's server, which means running another environment remotely, I can find an older version of MATLAB). But thanks again.
@user685252 No; I am saying that it has no bearing on how good you actually are at the subject - it has no bearing on how good you are at applying knowledge; it doesn't test problem solving skills; it doesn't take into account that, if I'm sitting in the office having forgotten the difference between different types of matrix decomposition or something, I can just search the internet (or a textbook), so it doesn't say how good someone is at research in that subject;
it doesn't test how good you are at deriving anything - someone can write down a definition without any understanding, while someone who can derive it, but has forgotten it probably won't have time in an exam situation. In short, testing memory is not the same as testing understanding
If you really want to test someone's understanding, give them a few problems in that area that they've never seen before and give them a reasonable amount of time to do it, with access to textbooks etc. |
Kinematics is all about motion. And since we are now moving away from pure electrostatics, it is about time to introduce some motion of particles.
In exercise set 6 we ask you to simulate the motion of a particle in an electric field using numeric methods. This is likely something you have seen before, so we’ll just briefly restate the most important concepts here.
Numerical integration
To determine the motion of any particle, we need to find the position as a function of time. This is usually done by observing that the second derivative of the position, namely the acceleration, can be integrated twice over time to give back the position. However, evaluating the integral
$$\mathbf r(t) = \int \int \mathbf a(t, \mathbf v(t), \mathbf r(t)) ~dt~dt$$
is no trivial task, except for the cases where $\mathbf a$ is constant. That’s why numerical integration is so useful when working with more complicated problems, especially when $\mathbf a$ is a function of $\mathbf r$ and $\mathbf v$.
In numerical integration, we divide the time into discrete steps, where we at each step recalculate the acceleration, velocity and position. One of the simplest of these schemes is the Euler-Cromer scheme.
In the Euler-Cromer scheme, we increase the current velocity by adding the acceleration at each time step. The acceleration is then a function of the current time, and the position and velocity at the previous time step.
$$\mathbf v(t_{i+1}) = \mathbf v(t_i) + \mathbf a(t_{i}, \mathbf r(t_i), \mathbf v(t_i)) \cdot \Delta t$$
Similarly, we find the position using the velocity we just calculated for the current time step:
$$\mathbf r(t_{i+1}) = \mathbf r(t_i) + \mathbf v(t_{i+1}) \cdot \Delta t$$
We have to do this from the first step in time till the last, and in terms of code this is translated into a for-loop. In Python we may express this as a small script:
from pylab import *

dt = 1e-3
t0 = 0
t1 = 10
t = linspace(t0, t1, int((t1 - t0)/dt))

r1 = zeros((len(t), 3))
v1 = zeros((len(t), 3))

r1[0] = [0.0, 0.0, 0.0]
v1[0] = [0.0, 0.0, 0.0]

for i in range(len(t)-1):
    a1 = array([5.0, 0.0, 0.0])
    v1[i+1] = v1[i] + a1*dt
    r1[i+1] = r1[i] + v1[i+1]*dt

figure()
plot(t, r1[:,0], label="Motion in x direction")
xlabel("$t$", fontsize=16)
ylabel("$x$", fontsize=16)
legend()
show()
To run this in an IPython terminal, just save it to a file named “kinematics.py” and launch it by running the following commands in the folder where you saved the file:

ipython --pylab
run kinematics.py
Here we are using vectors, stored in arrays created by zeros. The zeros function does in this case generate an array of 3-dimensional vectors, where the first argument, len(t), tells zeros that we want as many such vectors as there are time steps. The second argument, 3, tells zeros that each vector is three-dimensional. (The linspace call simply generates the array of time points from t0 to t1.)
The reason for using vectors (instead of 3 arrays of length len(t)) is that it makes it simple to write the following two statements in a way that is easy to read and remember:
r1[0] = [0.0, 0.0, 0.0]
v1[0] = [0.0, 0.0, 0.0]
This sets the first vectors of our position and velocity, also known as the initial conditions. In this case we have chosen the components of both to be $x = 0$, $y = 0$ and $z = 0$.
The for-loop is basically what we defined as the Euler-Cromer scheme above, with a constant acceleration vector in the $x$-direction:
for i in range(len(t)-1):
    a1 = array([5.0, 0.0, 0.0])
    v1[i+1] = v1[i] + a1*dt
    r1[i+1] = r1[i] + v1[i+1]*dt
At each time step the position and velocity are not only calculated, but also stored to arrays that make them readily available for plotting. The plotting is done for the first component of all $\mathbf r$-vectors as a function of $t$:
plot(t, r1[:,0], label="Motion in x direction")
This results in a plot looking something like this:
For the case where we are working with a more physical problem, we would have to calculate the force $\mathbf F$ at each time step and find the acceleration from Newton’s second law, $\mathbf a = \frac{\mathbf F}{m}$.
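As a sketch of that more physical case, here is a minimal Euler-Cromer loop (using plain NumPy rather than pylab) for a charged particle in a uniform electric field. The charge, mass, field strength and step size below are made-up illustrative values, not part of the exercise set:

```python
import numpy as np

# Assumed parameters: an electron-like particle in a uniform field E.
q, m = 1.6e-19, 9.1e-31        # hypothetical charge (C) and mass (kg)
E = np.array([1e3, 0.0, 0.0])  # uniform field along x, in V/m

dt = 1e-12                     # time step, chosen small for illustration
n = 1000
r = np.zeros((n, 3))
v = np.zeros((n, 3))

for i in range(n - 1):
    F = q * E                    # force from the field at this step
    a = F / m                    # Newton's second law
    v[i+1] = v[i] + a * dt       # Euler-Cromer: update velocity first...
    r[i+1] = r[i] + v[i+1] * dt  # ...then position with the new velocity
```

Since the field here is constant, the final velocity should match the analytic result $v_x = (qE_x/m)\,t$, which is a handy check when you replace E with a position-dependent field.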
You should now be ready to play around with your own kinematic problems. |
Introduction
I thought there might be an error in the original statement of the question, and the OP was no longer around to ask. So I assumed that the tape was read-only everywhere, and wrote a first proof based on that assumption, motivated by the fact that the TM has full Turing power outside the input part of the tape if it can write there, which induces the false belief that it can recognize any RE language.
However, that is not the case: the restriction on writing on the input part of the tape implies that only finite information can be extracted from the input, limited by the number of states on entry and exit of that part of the tape (combined with the side of entry and exit). InstructedA is to be credited for remarking in a comment that there is a problem with recognizing any RE language, since it is not possible to make a copy of the input without ever writing to the original input area.
Hence I wrote a second proof that assumes that only the input section of the tape is read-only, the rest being read-write.
I am keeping both proofs here, as the first did help me find the solution, even though it is not needed to understand the second proof, is more complex, and is subsumed by the second proof; it can be skipped. However, the weaker proof has the advantage of being constructive (it yields an FSA equivalent to the Turing machine), while the more general result is not constructive.
However, I am giving the later and more powerful result first. I am a bit surprised that I was not able to find this result, even without proof, elsewhere on the net, or by asking some competent users, and any reference to published work would be welcome.
Contents:

1. Turing machines that do not overwrite input accept only regular languages (this proof is not constructive).
2. Turing machines with read-only tapes accept only regular languages (may be skipped as subsumed by the previous proof, but it uses a different approach, which has the advantage of being constructive).

Turing machines that do not overwrite input accept only regular languages
We recall that, while the TM does not overwrite its input, and is thus read-only on its input, the TM can read and write on the rest of the tape. The proof relies on the fact that the observational behavior of the TM over an unknown input can produce only a finite number of different cases. Hence, though the TM has full Turing power just by relying on the rest of its tape, its information on the input, which can be any string in $\Sigma^*$, is finite, so it can compute only on a finite number of different cases. This gives a different view of the finite character of regular languages, behavioral rather than structural.
We assume that the TM accepts when it enters an accept state.
Proof.
We define an
input restricted computation (IRC) as a (read-only) computation of the TM such that the TM head stays on the input part of the tape, except possibly for the last transition, which may move it to a cell immediately to the left or the right of the input area.
A
left input restricted computation is an IRC that starts on the leftmost symbol of the input. A right input restricted computation is an IRC that starts on the rightmost symbol of the input.
We first prove that, for left input restricted computations that start in state $p$, the following languages are regular:
the language $K_{Lp\to Lq}$ of input strings such that there is a left input restricted computation, starting in state $p$, that ends on the first cell left of the leftmost input symbol in state $q$;
the language $K_{Lp\to Rq}$ of input strings such that there is a left input restricted computation, starting in state $p$, that ends on the first cell right of the rightmost input symbol in state $q$;
the language $A_{Lp}$ of input strings such that there is a left input restricted computation, starting in state $p$, that reaches an accept state.
And similarly, for right input restricted computations starting in state $p$, the following similarly defined languages are regular: $K_{Rp\to Lq}$, $K_{Rp\to Rq}$, and $A_{Rp}$.
The 6 proofs rely on the fact that
two-way non-deterministic finite state automata (2NFA) recognize regular sets (see Hopcroft+Ullman 1979, pp. 36-41, and exercise 2.18, page 51). A 2NFA works like a read-only TM on a tape limited to its input, starting initially from the leftmost symbol, and accepting by moving beyond the right end in an accepting state.
In each of the 6 cases, the proof is done by building a 2NFA that mimics the input restricted computations, but with some extra transitions to make sure it can start from the leftmost cell and accept the language by exiting from the rightmost end in an accepting state. For the $K_{??\to ??}$ languages, the original accepting states of the TM are changed into states leading to a halting non-accepting computation. In two cases, it may be necessary to add an extra cell with a new guard symbol on the left to detect TM computations that would terminate on the left end, so as to make them terminate on the right end.
These languages are defined for all combinations of states $p$ and $q$ of the original Turing machine. They represent all that can be observed (hence known and computed on) of the input by the TM.
If $k$ is the number of states, we thus define $4k^2$ languages $K_{??\to ??}$ and $2k$ languages $A_{??}$, hence a total of $4k^2+2k$ languages. Actually, some of these languages can be equal.
These are the only possible input restricted computations of the TM starting on one end of the input. Hence the computations induced by each input string (outside the input section of the tape) are characterised by the set of such languages the input belongs or does not belong to, hence by an intersection of each of these $4k^2+2k$ languages or its complement in $\Sigma^*$. All these intersections are finite intersections of regular languages, or of their complements which are also regular, and are therefore regular.
As a consequence, the set of these intersections defines a partition $\mathcal P$ of $\Sigma^*$ into at most $2^{4k^2+2k}$ regular languages (at most because some initial languages may be equal, and some intersections may be empty). All strings belonging to the same equivalence class can produce exactly the same behavior, as seen from the ends of the input. This implies that they cannot be distinguished by the computation of the Turing machine, if you abstract away what happens in the read-only input area.
If we take two strings $u$ and $v$ in the same equivalence class of $\mathcal P$, we can prove, by induction on the number of times the input area is entered, that for any accepting computation of the TM on $u$, there is an accepting TM computation on $v$ that is identical everywhere outside the input area. Hence, either all strings of an equivalence class are accepted, or none is. As a consequence, the language accepted by the TM is a union of equivalence classes of $\mathcal P$. Hence it is a finite union of regular languages, and thus it is a regular language.
To be very complete, we skipped the case of the empty input string. In this case, we just have a normal TM that can read or write anywhere. If it reaches an accepting state, the empty string is in the language; else it is not. But that has little effect on the fact that the language recognized is regular.
Of course, it is not decidable whether an equivalence class is or is not in the language (the same holds for the empty string). This is a non-constructive proof.
QED
Turing machines with read-only tapes accept only regular languages
This is subsumed by the previous result. It is kept as it uses a different approach, probably less elegant, and it helped me find the previous proof by understanding what matters. But it can well be skipped by readers. However, one advantage of this proof is that it is a constructive proof producing an FSA accepting the language. A sketch of a similar proof is given by Hendrik Jan in his answer to a previous similar question, which assumed the whole tape was read-only.
I assume that the blank symbol that is on the unused part of the tape is never part of the input. This symbol is noted here $\Box$. The TM is supposed to accept when it reaches an accepting state.
The first step of the proof is to show that the head need not ever leave the input area of the tape. We thus analyze what happens when the head moves off the rightmost input symbol. The analysis when moving off the leftmost one is identical.
If we consider that the head has moved on the first blank cell on the right of the input, the TM being in state $q$, we have to understand what can happen. There are actually three cases, which may be simultaneously possible when the TM is non-deterministic:
the TM keeps computing for ever, without the head ever coming back on the input part of the tape;
the TM (a) reaches an accepting state, or (b) stops in a non-accepting state;
the TM head ultimately comes back on the rightmost cell of the input, the finite control being in state $r$.
So we have to analyze the behavior of the TM finite control when computing on a blank half-tape, starting in state $q$ on the leftmost cell of a blank half-tape, infinite towards the right.
Since the TM does not write, and reads only the blank symbol $\Box$, all the finite control can do is move left or right, and configurations are differentiated only by the position of the head, i.e. by an integer. The tape can be replaced by a counter, starting at $1$, that is incremented when the head moves right and decremented when it moves left, provided we consider only transitions that require the blank symbol on the tape. If the counter goes down to $0$, that corresponds to a case of the head coming back on the rightmost input symbol.
A first remark is that we can ignore computations that do not terminate (case 1) or that terminate with rejection (case 2.b), since termination with acceptance is the only relevant case for accepting a string. So we only want to know whether the counter can go down to $0$, and in what state, or whether the computation can reach an accepting state.
We represent the relevant part of the finite state control by a directed graph where the vertices are the states of the TM, and where the edges are the blank transitions, with a weight of +1 or -1 depending on whether the head is supposed to move right or left.
We define $A_R$ as the set of states $q$ from which an accepting state can be reached with a positively weighted path.
We also compute the set $E_R$ of all pairs $(q,r)$ of states such that there is a path of weight $-1$ from $q$ to $r$, but no proper prefix of that path has a negative weight.
Then we modify the finite state control of the TM as follows (now ignoring all transitions on the blank symbol $\Box$):
We create a new accepting state $q_A$ with no transitions.
For every transition $p,a\mapsto R,q$ you add a transition $p,a\mapsto R,q_A$ if $q\in A_R$ (i.e. an acceptance is possible if you are on the rightmost symbol).
For every transition $p,a\mapsto R,q$, and every pair $(q,r)\in E_R$, you add a dummy transition $p,a\mapsto S,r$, where $S$ indicates that the head should not move. Since this is not an allowed move in most automata formalizations, these dummy transitions can be eliminated by transitive closure afterwards.
Once this is completed, we proceed to remove the dummy transitions. For every tape symbol $a$, we build the set $F_a=\{(p,r)\mid \text{there is a dummy transition } p,a\mapsto S,r\}$, and we consider the transitive closure $F_a^*$ of the relation defined by $F_a$. Then, for every transition $r,a\mapsto L,s$ of the original TM, and every pair $(p,r)\in F_a^*$, we add a new transition $p,a\mapsto L,s$. Then all dummy transitions can be removed.
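The closure step above is easy to mechanize. The following sketch (state names and the relation are invented for illustration, not taken from the answer) computes the transitive closure $F_a^*$ of a set of dummy-transition pairs and derives the induced left-moving transitions:

```python
def transitive_closure(pairs):
    """Return the transitive closure of a relation given as a set of pairs."""
    closure = set(pairs)
    changed = True
    while changed:
        changed = False
        for (p, q) in list(closure):
            for (q2, r) in list(closure):
                # chain p -> q and q -> r into p -> r
                if q == q2 and (p, r) not in closure:
                    closure.add((p, r))
                    changed = True
    return closure

# Hypothetical dummy transitions for one tape symbol 'a': p1 -> p2 and p2 -> p3.
F_a = {("p1", "p2"), ("p2", "p3")}
F_a_star = transitive_closure(F_a)

# Hypothetical original left-moving transitions on 'a', as {state: target state}.
left_moves = {"p3": "s0"}

# New transitions induced by the closure: for (p, r) in F_a* with r --a--> L,s,
# add p --a--> L,s.
new_left = {(p, "a"): left_moves[r] for (p, r) in F_a_star if r in left_moves}
```

This is just the standard iterate-until-fixpoint closure; for large state sets a Floyd-Warshall pass over a boolean matrix would be the usual choice.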
We proceed similarly for the moves of the head left of the input part of the tape, thus reversing left and right, and exchanging $+1$ and $-1$ in the graph weights.
Once this has been done, we remove completely all transitions on blank cells, since the corresponding computations are short-circuited by the new transitions. And we now have a new TM with a head that stays on the input all the time, except when accepting with state $q_A$, and that still recognizes the original language.
We now have to make a few cosmetic changes, so as to make this TM behave exactly like a 2NFA (acceptance is only by exiting the input on the right into an accepting state). Then we can rely on the known equivalence between 2NFA and FSA (see for example Hopcroft+Ullman 1979, page 40) to obtain the proof that the language is regular.
QED |
In this section, we'll examine orbits and stabilizers, which will allow us to relate group actions to our previous study of cosets and quotients.
Definition 6.1.0: The Orbit
Let \(S\) be a \(G\)-set, and \(s\in S\). The
orbit of \(s\) is the set \(G\cdot s = \{g\cdot s \mid g\in G\}\), the full set of objects that \(s\) is sent to under the action of \(G\).
There are a few questions that come up when encountering a new group action. The foremost is 'Given two elements \(s\) and \(t\) from the set \(S\), is there a group element such that \(g\cdot s=t\)?' In other words, can I use the group to get from any element of the set to any other? In the case of the action of \(S_n\) on a coin, the answer is yes. But in the case of \(S_4\) acting on the deck of cards, the answer is no. In fact, this is just a question about orbits. If there is only one orbit, then I can always find a group element to move from any object to any other object. This case has a special name.
Definition 6.1.1: Transitive Group Action
A group action is
transitive if \(G\cdot s = S\). In other words, for any \(s, t\in S\), there exists \(g\in G\) such that \(g\cdot s=t\). Equivalently, \(S\) contains a single orbit.
Equally important is the stabilizer of an element, the subset of \(G\) which leaves a given element \(s\) alone.
Definition 6.1.2: The Stabilizer
The
stabilizer of \(s\) is the set \(G_s = \{g\in G \mid g\cdot s=s \}\), the set of elements of \(G\) which leave \(s\) unchanged under the action.
For example, the stabilizer of the coin with heads (or tails) up is \(A_n\), the set of permutations with positive sign. In our example with \(S_4\) acting on the small deck of eight cards, consider the card \(4D\). The stabilizer of \(4D\) is the set of permutations \(\sigma\) with \(\sigma(4)=4\); there are six such permutations.
In both of these examples, the stabilizer was a subgroup; this is a general fact!
Proposition 6.1.3
The stabilizer \(G_s\) of any element \(s \in S\) is a subgroup of \(G\).
Proof 6.1.4
Let \(g, h \in G_s\). Then \(gh\cdot s = g\cdot (h\cdot s) = g\cdot s=s\). Thus, \(gh\in G_s\). If \(g\in G_s\), then so is \(g^{-1}\): by definition of a group action, \(1\cdot s = s\), so \(g^{-1}\cdot s = g^{-1}\cdot (g\cdot s) = (g^{-1}g)\cdot s = 1\cdot s = s\).
Thus, \(G_s\) is a subgroup.
Group action morphisms
And now some algebraic examples!
Let \(G\) be any group and \(S=G\). The
left regular actionof \(G\) on itself is given by left multiplication: \(g\cdot h = gh\). The first condition for a group action holds by associativity of the group, and the second condition follows from the definition of the identity element. (There is also a right regular action, where \(g\cdot h = hg\); the action is 'on the right'.) The Cayley graph of the left regular action is the same as the usual Cayley graph of the group!
Let \(H\) be a subgroup of \(G\), and let \(S\) be the set of cosets \(G/\mathord H\). The
coset actionis given by \(g\cdot (xH) = (gx)H\).
Figure 6.1: \(H\) is the subgroup of \(S_4\) with \(\sigma(1)=1\) for all \(\sigma\) in \(H\). This illustrates the action of \(S_4\) on cosets of \(H\).
Exercise 6.1.5
Consider the permutation group \(S_n\), and fix a number \(i\) such that \(1\leq i\leq n\). Let \(H_i\) be the set of permutations in \(S_n\) with \(\sigma(i)=i\).
Show \(H_i\) is a subgroup of \(S_n\). Now let \(n=5\) and sketch the Cayley graphs of the coset actions of \(S_5\) on the cosets of \(H_1\) and of \(H_3\).
Proposition 6.1.6
Let \(S\) be a \(G\)-set, with \(s\in S\) and \(G_s\). For any \(g, h\in G\), \(g\cdot s=h\cdot s\) if and only if \(gG_s=hG_s\). As a result, there is a bijection between elements of the orbit of \(s\) and cosets of the stabilizer \(G_s\).
Proof 6.1.7
We have \(gG_s=hG_s\) if and only if \(h^{-1}g\in G_s\), if and only if \((h^{-1}g)\cdot s=s\), if and only if \(h\cdot s=g\cdot s\), as desired.
In fact, we can generalize this idea considerably. We're actually identifying elements of the \(G\)-set with cosets of the stabilizer group, whose coset space is also a \(G\)-set; in other words, defining a function \(\phi\) between two \(G\)-sets. The theorem says that this function preserves the group action: \(\phi(g\cdot s)=g\cdot \phi(s)\).
Definition 6.1.8
Let \(S, T\) be \(G\)-sets. A
morphism of \(G\)-sets is a function \(\phi:S\rightarrow T\) such that \(\phi(g\cdot s)=g\cdot \phi(s)\) for all \(g\in G, s\in S\). We say the \(G\)-sets are isomorphic if \(\phi\) is a bijection.
We can then restate the proposition:
Theorem 6.1.9
For any \(s\) in a \(G\)-set \(S\), the orbit of \(s\) is isomorphic to the coset action of \(G\) on the cosets of \(G_s\).
Now we can use Lagrange's theorem in a very interesting way! We know that the cardinality of a subgroup divides the order of the group, and that the number of cosets of a subgroup \(H\) is equal to \(|G|/\mathord |H|\). Then we can use the relationship between cosets and orbits to observe the following:
Theorem 6.1.10
Let \(S\) be a \(G\)-set, with \(s\in S\). Then the size of the orbit of \(s\) is \(|G|/\mathord |G_s|\).
For a somewhat obvious example, consider \(S_{13}\) acting on the numerical values of playing cards: any given card is fixed by a subgroup of \(S_{13}\) isomorphic to \(S_{12}\) (switching around the other twelve numbers in any way doesn't affect the given card). Then the size of the orbit of the card is \(|S_{13}|/\mathord |S_{12}| = 13\). That's a number we could have figured out directly by reasoning a bit, but it shows us that the theorem is working sensibly!
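This kind of count is easy to check by brute force. The following sketch (illustrative, not from the text) enumerates a symmetric group with itertools and verifies the orbit-stabilizer relation \(|G| = |G_s|\cdot|G\cdot s|\); \(S_4\) is used instead of \(S_{13}\) to keep the enumeration small:

```python
from itertools import permutations

# S_4 acting on positions {0, 1, 2, 3}; a permutation sigma is a tuple
# with sigma(i) = sigma[i].
G = list(permutations(range(4)))

point = 3  # analogous to the card "4D": stabilize one symbol

# Stabilizer: permutations fixing the point; orbit: where the point can go.
stabilizer = [sigma for sigma in G if sigma[point] == point]
orbit = {sigma[point] for sigma in G}

# Orbit-stabilizer: |G| = |stabilizer| * |orbit|  (here 24 = 6 * 4)
assert len(G) == len(stabilizer) * len(orbit)
```

The same loop with `range(13)` would confirm the \(|S_{13}|/|S_{12}| = 13\) computation, at the cost of enumerating 13! permutations.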
Now that we have a notion of isomorphism of \(G\)-sets, we can say something to classify \(G\)-sets. What kinds of actions are possible?
Let \(G\) be a finite group, and \(S\) a finite \(G\)-set. Then \(S\) is a collection of orbits. We know that every orbit is isomorphic to \(G\) acting on the cosets of some subgroup \(H\). So we have the following theorem:
Theorem 6.1.11: Classification of \(G\)-Sets

Let \(G\) be a finite group, and \(S\) a finite \(G\)-set. Then \(S\) is isomorphic to a union of coset actions of \(G\) on subgroups.
For example, \(S_{13}\) acting on a full deck of cards decomposes as a union of four orbits, each isomorphic to the coset action of \(S_{13}\) on a subgroup isomorphic to \(S_{12}\).
In short, to understand all possible \(G\)-sets, we should try to understand all of the subgroups of \(G\). In general, this is a hard problem, though it's easy for some cases.
Exercise 6.1.12
For \(n=15\), draw Cayley graphs of the coset action of \(\mathbb{Z}_{15}\) on each of its subgroups. Describe all the subgroups of \(\mathbb{Z}_n\) for arbitrary \(n\).
Exercise 6.1.13
\(S_n\) acts on subsets of \(N=\{1,2,3,\ldots,n\}\) in a natural way: if \(U=\{i_1, \ldots, i_k\}\subset N\), then \(\sigma\cdot U = \{\sigma(i_1), \ldots, \sigma(i_k)\}\).
Decompose the action of \(S_4\) on the subsets of \(\{1,2,3,4\}\) into orbits. Draw a Cayley graph of the action. Identify each orbit with the coset action on a subgroup of \(S_4\).

Contributors: Tom Denton (Fields Institute/York University in Toronto)
Our example won't show how
powerful this theorem is: it's too simple. But it'll help explain the ideas involved.
A diatomic molecule consists of two atoms of the same kind, stuck together:
At room temperature there are 5 elements that are diatomic gases: hydrogen, nitrogen, oxygen, fluorine, chlorine. Bromine is a diatomic liquid, but easily evaporates into a diatomic gas:
Iodine is a crystal at room temperatures:
but if you heat it a bit, it becomes a diatomic liquid and then a gas:
so people often list it as a seventh member of the diatomic club.
When you heat any diatomic gas enough, it starts becoming a 'monatomic' gas as molecules break down into individual atoms. However, just as a diatomic molecule can break apart into two atoms: $$ A_2 \to A + A $$ two atoms can recombine to form a diatomic molecule: $$ A + A \to A_2 $$ So in equilibrium, the gas will be a mixture of diatomic and monatomic forms. The exact amount of each will depend on the temperature and pressure, since these affect the likelihood that two colliding atoms stick together, or a diatomic molecule splits apart. The detailed nature of our gas also matters, of course.
But we don't need to get into these details here! Instead, we can just write down the 'rate equation' for the reactions we're talking about. All the details we're ignoring will be hiding in some constants called 'rate constants'. We won't try to compute these; we'll leave that to our chemist friends.
To write down our rate equation, we start by drawing a 'reaction network'. For this, we can be a bit abstract and call the diatomic molecule $B$ instead of $A_2$. Then it looks like this:
We could write down the same information using a Petri net:
But today let's focus on the reaction network! Staring at this picture, we can read off various things:
•
Species. The species are the different kinds of atoms, molecules, etc. In our example the set of species is $S = \{A, B\}$.
•
Complexes. A complex is a finite sum of species, like $A$, or $A + A$, or, for a fancier example using more efficient notation, $2 A + 3 B$. So, we can think of a complex as a vector $v \in \mathbb{R}^S$. The complexes that actually show up in our reaction network form a set $C \subseteq \mathbb{R}^S$. In our example, $C = \{A+A, B\}$.
•
Reactions. A reaction is an arrow going from one complex to another. In our example we have two reactions: $A + A \to B$ and $B \to A + A$. Chemists define a reaction network to be a triple $(S, C, T)$ where $S$ is a set of species, $C$ is the set of complexes that appear in the reactions, and $T$ is the set of reactions $v \to w$ where $v, w \in C$. (Stochastic Petri net people call reactions transitions, hence the letter $T$.)
So, in our example we have:
• Species $S = \{A,B\}$.
• Complexes: $C= \{A+A, B\}$.
• Reactions: $T = \{A+A\to B, B\to A+A\}$.

To get the rate equation, we also need one more piece of information: a
rate constant $r(\tau)$ for each reaction $\tau \in T$. This is a nonnegative real number that affects how fast the reaction goes. All the details of how our particular diatomic gas behaves at a given temperature and pressure are packed into these constants!
The rate equation says how the expected numbers of the various species—atoms, molecules and the like—changes with time. This equation is deterministic. It's a good approximation when the numbers are large and any fluctuations in these numbers are negligible by comparison.
Here's the general form of the
rate equation:
$$ \frac{d}{d t} x_i = \sum_{\tau\in T} r(\tau) \, (n_i(\tau)-m_i(\tau)) \, x^{m(\tau)} $$
Let's take a closer look. The quantity $x_i$ is the expected population of the $i$th species. So, this equation tells us how that changes. But what about the right hand side? As you might expect, it's a sum over reactions. And:
• The term for the reaction $\tau$ is proportional to the rate constant $r(\tau)$.
• Each reaction $\tau$ goes between two complexes, so we can write it as $m(\tau) \to n(\tau)$. Among chemists the input $m(\tau)$ is called the
reactant complex, and the output is called the product complex. The difference $n_i(\tau)-m_i(\tau)$ tells us how many items of species $i$ get created, minus how many get destroyed. So, it's the net amount of this species that gets produced by the reaction $\tau$. The term for the reaction $\tau$ is proportional to this, too.
• Finally, the
law of mass action says that the rate of a reaction is proportional to the product of the concentrations of the species that enter as inputs. More precisely, if we have a reaction $\tau$ with input is the complex $m(\tau)$, we define $ x^{m(\tau)} = x_1^{m_1(\tau)} \cdots x_k^{m_k(\tau)}$. The law of mass action says the term for the reaction $\tau$ is proportional to this, too!
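Putting these three bullet points together, the right-hand side of the rate equation can be sketched in a few lines of code. The encoding of complexes as species-count lists is my own choice for illustration, not standard chemistry notation:

```python
def rate_rhs(x, reactions):
    """Right-hand side of the rate equation.

    x: current species populations, a list of floats.
    reactions: list of (m, n, r) where m and n are the reactant and product
    complexes as species-count lists, and r is the rate constant.
    """
    k = len(x)
    dxdt = [0.0] * k
    for m, n, r in reactions:
        flux = r
        for j in range(k):
            flux *= x[j] ** m[j]              # law of mass action: x^{m(tau)}
        for i in range(k):
            dxdt[i] += (n[i] - m[i]) * flux   # net production of species i
    return dxdt

# Our example: species (A, B); reactions A+A -> B (rate beta), B -> A+A (rate alpha).
alpha, beta = 1.0, 1.0
reactions = [([2, 0], [0, 1], beta), ([0, 1], [2, 0], alpha)]
print(rate_rhs([4.0, 0.0], reactions))  # → [-32.0, 16.0]
```

The output matches the hand-derived equations below: with $x_1 = 4$, $x_2 = 0$ we get $2\alpha x_2 - 2\beta x_1^2 = -32$ and $-\alpha x_2 + \beta x_1^2 = 16$.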
Let's see what this says for the reaction network we're studying:
Let's write $x_1(t)$ for the number of $A$ atoms and $x_2(t)$ for the number of $B$ molecules. Let the rate constant for the reaction $B \to A + A$ be $\alpha$, and let the rate constant for $A + A \to B$ be $\beta$. Then the rate equation is this:
$$ \frac{d}{d t} x_1 = 2 \alpha x_2 - 2 \beta x_1^2 $$
$$ \frac{d}{d t} x_2 = -\alpha x_2 + \beta x_1^2 $$
This is a bit intimidating. However, we can solve it in closed form thanks to something very precious: a
conserved quantity.
We've got two species, $A$ and $B$. But remember, $B$ is just an abbreviation for a molecule made of two $A$ atoms. So, the total number of $A$ atoms is conserved by the reactions in our network. This is the number of $A$'s plus twice the number of $B$'s: $x_1 + 2x_2$. So, this should be a
conserved quantity: it should not change with time. Indeed, by adding the first equation above to twice the second, we see:
$$ \frac{d}{d t} \left( x_1 + 2x_2 \right) = 0 $$
As a consequence, any solution will stay on a line
$$ x_1 + 2 x_2 = c $$
for some constant $c$. We can use this fact to rewrite the rate equation just in terms of $x_1$:
$$ \frac{d}{d t} x_1 = \alpha (c - x_1) - 2 \beta x_1^2 $$
This is a separable differential equation, so we can solve it if we can figure out how to do this integral
$$ t = \int \frac{d x_1}{\alpha (c - x_1) - 2 \beta x_1^2 } $$
and then solve for $x_1$.
This sort of trick won't work for more complicated examples. But the idea remains important: the numbers of atoms of various kinds—hydrogen, helium, lithium, and so on—are conserved by chemical reactions, so a solution of the rate equation can't roam freely in $\mathbb{R}^S$. It will be trapped on some hypersurface, which is called a 'stoichiometric compatibility class'. And this is very important.
We don't feel like doing the integral required to solve our rate equation in closed form, because this idea doesn't generalize too much. On the other hand, we can always solve the rate equation numerically. So let's try that!
For example, suppose we set $\alpha = \beta = 1$. We can plot the solutions for three different choices of initial conditions, say $(x_1,x_2) = (0,3), (4,0),$ and $(3,3)$. We get these graphs:
It looks like the solution always approaches an equilibrium. We seem to be getting different equilibria for different initial conditions, and the pattern is a bit mysterious. However, something nice happens when we plot the ratio $x_1^2 / x_2$:
Apparently it always converges to 1. Why should that be? It's not terribly surprising. With both rate constants equal to 1, the reaction $A + A \to B$ proceeds at a rate equal to the square of the number of $A$'s, namely $x_1^2$. The reverse reaction proceeds at a rate equal to the number of $B$'s, namely $x_2$. So in equilibrium, we should have $x_1^2 = x_2$.
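This convergence is easy to check numerically. Here's a minimal sketch (not from the original discussion) that integrates the rate equation with a fixed-step fourth-order Runge–Kutta method, using $\alpha = \beta = 1$ and the initial condition $(3,3)$; the step size and number of steps are arbitrary choices:

```python
def rate(x1, x2, alpha=1.0, beta=1.0):
    # Rate equation for A + A <-> B under mass action kinetics.
    return (2*alpha*x2 - 2*beta*x1**2, -alpha*x2 + beta*x1**2)

def rk4(x1, x2, h=0.01, steps=2000):
    # Classic fixed-step fourth-order Runge-Kutta integration.
    for _ in range(steps):
        k1 = rate(x1, x2)
        k2 = rate(x1 + h*k1[0]/2, x2 + h*k1[1]/2)
        k3 = rate(x1 + h*k2[0]/2, x2 + h*k2[1]/2)
        k4 = rate(x1 + h*k3[0], x2 + h*k3[1])
        x1 += h*(k1[0] + 2*k2[0] + 2*k3[0] + k4[0])/6
        x2 += h*(k1[1] + 2*k2[1] + 2*k3[1] + k4[1])/6
    return x1, x2

x1, x2 = rk4(3.0, 3.0)
print(x1 + 2*x2)      # conserved quantity: stays at 9
print(x1**2 / x2)     # ratio approaches alpha/beta = 1
```

Note that Runge–Kutta methods preserve linear conserved quantities exactly (up to rounding), so $x_1 + 2x_2$ stays put even at a coarse step size.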
But why is the equilibrium
stable? In this example we could see that using the closed-form solution, or maybe just common sense. But it also follows from a powerful theorem that handles a lot of reaction networks.
It's called Feinberg's deficiency zero theorem, and we saw it last time. Very roughly, it says that if our reaction network is 'weakly reversible' and has 'deficiency zero', the rate equation will have equilibrium solutions that behave about as nicely as you could want.
Let's see how this works. We need to remember some jargon:
•
Weakly reversible. A reaction network is weakly reversible if for every reaction $v \to w$ in the network, there exists a path of reactions in the network starting at $w$ and leading back to $v$.
•
Reversible. A reaction network is reversible if for every reaction $v \to w$ in the network, $w \to v$ is also a reaction in the network. Any reversible reaction network is weakly reversible. Our example is reversible, since it consists of reactions $A + A \to B$, $B \to A + A$.
But what about 'deficiency zero'? We defined that concept last time, but let's review:
•
Connected component. A reaction network gives a kind of graph with complexes as vertices and reactions as edges. Two complexes lie in the same connected component if we can get from one to the other by a path of reactions, where at each step we're allowed to go either forward or backward along a reaction. Chemists call a connected component a linkage class. In our example there's just one:
•
Stoichiometric subspace. The stoichiometric subspace is the subspace $\mathrm{Stoch} \subseteq \mathbb{R}^S$ spanned by the vectors of the form $w - v$ for all reactions $v \to w$ in our reaction network. This subspace describes the directions in which a solution of the rate equation can move. In our example, it's spanned by $B - 2 A$ and $2 A - B$, or if you prefer, $(-2,1)$ and $(2,-1)$. These vectors are linearly dependent, so the stoichiometric subspace has dimension 1.
•
Deficiency. The deficiency of a reaction network is the number of complexes, minus the number of connected components, minus the dimension of the stoichiometric subspace. In our example there are 2 complexes, 1 connected component, and the dimension of the stoichiometric subspace is 1. So, our reaction network has deficiency 2 - 1 - 1 = 0.
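The bookkeeping in this definition is easy to automate. Here's a small sketch that computes the deficiency of our $A + A \leftrightarrow B$ network; representing complexes as species-count tuples is just one convenient choice:

```python
import numpy as np

# Complexes as vectors of species counts (A, B): here 2A and B.
complexes = [(2, 0), (0, 1)]
reactions = [(0, 1), (1, 0)]   # indices into `complexes`: 2A -> B, B -> 2A

# Count connected components of the reaction graph via union-find.
parent = list(range(len(complexes)))
def find(i):
    while parent[i] != i:
        i = parent[i]
    return i
for v, w in reactions:
    parent[find(v)] = find(w)
components = {find(i) for i in range(len(complexes))}

# Stoichiometric subspace: span of w - v over all reactions v -> w.
vectors = [np.subtract(complexes[w], complexes[v]) for v, w in reactions]
dim_stoch = np.linalg.matrix_rank(np.array(vectors))

deficiency = len(complexes) - len(components) - dim_stoch
print(deficiency)  # 2 - 1 - 1 = 0
```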
So, the deficiency zero theorem applies! What does it say? To understand it, we need a bit more jargon. First of all, a vector $x \in \mathbb{R}^S$ tells us how much we've got of each species: the amount of species $i \in S$ is the number $x_i$. And then:
•
Stoichiometric compatibility class. Given a vector $v\in \mathbb{R}^S$, its stoichiometric compatibility class is the subset of all vectors that we could reach using the reactions in our reaction network:
$$ \{ v + w \; : \; w \in \mathrm{Stoch} \} $$
In our example, where the stoichiometric subspace is spanned by $(2,-1)$, the stoichiometric compatibility class of the vector $(a,b)$ is the line consisting of points
$$ (x_1, x_2) = (a,b) + s(2,-1) $$
where the parameter $s$ ranges over all real numbers. Notice that this line can also be written as
$$ x_1 + 2x_2 = c $$
We've already seen that if we start with initial conditions on such a line, the solution will stay on this line. And that's how it always works: as time passes, any solution of the rate equation stays in the same stoichiometric compatibility class!
In other words:
the stoichiometric subspace is defined by a bunch of linear equations, one for each linear conservation law that all the reactions in our network obey.
Here a
linear conservation law is a law saying that some linear combination of the numbers of species does not change.
Next:
•
Positivity. A vector in $\mathbb{R}^S$ is positive if all its components are positive; this describes a container of chemicals where all the species are actually present. The positive stoichiometric compatibility class of $x\in \mathbb{R}^S$ consists of all positive vectors in its stoichiometric compatibility class.
We finally have enough jargon in our arsenal to state the zero deficiency theorem. We'll only state the part we need today:
Zero Deficiency Theorem (Feinberg). If a reaction network has deficiency zero, is weakly reversible, and the rate constants are positive, the rate equation has exactly one equilibrium solution in each positive stoichiometric compatibility class. Any sufficiently nearby solution that starts in the same class will approach this equilibrium as $t \to +\infty$.
In our example, this theorem says there's just one positive equilibrium $(x_1,x_2)$ in each line
$$ x_1 + 2x_2 = c $$
We can find it by setting the time derivatives to zero:
$$ \frac{d}{d t} x_1 = 2 \alpha x_2 - 2 \beta x_1^2 = 0 $$
$$ \frac{d}{d t} x_2 = -\alpha x_2 + \beta x_1^2 = 0 $$
Solving these, we get
$$ \frac{x_1^2}{x_2} = \frac{\alpha}{\beta} $$
So, these are our equilibrium solutions. It's easy to verify that indeed, there's one of these in each stoichiometric compatibility class $x_1 + 2x_2 = c$. And the zero deficiency theorem also tells us that any sufficiently nearby solution that starts in the same class will approach this equilibrium as $t \to \infty$.
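To verify this explicitly, substitute $x_2 = (c - x_1)/2$ into $x_1^2/x_2 = \alpha/\beta$; this gives the quadratic $2\beta x_1^2 + \alpha x_1 - \alpha c = 0$, which has exactly one positive root whenever $c > 0$. A quick numerical sketch of that check:

```python
from math import sqrt

def equilibrium(alpha, beta, c):
    # Unique positive root of 2*beta*x1^2 + alpha*x1 - alpha*c = 0,
    # obtained from the quadratic formula (the other root is negative).
    x1 = (-alpha + sqrt(alpha**2 + 8*alpha*beta*c)) / (4*beta)
    x2 = (c - x1) / 2
    return x1, x2

x1, x2 = equilibrium(1.0, 1.0, 9.0)
print(x1**2 / x2)   # equals alpha/beta = 1
print(x1 + 2*x2)    # lies on the line x1 + 2*x2 = 9
```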
This partially explains what we saw before in our graphs. It shows that in the case $\alpha = \beta = 1$, any solution that starts by
nearly having
$$ \frac{x_1^2}{x_2} = 1 $$
will actually have
$$ \lim_{t \to +\infty} \frac{x_1^2}{x_2} = 1 $$
But in fact, in this example we don't even need to start near the equilibrium for our solution to approach the equilibrium! What about in general? We don't know, but just to get the ball rolling, we'll risk the following wild guess:

Conjecture. If a reaction network is weakly reversible and the rate constants are positive, the rate equation has exactly one equilibrium solution in each positive stoichiometric compatibility class, and any positive solution that starts in the same class will approach this equilibrium as $t \to +\infty$.
If anyone knows a proof or counterexample, we'd be interested. If this result were true, it would really clarify the dynamics of reaction networks in the zero deficiency case.
Next time we'll talk about this same reaction network from a stochastic point of view, where we think of the atoms and molecules as reacting in a
probabilistic way. And we'll see how the conservation laws we've been talking about today are related to Noether's theorem for Markov processes!
You can also read comments on Azimuth, and make your own comments or ask questions there! |
It's hard to say just from the sheet music; not having an actual keyboard here. The first line seems difficult, I would guess that second and third are playable. But you would have to ask somebody more experienced.
Having a few experienced users here, do you think that limsup could be a useful tag? I think there are a few questions concerned with the properties of limsup and liminf. Usually they're tagged limit.
@Srivatsan it is unclear what is being asked... Is inner or outer measure of $E$ meant by $m\ast(E)$ (then the question whether it works for non-measurable $E$ has an obvious negative answer since $E$ is measurable if and only if $m^\ast(E) = m_\ast(E)$ assuming completeness, or the question doesn't make sense). If ordinary measure is meant by $m\ast(E)$ then the question doesn't make sense. Either way: the question is incomplete and not answerable in its current form.
A few questions where this tag would (in my opinion) make sense: http://math.stackexchange.com/questions/6168/definitions-for-limsup-and-liminf http://math.stackexchange.com/questions/8489/liminf-of-difference-of-two-sequences http://math.stackexchange.com/questions/60873/limit-supremum-limit-of-a-product http://math.stackexchange.com/questions/60229/limit-supremum-finite-limit-meaning http://math.stackexchange.com/questions/73508/an-exercise-on-liminf-and-limsup http://math.stackexchange.com/questions/85498/limit-of-sequence-of-sets-some-paradoxical-facts
I'm looking for the book "Symmetry Methods for Differential Equations: A Beginner's Guide" by Hydon. Is there some ebook site that has this book, ideally one my university has a subscription to? ebooks.cambridge.org doesn't seem to have it.
Not sure about uniform continuity questions, but I think they should go under a different tag. I would expect most of "continuity" question be in general-topology and "uniform continuity" in real-analysis.
Here's a challenge for your Google skills... can you locate an online copy of: Walter Rudin, Lebesgue’s first theorem (in L. Nachbin (Ed.), Mathematical Analysis and Applications, Part B, in Advances in Mathematics Supplementary Studies, Vol. 7B, Academic Press, New York, 1981, pp. 741–747)?
No, it was an honest challenge which I myself failed to meet (hence my "what I'm really curious to see..." post). I agree. If it is scanned somewhere it definitely isn't OCR'ed or so new that Google hasn't stumbled over it, yet.
@MartinSleziak I don't think so :) I'm not very good at coming up with new tags. I just think there is little sense to prefer one of liminf/limsup over the other and every term encompassing both would most likely lead to us having to do the tagging ourselves since beginners won't be familiar with it.
Anyway, my opinion is this: I did what I considered the best way: I've created [tag:limsup] and mentioned liminf in tag-wiki. Feel free to create new tag and retag the two questions if you have better name. I do not plan on adding other questions to that tag until tommorrow.
@QED You do not have to accept anything. I am not saying it is a good question; but that doesn't mean it's not acceptable either. The site's policy/vision is to be open towards "math of all levels". It seems hypocritical to me to declare this if we downvote a question simply because it is elementary.
@Matt Basically, the a priori probability (the true probability) is different from the a posteriori probability after part (or whole) of the sample point is revealed. I think that is a legitimate answer.
@QED Well, the tag can be removed (if someone decides to do so). Main purpose of the edit was that you can retract you downvote. It's not a good reason for editing, but I think we've seen worse edits...
@QED Ah. Once, when it was snowing at Princeton, I was heading toward the main door to the math department, about 30 feet away, and I saw the secretary coming out of the door. Next thing I knew, I saw the secretary looking down at me asking if I was all right.
OK, so chat is now available... but; it has been suggested that for Mathematics we should have TeX support.The current TeX processing has some non-trivial client impact. Before I even attempt trying to hack this in, is this something that the community would want / use?(this would only apply ...
So in between doing phone surveys for CNN yesterday I had an interesting thought. For $p$ an odd prime, define the truncation map $$t_{p^r}:\mathbb{Z}_p\to\mathbb{Z}/p^r\mathbb{Z}:\sum_{l=0}^\infty a_lp^l\mapsto\sum_{l=0}^{r-1}a_lp^l.$$ Then primitive roots lift to $$W_p=\{w\in\mathbb{Z}_p:\langle t_{p^r}(w)\rangle=(\mathbb{Z}/p^r\mathbb{Z})^\times\}.$$ Does $\langle W_p\rangle\subset\mathbb{Z}_p$ have a name or any formal study?
> I agree with @Matt E, as almost always. But I think it is true that a standard (pun not originally intended) freshman calculus does not provide any mathematically useful information or insight about infinitesimals, so thinking about freshman calculus in terms of infinitesimals is likely to be unrewarding. – Pete L. Clark 4 mins ago
In mathematics, in the area of order theory, an antichain is a subset of a partially ordered set such that any two elements in the subset are incomparable. (Some authors use the term "antichain" to mean strong antichain, a subset such that there is no element of the poset smaller than 2 distinct elements of the antichain.)Let S be a partially ordered set. We say two elements a and b of a partially ordered set are comparable if a ≤ b or b ≤ a. If two elements are not comparable, we say they are incomparable; that is, x and y are incomparable if neither x ≤ y nor y ≤ x.A chain in S is a...
@MartinSleziak Yes, I almost expected the subnets-debate. I was always happy with the order-preserving+cofinal definition and never felt the need for the other one. I haven't thought about Alexei's question really.
When I look at the comments in Norbert's question it seems that the comments together give a sufficient answer to his first question already - and they came very quickly. Nobody said anything about his second question. Wouldn't it be better to divide it into two separate questions? What do you think t.b.?
@tb About Alexei's questions, I spent some time on it. My guess was that it doesn't hold but I wasn't able to find a counterexample. I hope to get back to that question. (But there is already too many questions which I would like get back to...)
@MartinSleziak I deleted part of my comment since I figured out that I never actually proved that in detail but I'm sure it should work. I needed a bit of summability in topological vector spaces but it's really no problem at all. It's just a special case of nets written differently (as series are a special case of sequences). |
Imagine putting a boat in a lake with random currents. The current at a particular point hits the boat with a force at that point. At another point, the current is different, so it hits the boat with a different force. The collection of all these forces acting on the boat is called a vector field. Using the vector field, we can determine the work (the cumulative push of the water along the boat's path), the circulation (the amount of water going in the same direction as the boat), and the flux (the amount of water crossing the boat's path).
Vectors
A vector is a ray that starts at a point\((x,y,z)\) and goes in the direction \(x \hat{\textbf{i}}+y \hat{\textbf{j}} +z \hat{\textbf{k}} \). A vector field is the compilation of these vectors at every point. We draw vector field with evenly spread points for visual purposes, but you should imagine the field as a continuum.
A vector field by itself has no meaning, but for the purpose of this section, we will call the field \(F\) because force is a common use of the vector field. There are a few ways to figure out what a vector field looks like when given an equation:

- already know what the vector field looks like,
- find a program that graphs vector fields,
- graph vectors at a number of points until you get a sense of what the graph looks like, or
- use more advanced math you haven't learned yet.
If you have no idea what a vector field looks like, you have to do it the hard way by plugging in points. This is a waste of time so it could be beneficial to memorize the common ones. Very rarely will you be required to know what a vector field looks like as long as you can solve the problem given. However, knowing what the vector field looks like can help identify if an answer is reasonable.
Work
This section assumes you know what work is: the cumulative effect of the force along the path,

\[Work=\int_C \vec{F} \cdot d\vec{r}. \nonumber \]
This expression and those in the following sections can be evaluated using a line integral. This is the general equation, but we can derive it a little more. Start with an arbitrary force \( \vec{F}(x,y,z) \) and Newton's second law \( \vec{F}=m\vec{a} \); we can write \( \vec{F}(x,y,z) \) in vector form as \( \vec{F}(\vec{r})\) to simplify the equation.
To get work over a line, the end result should be \(\int_C \vec{F} dr\), the sum of the forces over the line \(r(t)\).
First, change \(\vec{a}\) into \(\dfrac{dv}{dt}\) (the definition of acceleration)
\[ \vec{F}=m\dfrac{dv}{dt} \nonumber \]
Next, take the dot product of both sides with \(\vec{v}\). Notice that \(\vec{v}\) is the same as \(\dfrac{dr}{dt}\), which we will use on the left-hand side:
\[\vec{F} \cdot \dfrac{dr}{dt} = m \dfrac{dv}{dt} \cdot \vec{v} . \nonumber \]
Use the product rule to rewrite the right-hand side:

\[\begin{align} \dfrac{dv}{dt}\cdot\vec{v} &= \dfrac{d}{dt} \left ( \dfrac{\vec{v}\cdot\vec{v}}{2} \right) \nonumber \\[4pt] &=\dfrac{1}{2}\dfrac{d}{dt}\left ( \left | \vec v \right |^2 \right ) \end{align}. \nonumber \]
Now put this back into the equation and we get
\[ \vec{F}\cdot\dfrac{dr}{dt}=\dfrac{m}{2}\dfrac{d}{dt}\left | \vec v \right |^2. \nonumber \]
Integrate both sides with respect to time from time \(0\) to time \(T\)
\[ \int_0^T\vec{F}\cdot\dfrac{dr}{dt}dt=\dfrac{m}{2}\int_0^T\dfrac{d}{dt}\left | \vec v \right |^2dt \nonumber \]
\[ \int_0^T\vec{F}\cdot dr=\dfrac{m}{2}\Big[\left | \vec v \right |^2\Big]_0^T . \nonumber \]
Notice that \(\int_0^T\vec{F}\cdot dr\) is the line integral using the vector equation for the path. Evaluating the right-hand side yields
\[ \int_0^T\vec{F}\cdot dr=\dfrac{m}{2}\left | \vec v(T) \right |^2-\dfrac{m}{2}\left | \vec v(0) \right |^2 \nonumber \]
This is the equation taught in physics classes: work is equal to the change in kinetic energy.
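This result can be sanity-checked numerically: pick any smooth path \(r(t)\), let the force be \(\vec{F} = m\,\vec{r}''(t)\), and compare the line integral of \(\vec{F}\cdot d\vec{r}\) with the change in kinetic energy. The path and mass below are arbitrary choices for illustration:

```python
m = 2.0                          # arbitrary mass
def v(t): return (2*t, 3*t**2)   # r'(t) for the path r(t) = (t^2, t^3)
def a(t): return (2.0, 6*t)      # r''(t)

# Line integral of F . dr = integral of m a(t) . v(t) dt (midpoint rule).
T, n = 1.0, 100000
h = T / n
work = 0.0
for i in range(n):
    t = (i + 0.5) * h
    ax, ay = a(t)
    vx, vy = v(t)
    work += m * (ax*vx + ay*vy) * h

ke = lambda t: 0.5 * m * (v(t)[0]**2 + v(t)[1]**2)
print(work)           # ≈ 13
print(ke(T) - ke(0))  # change in kinetic energy: 13
```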
Flux
Flux is defined as the amount of "stuff" going through a curve or a surface and we can get the flux at a particular point by taking the force and seeing how much of the force is perpendicular to the curve. To do this we can find the normal vector, then take the dot product of the normal vector and force vector to see how much of the force vector is acting in the normal direction. Then we can add up all these points to get the total flux.
Thus the equation is
\[\int_C \vec{F}\cdot\hat{n} \,ds. \nonumber \]
This is the general equation, but we can break it down a little more. The unit normal vector is perpendicular to the unit tangent vector and lies in the plane: \(\hat{n}=\hat{T}\times \hat{\textbf{k}} \). By including \(\hat{\textbf{k}}\) in the cross product, we ensure \(\hat{n}\) stays in the plane.
\[\hat{T}=\dfrac{\dfrac{dr}{dt}}{\left | \dfrac{dr}{dt}\right |} \nonumber \]
\[ds=\left | \dfrac{dr}{dt} \right | dt \nonumber \]
This substitution is useful to know but is not used in breaking this equation down, so we can rewrite this equation as
\[\int_{t=a}^{t=b} \vec{F}\cdot \left(\dfrac{\dfrac{dr}{dt}}{\left | \dfrac{dr}{dt}\right |}\times \hat{\textbf{k}} \right)ds \nonumber \]
We can actually break down \(\left(\dfrac{\dfrac{dr}{dt}}{\left | \dfrac{dr}{dt}\right |}\times \hat{\textbf{k}} \right)\) even further:
\[\hat{T}=\left(\dfrac{\dfrac{dr}{dt}}{\left | \dfrac{dr}{dt}\right |}\right)=\dfrac{dr}{ds} \nonumber \]
\[\dfrac{dr}{ds}=\dfrac{dx}{ds} \hat{\textbf{i}} +\dfrac{dy}{ds} \hat{\textbf{j}} . \nonumber \]
\[\hat{n}=\hat{T}\times \hat{\textbf{k}} =\left(\dfrac{dx}{ds} \hat{\textbf{i}} +\dfrac{dy}{ds} \hat{\textbf{j}}\right) \times \hat{\textbf{k}} =\dfrac{dy}{ds} \hat{\textbf{i}} -\dfrac{dx}{ds} \hat{\textbf{j}} \nonumber \]
If we let \(\vec{F}=M \hat{\textbf{i}} + N \hat{\textbf{j}} \) then we can rewrite the equation as:
\[\int_C \left ( M\dfrac{dy}{ds}-N\dfrac{dx}{ds}\right )ds =\int_C Mdy-Ndx. \nonumber \]
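As a quick numerical sanity check of this formula (an illustration, not part of the derivation), take \(\vec{F} = x \hat{\textbf{i}} + y \hat{\textbf{j}} \) around the unit circle; since the divergence of \(\vec{F}\) is 2, the flux should equal twice the enclosed area, \(2\pi\):

```python
from math import cos, sin, pi

def flux(M, N, n=100000):
    # Flux of F = M i + N j through the unit circle, via the line
    # integral  ∮ M dy - N dx  evaluated with the midpoint rule.
    total = 0.0
    h = 2*pi / n
    for i in range(n):
        t = (i + 0.5) * h
        x, y = cos(t), sin(t)
        dx, dy = -sin(t)*h, cos(t)*h      # (dx/dt) dt and (dy/dt) dt
        total += M(x, y)*dy - N(x, y)*dx
    return total

print(flux(lambda x, y: x, lambda x, y: y))   # ≈ 2*pi ≈ 6.283
```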
Example \(\PageIndex{1}\): Gravity
What is the work done by force of gravity
\[\vec{F}=-\dfrac{\hat{r}}{\left | \vec{r} \right |^2} \nonumber \]
over a circular trajectory?
Solution
Since radius is not specified, we will use the variable \(R\) for radius.
- We know the vector equation for a circle: \( \vec{r}(t)=R\; \cos(t) \hat{\textbf{i}} +R\; \sin(t) \hat{\textbf{j}} \).
- We are given \(F\) in terms of \(r\) (if we were not, we would have to convert it, because we need to integrate with respect to \(r\)).
- We have the work equation \(W=\int_0^T \vec{F}(r)\cdot d\vec{r}\).
- We know that \(T=2\pi\) because that is one rotation of a circle.
Putting everything together we get the equation
\[W=\int_0^{2\pi} -\dfrac{\hat{r}}{\left | \vec{r} \right |^2}\cdot \dfrac{d\vec{r}}{dt}dt \nonumber \]
Lets break down each component.
Simplify \(\left | \vec{r} \right |^2\):
\[\left | \vec{r} \right |^2=\left(\sqrt{(R\cos(t))^2+(R\sin(t))^2}\right)^2=\left (\sqrt{R^2(\sin^2(t)+\cos^2(t))}\right )^2=\left(\sqrt{R^2(1)}\right)^2=R^2. \nonumber \]
\(\hat{r}\) is the unit vector of \(\vec{r}\).
\[\begin{align} \hat{r}=\dfrac{\vec{r}}{\left | \vec{r} \right |}&=\dfrac{R\cos(t) \hat{\textbf{i}} +R\sin(t) \hat{\textbf{j}} }{\sqrt{R^2(\sin^2(t)+\cos^2(t))}} \nonumber \\[4pt] &= \dfrac{R\cos(t) \hat{\textbf{i}} +R\sin(t) \hat{\textbf{j}} }{\sqrt{R^2(1)}} \nonumber \\[4pt] &= \dfrac{R\cos(t) \hat{\textbf{i}} +R\sin(t) \hat{\textbf{j}} }{R} \nonumber \\[4pt] &=\cos(t) \hat{\textbf{i}} +\sin(t) \hat{\textbf{j}} \end{align} \nonumber \]
\(\dfrac{d\vec{r}}{dt}\) is the derivative of \(\vec{r}(t)\).
\[\dfrac{d\vec{r}}{dt}=\dfrac{d}{dt}\left (R\cos(t) \hat{\textbf{i}} +R\sin(t) \hat{\textbf{j}} \right)=-R\sin(t) \hat{\textbf{i}} +R\cos(t) \hat{\textbf{j}} \nonumber \]
\(R\) is a constant so we can just move that out and substitute in our results for \(\hat{r}\) and \(d\vec{r}\) .
\[W=-\dfrac{1}{R^2}\int_0^{2\pi} (\cos(t) \hat{\textbf{i}} +\sin(t) \hat{\textbf{j}} )\cdot (-R\sin(t) \hat{\textbf{i}} +R\cos(t)\ \hat{\textbf{j}} )dt \nonumber \]
The dot product is a scalar, so the integral is easy to evaluate:

\[\begin{align} W&=-\dfrac{1}{R^2}\int_0^{2\pi} \left(-R\sin(t)\cos(t) +R\cos(t)\sin(t) \right)dt \nonumber \\[4pt] &= -\dfrac{1}{R^2}\int_0^{2\pi} 0\, dt \nonumber \\[4pt] &=0 \end{align} \nonumber \]
Another way to look at this problem: the force is radial, parallel to the position vector \(\vec{r}(t)\), while on a circle the velocity vector \(\dfrac{d\vec{r}}{dt}\) is tangent to the circle and hence perpendicular to the position vector. So \(\vec{F}\cdot d\vec{r}=0\) everywhere along the path, and the work is \(0\).
Example \(\PageIndex{2}\): Flux through a Square
Find the flux of \(F=x \hat{\textbf{i}} +y \hat{\textbf{j}} \) through the square with side length 2.
Solution
First we need to parameterize the curve. Since this is a square (side length 2, centered at the origin), we need four separate parameterizations, one per side. The top side, traced from \((1,1)\) to \((-1,1)\), is

\[r(t)=(1-t) \hat{\textbf{i}} + \hat{\textbf{j}} \;\;\;\;0\leq t\leq 2. \nonumber \]
Next find \(\dfrac{dr}{dt}\), the unit tangent vector of \(r\)
\[\dfrac{dr}{dt}=\dfrac{d}{dt}(1-t) \hat{\textbf{i}} +\dfrac{d}{dt} \hat{\textbf{j}} = - \hat{\textbf{i}} \nonumber \]
\[\hat{T}=\dfrac{\dfrac{dr}{dt}}{\left | \dfrac{dr}{dt} \right |}=\dfrac{- \hat{\textbf{i}} }{1}=- \hat{\textbf{i}}. \nonumber \]
Then find the unit normal vector which is defined as \(\hat{n}=\hat{T} \times \hat{\textbf{k}} \)
\[\hat{n}=- \hat{\textbf{i}} \times \hat{\textbf{k}} = \hat{\textbf{j}} . \nonumber \]
Now we have everything required to solve for the flux \(\int_C F\cdot\hat{n} ds\)
\[\int_C (x(t) \hat{\textbf{i}} +y(t) \hat{\textbf{j}} )\cdot \hat{\textbf{j}} \left |\dfrac{dr}{dt}\right | dt \nonumber \]

\[ \int_0^2 y(t)\,(1)\, dt = \int_0^2 1\, dt = 2. \nonumber \]
We found the flux through this side to be 2. In general we would need to calculate the other three sides and add them up, but for this specific problem the force points outward from the center, so by symmetry the other three sides have the same flux, making the answer \(2 \times 4 = 8\).
Example \(\PageIndex{3}\): Flux through a Circle
Find the flux of \(F=x \hat{\textbf{i}} +y \hat{\textbf{j}} \) through a circle with radius = \(R\).
Solution
First, parameterize the curve:
\[r(t)=R\cos(t) \hat{\textbf{i}} +R\sin(t) \hat{\textbf{j}} \; \;\;\;0\leq t \leq 2\pi \;\;\text{or}\;\; x(t)=R\cos(t) \;\; y(t)=R\sin(t). \nonumber \]
Then the unit tangent vector \(\hat{T}\):
\[\dfrac{dr}{dt}=\dfrac{d}{dt}R\cos(t) \hat{\textbf{i}} +\dfrac{d}{dt}R\sin(t) \hat{\textbf{j}} = -R\sin(t) \hat{\textbf{i}} + R\cos(t) \hat{\textbf{j}} \nonumber \]
\[\left | \dfrac{dr}{dt} \right |=\sqrt{\left ( \dfrac{dx}{dt} \right )^2 +\left ( \dfrac{dy}{dt} \right )^2}=\sqrt{(-R\sin(t))^2+(R\cos(t))^2}=\sqrt{R^2\cos^2(t)+R^2\sin^2(t)} =R \nonumber \]
\[\hat{T}=\dfrac{\dfrac{dr}{dt}}{\left | \dfrac{dr}{dt} \right |}=\dfrac{-R\sin(t) \hat{\textbf{i}} + R\cos(t) \hat{\textbf{j}} }{R}= -\sin(t) \hat{\textbf{i}} + \cos(t) \hat{\textbf{j}}. \nonumber \]
Find the unit normal vector \(\hat{n}=\hat{T} \times \hat{\textbf{k}} \):
\[\hat{n}=(-\sin(t) \hat{\textbf{i}} +\cos(t) \hat{\textbf{j}} )\times \hat{\textbf{k}} \nonumber \]

\[\hat{n}=\cos(t) \hat{\textbf{i}} +\sin(t) \hat{\textbf{j}}. \nonumber \]
Solve for the flux \(\int_C F\cdot\hat{n} ds\):
\[\int_C (x(t) \hat{\textbf{i}} +y(t) \hat{\textbf{j}} )\cdot \left(\cos(t) \hat{\textbf{i}} +\sin(t) \hat{\textbf{j}} \right) \left |\dfrac{dr}{dt}\right | dt \nonumber \]

\[\int_0^{2\pi} (x(t) \hat{\textbf{i}} +y(t) \hat{\textbf{j}} )\cdot \left(\cos(t) \hat{\textbf{i}} +\sin(t) \hat{\textbf{j}} \right) (R)dt \nonumber \]
\[\int_0^{2\pi} (x(t)\cos(t) + y(t)\sin(t))Rdt. \nonumber \]
Substitute in for \(x(t)\) and \(y(t)\)
\[\begin{align} &\int_0^{2\pi} (R\cos(t)\cos(t) + R\sin(t)\sin(t))(R)dt \nonumber \\[4pt] &= \int_0^{2\pi} (R^2)dt \nonumber \\[4pt] &= R^2\left [t\right ]_0^{2\pi} \nonumber \\[4pt] &=2\pi R^2 . \end{align} \nonumber \]
Example \(\PageIndex{4}\): Flux through an Ellipse
Find the flux of \(F=x \hat{\textbf{i}} +y \hat{\textbf{j}} \) through an ellipse with axes \(a\) and \(b\).
Solution
Start off by parameterizing the curve of an ellipse
\[\vec{r}(t)=a\cos(t) \hat{\textbf{i}} +b\sin(t) \hat{\textbf{j}}. \nonumber \]
Then, find the unit tangent vector
\[\vec{T}=\dfrac{dr}{dt}=-a\sin(t) \hat{\textbf{i}} +b\cos(t) \hat{\textbf{j}} \nonumber \]
\[\left | \dfrac{dr}{dt} \right |=\sqrt{(-a\sin(t))^2+(b\cos(t))^2}=..... \nonumber \]
Solving for \(\left | \dfrac{dr}{dt} \right |\) in closed form would take up too much time, so for now we're just going to leave it as it is:
\[\hat{T}=\dfrac{\dfrac{dr}{dt}}{\left | \dfrac{dr}{dt} \right |}=\dfrac{-a\sin(t) \hat{\textbf{i}} +b\cos(t) \hat{\textbf{j}} }{\left | \dfrac{dr}{dt} \right |} \nonumber \]
Now we take cross product with \(\hat{k}\) to get \(\hat{n}\)
\[\hat{n}=\dfrac{-a\sin(t) \hat{\textbf{i}} +b\cos(t) \hat{\textbf{j}} }{\left | \dfrac{dr}{dt} \right |}\times \hat{\textbf{k}} =\dfrac{b\cos(t) \hat{\textbf{i}} +a\sin(t) \hat{\textbf{j}} }{\left | \dfrac{dr}{dt} \right |}. \nonumber \]
Now just solve for flux
\[\begin{align} & \int_C \vec{F}(r(t))\cdot \hat{n}ds \nonumber \\[4pt] &= \int_0^{2\pi} (x \hat{\textbf{i}} +y \hat{\textbf{j}} )\cdot \left (\dfrac{b\cos(t) \hat{\textbf{i}} +a\sin(t) \hat{\textbf{j}} }{\left | \dfrac{dr}{dt} \right |}\right ) \left (\left | \dfrac{dr}{dt} \right |\right )dt \nonumber \\[4pt] &= \int_0^{2\pi} (a\cos(t) \hat{\textbf{i}} +b\sin(t) \hat{\textbf{j}} )\cdot \left (\dfrac{b\cos(t) \hat{\textbf{i}} +a\sin(t) \hat{\textbf{j}} }{\left | \dfrac{dr}{dt} \right |}\right ) \left (\left | \dfrac{dr}{dt} \right |\right )dt. \end{align} \nonumber \]
Notice that \(\left | \dfrac{dr}{dt} \right|\) cancels out; the dot product gives \(ab\cos^2(t)+ab\sin^2(t)\), and we're left with

\[\begin{align} \int_0^{2\pi} ab\cos^2(t)+ab\sin^2(t)\, dt \nonumber \\[4pt] &= \int_0^{2\pi} ab\, dt \nonumber \\[4pt] &= 2\pi ab. \end{align} \nonumber \]
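We can confirm this numerically for an arbitrary choice of \(a\) and \(b\). Since \(\operatorname{div} \vec{F} = 2\), the flux through any closed curve equals twice the enclosed area, which for the ellipse is \(2\pi ab\):

```python
from math import cos, sin, pi

def ellipse_flux(a, b, n=100000):
    # Flux of F = x i + y j through the ellipse x = a cos t, y = b sin t,
    # computed as the line integral  ∮ x dy - y dx  (i.e. ∮ F . n ds).
    total = 0.0
    h = 2*pi / n
    for i in range(n):
        t = (i + 0.5) * h
        x, y = a*cos(t), b*sin(t)
        dx, dy = -a*sin(t)*h, b*cos(t)*h
        total += x*dy - y*dx
    return total

print(ellipse_flux(2.0, 3.0))   # ≈ 2*pi*a*b ≈ 37.7
```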
Circulation
Circulation is the amount of "stuff" parallel to the direction of motion. We are looking for the amount of "stuff" going in the direction of the tangent vector, and we calculate that by taking the dot product of \(\vec{F}\) (the vector field) and \(\hat T\) (the unit tangent to the curve). We'll use \(\Gamma\) as the variable for circulation:
\[ \Gamma=\int_C \vec{F}\cdot\hat{T} ds. \nonumber \]
Example \(\PageIndex{5}\): Circulation
Given the vector field \(\vec{F}(\vec{r})=-y \hat{\textbf{i}} +x \hat{\textbf{j}} \), find the circulation over a circle of radius \(R\).
Solution
Parameterize the equation for a circle
\[R\cos(t) \hat{\textbf{i}} +R\sin(t) \hat{\textbf{j}} \;\;\;\;x(t)=R\cos(t)\;\; y(t)=R\sin(t) \nonumber \]
Then find \(\hat{T}\)
\[\vec{T}=\dfrac{dr}{dt}=-R\sin(t) \hat{\textbf{i}} +R\cos(t) \hat{\textbf{j}} \nonumber \]
\[\left | \vec{T} \right |=\sqrt{(R\sin(t))^2+(R\cos(t))^2}=R \nonumber \]
\[\hat{T}=\dfrac{\vec{T}}{\left | \vec{T} \right |}=\dfrac{-R\sin(t) \hat{\textbf{i}} +R\cos(t) \hat{\textbf{j}} }{R}=-\sin(t) \hat{\textbf{i}} +\cos(t) \hat{\textbf{j}}. \nonumber \]
Then calculate for circulation
\[\begin{align} \Gamma=\int_C \vec{F}\cdot \hat{T} ds &=\int_C \vec{F}\cdot \hat{T}\left |\dfrac{dr}{dt}\right |dt \nonumber \\[4pt] &=\int_C \left ( -R\sin(t) \hat{\textbf{i}} +R\cos(t) \hat{\textbf{j}} \right )\cdot \left (-\sin(t) \hat{\textbf{i}} +\cos(t) \hat{\textbf{j}} \right ) (R)dt \nonumber \\[4pt] &= \int_0^{2\pi} \left (R\sin^2(t)+R\cos^2(t)\right)Rdt \nonumber \\[4pt] &= \int_0^{2\pi} R^2dt=2\pi R^2 . \end{align} \nonumber \]
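A numerical check of this result, with an arbitrary radius:

```python
from math import cos, sin, pi

def circulation(R, n=100000):
    # Circulation of F = -y i + x j around the circle of radius R,
    # via the line integral  ∮ F . dr = ∮ M dx + N dy  (midpoint rule).
    total = 0.0
    h = 2*pi / n
    for i in range(n):
        t = (i + 0.5) * h
        x, y = R*cos(t), R*sin(t)
        dx, dy = -R*sin(t)*h, R*cos(t)*h
        total += (-y)*dx + x*dy
    return total

print(circulation(1.5))   # ≈ 2*pi*R**2 ≈ 14.14
```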
Contributors Danny Nguyen (UCD)
Integrated by Justin Marshall. |
tl;dr: Park your ISS-like space station above 700 km and there is a good chance it will lose only 100 m/s in 1,000 years due to atmospheric drag (and above 2,000 km, a million years). However, there are other problems.
This is a really interesting question! Just for example,
the LAGEOS satellites are about 6,000 km above the Earth's surface and are expected to re-enter the atmosphere in 8 million years or so. But they are spherical and dense, whereas a space station may be non-aerodynamically shaped and have a low density.
Let's look at the current TLE for the ISS from https://www.celestrak.com/NORAD/elements/stations.txt
ISS (ZARYA)
1 25544U 98067A 19203.81086311 .00000606 00000-0 18099-4 0 9996
2 25544 51.6423 184.5274 0006740 168.1171 264.4057 15.50995519180787
The value for B-star (see this) is 18099-4, which means 0.18099E-04, i.e. 1.8099e-05. That's pretty big, as it should be for a hollow space station with big solar panels. It has units of inverse Earth radii (see this, from this).
Wikipedia's BSTAR gives the following equation for acceleration due to drag:
$$a_D = \frac{\rho}{\rho_0} B^* v^2$$
where $\rho_0$ is a reference density, about 0.1570 kg/m^2/Earth radius, and $v$ is the velocity, presumably in m/s.
Calculating time to reentry requires some calculus, so let's just estimate the time it takes to lose 100 m/s in velocity.
$$\Delta t = \frac{\Delta v}{\frac{dv}{dt}} = \frac{\Delta v}{a_D} $$
If we then set $\Delta t$ to 1000 years or $\sim 1000 \times \pi \times 10^7$ seconds, we get $a_D \sim 3E-09$ m/s^2 for that 100 m/s loss.
Putting that back into the first equation and using the ISS' $B^*$, we get
$$\rho = \frac{a_D \rho_0}{B^* v^2} $$
The funky (Earth radii-based) units work out, and the atmospheric density is about 8E-13 kg/m^3, based on an orbital velocity of about 7000 m/s.
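To make the arithmetic reproducible, here is a sketch of the estimate in Python. The inputs are the approximations quoted above (B* from the ISS TLE, the BSTAR reference density, a rough orbital speed); this back-of-envelope version lands at a few times 1E-13 kg/m^3, the same order of magnitude as the figure above:

```python
import math

# Assumed inputs, taken from the discussion above
Bstar = 1.8099e-5          # 1 / Earth radii, from the ISS TLE
rho0 = 0.1570              # kg / (m^2 * Earth radius), BSTAR reference density
v = 7000.0                 # m/s, rough orbital velocity
dv = 100.0                 # m/s, allowed velocity loss
dt = 1000 * math.pi * 1e7  # ~1000 years in seconds

a_D = dv / dt                       # required drag acceleration, m/s^2
rho = a_D * rho0 / (Bstar * v**2)   # ambient density, kg/m^3
```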
What altitude is that? It depends greatly on the Sun's activity. The plot below puts it as low as 380 km during a solar minimum, but up at 700 km during an active Sun.
This link found in this answer puts 8E-13 kg/m^3 closer to 600 km for an active Sun.
That link shows 8E-16 (for 1 million years) at about 2,000 km.
So park your ISS-like space station above 700 km and there is a good chance it will only lose 100 m/s in 1,000 years
due to atmospheric drag at least, and above 2,000 km for 1,000,000 years. However, there are other problems due to a big patch of space junk in the 600 to 1000 km neighborhood.
Found in this answer, from https://en.wikipedia.org/wiki/Scale_height Wertz et al. SSC12-IV-6, 26th Annual AIAA/USU Conference on Small Satellites. |
https://doi.org/10.1351/goldbook.E02283
@E02281@ describing the progress of a chemical reaction equal to the number of chemical transformations, as indicated by the reaction equation on a molecular scale, divided by the @A00543@ (it is essentially the @A00297@ of chemical transformations). The change in the extent of reaction is given by \(\mathrm{d}\xi =\frac{\mathrm{d}n_{\text{B}}}{\nu _{\text{B}}}\), where \(\nu _{\text{B}}\) is the @S06025@ of any reaction entity B (reactant or product) and \(\mathrm{d}n_{\text{B}}\) is the corresponding amount.
Sources:
Green Book, 2nd ed., p. 43 [Terms] [Book]
PAC, 1992,
64, 1569. ( Quantities and units for metabolic processes as a function of time (IUPAC Recommendations 1992)) on page 1572 [Terms] [Paper]
PAC, 1993,
65, 2291. ( Nomenclature of kinetic methods of analysis (IUPAC Recommendations 1993)) on page 2295 [Terms] [Paper]
PAC, 1996,
68, 149. ( A glossary of terms used in chemical kinetics, including reaction dynamics (IUPAC Recommendations 1996)) on page 165 [Terms] [Paper]
PAC, 1996,
68, 957. ( Glossary of terms in quantities and units in Clinical Chemistry (IUPAC-IFCC Recommendations 1996)) on page 973 [Terms] [Paper] |
On the dimension of vertex labeling of $k$-uniform dcsl of $k$-uniform caterpillar Abstract
A distance compatible set labeling (dcsl) of a connected graph $G$ is an injective set assignment $f : V(G) \rightarrow 2^{X},$ $X$ being a nonempty ground set, such that the corresponding induced function $f^{\oplus} :E(G) \rightarrow 2^{X}\setminus \{\emptyset\}$ given by $f^{\oplus}(uv)= f(u)\oplus f(v)$ satisfies $\mid f^{\oplus}(uv) \mid = k_{(u,v)}^{f}d_{G}(u,v) $ for every pair of distinct vertices $u, v \in V(G),$ where $d_{G}(u,v)$ denotes the path distance between $u$ and $v$ and $k_{(u,v)}^{f}$ is a constant, not necessarily an integer. A dcsl $f$ of $G$ is $k$-uniform if all the constant of proportionality with respect to $f$ are equal to $k,$ and if $G$ admits such a dcsl then $G$ is called a $k$-uniform dcsl graph. The $k$-uniform dcsl index of a graph $G,$ denoted by $\delta_{k}(G)$ is the minimum of the cardinalities of $X,$ as $X$ varies over all $k$-uniform dcsl-sets of $G.$ A linear extension ${\mathbf{L}}$ of a partial order ${\mathbf{P}} = (P, \preceq)$ is a linear order on the elements of $P$, such that $ x \preceq y$ in ${\mathbf{P}}$ implies $ x \preceq y$ in ${\mathbf{L}}$, for all $x, y \in P$. The dimension of a poset ${\mathbf{P}},$ denoted by $dim({\mathbf{P}}),$ is the minimum number of linear extensions on ${\mathbf{P}}$ whose intersection is `$\preceq$'. In this paper we prove that $dim({\mathcal{F}}) \leq \delta_{k}(P^{+k}_n),$ where ${\mathcal{F}}$ is the range of a $k$-uniform dcsl of the $k$-uniform caterpillar, denoted by $P^{+k}_n \ (n\geq 1, k\geq 1)$ on `$n(k+1)$' vertices.
Keywords
$k$-uniform dcsl index, dimension of the poset, lattice
According to this article,
[...] the propensity function for the conversion reaction S → P in the well-mixed discrete stochastic case can be written $a(S) = \frac{V_{max}\cdot S}{K_m + S/\Omega}$ where $\Omega$ is the system volume.
I don't quite understand how this formula is derived from the
non-discrete Michaelis-Menten kinetics $v = \frac{V_{max} \cdot [S]}{K_M + [S]}$ (see Wikipedia). According to my understanding, $[S] = S/\Omega$. If we apply this to the formula from Wikipedia we get$$\frac{V_{max} \cdot[S]}{K_M + [S]} = \frac{V_{max} \cdot S/\Omega}{K_M + S/\Omega} = \frac{V_{max} \cdot S}{\Omega \cdot K_M + S} $$which is not the same as $\frac{V_{max}\cdot S}{K_m + S/\Omega}$. So, how can one derive the formula from the quoted article (if it is correct)? If not, how can we correctly get to a discrete propensity function from the Michaelis-Menten kinetics? |
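To make the discrepancy concrete, here is a quick numeric check of the question's algebra (the values for $V_{max}$, $K_M$, $S$, and $\Omega$ are made up, and exact fractions are used so the comparison is not clouded by floating point):

```python
from fractions import Fraction

# Illustration values only (assumptions, not from the article)
Vmax, Km = Fraction(2), Fraction(3)
S, Omega = Fraction(10), Fraction(5)

quoted = Vmax*S / (Km + S/Omega)               # propensity from the quoted article
substituted = Vmax*(S/Omega) / (Km + S/Omega)  # naive [S] = S/Omega substitution
```

The two expressions disagree, and the substituted form simplifies to $\frac{V_{max} \cdot S}{\Omega \cdot K_M + S}$ exactly as derived above.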
There is a calculation that I had been thinking for a long time of working out to my own satisfaction, both because of its intrinsic importance and because it seemed like it would be fun. This was to calculate the intensity of a gravitational wave for a given amplitude. E.g., for LIGO events we have amplitudes expressed as a fractional change in the metric, and these can be related to the mass-energy released in the event that created the waves. Misner, Thorne, and Wheeler have the following expression (I've changed the notation slightly) in section 36.7:
$$ T_{cd}=\frac{1}{32\pi}\langle g_{ab,c}g^{ab}{}_{,d} \rangle , $$
where $g_{\mu\nu}-\eta_{\mu\nu} \ll 1$. This is for the effective stress-energy tensor for a gravitational wave, in the transverse traceless (TT) gauge, averaged over several wavelengths.
MTW's derivation of this is complex and spread around in little chunks in different places in the book. I thought I would go through the features of this equation and see how many of them I could understand or give heuristic derivations for. Dimensionally, it makes sense, and this occurs iff we have two derivatives. It depends on the square of the amplitude, which makes sense in the small-amplitude limit for any wave. The derivatives are partial derivatives, not covariant derivatives, because the covariant derivative of the metric is zero. The average is required because by the equivalence principle, we can never have a local expression for the energy density of the gravitational field. Next I started worrying about the factor of $1/32\pi$, wondering if there was some heuristic way to produce it.
What occurred to me was to work out an expression, in similar notation, for the energy density of the gravitational field in Newtonian mechanics. The energy density is $-(1/8\pi)\textbf{g}^2=-(1/8\pi)(\nabla\phi)^2$, where the units are such that $G=1$, and $\nabla$ is a 3-gradient. In the static case, in the semi-newtonian limit, the metric is $g_{tt}=1+2\phi$. So translating the above expression for the energy density into GR-style notation, we have
$$\textbf{g}=-\frac{1}{2}\nabla g_{tt}$$
$$T_{tt} = -\frac{1}{32\pi}g_{tt,\mu}g^{tt,\mu},$$
where $\mu$ runs only over the spacelike coordinates.
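For completeness, the substitution linking the Newtonian energy density to the form above, using $\nabla\phi = \tfrac{1}{2}\nabla g_{tt}$, is

$$-\frac{1}{8\pi}(\nabla\phi)^2 = -\frac{1}{8\pi}\left(\frac{1}{2}\nabla g_{tt}\right)^2 = -\frac{1}{32\pi}g_{tt,\mu}g^{tt,\mu}$$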
So these two expressions have some obvious similarities, but they are also different in certain ways. It's nice that the $1/32\pi$ is the same in the two cases, although the sign is opposite. However, we're contracting over different indices. I'm also not very convinced by my own comparison of these expressions because the one for a gravitational wave is specific to a certain gauge, and I never actually used that assumption.
Is it possible to give a semi-heuristic derivation -- one that's more convincing than mine -- for the expression for the effective stress-energy tensor of a gravitational wave? |
Say a matrix A is positive semi-definite. Let B be a square matrix composed of replicas of A as sub-blocks, s.t. $$B=\begin{pmatrix} A & A \\ A & A \\ \end{pmatrix},$$ or $$\begin{pmatrix} A & A & A \\ A & A & A \\ A & A & A \\ \end{pmatrix},$$ etc. Would $B$ be semi-definite as well?
For any vector $x$, divide it into appropriately-sized subvectors $x_1,\ldots,x_n$ so that $$\begin{align} x^TBx &= \begin{bmatrix}x_1\\\vdots\\x_n\end{bmatrix}^T\begin{bmatrix}A&\cdots& A\\\vdots&\ddots&\vdots\\A&\cdots&A\end{bmatrix}\begin{bmatrix}x_1\\\vdots\\x_n\end{bmatrix} \\ &= x_1^TAx_1 + \cdots + x_1^TAx_n \\ &\phantom{=}+ \cdots \\ &\phantom{=}+ x_n^TAx_1 + \cdots + x_n^TAx_n \\ &= (x_1+\cdots+x_n)^TA(x_1+\cdots+x_n) \end{align}$$ which is nonnegative because $A$ is positive semidefinite.
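The identity used in this answer is easy to verify numerically. Here is a small self-contained Python sketch (pure standard library; $A = G^TG$ is generated randomly so it is symmetric positive semidefinite by construction):

```python
import random

def quad_form(M, x):
    # x^T M x for a list-of-lists matrix M
    n = len(x)
    return sum(x[i]*M[i][j]*x[j] for i in range(n) for j in range(n))

def block_tile(A, n):
    # n-by-n block matrix with A in every block
    m = len(A)
    return [[A[i % m][j % m] for j in range(n*m)] for i in range(n*m)]

random.seed(0)
m, n = 3, 3
G = [[random.uniform(-1, 1) for _ in range(m)] for _ in range(m)]
# A = G^T G is symmetric positive semidefinite by construction
A = [[sum(G[t][i]*G[t][j] for t in range(m)) for j in range(m)] for i in range(m)]
B = block_tile(A, n)

x = [random.uniform(-1, 1) for _ in range(n*m)]
s = [sum(x[t*m + i] for t in range(n)) for i in range(m)]  # x_1 + ... + x_n
qB = quad_form(B, x)  # should equal s^T A s, hence be nonnegative
qA = quad_form(A, s)
```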
Let $X$ and $Y$ be symmetric positive semidefinite matrices. Then $X\otimes Y$, where $\otimes$ denotes the Kronecker product, is symmetric positive semidefinite as well.
Let $$ X=E, \quad Y=A, $$ where $E$ is a square matrix of ones (for the matrices in question, $E$ is $2\times 2$ or $3\times 3$). Note that $E=ee^T$ with $e=[1,1,\ldots,1]^T$, so $E$ is obviously symmetric positive semidefinite. Now use the fact above.
Let $$B=\left( \begin{array}{cc} A & A \\ A & A \\ \end{array} \right)$$ Decompose a vector $z\in\mathbb{R}^{2n}$ as $$z=\left( \begin{array}{cc} x \\ y \\ \end{array}\right)\quad x,y\in \mathbb{R}^n$$ then $$Bz=\left( \begin{array}{cc} A & A \\ A & A \\ \end{array} \right) \left( \begin{array}{cc} x \\ y \\ \end{array} \right)=\left( \begin{array}{cc} Ax+Ay \\ Ax+Ay \\ \end{array} \right) $$ So $$\langle z,Bz\rangle=\left\langle\left( \begin{array}{cc} x \\ y \\ \end{array} \right),\left( \begin{array}{cc} Ax+Ay \\ Ax+Ay \\ \end{array} \right)\right\rangle=x^TAx+x^TAy+y^TAx+y^TAy=(x+y)^TA(x+y)\geq0$$
No, consider the 2-by-2 matrix with 1 in all entries.
Edit: I may have been too quick here, that matrix so defined actually is positive semi-definite, but it is not positive definite.
Edit2: I think you are correct: Let $A\geq 0$ and let $M$ be the '2-by-2' block matrix with $A$ in all 4 blocks. Then $$ \langle M(x,y),(x,y)\rangle = \langle Ax,x\rangle +\langle Ay,y\rangle +\langle Ax,y\rangle+\langle Ay,x\rangle=\langle Ax,x\rangle +\langle Ay,y\rangle +2Re \langle Ax,y\rangle. $$ Now, since $A\geq 0$, we can employ the Cauchy-Schwarz inequality to obtain
$$ |Re \langle Ax,y\rangle|\leq |\langle Ax,y\rangle|\leq \sqrt{\langle Ax,x\rangle\langle Ay,y\rangle}\leq \frac{1}{2}(\langle Ax,x\rangle + \langle Ay,y\rangle), $$
which implies that $M\geq 0$. I believe this will go through to higher orders as well.
Yes in special cases, because the determinant of a block matrix is the product of the determinants of the blocks if the blocks are placed on the main diagonal. This is not true for general block matrices. Something like
[A 0 0]
[0 B 0]
[0 0 C]
is allowed. |
Skills to Develop
Construct probability models. Compute probabilities of equally likely outcomes. Compute probabilities of the union of two events. Use the complement rule to find probabilities. Compute probability using counting theory.
Residents of the Southeastern United States are all too familiar with charts, known as spaghetti models, such as the one in Figure \(\PageIndex{1}\). They combine a collection of weather data to predict the most likely path of a hurricane. Each colored line represents one possible path. The group of squiggly lines can begin to resemble strands of spaghetti, hence the name. In this section, we will investigate methods for making these types of predictions.
Figure \(\PageIndex{1}\): An example of a “spaghetti model,” which can be used to predict possible paths of a tropical storm. 1 Constructing Probability Models
Suppose we roll a six-sided number cube. Rolling a number cube is an example of an
experiment, or an activity with an observable result. The numbers on the cube are possible results, or outcomes, of this experiment. The set of all possible outcomes of an experiment is called the sample space of the experiment. The sample space for this experiment is \(\{1,2,3,4,5,6 \}\). An event is any subset of a sample space.
The likelihood of an event is known as probability. The probability of an event \(p\) is a number that always satisfies \(0≤p≤1\), where \(0\) indicates an impossible event and \(1\) indicates a certain event. A probability model is a mathematical description of an experiment listing all possible outcomes and their associated probabilities. For instance, if there is a \(1\%\) chance of winning a raffle and a \(99\%\) chance of losing the raffle, a probability model would look much like Table \(\PageIndex{1}\).
Outcome Probability Winning the raffle 1% Losing the raffle 99%
The sum of the probabilities listed in a probability model must equal \(1\), or \(100\%\).
How to: Given a probability event where each event is equally likely, construct a probability model.
Identify every outcome. Determine the total number of possible outcomes. Compare each outcome to the total number of possible outcomes.
Example \(\PageIndex{1}\): Constructing a Probability Model
Construct a probability model for rolling a single, fair die, with the event being the number shown on the die.
Solution
Begin by making a list of all possible outcomes for the experiment. The possible outcomes are the numbers that can be rolled: \(1\), \(2\), \(3\), \(4\), \(5\), and \(6\). There are six possible outcomes that make up the sample space.
Assign probabilities to each outcome in the sample space by determining a ratio of the outcome to the number of possible outcomes. There is one of each of the six numbers on the cube, and there is no reason to think that any particular face is more likely to show up than any other one, so the probability of rolling any number is \(\dfrac{1}{6}\).
Outcome Roll of 1 Roll of 2 Roll of 3 Roll of 4 Roll of 5 Roll of 6 Probability \(\dfrac{1}{6}\) \(\dfrac{1}{6}\) \(\dfrac{1}{6}\) \(\dfrac{1}{6}\) \(\dfrac{1}{6}\) \(\dfrac{1}{6}\)
Q&A: Do probabilities always have to be expressed as fractions?
No. Probabilities can be expressed as fractions, decimals, or percents. Probability must always be a number between \(0\) and \(1\), inclusive of \(0\) and \(1\). Computing Probabilities of Equally Likely Outcomes
Let \(S\) be a sample space for an experiment. When investigating probability, an event is any subset of \(S\). When the outcomes of an experiment are all equally likely, we can find the probability of an event by dividing the number of outcomes in the event by the total number of outcomes in \(S\). Suppose a number cube is rolled, and we are interested in finding the probability of the event “rolling a number less than or equal to 4.” There are 4 possible outcomes in the event and 6 possible outcomes in \(S\), so the probability of the event is \(\dfrac{4}{6}=\dfrac{2}{3}\).
COMPUTING THE PROBABILITY OF AN EVENT WITH EQUALLY LIKELY OUTCOMES
The probability of an event \(E\) in an experiment with sample space \(S\) with equally likely outcomes is given by
\[P(E)=\dfrac{\text{number of elements in }E}{\text{number of elements in }S}=\dfrac{n(E)}{n(S)}\]
\(E\) is a subset of \(S\), so it is always true that \(0≤P(E)≤1\).
Example \(\PageIndex{2}\): Computing the Probability of an Event with Equally Likely Outcomes
A number cube is rolled. Find the probability of rolling an odd number.
Solution
The event “rolling an odd number” contains three outcomes. There are \(6\) equally likely outcomes in the sample space. Divide to find the probability of the event.
\(P(E)=\dfrac{3}{6}=\dfrac{1}{2}\)
Exercise \(\PageIndex{1}\)
A number cube is rolled. Find the probability of rolling a number greater than \(2\).
Answer
\(\dfrac{2}{3}\)
Computing the Probability of the Union of Two Events
We are often interested in finding the probability that one of multiple events occurs. Suppose we are playing a card game, and we will win if the next card drawn is either a heart or a king. We would be interested in finding the probability of the next card being a heart or a king. The union of two events \(E\) and \(F\),written \(E\cup F\), is the event that occurs if either or both events occur.
\[P(E\cup F)=P(E)+P(F)−P(E\cap F)\]
Suppose the spinner in Figure \(\PageIndex{2}\) is spun. We want to find the probability of spinning orange or spinning a \(b\).
Figure \(\PageIndex{2}\): A pie chart with six options.
There are a total of \(6\) sections, and \(3\) of them are orange. So the probability of spinning orange is \(\dfrac{3}{6}=\dfrac{1}{2}\). There are a total of \(6\) sections, and \(2\) of them have a \(b\). So the probability of spinning a \(b\) is \(\dfrac{2}{6}=\dfrac{1}{3}\). If we added these two probabilities, we would be counting the sector that is both orange and a \(b\) twice. To find the probability of spinning an orange or a \(b\), we need to subtract the probability that the sector is both orange and has a \(b\).
\(\dfrac{1}{2}+\dfrac{1}{3}−\dfrac{1}{6}=\dfrac{2}{3}\)
The probability of spinning orange or a \(b\) is \(\dfrac{2}{3}\).
PROBABILITY OF THE UNION OF TWO EVENTS
The
probability of the union of two events \(E\) and \(F\) (written \(E\cup F\)) equals the sum of the probability of \(E\) and the probability of \(F\) minus the probability of \(E\) and \(F\) occurring together (which is called the intersection of \(E\) and \(F\) and is written as \(E\cap F\)).
\[P(E\cup F)=P(E)+P(F)−P(E\cap F)\]
Example \(\PageIndex{3}\): Computing the Probability of the Union of Two Events
A card is drawn from a standard deck. Find the probability of drawing a heart or a \(7\).
Solution
A standard deck contains an equal number of hearts, diamonds, clubs, and spades. So the probability of drawing a heart is \(\dfrac{1}{4}\). There are four \(7s\) in a standard deck, and there are a total of \(52\) cards. So the probability of drawing a \(7\) is \(\dfrac{1}{13}\).
The only card in the deck that is both a heart and a \(7\) is the \(7\) of hearts, so the probability of drawing both a heart and a \(7\) is \(\dfrac{1}{52}\). Substitute \(P(H)=\dfrac{1}{4}\), \(P(7)=\dfrac{1}{13}\), and \(P(H\cap 7)=\dfrac{1}{52}\) into the formula.
\[\begin{align*} P(E\cup F) &=P(E)+P(F)−P(E\cap F) \\[4pt] &=\dfrac{1}{4}+\dfrac{1}{13}−\dfrac{1}{52} \\[4pt] &=\dfrac{4}{13} \end{align*}\]
The probability of drawing a heart or a \(7\) is \(\dfrac{4}{13}\).
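The arithmetic in this example can be verified with exact fractions (a small Python sketch):

```python
from fractions import Fraction

P_heart = Fraction(13, 52)  # 13 hearts in 52 cards
P_seven = Fraction(4, 52)   # four 7s in the deck
P_both = Fraction(1, 52)    # only the 7 of hearts is both
P_union = P_heart + P_seven - P_both  # P(E u F) = P(E) + P(F) - P(E n F)
```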
Exercise \(\PageIndex{2}\)
A card is drawn from a standard deck. Find the probability of drawing a red card or an ace.
Answer
\(\dfrac{7}{13}\)
Computing the Probability of Mutually Exclusive Events
Suppose the spinner in Figure \(\PageIndex{2}\) is spun again, but this time we are interested in the probability of spinning an orange or a \(d\). There are no sectors that are both orange and contain a \(d\), so these two events have no outcomes in common. Events are said to be mutually exclusive events when they have no outcomes in common. Because there is no overlap, there is nothing to subtract, so the general formula is
\[P(E\cup F)=P(E)+P(F)\]
Notice that with mutually exclusive events, the intersection of \(E\) and \(F\) is the empty set. The probability of spinning an orange is \(\dfrac{3}{6}=\dfrac{1}{2}\) and the probability of spinning a \(d\) is \(\dfrac{1}{6}\). We can find the probability of spinning an orange or a \(d\) simply by adding the two probabilities.
\[\begin{align*} P(E\cup F)&=P(E)+P(F) \\[4pt] &=\dfrac{1}{2}+\dfrac{1}{6} \\ &=\dfrac{2}{3} \end{align*}\]
The probability of spinning an orange or a \(d\) is \(\dfrac{2}{3}\).
PROBABILITY OF THE UNION OF MUTUALLY EXCLUSIVE EVENTS
The
probability of the union of two mutually exclusive events \(E\) and \(F\) is given by
\[P(E\cup F)=P(E)+P(F)\]
Example \(\PageIndex{4}\): Computing the Probability of the Union of Mutually Exclusive Events
A card is drawn from a standard deck. Find the probability of drawing a heart or a spade.
Solution
The events “drawing a heart” and “drawing a spade” are mutually exclusive because they cannot occur at the same time. The probability of drawing a heart is \(\dfrac{1}{4}\), and the probability of drawing a spade is also \(\dfrac{1}{4}\), so the probability of drawing a heart or a spade is
\(\dfrac{1}{4}+\dfrac{1}{4}=\dfrac{1}{2}\)
Exercise \(\PageIndex{3}\)
A card is drawn from a standard deck. Find the probability of drawing an ace or a king.
Answer
\(\dfrac{2}{13}\)
Using the Complement Rule to Compute Probabilities
We have discussed how to calculate the probability that an event will happen. Sometimes, we are interested in finding the probability that an event will
not happen. The complement of an event \(E\), denoted \(E′\), is the set of outcomes in the sample space that are not in \(E\). For example, suppose we are interested in the probability that a horse will lose a race. If event \(W\) is the horse winning the race, then the complement of event \(W\) is the horse losing the race.
To find the probability that the horse loses the race, we need to use the fact that the sum of all probabilities in a probability model must be \(1\).
\[P(E′)=1−P(E)\]
The probability of the horse winning added to the probability of the horse losing must be equal to \(1\). Therefore, if the probability of the horse winning the race is \(\dfrac{1}{9}\), the probability of the horse losing the race is simply
\(1−\dfrac{1}{9}=\dfrac{8}{9}\)
Computing Probability Using Counting Theory
Many interesting probability problems involve counting principles, permutations, and combinations. In these problems, we will use permutations and combinations to find the number of elements in events and sample spaces. These problems can be complicated, but they can be made easier by breaking them down into smaller counting problems.
Assume, for example, that a store has \(8\) cellular phones and that \(3\) of those are defective. We might want to find the probability that a couple purchasing \(2\) phones receives \(2\) phones that are not defective. To solve this problem, we need to calculate all of the ways to select \(2\) phones that are not defective as well as all of the ways to select \(2\) phones. There are \(5\) phones that are not defective, so there are \(C(5,2)\) ways to select \(2\) phones that are not defective. There are \(8\) phones, so there are \(C(8,2)\) ways to select \(2\) phones. The probability of selecting \(2\) phones that are not defective is:
\[ \begin{align*} \dfrac{\text{ways to select 2 phones that are not defective}}{\text{ways to select 2 phones}}&=\dfrac{C(5,2)}{C(8,2)} \\[4pt] &=\dfrac{10}{28} \\[4pt] &=\dfrac{5}{14} \end{align*}\]
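This calculation is easy to check with Python's `math.comb` (available in Python 3.8+) and exact fractions:

```python
from math import comb
from fractions import Fraction

# ways to pick 2 non-defective phones out of 5, over ways to pick any 2 of 8
p = Fraction(comb(5, 2), comb(8, 2))
```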
Example \(\PageIndex{5}\): Computing Probability Using Counting Theory
A child randomly selects \(5\) toys from a bin containing \(3\) bunnies, \(5\) dogs, and \(6\) bears.
Find the probability that only bears are chosen. Find the probability that \(2\) bears and \(3\) dogs are chosen. Find the probability that at least \(2\) dogs are chosen.
Solution
We need to count the number of ways to choose only bears and the total number of possible ways to select \(5\) toys. There are \(6\) bears, so there are \(C(6,5)\) ways to choose \(5\) bears. There are \(14\) toys, so there are \(C(14,5)\) ways to choose any \(5\) toys.
\(\dfrac{C(6,5)}{C(14,5)}=\dfrac{6}{2,002}=\dfrac{3}{1,001}\)
We need to count the number of ways to choose \(2\) bears and \(3\) dogs and the total number of possible ways to select \(5\) toys. There are \(6\) bears, so there are \(C(6,2)\) ways to choose \(2\) bears. There are \(5\) dogs, so there are \(C(5,3)\) ways to choose \(3\) dogs. Since we are choosing both bears and dogs at the same time, we will use the Multiplication Principle. There are \(C(6,2)⋅C(5,3)\) ways to choose \(2\) bears and \(3\) dogs. We can use this result to find the probability.
\(\dfrac{C(6,2)C(5,3)}{C(14,5)}=\dfrac{15⋅10}{2,002}=\dfrac{75}{1,001}\)
It is often easiest to solve “at least” problems using the Complement Rule. We will begin by finding the probability that fewer than \(2\) dogs are chosen. If less than \(2\) dogs are chosen, then either no dogs could be chosen, or \(1\) dog could be chosen.
When no dogs are chosen, all \(5\) toys come from the \(9\) toys that are not dogs. There are \(C(9,5)\) ways to choose toys from the \(9\) toys that are not dogs. Since there are \(14\) toys, there are \(C(14,5)\) ways to choose the \(5\) toys from all of the toys.
\(\dfrac{C(9,5)}{C(14,5)}=\dfrac{63}{1,001}\)
If there is \(1\) dog chosen, then \(4\) toys must come from the \(9\) toys that are not dogs, and \(1\) must come from the \(5\) dogs. Since we are choosing both dogs and other toys at the same time, we will use the Multiplication Principle. There are \(C(5,1)⋅C(9,4)\) ways to choose \(1\) dog and \(4\) other toys.
\(\dfrac{C(5,1)C(9,4)}{C(14,5)}=\dfrac{5⋅126}{2,002}=\dfrac{315}{1,001}\)
Because these events would not occur together and are therefore mutually exclusive, we add the probabilities to find the probability that fewer than \(2\) dogs are chosen.
\(\dfrac{63}{1,001}+\dfrac{315}{1,001}=\dfrac{378}{1,001}\)
We then subtract that probability from \(1\) to find the probability that at least \(2\) dogs are chosen.
\(1−\dfrac{378}{1,001}=\dfrac{623}{1,001}\)
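All three parts of this example can be checked the same way, with `math.comb` and exact fractions:

```python
from math import comb
from fractions import Fraction

total = comb(14, 5)                                   # any 5 of the 14 toys
p_all_bears = Fraction(comb(6, 5), total)             # part 1
p_2bears_3dogs = Fraction(comb(6, 2)*comb(5, 3), total)  # part 2
p_no_dogs = Fraction(comb(9, 5), total)
p_one_dog = Fraction(comb(5, 1)*comb(9, 4), total)
p_atleast_2dogs = 1 - (p_no_dogs + p_one_dog)         # part 3, complement rule
```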
Exercise \(\PageIndex{5}\)
A child randomly selects \(3\) gumballs from a container holding \(4\) purple gumballs, \(8\) yellow gumballs, and \(2\) green gumballs.
Find the probability that all \(3\) gumballs selected are purple. Find the probability that no yellow gumballs are selected. Find the probability that at least \(1\) yellow gumball is selected. Answer
\(\dfrac{1}{91}\)
Answer
\(\dfrac{5}{91}\)
Answer
\(\dfrac{86}{91}\)
Key Equations
probability of an event with equally likely outcomes
\(P(E)=\dfrac{n(E)}{n(S)}\)
probability of the union of two events
\(P(E\cup F)=P(E)+P(F)−P(E\cap F)\)
probability of the union of mutually exclusive events
\(P(E\cup F)=P(E)+P(F)\)
probability of the complement of an event
\(P(E')=1−P(E)\)
Key Concepts Probability is always a number between \(0\) and \(1\), where \(0\) means an event is impossible and \(1\) means an event is certain. The probabilities in a probability model must sum to \(1\). See Example \(\PageIndex{1}\). When the outcomes of an experiment are all equally likely, we can find the probability of an event by dividing the number of outcomes in the event by the total number of outcomes in the sample space for the experiment. See Example \(\PageIndex{2}\). To find the probability of the union of two events, we add the probabilities of the two events and subtract the probability that both events occur simultaneously. See Example \(\PageIndex{3}\). To find the probability of the union of two mutually exclusive events, we add the probabilities of each of the events. See Example \(\PageIndex{4}\). The probability of the complement of an event is the difference between \(1\) and the probability that the event occurs. See Example \(\PageIndex{5}\). In some probability problems, we need to use permutations and combinations to find the number of elements in events and sample spaces. See Example \(\PageIndex{6}\). |
The cost function is described as:
$$ J(x) = \frac{(M(x)-y)^2}{\sigma_y^2} + \frac{(x-x_0)^2}{\sigma_x^2} $$
where,
$x$ are the unknowns, $k$ and $H$ in our case $x_0$ are prior estimates of unknowns $\sigma_x$ are the uncertainty of these prior estimates $M$ is the model we are solving, in our case the heat equation $y$ are observables on the solution, surface heat flow in our case $\sigma_y$ is the uncertainty of our observables
We are solving the steady state heat equation which requires input variables $k$, $H$, and $q_0$. The second part of this equation can be written as:
$$ \frac{(k-k_0)^2}{\sigma_k^2} + \frac{(H-H_0)^2}{\sigma_H^2} + \dots $$
where all of our prior estimates and uncertainties are added to the cost function.
The temperature, $T$, with respect to depth, $z$ is:
$$ T(z) = T_0 + \frac{Q_0 z}{k} + \frac{H dz^2}{k} \left( 1-e^{-z/dz} \right) $$
we can formulate this into a forward model where the cost function is returned:
def f(x):
    '''forward model, returns the cost function J(x)'''
    k, H, Q0 = tuple(x)
    T = T0 + Q0*z/k + H*dz**2/k * (1 - exp(-z/dz))
    q = k * (T[1]-T[0]) / (z[1]-z[0])
    return ((q-q_obs)**2/sigma_q**2 + (k-k_prior)**2/sigma_k**2
            + (H-H_prior)**2/sigma_H**2 + (Q0-Q0_prior)**2/sigma_Q0**2)
q_obs and
sigma_q are the surface heat flow observation and its uncertainty, respectively. The remaining terms are our prior estimates of $k$, $H$, and $Q_0$. These are all summed into the cost function.
Note that k, H, and Q0 are the only parameters that change.
Now the tangent linear model uses the derivatives of each line in the forward model with respect to
k,
H, and
Q0.
def f_tl(x, dx):
    '''tangent linear model'''
    k, H, Q0 = tuple(x)
    dk, dH, dQ0 = tuple(dx)
    # T = f(k, H, Q0)
    T = T0 + Q0*z/k + H*dz**2/k * (1 - exp(-z/dz))
    dTdk = -Q0*np.array(z)/k**2 - H*dz**2/k**2 * (1 - exp(-z/dz))
    dTdH = dz**2/k * (1 - exp(-z/dz))
    dTdQ0 = z/k
    dT = dTdk*dk + dTdH*dH + dTdQ0*dQ0  # similar to the dot product
    # q_model = f(T0, T1, k)
    q_model = k * (T[1]-T[0]) / (z[1]-z[0])
    dqdT1 = k / (z[1]-z[0])
    dqdT0 = -k / (z[1]-z[0])
    dqdk = (T[1]-T[0]) / (z[1]-z[0])
    dq_model = dot([dqdT0, dqdT1, dqdk], [dT[0], dT[1], dk])
    # return f(q_model, k, H, Q0)
    d2q = (2*q_model - 2*q_obs)/sigma_q**2
    d2k = (2*k - 2*k_prior)/sigma_k**2
    d2H = (2*H - 2*H_prior)/sigma_H**2
    d2Q0 = (2*Q0 - 2*Q0_prior)/sigma_Q0**2
    d2x = dot([d2q, d2k, d2H, d2Q0], [dq_model, dk, dH, dQ0])
    return d2x
This returns reasonable values of $df$ provided the change in our variables ($k, H, Q_0$) is relatively small. Now to move on to the adjoint model…
We backwards-propagate a small change in $f$ through the function, starting with the last line of the tangent linear model. For every variable ($k, H, Q_0$) that appears in more than one line, we need to accumulate its contributions and be careful not to overwrite them.
def f_ad(x, df):
    '''adjoint model'''
    k, H, Q0 = tuple(x)
    T = T0 + Q0*z/k + H*dz**2/k * (1 - exp(-z/dz))
    q_model = k * (T[1]-T[0]) / (z[1]-z[0])
    d2q = (2*q_model - 2*q_obs)/sigma_q**2
    d2k = (2*k - 2*k_prior)/sigma_k**2
    d2H = (2*H - 2*H_prior)/sigma_H**2
    d2Q0 = (2*Q0 - 2*Q0_prior)/sigma_Q0**2
    dq_ad = d2q*df
    dk_ad = d2k*df
    dH_ad = d2H*df
    dQ0_ad = d2Q0*df
    dqdT0 = -k / (z[1]-z[0])
    dqdT1 = k / (z[1]-z[0])
    dqdk = (T[1]-T[0]) / (z[1]-z[0])
    dT0_ad = dqdT0*dq_ad
    dT1_ad = dqdT1*dq_ad
    dk_ad += dqdk*dq_ad
    dTdk = -Q0*np.array(z)/k**2 - H*dz**2/k**2 * (1 - exp(-z/dz))
    dTdH = dz**2/k * (1 - exp(-z/dz))
    dTdQ0 = z/k
    dk_ad += dTdk[0]*dT0_ad + dTdk[1]*dT1_ad
    dH_ad += dTdH[0]*dT0_ad + dTdH[1]*dT1_ad
    dQ0_ad += dTdQ0[0]*dT0_ad + dTdQ0[1]*dT1_ad
    return dk_ad, dH_ad, dQ0_ad
The adjoint model returns $dk, dH, dQ_0$.
Apart from being able to calculate $dx$ from any $f(x)$, the adjoint model is much more computationally friendly than the TLM. For example, the TLM computed
T[0],
T[1],
T[2]…
T[n], but the adjoint only requires computation on elements you actually need - in this case only
T[0] and
T[1].
When writing the adjoint, only enough of the forward model needs to be computed so that you have the appropriate variables. For most cases this is a complete forward run. |
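A standard way to test an adjoint (or any hand-derived gradient) is to compare it against finite differences of the forward model. The sketch below is self-contained: every parameter value is made up for illustration, the forward model is rewritten in pure Python over a two-point depth array, and an analytic gradient plays the role the adjoint model serves above:

```python
import math

# Made-up illustration values (assumptions, not from the original post)
T0, dz = 10.0, 10e3
z = [0.0, 1e3]
q_obs, sigma_q = 65e-3, 5e-3
k_prior, sigma_k = 3.0, 0.5
H_prior, sigma_H = 1e-6, 5e-7
Q0_prior, sigma_Q0 = 30e-3, 5e-3

def temperature(k, H, Q0):
    return [T0 + Q0*zi/k + H*dz**2/k*(1 - math.exp(-zi/dz)) for zi in z]

def f(x):
    # forward model: cost function J(x)
    k, H, Q0 = x
    T = temperature(k, H, Q0)
    q = k*(T[1] - T[0])/(z[1] - z[0])
    return ((q - q_obs)**2/sigma_q**2 + (k - k_prior)**2/sigma_k**2
            + (H - H_prior)**2/sigma_H**2 + (Q0 - Q0_prior)**2/sigma_Q0**2)

def grad_analytic(x):
    # hand-derived dJ/dk, dJ/dH, dJ/dQ0 (what the adjoint model returns)
    k, H, Q0 = x
    T = temperature(k, H, Q0)
    span = z[1] - z[0]
    q = k*(T[1] - T[0])/span
    dJdq = 2*(q - q_obs)/sigma_q**2
    dTdk = [-Q0*zi/k**2 - H*dz**2/k**2*(1 - math.exp(-zi/dz)) for zi in z]
    dTdH = [dz**2/k*(1 - math.exp(-zi/dz)) for zi in z]
    dTdQ0 = [zi/k for zi in z]
    dqdk = (T[1] - T[0])/span + k*(dTdk[1] - dTdk[0])/span
    dqdH = k*(dTdH[1] - dTdH[0])/span
    dqdQ0 = k*(dTdQ0[1] - dTdQ0[0])/span
    return [dJdq*dqdk + 2*(k - k_prior)/sigma_k**2,
            dJdq*dqdH + 2*(H - H_prior)/sigma_H**2,
            dJdq*dqdQ0 + 2*(Q0 - Q0_prior)/sigma_Q0**2]

def grad_fd(x, rel=1e-6):
    # central finite differences with a relative step per component
    g = []
    for i in range(3):
        h = rel*abs(x[i])
        xp, xm = list(x), list(x)
        xp[i] += h
        xm[i] -= h
        g.append((f(xp) - f(xm))/(2*h))
    return g
```

If the two gradients agree to several digits at a test point, the derivative code is almost certainly consistent with the forward model; the same check applies line by line to a real adjoint.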
Real Analysis Exchange Real Anal. Exchange Volume 39, Number 1 (2013), 91-100. Essential Divergence in Measure of Multiple Orthogonal Fourier Series Abstract
In the present paper we prove the following theorem: Let \(\{\varphi _{m,n}(x,y)\}_{m,n=1}^{\infty} \) be an arbitrary uniformly bounded double orthonormal system on \(I^2:=[0,1]^2\) such that for some increasing sequence of positive integers \(\{N_n\}_{n=1}^\infty \) the Lebesgue functions \(L_{N_n,N_n}(x,y)\) of the system are bounded below a.e. by \( \ln^{1+\epsilon} N_n \), where \(\epsilon \) is a positive constant. Then there exists a function \(g \in L(I^2)\) such that the double Fourier series of \(g\) with respect to the system \(\{\varphi _{m,n}(x,y)\}_{m,n=1}^{\infty} \) essentially diverges in measure by squares on \(I^2\). The condition is critical in the logarithmic scale in the class of all such systems.
Article information Source Real Anal. Exchange, Volume 39, Number 1 (2013), 91-100. Dates First available in Project Euclid: 1 July 2014 Permanent link to this document https://projecteuclid.org/euclid.rae/1404230142 Mathematical Reviews number (MathSciNet) MR1006530 Zentralblatt MATH identifier 1296.13021 Subjects Primary: 26B10: Implicit function theorems, Jacobians, transformations with several variables 42B08: Summability Secondary: 40B05: Multiple sequences and series (should also be assigned at least one other classification number in this section) Citation
Getsadze, Rostom. Essential Divergence in Measure of Multiple Orthogonal Fourier Series. Real Anal. Exchange 39 (2013), no. 1, 91--100. https://projecteuclid.org/euclid.rae/1404230142 |
According to me, $\ce{Fe^{2+}}$ should be a better reducing agent because $\ce{Fe^2+}$ - after being oxidized - will attain a stable $\ce{d^5}$ configuration, whereas $\ce{Cr^2+}$ will attain a $\ce{d^3}$ configuration. I think the half filled $\ce{d^5}$ configuration is more stable than the $\ce{d^3}$ configuration. Why is this not so?
The short answer is thermodynamics. Reduction with $\ce{Cr^2+}$ must be more exergonic than reduction with $\ce{Fe^2+}$, we'll get to some numbers in a bit, but let's deal with the concept.
It is tricky to compare the "stability" of two possible products that occur from different pathways. What is more important for the spontaneity of the reaction is the change in (free) energy that occurs from start to finish.
To put it another way, it may be that $\ce{Fe^3+}$ is more stable than $\ce{Cr^3+}$ on an absolute scale, but what we really care about is how much more stable $\ce{Cr^3+}$ is to $\ce{Cr^2+}$ compared to how much more stable $\ce{Fe^3+}$ is to $\ce{Fe^2+}$.
Let's examine all four using electron configuration as you have done:
$\ce{Fe^2+}$ is $\ce{d^6}$ or more probably $\ce{[Ar] 4s^1 3d^5}$ - two half-filled shells! $\ce{Fe^3+}$ is $\ce{d^5}$, which is $\ce{[Ar] 3d^5}$ - one half-filled shell.
The difference between the two iron ions might not be that large.
$\ce{Cr^2+}$ is $\ce{d^4}$, which is $\ce{[Ar] 3d^4}$ or $\ce{[Ar] 4s^2 3d^2}$ $\ce{Cr^3+}$ is $\ce{d^3}$, which is $\ce{[Ar] 3d^3}$ or $\ce{[Ar] 4s^2 3d^1}$
The energy difference between chromium ions might be larger. Represented graphically, the reaction coordinate energy diagram for the two process might be:
Now the number part.
We can go looking for some standard thermodynamic data to help make our case. The best data are for standard reduction potentials, because these data are for exactly what we want!
Taking the reduction potential data and writing the equations in the direction we care about, we have:
$$\begin{align} &\ce{Fe^2+ -> Fe^3+} &&&E^\circ=\pu{-0.77 V}\\ &\ce{Cr^2+ -> Cr^3+} &&&E^\circ=\pu{+0.44 V} \end{align}$$
Spontaneous reactions produce positive potential differences, so we can see right now that $\ce{Cr^2+}$ is a better reducing agent.
Let's go a step farther to free energy.
$$\Delta G^\circ =-nFE^\circ$$
However, to deal with free energy, we need a full reaction. Technically, comparing the two half-reactions in isolation is just as bad. However, the definition of the standard electrode potential and the standard free energy come with a common zero point in terms of half-reaction: $\ce{2H+ + 2e- -> H2}$.
The two full reactions (net ionic equations anyway) are:
$$\begin{align} &\ce{2Fe^2+ + 2H+ -> 2Fe^3+ + H2} &&E^\circ=\pu{-0.77 V}&&&\Delta G^\circ=\pu{+74kJ/mol}\\ &\ce{2Cr^2+ + 2H+ -> 2Cr^3+ + H2} &&E^\circ=\pu{+0.44 V}&&&\Delta G^\circ = \pu{-42 kJ/mol} \end{align}$$
And the winner is chromium, by a whopping $\pu{116 kJ/mol}$. In fact, the data might show that $\ce{Fe^2+}$ is more stable than $\ce{Fe^3+}$, while $\ce{Cr^2+}$ is less stable than $\ce{Cr^3+}$.
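The quoted free energies can be sanity-checked with $\Delta G^\circ = -nFE^\circ$ taken per mole of electrons ($n = 1$); that convention is my assumption here, but it reproduces the numbers above:

```python
# Sanity check of dG = -nFE, per mole of electrons transferred (n = 1).
F = 96485.0  # Faraday constant, C/mol

dG_Fe = -1 * F * (-0.77) / 1000.0  # kJ/mol for Fe2+ -> Fe3+ + e-
dG_Cr = -1 * F * (+0.44) / 1000.0  # kJ/mol for Cr2+ -> Cr3+ + e-

print(round(dG_Fe, 1), round(dG_Cr, 1))  # 74.3 -42.5
```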
In water both $\ce{Cr^{3+}}$ and $\ce{Fe^{3+}}$ are in octahedral configurations. This means that $d$-orbitals become unequal in their energy; specifically 3 of them are lower and 2 are higher. This is the premise of 'crystal field theory'.
Depending on the nearest neighborhood, the splitting may be strong enough to force electron pairing, or it may not. For water it usually isn't. This means that the 'first half-filled shell' here is 3 electrons: $\ce{V^{2+}}$, $\ce{Cr^{3+}}$ or $\ce{Mn^{4+}}$. The second half-filled shell in low field (water ligand) is $\ce{Mn^{2+}}$ or $\ce{Fe^{3+}}$ (5 $d$-orbitals), both surprisingly stable. In strong field (say, $\ce{CN^-}$ ligand), it is 6 electrons (3 doubly occupied lower orbitals), like in $\ce{[Co(NH3)_{6}]^{3+}}$ and $\ce{[Fe(CN)_{6}]^{4-}}$. The next "subshell" is 8 electrons (6 on the 3 lower orbitals and 2 on the higher ones), like in $\ce{Ni^{2+}}$. Depending on the strength of the ligands, an unusual square planar coordination may become preferable, with the two higher orbitals also splitting. It is typical for the $\ce{Ni}$ subgroup in the +2 oxidation state and the $\ce{Cu}$ subgroup in the +3 oxidation state.
TL;DR : invest some time into reading about crystal field theory.
It is quite easy: a good reducing agent is one which can be oxidised easily. For:
$\ce{Cr^2+ -> Cr^3+}$, we have a $\mathrm d^3$ configuration for $\ce{Cr^3+}$, which we can say is stable, as all three electrons are in the $\mathrm{t_{2g}}$ level; hence it is half-filled. On the other hand, for:
$\ce{Fe^2+ -> Fe^3+}$, we have a $\mathrm d^5$ configuration for $\ce{Fe^3+}$, which is again stable due to half-filled d orbitals. So, which is more stable? We can figure it out from the $\ce{M^3+/M^2+}$ standard electrode potentials: for $\ce{Cr^2+ -> Cr^3+}$ it is $\pu{-0.41 V}$, and for $\ce{Fe^2+ -> Fe^3+}$ it is $\pu{+0.77 V}$.
The positive one indicates that it is difficult to remove electron from $\ce{Fe^2+}$ and negative one indicates that it is easier to remove electron from $\ce{Cr^2+}$. Hence, $\ce{Cr^2+}$ is a stronger reducing agent.
$\ce{Cr^2+}$ is a better reducing agent, as it attains a $\ce d^3$ configuration on losing an electron, while $\ce{Fe^2+}$ attains a $\ce d^5$ configuration. In an aqueous medium, $\ce d^3$ (half-filled $\ce{t_{2g}}$ orbital) is more stable than $\ce d^5$.
The orthogonal group, consisting of all proper and improper rotations, is generated by reflections. Every proper rotation is the composition of two reflections, a special case of the Cartan–Dieudonné theorem.
Yeah it does seem unreasonable to expect a finite presentation
Let (V, b) be an n-dimensional, non-degenerate symmetric bilinear space over a field with characteristic not equal to 2. Then, every element of the orthogonal group O(V, b) is a composition of at most n reflections.
How does the Evolute of an Involute of a curve $\Gamma$ is $\Gamma$ itself?Definition from wiki:-The evolute of a curve is the locus of all its centres of curvature. That is to say that when the centre of curvature of each point on a curve is drawn, the resultant shape will be the evolute of th...
Player $A$ places $6$ bishops wherever he/she wants on the chessboard with infinite number of rows and columns. Player $B$ places one knight wherever he/she wants.Then $A$ makes a move, then $B$, and so on...The goal of $A$ is to checkmate $B$, that is, to attack knight of $B$ with bishop in ...
Player $A$ chooses two queens and an arbitrary finite number of bishops on $\infty \times \infty$ chessboard and places them wherever he/she wants. Then player $B$ chooses one knight and places him wherever he/she wants (but of course, knight cannot be placed on the fields which are under attack ...
The invariant formula for the exterior derivative: why would someone come up with something like that? It looks really similar to the formula for the covariant derivative of a tensor along a vector field, but otherwise I don't see why it would be something natural to come up with. The only place I have used it is in deriving the Poisson bracket of two one-forms.
This means starting at a point $p$, flowing along $X$ for time $\sqrt{t}$, then along $Y$ for time $\sqrt{t}$, then backwards along $X$ for the same time, backwards along $Y$ for the same time, leaves you at a place different from $p$. And up to second order, flowing along $[X, Y]$ for time $t$ from $p$ will lead you to that place.
Think of evaluating $\omega$ on the edges of the truncated square and doing a signed sum of the values. You'll get the value of $\omega$ on the two $X$ edges, whose difference (after taking a limit) is $Y\omega(X)$, the value of $\omega$ on the two $Y$ edges, whose difference (again after taking a limit) is $X \omega(Y)$, and on the truncation edge it's $\omega([X, Y])$
Gently taking care of the signs, the total value is $X\omega(Y) - Y\omega(X) - \omega([X, Y])$
So value of $d\omega$ on the Lie square spanned by $X$ and $Y$ = signed sum of values of $\omega$ on the boundary of the Lie square spanned by $X$ and $Y$
Infinitesimal version of $\int_M d\omega = \int_{\partial M} \omega$
But I believe you can actually write down a proof like this, by doing $\int_{I^2} d\omega = \int_{\partial I^2} \omega$ where $I$ is the little truncated square I described and taking $\text{vol}(I) \to 0$
For the general case $d\omega(X_1, \cdots, X_{n+1}) = \sum_i (-1)^{i+1} X_i \omega(X_1, \cdots, \hat{X_i}, \cdots, X_{n+1}) + \sum_{i < j} (-1)^{i+j} \omega([X_i, X_j], X_1, \cdots, \hat{X_i}, \cdots, \hat{X_j}, \cdots, X_{n+1})$ says the same thing, but on a big truncated Lie cube
Let's do bullshit generality. Let $E$ be a vector bundle on $M$ and $\nabla$ a connection on $E$. Remember that this means it's an $\Bbb R$-bilinear operator $\nabla : \Gamma(TM) \times \Gamma(E) \to \Gamma(E)$, denoted $(X, s) \mapsto \nabla_X s$, which is (a) $C^\infty(M)$-linear in the first factor and (b) $C^\infty(M)$-Leibniz in the second factor.
Explicitly, (b) is $\nabla_X (fs) = X(f)s + f\nabla_X s$
You can verify that this in particular means it's pointwise defined in the first factor. This means that to evaluate $\nabla_X s(p)$ you only need $X(p) \in T_p M$, not the full vector field. That makes sense, right? You can take the directional derivative of a function at a point in the direction of a single vector at that point.
Suppose that $G$ is a group acting freely on a tree $T$ via graph automorphisms; let $T'$ be the associated spanning tree. Call an edge $e = \{u,v\}$ in $T$ essential if $e$ doesn't belong to $T'$. Note: it is easy to prove that if $u \in T'$, then $v \notin T'$ (this follows from uniqueness of paths between vertices).
Now, let $e = \{u,v\}$ be an essential edge with $u \in T'$. I am reading through a proof and the author claims that there is a $g \in G$ such that $g \cdot v \in T'$. My thought was to try to show that $\operatorname{orb}(u) \neq \operatorname{orb}(v)$ and then use the fact that the spanning tree contains exactly one vertex from each orbit. But I can't seem to prove that $\operatorname{orb}(u) \neq \operatorname{orb}(v)$...
@Albas Right, more or less. So it defines an operator $d^\nabla : \Gamma(E) \to \Gamma(E \otimes T^*M)$, which takes a section $s$ of $E$ and spits out $d^\nabla(s)$ which is a section of $E \otimes T^*M$, which is the same as a bundle homomorphism $TM \to E$ ($V \otimes W^* \cong \text{Hom}(W, V)$ for vector spaces). So what is this homomorphism $d^\nabla(s) : TM \to E$? Just $d^\nabla(s)(X) = \nabla_X s$.
This might be complicated to grok first but basically think of it as currying. Making a billinear map a linear one, like in linear algebra.
You can replace $E \otimes T^*M$ just by the Hom-bundle $\text{Hom}(TM, E)$ in your head if you want. Nothing is lost.
I'll use the latter notation consistently if that's what you're comfortable with
(Technical point: Note how contracting $X$ in $\nabla_X s$ made a bundle-homomorphsm $TM \to E$ but contracting $s$ in $\nabla s$ only gave as a map $\Gamma(E) \to \Gamma(\text{Hom}(TM, E))$ at the level of space of sections, not a bundle-homomorphism $E \to \text{Hom}(TM, E)$. This is because $\nabla_X s$ is pointwise defined on $X$ and not $s$)
@Albas So this fella is called the exterior covariant derivative. Denote $\Omega^0(M; E) = \Gamma(E)$ ($0$-forms with values on $E$ aka functions on $M$ with values on $E$ aka sections of $E \to M$), denote $\Omega^1(M; E) = \Gamma(\text{Hom}(TM, E))$ ($1$-forms with values on $E$ aka bundle-homs $TM \to E$)
Then this is the 0 level exterior derivative $d : \Omega^0(M; E) \to \Omega^1(M; E)$. That's what a connection is, a 0-level exterior derivative of a bundle-valued theory of differential forms
So what's $\Omega^k(M; E)$ for higher $k$? Define it as $\Omega^k(M; E) = \Gamma(\text{Hom}(TM^{\wedge k}, E))$. Just space of alternating multilinear bundle homomorphisms $TM \times \cdots \times TM \to E$.
Note how if $E$ is the trivial bundle $M \times \Bbb R$ of rank 1, then $\Omega^k(M; E) = \Omega^k(M)$, the usual space of differential forms.
That's what taking derivative of a section of $E$ wrt a vector field on $M$ means, taking the connection
Alright so to verify that $d^2 \neq 0$ indeed, let's just do the computation: $(d^2s)(X, Y) = d(ds)(X, Y) = \nabla_X ds(Y) - \nabla_Y ds(X) - ds([X, Y]) = \nabla_X \nabla_Y s - \nabla_Y \nabla_X s - \nabla_{[X, Y]} s$
Voila, Riemann curvature tensor
Well, that's what it is called when $E = TM$ so that $s = Z$ is some vector field on $M$. This is the bundle curvature
Here's a point. What is $d\omega$ for $\omega \in \Omega^k(M; E)$ "really"? What would, for example, having $d\omega = 0$ mean?
Well, the point is, $d : \Omega^k(M; E) \to \Omega^{k+1}(M; E)$ is a connection operator on $E$-valued $k$-forms on $M$. So $d\omega = 0$ would mean that the form $\omega$ is parallel with respect to the connection $\nabla$.
Let $V$ be a finite dimensional real vector space, $q$ a quadratic form on $V$ and $Cl(V,q)$ the associated Clifford algebra, with the $\Bbb Z/2\Bbb Z$-grading $Cl(V,q)=Cl(V,q)^0\oplus Cl(V,q)^1$. We define $P(V,q)$ as the group of elements of $Cl(V,q)$ with $q(v)\neq 0$ (under the identification $V\hookrightarrow Cl(V,q)$) and $\mathrm{Pin}(V)$ as the subgroup of $P(V,q)$ with $q(v)=\pm 1$. We define $\mathrm{Spin}(V)$ as $\mathrm{Pin}(V)\cap Cl(V,q)^0$.
Is $\mathrm{Spin}(V)$ the set of elements with $q(v)=1$?
Torsion only makes sense on the tangent bundle, so take $E = TM$ from the start. Consider the identity bundle homomorphism $TM \to TM$... you can think of this as an element of $\Omega^1(M; TM)$. This is called the "soldering form", comes tautologically when you work with the tangent bundle.
You'll also see this thing appearing in symplectic geometry. I think they call it the tautological 1-form
(The cotangent bundle is naturally a symplectic manifold)
Yeah
So let's give this guy a name, $\theta \in \Omega^1(M; TM)$. Its exterior covariant derivative $d\theta$ is a $TM$-valued $2$-form on $M$, explicitly $d\theta(X, Y) = \nabla_X \theta(Y) - \nabla_Y \theta(X) - \theta([X, Y])$.
But $\theta$ is the identity operator, so this is $\nabla_X Y - \nabla_Y X - [X, Y]$. Torsion tensor!!
So I was reading this thing called the Poisson bracket. With the poisson bracket you can give the space of all smooth functions on a symplectic manifold a Lie algebra structure. And then you can show that a symplectomorphism must also preserve the Poisson structure. I would like to calculate the Poisson Lie algebra for something like $S^2$. Something cool might pop up
If someone has the time to quickly check my result, I would appreciate it. Let $X_{1},\dots,X_{n} \sim \Gamma(2,\frac{2}{\lambda})$. Is $\mathbb{E}\left[\frac{(\frac{X_{1}+\dots+X_{n}}{n})^2}{2}\right] = \frac{1}{n^2\lambda^2}+\frac{2}{\lambda^2}$?
Uh apparently there are metrizable Baire spaces $X$ such that $X^2$ not only is not Baire, but it has a countable family $D_\alpha$ of dense open sets such that $\bigcap_{\alpha<\omega}D_\alpha$ is empty
I am trying to show that if $d$ divides $24$, then $S_4$ has a subgroup of order $d$. The only proof I could come up with is a brute force proof. It actually wasn't too bad. E.g., orders $2$, $3$, and $4$ are easy (just take the subgroup generated by a 2-cycle, 3-cycle, and 4-cycle, respectively); $d=8$ is Sylow's theorem; for $d=12$, take $A_4$; for $d=24$, take $S_4$. The only case that presented a semblance of trouble was $d=6$. But the group generated by $(1,2)$ and $(1,2,3)$ does the job.
My only quibble with this solution is that it doesn't seem very elegant. Is there a better way?
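For what it's worth, the brute-force list can be verified mechanically. The short script below closes explicit generating sets under composition and checks the orders; the generating sets are my own choices, matching the ones mentioned above where given:

```python
# Check that S4 has a subgroup of every order d dividing 24 by closing
# explicit generating sets. Permutations are tuples in array form on
# {0, 1, 2, 3}: p[i] is the image of i.

def compose(p, q):
    # (p o q)(i) = p[q[i]]
    return tuple(p[i] for i in q)

def generate(gens):
    G = {(0, 1, 2, 3)}                      # start from the identity
    while True:
        new = {compose(g, h) for g in G for h in gens} - G
        if not new:
            return G
        G |= new

t   = (1, 0, 2, 3)   # (0 1)
c3  = (1, 2, 0, 3)   # (0 1 2)
c3b = (0, 2, 3, 1)   # (1 2 3)
a   = (1, 0, 3, 2)   # (0 1)(2 3)
b   = (2, 3, 0, 1)   # (0 2)(1 3)
c4  = (1, 2, 3, 0)   # (0 1 2 3)

gens_by_order = {
    1: [], 2: [t], 3: [c3],
    4: [a, b],       # Klein four-group
    6: [t, c3],      # an S3
    8: [a, b, t],    # a 2-Sylow (dihedral of order 8)
    12: [c3, c3b],   # A4
    24: [t, c4],     # S4 itself
}

orders = {d: len(generate(g)) for d, g in gens_by_order.items()}
print(orders)  # {1: 1, 2: 2, 3: 3, 4: 4, 6: 6, 8: 8, 12: 12, 24: 24}
```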
In fact, the action of $S_4$ on these three 2-Sylows by conjugation gives a surjective homomorphism $S_4 \to S_3$ whose kernel is a $V_4$. This $V_4$ can be thought of as the sub-symmetries of the cube which act on the three pairs of faces {{top, bottom}, {right, left}, {front, back}}.
Clearly these are 180 degree rotations along the $x$, $y$ and the $z$-axis. But composing the 180 rotation along the $x$ with a 180 rotation along the $y$ gives you a 180 rotation along the $z$, indicative of the $ab = c$ relation in Klein's 4-group
Everything about $S_4$ is encoded in the cube, in a way
The same can be said of $A_5$ and the dodecahedron, say |
Help:Formula Contents 1 TeX 2 General 3 Functions, symbols, special characters 4 Subscripts, superscripts, integrals 5 Fractions, matrices, multilines 6 Fonts 7 Parenthesizing big expressions, brackets, bars 8 Spacing 9 Align with normal text flow 10 Forced PNG rendering 11 Examples 12 See also 13 External Links 14 Wikinews-specific content and links to other help pages TeX
MediaWiki uses TeX markup for mathematical formulae. It generates either PNG images or simple HTML markup, depending on user preferences and the complexity of the expression. In the future, as more browsers become smarter, it will be able to generate enhanced HTML or even MathML in many cases.
Math markup goes inside <math> ... </math>. The edit toolbar has a button for this.
The PNG images are black on white (not transparent). These colors, as well as font sizes and types, are independent of browser settings or css. Font sizes and types will often deviate from what HTML renders. The css selector of the images is img.tex.
In the case of a non-white page background, the white background of the formula effectively highlights it, which can be an advantage or a disadvantage.
One may want to avoid using TeX markup as part of a line of regular text, as the formulae don't align properly and the font size, as said, usually does not match.
The alt attribute of the TeX images (the text that shows up in the hover box) is the wikitext that produced them, excluding the <math> and </math>.
General
Spaces and newlines are ignored. Apart from function and operator names, as is customary in mathematics for variables, letters are in italics; digits are not. For other text, to avoid being rendered in italics like variables, use
\mbox:
<math>\mbox{abc}</math> gives
Line breaks help keep the wikitext clear, for instance, a line break after each term or matrix row.
Functions, symbols, special characters
For producing special characters without math tags, see Help:Special characters.
Comparison:
α gives α, <math>\alpha</math> gives ("&" and ";" vs. "\"; in this case the same code word, "alpha") √2 gives √2, <math>\sqrt{2}</math> gives (the same difference as above, but also another code word, "radic" vs. "sqrt"; in TeX, braces) √(1- e²) gives √(1- e²), <math>\sqrt{1-e^2}</math> gives (parentheses vs. braces, "''e''" vs. "e", "²" vs. "^2")
Feature Syntax How it looks rendered std. functions (good) \sin x + \ln y +\operatorname{sgn} z std. functions (wrong) sin x + ln y + sgn z Modular arithm. s_k \equiv 0 \pmod{m} Derivatives \nabla \partial x dx \dot x \ddot y Sets \forall x \not\in \empty \varnothing \subseteq A \cap \bigcap B \cup \bigcup \exists \{x,y\} \times C Logic p \land \bar{q} \to p\lor \lnot q Root \sqrt{2}\approx 1.4 \sqrt[n]{x} Relations \sim \simeq \cong \le \ge \equiv \not\equiv \approx \ne \propto Geometric \triangle \angle \perp \| 45^\circ Arrows
\leftarrow \rightarrow \leftrightarrow \Leftarrow \Rightarrow \Leftrightarrow Special \oplus \otimes \pm \mp \hbar \wr \dagger \ddagger \star * \ldots \circ \cdot \times \bullet \infty \vdash \models Lowercase \mathcal has some extras \mathcal {45abcdenpqstuvwx} Subscripts, superscripts, integrals
Feature Syntax How it looks rendered Superscript a^2 Subscript a_2 Grouping a^{2+2} a_{i,j} Combining sub & super x_2^3 Preceding sub & super {}_1^2\!X_3^4 Derivative (good) x' Derivative (wrong in HTML) x^\prime Derivative (wrong in PNG) x\prime Derivative dots \dot{x}, \ddot{x} Underlines & overlines \hat a \bar b \vec c \widehat {d e f} \overline {g h i} \underline {j k l} Sum \sum_{k=1}^N k^2 Product \prod_{i=1}^N x_i Limit \lim_{n \to \infty}x_n Integral \int_{-N}^{N} e^x\, dx Path Integral \oint_{C} x^3\, dx + 4y^2\, dy Fractions, matrices, multilines
Feature Syntax How it looks rendered Fractions \frac{2}{4} or {2 \over 4} Binomial coefficients {n \choose k} Small Fractions \begin{matrix} \frac{2}{4} \end{matrix} Matrices \begin{matrix} x & y \\ z & v \end{matrix} \begin{vmatrix} x & y \\ z & v \end{vmatrix} \begin{Vmatrix} x & y \\ z & v \end{Vmatrix} \begin{bmatrix} 0 & \cdots & 0 \\ \vdots & \ddots & \vdots \\ 0 & \cdots & 0 \end{bmatrix}
\begin{Bmatrix} x & y \\ z & v \end{Bmatrix} \begin{pmatrix} x & y \\ z & v \end{pmatrix} Case distinctions f(n) = \begin{cases} n/2, & \mbox{if }n\mbox{ is even} \\ 3n+1, & \mbox{if }n\mbox{ is odd} \end{cases} Multiline equations \begin{matrix}f(n+1) & = & (n+1)^2 \\ \ & = & n^2 + 2n + 1 \end{matrix} Alternative multiline equations (using tables) {| |- |<math>f(n+1)</math> |<math>=(n+1)^2</math> |- | |<math>=n^2 + 2n + 1</math> |} Fonts
Feature Syntax How it looks rendered Greek letters \alpha \beta \gamma \Gamma \phi \Phi \Psi\ \tau \Omega Blackboard bold x\in\mathbb{R}\sub\mathbb{C} boldface (vectors) \mathbf{x}\cdot\mathbf{y} = 0 boldface (greek) \boldsymbol{\alpha} + \boldsymbol{\beta} + \boldsymbol{\gamma} Fraktur typeface \mathfrak{a} \mathfrak{B} Script \mathcal{ABC} Hebrew \aleph \beth \gimel \daleth non-italicised characters \mbox{abc} mixed italics (bad) \mbox{if} n \mbox{is even} mixed italics (good) \mbox{if }n\mbox{ is even} Parenthesizing big expressions, brackets, bars
Feature Syntax How it looks rendered Not good ( \frac{1}{2} ) Better \left ( \frac{1}{2} \right )
You can use various delimiters with \left and \right:
Feature Syntax How it looks rendered Parentheses \left ( A \right ) Brackets \left [ A \right ] Braces \left \{ A \right \} Angle brackets \left \langle A \right \rangle Bars and double bars \left | A \right | and \left \| B \right \|
Delimiters can be mixed,
\left [ 0,1 \right )
Use \left. and \right. if you don't want a delimiter to appear:
\left . \frac{A}{B} \right \} \to X Floor and ceiling functions: \lfloor x \rfloor = \lceil y \rceil Spacing
Note that TeX handles most spacing automatically, but you may sometimes want manual control.
Feature Syntax How it looks rendered double quad space a \qquad b quad space a \quad b text space a\ b text space without PNG conversion a \mbox{ } b large space a\;b medium space a\>b [not supported] small space a\,b no space ab negative space a\!b Align with normal text flow
Due to the default css
img.tex { vertical-align: middle; }
an inline expression like should look good.
Forced PNG rendering
To force the formula to render as PNG, add \, (small space) at the end of the formula (where it is not rendered). This will force PNG if the user is in "HTML if simple" mode, but not for "HTML if possible" mode (math rendering settings in preferences).
You can also use \,\! (small space and negative space, which cancel out) anywhere inside the math tags. This does force PNG even in "HTML if possible" mode, unlike \,.
This could be useful to keep the rendering of formulae in a proof consistent, for example, or to fix formulae that render incorrectly in HTML (at one time, a^{2+2} rendered with an extra underscore), or to demonstrate how something is rendered when it would normally show up as HTML (as in the examples above).
For instance:
Syntax How it looks rendered a^{c+2} a^{c+2} \, a^{\,\!c+2} a^{b^{c+2}} (WRONG with option "HTML if possible or else PNG"!) a^{b^{c+2}} \, (WRONG with option "HTML if possible or else PNG"!) a^{b^{c+2}}\approx 5 (due to "" correctly displayed, no code "\,\!" needed) a^{b^{\,\!c+2}} \int_{-N}^{N} e^x\, dx \int_{-N}^{N} e^x\, dx \, \int_{-N}^{N} e^x\, dx \,\! This has been tested with most of the formulae on this page, and seems to work perfectly.
You might want to include a comment in the HTML so people don't "correct" the formula by removing it:
<!-- The \,\! is to keep the formula rendered as PNG instead of HTML. Please don't remove it.--> Examples
See also Proposed GNU LilyPond support External Links A LaTeX tutorial. http://www.maths.tcd.ie/~dwilkins/LaTeXPrimer/ A PDF document introducing TeX -- see page 39 onwards for a good introduction to the maths side of things: http://www.ctan.org/tex-archive/info/gentle/gentle.pdf A PDF document introducing LaTeX -- skip to page 59 for the math section. See page 72 for a complete reference list of symbols included in LaTeX and AMS-LaTeX. http://www.ctan.org/tex-archive/info/lshort/english/lshort.pdf TeX reference card: http://www.csit.fsu.edu/~mimi/tex/tex-refcard-letter.pdf Can't remember but I put it here so it must be ok ;-) http://www.ams.org/tex/amslatex.html A set of public domain fixed-size math symbol bitmaps: http://us.metamath.org/symbols/symbols.html
Let's suppose we have two planes given by the parametric equations
$$\begin{align} &\eta_1~:~\vec{x}_1~=~\vec{o}_1+\vec{R}_{11}t_{11}+\vec{R}_{12}t_{12}\\ &\eta_2~:~\vec{x}_2~=~\vec{o}_2+\vec{R}_{21}t_{21}+\vec{R}_{22}t_{22} \end{align}$$
where all occurring vectors are elements of the Euclidean space $\mathbb{R}^3$. We can also describe these planes by the following equations
$$\begin{align} &\eta_1~:~(\vec{x}_1-\vec{o}_1)\cdot(\vec{R}_{11}\times\vec{R}_{12})=0\\ &\eta_2~:~(\vec{x}_2-\vec{o}_2)\cdot(\vec{R}_{21}\times\vec{R}_{22})=0 \end{align}$$
The main target I attempt to fulfill is to transform $\eta_1$ to $\eta_2$. I want to do this in two steps: $(1)$ making both planes parallel, $(2)$ shifting $\eta_1$ to be equal to $\eta_2$.
I tried an approach using the normal vectors of the two planes, i.e. solving the equation
$$\textbf{T}\vec{n}_1=\vec{n}_2$$
for $\textbf{T}\in \mathbb{R}^{3\times3}$. To be honest, I have no clue from here on, and I guess this is not even the right approach. What I can say about the matrix-vector equation is that the eigenvalues, or at least one of the eigenvalues, of the matrix $\textbf{T}$ has to be $1$, and therefore $\vec{n}_2$ would be an eigenvector of $\textbf{T}$. I am familiar with the concept of rotation matrices around the different axes by an angle $\alpha$, which are given by
$$\begin{align} \small{\textbf{T}_x=\begin{pmatrix}1&0&0\\0&\cos(\alpha)&-\sin(\alpha)\\0&\sin(\alpha)&\cos(\alpha)\end{pmatrix}~ \textbf{T}_y=\begin{pmatrix}\cos(\alpha)&0&-\sin(\alpha)\\0&1&0\\\sin(\alpha)&0&\cos(\alpha)\end{pmatrix}~ \textbf{T}_z=\begin{pmatrix}\cos(\alpha)&-\sin(\alpha)&0\\\sin(\alpha)&\cos(\alpha)&0\\0&0&1\end{pmatrix}} \end{align}$$
but since I do not know the angle $\alpha$ this does not help me at all. Another attempt would be to compute all the angles separately and then just multiply the matrices $\textbf{T}_x$, $\textbf{T}_y$ and $\textbf{T}_z$, but since this does not seem that efficient I am not sure about it.
Mainly I want to know if there is a way to construct $\textbf{T}$ out of $\vec{n}_1$ and $\vec{n}_2$, or at least out of the given definitions of $\eta_1$ and $\eta_2$. If it is not possible, could you please explain to me why this is so? Furthermore, could someone maybe provide an example transform of a plane $\eta_1$ to a plane $\eta_2$ using a general algorithm, if there exists such a thing.
Thank you in advance. |
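One general construction, sketched below in Python: take $\vec{n}_1\times\vec{n}_2$ as the rotation axis and build $\textbf{T}$ with Rodrigues' rotation formula, with no explicit angle needed. The example normals are made-up values, not taken from the question:

```python
import numpy as np

def rotation_between(n1, n2):
    """Rotation matrix T with T @ (n1/|n1|) = n2/|n2|."""
    n1 = n1 / np.linalg.norm(n1)
    n2 = n2 / np.linalg.norm(n2)
    v = np.cross(n1, n2)            # rotation axis, scaled by sin(alpha)
    s = np.linalg.norm(v)           # sin(alpha)
    c = np.dot(n1, n2)              # cos(alpha)
    if s < 1e-12:                   # normals (anti-)parallel
        if c > 0:
            return np.eye(3)
        # anti-parallel: 180-degree turn about any axis perpendicular to n1
        a = np.array([0.0, 1.0, 0.0]) if abs(n1[0]) > 0.9 else np.array([1.0, 0.0, 0.0])
        u = np.cross(n1, a)
        u /= np.linalg.norm(u)
        return 2.0 * np.outer(u, u) - np.eye(3)
    K = np.array([[0.0, -v[2], v[1]],
                  [v[2], 0.0, -v[0]],
                  [-v[1], v[0], 0.0]])
    # Rodrigues' formula, written with the unnormalised axis v:
    return np.eye(3) + K + K @ K * ((1 - c) / s**2)

n1 = np.array([0.0, 0.0, 1.0])      # example normal of eta_1
n2 = np.array([1.0, 1.0, 0.0])      # example normal of eta_2
T = rotation_between(n1, n2)
print(T @ n1)                       # ~ n2 / |n2|
```

After rotating, the planes are parallel, and any translation taking one rotated point onto $\eta_2$ (for example by $\vec{o}_2 - \textbf{T}\vec{o}_1$) completes step $(2)$.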
I am having a difficulty setting up the proof of the fact that two basis of a vector space have the same cardinality for the infinite-dimentional case. In particular, let $V$ be a vector space over a field $K$ and let $\left\{v_i\right\}_{i \in I}$ be a basis where $I$ is infinite countable. Let $\left\{u_j\right\}_{j \in J}$ be another basis. Then $J$ must be infinite countable as well. Any ideas on how to approach the proof?
In spirit, the proof is very similar to the proof that two finite bases must have the same cardinality: express each vector in one basis in terms of the vectors in the other basis, and leverage that to show the cardinalities must be equal, by using the fact that the "other" basis must span and be linearly independent.
Suppose that $\{v_i\}_{i\in I}$ and $\{u_j\}_{j\in J}$ are two infinite bases for $V$.
For each $i\in I$, $v_i$ is in the linear span of $\{u_j\}_{j\in J}$. Therefore, there exists a
finite subset $J_i\subseteq J$ such that $v_i$ is a linear combination of the vectors $\{u_j\}_{j\in J_i}$ (since a linear combination involves only finitely many vectors with nonzero coefficient).
Therefore, $V=\mathrm{span}(\{v_i\}_{i\in I}) \subseteq \mathrm{span}\{u_j\}_{j\in \cup J_i}$. Since no proper subset of $\{u_j\}_{j\in J}$ can span $V$, it follows that $J = \mathop{\cup}\limits_{i\in I}J_i$.
Now use this to show that $|J|\leq |I|$, and a symmetric argument to show that $|I|\leq |J|$.
Note. The argument I have in mind in the last line involves some (simple) cardinal arithmetic; note that for it to work in full generality, at least some form of the Axiom of Choice may be needed.
Once you have the necessary facts about infinite sets, the argument is very much like that used in the finite-dimensional case. The two crucial pieces of information are (1) that if $I$ is an infinite set of cardinality $\kappa$, say, then $I$ has $\kappa$ finite subsets, and (2) that if $|J|>\kappa$, and $J$ is expressed as the union of $\kappa$ subsets, then at least one of those subsets must be infinite.
Let $B_1 = \{v_i:i\in I \}$ and $B_2 = \{u_j:j \in J \}$, and suppose that $|J|>|I| = \kappa$. Each $u_j \in B_2$ can be written as a linear combination of some finite subset of $B_1$, say $u_j = \sum\limits_{i \in F_j}k_{ji}v_i$, where $F_j$ is a finite subset of $I$. For each finite $F \subseteq I$ let $J_F = \{j \in J:F_j = F\}$; clearly $J$ is the union of these sets $J_F$. But by (1) above $I$ has only $\kappa$ finite subsets, and $|J|>\kappa$, so by (2) above there must be some finite $F \subseteq I$ such that $J_F$ is infinite.
To simplify the notation, let $F = \{i_1,i_2,\dots,i_n\}$, and for $\ell=1,2,\dots,n$ let $v_\ell = v_{i_\ell}$; then every vector $u_j$ with $j \in J_F$ is a linear combination of the vectors $v_1,v_2,\dots,v_n$. In other words, $\{u_j:j \in J_F\} \subseteq \operatorname{span}\{v_1,v_2,\dots,v_n\}$, and of course $\{u_j:j \in J_F\}$, being a subset of the basis $B_2$, is linearly independent. But $\operatorname{span}\{v_1,v_2,\dots,v_n\}$ is of dimension $n$ over $K$, so any set of more than $n$ vectors in $\operatorname{span}\{v_1,v_2,\dots,v_n\}$ must be linearly
dependent, and we have a contradiction. It follows that we must have $|J| \le |I|$. By symmetry (or by the same argument with the rôles of $I$ and $J$ interchanged), $|I| \le |J|$, and hence $|I|=|J|$. |
A Raid Boss or Boss Pokémon is an extremely powerful Pokémon that has very high CP. It hatches from an egg which appears atop a Gym upon the beginning of the Raid Battle. A countdown will display the time until the egg hatches and the battle begins. Details
Upon using a Raid Pass to join the battle, the Trainer and up to 19 other Trainers work together to defeat the Raid Boss. If a Raid Boss is successfully defeated within the three-minute time limit (five minutes for Legendary Raids), those Trainers have the chance to catch a decently powerful Pokémon of their own. [1] [2]
The captured Pokémon does not necessarily have the same moves as the Raid Boss.
A Raid Boss can be battled as many times as needed until it is either defeated or the 45-minute time limit expires.
List of Raid Bosses
There is a limited number of Pokémon that can be possible Raid Bosses at any one time. The pool of available Raid Boss Pokémon has been changing since the feature's release. Additionally, in past events, there have been exclusive Raid Bosses that cannot normally be encountered in Raid Battles.
Raid Boss CP formula
The formula is identical to the traditional CP formula, except that the stamina and CP-scalar portion in the divisor is replaced by a tier scalar which represents the Raid Boss's HP, and the IVs for attack and defense are fixed at 15 (the maximum IV value).
$ BCP = \frac{(\text{Base Atk} + 15) ~ \times ~ \sqrt{\text{Base Def} + 15} ~ \times ~ \sqrt{\text{TierScalar}}}{10} $
where TierScalar is:
- Tier 1: 600
- Tier 2: 1,800
- Tier 3: 3,600
- Tier 4: 9,000
- Tier 5: 15,000
Trivia
- The Raid Boss icon is based on the face of Rhydon, the first Pokémon designed and programmed into the original games. [3] Various Rhydon statues can also be found throughout the game.
- On November 4th, 2017, the Raid Bosses were rotated for the first time. [4]
- During the Pokémon GO Park and Pokémon GO Stadium events in Yokohama, several event-exclusive Raid Boss Pokémon were available for the first time in the game.
- Before January 31st, 2019, the stamina of Tier 3 to 5 Raid Bosses was 3,000, 7,500 and 12,500 respectively. [5] [6]
References
1. ↑ Raid Battles and New Gym Features are Coming!. Pokémon GO Live. Retrieved on 2017-06-19.
2. ↑ Raid Battles. Niantic Support. Retrieved on 2017-06-19.
3. ↑ Bulbasaur Isn't Neccesarily The First Pokémon. Kotaku. Retrieved on 2017-06-20.
4. ↑ New Raid Bosses Collection Thread. /r/TheSilphRoad. Retrieved on 2017-11-04.
5. ↑ Community Note: Rebalancing in Raids and Trainer Battles. Pokémon GO Live. Retrieved on 2019-06-10.
6. ↑ New Raid Boss Stamina values. /r/TheSilphRoad. Retrieved on 2019-07-07.
The variation $\delta F$ for any field (or degree of freedom) $F$, given an infinitesimal transformation, is always calculated as the commutator$$ \delta F = [ \bar\epsilon Q, F ] $$where $\bar \epsilon$ is a parameter ("angle" or "shift" or some generalization) of the transformation and $Q$ is the generator. (Those may be replaced by other letters.)
This is the usual Lie-algebra-based way how operators transform. The finite (but very close to identity) transformation may be said to be$$ U = \exp(\bar\epsilon Q) = 1 + \bar\epsilon Q + o(\epsilon) $$and the difference of the conjugated $F$ from the original one is the variation$$\delta F = U F U^{-1} - F $$So these totally general rules that are already taught in undergraduate quantum mechanics etc. are just applied to the generator $Q$, the infinitesimal "superangle" $\bar\epsilon$, and operators like $\theta^A$, $\sigma$, and $Y$...
Note that the product $\bar\epsilon Q$ is "bosonic", so its commutators, and not anticommutators, enter the formulae. However, they may be decomposed to anticommutators.
This explains the first "equation" in (4.21) and (4.22). The following ones are the actual calculations, using (4.20). The second term in $Q$ according to (4.20), the one which contains $\partial_\alpha$, doesn't contribute anything to (4.21) because $\theta^A$ and $\sigma^\alpha$ are independent coordinates of the superspace (super world sheet), so the partial derivative of one with respect to the other vanishes.
Analogously, the first term vanishes and only the second term contributes in (4.22).
In (4.23), the expression $QY^\mu$ simply means the same as $[Q,Y^\mu]$: it is the differential operators in $Q$, with all the right coefficients, acting on $Y^\mu$. It is similar to differentiating functions of positions in ordinary quantum mechanics. Imagine that you have a function $V(x)$ of the operator $x$. Then you may write $V'(x)$, a different, differentiated function of the same operator $x$, as $i/\hbar$ times $[p,V(x)]$. The commutator of $p$ (the $x$-derivative) with the operator does the differentiating of the functions. On state vectors, derivatives act simply from the left, but the analogous action on the operators has to be written as commutators.
The other term, $[\bar\epsilon,Y^\mu]$, doesn't contribute: it is zero because $\bar\epsilon$ is a (Grassmannian but still) $c$-number. So this commutator vanishes much like the commutator $[5,x]$ in quantum mechanics.
The derivative of $\bar\theta^A \theta$ with respect to $\bar\theta^A$ is $+\theta$, like division, and one may get a factor of $2$ if there is a sum over $A$. It is, up to possible signs, the same claim as that the $x$-derivative of $xy$ is $y$. You must have missed some prefactors $\theta$ in some terms when you arrived at the incorrect result for the derivative.
The name of the male co-author is John Schwarz, not Schwartz.

(This post was imported from StackExchange Physics at 2015-05-02 20:48 (UTC), posted by SE-user Luboš Motl.)
Model Bias and Variance
In the previous section, we studied Types of Datasets, Types of Errors and the Problem of Overfitting.
Overfitting: Low Bias with High Variance
- Low training error ('Low Bias')
- High testing error
- Unstable model ('High Variance'): the coefficients of the model change with small changes in the data

Underfitting: High Bias with Low Variance
- High training error ('High Bias')
- Testing error almost equal to training error
- Stable model ('Low Variance'): the coefficients of the model don't change with small changes in the data

The Bias-Variance Decomposition
\[Y = f(X)+\epsilon, \qquad Var(\epsilon) = \sigma^2\] \[\text{Squared Error} = E[(Y -\hat{f}(x_0))^2 \mid X = x_0 ]\] \[= \sigma^2 + [E\hat{f}(x_0)-f(x_0)]^2 + E[\hat{f}(x_0)-E\hat{f}(x_0)]^2\] \[= \sigma^2 + \text{Bias}^2(\hat{f}(x_0))+Var(\hat{f}(x_0))\]
Overall Model Squared Error = Irreducible Error + \(Bias^2\) + Variance

- The overall error is made up of bias and variance together.
- High bias with low variance and low bias with high variance are both bad for the overall accuracy of the model.
- A good model needs low bias and low variance, or at least an optimum where both are jointly small.
- How do we choose such an optimal model complexity? This is the bias-variance tradeoff, judged by comparing test and training error.
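The decomposition can be checked empirically. The sketch below uses a hypothetical setup (true function \(\sin\), noise \(\sigma = 0.3\), polynomial fits of degree 1 vs. 7): it repeatedly redraws the training set, refits the model, and estimates \(Bias^2\) and Variance of the prediction at a single point \(x_0\).

```python
import numpy as np

rng = np.random.default_rng(0)
sigma = 0.3          # noise level: Var(eps) = sigma^2
f = np.sin           # the "true" (normally unknown) function
x0 = 1.0             # point at which we decompose the error

def fit_and_predict(degree, n_train=30):
    """Draw one training set, fit a polynomial of given degree, predict at x0."""
    x = rng.uniform(0.0, 3.0, n_train)
    y = f(x) + rng.normal(0.0, sigma, n_train)
    return np.polyval(np.polyfit(x, y, degree), x0)

def bias2_and_variance(degree, n_sims=500):
    """Monte Carlo estimate of bias^2 and variance over resampled training sets."""
    preds = np.array([fit_and_predict(degree) for _ in range(n_sims)])
    bias2 = (preds.mean() - f(x0)) ** 2   # [E f_hat(x0) - f(x0)]^2
    var = preds.var()                     # E[f_hat(x0) - E f_hat(x0)]^2
    return bias2, var

b_simple, v_simple = bias2_and_variance(degree=1)    # underfits: high bias
b_complex, v_complex = bias2_and_variance(degree=7)  # flexible: high variance
```

A straight line cannot follow \(\sin\), so its \(Bias^2\) dominates; the degree-7 fit tracks \(\sin\) closely on average, but its prediction jumps around between resampled training sets, so its Variance dominates.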
The next post is about Cross validation. |
Now showing items 1-9 of 9
Production of $K*(892)^0$ and $\phi$(1020) in pp collisions at $\sqrt{s}$ =7 TeV
(Springer, 2012-10)
The production of K*(892)$^0$ and $\phi$(1020) in pp collisions at $\sqrt{s}$=7 TeV was measured by the ALICE experiment at the LHC. The yields and the transverse momentum spectra $d^2 N/dydp_T$ at midrapidity |y|<0.5 in ...
Transverse sphericity of primary charged particles in minimum bias proton-proton collisions at $\sqrt{s}$=0.9, 2.76 and 7 TeV
(Springer, 2012-09)
Measurements of the sphericity of primary charged particles in minimum bias proton--proton collisions at $\sqrt{s}$=0.9, 2.76 and 7 TeV with the ALICE detector at the LHC are presented. The observable is linearized to be ...
Pion, Kaon, and Proton Production in Central Pb--Pb Collisions at $\sqrt{s_{NN}}$ = 2.76 TeV
(American Physical Society, 2012-12)
In this Letter we report the first results on $\pi^\pm$, K$^\pm$, p and pbar production at mid-rapidity (|y|<0.5) in central Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV, measured by the ALICE experiment at the LHC. The ...
Measurement of prompt J/$\psi$ and beauty hadron production cross sections at mid-rapidity in pp collisions at $\sqrt{s}$ = 7 TeV
(Springer-Verlag, 2012-11)
The ALICE experiment at the LHC has studied J/$\psi$ production at mid-rapidity in pp collisions at $\sqrt{s}$ = 7 TeV through its electron pair decay on a data sample corresponding to an integrated luminosity $L_{int}$ = 5.6 nb$^{-1}$. The fraction ...
Suppression of high transverse momentum D mesons in central Pb--Pb collisions at $\sqrt{s_{NN}}=2.76$ TeV
(Springer, 2012-09)
The production of the prompt charm mesons $D^0$, $D^+$, $D^{*+}$, and their antiparticles, was measured with the ALICE detector in Pb-Pb collisions at the LHC, at a centre-of-mass energy $\sqrt{s_{NN}}=2.76$ TeV per ...
J/$\psi$ suppression at forward rapidity in Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV
(American Physical Society, 2012)
The ALICE experiment has measured the inclusive J/$\psi$ production in Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV down to $p_t$ = 0 in the rapidity range 2.5 < y < 4. A suppression of the inclusive J/$\psi$ yield in Pb-Pb is observed with ...
Production of muons from heavy flavour decays at forward rapidity in pp and Pb-Pb collisions at $\sqrt {s_{NN}}$ = 2.76 TeV
(American Physical Society, 2012)
The ALICE Collaboration has measured the inclusive production of muons from heavy flavour decays at forward rapidity, 2.5 < y < 4, in pp and Pb-Pb collisions at $\sqrt {s_{NN}}$ = 2.76 TeV. The pt-differential inclusive ...
Particle-yield modification in jet-like azimuthal dihadron correlations in Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV
(American Physical Society, 2012-03)
The yield of charged particles associated with high-pT trigger particles (8 < pT < 15 GeV/c) is measured with the ALICE detector in Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV relative to proton-proton collisions at the ...
Measurement of the Cross Section for Electromagnetic Dissociation with Neutron Emission in Pb-Pb Collisions at $\sqrt{s_{NN}}$ = 2.76 TeV
(American Physical Society, 2012-12)
The first measurement of neutron emission in electromagnetic dissociation of $^{208}$Pb nuclei at the LHC is presented. The measurement is performed using the neutron Zero Degree Calorimeters of the ALICE experiment, which ...
#011 CNN Why convolutions?
In this post we will talk about why convolutions, and hence convolutional neural networks, work so well in computer vision.
Convolutions are very useful when we include them in our neural networks. There are two main advantages of \(Convolutional \) layers over \(Fully\enspace connected\) layers:
parameter sharing and sparsity of connections.
We can illustrate this with an example. Let’s say that we have a \(32\times32\times3\) dimensional image. This actually comes from the example in the previous post; let’s say we use six \(5 \times 5\) filters (\( f=5 \)), so this gives a \(28\times28\times6 \) dimensional output.
The main advantage of \(Conv\) layers over \(Fully\enspace connected \) layers is the number of parameters to be learned.
If we multiply \(32\times32\times3\) we get \(3072\), and if we multiply \(28\times28\times 6 \) we get \(4704\). If we were to create a network with \(3072\) units in one layer and \(4704 \) units in the next layer, and connect every one of these neurons, then the weight matrix would be \(3072 \times 4704\), which is about \(14\) million parameters to be learned. That’s a lot of parameters to train, and although today we can train neural networks with even more than \(14 \) million parameters, considering that this is just a pretty small image, it is a lot of parameters. Naturally, if this were a \(1000 \times 1000 \) image, the weight matrix would become far too large. On the other hand, if we look at the number of parameters in the convolutional layer, each filter is \(5 \times 5\), so each filter has \(25 \) weights plus a bias parameter, which means we have \(26\) parameters per filter; with six filters the total number of parameters is \(156 \). Hence, the number of parameters in this \(Conv \) layer remains quite small.
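The two counts above are easy to verify. Note that, following the text, each filter is counted as \(5 \times 5\) weights plus a bias; the input's channel dimension is deliberately left out of this simplified count.

```python
def fully_connected_params(n_in: int, n_out: int) -> int:
    # one weight per (input unit, output unit) pair, plus one bias per output unit
    return n_in * n_out + n_out

def conv_params(f: int, n_filters: int) -> int:
    # each f x f filter: f*f weights plus 1 bias
    return (f * f + 1) * n_filters

fc = fully_connected_params(32 * 32 * 3, 28 * 28 * 6)   # 3072 -> 4704 units
cv = conv_params(5, 6)                                  # six 5x5 filters
```

Here `fc` comes out to 14,455,392 (about 14 million, bias terms included) while `cv` is 156, roughly a 90,000-fold reduction for the same layer shape.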
There are really two reasons that a \(Convnet \) has a small number of parameters:
Parameter sharing
A feature detector, for example a vertical edge detector, that’s useful in one part of the image is probably useful in another part of the image. What this means is that if we apply a \(3\times3 \) filter for detecting vertical edges at one part of an image, we can then apply the same \(3\times3 \) filter at another position in the image.
A feature detector, such as a vertical edge detector, that’s useful in one part of the image may be useful in another part of the image as well
Each of these feature detectors, can use the same parameters in a lot of different positions in our input image in order to detect a vertical edge or some other feature. This is also true for low-level features like edges as well as for higher-level features like maybe detection of the eye that indicates a face.
Sparsity of connections
In each layer, each output value depends only on a small number of inputs
Let’s now explain what sparsity of connections means. If we look at the upper-left output value (shown in blue): it was computed by a \(3\times3 \) convolution and it depends only on a \(3\times3\) grid of input cells. So, this output unit is connected to only \(9 \) of the \(36 \) (\(6 \times 6\)) input features. In particular, the remaining pixel values have no effect on that output. As another example, the red output depends only on \(9 \) input features, and it is as if only those \(9 \) input features were connected to this output.
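Sparsity of connections can be verified directly. In the sketch below (a hypothetical \(6 \times 6\) input and \(3 \times 3\) filter, matching the example), changing a pixel outside the top-left receptive field leaves the top-left output value untouched, while an output whose patch contains that pixel does change.

```python
import numpy as np

rng = np.random.default_rng(1)

def conv2d_valid(image, kernel):
    """Plain 'valid' 2-D cross-correlation (no padding, stride 1)."""
    H, W = image.shape
    f = kernel.shape[0]
    out = np.empty((H - f + 1, W - f + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + f, j:j + f] * kernel)
    return out

image = rng.normal(size=(6, 6))
kernel = rng.normal(size=(3, 3))
before = conv2d_valid(image, kernel)

image[5, 5] += 10.0               # perturb a pixel far from the top-left patch
after = conv2d_valid(image, kernel)
```

Only the outputs whose \(3\times3\) patch covers position (5, 5) move; the top-left output is recomputed from exactly the same nine inputs as before.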
Through these two mechanisms a neural network has far fewer parameters, which allows it to be trained on smaller training sets. This also makes it less prone to overfitting. Another reason why convolutional neural networks are so good in computer vision is:
Translation invariance
Next, convolutional neural networks are very good at capturing translation invariance: a picture of a cat shifted a couple of pixels to the right is still a cat. The convolutional structure helps, because an image shifted by a few pixels should produce quite similar features and should probably be assigned the same output label. The fact that we apply the same filter to all positions of the image, both in the early layers and in the later layers, helps a neural network automatically learn to be more robust, i.e. to better capture this desirable property of translation invariance.
Finally, let’s put it all together and see how we can train one of these networks.
Putting it all together: Training a Convolutional Neural Network
We use a gradient descent to optimize parameters to reduce cost function \( J \) :
$$ J=\frac{1}{m}\sum_{i=1}^{m} \mathcal{L}\left ( {\hat{y}}^{\left ( i \right )},y^{\left ( i \right )} \right ) $$
Let’s say that we want to build a cat detector and we have a training set as follows: \(x \) is an image and \( y \) is a binary label, or a label for one of \( k \) classes. Let’s say we’ve chosen a convolutional neural network structure that suits the image. That convolutional neural network consists of some \(convolutional \) and \(Pooling\enspace layers \). Then, we have \(Fully\enspace connected\) layers followed by a \(softmax \) output that gives us \(\hat{y} \). The \(Convolutional \) layers and the \(Fully\enspace connected\) layers will have various parameters \(w \), as well as biases \(b \), which lets us define a cost function. We can compute the cost function \(J \) as the sum of the losses of our neural network's predictions on the entire training set, divided by \(m\) to get an average value. To train this neural network we can use gradient descent or some other optimization algorithm, like gradient descent with momentum, in order to optimize all the parameters of the neural network and to reduce the cost function \(J \). If we do this, we can build a very effective cat detector or some other detector.
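As a minimal stand-in for the training loop just described, the sketch below runs plain gradient descent on a logistic-regression "detector" over synthetic labels (not a full CNN; all names and data here are illustrative) and drives the average cross-entropy cost \(J\) down.

```python
import numpy as np

rng = np.random.default_rng(0)
m = 200
X = rng.normal(size=(m, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)   # toy binary labels ("cat" vs "not cat")

w = np.zeros(2)   # parameters w
b = 0.0           # bias b
alpha = 0.5       # learning rate

def cost(w, b):
    """Cross-entropy cost J, averaged over the m training examples."""
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    return -np.mean(y * np.log(p + 1e-12) + (1 - y) * np.log(1 - p + 1e-12))

history = [cost(w, b)]
for _ in range(100):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    grad_w = X.T @ (p - y) / m      # dJ/dw
    grad_b = np.mean(p - y)         # dJ/db
    w -= alpha * grad_w             # gradient descent step
    b -= alpha * grad_b
    history.append(cost(w, b))
```

Each iteration moves \(w\) and \(b\) against the gradient of \(J\), so the recorded cost history decreases from its starting value of about \(\ln 2\).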
In the next posts we will see an architecture of some convolutional neural networks. |
Just thinking out loud here...
If we define a function called \\(\mathrm{when}\\),
\\[
\mathrm{when}(b,x) :\mathbf{Bool}\otimes \mathcal{X} \to \mathcal{X} \\\\
:= \text{if } b \text{ then } x \text{ else } \varnothing,
\\]
then \\(\mathrm{when}\\) is adjoint (in fact equivalent) to taking the Cartesian product with a Boolean variable,
\\[
(b \times -) \vdash \mathrm{when}(b,-).
\\]
So then a \\(\mathbf{Cost}\\)-morphism is something like \\(\mathrm{when}(x\multimap y,r)\\), i.e. \\(\mathbf{Bool}\\)-morphisms that are labeled by a real number.
Bernoulli, Volume 24, Number 4A (2018), 2499-2530.
The sharp constant for the Burkholder–Davis–Gundy inequality and non-smooth pasting
Abstract
We revisit the celebrated family of BDG-inequalities introduced by Burkholder and Gundy (Acta Math. 124 (1970) 249–304) and Davis (Israel J. Math. 8 (1970) 187–190) for continuous martingales. For the inequalities $\mathbb{E}[\tau^{\frac{p}{2}}]\leq C_{p}\mathbb{E}[(B^{*}(\tau))^{p}]$ with $0<p<2$ we propose a connection of the optimal constant $C_{p}$ with an ordinary integro-differential equation which gives rise to a numerical method of finding this constant. Based on numerical evidence, we are able to calculate, for $p=1$, the explicit value of the optimal constant $C_{1}$, namely $C_{1}=1.27267\ldots$ . In the course of our analysis, we find a remarkable appearance of “non-smooth pasting” for a solution of a related ordinary integro-differential equation.
Article information
Source: Bernoulli, Volume 24, Number 4A (2018), 2499-2530.
Dates: Received: January 2016; Revised: September 2016; First available in Project Euclid: 26 March 2018
Permanent link to this document: https://projecteuclid.org/euclid.bj/1522051216
Digital Object Identifier: doi:10.3150/17-BEJ935
Mathematical Reviews number (MathSciNet): MR3779693
Zentralblatt MATH identifier: 06853256
Citation
Schachermayer, Walter; Stebegg, Florian. The sharp constant for the Burkholder–Davis–Gundy inequality and non-smooth pasting. Bernoulli 24 (2018), no. 4A, 2499--2530. doi:10.3150/17-BEJ935. https://projecteuclid.org/euclid.bj/1522051216 |
Trimester Seminar Venue: HIM, Poppelsdorfer Allee 45, Lecture Hall Thursday, December 13th, 2:30 p.m. Conjugacy classes and centralisers in classical groups
Speaker: Giovanni de Franceschi (Auckland)
Abstract
We discuss conjugacy classes and associated centralisers in classical groups, giving descriptions which underpin algorithms to list these explicitly.
Thursday, December 13th, 2 p.m. PFG, PRF and probabilistic finiteness properties of profinite groups
Speaker: Matteo Vannacci (Düsseldorf)
Abstract
A profinite group G equipped with its Haar measure is a probability space and one can talk about "random elements" in G. A profinite group G is said to be
positively finitely generated (PFG) if there is an integer k such that k Haar-random elements generate G with positive probability. I will talk about a variation of PFG, called "positive finite relatedness" (PFR) for profinite groups. Finally I will survey some recent work-in-progress defining higher probabilistic homological finiteness properties (PFP_n), building on PFG and PFR. This is joint work with Ged Corob Cook and Steffen Kionke.
Thursday, December 13th, 11 a.m. Strong Approximation for Markoff Surfaces and Product Replacement Graphs
Speaker: Alex Gamburd (Graduate Centre, CUNY)
Abstract
Markoff triples are integer solutions of the equation $x^2+y^2+z^2=3xyz$ which arose in Markoff's spectacular and fundamental work (1879) on diophantine approximation and has been henceforth ubiquitous in a tremendous variety of different fields in mathematics and beyond. After reviewing some of these, in particular the intimate relation with product replacement graphs, we will discuss joint work with Bourgain and Sarnak on the connectedness of the set of solutions of the Markoff equation modulo primes under the action of the group generated by Vieta involutions, showing, in particular, that for almost all primes the induced graph is connected. Similar results for composite moduli enable us to establish certain new arithmetical properties of Markoff numbers, for instance the fact that almost all of them are composite.
Wednesday, December 12th, 11 a.m. McKay graphs for simple groups
Speaker: Martin Liebeck (Imperial College)
Abstract
Let V be a faithful module for a finite group G over a field k, and let Irr(kG) denote the set of irreducible kG-modules. The McKay graph M(G,V) is a directed graph with vertex set Irr(kG), having edges from any irreducible X to the composition factors of the tensor product of X and V. These graphs were first defined by McKay in connection with the well-known McKay correspondence. I shall discuss McKay graphs for simple groups.
Tuesday, December 11th, 11 a.m. Groups, words and probability
Speaker: Aner Shalev (Hebrew University of Jerusalem)
Abstract
I will discuss probabilistic aspects of word maps on finite and infinite groups. I will focus on solutions of some probabilistic Waring problems for finite simple groups, obtained in a recent work with Larsen and Tiep. Various applications will also be given.
Thursday, December 6th, 1:30 p.m. Conjugacy growth in groups
Speaker: Alex Evetts
Thursday, December 6th, 2:10 p.m. Zeta functions of groups: theory and computations
Speaker: Tobias Rossmann
Abstract
I will give a brief introduction to the theory of zeta functions of (infinite) groups and algebras. I will describe some of the techniques used to investigate these functions and give an overview of recent work on practical methods for computing them.
Thursday, December 6th, 3:10 p.m. Zeta functions of groups and model theory
Speaker: Michele Zordan
Abstract
In this talk we shall explore the connections between rationality questions regarding zeta functions of groups and model theory of valued fields.
Wednesday, December 5th, 11 a.m. Towards Short Presentations for Ree Groups
Speaker: Alexander Hulpke (Colorado State University, Fort Collins)
Abstract
The Ree groups $^2G_2(3^{2m+1})$ are the only class of groups for which no short presentations (that is, of length polynomial in log(q)) are known. I will report on work of Ákos Seress and myself that found a likely candidate for such a short presentation, as well as the obstacles that lie in the way of proving that it is a presentation for the group.
Tuesday, December 4th, 11 a.m. Finding involution centralisers efficiently in classical groups of odd characteristic
Speaker: Cheryl Praeger (University of Western Australia)
Abstract
Bray's involution centraliser algorithm plays a key role in recognition algorithms for classical groups over finite fields of odd order. It has always performed faster than the time guaranteed/justified by complexity analyses. Work of Dixon, Seress and me published this year gives a satisfactory analysis for SL(n,q), and we are slowly making progress with the other classical groups. The "we" are Colva Roney-Dougal, Stephen Glasby and me - and we have conquered the unitary groups so far.
Thursday, November 29th, 2 p.m. Density of small cancellation presentations
Speaker: Michal Ferov
Thursday, November 29th, 2 p.m. Constructing Grushko and JSJ decompositions: a combinatorial approach
Speaker: Suraj Krishna
Abstract
The class of graphs of free groups with cyclic edge groups constitutes an important source of examples in geometric group theory, particularly of hyperbolic groups. In this talk, I will focus on groups of this class that arise as fundamental groups of certain nonpositively curved square complexes. The square complexes in question, called tubular graphs of graphs, are obtained by attaching tubes (a tube is a Cartesian product of a circle with the unit interval) to a finite collection of finite graphs. I will explain how to obtain two canonical decompositions, the Grushko decomposition and the JSJ decomposition, for the fundamental groups of tubular graphs of graphs. While our algorithm to obtain the Grushko decomposition is of polynomial time-complexity, the algorithm for the JSJ decomposition is of double exponential time-complexity and is the first such algorithm with a bound on its time-complexity.
Thursday, November 29th, 2 p.m. On the Burnside variety of groups
Speaker: Rémi Coulon
Thursday, November 22th, 3 p.m. From the Principle conjecture towards the Algebraicity conjecture
Speaker:Ulla Karhumäki
Abstract
It was proven by Hrushovski that, if true, the Algebraicity conjecture implies that if an infinite simple group of finite Morley rank has a generic automorphism, then the fixed point subgroup of this automorphism is pseudofinite. I will state some results suggesting that the converse is true as well, and further, present a possible strategy for proving that the Principle conjecture and the Algebraicity conjecture are actually equivalent.
Thursday, November 15th, 3 p.m. Separating cyclic subgroups in the pro-p topology
Speaker: Michal Ferov
Thursday, November 15th, 3 p.m. Refinements and filters for groups
Speaker: Josh Maglione
Thursday, November 8th, 2 p.m. On spaces of Lipschitz functions on finitely generated and Carnot groups
Speaker: Michal Doucha
Abstract
The motivation for this work comes from functional analysis, namely to study the normed spaces of Lipschitz functions defined on metric spaces, however certain natural restrictions lead us to focus on finitely generated and Lie groups as metric spaces in question. We show that whenever $\Gamma$ is a finitely generated nilpotent torsion-free group and $G$ is its Mal'cev closure which is Carnot, then the spaces of Lipschitz functions defined on $\Gamma$ and $G$ are isomorphic as Banach spaces. This applies e.g. to the pairs $(\mathbb{Z}^d, \mathbb{R}^d)$ or $(H_3(\mathbb{Z}), H_3(\mathbb{R}))$. I will focus on the group-theoretic content of the results and on the relations between finitely generated nilpotent torsion-free groups and their asymptotic cones (which are Carnot groups) and Mal'cev closures. Based on joint work with Leandro Candido and Marek Cuth.
Thursday, November 8th, 2 p.m. The Probability Distribution of Word Maps on Finite Groups
Speaker: Turbo Ho
Thursday, November 8th, 2 p.m. How to construct short laws for finite groups
Speaker: Henry Bradford
Friday, November 2th, 2 - 4 p.m. Groups, boundaries and Cannon--Thurston maps
Speaker: Giles Gardam
On the isomorphism problem for one-relator groups
Speaker: Alan Logan
String C-group representations for symmetric and alternating groups
Speaker: Dimitri Leemans
Abstract
A string C-group representation of a group G is a pair (G,S) where S is a set of involutions generating G and satisfying an intersection property as well as a commuting property. String C-group representations are in one-to-one correspondance with abstract regular polytopes. In this talk, we will talk about what is known on string C-group representations for the symmetric and alternating groups. We will also explain some open questions in that area that involve group theory, graph theory and combinatorics.
Monday, October 29th, 3:15 p.m. Product set growth in groups and hyperbolic geometry
Speaker: Markus Steenbock
Abstract
We discuss product theorems in groups acting on hyperbolic spaces:
for every hyperbolic group there exists a constant $a>0$ such that for every finite subset $U$ that is not contained in a virtually cyclic subgroup, $|U^3|>(a|U|)^2$. We also discuss the growth of $|U^n|$ and conclude that the entropy of $U$ (the limit of $\frac{1}{n}\log|U^n|$ as $n$ goes to infinity) exceeds $\frac{1}{2}\log(a|U|)$. This generalizes results of Razborov and Safin, and answers a question of Button. We discuss similar estimates for groups acting acylindrically on trees or hyperbolic spaces. This talk is on a joint work with T. Delzant.
Thursday, October 18th, 2 p.m. A quick introduction to homogeneous dynamics
Speaker: Guan Lifan
Thursday, October 18th, 2:30 p.m. Scale subgroups of automorphism groups of trees
Speaker: George Willis
Thursday, October 18th, 3 p.m. Searching for random permutation groups
Speaker: Robert Gilman
Abstract
It is well known that two random permutations generate the symmetric or alternating group with asymptotic probability 1. In other words the collection of all other permutation groups has asymptotic density 0. This is bad news if you want to sample random two-generator permutation groups. However, there is another notion of density, defined in terms of Kolmogorov complexity, with respect to which the asymptotic density of every infinite computable set is positive. For the usual reasons the corresponding search algorithm cannot be implemented, but one may try a heuristic variation. Perhaps surprisingly, it seems to work. We present some experimental results.
Thursday, October 11th, 3 p.m. Canonical conjugates of finite permutation groups
Speaker: Robin Candy (Australian National University)
Abstract
Given a finite permutation group $G \le \operatorname{Sym}(\Omega)$ we discuss a way to find a canonical representative of the equivalence class of conjugate groups $G^{\operatorname{Sym}(\Omega)}=\left\{ s^{-1} G s \,\middle|\, s \in \operatorname{Sym}(\Omega) \right\}$. As a consequence the subgroup conjugacy and symmetric normaliser problems are introduced and addressed. The approach presented is based on an adaptation of Brendan McKay's graph isomorphism algorithm and is heavily related to Jeffrey Leon's partition backtrack algorithm.
Thursday, October 4th, 3 p.m. Enumerating characters of Sylow p-subgroups of finite groups of Lie type $G(p^f)$
Speaker: Alessandro Paolini (TU Kaiserslautern)
Abstract
Let q=p^f with p a prime. The problem of enumerating characters of subgroups of a finite group of Lie type G(q) plays an important role in various research problems, from random walks on G(q) to cross-characteristics representations of G(q). O'Brien and Voll have recently determined a formula for the generic number of irreducible characters of a fixed degree of a Sylow p-subgroup U(q) of G(q), provided p>c where c is the nilpotency class of G(q).
We discuss in this talk the situation in the case $p \le c$. In particular, we describe an algorithm for the parametrization of the irreducible characters of U(q) which replaces the Kirillov orbit method used in the case p>c. Moreover, we present connections with a conjecture of Higman and we highlight a departure from the case of large p. This is based on joint works with Goodwin, Le and Magaard.
Thursday, October 4th, 3 p.m. Rationality of the representation zeta function for compact FAb $p$-adic analytic groups
Speaker: Michele Zordan (KU Leuven)
Abstract
Let $\Gamma$ be a topological group such that the number $r_n(\Gamma)$ of its irreducible continuous complex characters of degree $n$ is finite for all $n\in\mathbb{N}$. We define the {\it representation zeta function} of $\Gamma$ to be the Dirichlet generating function \[\zeta_{\Gamma}(s) = \sum_{n\ge 1} r_n(\Gamma)n^{-s} \,\,\,(s\in\mathbb{C}).\] One goal in studying a sequence of numbers is to show that it has some sort of regularity. Working with zeta functions, this amounts to showing that $\zeta_{\Gamma}(s)$ is rational. Rationality results for the representation zeta function of $p$-adic analytic groups were first obtained by Jaikin-Zapirain for almost all $p$. In this talk I shall report on a new proof (joint work with Stasinski) of Jaikin-Zapirain's result without restriction on the prime.
Thursday, September 27th, 3:30 p.m. Hyperbolicity is preserved under elementary equivalence
Speaker: Simon Andre (University of Rennes)
Abstract
Zlil Sela proved that any finitely generated group which satisfies the same first-order properties as a torsion-free hyperbolic group is itself torsion-free hyperbolic. This result is striking since hyperbolicity is defined in a purely geometric way. In fact, Sela's theorem remains true for hyperbolic groups with torsion, as well as for subgroups of hyperbolic groups, and for hyperbolic and cubulable groups. I will say a few words about these results.
Thursday, September 6th, 3 p.m. Universal minimal flows of the homeomorphism groups of Ważewski dendrites
Speaker: Aleksandra Kwiatkowska (Universität Münster)
Abstract
For each P ⊆ {3,4,...,ω}, we consider the Ważewski dendrite $W_P$, which is a compact connected metric space that we can construct in the framework of Fraïssé theory. If P is finite, we prove that the universal minimal flow of the homeomorphism group $H(W_P)$ is metrizable and we compute it explicitly. This answers a question of Duchesne. If $P$ is infinite, we show that the universal minimal flow of $H(W_P)$ is not metrizable. This provides examples of topological groups which are Roelcke precompact and have a non-metrizable universal minimal flow with a comeager orbit.
When updating the weights of a neural network using the backpropagation algorithm with a momentum term, should the learning rate be applied to the momentum term as well?
Most of the information I could find about using momentum has the equations looking something like this:
$W_{i}' = W_{i} - \alpha \Delta W_i + \mu \Delta W_{i-1}$
where $\alpha$ is the learning rate, and $\mu$ is the momentum term.
If the $\mu$ term is larger than the $\alpha$ term, then in the next iteration the $\Delta W$ from the previous iteration will have a greater influence on the weight than the current one.
Is this the purpose of the momentum term? Or should the equation look more like this?
$W_{i}' = W_{i} - \alpha( \Delta W_i + \mu \Delta W_{i-1})$
i.e. scaling everything by the learning rate?
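For reference, here is a toy sketch (mine, not from any textbook) of the first convention, reading \(\Delta W_{i-1}\) as the previous *total weight change* — which already carries its own factor of \(\alpha\), so it is not rescaled again. Even with \(\mu = 0.9 > \alpha = 0.1\), the iteration converges on a simple quadratic:

```python
# Toy illustration of the first convention, with dW_{i-1} read as the
# previous total weight change ("velocity"), which already contains alpha.
def momentum_descent(w0=5.0, alpha=0.1, mu=0.9, n_steps=200):
    w, v = w0, 0.0
    for _ in range(n_steps):
        g = w                    # gradient of the toy loss f(w) = 0.5*w**2
        v = mu * v - alpha * g   # v_i = mu * dW_{i-1} - alpha * dW_i
        w = w + v                # W' = W + v_i
    return w

print(abs(momentum_descent()))   # shrinks toward the minimum at w = 0
```

The momentum term dominating for a while is expected: the running velocity smooths the trajectory, but the contraction factor still drives it to the minimum.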
We've been looking at feasibility relations, as our first example of enriched profunctors. Now let's look at another example. This combines many ideas we've discussed - but don't worry, I'll review them, and if you forget some definitions just click on the links to earlier lectures!
Remember, \(\mathbf{Bool} = \lbrace \text{true}, \text{false} \rbrace \) is the preorder that we use to answer true-or-false questions like
while \(\mathbf{Cost} = [0,\infty] \) is the preorder that we use to answer quantitative questions like
or
In \(\textbf{Cost}\) we use \(\infty\) to mean it's impossible to get from here to there: it plays the same role that \(\text{false}\) does in \(\textbf{Bool}\). And remember, the ordering in \(\textbf{Cost}\) is the
opposite of the usual order of numbers! This is good, because it means we have
$$ \infty \le x \text{ for all } x \in \mathbf{Cost} $$ just as we have
$$ \text{false} \le x \text{ for all } x \in \mathbf{Bool} .$$ Now, \(\mathbf{Bool}\) and \(\mathbf{Cost}\) are monoidal preorders, which are just what we've been using to define enriched categories! This let us define
and
We can draw preorders using graphs, like these:
An edge from \(x\) to \(y\) means \(x \le y\), and we can derive other inequalities from these. Similarly, we can draw Lawvere metric spaces using \(\mathbf{Cost}\)-weighted graphs, like these:
The distance from \(x\) to \(y\) is the length of the shortest directed path from \(x\) to \(y\), or \(\infty\) if no path exists.
All this is old stuff; now we're thinking about enriched profunctors between enriched categories.
A \(\mathbf{Bool}\)-enriched profunctor between \(\mathbf{Bool}\)-enriched categories is also called a feasibility relation between preorders, and we can draw one like this:
What's a \(\mathbf{Cost}\)-enriched profunctor between \(\mathbf{Cost}\)-enriched categories? It should be no surprise that we can draw one like this:
You can think of \(C\) and \(D\) as countries with toll roads between the different cities; then an enriched profunctor \(\Phi : C \nrightarrow D\) gives us the cost of getting from any city \(c \in C\) to any city \(d \in D\). This cost is \(\Phi(c,d) \in \mathbf{Cost}\).
But to specify \(\Phi\), it's enough to specify costs of flights from
some cities in \(C\) to some cities in \(D\). That's why we just need to draw a few blue dashed edges labelled with costs. We can use this to work out the cost of going from any city \(c \in C\) to any city \(d \in D\). I hope you can guess how! Puzzle 182. What's \(\Phi(E,a)\)? Puzzle 183. What's \(\Phi(W,c)\)? Puzzle 184. What's \(\Phi(E,c)\)?
Here's a much more challenging puzzle:
Puzzle 185. In general, a \(\mathbf{Cost}\)-enriched profunctor \(\Phi : C \nrightarrow D\) is defined to be a \(\mathbf{Cost}\)-enriched functor
$$ \Phi : C^{\text{op}} \times D \to \mathbf{Cost} $$ This is a function that assigns to any \(c \in C\) and \(d \in D\) a cost \(\Phi(c,d)\). However, for this to be a \(\mathbf{Cost}\)-enriched functor we need to make \(\mathbf{Cost}\) into a \(\mathbf{Cost}\)-enriched category! We do this by saying that \(\mathbf{Cost}(x,y)\) equals \( y - x\) if \(y \ge x \), and \(0\) otherwise. We must also make \(C^{\text{op}} \times D\) into a \(\mathbf{Cost}\)-enriched category, which I'll let you figure out how to do. Then \(\Phi\) must obey some rules to be a \(\mathbf{Cost}\)-enriched functor. What are these rules? What do they mean concretely in terms of trips between cities?
And here are some easier ones:
Puzzle 186. Are the graphs we used above to describe the preorders \(A\) and \(B\) Hasse diagrams? Why or why not? Puzzle 187. I said that \(\infty\) plays the same role in \(\textbf{Cost}\) that \(\text{false}\) does in \(\textbf{Bool}\). What exactly is this role?
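The recipe hinted at above — combine the given flight costs with travel inside each country, taking the minimum over sums — is "min of sums" where \(\mathbf{Bool}\) profunctor composition uses "or of ands". Here is a small Python sketch; the cities and weights are made up for illustration (the actual ones live in the lecture's figures, which aren't reproduced here):

```python
INF = float('inf')   # plays the role of "false": no route at all

# Hypothetical Cost-weighted data: shortest distances within country C,
# within country D, and a few given "flight" costs from C to D.
dist_C = {('x', 'x'): 0, ('x', 'y'): 3, ('y', 'y'): 0, ('y', 'x'): INF}
dist_D = {('a', 'a'): 0, ('a', 'b'): 2, ('b', 'b'): 0, ('b', 'a'): INF}
flights = {('x', 'a'): 5}    # only some C -> D edges are specified

def profunctor(c, d):
    """Phi(c,d) = min over flights (c2,d2) of dist_C(c,c2) + flight + dist_D(d2,d)."""
    best = INF
    for (c2, d2), f in flights.items():
        best = min(best, dist_C.get((c, c2), INF) + f + dist_D.get((d2, d), INF))
    return best

print(profunctor('x', 'b'))   # 0 + 5 + 2 = 7
print(profunctor('y', 'a'))   # inf: no route from y into D
```

The `min` and `+` here are exactly the \(\mathbf{Cost}\) analogues of "or" and "and" in the \(\mathbf{Bool}\) case.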
By the way, people often say
\(\mathcal{V}\)-category to mean \(\mathcal{V}\)-enriched category, and \(\mathcal{V}\)-functor to mean \(\mathcal{V}\)-enriched functor, and \(\mathcal{V}\)-profunctor to mean \(\mathcal{V}\)-enriched profunctor. This helps you talk faster and do more math per hour. |
It is not generally true that $\mathcal{A}_n\times\mathcal{B}$ decreases to $\mathcal{A}_\infty\times\mathcal{B}$. In general, the inclusion $\mathcal{A}_\infty\times\mathcal{B}\subseteq\bigcap_n(\mathcal{A}_n\times\mathcal{B})$ holds, but it can be strict. I'll demonstrate this by a counterexample.
Consider the case where $X=Y=2^\mathbb{N}$. An element $\omega$ of this space is a function $\omega\colon\mathbb{N}\to\{0,1\}$. Think of this as the space of an infinite sequence of coin tosses, where $\omega(n)=1$ if the n'th toss is a head and $\omega(n)=0$ if it is a tail. Also, let $\mathcal{A}=\mathcal{B}$ be the sigma-algebra generated by the individual tosses, so that it is generated by the sets $A_n=\{\omega\in X\colon\omega(n)=1\}$ ($n=1,2,\ldots$). Also, let $\mathcal{A}_n$ be the sigma-algebra generated by all the tosses on or after the n'th one. So, $\mathcal{A}_n$ is generated by $\{A_m\colon m\ge n\}$. Then, $\mathcal{A}_\infty=\bigcap_n\mathcal{A}_n$ is the tail sigma-algebra. I claim that the set$$
S=\bigcup_{n=1}^\infty\left\{(\omega_X,\omega_Y)\in X\times Y\colon \omega_X(m)=\omega_Y(m)\;{\rm for\ all\;}m\ge n\right\}
$$is in $\bigcap_n(\mathcal{A}_n\times\mathcal{B})$ but not in $\mathcal{A}_\infty\times\mathcal{B}$. Here, $S$ represents the pairs of sequences of coin tosses which agree with each other for all but finitely many tosses. The event for which they agree on every toss starting from the n'th one is $S_n=\bigcap_{m\ge n}((A_m\times A_m)\cup(A_m^c\times A_m^c))$, which is in $\mathcal{A}_n\times\mathcal{B}$. We then have $S=\bigcup_{m\ge n}S_m\in\mathcal{A}_n\times\mathcal{B}$ for all $n$, so $S\in\bigcap_n(\mathcal{A}_n\times\mathcal{B})$. In fact, $S\in\bigcap_n(\mathcal{A}_n\times\mathcal{A}_n)$.
We can also show that $S\not\in\mathcal{A}_\infty\times\mathcal{B}$. Let $\mathbb{P}$ be the probability measure on $(X\times Y,\mathcal{A}\times\mathcal{B})$ such that, the outcomes $(\omega_X,\omega_Y)\in X\times Y$ are distributed such that $\omega_X(n)$ are independent Bernoulli random variables with probability 1/2 of being equal to one (representing tossing a fair coin), $\omega_Y=\omega_X$ with probability 1/2, and $\omega_Y=1-\omega_X$ with probability 1/2. That is, if $\epsilon=(\epsilon_1,\epsilon_2,\ldots)$ is a sequence of independent random variables with $\mathbb{P}(\epsilon_k=1)=\mathbb{P}(\epsilon_k=0)=1/2$ then$$
\mathbb{P}(U)=\frac12\mathbb{P}\left((\epsilon,\epsilon)\in U\right)+\frac12\mathbb{P}\left((\epsilon,1-\epsilon)\in U\right)
$$for any $U\in\mathcal{A}\times\mathcal{B}$. Then, $\mathbb{P}(S)=1/2$. Now consider any set $U=V\times W\in\mathcal{A}_\infty\times\mathcal{B}$. Kolmogorov's zero-one law says that $V$ has probability 0 or 1. So, up to a set of zero probability, we have $U=X\times W$ or $U=\emptyset\times W=X\times\emptyset$. In any case, up to a set of zero probability, $U$ can be written as $X\times W$ for some $W\in \mathcal{B}$ and, by the monotone class theorem (or pi-system d-system lemma), this extends to all $U\in\mathcal{A}_\infty\times\mathcal{B}$. However, $S$ is
not of this form. In fact, as $S$ only depends on the events $C_n=\{(\omega_X,\omega_Y)\colon\omega_X(n)=\omega_Y(n)\}$, it can be seen that it is independent of $\{\emptyset,X\}\times\mathcal{B}$ so, if it did lie in this sigma-algebra (up to a probability zero event) then it would be independent of itself. This would imply $\mathbb{P}(S)=0$ or $\mathbb{P}(S)=1$, which is a contradiction.
It is true that, if $\mu,\nu$ are probability measures on $(X,\mathcal{A})$ and $(Y,\mathcal{B})$ respectively and $\mathbb{P}=\mu\times\nu$ is the product measure, then every $S\in\bigcap_n(\mathcal{A}_n\times\mathcal{B})$ is in $\mathcal{A}_\infty\times\mathcal{B}$ up to a zero probability set. In fact, you can show that$$
\mathbb{E}\left[X\;\big\vert\mathcal{A}_n\times\mathcal{B}\right]\to\mathbb{E}\left[X\;\big\vert\mathcal{A}_\infty\times\mathcal{B}\right]
$$in probability as $n\to\infty$, for any integrable $\mathcal{A}\times\mathcal{B}$-measurable random variable $X$ (in fact, it converges almost-surely, but that is not needed here). In particular, if $X$ is $\bigcap_n(\mathcal{A}_n\times\mathcal{B})$-measurable, then the left hand side is just $X$, so $X=\mathbb{E}[X\;\vert\mathcal{A}_\infty\times\mathcal{B}]$ is (almost-surely) $\mathcal{A}_\infty\times\mathcal{B}$-measurable. To prove this limit, it is enough to consider $X=YZ$ where $Y$ is $\mathcal{A}$-measurable and $Z$ is $\mathcal{B}$-measurable, and apply the monotone class lemma. You will probably need Levy's downward theorem to show that $\mathbb{E}[Y\;\vert\mathcal{A}_n]$ tends to $\mathbb{E}[Y\;\vert\mathcal{A}_\infty]$ in probability (I have a proof of the downward theorem on my blog, here).
To complement the 2nd part of D.W.'s answer, we would like to find an $\mathcal H$-polytope (defined by the intersection of closed half-spaces) whose intersection with $\{0,1\}^4$ is
$$\{ (1,1,0,0),(0,1,1,1),(1,0,0,1) \}$$
Let
$$\Phi := \left(x_{1} \wedge x_{2} \wedge \neg x_{3} \wedge \neg x_{4}\right) \vee \left(\neg x_{1} \wedge x_{2} \wedge x_{3} \wedge x_{4} \right) \vee \left(x_{1} \wedge \neg x_{2} \wedge \neg x_{3} \wedge x_{4}\right)$$
Using SymPy,
>>> from sympy import *
>>> x1, x2, x3, x4 = symbols('x1 x2 x3 x4')
>>> Phi = (x1 & x2 & Not(x3) & Not(x4)) | (Not(x1) & x2 & x3 & x4) | (x1 & Not(x2) & Not(x3) & x4)
Converting to the CNF,
>>> to_cnf(Phi,simplify=true)
And(Or(x1, x2), Or(x1, x3), Or(x1, x4), Or(x2, x4), Or(x2, Not(x3)), Or(x4, Not(x3)), Or(Not(x1), Not(x3)), Or(x3, Not(x2), Not(x4)))
Hence,
$$\Phi \equiv \left(x_{1} \vee x_{2}\right) \wedge \left(x_{1} \vee x_{3}\right) \wedge \left(x_{1} \vee x_{4}\right) \wedge \left(x_{2} \vee x_{4}\right) \wedge \left(x_{2} \vee \neg x_{3}\right) \wedge \left(x_{4} \vee \neg x_{3}\right) \wedge \left(\neg x_{1} \vee \neg x_{3}\right) \wedge \left(x_{3} \vee \neg x_{2} \vee \neg x_{4}\right)$$
Note that $\neg x_i$ and $x_i \vee x_j$ can be translated into binary integer programming as $1 - x_i$ and $x_i + x_j \geq 1$, respectively. Thus, an $\mathcal H$-polytope with the desired property is defined as follows
$$\begin{bmatrix} 1 & 1 & 0 & 0\\ 1 & 0 & 1 & 0\\ 1 & 0 & 0 & 1\\ 0 & 1 & 0 & 1\\ 0 & 1 & -1 & 0\\ 0 & 0 & -1 & 1\\ -1 & 0 & -1 & 0\\ 0 & -1 & 1 & -1\end{bmatrix} \begin{bmatrix} x_1\\ x_2\\ x_3\\ x_4\end{bmatrix} \geq \begin{bmatrix} 1\\ 1\\ 1\\ 1\\ 0\\ 0\\ -1\\ -1\end{bmatrix}$$
Verifying in Haskell:
λ> filter (\(x1,x2,x3,x4)->(x1 + x2 >= 1 && x1 + x3 >= 1 && x1 + x4 >= 1 && x2 + x4 >= 1 && x2 - x3 >= 0 && -x3 + x4 >= 0 && -x1 - x3 >= -1 && -x2 + x3 - x4 >= -1)) [ (x1,x2,x3,x4) | x1 <- [0,1], x2 <- [0,1], x3 <- [0,1], x4 <- [0,1] ]
[(0,1,1,1),(1,0,0,1),(1,1,0,0)]
It seems to be correct. However, note that D.W.'s polytope is more economical, as it only uses $5$ half-spaces, whereas my polytope uses $8$ half-spaces. Verifying D.W.'s polytope:
λ> filter (\(x1,x2,x3,x4)->(-x1 - x3 >= -1 && -x2 + x3 - x4 >= -1 && x2 + x4 >= 1 && x1 + x3 >= 1 && x1 + x2 - x3 + x4 >= 1)) [ (x1,x2,x3,x4) | x1 <- [0,1], x2 <- [0,1], x3 <- [0,1], x4 <- [0,1] ]
[(0,1,1,1),(1,0,0,1),(1,1,0,0)] |
If one has an equation such as $$x=-(3.2±0.1)\cos(30.3º±0.2º),$$ how does the error carry through so that one can find the value of $x$? I have found that you have to use minus sine for the error in the cosine, but then how do you deal with the scalar by which the cosine is multiplied, and its error?
You can use the error propagation formula for the product of two numbers:
$$x=AB$$ $$\Rightarrow\left(\frac{\Delta x}{x}\right)^2=\left(\frac{\Delta A}{A}\right)^2+\left(\frac{\Delta B}{B}\right)^2$$
where $\Delta x$ is the error in $x$ etc.
So in your case, you would identify $$A=3.2\pm0.1$$ and $$B=\cos((30.3\pm0.2)^\text{o})$$ (note that $\Delta B$ is not simply $0.2^\text{o}$, you have to work it out, but it seems you are fine with this).
The more general error propagation formula is (for any function $f(A,B,...)$):
$$\sigma_f^2=\sigma_A^2\left(\frac{\partial f}{\partial A}\right)^2+\sigma_B^2\left(\frac{\partial f}{\partial B}\right)^2+...$$
which may be used to e.g. find the error in the cosine etc. Note though this assumes $A$ and $B$ are independent. |
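Plugging the question's numbers into the general formula above (a quick sketch of mine; the partial derivatives are $\partial x/\partial A=-\cos B$ and $\partial x/\partial B=A\sin B$):

```python
import math

# Error propagation for x = -A*cos(B), with A = 3.2 +/- 0.1 and
# B = 30.3 +/- 0.2 degrees. Angles must be converted to radians.
A, dA = 3.2, 0.1
B, dB = math.radians(30.3), math.radians(0.2)

x = -A * math.cos(B)
# sigma_x^2 = (dx/dA)^2 * dA^2 + (dx/dB)^2 * dB^2
dx = math.hypot(math.cos(B) * dA, A * math.sin(B) * dB)

print(f"x = {x:.2f} +/- {dx:.2f}")   # roughly -2.76 +/- 0.09
```

Note that the angular term contributes much less than the amplitude term here, because $\Delta B$ in radians is so small.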
Geometry is all about figures and Triangle is one of the first which one learns in school. Triangles are the most commonly tested topics of GMAT as they present many shortcuts to solve the problem which is vital in this savvy exam. So, let’s know some of the properties of the triangles:
- Sum of all the internal angles of a triangle is 180°.
- The sum of any two sides of a triangle is always greater than the third side.
- The difference of any two sides of a triangle is always less than the third side.
- The longest side of the triangle is always opposite to the largest angle and vice-versa.
Here, the largest angle is angle ABC, so the longest side will be opposite to it, which is side AC.
Similarly, the shortest angle is angle CAB, so the shortest side is BC.
- If two sides of a triangle are equal, then the angles opposite them are equal.
- Exterior angle theorem: in a triangle, any exterior angle is equal to the sum of the two opposite interior angles.
According to this theorem, the sum of angle EFG and angle EGF is equal to angle DEG.
Types of a Triangle:
There are six types of triangle:
- Acute triangle: each angle is less than 90°.
- Obtuse triangle: one of the angles is greater than 90°.
- Right triangle: one angle is equal to 90°.
- Equilateral triangle: all the angles are equal to 60° and all sides are equal in length.
- Isosceles triangle: two angles and two sides of the triangle are equal.
- Scalene triangle: all the sides and angles are different in measurement.
Triangles: Sample Questions
Let’s solve some questions and understand it properly:
Question:
What is the area of the above figure?
Solution:
First of all, let’s cut the figure in two parts since we don’t know the type of quadrilateral it is.
So, after cutting, it becomes,
And,
Part-1 is a right triangle. So, we can use the Pythagoras theorem to find the third side.
So, it is a Pythagoras Triplet of the form of \((3-4-5) \times 4\), which is 12-16-20
So, the third side is 20.
Hence the area is \(\frac{1}{2} \times b \times h = \frac{1}{2} \times 12 \times 16 = 96\).
Part 2. Now it is an equilateral triangle.
So, its area is \(\frac{\sqrt{3}}{4} \times a^{2} = \frac{\sqrt{3}}{4} \times 20^{2} = 100\sqrt{3}\)
Total Area = \(96 + 100\sqrt{3}\)
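As a quick sanity check (my own, not part of the original post), the arithmetic above can be verified in a few lines of Python:

```python
import math

# Part 1 is a 12-16-20 right triangle (the 3-4-5 triple scaled by 4);
# part 2 is an equilateral triangle of side 20.
a, b = 12, 16
hyp = math.hypot(a, b)                       # third side of the right triangle
area_right = a * b / 2                       # (1/2) * base * height
area_equilateral = math.sqrt(3) / 4 * 20**2  # (sqrt(3)/4) * a^2

print(hyp, area_right, area_equilateral)     # 20.0, 96.0, ~173.2
```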
Read more on GMAT Quant: Special Right Triangles.
Let $f: X \to Y$ be continuous and proper (a map is proper iff the preimage of a compact set is compact). Furthermore, assume that $Y$ is locally compact and Hausdorff (there are various ways of defining local compactness in Hausdorff spaces, but let's say this means each point $y \in Y$ has a local basis of compact neighborhoods).
Prove that $f$ is a closed map.
I know that this proof cannot require much more than a basic topological argument. But there's just something that I'm missing.
We can start with $C \subseteq X$ closed, and then try to show that $Y \setminus f(C)$ is open (for each $q \in Y \setminus f(C)$, we would want to find an open set $V_q$ with $q \in V_q \subseteq Y \setminus f(C)$).
Hints or solutions are greatly appreciated. |
In classical mechanics we can describe the state of a system by a set of two numbers {\(\vec{R}, \vec{p}\)} where \(\vec{R}\) is the position of the object and \(\vec{p}\) is its momentum. The law of dynamics (given by Newton's second law, \(\sum{\vec{F}}=m\frac{d^2\vec{R}}{dt^2}\)) describes how the state of the object changes with time. The law of dynamics is deterministic. This means that if you know the initial state {\(\vec{R}_0,\vec{p}_0\)} of the system you can use the law of dynamics to fully determine the future state {\(\vec{R}(t),\vec{p}(t)\)} of the system at any future time \(t\).
The law of dynamics is reversible. This means that if you took two identical systems which start out in different initial states (say {\(\vec{R}_1(t_0),\vec{p}_1(t_0)\)} and {\(\vec{R}_2(t_0),\vec{p}_2(t_0)\)} respectively), they will evolve with time according to the law of dynamics in such a way that they remain in different states. Suppose, however, the states {\(\vec{R}_1(t_0),\vec{p}_1(t_0)\)} and {\(\vec{R}_2(t_0),\vec{p}_2(t_0)\)} evolved into the same state {\(\vec{R}(t),\vec{p}(t)\)}. If the only information you had was {\(\vec{R}(t),\vec{p}(t)\)} then, using the law of dynamics, there would be no way to know for sure whether this state evolved from {\(\vec{R}_1(t_0),\vec{p}_1(t_0)\)} or {\(\vec{R}_2(t_0),\vec{p}_2(t_0)\)}. Because Newton’s law of dynamics is reversible this will not happen and the two systems will stay in different states (say {\(\vec{R}_1(t),\vec{p}_1(t)\)} and {\(\vec{R}_2(t),\vec{p}_2(t)\)}) for all times \(t\). A consequence of this is that if you know the state of either system at time \(t\) you can always use the law of dynamics to determine the state of either system at an earlier time \(t_0\).
In quantum dynamics we assume that the states of different isolated systems are deterministic and reversible. (Do not confuse the states being deterministic with the measurements being deterministic—the latter, of course, is not deterministic.) If two different systems start out in two different states \(|\psi(t_0)⟩\) and \(|\phi(t_0)⟩\) then they will remain in different states and \(|\psi(t)⟩\) and \(|\phi(t)⟩\) will stay different for all times \(t\). A consequence of this is that the inner product \(⟨\psi(t)|\phi(t)⟩\) will remain unchanged.
Schrodinger's time-dependent equation is the single most important equation in quantum mechanics. It is used to determine how any state \(|\psi(t)⟩\) of a quantum system changes with time; at all times \(t\), you'll know what \(|\psi(t)⟩\) is. This equation is also used to determine the probability \(P(L,t)\) of measuring any physical quantity \(L\) at any time \(t\). What the two functions \(|\psi(t)⟩\) and \(P(L,t)\) are depends on the total energy of the system (which is associated with the Energy operator \(\hat{E}\)) and the initial state \(|\psi(0)⟩\) of the system. All you need are these two initial conditions to determine the entire future of the system. In classical mechanics the state of a particle is specified by two numbers—the position \(\vec{R}\) and the momentum \(\vec{p}\).
In quantum mechanics if we knew the initial state \(|\psi_i(0)⟩\) of every particle in the universe, we could use the quantum analogue of Newton's second law—namely Schrodinger's time-dependent equation—to determine the future state \(|\psi_i(t)⟩\) of every particle in the universe. But where quantum mechanics differs from classical mechanics is that the state \(|\psi_i(t)⟩\) does not encapsulate everything about the system—rather it encapsulates everything that
can be known about the system, which isn't everything. Each particle would have its own wavefunction \(\psi_{i,j}(t)\); in general there would be a probability amplitude associated with any physical measurement. Although the probability function \(P(L,t)\) is deterministic, the measurement of any physical quantity \(L\) is inherently probabilistic. Therefore there is an inherent randomness built into the cosmos.
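The reversibility claim above — that \(⟨\psi(t)|\phi(t)⟩\) is conserved — can be checked numerically. A minimal sketch (mine, with a real rotation standing in for a general unitary \(e^{-i\hat{E}t/\hbar}\) on a two-state system):

```python
import math

# A 2x2 real rotation is unitary, so it preserves the complex inner product
# <u|v> = sum conj(u_i) * v_i, illustrating reversible quantum evolution.
theta = 0.7
U = [[math.cos(theta), -math.sin(theta)],
     [math.sin(theta),  math.cos(theta)]]

def apply(U, v):
    return [U[0][0]*v[0] + U[0][1]*v[1], U[1][0]*v[0] + U[1][1]*v[1]]

def inner(u, v):
    return sum(x.conjugate() * y for x, y in zip(u, v))

psi, phi = [1 + 0j, 0j], [0.6 + 0j, 0.8j]
before = inner(psi, phi)
after = inner(apply(U, psi), apply(U, phi))
print(abs(before - after) < 1e-12)   # True: <psi|phi> is conserved
```

Distinct initial states thus stay distinct (their inner product, hence their "angle", never changes), which is exactly the information-preservation property described above.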
This article is licensed under a CC BY-NC-SA 4.0 license. |
Ionic Concentration Gradient Across a Bilayer
A Half-Step Beyond Ideal: Ion Gradients and Transmembrane Potentials
One key way the cell stores free energy is by having different concentrations of molecules in different "compartments" - e.g., extra-cellular vs. intracellular or in an organelle compared to cytoplasm. The molecules playing this role are charged molecules, or ions, such as sodium ($\plus{Na}$), chloride ($\minus{Cl}$), potassium ($\plus{K}$), calcium ($\dplus{Ca}$), and numerous nucleotide species. A brief overview of trans-membrane ion physiology is available.
Although the simplest way to study the physics of free energy storage in such a gradient is by considering ideal particles all with zero potential energy, the reality of the cell is that electrostatic interactions are critical. Fortunately, the most important non-ideal effects of charge-charge interactions can be understood in terms of the usual ideal particles (which do not interact with one another) that do, however, feel the effects of a "background" electrostatic field. Such a mean-field picture is a simple approximation to the electrostatic effects induced primarily by having an excess of one or more charged species on a given side of a membrane - for example, the excess of $\plus{Na}$ ions in the extracellular environment.
Two semi-ideal gases of ions in different potentials
Unlike our examination of two membrane-separated ideal gases, the particles here are explicitly charged and they "feel" the electrostatic potential $\Phi$. The positively charged cations schematically represent $\plus{K}$ ions interacting with the field generated by the imbalance of $\plus{Na}$ ions - the extracellular or "outside" concentration of sodium is maintained at a high value relative to the cytoplasm or "inside" by constant ATP-driven pumping. However,
the effects of the $\plus{Na}$ concentration gradient will only be treated implicitly via the different values for $\inn{\Phi} < \out{\Phi}$.
To be precise, the model consists of $N$ particles that do not interact with one another, but which interact with the external potential $\Phi$ as if each had a charge of $q$, leading to potential energy $q \cdot \Phi_{\mathrm{X}}$ for each particle, where X = "in" or "out". The total volume $V$ is divided into inside and outside so that $\inn{V} + \out{V} = V$, with $\inn{N} + \out{N} = N$ ions populating the two compartments. The whole system is maintained at constant temperature $T$. Particles can pass through the channel shown in the figure, but we assume it is closed so that $N_{\mathrm{in}}$ and $N_{\mathrm{out}}$ are constants: as shown in our discussion of two membrane-separated ideal gases, the assumption is a convenience and not an approximation because the total system volume $V$ and particle-number $N$ are truly constant.
Deriving the free energy
We have two gases of ideal "ions" (that interact with the external potential but not with other ions). Mathematically, we can largely follow our discussion of two membrane-separated ideal gases. The total free energy is the sum of the two ideal gas free energies and the two electrostatic potential energies:

$$F = \fidl(\inn{N},\inn{V},T) + \fidl(\out{N},\out{V},T) + \inn{N}\,q\,\inn{\Phi} + \out{N}\,q\,\out{\Phi}$$
where $\fidl$ is defined in the ideal gas page and $q$ is the ionic charge.
Because it is really the
difference in electrostatic potential which governs the ionic behavior, we define $\dphi = \inn{\Phi} - \out{\Phi}$. In terms of this quantity, we can rewrite the total free energy as

$$F(\inn{N}) = \fidl(\inn{N},\inn{V},T) + \fidl(N-\inn{N},\out{V},T) + \inn{N}\,q\,\dphi + N\,q\,\out{\Phi}$$
Eq. (3) is the free energy as a function of the number of particles inside the membrane (in volume $\inn{V}$), which could be equivalently described using the chemical potential. Inclusion of the electrostatic effects shifts the location of the most probable state, or free energy minimum.
The most probable concentrations: The Nernst equation
If we open the channel and allow exchange of atoms between the compartments, the value of $\inn{N}$ can change. The probability of having $\inn{N}$ atoms in $\inn{V}$ is proportional to the Boltzmann factor of the free energy:

$$P(\inn{N}) \propto e^{-F(\inn{N})/k_B T}$$
The most probable $\inn{N}$ value therefore can be found by determining the minimum of $F$. This will represent the equilibrium point in the thermodynamic limit (very large $N$ - when fluctuations about the most probable $\inn{N}$ will be very small compared to $\inn{N}$ itself). We set $\dee F / \dee \inn{N} = 0$ in Eq. (3), then re-arrange and cancel terms to find

$$\frac{\inn{N}/\inn{V}}{\out{N}/\out{V}} = e^{-q\,\dphi/k_B T}$$
where you should recognize the left-hand side as the ratio of concentrations.
In words, Eq. (6) shows that
the concentrations inside and outside vary according to the Boltzmann factor of the ionic charge times the potential difference. Such an equilibrium is called a Donnan equilibrium. It should be comforting that when $\dphi = 0$, we recover equal concentrations.
Comparison to cellular behavior
As the exercise below will show, for some ions ($\minus{Cl}$, $\plus{K}$) the Nernst equation is a reasonable approximation. This suggests that such ions permeate the membrane passively. For some ions ($\plus{Na}$, $\dplus{Ca}$), the concentration ratios are very different from what would be predicted from the Nernst equation because the cell uses active transport to control them.
Mass action and its limitations
It is always worthwhile to pursue both thermodynamic
and kinetic analyses of any system you really care about, or just to train yourself to consider a problem from multiple perspectives. By comparison to the present case, some of the results from the truly ideal (uncharged) two-compartment system may seem puzzling.
In contrast to the uncharged system, we can see that the transport rates through the channel
cannot be equal in the two directions. Let $k_{io}$ be the inside-to-outside rate constant and $k_{oi}$ be the reverse rate constant. Starting from detailed balance, which says that the overall flows must be equal and opposite, and substituting the Nernst relation (6), we find that

$$\frac{k_{io}}{k_{oi}} = \frac{\out{N}/\out{V}}{\inn{N}/\inn{V}} = e^{\,q\,\dphi/k_B T}$$
In other words, the ratio of rates for an ion channel depends on the potential difference. By itself, this does not contradict the mass action viewpoint (that rate constants are independent of concentrations) ... so long as $\dphi$ is truly constant. But if, more generally, $\dphi$ depends on the relative concentrations of the ion species moving through the channel, then the mass-action picture breaks down. Such a breakdown would occur, for example, if there were two species of ions, one of which could not permeate the membrane and hence was maintained at fixed inside and outside concentrations: in this case, flow of the permeable ion would change $\dphi$ and, in turn, change the rate "constants".
The brief overview of trans-membrane ion physiology may help to clarify the bigger picture of ion/membrane behavior.
References
R. Phillips et al., "Physical Biology of the Cell" (Garland Science, 2009).
B. Alberts et al., "Molecular Biology of the Cell" (Garland Science; many editions available).
Exercises
1. Derive Eqs. (5) and (6).
2. Use Eq. (6) to derive a concentration ratio for $\minus{Cl}$ using $\dphi = -90 \, \mathrm{mV}$ (typical for skeletal muscle) and compare your result to the experimental value of $\sim 1 / 30$. This will require careful consideration of units when multiplying together physical constants.
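A numerical sketch of the chloride exercise (my own numbers and constants, assuming body temperature T = 310 K and the relation $\inn{c}/\out{c} = e^{-q\dphi/k_BT}$ stated in words above):

```python
import math

# Nernst/Donnan concentration ratio for chloride (charge -e) at
# dPhi = -90 mV and T = 310 K.
kT_over_e = 8.617e-5 * 310   # Boltzmann constant in eV/K times T, in volts
q_over_e = -1                # chloride carries charge -e
dphi = -0.090                # potential difference in volts

ratio = math.exp(-q_over_e * dphi / kT_over_e)   # c_in / c_out
print(ratio)   # ~0.034, close to the experimental value of ~1/30
```

The agreement suggests, as the text says, that chloride permeates the membrane passively; repeating the calculation for $\plus{Na}$ gives a ratio far from the measured one.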
This question occurred to me while reading http://arxiv.org/abs/1806.08762/ Any observed sequence is necessarily finite, and any finite sequence is computable, either by explicitly storing all the data and just printing it, or by fitting an $n^{th}$-degree polynomial to an $n$-length sequence, etc.
So there's no such thing as an uncomputable (finite) sequence, whereby the article's title "Experimentally probing the incomputability of quantum randomness", initially struck me as somewhat of a non-sequitur (although the details of the article made somewhat more sense of it).
But pursuing the classical non-sequitur sense that initially occurred to me, suppose you have some process purported to be "truly random". So prove it! At best, you can give me a finite sequence generated by that process. Then I just fit a polynomial to it, and poof, it's pseudo-random. So my question... what analysis of such a finite sequence would prove that in the $n\to\infty$ limit, your finite pseudo-random sequence would be truly random (or, in some epsilon-delta sense, approach true randomness to any desired accuracy)?
For definiteness and simplicity, and for another argument, suppose we're talking about a sequence of random integers from, say, $1$ to $k$. Then there are $k^n$ $n$-length sequences, and not only can we fit a polynomial to each one, we can use some standard enumeration to enumerate them all. Then, choosing a single (random or not) number $1\ldots k^n$ corresponds to choosing an entire (random or not) sequence. But "single numbers" aren't typically considered random in the first place, whereby (one might argue) neither should finite sequences be. So, again, how could you prove that the $n\to\infty$ limit of the sequence generated by a physical (and supposedly random) process is truly random?
What occurred to me as a possible answer is that if "truly_random"$\sim$uncomputable, then consider a sequence of computable functions, say our polynomials $f_n(i), i=1\ldots n$, such that $1\le f_n(i)\le k$ is the $i^{th}$ number of the experimentally-generated $n$-length sequence. Then we'd want to prove that, given a finite number of such computable $f_n$'s, the function $\lim_{n\to\infty}f_n$ is uncomputable. Can that be done (or provably not done), and would it characterize "true randomness"?
Edit---------
Re @orlp's comment, below: Yeah, I guess I phrased that imprecisely. Indeed, you can even construct a more trivial counterexample to my imprecise wording, using the halting problem.
Suppose you claim to have a (let's say
C language) function int halts(int n) whose input is the Godel number of a program, and whose output is $1$ if program n halts, or $0$ if not (and let's say $-1$ if n is the Godel number of a string that's not a valid program, just so halts() is a total function on domain $\mathbb N$). Then $\mathbf{\mbox{halts()}}\colon\mathbb N\to\{0,1,-1\}$.
And let's say ancillary function
int godel(char *filename) reads the contents of text file filename, returning its godel number. Now write the following little C program and put it in file doIhalt.c
int main ( int argc, char *argv[] )
{
    int halts(), godel();
    while ( halts(godel("doIhalt.c")) == 1 )
        ;
    exit ( 0 );
}
So our
doIhalt program runs forever if halts() says it halts, and contrariwise halts immediately if halts() says it runs forever. Thus, if n=godel("doIhalt.c") is the Godel number of that program, the one single number halts(n) is immediately uncomputable. You don't even need a finite sequence at all, as in @orlp's example, to achieve an uncomputable number.
By computability here, however, we're talking about recursive enumerability, whereby the individual elements of our set of integers are given, and the question is whether or not there's a program that enumerates them all. For computer-generated pseudo-random numbers, the answer's a foregone conclusion, "yes". But we're asking about some black-box physical process that's somehow generating number-after-number. And after some finite time we've accumulated a finite sequence of its output. So we obviously have in our hands all those numbers -- that's the premise to begin with. And I was saying, or trying to say, that a finite set of numbers which we "have in our hands" is necessarily computable (meaning r.e.). So how would you say that precisely -- and much more succinctly than I elaborated it above?
In any case, my question then was whether or not the behavior of that black-box physical process is "truly random" -- how could we answer that question given only a finite sequence of the box's output (a prefix string, so to speak).
Edit#2-------------
Re @D.W.'s comment below his answer. Okay, considering that the preceding $\lim_{n\to\infty}f_n$ limit isn't well-defined, the alternative standard characterization involves algorithmic complexity, as follows.
First, to briefly recapitulate, we've got a black-box physical process, generating (maybe random) integers, $r_i$, one after another, in the range $1\le r_i\le k$. And for the first $n$ of them, $r_1,r_2,...,r_n$ we construct a function $f_n(i)=r_i,\ i=1...n$. So $f_{n+1}$ might necessarily be very different than $f_n$, or it might be quite similar.
Note that a program for the simplest $f_1$ function would just store $r_1$ and print it. Any kind of algorithm to compute that one-number-domain function would surely be more complicated than just printing it. Indeed, let $K(f_n)$ denote the Kolmogorov/algorithmic complexity of (the simplest program that computes) function $f_n$. So the first few $f_n$'s probably just store-and-print all the corresponding $r_i,\ i=1...n$.
But eventually, as $n$ gets large enough, storing-and-printing would presumably not be the shortest/simplest way to generate all the necessary $n$ values $r_i,\ i=1...n$. The length/complexity of an algorithm would be shorter than all the necessary data. But, of course, that's
>>only if<< there exists such an algorithm in the first place.
So, for a pseudo-random number generator, we know there's an algorithm. So $\lim_{n\to\infty}K(f_n)=const$ because that same algorithm computes as many random numbers as you like. Otherwise, for "truly random" sequences, the limit diverges because there's no algorithm, and there's ultimately no alternative to storing-and-printing all the data.
I'm not sure whether or not the above satisfies all (or even some of) @D.W.'s objections below, but in any case, given a finite "prefix string" of the black-box's output $r_i,\ i=1...n$, you still can't tell (I don't think) whether or not that $K(\cdot)$ limit converges. |
Here's the $n$-eigenvector proof:
We assume
$A\vec v_i = \lambda_i \vec v_i, \; 1 \le i \le n, \tag 1$
with
$\lambda_i \ne \lambda_j, \; 1 \le i, j \le n; \tag 2$
assume there is a linear dependence between the eigenvectors:
$\displaystyle \sum_1^n a_i \vec v_i = 0, \; \exists [a_i \ne 0, 1 \le i \le n]; \tag 3$
since relations such as (3) are assumed to exist, there is (at least) one having a minimum number of non-zero coefficients $a_i$; we assume (3) is such; we note the number of non-zero $a_i$ must be $\ge 2$, otherwise (3) is of the form
$a_j \vec v_j = 0, \tag 4$
which implies $a_j = 0$ (since $\vec v_j \ne 0$, being an eigenvector), forbidden by hypothesis. Applying $A$ to (3) then gives
$A(\displaystyle \sum_1^n a_i \vec v_i) = 0, \tag 5$
or
$\displaystyle \sum_1^n a_i \lambda_i \vec v_i = 0; \tag 6$
we may assume without loss of generality that $a_1 \ne 0$; if we multiply (3) by $\lambda_1$ we have
$\displaystyle \sum_1^n a_i \lambda_1 \vec v_i = 0; \tag 7$
we subtract (7) from (6):
$\displaystyle \sum_2^n a_i (\lambda_i - \lambda_1) \vec v_i = 0; \tag 8$
since for all $i \ge 2$
$\lambda_i - \lambda_1 \ne 0, \tag 9$
(8) is a linear relation between eigenvectors with fewer non-zero coefficients than (3); this contradiction shows (3) is impossible and hence the eigenvectors are linearly independent. |
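As a quick numerical sanity check of the statement (a toy $2\times2$ example in Python, separate from the proof above):

```python
# Toy check: A has distinct eigenvalues 2 and 3, with eigenvectors
# v1 = (1, 0) and v2 = (1, 1); the proof says v1, v2 must be independent.
A = [[2, 1],
     [0, 3]]

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

v1, lam1 = [1, 0], 2
v2, lam2 = [1, 1], 3

# Confirm the eigenvector equations A v = lambda v
assert matvec(A, v1) == [lam1 * c for c in v1]
assert matvec(A, v2) == [lam2 * c for c in v2]

# Independence in 2D: the determinant of the matrix [v1 | v2] is nonzero
det = v1[0] * v2[1] - v1[1] * v2[0]
assert det != 0
```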
The Context
(In this section I'm just going to explain hypothesis testing, type one and two errors, etc, in my own style. If you're comfortable with this material, skip to the next section)
The Neyman-Pearson lemma comes up in the problem of
simple hypothesis testing. We have two different probability distributions on a common space $\Omega$: $P_0$ and $P_1$, called the null and the alternative hypotheses. Based on a single observation $\omega\in\Omega$, we have to come up with a guess for which of the two probability distributions is in effect. A test is therefore a function which to each $\omega$ assigns a guess of either "null hypothesis" or "alternative hypothesis". A test can obviously be identified with the region on which it returns "alternative", so we're just looking for subsets (events) of the probability space.
Typically in applications, the null hypothesis corresponds to some kind of status quo, whereas the alternative hypothesis is some new phenomenon which you're trying to prove or disprove is real. For example, you may be testing someone for psychic powers. You run the standard test with the cards with squiggly lines or what not, and get them to guess a certain number of times. The null hypothesis is that they're guessing at chance and will get about one in five right (since there's five cards); the alternative hypothesis is that they're psychic and may get more right.
What we'd like to do is minimize the probability of making a mistake. Unfortunately, that's a meaningless notion. There are two ways you could make a mistake. Either the null hypothesis is true, and you sample an $\omega$ in your test's "alternative" region, or the alternative hypothesis is true, and you sample the "null" region. Now, if you fix a region $A$ of the probability space (a test), then the numbers $P_0(A)$ and $P_1(A^{c})$, the probabilities of making those two kinds of errors, are completely well-defined, but since you have no prior notion of "probability that the null/alternative hypothesis is true", you can't get a meaningful "probability of either kind of mistake". So this is a fairly typical situation in mathematics where we want the "best" of some class of objects, but when you look closely, there is no "best". In fact, what we're trying to do is minimize $P_0(A)$ while maximizing $P_1(A)$, which are clearly opposing goals.
Keeping in mind the example of the psychic abilities test, I like to refer to the type of mistake in which the null is true but you conclude the alternative is true as "delusion" (you believe the guy's psychic but he's not), and the other kind of mistake as "obliviousness".

The Lemma
The approach of the Neyman-Pearson lemma is the following: let's just pick some maximal probability of delusion $\alpha$ that we're willing to tolerate, and then find the test that has minimal probability of obliviousness while satisfying that upper bound. The result is that such tests always have the form of a likelihood-ratio test:
Proposition (Neyman-Pearson lemma)
If $L_0, L_1$ are the likelihood functions (PDFs) of the null and alternative hypotheses, and $\alpha > 0$, then the region $A\subseteq \Omega$ which maximizes $P_1(A)$ while maintaining $P_0(A)\leq \alpha$ is of the form
$$A=\{\omega\in \Omega \mid \frac{L_1(\omega)}{L_0(\omega)} \geq K \}$$
for some constant $K>0$. Conversely, for
any $K$, the above test has $P_1(A)\geq P_1(B)$ for any $B$ such that $P_0(B)\leq P_0(A)$.
Thus, all we have to do is find the constant $K$ such that $P_0(A)=\alpha$.
The proof on Wikipedia at time of writing is a pretty typical oracular mathematical proof that just consists in conjecturing that form and then verifying that it is indeed optimal. Of course the real mystery is where this idea of taking a ratio of the likelihoods even came from, and the answer is:
the likelihood ratio is simply the density of $P_1$ with respect to $P_0$.
If you've learned probability via the modern approach with Lebesgue integrals and what not, then you know that under fairly unrestrictive conditions, it's always possible to express one probability measure as being given by a density function with respect to another. In the conditions of the Neyman-Pearson lemma, we have two probability measures $P_0$, $P_1$ which both have densities with respect to some underlying measure, usually the counting measure on a discrete space, or the Lebesgue measure on $\mathbb R^n$. It turns out that since the quantity that we're interested in controlling is $P_0(A)$, we should be taking $P_0$ as our underlying measure, and viewing $P_1$ in terms of how it relates to $P_0$, thus, we consider $P_1$ to be given by a density function with respect to $P_0$.
Buying land
The heart of the lemma is therefore the following:
Let $\mu$ be a measure on some space $\Omega$, and let $f$ be a positive, integrable function on $\Omega$. Let $\alpha > 0$. Then the set $A$ with $\mu(A)\leq\alpha$ which maximizes $\int_A fd\mu$ is of the form
$$\{\omega\in\Omega\mid f(\omega)\geq K\}$$
for some constant $K>0$, and conversely, any such set maximizes $\int f$ over all sets $B$ smaller than itself in measure.
Suppose you're buying land. You can only afford $\alpha$ acres, but there's a utility function $f$ over the land, quantifying, say, potential for growing crops, and so you want a region maximizing $\int f$. Then the above proposition says that your best bet is to basically order the land from most useful to least useful, and buy it up in order of best to worst until you reach the maximum area $\alpha$. In hypothesis testing, $\mu$ is $P_0$, and $f$ is the density of $P_1$ with respect to $P_0$ (which, as already stated, is $L_1/L_0$).
Here's a quick heuristic proof: out of a given region of land $A$, consider some small one meter by one meter square tile, $B$. If you can find another tile $B'$ of the same area somewhere outside of $A$, but such that the utility of $B'$ is greater than that of $B$, then clearly $A$ is not optimal, since it could be improved by swapping $B$ for $B'$. Thus an optimal region must be "closed upwards", meaning if $x\in A$ and $f(y)>f(x)$, then $y$ must be in $A$, otherwise we could do better by swapping $x$ and $y$. This is equivalent to saying that $A$ is simply $f^{-1}([K, +\infty))$ for some $K$. |
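To make the land-buying picture concrete, here is a toy brute-force check in Python (a small made-up discrete example, using exact fractions to avoid round-off): among all regions with $P_0(A)\leq\alpha$, the one maximizing $P_1(A)$ is exactly a likelihood-ratio region.

```python
from itertools import combinations
from fractions import Fraction as F

# Toy discrete space; made-up null P0 and alternative P1
omega = ['a', 'b', 'c', 'd']
P0 = {'a': F(4, 10), 'b': F(3, 10), 'c': F(2, 10), 'd': F(1, 10)}
P1 = {'a': F(1, 10), 'b': F(2, 10), 'c': F(3, 10), 'd': F(4, 10)}
alpha = F(3, 10)  # max tolerated probability of "delusion" (type I error)

def prob(P, A):
    return sum(P[w] for w in A)

# Brute force over all regions: maximize P1(A) subject to P0(A) <= alpha
best = max(
    (set(c) for r in range(len(omega) + 1) for c in combinations(omega, r)
     if prob(P0, set(c)) <= alpha),
    key=lambda A: prob(P1, A),
)

# The winner is exactly the set of points with the highest likelihood
# ratio L1/L0, as the lemma predicts (ratios: a=1/4, b=2/3, c=3/2, d=4)
ratios = {w: P1[w] / P0[w] for w in omega}
assert best == {w for w in omega if ratios[w] >= F(3, 2)}
assert prob(P1, best) == F(7, 10)
```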
Hi John.
A couple of small typos:
$$ x \sim_{f^{\ast}(P \wedge Q)} x' \textrm{ if and only if } x \sim_{ f^{\ast}(P)} x' \textrm{ and } x_{f^{\ast}(Q) } x'. $$
should be:
$$ x \sim_{f^{\ast}(P \wedge Q)} x' \textrm{ if and only if } x \sim_{ f^{\ast}(P)} x' \textrm{ and } x \sim_{f^{\ast}(Q) } x'. $$
The next equation also has a small typo and should be:
$$ f(x) \sim_{P \wedge Q} f(x') \textrm{ if and only if } f(x) \sim_P f(x') \textrm{ and } f(x) \sim_Q f(x'). $$
Also I think the join \\(P \vee Q \\) has two parts:
$$ P \vee Q = \\{\\{11,12,13,22,23\\},\\{21\\}\\} . $$ |
While answering this question about a hypothetical 3-sphere universe $S^3$ expanding with a constant acceleration $\phi$ from a zero initial speed
$$ r=\dfrac{\phi}{2}t^2$$
I started from a generic metric defined in the hyperspherical coordinates:
$$ ds^2 = - c^2 dt^2 + a(t)^2 r^2 d\mathbf{\Omega}^2 $$
where $r$ is the radius, $a(t)$ is a scale factor, and
$$ d\mathbf{\Omega}^2=d\psi^2 + \sin^2\psi\left(d\theta^2 + \sin^2\theta\, d\varphi^2\right) $$
By combining the formulas we obtain
$$ ds^2 = - c^2 dt^2 + \left(\dfrac{\phi}{2}t^2\right)^2 d\mathbf{\Omega}^2 $$
Is it possible to define the scale factor for this metric explicitly as follows?
$$ a=a(t) $$
Thank you for your insight. |
According to Google’s Data Liberation Front (and German privacy laws), users are able to download a complete record of their data from the Google servers. This service is implemented as Google Takeout. I downloaded my Location History data of the last few months and created an animation using Processing. My movement patterns of all recorded days are superposed in one video.
If you want to create one of these by yourself you could download the source code at GitHub.
Data Acquisition
If location history is enabled on your Android device, you can easily download a large JSON file including every single ever recorded location sample of you. This file can easily get tens of megabytes large. Besides the location data, a map image of the area is also needed. I used the
Copy Image function of Google Earth to export this nice view of Berlin:
The image also displays an
Image Overlay of a calibration grid that was added to the export. Google Earth allows adding overlays at given geographic coordinates. A little 4-point calibration gives good results.

Calibration
Google Earth and the Location History protocol use two different notations to express geographic coordinates. While in Google Earth they look like this:
52°33’20.39″N, 13°19’17.47″E (degrees°minutes’seconds”), they would appear looking like this: 52.555663889, 13.321519444 (decimal) in the Location History. You can easily convert between them using: decimal = degrees + minutes/60 + seconds/3600.
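A couple of lines suffice to sketch the conversion (checked against the calibration point quoted above):

```python
def dms_to_decimal(degrees, minutes, seconds):
    """Convert degrees/minutes/seconds to decimal degrees."""
    return degrees + minutes / 60 + seconds / 3600

# 52°33'20.39"N, 13°19'17.47"E from the Google Earth overlay
lat = dms_to_decimal(52, 33, 20.39)
lon = dms_to_decimal(13, 19, 17.47)
assert abs(lat - 52.555663889) < 1e-6
assert abs(lon - 13.321519444) < 1e-6
```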
To match the geographic coordinates \(p\) from the Takeout JSON with pixel coordinates \(p'\) of our map image, we have to apply a calibration. Assuming a projective mapping \(H\) between both coordinate systems exists, which should approximately be the case for relatively small areas of the globe, we can formulate the following equation:
\[ \begin{aligned}
p' & = H \cdot p \\ \begin{pmatrix} x' \\ y' \\ w' \end{pmatrix} & = \begin{pmatrix} h_{11} & h_{12} & h_{13} \\ h_{21} & h_{22} & h_{23} \\ h_{31} & h_{32} & 1 \end{pmatrix} \cdot \begin{pmatrix} x \\ y \\ 1 \end{pmatrix} \\ \end{aligned} \]
To convert the homogeneous pixel coordinates \(p'\) to Euclidean coordinates \( \left( X, Y \right) \), we use the following equations:

\[ \begin{aligned}
X & = \frac{x'}{w'} \\ Y & = \frac{y'}{w'} \\ \end{aligned} \]
Expressing these using \(H\) and the geological coordinates \(\left(x,y\right)\) gives the following equations:
\[ \begin{aligned}
X & = h_{11} x + h_{12} y + h_{13} - X h_{31} x - X h_{32} y \\ Y & = h_{21} x + h_{22} y + h_{23} - Y h_{31} x - Y h_{32} y \\ \end{aligned} \]
In order to determine \(H\) we reformulate them to:
\[ \begin{aligned}
\begin{pmatrix} X \\ Y \\ \vdots \\ \vdots \\ \vdots \end{pmatrix} & = \begin{pmatrix} x & y & 1 & 0 & 0 & 0 & -Xx & -Xy \\ 0 & 0 & 0 & x & y & 1 & -Yx & -Yy \\ & & & & \vdots & & & \\ & & & & \vdots & & & \\ & & & & \vdots & & & \end{pmatrix} \cdot \begin{pmatrix} h_{11} \\ h_{12} \\ h_{13} \\ h_{21} \\ h_{22} \\ h_{23} \\ h_{31} \\ h_{32} \end{pmatrix} \\ b & = A \cdot h \end{aligned} \]
Using four known corresponding points \(\left(x,y\right)\) and \(\left(X,Y\right)\), each adding two equations to the system, we can solve for all eight unknown parameters of \(H\):
\[ \begin{aligned} A^{-1} \cdot b & = h \end{aligned} \]
The following Octave code does the job. The matrices x and X hold our four corresponding point pairs of geographic and pixel coordinates determined from the above overlay image.
% input quad
x = [ 52.555663889 13.321519444
      52.555663889 13.426994444
      52.491488889 13.426994444
      52.491488889 13.321519444 ]

% output quad
X = [  516 223
      1046 192
      1083 727
       532 759 ]

% design matrix
A = [];
for i = 1:size(x,1)
  A = [A;
       x(i,1) x(i,2) 1, 0 0 0, -X(i,1)*x(i,1) -X(i,1)*x(i,2)
       0 0 0, x(i,1) x(i,2) 1, -X(i,2)*x(i,1) -X(i,2)*x(i,2) ];
end

% homography
h = inv(A) * X'(:);
H = reshape([h;1],3,3)'
csvwrite('homography.csv', H);
You could also do the same directly in Java / Processing using LUD, a minimalistic class for LU decomposition on 2d arrays:
// input quad
double[][] x = {
  { 52.555663889, 13.321519444 },
  { 52.555663889, 13.426994444 },
  { 52.491488889, 13.426994444 },
  { 52.491488889, 13.321519444 },
};

// output quad
double[][] X = {
  {  516, 223 },
  { 1046, 192 },
  { 1083, 727 },
  {  532, 759 },
};

// equation system to solve for h
double[][] A = new double[8][];
double[][] b = new double[8][];
for (int i=0; i<4; i++) {
  // coefficients
  A[2*i+0] = new double[] { x[i][0], x[i][1], 1, 0, 0, 0, -X[i][0]*x[i][0], -X[i][0]*x[i][1] };
  A[2*i+1] = new double[] { 0, 0, 0, x[i][0], x[i][1], 1, -X[i][1]*x[i][0], -X[i][1]*x[i][1] };
  // constant terms
  b[2*i+0] = new double[] { X[i][0] };
  b[2*i+1] = new double[] { X[i][1] };
};

// solve Ah = b
double[][] h = new LUD(LUD.copy(A)).solve(b);

// reshape
double[][] H = {
  { h[0][0], h[1][0], h[2][0] },
  { h[3][0], h[4][0], h[5][0] },
  { h[6][0], h[7][0], 1 }
};
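As a cross-check, here is a hypothetical Python re-implementation (solving the same 8×8 system with a small hand-rolled Gaussian elimination instead of LU decomposition); the recovered H should map each geographic calibration point back onto its pixel coordinates to sub-pixel accuracy:

```python
def solve(A, b):
    """Solve A h = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]  # augmented matrix
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    h = [0.0] * n
    for r in range(n - 1, -1, -1):
        h[r] = (M[r][n] - sum(M[r][c] * h[c] for c in range(r + 1, n))) / M[r][r]
    return h

# the same four corresponding point pairs as above
x = [(52.555663889, 13.321519444), (52.555663889, 13.426994444),
     (52.491488889, 13.426994444), (52.491488889, 13.321519444)]
X = [(516, 223), (1046, 192), (1083, 727), (532, 759)]

A, b = [], []
for (xi, yi), (Xi, Yi) in zip(x, X):
    A.append([xi, yi, 1, 0, 0, 0, -Xi * xi, -Xi * yi])
    A.append([0, 0, 0, xi, yi, 1, -Yi * xi, -Yi * yi])
    b += [Xi, Yi]

h = solve(A, b) + [1.0]  # append h33 = 1

# project each geographic point and compare with its pixel coordinates
for (xi, yi), (Xi, Yi) in zip(x, X):
    w = h[6] * xi + h[7] * yi + h[8]
    px = (h[0] * xi + h[1] * yi + h[2]) / w
    py = (h[3] * xi + h[4] * yi + h[5]) / w
    assert abs(px - Xi) < 1e-3 and abs(py - Yi) < 1e-3
```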
Following the book of Friedrich "Dirac operators and riemannian geometry" (AMS, vol 25), I define the generalized Seiberg-Witten equations for $(A,A',\psi , \phi)$, with $A,A'$ two connections and $\psi, \phi$, two spinors:
1)
$D_A ( \psi)=0$
2)
$D_{A'} ( \phi)=0$
3)
$F_+ (A)=-(1/4) \omega (\psi)$
4)
$F_+ (A')=-(1/4) \omega (\phi)$
5)
$ A- A' = Im( \frac{d<\psi|\phi>}{<\psi|\phi>})$
$Im$ is the imaginary part of the complex number.
The gauge group $(h,h') \in Map(M,S^1)$ acts over the solutions of the generalized Seiberg-Witten equations:
$(h,h').(A,A',\psi,\phi)=((1/h)^* A, (1/{h'})^* A', h \psi, h' \phi )$
We have compact moduli spaces, because the moduli space is a closed subset of the product of two compact spaces (the SW moduli spaces).
Moreover, the situation can perhaps be generalized to $n$ solutions of the Seiberg-Witten equations $(A_i ,\psi_i )$:
1)
$D_{A_i}( \psi_i)=0$
2)
$F_+(A_i)= -(1/4) \omega (\psi_i)$
3)
$ A_i- A_j=Im( \frac{d<\psi_i|\psi_j>}{<\psi_i|\psi_j>})$ |
NIRSpec MSA Leakage Subtraction Recommended Strategies
A very small fraction of light seeps through the JWST NIRSpec Micro-Shutter Assembly onto the detectors even when all shutters are closed. This leakage affects the IFU and MOS observations. Strategies to prevent and correct this issue are provided and discussed.
The NIRSpec observing modes Multi-Object Spectroscopy (MOS, using the Micro-Shutter Assembly) and Integral Field Spectroscopy (using the Integral Field Unit; IFU) share the same area on the NIRSpec detectors and are therefore mutually exclusive. When IFU observations are conducted, the Micro-Shutter Assembly (MSA) will be closed to block light coming from the MOS field of view that could contaminate the IFU spectra.
Unfortunately, due to the limited light-blocking performance of the MSA, a small fraction of light coming from the ~9 square arcmin MOS field of view will still reach the detectors, even if the micro-shutters are all commanded “closed”. The limitations are the following:
- A small number of micro-shutters (the so-called "failed-open" shutters) are stuck open and cannot be commanded closed. Any light present in these shutters will always go through to the detectors and create unwanted, parasitic spectra that will overlap with the IFU spectra. In most cases, the contaminating spectra will be comprised of sky background, but the failed-open shutters can intercept sources in some cases.
- The micro-shutters are not perfectly opaque and, even when closed, they will let a small fraction of the incident light through. The attenuation level of the shutters (often referred to as their "contrast") ranges from a few thousand to more than ten thousand. This light "leaking" through the MSA will create two different types of parasitic signal:
  - When a very bright object is present in the MOS field of view, even after an attenuation of a few thousand by the closed micro-shutters its spectrum may still be detectable and could contaminate the IFU spectra.
  - Although the leaked spectrum by an individual closed micro-shutter would be too weak to be detected, the leaked spectra from the many adjacent micro-shutters will overlap each other in the spectral direction, generating a parasitic 'diffuse' signal, called the "MSA leakage", that can be significant. This effect is analogous to having a dispersed background present in wide-field slitless spectroscopy observations, although with attenuation through the MSA. Importantly, however, the MSA contrast varies globally across the MSA, and also shows more significant leakage on small scales at the tops and bottoms of shutters. Hence, the parasitic signal will have spatial structure which is difficult to predict.
The NIRSpec Bright Spoilers article provides strategies to mitigate the effect of leaked signal from bright sources, whether they appear in failed-open shutters or are imprinted through closed microshutters. In the remainder of this article, we focus on the diffuse parasitic signal from the overlap of dispersed background, which we refer to as "MSA leakage."
Significance of the diffuse MSA leakage

Will the MSA leakage be present in my observation?
Yes, the leakage will always be present in MSA and IFU observations. The zodiacal light and telescope stray light will always be present in the MOS field of view and generate MSA leakage. In some observations, additional background from extended astrophysical sources may be present and provide an additional contribution to the MSA leakage.
How can I estimate the importance of the MSA leakage?
The level of the MSA leakage will depend on the brightness of the incident background from which it originates. Its actual level will, therefore, vary from one observation to the other. To allow for an easy scaling of its level we have computed how the MSA leakage would compare to the direct IFU spectrum of the incident background. Table 1 provides information on the relative increase in the background in the IFU spectra due to the MSA leakage. Note that this increase is both wavelength and position dependent. These dependencies are characterized in Deshpande et al. (2018).
Table 1. Relative increase in background in the IFU spectra due to the presence of the MSA leakage
| Instrument Configuration | Relative increase (in %) (calibration lamp measurements) | Relative increase (in %) (calibration lamp measurements) | Relative increase (in %) (zodiacal model prediction) |
| --- | --- | --- | --- |
| Low-spectral resolution configuration | | | |
| CLEAR/PRISM | 1.5 | 1.7 | 1.6 |
| Medium-spectral resolution configurations | | | |
| F070LP/G140M | 30 | 82 | 3.9 |
| F100LP/G140M | 4.8 | 13.4 | 6.2 |
| F170LP/G235M | 1.8 | 3.5 | 5.7 |
| F290LP/G395M | 1.8 | 6.6 | 2.2 |
| High-spectral resolution configurations | | | |
| F070LP/G140H | 1.5 | 2.3 | 4.7 |
| F100LP/G140H | 2.9 | 6.5 | 11.3 |
| F170LP/G235H | 3.7 | 7.4 | 7.4 |
| F290LP/G395H | 5.8 | 19 | 4.6 |

These numbers were derived from test data obtained on the ground using the internal calibration sources of the instrument that provide a uniform illumination of the MOS field of view (i.e. mimicking the presence of an extended background source). Exposures were obtained with an open IFU aperture and then closed, allowing a comparison of the MSA leakage to the “in-field” spectra of the incident illumination obtained with the IFU. On sky, the parasitic signal will differ from this estimate, since the spectrum of the calibration lamps is not the same as the spectrum of the zodiacal background. The right-most column of Table 1 gives a prediction for the MSA leakage from the zodiacal light, estimated by fitting a model to the calibration lamp leakage data, and substituting the JWST background spectrum for the lamp spectrum in the model. Details of this modeling can be found in Deshpande et al. (2018).

1 The 95th percentile refers to the value where 95% of the measured MSA leakage values are smaller, and 5% are larger.

2 Note that the MSA leakage varies both spatially and with wavelength (Deshpande et al. 2018).

How to determine whether MSA leakage calibration exposures are necessary
It is important to note from Table 1 that (except for the calibration lamp measurements using the F070LP/G140M setup) the contribution of the MSA leakage to an IFU spectrum is typically at least 10 times lower than the direct contribution of the background (i.e. as observed in the IFU spectra). As a consequence, before deciding if the MSA leakage should be subtracted, it is necessary to have gone through the steps of selecting a background subtraction strategy.
No need for MSA leakage subtraction cases
If your observation falls into one of the three categories below, then the MSA leakage subtraction is either not necessary, or it is performed automatically as part of the background subtraction scheme. Hence, in these cases no action is necessary.
- No background subtraction planned: If it was deemed unnecessary to subtract the background from the IFU spectra, then it should not be necessary to subtract the MSA leakage due to its fainter nature. This scenario arises for bright objects, where the surface brightness of the expected emission is significantly higher than the zodiacal background.
- Off-scene nodding: When an off-scene nodding scheme is used (with the standard pixel-level subtraction procedure) and if the incident background can be considered uniform over the complete NIRSpec field of view and over the two nodding positions, then the MSA leakage is subtracted at the same time as the background.
- In-scene nodding: When an in-scene nodding scheme is used (with the standard pixel-level subtraction procedure) and if the incident background can be considered uniform over scales of a few arcsec (i.e. the typical amplitude of the nodding steps), then the MSA leakage is subtracted at the same time as the background.

Other Cases
In all other observation scenarios, the subtraction of the MSA leakage can only be performed using dedicated MSA leakage exposures. Obtaining these exposures will add overhead time and using them may have an impact on the final signal to noise ratio. It is therefore important to assess if the MSA leakage subtraction is actually necessary. However, it is difficult to provide a universal recipe for this assessment that typically requires comparing the level of the MSA leakage to the noise level in the observation. There are some obvious cases, though. For example, observing a moon in the IFU that is in close proximity to a planet or its rings. When light from extraneous bright sources falls on the MSA, light leakage calibration exposures can help to remove this unwanted light from science spectra.
The following steps outline how to use the JWST Exposure Time Calculator (ETC) to estimate whether leakage calibration exposures may be useful:
1. Perform a signal-to-noise (S/N) calculation of the scene, ignoring any knowledge of the MSA leakage. Note that if a strong extended astrophysical background other than the zodiacal light and the telescope stray light is present, it may have to be included in the computation by adding this source to the area covered by the object and the background subtraction area (so the background spectrum is subtracted by the ETC). Note that, for the purposes of this ETC calculation, the IFU background subtraction can be modeled by in-scene nodding, even though in-scene nodding may preclude the need for leakage calibration exposures. The S/N ratio obtained using this scene will be called \rm{S/N}_{ETC}^{obj}.
2. Create a new scene by duplicating the scene used for step 1. In this new scene, add a new source corresponding to the incident background scaled using the factor from Table 1 (95th percentile value, divided by 100 as the table contains percent values). This source should extend both over the area covered by the object and the background subtraction area. The S/N ratio obtained using this scene will be called \rm{S/N}_{ETC}^{obj-leak}.
3. Create a new scene by duplicating the one used for step 1. In this new scene, add a new source corresponding again to the incident background scaled using the factor of Table 1 (95th percentile value, divided by 100 as the table contains percent values), but this time make sure it only extends over the area covered by the object and not over the area used for background subtraction. The S/N ratio obtained using this scene will be called \rm{S/N}_{ETC}^{obj+leak}.
It can be shown that:
{\rm{S/N}_{ETC}^{obj-leak}} = {{\rm{S}} \over { \sqrt{\rm S + (1 + \epsilon) B + D}} } \\ {\rm{S/N}_{ETC}^{obj+leak}} = {{\rm{S + \epsilon B}} \over { \sqrt{\rm S + (1+ \epsilon) B + D}} },
where S is the source signal, B is the background signal, and D is the variance of the detector noise, all measured in electrons. Likewise, \rm{\epsilon} is the increase in the background due to MSA leakage, as given in Table 1 (divided by 100 as the table contains percent values).
Then, we can estimate the amount of leakage signal, \rm{\epsilon}B, relative to the noise in the observation:
\Delta(\rm{S/N}) = \rm{S/N}_{ETC}^{obj+leak} - \rm{S/N}_{ETC}^{obj-leak} = {{\rm{\epsilon B}} \over { \sqrt{\rm S + (1+ \epsilon) B + D}} }.
If \rm{\epsilon}B is small compared to the noise in the observation (\Delta(\rm{S/N}) << 1) it is likely that a leakage correction would not be useful, and would simply add noise from the pixel-wise subtraction of the exposures. Alternatively, if \rm{\epsilon}B is significant compared to the noise in the observations (\Delta(\rm{S/N}) \gtrsim 1), users may find that their science goals require the subtraction of the additional signal. Here, we reiterate that the MSA leakage varies spatially across the MSA on both small and large scales, so the MSA leakage signal that must be subtracted will have a variable amplitude, with a 95th percentile characterized by \rm{\epsilon}B.
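To make the criterion concrete, the \Delta(\rm{S/N}) expression can be evaluated directly. In the sketch below, the S, B, and D values are purely illustrative, made-up numbers; \epsilon = 0.113 is the Table 1 zodiacal 95th-percentile value for F100LP/G140H expressed as a fraction:

```python
import math

def delta_sn(S, B, D, eps):
    """Leakage signal eps*B relative to the total noise; S, B, D in electrons."""
    return eps * B / math.sqrt(S + (1 + eps) * B + D)

eps = 0.113  # Table 1 value for F100LP/G140H, divided by 100

# hypothetical background-limited case: leakage exposures likely useful
print(delta_sn(500.0, 20000.0, 400.0, eps))    # comfortably above 1

# hypothetical object-noise-limited case: leakage likely negligible
print(delta_sn(50000.0, 200.0, 400.0, eps))    # well below 1
```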
These guidelines are derived using our best knowledge of the MSA leakage from ground-based data. Without any on-sky experience, a conservative approach is strongly recommended. If there is any doubt about the necessity of leakage calibration exposures, we urge users to err on the side of caution, and include the leakage calibration exposures.
Adjustments to ETC calculations to account for MSA Leakage
Table 2 lists corrections to the ETC calculations that are needed to get accurate S/N estimates, both for cases where leakage exposures are subtracted, and when they are not. In the latter, it follows from the equations above that the only necessary correction is in background limited cases, where the ETC-predicted \rm{S/N}_{ETC}^{obj} is reduced by a factor of \frac{1}{\sqrt{1+\epsilon}}. In the object-noise-limited and detector-noise-limited cases, the background noise is negligible, so the MSA leakage is also negligible. For the former case, we can write the S/N achieved after the pixel-by-pixel subtraction as:
{\rm{S/N}_{real}^{obj-leak}} = {{\rm{S}} \over { \sqrt{\rm S + (1 + 2 \epsilon) B + 2 D}} }.
Then, it can be shown that \rm{S/N}_{real}^{obj-leak} is related to \rm{S/N}_{ETC}^{obj-leak} by the ratios in Table 2. The subtraction of the leakage calibration exposure adds noise in both the background limited and detector-noise limited regimes.
The JWST Exposure Time Calculator (ETC) provides the necessary information to determine whether the calculations are in the object-noise-limited, background-noise-limited, or detector-noise limited regimes. The "Reports" section in the lower right corner provides the S/N for the calculation, along with the source counts (S; in e-/s) and background noise (B; in e-/s). This information can be used to infer the variance on the detector noise, D, and determine the dominant noise source in the planned observations.
Table 2. Corrections to ETC calculations to account for MSA leakage signal

| | Object-noise-limited | Background-noise-limited | Detector-noise-limited |
| --- | --- | --- | --- |
| Correction to be applied to \rm{S/N}_{ETC}^{obj} to account for the presence of the MSA leakage (for cases where the leakage exposures are not subtracted) | 1 | \frac{1}{\sqrt{1+\epsilon}} | 1 |
| Correction to be applied to \rm{S/N}_{ETC}^{obj-leak} to account for additional noise introduced by the leakage background subtraction | 1 | \frac{\sqrt{1+\epsilon}}{\sqrt{1+2 \epsilon}} | \frac{1}{\sqrt{2}} |
Number of MSA leakage exposures
When preparing NIRSpec IFU observations and including MSA leakage exposures, two different options are available when executing a dither or nodding pattern:
A single MSA leakage exposure is obtained only for the position at the beginning of the dither or nodding pattern. An MSA leakage exposure is obtained for each dither or nodding point.
It is recommended to select option 2 if the incident background generating the MSA leakage varies during the dither or nodding pattern. If the background is expected to be uniform over those scales then option 1 is recommended.
References
The contrast performance of the NIRSpec micro shutters and its impact on NIRSpec integral field observations |
This question, concerning the approximation $\frac{163}{\ln(163)}\approx 2^5$, was posted on MO 5 years ago: Why Is 163/ln(163) a Near-Integer?.
It was concluded that it had nothing to do with 163 being a Heegner number, and that it is most likely just a mathematical coincidence.
Playing with my calculator, I noticed that $163\pi\approx2^9$, and $\ln(163)\pi\approx 2^4$, so I thought maybe $\pi$ has something to do with this? I proceeded to press more buttons on my calculator, and came up with $\pi\approx\frac{2^9}{163}+\frac1{2^{11}}\approx\frac{2^4}{\ln(163)}+\frac1{2^{11}}$. What's going on here?
I noticed also that $67$ exhibits something similar: $\frac{67}{\ln(67)}\approx2^4-\frac{67}{2^{10}}$.
I haven't found such relations with other Heegner numbers, but I still remain unsatisfied. Maybe it is the start of some Ramanujan-type infinite series for $\frac1{\pi}$, or..? I am not convinced that these relations are just meaningless numerology. Can someone explain what's going on? And what does $\pi$ have to do with this? I post this hoping that someone who knows more than I do could shed some light on it, and am sorry in advance if this is not the appropriate place to do so.
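For reference, the closeness claims above are easy to verify numerically (just a check of the approximations, nothing deeper):

```python
import math

# 163/ln(163) is very close to 2^5 = 32
assert abs(163 / math.log(163) - 32) < 1e-3

# the two pi approximations from the question
pi1 = 2**9 / 163 + 1 / 2**11
pi2 = 2**4 / math.log(163) + 1 / 2**11
print(abs(math.pi - pi1), abs(math.pi - pi2))  # both agree with pi to better than 1e-6
```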
I came across the following series in my math homework (Fourier Series):
Does the following series converge or diverge? If converges, does it converge absolutely?
$\sum_{n=-\infty}^{\infty}\frac{(-1)^n}{n^2+3}$
Typically, I would be well equipped to answer the question, however the "n=$-\infty$" is giving me trouble. Normally, if "n=$0$", the alternating series test could show convergence, and a direct comparison test with a p-series could show absolute convergence. How does the "$-\infty$" change the problem, if at all? |
For different purposes, I sometimes have to draw a Bode plot. First of all, I would like to know if there is any piece of software that can draw them for me: I give it the transfer function and it gives me the Bode plot (both phase and magnitude). This would save some time on occasion.
Ok, so now the real question. I have the following transfer function.
$$ H(j\omega) = \frac{j \frac{\omega}{\omega_0} }{1 + 3j \frac{\omega}{\omega_0}} $$
The modulus can be calculated as follows (correct me if I am wrong):
$$ |H(j\omega)| = 20 \log_{10}(p) - 20 \log_{10} \left(\sqrt{1+(3p)^2}\right)$$
(where \$ p=\frac{\omega}{\omega_0} \$, with \$ \omega_0=\frac{1}{500 \times 6.37 \times 10^{-7}} \$).
\$ 20 \log_{10}(p) \$ would be easy, but I am not sure how to calculate the second one. How do I calculate the (magnitude) Bode plot of \$ -20 \log_{10} \left(\sqrt{1+(3p)^2} \right) \$? |
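For reference, the magnitude can at least be evaluated numerically (a sketch in Python, assuming the \$ \omega_0 \$ above; the function name is mine, and `scipy.signal` also provides a `bode` helper for transfer functions):

```python
import numpy as np

def bode_mag_db(w, w0):
    """Magnitude in dB of H(jw) = (j*w/w0) / (1 + 3j*w/w0)."""
    p = w / w0
    H = (1j * p) / (1 + 3j * p)
    return 20 * np.log10(np.abs(H))

w0 = 1 / (500 * 6.37e-7)       # ~3140 rad/s, from the component values above
w = np.logspace(1, 7, 500)     # log-spaced frequency sweep, rad/s
mag = bode_mag_db(w, w0)
# Low frequencies: rises at +20 dB/decade (the 20*log10(p) term).
# High frequencies: the -20*log10(sqrt(1+(3p)^2)) term dominates and the
# magnitude flattens out at 20*log10(1/3), about -9.54 dB.
```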
Let us assume that the market portfolio consists of n assets. Given that the return of the market portfolio can be written as $r_m = \sum_{j=1}^{n} w_jr_j$, we have that $\sigma^2_m = E(\sum_{j=1}^{n} w_jr_j - E(\sum_{j=1}^{n} w_jr_j))^2$, but how do I show that $$E(\sum_{j=1}^{n} w_jr_j - E(\sum_{j=1}^{n} w_jr_j))^2 = \sum_{j=1}^{n} w_jCov(r_j,r_m)$$? If I show that the equation above is true, then I can claim that $$E(\sum_{j=1}^{n} w_jr_j - E(\sum_{j=1}^{n} w_jr_j))^2 = \sum_{j=1}^{n} w_jCov(r_j,r_m) = \sum_{j=1}^{n} w_j\beta\sigma^2_m$$
This is how I am trying to prove the result. We know that:
$$\sigma^2_m = E(\sum_{j=1}^{n} w_jr_j - E(\sum_{j=1}^{n} w_jr_j))^2= E[(w_jr_j)^2]-E^2[w_jr_j]$$
Accordingly, we may show that:
$$\sum_{j=1}^{n} w_jCov(r_j,r_m) = E[(w_jr_j)^2]-E^2[w_jr_j]$$
Now: $\sum_{j=1}^{n} w_jCov(r_j,r_m) = \sum_{j=1}^{n} w_jE[r_jr_m]-\sum_{j=1}^{n} w_jE[r_j]E[r_m]=\sum_{j=1}^{n} w_jE[r_j\sum_{j=1}^{n} w_jr_j]-\sum_{j=1}^{n} w_jE[r_j]E[\sum_{j=1}^{n} w_jr_j]=\sum_{j=1}^{n} w_jE[\sum_{j=1}^{n} w_jr_j^2]-E[\sum_{j=1}^{n} w_jr_j]E[\sum_{j=1}^{n} w_jr_j]$
It looks like I am not able to prove the result because $$\sum_{j=1}^{n} w_jE[\sum_{j=1}^{n} w_jr_j^2] \neq E[(w_jr_j)^2]$$
Can you help me, please? |
School of Mathematics and Statistics, Central South University, Changsha, 410083 Hunan, P. R. China.
Received: 08 May 2015; Revised: 10 September 2015; Accepted: 22 October 2015
Abstract
In this paper, we consider the following Kirchhoff-type equations: $$\begin{cases}-\left(a+b\int_{\mathbb{R}^{3}}|\nabla u|^{2}\right)\Delta u+V(x) u=\lambda f(x,u)+u^{5}, & \mbox{in }\mathbb{R}^{3},\\ u(x)>0, & \mbox{in }\mathbb{R}^{3},\\ u\in H^{1}(\mathbb{R}^{3}),\end{cases}$$ where $a,b>0$ are constants and $\lambda$ is a positive parameter. The aim of this paper is to study the existence of positive solutions for Kirchhoff-type equations with a nonlinearity in the critical growth under some suitable assumptions on $V(x)$ and $f(x,u)$. Recent results from the literature are improved and extended. |
Gravitational Force Exerted by a Rod
In this lesson, we'll derive a formula which will allow us to calculate the gravitational force exerted by a rod of length \(L\) on a particle a horizontal distance \(d\) away from the rod, as illustrated in Figure 1. We'll assume that the width and depth of the rod are negligible and approximate all of the mass comprising the rod as being distributed along only one dimension. We'll model the rod as being composed of an infinite number of particles of mass \(dm\). The mass of the rod is given by the infinite sum of all the mass elements (or particles) comprising the rod:
$$M_{rod}=\int{dm}.\tag{1}$$
We're interested in finding the gravitational force exerted by the rod on a particle of some mass \(m\). Now, of course, the notion of a particle is something very abstract—an object of zero size with all of its mass concentrated at a single point (more precisely, a geometrical point, which is another very abstract notion) in space. No object is actually a particle (except a black hole), but if the object is very small compared to the size of the rod then it is reasonable to ignore the dimensions of that object and to approximate it as a point mass.
Newton's law of gravity is defined as
$$\vec{F}_{m_1,m_2}=G\frac{m_1m_2}{r^2}\hat{r}_{1,2}.\tag{2}$$
where \(\vec{F}_{m_1,m_2}\) is the gravitational force exerted by a particle of mass \(m_1\) on another particle of mass \(m_2\), \(r\) is their separation distance, and \(\hat{r}_{1,2}\) is a unit vector pointing from \(m_1\) to \(m_2\). For the moment, we'll just be interested in the
magnitude of the gravitational force exerted on \(m_2\) which is given by
$$F_{m_1,m_2}=G\frac{m_1m_2}{r^2}.\tag{3}$$
When I was telling you the definition of Newton's law of gravity, notice how I was very specific about how Equation (2) (and thus Equation (3) as well) gives the gravitational force exerted by one
particle (or point-mass) on another particle. The famous equation representing Newton's law of gravity only deals with particles and for this reason we cannot use Equation (2) or (3) to compute the gravitational force exerted by a rod on a particle—the rod isn't a particle, it's an extended object. When dealing with the mass of any extended object in classical mechanics—whether it be a rod, disk, ball, or any other geometrical shape—we can think of the entire shape of that object as being built up by an infinite number of point-masses of mass \(dm\). Given how I defined Equations (2) and (3) as being in terms of only particles, we can use Equation (3) to compute the gravitational force exerted by one of the particles of mass \(dm\) exerted on the particle of mass \(m\) as
$$F_g=Gm\frac{1}{r^2}dm,\tag{4}$$
where \(dm\) and \(m\) are the mass of each particle, and \(r\) is their separation distance. As you can see from Figure 1, if \(x\) represents the position of the mass \(dm\) on the \(x\)-axis, then the separation distance between \(dm\) and \(m\) must be \((L+d)-x\). Thus, we can represent Equation (4) as
$$F_g=Gm\frac{1}{((L+d)-x)^2}dm.\tag{5}$$
Equation (5) represents the gravitational force exerted by any particle in the rod on the particle a horizontal distance \(d\) away from the rod. To find the total gravitational force exerted by the rod, we must "add up" (indeed, an infinite number of times) the gravitational forces, \(Gm\,dm/((L+d)-x)^2\), exerted by every particle \(dm\) on the mass \(m\):
$$F_{rod,m}=Gm\int{\frac{1}{((L+d)-x)^2}dm}.\tag{6}$$
Equation (6) does indeed give the magnitude of the gravitational force exerted by \(M_{rod}\) on \(m\)—but the only problem is that we cannot calculate the value of this force since we cannot evaluate the integral in Equation (6). To calculate the integral in Equation (6), the integrand and limits of integration must be represented in terms of the same variable. If we assume that the linear mass density \(λ\) of the rod is constant, then
$$λ=\frac{dm}{dx}$$
and
$$dm=λdx.\tag{7}$$
Substituting Equation (7) into (6), we have
$$F_{rod,m}=Gmλ\int_{0}^{L}\frac{1}{((L+d)-x)^2}dx.\tag{8}$$
As you can see, after representing everything in the integral in terms of \(x\), we have ended up with an integral that is fairly straightforward to calculate. If we let \(u=L+d-x\), then
$$\frac{du}{dx}=-1$$
and
$$dx=-du.\tag{9}$$
Substituting \(u=L+d-x\) and Equation (9) into (8), we have
$$F_{rod,m}=-Gmλ\int_{?_1}^{?_2}\frac{1}{u^2}du.\tag{10}$$
When \(x=0\), \(u=L+d\) and when \(x=L\), \(u=d\). Substituting these limits of integration into Equation (10) and reversing them (which cancels the minus sign), we have
$$F_{rod,m}=Gmλ\int_{d}^{L+d}\frac{1}{u^2}du.\tag{11}$$
Solving the integral in Equation (11), we have
$$Gmλ\int_{d}^{L+d}\frac{1}{u^2}du=Gmλ\biggl[\frac{-1}{u}\biggr]_{d}^{L+d}=Gmλ\left(\frac{1}{d}-\frac{1}{L+d}\right).$$
Thus,
$$F_{rod,m}=Gmλ(\frac{1}{d}-\frac{1}{L+d}).\tag{12}$$
Equation (12) allows us to calculate the magnitude of the gravitational force exerted by a rod on a particle. Since each mass \(dm\) in the rod pulls on the mass \(m\) in the \(-x\) direction, the entire rod pulls on \(m\) in the \(-x\) direction. If we multiply the magnitude of the gravitational force, \(F_{rod,m}\), by \(-\hat{i}\), we obtain a vector with magnitude \(F_{rod,m}\) pointing in the negative \(x\) direction. Thus, the gravitational force exerted by the rod on the mass \(m\) is given by
$$\vec{F}_{rod,m}=Gmλ(\frac{1}{d}-\frac{1}{L+d})(-\hat{i}).\tag{13}$$
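As a quick numerical sanity check on Equation (12) (a sketch, not part of the original derivation; the parameter values are arbitrary), we can compare the closed form against a direct midpoint-rule integration of Equation (8):

```python
import numpy as np

G = 6.674e-11        # gravitational constant, N m^2 / kg^2
L, d = 2.0, 1.0      # rod length and gap distance, m (arbitrary test values)
lam, m = 3.0, 5.0    # linear mass density (kg/m) and particle mass (kg)

# Closed form, Equation (12)
F_closed = G * m * lam * (1.0 / d - 1.0 / (L + d))

# Midpoint-rule integration of Equation (8): the rod spans x in [0, L]
# and the particle sits at x = L + d.
N = 100_000
dx = L / N
x_mid = (np.arange(N) + 0.5) * dx
F_numeric = G * m * lam * np.sum(dx / ((L + d) - x_mid) ** 2)
# The two values agree to many significant figures.
```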
The entire problem we just solved would also apply to a problem from electrostatics which deals with finding the electric force exerted by a charged rod on a charged particle. This is because the law specifying the electric force (namely, Coulomb's law) is of the same mathematical form as Newton's law of gravity.
This article is licensed under a CC BY-NC-SA 4.0 license. |
I know the concerted mechanism for β-keto acids, but neither could I figure out, nor was I able to find out the mechanism for α,β-unsaturated acids. Any help is appreciated.
Although carboxylic acids and their derivatives lose carbon dioxide under a variety of experimental conditions, literature on the decarboxylation of $\alpha,\beta$-unsaturated carboxylic acids is not common. The reason is that $\alpha,\beta$-unsaturated carboxylic acids seemingly do not undergo decarboxylation under mild conditions, except for those bearing substituents at the $\beta$-$\ce{C}$ position, which stabilize the intermediate carbocation.
A wide variety of experimental conditions undoubtedly allows the reaction to proceed by many different mechanisms. For example, the role of acid catalysis in the decomposition of cinnamic acid (and related compounds) has been clearly demonstrated in Ref. 1. The mechanism for this acid-catalyzed decarboxylation is illustrated in the following figure (see I in the diagram). The structures of some examples of these susceptible substrates are also included (Ref. 1) in the diagram:
Some carboxylic acids undergo decarboxylation by pyrolysis. It is an experimental fact that the olefins resulting from pyrolysis of simple $\alpha,\beta$-unsaturated acids (note that simple means at least no substitution on the $\alpha$-$\ce{C}$) are homogeneous and contain terminal unsaturation (Ref. 2). Two different mechanisms leading to the same olefin are formulated in Ref. 2. The first is essentially the same mechanism suggested for the decarboxylation of cinnamic acid and related compounds (see I in the diagram). Before discussing the second mechanism, let's look at the proposed mechanism for $\beta,\gamma$-unsaturated carboxylic acids (see II in the diagram). $\beta,\gamma$-Unsaturated carboxylic acids decarboxylate through a cyclic transition complex to give alkene products. This mechanism mimics the one proposed for the thermal decomposition of $\beta$-keto acids.
As a consequence, an alternative mechanism for the thermal decomposition of $\alpha,\beta$-unsaturated carboxylic acids was also proposed for substrates containing an allylic $\mathrm{sp^3}$ carbon that can extend the double bond (see III in the diagram). It was assumed that under thermal energy the substrate first isomerizes to its $\beta,\gamma$-isomer, which then decarboxylates through a cyclic transition complex to give the same terminal alkene product as mechanism I.

References:

1. W. S. Johnson, W. E. Heinz, "The Acid-Catalyzed Decarboxylation of Cinnamic Acids," J. Am. Chem. Soc. 1949, 71(8), 2913–2918 (DOI: 10.1021/ja01176a098).
2. R. T. Arnold, O. C. Elmer, R. M. Dodson, "Thermal Decarboxylation of Unsaturated Acids," J. Am. Chem. Soc. 1950, 72(10), 4359–4361 (DOI: 10.1021/ja01166a007).
The decarboxylation of cinnamic acid 1 and related structures has been studied by Johnson and Heinz. The reasonable mechanism below was suggested.
W. S. Johnson and W. E. Heinz, J. Am. Chem. Soc., 1949, 71, 2913. |
Last edited by Andrew Munsey, updated on June 15, 2016 at 1:21 am.
: See also Directory:Kinetic Energy Devices
Kinetic energy (SI unit: the joule) is the work needed to accelerate a body from rest to its current velocity. Having gained this energy during its acceleration, the body maintains this kinetic energy unless its speed changes; negative work of the same magnitude would be required to return the body to a state of rest from that velocity. The etymology of 'kinetic energy' is the Greek word for motion (kinesis) and the Greek word for active work (energeia). Therefore the term 'kinetic energy' means through motion do active work. The terms kinetic energy and work and their present scientific meanings date back to the mid 19th century. Early understandings of these ideas can be attributed to Gaspard-Gustave de Coriolis, who in 1829 published the paper titled Du Calcul de l'effet des machines outlining the mathematics of kinetic energy.
: E_k = \int {F} \cdot {d}{s} = \int {v} \cdot {d}{p}
This equation states that the kinetic energy (Ek) is equal to the integral of the dot product of the velocity (v) of a body and the infinitesimal change of the body's momentum (p). It is assumed that the body starts at rest (motionless).
For a point object (a body so small that its size can be ignored) in non-relativistic Newtonian mechanics, this work (and thus the kinetic energy) is equal to:
:E_k = \frac{1}{2} mv^2
where m is the object's Mass and v is the object's speed.
Energy can exist in many forms, for example chemical energy, heat, electromagnetic radiation, potential energy (gravitational, electric, elastic, etc.), nuclear energy, mass, and kinetic energy. Various forms of energy can often be converted to other forms. Kinetic energy can be best understood by examples that demonstrate how it is transformed from other forms of energy and to the other forms. For example a cyclist will use chemical energy that was provided by food to accelerate a bicycle to a chosen speed. This speed can be maintained without further work, except to overcome air-resistance and friction. The energy has been converted into the energy of motion, known as kinetic energy but the process is not completely efficient and heat is also produced within the cyclist. The kinetic energy in the moving bicycle and the cyclist can be converted to other forms. For example, the cyclist could encounter a hill just high enough to coast up, so that the bicycle comes to a complete halt at the top. The kinetic energy has now largely been converted to gravitational potential energy that can be released by freewheeling down the other side of the hill. (There are some frictional losses so that the bicycle will never quite regain all the original speed.) Alternatively the cyclist could connect a dynamo to one of the wheels and also generate some electrical energy on the descent. The bicycle would be travelling more slowly at the bottom of the hill because some of the energy has been diverted into making electrical power. Another possibility would be for the cyclist to apply the brakes, in which case the kinetic energy would be dissipated as heat energy.
Spacecraft use chemical energy to take off and gain considerable kinetic energy to reach orbital velocity. This kinetic energy gained during launch will remain constant while in orbit because there is almost no friction. However it becomes apparent at re-entry when the kinetic energy is converted to heat. Kinetic energy can be passed from one object to another. In the game of billiards, the player gives kinetic energy to the cue ball by striking it with the cue stick. If the cue ball collides with another ball, it will slow down dramatically and the ball it collided with will accelerate to a speed as the kinetic energy is passed on to it. Collisions in billiards are elastic collisions, where kinetic energy is preserved.
Flywheels are being developed as a method of energy storage (see article flywheel energy storage). This illustrates that kinetic energy can also be rotational. Note the formula in the articles on flywheels for calculating rotational kinetic energy is different, though analogous.
Energy conversion is any process of converting energy from one form to another. Energy found in fossil fuels, solar radiation, or nuclear fuels needs to be converted into other energy forms such as electrical, propulsive, or cooling to be useful. Machines are used to convert energy from one form to another. The efficiency of a machine characterizes how well it can convert the energy from one form to another. Energy is converted so that it may be used by other machines or to provide an energy service to society. An internal combustion engine converts the chemical energy in the gasoline to the propulsive energy that moves a car. A solar cell converts the solar radiation into the electrical energy that can then be used to light a bulb or power a computer.
In classical mechanics, the kinetic energy of a "point object" (a body so small that its size can be ignored) is given by the equation E_k = \frac{1}{2} mv^2 where m is the mass and v is the speed of the body. For example, one would calculate the kinetic energy of an 80 kg mass travelling at 40 mph (17.8816 metres per second) as \frac{1}{2} \cdot 80 \cdot 17.8816^2 \approx 12790 joules. Note that the kinetic energy increases with the square of the speed. This means, for example, that if you are traveling twice as fast, you need to lose four times as much energy to stop.
For non-relativistic mechanics, the formula above gives:
:E_k = \frac{1}{2}mv^2
It is sometimes convenient to split the total kinetic energy of a body into the sum of the body's center-of-mass translational kinetic energy and the energy of rotation around the center of mass (the rotational energy):
: E_k = E_t + E_r \,
where:
:Ek is the total kinetic energy
:Et is the translational kinetic energy
:Er is the rotational energy or angular kinetic energy
The translational kinetic energy of a body with constant Mass m, whose center of mass is moving in a straight line with speed v, is, as seen above, equal to
: E_t = \frac{1}{2} mv^2
where:
:m is mass of the body
:v is the speed of the body's center of mass
If a body is rotating, its rotational energy (or angular kinetic energy) is simply the sum of the kinetic energies of its moving parts, and is thus equal to:
: E_r = \frac{1}{2} I \omega^2
where:
:I is the body's moment of inertia
:ω is the body's angular velocity.
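As a small, self-contained illustration of this split (a sketch; the numbers are arbitrary), consider a solid cylinder rolling without slipping, for which I = (1/2)mr^2 and the rolling constraint gives ω = v/r:

```python
def rolling_cylinder_ke(m, r, v):
    """Total KE of a solid cylinder rolling without slipping: E_t + E_r."""
    I = 0.5 * m * r**2        # moment of inertia of a solid cylinder
    omega = v / r             # rolling-without-slipping constraint
    E_t = 0.5 * m * v**2      # translational part
    E_r = 0.5 * I * omega**2  # rotational part, equal to E_t / 2 here
    return E_t + E_r          # equals (3/4) m v^2, independent of the radius

E = rolling_cylinder_ke(m=2.0, r=0.1, v=3.0)   # 13.5 J
```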
The kinetic energy of a system depends on the frame of reference. It is lowest with respect to the center-of-momentum frame, i.e., in a frame of reference in which the center of mass is stationary. In another frame of reference the additional kinetic energy is that corresponding to the total mass and the speed of the center of mass. Thus kinetic energy is a relative measure and no object can be said to have a unique kinetic energy. A rocket engine could be seen to transfer its energy to the rocket ship or to the exhaust stream depending upon the chosen frame of reference. But the total energy of the system (kinetic energy, fuel chemical energy, heat energy, etc.) will be conserved regardless of the choice of measurement frame. The kinetic energy of an object is related to its momentum by the equation:
:E_k = \frac{p^2}{2m}
Einstein's theory of relativity must be used for calculating the kinetic energy of bodies whose speeds are a significant fraction of the velocity of light. As Einstein's formula states:
: E = mc^2.
For an object in motion:
: m = \frac{m_0}{\sqrt{1 - (v/c)^2}} ,
where m0 is the rest mass, v is the object's speed, and c is the speed of light in vacuum.
So:
: E = mc^2 = c^2(\frac{m_0}{\sqrt{1 - (v/c)^2}}) .
The equation shows that the energy of an object approaches infinity as the velocity v approaches the speed of light c, thus it is impossible to accelerate an object across this boundary. By substituting x = (v/c)^2 we can rewrite this as:
: E = c^2(m_0 (1 - x)^{-\frac{1}{2}}) .
The first two Taylor series coefficients of the correction factor f(x) = (1 - x)^{-\frac{1}{2}} are:
: f(0) = (1 - 0)^{-\frac{1}{2}} = 1
: f'(0) = \frac{1}{2}(1 - 0)^{-\frac{3}{2}} = \frac{1}{2} .
So we approximate f(x) \approx f(0) + f'(0)x :
: E = c^2(m_0 (1 - x)^{-1/2}) \approx c^2 (m_0 (1 + \frac{1}{2}x)) = c^2 (m_0 (1 + \frac{1}{2} v^2/c^2 )) = m_0 c^2 + \frac{1}{2} m_0 v^2 ,
indicating that the total energy can be partitioned into the rest mass's energy plus the traditional newtonian energy (at low speeds). When objects move at speeds much slower than light (e.g. in everyday phenomena on Earth), the first two terms of the series predominate. The next term in the approximation is small for low speeds, and can be found by extending the expansion into a Taylor series by one more term:
: E \approx c^2 (m_0 (1 + \frac{1}{2} v^2/c^2 + \frac{3}{8} v^4/c^4 )) = m_0 c^2 + \frac{1}{2} m_0 v^2 + \frac{3}{8} m_0 v^4/c^2 .
For example, for a speed of 10 km/s the correction to the Newtonian kinetic energy is about 0.0417 J/kg (on a Newtonian kinetic energy of 50 MJ/kg) and for a speed of 100 km/s it is about 417 J/kg (on a Newtonian kinetic energy of 5 GJ/kg), etc. For higher speeds, the formula for the relativistic kinetic energy is derived by simply subtracting out the rest mass energy:
: E_k = mc^2 - m_0 c^2 = m_0 c^2(\frac{1}{\sqrt{1 - (v/c)^2}} - 1) .
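The low-speed corrections quoted above are easy to verify numerically. A sketch in Python (the helper rewrites γ − 1 as β²/(s(1+s)), with s = √(1−β²), to avoid the catastrophic cancellation the naive γ − 1 suffers in floating point at small speeds):

```python
import math

c = 299_792_458.0   # speed of light, m/s

def ke_newton(m, v):
    return 0.5 * m * v**2

def ke_relativistic(m, v):
    # gamma - 1 computed in a cancellation-free form for small v/c
    b2 = (v / c) ** 2
    s = math.sqrt(1.0 - b2)
    gamma_minus_1 = b2 / (s * (1.0 + s))
    return m * c**2 * gamma_minus_1

# Correction per kilogram at 10 km/s, approximately (3/8) v^4 / c^2
corr = ke_relativistic(1.0, 10_000.0) - ke_newton(1.0, 10_000.0)
```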
The relation between kinetic energy and momentum is more complicated in this case, and is given by the equation:
:E_k = \sqrt{p^2c^2+m_0^2c^4}-m_0c^2.
This can also be expanded as a Taylor series, the first term of which is the simple expression from Newtonian mechanics. What this suggests is that the formulae for energy and momentum are not special and axiomatic, but rather concepts which emerge from the equation of mass with energy and the principles of relativity.
In quantum mechanics, the expectation value of the electron kinetic energy, \langle\hat{T}\rangle, for a system of electrons described by the wavefunction \vert\psi\rangle is a sum of 1-electron operator expectation values:
:\langle\hat{T}\rangle = -\frac{\hbar^2}{2 m_e}\bigg\langle\psi \bigg\vert \sum_{i=1}^N \nabla^2_i \bigg\vert \psi \bigg\rangle
where m_e is the mass of the electron, \nabla^2_i is the Laplacian operator acting upon the coordinates of the i'th electron, and the summation runs over all electrons.
The density functional theory formalism of quantum mechanics requires knowledge of the electron density only, i.e., it formally does not require knowledge of the wavefunction. Given an electron density \rho({r}), the exact N-electron kinetic energy functional is unknown; however, for the specific case of a 1-electron system, the kinetic energy can be written as
: T[\rho] = \frac{1}{8} \int \frac{ \nabla \rho({r}) \cdot \nabla \rho({r}) }{ \rho({r}) } d^3r
where T[\rho] is known as the von Weizsäcker kinetic energy functional.
Serway, Raymond A.; Jewett, John W. (2004). Physics for Scientists and Engineers, 6th ed., Brooks/Cole. ISBN 0-534-40842-7.
Tipler, Paul (2004). Physics for Scientists and Engineers: Mechanics, Oscillations and Waves, Thermodynamics, 5th ed., W. H. Freeman. ISBN 0-7167-0809-4.
Tipler, Paul; Llewellyn, Ralph (2002). Modern Physics, 4th ed., W. H. Freeman. ISBN 0-7167-4345-0.
School of Mathematics and Statistics, University of St Andrews (2000). Biography of Gaspard-Gustave de Coriolis (1792-1843). Retrieved on 2006-03-03.
Kinetic energy, Wikipedia: The Free Encyclopedia. Wikimedia Foundation. |
I had acquired a ballscrew assembly from one of the loading docks, and was really excited about using it as the main actuator for this desk. (This is the same ballscrew from Kris's first seek&geek) Even though there was no obvious part number or datasheet, I could estimate the stiffness by looking at similar ballscrews and felt pretty happy using this approximation in the rest of my calculations.
The ballscrew assembly has been sitting on my bookshelf for years with the same wrapping I found it with - paper towels and packing tape. This week I took off the wrapping and started dimensioning things... and started this very wild ride.
The face-to-face configuration has more compliance against rolling moments, which makes it more forgiving with misalignment (4x less sensitive to roll than the back-to-back configuration). Assuming maximum race deflection of these ball bearings is 15μm under the nominal max load of 3300N, the linear stiffness should be about 2.2*10^6 N/m.
We can take a guess at load capacity of the ballnut knowing how many there are (too many!) and their diameters. First, taking a look at contact pressure.
So for these balls, $P_{max} < 11.2$ N per ball, for a total load capacity of 550 N, or 123 lbs. That means no attempting to stand on the ballnut by itself.
Ballscrew assembly pre-shenanigans
Long story short, I accidentally discovered the true reason it was wrapped up. I had thought the towels were simply to prevent dust from getting in the bearings, but the true reason was to prevent pine resin from contaminating everything else!
At the base of the ballscrew where the supporting block bearing is, there was a glob of pine resin. In my excitement to measure all the dimensions, I had allowed the ball nut to sink into this resin. So suddenly, the entire assembly was seized!
In retrospect, what I should have done was soak the entire assembly in acetone to dissolve all the grease and resin. But, for some reason, I thought there would be rubber or plastic components that would be unhappy with the solvent bath. So I painstakingly took everything apart, soaked everything in acetone, reassembled the pieces, and finally relubricated all the parts!
Fixing my mistakes
First discovery: the two bearings in the driver block are in a face-to-face configuration. They use 8mm ID flanged bearings, where the outer flange is held in place and the inner races are preloaded by a torqued nut compressing them against the 12mm screw.
Bearing diagram. Solid lines are outer diameter (outer races), dashed lines are inner races, r
ed lines are approximations of ball contact forces and directions
$K_{moment} = \frac{K_{linear} L^2}{4} = 3.1\cdot10^4 \frac{N \cdot m}{rad}$
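A sketch of that calculation (the bearing spacing L below is my assumption, back-solved to be consistent with the stated result, since the actual spacing isn't given here):

```python
def moment_stiffness(k_linear, spacing):
    """Moment stiffness of a preloaded bearing pair: K_linear * L^2 / 4."""
    return k_linear * spacing**2 / 4.0

k_linear = 2.2e6   # N/m, from the race-deflection estimate above
L = 0.237          # m, assumed effective bearing spacing (not stated)
k_moment = moment_stiffness(k_linear, L)   # ~3.1e4 N*m/rad
```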
So that's neat. The next component in the stack is a steel washer. This item was supposed to prevent the ball of resin from gooping up the bearings below, but when the ballnut plunged into the resin it brought up this guy with it.
Next up was reattaching the shaft. The end of the ballscrew had a really fine thread, which got slightly damaged by me pressing the shaft back on. I used a knife to gently nudge the threads back in place, so I could reattach the nut. There's also a washer on the front end of this assembly that protects the inner races of the bearings from resin goop.
I replaced the resin goop ball with a blob of lithium grease. Probably this was unnecessary.
Next up was re-assembly hell. Luckily for me, this ballnut uses an external ball-return-plate. Otherwise I doubt I would have been able to repair this item (or maybe I would've come up with the better idea of dunking the whole assembly in acetone first).
There were originally 50 balls, 2.3mm diameter. Unfortunately I lost one in the repacking process :(
Repacking the balls involved picking them up with tweezers, packing them in the channel, then feeding the shaft such that the balls were evenly spaced. I did this five times in the process of hardware debugging.
Next up was lubrication. Chain oil was too clingy, Tap-magic too light, but machine oil worked fine.
Never again! But, the ballscrew lives! And now I feel justified using this reuse ballscrew in the desk.
This is what the balls are doing on the inside.
Modified from barnesballscrew.com
Maximum contact pressure can be approximated with
$P_{\max} = \frac{P_{\mathrm{load}}}{\frac{\pi}{2}r}$,
where we need to take care to not exceed the Brinell hardness... that's how bearings fail! Assuming the bearings are 52100 bearing steel, hardness should be ~200 BHN.
(wikipedia) |
Anindya wrote:
> However in this case we could slightly vary the definition of a \\(\mathcal{V}\\)-enriched category to use right-to-left composition instead of left-to-right composition.
Hmm, as mentioned in my previous comment I don't see a left/right asymmetry built into the definition of enriched category:
> **Definition.** A **\\(\mathcal{V}\\)-enriched category** \\(\mathcal{X}\\) consists of two parts, satisfying two properties. First:
> 1. one specifies a set \\(\mathrm{Ob}(\mathcal{X})\\), elements of which are called **objects**;
> 2. for every two objects \\(x,y\\), one specifies an element \\(\mathcal{X}(x,y)\\) of \\(\mathcal{V}\\).
> Then:
> a) for every object \\(x\in\text{Ob}(\mathcal{X})\\) we require that
> \[ I\leq\mathcal{X}(x,x) .\]
> b) for every three objects \\(x,y,z\in\mathrm{Ob}(\mathcal{X})\\), we require that
> \[ \mathcal{X}(x,y)\otimes\mathcal{X}(y,z)\leq\mathcal{X}(x,z). \]
The interesting thing is that you can interpret \\(\mathcal{X}(x,y)\\) as 'morphisms from \\(x\\) to \\(y\\)' or 'morphisms from \\(y\\) to \\(x\\)' and the definition makes sense either way!
Are you suggesting that we could switch to using
\[ \mathcal{X}(y,z)\otimes \mathcal{X}(x,y) \leq\mathcal{X}(x,z)? \]
We could, but you'll notice this isn't just a left/right reflection of the usual definition: I could explain the difference to someone who can't tell the difference between left and right! In the usual definition, the two \\(y\\)'s here are next to each other:
\[ \mathcal{X}(x,y)\otimes\mathcal{X}(y,z) \]
while in the modified definition they aren't. |
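For a concrete sanity check, here's a small Python sketch (a toy example of mine, not from the lectures) over the **Cost** quantale \\(\mathcal{V} = ([0,\infty], \ge, +, 0)\\). Since \\(+\\) is commutative, both composition conventions hold or fail together:

```python
import itertools

INF = float("inf")

# A toy Cost-enriched category: X[x][y] is a "distance" from x to y.
# The poset order on Cost is *reversed*, so the axiom I <= X(x,x) reads
# numerically as X[x][x] <= 0, and composition reads X(x,y)+X(y,z) >= X(x,z).
X = [[0, 3, 5],
     [INF, 0, 2],
     [INF, INF, 0]]
n = len(X)

assert all(X[x][x] <= 0 for x in range(n))          # axiom (a)

triples = list(itertools.product(range(n), repeat=3))
left_to_right = all(X[x][y] + X[y][z] >= X[x][z] for x, y, z in triples)
right_to_left = all(X[y][z] + X[x][y] >= X[x][z] for x, y, z in triples)
assert left_to_right and right_to_left
print("both composition conventions hold")
```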
$2.11\times10^{135}$
If I'm not mistaken, the following grid gives the optimal result:
where there are 990/5 = 198 layers with 2 vertices.
Let's calculate the number of paths. I number the 2-vertex layers from 0 to 197. CD is the 0th, GH is the 197th. There are 3 different possibilities at each transition between two layers:
1) straight
/
/ __
2) vertical + straight
__
|\ |
| \ |
3) zigzag
__ __
\/ \
/\ _\
A rule here is that you can make a vertical line only at the beginning of a transition. I call types 1 and 2 open and type 3 closed. After an open transition you can have any other transition, but after a closed one you can have only type 1. Let $a_i$ denote the number of paths which lead to an open layer $i$, and $b_i$ the number of paths which lead to a closed layer $i$.
$a_0 = 2$, $b_0 = 0$ (see the vertical-line-rule above).
An open pattern can be followed by 4 open continuations (types 1 and 2) or by 2 closed ones. A closed pattern can be followed only by 2 open ones (type 1). Thereby:$$ \begin{pmatrix} a_{n} \\ b_{n} \end{pmatrix} =\begin{pmatrix} 4 & 2\\ 2 & 0 \end{pmatrix}\begin{pmatrix} a_{n-1} \\ b_{n-1} \end{pmatrix} $$
Finally, to reach the B-point after the last layer we can use 2 ways if we ended with an open pattern and 1 way with a closed one. So the total number of paths is:
$$ \begin{pmatrix}2&0\end{pmatrix}\begin{pmatrix} 4 & 2\\ 2 & 0 \end{pmatrix}^{197}\begin{pmatrix} 2 \\ 1 \end{pmatrix} $$
The result is $10^{135.325} = 2.11\cdot10^{135}$.
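The matrix power is easy to evaluate exactly with a short Python sketch (exact integer arithmetic, no floating point):

```python
# Exact path count: row vector (2, 0) . M^197 . column vector (2, 1),
# with M = [[4, 2], [2, 0]] the transition matrix between layers.
def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def mat_pow(M, n):
    R = [[1, 0], [0, 1]]          # identity
    while n:
        if n & 1:
            R = mat_mul(R, M)
        M = mat_mul(M, M)
        n >>= 1
    return R

M197 = mat_pow([[4, 2], [2, 0]], 197)
total = 2 * (2 * M197[0][0] + 1 * M197[0][1])
print(total)                       # a 136-digit integer, ~2.11e135
```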
To be sure I haven't made mistakes, I have compared my calculations for a graph with 20 vertices and 45 edges against a recursive program; they agree.
A guess-based "inductive argument" that you most probably can't do better can be found below.
The old answer:
@Falk Hüffner gave the improved number $6.76 \cdot 10^{130}$ and the fact that you need to use 15 edges per "link" in the "chain", but there is still no proof and no example of the "link" which allows you to achieve this number. So let me do it here and give some extrapolations which allow one to imagine a probable optimal solution.
First,
my chain of thought: I noticed that @Henning Makholm (3 edges -> 2 paths), @Roland (5 edges -> 4 paths) and @f'' (9 edges -> 15 paths) can be reached easily from the complete graphs (on 3, 4 and 5 vertices respectively) by rejecting the most useless edges. In the case of 3 vertices there are no edges to reject. In the case of 4 and 5 vertices you need to reject the one which connects A to B directly (and adds only 1 path).
Now, if we take the complete graph on 6 vertices and reject the edges which connect A to B via only 1 vertex, we get a graph which has 10 edges and 20 paths. This leads to $6.34 \cdot 10^{128}$ and is quite close to the result with 5 vertices.
Here comes
the example for the current max-paths: But you can improve the result if you take 7 vertices and reject the edges which connect A to B via only 1 vertex. You will have 96 paths per 15 edges, and $6.76 \cdot 10^{130}$ paths for 990 edges. To do so you need to take the complete graph on 5 vertices (10 edges), connect any 3 of them to A and the other 2 to B.
The complete graph on 5 vertices allows you to reach any point from any other point in $1+3+3\cdot 2+3\cdot 2\cdot 1 = 16$ paths. Plus you have 3 different ways to reach A and 2 ways to reach B, which leads to $16 \cdot 3 \cdot 2 = 96$ paths. Then $96^{990/15} \approx 6.76 \cdot 10^{130}$.
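The arithmetic checks out in a couple of lines of Python (exact integers):

```python
# 16 internal paths through the K5 core, times 3 attachments to A and 2 to B
paths_per_link = 16 * 3 * 2
links = 990 // 15               # 66 links in the chain
total = paths_per_link ** links
print(f"{float(total):.2e}")    # ~6.76e+130
```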
And
the extrapolation: When you try the same trick with 8 vertices, you get $(65\cdot3\cdot3)^{990/(15+3+3)} \approx 2.83\cdot 10^{130}$, which is already smaller, and since 990 is not divisible by 21 the actual result will be even worse. Let's now also try to reject the paths via 2 vertices. This is achieved in the following configuration: And the result here is exactly the same as with 7 vertices: 15 edges and 96 paths.
So I suppose that to improve this you need to consider 9 vertices and reject all paths via 0, 1 and 2 vertices (I still need to figure out how to build and count this; maybe one should even reject all paths which go through only 2 vertices).
Now, it looks like you need to add 2 vertices and reject paths one vertex longer each time. Finally, we can guess here that the most optimal result will be a complete graph on X vertices with all paths via $0,1,2,\ldots,(X-5)/2$ vertices excluded. So if you stretch it between A and B you should see a chain with a width of ~2 vertices.
P.S. Another @Falk Hüffner result, $8.16 \cdot 10^{130}$ with 18 edges and 240 paths in a "link", is achieved in the following configuration of the link:
This configuration follows the general-sense rule which I mentioned above: you need to take a complete graph and get rid of the most useless links (those which contribute to the shortest paths). Unfortunately it was hard for me to consider all possibilities on paper, but my program confirmed that there are exactly 240 paths.
Electronic Journal of Probability
Electron. J. Probab., Volume 14 (2009), paper no. 11, 314-340.
Heat kernel estimates and Harnack inequalities for some Dirichlet forms with non-local part
Abstract
We consider the Dirichlet form given by $$ {\cal E}(f,f) = \frac{1}{2}\int_{R^d}\sum_{i,j=1}^d a_{ij}(x)\frac{\partial f(x)}{\partial x_i} \frac{\partial f(x)}{\partial x_j} dx$$ $$ + \int_{R^d \times R^d} (f(y)-f(x))^2J(x,y)dxdy.$$ Under the assumption that the ${a_{ij}}$ are symmetric and uniformly elliptic and with suitable conditions on $J$, the nonlocal part, we obtain upper and lower bounds on the heat kernel of the Dirichlet form. We also prove a Harnack inequality and a regularity theorem for functions that are harmonic with respect to $\cal E$.
Article information
Source: Electron. J. Probab., Volume 14 (2009), paper no. 11, 314-340.
Dates: Accepted: 2 February 2009. First available in Project Euclid: 1 June 2016.
Permanent link to this document: https://projecteuclid.org/euclid.ejp/1464819472
Digital Object Identifier: doi:10.1214/EJP.v14-604
Mathematical Reviews number (MathSciNet): MR2480543
Zentralblatt MATH identifier: 1190.60069
Subjects: Primary: 60J35: Transition functions, generators and resolvents [See also 47D03, 47D07]. Secondary: 60J75: Jump processes.
Rights: This work is licensed under a Creative Commons Attribution 3.0 License.
Citation
Foondun, Mohammud. Heat kernel estimates and Harnack inequalities for some Dirichlet forms with non-local part. Electron. J. Probab. 14 (2009), paper no. 11, 314--340. doi:10.1214/EJP.v14-604. https://projecteuclid.org/euclid.ejp/1464819472 |
Often when I teach students at our Business School they have a hard time understanding compact linear programming (LP) formulations. So here it is, a short introduction to some of the concepts you need to know for understanding compact LP formulations.
Sets
A set is a group of elements, e.g. $\{1,2,4\}$ is a set with 3 elements, namely, $1,2$ and $4$ and $A=\{(2,3),(4,5),(6,8),(5,6)\}$ is a set called $A$ with 4 elements (pairs), namely, $(2,3),(4,5),(6,8)$ and $(5,6)$. Note that in the last case each element is a pair $(i,j)$. Sets containing pairs are often used when we formulate LPs based on network problems where the pair $(i,j)$ denote the arc/edge from node $i$ to node $j$.
You can take the union of sets: \[\{1,2,3\} \cup \{3,4\} = \{1,2,3,4\},\] subtract elements: \[\{1,2,3,4,5\} \setminus \{3,4\} = \{1,2,5\},\] or the intersection: \[\{1,2,3\} \cap \{3,4\} = \{3\}.\]
Some special sets are: $\emptyset=\{\}$ – the empty set, $\mathbb{N} = \{1,2,3,\ldots\}$ – the set of natural numbers, $\mathbb{N_0} = \{0,1,2,3,\ldots\}$ – the set of non-negative integers, and $\mathbb{Z} = \{ \ldots,-2,-1,0,1,2,\ldots \}$ – the set of integers.
A network $G$ is described using a pair consisting of two sets $G=(V,A)$. Here $V$ denotes the set of nodes and $A$ the set of arcs (if directed) or edges (if undirected). Given the network below we have that $V=\{1,2,3,4\}$ and $A=\{a_1,a_2,a_3,a_4,a_5\}=\{(4,2),(1,2),(2,3),(1,4),(3,4)\}$. Note that we write the arcs in two different ways, using either elements $a_i$ or pairs. Moreover, we always have the tail of the arc as the first number in the pair.
Subsets can be described using set-builder notation: $A_3=\{(i,j)\in A : i=3 \text{ or } j=3\} =\{(i,j)\in A \mid i=3 \text{ or } j=3\} = \{(2,3),(3,4)\}$. We read it as: $A_3$ equals the set of $(i,j)$ in $A$ such that (satisfying) either $i$ or $j$ equals $3$. Note that the symbols $:$ and $\mid$ have the same meaning (such that).
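If you want to experiment, these set operations map directly onto Python's built-in sets, and set-builder notation becomes a set comprehension:

```python
# the network from the example above
A = {(4, 2), (1, 2), (2, 3), (1, 4), (3, 4)}

# set-builder notation as a set comprehension
A3 = {(i, j) for (i, j) in A if i == 3 or j == 3}
print(A3)                        # -> {(2, 3), (3, 4)} (order may vary)

print({1, 2, 3} | {3, 4})        # union: {1, 2, 3, 4}
print({1, 2, 3, 4, 5} - {3, 4})  # set difference: {1, 2, 5}
print({1, 2, 3} & {3, 4})        # intersection: {3}
```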
Sums based on sets
Sums may be written in different ways. Let $I=\{1,2,3,4\}$. Then we have
\[
\sum_{i\in I} x_i = \sum_{i=1}^{4} x_i = \sum_{i\in\{1,2,3,4\}} x_i = x_1+x_2+x_3+x_4. \]
Other examples:
\[
\sum_{(i,j)\in\{(i,j)\in A : i=3 \text{ or } j=3\} } x_{ij} = \sum_{(i,j)\in A_3} x_{ij} = x_{23}+x_{34} \]
and
\[
\sum_{(2,j)\in A} x_{2j} = x_{23}. \]

Constraints
Often constraints are written in a compact way to avoid writing a lot of equations explicitly, so \[\sum_{i \in \left\{ 1,6 \right\}} x_{ij} \leq b_j,\; \forall j \in \left\{ 8,12 \right\},\] can be written explicitly as
\begin{align}
x_{1,8} + x_{6,8} &\leq b_8, \\ x_{1,12} + x_{6,12} &\leq b_{12}. \\ \end{align}
Note the constraint consists of two parts: 1) the equation and 2) a part describing which index values we consider. The second part contains the $\forall$-sign, which is read “for all”. Think of it as: all fixed index values in the second part must be inserted in the equation one at a time. That is, we have a constraint for each combination of indices in the second part. As a result, a constraint where an index appears both in the sum and in the second part is not valid. Finally, note that sometimes $x_{ij}$ is written $x_{i,j}$; this is done to avoid confusion if the indices are large, e.g. $x_{121}$ could be interpreted as $x_{1,21}$ or $x_{12,1}$.
Let us do another example: \[\sum_{(i,j)\in A} x_{ij} - \sum_{(j,i)\in A} x_{ji} \leq b_i,\; \forall i\in V. \] Here we have a constraint for each node in $V$, i.e. if we consider the network above, 4 in total. So we have to use the values $1,2,3$ and $4$ for index $i$. For $i=2$ the constraint becomes:
\[
\sum_{(2,j)\in A} x_{2j} - \sum_{(j,2)\in A} x_{j2} = x_{23} - (x_{12}+x_{42}) \leq b_{2}. \]
Similar for $i=1,3,4$ we have:
\begin{align}
x_{14} +x_{12} &\leq b_{1}, \\ x_{34} - x_{23} &\leq b_{3}, \\ x_{42} - x_{14}-x_{34} &\leq b_{4}. \\ \end{align}
Last example: \[\sum_{(i,j)\in A} x_{ij}+y_k \leq z_i,\; \forall i\in V, k=1,4. \] We have $4\cdot 2=8$ constraints which are:
\begin{align}
x_{14} +x_{12}+y_1 &\leq z_{1} \text{ ($i=1$ and $k=1$),} \\ x_{23} +y_1 &\leq z_{2} \text{ ($i=2$ and $k=1$),} \\ x_{34} +y_1 &\leq z_{3} \text{ ($i=3$ and $k=1$),} \\ x_{42} +y_1 &\leq z_{4} \text{ ($i=4$ and $k=1$),} \\ x_{14} +x_{12}+y_4 &\leq z_{1} \text{ ($i=1$ and $k=4$),} \\ x_{23} +y_4 &\leq z_{2} \text{ ($i=2$ and $k=4$),} \\ x_{34} +y_4 &\leq z_{3} \text{ ($i=3$ and $k=4$),} \\ x_{42} +y_4 &\leq z_{4} \text{ ($i=4$ and $k=4$).} \\ \end{align} |
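To see the expansion mechanically, here is a small Python sketch that generates the flow constraints of the earlier example as plain strings (no solver involved; the term order may differ from the equations above):

```python
V = [1, 2, 3, 4]
A = [(4, 2), (1, 2), (2, 3), (1, 4), (3, 4)]

def constraint(i):
    # sum over arcs leaving node i minus sum over arcs entering node i
    outs = [f"x_{a}{b}" for (a, b) in A if a == i]
    ins = [f"x_{a}{b}" for (a, b) in A if b == i]
    lhs = " + ".join(outs) if outs else "0"
    if ins:
        lhs += " - " + " - ".join(ins)
    return f"{lhs} <= b_{i}"

for i in V:                      # one constraint per element of the index set
    print(constraint(i))
```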
The class $BPP$ contains all the languages decided by a probabilistic Turing machine in polynomial time with probability of success more that 2/3 for every input.
The class $\Sigma^p_2$ contains all the languages $L$ for which there is a polynomial time Turing machine $M$ and a polynomial function $q : \mathbb{N} \rightarrow \mathbb{N}$ such that: $$ x \in L \iff \exists u \in \{0,1\}^{q(|x|)} \forall v \in \{ 0,1 \}^{q(|x|)} M(u,x,v)=1$$ Define $\Pi^p_2=\{\bar{L} : L \in \Sigma^p_2 \}$.
The theorem states that the class $BPP$ is contained in the intersection of $\Sigma^p_2$ and $\Pi^p_2$.
To prove the theorem, it is first shown that for every set $S \subseteq \{0,1\}^m$ with $|S| \leq 2^{m-n}$ and every $k$ vectors $u_1, \ldots, u_k$ $$\bigcup_{i=1}^k(S+u_i) \neq \{0,1\}^m$$ where $S+u = \{ x+u : x \in S \}$ and $+$ denotes addition modulo 2, i.e. bitwise XOR.
It is also proved that for every set $S \subseteq \{0,1\}^m$ with $|S| \geq (1-2^{-n})2^m$ and every k vectors $u_1, \ldots, u_k$ $$\bigcup_{i=1}^k(S+u_i) = \{0,1\}^m$$
I don't get why these claims imply that if a language is in $BPP$, then $$ \exists u_1, \ldots,u_k \in \{ 0,1 \}^m \forall r \in \{ 0,1 \}^m \bigvee_{i=1}^k M(x,r \oplus u_i) = 1 $$
How do the claims about sets of binary strings imply the computation above?
What I don't understand is how the translations preserve the original random strings, and how a small set of translates can cover all possible random strings.
Today’s post by Clément Canonne.
Following the Boolean monotonicity testing bonanza, here’s an open problem. In short,
does adaptivity help for monotonicity testing of Boolean functions? Problem: Consider the problem of monotonicity testing for Boolean functions on the hypercube. Given oracle access to \(f\colon \{0,1\}^n \to \{0,1\}\), we wish to decide if \(f\) is (i) monotone vs. (ii) \(\epsilon\)-far from monotone (in Hamming distance). For either the one-sided or two-sided version of the problem, what is the exact status of adaptive testers? State of the art: – Fischer et al. [FLN+02] showed one-sided non-adaptive testers require \(\sqrt{n}\) queries. This implies an \(\Omega(\log n)\) lower bound for one-sided adaptive testers. – Chen et al. [CDST15] proved that two-sided non-adaptive testers require (essentially) \(\Omega(\sqrt{n})\) queries. This implies an \(\Omega(\log n)\) lower bound for 2-sided adaptive testers. – Khot et al. [KMS15] recently gave a one-sided non-adaptive tester making \(\tilde{O}(\sqrt{n}/\epsilon^2)\) queries. The story is essentially complete for non-adaptive testing. Comments: As of now, it is not clear whether adaptivity can help. Berman et al. [BRY14] showed the benefit of adaptivity for Boolean monotonicity testing over the domain \([n]^2\) (switch the \(2\) and the \(n\) from the hypercube). A gap provably exists between adaptive and non-adaptive testers: \(O(1/\epsilon)\) vs. \(\Omega(\log(1/\epsilon)/\epsilon)\). References:
[FLN+02] E. Fischer, E. Lehman, I. Newman, S. Raskhodnikova, R. Rubinfeld, and A. Samorodnitsky.
Monotonicity testing over general poset domains. Symposium on Theory of Computing, 2002
[BRY14] P. Berman, S. Raskhodnikova, and G. Yaroslavtsev.
\(L_p\) testing. Symposium on Theory of Computing, 2014
[CDST15] X. Chen, A. De, R. Servedio, L.-Y. Tang.
Boolean function monotonicity testing requires (almost) \(n^{1/2}\) non-adaptive queries. Symposium on Theory of Computing, 2015
[KMS15] S. Khot, D. Minzer, and S. Safra.
On monotonicity testing and Boolean Isoperimetric type theorems. ECCC, 2015 Erratum: a previous version of this post stated (incorrectly) lower bound of \(\Omega(\sqrt{n}/\epsilon^2)\). This has been corrected to \(\Omega(\sqrt{n})\). |
I am trying to prove the following Knapsack approximation algorithm, the problem definition:
Input:
A set $S$ of $n$ objects that contains weights and values:
$w_1,w_2,\ldots,w_n$ (weights) $v_1,v_2,\ldots,v_n$ (values)
$W$ — The total weight bound.
Scaling factor $0 < c < 1$.

Output: Let $\mathrm{OPT}(S)$ be the optimal solution of the problem (a set of items of maximum total value whose total weight is at most $W$). I want to find a set $T$ whose value $\sum_{i \in T} v_i$ is at least $c \cdot \sum_{i \in \mathrm{OPT}(S)} v_i$.

The algorithm: for $i$ from 1 to $n$: $w'_i = c \cdot w_i$. Then run the dynamic programming algorithm with the scaled items (table size $n \times cW$) — Knapsack dynamic programming (Definition A).
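To make the algorithm concrete, here is a minimal Python sketch. Note that $c \cdot w_i$ need not be an integer, so some rounding rule is required before the DP can be applied; rounding down is my assumption here, since the statement leaves it open.

```python
import math

def knapsack_dp(weights, values, W):
    # classic 0/1 knapsack DP over integer capacities 0..W
    dp = [0] * (W + 1)
    for w, v in zip(weights, values):
        for cap in range(W, w - 1, -1):
            dp[cap] = max(dp[cap], dp[cap - w] + v)
    return dp[W]

def scaled_knapsack(weights, values, W, c):
    # scale weights and capacity by c; flooring is an assumed rounding rule
    scaled = [math.floor(c * w) for w in weights]
    return knapsack_dp(scaled, values, math.floor(c * W))

print(knapsack_dp([2, 3, 4], [3, 4, 5], 5))          # 7
print(scaled_knapsack([2, 3, 4], [3, 4, 5], 5, 0.5))  # 7
```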
I am not sure what is the exact reason why this is working. I know that because I am only scaling the weights, the values are still the same, and the ratio between the weight/value of each item is remaining almost the same (depending on $c$).
I am also required to prove the correctness of this approximation algorithm by proving the following lemma:
For every $i \in T$ there are items $i_1,\ldots,i_k \in \mathrm{OPT}(S)$ such that $v_i \geq c \sum_{j=1}^k v_{i_j}$.
The first direction of the proof is immediate if $i \in T \cap \mathrm{OPT}(S)$.
I got lost trying to prove the other side when $i \notin \mathrm{OPT}(S)$.
I would be thankful to get some explanation and guidance. |
Reading Toda's original paper, it's not clear that a general application of $\cdot$ has meaning, however it may have been extended later.
Toda introduces three operators: $\oplus\cdot$, $\mathsf{BP}\cdot$ and $\mathsf{C}\cdot$, so the $\cdot$ is, in this context, not an independent piece of notation. Toda is extending the $\mathsf{BP}\cdot$ operator introduced by Schöning (but with uglier notation - $BP\mathscr{C}$ for the operator applied to class $\mathscr{C}$).
The three operators give (intuitively) the closures of classes under three different types of reductions. I'll reproduce the definition (and intuitive explanation) for $\mathsf{BP}\cdot$ for Toda's paper (p. 867):
$L \in \mathsf{BP}\cdot\mathbf{K}$ if there exist a set $A \in \mathbf{K}$, a polynomial $p$, and a constant $\alpha > 0$ such that, for all $x\in \Sigma^{\ast}$,
$$
\text{Prob}(\{w\in\{0,1\}^{p(|x|)}:x\#w\in A \leftrightarrow x \in L\}) \geq \frac{1}{2}+\alpha
$$
The intuitive definition is (p. 865):
Intuitively speaking, a set is in $\mathrm{BP}\cdot\oplus \mathbf{P}$ if and only if it is reducible to a set in $\oplus\mathbf{P}$ under a polynomial-time randomized reduction with two-sided bounded error probability. |
Let $ a,b,c$ positive integer such that $ a + b + c \mid a^2 + b^2 + c^2$.
Show that $ a + b + c \mid a^n + b^n + c^n$ for infinitely many positive integer $ n$.
(problem composed by
Laurentiu Panaitopol)
So far no idea.
Claim.$a+b+c\mid a^{2^n}+b^{2^n}+c^{2^n}$ for all $n\geq0$. Proof. By induction: True for $n=0,1$ $\checkmark$. Suppose it's true for $0,\ldots,n$. Note that $$a^{2^{n+1}}+b^{2^{n+1}}+c^{2^{n+1}}=(a^{2^n}+b^{2^n}+c^{2^n})^2-2(a^{2^{n-1}}b^{2^{n-1}}+b^{2^{n-1}}c^{2^{n-1}}+c^{2^{n-1}}a^{2^{n-1}})^2+4a^{2^{n-1}}b^{2^{n-1}}c^{2^{n-1}}(a^{2^{n-1}}+b^{2^{n-1}}+c^{2^{n-1}})$$
and that
$$2(a^{2^{n-1}}b^{2^{n-1}}+b^{2^{n-1}}c^{2^{n-1}}+c^{2^{n-1}}a^{2^{n-1}})=(a^{2^{n-1}}+b^{2^{n-1}}+c^{2^{n-1}})^2-(a^{2^n}+b^{2^n}+c^{2^n})$$
is divisible by $a+b+c$ by the induction hypothesis.
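The claim is also easy to check numerically. Here's a quick Python sketch using a few triples that satisfy the hypothesis $a+b+c\mid a^2+b^2+c^2$:

```python
def claim_holds(a, b, c, max_n=8):
    s = a + b + c
    assert (a * a + b * b + c * c) % s == 0        # hypothesis of the problem
    # check a^(2^n) + b^(2^n) + c^(2^n) == 0 (mod s) for n = 0..max_n,
    # using modular exponentiation to keep the numbers small
    return all((pow(a, 2 ** n, s) + pow(b, 2 ** n, s) + pow(c, 2 ** n, s)) % s == 0
               for n in range(max_n + 1))

for triple in [(1, 1, 1), (1, 1, 4), (1, 3, 9)]:
    print(triple, claim_holds(*triple))            # all True
```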
It seems that there's a partial solution.
Suppose that $\mathrm{gcd}(a,a+b+c)=\mathrm{gcd}(b,a+b+c)=\mathrm{gcd}(c,a+b+c)=1$. Then for $n=k\cdot \phi(a+b+c)+2 \, (k=1,2, \ldots )$, where $\phi$ is Euler's totient function, we have: $$ (a^n+b^n+c^n)-(a^2+b^2+c^2)=a^2 (a^{n-2}-1) + b^2 (b^{n-2}-1) + c^2 (c^{n-2}-1), $$ where all round brackets are divisible by $a+b+c$ according to Euler's theorem. Therefore $(a+b+c) \mid (a^n+b^n+c^n)$ for all these $n$.
There's one more solution (it isn't mine). One can even prove that $(a + b + c) \mid (a^n + b^n + c^n)$ for all $n=3k+1$ and $n=3k+2$. It's enough to prove that $a + b + c \mid a^n + b^n + c^n$ => $a + b + c \mid a^{n+3} + b^{n+3} + c^{n+3}$. The proof is here: https://vk.com/doc104505692_416031961?hash=3acf5149ebfb5338b5&dl=47a3df498ea4bf930e (unfortunately, it's in Russian but it's enough to look at the formulae). One point which may need commenting: $(ab+bc+ca)(a^{n-2} + b^{n-2} + c^{n-2})$ is always divisible by $(a+b+c)$ (it's necessary to consider 2 cases: $(a+b+c)$ is odd and $(a+b+c)$ is even).
If $a,b,c,n\in\Bbb Z_{\ge 1}$, $a+b+c\mid a^2+b^2+c^2$, then $$a+b+c\mid a^n+b^n+c^n$$
is true when $3\nmid n$, but not necessarily when $3\mid n$.
$$x^2+y^2+z^2+2(xy+yz+zx)=(x+y+z)^2$$
$$\implies x+y+z\mid 2(xy+yz+zx)$$
$$\implies x+y+z\mid (x^k+y^k+z^k)(xy+yz+zx)$$
for all $k\ge 1$ (to see why, check cases when $x+y+z$ is even and when it's odd).
$$x^{n+3}+y^{n+3}+z^{n+3}=(x^{n+2}+y^{n+2}+z^{n+2})(x+y+z)$$
$$-(x^{n+1}+y^{n+1}+z^{n+1})(xy+yz+zx)+(x^n+y^n+z^n)xyz$$
for all $n\ge 1$. We know $$x+y+z\mid (x^{n+2}+y^{n+2}+z^{n+2})(x+y+z)$$
$$-(x^{n+1}+y^{n+1}+z^{n+1})(xy+yz+zx)$$
Now let $(x,y,z)=(x_1,y_1,z_1)=(1,3,9)$. $$x_1+y_1+z_1\nmid x_1^3+y_1^3+z_1^3$$ $$x_1+y_1+z_1\nmid \left(x_1^3+y_1^3+z_1^3\right)x_1y_1z_1$$ $$\implies x_1+y_1+z_1\nmid x_1^6+y_1^6+z_1^6$$
Since $x_1+y_1+z_1$ is coprime to $x_1,y_1,z_1$, we get $$x_1+y_1+z_1\nmid (x_1^6+y_1^6+z_1^6)x_1y_1z_1,$$
and so $x_1+y_1+z_1\nmid x_1^9+y_1^9+z_1^9$, etc.
Therefore $x+y+z$ cannot generally (for all $x,y,z\in\mathbb Z_{\ge 1}$) divide $x^{3m}+y^{3m}+z^{3m}$ for any given $m\ge 1$.
However, we easily get $x+y+z$ always divides $x^n+y^n+z^n$ for $n$ not divisible by $3$,
because $x+y+z\mid (x+y+z)xyz$ and $x+y+z\mid \left(x^2+y^2+z^2\right)xyz$,
because $x+y+z\mid x^2+y^2+z^2$ (given), so $x+y+z\mid x^4+y^4+z^4, x^5+y^5+z^5$,
so $x+y+z\mid \left(x^4+y^4+z^4\right)xyz, \left(x^5+y^5+z^5\right)xyz$,
so $x+y+z\mid x^7+y^7+z^7, x^8+y^8+z^8$, etc.
Here's a more intuitive way to get the idea of considering powers of $2$.
Added (below): in the same way we can prove that any $n=6k\pm1$ works.
Note that $a+b+c\mid(a+b+c)^2-(a^2+b^2+c^2)=2(ab+bc+ca)$. By The Fundamental Theorem of Symmetric Polynomials (FTSP), $a^n+b^n+c^n$ is an integer polynomial in $a+b+c$, $ab+bc+ca$ and $abc$. If $3\nmid n$, no term has degree divisible by $3$ so each term has at least one factor $a+b+c$ or $ab+bc+ca$. If we can find infinitely many $n$ such that the terms without a factor $a+b+c$ have a coefficient that is divisible by $2$, then we are done because $a+b+c\mid2(ab+bc+ca)$. This suggests taking a look to the polynomial $a^n+b^n+c^n$ over $\Bbb F_2$. Note that over $\Bbb F_2$, $a^{2^n}+b^{2^n}+c^{2^n}=(a+b)^{2^n}+c^{2^n}=(a+b+c)^{2^n}$ is divisible by $(a+b+c)$. Because the polynomial given by FTSP over $\Bbb F_2$ is the reduction modulo $2$ of that polynomial over $\mathbb Z$ (this is a consequence of the uniqueness given by the FTSP), this shows that the coefficients of those terms that have no factor $a+b+c$ is divisible by $2$, and we are done because $3\nmid2^n$. (In fact all coefficients except that of $(a+b+c)^{2^n}$ are divisible by $2$.)
Added later Ievgen's answer inspired me to generalise the above approach to $n=6k\pm1$. Consider again $a^n+b^n+c^n$ as an integer polynomial in $abc,ab+bc+ca,a+b+c$ (which we can do by FTSP). Because $3\nmid n$, no term has the form $(abc)^k$. It remains to handle the terms of the form $m\cdot(ab+bc+ca)^k(abc)^l$. If $a+b+c$ is odd, then $a+b+c\mid ab+bc+ca$ and we're done. If $a+b+c$ is even, at least one of $a,b,c$ is even so $2\mid abc$, and hence $a+b+c\mid m\cdot(ab+bc+ca)^k(abc)^l$. (Note that $l>0$ because $n$ is odd.)
For any positive integer $x$ this is true: $x \leqslant x^2$ (from $1 \leqslant x$ for any positive integer $x$). So $a + b + c \leqslant a^2 + b^2 + c^2$. But for 2 positive integers $x$, $y$, $x$ is divisible by $y$ only if $x \geqslant y$. So $a + b + c \geqslant a^2 + b^2 + c^2$ if $a + b + c \mid a^2 + b^2 + c^2$. From these 2 inequalities: $a + b + c = a^2 + b^2 + c^2$.
So $a + b - a^2 - b^2 = c^2 - c$
Evaluation for the right part $c^2 - c \geqslant 0$ (because $x \leqslant x^2$ for any positive integer x). Evaluation for the left part $a + b - a^2 - b^2 \leqslant 0$ (adding 2 inequalities $a - a^2 \leqslant 0$ and $b - b^2 \leqslant 0$)
So the left part is $\leqslant 0$ and the right part $\geqslant 0$. But they are equal, so $a + b - a^2 - b^2 = c^2 - c = 0$ and $c^2 = c$. Since $c$ is positive we can divide both parts of the last equation by $c$ and get $c = 1$.
Similarly $b = 1$ and $a = 1$. So $a + b + c = 3$ and $a^n + b^n + c^n = 3$ for any positive $n$. 3 is divisible by 3 so $a + b + c \mid a^n + b^n + c^n$ for infinitely many positive integer n. |
I would tend to
disagree with that quote:
The entropy $S(\rho_A)$ measures the amount of correlation (classical and/or quantum) between $A$ with the external world.
I think this is only true if you assume the joint $A\otimes(\text{external world})$ to be in a
pure state. In this case, as explained in steg's answer, $S(\rho_A)$ can indeed be taken as a measure of quantum entanglement between $A$ and the external world.
If you drop this assumption, then you could for example have a joint density matrix given by:$$\rho = \rho_A \otimes \rho_E$$(where ${}_E$ stands for the environment/external world), in which there are
no correlations at all between $A$ and the external world, irrespective of what $\rho_A$ is (hence irrespective of how large $S(\rho_A)$ might be).
Letting aside that if the authors were implicitly assuming the global state to be pure, they should have made no reference to "classical correlations" (which are absent in the case of a pure joint state), that implicit assumption is in my opinion misguided. It is motivated by the idea that pure quantum states are somehow "more fundamental", with density matrices introduced as an after-thought. There exist however more satisfactory axiomatizations of quantum mechanics in which density matrices are the basic objects, recording our previous knowledge of a system. Then pure states play no special role: they are just states of "maximal knowledge", in which we happen to know as much about the system as is possible to know given quantum mechanics. But if the system in question is the entire world, I would rather expect the opposite, namely a
very partial knowledge!
Now, one might think that if the system $A$
started in a pure state at some $t=t_o$, i.e. the current uncertainty in $\rho_A$ is entirely due to its interaction with a quantum and possibly noisy environment, then this uncertainty will reflect the correlations with the environment, because the two would have been generated concomitantly by the interaction. But even that seemingly reasonable statement is not true, as shown by the following example. Take $A$ to be a qubit, initially in $|0\rangle\langle 0|$, and take $E$ to be in a classical superposition of two states $\frac{1}{2} |a\rangle\langle a| + \frac{1}{2} |b\rangle\langle b|$. There exists a unitary evolution mapping $|0\rangle \otimes |a\rangle$ to $|0\rangle \otimes |a\rangle$ and $|0\rangle \otimes |b\rangle$ to $|1\rangle \otimes |a\rangle$, so that the final joint density matrix is:$$\left(\frac{1}{2} |0\rangle\langle 0| + \frac{1}{2} |1\rangle\langle 1|\right) \otimes |a\rangle\langle a|.$$Here, the noisy environment has managed to "contaminate" our system A, without getting correlated with it at all! |
I have this ARMA(1,1) process where $\epsilon_t$ is the classical White Noise process
$$X_t=\epsilon_t +\alpha\epsilon_{t-1}+\theta X_{t-1}$$
and I have to write its Wold representation. Using the lag operator I get
$$\epsilon_t=\frac{1-\theta L}{1+\alpha L}X_{t}$$ Assuming the process is stationary and invertible, how can I recover the Wold representation?
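For reference, expanding $(1+\alpha L)/(1-\theta L)$ as a power series in $L$ (which requires $|\theta|<1$) gives $\psi_0=1$ and $\psi_j=\theta^{j-1}(\theta+\alpha)$ for $j\ge1$. A small Python sketch checking this closed form against the recursion $\psi_j=\theta\psi_{j-1}$ implied by the difference equation:

```python
def wold_psi(alpha, theta, n=10):
    """MA(inf) weights psi_j in X_t = sum_j psi_j * eps_{t-j}."""
    psi = [1.0]
    for j in range(1, n):
        psi.append(theta ** (j - 1) * (theta + alpha))
    return psi

def wold_psi_recursive(alpha, theta, n=10):
    # psi_0 = 1, psi_1 = theta + alpha, psi_j = theta * psi_{j-1} for j >= 2
    psi = [1.0, theta + alpha]
    for _ in range(2, n):
        psi.append(theta * psi[-1])
    return psi[:n]

a, t = 0.4, 0.6
assert all(abs(x - y) < 1e-12
           for x, y in zip(wold_psi(a, t), wold_psi_recursive(a, t)))
print([round(x, 4) for x in wold_psi(a, t)[:4]])   # [1.0, 1.0, 0.6, 0.36]
```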
Learning Objectives

Calculate the price elasticity of supply

Calculating the Price Elasticity of Supply
The price elasticity of supply measures how much quantity supplied changes in response to a change in the price. The calculations and interpretations are analogous to those we explained above for the price elasticity of demand. The only difference is we are looking at how producers respond to a change in the price instead of how consumers respond.
Price elasticity of supply is the percentage change in the quantity of a good or service supplied divided by the percentage change in the price. Since this elasticity is measured along the supply curve, the law of supply holds, and thus price elasticities of supply are always positive numbers. We describe supply elasticities as elastic, unitary elastic and inelastic, depending on whether the measured elasticity is greater than, equal to, or less than one.

Exercise: Elasticity of Supply from Point A to Point B
Assume that an apartment rents for $650 per month and at that price 10,000 units are offered for rent, as shown in Figure 2, below. When the price increases to $700 per month, 13,000 units are offered for rent. By what percentage does apartment supply increase? What is the price sensitivity?
Step 1. We know that
[latex]\displaystyle\text{Price Elasticity of Supply}=\frac{\text{percent change in quantity}}{\text{percent change in price}}[/latex]
Step 2. From the midpoint method we know that
[latex]\displaystyle\text{percent change in quantity}=\frac{Q_2-Q_1}{(Q_2+Q_1)\div{2}}\times{100}[/latex]
[latex]\displaystyle\text{percent change in price}=\frac{P_2-P_1}{(P_2+P_1)\div{2}}\times{100}[/latex]
Step 3. We can use the values provided in the figure in each equation:
[latex]\displaystyle\text{percent change in quantity}=\frac{13,000-10,000}{(13,000+10,000)\div{2}}\times{100}=\frac{3,000}{11,500}\times{100}=26.1[/latex]
[latex]\displaystyle\text{percent change in price}=\frac{700-650}{(700+650)\div{2}}\times{100}=\frac{50}{675}\times{100}=7.4[/latex]
Step 4. Then, those values can be used to determine the price elasticity of supply:
[latex]\displaystyle\text{Price Elasticity of Supply}=\frac{26.1\text{ percent}}{7.4\text{ percent}}=3.53[/latex]
Again, as with the elasticity of demand, the elasticity of supply is not followed by any units. Elasticity is a ratio of one percentage change to another percentage change—nothing more—and is read as an absolute value. In this case, a 1% rise in price causes an increase in quantity supplied of 3.5%. Since 3.5 is greater than 1, this means that the percentage change in quantity supplied will be greater than a 1% price change. If you’re starting to wonder if the concept of slope fits into this calculation, read on for clarification.
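The midpoint calculation above is easy to wrap in a small Python helper (a sketch; the 3.53 in Step 4 comes from rounding the percentages to 26.1 and 7.4 before dividing):

```python
def midpoint_elasticity(q1, q2, p1, p2):
    # percent changes via the midpoint method, then their ratio
    pct_q = (q2 - q1) / ((q2 + q1) / 2) * 100
    pct_p = (p2 - p1) / ((p2 + p1) / 2) * 100
    return pct_q / pct_p

e = midpoint_elasticity(10_000, 13_000, 650, 700)
print(round(e, 2))   # 3.52 without the intermediate rounding
```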
Glossary

elastic supply: supply responds more than proportionately to a change in price; i.e. the percent change in quantity supplied is greater than the percent change in price

inelastic supply: supply responds less than proportionately to a change in price; i.e. the percent change in quantity supplied is less than the percent change in price

price elasticity of supply: percentage change in the quantity supplied divided by the percentage change in price

unitary elastic supply: supply responds exactly proportionately to a change in price; i.e. the percent change in quantity supplied is equal to the percent change in price
Welcome to The Riddler. Every week, I offer up problems related to the things we hold dear around here: math, logic and probability. There are two types: Riddler Express for those of you who want something bite-size and Riddler Classic for those of you in the slow-puzzle movement. Submit a correct answer for either,
and you may get a shoutout in next week’s column. If you need a hint or have a favorite puzzle collecting dust in your attic, find me on Twitter.

Riddler Express
Our first-ever Riddler Express centered on the “Sesame Street” character Count Von Count. He counts aloud by tweeting one number at a time, and with the numbers spelled out like so: “One!” “Two!” “Three!” … “Five hundred thirty eight!” etc. The original puzzle was to see how high the count could count before hitting Twitter’s 140-character limit. Riddler Nation found that it was all the way up to 1,111,373,373,372, or, as the count would write it, “One trillion one hundred eleven billion three hundred seventy three million three hundred seventy three thousand three hundred seventy two!”
But since then, Twitter has expanded its limit to 280 characters. That means Count Von Count’s possibilities have expanded as well. How high can Count Von Count count now? How much higher can the count count because of the new character limit?
Important note! The count is enthusiastic and must end all of his tweets with an exclamation point.

Riddler Classic
Speaking of compulsive counting counts, I, too, have a problem. Whenever I see a string of digits — a license plate, a ZIP code … a stranger’s PIN number — I try to turn them into a true mathematical equation by maintaining the digits’ order and inserting symbols. For example, the ZIP code of my office is 10023. I could turn that into 1+0+0+2=3 or 1×0=0×23 (or many other equations).
This game gets more complicated the more digits you have, and strings of four or five digits seem to be the sweet spot where there’s a lot of fun to be had.
Considering strings of all lengths and inserting only common mathematical symbols — ) ( + – × ÷ ^ = — what proportion of each string length has true mathematical equations lurking inside of it? (For example, for strings of length three, you’d consider the groups of digits 000, 001, …, 999, trying to insert symbols into each one to find a correct equation. You’d find that 000 has many possibilities, whereas 129 has none. For strings of length four, you’d consider 0000 through 9999, and so on.) As the strings get longer, there’s more you can do with them: Is there a string length where every possible string has a correct equation inside of it?
Solution to last week’s Riddler Express
Congratulations to 👏 David Leman 👏 of Indianapolis, winner of last week’s Riddler Express!
In competitive darts, a common game is called 501. A player starts with 501 points and subtracts the score of each throw. He or she must finish with exactly zero points. (Also, according to the rules, the final dart must land in either the bull’s-eye or the outer, doubled segments.) Finishing a game in the minimum number of throws is a rare feat, akin to a perfect 300 game in bowling. Last week I asked: What is the minimum number of throws? How many different ways are there to do it?
The minimum number of throws to count down from 501 is nine. The most valuable space on a dartboard is triple-20 — the piece of the 20 slice inside the inner ring — which is worth 60 points. 501 divided by 60 gives us a number between eight and nine, so we know we can’t do it in eight throws or fewer. Here’s one way to do it in nine:
60 + 60 + 60 + 60 + 60 + 60 + 60 + 57 + 24 = 501
That’s a string of seven triple-20s, then a triple-19 and then a double-12, for a total of 501. (Remember that your game must end on a doubled throw.)
There are 3,944 possible routes to nine-dart perfection — some of these include the same throws, but in different orders. The basic idea is to figure out how many 60s you might need, what throws could supplement those, and then how many ways you could mix up the order in which those throws came. Solver Mike Seifert went low-tech and shared his work:
And solver Amy Leblang took the high-tech route and was kind enough to share the code she used to calculate that total.
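The 3,944 figure can also be checked by brute force: the sketch below counts ordered sequences of nine scoring darts totaling 501 whose final dart is a double or the 50-point inner bull (an independent re-count, not the solvers' code):

```python
def nine_dart_finishes():
    # Every scoring throw: singles 1-20, doubles 2-40, trebles 3-60,
    # the outer bull (25) and the inner bull (50).
    throws = []
    for v in range(1, 21):
        throws += [v, 2 * v, 3 * v]
    throws += [25, 50]

    # ways[s] = number of ordered 8-throw sequences scoring exactly s.
    target = 501
    ways = [1] + [0] * target
    for _ in range(8):                       # the first eight darts
        nxt = [0] * (target + 1)
        for s in range(target + 1):
            if ways[s]:
                for t in throws:
                    if s + t <= target:
                        nxt[s + t] += ways[s]
        ways = nxt

    # The ninth dart must be a double (D1-D20) or the inner bull (50).
    finishers = [2 * v for v in range(1, 21)] + [50]
    return sum(ways[target - f] for f in finishers)

print(nine_dart_finishes())  # expect 3944 per the solution above
```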
To see this math in action, here’s a video (with a hat tip to my colleague Daniel Levitt) of a man very nearly completing two perfect nine-dart finishes in a row:
Solution to last week’s Riddler Classic
Congratulations to 👏 Ben Breadsell 👏 of Perth, Australia, winner of last week’s Riddler Classic!
Last week, you were throwing darts at a dartboard that has a radius of 1 foot. Your darts always fell on the board and never outside. Furthermore, your chances of hitting any area on the board were exactly proportional to the area of the patch — your darts landed according to a uniform probability distribution. You kept throwing darts until your nth dart hit a location that was less than 1 foot from some other dart. You were then “out,” and n-1 was your final score. You were asked three questions about this game: What is the maximum possible score? What is the probability of getting a score greater than 1? What is the expected value of your score?
First, the maximum possible score is 7. This can be achieved with one dart at the very center of the board and six other darts evenly spaced along the board’s edge. Mahalingam Vaidhyanathan, the puzzle’s submitter, illustrated what this looks like:
Second, the probability of getting a score greater than 1 is about 0.41 — to be precise, it’s \(\frac{3\sqrt{3}}{4\pi}\).
To see how this works, take a look at another of Mahalingam’s illustrations:
The blue circle represents the dartboard, the green dot represents where your first dart landed, and the blue dot is the center of the dartboard. The green circle represents the “bad” area for your second dart — throw it anywhere within the green circle and you’ll either be within 1 foot of your first dart (game over) or miss the board entirely (which you’re too skilled to do anyway).
So, to solve for the probability that you’ll get a score greater than one dartpoint, we need to solve for how often you’ll land on the dartboard but outside the “bad” radius around your first dart. That in part depends on where that first dart landed.
Let x be the distance between your dart and the center of the board. To get a score greater than 1, your second dart must fall outside of the green circle and in the blue area. That area of the overlap, call it A, equals:
\begin{equation*}A=2\cos^{-1}\left(\frac{x}{2}\right)-x\sqrt{1-\left(\frac{x}{2}\right)^2}\end{equation*}
To find the probability over all the possible x’s where your dart could land, we remember that our darts fall according to a uniform density and then integrate over x as it ranges from 0 (your dart lands at the center) to 1 (your dart lands on the edge). That looks like this:
\begin{equation*}1-\frac{1}{\pi}\int_0^1 (2x)(A)dx=\frac{3\sqrt{3}}{4\pi} \end{equation*}
That equals roughly 41 percent.
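The closed form can be checked numerically by applying Simpson's rule to the integral above (a quick sketch):

```python
import math

def overlap_area(x):
    """Area shared by the unit-radius board and the unit-radius
    exclusion circle centered a distance x from the board's center."""
    return 2 * math.acos(x / 2) - x * math.sqrt(1 - (x / 2) ** 2)

def p_score_above_one(n=10_000):
    """1 - (1/pi) * integral_0^1 of 2x*A(x) dx, via Simpson's rule (n even)."""
    f = lambda x: 2 * x * overlap_area(x)
    h = 1.0 / n
    s = f(0) + f(1)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(i * h)
    integral = s * h / 3
    return 1 - integral / math.pi

closed_form = 3 * math.sqrt(3) / (4 * math.pi)
print(p_score_above_one(), closed_form)  # both ≈ 0.4135
```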
The answer to the third, and last, question: The expected value of your score is a fairly meager 1.47. The integral math to get that answer gets too complicated to handle with pencil and paper, so you’re best off building a computer dart-throwing simulation. Solver Samir Khan was kind enough to share the code he used to simulate 100,000 dart games.
We’ve already found the probability of your scoring exactly 1 point. It’s 1 minus the answer above, or about 59 percent. The probability of scoring 2 points turns out to be about 35 percent, 3 points about 6 percent, 4 points about 0.1 percent, and anything higher is negligible. Solver Steven W. made a histogram of the frequency of scores, over 1 million games:
Taking an average of these scores weighted by their probabilities gives an expected score of about 1.47.
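A Monte Carlo sketch of the game (not Samir Khan's code; the simulation count and seed are arbitrary) reproduces both the ≈0.41 probability and the ≈1.47 expected score:

```python
import math
import random

def play(rng):
    """Throw darts uniformly on a unit-radius board until one lands
    within 1 foot of an earlier dart; return the score (darts - 1)."""
    darts = []
    while True:
        # Uniform point on the disk: sqrt for the radius, uniform angle.
        r, a = math.sqrt(rng.random()), 2 * math.pi * rng.random()
        p = (r * math.cos(a), r * math.sin(a))
        if any((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2 < 1 for q in darts):
            return len(darts)      # this dart busted; score = darts so far
        darts.append(p)

rng = random.Random(0)
scores = [play(rng) for _ in range(100_000)]
mean = sum(scores) / len(scores)
p_above_1 = sum(s > 1 for s in scores) / len(scores)
print(round(mean, 2), round(p_above_1, 2))  # ≈ 1.47 and ≈ 0.41
```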
Want to submit a riddle?
Email me at [email protected]. |
SciPost Submission Page
Probing Lepton Universality with (Semi)-Leptonic B decays
by Giovanni Banelli, Robert Fleischer, Ruben Jaarsma and Gilberto Tetlalmatzi-Xolocotzi
Published as SciPost Phys. Proc. 1, 013 (2019)
Submission summary
As Contributors: Ruben Jaarsma · Gilberto Tetlalmatzi-Xolocotzi
Preprint link: scipost_201812_00041v1
Date accepted: 2019-01-17
Date submitted: 2018-12-13
Submitted by: Tetlalmatzi-Xolocotzi, Gilberto
Submitted to: SciPost Physics Proceedings
Proceedings issue: The 15th International Workshop on Tau Lepton Physics (Amsterdam, 2018-09)
Domain(s): Exp. & Theor.
Subject area: High-Energy Physics - Phenomenology
Abstract
The most recent measurements of the observables $R_{D^{(*)}}$ are in tension with the Standard Model offering hints of New Physics in $b\rightarrow c \ell \bar{\nu}_{\ell}$ transitions. Motivated by these results, in this work we present an analysis on their $b\rightarrow u \ell \bar{\nu}_{\ell}$ counterparts (for $\ell=e, ~\mu, ~\tau$). Our study has three main objectives. Firstly, using ratios of branching fractions, we assess the effects of beyond the Standard Model scalar and pseudoscalar particles in leptonic and semileptonic $B$ decays ($B^-\rightarrow \ell^- \bar{\nu}_{\ell}$, $\bar{B}\rightarrow \pi \ell \bar{\nu}_{\ell}$ and $\bar{B}\rightarrow \rho \ell \bar{\nu}_{\ell}$). Here a key role is played by the leptonic $B$ processes, which are highly sensitive to new pseudoscalar interactions. In particular, we take advantage of the most recent measurement of the branching fraction of the channel $B^-\rightarrow \mu^-\bar{\nu}_{\mu}$ by the Belle collaboration. Secondly, we extract the CKM matrix element $|V_{ub}|$ while accounting simultaneously for New Physics contributions. Finally, we provide predictions for the branching fractions of yet unmeasured leptonic and semileptonic $B$ decays.
Current status:
Submission & Refereeing History
Reports on this Submission
Report 1 by Greg Ciezarek on 2018-12-23 (Invited Report)
Cite as: Greg Ciezarek, Report on arXiv:scipost_201812_00041v1, delivered 2018-12-23, doi: 10.21468/SciPost.Report.767
Report
This proceeding relates the recent hints of an anomaly in $b \rightarrow c \tau \nu$ processes to current and future measurements in semileptonic $b \rightarrow u \ell \nu$ and fully leptonic $B \rightarrow \ell \nu$ processes.
These measurements can provide important additional information to constrain different interpretations of the b to tau measurements in terms of new physics. This work focuses on new physics with a (pseudo)scalar Lorentz structure, where it is clearly demonstrated that fully leptonic processes play a greatly enhanced role. Predictions in several new physics scenarios are presented for quantities which can be measured at ongoing and future experiments. The importance of new measurements distinguishing between decay modes with electrons and muons is emphasized. This proceeding is an important contribution to the interpretation of tau lepton physics, and is recommended for publication.
NIRSpec Predicted Performance
NIRSpec sensitivity estimates may be obtained using the JWST Exposure Time Calculator (ETC). Methods on how to do this and some best-estimates of limiting performance are available for users.
Information presented in the NIRSpec predicted performance articles describes the best knowledge of performance based on instrument component and flight model tests carried out on the ground. The scientific performance of all of JWST's instruments will be measured during the commissioning period immediately after launch. The best estimates of the performance of all of JWST's instruments are incorporated into the JWST Exposure Time Calculators (ETC). The information provided on predicted sensitivity performance within these pages is only for general guidance. Measured on-sky performance values will be incorporated into the ETCs and provided here as they become available.
Related NIRSpec predicted performance information can be found in these articles:
NIRSpec Sensitivity: the limiting flux sensitivity in a set of comparison benchmark calculations.
NIRSpec Bright Source Limits: the brightest objects that may be observed using a given instrumental configuration.
The calculation method for sensitivity, transmission of the combined optical element and the efficiency of the detectors is discussed below.
From surface brightness to quantized units
The information shown here is meant as a guideline for the NIRSpec calculations in the linked subpages: NIRSpec Sensitivity and Bright Source Limits. The JWST Exposure Time Calculators (ETC) should be used to derive the best estimate of signal-to-noise and saturation limits for science sources of interest.
The flux rate at the NIRSpec entrance aperture, S_\lambda^{ideal}, is derived from the surface brightness of the astronomical source of interest (I_\lambda), the telescope collecting area (A) in cm², and the dimensionless combined transfer efficiency of the OTE (primary, secondary, tertiary, and fine steering mirrors), \eta_{OTE}(\lambda).
(1) S_{\lambda}^{ideal} = A \times \eta_{OTE}(\lambda)\times I_\lambda
where S_\lambda^{ideal} is in units of photons/s/μm/pixel.
After the OTE, the flux is transferred through a specific instrument optics system.
Instrument optics systems are a crucial telescope component that can be described by one-dimensional (wavelength-dependent) efficiency curves (unity is perfect transmission). Each instrument/mode combination can have efficiency contributions from internal reflections and optical stops, filters, and dispersers. The actual focal plane flux rate is then:
(2) S_{\lambda}^{\rm focal\, plane} = S_{\lambda}^{\rm ideal} \times \eta_{\rm optics}({\lambda}) \times \eta_{\rm filter}({\lambda}) \times \eta_{\rm disperser}({\lambda})
in units of photons/s/μm/pixel.
Faint sensitivity limits for a specific benchmark calculation case are described in the NIRSpec Sensitivity article. The signal to noise is derived from the number of incident photons measured by the end to end instrument, including the detector. This optical system transmission, S_\lambda, follows from the above equation via a detector quantum efficiency function (QE; see the upper curve in Figure 1). The QE curve assumes a quantum yield (QY) of unity at all wavelengths (i.e. the efficiency of a detector at converting an incident photon to a measured electron is 1.0).
The optical transmission function is:
(3) S_{\lambda} = S_{\lambda}^{\rm focal\, plane} \times {\rm QE}({\lambda})
in units of e⁻/s/μm/pixel. The optical transmission is presented in Figure 2 for the NIRSpec grating-filter combinations using both the MOS internal optics throughput and the IFU internal optics throughput. Note that fixed slit (FS) spectroscopy shares the same optics as MOS.
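As a toy illustration of how Equations 1 to 3 chain together, the sketch below multiplies through made-up efficiency values (none of these numbers are real NIRSpec or OTE values):

```python
# Toy single-wavelength example of Eqs. (1)-(3); every number here is
# a hypothetical placeholder, not a real NIRSpec/OTE value.
A = 250_000.0          # telescope collecting area, cm^2 (illustrative)
I_lambda = 1.0e-3      # source surface brightness, photons/s/cm^2/um/pixel
eta_ote = 0.90         # OTE transfer efficiency (dimensionless)
eta_optics = 0.80      # internal optics efficiency
eta_filter = 0.70      # filter transmission
eta_disperser = 0.60   # disperser efficiency
qe = 0.85              # detector quantum efficiency (quantum yield = 1)

s_ideal = A * eta_ote * I_lambda                             # Eq. (1)
s_focal = s_ideal * eta_optics * eta_filter * eta_disperser  # Eq. (2)
s_det = s_focal * qe                                         # Eq. (3)

print(s_ideal, s_focal, s_det)  # photons/s -> e-/s per um per pixel
```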
Investigation of bright saturation limits for the NIRSpec detectors is described in the NIRSpec Bright Limits article. The calculations presented in the bright limit case measure the number of electrons that accumulate in the detector, based on the incident photons. In this case, the photon conversion efficiency (PCE) is used to present overall system efficiency of detecting electrons and filling the detector well. Calculation of the PCE uses the detector relative quantum efficiency (RQE), which is the QE multiplied by the quantum yield of the detector. At wavelengths shortward of 1.4 μm, a single incident photon can result in more than one electron measured on the detector. This results in a RQE greater than 1.0, as presented in the lower panel of Figure 1.
The PCE is:
(4) PCE_{\lambda} = S_{\lambda}^{\rm focal\, plane} \times {\rm RQE}({\lambda})
in units of e⁻/s/μm/pixel. The PCE is shown in Figure 3 for the MOS and the IFU internal optics throughputs.

References
Pontoppidan, K. 2016, Proc of SPIE, 9910, 16
Pandeia: a multi-mission exposure time calculator for JWST and WFIRST |
The nonlocal boundary problem with perturbations of antiperiodicity conditions for the elliptic equation with constant coefficients

Abstract
In this article, we investigate a problem with nonlocal boundary conditions, which are perturbations of antiperiodicity conditions, in a bounded $m$-dimensional parallelepiped using the Fourier method. We describe properties of a transformation operator $R:L_2(G) \to L_2(G)$ that gives a connection between the selfadjoint operator $L_0$ of the problem with antiperiodicity conditions and the operator $L$ of the perturbed nonlocal problem: $RL_0=LR.$
Also we construct a commutative group of transformation operators $\Gamma(L_0).$ We show that some abstract nonlocal problem corresponds to any transformation operator $R \in \Gamma(L_0):L_2(G) \to L_2(G)$ and vice versa. We construct a system $V(L)$ of root functions of the operator $L,$ which consists of an infinite number of adjoint functions. Also we define conditions under which the system $V(L)$ is total and minimal in the space $L_{2}(G),$ and conditions under which it is a Riesz basis in the space $L_{2}(G)$. If $V(L)$ is a Riesz basis in the space $L_{2}(G),$ we obtain sufficient conditions under which the nonlocal problem has a unique solution in the form of a Fourier series in the system $V(L).$
Keywords
differential-operator equation, eigenfunctions, Riesz basis
[EDIT: this posting has been answered in the negative. However, it couldn't be deleted, so I've written a salvage for it in the posting titled "What is the consistency strength of Ackermann + the following cardinals to ordinals isomorphism?"]
Working in Morse-Kelley set theory
Add to it the following schema:
Cardinals to Ordinals isomorphism: if $\psi(Y)$ is a formula in which only the symbol $``Y"$ appears free [and only free], and in which the symbol $``x"$ never occurs, and $\psi(x)$ is the formula obtained from $\psi(Y)$ by merely replacing each occurrence of the symbol $``Y"$ by an occurrence of the symbol $``x"$, then:
$\forall x (x \text{ is an ordinal }\to \exists Y(\psi(Y) \wedge Y \text{ is a cardinal } \wedge x < Y)) \\ \to \{x | x \text{ is a cardinal } \wedge \psi(x) \} \text{ is order isomorphic to } \{x | x \text{ is an ordinal }\} $
is an axiom.
Note: just to avoid confusion: the terms "cardinal" and "ordinal" here are defined by the standard von Neumann definitions, but they range over all classes and not just sets. So an ordinal class is a transitive class of transitive sets, and a cardinal class is an ordinal class that is the smallest of all ordinal classes bijective to it.
Questions:
Is there a clear inconsistency with this principle?
if this is consistent, then what's the consistency strength of the above theory? |
This post describes some discriminative machine learning algorithms.
Normal distribution → Linear regression
Bernoulli distribution → Logistic regression
Multinomial distribution → Multinomial logistic regression (Softmax regression)
Exponential family distribution → Generalized linear regression
Multivariate normal distribution → Gaussian discriminant analysis or EM Algorithm
X features conditionally independent, \(p(x_1, x_2|y)=p(x_1|y) * p(x_2|y) \) → Naive Bayes Algorithm
Other ML algorithms are based on geometry, like the SVM and K-means algorithms.
Linear Regression
Below a table listing house prices by size.

Size (m²) | Price
50 | 99
50 | 100
50 | 100
50 | 101
60 | 110
For the size 50\(m^2\), if we suppose that prices are normally distributed around the mean μ=100 with a standard deviation σ, then P(y|x = 50) = \(\frac{1}{\sigma \sqrt{2\pi}} exp(-\frac{1}{2} (\frac{y-μ}{\sigma})^{2})\)
We define h(x) as a function that returns the mean of the distribution of y given x (E[y|x]). We will define this function as a linear function. \(E[y|x] = h_{θ}(x) = \theta^T x\)
P(y|x = 50; θ) = \(\frac{1}{\sigma \sqrt{2\pi}} exp(-\frac{1}{2} (\frac{y-h_{θ}(x)}{\sigma})^{2})\)
We need to find θ that maximizes the probability for all values of x. In other words, we need to find θ that maximizes the likelihood function L:\(L(\theta)=P(\overrightarrow{y}|X;θ)=\prod_{i=1}^{m} P(y^{(i)}|x^{(i)};θ)\)
Or maximizes the log likelihood function l:\(l(\theta)=log(L(\theta )) = \sum_{i=1}^{m} log(P(y^{(i)}|x^{(i)};\theta ))\) \(= \sum_{i=1}^{m} log(\frac{1}{\sigma \sqrt{2\pi}}) -\frac{1}{2} \sum_{i=1}^{m} (\frac{y^{(i)}-h_{θ}(x^{(i)})}{\sigma})^{2}\)
To maximize l, we need to minimize J(θ) = \(\frac{1}{2} \sum_{i=1}^{m} (y^{(i)}-h_{θ}(x^{(i)}))^{2}\). This function is called the Cost function (or Energy function, or Loss function, or Objective function) of a linear regression model. It’s also called “Least-squares cost function”.
J(θ) is convex; to minimize it, we need to solve the equation \(\frac{\partial J(θ)}{\partial θ} = 0\). A convex function has no local minima other than its global minimum.
There are many methods to solve this equation:
Gradient descent (Batch or Stochastic Gradient descent)
Normal equation
Newton method
Matrix differentiation
Gradient descent is the most used Optimizer (also called Learner or Solver) for learning model weights.
\(θ_{j} := θ_{j} – \alpha \frac{\partial J(θ)}{\partial θ_{j}} = θ_{j} – α \frac{\partial \frac{1}{2} \sum_{i=1}^{m} (y^{(i)}-h_{θ}(x^{(i)}))^{2}}{\partial θ_{j}}\)
Batch Gradient descent
α is called “Learning rate”\(θ_{j} := θ_{j} – α \frac{1}{2} \sum_{i=1}^{m} \frac{\partial (y^{(i)}-h_{θ}(x^{(i)}))^{2}}{\partial θ_{j}}\)
If \(h_{θ}(x)\) is a linear function (\(h_{θ} = θ^{T}x\)), then :\(θ_{j} := θ_{j} – α \sum_{i=1}^{m} (h_{θ}(x^{(i)}) – y^{(i)}) * x_j^{(i)} \)
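The batch update rule above can be sketched in plain Python on toy data (the data, learning rate, and iteration count are made up for illustration):

```python
# Batch gradient descent for h(x) = theta0 + theta1 * x on toy data.
xs = [0.0, 1.0, 2.0, 3.0]
ys = [1.0, 3.0, 5.0, 7.0]          # generated by y = 1 + 2x

theta0, theta1, alpha = 0.0, 0.0, 0.05
for _ in range(5000):               # epochs
    # Sum the LMS gradient over the whole training set.
    g0 = sum((theta0 + theta1 * x) - y for x, y in zip(xs, ys))
    g1 = sum(((theta0 + theta1 * x) - y) * x for x, y in zip(xs, ys))
    theta0 -= alpha * g0
    theta1 -= alpha * g1

print(round(theta0, 3), round(theta1, 3))  # converges near 1 and 2
```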
Batch size should fit the size of CPU or GPU memory; otherwise, learning speed will be extremely slow.
When using Batch gradient descent, the cost function in general decreases without oscillations.
Stochastic (Online) Gradient Descent (SGD) (use one example for each iteration – pass through all data N times (N epochs))
\(θ_{j} := θ_{j} – \alpha (h_{θ}(x^{(i)}) – y^{(i)}) * x_j^{(i)} \)
This learning rule is called “Least mean squares (LMS)” learning rule. It’s also called Widrow-Hoff learning rule.
Mini-batch Gradient descent
Run gradient descent for each mini-batch until we pass through the training set (1 epoch). Repeat the operation many times.\(θ_{j} := θ_{j} – \alpha \sum_{i=1}^{20} (h_{θ}(x^{(i)}) – y^{(i)}) * x_j^{(i)} \)
Mini-batch size should fit the size of CPU or GPU memory.
When using mini-batch gradient descent, the cost function decreases quickly but with oscillations.
Learning rate decay
It’s a technique used to automatically reduce the learning rate after each epoch. The decay rate is a hyperparameter.\(α = \frac{1}{1+ decayrate \times epochnum} \cdot α_0\)
Momentum
Momentum is a method used to accelerate gradient descent. The idea is to add an extra term to the equation to accelerate descent steps.\(θ_{j_{t+1}} := θ_{j_t} – α \frac{\partial J(θ_{j_t})}{\partial θ_j} \color{blue} {+ λ (θ_{j_t} – θ_{j_{t-1}})} \)
Below another way to write the expression:\(v(θ_{j},t) = α . \frac{\partial J(θ_j)}{\partial θ_j} + λ . v(θ_{j},t-1) \\ θ_{j} := θ_{j} – \color{blue} {v(θ_{j},t)}\)
Nesterov Momentum is a slightly different version of momentum method.
AdaGrad
AdaGrad is another method used to accelerate gradient descent.
The problem in this method is that the term grad_squared becomes large after running many gradient descent steps. The term grad_squared is used to accelerate gradient descent when gradients are small, and slow down gradient descent when gradients are large.

RMSprop
RMSprop is an enhanced version of AdaGrad.
The term decay_rate is used to apply exponential smoothing on the grad_squared term.

Adam optimization
Adam is a combination of Momentum and RMSprop.
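A minimal pure-Python sketch of the Adam update on a simple quadratic (the hyperparameter values are assumptions, close to the commonly used defaults):

```python
import math

def adam_minimize(grad, theta, steps=5000, alpha=0.02,
                  beta1=0.9, beta2=0.999, eps=1e-8):
    """One-parameter Adam: a Momentum-style first moment (m) combined
    with an RMSprop-style second moment (v), both bias-corrected."""
    m = v = 0.0
    for t in range(1, steps + 1):
        g = grad(theta)
        m = beta1 * m + (1 - beta1) * g        # smoothed gradient
        v = beta2 * v + (1 - beta2) * g * g    # smoothed squared gradient
        m_hat = m / (1 - beta1 ** t)           # bias corrections
        v_hat = v / (1 - beta2 ** t)
        theta -= alpha * m_hat / (math.sqrt(v_hat) + eps)
    return theta

# Minimize J(theta) = (theta - 3)^2, whose gradient is 2 * (theta - 3).
theta = adam_minimize(lambda th: 2 * (th - 3), theta=0.0)
print(round(theta, 2))  # settles near 3
```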
Normal equation
To minimize the cost function \(J(θ) = \frac{1}{2} \sum_{i=1}^{n} (y^{(i)}-h_{θ}(x^{(i)}))^{2} = \frac{1}{2} (Xθ – y)^T(Xθ – y)\), we need to solve the equation:\( \frac{\partial J(θ)}{\partial θ} = 0 \\ \frac{\partial trace(J(θ))}{\partial θ} = 0 \\ \frac{\partial trace((Xθ – y)^T(Xθ – y))}{\partial θ} = 0 \\ \frac{\partial trace(θ^TX^TXθ – θ^TX^Ty – y^TXθ + y^Ty)}{\partial θ} = 0\) \(\frac{\partial trace(θ^TX^TXθ) – trace(θ^TX^Ty) – trace(y^TXθ) + trace(y^Ty))}{\partial θ} = 0 \\ \frac{\partial trace(θ^TX^TXθ) – trace(θ^TX^Ty) – trace(y^TXθ))}{\partial θ} = 0\) \(\frac{\partial trace(θ^TX^TXθ) – trace(y^TXθ) – trace(y^TXθ))}{\partial θ} = 0 \\ \frac{\partial trace(θ^TX^TXθ) – 2 trace(y^TXθ))}{\partial θ} = 0 \\ \frac{\partial trace(θθ^TX^TX)}{\partial θ} – 2 \frac{\partial trace(θy^TX))}{\partial θ} = 0 \\ 2 X^TXθ – 2 X^Ty = 0 \\ X^TXθ= X^Ty \\ θ = {(X^TX)}^{-1}X^Ty\)
If \(X^TX\) is singular, then we need to calculate the pseudo inverse instead of the inverse.
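For one feature plus a bias term, the normal equation can be solved by hand with an explicit 2×2 inverse (a toy sketch, no linear-algebra library needed):

```python
# theta = (X^T X)^{-1} X^T y with X = [[1, x_i]] (bias column + feature).
xs = [0.0, 1.0, 2.0, 3.0]
ys = [1.0, 3.0, 5.0, 7.0]          # y = 1 + 2x exactly
m = len(xs)

# Entries of the 2x2 matrix X^T X and the vector X^T y.
sx, sxx = sum(xs), sum(x * x for x in xs)
sy, sxy = sum(ys), sum(x * y for x, y in zip(xs, ys))
det = m * sxx - sx * sx            # determinant of X^T X

theta0 = (sxx * sy - sx * sxy) / det
theta1 = (m * sxy - sx * sy) / det
print(theta0, theta1)  # 1.0 and 2.0
```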
Newton method
Newton’s method looks for a root of J′(θ). Approximating the second derivative by a finite difference, \(J''(θ_{t}) \approx \frac{J'(θ_{t+1}) – J'(θ_{t})}{θ_{t+1} – θ_{t}}\), and setting \(J'(θ_{t+1}) = 0\) gives the update rule: \(θ_{t+1} := θ_{t} – \frac{J'(θ_{t})}{J''(θ_{t})}\)
Matrix differentiation
To minimize the cost function: \(J(θ) = \frac{1}{2} \sum_{i=1}^{n} (y^{(i)}-h_{θ}(x^{(i)}))^{2} = \frac{1}{2} (Xθ – y)^T(Xθ – y)\), we need to solve the equation:\( \frac{\partial J(θ)}{\partial θ} = 0 \\ \frac{\partial θ^TX^TXθ – θ^TX^Ty – y^TXθ + y^Ty}{\partial θ} = 2X^TXθ – \frac{\partial θ^TX^Ty}{\partial θ} – X^Ty = 0\)
\(2X^TXθ – \frac{\partial y^TXθ}{\partial θ} – X^Ty = 2X^TXθ – 2X^Ty = 0\) (Note: In matrix differentiation: \( \frac{\partial Aθ}{\partial θ} = A^T\) and \( \frac{\partial θ^TAθ}{\partial θ} = 2A^Tθ\))
we can deduce \(X^TXθ = X^Ty\) and \(θ = (X^TX)^{-1}X^Ty\)
Logistic Regression
Below a table that shows tumor types by size.
Size | Type
1 | 0
1 | 0
2 | 0
2 | 1
3 | 1
3 | 1
Given x, y is distributed according to the Bernoulli distribution with probability of success p = E[y|x].\(P(y|x;θ) = p^y (1-p)^{(1-y)}\)
We define h(x) as a function that returns the expected value (p) of the distribution. We will define this function as:
\(E[y|x] = h_{θ}(x) = g(θ^T x) = \frac{1}{1+exp(-θ^T x)}\). g is called Sigmoid (or logistic) function.
P(y|x; θ) = \(h_{θ}(x)^y (1-h_{θ}(x))^{(1-y)}\)
We need to find θ that maximizes this probability for all values of x. In other words, we need to find θ that maximizes the likelihood function L:\(L(θ)=P(\overrightarrow{y}|X;θ)=\prod_{i=1}^{m} P(y^{(i)}|x^{(i)};θ)\)
Or maximize the log likelihood function l:\(l(θ)=log(L(θ)) = \sum_{i=1}^{m} log(P(y^{(i)}|x^{(i)};θ ))\) \(= \sum_{i=1}^{m} y^{(i)} log(h_{θ}(x^{(i)}))+ (1-y^{(i)}) log(1-h_{θ}(x^{(i)}))\)
Or minimize the \(-l(θ) = \sum_{i=1}^{m} -y^{(i)} log(h_{θ}(x^{(i)})) – (1-y^{(i)}) log(1-h_{θ}(x^{(i)})) = J(θ) \)
J(θ) is convex; to minimize it, we need to solve the equation \(\frac{\partial J(θ)}{\partial θ} = 0\).
There are many methods to solve this equation:
Gradient descent
\(θ_{j} := θ_{j} – α\sum_{i=1}^{m} (h_{θ}(x^{(i)}) – y^{(i)}) * x_j^{(i)} \)

Logit function (inverse of logistic function)
Logit function is defined as follow: \(logit(p) = log(\frac{p}{1-p})\)
The idea in the use of this function is to transform the interval of p (outcome) from [0,1] to [0, ∞]. So instead of applying linear regression on p, we will apply it on logit(p).
Once we find θ that maximizes the Likelihood function, we can then estimate logit(p) given a value of x (\(logit(p) = h_{θ}(x) \)). p can be then calculated using the following formula:\(p = \frac{1}{1+exp(-h_{θ}(x))}\)
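A sketch of fitting the logistic model to the tumor data above by gradient descent (the learning rate and iteration count are arbitrary choices):

```python
import math

def sigmoid(z):
    return 1 / (1 + math.exp(-z))

# Tumor data from the table: size -> type (0/1).
xs = [1, 1, 2, 2, 3, 3]
ys = [0, 0, 0, 1, 1, 1]

t0, t1, alpha = 0.0, 0.0, 0.1
for _ in range(20000):
    g0 = sum(sigmoid(t0 + t1 * x) - y for x, y in zip(xs, ys))
    g1 = sum((sigmoid(t0 + t1 * x) - y) * x for x, y in zip(xs, ys))
    t0 -= alpha * g0
    t1 -= alpha * g1

p = lambda x: sigmoid(t0 + t1 * x)
print(round(p(1), 3), round(p(2), 3), round(p(3), 3))  # low, ~0.5, high
```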
Multinomial Logistic Regression (using maximum likelihood estimation)
In multinomial logistic regression (also called Softmax Regression), y could have more than two outcomes {1,2,3,…,k}.
Below a table that shows tumor types by size.
Size | Type
1 | 1
1 | 1
2 | 2
2 | 2
2 | 3
3 | 3
3 | 3
Given x, we can define a multinomial distribution with probabilities of success \(\phi_j = E[y=j|x]\).\(P(y=j|x;\Theta) = ϕ_j \\ P(y=k|x;\Theta) = 1 – \sum_{j=1}^{k-1} ϕ_j \\ P(y|x;\Theta) = ϕ_1^{1\{y=1\}} * … * ϕ_{k-1}^{1\{y=k-1\}} * (1 – \sum_{j=1}^{k-1} ϕ_j)^{1\{y=k\}}\)
We define \(\tau(y)\) as a function that returns a \(R^{k-1}\) vector with value 1 at the index y: \(\tau(y) = \begin{bmatrix}0\\0\\1\\0\\0\end{bmatrix}\), when \(y \in \{1,2,…,k-1\}\), .
and \(\tau(y) = \begin{bmatrix}0\\0\\0\\0\\0\end{bmatrix}\), when y = k.
We define \(\eta(x)\) as a \(R^{k-1}\) vector = \(\begin{bmatrix}log(\phi_1/\phi_k)\\log(\phi_2/\phi_k)\\…\\log(\phi_{k-1}/\phi_k)\end{bmatrix}\)\(P(y|x;\Theta) = 1 * exp(η(x)^T * \tau(y) – (-log(\phi_k)))\)
This form is an exponential family distribution form.
We can invert \(\eta(x)\) and find that:\(ϕ_j = ϕ_k * exp(η(x)_j)\) \(= \frac{1}{1 + \frac{1-ϕ_k}{ϕ_k}} * exp(η(x)_j)\) \(=\frac{exp(η(x)_j)}{1 + \sum_{c=1}^{k-1} ϕ_c/ϕ_k}\) \(=\frac{exp(η(x)_j)}{1 + \sum_{c=1}^{k-1} exp(η(x)_c)}\)
If we define η(x) as linear function, \(η(x) = Θ^T x = \begin{bmatrix}Θ_{1,1} x_1 +… + Θ_{n,1} x_n \\Θ_{1,2} x_1 +… + Θ_{n,2} x_n\\…\\Θ_{1,k-1} x_1 +… + Θ_{n,k-1} x_n\end{bmatrix}\), and Θ is a \(R^{n*(k-1)}\) matrix.
Then: \(ϕ_j = \frac{exp(Θ_j^T x)}{1 + \sum_{c=1}^{k-1} exp(Θ_c^T x)}\)
The hypothesis function could be defined as: \(h_Θ(x) = \begin{bmatrix}\frac{exp(Θ_1^T x)}{1 + \sum_{c=1}^{k-1} exp(Θ_c^T x)} \\…\\ \frac{exp(Θ_{k-1}^T x)}{1 + \sum_{c=1}^{k-1} exp(Θ_c^T x)} \end{bmatrix}\)
We need to find Θ that maximizes the probabilities P(y=j|x;Θ) for all values of x. In other words, we need to find θ that maximizes the likelihood function L:\(L(θ)=P(\overrightarrow{y}|X;θ)=\prod_{i=1}^{m} P(y^{(i)}|x^{(i)};θ)\) \(=\prod_{i=1}^{m} \phi_1^{1\{y^{(i)}=1\}} * … * \phi_{k-1}^{1\{y^{(i)}=k-1\}} * (1 – \sum_{j=1}^{k-1} \phi_j)^{1\{y^{(i)}=k\}}\) \(=\prod_{i=1}^{m} \prod_{c=1}^{k-1} \phi_c^{1\{y^{(i)}=c\}} * (1 – \sum_{j=1}^{k-1} \phi_j)^{1\{y^{(i)}=k\}}\)
where \(ϕ_j = \frac{exp(Θ_j^T x)}{1 + \sum_{c=1}^{k-1} exp(Θ_c^T x)}\).
Multinomial Logistic Regression (using cross-entropy minimization)
In this section, we will try to minimize the cross-entropy between Y and estimated \(\widehat{Y}\).
We define \(W \in R^{d*n}\), \(b \in R^{d}\) such as \(S(W x + b) = \widehat{Y}\), where S is the Softmax function, d is the number of outputs, and \(x \in R^n\).
To estimate W and b, we will need to minimize the cross-entropy between the two probability vectors Y and \(\widehat{Y}\).
The cross-entropy is defined as below:\(D(\widehat{Y}, Y) = -\sum_{j=1}^d Y_j Log(\widehat{Y_j})\)
Example:
if \(\widehat{y} = \begin{bmatrix}0.7 \\0.1 \\0.2 \end{bmatrix} \& \ y=\begin{bmatrix}1 \\0 \\0 \end{bmatrix}\) then \(D(\widehat{Y}, Y) = D(S(W x + b), Y) = -1*log(0.7)\)
We need to minimize the cross-entropy for all training examples; therefore we will minimize the average cross-entropy of the entire training set.
\(L(W,b) = \frac{1}{m} \sum_{i=1}^m D(S(W x^{(i)} + b), y^{(i)})\), L is called the loss function.
If we define \(W = \begin{bmatrix} — θ_1 — \\ — θ_2 — \\ .. \\ — θ_d –\end{bmatrix}\) such as:\(θ_1=\begin{bmatrix}θ_{1,0}\\θ_{1,1}\\…\\θ_{1,n}\end{bmatrix}, θ_2=\begin{bmatrix}θ_{2,0}\\θ_{2,1}\\…\\θ_{2,n}\end{bmatrix}, θ_d=\begin{bmatrix}θ_{d,0}\\θ_{d,1}\\…\\θ_{d,n}\end{bmatrix}\)
We can then write \(L(W,b) = \frac{1}{m} \sum_{i=1}^m D(S(W x^{(i)} + b), y^{(i)})\)\(= -\frac{1}{m} \sum_{i=1}^m \sum_{j=1}^d 1^{\{y^{(i)}=j\}} log(\frac{exp(θ_j^T x^{(i)})}{\sum_{k=1}^d exp(θ_k^T x^{(i)})})\)
For d=2 (nbr of classes=2), \(= -\frac{1}{m} \sum_{i=1}^m \left(1^{\{y^{(i)}=\begin{bmatrix}1 \\ 0\end{bmatrix}\}} log(\frac{exp(θ_1^T x^{(i)})}{exp(θ_1^T x^{(i)}) + exp(θ_2^T x^{(i)})}) + 1^{\{y^{(i)}=\begin{bmatrix}0 \\ 1\end{bmatrix}\}} log(\frac{exp(θ_2^T x^{(i)})}{exp(θ_1^T x^{(i)}) + exp(θ_2^T x^{(i)})})\right)\)
\(1^{\{y^{(i)}=\begin{bmatrix}1 \\ 0\end{bmatrix}\}}\) means that the value is 1 if \(y^{(i)}=\begin{bmatrix}1 \\ 0\end{bmatrix}\) otherwise the value is 0.
To estimate \(θ_1,…,θ_d\), we need to calculate the derivative and update \(θ_j = θ_j – α \frac{\partial L}{\partial θ_j}\)
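The update above can be sketched using the standard gradient of the average softmax cross-entropy, \((\widehat{Y} - Y)^T X / m\); this is a sketch assuming batch gradient descent and one-hot labels, and the two-point dataset is invented for illustration:

```python
import numpy as np

def softmax(Z):
    E = np.exp(Z - Z.max(axis=1, keepdims=True))
    return E / E.sum(axis=1, keepdims=True)

def gradient_step(W, b, X, Y, alpha):
    """One update theta_j <- theta_j - alpha * dL/dtheta_j.

    X: (m, n) inputs, Y: (m, d) one-hot labels, W: (d, n), b: (d,).
    """
    m = X.shape[0]
    Y_hat = softmax(X @ W.T + b)          # (m, d) predicted probabilities
    gW = (Y_hat - Y).T @ X / m            # gradient of L with respect to W
    gb = (Y_hat - Y).mean(axis=0)         # gradient of L with respect to b
    return W - alpha * gW, b - alpha * gb

# toy data: two points, two classes
X = np.array([[0.0, 0.0], [1.0, 1.0]])
Y = np.array([[1.0, 0.0], [0.0, 1.0]])
W, b = np.zeros((2, 2)), np.zeros(2)
for _ in range(200):
    W, b = gradient_step(W, b, X, Y, alpha=0.5)
```

After a couple hundred steps both training points are classified with high confidence.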
Kernel regression
Kernel regression is a non-linear model. In this model we define the hypothesis as the sum of kernels.
\(\widehat{y}(x) = ϕ(x) * θ = θ_0 + \sum_{i=1}^d K(x, μ_i, λ) θ_i \) such as:
\(ϕ(x) = [1, K(x, μ_1, λ),…, K(x, μ_d, λ)]\) and \(θ = [θ_0, θ_1,…, θ_d]\)
For example, we can define the kernel function as : \(K(x, μ_i, λ) = exp(-\frac{1}{λ} ||x-μ_i||^2)\)
Usually we select d = number of training examples, and \(μ_i = x_i\)
Once the matrix \(ϕ(X)\) is calculated, we can use it as a new engineered feature matrix, and then use the normal equation to find θ:\(θ = {(ϕ(X)^Tϕ(X))}^{-1}ϕ(X)^Ty\)
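A minimal sketch of the whole recipe (Gaussian kernel, centers at the training points, and the normal equation solved through a least-squares routine, which is the numerically safer equivalent); the sine data is made up for illustration:

```python
import numpy as np

def rbf(x, mu, lam):
    # K(x, mu_i, lambda) = exp(-(1/lambda) * ||x - mu_i||^2)
    return np.exp(-np.sum((x - mu) ** 2) / lam)

def design_matrix(X, mus, lam):
    # phi(x) = [1, K(x, mu_1, lam), ..., K(x, mu_d, lam)]
    return np.array([[1.0] + [rbf(x, mu, lam) for mu in mus] for x in X])

X = np.linspace(0.0, 1.0, 20).reshape(-1, 1)    # d = m: one center per example
y = np.sin(2 * np.pi * X[:, 0])
Phi = design_matrix(X, X, lam=0.01)             # shape (m, d + 1)
theta, *_ = np.linalg.lstsq(Phi, y, rcond=None) # least-squares form of the normal equation
y_hat = Phi @ theta                             # near-perfect in-sample fit
```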
Bayes Point Machine
The Bayes Point Machine is a Bayesian linear classifier that can be converted to a nonlinear classifier by using feature expansions or kernel methods, as in the Support Vector Machine (SVM).
More details will be provided.
Ordinal Regression
Ordinal Regression is used for predicting an ordinal variable. An ordinal variable is a categorical variable for which the possible values are ordered (e.g. size: Small, Medium, Large).
More details will be provided.
Poisson Regression
Poisson regression assumes the output variable Y has a Poisson distribution, and assumes the logarithm of its expected value can be modeled by a linear function.
log(E[Y|X]) = log(λ) = θ·x
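Since the notes stop at the model definition, here is a sketch of how the fit could go: there is no closed form, so one can run gradient ascent on the Poisson log-likelihood. The data below is synthetic and the step size and iteration count are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(0)

# synthetic data with log E[y|x] = theta . x (first column is the intercept)
X = np.column_stack([np.ones(500), rng.uniform(-1.0, 1.0, 500)])
true_theta = np.array([0.5, 1.2])
y = rng.poisson(np.exp(X @ true_theta))

# gradient ascent on the log-likelihood sum_i (y_i x_i.theta - exp(x_i.theta))
theta = np.zeros(2)
for _ in range(500):
    lam = np.exp(X @ theta)                  # current rate for each example
    theta += 0.1 * X.T @ (y - lam) / len(y)  # score of the log-likelihood
```

With enough samples the recovered theta lands close to the true coefficients.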
EEDDIITT: this gives a proof of my main claim in my first answer, that a certain function takes its minimum value at a certain primorial. I actually put that information, with a few examples, into the wikipedia article, but it was edited out within a minute as irrelevant. No accounting for taste.
ORIGINAL: We take as given Theorem 327 on page 267 of Hardy and Wright, that for some fixed $0 < \delta < 1,$ the function$$ g(n) = \frac{\phi(n)}{n^{1-\delta}} $$ goes to infinity as $n$ goes to infinity.
Note that $g(1) = 1$ but $g(2) < 1.$ For some $N_\delta,$ whenever $ n > N_\delta$ we get $g(n) > 1.$ It follows that, checking all $1 \leq n \leq N_\delta,$ the quantity $g(n)$ assumes a minimum which is less than 1. Perhaps it assumes this minimum at more than one point. If so, we are taking the largest such value of $n.$
Here we are going to prove that the value of $n$ at which the minimum occurs is the primorial created by taking the product of all the primes $p$ that satisfy $$ p^{1-\delta} \geq p-1. $$As I mentioned, in case the minimum occurs at two different $n,$ this gives the larger of the two.
So, the major task of existence is done by Hardy and Wright. We have the minimum of $g$ at some$$ n = p_1^{a_1} p_2^{a_2} p_3^{a_3} \cdots p_r^{a_r}, $$ with $$ p_1 < p_2 < \cdots < p_r. $$
First, ASSUME that one or more of the $a_i > 1.$ Now,$$ \frac{ g(p_i)}{g(p_i^{a_i})} = p_i^{\delta - a_i \delta} = p_i^{\delta (1 - a_i)} < 1. $$ As a result, if we decrease that exponent to one, the value of $g$ is lowered, contradicting minimality. So all exponents are actually 1.
Second, ASSUME that there is some gap, some prime $q < p_r $ such that $q \neq p_j$ for all $j,$ that is $q$ does not divide $n.$ Well, for real variable $x > 0,$ the function $$ \frac{x-1}{x^{1-\delta}} $$ is always increasing, as the first derivative is$$ x^{\delta - 2} (\delta x +(1-\delta)). $$It follows that, in the factorization of $n,$ if we replace $p_r$ by $q,$ the value of $g$ is lowered, contradicting minimality. So the prime factors of $n$ are consecutive, beginning with 2, and $n$ is called a primorial.
Finally, what is the largest prime factor of $n?$ Beginning with 2, multiplying by any prime $p$ with $$ \frac{p-1}{p^{1-\delta}} \leq 1 $$ shrinks the value of $g$ or keeps it the same, so in demanding the largest $n$ in case there are two attaining the minimum of $g,$ we take $n$ to be the product of all primes $p$ satisfying$$ p - 1 \leq p^{1-\delta}, $$ or$$ p^{1-\delta} \geq p-1 $$as I first wrote it.
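Not part of the argument, but the claim is easy to sanity-check numerically. For $\delta = 1/2$ the only prime with $p^{1-\delta} \geq p-1$ is $2$ (since $\sqrt{3} < 2$), so the claimed minimizer is the primorial $2$, and a brute-force search agrees:

```python
def totient(n):
    # Euler's phi via trial-division factorization
    result, m, p = n, n, 2
    while p * p <= m:
        if m % p == 0:
            while m % p == 0:
                m //= p
            result -= result // p
        p += 1
    if m > 1:
        result -= result // m
    return result

delta = 0.5
g = lambda n: totient(n) / n ** (1 - delta)   # g(n) = phi(n) / n^(1-delta)

# primorial of the primes with p**(1-delta) >= p - 1
P = 1
for p in [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]:
    if p ** (1 - delta) >= p - 1:
        P *= p

argmin = min(range(1, 1000), key=g)           # brute-force minimizer of g
```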
Examples are given in my first answer to this same question.
EEDDIITTTT: Jean-Louis Nicolas showed, in 1983, that the Riemann Hypothesis is true if and only if, for all primorials $P,$$$ \frac{e^\gamma \phi(P) \log \log P}{P} < 1. $$ Alright, the exact reference is: Petites valeurs de la fonction d'Euler. Journal of Number Theory, volume 17 (1983), number 3, pages 375-388.
On the other hand, if RH is false, the inequality is true for infinitely many primorials and false for infinitely many. So, either way, it is true for infinitely many primorials (once again, these are $P = 2 \cdot 3 \cdot 5 \cdots p$ the product of consecutive primes beginning with 2).
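Nicolas's inequality is easy to evaluate for the first few primorials (a numerical sanity check, not part of his proof). Using $\phi(P)/P = \prod_{p \mid P}(1 - 1/p),$ all the ratios come out below $1$ and creep upward toward it:

```python
import math

GAMMA = 0.5772156649015329            # Euler-Mascheroni constant
PRIMES = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]

ratios = []
P, phi_over_P = 1, 1.0
for p in PRIMES:
    P *= p                            # next primorial
    phi_over_P *= 1 - 1 / p           # phi(P)/P = prod over p|P of (1 - 1/p)
    if P > 2:                         # skip P = 2, where log log P < 0
        ratios.append(math.exp(GAMMA) * phi_over_P * math.log(math.log(P)))
```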
For whatever reason, the criterion of Guy Robin, who was a student of Nicolas, got to be better known. |
Introduction: Spectral data and OLS
As an (astro)physicist, a lot of your time goes into the analysis of some kind of spectrum. Whether you are studying some star far away or taking measurements on some material in a laboratory, very often spectrums (spectra?) are involved and very often they are noisy. For those unfamiliar: a spectrum generally shows the intensity of some phenomenon as a function of wavelength.
Normally, when confronted with such noisy data, the first thing to do is to smooth the data. When I was still studying physics, we mostly used software for that purpose. Smoothing was nothing more than a click of a button.
Recently, I came to understand that smoothing can be performed with the use of weighted least squares (WLS) regression. I remember WLS from studying econometrics; at that time it was only introduced as a way to deal with heteroskedasticity, which is a fancy word for describing data where the variance of one variable is not uniform when regressed against another variable. Now I don’t want to go too much into that area at the moment, but if you are interested just take a look at this image to understand the problem and these notes on how to deal with it.
Instead I want to explain how WLS can be used for smoothing of data. Let’s start with some noisy data:
Now before we go into the “weighted” part of WLS, I first want to explain how you actually use this data in a regression, since that took me a while to wrap my head around. Normally, when doing regression or applying any kind of other machine learning algorithm, it is quite clear what the features (input) and what the labels (output) are. However, spectral data normally comes as a CSV in which the header-row specifies the different wavelengths, and all the remaining rows denote a single spectrum, with a single intensity value for each wavelength. It took me a while to understand that from a regression perspective, the intensity is actually the label (\(y\)) and the features (\(x\)) are the individual wavelengths for which intensities are recorded.
Let’s use this knowledge to do an -unweighted- least squares regression really quick. We know that ordinary least squares (OLS) minimizes the following objective function (it’s kind of in the name, innit?):
\begin{align} RSS_{OLS} = \sum_i (y_i - x_i^T \theta)^2 \end{align}
Which leads to the normal equation:
\begin{align} (X^TX)\theta &= X^Ty \end{align}
Which, solved for \(\theta\), becomes:
\begin{align} \theta &= (X^TX)^{-1}X^Ty \end{align}
So that the OLS estimate for \(\hat{y}\) becomes:
\begin{align} \hat{y} &= X\theta \\
\hat{y} &= X(X^TX)^{-1}X^Ty \end{align}
We create our \(X\) (constant term and wavelengths) and \(y\) (intensity values) as follows:
\begin{align} X = \begin{bmatrix} 1 & 1150 \\ 1 & 1151 \\ 1 & 1152 \\ \vdots & \vdots \\ 1 & 1599 \end{bmatrix} \hspace{1cm} y = \begin{bmatrix} 0.693 \\ 1.464 \\ 2.068 \\ \vdots \\ 0.382 \end{bmatrix} \end{align}
And find something which is -as expected- quite useless:
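The steps above can be reproduced in a few lines of NumPy. The noisy "spectrum" below is synthetic (a smooth bump plus Gaussian noise), since the post's data isn't included:

```python
import numpy as np

rng = np.random.default_rng(1)

# fake noisy spectrum on wavelengths 1150..1599 nm
wl = np.arange(1150, 1600).astype(float)
intensity = np.exp(-((wl - 1300.0) ** 2) / 2000.0) + 0.3 * rng.standard_normal(wl.size)

# design matrix [1, lambda] and the normal equation theta = (X^T X)^{-1} X^T y
X = np.column_stack([np.ones_like(wl), wl])
theta = np.linalg.solve(X.T @ X, X.T @ intensity)
y_hat = X @ theta   # one straight line through the whole spectrum: useless
```

As the text says, a single global line captures almost none of the structure.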
Weighted least squares regression for smoothing of data
The idea here is that we are not going to fit a single line, but a lot of very small pieces of line. In fact, we will perform a single (weighted) linear regression for each point at which the function will be evaluated. This means that our model is no longer parametric. In other words, the model is no longer fully described by a set of parameters \(\theta\); instead it remains dependent on our input data. Let’s see how that works:
The objective to minimize in the weighted linear regression model is given by:
\begin{align} RSS_{WLS} = \sum_i w_i(y_i - x_i^T \theta)^2 \end{align}
So the only difference with \(RSS_{OLS}\) is the introduction of a weight \(w_i\) which weighs the importance of each squared error term \(i\). When \(w_i\) is small, the error term of observation \(i\) will be pretty much ignored; when \(w_i\) is large, the observation will have a large impact on the fit. What we will do, is make \(w_i\) dependent on the distance of \(x_i\) to the point where we want to evaluate our model:
\begin{align} w_i = f_{\tau}(x) = \exp\Big(\frac{-(x - x_i)^2}{2\tau^2}\Big) \end{align}
Therefore, when we evaluate the model at some point \(x\) we can be sure that only those values close to \(x\) will be taken into consideration. The further away we go from \(x\), the less pronounced are the effects on the estimate.
The speed by which the effect of observations decreases with their distance to \(x\) is controlled by the parameter \(\tau\). This is a parameter we have to set. For now, we set this parameter to 5.
It can be shown, that WLS leads to:
\begin{align} \hat{y} &= X(X^TWX)^{-1}X^TWy \end{align}
Where \(W\) is the (diagonal) weight matrix:
\begin{align} W = \frac{1}{2}\begin{bmatrix} w_0 & 0 & \cdots & 0 \\ 0 & w_1 & \cdots & 0 \\ \vdots & & \ddots & \vdots \\ 0 & 0 & \cdots & w_m \end{bmatrix} \end{align}
Putting it all together, this results in something one can actually work with:
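A sketch of the whole procedure, with one weighted fit per evaluation point. The constant \(\frac{1}{2}\) in \(W\) cancels inside \((X^TWX)^{-1}X^TWy\), so it is dropped here, and the data is again synthetic:

```python
import numpy as np

def wls_at(x0, X, y, tau):
    """Fit a weighted straight line and evaluate it at the single point x0."""
    w = np.exp(-((X[:, 1] - x0) ** 2) / (2 * tau ** 2))   # Gaussian weights
    W = np.diag(w)
    theta = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)     # (X^T W X) theta = X^T W y
    return float(np.array([1.0, x0]) @ theta)

rng = np.random.default_rng(2)
xs = np.linspace(0.0, 10.0, 200)
y = np.sin(xs) + 0.3 * rng.standard_normal(xs.size)       # noisy signal
X = np.column_stack([np.ones_like(xs), xs])
smooth = np.array([wls_at(x0, X, y, tau=0.5) for x0 in xs])  # smoothed curve
```

The smoothed curve tracks the underlying signal far more closely than the raw points do; shrinking tau makes the fit wigglier, growing it pushes the result back toward the useless global line.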
Personally, I think it is quite fascinating that a minor modification of OLS can result in such a different outcome for such a different use-case. This shows me that having a thorough understanding of the mathematics and foundations behind an algorithm or approach is so valuable. It allows you to take these foundations and re-use them in a different way, which leads to tailored solutions for the problem at hand.
Next post, I will use this smoothed data to perform functional regression, in which an unknown part of a curve is predicted from training data. |
Help:Editing This is a copy of the master help page at m:Help:Editing. Do not edit this page. Edits will be lost in the next update from the master page. Either edit the master help page for all projects at Meta, or edit the project-specific text at Template:Ph:Editing. You are welcome to copy the exact wikitext from the master page at Meta and paste it into this page at any time.
This Editing Overview has a lot of
wikitext examples. You may want to keep this page open in a separate browser window for reference while you edit.
Each of the topics covered here is covered somewhere else in more detail. Please look in the box on the right for the topic you are interested in.
Editing basics

Start editing
To start editing a MediaWiki page, click on the "Edit this page" (or just "edit") link at one of its edges. This will bring you to the edit page: a page with a text box containing the wikitext: the editable source code from which the server produces the webpage. If you just want to experiment, please do so in the sandbox, not here.

Summarize your changes
You should write a short edit summary in the small field below the edit-box. You may use shorthand to describe your changes, as described in the legend.

Preview before saving
When you have finished, press preview to see how your changes will look -- before you make them permanent. Repeat the edit/preview process until you are satisfied, then click "Save" and your changes will be immediately applied to the article.

You can see some more detailed examples at Help:Wiki markup examples. If you want to try out things without danger of doing any harm, you can do so in the Wikiquote:Sandbox.

Basic text formatting
What it looks like What you type
You can
You can ''emphasize text'' by putting two apostrophes on each side. Three apostrophes will emphasize it '''strongly'''. Five apostrophes is '''''even stronger'''''.
A single newline has no effect on the layout.
But an empty line starts a new paragraph.
A single newline has no effect on the layout. But an empty line starts a new paragraph.
You can break lines
You can break lines<br> without starting a new paragraph.<br> Please use this sparingly.
You should "sign" your comments on talk pages:
You should "sign" your comments on talk pages: : Three tildes gives your user name: ~~~ : Four tildes give your user name plus date/time: ~~~~ : Five tildes gives the date/time alone: ~~~~~
You can use
Put text in a
Superscripts and subscripts:x
You can use <b>HTML tags</b>, too, if you want. Some useful ways to use HTML: Put text in a <tt>typewriter font</tt>. The same font is generally used for <code>computer code</code>. <strike>Strike out</strike> or <u>underline</u> text, or write it <span style="font-variant:small-caps"> in small caps</span>. If the wiki has the templates, this can {{smallcaps|be even easier to write}}. Superscripts and subscripts: x<sup>2</sup>, x<sub>2</sub>
For a list of HTML tags that are allowed, see HTML in wikitext. However, you should avoid HTML in favor of Wiki markup whenever possible.
Organizing your writing
What it looks like What you type
Headings organize your writing into sections. The Wiki software can automatically generate a table of contents from them.
Using more equals signs creates a subsection.
Don't skip levels, like from two to four equals signs. Start with two equals signs; don't use single equals signs.
== Section headings == Headings organize your writing into sections. The Wiki software can automatically generate a table of contents from them. === Subsection === Using more equals signs creates a subsection. ==== A smaller subsection ==== Don't skip levels, like from two to four equals signs. Start with two equals signs; don't use single equals signs.
marks the end of the list.
* ''Unordered lists'' are easy to do: ** Start every line with a star. *** More stars indicate a deeper level. *A newline *in a list marks the end of the list. *Of course you can start again.
A newline marks the end of the list.
# Numbered lists are also good: ## Very organized ## Easy to follow A newline marks the end of the list. #New numbering starts with 1. * You can even do mixed lists *# and nest them *#* or break lines<br>in lists.
Another kind of list is a
Another kind of list is a '''definition list''': ; word : definition of the word ; longer phrase : phrase defined
A newline after that starts a new paragraph.
:A colon indents a line or paragraph. A newline after that starts a new paragraph. ::This is often used for discussion on talk pages.
You can make horizontal dividing lines to separate text.
But you should usually use sections instead, so that they go in the table of contents.
You can make horizontal dividing lines to separate text. ---- But you should usually use sections instead, so that they go in the table of contents.

Links
You will often want to make clickable
links to other pages.
What it looks like What you type
You can put formatting around a link.Example:
The weather in London is a page that doesn't exist yet. You can create it by clicking on the link.
Here's a link to a page named [[Official position]]. You can even say [[official position]]s and the link will show up right. You can put formatting around a link. Example: ''[[Wikipedia]]''. The ''first letter'' will automatically be capitalized, so [[wikipedia]] is the same as [[Wikipedia]]. Capitalization matters after the first letter. [[The weather in London]] is a page that doesn't exist yet. You can create it by clicking on the link.
You can link to a page section by its title:
If multiple sections have the same title, add a number. #Example section 3 goes to the third section named "Example section".
You can link to a page section by its title: *[[List of cities by country#Morocco]]. *[[List of cities by country#Sealand]]. If multiple sections have the same title, add a number. [[#Example section 3]] goes to the third section named "Example section".
You can make a link point to a different place with a piped link. Put the link target first, then the pipe character "|", then the link text.
You can make a link point to a different place with a [[Help:Piped link|piped link]]. Put the link target first, then the pipe character "|", then the link text. *[[Help:Link|About Links]] *[[List of cities by country#Morocco| Cities in Morocco]]
You can make an external link just by typing a URL: http://www.nupedia.com
You can give it a title: Nupedia
Or leave the title blank: [1]
You can make an external link just by typing a URL: http://www.nupedia.com You can give it a title: [http://www.nupedia.com Nupedia] Or leave the title blank: [http://www.nupedia.com] #REDIRECT [[United States]]
Category links don't show up, but add the page to a category.
Add an extra colon to actually link to the category: Category:Wikiquote help
Category links don't show up, but add the page to a category. [[Category:Wikiquote help]] Add an extra colon to actually link to the category: [[:Category:Wikiquote help]]
The Wiki reformats linked dates to match the reader's date preferences. These three dates will show up the same if you choose a format in your Preferences:
The Wiki reformats linked dates to match the reader's date preferences. These three dates will show up the same if you choose a format in your [[Special:Preferences|]]: * [[July 20]], [[1969]] * [[20 July]] [[1969]] * [[1969]]-[[07-20]] Just show what I typed[edit]
A few different kinds of formatting will tell the Wiki to display things as you typed them.
What it looks like What you type <nowiki> tags
The nowiki tag ignores [[Wiki]] ''markup''. It reformats text by removing newlines and multiple spaces. It still interprets special characters: →
<nowiki> The nowiki tag ignores [[Wiki]] ''markup''. It reformats text by removing newlines and multiple spaces. It still interprets special characters: → </nowiki> <pre> tags The pre tag ignores [[Wiki]] ''markup''. It also doesn't reformat text. It still interprets special characters: → <nowiki> <pre> The pre tag ignores [[Wiki]] ''markup''. It also doesn't reformat text. It still interprets special characters: →</nowiki> Leading spaces
Leading spaces are another way to preserve formatting. Putting a space at the beginning of each line stops the text from being reformatted. It still interprets Wiki markup and special characters: →

What you type:
 Leading spaces are another way to preserve formatting.
 Putting a space at the beginning of each line stops the text from being reformatted.
 It still interprets [[Wiki]] ''markup'' and special characters: →

Images, tables, video and sounds
This is a very quick introduction. For more information, see:
Help:Images and other uploaded files for how to upload files Help:Extended image syntax for how to arrange images on the page Help:Table for how to create a table
What it looks like What you type
A picture, including alternate text:
You can put the image in a frame with a caption:
A picture, including alternate text: [[Image:Wikiquote-logo-en.png|The logo for this Wiki]] You can put the image in a frame with a caption: [[Image:Wikiquote-logo-en.png|frame|The logo for this Wiki]]
A link to Wikipedia's page for the image: Image:Wikiquote-logo-en.png
Or a link directly to the image itself: Media:Wikiquote-logo-en.png
A link to Wikipedia's page for the image: [[:Image:Wikiquote-logo-en.png]] Or a link directly to the image itself: [[Media:Wikiquote-logo-en.png]]
Use
Use '''media:''' links to link to sounds or videos: [[media:Sg_mrob.ogg|A sound file]]
<center> {| border=1 cellspacing=0 cellpadding=5 | This | is |- | a | '''table''' |} </center> Mathematical formulae[edit]
What it looks like What you type
<math>\sum_{n=0}^\infty \frac{x^n}{n!}</math> Templates[edit] Templates are segments of Wiki markup that are meant to be copied automatically ("transcluded") into a page.You add them by putting the template's name in {{double braces}}.
Some templates take
parameters, as well, which you separate with the pipe character.
What it looks like What you type {{Transclusion demo}}
This template takes two parameters, and creates underlined text with a hover box:
This template takes two parameters, and creates underlined text with a hover box: {{H:title|This is the hover text| Hover your mouse over this text}}

Tips and tricks

Page protection
In a few cases, where an administrator has protected a page, the link labeled "Edit this page" is replaced by the text "View source" (or equivalents in the language of the project). In that case the page cannot be edited. Protection of an image page includes protection of the image itself.
Edit conflicts
If someone else makes an edit while you are making yours, the result is an edit conflict. Many conflicts can be automatically resolved by the Wiki. If it can't be resolved, however, you will need to resolve it yourself. The Wiki gives you two text boxes, where the top one is the other person's edit and the bottom one is your edit. Merge your edits into the top edit box, which is the only one that will be saved.
Reverting
The edit link of a page showing an old version leads to an edit page with the old wikitext. This is a useful way to restore the old version of a page. However, the edit link of a diff page gives the current wikitext, even if the diff page shows an old version below the table of differences.
Error messages
If you get an error message upon saving a page, you can't tell whether the actual save has failed or just the confirmation. You can go back and save again, and the second save will have no effect, or you can check "My contributions" to see whether the edit went through.
Checking spelling and editing in your favorite editor
You may find it more convenient to copy and paste the text first into your favorite text editor, edit and spell check it there, and then paste it back into your web browser to preview. This way, you can also keep a local backup copy of the pages you have edited. It also allows you to make changes offline.
If you edit this way, it's best to leave the editing page open after you copy from it, using the same edit box to submit your changes, so that the usual edit conflict mechanism can deal with it. If you return to the editing page later, please make sure that nobody else has edited the page in the meantime. If someone has, you'll need to merge their edits into yours by using the diff feature in the page history.
Composition of the edit page
The editing page consists of these sections:
- The edit toolbar (optional)
- The editing text box
- The edit summary box
- Save/Preview/Cancel links
- A list of templates used on the page
- A preview, if you have requested one. Your preferences may place the preview at the top of the page instead.

Position-independent wikitext
No matter where you put these things in the wikitext, the resulting page is displayed the same way:
Minor edits
A logged-in user can mark an edit as "minor". Minor edits are generally spelling corrections, formatting, and minor rearrangement of text. Users may choose to
hide minor edits when viewing Recent Changes.
Marking a significant change as a minor edit is considered bad Wikiquette. If you have accidentally marked an edit as minor, make a dummy edit, verify that the "[ ] This is a minor edit" check-box is unchecked, and explain in the edit summary that the previous edit was not minor.

See also
Help:Editing FAQ · Help:Automatic conversion of wikitext · Help:Calculation · Help:Editing toolbar · Help:HTML in wikitext · Protecting pages · Help:Starting a new page · Help:Variable · UseModWiki and Wikipedia:PHP script · HTML elements · Live preview - a way to preview your edits without contacting the server · Image:Special characters Verdana IE.png - shows special characters with codes, and also shows problem characters.

Wikiquote-specific content

Links to other help pages
Help contents - all pages in the Help namespace: Meta b: c: n: w: q: wikisource wiktionary |
Hebbian theory describes a basic mechanism for synaptic plasticity wherein an increase in synaptic efficacy arises from the presynaptic cell's repeated and persistent stimulation of the postsynaptic cell. Introduced by Donald Hebb in 1949, it is also called Hebb's rule, Hebb's postulate, and cell assembly theory, and states: Let us assume that the persistence or repetition of a reverberatory activity (or "trace") tends to induce lasting cellular changes that add to its stability.… When an axon of cell A is near enough to excite a cell B and repeatedly or persistently takes part in firing it, some growth process or metabolic change takes place in one or both cells such that A's efficiency, as one of the cells firing B, is increased.
The theory is often summarized as "cells that fire together, wire together", although this is an oversimplification of the nervous system not to be taken literally, as well as not accurately representing Hebb's original statement on cell connectivity strength changes. The theory is commonly evoked to explain some types of associative learning in which simultaneous activation of cells leads to pronounced increases in synaptic strength. Such learning is known as Hebbian learning.

Hebbian engrams and cell assembly theory
Hebbian theory concerns how neurons might connect themselves to become engrams. Hebb's theories on the form and function of cell assemblies can be understood from the following:
"The general idea is an old one, that any two cells or systems of cells that are repeatedly active at the same time will tend to become 'associated', so that activity in one facilitates activity in the other." (Hebb 1949, p. 70) "When one cell repeatedly assists in firing another, the axon of the first cell develops synaptic knobs (or enlarges them if they already exist) in contact with the soma of the second cell." (Hebb 1949, p. 63)
Gordon Allport posits additional ideas regarding cell assembly theory and its role in forming engrams, along the lines of the concept of auto-association, described as follows:
"If the inputs to a system cause the same pattern of activity to occur repeatedly, the set of active elements constituting that pattern will become increasingly strongly interassociated. That is, each element will tend to turn on every other element and (with negative weights) to turn off the elements that do not form part of the pattern. To put it another way, the pattern as a whole will become 'auto-associated'. We may call a learned (auto-associated) pattern an engram." (Allport 1985, p. 44)
Hebbian theory has been the primary basis for the conventional view that when analyzed from a holistic level, engrams are neuronal nets or neural networks.
Experiments on Hebbian synapse modification mechanisms at the central nervous system synapses of vertebrates are much more difficult to control than are experiments with the relatively simple peripheral nervous system synapses studied in marine invertebrates. Much of the work on long-lasting synaptic changes between vertebrate neurons (such as long-term potentiation) involves the use of non-physiological experimental stimulation of brain cells. However, some of the physiologically relevant synapse modification mechanisms that have been studied in vertebrate brains do seem to be examples of Hebbian processes. One such study reviews results from experiments that indicate that long-lasting changes in synaptic strengths can be induced by physiologically relevant synaptic activity working through both Hebbian and non-Hebbian mechanisms.
Principles
From the point of view of artificial neurons and artificial neural networks, Hebb's principle can be described as a method of determining how to alter the weights between model neurons. The weight between two neurons increases if the two neurons activate simultaneously—and reduces if they activate separately. Nodes that tend to be either both positive or both negative at the same time have strong positive weights, while those that tend to be opposite have strong negative weights.
This original principle is perhaps the simplest form of weight selection. While this means it can be relatively easily coded into a computer program and used to update the weights for a network, it also limits the range of applications of Hebbian learning. Today, the term Hebbian learning generally refers to some form of mathematical abstraction of the original principle proposed by Hebb. In this sense, Hebbian learning involves weights between learning nodes being adjusted so that each weight better represents the relationship between the nodes. As such, many learning methods can be considered to be somewhat Hebbian in nature.
The following is a formulaic description of Hebbian learning: (note that many other descriptions are possible)
$ \,w_{ij}=x_ix_j $
where $ w_{ij} $ is the weight of the connection from neuron $ j $ to neuron $ i $ and $ x_i $ the input for neuron $ i $. Note that this is pattern learning (weights updated after every training example). In a Hopfield network, connections $ w_{ij} $ are set to zero if $ i=j $ (no reflexive connections allowed). With binary neurons (activations either 0 or 1), connections would be set to 1 if the connected neurons have the same activation for a pattern.
Another formulaic description is:
$ w_{ij} = \frac{1}{p} \sum_{k=1}^p x_i^k x_j^k\, $ ,
where $ w_{ij} $ is the weight of the connection from neuron $ j $ to neuron $ i $, $ p $ is the number of training patterns, and $ x_{i}^k $ the $ k $th input for neuron $ i $. This is learning by epoch (weights updated after all the training examples are presented). Again, in a Hopfield network, connections $ w_{ij} $ are set to zero if $ i=j $ (no reflexive connections).
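The epoch rule is easy to exercise on a toy Hopfield-style network. This sketch uses ±1 activations (a common variant of the 0/1 convention mentioned in the text) and two made-up patterns:

```python
import numpy as np

# two patterns over four neurons; epoch rule w_ij = (1/p) sum_k x_i^k x_j^k
patterns = np.array([[1.0, -1.0,  1.0, -1.0],
                     [1.0,  1.0, -1.0, -1.0]])
p = patterns.shape[0]
W = patterns.T @ patterns / p
np.fill_diagonal(W, 0.0)               # w_ii = 0: no reflexive connections

# each stored pattern should be a fixed point of x -> sign(W x)
recalled = np.sign(W @ patterns[0])
```

Because the two patterns happen to be orthogonal, both are recovered exactly as fixed points of the recall dynamics.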
A variation of Hebbian learning that takes into account phenomena such as blocking and many other neural learning phenomena is the mathematical model of Harry Klopf. Klopf's model reproduces a great many biological phenomena, and is also simple to implement.
Generalization and stability
Hebb's Rule is often generalized as
$ \,\Delta w_i = \eta x_i y $,
or the change in the $ i $th synaptic weight $ w_i $ is equal to a learning rate $ \eta $ times the $ i $th input $ x_i $ times the postsynaptic response $ y $. Often cited is the case of a linear neuron,
$ \,y = \sum_j w_j x_j $,
and the previous section's simplification takes both the learning rate and the input weights to be 1. This version of the rule is clearly unstable, as in any network with a dominant signal the synaptic weights will increase or decrease exponentially. However, it can be shown that for
any neuron model, Hebb's rule is unstable. Therefore, network models of neurons usually employ other learning theories such as BCM theory, Oja's rule, or the Generalized Hebbian Algorithm.

See also
Anti-Hebbian learning · BCM theory · Coincidence Detection in Neurobiology · Dale's principle · Generalized Hebbian Algorithm · Leabra · Long-term potentiation · Memory · Metaplasticity · Neural networks · Oja learning rule · Tetanic stimulation · Spike-timing-dependent plasticity · Synaptotropic hypothesis

Further reading
Hebb, D.O. (1949). The Organization of Behavior. New York: Wiley.
Hebb, D.O. (1961). "Distinctive features of learning in the higher animal". In J. F. Delafresnaye (Ed.), Brain Mechanisms and Learning. London: Oxford University Press.
Hebb, D.O., and Penfield, W. (1940). Human behaviour after extensive bilateral removal from the frontal lobes. Archives of Neurology and Psychiatry 44: 421–436.
Allport, D.A. (1985). "Distributed memory, modular systems and dysphasia". In Newman, S.K. and Epstein, R. (Eds.), Current Perspectives in Dysphasia. Edinburgh: Churchill Livingstone.
Bishop, C.M. (1995). Neural Networks for Pattern Recognition. Oxford: Oxford University Press.
Paulsen, O., Sejnowski, T. J. (2000). Natural patterns of activity and long-term synaptic plasticity. Current Opinion in Neurobiology 10(2): 172–179.
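The instability of the plain rule, and its repair by Oja's rule, can be seen in a small simulation. The Gaussian inputs and learning rate below are made up for illustration and are not from the article:

```python
import numpy as np

rng = np.random.default_rng(3)
# zero-mean inputs whose first coordinate has the larger variance
X = rng.standard_normal((1000, 2)) * np.array([2.0, 0.5])
eta = 0.01

# plain Hebb, delta w = eta * y * x with y = w.x: the norm keeps growing
w_hebb = np.array([0.1, 0.1])
for x in X[:200]:
    w_hebb = w_hebb + eta * (w_hebb @ x) * x

# Oja's rule, delta w = eta * y * (x - y * w): the norm stays near 1 and
# w turns toward the leading principal direction of the inputs
w_oja = np.array([0.1, 0.1])
for _ in range(50):
    for x in X:
        y = w_oja @ x
        w_oja = w_oja + eta * y * (x - y * w_oja)
```

The decay term -eta·y²·w in Oja's update is what keeps the weight vector bounded; dropping it recovers the runaway behavior of the plain rule.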
This page uses Creative Commons Licensed content from Wikipedia.
Consider a $C^k$, $k\ge 2$, Lorentzian manifold $(M,g)$ and let $\Box$ be the usual wave operator $\nabla^a\nabla_a$. Given $p\in M$, $s\in\Bbb R,$ and $v\in T_pM$, can we find a neighborhood $U$ of $p$ and $u\in C^k(U)$ such that $\Box u=0$, $u(p)=s$ and $\mathrm{grad}\, u(p)=v$?
The tog is a measure of thermal resistance of a unit area, also known as thermal insulance. It is commonly used in the textile industry and often seen quoted on, for example, duvets and carpet underlay. The Shirley Institute in Manchester, England developed the tog as an easy-to-follow alternative to the SI unit of m²K/W. The name comes from the informal word "togs" for clothing, which itself was probably derived from the word toga, a Roman garment. The basic unit of insulation coefficient is the RSI (1 m²K/W). 1 tog = 0.1 RSI. There is also a clo clothing unit equivalent to 0.155 RSI or 1.55 tog...
The stone or stone weight (abbreviation: st.) is an English and imperial unit of mass now equal to 14 pounds (6.35029318 kg). England and other Germanic-speaking countries of northern Europe formerly used various standardised "stones" for trade, with their values ranging from about 5 to 40 local pounds (roughly 3 to 15 kg) depending on the location and objects weighed. The United Kingdom's imperial system adopted the wool stone of 14 pounds in 1835. With the advent of metrication, Europe's various "stones" were superseded by or adapted to the kilogram from the mid-19th century on. The stone continues...
Can you tell me why this question deserves to be negative?I tried to find faults and I couldn't: I did some research, I did all the calculations I could, and I think it is clear enough . I had deleted it and was going to abandon the site but then I decided to learn what is wrong and see if I ca...
I am a bit confused about classical physics's angular momentum. For the orbital motion of a point mass: if we pick a new coordinate origin (one that doesn't move w.r.t. the old one), angular momentum should still be conserved, right? (I calculated a quite absurd result: it is no longer conserved, with an additional term that varies with time.)
in the new coordinates: $\vec {L'}=\vec{r'} \times \vec{p'}$
$=(\vec{R}+\vec{r}) \times \vec{p}$
$=\vec{R} \times \vec{p} + \vec L$
where the first term varies with time ($\vec{R}$ is the constant shift of the coordinate origin, while $\vec{p}$ keeps rotating).
Would anyone be kind enough to shed some light on this for me?
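One way to see what is happening is to evaluate both angular momenta along a concrete orbit. The sketch below (unit mass, unit circular orbit about the force center, and an arbitrary shift $\vec{R}=(1,0)$, all assumed for illustration) shows that $\vec{L}$ about the force center is constant while $\vec{L'}=\vec{L}+\vec{R}\times\vec{p}$ oscillates, precisely because $\vec{R}\times\vec{p}$ is time dependent: the torque of the central force about the shifted point is nonzero.

```python
import numpy as np

t = np.linspace(0.0, 2 * np.pi, 200)
m = 1.0

# Unit circular orbit about the origin (the force center)
r = np.stack([np.cos(t), np.sin(t)], axis=1)
v = np.stack([-np.sin(t), np.cos(t)], axis=1)
p = m * v

R = np.array([1.0, 0.0])  # constant shift of the origin

# z-component of L = r x p for each choice of origin
Lz = r[:, 0] * p[:, 1] - r[:, 1] * p[:, 0]
Lz_shifted = (r[:, 0] + R[0]) * p[:, 1] - (r[:, 1] + R[1]) * p[:, 0]

print(Lz.min(), Lz.max())                  # constant, equal to 1 everywhere
print(Lz_shifted.min(), Lz_shifted.max())  # oscillates between ~0 and 2
```

Angular momentum is only conserved about points where the net torque vanishes; for a central force that is the force center. (For a truly isolated system, $\vec{p}$ is constant, so $\vec{R}\times\vec{p}$ is constant too and conservation survives the shift.)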
From what we discussed, your literary taste seems to be classical/conventional in nature. That book is inherently unconventional in nature; it's not supposed to be read as a novel, it's supposed to be read as an encyclopedia
@BalarkaSen Dare I say it, my literary taste continues to change as I have kept on reading :-)
One book that I finished reading today, The Sense of An Ending (different from the movie with the same title) is far from anything I would've been able to read, even, two years ago, but I absolutely loved it.
I've just started watching the Fall - it seems good so far (after 1 episode)... I'm with @JohnRennie on the Sherlock Holmes books and would add that the most recent TV episodes were appalling. I've been told to read Agatha Christy but haven't got round to it yet
Is it possible to ever make a time machine? Please give an easy answer, a simple one. — A simple answer, but a possibly wrong one, is to say that a time machine is not possible. Currently, we don't have either the technology to build one, nor a definite, proven (or generally accepted) idea of how we could build one. — Countto10, 47 secs ago
@vzn if it's a romantic novel, which it looks like, it's probably not for me - I'm getting to be more and more fussy about books and have a ridiculously long list to read as it is. I'm going to counter that one by suggesting Ann Leckie's Ancillary Justice series
Although if you like epic fantasy, Malazan book of the Fallen is fantastic
@Mithrandir24601 lol it has some love story but its written by a guy so cant be a romantic novel... besides what decent stories dont involve love interests anyway :P ... was just reading his blog, they are gonna do a movie of one of his books with kate winslet, cant beat that right? :P variety.com/2016/film/news/…
@vzn "he falls in love with Daley Cross, an angelic voice in need of a song." I think that counts :P It's not that I don't like it, it's just that authors very rarely do anywhere near a decent job of it. If it's a major part of the plot, it's often either eyeroll worthy and cringy or boring and predictable with OK writing. A notable exception is Stephen Erikson
@vzn depends exactly what you mean by 'love story component', but often yeah... It's not always so bad in sci-fi and fantasy where it's not in the focus so much and just evolves in a reasonable, if predictable way with the storyline, although it depends on what you read (e.g. Brent Weeks, Brandon Sanderson). Of course Patrick Rothfuss completely inverts this trope :) and Lev Grossman is a study on how to do character development and totally destroys typical romance plots
@Slereah The idea is to pick some spacelike hypersurface $\Sigma$ containing $p$. Now specifying $u(p)$ is trivial because the wave equation is invariant under constant perturbations. So that's whatever. But I can specify $\nabla u(p)|_\Sigma$ by specifying $u(\cdot, 0)$ and differentiating along the surface. For the Cauchy theorems I can also specify $u_t(\cdot,0)$.
Now take the neighborhood to be $\approx (-\epsilon,\epsilon)\times\Sigma$ and then split the metric like $-dt^2+h$
Do forwards and backwards Cauchy solutions, then check that the derivatives match on the interface $\{0\}\times\Sigma$
Why is it that you can only cool down a substance so far before the energy goes into changing its state? I assume it has something to do with the distance between molecules meaning that intermolecular interactions have less energy in them than making the distance between them even smaller, but why does it create these bonds instead of making the distance smaller / just reducing the temperature more?
Thanks @CooperCape but this leads me another question I forgot ages ago
If you have an electron cloud, is the electric field from that electron just some sort of averaged field from some centre of amplitude or is it a superposition of fields each coming from some point in the cloud? |
My question concerns the possibility of using the hypothesis of local thermodynamic equilibrium to calculate the entropy of non-homogeneous equilibrium states.
In the figure, $N$ particles of a fluid are inside a vessel of volume $V$ under the effect of a constant gravitational field $g$. The total energy of the particles (sum of internal energy and external potential energy) is $U_{TOT}$.
I can imagine two ways for calculating the entropy of the system.
The first one is using of the definition of entropy in statistical mechanics for the microcanonical ensemble:
$S = k \ln \Omega$
where, as it's known, $ \Omega$ is the number of microstates associated with the parameters $N$, $V$ and $ U_{TOT}$ (or the phase-space area in the classical approximation).
The second possibility is instead to assume a condition of local thermodynamic equilibrium and to search for the distributions $\rho(\vec{x})$ and $T(\vec{x})$ that maximize the total entropy:
$S= \int s \, dV = \int s(\rho,T) \, dV$
where $s(\rho,T)$ is the entropy per unit volume of the homogeneous fluid and where the maximization is done under the constraints of total mass and total energy:
$m_{TOT}= N m_{PARTICLE}= \int \rho \, dV$
$U_{TOT}= \int [u_{INT}+u_{POT}] \, dV= \int [u_{INT}(\rho,T) + \rho g z] \, dV$
So, in conclusion, the question is: do the two entropies have the same value?
In the book by Phil Attard, "Thermodynamics and Statistical Mechanics: Equilibrium by Entropy Maximisation", pp. 145-149, the author shows that the two entropies effectively correspond for the specific case of an ideal gas (he uses the canonical formalism, but the theme is the same, I guess).
On the other hand, Arieh Ben-Naim in the Appendix-B of the article
Is Entropy Associated with Time's Arrow? criticizes the use of the assumption of local thermodynamic equilibrium for fluids different from ideal gases (https://arxiv.org/ftp/arxiv/papers/1705/1705.01467.pdf).
Thus, another linked question is:
does the answer to the question above depend on the kind of fluid in the vessel?
I'm an energetic engineer so, maybe, I'm not considering some fundamental concept of statistical mechanics.
Anyway this seems to me an interesting question because, if the two entropies are different, it means that the use of local thermodynamic equilibrium is questionable and doubtful not only in the context of non-equilibrium thermodynamics, but also in the more specific case of non-homogeneous equilibrium thermodynamics.
Thank you everyone for the attention and best regards. |
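One concrete way to probe the second route is to maximize the local-equilibrium entropy numerically for the one fluid where everything is known in closed form: the monatomic ideal gas. The sketch below is only an illustration under assumed units ($k_B=m=g=1$), a column of five unit-volume cells, and hypothetical totals $N$ and $U_{TOT}$; it maximizes $S=\sum_i N_i\left[\tfrac{3}{2}\ln(U_i/N_i)+\ln(V_i/N_i)\right]$ (the ideal-gas entropy up to additive constants) under the mass and energy constraints. The maximizer reproduces the expected equilibrium state: a uniform temperature and a barometric (decreasing) density profile.

```python
import numpy as np
from scipy.optimize import minimize

z = np.arange(5.0)             # cell heights; each cell has unit volume
N_tot, U_tot = 5.0, 10.24      # hypothetical totals (units k_B = m = g = 1)

def neg_entropy(q):
    N, U = q[:5], q[5:]        # particles and internal energy per cell
    return -np.sum(N * (1.5 * np.log(U / N) + np.log(1.0 / N)))

constraints = [
    {"type": "eq", "fun": lambda q: np.sum(q[:5]) - N_tot},                     # total mass
    {"type": "eq", "fun": lambda q: np.sum(q[5:]) + np.sum(q[:5] * z) - U_tot}  # internal + potential energy
]

# Start from a rough barometric guess at T = 1
N0 = N_tot * np.exp(-z) / np.exp(-z).sum()
q0 = np.concatenate([N0, 1.5 * N0])
res = minimize(neg_entropy, q0, constraints=constraints,
               bounds=[(1e-6, None)] * 10, method="SLSQP")

N, U = res.x[:5], res.x[5:]
T = (2.0 / 3.0) * U / N        # local temperature of a monatomic ideal gas
print(T)                       # uniform across the column
print(N)                       # decreasing (barometric) profile
```

Whether this local-equilibrium entropy equals the microcanonical count is exactly the question posed above; for the ideal gas, Attard's calculation cited in the question says it does.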
Free Particle
The eigenvalues \(E_i\) of the energy operator are the possible measurable values of the total energy of a quantum system. For a nonrelativistic (moving at speeds much less than the speed of light), massive particle that is an isolated system the total energy of the particle is just its kinetic energy:
$$E=\frac{1}{2}mv^2=\frac{p^2}{2m}.\tag{1}$$
The energy operator \(\hat{E}\) associated with this particle is the one with eigenvalues \(E_i=\frac{p^2}{2m}\). This operator is given by
$$\hat{E}=\frac{\hat{p}^2}{2m}.\tag{2}$$
In this example we’ll use Schrödinger’s time-dependent equation to solve for the wavefunction \(\psi(x,t)\) which will allow us to calculate the probability (as a function of position and time) of measuring any physical quantity:
$$iℏ\frac{∂}{∂t}\psi(x,t)=\frac{\hat{P}^2}{2m}\psi(x,t).\tag{3}$$
What exactly is \(\hat{P}^2\)? We can find out the answer to this question by letting \(-iℏ\frac{∂}{∂x}\) act on \(-iℏ\frac{∂}{∂x}\) to get \(\hat{P}^2=-ℏ^2\frac{∂^2}{∂x^2}\). Thus the energy operator is
$$\hat{E}=\frac{-ℏ^2}{2m}\frac{∂^2}{∂x^2}.\tag{4}$$
Schrödinger’s time-dependent equation for a nonrelativistic, massive, free particle is given by
$$iℏ\frac{∂}{∂t}\psi(x,t)=\frac{-ℏ^2}{2m}\frac{∂^2}{∂x^2}\psi(x,t).\tag{5}$$
We can solve this differential equation to obtain the wavefunction \(\psi(x,t)\) shown in the animation on the right. The wavefunction moves to the right and becomes more dispersed as time goes on. Thus, initially we have a decent idea of where the particle is; but if we let the particle evolve undisturbed (introducing no potential \(V(x)\)), the measurement of its \(x\) position will become more and more uncertain as time goes on.
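The spreading described above is quantitative for a Gaussian wave packet: the standard textbook result is that a free packet of initial width \(\sigma_0\) has width \(\sigma(t)=\sigma_0\sqrt{1+\left(\frac{ℏt}{2m\sigma_0^2}\right)^2}\). A quick numerical check, using an electron and an assumed initial width of 1 nm:

```python
import numpy as np

hbar = 1.054571817e-34   # J s, reduced Planck constant
m = 9.1093837015e-31     # kg, electron mass
sigma0 = 1e-9            # m, assumed initial packet width

def width(t):
    """Width of a free Gaussian wave packet after time t (standard result)."""
    return sigma0 * np.sqrt(1.0 + (hbar * t / (2.0 * m * sigma0**2))**2)

for t in [0.0, 1e-14, 1e-13, 1e-12]:
    print(f"t = {t:.0e} s  ->  sigma = {width(t):.3e} m")
```

The width grows monotonically, and for large \(t\) it grows linearly in time, which is the dispersion visible in the animation.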
Particle in One-Dimensional Box
The time-independent Schrodinger equation is \(\frac{-ℏ^2}{2m}\frac{d^2\psi}{dx^2}+V\psi=E\psi\) where \(E\) is the total energy of the system and \(V\) is the potential. We’ll start by considering a “free particle.” This is just a single particle as an isolated system. For simplicity we’ll imagine that the particle is confined to only moving along the x-axis and we’ll just think about the probability amplitudes \(\psi_x\) at each value of \(x\). Since the particle is an isolated system, \(V=0\) at each value of \(x\). (Potentials only exist when there is an external field present produced by some particles in the outside environment.)
Schrodinger’s equation simplifies to
$$\frac{-ℏ^2}{2m}\frac{d^2{\psi_x}}{dx^2 }=E\psi_x.$$
Let’s solve for the solution \(\psi_x\) of this differential equation by doing some algebra and making some substitutions; then once we solve for the probability amplitude \(\psi_x\) it’ll be straightforward to determine the probability distribution \(P(x)\) of the particle. Let’s multiply both sides of this equation by \(\frac{-2m}{ℏ^2}\) to get
$$\frac{d^2{\psi_x}}{dx^2}=\frac{-2m}{ℏ^2}E\psi_x.$$
Since the free particle is an isolated system it does not have any potential energy. However it can have kinetic energy. Thus the total energy \(E\) of the system (which in this case is just the particle) is given by \(E=KE=\frac{1}{2}mv^2=\frac{P^2}{2m}\). The equation \(p=ℏk\) is the momentum of a quantum particle; substituting this into \(E\) we get \(E=ℏ^2\frac{k^2}{2m}\). Let’s substitute this into Schrodinger’s equation to get
$$\frac{d^2\psi_x}{dx^2}=\frac{-2m}{ℏ^2}\biggl(\frac{ℏ^2k^2}{2m}\biggr)\psi_x=-k^2\psi_x.$$
If we use the “guess-and-check” method to solve for a solution \(\psi_x\) to this differential equation, we’ll see that there are many different solutions. For example \(Acoskx\) and \(Bsinkx\) are different solutions. The general solution is given by \(\psi_x=Ae^{ikx}+Be^{-ikx}\) which, using trigonometry, can also be written as \(\psi_x=Acoskx+Bsinkx\) (\(A\) and \(B\) are different in this equation). If we differentiate this solution with respect to x twice and get \(-k^2\psi_x\), then we know that we guessed the right solution. Doing this we get
$$\frac{d\psi_x}{dx}=A(ik)e^{ikx}+B(-ik)e^{-ikx}⇒\frac{d^2ψ_x}{dx^2}=A(ik)^2e^{ikx}+B(ik)^2e^{-ikx}=-Ak^2e^{ikx}-Bk^2e^{-ikx}=-k^2(Ae^{ikx}+Be^{-ikx})=-k^2\psi_x.$$
If \(\psi_x=Ae^{ikx}+Be^{-ikx}\) then when we plug this into \(\frac{d^2\psi_x}{dx^2}=-k^2\psi_x\) both sides of the equation are indeed equal. Therefore this must be the solution. Using trigonometry we can rewrite the solution as \(\psi_x=A(coskx+isinkx)+B(cos(-kx)+isin(-kx))=(A+B)coskx+i(A-B)sinkx=Ccoskx+Dsinkx\) where \(C=A+B\) and \(D=i(A-B)\) are just numbers. To save letters we’ll just rename \(C\) and \(D\) to \(A\) and \(B\).
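The "guess-and-check" above can also be done symbolically. The sketch below verifies that both forms of the general solution satisfy \(\frac{d^2\psi_x}{dx^2}=-k^2\psi_x\):

```python
import sympy as sp

x, k = sp.symbols("x k", real=True)
A, B = sp.symbols("A B")

for psi in (A * sp.exp(sp.I * k * x) + B * sp.exp(-sp.I * k * x),
            A * sp.cos(k * x) + B * sp.sin(k * x)):
    # The residual psi'' + k^2 psi must vanish for a solution
    residual = sp.simplify(sp.diff(psi, x, 2) + k**2 * psi)
    print(residual)  # 0 for both forms
```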
Let’s now consider a different situation. Suppose that along the x-axis there is a constant potential \(V_0\), so at every \(x\) the particle has a constant potential energy \(V_0\). Let’s use Schrodinger’s equation to solve for the probability amplitudes \(\psi_x\) for a particle in the presence of this constant potential. First let’s subtract \(V_0\psi_x\) from both sides to get
$$\frac{-ℏ^2}{2m}\frac{d^2\psi}{dx^2}=(E-V_0)\psi_x.$$
Next multiply both sides by \(\frac{-2m}{ℏ^2}\) to get
$$\frac{d^2\psi}{dx^2}=\frac{2m}{ℏ^2}(V_0-E)\psi_x.$$
Let \(k^2=\frac{2m}{ℏ^2}(V_0-E)\); then we have
$$\frac{d^2\psi}{dx^2}=k^2\psi_x.$$
Let’s use the “guess-and-check” method to solve for \(\psi_x\). Let’s guess \(\psi_x=Ae^{-kx}\) and plug it into the equation to see if both sides are equal:
$$\frac{d\psi_x}{dx}=A(-k)e^{-kx}⇒\frac{d^2\psi_x}{dx^2}=Ak^2e^{-kx}=k^2\psi_x.$$
We see that this is indeed a solution. From this solution \(\psi_x=Ae^{-kx}\), we see that as the x distance increases \(\psi_x\) decreases exponentially. Since \(P(x)=\psi_x\psi_x^*\) the probability of finding the particle decreases exponentially with increasing \(x\). The bigger the value of \(k\) is (where \(k=\sqrt{2m\frac{(V_0-E)}{ℏ^2}}\)), the more rapidly \(\psi_x\) decreases with increasing \(x\). By increasing the constant value of \(V_0\), we see that k gets bigger; therefore the bigger \(V_0\) is the more rapidly \(\psi_x\) decreases with increasing \(x\). As \(V_0\) approaches \(∞\), \(\psi_x\) decreases from \(\psi_x=A\) (at \(x=0\)) to \(0\) immediately with increasing \(x\).
Suppose that a particle is moving along the x-axis in the presence of the potential \(V(x)\) as shown in figure 3. In the region where \(x≤0\) and \(x≥L\) there is a constant potential \(V_0\). We showed in the previous example that \(\psi_x\) for values of \(x\) where there is a constant potential \(V_0\) is given by \(\psi_x=Ae^{-kx}\). Suppose that \(V_0→∞\) in the regions where \(x≤0\) and \(x≥L\). If this happens then \(\lim_{V_0\to∞} \psi_x=0\) and we can say that \(\psi_x=0\) at all those values of \(x\). When the particle is in the region \(0<x<L\) where \(V=0\) it is an isolated system. Thus, in this region, the solution \(ψ_x=Acoskx+Bsinkx\) describes the probability amplitude associated with every \(x\) position. Let’s solve for the constants \(A\) and \(B\). To do this we’ll use the two boundary conditions \(\psi_x(0)=0\) and \(\psi_x(L)=0\). Using the first boundary condition \(\psi_x\) becomes
$$\psi_x(0)=0=Acos(k·0)+Bsin(k·0)=A.$$
Since \(A=0\) our solution \(\psi_x\) simplifies to \(\psi_x=Bsinkx\). Applying the second boundary condition, we get
$$\psi_x(L)=0=Bsin(kL).$$
Notice that although \(B=0\) would satisfy this boundary condition it would correspond to the trivial solution \(\psi_x=0\) for all values of \(x\) within the range \(0<x<L\). Another way to satisfy this boundary condition (which, as we shall see, leads to much less trivial solutions \(\psi_x\)) is if \(sin(kL)=0\). \(sin(kL)=0\) when \(kL=nπ\), where \(n=1,2,3,…\) (the case \(n=0\) again gives \(\psi_x=0\)). In other words, when \(k=nπ/L\) the solution \(\psi_x\) satisfies \(\psi_x(L)=0\). If we substitute this value of \(k\) into the solution we get
$$\psi_x(x)=Bsin\biggl(\frac{nπx}{L}\biggr)\text{ (0<x<L).}$$
We see that inside the potential well the function \(\psi_x(x)\) is a standing wave. If we substitute \(k=nπ/L\) into \(E=\frac{ℏ^2k^2}{2m}\) and express \(ℏ\) in terms of Planck’s constant, we get
$$E_n=\frac{h^2}{8mL^2}n^2.$$
Schrodinger’s equation predicts that the total energy of a particle trapped in a potential well is quantized and comes in discrete values; in other words the energy distribution is not continuous as shown in Figure 4 below.
The lowest possible energy level for a particle in a box is \(E_1=\frac{h^2}{8mL^2}\). Thus Schrodinger's equation predicts that the energy \(E_n\) of this particle can never be zero; therefore the particle can never be at rest. The lowest possible energy level, given by \(E_1=\frac{h^2}{8mL^2}\), is called the ground-state energy. The allowed energy levels of the particle in the box are shown in Figure 4.
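Plugging numbers into \(E_n=\frac{h^2}{8mL^2}n^2\) makes the quantization concrete. For an electron in a box of an assumed width \(L=1\text{ nm}\):

```python
h = 6.62607015e-34        # J s, Planck's constant
m_e = 9.1093837015e-31    # kg, electron mass
L = 1e-9                  # m, assumed box width
eV = 1.602176634e-19      # J per electronvolt

def E_n(n):
    """Energy of level n for a particle in a 1-D infinite square well."""
    return h**2 * n**2 / (8.0 * m_e * L**2)

for n in (1, 2, 3):
    print(f"E_{n} = {E_n(n) / eV:.3f} eV")
# prints:
# E_1 = 0.376 eV
# E_2 = 1.504 eV
# E_3 = 3.384 eV
```

Note the \(n^2\) spacing: the gaps between adjacent levels grow with \(n\), unlike the evenly spaced levels of a harmonic oscillator.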
This article is licensed under a CC BY-NC-SA 4.0 license.
@Secret et al hows this for a video game? OE Cake! fluid dynamics simulator! have been looking for something like this for yrs! just discovered it wanna try it out! anyone heard of it? anyone else wanna do some serious research on it? think it could be used to experiment with solitons=D
OE-Cake, OE-CAKE! or OE Cake is a 2D fluid physics sandbox which was used to demonstrate the Octave Engine fluid physics simulator created by Prometech Software Inc. It was one of the first engines with the ability to realistically process water and other materials in real-time. In the program, which acts as a physics-based paint program, users can insert objects and see them interact under the laws of physics. It has advanced fluid simulation, and support for gases, rigid objects, elastic reactions, friction, weight, pressure, textured particles, copy-and-paste, transparency, foreground a...
@NeuroFuzzy awesome what have you done with it? how long have you been using it?
it definitely could support solitons easily (because all you really need is to have some time dependence and discretized diffusion, right?) but I don't know if it's possible in either OE-cake or that dust game
As far I recall, being a long term powder gamer myself, powder game does not really have a diffusion like algorithm written into it. The liquids in powder game are sort of dots that move back and forth and subjected to gravity
@Secret I mean more along the lines of the fluid dynamics in that kind of game
@Secret Like how in the dan-ball one air pressure looks continuous (I assume)
@Secret You really just need a timer for particle extinction, and something that effects adjacent cells. Like maybe a rule for a particle that says: particles of type A turn into type B after 10 steps, particles of type B turn into type A if they are adjacent to type A.
I would bet you get lots of cool reaction-diffusion-like patterns with that rule.
(Those that don't understand cricket, please ignore this context, I will get to the physics...)England are playing Pakistan at Lords and a decision has once again been overturned based on evidence from the 'snickometer'. (see over 1.4 ) It's always bothered me slightly that there seems to be a ...
Abstract: Analyzing the data from the last replace-the-homework-policy question was inconclusive. So back to the drawing board, or really back to this question: what do we really mean when we vote to close questions as homework-like?As some/many/most people are aware, we are in the midst of a...
Hi I am trying to understand the concept of dex and how to use it in calculations. The usual definition is that it is the order of magnitude, so $10^{0.1}$ is $0.1$ dex.I want to do a simple exercise of calculating the value of the RHS of Eqn 4 in this paper arxiv paper, the gammas are incompl...
@ACuriousMind Guten Tag! :-) Dark Sun has also a lot of frightening characters. For example, Borys, the 30th level dragon. Or different stages of the defiler/psionicist 20/20 -> dragon 30 transformation. It is only a tip, if you start to think on your next avatar :-)
What is the maximum distance for eavesdropping pure sound waves?And what kind of device i need to use for eavesdropping?Actually a microphone with a parabolic reflector or laser reflected listening devices available on the market but is there any other devices on the planet which should allow ...
and endless whiteboards get doodled with boxes, grids, circled red markers and some scribbles
The documentary then showed one of the bird's eye view of the farmlands
(which pardon my sketchy drawing skills...)
Most of the farmland is tiled into grids
Here there are two distinct columns and rows of tiled farmland to the left and top of the main grid. They are the index arrays, and they denote the index ranges of the tensor array
In some tiles, there's a swirl of dirt mount, they represent components with nonzero curl
and in others grass grew
Two blue steel bars were visible laying across the grid, holding up a triangle pool of water
Next in an interview, they mentioned that experimentally the process is quite simple. The tall guy is seen using a large crowbar to pry away a screw that held a road sign under a skyway, i.e.
Occasionally, mishaps can happen, such as too much force being applied so that the sign snaps in the middle. The boys will then be forced to take the broken sign to the nearest roadworks workshop to mend it
At the end of the documentary, near a university lodge area
I walked towards the boys and expressed interest in joining their project. They then said that you will be spending quite a bit of time on the theoretical side and doodling on whiteboards. They also asked about my recent trip to London and Belgium. Dream ends
Reality check: I have been to London, but not Belgium
Idea extraction: The tensor array mentioned in the dream is a multiindex object where each component can be tensors of different order
Presumably one can formulate it (using an example of a 4th order tensor) as follows:
$$A^{\alpha}{}_{\beta}{}_{\gamma\delta\epsilon}$$
and then allow the index $\alpha,\beta$ to run from 0 to the size of the matrix representation of the whole array
while the indices $\gamma,\delta,\epsilon$ can be taken from a subset of the range of the $\alpha,\beta$ indices. For example, to encode a patch of nonzero-curl vector field in this object, one might set $\gamma$ to be from the set $\{4,9\}$ and $\delta$ to be $\{2,3\}$
However, even if the indices are restricted to certain values only, it is unclear whether this is of any use, since most tensor expressions have indices taken from a set of consecutive numbers rather than arbitrary integers
@DavidZ in the recent meta post about the homework policy there is the following statement:
> We want to make it sure because people want those questions closed. Evidence: people are closing them. If people are closing questions that have no valid reason for closure, we have bigger problems.
This is an interesting statement.
I wonder to what extent not having a homework close reason would simply force would-be close-voters to either edit the post, down-vote, or think more carefully whether there is another more specific reason for closure, e.g. "unclear what you're asking".
I'm not saying I think simply dropping the homework close reason and doing nothing else is a good idea.
I did suggest that previously in chat, and as I recall there were good objections (which are echoed in @ACuriousMind's meta answer's comments).
@DanielSank Mostly in a (probably vain) attempt to get @peterh to recognize that it's not a particularly helpful topic.
@peterh That said, he used to be fairly active on physicsoverflow, so if you really pine for the opportunity to communicate with him, you can go on ahead there. But seriously, bringing it up, particularly in that way, is not all that constructive.
@DanielSank No, the site mods could have caged him only in the PSE, and only for a year. That he got. After that his cage was extended to a 10 year long network-wide one, it couldn't be the result of the site mods. Only the CMs can do this, typically for network-wide bad deeds.
@EmilioPisanty Yes, but I had liked to talk to him here.
@DanielSank I am only curious what he did. Maybe he attacked the whole network? Or he took a site-level conflict into the IRL world? As far as I know, network-wide bans happen for such things.
@peterh That is pure fear-mongering. Unless you plan on going on extended campaigns to get yourself suspended, in which case I wish you speedy luck.
Seriously, suspensions are never handed out without warning, and you will not be ten-year-banned out of the blue. Ron had very clear choices and a very clear picture of the consequences of his choices, and he made his decision. There is nothing more to see here, and bringing it up again (and particularly in such a dewy-eyed manner) is far from helpful.
@EmilioPisanty Although it is already not about Ron Maimon, but I can't see here the meaning of "campaign" enough well-defined. And yes, it is a little bit of source of fear for me, that maybe my behavior can be also measured as if "I would campaign for my caging". |
Solutions to Try Its
1. [latex]f\left(x\right)[/latex] is a power function because it can be written as [latex]f\left(x\right)=8{x}^{5}[/latex]. The other functions are not power functions.
2. As
x approaches positive or negative infinity, [latex]f\left(x\right)[/latex] decreases without bound: as [latex]x\to \pm \infty , f\left(x\right)\to -\infty\\ [/latex] because of the negative coefficient.
3. The degree is 6. The leading term is [latex]-{x}^{6}[/latex]. The leading coefficient is –1.
4. As [latex]x\to \infty , f\left(x\right)\to -\infty ; as x\to -\infty , f\left(x\right)\to -\infty [/latex]. It has the shape of an even degree power function with a negative coefficient.
5. The leading term is [latex]0.2{x}^{3}[/latex], so it is a degree 3 polynomial. As x approaches positive infinity, [latex]f\left(x\right)[/latex] increases without bound; as x approaches negative infinity, [latex]f\left(x\right)[/latex] decreases without bound.
6. y-intercept [latex]\left(0,0\right)[/latex]; x-intercepts [latex]\left(0,0\right),\left(-2,0\right)[/latex], and [latex]\left(5,0\right)[/latex]
7. There are at most 12 x-intercepts and at most 11 turning points.
8. The end behavior indicates an odd-degree polynomial function; there are 3 x-intercepts and 2 turning points, so the degree is odd and at least 3. Because of the end behavior, we know that the lead coefficient must be negative.
9. The x-intercepts are [latex]\left(2,0\right),\left(-1,0\right)[/latex], and [latex]\left(5,0\right)[/latex], the y-intercept is [latex]\left(0,\text{2}\right)[/latex], and the graph has at most 2 turning points.

Solutions to Odd-Numbered Exercises
1. The coefficient of the power function is the real number that is multiplied by the variable raised to a power. The degree is the highest power appearing in the function.
3. As x decreases without bound, so does [latex]f\left(x\right)[/latex]. As x increases without bound, so does [latex]f\left(x\right)[/latex].
5. The polynomial function is of even degree and leading coefficient is negative.
7. Power function
9. Neither
11. Neither
13. Degree = 2, Coefficient = –2
15. Degree =4, Coefficient = –2
17. [latex]\text{As }x\to \infty ,f\left(x\right)\to \infty ,\text{ as }x\to -\infty ,f\left(x\right)\to \infty [/latex]
19. [latex]\text{As }x\to -\infty ,f\left(x\right)\to -\infty ,\text{ as }x\to \infty ,f\left(x\right)\to -\infty [/latex]
21. [latex]\text{As }x\to -\infty ,f\left(x\right)\to -\infty ,\text{ as }x\to \infty ,f\left(x\right)\to -\infty [/latex]
23. [latex]\text{As }x\to \infty ,f\left(x\right)\to \infty ,\text{ as }x\to -\infty ,f\left(x\right)\to -\infty [/latex]
25. y-intercept is [latex]\left(0,12\right)[/latex], t-intercepts are [latex]\left(1,0\right);\left(-2,0\right);\text{and }\left(3,0\right)[/latex].
27. y-intercept is [latex]\left(0,-16\right)[/latex]. x-intercepts are [latex]\left(2,0\right)[/latex] and [latex]\left(-2,0\right)[/latex].
29. y-intercept is [latex]\left(0,0\right)[/latex]. x-intercepts are [latex]\left(0,0\right),\left(4,0\right)[/latex], and [latex]\left(-2, 0\right)[/latex].
31. 3
33. 5
35. 3
37. 5
39. Yes. Number of turning points is 2. Least possible degree is 3.
41. Yes. Number of turning points is 1. Least possible degree is 2.
43. Yes. Number of turning points is 0. Least possible degree is 1.
45. Yes. Number of turning points is 0. Least possible degree is 1.
47. [latex]\text{As }x\to -\infty ,f\left(x\right)\to \infty ,\text{ as }x\to \infty ,f\left(x\right)\to \infty [/latex]
x      f(x)
10     9,500
100    99,950,000
–10    9,500
–100   99,950,000
49. [latex]\text{As }x\to -\infty ,f\left(x\right)\to \infty ,\text{ as }x\to \infty ,f\left(x\right)\to -\infty [/latex]
x      f(x)
10     –504
100    –941,094
–10    1,716
–100   1,061,106
51. The y-intercept is [latex]\left(0, 0\right)[/latex]. The x-intercepts are [latex]\left(0, 0\right),\text{ }\left(2, 0\right)[/latex]. [latex]\text{As }x\to -\infty ,f\left(x\right)\to \infty ,\text{ as }x\to \infty ,f\left(x\right)\to \infty [/latex]
53. The y-intercept is [latex]\left(0,0\right)[/latex]. The x-intercepts are [latex]\left(0, 0\right),\text{ }\left(5, 0\right),\text{ }\left(7, 0\right)[/latex]. [latex]\text{As }x\to -\infty ,f\left(x\right)\to -\infty ,\text{ as }x\to \infty ,f\left(x\right)\to \infty [/latex]
55. The y-intercept is [latex]\left(0, 0\right)[/latex]. The x-intercepts are [latex]\left(-4, 0\right),\text{ }\left(0, 0\right),\text{ }\left(4, 0\right)[/latex]. [latex]\text{As }x\to -\infty ,f\left(x\right)\to -\infty ,\text{ as }x\to \infty ,f\left(x\right)\to \infty [/latex]
57. The y-intercept is [latex]\left(0, -81\right)[/latex]. The x-intercepts are [latex]\left(3, 0\right),\text{ }\left(-3, 0\right)[/latex]. [latex]\text{As }x\to -\infty ,f\left(x\right)\to \infty ,\text{ as }x\to \infty ,f\left(x\right)\to \infty [/latex]
59. The y-intercept is [latex]\left(0, 0\right)[/latex]. The x-intercepts are [latex]\left(-3, 0\right),\text{ }\left(0, 0\right),\text{ }\left(5, 0\right)[/latex]. [latex]\text{As }x\to -\infty ,f\left(x\right)\to -\infty ,\text{ as }x\to \infty ,f\left(x\right)\to \infty [/latex]
61. [latex]f\left(x\right)={x}^{2}-4[/latex]
63. [latex]f\left(x\right)={x}^{3}-4{x}^{2}+4x[/latex]
65. [latex]f\left(x\right)={x}^{4}+1[/latex]
67. [latex]V\left(m\right)=8{m}^{3}+36{m}^{2}+54m+27[/latex]
69. [latex]V\left(x\right)=4{x}^{3}-32{x}^{2}+64x[/latex] |
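Answers like 67 and 69 are easy to double-check symbolically. The sketch below expands the presumed factored forms [latex]{\left(2m+3\right)}^{3}[/latex] and [latex]4x{\left(x - 4\right)}^{2}[/latex] (the factorizations consistent with the stated answers; the original volume problems are not reproduced here) and compares them with the stated polynomials:

```python
import sympy as sp

m, x = sp.symbols("m x")

# Answer 67: a cube of side (2m + 3)
V67 = sp.expand((2*m + 3)**3)
assert V67 == 8*m**3 + 36*m**2 + 54*m + 27

# Answer 69: a box with dimensions x, (x - 4), (x - 4), scaled by 4
V69 = sp.expand(4*x*(x - 4)**2)
assert V69 == 4*x**3 - 32*x**2 + 64*x

print(V67, V69, sep="\n")
```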
what does it mean by becoming extensional in the first place?
The axiom of extensionality relates to what it means for two functions to be equal. Specifically, extensionality says:
$f = g \iff \forall x \ldotp f(x) = g(x)$
That is, functions are equal if they map equal inputs to equal outputs. By this definition, quicksort and mergesort are equal, even if they don't have the same implementations, because they behave the same
as functions.
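As an aside, in a proof assistant such as Lean 4 this principle is available as `funext` (there it is a theorem, derived from quotient types, rather than a postulated axiom). A small sketch; the definitions `double₁`/`double₂` are mine:

```lean
-- Function extensionality in Lean 4: `funext` turns a pointwise proof
-- into an equality of functions.
theorem ext_example (f g : Nat → Nat)
    (h : ∀ x, f x = g x) : f = g :=
  funext h

-- Concretely: two syntactically different definitions with the same behaviour.
def double₁ (n : Nat) : Nat := n + n
def double₂ (n : Nat) : Nat := 2 * n

theorem doubles_eq : double₁ = double₂ :=
  funext fun n => by unfold double₁ double₂; omega
```

So `double₁` and `double₂` are intensionally different but provably equal as functions.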
How does it become extensional
What's missing is the rule of definitional equality for functions. It usually looks like this:
$\frac{\Gamma, (x : U) \vdash (f x) = (g x):V}{\Gamma \vdash f = g: (x : U) \to V}\text{(Fun-DefEq)}$
That is, two functions are definitionally equal when they produce equal results
when applied to an abstract variable. This is similar in spirit to the way we typecheck polymorphic functions: you make sure it holds for all values by making sure it holds for an abstract value.
We get extensionality when we combine the two: if two functions always produce the same result, we should be able to find some equality proof $P$ such that $\Gamma,(x: U) \vdash P:Id_V(f x, g x)$, i.e. a proof that the two functions always produce the same result. But, if we combine this with the rule $\text{(Id-DefEq)}$, then any time two functions are extensionally equal (i.e. we can find the proof term $P$), they are also definitionally equal.
This is in stark contrast to an intensional system, where two functions are equal if and only if their bodies are
syntactically identical. So mergesort and quicksort are intensionally different, but extensionally the same.
The $\text{(Id-DefEq)}$ means that extensional equality is baked into the type system: if you have a type constructor $T : ((x : U) \to V) \to \mathsf{Set}$, then you can use a value of type $T\ f$ in a context expecting $T\ g$ if $f$ and $g$ map equal inputs to equal outputs. Again this is not true in an intensional system, where $f$ and $g$ might be incompatible if they're syntactically different.
Does the above mean that we purposefully drop the proof that M and N are equal and just consider them to be equal definitionally (like a presumption)?
It's even a bit stronger than that. It's saying that $M$ and $N$ are definitionally equal any time
there exists some proof that they are propositionally equal. So on the one hand, if you have a propositional proof that two values are equal, you can forget that proof and say that they are definitionally equal. But on the other hand, if you are trying to prove that two values are definitionally equal (as a dependent type checking algorithm would), then you cannot say that they are not equal unless you are certain that no proof $P$ exists. This is why it is undecidable. |
Fourier series in orthogonal polynomials
A series of the form
$$\sum_{n=0}^\infty a_nP_n\tag{1}$$
where the polynomials $\{P_n\}$ are orthonormal on an interval $(a,b)$ with weight function $h$ (see Orthogonal polynomials) and the coefficients $\{a_n\}$ are calculated from the formula
$$a_n=\int\limits_a^bh(x)f(x)P_n(x)dx.\tag{2}$$
Here, the function $f$ belongs to the class $L_2=L_2[(a,b),h]$ of functions that are square summable (Lebesgue integrable) with weight function $h$ over the interval $(a,b)$ of orthogonality.
As for any orthogonal series, the partial sums $\{s_n(x,f)\}$ of \ref{1} are the best-possible approximations to $f$ in the metric of $L_2$ and satisfy the condition
$$\lim_{n\to\infty}a_n=0.\tag{3}$$
For a proof of the convergence of the series \ref{1} at a single point $x$ or on a certain set in $(a,b)$ one usually applies the equality
$$f(x)-s_n(x,f)=\mu_n[a_n(\phi_x)P_{n+1}(x)-a_{n+1}(\phi_x)P_n(x)],$$
where $\{a_n(\phi_x)\}$ are the Fourier coefficients of an auxiliary function $\phi_x$, given by
$$\phi_x(t)=\frac{f(x)-f(t)}{x-t},\quad t\in(a,b),$$
for fixed $x$, and $\mu_n$ is the coefficient given by the Christoffel–Darboux formula. If the interval of orthogonality $[a,b]$ is bounded, if $\phi_x\in L_2$ and if the sequence $\{P_n\}$ is bounded at the given point $x$, then the series \ref{1} converges to the value $f(x)$.
The coefficients \ref{2} can also be defined for a function $f$ in the class $L_1=L_1[(a,b),h]$, that is, for functions that are summable with weight function $h$ over $(a,b)$. For a bounded interval $[a,b]$, condition \ref{3} holds if $f\in L_1[(a,b),h]$ and if the sequence $\{P_n\}$ is uniformly bounded on the whole interval $[a,b]$. Under these conditions the series \ref{1} converges at a certain point $x\in[a,b]$ to the value $f(x)$ if $\phi_x\in L_1[(a,b),h]$.
Let $A$ be a part of $(a,b)$ on which the sequence $\{P_n\}$ is uniformly bounded, let $B=[a,b]\setminus A$ and let $L_p(A)=L_p[A,h]$ be the class of functions that are $p$-summable over $A$ with weight function $h$. If, for a fixed $x\in A$, one has $\phi_x\in L_1(A)$ and $\phi_x\in L_2(B)$, then the series \ref{1} converges to $f(x)$.
For the series \ref{1} the localization principle for conditions of convergence holds: If two functions $f$ and $g$ in $L_2$ coincide in an interval $(x-\delta,x+\delta)$, where $x\in A$, then the Fourier series of these two functions in the orthogonal polynomials converge or diverge simultaneously at $x$. An analogous assertion is valid if $f$ and $g$ belong to $L_1(A)$ and $L_2(B)$ and $x\in A$.
For the classical orthogonal polynomials the theorems on the equiconvergence with a certain associated trigonometric Fourier series hold for the series \ref{1} (see Equiconvergent series).
Uniform convergence of the series \ref{1} over the whole bounded interval of orthogonality $[a,b]$, or over part of it, is usually investigated using the Lebesgue inequality
$$\left|f(x)-\sum_{k=0}^na_kP_k(x)\right|\leq[1+L_n(x)]E_n(f),\quad x\in[a,b],$$
where the Lebesgue function
$$L_n(x)=\int\limits_a^bh(t)\left|\sum_{k=0}^nP_k(x)P_k(t)\right|dt$$
does not depend on $f$ and $E_n(f)$ is the best uniform approximation (cf. Best approximation) to the continuous function $f$ on $[a,b]$ by polynomials of degree not exceeding $n$. The sequence of Lebesgue functions $\{L_n\}$ can grow at various rates at the various points of $[a,b]$, depending on the properties of $h$. However, for the whole interval $[a,b]$ one introduces the Lebesgue constants
$$L_n=\max_{x\in[a,b]}L_n(x),$$
which increase unboundedly as $n\to\infty$ (for different systems of orthogonal polynomials the Lebesgue constants can increase at different rates). The Lebesgue inequality implies that if the condition
$$\lim_{n\to\infty}L_nE_n(f)=0$$
is satisfied, then the series \ref{1} converges uniformly to $f$ on the whole interval $[a,b]$. On the other hand, the rate at which the sequence $\{E_n(f)\}$ tends to zero depends on the differentiability properties of $f$. Thus, in many cases it is not difficult to formulate sufficient conditions for the right-hand side of the Lebesgue inequality to tend to zero as $n\to\infty$ (see, for example, Legendre polynomials; Chebyshev polynomials; Jacobi polynomials). In the general case of an arbitrary weight function one can obtain specific results if one knows asymptotic formulas or bounds for the orthogonal polynomials under consideration.
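As a concrete illustration of computing the coefficients \ref{2}, here is a sketch for the Legendre case ($h\equiv1$ on $[-1,1]$), using the three-term recurrence and a composite Simpson rule. All function names are mine; a degree-3 polynomial is reproduced exactly by its first four terms:

```python
import math

# Orthonormal Legendre polynomials via the three-term recurrence
# (n+1) P_{n+1}(x) = (2n+1) x P_n(x) - n P_{n-1}(x),
# normalised so that the integral of p_n(x)^2 over [-1, 1] equals 1.
def legendre_orthonormal(n, x):
    if n == 0:
        return math.sqrt(0.5)
    p_prev, p = 1.0, x
    for k in range(1, n):
        p_prev, p = p, ((2*k + 1) * x * p - k * p_prev) / (k + 1)
    return math.sqrt((2*n + 1) / 2) * p

def simpson(f, a, b, m=2000):
    # Composite Simpson rule with m (even) subintervals.
    h = (b - a) / m
    s = f(a) + f(b)
    for i in range(1, m):
        s += (4 if i % 2 else 2) * f(a + i*h)
    return s * h / 3

def fourier_legendre_coeffs(f, N):
    # a_n = integral of f(x) p_n(x) over [-1, 1]   (weight h(x) = 1 here)
    return [simpson(lambda x, n=n: f(x) * legendre_orthonormal(n, x), -1, 1)
            for n in range(N + 1)]

def partial_sum(coeffs, x):
    return sum(a * legendre_orthonormal(n, x) for n, a in enumerate(coeffs))

# A degree-3 polynomial is reproduced exactly by the first four terms.
f = lambda x: x**3 - 0.5 * x
coeffs = fourier_legendre_coeffs(f, 3)
err = max(abs(partial_sum(coeffs, x) - f(x)) for x in [-0.9, -0.3, 0.0, 0.4, 0.8])
assert err < 1e-8
```

Since $f$ is odd here, the even-index coefficients vanish, as the orthogonality relations predict.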
References
[1] G. Szegö, "Orthogonal polynomials" , Amer. Math. Soc. (1975) [2] Ya.L. Geronimus, "Polynomials orthogonal on a circle and interval" , Pergamon (1960) (Translated from Russian) [3] P.K. Suetin, "Classical orthogonal polynomials" , Moscow (1979) (In Russian)
See also the references to Orthogonal polynomials.
Comments
See also [a1], Chapt. 4 and [a2], part one. Equiconvergence theorems have been proved more generally for the case of orthogonal polynomials with respect to a weight function $h$ on a finite interval belonging to the Szegö class, i.e. $\log h\in L$, cf. [a2], Sect. 4.12. For Fourier series in orthogonal polynomials with respect to a weight function on an unbounded interval see [a2], part two.
References
[a1] G. Freud, "Orthogonal polynomials" , Pergamon (1971) (Translated from German) [a2] P. Nevai, G. Freud, "Orthogonal polynomials and Christoffel functions (A case study)" J. Approx. Theory , 48 (1986) pp. 3–167
We've been looking at feasibility relations, as our first example of enriched profunctors. Now let's look at another example. This combines many ideas we've discussed - but don't worry, I'll review them, and if you forget some definitions just click on the links to earlier lectures!
Remember, \(\mathbf{Bool} = \lbrace \text{true}, \text{false} \rbrace \) is the preorder that we use to answer true-or-false questions like
while \(\mathbf{Cost} = [0,\infty] \) is the preorder that we use to answer quantitative questions like
or
In \(\textbf{Cost}\) we use \(\infty\) to mean it's impossible to get from here to there: it plays the same role that \(\text{false}\) does in \(\textbf{Bool}\). And remember, the ordering in \(\textbf{Cost}\) is the
opposite of the usual order of numbers! This is good, because it means we have
$$ \infty \le x \text{ for all } x \in \mathbf{Cost} $$ just as we have
$$ \text{false} \le x \text{ for all } x \in \mathbf{Bool} .$$ Now, \(\mathbf{Bool}\) and \(\mathbf{Cost}\) are monoidal preorders, which are just what we've been using to define enriched categories! This let us define
and
We can draw preorders using graphs, like these:
An edge from \(x\) to \(y\) means \(x \le y\), and we can derive other inequalities from these. Similarly, we can draw Lawvere metric spaces using \(\mathbf{Cost}\)-weighted graphs, like these:
The distance from \(x\) to \(y\) is the length of the shortest directed path from \(x\) to \(y\), or \(\infty\) if no path exists.
All this is old stuff; now we're thinking about enriched profunctors between enriched categories.
A \(\mathbf{Bool}\)-enriched profunctor between \(\mathbf{Bool}\)-enriched categories is also called a feasibility relation between preorders, and we can draw one like this:
What's a \(\mathbf{Cost}\)-enriched profunctor between \(\mathbf{Cost}\)-enriched categories? It should be no surprise that we can draw one like this:
You can think of \(C\) and \(D\) as countries with toll roads between the different cities; then an enriched profunctor \(\Phi : C \nrightarrow D\) gives us the cost of getting from any city \(c \in C\) to any city \(d \in D\). This cost is \(\Phi(c,d) \in \mathbf{Cost}\).
But to specify \(\Phi\), it's enough to specify costs of flights from
some cities in \(C\) to some cities in \(D\). That's why we just need to draw a few blue dashed edges labelled with costs. We can use this to work out the cost of going from any city \(c \in C\) to any city \(d \in D\). I hope you can guess how! Puzzle 182. What's \(\Phi(E,a)\)? Puzzle 183. What's \(\Phi(W,c)\)? Puzzle 184. What's \(\Phi(E,c)\)?
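The recipe you should be guessing, "take the cheapest composite trip", can be sketched in code: shortest paths within each country are computed in the min-plus semiring (Floyd–Warshall), and the profunctor value is a minimum over bridge flights. The graph data below is hypothetical, not the weighted graphs drawn above:

```python
import itertools

INF = float('inf')  # plays the role of "false" in Cost

def shortest_paths(vertices, edges):
    # Floyd-Warshall in the (min, +) semiring: d[x][y] is the cheapest
    # cost of a directed path from x to y (INF if none exists).
    d = {x: {y: (0 if x == y else INF) for y in vertices} for x in vertices}
    for (x, y), c in edges.items():
        d[x][y] = min(d[x][y], c)
    for k, i, j in itertools.product(vertices, repeat=3):
        d[i][j] = min(d[i][j], d[i][k] + d[k][j])
    return d

def profunctor(C, C_edges, D, D_edges, flights):
    # Phi(c, d) = min over bridge flights (c', d') of
    #   C(c, c') + flight cost + D(d', d)
    dC, dD = shortest_paths(C, C_edges), shortest_paths(D, D_edges)
    return {(c, d): min((dC[c][c2] + cost + dD[d2][d]
                         for (c2, d2), cost in flights.items()),
                        default=INF)
            for c in C for d in D}

# Hypothetical toll-road data (NOT the graphs from the lecture).
C = ['E', 'W']; C_edges = {('E', 'W'): 2, ('W', 'E'): 3}
D = ['a', 'b', 'c']; D_edges = {('a', 'b'): 1, ('b', 'c'): 4}
flights = {('W', 'a'): 5}   # one bridge flight from W in C to a in D

Phi = profunctor(C, C_edges, D, D_edges, flights)
print(Phi[('E', 'c')])  # E -> W (2) -> a (5) -> b (1) -> c (4): prints 12
```

The `min` over composite trips is exactly how \(\text{false}\) and \(\infty\) play parallel roles: an impossible trip contributes \(\infty\) and never wins the minimum.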
Here's a much more challenging puzzle:
Puzzle 185. In general, a \(\mathbf{Cost}\)-enriched profunctor \(\Phi : C \nrightarrow D\) is defined to be a \(\mathbf{Cost}\)-enriched functor
$$ \Phi : C^{\text{op}} \times D \to \mathbf{Cost} $$ This is a function that assigns to any \(c \in C\) and \(d \in D\) a cost \(\Phi(c,d)\). However, for this to be a \(\mathbf{Cost}\)-enriched functor we need to make \(\mathbf{Cost}\) into a \(\mathbf{Cost}\)-enriched category! We do this by saying that \(\mathbf{Cost}(x,y)\) equals \( y - x\) if \(y \ge x \), and \(0\) otherwise. We must also make \(C^{\text{op}} \times D\) into a \(\mathbf{Cost}\)-enriched category, which I'll let you figure out how to do. Then \(\Phi\) must obey some rules to be a \(\mathbf{Cost}\)-enriched functor. What are these rules? What do they mean concretely in terms of trips between cities?
And here are some easier ones:
Puzzle 186. Are the graphs we used above to describe the preorders \(A\) and \(B\) Hasse diagrams? Why or why not? Puzzle 187. I said that \(\infty\) plays the same role in \(\textbf{Cost}\) that \(\text{false}\) does in \(\textbf{Bool}\). What exactly is this role?
By the way, people often say
\(\mathcal{V}\)-category to mean \(\mathcal{V}\)-enriched category, and \(\mathcal{V}\)-functor to mean \(\mathcal{V}\)-enriched functor, and \(\mathcal{V}\)-profunctor to mean \(\mathcal{V}\)-enriched profunctor. This helps you talk faster and do more math per hour. |
Search
Now showing items 1-10 of 31
The ALICE Transition Radiation Detector: Construction, operation, and performance
(Elsevier, 2018-02)
The Transition Radiation Detector (TRD) was designed and built to enhance the capabilities of the ALICE detector at the Large Hadron Collider (LHC). While aimed at providing electron identification and triggering, the TRD ...
Constraining the magnitude of the Chiral Magnetic Effect with Event Shape Engineering in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV
(Elsevier, 2018-02)
In ultrarelativistic heavy-ion collisions, the event-by-event variation of the elliptic flow $v_2$ reflects fluctuations in the shape of the initial state of the system. This allows to select events with the same centrality ...
First measurement of jet mass in Pb–Pb and p–Pb collisions at the LHC
(Elsevier, 2018-01)
This letter presents the first measurement of jet mass in Pb-Pb and p-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV and 5.02 TeV, respectively. Both the jet energy and the jet mass are expected to be sensitive to jet ...
First measurement of $\Xi_{\rm c}^0$ production in pp collisions at $\mathbf{\sqrt{s}}$ = 7 TeV
(Elsevier, 2018-06)
The production of the charm-strange baryon $\Xi_{\rm c}^0$ is measured for the first time at the LHC via its semileptonic decay into e$^+\Xi^-\nu_{\rm e}$ in pp collisions at $\sqrt{s}=7$ TeV with the ALICE detector. The ...
D-meson azimuthal anisotropy in mid-central Pb-Pb collisions at $\mathbf{\sqrt{s_{\rm NN}}=5.02}$ TeV
(American Physical Society, 2018-03)
The azimuthal anisotropy coefficient $v_2$ of prompt D$^0$, D$^+$, D$^{*+}$ and D$_s^+$ mesons was measured in mid-central (30-50% centrality class) Pb-Pb collisions at a centre-of-mass energy per nucleon pair $\sqrt{s_{\rm ...
Search for collectivity with azimuthal J/$\psi$-hadron correlations in high multiplicity p-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 and 8.16 TeV
(Elsevier, 2018-05)
We present a measurement of azimuthal correlations between inclusive J/$\psi$ and charged hadrons in p-Pb collisions recorded with the ALICE detector at the CERN LHC. The J/$\psi$ are reconstructed at forward (p-going, ...
Systematic studies of correlations between different order flow harmonics in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV
(American Physical Society, 2018-02)
The correlations between event-by-event fluctuations of anisotropic flow harmonic amplitudes have been measured in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV with the ALICE detector at the LHC. The results are ...
$\pi^0$ and $\eta$ meson production in proton-proton collisions at $\sqrt{s}=8$ TeV
(Springer, 2018-03)
An invariant differential cross section measurement of inclusive $\pi^{0}$ and $\eta$ meson production at mid-rapidity in pp collisions at $\sqrt{s}=8$ TeV was carried out by the ALICE experiment at the LHC. The spectra ...
J/$\psi$ production as a function of charged-particle pseudorapidity density in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV
(Elsevier, 2018-01)
We report measurements of the inclusive J/$\psi$ yield and average transverse momentum as a function of charged-particle pseudorapidity density ${\rm d}N_{\rm ch}/{\rm d}\eta$ in p-Pb collisions at $\sqrt{s_{\rm NN}}= 5.02$ ...
Energy dependence and fluctuations of anisotropic flow in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 and 2.76 TeV
(Springer Berlin Heidelberg, 2018-07-16)
Measurements of anisotropic flow coefficients with two- and multi-particle cumulants for inclusive charged particles in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 and 2.76 TeV are reported in the pseudorapidity range |η| < 0.8 ...
Introduction: the Newton-Raphson method
The first time I was introduced to the Newton-Raphson method was a couple of years ago. I was studying physics and during a course on computational physics we had to optimize some function related to the N-body problem. About a week ago, I encountered Newton’s method for the second time, during the Stanford course by Andrew Ng on machine learning. Prof. Ng introduces the Newton-Raphson method as an alternative to the very well known gradient descent algorithm. I really like the algorithm, because the process is so intuitive. Its best explained by simply looking at the iterations:
In short, the goal is to find the point $x^*$ where $f(x^*) = 0$. You start at some random point $x_0$. Take the derivative to compute the tangent. Compute the intersection between the tangent and zero, $x_1$. This x-coordinate is at least as close to $x^*$ as $x_0$ was.
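The iteration fits in a few lines. This is a generic sketch (the function, its derivative, and the starting point are stand-ins for illustration, not the course's N-body problem):

```python
def newton_raphson(f, f_prime, x0, tol=1e-12, max_iter=50):
    """Find a root of f by repeatedly intersecting the tangent line with zero:

    x_{n+1} = x_n - f(x_n) / f'(x_n)
    """
    x = x0
    for _ in range(max_iter):
        step = f(x) / f_prime(x)
        x -= step
        if abs(step) < tol:
            return x
    raise RuntimeError("did not converge")

# Example: the positive root of f(x) = x^2 - 2, i.e. sqrt(2).
root = newton_raphson(lambda x: x*x - 2, lambda x: 2*x, x0=1.0)
assert abs(root - 2**0.5) < 1e-10
```

Convergence is quadratic near a simple root, which is why so few iterations are usually needed.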
Logistic regression
In one of the exercises of the Stanford course, students are asked to first prove that the empirical loss function for logistic regression:

\begin{equation} J(\theta) = \frac{1}{m}\sum_{i} \log\left(1+e^{-y^i \theta^T x^i}\right) \end{equation}
is convex (its Hessian is positive semi-definite) and thus has a single global optimum. Secondly, some data is provided and students have to implement the Newton-Raphson method to determine the parameters of a logistic regression.
I proved the former by showing that the quadratic form of the Hessian is non-negative, and in doing so I had already computed the first and second order derivatives:
\begin{align} \frac{\partial}{\partial\theta_l} J(\theta) &= \frac{-1}{m}\sum_{i}g(-y^i \theta^T x^i)y^i x^i_l \\\
\frac{\partial^2}{\partial\theta_l \partial\theta_k} J(\theta) &= \frac{1}{m}\sum_{i}g(-y^i \theta^T x^i)(1 - g(-y^i \theta^T x^i)) x^i_k x^i_l \\\ \textrm{where }g(z) &= \frac{1}{1 + e^{-z}} \textrm{, the sigmoid-function} \end{align}

Vectorization
I used these results for my python implementation of the Newton-Raphson method, which was unvectorized. Unvectorized implementations are always a little unsatisfactory, since they tend to be large and have lots of (nested) loops. With all those indexes around, it is often difficult to debug in case something is not working as expected. Also, it is well-known that vectorization is much faster in performance. So, as a little test, I decided to do the vectorization and test how much faster this solution would be. Short answer: 250 times faster.
Now, depending on the actual functions involved, vectorization is not super easy. It requires a good knowledge of matrix and vector multiplications and a certain amount of intuition.
So I started with the argument for the sigmoid function. Currently, $-y^i \theta^T x^i$ gives a scalar. For each observation $i$, one scalar is computed. Instead of a single scalar, a column vector for all the observations should be computed. To achieve this, the $\theta^T x^i$ is replaced with $X\theta$ since:
\begin{align} X\theta = \begin{bmatrix} – x^1 – \\\
\vdots \\\ – x^m – \end{bmatrix} \cdot \theta = \begin{bmatrix} – x^1 \cdot \theta – \\\ \vdots \\\ – x^m \cdot \theta – \end{bmatrix} \end{align}
Resulting in a column vector with all our estimates of $\theta^T x^i$ (one for each observation). Next, I had to multiply each element in this vector with $-y^i$. Here, I can't use a dot-product since that sums everything up, and collapses to a scalar. Instead an element-wise multiplication is used: $-y \circ X\theta$. Accordingly I updated my sigmoid function implementation so that it takes vectors and matrices as input and returns objects of the same size.
For the $y^i$ factor outside of the sigmoid function, I also used an element-wise multiplication, so that we are now at $g(-y \circ X\theta) \circ y$, which is still a column vector. Finally, this vector needs to be multiplied with $X$ while collapsing on the observations ($i$) and expanding on the features ($l$). To do this we need to take the inner product with respect to $i$ and the outer product with respect to $l$. To refresh, an inner product between $x$ and $y$ is defined as:
\begin{align} x^Ty = \begin{bmatrix} x_1 \ \cdots \ x_n \end{bmatrix} \cdot \begin{bmatrix} y_1 \\\
\vdots \\\ y_n \end{bmatrix} = \sum x_i y_i \end{align}
Since we know that has each observation as a single row, and we need to take the inner product
over the observations, we need to have individual observations as columns. Therefore we transpose and multiply in front to end up with:
\begin{align} \nabla_{\theta} J(\theta) = -\frac{1}{m} X^T\left(g(-y\circ X\theta)\circ y\right) \end{align}
For the Hessian, we have to translate the $x^i_k x^i_l$ term. Since the summation is over $i$, there is an implied inner product over the observations, and an outer product over the features: $X^T X$. However, the factor $g(-y^i \theta^T x^i)(1 - g(-y^i \theta^T x^i))$ also needs to be considered, and more specifically during the inner-product (summation). This means this term has to be put in between. The simplest way to do this, is to construct a diagonal matrix with the values of the column vector on the diagonal:
\begin{align} g(-y^i \theta^T x^i)(1 - g(-y^i \theta^T x^i)) = \begin{bmatrix} a_1 \\\
\vdots \\\ a_m \end{bmatrix} \Rightarrow \begin{bmatrix} a_1 & 0 & \cdots & 0 \\\ 0 & a_2 & \cdots & 0 \\\ \vdots & & \ddots & \vdots \\\ 0 & 0 & \cdots & a_m \end{bmatrix} = A \end{align}
With this matrix the expression for the Hessian becomes:
\begin{align} \nabla_{\theta}^2 J(\theta) = \frac{1}{m} X^T A X \end{align}
Coming back to the code, these vectorized versions are implemented here. The last thing to do is to run both methods and compare the times. With the provided data, the unvectorized version takes on my system around 600 to 800 ms. The vectorized implementation takes around 2 to 3 ms. On average around 250 times as fast!
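For reference, here is a self-contained sketch of the vectorized Newton update, with the $\frac{1}{m}$ factors written out explicitly and `np.linalg.solve` used instead of explicitly inverting the Hessian. The synthetic data at the bottom is made up for illustration and is not the course's dataset:

```python
import numpy as np

def sigmoid(z):
    # Works element-wise on vectors and matrices.
    return 1.0 / (1.0 + np.exp(-z))

def newton_logistic(X, y, n_iter=10):
    """Vectorized Newton-Raphson for logistic regression.

    X : (m, n) design matrix, y : (m,) labels in {-1, +1}.
    Gradient: -(1/m) X^T (g(-y * X theta) * y)
    Hessian:   (1/m) X^T A X,  A = diag(g * (1 - g))
    """
    m, n = X.shape
    theta = np.zeros(n)
    for _ in range(n_iter):
        margins = -y * (X @ theta)          # vector of -y^i theta^T x^i
        g = sigmoid(margins)
        grad = -(X.T @ (g * y)) / m
        A = np.diag(g * (1.0 - g))
        H = (X.T @ A @ X) / m
        theta -= np.linalg.solve(H, grad)   # Newton step: H^{-1} grad
    return theta

# Tiny synthetic example (hypothetical data, fixed seed).
rng = np.random.default_rng(0)
X = np.column_stack([np.ones(200), rng.normal(size=200)])
true_theta = np.array([0.5, -2.0])
y = np.where(rng.random(200) < sigmoid(X @ true_theta), 1.0, -1.0)
theta = newton_logistic(X, y)
```

In practice one would also avoid materialising the dense diagonal matrix `A` (row-scaling `X` does the same job), but the version above mirrors the formulas in the text.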
Integers
Category : 7th Class
INTEGERS
FUNDAMENTALS
N = {1, 2, 3, 4, ...}
Elementary Question - 1: Which is the smallest natural number? Ans.: 1
Elementary Question - 2: Which is the smallest whole number? Ans.: 0
W = {0, 1, 2, 3, ...}
That is, \[Z=\{........-4,\,\,-3,\,\,-2,\,\,-1\}\cup \{0,\text{ }1,\,\,2,\,\,3,\,\,4,\,\,5,.....\}\]
Where \[\cup \] denote "union" or combination of these two sets.
Concept of Infinity
We can go on adding more and more numbers to the right side of the number line (e.g., 100, 101, ..., 100000, ..., 1 crore, ..., 1000 crores, ...) in an unending manner up to plus infinity and similarly to the left side of the number line up to minus infinity.
(number line running from minus infinity \[\left( -\,\infty \right)\] on the far left to plus infinity \[\left( +\infty \right)\] on the far right)
These very, very large unending numbers on the right side and left side of the number line are called plus infinity \[\left( +\,\infty \right)\] and minus infinity \[\left( -\,\infty \right)\] respectively.
Note:
e.g., \[3+\left( -\,5 \right)=-2\]
2. 0 is not included in either \[{{Z}^{+}}\] or \[{{Z}^{-}}\]. Hence, it is a non-negative integer.
Common use of numbers
(i) To represent quantities like profit, income, increase, rise, high, north, east, above, depositing, climbing and so on, positive numbers are used.
(ii) To represent quantities like loss, expenditure, decrease, fall, low, south, west, below, withdrawing, sliding and so on, negative numbers are used.
(iii) On a number line, when we
Note: Mod of a number
Mod or modulus of a number denotes the positive value of that number.
Thus, |a| = +a if a > 0 and |a| = - a if a < 0
Elementary Question - 3: Find |6| - |-3|.
Ans.\[\left| 6 \right|=+\,6\,\And \left| -3 \right|=+\,3\therefore \left| 6 \right|-|-3|=6-3=3.\]
A short note on notations
The brilliance of a mathematician or mathematical student lies in his/her ability to tell more things in less words. To illustrate: \[\in \] means "belongs to"; : means "such that"; \[\forall \] means "for all". For example, to represent that x is a natural number, we write \[x\in \,N;\]
Further, \[\subset \] means is a "subset of'. A subset is a smaller set wholly contained in larger set.
For e.g., if A = {1, 2, 3} and B = {1, 2, 3, 4, 5} then \[A\subset B\]i.e., A is a subset of B.
Elementary question - 4: Is set of natural numbers, a subset of integers?
Answer: Yes, it is represented as\[N\subset Z\]
Properties of integers:
Closure property: For \[a,b\in Z,a+b\in Z,a-b\in Z\] \[and\text{ }a\times b\in Z.\]

Commutative property: If \[a,b\in Z,\]then a + b = b + a and \[a\times b\text{ }=b\times a.\]

Associative property: If \[a,\text{ }b,\text{ }c\text{ }\in \text{ }Z,\]then a + (b + c) = (a + b) + c and \[a\times \left( b\times c \right)=\left( a\times b \right)\times c.\]

Distributive property: For a, b and \[c\in Z,\text{ }a\text{ }\left( b+c \right)=ab+ac\]and \[a\left( b-c \right)=ab-ac.\]

Identity elements: For \[a\in Z,a+0=a=0+a\]and \[a\times 1=a=1\times a.\]
Elementary Question - 5:
Show by a practical example that closure property is not satisfied with respect to division in the set of integers.
Ans. Consider two numbers 2 and 3
Let us do \[2\div 3=\frac{2}{3}\]
Now. \[2\in Z,\,\,3\in Z\]
But \[\frac{2}{3}\cancel{\in }\,Z\] as \[\frac{2}{3}\] is not an integer.
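The same check can be phrased in a couple of lines of Python (illustration only):

```python
# Closure check: integers are closed under +, - and *, but not under /.
a, b = 2, 3

assert isinstance(a + b, int)
assert isinstance(a - b, int)
assert isinstance(a * b, int)

q = a / b            # 2 / 3 = 0.666..., which is not an integer
assert not q.is_integer()
print(q)             # prints 0.6666666666666666
```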
Hydejack offers a few additional features to markup your content. Don’t worry, these are merely CSS classes added with kramdown’s
{:...} syntax, so that your content remains compatible with other Jekyll themes.
Table of Contents

* A word on building speeds
* Adding a table of contents
* Adding message boxes
* Adding large text
* Adding large images
* Adding image captions
* Adding large quotes
* Adding faded text
* Adding tables
  * Scroll table
  * Flip table
  * Small tables
* Adding code blocks
* Adding math
  * Inline
  * Block

A word on building speeds
If building speeds are a problem, try using the
--incremental flag, e.g.
bundle exec jekyll serve --incremental
From the Jekyll docs (emphasis mine):
Enable the experimental incremental build feature. Incremental build only re-builds posts and pages that have changed, resulting in significant performance improvements for large sites,
but may also break site generation in certain cases.
The breakage occurs when you create new files or change filenames. Also, changing the title, category, tags, etc. of a page or post will not be reflected in pages other than the page or post itself. This makes it ideal for writing new posts and previewing changes, but not setting up new content.
Adding a table of contents
You can add a generated table of contents to any page by adding
{:toc} below a list.
Example: see above
Markdown:
* this unordered seed list will be replaced by toc as unordered list
{:toc}
Adding message boxes
You can add a message box by adding the
message class to a paragraph.
Example:
Markdown:
**NOTE**: You can add a message box.
{:.message}
Adding large text
You can add large text by adding the
lead class to the paragraph.
Example:
You can add large text.
Markdown:
You can add large text.
{:.lead}
Adding large images
You can make an image span the full width by adding the
lead class.
Example:
Markdown:
![Full-width image](https://placehold.it/800x100){:.lead data-width="800" data-height="100"}
Adding image captions
You can add captions to images by adding the
figure class to the paragraph containing the image and a caption.
A caption for an image.
Markdown:
![Full-width image](https://placehold.it/800x100){:.lead data-width="800" data-height="100"}
A caption for an image.
{:.figure}
For better semantics, you can also use the
figure/
figcaption HTML5 tags:
<figure>
  <img alt="An image with a caption" src="https://placehold.it/800x100" class="lead" data-width="800" data-height="100" />
  <figcaption>A caption to an image.</figcaption>
</figure>
Adding large quotes
You can make a quote “pop out” by adding the
lead class.
Example:
You can make a quote “pop out”.
Markdown:
> You can make a quote "pop out".
{:.lead}
Adding faded text
You can gray out text by adding the
faded class. Use this sparingly and for information that is not essential, as it is more difficult to read.
Example:
I’m faded, faded, faded.
Markdown:
I'm faded, faded, faded.
{:.faded}
Adding tables
Adding tables is straightforward and works just as described in the kramdown docs, e.g.
| Default aligned | Left aligned | Center aligned | Right aligned |
|-----------------|:-------------|:--------------:|--------------:|
| First body part | Second cell  | Third cell     | fourth cell   |
Markdown:
| Default aligned |Left aligned| Center aligned | Right aligned |
|-----------------|:-----------|:---------------:|---------------:|
| First body part |Second cell | Third cell | fourth cell |
However, it gets trickier when adding large tables. In this case, Hydejack will break the layout and grant the table the entire available screen width to the right:
(rendered example: a wide 16-column table that extends past the normal content width)
If the extra space still isn’t enough, the table will receive a scrollbar. It is browser default behavior to break the lines inside table cells to fit the content on the screen. By adding the
scroll-table class on a table, the behavior is changed to never break lines inside cells, e.g:
(rendered example: the same wide table, now horizontally scrollable, with no line breaks inside cells)
You can add the
scroll-table class to a markdown table by putting
{:.scroll-table} on the line directly below the table. To add the class to an HTML table, add it to the
class attribute of the
table tag, e.g.
<table class="scroll-table">.
Flip table
Alternatively, you can “flip” (transpose) the table. Unlike the other approach, this will keep the table head (now the first column) fixed in place.
You can enable this behavior by adding
flip-table or
flip-table-small to the CSS classes of the table. The
-small version will only flip the table on "small" screens (< 1080px wide).
Example:
(rendered example: the flipped table, with the original header row shown as a fixed first column)
You can add the
flip-table class to a markdown table by putting
{:.flip-table} on the line directly below the table. To add the class to an HTML table, add it to the
class attribute of the
table tag, e.g.
<table class="flip-table">.
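For instance, with a small hypothetical two-column table, the kramdown class annotation goes on the line directly below the table source:

~~~markdown
| Head A | Head B |
|--------|--------|
| foo    | bar    |
{:.flip-table}
~~~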
Small tables
If a table is small enough to fit the screen even on small screens, you can add the
stretch-table class to force a table to use the entire available content width. Note that stretched tables can no longer be scrolled.
| Default aligned | Left aligned | Center aligned | Right aligned |
|-----------------|--------------|----------------|---------------|
| First body part | Second cell  | Third cell     | fourth cell   |
You can add the
stretch-table class to a markdown table by putting
{:.stretch-table} on the line directly below the table. To add the class to an HTML table, add it to the
class attribute of the
table tag, e.g.
<table class="stretch-table">.
Adding code blocks
To add a code block without syntax highlighting, simply indent 4 spaces (regular markdown). For code blocks with code highlighting, use
~~~<language>. This syntax is also supported by GitHub. For more information and a list of supported languages, see Rouge.
Example:
~~~js
// Example can be run directly in your JavaScript console
// Create a function that takes two arguments and returns the sum of those
// arguments
var adder = new Function("a", "b", "return a + b");
// Call the function
adder(2, 6);
// > 8
~~~
Markdown:
    ~~~js
    // Example can be run directly in your JavaScript console
    // Create a function that takes two arguments and returns the sum of those
    // arguments
    var adder = new Function("a", "b", "return a + b");
    // Call the function
    adder(2, 6);
    // > 8
    ~~~
Adding math
Hydejack supports math blocks via KaTeX.
Why KaTeX instead of MathJax? KaTeX is faster and more lightweight at the cost of having less features, but for the purpose of writing blog posts, this should be a favorable tradeoff.
Before you add math content, make sure you have the following in your config file:
~~~yml
kramdown:
  math_engine: mathjax # this is not a typo
  math_engine_opts:
    preview: true
    preview_as_code: true
~~~
Inline
Example:
Lorem ipsum $f(x) = x^2$.
Markdown:
Lorem ipsum $$ f(x) = x^2 $$.
Block
Example:
\begin{aligned} \phi(x,y) &= \phi \left(\sum_{i=1}^n x_ie_i, \sum_{j=1}^n y_je_j \right) \\[2em] &= \sum_{i=1}^n \sum_{j=1}^n x_i y_j \phi(e_i, e_j) \\[2em] &= (x_1, \ldots, x_n) \left(\begin{array}{ccc} \phi(e_1, e_1) & \cdots & \phi(e_1, e_n) \\ \vdots & \ddots & \vdots \\ \phi(e_n, e_1) & \cdots & \phi(e_n, e_n) \end{array}\right) \left(\begin{array}{c} y_1 \\ \vdots \\ y_n \end{array}\right)\end{aligned}
Markdown:
$$\begin{aligned} \phi(x,y) &= \phi \left(\sum_{i=1}^n x_ie_i, \sum_{j=1}^n y_je_j \right) \\[2em] &= \sum_{i=1}^n \sum_{j=1}^n x_i y_j \phi(e_i, e_j) \\[2em] &= (x_1, \ldots, x_n) \left(\begin{array}{ccc} \phi(e_1, e_1) & \cdots & \phi(e_1, e_n) \\ \vdots & \ddots & \vdots \\ \phi(e_n, e_1) & \cdots & \phi(e_n, e_n) \end{array}\right) \left(\begin{array}{c} y_1 \\ \vdots \\ y_n \end{array}\right)\end{aligned}$$
Volume 59, № 10, 2007
Ukr. Mat. Zh. - 2007. - 59, № 10. - pp. 1299–1312
Necessary and sufficient conditions are obtained for the existence of a function from the class $S^-$ with prescribed values of the integral norms of three successive derivatives (generally speaking, of fractional order).
Ukr. Mat. Zh. - 2007. - 59, № 10. - pp. 1313–1321
We obtain solutions of new extremal problems of the geometric theory of functions of a complex variable related to estimates for the inner radii of nonoverlapping domains. Some known results are generalized to the case of open sets.
Ukr. Mat. Zh. - 2007. - 59, № 10. - pp. 1322–1330
For a controlled integro-differential equation with fuzzy noise, we introduce the notions of a fuzzy bundle of trajectories and a fuzzy reachability set and prove some properties of fuzzy bundles.
Ukr. Mat. Zh. - 2007. - 59, № 10. - pp. 1331–1338
A subgroup $H$ of a group $G$ is said to be nearly pronormal in $G$ if, for each subgroup $L$ of the group $G$ containing $H$, the normalizer $N_L(H)$ is contranormal in $L$. We prove that if $G$ is a (generalized) soluble group in which every subgroup is nearly pronormal, then all subgroups of $G$ are pronormal.
Ukr. Mat. Zh. - 2007. - 59, № 10. - pp. 1339–1352
We establish relations for the distribution of functionals associated with the behavior of a classical risk process after ruin and a multivariate ruin function.
Characterization of the semilattice of idempotents of a finite-rank permutable inverse semigroup with zero
Ukr. Mat. Zh. - 2007. - 59, № 10. - pp. 1353–1362
We give a characterization of the semilattice of idempotents of a finite-rank permutable inverse semigroup with zero.
Asymptotic representations of solutions of one class of nonlinear nonautonomous differential equations of the third order
Ukr. Mat. Zh. - 2007. - 59, № 10. - pp. 1363–1375
We establish asymptotic representations for unbounded solutions of nonlinear nonautonomous differential equations of the third order that are close, in a certain sense, to equations of the Emden-Fowler type.
Ukr. Mat. Zh. - 2007. - 59, № 10. - pp. 1376–1390
We introduce a generalized hybrid integral transformation of the Mehler–Fock type on a segment $[0; R]$ with $n$ conjugate points. We consider examples of application of this transformation to the solution of typical singular boundary-value problems for linear partial differential equations of the second order in piecewise-homogeneous media.
On representations of a general solution in the theory of micropolar thermoelasticity without energy dissipation
Ukr. Mat. Zh. - 2007. - 59, № 10. - pp. 1391–1398
In the present paper, the linear theory of micropolar thermoelasticity without energy dissipation is considered. This work is organized as follows: Section 2 is devoted to basic equations for micropolar thermoelastic materials, supposed to be isotropic and homogeneous, and to assumptions on constitutive constants. In Section 3, some theorems related to representations of a general solution are studied.
Ukr. Mat. Zh. - 2007. - 59, № 10. - pp. 1399–1409
The set $\mathcal{D}^{\infty}$ of infinitely differentiable periodic functions is studied in terms of generalized $\overline{\psi}$-derivatives defined by a pair $\overline{\psi} = (\psi_1, \psi_2)$ of sequences $\psi_1$ and $\psi_2$. It is shown that every function $f$ from the set $\mathcal{D}^{\infty}$ has at least one derivative whose parameters $\psi_1$ and $\psi_2$ decrease faster than any power function, and, at the same time, for an arbitrary function $f \in \mathcal{D}^{\infty}$ different from a trigonometric polynomial, there exists a pair $\overline{\psi}$ whose parameters $\psi_1$ and $\psi_2$ have the same rate of decrease and for which the $\overline{\psi}$-derivative no longer exists.
Ukr. Mat. Zh. - 2007. - 59, № 10. - pp. 1410–1418
We prove a generalization of the well-known Dzyadyk theorem that gives an interesting geometric criterion for the analyticity of functions.
Ukr. Mat. Zh. - 2007. - 59, № 10. - pp. 1419–1431
In this paper, the sharp estimates for some multilinear commutators related to certain sublinear integral operators are obtained. The operators include the Littlewood - Paley operator and the Marcinkiewicz operator. As application, we obtain the weighted $L^p (p > 1)$ inequalities and $L \log L$-type estimate for the multilinear commutators.
Ukr. Mat. Zh. - 2007. - 59, № 10. - pp. 1432–1435
We investigate one solvable subclass of quantified formulas in pure predicate calculus and obtain a necessary and sufficient condition for the satisfiability of formulas from this subclass.
Ukr. Mat. Zh. - 2007. - 59, № 10. - pp. 1436–1440
We solve the problem of representation of measures with values in a Banach space as the limits of weakly convergent sequences of vector measures whose basis is a given nonnegative measure. |
Production of Σ(1385)± and Ξ(1530)0 in proton–proton collisions at √s = 7 TeV
(Springer, 2015-01-10)
The production of the strange and double-strange baryon resonances (Σ(1385)±, Ξ(1530)0) has been measured at mid-rapidity (|y| < 0.5) in proton–proton collisions at √s = 7 TeV with the ALICE detector at the LHC. Transverse ...
Forward-backward multiplicity correlations in pp collisions at √s = 0.9, 2.76 and 7 TeV
(Springer, 2015-05-20)
The strength of forward-backward (FB) multiplicity correlations is measured by the ALICE detector in proton-proton (pp) collisions at √s = 0.9, 2.76 and 7 TeV. The measurement is performed in the central pseudorapidity ...
Inclusive photon production at forward rapidities in proton-proton collisions at $\sqrt{s}$ = 0.9, 2.76 and 7 TeV
(Springer Berlin Heidelberg, 2015-04-09)
The multiplicity and pseudorapidity distributions of inclusive photons have been measured at forward rapidities ($2.3 < \eta < 3.9$) in proton-proton collisions at three center-of-mass energies, $\sqrt{s}=0.9$, 2.76 and 7 ...
Rapidity and transverse-momentum dependence of the inclusive J/$\mathbf{\psi}$ nuclear modification factor in p-Pb collisions at $\mathbf{\sqrt{\textit{s}_{NN}}}=5.02$ TeV
(Springer, 2015-06)
We have studied the transverse-momentum ($p_{\rm T}$) dependence of the inclusive J/$\psi$ production in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV, in three center-of-mass rapidity ($y_{\rm cms}$) regions, down to ...
Measurement of pion, kaon and proton production in proton–proton collisions at √s = 7 TeV
(Springer, 2015-05-27)
The measurement of primary π±, K±, p and p̄ production at mid-rapidity (|y| < 0.5) in proton–proton collisions at √s = 7 TeV performed with A Large Ion Collider Experiment (ALICE) at the Large Hadron Collider (LHC) is reported. ...
Two-pion femtoscopy in p-Pb collisions at $\sqrt{s_{\rm NN}}=5.02$ TeV
(American Physical Society, 2015-03)
We report the results of the femtoscopic analysis of pairs of identical pions measured in p-Pb collisions at $\sqrt{s_{\mathrm{NN}}}=5.02$ TeV. Femtoscopic radii are determined as a function of event multiplicity and pair ...
Measurement of charm and beauty production at central rapidity versus charged-particle multiplicity in proton-proton collisions at $\sqrt{s}$ = 7 TeV
(Springer, 2015-09)
Prompt D meson and non-prompt J/$\psi$ yields are studied as a function of the multiplicity of charged particles produced in inelastic proton-proton collisions at a centre-of-mass energy of $\sqrt{s}=7$ TeV. The results ...
Charged jet cross sections and properties in proton-proton collisions at $\sqrt{s}=7$ TeV
(American Physical Society, 2015-06)
The differential charged jet cross sections, jet fragmentation distributions, and jet shapes are measured in minimum bias proton-proton collisions at centre-of-mass energy $\sqrt{s}=7$ TeV using the ALICE detector at the ...
Centrality dependence of high-$p_{\rm T}$ D meson suppression in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV
(Springer, 2015-11)
The nuclear modification factor, $R_{\rm AA}$, of the prompt charmed mesons ${\rm D^0}$, ${\rm D^+}$ and ${\rm D^{*+}}$, and their antiparticles, was measured with the ALICE detector in Pb-Pb collisions at a centre-of-mass ...
K*(892)$^0$ and $\Phi$(1020) production in Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV
(American Physical Society, 2015-02)
The yields of the K*(892)$^0$ and $\Phi$(1020) resonances are measured in Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV through their hadronic decays using the ALICE detector. The measurements are performed in multiple ... |
When I first learnt about GANs (generative adversarial networks)1 I followed the “alternative” objective (which I will refer to as $G_{alt}$), which is the most common GAN objective found in the wild at the time of writing. You can see an example of it in DCGAN2, which is available on GitHub.
$G_{alt}$ corresponds to the following update steps:

- Update the discriminator using binary cross-entropy loss (nn.BCECriterion in Torch) with target 1 for “real” images and 0 for “fake” images.
- Update the generator using the same binary cross-entropy loss with target 1 for its generated (“fake”) images.
With the Power of Mathematics™ we can express the loss functions used in the above update steps.
Let $V(x;\omega)$ be the discriminator’s raw (pre-sigmoid) output, $\sigma$ the sigmoid function, and $G(z;\theta)$ the generator.
Discriminator update, “real” examples $x\sim X_{data}$:
$$ \mathcal{L}_{real}(x) = \mathcal{L}_{BCE}(\sigma(V(x;\omega)), 1) = -\log(\sigma(V(x;\omega))) $$
Discriminator update, “fake” examples $x\sim G(Z;\theta)$:
$$ \mathcal{L}_{fake}(x) = \mathcal{L}_{BCE}(\sigma(V(x;\omega)), 0) = -\log(1-\sigma(V(x;\omega))) $$
Generator update:
$$ \mathcal{L}_{gen}(z) = \mathcal{L}_{BCE}(\sigma(V(G(z;\theta);\omega)), 1) = -\log(\sigma(V(G(z;\theta);\omega))) $$
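As a sanity check, all three losses follow directly from the definitions above. The sketch below is plain Python rather than the Torch code from DCGAN, and `v` stands for the raw discriminator output $V(\cdot;\omega)$:

```python
import math

def sigmoid(v):
    return 1.0 / (1.0 + math.exp(-v))

def loss_real(v):
    # L_real = BCE(sigmoid(v), target=1) = -log(sigmoid(v))
    return -math.log(sigmoid(v))

def loss_fake(v):
    # L_fake = BCE(sigmoid(v), target=0) = -log(1 - sigmoid(v))
    return -math.log(1.0 - sigmoid(v))

def loss_gen(v):
    # Generator is trained against target 1 on its own samples,
    # so here v = V(G(z; theta); omega)
    return -math.log(sigmoid(v))

# A discriminator that is confident on real data (large positive v)
# pays a small loss; a wrong, confident one pays a large loss:
assert loss_real(5.0) < loss_real(0.0) < loss_real(-5.0)
```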
Imagine that $q(x)$ is the true probability distribution over all images, and that $p(x)$ is our approximation. As we train our GAN, the approximation $p(x)$ becomes closer to $q(x)$. There are multiple ways of measuring the distance of one probability distribution from another, and functions for doing so are called f-divergences. Prominent examples of f-divergences include KL divergence and JS divergence.
Somewhere in our practical formulation of the GAN objective we have
implicitly specified a divergence to be minimised. This wouldn’t matter very much if our model had the capacity to model $q(x)$ perfectly, since the minimum would be achieved when $p(x)=q(x)$ regardless of which divergence is used. In reality this is not the case, and even after perfect training $p(x)$ will still be an approximation. The kicker is that the “best” approximation depends on the divergence used.
For example, consider a simplified case in one dimension where $q(x)$ is a bimodal distribution, but $p(x)$ only has the modelling capacity of a single Gaussian. Should $p(x)$ try to fit a single mode really well (mode-seeking), or should it attempt to cover both (mode-covering)? There is no “right answer” to this question, which is why multiple f-divergences exist and are useful.
Fig 1. Which is the better approximation? The answer depends on the f-divergence you are using!
Poole et al.
3 have worked backwards to find the f-divergence being minimised for $G_{alt}$. It turns out that the divergence is not a named or well-known function. The authors argue that the GAN divergence is on the mode-seeking end of the spectrum, which results in a tendency for the generator to produce less variety.
It would be nice to specify whichever divergence we wanted when training a GAN. Fortunately for us, f-GAN4 describes a way to explicitly specify the f-divergence you want in the GAN objective.
Essentially the parts of the practical GAN objective specified earlier that imply the divergence are the sigmoid activation and the binary cross entropy loss. By replacing these parts with generic functions, we reach a more general formulation of the loss functions.
$$ \mathcal{L}_{real}(x) = -g_f(V(x;\omega)) $$
$$ \mathcal{L}_{fake}(x) = f^*(g_f(V(x;\omega))) $$
where $g_f(v)$ = an activation function tailored to the f-divergence, and $f^*(t)$ = the Fenchel conjugate of the f-divergence. A table of these functions can be found in the f-GAN paper, and they are relatively straightforward to implement as part of a custom criterion in Torch.
By setting $g_f(v) = \log(\sigma(v))$ and $f^*(t) = -\log(1 - e^t)$ we get the same discriminator loss functions as $G_{alt}$.
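To see this equivalence concretely, here is a small sketch (plain Python; the function names are mine, not from the f-GAN paper’s code) checking that this choice of $g_f$ and $f^*$ reproduces the binary cross-entropy discriminator losses:

```python
import math

def sigmoid(v):
    return 1.0 / (1.0 + math.exp(-v))

def g_f(v):
    # Output activation for the "GAN divergence": g_f(v) = log(sigmoid(v))
    return math.log(sigmoid(v))

def f_star(t):
    # Fenchel conjugate for the GAN divergence: f*(t) = -log(1 - e^t)
    return -math.log(1.0 - math.exp(t))

def loss_real(v):
    return -g_f(v)            # generic f-GAN "real" loss

def loss_fake(v):
    return f_star(g_f(v))     # generic f-GAN "fake" loss

# With this (g_f, f*) pair the generic losses collapse to the BCE ones:
for v in (-2.0, 0.0, 1.5):
    assert abs(loss_real(v) - (-math.log(sigmoid(v)))) < 1e-12
    assert abs(loss_fake(v) - (-math.log(1.0 - sigmoid(v)))) < 1e-12
```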
In the f-GAN paper, the generator loss has the same form as $\mathcal{L}_{real}$, applied to the generated samples:
$$ \mathcal{L}_{gen}(z) = -g_f(V(G(z;\theta);\omega)) $$
Pretty simple stuff here, really.
Poole et al. propose an extension which allows the generator and discriminator to be trained with different f-divergences. Roughly speaking this involves undoing the effects of the discriminator f-divergence to recover the density ratio $\frac{q(x)}{p(x)}$, and then applying the generator f-divergence $f_G$.
$$ \mathcal{L}_{gen}(z) = f_G\left((f')^{-1}\left(g_f(V(G(z;\theta);\omega))\right)\right) $$
Here are some generated examples after training DCGAN on CIFAR-10 with different divergences, using the f-GAN generator loss.
(Table of sample images omitted: one row of generated output for each f-divergence — the GAN divergence, the JS divergence, and the RKL (reverse KL) divergence.)
1. Generative Adversarial Networks. https://arxiv.org/abs/1406.2661
2. Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks. https://arxiv.org/abs/1511.06434
3. Improved generator objectives for GANs. https://arxiv.org/abs/1612.02780
4. f-GAN: Training Generative Neural Samplers using Variational Divergence Minimization. https://arxiv.org/abs/1606.00709
Define a 3-dimensional QFT with $N=4$ supersymmetry (8 supercharges), whose field content is $g$ $N=4$ hyper-multiplets (in a representation $R$ of some group $G$).
Each hyper-multiplet is composed of 2 $N=2$ chiral-multiplets $Q,\tilde{Q}$ with complex conjugate representations $R,\bar R$. Therefore, the global (flavor) symmetry in the system would be $SU(g) \times SU(g) \times U(1)$, where each set of chirals transforms as a fundamental of $SU(g)$, and the $U(1)$ is a rotation between the 2 sets of chirals.
However, if $R$ is real (for example adjoint) then $R,\bar R$ are the same, the 2 sets of chirals are indistinguishable so the global symmetry is the larger $SU(2g)$.
Now, add an $N=4$ vector-multiplet (that is also adjoint of $G$) to the system, the superpotential gets a contribution of interaction terms between the chiral fields in the hyper-multiplet $Q,\tilde{Q}$ and the chiral field in the vector-multiplet $\Phi$ of the form $\sim Q\Phi\tilde{Q}$.
The statement is that this breaks the global symmetry $SU\left(2g\right)$ (of dimension $\left(2g\right)^{2}-1$) down to the smaller $USp\left(2g\right)$ (of dimension $g\left(2g+1\right)$).
Why is that? How to see this from the Lagrangian? |
Divisibility is a relation $R\subseteq \mathbb Z \times \mathbb Z$, denoted by the sign "$\mid$" and defined as follows:
$$d\mid n:=\Leftrightarrow\exists m\in\mathbb Z\;\; d\cdot m=n\wedge d\neq 0.\label{E18333}\tag{1}$$
In other words, for two integers $n,d\in\mathbb Z$ with $d\neq 0$, $d$ is a
divisor of $n$ if and only if there is an \(m\in\mathbb Z\) with \(dm=n\). In order to indicate that \(d\) is a divisor of \(n\) we write \(d\mid n\); otherwise we write \(d\nmid n\).
There are some related concepts, which shall be introduced here also:
1 Please note that multiples of \(d=0\) are undefined. Although \(0\cdot m=0\) is fulfilled for any \(m\), we cannot say that \(0\mid 0\), since \(0\) is not a divisor of any number (because, by definition $\ref{E18333}$, the case $d=0$ is excluded). We want a definition of multiples that is complementary to the definition of divisors.
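The definition translates directly into a decision procedure for concrete integers; the following sketch (Python, names mine) mirrors definition $(1)$, including the explicit exclusion of $d=0$:

```python
def divides(d, n):
    """Return True iff d | n: there is an integer m with d*m == n, and d != 0."""
    if d == 0:
        return False  # 0 is a divisor of no number, not even of 0
    return n % d == 0

assert divides(3, 12)       # 3 | 12 since 3 * 4 == 12
assert not divides(5, 7)    # no integer m with 5 * m == 7
assert not divides(0, 0)    # excluded by definition, despite 0 * m == 0
assert divides(-2, 6)       # d may be negative: (-2) * (-3) == 6
```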
created: 2014-06-21 15:13:20 | modified: 2019-04-17 07:19:55 | by: bookofproofs | references: [696]
[696]
Kramer Jürg, von Pippich, Anna-Maria: “Von den natürlichen Zahlen zu den Quaternionen”, Springer-Spektrum, 2013 |
Hello one and all! Is anyone here familiar with planar prolate spheroidal coordinates? I am reading a book on dynamics and the author states: If we introduce planar prolate spheroidal coordinates $(R, \sigma)$ based on the distance parameter $b$, then, in terms of the Cartesian coordinates $(x, z)$ and also of the plane polars $(r, \theta)$, we have the defining relations $$r\sin\theta = x = \pm\sqrt{R^2-b^2}\,\sin\sigma, \qquad r\cos\theta = z = R\cos\sigma.$$ I am having a tough time visualising what this is?
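For what it's worth, assuming the defining relations are $x=\pm\sqrt{R^2-b^2}\,\sin\sigma$, $z=R\cos\sigma$ (the square root appears to have been lost in transcription), curves of constant $R>b$ are ellipses with foci at $z=\pm b$, which a quick numerical check confirms:

```python
import math

def planar_prolate(R, b, sigma):
    """Map (R, sigma) to Cartesian (x, z), taking the + branch for x."""
    x = math.sqrt(R**2 - b**2) * math.sin(sigma)
    z = R * math.cos(sigma)
    return x, z

R, b = 3.0, 1.0                      # sample values with R > b
for sigma in (0.3, 1.0, 2.2):
    x, z = planar_prolate(R, b, sigma)
    # (x, z) lies on the ellipse x^2/(R^2 - b^2) + z^2/R^2 = 1 ...
    assert abs(x**2 / (R**2 - b**2) + z**2 / R**2 - 1.0) < 1e-12
    # ... whose foci sit at z = +/- b: the two focal distances sum to 2R
    assert abs(math.hypot(x, z - b) + math.hypot(x, z + b) - 2 * R) < 1e-12
```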
Consider the function $f(z) = \sin\left(\frac{1}{\cos(1/z)}\right)$. The point $z = 0$ is: a removable singularity / a pole / an essential singularity / a non-isolated singularity. Since $\cos(1/z) = 1 - \frac{1}{2z^2} + \frac{1}{4!z^4} - \cdots = (1-y)$, where $y = \frac{1}{2z^2} - \frac{1}{4!z^4} + \cdots$ ...
I am having trouble understanding non-isolated singularity points. An isolated singularity point I do kind of understand, it is when: a point $z_0$ is said to be isolated if $z_0$ is a singular point and has a neighborhood throughout which $f$ is analytic except at $z_0$. For example, why would $...
No worries. There's currently some kind of technical problem affecting the Stack Exchange chat network. It's been pretty flaky for several hours. Hopefully, it will be back to normal in the next hour or two, when business hours commence on the east coast of the USA...
The absolute value of a complex number $z=x+iy$ is defined as $\sqrt{x^2+y^2}$. Hence, when evaluating the absolute value of $x+i$ I get the number $\sqrt{x^2 +1}$; but the answer to the problem says it's actually just $x^2 +1$. Why?
mmh, I probably should ask this on the forum. The full problem asks me to show that we can choose $\log(x+i)$ to be $$\log(x+i)=\log(1+x^2)+i\left(\frac{\pi}{2} - \arctan x\right)$$ So I'm trying to find the polar coordinates (absolute value and an argument $\theta$) of $x+i$ to then apply the $\log$ function on it
Let $X$ be any nonempty set and $\sim$ be any equivalence relation on $X$. Then are the following true:
(1) If $x=y$ then $x\sim y$.
(2) If $x=y$ then $y\sim x$.
(3) If $x=y$ and $y=z$ then $x\sim z$.
Basically, I think that all the three properties follows if we can prove (1) because if $x=y$ then since $y=x$, by (1) we would have $y\sim x$ proving (2). (3) will follow similarly.
This question arised from an attempt to characterize equality on a set $X$ as the intersection of all equivalence relations on $X$.
I don't know whether this question is too much trivial. But I have yet not seen any formal proof of the following statement : "Let $X$ be any nonempty set and $∼$ be any equivalence relation on $X$. If $x=y$ then $x\sim y$."
That is definitely a new person, not going to classify as RHV yet as other users have already put the situation under control it seems...
(comment on many many posts above)
In other news:
> C -2.5353672500000002 -1.9143250000000003 -0.5807385400000000
> C -3.4331741299999998 -1.3244286800000000 -1.4594762299999999
> C -3.6485676800000002 0.0734728100000000 -1.4738058999999999
> C -2.9689624299999999 0.9078326800000001 -0.5942069900000000
> C -2.0858929200000000 0.3286240400000000 0.3378783500000000
> C -1.8445799400000003 -1.0963522200000000 0.3417561400000000
> C -0.8438543100000000 -1.3752198200000001 1.3561451400000000
> C -0.5670178500000000 -0.1418068400000000 2.0628359299999999
probably the weirdest bunch of data I've ever seen, with so many 000000s and 999999s
But I think that to prove the implication for transitivity, the use of the inference rule MP seems to be necessary. But that would mean that for logics in which MP fails we wouldn't be able to prove the result. Also, in set theories without the Axiom of Extensionality the desired result will not hold. Am I right @AlessandroCodenotti?
@AlessandroCodenotti A precise formulation would help in this case because I am trying to understand whether a proof of the statement which I mentioned at the outset depends really on the equality axioms or the FOL axioms (without equality axioms).
This would allow in some cases to define an "equality like" relation for set theories for which we don't have the Axiom of Extensionality.
Can someone give an intuitive explanation why $\mathcal{O}(x^2)-\mathcal{O}(x^2)=\mathcal{O}(x^2)$. The context is Taylor polynomials, so when $x\to 0$. I've seen a proof of this, but intuitively I don't understand it.
@schn: The minus is irrelevant (for example, the thing you are subtracting could be negative). When you add two things that are of the order of $x^2$, of course the sum is the same (or possibly smaller). For example, $3x^2-x^2=2x^2$. You could have $x^2+(x^3-x^2)=x^3$, which is still $\mathscr O(x^2)$.
@GFauxPas: You only know $|f(x)|\le K_1 x^2$ and $|g(x)|\le K_2 x^2$, so that won't be a valid proof, of course.
Let $f(z)=z^{n}+a_{n-1}z^{n-1}+\cdots+a_{0}$ be a complex polynomial such that $|f(z)|\leq 1$ for $|z|\leq 1$. I have to prove that $f(z)=z^{n}$. I tried it as: As $|f(z)|\leq 1$ for $|z|\leq 1$, we must have the coefficients $a_{0},a_{1},\ldots,a_{n-1}$ to be zero, because by triangul...
@GFauxPas @TedShifrin Thanks for the replies. Now, why is it we're only interested when $x\to 0$? When we do a Taylor approximation centered at $x=0$, aren't we interested in all the values of our approximation, even those not near 0?
Indeed, one thing a lot of texts don't emphasize is this: if $P$ is a polynomial of degree $\le n$ and $f(x)-P(x)=\mathscr O(x^{n+1})$, then $P$ is the (unique) Taylor polynomial of degree $n$ of $f$ at $0$. |
Suppose $\lim\limits_{n \rightarrow \infty} a_n =\lim\limits_{n \rightarrow \infty} b_n = c$ and $a_n \le c_n \le b_n$ for all $n$. Prove that $\lim\limits_{n \rightarrow \infty} c_n = c$.
How would I do this?
Concise answer
By the hypothesis we have $$|c_n-c|\le \max(|a_n-c|,|b_n-c|).$$ Now let $\epsilon>0$; by the definition of the limits of $(a_n)$ and $(b_n)$ there is $N=\max(N_a,N_b)$ such that if $n\ge N$ then $$|c_n-c|\le \max(|a_n-c|,|b_n-c|)\le \epsilon.$$
Hints:
For all $\;\epsilon > 0\;\exists\,N\in\Bbb N\;s.t.\;\;n>N\implies\begin{cases}c-\epsilon<a_n<c+\epsilon\\{}\\c-\epsilon<b_n<c+\epsilon\end{cases}\;$
and from here that for $\;n>N\;$:
$$c-\epsilon<a_n<c_n<b_n<c+\epsilon\;\implies\ldots$$
Given $\varepsilon>0$. There is $N\in \mathbb{N}$ such that $d(a_n,c)\le \varepsilon$ and $d(b_n,c)\le \varepsilon$ at the same time for all $n\ge N$ (why?). Then $c-\varepsilon\le a_n\le c_n\le b_n\le c+ \varepsilon$, so $|c_n-c|\le \varepsilon$. Since $\varepsilon$ was arbitrary it holds for each $\varepsilon>0$. Thus $\lim_{n\rightarrow \infty}c_n = c$ as desired. |
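A quick numerical illustration of the argument (Python; the particular sequences are my own toy example): with $a_n = c - 1/n$, $b_n = c + 1/n$ and any $c_n$ squeezed between them, $|c_n - c|$ is bounded by $\max(|a_n-c|,|b_n-c|) = 1/n \to 0$:

```python
import math

c = 2.0
for n in range(1, 10_000, 57):
    a_n = c - 1.0 / n
    b_n = c + 1.0 / n
    c_n = c + math.sin(n) / n          # some sequence with a_n <= c_n <= b_n
    assert a_n <= c_n <= b_n
    # the key inequality: c_n's distance to c never exceeds the endpoints'
    assert abs(c_n - c) <= max(abs(a_n - c), abs(b_n - c)) + 1e-15
```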