The response that division by $0$ is undefined, as we have been taught, I find unintelligible and completely unacceptable. Take this into consideration: when we look at a fraction $\frac{n}{d}$, where $n$ is the numerator and $d$ is the denominator, and $d$ happens to be $0$, let's apply this concept to a combination of linear equations and some trigonometric functions. Before I begin with that, I will state clearly that some functions or equations that are normally used in different contexts mean exactly the same thing in all contexts. And I will show that the actual operations of division and fractions are nothing more than the slope of a line, or the tangent of some angle with regard to the horizontal, and that the slope of a vertical line, or $\tan(90°)$, is completely defined!

Equivalent Equations - The slope of a line, $m = \frac{y_2 - y_1}{x_2 - x_1}$, is equivalent to $\tan\theta$, where $\theta$ is the angle above the horizon when the angle is in standard position. The Pythagorean Theorem and the equation of the circle are the same exact thing: $A^2 + B^2 = C^2$ and $X^2 + Y^2 = C^2$. The $\cos\theta$ between two vectors is equivalent to $\frac{\vec A \cdot \vec B}{|\vec A||\vec B|}$.

Assertions - Both the $\sin$ and the $\cos$ functions have the same domain and range: the domains are the set $\mathbb R$, the ranges are $\left[-1,1\right]$, and they have a period of $2\pi$. They are continuous circular, or rotational, functions. $\tan\theta = \frac{\sin\theta}{\cos\theta}$. When plotting on the unit circle, where the radius has a value of $1$, the $(x,y)$ pair is defined as $(\cos\theta,\sin\theta)$, where the radius is above the horizontal $x$ axis in standard position.

Assessments - Let's consider 3 points $a, b, c$, where $a = (0,0)$ and $b = (1,0)$ are fixed, and point $c$ starts at point $b$'s location $(1,0)$. These 3 points form vectors or line segments between each other. Initially there are only 2 valid vectors, $\vec A$ and $\vec B$, where $\vec A = b - a$ and $\vec B = c - a$. $\vec C$ doesn't exist yet, or is $\vec 0$, since $\vec C$ is defined as $c - b$ and both points coincide. We could technically rotate in either direction, but we will rotate CCW, and we will make a few observations in the process. 1 - We will observe point $c$ as it rotates around the unit circle. 2 - We will observe vector $\vec C$ as point $c$ rotates around the circle. 3 - We will observe the area of the triangle that is generated from vectors $\vec A$, $\vec B$, and $\vec C$. 4 - We will observe the slope of $\vec B$, as this is the line that rotates. 5 - We will observe the angle between $\vec A$ and $\vec B$.

Intuitive Declarations - When point $c$ rotates to the position $(-1,0)$, the slope is $0$ and the area of the triangle is $0$, but the length of $\vec C$ is at its longest, which is $2$. Here you have nothing but horizontal translation, with $0$ incline, i.e. no change in height or elevation. When the angle is $45°$, or $\frac{\pi}{4}$ radians, the slope $\frac{\text{rise}}{\text{run}}$ and the $\tan\theta$, or $\frac{\sin\theta}{\cos\theta}$, are $1$: you have an equal amount of vertical translation and horizontal translation. The $\cos\theta$ represents the $x$ value on the unit circle, but also your horizontal displacement. The $\sin\theta$ represents the $y$ value on the unit circle, but also your vertical displacement. When the angle is $0°$, $180°$, or $360°$, or a multiple of these, the slope and the tangent are also $0$.
This means we have $0$ rise, and the $\sin$, or the $y$, has an output of $0$ for its range at that angle. When the angle is $45°$, both the $\cos$ and the $\sin$ have equivalent values, since both legs of the triangle here are equal, thus giving you both a slope and a tangent of $1$; if you graph both the sine and cosine functions, they will intersect at this point.

Considerations - Considering that both the sine and cosine are continuous circular functions, neither of them at any point in its range or domain becomes undefined or has any discontinuity.

Generalization - Imagine yourself walking down an alley between two skyscrapers, where the alley starts off level. You have $0$ slope, or no incline, but you are traveling N, E, W, or S; which one doesn't matter, because the ground you are walking on is a 2D plane, so you do have 2 degrees of dimension to travel upon. However, since you are in a tight alley in this demonstration, you are heading in only one arbitrary horizontal direction. Then the alley or road has a hill that you have to walk up; now you have slope, because you are rising in elevation. Then it levels off again, and your slope is back to $0$ at the new elevation. This makes sense, because two horizontal lines at different heights are parallel, so both their slopes will be the same. Finally the alley comes to an end, as there is another skyscraper in front of you; you cannot go left or right, and there is no turning back, but the building in front of you has a ladder, and you begin to climb it rung by rung. When you start to climb straight up, your angle is $90°$, which is perpendicular and orthogonal to the ground. This means that you no longer have any horizontal displacement, but you have incremental and continuous vertical translation. So in this case your elevation, or height, is forever increasing, until you reach the roof and start to walk across a horizontal or sloped plane.

The argument - In the case where the $\sin$ component of the tangent, or the $(y_2 - y_1)$ component, evaluates to $0$, you have no incline and the slope is $0$, because here $n = 0$ and $d = (x_2 - x_1)$, or $d = \cos\theta$. This means you have only horizontal translation, and it is valid because a numerator can be $0$. Let's reverse the case. This time the $\cos\theta$ component of the tangent, or the $(x_2 - x_1)$, is $0$, which simply means the opposite: we have no run, yet we do have continuous rise. Yet by the fallacies of what we were taught, this is undefined, because of division by $0$: here the $d$, the $(x_2 - x_1)$, or the $\cos\theta$ evaluates to $0$. I argue that these are valid outputs and acceptable inputs of fractions or division. Division by $0$ is completely defined.

Conclusion - If $\frac{0}{d}$ means $0$ slope, or no rise with infinite run, then the opposite must be valid as well, where $\frac{n}{0}$ means no run with infinite rise. The correct answer for division by $0$ would be $\infty$, since when the numerator is $0$ the slope is $0$, because there is no change in elevation. We cannot evaluate $\frac{n}{0}$ as $0$, because here we have no run but only rise, and slope is defined as the change in elevation; here the line is ever increasing vertically, without any horizontal displacement, and this makes complete sense. In trigonometry we do know that the tangent's graph has vertical asymptotes at odd multiples of $\frac{\pi}{2}$, or $90°$. We are taught that the tangent is undefined there.
I think this is a wrong assessment, because those vertical asymptotes are vertical lines parallel to the vertical $y$ axis, and these are perpendicular and orthogonal to the horizontal $x$ axis. When a line has a slope $\frac{a}{b}$, a line that is perpendicular to it has slope $-\frac{b}{a}$. Let's try this with a slope of $0$, then find its perpendicular: $$\frac{0}{d} \implies 0$$ slope, or a horizontal line; therefore $$\frac{-d}{0} \implies \infty$$ vertical slope, or a vertical line. Let's apply the above with the trig functions again, starting with $0$ slope: $$\frac{\sin\theta}{\cos\theta} = \frac{0}{\cos\theta} \implies \tan\theta = 0$$ therefore its perpendicular must be: $$-\frac{\cos\theta}{\sin\theta} \implies -\cot\theta$$ Since we have no run in vertical slope, the $\cos\theta$ component must be $0$; this then suggests that: $$-\cot\theta = \frac{0}{-\sin\theta}$$ which is also the same as: $$\frac{0}{\sin\theta}$$ However, this does not evaluate to $0$ when regarding slope, because we do have a change in height, which is shown by the $\sin\theta$. The slope here is defined as a limit, $\infty$: the tangent has a vertical asymptote at $90°$, where the cotangent is $0$; the cotangent has a vertical asymptote at $180°$, where the tangent is $0$.

It is these associations and relationships of division, fractions, slopes, trig functions and reciprocals, together with the dot and even the cross products, that define how two vectors are perpendicular and orthogonal to each other when they are separated by $90°$ or $\frac{\pi}{2}$ radians. If you take just the $x$ and $y$ axes of a 2D Cartesian coordinate plane, we know that the $x$ axis has $0$ slope, because it is horizontal or level, and that the $y$ axis has vertical slope. Vertical slope is NOT undefined! If we were to rotate these two axes together by $1°$, there is defined slope. Slope is always defined, since one can always change their perspective on the system.

I think that people confuse what $0$ really is! $0$ is really not a number; it is a place holder, it also represents the empty or null set, it has no value. If you divide the empty set by any number, it returns the empty set back to you; is it not then conceivable to divide anything by the empty set? In this particular case and context, division by $0$ yields infinity, because the problem pertains to slope, or the change in height over distance. In other contexts, division by $0$ could mean that the result is $0$, no different from when $0$ is in the numerator. It could also yield D.N.E., meaning that the function it is being applied to just Does Not Exist at that location or in that context. There is 1 special case, and that is when both the numerator and denominator are $0$. This could yield $0$, $1$, and/or $\infty$. It satisfies the rule that $0$ divided by any number equals $0$. It also satisfies the identity that any number divided by itself is $1$. The infinity part comes from the concept that if you have $0$ run and $0$ rise, you are stationary: you are infinitely not moving in any direction, which is no different than $0$. To make this understanding a little clearer, ask yourself this: why does any number $n$ raised to the $0$ power always equal $1$? $n^0 = 1$.

For The Reader - It would be advisable to draw the unit circle and the points and vectors as described above in the Assessments, and to do several such drawings with the rotation around the unit circle at different positions.
Or you can visit the interactive graph web page that I made, which shows all the relationships between these 3 points, along with the tangent, the area of the triangle (and even the volume, if you increase the height factor), the coordinate pairs $(\cos\theta,\sin\theta)$ along the unit circle, etc. You will notice that when the angle is $0°$ the slope, area, and volume are all $0$. They are also $0$ when the angle is $180°$ and $360°$. When the angle is $90°$ or $270°$, of course, the slope is labeled as undefined, because that is what people have programmed it to be, because of the fact that we were taught that division by $0$ is undefined! However, the area and volume of the triangle are at their maximum values. I have it set up so that you can press the play button for the "t" value, which stands for $\theta$, because at this time the website has not incorporated the variable $\theta$ into the functions or expressions used to make graphs.
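For readers who want to see the limiting behaviour numerically, here is a minimal Python sketch (my own illustration, not part of the graph described above) that tabulates the slope $\frac{\sin\theta}{\cos\theta}$ as the angle approaches $90°$:

```python
import math

# Tabulate the slope sin(theta)/cos(theta) as theta approaches 90 degrees.
# The slope grows without bound, which is the limiting behaviour the essay
# attributes to a vertical line.
for deg in [45, 80, 89, 89.9, 89.999]:
    th = math.radians(deg)
    print(f"{deg:>8}  slope = {math.sin(th) / math.cos(th):.6g}")

# Caveat: math.tan(math.pi / 2) returns a huge *finite* float (about 1.6e16)
# because pi/2 is not exactly representable in binary floating point.
```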
I am stuck on Problem 53.4 from Munkres: Let $q : X \to Y$ and $r : Y \to Z$ be covering maps; let $p = r \circ q$. Show that if $r^{-1}(z)$ is finite for each $z \in Z$, then $p$ is a covering map.

This is my work so far: First, $p$ is both continuous and surjective, because the composition of continuous/surjective maps is again continuous/surjective (note that both $r$ and $q$ are continuous and surjective, since they are covering maps).

Let $z \in Z$. Since $r$ is a covering map, there exists an open neighborhood $U$ of $z$ that is evenly covered by $r$. For each $y \in r^{-1}(z)$, there exists an open neighborhood $V_{y}$ of $y$ that is evenly covered by $q$, since $q$ is a covering map. We now define
\begin{align*} U' := \bigcap_{y \in r^{-1}(z)} r(V_{y}) \end{align*}
We claim that $U'$ is both open and evenly covered by $p$, making $p$ a covering map (since for every element of $Z$, a corresponding $U'$ can be constructed). $U'$ is open in $Z$ because it is an intersection of finitely many open subsets $r(V_{y})$, as $r^{-1}(z)$ is finite; each $r(V_{y})$ is open in $Z$ because covering maps are open maps and each $V_{y}$ is open.

We now need to show that $U'$ is evenly covered by $p$, that is, that $p^{-1}(U')$ is a union of disjoint open sets in $X$, each of which is mapped homeomorphically onto $U'$ by $p$. This is where I am stuck: I cannot show that $p^{-1}(U')$ can be expressed as such a disjoint union. Is it true that my $U'$ is evenly covered by $p$?
Definition: Projection (Mapping Theory)/Second Projection

Definition

Let $S$ and $T$ be sets, and let $S \times T$ be the Cartesian product of $S$ and $T$. The second projection on $S \times T$ is the mapping $\pr_2: S \times T \to T$ defined by:
$\forall \tuple {x, y} \in S \times T: \map {\pr_2} {x, y} = y$

Also known as

This is sometimes referred to as the projection on the second co-ordinate. Some sources use a $0$-based system to number the elements of a Cartesian product. For a given ordered pair $x = \tuple {a, b}$, the notation $\paren x_n$ is also seen; hence $\paren x_2 = b$, which is interpreted to mean the same as $\map {\pr_2} {a, b} = b$. The notation $\map {\pi^2} {a, b} = b$ is also encountered. On $\mathsf{Pr} \infty \mathsf{fWiki}$, to avoid all such confusion, the notation $\map {\pr_2} {x, y} = y$ is used throughout.
The orthogonal group, consisting of all proper and improper rotations, is generated by reflections. Every proper rotation is the composition of two reflections, a special case of the Cartan–Dieudonné theorem. Yeah, it does seem unreasonable to expect a finite presentation. Let $(V, b)$ be an $n$-dimensional, non-degenerate symmetric bilinear space over a field with characteristic not equal to 2. Then every element of the orthogonal group $O(V, b)$ is a composition of at most $n$ reflections.

Why is the evolute of an involute of a curve $\Gamma$ the curve $\Gamma$ itself? Definition from wiki: The evolute of a curve is the locus of all its centres of curvature. That is to say that when the centre of curvature of each point on a curve is drawn, the resultant shape will be the evolute of th...

Player $A$ places $6$ bishops wherever he/she wants on the chessboard with an infinite number of rows and columns. Player $B$ places one knight wherever he/she wants. Then $A$ makes a move, then $B$, and so on... The goal of $A$ is to checkmate $B$, that is, to attack the knight of $B$ with a bishop in ...

Player $A$ chooses two queens and an arbitrary finite number of bishops on an $\infty \times \infty$ chessboard and places them wherever he/she wants. Then player $B$ chooses one knight and places him wherever he/she wants (but of course, the knight cannot be placed on the fields which are under attack ...

The invariant formula for the exterior derivative: why would someone come up with something like that? I mean, it looks really similar to the formula for the covariant derivative of a tensor along a vector field, but otherwise I don't see why it would be something natural to come up with. The only place I have used it is in deriving the Poisson bracket of two one-forms.

This means starting at a point $p$, flowing along $X$ for time $\sqrt{t}$, then along $Y$ for time $\sqrt{t}$, then backwards along $X$ for the same time, backwards along $Y$ for the same time, leaves you at a place different from $p$. And up to second order, flowing along $[X, Y]$ for time $t$ from $p$ will lead you to that place. Think of evaluating $\omega$ on the edges of the truncated square and doing a signed sum of the values. You'll get the value of $\omega$ on the two $X$ edges, whose difference (after taking a limit) is $Y\omega(X)$; the value of $\omega$ on the two $Y$ edges, whose difference (again after taking a limit) is $X \omega(Y)$; and on the truncation edge it's $\omega([X, Y])$. Carefully taking care of the signs, the total value is $X\omega(Y) - Y\omega(X) - \omega([X, Y])$. So the value of $d\omega$ on the Lie square spanned by $X$ and $Y$ is the signed sum of values of $\omega$ on the boundary of the Lie square spanned by $X$ and $Y$: an infinitesimal version of $\int_M d\omega = \int_{\partial M} \omega$. But I believe you can actually write down a proof like this, by doing $\int_{I^2} d\omega = \int_{\partial I^2} \omega$, where $I$ is the little truncated square I described, and taking $\text{vol}(I) \to 0$. For the general case, $d\omega(X_1, \cdots, X_{n+1}) = \sum_i (-1)^{i+1} X_i \omega(X_1, \cdots, \hat{X_i}, \cdots, X_{n+1}) + \sum_{i < j} (-1)^{i+j} \omega([X_i, X_j], X_1, \cdots, \hat{X_i}, \cdots, \hat{X_j}, \cdots, X_{n+1})$ says the same thing, but on a big truncated Lie cube.

Let's do bullshit generality. Let $E$ be a vector bundle on $M$ and $\nabla$ a connection on $E$.
Remember that this means it's an $\Bbb R$-bilinear operator $\nabla : \Gamma(TM) \times \Gamma(E) \to \Gamma(E)$, denoted $(X, s) \mapsto \nabla_X s$, which is (a) $C^\infty(M)$-linear in the first factor and (b) $C^\infty(M)$-Leibniz in the second factor. Explicitly, (b) is $\nabla_X (fs) = X(f)s + f\nabla_X s$. You can verify that this in particular means it's pointwise defined in the first factor: to evaluate $\nabla_X s(p)$ you only need $X(p) \in T_p M$, not the full vector field. That makes sense, right? You can take the directional derivative of a function at a point in the direction of a single vector at that point.

Suppose that $G$ is a group acting freely on a tree $T$ via graph automorphisms; let $T'$ be the associated spanning tree. Call an edge $e = \{u,v\}$ in $T$ essential if $e$ doesn't belong to $T'$. Note: it is easy to prove that if $u \in T'$, then $v \notin T'$ (this follows from uniqueness of paths between vertices). Now, let $e = \{u,v\}$ be an essential edge with $u \in T'$. I am reading through a proof, and the author claims that there is a $g \in G$ such that $g \cdot v \in T'$? My thought was to try to show that $\operatorname{orb}(u) \neq \operatorname{orb}(v)$ and then use the fact that the spanning tree contains exactly one vertex from each orbit. But I can't seem to prove that $\operatorname{orb}(u) \neq \operatorname{orb}(v)$...

@Albas Right, more or less. So it defines an operator $d^\nabla : \Gamma(E) \to \Gamma(E \otimes T^*M)$, which takes a section $s$ of $E$ and spits out $d^\nabla(s)$, which is a section of $E \otimes T^*M$, which is the same as a bundle homomorphism $TM \to E$ ($V \otimes W^* \cong \text{Hom}(W, V)$ for vector spaces). So what is this homomorphism $d^\nabla(s) : TM \to E$? Just $d^\nabla(s)(X) = \nabla_X s$. This might be complicated to grok at first, but basically think of it as currying: making a bilinear map a linear one, like in linear algebra. You can replace $E \otimes T^*M$ by the Hom-bundle $\text{Hom}(TM, E)$ in your head if you want; nothing is lost. I'll use the latter notation consistently if that's what you're comfortable with. (Technical point: note how contracting $X$ in $\nabla_X s$ made a bundle homomorphism $TM \to E$, but contracting $s$ in $\nabla s$ only gave a map $\Gamma(E) \to \Gamma(\text{Hom}(TM, E))$ at the level of spaces of sections, not a bundle homomorphism $E \to \text{Hom}(TM, E)$. This is because $\nabla_X s$ is pointwise defined in $X$ and not in $s$.)

@Albas So this fella is called the exterior covariant derivative. Denote $\Omega^0(M; E) = \Gamma(E)$ ($0$-forms with values in $E$, aka functions on $M$ with values in $E$, aka sections of $E \to M$), and denote $\Omega^1(M; E) = \Gamma(\text{Hom}(TM, E))$ ($1$-forms with values in $E$, aka bundle homs $TM \to E$). Then this is the level-$0$ exterior derivative $d : \Omega^0(M; E) \to \Omega^1(M; E)$. That's what a connection is: a level-$0$ exterior derivative of a bundle-valued theory of differential forms. So what's $\Omega^k(M; E)$ for higher $k$? Define it as $\Omega^k(M; E) = \Gamma(\text{Hom}(TM^{\wedge k}, E))$, just the space of alternating multilinear bundle homomorphisms $TM \times \cdots \times TM \to E$. Note how if $E$ is the trivial bundle $M \times \Bbb R$ of rank 1, then $\Omega^k(M; E) = \Omega^k(M)$, the usual space of differential forms.
That's what taking the derivative of a section of $E$ with respect to a vector field on $M$ means: applying the connection. Alright, so to verify that $d^2 \neq 0$ indeed, let's just do the computation: $(d^2s)(X, Y) = d(ds)(X, Y) = \nabla_X ds(Y) - \nabla_Y ds(X) - ds([X, Y]) = \nabla_X \nabla_Y s - \nabla_Y \nabla_X s - \nabla_{[X, Y]} s$. Voila, the Riemann curvature tensor. Well, that's what it is called when $E = TM$, so that $s = Z$ is some vector field on $M$; in general this is the bundle curvature. Here's a point: what is $d\omega$ for $\omega \in \Omega^k(M; E)$ "really"? What would, for example, having $d\omega = 0$ mean? Well, the point is, $d : \Omega^k(M; E) \to \Omega^{k+1}(M; E)$ is a connection operator on $E$-valued $k$-forms on $M$. So $d\omega = 0$ would mean that the form $\omega$ is parallel with respect to the connection $\nabla$.

Let $V$ be a finite dimensional real vector space, $q$ a quadratic form on $V$ and $Cl(V,q)$ the associated Clifford algebra, with the $\Bbb Z/2\Bbb Z$-grading $Cl(V,q)=Cl(V,q)^0\oplus Cl(V,q)^1$. We define $P(V,q)$ as the group of elements of $Cl(V,q)$ with $q(v)\neq 0$ (under the identification $V\hookrightarrow Cl(V,q)$) and $\mathrm{Pin}(V)$ as the subgroup of $P(V,q)$ with $q(v)=\pm 1$. We define $\mathrm{Spin}(V)$ as $\mathrm{Pin}(V)\cap Cl(V,q)^0$. Is $\mathrm{Spin}(V)$ the set of elements with $q(v)=1$?

Torsion only makes sense on the tangent bundle, so take $E = TM$ from the start. Consider the identity bundle homomorphism $TM \to TM$... you can think of this as an element of $\Omega^1(M; TM)$. This is called the "soldering form"; it comes tautologically when you work with the tangent bundle. You'll also see this thing appearing in symplectic geometry, where I think they call it the tautological 1-form. (The cotangent bundle is naturally a symplectic manifold.) Yeah. So let's give this guy a name, $\theta \in \Omega^1(M; TM)$. Its exterior covariant derivative $d\theta$ is a $TM$-valued $2$-form on $M$, explicitly $d\theta(X, Y) = \nabla_X \theta(Y) - \nabla_Y \theta(X) - \theta([X, Y])$. But $\theta$ is the identity operator, so this is $\nabla_X Y - \nabla_Y X - [X, Y]$. The torsion tensor!!

So I was reading about this thing called the Poisson bracket. With the Poisson bracket you can give the space of all smooth functions on a symplectic manifold a Lie algebra structure. And then you can show that a symplectomorphism must also preserve the Poisson structure. I would like to calculate the Poisson Lie algebra for something like $S^2$. Something cool might pop up.

If someone has the time to quickly check my result, I would appreciate it. Let $X_{1},\ldots,X_{n} \sim \Gamma(2,\,\frac{2}{\lambda})$. Is $\mathbb{E}\left[\frac{\left(\frac{X_{1}+\cdots+X_{n}}{n}\right)^2}{2}\right] = \frac{1}{n^2\lambda^2}+\frac{2}{\lambda^2}$?

Uh, apparently there are metrizable Baire spaces $X$ such that $X^2$ not only is not Baire, but has a countable family $D_\alpha$ of dense open sets such that $\bigcap_{\alpha<\omega}D_\alpha$ is empty.

@Ultradark I don't know what you mean, but you seem down in the dumps champ. Remember, girls are not as significant as you might think; design an attachment for a cordless drill and a fleshlight that oscillates perpendicular to the drill's rotation and you're done. Even better than the natural method.

I am trying to show that if $d$ divides $24$, then $S_4$ has a subgroup of order $d$. The only proof I could come up with is a brute force proof. It actually wasn't too bad.
E.g., orders $2$, $3$, and $4$ are easy (just take the subgroup generated by a $2$-cycle, $3$-cycle, and $4$-cycle, respectively); $d=8$ is Sylow's theorem; for $d=12$, take $A_4$; for $d=24$, take $S_4$. The only case that presented a semblance of trouble was $d=6$, but the group generated by $(1,2)$ and $(1,2,3)$ does the job. My only quibble with this solution is that it doesn't seem very elegant. Is there a better way? In fact, the action of $S_4$ on its three 2-Sylows by conjugation gives a surjective homomorphism $S_4 \to S_3$ whose kernel is a $V_4$. This $V_4$ can be thought of as the sub-symmetries of the cube which act on the three pairs of faces {{top, bottom}, {right, left}, {front, back}}. Clearly these are 180 degree rotations along the $x$, $y$ and $z$ axes. But composing the 180 rotation along the $x$ with a 180 rotation along the $y$ gives you a 180 rotation along the $z$, indicative of the $ab = c$ relation in Klein's 4-group. Everything about $S_4$ is encoded in the cube, in a way. The same can be said of $A_5$ and the dodecahedron, say.
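Since the thread is about finding subgroups of every order dividing $24$, here is a quick brute-force check in Python (a sketch I added; it relies on the fact, true for $S_4$, that every subgroup is generated by at most two elements):

```python
from itertools import permutations, product

def compose(p, q):
    # (p o q)(i) = p[q[i]]; permutations are stored as tuples
    return tuple(p[i] for i in q)

def generated(gens):
    # closure of a generating set under composition (enough in a finite group)
    elems = {tuple(range(4))} | set(gens)
    while True:
        new = {compose(a, b) for a in elems for b in elems} - elems
        if not new:
            return elems
        elems |= new

S4 = list(permutations(range(4)))
orders = {len(generated({a, b})) for a, b in product(S4, repeat=2)}
print(sorted(orders))  # [1, 2, 3, 4, 6, 8, 12, 24] -- every divisor of 24
```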
Orlp gives a solution using $O(n)$ words of space, which are $O(n\log n)$ bits of space (assuming for simplicity that $n=m$). Conversely, it is easy to show that $\Omega(n)$ bits of space are needed by reducing set disjointness to your problem.

Suppose that Alice holds a binary vector $x_1,\ldots,x_n$ and Bob holds a binary vector $y_1,\ldots,y_n$, and they want to know whether there exists an index $i$ such that $x_i=y_i=1$. They run your algorithm for the $2\times(2n-1)$ matrix whose rows are $x_1,0,x_2,0,\ldots,0,x_n$ and $y_1,0,y_2,0,\ldots,0,y_n$. After the first row is read, Alice sends Bob $\sum_i x_i$ as well as the memory contents, so that Bob can complete the algorithm and compare $\sum_i (x_i+y_i)$ to the number of connected components. If the two numbers match, the two vectors are disjoint (there is no index $i$), and vice versa. Since any protocol for set disjointness needs $\Omega(n)$ bits (even if it can err with a small constant probability), we deduce an $\Omega(n)$ lower bound, which holds even for randomized protocols which are allowed to err with some small constant probability.

We can improve on Orlp's solution by using noncrossing partitions. We read the matrix row by row. For each row, we remember which 1s are connected via paths going through preceding rows. The corresponding partition is noncrossing, and so can be encoded using $O(n)$ bits (since noncrossing partitions are counted by Catalan numbers, whose growth rate is exponential rather than factorial). When reading the following row, we maintain this representation, and increase a counter whenever all ends of some part are not connected to the current row (the counter takes an additional $O(\log n)$ bits). As in Orlp's solution, we add a final dummy row of zeroes to finish processing the matrix. This solution uses $O(n)$ bits, which is asymptotically optimal given our lower bound.
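To make the row-by-row idea concrete, here is a small Python sketch of the word-based streaming algorithm (Orlp's $O(n)$-words version, with 4-connectivity assumed; it does not implement the $O(n)$-bit noncrossing-partition encoding):

```python
def count_components(rows):
    # Count 4-connected components of a 0/1 matrix, reading it row by row
    # and remembering only the previous row's component labels.
    parent = {}
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x
    def union(a, b):
        parent[find(a)] = find(b)

    prev, next_id, finished = [], 0, 0
    for row in rows:
        cur, run = [None] * len(row), None
        for j, v in enumerate(row):
            if v:
                if run is None:              # a new horizontal run of 1s
                    run, next_id = next_id, next_id + 1
                    parent[run] = run
                cur[j] = run
                if j < len(prev) and prev[j] is not None:
                    union(run, prev[j])      # merge with the component above
            else:
                run = None
        # components visible in the previous row that this row never touches
        # are complete, so count them now
        finished += len({find(l) for l in prev if l is not None}
                        - {find(l) for l in cur if l is not None})
        prev = cur
    # the "dummy row of zeroes": whatever is still visible is complete
    finished += len({find(l) for l in prev if l is not None})
    return finished

assert count_components([[1, 0, 1], [1, 1, 1]]) == 1
assert count_components([[1, 0], [0, 1]]) == 2
```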
$\newcommand{\dd}{\mathrm{d}}$You basically have two ODEs to solve:
\begin{align}\frac{\dd v^\mu}{\dd t}&=\frac{1}{m}F(x^\mu,v^\mu) \tag{1} \\ \frac{\dd x^\mu}{\dd t}&=v^\mu\tag{2}\end{align}
which is pretty much the case for most forces in Newtonian mechanics. In order to solve this numerically, you want to discretize space & time. With such a system as (1) & (2), we really only need to worry about slicing up time. One of the more stable routines is not actually RK4, but a variation of leapfrog integration called velocity Verlet. This turns (1) & (2) into a multi-step process:
\begin{align}a_1^\mu &= F\left(x^\mu_i\right)/m \\ x_{i+1}^\mu &= x^\mu_i + \left(v_i^\mu + \frac{1}{2}\,a_1^\mu\,\Delta t\right)\Delta t \\ a_2^\mu & =F\left(x^\mu_{i+1}\right)/m \\ v_{i+1}^\mu &= v^\mu_i + \frac{1}{2}\left(a_1^\mu+a_2^\mu\right)\Delta t\end{align}
which is actually kinda easy to implement numerically: it's literally just calling the function for the force and then updating a couple of arrays ($x$, $y$, $v_x$, $v_y$).

Where your problem differs is that $a^\mu=a^\mu\left(x^\mu,\,v^\mu\right)$, which makes computing the second acceleration a bit tricky, since $a_2$ depends on $v_{i+1}^\mu$ and vice versa. This answer at GameDev (definitely worth the read for some numerics aspects of the problem) suggests that you can use the following algorithm:
\begin{align}a_1^\mu &= F\left(x^\mu_i,\,v_i^\mu\right)/m \\ x_{i+1}^\mu &= x^\mu_i + \left(v_i^\mu + \frac{1}{2}\,a_1^\mu\,\Delta t\right)\Delta t \\ v_{i+1}^\mu &= v_i^\mu + \frac{1}{2}\,a_1^\mu\,\Delta t \\ a_2^\mu & =F\left(x^\mu_{i+1},\,v_{i+1}^\mu\right)/m \\ v_{i+1}^\mu &= v^\mu_i + \frac{1}{2}\left(a_1^\mu+a_2^\mu\right)\Delta t\end{align}
though the author of that post states,

It's not quite as accurate as fourth-order Runge-Kutta (as one would expect from a second-order method), but it's much better than Euler or naïve velocity Verlet without the intermediate velocity estimate, and it still retains the symplectic property of normal velocity Verlet for conservative, non-velocity-dependent forces.

Since this is projectile motion, $x=y=0$ is probably a natural choice for initial conditions, with $v_y=v_0\sin(\theta)$ and $v_x=v_0\cos(\theta)$ as is normal.
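As a concrete illustration, here is a short Python sketch of that last algorithm for a projectile with a velocity-dependent force (the quadratic-drag force law and all parameter values are my own assumptions for the example):

```python
import numpy as np

def simulate(v0=30.0, theta=np.pi / 4, dt=1e-3, m=1.0, g=9.81, k=0.1):
    # Projectile with gravity plus an assumed quadratic drag F_drag = -k |v| v.
    def accel(x, v):
        return np.array([0.0, -g]) - (k / m) * np.linalg.norm(v) * v

    x = np.zeros(2)
    v = v0 * np.array([np.cos(theta), np.sin(theta)])
    traj = [x.copy()]
    while x[1] >= 0.0:
        a1 = accel(x, v)
        x = x + (v + 0.5 * a1 * dt) * dt       # position update
        v_half = v + 0.5 * a1 * dt             # intermediate velocity estimate
        a2 = accel(x, v_half)                  # second acceleration uses v_half
        v = v + 0.5 * (a1 + a2) * dt           # combined velocity update
        traj.append(x.copy())
    return np.array(traj)

path = simulate()
print(f"range ~ {path[-1, 0]:.2f} m")
```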
Disclaimer: I do particle physics / cosmology, so this is definitely outside my field; apply grains of salt to this answer appropriately.

I think Reference [29] (Lin et al, arXiv reference: 1008.4864) honestly does a better job of explaining what is going on (which makes sense; the impression I get is that 1008.4864 is a foundational paper in this subfield). The gist, as I understand it from Lin et al, is that the Hamiltonian for the system they are interested in--neutral atoms in a BEC state coupled to two intersecting lasers--can be written like:
\begin{equation}H_{BEC} = (\vec{p} - \vec{p}_{min})^2,\end{equation}
where $\vec{p}_{min}$ is a quantity that can be controlled by the experimenters (adjusting the lasers). Lin et al note that this is formally similar to the Hamiltonian for a charged particle moving in a background electromagnetic field,
\begin{equation}H_{charged} = (\vec{p} - q \vec{A})^2 + q \phi,\end{equation}
where $\vec{A}$ and $\phi$ are the vector and scalar potentials respectively. Of course, $H_{charged}$ is gauge invariant. To implement this analogy, they introduce an analogue vector potential $\vec{A^*}$ and scalar potential $\phi^*$ (they don't really introduce $\phi^*$, but let's run with it for now). Then Lin et al identify
\begin{equation}\vec{p}_{min} = q^* \vec{A^*},\end{equation}
and also work in a gauge where
\begin{equation}\phi^* = 0.\end{equation}
Then in that gauge, the analogue electric field is given by $\vec{E^*} = -\dot{\vec{A^*}}$. Thus, Lin et al point out that there are effects from setting up a time-dependent $\vec{A^*}$ (i.e. a time-dependent $\vec{p}_{min}$). From the perspective of the analogue gauge theory, this is due to the fact that the analogue electric field $\vec{E}^*$ is non-zero. The analogue electric field is a gauge invariant, observable quantity.

The passage you cite (from 1503.08243, Kennedy et al) suggests that the effect they measure comes from a time-dependent $\vec{A^*}$. Again this would lead to a non-zero $\vec{E^*}$. Of course, from the perspective of the analogue gauge theory, they are free to perform a gauge transformation, and they must get the same answer, because physical observables must be gauge invariant (this point is ironclad--if the rest of this answer is wrong, this one point can't be wrong unless the analogy to gauge fields completely breaks down). However, a gauge transformation will necessarily turn on $\phi^*$; in other words, $\phi^*$ will be nonzero in any other gauge. This requires one to change the original Hamiltonian to take this into account. What will still be true is that
\begin{equation}H_{BEC}=(\vec{p}-\vec{p}_{min})^2 = (\vec{p} - q^*\vec{A^*})^2 + q^* \phi^*\end{equation}
so obviously in this new gauge we can no longer identify $q^*\vec{A^*} = \vec{p}_{min}$. I think this is really what Kennedy et al are getting at: this relationship between $\vec{A^*}$ and $\vec{p}_{min}$ is not gauge invariant. When the new Hamiltonian is used correctly, the final answer will be the same in any gauge. However, I think actually showing this works would be overkill--the bottom line, I think, is that $\vec{E^*}$ is non-zero, so everyone in the end is making measurements of a gauge invariant quantity.

Update 7/4: So I had a chance to look at this more. In the end I just have to disagree with the quote you cited--the observables do not depend on the gauge. However, the gauge they choose is particularly nice, and finding a manifestly gauge invariant formulation of what they are doing might not be worth it.
The bottom line is that once you gauge fix, every combination of operators you write down is gauge invariant (since there is no gauge freedom left), and therefore there is guaranteed to be a gauge invariant combination of operators that reduces to the combination you wrote down in the gauge you picked. In other words, one completely valid way to describe a gauge invariant quantity is to say what it looks like in a well defined gauge. What I think is going on is that the observable Kennedy et al are measuring (column density in momentum space) is very natural in one gauge-fixed version of the problem, but finding the manifestly gauge invariant version would be unnecessarily complicated.

More details: Powell et al (1009.1389) is a really good paper that discusses the theoretical aspects of what is going on in setups like the one used in Kennedy et al. The underlying formalism you need is gauge theory on a lattice. The basic idea is that there are fermion fields living on each lattice site that create particles at that lattice site. In Kennedy et al these are referred to as $a_{m,n}$ (where $m,n$ are indices on a 2D lattice). There are also link fields, which are given by Wilson lines that connect the lattice site $(m,n)$ to the lattice site $(m',n')$:
\begin{equation}W_{(m,n),(m',n')} = \exp \left(i\int_{(x_m,y_n)}^{(x_{m'},y_{n'})} d\vec{x}\cdot\vec{A}\right) = e^{i\phi_{(m,n),(m',n')}}\end{equation}
The last equality works because this is a $U(1)$ gauge field, so the integrals are numbers. Both the $a_{m,n}$ operators and the phases $\phi_{(m,n),(m',n')}$ transform under gauge transformations. The gauge transformations occur on each site independently, so we can write the parameters of the gauge transformation as $\lambda_{m,n}$. The $a$ operators transform as
\begin{equation}a_{m,n} \rightarrow e^{i \lambda_{m,n}} a_{m,n}\end{equation}
and the Wilson lines transform as
\begin{equation}W_{(m,n),(m',n')} \rightarrow e^{i \lambda_{m,n}} W_{(m,n),(m',n')} e^{-i \lambda_{m',n'}}\end{equation}
or equivalently
\begin{equation}\phi_{(m,n),(m',n')} \rightarrow \phi_{(m,n),(m',n')} + \lambda_{m,n} - \lambda_{m',n'}\end{equation}
As a particle theorist / cosmologist, I am more familiar with the above formulas in their continuum form, where I would call $a$ by the name $\psi$, so $\psi(x)\rightarrow e^{i\lambda(x)}\psi(x)$ and $W(x,y) \rightarrow e^{i \lambda(x)} W(x,y) e^{-i\lambda(y)}$. One key observation is that the "hopping Hamiltonian" from Kennedy et al is gauge invariant:
\begin{equation}H = -t \sum_{m,n} a_{m+1,n}^\dagger e^{i\phi_{(m+1,n),(m,n)}} a_{m,n}\end{equation}
which can be seen using the transformation rules above. Incidentally, my particle-y instincts are to think of the above as a discretized version of the fermionic part of the QED lagrangian, $\bar{\psi} i \gamma^\mu D_\mu \psi$, where $D$ is the gauge covariant derivative. The fact that the hopping Hamiltonian is gauge invariant really means that nothing physical is going to end up depending on the choice of gauge. To the extent that this Hamiltonian describes the system Kennedy et al are measuring, nothing can end up depending on a choice of gauge, because the underlying Hamiltonian describing all of the dynamics does not. (This could be broken, for example, if (1) the approximation of the full dynamics of the system by this gauge theory breaks down in a way that breaks gauge invariance, or (2) the way that the experimental apparatus is coupled to the BEC breaks the gauge symmetry.
I am assuming both of those don't happen--if they do, that is more a fault on the experimental side than the theoretical side, i.e. it is a boring breaking of gauge invariance.) For example, consider the number of particles on each site:
\begin{equation}n_{m,n} = a_{m,n}^\dagger a_{m,n}\end{equation}
The number of particles is gauge invariant (as you can check from the rules, and as physically has to be the case, since the number of particles is observable).

Both Powell et al and Kennedy et al find it convenient to work in a gauge where the phases only depend on one lattice direction, so
\begin{equation}\phi_{(m,n),(m',n')} \rightarrow \phi_{m,m'}\end{equation}
This is a very nice gauge for what they want to do. In particular, translations in the $y$ direction commute with the Wilson line operators, but translations in $x$ do not. Their basic point is that any gauge fixing will force translation invariance to be broken somehow, so full translation invariance is not a real symmetry of the system.

Now, the measurements in Kennedy et al, as far as I can tell, are really done in momentum space (from a theoretical point of view, momentum space is nice for this problem because it diagonalizes the Hamiltonian). The momentum space operators are
\begin{equation}\tilde{a}_{p,q} = \sum_{m,n} e^{i 2\pi (p m+qn) / N} a_{m,n}\end{equation}
where $N$ is the number of lattice sites. Things now get complicated, because the momentum space operators don't have obviously nice transformation properties under gauge transformations (the gauge transformation of the $\tilde{a}_{p,q}$ will end up being some convolution of the gauge parameters with the real space operators $a_{m,n}$). This is related to the fact that the commutator of the Hamiltonian with the translation operator will be complicated in a general gauge. So, what I think is going on is that Kennedy et al construct a column density in momentum space, which I am guessing amounts to a probability you can compute in a given state as $\langle \tilde{a}^\dagger_{p,q}\tilde{a}_{p,q} \rangle$, where the $\tilde{a}_{p,q}$ are defined in the gauge that they describe. One frustrating thing is that I am not 100% sure what specific combination corresponds to the plots they make, so I can't be more explicit about what I'm saying, but conceptually it doesn't matter what precise combination of $\tilde{a}$'s they are plotting. This does not make the observable gauge dependent; however, it does mean that showing the gauge invariance is tricky. There is guaranteed to be some gauge invariant combination of Wilson lines and fermion operators that reduces to the combination that Kennedy et al plot, in the gauge that they pick. One avenue to discover the precise gauge invariant combination is by guessing--if you find one gauge invariant combination that reduces to their observable, that is the correct one. Another, more systematic, approach is to take their observable, written in the gauge that they chose, and perform an arbitrary gauge transformation. The result will likely be messy (since the momentum space operators don't have nice transformation properties), but you are guaranteed to be able to write the result in terms of manifestly gauge invariant objects if you do everything correctly (you will probably have to add in gauge transformed combinations of operators that were zero in the original gauge, and the net goal is to cancel out all of the dependence on the gauge parameter).
In other words, once you gauge fix, you can write down any arbitrarily complicated combination you like of the operators you have, and you are guaranteed to be talking about gauge invariant quantities, since there's no gauge freedom left. However, finding the manifestly gauge invariant form can be hard. In the problem that Kennedy et al are considering, there is such a natural choice of gauge that I think they basically want to argue there's no point in finding the gauge invariant form of what they are measuring--the main pragmatic reason to find a gauge invariant form would be if different groups were using different gauges and needed to compare their answers. The gauge invariant form could be interesting theoretically, to get more insight into the system. Based on what Powell says in section II, I think the gauge invariant formulation involves studying the properties of the projective symmetry group of the system. But that would definitely be beyond the scope of an experimental paper.
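As a toy sanity check of the gauge invariance of the hopping Hamiltonian discussed above, the following Python snippet treats the $a_{m,n}$ as classical complex amplitudes rather than operators (a simplification I'm making purely for illustration) and verifies that the hopping energy along the $x$-links is unchanged under a random site-dependent gauge transformation:

```python
import numpy as np

rng = np.random.default_rng(0)
N, t = 4, 1.0

# classical stand-ins for the a_{m,n}, plus link phases on the x-links
a = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
phi = rng.uniform(0, 2 * np.pi, size=(N - 1, N))   # phi_{(m+1,n),(m,n)}

def hopping_energy(a, phi):
    # H = -t * sum_{m,n} conj(a[m+1,n]) * exp(i phi) * a[m,n]
    return -t * np.sum(np.conj(a[1:, :]) * np.exp(1j * phi) * a[:-1, :])

# arbitrary site-dependent gauge parameters lambda_{m,n}
lam = rng.uniform(0, 2 * np.pi, size=(N, N))
a_g = np.exp(1j * lam) * a                  # a -> e^{i lambda} a
phi_g = phi + lam[1:, :] - lam[:-1, :]      # phi -> phi + lambda_{m+1,n} - lambda_{m,n}

print(np.allclose(hopping_energy(a, phi), hopping_energy(a_g, phi_g)))  # True
```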
>**Puzzle 186.** I said that \\(\infty\\) plays the same role in \\(\textbf{Cost}\\) that \\(\text{false}\\) does in \\(\textbf{Bool}\\). What exactly is this role?

Both play the role of showing that two objects are disconnected. In the case of \\(\textbf{Bool}\\), \\(\Phi(x,y) = \text{false}\\) means \\(x \not\leq y\\). In the case of \\(\textbf{Cost}\\), \\(\Phi(x,y) = \infty\\) means \\(x \overset{\infty}\rightarrow y\\), that is to say, you'll never have enough of the required resource to get from \\(x\\) to \\(y\\), regardless of how much you try to acquire. Both describe the situation \\(x \not\rightarrow y\\).
On a generalized critical point theory on gauge spaces and applications to elliptic problems on ${\mathbb R}^N$

DOI: http://dx.doi.org/10.12775/TMNA.2001.005

Abstract: In this paper, we introduce some aspects of a critical point theory for multivalued functions $\Phi : E \to {\mathbb R}^{\mathbb N} \cup \{\infty\}$ defined on a complete gauge space $E$ and with closed graph. The existence of a critical point is established in the presence of linking. Finally, we present applications of this theory to semilinear elliptic problems on ${\mathbb R}^N$.

Keywords: critical point theory; elliptic problems on ${\mathbb R}^N$
Can we exchange the permutation of a sponge construction?

Part of a sponge construction (like SHA-3 uses) is a fixed permutation $p$, which is clearly not one-way. Could we, theoretically, exchange the permutation $p$ with any other permutation? What basic characteristics should such a permutation have – or would, for example, a simple LFSR already represent a valid replacement, assuming it spans the whole bit-range ($r+c$)?

Disclaimer: I'm not a cryptographer. The security of the sponge construction relies on two parts: the size of the capacity, and the strength of the permutation used in the construction. This permutation is expected to meet at least the following requirements: provide strong diffusion (in Keccak this is provided by $\rho$ and $\pi$), and provide confusion ($\theta$ and $\chi$). In the case of Keccak, $\theta$ is a mainly column-oriented operation, which is why $\pi$ ensures that every bit of a column is evenly spread across the slice. This prevents the creation of patterns. $\chi$ is the main ingredient in Keccak-$f$: it is the only part that is not linear. Without it, Keccak would be super weak to cryptanalysis. [Figure: propagation of a difference through $\chi$] Lastly, Keccak-$f$ provides weak alignment (resistance to truncated differential cryptanalysis). The idea is to make sure that the differences are not constrained by a subdivision of the state (a byte for AES, or a group of 5 bits in the case of Keccak). However, due to the weak alignment of Keccak-$f$, finding the lower security bounds of the algorithm is harder. If another permutation provides such characteristics (the NORX permutation? I'll let Richie Frame answer that part. He loves NORX), then I guess it would be another decent choice. I haven't studied LFSRs. TL;DR: the candidate permutation must provide strong diffusion, confusion and, if possible, weak alignment.

The permutation should be as close to a random permutation as possible. This is essentially a block cipher with a fixed key. A random permutation with given width $b$ is a permutation drawn randomly and uniformly from the set of all $2^b!$ $b$-bit permutations. Unfortunately, realizing random permutations suffers from similar problems as realizing random oracles, the most important limitation being that all practical permutations have a short, efficiently computable description, which a random permutation does not. The Keccak paper says:

The Keccak-$f$ permutations should have no propagation properties significantly different from that of a random permutation.

The design philosophy underlying Keccak is the hermetic sponge strategy. This consists of using the sponge construction for having provable security against all generic attacks and calling a permutation (or transformation) that should not have structural properties with the exception of a compact description.

In fact, these results show that any attack against a sponge function implies that the permutation it uses can be distinguished from a typical randomly-chosen permutation. This naturally leads to the following design strategy, which we called the hermetic sponge strategy: adopting the sponge construction and building an underlying permutation $f$ that should not have any properties exploitable in attacks. We have called such properties structural distinguishers.
First, as an iterated permutation can be seen as a block cipher with a fixed and known key, it should be impossible to construct for the full-round versions distinguishers like the known-key distinguishers for reduced-round versions of DES and AES given in [39]. This includes differentials with high differential probability (DP), high input-output correlations, distinguishers based on integral cryptanalysis or deviations in algebraic expressions of the output in terms of the input. We call this kind of distinguishers structural, to set them apart from trivial distinguishers that are of no use in attacks such as checking that $f(a)=b$ for some known input-output couple $(a,b)$ or the observation that f has a compact description.
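For concreteness, here is a minimal sketch of the sponge construction itself in Python. Everything here is illustrative: the rate and capacity are toy values, the padding is a simplified 10*1-style scheme, and the placeholder permutation p (a one-byte rotation of the state) is deliberately terrible, since it provides none of the diffusion or confusion discussed above.

R, C = 8, 8            # rate and capacity in bytes (toy values)
B = R + C              # state width b = r + c

def p(state: bytes) -> bytes:
    # Placeholder permutation: rotate the state by one byte.
    # A valid permutation of the state space, but cryptographically
    # useless -- exactly the kind of p the answer warns against.
    return state[1:] + state[:1]

def sponge(message: bytes, out_len: int) -> bytes:
    state = bytes(B)
    # Simplified padding: 0x01, zeros, 0x80, up to a multiple of R.
    message += b"\x01" + bytes((-len(message) - 2) % R) + b"\x80"
    # Absorbing phase: XOR each r-byte block into the state, then permute.
    for i in range(0, len(message), R):
        block = message[i:i + R] + bytes(C)
        state = bytes(s ^ m for s, m in zip(state, block))
        state = p(state)
    # Squeezing phase: output r bytes at a time, permuting in between.
    out = b""
    while len(out) < out_len:
        out += state[:R]
        state = p(state)
    return out[:out_len]

print(sponge(b"hello", 16).hex())

Replacing p with a strong permutation (Keccak-f, the NORX permutation, and so on) is exactly the swap the question asks about; with the byte rotation above, the whole construction is linear in the message bits and trivially breakable by linear algebra.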
Radio astronomy When we view the Earth or the night sky, we can see plants, people, cars and cities, your coffee mug, the Moon, the stars. This is all we have ever seen with our two eyes, since we've lived our whole lives seeing things with visible light. But a bee can see something that we cannot see. They see light—invisible to us—bouncing off of flowers. We have built abiotic, mechanical "eyes" which can see a very bright glow covering the entire night sky. But when we look at the night sky using just our two eyes, all we see is darkness. Just like the bee's eyes seeing light invisible to us, the same is true of these mechanical eyes. We can see some things, but not everything. What's astonishing is that we've spent our whole lives seeing only the tiniest fraction of what these superior mechanical eyes can see. The advent of infrared and radio astronomy has given us a better pair of eyes so that now, like the bee, we can see more. We no longer see a smattering of a few thousand stars set against a black canvas when we view the night sky; instead, the smattering is exploding stars and quasars emanating immense jets of light from the centers of galaxies, and the canvas is a bright, colorful glow. Around the 1950s, astronomers saw the "radio light" emanating from a far-off object (which they called 3C 273) which, to them, looked like a star. At the time, we didn't have techniques like radio interferometry which would have allowed us to see every little detail of a baseball (where the baseball is the source emitting a little bit of radio waves) on the Moon. The best that we could do was to narrow down the location of this "star" (radio source) to some small region of space. Although we knew that the radio waves we were detecting must have been coming from this tiny patch of the sky, the problem was that there were many stars located in this patch of sky—any one of them could've been responsible for the radio waves we were getting. Astronomers got around this problem by using a clever trick. As the Moon was in transit across this tiny patch of sky, we could pinpoint very precisely the spot in that tiny patch where the Moon was blocking the radio waves. This gave us an even smaller patch of sky in which to look for the radio source. After doing this, astronomers pointed the most powerful visible-light telescope at the time (the 200-inch Hale telescope at the Palomar Observatory) towards this smaller patch. They spotted a single object within that patch which looked like a star—this must have been the source they were detecting the radio waves from. (When the Moon passed by all the other stars, the radio waves didn't get blocked out.) Spectra of 3C 273 The 200-inch telescope had a spectroscope incorporated into it which allowed a Caltech professor named Maarten Schmidt to use spectroscopy to analyze this "star's" (at least, what we initially thought was a star) spectrum. The spectrum that he recorded came as a surprise. The peak emission lines had the same signature as hydrogen, except there was one big difference: all of the lines (each one of them) were redshifted by 15.8%. This means that 3C 273 must have been moving away from us at an extraordinary speed due to Hubble expansion (that is, the expansion of space in our universe). Given the redshift (which we know is \(Δλ/λ=0.158\)), we can use Doppler's equation to determine the recessional velocity \(V\).
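As a concrete check (my numbers, using the non-relativistic Doppler approximation \(V \approx zc\) and a Hubble constant of roughly \(70\text{ km/s/Mpc}\)): \(V \approx 0.158 \times (3\times10^5\text{ km/s}) \approx 4.7\times10^4\text{ km/s}\), and then \(D = V/H_0 \approx 4.7\times10^4/70 \approx 680\text{ Mpc} \approx 2.2\) billion light-years, consistent with the figure quoted in the next paragraph.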
If we then plug that value of \(V\) into Hubble's law (which we discussed in this article in more detail), we find that the distance \(D\) that 3C 273 must be away from us is roughly 2 billion lightyears. What was so mysterious about 3C 273 was not its enormous redshift or how far away it was. When looked at through the telescope, 3C 273 looked point-like, similar to what a star looks like; it could not, for example, have been a galaxy, since galaxies looked like extended objects with some shape (as opposed to a point). What was so mysterious about this "star," 3C 273, was that it was hundreds of times brighter than entire galaxies which were roughly just as far away. How could a single lonely star outshine an entire galaxy composed of billions of stars? This was the big mystery. Clearly, 3C 273 was no star. And so, astronomers gave this peculiar object a new name. They called it a quasar. From the spectral lines we talked about earlier, we saw that the spectrum of this quasar included emission lines associated with hydrogen that were redshifted by an amount \(Δλ/λ=0.158\). If we plug this value into Doppler's equation, we can determine that the quasar is moving away from us at about 16% the speed of light (due to the expansion of space). If all of the atoms comprising the object were moving away from us, as a whole, all at 16% the speed of light, then we'd expect the quasar's spectrum to be discrete, with all of the emission lines identical to those of their corresponding element, except redshifted by 15.8%. But if you look at the spectrum in the graph in Figure 2, you'll see that it's continuous (not discrete): that is, we're detecting all wavelengths of light. This means that the atoms comprising 3C 273 must be moving relative to one another. The question is: how fast? We know that if all of these atoms had zero relative motion, their redshift would just be \(Δλ/λ=0.158\). Skipping over the nitty-gritty details, we can deduce from the graph the deviations (which we'll call \(Δλ_p\)) of these atoms' redshifts from 0.158. If we put \(Δλ_p\) into Doppler's equation, that gives us their relative velocities. What we find is that the gas and dust particles composing this object must be revolving around the center of 3C 273 at roughly \(6{,}000\text{ km/s}\). That is an astonishing speed! Calculating the size and mass of 3C 273 What could be causing them to move so fast? To answer that question, we'll have to start off by determining the rough diameter of the distribution of these particles. Why you need the diameter to answer that question is something we'll get to in the next paragraph. When we view this quasar through our telescopes, we can see it flickering. Its brightness dims, shines, dims, shines, and so on. Suppose that the diameter of 3C 273 were one light-year. As the object shined and got brighter, we would expect it to take one year to see the entire object brighten up: the light emitted by the part of the quasar closest to us would take \(\sim2×10^9\text{ years}\) to reach us, whereas the light from the part farthest away from us would take \(\sim(2×10^9+1)\text{ years}\). But the fact that, when we observe this quasar, it takes about a month to see it brighten means that (using this argument) the quasar must be roughly \(1\) light-month in diameter. That might sound very big, but actually it's comparatively small.
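The size estimate is just a light-travel-time argument: a source cannot coherently brighten faster than light can cross it, so with a one-month variability timescale the diameter satisfies \(d \lesssim cΔt = (3\times10^8\text{ m/s})(2.6\times10^6\text{ s}) \approx 7.8\times10^{14}\text{ m}\), which is about one light-month (the value used in the mass estimate below).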
The typical distances between stars in the Milky Way are several light-years—an unimaginably small fraction of the whole extent of the galaxy. 3C 273 sits in the center of a giant elliptical galaxy; it must occupy an incredibly small portion of this galaxy. We now return to our original question: what is causing the gas and dust to move so fast? On the scale of light-years and beyond, it is just one force—the force of gravity—which is responsible for objects having the motion that they have. The effect of gravity caused by a large mass determines the motion of an object (i.e. a gas particle) orbiting that mass. This problem is analogous to how we'd determine the mass of the Milky Way from the speeds of stars orbiting in it. In both cases, we use Newton's law of gravity and second law of motion. The overwhelming majority of the mass comprising 3C 273 must be concentrated at its center, and we can therefore treat the mass of 3C 273 as a single point mass \(M\). The net force \(\sum{\vec{F}}\) acting on a gaseous particle orbiting around this mass will be just the gravitational force: $$\sum{\vec{F}}=F_g.\tag{1}$$ Substituting for \(\sum{\vec{F}}\) and \(F_g\), we have $$ma_{\text{gas particle}}=G\frac{(M_{\text{3C 273}})(m_{\text{gas particle}})}{r^2}.\tag{2}$$ This gas particle will orbit the point mass \(M\) in roughly a circle and, therefore, \(a_{\text{gas particle}}=\frac{v^2_{\text{gas particle}}}{r}\). Substituting this into the equation above, we get $$m_{\text{gas particle}}\frac{v^2}{r}=G\frac{(M_{\text{3C 273}})(m_{\text{gas particle}})}{r^2}.\tag{3}$$ Let's cancel the mass \(m\) on both sides and rearrange the equation in terms of \(M_{\text{3C 273}}\) to get $$M_{\text{3C 273}}=\frac{rv_{\text{gas particle}}^2}{G}.\tag{4}$$ We know that these gas particles are traveling at speeds of about \(6{,}000\text{ km/s}=6×10^6\text{ m/s}\) and the radius \(r\) of their orbits is about \(1\text{ light-month}=7.88×10^{14}\text{ m}\). If we substitute these values into Equation (4), we find that the mass \(M_{\text{3C 273}}\) of the quasar is roughly \(2×10^8\) solar masses. What is a quasar? The only way that \(2×10^8\) solar masses could be crammed into a space of only \(1\) light-month in diameter (or less) is if this object were a black hole. The typical black hole in a galaxy was formed by all the mass of the inner core of a supergiant star collapsing into a single point of zero size. Such black holes—which are fairly ubiquitous in even just the Milky Way—are called stellar black holes. Their mass is identical to that of the stellar core before it collapsed. Since the largest supergiant stars have an upper size limit of roughly a few dozen solar masses, we do not expect the mass of a commonplace stellar black hole to be much more than a few dozen solar masses. This is expected even if we account for the fact that black holes can grow and get more massive by sucking up infalling matter, since interstellar space is mostly empty. The black hole associated with 3C 273 is an entirely different kind of black hole than a stellar black hole and is called a supermassive black hole. The quasar 3C 273 is therefore an object which consists of an immense disk of gas and dust with a supermassive black hole at the center of this disk. Earlier, we asked the question: how could it be that this object can outshine an entire galaxy? The short answer is: because the tidal forces exerted on the accretion disk by the black hole are so extraordinary.
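As a quick sanity check of Equation (4) and the numbers just quoted (the constants are standard values; this arithmetic is mine, not part of the original article):

# Sanity check of Equation (4): M = r * v^2 / G
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_sun = 1.989e30     # solar mass, kg
v = 6.0e6            # orbital speed of the gas, m/s (6,000 km/s)
r = 7.88e14          # orbital radius, m (about one light-month)

M = r * v**2 / G
print(M / M_sun)     # ~2.1e8, i.e. roughly 2e8 solar masses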
The sections of the quasar's accretion disk which are closer to the black hole move with greater tangential speeds than the outer sections. This causes portions of the accretion disk to rub against each other and exert frictional forces on each other. The frictional forces cause the gas to heat up to hundreds of millions of degrees and to glow brighter than entire galaxies. Near the center of the accretion disk, where the supermassive black hole is, gaseous particles are ejected perpendicular to the accretion disk and galactic plane from both sides (the "top" and "bottom") of the accretion disk, as illustrated in Figure 4. The gas is ejected so energetically that it coalesces into strands of superheated plasma which can extend up to millions of light-years away from the galaxy. It is, to me, somewhat remarkable that "thin" strands of gaseous plasma extend for millions of light-years through the mostly empty depths of intergalactic space. Quasars and cosmic evolution Quasars tell a story of cosmic evolution. And telescopes are time machines: for example, if we are looking at a star one billion light-years away, we are seeing it as it was a billion years ago. The Sloan Digital Sky Survey mapped out the locations of about 2,000,000 galaxies and 400,000 quasars, as shown in Figure 5. According to this survey and others, nearly all the quasars in the heavens are billions of light-years away. We find almost no quasars close to us. The observations indicate that there were many hundreds of thousands of quasars in the younger universe; but in the older universe (the one we're in) there are hardly any quasars at all. This means that the cosmos must have changed since then. Let's try to think about how this cosmic evolution unfolded. We know from the Sloan Digital Sky Survey that the centers of many young galaxies (400,000 of them) were active: which is just a fancy way of saying that their central supermassive black holes were still busy sucking up matter from their surrounding accretion disks and, also, still spitting out large streams of superheated plasma into the background of intergalactic space. But the fact that, despite the ubiquity of quasars we see in the younger universe, we see hardly any quasars at all in the older universe implies the following: all of the matter comprising the accretion disks of old, primordial quasars must have been swallowed by their supermassive black holes. Our Milky Way is just one example (among all the countless nearby old galaxies) of a galaxy which, despite having a central supermassive black hole, is no longer active. This article is licensed under a CC BY-NC-SA 4.0 license. References 1. "Hale Telescope, Palomar Observatory." Jet Propulsion Laboratory, California Institute of Technology, 14 April 2010, https://www.jpl.nasa.gov/spaceimages/details.php?id=PIA13033. 2. Tyson, Neil deGrasse, et al. "Quasars and Supermassive Black Holes." Welcome to the Universe: An Astrophysical Tour, Princeton University Press, 2017, pp. 241–253. 3. ESA/Hubble & NASA. "Best image of bright quasar 3C 273." Hubble Space Telescope, ESA/Hubble & NASA, 18 November 2013, http://www.spacetelescope.org/images/potw1346a/. 4. Futurism. "Quasar engines." Futurism, 21 November 2014, https://futurism.com/rotational-axes-quasars-aligned/.
From wikipedia: The discrete Fourier transform (DFT) converts a finite sequence of equally-spaced samples of a function into a same-length sequence of equally-spaced samples of the discrete-time Fourier transform (DTFT) also from wikipedia (DTFT): It produces a function of frequency that is a periodic summation of the continuous Fourier transform of the original continuous function To my understanding (and I'm probably wrong here) this should mean that sampling a function in the time domain and applying the DFT (FFT to be precise) should be equivalent to applying the FT and sampling in the frequency domain, however I have not been able to reproduce this. Here's an example: $$ f(x) = \exp(- \pi x^2)\\ \mathcal{F}[f(x)](\zeta) = \int_{-\infty}^{\infty} \exp(- \pi x^2) \exp(- 2\pi ix \zeta) dx = \exp(- \pi \zeta^2) $$ so using python:

import numpy as np
from numpy.fft import fftshift, fftfreq, fft

size = 201
upper = 8 * np.pi
lower = 0
time_domain = np.linspace(lower, upper, size)
function = np.exp(-np.pi * time_domain**2)
freq_domain = fftshift(fftfreq(size, d=(upper - lower) / size))
analytical = np.exp(-np.pi * freq_domain**2)
numerical = fftshift(fft(function))

I get this result: by plotting the amplitude of the numerical result against the analytical. I'm not really concerned about a difference in scale, but even after normalizing both curves I get this: I'd like to understand the conceptual issue here. EDIT: I think the FFT code part is right, at least with the function $$f(x) = \sin(2 \pi x) + \sin(8 \pi x)$$ I get peaks at $1$ and $4$. Also, if it is not equivalent, is there a way to better approximate the analytical version using the DFT?
Weird conjecture - Irrational Triangle About: title not given I wonder if it is true for some base changes of the numbers. Comments Mark de LA says https://www.wolframalpha.com/input/?i=%28natural+log+base%29%5E2+%2B+pi%5E2+%3D+phi+%5E1%2F2 ?? https://www.open.wolframcloud.com/env/36e6f6cc-f2d4-4648-9457-e50d878c923f#sidebar=compute Phi is assumed to be the golden ratio, represented by the Greek letter phi ($\varphi$ or $\phi$): an irrational number that is a solution to the quadratic equation $x^2-x-1=0$, with a value of $\varphi=\frac{1+\sqrt{5}}{2}=1.6180339887\ldots$ Can play around with it more in wolfram alpha later …. Mark de LA says https://www.wolframalpha.com/input/?i=%282.7%5E2%2B1.6%5E2%29%5E1%2F2 is close
We've been looking at feasibility relations, as our first example of enriched profunctors. Now let's look at another example. This combines many ideas we've discussed - but don't worry, I'll review them, and if you forget some definitions just click on the links to earlier lectures! Remember, \(\mathbf{Bool} = \lbrace \text{true}, \text{false} \rbrace \) is the preorder that we use to answer true-or-false questions, while \(\mathbf{Cost} = [0,\infty] \) is the preorder that we use to answer quantitative questions. In \(\textbf{Cost}\) we use \(\infty\) to mean it's impossible to get from here to there: it plays the same role that \(\text{false}\) does in \(\textbf{Bool}\). And remember, the ordering in \(\textbf{Cost}\) is the opposite of the usual order of numbers! This is good, because it means we have $$ \infty \le x \text{ for all } x \in \mathbf{Cost} $$ just as we have $$ \text{false} \le x \text{ for all } x \in \mathbf{Bool} .$$ Now, \(\mathbf{Bool}\) and \(\mathbf{Cost}\) are monoidal preorders, which are just what we've been using to define enriched categories! This lets us define \(\mathbf{Bool}\)-enriched categories, better known as preorders, and \(\mathbf{Cost}\)-enriched categories, better known as Lawvere metric spaces. We can draw preorders using graphs, like these: An edge from \(x\) to \(y\) means \(x \le y\), and we can derive other inequalities from these. Similarly, we can draw Lawvere metric spaces using \(\mathbf{Cost}\)-weighted graphs, like these: The distance from \(x\) to \(y\) is the length of the shortest directed path from \(x\) to \(y\), or \(\infty\) if no path exists. All this is old stuff; now we're thinking about enriched profunctors between enriched categories. A \(\mathbf{Bool}\)-enriched profunctor between \(\mathbf{Bool}\)-enriched categories is also called a feasibility relation between preorders, and we can draw one like this: What's a \(\mathbf{Cost}\)-enriched profunctor between \(\mathbf{Cost}\)-enriched categories? It should be no surprise that we can draw one like this: You can think of \(C\) and \(D\) as countries with toll roads between the different cities; then an enriched profunctor \(\Phi : C \nrightarrow D\) gives us the cost of getting from any city \(c \in C\) to any city \(d \in D\). This cost is \(\Phi(c,d) \in \mathbf{Cost}\). But to specify \(\Phi\), it's enough to specify the costs of flights from some cities in \(C\) to some cities in \(D\). That's why we just need to draw a few blue dashed edges labelled with costs. We can use this to work out the cost of going from any city \(c \in C\) to any city \(d \in D\). I hope you can guess how! Puzzle 182. What's \(\Phi(E,a)\)? Puzzle 183. What's \(\Phi(W,c)\)? Puzzle 184. What's \(\Phi(E,c)\)? Here's a much more challenging puzzle: Puzzle 185. In general, a \(\mathbf{Cost}\)-enriched profunctor \(\Phi : C \nrightarrow D\) is defined to be a \(\mathbf{Cost}\)-enriched functor $$ \Phi : C^{\text{op}} \times D \to \mathbf{Cost} $$ This is a function that assigns to any \(c \in C\) and \(d \in D\) a cost \(\Phi(c,d)\). However, for this to be a \(\mathbf{Cost}\)-enriched functor we need to make \(\mathbf{Cost}\) into a \(\mathbf{Cost}\)-enriched category! We do this by saying that \(\mathbf{Cost}(x,y)\) equals \( y - x\) if \(y \ge x \), and \(0\) otherwise. We must also make \(C^{\text{op}} \times D\) into a \(\mathbf{Cost}\)-enriched category, which I'll let you figure out how to do. Then \(\Phi\) must obey some rules to be a \(\mathbf{Cost}\)-enriched functor. What are these rules?
What do they mean concretely in terms of trips between cities? And here are some easier ones: Puzzle 186. Are the graphs we used above to describe the preorders \(A\) and \(B\) Hasse diagrams? Why or why not? Puzzle 187. I said that \(\infty\) plays the same role in \(\textbf{Cost}\) that \(\text{false}\) does in \(\textbf{Bool}\). What exactly is this role? By the way, people often say \(\mathcal{V}\)-category to mean \(\mathcal{V}\)-enriched category, and \(\mathcal{V}\)-functor to mean \(\mathcal{V}\)-enriched functor, and \(\mathcal{V}\)-profunctor to mean \(\mathcal{V}\)-enriched profunctor. This helps you talk faster and do more math per hour.
Have you ever wondered how certain expressions involving inverse trigonometric functions can be expressed just in x? Probably not, but it might actually sometimes be useful to know. Such an expression might for example turn up when you've done a trigonometric substitution in an integral. So I decided to show how to derive the trigonometric identities in question. Continue reading The first "oblig" (mandatory exercise) in the subject MAT1120 is now available. I am trying to do as much work as possible in Python instead of Matlab, but as always this creates some extra effort when the subject is oriented around the latter. Already in the first exercise there is a minor challenge, since the data file is not stored as a simple array, but as Matlab code. This means we need to rewrite this file as Python code or run it in Matlab and export it as data instead. As I am currently using a computer without Matlab installed and being too lazy to connect to a server with Matlab via remote desktop, I decided to do the latter. (I might add that I also wanted to see if I could do this without Matlab at all). First of all, I figured the data was stored in the following manner: Continue reading Carl Friedrich Gauss (1777 – 1855) by G. Biermann (1824 – 1908) Earlier today, enjoying a warm cup of coffee with a friend of mine, we got into a discussion about some math and ended up contemplating how to prove the sum formula for the n first natural numbers. The proof is rumored to first have been done by Gauss when he was only a child. Let $$S = 1 + 2 + \dots + n.$$ Just rewriting the terms backwards we get $$S = n + (n-1) + \dots + 1.$$ Now, adding these two expressions for the sum term by term, we obtain $$2S = (n+1) + (n+1) + \dots + (n+1),$$ and since we had n terms in the original sum we now have n (n+1)'s, so $$2S = n(n+1),$$ so $$S = \frac{n(n+1)}{2}.$$ Neat. On a computer this would severely reduce the number of operations that would have to be done to compute such a sum. Imagine having to sum up the first million natural numbers, and let's suppose the computer requires one operation each for adding, multiplying, dividing and so forth. Then implementing this formula would reduce the number of operations from 999,999 to 3. Speaking of LaTeX, if you, like us, want to write LaTeX math code in your blog, you should have a look at the LaTeX WP plugin. The output will become something like this: Or maybe like this: These are not as pretty as real output, but they sure are prettier than writing math the hard way: f(x) = \sum_{n=0}^\infty \frac{f^{(n)}(0)}{n!}x^n = ... I'm looking forward to serving you with more math-stuff in the future! This is going to be my first post in this blog, and along with some other more technical posts, I will dual-post it in my own blog over at dragly.org. If you are using Thunderbird for e-mail and want to send mathematical formulas to your contacts, you should consider the LaTeX It! plugin or the Equations plugin. The former requires you to have LaTeX and ImageMagick installed, while Equations uses an external server to generate your images. Continue reading
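Returning to Gauss's formula from the coffee-break post above, a quick Python illustration of the operation-count difference (a minimal sketch):

n = 1_000_000

# Naive approach: n - 1 additions.
total_naive = 0
for k in range(1, n + 1):
    total_naive += k

# Gauss's formula: one addition, one multiplication, one division.
total_formula = n * (n + 1) // 2

assert total_naive == total_formula
print(total_formula)  # 500000500000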
I am trying to express the following statements in first order logic: X is a subset of Y. A set can be uniquely characterised by its elements. The power set p(X) contains all subsets of X. A set X is the union of all its subsets containing just one element. Thus far I have managed: $\forall x \in X \Rightarrow x \in Y$ $X=Y \Leftrightarrow (\forall x \in X \Rightarrow x \in Y \land \forall y \in Y \Rightarrow y \in X)$ $ x \in p(X) \Leftrightarrow (\forall y \in x \Rightarrow y \in X) $ However, I now don't know how to express the 4th problem via FOL. I tried: $ X = \cup x.(\forall z \in X \exists t. z \in t \land t = x \land z \not \in \forall y. y \neq t)$ If possible I would also like to convert 3. into an expression defining $p(X)$ with $=$ rather than $\Leftrightarrow$. Any form of constructive comment is welcome. Thanks in advance. EDIT: Regarding problem 4: My main problem is that I am not really sure how to express or write down the solution - I am not too sure what is allowed and what is not. Thus I will take any idea that seems remotely right or constructive. $ X = \underset{|x| = 1, x \subset X} \cup \Leftrightarrow \forall y \in X \exists x . (y \in x \land \lnot \exists z .(z \lnot = x \land y \in z) ) $ On the LHS of $\Leftrightarrow$ I simply tried to express the property given in the problem above, and on its RHS I tried to express this property in terms of FOL.
The following is an assignment; I just need some help understanding what I've done so far. Suppose $X_1 \sim N(0, \sigma^2 / (1-\rho^2))$ and $U_2, \dots, U_n \sim N(0, \sigma^2)$, independent of each other and of $X_1$. We define: $$ X_k = U_k + \rho X_{k-1},$$ for $k=2,\dots,n$. Show using induction that $(X_{k-1}, X_k)$ follows a bivariate normal distribution. I used the equation: $$(X_{k-1}, X_k) = \begin{bmatrix} 1 & 0\\ \rho & 1 \end{bmatrix} \begin{bmatrix} X_{k-1}\\ U_k \end{bmatrix}.$$ For the base case I used $k=2$: $$(X_1, X_2) = \begin{bmatrix} X_1\\ U_{2}+\rho X_1 \end{bmatrix},$$ I have already proved that $U_{k+1}+\rho X_k$ follows $N(0, \sigma^2 / (1-\rho^2))$ using induction. For the induction step I assume it holds for $k$ and we want to prove it also holds for $k+1$: $$(X_k, X_{k+1}) = \begin{bmatrix} X_k\\ U_{k+1}+\rho X_k \end{bmatrix},$$ which, as said before, I have already proved. Compute the covariance matrix of $(X_{k-1}, X_k)$. For the main diagonal I guess we can simply take the variance of the normal distributions: $\sigma^2 / (1-\rho^2)$. For the covariance I used the following equality: $$2Cov(X_{k-1}, X_k) = Var(X_{k-1} + X_k) - Var(X_{k-1}) - Var(X_k).$$ This gives me that $Cov(X_{k-1}, X_k) = 0$. Compute $Cov(X_{k-j}, X_k)$ for $k=2,\dots,n$ and $j=1,\dots,k-1$ using the model for $X_k$. Basically I computed just $E[X_{k-j}X_k]$. Suppose I compute $Cov(X_{k-1}, X_k)$: $$ E[(U_k + \rho X_{k-1})(U_{k-1} + \rho X_{k-2})] = E[U_k]E[U_{k-1}] + \rho E[U_k]E[X_{k-2}] + \rho E[U_{k-1}X_{k-1}] + \rho^2 E[X_{k-1}X_{k-2}] = \rho^3E[X_{k-1}X_{k-2}] = \dots.$$ Is this sufficient? Compute the covariance matrix of $(X_1, \dots, X_n)$. For this I need a suggestion. Thanks.
All matter has intrinsic wave properties. These are described mathematically by the Schrödinger equation and its solutions. The wave nature of electrons, together with other fundamental properties (e.g. charge and momentum), produces the wave mechanics of the electron. The effects of electron wave mechanics are far reaching, responsible for such phenomena as electricity, emission and absorption, and bonding and hybridization. Background Wave-Nature of Matter Accurate explanations of atomic-scale physical and chemical phenomena depend on energy quantization. The realization of this fundamental characteristic of matter was developed through the treatment of a couple of well known experiments, notably Max Planck's explanation of black body radiation and Einstein's explanation of the photoelectric effect. The conclusions of energy quantization were consolidated by Louis de Broglie as \[\lambda = \dfrac{h}{p}\] and, via the Planck–Einstein relation: \[f= \dfrac{E}{h}\] On Waves Quantum mechanically, an electron can be described by a wave function oscillating in space and time that has mean values equal to the expectation values of observables corresponding to given operators. According to the Born interpretation of quantum mechanics, the product of this wavefunction with its complex conjugate (the squared modulus) corresponds to the electron's positional probability density. Electrons are fermions. They are charged particles. When they are confined by a potential to a limited space they display harmonics analogous to those of other wavelike phenomena. This occurs most notably in atoms and molecules. The hydrogen atom provides the simplest atomic example. The three dimensional harmonics of an electron bound within the potential energy well of a proton result in what are conventionally called orbitals. Orbitals are commonly depicted as contours enclosing some percentage of the electron's probability density, though realistically, without external potentials, orbitals extend infinitely. A bound electron occupies higher harmonics of the bound state with increasing energy. Energy can only be increased in the specific quanta demanded for the wave function to exist. The discrete energy levels of higher harmonics correspond to higher orbitals. Electrons can gain energy to exist in a higher orbital. When this process occurs via interaction with electromagnetic radiation, it is referred to as absorption. Similarly, the relaxation of an electron into a lower energy orbital results in the release of electromagnetic radiation, and is referred to as emission. Because of the quantized energy levels demanded by a bound system, electrons in a molecule or atom can only absorb or emit light at specific frequencies, which depend on the properties of the system. Certain materials have energy level spacings such that excitation by an energy source can produce a greater number of electrons in an excited state than in the ground state. This is known as population inversion. When this happens for a transition which releases light upon relaxation, light of a specific nature is produced that has great practical importance. This light is monochromatic, and can be channelled back and forth through the medium (the gain medium) and allowed to escape only through a very narrow aperture to produce a monochromatic, directional, coherent light source. The apparatus is called a LASER, which is an acronym for light amplification by stimulated emission of radiation. The charge of the electron causes it to move in accordance with the forces described by Coulomb's law.
The motion of a charged particle produces a magnetic field. The attraction of electrons to positive charge can be used to store energy in chemical form. The motion of electrons driven by batteries or other sources through conductive media, such as copper wire, can be harnessed to do work, powering everything from computers to lights. Electronic absorption and emission within roughly 350 to 750 nm produces radiation that is in the visible spectrum. The sky is blue because of the interaction of light from the sun with electrons in atoms in the atmosphere, in what's known as scattering. Scattering of this type scales with the inverse fourth power of wavelength, so light of shorter wavelength (blue in the visible) is scattered much more than light of longer wavelengths, which largely passes through the atmosphere unscattered. Electrons can also tunnel due to their wave nature: quantum mechanical tunneling occurs when a particle reaches a region that is classically impossible, meaning it did not have enough energy to get past a potential barrier, and yet it did. The most basic wave satisfying the boundary conditions of a particle confined to a one-dimensional box of length L (the wavefunction must vanish at the walls: \(\psi(0)=\psi(L)=0\)) is \[A \sin(n\pi x/L)\] The superposition principle allows for the Fourier theorem, which allows an infinite number of such waves to be combined to form any curve that obeys the requirements of a bound system. Such a wave provides an accurate (nonrelativistic) description of the electron. References D.A. McQuarrie and J.D. Simon, Physical Chemistry: A Molecular Approach (1997). P.W. Atkins and R.S. Friedman, Molecular Quantum Mechanics, 4th Ed., Oxford University Press, 2005.
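As a worked complement to the confined wave quoted above (the standard particle-in-a-box result, not taken from the page itself): requiring \(\psi(0)=\psi(L)=0\) forces \(\lambda_n = 2L/n\), and combining this with the de Broglie relation \(p = h/\lambda\) gives the quantized energies \[E_n = \frac{p_n^2}{2m} = \frac{n^2h^2}{8mL^2}, \qquad n = 1, 2, 3, \ldots\] which shows explicitly why a bound electron can only absorb or emit energy in discrete amounts.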
Fermat left only one proof. The area of a Pythagorean triangle is never a square number. Fermat wrote, "If the area of a right-angled triangle were a square, there would exist two biquadrates (fourth powers) the difference of which would be a square number." That is, [latex] a^4\, -\, b^4 = c^2[/latex] He used the method of infinite descent, which is a proof of non-existence or a form of proof by contradiction. It was not a new method – it is recorded in Euclid's Elements. It relies on the Well-Ordering Principle: that a set of natural numbers contains a least element. One assumes that a solution exists in the natural numbers, which would imply that a second, smaller solution also exists, and so on. Since there cannot exist an infinity of ever smaller solutions in the natural numbers, the original premise is incorrect and the assumption is contradicted. The descent step in effect constructs a set of positive integers which does not have a least member; therefore, it is the empty set. We consider the related Diophantine equation [latex]x^4 + y^4 = z^2[/latex] and whether it has any solutions. Let the solution be: [latex]x = x_1,\,\, y = y_1,\,\, z = z_1[/latex]. If gcd [latex](x_1, y_1) = d \gt 1[/latex], then we can write [latex]x_1 = dx',\, y_1 = dy'[/latex]. Substitute into the equation: [latex](dx')^4 + (dy')^4 = z_1^2[/latex] Factorise: [latex]d^4(x'^4 + y'^4) = z_1^2[/latex] It follows that [latex]d^2[/latex] divides [latex]z_1[/latex] and that [latex]z_1 = d^2z'[/latex]. [latex](z_1^2/d^4 = z'^2,\,\, z_1^2 = d^4z'^2,\,\, z_1 = d^2z')[/latex] Then: [latex]x'^4 + y'^4 = z'^2[/latex], where gcd[latex](x', y') = 1[/latex] and [latex]z' \lt z_1[/latex]. We can therefore assume that gcd [latex](x_1, y_1) = 1[/latex]: we have established that they have no common factors. We now write the equation as [latex](x_1^2)^2 + (y_1^2)^2 = z_1^2[/latex] This is a primitive Pythagorean triple. [latex] \begin{align} x_1^2 &= m^2\, – \,n^2 \tag 1\\ y_1^2 &= 2mn \tag2\\ z_1 &= m^2 + n^2 \tag3 \end{align} [/latex] where [latex]m[/latex] and [latex]n[/latex] are relatively prime positive integers of opposite parity and [latex]m\gt n[/latex]. In this case, [latex]m[/latex] must be odd and [latex]n[/latex] must be even, otherwise [latex] x_1^2 = m^2\,-\,n^2\equiv 0\, – 1 \equiv 3 \pmod 4[/latex], which is impossible because a square number is congruent to either [latex]0[/latex] or [latex]1[/latex] modulo [latex]4[/latex]. Let: [latex]n = 2r[/latex] Modify the equations: [latex] \begin{align} x_1^2 &= m^2 \,- \,4r^2\tag4\\ y_1^2 &= 4mr \tag5\\ z_1 &= m^2 + 4r^2\tag6 \end{align} [/latex] From ([latex]4[/latex]): [latex]x_1^2 + 4r^2 = m^2[/latex], so [latex](x_1, 2r, m)[/latex] is a primitive Pythagorean triple. From [latex](5)[/latex]: [latex](y_1/2)^2 = mr[/latex], where [latex]m[/latex] and [latex]r[/latex] are relatively prime; it follows that [latex]m[/latex] and [latex]r[/latex] are both squares. Let: [latex]m = s^2[/latex] and [latex]r = t^2[/latex] We descend. [latex] \begin{align} x_1 &= u^2\,-\,v^2\tag7\\ 2r &= 2uv\tag8\\ m&= u^2 + v^2\tag9 \end{align} [/latex] where [latex]u[/latex] and [latex]v[/latex] are relatively prime positive integers with opposite parity and [latex]u\gt v[/latex]. But: [latex]n = 2r = 2t^2[/latex] So: [latex]uv = t^2[/latex] Hence, since [latex]u[/latex] and [latex]v[/latex] are relatively prime, [latex]u[/latex] and [latex]v[/latex] are both squares.
Let: [latex]u = x_2^2,\,v = y_2^2[/latex] Substitute into [latex](9)[/latex] above: [latex]m = s^2 = x_2^4 + y_2^4 = z_2^2 \text{, say,} \tag{10}[/latex] where [latex]z_2 = s[/latex]. Thus, this is another solution to the original equation. It is 'smaller' because [latex]0 \lt z_2 = s \le s^2 = m \lt m^2 + n^2 = z_1[/latex] We have completed the descent step. Thus, the assumption that there exists a positive solution is contradicted. The Diophantine equation [latex]x^4 + y^4 = z^2[/latex] has no positive solutions. Since any fourth power is necessarily a square, the method of infinite descent can thus also be used to show that the Diophantine equation [latex]x^4 +y^4 = z^4[/latex] has no positive solutions. © OldTrout 2018
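As an illustrative (and of course non-probative) numerical check of the conclusion, a short brute-force search in Python, with an arbitrary bound of my choosing:

from math import isqrt

# Search for positive solutions of x^4 + y^4 = z^2 with x, y <= bound.
bound = 200
solutions = []
for x in range(1, bound + 1):
    for y in range(x, bound + 1):   # x <= y avoids duplicate pairs
        s = x**4 + y**4
        z = isqrt(s)
        if z * z == s:
            solutions.append((x, y, z))

print(solutions)  # [] : no solutions in this range, as the proof demands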
Euler Buckling Formula The Theorem $F = \dfrac {\pi^2 E I} {\left({K L}\right)^2}$ where: $F$ = maximum or critical force (vertical load on column) $E$ = modulus of elasticity $I$ = area moment of inertia of the cross section of the rod $L$ = unsupported length of column $K$ = column effective length factor, whose value depends on the conditions of end support of the column, as follows: For both ends pinned (hinged, free to rotate), $K = 1.0$ For both ends fixed, $K = 0.50$ For one end fixed and the other end pinned, $K \approx 0.699$ For one end fixed and the other end free to move laterally, $K = 2.0$ $K L$ is the effective length of the column Proof Also known as The Euler buckling formula is also known as the Euler column formula. Source of Name This entry was named for Leonhard Paul Euler.
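As a worked example of the formula (the column properties below are illustrative values of my choosing, not part of the entry):

import math

# Euler buckling load: F = pi^2 * E * I / (K * L)^2
E = 200e9     # modulus of elasticity (steel), Pa
I = 1.0e-6    # area moment of inertia, m^4
L = 3.0       # unsupported length, m
K = 1.0       # both ends pinned

F = math.pi**2 * E * I / (K * L)**2
print(round(F / 1000))   # ~219 kN critical load

Note how sensitive the result is to the end conditions: with both ends fixed ($K = 0.5$) the effective length halves, so the critical load quadruples.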
We know that there is only one non-trivial ring homomorphism from $\mathbb{Z}$ or $\mathbb{Q}$ to another unital ring $S$. What's more, when we consider the automorphisms of $\mathbb{R}$, they are uniquely determined too. More explicitly speaking: let $\varphi \in Aut(\mathbb{R})$; then $\varphi|_{\mathbb{Q}}=Id_{\mathbb{Q}}$, and for $a>b \in \mathbb{R}$ we have $\varphi(a)-\varphi(b)=\varphi(a-b)=\varphi(\sqrt{a-b})^2>0$. So if we have some $a\in \mathbb{R}$ with $\varphi(a)\neq a$, we can choose a rational number between $a$ and $\varphi(a)$, which yields a contradiction. Here is my question: Is there a unique ring homomorphism from $\mathbb{R}$ to another unital ring $S$ (sending $1$ to $1_S$)? Especially for $S=M_n(\mathbb{R})$, the $n\times n$ matrices over $\mathbb{R}$. (For $n=1$, it is $Aut(\mathbb{R})$, and we have determined it.) Any suggestion will be appreciated.
I am looking for interesting functional equations of a specific type, and I thought that perhaps the math SE community would be able to deliver a good amount of them. When I look up "functional equation problems", I usually get problems like $$g(x+y)+g(x)g(y)=g(xy)+g(x)+g(y)$$ and they usually have rather boring answers with solutions that are linear, constant, or nonexistent. The type of functional equation that I am looking for has only one variable (namely $x$) and often has a very strange answer using identities of various types of functions. For example, one of the easier equations is $$\alpha(x)+\alpha(2x)=1$$ I'm looking for non-boring (and thus non-constant) solutions, and so one solution to this equation is $$\alpha(x)=\sin^2\left(\frac{\pi}{2}\log_2 x\right)$$ (indeed, $\alpha(2x)=\sin^2\left(\frac{\pi}{2}\log_2 x+\frac{\pi}{2}\right)=\cos^2\left(\frac{\pi}{2}\log_2 x\right)$, so the two terms sum to $1$). Two examples of more complicated problems are $$\beta(x)+\beta\bigg(\frac{x-1}{x+1}\bigg)=\sin x$$ and $$\gamma(4000-400x)+\gamma(400-40x)+\gamma(40-4x)=x^2+x+1$$ The first has a very long solution, and the second has a polynomial solution... but I will exclude the solutions to these two and let you try them for yourselves, if you like. Can anybody provide some examples of functional equations like this? Thanks!
I need advice or correction if something is incorrect with my proof. Your proof is good! Would appreciate any corrections in proof writing also! To this, I would respond: it's good to read different people's writing just for style. So here's my version of the proof, which is logically similar to yours but just differs on a few stylistic dimensions. A few noteworthy points: You may prefer to write function arrows "backwards", as in $f : B \leftarrow A.$ See below. A fraction line can be used to mean "implies"; see below. I prefer ending sentences without a big mass of symbols, using phrases like "as follows" and "below," and then putting the symbols immediately afterwards. See below. The word "fix" is a nice alternative to "let" when the latter has the right "basic meaning" but doesn't work grammatically. See below. If you're going to have a sequence of implications, I'd suggest making it as long as possible, and omitting the symbol $\implies.$ See below. With that said, here's the proof: Proposition. Let $g : C \leftarrow B$ denote a function and $f : B \leftarrow A$ denote a surjection. Then whenever $g \circ f$ is injective, so too is $g$. Proof. Assume that $g \circ f$ is injective, and fix $b,b' \in B.$ The following implication will be proved.$$\frac{g(b)=g(b')}{b=b'}$$ Since $f$ is surjective, begin by fixing elements $a,a' \in A$ satisfying the equations immediately below. $$b = f(a),\;\; b'=f(a')$$ Then each statement in the following sequence implies the next. $g(b)=g(b')$ $g(f(a)) = g(f(a'))$ $(g \circ f)(a) = (g \circ f)(a')$ $a=a'$ $f(a)=f(a')$ $b=b'$.
We have to do it without calculus or any complex inequality. Level of complexity is that we cannot even use the AM-GM inequality. So I tried, $$(\sin\theta-\cos\theta)^2\geq0$$ $$1-2\sin\theta\cos\theta\geq0$$ $$\frac12\geq\sin\theta\cos\theta$$ Reverting back to the previous step, $$(\sin\theta+\cos\theta)^2\geq4\sin\theta\cos\theta$$ I am stuck here, please help. Edit: Sorry, but we can only use knowledge up to class 10th. Which includes, $\sin^2\theta+\cos^2\theta=1$ etc. $\sin(90^{\circ}-\theta)=\cos\theta$ etc. And basic trig ratios. Hint: We have $$\sin\theta+\cos\theta=\sqrt{2}\sin\left(\theta+\frac{\pi}{4}\right).$$Clearly we have $$-1\leq\sin(\theta+\pi/4)\leq1.$$ Update: To answer the edited question (that is, not using the sum formula for $\sin(\theta+\pi/4)$), we have $$(\sin\theta+\cos\theta)^2+(\sin\theta-\cos\theta)^2=2\cdot\underbrace{(\sin^2\theta+\cos^2\theta)}_1+2(\sin\theta)(\cos\theta)-2(\sin\theta)(\cos\theta)=2.$$ Since $x^2\geq0$ for all real $x$, we have $(\sin\theta-\cos\theta)^2\geq0$, so subtracting it from both sides gives $$(\sin\theta+\cos\theta)^2=2-(\sin\theta-\cos\theta)^2\leq2.$$This just means $$|\sin\theta+\cos\theta|\leq\sqrt{2}.$$ By definition of absolute value, this says $$-\sqrt{2}\leq\sin\theta+\cos\theta\leq\sqrt{2}.$$ There are easier ways, but we can use AM-GM. By the Triangle Inequality we have $|\sin\theta+\cos\theta|\le |\sin\theta|+|\cos\theta|$. By AM-GM we have $|\sin\theta|\,|\cos\theta|\le \frac{\sin^2\theta+\cos^2\theta}{2}=\frac12$, so $(|\sin\theta|+|\cos\theta|)^2=1+2|\sin\theta|\,|\cos\theta|\le 2$, giving $|\sin\theta|+|\cos\theta|\le\sqrt{2}$. Remark: You were almost finished, for $4\sin\theta\cos\theta=2\sin(2\theta)$. And $2\sin(2\theta)$ has absolute value $\le 2$. Using $$\displaystyle (\sin \phi+\cos \phi)^2+(\sin \phi-\cos \phi)^2 = 2$$ Now $$\displaystyle (\sin \phi+\cos \phi)^2 = 2-(\sin \phi-\cos \phi)^2$$ Using $$\bf{Square\; Quantity\geq 0}$$ So $$\displaystyle (\sin \phi-\cos \phi)^2\geq0$$ So $$\displaystyle 2-\displaystyle (\sin \phi+\cos \phi)^2\geq 0$$ OR we get $$\displaystyle (\sin \phi+\cos \phi)^2\leq \left(\sqrt{2}\right)^2$$ So we get $$\displaystyle-\sqrt{2} \leq (\sin \phi+\cos \phi)\leq \sqrt{2}$$ A (simple?) answer: Since $\sin$ and $\cos$ are bounded, there is $r>0$ such that $$\left|\sin\theta+\cos\theta\right|\leq r$$ Replacing $\theta$ by $-\theta$ gives $$\left|\sin\theta-\cos\theta\right|\leq r$$ Multiplying together $$\left|\sin^2\theta-\cos^2\theta\right|\leq r^2$$ But the LHS is bounded by $2$, so picking $r$ smallest possible, we get that $r^2\leq 2$. So $r\leq\sqrt{2}$. You could use the R-Formula (which can be derived using trigonometric identities) to combine $\sin \theta + \cos \theta$ into $$\sqrt{2}\sin \left(\theta + \frac{\pi}{4}\right)$$ and then use the fact that $-1 \le \sin x \le 1$. Hint: Write $$\cos(b)=\sin(\frac{\pi}{2}-b)$$Now use the formula of $\sin(a)+\sin(b)$ 1st PROOF: $\sin \pi/4 =\cos \pi/4 =1/\sqrt 2$. Therefore $|\sin x + \cos x|=\sqrt 2 |\sin x \cos \pi/4 +\cos x \sin \pi/4|=\sqrt 2 |\sin (x+\pi/4)|\le \sqrt 2$. Equality iff $|\sin (x+\pi/4)|=1.$ E.g. $x= \pi/4.$...............2nd PROOF: $(\sin x +\cos x)^2 \le (\sin x +\cos x)^2+(\sin x - \cos x)^2 =2$. Equality iff $\sin x - \cos x =0$ and $|\sin x|=1/\sqrt 2.$ E.g. $x=\pi/4.$
Browse Dissertations and Theses - Mathematics by Title (1999) What is the maximum number of edges in a multigraph on n vertices if every k-set spans at most r edges? We asymptotically determine this maximum for almost all k and r as n tends to infinity, thus giving a generalization ... (2010-08-20) We consider a variety of problems in extremal graph and set theory. The chromatic number of $G$, $\chi(G)$, is the smallest integer $k$ such that $G$ is $k$-colorable. The square of $G$, written $G^2$, ... (2003) Let F be a family of translates of a fixed convex set in the plane, and let G be the intersection graph of F. We studied the chromatic number of the complement of G. We also studied the transversal number of F, ... (2014-09-16) We study several problems in extremal graph theory. Chapter 2 studies Tuza's Conjecture, which states that if a graph G does not contain more than k edge-disjoint triangles, then G can be made triangle-free by deleting ... (2011-08-25) In the first part of this thesis we generalize a theorem of Kiming and Olsson concerning the existence of Ramanujan-type congruences for a class of eta quotients. Specifically, we consider a class of generating functions ... (2008) In this thesis, firstly we derive a general method for establishing q-series identities. Using this method, we can show that if $\mathbb{Z}^n$ can be taken as the disjoint union of a lattice generated by n linearly independent vectors ... (1984) A method is provided for the practical computation of the coefficients for a number of structural polynomials of the universal $\lambda$-ring in one indeterminant. In particular, we derive a method for the computation of the ... (2002) Also we prove that $\chi(1)$ has at most $\eta(\chi)$ distinct prime divisors and $1 \in \{a_i \mid i = 1,\dots,\eta(\chi)\}$ if G is solvable and $\chi(1) > 1$. (1981) This thesis is devoted to the study of varieties with the following property: if X is a smooth projectively normal variety in $\mathbb{P}^n$, we say that X has the length one projection property if every isomorphic projection ... (1988) Let $Y^{(n)}$ be the 1 x n matrix containing indeterminate entries $\{Y_1,\dots,Y_n\}$ and $X^{(n)}$ be the n x n alternating matrix containing indeterminate entries $\{X_{ij} \mid 1 \leq$ ... (1980) This thesis concerns the relationship between pairs of projectively equivalent Riemannian or Lorentz metrics that share some property along a hypersurface of a manifold. The first chapter is devoted to the construction of ... (1993) A holomorphic mapping f from a bounded domain D in $\mathbb{C}^n$ to a bounded domain $\Omega$ in $\mathbb{C}^N$ is proper if the sequence $\{f(z_j)\}$ tends to the boundary of $\Omega$ for every sequence $\{z_j\}$ which ... (1990) A holomorphic mapping f from a bounded domain $\Omega$ in $\mathbb{C}^n$ to a bounded domain $\Omega'$ in $\mathbb{C}^N$ is proper if and only if $(f(z_\nu))$ tends to the boundary $b\Omega'$ for each ...
In section 13.6 of Nakahara, the parity anomaly is in odd-dimensional spacetime. From the paper "Fermionic Path Integral And Topological Phases" https://arxiv.org/abs/1508.04715 by Witten, the problem appears because one cannot define the sign of the path integral, $$S[\bar{\psi},\psi;A]=\int d^{2n+1}x\,\bar{\psi}iD \!\!\!\!/\,\psi,$$ $$\mathcal{Z}=\det(iD \!\!\!\!/\,)=\prod_{\lambda\in\mathrm{spec}}\lambda,$$ because there is an infinite number of positive and negative eigenvalues $\lambda$. The number of eigenvalues flowing through $\lambda=0$ is related to the index theorem in $2n+2$ dimensions. Does the parity anomaly appear in even dimensions? From Nakahara's derivation, I don't see anything related to the dimension of spacetime. If this anomaly exists in odd dimensions, then why doesn't it appear in even dimensions? I also posted my question here https://physics.stackexchange.com/q/436841/185558
OpenCV #005 Averaging and Gaussian filter Digital Image Processing using OpenCV (Python & C++) Highlights: In this post, we will learn how to apply and use an Averaging and a Gaussian filter. We will also explain the main differences between these filters and how they affect the output image. What makes a good filter? This is a million dollar question. Tutorial Overview: 1. Averaging filter What makes a good filter? So, let's start with a box filter. In the Figure below, we see the input image (designed from numbers) that is processed with an averaging filter. We also say box/uniform/blur filter, and yes, these are all synonyms :-). Here, all the coefficients have the same weight. That is, the averaging filter is a box filter whose coefficients all have the value \(\frac{1}{9} \). Now, as the filter \(H (u, v) \) is moved around the image \(F (x, y) \), the new image \(G (x, y) \) on the right is generated. Next, let's say that we process the image below with the averaging box filter. What should we expect as a result? Well, we will get an ugly image like the one on the right. The image on the left is visually appealing and quite smooth. However, the generated image looks nothing like it. There are some unnatural sharp edges in the output \(G (x, y) \). What was problematic with that? The box filter is not smooth! Trying to blur or filter an image with a box that is not smooth does not seem right. When we want to smooth an image, our goal is to keep the significant pieces of the information (lower frequency content). Subsequently, we will see that a better result is obtained with a Gaussian filter due to its smooth transitioning properties. Code for Averaging filter Both in Python and C++, an averaging filter can be applied by using the blur() or boxFilter() functions. C++

#include <iostream>
#include <opencv2/opencv.hpp>

using namespace std;
using namespace cv;

int main() {

    cv::Mat image = imread("dragon.jpg", IMREAD_GRAYSCALE);
    cv::Mat processed_image;

    // We create a simple blur filter, or an average/mean filter:
    // all coefficients of this filter are the same,
    // and the filter is also normalized.
    cv::imshow("Original image", image);
    cv::waitKey();

    cv::blur(image, processed_image, Size(3,3));
    cv::imshow("Blur filter applied of size 3", processed_image);
    cv::waitKey();

    cv::blur(image, processed_image, Size(7,7));
    cv::imshow("Blur filter applied of size 7", processed_image);
    cv::waitKey();

    // Here we create an image of all zeros; only one pixel will be bright.
    // We generate a very small image so that we can better visualize
    // the filtering effect with such an image.
    cv::Mat image_impulse = cv::Mat::zeros(31, 31, CV_8UC1);
    image_impulse.at<uchar>(16,16) = 255;
    cv::imshow("Impulse image", image_impulse);
    cv::waitKey();

    cv::Mat image_impulse_processed;
    cv::blur(image_impulse, image_impulse_processed, Size(3,3));
    // Scale the result up for visualization: the normalized blur spreads
    // the single bright pixel over a square of much lower intensities.
    image_impulse_processed = image_impulse_processed * 20;
    cv::imshow("Impulse image", image_impulse_processed);
    cv::waitKey();

    // This produces a small square of size 3x3 in the center.
    // Notice that, since the filter is normalized, if we increase the size
    // of the filter, the intensity values of the square in the output image
    // will be lower, and hence more challenging to detect.
    cv::blur(image_impulse, image_impulse_processed, Size(7,7));
    image_impulse_processed = image_impulse_processed * 20;
    cv::imshow("Impulse image", image_impulse_processed);
    cv::waitKey();

Let's see the results of our code: Interestingly, when we do filtering, the larger the kernel size, the smoother the new image will be. Here below is a sample of filtering an impulse image (to the left), using a kernel size of 3×3 (in the middle) and a 7×7 kernel size (to the right). 2. Gaussian filter So, we all know what a Gaussian function is. But how will we generate a Gaussian filter from it? Well, the idea is that we will simply sample a 2D Gaussian function. We can see below how the proposed filter of size 3×3 looks. Using the \(3\times 3 \) filter is not necessarily an optimal choice. Although we can notice its higher values in the middle that fall off at the edges and even more at the corners, this can be considered a poor representation of the Gaussian function. Here, we plot a Gaussian function both in 2D & 3D so we gain more intuition of how larger Gaussian filters will look: \(h \left ( u,v \right )= \frac{1}{2\pi \sigma ^{2}} e^{-\frac{u^{2}+v^{2}}{2\sigma ^{2}}} \) Here, we can refresh our knowledge and write the exact formula of the Gaussian function: \(\exp \left(-\frac{ x^{2}+y^{2} }{2\sigma ^{2}}\right) \) Next, if we take an image and filter it with a Gaussian blurring function of size 7×7 we would get the following output. Wow! So much nicer. Compare smoothing with a Gaussian to the non-Gaussian filter to see the difference. With the non-Gaussian filter, we see all those sharp edges. With the Gaussian, we get a nice smooth blur in the new image. Code for Gaussian filter C++

    // Gaussian filter.
    // First we will just apply a Gaussian filter to the image;
    // this will also create a blurring or smoothing effect.
    // Try visually to notice the difference as compared with the mean/box/blur filter.
    cv::Mat image_gaussian_processed;
    cv::GaussianBlur(image, image_gaussian_processed, Size(3,3), 1);
    cv::imshow("Gaussian processed", image_gaussian_processed);
    cv::waitKey();

    cv::GaussianBlur(image, image_gaussian_processed, Size(7,7), 1);
    cv::imshow("Gaussian processed", image_gaussian_processed);
    cv::waitKey();

Output C++

    cv::Mat image_impulse_gaussian_processed;
    cv::GaussianBlur(image_impulse, image_impulse_gaussian_processed, Size(3,3), 1);
    image_impulse_gaussian_processed = image_impulse_gaussian_processed * 20;
    cv::imshow("Gaussian processed - impulse image", image_impulse_gaussian_processed);
    cv::waitKey();

    cv::GaussianBlur(image_impulse, image_impulse_gaussian_processed, Size(9,9), 1);
    // Here we multiply the image to obtain a better visualization,
    // as the pixel values would otherwise be too dark.
    image_impulse_gaussian_processed = image_impulse_gaussian_processed * 20;
    cv::imshow("Gaussian processed - impulse image", image_impulse_gaussian_processed);
    cv::waitKey();

Output C++

    // Here we will just add random Gaussian noise to our original image.
    cv::Mat noise_Gaussian = cv::Mat::zeros(image.rows, image.cols, CV_8UC1);

    // A value of 64 is specified for the noise mean
    // and 32 is specified for the standard deviation.
    cv::randn(noise_Gaussian, 64, 32);

    cv::Mat noisy_image, noisy_image1;
    noisy_image = image + noise_Gaussian;
    cv::imshow("Gaussian noise added - severe", noisy_image);
    cv::waitKey();

    // Adding a very mild noise.
    cv::randn(noise_Gaussian, 64, 8);
    noisy_image1 = image + noise_Gaussian;
    cv::imshow("Gaussian noise added - mild", noisy_image1);
    cv::waitKey();

Output C++

    // Let's now apply a Gaussian filter to this.
    // This may be confusing for beginners: we have one Gaussian distribution
    // used to create the noise, and another Gaussian function used to create
    // the filter, sometimes also called a kernel.
    // They should be treated completely independently.
    cv::Mat filtered_image;
    cv::GaussianBlur(noisy_image, filtered_image, Size(3,3), 3);
    cv::imshow("Gaussian noise severe - filtered", filtered_image);
    cv::waitKey();

    cv::GaussianBlur(noisy_image1, filtered_image, Size(7,7), 3);
    cv::imshow("Gaussian noise mild - filtered", filtered_image);
    cv::waitKey();

    return 0;
}

Output Summary Finally, we have learned how to smooth (blur) an image with Gaussian and non-Gaussian filters. We realized why it is preferable to use a Gaussian filter over a non-Gaussian one. In the next posts, we will talk more about the Sobel operator, image gradients, and how edges can be detected in images.
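Since the Python tabs did not survive in this copy, here is a minimal Python sketch mirroring the C++ listings above (same assumed input file "dragon.jpg"; cv2.blur and cv2.GaussianBlur are the Python counterparts of the calls used in the C++ code):

import cv2
import numpy as np

# Read the image in grayscale, as in the C++ version.
image = cv2.imread("dragon.jpg", cv2.IMREAD_GRAYSCALE)

# Averaging (box) filter with 3x3 and 7x7 kernels.
blur3 = cv2.blur(image, (3, 3))
blur7 = cv2.blur(image, (7, 7))

# Gaussian filter: 7x7 kernel, sigma = 1.
gauss7 = cv2.GaussianBlur(image, (7, 7), 1)

# Impulse image: all zeros except one bright pixel in the center.
impulse = np.zeros((31, 31), dtype=np.uint8)
impulse[16, 16] = 255
impulse_blur = cv2.blur(impulse, (3, 3))
# Scale up for visibility, clipping to mimic saturating cv::Mat arithmetic.
impulse_vis = np.clip(impulse_blur.astype(np.int32) * 20, 0, 255).astype(np.uint8)

for name, img in [("blur 3x3", blur3), ("blur 7x7", blur7),
                  ("gaussian 7x7", gauss7), ("impulse blurred", impulse_vis)]:
    cv2.imshow(name, img)
cv2.waitKey()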
Hyperbolic spiral A plane transcendental curve whose equation in polar coordinates is $$\rho=\frac a\phi.$$ It consists of two branches, which are symmetric with respect to a straight line $d$. The pole is an asymptotic point. The asymptote is the straight line parallel to the polar axis at a distance $a$ from it. The arc length between two points $M_1(\rho_1,\phi_1)$ and $M_2(\rho_2,\phi_2)$ is $$l=a\left[-\frac{\sqrt{1+\phi^2}}{\phi}+\ln\left(\phi+\sqrt{1+\phi^2}\right)\right]_{\phi_1}^{\phi_2}.$$ The area of the sector bounded by an arc of the hyperbolic spiral and the two radius vectors $\rho_1$ and $\rho_2$ corresponding to the angles $\phi_1$ and $\phi_2$ is $$S=\frac{a(\rho_1-\rho_2)}{2}.$$ A hyperbolic spiral and an Archimedean spiral may be obtained from each other by inversion with respect to the pole $O$ of the hyperbolic spiral. A hyperbolic spiral is a special case of the so-called algebraic spirals.
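Both formulas follow from the standard polar-coordinate integrals; as a check (this derivation is mine, not part of the original entry), with $\rho=a/\phi$ and $\rho'=-a/\phi^2$: $$l=\int_{\phi_1}^{\phi_2}\sqrt{\rho^2+\rho'^2}\,d\phi=a\int_{\phi_1}^{\phi_2}\frac{\sqrt{1+\phi^2}}{\phi^2}\,d\phi=a\left[-\frac{\sqrt{1+\phi^2}}{\phi}+\ln\left(\phi+\sqrt{1+\phi^2}\right)\right]_{\phi_1}^{\phi_2},$$ $$S=\frac12\int_{\phi_1}^{\phi_2}\rho^2\,d\phi=\frac{a^2}{2}\left(\frac{1}{\phi_1}-\frac{1}{\phi_2}\right)=\frac{a(\rho_1-\rho_2)}{2},$$ the latter confirming the sector-area formula above (note that $\rho_i = a/\phi_i$).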
Multivariable Limits

Does anybody have an idea what techniques I can apply to these limits? For c and d I did direct substitution since there is no zero in the denominator. Is that correct? a) I converted to polar form and the denominator became $\cos^2\theta+\sin^2\theta = 1$, so that limit exists. How about b? No technique I know works, and I couldn't prove it DNE either.

a) $\dfrac{x^4-y^4}{x^2+y^2} = \dfrac{(x^2-y^2)(x^2+y^2)}{x^2+y^2}=x^2-y^2$ Now just plug the values in.

b) $\lim \limits_{x\to 1\\y\to -1}~~\dfrac{xy}{x+y}$ At the limit point the numerator is finite and the denominator is 0, which suggests the limit is infinite. But by changing the direction from which you approach the limit point you can make the result head to either plus or minus infinity, and thus the limit doesn't exist.

c) $\lim \limits_{x\to 1 \\y \to -1}~~\dfrac{x^2 ye^y}{x^4+4y^2}$ Just plug the values in.

d) Same here, just plug the limit values in. I don't see why this problem would be harder. The graph looks perfectly smooth.
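A quick sympy sketch illustrating (a) and (b) computationally:

import sympy as sp

x, y, t = sp.symbols('x y t')

# (a) After cancelling, the limit is direct substitution into x**2 - y**2.
f = (x**4 - y**4) / (x**2 + y**2)
print(sp.cancel(f))                                     # x**2 - y**2

# (b) Approach (1, -1) along x = 1 + t, y = -1 from both sides;
# xy/(x+y) blows up with opposite signs, so the limit does not exist.
g = x*y / (x + y)
print(sp.limit(g.subs({x: 1 + t, y: -1}), t, 0, '+'))   # -oo
print(sp.limit(g.subs({x: 1 + t, y: -1}), t, 0, '-'))   # oo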
Now showing items 1-10 of 190

J/Ψ production and nuclear effects in p-Pb collisions at √sNN=5.02 TeV (Springer, 2014-02) Inclusive J/ψ production has been studied with the ALICE detector in p-Pb collisions at the nucleon–nucleon center of mass energy √sNN = 5.02 TeV at the CERN LHC. The measurement is performed in the center of mass rapidity ...

Measurement of electrons from beauty hadron decays in pp collisions at √s = 7 TeV (Elsevier, 2013-04-10) The production cross section of electrons from semileptonic decays of beauty hadrons was measured at mid-rapidity (|y| < 0.8) in the transverse momentum range 1 < pT < 8 GeV/c with the ALICE experiment at the CERN LHC in ...

Suppression of ψ(2S) production in p-Pb collisions at √sNN=5.02 TeV (Springer, 2014-12) The ALICE Collaboration has studied the inclusive production of the charmonium state ψ(2S) in proton-lead (p-Pb) collisions at the nucleon-nucleon centre of mass energy √sNN = 5.02 TeV at the CERN LHC. The measurement was ...

Event-by-event mean pT fluctuations in pp and Pb–Pb collisions at the LHC (Springer, 2014-10) Event-by-event fluctuations of the mean transverse momentum of charged particles produced in pp collisions at √s = 0.9, 2.76 and 7 TeV, and Pb–Pb collisions at √sNN = 2.76 TeV are studied as a function of the ...

Kaon femtoscopy in Pb-Pb collisions at $\sqrt{s_{\rm{NN}}}$ = 2.76 TeV (Elsevier, 2017-12-21) We present the results of three-dimensional femtoscopic analyses for charged and neutral kaons recorded by ALICE in Pb-Pb collisions at $\sqrt{s_{\rm{NN}}}$ = 2.76 TeV. Femtoscopy is used to measure the space-time ...

Anomalous evolution of the near-side jet peak shape in Pb-Pb collisions at $\sqrt{s_{\mathrm{NN}}}$ = 2.76 TeV (American Physical Society, 2017-09-08) The measurement of two-particle angular correlations is a powerful tool to study jet quenching in a $p_{\mathrm{T}}$ region inaccessible by direct jet identification. In these measurements pseudorapidity ($\Delta\eta$) and ...

Online data compression in the ALICE O$^2$ facility (IOP, 2017) The ALICE Collaboration and the ALICE O2 project have carried out detailed studies for a new online computing facility planned to be deployed for Run 3 of the Large Hadron Collider (LHC) at CERN. Some of the main aspects ...

Evolution of the longitudinal and azimuthal structure of the near-side peak in Pb–Pb collisions at $\sqrt{s_{\rm NN}}=2.76$ TeV (American Physical Society, 2017-09-08) In two-particle angular correlation measurements, jets give rise to a near-side peak, formed by particles associated to a higher $p_{\mathrm{T}}$ trigger particle. Measurements of these correlations as a function of ...

J/$\psi$ elliptic flow in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 TeV (American Physical Society, 2017-12-15) We report a precise measurement of the J/$\psi$ elliptic flow in Pb-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV with the ALICE detector at the LHC. The J/$\psi$ mesons are reconstructed at mid-rapidity ($|y| < 0.9$) ...

Highlights of experimental results from ALICE (Elsevier, 2017-11) Highlights of recent results from the ALICE collaboration are presented. The collision systems investigated are Pb–Pb, p–Pb, and pp, and results from studies of bulk particle production, azimuthal correlations, open and ...
Equilibrium Means Detailed Balance

Equilibrium and Detailed Balance

Equilibrium has a very precise meaning in statistical physics, which also applies to biology. Equilibrium describes the average behavior (averaged over many systems under identical conditions) in which there is no net flow of material, probability or reactions. Equilibrium is not static, because each individual system undergoes its ordinary behavior/dynamics. There will be no net flow in any "space" examined: real-space, conformation space, or population space. Equilibrium can only occur under fixed conditions (e.g., constant temperature and total mass) when there is no addition or removal of energy or material from the system. The requirement for "no net flow of material, probability or reactions" is embodied in the condition for detailed balance.

Detailed Balance

Detailed balance is the balance of flows between any (and every) pair of states you care to define. Typically, one thinks of "detailed" infinitesimal states, but a balance of flows among tiny states implies a balance of flows among global states which are groups of tiny states. "Flow" refers to the motion of material or trajectories/probability, depending on the situation at hand. Here $i$ and $j$ are any states, and $k_{ij}$ and $k_{ji}$ are the rates between them. If we have $N = \sum_i N_i$ equilibrium systems with $N_i$ systems in each state $i$, then detailed balance is formulated as $$N_i \, k_{ij} = N_j \, k_{ji}. \qquad (1)$$

In a solution. As always in equilibrium, we have (at least conceptually) a large number of copies of our system. If we consider any two sub-volumes ($i$ and $j$) in just one of these systems, some set of molecules will move from one region to the other in a given time interval. However, in our equilibrium set of systems, there will be other systems in which the opposite flow occurs. Averaging over all systems, there is no net flow of any type of molecule between any two sub-volumes in equilibrium. This is detailed balance. (See also the time-averaging perspective, below.)

In a chemical reaction. Normally, we distinguish "products" and "reactants", but equilibrium largely abolishes this distinction. As described above for the solution case, if we have an equilibrium set of many chemical-reaction systems, an equal number will be proceeding in the forward (say, $i$ to $j$) and reverse ($j$ to $i$) directions. Although, nominally, a reaction may seem to prefer a certain direction, in equilibrium that just means that the products of the favored direction will be present in greater quantity (e.g., $N_j \gg N_i$), even though the forward and reverse flows stay the same as in (1), because the rates are very different ($k_{ji} \ll k_{ij}$).

In a conformation space of a single molecule. In an equilibrium set of molecules with, say, two conformational states A and B, there will be an equal number of A-to-B as B-to-A transitions in any given time interval. If there are many states, then there will be a balance between all pairs of states $i$ and $j$ as given in (1). Some microscopic physical insight into these issues is provided in the page on Conformational Statistical Mechanics.

Time vs. Ensemble Averaging

It is useful to consider the relation between "ensemble averaging" (e.g., averaging over the set of equilibrium systems described above) and "time averaging". Time-averaging is just what you would guess: averaging behavior (e.g., values of a quantity of interest) over a long period of time. In equilibrium, time averaging and ensemble averaging will yield the same result.
To see this, consider a solution containing many molecules diffusing around and perhaps exhibiting conformational motions as well. Assume the system has been equilibrating for a time much longer than any of its intrinsic timescales (inverse rates). Because finite-temperature motion in a finite system is inherently stochastic, over a long time each molecule will visit different regions of the container and also different conformations - in the same proportion as every other molecule. If we take a snapshot at any given time of this equilibrium system, the "ensemble" of molecules in the system will exhibit the same distribution of positions and conformations as a long single trajectory of any individual molecule. This has to be true because the snapshot itself results from the numerous stochastic trajectories of the molecules that have evolved over a long time. Some microscopic physical insight into the relationship between dynamic and equilibrium information is provided in the page on Conformational Statistical Mechanics.

Unphysical Models Cannot Equilibrate

Although every physical system that is suitably isolated will reach a state of equilibrium, that does not mean that every model made by scientists can properly equilibrate. In fact, many common models of biochemistry exhibit "irreversible" steps - in which the reverse of some step never occurs - and could never satisfy detailed balance. The Michaelis-Menten model of catalysis is an irreversible model. Such model irreversibility typically represents the observation that the forward rate exceeds the reverse rate so greatly that the reverse process can safely be ignored. This may be true in some cases, but there are numerous cases in biochemistry where reversibility is critical, such as typical binding processes (unbinding is needed to terminate a signal) and in ATP synthase (which can make ATP or pump protons depending on conditions). For corrected (physically possible) versions of such cycles, see the discussion of cycles.
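The two-state case makes the detailed-balance condition (1) concrete; here is a small numerical sketch (the rate values are arbitrary, chosen only for illustration):

# Two states i, j with rates k_ij (i -> j) and k_ji (j -> i).
k_ij, k_ji = 2.0, 0.5

# Equilibrium populations must satisfy N_i * k_ij = N_j * k_ji,
# i.e. N_j / N_i = k_ij / k_ji. Normalize so that N_i + N_j = 1.
N_i = 1.0 / (1.0 + k_ij / k_ji)
N_j = 1.0 - N_i

flow_forward = N_i * k_ij   # average i -> j flow per unit time
flow_reverse = N_j * k_ji   # average j -> i flow per unit time
print(flow_forward, flow_reverse)   # equal (0.4, 0.4): detailed balance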
Let’s say that we have some classification dataset where items can be categorised as being in one of $N$ classes: $C_1, C_2, \ldots, C_N$. Each example in the dataset has a vector of features, $\boldsymbol{x}$, and a class, $C_k$. For many public datasets, the examples are balanced amongst classes such that $\Pr(C_1) = \Pr(C_2) = \ldots = \Pr(C_N)$. Put another way, if you randomly sample an item from the dataset there is an equal probability of it being in any of the classes. However, when it comes to solving practical problems, balanced datasets are relatively rare. Instead, we have data that is generally imbalanced: $\Pr(C_1) \ne \Pr(C_2) \ne \ldots \ne \Pr(C_N)$.

In this post we will consider classification models which are both probabilistic and discriminative. That is, after training on a classification dataset, the model is able to estimate $\Pr(C_k|\boldsymbol{x})$—the probability of some input $\boldsymbol{x}$ belonging to some class $C_k$.

One of the great things about having a balanced dataset is that it plays quite nicely with the usual algorithms and objective functions used to train classification models (including neural networks). On the other hand, imbalanced datasets can throw a proverbial spanner into the works by making trivial solutions highly attractive to the learning algorithm. For example, if 99% of the examples belong to a single class, 99% accuracy can be attained by simply predicting that everything is in that class!

A simple technique for training on an imbalanced dataset is to draw samples in a way that evens out the distribution of classes. So, whenever you need a training example:

1. Pick a class uniformly at random.
2. Pick an example uniformly at random from within that class.

We’ll refer to the set of examples constructed according to this strategy as the training dataset $A$. Our sampling strategy is specifically designed to be balanced, so ${\Pr}_A(C_k) = \mathrm{constant}$. An observation that we can make now is that our sampling strategy does not change the distribution of examples within each class, since we aren’t doing anything fancy when drawing examples from within a class. This will become important later on. After training on dataset $A$, we have a model for ${\Pr}_A(C_k|\boldsymbol{x})$. But we’re not finished quite yet…

Imagine for a moment that we trained a model which classifies children as future millionaires or not. Clearly there is an imbalance between those two classes, and the training data was sampled accordingly. However, there’s a problem when it comes to evaluation: for examples which contain no strong indicators of future financial success, the model is outputting a 50% chance of future millionairism! This is not altogether surprising, since during training either class was equally likely. But even so, clearly a random child is not so likely to be a millionaire in practice!

What we want to do is incorporate into our predictions prior knowledge of the likelihood of a random example being in a particular class (following our previous example, the probability of a random child becoming a millionaire). Let’s call our evaluation dataset $B$. Our prior is ${\Pr}_B(C_k)$, and in general ${\Pr}_B(C_1) \ne {\Pr}_B(C_2) \ne \ldots \ne {\Pr}_B(C_N)$ (classes are imbalanced). The adjusted version of the model which we are hoping to find is ${\Pr}_B(C_k|\boldsymbol{x})$.
Using Bayes’ theorem, $$ \dfrac{{\Pr}_B(C_k|\boldsymbol{x})} {{\Pr}_A(C_k|\boldsymbol{x})} = \dfrac{{\Pr}_B(\boldsymbol{x}|C_k) {\Pr}_B(C_k) {\Pr}_A(\boldsymbol{x})} {{\Pr}_A(\boldsymbol{x}|C_k) {\Pr}_A(C_k) {\Pr}_B(\boldsymbol{x})} $$ With some minor rearranging for clarity, $$ {\Pr}_B(C_k|\boldsymbol{x}) = {\Pr}_A(C_k|\boldsymbol{x}) \cdot \dfrac{{\Pr}_B(C_k)} {{\Pr}_A(C_k)} \cdot \dfrac{{\Pr}_B(\boldsymbol{x}|C_k)} {{\Pr}_A(\boldsymbol{x}|C_k)} \cdot \dfrac{{\Pr}_A(\boldsymbol{x})} {{\Pr}_B(\boldsymbol{x})} $$ Remember how we observed before that our sampling strategy did not change the distribution of examples within each class? Well, we can write that formally as ${\Pr}_B(\boldsymbol{x}|C_k) = {\Pr}_A(\boldsymbol{x}|C_k)$. That is, $$ \dfrac{{\Pr}_B(\boldsymbol{x}|C_k)} {{\Pr}_A(\boldsymbol{x}|C_k)} = 1 $$ So: $$ {\Pr}_B(C_k|\boldsymbol{x}) = {\Pr}_A(C_k|\boldsymbol{x}) \cdot \dfrac{{\Pr}_B(C_k)} {{\Pr}_A(C_k)} \cdot \dfrac{{\Pr}_A(\boldsymbol{x})} {{\Pr}_B(\boldsymbol{x})} $$ Furthermore, ${\Pr}_A(\boldsymbol{x})$ and ${\Pr}_B(\boldsymbol{x})$ are constant with respect to the class, $C_k$. So: $$ {\Pr}_B(C_k|\boldsymbol{x}) \propto {\Pr}_A(C_k|\boldsymbol{x}) \dfrac{{\Pr}_B(C_k)} {{\Pr}_A(C_k)} $$ Now we know that ${\Pr}_B(C_k|\boldsymbol{x})$ is a probability distribution and therefore the probabilities of all classes must sum to one ($\sum_k{{\Pr}_B(C_k|\boldsymbol{x})}=1$). So, by normalising the probability distribution we get: $$ {\Pr}_B(C_k|\boldsymbol{x}) = \dfrac{ {\Pr}_A(C_k|\boldsymbol{x}) \dfrac{{\Pr}_B(C_k)} {{\Pr}_A(C_k)} }{ \sum_i{ {\Pr}_A(C_i|\boldsymbol{x}) \dfrac{{\Pr}_B(C_i)} {{\Pr}_A(C_i)} } } $$ This formula works regardless of how examples from dataset $A$ are distributed amongst classes. However, since we know that the training data was resampled such that the classes are uniformly distributed, we can make one last simplification by cancelling the constant dataset $A$ class probability terms: $$ {\Pr}_B(C_k|\boldsymbol{x}) = \dfrac{ {\Pr}_A(C_k|\boldsymbol{x}) {\Pr}_B(C_k) }{ \sum_i{ {\Pr}_A(C_i|\boldsymbol{x}) {\Pr}_B(C_i) } } $$ Bingo bango, we’re done. Let’s run through a quick example based on the millionaire detector described earlier on. We’ll say that 5% of people go on to become millionaires: $${\Pr}_B(\mathrm{millionaire}) = 0.05$$ Now we run our classification model on a kid called Billy and get a 70% probability of the child becoming a millionaire: $${\Pr}_A(\mathrm{millionaire}|\mathrm{billy}) = 0.7$$ How good is that for the kid? Let’s find out. Recall that the classifier is assumed to have been trained on a balanced dataset, so we calculate the adjusted probability (based on our prior) as follows: $$ {\Pr}_B(\mathrm{millionaire}|\mathrm{billy}) = \dfrac{ 0.7 \times 0.05 }{ (0.7 \times 0.05) + (0.3 \times 0.95) } \approx 0.11 $$ So little Billy shouldn’t get their hopes up too high, they only have an ~11% probability of becoming a millionaire. Too bad. It’s assumed that the model’s outputs are well-calibrated. If the predicted probabilities aren’t meaningful, the Bayesian evaluation-time strategy described here probably won’t work too well. This article was motivated by a Reddit post about common machine learning interview questions.
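The final adjustment formula is only a few lines of code; here is a minimal sketch (the function name is mine, and the numbers are just the worked example above):

import numpy as np

def adjust_for_prior(p_balanced, prior):
    """Rescale balanced-training probabilities by evaluation-time priors.

    p_balanced: model outputs Pr_A(C_k | x), assuming balanced training classes.
    prior:      evaluation-time class priors Pr_B(C_k).
    """
    p = np.asarray(p_balanced) * np.asarray(prior)
    return p / p.sum()   # renormalize so the probabilities sum to one

# The millionaire example: Pr_A = (0.7, 0.3), prior = (0.05, 0.95).
print(adjust_for_prior([0.7, 0.3], [0.05, 0.95]))   # ~[0.11, 0.89]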
Hello guys! I was wondering if you knew some books/articles that have a good introduction to convexity in the context of variational calculus (functional analysis). I was reading Young's "calculus of variations and optimal control theory" but I'm not that far into the book and I don't know if skipping chapters is a good idea. I don't know of a good reference, but I'm pretty sure that just means that second derivatives have consistent signs over the region of interest. (That is certainly a sufficient condition for Legendre transforms.) @dm__ yes have studied bells thm at length ~2 decades now. it might seem airtight and has stood the test of time over ½ century, but yet there is some fineprint/ loopholes that even phd physicists/ experts/ specialists are not all aware of. those who fervently believe like Bohm that no new physics will ever supercede QM are likely to be disappointed/ dashed, now or later... oops lol typo bohm bohr btw what is not widely appreciated either is that nonlocality can be an emergent property of a fairly simple classical system, it seems almost nobody has expanded this at length/ pushed it to its deepest extent. hint: harmonic oscillators + wave medium + coupling etc But I have seen that the convexity is associated to minimizers/maximizers of the functional, whereas the sign second variation is not a sufficient condition for that. That kind of makes me think that those concepts are not equivalent in the case of functionals... @dm__ generally think sampling "bias" is not completely ruled out by existing experiments. some of this goes back to CHSH 1969. there is unquestioned reliance on this papers formulation by most subsequent experiments. am not saying its wrong, think only that theres very subtle loophole(s) in it that havent yet been widely discovered. there are many other refs to look into for someone extremely motivated/ ambitious (such individuals are rare). en.wikipedia.org/wiki/CHSH_inequality @dm__ it stands as a math proof ("based on certain assumptions"), have no objections. but its a thm aimed at physical reality. the translation into experiment requires extraordinary finesse, and the complex analysis starts with CHSH 1969. etc While it's not something usual, I've noticed that sometimes people edit my question or answer with a more complex notation or incorrect information/formulas. While I don't think this is done with malicious intent, it has sometimes confused people when I'm either asking or explaining something, as... @vzn what do you make of the most recent (2015) experiments? "In 2015 the first three significant-loophole-free Bell-tests were published within three months by independent groups in Delft, Vienna and Boulder. All three tests simultaneously addressed the detection loophole, the locality loophole, and the memory loophole. This makes them “loophole-free” in the sense that all remaining conceivable loopholes like superdeterminism require truly exotic hypotheses that might never get closed experimentally." @dm__ yes blogged on those. they are more airtight than previous experiments. but still seem based on CHSH. urge you to think deeply about CHSH in a way that physicists are not paying attention. ah, voila even wikipedia spells it out! amazing > The CHSH paper lists many preconditions (or "reasonable and/or presumable assumptions") to derive the simplified theorem and formula. For example, for the method to be valid, it has to be assumed that the detected pairs are a fair sample of those emitted. 
In actual experiments, detectors are never 100% efficient, so that only a sample of the emitted pairs are detected. > A subtle, related requirement is that the hidden variables do not influence or determine detection probability in a way that would lead to different samples at each arm of the experiment. ↑ suspect entire general LHV theory of QM lurks in these loophole(s)! there has been very little attn focused in this area... :o how about this for a radical idea? the hidden variables determine the probability of detection...! :o o_O @vzn honest question, would there ever be an experiment that would fundamentally rule out nonlocality to you? and if so, what would that be? what would fundamentally show, in your opinion, that the universe is inherently local? @dm__ my feeling is that something more can be milked out of bell experiments that has not been revealed so far. suppose that one could experimentally control the degree of violation, wouldnt that be extraordinary? and theoretically problematic? my feeling/ suspicion is that must be the case. it seems to relate to detector efficiency maybe. but anyway, do believe that nonlocality can be found in classical systems as an emergent property as stated... if we go into detector efficiency, there is no end to that hole. and my beliefs have no weight. my suspicion is screaming absolutely not, as the classical is emergent from the quantum, not the other way around @vzn have remained civil, but you are being quite immature and condescending. I'd urge you to put aside the human perspective and not insist that physical reality align with what you expect it to be. all the best @dm__ ?!? no condescension intended...? am striving to be accurate with my words... you say your "beliefs have no weight," but your beliefs are essentially perfectly aligned with the establishment view... Last night dream, introduced a strange reference frame based disease called Forced motion blindness. It is a strange eye disease where the lens is such that to the patient, anything stationary wrt the floor is moving forward in a certain direction, causing them have to keep walking to catch up with them. At the same time, the normal person think they are stationary wrt to floor. The result of this discrepancy is the patient kept bumping to the normal person. In order to not bump, the person has to walk at the apparent velocity as seen by the patient. The only known way to cure it is to remo… And to make things even more confusing: Such disease is never possible in real life, for it involves two incompatible realities to coexist and coinfluence in a pluralistic fashion. In particular, as seen by those not having the disease, the patient kept ran into the back of the normal person, but to the patient, he never ran into him and is walking normally It seems my mind has gone f88888 up enough to envision two realities that with fundamentally incompatible observations, influencing each other in a consistent fashion It seems my mind is getting more and more comfortable with dialetheia now @vzn There's blatant nonlocality in Newtonian mechanics: gravity acts instantaneously. Eg, the force vector attracting the Earth to the Sun points to where the Sun is now, not where it was 500 seconds ago. @Blue ASCII is a 7 bit encoding, so it can encode a maximum of 128 characters, but 32 of those codes are control codes, like line feed, carriage return, tab, etc. OTOH, there are various 8 bit encodings known as "extended ASCII", that have more characters. 
There are quite a few 8 bit encodings that are supersets of ASCII, so I'm wary of any encoding touted to be "the" extended ASCII. If we have a system and we know all the degrees of freedom, we can find the Lagrangian of the dynamical system. What happens if we apply some non-conservative forces in the system? I mean how to deal with the Lagrangian, if we get any external non-conservative forces perturbs the system?Exampl... @Blue I think now I probably know what you mean. Encoding is the way to store information in digital form; I think I have heard the professor talking about that in my undergraduate computer course, but I thought that is not very important in actually using a computer, so I didn't study that much. What I meant by use above is what you need to know to be able to use a computer, like you need to know LaTeX commands to type them. @AvnishKabaj I have never had any of these symptoms after studying too much. When I have intensive studies, like preparing for an exam, after the exam, I feel a great wish to relax and don't want to study at all and just want to go somehwere to play crazily. @bolbteppa the (quanta) article summary is nearly popsci writing by a nonexpert. specialists will understand the link to LHV theory re quoted section. havent read the scientific articles yet but think its likely they have further ref. @PM2Ring yes so called "instantaneous action/ force at a distance" pondered as highly questionable bordering on suspicious by deep thinkers at the time. newtonian mechanics was/ is not entirely wrong. btw re gravity there are a lot of new ideas circulating wrt emergent theories that also seem to tie into GR + QM unification. @Slereah No idea. I've never done Lagrangian mechanics for a living. When I've seen it used to describe nonconservative dynamics I have indeed generally thought that it looked pretty silly, but I can see how it could be useful. I don't know enough about the possible alternatives to tell whether there are "good" ways to do it. And I'm not sure there's a reasonable definition of "non-stupid way" out there. ← lol went to metaphysical fair sat, spent $20 for palm reading, enthusiastic response on my leadership + teaching + public speaking abilities, brought small tear to my eye... or maybe was just fighting infection o_O :P How can I move a chat back to comments?In complying to the automated admonition to move comments to chat, I discovered that MathJax is was no longer rendered. This is unacceptable in this particular discussion. I therefore need to undo my action and move the chat back to comments. hmmm... actually the reduced mass comes out of using the transformation to the center of mass and relative coordinates, which have nothing to do with Lagrangian... but I'll try to find a Newtonian reference. One example is a spring of initial length $r_0$ with two masses $m_1$ and $m_2$ on the ends such that $r = r_2 - r_1$ is it's length at a given time $t$ - the force laws for the two ends are $m_1 \ddot{r}_1 = k (r - r_0)$ and $m_2 \ddot{r}_2 = - k (r - r_0)$ but since $r = r_2 - r_1$ it's more natural to subtract one from the other to get $\ddot{r} = - k (\frac{1}{m_1} + \frac{1}{m_2})(r - r_0)$ which makes it natural to define $\frac{1}{\mu} = \frac{1}{m_1} + \frac{1}{m_2}$ as a mass since $\mu$ has the dimensions of mass and since then $\mu \ddot{r} = - k (r - r_0)$ is just like $F = ma$ for a single variable $r$ i.e. 
a spring with just one mass @vzn It will be interesting if a de-scarring followed by a re scarring can be done in some way in a small region. Imagine being able to shift the wavefunction of a lab setup from one state to another thus undo the measurement, it could potentially give interesting results. Perhaps, more radically, the shifting between quantum universes may then become possible You can still use Fermi to compute transition probabilities for the perturbation (if you can actually solve for the eigenstates of the interacting system, which I don't know if you can), but there's no simple human-readable interpretation of these states anymore @Secret when you say that, it reminds me of the no cloning thm, which i have always been somewhat dubious/ suspicious of. it seems like theyve already experimentally disproved the no cloning thm in some sense.
Spectroscopy is the study of the interaction of radiation with matter. We know that radiation of frequency \(\nu\) consists of photons whose energy is given by Planck’s law: \(E = h\nu\), where \(h\) is Planck’s constant, \(6.626068 \times 10^{-34} \;m^2 kg/s\). The speed of electromagnetic radiation in vacuum is \(c\), which is equal to 299,792,458 m/s. Since \(c = \lambda\nu\), the original equation can be written in the alternate form \(E = \dfrac{hc}{\lambda} = hc\tilde{\nu}\), where \(\tilde{\nu} = \dfrac{1}{\lambda}\) is the inverse wavelength, or wavenumber (cm\(^{-1}\)). The effect that a photon will have on matter or a molecule will depend on \(E\), and thus on \(\tilde{\nu}\).

For \(\tilde{\nu} < 10^{-2}\) cm\(^{-1}\) (\(\lambda \sim 1\) m), we have radio frequencies. These photons are too low in energy for anything except to affect the magnetic energy of a nucleus in an external magnetic field (NMR spectroscopy).

For \(10^{-2}\) cm\(^{-1} < \tilde{\nu} < 10\) cm\(^{-1}\) (\(\lambda \sim 1\) cm), we have microwave frequencies. Photons have enough energy to be absorbed by unpaired electron spins in an external magnetic field (ESR) or to change the rotational energy of a molecule (microwave rotational spectroscopy).

For \(10\) cm\(^{-1} < \tilde{\nu} < 10^4\) cm\(^{-1}\) (\(\lambda \sim 5\) μm), we have infrared frequencies. Photons have sufficient energy to be absorbed into the vibrational motion of the molecules. This is called vibrational spectroscopy.

For \(10^4\) cm\(^{-1} < \tilde{\nu} < 10^5\) cm\(^{-1}\) (\(\lambda < 1\) μm), we have visible and UV spectroscopy, which involves excitations of (valence) electrons from stable orbits to higher energy orbits in the molecule. Electronic spectroscopy (UV-VIS).

For \(10^5\) cm\(^{-1} < \tilde{\nu} < 10^6\) cm\(^{-1}\), we have vacuum UV, where the photons have enough energy that, if absorbed by a valence electron, the electron can be “knocked” out of the molecule. This is called photoelectron spectroscopy.

For \(10^6\) cm\(^{-1} < \tilde{\nu} < 10^8\) cm\(^{-1}\), these are X-rays, which have enough energy to ionize not only valence electrons, but also core electrons. This spectroscopy is called X-ray Photoelectron Spectroscopy (XPS); related techniques are Extended X-ray Absorption Fine Structure (EXAFS) and X-ray Absorption Near Edge Structure (XANES).

For \(\tilde{\nu} > 10^8\) cm\(^{-1}\), these are very energetic gamma rays and are not used extensively for spectroscopy by chemists. One key exception is Mössbauer spectroscopy, which involves photon energies high enough to promote changes in the nuclei of the atoms.
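For a rough sense of scale, photon energies follow directly from \(E = hc\tilde{\nu}\); a small sketch (constants rounded):

# Photon energy from wavenumber: E = h * c * nu_tilde, with nu_tilde = 1/lambda.
h = 6.626e-34        # Planck's constant, J s
c = 2.99792458e8     # speed of light, m/s

def photon_energy(wavenumber_cm):
    """Energy in joules of a photon with the given wavenumber in cm^-1."""
    return h * c * wavenumber_cm * 100.0   # convert cm^-1 to m^-1

# A mid-infrared photon at 1000 cm^-1 (lambda = 10 micrometers):
print(photon_energy(1000))   # ~2.0e-20 J, typical of vibrational transitions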
Given a directed graph $G = (V,E)$ with positive edge weights, find the minimum cost path between $s$ and $t$ that traverses exactly $k$ edges. Here is my attempt using a flow network: \begin{align} \min \sum_{(i,j) \in E} c_{ij}x_{ij} \end{align} \begin{align} \sum_{j \in V} x_{ji} - \sum_{j \in V} x_{ij} &= \begin{cases} -1,& \text{if } i = s\\ 1, & \text{if } i = t\\ 0, & \text{otherwise}\\ \end{cases} && \forall i \in V\\ \sum_{(i,j) \in E} x_{ij} &= k &&\\ x_{ij} &\geq 0 && \forall (i,j) \in E\\ \end{align} However, this doesn't eliminate cycles which are disjoint from the $s$-$t$ path. I can use subtour elimination constraints like in the Miller Tucker Zemlin formulation of the Travelling Salesman Problem but this enforces a simple path making the problem harder than it should be. Any ideas on alternative formulations? Update: This is part of a slightly bigger formulation here, where $z$ is a slack variable which scalarizes multiple objectives with a rectified-linear function: \begin{align} \min \sum_{k \in K} z_{k} \end{align} \begin{align} \sum_{(i,j) \in E} c^{k}_{ij}x_{ij} - z_{k} &\leq 1 && \forall k \in K\\ \sum_{j \in V} x_{ji} - \sum_{j \in V} x_{ij} &= \begin{cases} -1,& \text{if } i = s\\ 1, & \text{if } i = t\\ 0, & \text{otherwise}\\ \end{cases} && \forall i \in V\\ x_{ij} &\geq 0 && \forall (i,j) \in E\\ \end{align} Solutions are integral when solving with a linear program, but suggestions for more efficient algorithms would be appreciated.
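One observation on the algorithmic side (a sketch, not a full answer): for a single cost vector, the exactly-$k$-edge minimum cost can be computed with a Bellman-Ford-style dynamic program over edge counts instead of an ILP. Like the flow formulation above, it optimizes over walks, so vertices and edges may repeat; node labels and the function name below are illustrative:

import math

def k_edge_shortest_path(edges, n, s, t, k):
    """Minimum-cost s-t walk using exactly k edges.

    edges: list of (u, v, cost) with nodes labelled 0..n-1.
    Runs in O(k * |E|) time; d[v] after round j holds the cheapest
    cost of reaching v from s using exactly j edges.
    """
    d = [math.inf] * n
    d[s] = 0.0
    for _ in range(k):
        nxt = [math.inf] * n
        for u, v, c in edges:
            if d[u] + c < nxt[v]:
                nxt[v] = d[u] + c
        d = nxt
    return d[t]

edges = [(0, 1, 1.0), (1, 2, 1.0), (0, 2, 5.0), (2, 1, 1.0)]
print(k_edge_shortest_path(edges, 3, 0, 2, 2))   # 2.0 via 0-1-2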
Architecture

RNNs are generally used for sequence modeling (e.g. language modeling, time series modeling, …). Unfolding an RNN network (many-to-many scenario) can be presented as follows: When training the model, we need to find W that minimizes the sum of losses.

Multilayer RNN

A multilayer RNN is a neural network with multiple RNN layers.

Backpropagation through time

To train an RNN, we need to calculate the gradient to update parameters. Instead of doing the derivations for the full network, we will focus only on one hidden unit. We define \(s_t = g(W.x_t + U.s_{t-1} + b) \), where g is an activation function. Using the chain rule we can find that: \(\frac{\partial loss(y_t,\hat{y}_t)}{\partial W} = \frac{\partial loss(y_t,\hat{y}_t)}{\partial \hat{y}_t}.\frac{\partial \hat{y}_t}{\partial s_t}.\frac{\partial s_t}{\partial W} = \frac{\partial loss(y_t,\hat{y}_t)}{\partial \hat{y}_t}.\frac{\partial \hat{y}_t}{\partial s_t}.(\sum_{k=0}^{t} \frac{\partial s_t}{\partial s_k}.\frac{\partial s_k}{\partial W})\)

Vanishing gradient

The term in red in the equation is a product of Jacobians (\(\frac{\partial s_t}{\partial s_k} = \prod_{i=k+1}^{t} \frac{\partial s_{i}}{\partial s_{i-1}}\)): \(\frac{\partial loss(y_t,\hat{y}_t)}{\partial \hat{y}_t}.\frac{\partial \hat{y}_t}{\partial s_t}.(\sum_{k=0}^{t} \color{red} {\frac{\partial s_t}{\partial s_k}}.\frac{\partial s_k}{\partial W})\) Because the derivatives of the activation functions (except ReLU) are less than 1, when evaluating the gradient the red term tends to converge to 0, so the model becomes more biased and captures fewer long-range dependencies.

Exploding gradient

An RNN is trained by backpropagation through time. When the gradient is passed back through many time steps, it tends to vanish or to explode.

Gradient Clipping

Gradient clipping is a technique to prevent exploding gradients in very deep networks, typically recurrent neural networks. There are various ways to perform gradient clipping, but the most common one is to normalize the gradients of a parameter vector when its L2 norm exceeds a certain threshold, according to: \(gradients := gradients * \frac{threshold}{\left\| gradients \right\|_2}\).

Long Short Term Memory (LSTM)

LSTM was introduced to solve the vanishing gradient problem. In an LSTM, there are 4 gates that regulate the state of an RNN model:

Write gate (\(i \in [0,1]\)) (Input gate): Write to memory cell.
Keep gate (\(f \in [0,1]\)) (Forget gate): Erase memory cell.
Read gate (\(o \in [0,1]\)) (Output gate): Read from memory cell.
Gate gate (\(g \in [-1,1]\)) (Update gate): How much to write to memory cell.

\(c_t\) is called the memory state at time t. i, f, o, g, h, c are vectors having the same size. W is a matrix of size \(n \times h\). The gate states are calculated as follows: \(i = \sigma(W_i x_t + U_i h_{t-1} + b_i)\), \(f = \sigma(W_f x_t + U_f h_{t-1} + b_f)\), \(o = \sigma(W_o x_t + U_o h_{t-1} + b_o)\), \(g = \tanh(W_g x_t + U_g h_{t-1} + b_g)\). The output \(h_t\) is calculated using the following formulas: \(c_t = f.c_{t-1} + i.g \\ h_t = o.\tanh(c_t)\)

Gated Recurrent Unit (GRU)

The Gated Recurrent Unit is a simplified version of an LSTM unit with fewer parameters. Just like an LSTM cell, it uses a gating mechanism to allow RNNs to efficiently learn long-range dependencies by preventing the vanishing gradient problem. The GRU consists of reset and update gates that determine which part of the old memory to keep or update with new values at the current time step.
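A minimal numpy sketch of the clip-by-norm rule above (illustrative only; deep learning frameworks ship equivalent utilities):

import numpy as np

def clip_by_global_norm(gradients, threshold):
    """Rescale a list of gradient arrays if their global L2 norm exceeds threshold."""
    total_norm = np.sqrt(sum(np.sum(g**2) for g in gradients))
    if total_norm > threshold:
        scale = threshold / total_norm
        gradients = [g * scale for g in gradients]
    return gradients

grads = [np.array([3.0, 4.0]), np.array([12.0])]     # global norm = 13
clipped = clip_by_global_norm(grads, 5.0)
print(np.sqrt(sum(np.sum(g**2) for g in clipped)))   # 5.0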
SciPost Submission Page

Quantum robustness and phase transitions of the 3D Toric Code in a field
by D. A. Reiss, K. P. Schmidt

This is not the current version.

Submission summary
As Contributors: Kai Phillip Schmidt
Arxiv Link: https://arxiv.org/abs/1902.03908v1
Date submitted: 2019-02-14
Submitted by: Schmidt, Kai Phillip
Submitted to: SciPost Physics
Domain(s): Theoretical
Subject area: Quantum Physics

Abstract
We study the robustness of 3D intrinsic topological order under external perturbations by investigating the paradigmatic microscopic model, the 3D toric code in an external magnetic field. Exact dualities as well as variational calculations reveal a ground-state phase diagram with first and second-order quantum phase transitions. The variational approach can be applied without further approximations only for certain field directions. In the general field case, an approximative scheme based on an expansion of the variational energy in orders of the variational parameters is developed. For the breakdown of the 3D intrinsic topological order, it is found that the (im-)mobility of the quasiparticle excitations is crucial, in contrast to their fractional statistics.

Reports on this Submission

Report 1 by Irénée Frerot on 2019-4-18
Invited Report

Strengths
1- The introduction offers a very useful entry to the literature on the 2d and 3d toric code, as well as related models and implementations.
2- The properties of the unperturbed 3d toric code on a cubic lattice are well exposed, in a manner accessible to the newcomer.
3- The phase diagram under an external field is identified consistently through a variety of methods (exactly in certain limits, and by variational methods).
4- Throughout the manuscript, the authors try to provide a physical intuition of the mechanisms at play, as well as comparisons with the better-known 2d toric code.

Weaknesses
1- The technicalities related to the variational determination of the ground state are hard to follow.

Report
In the paper, the authors study the 3d toric code on a cubic lattice, which displays a topologically ordered ground state, and focus on the phase transitions towards a trivial state induced by an external uniform magnetic field. After a detailed introduction which clearly motivates their study, the authors describe the model and its ground state properties. Then, applying a combination of methods (pCUT, exact dualities and variational computations), they reconstruct the phase diagram of the toric code under a uniform magnetic field in an arbitrary direction. Throughout this study, they provide a detailed description of the physical mechanisms induced by the external field, and compare them with the 2d case. Finally, a long discussion summarizes the main results of the paper. Overall, I find the paper extremely well written, and accessible to the newcomer to the field. Although the technicalities associated with pCUT and the variational calculations are difficult to follow, their outputs are clearly summarized in physical terms. The phase diagram is convincingly reconstructed via various independent means. For these reasons, I recommend the publication of the manuscript.

Requested changes
1- Eq. (6) should be explained a bit more. The authors should explain why the ground-state degeneracy is given by $2^{N_{spins}} / 2^{N_{constraints}}$.
2- I think that Eq. (8) contains a typo.
First, I would advise the authors not to use m as a variable inside the summation, as it brings confusion with the superscript m in $P^m_{xy}$. Then, I think that a $b_z$ is missing in front of the term $(n_z + 1/2)$.
3- Eq. (9) contains a typo: the last term is $b_z = (0, 0, 1)$.
4- p.7, first line: I would suggest adding "In the loop-soup picture of the ground state, these operators measure the parity..."
5- Just after Eq. (11): "with some fixed $(n_y, n_z) \in Z$..."
6- Footnote 3: "...can also be viewed, in the light of quantum codes, as the..." (with commas)
7- After Eq. (18): is the ground state energy really equal to 4N? I would say that $E_0 = -(1/2)N_{stars} -(1/2)N_{plaquettes} = -N/2 - 3N/2 = -2N$, am I correct?
8- Eq. (19): what does the prefactor $1 / (2j)$ in front of the last term mean?
9- In Eq. (21) and in other places, the authors use the symbol "=" with a "!" on top of it: I have never seen this symbol and some explanation would be welcome.
10- After Eq. (33): "...the two limiting cases $\alpha=\beta=1$ and $\alpha=\beta=0$ are exactly..." and "For $\alpha=\beta=1$, the normalization..."
I am interested in the relation between the Atiyah-Patodi-Singer $\eta$ invariant and spin topological quantum field theory. In the paper "Gapped Boundary Phases of Topological Insulators via Weak Coupling" https://arxiv.org/abs/1602.04251 by Seiberg and Witten, they presented such a relation between the two. Let the $3+1$ dimensional manifold $X$ be the world volume of a topological insulator. Its boundary $W$ is a $2+1$ dimensional manifold. Let $\chi$ be a massless Dirac fermion on the boundary manifold $W$. Then, integrating out the boundary fermion $$\int_{W}d^{3}x\,i\bar{\chi}D \!\!\!\!/\,\chi\,, $$ where $D_{\mu}=\partial_{\mu}+iA_{\mu}$, one has the partition function $$\mathcal{Z}=|\det(iD)|e^{-i\pi\eta(A)/2}.$$ So far, this is just the standard parity anomaly in odd dimensions. On page 35, the authors claimed that the factor $\Phi=e^{-i\pi\eta/2}$ is actually the partition function of a topological quantum field theory, called the spin-Ising TQFT. The name comes from the fact that it is related to the 2D Ising model of CFT. The authors explained that this is due to the Freed-Dai theorem. https://arxiv.org/abs/hep-th/9405012 I don't really understand much of the Freed-Dai paper because of its heavy mathematics. But from my understanding, it is saying that the $\eta$ invariant is some kind of topological invariant and cobordism invariant, which satisfies the gluing and surgery axioms of TQFT. Thus, the factor $\Phi=e^{-i\pi\eta/2}$ can be treated as the partition function of some TQFT. Now the question is why this TQFT is the so-called spin-Ising TQFT. The authors claim that the partition function of the spin-Ising TQFT should be of modulus $1$ because the $\mathbb{Z}_{2}$ chiral ring generated by the field $\psi$ (from the 2D Ising model $\left\{1,\sigma,\psi\right\}$) has only one representation. Question 1: Why does the fact that the chiral algebra has only one representation make its partition function have modulus $1$? The authors then showed an example, taking the manifold $W$ to be $S^{2}\times S^{1}$, that the partition function of the corresponding spin-Ising TQFT is $\pm 1$, which is indeed a phase. Then, by the Freed-Dai theorem, they claimed that in general the partition function of a spin-Ising TQFT should be $\Phi=e^{-i\pi\eta/2}$. Could anyone please enlighten me on how one should apply that theorem to this case? The authors explained in the following that if $W$ has a boundary $\Sigma$, then the product of the path integral of the chiral fermion $\psi$ on $\Sigma$ and the factor $\Phi$ is smooth and well-defined because of the Freed-Dai theorem. However, in our case, the manifold $W$ itself is the boundary of the $3+1$-manifold $X$, and so $W$ has no boundary at all. How should one understand the explanation provided by the authors? I also posted my question at https://physics.stackexchange.com/q/436255 New Edition: Suppose the $2+1$ dimensional manifold $W^{\prime}$ has a boundary $\Sigma$. There is a fermionic field $\psi$ defined on the boundary $\Sigma$. From the Freed-Dai theorem explained in the paper by Seiberg and Witten, the following path integral is smooth and well-defined: $$e^{-i\pi\eta(W^{\prime})/2}\mathcal{Z}_{\psi}[\Sigma],$$ where $\mathcal{Z}_{\psi}[\Sigma]$ is the partition function of the $2D$ fermion $\psi$ on $\Sigma$. The authors claimed that this fermion is related to the spin-Ising model of $2D$ CFT.
Is it true that this is just the holomorphic sector of the free Majorana fermion in $2D$? The motivation behind the above question is that I found that the Ising model in RCFT really looks like a sum over even spin structures of the $2D$ free Majorana fermion. If this were true, does it mean that $$\mathcal{Z}_{\psi}[\Sigma]=\det(i\bar{\partial})?$$
Mini-course on multiplicative functions

Speaker(s): Kaisa Matomäki (University of Turku), Maksym Radziwill (McGill University)
Location: MSRI: Simons Auditorium
Tags/Keywords: multiplicative functions, smooth numbers, prime numbers, Chowla's Conjecture

The mini-course will be an introduction to the theory of general multiplicative functions and in particular to the theorem of Matomaki-Radziwill on multiplicative functions in short intervals. The theorem says that, for any multiplicative function $f: \mathbb{N} \to [-1, 1]$ and any $H \to \infty$ with $X \to \infty$, the average of $f$ in almost all short intervals $[x, x+H]$ with $X \leq x \leq 2X$ is close to the average of $f$ over $[X, 2X]$. In the first lecture we will cover briefly the "pretentious theory" developed by Granville-Soundararajan and a selection of some of the key theorems: Halasz's theorem, the Lipschitz behaviour of multiplicative functions, Shiu's bound, ... We will also describe some consequences of the Matomaki-Radziwill theorem. In the second lecture we will develop sufficient machinery to prove a simple case of the latter theorem for the Liouville function in intervals of length $x^{\varepsilon}$. In the third lecture we will explain the proof of the full result. Time permitting, we will end by discussing some open challenges.
The derivation begins by expressing the problem (which is to find the minimum value of a functional \(S(q_j(x),q_j’(x),x)\)) in the language of single-variable calculus; that is, we’ll want to express the functional \(S(q_j(x),q_j’(x),x)\) as a function of a single variable \(ε\) (which I’ll describe later) so that we can use the techniques of single-variable calculus to find the minimum value of \(S(ε)\), which occurs when \(\frac{d}{dε}(S(ε))=0\). Later on, we’ll deal with the more general case in which we solve for the stationary points of \(S(ε)\).

Let the set of coordinates \(q_j(x)\) be generalized coordinates which are dependent variables of the independent variable \(x\). Let the quantity \(S\) be a parametric quantity whose magnitude is equal to the length of the curve \(c\), where \(c\) can be any arbitrary curve. (This length specifies the magnitude of our parametric quantity—which isn’t limited to being just physical length but can also be an action, a period of time, and so on.) Let the two coordinates \((q_j(x_1),x_1)\) and \((q_j(x_2),x_2)\) denote the initial and final coordinate values associated with a system, respectively. In many physics problems, one of these coordinates is typically taken to be a time coordinate (in which case we’d replace the independent variable \(x\) with \(t\)) and the rest of the coordinates are typically taken to denote whichever spatial coordinates are the most convenient for a given problem; but in geometrical problems the generalized coordinates are, of course, taken to be all spatial coordinates. The choice of what kinds of generalized coordinates to use really just depends on the problem you’re trying to solve. We’ll let \(S\) be any parametric quantity associated with a system going from \((q_j(x_1),x_1)\) to \((q_j(x_2),x_2)\), even those which are not minimized.

Now, the whole purpose of this section will be to find the minimum value of \(S\)—those points at which the parametric quantity does not change with respect to the variables it depends on. But to do this, we must first write an expression which determines the length \(S\) of any arbitrary curve\(^1\). How does one calculate the magnitude \(S\)? To do this, let’s divide the curve \(c\) into infinitely many, infinitesimally small line segments of length \(dS\). By taking the infinite sum (which is to say, by taking the integral) of all these small lengths \(dS\), we find that the magnitude of \(S\) is given by $$S=\int{dS}.\tag{1}$$ Equation (1) is nice and all, but we should re-express it in terms of something which can be calculated in terms of the independent variable \(x\). As a first step towards doing this, we can rewrite the length \(dS\) using the Pythagorean Theorem to obtain \(dS=\sqrt{dx^2+dq_j^2}\). Let’s substitute this equation into Equation (1) to get $$S=\int_?^?\sqrt{dx^2+dq_j^2}.\tag{2}$$ I have written the question marks in the limits of integration to denote that I’m leaving them out for the moment. Using algebraic manipulations, we can express the integral with respect to the independent variable \(x\) to obtain $$S(q_j,q_j’,x)=\int_{x_1}^{x_2}\sqrt{1+\biggl(\frac{dq_j}{dx}\biggl)^2}dx=\int_{x_1}^{x_2}L(q_j,q_j’,x)dx,\tag{3}$$ where the integrand is some functional of \(q_j(x)\), \(q_j’(x)\) and \(x\) and is denoted by \(L(q_j(x),q_j’(x),x)\). (A functional is something which is a function of a function.)
To find the minimum of \(q_j(x)\) would involve a procedure which you are already familiar with: the minimum occurs at the point where \(q_j(x)\) will not change (up to first order\(^2\)) with a small change in \(x\); or, written in another way, where \(\frac{dq_j(x)}{dx}=0\). Finding the minimum value of \(S\) isn’t quite so simple. The minimum value of \(S\) corresponds to a point where \(S\) does not change, up to first order, with small changes in \(q_j\), \(q_j’\) and \(x\). To find this minimum, we must use a technique known as the calculus of variations: this is, basically, a procedure in which we use clever techniques to express \(S\) as a function of a single independent variable so that we can use the techniques of single-variable calculus in order to find its minimum value.

The first step necessary to accomplish this goal will be to assume that there is a curve \(\bar{q}_j(x)\) which is that particular curve whose arc length \(S(\bar{q}_j(x),\bar{q}_j'(x),x)\) is minimized. As previously mentioned, we shall let \(q_j(x)\) represent any curve between \(q_j(x_1)\) and \(q_j(x_2)\) so long as it is everywhere smooth and continuous. We shall, however, require the two constraints that \(\bar{q}_j(x_1)=q_j(x_1)\) and \(\bar{q}_j(x_2)=q_j(x_2)\). We shall now define a new function \(\eta(x)\) which we will let be any smooth curve such that \(\eta(x_1)=0\) and \(\eta(x_2)=0\). Let’s also define a parameter \(\epsilon\) by the equation $$q_j(x)=\bar{q}_j(x)+\epsilon\eta(x).\tag{4}$$ The product \(\epsilon\eta(x)\) is the error between the “correct path” \(\bar{q}_j(x)\) (the one whose arc length is minimized) and the arbitrarily chosen path \(q_j(x)\). By simply letting \(\eta(x)\) be a particular function (pick any you like, so long as it satisfies the aforementioned constraints), we can vary \(q_j\) with the single parameter \(\epsilon\) and write \(q_j(\epsilon)\).

The previous sentence, for the purpose of comprehensibility, requires a little explanation. For the two fixed initial conditions \((q_j(x_1),x_1)\) and \((q_j(x_2),x_2)\), the function \(q_j(x)\) does not vary with the two functions \(\bar{q}_j(x)\) and \(\eta(x)\). The reason why \(q_j(x)\) does not vary with \(\bar{q}_j(x)\) is that \(\bar{q}_j(x)\) will not change regardless of what \(q_j(x)\) is—\(\bar{q}_j(x)\) depends only upon the initial conditions \((q_j(x_1),x_1)\) and \((q_j(x_2),x_2)\). Basically, it would be very easy to see visually, on a graph, that by choosing two different initial conditions, the shortest path (\(\bar{q}_j\)) connecting those two points will also have to be different. Lastly, since we let \(\eta(x)\) be a particular function, it follows that it also only depends on the initial conditions. (As you move the two points \((q_j(x_1),x_1)\) and \((q_j(x_2),x_2)\) apart or towards each other, you could imagine \(\eta(x)\) having to elongate or contract.) It follows that \(q_j(x)\) is, therefore, not a function of \(\eta(x)\). I have shown in Figure 1 how \(\eta\) (due to the way in which we defined it by Equation (4)) varies with \(x\) in such a way that by adding \(\epsilon\eta(x)\) to the "correct function" \(\bar{q}_j(x)\), we always manage to land on \(q_j(x)\). Now, \(q_j(x)\) represents "any" arbitrary curve; indeed, we could change \(q_j(x)\) to whatever we wanted and \(\epsilon\) would still satisfy Equation (4).
In other words, we could just add a different function \(\epsilon\eta(x)\) (where \(\epsilon\) changed a little but \(\eta(x)\) did not) to \(\bar{q}_j(x)\) and land on \(q_j(x)\) again as in Figure 1. What all of this means is that the only thing which \(q_j\) depends on in Equation (4) is \(\epsilon\); therefore, we can write $$q_j(\epsilon)=\bar{q}_j+\epsilon\eta.\tag{5}$$ By taking the derivative with respect to \(x\) on both sides, we get $$q_j'(\epsilon)=\bar{q}_j'+\epsilon\eta'.\tag{6}$$ At this point, we are now able to express the functional \(S(q_j(x),q_j'(x),x)\) as the function \(S(\epsilon)\). The minimum value of \(S(ε)\) occurs at a point where \(\frac{dS(ε)}{dε}=0\). In order to investigate the mathematical relationships which satisfy this condition (the condition that \(S(ε)\) is minimized), let’s differentiate both sides of Equation (3), set the result equal to zero, and then proceed to use algebra to find the mathematical relationships which satisfy this condition. Starting with the first step, we have $$\frac{dS(ε)}{dε}=\int_{x_1}^{x_2}\frac{∂}{∂ε}[L(q_j,q_j’,x)]dx=0.\tag{7}$$ (To clarify any potential confusion, I took the partial derivative \(∂_ε\) on both sides; since the function \(S(ε)\) on the left-hand side is a single-variable function, it follows that \(∂_εS(ε)=\frac{dS(ε)}{dε}\).) Since \(L(q_j,q_j’,x)\) is a functional, in order to evaluate the partial derivative \(∂_εL(q_j,q_j’,x)\), we must use the chain rule to get $$\frac{dS(ε)}{dε}=\int_{x_1}^{x_2}\biggl(\frac{∂L}{∂q_j}\frac{∂q_j}{∂ε}+\frac{∂L}{∂q_j’}\frac{∂q_j’}{∂ε}\biggl)dx=0.\tag{8}$$ Let’s evaluate the partial derivatives \(∂/∂ε[q_j(\epsilon)]\) and \(∂/∂ε[q_j’(\epsilon)]\) to get $$\frac{∂q_j(\epsilon)}{∂ε}=\frac{∂}{∂ε}(\bar{q}_j(x)+ε\eta(x))=\eta(x)$$ and $$\frac{∂q_j’(\epsilon)}{∂ε}=\frac{∂}{∂ε}(\bar{q}_j’(x)+ε\eta’(x))=\eta’(x).$$ Let’s substitute these results into Equation (8) to get $$\frac{dS(ε)}{dε}=\int_{x_1}^{x_2}\biggl(\frac{∂L}{∂q_j}\eta(x)+\frac{∂L}{∂q_j’}\eta’(x)\biggl)dx=\int_{x_1}^{x_2}\frac{∂L}{∂q_j}\eta(x)dx+\int_{x_1}^{x_2}\frac{∂L}{∂q_j’}\eta'(x)dx=0.\tag{9}$$ There is great value in employing integration by parts on the second integral in Equation (9) since it’ll allow us to rewrite the integrand in the form \(\text{‘some stuff’ times }\eta=0\); this form has the equations of motion right in front of our face, as we shall see. From the standpoint of physics, the motivation for this is apparent, as the equations of motion will allow us to determine the motion of a system. Recall that the formula for integration by parts is $$\int_{x_1}^{x_2}u\,dv=uv\Big|_{x_1}^{x_2}-\int_{x_1}^{x_2}v\,du.$$ If we let \(u=∂L/∂q_j’\) and \(dv=\eta’(x)dx\), then our second integral can be simplified to $$\int_{x_1}^{x_2}\eta’(x)\frac{∂L}{∂q_j’}dx=\frac{∂L}{∂q_j’}\eta(x)\Big|_{x_1}^{x_2}-\int_{x_1}^{x_2}\eta(x)\frac{d}{dx}\frac{∂L}{∂q_j’}dx=-\int_{x_1}^{x_2}\eta(x)\frac{d}{dx}\frac{∂L}{∂q_j’}dx,$$ where the boundary term vanishes because \(\eta(x_1)=\eta(x_2)=0\). Let’s substitute this result into Equation (9) to get $$\frac{dS(ε)}{dε}=\int_{x_1}^{x_2}\eta(x)\biggl[\frac{∂L}{∂q_j}-\frac{d}{dx}\frac{∂L}{∂q_j’}\biggl]dx=0.\tag{10}$$ Since \(\eta(x)\) can be any arbitrary function satisfying the endpoint constraints, the only way the integral in Equation (10) can vanish for every such \(\eta(x)\) is for the bracketed term to be zero everywhere; therefore we have $$\frac{∂L}{∂q_j}-\frac{d}{dx}\frac{∂L}{∂q_j’}=0.\tag{11}$$ Equation (11) is known as the Euler-Lagrange equation and it is the mathematical consequence of minimizing a functional \(S(q_j(x),q_j’(x),x)\).
It is a differential equation which can be solved for the dependent variable(s) \(q_j(x)\) such that the functional \(S(q_j(x),q_j’(x),x)\) is minimized. The next few sections will be concerned with different problems in which the question starts off as: find the minimum value of some quantity \(S\). These problems start off with a little math to express the quantity as a functional. All of the problems boil down to solving for the coordinates \(q_j(x)\) which minimize \(S\); this will be accomplished by solving Equation (11). Although simple to say, we shall see that this can sometimes involve a lot of algebra and tinkering; the math will sometimes get a little hairy.

This article is licensed under a CC BY-NC-SA 4.0 license.

Notes

1. When we think about the curve \(q_j(x)\) which minimizes the quantity \(S=\int{\sqrt{dq_j^2+dx^2}}\), it is important not to lose track of the generality of our choice of coordinates \(q_j\) and \(x\). In some problems, we'll just choose \(q_j\) and \(x\) to be spatial coordinates, in which case \(S=\int{\sqrt{dq_j^2+dx^2}}\) is a measure of distance; but in other problems, we'll choose \(x\) to be a time coordinate, in which case \(S=\int{\sqrt{dq_j^2+dx^2}}\) is not a measure of distance. I wanted to mention this early on because a common confusion and ambiguity is whether or not the derivation we did in this section applies only to functionals \(S\) which measure length. Be reassured that this is not the case; \(S\) can measure many other things besides length, as we'll see in subsequent sections where we solve some problems using the analysis we developed in this section.

2. The minimum value of some arbitrary single-variable function, say \(y(t)\), occurs when \(\frac{dy(t)}{dt}=0\). This condition implies that for a very small change in time \(dt\), the change in the function is \(dy(t)=0\). You might be wondering: “if \(t\) changed by a very small amount, then why didn’t \(y(t)\) change by a very small amount as well?” In reality, \(y(t)\) did in fact change a little: but this change is captured only in 2nd order (and higher) derivatives and, according to Feynman, “the deviation of the function from its minimum value is only second order [or higher].” The full expression describing the differential change in \(y(t)\) involves, in general, derivatives of every order. In this example, the change in \(y(t)\) to first order is zero. The terminology used to describe this is as follows: we say that “the function \(y(t)\) does not change up to first order.”
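As a quick illustration of Equation (11) in action, take the arclength integrand from Equation (3) for a single coordinate, \(L=\sqrt{1+(q’)^2}\). Since \(L\) has no explicit \(q\) dependence, $$\frac{∂L}{∂q}=0,\qquad\frac{∂L}{∂q’}=\frac{q’}{\sqrt{1+(q’)^2}},$$ so Equation (11) reduces to $$\frac{d}{dx}\Biggl(\frac{q’}{\sqrt{1+(q’)^2}}\Biggr)=0\quad\Rightarrow\quad q’=\text{constant},$$ which gives \(q(x)=mx+b\): the curve of minimum length between two fixed points is a straight line, just as we would expect.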
Learning Objectives Make sure you thoroughly understand the following essential ideas: Define Avogadro's number and explain why it is important to know. Define the mole. Be able to calculate the number of moles in a given mass of a substance, or the mass corresponding to a given number of moles. Define molecular weight, formula weight, and molar mass; explain how the latter differs from the first two. Be able to find the number of atoms or molecules in a given weight of a substance. Find the molar volume of a solid or liquid, given its density and molar mass. Explain how the molar volume of a metallic solid can lead to an estimate of atomic diameter. The chemical changes we observe always involve discrete numbers of atoms that rearrange themselves into new configurations. These numbers are HUGE— far too large in magnitude for us to count or even visualize, but they are still numbers, and we need to have a way to deal with them. We also need a bridge between these numbers, which we are unable to measure directly, and the weights of substances, which we do measure and observe. The mole concept provides this bridge, and is central to all of quantitative chemistry. Counting Atoms: Avogadro's Number Owing to their tiny size, atoms and molecules cannot be counted by direct observation. But much as we do when "counting" beans in a jar, we can estimate the number of particles in a sample of an element or compound if we have some idea of the volume occupied by each particle and the volume of the container. Once this has been done, we know the number of formula units (to use the most general term for any combination of atoms we wish to define) in any arbitrary weight of the substance. The number will of course depend both on the formula of the substance and on the weight of the sample. However, if we consider a weight of substance that is the same as its formula (molecular) weight expressed in grams, we have only one number to know: Avogadro's number. Avogadro's number Avogadro's number is known to ten significant digits: \[N_A = 6.022141527 \times 10^{23}.\] However, you only need to know it to three significant figures: \[N_A \approx 6.02 \times 10^{23}. \label{3.2.1}\] So \(6.02 \times 10^{23}\) of what? Well, of anything you like: apples, stars in the sky, burritos. However, the only practical use for \(N_A\) is to have a more convenient way of expressing the huge numbers of the tiny particles such as atoms or molecules that we deal with in chemistry. Avogadro's number is a collective number, just like a dozen. Students can think of \(6.02 \times 10^{23}\) as the "chemist's dozen". Before getting into the use of Avogadro's number in problems, take a moment to convince yourself of the reasoning embodied in the following examples. Example \(\PageIndex{1}\): Mass ratio from atomic weights The atomic weights of oxygen and carbon are 16.0 and 12.0 atomic mass units (\(u\)), respectively. How much heavier is the oxygen atom in relation to carbon? Solution Atomic weights represent the relative masses of different kinds of atoms. This means that the atom of oxygen has a mass that is \[\dfrac{16\, \cancel{u}}{12\, \cancel{u}} = \dfrac{4}{3} ≈ 1.33 \nonumber\] as great as the mass of a carbon atom. Example \(\PageIndex{2}\): Mass of a single atom The absolute mass of a carbon atom is 12.0 unified atomic mass units (\(u\)). How many grams will a single oxygen atom weigh? 
Solution The absolute mass of a carbon atom is 12.0 \(u\) or \[12\,\cancel{u} \times \dfrac{1.6605 \times 10^{–24}\, g}{1 \,\cancel{u}} = 1.99 \times 10^{–23} \, g \text{ (per carbon atom)} \nonumber\] The mass of the oxygen atom will be 4/3 greater (from Example \(\PageIndex{1}\)): \[ \left( \dfrac{4}{3} \right) 1.99 \times 10^{–23} \, g = 2.66 \times 10^{–23} \, g \text{ (per oxygen atom)} \nonumber\] Alternatively, we can do the calculation directly, as with carbon: \[16\,\cancel{u} \times \dfrac{1.6605 \times 10^{–24}\, g}{1 \,\cancel{u}} = 2.66 \times 10^{–23} \, g \text{ (per oxygen atom)} \nonumber\] Example \(\PageIndex{3}\): Relative masses from atomic weights Suppose that we have \(N\) carbon atoms, where \(N\) is a number large enough to give us a pile of carbon atoms whose mass is 12.0 grams. How much would the same number, \(N\), of oxygen atoms weigh? Solution We use the results from Example \(\PageIndex{1}\) again. The collection of \(N\) oxygen atoms would have a mass of \[\dfrac{4}{3} \times 12\, g = 16.0\, g. \nonumber\] Exercise \(\PageIndex{1}\) What is the numerical value of \(N\) in Example \(\PageIndex{3}\)? Answer Using the results of Examples \(\PageIndex{2}\) and \(\PageIndex{3}\): \[N \times 1.99 \times 10^{–23} \, g \text{ (per carbon atom)} = 12\, g \nonumber\] or \[N = \dfrac{12\, \cancel{g}}{1.99 \times 10^{–23} \, \cancel{g} \text{ (per carbon atom)}} = 6.03 \times 10^{23} \text{ atoms} \nonumber \] There are a lot of atoms in 12 g of carbon. Things to understand about Avogadro's number It is a number, just as is "dozen", and thus is dimensionless. It is a huge number, far greater in magnitude than we can visualize. Its practical use is limited to counting tiny things like atoms, molecules, "formula units", electrons, or photons. The value of \(N_A\) can be known only to the precision that the number of atoms in a measurable weight of a substance can be estimated. Because large numbers of atoms cannot be counted directly, a variety of ingenious indirect measurements have been made involving such things as Brownian motion and X-ray scattering. The current value was determined by measuring the distances between the atoms of silicon in an ultrapure crystal of this element that was shaped into a perfect sphere. (The measurement was made by X-ray scattering.) When combined with the measured mass of this sphere, it yields Avogadro's number. However, there are two problems with this: The silicon sphere is an artifact, rather than being something that occurs in nature, and thus may not be perfectly reproducible. The standard of mass, the kilogram, is not precisely known, and its value appears to be changing. For these reasons, there are proposals to revise the definitions of both \(N_A\) and the kilogram. Moles and their Uses The mole (abbreviated mol) is the SI measure of quantity of a "chemical entity", which can be an atom, molecule, formula unit, electron or photon. One mole of anything is just Avogadro's number of that something. Or, if you think like a lawyer, you might prefer the official SI definition: Definition: The Mole The mole is the amount of substance of a system which contains as many elementary entities as there are atoms in 0.012 kilogram of carbon-12. Avogadro's number (Equation \ref{3.2.1}), like any pure number, is dimensionless. However, it also defines the mole, so we can also express \(N_A\) as \(6.02 \times 10^{23}\; mol^{–1}\); in this form, it is properly known as the Avogadro constant.
This construction emphasizes the role of Avogadro's number as a conversion factor between the number of moles and the number of "entities". Example \(\PageIndex{4}\): number of moles in N particles How many moles of nickel atoms are there in 80 nickel atoms? Solution \[\dfrac{80 \;atoms}{6.02 \times 10^{23} \; atoms\; mol^{-1}} = 1.33 \times 10^{-22} mol \nonumber\] Is this answer reasonable? Yes, because 80 is an extremely small fraction of \(N_A\). Molar Mass The atomic weight, molecular weight, or formula weight of one mole of the fundamental units (atoms, molecules, or groups of atoms that correspond to the formula of a pure substance) is the ratio of its mass to 1/12 the mass of one mole of \(\ce{^{12}C}\) atoms, and being a ratio, is dimensionless. But at the same time, this molar mass (as many now prefer to call it) is also the observable mass of one mole (\(N_A\)) of the substance, so we frequently emphasize this by stating it explicitly as so many grams (or kilograms) per mole: g mol\(^{–1}\). It is important always to bear in mind that the mole is a number and not a mass. But each individual particle has a mass of its own, so a mole of any specific substance will always correspond to a certain mass of that substance. Example \(\PageIndex{5}\): Boron content of borax Borax is the common name of sodium tetraborate, \(\ce{Na2B4O7}\). How many moles of boron are present in 20.0 g of borax? How many grams of boron are present in 20.0 g of borax? Solution The formula weight of \(\ce{Na2B4O7}\) is \[(2 \times 23.0) + (4 \times 10.8) + (7 \times 16.0) = 201.2 \nonumber\] 20 g of borax contains (20.0 g) ÷ (201 g mol\(^{–1}\)) = 0.10 mol of borax, and thus 0.40 mol of B. 0.40 mol of boron has a mass of (0.40 mol) × (10.8 g mol\(^{–1}\)) = 4.3 g. Example \(\PageIndex{6}\): Magnesium in chlorophyll The plant photosynthetic pigment chlorophyll contains 2.68 percent magnesium by weight. How many atoms of Mg will there be in 1.00 g of chlorophyll? Solution Each gram of chlorophyll contains 0.0268 g of Mg, atomic weight 24.3. Number of moles in this weight of Mg: (0.0268 g) / (24.3 g mol\(^{–1}\)) = 0.00110 mol. Number of atoms: (0.00110 mol) × (\(6.02 \times 10^{23}\) mol\(^{–1}\)) = \(6.64 \times 10^{20}\). Is this answer reasonable? (Always be suspicious of huge-number answers!) Yes, because we would expect to have huge numbers of atoms in any observable quantity of a substance. Molar Volume This is the volume occupied by one mole of a pure substance. Molar volume depends on the density of a substance and, like density, varies with temperature owing to thermal expansion, and also with the pressure. For solids and liquids, these variables ordinarily have little practical effect, so the values quoted for 1 atm pressure and 25°C are generally useful over a fairly wide range of conditions. This is definitely not the case with gases, whose molar volumes must be calculated for a specific temperature and pressure. Example \(\PageIndex{7}\): Molar Volume of a Liquid Methanol, \(\ce{CH3OH}\), is a liquid having a density of 0.79 g per milliliter. Calculate the molar volume of methanol. Solution The molar volume will be the volume occupied by one molar mass (32 g) of the liquid. Expressing the density in grams per liter instead of per milliliter, we have \[V_M = \dfrac{32\; g\; mol^{–1}}{790\; g\; L^{–1}}= 0.0405 \;L \;mol^{–1} \nonumber\] The molar volume of a metallic element allows one to estimate the size of the atom.
The idea is to mentally divide a piece of the metal into as many little cubic boxes as there are atoms, and then calculate the length of each box. Assuming that an atom sits in the center of each box and that each atom is in direct contact with its six neighbors (two along each dimension), this gives the diameter of the atom. The manner in which atoms pack together in actual metallic crystals is usually more complicated than this, and it varies from metal to metal, so this calculation only provides an approximate value. Example \(\PageIndex{8}\): Radius of a Strontium Atom The density of metallic strontium is 2.60 g cm\(^{–3}\). Use this value to estimate the radius of the atom of Sr, whose atomic weight is 87.6. Solution The molar volume of Sr is: \[\dfrac{87.6 \; g \; mol^{-1}}{2.60\; g\; cm^{-3}} = 33.7\; cm^3\; mol^{–1}\] The volume of each "box" is: \[\dfrac{33.7\; cm^3\; mol^{–1}} {6.02 \times 10^{23}\; mol^{–1}} = 5.60 \times 10^{-23}\; cm^3\] The side length of each box will be the cube root of this value, \(3.82 \times 10^{–8}\; cm\). The atomic radius will be half this value, or \[1.9 \times 10^{–8}\; cm = 1.9 \times 10^{–10}\; m = 190\; pm\] Note: Your calculator probably has no cube-root button, but you are expected to be able to find cube roots; you can usually use the \(x^y\) button with \(y = 0.333\). You should also be able to estimate the magnitude of this value for checking. The easiest way is to express the number so that the exponent is a multiple of 3. Take \(56 \times 10^{-24}\), for example. Since \(3^3 = 27\) and \(4^3 = 64\), you know that the cube root of 56 will be between 3 and 4, so the cube root should be a bit less than \(4 \times 10^{–8}\). So how good is our atomic radius? Standard tables give the atomic radius of strontium as being in the range 192–220 pm.
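The arithmetic in Examples 5 through 8 is easy to script. Below is a small sketch that redoes the borax mole count and the strontium radius estimate; the numeric inputs are the ones quoted above.

```python
N_A = 6.02e23  # Avogadro's number, mol^-1

# Example 5: moles of boron in 20.0 g of borax, Na2B4O7
M_borax = 2 * 23.0 + 4 * 10.8 + 7 * 16.0   # 201.2 g/mol
mol_borax = 20.0 / M_borax                  # ~0.10 mol
mol_B = 4 * mol_borax                       # 4 B atoms per formula unit
print(f"boron: {mol_B:.2f} mol = {mol_B * 10.8:.1f} g")

# Example 8: estimate the radius of a Sr atom from its molar volume
V_molar = 87.6 / 2.60                       # cm^3/mol
V_box = V_molar / N_A                       # cm^3 per atom ("box")
radius_cm = V_box ** (1 / 3) / 2            # half the box side length
print(f"Sr radius ~ {radius_cm * 1e10:.0f} pm")  # 1 cm = 1e10 pm
```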
Okay, now I've rather carefully discussed one example of \(\mathcal{V}\)-enriched profunctors, and rather sloppily discussed another. Now it's time to build the general framework that can handle both these examples. We can define \(\mathcal{V}\)-enriched categories whenever \(\mathcal{V}\) is a monoidal preorder: we did that way back in Lecture 29. We can also define \(\mathcal{V}\)-enriched functors whenever \(\mathcal{V}\) is a monoidal preorder: we did that in Lecture 31. But to define \(\mathcal{V}\)-enriched profunctors, we need \(\mathcal{V}\) to be a bit better. We can see why by comparing our examples. Our first example involved \(\mathcal{V} = \textbf{Bool}\). A feasibility relation $$ \Phi : X \nrightarrow Y $$ between preorders is a monotone function $$ \Phi: X^{\text{op}} \times Y\to \mathbf{Bool} . $$ We shall see that a feasibility relation is the same as a \( \textbf{Bool}\)-enriched profunctor. Our second example involved \(\mathcal{V} = \textbf{Cost}\). I said that a \( \textbf{Cost}\)-enriched profunctor $$ \Phi : X \nrightarrow Y $$ between \(\mathbf{Cost}\)-enriched categories is a \( \textbf{Cost}\)-enriched functor $$ \Phi: X^{\text{op}} \times Y \to \mathbf{Cost} $$ obeying some conditions. But I let you struggle to guess those conditions... without enough clues to make it easy! To fit both our examples in a general framework, we start by considering an arbitrary monoidal preorder \(\mathcal{V}\). \(\mathcal{V}\)-enriched profunctors will go between \(\mathcal{V}\)-enriched categories. So, let \(\mathcal{X}\) and \(\mathcal{Y}\) be \(\mathcal{V}\)-enriched categories. We want to make this definition: Tentative Definition. A \(\mathcal{V}\)-enriched profunctor $$ \Phi : \mathcal{X} \nrightarrow \mathcal{Y} $$ is a \(\mathcal{V}\)-enriched functor $$ \Phi: \mathcal{X}^{\text{op}} \times \mathcal{Y} \to \mathcal{V} .$$ Notice that this handles our first example very well. But some questions appear in our second example - and indeed in general. For our tentative definition to make sense, we need three things: We need \(\mathcal{V}\) to itself be a \(\mathcal{V}\)-enriched category. We need any two \(\mathcal{V}\)-enriched categories to have a 'product', which is again a \(\mathcal{V}\)-enriched category. We need any \(\mathcal{V}\)-enriched category to have an 'opposite', which is again a \(\mathcal{V}\)-enriched category. Items 2 and 3 work fine whenever \(\mathcal{V}\) is a commutative monoidal poset. We'll see why in Lecture 62. Item 1 is trickier, and indeed it sounds rather scary. \(\mathcal{V}\) began life as a humble monoidal preorder. Now we're wanting it to be enriched in itself! Isn't that circular somehow? Yes! But not in a bad way. Category theory often eats its own tail, like the mythical ouroboros, and this is an example. To get \(\mathcal{V}\) to become a \(\mathcal{V}\)-enriched category, we'll demand that it be 'closed'. For starters, let's assume it's a monoidal poset, just to avoid some technicalities. Definition. A monoidal poset is closed if for all elements \(x,y \in \mathcal{V}\) there is an element \(x \multimap y \in \mathcal{V}\) such that $$ x \otimes a \le y \text{ if and only if } a \le x \multimap y $$ for all \(a \in \mathcal{V}\). This will let us make \(\mathcal{V}\) into a \(\mathcal{V}\)-enriched category by setting \(\mathcal{V}(x,y) = x \multimap y \). But first let's try to understand this concept a bit!
We can check that our friend \(\mathbf{Bool}\) is closed. Remember, we are making it into a monoidal poset using 'and' as its binary operation: its full name is \( (\lbrace \text{true},\text{false}\rbrace, \wedge, \text{true})\). Then we can take \( x \multimap y \) to be 'implication'. More precisely, we say \( x \multimap y = \text{true}\) iff \(x\) implies \(y\). Even more precisely, we define: $$ \text{true} \multimap \text{true} = \text{true} $$$$ \text{true} \multimap \text{false} = \text{false} $$$$ \text{false} \multimap \text{true} = \text{true} $$$$ \text{false} \multimap \text{false} = \text{true} . $$ Puzzle 188. Show that with this definition of \(\multimap\) for \(\mathbf{Bool}\) we have $$ a \wedge x \le y \text{ if and only if } a \le x \multimap y $$ for all \(a,x,y \in \mathbf{Bool}\). We can also check that our friend \(\mathbf{Cost}\) is closed! Remember, we are making it into a monoidal poset using \(+\) as its binary operation: its full name is \( ([0,\infty], \ge, +, 0)\). Then we can define \( x \multimap y \) to be 'subtraction'. More precisely, we define \(x \multimap y\) to be \(y - x\) if \(y \ge x\), and \(0\) otherwise. Puzzle 189. Show that with this definition of \(\multimap\) for \(\mathbf{Cost}\) we have $$ a + x \le y \text{ if and only if } a \le x \multimap y . $$ But beware. We have defined the ordering on \(\mathbf{Cost}\) to be the opposite of the usual ordering of numbers in \([0,\infty]\). So, \(\le\) above means the opposite of what you might expect! Next, two more tricky puzzles. Next time I'll show you in general how a closed monoidal poset \(\mathcal{V}\) becomes a \(\mathcal{V}\)-enriched category. But to appreciate this, it may help to try some examples first: Puzzle 190. What does it mean, exactly, to make \(\mathbf{Bool}\) into a \(\mathbf{Bool}\)-enriched category? Can you see how to do this by defining $$ \mathbf{Bool}(x,y) = x \multimap y $$ for all \(x,y \in \mathbf{Bool}\), where \(\multimap\) is defined to be 'implication' as above? Puzzle 191. What does it mean, exactly, to make \(\mathbf{Cost}\) into a \(\mathbf{Cost}\)-enriched category? Can you see how to do this by defining $$ \mathbf{Cost}(x,y) = x \multimap y $$ for all \(x,y \in \mathbf{Cost}\), where \(\multimap\) is defined to be 'subtraction' as above? Note: for Puzzle 190 you might be tempted to say "a \(\mathbf{Bool}\)-enriched category is just a preorder, so I'll use that fact here". However, you may learn more if you go back to the general definition of enriched category and use that! The reason is that we're trying to understand some general things by thinking about two examples. Puzzle 192. The definition of 'closed' above is an example of a very important concept we keep seeing in this course. What is it? Restate the definition of closed monoidal poset in a more elegant, but equivalent, way using this concept.
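Puzzles 188 and 189 can also be checked mechanically. Here is a small brute-force sketch in Python; the encodings of \(\mathbf{Bool}\) and \(\mathbf{Cost}\) are my own, and the Cost check samples finitely many points rather than proving the statement.

```python
import itertools, math, random

# Bool: ({true, false}, and, true), ordered with false <= true
def leq_bool(a, b): return (not a) or b
def lolli_bool(x, y): return (not x) or y   # implication, as defined above

for a, x, y in itertools.product([False, True], repeat=3):
    assert leq_bool(a and x, y) == leq_bool(a, lolli_bool(x, y))

# Cost: ([0, inf], >=, +, 0); note the order is REVERSED numerically
def leq_cost(a, b): return a >= b           # "a <= b" in Cost means a >= b in R
def lolli_cost(x, y): return 0.0 if y <= x else y - x  # truncated subtraction

random.seed(0)
samples = [0.0, 1.0, 2.5, math.inf] + [random.uniform(0, 10) for _ in range(50)]
for a in samples:
    for x in samples:
        for y in samples:
            assert leq_cost(a + x, y) == leq_cost(a, lolli_cost(x, y))

print("a (x) x <= y  iff  a <= x -o y holds in both examples (on all samples)")
```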
Question: Evaluate {eq}\displaystyle \int xdx+ydy-zdz {/eq} Integration: An integral of a sum of differentials can be evaluated term by term, using the linearity of integration: {eq}\int(a f(x)+b g(x)) d x=a \int f(x) d x+b \int g(x) d x {/eq} Answer and Explanation: {eq}\int xdx+ydy-zdz\\ \text{Using linearity of integration:}\\ \int \:xdx+\int \:ydy-\int \:zdz\\ =\frac{x^2}{2}+\frac{y^2}{2}-\frac{z^2}{2}+C\\ =\frac{x^2+y^2-z^2}{2}+C {/eq} where the three constants of integration have been absorbed into the single constant \(C\).
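A quick symbolic check of the result (a sketch using SymPy):

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
result = sp.integrate(x, x) + sp.integrate(y, y) - sp.integrate(z, z)
print(result)  # x**2/2 + y**2/2 - z**2/2, plus an arbitrary constant C
```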
A three-round Feistel network is a good example of a realistic construction that is a secure "weak" PRP, but not a "strong" PRP. A Feistel network uses the permutation $P_f(L, R) = (R, L\oplus f(R))$, where $f$ is an element of a pseudorandom function family. This PRP will be keyed with three keys $k_1, k_2, k_3$, which will be used to key a PRF $F$ differently each round. We define $E_{k_1,k_2,k_3}$ to be a three-round Feistel network: $E_{k_1,k_2,k_3}(L \| R) = \operatorname{Concat}(P_{F_{k_3}}(P_{F_{k_2}}(P_{F_{k_1}}(L, R))))$ Assuming $F$ is a PRF, $E$ will meet the definition of a weak PRP. I believe this proof is originally attributed to Luby and Rackoff (for details of the proof, see here, starting on page 11). Similarly, the inverse $D_{k_1,k_2,k_3} = E^{-1}_{k_1,k_2,k_3}$ is also a weak PRP. Interestingly, though, $E$ is not a strong PRP. When given simultaneous access to both a "forward" oracle and a "backward" oracle, an adversary can distinguish between $(E_{k_1,k_2,k_3}(\cdot), D_{k_1,k_2,k_3}(\cdot))$ and $(\Pi(\cdot), \Pi^{-1}(\cdot))$, where $\Pi$ is a randomly selected permutation on the same domain. Here is an adversary that distinguishes the two with high probability: Query the decryption oracle with two strings of zero bits: $(a\|b) \leftarrow D(0\|0)$ Query the encryption oracle: $(c\|d) \leftarrow E(0\|a)$ Query the decryption oracle again: $(e\|f) \leftarrow D((b\oplus d)\|c)$ If $e=c\oplus a$, then return $1$, else return $0$. Here's why this works: By expansion, we see that $D_{k_1,k_2,k_3}(L\|R) = (x\|y)$, where: $x=R \oplus F_{k_2}(L \oplus F_{k_3}(R))$ $y=L \oplus F_{k_3}(R) \oplus F_{k_1}(R \oplus F_{k_2}(L\oplus F_{k_3}(R)))$ It follows that the first oracle query will result in: $a=F_{k_2}(F_{k_3}(0))$ $b=F_{k_3}(0) \oplus F_{k_1}(a)$ By expansion, we see that $E_{k_1,k_2,k_3}(L\|R) = (x\|y)$, where: $x=R \oplus F_{k_2}(L \oplus F_{k_1}(R))$ $y=L \oplus F_{k_1}(R) \oplus F_{k_3}(R \oplus F_{k_2}(L\oplus F_{k_1}(R)))$ It follows that the second oracle query will result in: $c=a \oplus F_{k_2}(F_{k_1}(a))$ and $d=F_{k_1}(a) \oplus F_{k_3}(c)$. Note that $b$ and $d$ both contain the term $F_{k_1}(a)$. When we compute $b\oplus d$, the terms cancel: $b\oplus d=F_{k_3}(0) \oplus F_{k_3}(c)$ Finally, in the third oracle query, the specifically crafted $L$ and $R$ cause the following simplification: $e=c \oplus F_{k_2}((b\oplus d) \oplus F_{k_3}(c))\\=c \oplus F_{k_2}(F_{k_3}(0))\\=c \oplus a$ The adversary finds that $e=c\oplus a$ as required, which would only be expected with low probability for a truly random permutation. The basic idea is to set things up so that $F_{k_2}$ receives the same input in two different queries. This causes the left oracle output to be masked with the same value, which can be detected by the adversary. Crucially, this attack would not work without the ability to query the permutation in both directions.
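To make the attack concrete, here is a small Python sketch. The PRF is my own stand-in (truncated keyed SHA-256 over 32-bit halves; any PRF family would do), and `E`/`D` are written directly from the expansions above, so the distinguisher's check can be run verbatim.

```python
import hashlib
import secrets

BITS = 32

def F(k: bytes, x: int) -> int:
    """Stand-in PRF: truncated SHA-256 of key || input (illustrative only)."""
    d = hashlib.sha256(k + x.to_bytes(BITS // 8, 'big')).digest()
    return int.from_bytes(d[:BITS // 8], 'big')

k1, k2, k3 = (secrets.token_bytes(16) for _ in range(3))

def E(L: int, R: int):
    # expansion of the three-round Feistel network, as given above
    x = R ^ F(k2, L ^ F(k1, R))
    y = L ^ F(k1, R) ^ F(k3, x)
    return x, y

def D(L: int, R: int):
    # expansion of the backward direction, as given above
    x = R ^ F(k2, L ^ F(k3, R))
    y = L ^ F(k3, R) ^ F(k1, x)
    return x, y

# The adversary's three queries:
a, b = D(0, 0)
c, d = E(0, a)
e, f = D(b ^ d, c)

print(e == c ^ a)  # True for the Feistel network; probability ~2^-32 for random Pi
```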
And I think people said that reading the first chapter of Do Carmo mostly fixed the problems in that regard. The only person I asked about the second pset said that his main difficulty was in solving the ODEs. Yeah, here there's the double whammy in grad school that every grad student has to take the full year of algebra/analysis/topology, while a number of them already don't care much for some subset, and then they only have to pass the class. I know 2 years ago apparently it mostly avoided commutative algebra, half because the professor himself doesn't seem to like it that much and half because he was like "yeah, the algebraists all place out, so I'm assuming everyone here is an analyst and doesn't care about commutative algebra". Then the year after, another guy taught and made it mostly commutative algebra + a bit of varieties + Cech cohomology at the end from nowhere, and everyone was like uhhh. Then apparently this year was more of an experiment, in part from requests to make things more geometric. It's got 3 "underground" floors (quotation marks because the place is on a very tall hill, so the first 3 floors are a good bit above the street), and then 9 floors above ground. The grad lounge is in the top floor and overlooks the city and lake, it's real nice. The basement floors have the library and all the classrooms (each of them has a lot more area than the higher ones), floor 1 is basically just the entrance, I'm not sure what's on the second floor, 3-8 is all offices, and 9 has the grad lounge mainly. And then there's one weird area called the math bunker that's trickier to access: you have to leave the building from the first floor, head outside (still walking on the roof of the basement floors), go to this other structure, and then get in. Some number of grad student cubicles are there (other grad students get offices in the main building). It's hard to get a feel for which places are good at undergrad math. Highly ranked places are known for having good researchers, but there's no "How well does this place teach?" ranking, which is kinda more relevant if you're an undergrad. I think interest might have started the trend, though it is true that grad admissions now is starting to make it closer to an expectation (friends of mine say that for experimental physics, classes and all definitely don't cut it anymore). In math I don't have a clear picture. It seems there are a lot of Mickey Mouse projects that don't seem to help people much, but more and more people seem to do more serious things and that seems to become a bonus. One of my professors said it to describe a bunch of REUs: basically it boils down to problems that some of these give their students which nobody really cares about but which undergrads could work on and get a paper out of. @TedShifrin i think universities have been ostensibly a game of credentialism for a long time, they just used to be gated off to a lot more people than they are now (see: ppl from backgrounds like mine) and now that budgets shrink to nothing (while administrative costs balloon) the problem gets harder and harder for students. In order to show that $x=0$ is asymptotically stable, one needs to show that $$\forall \varepsilon > 0, \; \exists\, T > 0 \; \mathrm{s.t.} \; t > T \implies \| x ( t ) - 0 \| < \varepsilon.$$ The intuitive sketch of the proof is that one has to fit a sublevel set of continuous functions $...
"If $U$ is a domain in $\Bbb C$ and $K$ is a compact subset of $U$, then for all holomorphic functions on $U$, we have $\sup_{z \in K}|f(z)| \leq C_K \|f\|_{L^2(U)}$ with $C_K$ depending only on $K$ and $U$" this took me way longer than it should have Well, $A$ has these two dictinct eigenvalues meaning that $A$ can be diagonalised to a diagonal matrix with these two values as its diagonal. What will that mean when multiplied to a given vector (x,y) and how will the magnitude of that vector changed? Alternately, compute the operator norm of $A$ and see if it is larger or smaller than 2, 1/2 Generally, speaking, given. $\alpha=a+b\sqrt{\delta}$, $\beta=c+d\sqrt{\delta}$ we have that multiplication (which I am writing as $\otimes$) is $\alpha\otimes\beta=(a\cdot c+b\cdot d\cdot\delta)+(b\cdot c+a\cdot d)\sqrt{\delta}$ Yep, the reason I am exploring alternative routes of showing associativity is because writing out three elements worth of variables is taking up more than a single line in Latex, and that is really bugging my desire to keep things straight. hmm... I wonder if you can argue about the rationals forming a ring (hence using commutativity, associativity and distributivitity). You cannot do that for the field you are calculating, but you might be able to take shortcuts by using the multiplication rule and then properties of the ring $\Bbb{Q}$ for example writing $x = ac+bd\delta$ and $y = bc+ad$ we then have $(\alpha \otimes \beta) \otimes \gamma = (xe +yf\delta) + (ye + xf)\sqrt{\delta}$ and then you can argue with the ring property of $\Bbb{Q}$ thus allowing you to deduce $\alpha \otimes (\beta \otimes \gamma)$ I feel like there's a vague consensus that an arithmetic statement is "provable" if and only if ZFC proves it. But I wonder what makes ZFC so great, that it's the standard working theory by which we judge everything. I'm not sure if I'm making any sense. Let me know if I should either clarify what I mean or shut up. :D Associativity proofs in general have no shortcuts for arbitrary algebraic systems, that is why non associative algebras are more complicated and need things like Lie algebra machineries and morphisms to make sense of One aspect, which I will illustrate, of the "push-button" efficacy of Isabelle/HOL is its automation of the classic "diagonalization" argument by Cantor (recall that this states that there is no surjection from the naturals to its power set, or more generally any set to its power set).theorem ... The axiom of triviality is also used extensively in computer verification languages... take Cantor's Diagnolization theorem. It is obvious. (but seriously, the best tactic is over powered...) Extensions is such a powerful idea. I wonder if there exists algebraic structure such that any extensions of it will produce a contradiction. O wait, there a maximal algebraic structures such that given some ordering, it is the largest possible, e.g. surreals are the largest field possible It says on Wikipedia that any ordered field can be embedded in the Surreal number system. Is this true? How is it done, or if it is unknown (or unknowable) what is the proof that an embedding exists for any ordered field? Here's a question for you: We know that no set of axioms will ever decide all statements, from Gödel's Incompleteness Theorems. However, do there exist statements that cannot be decided by any set of axioms except ones which contain one or more axioms dealing directly with that particular statement? 
"Infinity exists" comes to mind as a potential candidate statement. Well, take ZFC as an example, CH is independent of ZFC, meaning you cannot prove nor disprove CH using anything from ZFC. However, there are many equivalent axioms to CH or derives CH, thus if your set of axioms contain those, then you can decide the truth value of CH in that system @Rithaniel That is really the crux on those rambles about infinity I made in this chat some weeks ago. I wonder to show that is false by finding a finite sentence and procedure that can produce infinity but so far failed Put it in another way, an equivalent formulation of that (possibly open) problem is: > Does there exists a computable proof verifier P such that the axiom of infinity becomes a theorem without assuming the existence of any infinite object? If you were to show that you can attain infinity from finite things, you'd have a bombshell on your hands. It's widely accepted that you can't. If fact, I believe there are some proofs floating around that you can't attain infinity from the finite. My philosophy of infinity however is not good enough as implicitly pointed out when many users who engaged with my rambles always managed to find counterexamples that escape every definition of an infinite object I proposed, which is why you don't see my rambles about infinity in recent days, until I finish reading that philosophy of infinity book The knapsack problem or rucksack problem is a problem in combinatorial optimization: Given a set of items, each with a weight and a value, determine the number of each item to include in a collection so that the total weight is less than or equal to a given limit and the total value is as large as possible. It derives its name from the problem faced by someone who is constrained by a fixed-size knapsack and must fill it with the most valuable items.The problem often arises in resource allocation where there are financial constraints and is studied in fields such as combinatorics, computer science... O great, given a transcendental $s$, computing $\min_P(|P(s)|)$ is a knapsack problem hmm... By the fundamental theorem of algebra, every complex polynomial $P$ can be expressed as: $$P(x) = \prod_{k=0}^n (x - \lambda_k)$$ If the coefficients of $P$ are natural numbers , then all $\lambda_k$ are algebraic Thus given $s$ transcendental, to minimise $|P(s)|$ will be given as follows: The first thing I think of with that particular one is to replace the $(1+z^2)$ with $z^2$. Though, this is just at a cursory glance, so it would be worth checking to make sure that such a replacement doesn't have any ugly corner cases. In number theory, a Liouville number is a real number x with the property that, for every positive integer n, there exist integers p and q with q > 1 and such that0<|x−pq|<1qn.{\displaystyle 0<\left|x-{\frac {p}... Do these still exist if the axiom of infinity is blown up? Hmmm... Under a finitist framework where only potential infinity in the form of natural induction exists, define the partial sum: $$\sum_{k=1}^M \frac{1}{b^{k!}}$$ The resulting partial sums for each M form a monotonically increasing sequence, which converges by ratio test therefore by induction, there exists some number $L$ that is the limit of the above partial sums. 
The proof of transcendence can then proceed as usual, thus transcendental numbers can be constructed in a finitist framework. There's this theorem in Spivak's book of Calculus: Theorem 7. Suppose that $f$ is continuous at $a$, and that $f'(x)$ exists for all $x$ in some interval containing $a$, except perhaps for $x=a$. Suppose, moreover, that $\lim_{x \to a} f'(x)$ exists. Then $f'(a)$ also exists, and $$f'... and neither Rolle nor the mean value theorem needs the axiom of choice. Thus under finitism, we can construct at least one transcendental number. If we throw away all transcendental functions, it means we can construct a number that cannot be reached from any algebraic procedure. Therefore, the conjecture is that actual infinity has a close relationship to transcendental numbers. Anything else? I need to finish that book to comment. typo: neither Rolle nor the mean value theorem needs the axiom of choice nor an infinite set > are there palindromes such that the explosion of palindromes is a palindrome
Let: $\left\{m_1, ~...~, m_k\right\}$ be a set of pairwise coprime natural numbers, $M=\prod_{i=1}^{k} m_i$, and $X$ be a natural number such that $X < M$. Then $X$ can be expressed in the Residue Number System as: $$X={\left(x_1, ~...~, x_k\right)}_{RNS\left(m_1, ~...~, m_k\right)}~,$$ where $\forall m_i \left[\left(x_i \equiv X \mod m_i\right) ~~~ \wedge ~~~0 \le x_i < m_i \right]$. There are a plethora of papers attacking the problem of parity/magnitude comparison in Residue Number Systems; however, many of these papers are focused on chip depth or on shaving large constants off of chip area. I am finding it difficult to decipher whether there are any exact algorithms that run faster than full binary reconstruction, which takes $\sim \mathcal{O}(k^2)$ time. (For simplicity/brevity, I am assuming small $m_i$, and constant-time modulo-multiplication/addition of each RNS "digit".) Most of the papers' novelties lie in some seeming "gimmick", but no real complexity decrease; examples of results:

- Fast inexact parity algorithms (usually work terribly when $X < \sqrt{M}$, or $|X| \ll |M|$, where $\left|n\right|=\text{size of }n=\left\lceil\log_2n\right\rceil$)
- Algorithms that work quickly "most of the time"; i.e. they use an inexact algorithm, and then fall back to the full CRT reconstruction or equivalent in the worst case
- Circuits that compete with some other paper's circuit by some constant, or an area/depth tradeoff, but make no complexity advance
- Full CRT reconstruction of $X$, perhaps using some trick to save some constants
- Reconstruct/convert to another number system (including binary) where parity/comparison is easy, but: this conversion/reconstruction takes $\sim \mathcal{O}(k^2)$ time, or it runs in $\sim \mathcal{O}(k)$ time but with $k$ processors, or it runs in $\sim \mathcal{O}(k)$ time because that is the depth of the circuit, but this is not algorithmic complexity, or it reuses previous components that must be there for RNS multiplication (saving circuit space), but still runs in $\sim \mathcal{O}(k^2)$ sequential time, or $\sim \mathcal{O}(k)$ parallel time
- Using a "core" function which basically boils down to a constant-trimmed CRT, or an approximate CRT
- Using special moduli, which makes individual operations simpler, but the parity complexity stays the same
- Using special moduli, but with a limited number of moduli, or without allowing small $m_i$
- Base extension, which saves some constant or allows parallelism, but the complexity is again $\sim \mathcal{O}(k^2)$ sequential time, or $\sim \mathcal{O}(k)$ parallel time (for multiplication)
- Redundant moduli, but maintaining the redundant moduli takes $\sim \mathcal{O}(k^2)$ sequential time, or $\sim \mathcal{O}(k)$ parallel time
- Using lookup tables to reduce the depth of some parity circuit, with no complexity improvement

Many of the papers do not address complexity at all, or do not address sequential complexity, or, even more confusingly, some state the depth/parallel complexity without being precise that it is not sequential; until you read and decipher the entire paper, and discover it yourself. Bottom line What are the best sequential, worst-case complexity results in RNS* for exact parity checking or magnitude comparison? *Results for RNS-like systems would also be interesting, including special moduli sets. More background info: Multiplication of two numbers in the same RNS base is simply pointwise modulo multiplication of the two numbers (this can be approximately linear time). However, overflow detection is difficult (it is difficult with addition as well).
Multiplication seems much simpler, but parity and magnitude comparison of two numbers seem much more difficult. Magnitude comparison is simply determining which of two numbers is greater, $X \stackrel{?}{<} Y$, given only their RNS forms with the same RNS bases. Parity is simply deciding whether a number, $X={\left(x_1, ~...~, x_k\right)}_{RNS\left(m_1, ~...~, m_k\right)}$, is even or odd (obviously, $X$ is not given, only its RNS form; the question is only nontrivial when all the $m_i$, and hence $M$, are odd). An interesting thing is that magnitude comparison and parity are related: If you were able to compute parity, then you could do comparison. To do comparison with parity, you compute $(X - Y)$ (in RNS), and if it underflows, the parity will be unexpected. That is, normally, assuming $p(X) = X \mod 2, ~~~ p(X) \in \{0,1\}$ is the parity function, $p(X-Y) \equiv p(X) + p(Y) \mod 2$. However, if the subtraction underflows, the result wraps around to $X - Y + M$; since $M$ is odd, the computed parity is flipped relative to the expected one. Therefore, if the parity is off after $X-Y$, you know that $Y > X$.
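For reference, here is the baseline the question is trying to beat: full CRT reconstruction followed by a trivial parity/magnitude check. This is a sketch with small, odd, pairwise coprime moduli of my own choosing.

```python
from math import prod

def crt_reconstruct(residues, moduli):
    """Full CRT reconstruction: the baseline the question wants to beat."""
    M = prod(moduli)
    X = 0
    for x_i, m_i in zip(residues, moduli):
        M_i = M // m_i
        X += x_i * M_i * pow(M_i, -1, m_i)  # pow(.., -1, m) is the modular inverse
    return X % M

moduli = (3, 5, 7, 11)      # pairwise coprime and odd, so parity is nontrivial
X, Y = 123, 456             # both < M = 1155
rx = tuple(X % m for m in moduli)
ry = tuple(Y % m for m in moduli)

assert crt_reconstruct(rx, moduli) == X
print("parity of X:", crt_reconstruct(rx, moduli) % 2)                       # 1
print("X < Y ?", crt_reconstruct(rx, moduli) < crt_reconstruct(ry, moduli))  # True
```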
I am trying to understand the derivation of the Cox-Ingersoll-Ross interest rate model. This has a stochastic differential equation of the form $$dr=(\eta-\gamma r)dt + \sqrt{\alpha r} \space dX$$ with an affine solution of the form $$V(r,t;T)=\exp\left[A(t;T) - rB(t;T)\right]$$ Putting this into the bond pricing equation and solving for $A(t;T)$ and $B(t;T)$, we arrive at a Riccati ODE for $B(t;T)$ of the form $$\frac{d B(t;T)}{dt} = \frac{1}{2}\alpha (B(t;T))^2 + \gamma B(t;T) - 1$$ I need to solve this ODE to get the final solution for $B(t;T)$ in the form $$B(t;T) = \frac{2\left(e^{\psi_1(T-t)}-1\right)}{(\gamma + \psi_1)(e^{\psi_1(T-t)}-1) + 2\psi_1}$$ where $$\psi_1=\sqrt{\gamma^2 + 2\alpha}$$ I can't think where to start solving this ODE. Could someone please give me a clue?
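Not the derivation itself, but here is a numerical sanity check (a sketch, with parameter values of my own choosing) that the quoted closed form does satisfy the Riccati ODE: integrate backwards from the terminal condition $B(T;T)=0$ and compare.

```python
import numpy as np
from scipy.integrate import solve_ivp

alpha, gamma, T = 0.04, 0.3, 5.0
psi1 = np.sqrt(gamma**2 + 2 * alpha)

def closed_form(t):
    e = np.exp(psi1 * (T - t)) - 1.0
    return 2.0 * e / ((gamma + psi1) * e + 2.0 * psi1)

# dB/dt = (alpha/2) B^2 + gamma B - 1, with terminal condition B(T;T) = 0
rhs = lambda t, B: 0.5 * alpha * B**2 + gamma * B - 1.0
ts = np.linspace(T, 0.0, 101)                 # integrate backwards in t
sol = solve_ivp(rhs, (T, 0.0), [0.0], t_eval=ts, rtol=1e-10, atol=1e-12)

print(np.max(np.abs(sol.y[0] - closed_form(sol.t))))  # tiny -> the formulas agree
```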
Evolvent of a plane curve (involute) A curve $\bar\gamma$ assigned to the plane curve $\gamma$ such that $\gamma$ is the evolute of $\bar\gamma$. If $\mathbf{r} = \mathbf{r}(s)$ (where $s$ is the arc length parameter of $\gamma$) is the equation of $\gamma$, then the equation of its evolvent has the form $$ \bar{\mathbf{r}} = \mathbf{r}(s) + (c-s)\tau(s) \,, $$ where $c$ is an arbitrary constant and $\tau$ the unit tangent vector to $\gamma$. The figures show the construction of the evolvent in two typical cases: a) if for any $s<c$ the curvature $k(s)$ of $\gamma$ does not vanish (the evolvent is a regular curve); and b) if $k(s)$ vanishes only for $s=s_1$ and $k'(s_1) \ne 0$ (the point corresponding to $s=s_1$ on the evolvent is a cusp of the second kind). Figure: e036720a Figure: e036720b About the evolvent of a surface, see Evolute (surface). Comments The evolvent is often called the involute of the curve. Involutes play a part in the construction of gears. For references see also Evolute. References [a1] K. Strubecker, "Differential geometry", I, de Gruyter (1964) [a2] M. Berger, B. Gostiaux, "Differential geometry: manifolds, curves, and surfaces", Springer (1988) pp. 305ff (Translated from French) [a3] J.L. Coolidge, "A treatise on algebraic plane curves", Dover, reprint (1959) pp. 195 [a4] H.W. Guggenheimer, "Differential geometry", McGraw-Hill (1963) pp. 25; 60 [a5] M. Berger, "Geometry", I, Springer (1987) pp. 253–254
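As an illustration of the formula above, here is a short numeric sketch computing an evolvent (involute) of the unit circle, the classic case; for the circle, the arc length $s$ equals the angle, and $c$ selects one member of the family.

```python
import numpy as np

# Unit circle in arc-length parametrization: r(s) = (cos s, sin s), tau = r'(s)
s = np.linspace(0.0, 4 * np.pi, 400)
r = np.stack([np.cos(s), np.sin(s)], axis=1)
tau = np.stack([-np.sin(s), np.cos(s)], axis=1)

c = 0.0  # the arbitrary constant picks out one evolvent from the family
evolvent = r + (c - s)[:, None] * tau   # bar r = r(s) + (c - s) tau(s)

print(evolvent[:3])  # for c = 0: (cos s + s sin s, sin s - s cos s)
```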
You should be able to recognize from the form of the electronic Hamiltonian that the electronic Schrödinger equation cannot be solved exactly. The problem, as for the case of atoms, is the electron-electron repulsion terms. Approximations must be made, and these approximations are based on the idea of using one-electron wavefunctions to describe multi-electron systems, in this case molecules, just as is done for multi-electron atoms. Initially two different approaches were developed. Heitler and London originated one in 1927, called the Valence Bond Method, and Robert Mulliken and others developed the other somewhat later, called the Molecular Orbital Method. By using configuration interaction, both methods can provide equivalent electronic wavefunctions and descriptions of bonding in molecules, although the basic concepts of the two methods are different. We will develop only the molecular orbital method because this is the method that is predominantly employed now. The wavefunction for a single electron in a molecule is called a molecular orbital, in analogy with the one-electron wavefunctions for atoms being called atomic orbitals. To describe the electronic states of molecules, we construct wavefunctions for the electronic states by using molecular orbitals. These wavefunctions are approximate solutions to the Schrödinger equation. A mathematical function for a molecular orbital, \(\psi _i\), is constructed as a linear combination of other functions, \(\varphi _j\), which are called basis functions because they provide the basis for representing the molecular orbital. \[\psi _i = \sum _j c_{ij} \varphi _j \label {10-8}\] The variational method is used to find values for parameters in the basis functions and for the constant coefficients in the linear combination that optimize these functions, i.e. make them as good as possible. The criterion for quality in the variational method is making the ground state energy of the molecule as low as possible. Here and in the rest of this chapter, the following notation is used: \(\sigma\) is a general spin function (can be either \(\alpha\) or \(\beta\)), \(\varphi \) is the basis function (this usually represents an atomic orbital), \(\psi\) is a molecular orbital, and \(\Psi\) is the electronic state wavefunction (representing a single Slater determinant or linear combination of Slater determinants). The ultimate goal is a mathematical description of electrons in molecules that enables chemists and other scientists to develop a deep understanding of chemical bonding and reactivity, to calculate properties of molecules, and to make predictions based on these calculations. For example, an active area of research in industry involves calculating changes in chemical properties of pharmaceutical drugs as a result of changes in chemical structure. Just as for atoms, each electron in a molecule can be described by a product of a spatial orbital and a spin function. These product functions are called spin orbitals. Since electrons are fermions, the electronic wavefunction must be antisymmetric with respect to the permutation of any two electrons. A Slater determinant containing the molecular spin orbitals produces the antisymmetric wavefunction.
For example, for two electrons, \[\Psi (r_1, r_2) = \dfrac{1}{\sqrt{2}} \begin {vmatrix} \psi _A (r_1) \alpha (1) & \psi _B (r_1) \beta (1) \\ \psi _A (r_2) \alpha (2) & \psi _B (r_2) \beta (2) \end {vmatrix} \label {10-9}\] Solving the Schrödinger equation in the orbital approximation will produce a set of spatial molecular orbitals, each with a specific energy, \(\epsilon\). Following the Aufbau Principle, 2 electrons with different spins (\(\alpha\) and \(\beta\), consistent with the Pauli Exclusion Principle) are assigned to each spatial molecular orbital in order of increasing energy. For the ground state of the 2n-electron molecule, the n lowest-energy spatial orbitals will be occupied, and the electron configuration will be given as \(\psi ^2_1 \psi ^2_2 \psi ^2_3 \dots \psi ^2_n\). The electron configuration also can be specified by an orbital energy level diagram as shown in Figure \(\PageIndex{1}\). Higher energy configurations exist as well, and these configurations produce excited states of molecules. Some examples are shown in Figure \(\PageIndex{1}\). Figure \(\PageIndex{1}\): a) The lowest energy configuration of a closed-shell system. b) The lowest energy configuration of an open-shell radical. c) An excited singlet configuration. d) An excited triplet configuration. Molecular orbitals usually are identified by their symmetry or angular momentum properties. For example, a typical symbol used to represent an orbital in an electronic configuration of a diatomic molecule is \(2\sigma ^2_g\). The superscript in the symbol means that this orbital is occupied by two electrons; the prefix means that it is the second sigma orbital with gerade symmetry. Diatomic molecules retain a component of angular momentum along the internuclear axis. The molecular orbitals of a diatomic molecule therefore can be identified in terms of this angular momentum. A Greek letter, e.g. \(\sigma\) or \(\pi\), encodes this information, as well as information about the symmetry of the orbital. A \(\sigma\) means the component of angular momentum is 0, and there is no node in any plane containing the internuclear axis, so the orbital must be symmetric with respect to reflection in such a plane. A \(\pi\) means there is a node and the wavefunction is antisymmetric with respect to reflection in a plane containing the internuclear axis. For homonuclear diatomic molecules, a g or a u is added as a subscript to designate whether the orbital is symmetric or antisymmetric with respect to the center of inversion of the molecule. A homonuclear diatomic molecule has a center of inversion in the middle of the bond. This center of inversion means that \(\psi (x, y, z) = \pm \psi (-x, -y, -z)\) with the origin at the inversion center. Inversion takes you from \((x, y, z )\) to \((-x, -y, -z )\). For a heteronuclear diatomic molecule, there is no center of inversion, so the symbols g and u are not used. A prefix 1, 2, 3, etc. simply means the first, second, third, etc. orbital of that type. We can specify an electronic configuration of a diatomic molecule by these symbols by using a superscript to denote the number of electrons in that orbital; e.g., the lowest energy configuration of N\(_2\) is \[1 \sigma ^2_g 1 \sigma ^2_u 2 \sigma ^2_g 2 \sigma ^2_u 1 \pi ^4_u 3 \sigma ^2_g \nonumber\] Contributors Adapted from "Quantum States of Atoms and Molecules" by David M. Hanson, Erica Harvey, Robert Sweeney, Theresa Julia Zielinski
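Equation (10-9) is easy to probe numerically. The sketch below uses made-up one-dimensional spin orbitals (the shapes are illustrative, not from the text) just to verify the two properties a Slater determinant guarantees: antisymmetry under electron exchange, and vanishing when two electrons occupy the same coordinates.

```python
import numpy as np

# Two made-up, distinct spin orbitals (illustrative shapes only)
chi_A = lambda x: np.exp(-np.abs(x))
chi_B = lambda x: x * np.exp(-np.abs(x) / 2)

def slater(x1, x2):
    """Two-electron wavefunction as a normalized 2x2 Slater determinant."""
    mat = np.array([[chi_A(x1), chi_B(x1)],
                    [chi_A(x2), chi_B(x2)]])
    return np.linalg.det(mat) / np.sqrt(2)

x1, x2 = 0.3, -1.1
print(np.isclose(slater(x1, x2), -slater(x2, x1)))  # True: antisymmetric
print(np.isclose(slater(x1, x1), 0.0))              # True: Pauli exclusion
```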
https://doi.org/10.1351/goldbook.P04711 The ease of distortion of the electron cloud of a molecular entity by an electric field (such as that due to the proximity of a charged reagent). It is experimentally measured as the ratio of the induced dipole moment (\(\mu _{\mathrm{ind}}\)) to the field \(E\) which induces it: \[\alpha =\frac{\mu _{\text{ind}}}{E}\] The units of \(\alpha \) are \(\text{C}\ \text{m}^{2}\ \text{V}^{-1}\). In ordinary usage the term refers to the 'mean polarizability', i.e., the average over three rectilinear axes of the molecule. Polarizabilities in different directions (e.g. along the bond in Cl\(_2\), called 'longitudinal polarizability', and in the direction perpendicular to the bond, called 'transverse polarizability') can be distinguished, at least in principle. Polarizability along the bond joining a substituent to the rest of the molecule is seen in certain modern theoretical approaches as a factor influencing chemical reactivity, etc., and parametrization thereof has been proposed. See also: electric polarizability
Can somebody explain to me whether the Rebonato swaption volatility approximation formula is accurate for only ATM strikes, and if yes, why? Can it also be used for ITM and OTM strikes? My findings: Let $0 < T_0 < T_1 < \ldots < T_N$ be a tenor structure. Consider a payer swaption that gives the right to enter into a payer interest rate swap at $T_0$ with payments of both the floating and the fixed leg on $T_1,\ldots,T_N$. The fixed rate is set to $K$. I have implemented the Rebonato swaption volatility approximation formula in Matlab as $$\upsilon^{REB}= \sqrt{\frac{\sum_{n=0}^{N-1}\sum_{k=0}^{N-1}w_n\left(0\right)w_k\left(0\right)L_n\left(0\right)L_k\left(0\right)\rho_{n,k}\int_0^{T_0}\sigma_n\left(t\right)\sigma_k\left(t\right)dt}{SwapRate\left(0\right)^2}}=\sqrt{\frac{\sum_{n=0}^{N-1}\sum_{k=0}^{N-1}w_n\left(0\right)w_k\left(0\right)L_n\left(0\right)L_k\left(0\right)\rho_{n,k}\int_0^{T_0}\sigma_n\left(t\right)\sigma_k\left(t\right)dt}{\left(\sum_{n=0}^{N-1}w_n\left(0\right)L_n\left(0\right)\right)^2}},$$ where $L_n\left(0\right):=L\left(0;T_n,T_{n+1}\right)$ represents the initial Libor curve and $w_n\left(0\right)$ are the weights defined as $$w_n\left(t\right) = \frac{\tau_n P\left(t,T_{n+1}\right)}{\sum_{r=0}^{N-1} \tau_r P\left(t,T_{r+1}\right)},$$ with $\tau_n =T_{n+1}-T_n$. The instantaneous volatilities $\sigma_n\left(t\right)$ are given by the following parametrization: $$\sigma_n\left(t\right) = \phi_n\left(a+b\left(T_n-t\right)\right)e^{-c\left(T_n-t\right)}+d.$$ To get the swaption price at time $0$, I have used this swaption volatility approximation as an input in Black's formula: $$V_{swaption}\left(0\right) = Black\left(K,SwapRate\left(0\right),\upsilon^{REB}\right) =Black\left(K,\sum_{n=0}^{N-1}w_n\left(0\right)L_n\left(0\right),\upsilon^{REB}\right)$$ In order to assess the accuracy of the Rebonato approximation formula, I have compared the prices of various swaptions obtained by plugging the approximation volatility into Black's formula (as above) with the prices obtained by a Monte Carlo evaluation doing 1000000 simulations. I was particularly interested in the accuracy across different strikes $K$. To illustrate this, consider the 4Y10Y swaption and its corresponding ATM, ATM+1%, ATM+2% and ATM+3% strikes (the ATM strike is $K=SwapRate\left(0\right)$). My findings were that as you move further away from the ATM strike, the approximation gets worse (the difference between the Monte Carlo price and the price with the Rebonato swaption approximation volatility increases). In concrete numbers, the difference for the ATM strike is 9 bp and for ATM+3% it is 36 bp. I have searched the literature for an explanation, but cannot find any. As far as I have understood, no assumptions involving the strike are made in deriving the Rebonato formula. Brigo and Mercurio also perform an accuracy test of the Rebonato formula in their book 'Interest Rate Models - Theory and Practice', namely: "The results are based on a comparison of Rebonato's formula with the volatilities that plugged into Black's formula lead to the Monte Carlo prices of the corresponding at-the-money swaptions." Furthermore, Jäckel and Rebonato analyze in their paper 'Linking Caplet and Swaption Volatilities in a BGM/J Framework: Approximate Solutions' how well the approximation performs by comparing the ATM swaption prices obtained by the Rebonato volatility and the Monte Carlo ATM prices. Is it coincidence that I can only find results for ATM swaptions, or does Rebonato's swaption volatility approximation formula really not perform well for ITM and OTM swaptions? Any help is appreciated.
Thanks in advance.
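For concreteness, here is a condensed Python sketch of the implementation described above. All numeric inputs — the tenor structure, flat Libors, abcd parameters, $\phi_n=1$, and the exponential correlation — are illustrative placeholders, not the values from my Matlab code.

```python
import numpy as np
from scipy.integrate import quad

# toy inputs (illustrative): 4Y10Y swaption, annual accruals
T = np.arange(4.0, 15.0)                 # T_0 .. T_10
tau = np.diff(T)                         # tau_n, all 1.0 here
L0 = np.full(10, 0.03)                   # L_n(0), flat for illustration
P = np.cumprod(1.0 / (1.0 + tau * L0))   # P(0, T_{n+1}) up to a common factor
w = tau * P / np.sum(tau * P)            # weights w_n(0); common factor cancels
a, b_, c, d = 0.02, 0.1, 0.5, 0.1        # abcd parameters, with phi_n = 1
rho = np.fromfunction(lambda i, j: np.exp(-0.1 * np.abs(i - j)), (10, 10))

sigma = lambda t, Tn: (a + b_ * (Tn - t)) * np.exp(-c * (Tn - t)) + d

S0 = np.sum(w * L0)                      # SwapRate(0)
var = 0.0
for n in range(10):
    for k in range(10):
        integral, _ = quad(lambda t: sigma(t, T[n]) * sigma(t, T[k]), 0.0, T[0])
        var += w[n] * w[k] * L0[n] * L0[k] * rho[n, k] * integral

v_reb = np.sqrt(var) / S0   # total Black vol over [0, T_0]; divide by sqrt(T[0]) to annualize
print(v_reb)
```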
You can't make any concrete statements about the monotonicity, convexity or even sign of the yield curve. Yields are almost always positive, and in the past (2007 and earlier) you could find people who would argue that yields must be positive, typically using a no-arbitrage argument. But recent history has shown us that it is possible for even 10Y yields to be negative (e.g. Switzerland, Germany). The yield curve is normally upward sloping (i.e. long rates are greater than short rates) but it is certainly possible for the yield curve to be downward sloping, particularly in a recession, or if interest rates are expected to fall. It is also possible for the yield curve to be non-monotonic, displaying one or more turning points (particularly on the short end). The yield curve is normally concave, but it is possible for it to be convex, or even to be neither concave nor convex. Convexity can reflect expectations of yield curve steepening. In general, the shape of the yield curve is a combination of:

A. Expectations about future interest rate movements (including changes in the level and slope of the curve).

B. Risk premiums (longer-dated bonds are more risky, so yields are generally higher to compensate for the additional risk).

C. Supply and demand, both at the macro level (e.g. pension funds may demand longer term bonds, pushing down yields at the long end) and the micro scale (reflecting e.g. preferences for on-the-run vs. off-the-run bonds, or deliverability into futures contracts).

D. Convexity. Longer term bonds have a convexity advantage, and so their yields may be lower to compensate for this.

I will try to return to this answer with more mathematical detail later! Here's the mathematical detail that I promised. A good source for this information (and much more) is the seven-part series "Understanding the Yield Curve" by Antti Ilmanen. This was a series published by Salomon Brothers in the 1990s. I don't know a good source for it, but you can reasonably easily find PDF copies available for download. The scenario we'll work in is a market for zero coupon bonds, with one bond issued for each year $n = 1, \dots, N$. The yield curve we'll compute will be the annually compounded zero coupon curve - from this curve you can compute the par yield curve for coupon-paying bonds, and it's also easy to generalize to frequencies shorter than 1 year by assuming that $n$ stands for the number of quarters, months or whatever. The shortest holding period we'll consider is one year. The price of a one year zero is $p_1$, and the one year yield is defined by $$p_1 = \frac{1}{1 + y_1}$$ We assume that there is zero probability of default, so this is a risk-free investment - if you invest $p_1$ now, you are guaranteed to get back 1 in a year's time. The price of an $n$-year zero defines the $n$-year yield, $$p_n = \frac{1}{(1 + y_n)^n}$$ It's also very useful to define the forward rates $f_{m,n}$ which are the implied rates for investment between years $m$ and $n$. You can lock in this rate by shorting an $m$-year bond and buying $p_m/p_n$ of an $n$-year bond, which costs you nothing up front.
If you compound for $m$ years at $y_m$ and a further $n-m$ years at $f_{m,n}$ this must be the same as compounding for $n$ years at $y_n$, so you have $$(1+y_m)^m (1+f_{m,n})^{n-m} = (1+y_n)^n$$ which you can solve for $f_{m,n}$, getting $$\begin{align}f_{m,n} & = \left[ \frac{(1+y_n)^n}{(1+y_m)^m} \right]^{1/(n-m)} - 1 \\ & \approx y_n + \frac{m}{n-m}(y_n - y_m)\end{align}$$ It is often useful to invert your thinking, and consider the forward rates to be the fundamental quantities, and the zero coupon rates and par yield curve to be the derived quantities. In particular, the curve of one year forward rates $(n-1)$ years forward are sufficient to reconstruct all the zero coupon rates, since $$(1+y_n)^n = (1+f_{0,1})(1+f_{1,2})(1+f_{2,3}) \cdots (1+f_{n-1,n})$$ Now we need to think about return on holding an $n$ year bond for one year. By the end of the year, it has become an $(n-1)$ year bond, and it should be priced using the $(n-1)$ year yield. Using a * to indicate quantities measured one year later, the holding period return is $$1 + r_n = \frac{p^*_{n-1}}{p_n} = \frac{ (1+y_n)^n }{ (1+y^*_{n-1})^{n-1}} $$ Notice that if $y^*_{n-1}=y_{n-1}$ (i.e. if the yield curve is unchanged) then the holding period return is equal to $f_{n-1,n}$, the one year forward rate (n - 1) years forward. If we express the (n - 1) year yield one year hence in terms of the old yield plus a delta, $$y^*_{n-1} = y_{n-1} + \Delta y_{n-1}$$ then we can expand the holding period return in powers of $\Delta y_{n-1}$, getting $$\begin{align}1 + r_n& \approx \frac{p_{n-1}}{p_n} \left( 1 - D_{n-1} \Delta y_{n-1} + \frac{1}{2} C_{n-1} (\Delta y_{n-1})^2 + \cdots \right) \\& \approx (1 + f_{n-1,n}) \left( 1 - D_{n-1} \Delta y_{n-1} + \frac{1}{2} C_{n-1} (\Delta y_{n-1})^2 + \cdots \right)\end{align}$$ where $D_{n-1}$ and $C_{n-1}$ are the modified duration and convexity for the $n-1$ year bond, $$\begin{align}D_{n} & = \frac{n}{1+y_{n}} \\C_{n} & = \frac{n(n+1)}{(1+y_{n})^2}\end{align}$$ The excess holding period return $1 + h_n$ is the yield in excess of the risk-free rate, i.e. $$1 + h_n = \frac{1 + r_n}{1 + y_1}$$ so we have $$1 + h_n \approx \frac{1 + f_{n-1,n}}{1+y_1} \left( 1 - D_{n-1} \Delta y_{n-1} + \frac{1}{2} C_{n-1} (\Delta y_{n-1})^2 + \cdots \right)$$ Now we are in a position to start making sense of the shape of the yield curve. Taking expectations of both sides of this equation gives us the relationship between holding period returns, yield curve shape, convexity and expected interest rate changes, $$\mathbb{E}(1 + h_n) = \frac{1 + f_{n-1,n}}{1+y_1} \left( 1 - D_{n-1} \mathbb{E}(\Delta y_{n-1}) + \frac{1}{2} C_{n-1} \sigma_{n-1}^2 + \cdots \right)$$ where $\sigma_{n-1}$ is the volatility of yield curve movements (which can be estimated empirically, inferred from option prices, or plugged in from a model). Risk premiums can also influence the shape of the yield curve. The term risk premium is the additional expected return required to compensate for the increased risk of holding long-maturity bonds. The term risk premium $\xi_n$ is therefore the expected excess holding period return, $$1 + \xi_n = \mathbb{E}(1 + h_n)$$ We can now rearrange to find the forward rates in terms of the risk-free rate, term risk premium, expected interest rate changes and convexity advantage. 
Using the fact that $p_{n-1}/p_n = 1 + f_{n-1,n}$ we get $$1 + f_{n-1,n} \approx \frac{ (1 + y_1) (1 + \xi_n)}{ 1 - D_{n-1} \mathbb{E}(\Delta y_{n-1}) + \frac{1}{2} C_{n-1} \sigma_{n-1}^2 }$$ We can now reconstruct the zero coupon rates from these forward rates, and reconstruct the par yield curve from the zero coupon rates. Note that although the expression for the forward rate $f_{n-1,n}$ depends on durations and convexities, which in turn depend on yields, the yield dependence is such that $f_{n-1,n}$ only depends on $y_{n-1}$. Therefore we can 'bootstrap' the curve from the inputs, first computing $f_{0,1} = y_1$, then computing $f_{1,2}$ (which depends on $y_1$) and using that to compute $y_2$, which is used in the computation of $f_{2,3}$, etc.

This discussion highlights several important intuitions when thinking about the yield curve:

- The fundamental quantities are the one year forward rates - they have the cleanest representation in terms of the term risk premium, expected interest rate changes and convexity.
- Short rates drive the yield curve by increasing all the forward rates. The central bank typically only has the ability to move the short end of the yield curve, but this is transmitted to the rest of the yield curve by the above equation.
- The term premium $\xi_n$ is typically monotonically increasing in $n$, which partly explains the persistent upward slope of the yield curve. It does not have to be the case, however, and in any case the term premium is not a directly measurable quantity.
- Expectations of increasing interest rates make the yield curve more steeply sloped (since $D_n$ is a monotonically increasing function of $n$). Long maturity bonds lose more when interest rates rise, so their yields need to be higher to offset this. Expectations of falling rates lead to a flatter curve, or even an inverted curve.
- Expectations of yield curve steepening lead to a more convex yield curve, and expectations of yield curve flattening lead to a more concave yield curve.
- The tendency of the yield curve to flatten at long maturities (or even to develop a hump) is explained by the convexity advantage of longer-term bonds. This hump is more pronounced when the volatility of rates rises.
- Since neither the term premium nor interest rate expectations are directly measurable, it is a matter of taste how you assign weight between the expectations and risk-premium terms. This is important for investors, because you want to invest in bonds when the risk premium term is high (i.e. yields are high because expected returns are high) but not when rates are expected to rise (i.e. yields are high because capital losses are expected).

Two older and largely discredited theories of the yield curve are the "pure expectations hypothesis", which says that only expectations matter and there is no term premium (i.e. $\xi_n=0$), and the "risk premium hypothesis", which says that the yield curve contains no expectations about interest rates and only encodes risk premium (i.e. $\mathbb{E}(\Delta y_{n-1}) = 0$). Neither of these is true, but in my (subjective and unquantified) opinion, the risk premium hypothesis is closer to being true.

I haven't talked about supply-demand effects, either on the micro or macro scale. On the macro scale, these can be incorporated along with the term premium into expected returns. High demand for long term bonds will tend to decrease expected returns at the long end of the yield curve, for example (equivalently, you would have a smaller term risk premium at the long end).
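The bootstrap described above is mechanical enough to sketch in a few lines. This is my own illustration; the inputs $\xi_n$, $\mathbb{E}(\Delta y_{n-1})$ and $\sigma_{n-1}$ below are arbitrary placeholders, not calibrated values:

```python
# A minimal sketch of the bootstrap: forwards f_{n-1,n} built from the short
# rate, term premia, expected yield changes and vols, then chained into zeros.

def bootstrap_curve(y1, xi, e_dy, sigma, N):
    y = {1: y1}
    forwards = {1: y1}                 # f_{0,1} = y_1
    growth = 1 + y1                    # running product of (1 + f_{k-1,k})
    for n in range(2, N + 1):
        D = (n - 1) / (1 + y[n - 1])               # duration of the (n-1)y zero
        C = (n - 1) * n / (1 + y[n - 1]) ** 2      # convexity of the (n-1)y zero
        f = (1 + y1) * (1 + xi[n]) / (1 - D * e_dy[n] + 0.5 * C * sigma[n] ** 2) - 1
        forwards[n] = f
        growth *= 1 + f
        y[n] = growth ** (1.0 / n) - 1             # (1+y_n)^n = product of forwards
    return y, forwards

# Toy inputs: 10bp/year term premium, no expected moves, 80bp yield vol.
N = 10
xi = {n: 0.001 * n for n in range(2, N + 1)}
zeros, fwds = bootstrap_curve(0.03, xi, {n: 0.0 for n in xi}, {n: 0.008 for n in xi}, N)
print(zeros[N], fwds[N])
```

Even with these toy inputs you can see the mechanics: the premium term pushes the long forwards up, while the convexity term in the denominator pulls them back down.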
Microstructure effects do not fit into the above model, since I have assumed zero coupon bonds with essentially infinite liquidity, a liquid market for borrowing and lending the bonds, and no taxes. However, micro effects can be important for measuring and building yield curves in the real world. For example:

- A bond which has high demand in the repo market can be financed very cheaply, which tends to decrease its yield.
- Some bonds are generally in higher demand, for example newly issued 10Y and 5Y notes, which tends to decrease their yield.
- Bonds with a low coupon will recognize more of their returns as capital gains, rather than as income, which can have a tax advantage. Therefore you might expect their yields to be lower to offset this advantage, depending on the aggregate income and capital gains tax rates of the investment community.

I thought it could be useful to illustrate my initial points with some examples of yield curves.

First, a "normal" curve - the USD swap curve on 7th May 2001. The curve is positive, upward sloping and concave.

The EUR swap curve in December 2017 had negative swap rates out to seven years. It also displays a non-monotonic shape, with 1Y rates above 2Y rates.

In late December 2009, as the first inklings of the Eurozone debt crisis led to widespread expectations of rate cuts, the EUR swap curve became inverted, at least in the 1-10Y sector.

Finally, in late 2008 the US swap curve displayed an unusual humped shape. Volatile interest rates increased the convexity value of long-dated swaps, so yields decreased to compensate. At the same time the front of the curve remained steep, reflecting both a belief that interest rate cuts were broadly done, and a risk premium for holding longer dated (5-10Y) duration risk.
The average equilibrium temperature can be obtained from the Stefan-Boltzmann law; for your data, 293.5 K (20 °C). Compensating for the Earth-like atmosphere (+15 K for Earth, closer to +12.5 K for this planet), we have an average temperature of approximately 306 K (33 °C). Quite hot, as expected from a higher solar flux and smaller albedo. Another useful average we can get from this law is the equatorial average: 311 K without the atmosphere, ~323 K compensated.

Equations for temperature estimations without an atmosphere:

Effective influx: $I_e = S \cdot (1 - \text{albedo})$, where $S$ is the solar flux

Global average $= \left(\frac{I_e}{4\sigma}\right)^{\frac{1}{4}}$

Equatorial average $= \left(\frac{I_e}{\pi \sigma}\right)^{\frac{1}{4}}$

Stationary sun-in-zenith average $= \left(\frac{I_e}{\sigma}\right)^{\frac{1}{4}}$

where $\sigma$ is the Stefan-Boltzmann constant ($\sigma = 5.67\times10^{-8}\ \mathrm{W\,m^{-2}\,K^{-4}}$) and $I_e$ is the effective influx. For a rapidly rotating planet, use the equatorial average for the equator temperature; for a very slowly rotating planet, use the sun-in-zenith equation for the peak temperature. For a case in between those extremes, use something in between those equations.

Your planet seems to be divided into two regions, a lowland and a highland. We find the highest temperature variations on the equatorial part of the highland, where a necessarily low cloud cover gives huge, desert-like variations, reaching almost 90 °C shortly after noon (actually 130 °C if we calculate the black-body equilibrium, but we must compensate for atmospheric convection), and less than 0 °C (perhaps as low as -15 °C) shortly before dawn. In the lowland, the atmosphere, combined with clouds formed by the lakes, gives more inertia to the system, thereby limiting the variations (0-50 °C).
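A minimal sketch of these formulas in code (the flux and albedo below are Earth-like placeholders, not the asker's actual numbers):

```python
import math

SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def effective_influx(solar_flux, albedo):
    return solar_flux * (1 - albedo)

def global_average(i_e):        # whole-sphere average of a fast rotator
    return (i_e / (4 * SIGMA)) ** 0.25

def equatorial_average(i_e):    # equator of a rapidly rotating planet
    return (i_e / (math.pi * SIGMA)) ** 0.25

def zenith_average(i_e):        # sub-stellar point of a very slow rotator
    return (i_e / SIGMA) ** 0.25

# Sanity check with Earth-like numbers: ~255 K effective temperature globally.
i_e = effective_influx(1361.0, 0.30)
print(global_average(i_e), equatorial_average(i_e), zenith_average(i_e))
```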
Skills to Develop

Add and subtract complex numbers. Multiply and divide complex numbers. Solve quadratic equations with complex numbers.

Discovered by Benoit Mandelbrot around 1980, the Mandelbrot Set is one of the most recognizable fractal images. The image is built on the theory of self-similarity and the operation of iteration. Zooming in on a fractal image brings many surprises, particularly in the high level of repetition of detail that appears as magnification increases. The equation that generates this image turns out to be rather simple.

Figure \(\PageIndex{1}\): The Mandelbrot set exhibits self-similarity, which is best shown in an animation.

In order to better understand it, we need to become familiar with a new set of numbers. Keep in mind that the study of mathematics continuously builds upon itself. Negative integers, for example, fill a void left by the set of positive integers. The set of rational numbers, in turn, fills a void left by the set of integers. The set of real numbers fills a void left by the set of rational numbers. Not surprisingly, the set of real numbers has voids as well. In this section, we will explore a set of numbers that fills voids in the set of real numbers and find out how to work within it.

Expressing Square Roots of Negative Numbers as Multiples of \(i\)

We know how to find the square root of any positive real number. In a similar way, we can find the square root of any negative number. The difference is that the root is not real. If the value in the radicand is negative, the root is said to be an imaginary number. The imaginary number \(i\) is defined as the square root of \(−1\). \[\sqrt{-1}=i\] So, using properties of radicals, \[i^2=(\sqrt{-1})^2=-1\] We can write the square root of any negative number as a multiple of \(i\). Consider the square root of \(−49\). \[\begin{align*} \sqrt{-49}&= \sqrt{49\times(-1)}\\[4pt] &= \sqrt{49}\sqrt{-1}\\[4pt] &= 7i \end{align*}\]

A complex number is the sum of a real number and an imaginary number. A complex number is expressed in standard form when written \(a+bi\), where \(a\) is the real part and \(b\) is the imaginary part. For example, \(5+2i\) is a complex number. So, too, is \(3+4i\sqrt{3}\). Imaginary numbers differ from real numbers in that a squared imaginary number produces a negative real number. Recall that when a positive real number is squared, the result is a positive real number, and when a negative real number is squared, the result is also a positive real number. Complex numbers consist of real and imaginary numbers.

Exercise \(\PageIndex{1}\)

Express \(\sqrt{-24}\) in standard form.

Answer

\(\sqrt{-24}=0+2i\sqrt{6}\)

Plotting a Complex Number on the Complex Plane

We cannot plot complex numbers on a number line as we might real numbers. However, we can still represent them graphically. To represent a complex number, we need to address the two components of the number. We use the complex plane, which is a coordinate system in which the horizontal axis represents the real component and the vertical axis represents the imaginary component. Complex numbers are the points on the plane, expressed as ordered pairs \((a,b)\), where \(a\) represents the coordinate for the horizontal axis and \(b\) represents the coordinate for the vertical axis. Let's consider the number \(−2+3i\). The real part of the complex number is \(−2\) and the imaginary part is \(3\). We plot the ordered pair \((−2,3)\) to represent the complex number \(−2+3i\), as shown in Figure \(\PageIndex{2}\).
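As a quick aside (not part of the original text), Python's `cmath` module returns exactly these principal square roots:

```python
# A small illustration of sqrt(-49) = 7i and sqrt(-24) = 2*sqrt(6) i.
import cmath

print(cmath.sqrt(-49))   # 7j
print(cmath.sqrt(-24))   # ~4.89898j, i.e. 2*sqrt(6) i
```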
Figure \(\PageIndex{2}\)

Plot the complex number \(3−4i\) on the complex plane.

Solution

The real part of the complex number is \(3\), and the imaginary part is \(−4\). We plot the ordered pair \((3,−4)\) as shown in Figure \(\PageIndex{4}\).

Figure \(\PageIndex{4}\)

Exercise \(\PageIndex{2}\)

Plot the complex number \(−4−i\) on the complex plane.

Answer

Figure \(\PageIndex{5}\)

Adding and Subtracting Complex Numbers

Just as with real numbers, we can perform arithmetic operations on complex numbers. To add or subtract complex numbers, we combine the real parts and then combine the imaginary parts.

Howto: Given two complex numbers, find the sum or difference

Identify the real and imaginary parts of each number. Add or subtract the real parts. Add or subtract the imaginary parts.

Add or subtract as indicated.

\((3−4i)+(2+5i)\)

\((−5+7i)−(−11+2i)\)

Solution

\[\begin{align*} (3-4i)+(2+5i)&= 3-4i+2+5i\\[4pt] &= 3+2+(-4i)+5i\\[4pt] &= (3+2)+(-4+5)i\\[4pt] &= 5+i \end{align*}\]

\[\begin{align*} (-5+7i)-(-11+2i)&= -5+7i+11-2i\\[4pt] &= -5+11+7i-2i\\[4pt] &= (-5+11)+(7-2)i\\[4pt] &= 6+5i \end{align*}\]

Exercise \(\PageIndex{3}\)

Subtract \(2+5i\) from \(3−4i\).

Answer

\((3−4i)−(2+5i)=1−9i\)

Multiplying Complex Numbers

Multiplying complex numbers is much like multiplying binomials. The major difference is that we work with the real and imaginary parts separately.

Multiplying a Complex Number by a Real Number

Let's begin by multiplying a complex number by a real number. We distribute the real number just as we would with a binomial. Consider, for example, \(3(6+2i)=18+6i\).

Howto: Given a complex number and a real number, multiply to find the product

Use the distributive property. Simplify.

Exercise \(\PageIndex{4}\)

Find the product: \(\dfrac{1}{2}(5−2i)\).

Answer

\(\dfrac{5}{2}-i\)

Multiplying Complex Numbers Together

Now, let's multiply two complex numbers. We can use either the distributive property or, more specifically, the FOIL method, because we are dealing with binomials. Recall that FOIL is an acronym for multiplying First, Outer, Inner, and Last terms together. The difference with complex numbers is that when we get a squared term, \(i^2\), it equals \(-1\).

\[\begin{align*} (a+bi)(c+di)&= ac+adi+bci+bdi^2\\[4pt] &= ac+adi+bci+bd(-1)\qquad i^2 = -1\\[4pt] &= ac+adi+bci-bd\\[4pt] &= (ac-bd)+(ad+bc)i \end{align*}\]

Howto: Given two complex numbers, multiply to find the product

Use the distributive property or the FOIL method. Remember that \(i^2=-1\). Group together the real terms and the imaginary terms.

Exercise \(\PageIndex{5}\)

Multiply: \((3−4i)(2+3i)\).

Answer

\(18+i\)

Dividing Complex Numbers

Dividing two complex numbers is more complicated than adding, subtracting, or multiplying because we cannot divide by an imaginary number, meaning that any fraction must have a real-number denominator to write the answer in standard form \(a+bi\). We need to find a term by which we can multiply the numerator and the denominator that will eliminate the imaginary portion of the denominator so that we end up with a real number as the denominator. This term is called the complex conjugate of the denominator, which is found by changing the sign of the imaginary part of the complex number. In other words, the complex conjugate of \(a+bi\) is \(a−bi\). For example, the product of \(a+bi\) and \(a−bi\) is

\[\begin{align*} (a+bi)(a-bi)&= a^2-abi+abi-b^2i^2\\[4pt] &= a^2+b^2 \end{align*}\]

The result is a real number.
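Python's built-in complex type follows these same rules, which makes for a quick way to check the worked examples above (a side illustration, not part of the text; the literal `j` plays the role of \(i\)):

```python
# The examples above, checked with Python's complex literals.
print((3 - 4j) + (2 + 5j))        # (5+1j)   -> 5 + i
print((-5 + 7j) - (-11 + 2j))     # (6+5j)   -> 6 + 5i
print((3 - 4j) - (2 + 5j))        # (1-9j)   -> 1 - 9i
print((3 - 4j) * (2 + 3j))        # (18+1j)  -> FOIL with i^2 = -1
```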
Note that complex conjugates have an opposite relationship: The complex conjugate of \(a+bi\) is \(a−bi\), and the complex conjugate of \(a−bi\) is \(a+bi\). Further, when a quadratic equation with real coefficients has complex solutions, the solutions are always complex conjugates of one another.

Suppose we want to divide \(c+di\) by \(a+bi\), where neither \(a\) nor \(b\) equals zero. We first write the division as a fraction, then find the complex conjugate of the denominator, and multiply.

Multiply the numerator and denominator by the complex conjugate of the denominator.

\[\begin{align*} \dfrac{(c+di)}{(a+bi)}\cdot \dfrac{(a-bi)}{(a-bi)}&= \dfrac{(c+di)(a-bi)}{(a+bi)(a-bi)}\\[4pt] &= \dfrac{ca-cbi+adi-bdi^2}{a^2-abi+abi-b^2i^2} \qquad \text{Apply the distributive property}\\[4pt] &= \dfrac{ca-cbi+adi-bd(-1)}{a^2-abi+abi-b^2(-1)} \qquad \text{Simplify, remembering that } i^2=-1\\[4pt] &= \dfrac{(ca+bd)+(ad-cb)i}{a^2+b^2} \end{align*}\]

Find the complex conjugate of each number.

\(2+i\sqrt{5}\)

\(-\dfrac{1}{2}i\)

Solution

The number is already in the form \(a+bi\). The complex conjugate is \(a−bi\), or \(2−i\sqrt{5}\).

We can rewrite this number in the form \(a+bi\) as \(0−\dfrac{1}{2}i\). The complex conjugate is \(a−bi\), or \(0+\dfrac{1}{2}i\). This can be written simply as \(\dfrac{1}{2}i\).

Analysis

Although we have seen that we can find the complex conjugate of an imaginary number, in practice we generally find the complex conjugates of only complex numbers with both a real and an imaginary component. To obtain a real number from an imaginary number, we can simply multiply by \(i\).

Exercise \(\PageIndex{6}\)

Find the complex conjugate of \(−3+4i\).

Answer

\(−3−4i\)

How to: Given two complex numbers, divide one by the other

Write the division problem as a fraction. Determine the complex conjugate of the denominator. Multiply the numerator and denominator of the fraction by the complex conjugate of the denominator. Simplify.

Divide \((2+5i)\) by \((4−i)\).

Solution

We begin by writing the problem as a fraction. Then we multiply the numerator and denominator by the complex conjugate of the denominator. \[\dfrac{(2+5i)}{(4−i)}⋅\dfrac{(4+i)}{(4+i)} \nonumber \] To multiply two complex numbers, we expand the product as we would with polynomials (using FOIL). \[\begin{align*} \dfrac{(2+5i)}{(4-i)}\cdot \dfrac{(4+i)}{(4+i)}&= \dfrac{8+2i+20i+5i^2}{16+4i-4i-i^2}\\[4pt] &= \dfrac{8+2i+20i+5(-1)}{16+4i-4i-(-1)}\qquad i^2=-1 \\[4pt] &= \dfrac{3+22i}{17}\\[4pt] &= \dfrac{3}{17}+\dfrac{22}{17}i \end{align*}\] Separating the real and imaginary parts in the last step expresses the quotient in standard form.

Simplifying Powers of \(i\)

The powers of \(i\) are cyclic. Let's look at what happens when we raise \(i\) to increasing powers. We can see that when we get to the fifth power of \(i\), it is equal to the first power. As we continue to multiply \(i\) by increasing powers, we will see a cycle of four. Let's examine the next four powers of \(i\). The cycle is repeated continuously: \(i,−1,−i,1,\) every four powers.

Evaluate: \(i^{35}\).

Solution

Since \(i^4=1\), we can simplify the problem by factoring out as many factors of \(i^4\) as possible. To do so, first determine how many times \(4\) goes into \(35\): \(35=4⋅8+3\). \[i^{35}=i^{4⋅8+3}=i^{4⋅8}⋅i^3={(i^4)}^8⋅i^3=1^8⋅i^3=i^3=−i \nonumber \]

Exercise \(\PageIndex{7}\)

Evaluate: \(i^{18}\)

Answer

\(−1\)

Q&A

Can we write \(i^{35}\) in other helpful ways?
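Again as a side check (not from the text), the division example and the four-power cycle can be verified directly in Python:

```python
# Dividing (2+5i) by (4-i) via the conjugate, and the cyclic powers of i.
num = (2 + 5j) * (4 + 1j)              # 3 + 22i
den = ((4 - 1j) * (4 + 1j)).real       # 17, a real number as promised
print(num / den)                       # (0.1764...+1.2941...j) = 3/17 + (22/17)i

print([1j ** k for k in range(1, 5)])  # the cycle i, -1, -i, 1
print(1j ** 35)                        # -1j (up to signed zeros), i.e. i^35 = -i
```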
As we saw in Example \(\PageIndex{8}\), we reduced \(i^{35}\) to \(i^3\) by dividing the exponent by \(4\) and using the remainder to find the simplified form. But perhaps another factorization of \(i^{35}\) may be more useful. Table \(\PageIndex{1}\) shows some other possible factorizations.

| Factorization of \(i^{35}\) | \(i^{34}⋅i\) | \(i^{33}⋅i^2\) | \(i^{31}⋅i^4\) | \(i^{19}⋅i^{16}\) |
|---|---|---|---|---|
| Reduced form | \({(i^2)}^{17}⋅i\) | \(i^{33}⋅(−1)\) | \(i^{31}⋅1\) | \(i^{19}⋅{(i^4)}^4\) |
| Simplified form | \({(−1)}^{17}⋅i\) | \(−i^{33}\) | \(i^{31}\) | \(i^{19}\) |

Each of these will eventually result in the answer we obtained above but may require several more steps than our earlier method.

Key Concepts

The square root of any negative number can be written as a multiple of \(i\). See Example. To plot a complex number, we use two number lines, crossed to form the complex plane. The horizontal axis is the real axis, and the vertical axis is the imaginary axis. See Example. Complex numbers can be added and subtracted by combining the real parts and combining the imaginary parts. See Example. Complex numbers can be multiplied and divided. The powers of \(i\) are cyclic, repeating every fourth one. See Example.
Waring problem

A problem in number theory formulated in 1770 by E. Waring in the following form: Any natural number is a sum of 4 squares, of 9 cubes and of 19 fourth-powers. In other words, for all $k\geq2$ there exists an $s=s(k)$, depending only on $k$, such that every natural number is the sum of $s$ $k$-th powers of non-negative integers. D. Hilbert in 1909 was the first to give a general solution of Waring's problem with a very rough estimate of the value of $s$ as a function of $k$; this is why the problem is sometimes known as the Hilbert-Waring problem. Let $J_{s,k}(N)$ be the number of solutions of the equation \begin{equation}\label{war}x_1^k+\cdots+x_s^k=N\end{equation} in non-negative integers. Hilbert's theorem then states that there exists an $s=s(k)$ for which $J_{s,k}(N)\geq1$ for any $N\geq1$. G.H. Hardy and J.E. Littlewood, who applied the circle method to the Waring problem, demonstrated in 1928 that for $s\geq(k-2)2^{k-1}+5$ the value of $J_{s,k}(N)$ is given by an asymptotic formula of the type \begin{equation}\label{asym}J_{s,k}(N)=AN^{s/k-1}+O(N^{s/k-1-\gamma}),\end{equation} where $A=A(N)\geq c_0>0$, while $c_0$ and $\gamma>0$ are constants. Consequently, if $N\geq N_0(k)$, equation \ref{war} has a solution. An elementary proof of Waring's problem was given in 1942 by Yu.V. Linnik. There exist many different generalizations of Waring's problem (the variables run through a certain subset of the set of natural numbers; the number $N$ is represented by polynomials $f_1(x_1),\ldots,f_s(x_s)$ rather than by monomials $x_1^k,\ldots,x_s^k$; equation \ref{war} is replaced by a congruence, etc.).

Research on Waring's problem has mainly focused on sharpening estimates for the following three questions:

Find the smallest $s$ such that \ref{war} has solutions for all sufficiently large $N$;

Find the smallest $s$ such that \ref{war} has solutions for all $N$;

Find the smallest $s$ such that the number of solutions to \ref{war}, $J_{s,k}(N)$, is given by the asymptotic formula \ref{asym}.

These quantities are known as $G(k)$, $g(k)$, and $\tilde{G}(k)$ respectively. Clearly, $\tilde{G}(k)\geq G(k)$ and $g(k)\geq G(k)$. The progress on bounds for these quantities is detailed below.

Solvable for $N$ sufficiently large

Let $G(k)$ be the smallest integer such that equation \ref{war} is solvable for $s\geq G(k)$ and $N$ sufficiently large depending on $k$. It is known that $G(k)\geq k+1$. It was proved in 1934 by I.M. Vinogradov, using his own method, that $$G(k)\leq 3k(\ln k+9).$$ Moreover, many results are available concerning $G(k)$ for small values of $k$: $G(4)=16$ (H. Davenport, 1939); $G(3)\leq7$ (Yu.V. Linnik, 1942).

Solvable for all $N$

Let $g(k)$ be the smallest integer such that equation \ref{war} is solvable for $s\geq g(k)$ and $N\geq1$. It was shown in 1936 by L. Dickson and S. Pillai, who also used the Vinogradov method, that $$g(k)=2^k+\left[\left(\frac{3}{2}\right)^k\right]-2$$ for all $k>6$ for which $$\left(\frac{3}{2}\right)^k-\left[\left(\frac{3}{2}\right)^k\right]\leq1-\left(\frac{1}{2}\right)^k\left\{\left[\left(\frac{3}{2}\right)^k\right]+2\right\}.$$ The last condition was demonstrated in 1957 by K. Mahler for all sufficiently large $k$. It is known that $g(2)=4$ (J.L. Lagrange, 1770), $g(3)=9$ (A. Wieferich, A. Kempner, 1912), $g(4)=19$ (R. Balasubramanian, J. Deshouillers, F. Dress, 1986), and $g(5)=37$ (Chen Jingrun, 1964). See also Circle method and [HaWr]-[Sh].
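As a quick sanity check (my own aside, not part of the entry), the Dickson-Pillai formula also reproduces the known small values of $g(k)$ listed above:

```python
# g(k) = 2^k + floor((3/2)^k) - 2, checked against g(2)..g(5) = 4, 9, 19, 37.
from math import floor

def g(k):
    return 2 ** k + floor((3 / 2) ** k) - 2

print([g(k) for k in range(2, 6)])   # [4, 9, 19, 37]
```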
Asymptotic formula

Let $\tilde{G}(k)$ be the smallest integer such that the asymptotic formula \ref{asym} applies to $J_{s,k}(N)$ if $s\geq \tilde{G}(k)$. The result of Hardy and Littlewood mentioned above shows that $$\tilde{G}(k)\leq(k-2)2^{k-1}+5.$$ The first substantial improvement for large values of $k$ was obtained by Vinogradov, who showed that $$\tilde{G}(k)\leq 4k^2\ln k.$$ The current best bound for large values of $k$ was obtained by Wooley, who showed that $$\tilde{G}(k)\leq 2k^2-k^{4/3}+O(k).$$

References

[De] B.N. Delone, "The St Petersburg school of number theory", Moscow-Leningrad (1947) (in Russian) Zbl 0033.10403. Translated, American Mathematical Society (2005) ISBN 0-8218-3457-6 Zbl 1074.11002

[Hu] L.-K. Hua, "Abschätzungen von Exponentialsummen und ihre Anwendung in der Zahlentheorie", Enzyklopädie der Mathematischen Wissenschaften mit Einschluss ihrer Anwendungen, 1:2 (1959) (Heft 13, Teil 1)

[Kh] A.Ya. Khinchin, "Three pearls of number theory", Graylock (1952). Translation from the second, revised Russian ed. [1948] Zbl 0048.27202. Reprinted Dover (2003) ISBN 0486400263

[Vi] I.M. Vinogradov, "Selected works", Springer (1985) (translated from Russian)

[Vi2] I.M. Vinogradov, "The method of trigonometric sums in the theory of numbers", Interscience (1954) (translated from Russian)

[HaWr] G.H. Hardy, E.M. Wright, "An introduction to the theory of numbers", Oxford Univ. Press (1979), Chapt. 6

[Sh] D. Shanks, "Solved and unsolved problems in number theory", Chelsea, reprint (1978)

[Va] R.C. Vaughan, "The Hardy-Littlewood method", Cambridge Univ. Press (1981)

[Wo] T.D. Wooley, "Vinogradov's mean value theorem via efficient congruencing", Annals of Math. 175 (2012), 1575-1627

How to Cite This Entry: Waring problem. Encyclopedia of Mathematics. URL: http://www.encyclopediaofmath.org/index.php?title=Waring_problem&oldid=36151
Hello one and all! Is anyone here familiar with planar prolate spheroidal coordinates? I am reading a book on dynamics and the author states

If we introduce planar prolate spheroidal coordinates $(R, \sigma)$ based on the distance parameter $b$, then, in terms of the Cartesian coordinates $(x, z)$ and also of the plane polars $(r, \theta)$, we have the defining relations $$r\sin\theta = x = \pm\sqrt{R^2 - b^2}\,\sin\sigma, \qquad r\cos\theta = z = R\cos\sigma$$

I am having a tough time visualising what this is?

Consider the function $f(z) = \sin\left(\frac{1}{\cos(1/z)}\right)$. The point $z = 0$ is (a) a removable singularity, (b) a pole, (c) an essential singularity, (d) a non-isolated singularity. Since $\cos\left(\frac{1}{z}\right) = 1 - \frac{1}{2z^2} + \frac{1}{4!z^4} - \cdots = (1-y)$, where $y = \frac{1}{2z^2} - \frac{1}{4!...$

I am having trouble understanding non-isolated singularity points. An isolated singularity point I do kind of understand, it is when: a point $z_0$ is said to be isolated if $z_0$ is a singular point and has a neighborhood throughout which $f$ is analytic except at $z_0$. For example, why would $...

No worries. There's currently some kind of technical problem affecting the Stack Exchange chat network. It's been pretty flaky for several hours. Hopefully, it will be back to normal in the next hour or two, when business hours commence on the east coast of the USA...

The absolute value of a complex number $z=x+iy$ is defined as $\sqrt{x^2+y^2}$. Hence, when evaluating the absolute value of $x+i$ I get the number $\sqrt{x^2+1}$; but the answer to the problem says it's actually just $x^2+1$. Why? mmh, I probably should ask this on the forum. The full problem asks me to show that we can choose $\log(x+i)$ to be $$\log(x+i)=\log(1+x^2)+i\left(\frac{\pi}{2} - \arctan x\right)$$ So I'm trying to find the polar coordinates (absolute value and an argument $\theta$) of $x+i$ to then apply the $\log$ function on it.

Let $X$ be any nonempty set and $\sim$ be any equivalence relation on $X$. Then are the following true: (1) If $x=y$ then $x\sim y$. (2) If $x=y$ then $y\sim x$. (3) If $x=y$ and $y=z$ then $x\sim z$. Basically, I think that all three properties follow if we can prove (1), because if $x=y$ then since $y=x$, by (1) we would have $y\sim x$, proving (2). (3) will follow similarly. This question arose from an attempt to characterize equality on a set $X$ as the intersection of all equivalence relations on $X$.

I don't know whether this question is too trivial. But I have not yet seen any formal proof of the following statement: "Let $X$ be any nonempty set and $\sim$ be any equivalence relation on $X$. If $x=y$ then $x\sim y$."

That is definitely a new person, not going to classify as RHV yet as other users have already put the situation under control it seems... (comment on many many posts above)

In other news:

> C -2.5353672500000002 -1.9143250000000003 -0.5807385400000000 C -3.4331741299999998 -1.3244286800000000 -1.4594762299999999 C -3.6485676800000002 0.0734728100000000 -1.4738058999999999 C -2.9689624299999999 0.9078326800000001 -0.5942069900000000 C -2.0858929200000000 0.3286240400000000 0.3378783500000000 C -1.8445799400000003 -1.0963522200000000 0.3417561400000000 C -0.8438543100000000 -1.3752198200000001 1.3561451400000000 C -0.5670178500000000 -0.1418068400000000 2.0628359299999999

probably the weirdest bunch of data I've ever seen, with so many 000000s and 999999s

But I think that to prove the implication for transitivity, the use of the inference rule MP seems to be necessary.
But that would mean that for logics in which MP fails we wouldn't be able to prove the result. Also, in set theories without the Axiom of Extensionality the desired result will not hold. Am I right @AlessandroCodenotti?

@AlessandroCodenotti A precise formulation would help in this case, because I am trying to understand whether a proof of the statement which I mentioned at the outset really depends on the equality axioms or on the FOL axioms (without equality axioms). This would allow in some cases to define an "equality like" relation for set theories in which we don't have the Axiom of Extensionality.

Can someone give an intuitive explanation why $\mathcal{O}(x^2)-\mathcal{O}(x^2)=\mathcal{O}(x^2)$? The context is Taylor polynomials, so when $x\to 0$. I've seen a proof of this, but intuitively I don't understand it.

@schn: The minus is irrelevant (for example, the thing you are subtracting could be negative). When you add two things that are of the order of $x^2$, of course the sum is the same (or possibly smaller). For example, $3x^2-x^2=2x^2$. You could have $x^2+(x^3-x^2)=x^3$, which is still $\mathscr O(x^2)$.

@GFauxPas: You only know $|f(x)|\le K_1 x^2$ and $|g(x)|\le K_2 x^2$, so that won't be a valid proof, of course.

Let $f(z)=z^{n}+a_{n-1}z^{n-1}+\cdots+a_{0}$ be a complex polynomial such that $|f(z)|\leq 1$ for $|z|\leq 1$. I have to prove that $f(z)=z^{n}$. I tried it as: As $|f(z)|\leq 1$ for $|z|\leq 1$ we must have the coefficients $a_{0},a_{1},\cdots,a_{n-1}$ to be zero because by triangul...

@GFauxPas @TedShifrin Thanks for the replies. Now, why is it we're only interested when $x\to 0$? When we do a Taylor approximation centered at $x=0$, aren't we interested in all the values of our approximation, even those not near 0?

Indeed, one thing a lot of texts don't emphasize is this: if $P$ is a polynomial of degree $\le n$ and $f(x)-P(x)=\mathscr O(x^{n+1})$, then $P$ is the (unique) Taylor polynomial of degree $n$ of $f$ at $0$.
This is a welcome opportunity to discuss and clarify what statistical models mean and how we ought to think about them. Let's begin with definitions, so that the scope of this answer is in no doubt, and move on from there. To keep this post short, I will limit the examples and forgo all illustrations, trusting the reader to be able to supply them from experience.

Definitions

It looks possible to understand "test" in a very general sense as meaning any kind of statistical procedure: not only a null hypothesis test, but also estimation, prediction, and decision making, in either a Frequentist or Bayesian framework. That is because the distinction between "parametric" and "non-parametric" is separate from distinctions between types of procedures or distinctions between these frameworks. In any event, what makes a procedure statistical is that it models the world with probability distributions whose characteristics are not fully known. Quite abstractly, we conceive of data $X$ as arising by numerically coding the values of objects $\omega\in\Omega$; the particular data we are using correspond to a particular $\omega$; and there is a probability law $F$ that somehow determined the $\omega$ we actually have. This probability law is assumed to belong to some set $\Theta$. In a parametric setting, the elements $F\in\Theta$ correspond to finite collections of numbers $\theta(F)$, the parameters. In a non-parametric setting, there is no such correspondence. This usually is because we are unwilling to make strong assumptions about $F$.

The Nature of Models

It seems useful to make a further distinction that is rarely discussed. In some circumstances, $F$ is sure to be a fully accurate model for the data. Rather than define what I mean by "fully accurate," let me give an example. Take a survey of a finite, well-defined population in which the observations are binary, none will be missing, and there is no possibility of measurement error: destructive testing of a random sample of objects coming off an assembly line, for instance. The control we have over this situation--knowing the population and being able to select the sample truly randomly--assures the correctness of a Binomial model for the resulting counts.

In many--perhaps most--other cases, $\Theta$ is not "fully accurate." For instance, many analyses assume (either implicitly or explicitly) that $F$ is a Normal distribution. That's always physically impossible, because any actual measurement is subject to physical constraints on its possible range, whereas there are no such constraints on Normal distributions. We know at the outset that Normal assumptions are wrong!

To what extent is a not-fully-accurate model a problem? Consider what good physicists do. When a physicist uses Newtonian mechanics to solve a problem, it is because she knows that at this particular scale--these masses, these distances, these speeds--Newtonian mechanics is more than accurate enough to work. She will elect to complicate her analysis by considering quantum or relativistic effects (or both) only when the problem requires it. She is familiar with theorems that show, quantitatively, how Newtonian mechanics is a limiting case of quantum mechanics and of special relativity. Those theorems help her understand which theory to choose. This selection is usually not documented or even defended; it may even occur unconsciously: the choice is obvious. A good statistician always has comparable considerations in mind.
When she selects a procedure whose justification relies on a Normality assumption, for instance, she is weighing the extent to which the actual $F$ might depart from Normal behavior and how that could affect the procedure. In many cases the likely effect is so small that it needn't even be quantified: she "assumes Normality." In other cases the likely effect is unknown. In such circumstances she will run diagnostic tests to evaluate the departures from Normality and their effects on the results.

Consequences

It's starting to sound like the not-fully-accurate setting is hardly distinct from the nonparametric one: is there really any difference between assuming a parametric model and evaluating how reality departs from it, on the one hand, and assuming a non-parametric model on the other hand? Deep down, both are non-parametric.

In light of this discussion, let's reconsider conventional distinctions between parametric and non-parametric procedures.

"Non-parametric procedures are robust." So, to some extent, must all procedures be. The issue is not one of robustness vs non-robustness, but of how robust any procedure is. Just how much, and in what ways, does the true $F$ depart from the distributions in the assumed $\Theta$? As a function of those departures, how much are the test results affected? These are basic questions that apply in any setting, parametric or not.

"Non-parametric procedures don't require goodness-of-fit testing or distributional testing." This isn't generally true. "Non-parametric" is often mistakenly characterized as "distribution-free," in the sense of allowing $F$ to be literally any distribution, but this is almost never the case. Almost all non-parametric procedures do make assumptions that restrict $\Theta$. For instance, $X$ might be split into two sets for comparison, with a distribution $F_0$ governing one set and another distribution $F_1$ governing the other. Perhaps no assumption is made at all about $F_0$, but $F_1$ is assumed to be a translated version of $F_0$. That's what many comparisons of central tendency assume. The point is that there is a definite assumption made about $F$ in such tests, and it deserves to be checked just as much as any parametric assumption might be.

"Non-parametric procedures don't make assumptions." We have seen that they do. They only tend to make less-constraining assumptions than parametric procedures.

An undue focus on parametric vs non-parametric might be a counterproductive approach. It overlooks the main objective of statistical procedures, which is to improve understanding, make good decisions, or take appropriate action. Statistical procedures are selected based on how well they can be expected to perform in the problem context, in light of all other information and assumptions about the problem, and with regard to the consequences to all stakeholders in the outcome. The answer to "do these distinctions matter" would therefore appear to be "not really."
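To illustrate the kind of diagnostic thinking described above, here is a small simulation of my own (not from the answer): it estimates how often a nominal 95% t-interval for the mean actually covers the truth when $F$ is exponential rather than Normal.

```python
# Coverage of the textbook t-interval when the data are exponential (mean 1).
import random
import statistics

def t_interval_covers(sample, true_mean, t_crit=2.045):  # ~ t_{0.975} with 29 df
    m = statistics.mean(sample)
    se = statistics.stdev(sample) / len(sample) ** 0.5
    return abs(m - true_mean) <= t_crit * se

random.seed(0)
trials = 5000
hits = sum(
    t_interval_covers([random.expovariate(1.0) for _ in range(30)], 1.0)
    for _ in range(trials)
)
print(hits / trials)  # typically a bit below the nominal 0.95
```

The shortfall is modest here, which is exactly the sort of quantitative judgment ("small enough to ignore, or not?") the answer argues every procedure requires.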
Existence and stabilization results for a singular parabolic equation involving the fractional Laplacian

1. Université de Pau et des Pays de l'Adour, CNRS, E2S, LMAP UMR 5142, avenue de l'université, 64013 Pau cedex, France
2. Department of Mathematics, Indian Institute of Technology Delhi, Hauz Khas, New Delhi-110016, India

In this article we study the singular parabolic problem

$$(P_{t}^s) \quad \left\{\begin{split} u_t + (-\Delta)^s u & = u^{-q} + f(x,u), \; u>0 \;\text{in}\; (0,T)\times\Omega, \\ u & = 0 \;\text{in}\; (0,T)\times(\mathbb{R}^n\setminus\Omega), \\ u(0,x) & = u_0(x) \;\text{in}\; \mathbb{R}^n, \end{split}\right.$$

where $\Omega$ is a bounded domain in $\mathbb{R}^n$ with smooth boundary $\partial\Omega$, $n>2s$, $s\in(0,1)$, $q>0$ with $q(2s-1)<(2s+1)$, $u_0\in L^\infty(\Omega)\cap X_0(\Omega)$, and $T>0$. The nonlinearity $(x,y)\in\Omega\times\mathbb{R}^+\mapsto f(x,y)$ is assumed to satisfy, uniformly in $x\in\Omega$,

$$\limsup_{y\to+\infty}\frac{f(x,y)}{y} < \lambda_1^s(\Omega),$$

where $\lambda_1^s(\Omega)$ is the first eigenvalue of $(-\Delta)^s$ in $\Omega$ with homogeneous Dirichlet condition in $\mathbb{R}^n\setminus\Omega$. Existence of a solution to $(P_t^s)$ for such initial data $u_0$, and its stabilization as $t\to\infty$, are established.

Keywords: Non-local operator, fractional Laplacian, singular nonlinearity, parabolic equation, sub-super solution.

Mathematics Subject Classification: Primary: 35J35, 35J60; Secondary: 35J92.

Citation: Jacques Giacomoni, Tuhina Mukherjee, Konijeti Sreenadh. Existence and stabilization results for a singular parabolic equation involving the fractional Laplacian. Discrete & Continuous Dynamical Systems - S, 2019, 12 (2): 311-337. doi: 10.3934/dcdss.2019022
The thing with this question is that there is a question that seems to prove the opposite claim: Prove the map has a fixed point - someone look into this. How should one go about dealing with this question?

Suppose that $f: M \to M$ was onto. Then for every $x,y \in M$, $x \neq y$, there exist $x',y' \in M$ s.t. $f(x')=x$ and $f(y')=y$. Then $$d(x,y)=d(f(x'),f(y'))\leq c\, d(x',y')<d(x',y').$$ Let $B=\max_{(x,y) \in M^2} d(x,y)$; this exists since $M$ is compact and $d:M^2 \to \mathbb{R}$ is continuous. But, by the above fact, applied to a pair $(x,y)$ attaining $B$, there exist $x',y'$ s.t. $$B = d(x,y)<d(x',y'),$$ which contradicts the maximality of $B$.

This probably works. Define a distance function $r:M\times M\rightarrow \mathbb{R}$ such that $$ r(x,y) = d(x,y). $$ Note that $r(\cdot)$ is a continuous function, whose proof can be seen here: Is the distance function in a metric space (uniformly) continuous? Thus, since $r$ is continuous on the compact set $M\times M$, it attains its supremum, say at $(x^\ast, y^\ast)$. Note that $f(x^\ast),f(y^\ast)\in M$, and $$ r(f(x^\ast),f(y^\ast)) = d(f(x^\ast),f(y^\ast)) \leq c\, d(x^\ast, y^\ast) < d(x^\ast, y^\ast), $$ so any pair of points in the image $f[M]$ is strictly closer together than $d(x^\ast, y^\ast)$. If $f$ were onto, $(x^\ast, y^\ast)$ would itself be a pair of image points, giving $d(x^\ast,y^\ast) < d(x^\ast,y^\ast)$, a contradiction.

HINT: Let $p$ be the fixed point guaranteed by the earlier question. Show that there is an $x\in X$ that maximizes $d(p,x)$. Then show that $x\notin f[X]$.
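As a toy numerical illustration of the argument (my own, not from the thread), take $M = [0,1]$ and $f(x) = x/2 + 0.1$, which satisfies $d(f(x),f(y)) \le \tfrac{1}{2} d(x,y)$:

```python
# The image f([0,1]) = [0.1, 0.6] is a proper subset of [0,1], so f is not
# onto, while iteration converges to the unique fixed point x = 0.2.
f = lambda x: 0.5 * x + 0.1

pts = [i / 100 for i in range(101)]
image = [f(x) for x in pts]
print(min(image), max(image))   # 0.1 0.6

x = 1.0
for _ in range(50):
    x = f(x)
print(x)                        # ~0.2
```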
@Secret et al how's this for a video game? OE Cake! fluid dynamics simulator! have been looking for something like this for yrs! just discovered it wanna try it out! anyone heard of it? anyone else wanna do some serious research on it? think it could be used to experiment with solitons=D

OE-Cake, OE-CAKE! or OE Cake is a 2D fluid physics sandbox which was used to demonstrate the Octave Engine fluid physics simulator created by Prometech Software Inc. It was one of the first engines with the ability to realistically process water and other materials in real-time. In the program, which acts as a physics-based paint program, users can insert objects and see them interact under the laws of physics. It has advanced fluid simulation, and support for gases, rigid objects, elastic reactions, friction, weight, pressure, textured particles, copy-and-paste, transparency, foreground a...

@NeuroFuzzy awesome what have you done with it? how long have you been using it?

it definitely could support solitons easily (because all you really need is to have some time dependence and discretized diffusion, right?) but I don't know if it's possible in either OE-cake or that dust game

As far as I recall, being a long term powder gamer myself, powder game does not really have a diffusion-like algorithm written into it. The liquids in powder game are sort of dots that move back and forth and are subjected to gravity

@Secret I mean more along the lines of the fluid dynamics in that kind of game

@Secret Like how in the dan-ball one air pressure looks continuous (I assume)

@Secret You really just need a timer for particle extinction, and something that affects adjacent cells. Like maybe a rule for a particle that says: particles of type A turn into type B after 10 steps, particles of type B turn into type A if they are adjacent to type A. I would bet you get lots of cool reaction-diffusion-like patterns with that rule.

(Those that don't understand cricket, please ignore this context, I will get to the physics...) England are playing Pakistan at Lord's and a decision has once again been overturned based on evidence from the 'snickometer'. (see over 1.4) It's always bothered me slightly that there seems to be a ...

Abstract: Analyzing the data from the last replace-the-homework-policy question was inconclusive. So back to the drawing board, or really back to this question: what do we really mean when we vote to close questions as homework-like? As some/many/most people are aware, we are in the midst of a...

Hi I am trying to understand the concept of dex and how to use it in calculations. The usual definition is that it is the order of magnitude, so $10^{0.1}$ is $0.1$ dex. I want to do a simple exercise of calculating the value of the RHS of Eqn 4 in this paper arxiv paper, the gammas are incompl...

@ACuriousMind Guten Tag! :-) Dark Sun also has a lot of frightening characters. For example, Borys, the 30th-level dragon. Or different stages of the defiler/psionicist 20/20 -> dragon 30 transformation. It is only a tip, if you start to think on your next avatar :-)

What is the maximum distance for eavesdropping on pure sound waves? And what kind of device do I need to use for eavesdropping? Actually a microphone with a parabolic reflector or laser-reflection listening devices are available on the market, but is there any other device on the planet which should allow ...
and endless whiteboards get doodled with boxes, grids circled in red markers and some scribbles

The documentary then showed one of the bird's eye views of the farmlands (pardon my sketchy drawing skills...)

Most of the farmland is tiled into grids

Here there are two distinct columns and rows of tiled farmlands to the left and top of the main grid. They are the index arrays and they notate the range of indices of the tensor array

In some tiles, there's a swirl of dirt mound; they represent components with nonzero curl, and in others grass grew

Two blue steel bars were visible laying across the grid, holding up a triangle pool of water

Next, in an interview, they mentioned that experimentally the process is quite simple. The tall guy is seen using a large crowbar to pry away a screw that held a road sign under a skyway, i.e. occasionally, mishaps can happen, such as too much force applied and the sign snapped in the middle. The boys will then be forced to take the broken sign to the nearest roadworks workshop to mend it

At the end of the documentary, near a university lodge area, I walked towards the boys and expressed interest in joining their project. They then said that you will be spending quite a bit of time on the theoretical side and doodling on whiteboards. They also asked about my recent trip to London and Belgium. Dream ends

Reality check: I have been to London, but not Belgium

Idea extraction: The tensor array mentioned in the dream is a multi-index object where each component can be a tensor of different order

Presumably one can formulate it (using an example of a 4th order tensor) as follows: $$A^{\alpha}{}_{\beta}{}_{\gamma\delta\epsilon}$$ and then allow the indices $\alpha,\beta$ to run from 0 to the size of the matrix representation of the whole array, while the indices $\gamma,\delta,\epsilon$ can be taken from a subset of the values over which the $\alpha,\beta$ indices run. For example, to encode a patch of nonzero-curl vector field in this object, one might set $\gamma$ to be from the set $\{4,9\}$ and $\delta$ to be from $\{2,3\}$

However, even if taking indices to have certain values only, it is unsure if it is of any use, since most tensor expressions have indices taken from a set of consecutive numbers rather than random integers

@DavidZ in the recent meta post about the homework policy there is the following statement:

> We want to make it sure because people want those questions closed. Evidence: people are closing them. If people are closing questions that have no valid reason for closure, we have bigger problems.

This is an interesting statement. I wonder to what extent not having a homework close reason would simply force would-be close-voters to either edit the post, down-vote, or think more carefully whether there is another more specific reason for closure, e.g. "unclear what you're asking". I'm not saying I think simply dropping the homework close reason and doing nothing else is a good idea. I did suggest that previously in chat, and as I recall there were good objections (which are echoed in @ACuriousMind's meta answer's comments).

@DanielSank Mostly in a (probably vain) attempt to get @peterh to recognize that it's not a particularly helpful topic.

@peterh That said, he used to be fairly active on PhysicsOverflow, so if you really pine for the opportunity to communicate with him, you can go on ahead there. But seriously, bringing it up, particularly in that way, is not all that constructive.

@DanielSank No, the site mods could have caged him only in the PSE, and only for a year.
That he got. After that, his cage was extended to a 10-year-long network-wide one; it couldn't be the result of the site mods. Only the CMs can do this, typically for network-wide bad deeds.

@EmilioPisanty Yes, but I would have liked to talk to him here.

@DanielSank I am only curious what he did. Maybe he attacked the whole network? Or he took a site-level conflict to the IRL world? As I know, network-wide bans happen for such things.

@peterh That is pure fear-mongering. Unless you plan on going on extended campaigns to get yourself suspended, in which case I wish you speedy luck.

Seriously, suspensions are never handed out without warning, and you will not be ten-year-banned out of the blue. Ron had very clear choices and a very clear picture of the consequences of his choices, and he made his decision. There is nothing more to see here, and bringing it up again (and particularly in such a dewy-eyed manner) is far from helpful.

@EmilioPisanty Although it is already not about Ron Maimon, I can't see the meaning of "campaign" defined well enough here. And yes, it is a little bit of a source of fear for me that maybe my behavior could also be construed as "campaigning for my own caging".
Consider a $C^k$, $k\ge 2$, Lorentzian manifold $(M,g)$ and let $\Box$ be the usual wave operator $\nabla^a\nabla_a$. Given $p\in M$, $s\in\Bbb R$, and $v\in T_pM$, can we find a neighborhood $U$ of $p$ and $u\in C^k(U)$ such that $\Box u=0$, $u(p)=s$ and $\mathrm{grad}\, u(p)=v$?

The tog is a measure of thermal resistance of a unit area, also known as thermal insulance. It is commonly used in the textile industry and often seen quoted on, for example, duvets and carpet underlay. The Shirley Institute in Manchester, England developed the tog as an easy-to-follow alternative to the SI unit of m²K/W. The name comes from the informal word "togs" for clothing, which itself was probably derived from the word toga, a Roman garment. The basic unit of insulation coefficient is the RSI (1 m²K/W). 1 tog = 0.1 RSI. There is also a clo clothing unit equivalent to 0.155 RSI or 1.55 tog...

The stone or stone weight (abbreviation: st.) is an English and imperial unit of mass now equal to 14 pounds (6.35029318 kg). England and other Germanic-speaking countries of northern Europe formerly used various standardised "stones" for trade, with their values ranging from about 5 to 40 local pounds (roughly 3 to 15 kg) depending on the location and objects weighed. The United Kingdom's imperial system adopted the wool stone of 14 pounds in 1835. With the advent of metrication, Europe's various "stones" were superseded by or adapted to the kilogram from the mid-19th century on. The stone continues...

Can you tell me why this question deserves to be negative? I tried to find faults and I couldn't: I did some research, I did all the calculations I could, and I think it is clear enough. I had deleted it and was going to abandon the site, but then I decided to learn what is wrong and see if I ca...

I am a bit confused about classical physics's angular momentum. For the orbital motion of a point mass: if we pick a new coordinate system (that doesn't move w.r.t. the old one), angular momentum should still be conserved, right? (I calculated a quite absurd result - it is no longer conserved (an additional term that varies with time) in the new coordinates: $\vec {L'}=\vec{r'} \times \vec{p'}$ $=(\vec{R}+\vec{r}) \times \vec{p}$ $=\vec{R} \times \vec{p} + \vec L$, where the 1st term varies with time (where $\vec R$ is the shift of coordinates; since $\vec R$ is constant, and $\vec p$ is sort of rotating).) Would anyone be kind enough to shed some light on this for me?

From what we discussed, your literary taste seems to be classical/conventional in nature. That book is inherently unconventional in nature; it's not supposed to be read as a novel, it's supposed to be read as an encyclopedia

@BalarkaSen Dare I say it, my literary taste continues to change as I have kept on reading :-) One book that I finished reading today, The Sense of an Ending (different from the movie with the same title), is far from anything I would've been able to read, even, two years ago, but I absolutely loved it.

I've just started watching The Fall - it seems good so far (after 1 episode)... I'm with @JohnRennie on the Sherlock Holmes books and would add that the most recent TV episodes were appalling. I've been told to read Agatha Christie but haven't got round to it yet

Is it possible to make a time machine ever? Please give an easy answer, a simple one

A simple answer, but a possibly wrong one, is to say that a time machine is not possible. Currently, we have neither the technology to build one, nor a definite, proven (or generally accepted) idea of how we could build one.
@vzn if it's a romantic novel, which it looks like, it's probably not for me - I'm getting to be more and more fussy about books and have a ridiculously long list to read as it is. I'm going to counter that one by suggesting Ann Leckie's Ancillary Justice series. Although if you like epic fantasy, Malazan Book of the Fallen is fantastic @Mithrandir24601 lol it has some love story but it's written by a guy so it can't be a romantic novel... besides, what decent stories don't involve love interests anyway :P ... was just reading his blog, they are gonna do a movie of one of his books with Kate Winslet, can't beat that right? :P variety.com/2016/film/news/… @vzn "he falls in love with Daley Cross, an angelic voice in need of a song." I think that counts :P It's not that I don't like it, it's just that authors very rarely do anywhere near a decent job of it. If it's a major part of the plot, it's often either eyeroll-worthy and cringy, or boring and predictable with OK writing. A notable exception is Steven Erikson @vzn depends exactly what you mean by 'love story component', but often yeah... It's not always so bad in sci-fi and fantasy, where it's not in the focus so much and just evolves in a reasonable, if predictable, way with the storyline, although it depends on what you read (e.g. Brent Weeks, Brandon Sanderson). Of course Patrick Rothfuss completely inverts this trope :) and Lev Grossman is a study in how to do character development and totally destroys typical romance plots

@Slereah The idea is to pick some spacelike hypersurface $\Sigma$ containing $p$. Now specifying $u(p)$ is trivial because the wave equation is invariant under constant perturbations. So that's whatever. But I can specify $\nabla u(p)|_\Sigma$ by specifying $u(\cdot, 0)$ and differentiating along the surface. For the Cauchy theorems I can also specify $u_t(\cdot,0)$. Now take the neighborhood to be $\approx (-\epsilon,\epsilon)\times\Sigma$ and then split the metric like $-dt^2+h$. Do forwards and backwards Cauchy solutions, then check that the derivatives match on the interface $\{0\}\times\Sigma$.

Why is it that you can only cool down a substance so far before the energy goes into changing its state? I assume it has something to do with the distance between molecules, meaning that intermolecular interactions have less energy in them than making the distance between them even smaller, but why does it create these bonds instead of making the distance smaller / just reducing the temperature more? Thanks @CooperCape but this leads me to another question I forgot ages ago: If you have an electron cloud, is the electric field from that electron just some sort of averaged field from some centre of amplitude, or is it a superposition of fields each coming from some point in the cloud?
Following the book of Friedrich, "Dirac operators and Riemannian geometry" (AMS, vol. 25), I define the generalized Seiberg-Witten equations for $(A,A',\psi , \phi)$, with $A,A'$ two connections and $\psi, \phi$ two spinors:

1) $D_A ( \psi)=0$
2) $D_{A'} ( \phi)=0$
3) $F_+ (A)=-(1/4) \omega (\psi)$
4) $F_+ (A')=-(1/4) \omega (\phi)$
5) $A- A' = \mathrm{Im}\left( \frac{d\langle\psi|\phi\rangle}{\langle\psi|\phi\rangle}\right)$

where $\mathrm{Im}$ is the imaginary part of a complex number. The gauge group, $(h,h') \in \mathrm{Map}(M,S^1)$, acts on the solutions of the generalized Seiberg-Witten equations by $(h,h').(A,A',\psi,\phi)=((1/h)^* A, (1/{h'})^* A', h \psi, h' \phi )$. The moduli space is compact because it is a closed subset of the product of two compact sets (the two Seiberg-Witten moduli spaces). Moreover, the situation can perhaps be generalized to $n$ solutions of the Seiberg-Witten equations $(A_i ,\psi_i )$:

1) $D_{A_i}( \psi_i)=0$
2) $F_+(A_i)= -(1/4) \omega (\psi_i)$
3) $A_i- A_j=\mathrm{Im}\left( \frac{d\langle\psi_i|\psi_j\rangle}{\langle\psi_i|\psi_j\rangle}\right)$
Hello one and all! Is anyone here familiar with planar prolate spheroidal coordinates? I am reading a book on dynamics and the author states: If we introduce planar prolate spheroidal coordinates $(R, \sigma)$ based on the distance parameter $b$, then, in terms of the Cartesian coordinates $(x, z)$ and also of the plane polars $(r , \theta)$, we have the defining relations $$r\sin \theta=x=\pm \sqrt{R^2-b^2}\, \sin\sigma, \qquad r\cos\theta=z=R\cos\sigma.$$ I am having a tough time visualising what this is.

Consider the function $f(z) = \sin\left(\frac{1}{\cos(1/z)}\right)$. Is the point $z = 0$ (a) a removable singularity, (b) a pole, (c) an essential singularity, or (d) a non-isolated singularity? Since $\cos(1/z) = 1- \frac{1}{2z^2}+\frac{1}{4!z^4} - \cdots = (1-y)$, where $y=\frac{1}{2z^2}-\frac{1}{4!z^4}+\cdots$

I am having trouble understanding non-isolated singularity points. An isolated singularity point I do kind of understand: a point $z_0$ is said to be isolated if $z_0$ is a singular point and has a neighborhood throughout which $f$ is analytic except at $z_0$. For example, why would $...

No worries. There's currently some kind of technical problem affecting the Stack Exchange chat network. It's been pretty flaky for several hours. Hopefully, it will be back to normal in the next hour or two, when business hours commence on the east coast of the USA...

The absolute value of a complex number $z=x+iy$ is defined as $\sqrt{x^2+y^2}$. Hence, when evaluating the absolute value of $x+i$ I get the number $\sqrt{x^2 +1}$; but the answer to the problem says it's actually just $x^2 +1$. Why? mmh, I probably should ask this on the forum. The full problem asks me to show that we can choose $\log(x+i)$ to be $$\log(x+i)=\log(1+x^2)+i\left(\frac{\pi}{2} - \arctan x\right)$$ So I'm trying to find the polar coordinates (absolute value and an argument $\theta$) of $x+i$ to then apply the $\log$ function to it.

Let $X$ be any nonempty set and $\sim$ be any equivalence relation on $X$. Then are the following true: (1) If $x=y$ then $x\sim y$. (2) If $x=y$ then $y\sim x$. (3) If $x=y$ and $y=z$ then $x\sim z$. Basically, I think that all three properties follow if we can prove (1), because if $x=y$ then, since $y=x$, by (1) we would have $y\sim x$, proving (2); (3) will follow similarly. This question arose from an attempt to characterize equality on a set $X$ as the intersection of all equivalence relations on $X$. I don't know whether this question is too trivial, but I have not yet seen any formal proof of the following statement: "Let $X$ be any nonempty set and $\sim$ be any equivalence relation on $X$. If $x=y$ then $x\sim y$."

That is definitely a new person, not going to classify as RHV yet as other users have already put the situation under control it seems... (comment on many many posts above) In other news: > C -2.5353672500000002 -1.9143250000000003 -0.5807385400000000 C -3.4331741299999998 -1.3244286800000000 -1.4594762299999999 C -3.6485676800000002 0.0734728100000000 -1.4738058999999999 C -2.9689624299999999 0.9078326800000001 -0.5942069900000000 C -2.0858929200000000 0.3286240400000000 0.3378783500000000 C -1.8445799400000003 -1.0963522200000000 0.3417561400000000 C -0.8438543100000000 -1.3752198200000001 1.3561451400000000 C -0.5670178500000000 -0.1418068400000000 2.0628359299999999 probably the weirdest bunch of data I've ever seen, with so many 000000s and 999999s. But I think that to prove the implication for transitivity, the inference rule MP seems to be necessary.
But that would mean that for logics in which MP fails we wouldn't be able to prove the result. Also, in set theories without the Axiom of Extensionality the desired result will not hold. Am I right @AlessandroCodenotti? @AlessandroCodenotti A precise formulation would help in this case, because I am trying to understand whether a proof of the statement which I mentioned at the outset really depends on the equality axioms or only on the FOL axioms (without equality axioms). This would allow one, in some cases, to define an "equality-like" relation for set theories in which we don't have the Axiom of Extensionality.

Can someone give an intuitive explanation why $\mathcal{O}(x^2)-\mathcal{O}(x^2)=\mathcal{O}(x^2)$? The context is Taylor polynomials, so $x\to 0$. I've seen a proof of this, but intuitively I don't understand it. @schn: The minus is irrelevant (for example, the thing you are subtracting could be negative). When you add two things that are of the order of $x^2$, of course the sum is of the same order (or possibly smaller). For example, $3x^2-x^2=2x^2$. You could have $x^2+(x^3-x^2)=x^3$, which is still $\mathcal O(x^2)$. @GFauxPas: You only know $|f(x)|\le K_1 x^2$ and $|g(x)|\le K_2 x^2$, so that won't be a valid proof, of course. (The one-line estimate that does work is spelled out below.)

Let $f(z)=z^{n}+a_{n-1}z^{n-1}+\cdots+a_{0}$ be a complex polynomial such that $|f(z)|\leq 1$ for $|z|\leq 1.$ I have to prove that $f(z)=z^{n}.$ I tried it as follows: as $|f(z)|\leq 1$ for $|z|\leq 1$, we must have the coefficients $a_{0},a_{1},\cdots, a_{n-1}$ all zero, because by triangul...

@GFauxPas @TedShifrin Thanks for the replies. Now, why is it we're only interested in $x\to 0$? When we do a Taylor approximation centered at $x=0$, aren't we interested in all the values of our approximation, even those not near $0$? Indeed, one thing a lot of texts don't emphasize is this: if $P$ is a polynomial of degree $\le n$ and $f(x)-P(x)=\mathcal O(x^{n+1})$, then $P$ is the (unique) Taylor polynomial of degree $n$ of $f$ at $0$.
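To make the big-O bookkeeping above explicit (a standard estimate, with the constants named as in the chat): if $|f(x)|\le K_1x^2$ and $|g(x)|\le K_2x^2$ near $0$, then $$|f(x)-g(x)| \le |f(x)|+|g(x)| \le (K_1+K_2)\,x^2,$$ so $f-g$ (and likewise $f+g$) is again $\mathcal O(x^2)$; only the constant grows, and the constant is invisible in the $\mathcal O$ notation.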
Last time we studied meets and joins of partitions. We observed an interesting difference between the two. Suppose we have partitions \(P\) and \(Q\) of a set \(X\). To figure out if two elements \(x , x' \in X\) are in the same part of the meet \(P \wedge Q\), it's enough to know if they're in the same part of \(P\) and the same part of \(Q\), since $$ x \sim_{P \wedge Q} x' \textrm{ if and only if } x \sim_P x' \textrm{ and } x \sim_Q x'. $$ Here \(x \sim_P x'\) means that \(x\) and \(x'\) are in the same part of \(P\), and so on. However, this does not work for the join! $$ \textbf{THIS IS FALSE: } \; x \sim_{P \vee Q} x' \textrm{ if and only if } x \sim_P x' \textrm{ or } x \sim_Q x' . $$ To understand this better, the key is to think about the "inclusion" $$ i : \{x,x'\} \to X , $$ that is, the function sending \(x\) and \(x'\) to themselves thought of as elements of \(X\). We'll soon see that any partition \(P\) of \(X\) can be "pulled back" to a partition \(i^{\ast}(P)\) on the little set \( \{x,x'\} \). And we'll see that our observation can be restated as follows: $$ i^{\ast}(P \wedge Q) = i^{\ast}(P) \wedge i^{\ast}(Q) $$ but $$ \textbf{THIS IS FALSE: } \; i^{\ast}(P \vee Q) = i^{\ast}(P) \vee i^{\ast}(Q) . $$ This is just a slicker way of saying the exact same thing. But it will turn out to be more illuminating! So how do we "pull back" a partition? Suppose we have any function \(f : X \to Y\). Given any partition \(P\) of \(Y\), we can "pull it back" along \(f\) and get a partition of \(X\) which we call \(f^{\ast}(P)\). Here's an example from the book (a picture in the original). For any part \(S\) of \(P\) we can form the set of all elements of \(X\) that map to \(S\). This set is just the preimage of \(S\) under \(f\), which we met in Lecture 9. We called it $$ f^{\ast}(S) = \{x \in X: \; f(x) \in S \}. $$ As long as this set is nonempty, we include it in our partition \(f^{\ast}(P)\). So beware: we are now using the symbol \(f^{\ast}\) in two ways: for the preimage of a subset and for the pullback of a partition. But these two ways fit together quite nicely, so it'll be okay. Summarizing: Definition. Given a function \(f : X \to Y\) and a partition \(P\) of \(Y\), define the pullback of \(P\) along \(f\) to be this partition of \(X\): $$ f^{\ast}(P) = \{ f^{\ast}(S) : \; S \in P \text{ and } f^{\ast}(S) \ne \emptyset \} . $$ Puzzle 40. Show that \( f^{\ast}(P) \) really is a partition using the fact that \(P\) is. It's fun to prove this using properties of the preimage map \( f^{\ast} : P(Y) \to P(X) \). It's easy to tell if two elements of \(X\) are in the same part of \(f^{\ast}(P)\): just map them to \(Y\) and see if they land in the same part of \(P\). In other words, $$ x\sim_{f^{\ast}(P)} x' \textrm{ if and only if } f(x) \sim_P f(x') $$ Now for the main point: Proposition. Given a function \(f : X \to Y\) and partitions \(P\) and \(Q\) of \(Y\), we always have $$ f^{\ast}(P \wedge Q) = f^{\ast}(P) \wedge f^{\ast}(Q) $$ but sometimes we have $$ f^{\ast}(P \vee Q) \ne f^{\ast}(P) \vee f^{\ast}(Q) . $$ Proof. To prove that $$ f^{\ast}(P \wedge Q) = f^{\ast}(P) \wedge f^{\ast}(Q) $$ it's enough to prove that they give the same equivalence relation on \(X\). That is, it's enough to show $$ x \sim_{f^{\ast}(P \wedge Q)} x' \textrm{ if and only if } x \sim_{ f^{\ast}(P) \wedge f^{\ast}(Q) } x'. $$ This looks scary but we just follow our nose.
First we rewrite the right-hand side using our observation about the meet of partitions: $$ x \sim_{f^{\ast}(P \wedge Q)} x' \textrm{ if and only if } x \sim_{ f^{\ast}(P)} x' \textrm{ and } x\sim_{f^{\ast}(Q) } x'. $$ Then we rewrite everything using what we just saw about the pullback: $$ f(x) \sim_{P \wedge Q} f(x') \textrm{ if and only if } f(x) \sim_P f(x') \textrm{ and } f(x) \sim_Q f(x'). $$ And this is true, by our observation about the meet of partitions! So, we're really just stating that observation in a new language. To prove that sometimes $$ f^{\ast}(P \vee Q) \ne f^{\ast}(P) \vee f^{\ast}(Q) , $$ we just need one example. So, take \(P\) and \(Q\) to be the two partitions from our generative-effects example (drawn as pictures in the original). They are partitions of the set $$ Y = \{11, 12, 13, 21, 22, 23 \}. $$ Take \(X = \{11,22\} \) and let \(i : X \to Y \) be the inclusion of \(X\) into \(Y\), meaning that $$ i(11) = 11, \quad i(22) = 22 . $$ Then compute everything! \(11\) and \(22\) are in different parts of \(i^{\ast}(P)\): $$ i^{\ast}(P) = \{ \{11\}, \{22\} \} . $$ They're also in different parts of \(i^{\ast}(Q)\): $$ i^{\ast}(Q) = \{ \{11\}, \{22\} \} .$$ Thus, we have $$ i^{\ast}(P) \vee i^{\ast}(Q) = \{ \{11\}, \{22\} \} . $$ On the other hand, the join \(P \vee Q \) has just two parts: $$ P \vee Q = \{\{11,12,13,22,23\},\{21\}\} . $$ If you don't see why, figure out the finest partition that's coarser than \(P\) and \(Q\) - that's \(P \vee Q \). Since \(11\) and \(22\) are in the same part here, the pullback \(i^{\ast} (P \vee Q) \) has just one part: $$ i^{\ast}(P \vee Q) = \{ \{11, 22 \} \} . $$ So, we have $$ i^{\ast}(P \vee Q) \ne i^{\ast}(P) \vee i^{\ast}(Q) $$ as desired. \( \quad \blacksquare \) Now for the real punchline. The example we just saw was the same as our example of a "generative effect" in Lecture 12. So, we have a new way of thinking about generative effects: the pullback of partitions preserves meets, but it may not preserve joins! This is an interesting feature of the logic of partitions. Next time we'll understand it more deeply by pondering left and right adjoints. But to warm up, you should compare how meets and joins work in the logic of subsets: Puzzle 41. Let \(f : X \to Y \) and let \(f^{\ast} : PY \to PX \) be the function sending any subset of \(Y\) to its preimage in \(X\). Given \(S,T \in P(Y) \), is it always true that $$ f^{\ast}(S \wedge T) = f^{\ast}(S) \wedge f^{\ast}(T ) ? $$ Is it always true that $$ f^{\ast}(S \vee T) = f^{\ast}(S) \vee f^{\ast}(T ) ? $$
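Here is a small Python sketch of the whole computation (the helper names are mine, and the specific parts chosen for \(P\) and \(Q\) are an assumption consistent with the results stated above, since the lecture's pictures are not reproduced here). It computes pullbacks, meets, and joins of partitions and confirms that the pullback preserves the meet but not the join:

def meet(P, Q):
    """Meet of partitions: nonempty pairwise intersections of parts."""
    return {s & t for s in P for t in Q if s & t}

def join(P, Q):
    """Join of partitions: connected components, treating each part as a clique."""
    parent = {}
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for part in list(P) + list(Q):
        part = list(part)
        for e in part:
            parent.setdefault(e, e)
        for e in part[1:]:
            rx, ry = find(part[0]), find(e)
            if rx != ry:
                parent[rx] = ry
    comps = {}
    for x in parent:
        comps.setdefault(find(x), set()).add(x)
    return {frozenset(c) for c in comps.values()}

def pullback(f, P, X):
    """f*(P): nonempty preimages of the parts of P under f : X -> Y."""
    return {frozenset(s) for s in ({x for x in X if f(x) in part} for part in P) if s}

Y = {11, 12, 13, 21, 22, 23}
# One concrete choice consistent with the lecture's stated answers (assumption):
P = {frozenset({11, 12}), frozenset({13, 23}), frozenset({21}), frozenset({22})}
Q = {frozenset({11}), frozenset({12, 13}), frozenset({22, 23}), frozenset({21})}
X = {11, 22}
i = lambda x: x                                    # the inclusion i : X -> Y

assert pullback(i, meet(P, Q), X) == meet(pullback(i, P, X), pullback(i, Q, X))
print(pullback(i, join(P, Q), X))                  # {frozenset({11, 22})}
print(join(pullback(i, P, X), pullback(i, Q, X)))  # {frozenset({11}), frozenset({22})}

Running it prints \(\{\{11,22\}\}\) for \(i^{\ast}(P \vee Q)\) but \(\{\{11\},\{22\}\}\) for \(i^{\ast}(P) \vee i^{\ast}(Q)\): exactly the generative effect described above.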
Asymptotic Discontinuity of Smooth Solutions of Nonlinear $q$-Difference Equations

Abstract. We investigate the asymptotic behavior of solutions of the simplest nonlinear $q$-difference equations, which have the form $x(qt+1) = f(x(t))$, $q > 1$, $t \in \mathbb{R}_+$. The study is based on a comparison of these equations with the difference equations $x(t+1) = f(x(t))$, $t \in \mathbb{R}_+$. It is shown that, for "not very large" $q > 1$, the solutions of the $q$-difference equation inherit the asymptotic properties of the solutions of the corresponding difference equation; in particular, we obtain an upper bound for the values of the parameter $q$ for which smooth bounded solutions that possess the property $\max_{t \in [0,T]} |x'(t)| \to \infty$ as $T \to \infty$ and tend to discontinuous upper-semicontinuous functions in the Hausdorff metric for graphs are typical of the $q$-difference equation.

English version (Springer): Ukrainian Mathematical Journal 52 (2000), no. 12, pp. 1841-1857.

Citation Example: Derfel' G. A., Romanenko Ye. Yu., Sharkovsky O. M. Asymptotic Discontinuity of Smooth Solutions of Nonlinear $q$-Difference Equations // Ukr. Mat. Zh. - 2000. - 52, № 12. - pp. 1615-1629.
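To see the phenomenon the abstract describes, one can iterate such an equation numerically. Below is a minimal Python sketch (the logistic nonlinearity, the parameter values, and the smooth initial function are illustrative choices of mine, not taken from the paper). The relation $x(qt+1)=f(x(t))$ determines $x(s)$ for $s>1$ by unwinding $s \mapsto (s-1)/q$ back into the initial interval $[0,1]$, and for a chaotic $f$ the maximal slope on $[0,T]$ grows without bound as $T$ increases, the behavior described above:

import math

q, lam = 1.1, 3.7                    # q > 1 and a logistic parameter (illustrative)
f = lambda x: lam * x * (1.0 - x)
x0 = lambda t: 0.3 + 0.2 * math.sin(2 * math.pi * t)  # smooth initial data on [0, 1]

def x(s):
    """Solve x(q t + 1) = f(x(t)) by unwinding s -> (s - 1)/q down into [0, 1]."""
    return x0(s) if s <= 1.0 else f(x((s - 1.0) / q))

h = 1e-3
for T in (5.0, 20.0, 40.0):
    ts = [i * h for i in range(int(T / h))]
    slope = max(abs(x(t + h) - x(t)) / h for t in ts)
    print(f"T = {T:5.1f}   max |x'(t)| on [0, T] ~ {slope:.3g}")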
Table of Contents

Radial boundary values of lacunary power series - I. V. Andrusyak, P. V. Filevych, pp. 4-7
Symmetric continuous linear functionals on complex space $L_\infty[0,1]$ - T. V. Vasylyshyn, pp. 8-10
On the infinite remains of the Nörlund branched continued fraction for Appell hypergeometric functions (in Ukrainian) - N. P. Hoyenko, V. R. Hladun, O. S. Manzij, pp. 11-25
Interpolated scales of approximation spaces for regular elliptic operators on compact manifolds - M. I. Dmytryshyn, pp. 26-31
Calculation algorithm of rational estimations of recurrence periodical fourth order fraction - R. A. Zatorsky, A. V. Semenchuk, pp. 32-43
Non-local boundary value problem for partial differential equation in a complex domain (in Ukrainian) - V. S. Il'kiv, I. I. Volyans'ka, pp. 44-58
On the polynomiality of separately constant functions (in Ukrainian) - V. M. Kosovan, V. K. Maslyuchenko, pp. 59-63
Mathematical modeling and numerical calculation of forced vibrations of piezoceramic spherical segment (in Ukrainian) - I. P. Kudzinovs'ka, pp. 64-67
Variational inference of differential equations of vibrations of piezoceramic shell with meridional polarization (in Ukrainian) - I. O. Lastivka, pp. 68-72
Spectral analysis of complete graph with infinite chains (in Ukrainian) - V. O. Lebid, pp. 73-78
Inverse boundary value problems for diffusion-wave equation with generalized functions in right-hand sides (in Ukrainian) - A. O. Lopushansky, H. P. Lopushanska, pp. 79-90
Rings with nilpotent derivations of index $\leq 2$ (in Ukrainian) - M. P. Lukashenko, pp. 91-95
Asymptotic stochastic stability of the stochastic dynamical systems of random structure with constant delay (in Ukrainian) - T. O. Lukashiv, pp. 96-103
The Cauchy problem for parabolic equation over the field of $p$-adic numbers with impulse action (in Ukrainian) - V. M. Luchko, pp. 104-112
Asymptotics of a fundamental solution system for a quasidifferential equation with measures on the semiaxis - O. V. Makhnei, pp. 113-122
On continuity of homomorphisms between topological Clifford semigroups - I. Pastukhova, pp. 123-129
On some properties of Korobov polynomials - V. M. Pylypiv, A. R. Malarchuk, pp. 130-133
The heat equation on line with random right part from Orlicz space - A. I. Slyvka-Tylyshchak, pp. 134-148
The normal limit distribution of the normalized number of false solutions of one system of nonlinear random equations over the field GF(2) - S. Ya. Slobodian, pp. 149-160
$(\delta, \gamma)$-Dunkl Lipschitz functions in the space $\mathrm{L}^{2}(\mathbb{R}, |x|^{2\alpha+1}dx)$ - M. El Hamma, H. Lahlali, R. Daher, pp. 161-165
Regular capacities on metrizable spaces (in Ukrainian) - T. M. Cherkovskyi, pp. 166-176
Emergent compact gauge fields are ubiquitous in condensed matter theory (like $U(1)$). Are there any examples of an emergent non-compact gauge field? In that case there won't be any quantization conditions, and there would be conserved currents and charges which might or might not be physical. Emergent gauge fields come about in systems with local constraints $j(\vec x) = 0$, since $$\int DA \exp\left( i \int j \wedge A \right) = \delta (j).$$ Then, in a classic physics move, we integrate out the matter instead of the Lagrange multiplier and we get an effective gauge theory for $A$. If $j$ is quantized, an integer for example, then $A$ must be taken to be $U(1)$-valued. In general, $j$ should behave like a current. If $j$ is $\mathbb{R}$-valued, then $A$ will be as well. You could consider a constant-density fluid, for instance, and then $A$ would be something like a "dilaton" gauge field. I learned this perspective on emergent gauge fields from E. Fradkin's book Field Theories of Condensed Matter Physics.
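The functional identity above is the pointwise product of an ordinary Fourier representation of the delta function, which is worth recording explicitly (standard one-dimensional facts, stated up to normalization): $$\int_{-\infty}^{\infty} dA\; e^{i j A} = 2\pi\,\delta(j) \quad (j \in \mathbb{R}), \qquad \frac{1}{2\pi}\int_0^{2\pi} dA\; e^{i n A} = \delta_{n,0} \quad (n \in \mathbb{Z}).$$ A real-valued constraint therefore needs a noncompact, $\mathbb R$-valued multiplier, while an integer-valued constraint is already enforced by a compact, $U(1)$-valued one; this is exactly the dichotomy described above.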
The orthogonal group, consisting of all proper and improper rotations, is generated by reflections. Every proper rotation is the composition of two reflections, a special case of the Cartan–Dieudonné theorem. Yeah, it does seem unreasonable to expect a finite presentation. Let $(V, b)$ be an $n$-dimensional, non-degenerate symmetric bilinear space over a field with characteristic not equal to 2. Then every element of the orthogonal group $O(V, b)$ is a composition of at most $n$ reflections.

Why is the evolute of an involute of a curve $\Gamma$ the curve $\Gamma$ itself? Definition from the wiki: the evolute of a curve is the locus of all its centres of curvature. That is to say that when the centre of curvature of each point on a curve is drawn, the resultant shape will be the evolute of th...

Player $A$ places $6$ bishops wherever he/she wants on a chessboard with an infinite number of rows and columns. Player $B$ places one knight wherever he/she wants. Then $A$ makes a move, then $B$, and so on... The goal of $A$ is to checkmate $B$, that is, to attack the knight of $B$ with a bishop in ... Player $A$ chooses two queens and an arbitrary finite number of bishops on the $\infty \times \infty$ chessboard and places them wherever he/she wants. Then player $B$ chooses one knight and places him wherever he/she wants (but of course, the knight cannot be placed on the fields which are under attack ...

The invariant formula for the exterior derivative: why would someone come up with something like that? I mean, it looks really similar to the formula for the covariant derivative of a tensor along a vector field, but otherwise I don't see why it would be something natural to come up with. The only place I have used it is in deriving the Poisson bracket of two one-forms. This means starting at a point $p$, flowing along $X$ for time $\sqrt{t}$, then along $Y$ for time $\sqrt{t}$, then backwards along $X$ for the same time, backwards along $Y$ for the same time, leaves you at a place different from $p$. And up to second order, flowing along $[X, Y]$ for time $t$ from $p$ will lead you to that place. Think of evaluating $\omega$ on the edges of the truncated square and doing a signed sum of the values. You'll get the value of $\omega$ on the two $X$ edges, whose difference (after taking a limit) is $Y\omega(X)$; the value of $\omega$ on the two $Y$ edges, whose difference (again after taking a limit) is $X \omega(Y)$; and on the truncation edge it's $\omega([X, Y])$. Gently taking care of the signs, the total value is $X\omega(Y) - Y\omega(X) - \omega([X, Y])$. So the value of $d\omega$ on the Lie square spanned by $X$ and $Y$ equals the signed sum of the values of $\omega$ on the boundary of that Lie square: an infinitesimal version of $\int_M d\omega = \int_{\partial M} \omega$. But I believe you can actually write down a proof like this, by doing $\int_{I^2} d\omega = \int_{\partial I^2} \omega$ where $I^2$ is the little truncated square I described and taking $\text{vol}(I^2) \to 0$. For the general case $$d\omega(X_1, \cdots, X_{n+1}) = \sum_i (-1)^{i+1} X_i \omega(X_1, \cdots, \hat{X_i}, \cdots, X_{n+1}) + \sum_{i < j} (-1)^{i+j} \omega([X_i, X_j], X_1, \cdots, \hat{X_i}, \cdots, \hat{X_j}, \cdots, X_{n+1})$$ says the same thing, but on a big truncated Lie cube.

Let's do bullshit generality. Let $E$ be a vector bundle on $M$ and $\nabla$ a connection on $E$.
Remember that this means it's an $\Bbb R$-bilinear operator $\nabla : \Gamma(TM) \times \Gamma(E) \to \Gamma(E)$, denoted $(X, s) \mapsto \nabla_X s$, which is (a) $C^\infty(M)$-linear in the first factor and (b) $C^\infty(M)$-Leibniz in the second factor. Explicitly, (b) is $\nabla_X (fs) = X(f)s + f\nabla_X s$. You can verify that this in particular means it's pointwise defined in the first factor. This means that to evaluate $\nabla_X s(p)$ you only need $X(p) \in T_p M$, not the full vector field. That makes sense, right? You can take the directional derivative of a function at a point in the direction of a single vector at that point.

Suppose that $G$ is a group acting freely on a tree $T$ via graph automorphisms; let $T'$ be the associated spanning tree. Call an edge $e = \{u,v\}$ in $T$ essential if $e$ doesn't belong to $T'$. Note: it is easy to prove that if $u \in T'$, then $v \notin T'$ (this follows from uniqueness of paths between vertices). Now, let $e = \{u,v\}$ be an essential edge with $u \in T'$. I am reading through a proof, and the author claims that there is a $g \in G$ such that $g \cdot v \in T'$. My thought was to try to show that $\mathrm{orb}(u) \neq \mathrm{orb}(v)$ and then use the fact that the spanning tree contains exactly one vertex from each orbit. But I can't seem to prove that $\mathrm{orb}(u) \neq \mathrm{orb}(v)$...

@Albas Right, more or less. So it defines an operator $d^\nabla : \Gamma(E) \to \Gamma(E \otimes T^*M)$, which takes a section $s$ of $E$ and spits out $d^\nabla(s)$, which is a section of $E \otimes T^*M$, which is the same as a bundle homomorphism $TM \to E$ ($V \otimes W^* \cong \text{Hom}(W, V)$ for vector spaces). So what is this homomorphism $d^\nabla(s) : TM \to E$? Just $d^\nabla(s)(X) = \nabla_X s$. This might be complicated to grok at first, but basically think of it as currying: making a bilinear map a linear one, like in linear algebra. You can replace $E \otimes T^*M$ by the Hom-bundle $\text{Hom}(TM, E)$ in your head if you want; nothing is lost. I'll use the latter notation consistently if that's what you're comfortable with. (Technical point: Note how contracting $X$ in $\nabla_X s$ made a bundle homomorphism $TM \to E$, but contracting $s$ in $\nabla s$ only gave us a map $\Gamma(E) \to \Gamma(\text{Hom}(TM, E))$ at the level of spaces of sections, not a bundle homomorphism $E \to \text{Hom}(TM, E)$. This is because $\nabla_X s$ is pointwise defined in $X$ and not in $s$.) @Albas So this fella is called the exterior covariant derivative. Denote $\Omega^0(M; E) = \Gamma(E)$ ($0$-forms with values in $E$, aka functions on $M$ with values in $E$, aka sections of $E \to M$), and denote $\Omega^1(M; E) = \Gamma(\text{Hom}(TM, E))$ ($1$-forms with values in $E$, aka bundle homs $TM \to E$). Then this is the level-0 exterior derivative $d : \Omega^0(M; E) \to \Omega^1(M; E)$. That's what a connection is: a level-0 exterior derivative in a bundle-valued theory of differential forms. So what's $\Omega^k(M; E)$ for higher $k$? Define it as $\Omega^k(M; E) = \Gamma(\text{Hom}(TM^{\wedge k}, E))$, just the space of alternating multilinear bundle homomorphisms $TM \times \cdots \times TM \to E$. Note how if $E$ is the trivial bundle $M \times \Bbb R$ of rank 1, then $\Omega^k(M; E) = \Omega^k(M)$, the usual space of differential forms.
That's what taking the derivative of a section of $E$ with respect to a vector field on $M$ means: applying the connection. Alright, so to verify that $d^2 \neq 0$ indeed, let's just do the computation: $(d^2s)(X, Y) = d(ds)(X, Y) = \nabla_X ds(Y) - \nabla_Y ds(X) - ds([X, Y]) = \nabla_X \nabla_Y s - \nabla_Y \nabla_X s - \nabla_{[X, Y]} s$. Voila, the Riemann curvature tensor. Well, that's what it is called when $E = TM$, so that $s = Z$ is some vector field on $M$; in general this is the bundle curvature. Here's a point: what is $d\omega$ for $\omega \in \Omega^k(M; E)$ "really"? What would, for example, having $d\omega = 0$ mean? Well, the point is, $d : \Omega^k(M; E) \to \Omega^{k+1}(M; E)$ is a connection operator on $E$-valued $k$-forms on $M$. So $d\omega = 0$ would mean that the form $\omega$ is parallel with respect to the connection $\nabla$.

Let $V$ be a finite dimensional real vector space, $q$ a quadratic form on $V$, and $Cl(V,q)$ the associated Clifford algebra, with the $\Bbb Z/2\Bbb Z$-grading $Cl(V,q)=Cl(V,q)^0\oplus Cl(V,q)^1$. We define $P(V,q)$ as the group of elements of $Cl(V,q)$ with $q(v)\neq 0$ (under the identification $V\hookrightarrow Cl(V,q)$) and $\mathrm{Pin}(V)$ as the subgroup of $P(V,q)$ with $q(v)=\pm 1$. We define $\mathrm{Spin}(V)$ as $\mathrm{Pin}(V)\cap Cl(V,q)^0$. Is $\mathrm{Spin}(V)$ the set of elements with $q(v)=1$?

Torsion only makes sense on the tangent bundle, so take $E = TM$ from the start. Consider the identity bundle homomorphism $TM \to TM$... you can think of this as an element of $\Omega^1(M; TM)$. This is called the "soldering form"; it comes tautologically when you work with the tangent bundle. You'll also see this thing appearing in symplectic geometry; I think they call it the tautological 1-form there. (The cotangent bundle is naturally a symplectic manifold.) Yeah. So let's give this guy a name, $\theta \in \Omega^1(M; TM)$. Its exterior covariant derivative $d\theta$ is a $TM$-valued $2$-form on $M$; explicitly, $d\theta(X, Y) = \nabla_X \theta(Y) - \nabla_Y \theta(X) - \theta([X, Y])$. But $\theta$ is the identity operator, so this is $\nabla_X Y - \nabla_Y X - [X, Y]$. The torsion tensor!!

So I was reading about this thing called the Poisson bracket. With the Poisson bracket you can give the space of all smooth functions on a symplectic manifold a Lie algebra structure. And then you can show that a symplectomorphism must also preserve the Poisson structure. I would like to calculate the Poisson Lie algebra for something like $S^2$; something cool might pop up.

If someone has the time to quickly check my result, I would appreciate it. Let $X_{1},\dots,X_{n} \sim \Gamma(2,\,\frac{2}{\lambda})$. Is $\mathbb{E}\left[\frac{1}{2}\left(\frac{X_{1}+\cdots+X_{n}}{n}\right)^{2}\right] = \frac{1}{n^2\lambda^2}+\frac{2}{\lambda^2}$?

Uh, apparently there are metrizable Baire spaces $X$ such that $X^2$ not only is not Baire, but has a countable family $D_\alpha$ of dense open sets such that $\bigcap_{\alpha<\omega}D_\alpha$ is empty.

@Ultradark I don't know what you mean, but you seem down in the dumps, champ. Remember, girls are not as significant as you might think; design an attachment for a cordless drill and a flesh light that oscillates perpendicular to the drill's rotation and you're done. Even better than the natural method.

I am trying to show that if $d$ divides $24$, then $S_4$ has a subgroup of order $d$. The only proof I could come up with is a brute force proof. It actually wasn't too bad.
E.g., orders $2$, $3$, and $4$ are easy (just take the subgroup generated by a 2-cycle, 3-cycle, and 4-cycle, respectively); $d=8$ is Sylow's theorem; for $d=12$, take $A_4$; for $d=24$, take $S_4$ itself. The only case that presented a semblance of trouble was $d=6$, but the group generated by $(1,2)$ and $(1,2,3)$ does the job. My only quibble with this solution is that it doesn't seem very elegant. Is there a better way? In fact, the action of $S_4$ on its three 2-Sylows by conjugation gives a surjective homomorphism $S_4 \to S_3$ whose kernel is a $V_4$. This $V_4$ can be thought of as the sub-symmetries of the cube which act on the three pairs of faces {{top, bottom}, {right, left}, {front, back}}. Clearly these are the 180 degree rotations along the $x$, $y$ and $z$ axes. But composing the 180 rotation along $x$ with a 180 rotation along $y$ gives you a 180 rotation along $z$, indicative of the $ab = c$ relation in Klein's 4-group. Everything about $S_4$ is encoded in the cube, in a way. The same can be said of $A_5$ and the dodecahedron, say.
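A quick brute-force check of the subgroup claim (a minimal Python sketch; the generating sets are the ones named in the discussion, except the order-8 generators, which are one standard choice of a 2-Sylow). It closes each generating set under composition and verifies the resulting orders:

from itertools import product

def compose(p, q):
    """(p o q)(i) = p[q[i]] for permutations of {0,1,2,3} stored as tuples."""
    return tuple(p[q[i]] for i in range(4))

def closure(gens):
    """Smallest subgroup of S4 containing the generators."""
    elems = {(0, 1, 2, 3)} | set(gens)
    while True:
        new = {compose(a, b) for a, b in product(elems, repeat=2)} - elems
        if not new:
            return elems
        elems |= new

t   = (1, 0, 2, 3)   # the transposition (1 2), 0-indexed
c3  = (1, 2, 0, 3)   # the 3-cycle (1 2 3)
c4  = (1, 2, 3, 0)   # the 4-cycle (1 2 3 4)
dbl = (1, 0, 3, 2)   # the double transposition (1 2)(3 4)

cases = {
    2:  [t],
    3:  [c3],
    4:  [c4],
    6:  [t, c3],              # <(1 2), (1 2 3)>, the subgroup from the discussion
    8:  [c4, (2, 1, 0, 3)],   # a 2-Sylow: <(1 2 3 4), (1 3)>, dihedral of order 8
    12: [c3, dbl],            # A4 = <(1 2 3), (1 2)(3 4)>
    24: [t, c4],              # all of S4
}
for d, gens in sorted(cases.items()):
    assert len(closure(gens)) == d
    print(f"d = {d:2d}: subgroup of order {d} found")

(The order-1 case is, of course, the trivial subgroup.)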
In this chapter we developed the quantum mechanical description of the harmonic oscillator for a diatomic molecule and applied it to the normal modes of molecular vibrations. We examined the functional form of the wavefunctions and the associated energy level structure. We can calculate expectation values (average values) and standard deviations for the displacement, the momentum, the square of the displacement, and the square of the momentum. The wavefunctions, which form an orthonormal set, were used to determine electric dipole selection rules for spectroscopic transitions, and in the problems at the end of the chapter, they are used to calculate several properties of the harmonic oscillator. The phenomenon of quantum mechanical tunneling through a potential-energy barrier was introduced, and its relationship to real chemical phenomena was illustrated by consideration of hydrogen bonding in DNA. We finally looked at the nature of low-resolution IR spectra and introduced the anharmonicity concept to account for forbidden overtone transitions in spectra. The presence of combination bands in spectra was attributed to second-derivative terms in the expansion of the dipole moment operator in terms of the normal coordinates. The simple harmonic oscillator model works well for molecules at room temperature because the molecules are in the lower vibrational levels, where the effects of anharmonicity are small.

Self-Assessment Quiz

Write a definition of a normal vibrational mode.
Write a definition of a normal vibrational coordinate.
List the steps in a methodology for finding the normal vibrational coordinates and frequencies.
What is a harmonic oscillator?
How is the harmonic oscillator relevant to molecular properties?
Write the Hamiltonian operator for a one-dimensional harmonic oscillator.
What are the major steps in the procedure to solve the Schrödinger equation for the harmonic oscillator?
What are the three parts of a harmonic oscillator wavefunction?
How is the quantum number v produced in solving the Schrödinger equation for the harmonic oscillator?
What are the allowed energies for a quantum harmonic oscillator?
What determines the frequency of a quantum harmonic oscillator?
What information about a molecular vibration is provided by the harmonic oscillator wavefunction for a normal coordinate?
Sketch graphs of the harmonic oscillator potential energy and a few wavefunctions.
Draw the harmonic oscillator energy level diagram.
Why is the lowest possible energy of the quantum oscillator not zero?
Compute the approximate energy for the first overtone transition in HBr given that the fundamental is 2564 cm⁻¹. (A worked estimate appears after the quiz.)
If a transition from vibrational energy level v = 3 to v = 4 were observed in an infrared spectrum, where would that spectral line appear relative to the one for the transition from v = 0 to v = 1?
What is the harmonic oscillator selection rule for vibrational excitation by infrared radiation?
Explain why the infrared absorption coefficient is larger for some normal modes than for others.
Why is it possible for quantum particles to tunnel through potential barriers?
What are the values of integrals like \(\int \limits _{-\infty}^{\infty} \Psi ^*_n (Q) \Psi _m (Q) dQ\) using harmonic oscillator wavefunctions?

Contributors: Adapted from "Quantum States of Atoms and Molecules" by David M. Hanson, Erica Harvey, Robert Sweeney, Theresa Julia Zielinski
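For the two numerical quiz items, here is a worked estimate in the harmonic approximation used in this chapter (the anharmonicity discussed above shifts a real overtone slightly lower). Since \(E_v = (v + \frac{1}{2})h\nu\), the levels are equally spaced, so the first overtone (v = 0 to v = 2) lies at twice the fundamental, $$\tilde\nu_{0\rightarrow 2} \approx 2 \times 2564\ \mathrm{cm}^{-1} = 5128\ \mathrm{cm}^{-1},$$ and, by the same equal-spacing argument, a v = 3 to v = 4 line would appear at the same position as the v = 0 to v = 1 fundamental.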
Let $f, g : U\rightarrow V$ be linear maps and $\lambda\in F$. Then the maps $f + g : U\rightarrow V$ and $\lambda f : U \rightarrow V$ are linear. My attempt at the proof for the first statement is as follows: Let $u,z\in U$ and $a\in F$. Using a linearity check, by definition of $f + g$, $$(f + g)(au + z) = f(au + z) + g(au + z)$$ by linearity of $f$ and $g$ $$= (af(u) + f(z)) + (ag(u) + g(z))$$ by basic properties of vector spaces $$= af(u) + ag(u) + f(z) + g(z)$$ by an axiom of vector spaces $$= a(f(u) + g(u)) + (f(z) + g(z))$$ and by definition of $f + g$ $$= a(f + g)(u) + (f + g)(z).$$ Hence, $f + g$ is linear. Is this the correct approach? And what is the proof that $\lambda f : U \rightarrow V$ is linear?
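For the second map, a sketch in the same style (using the same vector-space axioms as above): for $u,z\in U$ and $a\in F$, $$(\lambda f)(au + z) = \lambda f(au+z) = \lambda\big(a f(u) + f(z)\big) = a\big(\lambda f(u)\big) + \lambda f(z) = a(\lambda f)(u) + (\lambda f)(z),$$ by the definition of $\lambda f$, linearity of $f$, and commutativity and associativity of scalar multiplication; hence $\lambda f$ is linear.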
I have $$ A = \left( \begin{array}{ccc} -4 & 9 & -4 \\ 0 & 0 & 0 \\ 6 & -13 & 6 \end{array} \right) $$ whose eigenvalues are $\{0,0,2\}$. For $\lambda=2$, I have $(A-\lambda\,I)\,\vec{v}^{(3)} = \vec{0}$, leading to $\vec v^{(3)}=\left( \begin{array}{c} 1 \\ 0 \\ -3/2 \end{array} \right)$. For $\lambda_{1,2} = \{0,0\}$, I get $$ \left( \begin{array}{ccc} -4 & 9 & -4 \\ 0 & 0 & 0 \\ 6 & -13 & 6 \end{array} \right)\,\vec{v}^{(1)} = \vec{0}. $$ (I know the vector symbol may be inappropriate here, but I will use it anyway.) I can see that the column space (rank) does not span a 3D space, and the determinant is zero. As the 1st and 3rd columns are the same, and the 2nd is not a multiple of them, I come up with $\operatorname{Rank}(A-\lambda\vert_0\,I) = 2$. By the rank-nullity theorem, I see that $3 = \operatorname{nullity} + 2$, implying that the geometric multiplicity (the nullity) is 1. This 2nd eigenvector is just $\vec{v}^{(1)} = \left( \begin{array}{c} 1 \\ 0 \\ -1 \end{array} \right)$. My question then becomes: how do I find a full set of eigenvectors? Do I find the 3rd one by $$ (A-0\,I)\;\vec{v}^{(2)} = \vec{v}^{(1)} \;? $$ I tried, but I get the same set of 2 equations as for the 2nd eigenvector $\vec{v}^{(1)}$. Stepping away from the calculations, I know I have a 1D nullspace (a 1D set of vectors that get sent to zero); then I have one eigenvalue with algebraic multiplicity 1 (and therefore geometric multiplicity at most 1) associated with the eigenvalue 2 and eigenvector $\vec{v}^{(3)}$. This leaves just the eigenvector $\vec{v}^{(1)}$. Is this the 'full' set? Didn't mean to forget this part... Is the generalized eigenvector then meant to represent the scaling of all vectors in the domain that get sent to zero? What is the physical interpretation of this generalized eigenvector, as couched in the context of a symplectic manifold or physical phase space? Is there an answer to this that is always true?
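A quick check with SymPy (a minimal sketch for the matrix above) shows that the chain equation $A\vec{v}^{(2)} = \vec{v}^{(1)}$ is in fact solvable, so a generalized eigenvector does complete the basis:

import sympy as sp

A = sp.Matrix([[-4, 9, -4],
               [0, 0, 0],
               [6, -13, 6]])
print(A.eigenvals())                    # {2: 1, 0: 2}

v1 = sp.Matrix([1, 0, -1])              # ordinary eigenvector for lambda = 0
assert A * v1 == sp.zeros(3, 1)

# Generalized eigenvector: A v2 = v1 is consistent even though A is singular
v2, params = A.gauss_jordan_solve(v1)
v2 = v2.subs({p: 0 for p in params})    # fix the free parameter to 0
assert A * v2 == v1                     # Jordan-chain condition; gives v2 = (2, 1, 0)^T

v3 = sp.Matrix([1, 0, sp.Rational(-3, 2)])   # eigenvector for lambda = 2
P = sp.Matrix.hstack(v1, v2, v3)
print(P.inv() * A * P)                  # Jordan form [[0, 1, 0], [0, 0, 0], [0, 0, 2]]

(SymPy's A.jordan_form() reaches the same conclusion in one call.) So the 'full' basis is $\{\vec v^{(1)}, \vec v^{(2)}, \vec v^{(3)}\}$, where the generalized eigenvector $\vec v^{(2)}$ is a direction that $A$ maps onto the $\lambda=0$ eigendirection rather than to zero.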
How to Compute the Acoustic Radiation Force

Acoustic radiation force is an important nonlinear acoustic phenomenon that manifests itself as a nonzero force exerted by acoustic fields on particles. Acoustic radiation is an acoustophoretic phenomenon, that is, the movement of objects by sound. One interesting example of this force in action is the acoustic particle levitation discussed in this previous blog post. Today, we shall examine the nature of this force and show how it can be computed using COMSOL Multiphysics.

What Acoustic Radiation Force Is and How It Works

To understand the nature of the acoustic radiation force, let's first consider a simple example of a particle in a standing wave pressure field (here assumed to be lossless). The force on the particle arises as a result of the particle's finite size, such that the gradients in the pressure field will result in a greater force being exerted on one side of the particle than on the other. However, if we are considering a harmonic pressure wave, then the force is expected to behave as a harmonic function, which can be expressed as F_\text{harmonic} = F_0\sin (2\pi f_0 t+\phi). I've shown this as a black arrow in the animation below. If time-averaged, the total contribution goes to zero. So, where does the observed nonzero force come from? This question was first addressed by L. V. King back in 1934 ("On the acoustic radiation pressure on spheres"). In order to understand King's results, we must take a step back to examine how the governing equations of acoustics are derived.

Deriving the Acoustics Equations

We will find out that they emerge from the Navier-Stokes equations as a result of a linearization procedure, which is normally carried out in two steps. First, a very small time-varying perturbation in pressure and velocity is assumed on top of a stationary background field. When time derivatives are applied, the stationary terms drop out and what is left only includes the time-dependent perturbation terms. The remaining expression will contain both linear and nonlinear contributions. The latter appear in the form of products of two or more linear perturbation terms, and they result from convective and inertial terms in the original Navier-Stokes equations. But, in the simplest acoustic limit, the contribution of the nonlinear terms can be neglected because the amplitudes of the perturbations considered are very small. For example, 0.01^2 is much smaller than 0.01 and can therefore be neglected. So, in the second step of the linearization procedure, all the nonlinear terms are neglected and the linear wave equation is obtained. What King indicated was that in order to understand and evaluate the effect of the acoustic radiation force, the nonlinear terms must somehow be retained in the equations. Keeping the terms up to second order, the pressure field will appear as a combination of two terms p = p_1 + p_2, where p_1 and p_2 can be expressed in simplistic form as p_1 = \rho_0 c_0 v, which appears as a linear function of the perturbation velocity v, and p_2 = \frac{1}{2} \rho_0 v^2, which appears as a nonlinear function of v. Since, in the acoustic limit, we only consider cases in which v \ll c_0, where c_0 is the adiabatic speed of sound, we conclude that p_2 \ll p_1. At this point, we are ready to answer the first question: Where does the acoustic radiation force come from?
Going back to the example of a particle in a standing wave pressure field, let's examine the linear and nonlinear components of the pressure and the forces produced by these components. In this case, p_1 will be a time-harmonic function p_1 = P_1 \cos(kx)\sin(2\pi f t) and p_2 will be an anharmonic function p_2 = P_2\cos^2(kx)[1-\cos(4\pi f t)] resulting from the nonlinear contribution. These terms are visualized by the waveforms in the animation above. The forces resulting from these pressure terms are indicated by arrows. The linear force (black arrow) changes both in magnitude and direction, so its cycle-averaged contribution is zero, whereas the nonlinear term (red arrow) only changes in magnitude and on average exerts a nonzero force.

Computing the Acoustic Radiation Force

The simple analysis above demonstrates the main mechanism underlying the acoustic radiation force phenomenon. Intuitively, we realize that no force will appear if the particle has the same acoustic properties as the surrounding medium. In other words, the radiation force should be a function of not only the size of a particle and the amplitude of the acoustic field, but also of the particle's acoustic contrast (the ratio of the material properties of the particle relative to the surrounding fluid). Due to the acoustic contrast, the field incident on the particle will be reflected from its surface, and the radiation force will be a result of a combination of incident and reflected waves. This makes the problem quite difficult to solve analytically. The solution in a closed analytic form was only given for some limiting cases by a number of authors, starting with King. He considered rigid spherical particles with dimensions much smaller than the wavelength of the incident wave, but much larger than the viscous and thermal skin depths. It was the second assumption that allowed these terms to be neglected. King's results have been extended to include compressible particles, as in "Acoustic radiation pressure on a compressible sphere". The results from this study were later confirmed by L. P. Gor'kov in 1962 in "On the forces acting on a small particle in an acoustical field in an ideal fluid". Viscous and thermal effects become important when the size of the particles becomes comparable to the acoustic boundary layers (thermal and viscous). Results including viscosity were presented in 2012 by M. Settnes and H. Bruus. Gor'kov developed an elegant approach to expressing the radiation force in terms of the time-averaged kinetic and potential energies of stationary acoustic fields of any geometry. His results, when applied to small compressible fluid particles, give the force as the gradient of a potential function U_\text{rad}:

(1) \mathbf{F}_\text{rad} = -\nabla U_\text{rad}

The potential function U_\text{rad} is expressed using the acoustic pressure and velocity as:

(2) U_\text{rad} = V_p\left[\frac{f_1}{2\rho_0 c_0^2}\,\langle p^2\rangle - \frac{3 f_2 \rho_0}{4}\,\langle v^2\rangle\right]

where V_p is the volume of the particle, \langle\cdot\rangle denotes a time average, and the scattering coefficients are given by:

(3) f_1 = 1-\frac{K_0}{K_p}, \qquad f_2 = \frac{2(\rho_p-\rho_0)}{2\rho_p+\rho_0}

where the K_i are the bulk moduli (index 0 for the fluid, p for the particle). The scattering coefficients f_1 and f_2 represent the monopole and dipole coefficients, respectively. This approach, which is based on scattering theory, is only valid for particles that are small compared to the wavelength \lambda, in the limit a/\lambda \ll 1, where a is the radius of the particle. The v and p terms that appear in Eq. (2) are the first-order terms that can be obtained by solving a linear acoustic problem. Results in this form are typically obtained using a perturbation method, which is widely practiced in physics.
A thorough review and examples of this method applied to nonlinear problems in acoustics and microfluidics can be found in the textbook by Professor Henrik Bruus titled Theoretical Microfluidics. Eq. (1) is coded in the COMSOL Multiphysics Particle Tracing for Fluid Flow interface to evaluate the acoustic radiation force on particles. But, as mentioned above, it only applies to acoustically small particles and neglects thermoviscous effects. An example can be seen in the Acoustic Levitator model. Knowing the radiation force is important when modeling and simulating systems that handle particles using this phenomenon. This can be, for instance, microfluidic systems that sort and handle cells and other particles. An example of this is discussed in the blog post Acoustofluidic Multiphysics Problem: Microparticle Acoustophoresis. To extend the theory beyond the limit of acoustically small particles, a numerical approach is required. We will consider that next.

Implementing the Perturbation Solution Method in COMSOL Multiphysics

In general, all forces can be expressed using momentum fluxes as \mathbf F = \int_S T \mathbf{n}\, d\mathbf{a}, where the surface of integration, S, is the external surface of the particle. Gor'kov used this fact to obtain a closed-form analytical expression for the force acting on a particle in an arbitrary acoustic field. To compute the nonlinear acoustic radiation force, the momentum flux due to the acoustic field has to be evaluated up to second-order terms. The main appeal of his result is that, as mentioned earlier, the second-order terms can be expressed using the solution of a linear problem. To implement his method, all we need to do is solve the acoustic problem, use the results to compute the second-order momentum flux, and substitute the solution into the flux integral. H. Bruus has shown that, neglecting the thermoviscous effects, the time-averaged second-order flux terms are:

(4) \langle p_2\rangle = \frac{1}{2\rho_0 c_0^2}\,\langle p_1^2\rangle - \frac{\rho_0}{2}\,\langle \mathbf v_1\cdot \mathbf v_1\rangle \quad \text{together with the momentum flux} \quad \rho_0\langle \mathbf v_1 \mathbf v_1\rangle

The integral should be taken over the surface of a particle moving in response to the applied force. This means that the surface of integration is a function of time, S = S(t). To overcome this difficulty, Yosioka and Kawasima indicated that the integration can be transformed to an equilibrium surface S_0 that encloses the particle. Compensating for the error with the addition of a convective momentum flux term, the force, in total, becomes:

(5) \mathbf F = -\oint_{S_0}\Big[\langle p_2\rangle\,\mathbf n + \rho_0\big\langle(\mathbf n\cdot \mathbf v_1)\,\mathbf v_1\big\rangle\Big]\, da

All that is left to do now is solve the acoustics problem to obtain the acoustic pressure and velocity and substitute them into the integral in Eq. (5). In contrast to the approach used in Eqs. (1) to (3), the force expression given in Eq. (5) is valid for all particle sizes as long as the stress T is given. This approach was recently implemented in COMSOL Multiphysics by a group of researchers from the University of Southampton. It should be noted that the expression in Eq. (4) is only true when viscous and thermal effects are neglected. If these losses are included, the integration surface S_0 should be taken outside of the boundary layers, or a correct full stress expression for T used on the particle surface. A first-principles perturbation approach including thermal and viscous losses was presented at the 2013 ICA-ASA conference by M. J. Herring Jensen and H. Bruus, titled "First-principle simulation of the acoustic radiation force on microparticles in ultrasonic standing waves".
A detailed derivation of the governing equations up to second order, in a form suited for implementation in COMSOL Multiphysics, is given in the paper "Numerical study of thermoviscous effects in ultrasound-induced acoustic streaming in microchannels". To benchmark the method presented by Glynne-Jones et al., let's compute the acoustic radiation force exerted by a standing wave on a spherical nylon particle immersed in water. We assume a frequency of 1 MHz and a pressure amplitude of 1 bar and implement the model using the Acoustic-Structure Interaction interface in a 2D axisymmetric geometry. The size of the box in the model is four wavelengths high and two wavelengths wide. Let's excite a standing wave in this box using a Background Pressure Field condition, set up in such a way that the particle is at a distance of \lambda/8 from the pressure node. The integrals in Eq. (5) are computed by setting up integration coupling operators in the Component 1 > Definitions node. We need to make sure that the integral is calculated in the revolved geometry by checking the appropriate box and selecting the boundaries of the particle to define the surface of integration. It is worth mentioning here that the force computation used in this method is independent of the surface of integration, due to conservation of flux, as long as the surface is located outside the particle. In fact, using a surface at a larger distance will be more numerically accurate, simply because there will be more points to use for the numerical evaluation of the integral. To perform this integration, we can add another surface external to the particle. Finally, new flux variables are introduced under Component 1 > Definitions as the Variables 1a node. They are used as arguments for the integration operators to compute the total force.

Comparing the Results to an Analytical Solution

We are now ready to compare the perturbation approach to an analytical solution. As expected, they compare reasonably well for small particle radii, where the analytical solution considered is valid. Some analytical models that include higher harmonics in the scattered-field decomposition offer solutions that agree with the outlined numerical approach for both large and small spherical particles (as in the paper by T. Hasegawa, "Comparison of two solutions for acoustic radiation pressure on a sphere"). A small discrepancy for small particle radii between the analytical and numerical methods may be attributed to the fact that the theoretical models assume that the particle is plastic, whereas in this example we have considered an elastic particle with a bulk modulus of 0.4.

Concluding Remarks

The perturbation method has a number of advantages. First, it exploits the linear acoustics method to evaluate nonlinear second-order force effects. This allows the analysis to be easily extended to 3D for particles of arbitrary shapes and material composition. For example, we can extend it to simulate acoustic radiation forces on biological cells or microbubbles. Second, because the acoustic equations are solved in the frequency domain, where very efficient numerical methods are well established, the solution time in COMSOL Multiphysics is quite fast, even in 3D. Meanwhile, the disadvantage of this method is that it is driven by theoretical results that rely on a set of simplifying assumptions, and it can only be validated in a limited number of cases. What we would like to have is a numerical method that allows the problem to be solved directly.
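For a rough cross-check of such a benchmark in the small-particle limit, Eqs. (1)-(3) reduce, for a planar standing wave, to the classic expression F_z = 4\pi \Phi k a^3 E_\text{ac} \sin(2kz), with contrast factor \Phi = f_1/3 + f_2/2 and acoustic energy density E_\text{ac} = p_a^2/(4\rho_0 c_0^2). The short Python sketch below evaluates it; the nylon material constants are illustrative values supplied here, not numbers from this post:

import math

# Fluid (water) and particle (nylon-like) properties -- illustrative values
rho0, c0 = 998.0, 1483.0             # kg/m^3, m/s
K0 = rho0 * c0**2                    # bulk modulus of the fluid
rho_p, K_p = 1140.0, 5.0e9           # assumed nylon density and bulk modulus

f, p_a, a = 1.0e6, 1.0e5, 1.0e-6     # 1 MHz, 1 bar amplitude, 1 um radius
k = 2 * math.pi * f / c0             # wavenumber of the standing wave

f1 = 1.0 - K0 / K_p                                # monopole coefficient, Eq. (3)
f2 = 2.0 * (rho_p - rho0) / (2.0 * rho_p + rho0)   # dipole coefficient, Eq. (3)
Phi = f1 / 3.0 + f2 / 2.0                          # acoustic contrast factor
E_ac = p_a**2 / (4.0 * rho0 * c0**2)               # acoustic energy density

z = (math.pi / 4.0) / k     # lambda/8 from the pressure node, as in the model
Fz = 4.0 * math.pi * Phi * k * a**3 * E_ac * math.sin(2.0 * k * z)
print(f"Phi = {Phi:.3f},  F_z = {Fz:.3e} N")

Agreement with the finite-element evaluation of Eq. (5) should be expected only while a/\lambda \ll 1, which is the regime the comparison above covers.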
We shall see how this can be achieved in the next blog post. Stay tuned!

Other Posts in This Series
Direct FSI Approach to Computing the Acoustic Radiation Force
Modeling Acoustic Orbital Angular Momentum

Further Reading
Model: Acoustic Levitator
L. V. King, "On the acoustic radiation pressure on spheres" (1934)
M. Settnes and H. Bruus, "Forces acting on a small particle in an acoustical field in a viscous fluid", Phys. Rev. E 85, 016327, 1-12 (2012)
P. Glynne-Jones, P. P. Mishra, R. J. Boltryk, and M. Hill, "Efficient finite element modeling of radiation forces on elastic particles of arbitrary size and geometry", J. Acoust. Soc. Am. 133, 1885-93 (2013)
P. B. Muller, R. Barnkob, M. J. H. Jensen, and H. Bruus, "A numerical study of microparticle acoustophoresis driven by acoustic radiation forces and streaming-induced drag forces", Lab Chip 12, 4617-4627 (2012)
P. B. Muller and H. Bruus, "Numerical study of thermoviscous effects in ultrasound-induced acoustic streaming in microchannels", Phys. Rev. E 90, 043016, 1-12 (2014)
No one answered this question, so I finally went ahead and solved it, in case anyone is ever curious about this. If there are any errors, please let me know. Also, I copied/pasted this from my own LaTeX document, so I may have missed some of the translation to MathJax. In the most general case, the terminating impedances at each end of the duct in Fig. 1 are complex. This arises from the fact that, in the most general situation, there will be standing waves formed inside the duct to the left of 1 and to the right of 2. There are situations where it is possible to have no standing waves after the interface, which would then make the impedance real. These situations can occur in practical scenarios where the disturbance exits into the open atmosphere or terminates at a rigid, lossless wall, or if the pipe length downstream of the interface is so long that viscous losses attenuate the reflected signal to such a degree that it is negligible compared to the incident wave by the time the wave returns to the interface ($Z_1$ or $Z_2$). In order to study the most general case, let's assume that $Z_1$ and $Z_2$ are complex. In addition, let's again make the assumption that the waves are time harmonic, and are given by $$p'(x,t) = P(x)e^{i\omega t} = \Big[Ae^{-ikx} + Be^{ikx}\Big]e^{i\omega t}$$ $$u'(x,t) = U(x)e^{i\omega t} = \frac{1}{Z_0}\Big[Ae^{-ikx} - Be^{ikx}\Big]e^{i\omega t}$$ Since we are letting the impedances be arbitrary complex values, it turns out that it is relatively useless to try to solve for the coefficients directly. Doing so creates excessively complicated equations that don't reveal anything useful about the dominant modes inside the pipe. It turns out to be much more revealing to study the process through appropriate boundary conditions. We can arrive at a very general boundary condition by looking at the linearized momentum equation, given by $$ \frac{\partial u'}{\partial t} + \frac{1}{\rho_0}\frac{\partial p'}{\partial x} = 0$$ Plugging the pressure and velocity equations into the momentum equation, and noting that $\frac{p'(x,t)}{u'(x,t)} = z(x)$, where $z(x)$ is the mechanical impedance, we get $$ i\omega \rho_0 P(x) + z(x)\frac{dP}{dx} = 0$$ Noting that $\omega = c_0k$ and $z_0 = \rho_0 c_0$, and dividing by the area $A(x)$, we get $$ ik\frac{z_0}{A(x)}P(x) + \frac{z(x)}{A(x)}\frac{dP}{dx} = 0$$ Here $z_0$ is the specific acoustic impedance of the pipe. It's important to note that this is a real-valued quantity. Finally, noting that the acoustic impedance is given by $Z_i = \frac{z_i}{A(x)}$, we have $$ ikZ_0P(x) + Z(x)\frac{dP}{dx} = 0$$ This is a general B.C. and can be applied at any point within the pipe, but it is important to note that the area can change, so the correct $A(x)$ value must be used. Applying this B.C. at $x = 0$, we get $$ ikZ_0\big[A + B\big] + Z_1\big[-ikA + ikB\big] = 0$$ Grouping all the terms, we arrive at our first B.C. $$ \Big(1 - \frac{Z_1}{Z_0}\Big)A + \Big(1 + \frac{Z_1}{Z_0}\Big)B = 0$$ Applying the B.C. at $x = L$, we get $$ Ae^{-ikL} + Be^{ikL} + \frac{Z_2}{Z_0}\Big[-Ae^{-ikL} + Be^{ikL}\Big] = 0$$ Finally, grouping all the terms, we arrive at our last B.C.
$$ \Big(1 - \frac{Z_2}{Z_0}\Big)e^{-ikL}A + \Big(1 + \frac{Z_2}{Z_0}\Big)e^{ikL}B = 0$$ These can be put into matrix form to get $$ \begin{bmatrix} \Big(1 - \frac{Z_1}{Z_0}\Big) & \Big(1 + \frac{Z_1}{Z_0}\Big) \\ \Big(1 - \frac{Z_2}{Z_0}\Big)e^{-ikL} & \Big(1 + \frac{Z_2}{Z_0}\Big)e^{ikL} \end{bmatrix} \begin{bmatrix} A \\ B \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix}$$ The coefficients A and B only have a nontrivial solution when the determinant of the matrix is zero: $$ \begin{vmatrix} \Big(1 - \frac{Z_1}{Z_0}\Big) & \Big(1 + \frac{Z_1}{Z_0}\Big) \\ \Big(1 - \frac{Z_2}{Z_0}\Big)e^{-ikL} & \Big(1 + \frac{Z_2}{Z_0}\Big)e^{ikL} \end{vmatrix} = 0$$ Solving for the determinant, we get $$ \Big(1-\frac{Z_1}{Z_0}\Big)\Big(1+\frac{Z_2}{Z_0}\Big)e^{ikL} - \Big(1-\frac{Z_2}{Z_0}\Big)\Big(1 + \frac{Z_1}{Z_0}\Big)e^{-ikL} = 0$$ Defining the reflection coefficients as $$ R_1 \equiv \frac{\frac{Z_1}{Z_0} - 1}{\frac{Z_1}{Z_0} + 1}$$ $$ R_2 \equiv \frac{\frac{Z_2}{Z_0} - 1}{\frac{Z_2}{Z_0} + 1}$$ we finally arrive at the condition $$ e^{i2kL} = \frac{R_2}{R_1}$$ The reflection coefficients defined here are the reflections the wave inside the pipe would see when approaching either of the interfaces. Since $Z_1$ and $Z_2$ are, in general, complex, we can expect $R_1$ and $R_2$ to also be complex, so the condition becomes $$ e^{i2kL} = \frac{|R_2|}{|R_1|}e^{i(\theta_2 - \theta_1)}$$ Before jumping to the most general case, let's take a look at a couple of limiting cases:

Equal Reflection with Positive Orientation

In the case that $\frac{R_2}{R_1} = 1$, the wave would see the exact same reflection at either interface, so we must have $Z_1 = Z_2$. The very limiting cases of open-open or closed-closed boundaries are a subset of this class of boundary conditions. It is clear that this condition only holds when $$ \sin{kL} = 0$$ which means that $$ kL = n\pi$$ where $n = 1,2,3,\dots$ As long as the impedances are the same (they do not have to be open-open or closed-closed), we will always have $\frac{\lambda}{2}$ modes.

Equal Reflection with Negative Orientation

In the case that $\frac{R_2}{R_1} = -1$, the wave would see the exact same magnitude of reflection at each end, but its sign would be opposite to that at the other end of the pipe. For the case of real impedances, that means $\frac{Z_i}{Z_0} < 1$ at one of the interfaces and $\frac{Z_i}{Z_0} > 1$ at the other. This means that $$ e^{i2kL} = -1$$ Taking the complex log of each side, and noting the periodicity of the complex exponential, we have $$ i2\Big[kL - n\pi\Big] = \ln{1} + i\pi$$ where we used the identity $\ln{(-a)} = \ln{(a)} + i\pi$. Solving for $kL$, we get $$ kL = \frac{\pi}{2}(2n + 1)$$ where $n = \pm 1,2,3,\dots$, or more simply $$ kL = m\frac{\pi}{2} $$ where $m = \pm 1,3,5,\dots$ This is a really interesting limiting case, because we can get some valuable insight into the problem without solving for the coefficients. If we set $R_2 = -R_1$, we get $$ \Big(\frac{Z_2}{Z_0} - 1\Big)\Big(\frac{Z_1}{Z_0} + 1\Big) = -\Big(\frac{Z_1}{Z_0} - 1\Big)\Big(\frac{Z_2}{Z_0} + 1\Big)$$ and solving this, we come to the requirement that $Z_1Z_2 = Z_0^2$. Since $Z_0 = \frac{\rho_0 c_0}{A}$ is a real-valued parameter, this equation will only hold if $Z_1 = Z_2^*$. Thus we can write $$ Z_0 = \sqrt{|Z_1||Z_2|}$$ This means that as long as the impedance of the pipe, $Z_0 = \frac{\rho_0 c_0}{A}$, is the geometric mean of the terminating impedances, we will always have $\frac{\lambda}{4}$ modes! Furthermore, if the fluid properties (i.e.
density and speed of sound) are the same, and the area inside the pipe is constant, then we will have quarter-wavelength modes when the area of the pipe is the geometric mean of the areas of the pipes upstream and downstream of it: $$ A_0 = \sqrt{A_1A_2}$$

Constant Phase

If, on the other hand, we have $\theta_2 = \theta_1$, then the condition becomes $$ e^{2ikL} = \frac{|R_2|}{|R_1|}$$ Taking the complex log of both sides, we get $$ 2i\Big[kL - n\pi\Big] = \ln{\Bigg(\frac{|R_2|}{|R_1|}\Bigg)}$$ and we can solve for the wavenumber to get $$ kL = n\pi - i\frac{1}{2}\ln{\Bigg(\frac{|R_2|}{|R_1|}\Bigg)}$$ where $n = 0, \pm 1, \pm 2,\dots$ But what does a complex wavenumber physically mean? Let's plug it back into the time-harmonic equations to find out (for $n = 1$). The spatial part of the pressure equation gives $$ P(x) = \Bigg[\Big(\frac{|R_2|}{|R_1|}\Big)^{\frac{-x}{2L}}A\Bigg]e^{-i\frac{\pi}{L}x} + \Bigg[\Big(\frac{|R_2|}{|R_1|}\Big)^{\frac{x}{2L}}B\Bigg]e^{i\frac{\pi}{L}x}$$ It is clear that the contribution of the magnitude ratio is simply to adjust the amplitudes of the incident and reflected waves so that the boundary conditions are met; it provides no contribution to the oscillating frequency. The time component of the wavenumber gives $$ e^{i\omega t} = \Bigg[\Big(\frac{|R_2|}{|R_1|}\Big)^{\frac{ct}{2L}}\Bigg]e^{i\frac{\pi c}{L}t}$$ It is clear that $\frac{|R_2|}{|R_1|}$ is a "damping" factor. This makes physical sense. Since we are looking at free motion (i.e. not forced harmonics), then unless $|R_2| = |R_1| = 1$, some portion of the wave will transmit through the interface and carry energy away with it. Thus we should expect the wave to die down as $t \rightarrow \infty$. This is true as long as $\frac{|R_2|}{|R_1|} < 1$; but if $\frac{|R_2|}{|R_1|} > 1$, then $p'(x,t) \rightarrow \infty$ as $t \rightarrow \infty$. To summarize, if the phases are equal, then we will always have $\frac{\lambda}{2}$ modes regardless of the magnitude ratio of the reflection coefficients; the only role the magnitudes play is to change the amplitude.

$\pi$ Phase Difference

If $\theta_2 = \theta_1 + \pi$, then we have $$ e^{2ikL} = -\frac{|R_2|}{|R_1|}$$ This can be solved in a similar way to get $$ kL = n\frac{\pi}{2} - i\frac{1}{2}\ln{\Bigg(\frac{|R_2|}{|R_1|}\Bigg)} $$ where $n = \pm 1, \pm 3, \pm 5,\dots$ This means that as long as the phase difference is off by a factor of $\pi$, the resonant modes will always be $\frac{\lambda}{4}$ modes regardless of the magnitude difference.

General Case

The most general condition is $$ 2i\Big[kL - n\pi\Big] = i(\theta_2 - \theta_1) + \ln{\Bigg(\frac{|R_2|}{|R_1|}\Bigg)}$$ so the wavenumber must be $$ kL = n\pi + \frac{1}{2}(\theta_2 - \theta_1) - i\frac{1}{2}\ln{\Bigg(\frac{|R_2|}{|R_1|}\Bigg)}$$ where $n = 0, \pm 1, \pm 2,\dots$ As discussed previously, the only component that affects the oscillating frequency is the phase difference between the two interfacial reflections. If the phase difference is very small, $\theta_2 - \theta_1 \approx 0$, then we will simply have half-wavelength modes. What is really interesting about these results is that they only depend on the boundaries, and not on what is going on within the pipe, provided that the fluid is homogeneous (i.e. the wavenumber stays the same).
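To get a feel for the general condition, here is a minimal numerical sketch in Python (the function name and example impedances are mine; it simply evaluates the closed-form $kL$ above and checks it against the determinant condition):

```python
import numpy as np

def resonant_kL(Z1, Z2, Z0, n_modes=3):
    """Complex resonant wavenumbers k*L for a duct with terminating
    impedances Z1, Z2 and pipe impedance Z0 (all may be complex),
    using kL = n*pi + (theta2 - theta1)/2 - (i/2) ln(|R2|/|R1|)."""
    R1 = (Z1 / Z0 - 1) / (Z1 / Z0 + 1)
    R2 = (Z2 / Z0 - 1) / (Z2 / Z0 + 1)
    dtheta = np.angle(R2) - np.angle(R1)
    damping = 0.5 * np.log(abs(R2) / abs(R1))
    return [n * np.pi + 0.5 * dtheta - 1j * damping for n in range(1, n_modes + 1)]

# Sanity check: each kL should satisfy the resonance condition e^{2ikL} = R2/R1.
Z0, Z1, Z2 = 1.0, 0.3 + 0.2j, 2.0 - 0.5j
R1 = (Z1 / Z0 - 1) / (Z1 / Z0 + 1)
R2 = (Z2 / Z0 - 1) / (Z2 / Z0 + 1)
for kL in resonant_kL(Z1, Z2, Z0):
    assert np.isclose(np.exp(2j * kL), R2 / R1)
```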
Now showing items 1-6 of 6:

- Pion, Kaon, and Proton Production in Central Pb-Pb Collisions at $\sqrt{s_{NN}}$ = 2.76 TeV (American Physical Society, 2012-12). In this Letter we report the first results on $\pi^\pm$, K$^\pm$, p and pbar production at mid-rapidity (|y|<0.5) in central Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV, measured by the ALICE experiment at the LHC. The ...
- Measurement of prompt J/psi and beauty hadron production cross sections at mid-rapidity in pp collisions at root s=7 TeV (Springer-Verlag, 2012-11). The ALICE experiment at the LHC has studied J/ψ production at mid-rapidity in pp collisions at s√=7 TeV through its electron pair decay on a data sample corresponding to an integrated luminosity Lint = 5.6 nb−1. The fraction ...
- J/$\psi$ suppression at forward rapidity in Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV (American Physical Society, 2012). The ALICE experiment has measured the inclusive J/ψ production in Pb-Pb collisions at √sNN = 2.76 TeV down to pt = 0 in the rapidity range 2.5 < y < 4. A suppression of the inclusive J/ψ yield in Pb-Pb is observed with ...
- Production of muons from heavy flavour decays at forward rapidity in pp and Pb-Pb collisions at $\sqrt {s_{NN}}$ = 2.76 TeV (American Physical Society, 2012). The ALICE Collaboration has measured the inclusive production of muons from heavy flavour decays at forward rapidity, 2.5 < y < 4, in pp and Pb-Pb collisions at $\sqrt {s_{NN}}$ = 2.76 TeV. The pt-differential inclusive ...
- Particle-yield modification in jet-like azimuthal dihadron correlations in Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV (American Physical Society, 2012-03). The yield of charged particles associated with high-pT trigger particles (8 < pT < 15 GeV/c) is measured with the ALICE detector in Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV relative to proton-proton collisions at the ...
- Measurement of the Cross Section for Electromagnetic Dissociation with Neutron Emission in Pb-Pb Collisions at √sNN = 2.76 TeV (American Physical Society, 2012-12). The first measurement of neutron emission in electromagnetic dissociation of 208Pb nuclei at the LHC is presented. The measurement is performed using the neutron Zero Degree Calorimeters of the ALICE experiment, which ...
I'm studying 't Hooft's multi-instanton solutions of the self-duality equation. In this method $A^a_\mu=-\bar{\eta}^{a}_{\mu\nu}\partial^\nu \ln{\Phi}$. After substituting into the self-duality equation I have proven that $$\Phi^{-1}\square{\Phi}=0$$ We can write the solution of this equation as $\Phi=1+\sum^q_{i=1}\frac{\lambda_i}{(x-x_i)^2}$. I want to prove that this solution has instanton number equal to $q$. As I understand it, I need to calculate the integral $\int F_{\mu \nu} F^{\mu \nu} $ and show that it is $\sim q$, but it's too tricky for me. Can you explain how to show it? This post imported from StackExchange Physics at 2015-04-22 17:05 (UTC), posted by SE-user xxxxx
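For reference, here is a sketch of the standard argument; normalization and sign conventions differ between references, so the factors below should be checked against your own. For the 't Hooft ansatz the topological charge density collapses to a total derivative, $$ \frac{1}{16\pi^2}\,F^a_{\mu\nu}\tilde F^{a\,\mu\nu} \;=\; -\frac{1}{16\pi^2}\,\square\,\square \ln\Phi $$ (up to a convention-dependent sign). In four Euclidean dimensions $\square \ln x^2 = \frac{4}{x^2}$ and $\square \frac{1}{x^2} = -4\pi^2\,\delta^{(4)}(x)$, so $$ \square\,\square \ln (x-x_i)^2 = -16\pi^2\,\delta^{(4)}(x-x_i). $$ Near each pole $x_i$ one has $\ln\Phi \simeq \ln\lambda_i - \ln(x-x_i)^2 + (\text{regular})$, while $\Phi \to 1$ at infinity, so the surface term at infinity vanishes. The integral therefore localizes on the $q$ poles, each contributing one unit: $$ Q = \frac{1}{16\pi^2}\int d^4x\, F^a_{\mu\nu}\tilde F^{a\,\mu\nu} = q. $$ (For a self-dual field $\int F_{\mu\nu} F^{\mu\nu} = \int F_{\mu\nu}\tilde F^{\mu\nu}$, which is why the action integral in the question measures the same number.)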
GCD from Prime Decomposition/General Result

Theorem

Let $n \in \N$ be a natural number such that $n \ge 2$. Let $\N_n$ be defined as: $\N_n := \set {1, 2, \dotsc, n}$ Let $A_n = \set {a_1, a_2, \dotsc, a_n}$ be a set of natural numbers such that: $\displaystyle \forall i \in \N_n: a_i = \prod_{p_j \mathop \in T} {p_j}^{e_{i j} }$ where: $T = \set {p_j: j \in \N_r}$ such that: $\forall j \in \N_{r - 1}: p_j < p_{j + 1}$ $\forall j \in \N_r: \exists i \in \N_n: p_j \divides a_i$ where $\divides$ denotes divisibility. Then: $\displaystyle \map \gcd {A_n} = \prod_{j \mathop \in \N_r} {p_j}^{\min \set {e_{i j}: \, i \in \N_n} }$ where $\map \gcd {A_n}$ denotes the greatest common divisor of $a_1, a_2, \dotsc, a_n$.

Proof

The proof proceeds by induction. For all $n \in \Z_{\ge 2}$, let $\map P n$ be the proposition: $\displaystyle \map \gcd {A_n} = \prod_{j \mathop \in \N_r} {p_j}^{\min \set {e_{i j}: \, i \in \N_n} }$

Basis for the Induction

$\map P 2$ is the case: $\displaystyle \gcd \set {a_1, a_2} = \prod_{j \mathop \in \N_r} {p_j}^{\min \set {e_{1 j}, e_{2 j} } }$ This is GCD from Prime Decomposition. Thus $\map P 2$ is seen to hold. This is the basis for the induction.

Induction Hypothesis

Now it needs to be shown that, if $\map P k$ is true, where $k \ge 2$, then it logically follows that $\map P {k + 1}$ is true. So this is the induction hypothesis: $\displaystyle \map \gcd {A_k} = \prod_{j \mathop \in \N_r} {p_j}^{\min \set {e_{i j}: \, i \in \N_k} }$ from which it is to be shown that: $\displaystyle \map \gcd {A_{k + 1} } = \prod_{j \mathop \in \N_r} {p_j}^{\min \set {e_{i j}: \, i \in \N_{k + 1} } }$

Induction Step

This is the induction step:

$\displaystyle \map \gcd {A_{k + 1} } = \map \gcd {\set {\map \gcd {A_k}, a_{k + 1} } }$ (as the greatest common divisor is associative)
$\displaystyle = \prod_{j \mathop \in \N_r} {p_j}^{\min \set {\min \set {e_{i j}: \, i \in \N_k}, \, e_{k + 1, j} } }$ (by the induction hypothesis together with GCD from Prime Decomposition)
$\displaystyle = \prod_{j \mathop \in \N_r} {p_j}^{\min \set {e_{i j}: \, i \in \N_{k + 1} } }$

So $\map P k \implies \map P {k + 1}$ and the result follows by the Principle of Mathematical Induction. Therefore: $\forall n \in \Z_{\ge 2}: \displaystyle \map \gcd {A_n} = \prod_{j \mathop \in \N_r} {p_j}^{\min \set {e_{i j}: \, i \in \N_n} }$
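As a quick sanity check of the general result, here is a small Python sketch (the function names and example numbers are mine) comparing the min-of-exponents product against the standard library's gcd:

```python
from functools import reduce
from math import gcd

def gcd_from_exponents(exponent_rows, primes):
    """gcd of numbers given as rows of prime exponents: take the
    componentwise minimum exponent, as in the theorem above."""
    min_exps = [min(row[j] for row in exponent_rows) for j in range(len(primes))]
    out = 1
    for p, e in zip(primes, min_exps):
        out *= p ** e
    return out

primes = [2, 3, 5]
rows = [[3, 1, 0], [1, 2, 1], [2, 0, 4]]   # a_1 = 24, a_2 = 90, a_3 = 2500
numbers = [reduce(lambda x, pe: x * pe[0] ** pe[1], zip(primes, row), 1) for row in rows]
assert gcd_from_exponents(rows, primes) == reduce(gcd, numbers)  # both give 2
```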
This is a summary of the paper On the Number of Distinct Languages Accepted by Finite Automata with n States. The paper provides relatively easy, yet far from tight, lower and upper bounds on the number of distinct languages accepted by NFA's. Their discussion of the number of distinct DFA's is very insightful, so I will also include that part. The paper starts with a quite rigorous asymptotic for the number of distinct languages accepted by a DFA with $n$ states over a unary alphabet. This is done by observing under which conditions a given $n$-state unary DFA is minimal. In such cases the description of the automaton may be mapped (bijectively) to a primitive word, and the enumeration of such words is well known and done with the help of the Möbius function. Using that result, bounds for non-unary alphabets, both in the DFA and in the NFA case, are proven. Let's go into more detail. For a $k$-letter alphabet, define\begin{align*} f_k(n) &= \text{the number of pairwise non-isomorphic minimal DFA's with } n \text{ states}\\ g_k(n) &= \text{the number of distinct languages accepted by DFA's with } n \text{ states}\\ G_k(n) &= \text{the number of distinct languages accepted by NFA's with } n \text{ states}\end{align*}Note that $ g_k(n) = \sum_{i=1}^n f_k(i)$. We start with $f_1(n)$ and $g_1(n)$.

Enumeration of Unary DFA's

A unary DFA $M = (Q, \{a\},\delta, q_0,F) $ with states $q_0,\dots, q_{n-1}$ is minimal iff:

1. It is connected. Thus, after renaming, the transition diagram consists of a tail and a loop, i.e. $\delta(q_i,a) = q_{i+1}$ for $i < n-1$ and $\delta(q_{n-1},a)=q_j$ for some $j \leq n-1$.
2. The loop is minimal.
3. If $j \neq 0$, then either $q_{j-1} \in F $ and $q_{n-1} \notin F$, or $q_{j-1} \notin F $ and $q_{n-1} \in F$.

The loop $q_j,\dots, q_{n-1}$ is minimal iff the word $a_j \cdots a_{n-1}$ defined by \begin{equation*} a_i = \begin{cases} 1 \quad \text{if } q_i \in F, \\ 0 \quad \text{if } q_i \notin F \end{cases}\end{equation*}is primitive, which means it cannot be written in the form $x^k$ for some word $x$ and some integer $k \ge 2$. The number $\psi_k(n)$ of primitive words of length $n$ over a $k$-letter alphabet is known, see e.g. Lothaire, Combinatorics on Words. We have\begin{equation*} \psi_k(n) = \sum_{d | n} \mu(d) k^{n/d}\end{equation*} where $\mu$ is the Möbius function. With the help of $ \psi_k(n) $ the paper proves exact formulas for $f_1(n)$ and $g_1(n)$ and shows that asymptotically (Theorem 5 and Corollary 6),\begin{align*} g_1(n) &= 2^n \left(n-\alpha+O \left(n 2^{-n/2} \right)\right) \\ f_1(n) &= 2^{n-1}\left(n+1-\alpha+O \left(n 2^{-n/2} \right)\right).\end{align*}

Enumeration of DFA's

The next step is a lower bound for $f_k(n)$. Theorem 7 states that\begin{equation*} f_k(n) \ge f_1(n) n^{ (k-1)n} \sim n 2^{n-1}n^{(k-1)n}.\end{equation*}For a subset $\Delta \subset \Sigma$ of the alphabet of an automaton $M$, define $M_{\Delta}$ as the restriction of $M$ to $\Delta$. The proof works by considering the set $S_{k,n}$ of DFA's $M$ over the $k$-letter alphabet $\{0,1,\dots,k-1 \}$ obtained by:

1. letting $M_{ \{0 \}}$ be one of the $f_1(n) $ different minimal unary DFA's on $n$ states, and
2. choosing any $k-1$ functions $h_i : Q \to Q$ for $1 \le i < k$ and defining $\delta(q,i) = h_i(q)$ for $1 \le i < k$ and $q \in Q$.

The observation is then that the automata in $S_{k,n}$ are minimal and accept $f_1(n) n^{ (k-1)n}$ distinct languages.

Enumeration of NFA's

For $G_1(n)$ one has the trivial lower bound $2^n$, since every subset of $\{\epsilon, a,\dots, a^{n-1}\}$ can be accepted by some NFA with $n$ states.
The lower bound is improved slightly, yet the proof is rather lengthy. The paper Descriptional Complexity in the unary case by Pomerance et al. shows that $G_1(n) \le \left( \frac{c_1 n}{\log n}\right)^n $. Proposition 10 shows that, for $k \ge 2$, we have\begin{equation*} n 2^{(k-1)n^2} \le G_k(n) \le (2n-1) 2^{kn^2} +1.\end{equation*}The proof is quite short, hence I include it (more or less) verbatim. For the upper bound, note that any NFA can be specified by giving, for each pair $(q,a)$ of state and symbol, the subset of $Q$ that equals $\delta(q,a)$ (hence the factor $ 2^{kn^2}$). We may assign the final states as follows: either the initial state is final or not, and since the names of the states are unimportant, we may assume the remaining final states are $\{q_1,\dots,q_j\}$ for some $j \in [0..n-1]$. Finally, if we choose no final states, we obtain the empty language. For the lower bound the authors proceed in a similar way as in the proof for the DFA case: Define an NFA $M=(Q,\Sigma,\delta,q_0, F)$ with $\Sigma = \{ 0,1,\dots,k-1 \}$, $Q=\{ q_0,\dots, q_{n-1} \}$ and $\delta$ given by\begin{align*} \delta(q_i,0) &=q_{(i+1) \mod n} \quad \text{for } 0 \le i <n \\ \delta(q_i,j) &=h_j(i) \quad \text{for } 0 \le i <n,\, 1 \le j < k \end{align*}where $h_j: \{ 0,\dots,n-1\} \to 2^Q$ is any set-valued function. Finally, let $F=\{q_i\}$ for any $i \in [0..n-1]$. There are $ 2^{(k-1)n^2} $ such functions and $n$ ways to choose the set of final states. One can then show that no two such NFA's accept the same language.
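To make the enumeration of primitive words concrete, here is a small Python sketch of $\psi_k(n)$ (the helper names are mine), cross-checked against a brute-force primitivity test over a binary alphabet:

```python
from itertools import product

def mobius(n):
    """Mobius function via trial factorization (fine for small n)."""
    mu, d = 1, 2
    while d * d <= n:
        if n % d == 0:
            n //= d
            if n % d == 0:
                return 0   # squared prime factor
            mu = -mu
        d += 1
    return -mu if n > 1 else mu

def primitive_words(k, n):
    """psi_k(n) = sum over d | n of mu(d) * k^(n/d)."""
    return sum(mobius(d) * k ** (n // d) for d in range(1, n + 1) if n % d == 0)

def is_primitive(w):
    """A word is primitive iff it is not x^j for some j >= 2."""
    n = len(w)
    return not any(n % d == 0 and w == w[:d] * (n // d) for d in range(1, n))

for n in range(1, 10):
    count = sum(1 for w in product("ab", repeat=n) if is_primitive("".join(w)))
    assert count == primitive_words(2, n)
```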
Although microlensing incorporates a fairly large number of parameters, most events can be understood quite intuitively. This glossary is intended as a quick reference, particularly to disambiguate the different symbol sets used by different authors over time. Interested readers are referred to the references at the bottom for a full discussion, especially Skowron et al. (2011), and to the Learning Resources menu.

Single Lens Parameters

- Einstein crossing time ($t_{\rm E}$; days): Time taken for the background source to cross the lens's Einstein radius, as seen by the observer. Caution: some early microlensing papers may use $t_{\rm E}$ for the crossing time of the lens's Einstein diameter.
- Time of peak ($t_0$; days): Time at which the separation of lens and source reaches its minimum.
- Source self-crossing time ($t_*$; days): Time taken to cross the source's angular radius, $t_{*} \equiv \rho t_{\rm E}$.
- Impact parameter ($u$, at minimum $u_0$; dimensionless): The angular separation, normalized to $\theta_{\rm E}$, between source and lens as seen by the observer. Conventionally $u_0$ is positive when the lens passes to the right of the source star (Gould et al. 2004).
- Effective timescale ($t_{\rm eff}$; days): Equal to $u_0 t_{\rm E}$.
- Rho ($\rho$; dimensionless): The angular source size $\theta_S$ normalized by the angular Einstein radius $\theta_{\rm E}$.
- Vector microlens parallax, also called annual parallax ($\bar\pi_{\rm E}$, with components $(\pi_{{\rm E},N},\pi_{{\rm E},E})$ or $(\pi_{{\rm E},\parallel},\pi_{{\rm E},\perp})$): The parallax of a lensing event caused by the motion of the Earth in its orbit during the event. In the north/east convention, $$\bar\pi_{\rm E} \equiv (\pi_{{\rm E},N},\pi_{{\rm E},E}) \equiv (\cos \phi_{\pi},\sin \phi_{\pi})\,\pi_{\rm E}, \quad \text{where } \pi_{\rm E} = {\rm AU}/\tilde{r}_{\rm E}.$$ The components parallel and perpendicular to the apparent acceleration of the Sun, projected on the sky in a right-handed convention (Gould et al. 2004), are written $$\bar\pi_{\rm E} = (\pi_{{\rm E},\parallel},\pi_{{\rm E},\perp}).$$
- Direction of lens motion ($\phi_\pi$; radians): The direction of lens motion relative to the source, expressed as a counter-clockwise angle, north through east.
- Relative parallax ($\pi_{\rm rel}$): Relative parallax of lens and source, $\pi_{\rm rel} = \theta_{\rm E}\pi_{\rm E}$.
- Source parallax ($\pi_S$): Parallax of the source star as seen from Earth.
- Lens distance ($D_L$; pc): Physical distance from the observer to the lensing object.
- Source distance ($D_S$; pc): Physical distance from the observer to the source star.
- Lens-source distance ($D_{LS}$; pc): Physical distance between the source and lens along the observer's line of sight.
- Lens mass ($M_L$; $M_\odot$): Mass of the lensing object, including all component masses unless otherwise stated.
- Kappa ($\kappa$): Commonly used to abbreviate equations for the mass of the lens; kappa gathers together all the physical constants in the equation: $$\kappa = \frac{4G}{c^{2}\,{\rm AU}}$$
- Einstein angular radius ($\theta_{\rm E}$; mas): The angle subtended by the Einstein radius of a lens from the distance of the observer.
- Source angular radius ($\theta_*$ or $\theta_S$; mas): The angle subtended by the source star radius at the distance of the observer.
- Einstein radius ($R_{\rm E}$; km): The characteristic radius around the lens at which the images of the source form due to the gravitational deflection of light.
- Projected Einstein radius ($\tilde{r}_{\rm E}$; km): The Einstein radius projected to the observer's plane.
- Source radius ($R_*$ or $R_S$; km): The physical radius of the source star.
- Helio- and geocentric proper motions ($\mu_{\rm helio}$ and $\mu_{\rm geo}$; mas/yr): Proper motion of the source star relative to the Sun and Earth, respectively; $$\bar\mu_{\rm geo} = \mu\,\bar\pi_{\rm E}/\pi_{\rm E}$$

Binary Lens Parameters

- Parameter reference time ($t_{0,\rm par}$; days): The reference instant at which all parameters are measured, in a geocentric frame that is at rest relative to the Earth at that time (An et al. 2002).
- Fiducial time ($t_{0,\rm kep}$; days): Fiducial time specified during analysis of binary lens events. In general $t_{0,\rm kep}$ and $t_{0,\rm par}$ are defined to be equivalent.
- Lens masses ($M_{1,2}$ or $M_{P,S}$; $M_\odot$ unless otherwise stated): Most generically, the massive components of the lensing system are referred to as $M_1$ or $M_P$ for the primary (largest mass) object and $M_2$ or $M_S$ for the secondary. In the case of a planet-star binary, however, $M_P$ is sometimes used to refer to the planet (i.e. the secondary) while $M_S$ may refer to the star (the primary).
- Mass ratio ($q$): The ratio of the masses of a binary lens, $M_{2}/M_{1}$.
- Mass fraction ($\varepsilon$): The ratio of one of the masses of a binary lens to the total mass of that lens, $M_{i}/M_{\rm tot}$.
- Lens separation ($s$, also $s_0$, $d$ or $b$; dimensionless): The projected separation of the masses of a binary lens during the event, normalized by the angular Einstein radius $\theta_{\rm E}$.
- Projected lens separation ($a_\perp$; AU): Projected separation of the binary lens masses in physical units.
- Angle of lens motion ($\alpha$, also $\alpha_0$; radians): Angle (counter-clockwise) between the trajectory of the source and the axis of a binary lens, which is oriented pointing from the primary towards the secondary.
- Rate of change of lens separation ($ds/dt$; $\theta_{\rm E}$/year): The change in the projected separation of a binary lens due to the motion of the lens components in their orbit during an event.
- Rate of change of trajectory angle ($d\alpha/dt$; radians/year): The change in the trajectory of the source relative to the axis of a binary lens, due to the orbital motion of the lens components during an event.
- Earth orbital velocity ($v_{\oplus,\perp}$; km/s): The component of Earth's velocity at $t_{0,\rm par}$ projected onto the plane of the sky.
- Binary lens orbital velocity ($\bar\gamma$, components $(\gamma_\parallel,\gamma_\perp,\gamma_z)$): Components of the velocity of the secondary lens relative to the primary due to orbital motion at time $t_{0,\rm kep}$: $$\gamma_{\parallel} = (ds/dt)/s_{0}, \qquad \gamma_{\perp} = -d\alpha/dt.$$ $\gamma_z$ is measured only in rare cases where the full Keplerian orbit can be determined (see Skowron et al. 2011), and is oriented such that positive $\gamma_z$ points towards the observer.
- Binary lens orbital position (components $(s,0,s_z)$): Components of the position of the secondary lens relative to the primary due to orbital motion at time $t_{0,\rm kep}$. The "perpendicular" component is always zero because the coordinate system is oriented with one axis along the binary axis.
- Projected orbital velocity ($\Delta v$): Projected physical orbital velocity of the secondary of a binary lens relative to the primary, $$\bar{\Delta v} = D_{L}\theta_{\rm E}\,s\,\bar\gamma$$
- Projected orbital position ($\Delta r$): Projected physical orbital position of the secondary of a binary lens relative to the primary, $$\bar{\Delta r} = D_{L}\theta_{\rm E}(s,0,s_{z})$$
- Lens plane coordinates ($(\xi,\eta)$; normalized to $\theta_{\rm E}$): Coordinate system in the plane of a binary lens, parallel and perpendicular to the binary axis respectively.
- Orbital energy ($E_{\perp,\rm kin}$, $E_{\perp,\rm pot}$): The projected kinetic and potential energy due to binary lens orbital motion (Batista et al. 2011): $$\frac{E_{\perp,kin}}{E_{\perp,pot}} = \frac{\kappa M_{\odot} \pi_{\rm{E}}(|\bar\gamma|\,{\rm yr})^{2}s^{3}}{8\pi^{2}\theta_{\rm{E}}(\pi_{\rm{E}}+\pi_{S}/\theta_{\rm{E}})^{2}}$$

Photometric Parameters

- Magnification ($A$, at peak $A_{\rm max}$ or $A_0$): The magnification of the source star flux caused by the gravitational lens.
- Event flux ($f(t,k)$; counts/s): The total flux measured during a lensing event as a function of time $t$ is the combination of the flux from the source being lensed plus the flux from (unlensed) background stars. Since different instruments $k$ have different pixel scales and hence different degrees of blending, these are characterized with separate parameters. Commonly defined as $$f(t,k) = A(t)f_{S}(k) + f_{b}(k)$$
- Source flux ($f_S$; counts/s): Flux received from the source (as opposed to $f_b$).
- Blend flux ($f_b$; counts/s): Flux from background sources blended with the source.
- Blend ratio ($g$): Ratio of blend flux to source flux.
- Baseline magnitude ($I_{\rm base}$ or $I_0$; mag): The measured brightness of a source star when unlensed, which may be blended with other stars.
- Peak magnitude ($I_{\rm peak}$; mag): Measured brightness of the source star at the time of smallest separation between lens and source, i.e. greatest brightness.
- Source magnitude ($I_S$; mag): Measured (and reddened) source star magnitude.
- Dereddened source magnitude ($I_{S,0}$; mag): Source star magnitude corrected for interstellar reddening.
- Blend magnitude ($I_B$; mag): Measured magnitude of stars blended with the source star.
- Lens magnitude ($I_L$, $H_L$; mag): Magnitude of the lens star measured in the I and H passbands.
- Source star color (usually $(V-I)_S$; mag): Measured color (here in the V and I bands) of the blended and reddened source star.
- Dereddened source color (usually $(V-I)_{S,0}$; mag): Dereddened color of the source star.
- Blend color (usually $(V-I)_B$; mag): The combined color of stars blended with the source.
- Extinction coefficient (usually $A_I$; mag): Extinction between the observer and the source star, here in the I passband.
- Reddening coefficient (usually $E(V-I)$; mag): Reddening term between the observer and the source, here in the V and I passbands.
- Limb darkening coefficient ($\Gamma_\lambda$; mag): Limb darkening coefficient for passband $\lambda$ (An et al. 2002).
- Limb darkening coefficient ($u_\lambda$; mag): Limb darkening coefficient for passband $\lambda$.

Key Concepts

- Optical depth ($\tau$; star$^{-1}$): The probability that a given star, at a specific instant in time, has a magnification caused by gravitational microlensing of $A > 1.34$. This is the fraction of a given solid angle of sky observed which is covered by the Einstein rings of all lensing objects within that area.
- Event rate ($\Gamma$; star$^{-1}$ yr$^{-1}$): The rate at which microlensing events occur.
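As a small worked example tying together the entries for $\kappa$, $\pi_{\rm rel}$ and $\theta_{\rm E}$, the sketch below evaluates the standard relation $\theta_{\rm E}^2 = \kappa M_L \pi_{\rm rel}$ (see e.g. Gould 2000); the function name and the example lens are mine:

```python
import math

G, c = 6.674e-11, 2.998e8       # SI units
AU, PC = 1.496e11, 3.086e16     # metres
M_SUN = 1.989e30                # kg
RAD_TO_MAS = math.degrees(1) * 3600e3  # radians -> milliarcseconds

def theta_E_mas(M_msun, D_L_pc, D_S_pc):
    """Angular Einstein radius in mas, from theta_E^2 = kappa * M * pi_rel,
    with kappa = 4G/(c^2 AU) and pi_rel = AU * (1/D_L - 1/D_S)."""
    pi_rel = AU * (1.0 / (D_L_pc * PC) - 1.0 / (D_S_pc * PC))   # radians
    theta_E2 = 4 * G * (M_msun * M_SUN) / c**2 * pi_rel / AU    # radians^2
    return math.sqrt(theta_E2) * RAD_TO_MAS

print(4 * G * M_SUN / (c**2 * AU) * RAD_TO_MAS)  # kappa ~ 8.14 mas per M_sun
print(theta_E_mas(0.5, 4000.0, 8000.0))          # ~0.71 mas: 0.5 M_sun lens halfway to an 8 kpc source
```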
Happy new year, and best wishes to those close and \(\varepsilon\)-far! December concluded the year with 4 new preprints, spanning quite a lot of the property testing landscape:

Testing Stability Properties in Graphical Hedonic Games, by Hendrik Fichtenberger and Anja Rey (arXiv). The authors of this paper consider the problem of deciding whether a given hedonic game possesses some “coalition stability” in a property testing framework. Namely, recall that a hedonic game is a game where players (nodes) form coalitions (subsets of nodes) based on their individual preferences and local information about the considered coalition, thus resulting in a partition of the original graph. Several notions exist to evaluate how good such a partition is, based on how “stable” the given coalitions are. This work focuses on hedonic games corresponding to bounded-degree graphs, introducing and studying the property testing question of deciding (for several such notions of stability) whether a given game admits a stable coalition structure, or is far from admitting such a partition.

Spectral methods for testing cluster structure of graphs, by Sandeep Silwal and Jonathan Tidor (arXiv). Staying among bounded-degree graphs, we turn to testing clusterability of graphs, the focus of this paper. Given an \(n\)-node graph \(G\) of degree at most \(d\) and parameters \(k, \phi\), say that \(G\) is \((k, \phi)\)-clusterable if it can be partitioned into \(k\) parts of inner conductance at least \(\phi\). Analyzing properties of a random walk on \(G\), this work gives a bicriterion guarantee (\((k, \phi)\)-clusterable vs. \(\varepsilon\)-far from \((k, \phi^\ast)\)-clusterable, where \(\phi^\ast \approx \varepsilon^2\phi^2\)) for the case \(k=2\), improving on previous work by Czumaj, Peng, and Sohler’15.

We then switch from graphs to probability distributions with our third paper: Inference under Information Constraints I: Lower Bounds from Chi-Square Contraction, by Jayadev Acharya, Clément Canonne, and Himanshu Tyagi (arXiv). (Disclaimer: I’m one of the authors.) In this paper, the first of an announced series of three, the authors generalize the settings of two previous works we covered here and there to consider the general question of distribution testing and learning when the \(n\) i.i.d. samples are distributed among \(n\) players, each of which can only communicate its sample to the central algorithm while respecting some pre-specified local information constraint (e.g., privacy, or noise, or communication budget). This paper develops a general lower bound framework to study such questions, with a systematic focus on the power of public vs. private randomness between the \(n\) parties, and instantiates it to obtain tight bounds in the aforementioned locally private and communication-limited settings. (Spoiler: public randomness strictly helps, but not always.)

Finally, after games, graphs, and distributions, our fourth paper of the month concerns testing of functions: Partial Function Extension with Applications to Learning and Property Testing, by Umang Bhaskar and Gunjan Kumar (arXiv). This work focuses on a problem quite related to property testing, that of partial function extension: given as input \(n\) point/value pairs from a purported function on a domain \(X\) of size \(|X| > n\), one is tasked with deciding whether there exists (resp., with finding) a function \(f\) on \(X\) consistent with these \(n\) values which further satisfies a specific property, such as linearity or convexity.
This is indeed very reminiscent of property testing, where one gets to query these \(n\) points and must decide (approximate) consistency with such a well-behaved function. Here, the authors study the computational hardness of this partial function extension problem, specifically for properties such as subadditivity and XOS (a sub-property of subadditivity); and as corollaries obtain new property testers for the classes of subadditive and XOS functions. As usual, if you know of some work we missed from last December, let us know in the comments!
Boundedness in logistic Keller-Segel models with nonlinear diffusion and sensitivity functions

Department of Mathematics, Southwestern University of Finance and Economics, 555 Liutai Ave, Wenjiang, Chengdu, Sichuan 611130, China

The paper concerns the quasilinear Keller-Segel system with logistic source $$\left\{\begin{array}{ll}u_t=\nabla \cdot (D(u) \nabla u-S(u) \nabla v)+u(1-u^\gamma), &x \in \Omega,\ t>0, \\ v_t=\Delta v-v+u, &x \in \Omega,\ t>0, \\ \frac{\partial u}{\partial \nu}=\frac{\partial v}{\partial \nu}=0, &x\in\partial \Omega,\ t>0,\end{array}\right.$$ posed over a bounded domain $\Omega \subset \mathbb R^N$, $N\ge 2$, where the diffusion $D(u)$ and sensitivity $S(u)$ satisfy $D(0)>0$, $D(u)\ge K_1u^{m_1}$ and $S(u)\le K_2u^{m_2}$ for all $u\ge 0$, with $K_i\in\mathbb R^+$ and $m_i\in\mathbb R$, $i=1, 2$. Boundedness of solutions is established for ranges of the exponent pair $(m_1, m_2)$: for $N\ge 3$, the conditions are of the form $m_1>\gamma-\frac{2}{N}$ when $\gamma\ge 1$ and $m_1>\gamma-\frac{4}{N+2}$ when $\gamma\in(0, 1)$, with $\frac{2}{N}$ playing the role of the critical exponent for $\gamma\in[1, \infty)$.

Mathematics Subject Classification: Primary: 92C17, 35K55; Secondary: 35K51.

Citation: Qi Wang, Jingyue Yang, Feng Yu. Boundedness in logistic Keller-Segel models with nonlinear diffusion and sensitivity functions. Discrete & Continuous Dynamical Systems - A, 2017, 37 (9): 5021-5036. doi: 10.3934/dcds.2017216
Suppose there is a (pseudo)scalar field $\hat{\theta}$ with non-zero VEV $\theta$, which effectively emerges at an energy scale $\Lambda$ (for example, the mass of some fermion, the scale of SSB, and so on). An example is an axion-like field $\theta$, which enters the effective action as $$ \int \frac{\theta}{f_{\gamma}}F\wedge F, \quad f_{\gamma} \simeq \alpha_{\text{EM}}^{-1}\Lambda $$ (with $F$ denoting the gauge field strength). Can the quantity $$ \epsilon \equiv \frac{\theta}{\Lambda} $$ be many times larger than one? I.e., is there some condition that forbids the $\epsilon \gg 1$ regime? Or is this just a matter of taste, with no restrictions on the value of $\epsilon$?
Inequalities come with the signs < (less than), > (greater than), \(\leq\) (less than or equal to), and \(\geq\) (greater than or equal to). In the GMAT, such questions are often made more complicated by combining them with the modulus function.

Steps to solve an inequality question:
1. Make an equation that corresponds to the inequality.
2. Solve this equation.
3. Test a number greater than and a number less than the solution, and see which satisfies the inequality (you may use a number line to indicate these).
4. Write the solution of the inequality.

When a modulus is involved, the inequality opens up in a different manner. This is shown by the following illustrations:
If |x + 4| < 9, then -9 < x + 4 < 9.
If |x - 5| > 10, then x - 5 > 10 or x - 5 < -10.

Let's look at some problems.

Find the range of values of x for which \((x+4)(x+9) < 0\). Once the polynomial is expressed as a product of single-degree factors like this, we have to find the critical points: the points where each single-degree factor turns to zero. So the critical points are -4 and -9, and we check the behaviour of the expression around them. Please note that the sign of the polynomial remains constant within a region, whichever number you take from that region. In this case the number line is divided into 3 regions around the critical points, i.e. \((-\infty , -9), [-9, -4) \; \text{and} \; [-4, \infty ).\)
From \((-\infty , -9)\) you can take -10; the expression is +ve.
From \([-9, -4)\) you can take -6; the expression is -ve.
From \( [-4, \infty ) \) you can take 0; the expression is +ve.
Therefore the desired range appears to be \([-9, -4)\), but please do not forget to check the critical points. Here -9 is included in this range, yet the expression yields exactly 0 there, so it has to be excluded. Therefore the answer is \((-9, -4)\).

Find the number of integers satisfying the inequality |x-5| + |x-10| < 30. Again, the critical points (5 and 10) divide the number line into 3 regions. We count the integers in each region.
Region 1: \((-\infty, 5]\). The moduli open as -(x - 5) - (x - 10) < 30, i.e. -2x + 15 < 30, so x > -7.5. Hence in this region x can take the values -7, -6, ..., 4, 5: that is 13 integers.
Region 2: (5, 10]. The moduli open as (x - 5) - (x - 10) < 30, i.e. 5 < 30, which always holds. Hence every value in this region satisfies the inequality: x can be 6, 7, 8, 9, 10, that is 5 integers.
Region 3: \((10, \infty)\). The moduli open as (x - 5) + (x - 10) < 30, i.e. 2x < 45, so x < 22.5. Hence in this region x can take the values 11, 12, ..., 21, 22: that is 12 integers.
Therefore the total number of integers is 13 + 5 + 12 = 30.
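A quick brute-force check of the second worked example (plain Python, nothing assumed beyond the standard library) confirms the count of 30:

```python
# The solution interval of |x - 5| + |x - 10| < 30 is (-7.5, 22.5);
# counting integers over a comfortably wide range confirms the answer.
count = sum(1 for x in range(-1000, 1001) if abs(x - 5) + abs(x - 10) < 30)
print(count)  # 30
```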
In Quantum Electrodynamics by Landau and Lifshitz there is the following: The correspondence between the spinor $\zeta^{\alpha \dot{\beta}}$ and the 4-vector is a particular case of a general rule: any symmetrical spinor of rank $(k,k)$ is equivalent to a symmetrical 4-tensor of rank $k$ which is irreducible (i.e. which gives zero upon contraction with respect to any pair of indices). L&L also write this out for a 4-vector: $$ \zeta^{1\dot{1}}=\zeta_{2\dot{2}}=a^3+a^0 ,\quad \zeta^{2\dot{2}}=\zeta_{1\dot{1}}=a^0-a^3, $$ $$ \zeta^{1\dot{2}}=-\zeta_{2\dot{1}}=a^1-ia^2 ,\quad \zeta^{2\dot{1}}=-\zeta_{1\dot{2}}=a^1+ia^2. $$ Surely there must be an established method to do this in general, just as the quote says. I would like to know this method, so if someone would be kind enough to show me or refer me to a reference I would be grateful (e.g. suppose I would like to know the components of $\zeta^{\alpha\beta\dot{\gamma}\dot{\delta}}$ in terms of the symmetric, traceless rank-2 4-tensor). Thanks,
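For what it's worth, the general dictionary is usually written with $\sigma$-matrices; schematically (L&L's normalization and index-raising conventions need to be layered on top, so take this as a sketch): $$ \zeta^{\alpha_1\cdots\alpha_k\,\dot\beta_1\cdots\dot\beta_k} \;=\; a^{\mu_1\cdots\mu_k}\,(\sigma_{\mu_1})^{\alpha_1\dot\beta_1}\cdots(\sigma_{\mu_k})^{\alpha_k\dot\beta_k}, \qquad \sigma_\mu = (\mathbb 1, \vec\sigma), $$ with symmetrization over the dotted and over the undotted indices understood; the tracelessness of the symmetric tensor $a^{\mu_1\cdots\mu_k}$ is what guarantees that no information is lost in the symmetrization. The $k=1$ case reproduces the components quoted above, since $$ a^\mu(\sigma_\mu)^{\alpha\dot\beta} = \begin{pmatrix} a^0+a^3 & a^1-ia^2 \\ a^1+ia^2 & a^0-a^3 \end{pmatrix}. $$ In particular, for the rank-$(2,2)$ example in the question one would take $\zeta^{\alpha\beta\dot\gamma\dot\delta} = t^{\mu\nu}(\sigma_\mu)^{\alpha\dot\gamma}(\sigma_\nu)^{\beta\dot\delta}$ with $t^{\mu\nu}$ symmetric and traceless.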
Principle 1: Whenever you measure any physical quantity \(L\), there is a Hermitian linear operator \(\hat{L}\) (called an observable) associated with that measurement.

Principle 2: Any arbitrary state of a quantum system is represented by a ket vector \(|\psi⟩\).

Principle 3: The possible measurable values of any quantity are the eigenvalues \(λ_L(=L)\) of \(\hat{L}\).

Principle 4: According to the Copenhagen interpretation of quantum mechanics, after measuring \(L\), the possible states a quantum system can end up in are the eigenvectors \(|λ_L⟩(=|L⟩\)) of \(\hat{L}\).

Principle 5: For any two states \(|\psi⟩\) and \(|\phi⟩\), the probability amplitude of the state changing from \(|\psi⟩\) to \(|\phi⟩\) is given by $$\psi=⟨\phi|\psi⟩.\tag{1}$$ The probability \(P\) of the state changing from \(|\psi⟩\) to \(|\phi⟩\) can be calculated from the probability amplitude using the relationship $$P=\psi^*\psi=⟨\phi|\psi⟩^*⟨\phi|\psi⟩=|\psi|^2.\tag{2}$$

From a purely mathematical point of view, any ket \(|\psi⟩\) in Hilbert space can be represented as a linear combination of basis vectors: $$|\psi⟩=\sum_i{\psi_i|i⟩}.\tag{3}$$ The kets \(|1⟩\text{, ... ,}|n⟩\) represent any basis vectors and their coefficients \(\psi_1\text{, ... ,}\psi_n\) are, in general, complex numbers. We shall prove in the following sections that we can always find eigenvectors \(|L_1⟩\text{, ... ,}|L_n⟩\) of any observable \(\hat{L}\) that form a complete set of orthonormal basis vectors; therefore any state vector \(|\psi⟩\) can be represented as $$|\psi⟩=\sum_i{\psi_i|L_i⟩}.\tag{4}$$ We'll also prove that the numbers \(\psi_i\) are given by $$\psi_i=⟨L_i|\psi⟩\tag{5}$$ and represent the probability amplitudes of the quantum system changing from the state \(|\psi⟩\) to one of the eigenstates \(|L_i⟩\) after a measurement of \(L\) is performed. The collection of all the probability amplitudes \(\psi_i\) is called the wavefunction.

When the wavefunction \(\psi(L,t)\) associated with the state \(|\psi⟩\) becomes a continuous function of \(L\) (that is, the set of possible values of \(L\) becomes continuous and the number of probability amplitudes becomes infinite), we define \(|\psi|^2\) as the probability density. One example where \(\psi\) becomes continuous is for a particle which can have an infinite number of possible \(x\) positions. Then \(\psi\) becomes a continuous function of \(x\) (and, in general, also of time \(t\)). Since \(|\psi(x,t)|^2\) is the probability density, the product \(|\psi(x,t)|^2dx\) is the probability of finding the particle within \(dx\) of the position \(x\) at the time \(t\). The probability of a measurement at an exact location \(x\) is in general zero. A far more useful question to ask is: what is the probability of finding the particle within the range of x-values from \(x\) to \(x+Δx\)? This is given by the following equation: $$P(x≤X≤x+Δx,t)=\int_{x}^{x+Δx} |\psi(x,t)|^2dx.\tag{6}$$ According to the normalization condition, the total probability over all possible values of \(x\) must satisfy $$\int_{-∞}^{∞} |\psi(x,t)|^2dx=1.\tag{7}$$ If \(\psi(L,t)\) is continuous, then the inner product \(⟨\phi|\psi⟩\) is defined as $$⟨\phi|\psi⟩=\int_{-∞}^{∞} \phi^*{\psi}\,dL.\tag{8}$$
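A quick numerical illustration of the interval probability (6) and the normalization condition (7), for a Gaussian position-space wavefunction (the specific \(\psi\) and the numbers are my own example):

```python
import numpy as np

# Normalized Gaussian wavefunction psi(x) = (2 pi sigma^2)^(-1/4) exp(-x^2 / (4 sigma^2)),
# whose |psi|^2 is the standard normal density for sigma = 1.
sigma = 1.0
x = np.linspace(-10, 10, 20001)
psi = (2 * np.pi * sigma**2) ** -0.25 * np.exp(-x**2 / (4 * sigma**2))
density = np.abs(psi) ** 2

# Eq. (7): the total probability integrates to 1.
print(np.trapz(density, x))              # ~ 1.0

# Eq. (6): the probability of finding the particle in [0, 1].
mask = (x >= 0) & (x <= 1)
print(np.trapz(density[mask], x[mask]))  # ~ 0.341
```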
It is a theorem in elementary number theory that if $p$ is a prime and congruent to 1 mod 4, then it is the sum of two squares. Apparently there is a trick involving arithmetic in the Gaussian integers that lets you prove this quickly. Can anyone explain it?

Let $p$ be a prime congruent to 1 mod 4. Then to write $p = x^2 + y^2$ for $x,y$ integers is the same as writing $p = (x+iy)(x-iy) = N(x+iy)$ for $N$ the norm. It is well-known that the ring of Gaussian integers $\mathbb{Z}[i]$ is a principal ideal domain, even a euclidean domain. Now I claim that $p$ is not prime in $\mathbb{Z}[i]$. To determine how a prime $p$ of $\mathbb{Z}$ splits in $\mathbb{Z}[i]$ is equivalent to determining how the polynomial $X^2+1$ splits modulo $p$. First off, $-1$ is a quadratic residue modulo $p$ because $p \equiv 1 \mod 4$. Consequently, there is $t \in \mathbb{Z}$ with $t^2 \equiv -1 \mod p$, so $X^2+1$ splits modulo $p$, and $p$ does not remain prime in $\mathbb{Z}[i]$. (Another way of seeing this is to note that if $p$ remained prime, then we'd have $p \mid (t+i)(t-i)$, which would mean that $p \mid t+i$ or $p \mid t-i$; neither holds, since neither $\frac{t \pm i}{p}$ is a Gaussian integer.) Anyway, as a result there is a non-unit $x+iy$ of $\mathbb{Z}[i]$ that properly divides $p$. This means that the norms properly divide as well. In particular, $N(x+iy) = x^2+y^2$ properly divides $p^2$, so it is $p$ or $1$. It cannot be the latter, since otherwise $x+iy$ would be a unit. So $x^2+y^2 = p$.

Perhaps my favorite argument (other than any arguably "correct" arguments, such as the one Akhil has given, or arguments starting from the fact that $x^2 + y^2$ is the unique binary quadratic form of discriminant $-4$ up to equivalence) uses continued fractions. Suppose that $u^2 \equiv -1 \pmod{p}$, and consider the continued fraction expansion of the rational number $u/p$. Let $r/s$ be the last convergent to $u/p$ with the property that $s < \sqrt{p}$. Then, setting $x=s$ and $y = rp-us$, one has $x^2 + y^2 = p$. Here's the argument: let $r'/s'$ be the convergent following $r/s$. Then the basic theory of continued fractions gives the estimate $|r/s - u/p| < 1/ss'$, and the right-hand side is less than $1/s\sqrt{p}$ by hypothesis. Clearing denominators gives $|y| < \sqrt{p}$, so that $0 < x^2 + y^2 < 2p$. On the other hand, $x^2 + y^2$ is checked to be divisible by $p$ (by the choice of $u$), so it must be equal to $p$.

Here is another proof without complex numbers. We start by noting that there exists $z \in \mathbb{N}$ such that $z^2 + 1 \equiv 0 \pmod p$; this is proven in the same way as in Akhil Mathew's answer. Suppose now that we have $a^2 + b^2 = pm$ for some $m > 1$ (initially $a = z$, $b = 1$). Take $x$ and $y$ such that $x \equiv a \pmod m$ and $y \equiv b \pmod m$ and $x, y \in [-m/2, m/2)$. Consider $u = ax + by$ and $v = ay - bx$. Then $u^2 + v^2 = (a^2 + b^2)(x^2 + y^2)$. Moreover, $u$ and $v$ are multiples of $m$. Hence $(u/m)^2 + (v/m)^2 = p (x^2 + y^2)/m$. Here $(x^2 + y^2)/m$ is an integer because of the definition of $x$ and $y$ and because $a^2 + b^2 = pm$; it is also at most $m/2$. Now we replace $a$ by $u/m$ and $b$ by $v/m$ and continue this process until we get $m=1$. Notice that this is quite an efficient way to find a representation of $p$ as a sum of two squares: it takes $O(\log p)$ steps, provided we have found $z$ such that $z^2 + 1$ is a multiple of $p$.

There is an amazing proof of this due to Don Zagier: one-sentence proof.

Taken from I. N. Herstein. There are 2 results which I am going to use.
Let $p$ be a prime integer and suppose that for some integer $c$ relatively prime to $p$ we can find integers $x$ and $y$ such that $x^{2}+y^{2}=cp$. Then $p$ can be written as the sum of 2 squares.

If $p$ is a prime of the form $4n+1$, then we can solve the congruence $x^{2} \equiv -1 \pmod p$.

Now the main result. If $p$ is a prime of the form $4n+1$ then $p$ is the sum of 2 squares.

Proof. By 2 there exists an $x$ such that $x^{2} \equiv -1 \pmod p$. So $x$ can be chosen such that $0 \leq x \leq p-1$. We can restrict the size of $x$ even further, namely to satisfy $|x| \leq \frac{p}{2}$. For if $x > p/2$, then $y=p-x$ satisfies $y^{2} \equiv -1 \pmod p$ but $|y| \leq p/2$. Thus we may assume that we have an integer $x$ such that $|x| \leq p/2$ and $x^{2}+1$ is a multiple of $p$, say $cp$. Now $cp=x^{2}+1 \leq p^{2}/4 +1 < p^{2}$, hence $c < p$ and hence $(c,p)=1$. Invoking (1) we have $p=a^{2}+b^{2}$.

Let $$\Psi(n,m,k)=100n^2+20nm+m^2+4k^2$$ be a prime number for some $(n,m,k)$. Then this combination of $(n,m,k)$ must not share any factors, except that $(n,m)$ can share factors as long as $$\gcd(n,m,k) = 1.$$ So: $$p = (10n+m)^2+(2k)^2$$ For the expression above to be a prime number, the combination $(x,y)$ must be either even-odd or odd-even. But it already is for the $(n,m,k)$ we have picked. If we pick $n$ as some natural number, $k$ as some positive natural number and $m = 1,3,5,7,9$, we fit the definition given above. Now we can expand and see that: $$p = 100n^2+20nm+m^2+4k^2$$ This means $m^2\equiv1 \pmod 4$, since all other terms in the equation are already congruent to $0 \pmod 4$. But, as you may have noticed, all of the $m$'s we've picked happen to be congruent to $1 \pmod 4$ when squared. This means that, whether or not $p$ is prime, this equation will always produce a result of the form $p=4a+1$. Now this result may not give prime numbers all the time, obviously. But our choice of $(n,m,k)$ was already for some, not all. Then one can state that: if our choice of $(n,m,k)$ produces a prime number, then it is a solution. Except for the case $$1^2+1^2 = 2,$$ which is the only case that will refute this, so it can be ignored. Thus for $$p = x^2+y^2,$$ if $p$ is prime then $$p\equiv 1 \pmod 4.$$

EDIT: The proof above isn't fully satisfactory since I start with $p=x^2+y^2$. If we start backwards by assuming $$p \equiv 1 \pmod 4,$$ then $$p = 4n+1.$$ Now consider the following: $$p=4n+1=100a^2+20ab+b^2+4c^2$$ If $10a+b$ is odd then we don't have anything to worry about. So we have a look at $b^2$, and again $b = 1,3,5,7,9$, so: $$b^2 \equiv 1 \pmod 4,$$ or: $$b^2 = 4m+1.$$ Then we have: $$p=4m+100a^2+20a(4m+1)+4c^2+1$$ So we have written $p$ as a sum of two squares: $$p=(10a+4m+1)^2+(2c)^2$$ since one can say $x= 10a+4m+1$ and $y = 2c$. So now I have shown that the statement holds either way around. In fact this is related to Zagier's proof, since if we let $y = c$ instead: $$p=x^2+(2y)^2$$ Thus now we have shown that if $$p \equiv 1 \pmod 4$$ then $$p = x^2+y^2.$$ So now one can say that a prime number can be written as a sum of two squares iff $p \equiv 1 \pmod 4$. Also we know this is the only way, besides the case $1^2+1^2=2$, to write primes as a sum of two squares, because of the first argument where I said the combination of $(x,y)$ must be even-odd or odd-even. And we know we can write ANY prime number of the form $4a+1$ as a sum of two squares because $\Psi(n,m,k)$, for the given limitations on $(n,m,k)$, can produce any odd natural number of the form $4a+1$.
And the set of all natural numbers of the form $4a+1$ includes the set of primes that can be written as $4a+1$. Please let me know if there are any problems with the proof.

I tried this via Wilson's theorem. It seems to work, but I would be grateful if someone could verify it or offer improvements. Suppose a prime $p \equiv 1 \pmod 4$. By Wilson's theorem, $(p-1)! \equiv -1 \pmod p$. But $p-r \equiv -r \pmod p$, so $$ -1 \equiv 1\cdot(p-1)\cdot 2\cdot(p-2)\cdots \frac{p-1}{2}\cdot\frac{p+1}{2} \equiv 1(-1)\,2(-2)\cdots \frac{p-1}{2}\Big(\!-\frac{p-1}{2}\Big) = (-1)^{\frac{p-1}{2}}\left(\Big(\frac{p-1}{2}\Big)!\right)^{2} \pmod p. $$ Since $(-1)^{\frac{p-1}{2}} = (-1)^{\frac{4k+1-1}{2}} = 1$, we get $$ -1 \equiv \left(\Big(\frac{p-1}{2}\Big)!\right)^{2} \pmod p. $$ In other words, $np = 1 + \left(\Big(\frac{p-1}{2}\Big)!\right)^{2}$ for some integer $n$ iff $p \equiv 1 \pmod 4$. Any integer satisfies $x^{2} \equiv 0,1 \pmod 4$, so we can't find $x^2, y^2$ such that $x^2+y^2 \equiv 3 \pmod 4$. As a prime is odd, it is congruent to either 1 or 3 mod 4, so it can only be written as a sum of 2 squares if it is congruent to 1 mod 4.

Let $p \equiv 1 \pmod 4$ and $a = \left (\frac {p-1} {2} \right )!$. Observe that $a^2 \equiv -1 \pmod p$ by Wilson's theorem. Consider the ideal $I = \langle p,i-a \rangle$ of $\Bbb Z[i]$. Observe that $I \cap \Bbb Z = p \Bbb Z$. Let $R = \Bbb Z[i]/I$. Then $R$ is a commutative ring with identity. Consider the map $f : \Bbb Z \longrightarrow R$ such that $1 \mapsto 1_R$, where $1_R$ is the multiplicative identity in $R$, extended to a ring homomorphism. Observe that $f$ is surjective and $\ker (f) = p \Bbb Z$. So by the first isomorphism theorem we have $\Bbb Z/p \Bbb Z \cong R$. Since $\Bbb Z[i]$ is a PID, there exists $x+iy \in \Bbb Z[i]$ such that $I = \langle x+iy \rangle$. Observe that $|R| = |\Bbb Z[i]/\langle x+iy \rangle| = x^2+y^2$. So we have $p = |\Bbb Z/p\Bbb Z| = x^2 + y^2,$ as claimed.
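The descent described in one of the answers above is easy to run in practice. Here is a minimal Python sketch (the function names are mine): it finds $z$ with $z^2 \equiv -1 \pmod p$ using a quadratic non-residue, then shrinks $a^2 + b^2 = mp$ exactly as in that argument.

```python
def sqrt_minus_one(p):
    """Find z with z^2 = -1 (mod p) for a prime p = 1 (mod 4):
    a^((p-1)/4) works whenever a is a quadratic non-residue."""
    for a in range(2, p):
        z = pow(a, (p - 1) // 4, p)
        if (z * z) % p == p - 1:
            return z
    raise ValueError("p must be a prime congruent to 1 mod 4")

def two_squares(p):
    """Fermat's descent: starting from a^2 + b^2 = mp with m < p,
    shrink m until a^2 + b^2 = p."""
    a, b = sqrt_minus_one(p), 1
    m = (a * a + b * b) // p
    while m > 1:
        # reduce a, b modulo m into the interval [-m/2, m/2)
        x = a % m if a % m < m / 2 else a % m - m
        y = b % m if b % m < m / 2 else b % m - m
        a, b = (a * x + b * y) // m, (a * y - b * x) // m
        m = (a * a + b * b) // p
    return abs(a), abs(b)

x, y = two_squares(10009)
assert x * x + y * y == 10009   # e.g. 10009 = 100^2 + 3^2
```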
We’re seeing lots of papers in the summer. I guess the heat and sun (and more time?) are bringing out the good stuff. Distribution testing, from testing to streaming, hypergraph testing, and a bunch of papers on graph testing. And just for the record: in what follows, \(n\) is the support size in distribution testing, it’s the number of vertices in graph testing, and it’s the length when the input is a string. And \(m\) is the number of edges when the input is a graph. Amen. (Update: added a paper on testing Dyck languages. 11 papers this month, I think that’s a PTReview record.)

Which Distribution Distances are Sublinearly Testable?, by Constantinos Daskalakis, Gautam Kamath, and John Wright (Arxiv). Distribution testing is seeing much progress these days. Many questions in this area can be formulated as: given a known distribution \(q\), how many samples from an unknown distribution \(p\) are required to determine the distance between \(p\) and \(q\)? For the “identity testing” setting, we are given a discrete distribution \(q\), two distance measures \(d_1, d_2\), and proximity parameters \(\varepsilon_1, \varepsilon_2\). How many samples from the unknown distribution \(p\) are required to distinguish \(d_1(p,q) \leq \varepsilon_1\) from \(d_2(p,q) \geq \varepsilon_2\)? Observe the asymmetric nature of the question. The choices for the distances are most of the standard ones: TV, KL-divergence, Hellinger, \(\chi^2\), and \(\ell_2\). And the paper basically completes the entire picture by giving matching upper and lower bounds (in terms of the support size \(n\)) and showing certain settings to be untestable with sublinear sample complexity. The results also cover the “equivalence/closeness testing” case, where \(q\) is also unknown. There are many interesting aspects of this collection of results. For example, when \(d_1\) is KL-divergence instead of \(\chi^2\), the complexity can jump from \(\Theta(\sqrt{n})\) to \(\Theta(n/\log n)\).

We have two independent papers with similar results (and titles!). Differentially Private Identity and Closeness Testing of Discrete Distributions, by Maryam Aliakbarpour, Ilias Diakonikolas, and Ronitt Rubinfeld (arXiv). Differentially Private Testing of Identity and Closeness of Discrete Distributions, by Jayadev Acharya, Ziteng Sun, and Huanyu Zhang (arXiv). Both these papers study the problem of identity testing over TV-distance, in the differentially private setting. In essence, the output of the distribution testing algorithm should not allow one to distinguish between two “neighboring” input sample sets. The main result, achieved in both papers, is such an algorithm for identity (and closeness) testing whose sample complexity is quite close to that of the best non-differentially private algorithms. At a high level, the approaches used are also similar, specifically using a result of Goldreich on black-box reductions from identity to uniformity testing, and designing a private version of Paninski’s uniformity tester.

Estimating parameters associated with monotone properties, by Carlos Hoppen, Yoshiharu Kohayakawa, Richard Lang, Hanno Lefmann, and Henrique Stagni (arXiv). The study of monotone graph properties has a rich history in graph property testing. This paper focuses on constant-query estimation of monotone graph parameters. A monotone graph property is one that is closed under subgraphs, and a monotone parameter is one that is non-increasing for subgraphs.
Given a family of (constant-size) graphs \(\mathcal{F}\), a graph is \(\mathcal{F}\)-free if it contains no subgraph in \(\mathcal{F}\). For any graph \(G\), define the parameter \(z_\mathcal{F}\) to be (essentially) the log of the number of \(\mathcal{F}\)-free spanning subgraphs of \(G\). This parameter has been studied in the context of subgraph counting problems. This paper studies the complexity of estimating \(z_\mathcal{F}\) up to additive error \(\varepsilon\). Most results of this flavor typically involve complexities that are towers in \(1/\varepsilon\), but this result builds connections with the breakthrough subgraph removal lemma of Fox to yield complexities that are towers in \(\log(1/\varepsilon)\).

Testable Bounded Degree Graph Properties Are Random Order Streamable, by Morteza Monemizadeh, S. Muthukrishnan, Pan Peng, and Christian Sohler (arXiv). This paper proves a nice connection between sublinear and streaming algorithms. A number of properties, such as connectivity, subgraph-freeness, and minor-closed properties, are constant-time testable in the bounded-degree graph setting. The main result here shows that any such property can also be tested using constant space (in terms of words) when the graph arrives as a random-order stream. In contrast, existing work by Huang and Peng proves \(\Omega(n)\) space lower bounds for some of these properties under adversarial edge order. The main technical ingredient is an algorithm that maintains distributions of constant-sized rooted subgraphs in the random-order streaming setting.

On Testing Minor-Freeness in Bounded Degree Graphs With One-Sided Error, by Hendrik Fichtenberger, Reut Levi, Yadu Vasudev, and Maximilian Wötzel (arXiv). A major open problem in testing of bounded-degree graphs is the one-sided testability of minor-closed properties. Benjamini, Schramm, and Shapira first proved that minor-closed properties are two-sided testable, and conjectured that the one-sided complexity is \(O(\sqrt{n})\). Czumaj et al. proved this for the specific property of being \(C_k\)-minor free (where \(C_k\) is the \(k\)-cycle). This paper gives an \(O(n^{2/3})\) algorithm for testing whether a graph is \(k \times 2\)-grid minor-free. The latter class includes cycles with non-intersecting chords and cactus graphs. A common approach in one-sided graph testing is to use the existence of a “nice” decomposition of the graph into small well-behaved pieces, and to argue that the tester only has to look within these pieces. This result employs a recent partitioning scheme of Lenzen and Levi into pieces of size \(O(n^{1/3})\). Unfortunately, this partitioning may remove many edges, and the tester is forced to find minors among these cut edges. The combinatorics of the \(k \times 2\)-grid minor ensures that any such large cut always contains such a minor, and it can be detected efficiently.

Testing bounded arboricity, by Talya Eden, Reut Levi, and Dana Ron (arXiv). The arboricity of a graph is the minimum number of spanning forests that the graph can be decomposed into. It is a 2-approximation of the maximum average degree over subgraphs, and thus is a measure of whether “local” density exists. For example, the arboricity of any graph in a minor-closed family is constant. The main result in this paper is a property tester for arboricity.
Formally, the paper provides an algorithm that accepts if the graph is \(\varepsilon\)-close to having arboricity \(\alpha\), and rejects if the graph is \(c\varepsilon\)-far from having arboricity \(3\alpha\) (for a large constant \(c\)). Ignoring dependencies on \(\varepsilon\), the running time is \(O(n/\sqrt{m} + n\alpha/m)\), which is proven to be optimal. The result builds on distributed graph algorithms for forest decompositions. The constant of \(3\) can be brought down arbitrarily close to \(2\), and it is an intriguing question whether it can be reduced further. On Approximating the Number of k-cliques in Sublinear Time, by Talya Eden, Dana Ron, and C. Seshadhri (arXiv). This paper studies the classic problem of approximating the number of \(k\)-cliques of a graph, in the adjacency list representation. The main result is an algorithm that \((1+\varepsilon)\)-approximates \(C_k\) (the number of \(k\)-cliques) in time \(O(n/C_k^{1/k} + m^{k/2}/C_k)\). (This expression ignores polylogs and the dependence on \(\varepsilon\).) Somewhat surprisingly, this strange expression is the optimal complexity. It generalizes previous results of Goldreich-Ron (average degree/edge counting, \(k=2\)) and of Eden et al (triangle counting, \(k=3\)). The algorithm is quite intricate, and is achieved by building on various sampling techniques in these results. A characterization of testable hypergraph properties, by Felix Joos, Jaehoon Kim, Daniela Kühn, and Deryk Osthus (arXiv). A fundamental result of Alon, Fischer, Newman, and Shapira characterized all constant-time testable (dense) graph properties, through a connection to the regularity lemma. This paper essentially extends that characterization to \(k\)-uniform hypergraph properties. The main result shows that testability of a property \(\bf{P}\) is equivalent to \(\bf{P}\) being “regular-reducible”. This is a rather involved definition that captures properties where hypergraph regularity can be used for testing. Furthermore, this can also be used to prove that any testable property also has a constant-time distance estimation algorithm. Distributed Testing of Conductance, by Hendrik Fichtenberger and Yadu Vasudev (arXiv). There has been recent interest in property testing in the distributed graph framework. The problem studied here is graph conductance. Specifically, we wish to accept a graph with conductance \(\Phi\) and reject a graph that is far from having conductance \(\Phi^2\). The setting is the CONGEST framework, where each vertex has a processor and can only communicate \(O(\log n)\) amount of data to its neighbors in a single round. The main result gives an algorithm with \(O(\log n)\) rounds, which is shown to be optimal. Standard testing algorithms for conductance basically perform a collection of random walks from some randomly chosen seed vertices. An important insight in this paper is that testing algorithms for conductance can be simulated in the CONGEST model without having to maintain all the random walks explicitly. It suffices to transfer a small amount of information that tracks overall statistics of the walks, which suffices for the testing problem. Improved bounds for testing Dyck languages, by Eldar Fischer, Frédéric Magniez, Tatiana Starikovskaya (arXiv). Testing membership in languages is a classic problem in property testing. Fix a language \(L\) over an alphabet \(\Sigma\). Given a string \(x\) of length \(n\), the tester should accept if \(x \in L\) and reject if \(x\) is far (in terms of Hamming distance) from \(L\).
This paper focuses on the simple, yet fundamental, setting of \(L\) being a Dyck language, the set of strings of perfectly balanced parentheses. Let \(D_k\) be the Dyck language using \(k\) types of parentheses. The classic paper of Alon et al on regular language testing also gives a constant time tester for \(k=1\), and subsequent work of Parnas, Ron, and Rubinfeld proves, for \(k \geq 2\), testing complexity bounds of \(O(n^{2/3})\) and \(\Omega(n^{1/11})\). This paper gives significant improvements in testing complexity: an upper bound of \(O(n^{2/5+\delta})\) (for any \(\delta > 0\)) and a lower bound of \(\Omega(n^{1/5})\). The lower bound proof introduces new techniques for proving similarity of the transcript distributions of a deterministic algorithm over two distributions of inputs (the key step in any application of Yao’s minimax lemma). The proof constructs a third transcript distribution using a special “wild” transcript that represents the (low probability) event of the algorithm “learning” too much. A careful construction of this distribution can be used to upper bound the distance between the original transcript distributions.
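(For contrast with these sublinear bounds: exact membership in \(D_k\) is a single linear-time scan with a stack, so the game is to beat the following loop while reading only \(n^{2/5+\delta}\) characters. A minimal sketch, with the \(k\) parenthesis types encoded as hypothetical character pairs.)

```python
def in_dyck(s, pairs=("()", "[]", "{}")):
    """Exact linear-time membership test for the Dyck language D_k.

    `pairs` encodes the k parenthesis types; the default is k = 3.
    """
    expected_closer = {p[0]: p[1] for p in pairs}
    closers = {p[1] for p in pairs}
    stack = []
    for ch in s:
        if ch in expected_closer:
            stack.append(expected_closer[ch])
        elif ch in closers:
            if not stack or stack.pop() != ch:
                return False
        else:
            return False   # character outside the alphabet
    return not stack       # every opener must be matched

print(in_dyck("([]{()})"))  # True
print(in_dyck("([)]"))      # False: balanced only if types are ignored
```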
Synthesis of ATP by ATP synthase

Making the Miracle Molecule: ATP Synthesis by the ATP Synthase

ATP is the most important energized molecule in the cell. ATP is an activated carrier that stores free energy because it is maintained out of equilibrium with its hydrolysis products, ADP and Pi. There is a strong tendency for ATP to become hydrolyzed (split up) into ADP and Pi, and any process that couples to this reaction can be "powered" by ATP - even if that process would be unfavorable on its own. Examples include transport of other molecules across a membrane against the natural gradient and the action of motor proteins.

ATP is synthesized by a machine that may be even more remarkable, the ATP synthase (also called F-ATPase or FoF1-ATPase). As shown in the figure, the ATP synthase is an intricate rotary machine consisting of $\sim20$ proteins. The machine is driven by a difference in proton electrochemical potential across the bilayer (grey outline), which consists of both a pH difference (difference in proton concentrations) and an electrostatic potential - i.e., voltage - difference $\Delta \psi$. The proton-coupled free energy difference is sometimes called the "proton motive force" or pmf. The pmf drives proton transport (green arrows) from the positive P-side to the negative N-side of the membrane, which in turn causes rotation of the c-ring (brown) of the Fo subunit, which is attached to the $\gamma$ stalk (brown). Rotation of the asymmetric $\gamma$ stalk within the three catalytic $\alpha \beta$ domains (light blue) of the F1 subunit supplies energy sufficient for the synthesis of ATP - more precisely, for the phosphorylation of ADP. In overview, an electrochemical gradient for protons is transduced into mechanical motion (rotation) which in turn creates sufficient conformational energy (elastic and perhaps electrostatic) to enable the highly unfavorable chemical reaction synthesizing ATP. This is fairly incredible.

Thermodynamic analysis of ATP synthesis

We can understand the driving forces through which the ATP synthase works without detailed knowledge or assumptions regarding structural details of the machine. For example, although mechanical/conformational free energy is transiently stored during the synthesis cycle, that is part of the inner workings of the machine that does not affect the overall thermodynamics. This is because all molecular machines are "passive devices" that reset to the exact same state after a cycle - in essence, machines simply are catalysts for complicated processes that may involve both chemical reactions and transport. The machines themselves do not supply energy but rather use free energy stored in other sources, such as activated carriers like ATP or concentration gradients.

The ATP synthase always generates three ATP molecules in a complete 360-degree rotational cycle because of the three catalytic $\alpha \beta$ domains, but the number of protons $n$ transported (which depends on the number of c-subunits - see figure above) varies from 8 to 15, depending on the organism and/or organelle. The overall "reaction" catalyzed by the ATP synthase (omitting water) is therefore

$$n \, \plus{H}(\mbox{P}) + 3 \, \mbox{ADP} + 3 \, \mbox{Pi} \rightarrow n \, \plus{H}(\mbox{N}) + 3 \, \mbox{ATP} \qquad (1)$$

where "P" denotes the positive side of the membrane where the proton chemical potential is higher, and "N" is the "negative" side of lower chemical potential.
The overall free energy change per ATP synthesized is therefore the sum of the change in chemical potential $\mu$ for $n/3$ protons and the free energy cost for phosphorylating ADP:

$$\dgtot = \frac{n}{3} \, \Delta \mu (\plus{H}) + \Delta G(\mbox{ADP} \rightarrow \mbox{ATP}) \qquad (2)$$

We know that the overall $\dgtot$ must be negative, because the process does occur. The negative $\Delta \mu$ term will outweigh the positive $\Delta G$ for phosphorylating ADP. Although the individual terms cannot really be known exactly, we can approximate them reasonably using our usual ideal gas (ideal solution) framework. We will build on the detailed discussion found in the chemical potential section, where we explored the free energy of ATP hydrolysis, $\Delta G(\mbox{ATP} \rightarrow \mbox{ADP})$, which is simply the negative of the $\Delta G$ value we want. We therefore have

$$\Delta G(\mbox{ADP} \rightarrow \mbox{ATP}) = RT \ln \left( \frac{\conc{ATP} \, / \, \conc{ADP} \conc{Pi}}{\conceq{ATP} \, / \, \conceq{ADP} \conceq{Pi}} \right) \qquad (3)$$

where $\conc{X}$ denotes the cellular concentration of molecule X and $\conceq{X}$ is the equilibrium concentration. ATP is an "activated carrier" of free energy because its concentration is maintained far from equilibrium ... by the action of the ATP synthase. The state of activation for ATP is equivalent to the condition that $\Delta G(\mbox{ADP} \rightarrow \mbox{ATP}) > 0$. When equilibrium and typical cellular values for the concentrations are substituted into (3), one finds that $\Delta G(\mbox{ADP} \rightarrow \mbox{ATP}) \simeq 12$ kcal/mol, which is about 20 times the thermal energy ($RT$ or $k_BT$) and quite a significant amount of energy!

For ATP synthesis to proceed, we know the total $\dgtot$ in (2) must be negative, so the change in proton chemical potential must be large and negative:

$$\Delta \mu (\plus{H}) < -\frac{3}{n} \, \Delta G(\mbox{ADP} \rightarrow \mbox{ATP}) \qquad (4)$$

where $n = 8$ for the mammalian ATP synthase. As noted above, there are two driving forces acting on the protons: (i) the electrostatic potential difference across the membrane $\Delta \psi$, which simply creates an electric field that exerts a force on the proton as you learned in high school physics, and (ii) the pH difference $\Delta$pH across the membrane, which creates a diffusive force that tends to equalize concentrations because pH is simply a measure of concentration. A more complete discussion of membrane electrostatics is available. We can quantify the preceding description of proton driving forces, the pmf, again within ideal solution theory, via

$$\Delta \mu (\plus{H}) = -F \, \Delta \psi + (\ln 10) \, RT \, \Delta \mbox{pH} \qquad (5)$$

where $F$ is Faraday's constant, which simply converts the potential difference $\Delta \psi$ into energy units for a single proton charge. To interpret the overall effect of these terms, we must know the conventions adopted for the "$\Delta$" terms: for pH, it is the P-side pH value minus the N-side pH, making $\Delta$pH negative; for $\psi$ it is the same directionality, so the N-side voltage is subtracted from the P-side's, making $\Delta \psi$ positive. The net result is that both terms of $\Delta \mu (\plus{H})$ are negative by convention. Physically, the absolute value of $\Delta \mu (\plus{H})$ is the free energy available per proton transported down the electro-chemical gradient. Note that sometimes the electro-chemical potential difference is assigned the symbol $\Delta \tilde{\mu}$ to remind us that there is an electric field present.
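To put rough numbers on Eqs. (2) and (5), here is a minimal numerical sketch. The $\Delta$pH, $\Delta \psi$, and $n$ values below are representative magnitudes only (loosely mitochondrial), not measurements; the script merely illustrates the sign conventions just described.

```python
import math

R = 1.987e-3   # gas constant, kcal / (mol K)
F = 23.06      # Faraday constant, kcal / (mol V)
T = 310.0      # temperature, K

def delta_mu_H(delta_pH, delta_psi):
    """Eq. (5): proton electrochemical potential difference, kcal/mol.

    delta_pH  = pH(P) - pH(N)       (negative by convention)
    delta_psi = psi(P) - psi(N), V  (positive by convention)
    """
    return -F * delta_psi + math.log(10.0) * R * T * delta_pH

def dG_total(n, dG_ADP_to_ATP, delta_pH, delta_psi):
    """Eq. (2): total free energy change per ATP synthesized, kcal/mol."""
    return (n / 3.0) * delta_mu_H(delta_pH, delta_psi) + dG_ADP_to_ATP

# hypothetical values: delta_pH ~ -0.8, delta_psi ~ 0.15 V, n = 8
dmu = delta_mu_H(-0.8, 0.15)
print(f"delta mu per proton: {dmu:.2f} kcal/mol")   # about -4.6
print(f"dG_tot (n = 8):      {dG_total(8, 12.0, -0.8, 0.15):.2f} kcal/mol")
# slightly negative: synthesis is (just) favorable under these conditions
```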
A simple kinetic model for ATP synthesis

We can get a more concrete understanding of how the synthase will behave by building a simple mass-action model which incorporates some of the key features of its behavior. To allow us to build a model based on a relatively small number of equations, we will essentially model the synthesis of a single ATP - in effect, we will model 120 out of the full 360 degrees of the rotary cycle. And given that restriction, we will want the number of protons $n$ to be divisible by 3. We will choose $n = 9$, which is close to the $n=8$ value for mammalian ATP synthases.

The model will attempt to mimic the rotary mechanism in the following sequence of steps, as shown in the figure. Starting from state 1, ADP and Pi will bind the synthase, leading to state 3. A proton will then bind from the P side of the membrane, leading to state 4, followed by proton unbinding to the N side and state 5. (Rotation is implicitly included in the transitions 3$\rightarrow$4 and 4$\rightarrow$5, though for concreteness the figure shows it occurring with the 4$\rightarrow$5 transition.) Proton binding, unbinding and rotation repeat two more times, leading to state 9. The transition to state 10 is the catalytic step yielding ATP, and then ATP unbinding occurs, leaving the machine back in state 1. All the steps will be reversible, as is true for the cycle of any molecular machine.

The mass-action equations governing the model's kinetics are straightforward to write down. Using the notation that $[i]$ is the population of the machine in state $i$, the first equation is:

$$\frac{d[1]}{dt} = -k_{12} \, \conc{ADP} \, [1] + k_{21} \, [2] - k_{1,10} \, \conc{ATP} \, [1] + k_{10,1} \, [10]$$

where $k_{12}$ and $k_{1,10}$ are on-rates, while $k_{21}$ and $k_{10,1}$ are off-rates. The other equations take similar forms. For example,

$$\frac{d[9]}{dt} = k_{89} \, [8] - k_{98} \, \conc{\plus{H}(\mbox{N})} \, [9] - k_{9,10} \, [9] + k_{10,9} \, [10]$$

where $\conc{\plus{H}(\mbox{N})}$ is the proton concentration on the N side, $k_{98}$ is an on-rate, $k_{89}$ is an off-rate, while $k_{9,10}$ and $k_{10,9}$ are the catalytic rates for synthesis and hydrolysis, respectively.

To fully specify the model, we need all the rate constants. Some are available from the literature and others can be estimated, which is a fairly technical process that will not be discussed here. (The parameters used in the numerical data shown below are given in the model file, also available below.) However, it is worth noting that not all the rate constants can take arbitrary values, because of the usual constraint that occurs with any cycle.

The figure shows a stochastic simulation of our rotary model for a (molecular) time of 1 sec. Synthesis proceeds at a rate of about 10 ATP/s, but note that there are frequent stochastic reversals. Such reversals, which occur at the level of overall ATP synthesis as driven by "microscopic" reversals among the 10 states we have set up, are expected to be an intrinsic part of the functioning of many molecular machines. On aggregate, however, with thousands of synthases per mitochondrion, there will be a very steady amount of synthesis at fixed driving conditions. The source code for the model (a .bngl file) can be downloaded here. The simulation was performed using BioNetGen, a rule-based platform for kinetic modeling.

Speed vs. efficiency and Synthesis vs. pumping

We can get a very useful overall understanding of the relation between the thermodynamics (driving free energy) and the kinetics of ATP synthesis by taking a closer look at Eq. (2). The equilibrium condition is when $\dgtot = 0$, so the driving free energy for the protons exactly balances the free energy necessary to phosphorylate ("synthesize") ATP:

$$\frac{n}{3} \, \left| \Delta \mu (\plus{H}) \right| = \Delta G(\mbox{ADP} \rightarrow \mbox{ATP})$$

In a plot of the chemical potential vs. the synthesis free energy, this corresponds to a straight line of slope $n/3$ as shown in the figure. At equilibrium, there is zero net synthesis - which follows immediately from the balanced-flow definition of equilibrium. That is, while an occasional ADP might get phosphorylated by a synthase, that will be balanced out by an equal number of hydrolysis events.
The further the system gets from equilibrium, the stronger the driving. With stronger driving we expect faster synthesis (in the region above the equilibrium line). There will be, however, a maximum speed at which the machine can operate, so ultimately the synthesis rate will level off even with very significant driving. To visualize the relationship between thermodynamic driving and the ATP synthesis rate more quantitatively, it is useful to define the driving free energy as the negative of the total free energy change, $\dgdrive = - \dgtot$. By convention, $\dgdrive$ is positive under synthesis conditions and negative for reverse operation of the synthase. In the figure below, the point marked with a red X represents the conditions used to generate the stochastic trajectories shown above.

As with any molecular machine, the synthase can be driven in reverse, in which case it will act as a proton pump driven by ATP hydrolysis - as has been shown experimentally many times. The stronger the driving, the faster the pumping, until the performance plateaus due to rate-limiting chemical and conformational steps. This behavior is seen in the figure for negative driving potential.

On the question of efficiency, Eq. (2) tells us that for any finite amount of driving, there must be some free energy "spilled" ($\dgtot$ is dissipated as heat) when synthesis proceeds at a finite rate. Hence, the efficiency - measured as the fraction of the proton free energy converted to ATP chemical free energy - is always less than 100%. The lower the driving, the higher the efficiency - at the price of reducing the speed of synthesis. The cell must optimize the tradeoff between speed and efficiency for its own purposes ... and note that some heat generation is not entirely wasteful for many organisms that need to maintain body temperature.

Rotary ATPases have evolved many different stoichiometries ($n$ values) enabling them to function under different conditions, presumably optimized for different organisms or organelles. A given set of thermodynamic conditions can be used either for ATP hydrolysis-driven proton pumping or pmf-driven ATP synthesis, depending on the stoichiometry. See the dashed line in the $\Delta \mu$ vs. $\Delta G$ figure.

Acknowledgements

Many thanks to Ramu Anandakrishnan and Zining Zhang for helpful discussions as we all learned about the rotary ATPases. Ramu Anandakrishnan prepared the data-based figures.

References

A basic discussion of the ATP synthase can be found in any biochemistry or cell biology book. More detailed treatments are given in bioenergetics texts. For example, see D. G. Nicholls et al., "Bioenergetics", Academic Press.

T. P. Silverstein, "An exploration of how the thermodynamic efficiency of bioenergetic membrane systems varies with c-subunit stoichiometry of F1F0 ATP synthases," J. Bioenerg. Biomembr. (2014) 46:229-241.

J. E. Walker, "The ATP synthase: the understood, the uncertain and the unknown," Biochem. Soc. Trans. (2013) 41:1-16.

Exercises

Using the definition of pH, show that the $\Delta$pH term in Eq. (5) corresponds precisely to the free energy change for a simple concentration difference, as in our chemical potential discussion.

Write down the mass-action equations governing states 3, 4, and 5 in the model for rotary synthesis.

Write down the cycle constraint for the model in terms of rate constants, and then group the rate constants to see the dependence of the constraint equation on various equilibrium constants.
Sketch the expected behavior of a stochastic simulation of a single ATP synthase under equilibrium conditions. Also sketch the average behavior - i.e., the average ATP synthesized per second considering many synthases.
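For the last exercise, a simulation harness may help. The following is a minimal Gillespie-style sketch in which a full 120-degree cycle is coarse-grained into one effective forward (synthesis) and one effective backward (hydrolysis) rate; the rate values are hypothetical placeholders, not the parameters of the .bngl model, and setting the two rates equal mimics the equilibrium conditions of the exercise.

```python
import random

def gillespie_cycle(k_f, k_b, t_max):
    """Coarse-grained stochastic trace of net ATP count vs. time.

    One 'forward' event = one full synthesis cycle (+1 ATP); one
    'backward' event = one hydrolysis-driven reversal (-1 ATP).
    """
    t, atp, trace = 0.0, 0, [(0.0, 0)]
    while t < t_max:
        total = k_f + k_b
        t += random.expovariate(total)                 # waiting time
        atp += 1 if random.random() < k_f / total else -1
        trace.append((t, atp))
    return trace

# driven conditions: net synthesis rate is roughly k_f - k_b
for t, atp in gillespie_cycle(k_f=12.0, k_b=2.0, t_max=1.0)[-3:]:
    print(f"t = {t:.3f} s, net ATP = {atp}")
# equilibrium conditions (the exercise): k_f == k_b gives a zero-mean walk
```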
Document Type: Original Article

Abstract

We show that if $T$ is a bounded linear operator on a complex Hilbert space, then \begin{equation*} \frac{1}{2}\Vert T\Vert\leq \sqrt{\frac{w^2(T)}{2} + \frac{w(T)}{2}\sqrt{w^2(T) - c^2(T)}} \leq w(T), \end{equation*} where $w(\cdot)$ and $c(\cdot)$ are the numerical radius and the Crawford number, respectively. We then apply it to prove that for each $t\in[0, \frac{1}{2})$ and natural number $k$, \begin{equation*} \frac{(1 + 2t)^{\frac{1}{2k}}}{{2}^{\frac{1}{k}}}m(T)\leq w(T), \end{equation*} where $m(T)$ denotes the minimum modulus of $T$. Some other related results are also presented.
OpenCV 4.1.1 Open Source Computer Vision GMat cv::gapi::concatHor (const GMat &src1, const GMat &src2) Applies horizontal concatenation to given matrices. More... GMat cv::gapi::concatHor (const std::vector< GMat > &v) GMat cv::gapi::concatVert (const GMat &src1, const GMat &src2) Applies vertical concatenation to given matrices. More... GMat cv::gapi::concatVert (const std::vector< GMat > &v) GMat cv::gapi::convertTo (const GMat &src, int rdepth, double alpha=1, double beta=0) Converts a matrix to another data depth with optional scaling. More... GMat cv::gapi::crop (const GMat &src, const Rect &rect) Crops a 2D matrix. More... GMat cv::gapi::flip (const GMat &src, int flipCode) Flips a 2D matrix around vertical, horizontal, or both axes. More... GMat cv::gapi::LUT (const GMat &src, const Mat &lut) Performs a look-up table transform of a matrix. More... GMat cv::gapi::merge3 (const GMat &src1, const GMat &src2, const GMat &src3) GMat cv::gapi::merge4 (const GMat &src1, const GMat &src2, const GMat &src3, const GMat &src4) Creates one 3-channel (4-channel) matrix out of 3(4) single-channel ones. More... GMat cv::gapi::normalize (const GMat &src, double alpha, double beta, int norm_type, int ddepth=-1) Normalizes the norm or value range of an array. More... GMat cv::gapi::remap (const GMat &src, const Mat &map1, const Mat &map2, int interpolation, int borderMode=BORDER_CONSTANT, const Scalar &borderValue=Scalar()) Applies a generic geometrical transformation to an image. More... GMat cv::gapi::resize (const GMat &src, const Size &dsize, double fx=0, double fy=0, int interpolation=INTER_LINEAR) Resizes an image. More... GMatP cv::gapi::resizeP (const GMatP &src, const Size &dsize, int interpolation=cv::INTER_LINEAR) Resizes a planar image. More... std::tuple< GMat, GMat, GMat > cv::gapi::split3 (const GMat &src) std::tuple< GMat, GMat, GMat, GMat > cv::gapi::split4 (const GMat &src) Divides a 3-channel (4-channel) matrix into 3(4) single-channel matrices. More...

#include <opencv2/gapi/core.hpp> Applies horizontal concatenation to given matrices. The function horizontally concatenates two GMat matrices (with the same number of rows). src1 first input matrix to be considered for horizontal concatenation. src2 second input matrix to be considered for horizontal concatenation.

#include <opencv2/gapi/core.hpp> This is an overloaded member function, provided for convenience. It differs from the above function only in what argument(s) it accepts. The function horizontally concatenates a given number of GMat matrices (all with the same number of rows). The output matrix has the same number of rows and depth as the input matrices, and its number of columns is the sum of the columns of the input matrices. v vector of input matrices to be concatenated horizontally.

#include <opencv2/gapi/core.hpp> Applies vertical concatenation to given matrices. The function vertically concatenates two GMat matrices (with the same number of columns). src1 first input matrix to be considered for vertical concatenation. src2 second input matrix to be considered for vertical concatenation.

#include <opencv2/gapi/core.hpp> This is an overloaded member function, provided for convenience. It differs from the above function only in what argument(s) it accepts. The function vertically concatenates a given number of GMat matrices (all with the same number of columns). The output matrix has the same number of columns and depth as the input matrices, and its number of rows is the sum of the rows of the input matrices. v vector of input matrices to be concatenated vertically.
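A usage sketch for the two-matrix overload (assuming an OpenCV build with the G-API module enabled); concatenating two 4x4 matrices horizontally yields a 4x8 result:

```cpp
#include <opencv2/core.hpp>
#include <opencv2/gapi.hpp>
#include <opencv2/gapi/core.hpp>

int main() {
    // build the graph once: two inputs, one concatenated output
    cv::GMat a, b;
    cv::GMat out_g = cv::gapi::concatHor(a, b);
    cv::GComputation hcat(cv::GIn(a, b), cv::GOut(out_g));

    // run it on actual data
    cv::Mat left  = cv::Mat::zeros(4, 4, CV_8UC1);
    cv::Mat right = cv::Mat::ones(4, 4, CV_8UC1);
    cv::Mat out;
    hcat.apply(cv::gin(left, right), cv::gout(out));  // out is 4x8
    return 0;
}
```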
#include <opencv2/gapi/core.hpp> Converts a matrix to another data depth with optional scaling. The method converts source pixel values to the target data depth. saturate_cast<> is applied at the end to avoid possible overflows: \[m(x,y) = saturate \_ cast<rType>( \alpha (*this)(x,y) + \beta )\] Output matrix must be of the same size as the input one. src input matrix to be converted from. rdepth desired output matrix depth (the number of channels stays the same as the input's); if rdepth is negative, the output matrix will have the same depth as the input. alpha optional scale factor. beta optional delta added to the scaled values.

#include <opencv2/gapi/core.hpp> Crops a 2D matrix. The function crops the matrix by the given cv::Rect. Output matrix must be of the same depth as the input one; its size is specified by the given rect. src input matrix. rect a rect to crop a matrix to

#include <opencv2/gapi/core.hpp> Flips a 2D matrix around vertical, horizontal, or both axes. The function flips the matrix in one of three different ways (row and column indices are 0-based): \[\texttt{dst} _{ij} = \left\{ \begin{array}{l l} \texttt{src} _{\texttt{src.rows}-i-1,j} & if\; \texttt{flipCode} = 0 \\ \texttt{src} _{i, \texttt{src.cols} -j-1} & if\; \texttt{flipCode} > 0 \\ \texttt{src} _{ \texttt{src.rows} -i-1, \texttt{src.cols} -j-1} & if\; \texttt{flipCode} < 0 \\ \end{array} \right.\] The example scenarios of using the function are the following: Vertical flipping of the image (flipCode == 0) to switch between top-left and bottom-left image origin. This is a typical operation in video processing on Microsoft Windows* OS. Horizontal flipping of the image with the subsequent horizontal shift and absolute difference calculation to check for a vertical-axis symmetry (flipCode > 0). Simultaneous horizontal and vertical flipping of the image with the subsequent shift and absolute difference calculation to check for a central symmetry (flipCode < 0). Reversing the order of point arrays (flipCode > 0 or flipCode == 0). Output image must be of the same depth as the input one; its size should be correct for the given flipCode. src input matrix. flipCode a flag to specify how to flip the array; 0 means flipping around the x-axis, a positive value (for example, 1) means flipping around the y-axis, and a negative value (for example, -1) means flipping around both axes.

#include <opencv2/gapi/core.hpp> Performs a look-up table transform of a matrix. The function LUT fills the output matrix with values from the look-up table. Indices of the entries are taken from the input matrix. That is, the function processes each element of src as follows: \[\texttt{dst} (I) \leftarrow \texttt{lut(src(I))}\] Supported matrix data types are CV_8UC1. Output is a matrix of the same size and number of channels as src, and the same depth as lut. src input matrix of 8-bit elements. lut look-up table of 256 elements; in case of multi-channel input array, the table should either have a single channel (in this case the same table is used for all channels) or the same number of channels as in the input matrix.

#include <opencv2/gapi/core.hpp> GMat cv::gapi::merge4 ( const GMat & src1, const GMat & src2, const GMat & src3, const GMat & src4 ) #include <opencv2/gapi/core.hpp> Creates one 3-channel (4-channel) matrix out of 3(4) single-channel ones. The function merges several matrices to make a single multi-channel matrix.
That is, each element of the output matrix will be a concatenation of the elements of the input matrices, where elements of the i-th input matrix are treated as mv[i].channels()-element vectors. Output matrix must be of CV_8UC3 (CV_8UC4) type. The functions split3/split4 do the reverse operation. src1 first input matrix to be merged src2 second input matrix to be merged src3 third input matrix to be merged src4 fourth input matrix to be merged

GMat cv::gapi::normalize ( const GMat & src, double alpha, double beta, int norm_type, int ddepth = -1 ) #include <opencv2/gapi/core.hpp> Normalizes the norm or value range of an array. The function normalizes (scales and shifts) the input array elements so that \[\| \texttt{dst} \| _{L_p}= \texttt{alpha}\] (where p=Inf, 1 or 2) when normType=NORM_INF, NORM_L1, or NORM_L2, respectively; or so that \[\min _I \texttt{dst} (I)= \texttt{alpha} , \, \, \max _I \texttt{dst} (I)= \texttt{beta}\] when normType=NORM_MINMAX (for dense arrays only). src input array. alpha norm value to normalize to or the lower range boundary in case of the range normalization. beta upper range boundary in case of the range normalization; it is not used for the norm normalization. norm_type normalization type (see cv::NormTypes). ddepth when negative, the output array has the same type as src; otherwise, it has the same number of channels as src and the depth =ddepth.

GMat cv::gapi::remap ( const GMat & src, const Mat & map1, const Mat & map2, int interpolation, int borderMode = BORDER_CONSTANT, const Scalar & borderValue = Scalar() ) #include <opencv2/gapi/core.hpp> Applies a generic geometrical transformation to an image. The function remap transforms the source image using the specified map: \[\texttt{dst} (x,y) = \texttt{src} (map_x(x,y),map_y(x,y))\] where values of pixels with non-integer coordinates are computed using one of the available interpolation methods. \(map_x\) and \(map_y\) can be encoded as separate floating-point maps in \(map_1\) and \(map_2\) respectively, or interleaved floating-point maps of \((x,y)\) in \(map_1\), or fixed-point maps created by using convertMaps. The reason you might want to convert from floating to fixed-point representations of a map is that they can yield much faster (2x) remapping operations. In the converted case, \(map_1\) contains pairs (cvFloor(x), cvFloor(y)) and \(map_2\) contains indices in a table of interpolation coefficients. Output image must be of the same size and depth as the input one. src Source image. map1 The first map of either (x,y) points or just x values having the type CV_16SC2, CV_32FC1, or CV_32FC2. map2 The second map of y values having the type CV_16UC1, CV_32FC1, or none (empty map if map1 is (x,y) points), respectively. interpolation Interpolation method (see cv::InterpolationFlags). The method INTER_AREA is not supported by this function. borderMode Pixel extrapolation method (see cv::BorderTypes). When borderMode=BORDER_TRANSPARENT, it means that the pixels in the destination image that correspond to the "outliers" in the source image are not modified by the function. borderValue Value used in case of a constant border. By default, it is 0.

GMat cv::gapi::resize ( const GMat & src, const Size & dsize, double fx = 0, double fy = 0, int interpolation = INTER_LINEAR ) #include <opencv2/gapi/core.hpp> Resizes an image. The function resizes the image src down to or up to the specified size.
Output image size will have the size dsize (when dsize is non-zero) or the size computed from src.size(), fx, and fy; the depth of the output is the same as of src. If you want to resize src so that it fits the pre-created dst, you may call the function as follows: If you want to decimate the image by factor of 2 in each direction, you can call the function this way: To shrink an image, it will generally look best with cv::INTER_AREA interpolation, whereas to enlarge an image, it will generally look best with cv::INTER_CUBIC (slow) or cv::INTER_LINEAR (faster but still looks OK). src input image. dsize output image size; if it equals zero, it is computed as: \[\texttt{dsize = Size(round(fx*src.cols), round(fy*src.rows))}\]Either dsize or both fx and fy must be non-zero. fx scale factor along the horizontal axis; when it equals 0, it is computed as \[\texttt{(double)dsize.width/src.cols}\] fy scale factor along the vertical axis; when it equals 0, it is computed as \[\texttt{(double)dsize.height/src.rows}\] interpolation interpolation method, see cv::InterpolationFlags

GMatP cv::gapi::resizeP ( const GMatP & src, const Size & dsize, int interpolation = cv::INTER_LINEAR ) #include <opencv2/gapi/core.hpp> Resizes a planar image. The function resizes the image src down to or up to the specified size. Planar image memory layout is three planes lying in memory contiguously, so the image height should be plane_height*plane_number, and the image type is CV_8UC1. Output image size will have the size dsize; the depth of the output is the same as of src. src input image, must be of CV_8UC1 type; dsize output image size; interpolation interpolation method, only cv::INTER_LINEAR is supported at the moment

#include <opencv2/gapi/core.hpp> Divides a 3-channel (4-channel) matrix into 3(4) single-channel matrices. The function splits a 3-channel (4-channel) matrix into 3(4) single-channel matrices: \[\texttt{mv} [c](I) = \texttt{src} (I)_c\] All output matrices must be of CV_8UC1 type.
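The operations above compose into graphs. A sketch of a crop-resize-flip pipeline (the rectangle, sizes, and flip code are arbitrary example values):

```cpp
#include <opencv2/core.hpp>
#include <opencv2/gapi.hpp>
#include <opencv2/gapi/core.hpp>

int main() {
    // describe the pipeline symbolically
    cv::GMat in;
    cv::GMat cropped = cv::gapi::crop(in, cv::Rect(10, 10, 320, 240));
    cv::GMat small   = cv::gapi::resize(cropped, cv::Size(160, 120));
    cv::GMat out_g   = cv::gapi::flip(small, 1);  // flip around the y-axis
    cv::GComputation pipeline(cv::GIn(in), cv::GOut(out_g));

    // execute it on a concrete frame
    cv::Mat frame(480, 640, CV_8UC3, cv::Scalar(0, 0, 0)), result;
    pipeline.apply(cv::gin(frame), cv::gout(result));  // result is 120x160
    return 0;
}
```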
If $i=\sqrt{-1}$, is $\large\sqrt{i}$ imaginary? Is it used or considered often in mathematics? How is it notated?

Let $z=(a+bi)$ be a complex number which is a square root of $i$, that is $$i=z^2=(a^2-b^2)+2abi.$$ Equating real and imaginary parts we have $$a^2-b^2=0, \quad 2ab=1.$$ The two real solutions to this pair of equations are $a={1 \over \sqrt{2}},b={1 \over \sqrt{2}}$ and $a=-{1 \over \sqrt{2}},b=-{1 \over \sqrt{2}}$. The two square roots of $i$ therefore are $$\pm {1 \over \sqrt{2}} (1+i)$$ Visually, the square root of a complex number (written in polar coordinates) $(\rho,\theta)$ is the number $(\sqrt\rho,\theta/2)$. $i^\frac1{2}=\left(e^{\pi i/2}\right)^\frac1{2}=e^{\pi i/4}$ $e^{\pi i/4}=\cos\left(\frac{\pi}{4}\right)+i \sin\left(\frac{\pi}{4}\right)$ or simplified, $\frac{1+i}{\sqrt{2}}$ This is of course the "principal value"; the other value (thanks Matt E!) is the "negative square root", $-\frac{1+i}{\sqrt{2}}$ or in exponential form, $-e^{\pi i/4}=e^{-3\pi i/4}$ As you can see, there are several ways to find the answer to your question, from jmoy's straightforward algebra calculation, to using Euler's formula $e^{r+i\theta} = e^r(\cos(\theta)+i \sin(\theta))$, to geometric interpretations of complex numbers. One fact that hasn't been mentioned yet is that the complex numbers are algebraically closed. This means that every algebraic equation using complex numbers has all of its solutions in the complex numbers. So, the equation $x^2=i$ has both of its solutions in the complex numbers; and the equations $x^4 = -7-12i$ and $x^4+(\pi -8i)x^3+x-\sqrt{5}=0$ each have all four of their solutions in the complex numbers. The real numbers are not algebraically closed: the equations $x^2=-1$ and $x^4+7x^2+\pi=0$ cannot be solved unless we use complex numbers. This is one of the main reasons complex numbers are so important; they are the algebraic closure of the real numbers. You will never need "higher levels" of imaginary numbers or new mysterious square roots; numbers of the form $a+bi$ are all you need to find any root of real or complex polynomials. More generally, if you want to compute all the $n$-th roots of a complex number $z_0$, that is, all the complex numbers $z$ such that $$ z^n = z_0 \ , \qquad \qquad \qquad \qquad [1] $$ you should write this equation in exponential form: $z = re^{i\theta}$, $z_0 = r_0 e^{i\theta_0}$. Then [1] becomes $$ \left( r e^{i \theta}\right)^n = r_0 e^{i\theta_0} \qquad \Longleftrightarrow \qquad r^n e^{in\theta} = r_0 e^{i\theta_0} \ . $$ Now, if you have two complex numbers in polar coordinates which are equal, their moduli must clearly be equal: $$ r^n = r_0 \qquad \Longrightarrow \qquad r = +\sqrt[n]{r_0} $$ since $r, r_0 \geq 0$. As for the arguments, we cannot simply conclude that $n\theta = \theta_0$, but just that they differ by an integer multiple of $2\pi$: $$ n\theta = \theta_0 + 2k\pi \qquad \Longleftrightarrow \qquad \theta = \frac{\theta_0 + 2k \pi}{n} \quad \text{for} \quad k = 0, \pm 1 , \pm 2, \dots $$ It would seem that we have an infinite number of $n$-th roots, but we have enough with $k = 0, 1, \dots , n-1$, since for instance for $k=0$ and $k=n$ we obtain the same complex numbers.
Thus, finally $$ \sqrt[n]{r_0 e^{i\theta_0}} = +\sqrt[n]{r_0} e^{i \frac{\theta_0 + 2k\pi}{n}} \ , \quad k = 0, 1, \dots , n-1 $$ are all the complex $n$-th roots of $z_0$. Examples (1) For $n=2$, we obtain that every complex number has exactly two square roots: $$ \begin{align} \sqrt{z_0} &= +\sqrt{r_0}e^{i\frac{\theta_0 + 2k\pi}{2}} \ , k = 0,1 \\\ &= +\sqrt{r_0}e^{i\frac{\theta_0}{2}} \quad \text{and} \quad +\sqrt{r_0}e^{i\left(\frac{\theta_0}{2} + \pi \right)} \ . \end{align} $$ For instance, since $i = e^{i\frac{\pi}{2}}$, we obtain $$ \sqrt{i} = \begin{cases} e^{i\frac{\pi}{4}} = \cos\frac{\pi}{4} +i \sin\frac{\pi}{4} = \frac{\sqrt{2}}{2} + i \frac{\sqrt{2}}{2} \\\ e^{i(\frac{\pi}{4} + \pi)} = \cos\frac{5\pi}{4} +i \sin\frac{5\pi}{4} = -\frac{\sqrt{2}}{2} - i \frac{\sqrt{2}}{2} \ . \end{cases} $$ Also, if $z_0 = -1 = e^{i\pi}$, $$ \sqrt{-1} = e^{i \frac{\pi}{2}} = i \quad \text{and} \quad e^{i\left( \frac{\pi}{2} + \pi\right)} = e^{i\frac{3\pi}{2}} = -i \ . $$ (2) For $z_0 = 1 = e^{i \cdot 0}$ and any $n$, we obtain the $n$-th roots of unity: $$ \sqrt[n]{1} = e^{i\frac{2k\pi}{n}} \ , \quad k= 0, 1, \dots , n-1 \ . $$ For instance, if $n= 2$, we get $$ \sqrt{1}= e^{i \cdot 0} = 1 \quad \text{and} \quad e^{i\pi}= -1 $$ and for $n= 4$, $$ \sqrt[4]{1} = e^{i\frac{2k\pi}{4}} \ , \quad k = 0, 1, 2, 3 \ , $$ that is, $$ \sqrt[4]{1} = 1, i, -1 , -i \ . $$ With a little bit of manipulation you can make use of the quadratic equation since you are really looking for the solutions of $x^2 - i = 0$; unfortunately, if you apply the quadratic formula directly you gain nothing new, but... Since $i^2 = -1$, multiply both sides of our original equation by $i$ and you will have $ix^2 +1 =0$; now both equations have exactly the same roots, and so will their sum: $$(1+i)x^2 + (1-i) = 0 $$ Apply the quadratic formula to this last equation and simplify and you will get $x=\pm\frac{\sqrt{2}}{2}(1+i)$. If you understand Argand diagrams (the representation of complex numbers in the complex plane) and can envision the unit circle in it, you can easily do this in your head: $-1$ is $1$ rotated over $\pi$ radians. The square root of a number on the unit circle is the number rotated over half the angle, so $i$, or $\sqrt{-1}$, is $1$ rotated over $\pi/2$ radians. To find $\sqrt{i}$ you just halve the angle again: $\pi/4$ radians. The corresponding real and imaginary parts are $\cos\frac{\pi}{4}$ and $\sin\frac{\pi}{4}$ resp.
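A quick numerical sanity check of the principal value, sketched with Python's standard cmath module:

```python
import cmath

root = cmath.sqrt(1j)                # principal square root of i
print(root)                          # (0.7071067811865476+0.7071067811865475j)
print(root**2)                       # recovers i up to rounding
print((1 + 1j) / cmath.sqrt(2))      # matches (1+i)/sqrt(2)
```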
edit LaTexified (thanks Agosti) Gives a nice concise overview: http://www.wolframalpha.com/input/?i=Sqrt(i) $$\sqrt{i}=\left|\sqrt{i}\right|e^{\arg\left(\sqrt{i}\right)i}$$ First we look to $\left|\sqrt{i}\right|$: $$\left|\sqrt{i}\right|=\left|\sqrt{\frac{1}{2}+i-\frac{1}{2}}\right|=\left|\sqrt{\frac{1+(0+2i)-1}{2}}\right|=\left|\sqrt{\frac{1+2(0+1i)+(0+1i)^2}{2}}\right|=$$ $$\left|\sqrt{\frac{(1+(0+1i))^2}{2}}\right|=\left|\frac{\sqrt{(1+(0+1i))^2}}{\sqrt{2}}\right|=\left|\frac{1+(0+1i)}{\sqrt{2}}\right|=$$ $$\left|\frac{(1+(0+1i))\sqrt{2}}{2}\right|=\frac{\left|(1+(0+1i))\sqrt{2}\right|}{|2|}=\frac{\sqrt{2}|1+(0+1i)|}{2}=$$ $$\frac{\sqrt{2}\,|1+1i|}{2}=\frac{\sqrt{2}\sqrt{1^2+1^2}}{2}=\frac{\sqrt{2}\sqrt{2}}{2}=\frac{2}{2}=1$$ Now the argument of $\sqrt{i}$: It's positive, so on the complex axes $\sqrt{\sqrt{-1}}$ gives us $1e^{\frac{1}{4}\pi i}$, so the argument of $\sqrt{i}$ is $\frac{1}{4}\pi$ $-------$ $$\sqrt{i}=\left|\sqrt{i}\right|e^{\arg\left(\sqrt{i}\right)i}=1e^{\frac{1}{4}\pi i}=1\left(\cos\left(\frac{1}{4}\pi\right)+\sin\left(\frac{1}{4}\pi\right)i\right)=$$ $$\cos\left(\frac{1}{4}\pi\right)+\sin\left(\frac{1}{4}\pi\right)i=$$ $$\frac{1}{\sqrt{2}}+\frac{1}{\sqrt{2}}i$$ So: $$\sqrt{i}=\frac{1}{\sqrt{2}}+\frac{1}{\sqrt{2}}i$$ You can use De Moivre's Theorem. First note that $i$ can be expressed in the form: $ 1 \, \text{cis} \left( \frac{\pi}{2} \right) $. Hence, $$ \sqrt{i} = \left( 1 \, \text{cis} \left( \frac{\pi}{2} \right)\right)^{\tfrac{1}{2}} = \pm \left[ 1 \, \text{cis}\left(\frac{\pi}{4}\right) \right] = \pm \left( \frac{1}{\sqrt{2}} + \frac {i}{\sqrt{2}} \right). $$
\section{Focused Sequent Calculus} Focused sequent calculus has several classes of propositions\footnote{We are writing $\Neg{a},\Pos{a}$ for negative and positive atoms respectively. You may also see these written as $\Neg{P},\Pos{P}$.} and contexts. We summarize them below, noting that $\Pos{\Omega}$ ranges over ordered lists, whereas $\Gamma$ ranges over multisets: \begin{grammar} negative & \Neg{A} &

Core RedML is (so far) a one-sided, polarized L calculus in the style of Curien-Herbelin-Munch. Lambda calculus can be compiled to Core RedML through a bunch of gnarly macros. For instance, the term (((lam [x] (lam [y] (pair x y))) nil) nil) should evaluate to a pair of nils. This term is compiled into Core RedML as the first stage of the following computation trace, which shows how it computes to the desired pair. State: (cut

Flat profile: Each sample counts as 0.01 seconds.
  %     cumulative   self               self      total
 time    seconds    seconds   calls    ms/call   ms/call   name
 25.18     1.76      1.76       2415     0.73      0.85    mark_slice
 10.44     2.49      0.73    4193520     0.00      0.00    camlHashtbl__insert_bucket_1371
  9.87     3.18      0.69    6306936     0.00      0.00    camlHashtbl__add_1385
  7.73     3.72      0.54   67916414     0.00      0.00    caml_page_table_lookup
  5.87     4.13      0.41       4771     0.09      0.21    caml_empty_minor_heap

[0.066684s] Checked test/success/V-types.prl
[0.022747s] Checked test/success/bool-fhcom-without-open-eval.prl
[0.009820s] Checked test/success/bool-pair-test.prl
[0.010696s] Checked test/success/dashes-n-slashes.prl
[0.009144s] Checked test/success/decomposition.prl
[0.006263s] Checked test/success/discrete-types.prl
[0.013531s] Checked test/success/empty.prl
[0.009508s] Checked test/success/equality-elim.prl
[0.017783s] Checked test/success/equality.prl
[0.017168s] Checked test/success/fcom-types.prl

Nuprl_functional = λ _top_assumption_ : nat, (λ (_evar_0_ : (λ n : nat, matrix_functional (Nuprl n)) 0) (_evar_0_0 : ∀ n : nat, (λ n0 : nat, matrix_functional (Nuprl n0)) (S n)), match _top_assumption_ as n return ((λ n0 : nat, matrix_functional (Nuprl n0)) n) with

\documentclass{article}
\usepackage{libertine}
\usepackage[usenames,dvipsnames,svgnames,table]{xcolor}
\usepackage{amsmath, amssymb, proof, microtype, hyperref}
\usepackage{mathpartir} % for mathpar, not for proofs
\usepackage{perfectcut}
\newcommand\Parens[1]{\perfectparens{#1}}
\newcommand\Squares[1]{\perfectbrackets{#1}}
This is a basic question I haven't seen answered anywhere and I can't seem to figure out. The usual statement of the 1+1D chiral anomaly Ward identity is that the divergence of the chiral current is the background field strength: $\partial_\mu \langle j^\mu\rangle = \epsilon^{\mu \nu} F_{\mu \nu}/2\pi.$ I want to rewrite this in terms of the covariant chiral current $J^\mu = \epsilon^{\mu \nu}\langle j_\nu\rangle$. I believe it says $dJ = F/2\pi$. I am worried about this expression on a compact spacetime, however, since $F/2\pi$ may have a nonzero surface integral, while the integral of a divergence over a closed surface is zero. Must it be that somehow the covariant chiral current $J$ is not gauge invariant? I don't see a mechanism for this to happen though. Thanks!
And I think people said that reading first chapter of Do Carmo mostly fixed the problems in that regard. The only person I asked about the second pset said that his main difficulty was in solving the ODEs Yeah here there's the double whammy in grad school that every grad student has to take the full year of algebra/analysis/topology, while a number of them already don't care much for some subset, and then they only have to pass the class I know 2 years ago apparently it mostly avoided commutative algebra, half because the professor himself doesn't seem to like it that much and half because he was like yeah the algebraists all place out so I'm assuming everyone here is an analyst and doesn't care about commutative algebra Then the year after another guy taught and made it mostly commutative algebra + a bit of varieties + Cech cohomology at the end from nowhere and everyone was like uhhh. Then apparently this year was more of an experiment, in part from requests to make things more geometric It's got 3 "underground" floors (quotation marks because the place is on a very tall hill so the first 3 floors are a good bit above the street), and then 9 floors above ground. The grad lounge is on the top floor and overlooks the city and lake, it's real nice The basement floors have the library and all the classrooms (each of them has a lot more area than the higher ones), floor 1 is basically just the entrance, I'm not sure what's on the second floor, 3-8 is all offices, and 9 has the grad lounge mainly And then there's one weird area called the math bunker that's trickier to access, you have to leave the building from the first floor, head outside (still walking on the roof of the basement floors), go to this other structure, and then get in. Some number of grad student cubicles are there (other grad students get offices in the main building) It's hard to get a feel for which places are good at undergrad math. Highly ranked places are known for having good researchers but there's no "How well does this place teach?" ranking which is kinda more relevant if you're an undergrad I think interest might have started the trend, though it is true that grad admissions now is starting to make it closer to an expectation (friends of mine say that for experimental physics, classes and all definitely don't cut it anymore) In math I don't have a clear picture. It seems there are a lot of Mickey Mouse projects that don't seem to help people much, but more and more people seem to do more serious things and that seems to become a bonus One of my professors said it to describe a bunch of REUs, basically boils down to problems that some of these give their students which nobody really cares about but which undergrads could work on and get a paper out of @TedShifrin i think universities have been ostensibly a game of credentialism for a long time, they just used to be gated off to a lot more people than they are now (see: ppl from backgrounds like mine) and now that budgets shrink to nothing (while administrative costs balloon) the problem gets harder and harder for students In order to show that $x=0$ is asymptotically stable, one needs to show that $$\forall \varepsilon > 0, \; \exists\, T > 0 \; \mathrm{s.t.} \; t > T \implies || x ( t ) - 0 || < \varepsilon.$$The intuitive sketch of the proof is that one has to fit a sublevel set of continuous functions $...
"If $U$ is a domain in $\Bbb C$ and $K$ is a compact subset of $U$, then for all holomorphic functions on $U$, we have $\sup_{z \in K}|f(z)| \leq C_K \|f\|_{L^2(U)}$ with $C_K$ depending only on $K$ and $U$" this took me way longer than it should have Well, $A$ has these two dictinct eigenvalues meaning that $A$ can be diagonalised to a diagonal matrix with these two values as its diagonal. What will that mean when multiplied to a given vector (x,y) and how will the magnitude of that vector changed? Alternately, compute the operator norm of $A$ and see if it is larger or smaller than 2, 1/2 Generally, speaking, given. $\alpha=a+b\sqrt{\delta}$, $\beta=c+d\sqrt{\delta}$ we have that multiplication (which I am writing as $\otimes$) is $\alpha\otimes\beta=(a\cdot c+b\cdot d\cdot\delta)+(b\cdot c+a\cdot d)\sqrt{\delta}$ Yep, the reason I am exploring alternative routes of showing associativity is because writing out three elements worth of variables is taking up more than a single line in Latex, and that is really bugging my desire to keep things straight. hmm... I wonder if you can argue about the rationals forming a ring (hence using commutativity, associativity and distributivitity). You cannot do that for the field you are calculating, but you might be able to take shortcuts by using the multiplication rule and then properties of the ring $\Bbb{Q}$ for example writing $x = ac+bd\delta$ and $y = bc+ad$ we then have $(\alpha \otimes \beta) \otimes \gamma = (xe +yf\delta) + (ye + xf)\sqrt{\delta}$ and then you can argue with the ring property of $\Bbb{Q}$ thus allowing you to deduce $\alpha \otimes (\beta \otimes \gamma)$ I feel like there's a vague consensus that an arithmetic statement is "provable" if and only if ZFC proves it. But I wonder what makes ZFC so great, that it's the standard working theory by which we judge everything. I'm not sure if I'm making any sense. Let me know if I should either clarify what I mean or shut up. :D Associativity proofs in general have no shortcuts for arbitrary algebraic systems, that is why non associative algebras are more complicated and need things like Lie algebra machineries and morphisms to make sense of One aspect, which I will illustrate, of the "push-button" efficacy of Isabelle/HOL is its automation of the classic "diagonalization" argument by Cantor (recall that this states that there is no surjection from the naturals to its power set, or more generally any set to its power set).theorem ... The axiom of triviality is also used extensively in computer verification languages... take Cantor's Diagnolization theorem. It is obvious. (but seriously, the best tactic is over powered...) Extensions is such a powerful idea. I wonder if there exists algebraic structure such that any extensions of it will produce a contradiction. O wait, there a maximal algebraic structures such that given some ordering, it is the largest possible, e.g. surreals are the largest field possible It says on Wikipedia that any ordered field can be embedded in the Surreal number system. Is this true? How is it done, or if it is unknown (or unknowable) what is the proof that an embedding exists for any ordered field? Here's a question for you: We know that no set of axioms will ever decide all statements, from Gödel's Incompleteness Theorems. However, do there exist statements that cannot be decided by any set of axioms except ones which contain one or more axioms dealing directly with that particular statement? 
"Infinity exists" comes to mind as a potential candidate statement. Well, take ZFC as an example, CH is independent of ZFC, meaning you cannot prove nor disprove CH using anything from ZFC. However, there are many equivalent axioms to CH or derives CH, thus if your set of axioms contain those, then you can decide the truth value of CH in that system @Rithaniel That is really the crux on those rambles about infinity I made in this chat some weeks ago. I wonder to show that is false by finding a finite sentence and procedure that can produce infinity but so far failed Put it in another way, an equivalent formulation of that (possibly open) problem is: > Does there exists a computable proof verifier P such that the axiom of infinity becomes a theorem without assuming the existence of any infinite object? If you were to show that you can attain infinity from finite things, you'd have a bombshell on your hands. It's widely accepted that you can't. If fact, I believe there are some proofs floating around that you can't attain infinity from the finite. My philosophy of infinity however is not good enough as implicitly pointed out when many users who engaged with my rambles always managed to find counterexamples that escape every definition of an infinite object I proposed, which is why you don't see my rambles about infinity in recent days, until I finish reading that philosophy of infinity book The knapsack problem or rucksack problem is a problem in combinatorial optimization: Given a set of items, each with a weight and a value, determine the number of each item to include in a collection so that the total weight is less than or equal to a given limit and the total value is as large as possible. It derives its name from the problem faced by someone who is constrained by a fixed-size knapsack and must fill it with the most valuable items.The problem often arises in resource allocation where there are financial constraints and is studied in fields such as combinatorics, computer science... O great, given a transcendental $s$, computing $\min_P(|P(s)|)$ is a knapsack problem hmm... By the fundamental theorem of algebra, every complex polynomial $P$ can be expressed as: $$P(x) = \prod_{k=0}^n (x - \lambda_k)$$ If the coefficients of $P$ are natural numbers , then all $\lambda_k$ are algebraic Thus given $s$ transcendental, to minimise $|P(s)|$ will be given as follows: The first thing I think of with that particular one is to replace the $(1+z^2)$ with $z^2$. Though, this is just at a cursory glance, so it would be worth checking to make sure that such a replacement doesn't have any ugly corner cases. In number theory, a Liouville number is a real number x with the property that, for every positive integer n, there exist integers p and q with q > 1 and such that0<|x−pq|<1qn.{\displaystyle 0<\left|x-{\frac {p}... Do these still exist if the axiom of infinity is blown up? Hmmm... Under a finitist framework where only potential infinity in the form of natural induction exists, define the partial sum: $$\sum_{k=1}^M \frac{1}{b^{k!}}$$ The resulting partial sums for each M form a monotonically increasing sequence, which converges by ratio test therefore by induction, there exists some number $L$ that is the limit of the above partial sums. 
The proof of transcendence can then proceed as usual, thus transcendental numbers can be constructed in a finitist framework There's this theorem in Spivak's book of Calculus:Theorem 7Suppose that $f$ is continuous at $a$, and that $f'(x)$ exists for all $x$ in some interval containing $a$, except perhaps for $x=a$. Suppose, moreover, that $\lim_{x \to a} f'(x)$ exists. Then $f'(a)$ also exists, and$$f'... and neither Rolle nor the mean value theorem needs the axiom of choice Thus under finitism, we can construct at least one transcendental number. If we throw away all transcendental functions, it means we can construct a number that cannot be reached from any algebraic procedure Therefore, the conjecture is that actual infinity has a close relationship to transcendental numbers. Anything else, I need to finish that book to comment typo: neither Rolle nor the mean value theorem needs the axiom of choice nor an infinite set > are there palindromes such that the explosion of palindromes is a palindrome nonstop palindrome explosion palindrome prime square palindrome explosion palirome prime explosion explosion palindrome explosion cyclone cyclone cyclone hurricane palindrome explosion palindrome palindrome explosion explosion cyclone clyclonye clycone mathphile palirdlrome explosion rexplosion palirdrome expliarome explosion exploesion
I am aware that the minimum variance portfolio of a market with $n$ securities can be shown to be: \begin{equation} w^* = (1^T_n\Sigma^{-1}1_n)^{-1}\Sigma^{-1}1_n, \\ s.t. \ \ 1^T_nw = 1 \end{equation} by using the method of Lagrange multipliers or other. I am interested in a demonstration of the extension: \begin{equation} w^* = \underset{w}{\mathrm{argmin}}\lbrace w^T \Sigma w + \lambda\sum_{i=1}^n\rho(w_i)\rbrace\\ s.t. \ \ 1^T_nw = 1 \end{equation} where $\rho(.)$ is some arbitrary penalty function (e.g. $\lvert w_i\rvert$). Perhaps you could go through the process step by step as I am getting lost when I try. Thanks!
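Not a full derivation of the penalized problem, but a minimal numpy sketch of the closed-form part on a hypothetical 3-asset covariance matrix; the penalized objective is handed to a generic solver only as a sanity check, since $\rho(w_i)=\lvert w_i\rvert$ is non-smooth and really calls for a proximal or coordinate-descent method:

```python
import numpy as np
from scipy.optimize import minimize

def min_variance_weights(sigma):
    """Unpenalized solution: w* = Sigma^{-1} 1 / (1' Sigma^{-1} 1)."""
    ones = np.ones(sigma.shape[0])
    w = np.linalg.solve(sigma, ones)   # Sigma^{-1} 1 without an explicit inverse
    return w / (ones @ w)

# hypothetical 3-asset covariance matrix
sigma = np.array([[0.04, 0.01, 0.00],
                  [0.01, 0.09, 0.02],
                  [0.00, 0.02, 0.16]])
w = min_variance_weights(sigma)
print(w, w.sum())        # weights sum to 1
print(w @ sigma @ w)     # the minimized variance

# penalized version (rho = |.|), solved numerically under 1'w = 1
lam = 0.01
obj = lambda v: v @ sigma @ v + lam * np.abs(v).sum()
cons = {"type": "eq", "fun": lambda v: v.sum() - 1}
res = minimize(obj, w, constraints=cons)
print(res.x)             # shrunk toward sparser weights as lam grows
```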
Given a Turing machine $M$, we associate a partial function $f_M : \Sigma^{\ast} \to \Sigma^{\ast}$ to it (this is called the function computed by the machine), where $\Sigma$ denotes the finite input and output alphabet, defined as$$ f(u) = v :\Leftrightarrow \mbox{The machine halts on input $u$ with output $v$}.$$Then we say an arbitrary partial function $f : \Sigma^{\ast} \to \Sigma^{\ast}$ is called computable iff $f = f_M$ for some Turing machine $M$. Then we define a language $A \subseteq \Sigma^{\ast}$ to be recursively enumerable iff it is the domain of some computable function. Clearly with the above definition $\operatorname{dom}(f_M) = \{ w \in \Sigma^{\ast} \mid \mbox{The machine halts on input $w$.} \}$, i.e. this is equivalent to saying that a language is recursively enumerable iff we can find a machine that halts exactly for the words in the language. But in other sources I found the following definition: a language $A \subseteq \Sigma^{\ast}$ is recursively enumerable iff there exists a Turing machine such that $$ A = \{ w \in \Sigma^{\ast} \mid \mbox{The machine halts in an accepting state} \} $$ or $$ A = \{ w \in \Sigma^{\ast} \mid \mbox{The machine halts and outputs a specified output } \}. $$ Both notions, by special state or special output, are clearly equivalent. But they do not require the machine to run forever if $w \notin A$. This could be fixed by letting the machine enter an endless loop if it enters a non-accepting state after finishing its computation. But this seems quite unnatural to me. But I think a better definition, closer to the definition by accepting states or special output, but in terms of computable functions, would be to call a language $A$ recursively enumerable iff there exists a computable partial function $f : \Sigma^{\ast} \to \Sigma^{\ast}$ such that $A = f^{-1}(B)$ for some $B \subseteq \Sigma^{\ast}$. Surely with the above definition we can always enlarge $B$ by values not in the range of $f$, so $B$ is not unique, but it would be no problem that the machine halts on some other input $\notin B$ and we would have $A \subseteq \operatorname{dom}(f)$. So is this definition used anywhere, does it make sense? I could not find it, so I am asking here. Also nobody discusses the two definitions together; either in books they always work with Turing machines, or they work with primitive and $\mu$-recursive functions.
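The "fix" described above (diverge instead of rejecting) is literally a one-line wrapper. A sketch, with a hypothetical always-halting decider standing in for the Turing machine:

```python
def halt_iff_accept(machine):
    """Turn an accept/reject decider into a machine that halts exactly on A.

    `machine` is a stand-in for a Turing machine that always halts and
    signals acceptance by its return value (the 'accepting state' flavor).
    The wrapped function's domain is then exactly the accepted language.
    """
    def wrapped(w):
        if machine(w):
            return w      # halt (with some output) iff w is accepted
        while True:       # loop forever on rejection
            pass
    return wrapped

is_even_length = lambda w: len(w) % 2 == 0   # hypothetical decider
f = halt_iff_accept(is_even_length)
print(f("ab"))    # halts: "ab" is in the language
# f("abc") would never return: dom(f) is exactly the language
```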
In file pohex2.dat the nodes on the left-hand vertical edge have been given Dirichlet boundary fixities; the mesh has then been refined, and doubled by reflection to the right and upwards, and finally triangulated, to create an H-shaped region of width 4.0 and height 3.0, containing 384 linear triangles, with Dirichlet boundaries on the outer vertical sides. Part B data has been added, fixing the variable U at 0.0 on the left-hand Dirichlet boundary, and at 60.0 on the right-hand boundary. The other edges act as insulated boundaries. The program POISS.FOR solves the quasi-harmonic equation, using the coefficients a_x, a_y specified in the data file and with the right-hand-side function equal to 1.0. The result of running POISS.FOR with pohex2.dat as input is provided in the output file pohex2.out, which can be viewed using FELVUE. The file pohex3.dat is produced from pohex2.dat by adding a radiating boundary on the lower edge of the crossbar of the H shape, with parameters QBAR=0.0 and ALPHA=4.0. The effect of this can be seen in the contour plot of U, and the flowlines, in pohex3.out. An example mesh involving fewer than 50 nodes and 50 elements, which can be run in the demonstration version of POISS.EXE, is provided in podemo.dat. This models only the lower half of the H-shaped region in pohex2.dat, with a coarser mesh. Because the top boundary is insulated, the same problem is being solved as with the full H-shaped mesh. In elcyl4.dat the mesh has been refined to 100 elements, and a compressive loading applied to the outer edges of the outermost row of elements. The top element (adjoining the vertical boundary) has an applied pressure of 35 KN/m^2 (traction set 2); the neighbouring element has a `ramp' loading which drops from 35 KN/m^2 to 30 KN/m^2 (traction set 1), and then the next elements in have a uniform pressure of 30 KN/m^2. The ramp loading up to 35 KN/m^2 is reproduced at the horizontal boundary. ELAST.EXE should be run with this datafile, using the plane strain option at run-time. The resulting displacements and stresses can be viewed in FELVUE in file elcyl4.out. The data-file eldemo.dat is a coarse mesh (just 12 elements) for a thick cylinder, inner radius 3.25, outer radius 10.0, elastic parameters E=20,000 and Poisson ratio = 0.3, subject to a uniform load of 10 KN/m^2 on the outer rim. This can be run with the demonstration version of ELAST.EXE, again in plane strain. If the resulting stress contours are plotted in FELVUE, a small non-uniformity will be seen around the cylinder, due to the coarseness of the discretization. Remember that FELIPE does not apply any stress-smoothing, and contours are plotted element-by-element, so that inaccuracies caused by an insufficiently fine discretization are not disguised. The file eldemaxi.dat is a model of the same problem using an axisymmetric mesh; if this is run with axisymmetric analysis (choice 3), displacements of the inner and outer circumference identical to those in eldemo will be obtained. We model only the left half of the pipe cross-section, since in practice the right half will not exert any influence on the bracket; it will become detached from the plastic, which we cannot cope with in our program. We apply a uniform load of 10^5 KN/m^2 (equivalent to 10 KN per square centimetre, as we are working in cm), pushing leftwards on the centreline of the pipe.
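As a cross-check on the eldemo thick-cylinder results (not part of the FELIPE documentation itself), this problem has the closed-form Lame solution for a cylinder loaded by external pressure only; a sketch with the eldemo.dat geometry, against which the FELVUE stress contours can be compared:

    # Lame solution for a thick cylinder under external pressure only, using
    # the eldemo.dat geometry: a=3.25, b=10.0, p_outer=10 KN/m^2.
    # sigma_r vanishes at r=a and equals -p_outer at r=b.
    import numpy as np

    a, b, p_o = 3.25, 10.0, 10.0
    r = np.linspace(a, b, 5)
    sigma_r     = -p_o * b**2 * (r**2 - a**2) / (r**2 * (b**2 - a**2))
    sigma_theta = -p_o * b**2 * (r**2 + a**2) / (r**2 * (b**2 - a**2))
    for ri, sr, st in zip(r, sigma_r, sigma_theta):
        print(f"r={ri:6.3f}  sigma_r={sr:8.3f}  sigma_theta={st:8.3f}")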
The material properties used are:
For the pipe (material set 2): E=10^5, \nu=0.4
For the bracket (material set 1): E=2,500, \nu=0.2
The material thickness is set at 1.0 in the 3rd component of each set. A tensile strength of 10 is also specified for the bracket, in the 4th component. The material type (LMTYP) facility is also used: the bracket and pipe elements are assigned material types 1 and 2 respectively. The resulting deformations and stresses of the bracket are best seen in FELVUE if you choose to plot only those elements with material type 1. The stress concentration is seen to be around the rear of the pipe section (use the Zoom facility to examine this in more detail), and if the ``Yield Zones'' plotting option (choosing yield code 1) is used, you will see the Gauss points at which the tensile strength has been exceeded. This is largely due to the plastic flowing in behind the pipe, which is not realistic: a better mesh would include a `wing' of pipe extending part-way around the rear of the half-pipe section. Note that this is a hypothetical example, and the material property values bear no resemblance to those of real materials.

The second example, in framexb1.dat, is also based on a problem in Buchanan (1994), and Imperial units (inches, p.s.i.) are used. It consists of a three-beam frame, pin-jointed at its right-hand end and free to roll in the horizontal plane at the other, loaded with a uniform pressure on its middle member, and with a point-load acting downwards at the mid-point of the left-hand beam. The PCG iteration converges (to a tolerance of 10^{-5}) after 19 iterations. The results can be viewed in FELVUE in the same way as with the previous example. In FELVUE, the deformed beam, together with the deformed boundary of the block, can be seen using the ``View 1D elts'' option with an exaggeration factor of 1.0. The ``Displacements'' option displays the deformation of the block itself, without the 1D elements. A mesh with more refinement would produce better results for stress contours, although the plot of Gauss point stress crosses is reasonable; principal stresses drawn in blue are tensile. Again, a finer discretization would produce more meaningful stress contours.

In eladcv2.dat the mesh has been refined, and the excavation boundary around the cavern edge defined. A uniform stress field of \sigma_x = 20, \sigma_y = 40 is stipulated for all the elements, which have been assigned material type 2 as required in ELADV. A further feature is that a small 6-noded triangular element has been added after refinement to ``cut off the corner'' at the bottom of the cavern. The ``add elements'' feature in ``modify mesh'' was used, and by leaving blank the coordinates of the new node (on the midside of the hypotenuse), these were linearly interpolated by the pre-processor. The node was then moved using the ``change coordinates'' option, to produce a rounded profile; this greatly improves the accuracy of the stress field around the corner singularity. The deformations and stress field resulting from the unloading around the cavern boundary can be seen by viewing eladcv2.out. A realistic swelling of the cavern roof and floor, where the areas of greatest deviator stress occur, is noted. (The initial exaggeration factor is too large; a factor of 20 should be tried, in conjunction with the Zoom facility, to see the deformation around the cavern wall.) If running ELADV with this example, notice that the equation solution is performed by the frontal algorithm FRONT.
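To give a feel for what the PCG solver is doing at each solution, here is a textbook diagonally-preconditioned conjugate gradient iteration on a toy symmetric positive-definite system; this is a generic sketch of the algorithm class, not FELIPE's FORTRAN implementation:

    # Generic diagonally-preconditioned conjugate gradient (PCG) on a small
    # SPD system K u = f; the Jacobi preconditioner is just 1/diag(K).
    import numpy as np

    def pcg(K, f, tol=1e-5, max_iter=1000):
        M_inv = 1.0 / np.diag(K)              # Jacobi (diagonal) preconditioner
        u = np.zeros_like(f)
        r = f - K @ u
        z = M_inv * r
        p = z.copy()
        for it in range(max_iter):
            Kp = K @ p
            alpha = (r @ z) / (p @ Kp)
            u = u + alpha * p
            r_new = r - alpha * Kp
            if np.linalg.norm(r_new) <= tol * np.linalg.norm(f):
                return u, it + 1
            z_new = M_inv * r_new
            beta = (r_new @ z_new) / (r @ z)
            p = z_new + beta * p
            r, z = r_new, z_new
        return u, max_iter

    K = np.array([[4.0, 1.0], [1.0, 3.0]])
    f = np.array([1.0, 2.0])
    u, iters = pcg(K, f)
    print(u, iters)                            # converges in a few iterations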
This datafile can also be run as an axisymmetric analysis, to model a cylindrical, dome-roofed cavern. The hoop stress contours can be plotted as sigma_z in this case.

The file vpcyl4.dat is an extension of elcyl4.dat for the viscoplasticity program VPLAS.EXE. Here, in addition to the material parameters k and \sigma_c given above for PLAST, the flow parameter \gamma = 0.01 and the dilation parameter l are defined in material property components 5,6; in this case l=1.0, i.e. a non-associated zero-dilation flow rule. VPLAS.FOR uses the frontal algorithm subroutine AFRONT, adapted for nonsymmetric matrices, in this solution. The loading is similar to that in elcyl4.dat, with 25 KN/m^2 around the outer boundary, rising to 35 KN/m^2 at each end; the viscoplasticity algorithm involving timestepping is much more stable than the plasticity algorithm involving iteration, especially if a non-associated flow rule is used, and it converges happily even with the non-uniform loading. In addition to an incremental loading regime with 5 load increments, a timestepping regime is defined in the last line of the data set. The Crank-Nicolson time discretisation parameter \theta = 0.5 has been used. Compare the stresses, displacements and yield zones output in vpcyl4.out with those from the elasticity and elasto-plasticity analyses.

The advanced plasticity program PLADV.FOR can also be run with the datafile plcyl4.dat. Results will be identical to those from PLAST.FOR, but can be obtained using a selection of different solvers. If the preconditioned conjugate gradient solver (ISOLA=2) with diagonal preconditioning is used, each solution takes about 180 PCG iterations (for a convergence tolerance of 10^{-6}). If the same convergence tolerance is used with the Incomplete Cholesky preconditioner (ISOLA=3), the performance depends on the amount of fill-in which occurs. This is controlled by the fill-in factor, which is also specified at run-time. If a factor of 0.1 is chosen, none of the 47,052 `holes' within the envelope of K are filled in, and convergence takes 56 iterations. If a factor of 10^{-4} is chosen, then about 2,500 `holes' get filled in, and the iteration speeds up, needing 24 iterations to converge. With a fill-in factor of 10^{-6}, about 50% of the `holes' (some 25,000) are filled in, and only 5 iterations are needed to converge.

The fin is modelled with meshes of 4-noded linear and 8-noded quadratic quadrilateral elements, in datafiles thermln2.dat and thermqd2.dat respectively. Examining the displacements in FELVUE, you will see how the fin has expanded under this thermal loading (choose an exaggeration factor of 1,000 to get a proper view), and you can also view the distribution of tensile stresses generated in the metal. Proceeding to the d.o.f. plots, a plot of d.o.f. 3 will display the temperature field in the fin. As mentioned in Chapter 7, the results using quadratic elements are good, but in the results from the mesh of linear elements the well-known `hourglass' instability is very evident. To avoid this instability, a two-level discretization should be used, with quadratic discretization of the displacement d.o.f.s but only a bilinear interpolation for the temperature, using values at corner nodes. This type of element is implemented for the soil consolidation application CONSL.FOR. In the refined mesh in file conslex3.dat the ``modify mesh'' option has been used to write the datafile at Advanced level and to add an extra degree of freedom at corner nodes only, for elements with material type 2.
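An aside on the theta time-discretisation parameter used above for VPLAS (theta=0.5, Crank-Nicolson) and again below for CONSL (theta=1.0, fully implicit): the idea is easy to see in isolation on the scalar test equation du/dt = -k u. The FE programs apply the same discretisation to the full matrix system, so this is only a conceptual sketch, not FELIPE code:

    # Theta-method sketch on du/dt = -k*u: theta=0 is explicit Euler,
    # theta=1 is fully implicit, theta=0.5 is Crank-Nicolson.
    def theta_step(u, k, dt, theta=0.5):
        # Discretise: (u_new - u)/dt = -k*(theta*u_new + (1-theta)*u)
        return u * (1.0 - (1.0 - theta) * dt * k) / (1.0 + theta * dt * k)

    u, k, dt = 1.0, 2.0, 0.1
    for n in range(5):
        u = theta_step(u, k, dt)
        print(f"t={(n + 1) * dt:.1f}  u={u:.5f}")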
Note that corner nodes on the interface between block and soil elements have the third d.o.f., but the CONSL program ignores this when processing the stiffness matrix for the block element (which has material type 1). This extra d.o.f. -- the pore pressure -- has been fixed at zero on the free surface y=0, except at nodes underneath the impermeable concrete block; the fixity code has been used for this. A uniform normal surface traction of 10 KN/m^2 is applied to the top of the concrete block. The timestepping regime involves making an initial solution with timestep dtone=0.0, the results of which are written to file. Then five timesteps will be made with a timestep of dtint=1.0, after which the timestep will be doubled (ftime=2.0) and a further five steps made, and so on. Timestepping will halt when ttend=100.0 is reached, unless the user has chosen to halt earlier. A fully implicit timestepping algorithm is used (theta=1.0). In order to judge how consolidation is progressing, the user can enter at run-time the numbers of five nodes at which displacement/pressure results will be written to screen at each timestep. The nodes to be used should be chosen by viewing the mesh of conslex3.dat in PREFEL and using the ``Pick node by mouse'' facility (you will need to Zoom first!). In the present mesh, suggested nodes are:
Node 151 (on the centreline, on the top surface of the block)
Node 145 (on the centreline, on the interface between block and soil)
Node 185 (on the surface at one corner of the block)
Node 139 (on the centreline, at a soil depth of 0.1 m)
Node 137 (on the centreline, at a soil depth of 0.3 m)
If CONSL.EXE is run with datafile conslex3.dat, it will be seen that after 15 timesteps, when t=35.0, consolidation is virtually complete, and the timestepping may be ended. The program appends the results after each set of 5 timesteps to the file conslex3.out. Running FELVUE with this file, first the undrained deformation is displayed (an exaggeration factor of 10 for the displacements is appropriate; note that the soil elements close to the footing bulge unrealistically -- a finer refinement is needed in this part of the mesh). You can see the undrained pore pressure distribution in the soil by proceeding to the d.o.f. plots, and plotting d.o.f. 3 for material type 2 (soil elements). Remember to answer `y' to the question of whether this d.o.f. exists only at corner nodes. Again, it is necessary to zoom to see the excess pore pressures under the footing in detail. (A consequence of the bulging is that a zone of negative pore pressure arises in these elements.) Proceeding past the option of printing PostScript files, you are invited to proceed to the next result set. This will be the solution at t=5.0. View the displacements, stresses and pore pressures as before. Repeat the process for the results at t=15.0 and t=35.0.
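The timestep schedule described above is simple enough to sketch (illustrative only, not CONSL code); note that it reaches t=35.0 after exactly 15 steps, which is the point at which consolidation is observed to be virtually complete:

    # Sketch of the CONSL timestep schedule: five steps at dtint, then the
    # step is multiplied by ftime after each block of five, until ttend is
    # reached (the final step may overshoot ttend slightly).
    dtint, ftime, ttend = 1.0, 2.0, 100.0
    t, dt, times = 0.0, dtint, []
    while t < ttend:
        for _ in range(5):
            t += dt
            times.append(round(t, 6))
            if t >= ttend:
                break
        dt *= ftime
    print(times)   # 1..5, then 7..15 by 2, then 19..35 by 4 (step 15 is t=35.0), ...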
Question: Find the centroid of the thin plate bounded by the graphs of $f(x) = x^{2}$ and $g(x) = x + 6$.

Centroid of a Region Bounded by Two Functions

To find the centroid of a region, integration can be applied. If a region $R$ lies between two curves defined by $y = f(x)$ and $y = g(x)$ with $f(x)\geq g(x)$, then the centroid of the region is the point $(\bar{x},\bar{y})$, where $$\begin{align*} \bar{x}&=\frac{1}{A}\int_{a}^{b}x[f(x)-g(x)]\,dx\\ \bar{y}&=\frac{1}{2A}\int_{a}^{b}\left([f(x)]^{2}-[g(x)]^{2}\right)dx \end{align*} $$ Here $A$ is the area of the region bounded by the two curves $f(x)$ and $g(x)$, given by the formula $$A=\int_{a}^{b}[f(x)-g(x)]\,dx $$

Answer and Explanation:

The first step is to find the area $A$ of the region by applying the formula to the functions. We have the curves $y = x+6$ and $y = x^2$. We first find the limits of integration by equating them: $$\begin{align*} x+6&=x^2\\ x^2-x-6&=0\\ (x-3)(x+2)&=0\\ x&=-2,\ 3 \end{align*} $$ so the region runs over $-2 \leq x \leq 3$. Before we evaluate the area, we must determine which function bounds the region from above and which from below. To do this, we can substitute an arbitrary value within the interval into each function; the function that gives the larger value is the upper one. Using $x = 0$: $$\begin{align*} f(0) &= 0^2 = 0\\ g(0) &= 0 + 6 = 6 \end{align*} $$ Since $g(0) > f(0)$, $g(x)$ bounds the region from above and $f(x)$ from below. This means that in the formula for the area we subtract $f(x)$ from $g(x)$. We can now evaluate the area: $$\begin{align*} A&=\int_{-2}^{3}[x+6-x^2]\,dx\\ A&=\left. \frac{x^2}{2}+6x-\frac{x^3}{3} \right|_{-2}^{3} \\ A&=\frac{5}{2}+30-\frac{35}{3} \\ A&=\frac{125}{6} \end{align*} $$ We can now find the coordinates of the centroid, with the roles of upper and lower functions swapped accordingly in the formulas. We start with the x-coordinate: $$\begin{align*} \bar{x}&=\frac{1}{A}\int_{a}^{b}x[g(x)-f(x)]\,dx\\ \bar{x}&=\frac{6}{125}\int_{-2}^{3}x(x+6-x^2)\,dx\\ \bar{x}&=\frac{6}{125}\int_{-2}^{3}(x^2+6x-x^3)\,dx\\ \bar{x}&=\frac{6}{125}\left. \frac{x^3}{3}+3x^2-\frac{x^4}{4} \right|_{-2}^{3}\\ \bar{x}&=\frac{6}{125} \left(\frac{35}{3}+15-\frac{65}{4} \right)\\ \bar{x}&=\frac{6}{125}\times \frac{125}{12}\\ \bar{x}&=\frac{1}{2} \end{align*} $$ Next, we evaluate the y-coordinate of the centroid: $$\begin{align*} \bar{y}&=\frac{1}{2A}\int_{a}^{b}\left([g(x)]^{2}-[f(x)]^{2}\right)dx\\ \bar{y}&=\frac{3}{125}\int_{-2}^{3}[ (x+6)^2-x^4 ]\,dx\\ \bar{y}&=\frac{3}{125}\int_{-2}^{3}(x^2+12x+36-x^4)\,dx\\ \bar{y}&=\frac{3}{125} \left. \left(\frac{x^3}{3}+6x^2+36x-\frac{x^5}{5} \right) \right|_{-2}^{3}\\ \bar{y}&=4 \end{align*} $$ Hence, the centroid of the region bounded by the curves is $\left(\frac{1}{2}, 4 \right)$.
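As a quick independent check (not part of the original solution), the whole computation can be verified symbolically with sympy:

    # Symbolic verification of the centroid computation above.
    import sympy as sp

    x = sp.symbols('x')
    f, g = x**2, x + 6                       # lower and upper curves
    a, b = -2, 3
    A = sp.integrate(g - f, (x, a, b))       # area: 125/6
    xbar = sp.integrate(x * (g - f), (x, a, b)) / A
    ybar = sp.integrate((g**2 - f**2) / 2, (x, a, b)) / A
    print(A, xbar, ybar)                     # 125/6, 1/2, 4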
In a comment elsewhere you write that you're interested in understanding how quantum-mechanical theory describes the radiation that a hydrogen atom does and does not emit. In your question you ask about another answer that suggests some significance to the electron having zero total momentum; I think that's a feature of the coordinate-system choice rather than something physically interesting. Here's a second answer to hopefully address that concern.

In Schrödinger's quantum mechanics the wavefunction $\psi$, whose squared modulus gives the probability density for finding the electron in some small volume near the nucleus (charge $Z$, mass $m_\text{nuc}$ with $m_\text{nuc}^{-1} = \mu^{-1} - m_e^{-1}$, which defines the reduced mass $\mu$), obeys the differential equation$$\left( -\frac{\hbar^2}{2\mu} \vec\nabla^2 - \frac{Z\alpha\hbar c}{r}\right) \psi = E\psi.\tag 1$$It turns out that this equation has bound solutions with $E<0$ if, and only if, you introduce some integer parameters $n,\ell,m$ subject to some constraints: $1\leq n$, $\ell<n$, and $|m|\leq \ell$. The energies associated with these quantum numbers are$$E_{n\ell m} = -\frac{\mu c^2\alpha^2Z^2}{2n^2} = Z^2 \cdot \frac{-13.6\rm\,eV}{n^2}.\tag 2$$Critically for our discussion, this means that there is a state with $n=1$ that has the minimum possible energy for an electron interacting with a proton. This is totally different from the unbound case, or the interaction between two like-charged particles, in which you can give your mobile particle any (positive) total energy that you like and inquire about its motion. If the total energy doesn't satisfy (2), it's simply impossible for the system to obey the equation of motion (1).

You compute transition rates in quantum mechanics using Fermi's Golden Rule: a system prepared in an initial state $i$ survives against decay to a final state $f$ for a time $\tau_{if} = 1/\lambda_{if}$ with probability $1/e$, where the decay constant $\lambda_{if}$ is$$\lambda_{if} = \frac{2\pi}\hbar\left| M_{if} \right|^2\rho_f.$$The density of final states $\rho_f$ is interesting if there are multiple final states with the same energy. (For instance, in hydrogen there are generally several degenerate final states with given $n,\ell$ but varying $m$.) The matrix element measures the overlap of the initial and final states given some interaction operator $U$:$$M_{if} = \int d^3x\ \psi_f^* U \psi_i$$For electric dipole radiation the operator is $U_{E1} = e\vec r$; for magnetic dipole radiation, $U_{M1} = e\vec L/(2\mu)$; for quadrupole etc. radiation there are other operators. You could also couple to multiple photons: for instance, the $n=2,\ell=0$ state cannot decay to the ground state by emitting a single photon, since the photon carries angular momentum, but can decay by emitting two dipole photons at the same time. This forbidden transition has lifetime $\sim 0.1\rm\,s$, compared with nanoseconds for the $n=2,\ell=1$ states at the same energy. Computing matrix elements gives you some hairy integrals, so generally you let someone else do them.

You can in principle use these arguments and the Golden Rule to calculate the radiation emitted in three cases:

1. From a free electron with $E_i>0$ to a free electron traveling in a different direction with a different energy $E_f>0$. This should give a result most similar to the classical case, where you can get continuous radiation from an accelerating charge.

2. From a free electron with $E_i>0$ transitioning to a bound electron with $E_f<0$.

3. From one bound electron state to another.

It's this final option, transitions between bound states, that interests you.
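A tiny numeric illustration of equation (2) for hydrogen ($Z=1$), just to make the energy scale concrete (illustrative, not part of the original answer):

    # Hydrogen bound-state energies from equation (2), Z=1, in eV, plus the
    # photon energy for the n=2 -> n=1 (Lyman-alpha) transition.
    RYDBERG_EV = 13.6057   # mu c^2 alpha^2 / 2 for hydrogen

    def E(n, Z=1):
        return -Z**2 * RYDBERG_EV / n**2

    for n in range(1, 5):
        print(f"n={n}: E = {E(n):8.4f} eV")
    print(f"n=2 -> n=1 photon: {E(2) - E(1):.4f} eV")   # about 10.2 eV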
The salient feature, unique to quantum mechanics, is that the energies of the bound states are quantized. Unlike in classical mechanics, in quantum theory the equation of motion has no solutions with $E<E_1$. Even if you made up some trial sub-ground-state wavefunction to compute the matrix element for the transition (which can't be done, since the existing wavefunctions form a complete set), you'd find that the density of states at your hypothetical lower energy is $\rho_f=0$, so the time before the transition occurs is, on average, infinitely long. The classical theory predicts radiation when a charge accelerates from one continuum momentum to another. So does the quantum theory. But the quantum theory also predicts bound states with quantized energies. Non-transitions from a state to itself have zero matrix element, and therefore never occur; transitions from one state to another can only occur if there's a final state available.
Codeforces Round #553 (Div. 2)

A girl named Sonya is studying in the scientific lyceum of the Kingdom of Kremland. The teacher of computer science (Sonya's favorite subject!) invented a task for her. Given an array $$$a$$$ of length $$$n$$$, consisting only of the numbers $$$0$$$ and $$$1$$$, and the number $$$k$$$. Exactly $$$k$$$ times the following happens: two indices $$$i$$$ and $$$j$$$ ($$$1 \le i < j \le n$$$) are chosen uniformly at random, and the elements $$$a_i$$$ and $$$a_j$$$ are swapped. Sonya's task is to find the probability that after all the operations are completed, the $$$a$$$ array will be sorted in non-decreasing order. She turned to you for help. Help Sonya solve this problem. It can be shown that the desired probability is either $$$0$$$ or it can be represented as $$$\dfrac{P}{Q}$$$, where $$$P$$$ and $$$Q$$$ are coprime integers and $$$Q \not\equiv 0~\pmod {10^9+7}$$$.

The first line contains two integers $$$n$$$ and $$$k$$$ ($$$2 \leq n \leq 100, 1 \leq k \leq 10^9$$$) — the length of the array $$$a$$$ and the number of operations. The second line contains $$$n$$$ integers $$$a_1, a_2, \ldots, a_n$$$ ($$$0 \le a_i \le 1$$$) — the description of the array $$$a$$$. If the desired probability is $$$0$$$, print $$$0$$$, otherwise print the value $$$P \cdot Q^{-1}$$$ $$$\pmod {10^9+7}$$$, where $$$P$$$ and $$$Q$$$ are defined above.

Input
3 2
0 1 0
Output
333333336

Input
5 1
1 1 1 0 0
Output
0

Input
6 4
1 0 0 1 1 0
Output
968493834

In the first example, all possible variants of the final array $$$a$$$, after applying exactly two operations, are: $$$(0, 1, 0)$$$, $$$(0, 0, 1)$$$, $$$(1, 0, 0)$$$, $$$(1, 0, 0)$$$, $$$(0, 1, 0)$$$, $$$(0, 0, 1)$$$, $$$(0, 0, 1)$$$, $$$(1, 0, 0)$$$, $$$(0, 1, 0)$$$. Therefore, the answer is $$$\dfrac{3}{9}=\dfrac{1}{3}$$$. In the second example, the array will not be sorted in non-decreasing order after one operation, therefore the answer is $$$0$$$.
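For small $$$n$$$ and $$$k$$$ the samples can be checked by brute force, enumerating every equally likely sequence of swaps (a verification sketch only, not a solution within the stated constraints, since $$$k$$$ can be $$$10^9$$$):

    # Brute-force check for small n, k: enumerate all k-long sequences of
    # swaps (each of the C(n,2) index pairs equally likely per step) and
    # count the outcomes that end up sorted.
    from itertools import combinations, product
    from fractions import Fraction

    def sorted_probability(a, k):
        pairs = list(combinations(range(len(a)), 2))
        good = total = 0
        for seq in product(pairs, repeat=k):
            b = list(a)
            for i, j in seq:
                b[i], b[j] = b[j], b[i]
            total += 1
            good += (b == sorted(b))
        return Fraction(good, total)

    print(sorted_probability([0, 1, 0], 2))   # 1/3; 1/3 mod 1e9+7 = 333333336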
Welcome to The Riddler. Every week, I offer up problems related to the things we hold dear around here: math, logic and probability. There are two types: Riddler Express for those of you who want something bite-sized and Riddler Classic for those of you in the slow-puzzle movement. Submit a correct answer for either, and you may get a shoutout in next week’s column. If you need a hint, or if you have a favorite puzzle collecting dust in your attic, find me on Twitter.

Riddler Express

From Alex Vornsand, a puzzle for your daily routine: You take half of a vitamin every morning. The vitamins are sold in a bottle of 100 (whole) tablets, so at first you have to cut the tablets in half. Every day you randomly pull one thing from the bottle — if it’s a whole tablet, you cut it in half and put the leftover half back in the bottle. If it’s a half-tablet, you take the vitamin. You just bought a fresh bottle. How many days, on average, will it be before you pull a half-tablet out of the bottle? Extra credit: What if the halves are less likely to come up than the full tablets? They are smaller, after all.

Riddler Classic

From Mikael Rittri, a mathematical souvenir problem: In the Riddler gift shop, we sell interesting geometric shapes of all sizes — Platonic solids, Archimedean solids, Klein bottles, Gabriel’s horns, you name it — at very fair prices. We want to create a new gift for fall, and we have a lot of spheres, of radius 1, left over from last year’s fidget sphere craze, and we’d like to sell them in sets of four. We also have a lot of extra tetrahedral packaging from last month’s Pyramid Fest. What’s the smallest tetrahedron into which we can pack four spheres?

Solution to last week’s Riddler Express

Congratulations to 👏 Remy Cossé 👏 of Philadelphia, winner of last week’s Express puzzle! A mysterious figure emerges from the shadows and hands you a note with the following list of six numbers: 1, 11, 21, 1,211, 111,221 and 312,211. He wants to know one thing: What number comes next? It’s 13,112,221. This sequence is sometimes called the “say what you see sequence.” You start with a single number — 1 — and then to get the next number you say what you see. First, you see one 1, so you write down 11. Then, you see two 1s, so you write down 21. Then you see one 2 and one 1, so you write down 1,211. To get the number requested by the shadowy figure, you say what you see one last time: one 3, one 1, two 2s and two 1s, or 13,112,221.

Solution to last week’s Riddler Classic

Congratulations to 👏 Cody Couture 👏 of Irvine, California, winner of last week’s Classic puzzle! Take a look at this string of numbers: 333 2 333 2 333 2 33 2 333 2 333 2 333 2 33 2 333 2 333 2 … At first it looks like someone fell asleep on a keyboard. But there’s an inner logic to the sequence: It creates itself, number by number. Each digit refers to the number of consecutive 3s that appear before a certain 2. Specifically, the first digit refers to the number of consecutive 3s that appear before the first 2, the second digit refers to the number of 3s that appear consecutively before the second 2, and so on toward infinity. The sequence never ends, but that won’t stop us from asking questions about it. What is the ratio of 3s to 2s in the entire sequence? It’s \(1+\sqrt{3}\), or about 2.73-to-1. Solver Adam Palay approached the problem by thinking of the numbers in terms of “generations.” A first-generation 3 spawns a second-generation 3332, for example, while a first-generation 2 spawns a second-generation 332.
With that in mind, Palay explained, we can set up some equations. Let each generation be denoted by a number n, and let “threes” and “twos” be the number of 3s and 2s in that generation. Those numbers evolve like this: \begin{equation*}threes_{n+1} = 3\cdot threes_n + 2 \cdot twos_n\end{equation*} \begin{equation*}twos_{n+1} = threes_n + twos_n\end{equation*} So now we need to get from that evolution to the overall ratio. Call that ratio r. \begin{align*} r_{n+1} &= \frac{threes_{n+1}}{twos_{n+1}} = \frac{3\cdot threes_n + 2 \cdot twos_n}{threes_n +twos_n} \\&= 2 + \frac{threes_n}{threes_n+twos_n}\\&= 2 + \frac{1}{1+\frac{twos_n}{threes_n}}\\&= 2 + \frac{1}{1+\frac{1}{\frac{threes_n}{twos_n}}} \end{align*} And \(\frac{threes_n}{twos_n}\) is exactly \(r_n\). So we arrive at the following: \begin{equation*}r_{n+1} = 2 + \frac{1}{1+\frac{1}{r_n}}\end{equation*} We can also guess that as the sequence grows and many, many generations are spawned, \(r_n\) and \(r_{n+1}\) converge to the same number — the single ratio the puzzle was asking for. (Hector Pefo provided some more explanation of this part of the solution.) So all that’s left is to solve the following for r: \begin{equation*}r = 2 + \frac{1}{1+\frac{1}{r}}\end{equation*} A little algebra gives the solution: \(r = 1 + \sqrt{3}\). Solver Luke Benz showed empirically how quickly the ratio of 3s to 2s converges to about 2.73. Want to submit a riddle? Email me at [email protected].
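That empirical check is easy to reproduce: generate the self-describing sequence directly and measure the ratio (a quick sketch, not Benz's original code):

    # Generate the self-describing 3s-and-2s sequence and compare the ratio
    # of 3s to 2s with 1 + sqrt(3). Each digit gives the length of a run of
    # 3s that is followed by a single 2.
    from math import sqrt

    seq = [3, 3, 3, 2]          # first block: three 3s then a 2 (seq[0] = 3)
    i = 1                       # next digit whose block we must append
    while len(seq) < 100_000:
        seq.extend([3] * seq[i] + [2])
        i += 1
    print(seq.count(3) / seq.count(2))   # ~2.732...
    print(1 + sqrt(3))                   # 2.7320508075688772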