text
stringlengths 256
16.4k
|
---|
Upper semicontinuity of global attractors for the perturbed viscous Cahn-Hilliard equations
DOI: http://dx.doi.org/10.12775/TMNA.2008.051
Abstract
It is known that the semigroup generated by the initial-boundary
value problem for the perturbed viscous Cahn-Hilliard equation with $\varepsilon> 0$ as a parameter admits a global attractor $\mathcal{A}_{\varepsilon}$ in the phase space $X^{{1}/{2}} =(H^2(\Omega)\cap H^{1}_{0}(\Omega))\times L^2(\Omega)$, $\Omega\subset \mathbb{R}^n$, $n\leq 3$ (see [M. B. Kania, < i> Global attractor for the perturbed viscous Cahn-Hilliard equation< /i> , Colloq. Math. < b> 109< /b> (2007), 217-229]). In this paper we show that the family $\{\mathcal{A}_{\varepsilon}\}_{\varepsilon\in[0,1]}$ is upper semicontinuous at $0$, which means that the Hausdorff semidistance $$ d_{X^{{1}/{2}}}(\mathcal{A}_{\varepsilon},\mathcal{A}_0)\equiv \sup_{\psi\in \mathcal{A}_{\varepsilon}}\inf_{\phi\in\mathcal{A}_{0}}\| \psi-\phi\|_{X^{{1}/{2}}}, $$ tends to 0 as $\varepsilon\to 0^{+}$.
value problem for the perturbed viscous Cahn-Hilliard equation with
$\varepsilon> 0$ as a parameter
admits a global attractor $\mathcal{A}_{\varepsilon}$ in the phase
space $X^{{1}/{2}} =(H^2(\Omega)\cap H^{1}_{0}(\Omega))\times L^2(\Omega)$,
$\Omega\subset \mathbb{R}^n$, $n\leq 3$ (see [M. B. Kania,
< i> Global attractor for the perturbed viscous Cahn-Hilliard equation< /i> , Colloq.
Math. < b> 109< /b> (2007), 217-229]). In this paper
we show that the family $\{\mathcal{A}_{\varepsilon}\}_{\varepsilon\in[0,1]}$
is upper semicontinuous at $0$, which means that the Hausdorff semidistance
$$
d_{X^{{1}/{2}}}(\mathcal{A}_{\varepsilon},\mathcal{A}_0)\equiv
\sup_{\psi\in
\mathcal{A}_{\varepsilon}}\inf_{\phi\in\mathcal{A}_{0}}\|
\psi-\phi\|_{X^{{1}/{2}}},
$$
tends to 0 as $\varepsilon\to 0^{+}$.
Keywords
Perturbed viscous Cahn-Hilliard equation; global attractor; upper semicontinuity
Full Text:FULL TEXT Refbacks There are currently no refbacks. |
$$ \sum_{n=1}^\infty \frac{1}{n^2 3^n} $$ I tried to use the regular way to calculate the sum of a power series $(x=1/3)$ to solve it but in the end I get to an integral I can't calculate.
Thanks
Mathematics Stack Exchange is a question and answer site for people studying math at any level and professionals in related fields. It only takes a minute to sign up.Sign up to join this community
Given
$$ \sum_{n=1}^\infty \frac{1}{n^2 3^n}. \tag 1 $$
Write
$$ S(x) = \sum_{n=1}^\infty \frac{\exp( x n)}{n^2}. \tag 2 $$
So we get
$$ \sum_{n=1}^\infty \frac{1}{n^2 3^n} = S(-\ln(a)). \tag 3 $$
Note that
$$ \frac{d^2 S}{dx^2} = \sum_{n=1}^\infty \exp( x n) = \frac{\exp(x)}{1 - \exp(x)}. \tag 4 $$
Then
$$ S(x) = \int dx \int dx \frac{\exp(x)}{1 - \exp(x)} = - \int dx \ln(1 - \exp(x)) = \operatorname{Li}_2( \exp(x) ). \tag 5 $$
Thus
$$ \sum_{n=1}^\infty \frac{\exp( x n)}{n^2} = \operatorname{Li}_2( \exp(x) ). \tag 6 $$
Put in $x = -\ln(3)$ and we get
$$ \bbox[16px,border:2px solid #800000] { \sum_{n=1}^\infty \frac{1}{n^2 3^n} = \operatorname{Li}_2(1/3).} \tag 7 $$
I suppose that you get to an integral of $\frac{ln(1-x)}{x}$.
This cannot be expressed with a finite number of elementary function. In fact, the sum is the series definition of a special function called "dilogarithm" which belongs to the family of "polylogarithms". $$ \sum_{n=1}^\infty \frac{x^n}{n^2}=Li_2(x) $$ The integral definition of dilogarithm is : $$ Li_2(x)=-\int_0^x \frac{\ln(1-t)}{t}dt $$ In case of $x=1/3$ : $$ \sum_{n=1}^\infty \frac{1}{n^2 3^n}=Li_2(1/3)=0.366213... $$ For the meaning and the use of special functions, for example see :
A more general paper :
$$ \begin{aligned} I \ := \sum_{n=1}^{\infty} \dfrac{x^n}{n^2} & \implies \dfrac{\text{d}I}{\text{d}x} = \sum_{n=1}^{\infty} \dfrac{x^{n-1}}{n} \\ & \implies x \dfrac{\text{d}I}{\text{d}x} = \sum_{n=1}^{\infty} \dfrac{x^{n}}{n} \\ & \implies \dfrac{\text{d}}{\text{d}x} \left( x \dfrac{\text{d}I}{\text{d}x} \right) = \sum_{n=1}^{\infty} x^{n-1} = \dfrac{1}{1-x} \\ & \implies x \dfrac{\text{d}I}{\text{d}x} = \int \dfrac{1}{1-x} \text{ d}x \\ & \implies x \dfrac{\text{d}I}{\text{d}x} = \log \left( \dfrac{1}{1-x} \right) \\ & \implies I = \int \dfrac{1}{x} \ \log \left( \dfrac{1}{1-x} \right) \text{ d}x \ = \mathrm{Li}_2 (x) \end{aligned} $$
$$ \therefore \ \sum_{n=1}^{\infty} \dfrac{1}{n^2 3^n} \ := \ \mathrm{Li}_2 \left( \dfrac{1}{3} \right) $$ |
Consider a large number $N$ of distinguishable particles distributed among $M$ boxes.
We know that the total number of possible microstates is $$\Omega=M^N$$ and that the number of microstates with a distribution among the boxes given by the configuration $[n_1, n_2, ..., n_M]$ is given by $$\frac{N!}{\prod_{j=1}^M (n_j)!}\tag{1}$$ The most likely configuration corresponding to the particles distributed equally among the $M$ boxes is trivially $$n_0=\frac{N}{M}\tag{2}$$ Now if we let $\Omega_0$ denote the statistical weight of this configuration and $p_0$ its probability.
How should I go about calculating $p_0$?
This is how I thought it should be done:
$$p_0=\frac{\left(\frac{N}{M}\right)}{\frac{N!} {\prod_{j=1}^M (n_j)!}}=\color{red}{\fbox{$\frac{N}{M}\frac{\prod_{j=1}^M (n_j)!}{N!}$}}$$
where all I did was divide $(2)$ by $(1)$ since by my logic the probability is simply
the most likely configuration divided by the total number of configurations.
But apparently this is not the case and the correct answer is $$p_0=\frac{\Omega_0}{\Omega}=\frac{\frac{N!}{\Big(\left(\frac{N}{M}\right)!\Big)^M}}{M^N}=\color{#180}{\fbox{$\frac{N!}{\Big(\left(\frac{N}{M}\right)!\Big)^M M^N}$}}$$
Could someone please provide me with any hints or an explanation to justify why $$\color{#180}{p_0={\fbox{$\frac{N!}{\Big(\left(\frac{N}{M}\right)!\Big)^M M^N}$}}}$$ is the correct answer?
EDIT:
I have already been given an answer by BLAZE but I feel that there is a
much easier way of calculating $p_0$. The reason I say this is because the answer to the problem was just stated as
$$p_0=\frac{\Omega_0}{\Omega}=\frac{\frac{N!}{\Big(\left(\frac{N}{M}\right)!\Big)^M}}{M^N}={\fbox{$\frac{N!}{\Big(\left(\frac{N}{M}\right)!\Big)^M M^N}$}}$$ without any other working.
How does one arrive at this answer without writing down any intermediate steps?
Thanks. |
I am interested to write down a derivation of Lagrange equations from Newton's second law for a non-holonomic system of particles. Here, I mention my derivation where I am stuck right at the last step.
Consider a system of $N$ particles where their position vectors are written as
$$\mathbf{r}_i=\mathscr{R}_i(q_1(t),\dots,q_M(t),t),\quad i=1,\dots,N\,,\tag{1}$$
where the
functions $q_i:\mathbb{R}\to\mathbb{R}$ are called the generalized coordinates which are subjected to holonomic and non-holonomic constraints as below
\begin{align*} f_i(q_1(t),\dots,q_M(t),t)&=0,\quad i=1,\dots,C_h\,, \\ g_i(q_1(t),\dots,q_M(t),\dot q_1(t),\dots,\dot q_M(t),t)&=0,\quad i=1,\dots,C_n\,, \tag{2} \end{align*}
where $C_h$ and $C_n$ are the number of holonomic and non-holonomic constraints, respectively. Also, if the
degree of freedom of the system is $n$ then $n=M-C\ge1$ where $C=C_n+C_h$ is the total number of constraints. Using the chain rule of differentiation we have
\begin{align*} \mathscr{\dot R}_i := \mathbf{v}_i &= \mathbf{v}^*_i+\frac{\partial\mathscr{R}_i}{\partial t},\quad i=1,\dots,N\,, \\ \mathbf{v}^*_i&:=\sum_{j=1}^{M}\frac{\partial \mathscr{R}_i}{\partial q_j}\dot q_j\,,\tag{3} \end{align*}
where we defined the
virtual velocity of a particle by $\mathbf{v}^*_i$. Also, from Newton's second law we have
$$\mathbf{F}_i=m \mathbf{a}_i\tag{4},\quad i=1,\dots,N\,.$$
Multiplying both sides of $(4)$ by $\mathbf{v}^*_i$, summing over the number of particles $N$ and interchanging the the order of summations we get
$$\sum_{j=1}^{M}\sum_{i=1}^{N}(\mathbf{F}_i-m\mathbf{a}_i)\cdot\frac{\partial \mathscr{R}_i}{\partial q_j}\dot q_j=0\,.\tag{5}$$
Then using the following definitions and identities
\begin{align*} Q_j&:=\sum_{i=1}^{N}\mathbf{F}_i\cdot\frac{\partial \mathscr{R}_i}{\partial q_j},\quad j=1,\dots,M\,, \\ S_j&:=\sum_{i=1}^{N}m\mathbf{a}_i\cdot\frac{\partial \mathscr{R}_i}{\partial q_j}=\frac{d}{dt}\frac{\partial T}{\partial \dot q_j}-\frac{\partial T}{\partial q_j},\quad j=1,\dots,M\,, \\ T&:=\sum_{i=1}^{N}\frac{1}{2}m\mathbf{v}_i\cdot\mathbf{v}_i\,, \tag{6} \end{align*}
Eq. $(5)$ reduces to
$$\sum_{j=1}^{M}(Q_j-S_j)\dot q_j=0.\tag{7}$$
If there were no constraint equations at all, either holonomic or non-holonomic as mentioned in Eq.$(2)$, then the functions $q_i$ were linearly independent and from this we could conclude that the functions $\dot q_i$ are also linearly independent. Then Eq.$(7)$ would result in the well known form of Lagrange equations $S_j=Q_j$. But here is my question, what if there are constraint equations like Eq.$(2)$. Note that sometimes we are inclined not to eliminate the holomonic constraints by using a transformation. So I am insisting to have both holonomic and non-holonomic constraints at the same time.
As the functions $\dot q_i$ are not (linearly) independent in this case, I am wondering that how the last step works here?
If the non-holonomic constraints are linear in terms of generalized velocities
\begin{align*} &g_i(q_1(t),\dots,q_M(t),\dot q_1(t),\dots,\dot q_M(t),t)=\\ &\sum_{j=1}^{M}a_{ij}(q_1(t),\dots,q_M(t),t)\dot q_j(t)+b_i(q_1(t),\dots,q_M(t),t)=0,\quad \tag{8} \end{align*}
then we call them
quasi non-holonomic. In this case, I know that the final result should be
$$\frac{d}{dt}\frac{\partial T}{\partial \dot q_j}-\frac{\partial T}{\partial q_j}=Q_j+\sum_{i=1}^{C_h}\lambda_i\frac{\partial f_i}{\partial q_j}+\sum_{i=1}^{C_n}\mu_i\frac{\partial g_i}{\partial \dot q_j},\quad j=1,\dots,M\,,\tag{9}$$
where $\lambda_i$ and $\mu_i$ are some functions of time which are called
Lagrange multipliers.
A simple observation is that every holonomic constraint can be written in the form of a quasi non-holonomic constraint, that is
\begin{align*} &\\ &\sum_{j=1}^{M}\frac{\partial f_i}{\partial q_j}(q_1(t),\dots,q_M(t),t)\dot q_j(t)+\frac{\partial f_i}{\partial t}(q_1(t),\dots,q_M(t),t)=0.\quad \tag{10} \end{align*}
At the first step, it seems reasonable to establish an argument when all of the constraints are quasi non-holonomic. I have posted a related mathematical question on Mathematics SE in this regard. Interested reader can take a look at it. |
The orthogonal group, consisting of all proper and improper rotations, is generated by reflections. Every proper rotation is the composition of two reflections, a special case of the Cartan–Dieudonné theorem.
Yeah it does seem unreasonable to expect a finite presentation
Let (V, b) be an n-dimensional, non-degenerate symmetric bilinear space over a field with characteristic not equal to 2. Then, every element of the orthogonal group O(V, b) is a composition of at most n reflections.
How does the Evolute of an Involute of a curve $\Gamma$ is $\Gamma$ itself?Definition from wiki:-The evolute of a curve is the locus of all its centres of curvature. That is to say that when the centre of curvature of each point on a curve is drawn, the resultant shape will be the evolute of th...
Player $A$ places $6$ bishops wherever he/she wants on the chessboard with infinite number of rows and columns. Player $B$ places one knight wherever he/she wants.Then $A$ makes a move, then $B$, and so on...The goal of $A$ is to checkmate $B$, that is, to attack knight of $B$ with bishop in ...
Player $A$ chooses two queens and an arbitrary finite number of bishops on $\infty \times \infty$ chessboard and places them wherever he/she wants. Then player $B$ chooses one knight and places him wherever he/she wants (but of course, knight cannot be placed on the fields which are under attack ...
The invariant formula for the exterior product, why would someone come up with something like that. I mean it looks really similar to the formula of the covariant derivative along a vector field for a tensor but otherwise I don't see why would it be something natural to come up with. The only places I have used it is deriving the poisson bracket of two one forms
This means starting at a point $p$, flowing along $X$ for time $\sqrt{t}$, then along $Y$ for time $\sqrt{t}$, then backwards along $X$ for the same time, backwards along $Y$ for the same time, leads you at a place different from $p$. And upto second order, flowing along $[X, Y]$ for time $t$ from $p$ will lead you to that place.
Think of evaluating $\omega$ on the edges of the truncated square and doing a signed sum of the values. You'll get value of $\omega$ on the two $X$ edges, whose difference (after taking a limit) is $Y\omega(X)$, the value of $\omega$ on the two $Y$ edges, whose difference (again after taking a limit) is $X \omega(Y)$ and on the truncation edge it's $\omega([X, Y])$
Gently taking caring of the signs, the total value is $X\omega(Y) - Y\omega(X) - \omega([X, Y])$
So value of $d\omega$ on the Lie square spanned by $X$ and $Y$ = signed sum of values of $\omega$ on the boundary of the Lie square spanned by $X$ and $Y$
Infinitisimal version of $\int_M d\omega = \int_{\partial M} \omega$
But I believe you can actually write down a proof like this, by doing $\int_{I^2} d\omega = \int_{\partial I^2} \omega$ where $I$ is the little truncated square I described and taking $\text{vol}(I) \to 0$
For the general case $d\omega(X_1, \cdots, X_{n+1}) = \sum_i (-1)^{i+1} X_i \omega(X_1, \cdots, \hat{X_i}, \cdots, X_{n+1}) + \sum_{i < j} (-1)^{i+j} \omega([X_i, X_j], X_1, \cdots, \hat{X_i}, \cdots, \hat{X_j}, \cdots, X_{n+1})$ says the same thing, but on a big truncated Lie cube
Let's do bullshit generality. $E$ be a vector bundle on $M$ and $\nabla$ be a connection $E$. Remember that this means it's an $\Bbb R$-bilinear operator $\nabla : \Gamma(TM) \times \Gamma(E) \to \Gamma(E)$ denoted as $(X, s) \mapsto \nabla_X s$ which is (a) $C^\infty(M)$-linear in the first factor (b) $C^\infty(M)$-Leibniz in the second factor.
Explicitly, (b) is $\nabla_X (fs) = X(f)s + f\nabla_X s$
You can verify that this in particular means it's a pointwise defined on the first factor. This means to evaluate $\nabla_X s(p)$ you only need $X(p) \in T_p M$ not the full vector field. That makes sense, right? You can take directional derivative of a function at a point in the direction of a single vector at that point
Suppose that $G$ is a group acting freely on a tree $T$ via graph automorphisms; let $T'$ be the associated spanning tree. Call an edge $e = \{u,v\}$ in $T$ essential if $e$ doesn't belong to $T'$. Note: it is easy to prove that if $u \in T'$, then $v \notin T"$ (this follows from uniqueness of paths between vertices).
Now, let $e = \{u,v\}$ be an essential edge with $u \in T'$. I am reading through a proof and the author claims that there is a $g \in G$ such that $g \cdot v \in T'$? My thought was to try to show that $orb(u) \neq orb (v)$ and then use the fact that the spanning tree contains exactly vertex from each orbit. But I can't seem to prove that orb(u) \neq orb(v)...
@Albas Right, more or less. So it defines an operator $d^\nabla : \Gamma(E) \to \Gamma(E \otimes T^*M)$, which takes a section $s$ of $E$ and spits out $d^\nabla(s)$ which is a section of $E \otimes T^*M$, which is the same as a bundle homomorphism $TM \to E$ ($V \otimes W^* \cong \text{Hom}(W, V)$ for vector spaces). So what is this homomorphism $d^\nabla(s) : TM \to E$? Just $d^\nabla(s)(X) = \nabla_X s$.
This might be complicated to grok first but basically think of it as currying. Making a billinear map a linear one, like in linear algebra.
You can replace $E \otimes T^*M$ just by the Hom-bundle $\text{Hom}(TM, E)$ in your head if you want. Nothing is lost.
I'll use the latter notation consistently if that's what you're comfortable with
(Technical point: Note how contracting $X$ in $\nabla_X s$ made a bundle-homomorphsm $TM \to E$ but contracting $s$ in $\nabla s$ only gave as a map $\Gamma(E) \to \Gamma(\text{Hom}(TM, E))$ at the level of space of sections, not a bundle-homomorphism $E \to \text{Hom}(TM, E)$. This is because $\nabla_X s$ is pointwise defined on $X$ and not $s$)
@Albas So this fella is called the exterior covariant derivative. Denote $\Omega^0(M; E) = \Gamma(E)$ ($0$-forms with values on $E$ aka functions on $M$ with values on $E$ aka sections of $E \to M$), denote $\Omega^1(M; E) = \Gamma(\text{Hom}(TM, E))$ ($1$-forms with values on $E$ aka bundle-homs $TM \to E$)
Then this is the 0 level exterior derivative $d : \Omega^0(M; E) \to \Omega^1(M; E)$. That's what a connection is, a 0-level exterior derivative of a bundle-valued theory of differential forms
So what's $\Omega^k(M; E)$ for higher $k$? Define it as $\Omega^k(M; E) = \Gamma(\text{Hom}(TM^{\wedge k}, E))$. Just space of alternating multilinear bundle homomorphisms $TM \times \cdots \times TM \to E$.
Note how if $E$ is the trivial bundle $M \times \Bbb R$ of rank 1, then $\Omega^k(M; E) = \Omega^k(M)$, the usual space of differential forms.
That's what taking derivative of a section of $E$ wrt a vector field on $M$ means, taking the connection
Alright so to verify that $d^2 \neq 0$ indeed, let's just do the computation: $(d^2s)(X, Y) = d(ds)(X, Y) = \nabla_X ds(Y) - \nabla_Y ds(X) - ds([X, Y]) = \nabla_X \nabla_Y s - \nabla_Y \nabla_X s - \nabla_{[X, Y]} s$
Voila, Riemann curvature tensor
Well, that's what it is called when $E = TM$ so that $s = Z$ is some vector field on $M$. This is the bundle curvature
Here's a point. What is $d\omega$ for $\omega \in \Omega^k(M; E)$ "really"? What would, for example, having $d\omega = 0$ mean?
Well, the point is, $d : \Omega^k(M; E) \to \Omega^{k+1}(M; E)$ is a connection operator on $E$-valued $k$-forms on $E$. So $d\omega = 0$ would mean that the form $\omega$ is parallel with respect to the connection $\nabla$.
Let $V$ be a finite dimensional real vector space, $q$ a quadratic form on $V$ and $Cl(V,q)$ the associated Clifford algebra, with the $\Bbb Z/2\Bbb Z$-grading $Cl(V,q)=Cl(V,q)^0\oplus Cl(V,q)^1$. We define $P(V,q)$ as the group of elements of $Cl(V,q)$ with $q(v)\neq 0$ (under the identification $V\hookrightarrow Cl(V,q)$) and $\mathrm{Pin}(V)$ as the subgroup of $P(V,q)$ with $q(v)=\pm 1$. We define $\mathrm{Spin}(V)$ as $\mathrm{Pin}(V)\cap Cl(V,q)^0$.
Is $\mathrm{Spin}(V)$ the set of elements with $q(v)=1$?
Torsion only makes sense on the tangent bundle, so take $E = TM$ from the start. Consider the identity bundle homomorphism $TM \to TM$... you can think of this as an element of $\Omega^1(M; TM)$. This is called the "soldering form", comes tautologically when you work with the tangent bundle.
You'll also see this thing appearing in symplectic geometry. I think they call it the tautological 1-form
(The cotangent bundle is naturally a symplectic manifold)
Yeah
So let's give this guy a name, $\theta \in \Omega^1(M; TM)$. It's exterior covariant derivative $d\theta$ is a $TM$-valued $2$-form on $M$, explicitly $d\theta(X, Y) = \nabla_X \theta(Y) - \nabla_Y \theta(X) - \theta([X, Y])$.
But $\theta$ is the identity operator, so this is $\nabla_X Y - \nabla_Y X - [X, Y]$. Torsion tensor!!
So I was reading this thing called the Poisson bracket. With the poisson bracket you can give the space of all smooth functions on a symplectic manifold a Lie algebra structure. And then you can show that a symplectomorphism must also preserve the Poisson structure. I would like to calculate the Poisson Lie algebra for something like $S^2$. Something cool might pop up
If someone has the time to quickly check my result, I would appreciate. Let $X_{i},....,X_{n} \sim \Gamma(2,\,\frac{2}{\lambda})$ Is $\mathbb{E}([\frac{(\frac{X_{1}+...+X_{n}}{n})^2}{2}] = \frac{1}{n^2\lambda^2}+\frac{2}{\lambda^2}$ ?
Uh apparenty there are metrizable Baire spaces $X$ such that $X^2$ not only is not Baire, but it has a countable family $D_\alpha$ of dense open sets such that $\bigcap_{\alpha<\omega}D_\alpha$ is empty
@Ultradark I don't know what you mean, but you seem down in the dumps champ. Remember, girls are not as significant as you might think, design an attachment for a cordless drill and a flesh light that oscillates perpendicular to the drill's rotation and your done. Even better than the natural method
I am trying to show that if $d$ divides $24$, then $S_4$ has a subgroup of order $d$. The only proof I could come up with is a brute force proof. It actually was too bad. E.g., orders $2$, $3$, and $4$ are easy (just take the subgroup generated by a 2 cycle, 3 cycle, and 4 cycle, respectively); $d=8$ is Sylow's theorem; $d=12$, take $d=24$, take $S_4$. The only case that presented a semblance of trouble was $d=6$. But the group generated by $(1,2)$ and $(1,2,3)$ does the job.
My only quibble with this solution is that it doesn't seen very elegant. Is there a better way?
In fact, the action of $S_4$ on these three 2-Sylows by conjugation gives a surjective homomorphism $S_4 \to S_3$ whose kernel is a $V_4$. This $V_4$ can be thought as the sub-symmetries of the cube which act on the three pairs of faces {{top, bottom}, {right, left}, {front, back}}.
Clearly these are 180 degree rotations along the $x$, $y$ and the $z$-axis. But composing the 180 rotation along the $x$ with a 180 rotation along the $y$ gives you a 180 rotation along the $z$, indicative of the $ab = c$ relation in Klein's 4-group
Everything about $S_4$ is encoded in the cube, in a way
The same can be said of $A_5$ and the dodecahedron, say |
Consider a $C^k$, $k\ge 2$, Lorentzian manifold $(M,g)$ and let $\Box$ be the usual wave operator $\nabla^a\nabla_a$. Given $p\in M$, $s\in\Bbb R,$ and $v\in T_pM$, can we find a neighborhood $U$ of $p$ and $u\in C^k(U)$ such that $\Box u=0$, $u(p)=s$ and $\mathrm{grad}\, u(p)=v$?
The tog is a measure of thermal resistance of a unit area, also known as thermal insulance. It is commonly used in the textile industry and often seen quoted on, for example, duvets and carpet underlay.The Shirley Institute in Manchester, England developed the tog as an easy-to-follow alternative to the SI unit of m2K/W. The name comes from the informal word "togs" for clothing which itself was probably derived from the word toga, a Roman garment.The basic unit of insulation coefficient is the RSI, (1 m2K/W). 1 tog = 0.1 RSI. There is also a clo clothing unit equivalent to 0.155 RSI or 1.55 tog...
The stone or stone weight (abbreviation: st.) is an English and imperial unit of mass now equal to 14 pounds (6.35029318 kg).England and other Germanic-speaking countries of northern Europe formerly used various standardised "stones" for trade, with their values ranging from about 5 to 40 local pounds (roughly 3 to 15 kg) depending on the location and objects weighed. The United Kingdom's imperial system adopted the wool stone of 14 pounds in 1835. With the advent of metrication, Europe's various "stones" were superseded by or adapted to the kilogram from the mid-19th century on. The stone continues...
Can you tell me why this question deserves to be negative?I tried to find faults and I couldn't: I did some research, I did all the calculations I could, and I think it is clear enough . I had deleted it and was going to abandon the site but then I decided to learn what is wrong and see if I ca...
I am a bit confused in classical physics's angular momentum. For a orbital motion of a point mass: if we pick a new coordinate (that doesn't move w.r.t. the old coordinate), angular momentum should be still conserved, right? (I calculated a quite absurd result - it is no longer conserved (an additional term that varies with time )
in new coordinnate: $\vec {L'}=\vec{r'} \times \vec{p'}$
$=(\vec{R}+\vec{r}) \times \vec{p}$
$=\vec{R} \times \vec{p} + \vec L$
where the 1st term varies with time. (where R is the shift of coordinate, since R is constant, and p sort of rotating.)
would anyone kind enough to shed some light on this for me?
From what we discussed, your literary taste seems to be classical/conventional in nature. That book is inherently unconventional in nature; it's not supposed to be read as a novel, it's supposed to be read as an encyclopedia
@BalarkaSen Dare I say it, my literary taste continues to change as I have kept on reading :-)
One book that I finished reading today, The Sense of An Ending (different from the movie with the same title) is far from anything I would've been able to read, even, two years ago, but I absolutely loved it.
I've just started watching the Fall - it seems good so far (after 1 episode)... I'm with @JohnRennie on the Sherlock Holmes books and would add that the most recent TV episodes were appalling. I've been told to read Agatha Christy but haven't got round to it yet
?Is it possible to make a time machine ever? Please give an easy answer,a simple one A simple answer, but a possibly wrong one, is to say that a time machine is not possible. Currently, we don't have either the technology to build one, nor a definite, proven (or generally accepted) idea of how we could build one. — Countto1047 secs ago
@vzn if it's a romantic novel, which it looks like, it's probably not for me - I'm getting to be more and more fussy about books and have a ridiculously long list to read as it is. I'm going to counter that one by suggesting Ann Leckie's Ancillary Justice series
Although if you like epic fantasy, Malazan book of the Fallen is fantastic
@Mithrandir24601 lol it has some love story but its written by a guy so cant be a romantic novel... besides what decent stories dont involve love interests anyway :P ... was just reading his blog, they are gonna do a movie of one of his books with kate winslet, cant beat that right? :P variety.com/2016/film/news/…
@vzn "he falls in love with Daley Cross, an angelic voice in need of a song." I think that counts :P It's not that I don't like it, it's just that authors very rarely do anywhere near a decent job of it. If it's a major part of the plot, it's often either eyeroll worthy and cringy or boring and predictable with OK writing. A notable exception is Stephen Erikson
@vzn depends exactly what you mean by 'love story component', but often yeah... It's not always so bad in sci-fi and fantasy where it's not in the focus so much and just evolves in a reasonable, if predictable way with the storyline, although it depends on what you read (e.g. Brent Weeks, Brandon Sanderson). Of course Patrick Rothfuss completely inverts this trope :) and Lev Grossman is a study on how to do character development and totally destroys typical romance plots
@Slereah The idea is to pick some spacelike hypersurface $\Sigma$ containing $p$. Now specifying $u(p)$ is trivial because the wave equation is invariant under constant perturbations. So that's whatever. But I can specify $\nabla u(p)|\Sigma$ by specifying $u(\cdot, 0)$ and differentiate along the surface. For the Cauchy theorems I can also specify $u_t(\cdot,0)$.
Now take the neigborhood to be $\approx (-\epsilon,\epsilon)\times\Sigma$ and then split the metric like $-dt^2+h$
Do forwards and backwards Cauchy solutions, then check that the derivatives match on the interface $\{0\}\times\Sigma$
Why is it that you can only cool down a substance so far before the energy goes into changing it's state? I assume it has something to do with the distance between molecules meaning that intermolecular interactions have less energy in them than making the distance between them even smaller, but why does it create these bonds instead of making the distance smaller / just reducing the temperature more?
Thanks @CooperCape but this leads me another question I forgot ages ago
If you have an electron cloud, is the electric field from that electron just some sort of averaged field from some centre of amplitude or is it a superposition of fields each coming from some point in the cloud? |
While there is no confirmation that quark stars exist, is there any theoretical limit analogous to (but different from) the Tolman–Oppenheimer–Volkoff limit for neutron stars?
In other words, what is the maximum pressure for quark matter?
Physics Stack Exchange is a question and answer site for active researchers, academics and students of physics. It only takes a minute to sign up.Sign up to join this community
While there is no confirmation that quark stars exist, is there any theoretical limit analogous to (but different from) the Tolman–Oppenheimer–Volkoff limit for neutron stars?
In other words, what is the maximum pressure for quark matter?
The upper mass limit for a quark star depends on your assumptions and ranges between 1 and 2 solar masses (cf. this paper (arXiv link) from 2001). It seems to me that the reason for the similarity to neutron stars' mass range is that it
both compact objects satisfy the TOV equation,$$\frac{dp}{dr}=-\frac{G}{r^2}\left[\rho+\frac{p}{c^2}\right]\left[M+4\pi r^2\frac{p}{c^2}\right]\left[1-\frac{2GM}{rc^2}\right]^{-1}$$but with different equations of state.
For the quark star, according to the aforemention paper, the pressure is defined as$$p(\mu)=\frac{N_f\mu^4}{4\pi^2}\left[1-2\frac{\alpha_s}{\pi}-\left(G+N_f\ln\frac{\alpha_s}\pi+\left(11-\frac23N_f\right)\ln\frac{\bar{\Lambda}}{\mu}\right)\frac{\alpha_s^2}{\pi^2}\right]$$where $G\simeq10.4-0.536N_f+N_f\ln N_f$, $\alpha_s$ the strong coupling, $N_f$ the number of flavors (often taken as 3), $\mu$ the chemical potential, and $\bar{\Lambda}$ the renormalization subtraction point (my understanding of this term is minimal, but it seems to change the
size of the mass-radius relation, but not the shape). |
Gravitational force exerted by ring
Let's say that we wanted to find the gravitational
force exerted by a disk of mass \(M\) on a particle of mass \(m\) a vertical height \(h\) above the center \(O\) of the disk as illustrated in Figure 2. To find this force we'll use Newton's law of gravity and the concept of a definite integral just like we did in the previous lesson. Let's say that the radius of the disk is \(R\) and that the disk has a finite thickness \(t\). Let's imagine subdividing this disk into infinitely many, infinitesimally skinny rings. I have drawn one of these rings in Figure 1. If the entire disk has a uniform mass density \(ρ\), then since the volume of one of the infinitesimal mass elements comprising the ring is \(dV\) it follows that the mass of such a mass element must be
$$dm=ρdV.\tag{1}$$
Since all of the mass of each mass element is contained within an infinitesimally small volume \(dV\), we can regard each mass element as a point-mass (that is, a particle). This is nice since, as we explained in another lesson, Newton's law of gravity tells us the gravitational force that one
particle exerts on another particle. Using Newton's law of gravity, we find that the point-mass \(ρdV\) exerts a gravitational force on the particle of mass \(m\) by an amount given by
$$dF_{ρdV,m}=G\frac{(ρdV)(m)}{r^2+h^2}.\tag{2}$$
To find the total gravitational force exerted by the entire ring, we must add up every force \(dF_{ρdV,m}\) due to all the particles comprising the ring. But notice that when we take this sum, all of the \(y\)-components of force cancel each other out. Let me take a moment to explain why this is. Notice that for any arbitrary mass element \(dm\) comprising the ring, there will always be some other mass element \(dm'\) on the opposite side of the ring. Since each mass element is an equal distance \(\sqrt{r^2+h^2}\) away from \(m\), both mass elements will exert the same magnitude of force on \(m\). But, due to their symmetrical distribution around \(m\), there \(y\)-components of force acting on \(m\) will be equal-and-opposite. Thus, the two \(y\)-components of force exerted by the particles \(dm\) and \(dm'\) cancel each other out. When we add up the forces exerted by each particle in each ring comprising the disk, the \(y\)-components of force always cancel. Thus, we really only need to add up the \(x\)-components of force exerted by each particle in the ring. As you can see from Figure 1, \(cosθ=dF_x/dF_{ρdV,m}\) and thus
$$dF_x=dF_{ρdV,m}cosθ.\tag{3}$$
Substituting Equation (3) into (2), we find that the \(x\)-component of gravitational force exerted by any arbitrary mass element in the ring is given by
$$dF_x=\frac{Gρm}{r^2+h^2}cosθdV.\tag{4}$$
To find the total gravitational force exerted by the ring on \(m\), we must add up the forces due to every particle in the ring to get
$$F_{ring}=\int{\frac{Gρm}{r^2+h^2}cosθdV}.\tag{5}$$
We know that the terms \(G\), \(ρ\), and \(m\) are all constants; but notice that for any particle along the ring, \(r\) and \(θ\) are also constants. Thus, we can pull everything outside of the integral in Equation (5). Doing so, we have
$$F_{ring}=\frac{Gρm}{r^2+h^2}cosθ\int{dV}.$$
Since \(\int{dV}=ΔV\), which is the volume of the ring, we have
$$F_{ring}=\frac{Gρm}{r^2+h^2}cosθΔV.$$
Since \(ρΔV\) is just the mass of the ring (which we'll represent by \(M_{ring}\), we find that the total gravitational force exerted by a ring on a particle a height \(h\) above or below the center of the ring is given by
$$F_{ring}=G\frac{mM_{ring}}{r^2+h^2}cosθ.\tag{7}$$
Gravitational force exerted by disk
To find the gravitational force exerted on \(m\) by the disk, all that we need to do is sum up the forces exerted by every ring comprising the disk to get
$$F_{disk}=\int{\frac{GρmcosθdV}{r^2+h^2}},\tag{8}$$
where I have set \(ΔV=dV\) since the ring is infinitesimally skinny. The rest of this lesson will be about simplifying the integral in Equation (8) so that we can eventually solve it. To solve this integral, everything in the integral must be represented in terms of a single variable. Let's represent everything in terms of \(r\). The volume \(dV\) of the ring is given by
$$dV=(2πrdr)t.$$
Substituting this result into Equation (8), we have
$$F_{disk}=\int{\frac{Gρmcosθ}{r^2+h^2}(2πrdr)t},\tag{9}$$
Substituting \(cosθ=h/\sqrt{r^2+h^2}\) and rearranging the terms in Equation (9), we have
$$F_{disk,m}=2πhGρmt\int_0^R\frac{r}{(r^2+h^2)^{3/2}}dr.\tag{11}$$
Now that we have simplified our integral so that everything is in terms of the single variable \(r\), we can compute the integral using the fundamental theorem of calculus and u-substitution. If we let \(u=r^2+h^2\), then
$$\frac{du}{dr}=2r$$
and
$$rdr=\frac{1}{2}du.\tag{12}$$
Substituting Equation (12) into (11), we have
$$hπGρmt\int_{h^2}^{R^2+h^2}\frac{1}{u^{3/2}}du.\tag{14}$$
Using the rules of integration and the fundamental theorem of calculus to solve the integral in Equation (14), Equation (14) becomes
$$2hπGρmt\biggl[\frac{1}{\sqrt{u}}\biggr]_{R^2+h^2}^{h^2}=2hπGρmt\biggl(\frac{1}{h}-\frac{1}{\sqrt{R^2+h^2}}\biggr).$$
Multiply by \(h/h\), we have
$$2hπGρmt\biggl(\frac{1}{h}-\frac{1}{\sqrt{R^2+h^2}}\biggr)=2πGρmt\biggl(1-\frac{h}{\sqrt{R^2+h^2}}\biggr).$$
Thus, the gravitational force exerted by a disk on a particle of mass \(m\) located a vertical distance \(h\) above or below the center of the disk is given by
$$F_{disk}=2πGρmt\biggl(1-\frac{h}{\sqrt{R^2+h^2}}\biggr).\tag{15}$$
This article is licensed under a CC BY-NC-SA 4.0 license. |
Inclusion Mapping is Surjection iff Identity Theorem
Let $T$ be a set.
Let $S\subseteq T$ be a subset.
Let $i_S: S \to T$ be the inclusion mapping.
Then:
where $I_S: S \to S$ denotes the identity mapping on $S$.
Alternatively, this theorem can be worded as: $i_S: S \to S = I_S: S \to S$ Proof $(1): \quad \Dom {i_S} = S = \Dom {I_S}$ $(2): \quad \forall s \in S: \map {i_S} s = s = \map {I_S} s$ Necessary Condition
Let $i_S: S \to T = I_S: S \to S$.
So $\forall s \in S: s = \map {i_S} s$ and so $i_S$ is surjective.
$\Box$
Sufficient Condition
Now let $i_S: S \to T$ be a surjection.
Then: $\forall s \in T: s = \map {i_S} s \implies s \in S$
and therefore:
$T \subseteq S$
Thus:
$T = S$ Thus $i_S: S \to T = I_S: S \to S$.
$\blacksquare$ |
A comparison sort cannot require fewer than $\Theta (n\log n)$ comparisons on average. However, consider this sorting algorithm:
sort(array): if length(array) < 2: return array unsorted ← empty_array i ← 0 while i < length(array) - 1: if array[i] > array[i + 1]: push(unsorted, pop(array, i + 1)) else: i ← i + 1 return merge(array, sort(unsorted))
(
push(array, element) puts the new element at the end of the array and increases the array’s length by 1.
pop(array, index) removes the element at that index from the array, moving all the elements at greater indices and decrementing the array’s length, and returns the removed element.
merge is the same as in mergesort.)
Instead of simply splitting the array in the middle like mergesort, it splits it so that one resulting array doesn’t need to be recursively sorted. Let $n$ be the length of the array to be sorted. Applying the Master Theorem gives us
$$\begin{align*} T(n) &= T(n / b) + \mathrm{splitComparisons}(n) + \mathrm{mergeComparisons}(n) \\ &= T(n / b) + (n - 1) + n \\ &= T(n / b) + 2n - 1\,, \end{align*}$$
so $f(n) = 2n - 1$ and $a = c = 1$ in the statement of the Master Theorem.
$b$ is one over the probability that an element is greater than the next element and will go into the array to be recursively sorted. For example, if there's a 25% chance that
array[i] > array[i + 1] (for all
i), $b = 4$. $b$ is clearly greater than $1$, since the length of unsorted array grows smaller with every recursive call, so taking the logarithm with base $b$ of $1$ will always give us $0$, which is less than $c$. Then $T(n) = \Theta(f(n)) = \Theta(n)$.
But that can’t be true, so the Master Theorem isn't applicable for some reason; I suspect because $b$ isn't constant, but I don’t know how to prove that. The worst case of the sorting algorithm obviously requires a quadratic number of comparisons and the best case linear, so by analogy with bubblesort, insertion sort, etc, I’m guessing this algorithm also makes a quadratic number of comparisons on average. |
№ 8
All Issues Volume 65, № 12, 2013
Ukr. Mat. Zh. - 2013. - 65, № 12. - pp. 1587–1603
We introduce the notion of Kravchuk derivations of the polynomial algebra. It is proved that any element of the kernel of a derivation of this kind gives a polynomial identity satisfied by the Kravchuk polynomials. In addition, we determine the explicit form of isomorphisms mapping the kernel of the basicWeitzenb¨ock derivation onto the kernels of Kravchuk derivations.
On the Best Approximation in the Mean by Algebraic Polynomials with Weight and the Exact Values of Widths for the Classes of Functions
Ukr. Mat. Zh. - 2013. - 65, № 12. - pp. 1604–1621
The exact value of the extremal characteristic
is obtained on the class
L 2 r D ρ , where r ∈ ℤ +; \( {D}_{\rho} = \sigma (x)\frac{d^2}{d{ x}^2}+\tau (x)\frac{d}{d x} \) , σ and τ are polynomials of at most the second and first degrees, respectively, ρ is a weight function, 0 < p ≤ 2 , 0 < h < 1 , λ n ρ)
are eigenvalues of the operator D ρ , φ
is a nonnegative measurable and summable function (in the interval ( a, b)) which is not equivalent to zero, Ω k, ρ k th order in the space L 2, ρ ( a, b) ,and E ( n f) 2,is the best polynomial approximation in the mean with weight ρ ρfor a function f ∈ L 2,( ρ a, b) .The exact values of widths for the classes of functions specified by the characteristic of smoothness Ω and the k, ρ K-functional \( \mathbb{K} \) mare also obtained.
Ukr. Mat. Zh. - 2013. - 65, № 12. - pp. 1622–1635
We present the definition and study the dependence on the initial approximation of the asymptotic rate of convergence of a two-layer symmetrizable iterative method of the variational type. The explicit expression is obtained for the substantial (with respect to the Lebesgue measure) range of its values. Its domain of continuity is described.
Ukr. Mat. Zh. - 2013. - 65, № 12. - pp. 1636–1645
We show that the invariant fields
F q X 1 , . . . ,X n G F q G are root subgroups of finite classical groups. The key step is to find good similar groups of our groups. Moreover, the invariant rings of the root subgroups of special linear groups are shown to be polynomial rings and their corresponding Poincaré series are presented.
Ukr. Mat. Zh. - 2013. - 65, № 12. - pp. 1646–1656
We establish some new Lyapunov-type inequalities for one-dimensional
p-Laplacian systems with antiperiodic boundary conditions. The lower bounds of eigenvalues are presented. Theorem on Closure and the Criterion of Compactness for the Classes of Solutions of the Beltrami Equations
Ukr. Mat. Zh. - 2013. - 65, № 12. - pp. 1657–1666
We study the classes of regular solutions of degenerate Beltrami equations with constraints of the integral type imposed on a complex coefficient, prove the theorem on closure, and establish a criterion of compactness for these classes.
Ukr. Mat. Zh. - 2013. - 65, № 12. - pp. 1667–1680
We present a generalization of several fixed and common fixed point theorems on
c -distance in ordered cone metric spaces. In this way, we improve and generalize various results existing in the literature.
Ukr. Mat. Zh. - 2013. - 65, № 12. - pp. 1681–1699
We obtain upper bounds for the values of the best bilinear approximations in the Lebesgue spaces of periodic functions of many variables from the Besov-type classes. In special cases, it is shown that these bounds are order exact.
Ukr. Mat. Zh. - 2013. - 65, № 12. - pp. 1700–1711
Let
p: X → X/A be a quotient map, where A is a subspace of X.
We study the conditions under which p ∗(π 1 qtop ( X, x 0)) is dense in π 1 qtop ( X/ A,∗)), where the fundamental groups have the natural quotient topology inherited from the loop space and p * is a continuous homomorphism induced by the quotient map p. In addition, we present some applications in order to determine the properties of π 1 qtop ( X/ A,∗). In particular, we establish conditions under which π 1 qtop ( X/ A,∗) is an indiscrete topological group.
Ukr. Mat. Zh. - 2013. - 65, № 12. - pp. 1712–1715
We explore the necessary and sufficient conditions for the two cone metrics to be topologically equivalent.
Ukr. Mat. Zh. - 2013. - 65, № 12. - pp. 1716–1722
We establish new sufficient conditions for the absolute |C,
α|-summability of the Fourier series of functions almost periodic in a sense of Besicovitch whose spectrum has limit points at infinity and at the origin for \( \alpha \ge \frac{1}{2} \) .
Ukr. Mat. Zh. - 2013. - 65, № 12. - pp. 1723-1725 |
First we make some computations:$$\lim_{x\to 0, y\to y_0}x^3\log\Big(1+\frac{|y|^\alpha}{x^4}\Big)=\lim\left(x^3\sqrt{1+\frac{|y|^\alpha}{x^4}}\right)\frac{2\log(u)}{u}=\lim x\sqrt{x^4+|y|^\alpha}\cdot \frac{2\log(u)}{u},$$where $u=\sqrt{1+\frac{|y|^\alpha}{x^4}}$. If $(x,y)\to(0,a)$ and $u$ remains bounded, the last product goes to $0$ as the first factor dominates. But if $u\to+\infty$, we know that $\log(u)$ grows slower than $u$ and $\log(u)/u\to0$ and we are done again. This settles continuity and gives a way to treat limits involving this weird $\log$.
Next for differentiability we look at $\frac{\partial f}{\partial y}(a,0)$, $a\ne0$. Directly by definition we must compute the limit when $t\to 0$ of the quotient$$A=\frac{f(a,t)-f(a,0)}{t}=\frac{a^3\log\Big(1+\frac{|t|^\alpha}{x^4}\Big)}{t}.$$It's of the form $\frac{0}{0}$, hence we apply l'Hôpital:$$\lim_{t\to0}A=\pm a^3\lim_{t\to0}\frac{\alpha|t|^{\alpha-1}}{1+\frac{|t|^\alpha}{a^4}}=\begin{cases}0&\text{for $\alpha>1$,}\\\pm\alpha a^3&\text{for $\alpha=1$,}\\ \pm\infty&\text{for $\alpha<1$.}\end{cases}$$Here the $\pm$ distinguishes $t\to0^+$ and $t\to0^-$, and we see the limit doesn't exist for $\alpha\le1$. Consequently the function is not differentiable for $\alpha\le1$.
For the case $\alpha>1$ I summarise: clearly $f$ is ${\mathcal C}^1$ off the axis $xy=0$, so one focuses at those axis. Then compute partial derivatives off them through $x^3\log...)$ formula valid there to get$$\begin{cases}\frac{\partial f}{\partial x}(x,y)=3x^2\log\Big(1+\frac{|y|^\alpha}{x^4}\Big)-\frac{4x^2|y|^\alpha}{x^4+|y|^\alpha},\\\frac{\partial f}{\partial y}(x,y)=\frac{\pm \alpha x^6|y|^{\alpha-1}}{x^4+|y|^\alpha}.\end{cases}$$ On the other hand right from the definitions, one sees that all partial derivatives vanish at every point of the two axis. Next one computes the limits when $x\to0$ and when $y=0$ (two different cases) of the formulas above. As far as I've had the patience of doing it, the limits are indeed $0$, hence the function is ${\mathcal C}^1$ for $\alpha>1$.
For anotehr suggestion, let's see one limit of a partial derivative when $x\to0,\ y\to0$. WE bound as follows$$\left|\frac{\pm \alpha x^6|y|^{\alpha-1}}{x^4+|y|^\alpha}\right|\le \alpha x^2\frac{x^4}{x^4+|y|^\alpha}|y|^\alpha,$$
and note that the frist and third factor go to $0$, and the second is bounded: since $x^4+|y|^\alpha\ge x^4$, the quotient is $\le1$.
I hope I haven't go wrong in the computations, but I think the whole lot can be of some help in any case. For other thing, I would recommend
beware polar coordinates, they are useful but often lead to some confusions, see my post
Multivariable limit with polar coordinates
on the matter. |
Happy near year, and best wishes to those close and \(\varepsilon\)-far! December concluded the year with 4 new preprints, spanning quite a lot of the property testing landscape:
Testing Stability Properties in Graphical Hedonic Games, by Hendrik Fichtenberger and Anja Rey (arXiv). The authors of this paper consider the problem of deciding whether a given hedonic game possesses some “coalition stability” in a property testing framework. Namely, recall that a hedonic game is a game where players (nodes) form coalitions (subsets of nodes) based on their individual preferences and local information about the considered coalition, thus resulting in a partition of the original graph. Several notions exist to evaluate how good such a partition is, based on how “stable” the given coalitions are. This work focuses on hedonic games corresponding to bounded-degree graphs, introducing and studying the property testing question of deciding (for several such notions of stability) whether a given game admits a stable coalition structure, or is far from admitting such a partition. Spectral methods for testing cluster structure of graphs, by Sandeep Silwal and Jonathan Tidor (arXiv). Staying among bounded-degree graphs, we turn to testing clusterability of graphs, the focus of this paper. Given an \(n\)-node graph \(G\) of degree at most \(d\) and parameters \(k, \phi\), say that \(G\) is \((k, \phi)\)-clusterable if it can be partitioned in \(k\) parts of inner conductance at least \(\phi\). Analyzing properties of a random walk on \(G\), this work gives a bicriterion guarantee (\((k, \phi)\)-clusterable vs. \(\varepsilon\)-far from \((k, \phi^\ast)\)-clusterable, where \(\phi^\ast \approx \varepsilon^2\phi^2\)) for the case \(k=2\), improving on previous work by Czumaj, Peng, and Sohler’15.
We then switch from graphs to probability distributions with our third paper:
Inference under Information Constraints I: Lower Bounds from Chi-Square Contraction, by Jayadev Acharya, Clément Canonne, and Himanshu Tyagi (arXiv). (Disclaimer: I’m one of the authors.) In this paper, the first of an announced series of three, the authors generalize the settings of two previous works we covered here and there to consider the general question of distribution testing and learning when the \(n\) i.i.d. samples are distributed among \(n\) players, which each can only communicate their sample to the central algorithm by respecting some pre-specified local information constraint (e.g., privacy, or noise, or communication budget). This paper develops a general lower bound framework to study such questions, with a systematic focus on the power of public vs. private randomness between the \(n\) parties, and instantiate it to obtain tight bounds in the aforementioned locally private and communication-limited settings. (Spoiler: public randomness strictly helps, but not always.)
Finally, after games, graphs, and distributions, our fourth paper of the month concerns testing of functions:
Partial Function Extension with Applications to Learning and Property Testing, by Umang Bhaskar and Gunjan Kumar (arXiv). This work focuses on a problem quite related to property testing, that of partial function extension: given as input \(n\) pairs point/value from a purported function on a domain \(X\) of size \(|X| > n\), one is tasked with deciding whether there does exist (resp., with finding) a function \(f\) on \(X\) consistent with these \(n\) values which further satisfies a specific property, such as linearity or convexity. This is indeed very reminiscent of property testing, where one gets to query these \(n\) points and must decide (approximate) consistency with such a well-behaved function. Here, the authors study the computational hardness of this partial function extension problem, specifically for properties such as subadditivity and XOS (a sub-property of subadditivity); and as corollaries obtain new property testers for the classes of subadditive and XOS functions.
As usual, if you know of some work we missed from last December, let us know in the comments! |
The brachistochrone problem is a very famous problem in the history of physics which was first solved by an excellent mathematician named Jean Bernoulli. He posed this problem as a challenge to the greatest mathematicians of Europe during the period of the Renaissance. He stated the problem as such:
We are given two fixed points in a vertical plane. A particle starts from rest at one of the points and travels to the other under its own weight. Find the path that the particle must follow in order to reach its destination in the briefest time.
In other words, if a particle's initial position is \((x_1,y_1)\) and it moves to a final position \((x_2,y_2)\) under only the action of gravity, the problem is to find the particular path \(x(y)\) on the plane which, if the particle moved along that path, it would get to its final position fastest and in the least time. Like many problems in physics, this one involves idealizations of the physical system under consideration\(^1\): in this problem, the object is approximated as a particle which is subjected only to the action of gravity. In all theoretical schemes in the past which were used to solve the brachistochrone problem, all of them necessarily had to obey fundamental laws such as the conservation of energy. The conservation of energy implies that (see below), no matter which path an object takes, the total velocity \(v\) of the particle must be a function of height \(y\) and of the form \(v(y)=\sqrt{2gy}\). The change in the particle’s velocity depends on only its change in height.
$$\frac{1}{2}mv^2=mgy⇒v(y)=\sqrt{2gy}.$$
Bernoulli used a very sophisticated procedure to solve the brachistocrone problem. He new from Fermat's principle that it is a law of nature that light always moves along the path of least time and that light passes through materials of varying indices of refraction in accordance with Snell's law. He imagined stacking infinitely many, very thin sections of materials on top of each other where the refractive index varies smoothly from one layer to another. The velocity of a light beam passing through such a stack of materials, from Snell's law, would also vary smoothly as a function of height and can be written as \(\vec{c}(y)\). This light beam would trace out a special path which is the path of least time for any object, not just light. This very creative way of thinking allowed Bernoulli to solve the brachistocrone problem.
But in this section, we'll use a much less sophisticated method to solve the brachistocrone problem than Bernoulli, Leibnez, Newton, and others. The goal of all the proceeding math of this section will be to express the quantity which we want to minimize (in this problem, the time \(t\)) as a functional which takes the form \(S(q_j(x),q_j’(x),x)=t(x(y),x'(y),x)\). Then after that there will be a little more math, the goal of which will be to express the integrand as a functional of the form \(F(q_j,q_j’,x)\). This will allow us to calculate all of the derivatives (in this problem, there are three of them) in the Euler-Lagrange equation; these simplifications will allow us to use the Euler-Lagrange equation (equations of motion) to solve for the path \(x(y)\) which minimizes the functional (which is the time \(t\)).
Let's start out with our first goal of expressing the time as a functional. As the particle falls under the action of gravity, it's velocity is constantly changing. However, as the particle falls by a very, very small amount and traverses an infinitesimally small displacement \(dS\), it's velocity is pretty much constant and we can write \(dS=vdt\). Since we eventually want to be able to express the time as a functional, let's rearrange this equation in terms of time: \(dt=\frac{dS}{v}\). Already, we have a rough idea of where we're trying to go with the math. First of all, we have a hunch that we'll probably have to take an integral since we want to express time as a functional of the form in Equation (3) from the section on the derivation of the Euler-Lagrange equation. So, let's do that:
$$\int_{t_1}^{t_2}dt=t=\int{\frac{dS}{v}}.\tag{1}$$
So we're one step closer to getting an equation which looks like Equation (3) from the derivation section, but the problem is that our integrand doesn't really look quite like the integrand in Equation (3). To achieve this goal of getting our integrand to look like the integrand in Equation (3), we'll need to express the velocity \(v\) and the displacement \(dS\) in terms of \(x(y)\), \(x'(y)\), and \(y\). To achieve that goal for the velocity \(v\), we can just use the conservation of energy as we did earlier to get \(v(y)=\sqrt{2gy}\). So if we substitute this equation into Equation (1), we'll get
$$\int_{t_1}^{t_2}dt=t=\int{\frac{dS}{\sqrt{2gy}}};\tag{2}$$
and you can see that we have now expressed the time \(t\) in terms of a \(y\) term. Even before using the Pythagorean theorem to rewrite \(dS\), by just going back to our derivation section (towards the beginning) you'll see that it will allow us to pick up a \(x'(y)\) term. But that aside, from the Pythagorean theorem, we have \(dS=\sqrt{1+\frac{dy}{dx}^2}dx\). If we substitute this into Equation (2), we get
$$t(x(y),x'(y),y)=∫\sqrt{1+\frac{dy}{dx}^2}(2gy)^{-1/2}dy=\frac{1}{\sqrt{2g}}\int{\sqrt{1+\biggl(\frac{dy}{dx}\biggl)^2y^{-1}}}dy.\tag{3}$$
So, you can see that both the time and the integrand in Equation (3) are functionals of the form we want. Thus, we can apply the analysis we used in our derivation to solve this problem. As a reminder—and I'm sure that by now this might seem like a broken record—the Euler-Lagrange equation is the condition which satisfies our functional being minimized. At this point, we basically just want to do a bunch of algebra to derive the motion \(x(y)\) which satisfies the Euler-Lagrange equation—this will give us the path of least time. The first step to doing this will be to evaluate all of the derivatives in the Euler-Lagrange equation as shown in the video and below:
$$\frac{∂L}{∂x}=\frac{∂}{∂x}\biggl(\sqrt{1+\biggl(\frac{dx}{dy}\biggl)^2}\text{ }y^{-1/2}\biggl)=0$$
$$\frac{d}{dy}\frac{∂}{∂x'}\sqrt{\frac{1+\frac{dx}{dy}^2}{y}}=0.$$
Let's evaluate the partial derivative with respect to \(x'\) to simplify the above equation to
$$\frac{d}{dy}\frac{(1+(x')^2)^{-1/2}}{\sqrt{y}}x'=0.$$
If the derivative of something is zero then that something must be constant and we get
$$\frac{(1+(x')^2)^{-1/2}}{\sqrt{y}}x'=C.$$
We shall let \(C=\frac{1}{\sqrt{2a}}\) and we'll see later that the motivation for doing this is to be able make trigonometric substitutions to simplify our equation. By substituting for \(C\) and doing some algebra, we get a series of simplifications as show in the video and below:
$$\frac{x'^2}{y(1+x'^2)}=\frac{1}{2a}$$
$$2ax'^2=y(1+x'^2)$$
$$(2a-y)x'^2=y$$
$$x'^2=\frac{y}{2a-y}$$
$$x'=\frac{y}{2a-y}.$$
Our goal this entire time has just been to isolate the solution \(x(y)\), and we see that we are starting to get very close. All we have to do is "undo the derivative" on the left-hand side, so to speak, and we can do that by taking the anti-derivative, or the integral, on both sides of the equation:
$$x(y)=\int{\frac{y}{2a-y}}dy.$$
As intimidating as the integral above might look, solving it essentially just boils down to making a few trigonometric substitutions and a lot of algebra. If you are unfamiliar with the trigonometric substitutions used in the video above, I suggest refreshing your trigonometry skills by checking out the Khan Academy's videos on trigonometry. I'll omit the substitutions and algebra used to solve the integral since all of those steps are shown in the video; but after doing all the necessary trigonometry and algebra, what you end up with are the equations below:
$$x(θ)=a(θ-cosθ)$$
$$y(θ)=a(θ-sinθ)+C.$$
As described in the video above, the constant \(C\) can be solved for by setting \(x\) and \(y\) equal to zero. Doing so, we find that \(C=0\). The constant \(a\) can also be solved for by substituting the appropriate conditions; but we won't worry about this and we'll just be concerned with what the graph of these parametric equations look like. The equations above simplify to
$$x(θ)=a(θ-cosθ)$$
$$y(θ)=a(θ-sinθ).$$
If we graph these equations, the curve we'll get is a cycloid. If a particle rolls down a frictionless (remember that we said the only force that can act on it is gravity) surface whose shape is that of a cycloid, it will fall from any arbitrary point along the surface to any other arbitrary point along the surface (at a lower height) in the least amount of time.
This article is licensed under a CC BY-NC-SA 4.0 license.
References
1. The Kaizen Effect. "Lagrangian Mechanics - Lesson 3: The Brachistochrone Problem". Online video clip. YouTube. YouTube, 21 May 2016. Web. 01 June 2016.
2. http://www.storyofmathematics.com/20th.html
Notes
1. 1. It is always necessary that some combination of certain law, theories, approximations, and idealizations are assumed to be true in order for meaningful results to be mathematically deduced. The Euler-Lagrange equation is a mathematical deduction which is based upon the assumption that very fundamental laws are true. The fact that it is based on fundamental laws which are (or at least seem to be) universal and not approximations and theoretical schemes which are limited to very special circumstances means that the Euler-Lagrange equation is universal and applicable to nearly all situations (as long as these fundamental laws do not break down. One might think that the inapplicability of this equation to situations involving non-conservative forces limits their generality; while this is in fact the case in practice, it is not in principle since on the most fundamental level, there are no non-conservative forces in nature. For example, on the most fundamental level, friction (which is treated as non-conservative on macroscopic scales and in practice) is just the result of many atoms and molecules influencing each other via electromagnetic interactions (which are conservative forces). |
It's hard to say just from the sheet music; not having an actual keyboard here. The first line seems difficult, I would guess that second and third are playable. But you would have to ask somebody more experienced.
Having a few experienced users here, do you think that limsup could be an useful tag? I think there are a few questions concerned with the properties of limsup and liminf. Usually they're tagged limit.
@Srivatsan it is unclear what is being asked... Is inner or outer measure of $E$ meant by $m\ast(E)$ (then the question whether it works for non-measurable $E$ has an obvious negative answer since $E$ is measurable if and only if $m^\ast(E) = m_\ast(E)$ assuming completeness, or the question doesn't make sense). If ordinary measure is meant by $m\ast(E)$ then the question doesn't make sense. Either way: the question is incomplete and not answerable in its current form.
A few questions where this tag would (in my opinion) make sense: http://math.stackexchange.com/questions/6168/definitions-for-limsup-and-liminf http://math.stackexchange.com/questions/8489/liminf-of-difference-of-two-sequences http://math.stackexchange.com/questions/60873/limit-supremum-limit-of-a-product http://math.stackexchange.com/questions/60229/limit-supremum-finite-limit-meaning http://math.stackexchange.com/questions/73508/an-exercise-on-liminf-and-limsup http://math.stackexchange.com/questions/85498/limit-of-sequence-of-sets-some-paradoxical-facts
I'm looking for the book "Symmetry Methods for Differential Equations: A Beginner's Guide" by Haydon. Is there some ebooks-site to which I hope my university has a subscription that has this book? ebooks.cambridge.org doesn't seem to have it.
Not sure about uniform continuity questions, but I think they should go under a different tag. I would expect most of "continuity" question be in general-topology and "uniform continuity" in real-analysis.
Here's a challenge for your Google skills... can you locate an online copy of: Walter Rudin, Lebesgue’s first theorem (in L. Nachbin (Ed.), Mathematical Analysis and Applications, Part B, in Advances in Mathematics Supplementary Studies, Vol. 7B, Academic Press, New York, 1981, pp. 741–747)?
No, it was an honest challenge which I myself failed to meet (hence my "what I'm really curious to see..." post). I agree. If it is scanned somewhere it definitely isn't OCR'ed or so new that Google hasn't stumbled over it, yet.
@MartinSleziak I don't think so :) I'm not very good at coming up with new tags. I just think there is little sense to prefer one of liminf/limsup over the other and every term encompassing both would most likely lead to us having to do the tagging ourselves since beginners won't be familiar with it.
Anyway, my opinion is this: I did what I considered the best way: I've created [tag:limsup] and mentioned liminf in tag-wiki. Feel free to create new tag and retag the two questions if you have better name. I do not plan on adding other questions to that tag until tommorrow.
@QED You do not have to accept anything. I am not saying it is a good question; but that doesn't mean it's not acceptable either. The site's policy/vision is to be open towards "math of all levels". It seems hypocritical to me to declare this if we downvote a question simply because it is elementary.
@Matt Basically, the a priori probability (the true probability) is different from the a posteriori probability after part (or whole) of the sample point is revealed. I think that is a legitimate answer.
@QED Well, the tag can be removed (if someone decides to do so). Main purpose of the edit was that you can retract you downvote. It's not a good reason for editing, but I think we've seen worse edits...
@QED Ah. Once, when it was snowing at Princeton, I was heading toward the main door to the math department, about 30 feet away, and I saw the secretary coming out of the door. Next thing I knew, I saw the secretary looking down at me asking if I was all right.
OK, so chat is now available... but; it has been suggested that for Mathematics we should have TeX support.The current TeX processing has some non-trivial client impact. Before I even attempt trying to hack this in, is this something that the community would want / use?(this would only apply ...
So in between doing phone surveys for CNN yesterday I had an interesting thought. For $p$ an odd prime, define the truncation map $$t_{p^r}:\mathbb{Z}_p\to\mathbb{Z}/p^r\mathbb{Z}:\sum_{l=0}^\infty a_lp^l\mapsto\sum_{l=0}^{r-1}a_lp^l.$$ Then primitive roots lift to $$W_p=\{w\in\mathbb{Z}_p:\langle t_{p^r}(w)\rangle=(\mathbb{Z}/p^r\mathbb{Z})^\times\}.$$ Does $\langle W_p\rangle\subset\mathbb{Z}_p$ have a name or any formal study?
> I agree with @Matt E, as almost always. But I think it is true that a standard (pun not originally intended) freshman calculus does not provide any mathematically useful information or insight about infinitesimals, so thinking about freshman calculus in terms of infinitesimals is likely to be unrewarding. – Pete L. Clark 4 mins ago
In mathematics, in the area of order theory, an antichain is a subset of a partially ordered set such that any two elements in the subset are incomparable. (Some authors use the term "antichain" to mean strong antichain, a subset such that there is no element of the poset smaller than 2 distinct elements of the antichain.)Let S be a partially ordered set. We say two elements a and b of a partially ordered set are comparable if a ≤ b or b ≤ a. If two elements are not comparable, we say they are incomparable; that is, x and y are incomparable if neither x ≤ y nor y ≤ x.A chain in S is a...
@MartinSleziak Yes, I almost expected the subnets-debate. I was always happy with the order-preserving+cofinal definition and never felt the need for the other one. I haven't thought about Alexei's question really.
When I look at the comments in Norbert's question it seems that the comments together give a sufficient answer to his first question already - and they came very quickly. Nobody said anything about his second question. Wouldn't it be better to divide it into two separate questions? What do you think t.b.?
@tb About Alexei's questions, I spent some time on it. My guess was that it doesn't hold but I wasn't able to find a counterexample. I hope to get back to that question. (But there is already too many questions which I would like get back to...)
@MartinSleziak I deleted part of my comment since I figured out that I never actually proved that in detail but I'm sure it should work. I needed a bit of summability in topological vector spaces but it's really no problem at all. It's just a special case of nets written differently (as series are a special case of sequences). |
Conjugate Heat Transfer
In this blog post we will explain the concept of
conjugate heat transfer and show you some of its applications. Conjugate heat transfer corresponds with the combination of heat transfer in solids and heat transfer in fluids. In solids, conduction often dominates whereas in fluids, convection usually dominates. Conjugate heat transfer is observed in many situations. For example, heat sinks are optimized to combine heat transfer by conduction in the heat sink with the convection in the surrounding fluid.
Heat Transfer by Solids and Fluids Heat Transfer in a Solid
In most cases, heat transfer in solids, if only due to conduction, is described by Fourier’s law defining the conductive heat flux,
q, proportional to the temperature gradient: q=-k\nabla T.
For a time-dependent problem, the temperature field in an immobile solid verifies the following form of the heat equation:
Heat Transfer in a Fluid
Due to the fluid motion, three contributions to the heat equation are included:
The transport of fluid implies energy transport too, which appears in the heat equation as the convective contribution. Depending on the thermal properties on the fluid and on the flow regime, either the convective or the conductive heat transfer can dominate. The viscous effects of the fluid flow produce fluid heating. This term is often neglected, nevertheless, its contribution is noticeable for fast flow in viscous fluids. As soon as a fluid density is temperature-dependent, a pressure work term contributes to the heat equation. This accounts for the well-known effect that, for example, compressing air produces heat.
Accounting for these contributions, in addition to conduction, results in the following transient heat equation for the temperature field in a fluid:
Conjugate Heat Transfer Applications Effective Heat Transfer
Efficiently combining heat transfer in fluids and solids is the key to designing effective coolers, heaters, or heat exchangers.
The fluid usually plays the role of energy carrier on large distances. Forced convection is the most common way to achieve high heat transfer rate. In some applications, the performances are further improved by combining convection with phase change (for example liquid water to vapor phase change).
Even so, solids are also needed, in particular to separate fluids in a heat exchanger so that fluids exchange energy without being mixed.
Flow and temperature field in a shell-and-tube heat exchanger illustrating heat transfer between two fluids separated by the thin metallic wall.
Heat sinks are usually made of metal with high thermal conductivity (e.g. copper or aluminum). They dissipate heat by increasing the exchange area between the solid part and the surrounding fluid.
Temperature field in a power supply unit cooling due to an air flow generated by an extracting fan and a perforated grille. Two aluminum fins are used to increase the exchange area between the flow and the electronic components. Energy Savings
Heat transfer in fluids and solids can also be combined to minimize heat losses in various devices. Because most gases (especially at low pressure) have small thermal conductivities, they can be used as thermal insulators… provided they are not in motion. In many situations, gas is preferred to other material due to its low weight. In any case, it is important to limit the heat transfer by convection, in particular by reducing the natural convection effects. Judicious positioning of walls and use of small cavities helps to control the natural convection. Applied at the micro scale, the principle leads to the insulation foam concept where tiny cavities of air (bubbles) are trapped in the foam material (e.g. polyurethane), which combines high insulation performances with light weight.
Window cross section (left) and zoom-in on the window frame (right). Temperature profile in a window frame and glazing cross section from ISO 10077-2:2012 (thermal performance of windows). Fluid and Solid Interactions Fluid/Solid Interface
The temperature field and the heat flux are continuous at the fluid/solid interface. However, the temperature field can rapidly vary in a fluid in motion: close to the solid, the fluid temperature is close to the solid temperature, and far from the interface, the fluid temperature is close to the inlet or ambient fluid temperature. The distance where the fluid temperature varies from the solid temperature to the fluid bulk temperature is called the
thermal boundary layer. The thermal boundary layer size and the momentum boundary layer relative size is reflected by the Prandtl number (Pr=C_p \mu/k): for the Prandtl number to equal 1, thermal and momentum boundary layer thicknesses need to be the same. A thicker momentum layer would result in a Prandtl number larger than 1. Conversely, a Prandtl number smaller than 1 would indicate that the momentum boundary layer is thinner than the thermal boundary layer. The Prandtl number for air at atmospheric pressure and at 20°C is 0.7. That is because for air, the momentum and thermal boundary layer have similar size, while the momentum boundary layer is slightly thinner than the thermal boundary layer. For water at 20°C, the Prandtl number is about 7. So, in water, the temperature changes close to a wall are sharper than the velocity change. Normalized temperature (red) and velocity (blue) profile for natural convection of air close to a cold solid wall. Natural Convection
The natural convection regime corresponds to configurations where the flow is driven by buoyancy effects. Depending on the expected thermal performance, the natural convection can be beneficial (e.g. cooling application) or negative (e.g. natural convection in insulation layer).
The Rayleigh number, noted as Ra, is used to characterized the flow regime induced by the natural convection and the resulting heat transfer. The Rayleigh number is defined from fluid material properties, a typical cavity size, L, and the temperature difference,\Delta T, usually set by the solids surrounding the fluid:
The Grashof number is another flow regime indicator giving the ratio of buoyant to viscous forces:
The Rayleigh number can be expressed in terms of the Prandtl and the Grashof numbers through the relation Ra=Pr Gr.
When the Rayleigh number is small (typically <10
3), the convection is negligible and most of the heat transfer occurs by conduction in the fluid.
For a larger Rayleigh number, heat transfer by convection has to be considered. When buoyancy forces are large compared to viscous forces, the regime is turbulent, otherwise it is laminar. The transition between these two regimes is indicated by the critical order of the Grashof number, which is 10
9. The thermal boundary layer, giving the typical distance for temperature transition between the solid wall and the fluid bulk, can be approximated by \delta_\mathrm{T} \approx \frac{L}{\sqrt[4\,]{Ra}} when Pr is of order 1 or greater. Temperature profile induced by natural convection in a glass of cold water in contact with a hot surface . Forced Convection
The forced convection regime corresponds to configurations where the flow is driven by external phenomena (e.g. wind) or devices (e.g. fans, pumps) that dominate buoyancy effects.
In this case the flow regime can be characterized, similarly to isothermal flow, using the Reynolds number as an indicator,Re= \frac{\rho U L}{\mu}. The Reynolds number represents the ratio of inertial to viscous forces. At low Reynolds numbers, viscous forces dominate and laminar flow is observed. At high Reynolds numbers, the damping in the system is very low, giving small disturbances. If the Reynolds number is high enough, the flow field eventually ends up in turbulent regime.
The momentum boundary layer thickness can be evaluated, using the Reynolds number, by \delta_\mathrm{M} \approx \frac{L}{\sqrt{Re}}.
Streamlines and temperature profile around a heat sink cooling by forced convection. Radiative Heat Transfer
Radiative heat transfer can be combined with conductive and convective heat transfer described above.
In a majority of applications, the fluid is transparent to heat radiation and the solid is opaque. As a consequence, the heat transfer by radiation can be represented as surface-to-surface radiation transferring energy between the solid wall through transparent cavities. The radiative heat flux emitted by a diffuse gray surface is equal to \varepsilon n^2 \sigma T^4. When a surface is surrounded by bodies at a homogeneous T_\mathrm{amb}, the net radiative flux is q_\mathrm{r} = \varepsilon n^2 \sigma (T_\mathrm{amb}^4-T^4). When surrounding surfaces of different temperatures, each surface-to-surface exchange is determined by the surface’s view factors.
Nevertheless, both fluids and solids may be transparent or semitransparent. So radiation can occur in fluid and solids. In participating (or semitransparent) media, the radiation rays interact with the medium (solid or fluid) then absorb, emit, and scatter radiation.
Whereas radiative heat transfer can be neglected in applications with small temperature differences and lower emissivity, it plays a major role in applications with large temperature differences and large emissivities.
Comparison of temperature profiles for a heat sink with a surface emissivity \varepsilon = 0 (left) and \varepsilon = 0.9 (right). Conclusion
Heat transfer in solids and heat transfer in fluids are combined in the majority of applications. This is because fluids flow around solids or between solid walls, and because solids are usually immersed in a fluid. An accurate description of heat transfer modes, material properties, flow regimes, and geometrical configurations enables the analysis of temperature fields and heat transfer. Such a description is also the starting point for a numerical simulation that can be used to predict conjugate heat transfer effects or to test different configurations in order, for example, to improve thermal performances of a given application.
Notations
C_{p}: heat capacity at constant pressure (SI unit: J/kg/K)
g: gravity acceleration (SI unit: m/s
2)
Gr: Grashof number (dimensionless number)
k: thermal conductivity (SI unit: W/m/K)
L: characteristic dimension (SI unit: m)
n: refractive index (dimensionless number)
p_\mathrm{A}: absolute pressure (SI unit: Pa)
Pr: Prandtl number (dimensionless number)
q: heat flux (SI unit: W/m
2)
Q: heat source (SI unit: W/m
3)
Ra: Rayleigh number (dimensionless number)
S: strain rate tensor (SI unit: 1/s)
T: temperature field (SI unit:K)
T_\mathrm{amb}: ambient temperature (SI unit: K)
\bold{u}: velocity field (SI unit: m/s)
U: typical velocity magnitude (SI unit: m/s)
\alpha_{p}: thermal expansion coefficient (SI unit: 1/K)
\delta_\mathrm{M}: momentum boundary layer thickness (SI unit: m)
\delta_\mathrm{T}: thermal layer thickness (SI unit: m)
\Delta T: characteristic temperature difference (SI unit: K)
\varepsilon: surface emissivity (dimensionless number)
\rho: density (SI unit: kg/m
3)
\sigma: Stefan-Boltzmann constant (SI unit: W/m
2T 4)
\tau: viscous stress tensor (SI unit: N/m
2) Comments (27) CATEGORIES Chemical COMSOL Now Electrical Fluid General Interfacing Mechanical Today in Science TAGS CATEGORIES Chemical COMSOL Now Electrical Fluid General Interfacing Mechanical Today in Science |
In a
balanced chemical equation, the total number of atoms of each element present is the same on both sides of the equation. Stoichiometric coefficients are the coefficients required to balance a chemical equation. These are important because they relate the amounts of reactants used and products formed. The coefficients relate to the equilibrium constants because they are used to calculate them. For this reason, it is important to understand how to balance an equation before using the equation to calculate equilibrium constants. Introduction
There are several important rules for balancing an equation:
An equation can be balanced only by adjusting the coefficients. The equation must include only the reactants and products that participate in the reaction. Never change the equation in order to balance it. If an element occurs in only one compound on each side of the equation, try balancing this element first. When one element exists as a free element, balance this element last.
Example \(\PageIndex{1}\):
\[H_2\; (g) + O_2 \; (g) \rightleftharpoons H_2O \; (l) \nonumber \]
Because both reactants are in their elemental forms, they can be balanced in either order. Consider oxygen first. There are two atoms on the left and one on the right. Multiply the right by 2 \[H_2(g) + O_2(g) \rightleftharpoons 2H_2O(l) \nonumber \] Next, balance hydrogen. There are 4 atoms on the right, and only 2 atoms on the left. Multiply the hydrogen on left by 2 \[2H_2(g) + O_2(g) \rightleftharpoons 2H_2O(l)\nonumber \] Check the stoichiometry. Hydrogen: on the left, 2 x 2 = 4; on right 2 x 2= 4. Oxygen: on the left: 1 x 2 = 2; on the right 2 x 1 = 2 . All atoms balance, so the equation is properly balanced.
\[2H_2(g) + O_2(g) \rightleftharpoons 2H_2O(l)\nonumber \]
Example \(\PageIndex{2}\):
\[Al \; (s) + MnSO_4 \; (aq) \rightleftharpoons Al_2(SO_4)_3 + Mn ; (s) \nonumber \]
First, consider the SO 4 2 - ions. There is one on the left side of the equation, and three on the right side. Add a coefficient of three to the left side. \[Al(s) + 3MnSO_4(aq) \rightleftharpoons Al_2(SO_4)_3 + Mn(s) \nonumber \] Next, check the Mn atoms. There is one on the right side, but now there are three on the left side from the previous adjustment. Add a coefficient of three on the right side. \[Al(s) + 3MnSO_4(aq) \rightleftharpoons Al_2(SO_4)_3 + 3Mn(s)\nonumber \] Consider Al. There is one atom on the left side and two on the right side. Add a coefficient of two on the left side. Make sure there are equal numbers of each atom on each side. \[2Al(s) + 3MnSO_4(aq) \rightleftharpoons Al_2(SO_4)_3 + 3 Mn(s)\nonumber \]
Example \(\PageIndex{3}\):
\[P_4S_3 + KClO_3 \rightleftharpoons P_2O_5 + KCl + SO_2 \nonumber \]
This problem is more difficult. First, look at the P atoms. There are four on the reactant side and two on the product side. Add a coefficient of two to the product side. \[P_4S_3 + KClO_3 \rightleftharpoons 2P_2O_5 + KCl + SO_2\nonumber \] Next, consider the sulfur atoms. There are three on the left and one on the right. Add a coefficient of three to the right side. \[P_4S_3 + KClO_3 \rightleftharpoons 2P_2O_5 + KCl + 3SO_2\nonumber \] Now look at the oxygen atoms. There are three on the left and 16 on the right. Adding a coefficient of 16 to the KClO 3 on the left and the KCl on the right preserves equal numbers of K and Cl atoms, but increases the oxygen. \[P_4S_3 + 16KClO_3 \rightleftharpoons 2P_2O_5 + 16KCl + 3 SO_2\nonumber \] Tripling the other three species (P 4S 3, P 2O 5, and SO 2) balances the rest of the atoms. \[3P_4S_3 + 16 KClO_3 \rightleftharpoons 2(3)P_2O_5 + 16KCl + 3(3)SO_2\nonumber \] Simplify and check. \[3P_4S_3 + 16KClO_3 \rightleftharpoons 6P_2O_5 + 16KCl + 9SO_2\nonumber \] Chemical Equilibrium
Balanced chemical equations can now be applied to the concept of chemical equilibrium, the state in which the reactants and products experience no net change over time. This occurs when the forward and reverse reactions occur at equal rates. The equilibrium constant is used to determine the amount of each compound that present at equilibrium. Consider a chemical reaction of the following form:
\[ aA + bB \rightleftharpoons cC + dD\nonumber \]
For this equation, the equilibrium constant is defined as:
\[ K_c = \dfrac{[C]^c [D]^d}{[A]^a [B]^b} \nonumber \]
The activities of the products are in the numerator, and those of the reactants are in the denominator. For K
c, the activities are defined as the molar concentrations of the reactants and products ([A], [B] etc.). The lower case letters are the stoichiometric coefficients that balance the equation.
An important aspect of this equation is that pure liquids and solids are not included. This is because their activities are defined as one, so plugging them into the equation has no impact. This is due to the fact that pure liquids and solids have no effect on the physical equilibrium; no matter how much is added, the system can only dissolve as much as the solubility allows. For example, if more sugar is added to a solution after the equilibrium has been reached, the extra sugar will not dissolve (assuming the solution is not heated, which would increase the solubility). Because adding more does not change the equilibrium, it is not accounted for in the expression.
K is related to to the Balanced Chemical Reaction
The following are concepts that apply when adjusting K in response to changes to the corresponding balanced equation:
When the equation is reversed, the value of K is inverted. When the coefficients in a balanced equation are multiplied by a common factor, the equilibrium constant is raised to the power of the corresponding factor. When the coefficients in a balanced equation are divided by a common factor, the corresponding root of the equilibrium constant is taken. When individual equations are combined, their equilibrium constants are multiplied to obtain the equilibrium constant for the overall reaction.
A balanced equation is very important in using the constant because the coefficients become the powers of the concentrations of products and reactants. If the equation is not balanced, then the constant is incorrect.
K IS ALSO RELATED TO THE BALANCED CHEMICAL EQUATION OF GASES
For gas-phase equilibria, the equation is a function of the reactants' and products' partial pressures. The equilibrium constant is expressed as follows:
\[ K_p = \dfrac{P_C^c P_D^d}{P_A^a P_B^b} \nonumber \]
P represents partial pressure, usually in atmospheres. As before, pure solids and liquids are not accounted for in the equation. K
c and K p are related by the following equation:
\[ K_p = K_c(RT)^{\Delta n} \nonumber \]
where
\[ \Delta n = (c+d) - (a+b) \nonumber \]
This represents the change in gas molecules. a,b,c and d are the stoichiometric coefficients of the gas molecules found in the balanced equation.
Neither K
c nor K p have units. This is due to their formal definitions in terms of activities. Their units cancel in the calculation, preventing problems with units in further calculations.
Example \(\PageIndex{4}\): Using K
c
\[ PbI_2 \rightleftharpoons Pb \; (aq) + I \; (aq) \nonumber \]
First, balance the equation.
Check the Pb atoms. There is one on each side, so lead can be left alone for now. Next check the I atoms. There are two on the left side and one on the right side. To fix this, add a coefficient of two to the right side. \[PbI_2 \rightleftharpoons Pb^{2+}(aq) + 2I^-(aq)\nonumber \] Check to make ensure the numbers are equal. \[PbI_2 \rightleftharpoons Pb^{2+}(aq) + 2I^-(aq)\nonumber \]
Next, calculate find Kc. Use these concentrations: Pb
- 0.3 mol/L, I - 0.2 mol/L, PbI 2- 0.5 mol/L
\[ K_c = \dfrac{(0.3) * (0.2)^2}{(0.5)} \nonumber \]
\[K_c= 0.024\nonumber \]
Note: If the equation had not been balanced when the equilibrium constant was calculated, the concentration of I
- would not have been squared. This would have given an incorrect answer.
utomatic number to work, you need to add the "AutoNum" template (preferably at the end) to the page.
Example \(\PageIndex{5}\)
\[SO_2 \; (g) + O_2 \; (g) \rightleftharpoons SO_3 \; (g) \nonumber \]
First, make sure the equation is balanced.
Check to make sure S is equal on both sides. There is one on each side. Next look at the O. There are four on the left side and three on the right. Adding a coefficient to the O 2 on the left is ineffective, as the S on right must also be increased. Instead, add a coefficient to the SO 2 on the left and the SO 3 on the right. \[2SO_2 + O_2 \rightleftharpoons 2SO_3\nonumber \] The equation is now balanced. \[2SO_2 + O_2 \rightleftharpoons 2SO_3\nonumber \]
Calculate K
p. The partial pressures are as follows: SO 2- 0.25 atm, O 2- 0.45 atm, SO 3- 0.3 atm
\( K_p = \dfrac{(0.3)^2}{(0.25)^2 \times (0.45)} \)
\( K_p= 3.2\)
Contributors Charlotte Hutton, Sarah Reno, Curtis Kortemeier |
Nano Express Open Access Published: Improvement of Bipolar Switching Properties of Gd:SiO x RRAM Devices on Indium Tin Oxide Electrode by Low-Temperature Supercritical CO 2 Treatment Nanoscale Research Letters volume 11, Article number: 52 (2016) Article metrics
1362 Accesses
5 Citations
1 Altmetric
Abstract
Bipolar switching resistance behaviors of the Gd:SiO
2 resistive random access memory (RRAM) devices on indium tin oxide electrode by the low-temperature supercritical CO 2-treated technology were investigated. For physical and electrical measurement results obtained, the improvement on oxygen qualities, properties of indium tin oxide electrode, and operation current of the Gd:SiO 2 RRAM devices were also observed. In addition, the initial metallic filament-forming model analyses and conduction transferred mechanism in switching resistance properties of the RRAM devices were verified and explained. Finally, the electrical reliability and retention properties of the Gd:SiO 2 RRAM devices for low-resistance state (LRS)/high-resistance state (HRS) in different switching cycles were also measured for applications in nonvolatile random memory devices. Background
Many nonvolatile memory devices for ferroelectric random access memory (FeRAM), magnetic random access memory (MRAM), and phrase change memory (PCM) are widely discussed for applications in the smart memory cards, electronic devises, and portable electrical devices [1–8]. Among these memory devices, various metals doped into silicon-based oxide thin films are widely and considerably discussed for the resistive random access memory (RRAM) devices because of its great compatibility in integrated circuit (IC) processes, high operation speed, long retention time, and low operation voltage [9–13]. Recently, the transparent ITO electrode of the various memory devices are widely discussed and investigated because of its compatibility and integrated in system on panel concept applications [14–17]. The high thermal budget and fabrication cost of rapid temperature annealing (RTA) and conventional furnace annealing (CFA) post-treatment methods were widely used for applications in dielectric thin films reformed and passivated the defects [15–18]. However, the excellent liquid-like properties of the supercritical CO
2 fluid (SCF) process have attracted considerable research in efficiently transporting H 2O molecules diffusion into the microstructures of thin films at a low-temperature treatment [19–21].
To discuss the SCF-treated ITO electrode on bipolar switching properties of RRAM devices, the ITO/Gd:SiO
2/TiN structure was treated by low-temperature SCF treatment. In addition, the electrical transferred conduction mechanism of the initial metallic filament-forming model was explained to bipolar switching properties of RRAM devices on ITO electrode in this study. Methods
The metal-insulator-metal (MIM) structure of Gd:SiO
2 thin film RRAM devices was fabricated and prepared by SiO 2 and gadolinium co-sputtering technology on the TiN/Ti/SiO 2/Si substrate. The sputtering power was fixed with an rf power of 200 W and a DC power of 10 W. The 200-nm-thick ITO electrode was deposited on Gd:SiO 2 film to form ITO/Gd:SiO 2/TiN structure. In addition, the ITO/Gd:SiO 2/TiN structure sample was placed in the supercritical fluid system, which was mixed with 5 vol.% pure H 2O and 5 vol.% propyl alcohol, injected at 3000 psi and 150 °C for 2 h. The bipolar switching operation current versus applied voltage ( I– V) characteristics of Gd:SiO 2 RRAM devices are measured by Agilent B1500 semiconductor parameter analyzer. The X-ray photoelectron spectroscopy (XPS) is used to analyze the chemical composition and bonding of thin films, respectively. Results and Discussion
To investigate the SCF-treated ITO electrode effect, the bipolar resistance switching behavior of the Gd:SiO
2 RRAM devices was discussed and observed in Fig. 1. After the initial forming process of −10 V in Fig. 1 b, the Gd:SiO 2 RRAM devices exhibited a low-resistance state (LRS). Then, a high-resistance state (HRS) was forming by high negative bias. To define the set process state, the RRAM devices exhibited the LRS for applying a large negative bias than the set voltage. For reset process state, a gradual current decrease was presented in LRS to HRS for the bias to positive over the reset voltage. For inverted set/reset state properties of the Gd:SiO 2 RRAM devices, we suggested the transferred electron early captured by the lots of oxygen vacancy in top ITO electrode and formed the oppositely metallic filament [22]. The operation current of the Gd:SiO 2 RRAM devices for using SCF-treated ITO electrode was lower than that for the nontreated electrode of others. In order to further discuss the initial metallic filament path diagram model, the electrical transferred mechanisms of RRAM devices for the SCF-treated ITO electrode were discussed and investigated.
According to the relationship of the Schottky emission equation, \( J=A*{T}^2 \exp \left[-q\left({\phi}_{\mathrm{B}}-\sqrt{\raisebox{1ex}{$q{E}_{\mathrm{i}}$}\!\left/ \!\raisebox{-1ex}{$4\uppi {\varepsilon}_{\mathsf{i}}$}\right.}\right)/KT\right] \), where
T is the absolute temperature, Φ B is the Schottky barrier height, ε i is the insulator permittivity, K is Boltzmann’s constant, and A* is Richardson constant. The I–V switching curve of the Gd:SiO 2 RRAM devices was transferred to ln( I/ T 2) − V 1/2 and ln( I) − ln( V) curve to fit the Schottky emission and the ohmic conduction mechanism. In Fig. 2, the Gd:SiO 2 RRAM devices for LRS/HRS in the set state exhibited the ohmic conduction mechanism for low applied voltage. In Fig. 2 a for 0.3~0.5 V, the LRS/HRS of Gd:SiO 2 RRAM devices all exhibited the Schottky emission conduction by ln( I/ T 2) − V 1/2 curve fitting for the temperature of 300–350 K [23, 24]. If the J– E curves obey the Schottky emission model, the fitting curves should be straight in this figure. In Fig. 3 a , the LRS/HRS of Gd:SiO 2 RRAM devices in the reset state also exhibited the ohmic conduction mechanism by ln( I) − ln( V) curve and the Schottky emission conduction mechanism by ln( I/ T 2) − V 1/2 curve fitting.
To analyze the oxygen element of the chemical composition characteristics in ITO electrode, the mole fraction of stannum (Sn), indium (In), and oxygen (O), in the ITO thin film was 5.08, 47.76, and 47.15 %, respectively, calculated from the peak areas of XPS spectra. For the SCF-treated ITO electrode, we found that the mole fraction of Sn, In, and O elements was 4.7, 18.32, and 76.98 %, respectively. The mole fraction of the oxygen element increased from 47.15 to 76.98 %. The increase of oxygen ion qualities and decrease of the electric conductivity of SCF-treated ITO electrode were also proved and verified in the XPS spectra. In Fig. 1
b , the In 1+3d 5/2 peaks of ITO electrode that shifted two valences to In 3+3d 5/2 effect was caused and improved by oxidation ability and binding energy of SCF treatment. The oxidation ability and repaired damaged effect of ITO electrode of Gd:SiO 2 RRAM devices improved by SCF treatment process were found [15–17].
As discussed above, the electrical transferred mechanisms of
I– V curves results, the metal filament path diagram model of the Gd:SiO 2 RRAM devices was described. To the initial metallic filament path-forming process for the negative applied voltage, the uniform oxygen ions existed in Gd:SiO 2 thin film of the RRAM devices for the set state are shown in Fig. 4a. To continuously apply negative voltage, lots of oxygen ions were accompanied into the ITO electrode. The metallic filament path increased and exhibited Schottky emission conduction mechanism. In Fig. 4b, the oxygen ions in ITO electrode return back to Gd:SiO 2 thin film for the initial reset state exhibited the ohmic conduction mechanism for the low voltage applied. Then, the metallic filament path was decreased by oxygen ion oxidation and exhibited Schottky emission conduction mechanism for continuously applying positive voltage.
For the electrical reliability properties, the on/off ratio in
I– V curves of the Gd:SiO 2 RRAM devices was measured and obtained for the different switching cycle. In Fig. 2 b , no significant changes in the current values for 10 4 s were observed. In addition, the switching cycling measured another type of the retention characteristics shown in Fig. 3 b . The slight fluctuation of the resistance in the LRS/HRS and the stable switching property of 10 5 cycles exhibited the reliability properties of the nonvolatile Gd:SiO 2 RRAM devices applications. Conclusions
In conclusion, the bipolar resistance switching characteristics and low power consumption of Gd:SiO
2 RRAM devices for ITO top electrode were achieved by using a low-temperature supercritical CO 2 treatment. The switching resistance mechanisms in the SCF-treated ITO electrode of RRAM devices for HRS/LRS were proved and investigated by electrical transferred mechanisms and a metallic filament path diagram model. Finally, no significant changes of the operation current of the electrical reliability properties in Gd:SiO 2 RRAM devices for on/off state were maintained to 10 4 s. For the retention characteristics, the slight fluctuation of resistance in the LRS/HRS states and the stable switching property of 10 5 cycles were also found. References 1.
Yang PC, Chang TC, Chen SC, Lin YS, Huang HC, Gan DS (2011) Influence of bias-induced copper diffusion on the resistive switching characteristics of SiON thin film. Electrochem Solid State Lett 14(2):H93–H95
2.
Syu YE, Chang TC, Tsai TM, Hung YC, Chang KC, Tsai MJ, Kao MJ, Sze SM (2011) Redox reaction switching mechanism in RRAM device with Pt/CoSiO
X/TiN structure. IEEE Electron Device Lett 32(4):545–547 3.
Feng LW, Chang CY, Chang YF, Chen WR, Wang SY, Chiang PW, Chang TC (2010) A study of resistive switching effects on a thin FeO
xtransition layer produced at the oxide/iron interface of TiN/SiO 2/Fe-contented electrode structures. Appl Phys Lett 96:052111 4.
Feng LW, Chang CY, Chang YF, Chang TC, Wang SY, Chen SC, Lin CC, Chen SC, Chiang PW (2010) Improvement of resistance switching characteristics in a thin FeOx transition layer of TiN/SiO
2/FeO x/FePt structure by rapid annealing. Appl Phys Lett 96:222108 5.
Chen MC, Chang TC, Tsai CT, Huang SY, Chen SC, Hu CW, Sze SM, Tsai MJ (2010) Influence of electrode material on the resistive memory switching property of indium gallium zinc oxide thin films. Appl Phys Lett 96:262110
6.
Yang CF, Chen KH, Chen YC, Chang TC (2007) Fabrication and study on one-transistor-capacitor structure of nonvolatile random access memory TFT devices using ferroelectric gated oxide film. IEEE Trans Ultrason Ferroelectr Freq Control 54:1726–1730
7.
Yang CF, Chen KH, Chen YC, Chang TC (2008) Physical and electrical characteristics of Ba(Zr
0.1Ti 0.9)O 3thin films under oxygen plasma treatment for applications in nonvolatile memory devices. Applied Physics A 90:329 8.
Chen KH, Chen YC, Chen ZS, Yang CF, Chang TC (2007) Temperature and frequency dependence of the ferroelectric characteristics of Ba(Zr
0.1Ti 0.9)O 3thin films for nonvolatile memory applications. Applied Physics A 89:533 9.
Liu Q, Long S, Wang W, Zuo Q, Zhang S, Chen J, Liu M (2009) Improvement of resistive switching properties in ZrO2-based ReRAM with implanted Ti ions. IEEE Electron Device Lett 30(12):1335–1337
10.
Ming L, Abid Z, Wei W, Xiaoli H, Qi L, Weihua G (2009) Multilevel resistive switching with ionic and metallic filaments. Appl Phys Lett 94:233106
11.
Xinghua L, Zhuoyu J, Deyu T, Liwei S, Jiang L, Ming L, Changqing X (2009) Organic nonpolar nonvolatile resistive switching in poly(3,4-ethylene-dioxythiophene): polystyrenesulfonate thin film. Org Electron 10(6):1191–1194
12.
Zhang S, Long S, Guan W, Liu Q, Wang Q, Liu M (2009) Resistive switching characteristics of MnOx-based ReRAM. J Phys D Appl Phys 42:055112
13.
Wang Y, Liu Q, Long S, Wang W, Wang Q, Zhang M, Zhang S, Li Y, Zuo Q, Yang J, Liu M (2010) Investigation of resistive switching in Cu-doped HfO2 thin film for multilevel non-volatile memory applications. Nanotechnology 21:045202
14.
Shih CC, Chang KC, Chang TC, Tsa TM, Zhang R, Chen JH, Chen KH, Young TF, Chen HL, Lou JC, Chu TJ, Huang SY, Bao DH, Sze SM (2014) Resistive switching modification by ultraviolet illumination in transparent electrode resistive random access memory. IEEE Electron Device Lett 35(6):633–635
15.
Yang FW, Chen KH, Cheng CM, Su FY (2013) Bipolar resistive switching properties in transparent vanadium oxide resistive random access memory. Ceram Inter 39(1):S729–S732
16.
Chen KH, Liao CH, Tsai JH, Wu S (2013) Electrical conduction and bipolar switching properties in transparent vanadium oxide resistive random access memory (RRAM) devices. Appl Phys A 110(1):211–216
17.
Chen KH, Huang JW, Cheng CM, Lin JY, Wu TS (2014) Nonvolatile transparent manganese oxide thin film resistance random access memory devices. Jpn J Appl Phys 53:08NL03
18.
Tsai CT, Chang TC, Liu PT, Yang PY, Kuo YC, Kin KT, Chang PL, Huang FS (2007) Low-temperature method for enhancing sputter-deposited HfO2 films with complete oxidization. Appl Phys Lett 91(1):012109
19.
Tsai CT, Chang TC, Kin KT, Liu PT, Yang PY, Weng CF, Huang FS (2008) A low temperature fabrication of HfO
2films with supercritical CO2 fluid treatment. J Appl Phys 103(7):074108 20.
Chen MC, Chang TC, Huang SY, Chang KC, Li HW, Chen SC, Lu J, Shi Y (2009) A low-temperature method for improving the performance of sputter-deposited ZnO thin-film-transistors with supercritical fluid. Appl Phys Lett 94:162111
21.
Chen KH, Chang TC, Chang GC, Hsu YE, Chen YC, Xu HQ (2010) Low temperature improvement method on characteristics of Ba(Zr
0.1Ti 0.9)O 3thin films deposited on indium tin oxide/glass substrates. Applied Physics A 99:291–295 22.
Zhang R, Chang KC, Chang TC, Tsai TM, Huang SY, Chen WJ, Chen KH, Lou JC, Chen JH, Young TF, Chen MC, Chen HL, Liang SP, Syu YE, Sze SM (2014) Characterization of oxygen accumulation in indium-tin-oxide for resistance random access memory. IEEE Electron Device Lett 35(6):630–632
23.
Long S, Perniola L, Cagli C, Buckley J, Lian X, Miranda E, Pan F, Liu M, Suñé J (2013) Voltage and power-controlled regimes in the progressive unipolar RESET transition of HfO
2-based RRAM. Scientific Reports 3:2929 24.
Long S, Lian X, Cagli C, Cartoixà X, Rurali R, Miranda E, Jiménez D, Perniola L, Liu M, Suñé J (2013) Quantum-size effects in hafnium-oxide resistive switching. Appl Phys Lett 102:183505
Acknowledgements
This work was performed at the National Science Council Core Facilities Laboratory for Nano-Science and Nano-Technology in the Kaohsiung-Pingtung area and was supported by the National Science Council of the Republic of China under Contract Nos. MOST 104-2633-E-272 -001 -MY2, and MOST 103-2633-E-272 -001.
Additional information Competing interests
The authors declare that they have no competing interests.
Authors’ contributions
KHC and KCC designed and performed the experimental work, explained the obtained results, and wrote the paper. TCC, TMT, conceived the study and participated in its design and coordination. KHC, SPL, and TFY, helped in writing of the paper and participated in the experimental work. All authors read and approved the final manuscript. |
This post describes some generative machine learning algorithms.
Gaussian Discriminant Analysis
Gaussian Discriminant Analysis is a
supervised classification algorithm. In this algorithm we suppose the p(x|y) is multivariate Gaussian with parameter mean \(\overrightarrow{u}\) and covariance \(\Sigma : P(x|y) \sim N(\overrightarrow{u}, \Sigma) \).
If we suppose that y is Bernoulli distributed, then there are two distributions that can be defined, one for p(x|y=0) and one for p(x|y=1). The separator between these two distributions could be used to predict the values of y.\(P(x|y=0) = \frac{1}{(2\pi)^{\frac{n}{2}}|\Sigma|^{\frac{1}{2}}} exp(-\frac{1}{2}(x – \mu_0)^T\Sigma^{-1}(x – \mu_0))\) \(P(x|y=1) = \frac{1}{(2\pi)^{\frac{n}{2}}|\Sigma|^{\frac{1}{2}}} exp(-\frac{1}{2}(x – \mu_1)^T\Sigma^{-1}(x – \mu_1))\) \(P(y) = \phi^y.(1-\phi)^{1-y}\)
We need to find \(\phi, \mu_0, \mu_1, \Sigma\) that maximize the log-likelihood function:\(l(\phi, \mu_0, \mu_1, \Sigma) = log(\prod_{i=1}^{m} P(y^{(i)}|x^{(i)};\phi, \mu_0, \mu_1, \Sigma))\)
Using the Naive Bayes rule, we can transform the equation to:\(= log(\prod_{i=1}^{m} P(x^{(i)} | y^{(i)};\mu_0, \mu_1, \Sigma) * P(y^{(i)};\phi))) – log(\prod_{i=1}^{m} P(x^{(i)}))\)
To maximize l, we need to maximize: \(log(\prod_{i=1}^{m} P(x^{(i)} | y^{(i)};\mu_0, \mu_1, \Sigma) * P(y^{(i)};\phi)))\)
For new data, to predict y, we need to calculate \(P(y=1|x) = \frac{P(x|y=1) * P(y=1)}{P(x|y=0)P(y=0) + P(x|y=1)P(y=1)}\) (P(y=0) and P(y=1) can be easily calculated from training data).
EM Algorithm
EM algorithm is an
unsupervised clustering algorithm in which we assign a label to each training example. The clustering applied is soft clustering, which means that for each point we assign a probability for each label.
For a point x, we calculate the probability for each label.
There are two main steps in this algorithm:
E-Step: Using the current estimate of parameters, calculate if points are more likely to come from the red distribution or the blue distribution. M-Step: Re-estimate the parameters.
Given a training set \(S = \{x^{(i)}\}_{i=1}^m\). For each \(x^{(i)}\), there is hidden label \(z^{(i)}\) assigned to it.
Z is a multinomial distribution with \(P(z = j)=?_j\).
k the number of distinct values of Z (number of classes).
For each \(x^{(i)}\), we would like to know the value \(z^{(i)}\) that is assigned to it. In other words, find \(l \in [1, k]\) that maximizes the probability \(P(z^{(i)} = l | x^{(i)})\)
Log-likelihood function to maximize: \(l(?) = log(\prod_{i=1}^m P(x^{(i)}; ?)) = \sum_{i=1}^m log(P(x^{(i)}; ?))\)\( = \sum_{i=1}^m log(\sum_{l=1}^k P(x^{(i)}|z^{(i)} = l; ?) * P(z^{(i)} = l))\) \( = \sum_{i=1}^m log(\sum_{l=1}^k P(x^{(i)}, z^{(i)} = l; ?))\)
Let’s define \(Q_i\) as a probability distribution over Z: (\(Q_i(z^{(i)} = l)\) > = 0 and \(\sum_{l=1}^k Q_i(z^{(i)} = l) = 1\))\(l(?) = \sum_{i=1}^m log(\sum_{l=1}^k Q_i(z^{(i)} = l) \frac{P(x^{(i)}, z^{(i)} = l; ?)}{Q_i(z^{(i)} = l)})\) \( = \sum_{i=1}^m log(E_{z^{(i)} \sim Q_i}[\frac{P(x^{(i)}, z^{(i)}; ?)}{Q_i(z^{(i)})}])\)
Log is concave function. Based on Jensen’s inequality log(E[X]) >= E[log(X)].
Then:
\(l(?) >= \sum_{i=1}^m E_{z^{(i)} \sim Q_i}[log(\frac{P(x^{(i)}, z^{(i)}; ?)}{Q_i(z^{(i)})})]\) \( = \sum_{i=1}^m \sum_{l=1}^k Q_i(z^{(i)}=l) log(\frac{P(x^{(i)}, z^{(i)}=l; ?)}{Q_i(z^{(i)}=l)})\)
If we set \(Q_i(z^{(i)}) = P(z^{(i)}| x^{(i)}; ?)\) then: \(\frac{P(x^{(i)}, z^{(i)}; ?)}{Q(z^{(i)})}\) \(= \frac{P(z^{(i)}| x^{(i)}; ?) * P(x^{(i)}; ?)}{P(z^{(i)}| x^{(i)}; ?)}\) \(= P(x^{(i)}; ?)\) = Constant (over z). In that case log(E[X]) = E[log(X)] and for the current estimate of ?, \(l(?) = \sum_{i=1}^m \sum_{l=1}^k Q_i(z^{(i)}=l) log(\frac{P(x^{(i)}, z^{(i)}=l; ?)}{Q_i(z^{(i)}=l)})\) (E-Step)
EM (Expectation-Maximization) algorithm for density estimation:
1-Initiliaze the parameters ?
2-For each i, set \(Q_i(z^{(i)}) = P(z^{(i)}| x^{(i)}; ?)\) (E-Step)
3-Update ? (M-Step)\(? := arg\ \underset{?}{max} \sum_{i=1}^m \sum_{l=1}^k Q_i(z^{(i)}=l) log(\frac{P(x^{(i)}, z^{(i)}=l; ?)}{Q_i(z^{(i)}=l)})\)
Mixture of Gaussians
Given a value \(z^{(i)}=j\), If x is distributed normally \(N(?_j, ?_j)\), then:\(Q_i(z^{(i)} = j) = P(z^{(i)} = j| x^{(i)}; ?)\) \(= \frac{P(x^{(i)} | z^{(i)}=j) * P(z^{(i)} = j)}{\sum_{l=1}^k P(x^{(i)} | z^{(i)}=l) * P(z^{(i)}=l)}\) \(= \frac{\frac{1}{{(2?)}^{\frac{n}{2}} |?_j|^\frac{1}{2}} exp(-\frac{1}{2} (x^{(i)}-?_j)^T {?_j}^{-1} (x^{(i)}-?_j)) * ?_j}{\sum_{l=1}^k \frac{1}{{(2?)}^{\frac{n}{2}} |?_l|^\frac{1}{2}} exp(-\frac{1}{2} (x^{(i)}-?_l)^T {?_l}^{-1} (x^{(i)}-?_l)) * ?_l}\)
Using the method of Lagrange Multipliers, we can find \(?_j, ?_j, ?_j\) that maximize: \(\sum_{i=1}^m \sum_{l=1}^k Q_i(z^{(i)}=l) log(\frac{\frac{1}{{(2?)}^{\frac{n}{2}} |?_j|^\frac{1}{2}} exp(-\frac{1}{2} (x^{(i)}-?_j)^T {?_j}^{-1} (x^{(i)}-?_j)) * ?_j}{Q_i(z^{(i)}=l)})\) with the constraint \(\sum_{j=1}^k ?_j = 1\). We also need to set to zero the partial derivatives with respect to \(?_j\) and \(?_j\) and solve the equations to find the new values of these parameters. Repeat the operations until convergence.
More details about the algorithm can be found here: https://www.youtube.com/watch?v=ZZGTuAkF-Hw
Naive Bayes Classifier
Naive Bayes classifier is a
supervised classification algorithm. In this algorithm, we try to calculate P(y|x), but this probability can be calculated using the naive bayes rule:
P(y|x) = P(x|y).P(y)/P(x)
If x is the vector \([x_1,x_2,…,x_n]\), then:\(P(x|y) = P(x_1,x_2,…,x_n|y) = P(x_1|y) P(x_2|y, x_1) ….P(x_n|y, x_{n-1},…,x_1)\) \(P(x) = P(x1,x2,…,x_n)\)
If we suppose that \(x_i\) are conditionally independent (Naive Bayes assumption), then:\(P(x|y) = P(x_1|y) * P(x_2|y) * … * P(x_n|y)\) \(= \prod_{i=1}^n P(x_i|y)\) \(P(x) = \prod_{i=1}^n P(x_i)\)
If \(\exists\) i such as \(P(x_i|y)=0\), then P(x|y) = 0. To avoid that case we need to use Laplace Smoothing when calculating probabilities.
We suppose that \(x_i \in \{0,1\}\) (one-hot encoding: [1,0,1,…,0]) and y \(\in \{0,1\}\).
We define \(\phi_{i|y=1}, \phi_{i|y=0}, \phi_y\) such as:\(P(x_i=1|y=1) = \phi_{i|y=1} \\ P(x_i=0|y=1) = 1 – \phi_{i|y=1}\) \(P(x_i=1|y=0) = \phi_{i|y=0} \\ P(x_i=0|y=0) = 1 – \phi_{i|y=0}\) \(P(y) = \phi_y^y.(1-\phi_y)^{1-y}\)
We need to find \(\phi_y, \phi_{i|y=0}, \phi_{i|y=1}\) that maximize the log-likelihood function:\(l(\phi, \phi_{i|y=0}, \phi_{i|y=1}) = log(\prod_{i=1}^{m} \prod_{j=1}^n P(x_j^{(i)}|y^{(i)}) P(y^{(i)})/P(x^{(i)}))\)
Or maximize the following expression:\(log(\prod_{i=1}^{m} \prod_{j=1}^n P(x_j^{(i)}|y^{(i)}) P(y^{(i)}))\) |
2019-05-20 15:18 Detailed record - Similar records 2019-01-23 09:13
nuSTORM at CERN: Feasibility Study / Long, Kenneth Richard (Imperial College (GB)) The Neutrinos from Stored Muons, nuSTORM, facility has been designed to deliver a definitive neutrino-nucleus scattering programme using beams of $\bar{\nu}_e$ and $\bar{\nu}_\mu$ from the decay of muons confined within a storage ring. The facility is unique, it will be capable of storing $\mu^\pm$ beams with a central momentum of between 1 GeV/c and 6 GeV/c and a momentum spread of 16%. [...] CERN-PBC-REPORT-2019-003.- Geneva : CERN, 2019 - 150. Detailed record - Similar records 2019-01-23 08:54 Detailed record - Similar records 2019-01-15 15:35 Detailed record - Similar records 2018-12-20 13:45 Detailed record - Similar records 2018-12-18 14:08
Physics Beyond Colliders at CERN: Beyond the Standard Model Working Group Report / Beacham, J. (Ohio State U., Columbus (main)) ; Burrage, C. (U. Nottingham) ; Curtin, D. (Toronto U.) ; De Roeck, A. (CERN) ; Evans, J. (Cincinnati U.) ; Feng, J.L. (UC, Irvine) ; Gatto, C. (INFN, Naples ; NIU, DeKalb) ; Gninenko, S. (Moscow, INR) ; Hartin, A. (U. Coll. London) ; Irastorza, I. (U. Zaragoza, LFNAE) et al. The Physics Beyond Colliders initiative is an exploratory study aimed at exploiting the full scientific potential of the CERN’s accelerator complex and scientific infrastructures through projects complementary to the LHC and other possible future colliders. These projects will target fundamental physics questions in modern particle physics. [...] arXiv:1901.09966; CERN-PBC-REPORT-2018-007.- Geneva : CERN, 2018 - 150 p. Full Text: PDF; Fulltext: PDF; Detailed record - Similar records 2018-12-17 18:05
PBC technology subgroup report / Siemko, Andrzej (CERN) ; Dobrich, Babette (CERN) ; Cantatore, Giovanni (Universita e INFN Trieste (IT)) ; Delikaris, Dimitri (CERN) ; Mapelli, Livio (Universita e INFN, Cagliari (IT)) ; Cavoto, Gianluca (Sapienza Universita e INFN, Roma I (IT)) ; Pugnat, Pierre (Lab. des Champs Magnet. Intenses (FR)) ; Schaffran, Joern (Deutsches Elektronen-Synchrotron (DE)) ; Spagnolo, Paolo (INFN Sezione di Pisa, Universita' e Scuola Normale Superiore, Pisa (IT)) ; Ten Kate, Herman (CERN) et al. Goal of the technology WG set by PBC: Exploration and evaluation of possible technological contributions of CERN to non-accelerator projects possibly hosted elsewhere: survey of suitable experimental initiatives and their connection to and potential benefit to and from CERN; description of identified initiatives and how their relation to the unique CERN expertise is facilitated.. CERN-PBC-REPORT-2018-006.- Geneva : CERN, 2018 - 31. Fulltext: PDF; Detailed record - Similar records 2018-12-14 16:17
AWAKE++: The AWAKE Acceleration Scheme for New Particle Physics Experiments at CERN / Gschwendtner, Edda (CERN) ; Bartmann, Wolfgang (CERN) ; Caldwell, Allen Christopher (Max-Planck-Institut fur Physik (DE)) ; Calviani, Marco (CERN) ; Chappell, James Anthony (University of London (GB)) ; Crivelli, Paolo (ETH Zurich (CH)) ; Damerau, Heiko (CERN) ; Depero, Emilio (ETH Zurich (CH)) ; Doebert, Steffen (CERN) ; Gall, Jonathan (CERN) et al. The AWAKE experiment reached all planned milestones during Run 1 (2016-18), notably the demonstration of strong plasma wakes generated by proton beams and the acceleration of externally injected electrons to multi-GeV energy levels in the proton driven plasma wakefields. During Run~2 (2021 - 2024) AWAKE aims to demonstrate the scalability and the acceleration of electrons to high energies while maintaining the beam quality. [...] CERN-PBC-REPORT-2018-005.- Geneva : CERN, 2018 - 11. Detailed record - Similar records 2018-12-14 15:50
Particle physics applications of the AWAKE acceleration scheme / Wing, Matthew (University of London (GB)) ; Caldwell, Allen Christopher (Max-Planck-Institut fur Physik (DE)) ; Chappell, James Anthony (University of London (GB)) ; Crivelli, Paolo (ETH Zurich (CH)) ; Depero, Emilio (ETH Zurich (CH)) ; Gall, Jonathan (CERN) ; Gninenko, Sergei (Russian Academy of Sciences (RU)) ; Gschwendtner, Edda (CERN) ; Hartin, Anthony (University of London (GB)) ; Keeble, Fearghus Robert (University of London (GB)) et al. The AWAKE experiment had a very successful Run 1 (2016-8), demonstrating proton-driven plasma wakefield acceleration for the first time, through the observation of the modulation of a long proton bunch into micro-bunches and the acceleration of electrons up to 2 GeV in 10 m of plasma. The aims of AWAKE Run 2 (2021-4) are to have high-charge bunches of electrons accelerated to high energy, about 10 GeV, maintaining beam quality through the plasma and showing that the process is scalable. [...] CERN-PBC-REPORT-2018-004.- Geneva : CERN, 2018 - 11. Fulltext: PDF; Detailed record - Similar records 2018-12-13 13:21
Summary Report of Physics Beyond Colliers at CERN / Jaeckel, Joerg (CERN) ; Lamont, Mike (CERN) ; Vallee, Claude (Centre National de la Recherche Scientifique (FR)) Physics Beyond Colliders is an exploratory study aimed at exploiting the full scientific potential of CERN's accelerator complex and its scientific infrastructure in the next two decades through projects complementary to the LHC, HL-LHC and other possible future colliders. These projects should target fundamental physics questions that are similar in spirit to those addressed by high-energy colliders, but that require different types of beams and experiments. [...] arXiv:1902.00260; CERN-PBC-REPORT-2018-003.- Geneva : CERN, 2018 - 66 p. Fulltext: PDF; PBC summary as submitted to the ESPP update in December 2018: PDF; Detailed record - Similar records |
This is a very simple question; I apologize if it has already been asked here. Define the following function (superficially similar to a theta function):
$$\varsigma(x)=\sum_{n=1}^\infty e^{-xn^3}$$
I am interested in knowing the Laurent series about $x=0$ of this series if it exists, i.e. I would like to know if there exist $\{a_n\}$ such that:
$$\varsigma(x)=\sum_{n=-\infty}^\infty a_nx^n\tag{1}$$
for small enough $x>0$. I'm pretty sure that $\varsigma(x)$ diverges to infinity at $x=0$ so I assume that some of the $a_n$ for $n<0$ will be nonzero. Ideally I would love a closed form for the $a_n$ but I am especially interested for the moment in $a_1$. I have no idea how to find these terms, since this is not a Taylor series and since I do not know the complex behaviour of $\varsigma(z)$. Wolfram Alpha doesn't help me either. I am aware of the formula for Laurent series coefficients, and that for instance we will have:
$$a_1=\frac{1}{2\pi i}\oint_C \frac{\varsigma(z)}{z^2}\;dz$$
where $C$ is a closed contour around $z=0$, but I am not sure how I should go about evaluating this; formally interchanging integral and sum only gives me a divergent sum: for instance making use of the fact that $e^{az}=\sum\limits_{n=0}^\infty \frac{a^nz^n}{n!}$ I formally get $a_1=-\sum\limits_{n=1}^\infty n^3$ but I don't know whether I can make this argument rigorous to get $a_1=-\zeta(-3)$.
Background: This question arose from some recreational thoughts of mine on summing divergent series; this answer used an $ne^{-\epsilon n}$ regularization rather than the usual $n^s$ to 'evaluate' $\sum\limits_{n=1}^\infty n$ and curiously obtained a constant term of $-\frac{1}{12}$ in the Laurent series in $\epsilon$; this interested me and made me wonder what an $n^3e^{-\epsilon n^3}$ regularization of $\sum\limits_{n=1}^\infty n^3$ would give. We have $\sum\limits_{n=1}^\infty n^3e^{-\epsilon n^3}=-\varsigma'(\epsilon)$ so the constant term in the Laurent series expansion of this function will be $-a_1$; thus I would conjecture that $a_1=-\frac{1}{120}$ (i.e. $-\zeta(-3)$; see here). The above calculation supports this, but I don't know whether it can be made rigorous.
Thus I have the following
questions: Do there exist $a_n$ such that $(1)$ holds for small $x>0$? If so, does $a_1=-\frac{1}{120}$? Is it possible to write a closed form for the $a_n$? |
Suppose chiral fermions $\psi$ interacting with gauge fields $A_{\mu,L/R}$. With $P_{L/R} \equiv \frac{1\mp\gamma_{5}}{2}$ and $t_{a,L/R}$ denoting the generators, the corresponding action reads
$$ S = \int d^{4}x\bar{\psi}i\gamma_{\mu}D^{\mu}\psi, \quad D_{\mu} = \partial_{\mu} - iA_{\mu,L}^{a}t_{a,L}P_{L} - i\gamma_{5}A_{\mu,R}^{a}t_{a,R}P_{R} $$ To check the presence of the anomaly $\text{A}(x)$ in the conservation law for the current $$ J^{\mu}_{L/R,c} \equiv \bar{\psi}\gamma^{\mu}\gamma_{5}t_{c}\psi, $$ we have to calculate the VEV of its covariant divergence: $$ \tag 1 \langle (D_{\mu}J^{\mu}_{L/R}(x))_{a}\rangle_{A_{L/R}} \equiv \langle \partial_{\mu}J^{\mu}_{L/R,a} +f_{abc}A^{L/R}_{\mu,b}J^{\mu}_{L/R,c}\rangle_{A_{L/R}} \equiv \text{A}^{L/R}_{a}(x), $$ where $f_{abc}$ is the structure constant.
Let's study the one-loop contributions (other contributions do not exist, as was established by Adler and Bardeen) in $(1)$. In general, we have to study triangle diagrams, box diagrams, pentagon diagrams and so on, arising from the quantum effective action $\Gamma$. From dimensional analysis of corresponding integrals we conclude that the three-point vertex $$ \Gamma_{\mu\nu\alpha}^{abc}(x,y,z) \equiv \frac{\delta \Gamma}{\delta A^{\mu}_{a}(x)\delta A^{\nu}_{b}(y)\delta A^{\alpha}_{c}(z)}, $$ which generates the triangle diagram, is linearly divergent, four-point vertex $\Gamma_{\mu\nu\alpha\beta}^{abcd}(x,y,z,t)$ is logarithmically divergent, five-point vertex $\Gamma_{\mu\nu\alpha\beta\gamma}^{abcde}(x,y,z,t,p)$ is convergent, and so on.
Unlike the abelian case, there the only triangle diagram makes the contribution in the anomaly, here more diagrams contribute. Precisely, we know that non-zero anomaly in triangle diagram requires non-zero coefficient
$$ D_{abc}^{L/R} \equiv \text{tr}[t_{a},\{t_{b},t_{c}\}]_{L/R} $$ The box diagram (with the requirement of the Bose symmetry) is proportional to $$ D_{abcd}^{L/R} \equiv \text{tr}[t_{a}\{t_{b},[t_{c},t_{d}]\}] = if_{cde}D^{L/R}_{abe}, $$ while the pentagon diagram - to (the subsctipt $[]$ means antisymmetrization) $$ D_{abcde}^{L/R} \equiv \text{tr}[t_{a}t_{[b}t_{c}t_{d}t_{e]}] \sim f_{r[bc}f_{de]s}D^{L/R}_{ars} $$ Therefore, it seems that they also make the contribution in the anomaly.
I have two
questions.
1) The chiral anomaly arises in the result of impossibility of defining the local (in terms of momenta) action functional generating the counterterm which cancels the gauge invariance breaking corrections in n-point vertices. The triangle diagram is linearly divergent, and because of bose symmetry it can be shown that the only non-local action can generate the anomaly in the limit of small momenta. In this spirit, we can cancel the box and pentagon diagrams (which diverge linearly) by adding the local counterterms, so I don't understand why they make the contribution in the anomaly $(1)$.
2) If there is the reason why they can't be cancelled by adding the counterterm, what about hexagonal diagrams and so on? Why do they vanish? Because of something like Jacobi identity for structure constants?
An edit
It seems that the answer is that the following diagrams make the contribution in the anomaly $(1)$, but not because of $(2)$, $(3)$ (the latter just shows that box and pentagon diagrams anomalous contribution vanishes if there is no triangle anomaly). The reason to make the contribution is in the structure of the anomalous Ward identities.
Suppose we're dealing with consistent anomaly. Then we have (I've omitted the subscript $L/R$), by the definition,
$$ -\text{A}_{a}(x) = \delta_{\epsilon_{a}(x)}\Gamma \equiv \partial_{\mu}^{x}\frac{\delta\Gamma}{\delta A_{\mu,a}(x)} + f_{abc}A_{\mu,b}(x)\frac{\delta \Gamma}{\delta A_{\mu,c}(x)} $$ The Ward identities for the $n$-point vertex are obtained by taking $n-1$ functional derivatives with respect to $A_{\mu_{i},a_{i}}$ and setting $A_{\mu_{i},a_{i}}$ to zero. It can be shown that the Ward identities for the derivative of the 4-point vertex (which is logarithmically divergent) contain 3-vectex functions which are anomalous. Therefore we see that the 4-point vertex also contribute to the anomaly (not by itself, since it is only logarithmically divergent, but through linearly diverging 3-point vertex).
What's about the 5-point vertex? The Ward identities for its derivative contain only the 4-point function, so in first sight it seems that it doesn't make the contribution in the anomaly. However, this is not true in particular cases. Indeed, if one of currents $\text{J}_{\mu}^{a}$ running in the loop is the global one, we can preserve the gauge invariance by pumping the anomaly to the $\text{J}^{a}_{\mu}$ conservation law. This is realized in particular by changing the 4-point vertex (not its derivative!) by the anomalous polynomial. Therefore the Ward identity for 5-point vertex becomes anomalous. However, even in this case this vertex may give no contribution in the anomaly (there is situation when the global current is the abelian one); in this case the $A^{4}$ term in the anomaly vanishes identically due to the group arguments - because of Jacobi identities.
This also illustrates why there are no anomalous contribution from the derivative of hexagonal diagrams, and higher. |
Question:
The pendulum is released from {eq}60^0 {/eq} position and then strikes the initially stationary cylinder of mass {eq}m_2 {/eq} When OA is vertical. Determine the maximum spring compression {eq}\delta {/eq} when {eq}m_1=3kg,m_2=2kg,OA=0.8m,k=6kN/m {/eq}
Assume the bar of the pendulum is light so that the mass {eq}m_1 {/eq} is effectively concentrated at point A The rubber cushion S stops the pendulum just after the collision is over. Neglect friction.
Conservation of Energy:
In an isolated system ( when neither mass or energy can get in or get out of the system ) total mass and energy are conserved. In modern physics, both mass and energy are interchangeable which was given by the famous equation of Einstein i.e. {eq}E=mc^2 {/eq}. But in classical mechanics, we can conserve mass and energy separately. Sometimes energy gets dissipated due to some nonconservative forces such as friction. The energy loss due to such forces doesn't get destroyed. It just gets lost in the form of heat and radiation.
Answer and Explanation: Step 1:
Let's analyze the situation first. The progress of transfer of energy will be from potential energy of mass 1 to the kinetic energy of mass 1 to the kinetic energy of mass 2 to compression of the spring. But, as there are no dissipative forces in the system, we can directly conserve the energy i.e. change in potential energy will be completely used in the compression of spring energy.
Step 2:
We need to find the height to determine the potential energy. Which means we need to find OC (refer to the figure below).
Using geometry, we can resolve OA into sine and cosine of the angle {eq}\begin{align*} \theta \end{align*} {/eq}.
Therefore,
{eq}\begin{align*} &CB = OB - OC\\ \Rightarrow &CB = OB - OA \cos\theta \\ \end{align*} {/eq}
It is given that OA= 0.8m and {eq}\begin{align*} \theta = 60^{\circ} \end{align*} {/eq}. Also OB = OA.
That's why,
{eq}\begin{align*} &CB = 0.8 - 0.8\cos 60\\ \Rightarrow & CB = 0.4 \end{align*} {/eq}
Let CB = h.
Step 3:
We can now simply conserve the energy in the system.
{eq}\begin{align*} &m_1\times g\times h=\frac{1}{2}\times k\times \delta^2\\ \Rightarrow & 3\times 9.8 \times 0.4 = \frac {1}{2} \times 6000 \times \delta^2 \\ \Rightarrow & \delta^2 = 0.00392 \\ \Rightarrow & \delta = \sqrt{0.00392}\\ \Rightarrow & \delta = 0.0626099\\ \end{align*} {/eq}
The maximum spring compression will be 0.0626099 m.
Become a member and unlock all Study Answers
Try it risk-free for 30 daysTry it risk-free
Ask a question
Our experts can answer your tough homework and study questions.Ask a question Ask a question
Search Answers Learn more about this topic:
from AP Physics 1: Exam PrepChapter 5 / Lesson 8 |
№ 8
All Issues Volume 61, № 11, 2009 Nonsymmetric approximations of classes of periodic functions by splines of defect 2 and Jackson-type inequalities
Ukr. Mat. Zh. - 2009. - 61, № 11. - pp. 1443-1454
We determine the exact values of the best (α, β)-approximations and the best one-sided approximations of classes of differentiable periodic functions by splines of defect 2. We obtain new sharp Jackson-type inequalities for the best approximations and the best one-sided approximations by splines of defect 2.
Ukr. Mat. Zh. - 2009. - 61, № 11. - pp. 1455-1473
We investigate closed 1-forms with isolated zeros on surfaces with edge. A criterion for the topological equivalence of closed 1-forms is proved.
Best orthogonal trigonometric approximations of the classes $B^{Ω}_{p,θ}$ of periodic functions of many variables
Ukr. Mat. Zh. - 2009. - 61, № 11. - pp. 1473-1484
We obtain exact-order estimates for the best orthogonal trigonometric approximations of the classes $B^{Ω}_{p,θ}$ of periodic functions of many variables in the space $L_q$.
On removable sets of solutions of second-order elliptic and parabolic equations in nondivergent form
Ukr. Mat. Zh. - 2009. - 61, № 11. - pp. 1485-1496
We consider nondivergent elliptic and parabolic equations of the second order whose leading coefficients satisfy the uniform Lipschitz condition. We find a sufficient condition for the removability of a compact set with respect to these equations in the space of Hölder functions.
Ukr. Mat. Zh. - 2009. - 61, № 11. - pp. 1497-1515
We obtain asymptotic equalities for upper bounds of approximations of functions from the class $C_{β,∞} ψ$ by Poisson integrals in the metric of the space $C$.
Ukr. Mat. Zh. - 2009. - 61, № 11. - pp. 1516-1530
Using a transformation matrix, we asymptotically reduce a system of differential equations with a small parameter in the coefficients of a part of derivatives and with multiple turning point to an integrable system of equations.
Ukr. Mat. Zh. - 2009. - 61, № 11. - pp. 1531-1540
We consider the topology \( t\left( \mathcal{M} \right) \) of convergence locally in measure in the *-algebra \( LS\left( \mathcal{M} \right) \) of all locally measurable operators affiliated to the von Neumann algebra \( \mathcal{M} \). We prove that \( t\left( \mathcal{M} \right) \) coincides with the (
o)-topology in \( L{S_h}\left( \mathcal{M} \right) = \left\{ {T \in LS\left( \mathcal{M} \right):T* = T} \right\} \) if and only if the algebra \( \mathcal{M} \) is σ-finite and is of finite type. We also establish relations between \( t\left( \mathcal{M} \right) \) and various topologies generated by a faithful normal semifinite trace on \( \mathcal{M} \). Method of local linear approximation in the theory of bounded solutions of nonlinear differential equations
Ukr. Mat. Zh. - 2009. - 61, № 11. - pp. 1541-1556
The conditions for the existence of solutions of nonlinear differential equations in a space of functions bounded on the axis are established by using local linear approximations of these equations.
Ukr. Mat. Zh. - 2009. - 61, № 11. - pp. 1557-1563
The second Lyapunov method is applied to the analysis of stability of triangular libration points in a three-dimensional restricted circular three-body problem. It is shown that the triangular libration points are unstable.
Ukr. Mat. Zh. - 2009. - 61, № 11. - pp. 1564-1574
We present the solutions of the initial-value problem in the entire space and the solutions of the boundary-value and initial-boundary-value problems for the wave equation $$\frac{∂^2U(t,x)}{∂x^2} = Δ_LU(t,x)$$ with infinite-dimensional Lévy Laplacian $Δ_L$ in the class of Gâteaux functions.
Ukr. Mat. Zh. - 2009. - 61, № 11. - pp. 1575-1578
It is shown that an adequate ring with nonzero Jacobson radical has a stable range 1. A class of matrices over an adequate ring with stable range 1 is indicated.
Ukr. Mat. Zh. - 2009. - 61, № 11. - pp. 1579-1585
We prove that the generalized Temperley–Lieb algebras associated with simple graphs Γ have linear growth if and only if the graph Γ coincides with one of the extended Dynkin graphs \( {\tilde A_n} \), \( {\tilde D_n} \), \( {\tilde E_6} \), or \( {\tilde E_7} \). An algebra \( T{L_{\Gamma, \tau }} \) has exponential growth if and only if the graph Γ coincides with none of the graphs \( {A_n} \), \( {D_n} \), \( {E_n} \), \( {\tilde A_n} \), \( {\tilde D_n} \), \( {\tilde E_6} \), and \( {\tilde E_7} \). |
Advanced topics in information theory From CYPHYNETS
(→July 14: Rate distortion theory - I)
(→July 14: Rate distortion theory - I)
Line 86: Line 86:
* There is also an information theoretic definition,
* There is also an information theoretic definition,
-
<math>R^I(D) = \min_{p(x, \hat{x}), \sum
+
<math>R^I(D) = \min_{p(x, \hat{x}), \sum (x; \hat{x}) d(x; \hat{x}) \leq D} I(X; \hat{X})</math>
* We can show that both are equivalent.
* We can show that both are equivalent.
Revision as of 12:09, 30 July 2009 Reading Group: Advanced Topics in Information Theory Calendar: Summer 2009 Venue: LUMS School of Science & Engineering Organizer: Abubakr Muhammad
This group meets every week at LUMS to discuss some advanced topics in information theory.
This is a continuation of our formal course at LUMS,
CS-683: Information theory (offered most recently in Spring 2008). We hope to cover some advanced topics in information theory as well as its connections to other fundamental disciplines such as statistics, mathematics, physics and technology. Participants Mubasher Beg Shahida Jabeem Qasim Maqbool Muhammad Bilal Muzammad Baig Hassan Mohy-ud-Din Zartash Uzmi Shahab Baqai Abubakr Muhammad Topics
Include the following, but not limited to:
Rate distortion theory Network information theory Kolmogorov complexity Quantum information theory Sessions July 7: Organization. Recap of CS-683 Basic organization, presentation assignments. Review of Information theory ideas Entropy, AEP, Compression and Capacity
Entropy of a random variable is given by
The capacity of a channel is defined by
Compression and Capacity determine the two fundamental information theoretic limits of data transmission,
A review of Gaussain channels and their capacities. Let us take these analysis one step further. How much do you loose when you cross these barriers? We saw one situation when you try to transmit over the capacity. By Fano's inequality
Rate distortion: A theory for lossy data compression. References/Literature Elements of Information theoryby Cover and Thomas. July 14: Rate distortion theory - I Rate–distortion provides the theoretical foundations for lossy data compression. We try to find answer to the following question: Given an acceptable level of distortion, what is the minimal information that should be sent over a channel, so that the source can be reconstructed (up to that level of distortion) at the receiver? Quantization for a single random variable. Given a distribution for a random variable, what are the optimal choices for quantization? Answer is Lloyd's algorithm (closely related to k-means clustering). What about multiple random variables, treated at the same time? Even if the RVs are IID, quantizing them in sequences can result in better performance. (stated without proof). Define distortion D. When is a distortion pair (R,D) achievable? The rate distortion function R(D) is the min rate R such that (R,D) is achievable for a given D. There is also an information theoretic definition,
We can show that both are equivalent. Proof follows closely the treatment on channel capacity. |
Fractions, a topic that we had learned in our basic schooling and would still be present in some corner of our head. But with time and no practice on this topic could have rusted the concept. This article is to renew those concepts for the GMAT exam and make you ready to solve any problem related to fractions in minimal time.
What are Fractions?
Any number that cannot be expressed in form of whole number is a fraction.
What makes it Difficult?
The working on fractions is much different then that of whole numbers and so it can create confusion.
Rules of Fractions
\(\frac{x}{y} + \frac{w}{v} \neq \frac{x + w}{y + v}\)\(\frac{x}{y} – \frac{w}{v} \neq \frac{x – w}{y – v}\)
Addition and Subtraction
\(\frac{x}{y} \times \frac{w}{v} = \frac{xw}{yv}\)
Multiplication
\(\frac{x}{y} \div \frac{w}{v} = \frac{xv}{yw}\)
Division Proportions
If, \(\frac{x}{y} = \frac{w}{v}\)
Then, \(\frac{x}{1} = \frac{yw}{v}\)
Let’s solve some questions and understand fractions in a greater depth\(\frac{1}{160} + \frac{1}{40} + \frac{1}{1600} + \frac{1}{80}\)
\(= \frac{1}{40} + \left ( \frac{1}{40} + 1 + \frac{1}{40} + \frac{1}{2} \right )\)\(= \frac{1}{40} + \left ( \frac{10}{40} + \frac{40}{40} + \frac{1}{40} + \frac{20}{40} \right )\)\(= \frac{1}{40} \left ( \frac{10 + 40 + 1 + 20}{40} \right )\)\(= \frac{1}{40} \times \frac{71}{40}\)\(= \frac{71}{1600}\)
Solution:
We’ll be glad to help you in your GMAT preparation journey. You can ask for any assistance related to GMAT and MBA from us by just giving a missed call at
+918884544444, or you can drop an SMS. You can write to us at [email protected]. |
Skills to Develop
Apply the Binomial Theorem.
A polynomial with two terms is called a binomial. We have already learned to multiply binomials and to raise binomials to powers, but raising a binomial to a high power can be tedious and time-consuming. In this section, we will discuss a shortcut that will allow us to find \((x+y)^n\) without multiplying the binomial by itself \(n\) times.
Identifying Binomial Coefficients
In the shortcut to finding \({(x+y)}^n\), we will need to use combinations to find the coefficients that will appear in the expansion of the binomial. In this case, we use the notation \(\dbinom{n}{r}\) instead of \(C(n,r)\), but it can be calculated in the same way. So
\[\dbinom{n}{r}=C(n,r)=\dfrac{n!}{r!(n−r)!}\]
The combination \(\dbinom{n}{r}\) is called a
binomial coefficient. An example of a binomial coefficient is:
\(\dbinom{5}{2}=C(5,2)=10\)
Q&A: Is a binomial coefficient always a whole number?
Yes. Just as the number of combinations must always be a whole number, a binomial coefficient will always be a whole number.
Example \(\PageIndex{1}\): Finding Binomial Coefficients
Find each binomial coefficient.
\(\dbinom{5}{3}\) \(\dbinom{9}{2}\) \(\dbinom{9}{7}\) Solution
Use the Equation \ref{binomial1} to calculate each binomial coefficient. You can also use the \(nC_r\) function on your calculator.
\(\dbinom{5}{3}=\dfrac{5!}{3!(5−3)!}=\dfrac{5⋅4⋅3!}{3!2!}=10\) \(\dbinom{9}{2}=\dfrac{9!}{2!(9−2)!}=\dfrac{9⋅8⋅7!}{2!7!}=36\) \(\dbinom{9}{7}=\dfrac{9!}{7!(9−7)!}=\dfrac{9⋅8⋅7!}{7!2!}=36\) Analysis
Notice that we obtained the same result for parts (b) and (c). If you look closely at the solution for these two parts, you will see that you end up with the same two factorials in the denominator, but the order is reversed, just as with combinations.
\[\dbinom{n}{r}=\dbinom{n}{n−r} \nonumber\]
Exercise \(\PageIndex{1}\)
Find each binomial coefficient.
\(\dbinom{7}{3}\) \(\dbinom{11}{4}\) Answer a
\(35\)
Answer b
\(33\)
Using the Binomial Theorem
When we expand \({(x+y)}^n\) by multiplying, the result is called a binomial expansion, and it includes binomial coefficients. If we wanted to expand \({(x+y)}^{52}\), we might multiply \((x+y)\) by itself fifty-two times. This could take hours! If we examine some simple binomial expansions, we can find patterns that will lead us to a shortcut for finding more complicated binomial expansions.
\[\begin{align*} {(x+y)}^2 &= x^2+2xy+y^2 \\[4pt] {(x+y)}^3 &= x^3+3x^2y+3xy^2+y^3 \\[4pt] {(x+y)}^4 &= x^4+4x^3y+6x^2y^2+4xy^3+y^4 \end{align*}\]
First, let’s examine the exponents. With each successive term, the exponent for \(x\) decreases and the exponent for \(y\) increases. The sum of the two exponents is \(n\) for each term.
Next, let’s examine the coefficients. Notice that the coefficients increase and then decrease in a symmetrical pattern. The coefficients follow a pattern:
\(\dbinom{n}{0}\), \(\dbinom{n}{1}\), \(\dbinom{n}{2}\),..., \(\dbinom{n}{n}.\)
These patterns lead us to the
Binomial Theorem, which can be used to expand any binomial.
\[\begin{align*} {(x+y)}^n&=\sum_{k=0}^{n}\dbinom{n}{k}x^{n−k}y^k \\[4pt] &=x^n+\dbinom{n}{1}x^{n−1}y+\dbinom{n}{2}x^{n−2}y^2+...+\dbinom{n}{n−1}xy^{n−1}+y^n \end{align*}\]
Another way to see the coefficients is to examine the expansion of a binomial in general form, \(x+y\), to successive powers \(1\), \(2\), \(3\), and \(4\).
\[\begin{align*} {(x+y)}^1 &= x+y \\ {(x+y)}^2 &= x^2+2xy+y^2 \\ {(x+y)}^3 &= x^3+3x^2y+3xy^2+y^3 \\ {(x+y)}^4 &= x^4+4x^3y+6x^2y^2+4xy^3+y^4 \end{align*}\]
Can you guess the next expansion for the binomial \({(x+y)}^5\)?
Figure \(\PageIndex{1}\)
See Figure \(\PageIndex{1}\), which illustrates the following:
There are \(n+1\) terms in the expansion of \({(x+y)}^n\). The degree (or sum of the exponents) for each term is \(n\). The powers on \(x\) begin with \(n\) and decrease to \(0\). The powers on \(y\) begin with \(0\) and increase to \(n\). The coefficients are symmetric.
To determine the expansion on \({(x+y)}^5\), we see \(n=5\), thus, there will be \(5+1=6\) terms. Each term has a combined degree of \(5\). In descending order for powers of \(x\), the pattern is as follows:
Introduce \(x^5\), and then for each successive term reduce the exponent on \(x\) by \(1\) until \(x^0=1\) is reached. Introduce \(y^0=1\), and then increase the exponent on yy by 1 until \(y^5\) is reached.
\(x^5, x^4y, x^3y^2, x^2y^3, xy^4, y^5\)
The next expansion would be
\({(x+y)}^5=x^5+5x^4y+10x^3y^2+10x^2y^3+5xy^4+y^5\)
But where do those coefficients come from? The binomial coefficients are symmetric. We can see these coefficients in an array known as Pascal's Triangle, shown in Figure \(\PageIndex{2}\).
Figure \(\PageIndex{2}\)
To generate Pascal’s Triangle, we start by writing a \(1\). In the row below, row 2, we write two \(1’s\). In the 3
rd row, flank the ends of the rows with \(1’s\), and add \(1+1\) to find the middle number, \(2\). In the \(n^{th}\) row, flank the ends of the row with \(1’s\). Each element in the triangle is the sum of the two elements immediately above it.
To see the connection between Pascal’s Triangle and binomial coefficients, let us revisit the expansion of the binomials in general form.
THE BINOMIAL THEOREM
The
Binomial Theorem is a formula that can be used to expand any binomial.
\[ {(x+y)}^n = \sum_{k=0}^{n}\dbinom{n}{k}x^{n−k}y^k = x^n+\dbinom{n}{1}x^{n−1}y+\dbinom{n}{2}x^{n−2}y^2+...+\dbinom{n}{n−1}xy^{n−1}+y^n \]
Example \(\PageIndex{2}\): Expanding a Binomial
Write in expanded form.
\({(x+y)}^5\) \({(3x−y)}^4\) Solution
a. Substitute \(n=5\) into the formula. Evaluate the \(k=0\) through \(k=5\) terms. Simplify.
\[\begin{align*} {(x+y)}^5 &= \dbinom{5}{0}x^5y^0+\dbinom{5}{1}x^4y^1+\dbinom{5}{2}x^3y^2+\dbinom{5}{3}x^2y^3+\dbinom{5}{4}x^1y^4+\dbinom{5}{5}x^0y^5 \\ {(x+y)}^5 &= x^5+5x^4y+10x^3y^2+10x^2y^3+5xy^4+y^5 \end{align*}\]
b. Substitute \(n=4\) into the formula. Evaluate the \(k=0\) through \(k=4\) terms. Notice that \(3x\) is in the place that was occupied by \(x\) and that \(–y\) is in the place that was occupied by \(y\). So we substitute them. Simplify.
\[\begin{align*} {(3x−y)}^4 &= \dbinom{4}{0}{(3x)}^4{(−y)}^0+\dbinom{4}{1}{(3x)}^3{(−y)}^1+\dbinom{4}{2}{(3x)}^2{(−y)}^2+\dbinom{4}{3}{(3x)}^1{(−y)}^3+\dbinom{4}{4}{(3x)}^0{(−y)}^4 \\ {(3x−y)}^4 &= 81x^4−108x^3y+54x^2y^2−12xy^3+y^4 \end{align*}\]
Analysis
Notice the alternating signs in part b. This happens because \((−y)\) raised to odd powers is negative, but \((−y)\) raised to even powers is positive. This will occur whenever the binomial contains a subtraction sign.
Exercise \(\PageIndex{2}\)
Write in expanded form.
\({(x−y)}^5\) \({(2x+5y)}^3\) Answer a
\(x^5−5x^4y+10x^3y^2−10x^2y^3+5xy^4−y^5\)
Answer b
\(8x^3+60x^2y+150xy^2+125y^3\)
Using the Binomial Theorem to Find a Single Term
Expanding a binomial with a high exponent such as \({(x+2y)}^{16}\) can be a lengthy process. Sometimes we are interested only in a certain term of a binomial expansion. We do not need to fully expand a binomial to find a single specific term.
Note the pattern of coefficients in the expansion of \({(x+y)}^5\).
\({(x+y)}^5=x^5+\dbinom{5}{1}x^4y+\dbinom{5}{2}x^3y^2+\dbinom{5}{3}x^2y^3+\dbinom{5}{4}xy^4+y^5\)
The second term is \(\dbinom{5}{1}x^4y\). The third term is \(\dbinom{5}{2}x^3y^2\). We can generalize this result.
Example \(\PageIndex{3}\): Writing a Given Term of a Binomial Expansion
Find the tenth term of \({(x+2y)}^{16}\) without fully expanding the binomial.
Solution
Because we are looking for the tenth term, \(r+1=10\), we will use \(r=9\) in our calculations and Equation \ref{binomial5}.
\(\dbinom{16}{9}x^{16−9}{(2y)}^9=5,857,280x^7y^9\)
Exercise \(\PageIndex{3}\)
Find the sixth term of \({(3x−y)}^9\) without fully expanding the binomial.
Answer
\(−10,206x^4y^5\)
Key Equations
Binomial Theorem
\({(x+y)}^n=\sum_{k=0}^n\dbinom{n}{k}x^{n−k}y^k\)
\((r+1)\)th term of a binomial expansion
\(\dbinom{n}{r}x^{n−r}y^r\)
Key Concepts \(\dbinom{n}{r}\) is called a binomial coefficient and is equal to \(C(n,r)\). See Example \(\PageIndex{1}\). The Binomial Theorem allows us to expand binomials without multiplying. See Example \(\PageIndex{2}\). We can find a given term of a binomial expansion without fully expanding the binomial. See Example \(\PageIndex{3}\). |
Background: So, just for fun, I was trying to analyze the types of soultion one may recieve from a quadratic equation, the solutions from $\mathbb{Z},\mathbb{R},\mathbb{C}$ was all rather easy, but when it comes to solutions in $\mathbb{Q}$, I tried applying the rational root theorem but that one has a few criteria that wasn't especially well suited for my needs (or rather, I didn't find a way to apply it). So, here's my question: Given a rational number $a/b$, when is its square root rational?
Please note that I'm not just asking about square roots of integers, but actually any rational number, such as $1/2$.
My attempt We have $Q = a/b : a,b \in \mathbb{Z}$ and thus: $$\sqrt{Q}=\sqrt{\frac{a}{b}} = \frac{\sqrt{a}}{\sqrt{b}} = \frac{\sqrt{ab}}{b}$$ hence we can deduct that if $a=b^{2k+1} \neq 0 : k \in \mathbb{Z}\rightarrow \sqrt{ab} \in \mathbb{Q}$, then the expression as a whole is rational. That is, since $b^{2k+1}\times b = b^{2m} : m,k \in \mathbb{Z}$ then $\sqrt{b^{2m}}=(b^{2m})^{\frac{1}{2}}=b^{m}$ which is clearly rational.
It seems reasonable that this would be a equivalent relation, but I can't figure out how to prove it.
Thanks in advance. |
All:
Say $f$ is a measurable (integrable, actually) function over the Lebesgue-measurable set $S$, with $m(S)>0$.
Now, since $m(S)>0$, there exists a non-measurable subset $S'$ of $S$, and we can then write:
$$S=S'\cup (S\setminus S').$$
How would we then go about dealing with this (sorry, I don't know how to Tex an integral)
$$\int_S f\,d\mu=\int_{S'} f\,d\mu+ \int_{S\setminus S'}f\,d\mu?$$ (given that $S'$ and $S\setminus S'$ are clearly disjoint)
Doesn't this imply that the integral over the non-measurable subset S' can be defined?
It also seems , using inner- and outer- measure, that if $S'$ is non-measurable, i.e. $m^*<m_*$, neither is $S\setminus S'$.
So I'm confused here. Thanks for any comments.
Edit: what confuses me here is this:
We start with a set equality $A=B$ (given as $S=S'\cup (S-S')$, so that $A=S$, $B=S-S'$, from which we cannot conclude:
$\int_A f=\int_B f$ , it is as if we had $x=y+z$ , but we cannot then conclude, for any decomposition of $x$, that $f(x)=f(y+z)$. |
I take it that we call $TAUT$ the problem of given a DNF formula, decide if it is a tautology (if you do not want to restrict to DNF, this will still work as this only makes the problem more general).
The answer of your questions easily follows from the definition of $coNP$. Remember that a language $L \subseteq \{0,1\}^*$ is in $coNP$ is $\bar{L} = \{x \in \{0,1\}^* \mid x \notin L\} \in NP$. For example, $\overline{TAUT}$ is the set of DNF that are not a tautology. To prove that a DNF is not a tautology, you only have to find an assignment that does not satisfy your formula, which can be done in polynomial time with a NTM (just "bruteforce" the assignments). Hence, it is in $NP$. In other words, $\overline{TAUT} \in NP$ thus $TAUT \in coNP$.
Now take an $NP$-complete language $L$. By definition, $\bar{L} \in coNP$. We show that $\bar{L}$ is $coNP$-complete, that is, for every language $A \in coNP$, $A \leq \bar{L}$. Let $A \in coNP$. Then $\bar{A}$ is in $NP$. By $NP$-completeness of $L$, there exists a function $f$, computable in polynomial time such that $x \in \bar{A}$ iff $f(x) \in L$. This is equivalent to say that $x \notin \bar{A}$ iff $f(x) \notin L$. Which in turn, is equivalent to $x \in A$ iff $f(x) \in \bar{L}$. Thus, $f$ is also a reduction from $A$ to $\bar{L}$, meaning that $A \leq \bar{L}$. In other words, $\bar{L}$ is $coNP$-complete.
Now, if you want to show that $TAUT$ is $coNP$-complete, you only have to show that $\overline{TAUT}$ is $NP$-complete. And it is not hard to see that $SAT \leq \overline{TAUT}$. Indeed, a CNF $F$ is satisfiable iff $\neg F$, which is a DNF, is
not a tautology. |
Search
Now showing items 1-10 of 24
Production of Σ(1385)± and Ξ(1530)0 in proton–proton collisions at √s = 7 TeV
(Springer, 2015-01-10)
The production of the strange and double-strange baryon resonances ((1385)±, Ξ(1530)0) has been measured at mid-rapidity (|y|< 0.5) in proton–proton collisions at √s = 7 TeV with the ALICE detector at the LHC. Transverse ...
Forward-backward multiplicity correlations in pp collisions at √s = 0.9, 2.76 and 7 TeV
(Springer, 2015-05-20)
The strength of forward-backward (FB) multiplicity correlations is measured by the ALICE detector in proton-proton (pp) collisions at s√ = 0.9, 2.76 and 7 TeV. The measurement is performed in the central pseudorapidity ...
Inclusive photon production at forward rapidities in proton-proton collisions at $\sqrt{s}$ = 0.9, 2.76 and 7 TeV
(Springer Berlin Heidelberg, 2015-04-09)
The multiplicity and pseudorapidity distributions of inclusive photons have been measured at forward rapidities ($2.3 < \eta < 3.9$) in proton-proton collisions at three center-of-mass energies, $\sqrt{s}=0.9$, 2.76 and 7 ...
Rapidity and transverse-momentum dependence of the inclusive J/$\mathbf{\psi}$ nuclear modification factor in p-Pb collisions at $\mathbf{\sqrt{\textit{s}_{NN}}}=5.02$ TeV
(Springer, 2015-06)
We have studied the transverse-momentum ($p_{\rm T}$) dependence of the inclusive J/$\psi$ production in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV, in three center-of-mass rapidity ($y_{\rm cms}$) regions, down to ...
Measurement of pion, kaon and proton production in proton–proton collisions at √s = 7 TeV
(Springer, 2015-05-27)
The measurement of primary π±, K±, p and p¯¯¯ production at mid-rapidity (|y|< 0.5) in proton–proton collisions at s√ = 7 TeV performed with a large ion collider experiment at the large hadron collider (LHC) is reported. ...
Two-pion femtoscopy in p-Pb collisions at $\sqrt{s_{\rm NN}}=5.02$ TeV
(American Physical Society, 2015-03)
We report the results of the femtoscopic analysis of pairs of identical pions measured in p-Pb collisions at $\sqrt{s_{\mathrm{NN}}}=5.02$ TeV. Femtoscopic radii are determined as a function of event multiplicity and pair ...
Measurement of charm and beauty production at central rapidity versus charged-particle multiplicity in proton-proton collisions at $\sqrt{s}$ = 7 TeV
(Springer, 2015-09)
Prompt D meson and non-prompt J/$\psi$ yields are studied as a function of the multiplicity of charged particles produced in inelastic proton-proton collisions at a centre-of-mass energy of $\sqrt{s}=7$ TeV. The results ...
Charged jet cross sections and properties in proton-proton collisions at $\sqrt{s}=7$ TeV
(American Physical Society, 2015-06)
The differential charged jet cross sections, jet fragmentation distributions, and jet shapes are measured in minimum bias proton-proton collisions at centre-of-mass energy $\sqrt{s}=7$ TeV using the ALICE detector at the ...
Centrality dependence of high-$p_{\rm T}$ D meson suppression in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV
(Springer, 2015-11)
The nuclear modification factor, $R_{\rm AA}$, of the prompt charmed mesons ${\rm D^0}$, ${\rm D^+}$ and ${\rm D^{*+}}$, and their antiparticles, was measured with the ALICE detector in Pb-Pb collisions at a centre-of-mass ...
K*(892)$^0$ and $\Phi$(1020) production in Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV
(American Physical Society, 2015-02)
The yields of the K*(892)$^0$ and $\Phi$(1020) resonances are measured in Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV through their hadronic decays using the ALICE detector. The measurements are performed in multiple ... |
Linear Algebra
A matrix is a 2-D array of numbers.
A tensor is a n-D array of numbers.
Matrices are associative but not commutative.
Not all square matrices have inverse. A matrix that is not invertible is called Singular or Degenerative.
A matrix is “singular” if any of the following are true:
-Any row or column contains all zeros.
-Any two rows or columns are identical.
-Any row or column is a linear combination of other rows or columns.
A square matrix A is singular, if and only if the determinant(A) = 0
Inner product of two vectors \(\overrightarrow{u}\) and \(\overrightarrow{v}\):\(\overrightarrow{u} * \overrightarrow{v} = p * ||\overrightarrow{u}|| = p * \sqrt{\sum_{i=1}^{n}u_{i}^{2}} = u^{T} * v = \sum_{i=1}^{n}u_{i} * v_{i}\)
p is the projection of \(\overrightarrow{v}\) on \(\overrightarrow{u}\) and \(||\overrightarrow{u}||\) is the Euclidean norm of \(\overrightarrow{u}\).\(A = \begin{bmatrix} a & b & c\\ d & e & f\\ g & h & i \end{bmatrix}\)
Determinant(A) = |A| = a(ei-hf) – b(di-gf) + c(dh – ge)
If \(f: R^{n*m} \mapsto R\)\(\frac{\partial}{\partial A} f(A) = \begin{bmatrix} \frac{\partial}{\partial a_{11}}f(A) & … & \frac{\partial}{\partial a_{1m}} f(A)\\ … & … & …\\ \frac{\partial}{\partial a_{n1}}f(A) & … & \frac{\partial}{\partial a_{nm}}A\\ \end{bmatrix}\)
Example:\( f(A) = a_{11} + … + a_{nm} \) \(\frac{\partial}{\partial A} f(A) = \begin{bmatrix} 1 & … & 1\\ … & … & …\\ 1 & … & 1 \\ \end{bmatrix}\)
If A is a squared matrix:
trace(A) = \(\sum_{i=1}^n A_{ii}\)
trace(AB) = trace(BA)
trace(ABC) = trace(CAB) = trace(BCA)
trace(B) = trace(\(B^T\))
trace(a) = a\(\frac{\partial}{\partial A} trace(AB) = B^T\) \(\frac{\partial}{\partial A} trace(ABA^TC) = CAB + C^TAB^T\)
Eigenvector
Given a matrix A, if a vector μ satisfies the equation A*μ = λ*μ then μ is called an eigenvector for the matrix A, and λ is called the eigenvalue for the matrix A. The principal eigenvector for the matrix A is the eigenvalue with the largest eigenvalue.
Example:
The normalized eigenvectors for \(\begin{bmatrix}0 & 1 \\1 & 0 \end{bmatrix}\) are \(\begin{bmatrix}\frac{1}{\sqrt{2}} \\ \frac{1}{\sqrt{2}} \end{bmatrix}\) and \(\begin{bmatrix}-\frac{1}{\sqrt{2}} \\ \frac{1}{\sqrt{2}} \end{bmatrix}\), the eigenvalues are 1 and -1.
Eigendecomposition
Given a squared matrix A∈\(R^{n*n}\), ∃ Q∈\(R^{n*n}\), Λ∈\(R^{n*n}\) and Λ diagonal, such as \(A=QΛQ^T\).
Q’s columns are the eigenvectors of \(A\)
Λ is the diagonal matrix whose diagonal elements are the eigenvalues
Example:
The eigendecomposition of \(\begin{bmatrix}0 & 1 \\1 & 0 \end{bmatrix}\) is Q=\(\begin{bmatrix}\frac{1}{\sqrt{2}} & -\frac{1}{\sqrt{2}} \\ \frac{1}{\sqrt{2}} & \frac{1}{\sqrt{2}}\end{bmatrix}\), Λ=\(\begin{bmatrix}\frac{1}{\sqrt{2}} & \frac{1}{\sqrt{2}} \\ -\frac{1}{\sqrt{2}} & \frac{1}{\sqrt{2}}\end{bmatrix}\)
Eigenvectors are vect
Single Value Decomposition
Given a matrix A∈\(R^{n*m}\), ∃ U∈\(R^{n*m}\), D∈\(R^{m*m}\) and D diagonal, V∈\(R^{m*m}\) such as \(A=UDV^T\).
U’s columns are the eigenvectors of \(AA^T\)
V’s columns are the eigenvectors of \(A^TA\)
Example:
The SVD decomposition of \(\begin{bmatrix} 0 & 1 \\1 & 0 \end{bmatrix}\) is U=\(\begin{bmatrix}0 & -1 \\ -1 & 0\end{bmatrix}\), D=\(\begin{bmatrix}1 & 0 \\ 0 & 1\end{bmatrix}\), V=\(\begin{bmatrix}-1 & 0 \\ 0 & -1\end{bmatrix}\).
More details about SVD can be found here: http://www.youtube.com/watch?v=9YtmGy-wfE4
The Moore-Penrose Pseudoinverse
The Moore-Penrose pseudoinverse is a matrix that can act as a partial replacement for the matrix inverse in cases where it does not exist (e.g. non-square matrices).
The pseudoinverse matrix is defined as: \(pinv(A) = \lim_{α \rightarrow 0} (A^TA + αI)^{-1} A^T\)
Analysis
0! = 1
exp(1) = 2.718
exp(0) = 1
ln(1) = 0\(ln(x) = log_e(x) \\ log_b (b^a) = a\)
exp(a + b) = exp(a) * exp(b)
ln(a * b) = ln(a) + ln(b)\(cos(x)^2 + sin(x)^2 = 1\)
Euler’s formula
exp(iθ) = cos(θ) + i sin(θ)
Complex numbers Rectangular form
z = a + ib (real part + imaginary part and i an imaginary unit satisfying \(i^2 = −1\)).
Polar form
z = r (cos(θ) + i sin(θ))
Exponential form
z = r.exp(iθ)
Multivariate equations
The solution set of a system of linear equations with 3 variables is the intersection of hyperplanes defined by each linear equation.
\(\frac{\partial f(x)}{\partial x} = \lim_{h \rightarrow 0} \frac{f(x+h) – f(x)}{h}\)
Derivatives
x^n
n * x^(n-1)
exp(x)
exp(x)
f o g (x)
g’(x) * f’ o g(x)
ln(x)
1/x
sin(x)
cos(x)
cos(x)
-sin(x)
\(\int_{a}^{b} (f(x) g(x))’ dx = \int_{a}^{b} f'(x) g(x) dx+ \int_{a}^{b} f(x) g'(x) dx\)
Integration by parts
\((x + y)^n = \sum_{k=0}^{n} C_n^k x^k y^{n-k}\)
Binomial theorem Chain rule
Z = f(x(u,v), y(u,v))\(\frac{\partial Z}{\partial u} = \frac{\partial Z}{\partial x} * \frac{\partial x}{\partial u} + \frac{\partial Z}{\partial y} * \frac{\partial y}{\partial u}\)
Entropy
Entropy measures the uncertainty associated with a random variable.\(H(X) = -\sum_{i=1}^n p(x^{(i)}) log(p(x^{(i)}))\)
Example:
Entropy({1,1,1,1}) = 0
Entropy({0,1,0,1}) = -½ (4*log(½))
\(H = \begin{bmatrix}\frac{\partial^2 f(θ)}{\partial θ_1\partial θ_1} & \frac{\partial^2 f(θ)}{\partial θ_1 \partial θ_2} \\ \frac{\partial^2 f(θ)}{\partial θ_2\partial θ_1} & \frac{\partial^2 f(θ)}{\partial θ_2\partial θ_2} \end{bmatrix}\)
Hessian
Example:\(f(θ) = θ_1^2 + θ_2^2 \\ H(f) = \begin{bmatrix} 2 & 0 \\ 0 & 2 \end{bmatrix}\)
A function f(θ) is convex if its Hessian matrix is positive semidefinite (\(x^T.H(θ).x >= 0\), for every \(x∈R^2\)).\(x^T.H(θ).x = \begin{bmatrix} x_1 & x_2 \end{bmatrix} . \begin{bmatrix} 2 & 0 \\ 0 & 2 \end{bmatrix} . \begin{bmatrix} x_1\\ x_2 \end{bmatrix} = 2 x_1^2 + 2 x_2^2 >= 0\)
Method of Lagrange Multipliers
To maximize/minimize f(x) with the constraints \(h_r(x) = 0\) for r in {1,..,l},
We need to define the Lagrangian: \(L(x, α) = f(x) – \sum_{r=1}^l α_r h_r(x)\) and find x, α by solving the following equations:\(\frac{\partial L}{\partial x} = 0\)
\(\frac{\partial L}{\partial α_r} = 0\) for all r
\(h_r(x) = 0\) for all r
Calculate the Hessian matrix (f”(x)) to know if the solution is a minimum or maximum.
Method of Lagrange Multipliers with inequality constraints
To minimize f(x) with the constraints \(g_i(x) \geq 0\) for i in {1,..,k} and \(h_r(x) = 0\) for r in {1,..,l},
We need to define the Lagrangian: \(L(x, α, β) = f(w) – \sum_{i=1}^k α_i g_i(x) – \sum_{r=1}^l β_r h_r(x)\) and find x, α, β by solving the following equations:\(\frac{\partial L}{\partial x} = 0\)
\(\frac{\partial L}{\partial α_i} = 0\) for all i
\(\frac{\partial L}{\partial β_r} = 0\) for all r
\(h_r(x) = 0\) for all r
\(g_i(x) \geq 0\) for all i
\(α_i * g_i(x) = 0\) for all i (Karush–Kuhn–Tucker conditions)
\(α_i >= 0\) for all i (KTT conditions)
Lagrange strong duality – hard to understand 🙁
Lagrange dual function \(d(α, β) = \underset{x}{min} L(x, α, β)\), and x satisfies equality and inequality constraints.
We define \(d^* = \underset{α \geq 0, β}{max}\ d(α, β)\)
We define \(p^* = \underset{w}{min}\ f(x) \) (x satisfies equality and inequality constraints)
Under certain conditions (Slater conditions: f convex,…), \(p^* = d^*\)
Jensen’s inequality
If f a convex function, and X a random variable, then f(E[X]) <= E[f(X)].
If f a concave function, and X a random variable, then f(E[X]) >= E[f(X)].
If f is strictly convex (f”(x) > 0), then f(E[X]) = E[f(X)] holds true only if X = E[X] (X is a constant).
Probability
Below the main probability theorems.
Law of total probability
If A is an arbitrary event, and B are mutually exclusive events such as \(\sum_{i=1}^{n} P(B_{i}) = 1\), then:\(P(A) = \sum_{i=1}^{n} P(A|B_{i}) P(B_{i}) = \sum_{i=1}^{n} P(A,B_{i})\)
Example:
Suppose that 15% of the population of your country was exposed to a dangerous chemical Z. If exposure to Z quadruples the risk of lung cancer from .0001 to .0004. What’s the probability that you will get lung cancer.
P(cancer) = .15 * .0004 + .85 * .0001 = .000145
Bayes’ rule
P(A|B) = P(B|A) * P(A) / P(B)
Where A and B are events.
Example:
Suppose that 15% of the population of your country was exposed to a dangerous chemical Z. If exposure to Z quadruples the risk of lung cancer from .0001 to .0004. If you have lung cancer, what’s the probability that you were exposed to Z?
P(Z|Cancer) = P(Cancer|Z) * P(Z) / P(Cancer)
We can calculate the P(Cancer) using the law of total probability: P(Cancer) = P(Cancer|Z) * P(Z) + P(Cancer|~Z) * P(~Z)
P(Z|Cancer) = .0004 * 0.15 / (.0004 * 0.15 + .0001 * .85) = 0.41
\(P(A_1,A_2,…,A_n) = P(A_1) P(A_2|A_1) ….P(A_n|A_{n-1},…,A_1)\)
Chain rule
P(Y,X1,X2,X3,X4) = P(Y,X4,X3,X2,X1)
= P(Y|X4,X3,X2,X1) * P(X4|X3,X2,X1) * P(X3|X2,X1) * P(X2|X1) * P(X1)
= P(Y|X4,X3) * P(X4) * P(X3|X2,X1) * P(X2) * P(X1)
\(P(A_1 \cup A_2 … \cup A_n) \leq P(A_1) + P(A_2) + … + P(A_n) \)
The Union Bound Nb of permutations with replacement
Nb of permutations with replacement = \({n^r}\), r the number of events, n the number of elements.
Probability = \(\frac{1}{n^r}\)
Example:
Probability of getting 3 six when rolling a dice = 1/6 * 1/6 * 1/6
Probability of getting 3 heads when flipping a coin = 1/2 * 1/2 * 1/2
Nb of permutations without replacement and with ordering
Nb of permutations without replacement = \(\frac{n!}{(n-r)!}\), r the number of events, n the number of elements.
Probability = \(\frac{(n-r)!}{n!}\)
Example:
Probability of getting 1 red ball and then 1 green ball from an urn that contains 4 balls (1 red, 1 green, 1 black and 1 blue) = 1/4 * 1/3
Nb of combinations without replacement and without ordering
Nb of combinations \(\frac{n!}{(n-r)! \, r!}\), r the number of events, n the number of elements.
Probability = \(\frac{(n-r)! \, r!}{n!}\)
Example:
Probability of getting 1 red ball and 1 green ball from an urn that contains 4 balls (1 red, 1 green, 1 black and 1 blue) = 1/4 * 1/3 + 1/4 * ⅓ |
A few basic calculations:
The current through Rled, assuming the transistor is fully saturated:$$I_R = \frac{3.7V - 2.4V - 0.3V}{39 \Omega} = 25.6 mA$$
Looking at a 2N3904 Datasheet, they define saturation as the point hfe=10.
Thus:$$I_b = 2.56 mA$$
This means your micro controller needs a control signal of:$$V_m = 0.65V + I_b \cdot 3.9 k\Omega = 10.65V$$
You didn't specify what your microcontroller voltage supply is, but I'm willing to bet it's 5V or lower. To fix this, either lower the value of Rb or increase the value of Rled.
Assume \$V_m = 3.7V\$ and you don't want to change Rled:$$R_b = \frac{V_m - 0.65V}{I_b} = 1189.5 \Omega$$So pick \$R_b < 1.19 k\Omega\$ to saturate the transistor.
One other worrying issue:
Having the 2 LED's in parallel as you have implies that the LED's have a very well matched voltage drop. This is usually not the case! In practice you'll get an asymmetric current flow, where one LED could have a significantly higher current than the other. You really should have separate diode current limit resistors if you intend to wire D1 and D2 in parallel.
simulate this circuit – Schematic created using CircuitLab |
Verify Simulations with the Method of Manufactured Solutions
How do we check if a simulation tool works correctly? One approach is the Method of Manufactured Solutions. The process involves assuming a solution, obtaining source terms and other auxiliary conditions consistent with the assumption, solving the problem with those conditions as inputs to the simulation tool, and comparing the results with the assumed solution. The method is easy to use and very versatile. For example, researchers at Sandia National Laboratories have used it with several in-house codes.
Verification and Validation
Before using a numerical simulation tool to predict outcomes from previously unforeseen situations, we want to build trust in its reliability. We can do this by checking whether the simulation tool accurately reproduces available analytical solutions or whether its results match experimental observations. This brings us to two closely related topics of
verification and validation. Let’s clarify what these two terms mean in the context of numerical simulations.
To numerically simulate a physical problem, we take two steps:
Construct a mathematical model of the physical system. This is where we account for all of the factors (inputs) that influence observed behavior (outputs) and postulate the governing equations. The result is often a set of implicit relations between inputs and outputs. This is frequently a system of partial differential equations with initial and boundary conditions that collectively are referred to as an initial boundary value problem(IBVP). Solve the mathematical model to obtain the outputs as explicit functions of the inputs. However, such closed form solutions are not available for most problems of practical interest. In this case, we use numerical methods to obtain approximate solutions, often with the help of computers to solve large systems of generally nonlinear algebraic equations and inequalities.
There are two situations where errors can be introduced. First, they can occur in the mathematical model itself. Potential errors include overlooking an important factor or assuming an unphysical relationship between variables.
Validation is the process of making sure such errors are not introduced when constructing the mathematical model. Verification, on the other hand, is to ascertain that the mathematical model is accurately solved. Here, we are ensuring that the numerical algorithm is convergent and the computer implementation is correct, so that the numerical solution is accurate.
In brief, during
validation we ask if we posed the appropriate mathematical model to describe the physical system, whereas in verification we investigate if we are obtaining an accurate numerical solution to the mathematical model.
Now, we will dive deeper into the verification of numerical solutions to initial boundary value problems (IBVPs).
Different Verification Approaches
How do we check if a simulation tool is accurately solving an IBVP?
One possibility is to choose a problem that has an exact analytical solution and use the exact solution as a benchmark. The method of separation of variables, for example, can be used to obtain solutions to simple IBVPs. The utility of this approach is limited by the fact that most problems of practical interest do not have exact solutions — the
raison d’être of computer simulation. Still, this approach is useful as a sanity check for algorithms and programming.
Another approach is to compare simulation results with experimental data. To be clear, this is combining validation and verification in one step, which is sometimes called
qualification. It is possible but unlikely that experimental observations are matched coincidentally by a faulty solution through a combination of a flawed mathematical model and a wrong algorithm or a bug in the programming. Barring such rare occurrences, a good match between a numerical solution and an experimental observation vouches for the validity of the mathematical model and the veracity of the solution procedure.
The Application Libraries in COMSOL Multiphysics contain many verification models that use one or both of these approaches. They are organized by physics areas.
Verification models are available in the Application Libraries of COMSOL Multiphysics.
What if we want to verify our results in the absence of exact mathematical solutions and experimental data? We can turn to the method of manufactured solutions.
Implementing the Method of Manufactured Solutions
The goal of solving an IBVP is to find an explicit expression for the solution in terms of independent variables, usually space and time, given problem parameters such as material properties, boundary conditions, initial conditions, and source terms. Common forms of source terms include body forces such as gravity in structural mechanics and fluid flow problems, reaction terms in transport problems, and heat sources in thermal problems.
In the Method of Manufactured Solutions (MMS), we flip the script and start with an assumed explicit expression for the solution. Then, we substitute the solution to the differential equations and obtain a consistent set of source terms, initial conditions, and boundary conditions. This usually involves evaluating a number of derivatives. We will soon see how the symbolic algebra routines in COMSOL Multiphysics can help with this process. Similarly, we evaluate the assumed solution at time t = 0 and at the boundaries to obtain initial conditions and the boundary conditions.
Next comes the verification step. Given the source terms and auxiliary conditions just obtained, we use the simulation tool to obtain a numerical solution to the IBVP and compare it to the original assumed solution with which we started.
Let us illustrate the steps with a simple example.
Verifying 1D Heat Conduction
Consider a 1D heat conduction problem in a bar of length L
with initial condition
and fixed temperatures at the two ends given by
The coefficients A_c, \rho, C_p, and k stand for the cross-sectional area, mass density, heat capacity, and thermal conductivity, respectively. The heat source is given by Q.
Our goal is to verify the solution of this problem using the method of manufactured solutions.
First, we assume an explicit form for the solution. Let’s consider the temperature distribution
where \tau is a characteristic time, which for this example is an hour. We introduce a new variable u for the assumed temperature to distinguish it from the computed temperature T.
Next, we find the source term consistent with the assumed solution. We can hand calculate partial derivatives of the solution with respect to space and time and substitute them in the differential equation to obtain Q. Alternatively, since COMSOL Multiphysics is able to perform symbolic manipulations, we will use that feature instead of hand calculating the source term.
In the case of uniform material and cross-sectional properties, we can declare A_c, \rho, C_p, and k as parameters. The general heterogeneous case requires variables, as do time-dependent boundary conditions. Notice the use of the operator
d(), one of the built-in differentiation operators in COMSOL Multiphysics, shown in the screenshot below. The symbolic algebra routine in COMSOL Multiphysics can automate the evaluation of partial derivatives.
We perform this symbolic manipulation with the caveat that we trust the symbolic algebra. Otherwise, any errors observed later could be from the symbolic manipulation and not the numerical solution. Of course, we can plot a hand-calculated expression for Q alongside the result of the symbolic manipulation shown above to verify the symbolic algebra routine.
Next, we compute the initial and boundary conditions. The initial condition is the assumed solution evaluated at t = 0.
The values of the temperature at the two ends of the bar are g_1(t) = g_2(t) = 500 K.
Next, we obtain the numerical solution of the problem using the source term, as well as the initial and boundary conditions we have just calculated. For this example, let us use the
Heat Transfer in Solids physics interface. Add initial values, boundary conditions, and sources derived from the assumed solution.
For the final step, we compare the numerical solution with the assumed solution. The plots below show the temperature after a time period of one day. The first solution is obtained using linear elements, whereas the second solution is obtained using the quadratic elements. For this type of problem, COMSOL Multiphysics chooses quadratic elements by default.
The solution computed using the manufactured solution with linear elements (left) and quadratic elements (right). Checking Different Parts of the Code
The MMS gives us the flexibility to check different parts of the code. In the example given above, for the purpose of simplicity we have intentionally left many parts of the IBVP unchecked. In practice, every item in the equation should be checked in the most general form. For example, to check if the code accurately handles nonuniform cross-sectional areas, we need to define a spatially variable area before deriving the source term. The same is true for other coefficients such as material properties.
A similar check should be made for all boundary and initial conditions. If, for example, we want to specify the flux on the left end instead of the temperature, we first evaluate the flux corresponding to the manufactured solution, i.e., -n\cdot(-A_ck \nabla u), where n is the outward unit normal. For the assumed solution in this example, the inward flux at the left end becomes \frac{A_ck}{L}\frac{t}{\tau}*1K.
In COMSOL Multiphysics, the default boundary condition for heat transfer in solids is thermal insulation. What if we want to verify the handling of thermal insulation on the left end? We would need to manufacture a new solution where the derivative vanishes on the left end. For example, we can use
Note that during verification, we are checking if the equations are being correctly solved. We are not concerned with whether the solution corresponds to physical situations.
Remember that once we manufacture a new solution, we have to recalculate the source term, initial conditions, and boundary conditions according to the assumed solution. Of course, when we use the symbolic manipulation tools in COMSOL Multiphysics, we are exempt from the tedium!
Convergence Rate
As shown in the graph above, the solutions obtained by the linear element and the quadratic element converged as the mesh size was reduced. This qualitative convergence gives us some confidence in the numerical solution. We can further scrutinize the numerical method by studying its rate of convergence, which will provide a quantitative check of the numerical procedure.
For example, for the stationary version of the problem, the standard finite element error estimate for error measured in the m-order Sobolev norm is
where u and u_h are the exact and finite element solutions, h is the maximum element size, p is the order of the approximation polynomials (shape functions). For m = 0, this gives the error estimate
where C is a mesh independent constant.
Returning to the method of manufactured solutions, this implies that the solution with linear element (p = 1) should show second-order convergence when the mesh is refined. If we plot the norm of the error with respect to mesh size on a log-log plot, the slope should asymptotically approach 2. If this does not happen, we will have to check the code or the accuracy and regularity of inputs such as material and geometric properties. As the figures below show, the numerical solution converges at the theoretically expected rate.
Left: Use integration operators to define norms. The operator intop1 is defined to integrate over the domain. Right: Log-log plot of error versus mesh size shows second-order convergence in the L_2-norm (m = 0) for linear elements, which is consistent with theoretical prediction.
While we should always check convergence, the theoretical convergence rate can only be checked for those problems like the one above where
a priori error estimates are available. When you have such problems, remember that the method of manufactured solutions can help you verify if your code shows the correct asymptotic behavior. Nonlinear Problems and Coupled Problems
In case of constitutive nonlinearity, the coefficients in the equation depend on the solution. In heat conduction, for example, thermal conductivity can depend on the temperature. In such cases, the coefficients need to be derived from the assumed solution.
Coupled (multiphysics) problems have more than one governing equation. Once solutions are assumed for all the fields involved, source terms have to be derived for each governing equation.
Uniqueness
Note that the logic behind the method of manufactured solutions holds only if the governing system of equations has a unique solution under the conditions (source term, boundary, and initial conditions) implied by the assumed solution. For example, in the stationary heat conduction problem, uniqueness proofs require positive thermal conductivity. While this is straightforward to check for in the case of isotropic uniform thermal conductivity, in the case of temperature dependent conductivity or anisotropy more thought should be given when manufacturing the solution to not violate such assumptions.
When using the method of manufactured solutions, the solution exists by construction. In addition, uniqueness proofs are available for a much larger class of problems than we have exact analytical solutions. Thus, the method gives us more room to work with than looking for exact solutions starting with source terms and initial and boundary conditions.
Try It Yourself
The built-in symbolic manipulation functionality of COMSOL Multiphysics makes it easy to implement the MMS for code verification as well as for educational purposes. While we do extensive testing of our codes, we welcome scrutiny on the part of our users. This blog post introduced a versatile tool that you can use to verify the various physics interfaces. You can also verify your own implementations when using equation-based modeling or the Physics Builder in COMSOL Multiphysics. If you have any questions about this technique, please feel free to contact us!
Resources For an extensive discussion of the method of manufactured solutions including relative strengths and limitations, see this report from Sandia National Laboratories. The report details a set of blind tests in which one author planted a series of code mistakes unbeknownst to the second author, who had to mine-sweep using the method described in this blog post. For a broader discussion on verification and validation in the context of scientific computing, check out W. J. Oberkampf and C. J. Roy, Verification and Validation in Scientific Computing, Cambridge University Press, 2010 W. J. Oberkampf and C. J. Roy, Standard error estimates for the finite element method are available in texts such as Thomas J. R. Hughes, The Finite Element Method: Linear Static and Dynamic Finite Element Analysis, Dover Publications, 2000 B. Daya Reddy, Introductory Functional Analysis: With Applications to Boundary Value Problems and Finite Elements, Springer-Verlag, 1997 Thomas J. R. Hughes, Comments (0) CATEGORIES Chemical COMSOL Now Electrical Fluid General Interfacing Mechanical Today in Science TAGS CATEGORIES Chemical COMSOL Now Electrical Fluid General Interfacing Mechanical Today in Science |
First the definitions:
The point $p$ is an
$\omega$-cluster pointfor a subset $A$ of a topological space $X$ if every neighbourhood of $p$ contains infinitely many points of $A$.
A space is
countably compactif every countable open cover has a finite subcover.
Now the question:
How does one prove that every countable infinite subset $A = \{ p_i : i \in \mathbb N \}$ of a countably compact space $X$ has an $\omega$-cluster point?
I know how to prove it if in addition $X$ is assumed to be Hausdorff: by contradiction assuming that there is no $\omega$-cluster point and considering the open cover $\{U_i : i \in \mathbb N \} \cup \{ \mathcal{C}A \}$, where $U_i$ is a neighbourhood of $p_i$ containing at most finitely many points of $A$, and $\mathcal{C}A=X\setminus A$. However $\mathcal{C}A$ is not necessarily open if $X$ is not Hausdorff and I do not know how to modify the proof. |
I am new to option pricing and following problem came up that I don't understand how to handle.
A derivative will pay out dollar amount equal to $$\frac1T\ln \frac{S_T}{S_0}$$ at maturity, where $S_T$ is distributed log-normally, and the expected return is $\mu$ and volatility is $\sigma$ and $T$ is the time.
So what is the price of the derivative using risk neutral valuation..
I know I have to use a stock and a derivative to make a risk neutral portfolio, but not really sure how to proceed. |
Seeing that in the Chomsky Hierarchy Type 3 languages can be recognised by a DFA (which has no stacks), Type 2 by a DFA with one stack (i.e. a push-down automaton) and Type 0 by a DFA with two stacks (i.e. with one queue, i.e. with a tape, i.e. by a Turing Machine), how do Type 1 languages fit in...
Considering this pseudo-code of an bubblesort:FOR i := 0 TO arraylength(list) STEP 1switched := falseFOR j := 0 TO arraylength(list)-(i+1) STEP 1IF list[j] > list[j + 1] THENswitch(list,j,j+1)switched := trueENDIFNEXTIF switch...
Let's consider a memory segment (whose size can grow or shrink, like a file, when needed) on which you can perform two basic memory allocation operations involving fixed size blocks:allocation of one blockfreeing a previously allocated block which is not used anymore.Also, as a requiremen...
Rice's theorem tell us that the only semantic properties of Turing Machines (i.e. the properties of the function computed by the machine) that we can decide are the two trivial properties (i.e. always true and always false).But there are other properties of Turing Machines that are not decidabl...
People often say that LR(k) parsers are more powerful than LL(k) parsers. These statements are vague most of the time; in particular, should we compare the classes for a fixed $k$ or the union over all $k$? So how is the situation really? In particular, I am interested in how LL(*) fits in.As f...
Since the current FAQs say this site is for students as well as professionals, what will the policy on homework be?What are the guidelines that a homework question should follow if it is to be asked? I know on math.se they loosely require that the student make an attempt to solve the question a...
I really like the new beta theme, I guess it is much more attractive to newcomers than the sketchy one (which I also liked). Thanks a lot!However I'm slightly embarrassed because I can't read what I type, both in the title and in the body of a post. I never encountered the problem on other Stac...
This discussion started in my other question "Will Homework Questions Be Allowed?".Should we allow the tag? It seems that some of our sister sites (Programmers, stackoverflow) have not allowed the tag as it isn't constructive to their sites. But other sites (Physics, Mathematics) do allow the s...
There have been many questions on CST that were either closed, or just not answered because they weren't considered research level. May those questions (as long as they are of good quality) be reposted or moved here?I have a particular example question in mind: http://cstheory.stackexchange.com...
Ok, so in most introductory Algorithm classes, either BigO or BigTheta notation are introduced, and a student would typically learn to use one of these to find the time complexity.However, there are other notations, such as BigOmega and SmallOmega. Are there any specific scenarios where one not...
Many textbooks cover intersection types in the lambda-calculus. The typing rules for intersection can be defined as follows (on top of the simply typed lambda-calculus with subtyping):$$\dfrac{\Gamma \vdash M : T_1 \quad \Gamma \vdash M : T_2}{\Gamma \vdash M : T_1 \wedge T_2}...
I expect to see pseudo code and maybe even HPL code on regular basis. I think syntax highlighting would be a great thing to have.On Stackoverflow, code is highlighted nicely; the schema used is inferred from the respective question's tags. This won't work for us, I think, because we probably wo...
Sudoku generation is hard enough. It is much harder when you have to make an application that makes a completely random Sudoku.The goal is to make a completely random Sudoku in Objective-C (C is welcome). This Sudoku generator must be easily modified, and must support the standard 9x9 Sudoku, a...
I have observed that there are two different types of states in branch prediction.In superscalar execution, where the branch prediction is very important, and it is mainly in execution delay rather than fetch delay.In the instruction pipeline, where the fetching is more problem since the inst...
Is there any evidence suggesting that time spent on writing up, or thinking about the requirements will have any effect on the development time? Study done by Standish (1995) suggests that incomplete requirements partially (13.1%) contributed to the failure of the projects. Are there any studies ...
NPI is the class of NP problems without any polynomial time algorithms and not known to be NP-hard. I'm interested in problems such that a candidate problem in NPI is reducible to it but it is not known to be NP-hard and there is no known reduction from it to the NPI problem. Are there any known ...
I am reading Mining Significant Graph Patterns by Leap Search (Yan et al., 2008), and I am unclear on how their technique translates to the unlabeled setting, since $p$ and $q$ (the frequency functions for positive and negative examples, respectively) are omnipresent.On page 436 however, the au...
This is my first time to be involved in a site beta, and I would like to gauge the community's opinion on this subject.On StackOverflow (and possibly Math.SE), questions on introductory formal language and automata theory pop up... questions along the lines of "How do I show language L is/isn't...
This is my first time to be involved in a site beta, and I would like to gauge the community's opinion on this subject.Certainly, there are many kinds of questions which we could expect to (eventually) be asked on CS.SE; lots of candidates were proposed during the lead-up to the Beta, and a few...
This is somewhat related to this discussion, but different enough to deserve its own thread, I think.What would be the site policy regarding questions that are generally considered "easy", but may be asked during the first semester of studying computer science. Example:"How do I get the symme...
EPAL, the language of even palindromes, is defined as the language generated by the following unambiguous context-free grammar:$S \rightarrow a a$$S \rightarrow b b$$S \rightarrow a S a$$S \rightarrow b S b$EPAL is the 'bane' of many parsing algorithms: I have yet to enc...
Assume a computer has a precise clock which is not initialized. That is, the time on the computer's clock is the real time plus some constant offset. The computer has a network connection and we want to use that connection to determine the constant offset $B$.The simple method is that the compu...
Consider an inductive type which has some recursive occurrences in a nested, but strictly positive location. For example, trees with finite branching with nodes using a generic list data structure to store the children.Inductive LTree : Set := Node : list LTree -> LTree.The naive way of d...
I was editing a question and I was about to tag it bubblesort, but it occurred to me that tag might be too specific. I almost tagged it sorting but its only connection to sorting is that the algorithm happens to be a type of sort, it's not about sorting per se.So should we tag questions on a pa...
To what extent are questions about proof assistants on-topic?I see four main classes of questions:Modeling a problem in a formal setting; going from the object of study to the definitions and theorems.Proving theorems in a way that can be automated in the chosen formal setting.Writing a co...
Should topics in applied CS be on topic? These are not really considered part of TCS, examples include:Computer architecture (Operating system, Compiler design, Programming language design)Software engineeringArtificial intelligenceComputer graphicsComputer securitySource: http://en.wik...
I asked one of my current homework questions as a test to see what the site as a whole is looking for in a homework question. It's not a difficult question, but I imagine this is what some of our homework questions will look like.
I have an assignment for my data structures class. I need to create an algorithm to see if a binary tree is a binary search tree as well as count how many complete branches are there (a parent node with both left and right children nodes) with an assumed global counting variable.So far I have...
It's a known fact that every LTL formula can be expressed by a Buchi $\omega$-automata. But, apparently, Buchi automata is a more powerful, expressive model. I've heard somewhere that Buchi automata are equivalent to linear-time $\mu$-calculus (that is, $\mu$-calculus with usual fixpoints and onl...
Let us call a context-free language deterministic if and only if it can be accepted by a deterministic push-down automaton, and nondeterministic otherwise.Let us call a context-free language inherently ambiguous if and only if all context-free grammars which generate the language are ambiguous,...
One can imagine using a variety of data structures for storing information for use by state machines. For instance, push-down automata store information in a stack, and Turing machines use a tape. State machines using queues, and ones using two multiple stacks or tapes, have been shown to be equi...
Though in the future it would probably a good idea to more thoroughly explain your thinking behind your algorithm and where exactly you're stuck. Because as you can probably tell from the answers, people seem to be unsure on where exactly you need directions in this case.
What will the policy on providing code be?In my question it was commented that it might not be on topic as it seemes like I was asking for working code. I wrote my algorithm in pseudo-code because my problem didnt ask for working C++ or whatever language.Should we only allow pseudo-code here?... |
PhD Thesis Research
My thesis research is in the field of underwater acoustics and signal processing. In particular, this research explores the possibility using a measured acoustic field with some bandwidth to create new fields that have much lower or much higher frequency content. This is made possible by mathematical constructions that I've developed in my research termed
First, we'll discuss what autoproducts are, then we'll discuss how they can be used for source localization. autoproducts. Autoproducts are quadratic products of frequency-domain acoustic fields at two different frequencies. In my research, I've explored some the theory behind these autoproduct fields, as well as some of their applications.
First, we'll discuss what autoproducts are, then we'll discuss how they can be used for source localization.
Autoproducts
We define the frequency-difference autoproduct and frequency-sum autoproduct as:
\[ AP_{\Delta}\left(\textbf{r},\Delta\omega,\omega\right)\equiv P\left(\textbf{r},\omega+\frac{1}{2}\Delta\omega\right)P^*\left(\textbf{r},\omega-\frac{1}{2}\Delta\omega\right)\sim P\left(\textbf{r},\Delta\omega\right)\] \[ AP_{\Sigma}\left(\textbf{r},\Delta\omega,\omega\right)\equiv P\left(\textbf{r},\omega+\frac{1}{2}\Delta\omega\right)P\left(\textbf{r},\omega-\frac{1}{2}\Delta\omega\right)\sim P\left(\textbf{r},2\omega\right)\] To see why the definitions in the first equation might imply the approximation in the second equation, consider a plane-wave acoustic field.
\[ AP_{\Delta}\left(\textbf{r},\Delta\omega,\omega\right)\equiv P\left(\textbf{r},\omega+\frac{1}{2}\Delta\omega\right)P^*\left(\textbf{r},\omega-\frac{1}{2}\Delta\omega\right)\sim P\left(\textbf{r},\Delta\omega\right)\]
\[ AP_{\Sigma}\left(\textbf{r},\Delta\omega,\omega\right)\equiv P\left(\textbf{r},\omega+\frac{1}{2}\Delta\omega\right)P\left(\textbf{r},\omega-\frac{1}{2}\Delta\omega\right)\sim P\left(\textbf{r},2\omega\right)\]
To see why the definitions in the first equation might imply the approximation in the second equation, consider a plane-wave acoustic field.
Plugging these plane waves into the autoproduct definitions, we get:
It's easy to see that, for a plane wave, autoproduct fields are exactly the same as their out-of-band field counterparts at frequencies above and below the original bandwidth. Note that this procedure appears qualitatively similar to
addingfields instead of multiplyingfields. Added fields will have a 'beat' patterns at the difference frequency, but this is fundamentally different from the frequency-difference autoproduct: the frequency-difference autoproduct is a genuine field (with real and imaginary components), not just a description of the envelope of the field. Source Localization Remote Sensing
Suppose you are out in the ocean, and you have an array of underwater microphones, called hydrophones, and you are simply sitting there in the ocean, quietly listening. Usually you'll just record ambient noise, but suppose you hear something (detection), and determine that whatever that something is, it's of interest to you (classification). Then the next step is to find the origin of that sound (localization), and if it's moving, figure out where it's heading (tracking). These four steps: Detection, Classification, Localization, and Tracking, are commonly grouped together into "remote sensing", with an optional fifth step being Identification (determining not just that it's something of interest, but determining what, or who, specifically, made that sound). In my PhD research, I focus primarily on the localization step, though there are other applications in detection and tracking as well
Matched Field Processing
To perform source localization, there are many methods, but the one we'll focus on is called matched field processing, or MFP for short, which was developed in 1976 by Homer Bucker. The objective of MFP is to use pressure-vs-time signals from each hydrophone as the input, and then output a spatial map of possible source locations. The way MFP works is to compare
measured data(recorded from the real environment) with modeled data(simulated from a computational model of the known environment). Basically, a grid of possible source locations is formed, and comparisons are made between the real, measured data and the simulated, modeled data for a hypothetical source at each grid location. A map is formed showing how strong these comparisons (or more precisely, the spatial cross-correlations) between measured and modeled data are for each location. This map is called an ambiguity surface, and can be thought of as a plot of most (or least) likely source locations. When functioning properly, the modeled data from the true source location will create the largest cross-correlation with the measured data, resulting in a peak at the true source location in the ambiguity surface. Through this process, source localization is performed. Environmental Mismatch
Fast forward to the early 90's. Research greatly expanded the capabilities and flexibility of the MFP technique, though it became clear that there was one problem that seemed insurmountable:
environmental mismatch. The issue is that the 'known' acoustic environment (which is necessary for the computational model to create the modeled data) is always slightly different than the true acoustic environment. In other words, unless we knew exactlywhere all the surface waves, bubbles, sea mounts, temperature gradients and salinity gradients were in the ocean when the sound was broadcast, then MFP will inevitably have errors. The naive expectation is that the worse the match, the poorer the source localization. And while that's not wrong, instead of it being a smooth linear relationship between environmental mismatch and localization error, it's actually a rather sharp and abrupt transition from fully functioning to catastrophically failing to localize. To demonstrate this, consider the following setup:
Autoproduct-Based MFP
If the signal has some bandwidth (as any finite time duration signal will), then the autoproduct definitions can be applied. Each of these plots includes environmental mismatch.
Frequency Difference MFP
High frequency signals can be 'downshifted' to much lower frequencies via the frequency-difference autoproduct, and then processed via conventional MFP at these much lower frequencies. Precision in the localization is sacrificed for robustness to environmental mismatch.
Frequency-Sum MFP
With this technique, the effective frequency is shifted up by a factor of two. In this case, robustness to environmental mismatch is sacrificed in exchange for a more precise localization result.
In typical shallow ocean environments (shallow meaning depths of around 100 m), the acoustic environment is usually only known well-enough to support localization via MFP for source frequencies up to 1,000 Hz. Typically, anything above 1,000 Hz is effectively impossible to localize via MFP. MFP techniques based on the Frequency-Difference Autoproduct present an opportunity to overcome the environmental mismatch problem.
To see this technique applied to real ocean data, or to learn more about the theoretical limitations of autoproducts, data check out my publications page.
To see this technique applied to real ocean data, or to learn more about the theoretical limitations of autoproducts, data check out my publications page.
Last updated 3/27/2019 |
The crucial observation is that if $A \vDash B$ then also $\sigma(A) \vDash \sigma(B)$. This follows since all $\sigma$ does is rename variables and flip some variables. For example, if $\sigma(x) = \lnot y$, $\sigma(y) = z$, and $\sigma(z) = x$, then $A(x,y,z) \vDash B(x,y,z)$ implies also $A(\lnot y, z, x) \vDash B(\lnot y, z, x)$.
Let $G$ be the set of all permutations of literals satisfying your condition. I claim that $G$ forms a group with respect to composition. Since $G$ is finite and composition is associative, it suffices to check that it is closed under composition. Indeed, if $\sigma(\lnot x) = \lnot \sigma(x)$ and $\tau(\lnot x) = \lnot \tau(x)$ then $\sigma(\tau(\lnot x)) = \sigma(\lnot\tau(x)) = \lnot\sigma(\tau(x))$.
Since $G$ is a group, $\sigma^{|G|}$ is the identity. If $F \vDash \sigma(F)$ then, applying $\sigma$ on both sides, we get $\sigma(F) \vDash \sigma^2(F)$. More generally, we get $\sigma^n(F) \vDash \sigma^{n+1}(F)$, and so $\sigma^n(F) \vDash \sigma^m(F)$ whenever $m \geq n$. In particular, choosing $n = 1$ and $m = |G|$, we obtain $\sigma(F) \vDash F$.
The group $G$ is known as the signed symmetric group, denoted $B_n$, where $n$ is the number of variables. |
Skills to Develop
Evaluate 2 × 2 determinants. Use Cramer’s Rule to solve a system of equations in two variables. Evaluate 3 × 3 determinants. Use Cramer’s Rule to solve a system of three equations in three variables. Know the properties of determinants.
We have learned how to solve systems of equations in two variables and three variables, and by multiple methods: substitution, addition, Gaussian elimination, using the inverse of a matrix, and graphing. Some of these methods are easier to apply than others and are more appropriate in certain situations. In this section, we will study two more strategies for solving systems of equations.
Evaluating the Determinant of a 2 × 2 Matrix
A determinant is a real number that can be very useful in mathematics because it has multiple applications, such as calculating area, volume, and other quantities. Here, we will use determinants to reveal whether a matrix is invertible by using the entries of a square matrix to determine whether there is a solution to the system of equations. Perhaps one of the more interesting applications, however, is their use in cryptography. Secure signals or messages are sometimes sent encoded in a matrix. The data can only be decrypted with an invertible matrix and the determinant. For our purposes, we focus on the determinant as an indication of the invertibility of the matrix. Calculating the determinant of a matrix involves following the specific patterns that are outlined in this section.
FIND THE DETERMINANT OF A 2 × 2 MATRIX
The
determinant of a 2 × 2 matrix, given
\(A=\begin{bmatrix}a&b\\c&d\end{bmatrix}\)
is defined as
Notice the change in notation. There are several ways to indicate the determinant, including \(\det(A)\) and replacing the brackets in a matrix with straight lines, \(| A |\).
Using Cramer’s Rule to Solve a System of Two Equations in Two Variables
We will now introduce a final method for solving systems of equations that uses determinants. Known as
Cramer’s Rule, this technique dates back to the middle of the 18th century and is named for its innovator, the Swiss mathematician Gabriel Cramer (1704-1752), who introduced it in 1750 in Introduction à l'Analyse des lignes Courbes algébriques. Cramer’s Rule is a viable and efficient method for finding solutions to systems with an arbitrary number of unknowns, provided that we have the same number of equations as unknowns.
Cramer’s Rule will give us the unique solution to a system of equations, if it exists. However, if the system has no solution or an infinite number of solutions, this will be indicated by a determinant of zero. To find out if the system is inconsistent or dependent, another method, such as elimination, will have to be used.
To understand Cramer’s Rule, let’s look closely at how we solve systems of linear equations using basic row operations. Consider a system of two equations in two variables.
\[\begin{align} a_1x+b_1y&= c_1 (1) \label{eq1}\\ a_2x+b_2y&= c_2 (2) \label{eq2}\\ \end{align}\]
We eliminate one variable using row operations and solve for the other. Say that we wish to solve for \(x\). If Equation \ref{eq2} is multiplied by the opposite of the coefficient of \(y\) in Equation \ref{eq1}, Equation \ref{eq1} is multiplied by the coefficient of \(y\) in Equation \ref{eq2}, and we add the two equations, the variable \(y\) will be eliminated.
\[\begin{align*} &b_2a_1x+b_2b_1y = b_2c_1 & \text{Multiply }R_1 \text{ by }b_2 \\ -&\underline{b_1a_2x−b_1b_2y=−b_1c_2} & \text{Multiply }R_2 \text{ by }−b_1 \\ & b_2a_1x−b_1a_2x=b_2c_1−b_1c_2 \end{align*}\]
Now, solve for \(x\).
\[\begin{align*} b_2a_1x−b_1a_2x &= b_2c_1−b_1c_2 \\ x(b_2a_1−b_1a_2) &= b_2c_1−b_1c_2 \\ x &= \dfrac{b_2c_1−b_1c_2}{b_2a_1−b_1a_2}=\dfrac{\begin{bmatrix}c_1&b_1\\c_2&b_2\end{bmatrix}}{\begin{bmatrix}a_1&b_1\\a_2&b_2\end{bmatrix}} \end{align*}\]
Similarly, to solve for \(y\),we will eliminate \(x\).
\[\begin{align*} & a_2a_1x+a_2b_1y = a_2c_1 & \text{Multiply }R_1 \text{ by }a_2 \\ -& \underline{a_1a_2x−a_1b_2y=−a_1c_2} & \text{Multiply }R_2 \text{ by }−a_1 \\ & a_2b_1y−a_1b_2y =a_2c_1−a_1c_2 \end{align*}\]
Solving for \(y\) gives
\[ \begin{align*} a_2b_1y−a_1b_2y &= a_2c_1−a_1c_2 \\ y(a_2b_1−a_1b_2) &= a_2c_1−a_1c_2 \\ y &= \dfrac{a_2c_1−a_1c_2}{a_2b_1−a_1b_2}=\dfrac{a_1c_2−a_2c_1}{a_1b_2−a_2b_1}=\dfrac{\begin{bmatrix}a_1&c_1\\a_2&c_2\end{bmatrix}}{\begin{bmatrix}a_1&b_1\\a_2&b_2\end{bmatrix}} \end{align*}\]
Notice that the denominator for both \(x\) and \(y\) is the determinant of the coefficient matrix.
We can use these formulas to solve for \(x\) and \(y\), but Cramer’s Rule also introduces new notation:
\(D\):determinant of the coefficient matrix \(D_x\):determinant of the numerator in the solution of \(x\)
\[x=\dfrac{D_x}{D}\]
\(D_y\):determinant of the numerator in the solution of \(y\)
\[y=\dfrac{D_y}{D}\]
The key to Cramer’s Rule is replacing the variable column of interest with the constant column and calculating the determinants. We can then express \(x\) and \(y\) as a quotient of two determinants.
CRAMER’S RULE FOR \(2×2\) SYSTEMS
Cramer’s Rule is a method that uses determinants to solve systems of equations that have the same number of equations as variables.
Consider a system of two linear equations in two variables.
\[\begin{align*} a_1x+b_1y&= c_1\\ a_2x+b_2y&= c_2 \end{align*}\]
The solution using Cramer’s Rule is given as
\[\begin{align} x&= \dfrac{D_x}{D} = \dfrac{\begin{bmatrix}c_1&b_1\\c_2&b_2\end{bmatrix}}{\begin{bmatrix}a_1&b_1\\a_2&b_2\end{bmatrix}}\; , D\neq 0\\ y&= \dfrac{D_y}{D} = \dfrac{\begin{bmatrix}a_1&c_1\\a_2&c_2\end{bmatrix}}{\begin{bmatrix}a_1&b_1\\a_2&b_2\end{bmatrix}}\; , D\neq 0 \end{align}\]
If we are solving for \(x\), the \(x\) column is replaced with the constant column. If we are solving for \(y\), the \(y\) column is replaced with the constant column.
Exercise \(\PageIndex{1}\)
Use Cramer’s Rule to solve the \(2 × 2\) system of equations.
\[\begin{align*} x+2y&= -11\\ -2x+y&= -13 \end{align*}\]
Answer
\((3,−7)\)
Evaluating the Determinant of a 3 × 3 Matrix
Finding the determinant of a 2×2 matrix is straightforward, but finding the determinant of a 3×3 matrix is more complicated. One method is to augment the 3×3 matrix with a repetition of the first two columns, giving a 3×5 matrix. Then we calculate the sum of the products of entries
down each of the three diagonals (upper left to lower right), and subtract the products of entries up each of the three diagonals (lower left to upper right). This is more easily understood with a visual and an example.
Find the determinant of the 3×3 matrix.
\(A=\begin{bmatrix}a_1&b_1&c_1\\a_2&b_2&c_2\\a_3&b_3&c_3\end{bmatrix}\)
Augment \(A\) with the first two columns.
\(\det(A)=\left| \begin{array}{ccc|cc} a_1&b_1&c_1&a_1&b_1\\a_2&b_2&c_2&a_2&b_2\\a_3&b_3&c_3&a_3&b_3\end{array} \right|\)
From upper left to lower right: Multiply the entries down the first diagonal. Add the result to the product of entries down the second diagonal. Add this result to the product of the entries down the third diagonal. From lower left to upper right: Subtract the product of entries up the first diagonal. From this result subtract the product of entries up the second diagonal. From this result, subtract the product of entries up the third diagonal.
The algebra is as follows:
\(| A |=a_1b_2c_3+b_1c_2a_3+c_1a_2b_3−a_3b_2c_1−b_3c_2a_1−c_3a_2b_1\)
Example \(\PageIndex{3}\): Finding the Determinant of a 3 × 3 Matrix
Find the determinant of the \(3 × 3\) matrix given
\(A=\begin{bmatrix}0&2&1\\3&−1&1\\4&0&1\end{bmatrix}\)
Solution
Augment the matrix with the first two columns and then follow the formula. Thus,
\[\begin{align*} | A | &= \left| \begin{array}{ccc|cc}0&2&1&0&2\\3&-1&1&3&-1\\4&0&1&4&0\end{array}\right| \\ &= 0(−1)(1)+2(1)(4)+1(3)(0)−4(−1)(1)−0(1)(0)−1(3)(2) \\ &=0+8+0+4−0−6 \\ &= 6 \end{align*}\]
Exercise \(\PageIndex{2}\)
Find the determinant of the 3 × 3 matrix.
\(\det(A)=\begin{vmatrix}1&−3&7\\1&1&1\\1&−2&3\end{vmatrix}\)
Answer
\(−10\)
Q&A: Can we use the same method to find the determinant of a larger matrix?
No, this method only works for 2 × 2 and 3 × 3 matrices. For larger matrices it is best to use a graphing utility or computer software. Using Cramer’s Rule to Solve a System of Three Equations in Three Variables
Now that we can find the determinant of a \(3 × 3\) matrix, we can apply Cramer’s Rule to solve a system of three equations in three variables. Cramer’s Rule is straightforward, following a pattern consistent with Cramer’s Rule for \(2 × 2\) matrices. As the order of the matrix increases to \(3 × 3\), however, there are many more calculations required.
When we calculate the determinant to be zero, Cramer’s Rule gives no indication as to whether the system has no solution or an infinite number of solutions. To find out, we have to perform elimination on the system.
Consider a \(3 × 3\) system of equations.
\[\begin{align} a_1x+b_1y+c_1z &= \color{blue}d_1 \\ a_2x+b_2y+c_2z &= \color{blue}d_2 \\ a_3x+b_3y+c_3z &= \color{blue}d_3 \\ \end{align}\]
\(x=\dfrac{D_x}{D}\), \(y=\dfrac{D_y}{D}\), \(z=\dfrac{D_z}{D}\), \(D≠0\)
where
\[D = \begin{vmatrix} a_1 & b_1 & c_1\\ a_2 & b_2 & c_2\\ a_3 & b_3 & c_3 \end{vmatrix}\; ,\; D_x = \begin{vmatrix} \color{blue}d_1 & b_1 & c_1\\ \color{blue}d_2 & b_2 & c_2\\ \color{blue}d_3 & b_3 & c_3 \end{vmatrix}\; ,\; D_y = \begin{vmatrix} a_1 & \color{blue}d_1 & c_1\\ a_2 & \color{blue}d_2 & c_2\\ a_3 & \color{blue}d_3 & c_3 \end{vmatrix}\; ,\; D_z = \begin{vmatrix} a_1 & b_1 & \color{blue}d_1\\ a_2 & b_2 & \color{blue}d_2\\ a_3 & b_3 & \color{blue}d_3 \end{vmatrix}\]
If we are writing the determinant \(D_x\),we replace the \(x\) column with the constant column. If we are writing the determinant \(D_y\),we replace they y column with the constant column. If we are writing the determinant \(D_z\),we replace the \(z\) column with the constant column. Always check the answer.
Example \(\PageIndex{4}\): Solving a \(3 × 3\) System Using Cramer’s Rule
Find the solution to the given \(3 × 3\) system using Cramer’s Rule.
\[\begin{align*} x+y-z&= 6\\ 3x-2y+z&= -5\\ x+3y-2z&= 14 \end{align*}\]
Solution
Use Cramer’s Rule.
\(D=\begin{vmatrix}1&1&−1\\3&−2&1\\1&3&−2\end{vmatrix}\), \(D_x=\begin{vmatrix}6&1&−1\\−5&−2&1\\14&3&−2\end{vmatrix}\), \(D_y=\begin{vmatrix}1&6&−1\\3&−5&1\\1&14&−2\end{vmatrix}\), \(D_z=\begin{vmatrix}1&1&6\\3&−2&−5\\1&3&14\end{vmatrix}\)
Then,
\[\begin{align*} x&= \dfrac{D_x}{D}&= \dfrac{-3}{-3}&= 1\\ y&= \dfrac{D_y}{D}&= \dfrac{-9}{-3}&= 3\\ z&= \dfrac{D_z}{D}&= \dfrac{6}{-3}&= -2\\ \end{align*}\]
The solution is \((1,3,−2)\).
Exercise \(\PageIndex{3}\)
Use Cramer’s Rule to solve the \(3 × 3\) matrix.
\[\begin{align*} x-3y+7z&= 13\\ x+y+z&= 1\\ x-2y+3z&= 4 \end{align*}\]
Answer
\(\left(−2,\dfrac{3}{5},\dfrac{12}{5}\right)\)
Example \(\PageIndex{5A}\): Using Cramer’s Rule to Solve an Inconsistent System
Solve the system of equations using Cramer’s Rule.
\[\begin{align} 3x-2y&= 4 \label{eq3}\\ 6x-4y&= 0 \label{eq4}\end{align}\]
Solution
We begin by finding the determinants \(D\), \(D_x\),and \(D_y\).
\(D=\begin{vmatrix}3&−2\\6&−4\end{vmatrix}=3(−4)−6(−2)=0\)
We know that a determinant of zero means that either the system has no solution or it has an infinite number of solutions. To see which one, we use the process of elimination. Our goal is to eliminate one of the variables.
Multiply Equation \ref{eq3} by \(−2\). Add the result to Equation \ref{eq4}.
\[\begin{align*} &−6x+4y=−8 \\ &\;\;\;\underline{6x−4y=0} \\ &\;\;\;\;\;\;\;\;\;\; 0=−8 \end{align*}\]
We obtain the equation \(0=−8\), which is false. Therefore, the system has no solution. Graphing the system reveals two parallel lines. See Figure \(\PageIndex{1}\).
Figure \(\PageIndex{1}\)
Example \(\PageIndex{5B}\): Use Cramer’s Rule to Solve a Dependent System
Solve the system with an infinite number of solutions.
\[\begin{align} x-2y+3z&= 0 \label{eq5}\\ 3x+y-2z&= 0 \label{eq6}\\ 2x-4y+6z&= 0 \label{eq7} \end{align}\]
Solution
Let’s find the determinant first. Set up a matrix augmented by the first two columns.
\(\left| \begin{array}{ccc|cc}1&−2&3&1&-2\\3&1&−2&3&1\\2&−4&6&2&-4\end{array}\right|\)
Then,
\(1(1)(6)+(−2)(−2)(2)+3(3)(−4)−2(1)(3)−(−4)(−2)(1)−6(3)(−2)=0\)
As the determinant equals zero, there is either no solution or an infinite number of solutions. We have to perform elimination to find out.
1. Multiply Equation \ref{eq5} by \(−2\) and add the result to Equation \ref{eq7}:
\[\begin{align*} &−2x+4y−6x=0 \\ &\;\;\underline{2x−4y+6z=0} \\ &\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;0=0 \end{align*}\]
2. Obtaining an answer of \(0=0\), a statement that is always true, means that the system has an infinite number of solutions. Graphing the system, we can see that two of the planes are the same and they both intersect the third plane on a line. See Figure \(\PageIndex{2}\).
Figure \(\PageIndex{2}\) Understanding Properties of Determinants
There are many properties of determinants. Listed here are some properties that may be helpful in calculating the determinant of a matrix.
Note: PROPERTIES OF DETERMINANTS
If the matrix is in upper triangular form, the determinant equals the product of entries down the main diagonal. When two rows are interchanged, the determinant changes sign. If either two rows or two columns are identical, the determinant equals zero. If a matrix contains either a row of zeros or a column of zeros, the determinant equals zero. The determinant of an inverse matrix \(A^{−1}\) is the reciprocal of the determinant of the matrix \(A\). If any row or column is multiplied by a constant, the determinant is multiplied by the same factor.
Example \(\PageIndex{7}\): Using Cramer’s Rule and Determinant Properties to Solve a System
Find the solution to the given \(3 × 3\) system.
\[\begin{align} 2x+4y+4z&=2 \label{eq8}\\ 3x+7y+7z&=-5 \label{eq9}\\ x+2y+2z&=4 \label{eq10}\end{align}\]
Solution
Using Cramer’s Rule, we have
\(D=\begin{bmatrix}2&4&4\\3&7&7\\1&2&2\end{bmatrix}\)
Notice that the second and third columns are identical. According to Property 3, the determinant will be zero, so there is either no solution or an infinite number of solutions. We have to perform elimination to find out.
1. Multiply Equation \ref{eq10} by \(–2\) and add the result to Equation \ref{eq8}.
\[\begin{align*} -2x-4y-4x&=-8\\ 2x+4y+4z&=2\\ 0&=-6 \end{align*}\]
Obtaining a statement that is a contradiction means that the system has no solution.
Media
Access these online resources for additional instruction and practice with Cramer’s Rule.
Key Concepts The determinant for \(\begin{bmatrix}a&b\\c&d\end{bmatrix}\) is \(ad−bc\). See Example \(\PageIndex{1}\). Cramer’s Rule replaces a variable column with the constant column. Solutions are \(x=\dfrac{D_x}{D}\), \(y=\dfrac{D_y}{D}\). See Example \(\PageIndex{2}\). To find the determinant of a \(3×3\) matrix, augment with the first two columns. Add the three diagonal entries (upper left to lower right) and subtract the three diagonal entries (lower left to upper right). See Example \(\PageIndex{3}\). To solve a system of three equations in three variables using Cramer’s Rule, replace a variable column with the constant column for each desired solution: \(x=\dfrac{D_x}{D}\), \(y=\dfrac{D_y}{D}\), \(z=\dfrac{D_z}{D}\). See Example \(\PageIndex{4}\). Cramer’s Rule is also useful for finding the solution of a system of equations with no solution or infinite solutions. See Example \(\PageIndex{5}\) and Example \(\PageIndex{6}\). Certain properties of determinants are useful for solving problems. For example: If the matrix is in upper triangular form, the determinant equals the product of entries down the main diagonal. When two rows are interchanged, the determinant changes sign. If either two rows or two columns are identical, the determinant equals zero. If a matrix contains either a row of zeros or a column of zeros, the determinant equals zero. The determinant of an inverse matrix \(A^{−1}\) is the reciprocal of the determinant of the matrix \(A\). If any row or column is multiplied by a constant, the determinant is multiplied by the same factor. See Example \(\PageIndex{7}\) and Example \(\PageIndex{8}\). |
Click edit button to change this text. Lorem ipsum dolor sit amet, consectetur adipiscing elit. Ut elit tellus, luctus nec ullamcorper mattis, pulvinar dapibus leo.
Lily Chambers
Co-Editor in Chief
Lily Chambers transferred to d.tech during her Sophomore year from Woodside High School. She enjoys yoga and hiking on the weekends, and loves spending time with friends. During the week, she volunteers works at Bare Bowls Palo Alto, and gets homework done. Some of her favorite pastimes are reading philosophy books, playing the piano, and looking at the news. Lily values hard work, honesty, respect, always doing the right thing.
Phoebe Rak
Co-Editor in Chief
Phoebe Rak is a study in nuance. She’s a feminist who loves makeup, a liberal with a solid appreciation for rules and systems, a serious student with a comedic streak, and a vegan who likes animals but doesn’t really love them. She’s lived in California her whole life but has visited many states, including a road trip through the Dakotas because her mom thought she needed to experience red states for herself. She didn’t like it, but she also loved it. She’s always up for a lively debate on pretty much any topic and looks forward to exploring controversies and nuances through The Dragon.
Nathan Au-Yeung
Section Editor
Nathan Au-Yeung is a senior at Design Tech High School. Nathan has a brown cat named Sasha. When he is not spending my time writing for the Dragon or doing his homework, he is either practicing violin or playing video games. Nathan is currently thinking of Environmental Science as my college major and hopes that he will be able to make valuable contributions to the Dragon.
Leo Belman
Section Editor
Leo Belman was born in New York City and lived in Brooklyn until he was five, when his family moved to Hong Kong for two years, for his dad's work. He went to two middle schools in New York, before moving to California when he was 13, for eighth grade. Moving to California was a huge shift for him, because everything is more relaxed and quiet out here, which is something he had to get used to (and eventually did). Leo is currently a senior at d.tech (obviously), and lives in Menlo Park. He has two dogs, a golden retriever named Dallas and a pug named Picasso, and an older brother who is now in his freshman year of college.
Nicholas Boyko
Section Editor
Nicholas Boyko is a senior and an editor for the Dragon. He loves writing and reading, especially non-fiction. He also enjoys {\displaystyle \zeta (s)={\frac {1}{\Gamma (s)}}\int _{0}^{\infty }{\frac {x^{s-1}}{e^{x}-1}}\,\mathrm {d} x}. His favorite activity outside of school is playing bass, as well as listening to and talking about music, especially the Brutal Slamming Death Metal band Xavlegbmaofffassssitimiwoamndutroabcwapwaeiippohfffx. He enjoys going to concerts with his dad, Andrew Dennis Boyko, and is always looking for new music to listen to. He is also fluent in Morse code.
Kira Hofelmann
Section Editor
K.inda cool, I.rish dancer, R.eally likes unicorns, A.mazing calves, H.as a twin, O.ften takes naps, F.orever young, E.xceptionally awesome, L.oves movies, M.akes mistakes, A.lways chooses pink, N.ever turns down Disneyland, N.eeds more pets
Alexis Huang
Section Editor
Alexis Huang is a senior at d.tech who loves writing, graphic design, animals, and languages! Along with working on The Dragon, she is a part of many other d.tech clubs, such as d.leadership and yearbook, and also runs her my own study blog. If you're interested in her mediocre SAT advice or want to talk about anything from the perfect instant noodle-cooking technique to the age-old debate about whether milk or cereal should be poured first, don't be afraid to stop her in the hallway and say hi. (she pours milk first, by the way, but is always happy to hear an argument from the other side.) Alexis is excited to spend her last year here documenting the experiences of d.tech's 2019-20 school year through the newspaper and hope that everyone will stick around for the ride!
Jemma Schroder
Section Editor
Jemma Schroder is a senior at Design Tech High School and a section editor of The Dragon. She enjoys journalism because it allows her to view d.tech - and the world - through many different lenses. Jemma is also a co-president of the Model United Nations team and a co-captain of the d.tech sailing team. Outside of school, Jemma is extremely interested in politics and world affairs, but also greatly enjoys mathematics, philosophy, and cognitive science.
Alyssa Wend
Section Editor
Alyssa is a senior and has been working on the Dragon for the past year. She has written articles about sets of twins at d.tech and about the different grade levels. Alyssa looks forward to helping people who want to be writers on our team. She is also apart of the d.tech Flame, our school yearbook, as a photographer and assistant-editor-in-chief.
Chloe Duong
Director of Website
Chloe Duong is a senior who gets mistaken for a freshman most of the time. You can see around school hopefully not stressing out about her grade in math. When she's not worrying about school she umpires baseball games in the spring and coaches a kid baseball team over the summer. During spring she swims for d.tech's swim team. She loves learning more about her culture and folding origami. Beware of her long nails though!
Kelley Hill
Director of Photography & Art
In addition to photography, Kelley loves animals and spends much of her free time, training dog agility and working with horses at various barns. Although she currently only has 2 dogs and a bearded dragon, she has had 35 pets throughout her life: 24 fish, 4 dogs, 3 lizards, 2 stick insects, and 2 frogs.
Geran Benson
Director of Print
“Geran Benson, born in Dallas Texas one year after the turn of the century has lived almost all his life in California doing things and going places. He likes Cars, Games, Tech and multiple other things. Got any questions or random facts, Geran would love to talk.” -Morgan Freeman
Jasper Bull
Co-Video Director
Jasper is a bipedal, anthropomorphous being currently residing in Design Tech High School. Origins of Jasper are unknown. The intentions of Jasper are unknown. The biology of Jasper is unknown. Social contact must be maintained for a period of at least thirty (30) minutes a day to avoid aggravation. Multiple attempts to nullify Jasper have taken place, all ending in failure at a massive cost of human life. Under no circumstances should Jasper exit the confines of the school, otherwise [REDACTED] will be unable to stop the destruction that will inevitably ensue.
Max Hofelmann
Co-Video Director
Max Hofelmann is the new co-director of broadcasting. You’ve probably seen him around holding a camera or running down the halls after Asa or Vlad. Max is excited about this year and all the new videos to come.
Catherine Tang
Director of PR
Patrick Sullivan
Advisor
This is Patrick Sullivan's fourth year teaching at d.tech high school, but his first year advising for The Dragon. His experience in journalism includes being a staff writer for the Sonoma State Star and the Cougar Chronicle at Cal State San Marcos. He currently also serves as the copy editor of a small print magazine in Los Angeles, Spectacle of California. |
I have received important information from Michael Filaseta, with which we can answer:
1,2) Pick a prime $p$ big enough, put $c=p-f(0)$ and consider the polynomial $F(X)=f(X)+c$, which has $F(0)=p$. Specifically, if $f(X)=a_nX^n+\ldots+a_0$, by picking $p>|a_n|+\ldots+|a_1|$ we can guarantee that $F(X)$ has all its roots out of the unit circle, due to an iterative application of the reverse triangle inequality:$$|F(z)|\geq p-|a_1||z|-\ldots-|a_n||z^n|\geq p-(|a_1|+\ldots+|a_n|)>0,$$where we have used that $|z|\leq1$.
Suppose $F$ factors as $F(X)=g(X)h(X)$; then $g(0)h(0)=p$ is a factorization of a prime, so for example $|g(0)|=1$. Therefore the absolute value of the product of the roots of $g$ is not greater than 1 (by Vieta, taking into account the leading coefficient of $g$). This implies that there is at least one root of $g$ inside the (closed) unit circle. But the roots of $g$ come from the roots of $F$, so we have reached a contradiction.
Now, as there are infinite primes bigger than $|a_n|+\ldots+|a_1|$, we know how to find an infinite number of $c$ such that $f+c$ is irreducible.
3) Hilbert's irreducibility theorem also answers 1), and gives the asymptotic behaviour: the polynomial $f+c$ is irreducible for almost every $c$. Concretely, if we denote $S(f,x):=\sum_{|c|\leq x, f+c\text{ irreducible}}1$then we have $$S(f,x)=2x-o(x)$$(the $2$ in $2x$ just comes from the fact that we consider $|c|\leq x$, so the density is computed with respect to $2x$).
In fact, it may be possible that using results close to Siegel's lemma one could prove $S(f,x)=2x-O(\sqrt{x}).$
5) For polynomials of degree 2 over the reals, as has already been mentioned, we can use the sign of the discriminant to guarantee the existence of infinite $c$, which asymptotically have$$\lim \frac{S(f,x)}{2x}=1/2,$$ as (if $a>0$, say) there is a $c_0$ such that if $c<c_0$ then $f+c$ factors, while if $c>c_0$ then $f+c$ is irreducible.
Now every real closed field is elementarily equivalent to the reals, and we can encode the condition on the discriminant in first order logic, so the same applies to real closed fields. |
I've been playing around with some finite fields to test how rapid brute-force is when solving discrete logarithm problems occurring in DH methods.
Working in $\mathbb{F}_{101}$, pick a private key $\alpha=88$, $\beta$ arbitrary and let $g=41$ such that $A=g^{\alpha}\equiv 87 \pmod{101}$. I believe $g$ is not a primitive root modulo 101, and it is confirmed by the fact that $g^{8}\equiv g^{88} \equiv 87 \pmod{101}$.
Say if you were to use the
naive algorithm to attack this DLP and find this value $\tilde{\alpha}=8$, can we then simply say that we found the common key $K=g^{\alpha\beta}$? So is it always true that in this case $g^{\alpha\beta}\equiv g^{\tilde{\alpha}\beta} \pmod{101}$?
I feel like this should not necessarily hold, but I'm not sure if we can simply say this. Does it maybe have to do with $g$ not being a primitive root? |
We know $\frac{1}{81}$ gives us $0.\overline{0123456790}$
How do we create a recurrent decimal with the property of repeating:
$0.\overline{0123456789}$
a) Is there a method to construct such a number?
b) Is there a solution?
c) Is the solution in $\mathbb{Q}$?
According with this Wikipedia page: http://en.wikipedia.org/wiki/Decimal One could get this number by applying this series. Supppose:
$M=123456789$, $x=10^{10}$, then $0.\overline{0123456789}= \frac{M}{x}\cdot$ $\sum$ ${(10^{-9})}^k$ $=\frac{M}{x}\cdot\frac{1}{1-10^{-9}}$ $=\frac{M}{9999999990}$
Unless my calculator is crazy, this is giving me $0.012345679$, not the expected number. Although the example of wikipedia works fine with $0.\overline{123}$.
Some help I got from mathoverflow site was that the equation is: $\frac{M}{1-10^{-10}}$. Well, that does not work either.
So, just to get rid of the gnome calculator rounding problem, running a simple program written in C with very large precision (long double) I get this result:
#include <stdio.h> int main(void){ long double b; b=123456789.0/9999999990.0; printf("%.40Lf\n", b); }
Result: $0.0123456789123456787266031042804570461158$
Maybe it is still a matter of rounding problem, but I doubt that...
Please someone?
Thanks!
Beco
Edited:
Thanks for the answers. After understanding the problem I realize that long double is not sufficient. (float is 7 digits:32 bits, double is 15 digits:64 bits and long double is 19 digits:80 bits - although the compiler align the memory to 128 bits)
Using the wrong program above I should get $0.0\overline{123456789}$ instead of $0.\overline{0123456789}$. Using the denominator as $9999999999$ I must get the correct answer. So I tried to teach my computer how to divide:
#include <stdio.h>int main(void){ int i; long int n, d, q, r; n=123456789; d=9999999999; printf("0,"); n*=10; while(i<100) { if(n<d) { n*=10; printf("0"); i++; continue; } q=n/d; r=n%d; printf("%ld", q); if(!r) break; n=n-q*d; n*=10; i++; } printf("\n");} |
X
Search Filters
Format
Subjects
Library Location
Language
Publication Date
Click on a bar to filter by decade
Slide to change publication date range
1. Observation of a peaking structure in the J/psi phi mass spectrum from B-+/- -> J/psi phi K-+/- decays
PHYSICS LETTERS B, ISSN 0370-2693, 06/2014, Volume 734, Issue 370-2693 0370-2693, pp. 261 - 281
A peaking structure in the J/psi phi mass spectrum near threshold is observed in B-+/- -> J/psi phi K-+/- decays, produced in pp collisions at root s = 7 TeV...
PHYSICS, NUCLEAR | ASTRONOMY & ASTROPHYSICS | PHYSICS, PARTICLES & FIELDS | Physics - High Energy Physics - Experiment | Physics | High Energy Physics - Experiment | scattering [p p] | J/psi --> muon+ muon | experimental results | Particle Physics - Experiment | Nuclear and High Energy Physics | Phi --> K+ K | vertex [track data analysis] | CERN LHC Coll | B+ --> J/psi Phi K | Peaking structure | hadronic decay [B] | Integrated luminosity | info:eu-repo/classification/ddc/ddc:530 | Data sample | final state [dimuon] | mass enhancement | width [resonance] | (J/psi Phi) [mass spectrum] | Breit-Wigner [resonance] | 7000 GeV-cms | leptonic decay [J/psi] | PHYSICS OF ELEMENTARY PARTICLES AND FIELDS
PHYSICS, NUCLEAR | ASTRONOMY & ASTROPHYSICS | PHYSICS, PARTICLES & FIELDS | Physics - High Energy Physics - Experiment | Physics | High Energy Physics - Experiment | scattering [p p] | J/psi --> muon+ muon | experimental results | Particle Physics - Experiment | Nuclear and High Energy Physics | Phi --> K+ K | vertex [track data analysis] | CERN LHC Coll | B+ --> J/psi Phi K | Peaking structure | hadronic decay [B] | Integrated luminosity | info:eu-repo/classification/ddc/ddc:530 | Data sample | final state [dimuon] | mass enhancement | width [resonance] | (J/psi Phi) [mass spectrum] | Breit-Wigner [resonance] | 7000 GeV-cms | leptonic decay [J/psi] | PHYSICS OF ELEMENTARY PARTICLES AND FIELDS
Journal Article
2. Measurement of the ratio of the production cross sections times branching fractions of B c ± → J/ψπ ± and B± → J/ψK ± and ℬ B c ± → J / ψ π ± π ± π ∓ / ℬ B c ± → J / ψ π ± $$ \mathrm{\mathcal{B}}\left({\mathrm{B}}_{\mathrm{c}}^{\pm}\to \mathrm{J}/\psi {\pi}^{\pm }{\pi}^{\pm }{\pi}^{\mp}\right)/\mathrm{\mathcal{B}}\left({\mathrm{B}}_{\mathrm{c}}^{\pm}\to \mathrm{J}/\psi {\pi}^{\pm}\right) $$ in pp collisions at s = 7 $$ \sqrt{s}=7 $$ TeV
Journal of High Energy Physics, ISSN 1029-8479, 1/2015, Volume 2015, Issue 1, pp. 1 - 30
The ratio of the production cross sections times branching fractions σ B c ± ℬ B c ± → J / ψ π ± / σ B ± ℬ B ± → J / ψ K ± $$ \left(\sigma...
B physics | Branching fraction | Hadron-Hadron Scattering | Quantum Physics | Quantum Field Theories, String Theory | Classical and Quantum Gravitation, Relativity Theory | Physics | Elementary Particles, Quantum Field Theory
B physics | Branching fraction | Hadron-Hadron Scattering | Quantum Physics | Quantum Field Theories, String Theory | Classical and Quantum Gravitation, Relativity Theory | Physics | Elementary Particles, Quantum Field Theory
Journal Article
Physics Letters B, ISSN 0370-2693, 05/2016, Volume 756, Issue C, pp. 84 - 102
A measurement of the ratio of the branching fractions of the meson to and to is presented. The , , and are observed through their decays to , , and ,...
scattering [p p] | pair production [pi] | statistical | Phi --> K+ K | f0 --> pi+ pi | High Energy Physics - Experiment | Compact Muon Solenoid | pair production [K] | mass spectrum [K+ K-] | Ratio B | Large Hadron Collider (LHC) | 7000 GeV-cms | leptonic decay [J/psi] | (b)over-bar(s) | J/psi --> muon+ muon | experimental results | Nuclear and High Energy Physics | Physics and Astronomy | branching ratio [B/s0] | CERN LHC Coll | Violating Phase Phi(s) | B/s0 --> J/psi Phi | CMS collaboration ; proton-proton collisions ; CMS ; B physics | Physics | Física | hadronic decay [f0] | Decay | colliding beams [p p] | hadronic decay [Phi] | mass spectrum [pi+ pi-] | B/s0 --> J/psi f0
scattering [p p] | pair production [pi] | statistical | Phi --> K+ K | f0 --> pi+ pi | High Energy Physics - Experiment | Compact Muon Solenoid | pair production [K] | mass spectrum [K+ K-] | Ratio B | Large Hadron Collider (LHC) | 7000 GeV-cms | leptonic decay [J/psi] | (b)over-bar(s) | J/psi --> muon+ muon | experimental results | Nuclear and High Energy Physics | Physics and Astronomy | branching ratio [B/s0] | CERN LHC Coll | Violating Phase Phi(s) | B/s0 --> J/psi Phi | CMS collaboration ; proton-proton collisions ; CMS ; B physics | Physics | Física | hadronic decay [f0] | Decay | colliding beams [p p] | hadronic decay [Phi] | mass spectrum [pi+ pi-] | B/s0 --> J/psi f0
Journal Article
EUROPEAN PHYSICAL JOURNAL C, ISSN 1434-6044, 06/2013, Volume 73, Issue 6, pp. 1 - 17
Cross sections for elastic and proton-dissociative photoproduction of J/psi mesons are measured with the H1 detector in positron-proton collisions at HERA. The...
DIFFRACTIVE PHOTOPRODUCTION | CALIBRATION | VECTOR-MESON | H1 DETECTOR | EXCLUSIVE ELECTROPRODUCTION | VERTEX | LIQUID ARGON CALORIMETER | PHYSICS, PARTICLES & FIELDS | Mesons | Collisions | Energy measurement | Elastic scattering | Luminosity | Photoproduction | Cross sections | Physics | High Energy Physics - Experiment | Fysik | Subatomär fysik | Physical Sciences | Subatomic Physics | Naturvetenskap | Natural Sciences
DIFFRACTIVE PHOTOPRODUCTION | CALIBRATION | VECTOR-MESON | H1 DETECTOR | EXCLUSIVE ELECTROPRODUCTION | VERTEX | LIQUID ARGON CALORIMETER | PHYSICS, PARTICLES & FIELDS | Mesons | Collisions | Energy measurement | Elastic scattering | Luminosity | Photoproduction | Cross sections | Physics | High Energy Physics - Experiment | Fysik | Subatomär fysik | Physical Sciences | Subatomic Physics | Naturvetenskap | Natural Sciences
Journal Article
5. Search for rare decays of $$\mathrm {Z}$$ Z and Higgs bosons to $${\mathrm {J}/\psi } $$ J/ψ and a photon in proton-proton collisions at $$\sqrt{s}$$ s = 13$$\,\text {TeV}$$ TeV
The European Physical Journal C, ISSN 1434-6044, 2/2019, Volume 79, Issue 2, pp. 1 - 27
A search is presented for decays of $$\mathrm {Z}$$ Z and Higgs bosons to a $${\mathrm {J}/\psi } $$ J/ψ meson and a photon, with the subsequent decay of the...
Nuclear Physics, Heavy Ions, Hadrons | Measurement Science and Instrumentation | Nuclear Energy | Quantum Field Theories, String Theory | Physics | Elementary Particles, Quantum Field Theory | Astronomy, Astrophysics and Cosmology
Nuclear Physics, Heavy Ions, Hadrons | Measurement Science and Instrumentation | Nuclear Energy | Quantum Field Theories, String Theory | Physics | Elementary Particles, Quantum Field Theory | Astronomy, Astrophysics and Cosmology
Journal Article
Physics Letters B, ISSN 0370-2693, 06/2014, Volume 734, pp. 261 - 281
A peaking structure in the mass spectrum near threshold is observed in decays, produced in pp collisions at collected with the CMS detector at the LHC. The...
Journal Article
Journal of High Energy Physics, ISSN 1126-6708, 2012, Volume 2012, Issue 5
Journal Article
8. Prompt and non-prompt $$J/\psi $$ J/ψ elliptic flow in Pb+Pb collisions at $$\sqrt{s_{_\text {NN}}} = 5.02$$ sNN=5.02 Tev with the ATLAS detector
The European Physical Journal C, ISSN 1434-6044, 9/2018, Volume 78, Issue 9, pp. 1 - 23
The elliptic flow of prompt and non-prompt $$J/\psi $$ J/ψ was measured in the dimuon decay channel in Pb+Pb collisions at $$\sqrt{s_{_\text {NN}}}=5.02$$...
Nuclear Physics, Heavy Ions, Hadrons | Measurement Science and Instrumentation | Nuclear Energy | Quantum Field Theories, String Theory | Physics | Elementary Particles, Quantum Field Theory | Astronomy, Astrophysics and Cosmology | PHYSICS OF ELEMENTARY PARTICLES AND FIELDS
Nuclear Physics, Heavy Ions, Hadrons | Measurement Science and Instrumentation | Nuclear Energy | Quantum Field Theories, String Theory | Physics | Elementary Particles, Quantum Field Theory | Astronomy, Astrophysics and Cosmology | PHYSICS OF ELEMENTARY PARTICLES AND FIELDS
Journal Article
9. Suppression of non-prompt J/psi, prompt J/psi, and Upsilon(1S) in PbPb collisions at root s(NN)=2.76 TeV
JOURNAL OF HIGH ENERGY PHYSICS, ISSN 1029-8479, 05/2012, Issue 5
Yields of prompt and non-prompt J/psi ,as well as Upsilon(1S) mesons, are measured by the CMS experiment via their mu(+)mu(-) decays in PbPb and pp collisions...
P(P)OVER-BAR COLLISIONS | CROSS-SECTIONS | PERSPECTIVE | MOMENTUM | ROOT-S=7 TEV | LHC | COLLABORATION | QUARK-GLUON PLASMA | PP COLLISIONS | NUCLEUS-NUCLEUS COLLISIONS | Heavy Ions | PHYSICS, PARTICLES & FIELDS
P(P)OVER-BAR COLLISIONS | CROSS-SECTIONS | PERSPECTIVE | MOMENTUM | ROOT-S=7 TEV | LHC | COLLABORATION | QUARK-GLUON PLASMA | PP COLLISIONS | NUCLEUS-NUCLEUS COLLISIONS | Heavy Ions | PHYSICS, PARTICLES & FIELDS
Journal Article |
ISSN: 1432-0916
Source: Springer Online Journal Archives 1860-2000
Topics: Mathematics , Physics
Notes: Abstract In ordinary quantum mechanics for finite systems, the time evolution induced by Hamiltonians of the form $$H = \frac{{P^2 }}{{2m}} + V(Q)$$ is studied from the point of view of *-automorphisms of the CCRC*-algebra $$\bar \Delta $$ (see Ref. [1, 2]). It is proved that those Hamiltonians do not induce *-automorphisms of this algebra in the cases: a) $$V \in \bar \Delta $$ and b)V εL ∞ (ℝ,dx) ∩L 1 (ℝ,dx), except when the potential is trivial.
Type of Medium: Electronic Resource
URL: Permalink
http://dx.doi.org/10.1007/BF01646197 |
In the theory of chemical reactions, it is often possible to isolate a small number or even a single degree of freedom in the system that can be used to characterize the reaction. This degree of freedom is coupled to other degrees of freedom (for example, reactions often take place in solution). Isomerization or dissociation of a diatomic molecule in solution is an excellent example of this type of system. The degree of freedom of paramount interest is the distance between the two atoms of the molecule - this is the degree of freedom whose detailed dynamics we would like to elucidate. The dynamics of the ``bath'' or environment to which is couples is less interesting, but still must be accounted for in some manner. A model that has maintained a certain level of both popularity and success is the so called ``harmonic bath'' model, in which the environment to which the special degree(s) of freedom couple is replaced by an effective set of harmonic oscillators. We will examine this model for the case of a single degree of freedom of interest, which we will designate \(\underline {q} \). For the case of the isomerizing or dissociating diatomic, \(\underline {q} \) could be the coordinate \(r - \langle r \rangle \), where \(r \) is the distance between the atoms. The particular definition of \(\underline {q} \) ensures that \(\langle q \rangle = 0 \). The degree of freedom \(\underline {q} \) is assumed to couple to the bath linearly, giving a Hamiltonian of the form
\[ H = {p^2 \over 2m} + \phi(q) + \sum_{\alpha}\left[{p_{\alpha}^2 \over 2m_{\alpha} } + {1 \over 2} m_{\alpha} \omega _{\alpha}^2 \left (x_{\alpha} + {g_{\alpha}\over m_{\alpha}\omega_{\alpha}^2}q\right)^2\right] \]
where the index \(\alpha \) runs over all the bath degrees of freedom, \(\underline {\omega _{\alpha} } \) are the harmonic bath frequencies, \(\underline {m_{\alpha } } \) are the harmonic bath masses, and \(\underline {g_{\alpha} } \) are the coupling constants between the bath and the coordinate \(\underline {q} \). \(\underline {p} \) is a momentum conjugate to \(\underline {q} \), and \(m\) is the mass associated with this degree of freedom (e.g., the reduced mass \(\underline {\mu} \) in the case of a diatomic). The coordinate \(\underline {q} \) is assumed to be subject to a potential \(\phi (q) \) as well (e.g., an internal bond potential). The form of the coupling between the system ( \(\underline {q} \) ) and the bath (\(\underline {x_{\alpha}} \)) is known as bilinear.
Below, using a completely classical treatment of this Hamiltonian, we will derive an equation for the detailed dynamics of \(\underline {q} \) alone. This equation is known as the generalized Langevin equation (GLE). |
OpenCV #004 Common Types of Noise Digital Image Processing using OpenCV (Python & C++) Highlights: We will give an overview of the most common types of noise that is present in images. We will show how we can generate these types of noise and add them to clean images. Then, we will show how we can filter these images using a simple median filter. In this post,we will assume that we “know” how the noise looks like in our experiments and then it will be easier for us to find an optimal way how to remove that noise. #YesFilter 🙂 Tutorial Overview: Noise generation in Python and C++ Adding noise to images Explore how we can remove noise and filter our image 1. Noise generation in Python and C++
Different kind of imaging systems might give us different noise. Here, we give an overview of three basic types of noise that are common in image processing applications:
Gaussian noise. Random noise Salt and Pepper noise (Impulse noise – only white pixels)
Before we start with the generation of noise in images, we will give a brief method of how we can generate random numbers from a Gaussian distribution or from a uniform distribution.
Python C++
#include <iostream>#include <opencv2/opencv.hpp>using namespace std;using namespace cv;int main() { // Let's start with a basic method to generate a random number using OpenCV. // For this we will use a cv::RNG method // the code below illustrates how we can obtain 2 random numbers from uniform distribution // this returns the random number generator cv::RNG rng = theRNG(); // two random numbers will be generated // from the uniform distribution [0,1]. float a = rng.uniform(0.f, 1.f); float b = rng.uniform(0.f, 1.f); // printing two random numbers std::cout << "a:" << a << std::endl << "b:" << b << std::endl; // in the similar manner we can get two random numbers from a normal distribution // Here we can just specify the sigma value (we have use a value 1) // the distribution will be zero centered. float a_g = rng.gaussian(1); std::cout << a_g << std::endl ;
For this code we get the following output:
// The outputs are here belowa=0.302828, b=0.699259
Gaussian noise.We may say that a Gaussian noise will be an independent identically distributed intensity level drawn from a Gaussian distribution. Note that here we use 1D Gaussian distribution. Commonly, it is determined with parameters \(\mu\) and \(\sigma\).
The following code will generate a Gaussian noise.
Python C++
// Another way to generate the random values form the same distribution is to use // functions randu and randn cv::Mat image = cv::imread("a11.jpg", IMREAD_GRAYSCALE); // Let's first create a zero image with the same dimensions of the loaded image cv::Mat gaussian_noise = cv::Mat::zeros (image.rows, image.cols, CV_8UC1); cv::imshow("All zero values", gaussian_noise); cv::waitKey(); // now, we can set the pixel values as a Gaussian noise // we have set a mean value to 128 and a standard deviation to 20 cv::randn(gaussian_noise, 128, 20); // Let's plot this image and see how it looks like cv::imshow("Gaussian noise", gaussian_noise); cv::waitKey(); cv::imwrite("Gaussian random noise.jpg", gaussian_noise);
This image is generated to have the same dimension as our test image.
In a similar way, we can create a
random uniform noise. Both in Python and C++ the difference will actually be in just one letter within a command (so easy to figure that out!). Python C++
// In a similar manner we can create an image whose pixel values have // random values drawn from an uniform distribution cv::Mat uniform_noise = cv::Mat::zeros (image.rows, image.cols, CV_8UC1); cv::randu(uniform_noise, 0, 255); cv::imshow("Uniform random noise", uniform_noise ); cv::waitKey(); cv::imwrite("Uniform random noise.jpg", uniform_noise);
As we can see the uniform random noise is very similar to a Gaussian noise. Here, the pixel values were set from 0 to 255.
Next, we have to see how we can generate an impulse noise. This will be a black image with random white pixels. There are many ways to implement such an image. For instance, we can actually post-process a “uniform_noise” image. We can simply set a threshold value (binary thresholding) and convert an image into a set of black and white pixels. All pixels below a threshold (in our case 250 ) will become black (0), and those above this value will become white (255). By varying the values of a threshold we will get more or less white pixels (more or less noise).
Python C++
cv::Mat impulse_noise = uniform_noise.clone(); // here a number 250 is defined as a threshold value // Obviously, if we want to increase a number of white pixels // we will need to decrease it. // Otherwise, we can increase it and in that way we will suppress the // number of white pixles. cv::threshold(uniform_noise, impulse_noise, 250, 255, CV_8UC1); cv::imshow("Impulse_noise", impulse_noise); cv::waitKey(); cv::imwrite("Impulse_noise.jpg", impulse_noise);
2. Adding Noise to Images
If images are
just functions, then we can add two images similarly like we can add two functions. Simply, every pixel value will be summed with the corresponding pixel value that has the same coordinates. Of course, the images need to have the same dimensions. In case that you are a science geek and you miss some formulas in this post, we can write the following one just for you 🙂
\(\vec{I(x, y)’} = \vec{I(x, y)} + \vec{n(x, y)}\)
Python C++
cv::Mat noisy_image=image.clone(); // note that we can simply sum two Mat objects, that is, two images. // in order not to degrade the image quality too much // we will multipliy the gaussian_noise with 0.5. // in this way the effect of noise will be reduced noisy_image = image + gaussian_noise*0.5; cv::imshow("Noisy_image - Gaussian noise", noisy_image); cv::waitKey(); cv::Mat noisy_image1=image.clone(); noisy_image1 = image + uniform_noise*0.2; cv::imshow("Noisy_image - Uniform noise", noisy_image1); cv::waitKey(); cv::Mat noisy_image2=image.clone(); // similarly for a uniform noise, we will use even a lower factor of 0.2 noisy_image2 = image + impulse_noise*0.5; cv::imshow("Noisy_image - Impulse noise", noisy_image2); cv::waitKey();
3. Median Filter
Next, we are going to present a median filter and basic image processing. Do you know what a median operator/function does? Yes, you are right! It is that simple. A median filter just scrolls across the image, and for all the elements that are overlapping with the filter, position outputs the median element.
Let’s have a look at the illustration of a 2D median filter. Imagine that those are the pixel values in the image as shown in the Figure below. This means that the filter is centered at the value of 90. In this case, we use a 3 x 3 filter size, so all nine values we will sort in the ascending order. The median value is 27 and, that is the output value for this location of the filter. In this case, a value of 90 (an extreme value in this example), will be replaced with a number 27.
This type of filter can be relatively successful for both uniform and Gaussian random noise. However, it can be very effective for impulse (salt and pepper) noise! The reason is that for pixels values that are much lower or higher than the mean values, the median operator will work just fine to suppress them.
Python C++
cv::medianBlur(noisy_image, noisy_image, 3); cv::imshow("Gaussian random noise removed", noisy_image); cv::waitKey(); cv::medianBlur(noisy_image1, noisy_image1, 3); cv::imshow("Uniform random noise removed", noisy_image1); cv::waitKey(); cv::medianBlur(noisy_image2, noisy_image2, 3); cv::imshow("Impulse random noise removed", noisy_image2); cv::waitKey(); cv::destroyAllWindows(); return 0;}
It is good to note, that the median filter is actually an example of a non-linear filter. This is in contrast with the majority of other filters that can be applied using a convolution operation. In addition, a median filter is also sometimes referred to as an edge-preserving filter.
Summary
In this blog post, we gave an overview of the most common types of noise that are present in images. How to generate these types of noise, add them to the image and clean the image using a simple median filter. In the next post, we will talk more about
. blur/mean/box and Gaussian 2D image filters More resources on the topic:
For more resources about common types of noise and filter, check these other sites. |
Main Page See also Wikipedia:Introduction, Wikipedia:Manual of Style, Wikipedia:Tutorial, Help:Editing, and Help:Starting a new page Contents 1 Introduction 2 Minor edits 3 Major edits 4 Wiki markup 5 More information on editing wiki pages Introduction
Editing most Wikipedia pages is not difficult. Simply click on the "
edit this page" tab at the top of a Wikipedia page (or on a section-edit link). This will bring you to a new page with a text box containing the editable text of the original page. You should write a short edit summary in the small field below the edit-box. You may use shorthand to describe your changes, as described in the legend, and when you see the difference between the page with your edits and the previous version of the page by pressing the " Show changes" button. If you're satisfied with what you see, be bold and press the " Save page" button. Your changes will immediately be visible to all Wikipedia users.
You can also click on the "
Discussion" tab to see the corresponding talk page, which contains comments about the page from other Wikipedia users. Click on the " +" tab to add a new section, or edit the page in the same way as an article page.
You should also remember to sign your messages on talk pages and some special-purpose project pages, but you should
not sign edits you make to regular articles. In page histories, the MediaWiki software keeps track of which user makes each change. Minor edits
Template:FurtherA check to the "minor edit" box signifies that only superficial differences exist between the version with your edit and the previous version: typo corrections, formatting and presentational changes, rearranging of text without modifying content, etc. A
minor edit is a version that the editor believes requires no review and could never be the subject of a dispute. The "minor edit" option is one of several Wikipedia:Why create an account?#New_editing_options Major edits
All editors are encouraged to be bold, but there are several things that a user can do to ensure that major edits are performed smoothly. Before engaging in a major edit, a user should consider discussing proposed changes on the article discussion/talk page. During the edit, if doing so over an extended period of time, the {{inuse}} tag can reduce the likelihood of an edit conflict. Once the edit has been completed, the inclusion of an edit summary will assist in documenting the changes. These steps will all help to ensure that major edits are well received by the Wikipedia community.
A major edit should be reviewed to confirm that it is consensual to all concerned editors. Therefore, any change that affects the
meaning of an article is major (not minor), even if the edit is a single word.
There are no necessary terms to which you have to agree when doing major edits, but the recommendations above have become best practice. If you do it your own way, the likelihood of your edits being re-edited may be higher.
Wiki markup Links and URLs
What it looks like What you type
London has public transport.
London has [[public transport]].
San Francisco also has public transportation.
San Francisco also has [[public transport| public transportation]].
San Francisco also has public transportation.
San Francisco also has [[public transport]]ation. Examples include [[bus]]es, [[taxicab]]s, and [[streetcar]]s. a [[micro]]<nowiki>second </nowiki>
See the Wikipedia:Manual of Style.
See the [[Wikipedia:Manual of Style]].
Wikipedia:Manual of Style#Italics is a link to a section within another page.
#Links and URLs is a link to another section on the current page.
Italics is a piped link to a section within another page.
[[Wikipedia:Manual of Style#Italics]] is a link to a section within another page. [[#Links and URLs]] is a link to another section on the current page. [[Wikipedia:Manual of Style#Italics|Italics]] is a piped link to a section within another page.
Automatically hide stuff in parentheses: kingdom.
Automatically hide namespace: Village Pump.
Or both: Manual of Style
But not: [[Wikipedia:Manual of Style#Links|]]
Automatically hide stuff in parentheses: [[kingdom (biology)|]]. Automatically hide namespace: [[Wikipedia:Village Pump|]]. Or both: [[Wikipedia: Manual of Style (headings)|]] But not: [[Wikipedia: Manual of Style#Links|]]
National sarcasm society is a page that does not exist yet.
[[National sarcasm society]] is a page that does not exist yet.
Wikipedia:How to edit a page is a link to this page.
[[Wikipedia:How to edit a page]] is a link to this page.
The character
Adding three tildes (~~~) will add just your user name:
and adding five tildes (~~~~~) gives the date/time alone:
The character '''tilde''' (~) is used when adding a comment to a Talk page. You should sign your comment by appending four tildes (~~~~) to the comment so as to add your user name plus date/time: : ~~~~ Adding three tildes (~~~) will add just your user name: : ~~~ and adding five tildes (~~~~~) gives the date/time alone: : ~~~~~ #REDIRECT [[United States]] #REDIRECT [[United States#History|United States History]] will redirect to the [[United States]] page, to the History section if it exists
For example in the article on Plankton, which is available on a lot of other wikis, the interlanguage links would look like so:
'''What links here''' and '''Related changes''' pages can be linked as: [[Special:Whatlinkshere/ Wikipedia:How to edit a page]] and [[Special:Recentchangeslinked/ Wikipedia:How to edit a page]] A user's '''Contributions''' page can be linked as: [[Special:Contributions/UserName]] or [[Special:Contributions/192.0.2.0]] [[Category:Character sets]] [[:Category:Character sets]]
Three ways to link to external (non-wiki) sources:
Three ways to link to external (non-wiki) sources: # Bare URL: http://en.wikipedia.org/ (bad style) # Unnamed link: [http://en.wikipedia.org/] (only used within article body for footnotes) # Named link: [http://en.wikipedia.org Wikipedia]
Linking to other wikis:
Linking to another language's wiktionary:
Linking to other wikis: # [[Interwiki]] link: [[Wiktionary:Hello]] # Interwiki link without prefix: [[Wiktionary:Hello|]] # Named interwiki link: [[Wiktionary:Hello| Wiktionary definition of 'Hello']] Linking to another language's wiktionary: # [[Wiktionary:fr:bonjour]] # [[Wiktionary:fr:bonjour|bonjour]] # [[Wiktionary:fr:bonjour|]]
ISBN 012345678X
ISBN 0-12-345678-X
ISBN 012345678X ISBN 0-12-345678-X
Text mentioning RFC 4321 anywhere
Text mentioning RFC 4321 anywhere
Date formats:
Date formats: # [[July 20]], [[1969]] # [[20 July]] [[1969]] # [[1969]]-[[07-20]] # [[1969-07-20]] Special [[WP:AO|as-of]] links like [[As of 2006|this year]] needing future maintenance
Some uploaded sounds are listed at Wikipedia:Sound.
[[media:Classical guitar scale.ogg|Sound]] Images
What it looks like What you type A picture: A picture: [[Image:wiki.png]] With alternative text: With alternative text: [[Image:wiki.png|Wikipedia, The Free Encyclopedia.]] Floating to the right side of the page using the frame attribute and a caption: Floating to the right side of the page using the ''frame'' attribute and a caption: [[Image:wiki.png|frame|Wikipedia Encyclopedia]] Floating to the right side of the page using the thumb attribute and a caption: Floating to the right side of the page using the ''thumb'' attribute and a caption: [[Image:wiki.png|thumb|Wikipedia Encyclopedia]] Floating to the right side of the page without a caption: Floating to the right side of the page ''without'' a caption: [[Image:wiki.png|right|Wikipedia Encyclopedia]] A picture resized to 30 pixels... A picture resized to 30 pixels... [[Image:wiki.png|30 px]] Linking directly to the description page of an image: Linking directly to the description page of an image: [[:Image:wiki.png]]
(such as any of the ones above) also leads to the description page
Linking directly to an image without displaying it: Linking directly to an image without displaying it: [[:media:wiki.png|Image of the jigsaw globe logo]] Using the div tag to separate images from text (note that this may allow images to cover text): Example: <div style="display:inline; width:220px; float:right;"> Place images here </div> Using wiki markup to make a table in which to place a vertical column of images (this helps edit links match headers, especially in Firefox browsers): Example: {| align=right |- | Place images here |}
See the Wikipedia's image use policy as a guideline used on Wikipedia.
For further help on images, including some more versatile abilities, see the topic on Extended image syntax.
Character formatting
What it looks like What you type ''Italicized text'' '''Bold text''' '''''Italicized & Bold text'''''
A typewriter font for
A typewriter font for <tt>monospace text</tt> or for computer code: <code>int main()</code> Create codeblocks
that are printed as entered
Use <code><pre> Block of Code </pre></code> around the block of code. * The <pre> tags within the codeblock will create formatting issues - to solve, display the tags literally with <pre> and </pre>
You can use small text for captions.
You can use <small>small text</small> for captions.
Better stay away from big text, unless it's within small text.
Better stay away from <big>big text</big>, unless <small> it's <big>within</big> small</small> text.
You can
You can also mark
You can <s>strike out deleted material</s> and <u>underline new material</u>. You can also mark <del>deleted material</del> and <ins>inserted material</ins> using logical markup. For backwards compatibility better combine this potentially ignored new <del>logical</del> with the old <s><del>physical</del></s> markup. <nowiki>Link → (''to'') the [[Wikipedia FAQ]]</nowiki> <!-- comment here --> À Á Â Ã Ä Å Æ Ç È É Ê Ë Ì Í Î Ï Ñ Ò Ó Ô Õ Ö Ø Ù Ú Û Ü ß à á â ã ä å æ ç è é ê ë ì í î ï ñ ò ó ô œ õ ö ø ù ú û ü ÿ ¿ ¡ § ¶ † ‡ • – — ‹ › « » ‘ ’ “ ” ™ © ® ¢ € ¥ £ ¤
ε
x<sub>1</sub> x<sub>2</sub> x<sub>3</sub> or <br/> x₀ x₁ x₂ x₃ x₄ <br/> x₅ x₆ x₇ x₈ x₉ x<sup>1</sup> x<sup>2</sup> x<sup>3</sup> or <br/> x⁰ x¹ x² x³ x⁴ <br/> x⁵ x⁶ x⁷ x⁸ x⁹ ε<sub>0</sub> = 8.85 × 10<sup>−12</sup> C² / J m. 1 [[hectare]] = [[1 E4 m²]] α β γ δ ε ζ η θ ι κ λ μ ν ξ ο π ρ σ ς τ υ φ χ ψ ω Γ Δ Θ Λ Ξ Π Σ Φ Ψ Ω ∫ ∑ ∏ √ − ± ∞ ≈ ∝ ≡ ≠ ≤ ≥ × · ÷ ∂ ′ ″ ∇ ‰ ° ∴ ℵ ø ∈ ∉ ∩ ∪ ⊂ ⊃ ⊆ ⊇ ¬ ∧ ∨ ∃ ∀ ⇒ ⇐ ⇓ ⇑ ⇔ → ↓ ↑ ← ↔
<math>\,\! \sin x + \ln y</math>
<math>\mathbf{x} = 0</math>
Ordinary text should use wiki markup for emphasis, and should not use
<math>\,\! \sin x + \ln y</math> sin''x'' + ln''y'' <math>\mathbf{x} = 0</math> '''x''' = 0 Obviously, ''x''² ≥ 0 is true when ''x'' is a real number. : <math>\sum_{n=0}^\infty \frac{x^n}{n!}</math> (see also: Chess symbols in Unicode) No or limited formatting—showing exactly what is being typed
A few different kinds of formatting will tell the Wiki to display things as you typed them—what you see, is what you get!
What it looks like What you type <nowiki> tags
The nowiki tag ignores [[Wiki]] ''markup''. It reformats text by removing newlines and multiple spaces. It still interprets special characters: →
<nowiki> The nowiki tag ignores [[Wiki]] ''markup''. It reformats text by removing newlines and multiple spaces. It still interprets special characters: → </nowiki> <pre> tags The pre tag ignores [[Wiki]] ''markup''. It also doesn't reformat text. It still interprets special characters: → <pre> The pre tag ignores [[Wiki]] ''markup''. It also doesn't reformat text. It still interprets special characters: → </pre> Leading spaces
Leading spaces are another way to preserve formatting.
Putting a space at the beginning of each line stops the text from being reformatted. It still interprets Wiki Leading spaces are another way to preserve formatting. Putting a space at the beginning of each line stops the text from being reformatted. It still interprets [[Wiki]] ''markup'' and special characters: → Invisible text (comments)
Template:Main It's uncommon, but on occasion acceptable, to add a hidden comment within the text of an article. The format is this:
<!--- This is an example of text that won't normally be visible except in "edit" mode. ---> Table of contents
At the current status of the wiki markup language, having at least four headers on a page triggers the table of contents (TOC) to appear in front of the first header (or after introductory sections). Putting __TOC__ anywhere forces the TOC to appear at that point (instead of just before the first header). Putting __NOTOC__ anywhere forces the TOC to disappear. See also compact TOC for alphabet and year headings.
Tables
There are two ways to build tables:
in special Wiki-markup (see Help:Table) with the usual HTML elements: <table>, <tr>, <td> or <th>.
For the latter, and a discussion on when tables are appropriate, see Wikipedia:When to use tables.
Variables (See also Help:Variable)
Code Effect {{CURRENTWEEK}} 39 {{CURRENTDOW}} 1 {{CURRENTMONTH}} 09 {{CURRENTMONTHNAME}} September {{CURRENTMONTHNAMEGEN}} September {{CURRENTDAY}} 23 {{CURRENTDAYNAME}} Monday {{CURRENTYEAR}} 2019 {{CURRENTTIME}} 00:46 {{NUMBEROFARTICLES}} 57 {{NUMBEROFUSERS}} 15,190 {{PAGENAME}} Main Page {{NAMESPACE}} {{REVISIONID}} 17 {{localurl:pagename}} /index.php?title=Pagename {{localurl: Wikipedia:Sandbox|action=edit}} http://en.wikipedia.org/wiki/Sandbox?action=edit {{fullurl:pagename}} http://wiki.apple2.org/index.php?title=Pagename {{fullurl:pagename| query_string}} http://wiki.apple2.org/index.php?title=Pagename&query_string {{SERVER}} http://wiki.apple2.org {{ns:1}} Talk {{ns:2}} User {{ns:3}} User talk {{ns:4}} A2WebRef {{ns:5}} A2WebRef talk {{ns:6}} File {{ns:7}} File talk {{ns:8}} MediaWiki {{ns:9}} MediaWiki talk {{ns:10}} Template {{ns:11}} Template talk {{ns:12}} Help {{ns:13}} Help talk {{ns:14}} Category {{ns:15}} Category talk {{SITENAME}} wiki.apple2.org NUMBEROFARTICLES is the number of pages in the main namespace which contain a link and are not a redirect, in other words number of articles, stubs containing a link, and disambiguation pages. CURRENTMONTHNAMEGEN is the genitive (possessive) grammatical form of the month name, as used in some languages; CURRENTMONTHNAME is the nominative (subject) form, as usually seen in English.
In languages where it makes a difference, you can use constructs like {{grammar:case|word}} to convert a word from the nominative case to some other case. For example, {{grammar:genitive|{{CURRENTMONTHNAME}}}} means the same as {{CURRENTMONTHNAMEGEN}}.
Templates
The MediaWiki software used by Wikipedia has support for templates. This means standardized text chunks (such as boilerplate text) can be inserted into articles. For example, typing Template:Tl will appear as "
This article is a stub. You can help Wikipedia by expanding it." when the page is saved. See Wikipedia:Template messages for the complete list. Other commonly used templates are: Template:Tl for disambiguation pages and Template:Tl like an article stub but for a section. There are many subject-specific stubs for example: Template:Tl, Template:Tl, and Template:Tl. For a complete list of stubs see Wikipedia:WikiProject Stub sorting/Stub types. More information on editing wiki pages
You may also want to learn about:
How to start a page Informal tips on contributing to Wikipedia Editing tasks in general at the Wikipedia:Editing FAQ Wikipedia:Cheatsheet Rename pages boldly, at Wikipedia:How to rename (move) a page Preferred layout of your article, at Guide to layout Style conventions in the Wikipedia:Manual of Style An article with annotations pointing out common Wikipedia style and layout issues, at Wikipedia:Annotated article General policies in Wikipedia:Policies and guidelines Wikipedia:Naming conventions for how to name articles themselves Help on editing very large articles If you are making an article about something that belongs to a group of objects (a city, an astronomical object, a Chinese character...) check if there is a WikiProject on the group and try to follow its directions explicitly. Wikipedia:Namespace |
Consider a $C^k$, $k\ge 2$, Lorentzian manifold $(M,g)$ and let $\Box$ be the usual wave operator $\nabla^a\nabla_a$. Given $p\in M$, $s\in\Bbb R,$ and $v\in T_pM$, can we find a neighborhood $U$ of $p$ and $u\in C^k(U)$ such that $\Box u=0$, $u(p)=s$ and $\mathrm{grad}\, u(p)=v$?
The tog is a measure of thermal resistance of a unit area, also known as thermal insulance. It is commonly used in the textile industry and often seen quoted on, for example, duvets and carpet underlay.The Shirley Institute in Manchester, England developed the tog as an easy-to-follow alternative to the SI unit of m2K/W. The name comes from the informal word "togs" for clothing which itself was probably derived from the word toga, a Roman garment.The basic unit of insulation coefficient is the RSI, (1 m2K/W). 1 tog = 0.1 RSI. There is also a clo clothing unit equivalent to 0.155 RSI or 1.55 tog...
The stone or stone weight (abbreviation: st.) is an English and imperial unit of mass now equal to 14 pounds (6.35029318 kg).England and other Germanic-speaking countries of northern Europe formerly used various standardised "stones" for trade, with their values ranging from about 5 to 40 local pounds (roughly 3 to 15 kg) depending on the location and objects weighed. The United Kingdom's imperial system adopted the wool stone of 14 pounds in 1835. With the advent of metrication, Europe's various "stones" were superseded by or adapted to the kilogram from the mid-19th century on. The stone continues...
Can you tell me why this question deserves to be negative?I tried to find faults and I couldn't: I did some research, I did all the calculations I could, and I think it is clear enough . I had deleted it and was going to abandon the site but then I decided to learn what is wrong and see if I ca...
I am a bit confused in classical physics's angular momentum. For a orbital motion of a point mass: if we pick a new coordinate (that doesn't move w.r.t. the old coordinate), angular momentum should be still conserved, right? (I calculated a quite absurd result - it is no longer conserved (an additional term that varies with time )
in new coordinnate: $\vec {L'}=\vec{r'} \times \vec{p'}$
$=(\vec{R}+\vec{r}) \times \vec{p}$
$=\vec{R} \times \vec{p} + \vec L$
where the 1st term varies with time. (where R is the shift of coordinate, since R is constant, and p sort of rotating.)
would anyone kind enough to shed some light on this for me?
From what we discussed, your literary taste seems to be classical/conventional in nature. That book is inherently unconventional in nature; it's not supposed to be read as a novel, it's supposed to be read as an encyclopedia
@BalarkaSen Dare I say it, my literary taste continues to change as I have kept on reading :-)
One book that I finished reading today, The Sense of An Ending (different from the movie with the same title) is far from anything I would've been able to read, even, two years ago, but I absolutely loved it.
I've just started watching the Fall - it seems good so far (after 1 episode)... I'm with @JohnRennie on the Sherlock Holmes books and would add that the most recent TV episodes were appalling. I've been told to read Agatha Christy but haven't got round to it yet
?Is it possible to make a time machine ever? Please give an easy answer,a simple one A simple answer, but a possibly wrong one, is to say that a time machine is not possible. Currently, we don't have either the technology to build one, nor a definite, proven (or generally accepted) idea of how we could build one. — Countto1047 secs ago
@vzn if it's a romantic novel, which it looks like, it's probably not for me - I'm getting to be more and more fussy about books and have a ridiculously long list to read as it is. I'm going to counter that one by suggesting Ann Leckie's Ancillary Justice series
Although if you like epic fantasy, Malazan book of the Fallen is fantastic
@Mithrandir24601 lol it has some love story but its written by a guy so cant be a romantic novel... besides what decent stories dont involve love interests anyway :P ... was just reading his blog, they are gonna do a movie of one of his books with kate winslet, cant beat that right? :P variety.com/2016/film/news/…
@vzn "he falls in love with Daley Cross, an angelic voice in need of a song." I think that counts :P It's not that I don't like it, it's just that authors very rarely do anywhere near a decent job of it. If it's a major part of the plot, it's often either eyeroll worthy and cringy or boring and predictable with OK writing. A notable exception is Stephen Erikson
@vzn depends exactly what you mean by 'love story component', but often yeah... It's not always so bad in sci-fi and fantasy where it's not in the focus so much and just evolves in a reasonable, if predictable way with the storyline, although it depends on what you read (e.g. Brent Weeks, Brandon Sanderson). Of course Patrick Rothfuss completely inverts this trope :) and Lev Grossman is a study on how to do character development and totally destroys typical romance plots
@Slereah The idea is to pick some spacelike hypersurface $\Sigma$ containing $p$. Now specifying $u(p)$ is trivial because the wave equation is invariant under constant perturbations. So that's whatever. But I can specify $\nabla u(p)|\Sigma$ by specifying $u(\cdot, 0)$ and differentiate along the surface. For the Cauchy theorems I can also specify $u_t(\cdot,0)$.
Now take the neigborhood to be $\approx (-\epsilon,\epsilon)\times\Sigma$ and then split the metric like $-dt^2+h$
Do forwards and backwards Cauchy solutions, then check that the derivatives match on the interface $\{0\}\times\Sigma$
Why is it that you can only cool down a substance so far before the energy goes into changing it's state? I assume it has something to do with the distance between molecules meaning that intermolecular interactions have less energy in them than making the distance between them even smaller, but why does it create these bonds instead of making the distance smaller / just reducing the temperature more?
Thanks @CooperCape but this leads me another question I forgot ages ago
If you have an electron cloud, is the electric field from that electron just some sort of averaged field from some centre of amplitude or is it a superposition of fields each coming from some point in the cloud? |
Strophoid
A third-order plane algebraic curve whose equation takes the form
$$y^2=x^2\frac{d+x}{d-x}$$
in Cartesian coordinates, and
$$\rho=-d\frac{\cos2\phi}{\cos\phi}$$
in polar coordinates. The coordinate origin is a node with tangents $y=\pm x$ (see Fig.). The asymptote is $x=d$. The area of the loop is
$$S=2d^2-\frac{1}{2\pi d^2}.$$
The area between the curve and the asymptote is
$$S_2=2d^2+\frac{1}{2\pi d^2}.$$
A strophoid is related to the so-called cusps (cf. Cusp).
Figure: s090630a
References
[1] A.A. Savelov, "Planar curves" , Moscow (1960) (In Russian) [2] A.S. Smogorzhevskii, E.S. Stolova, "Handbook of the theory of planar curves of the third order" , Moscow (1961) (In Russian) Comments References
[a1] F. Gomes Teixeira, "Traité des courbes" , 1–3 , Chelsea, reprint (1971) [a2] J.D. Lawrence, "A catalog of special planar curves" , Dover, reprint (1972) How to Cite This Entry:
Strophoid.
Encyclopedia of Mathematics.URL: http://www.encyclopediaofmath.org/index.php?title=Strophoid&oldid=34459 |
I'm working on the following problem for my PDE's class as study for a test.
Sketch the Fourier series of $$ f(x) = 2x^2$$ On the interval of $[-1,1]$.
My professors answer key states that the Fourier series for this function is $2x^2$ repeated over the interval $[-1,1]$ (I'd post the graph, but I'm not sure how.)
My understanding of the Fourier series is that for any function $f(x)$ is given by
$$ A_0 + \sum_{n=1}^\infty A_n \cos\bigg(\frac{n \pi x}{L} \bigg ) + \sum_{n=1}^\infty B_n \sin\bigg(\frac{n \pi x}{L} \bigg ) \tag{1}\label{1} $$
Where $$ A_0 = \frac{1}{2L} \int_{-L}^L f(x) dx$$ $$ A_n = \frac{1}{L} \int_{-L}^L f(x) \cos\bigg(\frac{n \pi x}{L} \bigg )dx$$ $$ B_n = \frac{1}{L} \int_{-L}^L f(x) \sin\bigg(\frac{n \pi x}{L} \bigg )dx$$
There are easier techniques to graphing the Fourier series of some function, but in attempting to solve this problem the hard way I find that
$$ A_0 = \frac{2}{3} $$ $$ A_n = \frac{8}{(n \pi)^2} (-1)^n $$ $$ B_n = 0 $$
When you plug these coefficients into $(\ref{1})$, you do not get $2x^2$ as my professor states.
I've worked this problem three times now and cannot seem to find my mistake. So my question, what are the correct coefficients for $f(x)$ along the interval $[-1,1]$? |
I'm trying to understand BRST complex in its Lagrangian incarnation i.e. in the form mostly closed to original Faddeev-Popov formulation. It looks like the most important part of that construction (proof of vanishing of higher cohomology groups) is very hard to find in the literature, at least I was not able to do so. Let me formulate couple of questions on BRST, but in the form of exercises on Lie algebra cohomology.
Let $X$ be a smooth affine variety, and $g$ is a (reductive?) Lie algebra acting on $X$, I think we assume $g$ to be at least unimodular, otherwise BRST construction won't work, and also assume that map $g \to T_X$ is injective. In physics language this is closed and irreducible action of a Lie algebra of a gauge group of the space of fields $X$. Structure sheaf $\mathcal{O}_X$ is a module over $g$, and I could form Chevalley-Eilenberg complex with coefficients in this module$$C=\wedge g^* \otimes \mathcal{O}_X.$$
The ultimate goal if BRST construction is to provide "free model" of algebra of invarinats $\mathcal{O}_X^g$, it is nor clear what is "free model", but I think BRST construction is just Tate's procedure of killing cycles for Chevalley-Eilenberg complex above (Tate's construction works for any dg algebra, and $C$ is a dg algebra).
My first question is what exactly are cohomology of the complex $C$? In other words before killing cohomology I'd like to understand what exactly have to be killed. For me it looks like a classical question on Lie algebra cohomology and, perhaps, it was discussed in the literature 60 years ago.
It is not necessary to calculate these cohomology groups and then follow Tate's approach to construct complete BSRT complex (complete means I added anti-ghosts and lagrange multipliers to $C$ and modified the differential), but even if I start with BRST complex$$C_{BRST}=(\mathcal{O}_X \otimes \wedge (g \oplus g^*) \otimes S(g), d_{BRST}=d_{CE}+d_1),$$where I could find a proof that all higher cohomology vanishes? This post imported from StackExchange MathOverflow at 2014-08-24 09:17 (UCT), posted by SE-user Sasha Pavlov |
Alice is a soccer coach who occasionally bring her soccer team to explore Caveland (that can be modeled as an undirected unweighted connected graph) for special event, e.g. for initiation ceremony, to celebrate birthdays, etc. Caveland has $N$ junctions and $M$ tunnels.
Caveland is quite prone to flooding, but that does not stop Alice and her soccer team from doing what they enjoy. You are Bob, Alice’s good friend. You want to ensure Alice and her soccer team are as safe as possible by letting her know which junction(s) is/are safe(r) than the rest. You decide that a junction $v$ is considered to be safe(r) if
any one tunnel is flooded, Alice and her soccer team can still go out from junction $v$ to the entrance of Caveland (which is always junction $0$) via a non-flooded path.
For this example, junctions $\{ 0, 1, 2, 3\} $ are considered safe(r). If tunnel $0–2$ is flooded for example, Alice and her soccer team can detour via path $2 \rightarrow 3 \rightarrow 1 \rightarrow 0$ to reach Caveland entrance. However junctions $\{ 4, 5, 6, 7, 8\} $ are quite dangerous. If tunnel $2–8$ (or tunnel $3–4$) is flooded, Alice and her soccer team will be trapped (cannot reach safety/junction $0$).
The first line of input consists of $2$ integers: $N$ and $M$. ($2 \leq N \leq 10\, 000$, $1 \leq M \leq min(N(N-1)/2, 100\, 000)$). The next $M$ lines contains $M$ pairs of integers $u$ and $v$ that describe $0$-based indices of the two junctions that are connected with a tunnel in Caveland ($0 \leq u, v < N$, and $u \neq v$). No two junctions are directly connected with more than one tunnel. You are guaranteed that junction $0$ can reach all the other $N-1$ junctions (if there is no flood).
Print an integer in one line: The total number of junction(s) in Caveland that is/are safe(r) for Alice and her soccer team to explore. The actual junction numbers are not needed.
Sample Input 1 Sample Output 1 9 10 0 1 0 2 1 3 2 3 2 8 3 4 4 5 4 6 5 7 6 7 4 |
In
scattering theory, a part of mathematical physics, the Dyson series, formulated by Freeman Dyson, is a perturbative series where each term is represented by Feynman diagrams. This series diverges asymptotically, but in quantum electrodynamics (QED) at the second order the difference from experimental data is in the order of 10 −10. This close agreement holds because the coupling constant (also known as the fine structure constant) of QED is much less than 1. Notice that in this article Planck units are used, so that ħ = 1 (where ħ is the reduced Planck constant). The Dyson operator [ edit ]
Suppose that we have a
Hamiltonian H, which we split into a free part H 0 and an interacting part V, i.e. H = H 0 + V.
We will work in the
interaction picture here and assume units such that the reduced Planck constant ħ is 1.
In the interaction picture, the
evolution operator U defined by the equation Ψ ( t ) = U ( t , t 0 ) Ψ ( t 0 ) {\displaystyle \Psi (t)=U(t,t_{0})\Psi (t_{0})}
is called the
Dyson operator.
We have
U ( t , t ) = I , {\displaystyle U(t,t)=I,} U ( t , t 0 ) = U ( t , t 1 ) U ( t 1 , t 0 ) , {\displaystyle U(t,t_{0})=U(t,t_{1})U(t_{1},t_{0}),} U − 1 ( t , t 0 ) = U ( t 0 , t ) , {\displaystyle U^{-1}(t,t_{0})=U(t_{0},t),}
and hence the
Tomonaga–Schwinger equation, i d d t U ( t , t 0 ) Ψ ( t 0 ) = V ( t ) U ( t , t 0 ) Ψ ( t 0 ) . {\displaystyle i{\frac {d}{dt}}U(t,t_{0})\Psi (t_{0})=V(t)U(t,t_{0})\Psi (t_{0}).}
Consequently,
U ( t , t 0 ) = 1 − i ∫ t 0 t d t 1 V ( t 1 ) U ( t 1 , t 0 ) . {\displaystyle U(t,t_{0})=1-i\int _{t_{0}}^{t}{dt_{1}\ V(t_{1})U(t_{1},t_{0})}.} Derivation of the Dyson series [ edit ]
This leads to the following
Neumann series: U ( t , t 0 ) = 1 − i ∫ t 0 t d t 1 V ( t 1 ) + ( − i ) 2 ∫ t 0 t d t 1 ∫ t 0 t 1 d t 2 V ( t 1 ) V ( t 2 ) + ⋯ + ( − i ) n ∫ t 0 t d t 1 ∫ t 0 t 1 d t 2 ⋯ ∫ t 0 t n − 1 d t n V ( t 1 ) V ( t 2 ) ⋯ V ( t n ) + ⋯ . {\displaystyle {\begin{array}{lcl}U(t,t_{0})&=&1-i\int _{t_{0}}^{t}{dt_{1}V(t_{1})}+(-i)^{2}\int _{t_{0}}^{t}{dt_{1}\int _{t_{0}}^{t_{1}}{dt_{2}V(t_{1})V(t_{2})}}+\cdots \\&&{}+(-i)^{n}\int _{t_{0}}^{t}{dt_{1}\int _{t_{0}}^{t_{1}}{dt_{2}\cdots \int _{t_{0}}^{t_{n-1}}{dt_{n}V(t_{1})V(t_{2})\cdots V(t_{n})}}}+\cdots .\end{array}}}
Here we have
, so we can say that the fields are t 1 > t 2 > ⋯ > t n {\displaystyle t_{1}>t_{2}>\cdots >t_{n}} time-ordered, and it is useful to introduce an operator called T {\displaystyle {\mathcal {T}}} , defining time-ordering operator U n ( t , t 0 ) = ( − i ) n ∫ t 0 t d t 1 ∫ t 0 t 1 d t 2 ⋯ ∫ t 0 t n − 1 d t n T V ( t 1 ) V ( t 2 ) ⋯ V ( t n ) . {\displaystyle U_{n}(t,t_{0})=(-i)^{n}\int _{t_{0}}^{t}{dt_{1}\int _{t_{0}}^{t_{1}}{dt_{2}\cdots \int _{t_{0}}^{t_{n-1}}{dt_{n}{\mathcal {T}}V(t_{1})V(t_{2})\cdots V(t_{n})}}}.}
We can now try to make this integration simpler. In fact, by the following example:
S n = ∫ t 0 t d t 1 ∫ t 0 t 1 d t 2 ⋯ ∫ t 0 t n − 1 d t n K ( t 1 , t 2 , … , t n ) . {\displaystyle S_{n}=\int _{t_{0}}^{t}{dt_{1}\int _{t_{0}}^{t_{1}}{dt_{2}\cdots \int _{t_{0}}^{t_{n-1}}{dt_{n}K(t_{1},t_{2},\dots ,t_{n})}}}.}
Assume that
K is symmetric in its arguments and define (look at integration limits): I n = ∫ t 0 t d t 1 ∫ t 0 t d t 2 ⋯ ∫ t 0 t d t n K ( t 1 , t 2 , … , t n ) . {\displaystyle I_{n}=\int _{t_{0}}^{t}{dt_{1}\int _{t_{0}}^{t}{dt_{2}\cdots \int _{t_{0}}^{t}{dt_{n}K(t_{1},t_{2},\dots ,t_{n})}}}.}
The region of integration can be broken in
sub-regions defined by n ! {\displaystyle n!} , t 1 > t 2 > ⋯ > t n {\displaystyle t_{1}>t_{2}>\cdots >t_{n}} , etc. Due to the symmetry of t 2 > t 1 > ⋯ > t n {\displaystyle t_{2}>t_{1}>\cdots >t_{n}} K, the integral in each of these sub-regions is the same and equal to by definition. So it is true that S n {\displaystyle S_{n}} S n = 1 n ! I n . {\displaystyle S_{n}={\frac {1}{n!}}I_{n}.}
Returning to our previous integral, it holds the identity
U n = ( − i ) n n ! ∫ t 0 t d t 1 ∫ t 0 t d t 2 ⋯ ∫ t 0 t d t n T V ( t 1 ) V ( t 2 ) ⋯ V ( t n ) . {\displaystyle U_{n}={\frac {(-i)^{n}}{n!}}\int _{t_{0}}^{t}{dt_{1}\int _{t_{0}}^{t}{dt_{2}\cdots \int _{t_{0}}^{t}{dt_{n}{\mathcal {T}}V(t_{1})V(t_{2})\cdots V(t_{n})}}}.}
Summing up all the terms, we obtain the
Dyson series: U ( t , t 0 ) = ∑ n = 0 ∞ U n ( t , t 0 ) = T e − i ∫ t 0 t d τ V ( τ ) . {\displaystyle U(t,t_{0})=\sum _{n=0}^{\infty }U_{n}(t,t_{0})={\mathcal {T}}e^{-i\int _{t_{0}}^{t}{d\tau V(\tau )}}.} Wavefunctions [ edit ]
Then, going back to the wavefunction for
t > t 0, | Ψ ( t ) ⟩ = ∑ n = 0 ∞ ( − i ) n n ! ( ∏ k = 1 n ∫ t 0 t d t k ) T { ∏ k = 1 n e i H 0 t k V e − i H 0 t k } | Ψ ( t 0 ) ⟩ . {\displaystyle |\Psi (t)\rangle =\sum _{n=0}^{\infty }{(-i)^{n} \over n!}\left(\prod _{k=1}^{n}\int _{t_{0}}^{t}dt_{k}\right){\mathcal {T}}\left\{\prod _{k=1}^{n}e^{iH_{0}t_{k}}Ve^{-iH_{0}t_{k}}\right\}|\Psi (t_{0})\rangle .}
Returning to the
Schrödinger picture, for t > f t , i ⟨ ψ f ; t f ∣ ψ i ; t i ⟩ = ∑ n = 0 ∞ ( − i ) n ∫ d t 1 ⋯ d t n ⏟ t f ≥ t 1 ≥ ⋯ ≥ t n ≥ t i ⟨ ψ f ; t f ∣ e − i H 0 ( t f − t 1 ) V e − i H 0 ( t 1 − t 2 ) ⋯ V e − i H 0 ( t n − t i ) ∣ ψ i ; t i ⟩ . {\displaystyle \langle \psi _{f};t_{f}\mid \psi _{i};t_{i}\rangle =\sum _{n=0}^{\infty }(-i)^{n}\underbrace {\int dt_{1}\cdots dt_{n}} _{t_{f}\,\geq \,t_{1}\,\geq \,\cdots \,\geq \,t_{n}\,\geq \,t_{i}}\,\langle \psi _{f};t_{f}\mid e^{-iH_{0}(t_{f}-t_{1})}Ve^{-iH_{0}(t_{1}-t_{2})}\cdots Ve^{-iH_{0}(t_{n}-t_{i})}\mid \psi _{i};t_{i}\rangle .} See also [ edit ] References [ edit ] |
Recall that an operator \(T \in \mathcal{L}(V)\) is diagonalizable if there exists a basis \(B\) for \(V\) such that \(B\) consists entirely of eigenvectors for \(T\). The nicest operators on \(V\) are those that are diagonalizable with respect to some orthonormal basis for \(V\). In other words, these are the operators for which we can find an orthonormal basis for \(V\) that consists of eigenvectors for \(T\). The Spectral Theorem for finite-dimensional complex inner product spaces states that this can be done precisely for normal operators.
Theorem 11.3.1. Let \(V\) be a finite-dimensional inner product space over \(\mathbb{C}\) and \(T\in\mathcal{L}(V)\). Then \(T\) is normal if and only if there exists an orthonormal basis for \(V\) consisting of eigenvectors for \(T\).
Proof. \(( "\Longrightarrow" )\) Suppose that \(T\) is normal.
Combining Theorem7.5.3~\ref{thm:ComplexLinearMapsUpperTriangularWrtSomeBasis} and Corollary9.5.5~\ref{thm:ComplexLinearMapsUpperTriangularWrtSomeOrthonormalBasis}, there exists an orthonormal basis \(e=(e_1,\ldots,e_n)\) for which the matrix \(M(T)\) is upper triangular, i.e., \begin{equation*} M(T) = \begin{bmatrix} a_{11} & \cdots & a_{1n}\\ &\ddots& \vdots \\ 0&& a_{nn} \end{bmatrix}. \end{equation*} We will show that \(M(T)\) is, in fact, diagonal, which implies that the basis elements \(e_1,\ldots,e_n\) are eigenvectors of \(T\). Since \(M(T)=(a_{ij})_{i,j=1}^n\) with \(a_{ij}=0\) for \(i>j\), we have \(Te_1=a_{11}e_1\) and \(T^*e_1=\sum_{k=1}^n \overline{a}_{1k} e_k\). Thus, by the Pythagorean Theorem and Proposition11.2.3~\ref{prop:adjoint norm}, \begin{equation*} |a_{11}|^2 = \norm{a_{11}e_1}^2 = \norm{Te_1}^2 = \norm{T^*e_1}^2 =\norm{\sum_{k=1}^n \overline{a}_{1k} e_k}^2 = \sum_{k=1}^n |a_{1k}|^2, \end{equation*} from which it follows that \(|a_{12}| = \cdots = |a_{1n}| = 0\). Repeating this argument, \(\norm{Te_j}^2=|a_{jj}|^2\) and \(\norm{T^*e_j}^2 = \sum_{k=j}^n |a_{jk}|^2\) so that \(a_{ij}=0\) for all \(2\le i<j\le n\). Hence, \(T\) is diagonal with respect to the basis \(e\), and \(e_1,\ldots,e_n\) are eigenvectors of \(T\). \(( "\Longleftarrow" )\) Suppose there exists an orthonormal basis \((e_1,\ldots,e_n)\) for \(V\) that consists of eigenvectors for \(T\). Then the matrix \(M(T)\) with respect to this basis is diagonal. Moreover, \(M(T^*)=M(T)^*\) with respect to this basis must also be a diagonal matrix. It follows that \(TT^*=T^*T\) since their corresponding matrices commute: \begin{equation*} M(TT^*) = M(T)M(T^*) = M(T^*)M(T) = M(T^*T). \end{equation*}
The following corollary is the best possible decomposition of a complex vector space \(V\) into subspaces that are invariant under a normal operator \(T\). On each subspace \(\kernel(T-\lambda_i I)\), the operator \(T\) acts just like multiplication by scalar \(\lambda_i\). In other words,
\[ T|_{\kernel(T-\lambda_i I)} = \lambda_{i}I_{\kernel(T-\lambda_i I)}. \]
Corollary 11.3.2. Let \(T\in\mathcal{L}(V)\) be a normal operator, and denote by \(\lambda_1,\ldots,\lambda_m\) the distinct eigenvalues of \(T\). \(V = \kernel(T-\lambda_1 I) \oplus \cdots \oplus \kernel(T-\lambda_m I)\). If\(i\neq j\), then\(\kernel(T-\lambda_i I)\bot \kernel(T-\lambda_j I)\).
As we will see in the next section, we can use Corollary 11.3.2 to decompose the canonical matrix for a normal operator into a so-called “unitary diagonalization”. |
I'm having trouble evaluating the following integral: $$ \int^\pi_0 \frac{\cos^9(x)}{\sin^3(x)+\cos^3(x)}dx $$
I tried to convert it into an algebraic function by multiplying the numerator and denominator by $\sec^{11}(x)$ and substituting $\tan(x)=t$ as
$$ \int^\pi_0 \frac{\cos^9(x)\cdot \sec^{11}(x)}{(\sin^3(x)+\cos^3(x))\cdot \sec^{11}(x)}dx $$
$$ \int^\pi_0 \frac{\sec^2(x)}{(\tan^3(x)+1)\cdot (\tan^2(x)+1)^4}dx $$
Substituting $\tan(x)=t$,
$$ \int^0_0 \frac{dt}{(t^3+1)\cdot (t^2+1)^4} $$
But now both upper and lower limit become $0$ so apparently this is not the right approach, so how do I go about solving it? |
The expectation values$$ \langle p | \vec E(\vec x) | p\rangle $$and similarly for $\vec B(\vec x)$ vanish for a simple reason: the state $|p\rangle$ is by definition translational symmetric (translation only changes the phase of the state, the overall normalization) so the expectation values of any field in this state has to be translationally symmetric, too (the phase cancels between the ket and the bra).
So if you expect to see classical waves in expectation values in such momentum eigenstates, you are unsurprisingly disappointed. Incidentally, the same thing holds for any other field including the Dirac field (in contrast with the OP's assertion). If you compute the expectation value of the Dirac field $\Psi(\vec x)$ in a one-particle momentum eigenstate with one electron, this expectation value also vanishes. In this Dirac case, it's much easier to prove so because the expectation values of all fermionic operators (to the first or another odd power) vanish because of the Grassmann grading.
The vanishing of the expectation values of fields (those that can have both signs, namely the linear functions of the "basic" fields connected with the given particle) would be true for any momentum eigenstates, even multiparticle states which are momentum eigenstates simply because the argument above holds universally. You may think that this vanishing is because the one-particle momentum eigenstate is some mixture of infinitesimal electromagnetic waves that are allowed to be in any "phase" and these phases therefore cancel.
However, the formal relationship between the classical fields and the one-particle states still holds if one is more careful. In particular, one may construct "coherent states" which are multiparticle states with an uncertain number of particles which are the closest approximations of a classical configuration. You may think of coherent states as the ground states of a harmonic oscillator (and a quantum field
is an infinite-dimensional harmonic oscillator) which are shifted in the position directions and/or momentum directions, i.e. states$$ |a\rangle = C_\alpha \cdot \exp(\alpha\cdot a^\dagger) |0\rangle $$This expression may be Taylor-expanded to see the components with individual numbers of excitations, $N=0,1,2,3,\dots$ The $C_\alpha$ coefficient is just a normalization factor that doesn't affect physics of a single coherent state.
With a good choice of $\alpha$ for each value of the classical field (there are many independent $a^\dagger(k,\lambda)$ operators for a quantum field and each of them has its $\alpha(k,\lambda)$), such a coherent state may be constructed for any classical configuration. The expectation values of the classical fields $\vec B,\vec E$ in these coherent states will be what you want.
Now, with the coherent state toolkit, you may get a more detailed understanding of why the momentum eigenstates which are also eigenstates of the number of particles have vanishing eigenvalues. The coherent state is something like the wave function$$ \exp(-(x-x_S)^2/2) $$which is the Gaussian shifted to $x_S$ so $x_S$ is the expectation value of $x$ in it. Such a coherent state may be obtained by an exponential operator acting on the vacuum. The initial term in the Taylor-expansion is the vacuum itself; the next term is a one-particle state that knows about the structure of the coherent state – because the remaining terms in the Taylor expansions are just gotten from the same linear piece that acts many times, recall the $Y^k/k!$ form of the terms in the Taylor expansion of $\exp(Y)$: here, $Y$ is the only thing you need to know.
On the other hand, the expectation value of $x$ in the one-particle state is of course zero. It's because the wave function of a one-particle state is an odd function such as$$ x\cdot \exp(-x^2/2) $$whose probability density is symmetric (even) in $x$ so of course that the expectation value has to be zero. If you look at the structure of the coherent state and you imagine that the $\alpha$ coefficients are very small so that multiparticle states may be neglected for the sake of simplicity, you will realize that the nonzero expectation value of $x$ in the shifted state (the coherent state) boils down to some interference between the vacuum state and the one-particle state; it is not a property of the one-particle state itself! More generally, the nonzero expectation values of fields at particular points of the spacetime prove some interference between components of the state that have different numbers of the particle excitations in them.
The latter statement should be unsurprising from another viewpoint. If you consider something like the matrix element$$ \langle n | a^\dagger | m \rangle $$where the bra and ket vectors are eigenstates of a harmonic oscillator with some number of excitations, it's clear that it's nonzero only if $m=n\pm 1$. In particular, $m$ and $n$ cannot be equal. If you consider the expectation values of $a^\dagger$ in a particle-number eigenstate $|n\rangle$, it's obvious that the expectation value vanishes because $a$ and $a^\dagger$, and they're just a different way of writing linear combinations of $\vec B(\vec x)$ or $\vec E(\vec x)$, are operators that change the number of particle excitations by one or minus one (the same for all other fields including the Dirac fields).
So if you want to mimic a classical field or classical wave with nonzero expectation values of the fields, of course that you need to consider superpositions of states with different numbers of particle excitations! But it's still true that all these expectation values are already encoded in the one-particle states. Let me summarize it: the right states that mimic the classical configurations are $\exp(Y)|0\rangle$ where $Y$ is a linear combination of creation operators (you may add the annihilation ones but they won't make a difference, except for the overall normalization, because annihilation operators annihilate the vacuum). Such coherent exponential-shapes states have nonzero vevs of any classically allowed form that you may want. At the same moment, the exponential may be Taylor-expanded to $(1+Y+\dots)$ and the linear term $Y$ produces a one-particle state that is the ultimate "building block" of the classical configuration. But if you actually want to calculate the vevs of the fields, you can't drop the term $1$ or others, either: you need to include the contributions of the matrix elements between states with different numbers of the particle excitations. |
Matrix factorization is a simple embedding model. Given the feedback matrix A \(\in R^{m \times n}\), where \(m\) is the number of users (or queries) and \(n\) is the number of items, the model learns:
A user embedding matrix \(U \in \mathbb R^{m \times d}\), where row i is the embedding for user i. An item embedding matrix \(V \in \mathbb R^{n \times d}\), where row j is the embedding for item j.
The embeddings are learned such that the product \(U V^T\) is a good approximation of the feedback matrix A. Observe that the \((i, j)\) entry of \(U . V^T\) is simply the dot product \(\langle U_i, V_j\rangle\) of the embeddings of user \(i\) and item \(j\), which you want to be close to \(A_{i, j}\).
Choosing the Objective Function
One intuitive objective function is the squared distance. To do this, minimize the sum of squared errors over all pairs of observed entries:
\[\min_{U \in \mathbb R^{m \times d},\ V \in \mathbb R^{n \times d}} \sum_{(i, j) \in \text{obs}} (A_{ij} - \langle U_{i}, V_{j} \rangle)^2.\]
In this objective function, you only sum over observed pairs (i, j), that is, over non-zero values in the feedback matrix. However, only summing over values of one is not a good idea—a matrix of all ones will have a minimal loss and produce a model that can't make effective recommendations and that generalizes poorly.
Perhaps you could treat the unobserved values as zero, and sum over all entries in the matrix. This corresponds to minimizing the squared Frobenius distance between \(A\) and its approximation \(U V^T\):
\[\min_{U \in \mathbb R^{m \times d},\ V \in \mathbb R^{n \times d}} \|A - U V^T\|_F^2.\]
You can solve this quadratic problem through
Singular Value Decomposition ( SVD) of the matrix. However,SVD is not a great solution either, because in real applications, thematrix \(A\) may be very sparse. For example, think of all the videoson YouTube compared to all the videos a particular user has viewed.The solution \(UV^T\) (which corresponds to the model's approximationof the input matrix) will likely be close to zero, leading to poorgeneralization performance.
In contrast,
Weighted Matrix Factorization decomposes the objectiveinto the following two sums: A sum over observed entries. A sum over unobserved entries (treated as zeroes).
\[\min_{U \in \mathbb R^{m \times d},\ V \in \mathbb R^{n \times d}} \sum_{(i, j) \in \text{obs}} (A_{ij} - \langle U_{i}, V_{j} \rangle)^2 + w_0 \sum_{(i, j) \not \in \text{obs}} (\langle U_i, V_j\rangle)^2.\]
Here, \(w_0\) is a hyperparameter that weights the two terms so that the objective is not dominated by one or the other. Tuning this hyperparameter is very important.
\[\sum_{(i, j) \in \text{obs}} w_{i, j} (A_{i, j} - \langle U_i, V_j \rangle)^2 + w_0 \sum_{i, j \not \in \text{obs}} \langle U_i, V_j \rangle^2\]
where \(w_{i, j}\) is a function of the frequency of query i and item j.
Minimizing the Objective Function
Common algorithms to minimize the objective function include:
Stochastic gradient descent (SGD)is a generic method to minimize loss functions. Weighted Alternating Least Squares( WALS) is specialized to this particular objective.
The objective is quadratic in each of the two matrices U and V. (Note, however, that the problem is not jointly convex.) WALS works by initializing the embeddings randomly, then alternating between:
Fixing \(U\) and solving for \(V\). Fixing \(V\) and solving for \(U\).
Each stage can be solved exactly (via solution of a linear system) and can be distributed. This technique is guaranteed to converge because each step is guaranteed to decrease the loss.
SGD vs. WALS
SGD and WALS have advantages and disadvantages. Review the information below to see how they compare:
SGD
Very flexible—can use other loss functions.
Can be parallelized.
Slower—does not converge as quickly.
Harder to handle the unobserved entries (need to use negative sampling or gravity).
WALS
Reliant on Loss Squares only.
Can be parallelized.
Converges faster than SGD.
Easier to handle unobserved entries. |
This question already has an answer here:
I have been trying to follow this derivation from Sakurai and Shankar, pulling from both. I would like to see how the following derivation can be extended to orbital angular momentum, and thus find that $\ell$ is an integer. The details are omitted but the core of the proof is here.
Consider the following
$$ \begin{align*} [\hat{J}^2,\hat{J}_z] &= 0 \\ [\hat{J}_z,\hat{J}_\pm] &= \pm\hbar\hat{J}_\pm \\ \hat{J}_\pm &\equiv \hat{J}_x \pm \mathrm{i}\hat{J}_y. \end{align*} $$
Now let $$ \begin{align*} \hat{J}^2 \lvert \alpha, \beta \rangle &= \alpha \lvert \alpha, \beta \rangle \\ % \hat{J}_z \lvert \alpha, \beta \rangle &= \beta \lvert \alpha, \beta \rangle \end{align*} $$
From the above we can see that $$ \hat{J}_z \hat{J}_\pm \lvert \alpha, \beta \rangle = (\beta \pm \hbar) \hat{J}_\pm \lvert \alpha, \beta \rangle $$
and $$ \hat{J}^2 \hat{J}_\pm \lvert \alpha, \beta \rangle = \hat{J}_\pm \hat{J}^2 \lvert \alpha, \beta \rangle = \alpha \hat{J}_\pm \lvert \alpha, \beta \rangle $$
From this we see that $$ \hat{J}_\pm \lvert \alpha, \beta \rangle \propto \lvert \alpha, (\beta + \hbar) \rangle \\ % \implies \hat{J}_\pm \lvert \alpha, \beta \rangle = C(\alpha,\beta) \lvert \alpha, (\beta + \hbar) \rangle. $$
Now we see that there is an upper limit on $\beta$ $$ \langle \alpha \beta \rvert \hat{J}^2 - \hat{J}_z^2 \lvert \alpha \beta \rangle = \langle \alpha \beta \rvert \hat{J}_x^2 + \hat{J}_y^2 \lvert \alpha \beta \rangle \\ \implies \alpha \geq \beta^2 $$
So $$ \hat{J}_- \hat{J}_+ \lvert \alpha, \beta_{max} \rangle = (\hat{J}^2 - \hat{J}_z^2 - \hbar\hat{J}_z) \lvert \alpha, \beta_{max} \rangle = 0 \\ \implies \alpha = \beta_{max}(\beta_{max} +\hbar). $$
Similarly $$ \alpha = \beta_{min}(\beta_{min} +\hbar) $$
From this we can show $$ \beta_{min} = -\beta_{max}. $$
So $$ \beta_{max} = \frac{\hbar n}{2} \\ \implies \frac{\beta_{max}}{\hbar} = \frac{n}{2} = j. $$
Thus the eigenvalues are $$ \alpha = \hbar^2 j(j + 1). $$
Now we define $$ \beta \equiv m \hbar $$
and we have the eigen kets $$ \lvert j, m \rangle $$
where $j$ increments in half integer steps.
Question: So the only added restriction to the derivation of the integer values of the orbital angular momentum quantum number $\ell$ is$$ \vec{L} = \vec{r} \times \vec{p}.$$How does this added restriction require that the orbital angular momentum quantum number $\ell$ be an integer and more importantly how can I show this using the proof above? |
Consider a $C^k$, $k\ge 2$, Lorentzian manifold $(M,g)$ and let $\Box$ be the usual wave operator $\nabla^a\nabla_a$. Given $p\in M$, $s\in\Bbb R,$ and $v\in T_pM$, can we find a neighborhood $U$ of $p$ and $u\in C^k(U)$ such that $\Box u=0$, $u(p)=s$ and $\mathrm{grad}\, u(p)=v$?
The tog is a measure of thermal resistance of a unit area, also known as thermal insulance. It is commonly used in the textile industry and often seen quoted on, for example, duvets and carpet underlay.The Shirley Institute in Manchester, England developed the tog as an easy-to-follow alternative to the SI unit of m2K/W. The name comes from the informal word "togs" for clothing which itself was probably derived from the word toga, a Roman garment.The basic unit of insulation coefficient is the RSI, (1 m2K/W). 1 tog = 0.1 RSI. There is also a clo clothing unit equivalent to 0.155 RSI or 1.55 tog...
The stone or stone weight (abbreviation: st.) is an English and imperial unit of mass now equal to 14 pounds (6.35029318 kg).England and other Germanic-speaking countries of northern Europe formerly used various standardised "stones" for trade, with their values ranging from about 5 to 40 local pounds (roughly 3 to 15 kg) depending on the location and objects weighed. The United Kingdom's imperial system adopted the wool stone of 14 pounds in 1835. With the advent of metrication, Europe's various "stones" were superseded by or adapted to the kilogram from the mid-19th century on. The stone continues...
Can you tell me why this question deserves to be negative?I tried to find faults and I couldn't: I did some research, I did all the calculations I could, and I think it is clear enough . I had deleted it and was going to abandon the site but then I decided to learn what is wrong and see if I ca...
I am a bit confused in classical physics's angular momentum. For a orbital motion of a point mass: if we pick a new coordinate (that doesn't move w.r.t. the old coordinate), angular momentum should be still conserved, right? (I calculated a quite absurd result - it is no longer conserved (an additional term that varies with time )
in new coordinnate: $\vec {L'}=\vec{r'} \times \vec{p'}$
$=(\vec{R}+\vec{r}) \times \vec{p}$
$=\vec{R} \times \vec{p} + \vec L$
where the 1st term varies with time. (where R is the shift of coordinate, since R is constant, and p sort of rotating.)
would anyone kind enough to shed some light on this for me?
From what we discussed, your literary taste seems to be classical/conventional in nature. That book is inherently unconventional in nature; it's not supposed to be read as a novel, it's supposed to be read as an encyclopedia
@BalarkaSen Dare I say it, my literary taste continues to change as I have kept on reading :-)
One book that I finished reading today, The Sense of An Ending (different from the movie with the same title) is far from anything I would've been able to read, even, two years ago, but I absolutely loved it.
I've just started watching the Fall - it seems good so far (after 1 episode)... I'm with @JohnRennie on the Sherlock Holmes books and would add that the most recent TV episodes were appalling. I've been told to read Agatha Christy but haven't got round to it yet
?Is it possible to make a time machine ever? Please give an easy answer,a simple one A simple answer, but a possibly wrong one, is to say that a time machine is not possible. Currently, we don't have either the technology to build one, nor a definite, proven (or generally accepted) idea of how we could build one. — Countto1047 secs ago
@vzn if it's a romantic novel, which it looks like, it's probably not for me - I'm getting to be more and more fussy about books and have a ridiculously long list to read as it is. I'm going to counter that one by suggesting Ann Leckie's Ancillary Justice series
Although if you like epic fantasy, Malazan book of the Fallen is fantastic
@Mithrandir24601 lol it has some love story but its written by a guy so cant be a romantic novel... besides what decent stories dont involve love interests anyway :P ... was just reading his blog, they are gonna do a movie of one of his books with kate winslet, cant beat that right? :P variety.com/2016/film/news/…
@vzn "he falls in love with Daley Cross, an angelic voice in need of a song." I think that counts :P It's not that I don't like it, it's just that authors very rarely do anywhere near a decent job of it. If it's a major part of the plot, it's often either eyeroll worthy and cringy or boring and predictable with OK writing. A notable exception is Stephen Erikson
@vzn depends exactly what you mean by 'love story component', but often yeah... It's not always so bad in sci-fi and fantasy where it's not in the focus so much and just evolves in a reasonable, if predictable way with the storyline, although it depends on what you read (e.g. Brent Weeks, Brandon Sanderson). Of course Patrick Rothfuss completely inverts this trope :) and Lev Grossman is a study on how to do character development and totally destroys typical romance plots
@Slereah The idea is to pick some spacelike hypersurface $\Sigma$ containing $p$. Now specifying $u(p)$ is trivial because the wave equation is invariant under constant perturbations. So that's whatever. But I can specify $\nabla u(p)|\Sigma$ by specifying $u(\cdot, 0)$ and differentiate along the surface. For the Cauchy theorems I can also specify $u_t(\cdot,0)$.
Now take the neigborhood to be $\approx (-\epsilon,\epsilon)\times\Sigma$ and then split the metric like $-dt^2+h$
Do forwards and backwards Cauchy solutions, then check that the derivatives match on the interface $\{0\}\times\Sigma$
Why is it that you can only cool down a substance so far before the energy goes into changing it's state? I assume it has something to do with the distance between molecules meaning that intermolecular interactions have less energy in them than making the distance between them even smaller, but why does it create these bonds instead of making the distance smaller / just reducing the temperature more?
Thanks @CooperCape but this leads me another question I forgot ages ago
If you have an electron cloud, is the electric field from that electron just some sort of averaged field from some centre of amplitude or is it a superposition of fields each coming from some point in the cloud? |
This is cool: In today’s
Nature, Toby Cubitt, David Perez-Garcia, and Michael Wolf published a paper, “Undecidability of the spectral gap.” A short writeup is in Nature News, and an extended paper is on arXiv. It shows a problem in quantum physics–the spectral gap problem–to be undecidable by reducing the halting problem to it.
In the
Nature News story, the first author is quoted as saying, “I think it’s fair to say that ours is the first undecidability result for a major physics problem that people would really try to solve.” Is that true?
Note that the article also links the undecidability result to an independence result: “Our results imply that for any consistent, recursive axiomatisation of mathematics, there exist specific Hamiltonians for which the presence or absence of a spectral gap is independent of the axioms.” In order to get this consequence, you’d reduce the problem to provability/refutability in your favorite axiom system \(T\): Give a computable function \(F\) with the following property: If \(i\) is an instance of the problem, \(F(i)\) is a formula in the language of \(T\) such that if \(T \vdash F(i)\) then \(i\) is a positive instance, and if \(T \vdash \lnot F(i)\) then it is a negative instance. If \(T\) decided all the \(F(i)\), searching through all theorems of \(T\) will eventually yield either \(F(i)\) or \(\lnot F(i)\), and so provide a decision procedure for the problem. If the problem is undecidable, that can’t happen, so at least one \(F(i)\) must be independent of \(T\). (In fact, infinitely many must be, for any finite number could be treated as special cases before running the infinite search.) But you do have to give this coding of instances of the problem. (Just for a couple of simple examples, for the halting problem and a sound arithmetical theory, \(F(i)\) would be the standard description of “Turing machine with index \(i\) halts on input \(i\)”; for Hilbert’s 10th problem, given a Diophantine equation \(p(\vec x) = 0\), \(F(p)\) would be \(\exists \vec x p(\vec x) = 0\). See also the paper by Björn Poonen they cite in support of their claim, esp. p. 2.)
Unless I missed something, the authors haven’t done this. So in what sense have they shown that “there exist specific Hamiltonians for which the presence or absence of a spectral gap is independent”? To use the argument above, when you have the halting problem reduced to your new decision problem, you could just take the description of “TM \(i\) halts on input \(i\)” as your \(F(i)\). This will have the required property for whichever instances of your decision problem the halting problem instances reduce to. But this isn’t quite like actually giving a method for directly coding the physical problems in arithmetic, or exhibiting a sentence of arithmetic that says “quantum many-body model \(i\) is gapped.”
Note also that the coding \(F(i)\) depends on the axiom system, so the order of the quantifiers matters: for each axiom system, there will be possibly different encodings of the decision problem with the required property; and it’s not the case that there are instances of the decision problem that are independent of (and hence unsolvable by)
any axiom system. You can always add \(F(i)\) to your \(T\) for a true instance \(i\), or \(\lnot F(i)\) for a false instance, and this will yield a new axiom system which decides that instance in the sense given above. In fact you can even add \(\lnot F(i)\) for a true instance (if \(F(i)\) is independent and doesn’t happen to be \(\Sigma^0_1\))! Then you’ll get an unsound axiom system that will decide that instance incorrectly, and you’ll have to find a different coding.
I of course have no idea if the problem shown undecidable, or the features of the problem used in the reduction of the halting problem, are
actually physically interesting. It may well be that the physically interesting cases of the problem are decidable. Certainly one can decide at least some specific instances, and perhaps all instances that “occur in nature.” But IANAP.
tl;dr interesting “real-world” example of undecidability result physicists actually care about, not an interesting independence result. |
On the profile of solutions for an elliptic problem arising in nonlinear optics
1.
Institute of Mathematics, AMSS, Chinese Academy of Sciences, Beijing, 100080, China
2.
School of Mathematics, The University of New South Wales, Sydney 2052, Australia
3.
School of Mathematics and Statistics, University of Sydney, NSW 2006, Australia
$-\Delta u + (\lambda - h(x)) u = g(x) (u^{p-1} + f(u))$ in $\ \mathbb R^N,$
$u > 0$ in $\mathbb R^N,$
$u \in H^1(\mathbb R^N),$
where $\lambda > 0$ is a parameter, $h$ and $g$ are nonnegative functions in $L^\infty(\mathbb R^N).$ We obtain the asymptotic behaviour of the least energy solutions or solutions obtained by the minimax principle. From the asymptotic behaviour we conclude that those solutions are asymmetric for $\lambda$ large even if $h$ and $g$ are radially symmetric.
Mathematics Subject Classification:35B25, 35J6. Citation:Daomin Cao, Ezzat S. Noussair, Shusen Yan. On the profile of solutions for an elliptic problem arising in nonlinear optics. Discrete & Continuous Dynamical Systems - A, 2004, 11 (2&3) : 649-666. doi: 10.3934/dcds.2004.11.649
[1] [2]
Lisa Hollman, P. J. McKenna.
A conjecture on multiple solutions of a nonlinear elliptic boundary
value problem: some numerical evidence.
[3]
E. N. Dancer, Danielle Hilhorst, Shusen Yan.
Peak solutions for the Dirichlet problem of an elliptic system.
[4]
Liping Wang, Chunyi Zhao.
Solutions with clustered bubbles and a boundary layer of an elliptic problem.
[5]
Liping Wang, Dong Ye.
Concentrating solutions for an anisotropic elliptic problem with
large exponent.
[6]
Liping Wang, Juncheng Wei.
Solutions with interior bubble and boundary layer for an elliptic problem.
[7] [8]
Chengxia Lei, Yihong Du.
Asymptotic profile of the solution to a free boundary problem arising in a shifting climate model.
[9]
Alina Ostafe, Igor E. Shparlinski, Arne Winterhof.
On the generalized joint linear complexity profile of a class of nonlinear pseudorandom multisequences.
[10]
Walter Allegretto, Yanping Lin, Zhiyong Zhang.
Convergence to convection-diffusion waves for solutions to dissipative nonlinear evolution equations.
[11]
Cheng Hou Tsang, Boris A. Malomed, Kwok Wing Chow.
Exact solutions for periodic and solitary matter waves in nonlinear lattices.
[12]
Chunhui Qiu, Rirong Yuan.
On the Dirichlet problem for fully nonlinear elliptic equations on annuli of metric cones.
[13]
Martino Bardi, Paola Mannucci.
On the Dirichlet problem for non-totally degenerate fully nonlinear elliptic equations.
[14]
Gabriella Zecca.
An optimal control problem for some nonlinear elliptic equations with unbounded coefficients.
[15]
Patricio Cerda, Leonelo Iturriaga, Sebastián Lorca, Pedro Ubilla.
Positive radial solutions of a nonlinear boundary value problem.
[16]
Haitao Yang, Yibin Zhang.
Boundary bubbling solutions for a planar elliptic problem with exponential Neumann data.
[17] [18]
Qiuping Lu, Zhi Ling.
Least energy solutions for an elliptic problem involving sublinear term and peaking phenomenon.
[19]
Liping Wang.
Arbitrarily many solutions for an elliptic Neumann problem with sub- or supercritical
nonlinearity.
[20]
Zongming Guo, Xuefei Bai.
On the global branch of positive radial solutions of an elliptic problem with singular nonlinearity.
2018 Impact Factor: 1.143
Tools Metrics Other articles
by authors
[Back to Top] |
According to the textbook by Silberschatz,
A relation schema R is in third normal form with respect to a set $F$ of functional dependencies if, for all functional dependencies in $F^+$ of the form $\alpha$ → $\beta$, where $\alpha$ ⊆ R and $\beta$ ⊆ R, at least one of the following holds:
$\bullet$ $\alpha$ → $\beta$ is a trivial functional dependency.
$\bullet$ $\alpha$ is a superkey for R.
$\bullet$ Each attribute A in $\beta - \alpha$ is contained in a candidate key for R.
Another definition from an online video lecture series by a reputed college that I am following states,
A relation is in third normal form if:
$\bullet$ It is in second normal form,
(or in other words, no non-prime attribute of R is dependent on any proper subset of any candidate key of R).
AND
$\bullet$ No non-prime attribute of R is functionally dependent on any other non-prime attribute of R.
I'm having a hard time understanding how they are equivalent. I can't understand how they seem so distinct from each other, and yet, at their core, are saying the same thing.
Can someone explain why saying one thing is the same as saying the other? |
I'm taking an undergrad GR course, and our text (Lambourne) mentions covariant and contravariant vectors and tensors ad-nauseum, but never really gives a formal definition for what they are, and how they are unique from each other in any physical sense (other than their difference in transformations). Is there any physical intuition behind these two labels? There should be, right? If they differ in how they transform with transformation of coordinates, doesnt that indicate that there has to be some way of visualizing their difference, since coordinate transformations are easily visualized?
This whole business of covariant vs contravariant is very old school. Some very old texts go into ways of visualizing this. I would suggest instead learning about tangent vectors (contravariant) and 1-forms (covariant) and the equivalence between tangent vectors and directional derivatives.
Associate the vector $\vec{v}$ with the derivative operator $\vec{\frac{d}{d\lambda}}$ by saying that there is a curve parameterized by $\lambda$ that has $\vec{v}$ as it's tangent vector.
Similarly, associate to the function $f$ the 1-form $df$. A 1-form is a linear map from tangent vectors onto real numbers. A 1-form $df$ maps a tangent vector $\vec{\frac{d}{d\lambda}}$ to the real number $df \left( \vec{\frac{d}{d\lambda}} \right) \equiv \frac{df}{d\lambda}$.
Once you are comfortable with this idea, you will notice that we can introduce a coordinate system $x^i$ and tangent vectors $\frac{\partial}{\partial x^i}$ and one-forms $dx^i$. Note that from our rule, $dx^i \left( \vec{\frac{\partial}{\partial x^j} } \right) = \delta^i_j$.
You can then parameterize your curve with the functions $x^i(\lambda)$. Note that from the chain rule
$\vec{ \frac{d}{d\lambda} } = \frac{\partial x^i}{\partial \lambda} \vec{\frac{\partial}{\partial x^i}}$
and you can use what we've produced so far to show that
$df = \frac{\partial f}{\partial x^i} dx^i$.
When all is said and done, you can prove that
$df \left( \vec{\frac{d}{d\lambda}} \right) = \frac{\partial x^i}{\partial \lambda} \frac{\partial f}{\partial x^j} \delta_i^j = \frac{df}{d\lambda}$
is coordinate independent, as it should be.
From there on, you can define arbitrary tensors as multilinear maps taking $n$ 1-forms and $m$ vectors onto real numbers. The utility of this construction is that it is very geometrical and at the same time not tied to coordinates (abstract). You also never have to wonder which way a thing transforms, because it's always the natural way.
I recommend you pick up a good book on differential geometry for physicists. Geometrical Methods of Mathematical Physics by Schutz is OK, his GR book is probably more useful. The bible by Misner, Thorne and Wheeler goes into great depth into this business and has handy visualizations of n-forms if you are so inclined.
Here is a visualization from
Geometrical Methods of Mathematical Physics by Schutz. The co-vector is here called a "one form". His notation $\langle \tilde{\omega} , \bar{V} \rangle$ is equivalent to $\omega_\alpha V^\alpha$, which you might be used to seeing.
Note that when the magnitude of $\bar{V}$ increases, the arrow gets longer. When the magnitude of $\tilde{\omega}$ increases, the parallel surfaces get closer together.
Physically, vectors and covectors are not meaningfully differet from each other. Mathematically, either can be defined as the dual space of the original, so they have all the same properties. You could switch which one is the arrow and which one the set of parallel lines in that picture. If you choose displacements to be prototypical vectors, then other quantities will be either vectors or covectors depending on how they're related to displacements. For example, velocities are just derivatives of displacements, so they are also vectors. Gradients act on displacements to produce scalars, so they are co-vectors.
Although this is saying the same thing as Lionel's answer and Mark's answer from a different standpoint, another idea that I like in describing the tangent space is to think of the one dimensional $C^1$ space curve (or spacetime curve) within the manifold $M$ as a grounding concept. So our fundamental idea is some function ("A Path" or "A Trail") through the manifold $M$ and centred on some point $p\in M$ which is constant for the present:
$$\sigma:(-\epsilon,\epsilon) \subset \mathbb{R}\to M$$
such that ${\rm d}_t \sigma(t)$ also exists in the same interval $(-\epsilon,\epsilon)$ and such that $\sigma(0) = p\in M$ and $\sigma(\epsilon)\neq p$.
After all one dimensional paths, even if very windy, have made sense to us human and related animals ever since we've needed to find water, food and the way back to our cave!
Then, the tangent space $T_p M$ at point $p\in M$ is the set of equivalence classes of such paths, where we define two such paths $\sigma_1: (-\epsilon,\epsilon)\to M$ and $\sigma_2: (-\epsilon,\epsilon)\to M$ as "equivalent" if their "tangents" are the same at $p$, i.e. if : $\left.{\rm d}_t \sigma_1(t)\right|_{t=0} = \left.{\rm d}_t \sigma_2(t)\right|_{t=0}$. We can then readily define scalar multiples of tangents and additions of tangents: here we must be a little careful because what we are doing of course is implicitly labelling $M$ with one of its atlas's charts so that we are implicity thinking about paths as functions $\sigma:(-\epsilon,\epsilon) \to \mathbb{R}^m$ and their "tangents" ${\rm d}_t \sigma:(-\epsilon,\epsilon) \to \mathbb{R}^m$, where $m$ is the manifold's dimension and this is how we compare paths and declare them to be "equivalent" in the above way. Otherwise, in general, there is of course no notion of the linear operations of scaling and addition in the manifold itself $M$.
So now "contravariant" vectors (or simply plain vectors) are objects that live in such tangent spaces.
Okay, all this is long winded, but my point is that I actually thing of wiggly, squirmy "threads" in families (the latter defined by this equivalence) when I think of tangent vectors and not little arrows. This is something I personally find very helpful as one can imagine something "real" within the manifold itself (and, implicitly through a chart, within our homely and wonted friend $\mathbb{R}^m$) and not simply some idea of "arrows" stuck all over the manifold by some graffiti vandal!
So now, with this concept, we take up Mark's Answer to imagine the one form - or what you're calling a covariant vector (or sometimes covector). Actually, I find the idea of a dual vector space pretty neat, so I generally sit with the mathematician's idea of the one-form. In finite dimensional $\mathbb{R}^m$, a dual vector - a linear functional $\mathbb{R}^m\to\mathbb{R}$ is
always an inner product as in Mark's answer (this assertion is the same as saying that $\mathbb{R}^m$ is a complete metric space) and indeed can be represented by its "components" - the values of the functional for the basis vectors of $T_p M$, with all values in $T_p M$ then following from linearity. So this (co)vector (one form) uniquely defines the vector orthogonal to it (modulo a multiplicative constant). The spacing between the level planes of this linear functional defines the "length" of the covector. if you want to get heavy handed, this is where the Riesz Representation Theorem comes on stage - although you don't need anything like the full strength of this theorem to discuss the ideas here.
Now, if your background is optics like me, you've got a very strong and concrete example of the one form. Namely, the
wave vector $\tilde{k}$. This beast forms inner products $\left<\tilde{k},\,\underset{\sim}{r}\right>$ with position vectors $\underset{\sim}{r}$ to give you the local phase of the plane wave component it represents. The maximum rate of change of phase in radians per metre is the length of the covector $\tilde{k}$.
Indeed, in Minkowsky spacetime, the four-wavevector is a one-form - a covector:
$$\tilde{k} = (\omega,-k_x,-k_t,-k_z);\qquad \omega = \sqrt{k_x^2+k_y^2+k_z^2}\,c$$
Now, to go to arbitrary valence tensors, if you haven't got some of the references in Lionel's or Mark's answers, a great introductory discussion is given in the first chapter of Kip Thorne's physics 136 course which is downloadable from here. He talks about all these ideas in terms linear functionals and "slots" for components, rather like you would go about representing and storing these ideas in computer memory (with a countably infinite word size, of course, to represent real numbers exactly!).
An aside about references: I'm not altogether sure that Schutz's GR book is as good a reference for geometry as it used to be (as Lionel makes out). True, it does still include the discussion of the one form as you and Mark's answer describe them, but stuff like the Lie derivative and much of the other geometrical discussion that used to be in his book has had to make way for expanded chapters of GR experimental evidence and issues. I think Schutz even says something about using his geometry book together with a second reading of his "first course on GR" in the latter's preface. So have a browse carefully at the contents of any book you might be thinking of buying - an older copy of Schutz may fit better with you.
Tensors (or rather tensor fields in case of differential geometry) are very generic and not particularly intuitive objects that can fill a lot of roles - volume elements, endomorphisms, Riemannian metrics are just a few things you can describe with tensors.
However, to get an intuition about co- and contravariance, it's enough to look at tangent vectors and covectors, which
can be visualized and are the building blocks of higher-rank tensors.
In differential geometry as traditionally taught in physics courses (mathematicians stopped doing it this way some decades ago), we
always work in charts (ie local coordinates).
A vector would be a column vector $$ \begin{pmatrix} v^1\\\vdots\\v^n \end{pmatrix} = (v^i)_{i=1}^n=v^i $$ and a covector a row vector $$ \begin{pmatrix} w_1&\cdots&w_n \end{pmatrix} = (w_i)_{i=1}^n=w_i $$ with duality pairing $$ \begin{pmatrix} w^1&\cdots&w^n \end{pmatrix} \begin{pmatrix} v^1\\\vdots\\v^n \end{pmatrix} =\sum_{i=1}^n w_iv^i = w_iv^i $$ As there is in general no global chart, we need to specify transformation laws and make our vectors and covectors equivalence classes with respect to these transformations.
The tranformations are given by the Jacobi matrix of the coordinate switchover and its inverse, which is obviously necessary to keep pairings invariant.
Now, the coordinates of vectors transform opposite to basis vectors - they are
contravariant - whereas the components of covectors transform the same way as basis vectors - they are covariant.
You may encounter the opinion that the difference between vectors and covectors doesn't really matter as in general relativity, you can raise and lower indices as you see fit, which culminates in the idea that there's only a single geometric object - the vector - with co- and contravariant components.
That is in my opinion a pretty bad idea: It only works if there is a distinguished non-degenerate bilinear form available and normally, there's a 'natural' placement of indices due to geometry which might even be relevant to model physical concepts (eg velocity vs momentum in context of Lagrangian and Hamiltonian mechanics).
Besides these technical definitions of vectors and covectors (which
do make some sense geometrically once you introduce principal bundles and associated vector bundles, but that's not something typically done in physics courses), there are of course more geometric ones:
We can consider a vector an equivalence class of curves through the same point with first order contact. Similarly, a covector would an equivalence class of real-valued functions at a given point with first order contact.
If we compose representatives of a covector and a vector, we get a function $\mathbb R\to\mathbb R$ and evaluating its derivative gives us the natural pairing.
There are also abstract definitions available: We can identify vectors with their directional derivatives, ie a vector is a linear functional on the space of real-valued functions that respects the Leibniz rule. For covectors, there's an algebraic definition as the ideal of real-valued functions vanishing at a point factored by the ideal generated by product of such functions.
Instead of coming up with definitions for both vectors and covectors, it's enough to define one of those manually (typically vectors) and define the other one by duality, ie as real-valued linear maps.
Now, if you actually want to
visualize these objects (as in draw meaningful pictures), the obvious representation for vectors is as little arrows in coordinate space. Now, if there's a distinguished non-degenerate bilinear form available, you can represent covectors via their associated vectors (raising its index) and pairing will just be the Euclidean scalar product.
A second way to visualize covectors would be as (oriented) hyperplane fields, with the pairing as described in Mark's answer. This also does not work for arbitrary manifolds - you need a volume form to do so (this is basically a variant of the Hodge dual with the last step from the Wikipedia explanation omitted).
(If you do not have a bilinear or volume form available, you could of course chose an arbitray local one). |
We can express any ket vector \(|\psi⟩\) (representing all the different possible states) in its “component form” as \(|\psi⟩=\sum_{i=1}^{N}\psi_i|i⟩\) where \(|i⟩\) are all the different possible basis vectors one could decompose \(|\psi⟩\) with respect to, \(\psi_i\) are the different components (which, in general, can be complex numbers), and the value of \(N\) is simply the number of dimensions in the space. The basis vectors \(|i⟩\) are by definition orthogonal vectors whose magnitudes equal one. This is analogous to the unit vectors \(\hat{i}\), \(\hat{j}\), and \(\hat{k}\) which are, by definition, perpendicular vectors whose magnitudes equal 1.
We shall now derive an equation which will allow us to find the components \(\psi_i\) of any complex vector . Let’s start by taking the inner product between \(|\psi⟩\) and any basis vector \(|j⟩\) to get
$$⟨j|\psi⟩=\sum_{i=1}^{N}\psi_i⟨j|i⟩.$$
If \(i≠j\), then we are considering the inner product between two different basis vectors which, by definition, are orthogonal. Therefore all the terms \(⟨j|i⟩\) in the sum in which \(i≠j\) are zero. When \(i=j\), we are taking the inner product between the same two basis vectors which have equal magnitudes of one and point in the same direction; thus \(⟨j|i⟩\)=1\) when \(i=j\). This means that the inner product \(⟨j|i⟩\) simply is just the Kronecker delta \(𝛿_{ij}\); thus \(⟨j|i⟩=𝛿_{ij}\) and
$$⟨j|\psi⟩=\sum_{i=1}^{N}\psi_i𝛿_{ij}.$$
In the sum, all of the terms become zero except for the \(\psi_j𝛿_{jj}=α_j\) term. Thus, the equation simplifies to
$$⟨j|\psi⟩=\psi_j.\tag{23}$$
This result indicates that we're at the halfway point towards our goal of deducing that \(\psi_i=⟨L_i|\psi⟩\)—something I said, earlier, that I'd eventually prove. The only discrepancy between this equation and Equation (23) is the dummy variable (which doesn't matter) and the fact that there's the bra \(⟨j|\) instead of \(⟨L_i|\). But once we prove that the eigenvectors \(|L_i⟩\) of any observable \(\hat{L}\) form a complete orthonormal basis, then you'll be able to breath a sigh of relief. Since this analysis applies to
any set of basis vectors \(|i⟩\), the generality of this analysis permits us to substitute \(|L_i⟩\) for \(|i⟩\) in order to obtain our desired result. |
So far, we have a fairly small collection of examples of groups: the dihedral groups, the symmetric group, and \(\mathbb{Z}_n\). In this section, we'll look at products of groups and find a way to make new groups from the groups we already know.
A very famous group - though not a very complicated one - is the
Klein Four-Group. This is the symmetry group of a rectangle. It has a pair of generators, given by the flips over the horizontal and vertical axes.
Figure 4.2.1. The symmetries of a rectangle, given by the Klien 4-group.
But the Klein Four-Group can also be thought of as a kind of mash-up of two copies of \(\mathbb{Z}_2\). Let \(H\) be the additive group with elements \(\{(0,0), (1,0), (0,1), (1,1)\}\), and operation given by just adding the elements coordinate-wise as elements of \(\mathbb{Z}_2\). (So that \((1,0)+(1,1)=(0,1)\). Then \(H\) is a group (check!), and is in fact isomorphic to the Klein Four-Group. It's an example of a product group!
Let's be more precise and set a definition of a product group.
Definition 4.1.0: Direct Product
The
direct product (or just product) of two groups \(G\) and \(H\) is the group \(G\times H\) with elements \((g,h)\) where \(g\in G\) and \(h\in H\). The group operation is given by \((g_1, h_1)\cdot (g_2, h_2) = (g_1g_2, h_1h_2)\), where the coordinate-wise operations are the operations in \(G\) and \(H\).
Here's an example. Take \(G=\mathbb{Z}_3\) and \(H=\mathbb{Z}_6\), and consider the product \(G\times H\). The product group has 18 elements: there are three choices for the first coordinate and 6 choices for the second coordinate. Since we use addition as the operation in both of the coordinate groups, we'll use addition as the operation in the product. So consider elements \((2,4)\) and \((1,3)\). Then \((2,4)+(1,3)=(0,1)\); addition in the first coordinate is according to \(\mathbb{Z}_3\), and addition in the second coordinate is according to \(\mathbb{Z}_6\).
We should check that the product of any pair of groups \(G\) and \(H\) is actually a group.
The product group has an identity \((1,1)\): \((1,1)\cdot (g,h)=(1g,1h)=(g,h)\). Associativity follows from associativity of \(G\) and \(H\). Closure also follows from closure in \(G\) and \(H\). The inverse of \((g,h)\) is \((g^{-1},h^{-1})\).
So \(G\times H\) really is a group.
We saw in the example that \(\mathbb{Z}_3 \times \mathbb{Z}_6\) has 18 elements. This isn't a coincidence! For any finite groups \(G\) and \(H\), the product group has \(|G||H|\) elements.
An interesting question at this point is suggested by Lagrange's Theorem, which told us that the cardinality of any subgroup divides the cardinality of the original group. We've seen that we can form product group sto 'multiply' groups: Is it also possible to 'divide' groups? Over the next few sections, we'll develop ideas that will let us build
quotient groups.
Exercise 4.1.1
Not all product groups are commutative. How many elements are in \(G=S_4\times \mathbb{Z}_3\)? Identify the identity. Write down a few non-identity elements and compute their respective products.
Contributors Tom Denton (Fields Institute/York University in Toronto) |
Given $x \sim N(0, I)$ and $f : \mathbb{R}^{n} \longrightarrow \mathbb{R}$ which is $L$-Lipschitz, we have that $f - \mathbb{E}f$ is a sub-Gaussian random variable, specifically that $$ \| f(x) - \mathbb{E} f(x) \|_{\psi_2} \leq C L \:. $$
I want to show that if in addition $f \geq 0$, then for all $p > 0$, the more general statement $$ \| f(x) - (\mathbb{E} f(x)^p)^{1/p} \|_{\psi_2} \leq C_p L $$ holds, where $C_p$ is a constant which depends on $p$. This is Exercise 5.2.5 from Vershynin's book: http://www-personal.umich.edu/~romanv/teaching/2015-16/626/HDP-book.pdf (and the notation above is using his notation).
Using the fact that if $X$ is sub-gaussian then $$ \| X\|_p \leq C \|X\|_{\psi_2} \sqrt{p} \:, \:\: p \geq 1 \:, $$ (Eq 2.16 in Vershynin's HDP book), I was able to show that (just by triangle inequality) $$ \| X - \|X\|_p \|_{\psi_2} \leq (C_1 + C_2 \sqrt{p}) \| X \|_{\psi_2} \:. $$
I wanted then to apply this fact to the sub-Gaussian r.v. $Z := f(x) - \mathbb{E} f(x)$, but this doesn't work for $p \leq 1$ since the constant does not come out of the $p$-th moment of $Z$, so I don't think this approach is correct. Furthermore, it only applies when $p \geq 1$.
Any other hints/suggestions? |
Damping in Structural Dynamics: Theory and Sources
If you strike a bowl made of glass or metal, you hear a tone with an intensity that decays with time. In a world without damping, the tone would linger forever. In reality, there are several physical processes through which the kinetic and elastic energy in the bowl dissipate into other energy forms. In this blog post, we will discuss how damping can be represented, and the physical phenomena that cause damping in vibrating structures.
How Is Damping Quantified?
There are several ways by which damping can be described from a mathematical point of view. Some of the more popular descriptions are summarized below.
One of the most obvious manifestations of damping is the amplitude decay during free vibrations, as in the case of a singing bowl. The rate of the decay depends on how large the damping is. It is most common that the vibration amplitude decreases exponentially with time. This is the case when the energy lost during a cycle is proportional to the amplitude of the cycle itself.
Let’s start out with the equation of motion for a system with a single degree of freedom (DOF) with viscous damping and no external loads,
After division with the mass,
m, we get a normalized form, usually written as
Here, \omega_0 is the undamped natural frequency and \zeta is called the
damping ratio.
In order for the motion to be periodic, the damping ratio must be limited to the range 0 \le \zeta < 1. The amplitude of the free vibration in this system will decay with the factor
where
T 0 is the period of the undamped vibration. Decay of a free vibration for three different values of the damping ratio.
Another measure in use is the
logarithmic decrement, δ. This is the logarithm of the ratio between the amplitudes of two subsequent peaks,
where
T is the period.
The relation between the logarithmic decrement and the damping ratio is
Another case in which the effect of damping has a prominent role is when a structure is subjected to a harmonic excitation at a frequency that is close to a natural frequency. Exactly at resonance, the vibration amplitude tends to infinity, unless there is some damping in the system. The actual amplitude at resonance is controlled solely by the amount of damping.
Amplification for a single-DOF system for different frequencies and damping ratios.
In some systems, like resonators, the aim is to get as much amplification as possible. This leads to another popular damping measure: the
quality factor or Q factor. It is defined as the amplification at resonance. The Q factor is related to the damping ratio by
Another starting point for the damping description is to assume that there is a certain phase shift between the applied force and resulting displacement, or between stress and strain. Talking about phase shifts is only meaningful for a steady-state harmonic vibration. If you plot the stress vs. strain for a complete period, you will see an ellipse describing a hysteresis loop.
Stress-strain history.
You can think of the material properties as being complex-valued. Thus, for uniaxial linear elasticity, the complex-valued stress-strain relation can be written as
Here, the real part of Young’s modulus is called the
storage modulus, and the imaginary part is called the loss modulus. Often, the loss modulus is described by a loss factor, η, so that
Here,
E can be identified as the storage modulus E’. You may also encounter another definition, in which E is the ratio between the stress amplitude and strain amplitude, thus
in which case
The distinction is important only for high values of the loss factor.
An equivalent measure for loss factor damping is the
loss tangent, defined as
The
loss angle δ is the phase shift between stress and strain.
Damping defined by a loss factor behaves somewhat differently from viscous damping. Loss factor damping is proportional to the displacement amplitude, whereas viscous damping is proportional to the velocity. Thus, it is not possible to directly convert one number into the other.
In the figure below, the response of a single-DOF system is compared for the two damping models. It can be seen that viscous damping predicts higher damping than loss factor damping above the resonance and lower damping below it.
Comparison of dynamic response for viscous damping (solid lines) and loss factor damping (dashed lines).
Usually, the conversion between the damping ratio and loss factor damping is considered at a resonant frequency, and then \eta \approx 2 \zeta. However, this is only true at a single frequency. In the figure below, a two-DOF system is considered. The damping values have been matched at the first resonance, and it is clear that the predictions at the second resonance differ significantly.
Comparison of dynamic response for viscous damping and loss factor damping for a two-DOF system.
The loss factor concept can be generalized by defining the loss factor in terms of energy. It can be shown that for the material model described above, the energy dissipated during a load cycle is
where \varepsilon_a is the strain amplitude.
Similarly, the maximum elastic energy during the cycle is
The loss factor can thus be written in terms of energy as
This definition in terms of dissipated energy can be used irrespective of whether the hysteresis loop actually is a perfect ellipse or not — as long as the two energy quantities can be determined.
Sources of Damping
From the physical point of view, there are many possible sources of damping. Nature has a tendency to always find a way to dissipate energy.
Internal Losses in the Material
All real materials will dissipate some energy when strained. You can think of it as a kind of internal friction. If you look at a stress-strain curve for a complete load cycle, it will not trace a perfect straight line. Rather, you will see something that is more like a thin ellipse.
Often, loss factor damping is considered a suitable representation for material damping, since experience shows that the energy loss per cycle tends to have rather weak dependencies on frequency and amplitude. However, since the mathematical foundation for loss factor damping is based on complex-valued quantities, the underlying assumption is harmonic vibration. Thus, this damping model can only be used for frequency-domain analyses.
The loss factor for a material can have quite a large variation, depending on its detailed composition and which sources you consult. In the table below, some rough estimates are provided.
Material Loss Factor, η Aluminum 0.0001–0.02 Concrete 0.02–0.05 Copper 0.001–0.05 Glass 0.0001–0.005 Rubber 0.05–2 Steel 0.0001–0.01
Loss factors and similar damping descriptions are mainly used when the exact physics of the damping in the material is not known or not important. In several material models, such as viscoelasticity, the dissipation is an inherent property of the model.
Friction in Joints
It is common that structures are joined by, for example, bolts or rivets. If the joined surfaces are sliding relative to each other during the vibration, the energy is dissipated through friction. As long as the value of the friction force itself does not change during the cycle, the energy lost per cycle is more or less frequency independent. In this sense, the friction is similar to internal losses in the material.
Bolted joints are common in mechanical engineering. The amount of dissipation that will be experienced in bolted joints can vary quite a lot, depending on the design. If low damping is important, then the bolts should be closely spaced and well-tightened so that macroscopic slip between the joined surfaces is avoided.
Sound Emission
A vibrating surface will displace the surrounding air (or other surrounding medium) so that sound waves are emitted. These sound waves carry away some energy, which results in the energy loss from the point of view of the structure.
A plot of the sound emission in a Tonpilz transducer. Anchor Losses
Often, a small component is attached to a larger structure that is not part of the simulation. When the component vibrates, some waves will be induced in the supporting structure and carried away. This phenomenon is often called
anchor losses, particularly in the context of MEMS. Thermoelastic Damping
Even with pure elastic deformation without dissipation, straining a material will change its temperature slightly. Local stretching leads to a temperature decrease, while compression implies a local heating.
Fundamentally, this is a reversible process, so the temperature will return to the original value if the stress is released. Usually, however, there are gradients in the stress field with associated gradients in the temperature distribution. This will cause a heat flux from warmer to cooler regions. When the stress is removed during a later part of the load cycle, the temperature distribution is no longer the same as the one caused by the onloading. Thus, it is not possible to locally return to the original state. This becomes a source of dissipation.
The thermoelastic damping effect is mostly important when working with small length scales and high-frequency vibrations. For MEMS resonators, thermoelastic damping may give a significant decrease of the Q factor.
Dashpots
Sometimes, a structure contains intentional discrete dampers, like the shock absorbers in a wheel suspension.
Such components obviously have a large influence on the total damping in a structure, at least with respect to some vibration modes.
Seismic Dampers
A particular case where much effort is spent on damping is in civil engineering structures in seismically active areas. It is of the utmost importance to reduce the vibration levels in buildings if hit by an earthquake. The purpose of such dampers can be both to isolate a structure from its foundation and to provide dissipation.
Further Reading
Read the follow-up to this blog post here: How to Model Different Types of Damping in COMSOL Multiphysics®
Comments (11) CATEGORIES Chemical COMSOL Now Electrical Fluid General Interfacing Mechanical Today in Science TAGS CATEGORIES Chemical COMSOL Now Electrical Fluid General Interfacing Mechanical Today in Science |
I came across a version of a proof that One Time Pads have perfect secrecy and have a few questions with this version of the proof. The proof is attributed to Dan Boneh (the proof starts on slide 10),
Definition: A cipher (E,D) over (K,M,C) has perfect secrecy if $\forall m_{0}, m_{1} \in$ M, $(\left\vert{m_{0}}\right\vert)$ = $(\left\vert{m_{1}}\right\vert)$ and $\forall$c $\in$ C, such that: Pr[E(k,$m_{0}$)=c] = Pr[E(k,$m_{1}$)=c] where k is a random variable that is uniformly sampled in the keyspace K.
My understanding of the Proof is as below,
Lemma: OTP has perfect secrecy
Proof:
For every message m and every ciphertext c:
Pr[E(k,m)=c] = ${\dfrac{\text{ #keys k in K s.t. E(k,m)=c}}{\text{Total number of Keys}}}$
Suppose that we have a cipher for all m,c: k in K: E(k,m)=c is equal to some constant.
If that is the case then for all $m_{0}, m_{1}$ the probability of E(k,m)=c is the same and Dan states the the denominator (total number of keys) is the same as the number of keys k in K such that: E(k,m)=c.
If this probability is true then the cipher has perfect secrecy.
Let m in M and c in C, then the number of OTP keys that map m to c is 1.
If E(k,m)=c => k $\oplus$ m = c => k= m $\oplus$ c
What this says is that the number of keys k in K: E(k,m)=c = 1 which completes the proof that OTP has perfect secrecy.
Questions:
Why does Pr[E(k,m)=c] = ${\dfrac{\text{ #keys k in K s.t. E(k,m)=c}}{\text{Total number of Keys}}}$ Why is the #keys k in K s.t. E(k,m)=c equal to the total number of keys? Is it because of the following condition: $\forall m_{0}, m_{1} \in$ M, $(\left\vert{m_{0}}\right\vert)$ = $(\left\vert{m_{1}}\right\vert)$ -- which means one of the requirements for perfect secrecy is that every message m has to be the same length, hence meaning that every key k also has to be the same length, and for each unique message m there must be a unique key k that encrypt its. So the total number of keys is equal to the total number of messages.
But this line of reasoning still doesn't make sense to me. Say there are 5 messages: $m_{1},m_{2}, m_{3}, m_{4}, m_{5}$ and 5 keys that encrypt those messages: $k_{1},k_{2}, k_{3}, k_{4}, k_{5}$ -- then the total number of keys is 5 and the total number of keys that uniquely map $m_{1},m_{2}, m_{3}, m_{4}, m_{5}$ to $c_{1},c_{2}, c_{3}, c_{4}, c_{5}$ is 1, namely: E($m_{n}, k_{n})=c_{n}$ for each m,k, c, as you have a 1-1 pair for messages to keys, so the ratio is 1/5 and not 1. What is wrong with this? Or is it counting the total number of such keys that make (k,m) to c? If it counts the total number of such keys, then that answer is 5 in this example. I'm confused.
How does showing Pr[E(k,m)=c] = ${\dfrac{\text{ #keys k in K s.t. E(k,m)=c}}{\text{Total number of Keys}}}$ = 1 prove OTP have perfect secrecy. This doesn't make any sense to me given the statement of the theorem.
I need help understanding this proof, breaking down what the proof is saying.
Thanks! |
Search
Now showing items 1-1 of 1
Production of charged pions, kaons and protons at large transverse momenta in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV
(Elsevier, 2014-09)
Transverse momentum spectra of $\pi^{\pm}, K^{\pm}$ and $p(\bar{p})$ up to $p_T$ = 20 GeV/c at mid-rapidity, |y| $\le$ 0.8, in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV have been measured using the ALICE detector ... |
Search
Now showing items 1-10 of 20
Measurement of electrons from beauty hadron decays in pp collisions at root √s=7 TeV
(Elsevier, 2013-04-10)
The production cross section of electrons from semileptonic decays of beauty hadrons was measured at mid-rapidity (|y| < 0.8) in the transverse momentum range 1 < pT <8 GeV/c with the ALICE experiment at the CERN LHC in ...
Multiplicity dependence of the average transverse momentum in pp, p-Pb, and Pb-Pb collisions at the LHC
(Elsevier, 2013-12)
The average transverse momentum <$p_T$> versus the charged-particle multiplicity $N_{ch}$ was measured in p-Pb collisions at a collision energy per nucleon-nucleon pair $\sqrt{s_{NN}}$ = 5.02 TeV and in pp collisions at ...
Directed flow of charged particles at mid-rapidity relative to the spectator plane in Pb-Pb collisions at $\sqrt{s_{NN}}$=2.76 TeV
(American Physical Society, 2013-12)
The directed flow of charged particles at midrapidity is measured in Pb-Pb collisions at $\sqrt{s_{NN}}$=2.76 TeV relative to the collision plane defined by the spectator nucleons. Both, the rapidity odd ($v_1^{odd}$) and ...
Long-range angular correlations of π, K and p in p–Pb collisions at $\sqrt{s_{NN}}$ = 5.02 TeV
(Elsevier, 2013-10)
Angular correlations between unidentified charged trigger particles and various species of charged associated particles (unidentified particles, pions, kaons, protons and antiprotons) are measured by the ALICE detector in ...
Anisotropic flow of charged hadrons, pions and (anti-)protons measured at high transverse momentum in Pb-Pb collisions at $\sqrt{s_{NN}}$=2.76 TeV
(Elsevier, 2013-03)
The elliptic, $v_2$, triangular, $v_3$, and quadrangular, $v_4$, azimuthal anisotropic flow coefficients are measured for unidentified charged particles, pions, and (anti-)protons in Pb–Pb collisions at $\sqrt{s_{NN}}$ = ...
Measurement of inelastic, single- and double-diffraction cross sections in proton-proton collisions at the LHC with ALICE
(Springer, 2013-06)
Measurements of cross sections of inelastic and diffractive processes in proton--proton collisions at LHC energies were carried out with the ALICE detector. The fractions of diffractive processes in inelastic collisions ...
Transverse Momentum Distribution and Nuclear Modification Factor of Charged Particles in p-Pb Collisions at $\sqrt{s_{NN}}$ = 5.02 TeV
(American Physical Society, 2013-02)
The transverse momentum ($p_T$) distribution of primary charged particles is measured in non single-diffractive p-Pb collisions at $\sqrt{s_{NN}}$ = 5.02 TeV with the ALICE detector at the LHC. The $p_T$ spectra measured ...
Mid-rapidity anti-baryon to baryon ratios in pp collisions at $\sqrt{s}$ = 0.9, 2.76 and 7 TeV measured by ALICE
(Springer, 2013-07)
The ratios of yields of anti-baryons to baryons probes the mechanisms of baryon-number transport. Results for anti-proton/proton, anti-$\Lambda/\Lambda$, anti-$\Xi^{+}/\Xi^{-}$ and anti-$\Omega^{+}/\Omega^{-}$ in pp ...
Charge separation relative to the reaction plane in Pb-Pb collisions at $\sqrt{s_{NN}}$= 2.76 TeV
(American Physical Society, 2013-01)
Measurements of charge dependent azimuthal correlations with the ALICE detector at the LHC are reported for Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV. Two- and three-particle charge-dependent azimuthal correlations ...
Multiplicity dependence of two-particle azimuthal correlations in pp collisions at the LHC
(Springer, 2013-09)
We present the measurements of particle pair yields per trigger particle obtained from di-hadron azimuthal correlations in pp collisions at $\sqrt{s}$=0.9, 2.76, and 7 TeV recorded with the ALICE detector. The yields are ... |
Search
Now showing items 1-6 of 6
Forward-backward multiplicity correlations in pp collisions at √s = 0.9, 2.76 and 7 TeV
(Springer, 2015-05-20)
The strength of forward-backward (FB) multiplicity correlations is measured by the ALICE detector in proton-proton (pp) collisions at s√ = 0.9, 2.76 and 7 TeV. The measurement is performed in the central pseudorapidity ...
Rapidity and transverse-momentum dependence of the inclusive J/$\mathbf{\psi}$ nuclear modification factor in p-Pb collisions at $\mathbf{\sqrt{\textit{s}_{NN}}}=5.02$ TeV
(Springer, 2015-06)
We have studied the transverse-momentum ($p_{\rm T}$) dependence of the inclusive J/$\psi$ production in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV, in three center-of-mass rapidity ($y_{\rm cms}$) regions, down to ...
Multiplicity dependence of two-particle azimuthal correlations in pp collisions at the LHC
(Springer, 2013-09)
We present the measurements of particle pair yields per trigger particle obtained from di-hadron azimuthal correlations in pp collisions at $\sqrt{s}$=0.9, 2.76, and 7 TeV recorded with the ALICE detector. The yields are ...
Measurement of charm and beauty production at central rapidity versus charged-particle multiplicity in proton-proton collisions at $\sqrt{s}$ = 7 TeV
(Springer, 2015-09)
Prompt D meson and non-prompt J/$\psi$ yields are studied as a function of the multiplicity of charged particles produced in inelastic proton-proton collisions at a centre-of-mass energy of $\sqrt{s}=7$ TeV. The results ...
Coherent $\rho^0$ photoproduction in ultra-peripheral Pb-Pb collisions at $\mathbf{\sqrt{\textit{s}_{\rm NN}}} = 2.76$ TeV
(Springer, 2015-09)
We report the first measurement at the LHC of coherent photoproduction of $\rho^0$ mesons in ultra-peripheral Pb-Pb collisions. The invariant mass and transverse momentum distributions for $\rho^0$ production are studied ...
Inclusive, prompt and non-prompt J/ψ production at mid-rapidity in Pb-Pb collisions at √sNN = 2.76 TeV
(Springer, 2015-07-10)
The transverse momentum (p T) dependence of the nuclear modification factor R AA and the centrality dependence of the average transverse momentum 〈p T〉 for inclusive J/ψ have been measured with ALICE for Pb-Pb collisions ... |
Prove that this sequence converges. I can't do it.
Let $\{a_n\}$ be a sequence of positive real numbers that converges to a number $A$. Prove that $\{(a_1\cdots a_n)^{1/n}\}$ converges to $A$.
Mathematics Stack Exchange is a question and answer site for people studying math at any level and professionals in related fields. It only takes a minute to sign up.Sign up to join this community
Since $a_n$ converges to $A$, and all numbers are positive, it follows that $\log(a_n)$ converges to $\log(A)$. By Cesaro's averaging theorem:
$$\frac{\sum_{i=1}^n \log(a_i)}{n}\rightarrow \log(A).$$
Exponentiating both sides gives the desired result.
Let $$ x_n=\ln(a_1a_2\ldots a_n)^{1/n}=\frac1n\sum_{i=1}^n\ln a_i. $$ Since $\lim_na_n=A$, then $\lim_n\ln a_n=\ln A$, and therefore the sequence $(x_n)$ is convergent, with $\lim_nx_n=\ln A$. It follows that $$ \lim_n(a_1a_2\ldots a_n)^{1/n}=\lim_ne^{x_n}=e^{\ln A}=A. $$ |
This article will be permanently flagged as inappropriate and made unaccessible to everyone.
Are you certain this article is inappropriate?
Excessive Violence
Sexual Content
Political / Social
Email Address:
Article Id:
WHEBN0000102476
Reproduction Date:
In probability theory, a log-normal (or lognormal) distribution is a continuous probability distribution of a random variable whose logarithm is normally distributed. Thus, if the random variable X is log-normally distributed, then Y = \ln(X) has a normal distribution. Likewise, if Y has a normal distribution, then X = \exp(Y) has a log-normal distribution. A random variable which is log-normally distributed takes only positive real values. The distribution is occasionally referred to as the Galton distribution or Galton's distribution, after Francis Galton.[1] The log-normal distribution also has been associated with other names, such as McAlister, Gibrat and Cobb–Douglas.[1]
A log-normal process is the statistical realization of the multiplicative product of many independent random variables, each of which is positive. This is justified by considering the central limit theorem in the log domain. The log-normal distribution is the maximum entropy probability distribution for a random variate X for which the mean and variance of \ln(X) are specified.[2]
Given a log-normally distributed random variable X and two parameters \mu and \sigma that are, respectively, the mean and standard deviation of the variable’s natural logarithm, then the logarithm of X is normally distributed, and we can write X as
with Z a standard normal variable.
This relationship is true regardless of the base of the logarithmic or exponential function. If \log_a(Y) is normally distributed, then so is \log_b(Y), for any two positive numbers a,b\neq 1. Likewise, if e^X is log-normally distributed, then so is a^{X}, where a is a positive number \neq 1.
On a logarithmic scale, \mu and \sigma can be called the location parameter and the scale parameter, respectively.
In contrast, the mean, standard deviation, and variance of the non-logarithmized sample values are respectively denoted m, s.d., and v in this article. The two sets of parameters can be related as (see also Arithmetic moments below)[3]
A random positive variable x is log-normally distributed if the logarithm of x is normally distributed,
A change of variables must conserve differential probability. In particular,
where
is the log-normal probability density function.[1]
The cumulative distribution function is
where erfc is the complementary error function, and Φ is the cumulative distribution function of the standard normal distribution.
All moments of the log-normal distribution exist and it holds that: \operatorname{E}[X^n]=\mathrm{e}^{n\mu+\frac{n^2\sigma^2}{2}} (which can be derived by letting z=\frac{\ln(x) - (\mu+n\sigma^2)}{\sigma} within the integral). However, the expected value \operatorname{E}[e^{t X}] is not defined for any positive value of the argument t as the defining integral diverges. In consequence the moment generating function is not defined.[4] The last is related to the fact that the lognormal distribution is not uniquely determined by its moments.
Similarly, the characteristic function \operatorname{E}[e^{i t X}] is not defined in the half complex plane and therefore it is not analytic in the origin. In consequence, the characteristic function of the log-normal distribution cannot be represented as an infinite convergent series.[5] In particular, its Taylor formal series \sum_{n=0}^\infty \frac{(it)^n}{n!}e^{n\mu+n^2\sigma^2/2} diverges. However, a number of alternative divergent series representations have been obtained[5][6][7][8]
A closed-form formula for the characteristic function \varphi(t) with t in the domain of convergence is not known. A relatively simple approximating formula is available in closed form and given by[9]
\varphi(t)\approx\frac{\exp\bigg(-\dfrac{W^2(t\sigma^2e^\mu)+2W(t\sigma^2e^\mu)}{2\sigma^2}\bigg)}{\sqrt{1+W(t\sigma^2e^\mu)}}
where W is the Lambert W function. This approximation is derived via an asymptotic method but it stays sharp all over the domain of convergence of \varphi.
The location and scale parameters of a log-normal distribution, i.e. \mu and \sigma, are more readily treated using the geometric mean, \mathrm{GM}[X], and the geometric standard deviation, \mathrm{GSD}[X], rather than the arithmetic mean, \mathrm{E}[X], and the arithmetic standard deviation, \mathrm{SD}[X].
The geometric mean of the log-normal distribution is \mathrm{GM}[X] = e^{\mu}, and the geometric standard deviation is \mathrm{GSD}[X] = e^{\sigma}.[10][11] By analogy with the arithmetic statistics, one can define a geometric variance, \mathrm{GVar}[X] = e^{\sigma^2}, and a geometric coefficient of variation,[10] \mathrm{GCV}[X] = e^{\sigma} - 1.
Because the log-transformed variable Y = \ln X is symmetric and quantiles are preserved under monotonic transformations, the geometric mean of a log-normal distribution is equal to its median, \mathrm{Med}[X].[12]
Note that the geometric mean is less than the arithmetic mean. This is due to the AM–GM inequality, and corresponds to the logarithm being convex down. In fact,
In finance the term e^{-\frac12\sigma^2} is sometimes interpreted as a convexity correction. From the point of view of stochastic calculus, this is the same correction term as in Itō's lemma for geometric Brownian motion.
The arithmetic mean, arithmetic variance, and arithmetic standard deviation of a log-normally distributed variable X are given by
respectively.
The location (\mu) and scale (\sigma) parameters can be obtained if the arithmetic mean and the arithmetic variance are known; it is simpler if \sigma is computed first:
For any real or complex number s, the sth moment of a log-normally distributed variable X is given by[1]
A log-normal distribution is not uniquely determined by its moments \operatorname{E}[X^k] for k\geq1, that is, there exists some other distribution with the same moments for all k.[1] In fact, there is a whole family of distributions with the same moments as the log-normal distribution.
The mode is the point of global maximum of the probability density function. In particular, it solves the equation (\ln f)'=0:
The median is such a point where F_X=0.5:
The arithmetic coefficient of variation \mathrm{CV}[X] is the ratio \frac{\mathrm{SD}[X]}{\mathrm{E}[X]} (on the natural scale). For a log-normal distribution it is equal to
Contrary to the arithmetic standard deviation, the arithmetic coefficient of variation is independent of the arithmetic mean.
The partial expectation of a random variable X with respect to a threshold k is defined as g(k) = \int_k^\infty \!x{\ln\mathcal{N}}(x)\, dx where {\ln\mathcal{N}}(x) is the probability density function of X. Alternatively, and using the definition of conditional expectation, it can be written as g(k)=\operatorname{E}[X|X>k] P(X>k). For a log-normal random variable the partial expectation is given by:
Where Phi is the normal cumulative distribution function. The derivation of the formula is provided in the discussion of this WorldHeritage entry. The partial expectation formula has applications in insurance and economics, it is used in solving the partial differential equation leading to the Black–Scholes formula.
The conditional expectation of a lognormal random variable X with respect to a threshold k is its partial expectation divided by the cumulative probability of being in that range:
A set of data that arises from the log-normal distribution has a symmetric Lorenz curve (see also Lorenz asymmetry coefficient).[13]
The harmonic H, geometric G and arithmetic A means of this distribution are related;[14] such relation is given by
Log-normal distributions are infinitely divisible,[15] but they are not stable distributions, which can be easily drawn from.[16]
The log-normal distribution is important in the description of natural phenomena. The reason is that for many natural processes of growth, relative growth rate is independent of size. This is also known as Gibrat's law, after Robert Gibrat (1904–1980) who formulated it for companies. It can be shown that a growth process following Gibrat's law will result in entity sizes with a log-normal distribution.[17] Examples include:
For determining the maximum likelihood estimators of the log-normal distribution parameters μ and σ, we can use the same procedure as for the normal distribution. To avoid repetition, we observe that
where by L we denote the probability density function of the log-normal distribution and by \mathcal{N} that of the normal distribution. Therefore, using the same indices to denote distributions, we can write the log-likelihood function thus:
Since the first term is constant with regard to μ and σ, both logarithmic likelihood functions, \ell_L and \ell_N, reach their maximum with the same \mu and \sigma. Hence, using the formulas for the normal distribution maximum likelihood parameter estimators and the equality above, we deduce that for the log-normal distribution it holds that
If \boldsymbol X \sim \mathcal{N}(\boldsymbol\mu,\,\boldsymbol\Sigma) is a multivariate normal distribution then \boldsymbol Y=\exp(\boldsymbol X) has a multivariate log-normal distribution[33] with mean
and covariance matrix
In the case that all X_j have the same variance parameter \sigma_j=\sigma, these formulas simplify to
A substitute for the log-normal whose integral can be expressed in terms of more elementary functions[36] can be obtained based on the logistic distribution to get an approximation for the CDF
This is a log-logistic distribution.
Central limit theorem, Carl Friedrich Gauss, Conjugate prior, YouTube, Variance
Statistics, Normal distribution, Probability density function, Integral, Survey methodology
Economics, Statistics, Cumulative distribution function, Survival analysis, Distribution of wealth
Statistics, Regression analysis, Survival analysis, Parametric statistics, Survey methodology
Probability density function, Probability distribution, Exponential distribution, Normal distribution, Social sciences |
Seeing that in the Chomsky Hierarchy Type 3 languages can be recognised by a DFA (which has no stacks), Type 2 by a DFA with one stack (i.e. a push-down automaton) and Type 0 by a DFA with two stacks (i.e. with one queue, i.e. with a tape, i.e. by a Turing Machine), how do Type 1 languages fit in...
Considering this pseudo-code of an bubblesort:FOR i := 0 TO arraylength(list) STEP 1switched := falseFOR j := 0 TO arraylength(list)-(i+1) STEP 1IF list[j] > list[j + 1] THENswitch(list,j,j+1)switched := trueENDIFNEXTIF switch...
Let's consider a memory segment (whose size can grow or shrink, like a file, when needed) on which you can perform two basic memory allocation operations involving fixed size blocks:allocation of one blockfreeing a previously allocated block which is not used anymore.Also, as a requiremen...
Rice's theorem tell us that the only semantic properties of Turing Machines (i.e. the properties of the function computed by the machine) that we can decide are the two trivial properties (i.e. always true and always false).But there are other properties of Turing Machines that are not decidabl...
People often say that LR(k) parsers are more powerful than LL(k) parsers. These statements are vague most of the time; in particular, should we compare the classes for a fixed $k$ or the union over all $k$? So how is the situation really? In particular, I am interested in how LL(*) fits in.As f...
Since the current FAQs say this site is for students as well as professionals, what will the policy on homework be?What are the guidelines that a homework question should follow if it is to be asked? I know on math.se they loosely require that the student make an attempt to solve the question a...
I really like the new beta theme, I guess it is much more attractive to newcomers than the sketchy one (which I also liked). Thanks a lot!However I'm slightly embarrassed because I can't read what I type, both in the title and in the body of a post. I never encountered the problem on other Stac...
This discussion started in my other question "Will Homework Questions Be Allowed?".Should we allow the tag? It seems that some of our sister sites (Programmers, stackoverflow) have not allowed the tag as it isn't constructive to their sites. But other sites (Physics, Mathematics) do allow the s...
There have been many questions on CST that were either closed, or just not answered because they weren't considered research level. May those questions (as long as they are of good quality) be reposted or moved here?I have a particular example question in mind: http://cstheory.stackexchange.com...
Ok, so in most introductory Algorithm classes, either BigO or BigTheta notation are introduced, and a student would typically learn to use one of these to find the time complexity.However, there are other notations, such as BigOmega and SmallOmega. Are there any specific scenarios where one not...
Many textbooks cover intersection types in the lambda-calculus. The typing rules for intersection can be defined as follows (on top of the simply typed lambda-calculus with subtyping):$$\dfrac{\Gamma \vdash M : T_1 \quad \Gamma \vdash M : T_2}{\Gamma \vdash M : T_1 \wedge T_2}...
I expect to see pseudo code and maybe even HPL code on regular basis. I think syntax highlighting would be a great thing to have.On Stackoverflow, code is highlighted nicely; the schema used is inferred from the respective question's tags. This won't work for us, I think, because we probably wo...
Sudoku generation is hard enough. It is much harder when you have to make an application that makes a completely random Sudoku.The goal is to make a completely random Sudoku in Objective-C (C is welcome). This Sudoku generator must be easily modified, and must support the standard 9x9 Sudoku, a...
I have observed that there are two different types of states in branch prediction.In superscalar execution, where the branch prediction is very important, and it is mainly in execution delay rather than fetch delay.In the instruction pipeline, where the fetching is more problem since the inst...
Is there any evidence suggesting that time spent on writing up, or thinking about the requirements will have any effect on the development time? Study done by Standish (1995) suggests that incomplete requirements partially (13.1%) contributed to the failure of the projects. Are there any studies ...
NPI is the class of NP problems without any polynomial time algorithms and not known to be NP-hard. I'm interested in problems such that a candidate problem in NPI is reducible to it but it is not known to be NP-hard and there is no known reduction from it to the NPI problem. Are there any known ...
I am reading Mining Significant Graph Patterns by Leap Search (Yan et al., 2008), and I am unclear on how their technique translates to the unlabeled setting, since $p$ and $q$ (the frequency functions for positive and negative examples, respectively) are omnipresent.On page 436 however, the au...
This is my first time to be involved in a site beta, and I would like to gauge the community's opinion on this subject.On StackOverflow (and possibly Math.SE), questions on introductory formal language and automata theory pop up... questions along the lines of "How do I show language L is/isn't...
This is my first time to be involved in a site beta, and I would like to gauge the community's opinion on this subject.Certainly, there are many kinds of questions which we could expect to (eventually) be asked on CS.SE; lots of candidates were proposed during the lead-up to the Beta, and a few...
This is somewhat related to this discussion, but different enough to deserve its own thread, I think.What would be the site policy regarding questions that are generally considered "easy", but may be asked during the first semester of studying computer science. Example:"How do I get the symme...
EPAL, the language of even palindromes, is defined as the language generated by the following unambiguous context-free grammar:$S \rightarrow a a$$S \rightarrow b b$$S \rightarrow a S a$$S \rightarrow b S b$EPAL is the 'bane' of many parsing algorithms: I have yet to enc...
Assume a computer has a precise clock which is not initialized. That is, the time on the computer's clock is the real time plus some constant offset. The computer has a network connection and we want to use that connection to determine the constant offset $B$.The simple method is that the compu...
Consider an inductive type which has some recursive occurrences in a nested, but strictly positive location. For example, trees with finite branching with nodes using a generic list data structure to store the children.Inductive LTree : Set := Node : list LTree -> LTree.The naive way of d...
I was editing a question and I was about to tag it bubblesort, but it occurred to me that tag might be too specific. I almost tagged it sorting but its only connection to sorting is that the algorithm happens to be a type of sort, it's not about sorting per se.So should we tag questions on a pa...
To what extent are questions about proof assistants on-topic?I see four main classes of questions:Modeling a problem in a formal setting; going from the object of study to the definitions and theorems.Proving theorems in a way that can be automated in the chosen formal setting.Writing a co...
Should topics in applied CS be on topic? These are not really considered part of TCS, examples include:Computer architecture (Operating system, Compiler design, Programming language design)Software engineeringArtificial intelligenceComputer graphicsComputer securitySource: http://en.wik...
I asked one of my current homework questions as a test to see what the site as a whole is looking for in a homework question. It's not a difficult question, but I imagine this is what some of our homework questions will look like.
I have an assignment for my data structures class. I need to create an algorithm to see if a binary tree is a binary search tree as well as count how many complete branches are there (a parent node with both left and right children nodes) with an assumed global counting variable.So far I have...
It's a known fact that every LTL formula can be expressed by a Buchi $\omega$-automata. But, apparently, Buchi automata is a more powerful, expressive model. I've heard somewhere that Buchi automata are equivalent to linear-time $\mu$-calculus (that is, $\mu$-calculus with usual fixpoints and onl...
Let us call a context-free language deterministic if and only if it can be accepted by a deterministic push-down automaton, and nondeterministic otherwise.Let us call a context-free language inherently ambiguous if and only if all context-free grammars which generate the language are ambiguous,...
One can imagine using a variety of data structures for storing information for use by state machines. For instance, push-down automata store information in a stack, and Turing machines use a tape. State machines using queues, and ones using two multiple stacks or tapes, have been shown to be equi...
Though in the future it would probably a good idea to more thoroughly explain your thinking behind your algorithm and where exactly you're stuck. Because as you can probably tell from the answers, people seem to be unsure on where exactly you need directions in this case.
What will the policy on providing code be?In my question it was commented that it might not be on topic as it seemes like I was asking for working code. I wrote my algorithm in pseudo-code because my problem didnt ask for working C++ or whatever language.Should we only allow pseudo-code here?... |
4:28 AM
@MartinSleziak Here I am! Thank you for opening this chat room and all your comments on my post, Martin. They are really good feedback to this project.
@MartinSleziak Yeah, using a chat room to exchange ideas and feedback makes a lot of sense compared to leaving comments in my post. BTW. Anyone finds a
\oint\frac{1}{1-z^2}dz expression in old posts? Send to me and I will investigate why this issue occurs.
@MartinSleziak It is OK, don't feel anything bad. As long as there is a place that comes to people's mind if they want to report some issue on Approach0, I am willing to come to that place and discuss. I am really interested in pushing Approach0 forward.
4:57 AM
Hi @WeiZhong thanks for joining the room. I will write a bit more here when I have more time. For now two minor things.
I just want to make sure that you know that the answer on meta is community wiki. Which means that various users are invited to edit it, you can see from revision history who added what to the question.
You can see in revision history that this bullet point was added by Workaholic: "I searched for
\oint $\oint$, but I only got results related to
\int $\int$. I tried for
\oint \frac{dz}{1-z^2} $\oint \frac{dz}{1-z^2}$ which is an integral that appears quite often but it did not yield any correct results."
So if you want to make sure that this user is notified about your comments, you can simply add @Workaholic. Any of the editors can be pinged.
And I noticed also this about one of the quizzes (I did not check whether some of the other quizzes have similar problem.)
I suppose that the quizzes are supposed to be chosen in such way that Approach0 indeed helps to find the question. I.e., each quiz was created with some specific question in mind, which should be among the search results. Is that correct?
I guess the quiz saying "Please list all positive integers $i,j,k$ such that $i^5 + j^6 = k^7$." was made with this question in mind: Find all positive integers satisfying: $x^5+y^6=z^7$.
However when I try the query from this quiz, I get completely different results.
I vaguely recall that I tried some quizzes, including this one, and they worked. (By which I mean that the answer to the question from the quiz could be found among the search results.) So is this perhaps due to some changes that were made since then? Or is that simply because when I tried the quiz last time, less questions were indexed. (And now that question is still somewhere among the results, but further down.)
I was wondering whether to add the word to my last message, but it is probably not a bug. It is simply that search results are not exactly as I would expect.
My impression from the search results is that not only x, y, z are replaced by various variables, but also 5,6,7 are replaced by various numbers.
5:40 AM
I think that this implicitly contains a question whether when searching for $x^5+y^6=z^7$ also the questions containing $x^2+y^2=z^2$ or $a^3+b^3=c^3$ should be matches.
For the sake of completeness I will copy here the part of quiz list which is relevant to the quiz I mentioned above:
"Q": "Please list all positive integers [imath]i,j,k[/imath] such that [imath]i^5 + j^6 = k^7[/imath]. ",
Hmm, I should have posted this as a single multiline message. But now I see that it is already too late to delete the above messages. Sorry for the duplication:
{ /* 4 */
"Q": "Please list all positive integers [imath]i,j,k[/imath] such that [imath]i^5 + j^6 = k^7[/imath]. ", "hints": [ "This should be easy, the only thing I need to do is do some calculation...", "I can use my computer to enumerate...", "... (10 minutes after) ...", "OK, I give up. Why borther list them <b>all</b>?", "Is that possible to <a href=\"#\">search it</a> on Internet?" ], "search": "all positive integers, $i^5 + j^6 = k^7$" },
"Q": "Please list all positive integers [imath]i,j,k[/imath] such that [imath]i^5 + j^6 = k^7[/imath]. ",
"hints": [
"This should be easy, the only thing I need to do is do some calculation...",
"I can use my computer to enumerate...",
"... (10 minutes after) ...",
"OK, I give up. Why borther list them <b>all</b>?",
"Is that possible to <a href=\"#\">search it</a> on Internet?"
],
"search": "all positive integers, $i^5 + j^6 = k^7$"
},
8 hours later…
1:19 PM
@MartinSleziak OK, I get it. So next time I would definitely reply to whom actually makes the revision.
@MartinSleziak Yes, remember the first time when we talk in a chat room? At that version of approach0, when a very limited posts have been indexed, you can actually get relevant posts on $i^5+j^6=k^7$. However, when I has enlarged the index (now almost the entire MSE), that piece of quiz (in fact, some quiz I selected earilier like [this one]()) does not find relevant posts anymore.
I have noticed that "quiz" does not work, but I am really lazy and have not investigated it. Instead of change that "quiz", I agree to investigate on why that relevant result has gone. As far as I can guess, there can be two reasons:
1) the crawler missed that one (I did the crawling in China, the network condition is not always good, sometimes crawler fails to fetch random posts and have to skip them) 2) there is a bug in approach0 that I am not aware
1) the crawler missed that one (I did the crawling in China, the network condition is not always good, sometimes crawler fails to fetch random posts and have to skip them)
2) there is a bug in approach0 that I am not aware
In order to investigate this problem, I am trying to find the original posts that you and me have seen (as you remember vaguely) which is relevant to $i^5+j^6=k^7$ quiz, if you find that post, please send me the URL.
@MartinSleziak It can be a bug, but I need to know if my index does contain a relevant post, so first let us find that post we think relevant. And I will have a look whether or not it is in my index, perhaps the crawler just missed that one. If it is in our index currently, then I should spend some time to find out the reason.
@MartinSleziak As for you last question, I need to illustrate it a little more. Approach0 will first find expressions that are
structurallyrelevant to query. So $x^5+y^6=z^7$ will get you $x^2+y^2=z^2$ or $a^3+b^3=c^3$, because they (more specifically, their operator tree representation) are considered structurally identical.
After filtering out these structurally relevant expressions, Approach0 will evaluate their symbolic relevance degree with regarding to query expression. Suppose $x^5+y^6=z^7$ gives you $x^2+y^2=z^2$, $a^3+b^3=c^3$ and also $x^5+y^6=z^7$, expression $x^5+y^6=z^7$ will be ranked higher than $x^2+y^2=z^2$ and $a^3+b^3=c^3$, this is because $x^5+y^6=z^7$ has higher symbolic score (in fact, since it has identical symbol set to query, it has the highest possible symbolic score).
I am sorry, I should use "and" instead of "or". Let me repeat the message before previous one below:
As for you last question, I need to illustrate it a little more. Approach0 will first find expressions that are
structurallyrelevant to query. So $x^5+y^6=z^7$ will get you both$x^2+y^2=z^2$ and$a^3+b^3=c^3$, because they (more specifically, their operator tree representation) are considered structurally identical.
Now the next things for me to do is to investigate some "missing results" suggested by you.
1. Try to find `\oint` expression in an old post (by old I mean at least 5 weeks old, so that it is possible been indexed)
1. Try to find `\oint` expression in an old post (by old I mean at least 5 weeks old, so that it is possible been indexed)
2:23 PM
Unfortunately, I fail to find any relevant old post in neither case 1 nor case 2 after a few tries (using MSE default search). So the only thing I can do now is to do an "integrated test" (see the new code I have just pushed to Github: github.com/approach0/search-engine/commit/…)
An "integrated test" means I make a minimal index with a few specified math expressions and search a specified query, and see if the results is expected. For example, the test case tests/cases/math-rank/oint.txt specified the query $\oint \frac{dz}{1-z^2}$, and the entire index has just two expressions: $\oint \frac{dz}{1-z^2}$ and $\oint \frac{dx}{1-x^2}$, and the expected search result is both these two expressions are HIT (i.e. they should appear in search result)
10 hours ago, by Martin Sleziak
I guess the quiz saying "Please list all positive integers $i,j,k$ such that $i^5 + j^6 = k^7$." was made with this question in mind: Find all positive integers satisfying: $x^5+y^6=z^7$.
2:39 PM
For anyone interested, I post the screenshot of integrated test results here: imgur.com/a/xYBD5
3:04 PM
For example like this: chat.stackexchange.com/transcript/message/32711761#32711761 You get the link by clicking on the little arrow next to the message and then clicking on "permalink".
I am mentioning this because (hypothetically) if Workaholic only sees your comment a few days later and then they come here to see what the message you refer to, they might have problem with finding it if there are plenty of newer messages.
However, this room does not have that much traffic, so very likely this is not going to be a problem in this specific case.
Another possible way to linke to a specific set of messages is to go to the transcript and then choose a specific day, like this: chat.stackexchange.com/transcript/46148/2016/10/1
Or to bookmark a conversation. This can be done from the room menu on the right. This question on meta.SE even has some pictures.
This is also briefly mentioned in chat help: chat.stackexchange.com/faq#permalink
3:25 PM
@MartinSleziak Good to learn this. I just posted another comment with permalink in that meta post for Workaholic to refer.
I just checked the index on server, yes, that post is indeed indexed. (for my own reference, docID = 249331)
2 hours later…
5:13 PM
Update: I have fixed that quiz problem. See: approach0.xyz/search/…
That is not strictly a bug, it is because I put a restriction on the number of document to be searched in one posting list (not trying to be very technical). I have pushed my new code to GitHub (see commit github.com/approach0/search-engine/commit/…), this change gets rid of that restriction and now that relevant post is shown as the 2nd search result.
2 hours later…
6:57 PM
« first day (2 days earlier) next day → last day (1080 days later) » |
Lesson overview
In this lesson, we'll prove when the value of the p-series \(\sum_{n=1}^∞\frac{1}{n^p}\) converges to a finite value and when its diverges to infinity. We'll show that when \(0<p<1\), the p-series diverges; and when \(p>1\), the p-series converges.
A graph of the function \(y=\frac{1}{x^p}\) is shown in Figure 1 and in the video.\(^{[1]}\) The first term in the p-series is \(\frac{1}{1^p}\) and is the area of the first rectangle. The second term in this p-series is \(\frac{1}{2^p}\) and is the area of the second rectangle. The sum of all the terms in the p-series is equal to the sum of all the areas of each rectangle and is the upper-Riemann estimate of the area under the curve \(y=1/x^p\) from \(x=1\) to \(x=∞\). The integral \(∫_1^∞\frac{1}{x^p}dx\) is equal to the actual area under the curve from \(x=1\) to \(x=∞\). For \(p>0\), the function \(y=\frac{1}{x^p}\) is always positive; therefore, the area underneath it, \(∫_1^∞\frac{1}{x^p}dx\), is always positive and the overestimate of that area, \(\sum_{n=1}^∞\frac{1}{n^p}\), is also always positive. Since the p-series is an overestimate of the actual area, it follows that \(∫_1^∞\frac{1}{x^p}dx<\sum_{n=1}^∞\frac{1}{n^p}\). The expression \(1+∫_1^∞\frac{1}{x^p}dx\) equals the area of the shaded regions (region 1 and region 2) in Figure 2. Since the p-series equals the area of region 1 plus the lower-Riemann estimate of region 2, it follows that \(\sum_{n=1}^∞\frac{1}{n^p}<1+∫_1^∞\frac{1}{x^p}dx\) and, thus,
$$∫_1^∞\frac{1}{x^p}dx<\sum_{n=1}^∞\frac{1}{n^p}<1+∫_1^∞\frac{1}{x^p}dx.\tag{1}$$
From Inequalities (1), we see that if \(∑_{n=1}^∞\frac{1}{n^p}\) converges (that is, equals a finite value), then since \(∫_1^∞\frac{1}{x^p}dx\) must be positive but less than \(\sum_{n=1}^∞\frac{1}{n^p}\) (which is a finite value if it converges) the integral \(∫_1^∞\frac{1}{x^p}dx\) must also be equal to some finite value and converge. We can also make the converse statement: if \(∫_1^∞\frac{1}{x^p}dx\) converges then \(1+∫_1^∞\frac{1}{x^p}dx\) must also equal a finite value and converge and thus the value of the p-series is “in between” two finite values (given by \(∫_1^∞\frac{1}{x^p}dx\) and \(1+∫_1^∞\frac{1}{x^p}dx\)) and must also converge. We can summarize both of these statement by saying,
$$\text{For }p>0,\text{ the p-series }\sum_{n=1}^∞\frac{1}{n^p}\text{ converges if and only if the integral }∫_1^∞\frac{1}{x^p}dx\text{ converges.}$$
If the integral \(∫_1^∞\frac{1}{x^p}dx\) diverges (that is, equals infinity), then since \(\sum_{n=1}^∞\frac{1}{n^p}>∫_1^∞\frac{1}{x^p}dx\) it follows that the p-series \(∑_{n=1}^∞\frac{1}{n^p}\) is also infinite and diverges. We can once again make the converse statement: if the p-series \(∑_{n=1}^∞\frac{1}{n^p}\) diverges, then since \(1+∫_1^∞\frac{1}{x^p}dx>∑_{n=1}^∞\frac{1}{n^p}\), it follows that \(1+∫_1^∞\frac{1}{x^p}dx\) must also be infinite. If we subtract one from an infinite value to get \(∫_1^∞\frac{1}{x^p}dx\), we’ll still end up with an infinite value. Thus,
$$\text{For }p>0,\text{ the p-series }\sum_{n=1}^∞\frac{1}{n^p}\text{ diverges if and only if the integral }∫_1^∞\frac{1}{x^p}dx\text{ diverges.}$$
In other words if the p-series converges/diverge we know that the integral converges/diverges, and vice versa. Let’s now see for what values of \(p\) (greater than zero) there is convergence and for what values of \(p\) there is divergence. We’ll prove that for values of \(p\) within the range \(0<p≤1\), both the integral and p-series converges and that for values of \(p\) within the range \(p>1\) both the integral and p-series diverges.
First, let’s see what happen if \(p=1\). We can rewrite the improper integral \(∫_1^∞\frac{1}{x^p}dx\) as \(∫_1^∞\frac{1}{x^p}dx=\lim_{m→∞}∫_1^m\frac{1}{x^p}dx\). If \(p=1\). then \(∫_1^m\frac{1}{x^p}dx=∫_1^m\frac{1}{x}dx=[ln|x| ]_1^m=ln|m|-ln(1)\). Since we must raise \(e\) to the power zero to get one, \(ln(1)=0\). Thus, \(∫_1^m\frac{1}{x}dx=ln|m|\) and \(∫_1^∞\frac{1}{x^p}dx=\lim_{m→∞}∫_1^m\frac{1}{x^p}dx=lim_{m→∞}ln|m|\). The natural logarithm \(ln|m|\) is the power \(e\) must be raised to in order to get \(|m|\); in other words, \(e^{ln|m|}=|m|\). Clearly, if \(m→∞\), the power of \(e\) (which is \(ln|m|\)) must also approach infinity. Therefore, if \(p=1\), then \(∫_1^∞\frac{1}{x^p}dx=lim_{m→∞}ln|m| =∞\) and the integral diverges; since the integral diverges it also follows that the p-series \(\sum_{n=1}^∞\frac{1}{n^p}\) diverges. In summary,
$$\text{If }p=1,\text{ the integral }∫_1^∞\frac{1}{x^p}dx\text{ and the p-series }\sum_{n=1}^∞\frac{1}{n^p}\text{ diverges.}$$
Let’s now see what happens to the integral \(∫_1^∞\frac{1}{x^p}dx\) and the p-series \(∑_{n=1}^∞\frac{1}{n^p}\) when \(0<p<1\) and when \(p>1\) but finite. Again we can rewrite the improper integral as \(∫_1^∞\frac{1}{x^p}dx=lim_{m→∞}∫_1^m\frac{1}{x^p}dx\). To solve the anti-derivative \(∫_1^mx^{-p}dx\) we raise the exponent by one and then divide by the new exponent; in other words, \(∫_1^mx^{-p}dx=[\frac{x^{1-p}}{1-p}]_1^m=\frac{m^{1-p}}{1-p}-\frac{1^{1-p}}{1-p}\). Since one raised to any power always equals one, \(1^{1-p}=1\) and \(∫_1^mx^{-p}dx=\frac{m^{1-p}}{1-p}-\frac{1}{1-p}\). If we take the limit as \(m→∞\) on both sides, we get \(∫_1^∞\frac{1}{x^p}dx=lim_{m→∞}∫_1^mx^{-p}dx=\frac{1}{1-p}lim_{m→∞}(m^{1-p}-1)\). If \(p>1\), then the power \(1-p\) of \(m\) is negative; you can also view this as \(\frac{1}{m^{\text{positive number}}}\). In this case, as \(m→∞\), the quantity \(m^{1-p}=\frac{1}{m^{-(1-p)}}\) will approach zero and the integral converges. As discussed earlier if the integral converges then the p-series must also converge. Thus,
$$\text{If }p>1,\text{ then the integral }∫_1^∞\frac{1}{x^p}dx\text{ and the p-series }∑_{n=1}^∞\frac{1}{n^p}\text{ converges.}$$
If \(0<p<1\), then the exponent \(1-p\) of \(m\) is positive. If \(m→∞\), then the quantity \(m^{1-p}\) (with its positive exponent) will also go to infinity. Thus, if \(0<p<1\), then the integral \(∫_1^∞\frac{1}{x^p}dx\) diverges; since the integral diverges it necessarily follows that the p-series \(∑_{n=1}^∞\frac{1}{n^p}\) also diverges. Thus,
$$\text{If }0<p<1,\text{ then the integral }∫_1^∞\frac{1}{x^p}dx\text{ and the p-series }∑_{n=1}^∞\frac{1}{n^p}\text{ diverges.}$$ |
Let $\Omega$ be a bounded domain in $\mathbb{R}^n$ with smooth boundary $\partial\Omega$. Consider the following initial-boundary value problem for the heat equation:
\begin{equation} \begin{cases} u_t=\Delta u\quad\quad\quad\;\; \text{in}\;\Omega\times(0,\infty) , \\ u=0 \quad\quad\quad\;\;\;\;\; \text{on}\;\partial\Omega\times(0,\infty), \\ u(x,0)=u_0(x),\quad x\in\Omega. \end{cases} \end{equation}
It is mentioned in one book that the solution $u(x,t)\rightarrow0$ as $t\rightarrow\infty$.
The method I can think of to prove this is the energy method by letting $H[u](t)=\frac{1}{2}\,\int_{\Omega}u^2(x,t)\,dx$. However, this method seems does not work. I can only show that $\int_{\Omega}u^2(x,t)\,dx\rightarrow0$ as $t\rightarrow\infty$ by using integration by parts and the Poincare inequality. How could I obtain the desired result? Some hints, please. |
Flatland Fidget Spinner
Freddy the Flatland Photographer wants to report on fun new things in Flatland for the Flatland Financial Times. He saw a really nice picture of a Fidget Spinner in Flatland Weekly, and he would like to publish a similar picture. Actually, he likes the picture so much he would like to use the exact same picture. Flatland copyright law forbits Freddy from copying the picture, so he decides to take an originalTM picture that looks the same. Can you help Freddy position his camera?
On Flatland Photography
Freddy has one really fancy 1MP camera, but also some cheaper cameras with a smaller number of pixels. Each pixel records three floating point numbers between $0$ and $1$, $(R,G,B)$, representing a colour. In the picture that he wants to reproduce, the Fidget Spinner is photographed on a $(0,0,0)$ black background. At most $40\% $ of the picture is fully black. The Fidget spinner is not “cut off”; the leftmost and rightmost pixel are always fully black. The arms of the Fidget Spinner have really pure colours; in counter clockwise order, they are $(1,0,0)$ red, $(0,1,0)$ green and $(0,0,1)$ blue. The arms are length one each, and all separated by equal angles ($\frac{2\pi }{3}=120^\circ $). The Fidget Spinner is located at the Origin Photography Studio, with its middle at coordinates $x=0$, $y=0$, and the tip of its blue arm at $x=-1$, $y=0$.
A flatland camera setup and the resulting picture
In the above example, a camera with $n=8$ pixels is used. This vintage camera has a viewing angle of $\theta =80^\circ $, thus one pixel covers a $10^\circ $ angle. The camera is placed at angle $\alpha $ (the counter clockwise angle between the positive $x$-axis and the center of the camera view). In the above example, one pixel covers both the red and blue arm of the Fidget Spinner. Within this pixel’s range, blue covers $6^\circ $ while red covers $4^\circ $. As a result, the $(R,G,B)$-color registered by this pixel is $\frac{4}{10}\cdot (1,0,0)+\frac{6}{10}\cdot (0,0,1)=(0.4,0.0,0.6)$, a shade of purple. Freddy is happy with the replica if the R, G and B components of all pixels are at most $0.1$ different from the original picture, so, for example, a slightly different purple $(0.31, 0.1, 0.7)$ is also fine.
Input
One line, containing the camera properties; the number of pixels $8 \leq n \leq 10^6$ and the viewing angle $\frac{2\pi }{8} \leq \theta \leq \frac{2\pi }{4}$ (in radians). Then the picture is given in $n$ lines each containing three floating point numbers $0\leq R,G,B \leq 1$ with $R+G+B\leq 1+ 10^{-10}$. The pixels are ordered in clockwise order. All floating point numbers in the input will have at most $10$ decimal digits.
Output
Print space separated numbers $x$, $y$, and $0\leq \alpha < 2\pi $: a position and rotation (in radians) of the camera that would (nearly) reproduce the input picture.
Sample Input 1 Sample Output 1 8 1.538 0 0 0 0 0 0 0 0 0.4502869372 0 0 1 0.3773483381 0 0.6226516619 1 0 0 0.7631122372 0 0 0 0 0 -1.5 -2 1.047
Sample Input 2 Sample Output 2 10 0.916 0 0 0 0 0 0 0 0 0 0 0 0.8760797241 0 0 1 0.251073 0.362151 0.386776 0 1 0 0 1 0 0 0.3465619503 0 0 0 0 1.6474 -2.565784 2.2 |
It looks like you're new here. If you want to get involved, click one of these buttons!
In this chapter we learned about left and right adjoints, and about joins and meets. At first they seemed like two rather different pairs of concepts. But then we learned some deep relationships between them. Briefly:
Left adjoints preserve joins, and monotone functions that preserve enough joins are left adjoints.
Right adjoints preserve meets, and monotone functions that preserve enough meets are right adjoints.
Today we'll conclude our discussion of Chapter 1 with two more bombshells:
Joins
are left adjoints, and meets are right adjoints.
Left adjoints are right adjoints seen upside-down, and joins are meets seen upside-down.
This is a good example of how category theory works. You learn a bunch of concepts, but then you learn more and more facts relating them, which unify your understanding... until finally all these concepts collapse down like the core of a giant star, releasing a supernova of insight that transforms how you see the world!
Let me start by reviewing what we've already seen. To keep things simple let me state these facts just for posets, not the more general preorders. Everything can be generalized to preorders.
In Lecture 6 we saw that given a left adjoint \( f : A \to B\), we can compute its right adjoint using joins:
$$ g(b) = \bigvee \{a \in A : \; f(a) \le b \} . $$ Similarly, given a right adjoint \( g : B \to A \) between posets, we can compute its left adjoint using meets:
$$ f(a) = \bigwedge \{b \in B : \; a \le g(b) \} . $$ In Lecture 16 we saw that left adjoints preserve all joins, while right adjoints preserve all meets.
Then came the big surprise: if \( A \) has all joins and a monotone function \( f : A \to B \) preserves all joins, then \( f \) is a left adjoint! But if you examine the proof, you'l see we don't really need \( A \) to have
all joins: it's enough that all the joins in this formula exist:
$$ g(b) = \bigvee \{a \in A : \; f(a) \le b \} . $$Similarly, if \(B\) has all meets and a monotone function \(g : B \to A \) preserves all meets, then \( f \) is a right adjoint! But again, we don't need \( B \) to have
all meets: it's enough that all the meets in this formula exist:
$$ f(a) = \bigwedge \{b \in B : \; a \le g(b) \} . $$ Now for the first of today's bombshells: joins are left adjoints and meets are right adjoints. I'll state this for binary joins and meets, but it generalizes.
Suppose \(A\) is a poset with all binary joins. Then we get a function
$$ \vee : A \times A \to A $$ sending any pair \( (a,a') \in A\) to the join \(a \vee a'\). But we can make \(A \times A\) into a poset as follows:
$$ (a,b) \le (a',b') \textrm{ if and only if } a \le a' \textrm{ and } b \le b' .$$ Then \( \vee : A \times A \to A\) becomes a monotone map, since you can check that
$$ a \le a' \textrm{ and } b \le b' \textrm{ implies } a \vee b \le a' \vee b'. $$And you can show that \( \vee : A \times A \to A \) is the left adjoint of another monotone function, the
diagonal
$$ \Delta : A \to A \times A $$sending any \(a \in A\) to the pair \( (a,a) \). This diagonal function is also called
duplication, since it duplicates any element of \(A\).
Why is \( \vee \) the left adjoint of \( \Delta \)? If you unravel what this means using all the definitions, it amounts to this fact:
$$ a \vee a' \le b \textrm{ if and only if } a \le b \textrm{ and } a' \le b . $$ Note that we're applying \( \vee \) to \( (a,a') \) in the expression at left here, and applying \( \Delta \) to \( b \) in the expression at the right. So, this fact says that \( \vee \) the left adjoint of \( \Delta \).
Puzzle 45. Prove that \( a \le a' \) and \( b \le b' \) imply \( a \vee b \le a' \vee b' \). Also prove that \( a \vee a' \le b \) if and only if \( a \le b \) and \( a' \le b \).
A similar argument shows that meets are really right adjoints! If \( A \) is a poset with all binary meets, we get a monotone function
$$ \wedge : A \times A \to A $$that's the
right adjoint of \( \Delta \). This is just a clever way of saying
$$ a \le b \textrm{ and } a \le b' \textrm{ if and only if } a \le b \wedge b' $$ which is also easy to check.
Puzzle 46. State and prove similar facts for joins and meets of any number of elements in a poset - possibly an infinite number.
All this is very beautiful, but you'll notice that all facts come in pairs: one for left adjoints and one for right adjoints. We can squeeze out this redundancy by noticing that every preorder has an "opposite", where "greater than" and "less than" trade places! It's like a mirror world where up is down, big is small, true is false, and so on.
Definition. Given a preorder \( (A , \le) \) there is a preorder called its opposite, \( (A, \ge) \). Here we define \( \ge \) by
$$ a \ge a' \textrm{ if and only if } a' \le a $$ for all \( a, a' \in A \). We call the opposite preorder\( A^{\textrm{op}} \) for short.
I can't believe I've gone this far without ever mentioning \( \ge \). Now we finally have really good reason.
Puzzle 47. Show that the opposite of a preorder really is a preorder, and the opposite of a poset is a poset. Puzzle 48. Show that the opposite of the opposite of \( A \) is \( A \) again. Puzzle 49. Show that the join of any subset of \( A \), if it exists, is the meet of that subset in \( A^{\textrm{op}} \). Puzzle 50. Show that any monotone function \(f : A \to B \) gives a monotone function \( f : A^{\textrm{op}} \to B^{\textrm{op}} \): the same function, but preserving \( \ge \) rather than \( \le \). Puzzle 51. Show that \(f : A \to B \) is the left adjoint of \(g : B \to A \) if and only if \(f : A^{\textrm{op}} \to B^{\textrm{op}} \) is the right adjoint of \( g: B^{\textrm{op}} \to A^{\textrm{ op }}\).
So, we've taken our whole course so far and "folded it in half", reducing every fact about meets to a fact about joins, and every fact about right adjoints to a fact about left adjoints... or vice versa! This idea, so important in category theory, is called
duality. In its simplest form, it says that things come on opposite pairs, and there's a symmetry that switches these opposite pairs. Taken to its extreme, it says that everything is built out of the interplay between opposite pairs.
Once you start looking you can find duality everywhere, from ancient Chinese philosophy:
to modern computers:
But duality has been studied very deeply in category theory: I'm just skimming the surface here. In particular, we haven't gotten into the connection between adjoints and duality!
This is the end of my lectures on Chapter 1. There's more in this chapter that we didn't cover, so now it's time for us to go through all the exercises. |
I want to solve the one-touch American call at $t = 0$ with level $B,$ maturity $T$ under the following assumption: $$d S= rSd t + \sigma SdW,\quad S_0<B.$$ We have following formula: $$V(S_0,0) = \left(\dfrac{B}{S_0}\right)^{2r/\sigma^2}N(d_2)+\dfrac{S}{B}N(d_1),$$ where we ormit $d_1,d_2.$
Is there any easy way or reference to obtain above formula? I solve as this the following way, but it is very complicated:
$$S(t)=S_0\exp^{\sigma W(t)+(r-\dfrac{1}{2}\sigma^2)t}=S_0\exp^{\sigma \hat{w}(t)},$$ where $\hat{w}(t) = W(t) + \alpha t,\ \alpha = \dfrac{1}{\sigma}(r-\dfrac{1}{2}\sigma^2).$
Then first passage time is given by $$\tau_m = \min\{t\geq 0; W(t) = B\}=\min\{t\geq 0; \hat{w} = \hat{B}\}$$ and we have the value $$V(S_0,0) = E[e^{-r\tau_m}\textrm{II}_{\{\tau_m\leq T\}}].$$ Here, $\textrm{II}$ is the indicator function. Then change the measure to make $\hat{W}(t)$ be a Brownian motion: $$\hat{E}[\dfrac{1}{\hat{Z(T)}}e^{-r\tau_m}\textrm{II}_{\{\tau_m\leq T\}}] = E[e^{-r\tau_m}\textrm{II}_{\{\tau_m\leq T\}}]$$ Here, $Z(T) = \exp(-\alpha W(t)-\dfrac{1}{2}\alpha^2)$ is the transition function.
And we know the joint CDF of first passage time $\tau_m$ and $\hat{W}(t)$ under $\hat{P}$ ,finally we solve this double integral. |
Prediction for Fitted Multiple Point Process Model
Given a fitted multiple point process model obtained by
mppm, evaluate the spatial trend and/or the conditional intensity of the model. By default, predictions are evaluated over a grid of locations, yielding pixel images of the trend and conditional intensity. Alternatively predictions may be evaluated at specified locations with specified values of the covariates.
Usage
# S3 method for mppmpredict(object, ..., newdata = NULL, type = c("trend", "cif"), ngrid = 40, locations=NULL, verbose=FALSE)
Arguments object
The fitted model. An object of class
"mppm"obtained from
mppm.
…
Ignored.
newdata
New values of the covariates, for which the predictions should be computed. If
newdata=NULL, predictions are computed for the original values of the covariates, to which the model was fitted. Otherwise
newdatashould be a hyperframe (see
hyperframe) containing columns of covariates as required by the model. If
typeincludes
"cif", then
newdatamust also include a column of spatial point pattern responses, in order to compute the conditional intensity.
type
Type of predicted values required. A character string or vector of character strings. Options are
"trend"for the spatial trend (first-order term) and
"cif"or
"lambda"for the conditional intensity. Alternatively
type="all"selects all options.
ngrid
Dimensions of the grid of spatial locations at which prediction will be performed (if
locations=NULL). An integer or a pair of integers.
locations
Optional. The locations at which predictions should be performed. A list of point patterns, with one entry for each row of
newdata.
verbose
Logical flag indicating whether to print progress reports.
Details
This function computes the spatial trend and the conditional intensity of a fitted multiple spatial point process model. See Baddeley and Turner (2000) and Baddeley et al (2007) for explanation and examples.
Note that by ``spatial trend'' we mean the (exponentiated) first order potential and not the intensity of the process. [For example if we fit the stationary Strauss process with parameters \(\beta\) and \(\gamma\), then the spatial trend is constant and equal to \(\beta\).] The conditional intensity \(\lambda(u,X)\) of the fitted model is evaluated at each required spatial location u, with respect to the response point pattern X.
If
locations=NULL, then predictions are performed at an
ngrid by
ngrid grid of locations in the window for each response point pattern. The result will be a hyperframe containing a column of images of the trend (if selected) and a column of images of the conditional intensity (if selected). The result can be plotted.
If
locations is given, then it should be a list of point patterns (objects of class
"ppp"). Predictions are performed at these points. The result is a hyperframe containing a column of marked point patterns where the locations each point.
Value
A hyperframe with columns named
trend and
cif.
If
locations=NULL, the entries of the hyperframe are pixel images.
If
locations is not null, the entries are marked point patterns constructed by attaching the predicted values to the
locations point patterns.
References
Baddeley, A. and Turner, R. Practical maximum pseudolikelihood for spatial point patterns.
Australian and New Zealand Journal of Statistics 42 (2000) 283--322.
Baddeley, A., Bischof, L., Sintorn, I.-M., Haggarty, S., Bell, M. and Turner, R. Analysis of a designed experiment where the response is a spatial point pattern. In preparation.
Baddeley, A., Rubak, E. and Turner, R. (2015)
Spatial Point Patterns: Methodology and Applications with R. London: Chapman and Hall/CRC Press. See Also Aliases predict.mppm Examples
# NOT RUN { h <- hyperframe(Bugs=waterstriders) fit <- mppm(Bugs ~ x, data=h, interaction=Strauss(7)) # prediction on a grid p <- predict(fit) plot(p$trend) # prediction at specified locations loc <- with(h, runifpoint(20, Window(Bugs))) p2 <- predict(fit, locations=loc) plot(p2$trend)# }
Documentation reproduced from package spatstat, version 1.55-1, License: GPL (>= 2) |
Say I have a family of hash functions that are weakly universal, i.e. the probability of two non-identical keys $x\neq y$ are mapped to the same hash-value is bounded by $k/m$ when I have $m$ bins and $k$ is some constant (I am mainly interested in $k=1$ and $k=2$):
$$\Pr_h\left[h(x)=h(y)\right] \leq \frac{k}{m} $$
I have $n$ keys mapped to $m$ bin and I keep a constant bound on the load factor $n/m\leq \alpha$.
With this setup, I am interested in putting a bound on how many collisions I allow in any bin, $K$, and rehashing whenever I have to insert a key in a bin that already holds $K$ elements. For a given bound on the load factor, $\alpha$, and a given bound on the maximum elements in any bin, $K$, I am interested in knowing the probability of picking a hash function that puts less than $K$ elements in each bin.
This is a setup I am interested in for teaching purposes, and I am aware that there are smarter solutions to hash tables, but I would like to handle this setup before moving on to them. In my lecture notes, I have already described rehashing and just want to take this to a probabilistic analysis place, so I want to know the expected number of rehashes needed in this setup.
I would actually also love to know if I could put a bound on the probe length in open addressing hashing and rehash when I reach that, but I guess this is a much harder question.
I guess that if I can fix the probability of hitting any given bin more than $K$ times at a constant $p$, I have a strategy for sampling my way out of the problem, where I expect to rehash $1/p$ times if the number of keys, $n$, and the number of bins, $m$, are fixed. But if I add keys one at a time, resize when $n/m>\alpha$, and rehash when I exceed the bound $K$ in any bin, do I have any guarantee that I will not have to rehash for every new element added after number $K+1$? I mean, my rehash could put $K$ elements in the largest bin and the next key could then trigger a rehash. An advisory, of course, cannot pick the keys so this happens, if I sample random functions, but do I have any probabilistic bounds that guarantee that I can rehash to keep probe-lengths bounded by a constant and still not rehash so often that I break the amortised constant running time on the table operations? |
Matthew Needham Dr M D Needham Research interests
I am a Chancellors Fellow in the School of Physics and Astronomy at the University of Edinburgh. My main research interest is in indirect searches for physics beyond the Standard Model of Particle Physics in the decays of beauty mesons and neutrinos.
One of the puzzling features of the Universe is the absence of anti-matter. To explain this inbalance processes such as CP violation that distinguish matter and anti-matter are needed. Though CP violation is allowed in the Standard Model the level is much smaller than that needed to explain the matter-antimatter inbalance. Models of physics beyond the Standard Model (e.g. Supersymmetry or Little Higgs model) naturally lead to additional sources of CP violation. By comparing measurements of CP violation in the decays of beauty and charm quarks to the Standard Model predictions we aim to uncover the effect of new physics.
My work so far focussed on the analysis of data taken by the LHCb experiment. LHCb has collected the world's largest sample of heavy flavour decays and is able to perform a wide range of precision measurements. My main focus is search for the effect of new physics in the decays of Bs mesons.
Heavy flavour decays provide a rich environment to probe Quantum Chromodynamics. I am involved in studies related to the production and spectroscopy of charmonium resonances and searches for exotic states such as tetraquarks.
Another exciting possiblity to explain the absence of anti-matter is CP violation in the neutrino sector. The observation of neutrino oscillations makes this a viable possibility and two newlong baseline experiments (HyperK and DUNE) are planned for the 2020s that will search for this. I am a member of the HyperK collaboration and work on studies of the simulation and reconstruction of a new planned intermediate detector (TITUS).
Supervision of MPhys student projects related to LHCb data analysis Senior Honours projects measuring the muon lifetime using cosmic rays Junior Honours Research Methods advisor SUPA flavour and neutrino physics course organizer Junior Honours laboratory demonstrator Junior Honours DAH course demonstrator
Matthew has featured in the following recent School news stories:
Recent publications Observation of a Narrow Pentaquark State, Pc (4312)+, and of the Two-Peak Structure of the Pc (4450)+ DOI, Physical Review Letters, 122, 22 Observation of $B^0_{(s)} \to J/\psi p \overline{p}$ decays and precision measurements of the $B^0_{(s)}$ masses DOI, Physical Review Letters, 122, p. 191804 Search for $CP$ violation in $D_s^+\to K_S^0 \pi^+$, $D^+\to K_S^0 K^+$ and $D^+\to \phi \pi^+$ decays DOI, Physical Review Letters, 122, p. 191803 Journal of High Energy Physics, 1905 Physical Review Letters, 122, 19, p. 191801 |
The summer gets off to a flying start, with three property testing papers, spanning differential privacy, distribution testing, and juntas in Gaussian space!
On closeness to \(k\)-wise uniformity, by Ryan O’Donnell and Yu Zhao (arXiv) In this paper, the authors consider the following structural question about probability distributions over the Boolean hypercube \(\{-1,1\}^n\): ” what is the relation between total variation distance \(\delta\) to \(k\)-wise independence, and bound \(\varepsilon\) on the Fourier coefficients of the distribution on degrees up to \(k\)?”
While this question might seem a bit esoteric at first glance, it has direct and natural applications to derandomization, and of course to distribution testing (namely, to test \(k\)-wise independence and its generalization, \((\varepsilon, k)\)-wise independence of distributions over the hypercube).
The main contribution here is to improve (by a \((\log n)^{O(k)}\) factor) the bounds on \(\delta (n,k,\varepsilon)\) over the previous work by Alon et al. [AAK+07], making them either tight (for \(k\) even) or near-tight. To do so, the authors introduce a new hammer to the game, using linear programming duality in the proof of both their upper and lower bounds.
Property Testing for Differential Privacy, by Anna Gilbert and Audra McMillan (arXiv) Differential privacy, as introduced by Dwork et al., needs no introduction. Property testing, especially on this website, needs even less. What about a combination of the two? Namely, given black-box access to an algorithm claiming to perform a differentially private computation, how to test whether this is indeed the case?
Introducing and considering this quite natural question for the first time
[01/31/2019: see erratum below], this work shows, roughly speaking, that testing differential privacy is hard. Specifically, they show that for many notions of differential privacy (pure, approximate, and their distributional counterparts), testing is either impossible or possible but not with a sublinear number of queries (even when the tester is provided with side information about the black-box). In other terms, as the authors put it: trusting the privacy of an algorithm “requires compromise by either the verifier or algorithm owner” (and, in the latter case, even then it’s not a simple matter). Is your data low-dimensional?, by Anindya De, Elchanan Mossel, and Joe Neeman (arXiv) (Well, is it?) To state it upfront, I am biased here, as it is a problem I was very eager to see investigated to begin with. To recap, the question is as follows: “given query access to some unknown Boolean-valued function \(f\colon \mathbb{R}^n \to \{-1,1\}\) over the high-dimensional space \(\mathbb{R}^n\) endowed with the Gaussian measure, how can one check whether \(f\) only depends on “few” (i.e., \(k \ll n\)) variables?”
This is the continuous, Gaussian version of the (quite famous) junta testing problem, which has gathered significant attention over the past years
(the Gaussian version has, to the best of my knowledge, never been investigated). Now, the above formulation has a major flaw: specifically, it is uninteresting. In Gaussian space*, who cares about the particular basis I expressed my input vector in? So a more relevant question, and that that the authors tackle, is the more robust and natural one: “given query access to some unknown Boolean-valued function \(f\colon \mathbb{R}^n \to \{-1,1\}\) over the high-dimensional space \(\mathbb{R}^n\) endowed with the Gaussian measure, how can one check whether \(f\) only depends on a low-dimensional linear combination of the variables?” Or, put differently, does all the relevant information for \(f\) live in a low-dimensional subspace?
De, Mossel, and Neeman show how can do this, non-adaptively, with a query complexity independent of the dimension \(n\) (hurray!), but instead polynomial in \(k\), the distance parameter \(\varepsilon\), and the
surface area \(s\) of \(f\). And since this last parameter may seem quite arbitrary, they also proceed to show that a polynomial dependence in this \(s\) is indeed required. *”In Gaussian space, no one can hear you change basis?” Erratum (01/31/2019): It was brought to our attention that our overview of “Property Testing for Differential Privacy” was overlooking a key part of the literature; specifically, a work of Dixit et al. (TCC 2013) which introduces this very question. From the abstract:
How does one ensure that [those third-parties] have implemented their algorithms in a way which meet the specifications of the privacy requirements? […] In this work, we propose a new approach to the above problem which we call privacy testing. We do this by formulating the above problem in the well-studied framework of property testing. |
The geometry of multiplying complex numbers is usually a depiction, not a definition. I don't understand why you claim it's "not defined well," though. It's perfectly defined. If you need any clarification on how it works, you can always ask a question. Also, please do be aware that the geometry of multiplying complex numbers is much richer than just multiplying by $i$. Read and learn some more about them and you will find how the geometry works with polar form. Every complex number can be written as $z=re^{i\theta}$, where geometrically $r$ is the absolute value and the phase $\theta$ describes the angle between the positive real axis and the number (as a vector). Then multiplying two numbers multiplies their absolute values and adds their phases.
There is a paradigm shift that mathematics has made historically, and that you might still need to make on your journey: math isn't just discovery, it's also part invention. With science and technology, whenever our world wants for a device or machine that does a certain job, we make one to do it. Math can be the same way! If you want a number system with certain properties, you
construct it. We wanted a number system containing $\Bbb R$ in which we can solve all polynomial equations, and so we simply made one. The fact that everything works out perfectly is proof enough that we are talking about a real thing. Whether we want to still call this real thing a number system is an important and "morally correct" choice we've made based off of all of the facts.
One can always factor polynomials down to quadratic and linear factors over $\Bbb R$, the one issue with going further and factoring all the way occurs whenever we try to factor by completing the square: negative numbers are not squares in $\Bbb R$. So $x^2=-1$ is the
prototypical equation to introduce a new solution to, call it $i$. As it happens, once you introduce $i$, every number in the new number system looks like $a+bi$, since higher powers of $i$ can simplified. And yes this definition is general enough - as a result of this definition, we can add/subtract/multiply/divide complex numbers and we can solve any polynomial equation, just as we wanted.
A person could ask, "but what if some polynomials simply don't have roots"? One could analogously ask, "but what if there really is no game with the rules of chess? What if we're incorrect in believing that knights must move two spaces and then one orthogonal?" But there is such a game with such-and-such rules: we made it. There
can be potential issues in math of constructing a new structure, and in the process in order to keep everything consistent you end up collapsing the structure into something completely trivial and degenerate, essentially because the thing you wanted was simply not meant to be. Or, you may want to construct a new structure with a set of properties, but there it isn't possible to get all of the properties satisfied simultaneously. Gladly, the case of complex numbers is not one of those situations.
Note "assuming properties of square roots extend to negatives" is a subpar way of thinking about it; instead we should be thinking about it as "we wanted a number system in which we could still follow the usual rules of arithmetic but successfully solve more polynomial equations, and we were able to construct one." Sometimes we introduce facts like $0!=1$ or $x^0=1$ as "assuming the properties of factorials and powers extend to $0$," but a better way of thinking about it is that we want to define it in such a way that the properties do extend, because wouldn't that be nice.
After we construct $\Bbb C$ we are able to determine, as a matter of investigation, what properties still carry over. The rules of arithmetic for addition, subtraction, multiplication and division still work, and the rules for integer exponents still work, for instance. The rules for radicals and rational exponents we know for fact do not actually still work without caveats. For instance the familiar rule $\sqrt{ab}=\sqrt{a}\sqrt{b}$ is not true for all real numbers $a$ and $b$, so not only are we not assuming that properties of square roots carry over willy-nilly, namby-pamby, we know that they don't.
There are other ways of defining and constructing $\Bbb C$ of course.
Given any quadratic polynomial $ax^2+bx+c$ that cannot be factored with real numbers, we can define $\Bbb C$ as the quotient ring $\Bbb R[x]/(ax^2+bx+c)$. If you study some abstract algebra, you will find polynomial rings and quotient rings are very useful in constructing rings (generalized number systems) with desired properties. In particular, what this construction does is "adjoin" a root of the polynomial $ax^2+bx+c$. One major drawback here is that there is no geometry, no notion of the sizes of or distances between complex numbers.
One can also define $\Bbb C$ as a subset of the algebra $M_2(\Bbb R)$ of $2\times 2$ real matrices comprised of elements of the form $(\begin{smallmatrix}\phantom{-}a & b \\ -b & a\end{smallmatrix})$ . Every such matrix can be written as $a(\begin{smallmatrix}1 & 0 \\ 0 & 1\end{smallmatrix})+b (\begin{smallmatrix}\phantom{-}0 & 1 \\ -1 & 0 \end{smallmatrix})$, which is no coincidence: $(\begin{smallmatrix}1 & 0 \\ 0 & 1\end{smallmatrix})$ is the identity and $(\begin{smallmatrix}\phantom{-}0 & 1 \\ -1 & 0 \end{smallmatrix})$ is a $90^\circ$ rotation. So this is very close to the usual, familiar understanding of complex numbers and has geometry in it.
Finally, if you want an elegant definition through properties instead of an at-first-artificial-seeming construction, consider the fact that $\Bbb C$ is
the smallest uncountable algebraically closed field containing the integers, or equivalently the smallest algebraically closed field containing the reals. This is a useful and correct way of thinking about what the $\Bbb C$ "is," but it's even more pedagogically and practically flawed than the first definition: not only do we not have any geometry or distances, we don't even know what the elements of $\Bbb C$ look like or how they interact in this definition! |
Sorry if this seems like a dumb question, but what what type of logarithm is $\log$ in Wikipedia articles? Cheers.
I think it depends on the situation. For a mathematician, $\log n$ probably means the natural logarithm. For a computer scientist, $\log n$ probably denotes the base $2$ logarithm, etc.
I think sometimes people write $\ln x$ for the natural logarithm, $\lg x$ for the base $2$ logarithm and $\log x$ for the base $10$ logarithm.
For programming languages such as Matlab, Python, Julia, $\log x$ means the natural logarithm of $x$.
Do you have a specific Wikipedia article that contains $\log$?
It depends entirely on context. In mathematics as a whole, $\log$ usually denotes the natural logarithm (base $\mathrm{e}$). In computer science, the situation isn't as clean because we often want to talk about things like a number of bits or the height of a binary tree and, in those cases, the most natural (*baddum-tsh*) logarithm to use is base-$2$. Some authors use $\ln$ and $\lg$ to explicitly denote base-$\mathrm{e}$ and base-$2$ logs, respectively; many authors just write $\log$ to mean whatever base is most appropriate for the situation.
Of course, Wikipedia is written by a huge number of unconnected authors, so you can't assume any consistent convention is used. Indeed, a careful Wikipedia editor would make sure that $\log$ is used consistently within an article, but you can't rely on everyone being careful.
It usually doens't matter. Changing the base of a logarithm just multiplies the answer by a constant factor because, for all $a,b>1$ and all $x>0$,$$\log_a x = \frac{\log_b x}{\log_b a}\,.$$In many situations, this multiplicative factor of $1/\log_b a$ isnt at all important. In the context of a big-$O$, big-$\Theta$, big-$\Omega$ etc bound, $O(\log n)$ means the same thing regardless of the base, since big-$O$ disregards constant factors. (But note that, e.g., $2^{\log_2 x}=x$, whereas $2^{\log_3 x}=x^{1/\log_2 3}\approx x^{1.58}$ so there are situations in which the base makes a big difference.) Your specific example. In a comment, you say you're specifically interested in the page about tf-idf. I know nothing about document retrieval but my understanding from skim-reading that article is as follows. We're interested in ranking documents according to their relevance, and the relevance of a document is given by a score of the form $x\log y$, where $x$ and $y$ are statistics of the document. Since changing the base of the logarithm just multiplies all the scores by a constant, it doesn't change the ranking order. Therefore, it doesn't matter what base you use for the logarithm in this situation. |
Radio astronomy
When we view the Earth or the night sky, we
can see plants, people, cars and cities, your coffee mug, the Moon, the stars. This is all we have ever seen with our two eyes sine we've lived our whole lives seeing things with visible light. But a bee can see something that we cannot see. They see light—invisible to us—bouncing off of flowers. We have built abiotic, mechanical "eyes" which can see a very bright glow covering the entire night sky. But when we look at the night sky using just our two eyes, all we see is darkness. Just like how the bee's eyes were seeing light invisible to us, the same is true about these mechanical eyes. We can see some things, but not everything else. What's astonishing is that we've spent our whole lives seeing only the tiniest fraction of what these superior mechanical eyes can see.
The advent of infrared and radio astronomy has given us a better pair of eyes so that now, like the bee, we can see more. We no longer see a smattering of a few thousands stars set against a black canvas when we view the night sky; instead, the smattering is exploding stars and quasars emanating immense jets of light from the centers of galaxies and the canvas is a bright, colorful glow.
Around the 1950s, astronomers saw the "radio light" emanating from a far off object (which they called 3C 273) which, to them, looked like a star. At the time, we didn't have techniques like radio interferometry which would have allowed us to see every little detail of a baseball (where the baseball is the source emitting a little bit of radio waves) on the Moon. The best that we could do is to narrow down the location of this "star" (radio source) to some small region of space. Although we knew that the radio waves we were detecting must have had been coming from this tiny patch of the sky, the problem was that there were many stars located in this patch of sky—any one of them could've been responsible for the radio waves we were getting.
Astronomers got around this problem by using a clever trick. As the Moon was in transit across this tiny patch of sky, we could pinpoint very precisely the spot in that tiny patch where the Moon was blocking the radio waves. This gave us an even smaller patch of sky to look for the radio source. After doing this, astronomers pointed the most powerful visible-light telescope at the time (the 200-inch Hale telescope at the Palomar Observatory) towards this smaller patch. They spotted a single object within that patch which looked like a star—this must have been the radio source they were detecting the radio waves from. (When the Moon passed by all the other stars, the radio waves didn't get blocked out.)
Spectra of 3C 273
The 200-inch telescope had a spectroscope incorporated into it which allowed a Caltech professor named Maarten Schmidt to use spectroscopy to analyze this "star's" (at least, what we initially thought was a star) spectrum. The spectrum that he recorded came as a surprise. The peak emission lines had the same signature as hydrogen, except there was one big difference: all of the lines (each one of them) were redshifted by 15.8%. This means that 3C 273 must have been moving away from us at an extraordinary speed due to Hubble expansion (that is, the expansion of space in our universe). Given the redshift (which we know is \(Δλ=0.158\)), we can use Doppler's equation to determine the recessional velocity \(V\). If we then plug that value of \(V\) into Hubble's law (which we discussed in this article in more detail), we find that the distance \(D\) that 3C 273 must be away from us is roughly 2 billion lightyears.
What was so mysterious about 3C 273 was not its enormous redshift or how far away it was. When looked at through the telescope, 3C 273 looked point-like similar to what a star looked like; it could not, for example, have had been a galaxy since galaxies looked like extended objects with some shape (as opposed to a point). What was so mysterious about this "star," 3C 273, was that it was hundreds of times brighter than entire galaxies which were roughly just as far away. How could a single lonely star outshine an entire galaxy composed of billions of stars? This was the big mystery. Clearly, 3C 273 was no star. And so, astronomers gave this peculiar object a new name. Thy called it a
quasar.
From the spectral lines we talked about earlier, we saw that the spectrum of this quasar included emission lines associated with hydrogen that were redshifted by an amount \(Δλ=0.158\). If we plug this value into Doppler's equation, we can determine that the quasar is moving away from us at about 16% the speed of light (due to the expansion of space). If all of the atoms comprising the object were moving away from us, as a whole, all at 16% the speed of light, then we'd except the quasar's spectrum to be discrete where all of the emission lines are identical to those of their corresponding element, except all redshifted by 15.8%. But if you look at the spectrum in the graph in Figure 2, you'll see that it's continuous (not discrete): that is, we're detecting all wavelengths of light. This means that the atoms comprising 3C 273 must be moving relative to one another. The question is: how fast?
We know that if all of these atoms had zero relative motion, their redshjift would just be \(Δλ=0.158\). Skipping over the nitty gritty details, we can deduce from the graph the deviations (which we'll call \(Δλ_p\)) of these atoms redshifts from 0.158. If we put \(Δλ_p\) into Doppler's equations, that gives us there relative velocities. What we find is that the gaseous and dust particles composing this object must by revolving around the center of 3C 273 at roughly \(6,000 km/s\). That is an astonoshing speed!
Calculating the size and mass of 3C 273
What could be causing them to move so fast? To answer that question, we'll have to start off by determining the rough diameter of the distribution of these particles. Why you need the diameter to answer that question is something we'll get to in the next paragraph. When we view this quasar through our telescopes, we can see it flickering. Its brightness dims, shines, dims, shines, etc. Suppose that the diameter of 3C 273 was one light-year. As the object "shined and got brighter, we would expect that it would take one year to see the entire objet brighten up. The light emitted by the mass comprising the quasar which is closest to us (which I'll call \(m_1\)) will take \(~2×10^9\text{ years}\) to get to us whereas the mass comprising the quasar which is farthest away from us will take \(~(2×10^9+1)\text{ years}\) to get to us. But the fact that when we observe this quasar, it takes about a month to see it brighten means that (using this argument) the quasar must be roughly \(1\) light-month in diameter.
That might sound very big, but actually it's comparatively small. The typical distances between stars in the Milky Way is several light-years—an unimaginably small fraction of the whole extent of the galaxy. 3C 273 sits in the center of a giant elliptical galaxy; it must occupy an incredibly small portion of this galaxy.
We now return to our original question: what is causing the gas and dust to move so fast? On the scale of light-years and beyond, it is just one force—the force of gravity—which is responsible for object's having the motion that they have. The effect of gravity caused by a large mass determines the motion of an object (i.e. gas particle) orbiting that mass. This problem is analogous to how we'd determine the mass of the Milky Way from the speeds of stars orbiting in it. In both cases, we use Newton's law of gravity and second law of motion.
The overwhelming majority of the mass comprising 3C 273 must be comprised at its center and we can therfore treat the mass of 3C 273 as a single point mass \(M\). The net force \(\sum{\vec{F}}\) acting on a gaseous particle orbiting around this mass will be just the gravitational force:
$$\sum{\vec{F}}=F_g.\tag{1}$$
Substituting for \(\sum{\vec{F}}\) and \(F_g\), we have
$$ma_{\text{gas particle}}=G\frac{(M_{\text{3C 273}})(m_{\text{gas particle}})}{r^2}.\tag{2}$$
This gas particle will rotate around the point mass \(M\) in roughly a circle and, therefore, \(a_{\text{gas particle}}=\frac{v^2_{\text{gas particle}}}{r}\). Substituting this into the equation above, we get
$$m_{\text{gas particle}}\frac{v^2}{r}=G\frac{(M_{\text{3C 273}})(m_{\text{gas particle}})}{r^2}.\tag{3}$$
Let's cancel the mass \(m\) on both sides and rearrange the equation in terms of \(M_{\text{3C 273}}\) to get
$$M_{\text{3C 273}}=\frac{rv_{\text{gas particle}}^2}{G}.\tag{4}$$
We know that these gas particles are traveling at speeds of about \(6,000km/s=6×10^6m/s\) and the radius \(r\) of their orbits is about \(1\text{ light-month}=7.88×10^{14}m\). If we substitute the results into Equation (4), we find that the mass \(M_{\text{3C 273}}\) of the quasar is roughly \(2×10^8\) solar masses.
What is a quasar?
The only way that \(2×10^8\) solar masses could be crammed into a space of only \(1\) light-month in diameter (or less) is if this object were a blackhole. The typical blackhole in a galaxy was formed by all the mass of the inner core of a supergiant star collapsing into a single point of zero size. Such black holes—which are fairly ubiquitous in even just the Milky Way—are called stellar black holes. Their mass is identical to that of the stellar core before it collapsed. Since the largest supergiant stars have an upper size limit of roughly a few dozen solar masses, we do not expect the mass of a commonplace stellar black hole to be much more massive than a few dozen solar masses. This is expected even if we account for the fact that black holes can grow and get more massive by sucking up infalling matter since interstellar space is mostly empty. The black hole associated with 3C 273 is an entirely different kind of black hole than a stellar black hole and is called a supermassive backhole.
The quasar 3C 273 is therefore an object which consists of an immense disk of gas and dust with a supermassive black hole in the center of this disk. Earlier, we asked the question: how could it be that this object can outshine an entire galaxy? The short answer is: because the tidal forces exerted on the accretion disk by the blackhole are so extraordinary. The sections of the quasar's acretion disk which are closer to the blackhole move with greater tangential speeds than the out sections. This causes portions of the acretion disk to rub against each other and exert frictional forces on each other. The frictional forces causes the gas to heat up to hundreds of millions of degrees and to glow brighter than entire galaxies.
Near the center of the accretion disk where the supermassive black hole is, gaseous particles are ejected at an angle perpendicular to the accretion disk and galactic plane from both sides (the "top" and "bottom") of the accretion disk illustrated in Figure 4. The gas is ejected so energetically that it coalesces into strands of superheated plasma which can extend up to millions of light-years away from the galaxy. That, to me, is somewhat remarkable that "thin" strands of gaseous plasma extend for millions of light-years through the mostly empty depths of intergalactic space.
Quasars and cosmic evolution
Quasars tell a story of cosmic evolution. And telescopes are time machines: for example, if we are looking at a star one billion light-years away, we are seeing it as it was a billion years ago.
The Sloan Digital Survey mapped out the locations of about 2,000,000 galaxies and 400,000 quasars as shown in Figure 5. According to this survey and others, nearly all the quasars in the heavens are billions of light-years away. We find almost no quasars closer to us and less far away. The observations indicate that there were many hundreds of thousands of quasars in the younger universe; but in the older universe (the one we're in) there are hardly any quasars at all. This means that the cosmos must have had changed since then.
Let's try to think about how this cosmic evolution unfolded. We know from the Sloan Digital Survey that the centers of many young galaxies (400,000 of them) were active: which is just a fancy way of saying that their central supermassive black-holes were still busy sucking up matter from their surrounding accretion disk and, also, still spitting out large streams of superheated plasma into the background of intergalactic space. But the fact that, despite the ubiquity of quasars we see in the younger universe, we see hardly any quasars at all in the older universe implies the following: all of the matter comprising the accretion disks of old, primordial quasars must have had gotten swallowed by their supermassive black holes. Our Milky Way is just one example (among all the countless nearby old galaxies) of a galaxy which, despite having a central supermassive black hole, is no longer active.
This article is licensed under a CC BY-NC-SA 4.0 license.
References
1. “Hale Telescope, Palomar Observatory.”
Jet Propulsion Laboratory: Institute of Technology, 14 April, 2010, https://www.jpl.nasa.gov/spaceimages/details.php?id=PIA13033.
2. Tyson, Neil deGrasse., et al. “Quasars and Supermassive Black Holes.”
Welcome to the Universe: An Astrophysical Tour, Princeton University Press, 2017, pp. 241–253.
3. ESA/Hubble & NASA. “Best image of bright quasar 3C 273.”
Hubble Space Telescope, ESA/Hubble & NASA, 18 November 2013, http://www.spacetelescope.org/images/potw1346a/.
4. Futurism. “Quasar engines.”
Futurism, 21 November 2014, https://futurism.com/rotational-axes-quasars-aligned/. |
Consider a $C^k$, $k\ge 2$, Lorentzian manifold $(M,g)$ and let $\Box$ be the usual wave operator $\nabla^a\nabla_a$. Given $p\in M$, $s\in\Bbb R,$ and $v\in T_pM$, can we find a neighborhood $U$ of $p$ and $u\in C^k(U)$ such that $\Box u=0$, $u(p)=s$ and $\mathrm{grad}\, u(p)=v$?
The tog is a measure of thermal resistance of a unit area, also known as thermal insulance. It is commonly used in the textile industry and often seen quoted on, for example, duvets and carpet underlay.The Shirley Institute in Manchester, England developed the tog as an easy-to-follow alternative to the SI unit of m2K/W. The name comes from the informal word "togs" for clothing which itself was probably derived from the word toga, a Roman garment.The basic unit of insulation coefficient is the RSI, (1 m2K/W). 1 tog = 0.1 RSI. There is also a clo clothing unit equivalent to 0.155 RSI or 1.55 tog...
The stone or stone weight (abbreviation: st.) is an English and imperial unit of mass now equal to 14 pounds (6.35029318 kg).England and other Germanic-speaking countries of northern Europe formerly used various standardised "stones" for trade, with their values ranging from about 5 to 40 local pounds (roughly 3 to 15 kg) depending on the location and objects weighed. The United Kingdom's imperial system adopted the wool stone of 14 pounds in 1835. With the advent of metrication, Europe's various "stones" were superseded by or adapted to the kilogram from the mid-19th century on. The stone continues...
Can you tell me why this question deserves to be negative?I tried to find faults and I couldn't: I did some research, I did all the calculations I could, and I think it is clear enough . I had deleted it and was going to abandon the site but then I decided to learn what is wrong and see if I ca...
I am a bit confused in classical physics's angular momentum. For a orbital motion of a point mass: if we pick a new coordinate (that doesn't move w.r.t. the old coordinate), angular momentum should be still conserved, right? (I calculated a quite absurd result - it is no longer conserved (an additional term that varies with time )
in new coordinnate: $\vec {L'}=\vec{r'} \times \vec{p'}$
$=(\vec{R}+\vec{r}) \times \vec{p}$
$=\vec{R} \times \vec{p} + \vec L$
where the 1st term varies with time. (where R is the shift of coordinate, since R is constant, and p sort of rotating.)
would anyone kind enough to shed some light on this for me?
From what we discussed, your literary taste seems to be classical/conventional in nature. That book is inherently unconventional in nature; it's not supposed to be read as a novel, it's supposed to be read as an encyclopedia
@BalarkaSen Dare I say it, my literary taste continues to change as I have kept on reading :-)
One book that I finished reading today, The Sense of An Ending (different from the movie with the same title) is far from anything I would've been able to read, even, two years ago, but I absolutely loved it.
I've just started watching the Fall - it seems good so far (after 1 episode)... I'm with @JohnRennie on the Sherlock Holmes books and would add that the most recent TV episodes were appalling. I've been told to read Agatha Christy but haven't got round to it yet
?Is it possible to make a time machine ever? Please give an easy answer,a simple one A simple answer, but a possibly wrong one, is to say that a time machine is not possible. Currently, we don't have either the technology to build one, nor a definite, proven (or generally accepted) idea of how we could build one. — Countto1047 secs ago
@vzn if it's a romantic novel, which it looks like, it's probably not for me - I'm getting to be more and more fussy about books and have a ridiculously long list to read as it is. I'm going to counter that one by suggesting Ann Leckie's Ancillary Justice series
Although if you like epic fantasy, Malazan book of the Fallen is fantastic
@Mithrandir24601 lol it has some love story but its written by a guy so cant be a romantic novel... besides what decent stories dont involve love interests anyway :P ... was just reading his blog, they are gonna do a movie of one of his books with kate winslet, cant beat that right? :P variety.com/2016/film/news/…
@vzn "he falls in love with Daley Cross, an angelic voice in need of a song." I think that counts :P It's not that I don't like it, it's just that authors very rarely do anywhere near a decent job of it. If it's a major part of the plot, it's often either eyeroll worthy and cringy or boring and predictable with OK writing. A notable exception is Stephen Erikson
@vzn depends exactly what you mean by 'love story component', but often yeah... It's not always so bad in sci-fi and fantasy where it's not in the focus so much and just evolves in a reasonable, if predictable way with the storyline, although it depends on what you read (e.g. Brent Weeks, Brandon Sanderson). Of course Patrick Rothfuss completely inverts this trope :) and Lev Grossman is a study on how to do character development and totally destroys typical romance plots
@Slereah The idea is to pick some spacelike hypersurface $\Sigma$ containing $p$. Now specifying $u(p)$ is trivial because the wave equation is invariant under constant perturbations. So that's whatever. But I can specify $\nabla u(p)|\Sigma$ by specifying $u(\cdot, 0)$ and differentiate along the surface. For the Cauchy theorems I can also specify $u_t(\cdot,0)$.
Now take the neigborhood to be $\approx (-\epsilon,\epsilon)\times\Sigma$ and then split the metric like $-dt^2+h$
Do forwards and backwards Cauchy solutions, then check that the derivatives match on the interface $\{0\}\times\Sigma$
Why is it that you can only cool down a substance so far before the energy goes into changing it's state? I assume it has something to do with the distance between molecules meaning that intermolecular interactions have less energy in them than making the distance between them even smaller, but why does it create these bonds instead of making the distance smaller / just reducing the temperature more?
Thanks @CooperCape but this leads me another question I forgot ages ago
If you have an electron cloud, is the electric field from that electron just some sort of averaged field from some centre of amplitude or is it a superposition of fields each coming from some point in the cloud? |
Now that we have a feel for the set of values for which a logarithmic function is defined, we move on to graphing logarithmic functions. The family of logarithmic functions includes the parent function [latex]y={\mathrm{log}}_{b}\left(x\right)[/latex] along with all its transformations: shifts, stretches, compressions, and reflections.
We begin with the parent function [latex]y={\mathrm{log}}_{b}\left(x\right)[/latex]. Because every logarithmic function of this form is the inverse of an exponential function with the form [latex]y={b}^{x}[/latex], their graphs will be reflections of each other across the line [latex]y=x[/latex]. To illustrate this, we can observe the relationship between the input and output values of [latex]y={2}^{x}[/latex] and its equivalent [latex]x={\mathrm{log}}_{2}\left(y\right)[/latex] in the table below.
x –3 –2 –1 0 1 2 3 [latex]{2}^{x}=y[/latex] [latex]\frac{1}{8}[/latex] [latex]\frac{1}{4}[/latex] [latex]\frac{1}{2}[/latex] 1 2 4 8 [latex]{\mathrm{log}}_{2}\left(y\right)=x[/latex] –3 –2 –1 0 1 2 3
Using the inputs and outputs from the table above, we can build another table to observe the relationship between points on the graphs of the inverse functions [latex]f\left(x\right)={2}^{x}[/latex] and [latex]g\left(x\right)={\mathrm{log}}_{2}\left(x\right)[/latex].
[latex]f\left(x\right)={2}^{x}[/latex] [latex]\left(-3,\frac{1}{8}\right)[/latex] [latex]\left(-2,\frac{1}{4}\right)[/latex] [latex]\left(-1,\frac{1}{2}\right)[/latex] [latex]\left(0,1\right)[/latex] [latex]\left(1,2\right)[/latex] [latex]\left(2,4\right)[/latex] [latex]\left(3,8\right)[/latex] [latex]g\left(x\right)={\mathrm{log}}_{2}\left(x\right)[/latex] [latex]\left(\frac{1}{8},-3\right)[/latex] [latex]\left(\frac{1}{4},-2\right)[/latex] [latex]\left(\frac{1}{2},-1\right)[/latex] [latex]\left(1,0\right)[/latex] [latex]\left(2,1\right)[/latex] [latex]\left(4,2\right)[/latex] [latex]\left(8,3\right)[/latex]
As we’d expect, the
x– and y-coordinates are reversed for the inverse functions. The figure below shows the graph of f and g. Figure 2. Notice that the graphs of [latex]f\left(x\right)={2}^{x}[/latex] and [latex]g\left(x\right)={\mathrm{log}}_{2}\left(x\right)[/latex] are reflections about the line y = x.
Observe the following from the graph:
[latex]f\left(x\right)={2}^{x}[/latex] has a y-intercept at [latex]\left(0,1\right)[/latex] and [latex]g\left(x\right)={\mathrm{log}}_{2}\left(x\right)[/latex] has an x-intercept at [latex]\left(1,0\right)[/latex]. The domain of [latex]f\left(x\right)={2}^{x}[/latex], [latex]\left(-\infty ,\infty \right)[/latex], is the same as the range of [latex]g\left(x\right)={\mathrm{log}}_{2}\left(x\right)[/latex]. The range of [latex]f\left(x\right)={2}^{x}[/latex], [latex]\left(0,\infty \right)[/latex], is the same as the domain of [latex]g\left(x\right)={\mathrm{log}}_{2}\left(x\right)[/latex]. A General Note: Characteristics of the Graph of the Parent Function, f( x) = log ( b x) b
For any real number
x and constant b > 0, [latex]b\ne 1[/latex], we can see the following characteristics in the graph of [latex]f\left(x\right)={\mathrm{log}}_{b}\left(x\right)[/latex]: one-to-one function vertical asymptote: x= 0 domain: [latex]\left(0,\infty \right)[/latex] range: [latex]\left(-\infty ,\infty \right)[/latex] x-intercept: [latex]\left(1,0\right)[/latex] and key point [latex]\left(b,1\right)[/latex] y-intercept: none increasing if [latex]b>1[/latex] decreasing if 0 < b< 1
Figure 3 shows how changing the base
b in [latex]f\left(x\right)={\mathrm{log}}_{b}\left(x\right)[/latex] can affect the graphs. Observe that the graphs compress vertically as the value of the base increases. ( Note: recall that the function [latex]\mathrm{ln}\left(x\right)[/latex] has base [latex]e\approx \text{2}.\text{718.)}[/latex] How To: Given a logarithmic function with the form [latex]f\left(x\right)={\mathrm{log}}_{b}\left(x\right)[/latex], graph the function. Draw and label the vertical asymptote, x= 0. Plot the x-intercept, [latex]\left(1,0\right)[/latex]. Plot the key point [latex]\left(b,1\right)[/latex]. Draw a smooth curve through the points. State the domain, [latex]\left(0,\infty \right)[/latex], the range, [latex]\left(-\infty ,\infty \right)[/latex], and the vertical asymptote, x= 0. Example 3: Graphing a Logarithmic Function with the Form [latex]f\left(x\right)={\mathrm{log}}_{b}\left(x\right)[/latex].
Graph [latex]f\left(x\right)={\mathrm{log}}_{5}\left(x\right)[/latex]. State the domain, range, and asymptote.
Solution
Before graphing, identify the behavior and key points for the graph.
Since b= 5 is greater than one, we know the function is increasing. The left tail of the graph will approach the vertical asymptote x= 0, and the right tail will increase slowly without bound. The x-intercept is [latex]\left(1,0\right)[/latex]. The key point [latex]\left(5,1\right)[/latex] is on the graph. We draw and label the asymptote, plot and label the points, and draw a smooth curve through the points. Figure 5. The domain is [latex]\left(0,\infty \right)[/latex], the range is [latex]\left(-\infty ,\infty \right)[/latex], and the vertical asymptote is x = 0. Try It 3
Graph [latex]f\left(x\right)={\mathrm{log}}_{\frac{1}{5}}\left(x\right)[/latex]. State the domain, range, and asymptote. |
Electrochemistry, which is the study of the interaction of electricity and chemical reactions, is a central theme in the story of the discovery of cisplatin. In the redox chemistry module, we saw how electrons are transferred from one species to another in a chemical reaction, and we also learned some of the formalisms for describing these types of reactions. Now we will take these concepts a step further and discuss how various chemical species vary in their ability to "pull" electrons—that is, to be reduced. The implications of the different pulling powers of various species are far reaching. As a result of this phenomenon, we are able to power batteries, produce aluminum, extract metals from their salts, and protect metals through electroplating. We can also gain a more complete understanding of the serendipitous discovery of cisplatin.
In the modules on control experiments and the role of platinum electrodes (as well as the discovery of cisplatin), we saw that platinum-containing electrolysis products were responsible for causing both elongation of bacterial cells and regression of murine tumors. We now discuss the origin of these electrolysis products.
Electrolysis occurs when an electric current from an external source is used to bring about a nonspontaneous chemical reaction. The researchers found that when they applied an electric current through platinum electrodes in a continuous culture chamber containing E. coli bacteria, a reaction occurred in which the supposedly inert platinum metal (Pt 0) of the electrodes was oxidized to form both platinum(II) (Pt 2+) and platinum(IV) (Pt 4+). Complexes containing oxidized forms of platinum—particularly Pt 2+—were found to be the causative agents in bacterial elongation. Let us now examine the oxidation reaction in which Pt 0 is converted into Pt 2+:
\[Pt^0_{(s)} → Pt^{2+}_{(aq)} + 2 e^-\]
Chemists can determine whether this oxidation reaction is spontaneous and if so, how much energy is released. Likewise, they can determine whether the reaction is nonspontaneous and how much energy is required to make the reaction proceed. Knowing the amount of energy released or required for various oxidation and reduction reactions gives us an idea of the ability these reactions (called cell reactions) have to push or pull electrons through a circuit; the measure of this ability is called the
cell potential. The cell potential is a useful value when it can be compared to other cell potentials obtained under the same standard set of conditions—that is, when all participating gases exist at a pressure of 1 atm, and all ions are present in a concentration of 1 mol L -1.
A cell potential under standard conditions is called a
standard cell potential, or a standard electrode potential, E°. Standard cell potentials are reported only for reduction reactions; for this reason, they are also called standard reduction potentials. Standard electrode potentials are all measured in reference to the hydrogen electrode’s standard potential, which is arbitrarily set at zero. If another species oxidizes a molecule of H 2 to H + ions, the other species is itself reduced and has a positive standard reduction potential. On the other hand, if H + ions oxidize another species, that species has a negative standard reduction potential. Most general chemistry textbooks have tables of standard cell potential values. When we look up the standard cell potential for the reduction of Pt 2+ to Pt 0 (the opposite of the oxidation reaction written above), we see that this reaction has a positive value for E°, meaning that Pt 2+ oxidizes H 2 to H + under standard conditions.
\[Pt^{2+}_{(aq)} + 2 e^- → Pt^0_{(s)}\;\;\;\; E^o = +1.20\; V\]
As stated above, we are really more interested in the reverse reaction, the oxidation of Pt
0 to Pt 2+. The standard cell potential for the oxidation reaction has the same value but the opposite sign as that for the reduction reaction:
\[Pt^0_{(s)} → Pt^{2+}_{(aq)} + 2 e^-\;\;\;\; E^o = -1.20\; V\]
Now let’s take a look at what these cell potential values mean. We will see that the cell potential is related both to the reaction free energy, which tells us whether or not the reaction is spontaneous, and to the equilibrium constant for the reaction, which tells us the extent of the reaction at equilibrium. The standard cell potential is related to the standard reaction free energy through the following equation:
\[ΔG° = -nFE^o\]
Here,
n is the number of moles of electrons participating in the reaction. F is the Faraday constant, which is the magnitude of charge per mole of electrons, and is equal to 9.6485 x 10 4 C mol-1. We can now calculate the standard free energy for the oxidation reaction written above:
\[ΔG^o = -nFE^o = -2 \times (9.6485 \times 10^4\, C\, mol^{-1}) \times (-1.20\; V)\]
\[= +231,564\; C \cdot V\; mol^{-1}\]
\[= +231,564 \;J \;mol^{-1}\]
\[= +232 \;kJ \;mol^{-1}\]
When the reaction free energy is positive (and the cell potential is negative), the reaction is nonspontaneous in the direction written. Therefore, the oxidation of Pt
0 to Pt 2+ is nonspontaneous; energy must be supplied in order for the reaction to proceed. The energy required for the oxidation of Pt 0 (from the platinum electrodes) can be supplied in the form of an electric current; such a current was applied to the continuous culture chamber containing E. coli bacteria. The standard cell potential is also related to the equilibrium constant, K, for the redox reaction:
\[ΔG^o = -RT\ln K\]
Here,
R is the gas constant, 8.31451 J K-1 mol-1, and T is the temperature. Given this expression and the relationship above between reaction free energy and cell potential, we can relate the cell potential directly to the equilibrium constant:
\[ΔG^o = -RT\ ln K = -nFE^o\]
\[\ln K = \dfrac{nFE^o}{RT}\]
\[K = e^{(\frac{nFE^o}{RT})}\]
Therefore, at 25° C (298.15 K), the equilibrium constant for the oxidation of Pt
0 to Pt 2+ is calculated to be the following:
\[K = e^{(\frac{nFE^o}{RT})} = e^{-93.41}\]
\[= 2.70 \times 10^{-41}\]
As this calculation shows, when Pt
0 is oxidized to Pt 2+, the reaction strongly favors the reactants—unless energy is supplied to the reaction. Now that we have seen how Pt 2+ can be generated under the reaction conditions described in the electric fields module, we can understand the control experiment in which the potassium iodide-starch test was used as a way to detect the presence of an oxidizing agent. We saw in the control experiments and redox chemistry modules that in this test, an oxidizing agent (here, Pt 2+) reacts with iodide ions to form the reduced, neutral metal and elemental iodine:
\[Pt^{2+}_{(aq)} + 2 I^-_{(aq)} → Pt^0_{(s)} + I^2_{(s)}\]
One way to see the electron transfer more clearly is to break this redox reaction into two
half-reactions:
\[2 I^-_{(aq)} → I^2_{(s) }+ 2 e^-\]
\[Pt^{2+}_{(aq)} + 2 e^- → Pt^0_{(s)}\]
The two iodide ions are each losing one electron—in a net two-electron oxidation. As we learned in the redox chemistry module, I- is the reducing agent. Likewise, Pt
2+ is gaining two electrons and is reduced, so Pt 2+ is the oxidizing agent. Using half-reactions is a convenient way to keep track what is going on in a redox reaction. However, it is important to realize that the electrons are not ever actually free in solution, as the half-reactions suggest; rather, they are in transit between the reducing agent and the oxidizing agent. Now, if we add these two half-reactions together, we see that we obtain the original balanced redox equation. Rules for balancing simple redox equations are given in the redox chemistry module.
2 I
-(aq) → I 2(s) + 2 e-
Pt
2+((aq) + 2 e- → Pt 0(s) ______________________________ Pt 2+(aq) + 2 I -(aq) → Pt 0(s) + I 2(s)
Just as we added together the two half-reactions to obtain a balanced redox equation, we can add together the two standard cell potentials to obtain the net potential.
2 I
-(aq) → I 2(s) + 2 e- E° = -0.54 V
Pt
2+(aq) + 2 e- → Pt 0(s) E° = +1.20 V ______________________________________________ Pt 2+(aq) + 2 I-(aq) → Pt 0(s) + I 2(s) E° = + 0.66 V
We see that the net potential for the redox reaction occurring in the potassium iodide-starch test is a positive number; this will lead to a negative value for the reaction free energy, meaning that the reaction is spontaneous as written. This means that Pt
2+ is a strong enough oxidizing agent to effect the oxidation of iodide ions, explaining why Pt 2+gives a positive result in the potassium iodide-starch test. In summary, we have seen how Pt 2+ is generated when an electric current is applied to the platinum electrodes in the continuous culture chamber. We have also seen how the presence of Pt 2+ gives a positive result when subjected to the potassium iodide-starch test. We now have the pieces of the puzzle to understand how platinum-containing complexes such as cisplatin are generated under the reaction conditions. We will do this in the transition metal chemistry module. References Atkins, P. W., Jones, L. L. Chemistry: Molecules, Matter, and Change,3rd ed. W. H. Freeman and Company: New York, 1997, Chapter 17. |
When I showed to my brother how I proved \begin{equation} \int_{0}^{\!\Large \frac{\pi}{2}} \ln \left(x^{2} + \ln^2\cos x\right) \, \mathrm{d}x=\pi\ln\ln2 \end{equation} using the following theorem by Mr. Olivier Oloa \begin{equation}{\large\int_{0}^{\!\Large \frac{\pi}{2}}} \frac{\cos \left( s \arctan \left(-\frac{x}{\ln \cos x}\right)\right)}{(x^2+\ln^2\! \cos x)^{\Large\frac{s}{2}}}\, \mathrm{d}x = \frac{\pi}{2}\frac{1}{\ln^{\Large s}\!2}\qquad,\;\text{for }-1<s<1.\end{equation} He showed me the following interesting formula
\begin{equation} \int_{0}^{\!\Large \frac{\pi}{2}} x\csc^2(x)\arctan \left(\alpha \tan x\right)\, \mathrm{d}x =\frac{\pi}{2}\, \ln\left(\left[1 + \alpha\right]^{1 + \alpha} \over \alpha^\alpha\right)\,,\qquad \mbox{for}\ \alpha > 0\tag{✪}. \end{equation}
I tried several values of $\alpha$ to check its validity (
since he always messes around with me) and the numerical results match the output of Mathematica $9$. The problem is how to prove this formula since he didn't tell me ( as always). I tried Feynman's integration trick and I arrived to the following result:\begin{equation}\partial_\alpha\int_{0}^{\!\Large \frac{\pi}{2}} x\csc^2(x)\arctan \left(\alpha \tan x\right)\, \mathrm{d}x = \int_{0}^{\!\Large \frac{\pi}{2}} \frac{x\cot x}{\cos^2x+\alpha^2 \sin^2 x}\, \mathrm{d}x\end{equation}but I am having difficulty to crack the very last integral. Could anyone here please help me to prove the formula $(✪)$ preferably with elementary ways (high school methods)? Any help would be greatly appreciated. Thank you. |
I will not give you the numerical solution, but I will below explain some analytical simplifications that I believe are required to solve the numerical problem. The strategy is simple: try to express all the parameters of the integral in term of dimensionless variables. To achieve a discussion in term of $\delta = \Delta(T) / \Delta(T=0)$ and $\tau = T / T_{c}$ a bit of work is required, that's what I do in the first section below. The second section give the final result, that you can check numerically.
Please first note the related question Interaction strength in BCS theory, where I also gave some remarks about the method and some numerical value for the parameter $$\eta = \frac{1}{N(0)V}$$, the (inverse) interaction strength. At the end of the calculation, this will turn out to be the only parameter you need.
In the following, the equation numbers are those of the original BCS paper [Bardeen, J., Cooper, L. N., & Schrieffer, J. R. ;
Theory of Superconductivity. Physical Review, 108, 1175–1204 (1957). http://dx.doi.org/10.1103/PhysRev.108.1175 -> free to read on the APS website]. I change a bit their notation, though, the gap parameter will be called $\Delta$ instead of $\varepsilon_{0}$, and the Debye frequency will be noted $\omega_{D}$ instead of the BCS $\omega$. Subsequently, here $\Delta_0=\Delta\left(T=0\right)$ for simplicity. I've tried to be as explicit as I could, such that in principle there is no need to check the BCS paper, except for the first equation (the self-consistent gap equation, see below). Somme preparatory calculations: the universal law
Let us start with the
self-consistent integral of the gap:
$$\eta = \int_{0}^{\hslash \omega_{D}}\tanh \frac{\sqrt{\Delta^{2}+\xi^{2}}}{2k_{B}T}\frac{d\xi}{\sqrt{\Delta^{2}+\xi^{2}}}$$
where $\Delta=\Delta(T)$ is a short hand. We will first discuss an expression for the critical temperature $T_{c}$, then for the gap amplitude $\Delta_{0}$ at zero temperature, to end-up with a universal relation between $\Delta_{0}$ and $T_{c}$.
1.Critical temperature
The critical transition temperature $T=T_{c}$ is given by the above integral at the onset of $\Delta$. Say differently, $\Delta(T_{c})=0$. Then, $T_{c}$ is given by the integral relation
$$\eta = \int_{0}^{\hslash \omega_{D}}\tanh \frac{\xi}{2k_{B}T_{c}}\frac{d\xi}{\xi} = \int_{0}^{\kappa} \frac{\tanh z}{z} dz$$
since $\xi$ is a positive variable (representing the kinetic energy). I defined $\kappa = \hslash \omega_{D} / 2 k_{B} T_{c} $ in the above integral. This integral is badly defined at its lower boundary, but can be evaluated by an integration by part
$$\int_{0}^{\kappa} \frac{\tanh z}{z} dz = \left. \ln z \tanh z \right|_{0} ^{\kappa} - \int_{0}^{\kappa} \frac{\ln z}{\cosh^2z}dz$$
where the first term is exactly evaluated, and the second one is approximated
$$\int_{0}^{\kappa} \frac{\tanh z}{z} dz \approx \ln \kappa - \int_{0}^{\infty} \frac{\ln z}{\cosh^2z}dz = \ln \kappa + \ln \frac{4 e^{\gamma}}{\pi}$$
for small enough $T_{c}$ or large enough Debye cut-off (large $\kappa$). The second integral is still badly defined at its lower boundary, but it is well tabulated and written in terms of the Euler constant $\gamma$.
Collecting all we have done so far, we have
$$\eta \approx \ln \frac{4e^{\gamma}\kappa}{\pi} \;\; ; \;\; \kappa \gg 1$$
or equivalently the important relation
$$\frac{k_{B}T_{c}}{\hslash\omega_{D}} \approx \frac{2e^{\gamma}}{\pi}e^{-\eta} \approx 1.134 e^{-\eta} \;\; ; \;\; \hslash\omega_{D} \gg k_{B}T_{c}$$
which helps you numerically finding the critical temperature. This is Eq.(3.29) of BCS paper. This equation has important consequences. The most important of them is that it is impossible to find $T_{c}$ by perturbation expansion of the interaction strength $\eta^{-1}$. This point (among others, like the absence of quantum mechanics -- hence QM of many-body -- when superconductivity was discovered) explain the long waiting before an efficient theory of superconductivity was discovered.
2. Zero-temperature gap
The previous (long) discussion is just a numerical help for the critical temperature. Let us now find a simple evaluation of $\Delta_{0}$, the gap at zero temperature. At zero temperature, the self-consistent equation is just
$$\eta=\int_{0}^{\hslash \omega_{D}}\frac{d\xi}{\sqrt{\xi^{2}+\Delta_{0}^{2}}} \Rightarrow \boxed{\dfrac{\hslash \omega_{D}}{\Delta_{0}}=\sinh \eta}$$
since the integral can be calculated
exactly. This is equation (2.40) of BCS paper.
3. Universal BCS relation
Now, the important point. We have above two evaluations of $\eta$, one dependent of $T_{c}$, the other one dependent on $\Delta_{0}$. Then, one has
$$\frac{\hslash \omega_{D}}{\Delta_{0}} = \sinh \ln \frac{4e^{\gamma}\kappa}{\pi}$$
, which, for small $\Delta_{0} / \hslash \omega_{D}$, gives approximately (NB: the equation to resolve is a second order polynomial equation in $\kappa$, so it has two solutions. Only the positive solution must be retain, since all the parameters are positive) the approximate
BCS universal law for weak coupling
$$\boxed{ \Delta_{0} \approx 1.76 \; k_{B}T_{c}} \;\; ; \;\; \hslash \omega_{D} \gg \Delta_{0}$$
almost the Eq.(3.30) of BCS (their numerical prefactor is 1.75, not 1.76, but that's what I found using Maple actually). One can check the asymptotic relation used twice above: $\hslash\omega_{D}\gg \left(k_{B}T_{c} , \Delta_{0} \right)$ since $\Delta_{0}$ and $k_{B}T_{c}$ are of the same order of magnitude through the universal BCS law.
This important point of the above (long...) discussion is that we end up with a universal law linking the gap parameter at zero temperature, and the critical temperature
of all BCS superconductors. This can be used to define what a BCS superconductor is, but I'm not sure a non-BCS superconductor has a commonly accepted definition as violating the previous expression. The universal version for weak coupling
Using all the above expressions, in particular what I called the universal law $\Delta_{0} \approx 1.76 k_{B}T_{c}$ and the exact result $\hslash \omega_{D}/\Delta_{0}=\sinh \eta$, one obtain easily
$$\eta \approx \int_{0}^{\delta^{-1}\sinh\eta}\tanh\left(0.882 \frac{\delta}{\tau}\sqrt{1+z^{2}}\right)\frac{dz}{\sqrt{1+z^{2}}}$$
with $\delta=\Delta(T)/\Delta_{0}$ and $\tau = T / T_{c}$. NB: I putted the numerical factor $0.882$ instead of $1.76 / 2$, which is more precise. I found it using the method explained in the previous section.
So, now, the numerical strategy is to fix $\eta$ for a run of calculation (see e.g. Interaction strength in BCS theory), then fix $\delta$ and find $\tau$, change $\delta$ and solve for $\tau$, ... up to obtain $\delta(\tau)$. I believe the previous way is simpler than first fixing $\tau$ to solve for $\delta$, because of the $1/\delta$ in the boundary, but maybe Im wrong on this point.
I obtained the following figure from Maple. 30 seconds of numerical integration is sufficient.
NB I calculated $-\delta$, since the numerically obtained $\delta$ are negative... I do not understand why (anybody gets an idea ?). Also, I add by hand the points [0,1] (at $T =T_{c}$) and [1,0] (at $T=0$), since $\lim_{\epsilon \rightarrow 0}1/\epsilon$ is badly defined numerically, but we know these points from the full integral.
To my experience, if you try to calculate $\Delta(T)$ from scratch (choosing a $\omega_{D}$ and an $\eta$
randomly), you will have (a lot of !) troubles. This is because the region when the solution of the self-consistent equation is non-trivial (a solution $\delta \neq 0$ exists) is very narrow. Now, $\tau$ and $\delta$ in between $0$ and $1$ is the good region of non-trivial phase, and we know this from the beginning ! By the way, it's always simpler to discuss dimensionless variables as $\delta$ and $\tau$, since they are the only one you should plot.
Finally, a remark: you should wonder about the
universal aspect of the simplification... Well, I would say that the BCS theory is valid for weak coupling only from the beginning. So it is of no interest to try to resolve the first integral I wrote in this answer instead of the last one.
P.S.: I've found no mention of the above discussion / substitution in literature, so I wrote my own one :-). I'm deeply interested in some references indeed. |
Suppose we wish to find the zeros of \(f(x) = x^3 + 4x^2-5x-14\). Setting \(f(x)=0\) results in the polynomial equation \(x^3 + 4x^2-5x-14=0\). Despite all of the factoring techniques we learned in Intermediate Algebra, this equation foils us at every turn. If we graph \(f\) using the graphing calculator, we get
The graph suggests that the function has three zeros, one of which is \(x=2\). It's easy to show that \(f(2) = 0\), but the other two zeros seem to be less friendly. Even though we could use the 'Zero' command to find decimal approximations for these, we seek a method to find the remaining zeros
exactly. Based on our experience, if \(x=2\) is a zero, it seems that there should be a factor of \((x-2)\) lurking around in the factorization of \(f(x)\). In other words, we should expect that \(x^3 + 4x^2-5x-14=(x-2) \, q(x)\), where \(q(x)\) is some other polynomial. How could we find such a \(q(x)\), if it even exists? The answer comes from our old friend, polynomial division. Dividing \(x^3 + 4x^2-5x-14\) by \(x-2\) gives
As you may recall, this means \(x^3 + 4x^2-5x-14=(x-2)\left(x^2+6x+7\right)\), so to find the zeros of \(f\), we now solve \((x-2)\left(x^2+6x+7\right)=0\). We get \(x-2=0\) (which gives us our known zero, \(x=2\)) as well as \(x^2+6x+7=0\). The latter doesn't factor nicely, so we apply the Quadratic Formula to get \(x = -3 \pm \sqrt{2}\). The point of this section is to generalize the technique applied here. First up is a friendly reminder of what we can expect when we divide polynomials.
Suppose \(d(x)\) and \(p(x)\) are nonzero polynomials where the degree of \(p\) is greater than or equal to the degree of \(d\). There exist two unique polynomials, \(q(x)\) and \(r(x)\), such that \(p(x) = d(x) \, q(x) + r(x),\,\) where either \(r(x) = 0\) or the degree of \(r\) is strictly less than the degree of \(d\).
As you may recall, all of the polynomials in Theorem 3.4 have special names. The polynomial \(p\) is called the
dividend; \(d\) is the divisor; \(q\) is the quotient; \(r\) is the remainder. If \(r(x)=0\) then \(d\) is called a factor of \(p\). The proof of Theorem 3.4 is usually relegated to a course in Abstract Algebra, but we can still use the result to establish two important facts which are the basis of the rest of the chapter.
Suppose \(p\) is a polynomial of degree at least \(1\) and \(c\) is a real number. When \(p(x)\) is divided by \(x-c\) the remainder is \(p(c)\).
The proof of Theorem 3.5 is a direct consequence of Theorem 3.4. When a polynomial is divided by \(x-c\), the remainder is either \(0\) or has degree less than the degree of \(x-c\). Since \(x-c\) is degree \(1\), the degree of the remainder must be \(0\), which means the remainder is a constant. Hence, in either case, \(p(x) = (x-c) \, q(x) + r\), where \(r\), the remainder, is a real number, possibly \(0\). It follows that \(p(c) = (c-c) \, q(c) + r = 0 \cdot q(c) + r = r\), so we get \(r = p(c)\) as required. There is one last 'low hanging fruit' to collect which we present below.
Suppose \(p\) is a nonzero polynomial. The real number \(c\) is a zero of \(p\) if and only if \((x-c)\) is a factor of \(p(x)\).
The proof of The Factor Theorem is a consequence of what we already know. If \((x-c)\) is a factor of \(p(x)\), this means \(p(x) = (x-c) \, q(x)\) for some polynomial \(q\). Hence, \(p(c) = (c-c) \, q(c) = 0\), so \(c\) is a zero of \(p\). Conversely, if \(c\) is a zero of \(p\), then \(p(c) = 0\). In this case, The Remainder Theorem tells us the remainder when \(p(x)\) is divided by \((x-c)\), namely \(p(c)\), is \(0\), which means \((x-c)\) is a factor of \(p\). What we have established is the fundamental connection between zeros of polynomials and factors of polynomials.
Of the things The Factor Theorem tells us, the most pragmatic is that we had better find a more efficient way to divide polynomials by quantities of the form \(x-c\). Fortunately, people have already blazed this trail. Let's take a closer look at the long division we performed at the beginning of the section and try to streamline it. First off, let's change all of the subtractions into additions by distributing through the \(-1\)s.
Next, observe that the terms \(-x^3\), \(-6x^2\) and \(-7x\) are the exact opposite of the terms above them. The algorithm we use ensures this is always the case, so we can omit them without losing any information. Also note that the terms we 'bring down' (namely the \(-5x\) and \(-14\)) aren't really necessary to recopy, so we omit them, too.
Now, let's move things up a bit and, for reasons which will become clear in a moment, copy the \(x^3\) into the last row.
Note that by arranging things in this manner, each term in the last row is obtained by adding the two terms above it. Notice also that the quotient polynomial can be obtained by dividing each of the first three terms in the last row by \(x\) and adding the results. If you take the time to work back through the original division problem, you will find that this is exactly the way we determined the quotient polynomial. This means that we no longer need to write the quotient polynomial down, nor the \(x\) in the divisor, to determine our answer.
We've streamlined things quite a bit so far, but we can still do more. Let's take a moment to remind ourselves where the \(2x^2\), \(12x\) and \(14\) came from in the second row. Each of these terms was obtained by multiplying the terms in the quotient, \(x^2\), \(6x\) and \(7\), respectively, by the \(-2\) in \(x-2\), then by \(-1\) when we changed the subtraction to addition. Multiplying by \(-2\) then by \(-1\) is the same as multiplying by \(2\), so we replace the \(-2\) in the divisor by \(2\). Furthermore, the coefficients of the quotient polynomial match the coefficients of the first three terms in the last row, so we now take the plunge and write only the coefficients of the terms to get
We have constructed a
synthetic division tableau for this polynomial division problem. Let's re-work our division problem using this tableau to see how it greatly streamlines the division process. To divide \(x^3+4x^2-5x-14\) by \(x-2\), we write \(2\) in the place of the divisor and the coefficients of \(x^3+4x^2-5x-14\) in for the dividend. Then 'bring down' the first coefficient of the dividend.
Next, take the \(2\) from the divisor and multiply by the \(1\) that was 'brought down' to get \(2\). Write this underneath the \(4\), then add to get \(6\).
Now take the \(2\) from the divisor times the \(6\) to get \(12\), and add it to the \(-5\) to get \(7\).
Finally, take the \(2\) in the divisor times the \(7\) to get \(14\), and add it to the \(-14\) to get \(0\).
The first three numbers in the last row of our tableau are the coefficients of the quotient polynomial. Remember, we started with a third degree polynomial and divided by a first degree polynomial, so the quotient is a second degree polynomial. Hence the quotient is \(x^2+6x+7\). The number in the box is the remainder. Synthetic division is our tool of choice for dividing polynomials by divisors of the form \(x-c\). It is important to note that it works
only for these kinds of divisors.\footnote{You'll need to use good old-fashioned polynomial long division for divisors of degree larger than 1.} Also take note that when a polynomial (of degree at least \(1\)) is divided by \(x-c\), the result will be a polynomial of exactly one less degree. Finally, it is worth the time to trace each step in synthetic division back to its corresponding step in long division. While the authors have done their best to indicate where the algorithm comes from, there is no substitute for working through it yourself.
Example \(\PageIndex{1}\):
Use synthetic division to perform the following polynomial divisions. Find the quotient and the remainder polynomials, then write the dividend, quotient and remainder in the form given in Theorem 3.4.
\(\left(5x^3 - 2x^2 + 1\right) \div (x-3) \) \(\left(x^3+8\right) \div (x+2)\) \(\dfrac{4-8x-12x^2}{2x-3}\) Solution
When setting up the synthetic division tableau, we need to enter \(0\) for the coefficient of \(x\) in the dividend. Doing so gives
Since the dividend was a third degree polynomial, the quotient is a quadratic polynomial with coefficients \(5\), \(13\) and \(39\). Our quotient is \(q(x) = 5x^2+13x+39\) and the remainder is \(r(x) = 118\). According to Theorem 3.4, we have \(5x^3 - 2x^2 + 1 = (x-3)\left(5x^2+13x+39 \right) + 118\).
For this division, we rewrite \(x+2\) as \(x-(-2)\) and proceed as before
We get the quotient \(q(x) = x^2-2x+4\) and the remainder \(r(x) =0\). Relating the dividend, quotient and remainder gives \(x^3+8 = (x+2)\left( x^2-2x+4 \right)\).
To divide \(4-8x-12x^2\) by \(2x-3\), two things must be done. First, we write the dividend in descending powers of \(x\) as \(-12x^2-8x+4\). Second, since synthetic division works only for factors of the form \(x-c\), we factor \(2x-3\) as \(2\left(x-\frac{3}{2}\right)\). Our strategy is to first divide \(-12x^2-8x+4\) by \(2\), to get \(-6x^2-4x+2\). Next, we divide by \(\left(x-\frac{3}{2}\right)\). The tableau becomes
From this, we get \(-6x^2-4x+2 = \left(x-\frac{3}{2}\right)(-6 x - 13) - \frac{35}{2}\). Multiplying both sides by \(2\) and distributing gives \(-12x^2-8x+4 = \left(2x-3\right) (-6 x - 13) - 35\). At this stage, we have written \(-12x^2-8x+4\) in the
form \((2x-3) q(x) + r(x)\), but how can we be sure the quotient polynomial is \(-6x-13\) and the remainder is \(-35\)? The answer is the word 'unique' in Theorem 3.4. The theorem states that there is only one way to decompose \(-12x^2-8x+4\) into a multiple of \((2x-3)\) plus a constant term. Since we have found such a way, we can be sure it is the only way. \(\Box\)
The next example pulls together all of the concepts discussed in this section.
Example \(\PageIndex{2}\):
Let \(p(x) = 2x^3-5x+3\).
Find \(p(-2)\) using The Remainder Theorem. Check your answer by substitution. Use the fact that \(x=1\) is a zero of \(p\) to factor \(p(x)\) and then find all of the real zeros of \(p\). Solution
The Remainder Theorem states \(p(-2)\) is the remainder when \(p(x)\) is divided by \(x-(-2)\). We set up our synthetic division tableau below. We are careful to record the coefficient of \(x^2\) as \(0\), and proceed as above.
According to the Remainder Theorem, \(p(-2) = -3\). We can check this by direct substitution into the formula for \(p(x)\): \(p(-2) = 2(-2)^3-5(-2)+3 = -16+10+3=-3\).
The Factor Theorem tells us that since \(x=1\) is a zero of \(p\), \(x-1\) is a factor of \(p(x)\). To factor \(p(x)\), we divide
We get a remainder of \(0\) which verifies that, indeed, \(p(1) = 0\). Our quotient polynomial is a second degree polynomial with coefficients \(2\), \(2\), and \(-3\). So \(q(x) = 2x^2 + 2x - 3\). Theorem 3.4 tells us \(p(x) = (x-1)\left( 2x^2 + 2x - 3\right)\). To find the remaining real zeros of \(p\), we need to solve \(2x^2 + 2x - 3=0\) for \(x\). Since this doesn't factor nicely, we use the quadratic formula to find that the remaining zeros are \(x = \frac{-1 \pm \sqrt{7}}{2}\). \(\Box\)
In Section 3.1, we discussed the notion of the multiplicity of a zero. Roughly speaking, a zero with multiplicity \(2\) can be divided twice into a polynomial; multiplicity \(3\), three times and so on. This is illustrated in the next example.
Example \(\PageIndex{3}\):
Let \(p(x) = 4x^4-4x^3-11x^2+12x-3\). Given that \(x=\frac{1}{2}\) is a zero of multiplicity \(2\), find all of the real zeros of \(p\).
Solution
We set up for synthetic division. Since we are told the multiplicity of \(\frac{1}{2}\) is two, we continue our tableau and divide \(\frac{1}{2}\) into the quotient polynomial
From the first division, we get \(4x^4-4x^3-11x^2+12x-3=\left(x-\frac{1}{2}\right) \left(4x^3-2x^2-12x+6\right)\). The second division tells us \(4x^3-2x^2-12x+6=\left(x-\frac{1}{2}\right)\left(4x^2-12\right)\). Combining these results, we have \(4x^4-4x^3-11x^2+12x-3 = \left(x-\frac{1}{2}\right)^2\left(4x^2-12\right)\). To find the remaining zeros of \(p\), we set \(4x^2-12=0\) and get \(x = \pm \sqrt{3}\). \(\Box\)
A couple of things about the last example are worth mentioning. First, the extension of the synthetic division tableau for repeated divisions will be a common site in the sections to come. Typically, we will start with a higher order polynomial and peel off one zero at a time until we are left with a quadratic, whose roots can always be found using the Quadratic Formula. Secondly, we found \(x = \pm \sqrt{3}\) are zeros of \(p\). The Factor Theorem guarantees \(\left(x-\sqrt{3}\right)\) and \(\left(x - \left(-\sqrt{3}\right)\right)\) are both factors of \(p\). We can certainly put the Factor Theorem to the test and continue the synthetic division tableau from above to see what happens.
This gives us
\[4x^4-4x^3-11x^2+12x-3=\left(x-\frac{1}{2}\right)^2 \left(x-\sqrt{3}\right)\left(x - \left(-\sqrt{3}\right)\right) (4)\]
or, when written with the constant in front
\[ p(x) = 4\left(x-\frac{1}{2}\right)^2 \left(x-\sqrt{3}\right)\left(x - \left(-\sqrt{3}\right)\right)\]
We have shown that \(p\) is a product of its leading coefficient times linear factors of the form \((x-c)\) where \(c\) are zeros of \(p\). It may surprise and delight the reader that, in theory, all polynomials can be reduced to this kind of factorization. We leave that discussion to Section 3.4, because the zeros may not be real numbers. Our final theorem in the section gives us an upper bound on the number of real zeros.
Theorem 3.7:
Suppose \(f\) is a polynomial of degree \(n \geq 1\). Then \(f\) has at most \(n\) real zeros, counting multiplicities.
Theorem 3.7 is a consequence of the Factor Theorem and polynomial multiplication. Every zero \(c\) of \(f\) gives us a factor of the form \((x-c)\) for \(f(x)\). Since \(f\) has degree \(n\), there can be at most \(n\) of these factors. The next section provides us some tools which not only help us determine where the real zeros are to be found, but which real numbers they may be.
We close this section with a summary of several concepts previously presented. You should take the time to look back through the text to see where each concept was first introduced and where each connection to the other concepts was made.
Suppose \(p\) is a polynomial function of degree \(n \geq 1\). The following statements are equivalent:
The real number \(c\) is a zero of \(p\) \(p(c) = 0\) \(x = c\) is a solution to the polynomial equation \(p(x) = 0\) \((x - c)\) is a factor of \(p(x)\) The point \((c, 0)\) is an \(x\)-intercept of the graph of \(y = p(x)\) Contributors Carl Stitz, Ph.D. (Lakeland Community College) and Jeff Zeager, Ph.D. (Lorain County Community College) |
Inverse Cauchy problem for fractional telegraph equations with distributions Abstract
$$u^{(\alpha)}_t-r(t)u^{(\beta)}_t+a^2(-\Delta)^{\gamma/2} u=F_0(x)g(t), \;\;\; (x,t) \in {\rm R}^n\times
(0,T],$$ with given distributions in the right-hand sides of the equation and initial conditions is studied. Our task is to determinate a pair of functions: a generalized solution $u$ (continuous in time variable in general sense) and unknown continuous minor coefficient $r(t)$. The unique solvability of the problem is established. Keywords
3 :: 8
Refbacks There are currently no refbacks.
The journal is licensed under a Creative Commons Attribution-NonCommercial-NoDerivs 3.0 Unported. |
Seeing that in the Chomsky Hierarchy Type 3 languages can be recognised by a DFA (which has no stacks), Type 2 by a DFA with one stack (i.e. a push-down automaton) and Type 0 by a DFA with two stacks (i.e. with one queue, i.e. with a tape, i.e. by a Turing Machine), how do Type 1 languages fit in...
Considering this pseudo-code of an bubblesort:FOR i := 0 TO arraylength(list) STEP 1switched := falseFOR j := 0 TO arraylength(list)-(i+1) STEP 1IF list[j] > list[j + 1] THENswitch(list,j,j+1)switched := trueENDIFNEXTIF switch...
Let's consider a memory segment (whose size can grow or shrink, like a file, when needed) on which you can perform two basic memory allocation operations involving fixed size blocks:allocation of one blockfreeing a previously allocated block which is not used anymore.Also, as a requiremen...
Rice's theorem tell us that the only semantic properties of Turing Machines (i.e. the properties of the function computed by the machine) that we can decide are the two trivial properties (i.e. always true and always false).But there are other properties of Turing Machines that are not decidabl...
People often say that LR(k) parsers are more powerful than LL(k) parsers. These statements are vague most of the time; in particular, should we compare the classes for a fixed $k$ or the union over all $k$? So how is the situation really? In particular, I am interested in how LL(*) fits in.As f...
Since the current FAQs say this site is for students as well as professionals, what will the policy on homework be?What are the guidelines that a homework question should follow if it is to be asked? I know on math.se they loosely require that the student make an attempt to solve the question a...
I really like the new beta theme, I guess it is much more attractive to newcomers than the sketchy one (which I also liked). Thanks a lot!However I'm slightly embarrassed because I can't read what I type, both in the title and in the body of a post. I never encountered the problem on other Stac...
This discussion started in my other question "Will Homework Questions Be Allowed?".Should we allow the tag? It seems that some of our sister sites (Programmers, stackoverflow) have not allowed the tag as it isn't constructive to their sites. But other sites (Physics, Mathematics) do allow the s...
There have been many questions on CST that were either closed, or just not answered because they weren't considered research level. May those questions (as long as they are of good quality) be reposted or moved here?I have a particular example question in mind: http://cstheory.stackexchange.com...
Ok, so in most introductory Algorithm classes, either BigO or BigTheta notation are introduced, and a student would typically learn to use one of these to find the time complexity.However, there are other notations, such as BigOmega and SmallOmega. Are there any specific scenarios where one not...
Many textbooks cover intersection types in the lambda-calculus. The typing rules for intersection can be defined as follows (on top of the simply typed lambda-calculus with subtyping):$$\dfrac{\Gamma \vdash M : T_1 \quad \Gamma \vdash M : T_2}{\Gamma \vdash M : T_1 \wedge T_2}...
I expect to see pseudo code and maybe even HPL code on regular basis. I think syntax highlighting would be a great thing to have.On Stackoverflow, code is highlighted nicely; the schema used is inferred from the respective question's tags. This won't work for us, I think, because we probably wo...
Sudoku generation is hard enough. It is much harder when you have to make an application that makes a completely random Sudoku.The goal is to make a completely random Sudoku in Objective-C (C is welcome). This Sudoku generator must be easily modified, and must support the standard 9x9 Sudoku, a...
I have observed that there are two different types of states in branch prediction.In superscalar execution, where the branch prediction is very important, and it is mainly in execution delay rather than fetch delay.In the instruction pipeline, where the fetching is more problem since the inst...
Is there any evidence suggesting that time spent on writing up, or thinking about the requirements will have any effect on the development time? Study done by Standish (1995) suggests that incomplete requirements partially (13.1%) contributed to the failure of the projects. Are there any studies ...
NPI is the class of NP problems without any polynomial time algorithms and not known to be NP-hard. I'm interested in problems such that a candidate problem in NPI is reducible to it but it is not known to be NP-hard and there is no known reduction from it to the NPI problem. Are there any known ...
I am reading Mining Significant Graph Patterns by Leap Search (Yan et al., 2008), and I am unclear on how their technique translates to the unlabeled setting, since $p$ and $q$ (the frequency functions for positive and negative examples, respectively) are omnipresent.On page 436 however, the au...
This is my first time to be involved in a site beta, and I would like to gauge the community's opinion on this subject.On StackOverflow (and possibly Math.SE), questions on introductory formal language and automata theory pop up... questions along the lines of "How do I show language L is/isn't...
This is my first time to be involved in a site beta, and I would like to gauge the community's opinion on this subject.Certainly, there are many kinds of questions which we could expect to (eventually) be asked on CS.SE; lots of candidates were proposed during the lead-up to the Beta, and a few...
This is somewhat related to this discussion, but different enough to deserve its own thread, I think.What would be the site policy regarding questions that are generally considered "easy", but may be asked during the first semester of studying computer science. Example:"How do I get the symme...
EPAL, the language of even palindromes, is defined as the language generated by the following unambiguous context-free grammar:$S \rightarrow a a$$S \rightarrow b b$$S \rightarrow a S a$$S \rightarrow b S b$EPAL is the 'bane' of many parsing algorithms: I have yet to enc...
Assume a computer has a precise clock which is not initialized. That is, the time on the computer's clock is the real time plus some constant offset. The computer has a network connection and we want to use that connection to determine the constant offset $B$.The simple method is that the compu...
Consider an inductive type which has some recursive occurrences in a nested, but strictly positive location. For example, trees with finite branching with nodes using a generic list data structure to store the children.Inductive LTree : Set := Node : list LTree -> LTree.The naive way of d...
I was editing a question and I was about to tag it bubblesort, but it occurred to me that tag might be too specific. I almost tagged it sorting but its only connection to sorting is that the algorithm happens to be a type of sort, it's not about sorting per se.So should we tag questions on a pa...
To what extent are questions about proof assistants on-topic?I see four main classes of questions:Modeling a problem in a formal setting; going from the object of study to the definitions and theorems.Proving theorems in a way that can be automated in the chosen formal setting.Writing a co...
Should topics in applied CS be on topic? These are not really considered part of TCS, examples include:Computer architecture (Operating system, Compiler design, Programming language design)Software engineeringArtificial intelligenceComputer graphicsComputer securitySource: http://en.wik...
I asked one of my current homework questions as a test to see what the site as a whole is looking for in a homework question. It's not a difficult question, but I imagine this is what some of our homework questions will look like.
I have an assignment for my data structures class. I need to create an algorithm to see if a binary tree is a binary search tree as well as count how many complete branches are there (a parent node with both left and right children nodes) with an assumed global counting variable.So far I have...
It's a known fact that every LTL formula can be expressed by a Buchi $\omega$-automata. But, apparently, Buchi automata is a more powerful, expressive model. I've heard somewhere that Buchi automata are equivalent to linear-time $\mu$-calculus (that is, $\mu$-calculus with usual fixpoints and onl...
Let us call a context-free language deterministic if and only if it can be accepted by a deterministic push-down automaton, and nondeterministic otherwise.Let us call a context-free language inherently ambiguous if and only if all context-free grammars which generate the language are ambiguous,...
One can imagine using a variety of data structures for storing information for use by state machines. For instance, push-down automata store information in a stack, and Turing machines use a tape. State machines using queues, and ones using two multiple stacks or tapes, have been shown to be equi...
Though in the future it would probably a good idea to more thoroughly explain your thinking behind your algorithm and where exactly you're stuck. Because as you can probably tell from the answers, people seem to be unsure on where exactly you need directions in this case.
What will the policy on providing code be?In my question it was commented that it might not be on topic as it seemes like I was asking for working code. I wrote my algorithm in pseudo-code because my problem didnt ask for working C++ or whatever language.Should we only allow pseudo-code here?... |
We can derive Lagrange equations supposing that the virtual work of a system is zero.
$$\delta W=\sum_i (\mathbf{F}_i-\dot {\mathbf{p}_i})\delta \mathbf{r}_i=\sum_i (\mathbf{F}^{(a)}_i+\mathbf{f}_i-\dot {\mathbf{p}_i})\delta \mathbf{r}_i=0$$
Where $\mathbf{f}_i$ are the constrainded forces and are supposed to do no work, which it's true in most cases. Quoting Goldstein:
[The principle of virtual work] is no longer true if sliding friction forces are present [in the tally of constraint forces], ...
So I understand that we should
exclude friction forces of our treatmeant. After some manipulations we arrive to:
$$\frac{d}{dt}\frac {\partial T}{\partial \dot q_i}-\frac{\partial T}{\partial q_i}=Q_i$$
Further in the book, the Rayleigh dissipation function is introduced to
include friction forces. So given that $Q_i=-\frac {\partial \mathcal{F}}{\partial \dot q_i}$ and $L=T-U$, we get:
$$\frac{d}{dt}\frac {\partial L}{\partial \dot q_i}-\frac{\partial L}{\partial q_i}+\frac {\partial \mathcal{F}}{\partial \dot q_i}=0$$
Question: Isn't this an inconsistency of our proof, how do we know the equation holds? Or is it just an educated guess which turns out to be true? |
The Annals of Statistics Ann. Statist. Volume 9, Number 3 (1981), 544-554. A Nonparametric Control Chart for Detecting Small Disorders Abstract
We consider sequential observation of independent random variables $X_1,\cdots, X_N$ whose distribution changes from $F$ to $G$ after the first $\lbrack N\theta \rbrack$ variables. The object is to detect the unknown change-point quickly without too many false alarms. A nonparametric control chart based on partial weighted sums of sequential ranks is proposed. It is shown that if the change from $F$ to $G$ is small, then as $N \rightarrow \infty$, the appropriately scaled and linearly interpolated graph of partial rank sums converges to a Brownian motion on which a drift sets in at time $\theta$. Using this, the asymptotic performance of the one-sided control chart is compared with one based on partial sums of the $X$'s. Location change, scale change and contamination are considered. It is found that for distributions with heavy tails, the control chart based on ranks stops more frequently and faster than the one based on the $X$'s. Performance of the two procedures are also tested on simulated data and the outcomes are compatible with the theoretical results.
Article information Source Ann. Statist., Volume 9, Number 3 (1981), 544-554. Dates First available in Project Euclid: 12 April 2007 Permanent link to this document https://projecteuclid.org/euclid.aos/1176345458 Digital Object Identifier doi:10.1214/aos/1176345458 Mathematical Reviews number (MathSciNet) MR615430 Zentralblatt MATH identifier 0503.62077 JSTOR links.jstor.org Citation
Bhattacharya, P. K.; Frierson, Dargan. A Nonparametric Control Chart for Detecting Small Disorders. Ann. Statist. 9 (1981), no. 3, 544--554. doi:10.1214/aos/1176345458. https://projecteuclid.org/euclid.aos/1176345458 |
$\textbf{The Problem:}$ Let $m(X)<\infty$ and $f$ bounded and measurable. For $1\leq q<p<\infty$ prove that $\|f\|_{p}^{p}\leq\|f\|_{q}^{q}\|f\|_{\infty}^{p-q}.$
$\textbf{My Thoughts:}$ At first sight I was trying to somehow bring in Holder's inequality here, but I got nowhere. So I observed that the result would follow if it is true that $$\left(\frac{f(x)}{\|f\|_\infty}\right)^p\leq\left(\frac{f(x)}{\|f\|_\infty}\right)^q\text{ for almost every }x\in X.$$ Now I fix $x\in X$ for which the above is true, and observe that $$\frac{f(x)}{\|f\|_\infty}\leq1.$$ Since the function $$\frac{f(x)}{\|f\|_\infty}\mapsto\left(\frac{f(x)}{\|f\|_\infty}\right)^z$$ decreases monotonically, the result follows.
Is my line of reasoning on the right track?
My apologies if my wording is bad. If it is though, please point out, and I will do my best to correct it.
Thank you for your time and appreciate any feedback. |
Subsets and Splits
No saved queries yet
Save your SQL queries to embed, download, and access them later. Queries will appear here once saved.