While reading chapter 7.1.2 of di Francesco I encountered the following definition of the character of a Verma module:$$\chi_{(c,h)}(\tau)=\text{Tr }q^{L_0 -c/24},$$where $q = e^{2 \pi i \tau}$, $c$ is the central charge, and $\tau$ is a parameter connected to so-called modular invariance. I am trying to understand the logic behind this definition.
To construct a Verma module one usually proceeds in analogy with the $SO(3)$ group, so that is what I tried to do as well. I found a slightly more detailed description of the latter case in Jones H. "Groups, Representations and Physics", so I am going to follow it.
This is where my first confusion arises. According to its definition, a character is a vector whose elements are given by the traces of the group (algebra) elements. However, it seems to be common practice to call the trace of a single group (algebra) element a character. For instance, for a given irrep of $SO(3)$: $$\text{One of the generators: } X_3 = \text{diag}(j,j-1,...,-j+1, -j)$$ $$\text{Corresponding group element: } R_3(\varphi)=e^{-iX_3 \varphi}$$ $$\text{and finally: }\chi^{j}(\varphi)=e^{-ij\varphi}+...+e^{ij\varphi},$$which they originally call (as of course it is) the trace of $R_3(\varphi)$. Yet in the next sentence they call it the character of a rotation through $\varphi$ about a certain axis. Note that the explicit form of $X_3$ may be read off immediately from $$\langle jm'|X_3|jm\rangle = m \delta_{m',m}.$$ Now note that $\text{Tr}(R_3(\varphi))= \text{Tr}(R_n(\varphi))$ for an arbitrary axis $n$. This follows from:
...character of rotation through $\varphi$ about axis 3 is also character of rotation through $\varphi$ about any other axis. It is so because the conjugacy classes of rotations are all rotations around the same angle about different axes.
So we arrive at the conclusion that $\chi^{j}(\varphi)$, known as the character of a rotation through $\varphi$, gives the trace of any element of the group $SO(3)$ in the given irrep.
Now I want to use the above intuition to justify the expression for the character of a Verma module.
First note that $L_0$ is analogous to $X_3$ and the $L_{\pm m}$ are ladder operators for it. Based on the definition of $L_0$ we can show that $$\langle h+N'|L_0 |h+N\rangle = (h+N)\delta_{N',N}.$$ Taking into account that (I do not know how to prove it, or whether it is indeed the case) $$\text{Tr}(q^{L_0 -c/24})=\text{Tr}(q^{L_n -c/24}) \quad \text{for arbitrary } n,$$ we can define $$\chi_{(c,h)}(\tau)=\text{Tr }q^{L_0 -c/24} = \sum_{n=0}^{\infty} \text{dim}(h+n)\, q^{n+h -c/24}.$$ Now $\chi_{(c,h)}(\tau)$ may be understood (up to some constants) as the trace of any group element in the given representation of the Virasoro algebra. Is this what is usually meant by the character of a Verma module?
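To make the last sum concrete: for a generic Verma module a basis of the level-$n$ eigenspace of $L_0$ is $L_{-n_1}\cdots L_{-n_k}|h\rangle$ with $n_1\geq\dots\geq n_k\geq 1$ and $\sum_i n_i = n$, so $\text{dim}(h+n) = p(n)$, the number of partitions of $n$, and $\chi_{(c,h)}(\tau) = q^{h-c/24}\prod_{n\geq 1}(1-q^n)^{-1}$. A minimal Python sketch (just standard partition counting, not tied to any CFT library) generating these coefficients:

```python
def partition_counts(N):
    """p[n] = number of partitions of n = dim of the level-n subspace of a generic Verma module."""
    p = [0] * (N + 1)
    p[0] = 1
    for part in range(1, N + 1):          # allow parts of size `part`
        for n in range(part, N + 1):
            p[n] += p[n - part]
    return p

# Coefficients of chi_{(c,h)}(tau) = q^(h - c/24) * sum_n p(n) q^n, up to level 8:
print(partition_counts(8))   # [1, 1, 2, 3, 5, 7, 11, 15, 22]
```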
Question:
How thick (minimum) should the air layer be between two flat glass surfaces if the glass is to appear bright when 540 nm light is incident normally?
{eq}T_{min} {/eq} = _____ m
What if the glass is to appear dark?
{eq}T_{min-} {/eq} = _____ m
Interference:
Interference is a phenomenon in which two waves interact and superimpose on each other. If the superimposing waves are in phase, they form a resultant wave of higher amplitude; this is called constructive interference. Destructive interference occurs when two superimposing waves are out of phase and form a resultant wave of lower amplitude.
Answer and Explanation:
Given:
{eq}\lambda = 540 \ nm = 540 \times 10^{-9} \ m {/eq}
Part A) For the glass to appear bright, constructive interference must take place. The formula for the minimum thickness for constructive interference is given by,
{eq}2 t = (m + \frac {1}{2}) \lambda {/eq} where m = 0
Solving for t,
{eq}t = \frac {(m + \frac {1}{2})\lambda}{2} = \frac {(0 + \frac {1}{2})(540 \times 10^{-9} \ m)}{2} = \boxed {1.35 \times 10^{-7} \ m} {/eq}
Part B) For the glass to appear dark, destructive interference must take place instead. The formula for the minimum (nonzero) thickness for destructive interference is given by,
{eq}2 t = m \lambda {/eq} where m = 1
Solving for the thickness t,
{eq}t = \frac {m \lambda}{2} = \frac {1 (540 \times 10^{-9} \ m)}{2} = \boxed {2.70 \times 10^{-7} \ m} {/eq}
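As a quick numerical check (a minimal sketch; the variable names are purely illustrative), both thicknesses follow directly from the two conditions above:

```python
wavelength = 540e-9                        # m, the given wavelength
t_bright = (0 + 0.5) * wavelength / 2      # 2t = (m + 1/2) * lambda with m = 0
t_dark = 1 * wavelength / 2                # 2t = m * lambda with m = 1 (smallest nonzero t)
print(t_bright, t_dark)                    # 1.35e-07 m and 2.7e-07 m
```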
In fluid dynamics, wave shoaling is the effect by which surface waves entering shallower water change in wave height. It is caused by the fact that the group velocity, which is also the wave-energy transport velocity, changes with water depth. Under stationary conditions, a decrease in transport speed must be compensated by an increase in energy density in order to maintain a constant energy flux. [2] Shoaling waves will also exhibit a reduction in wavelength while the frequency remains constant.
In shallow water and parallel depth contours, non-breaking waves will increase in wave height as the wave packet enters shallower water.
[3] This is particularly evident for tsunamis as they wax in height when approaching a coastline, with devastating results.
Mathematics
When waves enter shallow water they slow down. Under stationary conditions, the wave length is reduced. The energy flux must remain constant and the reduction in group (transport) speed is compensated by an increase in wave height (and thus wave energy density).
For non-breaking waves, the energy flux associated with the wave motion, which is the product of the wave energy density with the group velocity, between two wave rays is a conserved quantity (i.e. a constant when following the energy of a wave packet from one location to another). Under stationary conditions the total energy transport must be constant along the wave ray, [4]
$$\frac{d}{ds}(c_g E) = 0,$$
where $s$ is the co-ordinate along the wave ray and $c_g E$ is the energy flux per unit crest length. A decrease in group speed $c_g$ must be compensated by an increase in energy density $E$. This can be formulated as a shoaling coefficient relative to the wave height in deep water. [5] [6]
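To make the shoaling coefficient concrete, here is a minimal sketch under linear (Airy) wave theory (an illustration, not taken from the article): the dispersion relation $\omega^2 = gk\tanh(kh)$ is solved for $k$, the group velocity is $c_g = \tfrac{1}{2}c\,(1 + 2kh/\sinh 2kh)$, and since $E \propto H^2$, constancy of $c_g E$ gives $H/H_\text{deep} = K_s = \sqrt{c_{g,\text{deep}}/c_g}$.

```python
import math

g = 9.81  # m/s^2

def wavenumber(omega, h):
    """Solve the linear dispersion relation omega^2 = g k tanh(k h) for k by Newton iteration."""
    k = omega ** 2 / g                           # deep-water first guess
    for _ in range(50):
        f = g * k * math.tanh(k * h) - omega ** 2
        df = g * math.tanh(k * h) + g * k * h / math.cosh(k * h) ** 2
        k -= f / df
    return k

def shoaling_coefficient(T, h):
    """K_s = sqrt(c_g_deep / c_g(h)), so that H(h) = K_s * H_deep."""
    omega = 2 * math.pi / T
    k = wavenumber(omega, h)
    c = omega / k                                # phase speed
    cg = 0.5 * c * (1 + 2 * k * h / math.sinh(2 * k * h))
    cg_deep = g / (2 * omega)                    # deep-water group velocity
    return math.sqrt(cg_deep / cg)

# Example: a 10 s swell shoaling from deep water to 5 m depth grows by roughly 10% in height.
print(shoaling_coefficient(10.0, 5.0))
```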
Following Phillips (1977) and Mei (1989), [7] [8] denote the phase of a wave ray as $S = S(\mathbf{x},t)$, $0\leq S<2\pi$.
The local wave number vector is the gradient of the phase function,
$$\mathbf{k} = \nabla S,$$
and the angular frequency is proportional to its local rate of change,
$$\omega = -\partial S/\partial t.$$
Simplifying to one dimension and cross-differentiating, it is now easily seen that the above definitions indicate simply that the rate of change of wavenumber is balanced by the convergence of the frequency along a ray;
$$\frac{\partial k}{\partial t} + \frac{\partial \omega}{\partial x} = 0.$$
Assuming stationary conditions ($\partial/\partial t = 0$), this implies that wave crests are conserved and the frequency must remain constant along a wave ray as $\partial \omega / \partial x = 0$. As waves enter shallower waters, the decrease in group velocity caused by the reduction in water depth leads to a reduction in wave length $\lambda = 2\pi/k$, because the nondispersive shallow water limit of the dispersion relation for the wave phase speed,
$$\omega/k \equiv c = \sqrt{gh},$$
dictates that
$$k = \omega/\sqrt{gh},$$
i.e., a steady increase in $k$ (decrease in $\lambda$) as the phase speed decreases under constant $\omega$.

Notes
[1] Wiegel, R.L. (2013). Oceanographical Engineering. Dover Publications. p. 17, Figure 2.4.
[2] Longuet-Higgins, M.S.; Stewart, R.W. (1964). "Radiation stresses in water waves; a physical discussion, with applications". Deep Sea Research and Oceanographic Abstracts 11 (4): 529–562.
[3] WMO (1998). Guide to Wave Analysis and Forecasting 702 (2 ed.). World Meteorological Organization.
[4]
[5] Dean, R.G.; Dalrymple, R.A. (1991). Water wave mechanics for engineers and scientists. Advanced Series on Ocean Engineering 2. Singapore: World Scientific.
[6] Goda, Y. (2000). Random Seas and Design of Maritime Structures. Advanced Series on Ocean Engineering 15 (2 ed.). Singapore: World Scientific.
[7] Phillips, Owen M. (1977). The dynamics of the upper ocean (2nd ed.). Cambridge University Press.
[8] Mei, C.C. (1989). The Applied Dynamics of Ocean Surface Waves. Singapore: World Scientific.

External links
Wave transformation at Coastal Wiki
TL;DR
I'd propose that weak force life has a tiny chance of existing in environments where particles travel at high speeds. A possible example is the jets produced by an active galactic nucleus. At the high energies (and high speeds) particles reach in these jets, the range of the weak force could be sizably extended to the point where it is less negligible than in a low-energy environment, because at high speeds the $W$ and $Z$ bosons' lifetimes can be dramatically extended. While it's difficult to speculate as to what structures and processes - let alone life - could coherently arise, I would bet that proton-antiproton collisions and the decay of charged leptons (muons and tau particles) might be potential sources of the $W$ and $Z$ bosons.
The decay problem
The weak force is mediated by three particles: The charged $W^{\pm}$ bosons and the neutral $Z$ boson. Unlike the photon, their cousin, these bosons have mass, approximately 80.4 GeV and 91.2 GeV, respectively. Also unlike the photon, the bosons decay. The $W^+$ boson has several decay paths, including hadronic paths (dominated by quark-antiquark pairs) and leptonic paths (a positively charged lepton and its associated neutrino); the $W^-$ decays involve the corresponding antiparticles. For the $Z$ bosons, hadronic decays to quarks are also the main contributors, although pairs of charged leptons and their antiparticles may also be produced.
Both particles have half-lives of $\tau\sim10^{-25}$ seconds, and so the range of the weak force is approximately $r\approx\tau c\sim10^{-17}$ meters, even in the case of relativistic particles. Another way of expressing this uses the estimate of the range from Heisenberg's uncertainty principle:$$r\approx\frac{\hbar}{2mc}\propto\frac{1}{m},$$where $m$ is the mass of the boson. Therefore, by decreasing the mass of the $W$ and $Z$ bosons, you could of course extend the range of the weak force. That said, changing the mass would involve changing the weak force coupling constant across the universe, which would cause serious issues.
Time dilation
Changing our fundamental constants seems to be right out, then, so let's stay away from those. Instead, let's see what happens if we try to extend the lifetimes of these bosons through time dilation. Time dilation comes in two flavors: gravitational and special relativistic. It turns out that to dilate time enough to significantly extend $r$, you need to be in a steep gravitational field, quite close to a black hole; this seems an unlikely and unsafe (certainly short-lived) setup.
However, we could extend the range of the weak force by instead having these bosons travel quickly, as happens with muons in Earth's atmosphere. The boson's lifetime should be $\tau=\gamma\tau_0$, where $\gamma$ is the Lorentz factor and $\tau_0\sim10^{-25}$ seconds, from before. The highest Lorentz factors we've seen come from ultra-high energy cosmic rays; the Oh-My-God particle had a kinetic energy of $3.2\times10^{20}$ eV, and thus (as you can determine by calculating the relativistic kinetic energy, $T\approx m\gamma c^2$) a Lorentz factor of $\sim10^{11}$, corresponding to a speed that differs from $c$ by less than one part in $10^{23}$. The boson's lifetime is then $\tau\sim10^{-14}$ seconds, and the weak force's range is a surprising $r\sim10^{-6}$ meters.
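Plugging in the round numbers quoted above (a back-of-the-envelope sketch, not a precise kinematic calculation):

```python
c = 3.0e8          # m/s
tau0 = 1.0e-25     # s, rest-frame boson lifetime (order of magnitude used above)
gamma = 1.0e11     # Lorentz factor comparable to the Oh-My-God particle

tau = gamma * tau0     # dilated lifetime, ~1e-14 s
r = c * tau            # extended range, ~3e-6 m
print(tau, r)
```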
There are some caveats:
- Propelling a particle to this energy requires an active galactic nucleus, and therefore ambient $W$ and $Z$ bosons can only survive in the jets emitted from such an AGN.
- The jets should be dense with leptons and hadrons, an extreme environment that produces gamma rays and cosmic rays. Interactions should be frequent, and it seems that bosons could very quickly interact with these ambient particles, limiting their range. There could be a limit similar to the GZK limit for cosmic rays, albeit involving these ambient fermions.
- The bosons presumably can't be accelerated to these speeds in the same manner as normal cosmic rays, but they could be produced by high-energy particles in the jets. Proton-antiproton interactions can produce both $W$ and $Z$ bosons; if these interactions transferred the majority of the progenitors' energies to the bosons, we might well see the bosons reach the required energies. This is guesswork on my part, though.
While I would propose AGN jets as an alternative to a4android's neutron star suggestion, simply because they're the only energy sources that could create these Lorentz factors, it seems clear that only these extreme environments could host anything akin to life based on the weak force.
What particle(s) would life be based on?
As you might have guessed, you likely won't see elements per se in these jets. Nuclei, yes, primarily protons. What you will see is, as I mentioned before, a messy soup of hadrons and leptons, producing synchrotron radiation and gamma rays. These particles will make up your building blocks of life.
How will these bosons be produced, then? There are two basic types of weak force interactions: charged current interactions (involving the $W$ bosons) and neutral current interactions (involving the $Z$ boson). Examples include:
- Quark-antiquark interactions from proton-antiproton collisions, as I mentioned above. We see these occur in colliders. Typical pathways involve up and down quarks ($u$ and $d$) and their antiparticles ($\bar{u}$ and $\bar{d}$):$$\bar{d}u\to W^+,\quad d\bar{u}\to W^-,\quad u\bar{u}\to Z,\quad d\bar{d}\to Z$$
- Lepton decay, e.g. a muon decaying to a muon neutrino and a $W^-$ boson, which then decays to an electron and an electron antineutrino:$$\mu\to\nu_{\mu}+W^-\to\nu_{\mu}+e^-+\bar{\nu}_e$$
There are other hadronic decay processes, of course (e.g. pion decay); I list the above just as examples. The dominant production processes depend on the ambient fermions and hadrons.
A note on WIMPs
I'd like to second Spencer's suggestion of weakly interacting massive particles, or WIMPs, which remain prime dark matter candidates. They're high-mass particles that interact only via gravity and the weak nuclear force, and hence would be excellent candidates for a creature that primarily uses the weak force insofar as it really couldn't interact in any other way. It does seem unlikely that they would combine in high densities, as dark matter doesn't clump quite like normal matter does, but they remain an interesting possibility.
Question:
Two in-phase loudspeakers that emit sound with the same frequency are placed along a wall and are separated by a distance of {eq}5.00\ m {/eq}. A person is standing {eq}12.0\ m {/eq} away from the wall, equidistant from the loudspeakers. When the person moves {eq}1.00\ m {/eq} parallel to the wall, she experiences destructive interference for the first time. What is the frequency of the sound? The speed of sound in air is {eq}343\ \dfrac ms {/eq}.
A. 211 Hz
B. 256 Hz
C. 422 Hz
D. 512 Hz
E. 674 Hz
Interference
Diffraction is the capacity of a wave to bend around the edge of an obstacle, while interference is the combination of two or more waves to form a resultant wave in which the displacement is either reinforced (constructive interference) or canceled (destructive interference).
The condition for destructive interference is,
{eq}d' - d = (m + \dfrac 12) \lambda {/eq}
where
{eq}\lambda {/eq} is the wavelength of the sound and d and d' are the path lengths from the listener to each loudspeaker.
Answer and Explanation:
The given data form two right triangles, each with the 12.0 m distance from the wall as one leg and the listener's lateral offsets from the speakers (1.5 m and 3.5 m after the move) as the other leg.
Determining the distance "d" from the first triangle,
{eq}d = \sqrt {(1.5\ m)^2 + (12\ m)^2 } \ = \ 12.09 \ m {/eq}
Determining the distance "d1" from the bottom triangle in the figure above,
{eq}d1 = \sqrt {(3.5m)^2 + (12 m)^2 } \ = \ 12.5 \ m {/eq}
Since she experiences destructive interference for the first time, the value of m is zero.
{eq}d' - d = (0 + \dfrac 12) \lambda \\ 12.5 - 12.0934 = \dfrac 12 \lambda \\ \lambda = 2 (0.4066 \ m) \\ \lambda = 0.8132 \ m {/eq}
The frequency of the sound wave is then,
{eq}f \ = \ \dfrac {343 \ \dfrac ms}{0.8132 \ m } \\ f \ = \ 421.8 \ Hz \\ \\ f \ \approx \ 422 \ Hz {/eq}
The answer is (C) 422 Hz.
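A quick numerical cross-check of the arithmetic above (an illustrative sketch only):

```python
import math

d = math.hypot(1.5, 12.0)         # path from the nearer speaker, m
d_prime = math.hypot(3.5, 12.0)   # path from the farther speaker, m
wavelength = 2 * (d_prime - d)    # first minimum: path difference = lambda / 2
f = 343.0 / wavelength
print(round(f))                   # ~422 Hz
```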
Fast Logistic Regression
When we are programming Logistic Regression or Neural Networks we should avoid explicit \(for \) loops. It’s not always possible, but when we can, we should use built-in functions or find some other ways to compute it. Vectorizing the implementation of Logistic Regression makes the code highly efficient. In this post we will see how we can use this technique to compute gradient descent without using even a single \(for \) loop.
Now, we will examine the forward propagation step of logistic regression. If we have \(m\) training examples, to make a prediction on the first example we need to compute \(z \) and the activation function \(a\) as follows:
\(z^{(1)}= \omega^T x^{(1)} + b \)
\(a^{(1)} = \sigma(z^{(1)}) \)
To make prediction on the second training example we need to compute this :
\(z^{(2)}= \omega^T x^{(2)} + b \)
\(a^{(2)} = \sigma(z^{(2)}) \)
The same is with prediction of third training example:
\(z^{(3)}= \omega^T x^{(3)} + b \)
\(a^{(3)} = \sigma(z^{(3)}) \)
So if we have \(m\) training examples we need to do these calculations \(m\) times. In order to carry out the forward propagation step, which means to compute these predictions for all \(m\) training examples, there is a way to do this without needing an explicit for loop.
We will stack all training examples horizontally in a matrix \(\textbf{X}\), so that every column in matrix \(\textbf{X} \) represents one training example:
$$ \textbf{X} = \begin{bmatrix} \vert & \vert & \dots & \vert \\ x^{(1)} & x^{(2)} & \dots & x^{(m)} \\ \vert & \vert & \dots & \vert \end{bmatrix} $$
Notice that matrix \(\omega \) is a \(n_{x} \times 1\) matrix (or a column vector), so when we transpose it we get \(\omega^T \) which is a \(1 \times n_{x}\) matrix (or a row vector) so multiplying \( \omega^T \) with \(\textbf{X} \) we get a \(1 \times m\) matrix. Then we add a \(1 \times m\) matrix \(b \) to obtain \(\textbf{Z}\).
We will define matrix \(\textbf{Z} \) by placing all \(z^{(i)} \) values in a row vector :
\(\textbf{Z}= \begin{bmatrix} z^{(1)} & z^{(2)} & \dots & z^{(m)} \end{bmatrix} = \omega^T \textbf{X} + b = \begin{bmatrix} \omega^T x^{(1)} +b & \omega^T x^{(2)} + b & \dots & \omega^T x^{(m)} + b \end{bmatrix} \)
In Python, we can easily implement the calculation of a matrix \(\textbf{Z} \):
$$ \textbf{Z} = np.dot(\omega^T, \textbf{X}) + b $$
As we can see, \(b \) is defined as a scalar. When you add the \(1 \times m\) matrix \( \omega^T \textbf{X} \) to this real number, Python automatically takes the real number \(b \) and expands it out to a \(1 \times m\) row vector. This operation is called broadcasting, and we will see more about it at the end of this post.
Matrix \(\textbf{A} \) is defined as a \(1 \times m\) matrix, which we also get by stacking the values \(a^{(i)}\) horizontally, as we did with matrix \(\textbf{Z} \):
\(\textbf{A} = \begin{bmatrix} a^{(1)} & a^{(2)} & \dots & a^{(m)} \end{bmatrix} = \sigma (Z) \)
In Python, we can also calculate matrix \(\textbf{A} \) with one line of code as follows (assuming we have defined a sigmoid function):
\(\textbf{A} = sigmoid(\textbf{Z}) \)
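Putting these two lines together, here is a minimal runnable sketch of the vectorized forward propagation step (the shapes and variable names here are purely illustrative):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

n_x, m = 3, 5                     # n_x features, m training examples
X = np.random.randn(n_x, m)       # examples stacked as columns
w = np.random.randn(n_x, 1)       # weight column vector
b = 0.1                           # scalar bias, broadcast over the row vector

Z = np.dot(w.T, X) + b            # shape (1, m)
A = sigmoid(Z)                    # shape (1, m)
```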
Vectorization of Logistic Regression
In the previous post we saw that for the gradient computation we had to compute the derivative \(dz \) for every training example:
\(dz^{(1)} = a^{(1)} – y^{(1)} \)
\(dz^{(2)} = a^{(2)} – y^{(2)} \)
\(\vdots \)
\(dz^{(m)} = a^{(m)} – y^{(m)} \)
In the same way as we defined the previous variables, we now define the matrix \(\textbf{dZ} \), in which we stack all the \(dz^{(i)} \) values horizontally; the dimension of \(\textbf{dZ} \) is \(1\times m\), or alternatively it is an \(m \)-dimensional row vector.
\(\textbf{dZ} = \begin{bmatrix} dz^{(1)} & dz^{(2)} & \dots & dz^{(m)} \end{bmatrix} \)
As we know that matrices \(\textbf{A} \) and \(\textbf{Y} \) are defined as follows:
\(\textbf{A} = \begin{bmatrix} a^{(1)} & a^{(2)} & \dots & a^{(m)} \end{bmatrix} \)
\(\textbf{Y} = \begin{bmatrix} y^{(1)} & y^{(2)} & \dots & y^{(m)} \end{bmatrix} \)
We can see that \(\textbf{dZ} \) is:
$$ \textbf{dZ} = \textbf{A} – \textbf{Y} = \begin{bmatrix} a^{(1)} – y^{(1)} & a^{(2)} – y^{(2)} & \dots & a^{(m)} – y^{(m)} \end{bmatrix} $$
and all values in \(\textbf{dZ} \) can be computed at the same time.
To implement Logistic Regression in code, we did this:
\(for \enspace i \enspace in \enspace range(m): \)
After leaving the inner \(for \) loop, we divide \(J\), \(\mathrm{d} w_{1}\), \(\mathrm{d} w_{2}\) and \(\mathrm{d} b\) by \(m\), because we computed their averages:
\( J/=m; \enspace dw_{1}/=m; \enspace dw_{2}/=m; \enspace db/=m; \)
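The body of that loop is not reproduced above; a minimal sketch of what it presumably looks like for two features follows (the per-feature accumulations marked \((*) \) and \((**) \) are the ones referred to in the next paragraph; the dummy data and variable names are illustrative):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Dummy data for illustration: m examples with two features x1, x2 and labels y.
m = 4
x1 = np.array([0.5, 1.2, -0.3, 2.0])
x2 = np.array([1.0, -0.7, 0.4, 0.1])
y = np.array([1, 0, 0, 1])
w1 = w2 = b = 0.0

J = dw1 = dw2 = db = 0.0
for i in range(m):
    z_i = w1 * x1[i] + w2 * x2[i] + b
    a_i = sigmoid(z_i)
    J += -(y[i] * np.log(a_i) + (1 - y[i]) * np.log(1 - a_i))
    dz_i = a_i - y[i]          # dz = a - y
    dw1 += x1[i] * dz_i        # (*)
    dw2 += x2[i] * dz_i        # (**)
    db += dz_i
J /= m; dw1 /= m; dw2 /= m; db /= m
```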
This code was non-vectorized and highly inefficient, so we need to transform it. First, using vectorization, we can transform the per-feature equations \((*) \) and \((**) \) into one equation:
\(dw += x^{(i)}dz^{(i)} \)
Remember that in this case we have two features, \( x_1 \) and \(x_2 \). If we had had more features, for example n features, we would have needed another for loop to calculate \( dw_{1} \) … \(dw_{n} \) .
The cost function is : $$ J = -\frac{1}{m}\sum_{i=1}^{m}y^{(i)}\log(a^{(i)})+(1-y^{(i)})\log(1-a^{(i)}) $$
The derivatives are:
$$ \frac{\partial J}{\partial w} = dw = \frac{1}{m}\textbf{X}(\textbf{A}-\textbf{Y})^T $$
$$ \frac{\partial J}{\partial b} = db = \frac{1}{m} \sum_{i=1}^m (a^{(i)}-y^{(i)}) $$
To calculate \(w \) and \(b \) we will still need the following \(for \) loop.
\( for \enspace i \enspace in \enspace range(num \enspace of \enspace iterations): \)
\( \textbf{Z}=w^{T}\textbf{X} + b \)
\( \textbf{A}=\sigma (\textbf{Z}) \)
\( \textbf{dZ}=\textbf{A} – \textbf{Y} \)
\( dw\enspace = \frac{1}{m}np.dot(\textbf{X}, \textbf{dZ}^T) \)
\( db\enspace = \frac{1}{m}np.sum(\textbf{dZ}) \)
\( w += -\alpha dw \)
\( b += -\alpha db \)
We don’t need to loop through the entire training set, but we still need to loop over the number of iterations, and that’s a \(for \) loop we can’t get rid of.
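For completeness, here is a minimal NumPy sketch of that whole training loop (assuming \(\textbf{X}\) of shape \((n_x, m)\), labels \(\textbf{Y}\) of shape \((1, m)\), a learning rate \(\alpha\), and a sigmoid function as before; all names are illustrative):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_logistic_regression(X, Y, alpha=0.01, num_iterations=1000):
    n_x, m = X.shape
    w = np.zeros((n_x, 1))
    b = 0.0
    for _ in range(num_iterations):
        Z = np.dot(w.T, X) + b        # (1, m) forward propagation
        A = sigmoid(Z)                # (1, m) predictions
        dZ = A - Y                    # (1, m)
        dw = np.dot(X, dZ.T) / m      # (n_x, 1) gradient of J with respect to w
        db = np.sum(dZ) / m           # scalar gradient of J with respect to b
        w -= alpha * dw
        b -= alpha * db
    return w, b
```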
This post completes Logistic Regression, which can be seen as a one-neuron neural network. Let’s see the why, what and how of neural networks!
In the next post we will learn what vectorization is.
Inverse boundary value problems for diffusion-wave equation with generalized functions in right-hand sides
Abstract
We prove the unique solvability of the problem of determining the solution $u(x,t)$ of the first boundary value problem for the equation
$$u^{(\beta)}_t-a(t)\Delta u=F_0(x)\cdot g(t), \;\;\; (x,t) \in (0,l)\times (0,T],$$
with a fractional derivative $u^{(\beta)}_t$ of order $\beta\in (0,2)$ and generalized functions in the initial conditions, together with the determination of the unknown continuous coefficient $a(t)>0, \; t\in [0,T]$ (or the unknown continuous function $g(t)$), given the values $(a(t)u_x(\cdot,t),\varphi_0(\cdot))$ (respectively, $(u(\cdot,t),\varphi_0(\cdot))$) of the corresponding generalized function on some test function $\varphi_0(x)$.
Dimensionality reduction is used to remove irrelevant and redundant features.
When the number of features in a dataset is bigger than the number of examples, then the probability density function of the dataset becomes difficult to calculate.
For example, if we model a dataset \(S = \{x^{(i)}\}_{i=1}^m,\ x \in R^{n}\) as a single Gaussian N(μ, ∑), then the probability density function is defined as: \(P(x) = \frac{1}{{(2π)}^{\frac{n}{2}} |Σ|^\frac{1}{2}} exp(-\frac{1}{2} (x-μ)^T {Σ}^{-1} (x-μ))\), where \(μ = \frac{1}{m} \sum_{i=1}^m x^{(i)} \\ ∑ = \frac{1}{m} \sum_{i=1}^m (x^{(i)} – μ)(x^{(i)} – μ)^T\).
But if n >> m, then ∑ will be singular, and calculating P(x) will be impossible.
Note: each \((x^{(i)} – μ)(x^{(i)} – μ)^T\) is always singular (it has rank one), but the sum of many such singular matrices is most likely invertible when m >> n.
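A tiny numerical illustration of this failure mode (purely illustrative; not from the post):

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 3, 5                           # fewer examples than features
X = rng.normal(size=(m, n))
Xc = X - X.mean(axis=0)
Sigma = Xc.T @ Xc / m                 # n x n sample covariance

print(np.linalg.matrix_rank(Sigma))   # at most m - 1 = 2 < n, so Sigma is singular
print(np.linalg.det(Sigma))           # ~0, so Sigma^{-1} in P(x) is not defined
```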
Principal Component Analysis
Given a set \(S = \{x^{(1)}=(0,1), x^{(2)}=(1,1)\}\), to reduce the dimensionality of S from 2 to 1, we need to project data on a vector that maximizes the projections. In other words, find the normalized vector \(μ = (μ_1, μ_2)\) that maximizes \( ({x^{(1)}}^T.μ)^2 + ({x^{(2)}}^T.μ)^2 = (μ_2)^2 + (μ_1 + μ_2)^2\).
Using the method of Lagrange Multipliers, we can solve the maximization problem with the constraint \(||u||^2 = μ_1^2 + μ_2^2 = 1\).\(L(μ, λ) = (μ_2)^2 + (μ_1 + μ_2)^2 – λ (μ_1^2 + μ_2^2 – 1) \)
We need to find μ such that \(∇_μ L = 0 \) and ||μ|| = 1.
After the derivation we find that the solution is the vector μ = (0.52, 0.85).
Generalization
Given a set \(S = \{x^{(i)}\}_{i=1}^m,\ x \in R^{n}\), to reduce the dimensionality of S, we need to find μ that maximizes \(arg \ \underset{u: ||u|| = 1}{max} \frac{1}{m} \sum_{i=1}^m ({x^{(i)}}^T u)^2\)\(=\frac{1}{m} \sum_{i=1}^m (u^T {x^{(i)}})({x^{(i)}}^T u)\) \(=u^T (\frac{1}{m} \sum_{i=1}^m {x^{(i)}} * {x^{(i)}}^T) u\)
Let’s define \( ∑ = \frac{1}{m} \sum_{i=1}^m {x^{(i)}} * {x^{(i)}}^T \)
Using the method of Lagrange Multipliers, we can solve the maximization problem with the constraint \(||u||^2 = u^Tu = 1\).\(L(u, λ) = u^T ∑ u – λ (u^Tu – 1) \)
If we calculate the derivative with respect to u, we will find:\(∇_u = ∑ u – λ u = 0\)
Therefore u that solves this maximization problem must be an eigenvector of ∑. We need to choose the eigenvector with highest eigenvalue.
If we choose k eigenvectors \({u_1, u_2, …, u_k}\), then we need to transform the data by multiplying each example with each eigenvector.\(x^{(i)} := (u_1^T x^{(i)}, u_2^T x^{(i)},…, u_k^T x^{(i)}) = U^T x^{(i)}\)
Data should be normalized before running the PCA algorithm:
1-\(μ = \frac{1}{m} \sum_{i=1}^m x^{(i)}\)
2-\(x^{(i)} := x^{(i)} – μ\)
3-\(σ_j^2 = \frac{1}{m} \sum_{i=1}^m {x_j^{(i)}}^2\)
4-\(x_j^{(i)} := \frac{x_j^{(i)}}{σ_j}\)
To reconstruct (approximately) the original data from the projections \(z^{(i)} = U^T x^{(i)}\), we calculate \(\widehat{x}^{(i)} := U z^{(i)} = U U^T x^{(i)}\).
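A minimal NumPy sketch of the procedure above (assuming the data are already normalized, as in the small worked example; variable names are illustrative):

```python
import numpy as np

def pca(X, k):
    """X: (m, n) data matrix, one example per row. Returns top-k directions, projections, reconstructions."""
    m = X.shape[0]
    Sigma = X.T @ X / m                       # the n x n matrix from the derivation above
    eigvals, eigvecs = np.linalg.eigh(Sigma)  # eigenvalues in ascending order
    U = eigvecs[:, ::-1][:, :k]               # top-k eigenvectors, shape (n, k)
    Z = X @ U                                 # projections z = U^T x, shape (m, k)
    X_hat = Z @ U.T                           # reconstructions x_hat = U z, shape (m, n)
    return U, Z, X_hat

# Tiny example from the text: S = {(0, 1), (1, 1)} reduced from 2 to 1 dimension.
X = np.array([[0.0, 1.0], [1.0, 1.0]])
U, Z, X_hat = pca(X, k=1)
print(U[:, 0])   # approximately (0.53, 0.85) -- the direction found above, up to sign and rounding
```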
Factor Analysis
Factor analysis is a way to take a mass of data and shrink it to a smaller data set with fewer features.
Given a set \(S = \{x^{(i)}\}_{i=1}^m,\ x \in R^{n}\), and S is modeled as a single Gaussian.
To reduce the dimensionality of S, we define a relationship between the variable x and a latent (hidden) variable z called a factor such that \(x^{(i)} = μ + Λ z^{(i)} + ϵ^{(i)}\) and \(μ \in R^{n}\), \(z^{(i)} \in R^{d}\), \(Λ \in R^{n*d}\), \(ϵ \sim N(0, Ψ)\), Ψ is diagonal, \(z \sim N(0, I)\) and d <= n.
From Λ we can find the features that are related to each factor, and then identify the features that need to be eliminated or combined in order to reduce the dimensionality of the data.
Below the steps to estimate the parameters Ψ, μ, Λ.\(E[x] = E[μ + Λz + ϵ] = E[μ] + ΛE[z] + E[ϵ] = μ \) \(Var(x) = E[(x – μ)^2] = E[(x – μ)(x – μ)^T] = E[(Λz + ϵ)(Λz + ϵ)^T]\) \(=E[Λzz^TΛ^T + ϵz^TΛ^T + Λzϵ^T + ϵϵ^T]\) \(=ΛE[zz^T]Λ^T + E[ϵz^TΛ^T] + E[Λzϵ^T] + E[ϵϵ^T]\) \(=Λ.Var(z).Λ^T + E[ϵz^TΛ^T] + E[Λzϵ^T] + Var(ϵ)\)
ϵ and z are independent, then the join probability of p(ϵ,z) = p(ϵ)*p(z), and \(E[ϵz]=\int_{ϵ}\int_{z} ϵ*z*p(ϵ,z) dϵ dz\)\(=\int_{ϵ}\int_{z} ϵ*z*p(ϵ)*p(z) dϵ dz\) \(=\int_{ϵ} ϵ*p(ϵ) \int_{z} z*p(z) dz dϵ\) \(=E[ϵ]E[z]\)
So:\(Var(x)=ΛΛ^T + Ψ\)
Therefore \(x \sim N(μ, ΛΛ^T + Ψ)\) and \(P(x) = \frac{1}{{(2π)}^{\frac{n}{2}} |ΛΛ^T + Ψ|^\frac{1}{2}} exp(-\frac{1}{2} (x-μ)^T {(ΛΛ^T + Ψ)}^{-1} (x-μ))\)
\(Λ \in R^{n*d}\) has far fewer parameters than a full covariance matrix, and because Ψ is diagonal, \(ΛΛ^T + Ψ\) is most likely invertible even when n >> m.
To find Ψ, μ, Λ, we need to maximize the log-likelihood function.\(l(Ψ, μ, Λ) = \sum_{i=1}^m log(P(x^{(i)}; Ψ, μ, Λ))\) \(= \sum_{i=1}^m log(\frac{1}{{(2π)}^{\frac{n}{2}} |ΛΛ^T + Ψ|^\frac{1}{2}} exp(-\frac{1}{2} (x^{(i)}-μ)^T {(ΛΛ^T + Ψ)}^{-1} (x^{(i)}-μ)))\)
This maximization problem cannot be solved by calculating the \(∇_Ψ l(Ψ, μ, Λ) = 0\), \(∇_μ l(Ψ, μ, Λ) = 0\), \(∇_Λ l(Ψ, μ, Λ) = 0\). However using the EM algorithm, we can solve that problem.
More details can be found in this video: https://www.youtube.com/watch?v=ey2PE5xi9-A
Restricted Boltzmann Machine
A restricted Boltzmann machine (RBM) is a two-layer stochastic neural network where the first layer consists of observed data variables (or visible units), and the second layer consists of latent variables (or hidden units). The visible layer is fully connected to the hidden layer. Both the visible and hidden layers are restricted to have no within-layer connections.
In this model, we update the parameters using the following equations:
\(W := W + α * \frac{x⊗Transpose(h_0) – v_1 ⊗ Transpose(h_1)}{n} \\ b_v := b_v + α * mean(x – v_1) \\ b_h := b_h + α * mean(h_0 – h_1) \\ error = mean(square(x – v_1))\).
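A minimal sketch of one contrastive-divergence (CD-1) update implementing those equations (here x is a batch of n examples with one row per example, h_0 and h_1 are hidden activations given the data and the reconstruction, and v_1 is the reconstruction; the names and the use of probabilities rather than binary samples in the update are illustrative assumptions):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def cd1_update(x, W, b_v, b_h, alpha, rng):
    """One CD-1 step. x: (n, n_visible); W: (n_visible, n_hidden); b_v, b_h: bias vectors."""
    n = x.shape[0]
    h0 = sigmoid(x @ W + b_h)                          # hidden probabilities given the data
    h0_sample = (rng.random(h0.shape) < h0) * 1.0      # sampled hidden states
    v1 = sigmoid(h0_sample @ W.T + b_v)                # reconstruction of the visible layer
    h1 = sigmoid(v1 @ W + b_h)                         # hidden probabilities given the reconstruction
    W += alpha * (x.T @ h0 - v1.T @ h1) / n
    b_v += alpha * np.mean(x - v1, axis=0)
    b_h += alpha * np.mean(h0 - h1, axis=0)
    error = np.mean(np.square(x - v1))
    return W, b_v, b_h, error
```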
Deep Belief Network
A deep belief network is obtained by stacking several RBMs on top of each other. The hidden layer of the RBM at layer i becomes the input of the RBM at layer i+1. The first layer RBM gets as input the input of the network, and the hidden layer of the last RBM represents the output.
Autoencoders
An autoencoder, autoassociator or Diabolo network is a deterministic artificial neural network used for unsupervised learning of efficient codings. The aim of an autoencoder is to learn a representation (encoding) for a set of data, typically for the purpose of dimensionality reduction. A deep autoencoder contains multiple hidden units.
Loss function
For binary values, the loss function is defined as:
\(loss(x,\hat{x}) = -\sum_{k=1}^{size(x)} \left[ x_k.log(\hat{x_k}) + (1-x_k).log(1 – \hat{x_k}) \right]\).
For real values, the loss function is defined as:
\(loss(x,\hat{x}) = ½ \sum_{k=1}^{size(x)} (x_k – \hat{x_k})^2\).
Dimensionality reduction
Being nonlinear, autoencoders can often separate data better than PCA, which is limited to linear projections.
Variational Autoencoder
Variational autoencoder (VAE) models inherit the autoencoder architecture, but make strong assumptions concerning the distribution of the latent variables. In general, we suppose the distribution of the latent variable is Gaussian.
The training algorithm used in VAEs is similar to the EM algorithm.
As you have seen, calculating multiple integrals is tricky even for simple functions and regions. For complicated functions, it may not be possible to evaluate one of the iterated integrals in a simple closed form. Luckily there are numerical methods for approximating the value of a multiple integral. The method we will discuss is called the Monte Carlo method. The idea behind it is based on the concept of the average value of a function, which you learned in single-variable calculus. Recall that for a continuous function \(f (x)\), the average value \(\bar f \text{ of }f\) over an interval \([a,b]\) is defined as
\[\bar f = \dfrac{1}{b-a}\int_a^b f (x)\,dx \label{Eq3.11}\]
The quantity \(b − a\) is the length of the interval \([a,b]\), which can be thought of as the “volume” of the interval. Applying the same reasoning to functions of two or three variables, we define the average value of \(f (x, y)\) over a region \(R\) to be
\[\bar f = \dfrac{1}{A(R)} \iint\limits_R f (x, y)\,d A \label{Eq3.12}\]
where \(A(R)\) is the area of the region \(R\), and we define the average value of \(f (x, y, z)\) over a solid \(S\) to be
\[\bar f = \dfrac{1}{V(S)} \iiint\limits_S f (x, y, z)\,dV \label{Eq3.13}\]
where \(V(S)\) is the volume of the solid \(S\). Thus, for example, we have
\[\iint\limits_R f (x, y)\,d A = A(R) \bar f \label{Eq3.14}\]
The average value of \(f (x, y)\) over \(R\) can be thought of as representing the sum of all the values of \(f\) divided by the number of points in \(R\). Unfortunately there are an infinite number (in fact, uncountably many) of points in any region, i.e. they can not be listed in a discrete sequence. But what if we took a very large number \(N\) of random points in the region \(R\) (which can be generated by a computer) and then took the average of the values of \(f\) for those points, and used that average as the value of \(\bar f\) ? This is exactly what the Monte Carlo method does. So in Formula \ref{Eq3.14} the approximation we get is
\[\iint\limits_R f (x, y)\,d A \approx A(R) \bar f \pm A(R) \sqrt{\dfrac{\bar {f^2}-(\bar f)^2}{N}}\label{Eq3.15}\]
where
\[\bar f = \dfrac{\sum_{i=1}^{N}f(x_i,y_i)}{N}\text{ and } \bar {f^2} = \dfrac{\sum_{i=1}^{N}(f(x_i,y_i))^2}{N}\label{Eq3.16}\]
with the sums taken over the \(N\) random points \((x_1 , y_1), ..., (x_N , y_N )\). The \(\pm\) “error term” in Formula \ref{Eq3.15} does not really provide hard bounds on the approximation. It represents a single standard deviation from the expected value of the integral. That is, it provides a likely bound on the error. Due to its use of random points, the Monte Carlo method is an example of a probabilistic method (as opposed to deterministic methods such as Newton’s method, which use a specific formula for generating points).
For example, we can use Formula \ref{Eq3.15} to approximate the volume \(V\) under the plane \(z = 8x + 6y\) over the rectangle \(R = [0,1] \times [0,2]\). In Example 3.1 in Section 3.1, we showed that the actual volume is 20. Below is a code listing (montecarlo.java) for a Java program that calculates the volume, using a number of points \(N\) that is passed on the command line as a parameter.
Listing 3.1 Program listing for montecarlo.java
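The original Java listing is not reproduced here; the following Python sketch (not the original montecarlo.java) carries out the same computation of Formulas \ref{Eq3.15} and \ref{Eq3.16} for \(f(x, y) = 8x + 6y\) over \(R = [0,1] \times [0,2]\):

```python
import math
import random
import sys

def montecarlo(N):
    f = lambda x, y: 8 * x + 6 * y
    area = 1.0 * 2.0                     # A(R) for R = [0,1] x [0,2]
    s = s2 = 0.0
    for _ in range(N):
        x = random.uniform(0.0, 1.0)
        y = random.uniform(0.0, 2.0)
        v = f(x, y)
        s += v
        s2 += v * v
    fbar, f2bar = s / N, s2 / N
    estimate = area * fbar
    error = area * math.sqrt((f2bar - fbar ** 2) / N)   # one standard deviation
    return estimate, error

if __name__ == "__main__":
    N = int(sys.argv[1]) if len(sys.argv) > 1 else 100
    print(montecarlo(N))   # estimate should be near the exact volume, 20
```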
The results of running this program with various numbers of random points (e.g. java montecarlo 100) are shown below:
As you can see, the approximation is fairly good. As \(N \to \infty\), it can be shown that the Monte Carlo approximation converges to the actual volume, with the error decreasing on the order of \(O(1/\sqrt{N})\) (in computational complexity terminology).
In the above example the region \(R\) was a rectangle. To use the Monte Carlo method for a nonrectangular (bounded) region \(R\), only a slight modification is needed. Pick a rectangle \(\tilde R \text{ that encloses }R\), and generate random points in that rectangle as before. Then use those points in the calculation of \(\bar f\) only if they are inside \(R\). There is no need to calculate the area of \(R\) for Equation \ref{Eq3.15} in this case, since the exclusion of points not inside \(R\) allows you to use the area of the rectangle \(\tilde R\) instead, similar to before.
For instance, in Example 3.4 we showed that the volume under the surface \(z = 8x + 6y\) over the nonrectangular region \(R = {(x, y) : 0 ≤ x ≤ 1, 0 ≤ y ≤ 2x^2 }\) is 6.4. Since the rectangle \(\tilde R = [0,1] \times [0,2] \text{ contains }R\), we can use the same program as before, with the only change being a check to see if \(y < 2x^2\) for a random point \((x, y) \text{ in }[0,1] \times [0,2]\). Listing 3.2 below contains the code (montecarlo2.java):
Listing 3.2 Program listing for montecarlo2.java
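Again, only a sketch in place of the Java listing: relative to the previous program, the change is that points outside \(R\) contribute nothing while \(N\) still counts every point generated in the enclosing rectangle \(\tilde R = [0,1] \times [0,2]\):

```python
import random

def montecarlo2(N):
    f = lambda x, y: 8 * x + 6 * y
    area = 1.0 * 2.0                  # area of the enclosing rectangle [0,1] x [0,2]
    s = 0.0
    for _ in range(N):
        x = random.uniform(0.0, 1.0)
        y = random.uniform(0.0, 2.0)
        if y < 2 * x ** 2:            # use the point only if it lies inside R
            s += f(x, y)
    return area * s / N               # approaches the exact value, 6.4

print(montecarlo2(100000))
```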
The results of running the program with various numbers of random points (e.g. java montecarlo2 1000) are shown below:
To use the Monte Carlo method to evaluate triple integrals, you will need to generate random triples \((x, y, z)\) in a parallelepiped, instead of random pairs \((x, y)\) in a rectangle, and use the volume of the parallelepiped instead of the area of a rectangle in Equation \ref{Eq3.15} (see Exercise 2). For a more detailed discussion of numerical integration methods, see PRESS et al.
I have seen two definitions of Beta. One is $$\beta = \rho\dfrac{\sigma_{asset}}{\sigma_{market}}$$ Here $\rho$ is the correlation coefficient;
another one is $$\beta = \dfrac{r_{expect} - r_{risk\ free}}{r_{market} - r_{risk\ free}}$$ I don't know which one is correct, or whether they are equivalent. By the way, is $\sigma_{asset}$ here the volatility of the historical return or of the expected return?
And I think people said that reading first chapter of Do Carmo mostly fixed the problems in that regard. The only person I asked about the second pset said that his main difficulty was in solving the ODEs
Yeah here there's the double whammy in grad school that every grad student has to take the full year of algebra/analysis/topology, while a number of them already don't care much for some subset, and then they only have to pass rather the class
I know 2 years ago apparently it mostly avoided commutative algebra, half because the professor himself doesn't seem to like it that much and half because he was like yeah the algebraists all place out so I'm assuming everyone here is an analyst and doesn't care about commutative algebra
Then the year after another guy taught and made it mostly commutative algebra + a bit of varieties + Cech cohomology at the end from nowhere and everyone was like uhhh. Then apparently this year was more of an experiment, in part from requests to make things more geometric
It's got 3 "underground" floors (quotation marks because the place is on a very tall hill so the first 3 floors are a good bit above the street), and then 9 floors above ground. The grad lounge is in the top floor and overlooks the city and lake, it's real nice
The basement floors have the library and all the classrooms (each of them has a lot more area than the higher ones), floor 1 is basically just the entrance, I'm not sure what's on the second floor, 3-8 is all offices, and 9 has the grad lounge mainly
And then there's one weird area called the math bunker that's trickier to access, you have to leave the building from the first floor, head outside (still walking on the roof of the basement floors), go to this other structure, and then get in. Some number of grad student cubicles are there (other grad students get offices in the main building)
It's hard to get a feel for which places are good at undergrad math. Highly ranked places are known for having good researchers but there's no "How well does this place teach?" ranking which is kinda more relevant if you're an undergrad
I think interest might have started the trend, though it is true that grad admissions now is starting to make it closer to an expectation (friends of mine say that for experimental physics, classes and all definitely don't cut it anymore)
In math I don't have a clear picture. It seems there are a lot of Mickey Mouse projects that don't seem to help people much, but more and more people seem to do more serious things and that seems to become a bonus
One of my professors said it to describe a bunch of REUs, basically boils down to problems that some of these give their students which nobody really cares about but which undergrads could work on and get a paper out of
@TedShifrin i think universities have been ostensibly a game of credentialism for a long time, they just used to be gated off to a lot more people than they are now (see: ppl from backgrounds like mine) and now that budgets shrink to nothing (while administrative costs balloon) the problem gets harder and harder for students
In order to show that $x=0$ is asymptotically stable, one needs to show that $$\forall \varepsilon > 0, \; \exists\, T > 0 \; \mathrm{s.t.} \; t > T \implies || x ( t ) - 0 || < \varepsilon.$$The intuitive sketch of the proof is that one has to fit a sublevel set of continuous functions $...
"If $U$ is a domain in $\Bbb C$ and $K$ is a compact subset of $U$, then for all holomorphic functions on $U$, we have $\sup_{z \in K}|f(z)| \leq C_K \|f\|_{L^2(U)}$ with $C_K$ depending only on $K$ and $U$" this took me way longer than it should have
Well, $A$ has these two dictinct eigenvalues meaning that $A$ can be diagonalised to a diagonal matrix with these two values as its diagonal. What will that mean when multiplied to a given vector (x,y) and how will the magnitude of that vector changed?
Alternately, compute the operator norm of $A$ and see if it is larger or smaller than 2, 1/2
Generally, speaking, given. $\alpha=a+b\sqrt{\delta}$, $\beta=c+d\sqrt{\delta}$ we have that multiplication (which I am writing as $\otimes$) is $\alpha\otimes\beta=(a\cdot c+b\cdot d\cdot\delta)+(b\cdot c+a\cdot d)\sqrt{\delta}$
Yep, the reason I am exploring alternative routes of showing associativity is because writing out three elements worth of variables is taking up more than a single line in Latex, and that is really bugging my desire to keep things straight.
hmm... I wonder if you can argue about the rationals forming a ring (hence using commutativity, associativity and distributivity). You cannot do that for the field you are calculating, but you might be able to take shortcuts by using the multiplication rule and then properties of the ring $\Bbb{Q}$
for example writing $x = ac+bd\delta$ and $y = bc+ad$ we then have $(\alpha \otimes \beta) \otimes \gamma = (xe +yf\delta) + (ye + xf)\sqrt{\delta}$ and then you can argue with the ring property of $\Bbb{Q}$ thus allowing you to deduce $\alpha \otimes (\beta \otimes \gamma)$
I feel like there's a vague consensus that an arithmetic statement is "provable" if and only if ZFC proves it. But I wonder what makes ZFC so great, that it's the standard working theory by which we judge everything.
I'm not sure if I'm making any sense. Let me know if I should either clarify what I mean or shut up. :D
Associativity proofs in general have no shortcuts for arbitrary algebraic systems, that is why non associative algebras are more complicated and need things like Lie algebra machineries and morphisms to make sense of
One aspect, which I will illustrate, of the "push-button" efficacy of Isabelle/HOL is its automation of the classic "diagonalization" argument by Cantor (recall that this states that there is no surjection from the naturals to its power set, or more generally any set to its power set).theorem ...
The axiom of triviality is also used extensively in computer verification languages... take Cantor's Diagonalization theorem. It is obvious.
(but seriously, the best tactic is over powered...)
Extensions is such a powerful idea. I wonder if there exists an algebraic structure such that any extension of it will produce a contradiction. O wait, there are maximal algebraic structures such that given some ordering, it is the largest possible, e.g. surreals are the largest field possible
It says on Wikipedia that any ordered field can be embedded in the Surreal number system. Is this true? How is it done, or if it is unknown (or unknowable) what is the proof that an embedding exists for any ordered field?
Here's a question for you: We know that no set of axioms will ever decide all statements, from Gödel's Incompleteness Theorems. However, do there exist statements that cannot be decided by any set of axioms except ones which contain one or more axioms dealing directly with that particular statement?
"Infinity exists" comes to mind as a potential candidate statement.
Well, take ZFC as an example, CH is independent of ZFC, meaning you cannot prove nor disprove CH using anything from ZFC. However, there are many equivalent axioms to CH or derives CH, thus if your set of axioms contain those, then you can decide the truth value of CH in that system
@Rithaniel That is really the crux on those rambles about infinity I made in this chat some weeks ago. I wonder to show that is false by finding a finite sentence and procedure that can produce infinity
but so far failed
Put it in another way, an equivalent formulation of that (possibly open) problem is:
> Does there exists a computable proof verifier P such that the axiom of infinity becomes a theorem without assuming the existence of any infinite object?
If you were to show that you can attain infinity from finite things, you'd have a bombshell on your hands. It's widely accepted that you can't. In fact, I believe there are some proofs floating around that you can't attain infinity from the finite.
My philosophy of infinity however is not good enough as implicitly pointed out when many users who engaged with my rambles always managed to find counterexamples that escape every definition of an infinite object I proposed, which is why you don't see my rambles about infinity in recent days, until I finish reading that philosophy of infinity book
The knapsack problem or rucksack problem is a problem in combinatorial optimization: Given a set of items, each with a weight and a value, determine the number of each item to include in a collection so that the total weight is less than or equal to a given limit and the total value is as large as possible. It derives its name from the problem faced by someone who is constrained by a fixed-size knapsack and must fill it with the most valuable items.The problem often arises in resource allocation where there are financial constraints and is studied in fields such as combinatorics, computer science...
O great, given a transcendental $s$, computing $\min_P(|P(s)|)$ is a knapsack problem
hmm...
By the fundamental theorem of algebra, every complex polynomial $P$ can be expressed as:
$$P(x) = \prod_{k=0}^n (x - \lambda_k)$$
If the coefficients of $P$ are natural numbers , then all $\lambda_k$ are algebraic
Thus given $s$ transcendental, to minimise $|P(s)|$ will be given as follows:
The first thing I think of with that particular one is to replace the $(1+z^2)$ with $z^2$. Though, this is just at a cursory glance, so it would be worth checking to make sure that such a replacement doesn't have any ugly corner cases.
In number theory, a Liouville number is a real number x with the property that, for every positive integer n, there exist integers p and q with q > 1 such that $0<\left|x-\frac{p}{q}\right|<\frac{1}{q^{n}}$.
Do these still exist if the axiom of infinity is blown up?
Hmmm...
Under a finitist framework where only potential infinity in the form of natural induction exists, define the partial sum:
$$\sum_{k=1}^M \frac{1}{b^{k!}}$$
The resulting partial sums for each M form a monotonically increasing sequence, which converges by ratio test
therefore by induction, there exists some number $L$ that is the limit of the above partial sums. The proof of transcendence can then proceed as usual, thus transcendental numbers can be constructed in a finitist framework
There's this theorem in Spivak's book of Calculus:Theorem 7Suppose that $f$ is continuous at $a$, and that $f'(x)$ exists for all $x$ in some interval containing $a$, except perhaps for $x=a$. Suppose, moreover, that $\lim_{x \to a} f'(x)$ exists. Then $f'(a)$ also exists, and$$f'...
and neither Rolle nor mean value theorem need the axiom of choice
Thus under finitism, we can construct at least one transcendental number. If we throw away all transcendental functions, it means we can construct a number that cannot be reached from any algebraic procedure
Therefore, the conjecture is that actual infinity has a close relationship to transcendental numbers. Anything else I need to finish that book to comment
typo: neither Rolle nor mean value theorem need the axiom of choice nor an infinite set
> are there palindromes such that the explosion of palindromes is a palindrome nonstop palindrome explosion palindrome prime square palindrome explosion palirome prime explosion explosion palindrome explosion cyclone cyclone cyclone hurricane palindrome explosion palindrome palindrome explosion explosion cyclone clyclonye clycone mathphile palirdlrome explosion rexplosion palirdrome expliarome explosion exploesion
Natural waters contain a wide variety of solutes that act together to determine the pH, which typically ranges from 6 to 9. Some of the major processes that affect the acid-base balance of natural systems are:
- Contact with atmospheric carbon dioxide
- Input of acidic gases from volcanic and industrial emissions
- Contact with minerals, rocks, and clays
- Presence of buffer systems such as carbonate, phosphate, silicate, and borate
- Presence of acidic cations, such as \(Fe(H_2O)_6^{3+}\)
- Input and removal of \(CO_2\) through respiration and photosynthesis
- Other biological processes, such as oxidation (\(O_2+ 4H^+ +4e^- \rightarrow 2H_2O\)), nitrification, denitrification, and sulfate reduction.
In this chapter and also in the next one, which deals specifically with the carbonate system, we will consider acid-base equilibria as they apply to natural waters. We will assume that you are already familiar with such fundamentals as the Arrhenius and Brønsted concepts of acids and bases and the pH scale. You should also have some familiarity with the concepts of free energy and activity. The treatment of equilibrium calculations will likely extend somewhat beyond what you encountered in your General Chemistry course, and considerable emphasis will be placed on graphical methods of estimating equilibrium concentrations of various species.
1 Proton donor-acceptor Equilibria
1.1 Acid-base strengths
The tendency of an acid or a base to donate or accept a proton cannot be measured for individual species separately; the best we can do is compare two donor-acceptor systems. One of these is commonly the solvent, normally water. Thus the proton exchange
\[ HA + H_2O \rightleftharpoons H_3O^+ + A^- \tag{M1.1}\]
with equilibrium constant \(K_1\)
is the sum of the two reactions
\[ HA \rightleftharpoons H^+ + A^- \]
with equilibrium constant \(K_2\), and
\[ H^+ + H_2O \rightleftharpoons H_3O^+ \]
with equilibrium constant \(K_3=1\).
The unity value of \(K_3\) stems from the defined value of \(\Delta G^o = 0\) for this reaction, and assumes that the activity of the \(H_2O\) is unity. Combining these equilibrium constants, we have \(K_1 = K_2 K_3 = K_2\).
Formally, equilibrium constants for reactions in ionic solutions are defined in terms of activities, in which the reference state is a hypothetical one in which individual ion activities are unity but there are no ion-ion interactions in the solution of the ion in pure water.
\[K=\dfrac{\{H^+\}\{A^-\}}{\{HA\}\{H_2O\}} \tag{M1.6}\]
There has been considerable debate about the \(K_a\) values of water and of the hydronium ion. The conventional value of \(10^{-14}\) shown for \(H_2O\) in Table 1 is very commonly used, but it does not reflect the observed relative acid strength of \(H_2O\) when it is compared with other very weak acids. When such comparisons are carried out in media in which \(H_2O\) and the other acid are present in comparable concentrations, water behaves as a much weaker acid with \(K_a \approx 10^{-16}\).
To understand the discrepancy, we must recall that acids are usually treated as solutes, so we must consider a proton-donor \(H_2O\) molecule in this context. Although the fraction of \(H_2O\) molecules that will lose a proton is extremely small (hence the designation of those that do so as solute molecules), virtually any \(H_2O\) molecule is capable of accepting the proton, so these would most realistically be regarded as solvent molecules.
The equation that defines the acid strength of water is
\[ H_2O\,(solute) + H_2O\,(solvent) \rightleftharpoons H_3O^+ + OH^- \]
whose equilibrium constant is
\[K=\dfrac{(\{H_3O^+\}/1)\,(\{OH^-\}/1)}{(\{H_2O\}/1)\,(\{H_2O\}/55.5)} \tag{M1.7}\]
in which the standard states are shown explicitly. Do you see the problem? The standard state of a solute is normally taken as unit molality, so we do not usually show it (or even think about it!) in most equilibrium expressions for substances in solution. For a solvent, however, the standard state is the pure liquid, which for water corresponds to a molality of 55.5.
The value of \(K_a=10^{-14}\) for water refers to the reaction \(H_2O \rightarrow H^+ + OH^-\), in which \(H_2O\) is treated only as the solvent. Using Equation M1.7, the corresponding \( K_a\) has the value \(10^{-14}/55.5 = 1.8 \times 10^{-16}\), which is close to the observed acid strength noted above.
What difference does all this make? In the context of Table 1 or Fig. 2, the exact \(pK_a\) of water is of little significance since no other species having similar \(pK_a\)s are shown. On the other hand, if one were considering the acid-base reaction between glycerol (pKa = 14.2) and water, the prediction of equilibrium concentrations (and in this case, the direction of the net reaction) would depend on which value of the water \(pK_a\) is used.
Instead of using pure water as the reference state, an alternative convention is to use a solution of some arbitrary constant ionic strength in which the species of interest is "infinitely dilute". In practice, this means a concentration of less than about one-tenth of the total ionic concentration. This convention, which is widely used in chemical oceanography, incorporates the equilibrium quotient into the equilibrium constant:
\[^cK=\dfrac{[H^+][A^-]}{[HA]} \tag{M1.8}\]
A third alternative is to use a "mixed acidity constant" in which the hydrogen ion concentration is expressed on the activity scale (which corresponds to the values obtained by experimental pH measurements), but the acid and base amounts are expressed in concentrations.
\[K'=\dfrac{\{H^+\}[A^-]}{[HA]} \tag{M1.9}\]
The value of \(K'\) can be estimated from the Güntelberg approximation for single-ion activities:
\[ pK' = pK+ \dfrac{0.5(z^2_{acid}-z^2_{base})\sqrt{I}}{1+\sqrt{I}} \tag{M1.10}\]
in which \(I\) is the ionic strength, and \(z_{acid}\) and \(z_{base}\) are the ionic charges of the acid and base species, respectively.
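A small sketch of how Equation M1.10 is applied in practice (the numerical pK and the example ion charges here are illustrative assumptions, not values taken from the text):

```python
import math

def pK_mixed(pK, z_acid, z_base, I):
    """Guentelberg estimate of the mixed acidity constant pK' (Eq. M1.10)."""
    sqrt_I = math.sqrt(I)
    return pK + 0.5 * (z_acid ** 2 - z_base ** 2) * sqrt_I / (1 + sqrt_I)

# Illustrative example: HCO3- (charge -1) losing a proton to give CO3^2- (charge -2),
# with pK ~ 10.3, at an ionic strength of 0.1 mol/kg:
print(pK_mixed(10.3, -1, -2, 0.1))   # roughly 9.9 -- the apparent pK' drops with ionic strength
```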
Volume 68, № 4, 2016
Ukr. Mat. Zh. - 2016. - 68, № 4. - pp. 435-448
The purpose of this work is to obtain Jackson and converse inequalities of the polynomial approximation in Bergman spaces. Some known results presented for the moduli of continuity are extended to the moduli of smoothness. We proved some simultaneous approximation theorems and obtained the Nikolskii – Stechkin inequality for polynomials in these spaces.
Approximation of some classes of set-valued periodic functions by generalized trigonometric polynomials
Ukr. Mat. Zh. - 2016. - 68, № 4. - pp. 449-459
We generalize some known results on the best, best linear, and best one-sided approximations by trigonometric polynomials from the classes of $2 \pi$ -periodic functions presented in the form of convolutions to the case of classes of set-valued functions.
Ukr. Mat. Zh. - 2016. - 68, № 4. - pp. 460-468
We obtain the maximum principle for two versions of the Laplacian with respect to the measure, namely, for the “classical” and “$L^2$” versions in a domain of the Hilbert space.
Ukr. Mat. Zh. - 2016. - 68, № 4. - pp. 469-484
We introduce the notion of “$s$”-convolution on the hyperbolic plane $H^2$ and consider its properties. Analogs of the Helgason spherical transform on the spaces of compactly supported distributions in $H^2$ are studied. We prove a Paley –Wiener – Schwartz-type theorem for these transforms.
Ukr. Mat. Zh. - 2016. - 68, № 4. - pp. 485-494
The aim of the paper is to determine the degree of approximation of functions by matrix means of their Fourier series in a new space of functions introduced by Das, Nath, and Ray. In particular, we extend some results of Leindler and some other results by weakening the monotonicity conditions in results obtained by Singh and Sonker for some classes of numerical sequences introduced by Mohapatra and Szal and present new results by using matrix means.
Jacobi-type block matrices corresponding to the two-dimensional moment problem: polynomials of the second kind and Weyl function
Ukr. Mat. Zh. - 2016. - 68, № 4. - pp. 495-505
We continue our investigations of Jacobi-type symmetric matrices corresponding to the two-dimensional real power moment problem. We introduce polynomials of second kind and the corresponding analog of the Weyl function.
Sufficient conditions for the existence of the $\upsilon$ -density for zeros of entire function of order zero
Ukr. Mat. Zh. - 2016. - 68, № 4. - pp. 506-516
We select the subclasses of zero-order entire functions $f$ for which we present sufficient conditions for the existence of the $\upsilon$-density for zeros of $f$ in terms of the asymptotic behavior of the logarithmic derivative $F$ and the regular growth of the Fourier coefficients of $F$.
Ukr. Mat. Zh. - 2016. - 68, № 4. - pp. 517-528
We study the existence of global attractors in discontinuous infinite-dimensional dynamical systems, which may have trajectories with infinitely many impulsive perturbations. We also select a class of impulsive systems for which the existence of a global attractor is proved for weakly nonlinear parabolic equations.
Ukr. Mat. Zh. - 2016. - 68, № 4. - pp. 529-541
We study the Potts model with external field on the Cayley tree of order $k \geq 2$. For the antiferromagnetic Potts model with external field and $k \geq 6$ and $q \geq 3$, it is shown that the weakly periodic Gibbs measure, which is not periodic, is not unique. For the Potts model with external field equal to zero, we also study weakly periodic Gibbs measures. It is shown that, under certain conditions, the number of these measures cannot be smaller than $2^q - 2$.
Ukr. Mat. Zh. - 2016. - 68, № 4. - pp. 542-550
We study the transformation versions of the Weyl-type theorems from operators $T$ and $S$ for their tensor product $T \otimes S$ in the infinite-dimensional space setting.
Ukr. Mat. Zh. - 2016. - 68, № 4. - pp. 551-562
We describe the isotropic Besov spaces of functions of several variables in the terms of conditions imposed on the Fourier – Haar coefficients.
Ukr. Mat. Zh. - 2016. - 68, № 4. - pp. 563-576
We establish necessary and sufficient conditions for the invertibility of nonlinear differentiable maps in the case of arbitrary Banach spaces. We establish conditions for the existence and uniqueness of bounded and almost periodic solutions of nonlinear differential and difference equations. |
@user193319 I believe the natural extension to multigraphs is just ensuring that $\#(u,v) = \#(\sigma(u),\sigma(v))$ where $\# : V \times V \rightarrow \mathbb{N}$ counts the number of edges between $u$ and $v$ (which would be zero).
I have this exercise: Consider the ring $R$ of polynomials in $n$ variables with integer coefficients. Prove that the polynomial $f(x_1, x_2, \ldots, x_n) = x_1 x_2 \cdots x_n$ has $2^{n+1} - 2$ non-constant polynomials in $R$ dividing it.
But, for $n=2$, I can't find any non-constant divisors of $f(x,y) = xy$ other than $x$, $y$, and $xy$.
I am presently working through example 1.21 in Hatcher's book on wedge sums of topological spaces. He makes a few claims which I am having trouble verifying. First, let me set up some notation. Let $\{X_i\}_{i \in I}$ be a collection of topological spaces. Then $\amalg_{i \in I} X_i := \cup_{i ...
Each of the six faces of a die is marked with an integer, not necessarily positive. The die is rolled 1000 times. Show that there is a time interval such that the product of all rolls in this interval is a cube of an integer. (For example, it could happen that the product of all outcomes between 5th and 20th throws is a cube; obviously, the interval has to include at least one throw!)
On Monday, I ask for an update and get told they’re working on it. On Tuesday I get an updated itinerary!... which is exactly the same as the old one. I tell them as much and am told they’ll review my case
@Adam no. Quite frankly, I never read the title. The title should not contain additional information to the question.
Moreover, the title is vague and doesn't clearly ask a question.
And even more so, your insistence that your question is blameless with regards to the reports indicates more than ever that your question probably should be closed.
If all it takes is adding a simple "My question is that I want to find a counterexample to _______" to your question body and you refuse to do this, even after someone takes the time to give you that advice, then ya, I'd vote to close myself.
but if a title inherently states what the op is looking for I hardly see the fact that it has been explicitly restated as a reason for it to be closed; no, it was because I originally had a lot of errors in the expressions when I typed them out in latex, but I fixed them almost straight away
lol I registered for a forum on Australian politics and it just hasn't sent me a confirmation email at all how bizarre
I have another problem: If Train A leaves at noon from San Francisco and heads for Chicago going 40 mph. Two hours later Train B leaves the same station, also for Chicago, traveling 60 mph. How long until Train B overtakes Train A?
@swagbutton8 as a check, suppose the answer were 6pm. Then train A will have travelled at 40 mph for 6 hours, giving 240 miles. Similarly, train B will have travelled at 60 mph for only four hours, giving 240 miles. So that checks out
By contrast, the answer key result of 4pm would mean that train A has gone for four hours (so 160 miles) and train B for 2 hours (so 120 miles). Hence A is still ahead of B at that point
So yeah, at first glance I’d say the answer key is wrong. The only way I could see it being correct is if they’re including the change of time zones, which I’d find pretty annoying
But 240 miles seems waaay too short to cross two time zones
So my inclination is to say the answer key is nonsense
You can actually show this using only that the derivative of a function is zero if and only if it is constant, the exponential function differentiates to almost itself, and some ingenuity. Suppose that the equation starts in the equivalent form$$ y'' - (r_1+r_2)y' + r_1r_2 y =0. \tag{1} $$(Obvi...
Hi there,
I'm currently going through a proof of why all general solutions to second ODE look the way they look. I have a question mark regarding the linked answer.
Where does the term $e^{(r_1-r_2)x}$ come from?
It seems like it is taken out of the blue, but it yields the desired result. |
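For what it's worth, the role of that factor can be checked symbolically. The sketch below (using sympy; the symbol names are just illustrative) verifies that the two-exponential ansatz satisfies the ODE in equation (1) and shows where $e^{(r_1-r_2)x}$ appears once the reduced first-order equation is multiplied by the integrating factor $e^{-r_2 x}$.

```python
import sympy as sp

x, r1, r2, C1, C2 = sp.symbols('x r1 r2 C1 C2')

# General solution of y'' - (r1 + r2) y' + r1*r2*y = 0 (distinct roots assumed)
y = C1 * sp.exp(r1 * x) + C2 * sp.exp(r2 * x)
residual = sp.diff(y, x, 2) - (r1 + r2) * sp.diff(y, x) + r1 * r2 * y
print(sp.simplify(residual))   # -> 0, so the ansatz satisfies the ODE

# The e^{(r1 - r2) x} factor appears when the reduced first-order equation
# y' - r2*y = C*exp(r1*x) is multiplied by the integrating factor exp(-r2*x):
C = sp.Symbol('C')
rhs = C * sp.exp(r1 * x) * sp.exp(-r2 * x)
print(sp.simplify(rhs))        # exponents combine to C*exp((r1 - r2)*x)
```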
Consider a $C^k$, $k\ge 2$, Lorentzian manifold $(M,g)$ and let $\Box$ be the usual wave operator $\nabla^a\nabla_a$. Given $p\in M$, $s\in\Bbb R,$ and $v\in T_pM$, can we find a neighborhood $U$ of $p$ and $u\in C^k(U)$ such that $\Box u=0$, $u(p)=s$ and $\mathrm{grad}\, u(p)=v$?
The tog is a measure of thermal resistance of a unit area, also known as thermal insulance. It is commonly used in the textile industry and often seen quoted on, for example, duvets and carpet underlay. The Shirley Institute in Manchester, England developed the tog as an easy-to-follow alternative to the SI unit of m²K/W. The name comes from the informal word "togs" for clothing which itself was probably derived from the word toga, a Roman garment. The basic unit of insulation coefficient is the RSI, (1 m²K/W). 1 tog = 0.1 RSI. There is also a clo clothing unit equivalent to 0.155 RSI or 1.55 tog...
The stone or stone weight (abbreviation: st.) is an English and imperial unit of mass now equal to 14 pounds (6.35029318 kg).England and other Germanic-speaking countries of northern Europe formerly used various standardised "stones" for trade, with their values ranging from about 5 to 40 local pounds (roughly 3 to 15 kg) depending on the location and objects weighed. The United Kingdom's imperial system adopted the wool stone of 14 pounds in 1835. With the advent of metrication, Europe's various "stones" were superseded by or adapted to the kilogram from the mid-19th century on. The stone continues...
Can you tell me why this question deserves to be negative? I tried to find faults and I couldn't: I did some research, I did all the calculations I could, and I think it is clear enough. I had deleted it and was going to abandon the site but then I decided to learn what is wrong and see if I ca...
I am a bit confused about angular momentum in classical physics. For an orbital motion of a point mass: if we pick a new coordinate system (one that doesn't move w.r.t. the old coordinate system), angular momentum should still be conserved, right? (I calculated a quite absurd result - it is no longer conserved; there is an additional term that varies with time.)
in the new coordinate system: $\vec {L'}=\vec{r'} \times \vec{p'}$
$=(\vec{R}+\vec{r}) \times \vec{p}$
$=\vec{R} \times \vec{p} + \vec L$
where the first term varies with time ($\vec{R}$ is the constant shift of the coordinate origin, while $\vec{p}$ is, roughly speaking, rotating).
Would anyone be kind enough to shed some light on this for me?
From what we discussed, your literary taste seems to be classical/conventional in nature. That book is inherently unconventional in nature; it's not supposed to be read as a novel, it's supposed to be read as an encyclopedia
@BalarkaSen Dare I say it, my literary taste continues to change as I have kept on reading :-)
One book that I finished reading today, The Sense of An Ending (different from the movie with the same title) is far from anything I would've been able to read, even, two years ago, but I absolutely loved it.
I've just started watching the Fall - it seems good so far (after 1 episode)... I'm with @JohnRennie on the Sherlock Holmes books and would add that the most recent TV episodes were appalling. I've been told to read Agatha Christy but haven't got round to it yet
Is it possible to make a time machine ever? Please give an easy answer, a simple one. A simple answer, but a possibly wrong one, is to say that a time machine is not possible. Currently, we don't have either the technology to build one, nor a definite, proven (or generally accepted) idea of how we could build one. — Countto10, 47 secs ago
@vzn if it's a romantic novel, which it looks like, it's probably not for me - I'm getting to be more and more fussy about books and have a ridiculously long list to read as it is. I'm going to counter that one by suggesting Ann Leckie's Ancillary Justice series
Although if you like epic fantasy, Malazan book of the Fallen is fantastic
@Mithrandir24601 lol it has some love story but its written by a guy so cant be a romantic novel... besides what decent stories dont involve love interests anyway :P ... was just reading his blog, they are gonna do a movie of one of his books with kate winslet, cant beat that right? :P variety.com/2016/film/news/…
@vzn "he falls in love with Daley Cross, an angelic voice in need of a song." I think that counts :P It's not that I don't like it, it's just that authors very rarely do anywhere near a decent job of it. If it's a major part of the plot, it's often either eyeroll worthy and cringy or boring and predictable with OK writing. A notable exception is Stephen Erikson
@vzn depends exactly what you mean by 'love story component', but often yeah... It's not always so bad in sci-fi and fantasy where it's not in the focus so much and just evolves in a reasonable, if predictable way with the storyline, although it depends on what you read (e.g. Brent Weeks, Brandon Sanderson). Of course Patrick Rothfuss completely inverts this trope :) and Lev Grossman is a study on how to do character development and totally destroys typical romance plots
@Slereah The idea is to pick some spacelike hypersurface $\Sigma$ containing $p$. Now specifying $u(p)$ is trivial because the wave equation is invariant under constant perturbations. So that's whatever. But I can specify $\nabla u(p)|\Sigma$ by specifying $u(\cdot, 0)$ and differentiate along the surface. For the Cauchy theorems I can also specify $u_t(\cdot,0)$.
Now take the neighborhood to be $\approx (-\epsilon,\epsilon)\times\Sigma$ and then split the metric like $-dt^2+h$
Do forwards and backwards Cauchy solutions, then check that the derivatives match on the interface $\{0\}\times\Sigma$
Why is it that you can only cool down a substance so far before the energy goes into changing its state? I assume it has something to do with the distance between molecules meaning that intermolecular interactions have less energy in them than making the distance between them even smaller, but why does it create these bonds instead of making the distance smaller / just reducing the temperature more?
Thanks @CooperCape but this leads me another question I forgot ages ago
If you have an electron cloud, is the electric field from that electron just some sort of averaged field from some centre of amplitude or is it a superposition of fields each coming from some point in the cloud? |
The classical version of this question is for Hamiltonian cycles, but there is probably little difference. I will only consider the version with cycles.
In order for a graph to contain a Hamiltonian cycle, the minimal degree should be at least 2. This is essentially the only obstruction for Hamiltonicity. To state this we need to define the following process:
For every pair $\{x,y\} \subseteq [n]$, let $\theta_{xy} \sim U([0,1])$ (independently). Define $G_p = \{ \{x,y\} : \theta_{xy} \leq p \}$.
By construction, $G_p \sim G(n,p)$. As $p$ goes from 0 to 1, more and more edges are "exposed". Bollobás proved the following result:
Let $p_2$ be the minimum $p$ such that the minimal degree of $G_p$ is at least 2.
Let $p_H$ be the minimum $p$ such that $G_p$ is Hamiltonian.
With high probability, $p_2 = p_H$.
The expected degree of a vertex is $p(n-1) \approx pn$. This suggests looking at $p = c/n$. A simple calculation shows that for appropriate values of $c$, the degree of a vertex has distribution roughly Poisson with expectation $c$. In particular, the probability that a vertex has degree less than 2 is roughly $q = e^{-c}(1 + c)$. A further calculation shows that these events for different vertices are roughly independent, and so the distribution of the number of vertices of degree less than 2 is roughly Poisson with expectation $nq$. In particular, the probability that the minimal degree is at least 2 is roughly $e^{-nq}$, for appropriate values of $c$. If $c = \log n + \log \log n + r$ then $q = \frac{\log n + \log \log n + r + 1}{e^rn\log n} \approx \frac{e^{-r}}{n}$, and so $e^{-nq} \approx e^{-e^{-r}}$. This suggests the following result:
The probability that $G(n,p)$ is Hamiltonian for $p = \frac{\log n + \log\log n + r}{n}$ tends to $e^{-e^{-r}}$ for constant $r$.
If $r \to -\infty$ the probability tends to zero, and if $r \to \infty$ it tends to one.
This result indeed holds.
You can see a final project of Brunet for some pointers. For more information, consult any decent textbook on random graphs; Hamiltonicity is a classical topic which would be covered in many of them. |
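As a rough sanity check of the threshold part of this statement, one can estimate the probability that $G(n,p)$ has minimum degree at least 2 (checking Hamiltonicity directly is expensive) and compare it with $e^{-e^{-r}}$. The sketch below is a Monte Carlo estimate in Python; the values of $n$, $r$, and the number of trials are arbitrary choices, and agreement with the asymptotic prediction is only approximate at finite $n$.

```python
import math
import numpy as np

def min_degree_at_least_2_prob(n, r, trials=500, seed=0):
    """Monte Carlo estimate of P(min degree of G(n,p) >= 2)
    for p = (log n + log log n + r)/n."""
    rng = np.random.default_rng(seed)
    p = (math.log(n) + math.log(math.log(n)) + r) / n
    hits = 0
    for _ in range(trials):
        # Sample the strictly upper triangle of the adjacency matrix.
        upper = rng.random((n, n)) < p
        adj = np.triu(upper, 1)
        degrees = adj.sum(0) + adj.sum(1)   # degree = row sum + column sum
        if degrees.min() >= 2:
            hits += 1
    return hits / trials

n, r = 200, 0.5
print("simulated :", min_degree_at_least_2_prob(n, r))
print("predicted :", math.exp(-math.exp(-r)))   # asymptotic value e^{-e^{-r}}
```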
Portfolio optimization techniques, such as those defined under Modern Portfolio Theory (MPT), are mildly predicated on the assumption of joint normality. Even though there will be a set of portfolio weights which minimizes variance regardless of the underlying distributions, correlation is only a complete measure of association if the joint multivariate distribution is normal; i.e., covariance is only an exhaustive measure of co-movement if the joint distributions are themselves normal. We can see this is true because the joint distribution of X and Y is defined by joint normality:
${\frac {1}{2\pi \sigma _{X}\sigma _{Y}{\sqrt {1-\rho ^{2}}}}}\iint _{X\,Y}\exp \left[-{\frac {1}{2(1-\rho ^{2})}}\left({\frac {X^{2}}{\sigma _{X}^{2}}}+{\frac {Y^{2}}{\sigma _{Y}^{2}}}-{\frac {2\rho XY}{\sigma _{X}\sigma _{Y}}}\right)\right]\,\mathrm {d} X\,\mathrm {d} Y$
which, through a proof, can be shown to produce:
$\sigma _{X+Y}={\sqrt {\sigma _{X}^{2}+\sigma _{Y}^{2}+2\rho \sigma _{X}\sigma _{Y}}},$
If now, we define $\omega_i \sigma_i=\sigma_X$ and $\omega_j \sigma_j=\sigma_Y$, then we get back the equation which is used as the basis of mean variance optimization of a two asset portfolio:
$\mathbb{E}[\sigma _{p}^{2}]=\omega_{i}^{2}\sigma _{i}^{2}+\omega_{j}^{2}\sigma _{j}^{2}+2\omega_{i}\omega_{j}\sigma _{i}\sigma _{j}\rho _{ij}$
So while the portfolio covariance matrix can always be computed, to the extent that underlying assets have returns which are not normal the optimization is likely to result in spuriously optimal weights. |
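As a small illustration of that formula, the following sketch computes the two-asset portfolio variance both term by term and via the covariance matrix $w^\top \Sigma w$; the weights, volatilities, and correlation are made-up example values.

```python
import numpy as np

def portfolio_variance(weights, sigmas, rho):
    """Variance of a two-asset portfolio:
    w_i^2 s_i^2 + w_j^2 s_j^2 + 2 w_i w_j s_i s_j rho."""
    w_i, w_j = weights
    s_i, s_j = sigmas
    return w_i**2 * s_i**2 + w_j**2 * s_j**2 + 2 * w_i * w_j * s_i * s_j * rho

def portfolio_variance_matrix(weights, sigmas, rho):
    """Same quantity computed as w^T Sigma w with the 2x2 covariance matrix."""
    s_i, s_j = sigmas
    cov = np.array([[s_i**2, rho * s_i * s_j],
                    [rho * s_i * s_j, s_j**2]])
    w = np.asarray(weights)
    return float(w @ cov @ w)

print(portfolio_variance([0.6, 0.4], [0.2, 0.3], 0.25))
print(portfolio_variance_matrix([0.6, 0.4], [0.2, 0.3], 0.25))  # identical result
```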
I have read in Bayesian Data Analysis by Andrew Gelman that log predictive density can be used to compare Bayesian models due to its connection to the Kullback-Leibler information.
The log predictive density has an important role in statistical model comparison because of its connection to the Kullback-Leibler information measure. In the limit of large sample sizes, the model with the lowest Kullback-Leibler information—and thus, the highest expected log predictive density—will have the highest posterior probability. Thus, it seems reasonable to use expected log predictive density as a measure of overall model fit. Due to its generality, we use the log predictive density to measure predictive accuracy in this chapter.
The Kullback-Leibler divergence between $Q(\theta)$ and $P(\theta|y)$ is,
$D_{KL}(Q||P) = \sum_{\theta}Q(\theta)\ln\frac{Q(\theta)}{P(\theta|y)}$
$D_{KL}(Q||P) = E_\theta[\ln Q(\theta)] - E_\theta[\ln P(\theta,y)] + \ln P(y)$
$D_{KL}(Q||P) = E_\theta[\ln Q(\theta)] - E_\theta[\ln P(y|\theta)P(\theta)] + \ln P(y)$
Is this the connection between KL information and the expected log predictive density? If this is the connection, how do we conclude that the maximum expected log predictive density implies the minimum KL divergence? Comparing different models means changing the values of $\theta$. However, when changing $\theta$, the term $E_\theta[\ln Q(\theta)]$ also changes. Why do we neglect the effect of this term on the KL divergence? Isn't the predictive density also used to calculate the prediction for unseen $\bar{y}$? The book also says,
The ideal measure of a model’s fit would be its out-of-sample predictive performance for new data produced from the true data-generating process (external validation).
Therefore, choosing the model with the maximum predictive density for new data points means that we choose the model that gives the maximum value for the predictions. Why would a model that gives the maximum value for the predictions be better than other models?
I appreciate if someone can help me to understand these connections. Thanks |
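For concreteness, here is a minimal numeric sketch of the discrete KL divergence defined above; the distributions Q and P are made-up toy values.

```python
import numpy as np

def kl_divergence(q, p):
    """Discrete KL divergence D_KL(Q || P) = sum_theta Q(theta) ln(Q(theta)/P(theta))."""
    q = np.asarray(q, dtype=float)
    p = np.asarray(p, dtype=float)
    mask = q > 0                       # terms with Q(theta) = 0 contribute nothing
    return float(np.sum(q[mask] * np.log(q[mask] / p[mask])))

# Toy example over three parameter values
q = [0.2, 0.5, 0.3]                    # approximating distribution Q(theta)
p = [0.25, 0.45, 0.30]                 # posterior P(theta | y)
print(kl_divergence(q, p))             # small, since Q is close to P
print(kl_divergence(q, q))             # exactly 0 when the distributions coincide
```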
Third, since $\sf{L} \subseteq \sf{NC}^2$, is there an algorithm to convert any logspace algorithm into a parallel version?
It can be shown (see the Arora and Barak textbook) that, given a $t(n)$-time TM $M$, an oblivious TM $M'$ (i.e. a TM whose head movement is independent of its input $x$) can construct a circuit $C_n$ to compute $M(x)$ where $|x| = n$.
The proof sketch is along the lines of having $M'$ simulate $M$ and defining "snapshots" of its state (i.e. head positions, symbols at heads) at each time-step $t_i$ (think of a computational log). Each step $t_i$ can be computed from $x$ and the state at $t_{i-1}$. Because each snapshot involves only a constant-sized string, and there exist only a constant number of strings of that size, the snapshot at $t_i$ can be computed by a constant-sized circuit.
If you compose the constant-sized circuits for each $t_i$ we have a circuit that computes $M(x)$. Using this fact, along with the restriction that the language of $M$ is in $\sf{L}$ we see that our circuit $C_n$ is by definition
logspace-uniform, where uniformity just means that the circuits in our circuit family $\{C_n\}$ computing $M(x)$ all have the same algorithm, not a custom-made algorithm for each circuit operating on input size $n$.
Again, from the definition of uniformity we see that circuits deciding any language in $\sf{L}$ must have a function $\text{size}(n)$ computable in $O(\log n)$ space. The circuit family $\sf{AC}^1$ has at most $O(\log n)$ depth.
Finally it can be shown that $\sf{AC}^1 \subseteq \sf{NC}^2$ giving the relation in question.
Fourth, it sounds like most people assume that $\sf{NC} \neq \sf{P}$ in the same way that $\sf{P} \neq \sf{NP}$. What is the intuition behind this?
Before we go further, let us define what $\sf{P}$-completeness means.
A language $L$ is $\sf{P}$-complete if $L \in \sf{P}$ and every language in $\sf{P}$ is logspace reducible to it. Additionally, if $L$ is $\sf{P}$-complete then the following are true
$L \in \sf{NC} \iff \sf{P} = \sf{NC}$
$L \in \sf{L} \iff \sf{P} = \sf{L}$
Now we consider $\sf{NC}$ to be the class of languages efficiently decided by a parallel computer (our circuit). There are some problems in $\sf{P}$ that seem to resist any attempt at parallelization (i.e. Linear Programming, and Circuit Value Problem). That is to say, certain problems require computation to be done in a step-wise fashion.
For example, the Circuit Value Problem is defined as:
Given a circuit $C$, an input $x$, and a gate $g \in C$, what is the output of $g$ on $C(x)$?
We do not know how to compute this any better than computing all the gates $g'$ that come before $g$. Some of them may be computed in parallel, for example if they all occur at the same time-step $t_i$, but we don't know how to compute the outputs of gates at time-step $t_i$ and time-step $t_{i+1}$ together, for the obvious difficulty that gates at $t_{i+1}$ require the output of gates at $t_i$!
This is the intuition behind $\sf{NC} \neq \sf{P}$.
Limits to Parallel Computation is a book about $\sf{P}$-Completeness in similar vein of Garey & Johnson's $\sf{NP}$-Completeness book.
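To make the step-wise nature of the Circuit Value Problem concrete, here is a small sketch that evaluates a Boolean circuit gate by gate in topological order; the circuit, gate names, and gate set are illustrative assumptions, not a standard benchmark.

```python
def evaluate_circuit(gates, inputs):
    """Evaluate a Boolean circuit gate by gate in topological order.

    `gates` maps a gate name to (op, operands), where operands are either
    input names or names of earlier gates. The inherently sequential
    structure mirrors the intuition that gates at step t_{i+1} need the
    outputs of gates at step t_i.
    """
    values = dict(inputs)
    for name, (op, operands) in gates.items():   # assumed topologically ordered
        args = [values[a] for a in operands]
        if op == "AND":
            values[name] = all(args)
        elif op == "OR":
            values[name] = any(args)
        elif op == "NOT":
            values[name] = not args[0]
        else:
            raise ValueError(f"unknown gate type {op}")
    return values

# (x1 AND x2) OR (NOT x3), asking for the output of gate g3
gates = {
    "g1": ("AND", ["x1", "x2"]),
    "g2": ("NOT", ["x3"]),
    "g3": ("OR",  ["g1", "g2"]),
}
print(evaluate_circuit(gates, {"x1": True, "x2": False, "x3": False})["g3"])  # True
```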
Keeping Track of Element Order in Multiphysics Models
Whenever you are building a finite element model in COMSOL Multiphysics, you should be aware of the element order that is being used. This is particularly important for multiphysics models as there are some distinct benefits to using different element orders for different physics. Today, we will review the key concepts behind element order and discuss how it applies to some common multiphysics models.
What Is Element Order?
Whenever we solve a finite element problem, we are approximating the true solution field to a partial differential equation (PDE) over a domain. The finite element method starts by subdividing the modeling domain into smaller, simpler domains called
elements. These elements are defined by a set of points, traditionally called nodes, and each node has a set of shape functions or basis functions. Every shape function is associated with some degrees of freedom. The set of all of these discrete degrees of freedom is traditionally referred to as the solution vector.
Note: You can read more about the process of going from the governing PDE to the solution vector in our previous blog posts “A Brief Introduction to the Weak Form” and “Discretizing the Weak Form Equations“.
Once the solution vector is computed, the finite element approximation to the solution field is constructed by interpolation using the solution vector and the set of all of the basis functions in all of the elements. The
element order refers to the type of basis functions that are used.
Let’s now visualize some of the basis functions for one of the more commonly used elements in COMSOL Multiphysics: the two-dimensional Lagrange element. We will look at a square domain meshed with a single quadrilateral (four-sided) element that has a node at each corner. If we are computing a scalar field, then the Lagrange element has a single degree of freedom at each node. You can visualize the shape functions for a first-order Lagrange element in the image below.
The shape functions for a first-order square quadrilateral Lagrange element.
The first-order shape functions are each unity at one node and zero at all of the others. The complete finite element solution over this element is the sum of each shape function times its associated degree of freedom. We’ll now compare our first-order shape functions with our second-order shape functions.
The shape functions for a single second-order square quadrilateral Lagrange element.
Observe that the second-order quadrilateral Lagrange element has node points at the midpoints of the sides as well as in the element center. It has a total of nine shape functions and, again, each shape function is unity at one node and zero elsewhere.
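To make the "unity at one node, zero at the others" property concrete, the sketch below evaluates first- and second-order Lagrange shape functions in 1D (the quadrilateral shape functions shown above can be built as tensor products of such 1D functions); the node locations on [-1, 1] are the usual illustrative choice, not something specific to COMSOL Multiphysics.

```python
import numpy as np

def lagrange_shape_functions(nodes, xi):
    """Evaluate the 1D Lagrange shape functions defined by `nodes` at points `xi`.
    Shape function k is 1 at node k and 0 at every other node."""
    nodes = np.asarray(nodes, dtype=float)
    xi = np.asarray(xi, dtype=float)
    N = np.ones((len(nodes), len(xi)))
    for k, xk in enumerate(nodes):
        for j, xj in enumerate(nodes):
            if j != k:
                N[k] *= (xi - xj) / (xk - xj)
    return N

first_order  = lagrange_shape_functions([-1.0, 1.0], [-1.0, 0.0, 1.0])
second_order = lagrange_shape_functions([-1.0, 0.0, 1.0], [-1.0, 0.0, 1.0])
print(first_order)    # each row is unity at its own node and zero at the other node
print(second_order)   # identity-like pattern when evaluated at the three nodes
```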
Let’s now look at what happens when our single quadrilateral element represents a domain that is not a perfect square but rather a domain with some curved sides. In such cases, it is common to use a so-called
isoparametric element, meaning that the geometry is approximated with the same shape functions as those used for the solution. This geometric approximation is shown below for the first- and second-order cases.
A domain with curved sides. Single first- and second-order quadrilateral elements are applied.
As we can see in the image above, the first-order element simply approximates the curved sides as straight sides. The second-order element much more accurately approximates these curved boundaries. This difference, known as a geometric discretization error, is discussed in greater detail in an earlier blog post. The shape functions for the isoparametric first- and second-order Lagrange elements are shown below.
The shape functions of a single first-order isoparametric Lagrange element for the domain with curved sides. The shape functions of a single second-order isoparametric Lagrange element for a domain with curved sides.
We can observe from the above two images that the first-order element approximates all sides of the domain as straight lines, while the second-order element approximates the curved shapes much more accurately. Thus, if we are modeling a domain with curved sides, we need to use several linear elements along any curved domain boundaries just so that we can accurately represent the domain itself.
For any real-world finite element model, there will of course always be more than one element describing the geometry. Additionally, keep in mind that regardless of the element order, you will want to perform a mesh refinement study, also called a mesh convergence study. That is, you will use finer and finer meshes (smaller and smaller elements) to solve the same problem and see how the solution converges. You terminate this mesh refinement procedure after achieving your desired accuracy. A good example of a mesh refinement study is presented in the application example of a Stress Analysis of an Elliptic Membrane.
All well-posed, single-physics finite element problems will converge toward the same answer, regardless of the element order. However, different element orders will converge at different rates and therefore require various computational resources. Let’s explore why different PDEs have different element orders.
Element Order in Single-Physics Models
For the purposes of this discussion, let’s consider just the set of PDEs governing common single-physics problems that exhibit no variation in time. We can put all of these PDEs into one of two broad categories:
Poisson-type: Poisson-type PDEs are used to describe heat transfer in solids, solid mechanics, electric currents, electrostatics and magnetostatics, thin-film flow, and flow in porous media governed by Darcy’s law or the Richards’ equation. Such governing PDEs are all of the form: $\nabla \cdot (- D \nabla u ) = f$
Note that this is a second-order PDE, thus second-order (quadratic) elements are the default choice within COMSOL Multiphysics for all of these types of equations.
Transport-type: Transport-type PDEs are used to describe chemical species transport as well as heat transfer in fluids and porous media. The governing equations here are quite similar to Poisson’s equation, with one extra term — a velocity vector: $\nabla \cdot ( -D \nabla u + \mathbf{v} u ) = f$
The extra velocity term results in a governing equation that is closer to a first-order PDE. The velocity field is usually computed by solving the Navier-Stokes equation, which is itself a type of transport equation that describes fluid flow. It is often the case that, for such problems, there is a high Péclet number or Reynolds number. This is one of the reasons why the default choice is to use first-order (linear) elements for these PDEs.
Note that for fluid flow problems where the Reynolds number is low, the default is to use the so-called
P2 + P1 elements that solve for the fluid velocity via second-order discretization and solve for the pressure via first-order discretization. The P2 + P1 elements are the default for the Creeping Flow, Brinkman Equations, and Free and Porous Media Flow interfaces. This is also the case for the Two-Phase Flow, Level Set and Two-Phase Flow, Phase Field interfaces. Further, any type of transport or fluid flow interface uses stabilization to solve the problem more quickly and robustly. For an overview of stabilization methods, check out our earlier blog post “Understanding Stabilization Methods“.
So how can we check the default settings for the element order used by a particular physics interface? Within the Model Builder, we first need to go to the
Show menu and toggle on Discretization. After doing so, you will see a Discretization section within the physics interface settings, as shown in the screenshot below.
Screenshot showing how to view the element order of a physics interface.
Keep in mind that as long as you’re working with only single physics, it typically does not matter too much which element order you use as long as you remember to perform a mesh convergence study. The solutions with a different element order may require quite varying amounts of memory and time to solve, but they will all converge toward the same solution with sufficient mesh refinement. However, when we start dealing with multiphysics problems, things become a little bit more complicated. Next, we’ll look at two special cases of multiphysics modeling where you should be aware of element order.
Conjugate Heat Transfer: Heat Transfer in Solids with Heat Transfer in Fluids
COMSOL Multiphysics includes a predefined multiphysics coupling between heat transfer and fluid flow that is meant for simulating the temperature of objects that are cooled or heated by a surrounding fluid. The
Conjugate Heat Transfer interface (and the functionally equivalent Non-Isothermal Flow interface) is available with the Heat Transfer Module and the CFD Module for both laminar and turbulent fluid flow.
The
Conjugate Heat Transfer interface is composed of two physics interfaces: the Heat Transfer interface and the Fluid Flow interface. The Fluid Flow interface (whether laminar or turbulent) uses linear element order to solve for the fluid velocity and pressure fields. The Heat Transfer interface solves for the temperature field in the fluid as well as the temperature field in the solid. The same linear element discretization is used throughout the temperature field in both the solid and fluid domains.
Now, if you are setting up a conjugate heat transfer problem by manually adding the various physics interfaces, you do need to be careful. If you start with the
Heat Transfer in Solids interface and add a Heat Transfer in Fluids domain feature to the interface, a second-order discretization will be used for the temperature field by default. This is not generally advised, as it will require more memory than a first-order temperature discretization. The default first-order discretization of the fluid flow field justifies using first-order elements throughout the model.
It is also worth mentioning a related multiphysics coupling: the
Local Thermal Non-Equilibrium interface available with the Heat Transfer Module. This interface is designed to solve for the temperature field of a fluid flowing through a porous matrix medium as well as the temperature of the matrix through which the fluid flows. That is, there are two different temperatures, the fluid and the solid matrix temperature, at each point in space. The interface also uses first-order discretization for both of the temperatures.
Thermal Stress: Heat Transfer in Solids with Solid Mechanics
The other common case where a multiphysics coupling uses different element orders from a single-physics problem is when computing thermal stresses. For the
Thermal Stress multiphysics coupling, the default is to use linear discretization for the temperature and quadratic discretization for the structural displacements. To understand why this is so, we can look at the governing Poisson-type PDE for linear elasticity: $\nabla \cdot ( - \mathbf{C} : \mathbf{\epsilon} ) = \mathbf{F}$
where $\mathbf{C}$ is the stiffness tensor and $\mathbf{\epsilon}$ is the strain tensor.
For a problem where temperature variation affects stresses, the strain tensor is: $\mathbf{\epsilon} = \frac{1}{2}\left( \nabla \mathbf{u} + (\nabla \mathbf{u})^T \right) - \mathbf{\alpha}\left( T - T_0 \right)$
where $\mathbf{\alpha}$ is a tensor containing the coefficients of thermal expansion, $T$ is the temperature, $T_0$ is the strain-free reference temperature, and $\mathbf{u}$ is the structural displacement field.
By default, we solve for the structural displacements using quadratic discretization, but we can see from the equation above that the strains are computed by taking the gradients of the displacement fields. This lowers the discretization order of the strains to a linear order. Hence, the temperature field discretization should also be lowered to a linear order.
Closing Remarks
We have discussed the meaning of discretization order in COMSOL Multiphysics and why it is relevant for two different multiphysics cases that frequently arise. If you are putting together your own multiphysics models, you’ll want to keep element order in mind.
Additionally, it is good to address what can happen if you build a multiphysics model with element orders that disagree with what we’ve outlined here. As it turns out, in many cases, the worst thing that will happen is that your model will simply require more memory and converge to a solution more slowly. In the limit of mesh refinement, any combination of element orders in different physics will give the same results, but the convergence may well be very slow and oscillatory. If you do observe any spatial oscillations to the solution (for example, a stress field that looks rippled or wavy), then check the element orders.
Today’s blog post is designed as a practical guideline for element selection in multiphysics problems within COMSOL Multiphysics. A more in-depth discussion of stability criterion for mixed (hybrid) finite element methods can be found in many texts, such as
Concepts and Applications of Finite Element Analysis by Robert D. Cook, David S. Malkus, Michael E. Plesha, and Robert J. Witt.
1. Measurement of the ratio of the production cross sections times branching fractions of $B_c^{\pm} \to J/\psi\,\pi^{\pm}$ and $B^{\pm} \to J/\psi\,K^{\pm}$ and $\mathcal{B}(B_c^{\pm} \to J/\psi\,\pi^{\pm}\pi^{\pm}\pi^{\mp})/\mathcal{B}(B_c^{\pm} \to J/\psi\,\pi^{\pm})$ in pp collisions at $\sqrt{s} = 7$ TeV
Journal of High Energy Physics, ISSN 1029-8479, 1/2015, Volume 2015, Issue 1, pp. 1 - 30
The ratio of the production cross sections times branching fractions σ B c ± ℬ B c ± → J / ψ π ± / σ B ± ℬ B ± → J / ψ K ± $$ \left(\sigma...
B physics | Branching fraction | Hadron-Hadron Scattering | Quantum Physics | Quantum Field Theories, String Theory | Classical and Quantum Gravitation, Relativity Theory | Physics | Elementary Particles, Quantum Field Theory
Journal Article
JOURNAL OF STATISTICAL MECHANICS-THEORY AND EXPERIMENT, ISSN 1742-5468, 07/2019, Volume 2019, Issue 7, p. 73104
We analyze the thermodynamics and the critical behavior of the supersymmetric su(m) t-J model with long-range interactions. Using the transfer matrix...
quantum criticality | solvable lattice models | ENERGY | MECHANICS | ASYMPTOTIC BETHE-ANSATZ | SPIN CHAIN | integrable spin chains and vertex models | SYSTEMS | SEPARATION | EXCHANGE | PHYSICS, MATHEMATICAL | CHARGE | Physics - Strongly Correlated Electrons
Journal Article
Physics Letters B, ISSN 0370-2693, 05/2016, Volume 756, Issue C, pp. 84 - 102
A measurement of the ratio of the branching fractions of the meson to and to is presented. The , , and are observed through their decays to , , and ,...
scattering [p p] | pair production [pi] | statistical | Phi --> K+ K | f0 --> pi+ pi | High Energy Physics - Experiment | Compact Muon Solenoid | pair production [K] | mass spectrum [K+ K-] | Ratio B | Large Hadron Collider (LHC) | 7000 GeV-cms | leptonic decay [J/psi] | (b)over-bar(s) | J/psi --> muon+ muon | experimental results | Nuclear and High Energy Physics | Physics and Astronomy | branching ratio [B/s0] | CERN LHC Coll | Violating Phase Phi(s) | B/s0 --> J/psi Phi | CMS collaboration ; proton-proton collisions ; CMS ; B physics | Physics | Física | hadronic decay [f0] | Decay | colliding beams [p p] | hadronic decay [Phi] | mass spectrum [pi+ pi-] | B/s0 --> J/psi f0
Journal Article
Journal of High Energy Physics, ISSN 1126-6708, 2012, Volume 2012, Issue 5
Journal Article
Physics Letters B, ISSN 0370-2693, 06/2014, Volume 734, pp. 261 - 281
A peaking structure in the mass spectrum near threshold is observed in decays, produced in pp collisions at collected with the CMS detector at the LHC. The...
Journal Article
6. Search for rare decays of $\mathrm{Z}$ and Higgs bosons to $\mathrm{J}/\psi$ and a photon in proton-proton collisions at $\sqrt{s} = 13\,\text{TeV}$
The European Physical Journal C, ISSN 1434-6044, 2/2019, Volume 79, Issue 2, pp. 1 - 27
A search is presented for decays of $\mathrm{Z}$ and Higgs bosons to a $\mathrm{J}/\psi$ meson and a photon, with the subsequent decay of the...
Nuclear Physics, Heavy Ions, Hadrons | Measurement Science and Instrumentation | Nuclear Energy | Quantum Field Theories, String Theory | Physics | Elementary Particles, Quantum Field Theory | Astronomy, Astrophysics and Cosmology
Journal Article
Astrophysical Journal Letters, ISSN 2041-8205, 10/2017, Volume 848, Issue 2, p. L12
On 2017 August 17 a binary neutron star coalescence candidate (later designated GW170817) with merger time 12:41:04 UTC was observed through gravitational...
Stars: neutron | Gravitational waves | GAMMA-RAY BURST | NEARBY SUPERNOVA RATES | AFTERGLOW | PULSAR | stars: neutron | EVOLUTION | GRAVITATIONAL-WAVES | ASTRONOMY & ASTROPHYSICS | gravitational waves | REDSHIFT | IMAGER | NUCLEOSYNTHESIS | HOST GALAXY | Astrofísica | Astrophysics | Ones gravitacionals | Astronomia i astrofísica | Astronomia | Raigs gamma | Astronomy | Àrees temàtiques de la UPC | Física | Gamma ray astronomy | ASTRONOMY AND ASTROPHYSICS | Fysik | Physical Sciences | Naturvetenskap | Natural Sciences
Journal Article
JOURNAL OF STATISTICAL MECHANICS-THEORY AND EXPERIMENT, ISSN 1742-5468, 04/2019, Volume 2019, Issue 4, p. 43105
We study the spectrum of the long-range supersymmetric su(m) t-J model of Kuramoto and Yokoyama in the presence of an external magnetic field and a charge...
solvable lattice models | MECHANICS | THERMODYNAMICS | HEISENBERG CHAIN | SPIN CHAIN | integrable spin chains and vertex models | SEPARATION | EXCHANGE | PHYSICS, MATHEMATICAL
Journal Article
9. Relative Modification of Prompt $\psi(2S)$ and $J/\psi$ Yields from pp to PbPb Collisions at $\sqrt{s_{NN}}$ = 5.02 TeV
Physical Review Letters, ISSN 0031-9007, 04/2017, Volume 118, Issue 16
Journal Article
10. Comment on "Evidence from acoustic imaging for submarine volcanic activity in 2012 off the west coast of El Hierro (Canary Islands, Spain)" by Perez NM, Somoza L, Hernandez PA, Gonzalez de Vallejo L, Leon R, Sagiya T, Biain A, Gonzalez FJ, Medialdea T, Barrancos J, Ibanez J, Sumino H, Nogami K and Romero C [Bull Volcanol (2014) 76:882-896]
BULLETIN OF VOLCANOLOGY, ISSN 0258-8900, 07/2015, Volume 77, Issue 7, p. 1
Journal Article |
Measurement of electrons from beauty hadron decays in pp collisions at $\sqrt{s} = 7$ TeV
(Elsevier, 2013-04-10)
The production cross section of electrons from semileptonic decays of beauty hadrons was measured at mid-rapidity (|y| < 0.8) in the transverse momentum range 1 < pT <8 GeV/c with the ALICE experiment at the CERN LHC in ...
Multiplicity dependence of the average transverse momentum in pp, p-Pb, and Pb-Pb collisions at the LHC
(Elsevier, 2013-12)
The average transverse momentum $\langle p_T \rangle$ versus the charged-particle multiplicity $N_{ch}$ was measured in p-Pb collisions at a collision energy per nucleon-nucleon pair $\sqrt{s_{NN}}$ = 5.02 TeV and in pp collisions at ...
Directed flow of charged particles at mid-rapidity relative to the spectator plane in Pb-Pb collisions at $\sqrt{s_{NN}}$=2.76 TeV
(American Physical Society, 2013-12)
The directed flow of charged particles at midrapidity is measured in Pb-Pb collisions at $\sqrt{s_{NN}}$=2.76 TeV relative to the collision plane defined by the spectator nucleons. Both, the rapidity odd ($v_1^{odd}$) and ...
Long-range angular correlations of π, K and p in p–Pb collisions at $\sqrt{s_{NN}}$ = 5.02 TeV
(Elsevier, 2013-10)
Angular correlations between unidentified charged trigger particles and various species of charged associated particles (unidentified particles, pions, kaons, protons and antiprotons) are measured by the ALICE detector in ...
Anisotropic flow of charged hadrons, pions and (anti-)protons measured at high transverse momentum in Pb-Pb collisions at $\sqrt{s_{NN}}$=2.76 TeV
(Elsevier, 2013-03)
The elliptic, $v_2$, triangular, $v_3$, and quadrangular, $v_4$, azimuthal anisotropic flow coefficients are measured for unidentified charged particles, pions, and (anti-)protons in Pb–Pb collisions at $\sqrt{s_{NN}}$ = ...
Measurement of inelastic, single- and double-diffraction cross sections in proton-proton collisions at the LHC with ALICE
(Springer, 2013-06)
Measurements of cross sections of inelastic and diffractive processes in proton--proton collisions at LHC energies were carried out with the ALICE detector. The fractions of diffractive processes in inelastic collisions ...
Transverse Momentum Distribution and Nuclear Modification Factor of Charged Particles in p-Pb Collisions at $\sqrt{s_{NN}}$ = 5.02 TeV
(American Physical Society, 2013-02)
The transverse momentum ($p_T$) distribution of primary charged particles is measured in non single-diffractive p-Pb collisions at $\sqrt{s_{NN}}$ = 5.02 TeV with the ALICE detector at the LHC. The $p_T$ spectra measured ...
Mid-rapidity anti-baryon to baryon ratios in pp collisions at $\sqrt{s}$ = 0.9, 2.76 and 7 TeV measured by ALICE
(Springer, 2013-07)
The ratios of yields of anti-baryons to baryons probes the mechanisms of baryon-number transport. Results for anti-proton/proton, anti-$\Lambda/\Lambda$, anti-$\Xi^{+}/\Xi^{-}$ and anti-$\Omega^{+}/\Omega^{-}$ in pp ...
Charge separation relative to the reaction plane in Pb-Pb collisions at $\sqrt{s_{NN}}$= 2.76 TeV
(American Physical Society, 2013-01)
Measurements of charge dependent azimuthal correlations with the ALICE detector at the LHC are reported for Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV. Two- and three-particle charge-dependent azimuthal correlations ...
Multiplicity dependence of two-particle azimuthal correlations in pp collisions at the LHC
(Springer, 2013-09)
We present the measurements of particle pair yields per trigger particle obtained from di-hadron azimuthal correlations in pp collisions at $\sqrt{s}$=0.9, 2.76, and 7 TeV recorded with the ALICE detector. The yields are ... |
Chaperone-aided Protein Folding
Surprisingly, unfolded proteins are toxic to the cell because of their potential to form large, difficult-to-degrade aggregates consisting of many proteins. Machinery for safely "catalyzing" protein folding is therefore an essential part of cell functioning.
Chaperones are a class of proteins and protein complexes that enable successful protein folding. We will see that to be maximally effective, chaperones must use free energy, such as from hydrolysis of the activated carrier ATP.
Our discussion, as usual, will focus on the essential biophysics rather than on more detailed models of specific systems. The simpler discussion enables understanding of the key driving forces and mechanisms, which in turn can provide building blocks for more sophisticated modeling.
A very valuable preliminary model
We can gain a surprising amount of insight from studying a very simple model of folding and aggregation
without chaperones. The model is instructive on its own, but also establishes a key reference point for models with chaperones.
From the figure above, you should see immediately that this is a driven system. Driving occurs because unfolded protein is being synthesized (at a rate $\ks$) and folded protein is removed (at a rate $\kr$) for trafficking to other parts of the cell where the proteins will be used. Unfolded proteins are also assumed to aggregate irreversibly at rate $\ka$. We are not concerned here with the source of energy for this driving, but it is critical to appreciate that free energy is being expended in the process. The spontaneous flow or driving indicates that indeed free energy is being expended. The system is not in equilibrium.
The need for chaperones implies that the rate of folding - at least for some proteins - is small compared to other rates, especially that for aggregation. We will also assume that, once folded, proteins are reasonably stable so that the unfolding rate is even smaller than the folding rate. Hence, our picture is that $\ku < \kf$ and both are smaller than other rate constants in the model. This picture applies to the subset of proteins which are not fast folders.
Our goals are to determine the amount of protein which ends up aggregated compared to what is folded, and to understand how this ratio depends on the parameters of our simple model. Thus, we want to calculate the ratio
\[ \frac{\ka \, [U]}{\kr \, [F]} \tag{1} \]
where the populations of the unfolded and folded states have been denoted by [U] and [F]. This ratio of fluxes or overall rates (as opposed to rate constants alone) derives from basic mass action principles.
Given the input and removal of molecules from the system, it is natural to analyze the system in a steady state, which conveniently is the simplest analysis. (Note that subjecting a system to a steady-state analysis is not a claim that the system in question will always exhibit steady behavior. Rather, the steady state is a convenient and informative condition to examine.) We will therefore formulate our analysis in terms of steady-state concentrations: $\concss{X}$ for species X.
Our mathematical task is simplified by the observation that the ratio (1) does not require the absolute values of the concentrations, but only their ratio. This ratio is determined using the continuity of flow from the unfolded to folded to the "removed" state (upper right in figure above). That is, the
net flow from U to F must match the flow that is removed:
\[ \kf \, \concss{U} - \ku \, \concss{F} = \kr \, \concss{F} \tag{2} \]
which immediately gives
\[ \frac{\concss{U}}{\concss{F}} = \frac{\kr + \ku}{\kf} \tag{3} \]
and hence
\[ \frac{\ka \, \concss{U}}{\kr \, \concss{F}} = \frac{\ka \left( \kr + \ku \right)}{\kf \, \kr} \tag{4} \]
The result depends only on rate constants and not on the absolute concentrations, which makes it straightforward to interpret.
To solidify our understanding of this almost-but-not-quite trivial model, we can rewrite (4) as $(\ka / \kf) \, [ (\ku / \kr) + 1]$. For proteins that are slow to fold spontaneously, we expect that the aggregation rate $\ka$ is much larger than the folding rate $\kf$; this is, after all, why chaperones are needed in the first place. Our re-write of the ratio shows that aggregation is indeed expected to be significant in our simple analysis without the presence of chaperones: even though the first term in the square brackets may be small due to slow unfolding (i.e., protein stability), it must be positive and hence the whole ratio must exceed $\ka / \kf$, which is large. In the limit that unfolding is much slower than removal ($\ku \ll \kr$), the ratio approaches $\ka / \kf \gg 1$, reflecting the fractional outflows from the unfolded state. So we've done a little math to quantify our intuition that some kind of chaperone mechanism is needed when folding is slow, and equally importantly, set the stage for more realistic models.
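As a numerical sanity check of this result, the sketch below integrates the chaperone-free kinetic model described above (synthesis into U, folding, unfolding, irreversible aggregation, and removal of F) to steady state and compares the simulated aggregation ratio with $(\ka / \kf) \, [ (\ku / \kr) + 1]$; the rate-constant values are arbitrary illustrative choices consistent with slow folding and fast aggregation.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative rate constants (arbitrary units): slow folding, fast aggregation.
ks, kr, ka, kf, ku = 1.0, 10.0, 5.0, 0.5, 0.05

def rhs(t, y):
    U, F = y
    dU = ks - (kf + ka) * U + ku * F      # synthesis in; folding and aggregation out; unfolding back
    dF = kf * U - (ku + kr) * F           # folding in; unfolding and removal out
    return [dU, dF]

sol = solve_ivp(rhs, (0.0, 200.0), [0.0, 0.0], rtol=1e-9, atol=1e-12)
U_ss, F_ss = sol.y[:, -1]                 # late-time values approximate the steady state

print("aggregated/folded flux (simulated):", ka * U_ss / (kr * F_ss))
print("predicted (ka/kf)*(ku/kr + 1)     :", (ka / kf) * (ku / kr + 1.0))
```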
It is worth noting that the ratio of unfolded protein in
steady state given in (3) generally will be far from the equilibrium value. The balance condition which must hold in equilibrium would dictate a ratio of $\ku / \kf$, which differs significantly from (3) given our assumption that $\ku$ is small compared to other rates. Thus, perhaps ironically, the driving in this case shifts the populations toward the dangerous unfolded state, though this would appear to be intrinsic to the directionality of the system - proteins start out unfolded!
The simplest chaperone model - no ATP
Although this model is more complicated than our previous one, it has the distinct advantage of actually including chaperones! Note that the chaperones are purely "passive" in the model as shown - they store no free energy and do not use ATP. The chaperones will act simply as catalysts. However, because we are considering a driven non-equilibrium condition, the chaperones' presence can alter the aggregation ratio.
To give away the punchline first, note that our new model adds to the prior model only by adding an additional pathway between the unfolded and folded state. Other processes are not altered. Hence, the net result of the model will be modified, "effective" rate constants that will replace $\kf$ and $\ku$ in our analyses above. All we need to do is set up the math to figure out what happens.
Before getting into detailed analysis of the model, we immediately see that it contains a cycle (U-F-FC-UC), and therefore the rates must satisfy a constraint, as holds for all cycles. In other words, among the eight rate constants in the cycle, only seven can be considered as adjustable parameters due to the cycle constraint
To extract biophysical information for this model, we will solve for its steady state. The algebra is somewhat complicated, although straightforward, and we just sketch it here. (Derivations of some results are given as exercises, with hints.) Fortunately, the basic idea is simple. We use the fact that the net flow through the chaperone pathway will be constant in a steady state - i.e., the flow from state U to UC will match that from UC to FC and from FC to F. Our standard mass action machinery enables us to write down the corresponding equations easily:
where we have omitted the "SS" (steady state) superscripts to keep the equations cleaner.
Using a strategy described in the Exercises, we can solve these equations for the effective rate constants, $\aow$ and $\awo$, along the chaperone path.
The graphic above demonstrates that the presence of chaperones in the model, which initially appeared a great complication, can be included as a parallel pathway with the effective first-order rate constants $\aow$ and $\awo$. That is, the probability of folding (transitioning from state U to F) per unit time is $\conc{U} \, ( \kf + \aow )$ and for unfolding is $\conc{F} ( \ku + \awo )$. To put it another way, the overall rate constants, accounting for both paths between U and F, are:
Biophysical discussion of passive chaperone effects
Our goal is to determine how the presence of passive chaperones (which do not use ATP or another energy source) can affect the aggregation ratio (1). In the presence of the chaperone pathway, (4) must be modified to account for both processes:
Let's examine the aggregation ratio term by term. We'll focus first on the factor $\kutot / \kftot$ and compare it to the trivial case given in (4). In fact, this factor is unchanged, as we can see by examining the ratio
where the last equality derives from the cycle constraint (5). This ratio of effective rates does not change, and hence the first term in (12) is the same as the corresponding term in the chaperone-free case, (4).
The second term in (12) clearly can differ from the chaperone-free case. In the limit of large chaperone concentration $\conc{C}$, the term can become very small (within our mass-action picture; in reality, there is a strict limit to the concentration of a large protein or complex). So the second term can get small, but the first term remains as it was in the absence of chaperones.
where the "tot" superscripts are omitted because $\kutot / \kftot = \ku / \kf$ in the case of passive chaperones. We can see that for proteins with a strong tendency to aggregate (large $\ka$) and/or modest stability ($\ku$ significant compared to $\kf$), significant aggregation could still occur.
The only way to improve on (14) within our current chaperone cycle is to somehow drive the chaperone function.
ATP-driven chaperone-aided folding
Let's now consider chaperones that use ATP based on the schematic below, which is not meant to indicate specifics as to when ATP hydrolysis occurs.
ATP-driven chaperones can achieve a higher level of successful folding compared to the passive case. Such chaperones convert the free energy stored in the cell's non-equilibrium concentration of ATP (relative to ADP) into greater folding "fidelity" - i.e., more folding, less aggregation. This exchange bears qualitative similarities to the cell's exchange of free energy for greater fidelity in translation.
The basic mechanism for the increased folding with ATP driving is easy to see within our simple kinetic modeling. As we showed in the previous section, without driving, the ratio $\kutot / \kftot$ that appears in (12) cannot change. This is because, in essence, the passive chaperone acts simply as a catalyst. The ATP-driven chaperone, by contrast, can modify the ratio. The distinction between the two underscores the differences in cycle structure, as discussed in the cycle logic section: the distinguishability between ATP- and ADP-bound chaperones provides a "handle" to drive the cycle in one direction, whereas passive chaperones (no ATP or ADP) act to drive the cycle in both directions equally.
The effect of ATP-driving can be seen in the effective rate constants, $\alpha$ given in (8) and (9). Instead of $\conc{C}$ in $\aow$, we will have $\conc{C \cdot ATP}$ and in $\awo$ we will have $\conc{C \cdot ADP}$. In turn, these will modify $\kftot$ and $\kutot$ in (10) and (11), and lead to a significantly modified aggregation ratio (12). In particular, the first term in (12) can be decreased well below the passive-case minimum given in (14) - and we expect significantly more folding.
To see this more explicitly, we can revisit the first term in (12). Recall that the solution folding and unfolding rates, $\ku$ and $\kf$, are presumed small compared to other rates (necessitating chaperone use in the first place). Hence we have
where we used the constraint (5). The fraction (15) can be much less than $\ku / \kf$ because we expect that any protein evolved to use ATP will bind much more strongly to ATP than to ADP. That is, we expect $\conc{C \cdot ATP} \gg \conc{C \cdot ADP}$. Recall from the section on ATP that the concentrations of the two nucleotides are about the same.
Summing Up
To avoid aggregation, chaperone systems encourage folding in two ways. The first way is simply to catalyze folding without using free energy, but this is a weak effect that we have seen is severely limited. More importantly, the use of free energy stored in ATP allows the system to be driven toward greater folding. In terms of "cycle logic", ATP-bound chaperones provide a handle with which the system can be driven uni-directionally - which wouldn't be possible if ATP did not bind or did not get hydrolyzed to ADP.
We have not touched on quite interesting questions regarding details of how free energy from ATP is used - e.g., whether chaperones perform mechanical work to aid folding or simply prevent aggregation (see work by Lorimer and by Horwich). Our simple analysis suggests that such mechanistic details may be less important than the general process of transducing free energy for the end result of more folded protein.
Arguably, the driven process of chaperone-aided folding echoes the driven or "kinetic" proofreading which occurs in protein translation.
References

General reference: B. Alberts et al., "Molecular Biology of the Cell," Garland Science (many editions available).

The following are biophysical studies and perspectives on chaperones, which can help you get started in the large body of literature:

D. Thirumalai and G. H. Lorimer, "Chaperonin-mediated protein folding," Annu Rev Biophys Biomol Struct 30:245-269 (2001).

Arthur L. Horwich, Adrian C. Apetri, and Wayne A. Fenton, "The GroEL/GroES cis cavity as a passive anti-aggregation device," FEBS Letters 583:2654-2662 (2009).

Nicholas C. Corsepius and George H. Lorimer, "Measuring how much work the chaperone GroEL can do," PNAS 110:E2451-E2459 (2013).

Exercises

1. Derive (5).

2. Derive Eqs. (8) and (9) in several stages. (a) First use (6) to solve for $\conc{UC}$ in terms of other variables. (b) Substitute this result into (7) and solve for $\conc{FC}$ in terms of $\conc{U}$, $\conc{F}$ and $\conc{C}$. (c) Use the result for $\conc{FC}$ in your expression for $\conc{UC}$. (d) Solve for the net flow from state U to UC: the left-hand side of (6). The coefficients of $\conc{U}$ and $\conc{F}$ are the effective rate constants $\aow$ and $\awo$. |
Surface plasmon polariton excitation in Kretschmann configuration
9 reviews
Excitation of surface plasmon polaritons at the gold-air interface in Kretschmann configuration.
Tutorial models for COMSOL Webinar "Simulating Graphene-Based Photonic and Optoelectronic Devices"
68 reviews
Basic tutorial models for COMSOL Webinar "Simulating Graphene-Based Photonic and Optoelectronic Devices" by Prof. Alexander Kildishev, Purdue University, USA Validation with a meshless method...
Shape of a static meniscus pinned at the contact line from Young-Laplace equation
4 reviews
This is a simple example for equation based modeling where the static Young-Laplace equation - [Delta P] = [surface tension] * [divergence of the surface normal vector] - is solved to determine the...
Maxwell-Wagner Model of Blood Permittivity
2 reviews
Maxwell-Wagner model is used for explanation of frequency dispersion, which takes place for permittivity at various kinds of suspensions. In particular, this phenomenon is observed in the blood. The...
Laserwelding
2 reviews
Laser Welding of PMMA with 1 W Laser.
Convection dominated Convection-Diffusion Equation by upwind discontinuous Galerkin (dG) method
3 reviews
We consider the Convection-Diffusion Equation with very small diffusion coefficient $\mu$: \[ -\mu\Delta u + \mathbf{\beta}\cdot\nabla u = f \ \mathrm{in}\ \Omega, \quad u = g(x,y) \ \mathrm{on}\ ...
Microsphere resonator
8 reviews
This model reproduces the simulation results from: http://dx.doi.org/10.1063/1.4801474 Solutions were stripped so you will have to run the simulation to see the results. That may take a while...
2D Directional Coupler
8 reviews
A simplification of the 3D directional coupler using the RF Module and Boundary Mode Analysis. Just download and compute to see the results. Made with Comsol version 4.4.0.248. Enjoy!
Material: Water H2O
3 reviews
Just open and save material to your own User-Defined Library, or copy the Interpolations to your own material. Hale and Querry 1973- Water; n,k 0.2-200 µm; 25 °C Data from:...
Material: Fused Silica with sellmeier refractive index
2 reviews
Material Fused Silica, just open and save material to your own User-Defined Library, or copy the equation to your own material. Refractive index data (0.21-3.71 µm) based on Sellmeier equation... |
I asked a question earlier about Saving to the Database, which was very general and about the requirements for a proof when you go through many layers of non-verified systems such as the network and databases.
In this question I am wondering about a more middle-level proof, this time about a transformation $f : A \to B$ with side effect $C$. Say I have as input a string $A$, and as output an Abstract Syntax Tree (AST) $B$. All of this happens in memory with a small string of say a few KB. For now, I am ignoring all the details of the hardware implementation and all the details of any particular language.
I am wondering at a high level what it takes to prove something like this. Specifically I wanted to focus on
side effects in this question. Say during the parse process, we create a global symbol table to store classes. Then, as we parse through the code and encounter a class, we add it to the symbol table. So instead of $f : A \to B$, we really have:
\begin{align} f : A &\to B\\ &\downarrow\\ &C \end{align}
That is for the symbol table $C$ and AST output $B$. Somewhere in the function $f$ implementation there is another function $g : \{C,c\} \to C'$, which adds the new symbol $c$ to $C$.
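For concreteness, here is a hypothetical Python sketch (all names invented for illustration; this is not claimed to be how any particular compiler works) in which the side effect is made explicit by threading the symbol table through the function, i.e. treating $f$ as a map $A \times C \to B \times C'$. Writing it this way is one standard trick for making the "side effect" part of the statement you prove, since the post-condition can then mention the returned table directly.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Tuple

@dataclass
class SymbolTable:
    classes: Dict[str, object] = field(default_factory=dict)

def add_symbol(table: SymbolTable, name: str) -> SymbolTable:
    # g : {C, c} -> C'  (returns a new table instead of mutating a global one)
    updated = SymbolTable(dict(table.classes))
    updated.classes[name] = object()
    return updated

def parse(source: str, table: SymbolTable) -> Tuple[List[str], SymbolTable]:
    # Toy "parser": the AST is just the token list; any token of the form
    # "class:Name" also updates the symbol table.
    ast: List[str] = []
    for tok in source.split():
        ast.append(tok)
        if tok.startswith("class:"):
            table = add_symbol(table, tok[len("class:"):])
    return ast, table

ast, table = parse("class:Foo x y class:Bar", SymbolTable())
assert set(table.classes) == {"Foo", "Bar"}   # post-condition about the "side effect" C
```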
What I would like to prove (in this question, just at a high level, some key points) is that the function generates the symbol table $C$, even though the
output of the function is the AST $B$. In type theory, the proof for AST $B$ could possibly just be the sequence of type definitions and transformations, similar to Hoare Logic. But to prove that the function $f$ has side effect $C$ seems much harder/trickier.
It seems that you have to go and step through the algorithm one step at a time, and (assuming everything is strongly typed), figure out what the "current state" would look like at that point (of the whole program). Then you would compare your
pattern (the assertion of the post-condition if it were Hoare Logic) with the current state of the program at that point, and see if it was a match. And see if that stayed true until the end of the function/algorithm. But this sort of seems like it's becoming Model Checking, which I only know the basics about, not sure if that is correct to assume though. Also, this one-step-at-a-time stepping through the algorithm seems like program simulation, so wondering if that is true or not or if simulation has a role here.
So I'm wondering, at a high level, what is required to prove that function $f$ generates a side effect $C$. As a specification, I would write "$f$ generates the symbol table $C$". |
I'm not sure if this will precisely answer your question concerning "metrics"... but this might have the spirit of what you may be seeking.
Here's an overview of a coordinate-free derivation of the Schwarzschild solution by Robert Geroch.
[
short answer: using symmetries specified by Killing vector fields, construct various scalar fields for use in the Einstein field equations to obtain a set of differential equations for those scalar fields. After the solutions are obtained, the results can be expressed in coordinate form, if desired.]
(sources:
General Relativity: 1972 Lecture Notes (Lecture Notes Series) (Volume 1) Minkowski Institute Press; 1 edition (February 25, 2013) ISBN 978-0987987174 also http://home.uchicago.edu/~geroch/Course%20Notes (latexed draft?) http://www.gravity.psu.edu/links/general_relativity_notes.pdf (scan of original notes) )
Refer to the above for details.
Below I will quote some passages from the LaTeXed file and summarize some parts of the approach given. (Hopefully my transcriptions are accurate.)
Ch 25: The Schwarzschild Solution
Physically, the Schwarzschild solution represents the geometry of an
“isolated, non rotating star, which has settled down to equilibrium”.
What properties would we expect such a solution to have? Firstly,
we would expect the solution to be static, i.e., we would expect to
have a timelike, hypersurface-orthogonal Killing vector $t^a$. Secondly,
we would expect the solution to be spherically symmetric, i.e., we
would expect to have Killing vectors ${l_1}^a$, ${l_2}^a$, ${l_3}^a$
which are spacelike,
linearly dependent, and have the commutation relations
$$[{l_1},{l_2}]^a={l_3}^a\quad [{l_2},{l_3}]^a={l_1}^a\quad [{l_3},{l_1}]^a={l_2}^a\quad (79)$$
Finally,
we would expect that the time-translations and rotations commute,
i.e., we would expect to have additional commutation relations
$$[t,{l_1}]^a=[t,{l_2}]^a=[t,{l_3}]^a=0\quad (80)$$
To summarize, we are concerned with space-time having four Killing
vectors, with the commutation relations (79) and (80). For the matter
composing the star, we take a fluid. Thus, we have the density $\rho$,
pressure $p$, and (unit) velocity field $\eta^a$.
Since the star is supposed to have “settled down to equilibrium”, we suppose that the fluid does not
“move relative to static observers”, i,e.,
we take $\eta^a$ a multiple of $t^a$.
To summarize, the Schwarzschild solution is a space-time with four
Killing vectors, $t^a$ (timelike, hypersurface-orthogonal), and
${l_1}^a$, ${l_2}^a$, ${l_3}^a$
(spacelike, linearly dependent), subject to (79) and (80), where the matter
is a fluid with four-velocity field proportional to $t^a$. We now discuss
the geometry of the Schwarzschild solution.
Then, Geroch proceeds as follows:
Define a scalar field $\lambda=t^a t_a$. ($\lambda<0$ since $t^a$ is timelike [signature $(-+++)$])
Write Ricci in terms of $\lambda$ using the hypersurface-orthogonality of $t^a$:$$R_{mb} t^m =\frac{1}{2}\lambda^{-2}t_b(\nabla^c \lambda \nabla_c \lambda) -\frac{1}{2}\lambda^{-1}t_b\nabla^2\lambda\quad (83)$$
Use the Einstein field equations for a perfect fluid to introduce matter variables (in place of the Ricci terms) to obtain$$R_{ab}=8\pi G\left[ -\lambda^{-1}(\rho+p)t_a t_b+\frac{1}{2}(\rho-p)g_{ab}\right] \quad(84)$$
$$\lambda^{-1}\nabla^2\lambda-\lambda^{-2}(\nabla^c \lambda \nabla_c \lambda)=8\pi G(\rho+ 3p)\quad (85) $$which "can be rewritten in the more suggestive form"$$\nabla^2 \left[\frac{1}{2}\ln(-\lambda)\right]=4\pi G(\rho+3p)\quad (86)$$
Define a positive scalar field $r$ in spacetime as$$2r^2=l_1{}^a l_1{}_a+l_2{}^a l_2{}_a+l_3{}^a l_3{}_a\qquad (88) $$ which he describes "as a sort of 'radial distance from the center of the star' "
Define the scalar field $\mu=(\nabla^a r)\nabla_a r$, where $\mu=1$ for flat space, and deviations of $\mu$ from 1 represent the "curvature of space"
Let us summarize the situation. We think of $r$ as a “radial coordinate”.
We think of $\lambda$ and $\mu$ as "fields which describe the geometry
of space-time."
Since our space-time is static and spherically symmetric,
we expect that everything of interest will be a function only of $r$....
The idea is to use Einstein’s equation to obtain a pair of ordinary differential equations on the functions $\lambda(r)$
and $\mu(r)$.
Eventually, for the region outside the star (so $\rho=0$, $p=0$), Geroch arrives at these$$\lambda''\mu -\frac{1}{2}\lambda^{-1} \mu(\lambda')^2+\frac{1}{2}\lambda'\mu' +2\mu r^{-1} \lambda'=0\quad(94)$$
$$-\frac{1}{4}\lambda^{-1} \mu \lambda' \mu'-\mu \mu' r^{-1} + \frac{1}{4}\lambda^{-2} \mu^2(\lambda')^2-\frac{1}{2}\lambda^{-1} \mu^2\lambda''=0\quad(95)$$where $d/dr$ is denoted by a prime.
We have now obtained the ordinary differential equations we sought. What remains is to solve them.
Eliminating $\lambda''$ between (94) and (95),
we obtain simply $\lambda'/\lambda=\mu'/\mu$.
So, $\lambda$ is a constant multiple of $\mu$.
What multiple should we choose?
$\vdots$
[physical and mathematical arguments]
In Minkowski space, $\lambda=-1$ and $\mu=1$, which suggests $\lambda=-\mu$.
$\vdots$
Setting $\lambda=-\mu$ in (95)... the solution is $\lambda=a+b/r$
$\vdots$
We write $\lambda = -1+2GM/r$...
$\vdots$
It should now be clear that one can choose coordinates in which
the metric for the Schwarzschild solution takes the well-known form
$$-\left(1 -\frac{2GM}{r} \right) dt^2 + \left(1 -\frac{2GM}{r}\right)^{-1} dr^2 + r^2\left( d\theta^2 + \sin^2 \theta \, d\phi^2\right)$$
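As a quick sanity check on the quoted derivation, here is a small symbolic computation (a sketch using sympy, not part of Geroch's notes) verifying that $\lambda = -1 + 2GM/r$ together with $\mu = -\lambda$ satisfies equations (94) and (95):

```python
import sympy as sp

r, G, M = sp.symbols('r G M', positive=True)

lam = -1 + 2*G*M/r      # lambda = t^a t_a with the choice a = -1, b = 2GM
mu = -lam               # mu = -lambda, as argued above

lam1, lam2 = sp.diff(lam, r), sp.diff(lam, r, 2)
mu1 = sp.diff(mu, r)

# Eq. (94)
eq94 = lam2*mu - sp.Rational(1, 2)*mu*lam1**2/lam \
       + sp.Rational(1, 2)*lam1*mu1 + 2*mu*lam1/r

# Eq. (95)
eq95 = -sp.Rational(1, 4)*mu*lam1*mu1/lam - mu*mu1/r \
       + sp.Rational(1, 4)*mu**2*lam1**2/lam**2 \
       - sp.Rational(1, 2)*mu**2*lam2/lam

print(sp.simplify(eq94), sp.simplify(eq95))   # both simplify to 0
```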
The $\theta$ and $\phi$ are "angular coordinates", while the scalar field $r$ becomes a "radial coordinate".
...so coordinates are introduced at the last step. |
Anisotropic flow of inclusive and identified particles in Pb–Pb collisions at $\sqrt{{s}_{NN}}=$ 5.02 TeV with ALICE
(Elsevier, 2017-11)
Anisotropic flow measurements constrain the shear $(\eta/s)$ and bulk ($\zeta/s$) viscosity of the quark-gluon plasma created in heavy-ion collisions, as well as give insight into the initial state of such collisions and ...
Investigations of anisotropic collectivity using multi-particle correlations in pp, p-Pb and Pb-Pb collisions
(Elsevier, 2017-11)
Two- and multi-particle azimuthal correlations have proven to be an excellent tool to probe the properties of the strongly interacting matter created in heavy-ion collisions. Recently, the results obtained for multi-particle ... |
D-meson nuclear modification factor and elliptic flow measurements in Pb–Pb collisions at $\sqrt {s_{NN}}$ = 5.02TeV with ALICE at the LHC
(Elsevier, 2017-11)
ALICE measured the nuclear modification factor ($R_{AA}$) and elliptic flow ($v_{2}$) of D mesons ($D^{0}$, $D^{+}$, $D^{*+}$ and $D_{s}^{+}$) in semi-central Pb–Pb collisions at $\sqrt{s_{NN}} =5.02$ TeV. The increased ...
ALICE measurement of the $J/\psi$ nuclear modification factor at mid-rapidity in Pb–Pb collisions at $\sqrt{{s}_{NN}}=$ 5.02 TeV
(Elsevier, 2017-11)
ALICE at the LHC provides unique capabilities to study charmonium production at low transverse momenta ($p_{\mathrm{T}}$). At central rapidity ($|y|<0.8$), ALICE can reconstruct J/$\psi$ via their decay into two electrons down to zero ... |
Since this is my first time writing a blog post here, let me start with a word of introduction. I am a computer scientist at the Tata Institute of Fundamental Research, broadly interested in connections between Biology and Computer Science, with a particular interest in reaction networks. I first started thinking about them during my Ph.D. at the Laboratory for Molecular Science. My fascination with them has been predominantly mathematical. As a graduate student, I encountered an area with rich connections between combinatorics and dynamics, and surprisingly easy-to-state and compelling unsolved conjectures, and got hooked.
There is a story about Richard Feynman that he used to take bets with mathematicians. If any mathematician could make Feynman understand a mathematical statement, then Feynman would guess whether or not the statement was true. Of course, Feynman was in a habit of winning these bets, which allowed him to make the boast that mathematics, especially in its obsession for proof, was essentially irrelevant, since a relative novice like himself could after a moment's thought guess at the truth of these mathematical statements. I have always felt Feynman's claim to be unjust, but have often wondered what mathematical statement I would put to him so that his chances of winning were no better than random.
Today I want to tell you of a result about reaction networks that I have recently discovered with Abhishek Deshpande. The statement seems like a fine candidate to throw at Feynman because until we proved it, I would not have bet either way about its truth. Even after we obtained a short and elementary proof, I do not completely 'see' why it must be true. I am hoping some of you will be able to demystify it for me. So, I'm just going to introduce enough terms to be able to make the statement of our result, and let you think about how to prove it.
John and his colleagues have been talking about reaction networks as Petri nets in the network theory series on this blog. As discussed in part 2 of that series, a Petri net is a diagram like this:
Following John's terminology, I will call the aqua squares 'transitions' and the yellow circles 'species'. If we have some number #rabbit of rabbits and some number #wolf of wolves, we draw #rabbit many black dots called 'tokens' inside the yellow circle for rabbit, and #wolf tokens inside the yellow circle for wolf, like this:
Here #rabbit = 4 and #wolf = 3. The predation transition consumes one 'rabbit' token and one 'wolf' token, and produces two 'wolf' tokens, taking us here:
John explained in parts 2 and 3 how one can put rates on different transitions. For today I am only going to be concerned with 'reachability:' what token states are reachable from what other token states. John talked about this idea in part 25.
By a
complex I will mean a population vector: a snapshot of the number of tokens in each species. In the example above, (#rabbit, #wolf) is a complex. If $y, y'$ are two complexes, then we write
if we can get from $y$ to $y'$ by a single transition in our Petri net. For example, we just saw that$$(4,3)\to (3,4)$$
via the predation transition.
Reachability, denoted $\to^*$, is the transitive closure of the relation $\to$. So $y\to^* y'$ (read $y'$ is reachable from $y$) iff there are complexes
$$y=y_0,y_1,y_2,\dots,y_k =y'$$
such that
$$y_0\to y_1\to\cdots\to y_{k-1}\to y_k$$
For example, here $(5,1) \to^* (1, 5)$ by repeated predation.
I am very interested in switches. After all, a computer is essentially a box of switches! You can build computers by connecting switches together. In fact, that's how early computers like the Z3 were built. The CMOS gates at the heart of modern computers are essentially switches. By analogy, the study of switches in reaction networks may help us understand biochemical circuits.
A
siphon is a set of species that is 'switch-offable'. That is, if there are no tokens in the siphon states, then they will remain absent in future. Equivalently, the only reactions that can produce tokens in the siphon states are those that require tokens from the siphon states before they can fire. Note that no matter how many rabbits there are, if there are no wolves, there will continue to be no wolves. So {wolf} is a siphon. Similarly, {rabbit} is a siphon, as is the union {rabbit, wolf}. However, when Hydrogen and Oxygen form Water, {Water} is not a siphon.
For another example, consider this Petri net:
The set {HCl, NaCl} is a siphon. However, there is a conservation law: whenever an HCl token is destroyed, an NaCl token is created, so that #HCl + #NaCl is invariant. If both HCl and NaCl were present to begin with, the complexes where both are absent are not reachable. In this sense, this siphon is not 'really' switch-offable. As a first pass at capturing this idea, we will introduce the notion of 'critical set'.
A
conservation law is a linear expression involving numbers of tokens that is invariant under every transition in the Petri net. A conservation law is positive if all the coefficients are non-negative. A critical set of states is a set that does not contain the support of a positive conservation law.
For example, the support of the positive conservation law #HCl + #NaCl is {HCl, NaCl}, and hence no set containing this set is critical. Thus {HCl, NaCl} is a siphon, but not critical. On the other hand, the set {NaCl} is critical but not a siphon. {HCl} is a critical siphon. And in our other example, {Wolf, Rabbit} is a critical siphon.
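To make the definitions concrete, here is a small Python sketch that checks the siphon property for the examples above. The exact reactions in the figures are not shown here, so the reaction lists below are assumptions (predation only for the rabbit/wolf net, and HCl + NaOH → NaCl + H2O for the second net), used purely for illustration.

```python
def is_siphon(Z, reactions):
    """Z is a siphon iff every reaction that produces a species in Z
    also consumes some species in Z."""
    Z = set(Z)
    for reactants, products in reactions:
        if any(s in Z for s in products) and not any(s in Z for s in reactants):
            return False
    return True

# Reactions as (reactants, products) pairs of species -> stoichiometry dicts.
predation = [({'rabbit': 1, 'wolf': 1}, {'wolf': 2})]
acid_base = [({'HCl': 1, 'NaOH': 1}, {'NaCl': 1, 'H2O': 1})]

print(is_siphon({'wolf'}, predation))          # True
print(is_siphon({'rabbit'}, predation))        # True (vacuously)
print(is_siphon({'HCl', 'NaCl'}, acid_base))   # True
print(is_siphon({'NaCl'}, acid_base))          # False: critical but not a siphon
```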
Of particular interest to us will be
minimal critical siphons, the minimal sets among critical siphons. Consider this example:
Here we have two transitions:$$X \to 2Y$$
and$$2X \to Y$$
The set $\{X,Y\}$ is a critical siphon. But so is the smaller set $\{X\}$. So, $\{X,Y\}$ is not minimal.
We define a
self-replicable set to be a set $A$ of species such that there exist complexes $y$ and $y'$ with $y\to^* y'$ such that for all $i \in A$ we have $y'_i > y_i$.
So, there are transitions that accomplish the job of creating more tokens for all the species in $A$. In other words: these species can 'replicate themselves'.
We define a
drainable set by changing the $>$ to a $<$. So, there are transitions that accomplish the job of reducing the number of tokens for all the species in $A$. These species can 'drain away'.
Now here comes the statement: every minimal critical siphon is either drainable or self-replicable.
We prove it in this paper:
• Abhishek Deshpande and Manoj Gopalkrishnan, Autocatalysis in reaction networks.
But first note that the statement becomes false if the critical siphon is not minimal. Look at this example again:
The set $\{X,Y\}$ is a critical siphon. However $\{X,Y\}$ is neither self-replicable (since every reaction destroys $X$) nor drainable (since every reaction produces $Y$). But we've already seen that $\{X,Y\}$ is not minimal. It has a critical subsiphon, namely $\{X\}$. This one
is minimal—and it obeys our theorem, because it is drainable. Checking these statements is a good way to make sure you understand the concepts! I know I've introduced a lot of terminology here, and it takes a while to absorb.
Anyway: our proof that every minimal critical siphon is either drainable or self-replicable makes use of a fun result about matrices. Consider a real square matrix with a sign pattern like this:$$\left( \begin{array}{cccc} <0 & >0 & \cdots & > 0 \\ >0 & <0 & \cdots &> 0 \\ \vdots & \vdots & <0 &> 0 \\ >0 & >0 & \cdots & <0 \end{array} \right)$$
If the matrix is full-rank then there is a positive linear combination of the rows of the matrix so that all the entries are nonzero and have the same sign. In fact, we prove something stronger in Theorem 5.9 of our paper. At first, we thought this statement about matrices should be equivalent to one of the many well-known alternative statements of Farkas' lemma, like Gordan's theorem.
However, we could not find a way to make this work, so we ended up proving it by a different technique. Later, my colleague Jaikumar Radhakrishnan came up with a clever proof that uses Farkas' lemma twice. However, so far we have not obtained the stronger result in Theorem 5.9 with this proof technique.
My interest in the result that every minimal critical siphon is either drainable or self-replicable is not purely aesthetic (though aesthetics is a big part of it). There is a research community of folks who are thinking of reaction networks as a programming language, and synthesizing molecular systems that exhibit sophisticated dynamical behavior as per specification:
• International Conference on DNA Computing and Molecular Programming.
• Foundations of Nanoscience: Self-Assembled Architectures and Devices.
• Molecular Programming Architectures, Abstractions, Algorithms and Applications.
Networks that exhibit some kind of catalytic behavior are a recurring theme among such systems, and even more so in biochemical circuits.
Here is an example of catalytic behavior:$$A + C \to B + C$$
The 'catalyst' $C$ helps transform $A$ to $B$. In the absence of $C$, the reaction is turned off. Hence, catalysts are switches in chemical circuits! From this point of view, it is hardly surprising that they are required for the synthesis of complex behaviors.
In information processing, one needs amplification to make sure that a signal can propagate through a circuit without being overwhelmed by errors. Here is a chemical counterpart to such amplification:$$A + C \to 2C$$
Here the catalyst $C$ catalyzes its own production: it is an 'autocatalyst', or a self-replicating species. By analogy, autocatalysis is key for scaling synthetic molecular systems.
Our work deals with these notions on a network level. We generalize the notion of catalysis in two ways. First, we allow a catalyst to be a set of species instead of a single species; second, its absence can turn off a reaction pathway instead of a single reaction. We propose the notion of self-replicable siphons as a generalization of the notion of autocatalysis. In particular, 'weakly reversible' networks have critical siphons precisely when they exhibit autocatalytic behavior. I was led to this work when I noticed the manifestation of this last statement in many examples.
Another hope I have is that perhaps one can study the dynamics of each minimal critical siphon of a reaction network separately, and then somehow be able to answer interesting questions about the dynamics of the entire network, by stitching together what we know for each minimal critical siphon. On the synthesis side, perhaps this could lead to a programming language to synthesize a reaction network that will achieve a specified dynamics. If any of this works out, it would be really cool! I think of how abelian group theory (and more broadly, the theory of abelian categories, which includes categories of vector bundles) benefits from a fundamental theorem that lets you break a finite abelian group into parts that are easy to study—or how number theory benefits from a special case, the fundamental theorem of arithmetic. John has also pointed out that reaction networks are really presentations of symmetric monoidal categories, so perhaps this could point the way to a Fundamental Theorem for Symmetric Monoidal Categories.
And then there is the Global Attractor Conjecture, a long-standing open problem concerning the long-term behavior of solutions to the rate equations. Now that is a whole story by itself, and will have to wait for another day.
You can also read comments on Azimuth, and make your own comments or ask questions there! |
I'm having a bit of trouble formulating a bijection between the sets $\{0,1\} \times \mathbb N$ and $\mathbb Z$. I understand how to find a bijection between $\mathbb N$ and $\mathbb Z$ using a piecewise function that sends even values of $\mathbb N$ to positive integers and odd values of $\mathbb N$ to negative integers, but I'm a bit stuck formulating a function $f(a,n)$ for these two sets. Any help would be greatly appreciated and I apologize for formatting.
Visual solution:
$\mathbb N$ looks like this:
$$\times\times\times\times\times\times\times\times\times\times\cdots\\$$
So $\mathbb N\times\{0,1\}$ looks like this:
$$\times\times\times\times\times\times\times\times\times\times\cdots\\ \times\times\times\times\times\times\times\times\times\times\cdots\\$$
While $\mathbb Z$ looks like this:
$$\cdots\times\times\times\times\times\times\times\times\times\times\cdots\\$$
Now, imagine you take the middle picture and you take the upper line of $\times$-s and flip it so you would get:
$$\cdots \times\times\times\times\times\times\times\times\times\times\times\\ \times\times\times\times\times\times\times\times\times\times\cdots\\$$
Now, imagine if I draw a little more space between two arbitrary elements of $\mathbb Z$:
$$\cdots \times\times\times\times\times\times\qquad\times\times\times\times\times\times\cdots\\$$
Can you see the bijection that is naturally appearing between these two sets?
Assuming $\Bbb N$ does not contain zero, you can use
$$f(s,n)\quad=\quad\begin{cases} n&\text{for $s=0$} \\ 1-n &\text{for $s=1$}\end{cases} \quad=\quad (-1)^s\left(n-\frac12\right)+\frac12$$
which can be written with or without cases. |
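A quick numerical check of the formula above (assuming, as stated, that $\Bbb N$ starts at $1$): on a finite window it hits every integer in a symmetric range exactly once.

```python
def f(s, n):
    return n if s == 0 else 1 - n

N = 1000
values = [f(s, n) for s in (0, 1) for n in range(1, N + 1)]
assert len(values) == len(set(values))           # no integer is hit twice
assert set(values) == set(range(1 - N, N + 1))   # exactly the window 1-N, ..., N
```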
As Lubos Motl and twistor59 explain, a necessary condition for unitarity is that the Yang Mills (YM) gauge group $G$ with corresponding Lie algebra $g$ should be real and have a positive (semi)definite associative/invariant bilinear form $\kappa: g\times g \to \mathbb{R}$, cf. the kinetic part of the Yang Mills action. The bilinear form $\kappa$ is often chosen to be (proportional to) the Killing form, but that need not be the case.
If $\kappa$ is degenerate, this will induce additional zeromodes/gauge-symmetries, which will have to be gauge-fixed, thereby effectively diminishing the gauge group $G$ to a smaller subgroup, where the corresponding (restriction of) $\kappa$ is non-degenerate.
When $G$ is semi-simple, the corresponding Killing form is non-degenerate. But $G$ does
not have to be semi-simple. Recall e.g. that $U(1)$ by definition is not a simple Lie group. Its Killing form is identically zero. Nevertheless, we have the following YM-type theories:
QED with $G=U(1)$.
the Glashow-Weinberg-Salam model for electroweak interaction with $G=U(1)\times SU(2)$.
Also the gauge group $G$ does in principle not have to be compact. This post imported from StackExchange Physics at 2015-01-19 14:11 (UTC), posted by SE-user Qmechanic |
To show that $L = \dfrac{1}{\pi}\displaystyle \int_{-\infty}^\infty \dfrac{b}{(z-a)^2+b^2} dz = 1,$
we take a closed contour on the upper-half complex plane. This means we only consider the $z=a+ib$ pole when finding residues. I know this has to do with the winding number, but can you give a more physical explanation of why we do this?
Do we still use contours that cover the upper-half plane when $f(z)$ in
$F = \dfrac{1}{\pi}\displaystyle \int_{-\infty}^\infty \dfrac{b\,f(z)}{(z-a)^2+b^2} dz$
contains poles at say, $z=-c+id$ where $d=(2n+1)\pi i, n\in \mathbb{Z}$? |
Suppose we have an $n \times n$ matrix $M$ that is
full rank, symmetric, and positive semi-definite, in that $z^TMz \geq 0$ for all $z \in \mathbb{R}^n$. This can be thought of as a covariance matrix in statistics. If I were to take a square partition, would that square partition still be full rank? For example, suppose that:
$$ M = \begin{pmatrix} a_{11} \ldots a_{1n}\\ \ldots\\ a_{n1} \ldots a_{nn}\\ \end{pmatrix} $$
Then a square partition might be:
$$ M_{33} = \begin{pmatrix} a_{33} \ldots a_{3n}\\ \ldots\\ a_{n3} \ldots a_{nn}\\ \end{pmatrix} $$
where I took the bottom-right block of the original matrix $M$. Would this be full rank as well? |
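A numerical illustration (not a proof) of the question above: a full-rank symmetric positive semi-definite matrix is in fact positive definite, and restricting $z$ to vectors supported on the chosen indices shows that any principal submatrix is positive definite as well, hence full rank. A rough numpy sketch:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((6, 6))
M = A @ A.T + 1e-3 * np.eye(6)    # symmetric, positive definite, full rank

M33 = M[2:, 2:]                   # bottom-right principal submatrix
print(np.linalg.matrix_rank(M33))             # 4, i.e. full rank
print(np.all(np.linalg.eigvalsh(M33) > 0))    # True
```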
I was learning ϴ(n) notation in my course "Asymptotic Analysis for Algorithms" when I encountered the following example:
For any non-negative constants $c_1\geq 0,c_2\geq 0,n\geq n_0$ we have the following inequality:
$$c_1\leq\frac{1}{2}-\frac{3}{n}\leq c_2$$
For these constants to satisfy this inequality, it must be that $c_2\geq\frac{1}{2}$ when $n\geq 1$. This is the part I'm having issues with.
I tried to derive it assuming $n \geq 1$ and I found:
\begin{align*} c_2 &\geq \frac{1}{2} - \frac{3}{n} \\ \implies c_2 &\geq \frac{1}{2} - \frac{3}{1} \\ \implies c_2 &\geq -\frac{5}{2} \end{align*}
I managed to show the left inequality holds when $n\geq 7$ and $c_1\leq \frac{1}{14}$. |
Show that the two straight lines $x^2(\tan^2 (\theta)+\cos^2 (\theta))-2xy\tan (\theta)+y^2\sin^2 (\theta)=0$ make angles with the x-axis such that the difference of their tangents is $2$.
My Attempt: $$x^2(\tan^2 (\theta) +\cos^2 (\theta))-2xy\tan (\theta) + y^2 \sin^2 (\theta)=0$$
Let $y-m_1x=0$ and $y-m_2x=0$ be the two lines represented by the above equation. Their combined equation is: $$(y-m_1x)(y-m_2x)=0$$ $$y^2-(m_1+m_2)xy+(m_1m_2)x^2=0$$
How do I proceed further? |
Imagine the speed of light to be $1$ meter per second and the speed of light in the medium with a high refractive index to be $\frac{1}{2}$ meters per second.
If you have a single peak of a wave in the slower medium, that peak must move forwards at speed $\frac{1}{2}$, no matter what angle it's facing. In the faster medium, that peak must move forwards at speed $1$, no matter what angle it's facing.
The critical angle comes into play when you consider where the peak of the wave is on the boundary between the two media. If $\theta$ is the angle between the wave direction and the surface normal and $v$ is the speed of the wave, this point travels at a speed $\csc(\theta) v$. This makes sense: if $\theta=\pi/2$, the speed of the point at the boundary is just the wave speed. If $\theta=0$, the wave passes instantly and so the question isn't really defined (because there is
no point where the peak of the wave is on the boundary).
We demand $\csc(\theta_1) v_1=\csc(\theta_2) v_2$. That is, there should be a single point on the boundary where the peaks of both waves meet. The velocity of the point can be expressed in two ways, and both must be equal.
Unfortunately for the faster medium, if you have a full wave, the point on the boundary can never move slower than $v_2$, in this case, $1$ meter per second. But we're trying to send in a wave whose boundary point can move as slow as $\frac{1}{2}$ meter per second. There is absolutely no way any wave in the faster medium can satisfy that.
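A small numerical sketch of the matching condition $\csc(\theta_1)\, v_1 = \csc(\theta_2)\, v_2$ with the speeds used above ($1/2$ m/s in the slow medium, $1$ m/s in the fast one); beyond roughly $30^\circ$ of incidence in the slow medium, no transmitted angle exists:

```python
import numpy as np

v_slow, v_fast = 0.5, 1.0

# Matching csc(theta_1)*v_slow = csc(theta_2)*v_fast gives
# sin(theta_2) = (v_fast / v_slow) * sin(theta_1), which must stay <= 1.
theta_c = np.degrees(np.arcsin(v_slow / v_fast))
print(theta_c)                                   # 30 degrees: the critical angle

theta_1 = np.radians(45.0)                       # beyond the critical angle
print((v_fast / v_slow) * np.sin(theta_1))       # ~1.41 > 1: no real transmitted angle
```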
The result of this is an evanescent wave, where something is "transmitted" but decays exponentially (so that nothing is truly transmitted over long distances). You can't see that very well in optical light, but you can in microwaves. Take for example this Sixtysymbols video. Around three minutes in, two microwave prisms get pushed together. The reading starts to increase slowly before the two prisms are mushed up right next to each other because there is an evanescent wave "escaping" the prism but transmitting nothing over long distance. If the evanescent wave hits another prism, there is some actual transmission. |
Solutions to Try Its
1. 5.5556
2. About 1.548 billion people; by the year 2031, India’s population will exceed China’s by about 0.001 billion, or 1 million people.
3. [latex]\left(0,129\right)[/latex] and [latex]\left(2,236\right);N\left(t\right)=129{\left(\text{1}\text{.3526}\right)}^{t}[/latex]
4. [latex]f\left(x\right)=2{\left(1.5\right)}^{x}[/latex]
5. [latex]f\left(x\right)=\sqrt{2}{\left(\sqrt{2}\right)}^{x}[/latex]. Answers may vary due to round-off error. The answer should be very close to [latex]1.4142{\left(1.4142\right)}^{x}[/latex].
6. [latex]y\approx 12\cdot {1.85}^{x}[/latex]
7. about $3,644,675.88
8. $13,693
9. [latex]{e}^{-0.5}\approx 0.60653[/latex]
10. $3,659,823.44
11. 3.77E-26 (This is calculator notation for the number written as [latex]3.77\times {10}^{-26}[/latex] in scientific notation. While the output of an exponential function is never zero, this number is so close to zero that for all practical purposes we can accept zero as the answer.)
Solutions to Odd-Numbered Exercises
1. Linear functions have a constant rate of change. Exponential functions increase based on a percent of the original.
3. When interest is compounded, the percentage of interest earned to principal ends up being greater than the annual percentage rate for the investment account. Thus, the annual percentage rate does not necessarily correspond to the real interest earned, which is the very definition of
nominal.
5. exponential; the population decreases by a proportional rate.
7. not exponential; the charge decreases by a constant amount each visit, so the statement represents a linear function.
9. The forest represented by the function [latex]B\left(t\right)=82{\left(1.029\right)}^{t}[/latex].
11. After
t = 20 years, forest A will have 43 more trees than forest B.
13. Answers will vary. Sample response: For a number of years, the population of forest A will increasingly exceed forest B, but because forest B actually grows at a faster rate, the population will eventually become larger than forest A and will remain that way as long as the population growth models hold. Some factors that might influence the long-term validity of the exponential growth model are drought, an epidemic that culls the population, and other environmental and biological factors.
15. exponential growth; The growth factor, 1.06, is greater than 1.
17. exponential decay; The decay factor, 0.97, is between 0 and 1.
19. [latex]f\left(x\right)=2000{\left(0.1\right)}^{x}[/latex]
21. [latex]f\left(x\right)={\left(\frac{1}{6}\right)}^{-\frac{3}{5}}{\left(\frac{1}{6}\right)}^{\frac{x}{5}}\approx 2.93{\left(0.699\right)}^{x}[/latex]
23. Linear
25. Neither
27. Linear
29. $10,250
31. $13,268.58
33. [latex]P=A\left(t\right)\cdot {\left(1+\frac{r}{n}\right)}^{-nt}[/latex]
35. $4,572.56
37. 4%
39. continuous growth; the growth rate is greater than 0.
41. continuous decay; the growth rate is less than 0.
43. $669.42
45. [latex]f\left(-1\right)=-4[/latex]
47. [latex]f\left(-1\right)\approx -0.2707[/latex]
49. [latex]f\left(3\right)\approx 483.8146[/latex]
51. [latex]y=3\cdot {5}^{x}[/latex]
53. [latex]y\approx 18\cdot {1.025}^{x}[/latex]
55. [latex]y\approx 0.2\cdot {1.95}^{x}[/latex]
57. [latex]\text{APY}=\frac{A\left(t\right)-a}{a}=\frac{a{\left(1+\frac{r}{365}\right)}^{365\left(1\right)}-a}{a}=\frac{a\left[{\left(1+\frac{r}{365}\right)}^{365}-1\right]}{a}={\left(1+\frac{r}{365}\right)}^{365}-1[/latex]; [latex]I\left(n\right)={\left(1+\frac{r}{n}\right)}^{n}-1[/latex]
59. Let
f be the exponential decay function [latex]f\left(x\right)=a\cdot {\left(\frac{1}{b}\right)}^{x}[/latex] such that [latex]b>1[/latex]. Then for some number [latex]n>0[/latex], [latex]f\left(x\right)=a\cdot {\left(\frac{1}{b}\right)}^{x}=a{\left({b}^{-1}\right)}^{x}=a{\left({\left({e}^{n}\right)}^{-1}\right)}^{x}=a{\left({e}^{-n}\right)}^{x}=a{\left(e\right)}^{-nx}[/latex].
61. 47,622 fox
63. 1.39%; $155,368.09
65. $35,838.76
67. $82,247.78; $449.75 |
Electronic Journal of Probability Electron. J. Probab. Volume 15 (2010), paper no. 22, 684-709. Poisson-Type Processes Governed by Fractional and Higher-Order Recursive Differential Equations Abstract
We consider some fractional extensions of the recursive differential equation governing the Poisson process, i.e. $\partial_t p_k(t)=-\lambda(p_k(t)-p_{k-1}(t))$, $k\geq0$, $t>0$, by introducing fractional time-derivatives of order $\nu,2\nu,\ldots,n\nu$. We show that the so-called "Generalized Mittag-Leffler functions" $E_{\alpha,\beta}^{k}(x)$, $x\in\mathbb{R}$ (introduced by Prabhakar [24]) arise as solutions of these equations. The corresponding processes are proved to be renewal, with density of the interarrival times (represented by Mittag-Leffler functions) possessing power, instead of exponential, decay, for $t\to\infty$. On the other hand, near the origin the behavior of the law of the interarrival times drastically changes for the parameter $\nu$ varying in $(0,1]$. For integer values of $\nu$, these models can be viewed as higher-order Poisson processes, connected with the standard case by simple and explicit relationships.
Article information
Source: Electron. J. Probab., Volume 15 (2010), paper no. 22, 684-709.
Dates: Accepted: 20 May 2010. First available in Project Euclid: 1 June 2016.
Permanent link to this document: https://projecteuclid.org/euclid.ejp/1464819807
Digital Object Identifier: doi:10.1214/EJP.v15-762
Mathematical Reviews number (MathSciNet): MR2650778
Zentralblatt MATH identifier: 1228.60093
Subjects: Primary: 60K05: Renewal theory. Secondary: 33E12: Mittag-Leffler functions and generalizations; 26A33: Fractional derivatives and integrals.
Rights: This work is licensed under a Creative Commons Attribution 3.0 License.
Citation
Beghin, Luisa; Orsingher, Enzo. Poisson-Type Processes Governed by Fractional and Higher-Order Recursive Differential Equations. Electron. J. Probab. 15 (2010), paper no. 22, 684--709. doi:10.1214/EJP.v15-762. https://projecteuclid.org/euclid.ejp/1464819807 |
Define the Nucleus?
An atom consists of a positively charged nucleus. The atomic radius is much larger than the radius of the nucleus, and almost all of the atom's mass is concentrated in the nucleus. The nucleus contains protons and neutrons; a neutron has nearly the same mass as a proton. Protons and neutrons are bound to each other by the nuclear force. The energies involved in nuclear processes are much larger than those involved in chemical processes.
Isotopes
Nuclides that have the same number of protons but a different number of neutrons are termed isotopes.
Radioactivity
The nuclei of certain elements emit $\alpha$, $\beta$, and $\gamma$ rays, which are helium nuclei, electrons, and electromagnetic radiation respectively. Radioactivity is a sign of instability of the nucleus.
Why fusion requires high temperature?
The light nuclei must have sufficient initial kinetic energy to overcome the Coulomb repulsion (potential barrier) between their positive charges, which is only possible at high temperatures.
Nature of Binding Energy
The binding energy per nucleon curve shows that exothermic nuclear reactions are possible, in which a heavy nucleus undergoes fission or two lighter nuclei fuse together to form a nucleus of intermediate mass.
What is the density of nucleic matter?
The mass density of nuclear matter is essentially independent of the size of the nucleus.
Important Questions

1. Suppose India had a target of producing 500,000 MW of electric power by 2021 AD, 10% of which was to be obtained from nuclear power plants, and suppose the efficiency of utilization (i.e. conversion to electric energy) of the thermal energy produced in a reactor was 50%. How much fissionable uranium would the country need per year by 2021? Take the heat energy per fission of U-235 to be about 300 MeV.

2. Suppose the kinetic energy required for one fusion event equals the average thermal kinetic energy available to the interacting particles, 2(3kT/2), where k is Boltzmann's constant and T the absolute temperature. What is the kinetic energy needed to overcome the Coulomb repulsion between the two nuclei, and to what temperature must the gas be heated to initiate the reaction? Consider the radius of both deuterium and tritium to be approximately 3.0 fm.

3. The half-life of \(Sr_{38}^{90}\) is about 30 years. Calculate the disintegration rate of 20 mg of the isotope. |
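A rough arithmetic sketch for the first question above (assumed constants: one year is about 3.15e7 s, 1 MeV = 1.6e-13 J, Avogadro's number 6.02e23, molar mass of U-235 taken as 235 g):

```python
P_electric = 0.10 * 500_000e6          # W: 10% of 500,000 MW
P_thermal  = P_electric / 0.50         # W: thermal power at 50% conversion efficiency
E_per_year = P_thermal * 3.15e7        # J of heat needed per year

E_fission  = 300 * 1.6e-13             # J released per fission of U-235 (as given)
fissions   = E_per_year / E_fission
mass_kg    = fissions / 6.02e23 * 235e-3
print(f"{mass_kg:.2e} kg of U-235 per year")   # on the order of 10^4 kg
```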
Given a specific function, a parabola in this instance, I can calculate the length of a segment using integrals to sum infinite right angled triangles hypotenuse lengths. My question is, can I reverse the process? If this question has an obvious answer please forgive me as I have just started studying integrals. I am simply curious as to if I'm given a function, and told that a certain point is a set distance along it's length from a known point, can I find the coordinates of the point?
Filling in some of the unpleasant details hidden behind the Shabbeh hint.
For simplicity, let's use the "standard" parabola $y = x^2$.
The arclength $s(t)$ between the points $(0,0)$ and $(t,t^2)$ is given by $$ s(t) = \int_0^t \sqrt{ 1 + \left(\frac{dy}{dx}\right)^2} dx \\ = \int_0^t \sqrt{ 1 + 4x^2} dx \\ = \frac{1}{4}\left(2t \sqrt{ 1 + 4t^2} + \sinh^{-1}(2t)\right) $$ To find the integral in the last step, I used Wolfram Alpha. It's much better at this sort of thing than I am. So, if you want a point at an arclength $k$ from $(0,0)$, you have to find the value of $t$ that gives $s(t) = k$. In other words, given $k$, you have to solve the equation $$ \frac{1}{4}\left(2t \sqrt{ 1 + 4t^2} + \sinh^{-1}(2t)\right) = k $$ This is pretty nasty, and you'll almost certainly need to use a numerical solver to do this. The good thing, though, is that $s(t)$ is a nice smooth monotonically increasing function of $t$, so even a very crude numerical solver should work quickly and reliably. For an introduction to numerical root-finding, you can start here.
See also this question.
This is the problem of finding an arc-length parametrization of a curve. While this can always be done in theory and is a great tool in several proofs regarding parametric curves, it is rarely possible to find a closed form solution.
The general method is:
1. Describe your curve by $\vec{r}(t)$, for $a \leq t \leq b$.
2. Find $\vec{r}\,'(t) = \frac{d}{dt}\vec{r}(t)$.
3. Create the function $s(t) = \int_{a}^{t} |\vec{r}\,'(u)| \, du$, which will give you the arclength over $a \leq u \leq t$.
4. Once you have $s(t)$, find the inverse function $t(s)$, with something like the inverse function theorem. The inverse will always exist for an appropriate parametrization $\vec{r}(t)$, since $s(t)$ can be taken always increasing and positive.
5. Once you have $t(s)$, just use $\vec{r}(t(s))$ to get the position given the arc-length $s$.
The problem, of course, is that steps 2, 3 and 4 are generally hard to do analytically.
However, using a computer this can be done by simply making a table of arc-length $s$ per parameter $t$, and then reading it backwards, with some interpolation if necessary. So the problem is very simple numerically and, in principle, well behaved for general smooth curves. It's just the closed form part that's hard. |
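Here is a short numerical sketch of this procedure for the parabola $y = x^2$ (using scipy; the arc length is measured from the origin, matching the worked example above):

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

def arclen(t):
    """Arc length of y = x^2 between x = 0 and x = t (t >= 0)."""
    return quad(lambda x: np.sqrt(1.0 + 4.0 * x**2), 0.0, t)[0]

def point_at_arclength(k):
    """Return the point (x, y) on y = x^2 at arc length k from the origin."""
    # The integrand is >= 1, so arclen(t) >= t and the root is bracketed by [0, k].
    t = brentq(lambda t: arclen(t) - k, 0.0, k)
    return t, t**2

print(point_at_arclength(2.0))   # the point two units of arc length along the curve
```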
Summer came and went, and Fall begins at full throttle with a (metric) ton of papers. Eight that we counted — if any was missed, please mention it in the comments!
Efficient Removal without Efficient Regularity, by Lior Gishboliner, Asaf Shapira (arXiv). Obtaining efficient removal lemmata for graph patterns (such as the triangle, to name the most famous), that is, removal results with bounds on the number of copies of the pattern that are not mind-blowingly huge like a tower in \(\varepsilon\), is a classic and longstanding problem. This work makes significant progress for the last remaining case, i.e. for the pattern \(C_4\): providing bounds that are merely exponential in \(\varepsilon\).
Local decoding and testing of polynomials over grids, by Srikanth Srinivasan, Madhu Sudan (arXiv, ECCC). In this work, the authors study the local decodability and local testability of error-correcting codes corresponding to low-degree polynomials on the grid \(\{0,1\}^n\) (over a field \(\mathbb{F}\supseteq \{0,1\}\)). Obtaining both positive and negative results on these, a consequence of their results is a separation between local testability and local decodability for a natural family of codes.
Lower Bounds for Approximating Graph Parameters via Communication Complexity, by Talya Eden and Will Rosenbaum (arXiv). This paper establishes an analogue of the framework of Blais, Brody, and Matulef (2012), which enabled one to obtain property testing lower bounds by a reduction from communication complexity, for the setting of graph parameter estimation. The authors then leverage this technique to give new and simpler proofs of lower bounds for several such estimation tasks.
A Note on Property Testing Sum of Squares and Multivariate Polynomial Interpolation, by Aaron Potechin and Liu Yang (arXiv). The authors introduce and study the question of testing “sum-of-square-ness,” i.e. the property of a degree-\(d\)-polynomial being a sum of squares. Specifically, they show that one-sided sample-based testers cannot do much better than the trivial approach, that is that they require sample complexity \(n^{\Omega(d)}\) — while learning the polynomial can be done with \(n^{O(d)}\) samples.
Sharp Bounds for Generalized Uniformity Testing, by Ilias Diakonikolas, Daniel Kane, and Alistair Stewart (arXiv, ECCC). Remember the post from last month, which included a paper on “Generalized Uniformity Testing”? Well, this paper more or less settles the question, establishing tight bounds on the sample complexity of testing whether an (unknown) probability distribution over an (unknown) discrete domain is uniform on its support, or far from every uniform distribution. Specifically, the authors significantly strengthen the previous upper bound, by getting the right dependence on \(\varepsilon\) for all regimes; and complement it by a matching worst-case lower bound.
Sample-Optimal Identity Testing with High Probability, by Ilias Diakonikolas, Themis Gouleakis, John Peebles, and Eric Price (ECCC). Usually, in property testing we do not care too much about the error probability \(\delta\): if one can achieve \(1/3\), then simple repetition can bring it down to \(\delta\) at the mild price of a \(\log(1/\delta)\) factor in the query/sample complexity. Is that necessary, though? This paper shows that for uniformity and identity testing of distributions, the answer is “no”: for some regimes, this repetition trick is strictly suboptimal, as one can pay instead only a multiplicative \(\sqrt{\log(1/\delta)}\). And quite interestingly, this improvement is achieved with the simplest algorithm one can think of: by considering the empirical distribution obtained from the samples.
A Family of Dictatorship Tests with Perfect Completeness for 2-to-2 Label Cover, by Joshua Brakensiek and Venkatesan Guruswami (ECCC). While I tried to paraphrase the original abstract, but my attempts only succeeded in making it less clear; and, for fear of botching the job, decided to instead quote said abstract: “[the authors] give a family of dictatorship tests with perfect completeness [that is, one-sided] and low-soundness for 2-to-2 constraints. The associated 2-to-2 conjecture has been the basis of some previous inapproximability results with perfect completeness. However, evidence towards the conjecture in the form of integrality gaps even against weak semidefinite programs has been elusive. [Their] result provides some indication of the expressiveness and non-triviality of 2-to-2 constraints, given the close connections between dictatorship tests and satisfiability and approximability of CSPs.”
A polynomial bound for the arithmetic \(k\)-cycle removal lemma in vector spaces, by Jacob Fox, László Miklós Lovász, and Lisa Sauermann (arXiv). And back to removal lemmata! This work proves a generalization of Green’s arithmetic \(k\)-cycle removal lemma, which held for any \(k\geq 3\) and abelian group \(G\); however, the bounds in this lemma were quite large — i.e., tower-ype. Here, the authors establish an efficient lemma (with polynomial bounds) for the case of the group \(\mathbb{F}_p^n\) (where \(p\geq 2\) is any fixed prime, and \(k\geq 3\)).
Update (10/04): Finally, a paper we covered last summer, The Dictionary Testing Problem, by Siddharth Barman, Arnab Bhattacharyya, and Suprovat Ghoshal, underwent significant changes. Now titled Testing Sparsity over Known and Unknown Bases, it now includes (in addition to the previous results) a testing algorithm for sparsity with regard to a specific basis: given a matrix \(A \in \mathbb{R}^{d \times m}\) and unknown input vector \(y \in \mathbb{R}^d\), does \(y\) equal \(Ax\) for some \(k\)-sparse vector \(x\), or is it far from all such representations?
Update (10/5): we missed a recent paper of Benjamin Fish, Lev Reyzin, and Benjamin Rubinstein on Sublinear-Time Adaptive Data Analysis (arXiv). While not directly falling into the umbrella of property testing, this work considers sublinear-time algorithms for adaptive data analysis — similar in goal and spirit to property testing. |
Suppose you have three positive integers $a, b, c$ that form a Pythagorean triple:\begin{equation} a^2 + b^2 = c^2. \tag{1}\label{1}\end{equation}Additionally, suppose that when you apply Euler's totient function to each term, the equation still holds:$$ \phi(a^2) + \phi(b^2) = \phi(c^2). \tag{2}\label{2}$$One way this can happen is if $a^2, b^2, c^2$ have the same primes in their prime factorization. (For example, starting from the Pythagorean triple $3,4,5$, we could multiply all three terms by $30$ to get $90, 120, 150$. If we do, then we have $90^2 + 120^2 = 150^2$ and $\phi(90^2) + \phi(120^2) = \phi(150^2)$.) In that case, because all three terms are squares, they all contain these prime factors at least twice, and so we must have$$ \phi(\phi(a^2)) + \phi(\phi(b^2)) = \phi(\phi(c^2)). \tag{3}\label{3}$$My question is: are there any "atypical" solutions to the two equations $\eqref{1}$ and $\eqref{2}$ for which $\eqref{3}$ does
not hold? Or at least where $\eqref{1}$ and $\eqref{2}$ hold, but the prime factorizations of $a,b,c$ do not consist of the same primes, even if $\eqref{3}$ happens to hold for a different reason?
In the comments, Peter and Gerry Myerson have checked small cases (all triples for $1 \le a \le b \le 10^5$ and primitive triples generated by $(m,n)$ for $1 \le n \le m \le 2000$) without finding any atypical solutions.
Here is an in-depth explanation for why typical solutions like $(90,120,150)$ work. By a typical solution, I mean a solution where $a,b,c$ have the same primes in their prime factorization. Such a triple satisfies $\eqref{2}$ and $\eqref{3}$ whenever it satisfies $\eqref{1}$, as shown below.
Let $\operatorname{rad}(x)$ denote the radical of $x$: the product of all distinct prime factors of $x$. To get a typical solution, we start with any Pythagorean triple, then scale $(a,b,c)$ so that $\operatorname{rad}(a) = \operatorname{rad}(b) = \operatorname{rad}(c) = r$.
It is a general totient function identity that whenever $\operatorname{rad}(x) = r$, $\phi(x) = \frac{\phi(r)}{r} \cdot x$. In other words, $\phi(x) = x \prod\limits_{p \mid x} \frac{p-1}{p}$ where the product is over all primes $p$ that divide $x$.
In the case above, we have$$ \phi(a^2) + \phi(b^2) = \frac{\phi(r)}{r} \cdot a^2 + \frac{\phi(r)}{r} \cdot b^2 = \frac{\phi(r)}{r} \cdot c^2 = \phi(c^2),$$and $\eqref{2}$ holds.Moreover, since $r \mid a,b,c$, we have $r^2 \mid a^2,b^2,c^2$, so when we multiply by $\frac{\phi(r)}{r}$, we have $r \phi(r) \mid \phi(a^2), \phi(b^2), \phi(c^2)$. Therefore all prime factors of $r \phi(r)$ divide each of $\phi(a^2)$, $\phi(b^2)$, and $\phi(c^2)$. These are
all their prime factors, since $r$ contained all the prime factors of $a^2, b^2, c^2$, and since the only new prime factors introduced came from multiplying by $\phi(r)$.
As a result, $\phi(a^2), \phi(b^2), \phi(c^2)$ still have the same set of prime factors: $\operatorname{rad}(\phi(a^2)) = \operatorname{rad}(r \phi(r)) = s$, and similarly $\operatorname{rad}(\phi(b^2)) = \operatorname{rad}(\phi(c^2)) = s$. So $\eqref{3}$ holds, because $$ \phi(\phi(a^2)) + \phi(\phi(b^2)) = \frac{\phi(s)}{s} \cdot \phi(a^2) + \frac{\phi(s)}{s} \cdot \phi(b^2) = \frac{\phi(s)}{s} \cdot \phi(c^2) = \phi(\phi(c^2)). $$ |
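As a quick numerical sanity check of the typical solution discussed above, here is a small script (a sketch using Python and sympy's totient; nothing here goes beyond the example $(90,120,150)$ already mentioned):

```python
from sympy import totient

# The "typical" solution (90, 120, 150): a Pythagorean triple whose members
# all have radical 30, i.e. the same set of prime factors {2, 3, 5}.
a, b, c = 90, 120, 150

assert a**2 + b**2 == c**2                                  # equation (1)
assert totient(a**2) + totient(b**2) == totient(c**2)       # equation (2)
assert (totient(totient(a**2)) + totient(totient(b**2))
        == totient(totient(c**2)))                          # equation (3)
print("equations (1), (2), (3) all hold for (90, 120, 150)")
```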
I am trying to read my probability book on the "Strong Law of Large Numbers", and came across this example that is really confusing me.
Let $X_i$ be a sequence of independent uniformly distributed random variables in $[0, 1]$ and $Y_n = \min(X_1, ..., X_n)$. Show that $Y_n$ converges to zero with probability 1.
The book says let $Y$ be the limit of the $Y_n$s which exists because nonincreasing and bounded below. Then for $1 > \epsilon > 0$, $$P(Y \ge \epsilon) = P(X_1\ge \epsilon \text{ & }\cdots\text{ & }X_n \ge \epsilon) = (1 - \epsilon)^n.$$ Here it means $Y_n$ I think, right?
Then it says $$P(Y \ge \epsilon) \le \lim_{n \to \infty} (1 - \epsilon)^n = 0.$$ So we can conclude $P(Y \ge \epsilon) = 0$, so $P(Y = 0) = 1$.
Why does it have a less than or equal sign for the last step? Shouldn't it be equality?
Thanks for any help. |
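For intuition (not part of the book's argument), a small simulation sketch in Python/NumPy shows how quickly $Y_n$ shrinks toward zero; the sample size and seed are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0.0, 1.0, size=100_000)   # X_1, ..., X_n i.i.d. Uniform[0, 1]
y = np.minimum.accumulate(x)              # Y_n = min(X_1, ..., X_n)

for n in (10, 100, 1_000, 100_000):
    print(n, y[n - 1])
# Y_n is nonincreasing, and P(Y_n >= eps) = (1 - eps)^n -> 0 for any eps > 0,
# which is why the limit Y equals 0 with probability 1.
```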
I'm stuck on how to evaluate the following using L'Hôpital's rule:
$$\lim_{x \to \infty}\left(1 + \frac{2}{x}\right)^{3x}$$
This is a problem that I encountered on Khan Academy and I attempted to understand it using the resources there. Here are the tips given for the problem; the portion that I'm having trouble understanding is highlighted:
I also attempted to use this video (screenshot following) to help; I understand the concepts in the video but it seems like there are some missing steps in the tips above.
I also attempted to use WolframAlpha's step-by-step solution but it was indecipherable to me.
Any help is greatly appreciated. |
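For reference, one standard route (take logarithms, rewrite as a quotient, then apply L'Hôpital) is sketched below; this is just the usual textbook manipulation, not the exact steps from the Khan Academy tips:

$$\ln L=\lim_{x\to\infty}3x\ln\!\left(1+\frac{2}{x}\right)=\lim_{x\to\infty}\frac{\ln\!\left(1+\frac{2}{x}\right)}{\frac{1}{3x}}\;\overset{0/0}{=}\;\lim_{x\to\infty}\frac{\dfrac{-2/x^{2}}{1+2/x}}{-\dfrac{1}{3x^{2}}}=\lim_{x\to\infty}\frac{6}{1+\frac{2}{x}}=6,\qquad\text{so } L=e^{6}.$$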
J/Ψ production and nuclear effects in p-Pb collisions at √sNN=5.02 TeV
(Springer, 2014-02)
Inclusive J/ψ production has been studied with the ALICE detector in p-Pb collisions at the nucleon–nucleon center of mass energy √sNN = 5.02TeV at the CERN LHC. The measurement is performed in the center of mass rapidity ...
Suppression of ψ(2S) production in p-Pb collisions at √sNN=5.02 TeV
(Springer, 2014-12)
The ALICE Collaboration has studied the inclusive production of the charmonium state ψ(2S) in proton-lead (p-Pb) collisions at the nucleon-nucleon centre of mass energy √sNN = 5.02TeV at the CERN LHC. The measurement was ...
Event-by-event mean pT fluctuations in pp and Pb–Pb collisions at the LHC
(Springer, 2014-10)
Event-by-event fluctuations of the mean transverse momentum of charged particles produced in pp collisions at s√ = 0.9, 2.76 and 7 TeV, and Pb–Pb collisions at √sNN = 2.76 TeV are studied as a function of the ...
Centrality, rapidity and transverse momentum dependence of the J/$\psi$ suppression in Pb-Pb collisions at $\sqrt{s_{NN}}$=2.76 TeV
(Elsevier, 2014-06)
The inclusive J/$\psi$ nuclear modification factor ($R_{AA}$) in Pb-Pb collisions at $\sqrt{s_{NN}}$=2.76TeV has been measured by ALICE as a function of centrality in the $e^+e^-$ decay channel at mid-rapidity |y| < 0.8 ...
Production of charged pions, kaons and protons at large transverse momenta in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV
(Elsevier, 2014-09)
Transverse momentum spectra of $\pi^{\pm}, K^{\pm}$ and $p(\bar{p})$ up to $p_T$ = 20 GeV/c at mid-rapidity, |y| $\le$ 0.8, in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV have been measured using the ALICE detector ...
Multiplicity Dependence of Pion, Kaon, Proton and Lambda Production in p-Pb Collisions at $\sqrt{s_{NN}}$ = 5.02 TeV
(Elsevier, 2014-01)
In this Letter, comprehensive results on $\pi^{\pm}, K^{\pm}, K^0_S$, $p(\bar{p})$ and $\Lambda (\bar{\Lambda})$ production at mid-rapidity (0 < $y_{CMS}$ < 0.5) in p-Pb collisions at $\sqrt{s_{NN}}$ = 5.02 TeV, measured ...
Multi-strange baryon production at mid-rapidity in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV
(Elsevier, 2014-01)
The production of ${\rm\Xi}^-$ and ${\rm\Omega}^-$ baryons and their anti-particles in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV has been measured using the ALICE detector. The transverse momentum spectra at ...
Measurement of charged jet suppression in Pb-Pb collisions at √sNN = 2.76 TeV
(Springer, 2014-03)
A measurement of the transverse momentum spectra of jets in Pb–Pb collisions at √sNN = 2.76TeV is reported. Jets are reconstructed from charged particles using the anti-kT jet algorithm with jet resolution parameters R ...
Two- and three-pion quantum statistics correlations in Pb-Pb collisions at √sNN = 2.76 TeV at the CERN Large Hadron Collider
(American Physical Society, 2014-02-26)
Correlations induced by quantum statistics are sensitive to the spatiotemporal extent as well as dynamics of particle-emitting sources in heavy-ion collisions. In addition, such correlations can be used to search for the ...
Exclusive J /ψ photoproduction off protons in ultraperipheral p-Pb collisions at √sNN = 5.02TeV
(American Physical Society, 2014-12-05)
We present the first measurement at the LHC of exclusive J/ψ photoproduction off protons, in ultraperipheral proton-lead collisions at √sNN=5.02 TeV. Events are selected with a dimuon pair produced either in the rapidity ... |
Motivation: It is a well-known fact that $ay''+by'+cy=0$ has solutions which are found from substituting the ansatz $y=e^{\lambda t}$ into the DEqn. It turns out that we replace the calculus problem $ay''+by'+cy=0$ with the algebra problem of solving the characteristic equation $a\lambda^2+b\lambda+c=0$. When the solution is a conjugate pair of complex numbers or distinct pair of real numbers the solutions arise from $e^{\lambda t}$. On the other hand, when the solution is real and repeated then the ansatz solution $y=e^{\lambda t}$ only covers half of the general solution.
Suppose that $a\lambda^2+b\lambda+c=0$ has double root solution $\lambda = r$ then we form the general solution of $ay''+by'+cy=0$ as $$ y(t) = c_1e^{rt}+c_2te^{rt}. $$ The inclusion of the $t$ in the solution is surprising to many students. I think many have asked "where'd the $t$ come from?". Of course, we could just as well ask "where the $e^{\lambda t}$ come from?". I know of several ways to derive the $t$. In particular:
$y''=0$ integrates twice to $y=c_1+tc_2$ and $e^{0t}=1$ so this is an example of the double root. A simple change of coordinates allows this derivation to be extended to an arbitrary double-root.
reduction of order to a system of ODEs in normal form. We'll obtain a $2 \times 2$ matrix which is not diagonalizable. However, the matrix exponential gives a solution and the generalized e-vector piece generates the $t$ in the second solution.
you can use the second linearly independent solution formula from the theory of ODEs. This formula is found by making a reduction of order based on the fact $y=e^{rt}$ is a solution. After a bit the problem reduces to a linear ODE which integrates to give a lovely formula with nested integrals. This formula also will derive the $t$ in the double root solution.
Laplace transforms. We can transform the given ODE in $t$ to obtain an algebra equation with $(s-r)^2Y$ which gives $\frac{F(s)}{(s-r)^2}$ and upon inverse transform the appearance of the $(s-r)^2$ in the denominator gives us the $te^{rt}$ solution
Inverse operators. By writing the given ODE as $(D-r)^2[y]=0$ we can integrate in a certain way and again derive the $te^{rt}$ solution.
Series solution techniques.
Added 10/6: start with the distinct-root solution $y=c_1e^{\lambda_1 t}+c_2e^{\lambda_2t}$ and consider the limit $\lambda_1 \rightarrow \lambda_2$ to derive the second solution (a sketch of this limit appears below).
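A rough sketch of that limiting argument (for illustration only):

$$y_{\lambda_1,\lambda_2}(t)=\frac{e^{\lambda_1 t}-e^{\lambda_2 t}}{\lambda_1-\lambda_2}\;\xrightarrow{\;\lambda_1\to\lambda_2=r\;}\;\left.\frac{\partial}{\partial\lambda}e^{\lambda t}\right|_{\lambda=r}=t\,e^{rt},$$

and since the left-hand side solves the ODE for every pair of distinct roots, the limit $te^{rt}$ supplies the second solution in the double-root case.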
These are the methods which seem fairly obvious in view of the introductory course (up to notation, several of these are the same method). My question is this:
Question:What is the history of the solution $y=te^{rt}$? Who studied the problem $ay''+by'+cy=0$ and found this solution?
I'm also interested in the particular sub-histories of the other methods I mention above.
Thanks in advance for any insights! |
Although this questions is very much math related, I posted it in Physics since it is related to variational (Lagrangian/Hamiltonian) principles for dynamical systems. If I should migrate this elsewhere, please tell me.
Often times, in graduate and undergraduate courses, we are told that we can only formulate the Lagrangian (and Hamiltonian) for "potential" systems, wherein the dynamics satisfy the condition that: $$ m\ddot{\mathbf{x}}=-\nabla V $$ If this is true, we can formulate a functional which is stationary with respect to the system as: $$ F[\mathbf{x}]=\int^{t}_0\left(\frac{1}{2}m\dot{\mathbf{x}}(\tau)^2-V(\mathbf{x}(\tau))\right)\,\text{d}\tau $$
Taking the first variation of this functional yields the dynamics of the system, along with a condition that effectively states that the initial configuration should be similar to the final configuration (variation at the boundaries is zero).
Now, given the functional: $$ F[\mathbf{x}]=\frac{1}{2}[\mathbf{x}^{\text{T}} * D(\mathbf{x})]+\frac{1}{2}[\mathbf{x}^{\text{T}} * \mathbf{Ax}]-\frac{1}{2}\mathbf{x}'(0)\mathbf{x}(t) $$ With $\mathbf{A}$ symmetric and $\mathbf{x}(0)$ being the initial condition, and: $$ [\mathbf{f}^{\text{T}} * \mathbf{g}]=\int^{t}_0 \mathbf{f}^{\text{T}}(t-\tau)\mathbf{g}(\tau)\,\text{d}\tau $$
If we take the first variation and assume
only that the initial variation is zero, the functional is stationary with respect to: $$ \frac{d\mathbf{x}(t)}{dt}= \mathbf{Ax}(t) $$
This is a functional derived by Tonti and Gurtin, it represents a variational principle for linear initial value problems with symmetric state matrices and shows, as a proof of concept, that functionals
can be derived for non-potential systems, initial value or dissipative systems.
My question is, is it possible to derive these functionals for arbitrary nonlinear systems which do not have similar initial and final configurations (and cannot have similar initial and final configurations due to dissipation)?
What sorts of conditions would exists on the dynamics of these systems?
In this example, $\mathbf{A}$ must be symmetric, which already implies all of its eigenvalues are real and thus it is a non-potential system, but there is still a functional which can be derived for it.
Any related sources, information, or answers regarding specific cases would be appreciated. If anyone needs clarification, or a proof of any result I presented here, let me know.
Edit: Also, a related question for anyone seeing this: I'm currently just interested in the abstract aspect of the problem (solving/investigating it for the sake of it), but why are functional representations such as these useful? I know there are some numerical applications, but if I have a functional which attains a minimum for a certain system, what can I do with it?
This post imported from StackExchange Physics at 2015-07-29 19:08 (UTC), posted by SE-user Ron |
Definition:Symmetric Difference/Notation
Notation for Symmetric Difference
There is no standard symbol for symmetric difference. The one used here, and in general on $\mathsf{Pr} \infty \mathsf{fWiki}$, is $S * T$.
The following are often found for $S * T$, and are also variants for denoting this concept: $S \oplus T$, $S + T$, $S \mathop \triangle T$ (or $S \mathop \Delta T$), $S \mathop \Theta T$, $S \mathop \triangledown T$.
How disheartening it is to know that many of the advanced knowledge in science and mathematics and astronomy and medicine that we know today are said to be the discoveries of Europeans while the truth is that long before the west even came out of Stone Age, ancient Indian sages and scholars had not only discovered them but also put them to regular use!
The dirty game of British Imperialism not only robbed India’s wealth and culture but also belittled the highly advanced knowledge in science and philosophy. Those robbers, in an attempt to show racial supremacy even created the hoax of Aryan Race that never actually existed. It is even sadder that we Indians run for their culture while ignoring our own. It is high time that we start recognizing and respecting our own culture and it is high time that west starts giving credit for things that rightfully belongs to us.
Today we will focus of Pythagoras Theorem. Well, just like the Atomic Theory is credited to John Dalton, Pythagoras Theorem is credited to Pythagoras. The truth however is that ancient Indian sage Kanada came up with Atomic Theory over 2,600 years before John Dalton and ancient Indian mathematician and possibly a sage and an architect name Baudhayana actually gave the Pythagoras Theorem over 200 years before Pythagoras was even born.
Who was Baudhayana?
Not much is known about Baudhayana. However, historians attach the date c. 800 BCE (or BC). Not even the exact date of death of this great mathematician is recorded. Some believe that he was not just a mathematician but in fact, he was also a priest and an architect of very high standards.
What makes Baudhayana Important?
The case of Baudhayana is one of the many examples where Greeks and other western civilizations took credit of the discoveries originally made by ancient Indians. Baudhayana in particular is the person who contributed three important things towards the advancements of mathematics:
He gave us the theorem that became known as Pythagorean Theorem. Actually we should be calling it Baudhayana Theorem. He gave us the method of circling a square. He also gave us the method of finding the square root of 2.
Let us take a look at each of his contributions separately.
The Pythagorean (Baudhayana) Theorem
Baudhayana wrote what is known as Baudhayana Sulbasutra. It is one of the earliest Sulba Sutras written. Now Sulba Sutras are nothing but appendices to famous Vedas and primarily dealt with rules of altar construction. In Baudhayana Sulbasutra, there are several mathematical formulae or results that told how to precisely construct an altar. In essence, Baudhayana Sulbasutra was more like a pocket dictionary, full of formulae and results for quick references. Baudhayana essentially belonged to Yajurveda school and hence, most of his work on mathematics was primarily for ensuring that all sacrificial rituals were performed accurately.
One of the most important contributions by Baudhayana was the theorem that has been credited to Greek mathematician Pythagoras. There is an irony to this as well that we will discuss in a while.
What later became known as Pythagorean Theorem has been mentioned as a verse or a shloka in Baudhayana Sulbasutra. Here is the exact shloka followed by English interpretation:
दीर्घचतुरश्रस्याक्ष्णया रज्जु: पार्श्र्वमानी तिर्यग् मानी च यत् पृथग् भूते कुरूतस्तदुभयं करोति ॥
or
dīrghachatursrasyākṣaṇayā rajjuḥ pārśvamānī, tiryagmānī, cha yat pṛthagbhūte kurutastadubhayāṅ karoti.
When translated to English, it becomes:
If a rope is stretched along the diagonal’s length, the resulting area will be equal to the sum total of the area of horizontal and vertical sides taken together.
So the question is, what the heck do these horizontal and vertical sides refer to? Some people have argued that the sides refer to the sides of a rectangle and some say that they refer to the sides of a square.
Whatever the case may be, if Baudhayana’s formula is restricted to a right-angled isosceles triangle, whatever is claimed by the shloka becomes too restricted. Fortunately, there is no reference drawn to a right-angled isosceles triangle and hence, the shloka lends itself to geometrical figures with unequal sides as well.
Because Baudhayana’s verse is open-ended, it is pretty logical to assume that the sides he referred to may be the sides of a rectangle. If so, it is actually the statement of the Pythagorean Theorem that came to existence at least 200 years before Pythagoras was even born!
It is not that Baudhayana was the only person who came up with the theorem. Later came Apastamba – another mainstream mathematician from ancient India who too provided the Pythagorean Triplet using numerical calculations.
So the next big question, why the hell is the theorem attributed to Pythagoras and not Baudhayana? That’s because Baudhayana went on to prove it by NOT using geometry but using area calculation. Later on when the Greeks started proving it, they (specifically Euclid and some others) provided geometrical proof. Since Baudhayana’s proof was not geometrical by nature, his discovery was completely ignored.
However, one thing that was interestingly suppressed is that Baudhayana not only gave the proof of the Pythagorean Theorem in terms of area calculation but also came up with a geometric proof using isosceles triangles. So essentially, Baudhayana gave the geometric proof and Apastamba gave the numerical proof.
Funny thing about Pythagoras?
It is clear that Pythagoras didn’t really discover the theorem. In fact, it was 300 years after Pythagoras’ so-called discovery that the theorem was credited to Pythagoras by other Greek philosophers, historians and mathematicians. Later on, many historians actually tried to find a relation between Pythagoras and the Pythagorean Theorem but failed to find any such link; they did, however, manage to find a relation between the theorem and Euclid, who was born several hundred years after Pythagoras.
Here is something more surprising. Many historians have actually come up with evidences that Pythagoras traveled from Greece to Egypt to India and then back to Greece. Possibly Pythagoras learned the theorem in India and took the knowledge back to Greece but hid the fact that the source of the knowledge was India.
The flimsy critics
It is not unnatural to find many critics from the western world who prefer to maintain their facade of racial supremacy and egoistic western imperialism and who blatantly try to discredit Baudhayana even today. They argue that what Baudhayana gave was a mere statement, and that too in the form of a verse or shloka, and that there is no hardcore proof. Even if we are to believe that Baudhayana did not give a proof, how on earth are we supposed to believe that someone gave a formula, which forms the very basis of geometry and algebra, without really knowing the explanation or proof in detail? Is that even possible? Did Einstein simply give the formula $E=mc^2$ without giving the proof? So, these egoistic western critics are very flimsy with their criticism.
We should not forget that many (literally thousands and thousands) books and libraries were burned to ashes under British Imperialist rule of Indian subcontinent. Many of our age-old ancient knowledge has been burned down to ashes and completely destroyed by these idiots. Not just the British, India also suffered a lot during Islamic conquests. It is high time that the world looks at India with respect for producing the most advanced knowledge it science, philosophy, astronomy and medicine when the rest of the world was doomed in the darkness of ignorance.
Baudhayana’s Contribution towards Circling a Square and Pi
It was not just the Pythagorean (Baudhayana) Theorem that was first provided by Baudhayana. He even gave us the value of Pi (π). The Baudhayana Sulbasutra has several approximations of π that Baudhayana possibly used while constructing circular shapes.
The various approximations of π that can be found in Baudhyana Sulbasutra are:
$$\pi =\frac { 676 }{ 225 } =3.004$$
$$\pi =\frac { 900 }{ 289 } =3.114$$
$$\pi =\frac { 1156 }{ 361 } =3.202$$
None of the values of π mentioned in Baudhayana Sulbasutra are accurate because the value of π is approximately 3.14159. However, the approximations that Baudhayana used wouldn’t really lead to major error during the construction of circular shapes in altars.
Baudhayana’s Contribution Towards the Square Root of 2
Interestingly Baudhayana did come up with a very accurate value of the square root of 2, which is denoted by √2. This value can be found in Baudhayana Sulbasutra Chapter 1, Verse 61. Whatever Baudhayana wrote in Sanskrit actually boils down to this symbolic representation:
$$\sqrt { 2 } =1+\frac { 1 }{ 3 } +\frac { 1 }{ \left( 3\times 4 \right) } -\frac { 1 }{ \left( 3\times 4\times 34 \right) } =\frac { 577 }{ 408 } =1.414215686$$
This value is accurate to 5 decimal places.
In case Baudhayana restricted his approximation of √2 to the following:
$$\sqrt { 2 } =1+\frac { 1 }{ 3 } +\frac { 1 }{ \left( 3\times 4 \right) }$$
In the above restricted case, the error would be of the order of 0.002. This value is way more accurate than the approximations of π he provided. This is where one confusing question pops up – “why did Baudhayana need a far more accurate approximation in case of √2 compared to π?” Well, there is no one who can give us that answer.
Bottom line however is that it was Baudhayana who gave us the Pythagorean Theorem, the value of π and the square root of 2. The Greeks and other western mathematicians simply stole those discoveries, who, through the annals of history, became known as the discoverers of those concepts while Baudhayana remained discredited for his discoveries that laid down the foundations of geometry and algebra. |
A spacetime diagram might help elaborate on the comment and answer you have already received.
I am going to use a spacetime diagram on rotated graph paper so that we can visualize the time and space intervals.
Let each light-clock diamond represent "0.1 sec".
Using Minkowski-right triangle $OPQ$, Alice has velocity $v_{Alice}=PQ/OP=(6/10)c=(3/5)c$ with respect to B. The time-dilation factor $\gamma=\frac{1}{\sqrt{1-v^2}}=5/4$. [In terms of rapidity $\theta$ (the Minkowski analogue of angle, where $v_{Alice}=\tanh\theta$, we have $\gamma=\cosh\theta=\cosh({\rm arctanh}(v))$.]
For part a)...
Alice arrives at event E. According to Bob, event B on his worldline is simultaneous with E. (BE is parallel to the spacelike diagonal of Bob's light-clock diamonds.) So, when Bob says "Alice is $9\times10^{7}\rm\ m$ away", we have $BE=9\times10^{7}\rm\ m=9\times10^{7}\rm\ m\left(\frac{c}{3\times10^{8}\rm{\ m/s}}\right)=0.3\rm{\ light-sec}$,and thus $OB=BE/v_{Alice}=(0.3)/(3/5)=0.5$. Then, since time-dilation in triangle OBE with hypotenuse OE implies $\gamma=\cosh\theta=\frac{ADJ}{HYP}=\frac{OB}{OE}$,we have $OE=OB/\gamma=(0.5)/(5/4)=0.4$, as you said.
For part (b)...
I will rephrase "When Alice’s clock reads 0.4s, what does Bob’s clock read?" as "When Alice’s clock reads 0.4s, what does Alice say Bob’s clock reads [at the event E' on Bob's worldline that Alice says is simultaneous with event E on her worldline]?"
According to Alice, event E on her worldline is simultaneous with E' on Bob's worldline.
(EE' is parallel to the spacelike diagonal of Alice's light-clock diamonds.)
Since time-dilation in triangle OEE' with hypotenuse OE' implies $\gamma=\cosh\theta=\frac{ADJ}{HYP}=\frac{OE}{OE'}$,we have $OE'=OE/\gamma=(0.4)/(5/4)=0.32$, as you said.
[Note that $|EE'|\neq |BE|$.]
So, as others have implied, Alice and Bob disagree about which event on Bob's worldline is simultaneous with event E on Alice's worldline.
The spacetime diagram [on rotated graph paper] and its geometric/trigonometric interpretation hopefully makes this clearer [as compared to merely using a formula... without recognizing these interpretations]. |
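If it helps to double-check the arithmetic, here is a tiny script (a sketch only; the variable names mirror the labelled events above):

```python
import math

v = 3 / 5                          # Alice's speed relative to Bob, in units of c
gamma = 1 / math.sqrt(1 - v**2)    # time-dilation factor, 5/4

BE = 0.3                           # Bob's stated distance, in light-seconds
OB = BE / v                        # Bob's elapsed time at B: 0.5 s
OE = OB / gamma                    # Alice's clock at E (part a): 0.4 s
OE_prime = OE / gamma              # Bob's clock at E', per Alice (part b): 0.32 s

print(gamma, OB, OE, OE_prime)     # 1.25 0.5 0.4 0.32 (up to rounding)
```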
Your score is simply the sum of difficulties of your solved problems. Solving the same problem twice does not give any extra points. Note that Kattis' difficulty estimates vary over time, and that this can cause your score to go up or down without you doing anything.
Scores are only updated every few minutes – your score and rank will not increase instantaneously after you have solved a problem, you have to wait a short while.
If you have set your account to be anonymous, you will not be shown in ranklists, and your score will not contribute to the combined score of your country or university. Your user profile will show a tentative rank which is the rank you would get if you turned off anonymous mode (assuming no anonymous users with a higher score than you do the same).
The combined score for a group of people (e.g., all users from a given country or university) is computed as a weighted average of the scores of the individual users, with geometrically decreasing weights (higher weights given to the larger scores). Suppose the group contains $n$ people, and that their scores, ordered in non-increasing order, are $s_0 \ge s_1 \ge \ldots \ge s_{n-1}$ Then the combined score for this group of people is calculated as \[ S = \frac{1}{f} \sum_{i=0}^{n-1} \left(1-\frac{1}{f}\right)^i \cdot s_i, \] where the parameter $f$ gives a trade-off between the contribution from having a few high scores and the contribution from having many users. In Kattis, the value of this parameter is chosen to be $f = 5$.
For example, if the group consists of a single user, the score for the group is 20% of the score of that user. If the group consists of a very large number of users, about 90% of the score is contributed by the 10 highest scores.
Adding a new user with a non-zero score to a group always increases the combined score of the group.
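A minimal sketch of that weighted average (Python; the function name and the default $f=5$ are just for illustration):

```python
def combined_score(scores, f=5):
    """Geometrically weighted average of user scores, largest scores first."""
    ordered = sorted(scores, reverse=True)
    return sum((1 - 1 / f) ** i * s for i, s in enumerate(ordered)) / f

print(combined_score([100.0]))         # single user: 20.0, i.e. 20% of 100
print(combined_score([100.0, 50.0]))   # adding a second user raises it to 28.0
```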
Kattis has problems of varying difficulty. She estimates the difficulty for different problems by using a variant of the ELO rating system. Broadly speaking, problems which are solved by many people using few submissions get low difficulty scores, and problems which are often attempted but rarely solved get high difficulty scores. Problems with very few submissions tend to get medium difficulty scores, since Kattis does not have enough data about their difficulty.
The difficulty estimation process also assigns an ELO-style rating to you as a user. This rating increases when you solve problems, like your regular score, but is also affected by your submission accuracy. We use your rating to choose which problems to suggest for you to solve. If your rating is higher, the problems we suggest to you in each category (trivial, easy, medium, hard) will have higher difficulty values. |
I have a set of discrete points (at most a single $y$ value for a given $x$) and I need to find two parallel lines which contain all of these points and minimize the distance between them. Note that the lines do not have to be parallel with the $x$ axis as in the picture, they can have arbitrary angle. Is there a well known way of solving this?
Prediction Interval Bands in Simple Linear Regression.
Following from my Comment, here is a plot of $n = 20$ pairs $(x, Y)$ with a regression line of $Y$ on $x,$ showing 95% 'prediction interval bands'. For a new $x_p$ (not used to make the regression line), the corresponding predicted $Y$-value is between the two curves (nearly linear here) just above $x_p$ (with 95% confidence).
Depending on the data, it is possible for a
few points to lie outside the bands. If this is undesirable, one can use a higher level of confidence for the prediction interval (perhaps 99% instead of 95%).
If you will look at the plot
very closely, you will see that the 'bands' are curves. They are a little closer to each other at $\bar X = 10.5$ than anywhere else.
This is a standard procedure. Formulas for the prediction interval are given in almost any basic statistics text containing a treatment of simple linear regression.
Notes: (1) The data were simulated according to the model $Y_i = 10 + 2x_i + e_i,$ where$e_i \stackrel{indep}{\sim} \mathsf{Norm}(\mu=0,\sigma=2).$ (2) The plot is from Minitab 17 software.
Choose a pair of parallel lines as a second degree degenerate conic
$$ ( y-cx- a) (y-cx-b) =0 $$
in which three constants $(a,b,c)$ can be found out by least square methods... like fitting data to a parabola $$ y -(ax^2+bx+c)=0 $$ |
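A rough numerical sketch of that least-squares idea (Python/SciPy, with made-up data; note that fitting the product residual is only a heuristic and is not guaranteed to contain every point or to minimize the true band width):

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(1)
x = rng.uniform(0, 10, 50)
y = 0.7 * x + rng.uniform(-1.0, 1.0, 50)        # points inside a slanted band

def residual(params, x, y):
    a, b, c = params
    return (y - c * x - a) * (y - c * x - b)    # degenerate conic (y-cx-a)(y-cx-b)

fit = least_squares(residual, x0=[-1.0, 1.0, 0.5], args=(x, y))
a, b, c = fit.x
print(f"lines: y = {c:.3f}x + {a:.3f}  and  y = {c:.3f}x + {b:.3f}")
```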
Let \((a,b),(c,d)\) be ordered pairs of natural numbers. We consider them equivalent, if there exist a natural number \(h\) such that one ordered pair can be obtained from the other ordered pair by adding \(h\) to both natural numbers of that pair, formally
\[(a,b)\sim (c,d)\quad\Longleftrightarrow\quad\begin{cases}(a+h,b+h)=(c,d)& or\\(a,b)=(c+h,d+h).\end{cases}\]
The relation “\(\sim\)” defined above is an equivalence relation, i.e. for a given ordered pair \((a,b)\in\mathbb N\times\mathbb N\), we can consider a whole set of ordered pairs \((c,d)\in\mathbb N\times\mathbb N\) equivalent to \((a,b)\):
\[x:=\{(c,d)\in\mathbb N\times\mathbb N:\quad( c, d )\sim ( a, b )\}.\]
The set \(x\) is called an
integer 1. We say that the ordered pair \((a,b)\in\mathbb N\times\mathbb N\) is representing the integer \(x\). The set of all integers is denoted by \(\mathbb Z\).
In order to make a difference in notation, we write \([a,b]\), instead of \((a,b)\), if we mean the integer represented by the ordered pair \((a,b)\) rather than the concrete ordered pair \((a,b)\). A more common (e.g. taught in the elementary school) notation is the notation of integers retrieved from the difference \(a-b\), however, the concept of building a difference is not introduced yet (in fact, we have not introduced the concept of negative integers yet
2). For the time being, we give a comparison of the different notations to make more clear:
Common integer notation | Alternative integer notations | Set of ordered pairs of natural numbers each notation stands for
\(\vdots\) | \(\vdots\) | \(\vdots\)
\(-3\) | e.g. \([0,3],[1,4],\ldots\) | \(\{(0,3),(1,4),(2,5),\ldots,(h,3+h),~h\in\mathbb N\}\)
\(-2\) | e.g. \([0,2],[1,3],\ldots\) | \(\{(0,2),(1,3),(2,4),\ldots,(h,2+h),~h\in\mathbb N\}\)
\(-1\) | e.g. \([0,1],[1,2],\ldots\) | \(\{(0,1),(1,2),(2,3),\ldots,(h,1+h),~h\in\mathbb N\}\)
\(0\) | e.g. \([0,0],[1,1],\ldots\) | \(\{(0,0),(1,1),(2,2),\ldots,(h,h),~h\in\mathbb N\}\)
\(1\) | e.g. \([1,0],[2,1],\ldots\) | \(\{(1,0),(2,1),(3,2),\ldots,(1+h,h),~h\in\mathbb N\}\)
\(2\) | e.g. \([2,0],[3,1],\ldots\) | \(\{(2,0),(3,1),(4,2),\ldots,(2+h,h),~h\in\mathbb N\}\)
\(3\) | e.g. \([3,0],[4,1],\ldots\) | \(\{(3,0),(4,1),(5,2),\ldots,(3+h,h),~h\in\mathbb N\}\)
\(\vdots\) | \(\vdots\) | \(\vdots\)
1 Please note that integers are in fact sets.
2 The concept of negative integers will be introduced in the discussion of order relation for integers.
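To make the equivalence classes a bit more tangible, here is a small illustrative sketch (Python; the class name and the normalization to a canonical representative are my own choices, not part of the construction above):

```python
class Int:
    """An integer, represented as an equivalence class of pairs (a, b) in N x N."""

    def __init__(self, a, b):
        # Canonical representative: subtract h = min(a, b) from both components,
        # so one of the two entries is always 0.
        h = min(a, b)
        self.a, self.b = a - h, b - h

    def __eq__(self, other):
        # (a, b) ~ (c, d) exactly when a + d == c + b (no subtraction needed).
        return self.a + other.b == other.a + self.b

    def __repr__(self):
        return f"[{self.a},{self.b}]"

print(Int(1, 4) == Int(0, 3))   # True: both pairs represent the integer -3
print(Int(5, 2))                # [3,0], i.e. the integer 3
```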
Tverberg plus minus
Connections for Women Workshop: Geometric and Topological Combinatorics, August 31, 2017 - September 01, 2017
Speaker(s): Imre Barany (Alfréd Rényi Institute of Mathematics)
Location: MSRI: Simons Auditorium
Tags/Keywords
Tverberg's theorem
sign conditions
We prove a Tverberg type theorem: Given a set $A \subset \mathbb{R}^d$ in general position with $|A|=(r-1)(d+1)+1$ and $k\in \{0,1,\ldots,r-1\}$, there is a partition of $A$ into $r$ sets $A_1,\ldots,A_r$ (where $|A_p|\le d+1$ for each $p$) with the following property. The unique $z \in \bigcap_{p=1}^r \operatorname{aff} A_p$ can be written as an affine combination of the elements in $A_p$: $z=\sum_{x\in A_p}\alpha(x)x$ for every $p$, and exactly $k$ of the coefficients $\alpha(x)$ are negative. The case $k=0$ is Tverberg's classical theorem. This is joint work with Pablo Soberon.
For anyone viewing this question who is not familiar with the notion of Induced EMF in the coil of wire let me briefly explain what i understood by it.
EMF is induced inside a coil of wire whenever you change the environment of the coil-magnetic field system. That means that if you change the magnetic field that it sits in over time, or move or otherwise distort the coil inside a static (or changing) magnetic field, you will induce an electromotive force inside the coil of wire and make current run through it. So the EMF is defined as $e_{ind}=-\frac {d\phi} {dt}$ That sums it up. Now the bug that's biting me:
The image shows a system of a very long (considered infinite) straight wire with current through it and a half-circle wire shape. The current through the straight part is given as $i(t)=I_a\sin {\omega t}$ and the problem asks for an EMF induced inside the half-circle wire. For purposes of simplification, I am to ignore the self-induction caused by the current that is induced in that same system.
Now I will guide you through the process I took to achieve my answer:
From the given current direction and the shape of the magnetic field lines from Ampere's law, I got that $B_{wire}=\frac{\mu_0i(t)}{2\pi r}$ and that the direction is into the drawing plane. By the definition of the induced EMF $e_{ind}=-\frac {d\phi} {dt}$, I need the flux through the surface.
From here I have a disagreement.
It was stated in the notes I got from my friend that the elementary surface of the system is $dS=2a \cos \theta dr$ where $r=a +a \sin \theta$ and $dr=a \cos \theta d \theta$, but my own reasoning would have first led me to say that $dS=2a \cos \theta a d \theta$. Could you help me understand what it is that I got wrong and what I missed in my reasoning.
Also it would be really helpful to comment on any mistakes made while posting this question, as it's my first post and I don't yet know what is or isn't allowed.
Thanks :) |
Lets say we have following case:
Probability of being a drunk driver = $0.10$
Probability of a drinking test coming positive = $0.30$
Probability of a drinking test coming negative, given the subject was not drunk = $0.90$
Then by Bayes theorem,
$$P(\text{Not Drunk}\mid\text{Negative Test}) = \frac{P(\text{Negative Test}\mid\text{Not Drunk}) \times P(\text{Not Drunk})}{P(\text{Negative Test})}.$$ Now, \begin{align} P(\text{Negative Test}\mid\text{Not Drunk})& = 0.90\\ P(\text{Not Drunk})& = 0.90\\ P(\text{Negative Test})& = 0.70 = 1 - P(\text{Positive Test}) \end{align} Thus, $$P(\text{Not Drunk}\mid\text{Negative Test}) = \frac{0.90 \times 0.90}{0.70} \approx 1.16.$$
As far as I understand, the probabilities shouldn't ever become more than 1 and above result is counterintuitive to me. Is this correct, if not where am I going wrong? |
I am trying to find the frequency of the artifact on the MRI image of the knee below both manually and with ImageJ:
As you can see the artifact results in a bar pattern extending horizontally along the image - i.e. a spike artifact.
After transforming to Fourier space, there are a couple of dots along the x-axis that seem to stand out in their intensity (yellow circles), and are therefore potential culprits for the artifact:
at frequencies $5.02\text{ pixels/cycle}$ and $2.4\text{ pixels/cycle},$ but the frequency that I calculate visually (and painfully) on the $256 \times 256\text{ pixel}$ image corresponds to $\approx 53 \text{ dark vertical bars},$ which would amount to
$$\frac{256}{53}=4.8\text{ pixel/cycle}$$
This is close enough to the higher frequency dot in Fourier space ($5.02 \text{ pixels/cycle})$. Is this the explanation for the artifact?
Is there a contribution from the second dot that should be considered?
$$\small\begin{align}\text{Freq}&=5.019\text{ pix/cycle}\\ \text{Direction}&=181.12^\circ\\ \text{Phase }&= \arctan(68.263/-87.982)=-0.6598\text{ rad}\\ \text{Magnitude}&=\sqrt{(-87.982)^2 +(68.263)^2}=111.36 \end{align}$$
$$\small\begin{align} \text{Freq}&=2.438 \text{ pix/cycle}\\ \text{Direction}&=181.091^\circ\\ \text{Phase }&= \arctan(10.977/-5.43)=-1.11\text{ rad}\\ \text{Magnitude}&=\sqrt{(-5.43)^2+(10.977)^2}=12.25 \end{align}$$
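The relationship between the spike position and the bar spacing can also be cross-checked numerically; the sketch below (Python/NumPy, using a synthetic stripe pattern rather than the actual MRI data) shows how a peak at bin $k$ of a 256-point FFT corresponds to a period of $N/k$ pixels per cycle:

```python
import numpy as np

N = 256
x = np.arange(N)
row = np.cos(2 * np.pi * 51 * x / N)      # synthetic stripes: 51 cycles across 256 px

spectrum = np.abs(np.fft.fft(row))
k = np.argmax(spectrum[1 : N // 2]) + 1   # dominant nonzero frequency bin
print(k, N / k)                           # 51 bars -> about 5.02 pixels/cycle
```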
Northern Illinois Center for Accelerator and Detector Development Research Projects Detector Development Group Detector Development Group
Our detector group works toward the development of the next linear collider. The group focuses on the design and prototyping of two collider components: a hadron calorimeter and a tail-catcher/muon tracker. The group also develops software prototypes for the collider.
The next linear collider will need to achieve unprecedented resolutions in jet (30%/$\sqrt{E}$ or better) and missing energy measurements. Particle-flow algorithms are a promising way to accomplish these superior resolutions. A calorimeter designed for these algorithms must be finely segmented in order to reconstruct the showers constituting a jet. Our research in these areas will allow us to optimize the information generated by collider events.
Research Areas Overview
The NIU team has been investigating a finely segmented scintillator-based hadron calorimeter for some time now. This option combines proven detection techniques with new photodetector devices. The absence of fluids/gases and very high voltages inside the detector increases longevity and operational stability.
Challenges
The main challenge for a scintillator-based hadron calorimeter is the architecture and cost of converting light from a large number of channels to an electrical signal. Our studies demonstrate that small cells (6-10) with embedded Silicon Photomultipliers (SiPMs)/Metal Resistive Semiconductor (MRS) photodetectors offer the most promise in tackling this issue. The use of these photodetectors opens the door to integration of the full readout chain to an extent that makes a multimillion channel scintillator calorimeter plausible. Also, in large quantities the devices are expected to cost a few dollars per channel, making the construction of a full-scale detector equipped with these photodiodes financially feasible.
The large number of readout channels can still pose a significant challenge in the form of complexity and cost of signal processing and data acquisition. Reducing the dynamic range of the readout is a potential solution. Monte Carlo studies have shown that this is a promising possibility. Scintillator cells with an area in the 6-10 range are good candidates for one-bit (digital) or two-bit (semi-digital) readout, where the lowest threshold is set to detect the passage of a minimum ionizing particle. Performance of PFAs on scintillator hadron calorimeter Monte Carlo simulations with a minimum of amplitude information in the form of thresholds also looks very competitive. The fabrication of cheap and compact electronics with just a few thresholds (three in the case of a two-bit readout) that will deliver the required performance is a realistic possibility for a scintillator hadron calorimeter.
Collaboration
We have coordinated our efforts with European groups pursuing similar interests. This interaction takes place under the umbrella of the CALICE collaboration, which bands together universities and labs from all over the world that share an interest in developing calorimeters for the linear collider. We are the only group in the United States actively investigating the promising option of a scintillator-based hadron calorimeter.
Muon ID and Reconstruction
Many key physics channels expected to appear at the linear collider have muons in their final states. Given the smallness of the expected cross sections, high efficiency in the tracking and identification of the muons will be very important. Since the precise measurement of the muon momentum will be done with the central tracker, a high-granularity muon system that can efficiently match hits in it with those in the tracker and calorimeter will be needed.
Energy Leakage
Hermeticity and resolution constraints require that the calorimeters be placed inside the superconducting coil to avoid serious degradation of calorimeter performance. On the other hand, cost considerations associated with the size of the coil imply that the total calorimetric system will be relatively thin (4.5-5.5 $\lambda$). Additional calorimetric sampling may be required behind the coil to estimate and correct for hadronic leakage.
Shower Validation
Current hadronic shower models differ significantly from each other. This puts conclusions on detector performances drawn from PFAs on rather shaky ground. One of the most important goals of the LC test beam program is the validation of hadronic simulation packages. A TCMT that can provide a reasonably detailed picture of the tail end of showers will be very helpful in this task.
The TCMT prototype will have fine and coarse sections distinguished by the thickness of the steel absorber plates. The fine section will sit directly behind the hadron calorimeter and have the same longitudinal segmentation as the HCAL. It will provide a detailed measurement of the tail end of the hadron showers. This is crucial to the validation of hadronic shower models, since the biggest deviations between models occurs in the tails. The following coarse section will serve as a prototype muon system for any design of a linear collider detector. It will facilitate studies of muon tracking and identification within the particle flow reconstruction framework. Additionally, the TCMT will provide valuable insights into hadronic leakage and punch-through from thin calorimeters and the impact of the coil in correcting for this leakage.
Overview
The detector development group is interested in calorimeter research and development for the proposed ILC. We propose to develop, in simulation and in prototype, designs for a hadron calorimeter (HCal) optimized for jet reconstruction using particle-flow algorithms (also called energy-flow algorithms). Simulation/algorithm development and hardware prototyping are envisioned as the two main components of our efforts. The text below addresses the first component.
High-precision Measurements
An e+e- linear collider is a precision instrument that can elucidate Standard Model (SM) physics near the electroweak energy scale and discover new physics processes in that regime, should they exist. In order to fully realize the potential anticipated from a machine of this type, the collection of standard high-energy physics detector components comprising an experiment must be optimized, sometimes in ways not yet realized at current experiments. One such example is the hadron calorimeter, which will play a key role in measuring jets from decays of vector bosons and other heavy particles, such as the top quark and the Higgs boson(s).
In particular, it will be important to be able to distinguish, in the final state of an e+e- interaction, the presence of a Z or a W boson by its hadronic decay into two jets. This means that the dijet mass must be measured within ~ 3GeV, or, in terms of jet energy resolution, $\sigma (E) \approx 0.3\sqrt{E}$ (E in GeV). Such high precision in jet energy measurement cannot be achieved by any existing calorimeter in the absence of a kinematically overconstrained event topology. Similar precision in measurements of jet and missing momentum will be crucial for discovery and characterization of several other new physics processes, as well as for precision tests of the Standard Model. Such ambitious objectives place strong demands on the performance of the calorimeters working in conjunction with the tracking system at the ILC, and require the development of new algorithms and technology.
Particle-flow Algorithms
The most promising means to achieving such unprecedented jet energy resolutions is through particle-flow algorithms (PFA). A PFA attempts to separately identify in a jet its charged, electromagnetic and neutral hadron components, in order to use the best means to measure each. On average, neutral hadrons carry only ~11 percent of a jet's total energy, which can only be measured with the relatively poor resolution of the HCal. The tracker is used to measure with much better precision the charged components (~64 percent of jet energy), and the electromagnetic calorimeter (ECal) to measure the photons with $\sigma (E) \approx 0.15\sqrt{E}$ (~24 percent of jet energy).
On average, only a small fraction of a jet's energy is carried by particles with momenta greater than 20 GeV. Measurements from the tracker are at least two orders (one order) of magnitude more precise than those from the calorimeter for particles below 20 GeV (100 GeV). A net jet energy resolution of $\sigma (E) \approx 0.3\sqrt{E}$ is thus deemed achievable by using the HCal only to measure the neutral hadrons with $\sigma (E) \approx 0.6\sqrt{E}$. However, this will certainly require extensive and simultaneous optimization of detector design and tuning of algorithm parameters.
Calorimeter Design and Event Simulation
A calorimeter designed for PFAs must be finely segmented both transversely and longitudinally for 3D shower reconstruction, separation of neutral and charged clusters, and association of the charged clusters to corresponding tracks. This requires realistic simulation of parton shower evolution and of the detector's response to the particles passing through it. Accurate simulation relies heavily on analysis of data from beam test of prototype modules. The detector optimization requires the simulation, visualization and analysis packages to be highly flexible, which calls for careful design and implementation of the software itself.
Very large numbers of events will have to be simulated to evaluate competing detector designs in relation to ILC physics goals. Characterization of signatures arising from processes predicted by some extensions of the Standard Model will require simultaneous coverage of broad ranges of undetermined parameters. Parametrized fast simulation programs will thus have to be developed once the algorithms have stabilized. Parametrization of PFAs will require much work and is one of our key objectives. |
Starting from special relativity, here I see
the de Broglie approximation is valid only if $m_0=0$. Derivation:
$E^2=P^2C^2+m_0^2C^4$. Here we put in the Planck-Einstein relation $E=h\nu=h\frac{C}{\lambda}$. Finally,
$\lambda=\frac{h}{\sqrt{P^2+m_0^2C^2}} \hspace{2cm} (1)$.
If $m_0=0$ then $\lambda=\frac{h}{p}$ (de Broglie approximation).
Furthermore, we know the Schrodinger equation was derived by assuming that the de Broglie approximation is true for all particles, even if $m_0 \neq 0$. But if we take special relativity very strictly then this approximation looks incorrect.
In addition, if we try to derive the Schrodinger equation from the exact relation found in '1', we find completely different equation. For checking it out, lets take a wave function-
$\Psi=Ae^{i(\frac{2\pi}{\lambda}x-\omega t)}=Ae^{i(\frac{\sqrt{P^2+m_0^2C^2}}{\hbar}x-\frac{E}{\hbar} t)} \hspace{2cm}$ (putting $\lambda$ from '1', $\frac{h}{2\pi}=\hbar$ and $E=\hbar\omega$).
Then, $\frac{\partial^2 \Psi}{\partial x^2}=-\frac{P^2+m_0^2C^2}{\hbar^2}\Psi=-\frac{E^2}{C^2\hbar^2}\Psi$
$\implies E^2\Psi=-C^2\hbar^2\frac{\partial^2 \Psi}{\partial x^2} \hspace{4cm} (2) $
Again, $\frac{\partial \Psi}{\partial t}=-i\frac{E}{\hbar}\Psi \implies E\Psi=i\hbar\frac{\partial \Psi}{\partial t}$.
Here we see the operator $E=i\hbar\frac{\partial}{\partial t} \implies E^2=-\hbar^2\frac{\partial^2}{\partial t^2}$.
$\implies E^2\Psi=-\hbar^2\frac{\partial^2 \Psi}{\partial t^2} \hspace{4cm} (3)$
Combining (2) and (3) we find the differential equation:
$\frac{\partial^2 \Psi}{\partial x^2}=\frac{1}{C^2}\frac{\partial^2 \Psi}{\partial t^2}$
It is the wave equation of Maxwell's theory, not the well-known Schrodinger equation!
Therefore for the Schrodinger equation to exist, the de Broglie approximation must hold for $m_0 \neq 0.$ I see a clear contradiction here. Then why is the Schrodinger equation correct after all? |
1. Polybrominated diphenyl ethers, 2,2′,4,4′,5,5′-hexachlorobiphenyl (PCB-153), and p,p′-dichlorodiphenyldichloroethylene (p,p′-DDE) concentrations in sera collected in 2009 from Texas children
Environmental Science and Technology, ISSN 0013-936X, 07/2014, Volume 48, Issue 14, pp. 8196 - 8202
Polybrominated diphenyl ethers (PBDEs), polychlorinated biphenyls (PCBs) and p,p'-dichlorodiphenyldichloroethylene (p,p'-DDE) have been measured in surplus...
UNITED-STATES | POPULATION | ENVIRONMENTAL SCIENCES | DUST | ENGINEERING, ENVIRONMENTAL | BEHAVIOR | PBDE-99 | MICE | DEVELOPMENTAL EXPOSURE | POLYCHLORINATED-BIPHENYLS | Dichlorodiphenyl Dichloroethylene - blood | Confidence Intervals | Environmental Monitoring | Limit of Detection | Humans | Child, Preschool | Polychlorinated Biphenyls - blood | Infant | Male | Texas | Adolescent | Female | Child | Infant, Newborn | Polychlorinated biphenyls--PCB | Chemical compounds | Human exposure | Children & youth | Index Medicus
Journal Article
2014, ISBN 1447316223, xii, 339
Book
Journal of High Energy Physics, ISSN 1126-6708, 3/2018, Volume 2018, Issue 3, pp. 1 - 23
The ratios of the branching fractions of the decays $\Lambda_c^+ \to p\pi^-\pi^+$, $\Lambda_c^+ \to pK^-K^+$, and $\Lambda_c^+ \to p\pi^-K^+$ with respect to the Cabibbo-favoured $\Lambda_c^+ \to$...
Spectroscopy | Branching fraction | Charm physics | Hadron-Hadron scattering (experiments) | Flavor physics | Quantum Physics | Quantum Field Theories, String Theory | Classical and Quantum Gravitation, Relativity Theory | Physics | Elementary Particles, Quantum Field Theory | Luminosity | Uncertainty | Large Hadron Collider | Particle collisions | PHYSICS OF ELEMENTARY PARTICLES AND FIELDS
Journal Article
4. Measurement of multi-particle azimuthal correlations in pp, p + Pb and low-multiplicity Pb + Pb collisions with the ATLAS detector
The European Physical Journal C, ISSN 1434-6044, 6/2017, Volume 77, Issue 6, pp. 1 - 40
Multi-particle cumulants and corresponding Fourier harmonics are measured for azimuthal angle distributions of charged particles in $pp$ collisions at...
Nuclear Physics, Heavy Ions, Hadrons | Measurement Science and Instrumentation | Nuclear Energy | Quantum Field Theories, String Theory | Physics | Elementary Particles, Quantum Field Theory | Astronomy, Astrophysics and Cosmology | Measurement | Comparative analysis | Detectors | Harmonics | Charged particles | Particle production | Correlation analysis | Collisions | PHYSICS OF ELEMENTARY PARTICLES AND FIELDS | Regular - Experimental Physics | Fysik | Subatomär fysik | Physical Sciences | Subatomic Physics | Naturvetenskap | Natural Sciences
Journal Article
Physics Letters B, ISSN 0370-2693, 10/2013, Volume 726, Issue 1-3, pp. 164 - 177
Angular correlations between unidentified charged trigger particles and various species of charged associated particles (unidentified particles, pions, kaons,...
Journal Article
Journal of High Energy Physics, ISSN 1126-6708, 05/2017, Volume 2017, Issue 5, pp. 1 - 43
An amplitude analysis of the decay $\Lambda_b^0\to D^0 p \pi^-$ is performed in the part of the phase space containing resonances in the $D^0 p$ channel. The...
Spectroscopy | B physics | QCD | Charm physics | Hadron-Hadron scattering (experiments) | Phenomenology | High Energy Physics | info:eu-repo/classification/arxiv/High Energy Physics::Phenomenology | Experiment | Nuclear and particle physics. Atomic energy. Radioactivity | High Energy Physics - Experiment | info:eu-repo/classification/arxiv/High Energy Physics::Experiment
Journal Article
7. Measurement with the ATLAS detector of multi-particle azimuthal correlations in p+Pb collisions at √sNN=5.02 TeV
Physics Letters B, ISSN 0370-2693, 08/2013, Volume 725, Issue 1-3, pp. 60 - 78
In order to study further the long-range correlations (“ridge”) observed recently in collisions at , the second-order azimuthal anisotropy parameter of charged...
PHYSICS OF ELEMENTARY PARTICLES AND FIELDS
Journal Article
Physical Review Letters, ISSN 0031-9007, 08/2015, Volume 115, Issue 7
Journal Article
9. Measurement of the top quark mass with lepton+jets final states using pp collisions at $\sqrt{s}=13\,\text{TeV}$
The European Physical Journal C, ISSN 1434-6044, 11/2018, Volume 78, Issue 11, pp. 1 - 27
The mass of the top quark is measured using a sample of $\mathrm{t\bar{t}}$ events collected by the CMS detector using proton-proton...
Nuclear Physics, Heavy Ions, Hadrons | Measurement Science and Instrumentation | Nuclear Energy | Quantum Field Theories, String Theory | Physics | Elementary Particles, Quantum Field Theory | Astronomy, Astrophysics and Cosmology
Journal Article |
Acoustic Topology Optimization with Thermoviscous Losses
Today, guest blogger René Christensen of GN Hearing discusses including thermoviscous losses in the topology optimization of microacoustic devices.
Topology optimization helps engineers design applications in an optimized manner with respect to certain
a priori objectives. Mainly used in structural mechanics, topology optimization is also used for thermal, electromagnetics, and acoustics applications. One physics that was missing from this list until last year is microacoustics. This blog post describes a new method for including thermoviscous losses for microacoustics topology optimization.
Standard Acoustic Topology Optimization
A previous blog post on acoustic topology optimization outlined the introductory theory and gave a couple of examples. The description of the acoustics was the standard Helmholtz wave equation. With this formulation, we can perform topology optimization for many different applications, such as loudspeaker cabinets, waveguides, room interiors, reflector arrangements, and similar large-scale geometries.
The governing equation is the standard wave equation with material parameters given in terms of the density \rho and the bulk modulus K. For topology optimization, the density and the bulk modulus are interpolated via a variable, \epsilon. This interpolation variable ideally takes binary values: 0 represents air and 1 represents a solid. During the optimization procedure, however, its value follows an interpolation scheme, such as a solid isotropic material with penalization model (SIMP), as shown in Figure 1.
Figure 1: The density and bulk modulus interpolation for standard acoustic topology optimization. The units have been omitted to have both values in the same plot.
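To make the interpolation concrete, here is a minimal Python sketch of a SIMP-style interpolation between air and solid property values. The specific densities, bulk moduli, and penalization exponent below are illustrative assumptions only; they are not taken from the post or from COMSOL.

```python
import numpy as np

# Minimal SIMP-style interpolation between air (epsilon = 0) and solid (epsilon = 1).
# The property values and the penalization exponent p are illustrative assumptions.
rho_air, rho_solid = 1.2, 2700.0      # density in kg/m^3 (air, aluminium-like solid)
K_air, K_solid = 1.42e5, 7.0e10       # bulk modulus in Pa

def simp(eps, v_air, v_solid, p=3.0):
    """Interpolate a material property for an interpolation variable eps in [0, 1]."""
    return v_air + eps**p * (v_solid - v_air)

for eps in np.linspace(0.0, 1.0, 5):
    print(f"eps={eps:.2f}  rho={simp(eps, rho_air, rho_solid):10.1f} kg/m^3  "
          f"K={simp(eps, K_air, K_solid):.3e} Pa")
```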
Using this approach will work for applications where the so-called thermoviscous losses (close to walls in the acoustic boundary layers) are of little importance. The optimization domain can be coupled to narrow regions described by, for example, a homogenized model (this is the Narrow Region Acoustics feature in the Pressure Acoustics, Frequency Domain interface). However, if the narrow regions where the thermoviscous losses occur change shape themselves, this procedure is no longer valid. An example is when the cross section of a waveguide changes shape.
Thermoviscous Acoustics (Microacoustics)
For microacoustic applications, such as hearing aids, mobile phones, and certain metamaterial geometries, the acoustic formulation typically needs to include the so-called thermoviscous losses explicitly. This is because the main losses occur in the acoustic boundary layer near walls. Figure 2 below illustrates these effects.
Figure 2: The volume field is the acoustic pressure, the surface field is the temperature variation, and the arrows indicate the velocity.
An acoustic wave travels from the bottom to the top of a tube with a circular cross section. The pressure is shown in a ¾-revolution plot.
The arrows indicate the particle velocity at this particular frequency. Near the boundary, the velocity is low and tends to zero on the boundary, whereas in the bulk, it takes on the velocity expected from standard acoustics via Euler’s equation. At the boundary, the velocity is zero because of viscosity, since the air “sticks” to the boundary. Adjacent particles are slowed down, which leads to an overall loss in energy, or rather a conversion from acoustic to thermal energy (viscous dissipation due to shear). In the bulk, however, the molecules move freely.
Governing Equations of Thermoviscous Acoustics
Modeling microacoustics in detail, including the losses associated with the acoustic boundary layers, requires solving the set of linearized Navier-Stokes equations with quiescent conditions. These equations are implemented in the
Thermoviscous Acoustics physics interfaces available in the Acoustics Module add-on to the COMSOL Multiphysics® software. However, this formulation is not suited for topology optimization where certain assumptions can be used. A formulation based on a Helmholtz decomposition is presented in Ref. 1. The formulation is valid in many microacoustic applications and allows decoupling of the thermal, viscous, and compressible (pressure) waves. An approximate, yet accurate, expression (Ref. 1) links the velocity and the pressure gradient as
where the viscous field \Psi_{v} is a scalar nondimensional field that describes the variation between bulk conditions and boundary conditions.
In the figure above, the surface color plot shows the acoustic temperature variation. The variation on the boundary is zero due to the high thermal conductivity in the solid wall, whereas in the bulk, the temperature variation can be calculated via the isentropic energy equation. Again, the relationship between temperature variation and acoustic pressure can be written in a general form (Ref. 1) as
where the thermal field \Psi_{h} is a scalar, nondimensional field that describes the variation between bulk conditions and boundary conditions.
As will be shown later, these viscous and thermal fields are essential for setting up the topology optimization scheme.
Topology Optimization for Thermoviscous Acoustics Applications
For thermoviscous acoustics, there is no established interpolation scheme, as opposed to standard acoustics topology optimization. Since there is no one-equation system that accurately describes the thermoviscous physics (typically, it requires three governing equations), there are no obvious variables to interpolate. However, I will describe a novel procedure in this section.
For simplicity, we look at only wave propagation in a waveguide of constant cross section. This is equivalent to the so-called Low Reduced Frequency model, which may be known to those working with microacoustics. The viscous field can be calculated (Ref. 1) via Equation 1 as
(1)
where \Delta_{cd} is the Laplacian in the cross-sectional direction only. For certain simple geometries, the fields can be calculated analytically (as done in the
Narrow Region Acoustics feature in the Pressure Acoustics, Frequency Domain interface). However, when used for topology optimization, they must be calculated numerically for each step in the optimization procedure.
In standard acoustics topology optimization, an interpolation variable varies between 0 and 1, where 0 represents air and 1 represents a solid. To have a similar interpolation scheme for the thermoviscoacoustic topology optimization, I came up with a heuristic approach, where the thermal and viscous fields are used in the interpolation strategy. The two typical boundary conditions for the viscous field (Ref. 1) are
and
These boundary conditions give us insight into how to perform the optimization procedure, since an air-solid interface could be represented by the former boundary condition and an air-air interface by the latter. We write the governing equation in a more general manner:
We already know that for air domains, (a_v, f_v) = (1,1), since that gives us the original equation (1). If we instead set a_v to a large value so that the gradient term becomes insignificant, and set f_v to zero, we get
This corresponds exactly to the boundary condition for no-slip boundaries, just as at a solid-air interface, but obtained via the governing equation. We need this property, since we have no way of applying explicit boundary conditions during the optimization. So, for solids, (a_v, f_v) should have values of ("large", 0). Thus, we have established our interpolation extremes:
and
I carried out a comparison between the explicit boundary conditions and the interpolation extremes, with the test geometry shown in Figure 3. On the left side, boundary conditions are used, whereas in the adjacent domains on the right, the suggested values of a_v and f_v are input.
Figure 3: On the left, standard boundary conditions are applied. On the right, black domains indicate a modified field equation that mimics a solid boundary. White domains are air.
The field in all domains is now calculated for a frequency with a boundary layer thick enough to visually take up some of the domain. It can be seen that the field is symmetric, which means that the extreme field values can describe either air or a solid. In a sense, that is comparable to using the actual corresponding boundary conditions.
Figure 4: The resulting field with contours for the setup in Figure 3.
The actual interpolation between the extremes is done via SIMP or RAMP schemes (Ref. 2), for example, as with the standard acoustic topology optimization. The viscous field, as well as the thermal field, can be linked to the acoustic pressure variable via equations. With this, the world's first acoustic topology optimization scheme that incorporates accurate thermoviscous losses has come to fruition.
Optimizing an Acoustic Loss Response
Here, we give an example that shows how the optimization method can be used for a practical case. A tube with a hexagonally shaped cross section has a certain acoustic loss due to viscosity effects. Each side length in the hexagon is approximately 1.1 mm, which gives an area equivalent to a circular area with a radius of 1 mm. Between 100 and 1000 Hz, this acoustic loss increases by a factor of approximately 2.6, as shown in Figure 7. Now, we seek to find an optimal topology so that we obtain a flatter acoustic loss response in this frequency range, with no regard to the actual loss value. The resulting geometry looks like this:
Figure 5: The topology for a maximally flat acoustic loss response and resulting viscous field at 1000 Hz.
A simpler geometry that resembles the optimized topology was created, where explicit boundary conditions can be applied.
Figure 6: A simplified representation of the optimized topology, with the viscous field at 1000 Hz.
The normalized acoustic loss for the initial hexagonal geometry and the topology-optimized geometry are compared in Figure 7. For each tube, the loss is normalized to the value at 100 Hz.
Figure 7: The acoustic loss normalized to the value at 100 Hz for the initial cross section (dashed) and the topology-optimized geometry (solid), respectively.
For the optimized topology, the acoustic loss at 1000 Hz is only 1.5 times higher than at 100 Hz, compared to the 2.6 times for the initial geometry. The overall loss is larger for the optimized geometry, but as mentioned before, we do not consider this in the example.
This novel topology optimization strategy can be expanded to a more general 1D method, where pressure can be used directly in the objective function. A topology optimization scheme for general 3D geometries has also been established, but its implementation is still ongoing. It would be very advantageous for those of us working with microacoustics to focus on improving topology optimization, in both universities and industry. I hope to see many advances in this area in the future.
References
1. W.R. Kampinga, Y.H. Wijnant, A. de Boer, "An Efficient Finite Element Model for Viscothermal Acoustics," Acta Acustica united with Acustica, vol. 97, pp. 618–631, 2011.
2. M.P. Bendsoe, O. Sigmund, Topology Optimization: Theory, Methods, and Applications, Springer, 2003.
About the Guest Author
René Christensen has been working in the field of vibroacoustics for more than a decade, both as a consultant (iCapture ApS) and as an engineer in the hearing aid industry (Oticon A/S, GN Hearing A/S). He has a special interest in the modeling of viscothermal effects in microacoustics, which was also the topic of his PhD. René joined the hardware platform R&D acoustics team at GN Hearing as a senior acoustic engineer in 2015. In this role, he works with the design and optimization of hearing aids.
|
I'm developing an application to calculate the optimal build order for a strategy game. While doing so, I stumbled over an interesting problem which might be applicable to other cases as well. I will give my specific problem below as an example, but I want to ask for a general answer. Therefore, I will formulate the general problem:
We have a set of items with elements $I$. Each item specifies a set of $p$ properties which are positive real numbers or zero. Say the properties are numbered and $I(x)$ is the value of the $x$th property. There is a function $P$ which specifies the price of an item $P(I)$. We have a list of items $B=\{I_1,I_2,...,I_k\}$ which contains all items we have bought. The total price of $B$ is $P(B)=\sum\limits_{i=1}^k P(I_i)$ and the value of $B$ is $V(B)=\prod\limits_{j=1}^p(c_j+\sum\limits_{i=1}^k I_i(j))$ where $c_j$ is a positive real number.
Question 1: For a given set of items and a given maximum price, find a list $B$ of items, so that $P(B)<\hat{P}$ and $V(B)$ is maximal.
The real problem is about an order to get the items in. The difficulty is that you can't exchange items as you wish. Only certain items of a lower price can be combined into more expensive items. In other words: You can exchange cheap items you already have for expensive items. Say we have a relation $C$ which matches the cheap items with the expensive ones you can exchange them for. This might be something like $C=\{(I_1,I_2,I_3),(I_3,I_4)\}$ for "item one and two can be exchanged for item three and item three can be exchanged for item four". It is important to say that $P(I_3)\neq P(I_1)+P(I_2)$. The option to combine items only allows you to remove items from your old list when you add the more expensive one to your new list. It does not change the price of your items (clarification in the example).
Question 2: For a given list $B_0$ and a given maximum price $\hat{P}>P(B_0)$, find a list $B_1$ of items so that $B_1$ contains all items of $B_0$ or their combinations, so that $P(B_1)<\hat{P}$ and $V(B_1)$ is maximal. Question 3: For a given series of maximum prices $(\hat{P}_n)_{n\in\mathbb{N}}$ with $\hat{P}_n<\hat{P}_{n+1}$, find a series of lists $(B_n)_{n\in\mathbb{N}}$ with $P(B_n)<\hat{P}_n$ and the combination criteria from the question above, so that $\sum\limits_{n=0}^N V(B_n)$ is maximal for a given $N$. Example: The example is taken from the popular game League of Legends. We have the following items:
Item      | AS  | AD | CS  | CB  | Price
Dagger    | .12 | 0  | 0   | 0   | 300
Gloves    | 0   | 0  | .10 | 0   | 400
Bow       | .25 | 15 | 0   | 0   | 1000
Zeal      | .15 | 0  | .20 | 0   | 1300
Hurricane | .40 | 15 | .30 | 0   | 2600
Sword     | 0   | 40 | 0   | 0   | 1300
Pickaxe   | 0   | 25 | 0   | 0   | 875
Cloak     | 0   | 0  | .20 | 0   | 800
Edge      | 0   | 70 | .20 | .50 | 3600
The combinations are:
Dagger + Dagger -> Bow
Dagger + Gloves -> Zeal
Zeal + Bow -> Hurricane
Sword + Pickaxe + Cloak -> Edge
Value function (a variation of the DPS formula): $V=(0.5+AS)*(50+AD)*(1+CS)*(1+CB)$ (in other words: the constants $c_j$ are $\{0.5,50,0,1\}$)
Situation 1: We don't have items and we can buy items worth up to $\hat{P}=1300$. If we buy a sword we have $V(Sword)=0.5*(50+40)*1*1=45$. If we buy a Zeal we have $V(Zeal)=(0.5+0.15)*50*(1+0.2)*1=39$. We know now: It is better to buy a sword than a zeal.
Situation 2: We have a Pickaxe and can spend $1300$ more. So $B_0={Pickaxe}$ and $\hat{P}=875+1300=2175$. Let's test the same two things as above: If we buy a sword we have $V(Sword,Pickaxe)=0.5*(50+40+25)*1*1=57.5$. If we buy a zeal we have $V(Zeal,Pickaxe)=(0.5+0.15)*(50+25)*(1+0.2)*1=58.5$. So in this situation buying a zeal is better.
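To make the definitions above easier to check, here is a small Python sketch that encodes part of the item table and the value function $V$ and reproduces the numbers from Situations 1 and 2; it is only meant as an illustration of the problem setup.

```python
# Item stats (AS, AD, CS, CB, price), transcribed from the table above.
items = {
    "Dagger":  (0.12, 0,  0.00, 0.0, 300),
    "Gloves":  (0.00, 0,  0.10, 0.0, 400),
    "Bow":     (0.25, 15, 0.00, 0.0, 1000),
    "Zeal":    (0.15, 0,  0.20, 0.0, 1300),
    "Sword":   (0.00, 40, 0.00, 0.0, 1300),
    "Pickaxe": (0.00, 25, 0.00, 0.0, 875),
}

def price(build):
    return sum(items[name][4] for name in build)

def value(build):
    # V = (0.5 + AS) * (50 + AD) * (1 + CS) * (1 + CB)
    AS, AD, CS, CB = (sum(items[name][k] for name in build) for k in range(4))
    return (0.5 + AS) * (50 + AD) * (1 + CS) * (1 + CB)

print(value(["Sword"]), price(["Sword"]))   # 45.0, 1300   (Situation 1)
print(value(["Zeal"]), price(["Zeal"]))     # 39.0, 1300
print(value(["Sword", "Pickaxe"]))          # 57.5         (Situation 2)
print(value(["Zeal", "Pickaxe"]))           # 58.5
```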
Situation 3: This is the situation I can't solve and why I'm asking the question. We want to buy both Edge and Hurricane. Which is the optimal order to buy them in? I will give the examples for only Edge and only Hurricane. As series of maximum prices I take $\hat{P}_n=1000*n$.
Edge
n=1: P=1000, B={}
V(Pickaxe)=37.5
V(Cloak)=30
-> buy Pickaxe
n=2: P=875+1225, B={Pickaxe}
V(Cloak,Pickaxe)=45
-> buy Cloak
n=3: P=875+800+1425, B={Cloak,Pickaxe}
V(Sword,Cloak,Pickaxe)=69
-> buy Sword
n=4: P=875+800+1300+1125, B={Sword,Cloak,Pickaxe}
-> combine to Edge
V(Edge)=108
Sum of all Bs: 37.5+45+69+108=259.5

Hurricane
n=1: P=1000, B={}
V(Dagger+Dagger+Dagger)=43
V(Dagger+Dagger+Gloves)=40.7
V(Bow)=48.75
-> buy Bow
n=2: P=1000+1000, B={Bow}
V(Bow+Dagger+Gloves)=62.25
-> buy Dagger + Gloves
n=3: P=1000+300+400+1300, B={Bow,Dagger,Gloves}
-> combine to Hurricane
V(Hurricane)=76.05
n=4: Do nothing
Sum of all Bs: 48.75+62.25+76.05+76.05=263.1
I guess the best build order is to buy items for Edge and for Hurricane, but I think this example will help to clarify my problem.
I'm happy to hear any ideas and possible solutions. Thanks in advance for your help! |
Gravitational Force Exerted by a Rod
In this lesson, we'll derive a formula which will allow us to calculate the gravitational force exerted by a rod of length \(L\) on a particle a horizontal distance \(d\) away from the rod as illustrated in Figure 1. We'll assume that the width and depth of the rod are negligible and approximate all of the mass comprising the rod as being distributed along only one dimension. We'll model the rod as being composed of an infinite number of particles of mass \(dm\). The mass of the rod is given by the infinite sum of all the mass elements (or particles) comprising the rod:
$$M_{rod}=\int{dm}.\tag{1}$$
We're interested in finding the gravitational force exerted by the rod on a particle of some mass \(m\). Now, of course, the notion of a particle is something that is very abstract—an object of zero size with all of its mass concentrated at a single point (more precisely, a
geometrical point which is another very abstract notion) in space. No object is actually a particle (except a black hole), but if the object is very small compared to the size of the rod then it is reasonable to ignore the dimensions of that object and to approximate it as a point mass.
Newton's law of gravity is defined as
$$\vec{F}_{m_1,m_2}=G\frac{m_1m_2}{r^2}\hat{r}_{1,2}.\tag{2}$$
where \(\vec{F}_{m_1,m_2}\) is the gravitational force exerted by a particle of mass \(m_1\) on another particle of mass \(m_2\), \(r\) is their separation distance, and \(\hat{r}_{1,2}\) is a unit vector pointing from \(m_1\) to \(m_2\). For the moment, we'll just be interested in the
magnitude of the gravitational force exerted on \(m_2\) which is given by
$$F_{m_1,m_2}=G\frac{m_1m_2}{r^2}.\tag{3}$$
When I was telling you the definition of Newton's law of gravity, notice how I was very specific about how Equation (2) (and thus Equation (3) as well) gives the gravitational force exerted by one
particle (or point-mass) on another particle. The famous equation representing Newton's law of gravity only deals with particles and for this reason we cannot use Equation (2) or (3) to compute the gravitational force exerted by a rod on a particle—the rod isn't a particle, it's an extended object. When dealing with the mass of any extended object in classical mechanics—whether it be a rod, disk, ball, or any other geometrical shape—we can think of the entire shape of that object as being built up by an infinite number of point-masses of mass \(dm\). Given how I defined Equations (2) and (3) as being in terms of only particles, we can use Equation (3) to compute the gravitational force exerted by one of the particles of mass \(dm\) exerted on the particle of mass \(m\) as
$$F_g=Gm\frac{1}{r^2}dm,\tag{4}$$
where \(dm\) and \(m\) are the mass of each particle, and \(r\) is their separation distance. As you can see from Figure 1, if \(x\) represents the position of the mass \(dm\) on the \(x\)-axis, then the separation distance between \(dm\) and \(m\) must be \((L+d)-x\). Thus, we can represent Equation (4) as
$$F_g=Gm\frac{1}{((L+d)-x)^2}dm.\tag{5}$$
Equation (5) represents the gravitational force exerted by any particle in the rod on the particle at a horizontal distance \(d\) away from the rod. To find the total gravitational force exerted by the rod, we must "add up" (indeed, "add up" an infinite number of times) the gravitational forces, \(Gm\,dm/((L+d)-x)^2\), exerted by every particle \(dm\) on the mass \(m\):
$$F_{rod,m}=Gm\int{\frac{1}{((L+d)-x)^2}dm}.\tag{6}$$
Equation (6) does indeed give the magnitude of the gravitational force exerted by \(M_{rod}\) on \(m\)—but the only problem is that we cannot calculate the value of this force since we cannot evaluate the integral in Equation (6). To calculate the integral in Equation (6), the integrand and limits of integration must be represented in terms of the same variable. If we assume that the mass density \(λ\) of the rod is constant, then
$$λ=\frac{dm}{dx}$$
and
$$dm=λdx.\tag{7}$$
Substituting Equation (7) into (6), we have
$$F_{rod,m}=Gmλ\int_{0}^{L}\frac{1}{((L+d)-x)^2}dx.\tag{8}$$
As you can see, after representing everything in the integral in terms of \(x\), we have ended up with an integral that is fairly straightforward to calculate. If we let \(u=L+d-x\), then
$$\frac{du}{dx}=-1$$
and
$$dx=-du\tag{9}$$
Substituting \(u=L+d-x\) and Equation (9) into (8), we have
$$F_{rod,m}=-Gmλ\int_{?_1}^{?_2}\frac{1}{u^2}du.\tag{10}$$
When \(x=0\), \(u=L+d\), and when \(x=L\), \(u=d\). Substituting these limits of integration into Equation (10) and using the overall minus sign to swap the order of the limits, we have

$$F_{rod,m}=Gmλ\int_{d}^{L+d}\frac{1}{u^2}du.\tag{11}$$
Solving the integral in Equation (11), we have
$$Gmλ\int_{d}^{L+d}\frac{1}{u^2}du=Gmλ\biggl[\frac{-1}{u}\biggr]_{d}^{L+d}=Gmλ\left(\frac{1}{d}-\frac{1}{L+d}\right).$$
Thus,
$$F_{rod,m}=Gmλ(\frac{1}{d}-\frac{1}{L+d}).\tag{12}$$
Equation (12) allows us to calculate the magnitude of the gravitational force exerted by a rod on a particle. Since each mass \(dm\) in the rod is pulling on the mass \(m\) in the \(-x\) direction, the entire rod pulls on \(m\) in the \(-x\) direction. If we multiply the magnitude of the gravitational force, \(F_{rod,m}\), by \(-\hat{i}\), this will give us a gravitational force with a magnitude of \(F_{rod,m}\) and a direction of \(-\hat{i}\) (the negative \(x\) direction). Thus, the gravitational force exerted by the rod on the mass \(m\) is given by
$$\vec{F}_{rod,m}=Gmλ(\frac{1}{d}-\frac{1}{L+d})(-\hat{i}).\tag{13}$$
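As a quick sanity check on Equation (12), the following Python sketch approximates the rod (lying on \(0 \le x \le L\)) by a large number of point masses and sums their individual contributions. The numerical values of \(L\), \(d\), \(λ\), and \(m\) are arbitrary illustrative choices.

```python
import numpy as np

# Numerical check of Equation (12): model the rod (on [0, L]) as N point masses
# dm = lambda*L/N and sum their contributions G*m*dm/((L+d)-x)^2.
G = 6.674e-11            # m^3 kg^-1 s^-2
L, d = 2.0, 0.5          # rod length and gap between rod end and particle (m)
lam, m = 3.0, 1.0        # linear mass density (kg/m) and particle mass (kg)

N = 100_000
x = (np.arange(N) + 0.5) * L / N          # midpoints of the mass elements on [0, L]
dm = lam * L / N
F_numeric = np.sum(G * m * dm / ((L + d) - x)**2)

F_closed = G * m * lam * (1/d - 1/(L + d))
print(F_numeric, F_closed)                 # the two agree to several significant figures
```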
The entire problem we just solved would also apply to a problem from electrostatics which deals with finding the electric force exerted by a charged rod on a charged particle. This is because the law specifying the electric force (namely, Coulomb's law) is completely analogous to Newton's law of gravity. What I mean by this is that both laws have analogous mathematical expressions: both involve a universal constant and two parameters describing two particles, and both laws are inverse-square laws.
This article is licensed under a CC BY-NC-SA 4.0 license. |
Green's Theorem allows us to convert the line integral into a double integral over the region enclosed by \(C\). The discussion is given in terms of velocity fields of fluid flows (a fluid is a liquid or a gas) because they are easy to visualize. However, Green's Theorem applies to any vector field, independent of any particular interpretation of the field, provided the assumptions of the theorem are satisfied. We introduce two new ideas for Green's Theorem: divergence and circulation density around an axis perpendicular to the plane.
Divergence
Suppose that \(F(x, y) = M(x, y) \hat{\textbf{i}} + N(x, y) \hat{\text{j}}\), is the velocity field of a fluid flowing in the plane and that the first partial derivatives of \(M\) and \(N\) are continuous at each point of a region \(R\).
Let \((x, y)\) be a point in \(R\) and let \(A\) be a small rectangle with one corner at \((x, y)\) that, along with its interior, lies entirely in \(R\). The sides of the rectangle, parallel to the coordinate axes, have lengths of \( \Delta x \) and \( \Delta y \). Assume that the components \(M\) and \(N\) do not change sign throughout a small region containing the rectangle \(A\). The rate at which fluid leaves the rectangle across the bottom edge is approximately
\[F(x,y)\cdot (-\hat{\textbf{j}})\,\Delta x=-N(x,y)\,\Delta x\]
This is the scalar component of the velocity at \((x,y)\) in the direction of the outward normal times the length of the segment. If the velocity is in meters per second, for example, the flow rate will be in meters per second times meters or square meters per second. The rates at which the fluid crosses the other three sides in the directions of their outward normals can be estimated in a similar way. The flow rates may be positive or negative depending on the signs of the components of \(F\). We approximate the net flow rate across the rectangular boundary of \(A\) by summing the flow rates across the four edges as defined by the following dot products.
Top: \[F(x,y+\Delta y)\cdot (\hat{\textbf{j}})\Delta x=N(x,y+\Delta y)\Delta x\] Bottom: \[F(x,y)\cdot (-\hat{\textbf{j}})\Delta x=-N(x,y)\Delta x\] Right: \[F(x+\Delta x,y)\cdot (\hat{\textbf{i}})\Delta y=M(x+\Delta x,y)\Delta y\] Left: \[F(x,y)\cdot (-\hat{\textbf{i}})\Delta y=-M(x,y)\Delta y\]
Summing opposite pairs gives
Top and bottom: \[(N(x,y+\Delta y)-N(x,y))\cdot(\Delta x)\] Right and left: \[(M(x+\Delta x,y)-M(x,y))\cdot(\Delta y)\]
Adding these last two equations gives the net effect of the flow rates, or the flux across the rectangle boundary. We now divide by \(\Delta x \Delta y\) to estimate the total flux per unit area or flux density for the rectangle. Finally, we let \(\Delta x\) and \(\Delta y\) approach zero to define the flux density of \(F\) at the point \((x,y)\). In mathematics, we call the flux density the divergence of \(F\). The symbol for it is div \(F\), pronounced "divergence of \(F\)" or "div \(F\)."
The divergence (flux density) of a vector field \(F = M(x,y)\hat{\textbf{i}} + N(x,y)\hat{\textbf{j}}\) at the point \((x,y)\) is

\[divF=\dfrac{\partial M}{\partial x}+\dfrac{\partial N}{\partial y}.\]
Spin Around an Axis: The k-Component of Curl
The second idea we need for Green's Theorem has to do with measuring how a floating paddle wheel, with axis perpendicular to the plane, spins at a point in a fluid flowing in a plane region. This idea gives some sense of how the fluid is circulating around axes located at different points and perpendicular to the region. Physicists sometimes refer to this as the circulation density of a vector field \(F\) at a point. To obtain it, we return to the velocity field
\[F(x,y)=M(x,y)\hat{\textbf{i}}+N(x,y)\hat{\textbf{j}}\]
and consider the rectangle \(A\) in Figure 16.29 (where we assume both components of \(F\) are positive).
The circulation rate of \(F\) around the boundary of \(A\) is the sum of flow rates along the sides in the tangential direction. For the bottom edge, the flow rate is approximately
\[F(x,y)\cdot ( \hat{\textbf{i}} )\Delta x=M(x,y)\Delta x\]
This is the scalar component of the velocity \(F(x, y)\) in the tangent direction \( \hat{\textbf{i}} \) times the length of the segment. The flow rates may be positive or negative depending on the components of \(F\). We approximate the net circulation rate around the rectangular boundary of \(A\) by summing the flow rates along the four edges as determined by the following dot products.
Top: \[F(x,y + \Delta y) \cdot (-\hat{\textbf{i}}) \Delta x= -M(x,y+ \Delta y)\Delta x \] Bottom: \[F(x,y) \cdot ( \hat{\textbf{i}}) \Delta x = M(x,y) \Delta x\] Right: \[F(x+\Delta x ,y ) \cdot (\hat{\textbf{j}} ) \Delta y = N(x+ \Delta x, y) \Delta y \] Left: \[ F(x,y) \cdot (- \hat{\textbf{j}}) \Delta y = - N(x,y) \Delta y \] Top and bottom: \[-(M(x,y+\Delta y)-M(x,y))\cdot(\Delta x)\] Right and left: \[(N(x+\Delta x,y)-N(x,y))\cdot(\Delta y)\]
Adding these last two equations gives the net circulation relative to the counterclockwise orientation, and dividing by \(\Delta x \Delta y\) gives an estimate of the circulation density for the rectangle:
\[\frac{\text{Circulation around rectangle}}{\text{Rectangle area}}\]
We let \(\Delta x\) and \(\Delta y\) approach zero to define the circulation density of \(F\) at the point \((x,y)\).
If we see a counterclockwise rotation looking downward onto the xy-plane from the tip of the unit \( \hat{\textbf{k}}\) vector, then the circulation density is positive (Figure 16.30). The value of the circulation density is the \( \hat{\textbf{k}}\)-component of a more general circulation vector field we addressed in Section 16.7, called the curl of the vector field \(F\). For Green's Theorem, we need only this \( \hat{\textbf{k}}\) -component.
The circulation density of a vector field \(F= M \hat{\textbf{i}} + N \hat{\textbf{j}}\) at the point \( ( x, y ) \) is the scalar expression
\[\dfrac{\partial N}{\partial x} - \dfrac{\partial M}{\partial y} \]
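As a small illustration (not part of the original text), the following SymPy sketch computes the divergence defined earlier and this circulation density for an example field \(F = M \hat{\textbf{i}} + N \hat{\textbf{j}}\); the particular field is an arbitrary choice.

```python
import sympy as sp

x, y = sp.symbols('x y')
# An arbitrary example field F = M i + N j (not taken from the text).
M = x**2 - y
N = x*y + y**2

div_F = sp.diff(M, x) + sp.diff(N, y)       # flux density (divergence): 3*x + 2*y
curl_F_k = sp.diff(N, x) - sp.diff(M, y)    # circulation density (k-component of curl): y + 1

print(div_F, curl_F_k)
```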
Theorem \(\PageIndex{1}\): Green's Theorem (Flux-Divergence Form)
Let \(C\) be a piecewise smooth, simple closed curve enclosing a region \(R\) in the plane. Let \(F = M \hat{\textbf{i}} + N \hat{\textbf{j}}\) be a vector field with \(M\) and \(N\) having continuous first partial derivatives in an open region containing \(R\). Then the outward flux of \(F\) across \(C\) equals the double integral of \(div F\) over the region \(R\) enclosed by \(C\).
\[\oint_C F\cdot n\,ds=\oint_C M\,dy-N\,dx=\iint_{R} \left(\dfrac{\partial M}{\partial x}+\dfrac{\partial N}{\partial y}\right) dx\,dy\]
Theorem \(\PageIndex{2}\): Green's Theorem (Circulation-Curl Form)
Let \(C\) be a piecewise smooth, simple closed curve enclosing a region \(R\) in the plane. Let \(F = M \hat{\textbf{i}} + N \hat{\textbf{j}}\) be a vector field with \(M\) and \(N\) having continuous first partial derivatives in an open region containing \(R\). Then the counterclockwise circulation of \(F\) around \(C\) equals the double integral of \((curl F) \cdot k\) over \(R\).
\[\oint_C F\cdot T\,ds=\oint_C M\,dx+N\,dy=\iint_{R} \left(\dfrac{\partial N}{\partial x}-\dfrac{\partial M}{\partial y}\right) dx\,dy\]
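As a numerical check (again, not part of the original text), the sketch below verifies both forms of Green's Theorem for an arbitrary example field on the unit disk. The field \(F = (x-y)\hat{\textbf{i}} + xy\hat{\textbf{j}}\), the number of boundary sample points, and the grid spacing are all illustrative choices.

```python
import numpy as np

# Check both forms of Green's Theorem for F = (M, N) = (x - y, x*y) on the unit disk,
# for which div F = 1 + x and (curl F) . k = y + 1; every integral below equals pi.
M = lambda x, y: x - y
N = lambda x, y: x * y

# Line integrals around the unit circle, parameterized by t in [0, 2*pi).
t = np.linspace(0.0, 2*np.pi, 400_000, endpoint=False)
x, y = np.cos(t), np.sin(t)
dxdt, dydt = -np.sin(t), np.cos(t)
flux = np.mean(M(x, y)*dydt - N(x, y)*dxdt) * 2*np.pi          # integral of M dy - N dx
circulation = np.mean(M(x, y)*dxdt + N(x, y)*dydt) * 2*np.pi   # integral of M dx + N dy

# Double integrals over the unit disk via a simple Riemann sum on a grid.
s = np.linspace(-1.0, 1.0, 2001)
X, Y = np.meshgrid(s, s)
inside = X**2 + Y**2 <= 1.0
dA = (s[1] - s[0])**2
div_integral = np.sum((1.0 + X)[inside]) * dA        # integral of (dM/dx + dN/dy) dA
curl_integral = np.sum((Y + 1.0)[inside]) * dA       # integral of (dN/dx - dM/dy) dA

print(flux, div_integral)            # both close to pi
print(circulation, curl_integral)    # both close to pi
```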
Contributors
Integrated by Justin Marshall. |
What helps to solve your problem is the rule ${\rm vec}\{A \cdot {\rm diag}(b) \cdot C^T\} = (C \diamond A)\cdot b$, where
${\rm vec}\{X\}$ is the vectorization operator that rearranges the elements of a matrix $X \in \mathbb{R}^{m \times n}$ into a vector $\in \mathbb{R}^{m \cdot n \times 1}$.
The $\diamond$ operator is the "Khatri-Rao product", also known as the column-wise Kronecker product between two matrices, i.e., for given matrices $A = [a_1, \ldots, a_k] \in \mathbb{R}^{m \times k}$ and $B = [b_1, \ldots, b_k] \in \mathbb{R}^{n \times k}$, the matrix $C = A \diamond B$ is given by $C = [a_1 \otimes b_1, \ldots, a_k \otimes b_k] \in \mathbb{R}^{m \cdot n \times k}$, where $\otimes$ is the Kronecker product.
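Before applying the rule, here is a small NumPy check of the identity on random matrices; the helper khatri_rao below is a convenience function written only for this check, and column-major (column-stacking) vectorization is assumed.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, k = 4, 3, 5
A = rng.standard_normal((m, k))
C = rng.standard_normal((n, k))
b = rng.standard_normal(k)

def khatri_rao(C, A):
    # Column-wise Kronecker product: column j is kron(C[:, j], A[:, j]).
    return np.column_stack([np.kron(C[:, j], A[:, j]) for j in range(C.shape[1])])

lhs = (A @ np.diag(b) @ C.T).flatten(order="F")   # vec{.}: stack the columns
rhs = khatri_rao(C, A) @ b
print(np.allclose(lhs, rhs))                      # True
```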
Let us apply this rule to your problem. First, we rearrange a bit:
$$\begin{align}-\omega^2U^T(M+\Delta M)U+U^T(K+\Delta K)U&=0_M \\-\omega^2U^T{\rm diag}\{\Delta m\} U+U^T{\rm diag}\{\Delta k\} U&=\omega^2 U^T M U - U^T K U\end{align}$$Now, let us vectorize:$$\begin{align}-(U^T \diamond \omega^2 U^T) \cdot \Delta m+(U^T \diamond U^T) \cdot \Delta k& = {\rm vec} \{ \omega^2 U^T M U - U^T K U\} \\\left[-U^T \diamond \omega^2 U^T, U^T \diamond U^T\right]\cdot\left[\Delta m^T, \Delta k^T\right]^T & = {\rm vec} \{ \omega^2 U^T M U - U^T K U\},\end{align}$$which is in the desired form $A x = b$, where $A = \left[-U^T \diamond \omega^2 U^T, U^T \diamond U^T\right] \in \mathbb{R}^{9 \times 10}$, $x = \left[\Delta m^T, \Delta k^T\right]^T\in \mathbb{R}^{10 \times 1}$, and $b = {\rm vec} \{ \omega^2 U^T M U - U^T K U\} \in \mathbb{R}^{9 \times 1}$.
As a slight simplification, you can rewrite your system matrix $A$ into $$A = \left[-(I_3 \otimes \omega^2) \cdot (U^T \diamond U^T), U^T \diamond U^T\right]= \left[-D \cdot (U^T \diamond U^T), U^T \diamond U^T\right],$$ where $D = {\rm diag}\{\omega_1, \omega_2, \omega_3, \omega_1, \omega_2, \omega_3, \omega_1, \omega_2, \omega_3\} \in \mathbb{R}^{9 \times 9}$, since $\omega$ is diagonal.
Clearly, if your $\omega_i$ are equal, the system has rank at most 5 (not surprisingly). In general, you may be more lucky. I just tried with randomly drawn data and got rank 9. My guess is for randomly drawn $\omega$, $U$ (from a continuous distribution) you get full rank almost surely, but that's a guess only. |
https://doi.org/10.1351/goldbook.M03774
Defined by the equation: \[a_{\pm }=\mathrm{e}^{(\mu _{\text{B}}- \mu _{\text{B}}^{\unicode{x29B5}})/(\nu R T)}\] where \(\mu _{\text{B}}\) is the chemical potential of the solute B in a solution containing B and other species. The nature of B must be clearly stated: it is taken as a group of ions of two kinds carrying an equal number of positive and negative charges, e.g. Na\(^{+}\) + NO\(_{3}^{-}\) or Ba\(^{2+}\) + 2Cl\(^{-}\) or 2Al\(^{3+}\) + 3SO\(_{4}^{2-}\). \(\nu\) is the total number of ions making up the group, i.e. 2, 3 and 5 respectively in the above examples. \(\mu _{\text{B}}^{\unicode{x29B5}}\) is the chemical potential of B in its standard state, usually the hypothetical ideal solution of concentration \(1\ \text{mol}\ \text{kg}^{-1}\) and at the same temperature and pressure as the solution under consideration. See also:
activity |
Asteroid LauncherThe ship sneaks into the asteroid belt and starts manufacturing engines on the asteroids. When enough is made, it launches them at the Earth and/or other targets.Sure the Earth forces can try and blow them up but that's not really going to help as it just changes a single shot round into a shotgun round.Multiply that by thousands of ...
Local singularity.Deploy AI into the local system that is more advanced than whatever is present.This frightening prospect is real even for people today.Secondly, hallucinogenic weapons. For some reason never deployed even in the worst horrors of human war. LSD bombs.
Not meant to be a top answer but as something to consider: what if it isn't superior technology but superior design? The human ships are all efficient, they don't waste space and are built as compact and capable as possible. That makes turning easier and gives you a smaller profile to hit. Great attributes, right? But the aliens come with an absolutely ...
"The Three Body Problem" by Liu Cixin has several ideas for crazy advanced technology and I recommend reading it for the full details (it's also really good).The two most applicable to you:multi-dimensional entitiesThe ship is just the 3D projection of something that is actually an eleven-dimensional object. Among other things, that means its internal "...
I would strongly recommend relaxing your rules for the aliens A LOT.I will assume that you want to have some kind of narrative. To really make the alien alien, the other, I would give them future-fantasy devices. Not necessarily the most used in future fantasy, but offense and defense impossible by our current understanding of physics.Examples: Short-...
The other ideas here are all good, but there isn't going to be a silver bullet. A human civilization like you're describing is a large, diverse ecosystem of adaptable self-replicating intelligent agents. (And it may have access to its own super-intelligent AI, or it may not, IDK.) Even if you can get 99% fatalities in your first volley, the invader is ...
With all due credit given to Roddenberry's masterpiece...Cloaking FieldIf the alien vessel can absorb all of the energies (including visible light) that our active sensors use, and if it can also store/conceal all of this collected energy plus its' own emitted energies from our passive sensors, then all of our weapons and maneuverability won't help....
Trojan horseThe automatic warship limps into the system, obviously disabled, flying blind. The long dead corpses of its crew are still aboard. Earth recognizes its alien nature and realizes that it is a ghost ship. The military wants that alien tech! And the ship does have excellent and complex tech.It is brought back to earth for study. As its ...
von Neumann Machinesaka Gray Goo, one of the more horrific potential apocalypses facing the humans race. Raw numbers are probably the strongest of force multipliers you could ask for, which means that unless you give this alien warship weapons and technology which just outclass the humans, which, given that humans have kinetic projectiles at sizeable ...
What about the effects of adding an electromagnetic charge to the shield? While it might not do much to negate that kinetic energy, maybe it could deflect the rod or its fragments in harmless directions...
As many other posters pointed out, the Whipple Shield isn't going to do much against a vary large, dense projectile. It's purpose is to absorb the impact of very small objects like dust grains or micrometeors.However, it is possible to take this principle and apply it as a form of active armour. Rather than a fixed plate, the ship carries batteries of ...
To pull up an old but useful formula derived from work on shaped charge jets penetrating tank armour: $$P = L\sqrt{\frac{\rho_j}{\rho_t}}$$$P$ is the penetration depth, $L$ is the length of the penetrator, $\rho_j$ and $\rho_t$ are the densities of the penetrator and target respectively. Note that this is different from the classic Newtonian penetrator ...
60 km/s is so high that you can neglect any inter-atomic bonds and thermal movement and consider both armor and missile as a set of independent atoms. At first stages of impact missile atoms would pass through atoms of armor. Then scattering of tungsten atoms on tungsten atoms begins. You just can't call it evaporation - it would be an understatement. Since ...
I think I'm pretty much saying the same thing as Thucydides except in laymen's terms. The issue with using a nuke would be blow back and fouling.Blowback being the amount of energy released back onto the, i guess you could say nuke cannon. That would be a very bad thing in zero-g.It would act as a propellant against the ship.And the fouling could be ...
It won't be a cannon at all, but a warhead.This sort of device was actually investigated as far back as the 1980's under the Strategic Defense Initiative, and was part of a wider ranging investigation to harness the power of nuclear devices to drive weapons effects, so called "Third Generation Nuclear Weapons"The basis of all these devices is to encase ...
More important than the size of the object is its heat signature. Slowing down from 200 km/s to 3 km/s, without having the ship be 99.9% propellant, requires a LOT of energy, and that energy will show up as a bright heat signature even if the deceleration burn starts months in advance. It's my understanding that heat signatures in space are very, very easy ...
If the ship's orbit is perfectly circular and with a speed of 3km/s, it will be orbiting at an altitude of approximately 37,917 kilometers above sea level. That is just a very little bit above geosynchronous orbit.This may be interesting for you: geosync altitude is kinda the sweet spot for communication satellites, so slots in it are in high demand. If ...
Without any kind of stealth technology, it is 100% likely to be seen long before it gets anywhere close to Earth Orbit. This might give you some perspective:A comet coming in unannounced from intersteller space, from outside of the ecliptic, was detected at 3au (three times as far from the sun as the earth is), by more than 20 different telescopes....
Instead of a reflective layer of aluminium, you can technically use the principle behind gradient optical fibre and use multiple films of transparent materials to guide the laser along the surface of the hull and redirect it. The gradient has to be such that no matter what angle the laser hits, it would work.
I think if you build the ship wall out of retroreflectors, with a layer of high melting point material below, then you've got the perfect defence against attack lasers. Any laser directed at your hull will be, to a large percentage of the energy, directed right back to the attacking ship. And the layer below makes sure that the heat from what gets absorbed ...
The thing that kills lasers at range is beam spread.A laser spreads a lot less than, say a flashlight, but it does spread. That makes the energy density go down. It is the energy density that burns through a ship. If the energy density isn't enough to damage the hull, it just heats the target and, as you pointed out, you lose the heat war.So, things ...
Which laser?There are many kinds of lasers available with wavelengths from radio "masers" all the way to ultraviolet "excimer" lasers, potentially more available with technological progress. And there is no known material that fully reflects all these wavelengths. For example, aluminium does have a dip in its reflectivity in the infrared 700nm – 900nm ...
It's been my assumption that a laser would be an effective weapon at short enough ranges that a pulse can vaporize hull plating."Short range" is a tricky thing to quantify. It very much depends on your own tech level assumptions and requirements, and as you haven't communicated them to us then I can't really speculate. The shorter the wavelength, the ...
Wallpapering the spaceship with 2-5 mm aluminum (steel would be much better) and active cooling will save you from any reasonable laser if you keep your distance. 80% dissipating energy is only for laboratory lasers. For powerful "battle lasers" only 0.1% - 5% of energy goes to the beam. And the beam itself greatly loses energy density with range due to ...
Reflective surfaces will always help deflect some of the power, however no material is capable of reflecting 100% and some of that energy will always be absorbed. So even if you have a 99.99999% reflective surface, you still absorb a tiny bit of energy which is the main point.A laser is powerful because it can focus a decent amount of energy into a very ...
It depends on the power that is being fired at you.Conventional mirrors are not 100% reflective. They normally reflect a bit more than 90% of the impinging light, meaning that around 10% of that power is absorbed or transmitted.If you are targeted with a mW laser, 10% of that are peanuts, and you don't have to worry.If you are targeted with a petawatt ...
Answering the question as it's asked - sure. If you're burning it (rather than pouring it through a nuclear thermal rocket), then it's inarguably a chemical rocket fuel.As some of the commenters have pointed out, it's potentially a little hoity-toity for the tough, grizzled space adventurers using it - probably they'd call it LMH/fuel, and then shorten it ...
The idea sounds faintly troubling, in that the aether has become some magical place where you can just send unwanted energy and everything will be ok, but so long as it doesn't allow FTL or form a privileged reference frame then you might just about be ok, from the point of view of not annihilating physics as we know it.(also, make sure you have a read of ...
There are two general requirements - power and space.Power1,000,000 humans need about 2,000 calories per day each plus energy for day-to-day functions. Total up the calories and convert it to something workable, and we have a total of 96.85 MW. That is a lot, but that pales in comparison to energy. The average US household uses 900 kWH per month, giving ...
The answer given by mcRobusta accounts very well for the circular accelerator. The formula is indeed $F=\frac{m v^2}{r}$. But to compare to the linear case, consider a linear gun with barrel length $d=2 r$. At constant acceleration the exit velocity is $\sqrt{2 d a}$. That is, to get a velocity of $v$ you need a force as follows. $$ F = m \frac{v^2}{2d}= m\...
Our friend Isaac Newton can help us here with his equation of circular motion:$F = \frac{mv^2}{r}$where F is the force produced, m is your object mass, v is your object's linear velocity, and r is the radius of motion. As you can see, it's the speed you want to reach that is going to have more of an impact than your mass (which, by the way, is still a ...
This very much depends on the sort of story you want to write and whether the technology itself is effectively a character in that story.Star Wars. It's entirely fantasy, it has wizards and magic swords, for all practical purposes they might as well be riding horses or flying carpets as in spaceships. Everything runs on handwavium and technology is ...
Write what you know.You are conflicted about your desire for certain SF tech and your inability to explain them. Clearly that interests you. You can get good traction from that for your story!Your engineering characters have the same concerns and are conflicted in the same way. You can have one or more scenes where they walk thru the tech.I ...
Circular CoilgunWhy not make the coilgun circular like a collider and keep circulating low mass projectiles until they reach high speed and then release them. It works for atom smashers so should work in principle. The limiting factors is the strength of the magnets and size of the ring. The bigger both are, the faster you can go.
The problem with shields, in this contextYou need to reconcile the Star Trek transporter dilemma wherein having shields up prevents certain high density data signal data transfers, such as transporters. At the end of the day, since the field units are limited to non-FTL communications, they are going to be limited to radio signals. They can be encrypted, ...
I'd like a way to justify having 100 meter long guns be able to throw out 1-ton projectiles at 30 km/s or betterI don't understand your strange "tons", so lets use a nice easy measurement like a tonne. Your projectile will leave the barrel with a hefty $4.5*10^{11}$ joules of kinetic energy. If your coilgun only wastes 1% of that energy in heating the ...
Muzzle velocities may be more modest than your projected velocity of 21 km/s. When Gerard O'Neill was conducting trials with mass-drivers, this was pioneering work for the construction of his proposed Lagrange cylinder habitats. This research found there was a limiting velocity of around 4 km/s, after which any projectile launched with a mass-driver tended to ...
So it's probably important to explain a few things here about physics and Newton's Laws. The whole point of a railgun is to be able to do a lot of damage with a smaller projectile by giving it far more velocity.Momentum = Mass x VelocityIn this equation, what we're saying is that you can increase the damage caused in a collision with something in two ...
For the sake of convenience let’s start with the assumption that your population will desire a North American urban population density and a globally averaged diet.The city of Austin TX has approximately 1,000,000 residents and a surface area of approx. 800 km^2, for a density of about 1,250/km^2.The City of New York (5 boroughs) NY also has a surface ...
If you're talking about war ships, maybe independant, conventional, unpowered space stations with conventional equipped canons would work.Big motherships place them in space where they operate a few days/weeks and then gather them again.As a weapon I can think of (untraceable) mines placed in space or mines which get shot out of starships.They unfold ...
See the Honor Harrington series by Weber. He actually goes through a several generation series of offence and defence weapons.Miltary grade ships have accelerations of 400 to 700 Gs with smaller ships being more capable. Missiles have accelerations of 10-50 times that, but their 'impellers' burn out. But 3 minutes acceleration at 20,000Gs covers major ... |
Maclaurin polynomials and series
In this lesson, we're going to focus on developing a technique for
approximating the value of any arbitrary function \(f(x)\) at each value of \(x\) such that \(f(x)\) is smooth and continuous for all \(x\) values. How can we approximate \(f(x)\)? We'll approximate the value of \(f(x)\) with a function \(g(x)\) where the values of \(g(x)\) agree with the values of \(f(x)\) within a certain error \(E=|f(x)-g(x)|\). But the question is this: what kind of function would \(g(x)\) have to be to very closely "match" the values of \(f(x)\) at each value of \(x\)? It turns out that a polynomial function of the form
$$g(x)=c_0+c_1x+c_2x^2+...+c_nx^n,\tag{1}$$
would do the best job of approximating \(f(x)\). (At the moment, that claim might seem pretty ad hoc. But later, when we put \(f(x)\) and \(g(x)\) on the same graph, we'll see that Equation (1) is indeed a good approximation of any function \(f(x)\) that is smooth and continuous at every \(x\) value.) So let's say that we want \(g(x)\) to approximate \(f(x)\) at values of \(x\) close to \(x=0\). The first thing to notice here is that we can make our approximate function \(g(x)\) the same as the function \(f(x)\) (the function we want to approximate the best we can) by requiring that \(g(0)=f(0)\).
Let's now evaluate \(g(0)\) using Equation (1). If we substitute \(x=0\) into \(g(x)\), then Equation (1) simplifies to \(g(0)=c_0.\) Thus, we have shown that the first term in Equation (1) must be given by \(c_0=f(0).\) We are trying to "create" and "build" a function \(g(x)\) that is close to \(f(x)\); thus far we are at a good start since \(g(x)\) is identical to \(f(x)\) right at \(x=0\). But the problem is that as we move away from \(x=0\) to higher \(x\) values, \(g(0)=c_0\) is pretty far off. What we're going to show next is that by adding the additional term \(c_1x\) to \(c_0\), the expression \(c_0+c_1x\) will be a better approximate of \(f(x)\) (better than just \(c_0\), a pretty poor estimate away from \(x=0\)) for a bigger range of \(x\) values. To do this, let's start off by requiring that \(g'(0)=f'(0)\); that is to say, the derivative of the approximate function \(g(x)\) is the same as the function \(f(x)\) right at \(x=0\). To evaluate \(g'(x)\), let's start off by taking the derivative of \(g(x)\) (Equation (1)) to get
$$g'(x)=c_1+2c_2x+...+nc_nx^{n-1}.\tag{2}$$
Evaluating Equation (2) at \(x=0\), we have \(g'(0)=c_1\). Thus \(c_1=f'(0)\) and the second term in \(g(x)\) (see Equation (1)) must be given by \(f'(0)x\). Let's refer to the function \(f(0)+f'(0)x\) as \(g_1(x)\); then, \(g_1(x)=f(0)+f'(0)x\). Let's now graph \(g_1(x)\) in the same xy-plane as \(f(x)\) and see if it does a better job of estimating \(f(x)\) at more \(x\) values than \(c_0=f(0)\) (which we'll just called \(g_0(x)\)). In this example, we can see from Figure 1 that \(f(0)=0\); thus, \(g_1(x)\) must simplify to \(g_1(x)=f'(0)x\). We know from back in our days of algebra that the product of a slope (in this case, \(f'(0)\)) and a change in \(x\) (in this case, \(Δx=x-0\)) gives the value of a function at the point \(x\) such that that function is a straight line with a y-intercept of \(0\). Therefore, the function \(g_1(x)\) gives the y-value of each \(x\) along the straight red line (for \(n=1\)) in Figure 1. We can see from the graph in Figure 1 that \(g_1(x)\) is not only equal to \(f(x)\) at \(x=0\) like the function \(g_0(x)\), but since \(g_1(x)\) "hugs" \(f(x)\) more closely, it does a better job of approximating \(f(x)\) for more \(x\) values.
What we're going to show next is that by adding the third term \(c_2x^2\) to our approximate function, \(g_2(x)\) will "hug" \(f(x)\) even more closely as shown in Figure 1. But let's explain how we got to that graph of \(g_2(x)\) vs. \(x\) shown in Figure 1. Let's require that \(g''(0)=f''(0)\). To find the expression for \(g''(0)\), let's start off by taking the derivative of both sides of Equation (2) to get:
$$g''(x)=2c_2+...+n(n-1)c_nx^{n-2}.\tag{3}$$
Evaluating Equation (3) at \(x=0\), we see that \(g''(0)=2c_2\) and that \(\frac{g''(0)}{2}=c_2\). Since we require \(g''(0)=f''(0)\), the third term in \(g(x)\) (Equation (1)) must be \(\frac{f''(0)}{2}x^2\). The expression for \(g_2(x)\) is given by
$$g_2(x)=f(0)+f'(0)x+\frac{f''(0)}{2}x^2.$$
Using MATLAB, we can find that the graph of \(g_2(x)\) vs. \(x\) is as shown in Figure 1. At first, when I was initially using the term "hugging," it might have been a little unclear what I meant by that. But hopefully, we can all understand what was meant by that term as we can see graphically that \(g_2(x)\) is "hugging" \(f(x)\) very closely within a certain range of \(x\) values. In fact, if you look at a pretty small range of \(x\) values around \(x=0\), it is pretty difficult to distinguish between \(f(x)\) and \(g_2(x)\). What if we wanted to derive the expressions for \(g_5(x)\), \(g_{10}(x)\), or \(g_n(x)=g(x)\)? How would we go about doing that? Well, it would essentially be analogous to how we got the expressions for \(g_0(x)\), \(g_1(x)\) and \(g_2(x)\). We would just have to take the derivative of \(g(x)\) five, ten, or \(n\) times; then evaluate that derivative at \(x=0\), do some algebra, and then make a substitution to find either the fifth, tenth, or \(n\)th term in Equation (1). But let's skip some of those intermediate steps of finding the fourth through \((n-1)\)th terms and solve for the \(n\)th term in Equation (1). This will give us a general expression for our approximate function \(g(x)\). Taking the \(n\)th derivative of \(g(x)\) (represented as \(g^{(n)}(x)\)), we have
$$g^{(n)}(x)=n(n-1)(n-2)...·2·1·c_n.$$
Evaluating \(g^{(n)}(x)\) at \(x=0\) simply just gives us
$$g^{(n)}(0)=n(n-1)(n-2)...·2·1·c_n.$$
Analogous to all of the previous steps, we'll require that \(f^{(n)}(0)=g^{(n)}(0)\). Thus, the \(n\)th term of \(g(x)\) is given by
$$\frac{f^{(n)}(0)}{n(n-1)(n-2)...·2·1}x^n.$$
Using factorial notation, we can rewrite the term above simply as
$$\frac{f^{(n)}(0)}{n!}x^n.$$
If we substitute the first through \(n\)th terms of \(g(x)\) that we derived, Equation (1) simplifies to
$$g_n(x)=f(0)+f'(0)x+\frac{f''(0)}{2}x^2+...+\frac{f^{(n)}(0)}{n!}x^n.\tag{4}$$
Using Wolfram Alpha, we can graph \(g_n(x)\) vs. \(x\) for all the different values of \(n\) as shown in Figure 1. The equation above is called an \(n\)th-order Maclaurin polynomial and can be used to approximate any arbitrary function \(f(x)\) so long as \(f(x)\) is smooth and continuous. Notice that the more terms \(n\) we use in our approximation \(g_n(x)\), the better the approximation is. Furthermore, as the number of terms \(n\) approaches infinity, the approximation becomes exact. Let's take the limit of both sides of the equation above as \(n→∞\):
$$\lim_{n→∞}g_n(x)=f(0)+f'(0)x+\frac{f''(0)}{2}x^2+...+\frac{f^{(n)}(0)}{n!}x^n+....\tag{4}$$
We can rewrite Equation (4) more compactly by using summation notation to get
$$\lim_{n→∞}g_n(x)=\lim_{n→∞}\sum_{i=0}^n\frac{f^{(i)}(0)x^i}{i!}.\tag{5}$$
Let's define the quantity \(g(x)\) as \(g(x)≡ \lim_{n→∞}g_n(x)\) simplifying Equation (5) to
$$g(x)=\lim_{n→∞}\sum_{i=0}^n\frac{f^{(i)}(0)x^i}{i!}.\tag{6}$$
As mentioned earlier, as the number of terms in the approximation becomes infinite, \(g(x)\) becomes equal to \(f(x)\). Equation (6) is called the
Maclaurin series.
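To see the convergence numerically, here is a short Python sketch (not from the original lesson) that evaluates the \(n\)th-order Maclaurin polynomial of Equation (4) for \(f(x)=\sin x\), whose derivatives at \(0\) cycle through \(0, 1, 0, -1\); the evaluation point \(x=1.5\) is arbitrary.

```python
import math

def maclaurin(derivs_at_0, x, n):
    # n-th order Maclaurin polynomial g_n(x) from Equation (4),
    # given the values f(0), f'(0), ..., f^(n)(0).
    return sum(derivs_at_0[k] * x**k / math.factorial(k) for k in range(n + 1))

# f(x) = sin(x): its derivatives at 0 cycle through 0, 1, 0, -1, ...
derivs = [(0, 1, 0, -1)[k % 4] for k in range(20)]

x = 1.5
for n in (1, 3, 5, 9, 15):
    print(n, maclaurin(derivs, x, n), math.sin(x))   # approaches sin(1.5) as n grows
```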
This article is licensed under a CC BY-NC-SA 4.0 license.
Sources: Khan Academy, Wikipedia |
The fundamental thermodynamic equations follow from five primary thermodynamic definitions and describe internal energy, enthalpy, Helmholtz energy, and Gibbs energy in terms of their natural variables. Here they will be presented in their differential forms.
Introduction
The fundamental thermodynamic equations describe the thermodynamic quantities U, H, G, and A in terms of their natural variables. The term "natural variable" simply denotes a variable that is one of the convenient variables to describe U, H, G, or A. When considered as a whole, the four fundamental equations demonstrate how four important thermodynamic quantities depend on variables that can be controlled and measured experimentally. Thus, they are essentially equations of state, and using the fundamental equations, experimental data can be used to determine sought-after quantities like \(G\) or \(H\).
First Law of Thermodynamics
The first law of thermodynamics is represented below in its differential form
\[ dU = đq+đw \]
where
\(U\) is the internal energy of the system, \(q\) is heat flow of the system, and \(w\) is the work of the system.
The "đ" symbol represent
inexact differentials and indicates that both \(q\) and \(w\) are path functions. Recall that \(U\) is a state function. The first law states that internal energy changes occur only as a result of heat flow and work done.
It is assumed that w refers only to PV work, where
\[ w = -\int{pdV}\]
The fundamental thermodynamic equation for internal energy follows directly from the first law and the principle of Clausius:
\[ dU = đq + đw\] \[ dS = \dfrac{\delta q_{rev}}{T} \]
we have
\[ dU = TdS + \delta w\]
Since only \(PV\) work is performed,
\[ dU = TdS - pdV \label{DefU}\]
The above equation is the fundamental equation for \(U\) with natural variables of entropy \(S\) and volume \(V\).
Principle of Clausius
The
Principle of Clausius states that the entropy change of a system is equal to the ratio of heat flow in a reversible process to the temperature at which the process occurs. Mathematically this is written as
\[ dS = \dfrac{\delta q_{rev}}{T}\]
where
\(S\) is the entropy of the system, \(q_{rev}\) is the heat flow of a reversible process, and \(T\) is the temperature in Kelvin.

Enthalpy
Mathematically, enthalpy is defined as
\[ H = U + pV \label{DefEnth}\]
where \(H\) is enthalpy of the system, \(p\) is pressure, and \(V\) is volume. The fundamental thermodynamic equation for enthalpy follows directly from its definition (Equation \(\ref{DefEnth}\)) and the fundamental equation for internal energy (Equation \(\ref{DefU}\)):
\[ dH = dU + d(pV)\] \[ = dU + pdV + Vdp\] \[ dU = TdS - pdV\] \[ dH = TdS - pdV + pdV + Vdp\] \[ dH = TdS + Vdp\]
The above equation is the fundamental equation for H. The natural variables of enthalpy are S and p, entropy and pressure.
Gibbs Energy
The mathematical description of Gibbs energy is as follows
\[ G = U + pV - TS = H - TS \label{Defgibbs}\]
where \(G\) is the Gibbs energy of the system. The fundamental thermodynamic equation for Gibbs energy follows directly from its definition (Equation \(\ref{Defgibbs}\)) and the fundamental equation for enthalpy derived above:
\[ dG = dH - d(TS)\] \[ = dH - TdS - SdT\]
Since
\[ dH = TdS + Vdp\]
\[ dG = TdS + Vdp - TdS - SdT\]
\[ dG = Vdp - SdT\]
The above equation is the fundamental equation for G. The natural variables of Gibbs energy are p and T, pressure and temperature.
Helmholtz Energy
Mathematically, Helmholtz energy is defined as
\[ A = U - TS \label{DefHelm}\]
where \(A\) is the Helmholtz energy of the system, which is often written as the symbol \(F\). The fundamental thermodynamic equation for Helmholtz energy follows directly from its definition (Equation \(\ref{DefHelm}\)) and the fundamental equation for internal energy (Equation \(\ref{DefU}\)):
\[ dA = dU - d(TS)\] \[ = dU - TdS - SdT\]
Since
\[ dU = TdS - pdV\]
\[ dA = TdS - pdV -TdS - SdT\]
\[ dA = -pdV - SdT\]
The above equation is the fundamental equation for A with natural variables of \(V\) and \(T\). For the definitions to hold, it is assumed that
only PV work is done and that only reversible processes are used. These assumptions are required for the first law and the principle of Clausius to remain valid. Also, these equations do not include \(n\), the number of moles, as a variable. When \(n\) is included, the equations appear different, but the essence of their meaning is captured without including the n-dependence.

Chemical Potential
The fundamental equations derived above were not dependent on changes in the amounts of species in the system. Below the n-dependent forms are presented1,4.
\[ dU = TdS - PdV + \sum_{i=1}^{N}\mu_idn_i \] \[ dH = TdS + VdP + \sum_{i=1}^{N}\mu_idn_i \] \[ dG = -SdT + Vdp + \sum_{i=1}^{N}\mu_idn_i \] \[ dA = -SdT - PdV + \sum_{i=1}^{N}\mu_idn_i\]
where \(\mu_i\) is the chemical potential of species \(i\) and \(dn_i\) is the change in number of moles of substance \(i\).

Importance/Relevance of Fundamental Equations
The differential fundamental equations describe U, H, G, and A in terms of their natural variables. The natural variables become useful in understanding not only how thermodynamic quantities are related to each other, but also in analyzing relationships between measurable quantities (i.e. P, V, T) in order to learn about the thermodynamics of a system. Below is a table summarizing the natural variables for U, H, G, and A:
Thermodynamic Quantity | Natural Variables
U (internal energy) | S, V
H (enthalpy) | S, P
G (Gibbs energy) | T, P
A (Helmholtz energy) | T, V

Maxwell Relations
The fundamental thermodynamic equations are the means by which the Maxwell relations are derived
1,4. The Maxwell Relations can, in turn, be used to group thermodynamic functions and relations into more general "families"2,3. See the sample problems and the Maxwell Relation section for details.

References
1. DOI: 10.1063/1.1749582
2. DOI: 10.1063/1.1749549
3. DOI: 10.1103/PhysRev.3.273
4. A Treatise on Physical Chemistry, 3rd ed.; Taylor, H. S. and Glasstone, S., Eds.; D. Van Nostrand Company: New York, 1942; Vol. 1; pp 454-485.

Problems
1. If the assumptions made in the derivations above were not made, what effect would that have? Try to think of examples where these assumptions would be violated. Could the definitions, principles, and laws used to derive the fundamental equations still be used? Why or why not?
2. For what kind of system does the number of moles not change? This said, do the fundamental equations without n-dependence apply to a wide range of processes and systems?
3. Derive the Maxwell Relations.
4. Derive the expression
\[ \left (\dfrac{\partial H}{\partial P} \right)_{T,n} = -T \left(\dfrac{\partial V}{\partial T} \right)_{P,n} +V \]
Then apply this equation to an ideal gas. Does the result seem reasonable?
5. Using the definition of Gibbs energy and the conditions observed at phase equilibria, derive the Clapeyron equation.

Answers
1. If it was not assumed that PV-work was the only work done, then the work term in the first law of thermodynamics equation would include other terms (e.g., for electrical work, mechanical work). If reversible processes were not assumed, the Principle of Clausius could not be used. One example of such a situation could be the movement of charged particles towards a region of like charge (electrical work) or an irreversible process like combustion of hydrocarbons or friction.
2. In general, a closed system of non-reacting components would fit this description. For example, the number of moles would not change for a closed system in which a gas is sealed (to prevent leaks) in a container and allowed to expand or contract.
3. See the Maxwell Relations section.
4. \(\left(\dfrac{\partial H}{\partial P}\right)_{T,n} = 0\) for an ideal gas; a small symbolic check is given at the end of this section. Since there are no interactions between ideal gas molecules, changing the pressure will not involve the formation or breaking of any intermolecular interactions or bonds.
5. See the third outside link.

Contributors
Andreana Rosnik, Hope College |
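As a quick check of the answer to Problem 4, here is a minimal symbolic sketch for an ideal gas, where \(V = nRT/P\); the use of Python with sympy is just one convenient choice of tool:

```python
from sympy import symbols, diff, simplify

# Ideal gas: V = n*R*T/P. The identity (dH/dP)_{T,n} = -T*(dV/dT)_{P,n} + V should vanish.
n, R, T, P = symbols('n R T P', positive=True)
V = n * R * T / P
dH_dP = -T * diff(V, T) + V
print(simplify(dH_dP))  # prints 0
```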
Article
Keywords: $M$-estimator; generalized linear models; pseudolinear models
Summary: Real valued $M$-estimators $\hat{\theta }_n:=\min \sum _1^n\rho (Y_i-\tau (\theta ))$ in a statistical model with observations $Y_i\sim F_{\theta _0}$ are replaced by $\mathbb{R}^p$-valued $M$-estimators $\hat{\beta }_n:=\min \sum _1^n\rho (Y_i-\tau (u(z_i^T\,\beta )))$ in a new model with observations $Y_i\sim F_{u(z_i^T\beta _0)}$, where $z_i\in \mathbb{R}^p$ are regressors, $\beta _0\in \mathbb{R}^p$ is a structural parameter and $u:\mathbb{R}\rightarrow \mathbb{R}$ is a structural function of the new model. Sufficient conditions for the consistency of $\hat{\beta }_n$ are derived, motivated by the sufficiency conditions for the simpler “parent estimator” $\hat{\theta }_n$. The result is a general method of consistent estimation in a class of nonlinear (pseudolinear) statistical problems. If $F_\theta $ has a natural exponential density $\mathrm{e}^{\theta x-b(\theta )}$ then our pseudolinear model with $u=(g\circ \mu )^{-1}$ reduces to the well known generalized linear model, provided $\mu (\theta )= {\mathrm d}b(\theta )/{\mathrm d}\theta $ and $g$ is the so-called link function of the generalized linear model. General results are illustrated for special pairs $\rho $ and $\tau $ leading to some classical $M$-estimators of mathematical statistics, as well as to a new class of generalized $\alpha $-quantile estimators.
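To make the setup concrete, here is a small numerical sketch of such a pseudolinear $M$-estimate with assumed choices $\rho(t)=t^2$, $\tau$ the identity and $u=\exp$; the simulated data, parameters and optimizer below are illustrative only and are much simpler than the general setting of the article:

```python
import numpy as np
from scipy.optimize import minimize

# beta_hat = argmin_beta sum_i rho(Y_i - tau(u(z_i^T beta)))
rng = np.random.default_rng(0)
n, p = 200, 3
Z = rng.normal(size=(n, p))             # regressors z_i
beta0 = np.array([0.5, -0.3, 0.2])      # "true" structural parameter (made up)
Y = rng.poisson(np.exp(Z @ beta0))      # observations with mean u(z_i^T beta0)

u = np.exp                              # structural function
rho = lambda t: t**2                    # contrast function

def objective(beta):
    return np.sum(rho(Y - u(Z @ beta)))

beta_hat = minimize(objective, x0=np.zeros(p), method="Nelder-Mead").x
print(beta_hat)                         # should land reasonably close to beta0
```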
Equilibrium and Detailed Balance

Equilibrium has a very precise meaning in statistical physics, which also applies to biology. Equilibrium describes the average behavior (averaged over many systems under identical conditions) in which there is no net flow of material, probability or reactions. Equilibrium is not static, because each individual system undergoes its ordinary behavior/dynamics. There will be no net flow in any "space" examined: real-space, conformation space, or population space. Equilibrium can only occur under fixed conditions (e.g., constant temperature and total mass) when there is no addition or removal of energy or material from the system. The requirement for "no net flow of material, probability or reactions" is embodied in the condition for detailed balance.

Detailed Balance
Detailed balance is the balance of flows between any (and every) pair of states you care to define. Typically, one thinks of "detailed" infinitesimal states, but a balance of flows among tiny states implies a balance of flows among global states which are groups of tiny states.
"Flow" refers to the motion of material or trajectories/probability, depending on the situation at hand.
In this schematic, $i$ and $j$ are any states, and the $k$'s are the rates between them. If we have $N = \sum_i N_i$ equilibrium systems with $N_i$ systems in each state $i$, then detailed balance is formulated as

$$N_i \, k_{ij} = N_j \, k_{ji} \qquad (1)$$

In a solution. As always in equilibrium, we have (at least conceptually) a large number of copies of our system. If we consider any two sub-volumes ($i$ and $j$) in just one of these systems, some set of molecules will move from one region to the other in a given time interval. However, in our equilibrium set of systems, there will be other systems in which the opposite flow occurs. Averaging over all systems, there is no net flow of any type of molecule between any two sub-volumes in equilibrium. This is detailed balance. (See also the time-averaging perspective, below.)

In a chemical reaction. Normally, we distinguish "products" and "reactants", but equilibrium largely abolishes this distinction. As described above for the solution case, if we have an equilibrium set of many chemical-reaction systems, an equal number will be proceeding in the forward (say, $i$ to $j$) and reverse ($j$ to $i$) directions. Although, nominally, a reaction may seem to prefer a certain direction, in equilibrium that just means that the products of the favored direction will be present in greater quantity (e.g., $N_j \gg N_i$) - even though the forward and reverse flows stay the same as in (1) because the rates would be very different ($k_{ji} \ll k_{ij}$).

In a conformation space of a single molecule. In an equilibrium set of molecules with, say, two conformational states A and B, there will be an equal number of A-to-B as B-to-A transitions in any given time interval. If there are many states, then there will be a balance between all pairs of states $i$ and $j$ as given in (1).

Time vs. Ensemble Averaging
It is useful to consider the relation between "ensemble averaging" (e.g., averaging over the set of equilibrium systems described above) and "time averaging". Time-averaging is just what you would guess: averaging behavior (e.g., values of a quantity of interest) over a long period of time.
In equilibrium,
time averaging and ensemble averaging will yield the same result. To see this, consider a solution containing many molecules diffusing around and perhaps exhibiting conformational motions as well. Assume the system has been equilibrating for a time much longer than any of its intrinsic timescales (inverse rates). Because finite-temperature motion in a finite system is inherently stochastic, over a long time each molecule will visit different regions of the container and also different conformations - in the same proportion as every other molecule. If we take a snapshot at any given time of this equilibrium system, the "ensemble" of molecules in the system will exhibit the same distribution of positions and conformations as a long single trajectory of any individual molecule. This has to be true because the snapshot itself results from the numerous stochastic trajectories of the molecules that have evolved over a long time.
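The balance of flows in Equation (1) is easy to check numerically. Below is a minimal sketch of an ensemble of two-state systems with made-up per-step transition probabilities; after equilibration, the A-to-B and B-to-A flows are nearly equal even though the two states are populated very unequally:

```python
import numpy as np

rng = np.random.default_rng(1)
k_ab, k_ba = 0.02, 0.06            # made-up per-step transition probabilities
n_systems, n_steps = 50_000, 1500  # many copies of the system, long equilibration

state = np.zeros(n_systems, dtype=int)          # everyone starts in state A (0)
for _ in range(n_steps):
    r = rng.random(n_systems)
    go_ab = (state == 0) & (r < k_ab)
    go_ba = (state == 1) & (r < k_ba)
    state[go_ab] = 1
    state[go_ba] = 0

# One more step: count the flows in each direction across the ensemble.
r = rng.random(n_systems)
flow_ab = np.sum((state == 0) & (r < k_ab))
flow_ba = np.sum((state == 1) & (r < k_ba))
print(flow_ab, flow_ba)   # nearly equal: N_A * k_AB ~ N_B * k_BA, as in Equation (1)
```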
Unphysical Models Cannot Equilibrate

Although every physical system that is suitably isolated will reach a state of equilibrium, that does not mean that every model made by scientists can properly equilibrate. In fact, many common models of biochemistry exhibit "irreversible" steps - in which the reverse of some step never occurs - and could never satisfy detailed balance. The Michaelis-Menten model of catalysis (above left) is an irreversible model. Such model irreversibility typically represents the observation that the forward rate exceeds the reverse rate so greatly that the reverse process can safely be ignored. This may be true in some cases, but there are numerous cases in biochemistry where reversibility is critical, such as typical binding processes (unbinding is needed to terminate a signal) and in ATP synthase (which can make ATP or pump protons depending on conditions).
For corrected (physically possible) versions of the cycles depicted above, see the discussion of cycles.
References R. Phillips et al., Physical Biology of the Cell,(Garland Science, 2009). D.M. Zuckerman, Statistical Physics of Biomolecules: An Introduction,(CRC Press, 2010). |
An integral is useful for finding the area underneath a function. Let \(f(x)\) be any arbitrary function such that it is smooth and continuous at every point. To find the area underneath \(f(x)\), we must go through several steps. First, we'll start off by drawing an \(n\) (where \(n\) is any positive integer) number of rectangles of equal width underneath \(f(x)\) as illustrated in Figure 1. What is the total area of
all the rectangles? The area of the first rectangle is \(A_1=f(x_1)(x_2-x_1)\); the area of the second rectangle is \(A_2=f(x_2)(x_3-x_2)\); and the area of the \(n\)th rectangle is
\(A_n=f(x_n)(x_{n+1}-x_n)\). Since every rectangle has the same width, it follows that \(x_2-x_1=x_3-x_2=\dots=x_{n+1}-x_n=Δx\). To find the total area of all the rectangles, let's add up the area of each rectangle:
Figure 1
$$A=A_1+A_2+...+A_n=f(x_1)Δx+ f(x_2)Δx+...+ f(x_n)Δx=\sum_{i=1}^nf(x_i)Δx.\tag{1}$$
As you can see visually in the animation in Figure 2, as the number of rectangles \(n\) increases, the area \(A\) becomes closer and closer to equaling the exact area underneath the curve. Using Equation (1), let's take the limit as \(n→∞\) to get
$$\lim_{n→∞}\sum_{i=1}^nf(x_i)Δx.\tag{2}$$
Let's review the notion of a limit that we covered in an earlier lesson. The value that the limit \(\lim_{z→c}g(z)\) is equal to is the value that \(g(z)\) gets closer and closer to as \(z→c\). Take for example the limit \(\lim_{x→2}x^2\) that we looked at in a previous lesson. The value that this limit is equal to is the value that \(x^2\) gets closer and closer to as \(x→2\). We showed that this value is \(4\). Similarly, the value of the limit \(\lim_{n→∞}\sum_{i=1}^nf(x_i)Δx\) is the value that \(\sum_{i=1}^nf(x_i)Δx\) gets closer and closer to, which is the exact area underneath the curve \(f(x)\). Thus, the limit must equal the area underneath \(f(x)\) and
$$\text{Area underneath } f(x)=\lim_{n→∞}\sum_{i=1}^nf(x_i)Δx.\tag{3}$$
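To see Equation (3) at work numerically, here is a minimal sketch; the function \(f(x)=x^2\) on \([0,1]\), with exact area \(1/3\), is just an assumed example:

```python
import numpy as np

f = lambda x: x**2
a, b = 0.0, 1.0

for n in (10, 100, 1000, 10000):
    x = np.linspace(a, b, n, endpoint=False)  # left endpoints x_1, ..., x_n
    dx = (b - a) / n
    print(n, np.sum(f(x) * dx))
# 0.285, 0.32835, 0.33283, 0.33328 -> approaching the exact area 1/3
```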
Let's see if there is a simpler way of rewriting the right-hand side of Equation (3). Not to sound too annoyingly repetitive but, again, the limit \(\lim_{n→∞}\sum_{i=1}^nf(x_i)Δx\) is equal to the thing that \(\sum_{i=1}^nf(x_i)Δx\) gets closer and closer to equaling as \(n→∞\). But if you think about it for a moment, the following must be true: if \(n→∞\), then the number \(n\) of the terms \(f(x_i)Δx\) is getting closer and closer to infinity; thus, the finite sum \(\sum_{i=1}^n\) (of an \(n\) number of terms) is getting closer and closer to becoming an infinite sum. Let's represent an infinite sum (that's to say, a sum of infinitely many terms) by the symbol "\(∫\)." As \(n→∞\), it is also true that the width \(Δx\) is getting closer and closer to becoming infinitely small. Let's represent an infinitely small \(Δx\) by the symbol "\(dx\)."
With all that said, I'd like to just make a few remarks about the variable \(x\), then about the expression \(\int{f(x)dx}\), and then we'll see how that ties in with our discussion of the limit \(\lim_{n→∞}\sum_{i=1}^nf(x_i)Δx\). Since \(x\) is a continuous variable, it can take on an infinite number of values: such as the numbers \(2\), \(π\), \(3.001\), \(3.00001\), \(3.00000001\), etc. Thus, there are an infinite number of y-values along the curve \(f(x)\): including \(f(2)\), \(f(π)\), \(f(3.001)\), etc. The term \(f(x)dx\) is the area of an infinitely skinny rectangle; and the expression \(\int{f(x)dx}\) is the sum of an infinite number of the terms \(f(x)dx\). \(\int{f(x)dx}\) gives the infinite sum of all the areas \(f(x)dx\) of (infinitely) skinny rectangles. What is \(\sum_{i=1}^nf(x_i)Δx\) getting closer and closer to equaling as \( n→∞\)? Well, clearly it's getting closer and closer to being an infinite sum, and \(x_i\) and \( Δx\) are approaching \(x\) and \(dx\). Thus, the limit \(\lim_{n→∞}\sum_{i=1}^nf(x_i)Δx\) must also equal that thing and
$$\int{f(x)dx}=\lim_{n→∞}\sum_{i=1}^nf(x_i) Δx.\tag{4}$$
The expression \(\int{f(x)dx}\) is called "the integral of \(f(x)\) with respect to the variable \(x\)" and it is equal to two things: first, the area underneath \(f(x)\); second, it is also the sum of the infinite number of the terms \(f(x)dx\) and is the infinite sum of the area of infinitely many, infinitely skinny rectangles. My apologies, the latter is quite a mouthful. But hopefully this lesson helped give you a better idea of what an integral actually is. In the next several lessons, we'll investigate techniques for solving integrals - that is, finding the area underneath various different functions \(f(x)\). |
Cantor's diagonal method shows that the set $S=\{x\in \Bbb R|x \in [0,1)\}$ is uncountably infinite, because there is no bijection between the set $S$ and the set of natural number $\Bbb N$.
I came up with this method of mapping the set $S$ to $\Bbb N$. It should be wrong, but I don't know where.
We start with any $x \in S$, and we will build the whole set $S$ using the following method. First, for example, choose $x$ such that $$x=0.12345678900000......$$ Next, we can choose any digit of $x$, and change it such that $0 \rightarrow 1$, $1 \rightarrow 2$, $2 \rightarrow 3$, $...$, $8 \rightarrow 9$, $9 \rightarrow 0$, to make a new real number $x_1\in S$. Because the list of digits of $x$ is countably infinite, there are countably infinitely many ways of choosing a digit, making the set $\chi_1$ of all real numbers that differ from $x$ by 1 digit a countably infinite set.
For any $x_1^i \in \chi_1$ (the subscript "1" stands for "differs from $x$ by 1 digit"), we can follow the previous method to make a set of real numbers differing from $x$ by 2 digits (actually $x$ itself is in this set too, but we don't care about duplication). There are countably infinitely many such sets, denoted $\chi_2^i$, built from each number $x_1^i$ of the set $\chi_1$.
At this point, we might attempt to create the union of all the sets that we have just created: $$U_2=\chi_1 \cup \biggl( \bigcup_{i=1,\\i\in \Bbb N}^{\infty}\chi_2^i\biggr)$$ $U_2$ is countably infinite.
We can now create the set $U_3$ as the countably infinite union of all countably infinite sets $\chi_3^i$. Then $U_4$, $U_5$, ...
For any number $r \in S$, it differs from $x$ in at most countably infinitely many digits. Therefore, by repeating this procedure countably infinitely many times, we can construct a set that is a superset of $S$: $$U_\infty = \bigcup_{i\in \Bbb N} U_i \supset S$$ $U_\infty$ is a proper superset of $S$, because it contains infinitely many duplicates of any number $r \in S$. But if $U_\infty$ is countably infinite, then $S$ is also countably infinite.
Please help me see where the error in my "proof" is. |
COMSOL 4.4 Brings Particle-Field and Fluid-Particle Interactions
The trajectories of particles through fields can often be modeled using a one-way coupling between physics interfaces. In other words, we can first compute the fields, such as an electric field, magnetic field, or fluid velocity field, and then use these fields to exert forces on the particles using the Particle Tracing Module. If the number density of the particles is very large, however, the particles begin to noticeably perturb the fields around them, and a two-way coupling is needed — that is, the fields affect the motion of the particles, and the particle trajectories affect the fields. For example, charged particles act as point sources that affect the electric field around them, and small particles that move through a fluid may drag the fluid with them. Although two-way coupling between particles and fields presents new modeling challenges and is computationally more time-consuming than one-way coupling, new tools available in COMSOL version 4.4 can address many of these challenges by using an efficient, self-consistent approach.
One-Way vs. Two-Way Coupling
Consider the motion of a group of ions or electrons through electric and magnetic fields. To model the system using a one-way coupling, we first solve for the electric and magnetic fields, typically using a stationary or frequency-domain study step. To compute the trajectories of charged particles in these fields, we can then use the
Charged Particle Tracing interface, which solves a second-order ODE for each particle’s position:

$$\frac{d}{dt}\left(m\mathbf{v}\right) = q\left(\mathbf{E} + \mathbf{v}\times\mathbf{B}\right)$$

Here, m\mathbf{v} is the particle’s momentum, q is the particle’s charge, \mathbf{E} is the electric field, and \mathbf{B} is the magnetic flux density. This approach relies on the following assumptions:
The fields are either stationary, change very slowly relative to the motion of the particles, or vary sinusoidally over time. The charged particles have a negligibly small effect on the electric and magnetic fields.
Being able to compute the fields using a stationary or frequency-domain study step is a tremendous time saver, since time-dependent studies involving the Particle Tracing Module often require a very large number of time steps. Several examples of one-way coupling between particles and electromagnetic fields are available in the Model Gallery, including the following:
Magnetic Lens (requires the AC/DC Module)
Particle Tracing in a Quadrupole Mass Spectrometer (requires the AC/DC Module)
Quadrupole Mass Filter (requires the AC/DC Module)
Einzel Lens
Several examples of one-way coupling between fluid velocity fields and particle trajectories are also available, such as the following:
All of these examples follow the same pattern: compute the field using a stationary or frequency-domain study step, then couple the solution to a time-dependent study step for the particle trajectories.
If the particles are numerous enough that they noticeably affect fields in the surrounding domains, we must recompute the fields at each time step to account for the changed positions of the particles. At this point, a two-way coupling between particles and fields is required. Typical examples of systems requiring a two-way coupling are ion and electron beams, electron guns, and sprays of particles entering a crossflow. In these situations, we must often compute the space charge density due to a group of charged particles or the volume force exerted by particles on a fluid.
Implementing Point Sources
The particles used in the physics interfaces of the Particle Tracing Module are treated as point masses in many respects. Although some pre-defined forces, such as the drag force, are size-dependent, the particles are considered infinitesimally small for the purpose of determining when they collide with walls. In addition, particles immersed in a fluid don’t displace any volume of fluid. Because each particle is treated as a point mass, the charge density or volume force due to the presence of a particle reaches a singularity at that particle’s location.
In some instances, you can improve the accuracy of a solution close to a singularity using adaptive mesh refinement; see, for example, Implementing a Point Source Using Poisson’s Equation in the Model Gallery. However, this is not a viable option for managing singularities due to particles for several reasons: there can be a very large number of singularities, the particles are constantly moving, and they generally don’t coincide with nodes of the finite element mesh. Instead, the singularities are avoided by distributing the space charge density or volume force due to each particle over the mesh element the particle is currently in. Although this means that the solution is somewhat mesh-dependent, the error introduced is typically very small if the number of particles is sufficiently large.
Modeling Steady-State Systems
In the context of particle-field or fluid-particle interactions, we take
steady-state to mean that the fields do not change over time. For example, an ion beam would be considered to operate under steady-state conditions if the electric field at any point remains constant, typically as a result of a constant ion flux. A pulsed beam, on the other hand, would not be considered a steady-state system.
A unique feature of steady-state systems is that they allow the particle trajectories and fields to be computed using a self-consistent method that is more efficient than computing the entire solution with a time-dependent study. This method involves the set-up of an iterative loop of different solver types, as we will see in the following example.
Creating a Self-Consistent Model of an Electron Beam with COMSOL 4.4
To illustrate the available solution techniques for steady-state systems with two-way coupling between particles and fields, consider a beam of electrons that is released into a large, open area at constant user-defined current. In order to model a large, open area, we add an Infinite Element Domain around the exterior of the modeling domain, represented by the highlighted areas in the image below. The circle shown at one end of the cylinder will be used to define an
Inlet feature for electrons.
We expect that the electrons in the beam will repel each other, causing the beam to become wider as it propagates forward. We will assume that the electrons are non-relativistic, so that the force on the beam electrons due to the beam’s magnetic field is negligibly small compared to the force due to the beam’s electric field. We seek a self-consistent solution to the following equations of motion:
$$\begin{aligned} -\nabla \cdot \epsilon_0 \nabla V &= \sum_{i=1}^N q\,\delta \left({\mathbf{r}}-{\mathbf{q}}_i\right)\\ \frac{d}{dt}\left(m{\mathbf{v}}\right) &= -q\nabla V \end{aligned}$$
The first equation is a Poisson equation for the electric potential, with a space charge density term due to the presence of charged particles. Here, \delta is the Dirac delta function, N is the total number of particles, \mathbf{r} is the position vector of a point in space, and \mathbf{q}_i is the position of the ith particle. The second equation is the equation of motion of a particle subjected to an electric force. Solving both equations of motion in the same time-dependent study would be extremely time-consuming, and would require a very large number of particles to be released at small, regular time intervals to ensure that the desired beam current is maintained.
An alternative solution method involves a physics interface property called the
Release type, available for the Charged Particle Tracing and Particle Tracing for Fluid Flow interfaces in COMSOL 4.4. The default setting, Transient, is the correct choice for most applications. Changing the Release type to “Static” affects the available settings of particle release features, such as the Inlet, and changes the way the Particle-Field Interaction and Fluid-Particle Interaction features work.
Working with the Static release type requires us to change our interpretation of what the model particles represent. Rather than representing a single particle or group of particles at a specific point in space, each model particle now represents a certain number of particles per unit time. The number of real particles per unit time represented by each model particle is computed so that each Inlet, Release, or Release from Grid feature provides a user-defined charged particle current or mass flow rate (for the
Charged Particle Tracing and Particle Tracing for Fluid Flow interfaces, respectively).
To accompany this new interpretation of the model particles, the space charge density, \rho, due to the presence of charged particles is now computed as:

$$\frac{\partial \rho}{\partial t} = \sum_{i=1}^N f_{\textrm{rel}}\, q\, \delta\left(\mathbf{r}-\mathbf{q}_i\right)$$
Here, f_{\textrm{rel}} is the number of ions or electrons per second represented by each model particle so that the user-defined current is obtained. Similarly, when modeling fluid-particle interactions, the volume force, \mathbf{F}_V, that is exerted by particles on the fluid is computed as:

$$\frac{\partial \mathbf{F}_V}{\partial t} = -\sum_{i=1}^N f_{\textrm{rel}}\, \mathbf{F}_{\textrm{D}}\, \delta\left(\mathbf{r}-\mathbf{q}_i\right)$$
where {\mathbf{F}}_{\textrm{D}} is the drag force on the particle. The time derivative on the left-hand side of each equation indicates that instead of creating a contribution to the space charge density at one location in space, each model particle leaves a trail of space charge or volume force along its trajectory, representing the combined effect of all particles that follow that trajectory. As a result, only a single release of model particles at time t=0 is needed to compute the space charge density due to an electron beam operating at constant current.
When computing the space charge density due to a group of particles, a time-dependent solver with a fixed maximum time step is recommended. The maximum time step should be small enough so that, on average, each particle spends several time steps inside each mesh element. In addition, the number of model particles should be large compared to the number of mesh elements in a cross section of the beam. These two guidelines ensure that the particles don’t “miss” any elements inside the beam, thereby creating non-physical gaps in the space charge distribution.
Creating a Solver Loop
So far, we’ve seen that a single release of particles can be used to compute the space charge density due to a continuous beam of charged particles. However, the resulting space charge density must still be coupled back to a Poisson equation for the electric potential. Changes to the electric potential might in turn perturb the particle trajectories. To reach a self-consistent solution, we can compute the electric potential using a stationary solver, then use this potential to compute the particle trajectories and space charge density using a time-dependent solver, then use the space charge density to recompute the electric potential, and so on. This type of iterative sequence can be implemented in COMSOL by adding
For and End For nodes to the solver sequence. Any solvers in-between these two nodes will be executed a number of times specified by the user in the For node settings. New to COMSOL Multiphysics in version 4.4, the For and End For nodes give the user sophisticated tools to set up two-way coupling between physics interfaces that require different types of solvers.
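The structure of such a loop is easy to sketch outside of COMSOL. The following is a purely conceptual, dimensionless 1D toy in Python/NumPy, with all parameters made up and no claim of equivalence to the COMSOL model: a stationary Poisson solve alternates with a single static-release particle push that deposits a charge trail, in the spirit of the For/End For sequence described above.

```python
import numpy as np

nx = 101
x = np.linspace(0.0, 1.0, nx)
dx = x[1] - x[0]
n_model = 200                 # model particles released once at t = 0
f_rel = 1.0 / n_model         # weight so the "beam" carries unit current
rho = np.zeros(nx)            # space charge density from the previous iteration

for outer in range(5):                          # the iterative "For ... End For" loop
    # 1) Stationary step: solve -V'' = rho with V = 0 at both ends.
    A = (np.diag(-2.0 * np.ones(nx)) +
         np.diag(np.ones(nx - 1), 1) + np.diag(np.ones(nx - 1), -1)) / dx**2
    A[0, :] = 0.0
    A[-1, :] = 0.0
    A[0, 0] = 1.0
    A[-1, -1] = 1.0
    b = -rho
    b[0] = 0.0
    b[-1] = 0.0
    V = np.linalg.solve(A, b)
    E = -np.gradient(V, dx)

    # 2) "Time-dependent" step: push the particles once through the frozen field,
    #    letting each one leave a charge trail weighted by f_rel along its path.
    rho = np.zeros(nx)
    for _ in range(n_model):
        xp, vp = 0.0, 1.0                       # initial position and speed
        dt = 0.25 * dx / vp
        while 0.0 <= xp < 1.0 and vp > 0.0:
            i = min(int(xp / dx), nx - 2)
            vp += E[i] * dt                     # unit charge-to-mass ratio assumed
            xp += vp * dt
            rho[i] += f_rel * dt / dx           # accumulated charge trail

    print(f"iteration {outer}: max V = {V.max():.4f}")
```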
The self-consistent solution confirms our expectations: the electron beam diverges due to its self potential. In the image below, the lines represent particle trajectories that begin in the background and move to the foreground. The shading of each line represents the model particle’s radial displacement from its original position; the slice plot shows the beam potential; and the arrows show the electric force acting on the beam due to self potential. The result is in close agreement with analytical expressions for the shape of a non-relativistic charged particle beam.
Although the method outlined above is only valid for static fields, it reduces the number of particles required for accurate modeling by several orders of magnitude. The Electron Beam Diverging Due to Self Potential model demonstrates the new
For and End For nodes that can be added to the solver sequence with COMSOL 4.4.

Concluding Thoughts
If the number density of particles is very low, the particles may have a negligibly small effect on electric, magnetic, or fluid velocity fields in the surrounding domain. In this case, computing the field first and then using this field to exert a force on the particles is the most efficient approach.
To accurately model two-way coupling between particles and fields, use a large number of model particles and specify a fixed maximum time step. You may need to increase the number of particles or reduce the time step further after refining the mesh.
The Static release type can be used to model a constant charged particle current or mass flow rate.
If the field is not time-dependent, computing the fields and particle trajectories in separate steps within a solver loop can be much more efficient than including all physics in a single time-dependent study step. |
Welcome to The Riddler. Every week, I offer up problems related to the things we hold dear around here: math, logic and probability. There are two types: Riddler Express for those of you who want something bite-sized and Riddler Classic for those of you in the slow-puzzle movement. Submit a correct answer for either,1 and you may get a shoutout (chosen at random) in next week’s column. If you need a hint, you can try asking me nicely on Twitter.

Riddler Express
From Christopher Dierkes, a lazy day puzzle:
You and I find ourselves indoors one rainy afternoon, with nothing but some loose change in the couch cushions to entertain us. We decide that we’ll take turns flipping a coin, and that the winner will be whoever flips 10 heads first. The winner gets to keep all the change in the couch! Predictably, an enormous argument erupts: We both want to be the one to go first.
What is the first flipper’s advantage? In other words, what percentage of the time does the first flipper win this game?
Riddler Classic
From Sebastian de la Torre, an open road puzzle:
You are driving your car on a perfectly flat, straight road. You are the only one on the road and you can see anything ahead of you perfectly. At time t=0, you are at Point A, cruising along at a speed of 100 kilometers per hour, which is the speed limit for the whole road. You want to reach Point C, exactly 4 kilometers ahead, in the shortest time possible.
But, at Point B, 2 kilometers ahead of you, there is a traffic light.
At time t=0, the light is green, but you don’t know how long it has been green. You do know that at the beginning of each second, there is a 1 percent chance that the light will turn yellow. Once it turns yellow, it remains yellow for 5 seconds and then turns red for 20 seconds. Your car can accelerate or decelerate at a maximum rate of 2 meters per second-squared. You must always drive at or below the speed limit. You can pass through the intersection when the traffic light is yellow, but not when it is red.
What is the best strategy to reach your destination as soon as possible?
Solution to last week’s Riddler Express
Congratulations to 👏 Eyal Rosin 👏 of Rosh Ha’ayin, Israel, winner of last week’s Express puzzle!
It’s your 30th birthday, and your friends got you a cake with 30 lit candles. You try to blow them out, but each time you blow you successfully extinguish a random number of candles, between one and the number that remain lit. How many blows will it take, on average, to extinguish them all? Very, very close to
four.
More precisely, it will take about 3.994987 blows. Why?
Let’s start with a smaller number of candles and work our way up. Suppose you have a cake with just a single candle. You’ll blow it out in one blow, for sure. Suppose there are two. Half the time you’ll blow them both out in one go, and half the time it’ll take two blows. Let’s make a list:
One candle: 1
Two candles: \((1/2)\cdot 1 + (1/2)\cdot 2 = 1.5\)
Three candles: \((1/3)\cdot 1 + (1/3)\cdot (1+1.5) + (1/3)\cdot (1+1) = 1.8\bar{3}\)
Four candles: \((1/4)\cdot 1 + (1/4)\cdot (1+1.8\bar{3}) + (1/4)\cdot (1+1.5) + (1/4)\cdot (1+1) = 2.08\bar{3}\)
With each additional candle, you have an equal chance of blowing them out in one go and of only snuffing some specific number, leaving some to tackle on the next blow. Notice the pattern! For one candle, the average number of blows is one. For two, it’s 1+1/2. For three, it’s 1+1/2+1/3. For four, it’s 1+1/2+1/3+1/4. And so on. So to get the answer, we simply compute this harmonic sum:
\begin{equation} \sum_{i=1}^{30} \frac{1}{i} \approx 3.994987 \end{equation}
Happy birthday!
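(If you'd rather trust a computer than a harmonic sum, a quick Monte Carlo sketch reproduces the answer; the trial count below is arbitrary.)

```python
import random

def blows(candles=30):
    count = 0
    while candles > 0:
        candles -= random.randint(1, candles)  # extinguish between 1 and all remaining
        count += 1
    return count

trials = 200_000
print(sum(blows() for _ in range(trials)) / trials)  # about 3.995
```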
Solution to last week’s Riddler Classic
Congratulations to 👏 Art Roth 👏 of Skokie, Illinois, winner of last week’s Classic puzzle!
You and I just purchased a nifty 100-sided die at our local game shop. We aren’t quite sure what to do with this new toy, so we invent a simple game. We keep rolling it until it shows a number smaller than the number before. Feeling generous, I give you $1 every time we roll. How much money do you expect to win?
About $2.73.
More precisely, you can expect to win \((100/99)^{100} \approx 2.731999\) dollars. Why? I’ve adapted the excellent solution from reader Sam Elder here:
The idea is similar to this week’s Riddler Express solution above: We set up a recurrence and then compare consecutive values to get an easier recurrence. Suppose the most recent roll was \(n\). Since we don’t care about the rolls that came before the most recent roll anymore, we can simply call the number of expected remaining rolls \(t_n\). For starters, if \(n = 100\), unless we roll another 100, we’re going to stop after one roll. So \(t_{100} = 1 + 1/100 t_{100}\), which means that \(t_{100} = 100/99\). In general, if we roll a number equal to or higher than \(n\), we’re going to keep rolling, but update our latest roll to that number. So, for any \(n\), \(t_n = 1 + (1/100) (t_n + t_{n+1} + \ldots + t_{100})\). We then apply a very similar trick to that in the Express solution and compute that \(t_{n-1} - t_n = 1/100(t_{n-1})\), so \(t_{n-1} = t_n \cdot (100/99)\). So this is just a geometric sequence! In general, \(t_n = (100/99)^{101-n}\). To compute the expected total number of rolls, we have to take into account our first roll and then average all of the \(t_n\)’s. So the total expected number of rolls is \(1 + (1/100) (100/99 + (100/99)^2 + \ldots + (100/99)^{100}) = \\ 1 + (1/100)((100/99)^{101} - 100/99)/(100/99 - 1) = \\ 1 + (99/100)((100/99)^{101} -100/99) = \\ (100/99)^{100}.\)
As the number of sides on the die increases, from 100 toward infinity, your expected winnings approach $
e, where e is Euler’s number—the mathematical constant equal to about 2.71828. You can find a thorough discussion of your expected winnings for dice of various sizes in this great post by Laurent Lessard.
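(And again, a quick simulation backs up the math; the trial count is arbitrary.)

```python
import random

def rolls_until_decrease(sides=100):
    count, last = 1, random.randint(1, sides)
    while True:
        nxt = random.randint(1, sides)
        count += 1
        if nxt < last:          # the game ends on the first smaller roll
            return count        # you earn $1 for every roll, including this one
        last = nxt

trials = 200_000
print(sum(rolls_until_decrease() for _ in range(trials)) / trials)  # about 2.732
```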
Don’t spend all your $2.73 in one place.
Want to submit a riddle?
Email me at [email protected]. |
I've been trying to understand what exactly is meant by parametrisation invariance of the Jeffreys prior.
Already I've read here that invariance is technically not the best term to use, and that it's more a case of covariance. My understanding of covariance is that it describes the property of transforming in a particular known way, which agrees with the `change of variables theorem' that I've often seen invoked in the context of reparametrising a Jeffreys prior.
$$p(\theta) = p(\psi) \left| \frac{d\psi}{d\theta} \right|$$
I presume this comes from setting the probability area of $p(\theta) d\theta$ equal to $p(\psi) d\psi$ (and then essentially dividing by the infinitesimal $d\theta$), though I'm kind of wary of such expressions in the absence of integrals.
My question then is how this gives parametrisation invariance/covariance. For example, the Jeffreys prior $p(\sigma) = \frac{1}{\sigma}$ over the positive reals is known to be invariant/covariant under power transformations. Choosing $\gamma = \sigma^n$ and applying this change of variables theorem however, I get
$$p(\sigma) = p(\gamma) \left| \frac{d\gamma}{d\sigma} \right| = \frac{1}{\sigma^n} n \sigma^{n-1} = \frac{n}{\sigma} \neq p(\sigma)$$
so I must be doing something wrong, unless it's simply a matter of normalisation. |
Unfortunately, various definitions of Henry’s law and the corresponding proportionality constant $H$ or $K_\mathrm H$ exist. (For several definitions and corresponding parameter values, see Sander, R. Compilation of Henry’s law constants (version 4.0) for water as solvent.
Atmos. Chem. Phys. 2015, 15, 4399–4981.) Therefore, it is important to identify the dimensions of the given data. In the question, the proportionality constant is given as Henry volatility $K_\mathrm H$ with
$$K_\mathrm H = 10^5\ \mathrm{atm}$$
Apparently,* since the given Henry volatility $K_\mathrm H$ is expressed in terms of pressure (the unit symbol “atm” stands for “standard atmosphere”, which is an obsolete unit of pressure; the use of this unit is actually deprecated), the used definition is
$$K_\mathrm H=\frac{p_{\ce{N2}}}{x_{\ce{N2}}}$$
where $p_{\ce{N2}}$ is partial pressure of nitrogen and $x_{\ce{N2}}$ is amount-of-substance fraction (the use of the unsystematic name “mole fraction” is deprecated) of nitrogen
in the aqueous phase.
The amount-of-substance fraction $x_{\ce{N2}}$ is defined as
$$x_{\ce{N2}}=\frac{n_{\ce{N2}}}{n}$$
where $n_{\ce{N2}}$ is the amount of substance of nitrogen and $n$ is the total amount of substance. For dilute aqueous solutions, the total amount of substance is approximately equal to the amount of water
$$n\approx n_{\ce{H2O}}$$
which is given as $n_{\ce{H2O}}=10\ \mathrm{mol}$.
Thus, the amount of nitrogen $n_{\ce{N2}}$ can be calculated from the partial pressure of nitrogen $p_{\ce{N2}}$ as
$$\begin{aligned}n_{\ce{N2}}&=n\cdot x_{\ce{N2}}\\&=n\cdot \frac{p_{\ce{N2}}}{K_\mathrm H}\\&\approx n_{\ce{H2O}}\cdot \frac{p_{\ce{N2}}}{K_\mathrm H}\end{aligned}$$
For a mixture of gases, the partial pressure of nitrogen $p_{\ce{N2}}$ is defined as
$$p_{\ce{N2}}=x_{\ce{N2}}\cdot p$$
where $x_{\ce{N2}}$ is the amount-of-substance fraction of nitrogen
in the gaseous phase (as opposed to the above-mentioned parameter in Henry’s law) and $p$ is the total pressure.
Since the total pressure is given as $p=5\ \mathrm{atm}$ and the amount-of-substance fraction of nitrogen in air is given as $x_{\ce{N2}}=0.8$, the partial pressure of nitrogen in air is
$$\begin{aligned}p_{\ce{N2}}&=x_{\ce{N2}}\cdot p\\&=0.8\times5\ \mathrm{atm}\\&=4\ \mathrm{atm}\end{aligned}$$
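Putting the numbers from above together gives the amount of dissolved nitrogen:

$$\begin{aligned}n_{\ce{N2}}&\approx n_{\ce{H2O}}\cdot \frac{p_{\ce{N2}}}{K_\mathrm H}\\&=10\ \mathrm{mol}\times\frac{4\ \mathrm{atm}}{10^5\ \mathrm{atm}}\\&=4\times10^{-4}\ \mathrm{mol}\end{aligned}$$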
*
This use of the unit in the question is actually not permissible. The unit symbol should not be used to provide specific information about the quantity and should never be the sole source of information on the quantity.
Skills to Develop
In this section students will:
Simplify rational expressions.
Multiply rational expressions.
Divide rational expressions.
Add and subtract rational expressions.
Simplify complex rational expressions.
A pastry shop has fixed costs of \($280\) per week and variable costs of \($9\) per box of pastries. The shop’s costs per week in terms of \(x\), the number of boxes made, is \(280 +9x\). We can divide the costs per week by the number of boxes made to determine the cost per box of pastries:

\[\dfrac{280+9x}{x}\]
Notice that the result is a polynomial expression divided by a second polynomial expression. In this section, we will explore quotients of polynomial expressions.
Simplifying Rational Expressions
The quotient of two polynomial expressions is called a rational expression. We can apply the properties of fractions to rational expressions, such as simplifying the expressions by canceling common factors from the numerator and the denominator. To do this, we first need to factor both the numerator and denominator. Let’s start with the rational expression shown.
We can factor the numerator and denominator to rewrite the expression.
Then we can simplify that expression by canceling the common factor \((x+4)\).
Howto: Given a rational expression, simplify it
1. Factor the numerator and denominator.
2. Cancel any common factors.
Simplify \(\dfrac{x^2-9}{x^2+4x+3}\)
Solution
\[\begin{align*} &\dfrac{(x+3)(x-3)}{(x+3)(x+1)}\qquad \text{Factor the numerator and the denominator}\\ &\dfrac{x-3}{x+1}\qquad \text{Cancel common factor } (x+3) \end{align*}\]
Analysis
We can cancel the common factor because any expression divided by itself is equal to \(1\).
Q&A
Can the \(x^2\) term be cancelled in the last example?
No. A factor is an expression that is multiplied by another expression. The \(x^2\) term is not a factor of the numerator or the denominator.
Exercise \(\PageIndex{1}\)
Simplify \(\dfrac{x-6}{x^2-36}\)
Answer
\(\dfrac{1}{x+6}\)
Multiplying Rational Expressions
Multiplication of rational expressions works the same way as multiplication of any other fractions. We multiply the numerators to find the numerator of the product, and then multiply the denominators to find the denominator of the product. Before multiplying, it is helpful to factor the numerators and denominators just as we did when simplifying rational expressions. We are often able to simplify the product of rational expressions.
Howto: Given two rational expressions, multiply them
1. Factor the numerator and denominator.
2. Multiply the numerators.
3. Multiply the denominators.
4. Simplify.
Multiply the rational expressions and show the product in simplest form:
\(\dfrac{(x+5)(x-1)}{3(x+6)}\times\dfrac{(2x-1)}{(x+5)}\)
Solution
\[\begin{align*} &\dfrac{(x+5)(x-1)}{3(x+6)}\times\dfrac{(2x-1)}{(x+5)}\qquad \text{Factor the numerator and denominator.}\\ &\dfrac{(x+5)(x-1)(2x-1)}{3(x+6)(x+5)}\qquad \text{Multiply numerators and denominators}\\ &\dfrac{(x-1)(2x-1)}{3(x+6)}\qquad \text{Cancel common factors to simplify} \end{align*}\]
Exercise \(\PageIndex{2}\)
Multiply the rational expressions and show the product in simplest form:
\(\dfrac{x^2+11x+30}{x^2+5x+6}\times\dfrac{x^2+7x+12}{x^2+8x+16}\)
Answer
\(\dfrac{(x+5)(x+6)}{(x+2)(x+4)}\)
Dividing Rational Expressions
Division of rational expressions works the same way as division of other fractions. To divide a rational expression by another rational expression, multiply the first expression by the reciprocal of the second. Using this approach, we would rewrite \(\dfrac{1}{x}÷\dfrac{x^2}{3}\) as the product \(\dfrac{1}{x}⋅\dfrac{3}{x^2}\). Once the division expression has been rewritten as a multiplication expression, we can multiply as we did before.
Howto: Given two rational expressions, divide them
1. Rewrite as the first rational expression multiplied by the reciprocal of the second.
2. Factor the numerators and denominators.
3. Multiply the numerators.
4. Multiply the denominators.
5. Simplify.
Exercise \(\PageIndex{3}\)
Divide the rational expressions and express the quotient in simplest form:
\[\dfrac{9x^2-16}{3x^2+17x-28}÷\dfrac{3x^2-2x-8}{x^2+5x-14} \nonumber \]
Answer
\(1\)
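A computer algebra system can confirm a cancellation like this one. Here is a minimal sketch using Python's sympy library (the choice of tool is ours):

```python
from sympy import symbols, cancel

x = symbols('x')
expr = (9*x**2 - 16)/(3*x**2 + 17*x - 28) / ((3*x**2 - 2*x - 8)/(x**2 + 5*x - 14))
print(cancel(expr))  # prints 1
```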
Adding and Subtracting Rational Expressions
Adding and subtracting rational expressions works just like adding and subtracting numerical fractions. To add fractions, we need to find a common denominator. Let’s look at an example of fraction addition.
We have to rewrite the fractions so they share a common denominator before we are able to add. We must do the same thing when adding or subtracting rational expressions.
The easiest common denominator to use will be the
least common denominator, or LCD. The LCD is the smallest multiple that the denominators have in common. To find the LCD of two rational expressions, we factor the expressions and multiply all of the distinct factors. For instance, if the factored denominators were \((x+3)(x+4)\) and \((x+4)(x+5)\), then the LCD would be \((x+3)(x+4)(x+5)\).
Once we find the LCD, we need to multiply each expression by the form of \(1\) that will change the denominator to the LCD. We would need to multiply the expression with a denominator of \((x+3)(x+4)\) by \(\dfrac{x+5}{x+5}\) and the expression with a denominator of \((x+4)(x+5)\) by \(\dfrac{x+3}{x+3}\).
Howto: Given two rational expressions, add or subtract them
1. Factor the numerator and denominator.
2. Find the LCD of the expressions.
3. Multiply the expressions by a form of 1 that changes the denominators to the LCD.
4. Add or subtract the numerators.
5. Simplify.
Add the rational expressions: \[\dfrac{5}{x}+\dfrac{6}{y} \nonumber \]
Solution
First, we have to find the LCD. In this case, the LCD will be \(xy\). We then multiply each expression by the appropriate form of \(1\) to obtain \(xy\) as the denominator for each fraction.
\[\begin{align*} &\dfrac{5}{x}\times\dfrac{y}{y}+\dfrac{6}{y}\times\dfrac{x}{x}\\ &\dfrac{5y}{xy}+\dfrac{6x}{xy} \end{align*}\]
Now that the expressions have the same denominator, we simply add the numerators to find the sum.
\[\dfrac{6x+5y}{xy} \nonumber \]
Analysis
Multiplying by \(\dfrac{y}{y}\) or \(\dfrac{x}{x}\) does not change the value of the original expression because any number divided by itself is \(1\), and multiplying an expression by \(1\) gives the original expression.
Subtract the rational expressions: \[\dfrac{6}{x^2+4x+4}-\dfrac{2}{x^2-4}\]
Solution
\[\begin{align*}
&\dfrac{6}{{(x+2)}^2}-\dfrac{2}{(x+2)(x-2)}\qquad \text{Factor}\\ &\dfrac{6}{{(x+2)}^2}\times\dfrac{x-2}{x-2}-\dfrac{2}{(x+2)(x-2)}\times\dfrac{x+2}{x+2}\qquad \text{Multiply each fraction to get LCD as denominator}\\ &\dfrac{6(x-2)}{{(x+2)}^2(x-2)}-\dfrac{2(x+2)}{{(x+2)}^2(x-2)}\qquad \text{Multiply}\\ &\dfrac{6x-12-(2x+4)}{{(x+2)}^2(x-2)}\qquad \text{Apply distributive property}\\ &\dfrac{4x-16}{{(x+2)}^2(x-2)}\qquad \text{Subtract}\\ &\dfrac{4(x-4)}{{(x+2)}^2(x-2)}\qquad \text{Simplify} \end{align*}\]
Q&A
Do we have to use the LCD to add or subtract rational expressions?
No. Any common denominator will work, but it is easiest to use the LCD.
Exercise \(\PageIndex{4}\)
Subtract the rational expressions: \(\dfrac{3}{x+5}-\dfrac{1}{x-3}\)
Answer
\(\dfrac{2(x-7)}{(x+5)(x-3)}\)
Simplifying Complex Rational Expressions
A complex rational expression is a rational expression that contains additional rational expressions in the numerator, the denominator, or both. We can simplify complex rational expressions by rewriting the numerator and denominator as single rational expressions and dividing. The complex rational expression \(\dfrac{a}{\dfrac{1}{b}+c}\) can be simplified by rewriting the numerator as the fraction \(\dfrac{a}{1}\) and combining the expressions in the denominator as \(\dfrac{1+bc}{b}\). We can then rewrite the expression as a multiplication problem using the reciprocal of the denominator. We get \(\dfrac{a}{1}⋅\dfrac{b}{1+bc}\), which is equal to \(\dfrac{ab}{1+bc}\).
Howto: Given a complex rational expression, simplify it
1. Combine the expressions in the numerator into a single rational expression by adding or subtracting.
2. Combine the expressions in the denominator into a single rational expression by adding or subtracting.
3. Rewrite as the numerator divided by the denominator.
4. Rewrite as multiplication.
5. Multiply.
6. Simplify.
Simplify: \(\dfrac{y+\dfrac{1}{x}}{\dfrac{x}{y}}\)
Solution
Begin by combining the expressions in the numerator into one expression.
\[\begin{align*} &y\times\dfrac{x}{x}+\dfrac{1}{x}\qquad \text{Multiply by } \dfrac{x}{x} \text{ to get LCD as denominator}\\ &\dfrac{xy}{x}+\dfrac{1}{x}\\ &\dfrac{xy+1}{x}\qquad \text{Add numerators} \end{align*}\]
Now the numerator is a single rational expression and the denominator is a single rational expression.
\[\begin{align*} &\dfrac{\dfrac{xy+1}{x}}{\dfrac{x}{y}}\\ \text{We can rewrite this as division, and then multiplication.}\\ &\dfrac{xy+1}{x}÷\dfrac{x}{y}\\ &\dfrac{xy+1}{x}\times\dfrac{y}{x}\qquad \text{Rewrite as multiplication}\\ &\dfrac{y(xy+1)}{x^2}\qquad \text{Multiply} \end{align*}\]
Exercise \(\PageIndex{5}\)
Simplify: \(\dfrac{\dfrac{x}{y}-\dfrac{y}{x}}{y}\)
Answer
\(\dfrac{x^2-y^2}{xy^2}\)
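The same kind of sympy check works for complex rational expressions; this sketch verifies the answer to the exercise above:

```python
from sympy import symbols, together, cancel

x, y = symbols('x y')
expr = (x/y - y/x) / y
print(cancel(together(expr)))  # prints (x**2 - y**2)/(x*y**2)
```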
Q&A
Can a complex rational expression always be simplified?
Yes. We can always rewrite a complex rational expression as a simplified rational expression.
Key Concepts
Rational expressions can be simplified by cancelling common factors in the numerator and denominator. See Example.
We can multiply rational expressions by multiplying the numerators and multiplying the denominators. See Example.
To divide rational expressions, multiply by the reciprocal of the second expression. See Example.
Adding or subtracting rational expressions requires finding a common denominator. See Example and Example.
Complex rational expressions have fractions in the numerator or the denominator. These expressions can be simplified. See Example. |
The growth of entire functions in terms of generalized orders

Abstract
Let $\Phi$ be a convex function on $[x_0,+\infty)$ such that $\frac{\Phi(x)}x\to+\infty$, $x\to+\infty$, let $f(z)=\sum_{n=0}^\infty a_nz^n$ be a transcendental entire function, let $M(r,f)$ be the maximum modulus of $f$ and let
$$\rho_\Phi(f)=\limsup_{r\to +\infty}\frac{\ln\ln M(r,f)}{\ln\Phi(\ln r)},\quad c_{\Phi}=\limsup_{x\to +\infty}\frac{\ln x}{\ln\Phi(x)},\quad d_{\Phi}=\limsup\limits_{x\to +\infty}\frac{\ln\ln\Phi'_+(x)}{\ln\Phi(x)}.$$
It is proved that for every transcendental entire function $f$ the generalized order $\rho_\Phi(f)$ is independent on the arguments of the coefficients $a_n$ (or defined by the sequence $(|a_n|)$) if and only if the inequality $d_{\Phi}\le c_{\Phi}$ holds.
The journal is licensed under a Creative Commons Attribution-NonCommercial-NoDerivs 3.0 Unported. |
Ok, well. One of the best way to understand a proof is to reproduce it via free recall after glancing over the basic components of the proof.
So the theorem is:
Let $a$ and $b$ be integers with $a>b$. Applying the Euclidean Algorithm to find their GCD will take $n$ steps. The theorem says that if $b$ has $d$ digits, then $n\leq 5d$.
The components are Fibonacci's number and Euclidean Algorithm, and some other things.
First, let us see why the theorem might be true.
Well, first, let us look at the Euclidean Algorithm. What does it do? We look in Wikipedia and see that
Given $a, b$, we first find $b_1 = a\mod b$, and then set $a_1=b$.
At the $i$'th step, we find $b_{i} = a_{i-1} \mod b_{i-1}$ and then set $a_{i} = b_{i-1}$. The algorithm ends at step n when we find that $b_n=0$.
Well, does the theorem make any sense? Should there be a max limit to the number of steps the algorithm will take.
Well, yes, of course.
First of all, we note that $b_{n-1} \geq 1$.
Second, let us look at an example. Let $b_{i-2} = 30 = a_{i-1}$; what is the highest possible $b_i$? Well, let us say that $b_{i-1}$ can be anything. What will give the greatest $b_i$? Well, the whole point is, $30 \pmod {b_{i-1}}$ cannot be greater than $14$. So, we can see that $b_i < \frac{1}{2}b_{i-2}$.
This already gives us a bound. You see, $2^4>10$, so $b_i$ has at least one less digit than $b_{i-8}$.
Now, we want a stronger bound. The other important component is the Fibonacci numbers.
Let us look at them.
1,2,3,5,8,13,21,34,55,89, 144 ...
We can see immediately that for every number, if you look 5 numbers over, there is at least one more digit.
So, if we can relate our thing to the Fibonacci's sequence, that'd be great.
Well, we can.
We already said that $b_n = 0$. The smallest value $b_{n-1}$ could be is $b_{n-1} = 1$. (In case you need to justify this, consider the other possibilities. It must be an integer. If it were 0, then the algorithm would have ended at step $n-1$. But we already designated $n$ to be the step at which it ends.) Well, how about $b_{n-2}$?
Well first, we see that $b_i<b_{i-1}$. This should be obvious since $b_i$ is the remainder of something divided by $b_{i-1}$.
So, the smallest $b_{n-2}$ could be is $2$.
Now, let us proceed.
What is the smallest $b_{n-3}$ could be? Well, to get $b_{n-1}$, we divided $a_{n-2} = b_{n-3}$ by $b_{n-2}$ and found the remainder. Therefore, $a_{n-2}=b_{n-3} = c\times b_{n-2}+b_{n-1}$. This is obviously greater than or equal to just $b_{n-2}+b_{n-1}$.
Now, we can generalize this, can't we? Let us say we know $b_i$ and $b_{i-1}$. Well, we know that $b_i = b_{i-2} \bmod b_{i-1}$, so $b_{i-2} = cb_{i-1}+b_i \geq b_{i-1} + b_i$.
What does this look like? Why, the Fibonacci sequence!
COOL!
Ok, so it seems like $b_{n-j}$ cannot be less than the $j$th number in the sequence.
There is subtle point here. We have not proved it. We have only proved that $b_{n-1} \geq 1$, $b_{n-2}\geq 2$, and that $b_{i-2}\geq b_{i-1}+b_{i} $.
We want to prove that the least possible value of $b_{n-j}$ is the $j$th number in the sequence. So...
We still need to prove that the least possible value of $b_{i-2}$ is greater than the sum of the least possible value of $b_{i-1}$ and $b_{i}$. This is pretty easy to prove.
For any $b_{i-2}$, we have that $b_{i-2} \geq b_{i-1} + b_{i} \geq \min(b_{i-1})+ \min(b_{i})$. As such, $\min (b_{i-2})\geq \min(b_{i-1})+ \min(b_{i})$.
So now, we prove that $b_{n-j}$ is greater than or equal to $F_j$, the $j$th term of the Fibonacci sequence.
It is true for $j=1$ and $j=2$: $b_{n-1}\geq 1$, and $b_{n-2}\geq 2$.
If it is true for $j=k$ and $j=k+1$, then let $i=n-k$. So, $b_{n-(k+2)} = b_{i-2}$, $b_{n-(k+1)} = b_{i-1}$, and $b_{n-k} = b_i$. Then we see that $\min(b_{n-(k+2)}) = \min(b_{i-2}) \geq \min(b_{i-1})+ \min(b_{i}) = F_{k+1}+F_{k} = F_{k+2}$.
So, $b_{n-(k+2)} \geq F_{k+2}$, and the statement is thus true for $j=k+2$.
By mathematical induction then, we have that the statement is true for all integer $j$.
So, what have we proven so far?
Well, we have proven that $b_{n-n} = b \geq F_n$. In other words, the Knuth form of the theorem.
Now, how about the original?
Well, we just need to give the proof that each decimal place increase for the Fibonacci sequence takes at most 5 steps.
So.
Let us do that.
\begin{align}F_{n+5} & = F_{n+4}+F_{n+3}\\& = F_{n+3} + F_{n+2} + F_{n+3}\\& = 2F_{n+3}+F_{n+2}\\& = 2F_{n+2}+2F_{n+1}+F_{n+2}\\& = 3F_{n+2}+2F_{n+1}\\& = 5F_{n+1}+3F_n\\& = 8F_n+5F_{n-1}\\& = 13F_{n-1}+8F_{n-2}\\& = 10F_{n-1}+3F_{n-1}+8F_{n-2}\\& = 10F_{n-1}+11F_{n-2}+3F_{n-3}\\& >10F_n\end{align}
So we have what we wanted: $F_{n+5} > 10 F_{n}$, i.e. $F_n > 10 F_{n-5}$. So, we start with $b$. After 5 steps of the algorithm, the remainder has at least one fewer decimal place. If $b$ has $d$ decimal places, then after $5(d-1)$ steps the remainder has only 1 decimal place.
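If you want to sanity-check these two facts numerically, a short Python sketch suffices (my own aside, using the indexing 1, 2, 3, 5, ... from above):

```python
# Sanity check: F_{n+5} > 10 * F_n, so 5 entries along the sequence adds a digit.
fib = [1, 2]
while len(fib) < 60:
    fib.append(fib[-1] + fib[-2])

print(all(fib[n + 5] > 10 * fib[n] for n in range(len(fib) - 5)))                 # True
print(all(len(str(fib[n + 5])) > len(str(fib[n])) for n in range(len(fib) - 5)))  # True
```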
Hence, we are done.
I would like to note that the definition of the first step here might be slightly different, so the answer might be off by 1.
But in general, this is how you would do it.
Again, the best way to understand a proof is to replicate it via free recall. |
Lesson Overview
In this lesson, we'll derive a formula known as Green's Theorem. This formula is useful because it gives
us a simpler way of calculating a specific subset of line integral problems—namely, problems in which the curve is closed (plus a few extra criteria described below). We won't concern ourselves with using this formula to solve problems in this article; we'll save that for future lessons. In this lesson, we will require that the curve \(c\) be closed and specify some other restrictions (but even with these conditions, our analysis will be pretty general); after doing so, we'll take the line integral, then do some calculus and algebra to derive a simple formula for calculating that line integral. Although this derivation might seem pretty tedious at times, just remember that it's mostly just calculus and algebra that you are already familiar with. We derive Green's Theorem for any continuous, piecewise-smooth, closed, simple curve, splitting that curve into two separate curves; even though we won't prove it in this article, it turns out that our analysis is more general and applies to that same curve even if it is split into \(n\) curves.
Green's Theorem Proof (Part 1)
In this lesson, we're going to focus on proving Green's Theorem. We discussed in a previous lesson how to calculate
any line integral by parameterizing the integrand and limits of integration. Solving that parameterized integral can be quite tedious sometimes but it is, in general, how we calculate any line integral. But what if we considered calculating a special subset of line integrals that involved taking the line integral of a vector field around certain types of closed curves? Well, when it comes to calculating these kinds of line integrals we don't have to use the complicated parameterized definite integrals discussed earlier. For such line integrals of vector fields around these certain kinds of closed curves, we can use Green's theorem to calculate them.
These particular kinds of closed curves can be fully described by the following description: they are any arbitrary curve \(C\) on the \(xy\)-plane that is piece-wise smooth, positively oriented, simple, and closed as illustrated in Figure 1. That description might sound like a mouthful, but let's break down the meaning of each term. A
closed curve, as the name suggests, is any curve such that if you start at a point on that curve and then "walk around" that curve, you'll come back to the same point that you started at. A simple curve is any curve that doesn't crisscross and intersect itself; for example, a curve shaped like the number eight would not be a simple curve. A positively-oriented curve is one that you travel around counterclockwise, and a piecewise-smooth curve can be subdivided into a finite number of smooth curves joined end to end. Whenever we take a line integral of a vector field around these kinds of curves, it is usually easier to calculate the line integral using Green's theorem.
The kinds of vector fields that we can calculate the line integral of using Green's theorem are pretty general but must meet a few criteria: they can be any arbitrary vector field \(\vec{F}(x,y)\) defined as
$$\vec{F}(x,y)=P(x,y)\hat{i}+Q(x,y)\hat{j},$$
so long as the vector field \(\vec{F}(x,y)\) is differentiable at every point inside of the region \(R\) (enclosed by the curve \(C\)) and at every point along the curve \(C\). (We'll see why these criteria must be met as we are proving Green's theorem; Green's theorem involves taking the partial derivatives of the vector field.) Our goal is to calculate the line integral, \(∮_c\vec{F}(x,y)·d\vec{S}\), for the particular kind of vector field and curve just described. (Notice that a circle is drawn on the integral; this is to signify that the curve \(C\) is a closed curve.) Regardless of whether or not we were to use Green's theorem
or the technique already discussed involving parameterizing the integrand and limits of integration, the first step in calculating this integral would be the same: we must first evaluate the dot product \(\vec{F}(x,y)·d\vec{S}\). Doing so, we have
$$\vec{F}(x,y)·d\vec{S}=(P(x,y)\hat{i}+Q(x,y)\hat{j})·(dx\hat{i}+dy\hat{j}).$$
Since \(\hat{i}\) is perpendicular to \(\hat{j}\), the cross terms cancel. The other two terms involve the dot product \(\hat{i}·\hat{i}\) and \(\hat{j}·\hat{j}\) which simply just equal one since both unit vectors are parallel and have magnitudes of one. Thus, the dot product can be further simplified to
$$(P(x,y)\hat{i}+Q(x,y)\hat{j})·(dx\hat{i}+dy\hat{j})=P(x,y)dx\hat{i}·\hat{i}+Q(x,y)dy\hat{j}·\hat{j}=P(x,y)dx+Q(x,y)dy.$$
Substituting this simplified version of the dot product into the line integral \(∮_c\vec{F}(x,y)·d\vec{S}\), we have
$$∮_c\vec{F}(x,y)·d\vec{S}=∮_c(P(x,y)dx+Q(x,y)dy)=∮_cP(x,y)dx+∮_cQ(x,y)dy.\tag{1}$$
One way to calculate the line integral in Equation (1) would be to parameterize the right-hand side of Equation (1); this would allow us to calculate any line integral. But as I previously mentioned, this process can, in general, get quite complicated. But when we consider the subset of line integrals which deal with taking the line integrals of vector fields over the kinds of closed curves we just discussed, we can calculate the line integral by calculating \(∮_cP(x,y)dx\) in terms of \(x\), then calculating \(∮_cQ(x,y)dy\) in terms of \(y\), and then adding the two results together. Let's first calculate \(∮_cP(x,y)dx\) in terms of \(x\). We start by splitting the curve \(C\) into two separate curves \(C_1\) and \(C_2\) as illustrated in Figure 2. To get everything in terms of \(x\), let's represent each \(y\)-coordinate on each point on the curves \(C_1\) and \(C_2\) in Figure 2 (and also shown in the video above) as functions of \(x\); \(y_1(x)\) will specify each \(y\)-coordinate associated with each point on \(C_1\) and \(y_2(x)\) will specify each \(y\)-coordinate associated with each point on \(C_2\). Substituting both of these functions into the integral \(∮_cP(x,y)dx\), we have
$$∮_cP(x,y)dx=\int_{x=a}^{x=b}P(x,y_1(x))dx+\int_{x=b}^{x=a}P(x,y_2(x))dx.\tag{2}$$
(Notice that since everything in each integral is represented in terms of a single variable, the line integral simplified to a definite integral. As we discussed in the lesson on the Introduction of Line Integrals, if the integrand and limits of integration which are, in general, expressed with respect to the arclength \(S\) can instead be represented with respect to say \(x\) or \(y\), then the line integral can be simplified to a definite integral.)
The limits of integration in the integral \(\int_{x=b}^{x=a}P(x,y_2(x))dx\) go from \(x=b\) to \(x=a\). If we "swap" the lower and upper limits of integration of that integral to get \(\int_{x=a}^{x=b}P(x,y_2(x))dx\), we are essentially just changing the order of subtraction of the anti-derivative; if we add a minus sign in front of the integral \(\int_{x=a}^{x=b}P(x,y_2(x))dx\), then that integral will be the same as the integral \(\int_{x=b}^{x=a}P(x,y_2(x))dx\). Thus, we have
$$\int_{x=b}^{x=a}P(x,y_2(x))dx=-\int_{x=a}^{x=b}P(x,y_2(x))dx.\tag{3}$$
Substituting Equation (3) into (2), we have
$$∮_cP(x,y)dx=\int_{x=a}^{x=b}P(x,y_1(x))dx-\int_{x=a}^{x=b}P(x,y_2(x))dx=\int_{x=a}^{x=b}\biggl(P(x,y_1(x))-P(x,y_2(x))\biggr)dx.$$
Thus,
$$∮_cP(x,y)dx=\int_{x=a}^{x=b}\biggl(P(x,y_1(x))-P(x,y_2(x))\biggr)dx.\tag{4}$$
The next step is to multiply the right-hand side of Equation (4) by \((-1)·(-1)=1\). Initially, this step might seem pretty ad hoc, but essentially what we're going to do is "build up an integral in reverse." In the next few steps, you'll see that the integrand in the right-hand side of Equation (4) can be written as an integral. Multiplying the right-hand side of Equation (4) by \((-1)·(-1)=1\), we have
$$(-1)·(-1)·\int_{x=a}^{x=b}\biggl(P(x,y_1(x))-P(x,y_2(x))\biggr)dx=-1·\int_{x=a}^{x=b}\biggl[-1·\biggl(P(x,y_1(x))-P(x,y_2(x))\biggr)\biggr]dx$$
$$=-\int_{x=a}^{x=b}\biggl(P(x,y_2(x))-P(x,y_1(x))\biggr)dx.$$
Thus, we have
$$∮_cP(x,y)dx=-\int_{x=a}^{x=b}\biggl(P(x,y_2(x))-P(x,y_1(x))\biggr)dx.\tag{5}$$
Notice that the integrand, \(P(x,y_2(x))-P(x,y_1(x))\), in the right-hand side of Equation (5), is the same thing as \(\int_{y(x)=y_1(x)}^{y(x)=y_2(x)}\frac{∂P(x,y(x))}{∂y}dy\) and thus
$$P(x,y_2(x))-P(x,y_1(x))=\int_{y(x)=y_1(x)}^{y(x)=y_2(x)}\frac{∂P(x,y(x))}{∂y}dy.\tag{6}$$
(This step can be quite confusing so let me explain why it is valid. The partial derivative, \(\frac{∂P(x,y(x))}{∂y}\), is the same thing as taking the ordinary derivative of \(P(x,y)\) with respect to \(y\) with \(x\) set equal to some constant—in other words, \(\frac{∂P(x,y(x))}{∂y}=\frac{dP(constant,y)}{dy}\). When we evaluate the integral, or anti-derivative, of the integrand \(\frac{∂P(x,y(x))}{∂y}=\frac{dP(constant,y)}{dy}\), we "undo the derivative" so to speak. This means that the anti-derivative (another name for the integral) of \(\frac{dP(constant,y)}{dy}=\frac{∂P(x,y(x))}{∂y}\) is just \(P(x,y(x))\). That would be the solution if we were taking an indefinite integral, but since we are taking the definite integral from \(y(x)=y_1(x)\) to \(y(x)=y_2(x)\), the solution to the integral is actually \(P(x,y(x))|_{y(x)=y_1(x)}^{y(x)=y_2(x)}=P(x,y_2(x))-P(x,y_1(x))\).)
Substituting Equation (6) into (5), we have
$$-∫_{x=a}^{x=b}\biggl(P(x,y_2(x))-P(x,y_1(x))\biggr)dx=-∫_{x=a}^{x=b}\biggl(∫_{y(x)=y_1(x)}^{y(x)=y_2(x)}\frac{∂P(x,y(x))}{∂y}dy\biggr)dx.\tag{7}$$
Thus, we have
$$∮_cP(x,y)dx=-∫_{x=a}^{x=b}∫_{y(x)=y_1(x)}^{y(x)=y_2(x)}\frac{∂P(x,y(x))}{∂y}dydx.\tag{8}$$
Equation (8) is essentially just the volume contained between the surface \(-\frac{∂P(x,y(x))}{∂y}\) and the region \(R\). In other words, Equation (8) represents the infinite sum of infinitesimally skinny columns of volume \(-P_y(x,y(x))\,dx\,dy\) over the region \(R\). The notation for writing this is
$$-∫_{x=a}^{x=b}∫_{y(x)=y_1(x)}^{y(x)=y_2(x)}\frac{∂P(x,y(x))}{∂y}dydx=∫∫_R-\frac{∂P(x,y(x))}{∂y}dA.\tag{9}$$
Canceling out the minus signs on both sides of Equation (9), we have
$$∫_{x=a}^{x=b}∫_{y(x)=y_1(x)}^{y(x)=y_2(x)}\frac{∂P(x,y(x))}{∂y}dydx=∫∫_R\frac{∂P(x,y(x))}{∂y}dA.$$
Finally, if we substitute this result into Equation (8), we have
$$∮_cP(x,y)dx=-∫∫_R\frac{∂P(x,y(x))}{∂y}dA.\tag{10}$$
Green's Theorem Proof (Part 2)
Equation (10) allows us to calculate the line integral \(∮_cP(x,y)dx\) entirely in terms of \(x\). The final step we need to complete to calculate the line integral of \(\vec{F}(x,y)\) is to calculate the line integral \(∮_cQ(x,y)dy\) and then add this result to Equation (10). To calculate the line integral \(∮_cQ(x,y)dy\), we'll go through a procedure analogous to the one which we went through to calculate \(∮_cP(x,y)dx\). First, let's split up the curve \(C\) into the two separate curves \(C_1\) and \(C_2\) illustrated in Figure 3. Let's express the \(x\)-coordinate of each point on the two curves \(C_1\) and \(C_2\) as the functions \(x_1(y)\) and \(x_2(y)\), respectively. Analogous to what we did previously, let's write the integral \(∮_cQ(x,y)dy\) as the sum of two line integrals of the form,
$$∮_cQ(x,y)dy=∮_{c_1}Q(x_1(y),y)dy+∮_{c_2}Q(x_2(y),y)dy.\tag{11}$$
Just like last time, our goal will be to write the right-hand side of Equation (11) as a double integral. First, we do some manipulations to write the two line integrals as a single definite integral; then, after that, we do some algebra and calculus to rewrite the integrand as another definite integral. Since the two line integrals on the right-hand side of Equation (11) are expressed in terms of \(y\), we can rewrite them as definite integrals to get
$$∮_{c_1}Q(x_1(y),y)dy+∮_{c_2}Q(x_2(y),y)dy=∫_{y=a}^{y=b}Q(x_1(y),y)dy+∫_{y=b}^{y=a}Q(x_2(y),y)dy.\tag{12}$$
Substituting \(∫_{y=b}^{y=a}Q(x_2(y),y)dy=-∫_{y=a}^{y=b}Q(x_2(y),y)dy\) into the right-hand side of Equation (12) and making the same algebraic simplifications as before, we have
$$∫_{y=a}^{y=b}Q(x_1(y),y)dy+∫_{y=b}^{y=a}Q(x_2(y),y)dy=∫_{y=a}^{y=b}Q(x_1(y),y)dy-∫_{y=a}^{y=b}Q(x_2(y),y)dy$$
$$=∫_{y=a}^{y=b}(Q(x_1(y),y)-Q(x_2(y),y))dy=∫_{y=a}^{y=b}Q(x(y),y)\Big|_{x(y)=x_2(y)}^{x(y)=x_1(y)}dy.$$
$$=∫_{y=a}^{y=b}\biggl(∫_{x_2(y)}^{x_1(y)}\frac{∂Q(x(y),y)}{∂x}dx\biggr)dy.$$
Thus,
$$∮_cQ(x,y)dy=∫_{y=a}^{y=b}\biggl(∫_{x_2(y)}^{x_1(y)}\frac{∂Q(x(y),y)}{∂x}dx\biggr)dy.\tag{13}$$
Equation (13) is essentially just the volume contained between the surface \(\frac{∂Q(x(y),y)}{∂x}\) and the region \(R\). In other words, Equation (13) represents the infinite sum of infinitesimally skinny columns of volume \(Q_x(x(y),y)dxdy\) over the region \(R\). The notation for writing this is
$$∫_{y=a}^{y=b}\biggl(∫_{x_2(y)}^{x_1(y)}\frac{∂Q(x(y),y)}{∂x}dx\biggr)dy=∫∫_R\frac{∂Q(x(y),y)}{∂x}dA.\tag{14}$$
Let's substitute Equations (10) and (14) into (1) to get
$$∮_c\vec{F}(x,y)·d\vec{S}=∫∫_R\frac{∂Q(x(y),y)}{∂x}dA+∫∫_R-\frac{∂P(x,y(x))}{∂y}dA.$$
Simplifying the expression on the right-hand side of the above equation, we get Green's theorem which states that
$$∮_c\vec{F}(x,y)·d\vec{S}=∫∫_R\biggl(\frac{∂Q(x(y),y)}{∂x}-\frac{∂P(x,y(x))}{∂y}\biggr)dA,\tag{15}$$
or, equivalently,
$$∮_cP(x,y)dx+∮_cQ(x,y)dy=∫∫_R\biggl(\frac{∂Q(x(y),y)}{∂x}-\frac{∂P(x,y(x))}{∂y}\biggr)dA.\tag{16}$$
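As a quick sanity check of Equation (16), here is a small numerical experiment (my own sketch, not part of the original lesson) for the vector field \(P=-y\), \(Q=x\) and the positively oriented unit circle, where \(∂Q/∂x−∂P/∂y=2\), so both sides should come out to \(2\pi\):

```python
import numpy as np

# Left-hand side: the line integral around the unit circle, x = cos t, y = sin t.
t = np.linspace(0.0, 2.0 * np.pi, 100001)
x, y = np.cos(t), np.sin(t)
dxdt, dydt = -np.sin(t), np.cos(t)
line_integral = np.trapz(-y * dxdt + x * dydt, t)

# Right-hand side: the double integral of 2 over the unit disk, done on a grid.
xs = np.linspace(-1.0, 1.0, 801)
X, Y = np.meshgrid(xs, xs)
dA = (xs[1] - xs[0]) ** 2
double_integral = np.sum(2.0 * (X**2 + Y**2 <= 1.0)) * dA

print(line_integral, double_integral)   # both approximately 6.283...
```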
In the next couple of lessons, we'll use Green's theorem to solve some line integrals of vector fields over piecewise smooth, simple, closed curves.
This article is licensed under a CC BY-NC-SA 4.0 license.
Sources: Khan Academy |
Let $V$ be a $\mathbb{R}$-vector space. Let $\Phi:V^n\to\mathbb{R}$ a multilinear symmetric operator.
Is it true and how do we show that for any $v_1,\ldots,v_n\in V$, we have:
$$\Phi[v_1,\ldots,v_n]=\frac{1}{n!} \sum_{k=1}^n \sum_{1\leq j_1<\cdots<j_k\leq n} (-1)^{n-k}\phi (v_{j_1}+\cdots+v_{j_k}),$$ where $\phi(v)=\Phi(v,\ldots,v)$.
My question come from that, I have seen this formula when I was reading about mixed volume, and also when I was reading about mixed Monge-Ampère measure. The setting was not exactly the one of a vector space $V$ but I think the formula is true here and I am interested by having this property shown out of the specific context of Monge-Ampère measures or volumes. I have done some work in the other direction,
i.e. starting from an operator $\phi:V\to\mathbb{R}$ satisfying some condition and obtaining a multilinear operator $\Phi$; below are the results I have seen in this direction.
I already know that if $\phi':V\to\mathbb{R}$ is such that for any $v_1,\ldots,v_n\in V$, $\phi'(\lambda_1 v_1+\ldots+\lambda_n v_n)$ is a homogeneous polynomial of degree $n$ in the variables $\lambda_i$, then there exists a unique multilinear symmetric operator $\Phi':V^n\to\mathbb{R}$ such that $\Phi'(v,\ldots,v)=\phi'(v)$ for any $v\in V$. Moreover $\Phi'(v_1,\ldots,v_n)$ is the coefficient of the symmetric monomial $\lambda_1\cdots\lambda_n$ in $\phi'(\lambda_1 v_1+\ldots+\lambda_n v_n)$ (see Symmetric multilinear form from an homogenous form.).
I also know that if $\phi'(\lambda v)=\lambda^n \phi'(v)$ and we define $$\Phi''(v_1,\ldots,v_n)=\frac{1}{n!} \sum_{k=1}^n \sum_{1\leq j_1<\cdots<j_k\leq n} (-1)^{n-k}\phi' (v_{j_1}+\cdots+v_{j_k}),$$ then $\Phi''(v,\ldots,v)=\frac{1}{n!} \sum_{k=1}^n (-1)^{n-k} \binom{n}{k} k^n \phi'(v)=\phi'(v)$ (see Show this equality (The factorial as an alternate sum with binomial coefficients).). It is clear that $\Phi''$ is symmetric, but I don't know if $\Phi''$ is multilinear.
Formula for $n=2$: $$\Phi[v_1,v_2]=\frac12 [\phi(v_1+v_2)-\phi(v_1)-\phi(v_2)].$$
Formula for $n=3$: $$\Phi[v_1,v_2,v_3]=\frac16 [\phi(v_1+v_2+v_3)-\phi(v_1+v_2)-\phi(v_1+v_3)-\phi(v_2+v_3)+\phi(v_1)+\phi(v_2)+\phi(v_3)].$$ |
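For what it's worth, the formula is easy to test numerically. Here is a quick check of the $n=3$ case for a random symmetric trilinear form on $\mathbb{R}^4$ (my own sketch using numpy; all names are arbitrary):

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)
m = 4
T = rng.normal(size=(m, m, m))
# Symmetrize T so that Phi below is a symmetric trilinear form.
T = sum(np.transpose(T, p) for p in itertools.permutations(range(3))) / 6.0

def Phi(u, v, w):
    return np.einsum('ijk,i,j,k->', T, u, v, w)

def phi(v):
    return Phi(v, v, v)

v1, v2, v3 = rng.normal(size=(3, m))
lhs = Phi(v1, v2, v3)
rhs = (phi(v1 + v2 + v3) - phi(v1 + v2) - phi(v1 + v3) - phi(v2 + v3)
       + phi(v1) + phi(v2) + phi(v3)) / 6.0
print(np.isclose(lhs, rhs))   # True
```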
Hello one and all! Is anyone here familiar with planar prolate spheroidal coordinates? I am reading a book on dynamics and the author states If we introduce planar prolate spheroidal coordinates $(R, \sigma)$ based on the distance parameter $b$, then, in terms of the Cartesian coordinates $(x, z)$ and also of the plane polars $(r , \theta)$, we have the defining relations $$r\sin \theta=x=\pm \sqrt{R^2−b^2}\,\sin\sigma, \qquad r\cos\theta=z=R\cos\sigma$$ I am having a tough time visualising what this is?
Consider the function $f(z) = \sin\left(\frac{1}{\cos(1/z)}\right)$; the point $z = 0$ is (a) a removable singularity, (b) a pole, (c) an essential singularity, (d) a non-isolated singularity. Since $\cos(\frac{1}{z}) = 1- \frac{1}{2z^2}+\frac{1}{4!z^4} - \cdots = (1-y)$, where $y=\frac{1}{2z^2}+\frac{1}{4!...
I am having trouble understanding non-isolated singularity points. An isolated singularity point I do kind of understand, it is when: a point $z_0$ is said to be isolated if $z_0$ is a singular point and has a neighborhood throughout which $f$ is analytic except at $z_0$. For example, why would $...
No worries. There's currently some kind of technical problem affecting the Stack Exchange chat network. It's been pretty flaky for several hours. Hopefully, it will be back to normal in the next hour or two, when business hours commence on the east coast of the USA...
The absolute value of a complex number $z=x+iy$ is defined as $\sqrt{x^2+y^2}$. Hence, when evaluating the absolute value of $x+i$ I get the number $\sqrt{x^2 +1}$; but the answer to the problem says it's actually just $x^2 +1$. Why?
mmh, I probably should ask this on the forum. The full problem asks me to show that we can choose $\log(x+i)$ to be $$\log(x+i)=\log(1+x^2)+i\left(\frac{\pi}{2} - \arctan x\right)$$ So I'm trying to find the polar coordinates (absolute value and an argument $\theta$) of $x+i$ to then apply the $\log$ function on it
Let $X$ be any nonempty set and $\sim$ be any equivalence relation on $X$. Then are the following true:
(1) If $x=y$ then $x\sim y$.
(2) If $x=y$ then $y\sim x$.
(3) If $x=y$ and $y=z$ then $x\sim z$.
Basically, I think that all the three properties follows if we can prove (1) because if $x=y$ then since $y=x$, by (1) we would have $y\sim x$ proving (2). (3) will follow similarly.
This question arised from an attempt to characterize equality on a set $X$ as the intersection of all equivalence relations on $X$.
I don't know whether this question is too trivial. But I have yet not seen any formal proof of the following statement: "Let $X$ be any nonempty set and $∼$ be any equivalence relation on $X$. If $x=y$ then $x\sim y$."
That is definitely a new person, not going to classify as RHV yet as other users have already put the situation under control it seems...
(comment on many many posts above)
In other news:
> C -2.5353672500000002 -1.9143250000000003 -0.5807385400000000 C -3.4331741299999998 -1.3244286800000000 -1.4594762299999999 C -3.6485676800000002 0.0734728100000000 -1.4738058999999999 C -2.9689624299999999 0.9078326800000001 -0.5942069900000000 C -2.0858929200000000 0.3286240400000000 0.3378783500000000 C -1.8445799400000003 -1.0963522200000000 0.3417561400000000 C -0.8438543100000000 -1.3752198200000001 1.3561451400000000 C -0.5670178500000000 -0.1418068400000000 2.0628359299999999
probably the weirdest bunch of data I've ever seen, with so many 000000s and 999999s
But I think that to prove the implication for transitivity, the inference rule and use of MP seem to be necessary. But that would mean that for logics for which MP fails we wouldn't be able to prove the result. Also in set theories without the Axiom of Extensionality the desired result will not hold. Am I right @AlessandroCodenotti?
@AlessandroCodenotti A precise formulation would help in this case because I am trying to understand whether a proof of the statement which I mentioned at the outset depends really on the equality axioms or the FOL axioms (without equality axioms).
This would allow in some cases to define an "equality like" relation for set theories for which we don't have the Axiom of Extensionality.
Can someone give an intuitive explanation why $\mathcal{O}(x^2)-\mathcal{O}(x^2)=\mathcal{O}(x^2)$. The context is Taylor polynomials, so when $x\to 0$. I've seen a proof of this, but intuitively I don't understand it.
@schn: The minus is irrelevant (for example, the thing you are subtracting could be negative). When you add two things that are of the order of $x^2$, of course the sum is the same (or possibly smaller). For example, $3x^2-x^2=2x^2$. You could have $x^2+(x^3-x^2)=x^3$, which is still $\mathscr O(x^2)$.
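(A quick sympy aside of my own, not from the chat, showing the same bookkeeping symbolically; it assumes sympy is available:)

```python
# sympy's Order objects implement exactly this arithmetic near x -> 0.
from sympy import symbols, O

x = symbols('x')
print(O(x**2) - O(x**2))   # should print O(x**2): the difference is still of order x**2
print(3*x**2 + O(x**3))    # a concrete function that is O(x**2) near 0
```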
@GFauxPas: You only know $|f(x)|\le K_1 x^2$ and $|g(x)|\le K_2 x^2$, so that won't be a valid proof, of course.
Let $f(z)=z^{n}+a_{n-1}z^{n-1}+\cdot\cdot\cdot+a_{0}$ be a complex polynomial such that $|f(z)|\leq 1$ for $|z|\leq 1.$ I have to prove that $f(z)=z^{n}.$I tried it asAs $|f(z)|\leq 1$ for $|z|\leq 1$ we must have coefficient $a_{0},a_{1}\cdot\cdot\cdot a_{n}$ to be zero because by triangul...
@GFauxPas @TedShifrin Thanks for the replies. Now, why is it we're only interested when $x\to 0$? When we do a Taylor approximation centered at $x=0$, aren't we interested in all the values of our approximation, even those not near 0?
Indeed, one thing a lot of texts don't emphasize is this: if $P$ is a polynomial of degree $\le n$ and $f(x)-P(x)=\mathscr O(x^{n+1})$, then $P$ is the (unique) Taylor polynomial of degree $n$ of $f$ at $0$. |
I get the parameters (long-term mean, volatility, mean-reversion speed, correlation) of two correlated Ornstein-Uhlenbeck processes via a likelihood estimation from hourly data. If I want to transform these to use them to create a daily - instead of hourly - simulation (tree or Monte Carlo), what do I have to do? Thanks in advance.
You can aggregate your starting hourly data to obtain daily data and re-estimate the parameters, then simulate. Alternatively, with your parameters already obtained, you can simulate hourly data and make a post-simulation aggregation to have daily data.
Let $X^h$ be your hourly process
Let $X^d$ be your daily process
Let $\delta$ be one day
you have
$$X^d_t=\frac{1}{\delta}\int_{t-\delta}^{t}X^h_s ds$$
$$dX^h_t = a(b-X^h_t)dt + \sigma dB_t$$
$$\Delta X^d_t := X^d_{t+\delta}-X^d_t =\frac{1}{\delta}\int_{t-\delta}^t\left(X^h_{u+\delta}-X^h_{u}\right)du$$
so it is a Gaussian random variable, by known results on OU.
You can express it and compute $Cov(\Delta X^{d}_{k\delta},\Delta X^d_{j\delta})$
You will then be able to conclude.
Details
by known results :
$$X^h_{t+\delta}-X^h_t=(b-X^h_{t})(1-e^{-a\delta})+\sigma\int_{t}^{t+\delta}e^{-a(t+\delta-u)}dB_u$$
so:
$$\begin{split} X^d_{t+\delta}-X^d_t &= (b-X^d_t)(1-e^{-a\delta})+\int_{t-\delta}^{t}\frac{\sigma}{\delta}\int_{u}^{u+\delta}e^{-a(u+\delta-s)}dB_s\, du \\ & = (b-X^d_t)(1-e^{-a\delta})+\int_{t}^{t+\delta}\frac{\sigma}{\delta}\int_{u-\delta}^{u}e^{-a(u-s)}dB_s\, du \\ \end{split} $$
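A rough simulation sketch of the "simulate hourly, then aggregate to daily" route mentioned in the first answer (my own code, with made-up parameter values; it assumes the OU parameters are quoted per day):

```python
import numpy as np

a, b, sigma = 2.0, 1.0, 0.3     # mean-reversion speed, long-term mean, volatility (per day)
dt = 1.0 / 24.0                 # one hour, expressed in days
n_days = 250
n_steps = n_days * 24

rng = np.random.default_rng(42)
x = np.empty(n_steps + 1)
x[0] = b
for i in range(n_steps):
    # exact one-step transition of the OU process over an interval dt
    mean = b + (x[i] - b) * np.exp(-a * dt)
    var = sigma**2 * (1.0 - np.exp(-2.0 * a * dt)) / (2.0 * a)
    x[i + 1] = mean + np.sqrt(var) * rng.standard_normal()

# Daily series as the average of the 24 hourly values of each day,
# mirroring X^d_t = (1/delta) * integral of X^h over the day.
daily = x[1:].reshape(n_days, 24).mean(axis=1)
print(daily[:5])
```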
Volume 60, № 10, 2008. Continuity with respect to initial data and absolute-continuity approach to the first-order regularity of nonlinear diffusions on noncompact manifolds
Ukr. Mat. Zh. - 2008. - 60, № 10. - pp. 1299–1316
We study the dependence on initial data for solutions of diffusion equations with globally non-Lipschitz coefficients on noncompact manifolds. Though the metric distance may not be everywhere twice differentiable, we show that, under certain monotonicity conditions on the coefficients and curvature of the manifold, there are estimates exponential in time for the continuity of a diffusion process with respect to initial data. These estimates are combined with methods of the theory of absolutely continuous functions to achieve the first-order regularity of solutions with respect to initial data. The suggested approach neither appeals to the local stopping time arguments, nor applies the exponential mappings on the tangent space, nor uses imbeddings of a manifold to linear spaces of higher dimensions.
Ukr. Mat. Zh. - 2008. - 60, № 10. - pp. 1317–1325
A system of ordinary differential equations with impulse effects at fixed moments of time is considered. This system admits the zero solution. Sufficient conditions of the equiasymptotic stability of the zero solution are obtained.
Ukr. Mat. Zh. - 2008. - 60, № 10. - pp. 1326–1337
Some special space of convex compact sets is considered and notions of a derivative and an integral for multivalued mapping different from already known ones are introduced. The differential equation with multivalued right-hand side satisfying the Caratheodory conditions is also considered and the theorems on the existence and uniqueness of its solutions are proved. In contrast to O. Kaleva's approach, the given approach enables one to consider fuzzy differential equations as usual differential equations with multivalued solutions.
Ukr. Mat. Zh. - 2008. - 60, № 10. - pp. 1338 – 1349
The following sharp inequality for local norms of functions $x \in L^{r}_{\infty,\infty}(\textbf{R})$ is proved: $$\frac1{b-a}\int\limits_a^b|x'(t)|^qdt \leq \frac1{\pi}\int\limits_0^{\pi}|\varphi_{r-1}(t)|^q dt \left(\frac{||x||_{L_{\infty}(\textbf{R})}}{||\varphi_r||_{\infty}}\right)^{\frac{r-1}rq}||x^{(r)}||^{q/r}_{\infty},\quad r \in \textbf{N},$$ where $\varphi_r$ is the perfect Euler spline, takes place on intervals $[a, b]$ of monotonicity of the function $x$ for $q \geq 1$ or for any $q > 0$ in the cases of $r = 2$ and $r = 3.$ As a corollary, well-known A. A. Ligun's inequality for functions $x \in L^{r}_{\infty}$ of the form $$||x^{(k)}||_q \leq \frac{||\varphi_{r-k}||_q}{||\varphi_r||_{\infty}^{1-k/r}} ||x||^{1-k/r}_{\infty}||x^{(r)}||^{k/r}_{\infty},\quad k,r \in \textbf{N},\quad k < r, \quad 1 \leq q < \infty,$$ is proved for $q \in [0,1)$ in the cases of $r = 2$ and $r = 3.$
Lattice of normal subgroups of a group of local isometries of the boundary of a spherically homogeneous tree
Ukr. Mat. Zh. - 2008. - 60, № 10. - pp. 1350–1356
We describe the structure of the lattice of normal subgroups of the group of local isometries of the boundary of a spherically homogeneous tree LIsom ∂T. It is proved that every normal subgroup of this group contains its commutant. We characterize the quotient group of the group LIsom ∂T by its commutant.
Ukr. Mat. Zh. - 2008. - 60, № 10. - pp. 1357–1366
We show that the conjugacy of elements of finite order in the group of finite-state automorphisms of a rooted tree is equivalent to their conjugacy in the group of all automorphisms of the rooted tree. We establish a criterion for conjugacy between a finite-state automorphism and the adding machine in the group of finite-state automorphisms of a rooted tree of valency 2.
Ukr. Mat. Zh. - 2008. - 60, № 10. - pp. 1367–1377
We study the notion of finite absolute continuity for measures on infinite-dimensional spaces. For Gaussian product measures on \(\mathbb{R}^{\infty}\) and Gaussian measures on a Hilbert space, we establish criteria for finite absolute continuity. We consider cases where the condition of finite absolute continuity of Gaussian measures is equivalent to the condition of their equivalence.
Ukr. Mat. Zh. - 2008. - 60, № 10. - pp. 1378–1388
We study the problem of the elimination of isolated singularities for so-called Q-homeomorphisms in Loewner spaces. We formulate several conditions for a function Q(x) under which every Q-homeomorphism admits a continuous extension to an isolated singular point. We also consider the problem of the homeomorphicity of the extension obtained. The results are applied to Riemannian manifolds and Carnot groups.
Ukr. Mat. Zh. - 2008. - 60, № 10. - pp. 1389–1400
We study space mappings with branching that satisfy modulus inequalities. For classes of these mappings, we obtain several sufficient conditions for the normality of families.
Ukr. Mat. Zh. - 2008. - 60, № 10. - pp. 1401–1413
In this paper, the global exponential stability of a class of neural networks is investigated. The neural networks contain variable and unbounded delays. By constructing a suitable Lyapunov function and using the technique of matrix analysis, we obtain some new sufficient conditions for global exponential stability.
Continuum cardinality of the set of solutions of one class of equations that contain the function of frequency of ternary digits of a number
Ukr. Mat. Zh. - 2008. - 60, № 10. - pp. 1414–1421
We study the equation $v_1(x) = x$, where $v_1(x)$ is the function of frequency of the digit 1 in the ternary expansion of $x$.
We prove that this equation has a unique rational solution and a continuum set of irrational solutions.
An algorithm for the construction of solutions is proposed. We also describe the topological and metric properties of the set of all solutions.
Some additional facts about the equations $v_i(x) = x$, $i = 0, 2$, are also given.
Ukr. Mat. Zh. - 2008. - 60, № 10. - pp. 1422–1426
We prove a theorem on the well-posedness of the Cauchy problem for a linear higher-order stochastic equation of parabolic type with time-dependent coefficients and continuous perturbations whose solutions are subjected to pulse action at fixed times.
Solutions of the Kirkwood–Salsburg equation for a lattice classical system of one-dimensional oscillators with positive finite-range many-body interaction potentials
Ukr. Mat. Zh. - 2008. - 60, № 10. - pp. 1427–1433
For a system of classical one-dimensional oscillators on the d-dimensional hypercubic lattice interacting via pair superstable and many-body positive finite-range potentials, the (lattice) Kirkwood–Salsburg equation is proposed for the first time and is solved.
Ukr. Mat. Zh. - 2008. - 60, № 10. - pp. 1434–1440
We consider the special case of the three-body problem where the mass of one of the bodies is considerably smaller than the masses of the other two bodies and investigate the relationship between the Lagrange stability of a pair of massive bodies and the Hill stability of the system of three bodies. We prove a theorem on the existence of Hill stable motions in the case considered. We draw an analogy with the restricted three-body problem. The theorem obtained allows one to conclude that there exist Hill stable motions for the elliptic restricted three-body problem. |
Skills to Develop
Solve a system of nonlinear equations using substitution.
Solve a system of nonlinear equations using elimination.
Graph a nonlinear inequality.
Graph a system of nonlinear inequalities.
Halley’s Comet (Figure \(\PageIndex{1}\)) orbits the sun about once every \(75\) years. Its path can be considered to be a very elongated ellipse. Other comets follow similar paths in space. These orbital paths can be studied using systems of equations. These systems, however, are different from the ones we considered in the previous section because the equations are not linear.
Figure \(\PageIndex{1}\): Halley’s Comet (credit: "NASA Blueshift"/Flickr)
In this section, we will consider the intersection of a parabola and a line, a circle and a line, and a circle and an ellipse. The methods for solving systems of nonlinear equations are similar to those for linear equations.
Solving a System of Nonlinear Equations Using Substitution
A system of nonlinear equations is a system of two or more equations in two or more variables containing at least one equation that is not linear. Recall that a linear equation can take the form \(Ax+By+C=0\). Any equation that cannot be written in this form is nonlinear. The substitution method we used for linear systems is the same method we will use for nonlinear systems. We solve one equation for one variable and then substitute the result into the second equation to solve for another variable, and so on. There is, however, a variation in the possible outcomes.
Intersection of a Parabola and a Line
There are three possible types of solutions for a system of nonlinear equations involving a parabola and a line.
POSSIBLE TYPES OF SOLUTIONS FOR POINTS OF INTERSECTION OF A PARABOLA AND A LINE
Figure \(\PageIndex{2}\) illustrates possible solution sets for a system of equations involving a parabola and a line.
No solution - The line will never intersect the parabola.
One solution - The line is tangent to the parabola and intersects the parabola at exactly one point.
Two solutions - The line crosses on the inside of the parabola and intersects the parabola at two points.
Figure \(\PageIndex{2}\)
Example \(\PageIndex{1}\): Solving a System of Nonlinear Equations Representing a Parabola and a Line
Solve the system of equations.
\[\begin{align*} x−y &= −1\nonumber \\ y &= x^2+1 \nonumber \end{align*}\]
Solution
Solve the first equation for \(x\) and then substitute the resulting expression into the second equation.
\[\begin{align*} x−y &=−1\nonumber \\ x &= y−1 \;\; & \text{Solve for }x.\nonumber \\\nonumber \\ y &=x^2+1\nonumber \\ y & ={(y−1)}^2+1 \;\; & \text{Substitute expression for }x. \nonumber \end{align*}\]
Expand the equation and set it equal to zero.
\[ \begin{align*} y & ={(y−1)}^2+1\nonumber \\ &=(y^2−2y+1)+1\nonumber \\ &=y^2−2y+2\nonumber \\ 0 &= y^2−3y+2\nonumber \\ &= (y−2)(y−1) \nonumber \end{align*}\]
Solving for \(y\) gives \(y=2\) and \(y=1\). Next, substitute each value for \(y\) into the first equation to solve for \(x\). Always substitute the value into the linear equation to check for extraneous solutions.
\[\begin{align*} x−y &=−1\nonumber \\ x−(2) &= −1\nonumber \\ x &= 1\nonumber \\ x−(1) &=−1\nonumber \\ x &= 0 \nonumber \end{align*}\]
The solutions are \((1,2)\) and \((0,1)\),which can be verified by substituting these \((x,y)\) values into both of the original equations (Figure \(\PageIndex{3}\)).
Figure \(\PageIndex{3}\)
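As an optional cross-check (not part of the original example), the same system can be handed to sympy; the snippet below assumes sympy is available:

```python
# Solve Example 1 symbolically: x - y = -1 and y = x^2 + 1.
from sympy import symbols, Eq, solve

x, y = symbols('x y')
solutions = solve([Eq(x - y, -1), Eq(y, x**2 + 1)], [x, y])
print(solutions)   # [(0, 1), (1, 2)]
```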
Q&A: Could we have substituted values for \(y\) into the second equation to solve for \(x\) in last example?
Yes, but because \(x\) is squared in the second equation this could give us extraneous solutions for \(x\).
For \(y=1\)
\[\begin{align*} y &= x^2+1\nonumber \\ 1 &= x^2+1\nonumber \\ x^2 &= 0\nonumber \\ x &= \pm \sqrt{0}=0 \nonumber \end{align*}\]
This gives us the same value as in the solution.
For \(y=2\)
\[\begin{align*} y &= x^2+1\nonumber \\ 2 &= x^2+1\nonumber \\ x^2 &= 1\nonumber \\ x &= \pm \sqrt{1}=\pm 1 \nonumber \end{align*}\]
Notice that \(−1\) is an extraneous solution.
Exercise \(\PageIndex{1}\)
Solve the given system of equations by substitution.
\[\begin{align*} 3x−y &= −2\nonumber \\ 2x^2−y &= 0 \nonumber \end{align*}\]
Answer
\(\left(−\dfrac{1}{2},\dfrac{1}{2}\right)\) and \((2,8)\)
Intersection of a Circle and a Line
Just as with a parabola and a line, there are three possible outcomes when solving a system of equations representing a circle and a line.
POSSIBLE TYPES OF SOLUTIONS FOR THE POINTS OF INTERSECTION OF A CIRCLE AND A LINE
Figure \(\PageIndex{4}\) illustrates possible solution sets for a system of equations involving a circle and a line.
No solution - The line does not intersect the circle.
One solution - The line is tangent to the circle and intersects the circle at exactly one point.
Two solutions - The line crosses the circle and intersects it at two points.
Figure \(\PageIndex{4}\)
Example \(\PageIndex{2}\): Finding the Intersection of a Circle and a Line by Substitution
Find the intersection of the given circle and the given line by substitution.
\[\begin{align*} x^2+y^2 &= 5\nonumber \\ y &= 3x−5 \nonumber \end{align*}\]
Solution
One of the equations has already been solved for \(y\). We will substitute \(y=3x−5\) into the equation for the circle.
\[\begin{align*} x^2+{(3x−5)}^2 &= 5\nonumber \\ x^2+9x^2−30x+25 &= 5\nonumber \\ 10x^2−30x+20 &= 0 \end{align*} \]
Now, we factor and solve for \(x\).
\[\begin{align*} 10(x^2−3x+2) &= 0\nonumber \\ 10(x−2)(x−1) &= 0\nonumber \\ x &= 2\nonumber \\ x &= 1 \nonumber \end{align*}\]
Substitute the two \(x\)-values into the original linear equation to solve for \(y\).
\[\begin{align*} y &= 3(2)−5\nonumber \\ &= 1\nonumber \\ y &= 3(1)−5\nonumber \\ &= −2 \nonumber \end{align*}\]
The line intersects the circle at \((2,1)\) and \((1,−2)\),which can be verified by substituting these \((x,y)\) values into both of the original equations (Figure \(\PageIndex{5}\)).
Figure \(\PageIndex{5}\)
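The same substitution steps can be mirrored in sympy (an optional aside, assuming sympy is available):

```python
# Substitute y = 3x - 5 into x^2 + y^2 = 5 and solve, as in Example 2.
from sympy import symbols, expand, solve

x = symbols('x')
y_line = 3*x - 5                          # the line, already solved for y
circle = expand(x**2 + y_line**2 - 5)     # 10*x**2 - 30*x + 20
xs = solve(circle, x)                     # [1, 2]
print([(xv, 3*xv - 5) for xv in xs])      # [(1, -2), (2, 1)]
```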
Exercise \(\PageIndex{2}\)
Solve the system of nonlinear equations.
\[\begin{align*} x^2+y^2 &= 10\nonumber \\ x−3y &= −10 \nonumber \end{align*}\]
Answer
\((−1,3)\)
Solving a System of Nonlinear Equations Using Elimination
We have seen that substitution is often the preferred method when a system of equations includes a linear equation and a nonlinear equation. However, when both equations in the system have like variables of the second degree, solving them using elimination by addition is often easier than substitution. Generally, elimination is a far simpler method when the system involves only two equations in two variables (a two-by-two system), rather than a three-by-three system, as there are fewer steps. As an example, we will investigate the possible types of solutions when solving a system of equations representing a circle and an ellipse.
POSSIBLE TYPES OF SOLUTIONS FOR THE POINTS OF INTERSECTION OF A CIRCLE AND AN ELLIPSE
Figure \(\PageIndex{6}\) illustrates possible solution sets for a system of equations involving a circle and an ellipse.
No solution - The circle and ellipse do not intersect. One shape is inside the other or the circle and the ellipse are a distance away from the other.
One solution - The circle and ellipse are tangent to each other, and intersect at exactly one point.
Two solutions - The circle and the ellipse intersect at two points.
Three solutions - The circle and the ellipse intersect at three points.
Four solutions - The circle and the ellipse intersect at four points.
Figure \(\PageIndex{6}\)
Graphing a Nonlinear Inequality
All of the equations in the systems that we have encountered so far have involved equalities, but we may also encounter systems that involve inequalities. We have already learned to graph linear inequalities by graphing the corresponding equation, and then shading the region represented by the inequality symbol. Now, we will follow similar steps to graph a nonlinear
inequality so that we can learn to solve systems of nonlinear inequalities. A nonlinear inequality is an inequality containing a nonlinear expression. Graphing a nonlinear inequality is much like graphing a linear inequality.
Recall that when the inequality is greater than, \(y>a\), or less than, \(y<a\), the graph is drawn with a dashed line. When the inequality is greater than or equal to, \(y≥a\), or less than or equal to, \(y≤a\), the graph is drawn with a solid line. The graphs will create regions in the plane, and we will test each region for a solution. If one point in the region works, the whole region works. That is the region we shade (Figure \(\PageIndex{8}\)).
Figure \(\PageIndex{8}\): (a) an example of \(y>a\); (b) an example of \(y≥a\); (c) an example of \(y<a\); (d) an example of \(y≤a\)
How to: Given an inequality bounded by a parabola, sketch a graph
1. Graph the parabola as if it were an equation. This is the boundary for the region that is the solution set.
2. If the boundary is included in the region (the operator is \(≤\) or \(≥\)), the parabola is graphed as a solid line.
3. If the boundary is not included in the region (the operator is \(<\) or \(>\)), the parabola is graphed as a dashed line.
4. Test a point in one of the regions to determine whether it satisfies the inequality statement. If the statement is true, the solution set is the region including the point. If the statement is false, the solution set is the region on the other side of the boundary line.
5. Shade the region representing the solution set.
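One possible way to carry out these steps in code is sketched below for the inequality \(y>x^2+1\) (an illustrative matplotlib sketch of my own, not part of the original lesson):

```python
import numpy as np
import matplotlib.pyplot as plt

xs = np.linspace(-3, 3, 400)
ys = np.linspace(-1, 10, 400)
X, Y = np.meshgrid(xs, ys)

# Step 1-3: graph the boundary parabola; dashed because the operator is ">".
plt.plot(xs, xs**2 + 1, 'k--', label='y = x^2 + 1 (boundary, not included)')

# Steps 4-5: the test point (0, 2) satisfies y > x^2 + 1, so shade that region.
plt.contourf(X, Y, (Y > X**2 + 1).astype(int), levels=[0.5, 1.5], alpha=0.3)

plt.legend()
plt.show()
```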
Example \(\PageIndex{4}\): Graphing an Inequality for a Parabola
Graph the inequality \(y>x^2+1\).
Solution
First, graph the corresponding equation \(y=x^2+1\). Since \(y>x^2+1\) has a greater than symbol, we draw the graph with a dashed line. Then we choose points to test both inside and outside the parabola. Let’s test the points
\((0,2)\) and \((2,0)\). One point is clearly inside the parabola and the other point is clearly outside.
\[\begin{align*} y &> x^2+1\nonumber \\ 2 &> (0)^2+1\nonumber \\ 2 &>1 & \text{True}\nonumber \\\nonumber \\\nonumber \\ 0 &> (2)^2+1\nonumber \\ 0 &> 5 & \text{False} \nonumber \end{align*}\]
The graph is shown in Figure \(\PageIndex{9}\). We can see that the solution set consists of all points inside the parabola, but not on the graph itself.
Figure \(\PageIndex{9}\)
Graphing a System of Nonlinear Inequalities
Now that we have learned to graph nonlinear inequalities, we can learn how to graph systems of nonlinear inequalities. A
system of nonlinear inequalities is a system of two or more inequalities in two or more variables containing at least one inequality that is not linear. Graphing a system of nonlinear inequalities is similar to graphing a system of linear inequalities. The difference is that our graph may result in more shaded regions that represent a solution than we find in a system of linear inequalities. The solution to a nonlinear system of inequalities is the region of the graph where the shaded regions of the graph of each inequality overlap, or where the regions intersect, called the feasible region.
How to: Given a system of nonlinear inequalities, sketch a graph
1. Find the intersection points by solving the corresponding system of nonlinear equations.
2. Graph the nonlinear equations.
3. Find the shaded regions of each inequality.
4. Identify the feasible region as the intersection of the shaded regions of each inequality or the set of points common to each inequality.
Example \(\PageIndex{5}\): Graphing a System of Inequalities
Graph the given system of inequalities.
\[\begin{align*} x^2−y &≤ 0\nonumber \\ 2x^2+y &≤ 12 \nonumber \end{align*}\]
Solution
These two equations are clearly parabolas. We can find the points of intersection by the elimination process: Add both equations and the variable \(y\) will be eliminated. Then we solve for \(x\).
\[\begin{align*} x^2−y = 0&\nonumber \\ \underline{2x^2+y=12}&\nonumber \\ 3x^2=12&\nonumber \\ x^2=4 &\nonumber \\ x=\pm 2 & \nonumber \end{align*}\]
Substitute the \(x\)-values into one of the equations and solve for \(y\).
\[\begin{align*} x^2−y &= 0\nonumber \\ {(2)}^2−y &= 0\nonumber \\ 4−y &= 0\nonumber \\ y &= 4\nonumber \\\nonumber \\ {(−2)}^2−y &= 0\nonumber \\ 4−y &= 0\nonumber \\ y &= 4 \nonumber \end{align*}\]
The two points of intersection are \((2,4)\) and \((−2,4)\). Notice that the equations can be rewritten as follows.
\[\begin{align*} x^2-y & ≤ 0\nonumber \\ x^2 &≤ y\nonumber \\ y &≥ x^2\nonumber \\\nonumber \\\nonumber \\ 2x^2+y &≤ 12\nonumber \\ y &≤ −2x^2+12 \nonumber \end{align*}\]
Graph each inequality. See Figure \(\PageIndex{10}\). The feasible region is the region between the two equations bounded by \(2x^2+y≤12\) on the top and \(x^2−y≤0\) on the bottom.
Figure \(\PageIndex{10}\)
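For reference, the intersection points of the two boundary parabolas can be confirmed with sympy (an optional aside, assuming sympy is available):

```python
# Intersection points of x^2 - y = 0 and 2x^2 + y = 12 from Example 5.
from sympy import symbols, Eq, solve

x, y = symbols('x y')
print(solve([Eq(x**2 - y, 0), Eq(2*x**2 + y, 12)], [x, y]))   # [(-2, 4), (2, 4)]
```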
Exercise \(\PageIndex{5}\)
Graph the given system of inequalities.
\[\begin{align*} y &≥ x^2−1\nonumber \\ x−y &≥ −1 \nonumber \end{align*}\]
Answer
Shade the area bounded by the two curves, above the quadratic and below the line.
Figure \(\PageIndex{11}\)
Media
Access these online resources for additional instruction and practice with nonlinear equations.
Key Concepts
There are three possible types of solutions to a system of equations representing a line and a parabola: (1) no solution, the line does not intersect the parabola; (2) one solution, the line is tangent to the parabola; and (3) two solutions, the line intersects the parabola in two points. See Example \(\PageIndex{1}\).
There are three possible types of solutions to a system of equations representing a circle and a line: (1) no solution, the line does not intersect the circle; (2) one solution, the line is tangent to the circle; (3) two solutions, the line intersects the circle in two points. See Example \(\PageIndex{2}\).
There are five possible types of solutions to the system of nonlinear equations representing an ellipse and a circle: (1) no solution, the circle and the ellipse do not intersect; (2) one solution, the circle and the ellipse are tangent to each other; (3) two solutions, the circle and the ellipse intersect in two points; (4) three solutions, the circle and ellipse intersect in three places; (5) four solutions, the circle and the ellipse intersect in four points. See Example \(\PageIndex{3}\).
An inequality is graphed in much the same way as an equation, except for > or <, we draw a dashed line and shade the region containing the solution set. See Example \(\PageIndex{4}\).
Inequalities are solved the same way as equalities, but solutions to systems of inequalities must satisfy both inequalities. See Example \(\PageIndex{5}\). |
Seeing that in the Chomsky Hierarchy Type 3 languages can be recognised by a DFA (which has no stacks), Type 2 by a DFA with one stack (i.e. a push-down automaton) and Type 0 by a DFA with two stacks (i.e. with one queue, i.e. with a tape, i.e. by a Turing Machine), how do Type 1 languages fit in...
Considering this pseudo-code of a bubblesort:
FOR i := 0 TO arraylength(list) STEP 1
  switched := false
  FOR j := 0 TO arraylength(list)-(i+1) STEP 1
    IF list[j] > list[j + 1] THEN
      switch(list,j,j+1)
      switched := true
    ENDIF
  NEXT
  IF switch...
Let's consider a memory segment (whose size can grow or shrink, like a file, when needed) on which you can perform two basic memory allocation operations involving fixed size blocks:allocation of one blockfreeing a previously allocated block which is not used anymore.Also, as a requiremen...
Rice's theorem tell us that the only semantic properties of Turing Machines (i.e. the properties of the function computed by the machine) that we can decide are the two trivial properties (i.e. always true and always false).But there are other properties of Turing Machines that are not decidabl...
People often say that LR(k) parsers are more powerful than LL(k) parsers. These statements are vague most of the time; in particular, should we compare the classes for a fixed $k$ or the union over all $k$? So how is the situation really? In particular, I am interested in how LL(*) fits in.As f...
Since the current FAQs say this site is for students as well as professionals, what will the policy on homework be?What are the guidelines that a homework question should follow if it is to be asked? I know on math.se they loosely require that the student make an attempt to solve the question a...
I really like the new beta theme, I guess it is much more attractive to newcomers than the sketchy one (which I also liked). Thanks a lot!However I'm slightly embarrassed because I can't read what I type, both in the title and in the body of a post. I never encountered the problem on other Stac...
This discussion started in my other question "Will Homework Questions Be Allowed?".Should we allow the tag? It seems that some of our sister sites (Programmers, stackoverflow) have not allowed the tag as it isn't constructive to their sites. But other sites (Physics, Mathematics) do allow the s...
There have been many questions on CST that were either closed, or just not answered because they weren't considered research level. May those questions (as long as they are of good quality) be reposted or moved here?I have a particular example question in mind: http://cstheory.stackexchange.com...
Ok, so in most introductory Algorithm classes, either BigO or BigTheta notation are introduced, and a student would typically learn to use one of these to find the time complexity.However, there are other notations, such as BigOmega and SmallOmega. Are there any specific scenarios where one not...
Many textbooks cover intersection types in the lambda-calculus. The typing rules for intersection can be defined as follows (on top of the simply typed lambda-calculus with subtyping):$$\dfrac{\Gamma \vdash M : T_1 \quad \Gamma \vdash M : T_2}{\Gamma \vdash M : T_1 \wedge T_2}...
I expect to see pseudo code and maybe even HPL code on regular basis. I think syntax highlighting would be a great thing to have.On Stackoverflow, code is highlighted nicely; the schema used is inferred from the respective question's tags. This won't work for us, I think, because we probably wo...
Sudoku generation is hard enough. It is much harder when you have to make an application that makes a completely random Sudoku.The goal is to make a completely random Sudoku in Objective-C (C is welcome). This Sudoku generator must be easily modified, and must support the standard 9x9 Sudoku, a...
I have observed that there are two different types of states in branch prediction. In superscalar execution, where the branch prediction is very important, and it is mainly in execution delay rather than fetch delay. In the instruction pipeline, where the fetching is more of a problem since the inst...
Is there any evidence suggesting that time spent on writing up, or thinking about the requirements will have any effect on the development time? Study done by Standish (1995) suggests that incomplete requirements partially (13.1%) contributed to the failure of the projects. Are there any studies ...
NPI is the class of NP problems without any polynomial time algorithms and not known to be NP-hard. I'm interested in problems such that a candidate problem in NPI is reducible to it but it is not known to be NP-hard and there is no known reduction from it to the NPI problem. Are there any known ...
I am reading Mining Significant Graph Patterns by Leap Search (Yan et al., 2008), and I am unclear on how their technique translates to the unlabeled setting, since $p$ and $q$ (the frequency functions for positive and negative examples, respectively) are omnipresent.On page 436 however, the au...
This is my first time to be involved in a site beta, and I would like to gauge the community's opinion on this subject.On StackOverflow (and possibly Math.SE), questions on introductory formal language and automata theory pop up... questions along the lines of "How do I show language L is/isn't...
This is my first time to be involved in a site beta, and I would like to gauge the community's opinion on this subject.Certainly, there are many kinds of questions which we could expect to (eventually) be asked on CS.SE; lots of candidates were proposed during the lead-up to the Beta, and a few...
This is somewhat related to this discussion, but different enough to deserve its own thread, I think.What would be the site policy regarding questions that are generally considered "easy", but may be asked during the first semester of studying computer science. Example:"How do I get the symme...
EPAL, the language of even palindromes, is defined as the language generated by the following unambiguous context-free grammar:$S \rightarrow a a$$S \rightarrow b b$$S \rightarrow a S a$$S \rightarrow b S b$EPAL is the 'bane' of many parsing algorithms: I have yet to enc...
Assume a computer has a precise clock which is not initialized. That is, the time on the computer's clock is the real time plus some constant offset. The computer has a network connection and we want to use that connection to determine the constant offset $B$.The simple method is that the compu...
Consider an inductive type which has some recursive occurrences in a nested, but strictly positive location. For example, trees with finite branching with nodes using a generic list data structure to store the children.
Inductive LTree : Set := Node : list LTree -> LTree.
The naive way of d...
I was editing a question and I was about to tag it bubblesort, but it occurred to me that tag might be too specific. I almost tagged it sorting but its only connection to sorting is that the algorithm happens to be a type of sort, it's not about sorting per se.So should we tag questions on a pa...
To what extent are questions about proof assistants on-topic?I see four main classes of questions:Modeling a problem in a formal setting; going from the object of study to the definitions and theorems.Proving theorems in a way that can be automated in the chosen formal setting.Writing a co...
Should topics in applied CS be on topic? These are not really considered part of TCS; examples include: Computer architecture (Operating system, Compiler design, Programming language design), Software engineering, Artificial intelligence, Computer graphics, Computer security. Source: http://en.wik...
I asked one of my current homework questions as a test to see what the site as a whole is looking for in a homework question. It's not a difficult question, but I imagine this is what some of our homework questions will look like.
I have an assignment for my data structures class. I need to create an algorithm to see if a binary tree is a binary search tree as well as count how many complete branches are there (a parent node with both left and right children nodes) with an assumed global counting variable.So far I have...
It's a known fact that every LTL formula can be expressed by a Buchi $\omega$-automata. But, apparently, Buchi automata is a more powerful, expressive model. I've heard somewhere that Buchi automata are equivalent to linear-time $\mu$-calculus (that is, $\mu$-calculus with usual fixpoints and onl...
Let us call a context-free language deterministic if and only if it can be accepted by a deterministic push-down automaton, and nondeterministic otherwise.Let us call a context-free language inherently ambiguous if and only if all context-free grammars which generate the language are ambiguous,...
One can imagine using a variety of data structures for storing information for use by state machines. For instance, push-down automata store information in a stack, and Turing machines use a tape. State machines using queues, and ones using two multiple stacks or tapes, have been shown to be equi...
Though in the future it would probably be a good idea to more thoroughly explain your thinking behind your algorithm and where exactly you're stuck. Because as you can probably tell from the answers, people seem to be unsure on where exactly you need directions in this case.
What will the policy on providing code be?In my question it was commented that it might not be on topic as it seemes like I was asking for working code. I wrote my algorithm in pseudo-code because my problem didnt ask for working C++ or whatever language.Should we only allow pseudo-code here?... |
And I think people said that reading first chapter of Do Carmo mostly fixed the problems in that regard. The only person I asked about the second pset said that his main difficulty was in solving the ODEs
Yeah here there's the double whammy in grad school that every grad student has to take the full year of algebra/analysis/topology, while a number of them already don't care much for some subset, and then they only have to pass rather the class
I know 2 years ago apparently it mostly avoided commutative algebra, half because the professor himself doesn't seem to like it that much and half because he was like yeah the algebraists all place out so I'm assuming everyone here is an analyst and doesn't care about commutative algebra
Then the year after another guy taught and made it mostly commutative algebra + a bit of varieties + Cech cohomology at the end from nowhere and everyone was like uhhh. Then apparently this year was more of an experiment, in part from requests to make things more geometric
It's got 3 "underground" floors (quotation marks because the place is on a very tall hill so the first 3 floors are a good bit above the street), and then 9 floors above ground. The grad lounge is in the top floor and overlooks the city and lake, it's real nice
The basement floors have the library and all the classrooms (each of them has a lot more area than the higher ones), floor 1 is basically just the entrance, I'm not sure what's on the second floor, 3-8 is all offices, and 9 has the grad lounge mainly
And then there's one weird area called the math bunker that's trickier to access, you have to leave the building from the first floor, head outside (still walking on the roof of the basement floors), go to this other structure, and then get in. Some number of grad student cubicles are there (other grad students get offices in the main building)
It's hard to get a feel for which places are good at undergrad math. Highly ranked places are known for having good researchers but there's no "How well does this place teach?" ranking which is kinda more relevant if you're an undergrad
I think interest might have started the trend, though it is true that grad admissions now is starting to make it closer to an expectation (friends of mine say that for experimental physics, classes and all definitely don't cut it anymore)
In math I don't have a clear picture. It seems there are a lot of Mickey Mouse projects that don't seem to help people much, but more and more people seem to do more serious things and that seems to become a bonus
One of my professors said it to describe a bunch of REUs, basically boils down to problems that some of these give their students which nobody really cares about but which undergrads could work on and get a paper out of
@TedShifrin i think universities have been ostensibly a game of credentialism for a long time, they just used to be gated off to a lot more people than they are now (see: ppl from backgrounds like mine) and now that budgets shrink to nothing (while administrative costs balloon) the problem gets harder and harder for students
In order to show that $x=0$ is asymptotically stable, one needs to show that $$\forall \varepsilon > 0, \; \exists\, T > 0 \; \mathrm{s.t.} \; t > T \implies || x ( t ) - 0 || < \varepsilon.$$The intuitive sketch of the proof is that one has to fit a sublevel set of continuous functions $...
"If $U$ is a domain in $\Bbb C$ and $K$ is a compact subset of $U$, then for all holomorphic functions on $U$, we have $\sup_{z \in K}|f(z)| \leq C_K \|f\|_{L^2(U)}$ with $C_K$ depending only on $K$ and $U$" this took me way longer than it should have
Well, $A$ has these two distinct eigenvalues meaning that $A$ can be diagonalised to a diagonal matrix with these two values as its diagonal. What will that mean when multiplied to a given vector (x,y) and how will the magnitude of that vector change?
Alternately, compute the operator norm of $A$ and see if it is larger or smaller than 2, 1/2
Generally, speaking, given. $\alpha=a+b\sqrt{\delta}$, $\beta=c+d\sqrt{\delta}$ we have that multiplication (which I am writing as $\otimes$) is $\alpha\otimes\beta=(a\cdot c+b\cdot d\cdot\delta)+(b\cdot c+a\cdot d)\sqrt{\delta}$
Yep, the reason I am exploring alternative routes of showing associativity is because writing out three elements worth of variables is taking up more than a single line in Latex, and that is really bugging my desire to keep things straight.
hmm... I wonder if you can argue about the rationals forming a ring (hence using commutativity, associativity and distributivitity). You cannot do that for the field you are calculating, but you might be able to take shortcuts by using the multiplication rule and then properties of the ring $\Bbb{Q}$
for example writing $x = ac+bd\delta$ and $y = bc+ad$ we then have $(\alpha \otimes \beta) \otimes \gamma = (xe +yf\delta) + (ye + xf)\sqrt{\delta}$ and then you can argue with the ring property of $\Bbb{Q}$ thus allowing you to deduce $\alpha \otimes (\beta \otimes \gamma)$
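(Not part of the original exchange, just an illustration:) associativity of that multiplication rule can also be checked symbolically; a minimal SymPy sketch, representing each element as a pair (rational part, coefficient of $\sqrt{\delta}$):

# Symbolic check of associativity for the rule
# (a + b*sqrt(delta)) ⊗ (c + d*sqrt(delta)) = (ac + bd*delta) + (bc + ad)*sqrt(delta),
# representing each element as the pair (rational part, sqrt(delta) part).
import sympy as sp

a, b, c, d, e, f, delta = sp.symbols('a b c d e f delta')

def mult(u, v):
    # u = (u0, u1) stands for u0 + u1*sqrt(delta)
    return (u[0]*v[0] + u[1]*v[1]*delta, u[1]*v[0] + u[0]*v[1])

alpha, beta, gamma = (a, b), (c, d), (e, f)
left = mult(mult(alpha, beta), gamma)
right = mult(alpha, mult(beta, gamma))

# Both components agree identically as polynomials in a..f and delta.
print(sp.simplify(left[0] - right[0]), sp.simplify(left[1] - right[1]))  # prints: 0 0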
I feel like there's a vague consensus that an arithmetic statement is "provable" if and only if ZFC proves it. But I wonder what makes ZFC so great, that it's the standard working theory by which we judge everything.
I'm not sure if I'm making any sense. Let me know if I should either clarify what I mean or shut up. :D
Associativity proofs in general have no shortcuts for arbitrary algebraic systems, that is why non associative algebras are more complicated and need things like Lie algebra machineries and morphisms to make sense of
One aspect, which I will illustrate, of the "push-button" efficacy of Isabelle/HOL is its automation of the classic "diagonalization" argument by Cantor (recall that this states that there is no surjection from the naturals to its power set, or more generally any set to its power set).theorem ...
The axiom of triviality is also used extensively in computer verification languages... take Cantor's Diagonalization theorem. It is obvious.
(but seriously, the best tactic is over powered...)
Extensions are such a powerful idea. I wonder if there exists an algebraic structure such that any extension of it will produce a contradiction. O wait, there are maximal algebraic structures such that given some ordering, it is the largest possible, e.g. the surreals are the largest field possible
It says on Wikipedia that any ordered field can be embedded in the Surreal number system. Is this true? How is it done, or if it is unknown (or unknowable) what is the proof that an embedding exists for any ordered field?
Here's a question for you: We know that no set of axioms will ever decide all statements, from Gödel's Incompleteness Theorems. However, do there exist statements that cannot be decided by any set of axioms except ones which contain one or more axioms dealing directly with that particular statement?
"Infinity exists" comes to mind as a potential candidate statement.
Well, take ZFC as an example, CH is independent of ZFC, meaning you cannot prove nor disprove CH using anything from ZFC. However, there are many equivalent axioms to CH or derives CH, thus if your set of axioms contain those, then you can decide the truth value of CH in that system
@Rithaniel That is really the crux on those rambles about infinity I made in this chat some weeks ago. I wonder to show that is false by finding a finite sentence and procedure that can produce infinity
but so far failed
Put it in another way, an equivalent formulation of that (possibly open) problem is:
> Does there exists a computable proof verifier P such that the axiom of infinity becomes a theorem without assuming the existence of any infinite object?
If you were to show that you can attain infinity from finite things, you'd have a bombshell on your hands. It's widely accepted that you can't. In fact, I believe there are some proofs floating around that you can't attain infinity from the finite.
My philosophy of infinity however is not good enough as implicitly pointed out when many users who engaged with my rambles always managed to find counterexamples that escape every definition of an infinite object I proposed, which is why you don't see my rambles about infinity in recent days, until I finish reading that philosophy of infinity book
The knapsack problem or rucksack problem is a problem in combinatorial optimization: Given a set of items, each with a weight and a value, determine the number of each item to include in a collection so that the total weight is less than or equal to a given limit and the total value is as large as possible. It derives its name from the problem faced by someone who is constrained by a fixed-size knapsack and must fill it with the most valuable items.The problem often arises in resource allocation where there are financial constraints and is studied in fields such as combinatorics, computer science...
O great, given a transcendental $s$, computing $\min_P(|P(s)|)$ is a knapsack problem
hmm...
By the fundamental theorem of algebra, every complex polynomial $P$ can be expressed as:
$$P(x) = a_n \prod_{k=1}^n (x - \lambda_k)$$
If the coefficients of $P$ are natural numbers, then all $\lambda_k$ are algebraic
Thus given $s$ transcendental, minimising $|P(s)|$ can be set up as follows:
The first thing I think of with that particular one is to replace the $(1+z^2)$ with $z^2$. Though, this is just at a cursory glance, so it would be worth checking to make sure that such a replacement doesn't have any ugly corner cases.
In number theory, a Liouville number is a real number x with the property that, for every positive integer n, there exist integers p and q with q > 1 such that $0 < \left|x - \frac{p}{q}\right| < \frac{1}{q^n}$.
Do these still exist if the axiom of infinity is blown up?
Hmmm...
Under a finitist framework where only potential infinity in the form of natural induction exists, define the partial sum:
$$\sum_{k=1}^M \frac{1}{b^{k!}}$$
The resulting partial sums for each M form a monotonically increasing sequence, which converges by ratio test
therefore by induction, there exists some number $L$ that is the limit of the above partial sums. The proof of transcendence can then proceed as usual, thus transcendental numbers can be constructed in a finitist framework
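(A small illustration of my own, not from the chat:) the partial sums above can be computed exactly with rationals; with $b = 10$ this is Liouville's constant, and each partial sum is a perfectly finite object:

# Partial sums S_M = sum_{k=1}^{M} 1/b^{k!} computed exactly with rationals (b = 10 here,
# which gives Liouville's constant).  The gap S_{M+1} - S_M = 1/b^{(M+1)!} shrinks faster
# than any geometric ratio, which is the content of the ratio-test remark above.
from fractions import Fraction
from math import factorial

b = 10
S = Fraction(0)
for M in range(1, 6):
    S += Fraction(1, b ** factorial(M))
    print(M, float(S))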
There's this theorem in Spivak's book of Calculus:Theorem 7Suppose that $f$ is continuous at $a$, and that $f'(x)$ exists for all $x$ in some interval containing $a$, except perhaps for $x=a$. Suppose, moreover, that $\lim_{x \to a} f'(x)$ exists. Then $f'(a)$ also exists, and$$f'...
and neither Rolle nor mean value theorem need the axiom of choice
Thus under finitism, we can construct at least one transcendental number. If we throw away all transcendental functions, it means we can construct a number that cannot be reached from any algebraic procedure
Therefore, the conjecture is that actual infinity has a close relationship to transcendental numbers. Anything else I need to finish that book to comment
typo: neither Rolle nor mean value theorem need the axiom of choice nor an infinite set
> are there palindromes such that the explosion of palindromes is a palindrome nonstop palindrome explosion palindrome prime square palindrome explosion palirome prime explosion explosion palindrome explosion cyclone cyclone cyclone hurricane palindrome explosion palindrome palindrome explosion explosion cyclone clyclonye clycone mathphile palirdlrome explosion rexplosion palirdrome expliarome explosion exploesion |
Interested in the following function:$$ \Psi(s)=\sum_{n=2}^\infty \frac{1}{\pi(n)^s}, $$where $\pi(n)$ is the prime counting function.When $s=2$ the sum becomes the following:$$ \Psi(2)=\sum_{n=2}^\infty \frac{1}{\pi(n)^2}=1+\frac{1}{2^2}+\frac{1}{2^2}+\frac{1}{3^2}+\frac{1}{3^2}+\frac{1...
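(A quick numerical sketch, not from the original post:) the truncated sum can be estimated with SymPy's prime-counting function; each prime $p$ contributes one copy of $1/\pi(p)^2$ for every integer up to the next prime, which is why the series opens as $1 + 1/2^2 + 1/2^2 + 1/3^2 + \dots$:

# Truncated numerical estimate of Psi(s) = sum_{n>=2} 1/primepi(n)^s at s = 2.
from sympy import primepi

def psi(s, N):
    return sum(1 / int(primepi(n)) ** s for n in range(2, N + 1))

for N in (100, 1000, 10000):
    print(N, psi(2, N))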
Consider a random binary string where each bit can be set to 1 with probability $p$.Let $Z[x,y]$ denote the number of arrangements of a binary string of length $x$ and the $x$-th bit is set to 1. Moreover, $y$ bits are set 1 including the $x$-th bit and there are no runs of $k$ consecutive zer...
The field $\overline F$ is called an algebraic closure of $F$ if $\overline F$ is algebraic over $F$ and if every polynomial $f(x)\in F[x]$ splits completely over $\overline F$.
Why in def of algebraic closure, do we need $\overline F$ is algebraic over $F$? That is, if we remove '$\overline F$ is algebraic over $F$' condition from def of algebraic closure, do we get a different result?
Consider an observer located at radius $r_o$ from a Schwarzschild black hole of radius $r_s$. The observer may be inside the event horizon ($r_o < r_s$).Suppose the observer receives a light ray from a direction which is at angle $\alpha$ with respect to the radial direction, which points outwa...
@AlessandroCodenotti That is a poor example, as the algebraic closure of the latter is just $\mathbb{C}$ again (assuming choice). But starting with $\overline{\mathbb{Q}}$ instead and comparing to $\mathbb{C}$ works.
Seems like everyone is posting character formulas for simple modules of algebraic groups in positive characteristic on arXiv these days. At least 3 papers with that theme the past 2 months.
Also, I have a definition that says that a ring is a UFD if every element can be written as a product of irreducibles which is unique up to units and reordering. It doesn't say anything about this factorization being finite in length. Is that often part of the definition or attained from the definition (I don't see how it could be the latter).
Well, that then becomes a chicken and the egg question. Did we have the reals first and simplify from them to more abstract concepts or did we have the abstract concepts first and build them up to the idea of the reals.
I've been told that the rational numbers from zero to one form a countable infinity, while the irrational ones form an uncountable infinity, which is in some sense "larger". But how could that be? There is always a rational between two irrationals, and always an irrational between two rationals, ...
I was watching this lecture, and in reference to above screenshot, the professor there says: $\frac1{1+x^2}$ has a singularity at $i$ and at $-i$, and power series expansions are limits of polynomials, and limits of polynomials can never give us a singularity and then keep going on the other side.
On page 149 Hatcher introduces the Mayer-Vietoris sequence, along with two maps $\Phi : H_n(A \cap B) \to H_n(A) \oplus H_n(B)$ and $\Psi : H_n(A) \oplus H_n(B) \to H_n(X)$. I've searched through the book, but I couldn't find the definitions of these two maps. Does anyone know how to define them or where their definition appears in Hatcher's book?
suppose $\sum a_n z_0^n = L$, so $a_n z_0^n \to 0$, so $|a_n z_0^n| < \dfrac12$ for sufficiently large $n$, so for $|z| < |z_0|$ we have $|a_n z^n| = |a_n z_0^n| \left(\left|\dfrac{z}{z_0}\right|\right)^n < \dfrac12 \left(\left|\dfrac{z}{z_0}\right|\right)^n$, so $a_n z^n$ is absolutely summable, so $a_n z^n$ is summable
Let $g : [0,\frac{ 1} {2} ] → \mathbb R$ be a continuous function. Define $g_n : [0,\frac{ 1} {2} ] → \mathbb R$ by $g_1 = g$ and $g_{n+1}(t) = \int_0^t g_n(s) ds,$ for all $n ≥ 1.$ Show that $lim_{n→∞} n!g_n(t) = 0,$ for all $t ∈ [0,\frac{1}{2}]$ .
Can you give some hint?
My attempt:- $t\in [0,1/2]$ Consider the sequence $a_n(t)=n!g_n(t)$
If $\lim_{n\to \infty} \frac{a_{n+1}}{a_n}<1$, then it converges to zero.
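One possible route (a sketch of a standard bound, not necessarily the ratio-test one above): by induction $|g_n(t)| \le \|g\|_\infty \dfrac{t^{n-1}}{(n-1)!}$ on $[0,\tfrac12]$, so
$$n!\,|g_n(t)| \le \|g\|_\infty\, n\, t^{n-1} \le \|g\|_\infty\, \frac{n}{2^{n-1}} \longrightarrow 0 \quad \text{as } n \to \infty.$$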
I have a bilinear functional that is bounded from below
I try to approximate the minimum by an ansatz function that is a linear combination
of any independent functions of the proper function space
I now obtain an expression that is bilinear in the coefficients
using the stationarity condition (all derivatives of the functional w.r.t. the coefficients = 0)
I get a set of $n$ equations with $n$ the number of coefficients
a set of n linear homogeneous equations in the $n$ coefficients
Now instead of "directly attempting to solve" the equations for the coefficients I rather look at the secular determinant, which should be zero, otherwise no non-trivial solution exists
This "characteristic polynomial" directly yields all permissible approximation values for the functional from my linear ansatz.
Avoiding the necessity to solve for the coefficients.
I have problems now formulating the question. But it strikes me that a direct solution of the equations can be circumvented and instead the values of the functional are directly obtained by using the condition that the determinant is zero.
I wonder if there is something deeper in the background, or so to say a more general principle.
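This reads like the Rayleigh–Ritz construction, where the "secular determinant" condition is exactly a generalized eigenvalue problem; a small numerical sketch of my own (random symmetric $H$ and positive-definite overlap $S$ standing in for the matrices produced by the bilinear functional and the ansatz):

# Rayleigh-Ritz sketch: stationarity of R(c) = (c^T H c)/(c^T S c) over an n-term ansatz
# gives the homogeneous system (H - lam*S) c = 0; a non-trivial c exists only where
# det(H - lam*S) = 0, so the permissible functional values are the generalized
# eigenvalues of (H, S), obtained without ever solving for the coefficients c.
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(0)
n = 4
M = rng.standard_normal((n, n))
H = M + M.T                       # symmetric matrix of the bilinear functional
S = np.eye(n) + 0.1 * (M @ M.T)   # symmetric positive-definite overlap matrix

lam = eigh(H, S, eigvals_only=True)      # generalized eigenvalues
print("stationary values:", lam)
print("det(H - lam*S) at those values:",
      [np.linalg.det(H - l * S) for l in lam])   # all ~ 0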
If $x$ is a prime number and a number $y$ exists which is the digit reverse of $x$ and is also a prime number, then there must exist an integer z in the mid way of $x, y$ , which is a palindrome and digitsum(z)=digitsum(x).
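A quick brute-force sanity check of that claim (my own sketch; it only tests, it proves nothing). For 2- and 3-digit reversible primes the midpoint does seem to come out as claimed, but the script also flags pairs such as (1031, 1301), whose midpoint 1166 is not a palindrome, so the statement appears to need extra hypotheses:

# Brute-force check: for primes x whose digit reversal y is also prime, is the midpoint
# z = (x + y)/2 always an integer palindrome with digitsum(z) = digitsum(x)?
from sympy import isprime

def digitsum(n):
    return sum(int(c) for c in str(n))

counterexamples = []
for x in range(2, 10000):
    if not isprime(x):
        continue
    y = int(str(x)[::-1])
    if not isprime(y):
        continue
    if (x + y) % 2:                          # midpoint not an integer
        counterexamples.append((x, y, "non-integer midpoint"))
        continue
    z = (x + y) // 2
    if str(z) != str(z)[::-1] or digitsum(z) != digitsum(x):
        counterexamples.append((x, y, z))

print(counterexamples[:10] or "no counterexamples below 10000")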
> Bekanntlich hat P. du Bois-Reymond zuerst die Existenz einer überall stetigen Funktion erwiesen, deren Fouriersche Reihe an einer Stelle divergiert. Herr H. A. Schwarz gab dann ein einfacheres Beispiel.
(Translation: It is well-known that Paul du Bois-Reymond was the first to demonstrate the existence of a continuous function with fourier series being divergent on a point. Afterwards, Hermann Amandus Schwarz gave an easier example.)
It's discussed very carefully (but no formula explicitly given) in my favorite introductory book on Fourier analysis. Körner's Fourier Analysis. See pp. 67-73. Right after that is Kolmogoroff's result that you can have an $L^1$ function whose Fourier series diverges everywhere!! |
FELIPE Online Manual
The following table provides links to webpages for various chapters of the User Manual. The full manual can be accessed as a pdf file here.
Introduction
FELIPE (Finite Element Learning Package) is a software package whose
primary objective is to help students understand the finite element method
in mathematics and engineering, and develop their own f.e. programs. Its advantage
over the f.e. textbooks which provide their example programs printed or on ftp
or cd-rom, is that it combines full, commented and documented source code
(in standard Fortran77) for the f.e. `main engines', with powerful interactive
graphics pre- and post-processors capable of generating complex, detailed meshes.
Because of this, it is also very suitable
for practising engineers and researchers as a low-cost alternative to the
many commercial ``black box'' packages on the market (which do not provide
source code).
The principal components of FELIPE are:

- PREFEL, a pre-processor, used to create the input data file which defines the finite element mesh, boundary conditions, material properties, loading, etc. The resulting data file is given a .dat filename extension. The pre-processor is provided as the executable file PREFEL.EXE;
- Three Basic-level Fortran77 `main engines' (for the 2D Poisson's equation, plane elasticity, and beam/frame analyses), the theory for which is summarized in this Manual. Each `main engine' reads an input file such as prob1.dat (created by the PREFEL pre-processor) and produces an output file prob1.out (for use with the post-processor) and a results file prob1.prt in printable format;
- Six further Advanced-level Fortran77 `main engines' for analysing a range of mathematics and engineering applications (e.g. viscoplasticity, thermoelasticity), which illustrate the main practical aspects of finite element programming. In particular, a wide range of 1D and 2D finite and infinite element types are used, and coding is provided for all the most important algorithms for equation solution (from Gaussian elimination to conjugate gradients with Incomplete Choleski preconditioning). The individual `main engines' are summarized below. There is also a file of input/output subroutines common to all the `main engines'. Each `engine' is provided in source code .for and executable .exe form;
- linking files (.inf) for each of the `main engines' if they are to be compiled, linked and run using the Salford FTN77 compiler;
- FELVUE, a post-processor which reads in a .out file and displays the results graphically. Displays include contouring, stress crosses, displacement vectors, deformed mesh, etc. This processor is also provided in executable form, namely in the file FELVUE.EXE. It is possible to produce PostScript files (with a .ps extension) of the graphical displays, for later printing;
- SALFLIBC.DLL, the Salford Fortran library needed to allow the programs to run on any PC;
- Sample .dat and .out files, for one or more example problems for each `main engine', also documented in this Manual and shown on the FELIPE website.

By this means, users of FELIPE have the interactive graphics processing power of a commercial finite element package, combined with fully-documented source code which they can modify and extend for their own purposes. The package is completely self-standing; the only supporting software needed is a Fortran compiler if the user wishes to modify the source code and re-compile it. Since the pre- and post-processors communicate with the main f.e. programs through formatted ASCII data files, they can also be interfaced with other finite element programs, whether in Fortran or another language.
The pre- and post-processors are compiled under FTN77 (as are the executable
versions of the `main engines'), which includes
a memory extender; thus, large meshes can be created,
and problems of real mathematical and engineering significance solved.
(The processors are dimensioned to handle a maximum of 900 elements,
and 3,000 nodes.)
They make extensive use of the graphics and mouse routines available
with the FTN77 compiler, and can also produce PostScript graphics files for
subsequent processing by, for example, GhostScript software (available
from www.hensa.ac.uk by ftp) and printing on LaserJet or DeskJet printers.
The `main engines' can also be compiled
and run using FTN77 (which is available for personal use free from the Salford
software website), but if this is not available any other Fortran
compiler can be used, as they are written in standard Fortran77.
The nine `main engines'
The three Basic-level `main engines' and the six Advanced-level `main engines', with their principal features, are now listed:
POISS.FOR
Application: solves Poisson's equation -a_x u_{xx} - a_y u_{yy} = f(x,y) on an arbitrary 2D domain
Material properties: diffusion coefficients a_x, a_y
Primary nodal unknown: potentials U
Secondary unknowns: flow rates U_x, U_y
Elements used: 3-noded linear triangles
Boundary conditions: reflecting, radiating and Dirichlet boundaries
File size: 25.8KB
Analysis type: linear
Matrix storage: symmetric band
Solution algorithm: Choleski (L.L^T) decomposition
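FELIPE's engines are Fortran77; purely as an illustration of what the 3-noded linear triangle contributes for this operator, here is a short Python sketch (my own, not FELIPE code) of the element stiffness matrix and consistent load vector:

# Illustrative sketch (not FELIPE's Fortran): element stiffness matrix and consistent
# load vector contributed by one 3-noded linear triangle to the discretised problem
# -a_x u_xx - a_y u_yy = f, assuming f constant over the element.
import numpy as np

def linear_triangle_element(xy, ax, ay, f):
    """xy: (3, 2) array of vertex coordinates listed anticlockwise."""
    x, y = xy[:, 0], xy[:, 1]
    b = np.array([y[1] - y[2], y[2] - y[0], y[0] - y[1]])   # b_i = y_j - y_k
    c = np.array([x[2] - x[1], x[0] - x[2], x[1] - x[0]])   # c_i = x_k - x_j
    area = 0.5 * (x @ b)                                     # element area
    # grad N_i = (b_i, c_i)/(2*area), so K_ij = (ax*b_i*b_j + ay*c_i*c_j)/(4*area)
    K = (ax * np.outer(b, b) + ay * np.outer(c, c)) / (4.0 * area)
    fe = f * area / 3.0 * np.ones(3)                         # consistent nodal loads
    return K, fe

K, fe = linear_triangle_element(np.array([[0., 0.], [1., 0.], [0., 1.]]), 1.0, 1.0, 1.0)
print(K)    # rows sum to zero, as they must for a pure-diffusion element
print(fe)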
ELAST.FOR
Application: solves plane strain, plane stress or axisymmetric linear elasticity problems
Material properties: Young's modulus E, Poisson's ratio \nu, thickness t, tensile strength \sigma_{\mbox{ten}}
Primary nodal unknown: displacements u,v
Secondary unknowns: stresses \sigma_x, \sigma_y, \tau_{xy}
Elements used: 8-noded `serendipity' quadrilaterals
Boundary conditions: fixities in x or y planes
Loading: point loads, surface tractions
File size: 34.8KB
Analysis type: linear
Matrix storage: symmetric band
Solution algorithm: Choleski (L.L^T) decomposition
FRAME.FOR
Application: analyses plane frames comprising elastic beams
Material properties: Young's modulus E, Moment of Inertia I, cross-sectional area A
Primary nodal unknown: displacements x,y and rotations \theta
Elements used: 2-noded cubic beam elements
Boundary conditions: displacement and rotation fixities
Loading: point loads, surface tractions
File size: 24.4KB
Analysis type: linear
Matrix storage: element-by-element matrices on scratch file
Solution algorithm: preconditioned conjugate gradients, with diagonal preconditioning
ELADV.FOR
Application: large-scale 2D elasticity analyses
Material properties: Young's modulus E, Poisson's ratio \nu, thickness t, tensile strength \sigma_{\mbox{ten}}
Primary nodal unknown: displacements u,v
Secondary unknowns: stresses \sigma_x, \sigma_y, \tau_{xy}
Elements used: 3- and 6-noded triangles, 4- and 8-noded quadrilaterals, mapped infinite elements, 2- and 3-noded (cubic and quartic) beam elements
Boundary conditions: fixities in x or y planes
Loading: point loads, specified displacements, surface tractions, body forces, excavation loading
File size: 91.9KB
Analysis type: linear
Solution algorithm: symmetric frontal algorithm
PLAST.FOR
Application: plane strain associated-flow Mohr-Coulomb elasto-plasticity analyses
Material properties: Young's modulus E, Poisson's ratio \nu, triaxial stress factor k, strength \sigma_c
Primary nodal unknown: displacements u,v
Secondary unknowns: stresses \sigma_x, \sigma_y, \tau_{xy}
Elements used: 8-noded `serendipity' quadrilaterals
Boundary conditions: fixities in x or y planes
Loading: point loads, surface tractions
File size: 50.3KB
Analysis type: nonlinear, iterative, incremental
Matrix storage: symmetric band
Solution algorithm: Choleski (L.L^T) decomposition
VPLAS.FOR
Application: plane strain Mohr-Coulomb elasto-viscoplasticity analyses, with non-associated flow
Material properties: Young's modulus E, Poisson's ratio \nu, triaxial stress factor k, strength \sigma_c, fluidity \gamma, dilation factor l
Primary nodal unknown: displacements u,v
Secondary unknowns: stresses \sigma_x, \sigma_y, \tau_{xy}
Elements used: 8-noded `serendipity' quadrilaterals
Boundary conditions: fixities in x or y planes
Loading: point loads, surface tractions
File size: 75.5KB
Analysis type: nonlinear, incremental, time-dependent
Solution algorithm: Frontal algorithm, for non-symmetric matrices
PLADV.FOR
Application: as PLAST, but with a range of solution algorithms
Material properties: Young's modulus E, Poisson's ratio \nu, triaxial stress factor k, strength \sigma_c
Primary nodal unknown: displacements u,v
Secondary unknowns: stresses \sigma_x, \sigma_y, \tau_{xy}
Elements used: 8-noded `serendipity' quadrilaterals
Boundary conditions: fixities in x or y planes
Loading: point loads, surface tractions
File size: 67.7KB
Analysis type: nonlinear, iterative, incremental
Matrix storage: symmetric skyline, element-by-element
Solution algorithms: Choleski (L.L^T) and L.D.L^T decomposition, conjugate gradients with diagonal or Incomplete Choleski preconditioning
THERM.FOR
Application: plane stress/strain thermoelasticity
Material properties: Young's modulus E, Poisson's ratio \nu, thickness t, conductivity coefficient k, coefficient of thermal expansion \alpha
Primary nodal unknown: displacements u,v, temperatures T
Secondary unknowns: stresses \sigma_x, \sigma_y, \tau_{xy}
Elements used: 4-noded (linear) and 8-noded (serendipity) quadrilaterals with (u,v,T) degrees of freedom at all nodes
Boundary conditions: fixities in x or y planes, reflecting and Dirichlet temperature boundaries
Loading: point loads, surface tractions
File size: 44.2KB
Analysis type: linear, coupled
Matrix storage: nonsymmetric band
Solution algorithm: Gauss elimination for non-symmetric matrices
CONSL.FOR
Application: plane strain soil consolidation (poroelasticity)
Material properties: Young's modulus E, Poisson's ratio \nu, effective permeabilities \frac{k_x}{\gamma_w}, \frac{k_y}{\gamma_w}, effective porosity \frac{\eta}{K_f}
Primary nodal unknown: displacements u,v, pore-pressures p
Secondary unknowns: effective stresses \sigma_x, \sigma_y, \tau_{xy}
Elements used: 8-noded `serendipity' quadrilaterals with pore-pressure d.o.f.s at corner nodes only
Boundary conditions: fixities in x or y planes, impermeable or permeable boundaries
Loading: point loads, surface tractions
File size: 53.4KB
Analysis type: linear, coupled, time-dependent
Matrix storage: symmetric band
Solution algorithm: L.D.L^T decomposition
Installation
To install the package from the floppy disk, run the self-extracting zip
file
FELIPE.EXE which is on the disk. You can do this by locating
the file on your disk drive (normally the A: drive) using
My Computer
or Windows Explorer, and double-clicking on it. Alternatively,
type
a:\felipe.exe into the command line copy using the
Run...
utility
from the Start menu. You will be prompted to nominate a directory into which the
FELIPE files should be unzipped; the default is
C:\FELIPE.
If you have already downloaded the evaluation version of FELIPE from the
website into the
C:\FELIPE directory, you can still use the
same
directory; the demonstration versions of the files will be overwritten by the
full versions, and new files added.
The installation process does not alter any Windows settings on your PC.
To uninstall FELIPE, simply delete the directory containing the
files.
In this manual, Chapter 2 describes how to use the PREFEL pre-processor.
The next three chapters describe the three Basic-level `main engines':
Chapter 3 covers the theory and programming of the Poisson solver,
while Chapter 4 describes the solver for elasticity problems, and Chapter 5 deals with
beam theory.
Chapter 6 describes the use of the FELVUE post-processor. Then
Chapter 7 summarizes the operation and use of the other six, Advanced-level
`main engines'. Chapter 8 covers the various algorithms used for equation solution.
Chapter 9 documents the sample datafiles provided in the FELIPE package.
The final Chapter suggests ways in which the `main engines' may be
modified, and new `main engines' written, and gives recommendations for textbooks for
further reading about the finite element method.
Acknowledgements:
I am very grateful to Prof. J.R. Whiteman and
Dr. M.K. Warby for permission to use in FELIPE some of the
basic graphics and PostScript subroutines developed by Dr. Warby, and to
Dr. T.-Y. Chao for working with me on the programming and documentation of
the elasticity module. I also acknowledge the support of the Enterprise in
Higher Education Unit at Brunel University, in enabling me to work on this
project. The Fortran coding of the elasticity 'main engines' is based on the
FINEPACK program developed at the Dept. of Civil Engineering, University
College Swansea, and I am grateful to Dr. D.J. Naylor for permission to
use this.
Answer
C
Work Step by Step
Theoretical yield: $125\ g\ Al_4C_3\div 143.96\ g\ Al_4C_3/mol\ Al_4C_3\times \dfrac{3\ mol\ CH_4}{1\ mol\ Al_4C_3}\times16.04\ g\ CH_4/mol\ CH_4=41.78\ g\ CH_4$ Percent yield: $13.6\div41.78\times100\%=32.55\%$
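The same arithmetic can be restated numerically (a small check of my own, using the rounded molar masses above):

# Reproduce the arithmetic: moles of Al4C3, theoretical grams of CH4, then percent yield.
m_Al4C3 = 125.0          # g
M_Al4C3 = 143.96         # g/mol
M_CH4 = 16.04            # g/mol
actual_CH4 = 13.6        # g

theoretical = m_Al4C3 / M_Al4C3 * 3 * M_CH4     # 3 mol CH4 per mol Al4C3
percent_yield = actual_CH4 / theoretical * 100
print(round(theoretical, 2), round(percent_yield, 2))   # ~41.78 g, ~32.55 %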
Seeing that in the Chomsky Hierarchy Type 3 languages can be recognised by a DFA (which has no stacks), Type 2 by a DFA with one stack (i.e. a push-down automaton) and Type 0 by a DFA with two stacks (i.e. with one queue, i.e. with a tape, i.e. by a Turing Machine), how do Type 1 languages fit in...
Considering this pseudo-code of an bubblesort:FOR i := 0 TO arraylength(list) STEP 1switched := falseFOR j := 0 TO arraylength(list)-(i+1) STEP 1IF list[j] > list[j + 1] THENswitch(list,j,j+1)switched := trueENDIFNEXTIF switch...
Let's consider a memory segment (whose size can grow or shrink, like a file, when needed) on which you can perform two basic memory allocation operations involving fixed size blocks:allocation of one blockfreeing a previously allocated block which is not used anymore.Also, as a requiremen...
Rice's theorem tell us that the only semantic properties of Turing Machines (i.e. the properties of the function computed by the machine) that we can decide are the two trivial properties (i.e. always true and always false).But there are other properties of Turing Machines that are not decidabl...
People often say that LR(k) parsers are more powerful than LL(k) parsers. These statements are vague most of the time; in particular, should we compare the classes for a fixed $k$ or the union over all $k$? So how is the situation really? In particular, I am interested in how LL(*) fits in.As f...
Since the current FAQs say this site is for students as well as professionals, what will the policy on homework be?What are the guidelines that a homework question should follow if it is to be asked? I know on math.se they loosely require that the student make an attempt to solve the question a...
I really like the new beta theme, I guess it is much more attractive to newcomers than the sketchy one (which I also liked). Thanks a lot!However I'm slightly embarrassed because I can't read what I type, both in the title and in the body of a post. I never encountered the problem on other Stac...
This discussion started in my other question "Will Homework Questions Be Allowed?".Should we allow the tag? It seems that some of our sister sites (Programmers, stackoverflow) have not allowed the tag as it isn't constructive to their sites. But other sites (Physics, Mathematics) do allow the s...
There have been many questions on CST that were either closed, or just not answered because they weren't considered research level. May those questions (as long as they are of good quality) be reposted or moved here?I have a particular example question in mind: http://cstheory.stackexchange.com...
Ok, so in most introductory Algorithm classes, either BigO or BigTheta notation are introduced, and a student would typically learn to use one of these to find the time complexity.However, there are other notations, such as BigOmega and SmallOmega. Are there any specific scenarios where one not...
Many textbooks cover intersection types in the lambda-calculus. The typing rules for intersection can be defined as follows (on top of the simply typed lambda-calculus with subtyping):$$\dfrac{\Gamma \vdash M : T_1 \quad \Gamma \vdash M : T_2}{\Gamma \vdash M : T_1 \wedge T_2}...
I expect to see pseudo code and maybe even HPL code on regular basis. I think syntax highlighting would be a great thing to have.On Stackoverflow, code is highlighted nicely; the schema used is inferred from the respective question's tags. This won't work for us, I think, because we probably wo...
Sudoku generation is hard enough. It is much harder when you have to make an application that makes a completely random Sudoku.The goal is to make a completely random Sudoku in Objective-C (C is welcome). This Sudoku generator must be easily modified, and must support the standard 9x9 Sudoku, a...
I have observed that there are two different types of states in branch prediction.In superscalar execution, where the branch prediction is very important, and it is mainly in execution delay rather than fetch delay.In the instruction pipeline, where the fetching is more problem since the inst...
Is there any evidence suggesting that time spent on writing up, or thinking about the requirements will have any effect on the development time? Study done by Standish (1995) suggests that incomplete requirements partially (13.1%) contributed to the failure of the projects. Are there any studies ...
NPI is the class of NP problems without any polynomial time algorithms and not known to be NP-hard. I'm interested in problems such that a candidate problem in NPI is reducible to it but it is not known to be NP-hard and there is no known reduction from it to the NPI problem. Are there any known ...
I am reading Mining Significant Graph Patterns by Leap Search (Yan et al., 2008), and I am unclear on how their technique translates to the unlabeled setting, since $p$ and $q$ (the frequency functions for positive and negative examples, respectively) are omnipresent.On page 436 however, the au...
This is my first time to be involved in a site beta, and I would like to gauge the community's opinion on this subject.On StackOverflow (and possibly Math.SE), questions on introductory formal language and automata theory pop up... questions along the lines of "How do I show language L is/isn't...
Infinite Limits
Evaluating the limit of a function at a point or evaluating the limit of a function from the right and left at a point helps us to characterize the behavior of a function around a given value. As we shall see, we can also describe the behavior of functions that do not have finite limits.
We now turn our attention to \(h(x)=1/(x−2)^2\), the third and final function introduced at the beginning of this section (see Figure(c)). From its graph we see that as the values of x approach 2, the values of \(h(x)=1/(x−2)^2\) become larger and larger and, in fact, become infinite. Mathematically, we say that the limit of \(h(x)\) as x approaches 2 is positive infinity. Symbolically, we express this idea as
\[\lim_{x \to 2}h(x)=+∞.\]
More generally, we define
infinite limits as follows:
Definitions: infinite limits
We define three types of
infinite limits. Infinite limits from the left: Let \(f(x)\) be a function defined at all values in an open interval of the form \((b,a)\).
i. If the values of \(f(x)\) increase without bound as the values of x (where \(x<a\)) approach the number \(a\), then we say that the limit as x approaches a from the left is positive infinity and we write \[\lim_{x \to a−}f(x)=+∞.\]
ii. If the values of \(f(x)\) decrease without bound as the values of x (where \(x<a\)) approach the number \(a\), then we say that the limit as x approaches a from the left is negative infinity and we write \[\lim_{x \to a−}f(x)=−∞.\]
Infinite limits from the right: Let \(f(x)\) be a function defined at all values in an open interval of the form \((a,c)\).
i. If the values of \(f(x)\) increase without bound as the values of x (where \(x>a\)) approach the number \(a\), then we say that the limit as x approaches a from the right is positive infinity and we write \[\lim_{x \to a+}f(x)=+∞.\]
ii. If the values of \(f(x)\) decrease without bound as the values of x (where \(x>a\)) approach the number \(a\), then we say that the limit as x approaches a from the right is negative infinity and we write \[\lim_{x \to a+}f(x)=−∞.\]
Two-sided infinite limit: Let \(f(x)\) be defined for all \(x≠a\) in an open interval containing \(a\)
i. If the values of \(f(x)\) increase without bound as the values of x (where \(x≠a\)) approach the number \(a\), then we say that the limit as x approaches a is positive infinity and we write \[\lim_{x \to a} f(x)=+∞.\]
ii. If the values of \(f(x)\) decrease without bound as the values of x (where \(x≠a\)) approach the number \(a\), then we say that the limit as x approaches a is negative infinity and we write \[\lim_{x \to a}f(x)=−∞.\]
It is important to understand that when we write statements such as \(\displaystyle \lim_{x \to a}f(x)=+∞\) or \(\displaystyle \lim_{x \to a}f(x)=−∞\) we are describing the behavior of the function, as we have just defined it. We are not asserting that a limit exists. For the limit of a function f(x) to exist at a, it must approach a real number L as x approaches a. That said, if, for example, \(\displaystyle \lim_{x \to a}f(x)=+∞\), we always write \(\displaystyle \lim_{x \to a}f(x)=+∞\) rather than \(\displaystyle \lim_{x \to a}f(x)\) DNE.
Example \(\PageIndex{5}\): Recognizing an Infinite Limit
Evaluate each of the following limits, if possible. Use a table of functional values and graph \(f(x)=1/x\) to confirm your conclusion.
a. \(\displaystyle \lim_{x \to 0−} \frac{1}{x}\)
b. \(\displaystyle \lim_{x \to 0+} \frac{1}{x}\)
c. \(\displaystyle \lim_{x \to 0}\frac{1}{x}\)
Solution
Begin by constructing a table of functional values.
\(x\)         \(1/x\)         \(x\)        \(1/x\)
-0.1          -10             0.1          10
-0.01         -100            0.01         100
-0.001        -1000           0.001        1000
-0.0001       -10,000         0.0001       10,000
-0.00001      -100,000        0.00001      100,000
-0.000001     -1,000,000      0.000001     1,000,000
a. The values of \(1/x\) decrease without bound as \(x\) approaches 0 from the left. We conclude that
\[\lim_{x \to 0−}\frac{1}{x}=−∞.\nonumber\]
b. The values of \(1/x\) increase without bound as \(x\) approaches 0 from the right. We conclude that
\[\lim_{x \to 0+}\frac{1}{x}=+∞. \nonumber\]
c. Since \(\displaystyle \lim_{x \to 0−}\frac{1}{x}=−∞\) and \(\displaystyle \lim_{x \to 0+}\frac{1}{x}=+∞\) have different values, we conclude that
\[\lim_{x \to 0}\frac{1}{x}\;\text{ DNE}. \nonumber\]
The graph of \(f(x)=1/x\) in Figure \(\PageIndex{8}\) confirms these conclusions.
Figure \(\PageIndex{8}\): The graph of \(f(x)=1/x\) confirms that the limit as x approaches 0 does not exist.
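The table of functional values can also be reproduced numerically; a small sketch (not part of the original text):

# Reproduce the table of values of f(x) = 1/x on both sides of 0; the one-sided values
# grow without bound in magnitude, with opposite signs on the two sides.
for k in range(1, 7):
    x = 10.0 ** (-k)
    print(f"{-x:>12g} {1/(-x):>14g}   {x:>12g} {1/x:>14g}")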
Exercise \(\PageIndex{5}\)
Evaluate each of the following limits, if possible. Use a table of functional values and graph \(f(x)=1/x^2\) to confirm your conclusion.
a. \(\displaystyle \lim_{x \to 0−}\frac{1}{x^2}\)
b. \(\displaystyle \lim_{x \to 0+}\frac{1}{x^2}\)
c. \(\displaystyle \lim_{x \to 0}\frac{1}{x^2}\)
Measurement of electrons from beauty hadron decays in pp collisions at √s = 7 TeV
(Elsevier, 2013-04-10)
The production cross section of electrons from semileptonic decays of beauty hadrons was measured at mid-rapidity (|y| < 0.8) in the transverse momentum range 1 < pT <8 GeV/c with the ALICE experiment at the CERN LHC in ...
Production of charged pions, kaons and protons at large transverse momenta in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV
(Elsevier, 2014-09)
Transverse momentum spectra of $\pi^{\pm}, K^{\pm}$ and $p(\bar{p})$ up to $p_T$ = 20 GeV/c at mid-rapidity, |y| $\le$ 0.8, in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV have been measured using the ALICE detector ...
Centrality dependence of the pseudorapidity density distribution for charged particles in Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV
(Elsevier, 2013-11)
We present the first wide-range measurement of the charged-particle pseudorapidity density distribution, for different centralities (the 0-5%, 5-10%, 10-20%, and 20-30% most central events) in Pb-Pb collisions at $\sqrt{s_{NN}}$ ... |
The Fundamental Theorem of Line Integrals
Consider the force field representing the wind shown below
You are a pilot attempting to minimize the work your engines need to do. Does it matter which path you take? Clearly the red path goes with the wind and the green path goes against the wind. With this vector field, work is dependent on the path that is taken.
Next consider the vector field
\[ \textbf{F}(x,y) = y\hat{\textbf{i}} + x \hat{\textbf{j}} \]
shown below
It turns out that going from point A to point B, every path leads to the same amount of work done. What is special about this vector field?
The key here, as you can quickly check, is that the vector field \(\textbf{F}\) is
conservative with \(M_y = N_x\). Since for a conservative vector field, all paths produce the same amount of work, we seek a formula that gives this work quantity. The theorem below shows us how to find this quantity. Notice the strong resemblance to the fundamental theorem of calculus.
Theorem: The Fundamental Theorem of Line Integrals
Let \(\textbf{F}\) be a conservative vector field with potential function \(f\), and \(C\) be any smooth curve starting at the point \(A\) and ending at the point \(B\). Then
\[ \int_C F \cdot dr = f(B)-f(A)\]
The next example demonstrates the power of this theorem.
Example \(\PageIndex{1}\)
Find the work done by the vector field
\[ \textbf{F}(x,y) = (2x -3y) \hat{\textbf{i}} + (3y^2 - 3x) \hat{\textbf{j}} \]
along the curve indicated in the graph below.
Solution
First notice that
\[ M_y = -3 = N_x \]
We can use the fundamental theorem of line integrals to solve this. There are two approaches.
Approach 1
We find the potential function. We have
\[ f_x = 2x - 3y \]
Integrating we get
\[ f(x,y) = x^2 - 3xy + c(y)\]
Now take the derivative with respect to \(y\) to get
\[ f_y = -3x + c'(y) = 3y^2 - 3x. \]
Hence
\[ c'(y) = 3y^2 \]
and
\[ c(y) = y^3.\]
The potential function is
\[ f(x,y) = x^2 - 3xy + y^3.\]
Now use the fundamental theorem of line integrals to get
\[ f(B) - f(A) = f(1,0) - f(0,0) = 1.\]
Approach 2
Since the vector field is conservative, any path from point A to point B will produce the same work. Hence the work over the easier line segment from \((0,0)\) to \((1,0)\) will also give the correct answer. We parameterize by
\[ \bar{r}(t)= t \hat{\text{i}} \;\;\;\; 0 \leq t \leq 1.\]
We have
\[ \bar{\textbf{r}}'(t) = \hat{\textbf{i}} \]
so that
\[ \begin{align} \textbf{F} \cdot d \hat{r} &= \big((2x-3y)\hat{\text{i}} + (3y^2-3x)\hat{\text{j}} \big)\ \cdot \hat{\text{i}} \\[4pt] &= 2x-3y \\[4pt] &= 2t . \end{align} \]
Now just integrate
\[ \int_0^1 2t \; dt = t^2 |_0^1 = 1. \]
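Both approaches can be verified symbolically; a short SymPy sketch (my own check, not part of the original example):

# Symbolic check of Example 1: recover the potential function, then compare
# f(1,0) - f(0,0) with the line integral along the segment from (0,0) to (1,0).
import sympy as sp

x, y, t = sp.symbols('x y t')
M = 2*x - 3*y
N = 3*y**2 - 3*x

assert sp.diff(M, y) == sp.diff(N, x)             # conservative: M_y = N_x

f = sp.integrate(M, x)                            # x**2 - 3*x*y + c(y)
c = sp.integrate(sp.simplify(N - sp.diff(f, y)), y)
f = f + c                                         # potential: x**2 - 3*x*y + y**3
print(f, f.subs({x: 1, y: 0}) - f.subs({x: 0, y: 0}))      # potential, value 1

work = sp.integrate(M.subs({x: t, y: 0}) * 1, (t, 0, 1))   # r(t) = <t, 0>, x'(t) = 1
print(work)                                                # 1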
Proof of the Fundamental Theorem of Line Integrals
To prove the fundamental theorem of line integrals we will use the following outcome of the chain rule:
If
\[\bar{\textbf{r}}(t) = x(t) \hat{\textbf{i}} + y(t) \hat{\textbf{j}} \]
is a vector valued function, then
\[\dfrac{d}{dt} f(\bar{\textbf{r}}(t)) = f_x x'(t) + f_y y'(t). \]
We are now ready to prove the theorem. We have
\[\begin{align} \int_C \textbf{F} \cdot d \hat{\textbf{r}} &= \int_{a}^{b} \textbf{F}(x,y) \cdot \bar{\textbf{r}}' (t) \; dt \\[4pt] &= \int_{a}^{b} \big( f_x (x,y) \hat{\textbf{i}} + f_y(x,y) \hat{\textbf{j}} \big) \cdot \big( x'(t) \hat{\textbf{i}} + y'(t) \hat{\textbf{j}} \big) \; dt \\[4pt] &= \int_{a}^{b} \big( f_x (x,y)x'(t) + f_y (x,y)y'(t) \big) \; dt \\[4pt] &= \int_{a}^{b} \dfrac{d}{dt} \big(f(x(t), y(t)) \big)\; dt \\[4pt] &= f(x(b),y(b))-f(x(a),y(a)) \\[4pt] &=f(B) -f(A) \end{align}\]
\(\square\)
Independence of Path and Closed Curves
Example \(\PageIndex{2}\)
Find the work done by the vector field
\[\textbf{F}(x,y) = (\cos x + y) \hat{\textbf{i}} + (x+e^{\sin y})\hat{\textbf{j}} + (\sin(\cos z)) \hat{\textbf{k}} \]
along the closed curve shown below.
Solution
First we check that
F is conservative. We have
\[ \text{Curl } \textbf{F} = \begin{vmatrix} \hat{\textbf{i}} & \hat{\textbf{j}} & \hat{\textbf{k}} \\[4pt] \partial x & \partial y & \partial z \\[4pt] \cos x+y & x+ e^{\sin y} & \sin (\cos z) \end{vmatrix} = (0-0) \hat{\textbf{i}} - (0-0) \hat{\textbf{j}} + (1-1) \hat{\textbf{k}} = \textbf{0} \]
Since the vector field is conservative, we can use the fundamental theorem of line integrals. Notice that the curve begins and ends at the same place. We do not even need to find the potential function, since whatever it is, say \(f\), we have
\[ f(A) - f(A) = 0.\]
In general, the work done by a conservative vector field is zero along any closed curve. The converse is also true, which we state without proof.
Theorem: Conservative Vector Fields and Closed Curves
Let \(\textbf{F}\) be a vector field with components that have continuous first order partial derivatives and let \(C\) be a piecewise smooth curve. Then the following three statements are equivalent:
1. \(\textbf{F}\) is conservative.
2. \(\displaystyle \int_C \textbf{F}\cdot d\textbf{r}\) is independent of path.
3. \(\displaystyle \int_C \textbf{F}\cdot d\textbf{r} = 0\) for all closed curves \(C\).
Larry Green (Lake Tahoe Community College)
Integrated by Justin Marshall. |
Data
I have three $N \times N$ complex hermitian matrices $A=xx^{H}$,$R=rr^{H}$ and a positive-definite matrix $B$. Here $x$ and $r$ are two $N \times 1$ complex vectors. Let $\lambda_{i}, 1\leq i\leq N$ denotes the N eigenvalues of B which are also positive. Clearly $A$ and $R$ are two rank one positive semi-definite matrices. $B$ is invertible.
What I need to find:
What is the largest eigenvalue of the GEVP?
\begin{align} (A\otimes R)v=\gamma (B\otimes R)v \end{align}
Will the maximum eigenvalue be (seemingly nice) $||r||^{2}x^{H}B^{-1}x$?
What I know:
Consider the generalized eigenvalue problem (GEVP) \begin{align} Av=\gamma Bv \end{align} Since $B$ is invertible, this is equivalent to finding the eigenvalues of $B^{-1}A$; in fact, since $A$ is a rank one matrix, there is only one non-zero eigenvalue, which is positive and given by $x^{H}B^{-1}x$ ($A=xx^{H}$). Now I am interested in the matrices $A \otimes R$ and $B \otimes R$, which are $N^{2} \times N^{2}$ in dimension. Now $A \otimes R$ is a rank one matrix, and its only non-zero eigenvalue is $||x||^{2}||r||^{2}$. $B \otimes R$ is a positive semi-definite matrix with $N$ of its eigenvalues being $\lambda_{i}||r||^{2}, 1\leq i \leq N$ and the rest of the $N^{2}-N$ eigenvalues being zero.
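Not an answer, but a residual check one can run numerically (my own sketch): it verifies that $v=(B^{-1}x)\otimes r$ satisfies the pencil with $\gamma = x^{H}B^{-1}x$, which can then be compared against the conjectured $||r||^{2}x^{H}B^{-1}x$:

# Residual check for the pencil (A⊗R) v = gamma (B⊗R) v with A = x x^H, R = r r^H.
# Candidate eigenvector: v = (B^{-1} x) ⊗ r.  For it, (A⊗R)v = (x^H B^{-1} x) (B⊗R)v,
# so gamma = x^H B^{-1} x is a generalized eigenvalue; compare with ||r||^2 x^H B^{-1} x.
import numpy as np

rng = np.random.default_rng(1)
N = 4
x = rng.standard_normal(N) + 1j * rng.standard_normal(N)
r = rng.standard_normal(N) + 1j * rng.standard_normal(N)
M = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
B = M @ M.conj().T + N * np.eye(N)                  # Hermitian positive definite

A = np.outer(x, x.conj())
R = np.outer(r, r.conj())
Ak, Bk = np.kron(A, R), np.kron(B, R)

gamma = (x.conj() @ np.linalg.solve(B, x)).real     # x^H B^{-1} x
v = np.kron(np.linalg.solve(B, x), r)

print(np.linalg.norm(Ak @ v - gamma * (Bk @ v)))                             # ~ 0
print(np.linalg.norm(Ak @ v - (np.linalg.norm(r)**2 * gamma) * (Bk @ v)))    # not ~ 0 unless ||r|| = 1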
class
LinearMechanism¶ ↑
Syntax:
lm = new LinearMechanism(c, g, y, [y0], b)
section lm = new LinearMechanism(c, g, y, [y0], b, x)
lm = new LinearMechanism(c, g, y, [y0], b, sl, xvec, [layervec])
lm = new LinearMechanism(pycallable, c, g, y, ...)
Description:
Adds linear equations to the tree matrix current balance equations. I.e. the equations are solved simultaneously with the current balance equations. These equations may modify current balance equations and involve membrane potentials as dependent variables.
The equations added are of the differential-algebraic form \(c \frac{dy}{dt} + g y = b\) with initial conditions specified by the optional y0 vector argument. c and g must be square matrices of the same rank as the y and b vectors. The implementation is more efficient if c is a sparse matrix since at every time step c*y/dt must be computed.
When a LinearMechanism is created, all the potentially non-zero elements for the c and g matrices must be actually non-zero so that the mathematical topology of the matrices is known in advance. After creation, elements can be set to 0 if desired.
The arguments after the b vector specify which voltages and current balance equations are coupled to this system. The scalar form, x, with a currently accessed section means that the first equation is added to the current balance equation at this location and the first dependent variable is a copy of the membrane potential. If the system is coupled to more than one location, then sl must be a SectionList and xvec a Vector of relative positions (0 … 1) specifying the locations. In this case, the first xvec.size equations are added to the corresponding current balance equations and the first xvec.size dependent y variables are copies of the membrane potentials at these locations. If the optional layervec argument is present then the values must be 0, 1, or 2 (or up to however many layers are defined in src/nrnoc/options.h). 0 refers to the internal potential (equal to the membrane potential when the extracellular mechanism is not inserted), and higher numbers refer to the vext[layer-1] layer (or ground if the extracellular mechanism is not inserted).
If some y variables correspond to membrane potential, the corresponding initial values in the y0 vector are ignored and the initial values come from the values of v during the normal
finitialize() call. If you change the value of v after finitialize, then you should also change the corresponding y values if the linear system involves derivatives of v.
Note that current balance equations of sections when 0 < x < 1 have dimensions of milliamp/cm2 and positive terms are outward. Thus c elements involving voltages in mV have dimensions of 1000 \(\mathrm{\mu{}F/cm^2}\) (so a value of .001 corresponds to 1 \(\mathrm{\mu{}F/cm^2}\)), g elements have dimensions of \(\mathrm{S/cm^2}\), and b elements have dimensions of outward current in \(\mathrm{milliamp/cm^2}\). The current balance equations for the zero area nodes at the beginning and end of a section (x = 0 and x = 1) have terms with the dimensions of nanoamps. Thus c elements involving voltages in mV have dimensions of nF and g elements have dimensions of \(\mathrm{\mu{}S}\).
The existence of one or more LinearMechanism switches the gaussian elimination solver to the general sparse linear equation solver written by Kenneth S. Kundert and available from http://www.netlib.org/sparse/index.html Although, even with no added equations, the solving of m*x=b takes more than twice as long as the original default solver, there is no restriction to a tree topology.
Example:
load_file("nrngui.hoc") create soma soma { insert hh } //ideal voltage clamp. objref c, g, y, b, model c = new Matrix(2,2,2) //sparse - no elements used g = new Matrix(2,2) y = new Vector(2) // y.x[1] is injected current b = new Vector(2) g.x[0][1] = -1 g.x[1][0] = 1 b.x[1] = 10 // voltage clamp level soma model = new LinearMechanism(c, g, y, b, .5) proc advance() { printf("t=%g v=%g y.x[1]=%g\n", t, soma.v(.5), y.x[1]) fadvance() } run()
Warning
Does not work with the CVODE integrator but does work with the differential-algebraic solver IDA. Note that if the standard run system is loaded, cvode_active(1) will automatically choose the correct variable step integrator. Does not allow changes to coupling locations. Is not notified when matrices, vectors, or segments it depends on disappear.
Description (continued): If the pycallable argument (a Python callable object) is present, it is called just before the b Vector is used during a simulation. The callable can change the elements of b and g (but must not introduce new elements into g) as a function of time and states. It may be useful for stability and performance to place the linearized part of b into g. Consider the following pendulum.py with equations
\[\frac{d\theta}{dt} = \omega\]
\[\frac{d\omega}{dt} = -\frac{g}{L} \sin(\theta) \text{ with } \frac{g}{L}=1\]
Example:
from neuron import h
from math import sin

h.load_file('nrngui.hoc')
cmat = h.Matrix(2,2,2).ident()
gmat = h.Matrix(2,2,2)
gmat.setval(0,1, -1)
y = h.Vector(2)
y0 = h.Vector(2)
b = h.Vector(2)

def callback():
    b.x[1] = -sin(y.x[0])

nlm = h.LinearMechanism(callback, cmat, gmat, y, y0, b)
dummy = h.Section()
trajec = h.Vector()
tvec = h.Vector()
trajec.record(y._ref_x[0])
tvec.record(h._ref_t)
graph = h.Graph()
h.tstop = 50

def prun(theta0, omega0):
    graph.erase()
    y0.x[0] = theta0
    y0.x[1] = omega0
    h.run()
    trajec.line(graph, tvec)

h.dt /= 10
h.cvode.atol(1e-5)
h.cvode_active(1)
prun(0, 1.9999)  # 2.0001 will keep it rotating
graph.exec_menu("View = plot")
The objective of this project is to evaluate the quality of human movements from visual information, which is useful in a broad range of applications, from diagnosis and rehabilitation to movement optimisation in sports science. Observed movements are compared with a model of normal movement, and the amount of deviation from normality is quantified automatically.
Description of the proposed method
The figure below illustrates the pipeline of our proposed method.
Skeleton extraction
We use a Kinect camera, which measures distances and provides a depth map of the scene (see Fig. 2) instead of a classic RGB image. A skeleton tracker [1] can use this depth map to fit a skeleton to the person being filmed. We then normalise the skeleton to compensate for people having various heights. This normalised skeleton is the basis of our movement analysis technique.
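For illustration only, the following sketch shows one plausible way to height-normalise a 15-joint skeleton by centring it on a root joint and scaling by a reference segment length; the joint indices and the scaling choice are placeholders, and the exact scheme used in our pipeline may differ.

import numpy as np

def normalise_skeleton(joints, root=0, ref_a=0, ref_b=1):
    """Height-normalise a (15, 3) array of joint positions.

    Illustrative scheme only: centre on a root joint and scale by the
    length of a reference segment (e.g. the torso). Joint indices are
    placeholders.
    """
    centred = joints - joints[root]
    scale = np.linalg.norm(joints[ref_a] - joints[ref_b])
    return centred / (scale + 1e-12)   # guard against a degenerate segment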
Robust dimensionality reduction
A skeleton contains 15 joints, forming a vector of 45 coordinates. Such a vector has quite a high dimensionality and contains redundant information. We use a manifold learning method, Diffusion Maps [2], to reduce the dimensionality and extract the significant information from this skeleton.
Skeletons extracted from depth maps tend to suffer from a high amount of noise and outliers. Therefore, we modify the original Diffusion Maps [2] by adding the extension that Gerber et al. [3] proposed for dealing with outliers in Laplacian Eigenmaps, another manifold learning method.
Our manifold provides us with a new representation \(\mathbf{Y}\) of the pose, derived from the normalised skeleton, with a much lower dimensionality (typically 3 dimensions instead of the initial 45) and significantly less noise and outliers. We use this new pose feature \(\mathbf{Y}\) to assess the quality of the movement.
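For concreteness, a minimal sketch of the basic (non-robust) Diffusion Maps computation is given below; it omits the outlier-handling extension of [3], and the kernel bandwidth eps is an arbitrary example value.

import numpy as np

def diffusion_maps(X, eps=1.0, n_components=3):
    """Basic Diffusion Maps [2]: X is (n_samples, 45) of normalised skeletons;
    returns the n_components leading diffusion coordinates."""
    sq_dists = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    K = np.exp(-sq_dists / eps)                  # Gaussian kernel
    P = K / K.sum(axis=1, keepdims=True)         # Markov transition matrix
    eigvals, eigvecs = np.linalg.eig(P)
    order = np.argsort(-eigvals.real)
    eigvals = eigvals.real[order]
    eigvecs = eigvecs.real[:, order]
    # Skip the trivial constant eigenvector; scale coordinates by the eigenvalues.
    return eigvecs[:, 1:n_components + 1] * eigvals[1:n_components + 1]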
Assessment against a statistical model of normal movement
We build two statistical models from our new pose feature, which describe respectively normal
poses and normal dynamics. These models are represented by probability density functions (pdf) which are learnt, using Parzen window estimators, from training sequences that contain only normal instances of the movement.
The pose model is in the form of the pdf \(f_{Y}\left(y\right)\) of a random variable \(Y\) that takes as value \(y=\mathbf{Y}\) our pose feature vector \(\mathbf{Y}\). The quality of a new pose \(y_t\) at frame \(t\) is then assessed as the log-likelihood of being described by the pose model, i.e. $$\mbox{llh}_{\mbox{pose}}= \log f_{Y}\left(y_t\right) \; .$$
The dynamics model is represented as the pdf \(f_{Y_t}\left(y_t|y_1,\ldots,y_{t-1}\right)\) which describes the likelihood of a pose \(y_t\) at a new frame \(t\) given the poses at the previous frames. In order to compute it, we introduce \(X_t\) with value \(x_t \in \left[0,1\right]\), which is the stage of the (periodic or non-periodic) movement at frame \(t\). Note, in the case of periodic movements, this movement stage can also be seen as the phase of the movement’s cycle. Based on Markovian assumptions, we find that $$ f_{Y_t}\left(y_t|y_1,\ldots,y_{t-1}\right) \approx f_{Y_t}\left(y_t|\hat{x}_t\right) f_{X_t}\left(\hat{x}_t|\hat{x}_{t-1}\right) \; ,$$ with \(\hat{x}_t\) an approximation of \(x_t\) that minimises \(f_{\left\{X_0,\ldots,X_t\right\}}\left(x_0,\ldots,x_t|y_1,\ldots,y_t\right)\). \(f_{Y_t}\left(y_t|x_t\right)\) is learnt from training sequences using Parzen window estimation, while \(f_{X_t}\left(x_t|x_{t-1}\right)\) is set analytically so that \(x_t\) evolves steadily during a movement. The dynamics quality is then assessed as the log-likelihood of the model describing a sequence of poses within a window of size \(\omega\): $$\mbox{llh}_{\mbox{dyn}} \approx \frac{1}{\omega} \sum_{i=t-\omega+1}^{t} \log\left( f_{Y_i}\left(y_i|x_i\right) f_{X_i}\left(x_i|x_{i-1}\right) \right)\; .$$
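The sketch below shows, under simplifying assumptions (an isotropic Gaussian Parzen kernel with an arbitrary bandwidth h, and per-frame log terms supplied by the caller), how llh_pose and llh_dyn can be evaluated from the pose features; it is an illustration rather than the exact implementation used in our experiments.

import numpy as np

def llh_pose(y_t, Y_train, h=0.1):
    """Pose log-likelihood via a Gaussian Parzen window estimate of f_Y."""
    d = Y_train.shape[1]
    sq = np.sum((Y_train - y_t) ** 2, axis=1)
    kernels = (2 * np.pi * h ** 2) ** (-d / 2) * np.exp(-sq / (2 * h ** 2))
    return np.log(np.mean(kernels) + 1e-300)     # guard against log(0)

def llh_dyn(frame_terms, omega=10):
    """Dynamics log-likelihood: average of log( f(y_i|x_i) f(x_i|x_{i-1}) )
    over the last omega frames; frame_terms holds these per-frame log values."""
    return float(np.mean(frame_terms[-omega:]))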
Two thresholds on the two likelihoods, determined empirically, are used to classify the gait as normal or abnormal. Thresholds on the derivatives of the log-likelihoods allow refining the detections of abnormalities and of returns to normal.
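A minimal sketch of such a frame-level decision rule is given below; the threshold values and the exact refinement logic are placeholders, not the empirically determined ones.

def classify_frame(llh_pose_t, llh_dyn_t, llh_dyn_prev,
                   th_pose, th_dyn, th_deriv):
    """Flag the frame as abnormal when either likelihood falls below its
    threshold; a steep drop of llh_dyn refines the onset detection."""
    below_threshold = llh_pose_t < th_pose or llh_dyn_t < th_dyn
    steep_drop = (llh_dyn_t - llh_dyn_prev) < -th_deriv
    return "abnormal" if (below_threshold or steep_drop) else "normal"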
Results
Gait on stairs
In order to analyse the quality of gait of subjects walking up stairs, we build our model of normal movement using 17 training sequences from 6 healthy subjects having no injury or disability, from which we extract 42 gait cycles.
We first prove the ability of our model to generalise to the gait of new subjects by evaluating the 13 normal gait sequences of 6 new subjects. As illustrated in Figs. 3 and 4, the normal gaits of new persons are well represented by the model, with the two likelihoods (middle and bottom rows) staying above the thresholds (dotted lines). In only one sequence out of all 13 did the likelihood drop slightly under the threshold (frames 45–47 of Fig. 4) due to particularly noisy skeletons.
Figure 3: Example 1 of normal gait – The model of normal movement can represent well the gait of a new subject, with the two likelihoods (middle and bottom rows) staying above the thresholds (dotted lines). Green: Normal, Red: Abnormal. Figure 4: Example 2 of normal gait – In frames 45–47, a particularly noisy skeleton leads to the likelihood dropping slightly under the thresholds. As a result, this part of the gait is classified as abnormal. Green: Normal, Red: Abnormal.
Next, we apply our proposed method to three types of abnormal gaits:
- “Left leg Lead” (LL) abnormal gait: the subjects walk up the stairs always initially using their left leg to move to the next upper step (illustrated in Fig. 5).
- “Right leg Lead” (RL) abnormal gait: the subjects walk up the stairs always initially using their right leg to move to the next upper step (illustrated in Fig. 6).
- “Freeze of Gait” (FoG): the subjects freeze at some stage of the movement (illustrated in Fig. 7).
In all three cases, the pose of the subject is always normal, but its dynamics is affected either by the use of the unexpected leg (LL and RL) or by the (temporary) complete stop of the movement (FoG).
In our tests, these abnormal events are detected by our method with a rate of 0.93, with the likelihood dropping at all but 2 gait cycles in the LL and RL cases, and during the stops in the FoG case. Table 1 summarises the detection rates of abnormal events by our method.
Figure 5: Example of “Left leg Lead” abnormal gait – Every time the subject uses an unexpected leg, the movement’s stage stops evolving steadily and the dynamics likelihood (bottom row) drops below its threshold (dotted line). Green: Normal, Red: Abnormal, Blue: Refined detection of normal, Orange: Refined detections of abnormal. Manual detections are presented as shaded blue areas. Figure 6: Example of “Right leg Lead” abnormal gait – Every time the subject uses an unexpected leg, the movement’s stage stops evolving steadily and the dynamics likelihood (bottom row) drops below its threshold (dotted line). Green: Normal, Red: Abnormal, Blue: Refined detection of normal, Orange: Refined detections of abnormal. Manual detections are presented as shaded blue areas. Figure 7: Example of “Freeze of gait” – The subject freezes twice during the sequence, resulting in the movement’s stage not evolving anymore at these times, and the dynamics likelihood dropping dramatically. Green: Normal, Red: Abnormal, Blue: Refined detection of normal, Orange: Refined detections of abnormal. Manual detections are presented as shaded blue areas.
Type of event | Number of occurrences | False Positives | True Positives | False Negatives | Proportion missed
LL  | 21 | 0 | 19 | 2 | 0.10
RL  | 25 | 0 | 23 | 2 | 0.08
FoG | 12 | 2 | 12 | 0 | 0
All | 58 | 2 | 54 | 4 | 0.07

Sitting and standing
We also apply our proposed method to the analysis of sitting and standing movements. Two separate (bi-component) models are built, to represent sitting and standing movements respectively. They are executed concurrently, and their analyses are triggered when their respective starting conditions are detected. We use the very simple starting condition of the first coordinate of \(\mathbf{Y}\) staying at its starting value for a few frames, and then deviating. Our stopping condition is similar.
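A sketch of such a trigger is shown below; the tolerance and the number of hold frames are illustrative values rather than the ones used in our experiments.

def movement_started(y1_history, start_value, tol=0.05, hold_frames=5):
    """Start condition: the first coordinate of Y stays near its starting
    value for hold_frames frames and then deviates beyond tol."""
    if len(y1_history) < hold_frames + 1:
        return False
    held = all(abs(v - start_value) < tol for v in y1_history[-hold_frames - 1:-1])
    deviates = abs(y1_history[-1] - start_value) >= tol
    return held and deviates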
For our experiments, a qualified physiotherapist simulates abnormal sitting and standing movements, such as a loss of balance while standing up that leads to an exaggerated inclination of the torso, as illustrated in Figs. 9 and 10.
Figure 8: Example of normal sitting and standing movements – The two sitting and standing models are used iteratively and are triggered automatically when their starting conditions are detected. Figure 9: Example of an abnormal standing movement – The subject loses their balance and leans forward. Green: Normal, Red: Abnormal, Orange: Refined detections of abnormal. Figure 10: Example of a difficult standing movement – The subject fails on their first attempt to stand up. This failure is detected and the tracking stops. It resumes on the second attempt, and detects the torso leaning forward exaggeratedly. Green: Normal, Red: Abnormal, Orange: Refined detections of abnormal.
Sport boxing
We analyse boxing movements consisting of a cross left punch (a straight punch thrown from the back hand in a southpaw stance) and a return to the initial position. We use the same strategy as for the sitting and standing movements, with two separate models that are triggered iteratively and automatically when their respective starting conditions are observed.
In our testing sequence, the subject alternates between 3 normal and 3 abnormal punches. Different types of abnormalities, corresponding to typical beginner mistakes, are simulated for each set of 3 abnormal punches. The results, presented in Fig. 11, show that, as in the previous experiments, abnormal movements are correctly detected, as are returns to normality. Note that in this experiment most abnormal movements are due to a wrong pose of the subject and therefore trigger strong responses from the pose model. The level of abnormality can also be quantified by the variations of \(\mbox{llh}_{\mbox{pose}}\) and \(\mbox{llh}_{\mbox{dyn}}\), which correspond to different amplitudes of pose mistakes. For example, non-rotating hips (first 2 sets of anomalies) affect the whole body and thus trigger a stronger response than a too-high punching elbow (fourth set of anomalies).
Figure 11: Example of analysis of sport movements: cross left punch in boxing.
Publications and datasets
Our proposed method for assessing movement quality is presented in the following article:
Adeline Paiement, Lili Tao, Sion Hannuna, Massimo Camplani, Dima Damen, Majid Mirmehdi. Online quality assessment of human movement from skeleton data. In Proceedings of the British Machine Vision Conference (BMVC), September 2014.
The dataset used in this article can be downloaded in full (depth videos + skeleton)
here, and a lighter version with skeleton only here. It may be used on the condition of citing our paper “Online quality assessment of human movement from skeleton data, BMVC 2014” and the SPHERE project.
References
[1] OpenNI skeleton tracker. URL: http://www.openni.org/documentation.
[2] R. R. Coifman and S. Lafon. Diffusion maps. Applied and Computational Harmonic Analysis, 21(1):5–30, 2006.
[3] S. Gerber, T. Tasdizen, and R. Whitaker. Robust non-linear dimensionality reduction using successive 1-dimensional Laplacian eigenmaps. In Proceedings of the 24th International Conference on Machine Learning, pages 281–288. ACM, 2007.
To start off, I was looking at the following ingenious form of the Gamma function:
$$\Gamma(z+1)=\lim_{n\to\infty}\frac{n!(n+1)^z}{(1+z)(2+z)\cdots(n+z)}$$
which rests on the identity
$$1=\lim_{n\to\infty}\frac{n!(n+1)^z}{(n+z)!}$$
for all integers $z$. One then multiplies through by $z!$ and uses the recursive formula for the factorial to reach the above formula.
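As a quick numerical sanity check (my own, not part of the derivation), the truncated product also approaches $\Gamma(z+1)$ for non-integer $z$:

from math import gamma

def gamma_limit(z, n):
    """n!(n+1)^z / ((1+z)(2+z)...(n+z)) without forming huge factorials."""
    value = (n + 1) ** z
    for k in range(1, n + 1):
        value *= k / (k + z)
    return value

for n in (10, 1000, 100000):
    print(n, gamma_limit(0.5, n), gamma(1.5))   # both columns approach 0.8862...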
In the same light, I was wondering if a limit definition of tetration was possible. Consider the following:
$$a^{a^{a^{\dots}}}=\underbrace{a\uparrow a\uparrow a\uparrow \dots\uparrow}_nb=a\uparrow_nb$$
And then consider the following:
$$a\uparrow_nf(n)$$
Particularly, I was wondering if there was a continuous function $f:\mathbb R\to\mathbb C$ such that
$$\lim_{n\to\infty}a\uparrow_nf(n)=c$$
for some constant $c$. From there, one could imagine something like...
$$\begin{align}a\uparrow_{1/2}c&=a\uparrow_{1/2}\lim_{n\to\infty}a\uparrow_nf(n)\\&=\lim_{n\to\infty}a\uparrow_{n+1/2}f(n)\\&=\lim_{n\to\infty}a\uparrow_nf(n-\frac12)\end{align}$$
Does this seem reasonable? Does anyone know whether this is a good path towards defining fractional-order tetration?
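For what it's worth, here is a crude numerical illustration (mine, not a proof) that such a limit can exist at all: with the constant choice $f(n)=1$ and $a=\sqrt2$, the towers $a\uparrow_n f(n)$ converge to $c=2$.

def tower(a, b, n):
    """a ↑ a ↑ ... ↑ a ↑ b with n copies of a, evaluated right-to-left."""
    value = b
    for _ in range(n):
        value = a ** value
    return value

a = 2 ** 0.5
for n in (5, 20, 80):
    print(n, tower(a, 1.0, n))   # approaches 2 as n grows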
Consider a $C^k$, $k\ge 2$, Lorentzian manifold $(M,g)$ and let $\Box$ be the usual wave operator $\nabla^a\nabla_a$. Given $p\in M$, $s\in\Bbb R,$ and $v\in T_pM$, can we find a neighborhood $U$ of $p$ and $u\in C^k(U)$ such that $\Box u=0$, $u(p)=s$ and $\mathrm{grad}\, u(p)=v$?
The tog is a measure of thermal resistance of a unit area, also known as thermal insulance. It is commonly used in the textile industry and often seen quoted on, for example, duvets and carpet underlay. The Shirley Institute in Manchester, England developed the tog as an easy-to-follow alternative to the SI unit of m²K/W. The name comes from the informal word "togs" for clothing, which itself was probably derived from the word toga, a Roman garment. The basic unit of insulation coefficient is the RSI (1 m²K/W). 1 tog = 0.1 RSI. There is also a clo clothing unit equivalent to 0.155 RSI or 1.55 tog...
The stone or stone weight (abbreviation: st.) is an English and imperial unit of mass now equal to 14 pounds (6.35029318 kg). England and other Germanic-speaking countries of northern Europe formerly used various standardised "stones" for trade, with their values ranging from about 5 to 40 local pounds (roughly 3 to 15 kg) depending on the location and objects weighed. The United Kingdom's imperial system adopted the wool stone of 14 pounds in 1835. With the advent of metrication, Europe's various "stones" were superseded by or adapted to the kilogram from the mid-19th century on. The stone continues...
Can you tell me why this question deserves to be negative? I tried to find faults and I couldn't: I did some research, I did all the calculations I could, and I think it is clear enough. I had deleted it and was going to abandon the site but then I decided to learn what is wrong and see if I ca...
I am a bit confused about angular momentum in classical physics. For the orbital motion of a point mass: if we pick a new coordinate system (one that doesn't move w.r.t. the old coordinates), angular momentum should still be conserved, right? (I calculated a quite absurd result: it is no longer conserved; there is an additional term that varies with time.)
in the new coordinates: $\vec {L'}=\vec{r'} \times \vec{p'}$
$=(\vec{R}+\vec{r}) \times \vec{p}$
$=\vec{R} \times \vec{p} + \vec L$
where the first term varies with time ($\vec{R}$ is the constant shift of the coordinate origin, while $\vec{p}$ is, roughly, rotating).
Would anyone be kind enough to shed some light on this for me?
From what we discussed, your literary taste seems to be classical/conventional in nature. That book is inherently unconventional in nature; it's not supposed to be read as a novel, it's supposed to be read as an encyclopedia
@BalarkaSen Dare I say it, my literary taste continues to change as I have kept on reading :-)
One book that I finished reading today, The Sense of An Ending (different from the movie with the same title) is far from anything I would've been able to read, even, two years ago, but I absolutely loved it.
I've just started watching the Fall - it seems good so far (after 1 episode)... I'm with @JohnRennie on the Sherlock Holmes books and would add that the most recent TV episodes were appalling. I've been told to read Agatha Christie but haven't got round to it yet
Is it possible to make a time machine ever? Please give an easy answer, a simple one. A simple answer, but a possibly wrong one, is to say that a time machine is not possible. Currently, we don't have either the technology to build one, nor a definite, proven (or generally accepted) idea of how we could build one. — Countto1047 secs ago
@vzn if it's a romantic novel, which it looks like, it's probably not for me - I'm getting to be more and more fussy about books and have a ridiculously long list to read as it is. I'm going to counter that one by suggesting Ann Leckie's Ancillary Justice series
Although if you like epic fantasy, Malazan book of the Fallen is fantastic
@Mithrandir24601 lol it has some love story but its written by a guy so cant be a romantic novel... besides what decent stories dont involve love interests anyway :P ... was just reading his blog, they are gonna do a movie of one of his books with kate winslet, cant beat that right? :P variety.com/2016/film/news/…
@vzn "he falls in love with Daley Cross, an angelic voice in need of a song." I think that counts :P It's not that I don't like it, it's just that authors very rarely do anywhere near a decent job of it. If it's a major part of the plot, it's often either eyeroll worthy and cringy or boring and predictable with OK writing. A notable exception is Stephen Erikson
@vzn depends exactly what you mean by 'love story component', but often yeah... It's not always so bad in sci-fi and fantasy where it's not in the focus so much and just evolves in a reasonable, if predictable way with the storyline, although it depends on what you read (e.g. Brent Weeks, Brandon Sanderson). Of course Patrick Rothfuss completely inverts this trope :) and Lev Grossman is a study on how to do character development and totally destroys typical romance plots
@Slereah The idea is to pick some spacelike hypersurface $\Sigma$ containing $p$. Now specifying $u(p)$ is trivial because the wave equation is invariant under constant perturbations. So that's whatever. But I can specify $\nabla u(p)|\Sigma$ by specifying $u(\cdot, 0)$ and differentiate along the surface. For the Cauchy theorems I can also specify $u_t(\cdot,0)$.
Now take the neighborhood to be $\approx (-\epsilon,\epsilon)\times\Sigma$ and then split the metric like $-dt^2+h$
Do forwards and backwards Cauchy solutions, then check that the derivatives match on the interface $\{0\}\times\Sigma$
Why is it that you can only cool down a substance so far before the energy goes into changing its state? I assume it has something to do with the distance between molecules, meaning that intermolecular interactions have less energy in them than making the distance between them even smaller, but why does it create these bonds instead of making the distance smaller / just reducing the temperature more?
Thanks @CooperCape, but this leads me to another question I forgot ages ago
If you have an electron cloud, is the electric field from that electron just some sort of averaged field from some centre of amplitude or is it a superposition of fields each coming from some point in the cloud? |